The parser couldn't parse `1=2 not between 3 and 5`:
after `2` it expected only NOT2_SYM, not NOT_SYM
(visible from the sql_yacc.output file), which resulted in
Syntax error ... near 'not between 3 and 5'
The parser was confused by the rather low NOT_SYM precedence, and
%prec BETWEEN_SYM didn't resolve this confusion.
As a fix, remove any precedence from NOT_SYM and specify %prec
explicitly in the only place where it matters for NOT_SYM.
In other places, such as NOT BETWEEN, NOT_SYM won't have a
precedence, so bison won't be confused by it.
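A quick check of the accepted statement, relying on BETWEEN binding tighter than `=`:
```
SELECT 1 = 2 NOT BETWEEN 3 AND 5;
-- parsed as 1 = (2 NOT BETWEEN 3 AND 5); 2 is outside [3,5], so the
-- comparison is 1 = TRUE and the statement returns 1
```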
The idea is to put Item_direct_ref_to_item as a transparent and
permanent wrapper in front of a string item that requires conversion,
so that Item_direct_ref_to_item is the only place where the pointer
to the string item is stored. This pointer can then be changed and
restored during PS execution as needed. And if any permanent
(subquery) optimization needs a pointer to the item, it will use a
pointer to the Item_direct_ref_to_item, which is a permanent item
and won't go away.
1. In the case of a system-versioned table, add row_end to the FTS_DOC_ID
index in fts_create_common_tables() and innobase_create_key_defs().
fts_n_uniq() returns 1 or 2 depending on whether the table is
system-versioned (see the sketch after this list).
After this patch, a rebuild of the FTS_DOC_ID index is required for
existing system-versioned tables. If you see this message in the error
log or server warnings: "InnoDB: Table db/t1 contains 2 indexes
inside InnoDB, which is different from the number of indexes 1
defined in the MariaDB", use this command to fix the table:
ALTER TABLE db.t1 FORCE;
2. Fix duplicate history for a secondary unique index, like it was done
in MDEV-23644 for the clustered index (932ec586aa). If an existing
history row conflicts with the currently inserted row, we check in
row_ins_scan_sec_index_for_duplicate() whether that row was inserted
as part of the current transaction. In that case we indicate with
DB_FOREIGN_DUPLICATE_KEY that the new history row is not needed and
should be silently skipped.
3. Some parts of MDEV-21138 (7410ff436e) are reverted. Skipping the
FTS_DOC_ID index for history rows caused problems with the purge
system. This is now fixed differently, by point 2 above.
4. wait_all_purged.inc checks that we didn't affect non-history rows,
so they are still deleted and purged correctly.
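A minimal sketch of point 1, assuming a fulltext index on a system-versioned table:
```
CREATE TABLE t1 (f TEXT, FULLTEXT (f)) WITH SYSTEM VERSIONING;
-- The implicit FTS_DOC_ID_INDEX now includes row_end, i.e. it is
-- UNIQUE(FTS_DOC_ID, row_end), and fts_n_uniq() returns 2 for this
-- table (1 for a non-versioned one).
```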
Additional FTS fixes
fts_init_get_doc_id(): exclude history rows from the max_doc_id
calculation. The fts_init_get_doc_id() callback is used only for
crash recovery.
fts_add_doc_by_id(): set the maximum value for the row_end field.
fts_read_stopword(): the stopwords table can be system-versioned too.
We now read stopwords only for current data.
row_insert_for_mysql(): exclude history rows from doc_id validation.
row_merge_read_clustered_index(): exclude history rows from doc_id
processing.
fts_load_user_stopword(): for a versioned table, retrieve the row_end
field and skip history rows. For a non-versioned table we retrieve the
'value' field twice (just for uniformity).
FTS tests for System Versioning now include maybe_versioning.inc, which
adds 3 combinations:
'vers' for debug builds sets sysvers_force and sysvers_hide.
sysvers_force makes every created table system-versioned; sysvers_hide
hides WITH SYSTEM VERSIONING in SHOW CREATE output.
Note: basic.test, stopword.test and versioning.test do not require a
debug build for the 'vers' combination. This is controlled by
$modify_create_table in maybe_versioning.inc: these tests add WITH
SYSTEM VERSIONING explicitly, which allows testing the 'vers'
combination on non-debug builds.
'vers_trx', like 'vers', sets sysvers_force_trx and sysvers_hide. That
tests FTS with trx_id-based System Versioning.
'orig' works like before: no System Versioning is added, and no debug
build is required.
The upgrade/downgrade test for System Versioning is done by
innodb_fts.versioning. It has 2 combinations:
'prepare' creates binaries in std_data (requires an old server and
OLD_BINDIR). It tests upgrade/downgrade against the old server as well.
'upgrade' tests upgrade against the binaries in std_data.
Cleanups:
Removed innodb-fts-stopword.test as it duplicates stopword.test.
Added a new parameter $restart_bindir for restart_mysqld.inc.
Example:
let $restart_bindir= /home/midenok/src/mariadb/10.3b/build;
--source include/restart_mysqld.inc
It is good to restore the original server binaries before check_mysqld
runs at the end of the test:
let $restart_bindir=;
--source include/restart_mysqld.inc
Before the fix, a next-key lock was requested for a locking unique
search in the RR isolation level only if the record was delete-marked.
There can be several delete-marked records for the same unique key;
that's why InnoDB scans the records until either a non-delete-marked
record is reached or all delete-marked records with the same unique
key have been scanned.
For range scans, next-key locks are used in RR to protect the scanned
range from insertion of new records by other transactions, and this is
also the reason why next-key locks are used for delete-marked records
in unique searches.
If a record was not delete-marked, the requested lock type was
"not-gap". When a record is not delete-marked at the time trx 1
requests a lock, and some other transaction holds a conflicting lock,
trx 1 creates a waiting not-gap lock on the record and suspends. While
trx 1 is suspended, the record can become delete-marked. When the lock
is granted on the conflicting transaction's commit or rollback, its
type is still "not-gap". So we have a "not-gap" lock on a delete-marked
record in RR, which lets some other transaction insert a record with
the same unique key while trx 1 is not yet committed, and that can
violate the isolation level.
The fix is to set next-key locks for both delete-marked and
non-delete-marked records for unique searches in RR.
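A sketch of the race under REPEATABLE READ (t and its unique key uk are hypothetical):
```
-- trx 1 requests a lock while trx 2 holds a conflicting one:
--   trx 1: SELECT * FROM t WHERE uk = 10 FOR UPDATE;  -- enqueues a waiting
--          "not-gap" lock and suspends
--   trx 2: DELETE FROM t WHERE uk = 10; COMMIT;       -- delete-marks the record
-- trx 1 is granted its "not-gap" lock on a now delete-marked record.
--   trx 3: INSERT INTO t (uk) VALUES (10);
-- Before the fix: the INSERT succeeds while trx 1 is still uncommitted
-- (isolation violation). After the fix: blocked by trx 1's next-key lock.
```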
An unfortunate change to the default behavior of the handling of
core dumps was implemented in
commit e9be5428a2
by making MTR_PRINT_CORE=small the default value, that is, to
only display the stack trace of one thread in crash reports.
Many if not most failures that occur in regression tests are
sporadic and involve race conditions or deadlocks. To be able to
analyze such failures, having the stack traces of all active threads
is a must, because CI environments typically do not save any core dumps.
While the environment variable MTR_PRINT_CORE could be set in CI
environments to compensate for the unfortunate change, it is better
to revert to the old default (dumping all threads) so that
no explicit action will be required from maintainers of independent
CI systems. In that case, if something fails once in a blue moon,
we can have some hope of diagnosing it based on the output.
We fix this regression by defaulting the unset environment variable
MTR_PRINT_CORE to "medium", which again prints the stack traces of
all threads.
Some tests drop the default mtr database "test". This may fail due
to the directory not being empty. InnoDB may not delete all tables
immediately, due to the "background drop table queue" or its
replacement in commit 1bd681c8b3
(the purge of history would clean up after a DDL operation during
which the server was killed).
Let us try to avoid "drop database test" whenever it is easily possible.
Where it is not, SET GLOBAL innodb_max_purge_lag_wait=0 will ensure
that the replacement of the "background drop table queue" will have
completed its job.
Consistent with MDEV-4206, an empty log_slow_filter still means no
explicit filtering. Since 21518ab2e4, however,
log_queries_not_using_indexes has been stored in the same variable.
As we need to test for the absence of log_queries_not_using_indexes
(the SERVER_QUERY_NO_INDEX_USED check in log_slow_statement), the
empty criterion resulted in always logging queries not using indexes
when log_slow_filter was set to empty.
Adjusted log_slow.test for MDEV-4206, as slow_log_query has been
global and session for a while, and the test was relying on the
MDEV-21187 buggy behavior to detect a slow query.
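A hedged illustration of the intended behavior after the fix:
```
SET SESSION log_slow_filter='';                -- empty: no explicit filtering
SET SESSION log_queries_not_using_indexes=OFF;
-- Before the fix, a query using no indexes was still written to the slow
-- log here, as if log_queries_not_using_indexes were ON.
```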
Reviewer: Monty
(Patch from Monty, slightly amended)
Fix the rowid filtering optimization in best_access_path():
== Ref access + rowid filtering ==
The cost computations compare #records and the index-only scan cost
(keyread_tmp) to find out the per-record advantage one gets by
skipping reads of the full table record.
The computations produce a wrong result when:
- #records is "clipped down" with s->worst_seeks or
  thd->variables.max_seeks_for_key. keyread_tmp is not clipped
  this way, so the numbers are not comparable.
- access_factor is negative. This means an index-only read is
  cheaper than a non-index-only read.
This patch makes the optimizer not consider Rowid Filtering in
such cases.
The decision is logged in the Optimizer Trace under the
"rowid_filter_skipped" name.
== Range access + rowid filtering ==
When considering using a Rowid Filter with range access, do multiply
keyread_tmp by record_count. That way, it is comparable with the
range access estimate, which is also multiplied by record_count.
This commit fixes the test system hanging due to
the galera_var_notify_ssl_ipv6 test and also brings
the wsrep_notify[_ssl].sh files in line with each other
between the user template and the mtr suite.
Quotes are also added to avoid problems if the user-specified
value of one of the variables at the beginning of the file
contains shell-special characters, for example if the password
or user name in the PSWD and USER variables contains the "$"
character.
Also fixed an issue with the automatic substitution of the
--ssl-verify-server-cert option when the corresponding value
is set by the user to "1" or "on".
Also fixed some tests to avoid joining one of the nodes to
another cluster when the nodes are restarted from the mtr
side, which could lead to random failures when testing with
buildbot.
mysql_discard_or_import_tablespace(): On successful
ALTER TABLE...DISCARD TABLESPACE, evict the table handle from the
table definition cache, so that ha_innobase::close() will be invoked,
like InnoDB expects to be the case. This will avoid an assertion failure
ut_a(table->get_ref_count() == 0) during IMPORT TABLESPACE.
ha_innobase::open(): Do not issue any ER_TABLESPACE_DISCARDED warning.
Member functions for DML will do that.
ha_innobase::truncate(), ha_innobase::check_if_supported_inplace_alter():
Issue ER_TABLESPACE_DISCARDED warnings, to compensate for the removal of
the warning in ha_innobase::open().
row_quiesce_write_indexes(): Only write information about committed
indexes. The ALTER TABLE t NOWAIT ADD INDEX(c) in the nondeterministic
test case will most of the time fail due to a metadata lock (MDL) timeout
and leave behind an uncommitted index.
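A hedged sketch of the sequence that previously failed (t is a hypothetical InnoDB table):
```
ALTER TABLE t DISCARD TABLESPACE;  -- now also evicts t from the table
                                   -- definition cache, so ha_innobase::close()
                                   -- is invoked as InnoDB expects
ALTER TABLE t IMPORT TABLESPACE;   -- previously this could hit the assertion
                                   -- ut_a(table->get_ref_count() == 0)
```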
Reviewed by: Sergei Golubchik
The geometry type requires Type:"Feature", but the feature need
not be first in the JSON structure.
Adjust the code to return an error if the geometry isn't a JSON object,
but to continue parsing, searching for Type:"Feature" to trigger
the geometry parsing.
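A hedged example, assuming the ST_GeomFromGeoJSON() entry point; the "type" member is not first in the Feature object:
```
SELECT ST_AsText(ST_GeomFromGeoJSON(
  '{"id": 1, "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [1, 2]},
    "properties": {}}'));
```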
Thanks to Derick Magnusen for the bug report.
The bug is caused by a mechanism similar to MDEV-21027.
The function check_insert_or_replace_autoincrement failed to open
all the partitions for REPLACE ... SELECT statements, which resulted
in the assertion error.
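A hedged reproducer sketch; the table layout is hypothetical and the assertion fired with the affected setup:
```
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT)
  PARTITION BY HASH (a) PARTITIONS 2;
CREATE TABLE t2 (b INT);
-- The auto-increment check previously ran with not all partitions of t1
-- open for REPLACE ... SELECT, triggering the assertion:
REPLACE INTO t1 (b) SELECT b FROM t2;
```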
* Delete tests that are not supported and are not going to be supported
any time soon
* Fix result sets of tests that are not run on buildbot
* Fix tests that fail because of the auto-increment offset
* Make sure that disabled tests have an open bug report
- Regression introduced in 7baf24a0f8 for multi-digit gcc versions:
there is no dot in `dumpversion`.
```
$ gcc -dumpversion
10
```
Otherwise it fails and produces no coverage output:
```
Running dgcov
Name "IO::Uncompress::Gunzip::GunzipError" used only once: possible typo at ./dgcov.pl line 197.
Cannot parse gcc -dumpversion: 9
```
- The warning `once` is always generated:
```
Running dgcov
Name "IO::Uncompress::Gunzip::GunzipError" used only once: possible typo at ./dgcov.pl line 197.
<number>
```
The patch suppresses the line `Name "IO::Uncompress::Gunzip::GunzipError" used only once: possible typo at ./dgcov.pl line 197.`
- Reviewed by: <>
* update README
* fix the version regex to support versions with two digits
* die if the version cannot be parsed
* support gcc versions 11+
* `require` JSON::PP instead of `use`, to avoid introducing a new rpm
dependency into MariaDB-test
- The script covers gcov (gcc) < 9 with the non-JSON format of
generated files, as well as gcov (gcc) >= 9 with the JSON format.
Reviewed by: serg@mariadb.com
Sometimes `KILL QUERY ID @id` was executed before the preceding
`send SELECT SLEEP(1000)` had reached the parser. As the statement
resets its kill status before execution, the effect of the KILL
was ignored.
Item_func_not_all::print() either uses Item_func::print() or
directly invokes args[0]->print(). Thus the precedence should be
either that of Item_func or that of args[0].
Item_allany_subselect::print() prints args[0], then a comparison op,
then a subquery. That is, the precedence should be that of
a comparison.
Rename to stress that this is a specific hack for Item_func_nextval
and should not be used for other items.
If a vcol uses Item_func_nextval, a corresponding table for the sequence
should be added to the prelocking list (in that sense NEXTVAL is not
simply a function, but more like a subquery); see add_internal_tables()
in DML_prelocking_strategy::handle_table(). At the moment this is only
implemented for DEFAULT, not for GENERATED ALWAYS AS, hence the
VCOL_NEXTVAL hack.
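A hedged illustration of the two cases:
```
CREATE SEQUENCE s1;
-- DEFAULT: the sequence table is added to the prelocking list
CREATE TABLE t1 (a INT DEFAULT NEXTVAL(s1), b INT);
-- GENERATED ALWAYS AS: prelocking for NEXTVAL is not implemented here,
-- hence the VCOL_NEXTVAL special case
```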
select_union_direct::send_data() only sends a record when
the LIMIT ... OFFSET clause of the individual SELECT won't skip it.
Thus, select_union_direct::send_data() should not perform any actions
related to sending a record if the OFFSET of a SELECT hasn't been
reached yet.
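A hedged example of a per-SELECT OFFSET under a direct (pushed) UNION ALL:
```
(SELECT a FROM t1 ORDER BY a LIMIT 2 OFFSET 1)
UNION ALL
(SELECT a FROM t2 ORDER BY a LIMIT 2 OFFSET 1);
-- Rows skipped by each individual OFFSET must not trigger any
-- send-related actions in select_union_direct::send_data().
```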
Like in MDEV-16110, we must release items allocated on thd->mem_root by
reopening the table.
MDEV-16290 relocated the MDEV-16110 fix in 10.5 so that it works for
MDEV-28576 as well. 10.3, which is without MDEV-16290, now duplicates
this fix.
The change from MDEV-29465 exposed a flaw in replace_column_table
where, again, we were not properly updating the column-level bits.
replace_table_table was changed in MDEV-29465 to properly update
grant_table->init_cols; however, replace_column_table still only
modified grant_column->rights when the GRANT_COLUMN already existed.
This led to a mismatch between GRANT_COLUMN::init_rights and
GRANT_COLUMN::rights *if* the GRANT_COLUMN already existed.
As an example:
GRANT SELECT (col1) ...
Here, for col1, GRANT_COLUMN::init_rights and GRANT_COLUMN::rights
are both set to 1 (SELECT) in replace_column_table.
GRANT INSERT (col1) ...
Here, without this patch, GRANT_COLUMN::init_rights is still 1 while
GRANT_COLUMN::rights is 3 (SELECT_PRIV | INSERT_PRIV).
Finally, if before this patch one does:
REVOKE SELECT (col1) ...
replace_table_table sees that init_rights loses bit 1 and thus
considers that there are no more rights granted on that particular
table. This prompts the whole GRANT_TABLE to be removed via the first
revoke, when the GRANT_COLUMN corresponding to it should still have
init_rights == 2.
By also updating replace_column_table to keep init_rights in sync
properly, the issue is resolved.
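The same walk-through as statements (db.t, col1 and user u are placeholders):
```
GRANT SELECT (col1) ON db.t TO u;     -- init_rights = rights = SELECT
GRANT INSERT (col1) ON db.t TO u;     -- before the fix: rights = SELECT|INSERT,
                                      -- but init_rights stayed at SELECT
REVOKE SELECT (col1) ON db.t FROM u;  -- init_rights dropped to 0, so the whole
                                      -- GRANT_TABLE was removed and the INSERT
                                      -- column privilege was lost
```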
Reviewed by <serg@mariadb.com>
Commit 32158be added a new test `bad_startup_options`. This test fails
if run as root, which is common on many CI systems.
This test should include `not_as_root.inc` so it is skipped, just
like all other similar tests in MariaDB.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
This commit contains only an mtr test for reproducing the issue in
MDEV-29512. The actual fix will be pushed to the wsrep-lib repository.
The hang in MDEV-29512 happens when binlog purging is attempted while
there is one local BF-aborted transaction waiting for the commit
monitor.
The test launches a two-node cluster and enables binlogging with
expire_logs_days, to force binlog purging to happen.
A local transaction is executed so that it becomes the BF-abort victim
and has advanced to the replication stage, waiting for the commit
monitor for final cleanup (to mark the position in InnoDB). After that,
the applier is released to complete the BF abort and, due to the binlog
configuration, to start binlog purging. This is where the hang would
occur if the code were buggy.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Test MDEV-26575 fails when it runs after MDEV-25389. This is because
the latter simulates a failure while an applier thread is being
created in `start_wsrep_THD()`. The failure was not handled correctly
and would not clean up the created THD from the global
`server_threads`. A subsequent shutdown would hang and eventually fail
trying to close this THD.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Fix `wsrep_table_accessible_when_detached()` so that commands that
access no tables are rejected while a node is disconnected from a
cluster.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
The following tests are disabled when running with --valgrind but
without --big:
- rpl.rpl_ssl
- rpl.rpl_semi_sync_event
- all encryption tests (those that include have_file_key_management.inc)
MDEV-16735 describes how mysql_upgrade fails when alter_algorithm
is set to a value different from 'DEFAULT'/'COPY'. It was marked as
fixed by 0ee0868, but the fix didn't cover the possibility of the
global value of alter_algorithm being set to something different from
'DEFAULT'/'COPY'. To ensure that the upgrade process works properly
regardless of the global value of alter_algorithm, this commit forces
its value to 'DEFAULT' (note the quotes) for the mysql_upgrade session.
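A hedged sketch of what the upgrade session now effectively does:
```
SET SESSION alter_algorithm='DEFAULT';  -- forced for the mysql_upgrade session,
                                        -- regardless of the global value
```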
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.