Regression from MDEV-29540 / 8c38939369.
INSERT SELECT errors needed to be unconditionally ignored.
As this touches the CREATE .. SELECT functionality, show
the equivalent test there.
This bug affected queries with nested left joins having the same last inner
table, such that the not_exists optimization could be applied to the innermost
outer join when the optimizer chose to use join buffers. The bug could lead to
producing a wrong result set.
If the WHERE condition of a query contains a conjunctive IS NULL predicate
over a non-nullable column of the inner table of a non-nested outer join,
then the not_exists optimization can be applied to the outer join. With
this optimization, when looking for matches for a certain record from the
outer table of the join, the records of the inner table can be ignored
right after the first match satisfying the ON condition is found.
In the case of nested outer joins having the same last inner table this
optimization can still be applied, but only if all ON conditions of the
embedding outer joins are satisfied. Such a check was missing in the code
that tried to apply the not_exists optimization when join buffers were used
for outer join operations.
This problem had already been fixed in the patch for bug MDEV-7992. Yet
there it was resolved only for the cases when join buffers were not used
for outer joins.
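For illustration, a query of the affected shape could look like this (a minimal
sketch with hypothetical table and column names, not taken from the actual test case):
```
SELECT *
FROM t1
     LEFT JOIN
     (t2 LEFT JOIN t3 ON t2.a = t3.a)
     ON t1.a = t2.a AND t1.b = t3.b
WHERE t3.pk IS NULL;
```
Here t3 is the last inner table of both the inner and the embedding outer join and
t3.pk is non-nullable, so the IS NULL conjunct makes the not_exists optimization
applicable; with join buffers in use, the ON condition of the embedding join must
also hold before records of the inner table may be skipped.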
Approved by Oleksandr Byelkin <sanja@mariadb.com>
MariaDB MDEV-12583 added `SOURCE_REVISION` variable that exposes the
SHA1 of source code commit that the current running engine was built
from. This info is useful for troubleshooting and debugging.
This commit does the following:
- adds the `SOURCE_REVISION` value to the engine error log.
- when a crash triggers handle_fatal_signal, the `SOURCE_REVISION` will
be included in the crash report.
- resolves MDEV-20344: startup messages belong in stderr/error-log, not
stdout
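As a quick check, the revision that these log lines report can also be queried at
runtime; the wildcard below is used because the exact variable name exposed by
MDEV-12583 is an assumption, not part of this commit:
```
SHOW GLOBAL VARIABLES LIKE '%source_revision%';
```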
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
If mariadb-service-convert is run and the user variable is unset, then
this sets `User=` in `[Service]`, which then tries to run mariadb as
root, which in turn fails. This only happens when mysqld_safe is missing,
which is all the time now. So don't set `User=` if there is no user variable.
Reviewer: Sergei Golubchik <serg@mariadb.org> (in PR #2382)
The parser couldn't parse `1=2 not between 3 and 5`:
after `2` it expected only NOT2_SYM, but not NOT_SYM
(visible from the sql_yacc.output file), which resulted in
Syntax error ... near 'not between 3 and 4'
The parser was confused by a rather low NOT_SYM precedence and
%prec BETWEEN_SYM didn't resolve this confusion.
As a fix, let's remove any %precedence from NOT_SYM and
specify %prec explicitly in the only place where it matters for NOT_SYM.
In other places, such as for NOT BETWEEN, NOT_SYM won't have a
precedence, so bison won't be confused about it.
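The query from the report, which used to fail with the syntax error above and now parses:
```
SELECT 1 = 2 NOT BETWEEN 3 AND 5;
```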
The idea is to put Item_direct_ref_to_item as a transparent and
permanent wrapper before a string item which requires conversion,
so that Item_direct_ref_to_item is the only place where
the pointer to the string item is stored. This pointer can be changed
and restored during PS execution as needed. And if any permanent
(subquery) optimization needs a pointer to the item,
it will use a pointer to the Item_direct_ref_to_item, which is
a permanent item and won't go away.
1. In the case of a system-versioned table, add row_end to the FTS_DOC_ID
index in fts_create_common_tables() and innobase_create_key_defs().
fts_n_uniq() returns 1 or 2 depending on whether the table is
system-versioned (see the sketch after this list).
After this patch a rebuild of the FTS_DOC_ID index is required for
existing system-versioned tables. If you see this message in the error
log or server warnings: "InnoDB: Table db/t1 contains 2 indexes
inside InnoDB, which is different from the number of indexes 1
defined in the MariaDB" use this command to fix the table:
ALTER TABLE db.t1 FORCE;
2. Fix duplicate history for a secondary unique index like it was done
in MDEV-23644 for the clustered index (932ec586aa). In the case of an
existing history row which conflicts with the currently inserted row, we
check in row_ins_scan_sec_index_for_duplicate() whether that row
was inserted as part of the current transaction. In that case we
indicate with DB_FOREIGN_DUPLICATE_KEY that the new history row is not
needed and should be silently skipped.
3. Some parts of MDEV-21138 (7410ff436e) are reverted. Skipping the
FTS_DOC_ID index for history rows caused problems for the purge
system. Now this is fixed differently, by item 2 above.
4. wait_all_purged.inc checks that non-history rows were not affected,
i.e. that they are still deleted and purged correctly.
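A minimal sketch of item 1 (table and column names are illustrative): creating a
fulltext index on a system-versioned table now makes the implicit FTS_DOC_ID index
cover row_end as well, i.e. fts_n_uniq() returns 2:
```
CREATE TABLE t1 (a INT, b TEXT, FULLTEXT INDEX ft (b))
ENGINE=InnoDB WITH SYSTEM VERSIONING;
-- tables created before this patch keep the old single-column index
-- and have to be rebuilt:
ALTER TABLE t1 FORCE;
```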
Additional FTS fixes
fts_init_get_doc_id(): exclude history rows from max_doc_id
calculation. fts_init_get_doc_id() callback is used only for crash
recovery.
fts_add_doc_by_id(): set max value for row_end field.
fts_read_stopword(): stopwords table can be system-versioned too. We
now read stopwords only for current data.
row_insert_for_mysql(): exclude history rows from doc_id validation.
row_merge_read_clustered_index(): exclude history rows from doc_id
processing.
fts_load_user_stopword(): for a versioned table retrieve the row_end field
and skip history rows. For a non-versioned table we retrieve the 'value'
field twice (just for uniformity).
FTS tests for System Versioning now include maybe_versioning.inc which
adds 3 combinations:
'vers' for debug builds sets sysvers_force and
sysvers_hide. sysvers_force makes every created table
system-versioned, sysvers_hide hides WITH SYSTEM VERSIONING
in SHOW CREATE.
Note: basic.test, stopword.test and versioning.test do not
require debug for the 'vers' combination. This is controlled by
$modify_create_table in maybe_versioning.inc; these
tests use WITH SYSTEM VERSIONING explicitly, which allows testing
the 'vers' combination on non-debug builds.
'vers_trx' like 'vers' sets sysvers_force_trx and sysvers_hide. That
tests FTS with trx_id-based System Versioning.
'orig' works like before: no System Versioning is added, no debug is
required.
Upgrade/downgrade test for System Versioning is done by
innodb_fts.versioning. It has 2 combinations:
'prepare' makes binaries in std_data (requires old server and OLD_BINDIR).
It tests upgrade/downgrade against old server as well.
'upgrade' tests upgrade against binaries in std_data.
Cleanups:
Removed innodb-fts-stopword.test as it duplicates stopword.test
Adds a new parameter, $restart_bindir, for restart_mysqld.inc.
Example:
let $restart_bindir= /home/midenok/src/mariadb/10.3b/build;
--source include/restart_mysqld.inc
It is good to switch back to the original server binary before check_mysqld
runs at the end of the test:
let $restart_bindir=;
--source include/restart_mysqld.inc
Works like vers_force but forces trx_id-based system-versioned tables
if the storage engine supports it (currently InnoDB only). Otherwise it
creates a timestamp-based system-versioned table.
The incorrect type handler caused an incorrect result_type() for
Item_cache_row (STRING_RESULT rather than ROW_RESULT). Updating the
constructor of Item_cache_row with the correct type handler fixes
this problem.
Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
Reviewed-by: Sergei Golubchik <serg@mariadb.com>
Before the fix, for a locking unique search in the RR isolation level, a
next-key lock was requested only if a record was delete-marked.
There can be several delete-marked records for the same unique key;
that's why InnoDB scans the records until either a non-delete-marked record
is reached or all delete-marked records with the same unique key are
scanned.
For range scans, next-key locks are used in RR to protect the scanned range
from insertion of new records by other transactions. This is also the reason why
next-key locks are used for delete-marked records in unique searches.
If a record is not delete-marked, the requested lock type was "not-gap".
When a record is not delete-marked during a lock request by trx 1, and
some other transaction holds a conflicting lock, trx 1 creates a waiting
not-gap lock on the record and suspends. While trx 1 is suspended the
record can become delete-marked. And when the lock is granted on the
conflicting transaction's commit or rollback, its type is still "not-gap".
So we have a "not-gap" lock on a delete-marked record in RR. This lets some
other transaction insert a record with the same unique key while trx 1 is
not yet committed, which can cause an isolation level violation.
The fix is to set next-key locks for both delete-marked and
non-delete-marked records for unique search in RR.
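A sketch of the scenario described above (table, column and key values are
illustrative, not taken from the patch):
```
CREATE TABLE t (id INT PRIMARY KEY, u INT UNIQUE) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 10);

-- trx 2 (REPEATABLE READ): lock the row and stay open
BEGIN; SELECT * FROM t WHERE u = 10 FOR UPDATE;

-- trx 1 (REPEATABLE READ): requests a lock on the same unique key and waits;
-- before the fix the waiting lock was a "not-gap" record lock
BEGIN; SELECT * FROM t WHERE u = 10 FOR UPDATE;

-- trx 2: delete-mark the row and commit; trx 1 is then granted its lock,
-- still of type "not-gap", on a now delete-marked record
DELETE FROM t WHERE u = 10; COMMIT;

-- trx 3: before the fix this insert was not blocked by trx 1 (no gap lock),
-- so it could succeed while trx 1 was still uncommitted; after the fix
-- trx 1 holds a next-key lock and the insert has to wait
INSERT INTO t VALUES (2, 10);
```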
When trying to create a spider table with banned charsets including
utf32, utf16, ucs2 and utf16le[1], spider should emit an error
immediately, rather than wait until a separate statement that
establishes a connection (e.g. SELECT). This also applies to ALTER
TABLE statements that change charsets.
[1] https://mariadb.com/kb/en/server-system-variables/#character_set_client
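An illustrative statement that should now fail at CREATE time rather than at the
first SELECT (the connection parameters in the COMMENT are placeholders):
```
CREATE TABLE t_sp (a INT, b VARCHAR(10)) ENGINE=SPIDER
DEFAULT CHARSET=utf32
COMMENT='srv "backend_srv", table "t_remote"';
```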
Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
Reviewed-by: Nayuta Yanagisawa <nayuta.yanagisawa@mariadb.com>
An unfortunate change to the default behavior of the handling of
core dumps was implemented in
commit e9be5428a2
by making MTR_PRINT_CORE=small the default value, that is, to
only display the stack trace of one thread in crash reports.
Many if not most failures that occur in regression tests are
sporadic and involve race conditions or deadlocks. To be able to
analyze such failures, having the stack traces of all active threads
is a must, because CI environments typically do not save any core dumps.
While the environment variable MTR_PRINT_CORE could be set in CI
environments to compensate for the unfortunate change, it is better
to revert to the old default (dumping all threads) so that
no explicit action will be required from maintainers of independent
CI systems. In that case, if something fails once in a blue moon,
we can have some hope of diagnosing it based on the output.
We fix this regression by defaulting the unset environment variable
MTR_PRINT_CORE to "medium".
Some tests drop the default mtr database "test". This may fail due
to the directory not being empty. InnoDB may not delete all tables
immediately, due to the "background drop table queue" or its
replacement in commit 1bd681c8b3
(the purge of history would clean up after a DDL operation during
which the server was killed).
Let us try to avoid "drop database test" whenever it is easily possible.
Where it is not, SET GLOBAL innodb_max_purge_lag_wait=0 will ensure
that the replacement of the "background drop table queue" will have
completed its job.
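A sketch of the cleanup pattern for cases where dropping the database cannot be avoided:
```
SET GLOBAL innodb_max_purge_lag_wait=0;  -- let purge finish pending drops first
DROP DATABASE test;
CREATE DATABASE test;
```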
Consistent with MDEV-4206, an empty log_slow_filter still means
no explicit filtering. Since 21518ab2e4, however,
log_queries_not_using_indexes has been stored in the same variable.
Because we need to test for the absence of log_queries_not_using_indexes
(the SERVER_QUERY_NO_INDEX_USED part of log_slow_statement), an empty
criterion resulted in always logging queries not using indexes when
log_slow_filter was set to empty.
Adjusted the log_slow.test for MDEV-4206, as slow_log_query has been
global and session for a while and it was relying on the MDEV-21187
buggy behavior to detect a slow query.
Reviewer: Monty
Previously we parsed it out in mysql_install_db for use in the error
message, but failed to pass it to mysqld in the bootstrap.
Also match log_error as it might appear in the .cnf files.
Thanks Michal Schorm for the test case.
Reviewed by: Faustin Lammler
(Patch from Monty, slightly amended)
Fix the rowid filtering optimization in best_access_path():
== Ref access + rowid filtering ==
The cost computations compare #records and the index-only scan cost
(keyread_tmp) to find out the per-record advantage one will get if
one skips reading the full table record.
The computations produce a wrong result when:
- the #records are "clipped down" with s->worst_seeks or
thd->variables.max_seeks_for_key. keyread_tmp is not clipped
this way, so the numbers are not comparable.
- access_factor is negative. This means an index-only read is
cheaper than a non-index-only read.
This patch makes the optimizer not consider Rowid Filtering in
such cases.
The decision is logged in the Optimizer Trace using
"rowid_filter_skipped" name.
== Range access + rowid filtering ==
When considering use of a Rowid Filter with range access, do multiply
keyread_tmp by record_count. That way, it is comparable with the
range access's estimate, which is also multiplied by record_count.
Hide the errors related to missing innodb stats tables in bootstrap mode
on the assumption that because we are in bootstrap mode they are going
to be created.
This commit fixes the test system hanging due to
the galera_var_notify_ssl_ipv6 test and also brings
the wsrep_notify[_ssl].sh files in line with each other
between the user template and the mtr suite.
Quotes are also added here to avoid problems if the
user specifies the value of one of the variables at the
beginning of the file containing shell-specific characters,
for example, if the password or username specified in the
PSWD and USER variables contains the "$" character.
Also fixed an issue with automatic --ssl-verify-server-cert
option substitution when the corresponding value is set
by the user to "1" or "on".
Also fixed some tests here to avoid joining one of the nodes
to another cluster when the nodes are restarted from the mtr
side, which can lead to random failures when testing with
buildbot.
mysql_discard_or_import_tablespace(): On successful
ALTER TABLE...DISCARD TABLESPACE, evict the table handle from the
table definition cache, so that ha_innobase::close() will be invoked,
like InnoDB expects to be the case. This will avoid an assertion failure
ut_a(table->get_ref_count() == 0) during IMPORT TABLESPACE.
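The affected DDL sequence, as a sketch (assuming an InnoDB table t with its own .ibd file):
```
ALTER TABLE t DISCARD TABLESPACE;  -- now also evicts t from the table definition cache
-- copy matching t.ibd/t.cfg files into the database directory, then:
ALTER TABLE t IMPORT TABLESPACE;   -- previously could hit the ut_a() assertion above
```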
ha_innobase::open(): Do not issue any ER_TABLESPACE_DISCARDED warning.
Member functions for DML will do that.
ha_innobase::truncate(), ha_innobase::check_if_supported_inplace_alter():
Issue ER_TABLESPACE_DISCARDED warnings, to compensate for the removal of
the warning in ha_innobase::open().
row_quiesce_write_indexes(): Only write information about committed
indexes. The ALTER TABLE t NOWAIT ADD INDEX(c) in the nondeterministic
test case will most of the time fail due to a metadata lock (MDL) timeout
and leave behind an uncommitted index.
Reviewed by: Sergei Golubchik
The geometry parsing requires "type": "Feature", but this member need
not be first in the JSON structure.
Adjust the code to return an error if the geometry isn't a JSON object,
but to continue parsing, searching for "type": "Feature" to trigger
the geometry parsing.
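For example (a sketch; the point is only that "type": "Feature" is not the first member):
```
SELECT ST_AsText(ST_GeomFromGeoJSON(
'{"geometry": {"type": "Point", "coordinates": [1, 2]}, "type": "Feature"}'));
```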
Thanks Derick Magnusen for the bug report.
The bug is caused by a similar mechanism as MDEV-21027.
The function check_insert_or_replace_autoincrement failed to open
all the partitions on REPLACE ... SELECT statements, which resulted in
the assertion failure.
* Delete tests that are not supported and not going to be supported
any time soon
* Fix result set on tests that are not run on bb
* Fix tests that fail because of auto increment offset
* Make sure that disabled tests have open bug report
- Regression introduced in 7baf24a0f8 for multi-digit gcc versions.
There is no dot in the `dumpversion` output.
```
$ gcc -dumpversion
10
```
Otherwise it will fail and not produce the output
```
Running dgcov
Name "IO::Uncompress::Gunzip::GunzipError" used only once: possible typo at ./dgcov.pl line 197.
Cannot parse gcc -dumpversion: 9
```
- The warning `once` is always generated:
```
Running dgcov
Name "IO::Uncompress::Gunzip::GunzipError" used only once: possible typo at ./dgcov.pl line 197.
<number>
```
Suppressing the line `Name "IO::Uncompress::Gunzip::GunzipError" used only once: possible typo at ./dgcov.pl line 197.`
with the patch.
- Reviewed by: <>