Make dict_table_t::n_ref_count private, and protect it with
a combination of dict_sys->mutex and atomics. We want to be
able to invoke dict_table_t::release() without holding
dict_sys->mutex.
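A minimal sketch of the intended locking rule, assuming a heavily simplified class (the real dict_table_t has far more state; member and mutex names follow the commit message):
```
#include <atomic>
#include <cstdint>

// Sketch only: n_ref_count is atomic, so release() needs no mutex,
// while code that needs a stable view of the count holds dict_sys->mutex.
struct dict_table_t_sketch {
  void acquire() { n_ref_count.fetch_add(1); }

  // May be called without holding dict_sys->mutex.
  void release() { n_ref_count.fetch_sub(1); }

  // Caller is assumed to hold dict_sys->mutex when a stable view of the
  // count matters (e.g. for an eviction decision).
  uint32_t get_ref_count() const { return n_ref_count.load(); }

private:
  std::atomic<uint32_t> n_ref_count{0};
};
```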
When processing a query containing WITH clauses, a call of the function
check_dependencies_in_with_clauses() before opening the tables used in the
query is necessary if the WITH clauses include specifications of recursive
CTEs.
This call was missing if such a query belonged to a stored function.
This caused server misbehavior: it could report a spurious error,
as in the test case for MDEV-16629, or the executed query could hang,
as in the test cases for MDEV-16661 and MDEV-15151.
Marko mentions that it could be caused by MDEV-15740, where InnoDB does not
flush the redo log as often as it should with innodb_flush_log_at_trx_commit=1.
The workaround is to use innodb_flush_log_at_trx_commit=2, which,
according to MDEV-15740, is more durable.
One can create a table where a `field` `check` constraint and a `table` `check` constraint have the same name.
For example:
`create table t(a int check(a>0), constraint a check(a>10));`
But when inserting new rows, the same error is always raised.
For example, with
```insert into t values (-1);```
and
```insert into t values (10);```
the same error `ER_CONSTRAINT_FAILED` is obtained, and it is not clear which constraint is violated.
This patch fixes the problem so that if a field constraint is violated, the first parameter
in the error message is `table.field_name`, and if a table constraint is violated, the first
parameter in the error message is `constraint_name`.
Disks with native 4K sectors need 4K alignment and size for unbuffered IO
(i.e. for files opened with FILE_FLAG_NO_BUFFERING).
InnoDB opens the redo log with FILE_FLAG_NO_BUFFERING, but it always does
512-byte IOs. Thus, the IO on native 4K sectors will fail, rendering
InnoDB non-functional.
The fix is to check whether OS_FILE_LOG_BLOCK_SIZE is a multiple of the
logical sector size and, if it is not, reopen the redo log without the
FILE_FLAG_NO_BUFFERING flag.
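A minimal sketch of the check, assuming the logical sector size has already been queried from the OS (the function name below is hypothetical):
```
#include <cstdint>

constexpr uint32_t OS_FILE_LOG_BLOCK_SIZE = 512;  // redo log I/O unit

// Unbuffered (FILE_FLAG_NO_BUFFERING) I/O requires every transfer to be
// a multiple of the volume's logical sector size, so 512-byte redo log
// I/O only works when 512 is a multiple of that size.
bool redo_log_unbuffered_io_possible(uint32_t logical_sector_size)
{
  return logical_sector_size != 0 &&
         OS_FILE_LOG_BLOCK_SIZE % logical_sector_size == 0;
}
```
If this returns false (e.g. with a logical sector size of 4096), the redo log would be reopened without FILE_FLAG_NO_BUFFERING.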
MDEV-7257 made a dump thread read from the binlog concurrently with
writers as long as the bytes read are below a watermark
(MYSQL_BIN_LOG::binlog_end_pos). However, it turned out to be possible for
a dump thread reader to reach bytes past the watermark through a
feature of IO_CACHE that fills in the internal buffer; while doing
so it could read what the reader is not supposed to see (the bytes
above MYSQL_BIN_LOG::binlog_end_pos).
The issue is fixed by constraining the IO_CACHE buffer fill to respect
the watermark.
An added unit test proves that reading from the file is bound to an
external parameter passed to the IO_CACHE::end_of_file cache member.
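A hedged sketch of the refill constraint (names simplified; this is not the actual mysys IO_CACHE code):
```
#include <algorithm>
#include <cstddef>
#include <cstdint>

struct read_cache_sketch {
  uint64_t pos_in_file;  // file offset of the next refill
  uint64_t end_of_file;  // watermark; bytes beyond it must stay invisible
  size_t   buffer_size;  // capacity of the internal buffer
};

// How many bytes the next buffer fill may fetch: the refill is clamped
// to the watermark even if the file already contains more bytes.
size_t max_refill_length(const read_cache_sketch &c)
{
  if (c.pos_in_file >= c.end_of_file)
    return 0;  // nothing readable below the watermark yet
  uint64_t visible = c.end_of_file - c.pos_in_file;
  return static_cast<size_t>(std::min<uint64_t>(visible, c.buffer_size));
}
```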
The previous correction of the patch for mdev-16473 did not work
correctly for databases whose names started with '*'.
Added a test case with a database named "*".
Only close stdin if it was open initially. Otherwise we may close a file
descriptor that has been reused for a different purpose (specifically, for
the binlog index file in the case of this bug).
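An illustration of the descriptor-reuse hazard on a POSIX system (the file name is purely illustrative):
```
#include <fcntl.h>
#include <unistd.h>

int main()
{
  close(0);                                 // stdin is already closed
  int fd = open("binlog.index", O_RDONLY); // kernel hands back fd 0
  close(0);  // code that blindly "closes stdin" closes binlog.index
  (void) fd;
  return 0;
}
```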
This is a typical systemd response where it tries to shut down the
joiner (due to a "timeout") before the joiner manages to complete the SST.
In wsrep_sst_wait() and wsrep_SE_init_wait():
while waiting for the operation to finish, use mysql_cond_timedwait
instead of mysql_cond_wait, and if the operation is not finished,
extend the systemd timeout (if needed).
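A hedged sketch of the waiting pattern, using plain pthreads instead of the server's mysql_cond_* wrappers (EXTEND_TIMEOUT_USEC needs systemd >= 236; link with -lsystemd):
```
#include <systemd/sd-daemon.h>
#include <pthread.h>
#include <time.h>

void wait_until_done(pthread_mutex_t *mtx, pthread_cond_t *cond,
                     const bool *done)
{
  pthread_mutex_lock(mtx);
  while (!*done)
  {
    struct timespec abstime;
    clock_gettime(CLOCK_REALTIME, &abstime);
    abstime.tv_sec += 10;                 // wake up every 10 seconds
    pthread_cond_timedwait(cond, mtx, &abstime);
    if (!*done)                           // still in progress: ask systemd
      sd_notify(0, "EXTEND_TIMEOUT_USEC=30000000");  // for 30 more seconds
  }
  pthread_mutex_unlock(mtx);
}
```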
buf_LRU_drop_page_hash_for_tablespace(): Return whether any adaptive
hash index entries existed. If yes, the caller should keep retrying to
drop the adaptive hash index.
row_import_for_mysql(), row_truncate_table_for_mysql(),
row_drop_table_for_mysql(): Ensure that the adaptive hash index was
entirely dropped for the table.
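A self-contained sketch of the retry contract (a stand-in, not the actual InnoDB code; the counter merely simulates entries found by a buffer pool scan):
```
static int fake_ahi_entries = 3;  // simulated AHI entries for the tablespace

// Mirrors the new contract of buf_LRU_drop_page_hash_for_tablespace():
// returns true if any adaptive hash index entries existed.
static bool drop_page_hash_pass()
{
  if (fake_ahi_entries == 0)
    return false;
  --fake_ahi_entries;  // one pass drops some entries; more may appear
  return true;
}

int main()
{
  // Callers such as row_drop_table_for_mysql() keep retrying until the
  // adaptive hash index is entirely dropped.
  while (drop_page_hash_pass())
  { /* retry */ }
  return 0;
}
```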
Before this patch, if no default database was set, the server threw
an error for any table name reference that was not fully qualified with a
database name. In particular, this happened for table names referencing
CTE tables. This was incorrect.
The error message was thrown at the parser stage, when the names referencing
different tables had not been resolved yet.
Now, if no default database is set and a WITH clause is used in the
processed statement, any table reference is just supplied with a dummy
database name "*none*" at the parser stage. Later, after a call
of check_dependencies_in_with_clauses(), when the names for CTE tables
can be resolved, error messages are thrown only for those names that
refer to non-CTE tables. This is done in open_and_process_table().
The observed and described execution time difference of the
partitioned engine between master and slave was caused by excessive
invocation of base_engine::rnd_init(), which was done even for partitions
not involved in the Rows-event operation.
The bug's slave slowdown therefore scales with the number of partitions.
Fixed by applying an upstream patch.
References:
----------
https://bugs.mysql.com/bug.php?id=73648
Bug#25687813 REPLICATION REGRESSION WITH RBR AND PARTITIONED TABLES
It only worked if the mroonga plugin wasn't installed before (the normal
case), but then it didn't need to delete anything.
If, by some glitch, mroonga was already installed, it would delete
mroonga from mysql.plugin, but INSTALL would fail (as mroonga was running),
and the script aborted, leaving mroonga not in mysql.plugin at all.
Solution for Debian/Ubuntu: install a trigger to restart mysqld
automatically whenever a package changes something in /etc/mysql
or in /etc/systemd/system/mariadb.service.d
There is a tiny chance of a race condition during MDL acquisition.
If the table is renamed just prior to
SELECT 1 FROM <table_name> LIMIT 0
then this query fails, yet mariabackup --backup does not handle it as a
fatal error and continues, only to fail later during the file copy.
The fix is to die on error if the MDL lock query fails.
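A hedged sketch of the hardened control flow (simplified; execute_query() is a hypothetical stand-in for mariabackup's query execution):
```
#include <cstdio>
#include <cstdlib>

// Hypothetical stub standing in for sending a query to the server;
// the real client call returns false on error.
bool execute_query(const char *query)
{
  (void) query;
  return true;
}

void mdl_lock_table(const char *table)
{
  char query[512];
  snprintf(query, sizeof query, "SELECT 1 FROM %s LIMIT 0", table);
  if (!execute_query(query))
  {
    // Previously the error was ignored and the backup failed later
    // during file copy; now it is fatal.
    fprintf(stderr, "Failed to take MDL lock on %s\n", table);
    exit(EXIT_FAILURE);
  }
}
```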