In MemorySanitizer builds of 10.10 and 10.11, the os_aio_pending_writes()
assertion would rather often fail in innodb_init() during
mariadb-backup --prepare.
The assertion could also fail during InnoDB startup, but less often.
Before commit 685d958e38 in 10.8, the
log file cleanup after a successfully applied backup is different,
and the os_aio_pending_writes() assertion is in srv0start.cc.
IORequest::write_complete(): Invoke node->complete_write() before
releasing the page latch, so that a log checkpoint that is about to
execute concurrently will not miss a fdatasync() or fsync() on the
file, in case this was the first write since the last such call.
create_log_file(), srv_start(): Replace the debug assertion with
a debug check. For all intents and purposes, all writes could have
been completed but some write_io_callback() may not have invoked
io_slots::release() yet.
This was introduced in the last merge with 10.6.
The reason is that 10.6 does not need anything special to free histograms
as everything is allocated on a memroot.
In 10.10 histograms use the vector class, which has some problems:
- No automatic free
- No memory usage accounting
(we should at some point remove vector usage because of the above problem)
Fixed by explicitly freeing histograms when freeing TABLE_STATISTICS
objects.
- InnoDB fails to check the overflow buffer while applying
the operation to the table that was rebuilt. This is caused
by commit 3cef4f8f0f (MDEV-515).
Fixed missing initialization of Alter_info().
This could cause crashes in some CREATE TABLE ... LIKE scenarios
where some generated indexes were automatically dropped.
I also added a test that we do not try to drop from index_stats for
temporary tables.
The intention was always to not create histograms for single value
unique keys (as histograms are not useful in this case), but because of
a bug in the code this was still done.
The changes in the test cases are mainly because hist_size is now NULL
for these kinds of columns.
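For illustration, a hedged sketch of the new behaviour (table, data and
exact output here are made up and depend on version and settings):

CREATE TABLE t1 (id INT PRIMARY KEY, val INT);
INSERT INTO t1 VALUES (1,10),(2,20),(3,30);
ANALYZE TABLE t1 PERSISTENT FOR ALL;
-- The single-column PRIMARY KEY no longer gets a histogram:
SELECT column_name, hist_size FROM mysql.column_stats WHERE table_name='t1';
-- expected: hist_size is NULL for 'id', non-NULL for 'val'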
The MDEV-29693 conflict resolution is from Monty, as is
a bug fix where ANALYZE TABLE wrongly built histograms for
a single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid mtr issue
Disabled tests:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
The error is caused by the MDEV-30165 fix in the following commit:
d13a57ae81
There is a logical error in lock_release_on_prepare_try():
if (supremum_bit)
lock_rec_unlock_supremum(*cell, lock);
else
lock_rec_dequeue_from_page(lock, false);
This is wrong because there can be other bits set in the lock's bitmap,
and the lock type can match the release criteria, but the above logic
releases only the supremum bit of the lock.
The fix is to release the whole lock if it matches the release criteria,
and otherwise to unlock only the supremum bit if the supremum is locked.
There is also a test for the case, which was reported by the QA team. I
placed it in a separate file, because it requires a debug build.
Reviewed by: Marko Mäkelä
- InnoDB fails to report the error when the encryption configuration
wasn't passed. This patch addresses the issue by reporting
the error while loading the tablespace and while deferring the
tablespace creation.
There are many filesystem related errors that can occur with
MariaBackup. These are already output to stderr with a good description of
the error. Many of these are permission or resource (file descriptor)
limits where the assertion and resulting core crash doesn't offer
developers anything more than the log message. To the user, assertions
and core crashes come across as poor error handling.
As such we return an error and handle this all the way up the stack.
log_t::create(), log_t::attach(): Return whether the initialisation
succeeded. It may fail if too large an innodb_log_buffer_size is specified.
recv_sys_t::close_files(): Actually close the data files so that the
test mariabackup.huge_lsn,strict_crc32 will not fail on Microsoft Windows
when renaming ib_logfile101 due to a leaked file handle of ib_logfile0.
recv_sys_t::find_checkpoint(): Register recv_sys.files[0] as OS_FILE_CLOSED
because the file handle has already been attached to log_sys.log and
we do not want to close the file twice.
recv_sys_t::read(): Access the first log file via log_sys.log.
This is a port of commit 6e9b421f77
adapted to commit 685d958e38 (MDEV-14425).
The test case is omitted, because it would fail to fail when the log
is stored in persistent memory (or "fake PMEM" on Linux /dev/shm).
The system variable spider_disable_group_by_handler, if on, will
disable the spider group by handler (gbh). Such disablement serves
as a workaround for bugs caused by the gbh, labelled with spider-gbh in Jira,
including MDEV-26247, MDEV-28998, MDEV-29163, MDEV-30392, MDEV-31645.
Tests for these tickets are added accordingly with the workaround in
place.
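A hedged usage sketch (variable scope and the table name here are
assumptions):

-- Work around a gbh bug by disabling the group by handler for the session:
SET SESSION spider_disable_group_by_handler=1;
SELECT a, COUNT(*) FROM spider_table GROUP BY a;
SET SESSION spider_disable_group_by_handler=0;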
In commit 5ea5291 @sanja-byelkin for an unknown reason switched the file mode
for 3 Galera tzinfo related test files from 644 to 755. This exists only
from branch 10.6 onward:
$ git checkout 10.5
$ find mysql-test -executable -name *.test -or -executable -name *.result
(no results)
$ git checkout 10.6
$ find mysql-test -executable -name *.test -or -executable -name *.result
mysql-test/suite/galera/t/mysql_tzinfo_to_sql.test
mysql-test/suite/galera/t/mariadb_tzinfo_to_sql.test
mysql-test/suite/galera/r/mariadb_tzinfo_to_sql.result
No test file nor test result file should be executable, so run chmod -x
on them.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
copy_back(): Also copy the dummy empty ib_logfile0 so that
MariaDB Server 10.8 or later can be started after
--copy-back or --move-back.
Thanks to Daniel Black for reporting this.
This is a 10.5 version of
commit ebf3649259
Fix spider init bugs (MDEV-22979, MDEV-27233, MDEV-28218) while
preventing regression on old ones (MDEV-30370, MDEV-29904)
Two things are changed:
First, Spider initialisation is made fully synchronous, i.e. it no
longer happens in a background thread. Adapted from the original fix
by nayuta for MDEV-27233. This change itself would cause failure when
spider is initialised early, by plugin-load-add, due to dependency on
Aria and udf function creation, which are fixed in the second and
third parts below. Requires the SQL Service, thus porting to earlier
versions requires MDEV-27595.
Second, if spider is initialised before udf_init(), create udf by
inserting into `mysql.func`, otherwise do it by `CREATE FUNCTION` as
usual. This change may be generalised in MDEV-31401.
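For illustration, a hedged sketch of the two paths (the UDF name and the
mysql.func column values are assumptions, not the literal init queries):

-- Normal path, once udf_init() has run:
CREATE FUNCTION spider_direct_sql RETURNS INT SONAME 'ha_spider.so';
-- Early-init path, before udf_init(): register the UDF directly in the
-- mysql.func table so it is picked up when udf_init() runs later
-- (ret=2 is assumed here to mean an integer return type):
INSERT INTO mysql.func (name, ret, dl, type)
VALUES ('spider_direct_sql', 2, 'ha_spider.so', 'function');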
Also factor out some clean-up queries from deinit_spider.inc for use
in spider init tests.
A minor caveat is that early spider initialisation will fail if the
server is bootstrapped for the first time, due to the missing `mysql`
database, which needs to be created by the bootstrap script.
Removing procedures that were created and dropped during init.
This also fixes a race condition where an mtr test with
plugin-load-add=ha_spider.so causes the post-test check to fail as it
expects the procedures to still be there.
There are several plugins in ha_spider: spider, spider_alloc_mem,
spider_wrapper_protocols, spider_rewrite etc.
INSTALL PLUGIN foo SONAME ha_spider, where foo is any of these plugins,
causes all the other ones to be installed by the init queries.
This introduces unnecessary complexity. For example, it reads
mysql.plugin to find all the other plugins, causing the hack of moving
spider plugin init to a separate thread.
To install all spider related plugins, install soname ha_spider should
be used instead.
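For example (a sketch; quoting may differ from the actual scripts):

-- Installs every plugin contained in the shared object:
INSTALL SONAME 'ha_spider';
-- versus installing one named plugin from the same library:
INSTALL PLUGIN spider SONAME 'ha_spider';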
This also fixes spurious rows in mysql.plugin when installing say only
the spider plugin with `plugin-load-add=SPIDER=ha_spider.so`:
select * from mysql.plugin;
name dl
spider_alloc_mem ha_spider.so # should not be here
spider_wrapper_protocols ha_spider.so # should not be here
Adapted from part of the reverted commit
c160a115b8.
Add threadpool functionality to restrict concurrency during "batch"
periods (where tasks are added in rapid succession).
This will throttle thread creation more aggressively than usual, while
keeping performance at least on-par.
One of these cases is bufferpool load, where async read IOs are executed
without any throttling. There can be as many as 650K read IOs for
loading a 10GB buffer pool.
Another one is recovery, where "fake read" IOs are executed.
Why are there more threads than we expect?
Worker threads are not recognized as idle until they return to the
standby list, and to return to that list, they need to acquire a
mutex currently held in submit_task(). In those cases, submit_task()
has no worker to wake, and would create threads until the default concurrency
level (2*ncpus) is satisfied. Only after that would throttling happen.
The problem was that we did not handle errors properly in
JOIN::get_best_combination. In case of an early error, JOIN->join_tab would
contain uninitialized values, which would cause errors on cleanup().
The error in question was reported earlier, but not noticed until later.
One cause of this is that most of the sql_select.cc code just checks
thd->fatal_error and not thd->is_error().
Fixed by changing the checks of fatal_error to is_error().
This allows a user to change the default value of MAX_SEL_ARGS (16000)
in the rare case where they need more generated SEL_ARGS (as part of
the range optimizer).
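A hedged usage sketch (the exposed variable name optimizer_max_sel_args
is an assumption here):

-- Allow more SEL_ARG objects than the default of 16000 for one session:
SET SESSION optimizer_max_sel_args=64000;
-- or set it server-wide, e.g. in my.cnf under [mysqld]:
--   optimizer_max_sel_args=64000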
Raise notes if indexes cannot be used:
- in case of data type or collation mismatch (different error messages).
- in case a table field was replaced with something else
  (e.g. Item_func_conv_charset) during a condition rewrite.
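A hedged example of a statement that now raises such a note (table and
exact note text are made up):

CREATE TABLE t1 (a VARCHAR(10), KEY(a));
-- Comparing the indexed VARCHAR column to a number forces a type
-- conversion, so the key cannot be used and a note is raised:
EXPLAIN SELECT * FROM t1 WHERE a=10;
SHOW WARNINGS;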
Added option to write warnings and notes to the slow query log for
slow queries.
New variables added/changed:
- note_verbosity, which is a set of the following options:
basic - All old notes
unusable_keys - Print warnings about keys that cannot be used
for select, delete or update.
explain - Print unusable_keys warnings for EXPLAIN queries.
The default is 'basic,explain'. This means that for old installations
the only notable new behavior is that one will get notes about
unusable keys when one does an EXPLAIN for a query. One can turn off
all notes by either setting note_verbosity to "" or setting sql_notes=0.
- log_slow_verbosity has a new option 'warnings'. If this is set
then warnings and notes generated are printed in the slow query log
(up to log_slow_max_warnings times per statement).
- log_slow_max_warnings - Max number of warnings written to the
slow query log (see the example below).
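A hedged configuration sketch showing how these variables fit together:

-- Also raise unusable-key notes for normal statements, not only EXPLAIN:
SET GLOBAL note_verbosity='basic,unusable_keys,explain';
-- Write the warnings/notes of slow queries to the slow query log,
-- at most 10 per statement (this replaces the previous
-- log_slow_verbosity value; add it to any options already in use):
SET GLOBAL log_slow_verbosity='warnings';
SET GLOBAL log_slow_max_warnings=10;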
Other things:
- One can now use =ALL for any 'set' variable to set all options at once.
For example using "note_verbosity=ALL" in a config file or
"SET @@note_verbosity=ALL' in SQL.
- mysqldump will in the future use @@note_verbosity="" instead of
@@sql_notes=0 to disable notes.
- Added "enum class Data_type_compatibility" and changing the return type
of all Field::can_optimize*() methods from "bool" to this new data type.
Reviewer & Co-author: Alexander Barkov <bar@mariadb.com>
- The code that prints out the notes comes mainly from Alexander
The warning is given in case a table is not found or if there is a lock
timeout. The warning is needed because, in case of a lock timeout, the
persistent table stats are going to be wrong.
Updated ha_mroonga::storage_check_if_supported_inplace_alter to support
new ALTER TABLE flags.
This fixes failing tests:
mroonga/storage.alter_table_add_index_unique_duplicated
mroonga/storage.alter_table_add_index_unique_multiple_column_duplicated
Example of what causes the problem:
T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields,
then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames temp table in place of the old table.
Now the statistics from ANALYZE match the old, deleted table.
Fixed by waiting to delete the old statistics until ALTER TABLE is
the only one using the old table, and by ensuring that rename of columns
can handle swapping of column names.
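For example, a rename that swaps two column names, which the statistics
update must now handle without failing on duplicate keys (a sketch,
names made up):

ALTER TABLE t1 RENAME COLUMN a TO b, RENAME COLUMN b TO a;
-- the rows for 'a' and 'b' in mysql.column_stats must end up under the
-- swapped names.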
rename_columns_in_stat_table() (former rename_column_in_stat_tables())
now takes a list of columns to rename. It uses the following algorithm
to update column_stats to be able to handle circular renames:
- While there are columns to be renamed, and it is the first loop or the
  last rename loop did change something:
  - Loop over all columns to be renamed
    - Change the column name in column_stats
    - If this fails because of a duplicate key
      - If this is the first change attempt for this column
        - Change the column name to a temporary column name
        - If there was a conflicting row, replace it with the current row.
      else
        - Remove the entry from the column list
- Loop over all remaining columns in the list
  - Remove the conflicting row
  - Change the column from its temporary name to its final name in column_stats
Other things:
- Don't flush tables for every operation. Only flush when all updates
are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
- Fixed that we do not collect statistics for hidden hash columns
used by UNIQUE constraint on long values.
- Fixed that we do not collect statistics for blob columns referred to by
generated virtual columns. This was achieved by storing the fields for
which we want to have statistics in table->has_value_set instead of
in table->read_set.
- Rename of indexes was not handled for persistent statistics.
This is now handled similarly to rename of columns. Renamed indexes
are now stored in 'rename_stat_indexes' and handled in
Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may, instead of creating a new index, rename
an existing generated foreign key index. This was not reflected in
the index_stats table because this was handled in
mysql_prepare_create_table() instead of in the mysql_alter() code.
Fixed by adding a call in mysql_prepare_create_table() to drop the
changed index.
I also had to replace the code that marked the index to be ignored
with code that does not destroy the original index name.
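A hedged sketch of the ADD INDEX case described above (names are made up):

CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (a INT, FOREIGN KEY (a) REFERENCES parent(id)) ENGINE=InnoDB;
-- A generated index on (a) backs the foreign key; adding an explicit index
-- on the same column may rename that generated index instead of creating a
-- new one, which the persistent index statistics now reflect:
ALTER TABLE child ADD INDEX idx_a (a);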
Reviewer: Sergei Petrunia <sergey@mariadb.com>
- This failure happens due to commit bf3b787e02 (MDEV-31835). InnoDB fails
to apply the buffered insert operation for transaction_registry during
the commit operation.
To avoid this, ha_commit_trans() should call extra() with
HA_EXTRA_RESET_STATE to apply the bulk buffered insert operation.
In `pseudo_slave_mode=1` aka "pseudo-slave" mode any prepared XA
transaction disconnects from the user session, as if the user
connection drops. The xid of such transaction remains in the server,
and should the prepared transaction be read-only, it is marked.
The marking makes sure that the following termination of the
read-only transaction ends up with ER_XA_RBROLLBACK.
This did not actually take place for `pseudo_slave_mode=1` read-only
transactions.
Fixed by checking the read-only status of a prepared transaction
at the time it disconnects from the `pseudo_slave_mode=1` session, to mark
its xid when that is the case.
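A hedged illustration of the scenario (xid and statements are made up):

SET pseudo_slave_mode=1;
XA START 'xid1';
SELECT 1;            -- no changes are made, so the transaction is read-only
XA END 'xid1';
XA PREPARE 'xid1';   -- the prepared transaction detaches from this session
-- With the fix, the xid is marked read-only at detach time, so terminating
-- the transaction later (from any connection) reports ER_XA_RBROLLBACK:
XA COMMIT 'xid1';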