This failure was caused by several bugs:
- Someone had removed s3-slave-ignore-updates=1 from slave.cnf, which
caused the slave to remove files that the master was working on
(see the config sketch after this list).
- Bug in ha_partition::change_partitions() that didn't reset m_new_file
in case of errors. This caused crashes in ha_maria::extra() as the
maria handler was called on files that were already closed.
- In ma_pagecache there was a bug where a read error on a big block
(S3 block) left the flag PCBLOCK_BIG_READ set on the page, which
caused an assert when the page was flushed.
- Flush all cached tables in case of ignored ALTER TABLE
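A minimal config sketch for the replica (the section header is an
assumption; the option spelling matches the one above):
[mariadb]
# Tell the S3 engine on the replica that it shares the S3 storage with
# the primary, so replicated changes to S3 tables are ignored instead
# of being re-applied (and files in use by the primary are left alone).
s3-slave-ignore-updates=1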
Note that when merging the code from 10.3 that fixes the partition bug,
use the code from this patch instead.
Changes to ma_pagecache.cc written or reviewed by Sanja
First step in moving DROP TABLE out of the handler.
TODO: do the same for other methods that don't need an open table.
For now hton->drop_table is optional, for backward-compatibility
reasons.
Apply this patch from Percona Server (amended for 10.5):
commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
Date: Tue Nov 14 06:34:19 2017 +0200
Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)
Make ANALYZE TABLE stop flushing affected tables from the table
definition cache, which has the effect of not blocking any subsequent
new queries involving the table if there's a parallel long-running
query:
- new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
tables;
- in mysql_admin_table, if we are performing ANALYZE TABLE, and the
table flag is set, do not remove the table from the table
definition cache, do not invalidate query cache;
- in partitioning handler, refresh the query optimizer statistics
after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
- new testcases main.percona_nonflushing_analyze_debug,
parts.percona_nonflushing_analyze_debug and a supporting debug sync
point.
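For illustration, a minimal sketch of the behaviour this enables
(connection and table names are hypothetical):
# connection 1: a long-running query keeps an older version of t1 open
SELECT COUNT(*) FROM t1 WHERE some_col = 42;
# connection 2: with HA_ONLINE_ANALYZE set, this no longer flushes t1
# from the table definition cache
ANALYZE TABLE t1;
# connection 3: a new query on t1 no longer blocks waiting for
# connection 1 to finish
SELECT * FROM t1 LIMIT 1;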
For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
updated for handler::info(HA_STATUS_CONST), not often enough for
tokudb_cardinality_scale_percent). TokuDB may return different
rec_per_key values depending on dynamic variable
tokudb_cardinality_scale_percent value. The server does not have a way
of knowing that changing this variable invalidates the previous
rec_per_key values in any opened table shares, and so does not call
info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
seeming to be more correct.
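A hedged sketch of the TokuDB effect (table name is hypothetical):
ANALYZE TABLE t2;
SET GLOBAL tokudb_cardinality_scale_percent = 50;
# before the fix, the cardinality reported here could still be based on
# the old scale percent, because info(HA_STATUS_CONST) was not called again
SHOW INDEX FROM t2;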
create_table_info_t::create_foreign_keys(): Make the create_name buffer
long enough for both the database and table name. It is still not long
enough to hold partition or subpartition names. Because we have never
supported FOREIGN KEY constraints on partitions, we can simply skip
the call to innobase_convert_name() on CREATE TABLE.
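A hedged reminder of why this is safe (exact error text may differ
between versions): FOREIGN KEY constraints are rejected on partitioned
tables, for example:
CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
# fails with "Foreign keys are not yet supported in conjunction with
# partitioning", so the buffer never needs to hold partition names
CREATE TABLE child (
  id INT,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent (id)
) ENGINE=InnoDB
  PARTITION BY HASH (id) PARTITIONS 2;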
MDEV-22488 test failures: parts.partition_debug_innodb /
parts.partition_debug_myisam
The reason for the failure was a faulty printf() that accessed
non-existing data on the stack.
The reason the failure was hard to find was that the partition_debug_...
tests disable core dumps, so there was no trace in the logs that the
server had crashed.
Fixed by correcting the faulty push_warning_printf() and splitting the
tests into two parts: one that tests failures (with core dumps enabled)
and one that tests crash recovery.
The review and test splitting was done by Monty
The test was broken in commit f40ca33bbc.
The background DROP TABLE queue in InnoDB will continue to
use names like #sql-ib, and we must filter out those file names.
The reason for this is to make all temporary file names similar and
also to be able to figure out from where a #sql-xxx name originates.
The new format is, for most cases:
'#sql-name-current_pid-thread_id[-increment]'
where name is one of subselect, alter, exchange, temptable or backup.
The exceptions are:
ALTER PARTITION shadow files:
'#sql-shadow-thread_id-original_table_name'
Names used with temp pool:
'#sql-name-current_pid-pool_number'
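For example (process and thread ids are of course arbitrary), an ALTER
running as thread 123 in server process 4567 would use:
#sql-alter-4567-123
and an ALTER PARTITION shadow file for table t1 created by thread 123:
#sql-shadow-123-t1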
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
In addition, partitioned S3 tables can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files
on S3 for partitioned shared (S3) tables.
The discovery methods are enhanced by allowing engines that support
discovery to also discover the partitioned table's .frm and .par files.
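A hedged usage sketch (table, column and partition names are only
examples):
CREATE TABLE t1 (a INT) ENGINE=Aria
  PARTITION BY RANGE (a)
   (PARTITION p0 VALUES LESS THAN (100),
    PARTITION p1 VALUES LESS THAN (200));
# move the table, including its .frm and .par files, to S3
ALTER TABLE t1 ENGINE=S3;
# supported ALTER PARTITION operations work directly on the S3 table
ALTER TABLE t1 ADD PARTITION (PARTITION p2 VALUES LESS THAN (300));
ALTER TABLE t1 DROP PARTITION p0;
# the three exceptions listed above are still expected to fail
ALTER TABLE t1 REBUILD PARTITION p1;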
Things in more detail
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added hton callback create_partitioning_metadata to inform the handler
that metadata for a partitioned table has changed
- Added back handler::discover_check_version() to be able to check if
a table's or a partitioned table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around
- Appended the extension to the path for writefrm() to be able to reuse
the function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of
non-critical tracing.
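As a hedged sketch of what the enhanced discovery enables (assuming a
second server configured against the same S3 bucket):
# on the other server no local CREATE TABLE is needed; the .frm and .par
# files of the partitioned table are discovered from S3 on first access
SELECT COUNT(*) FROM t1;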
>= M_TOT_PARTS' FAILED.
This patch is taken from MySQL, originally written by Mattias Jonsson
Here follows the original commit message:
Problem in handle_alter_part_error():
the altered partition_info object was still used
if the table was under LOCK TABLES.
Solution was to always close and destroy all table
and table_share instances if an exclusive MDL lock was
possible.
If not succeeding in getting an exclusive lock (only possible
during rollback of DDL), at least close and destroy this
table instance.
rb#7361.
Approved by Mikael and Aditya.
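A hedged repro sketch of the LOCK TABLES scenario described above
(names are hypothetical; the ALTER is assumed to fail part-way, e.g.
via debug error injection):
CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 4;
LOCK TABLES t1 WRITE;
# a partitioning ALTER that fails and must be rolled back
ALTER TABLE t1 COALESCE PARTITION 2;
# before the fix, the already-altered partition_info could still be used
# by this locked table instance
SELECT * FROM t1;
UNLOCK TABLES;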
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
- The flag ALTER_STORED_COLUMN_TYPE was set while doing a varchar
extension for a partitioned table. If all partitions support
can_be_converted_by_engine(), the flag should instead be set to
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE.
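For illustration (a sketch; the exact ALGORITHM accepted is an
assumption), the case in question is extending a VARCHAR on a
partitioned table in a way every partition's engine can handle without
copying data:
CREATE TABLE t1 (a INT, c VARCHAR(30)) ENGINE=InnoDB
  PARTITION BY HASH (a) PARTITIONS 2;
# both lengths fit in one length byte, so can_be_converted_by_engine()
# holds for every partition and no table copy should be needed
ALTER TABLE t1 MODIFY c VARCHAR(60), ALGORITHM=NOCOPY;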
Remove usage of the deprecated variable storage_engine. It was
deprecated in 5.5 but never issued a deprecation warning. Make it issue
a warning in 10.5.1. It is replaced with default_storage_engine.
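A hedged sketch of the change in behaviour:
# deprecated spelling: from 10.5.1 this raises a deprecation warning
SET SESSION storage_engine = Aria;
SHOW WARNINGS;
# preferred replacement
SET SESSION default_storage_engine = Aria;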
Apparently, regular expression operations that remove entire lines
of output do not work with list_files, and hence the adjustments in
commit 1c282d4bc4 were ineffective.
For cat_file (preceded by list_files_write_file) the replace_regex
does work.
For some reason, for suite/parts/inc/partition_crash_exchange.inc
some file names will be lost when using list_files_write_file
instead of list_files.
We use a precise pattern match. dict_mem_create_temporary_tablename()
is generating #sql-ib names followed by decimal digits only.
Apply the correct pattern for debug instrumentation:
SET @save_dbug=@@debug_dbug;
SET debug_dbug='+d,...';
...
SET debug_dbug=@save_dbug;
Numerous tests use statements of the form
SET debug_dbug='-d,...';
which will inadvertently enable all DBUG tracing output,
causing unnecessary waste of resources.