The problem was caused by the following scenario:
The subquery's table has two indexes: KEY a(a), KEY a_b(a,b)
- LATERAL DERIVED optimization decides to use index a.
  = The subquery uses ref access over key a.
- test_if_skip_sort_order() sees that KEY a_b satisfies the
  subquery's GROUP BY clause, and attempts to switch to it.
  = It fails to do so, because KEYUSE objects for index a_b
    are switched off.
Fixed by disallowing a change of the ref access key if it uses KEYUSE
objects injected by the LATERAL DERIVED optimization.
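A minimal sketch of the guard (struct and function names here are
hypothetical, not the actual sql_select.cc code):

  // Hypothetical KEYUSE stand-in; only the flag relevant to the fix is shown.
  struct Keyuse
  {
    bool injected_by_lateral_derived; // set for KEYUSEs added by the optimization
  };

  // Returns false when the ref access must keep its current key, i.e. when
  // it is built from KEYUSE objects injected by LATERAL DERIVED.
  bool may_switch_ref_key(const Keyuse *keyuses, unsigned n)
  {
    for (unsigned i= 0; i < n; i++)
      if (keyuses[i].injected_by_lateral_derived)
        return false;
    return true;
  }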
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data:
heartbeat is not compatible with local info;the event's data: log_file_name
mysql-bin.000009 log_pos 1054262041, Error_code: 1623
Analysis:
=========
In a replication setup, when the master server doesn't have any events to
send to the slave server, it sends a 'Heartbeat_log_event'. This event
carries the current binary log filename and offset details. The offset
value is stored within 4 bytes of the event header. When the size of the
binary log is higher than UINT32_MAX, the log_pos value does not fit in
4 bytes: it overflows and hence the slave stops with an error.
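A self-contained illustration of the truncation (not server code; the
offset from the error message above is what remains once the real position
exceeds UINT32_MAX):

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    uint64_t real_pos= (1ULL << 32) + 1054262041ULL; // binlog grew past 4GB
    uint32_t stored= (uint32_t) real_pos;            // 4-byte header field
    // prints: real=5349229337 stored=1054262041
    printf("real=%llu stored=%u\n", (unsigned long long) real_pos, stored);
    return 0;
  }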
Fix:
===
Since we cannot extend the common_header of the Log_event class, a value of
Log_event::log_pos greater than 4GB is transported in a Heartbeat
event's sub-header. Log_event::log_pos in such a case is set to zero to
indicate that the 8-byte sub-header is allocated in the event.
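A hedged sketch of the encoding idea (function and parameter names are
illustrative, not the actual event-writing code):

  #include <cstdint>

  // Returns the value for the 4-byte log_pos field of the common header.
  // When the real offset does not fit, zero is stored there and the caller
  // transports the full 8-byte value in the heartbeat sub-header.
  uint32_t encode_log_pos(uint64_t real_pos, uint64_t *subheader_pos)
  {
    if (real_pos > UINT32_MAX)
    {
      *subheader_pos= real_pos; // goes into the 8-byte sub-header
      return 0;                 // zero flags "look at the sub-header"
    }
    *subheader_pos= 0;
    return (uint32_t) real_pos; // fits in the classic 4-byte field
  }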
In case of cross-version replication the following behaviour is expected:
OLD - server without the fix
NEW - server with the fix
When log_pos <= UINT32_MAX
  OLD<->NEW : works bidirectionally as long as the binlog offset is
  (normally) within 4GB.
When log_pos > UINT32_MAX
  OLD->NEW : the 'log_pos' is bound to overflow and the NEW slave may report
  an invalid event/incompatible heartbeat event error.
  NEW->OLD : since the patched server sets log_pos=0 on overflow, the OLD
  slave will report an invalid event error.
This happens during repair when a temporary table is opened
with HA_OPEN_COPY, which resets 'share->born_transactional';
the encryption code did not like that.
Fixed by resetting just share->now_transactional.
InnoDB tries to fetch the deleted doc ids for a discarded
tablespace. In i_s_fts_deleted_generic_fill(), InnoDB needs
to check whether the table is discarded or not before fetching
deleted doc ids.
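A minimal sketch of the added guard (simplified types, not the actual
i_s_fts_deleted_generic_fill() code):

  struct DictTable { bool discarded; }; // stand-in for the InnoDB table object

  bool fill_deleted_doc_ids(const DictTable &table)
  {
    if (table.discarded)
      return false; // nothing to fetch for a discarded tablespace
    /* ... fetch and report the deleted doc ids ... */
    return true;
  }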
fil_ibd_load(): Remove a message that is basically saying that
everything works as expected. The other "Ignoring data file" message
about the presence of an extraneous file will be retained
(and expected by the test innodb.log_file_name).
After dfb41fddf6, tables that failed to drop are excluded from the
binlogged DROP TABLE statement. It means that the slave should not
expect any errors when executing DROP TABLE, and the binlog should
report that no error has happened, even if one did.
Do not write the error code into the binlogged DROP TABLE,
and remove all the code that was needed to compute it.
This commit replaces the call of the function setup_tables() with
a call of the function setup_tables_and_check_access() in the method
Multiupdate_prelocking_strategy::handle_end().
There is no known bug that would require this change. However, the change
aligns this piece of code with the code that existed before the patch for
MDEV-24823.
remove code duplication in Lex_input_stream::scan_ident_middle(),
make sure identifiers always use the same code path whether
they start from an underscore or not.
Attempt to build MariaDB server on MacOS could result in
compilation errors like the following one:
In file included from server-10.2/storage/perfschema/cursor_by_account.cc:28:
In file included from server-10.2/include/my_global.h:287:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/usr/include/c++/v1/math.h:309:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/usr/include/c++/v1/type_traits:418:
server-10.2/version:1:1: error: expected unqualified-id
MYSQL_VERSION_MAJOR=10
^
server-10.2/build.dir/include/my_config.h:529:29: note: expanded from macro 'MYSQL_VERSION_MAJOR'
This kind of compiler error occurs because the compiler's system headers
contain the directive '#include <version>' and the compiler is invoked
with -I${CMAKE_SOURCE_DIR}.
The MariaDB source code root directory contains the file VERSION, which is
picked up by the compiler when processing the directive #include <version>,
since file names on MacOS are case-insensitive, so version and VERSION are
treated as the same file name.
To fix the issue, the source code root directory should be removed from the
list of directories used by the compiler for its include search path.
Adaptive flushing is not being kicked in even when
innodb_adaptive_flushing_lwm is hit (possible regression).
Adaptive flushing should kick in if
a. dirty_pct (percentage of dirty pages in the buffer pool) >
   innodb_max_dirty_pages_pct_lwm
OR
b. the innodb_adaptive_flushing_lwm limit is reached (defaults to 10% of
   the redo log capacity)
The two conditions are independent, and whichever is first to evaluate
true should kick-start the adaptive flushing.
After recent changes that simplified the flushing algorithm logic, (b) got
ignored, which introduced the said regression.
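A hedged sketch of the intended trigger (a plain predicate with
illustrative parameter names, not the actual buf0flu.cc code):

  // Either low-water mark alone should start adaptive flushing.
  bool should_start_adaptive_flushing(double dirty_pct,
                                      double max_dirty_pages_pct_lwm,
                                      double redo_fill_pct,
                                      double adaptive_flushing_lwm)
  {
    return dirty_pct > max_dirty_pages_pct_lwm    /* condition (a) */
        || redo_fill_pct > adaptive_flushing_lwm; /* condition (b) */
  }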
The problem is that sharing the default expression among SET instructions
leads to an attempt to access the result field of a function that was
created in another instruction's runtime MEM_ROOT and has already been
freed (a bit different from the MySQL problem).
The fix is the same as in MySQL (but without the optimisation for
constants): turn
DECLARE a, b, c type DEFAULT expr;
to
DECLARE a type DEFAULT expr, b type DEFAULT a, c type DEFAULT a;
This patch changes statement rollback for streaming replication.
Previously, a statement rollback was turned into full transaction
rollback in the case where the transaction had already replicated a
fragment. This was introduced in the initial implementation of
streaming replication due to the fact that we do not have a mechanism
to perform a statement rollback on the applying side.
This policy is however overly pessimistic, causing full rollbacks even
in cases where a local statement rollback would not require a
statement rollback on the applying side. This happens to be the case when
the statement itself has not replicated any fragments.
So the patch changes the condition that determines if a statement
rollback should be turned into a full rollback accordingly.
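A hedged sketch of the changed condition (reduced to plain predicates; not
the actual wsrep code):

  // Old, overly pessimistic condition: any fragment already replicated by
  // the transaction forced a full rollback.
  bool must_full_rollback_old(bool trx_replicated_fragment,
                              bool stmt_replicated_fragment)
  {
    (void) stmt_replicated_fragment;
    return trx_replicated_fragment;
  }

  // New condition: only fragments replicated by the statement itself
  // matter, since earlier fragments need no statement rollback on the
  // applying side.
  bool must_full_rollback_new(bool trx_replicated_fragment,
                              bool stmt_replicated_fragment)
  {
    (void) trx_replicated_fragment;
    return stmt_replicated_fragment;
  }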
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Removed redundant code for BF-aborting a transaction in `thr_lock.cc`.
TOI operations ignore the provided lock_wait_timeout and use `LONG_TIMEOUT`
until the operation is finished.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
=======
An InnoDB ALTER fails before applying the instant operation, so rollback
assigns a wrong column to the secondary index field. This leads
to an assertion failure in the consecutive ALTER.
Fix:
===
InnoDB shouldn't roll back the instant operation when the ALTER fails
before the instant operation was applied.
plugin variables in SET only locked the plugin till the end of the
statement. If SET with a plugin variable was prepared, it was possible
to uninstall the plugin before EXECUTE. Then EXECUTE would crash,
trying to resolve a now-invalid pointer to a disappeared variable.
Fix: keep plugins locked until the prepared statement is closed.
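A minimal self-contained model of the lifetime issue (not the server's
real plugin API): UNINSTALL only marks the plugin, the actual free happens
at the last unlock, so holding the lock for the prepared statement's
lifetime keeps the variable valid:

  #include <cassert>
  #include <cstdio>

  struct Plugin { int locks; bool uninstalled; bool freed; };

  void plugin_lock(Plugin &p) { p.locks++; }

  void plugin_unlock(Plugin &p)
  {
    if (--p.locks == 0 && p.uninstalled)
      p.freed= true;            // the real free happens at the last unlock
  }

  int main()
  {
    Plugin p= { 0, false, false };
    plugin_lock(p);             // PREPARE locks the plugin for the PS lifetime
    p.uninstalled= true;        // UNINSTALL PLUGIN merely marks it...
    assert(!p.freed);           // ...so EXECUTE can still resolve the variable
    plugin_unlock(p);           // closing the statement releases the lock
    assert(p.freed);
    puts("ok");
    return 0;
  }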
encourage the use of mysql_secure_installation,
which can always set the root password correctly for all root accounts,
no matter how many there are and what the structure of the privilege tables is
InnoDB startup hangs if a DDL transaction needs to be
rolled back and a recovered transaction on statistics
tables exists. In that case, InnoDB should roll back
the transaction which holds locks on innodb_table_stats
or innodb_index_stats during trx_rollback_or_clean_recovered().
InnoDB fails to fetch the index type when the InnoDB dictionary
doesn't match the .frm file. InnoDB should return "corrupted" if it
can't find the index in ha_innobase::index_type().
table->move_fields wasn't undone in case of error.
1. move_fields is unconditionally undone even when an error occurs
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
The assertion is improved: storage engines like MyISAM always have to store
at least one field, so the assertion does not cover tables with no stored
columns.
So we have a race condition between three threads, resulting in a
deadlock backoff in purge, which is unexpected.
More precisely, the following happens:
T1: NOCOPY ALTER TABLE begins, and eventually it holds an
MDL_SHARED_NO_WRITE lock;
T2: FLUSH TABLES begins. It sets share->tdc->flushed = true;
T3: purge on a record with a virtual column begins. It is going to open a
table, so an MDL_SHARED_READ lock is acquired.
Since share->tdc->flushed is set, it waits for the TDC purge to end.
T1: is going to elevate its MDL lock to exclusive and therefore has to
make the other waiters back off.
T3: receives VICTIM status, reports a DEADLOCK, and sets
OT_BACKOFF_AND_RETRY in Open_table_context::m_action.
My fix is to allow opening a table in purge while flushing. The same is
already done in other maintenance facilities like REPAIR TABLE.
Another way would be making an actual backoff, but Open_table_context
does not allow distinguishing it from other failure types, which still
seem to be unexpected. Making this work would require hacking into the
Open_table_context interface for no benefit, in comparison to passing
MYSQL_OPEN_IGNORE_FLUSH during table open.
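A self-contained model of why the flag helps (simplified stand-ins, not
the real TDC code): an opener that ignores the flushed mark does not wait
for the flusher, so the three-thread cycle cannot form:

  #include <cstdio>

  struct Share { bool flushed; };            // stand-in for share->tdc->flushed

  const unsigned OPEN_IGNORE_FLUSH= 1u << 0; // stand-in for MYSQL_OPEN_IGNORE_FLUSH

  // True when the opener would have to wait for the TDC purge to end.
  bool must_wait_for_flush(const Share &share, unsigned flags)
  {
    return share.flushed && !(flags & OPEN_IGNORE_FLUSH);
  }

  int main()
  {
    Share share= { true };     // FLUSH TABLES marked the share
    printf("%d\n", must_wait_for_flush(share, 0));                 // 1: may deadlock
    printf("%d\n", must_wait_for_flush(share, OPEN_IGNORE_FLUSH)); // 0: purge proceeds
    return 0;
  }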
innodb_debug_sync was introduced in commit
b393e2cb0c and reverted in
commit fc58c17216 due to a memory leak reported
by valgrind, see MDEV-21336.
The leak is now fixed by adding `rw_lock_free(&slot->debug_sync_lock)`
after the background thread's working loop is finished, and the patch is
reapplied, taking into account the C++98 fixes by Marko.
The missing DEBUG_SYNC for MDEV-18546 in row0vers.cc is also reapplied.
The name of the table sent as an argument to handler::init() has the
database name in front of it, so we should use
table_share->table_name.length.
Item_func_history (is_history()) is a bool function that checks whether the
row is a history row by checking row_end->is_max(). The argument to
this function must be the row_end system field.
Added the above function in conjunction with the SYSTEM_TIME_BEFORE
versioning condition.
row_merge_is_index_usable(): Allow access to any SEQUENCE, even if it was
created after the read view. SQL sequences are no-rollback tables with no
history at all.
Quoting MDEV reporter Daniel Lewart:
Starting MariaDB with default configuration causes the following problems:
"[Warning] Could not increase number of max_open_files to more than 16384 (request: 32186)"
silently reduces table_open_cache_instances from 8 (default) to 4
Default Server System Variables:
extra_max_connections = 1
max_connections = 151
table_open_cache = 2000
table_open_cache_instances = 8
thread_pool_size = 4
LimitNOFILE=16384 is in the following files:
support-files/mariadb.service.in
support-files/mariadb@.service.in
Looking at sql/mysqld.cc lines 3837-3917:
wanted_files= (extra_files + max_connections + extra_max_connections +
               tc_size * 2 * tc_instances);
wanted_files+= threadpool_size;
Plugging in the default values:
wanted_files = (30 + 151 + 1 + 2000 * 2 * 8 + 4) = 32186
However, systemd configuration has LimitNOFILE = 16384, which is far smaller.
I suggest increasing LimitNOFILE to 32768.