The problem was that in a timeout event,
thd->lex->restore_backup_query_tables_list() was called when it should
not have been.
Patch tested with the script in MDEV-25651 (not suitable for mtr)
The bug is that we don't have a lock on the trigger name, so it is
possible for two threads to try to create the same trigger at the same
time and both think that they have succeeded.
The same thing can happen with DROP TRIGGER, or a combination of CREATE
and DROP TRIGGER.
Fixed by adding an MDL lock on the trigger name for the duration of the
create/drop.
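A minimal sketch of the race, assuming two concurrent client sessions
(table and trigger names are illustrative):
CREATE TABLE t1 (a INT);
-- session 1:
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @x := 1;
-- session 2, at the same time:
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @x := 2;
-- before the fix, both CREATE TRIGGER statements could report success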
Patch tested by Elena
An XA transaction only allows access to data in specific states: in
ACTIVE, but not in IDLE or PREPARE.
But even then one should be able to run SHOW STATUS.
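A minimal sketch of the state transitions, assuming a table t1 exists
(names are illustrative):
XA START 'xid1';
INSERT INTO t1 VALUES (1);   -- allowed: ACTIVE state
XA END 'xid1';               -- IDLE state: data access is rejected
XA PREPARE 'xid1';           -- PREPARE state: data access is rejected
SHOW STATUS LIKE 'Uptime';   -- must still work here
XA COMMIT 'xid1';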
This patch makes the following changes to SST authentication and
configuration:
1. Pass joiner's authentication information to donor together with address
in State Transfer Request. This allows joiner to authenticate donor on
connection. Previously joiner would accept data from anywhere.
2. Deprecate the custom SSL configuration variables tca, tcert and tkey in
favor of the more familiar ssl-ca, ssl-cert and ssl-key. For backward
compatibility tca, tcert and tkey are still supported.
3. Allow falling back to server-wide SSL configuration in [mysqld] if no SSL
configuration is found in [sst] section of the config file.
4. Introduce ssl-mode variable in [sst] section that takes standard values
and has the following effects:
- old-style SSL configuration present in [sst]: no effect
otherwise:
- ssl-mode=DISABLED or absent: retains old, backward compatible behavior
and ignores any other SSL configuration
- ssl-mode=VERIFY*: verify joiner's certificate and CN on donor,
verify donor's secret on joiner
(passed to donor via State Transfer Request)
BACKWARD INCOMPATIBLE BEHAVIOR
- anything else enables the new SSL configuration conventions but does not
require verification
ssl-mode should be set to VERIFY only in a fully upgraded cluster.
Examples:
[mysqld]
ssl-cert=/path/to/cert
ssl-key=/path/to/key
ssl-ca=/path/to/ca
[sst]
-- server-wide SSL configuration is ignored, SST does not use SSL
[mysqld]
ssl-cert=/path/to/cert
ssl-key=/path/to/key
ssl-ca=/path/to/ca
[sst]
ssl-mode=REQUIRED
-- use server-wide SSL configuration for SST but don't attempt to
verify the peer identity
[sst]
ssl-cert=/path/to/cert
ssl-key=/path/to/key
ssl-ca=/path/to/ca
ssl-mode=VERIFY_CA
-- use SST-specific SSL configuration for SST and require verification
on both sides
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
When you only need the view structure, don't call handle_derived with
DT_CREATE and rely on its internal hackish check to skip DT_CREATE,
because handle_derived is called from many different places and this
internal hackish check is indiscriminate.
Instead, just don't ask handle_derived to do DT_CREATE
if you don't want it to do DT_CREATE.
The problem was caused by the following scenario:
Subquery's table has two indexes, KEY a(a), KEY a_b(a,b)
- LATERAL DERIVED optimization decides to use index a.
= The subquery uses ref access over key a.
- test_if_skip_sort_order() sees that KEY a_b satisfies the
subquery's GROUP BY clause, and attempts to switch to it.
= It fails to do so, because KEYUSE objects for index a_b
are switched off.
Fixed by disallowing changes of the ref access key if it uses KEYUSE
objects injected by the LATERAL DERIVED optimization.
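A hypothetical query shape that could hit this path (all names are
illustrative):
CREATE TABLE t1 (a INT, b INT, KEY a(a), KEY a_b(a,b));
CREATE TABLE t2 (a INT);
-- the grouping derived table is a candidate for LATERAL DERIVED
-- (split materialization); its GROUP BY is also covered by index a_b
SELECT *
FROM t2, (SELECT a, COUNT(*) AS c FROM t1 GROUP BY a) AS dt
WHERE t2.a = dt.a;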
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data:
heartbeat is not compatible with local info;the event's data: log_file_name
mysql-bin.000009 log_pos 1054262041, Error_code: 1623
Analysis:
=========
In a replication setup, when the master server doesn't have any events to
send to the slave server it sends a 'Heartbeat_log_event'. This event
carries the current binary log filename and offset details. The offset
value is stored within 4 bytes of the event header. When the size of the
binary log is higher than UINT32_MAX the log_pos value does not fit in 4
bytes. It overflows and hence the slave stops with an error.
Fix:
===
Since we cannot extend the common_header of the Log_event class, a
greater-than-4GB value of Log_event::log_pos is transported in a
Heartbeat event's sub-header. Log_event::log_pos in such a case is set
to zero to indicate that the 8-byte sub-header is allocated in the event.
In case of cross-version replication the following behaviour is expected:
OLD - server without the fix
NEW - server with the fix
OLD<->NEW : works bidirectionally as long as the binlog offset is
(normally) within 4GB.
When log_pos > UINT32_MAX:
OLD->NEW : the 'log_pos' is bound to overflow and the NEW slave may report
an invalid event / incompatible heartbeat event error.
NEW->OLD : since the patched server sets log_pos=0 on overflow, the OLD
slave will report an invalid event error.
After dfb41fddf6, tables that failed to drop are excluded from the
binlogged DROP TABLE statement. This means that the slave should not
expect any errors when executing DROP TABLE, and the binlog should
report that no error has happened, even if one did.
Do not write the error code into the binlogged DROP TABLE, and remove
all the code that was needed to compute it.
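For example, assuming t1 exists and t2 does not:
DROP TABLE t1, t2;   -- fails with ER_BAD_TABLE_ERROR for t2
-- only "DROP TABLE t1" is written to the binary log, so the
-- slave must not expect any error code for the statement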
This commits replaces the call of the function setup_tables() with
a call of the function setup_tables_and_check_access() in the method
Multiupdate_prelocking_strategy::handle_end().
There is no known bug that would require this change. However the change
aligns this piece of code with the code existed before the patch for
MDEV-24823.
Remove code duplication in Lex_input_stream::scan_ident_middle(), and
make sure identifiers always use the same code path whether they start
from an underscore or not.
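For example, both identifiers below must be lexed through the same path;
only a genuine character set introducer is special (a sketch, names
illustrative):
CREATE TABLE t (_id INT, id INT);
SELECT _id, id FROM t;    -- _id must behave like any other identifier
SELECT _latin1'abc';      -- _latin1 here is an introducer, not an identifier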
The problem is that sharing the default expression among the SET
instructions leads to an attempt to access the result field of a function
created in another instruction's runtime MEM_ROOT that has already been
freed (a bit different from the MySQL problem).
The fix is the same as in MySQL (but without the optimisation for
constants): turn
DECLARE a, b, c type DEFAULT expr;
to
DECLARE a type DEFAULT expr, b type DEFAULT a, c type DEFAULT a;
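A sketch of a procedure shape that could hit this, assuming a stored
function f() (all names are illustrative):
CREATE FUNCTION f() RETURNS INT RETURN 42;
DELIMITER $$
CREATE PROCEDURE p()
BEGIN
  DECLARE a, b, c INT DEFAULT f();  -- now treated internally as
                                    -- a DEFAULT f(), b DEFAULT a, c DEFAULT a
  SELECT a, b, c;
END$$
DELIMITER ;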
This patch changes statement rollback for streaming replication.
Previously, a statement rollback was turned into full transaction
rollback in the case where the transaction had already replicated a
fragment. This was introduced in the initial implementation of
streaming replication due to the fact that we do not have a mechanism
to perform a statement rollback on the applying side.
This policy is however overly pessimistic, causing full rollbacks even
in cases where a local statement rollback would not require a statement
rollback on the applying side. This happens to be the case when the
statement itself has not replicated any fragments.
So the patch changes the condition that determines if a statement
rollback should be turned into a full rollback accordingly.
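A hypothetical scenario illustrating the difference (names are
illustrative):
CREATE TABLE t1 (a INT PRIMARY KEY);
SET SESSION wsrep_trx_fragment_unit = 'ROWS';
SET SESSION wsrep_trx_fragment_size = 1;
START TRANSACTION;
INSERT INTO t1 VALUES (1);  -- replicates a fragment
INSERT INTO t1 VALUES (1);  -- duplicate key: statement rollback; previously
                            -- this rolled back the whole transaction, now it
                            -- does so only if the failing statement itself
                            -- replicated a fragment
COMMIT;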
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Removed redundant code for BF aborting transactions in `thr_lock.cc`.
TOI operations will ignore the provided lock_wait_timeout and use
`LONG_TIMEOUT` until the operation is finished.
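For instance (a sketch; the table name is illustrative):
SET SESSION lock_wait_timeout = 5;
-- executed in TOI, so the 5-second timeout is ignored and
-- LONG_TIMEOUT is used until the DDL finishes:
ALTER TABLE t1 ADD COLUMN b INT;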
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Plugin variables in SET only locked the plugin till the end of the
statement. If a SET with a plugin variable was prepared, it was possible
to uninstall the plugin before EXECUTE. Then EXECUTE would crash,
trying to resolve a now-invalid pointer to a disappeared variable.
Fix: keep plugins locked until the prepared statement is closed.
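A sketch of the crashing sequence, using the EXAMPLE plugin's variable
naming as an assumption (adjust to your build):
INSTALL SONAME 'ha_example';
PREPARE stmt FROM 'SET GLOBAL example_ulong_var = 100';
UNINSTALL SONAME 'ha_example';  -- succeeded before the fix
EXECUTE stmt;                   -- ... and this crashed; now the plugin
                                -- stays locked until DEALLOCATE PREPARE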
InnoDB fails to fetch the index type when the InnoDB dictionary doesn't
match the frm. InnoDB should return 'corrupted' if it can't find the
index in ha_innobase::index_type().
table->move_fields wasn't undone in case of error.
1. move_fields is now unconditionally undone even when an error occurs
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
So we have a race condition between three threads, resulting in an
unexpected deadlock backoff in purge.
More precisely, the following happens:
T1: NOCOPY ALTER TABLE begins, and eventually it holds an
MDL_SHARED_NO_WRITE lock;
T2: FLUSH TABLES begins. It sets share->tdc->flushed = true;
T3: purge on a record with a virtual column begins. It is going to open a
table, so an MDL_SHARED_READ lock is acquired.
Since share->tdc->flushed is set, it waits for the TDC purge to end.
T1: is going to upgrade its MDL lock to exclusive and therefore has to
make the other waiters back off.
T3: receives VICTIM status, reports a DEADLOCK, and sets
OT_BACKOFF_AND_RETRY as Open_table_context::m_action.
My fix is to allow opening the table in purge while flushing. It is
already done the same way in other maintenance facilities like REPAIR
TABLE. Another way would be to make an actual backoff, but
Open_table_context does not allow distinguishing it from other failure
types, which still seem to be unexpected. Doing that would require
hacking into the Open_table_context interface for no benefit, compared
to passing MYSQL_OPEN_IGNORE_FLUSH during table open.
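The two scriptable sessions, as a sketch (T3 is the background purge
thread and cannot be scripted directly; names are illustrative):
CREATE TABLE t1 (a INT, v INT AS (a) VIRTUAL) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1);
DELETE FROM t1;          -- leaves work for purge (T3)
-- T1:
ALTER TABLE t1 ADD COLUMN b INT, ALGORITHM=NOCOPY;
-- T2, concurrently:
FLUSH TABLES;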
Item_func_history (is_history()) is a bool function that checks whether
the row is a history row by checking row_end->is_max(). The argument to
this function must be the row_end system field.
Added the above function in conjunction with the SYSTEM_TIME_BEFORE
versioning condition.
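A sketch of a statement that exercises the SYSTEM_TIME_BEFORE condition
(timestamp and names are illustrative):
CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING;
INSERT INTO t1 VALUES (1);
UPDATE t1 SET a = 2;   -- creates one history row
-- only rows with row_end < MAX (i.e. is_history() is true) may match:
DELETE HISTORY FROM t1 BEFORE SYSTEM_TIME '2038-01-01';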
This is a backport of
commit fd9ca2a742 (MDEV-23295) and
commit 9a156e1a23 (MDEV-23345) to 10.3.
An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
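A sketch of the bug scenario described above (row formats are
illustrative):
SET GLOBAL innodb_default_row_format = redundant;
CREATE TABLE t1 (a INT) ENGINE=InnoDB;
SET GLOBAL innodb_default_row_format = dynamic;
-- the dummy table object must keep the table's original ROW_FORMAT,
-- not pick up the changed default:
ALTER TABLE t1 ADD COLUMN b INT, ALGORITHM=INSTANT;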
Back port upstream fix
commit 1800b015a1d487330f7b15f2020b887be348a66b
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date: Fri Sep 8 20:29:22 2017 +0530
Bug#26027024 SLAVE_COMPRESSED_PROTOCOL DOESN'T WORK WITH
SEMI-SYNC REPLICATION IN MYSQL-5.7
Analysis: In mysql-5.6, dump thread (the thread that is created
on Master after Slave requested for a binlog dump) is also used
to receive acknowledgements from the Slave and act on them accordingly.
For performance reasons, a special thread called Ack Receiver thread
is added in mysql-5.7 Semi synchronous replication plugin.
This thread does not have special handling to receive acknowledgements
if Slave has enabled compression in the protocol. Hence Master is
unable to handle any slave if Slave_compressed_protocol is enabled
on it.
Fix: Enable compress flag on the communication channels if the Slave
has Slave_compressed_protocol ON.
(trivial backport to 10.2)
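A sketch of the combination that used to fail (Slave_compressed_protocol
together with semi-sync replication); plugin and variable names assume
the semi-sync plugins of that era and may differ per build:
-- master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master';
SET GLOBAL rpl_semi_sync_master_enabled = ON;
-- slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave';
SET GLOBAL rpl_semi_sync_slave_enabled = ON;
SET GLOBAL slave_compressed_protocol = ON;  -- previously broke ack handling
START SLAVE;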
The optimizer removes redundant GROUP BY operations. If a GROUP BY element
is a subselect, it is "eliminated".
However one must not eliminate the item if it is used both in the select
list and in the GROUP BY, like so:
select (select ... ) as SUBQ from ... group by SUBQ
Do not eliminate such items.
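A concrete instance of the query shape above (tables are illustrative):
SELECT (SELECT MAX(t2.b) FROM t2 WHERE t2.a = t1.a) AS SUBQ
FROM t1
GROUP BY SUBQ;
-- the redundant GROUP BY may be removed, but the SUBQ item itself
-- must survive, since the select list still uses it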
Before the FRM is written, walk vcol expressions through
check_table_name_processor() and check whether field items match the
(db, table_name) qualifier.
We cannot do this in check_vcol_func_processor() because expressions of
a written and loaded FRM no longer contain table name qualifiers.
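A sketch of what the check rejects (names are illustrative):
CREATE TABLE t1 (a INT);
-- the qualifier must refer to the table being created; a foreign
-- qualifier like t1.a below must be rejected before the FRM is written:
CREATE TABLE t2 (a INT, b INT GENERATED ALWAYS AS (t1.a) VIRTUAL);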
Before this patch mergeable derived tables / views used in a multi-table
update / delete were merged before the preparation stage.
When the merge of a derived table / view is performed, the ON expression
attached to it is fixed and ANDed with the WHERE condition of the select S
containing this derived table / view. This happens after the specification of
the derived table / view has been merged into S. If the ON expression refers
to a non-existing field, an error is reported and some other mergeable derived
tables / views remain unmerged. This is not a problem if the multi-table
update / delete statement is standalone. Yet if it is used in a stored
procedure, the select with incompletely merged derived tables / views may
cause a problem on the second call of the procedure. This does not happen
for select queries using derived tables / views, because in that case their
specifications are merged after the preparation stage, at which all ON
expressions are fixed.
This patch makes sure that merging of the derived tables / views used in a
multi-table update / delete statement is performed after the preparation
stage.
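A sketch of the affected statement shape (names are illustrative):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE VIEW v1 AS SELECT a FROM t2;
CREATE PROCEDURE p()
  UPDATE t1, v1 SET t1.a = v1.a + 1 WHERE t1.a = v1.a;
CALL p();
CALL p();   -- before the fix the second call could misbehave if an earlier
            -- error had left some derived tables / views unmerged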
Approved by Oleksandr Byelkin <sanja@mariadb.com>