This is done by mapping most of the existing MySQL Unicode 0900 collations
to MariaDB 1400 Unicode collations. The assumption is that 1400 is a
superset of 0900 for all practical purposes.
I also added a new function 'compare_collations()' and changed most code
to use this instead of comparing character sets directly.
This enables one to seamlessly mix and match the corresponding 0900 and
1400 sets. Field comparison and ALTER TABLE treat the character sets
as identical.
All MySQL 8.0 0900 collations are supported except:
- utf8mb4_ja_0900_as_cs
- utf8mb4_ja_0900_as_cs_ks
- utf8mb4_ru_0900_as_cs
- utf8mb4_zh_0900_as_cs
These do not have corresponding entries in the MariaDB 1400 collations.
Other things:
- Added a COMMENT column to information_schema.collations. For utf8mb4_0900
collations it contains the corresponding alias collation; see the example below.
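For example, the alias mapping can be inspected through the new column
(a sketch; the exact names in the output depend on the server version):

    SELECT COLLATION_NAME, COMMENT
    FROM information_schema.COLLATIONS
    WHERE COLLATION_NAME LIKE 'utf8mb4%0900%';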
Handle null bits for record comparison in row events the same way as in
handler::calculate_checksum(), forcing bits that can be undefined to 1.
These bits are the trailing unused bits, as well as the first bit for
tables not using HA_OPTION_PACK_RECORD.
The CSV storage engine leaves these bits at 0, while row-based
replication has them set to 1, which would otherwise cause a "can't find
record" error.
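A sketch of the kind of setup that could hit the error (hypothetical, not
the original test case; assumes row-based replication with the table using
ENGINE=CSV on the replica):

    -- On the primary, with binlog_format=ROW:
    CREATE TABLE t1 (a INT NOT NULL, b VARCHAR(10) NOT NULL) ENGINE=CSV;
    INSERT INTO t1 VALUES (1, 'x');
    -- Before the fix, applying the row event for this update on the CSV
    -- replica could fail with a "can't find record" error:
    UPDATE t1 SET b = 'y' WHERE a = 1;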
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Check the user privileges and fail the command, even if there are no slaves
that need starting or stopping, respectively.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Slave replication should accept unsupported table options (e.g.
"transactional" for MyISAM), as such options can end up being set from the
master in a binlogged CREATE TABLE.
This was already handled in report_unknown_option(), which skips the error
in slave threads. But mysql_prepare_create_table_finalize() still issued a
warning, and this warning gets converted into an error when
STRICT_(ALL|TRANS)_TABLES is enabled. So skip this warning for replication
as well.
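For illustration, a binlogged statement of this shape (hypothetical) must
not fail on the slave under STRICT_ALL_TABLES or STRICT_TRANS_TABLES:

    -- TRANSACTIONAL is not supported by MyISAM; the slave applier should
    -- keep this as a warning rather than an error:
    CREATE TABLE t1 (a INT) ENGINE=MyISAM TRANSACTIONAL=1;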
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The assertion occurred in the SQL thread if an event group was incompletely
written, missing the end XID or COMMIT event, and immediately followed by a
new event group. This could also lead to the incomplete event group being
committed, and with the wrong GTID.
Fix by rolling back any active transaction from a prior event group when
applying the following GTID event.
Getting an incomplete event group like this is somewhat rare. If the
server crashes in the middle of writing an event group, the server restart
will write a new format description event, which makes the slave roll back
the partial event group. But presumably it could happen if the master
experiences temporary write errors in the binlog, such as intermittent
disk-full conditions.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
FLUSH TABLES WITH READ LOCK briefly set the state (in PROCESSLIST) to
"Waiting while replication worker thread pool is busy", even if there was
nothing to wait for. This is somewhat confusing on a server that might not
even have any replication configured, let alone replication workers.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Disallow changing @@gtid_domain_id while a temporary table is open in
STATEMENT or MIXED binlog mode. Otherwise, a slave may try to replicate
events referring to the same temporary table in parallel, using domain-based
out-of-order parallel replication. This is not valid; temporary tables are
only available for use within a single thread at a time.
One concrete consequence seen from this bug was a ROLLBACK on an
InnoDB temporary table running in one domain in parallel with DROP
TEMPORARY TABLE in another domain, causing an assertion inside InnoDB:
InnoDB: Failing assertion: table->get_ref_count() == 0 in
dict_sys_t::remove.
Use an existing error code that is reasonably close to the real issue
(ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_GTID_DOMAIN_ID_SEQ_NO), so as not to
add a new error code in a GA release. When this is merged to the next GA
release, we could optionally introduce a new and more precise error code for
an attempt to change the domain_id while temporary tables are open.
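A hypothetical session illustrating the new restriction (with STATEMENT or
MIXED binlog format):

    CREATE TEMPORARY TABLE tmp (a INT) ENGINE=InnoDB;
    -- Rejected with ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_GTID_DOMAIN_ID_SEQ_NO
    -- while the temporary table is open:
    SET SESSION gtid_domain_id = 42;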
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The Write_rows_log_event originally allocated the m_rows_buf up-front, and
thus is_valid() checks that the buffer is allocated correctly. But at some
point this was changed to allocate the buffer lazily on demand. This means
that a valid event can now have m_rows_buf==NULL. The is_valid() code was
not changed, and thus is_valid() could return false on a valid event.
This caused a bug for REPLACE INTO t() VALUES(), (), which generates a
Write_rows event with no after image; m_rows_buf was then never
allocated and is_valid() incorrectly returned false, causing an error in
other parts of the code.
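A sketch of the statement from the report (the table definition here is a
guess; as described above, this shape can generate a Write_rows event with
no after image):

    CREATE TABLE t (a INT DEFAULT NULL);
    REPLACE INTO t() VALUES(), ();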
Also fix a couple of missing special cases in the code for mysqlbinlog to
correctly decode (in comments) row events with missing after image.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The partitioning error handling code was looking at
thd->lex->alter_info.partition_flags in non-ALTER TABLE cases, in which case
the value is stale and contains whatever was set by any earlier ALTER TABLE.
This could cause the wrong error code to be generated, which in some cases
can then cause replication to break with a "different errorcode" error.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Limit SHOW BINLOG/RELAYLOG EVENTS in show_rpl_debug_info.inc to 200 lines.
Reviewed-by: Daniel Black <daniel@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
A fractional part < 100000 microseconds was printed without leading zeros,
causing such timestamps to be applied incorrectly in mariadb-binlog | mysql.
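An illustration with hypothetical values: a timestamp with a 42-microsecond
fractional part must be written out with leading zeros,

    SET TIMESTAMP=1700000000.000042;

and not as the much larger fraction that the unpadded form would denote:

    SET TIMESTAMP=1700000000.42;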
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
MDEV-32329 (patch) pushdown from having into where: Server crashes at sub_select
When generating an Item_equal with an Item_ref that refers to a field
outside of a subselect, remove_item_direct_ref() causes the dependency
(depended_from) on the outer select to be lost, which causes trouble
for code downstream that can no longer determine the scope of the Item.
Not calling remove_item_direct_ref() retains the Item's dependency.
Test cases from MDEV-32395 and MDEV-32329 are included.
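For context, the pushdown rewrites HAVING conditions over grouping columns
into the WHERE clause. A generic illustration of the optimization involved
(not the MDEV test case):

    SELECT a, COUNT(*) FROM t1 GROUP BY a HAVING a > 10;
    -- can be executed by the optimizer as:
    SELECT a, COUNT(*) FROM t1 WHERE a > 10 GROUP BY a;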
Some fixes from other developers:
Monty:
- Fixed wrong code in Item_equal::create_pushable_equalities()
that could cause the wrong item to be used if there were no matching items.
Daniel Black:
- Added test cases from MDEV-32329
Igor Babaev:
- Provided the fix that removes the call to remove_item_direct_ref() in
eliminate_item_equal()
MDEV-32395: update_depend_map_for_order: SEGV at /mariadb-11.3.0/sql/sql_select.cc:16583
Include test cases from MDEV-32329.
The parallel slave failed to retry in retry_event_group() with the error
"WSREP: Parallel slave worker failed at wsrep_before_command() hook".
Fix wsrep transaction cleanup/restart in retry_event_group() to properly
clean up the previous transaction by calling wsrep_after_statement().
Also move the call that resets the error to after the call to
wsrep_after_statement(), to make sure that it remains effective.
Add an MTR test, galera_as_slave_parallel_retry, to reproduce the error
when the fix is not present.
Other issues that were detected when testing with sysbench:
Check if the parallel slave is killed for retry before waiting for prior
commits in THD::wsrep_parallel_slave_wait_for_prior_commit(). This
is required with slave-parallel-mode=optimistic to avoid a deadlock
when a slave later in commit order manages to reach the prepare phase
before a lock conflict is detected.
Suppress a wsrep-applier-specific warning for slave threads.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Test failed sporadically when --ps-protocol was enabled:
a transaction that was BF aborted on COMMIT would succeed
instead of reporting the expected deadlock error.
The reason for the failure was that, depending on timing,
the transaction was BF aborted while the COMMIT statement
was being prepared through a COM_STMT_PREPARE command.
In the failing cases, the transaction was BF aborted
after COM_STMT_PREPARE had already disabled the diagnostics
area of the client. The attempt to override the deadlock error
towards the end of dispatch_command() would then be skipped,
resulting in a successful COMMIT even though the transaction
was aborted.
This bug affected the following MTR tests:
- galera_insert_multi
- galera_nopk_unicode
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
I could not reproduce the reported assertion. However, I could
reproduce the test failure, caused by a missing wait_condition and an
error in the test case. Added the missing wait_condition and fixed the
error in the test case to make the test stable.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
For some reason InnoDB was disabled for the --wsrep-recover call.
Added --loose-innodb to the command line to make sure InnoDB is
enabled.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
When binding to NULL, DEFAULT or IGNORE from an Item value, Item_param did not
change m_type_handler, so its value remained from the previous bind.
This led to DBUG_ASSERTs in Item_param::get_date() and
Timestamp_or_zero_datetime_native_null.
Fix:
Set Item_param::m_type_handler to &type_handler_null when
binding from an Item returning NULL.
This patch also fixes MDEV-35427.
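A minimal sketch of the kind of statement sequence affected (hypothetical,
not the original test case):

    PREPARE stmt FROM 'SELECT CAST(? AS DATETIME)';
    SET @d = '2001-01-01 10:20:30';
    EXECUTE stmt USING @d;    -- binds a non-NULL value, setting the type handler
    EXECUTE stmt USING NULL;  -- re-binding to NULL must reset m_type_handler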
row_purge_remove_sec_if_poss_leaf(): If there is an active transaction
that is not newer than PAGE_MAX_TRX_ID, return the bogus value 1
so that row_purge_remove_sec_if_poss_tree() is guaranteed to recheck if
the record needs to be purged. It could be the case that an active
transaction would insert this record between the time this check
completed and row_purge_remove_sec_if_poss_tree() acquired a latch
on the secondary index leaf page again.
row_purge_del_mark_error(), row_purge_check(): Some unlikely code
refactored into separate non-inline functions.
trx_sys_t::find_same_or_older_low(): Move the unlikely and bulky
part of trx_sys_t::find_same_or_older() to a non-inline function.
trx_sys_t::find_same_or_older_in_purge(): A variant of
trx_sys_t::find_same_or_older() for use in the purge subsystem,
with potential concurrent access of the same trx_t object from
multiple threads.
trx_t::max_inactive_id_atomic: An Atomic_relaxed alias of the
regular data field trx_t::max_inactive_id, which we
use on systems that have native 64-bit loads or stores.
On any 64-bit system that seems to be supported by GCC, Clang or MSVC,
relaxed atomic loads and stores use the regular load and store
instructions. On -march=i686 the 64-bit atomic loads and stores
would use an XMM register.
This fixes a regression that had been introduced in
commit b7b9f3ce82 (MDEV-34515).
There would be messages
[ERROR] InnoDB: tried to purge non-delete-marked record in index
in the server error log, and an assertion ut_ad(0) would cause a
crash of debug instrumented builds. This could also cause incorrect
results for MVCC reads and corrupted secondary indexes.
The debug instrumented test case was written by Debarun Banerjee.
Reviewed by: Debarun Banerjee
Make galera_sr.GCF-572 behave the same with and without option
innodb-snapshot-isolation. It is sufficient to remove a SELECT
statement from a transaction to delay the creation of the read
view in InnoDB, avoiding the detection of a write-write conflict
under innodb-snapshot-isolation.
Ignore snapshot isolation conflicts during fragment removal, before
the streaming transaction commits. This happens when a streaming
transaction creates a read view that precedes the insertion of
fragments into the streaming_log table. Fragments are inserted
using a different transaction. These fragments are then removed
as part of the COMMIT of the streaming transaction. This fragment
removal operation could fail when the fragments were not part of
the transaction's read view, thus violating snapshot isolation.
Item_char_typecast::val_str_generic() uses Item::str_value as a buffer.
Item::val_str_ascii() also used Item::str_value as a buffer.
As a result, str_value tried to copy to itself.
Fixing val_str_ascii() to use a local buffer instead of str_value.
Make sure that node_1 remains in the primary view by increasing its
weight. Add suppressions for the expected warnings because we kill
node_2.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Innochecksum misinterprets freed pages as active ones.
This leads the user to think that more valid pages exist
than actually do.
- To avoid this confusion, innochecksum introduces a new
option, --skip-freed-pages (-r), to skip freed pages while
dumping or printing the summary of the tablespace.
- Innochecksum can safely assume a page is freed if
the respective extent does not belong to a segment and is marked as
freed in the XDES_BITMAP of the extent descriptor page.
- Innochecksum should not assume that a zero-filled page is an
extent descriptor page.
Reviewed-by: Marko Mäkelä
Item_{date|datetime}_typecast::get_date() erroneously passed the
TIME_INTERVAL_DAY flag from the caller to args[0], which made
CAST('100000:00:00' AS DATETIME) parse '100000:00:00' as TIME
rather than DATETIME.
Suppressing this flag.
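For reference, the affected statement from the report (before the fix the
string could be parsed as a TIME value; the exact result depends on the
SQL mode):

    SELECT CAST('100000:00:00' AS DATETIME);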
MTR test rpl_semi_sync_no_missed_ack_after_add_slave fails on
buildbot after the preparatory commit for MDEV-35109 (5290fa043b)
which changed a sleep to a debug_sync point. The problem is that the
debug_sync point would time-out on a slave while waiting to enter
the logic to send an ACK reply. More specifically, where the test
config is a primary with two replicas, and the test waits on one of
the replicas to start sending an ACK, if the other replica was able
to receive the event and respond with an ACK before the binlog dump
thread of the timing-out server prepared to send the event, that dump
thread wouldn't set the SEMI_SYNC_NEED_ACK flag, and the replica wouldn't
even try to respond with an ACK.
The fix is to use debug_sync for both replicas, such that both are
held before sending their ACK, so one can't temporarily disable
semi-sync for the other before it receives the transaction.
The cursor protocol cannot handle SELECT ... INTO.
Disable it for the loaddata test.
For the grant_plugin and innodb_fts.fulltext tests, changed
them to use a temporary table rather than a
user variable.
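A hypothetical sketch of the rewrite pattern (not the actual test diff):

    -- before: SELECT ... INTO a user variable, which the cursor protocol
    -- cannot handle
    SELECT COUNT(*) INTO @cnt FROM t1;
    -- after: materialize the value in a temporary table instead
    CREATE TEMPORARY TABLE tmp_cnt AS SELECT COUNT(*) AS cnt FROM t1;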