This is done by mapping most of the existing MySQL unicode 0900 collations
to MariaDB 1400 unicode collations. The assumption is that 1400 is a
superset of 0900 for all practical purposes.
I also added a new function 'compare_collations()' and changed most code
to use this instead of comparing character sets directly.
This enables one to seamlessly mix-and-match the corresponding 0900 and
1400 sets. Field comparison and ALTER TABLE treat the character sets
as identical.
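As a rough illustration of the intent (not the actual CHARSET_INFO code; the
struct and its alias field are assumptions made for this sketch), the
comparison treats a 0900 collation and its 1400 alias as the same collation:

  /* Illustrative sketch only: real collations are CHARSET_INFO structures;
     the explicit alias link is an assumption used to show the idea. */
  struct Collation
  {
    unsigned id;              /* collation number */
    const Collation *alias;   /* e.g. a 0900 collation pointing at its 1400 alias */
  };

  /* Collations compare as equal if they are the same collation or if one
     is the registered alias of the other, so that 0900 and 1400 variants
     can be mixed without "different collations" errors. */
  static bool compare_collations(const Collation *a, const Collation *b)
  {
    return a == b || a->alias == b || b->alias == a;
  }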
All MySQL 8.0 0900 collations are supported except:
- utf8mb4_ja_0900_as_cs
- utf8mb4_ja_0900_as_cs_ks
- utf8mb4_ru_0900_as_cs
- utf8mb4_zh_0900_as_cs
These do not have corresponding entries in the MariaDB 1400 collations.
Other things:
- Added COMMENT column to information_schema.collations. For utf8mb4_0900
collations it contains the corresponding alias collation.
Handle null bits for record comparison in row events the same way as in
handler::calculate_checksum(), forcing bits that can be undefined to 1.
These bits are the trailing unused bits, as well as the first bit for
tables not using HA_OPTION_PACK_RECORD.
The csv storage engine leaves these bits at 0, while the row-based
replication has them set to 1, which would otherwise cause a "can't find record" error.
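A minimal sketch of the normalization (helper and parameter names are
assumptions; the real logic lives in the row-event comparison path and in
handler::calculate_checksum()):

  #include <stddef.h>

  /* Sketch: force bits whose value a storage engine may leave undefined to 1
     before comparing records.  The trailing unused null bits are the high
     bits of the last null byte; the first bit is undefined for tables that
     do not use HA_OPTION_PACK_RECORD. */
  static void normalize_null_bits(unsigned char *rec, size_t null_bytes,
                                  unsigned null_bits_used, bool pack_record)
  {
    if (!pack_record)
      rec[0]|= 1;                      /* undefined first bit, force to 1 */
    unsigned trailing= (unsigned) (8 * null_bytes) - null_bits_used;
    if (trailing)
      rec[null_bytes - 1]|= (unsigned char) (0xFF << (8 - trailing));
  }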
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Check the user privileges and fail the command even if there are no slaves
that need to be started or stopped, respectively.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Slave replication should accept unsupported table options (e.g.
"transactional" for MyISAM), as such options can end up being set from the
master in binlogged CREATE TABLE.
This was already handled in report_unknown_option(), which skips the error
in slave threads. But in mysql_prepare_create_table_finalize() there was still
a warning given, and this warning gets converted into an error when
STRICT_(ALL|TRANS)_TABLES is enabled. So skip this warning for replication as well.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The assertion occurred in the SQL thread if an event group was incompletely
written, missing the end XID or COMMIT event, and immediately followed by a
new event group. This could also lead to the incomplete event group being
committed, and with the wrong GTID.
Fix by rolling back any active transaction from a prior event group when
applying the following GTID event.
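A sketch of the shape of the fix, using a stand-in type instead of the real
THD / replication group info classes:

  /* Stand-in type for illustration only. */
  struct Applier_state
  {
    bool transaction_active;   /* left open by an incomplete event group? */
    void rollback_transaction() { transaction_active= false; }
  };

  /* When a new GTID event arrives, roll back any transaction still open
     from the previous event group, so an incompletely written group is
     neither committed nor tagged with the new GTID. */
  void begin_event_group(Applier_state *state)
  {
    if (state->transaction_active)
      state->rollback_transaction();
    /* ... normal handling of the new GTID event follows ... */
  }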
Getting an incomplete event group like this is somewhat rare. If the
server crashes in the middle of writing an event group, the server restart
will write a new format description event, which makes the slave roll back
the partial event group. But presumably it could happen if the master
experiences temporary write errors in the binlog, such as intermittent
disk full.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The FLUSH TABLE WITH READ LOCK briefly set the state (in PROCESSLIST) to
"Waiting while replication worker thread pool is busy", even if there was
nothing to wait for. This is somewhat confusing on a server that might not
even have any replication configured, let alone replication workers.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
If both do_gco_wait() and do_ftwrl_wait() had to wait, the state was not restored correctly.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Disallow changing @@gtid_domain_id while a temporary table is open in
STATEMENT or MIXED binlog mode. Otherwise, a slave may try to replicate
events referring to the same temporary table in parallel, using domain-based
out-of-order parallel replication. This is not valid; temporary tables are
only available for use within a single thread at a time.
One concrete consequence seen from this bug was a ROLLBACK on an
InnoDB temporary table running in one domain in parallel with DROP
TEMPORARY TABLE in another domain, causing an assertion inside InnoDB:
InnoDB: Failing assertion: table->get_ref_count() == 0 in
dict_sys_t::remove.
Use an existing error code that's somewhat close to the real issue
(ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_GTID_DOMAIN_ID_SEQ_NO), to not add a
new error code in a GA release. When this is merged to the next GA release,
we could optionally introduce a new and more precise error code for an
attempt to change the domain_id while temporary tables are open.
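A sketch of the new restriction with stand-in parameters (the real check sits
where SET @@gtid_domain_id is validated):

  /* Sketch: refuse to change the GTID domain while temporary tables are
     open and the binlog format is STATEMENT or MIXED, since the temporary
     table's events could otherwise be spread over two domains and applied
     in parallel on a slave. */
  bool gtid_domain_change_allowed(bool has_temporary_tables,
                                  bool binlog_format_is_row)
  {
    if (has_temporary_tables && !binlog_format_is_row)
      return false;  /* caller raises
                        ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_GTID_DOMAIN_ID_SEQ_NO */
    return true;
  }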
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The Write_rows_log_event originally allocated the m_rows_buf up-front, and
thus is_valid() checks that the buffer is allocated correctly. But at some
point this was changed to allocate the buffer lazily on demand. This means
that a valid event can now have m_rows_buf==NULL. The is_valid() code was
not changed, and thus is_valid() could return false on a valid event.
This caused a bug for REPLACE INTO t() VALUES(), (), which generates a
write_rows event with no after image; then the m_rows_buf was never
allocated and is_valid() incorrectly returned false, causing an error in
some other parts of the code.
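A sketch of how the validity check can account for the lazy allocation
(member names follow the description above; the exact condition in the real
event class may differ):

  /* Sketch: with lazy allocation, m_rows_buf == NULL simply means "no row
     data added yet" (as for REPLACE INTO t() VALUES (), ()) and must not
     make the event invalid. */
  class Rows_event_sketch
  {
    unsigned char *m_rows_buf= nullptr;  /* allocated on demand */
    unsigned char *m_rows_cur= nullptr;  /* current write position */
  public:
    bool is_valid() const
    {
      /* Either nothing was ever written, or the buffer exists and the
         write position lies within it. */
      return m_rows_buf ? m_rows_cur >= m_rows_buf : m_rows_cur == nullptr;
    }
  };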
Also fix a couple of missing special cases in the code for mysqlbinlog to
correctly decode (in comments) row events with missing after image.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The partitioning error handling code was looking at
thd->lex->alter_info.partition_flags in non-alter-table cases, in which case
the value is stale and contains whatever was set by any earlier ALTER TABLE.
This could cause the wrong error code to be generated, which then in some cases
can cause replication to break with a "different errorcode" error.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Limit SHOW BINLOG/RELAYLOG EVENTS in show_rpl_debug_info.inc to 200 lines.
Reviewed-by: Daniel Black <daniel@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
A fractional second part < 100000 microseconds was printed without leading
zeros, causing such timestamps to be applied incorrectly in mariadb-binlog | mysql.
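A tiny sketch of the formatting rule (function and variable names are
illustrative):

  #include <stdio.h>

  /* Sketch: the microsecond part must be a zero-padded 6-digit field.
     With "%lu.%lu", a timestamp of 1.050000 seconds prints as "1.50000"
     and is re-parsed as 1.5 seconds when piped into mysql. */
  void print_timestamp(unsigned long sec, unsigned long usec)
  {
    printf("SET TIMESTAMP=%lu.%06lu;\n", sec, usec);
  }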
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
With Clang on RISC-V, __builtin_readcyclecounter produces the rdcycle
instruction. Since Linux kernel 6.6 this is a privileged instruction not
available to userspace programs.
The use of __builtin_readcyclecounter is therefore excluded on RISC-V,
falling back to the rdtime/rdtimeh instructions provided in MDEV-33435.
Thanks to Alexander Richardson for noting that this should be Linux-only
in the code, since FreeBSD on RISC-V permits rdcycle.
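A sketch of the resulting guard (the function name and structure are
illustrative; the rdtime fallback follows the MDEV-33435 approach, and RV32
would additionally need rdtimeh for the upper half):

  /* Sketch: on Linux/RISC-V avoid __builtin_readcyclecounter (rdcycle),
     which traps in userspace since kernel 6.6; use the unprivileged rdtime
     counter instead.  FreeBSD on RISC-V still permits rdcycle. */
  static inline unsigned long long my_cycle_timer(void)
  {
  #if defined(__riscv) && defined(__linux__)
    unsigned long long result;
    __asm__ __volatile__("rdtime %0" : "=r"(result));
    return result;
  #else
    return __builtin_readcyclecounter();  /* Clang builtin */
  #endif
  }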
Author: BINSZ on JIRA
MDEV-32329 (patch) pushdown from having into where: Server crashes at sub_select
When generating an Item_equal with an Item_ref that refers to a field
outside of a subselect, remove_item_direct_ref() causes the dependency
(depended_from) on the outer select to be lost, which causes trouble
for code downstream that can no longer determine the scope of the Item.
Not calling remove_item_direct_ref() retains the Item's dependency.
Test cases from MDEV-32395 and MDEV-32329 are included.
Some fixes from other developers:
Monty:
- Fixed wrong code in Item_equal::create_pushable_equalities()
that could cause the wrong item to be used if there were no matching items.
Daniel Black:
- Added test cases from MDEV-32329
Igor Babaev:
- Provided fix for removing call to remove_item_direct_ref() in
eliminate_item_equal()
MDEV-32395: update_depend_map_for_order: SEGV at /mariadb-11.3.0/sql/sql_select.cc:16583
Include test cases from MDEV-32329.
gcc 7.5.0 does not understand __attribute__((no_sanitize("undefined"))).
I moved the usage of this attribute from sql/set_var.h to
include/my_attribute.h and created a macro for it depending on
the compiler used.
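A sketch of what such a macro can look like (the macro name is an
illustrative placeholder, not necessarily the one added to
include/my_attribute.h):

  /* Sketch: gcc 7.5.0 rejects __attribute__((no_sanitize("undefined"))),
     so only expand the attribute for compilers known to accept it. */
  #if defined(__clang__) || (defined(__GNUC__) && __GNUC__ >= 8)
  # define ATTRIBUTE_NO_UBSAN __attribute__((no_sanitize("undefined")))
  #else
  # define ATTRIBUTE_NO_UBSAN
  #endif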
Parallel slave failed to retry in retry_event_group() with error
WSREP: Parallel slave worker failed at wsrep_before_command() hook
Fix wsrep transaction cleanup/restart in retry_event_group() to properly
clean up the previous transaction by calling wsrep_after_statement().
Also move the call that resets the error to after the call to
wsrep_after_statement(), to make sure that it remains effective.
Add an MTR test galera_as_slave_parallel_retry to reproduce the error
when the fix is not present.
Other issues which were detected when testing with sysbench:
Check if parallel slave is killed for retry before waiting for prior
commits in THD::wsrep_parallel_slave_wait_for_prior_commit(). This
is required with slave-parallel-mode=optimistic to avoid deadlock
when a slave later in commit order manages to reach prepare phase
before a lock conflict is detected.
Suppress the wsrep-applier-specific warning for slave threads.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Fix the regular expression that determines which statements
can use the Prepared Statement API, when --ps-protocol is
used. The current regular expression allows COMMIT only if
it is followed by whitespace. This means that the statement "COMMIT ;"
is allowed to run with prepared statements, while "COMMIT;" is not.
Fix the filter so that both are allowed.
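A self-contained illustration of the before/after behavior using std::regex
(the actual mysqltest filter uses its own pattern syntax; these patterns are
illustrative):

  #include <cassert>
  #include <regex>

  int main()
  {
    /* Old behavior: the keyword had to be followed by whitespace. */
    const std::regex old_filter("^\\s*COMMIT\\s", std::regex::icase);
    /* Fixed behavior: whitespace, a semicolon or end of statement all match. */
    const std::regex new_filter("^\\s*COMMIT(\\s|;|$)", std::regex::icase);

    assert( std::regex_search("COMMIT ;", old_filter));
    assert(!std::regex_search("COMMIT;",  old_filter));  /* was excluded */
    assert( std::regex_search("COMMIT ;", new_filter));
    assert( std::regex_search("COMMIT;",  new_filter));  /* now included */
    return 0;
  }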
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Test failed sporadically when --ps-protocol was enabled:
a transaction that was BF aborted on COMMIT would succeed
instead of reporting the expected deadlock error.
The reason for the failure was that, depending on timing,
the transaction was BF aborted while the COMMIT statement
was being prepared through a COM_STMT_PREPARE command.
In the failing cases, the transaction was BF aborted
after COM_STMT_PREPARE had already disabled the diagnostics
area of the client. The attempt to override the deadlock error
towards the end of dispatch_command() would then be skipped,
resulting in a successful COMMIT even though the transaction
was aborted.
This bug affected the following MTR tests:
- galera_insert_multi
- galera_nopk_unicode
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
insufficient space for an object of type 'uchar' in sys_vars.inl
Disable UBSAN for global_var_ptr()/session_var_ptr() (none of the
"undefined" suboptions for gcc-13 worked).
I could not reproduce the reported assertion. However, I could
reproduce the test failure, caused by a missing wait_condition
and an error in the test case. Added the missing wait_condition and
fixed the error in the test case to make the test stable.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that in DeadlockChecker::trx_rollback() we hold lock_sys
while we call wsrep_handle_SR_rollback(), where THD::LOCK_thd_data
(and in some cases THD::LOCK_thd_kill) are acquired. This is against
the current mutex ordering rules.
However, acquiring THD::LOCK_thd_data is not necessary because we are
always in the victim_thd context: either the client session is rolling
back, or the rollbacker thread is in control. Therefore, we should
always use wsrep_thd_self_abort(), and then no additional mutexes are
required.
Fixed by removing the locking of THD::LOCK_thd_data and using only
wsrep_thd_self_abort(). In debug builds, added assertions to
verify that we are always in the victim_thd context.
This fix is for MariaDB 10.5, and we already have a test case
that sporadically failed in Jenkins before this fix.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
For some reason InnoDB was disabled at the --wsrep-recover call.
Added --loose-innodb to the command line to make sure InnoDB is
enabled.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
This is a fixup of MDEV-26345 commit
77ed235d50.
In MDEV-26345 the spider group by handler was updated so that it uses
the item_ptr fields of Query::group_by and Query::order_by, instead of
item. This was, and is, because the call to
join->set_items_ref_array(join->items1) during the execution stage,
just before the execution, replaces the order-by / group-by item arrays
with Item_temptable_field.
Spider traverses the item tree during the group by handler (gbh)
creation at the end of the optimization stage, and decides whether a gbh
could handle the execution of the query. Basically the spider gbh can
handle the execution if it can construct a well-formed query, execute it
on the data node, and store the results in the correct places. If so, it will
create one, otherwise it will return NULL and the execution will use
the usual handler (ha_spider instead of spider_group_by_handler). To
that end, the general principle is that the items checked at creation
should be the same items later used for query construction. Since in
MDEV-26345 we changed to use the item_ptr field instead of item field
of order-by and group-by in query construction, in this patch we do
the same for the gbh creation.
The item_ptr field could be an uninitialised NULL value during
gbh creation. This is because the optimizer may replace a DISTINCT
with a GROUP BY, which only happens if the original GROUP BY is empty.
It creates the artificial GROUP BY by calling create_distinct_group(),
which creates the corresponding ORDER object with its item field pointing
somewhere in ref_pointer_array, but leaves item_ptr NULL.
When spider finds out that item_ptr is NULL, it knows there's some
optimizer skullduggery and that it has been passed a query different from the
original. Without a clear contract between the server layer and the
gbh, it is better to be safe than sorry and not create the gbh in this
case.
Also add a check and error reporting for the unlikely case of item_ptr
changing from non-NULL at gbh construction to NULL at execution, to
prevent a server crash.
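A sketch of the creation-time guard (stand-in declarations; the real code
walks the ORDER lists of the group by handler's Query object, and the
execution-time counterpart reports an error rather than crashing):

  /* Stand-in declarations for illustration only. */
  struct Item;
  struct ORDER { Item *item_ptr; ORDER *next; };

  /* At gbh creation: a NULL item_ptr means the optimizer synthesised the
     GROUP BY (e.g. from DISTINCT), so decline and let ha_spider execute
     the query the usual way. */
  bool gbh_can_handle(const ORDER *group_by)
  {
    for (const ORDER *ord= group_by; ord; ord= ord->next)
      if (ord->item_ptr == nullptr)
        return false;
    return true;
  }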
Also, we remove a check added in MDEV-29480 of order by items being
aggregate functions. That check was added with the premise that spider
was including auxiliary SELECT items which are referenced by ORDER BY
items. This premise has not been true since MDEV-26345, and caused
problems such as MDEV-29546, which was fixed by MDEV-26345.
When binding to NULL, DEFAULT or IGNORE from an Item value, Item_param did not
change m_type_handler, so its value remained from the previous bind.
This led to DBUG_ASSERTs in Item_param::get_date() and
Timestamp_or_zero_datetime_native_null.
Fix:
Set Item_param::m_type_handler to &type_handler_null when
binding from an Item returning NULL.
This patch also fixes MDEV-35427.
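A stand-in sketch of the state change (simplified types; the real code sets
Item_param::m_type_handler to &type_handler_null in the bind-from-Item path):

  /* Stand-in types for illustration only. */
  struct Type_handler {};
  static const Type_handler type_handler_null_sketch{};

  struct Param_sketch
  {
    bool null_value= false;
    const Type_handler *m_type_handler= nullptr;

    /* When the bound Item evaluates to NULL (or DEFAULT / IGNORE), reset
       the type handler as well; keeping the handler from a previous bind
       is what later tripped the DBUG_ASSERTs. */
    void set_null_from_item()
    {
      null_value= true;
      m_type_handler= &type_handler_null_sketch;
    }
  };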
btr_cur_t::search_leaf(): In the BTR_SEARCH_PREV and BTR_MODIFY_PREV
modes, reset the previous search status before invoking
page_cur_search_with_match(). Otherwise, the search could end up
in a totally wrong subtree.
This fixes a regression that was introduced in
commit de4030e4d4 (MDEV-30400).
buf_block_alloc(): Define as an alias in buf0lru.h, which defines
the underlying buf_LRU_get_free_block().
buf_block_free(): Define as an alias of the non-inline function
buf_pool.free_block(block).
Reviewed by: Vladislav Lesin
Instead of repurposing buf_page_t::access_time for state()==MEMORY
blocks that are part of recv_sys.pages, let us define an anonymous
union around buf_page_t::hash. In this way, we will be able to
declare access_time private.
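A simplified sketch of the layout idea (types and the recovery-side member
are illustrative):

  #include <stdint.h>

  /* Sketch: state()==MEMORY blocks owned by recv_sys.pages never sit in the
     buf_pool page hash, so recovery-time bookkeeping can share the storage
     of the hash link instead of borrowing access_time. */
  class buf_page_sketch
  {
  public:
    union
    {
      buf_page_sketch *hash;   /* page-hash chain link for buffer pool pages */
      uint32_t recv_offset;    /* illustrative recovery bookkeeping for
                                  MEMORY blocks in recv_sys.pages */
    };
  private:
    uint32_t access_time;      /* can now be declared private */
  };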
Reviewed by: Vladislav Lesin