This was caused by a wrong allocation of a variable on the stack
(4K of data was allocated instead of 512 bytes).
No test case, as the original MDEV test case is not usable for mtr.
An UPDATE statement that is run in PS mode and uses a positional parameter
handles columns declared with the clause DEFAULT NULL incorrectly when
DEFAULT is passed as the actual value for the positional parameter of the
prepared statement. A similar issue happens when an expression is specified
in the DEFAULT clause of a table's column definition.
The reason for the incorrect processing of columns declared as DEFAULT NULL
is that setting the null flag for the field being updated was missed in the
implementation of the method Item_param::assign_default().
The reason for the incorrect handling of an expression in the DEFAULT clause
is that saving of the field was likewise missed inside the implementation
of the method Item_param::assign_default().
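A minimal self-contained model of the missed null-flag propagation; the
types and names below are illustrative, not the server's actual Field or
Item_param code:

    #include <stdio.h>

    struct Column
    {
      int  default_value;
      bool default_is_null;  // column declared as DEFAULT NULL
      int  value;
      bool is_null;
    };

    // What the fixed assign_default() conceptually must do: copy both the
    // default value and its null flag into the field being updated.  (The
    // second part of the fix, remembering the field so that a DEFAULT
    // expression can be evaluated for it, is not modeled here.)
    static void assign_default(Column *c)
    {
      c->value= c->default_value;
      c->is_null= c->default_is_null;  // this propagation was missing
    }

    int main()
    {
      Column c= {0, true, 42, false};   // declared DEFAULT NULL
      assign_default(&c);               // UPDATE ... SET c=? with ?=DEFAULT
      printf("%s\n", c.is_null ? "NULL" : "NOT NULL");  // prints NULL
      return 0;
    }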
signal_hand(): Remove the cmake -DWITH_DBUG_TRACE=ON instrumentation.
It can cause a crash on shutdown when the only other thread is
waiting in wait_for_signal_thread_to_end().
In the JSON functions, the debug injection for stack overflows is
inaccurate and may cause actual stack overflows. Let us simply
inject stack overflow errors without actually relying on the ability
of check_stack_overrun() to do so.
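A hedged sketch of the approach, using the server's DBUG_EXECUTE_IF error
injection; the keyword name and the exact call site are assumptions, not
the literal patch:

    DBUG_EXECUTE_IF("json_check_min_stack_requirement",
                    {
                      /* Report the stack-overrun error directly instead
                         of shrinking the safety margin and hoping that
                         check_stack_overrun() catches it in time. */
                      my_error(ER_STACK_OVERRUN_NEED_MORE, MYF(0),
                               my_thread_stack_size, my_thread_stack_size,
                               STACK_MIN_SIZE);
                      return 1;
                    });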
Reviewed by: Rucha Deodhar
Fix a scenario where `mariabackup --prepare` fails with the assertion
`!m_modifications || !recv_no_log_write' in `mtr_t::commit()`. This
happens if the prepare step of the backup encounters a data directory
which happens to store the wsrep xid position in the TRX SYS page (this
is no longer the case since 10.3.5). Since MDEV-17458,
`trx_rseg_array_init()` handles this case by copying the xid position
to the rollback segments before clearing the xid from the TRX SYS page.
However, this step should be avoided when `trx_rseg_array_init()` is
invoked from mariabackup. The relevant code is now surrounded by the
condition `srv_operation == SRV_OPERATION_NORMAL`. An additional check
ensures that we are not trying to copy an xid position which has
already been zeroed.
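A sketch of the intended guard in `trx_rseg_array_init()`; `srv_operation`
and `SRV_OPERATION_NORMAL` come from the actual code, while the helpers
below are hypothetical:

    /* Copy the wsrep xid from the TRX SYS page to the rollback segments
       only during a normal server startup (not from mariabackup
       --prepare), and only if the stored xid was not already zeroed. */
    if (srv_operation == SRV_OPERATION_NORMAL
        && !wsrep_xid_is_zero(trx_sys_xid))     /* hypothetical helper */
      copy_wsrep_xid_to_rseg(trx_sys_xid);      /* hypothetical helper */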
The patch for MDEV-33502 (Slowdown when running nested statement with many
partitions) caused this error, as I failed to take big-endian architectures
into account.
This patch also introduces bitmap_import() and bitmap_export(), to be used
when one wants to store bitmaps in files/logs in a portable way.
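A self-contained sketch of byte-order-independent serialization; the real
bitmap_import()/bitmap_export() may differ:

    #include <stdint.h>
    #include <stddef.h>

    /* Write each 64-bit bitmap word as little-endian bytes so that the
       exported format is identical on big- and little-endian machines. */
    static void bitmap_export_sketch(unsigned char *out,
                                     const uint64_t *words, size_t n_words)
    {
      for (size_t i= 0; i < n_words; i++)
        for (int b= 0; b < 8; b++)
          *out++= (unsigned char) (words[i] >> (8 * b));
    }

    static void bitmap_import_sketch(uint64_t *words,
                                     const unsigned char *in, size_t n_words)
    {
      for (size_t i= 0; i < n_words; i++)
      {
        words[i]= 0;
        for (int b= 0; b < 8; b++)
          words[i]|= ((uint64_t) *in++) << (8 * b);
      }
    }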
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This makes it easier to find out what replication workers are
doing and what they are waiting for.
Things changed in processlist:
- Slave_SQL time was not consistent. Now the time for the state "Slave has
  read all relay log; waiting for more updates" shows how long the thread
  has waited for the next event.
- Slave_worker threads often showed "Closing tables" for a long
  time. Now the state is reverted to the previous state after
  "Closing tables" is done.
- Commit and Rollback states were not shown for replication (and some
  other threads). Now the Commit and Rollback states are always shown, and
  the state is reverted to the previous state when the Commit/Rollback
  has finished.
Code changes:
- Added thd->set_time_for_next_stage() for parallel replication when
  starting to wait for prior transactions to commit, for group commit,
  for FTWRL and for free space in the thread pool.
  Before, the time was reset only after the above events.
- Moved THD_STAGE_INFO(stage_rollback) and THD_STAGE_INFO(stage_commit)
  from sql_parse.cc to transaction.cc to ensure this is done for
  all commits and not only for 'normal connection queries'.
  (A sketch of the revert-to-previous-stage pattern follows this list.)
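A hedged sketch of the revert-to-previous-stage pattern; the exact API
(THD::backup_stage()) and call sites are assumptions, not the literal
patch:

    PSI_stage_info org_stage;
    thd->backup_stage(&org_stage);              // remember current stage
    THD_STAGE_INFO(thd, stage_closing_tables);  // shown in processlist
    close_thread_tables(thd);
    THD_STAGE_INFO(thd, org_stage);             // revert once we are done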
Test case changes:
- close_thread_tables() reverting the stage to the previous stage caused
  the counter in performance_schema to be increased. In many cases it is
  the 'sql/starting' stage that was affected.
- We now only change to the "Commit" stage if there is a need for a commit.
  This caused some "Commit" stages to disappear from perfschema reports.
TODO in 11.#:
- Slave_IO always shows "Waiting for master to send event" and the time is
  counted from SLAVE START. We should in 11.# change this to be the time
  since reading the last event.
The issue here is that ha_innobase::get_auto_increment() could cause a
deadlock involving the auto-increment lock and roll back the transaction
implicitly. For such cases, storage engines usually call
thd_mark_transaction_to_rollback() to inform the SQL engine about it, which
in turn takes appropriate actions and closes the transaction. In InnoDB,
we call it while converting an InnoDB error code to a MySQL one.
However, since ::innobase_get_autoinc() returns void, we skip the call
for error code conversion and also miss marking the transaction for
rollback on a deadlock error. We eventually assert while releasing a
savepoint, as the transaction state is not active.
Since convert_error_code_to_mysql() handles some generic error
handling, like invoking the callback when needed, we should call
that function in ha_innobase::get_auto_increment() even if we don't
return the resulting MySQL error code.
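A sketch of the resulting call; the surrounding variables are assumptions,
only the function names come from this message:

    /* Run the generic conversion for its side effects (it calls
       thd_mark_transaction_to_rollback() on a deadlock error), even
       though get_auto_increment() cannot return the MySQL error code. */
    if (error != DB_SUCCESS)
      (void) convert_error_code_to_mysql(error, table_flags, thd);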
MDEV-33582 Add more warnings to be able to better diagnose network issues
- Disabled the "Semisync ack receiver got hangup" warning.
  - One could get this warning from semisync when running
    mtr --mysqld=log-warnings=3 rpl.rpl_semi_sync_shutdown_await_ack
- Fixed the result file for engines/funcs/rpl_get_lock.test
Problem:
========
During an upgrade, InnoDB writes redo log records for adjusting
the tablespace size or tablespace flags even before the log
has been upgraded to the configured format. This could lead to
inconsistent data if a crash happened during the upgrade process.
Fix:
====
srv_start(): Write the redo log records for the tablespace flags
adjustment and the increased tablespace size only after the redo log
upgrade.
log_write_low(), log_reserve_and_write_fast(): Check whether
the redo log is in physical format.
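A sketch of the kind of check this implies; the exact assertion and field
names are assumptions:

    /* No redo log record may be generated while the log file is still
       in a pre-upgrade format. */
    ut_ad(log_sys.log.format == LOG_HEADER_FORMAT_CURRENT);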
Warnings are added to net_serv.cc when
global_system_variables.log_warnings >= 4.
When the above condition holds:
- All communication errors from net_serv.cc are also written to the
  error log.
- When a packet cannot be read or written, a more detailed error is
  given.
Other things:
- Added detection of slaves that have hung up to Ack_receiver::run().
- vio_close() now first marks the socket as closed before closing it.
  The reason for this is to ensure that a connection that gets a read
  error can check whether the reason was that the socket was closed.
- Added a new state to vio to be able to detect if the vio is active,
  shutdown or closed. This is used to detect whether a socket was closed
  by another thread. (See the sketch after this list.)
- Testing of the new warnings is done in rpl_get_lock.test.
- Suppressed some of the new warnings in mtr to allow one to run some of
  the tests with --mysqld=--log-warnings=4. All tests in the 'rpl' suite
  can now be run with this option.
- Ensured that global.log_warnings is restored at test end in a way
  that allows one to use mtr --mysqld=--log-warnings=4.
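A hedged sketch of the vio state tracking; the enum values, the member
name and the helper below are assumptions, not the actual patch:

    enum vio_state_t { VIO_STATE_ACTIVE, VIO_STATE_SHUTDOWN,
                       VIO_STATE_CLOSED };

    /* vio_close() marks the state before closing the socket, so a
       thread that gets a read error can tell "closed by another thread"
       from a real network failure. */
    static void vio_close_sketch(Vio *vio)
    {
      vio->state= VIO_STATE_CLOSED;   /* visible to the reading thread */
      mysql_socket_close(vio->mysql_socket);
    }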
Reviewed-by: <serg@mariadb.org>,<brandon.nesterenko@mariadb.com>
This commit fixes the following issues:
- Memory leak checking is enabled for mysqltest. This covers all cases
  except calls to die(), which only happen in case of internal failures
  in mysqltest. die() is no longer called when the result files differ.
- One can now run mtr --embedded without failures (this crashed or hung
  before).
- cleanup_and_exit() has a new parameter that indicates that it is called
  from die(), in which case we should not do memory leak checks. We now
  always call cleanup_and_exit() instead of exit() to be able to free up
  memory and discover memory leaks. (See the sketch after this list.)
- Lots of new asserts to catch error conditions.
- More DBUG statements.
- Fixed that all results are freed in mysqltest (fixed a memory leak in
  mysqltest when using prepared statements).
- Fixed a race condition in do_stmt_close() that caused the embedded
  server to not free memory (a memory leak in mysqltest with the embedded
  server).
- Fixed two memory leaks in the embedded server when using prepared
  statements. These memory leaks caused timeout hangs in mtr when the
  server was compiled with safemalloc. This issue was not noticed (except
  as timeouts) as memory report checking was done but its output was
  disabled.
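A sketch of the changed exit path; the parameter name and body are
assumptions, not the actual mysqltest code:

    static void cleanup_and_exit(int exit_code, bool called_from_die)
    {
      free_used_memory();
      /* When we get here via die(), internal state may be inconsistent,
         so skip the memory leak check in that case. */
      if (called_from_die)
        my_end(my_end_arg);
      else
        my_end(my_end_arg | MY_CHECK_ERROR);
      exit(exit_code);
    }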
Previously, the behavior was to error out on the first invalid option
encountered. With this change, a best effort approach is made so that
all invalid options processed will be printed before exiting.
There is a caveat: the options are processed many times at varying
stages of server startup, because the server is not aware of all valid
options immediately (e.g. plugins have to be loaded first before the
server knows what the available plugin options are). So there are some
options that the server can determine are invalid "early" on, and there
are some options that the server cannot determine are invalid until
"later" on. For example, the server can determine very early on that an
option such as `--a` is ambiguous, but an option such as
`--this-does-not-match-any-option` cannot be labelled as invalid until
the server is aware of all available options.
Thus, it is possible that the server will still fail before printing out
all "invalid" options. You can see this by passing `--a
--obvious-invalid-option`.
Test cases were added to `mysqld_option_err.test` to validate that
multiple invalid options will be displayed in the error message.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
If a server has a default configuration (e.g. in a my.cnf file) with
rpl_semi_sync_slave_enabled set, then on server start the corresponding
rpl_semi_sync_slave_status variable will also initially be ON, even
if the slave was never configured/started. This is because the
Repl_semi_sync_slave initialization logic (function init_object())
sets the running status to the enabled value during
init_server_components().
This patch fixes this by removing the statement that sets the
semi-sync slave running status from the initialization logic. An
additional change needed from this concerns semi-sync recovery: this
status variable was used as a condition to determine binlog
truncation during server recovery. This patch therefore switches this
condition to reference the global rpl_semi_sync_slave_enabled
variable. Note, though, that the semi-sync recovery condition is to be
changed entirely with the MDEV-33424 agenda.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
In most cases, ha_partition forwards calls to extra() to all
locked_partitions. It doesn't make sense to forward some of these calls
to partitions that were pruned away.
This patch introduces ha_partition::loop_read_partitions and makes
these calls use it (see the sketch after the list):
- ha_partition::extra_opt(HA_EXTRA_KEYREAD)
- ha_partition::extra(HA_EXTRA_KEYREAD)
- ha_partition::extra(HA_EXTRA_NO_KEYREAD)
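A hedged sketch of the loop shape ('operation' stands for the HA_EXTRA_*
argument); the bitmap helpers exist in my_bitmap.h, but the exact body of
loop_read_partitions is an assumption:

    /* Forward the call only to partitions that survived pruning,
       i.e. those set in m_part_info->read_partitions. */
    int error;
    for (uint i= bitmap_get_first_set(&m_part_info->read_partitions);
         i < m_tot_parts;
         i= bitmap_get_next_set(&m_part_info->read_partitions, i))
    {
      if ((error= m_file[i]->extra(operation)))
        return error;
    }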
Reviewed-by: Monty
buf_flush_page_cleaner(): Remove a loop that had originally been added
in commit 9d1466522e (MDEV-32029) and was made
redundant by commit 5b53342a6a (MDEV-32588).
Starting with commit d34479dc66 (MDEV-33053)
this loop would cause a significant performance regression in workloads
where buf_pool.need_LRU_eviction() constantly holds in
buf_flush_page_cleaner().
Thanks to Steve Shaw of Intel for noticing this.
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
MDEV-33502 Slowdown when running nested statement with many partitions
This change was triggered to help some MariaDB users with close to
10000 bits in their bitmaps.
- Changed the underlying storage to be 64 bit instead of 32 bit.
  - This reduces the number of loops needed to scan bitmaps.
  - This can cause some bitmaps to be 4 bytes larger.
- Ensure that all unused top bits are always 0 (this simplifies the code
  as the last 64-bit storage word is not a special case anymore).
- Use my_find_first_bit() to find the first set bit, which is much faster
  than scanning through things byte by byte and then bit by bit.
  (A sketch of the word-at-a-time scan follows this list.)
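A self-contained sketch of the word-at-a-time scan; my_find_first_bit()'s
actual implementation may differ (__builtin_ctzll is the GCC/Clang
builtin):

    #include <stdint.h>
    #include <stddef.h>

    /* Return the index of the first set bit, or n_words * 64 if none.
       Because unused top bits are always 0, the last word needs no
       special handling. */
    static inline uint64_t find_first_bit_sketch(const uint64_t *words,
                                                 size_t n_words)
    {
      for (size_t i= 0; i < n_words; i++)
        if (words[i])
          return i * 64 + __builtin_ctzll(words[i]);
      return n_words * 64;
    }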
Other things:
- Added a bool to remember whether my_bitmap_init() allocated the bitmap
  array. my_bitmap_free() will only free arrays it did allocate.
  This allowed me to remove setting 'bitmap=0' before calling
  my_bitmap_free() for cases where the bitmaps were allocated externally.
- my_bitmap_init() sets the bitmap to 0 in case of failure.
- Added 'universal' asserts to most bitmap functions.
- Changed all remaining calls to bitmap_init() to my_bitmap_init()
  (to finish the change from 2014).
- Changed all usage of uint32 in my_bitmap.h to my_bitmap_map.
- Updated bitmap_copy() to handle bitmaps of different sizes.
- Removed const from bitmap_exists_intersection() as it caused casts
  in all usage.
- Removed the unused function bitmap_set_above().
- Renamed create_last_word_mask() to create_last_bit_mask() (to match
  the name changes in my_bitmap.cc).
- Extended bitmap-t with tests for more bitmap functions.
The root cause is the WAL logging of a file operation when the actual
operation fails afterwards. It creates a situation with a log entry for
an operation that would always fail. I could simulate both the backup
scenario error and the InnoDB recovery failure by exploiting this
weakness.
We follow WAL for the file rename operation, and once logged, the
operation must eventually complete successfully, or it is a major
catastrophe. Right now we fail for rename and handle it as a normal
error, and that is the problem.
I created a patch to address a RENAME operation to a non-existing schema,
where the destination schema directory is missing. The patch checks for
the missing schema before logging, in an attempt to avoid the failure
after the WAL log is written/flushed. I also checked that the schema
cannot be dropped and that there cannot be any race with another rename
to the same file. This is protected by the MDL lock in SQL today.
The patch should thus be a good improvement over the current situation
and solves the issue at hand.
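A sketch of the check-before-log idea; the helper names below are
hypothetical:

    /* Verify that the destination schema directory exists before
       writing any redo log for the rename: once the log record is
       flushed, the rename must succeed, so refuse the operation up
       front instead. */
    if (!schema_directory_exists(new_name))     /* hypothetical helper */
      return DB_ERROR;                          /* fail before logging */
    log_rename_and_execute(old_name, new_name); /* WAL: log, then rename */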
Problem:
========
- InnoDB reads the length of a variable-length field wrongly
while applying the modification log of an instant table.
Solution:
========
rec_init_offsets_comp_ordinary(): For the temporary instant
file record, InnoDB should read the length of the variable-length
field from the record itself.
If a query with GROUP_CONCAT is executed, the server reports a warning
every time the length of the result of this function exceeds the set
value of the system variable group_concat_max_len. This bug led to a set
of warnings from the second execution of a prepared statement that did
not coincide with the one from the first execution, if the executed query
was a grouping query over a join of tables using the GROUP_CONCAT
function and the join cache was not allowed to be employed.
The discrepancy between the sets of warnings was due to a missing cleanup
of Item_func_group_concat::row_count after execution of the query.
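A sketch of the missing cleanup; its exact placement (e.g. in
Item_func_group_concat::cleanup()) is an assumption:

    /* Reset the per-execution counter so that re-execution of a
       prepared statement starts counting group_concat_max_len warnings
       from zero again. */
    row_count= 0;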
Approved by Oleksandr Byelkin <sanja@mariadb.com>