handle_slave_io(), handle_slave_sql(), os_thread_exit():
Remove a redundant pthread_exit(nullptr) call, because it
would cause SIGSEGV.
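As a minimal sketch of the failure shape (simplified; this is not the
actual handler code, and it assumes my_thread_end() tears down per-thread
state as it does in the server):

    extern "C" void *handle_slave_sql(void *arg)
    {
      /* ... replication work ... */
      my_thread_end();          /* per-thread state is freed here */
      return nullptr;           /* correct: a plain return ends the thread */
      /* pthread_exit(nullptr);    redundant, and it could SIGSEGV by
                                   touching the freed per-thread state */
    }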
mysql_print_status(): Add MEM_MAKE_DEFINED() to work around
some missing instrumentation around mallinfo2().
que_graph_free_stat_list(): Invoke que_node_get_next(node) before
que_graph_free_recursive(node). That is the logical way of freeing the
list, and it is compatible with MSAN_OPTIONS=poison_in_dtor=1.
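A minimal sketch of that ordering (simplified; the real list walk lives
in que0que.cc):

    /* Read the successor link before freeing the node, so the walk never
       dereferences memory that the recursive free has already poisoned. */
    void que_graph_free_stat_list(que_node_t *node)
    {
      while (node)
      {
        que_node_t *next= que_node_get_next(node); /* read the link first */
        que_graph_free_recursive(node);            /* then free the node */
        node= next;
      }
    }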
ins_node_t::~ins_node_t(): Invoke mem_heap_free(entry_sys_heap).
que_graph_free_recursive(): Rely on ins_node_t::~ins_node_t().
fts_t::~fts_t(): Invoke mem_heap_free(fts_heap).
fts_free(): Replace with direct calls to fts_t::~fts_t().
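In sketch form, the destructor-based cleanup amounts to the following
(simplified; the member names are those mentioned above):

    ins_node_t::~ins_node_t() { mem_heap_free(entry_sys_heap); }
    fts_t::~fts_t() { mem_heap_free(fts_heap); }
    /* que_graph_free_recursive() and former fts_free() callers now just
       run the destructor, e.g. node->~ins_node_t(). */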
The failures in free_root() due to MSAN_OPTIONS=poison_in_dtor=1
will be covered in MDEV-30942.
* it isn't a "pfs" function, so don't call it Item_func_pfs and
don't use item_pfsfunc.*
* the tests don't depend on the performance schema; put them in the main suite
* inherit from Item_str_ascii_func
* use connection collation, not utf8mb3_general_ci
* set the result length in fix_length_and_dec (see the sketch after this list)
* do not set maybe_null
* use my_snprintf() where possible
* don't set m_value.ptr on every invocation
* update the sys schema to use format_pico_time()
* len must be size_t (compilation error on Windows)
* the correct function name for double->double is fabs()
* drop volatile hack
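A sketch of how those points might combine (simplified; exact base-class
signatures vary between server versions, and the buffer length here is
illustrative):

    class Item_func_format_pico_time :public Item_str_ascii_func
    {
    public:
      Item_func_format_pico_time(THD *thd, Item *a)
       :Item_str_ascii_func(thd, a) {}
      const char *func_name() const override { return "format_pico_time"; }
      bool fix_length_and_dec() override
      {
        /* fixed result length, set once here instead of on every call;
           connection collation for the result; maybe_null is left alone */
        fix_length_and_charset(12, current_thd->variables.collation_connection);
        return false;
      }
      String *val_str_ascii(String *to) override; /* formats via
                                                     my_snprintf() */
    };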
AWK is used in the Debian SysV-init and postinst scripts to determine
whether there is enough space to start the MariaDB database or to
create a new database at the target destination.
These AWK scripts can be rewritten in pure sh, or with the help of
Coreutils, which is currently a mandatory dependency of MariaDB.
The reasoning behind this is to get rid of one rarely used dependency.
…with: Test assertion failed
Problem:
=======
Assertion text: 'Value returned by SSS and PS table for Last_Error_Number
should be same.'
Assertion condition: '"1146" = "0"'
Assertion condition, interpolated: '"1146" = "0"'
Assertion result: '0'
Analysis:
========
In parallel replication, when the slave is started, the worker pool gets
activated, and it gets cleared when the slave stops. Each time the worker
pool gets activated, a backup worker pool also gets created to store
worker-specific performance schema information in case of errors. On
error, all relevant information is copied from rpl_parallel_thread to rli
and it gets cleared from the thread. Then the server waits for all workers
to complete their work; during this stage, worker info specific to the
performance schema table is stored into the backup pool, and finally the
actual pool gets cleared. If users query the performance schema table to
know the status of workers, the information from the backup pool will be
used. The test simulates an ER_NO_SUCH_TABLE error and verifies the worker
information in the pfs table.
The test works fine if execution occurs in the following order.
Step 1. An error occurs; worker information is copied to the backup pool.
Step 2. handle_slave_sql invokes 'rpl_parallel_resize_pool_if_no_slaves' to
deactivate the worker pool; it marks pool->count=0.
Step 3. The PFS table is queried; since the actual pool is deactivated, the
backup pool information is read.
If Step 3 happens prior to Step 2, the pool is yet to be deactivated and
the actual pool is read, which doesn't have any error details as they were
cleared. Hence the test occasionally fails.
Fix:
===
Upon error, mark the backup pool as valid, so that if the PFS table is
queried, the backup pool's information will be read; if it is not flagged
as valid, the regular pool will be read.
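In sketch form (the flag and helper names below are illustrative, not the
actual identifiers):

    /* Error path: make the backup pool win any race with a PFS reader. */
    copy_worker_info_to_backup_pool(rpt);   /* illustrative helper */
    backup_pool.is_valid= true;             /* illustrative flag */
    clear_worker_info(rpt);

    /* PFS reader side: prefer the backup pool whenever it is valid. */
    const worker_pool &p= backup_pool.is_valid ? backup_pool : regular_pool;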
This work is one of the last pieces created by the late Sujatha Sivakumar.
Problem:
========
- An InnoDB REPLACE statement returns "can't find record" during a bulk
insert operation. InnoDB blindly returns DB_END_OF_INDEX when the bulk
transaction is visible to the current transaction, even though the
search tuple was inserted as part of the current REPLACE statement.
Solution:
=========
row_search_mvcc(): InnoDB should allow the transaction to read all the
rows when InnoDB intends to do any locking on the record, even though
the bulk insert transaction's changes are visible to the current
transaction.
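The shape of the intended condition, as a simplified sketch
(bulk_insert_visible() is an illustrative predicate, not the actual code):

    /* Take the empty-index shortcut only for plain non-locking reads;
       a locking read (e.g. from REPLACE) must still see its own rows. */
    if (trx->bulk_insert_visible(index->table)      /* illustrative */
        && prebuilt->select_lock_type == LOCK_NONE)
      return DB_END_OF_INDEX;
    /* otherwise fall through and scan the records normally */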
buf_dblwr_t::init(), buf_dblwr_t::close(): Also cover write_cond,
which was added in commit a55b951e60
without explicit initialization. On GNU/Linux, PTHREAD_COND_INITIALIZER
is a zero-initializer. That is why the default zero initialization
happened to work on that platform.
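A sketch of the corrected lifecycle (simplified; the server goes through
its own wrappers around the pthread primitives):

    void buf_dblwr_t::init()
    {
      /* ... */
      pthread_cond_init(&write_cond, nullptr);  /* explicit and portable */
    }

    void buf_dblwr_t::close()
    {
      /* ... */
      pthread_cond_destroy(&write_cond);
    }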
This commit reduces the need for the AWK command, at least
for the Debian mariadb-server-compat package.
This commit removes the need for the AWK command from the
scripts/mysql_install_db.sh script.
The AWK command is replaced by a purely POSIX sh compliant version.
- agressively -> aggressively
- exising -> existing
- occured -> occurred
- releated -> related
- seperated -> separated
- sucess -> success
- use use -> use
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
In commit d6aed21621 a condition at
the start of buf_read_ahead_random() was refactored. Only the caller
buf_read_recv_pages() was adjusted for this. We must in fact adjust
every caller and make sure that spare blocks will be allocated
while crash recovery is in progress. This is the simplest fix;
ideally recovery would operate on the compressed page frame.
The observed recovery hang occurred because pages 0 and 3 of a
tablespace were being read due to buf_page_get_gen() calls by
trx_resurrect_table_locks() before the log records for these pages
had been applied. In buf_page_t::read_complete() we would skip
the call to recv_recover_page() because no uncompressed page frame
had been allocated for the block.
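As a hedged sketch of the per-caller adjustment (the helper name is
illustrative, and actual signatures differ):

    /* While crash recovery is in progress, every read-ahead caller must
       guarantee an uncompressed frame, so that buf_page_t::read_complete()
       can apply redo via recv_recover_page(). */
    if (recv_recovery_is_on() && !block->frame)
      assign_spare_uncompressed_frame(block);   /* illustrative helper */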
btr_cur_upd_rec_in_place(): Avoid calling page_zip_write_rec() if we
are not modifying any fields that are stored in compressed format.
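As a simplified sketch (upd_touches_zip_stored_fields() is an illustrative
name for the new check, and the page_zip_write_rec() arguments are
abbreviated):

    /* Skip the compressed-page write when the update modifies no field
       that is stored in the compressed format. */
    if (page_zip && upd_touches_zip_stored_fields(index, update))
      page_zip_write_rec(block, rec, index, offsets, 0, mtr);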
btr_cur_update_in_place_zip_check(): New function to check if a
ROW_FORMAT=COMPRESSED record can actually be updated in place.
btr_cur_pessimistic_update(): If the BTR_KEEP_POS_FLAG is not set
(we are in a ROLLBACK and cannot write any BLOBs), ignore the potential
overflow and let page_zip_reorganize() or page_zip_compress() handle it.
This avoids a failure when an attempted UPDATE of a NULL column to 0 is
rolled back. During the ROLLBACK, we would try to move a non-updated
long column to off-page storage in order to avoid a compression failure
of the ROW_FORMAT=COMPRESSED page.
page_zip_write_trx_id_and_roll_ptr(): Remove an assertion that would fail
in row_upd_rec_in_place() because the uncompressed page would already
have been modified there.
This is a 10.5 version of commit ff3d4395d8
(different because of commit 08ba388713).
Commit a923d6f49c disabled numeric setting
of character_set_* variables with non-default values:
MariaDB [(none)]> set character_set_client=224;
ERROR 1115 (42000): Unknown character set: '224'
However, the corresponding binlog functionality still writes numeric
values into log events, and this will break binlog replay if the value is
not the default. Now make the server use the 'String' type for
'character_set_client' when generating binlog events.
Before:
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=224,@@session.collation_connection=224,@@session.collation_server=33/*!*/;
After:
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=utf8mb4,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
Note: prior to the previous commit, setting the value to '224', '45', or
'utf8mb4' had the same effect, as they all set the parameter to
'utf8mb4'.
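In sketch form, the generating side now prints the character set by name
rather than by numeric id (simplified; the CHARSET_INFO name member is
csname or cs_name.str depending on server version):

    /* was: the numeric id, e.g. "@@session.character_set_client=224" */
    my_snprintf(buf, sizeof buf,
                "@@session.character_set_client=%s",  /* e.g. "utf8mb4" */
                cs->cs_name.str);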
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
In some cases, the errors would not be written to the log.
This was not critical, however, as mysql_install_db should not normally
write anything to the log.
- InnoDB rolls back the whole transaction and discards the savepoint
when a failure happens during a bulk insert operation. When the
server requests to release the savepoint, InnoDB should return
DB_SUCCESS when it deals with a bulk insert operation.
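A sketch of the intended behaviour (simplified; the predicate name is
illustrative):

    /* Releasing a savepoint during a bulk insert has nothing left to
       refer to, because a failure rolls back the whole transaction
       anyway; report success rather than a missing savepoint. */
    if (trx->is_bulk_insert())            /* illustrative predicate */
      return DB_SUCCESS;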