for the small InnoDB table
This bug was introduced by the patch 6c414fcf89.
The patch did not take into account that some objects of the Field_* types
are created only for a TABLE_SHARE and have their 'table' field set to NULL.
In particular, such are the objects created to store statistical
min/max values for columns.
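A minimal sketch of the failure mode, using simplified stand-in types rather
than the actual server classes: Field objects created only for a TABLE_SHARE
carry table == NULL, so any code touching field->table has to guard for it.

    #include <cstdio>

    // Simplified stand-ins for the server's TABLE/Field classes.
    struct TABLE { bool stats_is_read; };
    struct Field { TABLE *table; };

    // The regression dereferenced field->table unconditionally; statistics-only
    // fields (e.g. the ones holding min/max column values) have no TABLE.
    static bool stats_available(const Field *field)
    {
      return field->table != nullptr && field->table->stats_is_read;
    }

    int main()
    {
      Field min_value{nullptr};                         // created for TABLE_SHARE only
      std::printf("%d\n", stats_available(&min_value)); // prints 0 instead of crashing
      return 0;
    }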
for blob column
ANALYZE TABLE <table> does not collect statistical data on min/max values
for BLOB columns of <table>. However these values can be added into
mysql.column_stats manually by executing proper statements.
Unfortunately this led to a memory leak because the memory allocated
for these values was never freed.
This patch provides the server with a function to free memory allocated
for min/max statistical values of BLOB types.
Temporarily changed the test case until MDEV-16711 is fixed, as without
this fix the test case for MDEV-16757 did not fail only on 10.0.
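A rough sketch of the idea, with made-up names rather than the server's actual
helper: min/max values loaded for BLOB columns own heap memory, so the column
statistics need an explicit cleanup routine.

    #include <cstdlib>

    // Hypothetical holder for one column's loaded statistics values.
    struct Column_stat_values
    {
      char *min_value;   // heap-allocated when statistics are read or inserted
      char *max_value;
    };

    // Free the memory held by min/max statistical values of a BLOB column;
    // without such a call the allocations were simply leaked.
    static void free_blob_stat_values(Column_stat_values *stats)
    {
      std::free(stats->min_value);
      std::free(stats->max_value);
      stats->min_value = stats->max_value = nullptr;
    }

    int main()
    {
      Column_stat_values stats{static_cast<char*>(std::malloc(16)),
                               static_cast<char*>(std::malloc(16))};
      free_blob_stat_values(&stats);
      return 0;
    }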
for blob column
ANALYZE TABLE <table> does not collect statistical data on min/max values
for BLOB columns of <table>. However these values can be added into
mysql.column_stats manually by executing proper statements.
Unfortunately this led to a memory leak because the memory allocated
for these values was never freed.
This patch provides the server with a function to free memory allocated
for min/max statistical values of BLOB types.
Backport the fix f214d36512 to 10.0
Author: Sergei Golubchik <serg@mariadb.org>
Date: Tue Apr 17 00:44:34 2018 +0200
ASAN error in is_stat_table()
don't memcmp beyond the first argument's end
Also: use my_strcasecmp(table_alias_charset), like elsewhere, not memcmp
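A hedged illustration of the comparison change (standard strcasecmp stands in
for my_strcasecmp(table_alias_charset), and the statistics table names are
assumptions for the example): comparing NUL-terminated names avoids reading a
fixed memcmp length past the shorter argument's end.

    #include <strings.h>
    #include <cstdio>

    // Case-insensitive, NUL-terminated comparison instead of memcmp() with a
    // length that may exceed one of the arguments.
    static bool is_stat_table_name(const char *db, const char *name)
    {
      return strcasecmp(db, "mysql") == 0 &&
             (strcasecmp(name, "table_stats")  == 0 ||
              strcasecmp(name, "column_stats") == 0 ||
              strcasecmp(name, "index_stats")  == 0);
    }

    int main()
    {
      std::printf("%d\n", is_stat_table_name("mysql", "column_stats")); // 1
      std::printf("%d\n", is_stat_table_name("test",  "t1"));           // 0
      return 0;
    }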
In this issue we are using the derived_with_keys optimization and we are using these keys to do a hash join, which is incorrect.
We cannot create keys for derived tables whose key parts are of BLOB or TEXT type: TEXT and BLOB columns can only be
indexed over a specified length.
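A tiny, hedged illustration of the rule (enum and function names invented for
the example): when collecting key parts for a generated derived-table key,
BLOB/TEXT-typed columns have to be rejected because they could only be indexed
over an explicit prefix length.

    #include <cstdio>

    enum field_type { FIELD_INT, FIELD_VARCHAR, FIELD_BLOB /* TEXT is BLOB-typed internally */ };

    // A column is usable as a full key part of a generated derived-table key
    // only if it is not a BLOB/TEXT column.
    static bool usable_as_derived_key_part(field_type t)
    {
      return t != FIELD_BLOB;
    }

    int main()
    {
      std::printf("%d\n", usable_as_derived_key_part(FIELD_VARCHAR)); // 1
      std::printf("%d\n", usable_as_derived_key_part(FIELD_BLOB));    // 0
      return 0;
    }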
Disks with native 4K sectors need 4K alignment and size for unbuffered IO
(i.e. for files opened with FILE_FLAG_NO_BUFFERING).
InnoDB opens the redo log with FILE_FLAG_NO_BUFFERING, however it always does
512-byte IOs. Thus, the IO on 4K native sectors will fail, rendering
InnoDB non-functional.
The fix is to check whether OS_FILE_LOG_BLOCK_SIZE is a multiple of the logical
sector size, and if it is not, reopen the redo log without the
FILE_FLAG_NO_BUFFERING flag.
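A portable sketch of just the decision, assuming the 512-byte
OS_FILE_LOG_BLOCK_SIZE mentioned above (the actual fix queries the sector size
and reopens the file on Windows):

    #include <cstdio>

    static const unsigned OS_FILE_LOG_BLOCK_SIZE = 512;

    // Unbuffered I/O (FILE_FLAG_NO_BUFFERING) is only usable when the redo log
    // block size is a multiple of the disk's logical sector size.
    static bool redo_log_can_use_unbuffered_io(unsigned logical_sector_size)
    {
      return logical_sector_size != 0 &&
             OS_FILE_LOG_BLOCK_SIZE % logical_sector_size == 0;
    }

    int main()
    {
      std::printf("%d\n", redo_log_can_use_unbuffered_io(512));  // 1: keep the flag
      std::printf("%d\n", redo_log_can_use_unbuffered_io(4096)); // 0: reopen without it
      return 0;
    }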
"my_snprintf(NULL, 0, ...)" does not follow what snprintf does. Change
the call in wsrep_dump_rbr_buf_with_header to use snprintf (note that
wsrep_dump_rbr_buf was already using snprintf throughout).
Fix an off-by-one error in the comparison so that it actually prints the output.
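A hedged sketch of the pattern the code relies on (the path and names below
are made up): standard snprintf(NULL, 0, ...) returns the length the formatted
string would need, which my_snprintf() does not guarantee.

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
      const char *base = "wsrep_rbr_buf";
      long thd_id = 12345;

      // Measure the needed length first, then allocate and format.
      int needed = std::snprintf(nullptr, 0, "/tmp/%s.%ld", base, thd_id);
      char *path = static_cast<char*>(std::malloc(needed + 1)); // +1 for '\0'
      if (path != nullptr)
      {
        std::snprintf(path, needed + 1, "/tmp/%s.%ld", base, thd_id);
        std::puts(path);
        std::free(path);
      }
      return 0;
    }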
When the definition of the index used for hash join was created
in create_hj_key_for_table(), it could cause a memory overwrite
due to a bug that led to an underestimation of the number of
index components.
Change the float comparison function to use a negated version when
comparing for equality. This actually produces less code when compiling
with optimizations (-O3) enabled.
MDEV-7257 made a dump thread read from the binlog concurrently with
writers as long as the read bytes are below a water mark
(MYSQL_BIN_LOG::binlog_end_pos). However, it turned out to be possible for a
dump thread reader to reach for bytes past the water mark through a
feature of IO_CACHE that fills in the internal buffer, and while doing
so it could read what the reader is not supposed to see (the bytes
above MYSQL_BIN_LOG::binlog_end_pos).
The issue is fixed by constraining the IO_CACHE buffer fill to respect
the water mark.
An added unit test proves that reading from the file is bound by an external
parameter passed to the IO_CACHE::end_of_file cache member.
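A hedged sketch of the clamping idea only (the real IO_CACHE code is more
involved): a buffer refill must never fetch bytes past the externally set end,
i.e. the water mark exposed through the cache's end_of_file member.

    #include <algorithm>
    #include <cstdio>

    // How many bytes may be read into the buffer without crossing end_of_file.
    static size_t bytes_to_fill(unsigned long long pos_in_file,
                                unsigned long long end_of_file,
                                size_t buffer_size)
    {
      if (pos_in_file >= end_of_file)
        return 0;
      return static_cast<size_t>(
          std::min<unsigned long long>(buffer_size, end_of_file - pos_in_file));
    }

    int main()
    {
      // Reader at byte 900, writer has published up to byte 1000, 4K buffer:
      // only 100 bytes may be fetched, the rest stays invisible to the reader.
      std::printf("%zu\n", bytes_to_fill(900, 1000, 4096));
      return 0;
    }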
For the purpose of reporting an error to the error log, the shutdown thread was
attempting to access current_thd->variables.lc_messages->errmsgs->errmsgs,
whereas current_thd was NULL.
We should log errors according to the global lc_messages setting instead of
the session setting.
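A simplified, hedged sketch of the locale selection (the types are stand-ins,
not the server's): fall back to the global setting whenever there is no
session object.

    #include <cstdio>

    struct Locale  { const char *name; };
    struct Session { Locale *lc_messages; };

    static Locale global_lc_messages{"en_US"};

    // Shutdown code has no session, so current_thd may be NULL here.
    static Locale *error_message_locale(Session *current_thd)
    {
      return current_thd != nullptr ? current_thd->lc_messages
                                    : &global_lc_messages;
    }

    int main()
    {
      std::printf("%s\n", error_message_locale(nullptr)->name); // "en_US", no crash
      return 0;
    }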
Only close stdin if it was open initially. Otherwise we may close a file
descriptor which has been reused for a different purpose (specifically for the
binlog index file in the case of this bug).
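A hedged POSIX sketch of the guard (the server's bookkeeping differs):
remember at startup whether fd 0 was open, and only close it later if it was.

    #include <unistd.h>
    #include <fcntl.h>

    int main()
    {
      // Probe fd 0 once at startup; -1 means it was not open to begin with.
      const bool stdin_was_open = fcntl(STDIN_FILENO, F_GETFD) != -1;

      // ... later, fd 0 may have been reused (e.g. for the binlog index file),
      // so closing it unconditionally would close someone else's descriptor.
      if (stdin_was_open)
        close(STDIN_FILENO);
      return 0;
    }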
MDEV-16512 Server crashes in find_field_in_table_ref on 2nd
execution of SP referring to non-existing field
The problem was that the natural join code changed TABLE_LIST and
Item_fields but did not restore the changed things if something went wrong,
and so was not able to re-execute after a failure.
Some of the problems could have been avoided if we had run
fix_fields before doing the natural join transformations.
Fixed by marking functions complete AFTER they had executed, instead of at
the start.
I also had to change some tests that checked whether Item_fields are usable.
This doesn't fix all known problems, but at least avoids some crashes.
What should be done in the near future is to mark the statement in the SP
as 'not re-executable' and force a reparse of it on next execution.
Reviewer: Sergei Petrunia <psergey@askmonty.org>
This is a typical systemd response where it tries to shut down the
joiner (due to a "timeout") before the joiner manages to complete the SST.
wsrep_sst_wait
wsrep_SE_init_wait
While waiting for the operation to finish, use mysql_cond_timedwait
instead of mysql_cond_wait, and if the operation is not finished,
extend the systemd timeout (if needed).
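A hedged sketch of the waiting pattern, using raw pthread primitives as
stand-ins for the server's mysql_cond_* wrappers and with the systemd call
left as a comment: wait in bounded slices and, while the SST is still running,
ask systemd for more time rather than blocking until it kills the joiner.

    #include <pthread.h>
    #include <time.h>

    static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool sst_complete = false;   // set by the SST thread when done

    static void wait_for_sst(void)
    {
      pthread_mutex_lock(&mtx);
      while (!sst_complete)
      {
        struct timespec abstime;
        clock_gettime(CLOCK_REALTIME, &abstime);
        abstime.tv_sec += 10;                        // wake up every 10 seconds
        pthread_cond_timedwait(&cond, &mtx, &abstime);
        if (!sst_complete)
        {
          // Still waiting: if running under systemd, extend its start timeout,
          // e.g. sd_notify(0, "EXTEND_TIMEOUT_USEC=30000000");
        }
      }
      pthread_mutex_unlock(&mtx);
    }

    int main(void)
    {
      (void) wait_for_sst;   // would be called from startup in the real server
      return 0;
    }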
dict0dict.cc
buf_LRU_drop_page_hash_for_tablespace(): Return whether any adaptive
hash index entries existed. If yes, the caller should keep retrying to
drop the adaptive hash index.
row_import_for_mysql(), row_truncate_table_for_mysql(),
row_drop_table_for_mysql(): Ensure that the adaptive hash index was
entirely dropped for the table.
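A hedged sketch of just the retry contract (the counter and names are invented
for the example): the drop function reports whether any adaptive hash index
entries were seen, and the caller loops until nothing is left.

    #include <cstdio>

    static int pages_with_ahi_entries = 3;   // stand-in for the real bookkeeping

    // Returns true when entries existed (more may remain), false when the
    // tablespace no longer has any adaptive hash index entries.
    static bool drop_page_hash_for_tablespace(void)
    {
      if (pages_with_ahi_entries == 0)
        return false;
      --pages_with_ahi_entries;
      return true;
    }

    int main()
    {
      // Callers such as DROP/TRUNCATE/IMPORT keep retrying until the drop completes.
      while (drop_page_hash_for_tablespace())
        std::puts("adaptive hash index entries found, retrying");
      return 0;
    }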
The observed and described execution time difference of the
partitioned engine between master and slave was caused by excessive
invocation of base_engine::rnd_init, which was done also for partitions
not involved in the Rows-event operation.
The bug's slave slowdown therefore scales with the number of partitions.
Fixed by applying an upstream patch.
References:
----------
https://bugs.mysql.com/bug.php?id=73648
Bug#25687813 REPLICATION REGRESSION WITH RBR AND PARTITIONED TABLES
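A hedged sketch of the idea behind the upstream fix (the member and handler
names are invented for the example): only partitions actually touched by the
Rows event get their storage-engine scan initialized.

    #include <cstdio>
    #include <vector>

    struct Partition
    {
      bool involved_in_rows_event;   // e.g. marked via a used-partitions bitmap
      void rnd_init() { std::puts("rnd_init called"); }
    };

    // Before the fix the engine-level rnd_init ran for every partition; after
    // it, uninvolved partitions are skipped, so the cost no longer scales with
    // the total number of partitions.
    static void init_scan(std::vector<Partition> &parts)
    {
      for (Partition &p : parts)
        if (p.involved_in_rows_event)
          p.rnd_init();
    }

    int main()
    {
      std::vector<Partition> parts{{true}, {false}, {false}, {true}};
      init_scan(parts);   // prints twice, not four times
      return 0;
    }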
This reverts commit d39629f01e.
Because running mtr for many hours with no output whatsoever
is not really what we should do.
And in 5.5 `make test` just works anyway, nothing to fix here.
In this case we are accessing incorrect memory when we have mergeable semi-joins.
When we have mergeable semi-joins, the parent select will have a table count
of all the tables in that select plus all the tables involved in the IN-subquery.
But this table count does not include the "sjm table" (it only includes the inner and outer tables),
denoted as <subquery#> in EXPLAIN.