recalculate long unique hash in Write_rows_log_event
and Update_rows_log_event.
Normally, generated columns (stored and indexed virtual)
are deterministic and their values don't need to be recalculated
on the slave, as they're already present in the row image.
But the long unique hash function was changed in MDEV-27653,
so a row event from an old master will have the old hash,
while a table created on the new slave will need the new hash.
Field_varstring::get_copy_func() did not take into account
that functions do_varstring1[_mb], do_varstring2[_mb] do not support
compressed data.
Changing the return value of Field_varstring::get_copy_func()
to `do_field_string` if there is compression and truncation
at the same time. This fixes the problem, so now it works as follows:
- val_str() uncompresses the data
- The prefix is then calculated on the uncompressed data
Additionally, introducing two new copying functions
- do_varstring1_no_truncation()
- do_varstring2_no_truncation()
Using the new copying functions in the cases when:
- a Field_varstring with length_bytes==1 is changing to a longer
Field_varstring with length_bytes==1
- a Field_varstring with length_bytes==2 is changing to a longer
Field_varstring with length_bytes==2
In these cases we need to care about neither compression nor
multi-byte prefixes: the entire data gets fully copied
from the source column to the target column as is.
This is a new optimization of sorts, but it was also needed
to preserve existing MTR test results.
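In sketch form, the dispatch described above looks like this (a
self-contained illustration with hypothetical names and parameters, not
the actual Field_varstring::get_copy_func() code):

  enum class CopyFunc
  {
    field_string,              // generic: val_str() uncompresses, then prefix
    varstring1_no_truncation,  // length_bytes==1, target is not shorter
    varstring2_no_truncation,  // length_bytes==2, target is not shorter
    varstring_generic          // everything else
  };

  static CopyFunc pick_copy_func(bool compressed, bool truncation,
                                 int src_length_bytes, int dst_length_bytes,
                                 bool target_is_longer)
  {
    if (compressed && truncation)
      return CopyFunc::field_string;  // uncompress first, then cut the prefix
    if (src_length_bytes == dst_length_bytes && target_is_longer)
      return src_length_bytes == 1 ? CopyFunc::varstring1_no_truncation
                                   : CopyFunc::varstring2_no_truncation;
    return CopyFunc::varstring_generic;
  }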
This is also related to
MDEV-31348 Assertion `last_key_entry >= end_pos' failed in virtual bool
JOIN_CACHE_HASHED::put_record()
Valgrind exposed a problem with the join_cache for hash joins:
==25636== Conditional jump or move depends on uninitialised value(s)
==25636== at 0xA8FF4E: JOIN_CACHE_HASHED::init_hash_table()
(sql_join_cache.cc:2901)
The reason for this was that avg_record_length contained a random value
if one had used SET optimizer_switch='optimize_join_buffer_size=off'.
This caused either 'random size' memory to be allocated (up to
join_buffer_size), which can increase memory usage, or, if avg_record_length
was less than the row size, memory overwrites in thd->mem_root, which is
bad.
Fixed by setting avg_record_length in JOIN_CACHE_HASHED::init()
before it's used.
There is no test case for MDEV-31893 as the Valgrind run of
join_cache_notasan covers it.
I added a test case for MDEV-31348.
don't construct open ranges from prefix blob keys for < (less than),
just as is already done for > (greater than),
because a prefix KEY_PART doesn't create a prefix Field for blobs
(see open_table_from_share() near "Create a new field for the key part"),
so stored_field_cmp_to_item() will compare the original field to the
value without taking the prefix length into account.
This patch adds, for "--ps-protocol", a second execution
of SELECT queries.
It also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in test cases.
Also fixes:
MDEV-30982 UBSAN: runtime error: null pointer passed as argument 2, which is declared to never be null in my_strnncoll_binary on DELETE
Calling memcmp() with a NULL pointer is undefined behaviour
according to the C standard, even if the length argument is 0.
Adding tests for length==0 before calling memcmp() in:
- my_strnncoll_binary()
- my_strnncoll_8bit_bin()
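A minimal standalone sketch of the guard (hypothetical function name;
the real my_strnncoll_binary() signature differs):

  #include <cstring>

  static int binary_cmp_sketch(const unsigned char *a, size_t a_len,
                               const unsigned char *b, size_t b_len)
  {
    size_t len= a_len < b_len ? a_len : b_len;
    // len == 0 may come with NULL pointers, and memcmp(NULL, ..., 0)
    // is still undefined behaviour, so test the length first.
    int cmp= len ? memcmp(a, b, len) : 0;
    if (cmp)
      return cmp;
    return a_len < b_len ? -1 : a_len > b_len ? 1 : 0;
  }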
Problem:
Item_func_conv::val_str() copied the ASCII string with the numeric base
conversion result directly to the function result string. In case of a
tricky character set (e.g. utf32) it produced an ill-formed string.
Fix:
Copy the base conversion result to the function result as is only if
the function character set is ASCII compatible, go through a
character set conversion otherwise.
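As a minimal illustration of why the conversion is needed (standalone
sketch, not the server code): the ASCII bytes of a CONV() result are not
a valid utf32 string; every character must be widened to 4 bytes first:

  #include <string>

  // Widen an ASCII result (e.g. "FF" from CONV(255,10,16)) to UTF-32;
  // copying the 8-bit bytes verbatim would produce an ill-formed string.
  static std::u32string ascii_to_utf32(const std::string &ascii)
  {
    std::u32string out;
    for (unsigned char c : ascii)
      out.push_back((char32_t) c);
    return out;
  }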
Change history in the affected code:
- Since 10.4.8 (MDEV-20397 and MDEV-23311), functions ROUND(), CEILING(),
FLOOR() return a TIME value for a TIME input.
- Since 10.4.14 (MDEV-23525), MIN() and MAX() calculate a result for a TIME
input using val_native() rather than val_str().
Problem:
The patch for MDEV-23525 did not take into account combinations like
MIN(ROUND(time)), MAX(FLOOR(time)), etc.
MIN() and MAX() with ROUND(time), CEILING(time), FLOOR(time) as an argument
call the method val_native() of the underlying classes Item_func_round and
Item_func_int_val. However, these classes implemented the method val_native()
simply as DBUG_ASSERT(0).
Fix:
This patch adds a TIME-specific code inside:
- Item_func_round::val_native()
- Item_func_int_val::val_native()
still with DBUG_ASSERT(0) for all other data types,
as other data types do not call val_native() of these classes.
We'll need a more generic solution eventually, e.g.
turn Item_func_round and Item_func_int_val into Item_handled_func.
However, this change would be too risky for 10.4 at this point.
The problematic query revealed a bug in the window functions sorting
optimization. When multiple window functions are present in a query,
we order the sorts (as defined by PARTITION BY and ORDER BY) from
generic to specific.
SELECT RANK() OVER (ORDER BY const_col) as r1,
RANK() OVER (ORDER BY const_col, a) as r2,
RANK() OVER (PARTITION BY c) as r3,
RANK() OVER (PARTITION BY c ORDER BY b) as r4
FROM table;
For these functions, the sorts we need to do for window function
computations are: [(const_col), (const_col, a)] and [(c), (c, b)].
Instead of doing 4 different sorts, the sorts grouped within [] are
compatible and we can use the most *specific* sort to cover both window
functions.
The bug was caused by an incorrect flagging of which sort is most
specific for a compatible group of functions. In our specific test case,
instead of picking (const_col, a) as the most specific sort, it would
only sort by (const_col), which led to wrong results for the rank functions.
By ensuring that we pick the last sort key before an "incompatible sort"
flag is met in our "ordered array of sorting specifications", we
guarantee correct results.
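A small self-contained sketch of the compatibility rule (illustrative
only, not the server's data structures): one sort covers another when
the latter is a leading prefix of it, so the longest member of a
compatible group is the one that must be picked:

  #include <algorithm>
  #include <string>
  #include <vector>

  using SortSpec= std::vector<std::string>;  // column names of one sort key

  static bool is_prefix_of(const SortSpec &a, const SortSpec &b)
  {
    return a.size() <= b.size() && std::equal(a.begin(), a.end(), b.begin());
  }

  int main()
  {
    SortSpec s1{"const_col"}, s2{"const_col", "a"};
    // s1 is a prefix of s2, so sorting once by the more specific
    // (const_col, a) serves both r1 and r2; the bug effectively
    // picked the less specific (const_col) instead.
    SortSpec most_specific= is_prefix_of(s1, s2) ? s2 : s1;
    return most_specific.size() == 2 ? 0 : 1;
  }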
If a procedure is changed in one connection while another connection has
already called the initial version of the procedure, a query to
INFORMATION_SCHEMA.PARAMETERS would use obsolete information from the sp
cache for that connection. That happens because the cache-invalidating
method only increments the cache version and does not flush (all) the
cache(s), while changing a procedure only invalidates the cache and
removes the procedure's cache entry from the local thread cache only.
The fix adds a check that the sp info obtained from the cache for forming
the results of the I_S query is not obsolete, and does not use it if
it is.
The test has been added to main.information_schema. It changes the SP in
one connection and ensures that the change is seen by a query to
I_S.PARAMETERS in another connection that has already called the
procedure before the change.
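A minimal sketch of this version-based staleness check (hypothetical
names; the real sp cache code differs):

  #include <atomic>

  static std::atomic<unsigned long> cache_version{1};

  struct sp_entry_sketch
  {
    unsigned long version;  // version the entry was cached under
    // ... cached routine metadata ...
  };

  // Invalidation is cheap: it only bumps the global version and does
  // not flush per-connection caches.
  static void invalidate_cache() { ++cache_version; }

  // The fix's idea: refuse cache entries older than the current
  // version instead of trusting whatever the local cache holds.
  static bool is_obsolete(const sp_entry_sketch &e)
  {
    return e.version < cache_version.load();
  }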
"mtr --view-protocol func_math" failed because of a too long
column names imlicitly generated for the underlying expressions.
With --view-protocol they were replaced to "Name_exp_1".
Adding column aliases for these expressions.
'long long int'; cast to an unsigned type to negate this value ..
to itself in Item_func_mul::int_op and Item_func_round::int_op
Problems:
The code in multiple places in the following methods:
- Item_func_mul::int_op()
- Item_func_int_div::val_int()
- Item_func_mod::int_op()
- Item_func_round::int_op()
did not properly check for corner values LONGLONG_MIN
and (LONGLONG_MAX+1) before doing negation.
This caused UBSAN to complain about undefined behaviour.
Fix summary:
- Adding helper classes ULonglong, ULonglong_null, ULonglong_hybrid
(in addition to their signed counterparts in sql/sql_type_int.h).
- Moving the code performing multiplication of ulonglong numbers
from Item_func_mul::int_op() to ULonglong_hybrid::ullmul().
- Moving the code responsible for extracting absolute values
from negative numbers to Longlong::abs().
It makes sure to perform negation without undefined behaviour:
LONGLONG_MIN is handled in a special way.
- Moving negation related code to ULonglong::operator-().
It makes sure to perform negation without undefined behaviour:
(LONGLONG_MAX + 1) is handled in a special way.
(See the sketch after this summary.)
- Moving signed<=>unsigned conversion code to
Longlong_hybrid::val_int() and ULonglong_hybrid::val_int().
- Reusing old and new sql_type_int.h classes in multiple
places in Item_func_xxx::int_op().
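A minimal sketch of the UB-free negation idea behind Longlong::abs()
and ULonglong::operator-() (illustrative, not the actual sql_type_int.h
code):

  #include <climits>

  // Plain "-v" is undefined for v == LLONG_MIN; converting to the
  // unsigned domain first is well defined because unsigned arithmetic
  // wraps modulo 2^64.
  static unsigned long long ull_abs(long long v)
  {
    return v >= 0 ? (unsigned long long) v
                  : 0ULL - (unsigned long long) v;
  }
  // ull_abs(LLONG_MIN) == 9223372036854775808ULL, with no UB.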
Fix details (explain how sql_type_int.h classes are reused):
- Instead of straight negation of negative "longlong" arguments
*before* performing unsigned multiplication,
Item_func_mul::int_op() now calls ULonglong_null::ullmul()
using Longlong_hybrid_null::abs() to pass arguments.
This fixes undefined behavior N1.
- Instead of straight negation of "ulonglong" result
*after* performing unsigned multiplication,
Item_func_mul::int_op() now calls ULonglong_hybrid::val_int(),
which recursively calls ULonglong::operator-().
This fixes undefined behavior N2.
- Removing duplicate negating code from Item_func_mod::int_op().
Using ULonglong_hybrid::val_int() instead.
This fixes undefined behaviour N3.
- Removing literal "longlong" negation from Item_func_round::int_op().
Using Longlong::abs() instead, which correctly handles LONGLONG_MIN.
This fixes undefined behaviour N4.
- Removing the duplicate (negation related) code from
Item_func_int_div::val_int(). Reusing class ULonglong_hybrid.
There was no undefined behaviour here.
However, this change revealed a bug in
"-9223372036854775808 DIV 1".
The removed negation code appeared to be incorrect when
negating +9223372036854775808. It returned the "out of range" error.
ULonglong_hybrid::operator-() now handles all values correctly
and returns +9223372036854775808 as a negation for -9223372036854775808.
Re-recording the previously wrong results for
SELECT -9223372036854775808 DIV 1;
Now instead of "out of range", it returns -9223372036854775808,
which is the smallest possible value for the expression data type
(signed) BIGINT.
- Removing "no UBSAN" branch from Item_func_splus::int_opt()
and Item_func_minus::int_opt(), as it made UBSAN happy but
in RelWithDebInfo some MTR tests started to fail.
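A minimal sketch of overflow-checked unsigned multiplication in the
spirit of the ullmul() calls above (illustrative, not the actual code):

  #include <cstdint>
  #include <optional>

  // Returns nullopt when a * b does not fit into 64 bits; the caller
  // can then raise the "out of range" error (a NULL result in the
  // *_null classes' terms).
  static std::optional<uint64_t> ullmul_sketch(uint64_t a, uint64_t b)
  {
    if (a != 0 && b > UINT64_MAX / a)  // would overflow 64 bits
      return std::nullopt;
    return a * b;
  }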
The code in choose_best_splitting() assumed that the join prefix is
in join->positions[].
This is not necessarily the case. This function might be called when
the join prefix is in join->best_positions[], too.
Follow the approach from best_access_path(), which calls this function:
pass the current join prefix as an argument
("const POSITION *join_positions") and use that.
This bug could affect queries containing a subquery over splittable derived
tables and having outer references in its WHERE clause. If such a subquery
contained an equality condition whose left part was a reference to a column
of the derived table and whose right part referred only to outer columns,
then the server crashed in the function st_join_table::choose_best_splitting().
The crashing code was added in the commit ce7ffe61d8
that made the code of the function sensitive to the presence of the flag
OUTER_REF_TABLE_BIT in the KEYUSE_EXT::needed_in_prefix fields.
The field needed_in_prefix of the KEYUSE_EXT structure should not contain
table maps with OUTER_REF_TABLE_BIT or RAND_TABLE_BIT.
Note that this fix is quite conservative: for affected queries it just
returns the query plans that were used before the above-mentioned commit.
In fact the equalities causing crashes should be pushed into derived tables
without any usage of split optimization.
Approved by Sergei Petrunia <sergey@mariadb.com>
EXPLAIN EXTENDED should always print the field item used in the left part
of an equality expression from the SET clause of an update statement as a
reference to table column.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The problem was that JOIN_CACHE::alloc_buffer() did not check whether the
given join_buffer_value is less than what the query requires.
Added a check for this and disabled the join cache if it cannot be used.
The problem was an attempt to access JOIN_TAB::select, which is set to NULL
when filesort is used. The correct way is to access either
JOIN_TAB::select or JOIN_TAB::filesort->select, depending on whether
filesort is used.
This commit introduces the member function JOIN_TAB::get_sql_select(),
encapsulating that check, so the code duplication is eliminated.
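In sketch form (stub types; the real JOIN_TAB layout differs), the
accessor's idea is:

  struct SQL_SELECT;
  struct Filesort { SQL_SELECT *select; };

  struct JOIN_TAB_sketch
  {
    SQL_SELECT *select;    // NULL when filesort owns the select
    Filesort   *filesort;

    // One place for the "which select is current?" decision:
    SQL_SELECT *get_sql_select() const
    {
      return filesort ? filesort->select : select;
    }
  };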
The new condition (s->table->quick_keys.is_set(best_key->key))
was added to best_access_path() to eliminate a Valgrind error.
The cause of that error was using TRASH_ALLOC(quick_key_parts)
instead of bzero(quick_key_parts); hence, accessing
s->table->quick_key_parts[best_key->key] without prior checking
for quick_keys.is_set() might have caused reading "dirty" memory.
collations
Fix by Alexey Botchkov
The 'value_len' was calculated wrongly for multibyte charsets. In the
read_strn() function we get the length of the string including the final '"'
character, so we have to subtract its length from the value_len. And a
length of 1 isn't correct for the ucs2 charset (it must be 2).
ROW variables did not get assigned from subselects in these contexts:
BEGIN
DECLARE r ROW TYPE OF t1;
SET r=(SELECT * FROM t1 WHERE a=1);
END;
BEGIN
DECLARE r ROW TYPE OF t1 DEFAULT (SELECT * FROM t1 WHERE a=1);
END;
All fields of the ROW variable remained NULL.
lower_case_table_names=2 means "table names and database names are
stored as declared, but they are compared in lowercase".
But names of objects in grants are stored in lowercase for any value
of lower_case_table_names. This caused an error when checking grants
for objects containing uppercase letters, since table_hash_search()
didn't take the lower_case_table_names value into account.
This bug affected EXPLAIN EXTENDED command for single-table DELETE that
used an IN subquery in its WHERE clause. A crash happened if the optimizer
chose to employ index_subquery or unique_subquery access when processing
such command.
The crash happened when the command tried to print the transformed query.
In the current code of 10.4 for single-table DELETE statements the output
of any explain command is produced after the join structures of all used
subqueries have been destroyed. JOIN::destroy() sets the field tab of the
JOIN_TAB structures created for subquery tables to NULL. As a result,
subselect_indexsubquery_engine::print()
cannot use this field to get the alias name of the joined table.
This patch suggests to use the field TABLE_LIST::TAB that can be accessed
from JOIN_TAB::tab_list to get the alias name of the joined table.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- main.selectivity failed because one test produced a different result with
embedded (missing feature). Fixed by moving the failing part to
selectivity_notembedded.
- Disabled maria.encrypt-no-key for embedded, as embedded does not support
encryption.
- Moved test from join_cache to join_cache_notasan that tried to alloc()
a buffer bigger than available memory.
The old code set max_records to either the number of rows
(partial_join_cardinality) or the memory size (join_buffer_space_limit),
which did not make sense.
Fixed by setting max_records to the number of rows that fit into
join_buffer_size.
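In sketch form (hypothetical names), the new sizing rule is simply:

  #include <cstdint>

  static uint64_t max_cached_records(uint64_t join_buffer_size,
                                     uint64_t avg_record_length)
  {
    // Cap the record count by what actually fits into the buffer,
    // instead of using a row count or a byte limit as a record count.
    return avg_record_length ? join_buffer_size / avg_record_length : 0;
  }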
Other things:
- Initialize buffer cache values in JOIN_CACHE constructors (safety)
Reviewer: Sergei Petrunia <sergey@mariadb.com>
The problem, introduced in the patch for MDEV-26301:
When check_join_cache_usage() decides not to use the join buffer, it must
adjust the access method accordingly. For BNL-H joins this means switching
from pseudo-"ref access" (with index=MAX_KEY) to some other access method.
Failing to do this will cause assertions down the line when code that is
not aware of BNL-H will try to initialize index use for ref access with
index=MAX_KEY.
The fix is to follow the regular code path to disable the join buffer for
the join_tab ("goto no_join_cache") instead of just returning from
check_join_cache_usage().
The problem was that join_buffer_size conflicted with
join_buffer_space_limit, which caused the query to be run without the join
buffer. However, this caused wrong results, as the optimizer assumed
that hash+join buffer would ensure that the equi-join condition
would be satisfied, and didn't check it itself.
Fixed by not using join_buffer_space_limit when
optimize_join_buffer_size=off. This matches the documentation at
https://mariadb.com/kb/en/block-based-join-algorithms
Other things:
- Removed not used variable JOIN_TAB::join_buffer_size_limit
- Give an error if we cannot allocate a join buffer. This can
only happen if the join_buffer variables are wrongly configured or
we are running out of memory.
In the future, instead of returning an error, we could properly
convert the query plan that uses BNL-H join into one that doesn't
use join buffering:
make sure the equi-join condition is checked where appropriate.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
select_insert::store_values() must reset the
has_value_set bitmap before every row, just like mysql_insert() does,
because ON DUPLICATE KEY UPDATE and triggers modify it.
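A minimal standalone sketch of the per-row reset (illustrative; the
server uses MY_BITMAP, not std::bitset):

  #include <bitset>
  #include <cstddef>
  #include <vector>

  static void store_rows_sketch(const std::vector<std::vector<int>> &rows)
  {
    std::bitset<64> has_value_set;  // one "explicitly assigned" flag
                                    // per column (sketch: <= 64 columns)
    for (const auto &row : rows)
    {
      has_value_set.reset();        // must happen before EVERY row
      for (size_t i= 0; i < row.size(); i++)
        has_value_set.set(i);       // column i got an explicit value
      // ... store the row; ON DUPLICATE KEY UPDATE and triggers may
      // set more bits, which must not leak into the next row ...
    }
  }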