The presumed reason for the error is that the file was opened
by a 3rd-party antivirus or backup program, causing ERROR_SHARING_VIOLATION
on rename.
The fix, actually a workaround, is to retry MoveFileEx a couple of times
before finally giving up. We expect 3rd-party programs not to hold the file
for an extended time.
(This commit is for 10.3 and later branches)
When a Rows-log event that is not marked with the STMT_END flag (A) is followed
by one that is so marked (B), mysqlbinlog mixes up the base64-encoded rows events
with their pseudo-SQL representation produced by the --verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of BINLOG statements malformed this way could be found in
binlog_row_annotate.result, which gets corrected by this patch.
The issue is fixed by introducing an auxiliary IO_CACHE that holds the verbose
comments until the terminating STMT_END event is found. The new cache is
emptied out after the two pre-existing ones at that point.
For the above case the output is now correctly produced as:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, to Venkatesh Duggirala, who produced an upstream patch whose
idea is exploited here, and to the MDEV-23077 reporter LukeXwang, who
also contributed a piece of a patch aiming at this issue.
When duplicates are removed from a table using a hash, a record that is a
duplicate is marked as deleted. The handler API checks whether the record is
deleted and returns the error flag HA_ERR_RECORD_DELETED.
When we scan over the table and the thread is not killed, we skip the
records marked as HA_ERR_RECORD_DELETED.
The issue is that when a query is aborted by the user (this happens when the
LIMIT for ROWS EXAMINED is exceeded), the scan over the table does not skip
the records for which HA_ERR_RECORD_DELETED is returned; it just returns the
error flag HA_ERR_ABORTED_BY_USER. This error flag is not checked at the
upper level, and hence we hit the assert.
If the query is aborted by the user we should just skip reading rows and return
control to the upper levels of execution.
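A hedged sketch of the triggering query shape (table and column names are
hypothetical; duplicate removal via a hash happens e.g. for DISTINCT):
CREATE TABLE t1 (a INT);
-- ... fill t1 with many rows containing duplicate values of a ...
SELECT DISTINCT a FROM t1 LIMIT ROWS EXAMINED 100;
-- If the examined-rows limit is exceeded after some temp-table records
-- were already marked as deleted, the scan used to return
-- HA_ERR_ABORTED_BY_USER without skipping those records.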
Field::make_new_field() resets the invisible property (needed for
"CREATE .. SELECT", for example). Restore the invisible property in
Delayed_insert::get_local_table() (unireg_check works by the same
principle).
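A hypothetical illustration of the affected path: an INVISIBLE column passing
through a delayed insert, where get_local_table() copies the fields for the
delayed-insert handler thread:
CREATE TABLE t1 (a INT, b INT INVISIBLE) ENGINE=MyISAM;
INSERT DELAYED INTO t1 (a) VALUES (1);
-- Without the fix, the copied Field for b lost its invisible property.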
Problem:
When calculating MIN() and MAX() in a query with GROUP BY, like this:
SELECT MIN(time_expr), MAX(time_expr) FROM t1 GROUP BY i;
the code in Item_sum_min_max::update_field() erroneously used
string format comparison, therefore '100:20:30' was considered
smaller than '10:20:30'.
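A minimal repro sketch (the table contents are illustrative):
CREATE TABLE t1 (i INT, time_expr TIME);
INSERT INTO t1 VALUES (1,'100:20:30'),(1,'10:20:30');
SELECT MIN(time_expr), MAX(time_expr) FROM t1 GROUP BY i;
-- Before the fix MIN() returned '100:20:30' (string-wise smaller);
-- after the fix it correctly returns '10:20:30'.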
Fix:
1. Implementing low level "native" related methods in class Time:
Time::Time(const Native &native) - convert native to Time
Time::to_native(Native *to, uint decimals) - convert Time to native
The "native" binary representation for TIME is equal to
the binary data format of Field_timef, which is used to
store TIME when mysql56_temporal_format is ON (default).
2. Implementing Type_handler_time_common "native" related methods:
Type_handler_time_common::cmp_native()
Type_handler_time_common::Item_val_native_with_conversion()
Type_handler_time_common::Item_val_native_with_conversion_result()
Type_handler_time_common::Item_param_val_native()
3. Implementing missing "native representation" related methods
in Field_time and Field_timef:
Field_time::store_native()
Field_time::val_native()
Field_timef::store_native()
Field_timef::val_native()
4. Implementing missing "native" related methods in all Items
that can have the TIME data type:
Item_timefunc::val_native()
Item_name_const::val_native()
Item_time_literal::val_native()
Item_cache_time::val_native()
Item_handled_func::val_native()
5. Marking Type_handler_time_common as "native ready".
So now Item_sum_min_max::update_field() calculates
values using min_max_update_native_field(),
which uses native binary representation rather than string representation.
Before this change, only the TIMESTAMP data type used native
representation to calculate MIN() and MAX().
Benchmarks (see more details in MDEV):
This change not only fixes the wrong result, but also
makes a "SELECT .. MAX.. GROUP BY .." query faster:
# TIME(0)
CREATE TABLE t1 (id INT, time_col TIME) ENGINE=HEAP;
INSERT INTO t1 VALUES (1,'10:10:10'); -- repeat this 1m times
SELECT id, MAX(time_col) FROM t1 GROUP BY id;
MySQL80: 0.159 sec
10.3: 0.108 sec
10.4: 0.094 sec (fixed)
# TIME(6):
CREATE TABLE t1 (id INT, time_col TIME(6)) ENGINE=HEAP;
INSERT INTO t1 VALUES (1,'10:10:10.999999'); -- repeat this 1m times
SELECT id, MAX(time_col) FROM t1 GROUP BY id;
MySQL80: 0.154 sec
10.3: 0.135 sec
10.4: 0.093 sec (fixed)
In the merge eae968f62d it turned out that
I had accidentally initiated an in-source build in the past, and
$MYSQL_TZINFO_TO_SQL was pointing to a stale copy of the executable in
the source directory instead of the correct one in the build directory.
Problem:
The crash happened in FORMAT(double, dec>=31, 'de_DE').
The patch for MDEV-23118 (commit 0041dacc1b)
did not take into account that String::set_real() has a limit of 31
(FLOATING_POINT_DECIMALS) fractional digits. So for the range of 31..38
digits, set_real() switches between:
- my_fcvt() - decimal point notation, e.g. 1.9999999999
- my_gcvt() - scientific notation, e.g. 1e22
my_gcvt() returned a shorter string than Item_func_format::val_str_ascii()
expected to get after the my_fcvt() call, so it crashed on an assert.
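A hypothetical repro shape, reusing the 1e22 example value from above:
SELECT FORMAT(1e22, 31, 'de_DE');
-- set_real() picks the my_gcvt() notation ('1e22'), which is shorter than
-- the my_fcvt()-style string val_str_ascii() expected, firing the assert.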
Solution:
We cannot extend set_real() to use the my_fcvt() mode for the range of
31..38 fractional digits, because set_real() is used in a lot of places
and such a change would break too many things.
Introducing String::set_fcvt(), which always prints using my_fcvt()
for the whole range of decimals 0..38 supported by the FORMAT() function.
FORMAT() can print more integer digits (than the argument has)
if rounding happens:
FORMAT(9.9,0) -> '10'
The old code did not take this into account.
Fix:
1. One extra digit is needed in case of rounding
- If args[1] is a non-NULL constant, then reserve space for one extra integer
digit if the requested number of decimals is less than args[0]->decimals.
- Otherwise, reserve space for one extra integer digit if
args[0]->decimals is not 0, because rounding can potentially happen
(depending on the exact data in arguments).
2. One extra digit is also needed if the argument has no integer digits,
e.g. in a data type like DECIMAL(38,38).
The conditions 1 and 2 are ORed.
3. Fixing FORMAT_MAX_DECIMALS from 30 to 38. This was forgotten in 10.2.1
(when the limit for the number of fractional digits in DECIMAL was extended).
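Illustrations of cases 1 and 2 above (the values are illustrative):
SELECT FORMAT(9.9, 0);
-- rounding adds an integer digit: '10'
SELECT FORMAT(CAST(0.99 AS DECIMAL(38,38)), 1);
-- the type has no integer digits, yet rounding produces '1.0'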
Lex_input_stream::scan_ident_delimited() could go beyond the end
of the input when a starting backtick (`) delimiter did not have a
corresponding ending backtick.
Fix: catch the case when yyGet() returns 0, which means
either end-of-query or a straight 0x00 byte inside backticks,
and make the parser fail on a syntax error, reporting the opening
backtick as the place of the syntax error.
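An illustrative query with an unterminated identifier:
SELECT `a FROM t1;
-- The opening backtick has no closing counterpart; the parser now stops
-- at the end of the query and reports a syntax error at the backtick.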
In the case of the 'filename' character set, in a script like this:
SET CHARACTER_SET_CLIENT=17; -- 17 is 'filename'
SELECT doc.`Children`.0 FROM t1;
the closing backtick was not recognized as such, because my_charlen() returns 0
for a straight backtick (backticks must normally be encoded as @0060 in 'filename').
The same fix works for 'filename': the execution skips the backtick
and reaches the end of the query, then yyGet() returns 0.
This fix is OK for now. But eventually 'filename' should either be disallowed
as a parser character set, or fixed to handle encoded punctuation properly.
For an expression GREATEST(string,date), e.g.:
SELECT GREATEST('1', CAST('2020-12-12' AS DATE));
Type_handler_temporal_result::Item_func_min_max_fix_attributes()
incorrectly evaluated decimals as 6 (as for DATETIME).
Adding a separate virtual implementation:
Type_handler_date_common::Item_func_min_max_fix_attributes()
This also makes the code simpler.
The code in Item_func_int_val::fix_length_and_dec_int_or_decimal()
miscalculated the result data type for FLOOR()/CEIL(), so, for example,
a decimal(38,10) input created a decimal(28,0) result.
That was not correct, because one extra integer digit is needed.
floor(-9.9) -> -10
ceil(9.9) -> 10
Rewriting the code in a more straightforward way.
Additional changes:
- FLOOR() now takes into account the presence of the UNSIGNED
flag of the argument: FLOOR(unsigned decimal) does not need an extra digit.
- FLOOR()/CEILING() now preserve the unsigned flag if the result
data type is decimal.
These changes give nicer data types.
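A hedged sketch of the resulting types, assuming the corrected result type
for a decimal(38,10) input is decimal(29,0) (the table names are hypothetical):
CREATE TABLE t1 (a DECIMAL(38,10));
CREATE TABLE t2 AS SELECT FLOOR(a) AS fa, CEIL(a) AS ca FROM t1;
-- fa/ca used to be decimal(28,0); with the extra integer digit they
-- have room for results like FLOOR(-9.9) -> -10.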
Changing the following in case of *INT and hex hybrid input:
- ROUND(x,NULL) creates a column with the same type as x.
The old code created a DOUBLE column, which was not relevant at all.
This change simplifies the code a lot.
- ROUND(x,non_constant) creates a column of the INT, BIGINT or DECIMAL
data type (depending on the exact type of x).
The old code created a column of the DOUBLE data type,
which led to precision loss. Hence MDEV-23366.
- ROUND(bigint_30,negative_constant) creates a column of the DECIMAL(30,0)
data type. The old code created DECIMAL(29,0), which looked strange:
the data type was promoted to a higher one, but the maximum length was reduced.
Now the length attribute is preserved.
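A hedged sketch of the new column types (the names are hypothetical):
CREATE TABLE t1 (i BIGINT, n INT);
CREATE TABLE t2 AS SELECT ROUND(i,NULL) AS r1, ROUND(i,n) AS r2 FROM t1;
-- r1 now has the same type as i (BIGINT) instead of DOUBLE;
-- r2 is now INT, BIGINT or DECIMAL depending on the type of i,
-- instead of a precision-losing DOUBLE.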
Failed or late ER_PERIOD_FIELD_WRONG_ATTRIBUTES upon an attempt to create
an existing table.
Analysis: the error state is not stored when the field is checked in
Table_period_info::check_field().
Fix: store the error state by setting res to true.
Item_func_round::fix_arg_int() did not take into account cases
when the result of ROUND(bigint_subject,negative_precision)
could go outside of the BIGINT range. The old code only incremented
max_length, but did not change the data type.
Fixing to extend the data type (together with the max_length increment).
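An illustrative case where the result leaves the BIGINT range:
SELECT ROUND(9223372036854775807, -3);
-- 9223372036854776000 does not fit into BIGINT, so incrementing
-- max_length alone is not enough; the data type must be extended.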
The condition in Item_func_round::fix_arg_int() to decide whether:
- we can preserve the data type of args[0] versus
- the result can go outside of the args[0] data type
was wrong.
The data type of the first argument can be preserved in these cases:
- TRUNCATE(x, n)
- ROUND(x, n>=0)
Fixing the condition accordingly.
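A hedged illustration of the preserved cases (the names are hypothetical):
SELECT TRUNCATE(i, -5) FROM t1;  -- truncation never grows the value
SELECT ROUND(i, 2) FROM t1;      -- n >= 0 cannot overflow an integer i
-- ROUND(i, negative_n) may round up into a higher digit, so it cannot
-- always preserve the data type of i.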
On MariaDB 10.4 (commit 4db4b77365),
the query results would not be sorted, which creates random result
differences. Let us explicitly sort the results already in 10.3
in order to avoid future merge trouble.
- Adding optional qualifiers to data types:
CREATE TABLE t1 (a schema.DATE);
Qualifiers now work only for three pre-defined schemas:
mariadb_schema
oracle_schema
maxdb_schema
These schemas are virtual (hard-coded) for now, but may turn into real
databases on disk in the future.
- mariadb_schema.TYPE now always resolves to a true MariaDB data
type TYPE without sql_mode specific translations.
- oracle_schema.DATE translates to MariaDB DATETIME.
- maxdb_schema.TIMESTAMP translates to MariaDB DATETIME.
- Fixing SHOW CREATE TABLE to use a qualifier for a data type TYPE
if the current sql_mode translates TYPE to something else.
The above changes fix the reported problem, so this script:
SET sql_mode=ORACLE;
CREATE TABLE t2 AS SELECT mariadb_date_column FROM t1;
is now replicated as:
SET sql_mode=ORACLE;
CREATE TABLE t2 (mariadb_date_column mariadb_schema.DATE);
and the slave can unambiguously treat DATE as the true MariaDB DATE
without ORACLE specific translation to DATETIME.
Similarly,
SET sql_mode=MAXDB;
CREATE TABLE t2 AS SELECT mariadb_timestamp_column FROM t1;
is now replicated as:
SET sql_mode=MAXDB;
CREATE TABLE t2 (mariadb_timestamp_column mariadb_schema.TIMESTAMP);
so the slave treats TIMESTAMP as the true MariaDB TIMESTAMP
without MAXDB specific translation to DATETIME.
Fixing ROUND(date,0), TRUNCATE(date,x), FLOOR(date), CEILING(date)
to return the `int(8) unsigned` data type.
Details:
1. Cleanup: moving virtual implementations
- Type_handler_temporal_result::Item_func_int_val_fix_length_and_dec()
- Type_handler_temporal_result::Item_func_round_fix_length_and_dec()
to Type_handler_date_common. Other temporal data type handlers
override these methods anyway, so they were in fact DATE specific.
This change makes the code clearer.
2. Backporting DTCollation_numeric from 10.5, to reuse the code easier.
3. Adding the `preferred_attrs` argument to Item_func_round::fix_arg_int(). Now
Type_handler_xxx::Item_func_round_val_fix_length_and_dec() work as follows:
- The INT-alike and YEAR handlers copy preferred_attrs from args[0].
- The DATE handler passes explicit attributes, to get `int(8) unsigned`.
- The hex hybrid handler passes NULL, so fix_arg_int() calculates attributes.
4. Type_handler_date_common::Item_func_int_val_fix_length_and_dec()
now sets the type handler and attributes to get `int(8) unsigned`.
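An illustrative check of the new result type (the table name is hypothetical):
CREATE TABLE t1 AS SELECT FLOOR(DATE'2020-12-31') AS f;
-- f now has the int(8) unsigned data type, holding values like 20201231.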
In case of NATURAL JOIN / USING, mark all fields (one table cannot be opened
in any case, so the optimization is not worth it).
IMHO the table should be checked for used fields and filled after prepare,
when we will have the whole info about used fields, but that is too big
a change for a bugfix. It will be made later by Serg's patch.
Classes that handle privilege tables (like Tables_priv_table)
can read some columns conditionally, but they expect a certain
minimal number of columns to always exist.
Add a check for the minimal required number of columns in privilege tables,
and don't use a table that has fewer columns than required.
1. Fixing ROUND(x) and TRUNCATE(x,0) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
input to preserve the exact data type of the argument when it's possible.
2. Fixing FLOOR(x) and CEILING(x) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
to preserve the exact data type of the argument.
3. Adding a dedicated Type_handler_year::Item_func_round_fix_length_and_dec()
to handle ROUND(x) and TRUNCATE(x,y) more easily for YEAR(2) and YEAR(4)
input. They still return INT(2) UNSIGNED and INT(4) UNSIGNED respectively,
as before.