The InnoDB index fields store bytes, not characters.
Remove some unnecessary conversions from characters to bytes.
This also fixes MDEV-20422 and the wrong-result bug MDEV-12486.
Add proper error handling of innobase_get_computed_value() results in
row_upd_store_row/row_upd_store_v_row.
Also add an assertion in row_vers_build_clust_v_col to fail during row
purge.
Add one more assertion in row_sel_sec_rec_is_for_clust_rec to catch
possible future regressions.
The problem was improper error handling in
`row_upd_build_difference_binary`:
`innobase_free_row_for_vcol` wasn't called.
To eliminate this problem in all potential places, a refactoring has been
made:
* class ib_vcol_row is added. It owns a VCOL_STORAGE and its heap, and
maintains them in RAII manner (see the sketch below)
* all innobase_allocate_row_for_vcol/innobase_free_row_for_vcol pairs are
substituted with ib_vcol_row usage
* only row_merge_buf_add is left untouched, because it doesn't own the vheap
passed as an argument
* innobase_allocate_row_for_vcol does not allocate VCOL_STORAGE anymore and
accepts it as an argument -- this reduces the number of memory allocations
* move rec_printer out of `#ifndef DBUG_OFF` and mark it cold
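A minimal sketch of the RAII idea behind ib_vcol_row, with simplified
stand-ins for the InnoDB types (not the real server definitions):

#include <cstddef>

// Simplified stand-ins for the InnoDB heap allocator and storage type.
struct mem_heap_t { /* opaque arena in the real code */ };
static mem_heap_t *mem_heap_create(size_t) { return new mem_heap_t; }
static void mem_heap_free(mem_heap_t *heap) { delete heap; }
struct VCOL_STORAGE { /* buffers for virtual column evaluation */ };

// RAII owner of the heap and the VCOL_STORAGE living next to it: every
// return path now releases them, which is the guarantee the error path
// of row_upd_build_difference_binary() was missing.
class ib_vcol_row
{
  mem_heap_t *heap;
  VCOL_STORAGE storage;
public:
  explicit ib_vcol_row(mem_heap_t *h) : heap(h) {}
  // innobase_allocate_row_for_vcol() now fills a caller-provided
  // VCOL_STORAGE instead of allocating one itself.
  VCOL_STORAGE *alloc()
  {
    if (!heap)
      heap= mem_heap_create(1024);
    return &storage;
  }
  ~ib_vcol_row() { if (heap) mem_heap_free(heap); }
};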
(This commit is exclusively for 10.1 branch, do not merge it to upper ones)
In case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded row events
with their pseudo-SQL representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with
an error. Examples of BINLOG statements malformed this way can be found in
binlog_row_annotate.result, which gets corrected with this patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correct output now produced for the above case is as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
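A sketch of the buffering idea, using std::string in place of the server's
IO_CACHE (the event struct and names are illustrative, not the actual
mysqlbinlog code):

#include <cstddef>
#include <iostream>
#include <string>

// Each Rows-log-event contributes a base64 chunk for the BINLOG body and
// a '###' verbose section; only the last event carries the STMT_END flag.
struct RowsEventChunk { std::string base64, verbose; bool stmt_end; };

static void print_group(const RowsEventChunk *events, size_t n)
{
  std::string verbose_cache; // the auxiliary cache of the fix
  std::cout << "BINLOG '\n";
  for (size_t i= 0; i < n; i++)
  {
    std::cout << events[i].base64 << "\n"; // body stays contiguous
    verbose_cache+= events[i].verbose;     // comments are held back
    if (events[i].stmt_end)
      break;
  }
  std::cout << "'/*!*/;\n";
  std::cout << verbose_cache; // emptied out only after the body is done
}

int main()
{
  RowsEventChunk ev[]=
  {
    { "base64 encoded data for A", "### verbose section for A\n", false },
    { "base64 encoded data for B", "### verbose section for B\n", true }
  };
  print_group(ev, 2);
  return 0;
}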
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, to Venkatesh Duggirala who produced a patch for the upstream whose
idea is exploited here, as well as to the MDEV-23077 reporter LukeXwang who
also contributed a piece of a patch aiming at this issue.
Extra: mysqlbinlog_row_minimal was refined to not produce mutable numeric
values in the result file.
(This commit is for 10.3 and upper branches)
In case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded row events
with their pseudo-SQL representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with
an error. Examples of BINLOG statements malformed this way can be found in
binlog_row_annotate.result, which gets corrected with this patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correct output now produced for the above case is as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, and to Venkatesh Duggirala who produced a patch for the upstream
whose idea is exploited here, as well as to the MDEV-23077 reporter LukeXwang
who also contributed a piece of a patch aiming at this issue.
(This commit is exclusively for 10.2 branch. Do not merge it to 10.3)
In case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded row events
with their pseudo-SQL representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with
an error. Examples of BINLOG statements malformed this way can be found in
binlog_row_annotate.result, which gets corrected with this patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correct output now produced for the above case is as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, and to Venkatesh Duggirala who produced a patch for the upstream
whose idea is exploited here, as well as to the MDEV-23077 reporter LukeXwang
who also contributed a piece of a patch aiming at this issue.
This commit contains a fix and an extended test case for an ASAN failure
reported during galera.fk mtr testing.
The reported heap-buffer-overflow happens in a test case where a cascading
foreign key constraint is defined on a column of varchar type, and
galera.fk.test has such a vulnerable test scenario.
Troubleshooting revealed that an earlier fix for MDEV-19660 made cascading
delete handling append wsrep keys from pcur->old_rec
in row_ins_foreign_check_on_constraint(), and the ASAN failure comes from
later scanning of this old_rec reference.
The fix in this commit moves the call to wsrep_append_foreign_key() to happen
somewhat earlier, inside the ongoing mtr, using clust_rec, which is set
earlier in the same mtr for both update and delete cascade operations.
For wsrep key population it does not matter when the keys are populated;
all keys just have to be appended before the wsrep transaction replicates.
Note that a similar fix with the earlier wsrep key append was also tried
using the old implementation with pcur->old_rec (instead of clust_rec), and
the same ASAN failure was reported. So it appears that pcur->old_rec is not
properly set for use in wsrep key appending.
galera.galera_fk_cascade_delete has been extended with two new test scenarios:
* FK cascade on a varchar column.
This test case reproduces the same scenario as galera.fk, and it also
triggers the ASAN failure on unfixed MariaDB versions.
* Multi-master conflict with FK cascading.
This scenario causes a conflict between a replicated FK cascading transaction
and a local transaction trying to modify the cascaded child table row.
The local transaction should be aborted and get a deadlock error.
This test scenario passes both with old MariaDB versions and with this
commit.
The issue here was that the query was using the ORDER BY ... LIMIT
optimization, where the access method was changed from EQ_REF access to an
index scan (an index that would resolve the ORDER BY clause).
But the member READ_RECORD::unlock_row was not reset to rr_unlock_row, which
is used when the access method is not EQ_REF access.
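A minimal sketch of the repair with simplified types (the real READ_RECORD
and rr_unlock_row live in the server; resetting the callback is the point):

struct TABLE;
// Simplified READ_RECORD: unlock_row is the per-row unlock callback.
struct READ_RECORD { void (*unlock_row)(TABLE *tab); };
static void rr_unlock_row(TABLE *) { /* release the row lock */ }

// When the ORDER BY ... LIMIT optimization abandons EQ_REF access in
// favour of an index scan, the callback must be reset; this reset was
// the missing piece.
static void switch_to_index_scan(READ_RECORD *info)
{
  info->unlock_row= rr_unlock_row;
}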
If the statement still contains changes, it must be committed
before the actual transaction is committed.
This assertion failure was found using randgen and happens only on the
applier. No repeatable test case was found.
When duplicates are removed from a table using a hash, a record that is a
duplicate is marked as deleted. The handler API checks whether the record is
deleted and returns the error flag HA_ERR_RECORD_DELETED.
When we scan over the table, if the thread is not killed, we skip the
records for which HA_ERR_RECORD_DELETED is returned.
The issue is that when a query is aborted by the user (this happens when the
LIMIT for ROWS EXAMINED is exceeded), the scan over the table does not skip
such records; it just returns the error flag HA_ERR_ABORTED_BY_USER.
This error flag is not checked at the upper level, and hence we hit the assert.
If the query is aborted by the user, we should just stop reading rows and
return control to the upper levels of execution.
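A sketch of the corrected read loop (the reader callback and return codes
stand in for the handler interface):

enum ha_rc
{
  HA_OK= 0,
  HA_ERR_RECORD_DELETED,
  HA_ERR_ABORTED_BY_USER,
  HA_ERR_END_OF_FILE
};

// read_next stands in for the handler's row-read call.
static ha_rc read_loop(ha_rc (*read_next)())
{
  for (;;)
  {
    ha_rc rc= read_next();
    if (rc == HA_ERR_RECORD_DELETED)
      continue; // rows marked deleted by the duplicate-removal hash pass
    // Anything else, including HA_ERR_ABORTED_BY_USER, stops the scan
    // and is returned to the upper levels instead of tripping the assert.
    return rc;
  }
}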
Field::make_new_field() resets the invisible property (needed for
CREATE .. SELECT, for example). Recover the invisible property in
Delayed_insert::get_local_table() (unireg_check is preserved by the
same principle).
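A sketch of the idea with a stripped-down Field (only the two copied
properties are shown):

// Stripped-down Field: only the properties relevant here.
struct Field { int unireg_check; int invisible; };

// In Delayed_insert::get_local_table(), the copy produced via
// make_new_field() gets the dropped property restored from the original.
static void restore_props(Field *copy, const Field *org)
{
  copy->unireg_check= org->unireg_check; // preserved by the same principle
  copy->invisible= org->invisible;       // the fix: recover invisibility
}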
for internal temporary tables: don't use realpath(),
and let them overwrite whatever orphan temp files might have been
left in the tmpdir (see the main.error_simulation test).
for user-created temporary tables: we have to use realpath()
(see 3a726ab6e2, remember DATA/INDEX DIRECTORY); don't allow
them to overwrite existing files.
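A sketch of the two creation policies in plain POSIX flags (the server
goes through its own my_create()-style wrappers):

#include <fcntl.h>

// internal temporary table file: the path is used as-is (no realpath()),
// and O_TRUNC silently overwrites an orphan temp file left in tmpdir.
static int create_internal_tmp_file(const char *path)
{
  return open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
}

// user-created temporary table file: the caller resolves the path with
// realpath() first, and O_EXCL makes creation fail rather than overwrite
// an existing file.
static int create_user_tmp_file(const char *resolved_path)
{
  return open(resolved_path, O_CREAT | O_EXCL | O_RDWR, 0600);
}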
This bug was reported by RACK911 LABS
This bug was originally reproduced on 10.4 after defining a UNIQUE KEY
on a TEXT column, which MDEV-371 implements by creating the
index on a hidden virtual column.
While row_vers_vc_matches_cluster() is executing in a purge thread
to find out if an index entry may be removed in a secondary index
that comprises a virtual column, another purge thread may process
the undo log record that this check is interested in, and write
a null BLOB pointer in that record. This would trip the assertion.
To prevent this from occurring, we must propagate the 'missing BLOB'
error up the call stack.
row_upd_ext_fetch(): Return NULL when the error occurs.
row_upd_index_replace_new_col_val(): Return whether the previous
version was built successfully.
row_upd_index_replace_new_col_vals_index_pos(): Check the error
result. Yes, we would intentionally crash on this error if it
occurs outside the purge thread.
row_upd_index_replace_new_col_vals(): Check for the error condition,
and simplify the logic.
trx_undo_prev_version_build(): Check for the error condition.
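A sketch of the propagation pattern (illustrative signatures; the real
functions operate on undo records and dfields):

typedef unsigned char byte;

// Fetch the BLOB that the undo record refers to; another purge thread
// may already have written a null BLOB pointer, so return NULL instead
// of asserting.
static const byte *fetch_ext_sketch(const byte *blob_ref)
{
  if (!blob_ref)
    return nullptr; // 'missing BLOB': let the caller decide
  return blob_ref;  // normal case: the fetched column value
}

// Report to the caller whether the previous version could be built.
static bool replace_col_val_sketch(const byte *blob_ref)
{
  return fetch_ext_sketch(blob_ref) != nullptr;
}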
The optional AND CHAIN clause is a convenience for initiating a new
transaction as soon as the old transaction terminates. Therefore,
do not start a new transaction in wsrep_start_transaction if one is
already started.
fix the following type of mrr scan:
(select 0,`id`,`node` from `auto_test_remote`.`tbl_a` where (`id` <> 0) order by `id`)union all(select 1,`id`,`node` from `auto_test_remote`.`tbl_a` where (`id` <> 0) order by `id`) order by `id`
The code in vers_select_conds_t::init_from_sysvar() assumed that
the value of the system_versioning_asof is DATETIME.
But it could also be DATE after a query like this:
SET system_versioning_asof = DATE(NOW());
Fixing Sys_var_vers_asof::update() to convert the value
of the assignment source to DATETIME if it returned DATE.
Now vers_select_conds_t::init_from_sysvar() always gets a DATETIME value.
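A sketch of the conversion with a cut-down MYSQL_TIME (the real fix does
this inside Sys_var_vers_asof::update()):

// Cut-down MYSQL_TIME: just enough fields to show the promotion.
enum enum_mysql_timestamp_type
{ MYSQL_TIMESTAMP_DATE, MYSQL_TIMESTAMP_DATETIME };

struct MYSQL_TIME
{
  unsigned int year, month, day, hour, minute, second;
  unsigned long second_part;
  enum_mysql_timestamp_type time_type;
};

// Promote a DATE (e.g. from SET system_versioning_asof= DATE(NOW()))
// to DATETIME 00:00:00.000000, so init_from_sysvar() always sees DATETIME.
static void date_to_datetime_sketch(MYSQL_TIME *ltime)
{
  if (ltime->time_type == MYSQL_TIMESTAMP_DATE)
  {
    ltime->hour= ltime->minute= ltime->second= 0;
    ltime->second_part= 0;
    ltime->time_type= MYSQL_TIMESTAMP_DATETIME;
  }
}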
Problem:
Queries like this showed performance degradation in 10.4 compared to 10.3:
SELECT temporal_literal FROM t1;
SELECT temporal_literal + 1 FROM t1;
SELECT COUNT(*) FROM t1 WHERE temporal_column = temporal_literal;
SELECT COUNT(*) FROM t1 WHERE temporal_column = string_literal;
Fix:
Replacing the universal member "MYSQL_TIME cached_time" in
Item_temporal_literal with data type specific containers:
- Date in Item_date_literal
- Time in Item_time_literal
- Datetime in Item_datetime_literal
This restores the performance, and makes it even better in some cases.
See benchmark results in the MDEV.
Also, this change further separates Date, Time and Datetime
from each other, which will make it possible not to derive them from
the too heavy (40 bytes) MYSQL_TIME, and to replace them with smaller
data type specific containers.
Implementing methods:
- Field::val_time_packed()
- Field::val_datetime_packed()
- Item_field::val_datetime_packed(THD *thd);
- Item_field::val_time_packed(THD *thd);
to give faster access to the temporal packed longlong representation of a
Field, which is used in temporal Arg_comparators for the DATE, TIME and
DATETIME data types.
The same idea is used in MySQL-5.6+.
This improves performance.
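A sketch of the packed-longlong idea (the layout below follows the
MySQL-5.6-style packed datetime encoding, simplified to unsigned fields
with no negative values; comparison then reduces to one integer compare):

typedef long long longlong;

// Cut-down datetime value, just for the packing illustration.
struct DatetimeParts
{ unsigned year, month, day, hour, minute, second; unsigned long usec; };

// Pack into a single ordered longlong, so Arg_comparator can compare two
// temporal values with a single integer comparison.
static longlong to_packed(const DatetimeParts &t)
{
  longlong ymd= ((t.year * 13 + t.month) << 5) | t.day;
  longlong hms= (t.hour << 12) | (t.minute << 6) | t.second;
  return (((ymd << 17) | hms) << 24) + t.usec;
}

static int cmp_packed(const DatetimeParts &a, const DatetimeParts &b)
{
  longlong pa= to_packed(a), pb= to_packed(b);
  return pa < pb ? -1 : pa > pb;
}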
Problem:
When calculating MIN() and MAX() in a query with GROUP BY, like this:
SELECT MIN(time_expr), MAX(time_expr) FROM t1 GROUP BY i;
the code in Item_sum_min_max::update_field() erroneously used
string format comparison, therefore '100:20:30' was considered
smaller than '10:20:30'.
Fix:
1. Implementing low level "native" related methods in class Time:
Time::Time(const Native &native) - convert native to Time
Time::to_native(Native *to, uint decimals) - convert Time to native
The "native" binary representation for TIME is equal to
the binary data format of Field_timef, which is used to
store TIME when mysql56_temporal_format is ON (default).
2. Implementing Type_handler_time_common "native" related methods:
Type_handler_time_common::cmp_native()
Type_handler_time_common::Item_val_native_with_conversion()
Type_handler_time_common::Item_val_native_with_conversion_result()
Type_handler_time_common::Item_param_val_native()
3. Implementing missing "native representation" related methods
in Field_time and Field_timef:
Field_time::store_native()
Field_time::val_native()
Field_timef::store_native()
Field_timef::val_native()
4. Implementing missing "native" related methods in all Items
that can have the TIME data type:
Item_timefunc::val_native()
Item_name_const::val_native()
Item_time_literal::val_native()
Item_cache_time::val_native()
Item_handled_func::val_native()
5. Marking Type_handler_time_common as "native ready".
So now Item_sum_min_max::update_field() calculates
values using min_max_update_native_field(),
which uses the native binary representation rather than the string
representation.
Before this change, only the TIMESTAMP data type used native
representation to calculate MIN() and MAX().
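A sketch of why the native comparison gives the right answer: the
fixed-size Field_timef-style bytes are memcmp-ordered, while the string
form is not (a simplified four-byte stand-in for the real format):

#include <cstring>

// Stand-in for the native TIME bytes: fixed size and memcmp-ordered,
// like the Field_timef format. String comparison gets '100:20:30' vs
// '10:20:30' wrong because '0' < ':' in ASCII at the third character.
struct TimeNative { unsigned char b[4]; };

static int cmp_native(const TimeNative &a, const TimeNative &b)
{
  return memcmp(a.b, b.b, sizeof a.b);
}

// Sketch of the aggregate update after the fix: MIN()/MAX() keep the
// current extreme in native form and replace it on a native compare.
static void min_max_update_native(TimeNative *agg, const TimeNative &val,
                                  bool is_max)
{
  int c= cmp_native(val, *agg);
  if (is_max ? c > 0 : c < 0)
    *agg= val;
}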
Benchmarks (see more details in the MDEV):
This change not only fixes the wrong result, but also
makes a "SELECT .. MAX .. GROUP BY .." query faster:
# TIME(0)
CREATE TABLE t1 (id INT, time_col TIME) ENGINE=HEAP;
INSERT INTO t1 VALUES (1,'10:10:10'); -- repeat this 1m times
SELECT id, MAX(time_col) FROM t1 GROUP BY id;
MySQL80: 0.159 sec
10.3: 0.108 sec
10.4: 0.094 sec (fixed)
# TIME(6):
CREATE TABLE t1 (id INT, time_col TIME(6)) ENGINE=HEAP;
INSERT INTO t1 VALUES (1,'10:10:10.999999'); -- repeat this 1m times
SELECT id, MAX(time_col) FROM t1 GROUP BY id;
MySQL80: 0.154 sec
10.3: 0.135 sec
10.4: 0.093 sec (fixed)
In trx_free() we used to declare the entire trx_t inaccessible
and then declare that some data members are accessible.
This involves a race condition with other threads that may concurrently
access the data members that must remain accessible.
One type of error is "AddressSanitizer: unknown-crash", whose
exact cause we have not determined.
Another type of error (reported in MDEV-23472) is "use-after-poison",
where the reported shadow bytes would in fact be 00, indicating that
the memory was no longer poisoned. The poison-access-unpoison race
condition was confirmed by "rr replay".
We eliminate the race condition by invoking MEM_NOACCESS on each
individual data member of trx_t before freeing the memory to the pool.
The memory would not be unpoisoned until the pool is freed
or the memory is being reused for another allocation.
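A sketch of the changed poisoning order (MEM_NOACCESS is a no-op
stand-in for the instrumentation macro; the members are a small excerpt,
not the full trx_t):

// No-op stand-in for the server's instrumentation macro.
#define MEM_NOACCESS(addr, size) ((void) (addr), (void) (size))

struct trx_t_sketch
{
  unsigned long long id;
  bool active_commit_ordered; // now bool, so it can be poisoned as bytes
  int state;                  // example of a member other threads read

  void free()
  {
    // Consistency checks run first, while everything is still
    // accessible (this is where the TrxFactory::debug() logic moved).

    // Then poison member by member. The old order -- poison the whole
    // object, then unpoison a few members -- raced with concurrent
    // readers of the members that must remain accessible.
    MEM_NOACCESS(&id, sizeof id);
    MEM_NOACCESS(&active_commit_ordered, sizeof active_commit_ordered);
    // 'state' is deliberately left accessible.
  }
};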
trx_t::free(): Replaces trx_free().
trx_t::active_commit_ordered: Changed to bool, so that MEM_NOACCESS
can be invoked. Removed some accessor functions.
Pool: Remove all MEM_ instrumentation.
TrxFactory: Move the MEM_ instrumentation from Pool.
TrxFactory::debug(): Removed; moved to trx_t::free(). Because
the memory was already marked inaccessible in trx_t::free(), the
Factory::debug() call in Pool::putl() would be unable to access it.
trx_allocate_for_background(): Replaces trx_create_low().
trx_t::free(): Perform all consistency checks while avoiding
duplication, and declare most data members inaccessible.
Shutdown in mtr tests may be too impatient, especially in CI environments,
where the 10 seconds of `arg` in `shutdown_server arg` may not be enough for
a clean shutdown to complete.
This is fixed by removing the explicit non-zero timeout argument to
`shutdown_server` from all mtr tests; mysqltest computes a 60-second default
value for the timeout of the argless `shutdown_server` command.
This policy is additionally ensured with a compile-time assert (sketched
below).
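A hypothetical sketch of such a guard (the constant name is made up; the
real assert lives in mysqltest):

// Made-up constant name: the point is that the 60-second default of the
// argless shutdown_server is pinned at compile time.
static const int SHUTDOWN_DEFAULT_TIMEOUT_SEC= 60;
static_assert(SHUTDOWN_DEFAULT_TIMEOUT_SEC == 60,
              "argless shutdown_server must keep its 60-second default");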
A leak of the contents of fil_system.ssd that was introduced in
commit 10dd290b4b (MDEV-17380)
was caught by implementing SAFEMALLOC instrumentation of operator new.
I did not try to find out how to make AddressSanitizer or Valgrind
detect it.
fil_system_t::close(): Clear fil_system.ssd.
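A sketch of the fix with a simplified fil_system_t (the real ssd member
holds the identifiers of the detected SSD block devices):

#include <vector>

struct fil_system_t_sketch
{
  std::vector<unsigned long> ssd; // detected SSD block devices

  void close()
  {
    // The leak: this container kept its heap allocation past shutdown.
    // Swapping with an empty vector releases the storage immediately.
    std::vector<unsigned long>().swap(ssd);
  }
};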
The leak was identified and a fix suggested by Michael Widenius
and Vicențiu Ciorbaru.