Problem: Calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap bit
for a virtual column, and as a result mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps.
This bug is not specific to long unique; it also fails for this case:
create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
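The effect can be modelled with a self-contained toy (illustrative code, not
the server's; Col and write_set merely stand in for Field and
TABLE::write_set):

#include <bitset>
#include <cstdio>
#include <vector>

// Toy model of the ordering bug: if an earlier pass
// (mark_columns_per_binlog_row_image) has already set the write bit of a
// virtual column, the later pass skips it, so the base columns it
// depends on are never marked.
struct Col { int idx; std::vector<int> deps; };  // deps = base columns

static std::bitset<8> write_set;

void mark_deps(const Col &c) {            // stands in for mark_virtual_column_deps()
  for (int d : c.deps) write_set.set(d);
}

void mark_virtual_columns_for_write(const std::vector<Col> &vcols) {
  for (const Col &c : vcols) {
    if (write_set.test(c.idx)) continue;  // bit already set: deps are skipped
    write_set.set(c.idx);
    mark_deps(c);
  }
}

int main() {
  std::vector<Col> vcols = {{2, {1}}};    // b (col 2) is generated from a (col 1)
  write_set.set(2);                       // earlier binlog-row-image pass marks b
  mark_virtual_columns_for_write(vcols);
  std::printf("base col 1 marked: %d\n", (int)write_set.test(1));  // prints 0: the bug
}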
For a low sort_buffer_size, the cost calculation for using the Unique object
evaluated the number of elements in the tree to 0; make sure to have at least
1 element in the Unique tree.
Also, for the function Unique::get(), allocate memory for at least
MERGEBUFF2 + 1 keys.
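A minimal sketch of the two clamps, assuming illustrative names (the real
code lives in the Unique class; MERGEBUFF2 is taken to be 15 here for
illustration):

#include <algorithm>
#include <cstddef>

static const size_t MERGEBUFF2 = 15;  // assumed value, for illustration only

size_t elements_in_tree(size_t sort_buffer_size, size_t key_size) {
  // with a tiny sort buffer the division can yield 0, which breaks the
  // cost model; clamp to at least one element
  return std::max<size_t>(sort_buffer_size / key_size, 1);
}

size_t keys_to_allocate(size_t buffer_keys) {
  // Unique::get() needs room for at least MERGEBUFF2 + 1 keys
  return std::max<size_t>(buffer_keys, MERGEBUFF2 + 1);
}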
This bug is the same as bug MDEV-17024. The crashes caused by these
bugs were due to premature cleanups of the unit specifying recursive CTEs
that happened in some cases when there were several outer references to the
same recursive CTE.
The problem of premature cleanups for recursive CTEs could already be
resolved by the correction in TABLE_LIST::set_as_with_table() introduced
in this patch. All other changes introduced by the patches for MDEV-17024
and MDEV-22748 guarantee that these cleanups are performed as soon as
possible: when the select containing the last outer reference to a
recursive CTE is being cleaned up, the specification of the recursive CTE
should be cleaned up as well.
Bit operators (~ ^ | & << >>) and the function BIT_COUNT()
always called val_int() for their arguments.
This worked correctly only for INT type arguments.
In the case of DECIMAL and DOUBLE arguments it did not work well:
the argument values were truncated to the maximum SIGNED BIGINT value
of 9223372036854775807.
Fixing the code as follows:
- If the argument is of an integer data type,
  it works using val_int() as before.
- If the argument is of some other data type, it gets the argument value
  using val_decimal(), to avoid truncation, and then converts the result
  to ulonglong.
Using Item_handled_func makes switching between the two approaches easier.
As an additional advantage, with Item_handled_func it will be easier
to implement overloading in the future, so data type plugins will be able
to define their own behaviour of bit operators and BIT_COUNT().
Moving the code from the former val_int() implementations
into methods of Longlong_null, to avoid code duplication in the
INT and DECIMAL branches.
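The truncation can be demonstrated with a self-contained toy (illustrative,
not the server code; the two functions stand in for the old val_int() path
and the new val_decimal() path):

#include <climits>
#include <cstdint>
#include <cstdio>

uint64_t via_val_int(double v) {
  // old path: saturate at the maximum SIGNED BIGINT, then reinterpret
  long long i = v >= (double)LLONG_MAX ? LLONG_MAX : (long long)v;
  return (uint64_t)i;
}

uint64_t via_decimal(double v) {
  // new path (sketched): keep the full unsigned 64-bit range
  return v >= (double)ULLONG_MAX ? ULLONG_MAX : (uint64_t)v;
}

int main() {
  double arg = 18446744073709551615.0;  // maximum unsigned BIGINT
  std::printf("old: %llu\n", (unsigned long long)via_val_int(arg));  // 9223372036854775807
  std::printf("new: %llu\n", (unsigned long long)via_decimal(arg));  // 18446744073709551615
}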
When processing a query with a recursive CTE, a temporary table is used for
each recursive reference of the CTE. As any temporary table, it uses its own
mem-root for table definition structures. Due to specifics of the current
implementation of the ANALYZE stmt command, this mem-root can be freed only at
the very end of query processing. Such deallocation of mem-root memory happens
in close_thread_tables(). The function looks through the list of the tmp
tables rec_tables attached to the THD of the query and frees the corresponding
mem-roots. If the query uses a stored function, then such a list is created
for each query of the function. When a new rec_list has to be created, the
old one has to be saved and then restored at the proper moment.
The bug occurred because only one rec_list was created for the query containing
the CTE. As a result, close_thread_tables() freed the tmp mem-roots used for
rec_tables prematurely, destroying some data needed for the output produced
by the ANALYZE command.
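A hedged sketch of the save/restore discipline described above (THD_sketch
and the field layout are illustrative; the real list is attached to THD):

struct Rec_table;                 // tmp table used for a recursive reference
struct THD_sketch { Rec_table *rec_tables = nullptr; };

void execute_function_statement(THD_sketch *thd) {
  Rec_table *saved = thd->rec_tables;  // save the caller's list
  thd->rec_tables = nullptr;           // fresh list for this statement
  /* ... run the statement; recursive CTE references append here ... */
  thd->rec_tables = saved;             // restore at the proper moment, so
                                       // close_thread_tables() frees each tmp
                                       // mem-root only after ANALYZE read it
}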
- commit ea37b14409 (MDEV-16678) caused a regression: when the purge thread
tries to open the table for virtual column computation, there is no need to
acquire MDL for the table, because the purge thread already holds MDL for
the table.
Change the following functions to be called in batch instead of once per partition:
- store_lock
- external_lock
- start_stmt
- extra
- cond_push
- info_push
- top_table
Insert worked incorrectly as well. RocksDB used table->record[0] internally to store some
intermediate results for key conversion, during index searching among other operations.
So table->record[0] was spoiled during ha_rnd_index_map in ha_check_overlaps, and in turn
the broken record data was inserted.
The fix is to store the RocksDB intermediate result in its own buffer instead of table->record[0].
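A minimal sketch of the idea, with illustrative names (the actual MyRocks
buffer and conversion routines differ):

#include <cstddef>
#include <vector>

struct KeyConvertCtx {
  std::vector<unsigned char> pack_buffer;  // handler-owned scratch space

  const unsigned char *convert_key(const unsigned char *rec, size_t len) {
    pack_buffer.assign(rec, rec + len);    // work on a private copy...
    /* ... key mangling happens on pack_buffer ... */
    return pack_buffer.data();             // table->record[0] stays intact
  }
};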
The `rocksdb` MTR suite is checked and runs fine.
No need for additional tests: the existing overlaps.test covers the case completely.
However, I am not going to add anything related to rocksdb to the suite, to keep it away
from additional dependencies.
To run tests with RocksDB engine, one can add following to engines.combinations:
[rocksdb]
plugin-load=$HA_ROCKSDB_SO
default-storage-engine=rocksdb
When acquiring an SNW/SNRW/X MDL lock, DDL/admin statements may abort a pending
thr lock in a concurrent connection with an open HANDLER (or a delayed insert
thread).
This may lead to a race condition when table->alias is accessed
concurrently by such threads. Either an assertion failure or a memory leak
is a practical consequence of this race condition.
Specifically, HANDLER is opening a table and issuing alias.copy(), while
DDL is executing get_lock_data()/alias.c_ptr()/realloc()/realloc_raw().
Fixed by performing table->init() before the table is published via
thd->open_tables.
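A hedged sketch of the ordering the fix enforces (struct names are
illustrative):

struct TABLE_toy { void init() { /* set alias and other members */ } };
struct THD_toy { TABLE_toy *open_tables = nullptr; };

void publish_table(THD_toy *thd, TABLE_toy *table) {
  table->init();             // fully initialize first, including the alias
  thd->open_tables = table;  // only then publish, so concurrent lock-abort
                             // code never sees a half-initialized alias
}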
For the DECIMAL[(M[,D])] data type, max_sort_length was not being honoured, which was
leading to a buffer overflow while making the sort key. The fix for this problem is to
create sort keys for decimals with at most max_sort_length bytes.
Important:
The minimum value of max_sort_length has been raised to 8 (previously it was 4),
so fixed-size data types like DOUBLE and BIGINT are not truncated for
lower values of max_sort_length.
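A one-function sketch of the cap, with illustrative names (the server's
sort-key code differs):

#include <algorithm>
#include <cstddef>

const size_t MAX_SORT_LENGTH_MIN = 8;  // new lower bound of max_sort_length

size_t decimal_sort_key_length(size_t decimal_bin_size, size_t max_sort_length) {
  max_sort_length = std::max(max_sort_length, MAX_SORT_LENGTH_MIN);
  return std::min(decimal_bin_size, max_sort_length);  // never past the buffer
}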
Currently, when both the PARTITION BY and ORDER BY clauses are empty, we create an Item
with the first field in the select list and sort with that field.
It should be created as an Item_temptable_field instead of an Item_field, because the
print() function of Item_temptable_field continues to work even if the table has been
dropped.
Incorrect syntax for SYSTEM_TIME partition: work_part_info is detected
as a HASH partition. We cannot add a partition of a different type, nor
can we reorganize SYSTEM_TIME into/from a different type of
partitioning.
The side fix for versions until 10.5 corrects the message:
"For LIST partitions each partition must be defined"
We have to include NULL in the result, which GROUP_CONCAT doesn't
always do. Also, the conversion should be done into another String instance,
as the source and the destination can be the same.
Backported the support for aborting and replaying stored procedures and the fix
for trigger key assignments from the 10.4 version.
Also backported two MTR tests: wsrep_sp_bf_abort and MDEV-20225.
For a field with type INET, during EITS collection the min and max values are stored
in text representation in the statistical table.
While retrieving the value from the statistical table, the value was stored back into
the original field using the binary form instead of text, and this resulted in a crash.
Introduced 2 functions in the Field structure:
1) store_to_statistical_minmax_field
2) store_from_statistical_minmax_field
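A hedged sketch of the two hooks (the real signatures in Field differ; this
only shows the direction of each conversion):

struct FieldSketch {
  // write this field's value into the stats min/max column, going
  // through the text representation
  virtual bool store_to_statistical_minmax_field(FieldSketch *stat_field) = 0;

  // read a value back from the stats min/max column by parsing text,
  // instead of copying binary image bytes (the old, crashing path)
  virtual bool store_from_statistical_minmax_field(FieldSketch *stat_field) = 0;

  virtual ~FieldSketch() = default;
};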
For character sets and collations where the character-to-weight mapping is > 1,
we need to make sure that, while creating a sort key,
a temporary buffer is created to store the value of the item obtained via the
val_str() function, and that this value is then copied back to the sort buffer.
In the reported case, when using a priority queue, Sort_param::tmp_buffer was not allocated.
Minor refactoring:
Changed Sort_param::tmp_buffer from char* to String
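An illustrative toy of why the buffer is needed (std::string stands in for
the server's String; the 2-bytes-per-character weight is made up for the
example):

#include <algorithm>
#include <cstddef>
#include <string>

struct SortParamSketch {
  std::string tmp_buffer;  // was char* in the server, now a String
};

size_t make_sortkey(SortParamSketch *param, const char *val, size_t len,
                    unsigned char *to, size_t to_len) {
  param->tmp_buffer.assign(val, len);  // materialize the item's value first
  // weight conversion reads tmp_buffer and writes into the sort buffer;
  // expansion (weights > bytes) means it cannot safely run in place
  size_t out = std::min(param->tmp_buffer.size() * 2, to_len);
  for (size_t i = 0; i < out; i++)
    to[i] = (unsigned char)param->tmp_buffer[i / 2];  // toy weight mapping
  return out;
}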
Item_sum_sp did not override val_native(), so the reported script
crashed in the default implementation in Item::val_native() on a DBUG_ASSERT().
Implementing a correct Item_sum_sp::val_native().
The opt_for_user subrule was incorrectly scanned before sp_create_assignment_lex(),
so the user name and the host were created on the wrong memory root.
- Reorganizing the grammar to make sure that sp_create_assignment_lex()
  is called immediately after PASSWORD_SYM is scanned, so all attributes
  are then allocated on its memory root.
- Moving the semantic code into methods of LEX, so the grammar looks as simple as possible.
- Changing text_or_password to be of the data type USER_AUTH*.
As a side effect, the LEX::definer member is now not used when processing
the SET PASSWORD statement. Everything is done using Bison's stack.
The bug was introduced by this commit:
commit bf5a144e16
This change also affects information_schema.tables.
The create table option "transactional=0 | 1" is now always shown for
storage engines that support both transactional/crash-safe tables and
non-transactional tables.
Before this patch, the transactional=... option was only shown if the user
specified transactional=... in the CREATE TABLE or ALTER TABLE statement.
The reason for the change was to make it easy to know whether an Aria
table is transactional or not.
Previously multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would happen sooner or later.
To avoid a scalability bottleneck, histogram loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever histograms were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.
Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If histograms are being loaded
concurrently, the statement waits until the load is completed.
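A hedged sketch of the protocol (state names and the helper are illustrative,
not the server's):

#include <atomic>
#include <thread>

enum HistState { NOT_LOADED, LOADING, LOADED };
static std::atomic<int> hist_state{NOT_LOADED};

void load_histograms() { /* read the stats tables, fill the share */ }

void ensure_histograms_loaded() {
  if (hist_state.load(std::memory_order_acquire) == LOADED)
    return;                                  // hot path: scalable check
  int expected = NOT_LOADED;
  if (hist_state.compare_exchange_strong(expected, LOADING,
                                         std::memory_order_acquire)) {
    load_histograms();                       // we won: the single loader
    hist_state.store(LOADED, std::memory_order_release);
    return;
  }
  // another thread is loading: wait until it publishes LOADED
  while (hist_state.load(std::memory_order_acquire) != LOADED)
    std::this_thread::yield();
}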
- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
  meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
  TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri-state
  atomic variable.
- Simplified away alloc_histograms_for_table_share().
Note: there's still likely a data race if a thread attempts to access
histogram data after it failed to load it (because of a concurrent load).
It was there previously and is out of the scope of this effort. One way
of fixing it could be reviving TABLE::histograms_are_read and adding
appropriate checks wherever needed.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
Previously multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would happen sooner or later.
To avoid a scalability bottleneck, statistics loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever statistics were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.
Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If statistics are being loaded
concurrently, the statement waits until the load is completed.
TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri-state atomic
variable.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected