When processing a condition like:
WHERE timestamp_column='2010-00-01 00:00:00'
don't replace the constant with an Item_datetime_literal if the constant
has zeros (in the month or in the day).
Problem: Calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap on
for a virtual column, so that mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps.
This bug is not specific to long unique; it also fails for this case:
create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
For a low sort_buffer_size, in the cost calculation of using the Unique object, the number of elements in the tree was evaluated to 0; make sure to have at least 1 element in the Unique tree.
Also, for the function Unique::get, allocate memory for at least MERGEBUFF2+1 keys.
This bug is the same as the bug MDEV-17024. The crashes caused by these
bugs were due to premature cleanups of the unit specifying recursive CTEs
that happened in some cases when there were several outer references to the
same recursive CTE.
The problem of premature cleanups for recursive CTEs could already be
resolved by the correction in TABLE_LIST::set_as_with_table() introduced
in this patch. All other changes introduced by the patches for MDEV-17024
and MDEV-22748 guarantee that these cleanups are performed as soon as
possible: when the select containing the last outer reference to a
recursive CTE is being cleaned up, the specification of the recursive CTE
should be cleaned up as well.
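A minimal sketch of a query with several outer references to the same
recursive CTE (the CTE and column names are invented for illustration):

WITH RECURSIVE r(n) AS (SELECT 1 UNION SELECT n+1 FROM r WHERE n < 3)
SELECT (SELECT MAX(n) FROM r) AS mx, (SELECT MIN(n) FROM r) AS mn;

The specification of r must stay alive until the select containing the
last outer reference to r has been cleaned up.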
Bit operators (~ ^ | & << >>) and the function BIT_COUNT()
always called val_int() for their arguments.
It worked correctly only for INT type arguments.
In case of DECIMAL and DOUBLE arguments it did not work well:
the argument values were truncated to the maximum SIGNED BIGINT value
of 9223372036854775807.
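For illustration, queries of the affected shape (the literal values are
only examples):

SELECT 18446744073709551615.0 | 0;
SELECT BIT_COUNT(18446744073709551615.0);

Here the DECIMAL arguments were truncated to 9223372036854775807 before
the fix.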
Fixing the code as follows:
- If the argument is of an integer data type,
  it works using val_int() as before.
- If the argument is of some other data type, it gets the argument value
  using val_decimal(), to avoid truncation, and then converts the result
  to ulonglong.
Using Item_handled_func makes switching between the two approaches easier.
As an additional advantage, with Item_handled_func it will be easier
to implement overloading in the future, so data type plugins will be able
to define their own behaviour of bit operators and BIT_COUNT().
Moving the code from the former val_int() implementations into methods of
Longlong_null, to avoid code duplication in the
INT and DECIMAL branches.
When processing a query with a recursive CTE a temporary table is used for
each recursive reference of the CTE. As with any temporary table, it uses
its own mem-root for table definition structures. Due to specifics of the
current implementation of the ANALYZE stmt command this mem-root can be
freed only at the very end of query processing. Such deallocation of
mem-root memory happens in close_thread_tables(). The function looks
through the list of the tmp tables rec_tables attached to the THD of the
query and frees the corresponding mem-roots. If the query uses a stored
function then such a list is created for each query of the function. When
a new rec_list has to be created the old one has to be saved and then
restored at the proper moment.
The bug occurred because only one rec_list for the query containing the CTE
was created. As a result close_thread_tables() freed the tmp mem-roots used
for rec_tables prematurely, destroying some data needed for the output
produced by the ANALYZE command.
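A hypothetical shape of an affected statement (the stored function f and
the CTE are invented; f is assumed to contain queries of its own):

ANALYZE FORMAT=JSON
WITH RECURSIVE r(n) AS (SELECT 1 UNION SELECT n+1 FROM r WHERE n < 3)
SELECT f(n) FROM r;

With a single rec_list, the mem-roots of the recursive references were
freed before the ANALYZE output was produced.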
- commit ea37b14409 (MDEV-16678) caused
a regression: when the purge thread tries to open the table for virtual
column computation, there is no need to acquire MDL for the table,
because the purge thread already holds MDL for the table.
Change the following functions to be called in batch instead of for each partition:
- store_lock
- external_lock
- start_stmt
- extra
- cond_push
- info_push
- top_table
Insert worked incorrectly as well. RocksDB used table->record[0] internally to store some
intermediate results for key conversion, during index searching, among other operations.
So table->record[0] was spoiled during ha_rnd_index_map in ha_check_overlaps, and in turn
the broken record data was inserted.
The fix is to store the RocksDB intermediate result in its own buffer instead of table->record[0].
The `rocksdb` MTR suite is checked and runs fine.
No need for additional tests. The existing overlaps.test covers the case completely.
However, I am not going to add anything related to rocksdb to the suite, to keep it away
from additional dependencies.
To run tests with the RocksDB engine, one can add the following to engines.combinations:
[rocksdb]
plugin-load=$HA_ROCKSDB_SO
default-storage-engine=rocksdb
When acquiring SNW/SNRW/X MDL lock DDL/admin statements may abort pending
thr lock in concurrent connection with open HANDLER (or delayed insert
thread).
This may lead to a race condition when table->alias is accessed
concurrently by such threads. Either assertion failure or memory leak
is a practical consequence of this race condition.
Specifically, HANDLER is opening a table and issuing alias.copy(), while
DDL is executing get_lock_data()/alias.c_ptr()/realloc()/realloc_raw().
Fixed by performing table->init() before the table is published via
thd->open_tables.
For the DECIMAL[(M[,D])] data type max_sort_length was not being honoured, which was leading to a buffer
overflow while making the sort key. The fix is to create sort keys for decimals
with at most max_sort_length bytes.
Important:
The minimum value of max_sort_length has been raised to 8 (previously 4),
so fixed-size data types like DOUBLE and BIGINT are not truncated for
lower values of max_sort_length.
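An illustrative session (table and values are made up) where the DECIMAL
sort key is now capped:

SET SESSION max_sort_length=8;
CREATE TABLE t1 (d DECIMAL(65,30));
INSERT INTO t1 VALUES (1.5),(2.5),(-1.5);
SELECT d FROM t1 ORDER BY d;

The sort key for d is built with at most max_sort_length bytes instead of
overflowing the sort buffer.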
Currently, when both the PARTITION BY and ORDER BY clauses are empty, we create an Item
with the first field in the select list and sort with that field.
It should be created as an Item_temptable_field instead of Item_field because the
print() function continues to work even if the table has been dropped.
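An example of the affected query shape (t1 and a are placeholders):

SELECT a, ROW_NUMBER() OVER () FROM t1;

Both PARTITION BY and ORDER BY are empty here, so the sorting Item is
built from the first select-list field.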
Incorrect syntax for SYSTEM_TIME partition. work_part_info is detected
as a HASH partition. We cannot add a partition of a different type, nor
can we reorganize SYSTEM_TIME into/from a different type of
partitioning.
The side fix for versions up to 10.5 corrects the message:
"For LIST partitions each partition must be defined"
We have to include NULL in the result, which GROUP_CONCAT doesn't
always do. Also, the conversion should be done into another String instance,
as these can be the same.
Backported the support for aborting and replaying stored procedures and the fix for trigger
key assignments from the 10.4 version.
Also backported two mtr tests: wsrep_sp_bf_abort and MDEV-20225.
For a field of type INET, during EITS collection the min and max values are stored in text
representation in the statistical table.
While retrieving the value from the statistical table, the value was stored back in the original
field using the binary form instead of text, and this was resulting in the crash.
Introduced 2 functions in the Field structure:
1) store_to_statistical_minmax_field
2) store_from_statistical_minmax_field
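An illustrative repro sketch (table, values and the INET6 type name are
assumptions):

CREATE TABLE t1 (a INET6);
INSERT INTO t1 VALUES ('::1'),('ffff::');
SET SESSION use_stat_tables='preferably';
ANALYZE TABLE t1 PERSISTENT FOR ALL;
SELECT * FROM t1 WHERE a='::1';

The min/max values go to mysql.column_stats as text; reading them back
into the field previously used the binary form, causing the crash.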
For character sets and collations where the character-to-weight mapping is > 1,
we need to make sure that, while creating a sort key,
a temporary buffer is created to store the value of the item via the val_str function,
and that value is then copied back to the sort buffer.
In this case, when using a priority queue, Sort_param::tmp_buffer was not allocated.
Minor refactoring:
Changed Sort_param::tmp_buffer from char* to String
Item_sum_sp did not override val_native(). So the reported script
crashed in the default implementation in Item::val_native() on DBUG_ASSERT().
Implementing a correct Item_sum_sp::val_native().
The opt_for_user subrule was incorrectly scanned before sp_create_assignment_lex(),
so the user name and the host were created on a wrong memory root.
- Reorganizing the grammar to make sure that sp_create_assignment_lex()
is called immediately after PASSWORD_SYM is scanned, so all attributes
are then allocated on its memory root.
- Moving the semantic code into LEX methods, so the grammar looks as simple as possible.
- Changing text_or_password to be of the data type USER_AUTH*.
As a side effect, the LEX::definer member is now not used when processing
the SET PASSWORD statement. Everything is done using Bison's stack.
The bug was introduced by this commit:
commit bf5a144e16
This change also affects information_schema.tables
The create table option "transactional=0 | 1" is now always shown for
storage engines that support both transactional/crash-safe tables and
non-transactional tables.
Before this patch the transactional=... option was only shown if the user
specified transactional=... in the CREATE TABLE or ALTER TABLE statement.
The reason for the change was to make it easy to know if an Aria
table is transactional or not.
Previously multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they'd happen sooner or later.
To avoid a scalability bottleneck, histogram loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever histograms were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.
Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If histograms are being loaded
concurrently, the statement waits until the load is completed.
- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri-state
atomic variable.
- Simplified away alloc_histograms_for_table_share().
Note: there's still likely a data race if a thread attempts to access
histogram data after it failed to load it (because of a concurrent load).
It was there previously and is out of the scope of this effort. One way
of fixing it could be reviving TABLE::histograms_are_read and adding
appropriate checks wherever they are needed.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
Previously multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they'd happen sooner or later.
To avoid a scalability bottleneck, statistics loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever statistics were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.
Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If statistics are being loaded
concurrently, the statement waits until the load is completed.
TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri-state atomic
variable.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
Removed redundant loops and integrated the logic into the caller instead.
Unified the condition in read_statistics_for_tables(): fewer
"table_share != NULL" checks, no more potential "table_share == NULL"
dereferencing.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
In Item_nodeset_func_predicate::val_nodeset, args[1] is not necessarily
an Item_func descendant. It can be an Item_bool.
Removing the wrong cast. It was not really needed anyway.
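A plausible query shape that yields a constant boolean predicate (the XML
value and path are invented for illustration):

SELECT ExtractValue('<a><b>1</b><b>2</b></a>', '/a/b[true()]');

Here the predicate item may be an Item_bool rather than an Item_func
descendant, which is why the cast was unsafe.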
Respect system fields in NO_ZERO_DATE mode.
This is the subject for refactoring in MDEV-19597.
Conflict resolution from 7d5223310789f967106d86ce193ef31b315ecff0.
The constructor of Lex_ident_sys initializes the object to LEX_CSTRING(NULL,0) if character set
conversion goes wrong, and raises the "wrong character string" error in
the diagnostics area.
The code in sql_yacc.yy did not check Lex_ident_sys::ptr against NULL,
so the execution entered functions that did not expect NULL (and crashed).
Fixing the code to do MYSQL_YYABORT if Lex_ident_sys::ptr is NULL
after construction.
UPDATE gets access to history records because versioning conditions
are not set for the VIEW. This leads to an endless loop of inserting history
records when the clustered index is rebuilt and ha_rnd_next() returns a
newly inserted history record.
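A minimal sketch of the scenario (names are illustrative):

CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING;
CREATE VIEW v1 AS SELECT * FROM t1;
INSERT INTO t1 VALUES (1);
UPDATE v1 SET a=a+1;

Without versioning conditions on the VIEW, the UPDATE can keep picking up
the history records it has just inserted.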
Restore the original behavior of failing on a write-locked table in a
historical query.
35b679b9 assumed that SELECT_LEX::lock_type influenced anything, but
actually at this point the table is already locked. The original bug report
was tempesta-tech/mariadb#102
System versioning assertion fix. Since DROP SYSTEM VERSIONING does not
change the list of dropped keys, we should handle this special case.
Caused by MDEV-19751. This fix deprecates MDEV-17091.
- `SET DEFAULT ROLE xxx [FOR yyy]` should say:
"User yyy has not been granted a role xxx" if:
- The current user (not the user `yyy` in the FOR clause) can see the
role xxx. It can see the role if:
* role exists in `mysql.roles_mappings` (traverse the graph),
* If the current user has read access on the `mysql.user` table - in
that case, it can see all roles, granted or not.
- Otherwise it should be "Invalid role specification".
In other words, it should not be possible to use `SET DEFAULT ROLE` to discover whether a specific role exists or not.
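For illustration (user and role names are made up):

CREATE ROLE r1;
CREATE USER u1;
SET DEFAULT ROLE r1 FOR u1;

If the current user can see r1 (via mysql.roles_mappings or read access
on mysql.user), the error is "User u1 has not been granted a role r1";
otherwise it is "Invalid role specification".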
This reverts commit 6f1f911497.
because it doesn't do anything now (the server doesn't check
my_disable_leak_check) and it never did anything before
(because without `extern` it simply created a local instance of
my_disable_leak_check and did not affect the server's my_disable_leak_check).
Cannot use the current THD::mem_root, because it can be temporarily
reassigned to something with a very different life time
(e.g. to TABLE::mem_root or range optimizer mem_root).
MDEV-20578 Got error 126 when executing undo undo_key_delete
upon Aria crash recovery
The crash happens in this scenario:
- Table with unique keys and non unique keys
- Batch insert (LOAD DATA or INSERT ... SELECT) with REPLACE
- Some insert succeeds followed by duplicate key error
In the above scenario the table gets corrupted.
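A sketch of the scenario (table layout and data file are hypothetical):

CREATE TABLE t1 (pk INT PRIMARY KEY, u INT, k INT, UNIQUE KEY(u), KEY(k)) ENGINE=Aria;
INSERT INTO t1 VALUES (1,1,1);
LOAD DATA INFILE 'rows.txt' REPLACE INTO TABLE t1;

If some rows succeed and then a duplicate key is hit, the table got
corrupted and Aria crash recovery later failed with error 126.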
The bug was that we didn't generate any undo entry for the
failed insert, as the whole insert can be ignored by undo.
The code did, however, not take into account that when bulk
insert is used, we would write cached keys to the file on
failure, and undo would wrongly ignore these.
Fixed by moving the writing of the cached keys to after we write
the aborted-insert event to the log.
The immediate bug was caused by a failure to recognize a correct
position to stop the slave applier run in optimistic parallel mode.
The analysis unveiled the following set of issues:
1. an incorrect estimate for the event binlog position passed to
   is_until_satisfied;
2. the driver thread's wait for workers to complete did not account for
   non-group events that could be left unprocessed and thus could mix up
   the last executed binlog group's file and position: the file remained
   old while the position related to the new rotated file;
3. an incorrect 'slave reached file:pos' report by the parallel slave in
   the error log;
4. relay-log UNTIL missed out the parallel slave branch in
   is_until_satisfied.
The patch addresses all of them and simplifies the logic of log change
notification in both the master and relay-log UNTIL cases.
P.1 is addressed by passing the event into is_until_satisfied()
for proper analysis by the function.
P.2 is fixed by changes in handle_queued_pos_update().
P.4 required removing relay-log change notification by workers.
Instead the driver thread updates the notion of the current relay-log
fully itself with the aid of the introduced
bool Relay_log_info::until_relay_log_names_defer.
An extra printout of the requested until file:pos is arranged
with --log-warnings=3.
update_virtual_field() is called as part of the index rebuild in
ha_myisam::repair() (MDEV-5800), which is done on bulk INSERT finish.
The assertion in update_virtual_field() was added as part of MDEV-16222
because update_virtual_field() returns in_use->is_error(). The idea: the
error status set before update_virtual_field() and the status returned by
update_virtual_field() have wrongly mixed semantics, so the former can
falsely influence the latter.
The default (empty) field list in the partitioning BY KEY() clause is assigned
from the primary key. If the primary key is changed, the partitioning field
list changes as well, so repartitioning is required. This is not applicable to
non-primary keys, as the default field list may be taken only from the
primary key.
The hang can happen between a connection issuing KILL CONNECTION and a
victim which is in the committing phase.
A two-resource deadlock happens, where the killer is holding the victim's
LOCK_thd_data and requires the victim's trx mutex.
The victim, on the other hand, holds its own trx mutex but requires LOCK_thd_data
in wsrep_commit_ordered(). Hence a classic two-thread deadlock happens.
The fix in this commit changes the innodb commit so that wsrep_commit_ordered()
is not called while holding the trx mutex. With this, wsrep patch commit-time mutex
locking does not violate the locking protocol of the KILL command
(i.e. LOCK_thd_data -> trx mutex).
Also, a new test case has been added in galera.galera_bf_kill.test for the scenario
where a client connection is killed in the committing phase.
A temporary table is needed for window function computation, but if only a NAMED WINDOW SPEC
is used and there is no window function, then there is no need to create a temporary
table, as there is no stage to compute a WINDOW FUNCTION.
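An example of such a query (t1 and a are placeholders):

SELECT a FROM t1 WINDOW w AS (ORDER BY a);

The named window w is defined but no window function uses it, so no
temporary table is required.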
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock
This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
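An illustrative way to hit the failure (names invented):

CREATE TABLE t1 (a INT);
LOCK TABLES t1 WRITE, t1 AS t1b WRITE;
ALTER TABLE t1 ADD COLUMN b INT;
UNLOCK TABLES;

The same table is locked twice, so all of its instances in the
locked_table_list must be marked for reopen.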
The major change is to replace all instances of
table->m_needs_reopen= true;
with
table->mark_table_for_reopen();
The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list and when we
reopen the tables, we first close all tables before reopening
and locking them.
Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
are no tables marked for reopen. (performance)
MDEV-22617 Galera node crashes when trying to log to slow_log table in
streaming replication mode
Other things:
- Changed the name of wsrep_after_row() (two arguments) to
wsrep_after_row_internal() (one argument) to not depend on a
function signature with unused arguments.
When my_vsnprintf() is patched, the code disabled with
'WAITING_FOR_BUGFIX_TO_VSPRINTF' should be enabled again. Also, all %b
formats in this patch should be reverted to %s again.
MDEV-22531 Remove maria::implicit_commit()
MDEV-22607 Assertion `ha_info->ht() != binlog_hton' failed in
MYSQL_BIN_LOG::unlog_xa_prepare
From the handler point of view, Aria now looks like a transactional
engine. One effect of this is that we don't need to call
maria::implicit_commit() anymore.
This change also forces the server to call trans_commit_stmt() after doing
any reads or writes to system tables. This work will also make it easier
to later allow users to have system tables in engines other than Aria.
To handle the case that Aria doesn't support rollback, a new
handlerton flag, HTON_NO_ROLLBACK, was added to engines that have
transactions without rollback (for the moment only binlog and Aria).
Other things:
- Moved freeing of MARIA_SHARE to a separate function, as the MARIA_SHARE
  can still be part of a transaction even if the table has been closed.
- Changed Aria checkpoint to use the new MARIA_SHARE free function. This
fixes a possible memory leak when using S3 tables
- Changed testing of binlog_hton to instead test for HTON_NO_ROLLBACK
- Removed the check of has_transaction_manager() in handler.cc; as the
  transaction was started by the engine, we can assume that it does
  support transactions.
- Added a new class 'start_new_trans' that can be used to start independent
  sub-transactions, for example while reading mysql.proc, using help or
  status tables etc.
- open_system_tables...() and open_proc_table_for_Read() no longer take
  an Open_tables_backup list. This is now handled by 'start_new_trans'.
- Split thd::has_transactions() into thd::has_transactions() and
  thd::has_transactions_and_rollback()
- Added handlerton code to free cached transactions objects.
Needed by InnoDB.
All changes (except one) are of the type
thd->transaction. -> thd->transaction->
thd->transaction points by default to 'thd->default_transaction'.
This allows us to 'easily' have multiple active transactions for a
THD object, for example when reading data from the mysql.proc table.
MDEV-22468 BACKUP STAGE BLOCK_COMMIT should block commit in the Aria engine
This is needed to ensure that mariabackup works properly with Aria tables
This code adds new calls to ha_maria::implicit_commit(). These will be
deleted by MDEV-22531 (Remove maria::implicit_commit()).