For a partitioned table, ensure that the AUTO_INCREMENT values are
assigned from the same sequence. This is based on the following
change in MySQL 5.6.44:
commit aaba359c13d9200747a609730dafafc3b63cd4d6
Author: Rahul Malik <rahul.m.malik@oracle.com>
Date: Mon Feb 4 13:31:41 2019 +0530
Bug#28573894 ALTER PARTITIONED TABLE ADD AUTO_INCREMENT DIFF RESULT DEPENDING ON ALGORITHM
Problem:
When a partitioned table is altered in place to add an auto-increment
column, its values start over for each partition.
Analysis:
In the case of in-place ALTER, InnoDB creates a new sequence object
for each partition. It is default-initialized, so the auto-increment
column starts over for each partition.
Fix:
Assign the old sequence of the partition to the sequence of the next
partition so it won't start over.
RB#21148
Reviewed by Bin Su <bin.x.su@oracle.com>
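A minimal sketch of the hand-over idea (hypothetical types, not the
actual InnoDB code):

  #include <cstdint>

  // Stand-in for InnoDB's per-partition auto-increment sequence. A
  // freshly created object is default-initialized, i.e. numbering
  // would restart at 1.
  struct Sequence {
    uint64_t next_value = 1;
    uint64_t fetch() { return next_value++; }
  };

  // The fix: when the in-place ALTER moves on to the next partition,
  // seed its sequence from the previous partition's state instead of
  // leaving it default-initialized, so every partition draws its
  // values from one continuous sequence.
  void carry_over(const Sequence &prev, Sequence &next_part) {
    next_part.next_value = prev.next_value;
  }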
Field_bit for BIT(20) uses 2 full bytes in the record,
with an additional 4 uneven bits in the "null bit area".
Field::set_default() called from Field_bit::set_default() erroneously
copied 3 bytes instead of 2 bytes from the record with default values.
Changing Field::set_default() to copy pack_length_in_rec() bytes
instead of pack_length() bytes.
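A small illustration of the difference (hypothetical model, not the
real Field_bit):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // Model of a BIT(20) column: 16 bits are stored in the record itself
  // (2 bytes), the remaining 4 "uneven" bits live in the null-bit area.
  struct BitField {
    size_t pack_length() const { return 3; }        // total bytes for 20 bits
    size_t pack_length_in_rec() const { return 2; } // bytes inside the record
  };

  void set_default(const BitField &f, uint8_t *rec, const uint8_t *def_rec) {
    // Buggy: memcpy(rec, def_rec, f.pack_length());  // 3 bytes, one too many
    memcpy(rec, def_rec, f.pack_length_in_rec());     // correct: 2 bytes
  }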
The GROUP_CONCAT tree is allocated in a memroot, so the only way to
free memory is to copy a part of the tree into a new memroot.
Track the accumulated length of the result, and when it crosses
the threshold, copy the result into a new tree and free the old one.
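A sketch of the compaction pattern (simplified arena, not the
server's MEM_ROOT):

  #include <cstdlib>
  #include <cstring>
  #include <vector>

  // A memroot-style arena: individual allocations cannot be freed,
  // only the whole arena at once.
  struct Arena {
    std::vector<void *> blocks;
    void *alloc(size_t n) {
      void *p = malloc(n);
      blocks.push_back(p);
      return p;
    }
    ~Arena() { for (void *p : blocks) free(p); }
  };

  struct Result { char *data; size_t len; };

  // Once the accumulated length crosses the threshold, copy the live
  // part into a fresh arena; destroying the old arena then releases
  // everything that is no longer reachable.
  Result compact(const Result &live, Arena &fresh) {
    Result r{ (char *) fresh.alloc(live.len), live.len };
    memcpy(r.data, live.data, live.len);
    return r;
  }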
This patch fixes an invalid read in fill_effective_table_privileges
triggered by a grant_version increase between the PREPARE for a
statement creating a view from I_S and its EXECUTE.
A tmp table was created and freed while preparing the statement, and
TABLE_LIST::table_name was set to point to the tmp table's
TABLE_SHARE::table_name, which no longer existed after preparing was
done.
The grant version increase made fill_effective_table_privileges,
called during EXECUTE, try to fetch the updated grant info, and this
is where the dangling table name was used.
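The underlying pattern, reduced to a minimal hypothetical form (not
the actual server types):

  #include <cstring>

  struct Share {                 // owned by the tmp table, freed early
    char table_name[64];
  };

  struct TableRef {              // lives until EXECUTE
    const char *name;
    char own_name[64];
    void set_name(const Share &s) {
      // name = s.table_name;    // dangles once the tmp table is freed
      strncpy(own_name, s.table_name, sizeof own_name - 1);
      own_name[sizeof own_name - 1] = '\0';
      name = own_name;           // safe: points into our own storage
    }
  };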
Triggers are opened and the tables used in triggers are prelocked in
open_tables(). But multi-update can detect which tables will actually
be updated only later, after all main tables are opened.
Meaning, if a table is used in multi-update but is not actually
updated, its on-update triggers will be opened and their tables will
be prelocked, even though it's unnecessary. This can cause more
tables to be write-locked than needed, causing read_only errors,
privilege errors and lock waits.
Fix: don't open/prelock triggers unless table->updating is true.
In multi-update, after setting table->updating=true, do a second
open_tables() for newly added tables, if any.
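A toy model of the two-pass opening (hypothetical structures, just to
show the control flow):

  #include <vector>

  struct Table {
    bool updating = false;        // will multi-update modify this table?
    bool triggers_opened = false;
  };

  // Prelock triggers only for tables that will actually be updated.
  void open_tables(std::vector<Table> &tables) {
    for (Table &t : tables)
      if (t.updating && !t.triggers_opened)
        t.triggers_opened = true; // stand-in for opening/prelocking triggers
  }

  // Multi-update calls open_tables() once with `updating` still unknown
  // (false), determines the real set of updated tables, sets their
  // `updating` flags, and calls open_tables() again to pick up the
  // newly needed trigger tables.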
Multi-update always required the UPDATE privilege on views, being
unable to detect when a view was not actually updated.
Fix: instead of marking all tables as "updating" by default,
only set "updating" on tables that will actually be updated
by the multi-update, and mark the view as "updating" if any of the
view's tables is.
InnoDB could return the same list again and again if the buffer
passed to trx_recover_for_mysql() is smaller than the number of
transactions that InnoDB recovered in XA PREPARE state.
We introduce the transaction state TRX_PREPARED_RECOVERED, which
is like TRX_PREPARED, but will be set during trx_recover_for_mysql()
so that each transaction will only be returned once.
Because init_server_components() is invoking ha_recover() twice,
we must reset the state of the transactions back to TRX_PREPARED
after returning the complete list, so that repeated traversals
will see the complete list again, instead of seeing an empty list.
Without this tweak, the test main.tc_heuristic_recover would hang
in MariaDB 10.1.
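The enumeration logic, as a self-contained sketch (hypothetical
types; the real code walks InnoDB's transaction list):

  #include <cstddef>
  #include <vector>

  enum class State { PREPARED, PREPARED_RECOVERED };

  struct Trx { State state = State::PREPARED; long xid; };

  // Return up to `len` prepared transactions, marking each returned
  // one so a repeated call with a small buffer advances instead of
  // handing out the same transactions again.
  size_t recover(std::vector<Trx> &trxs, long *buf, size_t len) {
    size_t n = 0;
    for (Trx &t : trxs) {
      if (t.state != State::PREPARED) continue;
      if (n == len) return n;                // buffer full; resume later
      buf[n++] = t.xid;
      t.state = State::PREPARED_RECOVERED;   // returned once, skip next time
    }
    // Complete list returned: reset the states so another full
    // traversal (e.g. the second ha_recover() call) sees it all again.
    for (Trx &t : trxs) t.state = State::PREPARED;
    return n;
  }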
The command SHOW INDEXES ignored the setting of the system variable
use_stat_tables to the value 'preferably' and showed statistical
data received from the engine. Similarly, queries over the table
STATISTICS from INFORMATION_SCHEMA ignored this setting. It happened
because the function fill_schema_table_by_open() did not read any data
from the statistical tables.
For single-table and multi-table updates, engine-independent
statistics were not being read even if the statistics had been
collected.
Fixed it so that when optimizer_use_condition_selectivity > 2, the
available statistics are read for update queries.
Always set SERVER_MORE_RESULTS_EXIST when executing stored procedure
statements.
If a statement produces a result, the EOF packet needs this flag (the
SP itself ends with an OK packet). If a statement does not produce a
result, the affected-rows count is part of the final OK packet.
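In the client/server protocol this is a status bit carried in the
flags field of OK and EOF packets; a minimal check could look like
this (assuming the usual mysql_com.h value of 8 for the bit):

  // Status flag: more result sets follow in this response.
  constexpr unsigned SERVER_MORE_RESULTS_EXIST = 0x0008;

  bool more_results_follow(unsigned server_status) {
    return (server_status & SERVER_MORE_RESULTS_EXIST) != 0;
  }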
This reverts commit 21b2fada7a
and commit 81d71ee6b2.
The MDEV-18464 change introduces a few data race issues. Contrary to
the documentation, the field trx_t::victim is not always being protected
by lock_sys_t::mutex and trx_t::mutex. Most importantly, it seems
that KILL QUERY could wrongly avoid acquiring both mutexes when
invoking lock_trx_handle_wait_low(), in case another thread had
already set trx->victim=true.
We also revert MDEV-12009, because it should depend on the MDEV-18464
fix being present.
As noted in kill_one_thread, SUPER should be able to kill even
system threads, i.e. threads/queries flagged as high priority (BF)
or the wsrep applier thread. A normal user should not be able to
kill such threads/queries.
Ignore FK-prelocked tables when looking for write-prelocked tables
with auto-increment, so as not to complain about "Statement is unsafe
because it invokes a trigger or a stored function that inserts into
an AUTO_INCREMENT column".
Now we can afford it. Fix -Werror errors. Note:
* old gcc is bad at detecting uninitialized variables, so disable
  that warning there.
* time_t is int or long depending on the platform; cast it for printf.
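For example, a portable way to print a time_t:

  #include <cstdio>
  #include <ctime>

  // time_t may be int, long, or long long depending on the platform,
  // so cast to a fixed type that matches the format string.
  void print_timestamp(time_t t) {
    printf("%lld\n", (long long) t);
  }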
select from I_S
Problem:
========
When the applier thread tries to access 'variable_name' of the
INFORMATION_SCHEMA.SESSION_VARIABLES table through triggers, it
results in an abnormal exit of the slave server.
Analysis:
========
At the time of replication of stored routines and triggers, their associated
security context will be sent by the master. The applier thread on the slave
server will use this information to set the required security context for the
execution of stored routines and triggers. This is achieved as follows.
->The stored routine object has a member named 'm_security_ctx' which
holds the security context received from the master.
->The applier thread's security_ctx is stored into a 'backup' object.
->Set the applier thread's security_ctx to 'm_security_ctx'.
->Upon completion of the stored routine execution, restore the applier
thread's original security context from the backup.
During the above process the 'm_security_ctx' object is not initialized
properly. Hence 'external_user' of 'm_security_ctx' holds an invalid
value, and accessing it results in an abnormal exit of the server.
Fix:
===
Invoke Security_context::init() from the constructor of the stored
routine so that 'm_security_ctx' gets initialized properly.
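The shape of the fix, as a minimal sketch (hypothetical member list):

  // The security context received from the master. Without init()
  // its members (e.g. external_user) hold garbage.
  class Security_context {
    const char *external_user;
  public:
    void init() { external_user = nullptr; /* reset other members */ }
  };

  class sp_head {                 // stand-in for the stored routine
    Security_context m_security_ctx;
  public:
    sp_head() { m_security_ctx.init(); } // the fix: init in the ctor
  };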
Item_cond::eval_not_null_tables(): Use Item::eval_const_cond(),
just like Item_cond::fix_fields().
This inconsistency was found while merging to 10.3, where the
Microsoft compiler is configured to report an error for comparing
longlong to bool.
Simulate slow statements only for COM_QUERY and COM_STMT_EXECUTE commands,
to exclude mysqld_stmt_prepare() and mysqld_stmt_close() entries from the log,
as they are not relevant for log_slow_debug.test. This simplifies the test.
Adding an intermediate volatile variable to avoid using co-processor
registers on some platforms (e.g. 32-bit x86).
This change makes test results stable across all platforms.
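A minimal illustration of the technique (generic example, not the
patched function):

  // On 32-bit x86 the x87 FPU keeps intermediates in 80-bit registers,
  // so the same expression can round differently than on platforms
  // that compute in 64-bit doubles. Routing the intermediate through a
  // volatile variable forces a store, rounding it to double precision.
  double scaled_ratio(double a, double b, double c) {
    volatile double tmp = a * b;  // forced to 64-bit double here
    return tmp / c;
  }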