No functional change.
Call my_timer_init() only once and then reuse it from InnoDB and
perfschema storage engines.
This patch speeds up an empty test run for me like this:
./mtr -mem innodb.kevg,xtradb 1.21s user 0.84s system 34% cpu 5.999 total
./mtr -mem innodb.kevg,xtradb 1.12s user 0.60s system 31% cpu 5.385 total
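The change itself boils down to a once-only initialization that both storage engines then share. A minimal sketch of the idea (the guard variable and function name below are made up for illustration; MY_TIMER_INFO and my_timer_init() come from mysys):

/* Calibrate the cycle/nanosecond/... timers once and let both InnoDB and
   perfschema reuse the result instead of each running my_timer_init(). */
static MY_TIMER_INFO sys_timer_info;
static bool          sys_timer_info_inited= false;

static void init_sys_timer_info_once(void)
{
  if (!sys_timer_info_inited)
  {
    my_timer_init(&sys_timer_info);
    sys_timer_info_inited= true;
  }
}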
For partitioned table, ensure that the AUTO_INCREMENT values will
be assigned from the same sequence. This is based on the following
change in MySQL 5.6.44:
commit aaba359c13d9200747a609730dafafc3b63cd4d6
Author: Rahul Malik <rahul.m.malik@oracle.com>
Date: Mon Feb 4 13:31:41 2019 +0530
Bug#28573894 ALTER PARTITIONED TABLE ADD AUTO_INCREMENT DIFF RESULT DEPENDING ON ALGORITHM
Problem:
When a partitioned table is altered in place to add an auto-increment column,
its values start over for each partition.
Analysis:
In the case of in-place ALTER, InnoDB creates a new sequence object
for each partition. It is default-initialized, so the auto-increment
column starts over for each partition.
Fix:
Assign the old sequence of the partition to the sequence of the next
partition so that it does not start over.
RB#21148
Reviewed by Bin Su <bin.x.su@oracle.com>
Correctly document the usage of m_max_value. Remove the const
qualifier, so that the implicit assignment operator can be used.
Make all members of ib_sequence private, and add an accessor
member function max_value().
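After the change the class is shaped roughly as below. This is a minimal sketch only (the constructor and the m_increment/m_next_value members are simplified assumptions, not the real handler0alter.cc code); the point is that with m_max_value no longer const, the implicit assignment operator works, which is what lets the partition fix above carry one partition's sequence over to the next one.

class ib_sequence
{
public:
  ib_sequence(ulonglong start, ulonglong increment, ulonglong max_value)
    : m_max_value(max_value), m_increment(increment), m_next_value(start) {}

  /** @return the maximum value that the auto-increment column can hold */
  ulonglong max_value() const { return m_max_value; }

private:
  /** Maximum value of the column; no longer const, so that the implicit
  assignment operator can be used. */
  ulonglong m_max_value;
  /** Increment between generated values. */
  ulonglong m_increment;
  /** Next value to hand out. */
  ulonglong m_next_value;
};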
PROBLEM
=======
An ADD INDEX operation does not update the index length statistics in the
INFORMATION_SCHEMA.TABLES table.
FIX
===
Update the dict_table_t variable with the index length statistics that are
actually calculated after the ALTER, as this variable is used to populate
the INFORMATION_SCHEMA index length statistics.
Reviewed by: Bin Su <bin.x.su@oracle.com>
RB: 21277
In debug builds, this setting is allowed to be turned off temporarily after it was
turned on during startup. However, memory garbage could also cause it to be
accidentally turned on when it had been disabled at startup.
InnoDB could return the same list again and again if the buffer
passed to trx_recover_for_mysql() is smaller than the number of
transactions that InnoDB recovered in XA PREPARE state.
We introduce the transaction state TRX_PREPARED_RECOVERED, which
is like TRX_PREPARED, but will be set during trx_recover_for_mysql()
so that each transaction will only be returned once.
Because init_server_components() is invoking ha_recover() twice,
we must reset the state of the transactions back to TRX_PREPARED
after returning the complete list, so that repeated traversals
will see the complete list again, instead of seeing an empty list.
Without this tweak, the test main.tc_heuristic_recover would hang
in MariaDB 10.1.
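In sketch form, the intended traversal looks like this (simplified; the list-iteration helpers and field names are illustrative rather than the real trx0trx code):

int trx_recover_for_mysql(XID *xid_list, ulint len)
{
  ulint count= 0;

  /* Hand out each XA PREPARE transaction at most once. */
  for (trx_t *trx= trx_list_first(); trx != NULL; trx= trx_list_next(trx))
  {
    if (trx->state != TRX_PREPARED)
      continue;
    if (count == len)
      return count;                      /* buffer full; caller may ask again */
    xid_list[count++]= trx->xid;
    trx->state= TRX_PREPARED_RECOVERED;  /* do not return this trx again */
  }

  /* The complete list has been returned. Reset the marker so that a repeated
     traversal (init_server_components() calls ha_recover() twice) sees the
     complete list again instead of an empty one. */
  for (trx_t *trx= trx_list_first(); trx != NULL; trx= trx_list_next(trx))
    if (trx->state == TRX_PREPARED_RECOVERED)
      trx->state= TRX_PREPARED;

  return count;
}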
dict_create_foreign_constraints_low(): Tolerate the keywords
IGNORE and ONLINE between the keywords ALTER and TABLE.
We should really remove the hacky FOREIGN KEY constraint parser
from InnoDB.
InnoDB includes 3 parsers, which use 3 lexical analyzers that
are generated with flex. Flex versions before 2.6 emitted
the keyword "register", which is deprecated in C++17.
The lexical analyzers were regenerated as follows:
for s in storage/innobase storage/xtradb
do
    (cd "$s"/pars; ./make_flex.sh)
    touch "$s"/fts/*.l
    make -C "$s"/fts -f Makefile.query
done
This reverts commit 21b2fada7a
and commit 81d71ee6b2.
The MDEV-18464 change introduces a few data race issues. Contrary to
the documentation, the field trx_t::victim is not always being protected
by lock_sys_t::mutex and trx_t::mutex. Most importantly, it seems
that KILL QUERY could wrongly avoid acquiring both mutexes when
invoking lock_trx_handle_wait_low(), in case another thread had
already set trx->victim=true.
We also revert MDEV-12009, because it should depend on the MDEV-18464
fix being present.
Pushed the decision on InnoDB transaction and lock-system mutex locking
down to the lock0lock.cc level. With this, we can avoid releasing these
mutexes in executions where they were acquired upfront.
This patch also fixes BF aborting of native threads, e.g.
threads which have declared wsrep_on=OFF. Earlier, we used the
was_chosen_as_deadlock_victim flag on InnoDB trx locks to mark
InnoDB transactions that are victims of a wsrep BF abort. With
native threads (wsrep_on==OFF), re-using the was_chosen_as_deadlock_victim
flag may lead to interference with real deadlock handling; to deal with
this, the patch adds a new flag that marks wsrep BF aborts only: victim=true.
Similarly, if replication decides to abort one of the threads,
we mark the victim with victim=true.
innobase_kill_query
Remove lock sys and trx mutex handling.
wsrep_innobase_kill_one_trx
Mark victim trx with victim=true
trx0trx.h
Remove trx_abort_t type and abort type variable from
trx struct. Add victim variable to trx.
wsrep_kill_victim
Remove abort_type
lock_report_waiters_to_mysql
Take also trx mutex and mark trx as a victim for
replication abort.
lock_trx_handle_wait_low
New low level function to check whether the transaction
has already been rolled back because it was selected as
a deadlock victim, or if it has to wait then cancel
the wait lock.
lock_trx_handle_wait
If the transaction is not marked as a victim, take the lock_sys and
trx mutexes before calling lock_trx_handle_wait_low and release them
after that (see the sketch after this list).
row_search_for_mysql
Remove lock sys and trx mutex taking and releasing.
trx_rollback_to_savepoint_for_mysql_low
trx_commit_in_memory
Clean up victim variable.
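The split between the two lock functions looks roughly as follows (a sketch of the behaviour described above; the mutex helpers and the layout of trx_t are simplified):

/* Caller must hold lock_sys->mutex and trx->mutex. */
static dberr_t lock_trx_handle_wait_low(trx_t *trx)
{
  if (trx->lock.was_chosen_as_deadlock_victim)
    return DB_DEADLOCK;           /* already rolled back as a deadlock victim */

  if (trx->lock.wait_lock != NULL)
  {
    lock_cancel_waiting_and_release(trx->lock.wait_lock);
    return DB_LOCK_WAIT;          /* the wait lock was cancelled */
  }

  return DB_SUCCESS;
}

dberr_t lock_trx_handle_wait(trx_t *trx)
{
  if (trx->victim)
    /* Marked for a wsrep BF abort or replication abort; on this code path
       the mutexes are assumed to be held already. */
    return lock_trx_handle_wait_low(trx);

  lock_mutex_enter();
  trx_mutex_enter(trx);
  dberr_t err= lock_trx_handle_wait_low(trx);
  trx_mutex_exit(trx);
  lock_mutex_exit();
  return err;
}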
now we can afford it. Fix -Werror errors. Note:
* old gcc is bad at detecting uninitialized variables, so that check is disabled for it.
* time_t is int or long, so cast it for printf (see the example below).
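For the time_t case, the fix is the usual portable-printf idiom, e.g.:

#include <stdio.h>
#include <time.h>

int main(void)
{
  time_t t= time(NULL);
  /* time_t may be int or long depending on the platform, so cast for printf */
  printf("started at %ld\n", (long) t);
  return 0;
}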
Before MDEV-12113 (MariaDB Server 10.1.25), on shutdown InnoDB would write
the current LSN to the first page of each file of the system tablespace.
This is incompatible with MariaDB's InnoDB table encryption, because
encryption repurposed the field for an encryption key ID and checksum.
buf_page_is_corrupted(): For the InnoDB system tablespace, skip
FIL_PAGE_FILE_FLUSH_LSN when checking if a page is all zero,
because the first page of each file in the system tablespace can
contain nonzero bytes in the field.
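A simplified version of the adjusted all-zero check, to show where the exception applies (illustrative only; not the actual buf_page_is_corrupted() code, and the field offset is hard-coded here):

#include <stddef.h>

static const size_t FIL_PAGE_FILE_FLUSH_LSN= 26;  /* offset of the legacy flush-LSN field */

bool page_is_all_zero(const unsigned char *page, size_t page_size,
                      bool is_system_tablespace)
{
  for (size_t i= 0; i < page_size; i++)
  {
    /* The first page of each system tablespace file may carry a
       pre-MDEV-12113 shutdown LSN in this 8-byte field; do not let it
       disqualify an otherwise all-zero page. */
    if (is_system_tablespace
        && i >= FIL_PAGE_FILE_FLUSH_LSN
        && i < FIL_PAGE_FILE_FLUSH_LSN + 8)
      continue;
    if (page[i] != 0)
      return false;
  }
  return true;
}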
When there is a huge transaction in the undo log, the purge threads
may get stuck in trx_purge_attach_undo_recs() for a long time,
causing the server to hang on a normal shutdown (innodb_fast_shutdown>0).
Apparently the innodb_purge_batch_size does not work correctly, or the
n_pages_handled is not being incremented correctly. We do not fix that
for now, but we will instead check if shutdown has been initiated,
allowing the purge threads to shut down without delays.
There were two newly enabled warnings:
1. casts of function pointers. Affected sql_analyse.h, mi_write.c
and ma_write.cc, mf_iocache-t.cc, mysqlbinlog.cc, encryption.cc, etc.
2. memcpy/memset of non-trivial structures (see the illustration after this list). Fixed as:
* the warning disabled for InnoDB
* TABLE, TABLE_SHARE, and TABLE_LIST got a new method reset() which
does the bzero(), which is safe for these classes, but any other
bzero() will still cause a warning
* Table_scope_and_contents_source_st uses `TABLE_LIST *` (trivial)
instead of `SQL_I_List<TABLE_LIST>` (not trivial) so it's safe to
bzero now.
* added casts in debug_sync.cc and sql_select.cc (for JOIN)
* move assignment method for MDL_request instead of memcpy()
* PARTIAL_INDEX_INTERSECT_INFO::init() instead of bzero()
* remove constructor from READ_RECORD() to make it trivial
* replace some memcpy() with c++ copy assignments
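A generic illustration of warning class 2 and the reset() pattern (the struct below is made up and only stands in for TABLE and friends):

#include <cstring>

struct Dummy                       /* stands in for TABLE / TABLE_SHARE / TABLE_LIST */
{
  Dummy() {}                       /* a user-provided constructor makes the type non-trivial */
  int         counter;
  const char *name;

  /* GCC 8 warns about memset()/bzero() of a non-trivial class at every call
     site. Moving the zeroing into a member function keeps it in the one
     place where it is known to be safe for this particular class. */
  void reset() { memset(static_cast<void*>(this), 0, sizeof *this); }
};

int main()
{
  Dummy a;
  a.reset();                       /* instead of bzero(&a, sizeof a) at the call site */

  Dummy b;
  b= a;                            /* copy assignment instead of memcpy(&b, &a, sizeof b) */
  return 0;
}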
row_merge_create_fts_sort_index(): Initialize dict_col_t in
an unambiguous way. GCC 6 and later appear to be able to optimize
away the memset() that is part of mem_heap_zalloc() in the
placement new call. Let us avoid using placement new in order
to ensure that the objects will actually be initialized.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71388
https://gcc.gnu.org/ml/gcc/2016-02/msg00207.html
While the latter reference hints that the optimization is only
applicable to non-POD types (and dict_col_t does not define
any member functions before 10.2), it is most consistent to
use the same initialization across all versions.
row_merge_create_index_graph(): Relay the internal state
from dict_create_index_step(). Our caller should free the index
only if it was not copied, added to the cache, and freed.
row_merge_create_index(): Free the index template if it was
not added to the cache. This is a safer variant of the logic
that was introduced in 65070beffd in 10.2.
prepare_inplace_alter_table_dict(): Add additional fault injection
to exercise a code path where we have already added an index
to the cache.
row_mysql_handle_errors(): Correct the wrong error handling for
the code DB_FOREIGN_EXCEED_MAX_CASCADE that was introduced in
c0923d396a
commit 35f5429eda
Author: Jimmy Yang <jimmy.yang@oracle.com>
Date: Wed Oct 6 06:55:34 2010 -0700
Manual port of Bug #54582 "stack overflow when opening many tables
linked with foreign keys at once" from mysql-5.1-security to
mysql-5.5-security again.
rb://391 approved by Heikki
No known test case exists for repeating the bug before MariaDB 10.2.
The scenario should be that DB_FOREIGN_EXCEED_MAX_CASCADE is returned,
then InnoDB wrongly skips the rollback to the start of the current
row operation, and finally the SQL layer commits the transaction.
Normally the SQL layer would roll back either the entire transaction or
to the start of the statement. In the faulty scenario, InnoDB would
leave the transaction in an inconsistent state, and the SQL layer could
commit the transaction.
I know of no test case for this bug in 10.1, so a test case will be
committed separately in 10.2.
fts_reset_get_doc(): properly initialize fts_get_doc_t::cache
fts_fetch_index_words(): Restore the initialization len=0.
The test innodb_fts.create in 10.2 would end up in an infinite loop
if this assignment is removed, because a following iteration of the
while() loop would assign zip->zp->avail_in=len with the original value
instead of the 0 that was reset in the previous iteration.
Fix the warnings issued by GCC 8 -Wstringop-truncation
and -Wstringop-overflow in InnoDB and XtraDB.
This work is motivated by Jan Lindström. The patch mainly differs
from his original one as follows:
(1) We remove explicit initialization of stack-allocated string buffers.
The minimum amount of initialization that is needed is a terminating
NUL character.
(2) GCC issues a warning for invoking strncpy(dest, src, sizeof dest)
because if strlen(src) >= sizeof dest, there would be no terminating
NUL byte in dest. We avoid this problem by invoking strncpy() with
a limit that is 1 less than the buffer size, and by always writing
NUL to the last byte of the buffer (see the short example after this list).
(3) We replace strncpy() with memcpy() or strcpy() in those cases
when the result is functionally equivalent.
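Point (2) in concrete form (a generic example; the struct and buffer size are made up):

#include <string.h>

struct example { char name[64]; };

void set_name(struct example *e, const char *src)
{
  /* Copy at most sizeof - 1 bytes and always NUL-terminate the last byte,
     so the destination is guaranteed to be a valid C string and GCC 8 has
     nothing to warn about. */
  strncpy(e->name, src, sizeof e->name - 1);
  e->name[sizeof e->name - 1]= '\0';
}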
Note: fts_fetch_index_words() never deals with len==UNIV_SQL_NULL.
This was enforced by an assertion that limits the maximum length
to FTS_MAX_WORD_LEN. Also, the encoding that InnoDB uses for
the compressed fulltext index is not byte-order agnostic, that is,
InnoDB data files that use FULLTEXT INDEX are not portable between
big-endian and little-endian systems.
row_merge_create_fts_sort_index(): Initialize dict_col_t.
This fixes an access to uninitialized dict_col_t::ind when a debug
assertion in MariaDB 10.4 invokes is_dropped() in
rec_get_converted_size_comp_prefix_low(). Older MariaDB versions
seem to be unaffected by the uninitialized values, but it should
not hurt to initialize everything.
Only starting with MariaDB 10.3.8 (MDEV-16365), InnoDB can actually
handle ALTER IGNORE TABLE correctly when introducing a NOT NULL
attribute to a column that contains a NULL value. Between
MariaDB Server 10.0 and 10.2, we would incorrectly return an error
for ALTER IGNORE TABLE when the column contains a NULL value.
The problem with the InnoDB table attribute encryption_key_id is that it is
not persisted anywhere in InnoDB except when the table attribute
encryption is specified and is something other than encryption=default.
MDEV-17320 made it a hard error if encryption_key_id is specified to be
anything other than 1 in that case.
Ideally, we would always persist encryption_key_id in InnoDB. But then we
would have to be prepared for the case where encryption is being enabled
for a table whose encryption_key_id attribute refers to a non-existing key.
In MariaDB Server 10.1, our best option remains to not store anything
inside InnoDB. But, instead of returning the error that MDEV-17320
introduced, we should merely issue a warning that the specified
encryption_key_id is going to be ignored if encryption=default.
To improve the situation a little more, we will issue a warning if
SET [GLOBAL|SESSION] innodb_default_encryption_key_id is being set
to something that does not refer to an available encryption key.
Starting with MariaDB Server 10.2, thanks to MDEV-5800, we could open the
table definition from InnoDB side when the encryption is being enabled,
and actually fix the root cause of what was reported in MDEV-17320.
If we have a 2+ node cluster which is replicating from an async master
and the binlog_format is set to STATEMENT and multi-row inserts are executed
on a table with an auto_increment column such that values are automatically
generated by MySQL, then the server node generates wrong auto_increment
values, which are different from what was generated on the async master.
In the title of MDEV-9519 it was proposed to ban START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.
The causes and fixes:
1. We need to improve processing of changing the auto-increment values
after changing the cluster size.
2. If wsrep_auto_increment_control is switched on during operation of
the node, then we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback (see the sketch below).
In the current version these variables retain their initial values if
wsrep_auto_increment_control is switched on during operation of the node,
which leads to inconsistent results on different nodes in some scenarios.
3. If wsrep_auto_increment_control is switched off during operation of the
node, then we must restore the original values of the auto_increment_increment
and auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).
https://jira.mariadb.org/browse/MDEV-9519
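For reference, with wsrep_auto_increment_control=ON the view-change handling is expected to derive the two variables from the cluster membership roughly like this (a simplified sketch, not the actual wsrep_view_handler_cb() code; the view fields follow the wsrep API naming but are simplified here):

/* On each cluster view change while wsrep_auto_increment_control is ON: */
global_system_variables.auto_increment_increment= view->memb_num;    /* cluster size       */
global_system_variables.auto_increment_offset   = view->my_idx + 1;  /* this node, 1-based  */

/* When wsrep_auto_increment_control is switched OFF, restore the values the
   user had configured, kept in the "shadow copies" mentioned in point 3. */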