Simplify in an attempt to avoid:
mysqltest: At line 275: File already exist: on the write_file
lines.
Using write_line, as that's what a lot of other tests
do for writing small bits to an expect file.
Review thanks to Vladislav Vaintroub
Problem:
========
- innodb.temp_truncate_freed fails with ER_WRONG_ARGUMENTS and
states that another buffer pool resize is already in progress.
The test case has a wait condition to ensure that the buffer pool
resize is completed. There is a possibility that the wait condition
check could get the false impression that the InnoDB buffer pool
resize completed, due to a previous buffer pool resize.
Fix:
===
Add a more elaborate wait_condition to ensure the current buffer
pool resize has completed.
buf_pool_t::resize(): Set the buffer pool resize status only
after setting the previous buffer pool size to the current buffer
pool size. This should help to make the test case reliable.
Trivial patch, using the handler statistics already collected for
the slow query log.
The test cases were mainly changed to use
SELECT TABLE_SCHEMA ... FROM information_schema.table_statistics instead
of 'SHOW TABLE_STATISTICS', to avoid future changes to test results
if more columns are added to table_statistics.
MDEV-34274 did not fix the test failure. The test has a START SLAVE
UNTIL condition, where we can't use sync_with_master_gtid.inc,
wait_for_slave_to_start.inc, or wait_for_slave_to_stop.inc because
our MTR connection thread races with the start/stop of the SQL/IO
threads. So instead, for slave start, we prove the threads started
by waiting for the connection count to increase by 2; and for slave
stop, we wait for the processlist count to return to its pre-START
SLAVE number.
Note this is a backport of 8c8b3ab784
from 11.1.
The test rpl.rpl_change_master_demote used a `sleep 1` command
to give time for a START SLAVE UNTIL to start the slave threads
and wait for them to automatically die by UNTIL. On machines
with heavy load (especially MSAN bb builders), one second was
not enough, and the test would fail due to the IO thread
still being up.
This patch fixes the test by replacing the sleep with specific
conditions to wait for. The test cannot wait for the IO or SQL
threads to start, as they could already have been started and
stopped by the time the MTR executor checks
the slave status. So instead, we test for proof that they
existed via the Connections status variable being incremented
by at least 2 (Connections just shows the global thread id).
At this point, we still can't use the wait_for_slave_to_stop
helper, as the SQL/IO_Running fields of SHOW SLAVE STATUS
may not be updated yet. So instead, we use
information_schema.processlist, which would show the presence
of the Slave_SQL/IO threads. So to "wait for the slave to stop",
we wait for the Slave_SQL/IO threads to be gone from the
processlist.
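A hedged sketch of the waiting scheme described above ($connections_before
is a hypothetical variable captured before START SLAVE; not the test's
literal code):

  # proof that the slave threads existed: Connections grew by at least 2
  let $wait_condition=
    SELECT variable_value - $connections_before >= 2
      FROM information_schema.global_status
     WHERE variable_name = 'Connections';
  --source include/wait_condition.inc
  # "wait for the slave to stop": threads gone from the processlist
  let $wait_condition=
    SELECT COUNT(*) = 0 FROM information_schema.processlist
     WHERE command IN ('Slave_IO', 'Slave_SQL');
  --source include/wait_condition.inc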
The issue was that the test did not take into account that the IO thread
could have been in COMMAND=Connecting state, which happens before the
COMMAND=Slave_IO state.
The test is a bit fragile as it depends on the COMMAND state being
synchronised with the Slave_IO_State, which is not the case.
I added a new proc state and some more information to the error
output to be able to diagnose future failures more easily.
Improve performance of queries like
SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
by, in this example, replacing the WHERE clause with field = 4
in the case of ref access.
The rewrite is done during fix_fields and we disambiguate this
case from other cases of NAME_CONST by inspecting where we are
in parsing. We rely on THD::where to accomplish this. To
improve performance there, we change the type of THD::where to
be an enumeration, so we can avoid string comparisons during
Item_name_const::fix_fields. Consequently, this patch also
changes all usages of THD::where to conform likewise.
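In the example above, after the rewrite the optimizer effectively sees
(a sketch of the resulting condition, not literal server output):

  SELECT * FROM t1 WHERE field = 4;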
There are two problems.
First, replication fails when XA transactions are used where the
slave has replicate_do_db set and the client has touched a different
database when running DML such as inserts. This is because XA
commands are not treated as keywords, and are thereby not exempt
from the replication filter. The effect of this is that during an XA
transaction, if its logged “use db” from the master is filtered out
by the replication filter, then XA END will be ignored, yet its
corresponding XA PREPARE will be executed in an invalid state,
thereby breaking replication.
Second, if the slave replicates an XA transaction which results in
an empty transaction, the XA START through XA PREPARE first phase of
the transaction won’t be binlogged, yet the XA COMMIT will be
binlogged. This will break replication in chain configurations.
The first problem is fixed by treating XA commands in
Query_log_event as keywords, thus allowing them to bypass the
replication filter. Note that Query_log_event::is_trans_keyword() is
changed to accept a new parameter to define its mode, to either
check for XA commands or regular transaction commands, but not both.
In addition, mysqlbinlog is adapted to use this mode so its
--database filter does not remove XA commands from its output.
The second problem is fixed by overwriting the XA state in the XID
cache to be XA_ROLLBACK_ONLY, so at commit time, the server knows to
rollback the transaction and skip its binlogging. If the xid cache
is cleared before an XA transaction receives its completion command
(e.g. on server shutdown), then before reporting ER_XAER_NOTA when
the completion command is executed, the filter is first checked if
the database is ignored, and if so, the error is ignored.
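A minimal sketch of the first failure scenario, assuming a slave configured
with replicate_do_db=db1 (all names illustrative):

  -- on the master, with a default database that the slave's filter ignores
  USE db2;
  XA START 'xid1';
  INSERT INTO db1.t1 VALUES (1);
  XA END 'xid1';     -- filtered out on the slave before this fix
  XA PREPARE 'xid1'; -- executed in an invalid state, breaking replication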
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
PURGE BINARY LOGS did not always purge binary logs. This commit fixes
some of the issues and adds notifications if a binary log cannot be
purged.
User visible changes:
- 'PURGE BINARY LOG TO log_name' and 'PURGE BINARY LOGS BEFORE date'
worked differently. 'TO' ignored 'slave_connections_needed_for_purge'
while 'BEFORE' did not. Now both versions ignore the
'slave_connections_needed_for_purge' variable.
- 'PURGE BINARY LOG..' commands now return a 'note' if a binary log cannot
be deleted (see the example after this list), like
Note 1375 Binary log 'master-bin.000004' is not purged because it is
the current active binlog
- Automatic binary log purges, based on date or size, will write a
note to the error log if a binary log matching the size or date
cannot yet be deleted.
- If 'slave_connections_needed_for_purge' is not set from a config or
command line, it is set to 0 if Galera is enabled and 1 otherwise
(old default). This ensures that automatic binary log purge works
with Galera as before the addition of
'slave_connections_needed_for_purge'.
If the variable is changed to 0, a warning will be printed to the error
log.
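For example, something like the following now yields a note instead of
silently skipping the log (log name taken from the note above):

  PURGE BINARY LOGS TO 'master-bin.000004';
  SHOW WARNINGS;
  -- Note 1375 Binary log 'master-bin.000004' is not purged because it is
  -- the current active binlog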
Code changes:
- Added THD argument to several purge_logs related functions that needed
THD.
- Added an 'interactive' option to purge_logs functions. This allowed
me to remove testing of sql_command == SQLCOM_PURGE.
- Changed purge_logs_before_date() to first check if the log is applicable
before calling can_purge_logs(). This ensures we do not get a
notification for logs that do not match the removal criteria.
- MYSQL_BIN_LOG::can_purge_log() will write notifications to the user
or error log if a log cannot yet be removed.
- log_in_use() will return the reason why a binary log cannot be removed.
Changes to keep code consistent:
- Moved checking of binlog_format for Galera to be after Galera is
initialized (The old check never worked). If Galera is enabled
we now change the binlog_format to ROW, with a warning, instead of
aborting the server. If this change happens a warning will be printed to
the error log.
- Print a warning if Galera or FLASHBACK changes the binlog_format
to ROW. Before it was done silently.
Reviewed by: Sergei Golubchik <serg@mariadb.com>,
Kristian Nielsen <knielsen@knielsen-hq.org>
- The columns stat_value and sample_size in the mysql.innodb_index_stats
table are declared as BIGINT UNSIGNED without any check constraint.
A user can manually update the value of stat_value and sample_size
to zero. InnoDB then aborts the server while reading the statistics
information, because InnoDB expects at least one leaf
page to exist for the index.
- To fix this issue, InnoDB should interpret the values of
stat_n_leaf_pages and stat_index_size in innodb_index_stats, and
stat_clustered_index_size and stat_sum_of_other_index_sizes
in innodb_table_stats, as valid even though the user
set them to 0.
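A minimal sketch of the kind of manual update that used to trigger the
abort (database, table and stat names illustrative):

  UPDATE mysql.innodb_index_stats
     SET stat_value = 0
   WHERE database_name = 'test' AND table_name = 't1'
     AND stat_name = 'n_leaf_pages';
  -- after a restart, reading these statistics used to abort the server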
Builders:
- amd64-fedora-38-last-N-failed
- amd64-debian-10-last-N-failed
erroneously execute MTR for these files:
- rpl/include/rpl_extra_col_master.test
- rpl/include/rpl_binlog_max_cache_size.test
- rpl/include/rpl_charset.test
when these files are changed by a commit.
Changing their extension from *.test to *.inc to avoid this.
The problem was in the error message suppressions, which did not match
the actual warning messages, due to bad quoting.
Changed the warning message suppressions to a simpler format.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
DML transactions on FK-child tables also get table locks
on FK-parent tables. If there is a DML transaction holding
such a lock, and a TOI transaction starts, the latter
BF-aborts the former and puts itself into a waiting state.
If at this moment another DML transaction on FK-child table
starts, it doesn't check that the transaction waiting on
a parent table lock is TOI, and it erroneously BF-aborts
the waiting TOI transaction.
The fix: don't roll back high-priority transaction waiting
on a lock in InnoDB, instead roll back an incoming DML
transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The IO thread can report error code 2013 into the error log when it
is stopped during the initial connection process to the primary, as
well as when trying to read an event. However, because the IO thread
is being stopped, its connection to the primary is force-killed by
the signaling thread (see THD::awake_no_mutex()), and therefore these
connection errors should be ignored.
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Several variables declared in mysqld.h appear to be old system variables
that have been left over after deprecation. Delete them using IDE
refactoring to automatically search for other uses. Most cases had no
other uses in the code.
slave_allow_batching had a test that was effectively unused, as the
result was only
-ERROR HY000: Unknown system variable 'slave_allow_batching'
so that was deleted as well.
Build and test still works without issue as expected.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
The current semi-sync binlog fail-over recovery process uses
rpl_semi_sync_slave_enabled==TRUE as its condition to truncate a
primary server’s binlog, as it is anticipating the server to re-join
a replication topology as a replica. However, for servers configured
with both rpl_semi_sync_master_enabled=1 and
rpl_semi_sync_slave_enabled=1, if a primary is just re-started (i.e.
retaining its role as master), it can truncate its binlog to drop
transactions which its replica(s) has already received and executed.
If this happens, when the replica reconnects, its gtid_slave_pos can
be ahead of the recovered primary’s gtid_binlog_pos, resulting in an
error state where the replica’s state is ahead of the primary’s.
This patch changes the condition for semi-sync recovery to truncate
the binlog to instead use the configuration variable
--init-rpl-role, when set to SLAVE. This allows for both
rpl_semi_sync_master_enabled and rpl_semi_sync_slave_enabled to be
set for a primary that is restarted, and no transactions will be
lost, so long as --init-rpl-role is not set to SLAVE.
Reviewed By:
============
Sergei Golubchik <serg@mariadb.com>
The special logic used by the memory storage engine
to keep slaves in sync with the master on a restart can
break replication. In particular, after a restart, the
master writes DELETE statements in the binlog for
each MEMORY-based table so the slave can empty its
data. If the DELETE is not executable, e.g. due to
invalid triggers, the slave will error and fail, whereas
the master will never see the problem.
Instead of DELETE statements, use TRUNCATE to
keep slaves in-sync with the master, thereby bypassing
triggers.
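A sketch of the difference (schema illustrative; the trigger is 'invalid'
because it refers to a function that no longer exists):

  CREATE TABLE t1 (a INT) ENGINE=MEMORY;
  CREATE TRIGGER tr1 BEFORE DELETE ON t1 FOR EACH ROW SET @x = dropped_func();
  -- a binlogged DELETE FROM t1 fires tr1 on the slave and fails, while
  -- TRUNCATE TABLE t1 empties the table without firing triggers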
Reviewed By:
===========
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
During read-only mode, InnoDB doesn't allow a checkpoint to happen.
So InnoDB should issue a warning when it is asked to
force a checkpoint while innodb_read_only = 1 or
innodb_force_recovery = 6.
The feedback plugin's server_uid variable and the calculate_server_uid()
function are moved from feedback/utils.cc to sql/mysqld.cc.
server_uid is added as a global variable (shown in 'show variables') and
is written to the error log on server startup together with the server
version and server commit id.
We have an issue if a user has the following in a configuration file:
log_slow_filter="" # Log everything to slow query log
log_queries_not_using_indexes=ON
This sets log_slow_filter to 'not_using_index', which disables
slow query logging of most queries.
In effect, one should never use log_slow_filter="" in config files but
instead use log_slow_filter=ALL.
Fixed by changing log_slow_filter="" that comes either from a
configuration file or from the command line when starting the server,
to log_slow_filter=ALL.
A warning will be printed when this happens.
Other things:
- One can now use =ALL for any 'set' variable to set all options at once.
(backported from 10.6)
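As SQL, the intended safe setup looks like this (a sketch; after this
change '=ALL' is also accepted for set-type variables):

  SET GLOBAL log_slow_filter = 'all';           -- log everything
  SET GLOBAL log_queries_not_using_indexes = ON;
  -- before the fix, log_slow_filter='' combined with the option above
  -- silently became log_slow_filter='not_using_index'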
InnoDB transactions may be reused after committed:
- when taken from the transaction pool
- during a DDL operation execution
In this case wsrep flag on trx object is cleared, which may cause wrong
execution logic afterwards (wsrep-related hooks are not run).
Make trx->wsrep flag initialize from THD object only once on InnoDB transaction
start and don't change it throughout the transaction's lifetime.
The flag is reset at commit time as before.
Unconditionally set wsrep=OFF for THD objects that represent InnoDB background
threads.
Make Wsrep_schema::store_view() operate in its own transaction.
Fix streaming replication transactions' fragments rollback to not switch
THD->wsrep value during transaction's execution
(use THD->wsrep_ignore_table as a workaround).
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Affects:
MDEV-34150 Assertion failure in Diagnostics_area::set_error_status upon binary
logging hitting tmp space limit
MDEV-9101 Limit size of created disk temporary files and tables
This bug was caused by moving flushing of the in-memory-row-events from
close_thread_tables() to binlog_commit() in MDEV-34150.
This was needed to be able to handle the case where binlog writes could
fail.
Galera had two cases where the change caused problems:
- Flushing of row events in commit_one_phase_2() was not done in the case
where the standard binary log was not enabled but Galera was using the
binary log internally.
- Galera disabled the call to binlog_commit_flush_stmt_cache() for
non-ending transactions.
Fixed by adding code that flushes the in-memory-row-events to the binary
log (write, but not sync) in the two above cases if Galera is enabled.
realloc can't be done for mysql->passwd. There's also no pretty way to
pass the ext-salt into hash_password, so let's use thread_local instead.
For GCC < 4.9 (SLES 12, CentOS 7), which don't support _Thread_local,
pk will be recalculated instead.
As part of commit 685d958e38 (MDEV-14425)
the parameter innodb_log_write_ahead_size was removed, because it was
thought that determining the physical block size would be a sufficient
replacement.
However, we can only determine the physical block size on Linux or
Microsoft Windows. On some file systems, the physical block size
is not relevant. For example, XFS uses a block size of 4096 bytes
even if the underlying block size may be smaller.
On Linux, we failed to determine the physical block size if
innodb_log_file_buffered=OFF was not requested or possible.
This will be fixed.
log_sys.write_size: The value of the reintroduced parameter
innodb_log_write_ahead_size. To keep it simple, this is read-only
and a power of two between 512 and 4096 bytes, so that the previous
alignment guarantees are fulfilled. This will replace the previous
log_sys.get_block_size().
log_sys.block_size, log_t::get_block_size(): Remove.
log_t::set_block_size(): Ensure that write_size will not be less
than the physical block size. There is no point to invoke this
function with 512 or less, because that is the minimum value of
write_size.
innodb_params_adjust(): Add some disabled code for adjusting
the minimum value and default value of innodb_log_write_ahead_size
to reflect the log_sys.write_size.
log_t::set_recovered(): Mark the recovery completed. This is the
place to adjust some things if we want to allow write_size>4096.
log_t::resize_write_buf(): Refer to write_size.
log_t::resize_start(): Refer to write_size instead of get_block_size().
log_write_buf(): Simplify some arithmetics and remove a goto.
log_t::write_buf(): Refer to write_size. If we are writing less than
that, do not switch buffers, but keep writing to the same buffer.
Move some code to improve the locality of reference.
recv_scan_log(): Refer to write_size instead of get_block_size().
os_file_create_func(): For type==OS_LOG_FILE on Linux, always invoke
os_file_log_maybe_unbuffered(), so that log_sys.set_block_size() will
be invoked even if we are not attempting to use O_DIRECT.
recv_sys_t::find_checkpoint(): Read the entire log header
in a single 12 KiB request into log_sys.buf.
Tested with:
./mtr --loose-innodb-log-write-ahead-size=4096
./mtr --loose-innodb-log-write-ahead-size=2048
innodb_doublewrite_update(): Disallow any change if srv_read_only_mode
holds, that is, the server was started with innodb_read_only=ON or
innodb_force_recovery=6.
This fixes up commit 1122ac978e (MDEV-33545).
New error codes can only be added in the latest major version.
Adding ER_KILL_DENIED_HIGH_PRIORITY would shift by one all
error codes that were added in MariaDB Server 10.6 or later.
This amends commit 1001dae186
Suggested by: Sergei Golubchik
- A few of the test cases should make sure that InnoDB does hit
the debug sync point during startup of the server.
InnoDB can remove the double quotes around the debug point
in the restart parameters.
The fixes in b8a6719889 did not disable
semi-consistent read for innodb_snapshot_isolation=ON mode; they just
allowed reading the uncommitted version of a record, which is why the test
for MDEV-26643 worked well.
The semi-consistent read should be disabled at the upper level in
row_search_mvcc() for the READ COMMITTED isolation level.
Reviewed by Marko Mäkelä.
- InnoDB tries to write FILE_CHECKPOINT marker during
early recovery when log file size is insufficient.
While updating the log checkpoint at the end of the recovery,
InnoDB must already have written out all pending changes
to the persistent files. To complete the checkpoint, InnoDB
has to write some log records for the checkpoint and to
update the checkpoint header. If the server gets killed
before updating the checkpoint header, it would leave
the log file unrecoverable.
- This patch avoids FILE_CHECKPOINT marker during early
recovery and narrows down the window of opportunity to
make the log file unrecoverable.
- The deadlock counter was moved from
Deadlock::find_cycle into Deadlock::report, because
the find_cycle method is called multiple times during the deadlock
detection flow, which means it shouldn't have such side effects.
But report() can, as it is called only once for
a victim transaction.
- Also the deadlock_detect.test and *.result test case
have been extended to cover the fix.
The problem was that there were two non-conflicting idle local
transactions in node_1 that had both inserted a key into a primary key.
Then two transactions from other nodes also inserted
keys into the primary key, so that the insert from node_2 conflicted
with one of the local transactions in node_1, so that there would
be a duplicate key if both were committed. For this, the insert
from the other node tries to acquire an S-lock for this record,
and because this insert is a high-priority brute force (BF)
transaction it will kill the idle local transaction.
Concurrently, the second insert from node_3 conflicts with the second
idle insert transaction in node_1. Again, it tries to acquire an
S-lock for this record and kills the idle local transaction.
At this point we have two non-conflicting high-priority
transactions holding S-locks on different records in node_1.
For example like this: rec s-lock-node2-rec s-lock-node3-rec rec.
Because these high-priority BF transactions do not wait for
each other, the insert from node_3, which has a later seqno compared
to the insert from node_2, can continue. It will try to acquire an
insert intention lock for the record it tries to insert (to avoid a
duplicate key being inserted by a local transaction). However,
it will note that there is a conflicting S-lock in the same gap
between records. This leads to a deadlock error, as we have
defined that BF transactions may not wait for record locks,
but we can't kill the conflicting BF transaction because
it has a lower seqno and it should commit first.
The BF transactions are executed concurrently because their
primary key values are different, i.e. they do not
conflict.
Galera certification will make sure that inserts from
other nodes, i.e. these high-priority BF transactions,
can't insert duplicate keys. Local transactions naturally
can, but they will be killed when a BF transaction
acquires the required record locks.
Therefore, we can allow the situation where there is a conflicting
S-lock and an insert intention lock, regardless of their seqno
order, and let both continue without waiting. This leads
to a situation where we need to allow a BF transaction
to wait when lock_rec_has_to_wait_in_queue is called,
because this function is also called from
lock_rec_queue_validate, and because the lock is waiting
there would be an assertion failure in ut_a(lock->is_gap()
|| lock_rec_has_to_wait_in_queue(cell, lock));
lock_wait_wsrep_kill
Add debug sync points for BF transactions killing a
local transaction.
wsrep_assert_no_bf_bf_wait
Also print the requested lock information.
lock_rec_has_to_wait
Add a function to handle wsrep transaction lock wait
cases.
lock_rec_has_to_wait_wsrep
New function to handle wsrep transaction lock wait
exceptions.
lock_rec_has_to_wait_in_queue
Remove the wsrep exception; in this function all
conflicting locks need to wait in the queue.
Conflicts between BF and local transactions
are handled in lock_wait.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Changed the error code for Galera unkillable threads to
ER_KILL_DENIED_HIGH_PRIORITY, giving the message
This is a high priority thread/query and cannot be killed
without compromising the consistency of the cluster
Also a warning is produced:
Thread %lld is [wsrep applier|high priority] and cannot be killed
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
rpl.rpl_start_alter_chain_basic used to fail sporadically due
to a missed GTID master-slave synchronization, which was necessary
because of the following SELECT from the GTID state table.
Fixed by arranging two synchronization pieces for the two
chain slaves requiring that.
Note rpl.rpl_start_alter_restart_slave must have been fixed by
MDEV-30460 and 87e13722a9 (manual) merge commit.
This problem was earlier fixed by this commit:
> commit 08c7ab404f
> Author: Aleksey Midenkov <midenok@gmail.com>
> Date: Mon Apr 18 12:44:27 2022 +0300
>
> MDEV-24176 Server crashes after insert in the table with virtual
> column generated using date_format() and if()
Adding an mtr test only.
After MDEV-4013, the maximum length of replication passwords was extended to
96 ASCII characters. After a restart, however, slaves only read the first 41
characters of MASTER_PASSWORD from the master.info file. This led to slaves
being unable to reconnect to the master after a restart.
After a slave restart, if a master.info file is detected, use the full
allowable length of the password rather than 41 characters.
Reviewed By:
============
Sergei Golubchik <serg@mariadb.com>
The test used to fail because it lacked a synchronization point
designating that an event expected for processing, "CA_1", had
indeed been taken into this phase.
The event apparently may not even have arrived at the slave.
Fixed by deploying the missing synchronization.
Implement automatic creation of temporary accounts for SST and pass
account credentials to SST script via socket as opposed to environment
variables. Delete the user after the SST script returns.
Respect wsrep_sst_auth set by the administrator in case some additional
privilege grants are needed for particular SST method.
mysqldump SST requires significant change to make use of the new
automatic user generation facility. For now just make it compatible
by ignoring automatically generated user and rely only on wsrep_sst_auth
setting on the joiner node to keep backward compatibility.
Adapt mysqldump SST to automatic SST user generation changes:
- disable special treatment for mysqldump SST on donor
- make mysqldump SST script compatible with the new SST script
interface.
Differentiate user privileges for different SST methods:
- grant minimum required privileges for clone and xtrabackup SST
accounts
- grant all privileges to custom SST accounts as it is not known what
is needed.
- disable SST account generation for rsync SST since it is not needed.
MTR tests:
- add MTR tests for clone and xtrabackup SSTs without wsrep_sst_auth,
- add MTR test for testing masking of wsrep_sst_auth.
- don't attempt to restore the original wsrep_sst_auth in MTR tests as it
is always masked.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem is that the test could query the status variable
Rpl_semi_sync_slave_send_ack before the slave actually updated it.
This would result in an immediate --die assertion killing the rest
of the test. The bottom of this commit message has a small patch
that can be applied to reproduce the test failure.
This patch fixes the test failure by waiting for the variable to be
updated before querying its value.
diff --git a/sql/semisync_slave.cc b/sql/semisync_slave.cc
index 9ddd4c5c8d7..60538079fce 100644
--- a/sql/semisync_slave.cc
+++ b/sql/semisync_slave.cc
@@ -303,7 +303,10 @@ int Repl_semi_sync_slave::slave_reply(Master_info *mi)
     reply_res= DBUG_EVALUATE_IF("semislave_failed_net_flush", 1,
                                 net_flush(net));
     if (!reply_res)
+    {
+      sleep(1);
       rpl_semi_sync_slave_send_ack++;
+    }
   }
   DBUG_RETURN(reply_res);
 }
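The fix is then essentially a wait of this shape before reading the
variable (a sketch using the standard wait_for_status_var.inc include, not
the test's literal code):

  let $status_var= Rpl_semi_sync_slave_send_ack;
  let $status_var_value= 1;
  --source include/wait_for_status_var.inc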
This is a regression from commit 3228c08fa8. The problem is that
when the table's storage engine is determined, there should be a
check whether the table is partitioned, and if it is, the storage
engine implementing the partitions should be determined.
The reported bug is reproducible only with --log-bin, so make
sure the tests changed by 3228c08fa8 and the new test are run
both with --log-bin and with the binlog disabled.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
number of non-user tablespace.
fil_space_t::try_to_close(): Don't try to close
a tablespace which is acquired by the caller of
the function.
Added the suppression message in the open_files_limit test case.
number of non-user tablespace.
- InnoDB only closes user tablespaces when the number of open
files exceeds the innodb_open_files limit. In that case, InnoDB should
make sure that the innodb_open_files value is greater
than the number of undo tablespace, system and temporary tablespace files.
Avoid starting transactions in wsrep-lib side when wsrep is
disabled. It is unnecessary, and causes spurious deadlock errors on
transaction clean up.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that updates to the mysql.gtid_slave_pos table were
replicated even though they were never used, and because of that
never deleted. Avoid replication of the mysql.gtid_slave_pos
table if wsrep_gtid_mode=OFF.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
=======
- This commit is a merge of mysql commit 129ee47ef994652081a11ee9040c0488e5275b14.
InnoDB FTS can be in an inconsistent state when the server is
terminated during a sync operation before the operation is committed.
This could lead to an incorrect synced doc id and incorrect query results.
Solution:
========
- During sync commit operation, InnoDB should pass
the sync transaction to update the max doc id
in the config table.
fts_read_synced_doc_id(): This function is used
to read only the synced doc id from the config table.
When the checkpoint age goes beyond the sync flush threshold and
buf_flush_sync_lsn is set, the page cleaner enters a "furious flush"
stage to aggressively flush dirty pages from the flush list and pull the
checkpoint LSN above a safe margin. In this stage, the page cleaner
skips doing LRU flush and eviction.
In 10.6, all other threads entirely rely on page cleaner to generate
free pages. If free pages run out while the page cleaner is busy in the
"furious flush" stage, a session thread could wait for a free page in the
middle of a mini-transaction (mtr) while holding latches on other pages.
That, in turn, can prevent the page cleaner from flushing such pages,
preventing the checkpoint LSN from moving forward and creating a deadlock
situation. Even otherwise, it could create a stall and hang-like situation
for a large BP with plenty of dirty pages to flush before the stage could
finish.
Fix: During furious flush, check and evict LRU pages after each flush
iteration.
The problem was in the Aria part of the range optimizer,
maria_records_in_range(), which wrongly concluded that there were no rows
in the range.
This error would happen in the unlikely case when searching for a range
on a partial key and there was a match for the first key part in the
upper part of the b-tree (node) and also a match in the underlying
node page.
In other words, for this bug to happen one has to use Aria, have a
multi-part key with a lot of identical values for the first key part,
and do a range search on the second part of the key.
Fixed by ensuring that we do not stop searching for partial keys found
on a node.
Other things:
- Added some comments
- Changed a variable name to more clearly explain its purpose.
- Fixed a wrong cast in _ma_record_pos() that could cause problems on
32-bit systems.
This bug could affect queries containing a join of derived tables over
grouping views such that one of the derived tables contains a window
function while another uses view V with dependent subquery DSQ containing
a set function aggregated outside of the subquery in the view V. The
subquery also refers to the fields from the group clause of the view. Due to
this bug, execution of such queries could produce wrong result sets.
When the fix_fields() method performs context analysis of a set function AF
first, at the very beginning the function Item_sum::init_sum_func_check()
is called. The function copies the pointer to the embedding set function,
if any, stored in THD::LEX::in_sum_func into the corresponding field of the
set function AF simultaneously changing the value of THD::LEX::in_sum_func
to point to AF. When at the very end of the fix_fields() method the function
Item_sum::check_sum_func() is called it is supposed to restore the value
of THD::LEX::in_sum_func to point to the embedding set function. And in
fact Item_sum::check_sum_func() did it, but only for regular set functions,
not for those used in window functions. As a result after the context
analysis of AF had finished THD::LEX::in_sum_func still pointed to AF.
It confused the further context analysis. In particular it led to wrong
resolution of Item_outer_ref objects in the fix_inner_refs() function.
This wrong resolution forced reading the values of grouping fields referred
in DSQ not from the temporary table used for aggregation from which they
were supposed to be read, but from the table used as the source table for
aggregation.
This patch guarantees that the value of THD::LEX::in_sum_func is properly
restored after the call of fix_fields() for any set function.
Field_string::val_int(), Field_string::val_real(), Field_string::val_decimal()
passed the whole buffer of field_length bytes to data type conversion routines.
This made the conversion routines print redundant trailing spaces in case of
warnings.
Adding a method Field_string::to_lex_cstring() and using it inside
val_int(), val_real(), val_decimal(), val_str().
After this change conversion routines get the same value with what val_str() returns,
and no redundant trailing spaces are displayed.
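A small illustration of the symptom (a sketch; warning text approximate):

  CREATE TABLE t1 (a CHAR(10));
  INSERT INTO t1 VALUES ('1x');
  SELECT a + 0 FROM t1;
  -- before: Truncated incorrect DOUBLE value: '1x        '
  -- after:  Truncated incorrect DOUBLE value: '1x'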
- Added a counter innodb_num_bulk_insert_operation in
INFORMATION_SCHEMA.GLOBAL_STATUS. This counter is incremented
whenever InnoDB undergoes a bulk insert operation.
- Changed innodb_instant_alter_column to an atomic variable.
GTID events are applied without a running server transaction, so
we need to set the next transaction ID for the Wsrep transaction.
The whole Galera cluster now has a single GTID value (including
the server ID throughout the cluster), fix the config accordingly.
Add force restart so that repeated MTR test execution prints
consistent GTID values, otherwise they would have been recovered
from the previous run.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
It's possible to establish Galera multi-cluster setups connected
through the native replication when every Galera cluster is configured
to have a separate domain ID.
For this setup to work, we need to replace domain ID values in generated
GTID events when they are written at transaction commit to the values
configured by Wsrep replication.
At the same time, it's possible that the GTID event already contains
a correct domain ID if it comes through the native replication from
another Galera cluster.
In this case, when such an event is applied either through a native
replication slave thread or through Wsrep applier, we write GTID event
on transaction start and avoid writing it during transaction commit.
The code contained multiple problems that were fixed:
- applying GTID events didn't work because they are applied without a
running server transaction and the Wsrep transaction was not started
- GTID event generation on transaction start didn't contain proper
"standalone" and "is_transactional" flags that the original applied
GTID event contained
- condition determining that GTID event is written on transaction start
to avoid writing it on commit relied on the fact that the GTID event
is the first found in transaction/statement caches, which wasn't the
case and resulted in duplicate GTID events written
- instead of relying on the caches to find a GTID event, a simple check
is introduced that follows the exact rules for checking if event is
written at transaction start as described above
- the test case is improved to check that exact GTID events are
applied after two Galera clusters have synced.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The test that triggers multi-master conflict between two CTAS commands
uses LOCK/UNLOCK TABLES to block local CTAS from progress. It could
result in a race when the UNLOCK TABLES command is issued a bit earlier
than needed, causing local CTAS to run further and change the wsrep
transaction state, so that a different code path is taken later and
the original error gets overridden, causing the test to fail.
The solution is to replace LOCK/UNLOCK TABLES with debug sync points.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
tablespace truncation
- InnoDB fails with an out-of-bound write error after temporary
tablespace truncation. This issue is caused by
commit c507678b20 (MDEV-28699).
InnoDB fails to clear freed ranges if the shrinking size
is the last offset of the freed range.
* show it as a datetime, not number of seconds
* show all users
* show manually expired users as 0000-00-00 00:00:00
* show default expiration interval correctly
* numerous test fixes, add more tests
* fix compilation of embedded
Columns added to TABLE_STATISTICS
- ROWS_INSERTED, ROWS_DELETED, ROWS_UPDATED, KEY_READ_HITS and
KEY_READ_MISSES.
Columns added to CLIENT_STATISTICS and USER_STATISTICS:
- KEY_READ_HITS and KEY_READ_MISSES.
User visible changes (except new columns):
- CLIENT_STATISTICS and USER_STATISTICS have the columns KEY_READ_HITS and
KEY_READ_MISSES added after column ROWS_UPDATED, before SELECT_COMMANDS.
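For example (a sketch of the resulting column set):

  SELECT table_schema, table_name, rows_inserted, rows_deleted, rows_updated,
         key_read_hits, key_read_misses
    FROM information_schema.table_statistics;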
Other changes:
- Do not collect table statistics for system tables like index_stats and
table_stats, performance_schema, information_schema etc., as the user
has no control over these and they generate noise in the statistics.
- All row variables that are part of user_stats are moved to
'struct rows_stats' to make it easy to clear all of them at once.
- ha_read_key_misses added to STATUS_VAR
Notes:
- userstat.result has a change of numbers of rows for handler_read_key.
This is because use-stat-tables is now disabled for the test.
- Moved writing to binlog_cache from close_thread_tables() to
binlog_commit().
- In select_create(), delete cached row events instead of flushing them
to disk. This was done to avoid a possible disk write error in this code.
Two new variables added:
- max_tmp_space_usage : Limits the temporary space allowance per user
- max_total_tmp_space_usage: Limits the temporary space allowance for
all users.
New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.process_list: TMP_SPACE_USED
The temporary space is counted for:
- All SQL level temporary files. This includes files for filesort,
transaction temporary space, analyze, binlog_stmt_cache etc.
It does not include engine internal temporary files used for repair,
alter table, index pre sorting etc.
- All internal on disk temporary tables created as part of resolving a
SELECT, multi-source update etc.
Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache
will not cause an error even if the temporary space limit is exceeded.
This is to avoid giving errors on commit. This means that a user
can temporary go over the limit with up to binlog_stmt_cache_size.
Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_limit
together with binary logging and with non transactional tables.
If the binary log entry for the query is bigger than
binlog_stmt_cache_size and one hits the limit of max_tmp_space_limit
when flushing the entry to disk, the query will abort and the
binary log will not contain the last changes to the table.
This will also stop the slave!
This is also true for all Aria tables as Aria cannot do rollback
(except in case of crashes)!
One way to avoid it is to use @@binlog_format=statement for
queries that update a lot of rows.
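As SQL, the user-facing pieces look roughly like this (values illustrative;
exact variable scopes as described above):

  SET GLOBAL max_tmp_space_usage = 128*1024*1024;         -- per user
  SET GLOBAL max_total_tmp_space_usage = 1024*1024*1024;  -- for all users
  SHOW STATUS LIKE 'tmp_space_used';
  SELECT id, tmp_space_used FROM information_schema.processlist;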
Implementation:
- All writes to temporary files or internal temporary tables, that
increases the file size, are routed through temp_file_size_cb_func()
which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE.
Temporary file monitoring is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache().
MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means
that we track the file usage and we give an error if the limit is
breached. This is used to not give an error on commit when
the binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far.
This is needed to quickly check against max_total_tmp_space_usage.
- Temporary space errors are using EE_LOCAL_TMP_SPACE_FULL and
handler errors are using HA_ERR_LOCAL_TMP_SPACE_FULL.
This is needed until we move general errors to their own error space
so that they cannot conflict with system error numbers.
- Return value of my_chsize() and mysql_file_chsize() has changed
so that -1 is returned in the case my_chsize() could not decrease
the file size (very unlikely and will not happen on modern systems).
All calls to _chsize() are updated to check for > 0 as the error
condition.
- At the destruction of THD we check that THD::tmp_file_space == 0
- At server end we check that global_tmp_space_used == 0
- As a precaution against errors in the tmp_space_used code, one can set
max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable
the tmp space quota errors.
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K
increments to avoid some calls to update_tmp_file_size().
Other things:
- Ensure that all handler errors are registered. Before, some engine
errors could be printed as "Unknown error".
- Fixed a bug in filesort() that caused an assert if there was an error
when writing to the temporary file.
- Fixed that compute_window_func() now takes into account write errors.
- In case of parallel replication, rpl_group_info::cleanup_context()
could call trans_rollback() with thd->error set, which would cause
an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed a bug in subselect3.inc which caused the following test to use
heap tables with a low value for max_heap_table_size
- Fixed a bug in sql_expression_cache where it did not overflow
the heap table to an Aria table.
- Added Max_tmp_disk_space_used to slow query log.
- Fixed some bugs in log_slow_innodb.test
- FLUSH GLOBAL STATUS now resets most global_status_vars.
At this stage, this is mainly to be used for testing.
- FLUSH SESSION STATUS added as an alias for FLUSH STATUS.
- FLUSH STATUS does not require any privilege (before required RELOAD).
- FLUSH GLOBAL STATUS requires RELOAD privilege.
- All global status reset moved to FLUSH GLOBAL STATUS.
- Replication semisync status variables are now reset by
FLUSH GLOBAL STATUS.
- In test cases, the only changes are:
- Replace FLUSH STATUS with FLUSH GLOBAL STATUS
- Replace FLUSH STATUS with FLUSH STATUS; FLUSH GLOBAL STATUS.
This was only done in a few tests where the test was using SHOW STATUS
for both local and global variables.
- Uptime_since_flush_status is now always provided, independent of whether
ENABLED_PROFILING is enabled when compiling MariaDB.
- @@global.Uptime_since_flush_status is reset on FLUSH GLOBAL STATUS
and @@session.Uptime_since_flush_status is reset on FLUSH SESSION STATUS.
- When connected, @@session.Uptime_since_flush_status is set to 0.
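For example (a sketch):

  FLUSH SESSION STATUS;  -- alias for FLUSH STATUS, no privilege required
  FLUSH GLOBAL STATUS;   -- requires RELOAD, resets most global status
  SHOW GLOBAL STATUS LIKE 'Uptime_since_flush_status';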
Plugins can have unused variables too. If they use a literal "Unused"
string, a compiler might or might not merge two identical strings into
one (-fmerge-constants) and depending on that the server will or will
not issue a "variable is ignored" warning.
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
- Added --update-history option to mariadb-dump to change 2038
row_end timestamp to 2106.
- Updated ALTER TABLE ... to convert old row_end timestamps to
2106 timestamp for tables created before MariaDB 11.4.0.
- Fixed bug in CHECK TABLE where we wrongly suggested to USE REPAIR
TABLE when ALTER TABLE...FORCE is needed.
- mariadb-check printed table names that were used with REPAIR TABLE but
did not print table names used with ALTER TABLE or with name repair.
Fixed by always printing a table that is fixed if --silent is not
used.
- Added TABLE::vers_fix_old_timestamp() that will change the max-timestamp
for versioned tables when replicating from a pre-11.4.0 server.
A few test cases changed. This is caused by:
- CHECK TABLE now prints 'Please do ALTER TABLE...' instead of
'Please do REPAIR TABLE' when there is a problem with the information
in the .frm file (for example a very old frm file).
- mariadb-check now prints repaired table names.
- mariadb-check also now prints nicer error message in case ALTER TABLE
is needed to repair a table.
This task is to ensure we have a clear definition and rules of how to
repair or optimize a table.
The rules are:
- REPAIR should be used with tables that are crashed and are
unreadable (hardware issues with not readable blocks, blocks with
'unexpected data' etc)
- OPTIMIZE TABLE should be used to optimize the storage layout for the
table (recover space from deleted rows and optimize the index
structure).
- ALTER TABLE table_name FORCE should be used to rebuild the .frm file
(the table definition) and the table (with the original table row
format). If the table is from an older MariaDB/MySQL release with a
different storage format, it will convert the data to the new
format. ALTER TABLE ... FORCE is used as part of mariadb-upgrade
Here follows some more background:
The 3 ways to repair a table are:
1) "ALTER TABLE table_name FORCE" (no other options).
As an alias we allow: "ALTER TABLE table_name ENGINE=original_engine"
2) "REPAIR TABLE" (without FORCE)
3) "OPTIMIZE TABLE"
All of the above commands will optimize row space usage (which means that
space will be needed to hold a temporary copy of the table) and
re-generate all indexes. They will also try to replicate the original
table definition as exactly as possible.
For ALTER TABLE and "REPAIR TABLE without FORCE", the following will hold:
If the table is from an older MariaDB version and data conversion is
needed (for example for old type HASH columns, MySQL JSON type or new
TIMESTAMP format) "ALTER TABLE table_name FORCE, algorithm=COPY" will be
used.
The differences between the algorithms are
1) Will use the fastest algorithm the engine supports to do a full repair
of the table (except if data conversions are needed).
2) Will use the storage engine internal REPAIR facility (MyISAM, Aria).
If the engine does not support REPAIR then
"ALTER TABLE FORCE, ALGORITHM=COPY" will be used.
If there were data incompatibilities (which means that FORCE was used)
then there will be a warning after REPAIR that ALTER TABLE FORCE is
still needed.
The reason for this is that REPAIR may be able to work around data
errors (wrong incompatible data, crashed or unreadable sectors) that
ALTER TABLE cannot.
3) Will use the storage engine's internal OPTIMIZE. If the engine does not
support OPTIMIZE, then "ALTER TABLE FORCE" is used.
The above will ensure that ALTER TABLE FORCE is able to
correct almost any errors in the row or index data. In case of
corrupted blocks, REPAIR, possibly followed by ALTER TABLE, is needed.
This is important as mariadb-upgrade executes ALTER TABLE table_name
FORCE for any table that must be re-created.
Bugs fixed with InnoDB tables when using ALTER TABLE FORCE:
- No error for INNODB_DEFAULT_ROW_FORMAT=COMPACT even if row length
would be too wide. (Independent of innodb_strict_mode).
- Tables using symlinks will be symlinked after any of the above commands
(Independent of the setting of --symbolic-links)
If one specifies an algorithm together with ALTER TABLE FORCE, things
will work as before (except if data conversion is required as then
the COPY algorithm is enforced).
ALTER TABLE .. OPTIMIZE ALL PARTITIONS will work as before.
Other things:
- FORCE argument added to REPAIR to allow one to first run internal
repair to fix damaged blocks and then follow it with ALTER TABLE.
- REPAIR will not update frm_version if ha_check_for_upgrade() finds
that the table is still incompatible with the current version. In this
case the REPAIR will end with an error.
- REPAIR for storage engines that do not have native repair, like InnoDB,
now uses ALTER TABLE FORCE.
- REPAIR csv-table USE_FRM now works.
- It did not work before as CSV tables had their extension list in the
wrong order.
- The default error message length for %M was increased from 128 to 256 to
not cut information from REPAIR.
- Documented HA_ADMIN_XX variables related to repair.
- Added HA_ADMIN_NEEDS_DATA_CONVERSION to signal that we have to
do data conversions when converting the table (and thus ALTER TABLE
copy algorithm is needed).
- Fixed typo in error message (caused test changes).
Remove alter_algorithm but keep the variable as no-op (with a warning).
The reasons for removing alter_algorithm are:
- alter_algorithm was introduced as a replacement for the
old_alter_table that was used to force the usage of the original
alter table algorithm (copy) in the cases where the new alter
algorithm did not work. The new option was added as a way to force
the usage of a specific algorithm when it should instead have made
it possible to disable algorithms that would not work for some
reason.
- alter_algorithm introduced some cases where ALTER TABLE would not
work without specifying the ALGORITHM=XXX option together with
ALTER TABLE.
- Having different values of alter_algorithm on master and slave could
cause slave to stop unexpectedly.
- ALTER TABLE FORCE, as used by mariadb-upgrade, would not always work
if alter_algorithm was set for the server.
- As part of MDEV-33449 "improving repair of tables" it became
clear that alter_algorithm made it harder to provide a better and
more consistent ALTER TABLE FORCE and REPAIR TABLE, and it would be
better to remove it.
Also fixed that all unused variables are using the same variable comment.
The warning will be tested with the next commit that deprecates the
variable alter_algorithm.
This patch extends the timestamp from
2038-01-19 03:14:07.999999 to 2106-02-07 06:28:15.999999
for 64 bit hardware and OS where 'long' is 64 bits.
This is true for 64 bit Linux but not for Windows.
This is done by treating the 32 bit stored int as unsigned instead of
signed. This is safe as MariaDB has never accepted dates before the epoch
(1970).
The benefit of this approach is that for normal timestamps the storage is
compatible with earlier versions.
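A quick sanity check of the new upper bound (a sketch; assumes
time_zone = '+00:00'):

  SELECT FROM_UNIXTIME(4294967295);
  -- 2106-02-07 06:28:15  (0xFFFFFFFF seconds after the 1970 epoch)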
However, for tables using system versioning we previously stored a
timestamp with the year 2038 as the 'max timestamp', which is used to
detect current values. This patch stores the new 2106 year max value
as the max timestamp. This means that old tables using system
versioning needs to be updated with mariadb-upgrade when moving them
to 11.4. That will be done in a separate commit.
Other changes:
- Do not collect index statistics for system tables like index_stats and
table_stats, performance_schema, information_schema etc., as the user
has no control over these and they generate noise in the statistics.
Various help message improvements:
* MySQL->MariaDB, mysqld->mariadbd, "mysqld daemon" -> "mariadbd process"
* typos
* don't specify defaults directly in the help message
* don't say that an option is deprecated, mark it as such
* missing spaces in the middle of the text
etc
* use new deprecated printer for all deprecated server options
* restore alphabetic option sorting order
* move deprecated printer from mysqld.cc to my_getopt.c
* in --help print deprecation message at the end of the option help
* move 'ALL' help text where it belongs - to other SET options, and
with a correct indentation.
* consistently end all-or-none command-line option help strings
with a dot - my_print_help() needs that.
It's about 50/50 now, so let's do none; fewer line wraps in --help
* remove trailing spaces from command-line option help strings
Currently there are mechanisms to mark a system variable as
deprecated, but they are only used to print warning messages
when a deprecated variable is set.
Leverage the existing mechanisms in order to make the
deprecation information available at the --help output of mysqld by:
* Moving the deprecation information (i.e `deprecation_substitute`
attribute) from the `sys_var` class into the `my_option` struct.
As every `sys_var` contains its own `my_option` struct, the access
to the deprecation information remains available to `sys_var`
objects. `my_getopt` functions, which work directly with
`my_option` structs, gain access to this information while building
the --help output.
* For plugin variables, leveraging the `PLUGIN_VAR_DEPRECATED` flag
and setting the `deprecation_substitute` attribute accordingly when
building the `my_option` objects.
* Changing the `option_cmp` function to use the `deprecation_substitute`
attribute instead of the name when sorting the options. This way
deprecated options and their substitutes will be grouped together.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
- InnoDB page compression works only on COMPACT or DYNAMIC row
format tables. So InnoDB should throw an error when ALTER TABLE
tries to enable PAGE_COMPRESSED for a REDUNDANT table.
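A sketch of the now-rejected case (table name illustrative):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB ROW_FORMAT=REDUNDANT;
  ALTER TABLE t1 PAGE_COMPRESSED=1;  -- should now fail with an error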
in buf_dblwr_t::init_or_load_pages()
- InnoDB fails to set the TRX_SYS_DOUBLEWRITE_SPACE_ID_STORED
flag in transaction system header page while recreating
the undo log tablespaces
buf_dblwr_t::init_or_load_pages(): Tried to reset the
space id and to write into the doublewrite buffer even
when read_only mode is enabled.
In srv_all_undo_tablespaces_open(), InnoDB should try to
open the extra unused undo tablespaces instead of trying to
create them.
C/C 3.4 disables mysql_old_password by default, so
add an option for the `connect` command to support specifying
allowed authentication plugins (MARIADB_OPT_RESTRICTED_AUTH).
Use it to enable mysql_old_password when needed for testing.
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
Backporting the fixes from 11.5 to 10.5
BUF_LRU_MIN_LEN (256) is too high a value for a low buffer pool (BP) size.
For example, for a BP size lower than 80M with a 16 K page size, the limit
is more than 5% of the total BP, and for the lowest BP of 5M, it is 80% of
the BP.
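(Worked out: 256 pages * 16 KiB = 4 MiB, which is 5% of an 80 MiB pool but
80% of a 5 MiB pool.)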
Non-data objects like explicit locks could occupy part of the BP,
reducing the pages available for the LRU. If the LRU reaches the minimum
limit and no free pages are available, the server would hang, with the
page cleaner unable to free any more pages.
Fix: To avoid such a hang, we adjust the LRU limit to be lower than the
limit for data objects as checked in
buf_LRU_check_size_of_non_data_objects(), i.e. one page less than 5% of
the BP.
trx_free_at_shutdown(): Similar to trx_t::commit_in_memory(),
clear the detailed_error (FOREIGN KEY constraint error) before
invoking trx_t::free(). We only do this on debug instrumented
builds in order to avoid a debug assertion failure on shutdown.
This regression was introduced in 10.6 by the following commit.
commit 898dcf93a8
(Cleanup the lock creation)
It removed one important optimization for lock bitmap pre-allocation.
We pre-allocate about 8 byte extra space along with every lock object to
adjust for similar locks on newly created records on the same page by
same transaction. When it is exhausted, a new lock object is created
with similar 8 byte pre-allocation. With this optimization removed we
are left with only 1 byte of pre-allocation. When a large number of
records are inserted and locked in a single page, we end up creating too
many new locks, almost in n^2 order.
Fix-1: Bring back LOCK_PAGE_BITMAP_MARGIN for pre-allocation.
Fix-2: Use the extra space (40 bytes) for bitmap in trx->lock.rec_pool.
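As a rough illustration of why the margin matters (sketch only; the
helper below is hypothetical, the constant name is from Fix-1):

  // Each record lock carries a bitmap with one bit per heap number on
  // the page. Reserving a margin of extra bits lets later inserts by
  // the same transaction set a bit in the existing lock object instead
  // of allocating a new lock for every record, which degrades to ~n^2
  // work for n inserts.
  static const unsigned LOCK_PAGE_BITMAP_MARGIN= 64;  // 8 bytes, in bits

  unsigned lock_bitmap_bytes(unsigned n_heap_entries)
  {
    return (n_heap_entries + LOCK_PAGE_BITMAP_MARGIN + 7) / 8;
  }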
The test failed almost always in release (but not in debug) builds with
--- galera_sst_mariabackup.result
+++ galera_sst_mariabackup.reject
@@ -5,7 +5,7 @@
connection node_1;
select @@innodb_undo_tablespaces;
@@innodb_undo_tablespaces
-0
+3
connection node_2;
select @@innodb_undo_tablespaces;
@@innodb_undo_tablespaces
and
[Warning] InnoDB: Cannot change innodb_undo_tablespaces=0 because previous shutdown was not with innodb_fast_shutdown=0
because mariadbd *before this test* wasn't using innodb_fast_shutdown=0
Fix the bootstrap to use innodb_fast_shutdown=0 (and the bootstrap
creates a starting point for any test that uses a .cnf file)
Followup for cac0fc97cc.
Also, remove a redundant include/have_innodb.inc.
MariaDB-backup needs to check for SLAVE MONITOR as that is
what is returned by SHOW GRANTS.
Update test to ensure that warnings about missing privileges
do not occur when the backup is successful.
Reviewer: Andrew Hutchings
Thanks Eugene for reporting the issue.
The problem was twofold:
- REPAIR TABLE t1 USE_FRM did not work for transactional
Aria tables (the table was thought to be repaired, which it was not),
which caused issues in later usage of the table.
- When swapping the tmp_data file to the data file, sort_info files were
not updated. This caused problems if there were several unique keys and
there was a duplicate for the second key.
Reset the connection_name to contain a null string, if the pointer
points to the same space as that of the system variable
default_master_connection.
We do this because the system variable may be updated, which could free
the pointer and create a new one, causing a use-after-free for
re-execution of prepared statements and stored procedures where the
LEX may be reused.
This allows connection_name to be set again to the system variable
pointer in the next call of this function (see earlier in this
function), after any possible updates to the system variable.
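A minimal sketch of the idea (field names are assumptions for
illustration, not the exact server code):

  // If connection_name aliases the buffer owned by the mutable system
  // variable, detach it; it will be re-resolved from the variable on
  // the next execution instead of dereferencing freed memory.
  if (lex->mi.connection_name.str ==
      thd->variables.default_master_connection.str)
    lex->mi.connection_name= LEX_CSTRING{"", 0};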
Safety first - tell the mariadb client not to execute dangerous
cli commands; they cannot be present in the dump anyway.
Wrapping the command in /*!999999 ..... */ guarantees that
if a non-mariadb-cli client loads the dump and sends it to the
server, the server will ignore the command it doesn't understand.
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique is logically unique, because on the
engine level it is not. So the engine disables it.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
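A hedged sketch of what the reworked API looks like (signatures are
illustrative; the authoritative declarations are in the handler class):

  class handler
  {
  public:
    // Old API: the engine interpreted an enum and had to guess which
    // indexes were "unique enough" to keep enabled.
    //   virtual int disable_indexes(uint mode);
    // New API: the server computes the set of keys to keep enabled
    // (including long uniques, which are only logically unique) and
    // passes it down; the engine just obeys the map.
    virtual int disable_indexes(key_map map, bool persist);
    virtual int enable_indexes(key_map map, bool persist);
  };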
This commit fixes sporadic failures in galera_3nodes_sr.GCF-336
test. The following changes have been made here:
1) A small addition to the test itself which should make
it more deterministic by waiting for non-primary state
before COMMIT;
2) More careful handling of the wsrep_ready variable in
the server code (it should always be protected with a mutex).
No additional tests are required.
The test waits for the event to get stuck on MASTER_DELAY,
but on a slow/overloaded slave the event might pass MASTER_DELAY
before the test starts waiting.
Wait for the event to get stuck on the LOCK TABLES (after MASTER_DELAY);
the event cannot avoid that.
Refinement of the original patch.
Move the code to reset the kill up into the parent class
Xid_apply_log_event, to also fix the similar issue for XA COMMIT.
Increase the number of slave retries in the test case
rpl.rpl_parallel_multi_domain_xa to fix some sporadic failures. The test
generates massive amounts of conflicting transactions in multiple
independent domains, which can cause multiple rollback+retry for a
transaction as it conflicts with transactions in other domains one-by-one.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When binlog_get_pending_rows_event was refactored, one usage in
binlog_need_stmt_format was not taken into account.
As binlog_get_pending_rows_event now requires an existing cache_mngr,
this check is now made first.
This patch augments Gtid_log_event with the user thread id.
In particular, that compensates for the loss of this info in
Rows_log_events.
Gtid_log_event::thread_id gets visible in mysqlbinlog output like
#231025 16:21:45 server id 1 end_log_pos 537 CRC32 0x1cf1d963 GTID 0-1-2 ddl thread_id=10
as a 32 bit unsigned integer. Note this is a 32-bit value, as
the connection id can only be 32 bits (see MDEV-15089 for
details).
While the size of the Gtid event has grown by 4 bytes,
replication between OLD <-> NEW servers is not affected by it. This patch
also slightly changes the logic to convert Gtid events to Query
events for older replicas which don't support Gtid. Instead of
hard-coding the padding of the sys var section of the generated
Query event, the length to pad is dynamically calculated based
on the length of the Gtid event.
This work was started by the late Sujatha Sivakumar.
Brandon Nesterenko took it over, reviewed initial patches and
extended the work.
Also thanks to Andrei for his help in finalizing the fixes for
MDEV-33924, which were squashed into this patch.
Reviewed-by:
=============
Andrei Elkin <andrei.elkin@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
CURRENT_TEST: binlog_encryption.rpl_parallel_gco_wait_kill
mysqltest: In included file "./suite/rpl/t/rpl_parallel_gco_wait_kill.test":
included from /home/buildbot/amd64-ubuntu-2004-debug/build/mysql-test/suite/binlog_encryption/rpl_parallel_gco_wait_kill.test at line 2:
At line 334: Can't initialize replace from 'replace_result $thd_id THD_ID'
The SQL thread can reach the "Slave has read all relay log" state
and then start reading the relay log again. Let's use a more generic
pattern to retrieve the SQL thread ID even if it's not
in the "read all relay log" state.
One case is conflicting transactions T1 and T2 with different domain id, in
optimistic parallel replication in non-GTID mode. Then T2 will
wait_for_prior_commit on T1; and if T1 got a row lock wait on T2 it would
hang, as different domains caused the deadlock kill to be skipped in
thd_rpl_deadlock_check().
More generally, if we have transactions T1 and T2 in one domain/master
connection, and independent transactions U in another, then we can
still deadlock like this:
T1 row lock wait on U
U row lock wait on T2
T2 wait_for_prior_commit on T1
This commit enforces the deadlock kill in these cases. If the waited-for
transaction is speculatively applied, then it will be deadlock killed in
case of a conflict, even if the two transactions are in different domains
or master connections.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Problem:
=======
During an InnoDB non-rebuild online alter operation, InnoDB sets a
dummy log as the clustered index online log. This can be used by
concurrent DML to identify whether the table undergoes online DDL.
InnoDB fails to reset the dummy log of the clustered index in case
an error happened during the prepare phase.
Solution:
========
Reset the InnoDB clustered index online log in case of an error
during the prepare phase.
Problem:
========
- Currently mariabackup has to reread pages in case they are
modified by the server concurrently. But while reading the undo
tablespace, mariabackup failed to reread the page in case of
error.
Fix:
===
Mariabackup --backup functionality should have retry logic
while reading the undo tablespaces.
The test's header is not written to strictly follow the correct order
of checks by mtr at test start, which may lead to an error. E.g.
./mtr --mysqld=--binlog-format=row rpl.rpl_using_gtid_default
leads to
At line 175: query 'SET GLOBAL gtid_slave_pos= ""' failed: ER_SLAVE_MUST_STOP (1198): This operation cannot be performed as you have a running slave ''; run STOP SLAVE '' first
Fixed to require the binlog format first in the test header.
rpl.rpl_heartbeat turns out to miss the standard include/master-slave
header, which made it fail potentially in BB and actually with manual
mtr runs, as it may have used a previous slave GTID state.
Fixed by installing the standard rpl suite header/footer in the
test file.
(returns NULL) and for Date/DateTime returns "INTEGER"
Analysis:
When the first character of the JSON is scanned and it is a number,
"INTEGER" is returned based on that alone.
Fix:
Scan the rest of the JSON before returning the final result, to ensure
the JSON is valid in the first place and thus has a valid type.
Problem:
========
- InnoDB wrongly calculates the record size in
btr_node_ptr_max_size() when a prefix index of
the column has to be stored externally.
Fix:
====
- InnoDB should add the maximum field size to
record size when the field is a fixed length one.
Regexp_processor_pcre::fix_owner() called Regexp_processor_pcre::compile(),
which could fail on a regex syntax error in the pattern and put
an error into the diagnostics area. However, the callers:
- Item_func_regex::fix_length_and_dec()
- Item_func_regexp_instr::fix_length_and_dec()
still returned "false" in such cases, which made the code
crash later inside Diagnostics_area::set_ok_status().
Fix:
- Change the return type of fix_owner() from "void" to "bool"
and return "true" whenever an error is put into the DA
(e.g. on a syntax error in the pattern).
- Fix fix_length_and_dec() of the mentioned Item_func_xxx
classes to return "true" if fix_owner() returned "true".
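A minimal sketch of the resulting error propagation (simplified
signatures, illustrative only):

  bool Regexp_processor_pcre::fix_owner(Item_func *owner)
  {
    if (compile(owner))      // may put a regex error into the DA
      return true;           // previously void: the error went unnoticed
    return false;
  }

  bool Item_func_regex::fix_length_and_dec(THD *thd)
  {
    if (re.fix_owner(this))  // propagate instead of returning "false"
      return true;           // with an error already in the DA
    max_length= 1;
    return false;
  }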
Replicated events have a time associated with them from the originating
node, which will be used for the commit timestamp. The associated time
can be set in the past, before the event is even applied.
For WSREP replication we don't need to use the time information from
the event.
Addressed review comments:
Jan Lindström <jan.lindstrom@galeracluster.com>
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Tests using MW-369.inc sometimes hung after
signaling two debug sync points inside the Galera
library. Replaced the Galera library sync point
with a server code sync point where possible and
added more wait_conditions to make sure we are
in the correct state.
Tests affected: MW-369, MW-402, MDEV-27276, and
mysql-wsrep#332.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
This commit contains a fix for the code that extracts and parses
the CN (common name, domain name) record from certificates using
the openssl utility. This code is also made common to the rsync
and mariabackup scripts. There is also some systematization of
the use of 'printf' and 'echo' builtins/utilities.
It's a slow test; the slave needs to catch up, reading >1500
transactions. The default MASTER_GTID_WAIT() timeout in
sync_with_master_gtid.inc is 120 seconds, which might not be
enough for a slow/overloaded slave.
Let's wait forever or until ./mtr --testcase-timeout,
whichever comes first.
Based on logs we might start SST before the donor has reached
the Primary state. Because this test shuts down all nodes, we
need to make sure when we start nodes that the previous nodes
have reached the Primary state and joined the cluster.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The test waits for the event to get stuck on MASTER_DELAY,
but on a slow/overloaded slave the event might pass MASTER_DELAY
before the test starts waiting.
Wait for the event to get stuck on the LOCK TABLES (after MASTER_DELAY);
the event cannot avoid that.
commit_try_norebuild(): Add the parameter statistics_exist,
similar to commit_try_rebuild(). If the InnoDB statistics tables
did not exist, we will not attempt to update statistics later on
during the transaction.
Thanks to Matthias Leich for originally reproducing this scenario.
The test could fail with a duplicate key error because switching to non-GTID
mode could start at the wrong old-style position. The position could be
wrong when the previous GTID connect was stopped before receiving the fake
GTID list event which gives the old-style position corresponding to the GTID
connected position.
Work around this by injecting an extra event and syncing the slave before
switching to non-GTID mode.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Problem:
========
- A partition update operation enables bulk insert for the
transaction while moving a row between partitions. This leads
to a debug assert failure while removing the row from one
of the partitions.
Solution:
========
- Disallow the bulk insert operation for non-insert operations
on partitioned tables.
An MDL wait consists of short 1-second waits (this is not configurable)
repeated until lock_wait_timeout is reached. The stage is changed
to Waiting and back every second. To have a predictable result in the
test, the query should filter out all sequences of X, "Waiting for MDL",
X, leaving just X.
Extends 89c907bd4f to account for
binlog_two_phase_alter flags in a Gtid log event. I.e., if the
FL_COMMIT_ALTER_E1 or FL_ROLLBACK_ALTER_E2 flags are set in the
event flags, yet the length of the event is too short to hold
the value, then mark the event as invalid.
Problem:
========
mariabackup --prepare fails to write the pages in encrypted format.
This issue happens only for default-encrypted tables when the
innodb_encrypt_tables variable is enabled.
Fix:
====
The backup process should write the value of the innodb_encrypt_tables
variable to the configuration file. --prepare should enable the
variable based on the configuration file.
Part#2, variant 2: Make the printed r_ values in JSON output consistent.
After this patch, ANALYZE output has:
- r_index_rows (NEW) - Observed number of rows before ICP or Rowid Filtering
checks. This is a per-scan average, like r_rows and "rows" are.
- r_rows (AS BEFORE) - Observed number of rows after ICP and Rowid Filtering.
- r_icp_filtered (NEW) - Observed selectivity of ICP condition.
- (AS BEFORE) observed selectivity of Rowid Filter is in
$.rowid_filter.r_selectivity_pct
- r_total_filtered - Observed combined selectivity: fraction of rows left
after applying ICP condition, Rowid Filter, and attached_condition.
This is now comparable with "filtered" and is printed right after it.
- r_filtered (AS BEFORE) - Observed selectivity of "attached_condition".
Tabular ANALYZE output is not changed. Note that JSON's r_filtered and
r_rows have the same meanings as before and have the same meaning as in
tabular output.
In case of a partition insert, InnoDB fails to end the bulk insert
for one of the partitions. This leaves the bulk insert active for
the subsequent delete statement.
trx_t::bulk_insert_apply_for_table(): Irrespective of the bulk insert
value, InnoDB should end the bulk insert for the table.
First stop the slave, then run commands on the master that are
supposed to fail on the slave, then start the slave.
If you swap the first two steps, the slave might get and execute those
commands before it's stopped, which will fail the test.
Also, improve debuggability.
Connector/C (C/C) 3.4 disables mysql_old_password by default, so
add an option for the `connect` command to support specifying
allowed authentication plugins (MARIADB_OPT_RESTRICTED_AUTH).
Use it to enable mysql_old_password when needed for testing.
In the $case=2 branch, it's wrong to kill after the first binlog EOF,
because that might happen between INSERT(4) and INSERT(5).
So, wait for the slave to acknowledge INSERT(5) before killing
the master, that is, both connection threads must pass
repl_semisync_master.wait_after_sync().
The root cause of the failure is a bug in the Linux network stack:
https://lore.kernel.org/netdev/87sf0ldk41.fsf@urd.knielsen-hq.org/T/#u
If the slave does a connect(2) at the exact same time that kill -9 of the
master process closes the listening socket, the FIN or RST packet is lost in
the kernel, and the slave ends up timing out waiting for the initial
communication from the server. This timeout defaults to
--slave-net-timeout=120, which causes include/master_gtid_wait.inc to time
out first and fail the test.
Work around this problem by reducing the --slave-net-timeout for this test
case. If this problem turns up in other tests, we can consider reducing the
default value for all tests.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
bulk_insert_apply_for_table(dict_table_t*)
This issue is caused by
commit 188c5da72a (MDEV-32453).
trx_t::bulk_insert_apply_for_table(): Remove the asserts on
check_unique_secondary and check_foreigns. InnoDB can
apply the bulk insert operation even after disabling
the check_foreigns and check_unique_secondary variables.
This patch also fixes:
MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER
- Adding a wrapper function CHARSET_INFO::streq(), to compare
two strings for equality. For now it calls strnncoll() internally.
In the future it will turn into a virtual function.
- Adding new accent sensitive case insensitive collations:
- utf8mb4_general1400_as_ci
- utf8mb3_general1400_as_ci
They implement accent sensitive case insensitive comparison.
The weight of a character is equal to the code point of its
upper case variant. These collations use Unicode-14.0.0 casefolding data.
The result of
my_charset_utf8mb3_general1400_as_ci.strcoll()
is very close to the former
my_charset_utf8mb3_general_ci.strcasecmp().
There is only a difference in a couple dozen rare characters, because of:
- the switch from "tolower" to "toupper" comparison, to make
utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
- the switch from Unicode-3.0.0 to Unicode-14.0.0
This difference should be tolerable. See the list of affected
characters in the MDEV description.
Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
Unlike utf8mb4_general_ci, it does not treat all non-BMP characters
as equal.
- Adding classes representing names of the file based database objects:
Lex_ident_db
Lex_ident_table
Lex_ident_trigger
Their comparison collation depends on the underlying
file system case sensitivity and on --lower-case-table-names
and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.
- Adding classes representing names of other database objects,
whose names have case insensitive comparison style,
using my_charset_utf8mb3_general1400_as_ci:
Lex_ident_column
Lex_ident_sys_var
Lex_ident_user_var
Lex_ident_sp_var
Lex_ident_ps
Lex_ident_i_s_table
Lex_ident_window
Lex_ident_func
Lex_ident_partition
Lex_ident_with_element
Lex_ident_rpl_filter
Lex_ident_master_info
Lex_ident_host
Lex_ident_locale
Lex_ident_plugin
Lex_ident_engine
Lex_ident_server
Lex_ident_savepoint
Lex_ident_charset
engine_option_value::Name
- All the mentioned Lex_ident_xxx classes implement a method streq():
if (ident1.streq(ident2))
do_equal();
This method works as a wrapper for CHARSET_INFO::streq().
- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
in class members and in function/method parameters.
- Replacing all calls like
system_charset_info->coll->strcasecmp(ident1, ident2)
to
ident1.streq(ident2)
- Taking advantage of the c++11 user defined literal operator
for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
data types. Use example:
const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
is now a shorter version of:
const Lex_ident_column primary_key_name=
Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
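A small usage sketch of these conventions (illustrative; the real
declarations live in lex_ident.h and m_strings.h):

  static const Lex_ident_column primary_key_name=
    "PRIMARY"_Lex_ident_column;

  bool is_primary(const Lex_ident_column &key_name)
  {
    // streq() wraps CHARSET_INFO::streq() with the collation fixed for
    // column identifiers (my_charset_utf8mb3_general1400_as_ci), so
    // callers no longer pick a comparison function by hand.
    return key_name.streq(primary_key_name);
  }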
If replicating an event in ROW format, and InnoDB detects a deadlock
while searching for a row, the row event will error and rollback in
InnoDB and indicate that the binlog cache also needs to be cleared,
i.e. by marking thd->transaction_rollback_request. In the normal
case, this will trigger an error in Rows_log_event::do_apply_event()
and cause a rollback. During the Rows_log_event::do_apply_event()
cleanup of a successful event application, there is a DBUG_ASSERT in
log_event_server.cc::rows_event_stmt_cleanup(), which sets the
expectation that thd->transaction_rollback_request cannot be set
because the general rollback (i.e. not the InnoDB rollback) should
have happened already. However, if the replica is configured to skip
deadlock errors, the rows event logic will clear the error and
continue on, as if no error happened. This results in
thd->transaction_rollback_request being set while in
rows_event_stmt_cleanup(), thereby triggering the assertion.
This patch fixes this in the following ways:
1) The assertion is invalid, and thereby removed.
2) The rollback case is forced in rows_event_stmt_cleanup() if
transaction_rollback_request is set.
Note the differing behavior between transactions which are skipped
due to deadlock errors and other errors. When a transaction is
skipped due to an ignored deadlock error, the entire transaction is
rolled back and skipped (though note MDEV-33930 which allows
statements in the same transaction after the deadlock-inducing one
to commit). When a transaction is skipped due to ignoring a
different error, only the erroring statements are rolled back and
skipped - the rest of the transaction will execute as normal. The
effect of this can be seen in the test results. The added test case
to rpl_skip_error.test shows that only statements which are ignored
due to non-deadlock errors are ignored in larger transactions. A
diff between rpl_temporary_error2_skip_all.result and
rpl_temporary_error2.result shows that all statements in the errored
transaction are rolled back (diff pasted below):
: diff rpl_temporary_error2.result rpl_temporary_error2_skip_all.result
49c49
< 2 1
---
> 2 NULL
51c51
< 4 1
---
> 4 NULL
53c53
< * There will be two rows in t2 due to the retry.
---
> * There will be one row in t2 because the ignored deadlock does not retry.
57d56
< 1
59c58
< 1
---
> 0
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
On Windows systems, occurrences of ERROR_SHARING_VIOLATION due to
conflicting share modes between processes accessing the same file can
result in CreateFile failures.
mysys' my_open() already incorporates a workaround by implementing
wait/retry logic on Windows.
But this does not help if files are opened using shell redirection, as
mysqltest traditionally did it, i.e. via
--exec echo "some text" > output_file
In such cases it is cmd.exe that opens the output_file, and it
won't do any sharing-violation retries.
This commit addresses the issue by introducing a new built-in command,
'write_line', in mysqltest. This new command serves as a brief alternative
to 'write_file', with single-line output, that also resolves variables
like 'exec' would.
Internally, this command uses my_open(), and therefore gets the
retry-on-error logic.
Hopefully this will eliminate the very sporadic "can't open file because
it is used by another process" error on CI.
Issue:
------
The actual order of acquisition of the IBUF pessimistic insert mutex
(SYNC_IBUF_PESS_INSERT_MUTEX) and the IBUF header page latch
(SYNC_IBUF_HEADER) w.r.t. the space latch (SYNC_FSP) differs from the
order defined in sync0types.h. It was not discovered earlier as the
path to ibuf_remove_free_page was not covered by the mtr tests. The
ideal order, as defined in sync0types.h, is as follows:
SYNC_IBUF_HEADER -> SYNC_IBUF_PESS_INSERT_MUTEX -> SYNC_FSP
In ibuf_remove_free_page, we acquire the space latch earlier, so we
have the following order, resulting in the assert with
innodb_sync_debug=ON:
SYNC_FSP -> SYNC_IBUF_HEADER -> SYNC_IBUF_PESS_INSERT_MUTEX
Fix:
---
We do maintain this order in other places and there doesn't seem to be
any real issue here. To reduce the impact in GA versions, we avoid
doing extensive changes in mutex ordering to match the current
SYNC_IBUF_PESS_INSERT_MUTEX order. Instead we relax the ordering check
for the IBUF pessimistic insert mutex using SYNC_NO_ORDER_CHECK.
The test was waiting for the INSERT clause to roll back, but the
wait_condition was too tight. The state could be either "Freeing items"
or "Rollback". Fixed the wait_condition to expect one of them.
create_partitioning_metadata() should only mark the transaction r/w
if it actually did anything (that is, if the table is partitioned).
Otherwise it's a no-op, called even for temporary tables, and
it shouldn't do anything at all.
In the case when some unique key fields are nullable, there can be
several records with the same key fields in a unique index, with at
least one key field equal to NULL, as NULL != NULL.
When a transaction is resumed after waiting on a record with at least
one key field equal to NULL, and the record stored in the persistent
cursor is deleted, the persistent cursor can be restored to a record
with all key fields equal to the stored ones, but with at least one
field equal to NULL. Such a record is wrongly treated as having the
same unique key as the one stored in the persistent cursor record,
which is wrong as NULL != NULL.
The fix is to check whether at least one unique field is NULL in the
restored persistent cursor position, and, if so, not to treat the
record as one with the same unique key as in the stored record key.
dict_index_t::nulls_equal was removed, as it was initially developed
for "intrinsic tables", which never existed in MariaDB, and there is
no code which would set it to "true".
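A sketch of the added check (simplified; rec_offs_nth_sql_null() is a
real InnoDB helper, the surrounding function is hypothetical):

  // After restoring the persistent cursor, a record whose fields match
  // the stored key byte-for-byte still must not count as "the same
  // unique key" if any of its unique fields is SQL NULL.
  bool restored_same_unique_key(const rec_t *rec, const rec_offs *offsets,
                                unsigned n_unique)
  {
    for (unsigned i= 0; i < n_unique; i++)
      if (rec_offs_nth_sql_null(offsets, i))
        return false;                    // NULL != NULL
    return true;                         // fields already known equal
  }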
Reviewed by Marko Mäkelä.
The test failure in rpl.rpl_domain_id_filter_restart is caused by
MDEV-33887. That is, the test uses master_pos_wait() (called
indirectly by sync_slave_with_master) to try and wait for the
replica to catch up to the master. However, the waited on
transaction is ignored by the configured
CHANGE MASTER TO IGNORE_DOMAIN_IDS=()
As MDEV-33887 reports, due to the IO thread updating the binlog
coordinates and the SQL thread updating the GTID state, if the
replica is stopped in-between these updates, the replica state will
be inconsistent. That is, the test expects that the GTID state will
be updated, so upon restart, the replica will be up-to-date.
However, if the replica is stopped before the SQL thread updates its
GTID state, then upon restart, the replica will fetch the previously
ignored event, which is no longer ignored upon restart, and execute
it. This leads to the sporadic extra row in t2.
This patch changes the test to use master_gtid_wait() instead of
master_pos_wait(), to ensure the replica state is consistent with the
master state.
- This issue is caused by commit 188c5da72a (MDEV-32453).
InnoDB fails to end the bulk insert for the table after
applying the bulk insert operation. This leads to an assertion
failure during the commit process.
Before MDEV-15158, wsrep xid information was stored in only one place:
in the TRX_SYS page. Starting with 10.3, it is not stored there but
in the rollback segment header pages, and the latest one is what
matters. MDEV-19229 allows the undo tablespaces to be rebuilt when
innodb_undo_tablespaces is changed on startup. Previously it was not
possible to change that parameter.
These changes meant that rollback segment header pages could
contain several stored wsrep xid's, and when undo tablespaces were
rebuilt, there was an effort to restore the wsrep xid back to the
rollback segment header page; but because there were several of
them, the latest wsrep xid was overwritten with an older one.
trx_rseg_read_wsrep_checkpoint
trx_rseg_init_wsrep_xid
Return true if the read xid is a wsrep xid, false if not.
trx_rseg_mem_restore
Try to read the wsrep xid and, if it is found, copy it to
trx_sys.recovered_wsrep_xid if the read xid has a larger
seqno.
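A condensed sketch of that restore rule (simplified; the seqno
comparison is shown via a hypothetical helper):

  // Keep only the newest wsrep XID seen across all rollback segment
  // header pages, instead of letting a later-read (older) page
  // overwrite it.
  XID xid;
  if (trx_rseg_init_wsrep_xid(rseg_header, &xid) &&   // a wsrep xid?
      wsrep_xid_seqno(xid) >
      wsrep_xid_seqno(trx_sys.recovered_wsrep_xid))
    trx_sys.recovered_wsrep_xid= xid;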
Increase the MASTER_CONNECT_RETRY time under valgrind,
otherwise the slave gives up retrying before the master is ready.
Also, cosmetic cleanup of rpl_semi_sync_master_shutdown.test.
The crash when running mysqlbinlog on a binlog file containing a SEQUENCE
was caused by the MDEV-29621 fixes, which did not check which of the slave
or the binlog applier executes a block introduced there.
The block is meaningful only for the parallel slave applier, so
it's safe to fix this bug by identifying the actual applier and
skipping the block when it's the mysqlbinlog one.
In commit d74d95961a (MDEV-18543)
there was an error that would cause the hidden metadata record
to be deleted, and therefore cause the table to appear corrupted
when it is reloaded into the data dictionary cache.
PageConverter::update_records(): Do not delete the metadata record,
but do validate it.
RecIterator::open(): Make the API more similar to 10.6, to simplify
merges.
When the system variable @@debug_dbug was assigned to
some expression, Sys_debug_dbug::do_check() did not properly
convert the value from the expression character set to utf8.
So the value was erroneously re-interpreted as utf8 without
conversion. In case of a tricky expression character set
(e.g. utf16le), this led to unexpected results.
Fix:
Re-using Sys_var_charptr::do_string_check() in Sys_debug_dbug::do_check().
Values of all session tracking system variables will be sent in the
first ok packet upon connection after successful authentication.
Also updated mtr to print session track info on connection (h/t Sergei
Golubchik) so that we can write mtr tests for this change.
The signal handler thread can use various different runtime
resources when processing a SIGHUP (e.g. master-info information)
due to calling into reload_acl_and_cache(). Currently, the shutdown
process waits for the termination of the signal thread after
performing cleanup. However, this could cause resources actively
used by the signal handler to be freed while reload_acl_and_cache()
is processing.
The specific resource that caused MDEV-30260 is a race condition for
the hostname_cache, such that mysqld would delete it in
clean_up()::hostname_cache_free(), before the signal handler would
use it in reload_acl_and_cache()::hostname_cache_refresh().
Another similar resource is the active_mi/master_info_index. There
was a race between their deletion by the main thread in end_slave(),
and their usage by the signal handler as a part of
Master_info_index::flush_all_relay_logs.read(active_mi) in
reload_acl_and_cache().
This patch fixes these race conditions by relocating where server
shutdown waits for the signal handler to die until after
server-level threads have been killed (i.e., as a last step of
close_connections()). With respect to the hostname_cache, active_mi
and master_info_cache, this ensures that they cannot be destroyed
while the signal handler is still active, and potentially using
them.
Additionally:
1) This requires that Events memory is still in place for SIGHUP
handling's mysql_print_status(). So event deinitialization is moved
into clean_up(), but the event scheduler still needs to be stopped
in close_connections() at the same spot.
2) The function kill_server_thread is no longer used, so it is
deleted.
3) The timeout to wait for the death of the signal thread was not
consistent with the comment. The comment mentioned up to 10 seconds,
whereas it was actually 0.01s. The code has been fixed to wait up to
10 seconds.
4) A warning has been added if the signal handler thread fails to
exit in time.
5) Added pthread_join() to end of wait_for_signal_thread_to_end()
if it hadn't ended in 10s with a warning. Note this also removes
the pthread_detached attribute from the signal_thread to allow
for the pthread_join().
Reviewed By:
===========
Vladislav Vaintroub <wlad@mariadb.com>
Andrei Elkin <andrei.elkin@mariadb.com>
The problem was that if wsrep_load_data_splitting was used,
streaming replication (SR) parameters were set
for a MyISAM table. Galera does not currently support SR for
MyISAM.
The fix is to ignore the wsrep_load_data_splitting setting (with a
warning) if the table is not an InnoDB table.
This is the 10.6+ version of the fix.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was a too tight condition in ha_commit_trans that did not
allow non-transactional storage engines to participate in 2PC
in the Galera case. This is required because a transaction
using e.g. procedures might read the mysql.proc table inside
the transaction, and this table currently uses the Aria
storage engine, which does not support 2PC.
Fixed by allowing read-only transactions on storage
engines that do not support two-phase commit to participate in the
2PC transaction. These will be committed later separately.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Even after commit b8a6719889 there
is an anomaly where a locking read could return inconsistent results.
If a locking read would have to wait for a record lock, then by the
definition of a read view, the modifications made by the current lock
holder cannot be visible in the read view. This is because the read
view must exclude any transactions that had not been committed at the
time when the read view was created.
lock_rec_convert_impl_to_expl_for_trx(), lock_rec_convert_impl_to_expl():
Return an unsafe-to-dereference pointer to a transaction that holds or
held the lock, or nullptr if the lock was available.
lock_clust_rec_modify_check_and_lock(),
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock():
Return DB_RECORD_CHANGED if innodb_strict_isolation=ON and the
lock was being held by another transaction.
The test case, which is based on a bug report by Zhuang Liu,
covers the function lock_sec_rec_read_check_and_lock().
Reviewed by: Vladislav Lesin
Keep track of each recently active XID, recording which worker it was queued
on. If an XID might still be active, choose the same worker to queue event
groups that refer to the same XID to avoid conflicts.
Otherwise, schedule the XID freely in the next round-robin slot.
This way, XA PREPARE can normally be scheduled without restrictions (unless
duplicate XID transactions come close together). This improves scheduling
and parallelism over the old method, where the worker thread to schedule XA
PREPARE on was fixed based on a hash value of the XID.
XA COMMIT will normally be scheduled on the same worker as XA PREPARE, but
can be a different one if the XA PREPARE is far back in the event history.
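A conceptual sketch of the scheduling rule (not the actual code; the
container and helper names are illustrative):

  uint32_t choose_worker_for_xid(const XID &xid)
  {
    // If the XID was queued recently, reuse the same worker so that
    // event groups referring to one XID can never run concurrently on
    // two workers.
    if (const worker_slot *e= recent_xid_hash.find(xid))
      return e->worker;
    // Otherwise schedule freely: take the next round-robin slot and
    // remember the choice for a possible later XA COMMIT of this XID.
    uint32_t w= next_round_robin_worker();
    recent_xid_hash.insert(xid, w);
    return w;
  }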
Testcase and code for trimming dynamic array due to Andrei.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
On Microsoft Windows, ReadFile() as well as WriteFile() limit the size
of the request to DWORD, which is 32 bits (at most 4 GiB - 1) also on
64-bit systems.
On FreeBSD, sysctl debug.iosize_max_clamp could limit the size of a
write request to INT_MAX. The size of a read request is always limited
to INT_MAX. This would allow the request size to be 4095 bytes more than
the Linux limit (0x7ffff000 according to "man 2 read" and "man 2 write").
On OpenBSD, Solaris and possibly NetBSD, the read request size is limited
to SSIZE_T_MAX, which would be half the current maximum
innodb_log_buffer_size. This should not be much of an issue anyway,
because on contemporary 64-bit platforms, the virtual addresses are
limited to 48 bits.
IBM AIX documentation mentions OFF_MAX which would apply when
a 64-bit application is running on a 32-bit kernel.
Let us declare innodb_log_buffer_size as 32-bit unsigned and make the
maximum 0x7ffff000, to be compatible with the least common
denominator (Linux).
The maximum innodb_sort_buffer_size already was 64 MiB,
which is not a problem.
SyncFileIO::execute(): Assert that the size of a synchronous read or
write request is limited to the maximum.
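In code terms, the clamp amounts to something like this (sketch; the
actual limits are enforced in the sysvar definition and the I/O layer):

  // Least common denominator across the platforms discussed above:
  // the Linux per-call I/O limit from "man 2 read" / "man 2 write".
  static constexpr unsigned MAX_LOG_BUFFER_SIZE= 0x7ffff000;
  static_assert(MAX_LOG_BUFFER_SIZE <= 0x7fffffff,
                "fits a 32-bit DWORD (Windows) and INT_MAX (FreeBSD)");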
Reviewed by: Vladislav Vaintroub
A GTID event can have variable length, with contributing factors
such as the variable length from the flags2 and optional extra flags
fields. These fields are bitmaps, where each set bit indicates an
additional value that should be appended to the event, e.g.
multi-engine transactions append a number to indicate the number of
additional engines a transaction uses. However, if a flag bit is
set, and no additional fields are appended to the event, MDEV-33672
reports that the server can still try to read from memory as if they
did exist. Note, however, that in debug builds this condition is
asserted for FL_EXTRA_MULTI_ENGINE.
This patch fixes this to check that the length of the event is
aligned with the expectation set by the flags for FL_PREPARED_XA,
FL_COMPLETED_XA, and FL_EXTRA_MULTI_ENGINE.
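A hedged sketch of the added validation (flag names are from this
message; the exact lengths are illustrative constants):

  // Each set flag promises extra bytes in the event body; verify the
  // promise before reading, instead of walking past the buffer end.
  bool gtid_event_length_ok(const uchar *p, const uchar *end,
                            unsigned flags2, unsigned flags_extra)
  {
    if ((flags2 & FL_PREPARED_XA) && end - p < XA_INFO_MIN_LEN)
      return false;          // XA_INFO_MIN_LEN: assumed constant
    if ((flags_extra & FL_EXTRA_MULTI_ENGINE) && end - p < 1)
      return false;          // 1 byte: count of extra engines
    return true;             // FL_COMPLETED_XA checked similarly
  }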
Reviewed By
============
Kristian Nielsen <knielsen@knielsen-hq.org>