Clarify confusing comments in the previous commit, and note that the failure
started after push of MDEV-34504.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
in bintars the server is linked with wolfssl, while the connector
is linked with gnutls. Thus client_ed25519.so gets a gnutls
dependency with unresolved symbols, and it cannot be loaded into the
server, where gnutls symbols aren't present.
linking the plugin statically with gnutls fixes that and the test passes.
but when such a plugin is loaded into the client, the client gets
two copies of gnutls - they conflict and ssl doesn't work at all.
let's detect this and disable the test for now.
note that:
* unit.conc_tls is broken in mtr
* schannel now doesn't fail on invalid ca path unless
--ssl-verify-server-cert is used. openssl still does.
Problem:
========
- After commit ada1074bb1 (MDEV-14398),
fil_crypt_set_encrypt_tables() iterates through all tablespaces to
fill the default_encrypt tables list. This was a trigger to
encrypt or decrypt when the key rotation age is set to 0. But import
tablespace calls fil_crypt_set_encrypt_tables() unnecessarily;
the motivation for the call is to signal the encryption threads.
Fix:
====
ha_innobase::discard_or_import_tablespace(): Remove the
fil_crypt_set_encrypt_tables() call and add the imported tablespace
to the default encrypt list if necessary.
- Commit 85db534731 (MDEV-33400)
retains the instantness in the table definition after discard
tablespace. So there is no need to assign n_core_null_bytes
during instant table preparation, except when it has not yet been
initialized.
- During the copy algorithm, InnoDB should use bulk insert operations
instead of row-by-row inserts. By doing this, the copy algorithm
can build indexes effectively. This optimization is disabled
for temporary tables, versioned tables and tables which have
foreign key relations.
Introduced the variable innodb_alter_copy_bulk to allow
bulk insert operations for copy ALTER operations
inside InnoDB. It is enabled by default.
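For example, a sketch of enabling/disabling it (table name illustrative;
assuming the variable has global scope, per "enabled by default" above):

  SET GLOBAL innodb_alter_copy_bulk = ON;   -- the default
  ALTER TABLE t1 FORCE, ALGORITHM=COPY;     -- may buffer rows for bulk insert
  SET GLOBAL innodb_alter_copy_bulk = OFF;  -- back to row-by-row inserts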
ha_innobase::extra(): HA_EXTRA_END_ALTER_COPY mode tries to apply
the buffered bulk insert operation and updates the non-persistent
table stats.
row_merge_bulk_t::write_to_index(): Update stat_n_rows after
applying the bulk insert operation.
row_ins_clust_index_entry_low(): In case of copy algorithm,
switch to bulk insert operation.
copy_data_error_ignore(): Handles the error while copying
the data from source to target file.
(With trivial fixes by sergey@mariadb.com)
Added option fix_innodb_cardinality to optimizer_adjust_secondary_key_costs.
Using fix_innodb_cardinality disables the 'divide by 2' of rec_per_key_int
in InnoDB that in effect doubles the Cardinality for secondary keys.
This has the biggest effect for indexes where a few rows have the same key
value. Using this may also cause table scans for very small tables (which
in some cases may be better than an index scan).
The user-visible effect is that 'SHOW INDEX FROM table_name' will for
InnoDB show the true Cardinality (and not 2x the real value). It will
also allow the optimizer to choose a better index in some cases, as the
division by 2 could have a bad effect for tables with 2-5 identical values
per key.
A few notes about using fix_innodb_cardinality:
- It has a direct effect on SHOW INDEX FROM table_name. SHOW INDEX
will also update the statistics in the table share.
- The effect of fix_innodb_cardinality on query plans or EXPLAIN
is only visible after the first open of the table. This is why one must
do a FLUSH TABLES or use SHOW INDEX for the option to take effect.
- Using fix_innodb_cardinality can thus affect the query plans of all
users of the same tables.
Because of this, it is strongly recommended to set
optimizer_adjust_secondary_key_costs=fix_innodb_cardinality mainly
in configuration files, to not cause issues for other users.
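For example (table name illustrative):

  SET GLOBAL optimizer_adjust_secondary_key_costs = 'fix_innodb_cardinality';
  FLUSH TABLES;            -- or SHOW INDEX, so the option takes effect
  SHOW INDEX FROM t1;      -- Cardinality of secondary keys now shows the
                           -- true value instead of 2x the real value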
When there are no bounds on the upper or lower part of the window,
it doesn't matter whether the type is numeric.
It also doesn't matter how many ORDER BY items there are in the
query.
Reviewers: Sergei Petrunia and Oleg Smirnov
When mysqldump is run to dump the `mysql` system database, it generates
INSERT statements into the table `mysql.gtid_slave_pos`.
After running the backup script,
those inserts did not produce the expected gtid state on the slave. In
particular, the maximum of mysql.gtid_slave_pos.sub_id did not make it
into rpl_global_gtid_slave_state.last_sub_id, an in-memory object that
is supposed to match the current state of the table. And that was
regardless of whether the --gtid option was specified or not. Later,
when the backup recipient server starts as a slave in *non-gtid* mode,
this desynchronization may lead to a duplicate key error.
This effect is corrected for --gtid mode mysqldump/mariadb-dump only,
as follows: the fixes ensure the insert block of the dump
script is followed by a "summing-up" SET @@global.gtid_slave_pos
assignment.
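With the fix, the tail of a --gtid dump looks approximately like this
(the position value is illustrative):

  -- (INSERTs into mysql.gtid_slave_pos precede this)
  SET GLOBAL gtid_slave_pos='0-1-42';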
For the implementation part, note that a deferred print-out of
SET gtid_slave_pos and the associated comments is preferred over
relocating the entire `if (opt_master,slave_data &&
do_show_master,slave_status) ...` blocks, because of a compatibility
concern: namely, an error inside do_show_*() is handled in the new code
the same way, and as early, as before.
A regression test can be run in how-to-reproduce mode as well.
One affected mtr test was observed:
the rpl_mysqldump_slave.result "mismatch" now shows the new deferred
printing of SET gtid_slave_pos in action.
The test was missing a save_master_gtid.inc on the master,
leading to the slave thinking it was in sync after executing
sync_with_master_gtid.inc, despite not having executed the
latest transaction. This skipped transaction, an XA COMMIT,
was supposed to raise an error that would then be ignored (its XID
could not be found), because the replication filters
would filter out the target database. However, if the slave
managed to stop before executing the transaction, then
the replication filter is reset (to empty), and when the
slave is later restarted, that transaction's error would
no longer be ignored.
Additionally, as the test cases added in MDEV-33921 rely
on GTID synchronization, the test cases now force
master_use_gtid=slave_pos for consistency.
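A sketch of the missing synchronization pattern, using the standard
mtr include files named above:

  --connection master
  --source include/save_master_gtid.inc
  --connection slave
  --source include/sync_with_master_gtid.inc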
There are 3 diffs in the result:
1) NULL value from SELECT.
Due to incorrect truncation of the hex value, an incorrect value is
written to the view frm instead of the original value. This results in
reading an incorrect value from the frm, so the eventual result is NULL.
2) 'Name_exp1' in a column name (in gis.test).
This was because an identifier in the SELECT is longer than 64 characters,
so the 'Name_exp1' alias is also written to the view frm.
3) Diff in EXPLAIN EXTENDED.
This was because the query plan under view protocol doesn't
contain the database name. As a fix, disable view protocol for that
particular query.
- Fix view-protocol: long expressions in SELECT
list should have "expr AS column_name".
- Also, moved the test from subselect*test to
suite/json/t/json_table.test.
- During XA PREPARE, InnoDB releases the non-exclusive locks.
But it fails to remove the non-exclusive table lock from the
transaction's table locks. In the meantime, the main thread evicts
the table from the LRU cache. While rolling back the XA transaction,
InnoDB iterates through the table locks to check whether it
holds a lock on any system tables, and wrongly assumes the
evicted table is a system table since its table id is 0.
Fix:
===
During XA PREPARE, remove the table locks of the transaction while
releasing the non-exclusive locks.
Simplify in an attempt to avoid:
mysqltest: At line 275: File already exist: on the write_file
lines.
Using write_line, as that's what a lot of other tests
do for writing small bits to an expect file.
Review thanks to Vladislav Vaintroub
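A sketch of the pattern (expect-file path illustrative):

  # unlike write_file, write_line does not fail if the file already exists
  --write_line wait $MYSQLTEST_VARDIR/tmp/mysqld.1.expect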
Problem:
========
- innodb.temp_truncate_freed fails with ER_WRONG_ARGUMENTS and
states that another buffer pool resize is already in progress.
The test case has a wait condition to ensure that the buffer pool
resize is completed. There is a possibility that the wait condition
check gets a false impression that the InnoDB buffer pool resize
completed, due to a previous buffer pool resize.
Fix:
===
Add a more elaborate wait_condition to ensure the current buffer
pool resize has completed.
buf_pool_t::resize(): Set the buffer pool resize status only
after setting the previous buffer pool size to the current buffer pool
size. This should help make the test case reliable.
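A sketch of the kind of wait condition (the exact status text is an
assumption here):

  let $wait_condition =
    SELECT variable_value LIKE 'Completed%'
      FROM information_schema.global_status
      WHERE variable_name = 'INNODB_BUFFER_POOL_RESIZE_STATUS';
  --source include/wait_condition.inc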
MDEV-34274 did not fix the test failure. The test has a START SLAVE
UNTIL condition, where we can't use sync_with_master_gtid.inc,
wait_for_slave_to_start.inc, or wait_for_slave_to_stop.inc because
our MTR connection thread races with the start/stop of the SQL/IO
threads. So instead, for slave start, we prove the threads started
by waiting for the connection count to increase by 2; and for slave
stop, we wait for the processlist count to return to its
pre-START SLAVE number.
Note this is a backport of 8c8b3ab784
from 11.1.
The test rpl.rpl_change_master_demote used a `sleep 1` command
to give time for a START SLAVE UNTIL to start the slave threads
and wait for them to automatically die by UNTIL. On machines
with heavy load (especially MSAN bb builders), one second was
not enough, and the test would fail due to the IO thread
still being up.
This patch fixes the test by replacing the sleep with specific
conditions to wait for. The test cannot wait for the IO or SQL
threads to start, as it would be possible that they would be
started and stopped by the time the MTR executor would check
the slave status. So instead, we test for proof that they
existed via the Connections status variable being incremented
by at least 2 (Connections just shows the global thread id).
At this point, we still can't use the wait_for_slave_to_stop
helper, as the SQL/IO_Running fields of SHOW SLAVE STATUS
may not be updated yet. So instead, we use
information_schema.processlist, which would show the presence
of the Slave_SQL/IO threads. So to "wait for the slave to stop",
we wait for the Slave_SQL/IO threads to be gone from the
processlist.
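A sketch of the two waits (illustrative, not the verbatim test code):

  let $connections_pre = query_get_value(SHOW GLOBAL STATUS LIKE 'Connections', Value, 1);
  # ... START SLAVE UNTIL ... runs here ...
  # proof both threads existed: Connections grew by at least 2
  let $wait_condition =
    SELECT variable_value - $connections_pre >= 2
      FROM information_schema.global_status
      WHERE variable_name = 'Connections';
  --source include/wait_condition.inc
  # "slave stopped": no Slave_IO/Slave_SQL threads left in the processlist
  let $wait_condition =
    SELECT COUNT(*) = 0 FROM information_schema.processlist
      WHERE command IN ('Slave_IO', 'Slave_SQL');
  --source include/wait_condition.inc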
The issue was that the test did not take into account that the IO thread
could have been in the COMMAND=Connecting state, which happens before the
COMMAND=Slave_IO state.
The test is a bit fragile, as it depends on the COMMAND state being
synchronised with the Slave_IO_State, which is not the case.
I added a new proc state and some more information to the error
output to be able to diagnose future failures more easily.
Improve performance of queries like
SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
by, in this example, replacing the WHERE clause with field = 4
in the case of ref access.
The rewrite is done during fix_fields and we disambiguate this
case from other cases of NAME_CONST by inspecting where we are
in parsing. We rely on THD::where to accomplish this. To
improve performance there, we change the type of THD::where to
be an enumeration, so we can avoid string comparisons during
Item_name_const::fix_fields. Consequently, this patch also
changes all usages of THD::where to conform likewise.
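For illustration (schema and constant made up):

  CREATE TABLE t1 (field INT, KEY(field));
  SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
  -- during fix_fields the condition is rewritten to: field = 4
  -- so ref access on KEY(field) becomes possible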
There are two problems.
First, replication fails when XA transactions are used where the
slave has replicate_do_db set and the client has touched a different
database when running DML such as inserts. This is because XA
commands are not treated as keywords, and are thereby not exempt
from the replication filter. The effect of this is that during an XA
transaction, if its logged “use db” from the master is filtered out
by the replication filter, then XA END will be ignored, yet its
corresponding XA PREPARE will be executed in an invalid state,
thereby breaking replication.
Second, if the slave replicates an XA transaction which results in
an empty transaction, the XA START through XA PREPARE first phase of
the transaction won’t be binlogged, yet the XA COMMIT will be
binlogged. This will break replication in chain configurations.
The first problem is fixed by treating XA commands in
Query_log_event as keywords, thus allowing them to bypass the
replication filter. Note that Query_log_event::is_trans_keyword() is
changed to accept a new parameter to define its mode, to either
check for XA commands or regular transaction commands, but not both.
In addition, mysqlbinlog is adapted to use this mode so its
--database filter does not remove XA commands from its output.
The second problem is fixed by overwriting the XA state in the XID
cache to be XA_ROLLBACK_ONLY, so at commit time, the server knows to
rollback the transaction and skip its binlogging. If the xid cache
is cleared before an XA transaction receives its completion command
(e.g. on server shutdown), then before reporting ER_XAER_NOTA when
the completion command is executed, the filter is first checked if
the database is ignored, and if so, the error is ignored.
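A minimal sketch of the first problem (names illustrative; the slave is
configured with --replicate-do-db=db1):

  # on the master, from a connection whose default database differs:
  USE db2;
  XA START 'x1';
  INSERT INTO db1.t1 VALUES (1);
  XA END 'x1';
  XA PREPARE 'x1';
  XA COMMIT 'x1';

Before the fix, the replicated "use db2" made the filter skip XA END,
while XA PREPARE still executed in an invalid state on the slave.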
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
PURGE BINARY LOGS did not always purge binary logs. This commit fixes
some of the issues and adds notifications if a binary log cannot be
purged.
User visible changes:
- 'PURGE BINARY LOGS TO log_name' and 'PURGE BINARY LOGS BEFORE date'
worked differently. 'TO' ignored 'slave_connections_needed_for_purge'
while 'BEFORE' did not. Now both versions ignore the
'slave_connections_needed_for_purge' variable.
- 'PURGE BINARY LOGS ..' commands now return a note if a binary log cannot
be deleted, like:
Note 1375 Binary log 'master-bin.000004' is not purged because it is
the current active binlog
- Automatic binary log purges, based on date or size, will write a
note to the error log if a binary log matching the size or date
cannot yet be deleted.
- If 'slave_connections_needed_for_purge' is set from a config or
command line, it is set to 0 if Galera is enabled and 1 otherwise
(old default). This ensures that automatic binary log purge works
with Galera as before the addition of
'slave_connections_needed_for_purge'.
If the variable is changed to 0, a warning will be printed to the error
log.
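For example (log name as in the note above):

  PURGE BINARY LOGS TO 'master-bin.000004';
  SHOW WARNINGS;
  -- Note 1375 Binary log 'master-bin.000004' is not purged because it
  --           is the current active binlog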
Code changes:
- Added THD argument to several purge_logs related functions that needed
THD.
- Added 'interactive' options to purge_logs functions. This allowed
me to remove testing of sql_command == SQLCOM_PURGE.
- Changed purge_logs_before_date() to first check if a log is applicable
before calling can_purge_logs(). This ensures we do not get a
notification for logs that do not match the removal criteria.
- MYSQL_BIN_LOG::can_purge_log() will write notifications to the user
or error log if a log cannot yet be removed.
- log_in_use() will return reason why a binary log cannot be removed.
Changes to keep code consistent:
- Moved checking of binlog_format for Galera to be after Galera is
initialized (The old check never worked). If Galera is enabled
we now change the binlog_format to ROW, with a warning, instead of
aborting the server. If this change happens a warning will be printed to
the error log.
- Print a warning if Galera or FLASHBACK changes the binlog_format
to ROW. Before it was done silently.
Reviewed by: Sergei Golubchik <serg@mariadb.com>,
Kristian Nielsen <knielsen@knielsen-hq.org>
- The columns stat_value and sample_size in the mysql.innodb_index_stats
table are declared as BIGINT UNSIGNED without any check constraint, so
a user can manually update the values of stat_value and sample_size
to zero. InnoDB then aborts the server while reading the statistics
information, because InnoDB expects at least one leaf
page to exist for the index.
- To fix this issue, InnoDB should interpret the values of
stat_n_leaf_pages and stat_index_size in innodb_index_stats, and
stat_clustered_index_size and stat_sum_of_other_index_sizes
in innodb_table_stats, as valid even when the user
has set them to 0.
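A sketch of how the server could be brought down before this fix
(database/table names illustrative):

  UPDATE mysql.innodb_index_stats
     SET stat_value = 0, sample_size = 0
   WHERE database_name = 'test' AND table_name = 't1';
  -- the abort happened when InnoDB next loaded these statistics,
  -- e.g. on opening test.t1 after a restart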
Builders:
- amd64-fedora-38-last-N-failed
- amd64-debian-10-last-N-failed
erroneously execute MTR for these files:
- rpl/include/rpl_extra_col_master.test
- rpl/include/rpl_binlog_max_cache_size.test
- rpl/include/rpl_charset.test
when these files are changed by a commit.
Changing their extension from *.test to *.inc to avoid this.
The problem was in an error message suppression, which did not match
the actual warning messages due to bad quoting.
Changed the warning message suppressions to a simpler format.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
DML transactions on FK-child tables also get table locks
on FK-parent tables. If there is a DML transaction holding
such a lock, and a TOI transaction starts, the latter
BF-aborts the former and puts itself into a waiting state.
If at this moment another DML transaction on FK-child table
starts, it doesn't check that the transaction waiting on
a parent table lock is TOI, and it erroneously BF-aborts
the waiting TOI transaction.
The fix: don't roll back a high-priority transaction waiting
on a lock in InnoDB; instead, roll back the incoming DML
transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>