If an mtr test runs multiple servers and only some of them are
restarted for whatever reason with new command-line parameters,
then a subsequent mtr test may fail, because no cleanup is performed.
Replication and Galera test suites are affected.
In the mtr script, there is a server_need_restart function
that decides whether we need to start a new mysqld process before
the next test. If the mysqld parameters were changed by the
previous test - not necessarily the parameters of the primary mysqld
server, possibly even those of a secondary server - this function
decides to start a new mysqld process. But since it does not remove
the old (changed) parameters, the new process starts with the
parameters changed by the *previous* test.
To correct this error, we must delete the modified process
parameters after detecting that they were changed by the
previous test.
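The mtr script itself is Perl, but the core of the fix can be
sketched in C++ as well; everything below (the Mysqld struct,
started_opts, extra_opts) is an illustrative stand-in for mtr's
per-server option lists, not the actual script code:

    #include <string>
    #include <vector>

    // Illustrative stand-in for one managed server and the options
    // its running process was started with.
    struct Mysqld {
      std::vector<std::string> started_opts;
    };

    // Returns true when a restart is needed. On a mismatch it also
    // clears the stale options (the actual fix), so the next start
    // does not inherit parameters changed by the previous test.
    bool server_need_restart(Mysqld &server,
                             const std::vector<std::string> &extra_opts) {
      if (server.started_opts != extra_opts) {
        server.started_opts.clear();
        return true;
      }
      return false;
    }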
This patch also simplifies the galera_drop_database test and makes
it more stable; it was while debugging this test that the problem
was detected.
https://jira.mariadb.org/browse/MDEV-17421
This is used to control whether to use the new/optimized
certification rules or the old/classic ones, which could cause more
certification failures when foreign keys are used and two INSERTs are
done concurrently to the child table from different nodes.
(cherry picked from commit 815d73e6af8daace6262ab63ca6c043ffc4204b3)
wsrep_append_foreign_key() and wsrep_append_key() used to take a boolean
argument denoting whether the relevant certification key type is shared
(assuming it is exclusive if the argument is false). Change that
argument to the enum wsrep_key_type from wsrep_api.h, so that eventually
other types can also be passed (like WSREP_KEY_SEMI).
This is a non-functional change.
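A minimal sketch of the signature change; the real functions take
more parameters, so the elided ones are shown as comments:

    #include "wsrep_api.h"  /* defines enum wsrep_key_type:
                               WSREP_KEY_SHARED, WSREP_KEY_SEMI,
                               WSREP_KEY_EXCLUSIVE */

    // Before: a boolean could only express shared vs. exclusive.
    int wsrep_append_key(/* thd, trx, key buffers, ... */ bool shared);

    // After: the enum can also carry other types, e.g. WSREP_KEY_SEMI.
    int wsrep_append_key(/* thd, trx, key buffers, ... */
                         enum wsrep_key_type key_type);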
(cherry picked from commit 360bf36dbb9378b36ef57921c725a9505e19e0d9)
forceful connection close.
The fix is to ensure that when close_connection() is called from the
shutdown thread, current_thd is set. This ensures that the allocation
callback for THD-specific memory won't assert (in debug builds) or
crash (in 10.1 and later).
close_connection() allocates THD-specific memory, e.g. when it writes
the final error packet and compression is ON for the connection.
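A simplified sketch of the shutdown path; set_current_thd() stands
for whatever mechanism binds a THD to the calling thread's
thread-local storage (the exact helper differs between server
versions), and the wrapper function name is illustrative:

    // Close one client connection from the shutdown thread.
    void shutdown_close_one_connection(THD *thd) {
      set_current_thd(thd);  // THD-specific allocation callbacks now work
      close_connection(thd, ER_SERVER_SHUTDOWN); // may allocate THD
                                                 // memory, e.g. for the
                                                 // final error packet
      set_current_thd(nullptr); // detach the THD again
    }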
The bug appears as a slave SQL thread hanging in
rpl_parallel_thread_pool::get_thread() while there are no slave worker
threads to wake it up.
The reason for the hang is that during parallel slave worker pool
activation, the starting SQL thread could read the worker pool size
concurrently with pool deactivation, and it did this read without the
necessary protection against the race.
Fixed by making the SQL thread, at pool activation, first grab the
same lock that a potential deactivator takes before it accesses the
pool size.
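In essence, the reader must take the pool lock before sampling the
size. A minimal sketch with standard C++ primitives (the server uses
its own mysql_mutex_* wrappers, and the member names below are
illustrative):

    #include <mutex>

    struct worker_pool {
      std::mutex LOCK_pool; // lock the deactivator holds while resizing
      unsigned count{0};    // number of worker threads in the pool

      // Before the fix the activating SQL thread read count without
      // the lock, racing with deactivate(); now the read is serialized.
      unsigned size() {
        std::lock_guard<std::mutex> guard(LOCK_pool);
        return count;
      }

      void deactivate() {
        std::lock_guard<std::mutex> guard(LOCK_pool);
        count = 0; // shrink the pool under the same lock
      }
    };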
derived table / view by equality
Now rows of a materialized derived table are always put into a
temporary table before the join operation. If BNLH is used to join
this table with the result of a partial join, then both operands of
the join are actually kept in main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However, this kind of data flow is not supported
yet.
Fixed by not allowing use of the hash join algorithm to join a
materialized derived table if it is joined by an equality predicate of
the form f=e, where f is a field of the derived table.
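A rough sketch of the check this implies in the join optimizer; the
struct and its members are illustrative stand-ins, not the actual
server symbols:

    // Illustrative per-table optimizer state.
    struct JoinTable {
      bool materialized_derived; // a materialized derived table/view
      bool field_equality_join;  // join condition is f=e, f a field
                                 // of this table
    };

    // Disallow BNLH for a materialized derived table joined by f=e:
    // hash join would keep both join operands in main memory at once.
    bool hash_join_allowed(const JoinTable &tab) {
      return !(tab.materialized_derived && tab.field_equality_join);
    }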
This would happen especially in optimistic parallel replication, where there
is a good chance that a transaction will be rolled back (due to conflicts)
after it has executed record_gtid(). If the transaction did any deletions of
old rows as part of record_gtid(), those deletions will be undone as well.
And the code did not properly ensure that the deletions would be re-tried.
This patch makes record_gtid() remember the list of deletions done as part
of a transaction. Then in rpl_slave_state::update() when the changes have
been committed, we discard the list. However, in case of error and rollback,
in cleanup_context() we will instead put the list back into
rpl_global_gtid_slave_state so that the deletions will be re-tried later.
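A minimal sketch of that bookkeeping; the container and names below
are illustrative, not the actual rpl_slave_state members:

    #include <cstdint>
    #include <list>
    #include <utility>

    typedef std::pair<uint32_t, uint64_t> gtid_row; // (domain_id, sub_id)

    struct gtid_deletion_log {
      // Deletions done by record_gtid() in the current transaction.
      std::list<gtid_row> pending_deletes;

      // rpl_slave_state::update(), after commit: the deletions are
      // durable, so the list is discarded.
      void on_commit() { pending_deletes.clear(); }

      // cleanup_context(), on rollback: put the list back into the
      // global state so the deletions are re-tried later.
      void on_rollback(std::list<gtid_row> &global_retry_list) {
        global_retry_list.splice(global_retry_list.end(), pending_deletes);
      }
    };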
Probably fixes part of the cause of MDEV-12147 as well.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
pthread_detach_this_thread() was intended to be defined to something
meaningful only on some ancient unixes, which don't have
pthread_attr_setdetachstate() defined. Otherwise, on normal unixes,
threads are created detached in the first place.
This was broken in 0f01bf2676, so that
we started calling pthread_detach() for already detached threads.
The intention was to detach the aria checkpoint thread.
However, in 87007dc2f7 aria service threads
were made joinable, with appropriate handling, which makes the
breaking revision unnecessary.
Revert the remnants of 0f01bf2676, so that
pthread_detach_this_thread() is meaningful only on some ancient unixes
again.
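On normal unixes the detached state is requested at creation time,
which is why a later pthread_detach() on such a thread is redundant
(and calling it on an already detached thread is undefined). A
minimal sketch:

    #include <pthread.h>

    static void *worker(void *) { return nullptr; }

    int start_detached_worker() {
      pthread_attr_t attr;
      pthread_t tid;
      pthread_attr_init(&attr);
      // Created detached: never joined, never detached again.
      pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
      int err = pthread_create(&tid, &attr, worker, nullptr);
      pthread_attr_destroy(&attr);
      return err;
    }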
When converting table identifiers to a new format,
some tables can be renamed twice, which subsequently
leads to the appearance of "false" auxiliary tables
belonging to another main (parent) table (which does
not actually have auxiliary tables).
This is because the table number is repeatedly added
to the aux_tables_to_rename vector inside the function
fts_check_and_drop_orphaned_tables.
To correct this error, we must add a check for the
occurrence of the table number in the aux_tables_to_rename
vector before adding a new element.
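The fix amounts to a contains-check before the push. A minimal C++
sketch; the real code operates on InnoDB's own vector type, so
std::vector here is illustrative:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Queue a table id for renaming only if it is not already queued,
    // so no table can be renamed twice.
    void queue_for_rename(std::vector<uint64_t> &aux_tables_to_rename,
                          uint64_t table_id) {
      if (std::find(aux_tables_to_rename.begin(),
                    aux_tables_to_rename.end(),
                    table_id) == aux_tables_to_rename.end())
        aux_tables_to_rename.push_back(table_id);
    }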
https://jira.mariadb.org/browse/MDEV-16656
e.g. "No option named 'FILE_KEY_MANAGEMENT_SO' in group 'ENV' at lib/My/ConfigFactory.pm line 370."
when a test has `plugin-load-add=@ENV.FILE_KEY_MANAGEMENT_SO`
Two bugs in Aria, related to 2-level fulltext indexes:
* REPAIR calculated the key number incorrectly
* CHECK copied the key into last_key too early, and
  checking the second-level btree was overwriting it
Unary minus on the smallest possible signed long long value
(LONGLONG_MIN) is undefined in C++. Because of this, func_time.test
failed on ppc64 buildbot machines.
Fixing the code to avoid using undefined operations.
This fix is similar to "MDEV-7973 bigint fail with gcc 5.0"
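The usual workaround is to negate in unsigned arithmetic, where
wraparound is well defined. A minimal sketch using standard C++ types
(the server uses its own longlong/ulonglong typedefs):

    #include <cstdint>

    // Negating INT64_MIN directly is undefined behaviour: the result
    // (+2^63) is not representable in a signed 64-bit integer.
    // Unsigned arithmetic wraps modulo 2^64, so negate there instead.
    uint64_t safe_negate(int64_t v) {
      return 0ULL - static_cast<uint64_t>(v);
    }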