If secure-file-priv was set on the slave, the slave became unable to
execute LOAD DATA INFILE statements sent from the master under mixed or
statement-based replication.
This patch fixes the issue by ignoring this security restriction while
the SQL thread executes such statements, and instead checking that the
files the slave creates and reads are located in --slave-load-tmpdir.
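For illustration, a minimal scenario of this kind might look as follows
(table and file names are hypothetical):

    -- On the master, with statement-based or mixed binary logging:
    LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;

    -- Before the fix, a slave started with --secure-file-priv rejected the
    -- replicated statement, even though the SQL thread only reads its own
    -- temporary copy of the file from --slave-load-tmpdir.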
Moved the test case for the bug into a separate file (and restored the
original innodb_mysql test setup).
Used the new wait_show_condition test macro to avoid the use of sleep.
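As a sketch of how this macro is typically driven (the state string here
is only an example, and the parameter names follow the conventions of
the include file, which may differ between versions):

    # Poll SHOW PROCESSLIST until the expected state appears,
    # instead of sleeping for a fixed number of seconds.
    # $condition contains the comparison operator and the value.
    let $show_statement= SHOW PROCESSLIST;
    let $field= State;
    let $condition= = 'Waiting for table';
    --source include/wait_show_condition.inc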
The problem is that select queries executed concurrently with
a concurrent insert on a MyISAM table could be cached if the
select started after the query cache invalidation but before
the unlock of tables performed by the concurrent insert. This
race could happen because the concurrent insert failed to
prevent caching of select queries running at the same time.
The solution is to add an 'uncacheable' status flag to signal
that a concurrent insert is being performed on the table and
that queries executing at the same time shouldn't cache their
results.
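A rough two-connection mysqltest sketch of the timing involved (names
are made up, and this does not deterministically reproduce the race):

    connect (writer, localhost, root,,);
    connect (reader, localhost, root,,);

    connection default;
    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1);

    connection writer;
    # A concurrent insert appends rows while readers keep the table open.
    send INSERT INTO t1 VALUES (2);

    connection reader;
    # Before the fix, this result could be stored in the query cache while
    # the concurrent insert still held the table, so later cache hits could
    # miss the newly inserted row.
    SELECT * FROM t1;

    connection writer;
    reap;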
Replaced Unix calls with mysql-test-run's built-in functions / SQL manipulation where possible.
Replaced error codes with error names as well.
Disabled two tests on Windows due to their more complex Unix command usage.
See Bug#41307, Bug#41308.
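For example (a hedged sketch; the file names are hypothetical), a shell
call in a test can often be replaced by the corresponding mysqltest
built-in command:

    # Before: platform-dependent shell commands
    --exec rm $MYSQLTEST_VARDIR/tmp/bug.data
    --exec cp $MYSQLTEST_VARDIR/tmp/a.txt $MYSQLTEST_VARDIR/tmp/b.txt

    # After: portable mysqltest built-ins
    --remove_file $MYSQLTEST_VARDIR/tmp/bug.data
    --copy_file $MYSQLTEST_VARDIR/tmp/a.txt $MYSQLTEST_VARDIR/tmp/b.txt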
connections
The problem is that tables can enter the open table cache for a thread
without being properly cleaned up. This can happen if
make_join_statistics() fails to read a const table because of e.g. a
deadlock. It does set a member of the TABLE structure to a value it
allocates, but it neither cleans up this setting on error nor sets the
rest of the members in JOIN to allow for automatic cleanup.
As a result, when such an error occurs and the next statement re-uses
the table from the open table cache, it gets the table with
TABLE::reginfo.join_tab pointing to a memory area that has already been
freed.
Fixed by making sure make_join_statistics() cleans up
TABLE::reginfo.join_tab on error.
In the case of a ROW item, each compared pair did not
check whether the argument collations could be aggregated,
so the appropriate item conversion did not happen.
The fix is to add the collation check and conversion for
ROW pairs.
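An illustrative (hypothetical) case where the pairwise collation
aggregation matters:

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1,
                     b VARCHAR(10) CHARACTER SET utf8);
    INSERT INTO t1 VALUES ('x', 'y');
    -- Each pair, (a, 'x') and (b, _latin1'y'), must aggregate its own
    -- collations and convert the items before the values are compared.
    SELECT (a, b) = ('x', _latin1'y') FROM t1;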
Backport from 6.0
Changed the error message to show that it is partitioning that does not
yet support foreign keys (see the sketch at the end of this entry).
Changed spelling from British English to American English.
Cleaned up SQL code in the test.
Moved the FLUSH TABLES statement before DROP TABLE t1 to prevent a
'Table open on delete' warning and a test failure.
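As a sketch of the error message change mentioned above (hypothetical
table names; the exact message text depends on the server version):

    CREATE TABLE t0 (a INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE t1 (a INT, b INT,
                     FOREIGN KEY (b) REFERENCES t0 (a))
    ENGINE=InnoDB
    PARTITION BY HASH (a) PARTITIONS 2;
    -- The statement is still rejected, but the error now makes it clear
    -- that it is partitioning that does not yet support foreign keys.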