* error reporting was never needed
* avoid the useless hton -> db_type -> hton round-trip conversion
* in many cases the engine is guaranteed to exist, so add asserts
* use ha_default_handlerton() instead of ha_checktype(DB_TYPE_DEFAULT)
After the fix for MDEV-6728, the KILL signal is reset at the start of query
execution. This means that in this test case, we need to wait for the
to-be-killed query to have started; otherwise the kill signal can be lost,
causing the test case to fail.
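A minimal sketch of the waiting pattern, with hypothetical connection and
query names:

    --connection con1
    --send SELECT SLEEP(100)
    --connection default
    # wait until the victim query is actually running before killing it,
    # so the KILL signal cannot arrive before execution starts
    let $wait_condition=
      SELECT COUNT(*) = 1 FROM information_schema.processlist
      WHERE info LIKE 'SELECT SLEEP%';
    --source include/wait_condition.inc
    let $id= `SELECT id FROM information_schema.processlist WHERE info LIKE 'SELECT SLEEP%'`;
    eval KILL QUERY $id;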
Added a SESSION-only system variable "wsrep_dirty_reads" to allow SELECT
queries to pass even when the node is not prepared to accept queries
(wsrep_ready=OFF). Added a test case.
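A minimal usage sketch, assuming a hypothetical table t1 (while
wsrep_ready=OFF a plain SELECT fails with ER_UNKNOWN_COM_ERROR):

    SET SESSION wsrep_dirty_reads=ON;
    # allowed now, at the price of possibly reading stale data
    SELECT * FROM t1;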
The test case deliberately crashes the server. If this crash happens in the
middle of a page write, InnoDB crash recovery recovers the page from the
doublewrite buffer, writing a message to the error log that is flagged as a
test failure by mysql-test-run. So add a suppression for this.
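A sketch of the suppression; the pattern here is hypothetical and must
match the actual recovery message:

    call mtr.add_suppression("InnoDB: .*doublewrite buffer");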
mtr internally does the following:
1. $default_vardir=....
2. $opt_vardir=$default_vardir unless $opt_vardir;
3. $opt_vardir=realpath $opt_vardir unless IS_WINDOWS
4. if ($opt_vardir eq $default_vardir) { .... use /dev/shm ... }
thus we have to realpath $default_vardir too, otherwise
the comparison in step 4 might be broken
Using a boolean flag for 'there is a RESET MASTER in progress' doesn't
work for multiple concurrent RESET MASTER statements: the flag is cleared
when the first statement finishes, even if others are still running.
Changed to a counter.
Fix MDL to report an error when a wait was killed, but preserve
the old documented behavior of GET_LOCK() where killing it is not an error.
Also remove race conditions in the main.create_or_replace test.
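A sketch of the preserved GET_LOCK() behavior, with a hypothetical lock
name:

    --connection con1
    SELECT GET_LOCK('mdev_lock', 0);   # returns 1, lock acquired
    --connection con2
    # this blocks; if the waiting query is killed, GET_LOCK() returns
    # NULL rather than reporting an error, while a killed MDL wait now
    # reports ER_QUERY_INTERRUPTED
    SELECT GET_LOCK('mdev_lock', 3600);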
This patch ports the work that Facebook has performed
to make innochecksum handle compressed tables.
The basic idea is to use the actual InnoDB code to perform
checksum verification rather than duplicating it in innochecksum.cc.
To make this work, the InnoDB code has been annotated with
lots of #ifndef UNIV_INNOCHECKSUM guards so that it can be
compiled outside of storage/innobase.
A new test case is also added that verifies that innochecksum
works on both compressed and non-compressed tables (a sketch of
the idea follows below).
Merged from commit fabc79d2ea976c4ff5b79bfe913e6bc03ef69d42
from https://code.google.com/p/google-mysql/
The actual steps to produce this patch were:
* take innochecksum from 5.6.14
* apply the InnoDB changes from the Facebook patches needed to make
  innochecksum compile
* apply the innochecksum changes from the Facebook patches
* add a handcrafted test case
The referenced Facebook patches used are:
91e25120e7847fe76ea51135628a5a4dbf7c240c
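A sketch of the test idea, assuming mtr exports the tool path as
$INNOCHECKSUM and provides the shutdown/start include files:

    CREATE TABLE t1 (a INT) ENGINE=InnoDB
      ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
    INSERT INTO t1 VALUES (1),(2),(3);
    let $MYSQLD_DATADIR= `SELECT @@datadir`;
    # the server must be down so the data file is quiescent
    --source include/shutdown_mysqld.inc
    --exec $INNOCHECKSUM $MYSQLD_DATADIR/test/t1.ibd
    --source include/start_mysqld.inc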
mysql-test/suite/maria/insert_select.result:
Added test case
mysql-test/suite/maria/insert_select.test:
Added test case
mysys/thr_lock.c:
Ensure we don't allow concurrent_insert when a read_no_write lock is in use
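A hypothetical repro sketch: INSERT ... SELECT from the same table takes
a read_no_write (TL_READ_NO_INSERT) lock, so a concurrent INSERT from
another connection must wait instead of proceeding as a concurrent
insert:

    CREATE TABLE t1 (a INT) ENGINE=Aria;
    INSERT INTO t1 VALUES (1);
    --connection con1
    --send INSERT INTO t1 SELECT a+1 FROM t1
    --connection con2
    # before the fix this could slip in as a concurrent insert and
    # interleave with the still-running INSERT ... SELECT
    INSERT INTO t1 VALUES (100);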
- The code that tested whether
WHERE expr=value AND expr=const
can be rewritten to:
WHERE const=value AND expr=const
was incomplete in the case of STRING_RESULT.
- Moved the test into a new function to reduce duplicated code.
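An illustration with a hypothetical table; for STRING_RESULT the
substitution is only valid when the collations involved agree:

    CREATE TABLE t1 (a VARCHAR(10), b VARCHAR(10));
    # the optimizer may substitute the constant for a in the first
    # comparison:
    SELECT * FROM t1 WHERE a=b AND a='x';
    # effectively:
    SELECT * FROM t1 WHERE 'x'=b AND a='x';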
FULLTEXT indexes do not support index-first lookups. Calling
ha_index_first() on one passes a garbage parameter, overwriting random
data and corrupting the table->field array. Subsequently, when the
field array is accessed, a segfault occurs.
By not collecting index statistics for FULLTEXT indexes, the problem is
resolved.
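A hypothetical repro sketch, assuming the crash was hit while collecting
engine-independent statistics:

    CREATE TABLE t1 (a TEXT, FULLTEXT KEY (a)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES ('some words'), ('more words');
    # walking the FULLTEXT index with ha_index_first() used to corrupt
    # table->field; statistics for that index are now skipped
    ANALYZE TABLE t1 PERSISTENT FOR ALL;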
Fixing wrong asymmetric code in Arg_comparator::set_cmp_func().
It existed for a long time, but showed up in 10.0.14 after the fix
for "MDEV-6666 Malformed result for CONCAT(utf8_column, binary_string)".
MDEV-7254 Assigned expression is evaluated twice when updating column TIMESTAMP NOT NULL
The test type_timestamp failed depending on the build machine time zone.
Setting a fixed time zone for the test.
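The usual idiom, with a hypothetical offset:

    # make TIMESTAMP <-> string conversions independent of the build
    # machine's time zone
    SET TIME_ZONE='+03:00';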
MDEV-7254 Assigned expression is evaluated twice when updating
column TIMESTAMP NOT NULL
Analysis: The problem was that the value->is_null() function was
called even when the user had explicitly set the value for the
timestamp field. Calling this function had the side effect that the
expression was evaluated twice.
Fix (by Sergei Golubchik): check value->null_value instead.
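A hypothetical repro sketch, using a side effect to count evaluations:

    CREATE TABLE t1 (ts TIMESTAMP NOT NULL);
    INSERT INTO t1 VALUES (NOW());
    SET @cnt= 0;
    UPDATE t1 SET ts= FROM_UNIXTIME(1000000000 + (@cnt:=@cnt+1));
    # before the fix this showed 2: value->is_null() evaluated the
    # expression once and storing the value evaluated it again
    SELECT @cnt;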
The structure of the table created by the test to archive mysql.slow_log
data didn't match the structure of mysql.slow_log. The failure only
appeared if the slow_log was not empty, which was very rare.
Updated the structure of the table.
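One way to keep the structures in sync instead of spelling the columns
out by hand would be (hypothetical table name, not what the test does
verbatim):

    CREATE TABLE slow_log_archive LIKE mysql.slow_log;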
The problem was a too low timeout for slave reconnect. It was set to
9 seconds (10 retries with 1 second in-between). This is occasionally
too short on some Buildbot hosts when the test crashes and restarts
the master while the slave IO thread is running.
Fix by increasing --master-retry-count for this test.
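In mtr this is done with a per-test option file, e.g. a
<testname>-slave.opt file containing (hypothetical value):

    --master-retry-count=100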
The test case injects a DBUG point that crashes the server during
replication, then does a START SLAVE. We need to use --error 0,2006,2013
on the START SLAVE so that the test does not fail if the server manages
to crash before the START SLAVE returns to the client.
Fixes a failure seen in Buildbot.
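The resulting pattern (0 = success, 2006 = CR_SERVER_GONE_ERROR,
2013 = CR_SERVER_LOST):

    # accept success as well as 'server gone away' / 'lost connection',
    # depending on when the injected crash hits
    --error 0,2006,2013
    START SLAVE;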
The test causes an OS error printout, and we need to suppress this
error message in the tests. Additionally, the test may produce
different error codes on different operating systems.