mtr internally does the following:
1. $default_vardir=....
2. $opt_vardir=$default_vardir unless $opt_vardir;
3. $opt_vardir=realpath $opt_vardir unless IS_WINDOWS
4. if ($opt_vardir eq $default_vardir) { .... use /dev/shm ... }
Thus we have to realpath $default_vardir too, otherwise
the comparison logic might be broken.
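The underlying rule, sketched below in C++ with std::filesystem rather
than mtr's actual Perl (the function name is illustrative): canonicalize
both sides of the comparison, or a symlink in the default path makes the
equality test fail even when both names refer to the same directory.

  #include <filesystem>

  namespace fs = std::filesystem;

  // Illustrative only: mtr itself is Perl. If only one side is
  // canonicalized, a symlinked vardir makes the comparison fail even
  // though both paths name the same directory.
  bool is_default_vardir(const fs::path& opt_vardir,
                         const fs::path& default_vardir)
  {
      // weakly_canonical resolves symlinks without requiring that the
      // trailing components of the path exist yet
      return fs::weakly_canonical(opt_vardir) ==
             fs::weakly_canonical(default_vardir);
  }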
Using a boolean flag for 'there is a RESET MASTER in progress' doesn't
work for multiple concurrent RESET MASTER statements: whichever statement
finishes first clears the flag while the others are still in progress.
Changed to a counter.
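A minimal sketch of the change (illustrative C++, not the actual server
code; the class name is hypothetical): each RESET MASTER increments the
counter on entry and decrements it on exit, so 'in progress' means
'counter is non-zero' instead of a flag that one statement can clear
while another is still running.

  #include <mutex>

  // Hypothetical sketch: with a bool, whichever RESET MASTER finishes
  // first would clear the 'in progress' state even though others are
  // still running; a counter stays non-zero until all of them are done.
  class ResetMasterTracker {
    std::mutex mu_;
    unsigned in_progress_ = 0;   // was: bool
  public:
    void enter() { std::lock_guard<std::mutex> l(mu_); ++in_progress_; }
    void leave() { std::lock_guard<std::mutex> l(mu_); --in_progress_; }
    bool active() {
      std::lock_guard<std::mutex> l(mu_);
      return in_progress_ != 0;
    }
  };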
Fix MDL to report an error when a wait is killed, but preserve the old
documented behavior of GET_LOCK(), where being killed is not an error
(see the sketch below).
Also remove race conditions in the main.create_or_replace test.
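A hedged sketch of the layering (all names hypothetical, not the actual
MDL API): the low-level wait now reports being killed as an error, and
the GET_LOCK() implementation maps that one case back to its documented
'no error, just no lock' result.

  #include <optional>

  // Hypothetical layering, not the actual MDL API.
  enum class WaitResult { kAcquired, kTimeout, kKilled };

  // The low-level MDL wait now reports a kill as an error...
  WaitResult mdl_wait() { return WaitResult::kAcquired; }  // stub

  // ...but GET_LOCK(name, timeout) keeps its documented behavior:
  // it returns 1, 0, or NULL (modelled here as optional<int>), and a
  // kill is NULL rather than an error.
  std::optional<int> sql_get_lock()
  {
    switch (mdl_wait()) {
      case WaitResult::kAcquired: return 1;
      case WaitResult::kTimeout:  return 0;
      case WaitResult::kKilled:   return std::nullopt;
    }
    return std::nullopt;
  }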
This patch ports the work that Facebook has performed
to make innochecksum handle compressed tables.
The basic idea is to use the actual InnoDB code to perform
checksum verification rather than duplicating it in innochecksum.cc.
To make this work, the InnoDB code has been annotated with
#ifndef UNIV_INNOCHECKSUM guards so that it can be
compiled outside of storage/innobase (see the sketch after
this paragraph).
A new test case is also added that verifies that innochecksum
works on both compressed and uncompressed tables.
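A minimal sketch of the guard pattern described above; the functions and
the checksum are hypothetical stand-ins, not actual InnoDB source.
Server-only pieces are fenced off with the macro so that the checksum
logic also compiles in the standalone innochecksum binary:

  #include <cstddef>
  #include <cstdint>

  #ifndef UNIV_INNOCHECKSUM
  /* In the real tree this would pull in server-only headers and update
     server-side statistics; omitted when building innochecksum. */
  static void note_page_checked() {}
  #endif

  /* Toy checksum standing in for the real page checksum routines. */
  static std::uint32_t page_checksum(const std::uint8_t* page,
                                     std::size_t size)
  {
      std::uint32_t sum = 0;
      for (std::size_t i = 0; i < size; i++)
          sum = sum * 31 + page[i];
      return sum;
  }

  bool page_checksum_ok(const std::uint8_t* page, std::size_t size,
                        std::uint32_t stored)
  {
  #ifndef UNIV_INNOCHECKSUM
      note_page_checked();              /* server build only */
  #endif
      return page_checksum(page, size) == stored;
  }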
Merged from commit fabc79d2ea976c4ff5b79bfe913e6bc03ef69d42
from https://code.google.com/p/google-mysql/
The actual steps to produce this patch were:
1. take innochecksum from 5.6.14
2. apply the changes to InnoDB from the Facebook patches that are
   needed to make innochecksum compile
3. apply the changes to innochecksum from the Facebook patches
4. add a handcrafted test case
The referenced Facebook patches used are:
91e25120e7847fe76ea51135628a5a4dbf7c240c
mysql-test/suite/maria/insert_select.result:
Added test case
mysql-test/suite/maria/insert_select.test:
Added test case
mysys/thr_lock.c:
Ensure we don't allow concurrent_insert when a TL_READ_NO_INSERT lock is in use
Stage "Filling schema table" is now properly shown in 'show processlist'
mysys/mf_keycache.c:
Simple cleanup with more comments
sql/lock.cc:
Return to original stage after mysql_lock_tables
Made 'Table lock' a true stage
sql/sql_show.cc:
Restore the original stage after get_schema_tables_result (the
save/restore pattern is sketched below)
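The save/restore pattern used in both files, as an RAII sketch
(hypothetical types, not the server code): remember the stage currently
shown in SHOW PROCESSLIST, switch to a nested stage for the duration of
an operation, and restore the original stage on exit.

  #include <string>
  #include <utility>

  struct ThreadState {
    std::string stage;   // what SHOW PROCESSLIST would display
  };

  class StageGuard {
    ThreadState& thd_;
    std::string saved_;
  public:
    StageGuard(ThreadState& thd, std::string stage)
      : thd_(thd), saved_(thd.stage) { thd_.stage = std::move(stage); }
    ~StageGuard() { thd_.stage = saved_; }
  };

  void lock_tables(ThreadState& thd)
  {
    StageGuard guard(thd, "Table lock");  // shown while waiting
    // ... acquire table locks ...
  }                                       // original stage restored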
- The code that tested whether
  WHERE expr=value AND expr=const
  can be rewritten to:
  WHERE const=value AND expr=const
  was incomplete in the case of STRING_RESULT
  (for example, WHERE a=b AND a='x' rewritten to WHERE 'x'=b AND a='x').
- Moved the test into a new function to reduce duplicated code.
FULLTEXT indexes do not permit index-first lookups. Calling
ha_index_first() with a garbage parameter overwrites random data and
corrupts the table->field array; when the field array is subsequently
accessed, a segfault occurs.
By not collecting index statistics for FULLTEXT indexes, the problem is
resolved.
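The shape of the fix, sketched with toy types (HA_FULLTEXT is a real key
flag in the server; the structures and the flag's value here are
stand-ins): skip FULLTEXT keys when walking the table's keys to collect
statistics.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Toy stand-ins for the server's KEY structures.
  const std::uint32_t HA_FULLTEXT = 1u << 7;   // value illustrative

  struct Key { std::uint32_t flags; };

  void collect_index_statistics(const std::vector<Key>& keys)
  {
      for (std::size_t i = 0; i < keys.size(); i++) {
          if (keys[i].flags & HA_FULLTEXT)
              continue;   // no ha_index_first() support; skip entirely
          // ... gather statistics for keys[i] (omitted) ...
      }
  }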
Fixing wrongly asymmetric code in Arg_comparator::set_cmp_func().
It existed for a long time, but showed up in 10.0.14 after the fix
for "MDEV-6666 Malformed result for CONCAT(utf8_column, binary_string)".
MDEV-7254 Assigned expression is evaluated twice when updating column TIMESTAMP NOT NULL
The test type_timestamp failed depending on the build machine's time zone.
Setting a fixed time zone for the test.
Sep_char default is now ',', as when discovery is not used.
If data_charset is UTF8, column names retrieved from the header
are no longer converted to UTF8, since they already are (MDEV-7421).
modified:
storage/connect/ha_connect.cc
MDEV-7254 Assigned expression is evaluated twice when updating column TIMESTAMP NOT NULL
Analysis: the problem was that the value->is_null() function was called
even when the user had explicitly set the value for the timestamp
field. Calling this function had the side effect of evaluating the
expression a second time.
Fix (by Sergei Golubchik): check value->null_value instead.
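An illustrative mock of the distinction (hypothetical class, not the
server's Item hierarchy): is_null() has to evaluate the expression to
answer, while null_value is just the flag left behind by an evaluation
that already happened.

  #include <cstdio>

  // Hypothetical mock: calling is_null() after the value was already
  // computed evaluates the expression a second time; null_value is the
  // flag the first evaluation left behind.
  struct Expr {
    bool null_value = false;   // set as a side effect of evaluation
    int  evaluations = 0;

    int val_int() {            // evaluate; may have side effects
      ++evaluations;
      null_value = false;      // pretend the result is non-NULL
      return 42;
    }
    bool is_null() {           // evaluates again just to test for NULL
      val_int();
      return null_value;
    }
  };

  int main() {
    Expr e;
    int v = e.val_int();       // the one intended evaluation
    // Buggy: if (!e.is_null()) ...   would evaluate a second time.
    if (!e.null_value)         // fixed: consult the existing flag
      std::printf("value=%d evaluations=%d\n", v, e.evaluations);
  }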
The structure of the table created by the test to archive mysql.slow_log
data didn't match the structure of mysql.slow_log. The failure only
appeared when slow_log was not empty, which was very rare.
Updated the structure of the table.
The problem was a too-low timeout for slave reconnect. It was set to 9
seconds (10 retries with 1 second in between). This is occasionally too
short on some Buildbot hosts when the test crashes and restarts the
master while the slave IO thread is running.
Fix by increasing --master-retry-count for this test.
The test case injects a DBUG point that will crash the server during
replication, then does a START SLAVE. We need to use --error 0,2006,2013
(no error, CR_SERVER_GONE_ERROR, CR_SERVER_LOST) on the START SLAVE, so
that the test will not fail if the server has time to crash before the
START SLAVE returns to the client.
Fixes a failure seen in Buildbot.