When an 'insert delayed' operation is performed, the time_zone info is not kept with the row info.
So when the actual insert happens some time later, the time_zone is not written into the binlog.
This causes wrong results for TIMESTAMP columns on the slave.
Our solution is to add the time_zone info to the delayed row and
restore the time_zone from the row info when that row is executed later by another thread.
This way we write the correct time_zone info into the binlog and get the correct result on the slave.
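A minimal, self-contained sketch of the idea (the DelayedRow type and the queue below
are hypothetical illustrations, not the actual sql_insert.cc structures): the client
thread stores its session time zone together with each queued row, and the delayed-insert
thread restores that time zone before executing the row, so the binlog event carries the
right zone.

  #include <iostream>
  #include <queue>
  #include <string>

  // Hypothetical structure for illustration; not the real MySQL class.
  struct DelayedRow {
    std::string payload;     // the row to insert later
    std::string time_zone;   // session time zone captured from the client
  };

  int main() {
    std::queue<DelayedRow> delayed_rows;

    // Client thread: remember the session time zone with the row.
    delayed_rows.push({"row data", "+03:00"});

    // Delayed-insert thread: restore the stored time zone before executing
    // the row, so TIMESTAMP values and the binlog event use the right zone.
    while (!delayed_rows.empty()) {
      DelayedRow row = delayed_rows.front();
      delayed_rows.pop();
      std::cout << "executing row with time_zone=" << row.time_zone << "\n";
    }
    return 0;
  }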
mysql-test/r/rpl_timezone.result:
Test result
mysql-test/t/rpl_timezone.test:
Add test for bug#41719
sql/sql_insert.cc:
Add time_zone info to the delayed row and restore the time_zone when the row is executed later by another thread.
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements of this test
in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- Move the subtests for the bugs 38499 and 38691 into separate scripts
- Runtime under excessive parallel I/O load after applying the fix:
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498
Don't throw an error after checking the first and the second arguments.
Continue with checking the third and higher arguments and, if one of
them is stronger according to the coercibility rules,
set that argument's collation as the result collation.
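A standalone sketch of the aggregation rule (the Collation type and the coercibility
values are simplified illustrations, not the server's DTCollation code): instead of
failing as soon as the first two arguments conflict, keep scanning the remaining
arguments and let a strictly stronger one decide the result collation.

  #include <iostream>
  #include <optional>
  #include <string>
  #include <vector>

  // Lower value = stronger according to the coercibility rules (simplified).
  enum Coercibility { EXPLICIT = 0, NONE = 1, IMPLICIT = 2, COERCIBLE = 3 };

  struct Collation {
    std::string name;
    Coercibility coer;
  };

  // Aggregate the collations of all arguments: a conflict between two
  // equally strong, different collations is only an error if no later
  // argument is strictly stronger.
  std::optional<Collation> aggregate(const std::vector<Collation> &args) {
    Collation result = args[0];
    bool conflict = false;
    for (size_t i = 1; i < args.size(); ++i) {
      const Collation &c = args[i];
      if (c.coer < result.coer) {        // stronger argument wins
        result = c;
        conflict = false;
      } else if (c.coer == result.coer && c.name != result.name) {
        conflict = true;                 // remember, but keep scanning
      }
    }
    if (conflict) return std::nullopt;   // unresolved conflict -> error
    return result;
  }

  int main() {
    std::vector<Collation> args = {
        {"latin1_swedish_ci", IMPLICIT},
        {"latin1_german1_ci", IMPLICIT},  // conflicts with the first ...
        {"latin1_bin", EXPLICIT}};        // ... but this one is stronger
    auto res = aggregate(args);
    std::cout << (res ? res->name : "error: ambiguous collation") << "\n";
    return 0;
  }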
mysql-test/r/ctype_collate.result:
test result
mysql-test/t/ctype_collate.test:
test case
sql/item.cc:
Don't throw an error after checking the first and the second arguments.
Continue with checking the third and higher arguments and, if one of
them is stronger according to the coercibility rules,
set that argument's collation as the result collation.
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
this option.
mysqldump tries to re-insert the data into the federated
table, which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated in the bug page, FEDERATED tables can be considered
similar to the MERGE engine, given their "view only" nature.
Fixed by adding the FEDERATED engine to the exception
list so that its data is ignored.
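A small, hedged sketch of the rule (the function below is an illustration, not
mysqldump's real check_if_ignore_table() implementation): tables whose data lives
elsewhere, such as MERGE and now FEDERATED, get their definition dumped but their
row data skipped.

  #include <iostream>
  #include <set>
  #include <string>

  // Engines whose row data should not be dumped (illustrative list).
  static bool ignore_table_data(const std::string &engine) {
    static const std::set<std::string> no_data = {"MRG_MyISAM", "MERGE",
                                                  "FEDERATED"};
    return no_data.count(engine) != 0;
  }

  int main() {
    for (const std::string engine : {"InnoDB", "FEDERATED"})
      std::cout << engine << ": "
                << (ignore_table_data(engine) ? "skip data" : "dump data")
                << "\n";
    return 0;
  }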
client/mysqldump.c:
Fixed check_if_ignore_table() to ignore the FEDERATED engine
when dumping table data.
mysql-test/r/federated.result:
Result file for BUG#21360
mysql-test/t/federated.test:
Testcase for BUG#21360
The mysql.procs_priv table itself does not get replicated.
Inserting a routine privilege record into the mysql.procs_priv table
is triggered by CREATE FUNCTION/PROCEDURE statements
according to the current user's privileges.
Because the current user of the SQL thread has GLOBAL_ACL,
no mysql.procs_priv privilege check is needed
when creating/altering/executing routines,
and consequently such a GLOBAL_ACL user
does not insert a routine privilege record into
mysql.procs_priv when creating a routine.
Fixed by switching the current user of the SQL thread to the definer user if
the definer user exists on the slave.
That populates procs_priv; otherwise the SQL thread
user is kept and procs_priv remains unchanged.
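A hypothetical sketch of the decision (the User type and effective_user() helper are
illustrations, not the real sql_parse.cc code): when the slave SQL thread applies a
CREATE PROCEDURE/FUNCTION, act as the definer if that user exists on the slave,
otherwise keep the SQL thread user.

  #include <iostream>
  #include <optional>
  #include <string>

  struct User {
    std::string name;
  };

  // Pick the user whose privileges drive the mysql.procs_priv insert.
  User effective_user(const User &sql_thread_user,
                      const std::optional<User> &definer_on_slave) {
    // Definer exists on the slave: act as that user, so the routine
    // privilege record is inserted just as it would be on the master.
    if (definer_on_slave) return *definer_on_slave;
    // Otherwise keep the SQL thread user; procs_priv stays unchanged.
    return sql_thread_user;
  }

  int main() {
    User sql_thread{"slave_sql_thread"};
    std::optional<User> definer = User{"app_user"};
    std::cout << effective_user(sql_thread, definer).name << "\n";       // app_user
    std::cout << effective_user(sql_thread, std::nullopt).name << "\n";  // slave_sql_thread
    return 0;
  }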
mysql-test/suite/rpl/r/rpl_do_grant.result:
Test result for routine privileges when the definer user does or does not exist on the slave
mysql-test/suite/rpl/t/rpl_do_grant.test:
Test case for routine privileges when the definer user does or does not exist on the slave
sql/sql_parse.cc:
Switch the current user of the SQL thread to the definer user, if the definer user
exists on the slave, when checking whether a routine privilege record needs to be
inserted into the mysql.procs_priv table.
Compiling with debug and assigning an invalid directory to --slave-load-tmpdir
was crashing the slave due to the following assertion: DBUG_ASSERT(! is_set() ||
can_overwrite_status). This assertion assumes that a thread can change its
state only once (i.e. ok, error, etc.) before aborting, cleaning up/resuming or completing
its execution, unless the overwrite flag (i.e. can_overwrite_status) is true.
Append_block_log_event::do_apply_event, which is responsible for creating
temporary file(s), was not cleaning the thread state. Thus a failure while
trying to create a file in an invalid temporary directory was causing the crash.
To fix the problem, we check that the temporary directory is valid before starting
the SQL thread and reset the thread state before creating a file in
Append_block_log_event::do_apply_event.
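A self-contained sketch of the two-part fix (the ThreadStatus type is a hypothetical
stand-in for the server's status object, not the real code): validate the temporary
directory before starting the SQL thread, and reset the previous status before
attempting to create the file so a failure can be reported as an error instead of
tripping the assertion.

  #include <cassert>
  #include <filesystem>
  #include <iostream>
  #include <string>

  // Models the invariant behind the assertion: a status may only be set
  // once unless can_overwrite_status is true.
  struct ThreadStatus {
    bool is_set = false;
    bool can_overwrite_status = false;
    void set_error() {
      assert(!is_set || can_overwrite_status);  // the assertion that fired
      is_set = true;
    }
    void reset() { is_set = false; }
  };

  int main() {
    // Part 1: refuse to start the SQL thread with an unusable tmpdir.
    std::string tmpdir = "/no/such/dir";
    if (!std::filesystem::is_directory(tmpdir)) {
      std::cout << "refusing to start SQL thread: bad --slave-load-tmpdir\n";
      return 1;
    }

    // Part 2: reset any previously set status before creating the
    // temporary file, so a later failure sets an error cleanly.
    ThreadStatus status;
    status.reset();
    status.set_error();
    return 0;
  }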
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. This does
not actually make sense, as those variables are not
related. The input buffer size limit is now always set to 1
MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
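A minimal sketch of the new contract (read_line() below is a hypothetical stand-in,
not the real batch_readline() signature): the reader reports whether the returned
chunk is only part of a longer line, and the caller appends the newline only when
the line was complete.

  #include <cstdio>
  #include <iostream>
  #include <string>

  // Read at most max_len bytes of one input line; *truncated tells the
  // caller whether the line continues beyond what was returned.
  static std::string read_line(FILE *in, size_t max_len, bool *truncated) {
    std::string line;
    *truncated = false;
    int c;
    while (line.size() < max_len && (c = fgetc(in)) != EOF && c != '\n')
      line.push_back(static_cast<char>(c));
    if (line.size() == max_len) {
      // The line is truncated only if more characters actually follow.
      c = fgetc(in);
      if (c != EOF && c != '\n') {
        *truncated = true;
        ungetc(c, in);   // leave the rest for the next call
      }
    }
    return line;
  }

  int main() {
    bool truncated;
    std::string chunk = read_line(stdin, 1024 * 1024, &truncated);
    // Only re-add the newline when the whole line fit into the buffer;
    // a truncated chunk is continued by the next call.
    if (!truncated) chunk.push_back('\n');
    std::cout << chunk;
    return 0;
  }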
client/my_readline.h:
Changed the interface of batch_readline().
client/mysql.cc:
Honor the truncated flag returned by batch_readline() and do
not append the newline character if it was set. Since we can't
change the interfaces for readline()/fgets() used in
interactive mode, always assume the returned string was not
truncated in that case. In addition, always set the batch_readline()
internal buffer to 1 MB, independently of the client's
max_allowed_packet.
client/readline.cc:
Added the 'truncated' argument to batch_readline() to signal
a truncated string to the caller.
Fixed fill_buffer() to set the EOF flag correctly.
Fixed checks in intern_read_line() to not let the internal
buffer grow past the specified limit.
mysql-test/r/mysql.result:
Added a test case for bug #41486.
mysql-test/t/mysql.test:
Added a test case for bug #41486.
Any statement reading a corrupt archive data file
(CHECK/REPAIR/SELECT/UPDATE/DELETE) may cause an assertion
failure in debug builds. This assertion has been removed
and an error is returned instead.
Also fixed CHECK/REPAIR returning a vague error message
when it meets corruption in the archive data file, by
returning a proper error code.
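A short sketch of the behavioral change (the error code and function below are
illustrative, not the real ha_archive.cc interface): corruption is detected and
returned as an error code the statement can report, instead of hitting an assertion
that aborts a debug build.

  #include <iostream>

  enum { ROW_OK = 0, ROW_CORRUPT = 1 };   // illustrative error codes

  // Before: an assertion aborted debug builds on a corrupt record.
  // After: the corruption is reported to the caller as an error.
  static int unpack_row_sketch(bool record_is_valid) {
    if (!record_is_valid) return ROW_CORRUPT;
    return ROW_OK;
  }

  int main() {
    if (unpack_row_sketch(false) == ROW_CORRUPT)
      std::cout << "CHECK/REPAIR can report: archive data file is corrupt\n";
    return 0;
  }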
mysql-test/r/archive.result:
A test case for BUG#32880
mysql-test/std_data/bug32880.ARN:
Corrupted archive table to test the CHECK and REPAIR TABLE operations
mysql-test/std_data/bug32880.ARZ:
Corrupted archive table to test the CHECK and REPAIR TABLE operations
mysql-test/std_data/bug32880.frm:
Corrupted archive table to test the CHECK and REPAIR TABLE operations
mysql-test/t/archive.test:
A test case for BUG#32880
storage/archive/ha_archive.cc:
Fixed unpack_row() to return an error instead of throwing an assertion,
and also fixed repair() to return a better error when the repair table
operation fails on a corrupted archive table.
become negative
- merged the fix to 5.1
- extended to cover I_S.PROCESSLIST.TIME
- Changed the column type of I_S.PROCESSLIST.TIME from LONGLONG UNSIGNED
  to LONG (to match the SHOW PROCESSLIST type)
- Added a test case
"Release_lock("hello")" is now also successful when delivering NULL, replaced two sleeps by wait_condition. The last two "sleep 1" have not been replaced as all tried wait conditions leaded to nondeterministic results, especially to succeeding concurrent updates. To replace the sleeps there should be some time planned (or internal knowledge of the server may help).