causes future shutdown hang
InnoDB would hang on shutdown if any XA transactions existed in the
system in the PREPARED state. This was masked by the fact that
MySQL would roll back any PREPARED transaction on shutdown, in the
spirit of Bug #12161 (Xa recovery and client disconnection).
[mysql-test-run] do_shutdown_server: Interpret --shutdown_server 0 as
a request to kill the server immediately without initiating a
shutdown procedure.
xid_cache_insert(): Initialize XID_STATE::rm_error in order to avoid a
bogus error message on XA ROLLBACK of a recovered PREPARED transaction.
innobase_commit_by_xid(), innobase_rollback_by_xid(): Free the InnoDB
transaction object after rolling back a PREPARED transaction.
trx_get_trx_by_xid(): Only consider transactions whose
trx->is_prepared flag is set. The MySQL layer seems to prevent
attempts to roll back connected transactions that are in the PREPARED
state from another connection, but it is better to play it safe. The
is_prepared flag was introduced in the InnoDB Plugin.
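The lookup filter can be pictured with a minimal standalone sketch; the
structures and names below are simplified stand-ins, not the real InnoDB API:

    #include <cstddef>
    #include <cstring>
    #include <list>

    /* Simplified stand-ins for the real InnoDB structures. */
    struct xid_sketch_t {
      long gtrid_length;
      long bqual_length;
      char data[128];
    };

    struct trx_sketch_t {
      bool is_prepared;
      xid_sketch_t xid;
    };

    static bool xid_equal(const xid_sketch_t& a, const xid_sketch_t& b) {
      return a.gtrid_length == b.gtrid_length
          && a.bqual_length == b.bqual_length
          && 0 == std::memcmp(a.data, b.data,
                              static_cast<std::size_t>(a.gtrid_length
                                                       + a.bqual_length));
    }

    /* Return the transaction matching 'xid', but only if it is PREPARED;
       transactions still owned by a connected session are never returned. */
    trx_sketch_t* find_prepared_trx_by_xid(std::list<trx_sketch_t>& trx_list,
                                           const xid_sketch_t& xid) {
      for (trx_sketch_t& trx : trx_list) {
        if (trx.is_prepared && xid_equal(trx.xid, xid)) {
          return &trx;
        }
      }
      return nullptr;
    }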
trx_n_prepared: A new counter, counting the number of InnoDB
transactions in the PREPARED state.
logs_empty_and_mark_files_at_shutdown(): On shutdown, allow
trx_n_prepared transactions to exist in the system.
trx_undo_free_prepared(), trx_free_prepared(): New functions, to free
the memory objects of PREPARED transactions on shutdown. This is not
needed in the built-in InnoDB, because it would collect all allocated
memory on shutdown. The InnoDB Plugin needs this because of
innodb_use_sys_malloc.
trx_sys_close(): Invoke trx_free_prepared() on all remaining
transactions.
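Taken together, the shutdown-related changes amount to roughly the following
logic; this is an illustrative sketch with hypothetical names, not the actual
InnoDB code:

    #include <cstddef>
    #include <list>

    struct trx_obj_sketch {
      bool is_prepared;
      /* ... */
    };

    struct trx_sys_sketch {
      std::list<trx_obj_sketch*> trx_list;
      std::size_t n_prepared = 0;   /* counterpart of trx_n_prepared */
    };

    /* logs_empty_and_mark_files_at_shutdown() analogue: shutdown may proceed
       once every transaction still in the system is in the PREPARED state. */
    bool shutdown_may_proceed(const trx_sys_sketch& sys) {
      return sys.trx_list.size() == sys.n_prepared;
    }

    /* trx_sys_close() analogue: free the memory objects of the PREPARED
       transactions that are intentionally left behind (needed because, with
       innodb_use_sys_malloc, they are not reclaimed wholesale on shutdown). */
    void close_trx_sys(trx_sys_sketch& sys) {
      for (trx_obj_sketch* trx : sys.trx_list) {
        if (trx->is_prepared) {
          delete trx;               /* stands in for trx_free_prepared() */
        }
      }
      sys.trx_list.clear();
    }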
Bug#59410 read uncommitted: unlock row could not find a 3 mode lock
on the record
This bug is present only in 5.6, but I am adding the test case to earlier
versions to ensure that it never appears in earlier versions either.
ARE NOT BEING HONORED
max_allowed_packet works in conjunction with net_buffer_length.
max_allowed_packet is an upper bound of net_buffer_length, so it makes
no sense to set net_buffer_length higher than max_allowed_packet.
Added a warning (using ER_UNKNOWN_ERROR and a specific message)
when this is done, both in the log at startup and when setting either
the max_allowed_packet or the net_buffer_length variable.
Added a test case.
Fixed several tests that broke the above rule.
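A minimal sketch of the relationship check described above; the helper name is
hypothetical and plain stdio stands in for the server's warning machinery:

    #include <cstdio>

    /* Warn when net_buffer_length is configured above its upper bound,
       max_allowed_packet. In the server the warning is raised via
       ER_UNKNOWN_ERROR with a specific message; fprintf stands in here. */
    void check_packet_sizes(unsigned long max_allowed_packet,
                            unsigned long net_buffer_length) {
      if (net_buffer_length > max_allowed_packet) {
        std::fprintf(stderr,
                     "Warning: net_buffer_length (%lu) is greater than "
                     "max_allowed_packet (%lu)\n",
                     net_buffer_length, max_allowed_packet);
      }
    }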
The slave was not able to find the correct row in the InnoDB
table, because the row fetched from the InnoDB table would not
match the before image. This happened because the (don't care)
bytes in the NULLed fields would change once the row was stored
in the storage engine (from zero to the default value). This
would make the bulk memory comparison (using memcmp) fail.
We fix this by taking a preventive measure and avoiding memcmp
for tables that contain nullable fields. Therefore, we protect
the slave search routine from engines that return arbitrary
values for don't care bytes (in the nulled fields). Instead, the
slave thread will only check null_bits and those fields that are
not set to NULL when comparing the before image against the
storage engine row.
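The comparison strategy can be sketched as follows; the row representation is
a simplified stand-in, not the actual replication code:

    #include <cstddef>
    #include <cstring>
    #include <vector>

    /* Simplified stand-in for a row image: a null bitmap plus field values. */
    struct row_image_sketch {
      std::vector<bool> is_null;                      /* null_bits */
      std::vector<std::vector<unsigned char>> fields;
    };

    /* Compare the before image against the row fetched from the engine.
       Instead of one memcmp over the whole record, compare the null bitmap
       and only the fields that are not NULL, so that whatever bytes the
       engine stored in NULLed fields cannot cause a spurious mismatch. */
    bool rows_match(const row_image_sketch& before_image,
                    const row_image_sketch& engine_row) {
      if (before_image.is_null != engine_row.is_null) {
        return false;
      }
      for (std::size_t i = 0; i < before_image.fields.size(); i++) {
        if (before_image.is_null[i]) {
          continue;                                   /* don't care bytes: skip */
        }
        const auto& a = before_image.fields[i];
        const auto& b = engine_row.fields[i];
        if (a.size() != b.size()
            || std::memcmp(a.data(), b.data(), a.size()) != 0) {
          return false;
        }
      }
      return true;
    }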
There is a race between two threads: user thread and the dump
thread. The former sets a debug instruction that makes the latter wait
before processing an Xid event. There can be cases in which the dump
thread has not yet processed the previous Xid event, causing it to
wait one Xid event too soon, thus causing sync_slave_with_master never
to resume.
We fix this by moving the instructions that set the debug variable
to after the call to sync_slave_with_master.
rw_lock_create_func(): Initialize lock->writer_thread, so that Valgrind
will not complain even when Valgrind instrumentation is not enabled.
Flag lock->writer_thread uninitialized, so that Valgrind can complain
when it is used uninitialized.
rw_lock_set_writer_id_and_recursion_flag(): Revert the bogus Valgrind
instrumentation that was pushed in the first attempt to fix this bug.
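The intent of the two Valgrind-related steps can be sketched like this; the
lock structure is simplified, and the macro wrapper here is illustrative (the
real code uses InnoDB's own wrappers around the Valgrind client requests):

    #include <cstring>
    #include <pthread.h>

    #ifdef HAVE_VALGRIND
    # include <valgrind/memcheck.h>
    # define MARK_MEM_UNDEFINED(addr, len) VALGRIND_MAKE_MEM_UNDEFINED(addr, len)
    #else
    # define MARK_MEM_UNDEFINED(addr, len) do {} while (0)
    #endif

    /* Simplified stand-in for the InnoDB rw-lock. */
    struct rw_lock_sketch_t {
      pthread_t writer_thread;
      bool recursive;
    };

    void rw_lock_create_sketch(rw_lock_sketch_t* lock) {
      /* Give the field a defined value first, so that nothing reads truly
         uninitialized memory even when Valgrind instrumentation is absent. */
      std::memset(&lock->writer_thread, 0, sizeof lock->writer_thread);
      lock->recursive = false;

      /* Then tell Valgrind to treat it as uninitialized again, so that any
         use before a writer actually sets it is still reported. */
      MARK_MEM_UNDEFINED(&lock->writer_thread, sizeof lock->writer_thread);
    }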
privileges".
The first problem was that DROP USER didn't properly remove privileges
on stored functions from the in-memory structures. So, while his
connection was still around, the dropped user could still call stored
functions on which he had privileges before being dropped.
Even worse, if a new user with the same name was created, he would
inherit privileges on stored functions from the dropped user.
A similar thing happened with the old user name and function privileges
during RENAME USER.
This problem stemmed from the fact that the handle_grant_data() function,
which handled DROP/RENAME USER, didn't take any measures to update the
in-memory hash with information about function privileges after
updating them on disk.
This patch solves the problem by adding code that does just that.
The second problem was that RENAME USER didn't properly update in-memory
structures describing table-level privileges and privileges on stored
procedures. As a result, such privileges could have been lost after a rename
(i.e. not associated with the new name of the user) and inherited by a new
user with the same name as the old name of the original user.
This problem was caused by the code handling RENAME USER in
handle_grant_struct() which [sic!]:
a) tried to update the wrong (tables) hash when updating stored procedure
privileges for the new user name.
b) passed wrong arguments to the function performing the hash update and
didn't take into account the way in which such an update could have
changed the order of the hash elements.
This patch solves the problem by ensuring that a) the correct hash
is updated, b) the correct arguments are passed to the hash_update()
function, and c) we take into account possible changes in the order
of the hash elements.
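The hash maintenance issue can be illustrated with a small sketch, using a
standard container as a stand-in for the server's in-memory grant hashes:

    #include <string>
    #include <unordered_map>
    #include <vector>

    /* Stand-in for one in-memory privilege hash (e.g. the hash of
       stored-function grants), keyed by user name. */
    using grant_hash_sketch =
        std::unordered_multimap<std::string, std::string /* grant data */>;

    /* DROP USER: the user's grants must be removed from the in-memory hash
       as well, not only from the on-disk privilege tables. */
    void drop_user_grants(grant_hash_sketch& hash, const std::string& user) {
      hash.erase(user);
    }

    /* RENAME USER: re-key the user's grants in the correct hash. Erasing and
       re-inserting entries can change what an iterator would visit next, so
       collect the entries first instead of mutating the hash while walking it. */
    void rename_user_grants(grant_hash_sketch& hash,
                            const std::string& old_user,
                            const std::string& new_user) {
      std::vector<std::string> moved;
      auto range = hash.equal_range(old_user);
      for (auto it = range.first; it != range.second; ++it) {
        moved.push_back(it->second);
      }
      hash.erase(old_user);
      for (const std::string& grant : moved) {
        hash.emplace(new_user, grant);
      }
    }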
There is one part of the test case that needs to break
and re-establish the circular topology. For this the test
stops the slave threads on a couple of servers and restarts
them with START SLAVE. However, no check is done on the
status of the IO or SQL threads before proceeding with
the subsequent commands.
Because rpl_only_running_threads is set to 1, this can lead
to silently not syncing all slave threads as expected,
ultimately producing unexpected results (and consequently
a failing test run).
We fix this by replacing the START SLAVE instructions with
calls to --source include/start_slave.inc, which will wait
for the slave threads to be running (show 'Yes' in
Slave_IO|SQL_Running fields of SHOW SLAVE STATUS) before
proceeding. Additionally, we change rpl_sync.inc to consider the
IO thread running whenever its running status is anything other
than 'No'.
primary_key_no == 0".
An attempt to create an InnoDB table with a non-nullable column of
geometry type, having a unique key of length 12 on it and
some other candidate key, led to a server crash due to an
assertion failure in both non-debug and debug builds.
The problem was that such a non-candidate key could have
been sorted as the first key in the table/.FRM, before any legitimate
candidate keys. This resulted in an assertion failure in the InnoDB
engine, which assumes that the primary key should either be the
first key in the table/.FRM or not exist at all.
The reason behind this incorrect sorting was a wrong
value of the Create_field::key_length member for the geometry field
(it was set to its pack_length == 12), which confused the code
in mysql_prepare_create_table() into skipping the marking of
such a key as a key with partial segments.
This patch fixes the problem by ensuring that this member
gets the same Create_field::key_length value as
other blob fields (from which the geometry field class is
inherited); as a result, unique keys on geometry fields
are correctly marked as having partial segments.
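A rough, hypothetical sketch of the check involved; names and data are
simplified, and the real logic lives in mysql_prepare_create_table():

    #include <cstdint>

    /* Simplified view of the relevant Create_field data. After the fix,
       geometry fields get the same key_length as other blob fields instead
       of their pack_length. */
    struct field_def_sketch {
      std::uint32_t key_length;
    };

    /* A key part whose length differs from the field's key_length covers only
       part of the field, so the key is marked as having partial segments and
       is not eligible to be promoted to the implicit primary key. */
    bool key_part_is_partial(const field_def_sketch& field,
                             std::uint32_t key_part_length) {
      return key_part_length != field.key_length;
    }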
In SBR, if a statement does not fail, it is always written to the binary
log, regardless of whether rows are changed or not. If there is a failure, a
statement is only written to the binary log if a non-transactional (e.g.
MyISAM) engine is updated.
INSERT ON DUPLICATE KEY UPDATE and INSERT IGNORE were not following the
rule above and were not written to the binary log if the engine was
InnoDB.
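The rule itself is simple enough to state as a sketch; the helper below is
hypothetical, not the server's actual decision function:

    /* Decide whether a statement should be written to the binary log under SBR. */
    bool should_write_to_binlog(bool statement_failed,
                                bool non_transactional_engine_updated) {
      if (!statement_failed) {
        return true;                 /* successful statements are always logged */
      }
      /* failed statements are logged only if a non-transactional
         (e.g. MyISAM) engine was updated */
      return non_transactional_engine_updated;
    }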
There are two calls to read_log_event() on master in mysql_binlog_send().
Each call reads 19 bytes in this test case and the error of the second
read_log_event() is reported to the slave.
The second read_log_event() reads from position 94 (75 + 19) to 113
(75 + 19 + 19). Usually, there are two events in the binary log:
. 0 - 3 - Header
. 4 - 105 - Format Descriptor Event
. 106 - 304 - Query Event
and both reads fail, as expected, because they are reading from invalid
positions.
However, mysql_binlog_send() does not use the same IO_CACHE that is used to
write into the binary log (i.e. mysql_bin_log.log_file) for the hot binary log.
It opens the binary log file directly by calling open_binlog() and creates a
separate IO_CACHE. So there is a possibility that after a master has flushed
the binary log file, the content is still cached by the filesystem and has
not yet reached the disk file. If this happens, then a slave will only see part
of the file, and thus the second read_log_event() will report an 'event
truncated' error.
To fix the problem, if the first read_log_event() has failed, we ensure that
the second one will try to read from the same position.
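The retry behaviour can be sketched as follows; plain stdio stands in for the
IO_CACHE layer, and the 19-byte value is the event header size mentioned above:

    #include <cstdio>

    /* Advance the read position only when a read succeeds, so that after a
       failed first read_log_event() the second attempt re-reads the event
       from the same file offset. */
    struct event_reader_sketch {
      std::FILE* file;
      long pos;

      bool read_event_header() {
        if (std::fseek(file, pos, SEEK_SET) != 0) {
          return false;
        }
        unsigned char header[19];                   /* event header size */
        if (std::fread(header, 1, sizeof header, file) != sizeof header) {
          return false;                             /* pos is NOT advanced */
        }
        pos += sizeof header;                       /* advance only on success */
        /* ... read the event body and advance pos accordingly ... */
        return true;
      }
    };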
The test was using external tools that are not available for the embedded server.
Fixed by rewriting the test so that it does not rely on external tools like the
mysql client. Also fixed some non-portable --exec commands and replaced #p# with
#P# so the test passes on Windows.
rpl_packet sporadically got a timeout failure on PB when stopping the
slave. The real reason for this bug is that STOP SLAVE stopped the
IO thread first and then stopped the SQL thread. It was
possible that the IO thread stopped after replicating part of a
transaction which the SQL thread was executing. The SQL thread would
hang if the transaction could not be rolled back safely.
After this patch, STOP SLAVE will stop the SQL thread first and then the IO
thread, which guarantees that the IO thread will fetch the rest of the
events of the transaction that the SQL thread is executing, so that the SQL
thread can finish the transaction if it cannot be rolled back safely.
Added the auxiliary files below to make the test code neater.
restart_slave_sql.inc
rpl_connection_master.inc
rpl_connection_slave.inc
rpl_connection_slave1.inc
Committing After latest merge.
Modified the check_pct procedure to check the return value of the wait condition
instead of calling "dirty_pct".
Addressed review comments:
1) Added a comment for the success variable value.
2) Changed the check_pct procedure to add a BOOLEAN input and changed the SELECT
query.