read_buffer_size set on master
BUG#33413 show binlog events fails if binlog has an event size close
to max_allowed_packet
The size of the Append_block replication event was determined solely by
read_buffer_size, whereas the rest of the replication code works with
max_allowed_packet.
When the former parameter was set larger than the latter there were
two symptoms: the master could not read events from the binlog, and
SHOW BINLOG EVENTS displayed nothing.
Fixed by
- fragmenting the io-cache buffer used into pieces, each smaller
than max_allowed_packet (bug#30435), as sketched below
- increasing the max_allowed_packet of the thread handling
SHOW BINLOG EVENTS by the maximum estimated replication header size
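A minimal sketch of the fragmentation idea only, under assumed names; the
fragment() helper and the 512-byte header_room are illustrative, not the
server's IO_CACHE code.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Split a buffer larger than the packet limit into (offset, length) pieces
// that each stay below max_allowed_packet, leaving headroom for the
// replication header.
std::vector<std::pair<std::size_t, std::size_t>>
fragment(std::size_t total, std::size_t max_allowed_packet) {
  const std::size_t header_room = 512;   // illustrative headroom estimate only
  const std::size_t chunk =
      max_allowed_packet > header_room ? max_allowed_packet - header_room : 1;
  std::vector<std::pair<std::size_t, std::size_t>> pieces;
  for (std::size_t off = 0; off < total; off += chunk)
    pieces.emplace_back(off, std::min(chunk, total - off));
  return pieces;                          // every piece fits into one packet
}
```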
Now every transaction (including autocommit transactions) starts with
a BEGIN and ends with a COMMIT/ROLLBACK in the binlog.
Added a test case and updated many test case result files.
When server-id is set dynamically, the server_id member of the current thread is not updated.
Fixed by updating the server_id member of the current thread after the global variable value is updated.
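The idea of the fix in sketch form, with Session standing in for the server's
THD; all names here are assumptions, not the actual patch.

```cpp
#include <atomic>

struct Session {                       // stands in for the server's THD
  unsigned long server_id;
};

std::atomic<unsigned long> global_server_id{1};

void set_server_id_dynamically(Session *current, unsigned long new_id) {
  global_server_id.store(new_id);      // effect of SET GLOBAL server_id = ...
  current->server_id = new_id;         // the missing step: refresh this thread
}
```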
Complementary patch, since LOAD DATA INFILE was not covered by
the previous patch.
This patch adds a check so that the slave skip counter is not
decreased to zero on seeing a BEGIN_LOAD_QUERY_EVENT,
APPEND_BLOCK_EVENT, or CREATE_FILE_EVENT, since these cannot
end a group. The group is terminated by an EXECUTE_LOAD_QUERY_EVENT
or DELETE_FILE_EVENT.
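A sketch of the group-boundary rule with an event-type enum of my own; the
constant names mirror the ones above, and the skip-counter helper is
hypothetical rather than the slave's actual code.

```cpp
enum EventType {
  BEGIN_LOAD_QUERY_EVENT,
  APPEND_BLOCK_EVENT,
  CREATE_FILE_EVENT,
  EXECUTE_LOAD_QUERY_EVENT,
  DELETE_FILE_EVENT,
  QUERY_EVENT
};

// Events that only stage file chunks cannot terminate a group.
static bool can_end_group(EventType t) {
  switch (t) {
    case BEGIN_LOAD_QUERY_EVENT:
    case APPEND_BLOCK_EVENT:
    case CREATE_FILE_EVENT:
      return false;
    default:
      return true;   // e.g. EXECUTE_LOAD_QUERY_EVENT, DELETE_FILE_EVENT
  }
}

// Hypothetical use in the skip logic: never let the counter reach zero
// on an event that cannot close the current group.
void step_skip_counter(long &sql_slave_skip_counter, EventType t) {
  if (sql_slave_skip_counter > 1 ||
      (sql_slave_skip_counter == 1 && can_end_group(t)))
    --sql_slave_skip_counter;
}
```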
The reason for this bug is that when mysqlbinlog dumps a query, the query is written to
the output with the delimiter appended right after it. If the query string ends with a '--'
comment, the delimiter is treated as part of that comment, and any statements that follow
the query then cause a syntax error.
Fixed by starting a new line before appending the delimiter after a query string.
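A sketch of the output ordering the fix describes; print_query and its
signature are illustrative, not mysqlbinlog's actual code.

```cpp
#include <cstdio>

// Emit the query, then a newline, then the delimiter, so a trailing
// "-- ..." comment in the query cannot swallow the delimiter.
void print_query(std::FILE *out, const char *query, const char *delimiter) {
  std::fputs(query, out);
  std::fputc('\n', out);   // the fix: break the line before the delimiter
  std::fputs(delimiter, out);
  std::fputc('\n', out);
}
```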
When a DROP VIEW statement is executed on the master, it is written
into the bin-log without checking for possible errors, so the statement is
always bin-logged with the error code cleared even if an error occurs,
for example when some of the views being dropped do not exist. This causes
failures on the slave.
Fixed by writing to the bin-log after checking for errors: if at least one view has been
dropped, the query is bin-logged, possibly with an error code.
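A sketch of the revised ordering under assumed names; DropOutcome,
drop_views, and binlog_drop_view are illustrative stand-ins, not the server
functions.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct DropOutcome { int dropped = 0; int error_code = 0; };

// Toy stand-in for executing DROP VIEW over a list of view names.
DropOutcome drop_views(const std::vector<std::string> &views,
                       const std::vector<std::string> &existing) {
  DropOutcome out;
  for (const std::string &v : views) {
    bool found = false;
    for (const std::string &e : existing)
      if (e == v) found = true;
    if (found) ++out.dropped;
    else out.error_code = 1;           // "view does not exist" (illustrative)
  }
  return out;
}

// Bin-log the statement only after the outcome is known, and only if at
// least one view was actually dropped; keep the real error code.
void binlog_drop_view(const std::string &query, const DropOutcome &out) {
  if (out.dropped > 0)
    std::printf("binlog: %s (error_code=%d)\n", query.c_str(), out.error_code);
}
```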
When running mysqlbinlog on a 64-bit machine with a corrupt relay log,
mysqlbinlog crashes. In this case, the crash happens because a request
for 18446744073709534806 bytes is issued, which apparently can be
served on a 64-bit machine (speculatively, I assume), but the memcpy()
issued later to copy the data then segfaults.
The request for that number of bytes comes from the computation
data_len - server_vars_len, where server_vars_len is corrupt in such
a way that it is greater than data_len. The subtraction wraps around,
producing the value given above.
This patch adds a check that, if server_vars_len is greater than
data_len before the subtraction, aborts reading the event and marks
the event as invalid. It also adds checks that reading the server
variables does not go outside the bounds of the available space,
giving a limited amount of integrity checking.
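A self-contained illustration of the wrap-around and the guard; the operand
values are chosen only so that their difference reproduces the size quoted
above, and the surrounding code is not the actual event-parsing code.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  std::uint64_t data_len = 54806;          // illustrative operands: only their
  std::uint64_t server_vars_len = 71616;   // difference (16810) matters here

  // The fix: validate before subtracting and treat the event as invalid.
  if (server_vars_len > data_len) {
    std::puts("event marked invalid, reading aborted");
    return 1;
  }

  // Without the guard, the unsigned subtraction wraps around:
  // 54806 - 71616 = 2^64 - 16810 = 18446744073709534806 on a 64-bit build,
  // and a later memcpy() of that many bytes overruns the buffer.
  std::uint64_t remaining = data_len - server_vars_len;
  std::printf("remaining: %llu\n", (unsigned long long)remaining);
  return 0;
}
```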
This commit is specific to 5.0 and eliminates non-deterministic tests.
Those tests run only in a 5.1 environment, where the necessary facilities,
such as the PROCESSLIST table of INFORMATION_SCHEMA, are available.
Query_log_event::error_code
A query can complete with the local variable `error' of mysql_$query
equal to zero (where $query is one of insert, update, delete, load)
and still be binlogged with an error_code such as KILLED_QUERY, although
there is no reason to do so.
This can happen because Query_log_event consults the thd->killed flag to
evaluate error_code.
Fixed by implementing a scheme suggested, and partly implemented, during
the work on bug@22725. The error status is cached immediately after
control leaves the main rows loop, and that instance always corresponds
to `error', the local variable of the mysql_$query functions. The cached
value is passed to the Query_log_event constructor, not the default based
on thd->killed, which can change between the caching and the constructing.
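A sketch of the caching scheme with stand-in names; thd_killed,
QueryLogEvent, and binlog_statement are illustrative, not the server's types.

```cpp
#include <atomic>

std::atomic<int> thd_killed{0};          // stands in for thd->killed

struct QueryLogEvent {                   // stands in for Query_log_event
  int error_code;
  explicit QueryLogEvent(int err) : error_code(err) {}
};

// The error status is captured right after the rows loop; a KILL arriving
// later can no longer change the code recorded for this statement.
QueryLogEvent binlog_statement(int rows_loop_error) {
  int cached_error = rows_loop_error;    // corresponds to the local `error'

  // ... an asynchronous KILL may set thd_killed at any time after this ...

  return QueryLogEvent(cached_error);    // not derived from thd_killed
}
```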
Inserting Data.
The problem was that under some circumstances the Field class was not
properly initialized before calling the create_length_to_internal_length()
function, which led to an assert failure.
The fix is to do the proper initialization.
The user-visible problem was that under some circumstances a
CREATE TABLE ... SELECT statement crashed the server or led
to a wrong error message (wrong results).
When doing an indexed search, the server constructs a key image for
faster comparison with the stored keys. While doing that it must not
perform (and stop if they fail) the additional date checks that can
be turned on by the SQL mode, because the table may already contain
values that do not comply with those checks.
Fixed by ignoring these SQL mode bits while making the key image.
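A sketch of the save/mask/restore pattern this implies; the flag names echo
the SQL modes involved, but the bit values and the helper are illustrative
assumptions, not the server code.

```cpp
#include <cstdint>

// Illustrative bit values; the real sql_mode constants differ.
constexpr std::uint64_t MODE_NO_ZERO_DATE    = 1ULL << 0;
constexpr std::uint64_t MODE_NO_ZERO_IN_DATE = 1ULL << 1;
constexpr std::uint64_t MODE_INVALID_DATES   = 1ULL << 2;

// Build the key image with the strict date-check bits masked out, then
// restore the caller's SQL mode.
void make_key_image(std::uint64_t &sql_mode /*, field, search value ... */) {
  const std::uint64_t saved = sql_mode;
  sql_mode &= ~(MODE_NO_ZERO_DATE | MODE_NO_ZERO_IN_DATE | MODE_INVALID_DATES);
  // ... store the search value into the key buffer without date checks ...
  sql_mode = saved;
}
```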
an error, asserts server
In case of a fatal error during filesort in find_all_keys() the error
was returned without the necessary handler uninitialization.
Fixed by changing the code so that handler uninitialization is performed
before returning the error.
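A sketch of the corrected control flow with toy stand-ins for the handler;
the real code uses the storage engine handler's own init/end calls.

```cpp
// Toy stand-in for the storage engine handler used by filesort.
struct ScanHandle {
  bool initialized = false;
  void init_scan() { initialized = true; }
  void end_scan()  { initialized = false; }
};

int find_all_keys_sketch(ScanHandle &h, bool fatal_error) {
  h.init_scan();
  if (fatal_error) {
    h.end_scan();        // the fix: uninitialize before propagating the error
    return 1;            // non-zero error code (illustrative)
  }
  // ... read rows, collect keys ...
  h.end_scan();
  return 0;
}
```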
Since, as of MySQL 5.0.15, CHAR() arguments larger than 255 are converted into multiple result bytes, a single CHAR() argument can now take up to 4 bytes. This patch fixes Item_func_char::fix_length_and_dec() to take this into account.
This patch also fixes a regression introduced by the patch for bug#21513. Since the 'name' member of Item is now not always set for Item_hex_string and Item_bin_string, a dedicated print() method has been added to Item_hex_string so that it can be printed correctly by Item_func::print_args().
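A small worked illustration of why one argument can need up to 4 result
bytes: the argument is treated as an integer and emitted as its bytes, e.g.
CHAR(256) yields the two bytes 0x01 0x00. The helper below is illustrative,
not the server's implementation.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Big-endian bytes of the argument, with leading zero bytes dropped.
std::vector<unsigned char> char_arg_bytes(std::uint32_t v) {
  std::vector<unsigned char> out;
  for (int shift = 24; shift >= 0; shift -= 8) {
    unsigned char b = (v >> shift) & 0xFF;
    if (!out.empty() || b != 0 || shift == 0)
      out.push_back(b);
  }
  return out;
}

int main() {
  std::printf("CHAR(65)         -> %zu byte(s)\n", char_arg_bytes(65).size());          // 1
  std::printf("CHAR(256)        -> %zu byte(s)\n", char_arg_bytes(256).size());         // 2
  std::printf("CHAR(4294967295) -> %zu byte(s)\n", char_arg_bytes(4294967295u).size()); // 4
}
```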
The value of the actual argument of a stored procedure with a BIT-type parameter was binlogged
as a non-escaped sequence of bytes corresponding to the internal representation of the bit value.
The patch enforces binlogging of the bit argument as a valid literal: the quoted byte sequence
is prefixed with _binary.
Note that the behaviour of Item_field::var_str for a field_type() of MYSQL_TYPE_BIT is exceptional
in that the returned string contains the binary representation even though the result_type() of
the item is INT_RESULT.
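A sketch of producing such a literal; the escaping rules shown are simplified
assumptions, and only the _binary prefix and the quoting come from the
description above.

```cpp
#include <string>

// Quote a raw byte string and prefix it with _binary so the bytes form a
// valid, charset-explicit literal (simplified escaping for illustration).
std::string binary_literal(const std::string &bytes) {
  std::string out = "_binary'";
  for (unsigned char c : bytes) {
    switch (c) {
      case '\'': out += "\\'";  break;
      case '\\': out += "\\\\"; break;
      case '\0': out += "\\0";  break;
      default:   out += static_cast<char>(c);
    }
  }
  out += '\'';
  return out;     // the raw (or escaped) bytes appear between the quotes
}
```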