When the thread executing a DDL statement was killed after it had
finished execution but before the binlog event was written, the error
code in the binlog event could be wrongly set to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixes the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also includes the following changes needed to provide the
test case:
1) mysqltest was modified to support variables in the connection command
2) mysql-test-run.pl was modified to add a new variable, MYSQL_SLAVE,
used to run the mysql client against the slave mysqld.
Query_log_event::error_code
A query can complete successfully, with the local variable `error' of
mysql_$query (where $query is one of insert, update, delete, load)
equal to zero, and still be binlogged with an error_code such as
KILLED_QUERY although there is no reason to do so.
That can happen because Query_log_event consults the thd->killed flag to
evaluate error_code.
Fixed by implementing a scheme suggested and partly implemented during
the work on bug@22725: error_status is cached immediately after
control leaves the main rows-loop, and that instance always corresponds
to `error', the local variable of the mysql_$query functions. The cached
value is passed to the Query_log_event constructor instead of the default
thd->killed, which can change between the caching and the constructing.
The reason for the bug was that replaying a query on the slave could become impossible because its event
was recorded with the killed error. Due to the specifics of INSERT handling, whose per-row while-loop
cannot be interrupted by a kill, a query on a transactional table should not have appeared in the binlog
at all unless there was a call to a stored routine that got interrupted by the kill (and then an error
must have been returned out of the loop).
The offered solution adds the following rule for binlogging of INSERT, accounting for the above
specifics:
For INSERT on a transactional table, if no error was set, the raised kill flag alone
is harmless and is ignored by masking it out at the time the binlog event is created.
For both table types, the combination of a raised error and the KILLED flag indicates that there
was potentially a partial execution on the master and consistency is in question.
In that case the code continues to binlog an event with the appropriate killed error.
The fix relies on the specified behaviour of stored routines, which must propagate the error
to the top-level query handling if the thd->killed flag was raised during routine execution.
The patch adds an argument with a default killed-status-unset value to Query_log_event::Query_log_event;
the caching scheme is sketched below.
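A minimal standalone sketch of the caching scheme, assuming illustrative
type, constant and constructor names (they only approximate the server's;
the numeric value used for the interrupted-query error is illustrative too):

  /* Illustration only, not server code. */
  #include <cstdio>

  enum killed_state { NOT_KILLED= 0, KILL_QUERY= 1, KILLED_NO_VALUE= 256 };
  static const int APPROX_ER_QUERY_INTERRUPTED= 1317;  /* illustrative value */

  struct Query_log_event_sketch
  {
    int error_code;
    /* The extra argument defaults to "killed status unset", as the patch
       describes for Query_log_event::Query_log_event. */
    Query_log_event_sketch(int query_error,
                           killed_state killed= KILLED_NO_VALUE)
    {
      /* Map the kill flag to an error only if it was cached as set. */
      error_code= query_error ? query_error :
                  (killed == KILL_QUERY ? APPROX_ER_QUERY_INTERRUPTED : 0);
    }
  };

  int main()
  {
    killed_state thd_killed= NOT_KILLED;  /* stands in for thd->killed  */
    int error= 0;                         /* the local of mysql_$query  */

    /* ... main rows-loop runs here and may set `error' ... */

    /* Cache immediately after the loop: a KILL arriving later, before the
       event is constructed, no longer influences error_code. */
    killed_state killed_status= thd_killed;

    thd_killed= KILL_QUERY;               /* simulate a late KILL */

    Query_log_event_sketch ev(error, killed_status);
    std::printf("binlogged error_code= %d\n", ev.error_code);  /* prints 0 */
    return 0;
  }

The point is only that the value read right after the rows-loop, not the
one present at event-construction time, decides error_code.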
This patch fixes a problem where LOAD DATA could use different
character sets when loading files on the master and slave sides:
- Adding replication of thd->variables.collation_database
- Adding an optional character set clause to LOAD DATA
Note that the second way, with an explicit CHARACTER SET clause,
should be the recommended way to load data using an alternative
character set.
The old way, using "SET @@character_set_database=xxx", should be
gradually deprecated.
"INSERT... ON DUPLICATE KEY UPDATE skips auto_increment values".
When an INSERT ... ON DUPLICATE KEY UPDATE on a table with an
auto_increment column inserted some autogenerated values and also
updated some rows, some autogenerated values were not used
(for example, even if 10 was the largest autoinc value in the table
at the start of the statement, 12 could be the first autogenerated
value inserted by the statement, instead of 11). One autogenerated
value was lost per updated row, which led to exhausting the range of
the autoincrement column faster.
Bug introduced by fix of BUG#20188; present since 5.0.24 and 5.1.12.
This bug breaks replication from a pre-5.0.24 master.
But the present bugfix, as it makes INSERT ON DUP KEY UPDATE
behave like pre-5.0.24, breaks replication from a [5.0.24,5.0.34]
master to a fixed (5.0.36) slave! To warn users about this when
they upgrade their slave, as agreed with the support team, we add
code for a fixed slave to detect that it is connected to a buggy
master in a situation (INSERT ON DUP KEY UPDATE into an autoinc column)
likely to break replication; in that case it cannot replicate, so it
stops and prints a message to the slave's error log and to SHOW SLAVE
STATUS.
For 5.0.36->[5.0.24,5.0.34] replication we cannot warn, as the master
does not know the slave's version (but we have always recommended that
users keep the slave at least as new as the master).
As agreed with support, I'll also ask for an alert to be put into
the MySQL Network Monitoring and Advisory Service.
This patch is an additional code change to the get_str_len_and_pointer
method in log_event.cc. The change is necessary to correct a problem
encountered on 64-bit SUSE where the auto_increment_* variables were
being overwritten. It corrects the cast mismatch which caused the
problem; the failure mode is illustrated below.
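The following standalone snippet only illustrates that class of failure,
with made-up field names; it is not the server's code. It shows how
writing a value through a width that does not match the destination can
spill into a neighbouring variable on a 64-bit platform:

  #include <cstdio>
  #include <cstring>
  #include <stdint.h>

  struct event_vars_sketch
  {
    uint32_t str_len;             /* meant to receive a 4-byte length   */
    uint32_t auto_inc_increment;  /* neighbour that must stay untouched */
  };

  int main()
  {
    event_vars_sketch v= { 0, 42 };
    uint64_t wide_src= 7;
    uint32_t narrow_src= 7;

    /* Buggy pattern (deliberately wrong, for illustration only): the
       4-byte destination is written with 8 bytes, clobbering the
       neighbouring field. */
    std::memcpy(&v.str_len, &wide_src, sizeof(uint64_t));
    std::printf("after bad write:  auto_inc_increment= %u\n",
                (unsigned) v.auto_inc_increment);   /* no longer 42 */

    /* Corrected pattern: source and destination widths match. */
    v.auto_inc_increment= 42;
    std::memcpy(&v.str_len, &narrow_src, sizeof(uint32_t));
    std::printf("after good write: auto_inc_increment= %u\n",
                (unsigned) v.auto_inc_increment);   /* still 42 */
    return 0;
  }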
Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added a define IS_LONGDATA() to simplify code in libmysql.c (see the sketch below)
I ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes
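A sketch of what such a helper can look like, using a local stand-in enum
because the real definition and the client library's type codes live in
the libmysql sources; the assumption here is that the long-data types
occupy one contiguous range:

  #include <cstdio>

  /* Local stand-in for the client library's field type codes. */
  enum field_type_sketch
  {
    TYPE_LONG= 3,
    TYPE_TINY_BLOB= 249, TYPE_MEDIUM_BLOB= 250, TYPE_LONG_BLOB= 251,
    TYPE_BLOB= 252, TYPE_VAR_STRING= 253, TYPE_STRING= 254
  };

  /* One range check replaces a chain of per-type comparisons. */
  #define IS_LONGDATA(t) ((t) >= TYPE_TINY_BLOB && (t) <= TYPE_STRING)

  int main()
  {
    std::printf("BLOB: %d  LONG: %d\n",
                (int) IS_LONGDATA(TYPE_BLOB),
                (int) IS_LONGDATA(TYPE_LONG));   /* prints "BLOB: 1  LONG: 0" */
    return 0;
  }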
Problem: when loading mysqlbinlog dumps, CREATE PROCEDURE statements having
semicolons in their bodies failed.
Fix: use the safe delimiter "/*!*/;" when dumping log entries.
A communication packet can also be a binlog event sent from the master to the slave.
For such an event to be sent by the master dump thread and accepted by the slave I/O
thread, both have to have a max_allowed_packet value bigger than the one the client
connection had.
The patch estimates the MAX possible replication header size for binlog events
that embed a user query. Only events of Query_log_event type, i.e.
just plain queries, require attention.
- Added empty constructors and virtual destructors to many classes and structs
- Removed some usage of the offsetof() macro to instead use C++ class pointers (a sketch of the idea follows)
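A small sketch of that replacement, with made-up names; the point is that
pointer-to-member syntax gives the same field indirection as offsetof()
arithmetic but stays type checked and works for non-POD classes:

  #include <cstdio>

  struct Settings_sketch
  {
    Settings_sketch() : max_connections(0), sort_buffer(0) {}
    virtual ~Settings_sketch() {}   /* non-POD: offsetof() is not portable here */
    long max_connections;
    long sort_buffer;
  };

  int main()
  {
    Settings_sketch s;

    /* offsetof() style would be byte arithmetic on a char* with no type
       checking; the C++ alternative uses a pointer to member instead. */
    long Settings_sketch::*field= &Settings_sketch::sort_buffer;
    s.*field= 16;

    std::printf("sort_buffer= %ld\n", s.sort_buffer);   /* prints 16 */
    return 0;
  }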
When merging between trees that contain different features, adding explicit
numbering to enums reduces the risk that the code will be merged incorrectly.
This particular enum must have fixed values to ensure that an upgraded server
can always read old logs. I added this since I noticed the incorrect order in
the RBR clone; a sketch of the pattern follows.
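A sketch of the pattern, with only a few illustrative member names rather
than the server's full event-type list:

  #include <cstdio>

  /* Explicit values: a bad merge that inserts or reorders members can no
     longer silently renumber everything after it, so an upgraded server
     keeps reading old logs correctly. */
  enum Log_event_type_sketch
  {
    UNKNOWN_EVENT_SK= 0,
    START_EVENT_SK=   1,
    QUERY_EVENT_SK=   2,
    STOP_EVENT_SK=    3,
    ROTATE_EVENT_SK=  4
    /* New members must be appended with the next fixed value,
       never inserted in the middle. */
  };

  int main()
  {
    std::printf("QUERY_EVENT_SK= %d\n", (int) QUERY_EVENT_SK);  /* always 2 */
    return 0;
  }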
In short, we now record whenever the slave I/O thread ignores a master's event because of its server id,
and use this information in the slave SQL thread to advance Exec_master_log_pos. If we
do not, this variable stays at the position of the last executed event, i.e. the last *non-ignored*
executed one, which may not be the last event of the master's binlog (so the slave *looks* behind
the master even though data-wise it is not). The bookkeeping idea is sketched below.
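A standalone sketch of that bookkeeping, with invented names and without
any of the real relay-log machinery:

  #include <cstdio>

  struct relay_info_sketch
  {
    unsigned long exec_master_log_pos;  /* what SHOW SLAVE STATUS reports */
    unsigned long ignored_up_to_pos;    /* recorded by the I/O thread     */
  };

  /* I/O thread: the event is skipped because it carries our own server id,
     but its end position is remembered. */
  static void io_thread_ignores_event(relay_info_sketch *ri,
                                      unsigned long event_end_pos)
  {
    ri->ignored_up_to_pos= event_end_pos;
  }

  /* SQL thread: with nothing newer to execute, catch the counter up so the
     slave does not merely *look* behind the master. */
  static void sql_thread_advance(relay_info_sketch *ri)
  {
    if (ri->ignored_up_to_pos > ri->exec_master_log_pos)
      ri->exec_master_log_pos= ri->ignored_up_to_pos;
  }

  int main()
  {
    relay_info_sketch ri= { 1000, 1000 };
    io_thread_ignores_event(&ri, 1500);   /* master event with our server id */
    sql_thread_advance(&ri);
    std::printf("Exec_master_log_pos= %lu\n", ri.exec_master_log_pos);  /* 1500 */
    return 0;
  }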
The idea of the patch is to separate statement processing logic,
such as parsing, validation of the parsed tree, execution and cleanup,
from global query processing logic, such as logging, resetting
priorities of a thread, resetting stored procedure cache, resetting
thread count of errors and warnings.
This makes PREPARE and EXECUTE behave similarly to the rest of SQL
statements and allows their use in stored procedures.
This patch contains a change in behaviour:
until recently, for each SQL prepared statement command, 2 queries
were written to the general log, e.g.
[Query]   prepare stmt from @stmt_text;
[Prepare] select * from t1 <-- contents of @stmt_text
The change was necessary to prevent [Prepare] commands from being written
to the general log when executing a stored procedure with Dynamic SQL.
We should consider whether the old behaviour is preferable and probably
restore it.
This patch refixes Bug#7115, Bug#10975 (partially), Bug#10605 (various bugs
in Dynamic SQL reported before it was disabled).
Approximative, because it uses our binlogging way (what we call "query"-level), which is not as good
as record-level binlogging (5.1) would be. It imposes several
limitations on routines, and has caveats (which I'll document, and for which the server will try to issue errors, but that is not always possible).
The reason I don't propagate caller info to the binlog as planned is that on master and slave
the users may be different; even with that, some caveats would remain.
This saves one byte per Query_log_event on disk compared to 5.0.[0..3]. Compatibility problems with 5.0.x where x<4
are explained in the comments in log_event.cc. Putting back the s/my_open(O_TRUNC)/(my_delete+my_create)/ change which had
been wiped away by somebody doing a wrong 4.1->5.0 merge (which happened just
before 5.0.3 :( ). Applying it to the new events for LOAD DATA INFILE.
If the slave fails in Execute_load_query_log_event::exec_event(),
don't delete the file (so that it's re-usable at the next START SLAVE).
And (hooray!) a fix for BUG#3247 "a partially completed LOAD DATA INFILE is not
executed at all on the slave" (storing an Execute_load_query_log_event
in the binlog, with its error code, instead of a Delete_file_log_event).
Now one can use user variables as targets for data loaded from a file
(besides the table's columns). Also, LOAD DATA got a new SET clause in which
one can specify values for table columns as expressions.
For example, the following is possible:
LOAD DATA INFILE 'words.dat' INTO TABLE t1 (a, @b) SET c = @b + 1;
This patch also implements a new way of replicating LOAD DATA.
Now we do it similarly to other queries:
we store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then execute it
as a usual query; the rewriting is sketched below. At the beginning of this sequence
we use a Begin_load_query event, which is almost identical to an Append_file event.
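A standalone sketch of the rewriting step, assuming (as the description
implies) that the event carries the offsets of the file name inside the
stored query; the names and offsets here are invented for the example:

  #include <cstdio>
  #include <string>

  static std::string rewrite_load_data_query(const std::string &query,
                                             size_t fn_pos_start,
                                             size_t fn_pos_end,
                                             const std::string &tmp_fname)
  {
    /* Keep everything around the quoted file name, substitute the
       temporary file created on the slave. */
    std::string rewritten= query.substr(0, fn_pos_start);
    rewritten+= "'" + tmp_fname + "'";
    rewritten+= query.substr(fn_pos_end);
    return rewritten;
  }

  int main()
  {
    std::string q=
      "LOAD DATA INFILE 'words.dat' INTO TABLE t1 (a, @b) SET c = @b + 1";
    /* [17, 28) delimits 'words.dat' (quotes included) in this example. */
    std::string out= rewrite_load_data_query(q, 17, 28,
                                             "/tmp/SQL_LOAD-sketch.data");
    std::printf("%s\n", out.c_str());
    return 0;
  }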