The function thd_security_context allocates memory from an unprotected MEM_ROOT if the
message grows longer than the requested length and the initial buffer memory needs to
be reallocated.
This patch fixes the design error by copying parts of the reallocated buffer
to the destination buffer. This works because the destination buffer isn't
owned by the String object and thus isn't freed when a new buffer is allocated.
Any new memory allocated by the String object is reclaimed when the object
is destroyed at the end of the function call.
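A minimal sketch of the copy-out pattern described above, using std::string as a
stand-in for the server's String object; the function and buffer names are
illustrative, not the actual thd_security_context() interface:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <string>

    // Stand-in message builder: the temporary object may grow and reallocate
    // its own storage, but it never owns the caller-supplied destination.
    static void build_message(std::string &msg) {
      msg += "some potentially long security context text";
    }

    // Copy as much of the (possibly reallocated) message as fits into the
    // caller-owned destination buffer; the temporary frees only its own
    // storage when it goes out of scope at the end of the call.
    static std::size_t copy_security_context(char *dst, std::size_t dst_len) {
      if (dst_len == 0) return 0;
      std::string msg;                       // owns its own, reallocatable buffer
      build_message(msg);
      std::size_t n = std::min(msg.size(), dst_len - 1);
      std::memcpy(dst, msg.data(), n);       // copy into the unowned destination
      dst[n] = '\0';
      return n;                              // msg's memory is reclaimed here
    }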
The InnoDB monitor could cause a server crash because of invalid access to a
shared variable in a concurrent environment.
This patch adds a guard that protects against crashes but, for performance
reasons, not against inconsistent values.
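The kind of guard described above can be sketched as follows; the mutex and
structure names are assumptions for illustration, and the point is that the lock
only prevents reading a value mid-update, not reading a slightly stale one:

    #include <mutex>
    #include <string>

    // Hypothetical shared state updated by other threads while the monitor
    // output is being produced.
    struct monitor_state {
      std::string last_operation;   // may be replaced concurrently
    };

    static std::mutex monitor_mutex;        // assumed guard, not InnoDB's latch
    static monitor_state shared_state;

    // Hold the guard only long enough to copy the shared value: the copy may
    // already be stale when printed, but the read can no longer crash on a
    // concurrent update.
    static std::string monitor_snapshot() {
      std::lock_guard<std::mutex> guard(monitor_mutex);
      return shared_state.last_operation;
    }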
When statement-based replication is used, and the
transaction isolation level is READ-COMMITTED or a less
strict level, InnoDB prints an error because statement-based
replication might lead to inconsistency between master
and slave databases. However, when the binary log is not
engaged, this is not an issue and the error should
not be printed.
This patch makes thd_binlog_format() return BINLOG_FORMAT_UNSPEC
when the binary log is not engaged for the given thread.
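A rough sketch of the described behaviour, with simplified stand-ins for THD and
the binlog format constants (the real function is part of the server's handler
interface):

    // Simplified stand-ins; the real enum and THD live in the server sources.
    enum enum_binlog_format {
      BINLOG_FORMAT_STMT,
      BINLOG_FORMAT_ROW,
      BINLOG_FORMAT_UNSPEC
    };

    struct THD_stub {
      bool binlog_engaged;   // assumed flag: binary logging active for this thread
      enum_binlog_format session_binlog_format;
    };

    // Report the session's binlog format only when the binary log is actually
    // engaged; otherwise return UNSPEC so InnoDB does not raise the
    // statement-based/READ-COMMITTED error for non-replicated statements.
    static enum_binlog_format thd_binlog_format_sketch(const THD_stub *thd) {
      if (!thd->binlog_engaged)
        return BINLOG_FORMAT_UNSPEC;
      return thd->session_binlog_format;
    }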
Debug builds of MySQL 5.1 and 6.0 with Sun Studio 12 broke because of
the use of a gcc-specific feature.
The fix is to replace __FUNCTION__ with the corresponding character string.
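For illustration only (the function name below is arbitrary), the substitution
amounts to spelling out the name as a plain string literal:

    #include <cstdio>

    static void row_ins_step_example() {
      /* Before: DBUG-style tracing via the compiler-specific __FUNCTION__.  */
      /* After: a plain character string, accepted by every compiler,        */
      /* including Sun Studio 12 in debug builds.                            */
      std::printf("entering %s\n", "row_ins_step_example");
    }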
The failure was caused by executing a CREATE-SELECT statement that creates a
table in a database other than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database name, hence
creating the table in the wrong database and causing the following inserts to
fail since the table didn't exist in the intended database.
Fixed the bug by adding a parameter to store_create_info() that makes
the function print the database name before the table name, and using that
parameter in the calls that write the CREATE statement to the binary log. The
database name is only printed if it is different from the currently selected
database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
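A hedged sketch of the qualification logic; the parameter and helper names are
illustrative and do not reproduce the actual store_create_info() signature:

    #include <string>

    // Build the leading part of a CREATE TABLE statement, optionally
    // qualifying the table with its database. show_database mirrors the new
    // parameter: the binlog writer passes true, SHOW CREATE TABLE keeps
    // passing false, so its output is unchanged.
    static std::string create_table_prefix(const std::string &current_db,
                                           const std::string &table_db,
                                           const std::string &table_name,
                                           bool show_database) {
      std::string stmt = "CREATE TABLE ";
      if (show_database && table_db != current_db)
        stmt += "`" + table_db + "`.";    // qualify only when it differs
      stmt += "`" + table_name + "`";
      return stmt;
    }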
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or roll back correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by a failing UPDATE or INSERT on a
transactional table, which causes an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
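A deliberately simplified sketch of the flush-then-reset ordering; the structure
and members are invented stand-ins, not the server's binlog cache:

    #include <vector>

    // Invented stand-in for the binlog transaction cache.
    struct trx_cache_stub {
      std::vector<char> buffer;           // buffered row events
      bool has_pending_event = false;     // a row event not yet in the buffer

      void flush_pending() {
        if (has_pending_event) {
          buffer.push_back('E');          // placeholder for writing the event
          has_pending_event = false;      // pending state is now fully reset
        }
      }

      void rollback() {
        flush_pending();                  // always flush the pending event...
        buffer.clear();                   // ...then empty the cache, leaving
                                          // no stale pending state behind
      }
    };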
The problem behind this bug is that we need to get the list of tables
to be updated for a multi-table update statement, which requires opening
all the tables referenced by the statement and resolving all
the fields involved in the update in order to figure out the list of
tables to update. However, if there are replication filter rules,
some tables might not exist on the slave, resulting in a failure
before we can examine the filter rules.
The whole problem cannot be solved on the slave alone; the master must
record and send the information about which tables are involved in the
update to the slave, so that the slave does not need to open all the
tables referenced by the multi-table update statement to figure out
which tables are involved in the update.
So a status variable is added to Query_log_event to store the
value of the table map for update on the master. The slave will
try to get the value of this variable and use it to examine the
filter rules without opening any tables; if this value
is not available, the old approach is used, and thus the bug will
still occur when replicating from old masters.
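A sketch of the slave-side check, assuming the bitmap of updated tables arrives
with the Query_log_event; the helper and parameter names are illustrative:

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical filter callback: true means the table is ignored by the
    // replicate-* rules configured on the slave.
    static bool replicate_ignore_table(const std::string &db,
                                       const std::string &tbl) {
      return db == "ignored_db";   // placeholder rule for the example
    }

    // Decide whether a multi-table UPDATE can be skipped using only the
    // table map for update sent by the master, i.e. without opening any
    // tables on the slave. Bit i set means the i-th referenced table is
    // actually updated (not just read).
    static bool all_updated_tables_filtered(
        unsigned long long table_map_for_update,
        const std::vector<std::pair<std::string, std::string>> &tables) {
      for (std::size_t i = 0; i < tables.size() && i < 64; ++i) {
        if ((table_map_for_update & (1ULL << i)) == 0)
          continue;                // table is only read; filters don't apply
        if (!replicate_ignore_table(tables[i].first, tables[i].second))
          return false;            // an updated table is still replicated
      }
      return true;
    }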
build)
The crash was caused by freeing the internal parser stack during the parser
execution.
This occurred only for complex stored procedures, after reallocating the parser
stack using my_yyoverflow(), with the following C call stack:
- MYSQLparse()
- any rule calling sp_head::restore_lex()
- lex_end()
- x_free(lex->yacc_yyss), x_free(lex->yacc_yyvs)
The root cause is the implementation of stored procedures, which breaks the
assumption from 4.1 that there is only one LEX structure per parser call.
The solution is to separate the LEX structure into:
- attributes that represent a statement (the current LEX structure),
- attributes that relate to the syntax parser itself (Yacc_state),
so that parsing multiple statements in stored programs can create multiple
LEX structures while not changing the unique Yacc_state.
Now, Yacc_state and the existing Lex_input_stream are aggregated into
Parser_state, a structure that represents the complete state of the (Lexical +
Syntax) parser.
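Structurally, the separation can be pictured roughly like this (member names are
simplified stand-ins; the real classes carry far more state):

    #include <vector>

    // Per-statement attributes: stored programs may create several of these
    // during one parser call.
    struct LEX_stub { /* command, table list, item list, ... */ };

    // Attributes of the bison parser itself: the stacks grown by
    // my_yyoverflow() live here and stay unique for the whole parser call, so
    // lex_end() no longer frees memory the parser is still using.
    struct Yacc_state_stub {
      std::vector<short> yyss;   // state stack
      std::vector<void*> yyvs;   // semantic value stack (simplified)
    };

    struct Lex_input_stream_stub { const char *ptr = nullptr; };

    // Complete state of the (Lexical + Syntax) parser for one call.
    struct Parser_state_stub {
      Lex_input_stream_stub m_lip;
      Yacc_state_stub m_yacc;
    };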
The crash appeared to be the result of allocating an instance of Discrete_interval
with automatic storage duration that was referred to outside its declaration scope.
Fixed by correcting the backup-and-restore scheme of auto_inc_intervals_forced,
introduced by bug#33029, to use shallow copying; also added simulation code that
forces execution of those parts of the former bug's fixes that target a master and
slave running incompatible, bug#33029-prone versions.
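A heavily simplified sketch of backup and restore by shallow copying; the list
type, members and helper are illustrative only:

    // Simplified stand-in for a list of forced auto-increment intervals.
    struct interval_node { unsigned long long start, length; interval_node *next; };

    struct intervals_list_stub {
      interval_node *head = nullptr;
      // Shallow copy: take over the other list's head pointer without
      // duplicating or freeing nodes.
      void copy_shallow(const intervals_list_stub &other) { head = other.head; }
    };

    // Back up, temporarily override and restore the forced intervals by
    // shallow copying, so no restored pointer ever refers to an object whose
    // declaration scope has already ended.
    static void with_forced_intervals(intervals_list_stub &forced,
                                      const intervals_list_stub &override_value) {
      intervals_list_stub backup;
      backup.copy_shallow(forced);          // save the current head
      forced.copy_shallow(override_value);  // install the forced intervals
      /* ... execute the statement ... */
      forced.copy_shallow(backup);          // restore the original head
    }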
subselects into account
It is forbidden to use the SELECT INTO construction inside UNION statements
except in the last SELECT of the union. The parser records whether it
has seen INTO or not when parsing a UNION statement. But if the INTO was
legally used in an outer query, an error was thrown as soon as a UNION was
seen in a subquery. Fixed in 5.0 by remembering the nesting level of INTO
tokens and raising the error only when the INTO collides with the UNION.
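A sketch of the bookkeeping described above, with invented names (the real state
lives in the parser/LEX):

    // Simplified parser-side tracking for SELECT ... INTO inside UNION.
    struct select_into_tracker {
      int into_nest_level = -1;           // level where INTO was seen, -1 = none

      void on_into(int nest_level) { into_nest_level = nest_level; }

      // Raise the "INTO inside UNION" error only if the UNION being parsed is
      // at the same nesting level as the INTO; an INTO in an outer query must
      // not make a UNION in a subquery fail.
      bool union_conflicts(int union_nest_level) const {
        return into_nest_level != -1 && into_nest_level == union_nest_level;
      }
    };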
Add metadata validation to ~20 more SQL commands. Make sure that
these commands actually work in ps-protocol, since until now they
were enabled, but not carefully tested.
Fixes the ml003 bug found by Matthias during internal testing of the
patch.
WL#4165 Prepared statements: validation
WL#4166 Prepared statements: automatic re-prepare
Fixes
Bug#27430 Crash in subquery code when in PS and table DDL changed after PREPARE
Bug#27690 Re-execution of prepared statement after table was replaced with a view crashes
Bug#27420 A combination of PS and view operations cause error + assertion on shutdown
The basic idea of the patch is to keep track of table metadata between
prepared statement prepare and execute. If some table used in the statement
has changed, the prepared statement is re-prepared before execution.
See WL#4165 and WL#4166 contents and comments in the code for details
of the implementation.
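The core idea can be sketched as follows; the version map and structure names are
invented stand-ins (the server tracks table definition versions internally, not
in a map like this):

    #include <string>
    #include <unordered_map>

    // Hypothetical registry of table definition versions, bumped on DDL or
    // when a table is replaced by a view.
    static std::unordered_map<std::string, unsigned long> table_versions;

    struct prepared_stmt_stub {
      std::string table;
      unsigned long version_at_prepare = 0;
    };

    static void prepare(prepared_stmt_stub &stmt, const std::string &table) {
      stmt.table = table;
      stmt.version_at_prepare = table_versions[table];  // remember metadata version
    }

    // Validate metadata before execution; if the table changed since PREPARE,
    // transparently re-prepare instead of executing a stale plan.
    static void execute(prepared_stmt_stub &stmt) {
      if (table_versions[stmt.table] != stmt.version_at_prepare)
        prepare(stmt, stmt.table);        // automatic re-prepare
      /* ... execute the (re)prepared statement ... */
    }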
If a binlog file is manually replaced with a directory of the same name, the
internal purging did not handle the error of deleting the file, so that eventually
a post-execution guard fires an assert.
Fixed by reusing a snippet of code from bug@18199 to tolerate lack of the file, but
no other error, at an attempt to delete it.
The same applies to the index file deletion.
The cset carries pieces of manual merging.
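A small sketch of the tolerant deletion described above, assuming a POSIX-style
errno from remove(); the helper name is illustrative (the server routes this
through its own file-deletion wrapper):

    #include <cerrno>
    #include <cstdio>

    // Delete a binlog (or index) file during purge. A file that is already
    // gone is tolerated and purging continues; any other failure (for
    // example, the name now refers to a directory) is reported to the caller.
    static int purge_delete_file(const char *path) {
      if (std::remove(path) == 0)
        return 0;                   // deleted normally
      if (errno == ENOENT)
        return 0;                   // lack of the file: tolerate and continue
      return errno;                 // any other error is propagated
    }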
using a trig in SP
In all 5.0 versions, and in 5.1 up to but excluding 5.1.12, when a
stored routine or trigger caused an INSERT into an AUTO_INCREMENT
column, the generated AUTO_INCREMENT value should not be written
into the binary log. This means that if a statement does not
generate an AUTO_INCREMENT value itself, there will be no Intvar
event (SET INSERT_ID) associated with it, even if a stored routine
or trigger it invokes caused generation of such a value. Meanwhile,
when executing a stored routine or trigger, the INSERT_ID value is
ignored even if an INSERT_ID value is available, set by a SET
INSERT_ID statement.
Starting from MySQL 5.1.12, the generated AUTO_INCREMENT value is
written into the binary log, and the value is used, if available,
when executing the stored routine or trigger.
Prior to the fix of this bug, in MySQL 5.0 and in MySQL 5.1 before
5.1.12 (referred to as the buggy versions below), when a statement
that generated an AUTO_INCREMENT value in the top statement was
executed in the body of an SP, all statements in the SP after it
were treated as if they too had generated an AUTO_INCREMENT value
in the top statement. When a statement generated an AUTO_INCREMENT
value not in the top statement but in a function/trigger called by
it, an erroneous Intvar event was associated with the statement.
This erroneous INSERT_ID value did not cause a problem when
replicating between masters and slaves running 5.0.x or 5.1 before
5.1.12, because the erroneous INSERT_ID value was not used when
executing functions/triggers. But when replicating from buggy
versions to 5.1.12 or newer, which do use the INSERT_ID value in
functions/triggers, the erroneous value is used, which causes a
duplicate entry error and stops the slave.
The patch for 5.1 fixed it to ignore the SET INSERT_ID value when
executing functions/triggers if replicating from a master of a
buggy version; another patch for 5.0 fixed it not to generate the
erroneous Intvar event.
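As an illustration of the 5.1-side check described above, the sketch below ignores
a forced INSERT_ID inside a function/trigger when the master's version falls in
the buggy range; the structure, member names and version encoding are assumptions,
not the server's actual types:

    // Rough, self-contained sketch; rpl_context_stub and its members are
    // invented for illustration and do not exist in the server sources.
    struct rpl_context_stub {
      unsigned long master_version;   // e.g. 50045 for 5.0.45, 50112 for 5.1.12
      bool in_sub_stmt;               // executing a stored function or trigger
    };

    // Use the INSERT_ID forced by a SET INSERT_ID/Intvar event only when it is
    // trustworthy: masters older than 5.1.12 logged it erroneously for
    // functions/triggers, so inside a sub-statement it is ignored.
    static bool use_forced_insert_id(const rpl_context_stub &ctx) {
      const bool buggy_master = ctx.master_version < 50112;  // assumed cutoff
      return !(ctx.in_sub_stmt && buggy_master);
    }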