The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of the tables used in the CHECK OPTION expression were
added to the temporary table rows, and the multi_update::do_updates()
method was modified to restore the necessary record buffers
before evaluating the CHECK OPTION condition.
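For illustration only, here is a minimal, self-contained C++ sketch of the idea behind the fix, using hypothetical types rather than the server's handler and temporary-table code: each deferred update row carries the rowid of the merged table referenced by the CHECK OPTION condition, and that table is re-positioned to its saved rowid before the condition is evaluated, so the check never sees an expired buffer.

    #include <cstdio>
    #include <vector>

    struct Table {                 // stand-in for a storage-engine table handler
        std::vector<int> rows;     // column "a", addressed by rowid
        int record_buf = -1;       // current record buffer (may hold stale data)
        void position(size_t rowid) { record_buf = rows[rowid]; }   // restore buffer
    };

    struct DeferredUpdate {        // a "temporary table" row queued by the multi-UPDATE
        size_t target_rowid;       // row to update in the target table
        int new_value;
        size_t merged_rowid;       // saved rowid of the merged table used in CHECK OPTION
    };

    // CHECK OPTION condition of the view: target.a < merged.a
    static bool check_option_holds(int target_val, const Table &merged) {
        return target_val < merged.record_buf;
    }

    int main() {
        Table target{{1, 2, 3}};
        Table merged{{10, 20, 30}};

        // Phase 1 (SELECT part): queue updates, remembering rowids.
        std::vector<DeferredUpdate> tmp = {{0, 5, 0}, {2, 99, 2}};

        // ... later processing may overwrite merged.record_buf in the meantime ...
        merged.record_buf = -12345;                       // "expired" buffer

        // Phase 2 (do_updates): restore buffers from saved rowids before the check.
        for (const DeferredUpdate &du : tmp) {
            merged.position(du.merged_rowid);             // the essential restore step
            if (check_option_holds(du.new_value, merged))
                target.rows[du.target_rowid] = du.new_value;
            else
                std::printf("CHECK OPTION failed for rowid %zu\n", du.target_rowid);
        }
        for (int v : target.rows) std::printf("%d ", v);
        std::printf("\n");
        return 0;
    }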
Integer values with 10 digits may or may not fit into an int column
(e.g. 2147483647 vs 6147483647).
Thus when creating a temp table column for such an int we must
use bigint instead.
Fixed to use bigint.
Also substituted a "magic number" in a type assertion with a named constant.
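A minimal sketch of the digit-count reasoning, with hypothetical helper and constant names (not the server code): every 9-digit value fits a signed 32-bit INT, but a 10-digit value may not, so 10 digits must map to BIGINT.

    #include <cstdint>
    #include <cstdio>

    enum ColumnType { COL_INT, COL_BIGINT };

    // Named constant (hypothetical name) replacing a magic digit-count threshold.
    constexpr int MAX_DIGITS_SAFE_FOR_INT = 9;   // every 9-digit value fits in int32

    ColumnType temp_column_type_for(int decimal_digits) {
        return decimal_digits <= MAX_DIGITS_SAFE_FOR_INT ? COL_INT : COL_BIGINT;
    }

    int main() {
        std::printf("2147483647 fits in int32: %d\n", 2147483647LL <= INT32_MAX);
        std::printf("6147483647 fits in int32: %d\n", 6147483647LL <= INT32_MAX);
        // Both are 10-digit values, so the temp table column must be BIGINT:
        std::printf("type for 10 digits: %s\n",
                    temp_column_type_for(10) == COL_BIGINT ? "BIGINT" : "INT");
        return 0;
    }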
The bug was introduced by the patch for bug #16377.
The "+ INTERVAL" (Item_date_add_interval) function detects its result type
by the type of its first argument. But in some cases it returns STRING
as the result type. This happens when, for example, the first argument is a
DATE represented as string. All this makes the get_datetime_value()
function misinterpret such result and return wrong DATE/DATETIME value.
To avoid such cases in the fix for #16377 the code that detects correct result
field type on the first execution was added to the
Item_date_add_interval::get_date() function. Due to this the result
field type of the Item_date_add_interval item stored by the send_fields()
function differs from item's result field type at the moment when
the item is actually sent. It causes an assertion failure.
Now get_datetime_value() detects that a DATE value is returned by
an item not only by checking the result field type but also by comparing
the returned value with the constant 100000000L: any DATE value is
less than this value.
Removed result field type adjusting code from the
Item_date_add_interval::get_date() function.
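A small illustration of the heuristic, assuming the conventional numeric packing (DATE as YYYYMMDD, DATETIME as YYYYMMDDHHMMSS); the names here are hypothetical, not the actual get_datetime_value() code.

    #include <cstdio>

    constexpr long long DATE_UPPER_BOUND = 100000000LL;  // the 100000000L constant

    // Every packed DATE value is at most 99991231, so anything below the
    // bound can be treated as a DATE even if the reported result field
    // type is wrong (e.g. STRING).
    static bool looks_like_date(long long packed_value) {
        return packed_value < DATE_UPPER_BOUND;
    }

    int main() {
        long long date_val     = 20070523LL;        // 2007-05-23 as YYYYMMDD
        long long datetime_val = 20070523093000LL;  // 2007-05-23 09:30:00
        std::printf("%lld -> DATE? %d\n", date_val, looks_like_date(date_val));
        std::printf("%lld -> DATE? %d\n", datetime_val, looks_like_date(datetime_val));
        return 0;
    }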
The bug was that on the slave, a table with a unique field could still
execute an INSERT query as a REPLACE, whereas this is impossible on the master.
The reason for this artifact was incorrect usage of ndb extra() flags.
Fixed by resetting the flags in do_after.
There is an open issue with the symmetrical resetting of
table->file->extra(HA_EXTRA_NO_IGNORE_NO_KEY),
which I had to hand over to bug#27077.
The test for the current bug was committed in a cset for bug#27320.
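A self-contained sketch of the symmetry issue, with hypothetical names instead of the real handler/extra() API: a "write can replace" hint set in do_before must be reset in do_after, otherwise a later plain INSERT on the slave silently behaves like REPLACE.

    #include <cstdio>

    struct HandlerFlags {
        bool write_can_replace = false;   // stand-in for an ndb extra() hint
        bool ignore_no_key     = false;
    };

    static void do_before(HandlerFlags &f, bool event_allows_replace) {
        f.write_can_replace = event_allows_replace;
        f.ignore_no_key     = true;
    }

    static void do_after(HandlerFlags &f) {
        // The fix: reset the hints here; leaving them set is the bug.
        f.write_can_replace = false;
        f.ignore_no_key     = false;
    }

    static void apply_insert(const HandlerFlags &f) {
        std::printf("INSERT applied as %s\n",
                    f.write_can_replace ? "REPLACE" : "INSERT");
    }

    int main() {
        HandlerFlags flags;
        do_before(flags, /*event_allows_replace=*/true);   // e.g. a REPLACE event
        apply_insert(flags);
        do_after(flags);                                    // symmetric reset
        apply_insert(flags);                                // now a real INSERT again
        return 0;
    }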
Refining the tests since pb revealed the older version's fragility: the error from SF()
due to being killed may differ between environments.
DBUG_ASSERT instead of assert.
No longer showing SP names.
SHOW CREATE VIEW uses Item::print() methods to reconstruct the
statement text from the parse tree.
The print() method for stored procedure calls needs to allocate
space to print the function's quoted name.
It was incorrectly calculating the length of the buffer needed
(it was too short).
Fixed to reflect the actual space needed.
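A minimal sketch of the length calculation, using a hypothetical helper rather than the server's print() code: quoting an identifier may double every character (an embedded quote character is escaped by doubling) and adds the two surrounding quotes, so the worst case for a scheme like this is 2 * name_length + 2; reserving less than that truncates names in the reconstructed statement text.

    #include <string>
    #include <cstdio>

    static std::string quote_identifier(const std::string &name, char q = '`') {
        std::string out;
        out.reserve(2 * name.size() + 2);     // worst-case reservation
        out.push_back(q);
        for (char c : name) {
            if (c == q) out.push_back(q);     // escape by doubling the quote char
            out.push_back(c);
        }
        out.push_back(q);
        return out;
    }

    int main() {
        std::printf("%s\n", quote_identifier("my`func").c_str());  // prints `my``func`
        return 0;
    }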
When a Windows console application that has an open console (e.g. mysqld-nt
started with the --console option) encounters certain types of errors
(like no floppy disk in a floppy drive), the OS will pop up an
"abort/retry/ignore" dialog and block the application (depending on a
registry setting: see http://msdn2.microsoft.com/en-us/embedded/aa731206.aspx
for details).
Fixed by disabling the dialog popups for every error except a GPF and
alignment errors. This is safe to do as the actual error gets reported
to (and handled by) mysqld.
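For reference, a sketch of the Win32 call involved; the exact flags are an assumption here, chosen to match the description above (critical-error and open-file dialogs suppressed, GPF and alignment faults left alone).

    #ifdef _WIN32
    #include <windows.h>

    static void disable_error_dialogs() {
        // Keep whatever bits are already set, add the two that suppress dialogs.
        UINT mode = SetErrorMode(0);
        SetErrorMode(mode | SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX);
    }

    int main() {
        disable_error_dialogs();
        // Failing device/file operations now return an error code to the caller
        // instead of blocking the process on an "abort/retry/ignore" dialog.
        return 0;
    }
    #else
    int main() { return 0; }   // the call is Windows-specific
    #endif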
Fix a race
Wait at the end of the test for all events to finish before continuing
to the next result. This should be done because the
server won't be restarted, and although events are dropped with
DROP DATABASE, they could still be executing in memory.
The reason for the bug was that replaying a query on the slave was not possible because its event
was recorded with the killed error. Due to the specifics of handling INSERT, whose per-row while-loop
cannot be interrupted by killing, the query on a transactional table should not have appeared in the binlog unless
there was a call to a stored routine that got interrupted by the kill (and then an error must be
returned out of the loop).
The offered solution adds the following rule for binlogging of INSERT, which accounts for the above
specifics:
For an INSERT on a transactional table, if no error was set, the raised KILLED flag alone
is harmless and is ignored by masking it out at the time the binlog event is created.
For both table types, the combination of a raised error and the KILLED flag indicates that there
was potentially partial execution on the master and consistency is in question.
In that case the code continues to binlog an event with an appropriate killed error.
The fix relies on the specified behaviour of stored routines, which must propagate the error
to the top-level query handling if the thd->killed flag was raised during routine execution.
The patch adds an argument with a default killed-status-unset value to Query_log_event::Query_log_event().
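A self-contained sketch of the rule above, with hypothetical names (the real decision is taken when the Query_log_event is created): a set KILLED flag without an error is masked out for a transactional-table INSERT; an error combined with the KILLED flag keeps the killed status in the event.

    #include <cstdio>

    enum KilledStatus { NOT_KILLED = 0, KILLED_QUERY = 1 };

    struct StmtState {
        bool transactional_table;
        bool error_set;              // an error was raised during execution
        KilledStatus killed;         // thd->killed at binlog time
    };

    // Killed status recorded in the (hypothetical) query log event.
    static KilledStatus killed_status_for_binlog(const StmtState &s) {
        if (s.transactional_table && !s.error_set)
            return NOT_KILLED;                   // harmless flag: mask it out
        return s.killed;                         // otherwise keep the real status
    }

    int main() {
        StmtState clean   {true,  false, KILLED_QUERY};  // kill arrived after the row loop
        StmtState partial {false, true,  KILLED_QUERY};  // potentially partial execution
        std::printf("clean insert logged with killed=%d\n",
                    killed_status_for_binlog(clean));
        std::printf("partial insert logged with killed=%d\n",
                    killed_status_for_binlog(partial));
        return 0;
    }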
This is a one-liner which fixes the semantics of SHOW SLAVE HOSTS to
display the list of slaves currently registered on the host on which
it was issued.
- A race condition caused brief unavailability when trying to access
a table.
- The variable 'grant_option' was removed to resolve the race condition and
to simplify the design pattern. This flag was originally intended to optimize
grant checks.
- A race condition caused brief unavailability when trying to access
a table.
- The unprotected variable 'grant_option' wasn't intended to alternate
during normal execution. Variable initialization was moved to grant_init
and the lines responsible for the alternation were removed.
Bug#4968 ""Stored procedure crash if cursor opened on altered table"
Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
stored procedure."
Bug#19733 "Repeated alter, or repeated create/drop, fails"
Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
Bug#24879 "Prepared Statements: CREATE TABLE (UTF8 KEY) produces a
growing key length" (this bug is not fixed in 5.0)
Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
statements in stored routines or as prepared statements caused
incorrect results (and crashes in versions prior to 5.0.25).
In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE ...
SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.
The problem of bugs 4968, 19733, 19182 and 6895 was that the functions
mysql_prepare_table, mysql_create_table and mysql_alter_table are not
re-execution friendly: during their operation they modify contents
of LEX (members create_info, alter_info, key_list, create_list),
thus making the LEX unusable for the next execution.
In particular, these functions removed processed columns and keys from
create_list, key_list and drop_list. Search the code in sql_table.cc
for drop_it.remove() and similar patterns to find evidence.
The fix is to supply to these functions a usable copy of each of the
above structures at every re-execution of an SQL statement.
To simplify memory management, LEX::key_list and LEX::create_list
were added to LEX::alter_info, a fresh copy of which is created for
every execution.
The problem of crashing bug 22060 stemmed from the fact that the above
mentioned functions were not only modifying the HA_CREATE_INFO structure
in LEX, but also were changing it to point to areas in volatile memory
of the execution memory root.
The patch solves this problem by creating and using an on-stack
copy of HA_CREATE_INFO in mysql_execute_command.
Additionally, this patch splits the part of mysql_alter_table
that analyzes and rewrites information from the parser into
a separate function - mysql_prepare_alter_table, in analogy with
mysql_prepare_table, which is renamed to mysql_prepare_create_table.
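To make the "re-execution friendly" point concrete, here is a self-contained sketch with hypothetical types standing in for LEX and its lists (not the server code): a function that consumes the parsed column list in place works only once; handing it a fresh copy on every execution restores correct re-execution, which is the principle behind the fix.

    #include <cstdio>
    #include <list>
    #include <string>

    struct ParsedCreate {                 // stand-in for LEX members (create_list, ...)
        std::list<std::string> columns;
    };

    // Destructive helper, in the spirit of the functions named above: it pops
    // columns while processing them, so the input is unusable afterwards.
    static void prepare_table(ParsedCreate &info) {
        while (!info.columns.empty()) {
            std::printf("processing column %s\n", info.columns.front().c_str());
            info.columns.pop_front();     // the mutation that breaks re-execution
        }
    }

    int main() {
        const ParsedCreate parse_time{{"id", "name"}};   // built once by the parser

        // Re-execution friendly: give the destructive function a copy each time.
        for (int execution = 1; execution <= 2; ++execution) {
            ParsedCreate per_execution_copy = parse_time;   // fresh copy per run
            std::printf("-- execution %d --\n", execution);
            prepare_table(per_execution_copy);
        }
        return 0;
    }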