Sometimes `mysqldump --hex-blob' overran the output buffer by one '\0' byte.
The dump_table() function has been fixed to reserve one more byte for the
trailing '\0' byte of the dumped string.
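A minimal standalone sketch of the sizing rule (illustrative only, not the
actual dump_table() code): 2 * length bytes hold the hex digits and one extra
byte holds the terminating '\0'; without that extra byte the final '\0' lands
one position past the end of the buffer.

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    /* Hex-encode a blob into a newly allocated C string.  The "+ 1" mirrors
       the fix: 2 * length bytes for the hex digits, one more for the '\0'. */
    static char *hex_encode(const unsigned char *blob, std::size_t length)
    {
      char *buf = (char *) std::malloc(2 * length + 1);  /* + 1 for the last '\0' */
      if (!buf)
        return 0;
      char *p = buf;
      for (std::size_t i = 0; i < length; i++)
      {
        std::sprintf(p, "%02X", blob[i]);                 /* writes 2 digits + '\0' */
        p += 2;
      }
      *p = '\0';
      return buf;
    }

    int main()
    {
      const unsigned char blob[] = { 0x00, 0xAB, 0xFF };
      char *hex = hex_encode(blob, sizeof(blob));
      if (hex)
        std::printf("0x%s\n", hex);
      std::free(hex);
      return 0;
    }
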
If a stored function or a trigger was killed, it aborted but no error
was thrown, which allowed the calling statement to continue without notice.
This could lead to wrong data being inserted, updated or deleted, since in such
cases the correct result of a stored function isn't guaranteed. In the case
of triggers it allowed the calling statement to ignore the kill signal and to
waste time re-evaluating triggers that will always fail because the
thd->killed flag is still set.
Now the Item_func_sp::execute() and sp_head::execute_trigger() functions
check whether the function or trigger was killed during execution and
throw an appropriate error if so.
Now the fill_record() function stops filling the record if an error was
reported through thd->net.report_error.
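As a rough illustration of the pattern behind the fix (the Session struct and
function names below are invented, not the server's THD or Item_func_sp code):
the kill flag is re-checked after the routine body returns, and a pending kill
is turned into an explicit error instead of letting the caller continue.

    #include <cstdio>

    // Simplified stand-ins for the server's session and error state.
    struct Session
    {
      bool killed;        // set asynchronously by KILL
      bool error_raised;  // set when an error has been reported
    };

    static bool run_routine_body(Session &session)
    {
      // ... execute the stored function or trigger body ...
      session.killed = true;   // pretend a KILL arrived mid-execution
      return true;             // the body itself "succeeded"
    }

    // Pattern from the fix: even if the body returned success, a pending kill
    // must be converted into an error so the calling statement aborts.
    static bool execute_routine(Session &session)
    {
      bool ok = run_routine_body(session);
      if (session.killed)
      {
        std::fprintf(stderr, "ERROR: query execution was interrupted\n");
        session.error_raised = true;
        return false;
      }
      return ok;
    }

    int main()
    {
      Session session = {false, false};
      if (!execute_routine(session))
        std::printf("caller statement aborted\n");
      return 0;
    }
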
When processing the USE/FORCE INDEX hints
the optimizer was not checking whether the indexes
specified are enabled (see ALTER TABLE ... DISABLE KEYS).
Fixed by:
Backporting the fix for bug 20604 to 5.0.
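A simplified sketch of the idea behind the backported fix, with made-up names
rather than the optimizer's real data structures: the keys named in
USE/FORCE INDEX are intersected with the set of keys that are actually
enabled, so a key disabled by ALTER TABLE ... DISABLE KEYS never reaches the
optimizer.

    #include <bitset>
    #include <cstddef>
    #include <cstdio>
    #include <string>

    static const std::size_t MAX_KEYS = 64;

    // Only hinted keys that are also enabled may be handed to the optimizer.
    static std::bitset<MAX_KEYS> usable_keys(const std::bitset<MAX_KEYS> &hinted,
                                             const std::bitset<MAX_KEYS> &enabled)
    {
      return hinted & enabled;
    }

    int main()
    {
      std::bitset<MAX_KEYS> hinted, enabled;
      hinted.set(0);    // USE INDEX (k0, k1)
      hinted.set(1);
      enabled.set(1);   // k0 was disabled by ALTER TABLE ... DISABLE KEYS
      std::printf("keys the optimizer may use: %s\n",
                  usable_keys(hinted, enabled).to_string().c_str());
      return 0;
    }
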
When a new DATE/DATETIME field without a default value is added by
ALTER TABLE, the '0000-00-00' value is used as the default. But it wasn't
checked whether such a value was allowed by the current SQL mode. Because of
this, '0000-00-00' values were allowed for DATE/DATETIME fields even in
NO_ZERO_DATE mode.
Now the mysql_alter_table() function checks whether the '0000-00-00' value
is allowed for DATE/DATETIME fields by the current SQL mode.
The new error_if_not_empty flag is used in the mysql_alter_table() function
to indicate that it should abort if the table being altered isn't empty.
The new new_datetime_field field is used in the mysql_alter_table() function
for error reporting purposes.
The new error_if_not_empty parameter is added to the copy_data_between_tables()
function to indicate that it should return an error if the source table isn't empty.
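A condensed sketch of that control flow; the names error_if_not_empty and
copy_data_between_tables mirror the text above, but the types and surrounding
logic are invented for illustration.

    #include <cstdio>

    struct AlterContext
    {
      bool no_zero_date_mode;        // NO_ZERO_DATE set in sql_mode
      bool adds_datetime_no_default; // new DATE/DATETIME column without a default
    };

    static bool copy_data_between_tables(long source_rows, bool error_if_not_empty)
    {
      // The implicit '0000-00-00' default would have to be written into every
      // existing row, so a non-empty source table makes the ALTER fail.
      if (error_if_not_empty && source_rows > 0)
      {
        std::fprintf(stderr, "ERROR: invalid default value for the new column\n");
        return false;
      }
      return true;  // copy the rows as usual
    }

    static bool mysql_alter_table_sketch(const AlterContext &ctx, long source_rows)
    {
      bool error_if_not_empty =
          ctx.no_zero_date_mode && ctx.adds_datetime_no_default;
      return copy_data_between_tables(source_rows, error_if_not_empty);
    }

    int main()
    {
      AlterContext ctx = {true, true};
      if (!mysql_alter_table_sketch(ctx, 3 /* existing rows */))
        std::printf("ALTER TABLE aborted\n");
      return 0;
    }
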
my_decimal can in some cases contain more decimal digits than
are officially supported (DECIMAL_MAX_PRECISION), so we need to
prepare a bigger buffer for the resulting string.
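A small sketch of the sizing rule; the concrete digit counts (65 documented,
81 as an internal worst case) are assumptions used only for illustration.

    #include <cstdio>

    static const int DECIMAL_MAX_PRECISION = 65;  // assumed documented limit

    // Size the string buffer from the value's actual digit count, which may
    // exceed the documented limit, plus sign, decimal point and trailing '\0'.
    static int decimal_string_buffer_size(int digits)
    {
      if (digits < DECIMAL_MAX_PRECISION)
        digits = DECIMAL_MAX_PRECISION;
      return digits + 3;
    }

    int main()
    {
      // An intermediate value may carry more digits than the documented limit.
      std::printf("buffer bytes for an 81-digit value: %d\n",
                  decimal_string_buffer_size(81));
      return 0;
    }
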
The Arg_comparator::compare_datetime() comparator caches its arguments if
they are constants, i.e. const_item() returns true.
Item_func_get_user_var::const_item() returns true or false based on
the current query_id and the query_id where the variable was created.
Thus, even though a query can change the variable's value, const_item() will
still return true. All this leads to a wrong comparison result when an object
of the Item_func_get_user_var class is involved.
Now the Arg_comparator::can_compare_as_dates() and get_datetime_value()
functions never cache the result of the GET_USER_VAR() function (the
Item_func_get_user_var class).
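A toy illustration of the caching rule (Item here is a stand-in struct, not
the server's Item hierarchy): a converted value is cached for the datetime
comparison only if the argument is constant and is not a user-variable read,
because a user variable can change between executions even while const_item()
keeps returning true.

    #include <cstdio>

    struct Item
    {
      bool const_item;   // what const_item() would report
      bool is_user_var;  // true for GET_USER_VAR() / Item_func_get_user_var
      long long value;   // value converted to the DATETIME comparison format
    };

    // Cache only items that are constant AND not user-variable reads.
    static bool may_cache_datetime_value(const Item &item)
    {
      return item.const_item && !item.is_user_var;
    }

    int main()
    {
      Item string_literal = {true, false, 20070101000000LL};
      Item user_var       = {true, true,  20070101000000LL};
      std::printf("cache literal: %d, cache @user_var: %d\n",
                  may_cache_datetime_value(string_literal),
                  may_cache_datetime_value(user_var));
      return 0;
    }
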
Conversion errors when constructing the condition for an
IN predicate were treated as if the affected column contained
NULL. If such an IN predicate is inside NOT, we get wrong
results.
Corrected the handling of conversion errors in an IN predicate
that is resolved by unique_subquery (through
subselect_uniquesubquery_engine).
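A toy model of the distinction behind the fix, with invented names: a failed
conversion of the left-hand value means it can never match an indexed row, so
the IN predicate is FALSE (and NOT IN is TRUE), whereas a NULL lookup result
stays UNKNOWN.

    #include <cstdio>

    enum TriBool { FALSE_VAL, TRUE_VAL, UNKNOWN_VAL };

    static TriBool in_predicate(bool conversion_error, bool found, bool found_null)
    {
      if (conversion_error)
        return FALSE_VAL;    // the value can never match any row
      if (found)
        return TRUE_VAL;
      return found_null ? UNKNOWN_VAL : FALSE_VAL;
    }

    static TriBool negate(TriBool v)
    {
      if (v == UNKNOWN_VAL)
        return UNKNOWN_VAL;  // NOT UNKNOWN stays UNKNOWN
      return v == TRUE_VAL ? FALSE_VAL : TRUE_VAL;
    }

    int main()
    {
      // Before the fix a conversion error behaved like UNKNOWN, so NOT IN
      // wrongly became UNKNOWN instead of TRUE.
      std::printf("NOT IN after conversion error: %d (1 = TRUE)\n",
                  negate(in_predicate(true, false, false)));
      return 0;
    }
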
- The problem was reported as an SP variable using itself as the
right value inside SUBSTR causing corruption of data.
- This bug could not be verified in either 5.0bk or 5.1bk.
- Added a test case to prevent future regressions.
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
implicit insert"
Also fixes and adds test cases for bugs:
20497 "Trigger with INSERT DELAYED causes Error 1165"
21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
table with a trigger".
Post-review fixes.
Problem:
In MySQL, INSERT DELAYED is a way to pipe all inserts into a
given table through a dedicated thread. This is necessary for
simplistic storage engines like MyISAM, which do not have internal
concurrency control or threading and thus cannot
achieve efficient INSERT throughput without support from the SQL layer.
DELAYED INSERT works as follows:
For every distinct table that can accept DELAYED inserts and has
pending data to insert, a dedicated thread is created to write the data
to disk. All user connection threads that attempt to
delayed-insert into this table interact with the dedicated thread in
producer/consumer fashion: all records to be inserted are pushed
into the queue of the dedicated thread, which fetches the records and
writes them.
In this design, client connection threads never open or lock
the delayed insert table.
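In rough outline, this producer/consumer scheme looks like the following
sketch (standard C++ threads stand in for the server's connection and
delayed-insert threads; none of this is the actual sql_insert.cc code).

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct DelayedQueue
    {
      std::queue<std::string> rows;
      std::mutex lock;
      std::condition_variable not_empty;
      bool done = false;
    };

    // The dedicated writer thread: the only thread that touches the table.
    static void writer_thread(DelayedQueue &q)
    {
      for (;;)
      {
        std::unique_lock<std::mutex> guard(q.lock);
        q.not_empty.wait(guard, [&] { return !q.rows.empty() || q.done; });
        if (q.rows.empty())
          return;                                  // shut down once drained
        std::string row = q.rows.front();
        q.rows.pop();
        guard.unlock();
        std::printf("writer: inserting %s\n", row.c_str());
      }
    }

    int main()
    {
      DelayedQueue q;
      std::thread writer(writer_thread, std::ref(q));
      for (int i = 0; i < 3; i++)                  // "connection threads" producing rows
      {
        std::lock_guard<std::mutex> guard(q.lock);
        q.rows.push("row#" + std::to_string(i));
        q.not_empty.notify_one();
      }
      {
        std::lock_guard<std::mutex> guard(q.lock);
        q.done = true;
        q.not_empty.notify_one();
      }
      writer.join();
      return 0;
    }
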
This functionality was introduced in version 3.23 and does not take
into account the existence of triggers, views, or pre-locking.
E.g. if INSERT DELAYED is called from a stored function, which,
in turn, is called from another stored function that uses the delayed
table, a deadlock can occur, because delayed locking bypasses
pre-locking. Besides:
* the delayed thread works directly with the subject table through
the storage engine API and does not invoke triggers
* even if it were patched to invoke triggers, if the triggers,
in turn, used other tables, the delayed thread would
have to open and lock the involved tables (use pre-locking).
* even if it were patched to use pre-locking, without deadlock
detection the delayed thread could easily lock out user
connection threads in the case when the same table is used both
in a trigger and on the right side of the insert query:
the delayed thread would not release its locks until all inserts
are complete, and the user connection cannot complete its inserts
without having locks on the tables used on the right side of the
query.
Solution:
These considerations suggest two general alternatives for the
future of INSERT DELAYED:
* it is considered a full-fledged alternative to normal INSERT
* it is regarded as an optimisation that is only relevant
for simplistic engines.
Since we missed our chance to provide complete support of new
features when 5.0 was in development, the first alternative
is currently infeasible.
However, even the second alternative, which is to detect
new features and convert a DELAYED insert into a normal insert,
is not easy to implement.
The catch-22 is that we don't know if the subject table has triggers
or is a view before we open it, and we only open it in the
delayed thread. We don't know if the query involves pre-locking
until we have opened all tables, and we always first create
the delayed thread, and only then open the remaining tables.
This patch detects the problematic scenarios and converts
DELAYED INSERT to a normal INSERT using the following approach:
* if the statement is executed under pre-locking (e.g. from
within a stored function or trigger) or the right
side may require pre-locking, we detect the situation
before creating a delayed insert thread and convert the statement
to a conventional INSERT.
* if the subject table is a view or has triggers, we shut down
the delayed thread and convert the statement to a conventional
INSERT.
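A condensed sketch of that decision, with invented predicate names rather
than the server's actual checks.

    #include <cstdio>

    struct InsertContext
    {
      bool in_prelocked_mode;        // executing inside a trigger/stored function
      bool rhs_may_need_prelocking;  // right-hand side uses functions/triggers
      bool table_is_view;
      bool table_has_triggers;
    };

    enum InsertPath { DELAYED_PATH, CONVERTED_TO_NORMAL };

    static InsertPath choose_insert_path(const InsertContext &ctx)
    {
      // Checked before the delayed-insert thread is created.
      if (ctx.in_prelocked_mode || ctx.rhs_may_need_prelocking)
        return CONVERTED_TO_NORMAL;

      // Only known after the table has been opened; in that case the delayed
      // thread is shut down and the statement falls back to a normal INSERT.
      if (ctx.table_is_view || ctx.table_has_triggers)
        return CONVERTED_TO_NORMAL;

      return DELAYED_PATH;
    }

    int main()
    {
      InsertContext ctx = {false, false, false, true};
      std::printf("path: %s\n",
                  choose_insert_path(ctx) == DELAYED_PATH ? "DELAYED"
                                                          : "normal INSERT");
      return 0;
    }
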
In the case of an overflow in the decimal-to-integer conversion
we didn't return the proper boundary value, but just the partial result
of the conversion calculated up to the moment the error was detected.
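A minimal sketch of the corrected behaviour (a hand-rolled conversion, not the
decimal library itself): on overflow the conversion returns the type's
boundary value instead of whatever partial result had accumulated.

    #include <climits>
    #include <cstdio>

    static long long decimal_to_longlong(const char *digits, bool negative,
                                         bool *overflow)
    {
      unsigned long long acc = 0;
      unsigned long long limit =
          negative ? (unsigned long long) LLONG_MAX + 1ULL
                   : (unsigned long long) LLONG_MAX;
      *overflow = false;
      for (const char *p = digits; *p; p++)
      {
        unsigned digit = (unsigned) (*p - '0');       /* digits only, no validation */
        if (acc > (limit - digit) / 10)
        {
          *overflow = true;
          return negative ? LLONG_MIN : LLONG_MAX;    /* boundary, not the partial value */
        }
        acc = acc * 10 + digit;
      }
      if (negative)
        return acc == (unsigned long long) LLONG_MAX + 1ULL ? LLONG_MIN
                                                            : -(long long) acc;
      return (long long) acc;
    }

    int main()
    {
      bool overflow;
      long long v = decimal_to_longlong("18446744073709551617", false, &overflow);
      std::printf("value=%lld overflow=%d\n", v, overflow);
      return 0;
    }
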
A wrong condition was used to check whether the
Arg_comparator::can_compare_as_dates() function had calculated the value of the
string constant. When comparing a non-const STRING function with a constant
DATETIME function, this led to saving an arbitrary value as the cached value of
the DATETIME function.
Now the Arg_comparator::set_cmp_func() function initializes the const_value
variable to an impossible DATETIME value (-1), and this const_value is
cached only if it was changed by the Arg_comparator::can_compare_as_dates()
function.
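A toy illustration of that sentinel pattern (the class below is invented; only
the -1 initialization and the changed-value check mirror the fix).

    #include <cstdio>

    struct DatetimeComparator
    {
      long long const_value;

      void set_cmp_func(bool arg_is_string_constant)
      {
        const_value = -1;  // impossible packed DATETIME: "nothing cached yet"
        can_compare_as_dates(arg_is_string_constant);
        if (const_value != -1)
          std::printf("caching converted constant %lld\n", const_value);
        else
          std::printf("nothing cached; value will be converted per row\n");
      }

      void can_compare_as_dates(bool arg_is_string_constant)
      {
        if (arg_is_string_constant)
          const_value = 20070812000000LL;  // pretend conversion of the constant
      }
    };

    int main()
    {
      DatetimeComparator cmp;
      cmp.set_cmp_func(false);  // non-const string argument: no caching
      cmp.set_cmp_func(true);   // constant string argument: cached once
      return 0;
    }
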