The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of tables used in the CHECK OPTION expression were
added to temporary table rows. The multi_update::do_updates()
method was modified to restore the necessary record buffers
before evaluating the CHECK OPTION condition.
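For illustration, a minimal sketch of the affected scenario (table
and view names are hypothetical):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10 WITH CHECK OPTION;
    -- Multi-table UPDATE through the view: before the fix the CHECK
    -- OPTION condition could be evaluated over stale record buffers left
    -- by the SELECT phase, so the outcome depended on row order in the
    -- merged tables.
    UPDATE v1, t2 SET v1.a = v1.a + 1 WHERE v1.a = t2.b;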
Integer values with 10 digits may or may not fit into an int column
(e.g. 2147483647 vs 6147483647).
Thus when creating a temp table column for such an int we must
use bigint instead.
Fixed to use bigint.
Also substituted a "magic number" with a named constant.
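A sketch of the boundary (UNION is just one way to force a temporary
table; the query is illustrative):

    -- Both literals have 10 digits, but only the first fits a 32-bit INT:
    SELECT 2147483647 AS n UNION SELECT 6147483647;
    -- The temporary table column for n must therefore be BIGINT, not INT.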
A result field type assertion could fail.
The bug was introduced by the patch for bug #16377.
The "+ INTERVAL" (Item_date_add_interval) function detects its result type
by the type of its first argument. But in some cases it returns STRING
as the result type. This happens when, for example, the first argument is a
DATE represented as string. All this makes the get_datetime_value()
function misinterpret such result and return wrong DATE/DATETIME value.
To avoid such cases in the fix for #16377 the code that detects correct result
field type on the first execution was added to the
Item_date_add_interval::get_date() function. Due to this the result
field type of the Item_date_add_interval item stored by the send_fields()
function differs from item's result field type at the moment when
the item is actually sent. It causes an assertion failure.
Now get_datetime_value() detects that a DATE value is returned by an
item not only by checking the result field type but also by comparing
the returned value with the 100000000L constant: any DATE value is
less than this constant.
The result field type adjusting code has been removed from the
Item_date_add_interval::get_date() function.
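A sketch of the kind of expression involved, where the first argument
is a DATE represented as a string:

    -- Item_date_add_interval reported STRING as its result type here,
    -- although the value is in fact a DATE:
    SELECT '2007-04-15' + INTERVAL 1 DAY;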
SHOW CREATE VIEW was no longer showing SP names.
SHOW CREATE VIEW uses Item::print() methods to reconstruct the
statement text from the parse tree.
The print() method for stored procedure calls needs to allocate space
to print the function's quoted name. It was incorrectly calculating
the length of the buffer needed (the buffer was too short).
Fixed to reflect the actual space needed.
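A minimal sketch of the statement shape (function and view names are
hypothetical):

    CREATE FUNCTION f1() RETURNS INT RETURN 1;
    CREATE VIEW v1 AS SELECT f1() AS c;
    -- Reconstructs the view body via Item::print(); the print buffer
    -- must have room for the function's quoted name:
    SHOW CREATE VIEW v1;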
Joins with constant outer tables did not return null-complemented
rows when conditions were evaluated to FALSE.
Wrong results were returned because conditions over constant outer
tables, when being pushed down, were erroneously enclosed in the guard
function used for WHERE conditions.
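A sketch of the affected class of query, assuming t1 holds a single
row and is therefore a constant outer table:

    -- The ON condition over the constant outer table is FALSE, so the
    -- t1 row should be returned with t2's columns NULL-complemented:
    SELECT * FROM t1 LEFT JOIN t2 ON t1.a > 10 AND t1.a = t2.b;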
Sometimes `mysqldump --hex-blob' overran the output buffer by one
'\0' byte. The dump_table() function has been fixed to reserve one
byte more for the trailing '\0' of the dumped string.
A server abort could occur for an updatable view with CHECK OPTION
and a subquery in the WHERE condition.
The abort was triggered by setting the value of join->tables for
subqueries in the function JOIN::cleanup. This function was called
after an invocation of the JOIN::join_free method for subqueries
used in the WHERE condition.
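A sketch of a statement of this shape (names hypothetical):

    CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10 WITH CHECK OPTION;
    -- The subquery in the WHERE clause gets its own JOIN object, which
    -- is what the cleanup-order problem concerned:
    UPDATE v1 SET a = a + 1 WHERE a IN (SELECT b FROM t2);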
If a stored function or a trigger was killed, it aborted but no error
was thrown, which allowed the calling statement to continue without
notice. This could lead to wrong data being inserted, updated, or
deleted, as in such cases the correct result of a stored function
isn't guaranteed. In the case of triggers it also allowed the calling
statement to ignore the kill signal and to waste time re-evaluating
triggers that would always fail because the thd->killed flag was
still set.
Now the Item_func_sp::execute() and sp_head::execute_trigger()
functions check whether the function or trigger was killed during
execution and throw an appropriate error if so.
Now the fill_record() function stops filling the record if an error
was reported through thd->net.report_error.
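For illustration (slow_func and the connection id are hypothetical):

    -- Session 1: a statement that runs a long stored function.
    SELECT slow_func();
    -- Session 2: kill the statement mid-execution.
    KILL QUERY 42;
    -- Before the fix session 1 carried on without an error; now an
    -- appropriate error is thrown.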
When processing the USE/FORCE index hints, the optimizer was not
checking whether the specified indexes are enabled (see ALTER TABLE
... DISABLE KEYS).
Fixed by backporting the fix for bug 20604 to 5.0.
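A sketch of the scenario (table and index names are hypothetical):

    CREATE TABLE t1 (a INT, KEY idx_a (a)) ENGINE=MyISAM;
    ALTER TABLE t1 DISABLE KEYS;    -- idx_a is now disabled
    -- Before the fix the hint was accepted without checking whether
    -- the index is enabled:
    SELECT * FROM t1 FORCE INDEX (idx_a) WHERE a = 1;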
ALTER TABLE could add '0000-00-00' DATE/DATETIME defaults even in the
NO_ZERO_DATE SQL mode.
When a new DATE/DATETIME field without a default value is added by
ALTER TABLE, the '0000-00-00' value is used as the default. But it
wasn't checked whether such a value was allowed by the current sql
mode. Due to this, '0000-00-00' values were allowed for DATE/DATETIME
fields even in the NO_ZERO_DATE mode.
Now the mysql_alter_table() function checks whether the '0000-00-00'
value is allowed for DATE/DATETIME fields by the current sql mode.
The new error_if_not_empty flag is used in the mysql_alter_table()
function to indicate that it should abort if the table being altered
isn't empty.
The new new_datetime_field field is used in the mysql_alter_table()
function for error reporting purposes.
The new error_if_not_empty parameter is added to the
copy_data_between_tables() function to indicate that it should return
an error if the source table isn't empty.
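A sketch of the now-rejected case (table name hypothetical):

    SET sql_mode = 'NO_ZERO_DATE';
    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1);
    -- The existing row would implicitly get '0000-00-00' in the new
    -- column; with the fix this ALTER is aborted because the table
    -- isn't empty:
    ALTER TABLE t1 ADD COLUMN d DATE NOT NULL;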
my_decimal can in some cases contain more decimal digits than are
officially supported (DECIMAL_MAX_PRECISION), so we need to prepare a
bigger buffer for the resulting string.
- The SQL commands used by mysql_upgrade are written to be run
with sql_mode set to '' - thus the scripts should change sql_mode
for the session to make sure the SQL is legal.
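For example, a script can start with:

    -- Pin the session mode so the fixed SQL text stays legal no matter
    -- what the server's global sql_mode is:
    SET SESSION sql_mode = '';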
A DATE/DATETIME comparison could return a wrong result when a user
variable is involved.
The Arg_comparator::compare_datetime() comparator caches its
arguments if they are constants, i.e. if const_item() returns true.
Item_func_get_user_var::const_item() returns true or false based on
the current query_id and the query_id where the variable was created.
Thus even if a query can change the variable's value, its const_item()
will still return true. All this led to a wrong comparison result when
an object of the Item_func_get_user_var class was involved.
Now the Arg_comparator::can_compare_as_dates() and
get_datetime_value() functions never cache the result of the
GET_USER_VAR() function (the Item_func_get_user_var class).
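A sketch of a query that changes the variable it compares (assuming a
hypothetical table t1 with a DATE column d):

    SET @d = '2007-01-01';
    -- The same query reads and reassigns @d; before the fix the first
    -- value could be cached as a constant and reused for the second
    -- comparison:
    SELECT d > @d, @d := '2007-06-01', d > @d FROM t1;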
Conversion errors when constructing the condition for an IN predicate
were treated as if the affected column contained NULL. If such an IN
predicate is inside NOT, we get wrong results.
Corrected the handling of conversion errors in an IN predicate
that is resolved by unique_subquery (through
subselect_uniquesubquery_engine).
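A sketch of the affected pattern, assuming the subquery is resolved by
a unique-index lookup (names hypothetical):

    CREATE TABLE t1 (a DATETIME PRIMARY KEY);
    -- Converting the left operand to DATETIME fails; the failed
    -- conversion used to be treated as if the column contained NULL,
    -- giving wrong results under NOT:
    SELECT 'not-a-date' NOT IN (SELECT a FROM t1);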
- The problem was reported as an SP variable using itself as the right
  value inside SUBSTR causing corruption of data (see the sketch after
  this list).
- This bug could not be verified in either 5.0bk or 5.1bk.
- Added a test case to prevent future regressions.
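A minimal sketch of the reported shape (procedure and variable names
are hypothetical):

    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE v VARCHAR(10) DEFAULT 'abcdef';
      -- The variable is its own right-hand value inside SUBSTR:
      SET v = SUBSTR(v, 2, 3);
      SELECT v;   -- expected 'bcd'
    END//
    DELIMITER ;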
- Made year 2000 handling more uniform.
- Removed year 2000 handling from calc_days().
- The above removes some bugs in date/datetimes with year between
  0 and 200.
- Now we get a note when we insert a datetime value into a date column
  (see the sketch after this list).
- For default values to CREATE, don't give errors for warning level
  NOTE.
- Fixed some compiler failures.
- Added library ws2_32 for Windows compilation (needed if we want to
  compile with IOCP support).
- Removed duplicate typedef TIME and replaced it with MYSQL_TIME.
- Better (more complete) fix for Bug#21103 "DATE column not compared
  as DATE".
- Fixed properly Bug#18997 "DATE_ADD and DATE_SUB perform year2K
  autoconversion magic on 4-digit year value".
- Fixed Bug#23093 "Implicit conversion of 9912101 to date does not
  match cast(9912101 as date)".
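A sketch of the new note (table name hypothetical):

    CREATE TABLE t1 (d DATE);
    -- Inserting a datetime value into a DATE column now produces a
    -- note about the discarded time part:
    INSERT INTO t1 VALUES ('2007-03-23 13:49:38');
    SHOW WARNINGS;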
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
implicit insert"
Also fixes and adds test cases for bugs:
20497 "Trigger with INSERT DELAYED causes Error 1165"
21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
table with a trigger".
Post-review fixes.
Problem:
In MySQL, INSERT DELAYED is a way to pipe all inserts into a given
table through a dedicated thread. This is necessary for simplistic
storage engines like MyISAM, which do not have internal concurrency
control or threading and thus cannot achieve efficient INSERT
throughput without support from the SQL layer.
DELAYED INSERT works as follows:
For every distinct table, which can accept DELAYED inserts and has
pending data to insert, a dedicated thread is created to write data
to disk. All user connection threads that attempt to
delayed-insert into this table interact with the dedicated thread in
producer/consumer fashion: all records to-be inserted are pushed
into a queue of the dedicated thread, which fetches the records and
writes them.
In this design, client connection threads never open or lock
the delayed insert table.
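From the client's point of view (table name hypothetical):

    -- Returns as soon as the row is queued; the dedicated handler
    -- thread opens the table and writes the row later.
    INSERT DELAYED INTO t1 VALUES (1);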
This functionality was introduced in version 3.23 and does not take
into account the existence of triggers, views, or pre-locking.
E.g. if INSERT DELAYED is called from a stored function which,
in turn, is called from another stored function that uses the delayed
table, a deadlock can occur, because delayed locking bypasses
pre-locking. Besides:
* the delayed thread works directly with the subject table through
the storage engine API and does not invoke triggers
* even if it was patched to invoke triggers, if triggers,
in turn, used other tables, the delayed thread would
have to open and lock involved tables (use pre-locking).
* even if it was patched to use pre-locking, without deadlock
detection the delayed thread could easily lock out user
connection threads in case when the same table is used both
in a trigger and on the right side of the insert query:
the delayed thread would not release locks until all inserts are
complete, and user connections cannot complete their inserts without
having locks on the tables used on the right side of the query.
Solution:
These considerations suggest two general alternatives for the
future of INSERT DELAYED:
* it is considered a full-fledged alternative to normal INSERT
* it is regarded as an optimisation that is only relevant
for simplistic engines.
Since we missed our chance to provide complete support of new
features when 5.0 was in development, the first alternative is
currently infeasible.
However, even the second alternative, which is to detect
new features and convert DELAYED insert into a normal insert,
is not easy to implement.
The catch-22 is that we don't know if the subject table has triggers
or is a view before we open it, and we only open it in the
delayed thread. We don't know if the query involves pre-locking
until we have opened all tables, and we always first create
the delayed thread, and only then open the remaining tables.
This patch detects the problematic scenarios and converts
DELAYED INSERT to a normal INSERT using the following approach:
* if the statement is executed under pre-locking (e.g. from
within a stored function or trigger) or the right
side may require pre-locking, we detect the situation
before creating a delayed insert thread and convert the statement
to a conventional INSERT.
* if the subject table is a view or has triggers, we shut down the
delayed thread and convert the statement to a conventional
INSERT.
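A sketch of the second conversion (names hypothetical):

    CREATE TABLE t1 (a INT);
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1
      FOR EACH ROW SET NEW.a = NEW.a + 1;
    -- t1 has a trigger, so the DELAYED clause is dropped and the
    -- statement executes as a conventional INSERT:
    INSERT DELAYED INTO t1 VALUES (1);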