ENUM fields internally store their values as integers and may use integer
values as indexes into their list of allowed values. Invalid values are mapped
to the zero value. When storing an empty string the ENUM field fails to find an
appropriate value and tries to convert the provided string to an integer. This
conversion also fails and an error is returned even if thd->count_cuted_fields
is set to CHECK_FIELD_IGNORE. This makes the range optimizer wrongly decide
that an impossible range is present.
Now Field_enum::store() returns an error while storing an empty string only
if thd->count_cuted_fields isn't set to CHECK_FIELD_IGNORE.
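A minimal sketch of a query shape that exercises this path (table and column
names are hypothetical):
  CREATE TABLE t1 (e ENUM('a','b'), KEY(e));
  INSERT INTO t1 VALUES ('a'), ('b');
  -- Storing '' while building the range tree used to fail even under
  -- CHECK_FIELD_IGNORE, so the optimizer could flag an impossible range here.
  SELECT * FROM t1 WHERE e = '' OR e = 'a';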
The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of tables used in the CHECK OPTION expression were
added to temporary table rows. The multi_update::do_updates()
method was modified to restore necessary record buffers
before evaluation of the CHECK OPTION condition.
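A hedged sketch of the affected statement shape (all names are hypothetical):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10 WITH CHECK OPTION;
  -- A multi-table UPDATE through the view: the CHECK OPTION condition is
  -- evaluated in multi_update::do_updates(), where stale record buffers
  -- used to be read before this fix.
  UPDATE v1, t2 SET v1.a = v1.a + 1 WHERE v1.a = t2.a;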
Integer values with 10 digits may or may not fit into an int column
(e.g. 2147483647 vs 6147483647).
Thus when creating a temp table column for such an int we must
use bigint instead.
Fixed to use bigint.
Also replaced a "magic number" with a named constant.
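A hedged sketch of where this matters (names and values are hypothetical):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  -- 6147483647 has 10 digits but does not fit into INT, so the temporary
  -- table used for DISTINCT must describe the constant as BIGINT.
  SELECT DISTINCT a, 6147483647 FROM t1;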
type assertion.
The bug was introduced by the patch for bug #16377.
The "+ INTERVAL" (Item_date_add_interval) function detects its result type
by the type of its first argument. But in some cases it returns STRING
as the result type. This happens when, for example, the first argument is a
DATE represented as string. All this makes the get_datetime_value()
function misinterpret such result and return wrong DATE/DATETIME value.
To avoid such cases in the fix for #16377 the code that detects correct result
field type on the first execution was added to the
Item_date_add_interval::get_date() function. Due to this the result
field type of the Item_date_add_interval item stored by the send_fields()
function differs from item's result field type at the moment when
the item is actually sent. It causes an assertion failure.
Now the get_datetime_value() detects that the DATE value is returned by
some item not only by checking the result field type but also by comparing
the returned value with the 100000000L constant - any DATE value should be
less than this value.
Removed result field type adjusting code from the
Item_date_add_interval::get_date() function.
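A hedged sketch of the kind of expression involved (values are hypothetical):
  -- The first argument is a DATE supplied as a string, so the "+ INTERVAL"
  -- item may report STRING as its result type even though the value is a date.
  SELECT '2007-04-15' + INTERVAL 1 DAY;
  SELECT CAST('2007-04-15' AS DATE) + INTERVAL 1 DAY;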
Refining the tests since pb revealed the older version's fragility: the error from SF() due to being killed
may differ between environments.
DBUG_ASSERT instead of assert.
longer showing SP names.
SHOW CREATE VIEW uses Item::print() methods to reconstruct the
statement text from the parse tree.
The print() method for stored procedure calls needs to allocate
space to print the function's quoted name.
It was incorrectly calculating the length of the buffer needed
(the buffer was too short).
Fixed to reflect the actual space needed.
When a Windows console application that has an open console (e.g. mysqld-nt
started with the --console option) encounters certain types of errors
(like no floppy disk in a floppy drive) the OS will pop up an
"abort/retry/ignore" dialog and block the application (depending on a
registry setting; see http://msdn2.microsoft.com/en-us/embedded/aa731206.aspx
for details).
Fixed by disabling the dialog popups for every error except a GPF and
alignment errors. This is safe to do as the actual error gets reported
to (and handled by) mysqld.
The reason for the bug was that replaying a query on the slave could be impossible since its event
was recorded with the killed error. Due to the specifics of handling INSERT, whose per-row loop
cannot be interrupted by killing, the query on a transactional table should not have appeared in the binlog unless
there was a call to a stored routine that got interrupted by the kill (and then an error must be
returned out of the loop).
The offered solution adds the following rule for binlogging of INSERT that accounts for the above
specifics:
For INSERT on a transactional table, if no error was set, the raised KILLED flag alone
is harmless and is ignored by masking it out at the time the binlog event is created.
For both table types the combination of a raised error and the KILLED flag indicates that there
was potentially partial execution on the master and consistency is in question.
In that case the code continues to binlog an event with the appropriate killed error.
The fix relies on the specified behaviour of stored routines, which must propagate the error
to the top-level query handling if the thd->killed flag was raised during the routine's execution.
The patch adds an argument with a default killed-status-unset value to Query_log_event::Query_log_event.
- A race condition caused brief unavailability when trying to access
a table.
- The unprotected variable 'grant_option' wasn't intended to alternate
during normal execution. Variable initialization was moved to grant_init()
and the lines responsible for the alternation were removed.
When storing a large number to a FLOAT or DOUBLE field with fixed length, it could be incorrectly truncated if the field's length was greater than 31.
This patch also does some code cleanups to be able to reuse code which is common between Field_float::store() and Field_double::store().
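A hedged repro sketch (column definition and value are hypothetical):
  -- A DOUBLE column with a fixed display length greater than 31.
  CREATE TABLE t1 (f DOUBLE(40,6));
  INSERT INTO t1 VALUES (1e20);
  -- Before the fix the stored value could be truncated incorrectly.
  SELECT f FROM t1;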
constant outer tables did not return NULL-complemented
rows when conditions were evaluated to FALSE.
Wrong results were returned because the conditions over constant
outer tables, when being pushed down, were erroneously enclosed
in the guard function used for WHERE conditions.
The root cause of this bug is related to the function skip_rear_comments,
in sql_lex.cc
Recent code changes in skip_rear_comments changed the prototype from
"const uchar*" to "const char*", which had an unforseen impact on this test:
(endp[-1] < ' ')
With unsigned characters, this code filters out bytes in the range [0x00 - 0x20].
With *signed* characters, it also filters out bytes in the range [0x80 - 0xFF].
This caused the reported regression: Cyrillic characters in the
parameter name were considered whitespace and truncated.
Note that the regression is present both in 5.0 and 5.1.
With this fix:
- [0x80 - 0xFF] bytes are no longer considered whitespace.
This alone fixes the regression.
In addition, filtering [0x00 - 0x20] was found bogus and abusive,
so the code now uses my_isspace() when looking for whitespace.
Note that this fix is only addressing the regression affecting UTF-8
in general, but does not address a more fundamental problem with
skip_rear_comments: parsing a string *backwards*, starting at end[-1],
is not safe with multi-byte characters, as end[-1] can confuse the
last byte of a multi-byte character with a character to filter out.
The only known impact of this remaining issue affects objects that have to
meet all the conditions below:
- the object is a FUNCTION / PROCEDURE / TRIGGER / EVENT / VIEW
- the body consists of only *1* instruction, and does *not* contain a
BEGIN-END block
- the instruction ends, lexically, with <ident> <whitespace>* ';'?
For example, "select <ident>;" or "return <ident>;"
- The last character of <ident> is a multi-byte character
- the last byte of this character is ';', '*', '/' or whitespace
In this case, the body of the object will be truncated after parsing,
and stored in an invalid format.
This last issue has not been fixed in this patch, since the real fix
will be implemented by Bug 25411 (trigger code truncated), which is caused
by the very same code.
The real problem is that the function skip_rear_comments is only a
work-around, and should be removed entirely: see the proposed patch for
bug 25411 for details.
On some Linux distributions with both LinuxThreads and NPTL glibc versions available, statically built binaries can crash, because linker defaults to LinuxThreads when linking statically, but calls to external libraries (like libnss) are resolved to NPTL versions.
Since there is nothing we can do in the code to work around that, just give the user advice on how to fix it if a crash happens on such a binary/OS combination.
bug #26842: master binary log contains invalid queries - replication fails
bug #12826: Possible to get inconsistent slave using SQL syntax Prepared Statements
Problem:
when binlogging prepared statements we may produce syntactically incorrect queries in the binlog by replacing
some parameters with variable names (instead of variable values).
E.g. in the reported case of "limit ?" clause: replacing "?" with "@var"
produces "limit @var" which is not a correct SQL syntax.
Also it may lead to different query execution on slave if we
set and use a variable in the same statement, e.g.
"insert into t1 values (@x:=@x+1, ?)"
Fix: make the stored statement string created upon its execution use variable values
(instead of names) to fill placeholders.
- The "mysql client in mysqld"(which is used by
replication and federated) should use alarms instead of setting
socket timeout value if the rest of the server uses alarm. By
always calling 'my_net_set_write_timeout'
or 'my_net_set_read_timeout' when changing the timeout value(s), the
selection whether to use alarms or timeouts will be handled by
ifdef's in those two functions.
- Move declaration of 'vio_timeout' into "vio_priv.h"
CHECK OPTION and a subquery in the WHERE condition.
The abort was triggered by setting the value of join->tables for
subqueries in the function JOIN::cleanup. This function was called
after an invocation of the JOIN::join_free method for subqueries
used in the WHERE condition.
If a stored function or a trigger was killed it aborted, but no error
was thrown. This allowed the calling statement to continue without a notice.
This may lead to wrong data being inserted/updated/deleted, as in such
cases the correct result of a stored function isn't guaranteed. In the case
of triggers it allowed the calling statement to ignore the kill signal and to
waste time re-evaluating triggers that will always fail
because the thd->killed flag is still on.
Now the Item_func_sp::execute() and sp_head::execute_trigger() functions
check whether a function or a trigger was killed during execution and
throw an appropriate error if so.
Now the fill_record() function stops filling the record if an error was reported
through thd->net.report_error.
being used without being defined.
Inside method Item_func_unsigned::val_int, the variable value
can be returned without being initialized when the CAST argument
is of type DECIMAL and has a NULL value. This gives a run-time
error when building debug binaries using Visual C++ 2005.
Solution: Initialize value to 0
Bug #23667 "CREATE TABLE LIKE is not isolated from alteration
by other connections"
Bug #18950 "CREATE TABLE LIKE does not obtain LOCK_open"
As well as:
Bug #25578 "CREATE TABLE LIKE does not require any privileges
on source table".
The first and the second bugs resulted in various errors and a wrong
binary log order when one tried to concurrently execute a CREATE TABLE LIKE
statement and DDL statements on the source table or DML/DDL statements on its
target table.
The problem was caused by incomplete protection/table-locking against
concurrent statements implemented in mysql_create_like_table() routine.
We solve it by simply implementing such protection in proper way (see
comment for sql_table.cc for details).
The third bug allowed a user who didn't have any privileges on a table to
create its copy and thereby circumvent the privilege check for SHOW CREATE TABLE.
This patch solves this problem by adding the missing privilege check.
Finally it also removes some duplicated code from mysql_create_like_table().
Note that, although tests covering concurrency-related aspects of CREATE TABLE
LIKE behaviour will only be introduced in 5.1, they were run manually for
this patch as well.
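A hedged sketch of the privilege issue behind the third bug (database, table
and user names are hypothetical):
  -- Connected as a user with no privileges on secret_db.t1:
  CREATE TABLE mydb.t1_copy LIKE secret_db.t1;
  -- Before the fix this succeeded and effectively exposed the table
  -- definition, bypassing the privilege check done for SHOW CREATE TABLE.
  SHOW CREATE TABLE mydb.t1_copy;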
When processing the USE/FORCE index hints
the optimizer was not checking if the indexes
specified are enabled (see ALTER TABLE).
Fixed by:
Backporting the fix for bug 20604 to 5.0
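A hedged sketch (names are hypothetical):
  CREATE TABLE t1 (a INT, KEY idx_a (a));
  ALTER TABLE t1 DISABLE KEYS;
  -- Before the fix the hint could make the optimizer plan to use the
  -- disabled (non-unique) index; now disabled indexes are ignored.
  SELECT * FROM t1 FORCE INDEX (idx_a) WHERE a = 1;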
mode.
When a new DATE/DATETIME field without a default value is added by
ALTER TABLE, the '0000-00-00' value is used as the default one. But it wasn't
checked whether such a value was allowed by the current sql mode. Due to this,
'0000-00-00' values were allowed for DATE/DATETIME fields even in
NO_ZERO_DATE mode.
Now the mysql_alter_table() function checks whether the '0000-00-00' value
is allowed for DATE/DATETIME fields by the current sql mode.
The new error_if_not_empty flag is used in the mysql_alter_table() function
to indicate that it should abort if the table being altered isn't empty.
The new new_datetime_field field is used in the mysql_alter_table() function
for error reporting purposes.
The new error_if_not_empty parameter is added to the copy_data_between_tables()
function to indicate that it should return an error if the source table isn't empty.
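A hedged repro sketch (names are hypothetical):
  SET sql_mode = 'NO_ZERO_DATE';
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  -- The added column would implicitly get '0000-00-00 00:00:00' for the
  -- existing row; with the fix this is rejected since the table is not empty.
  ALTER TABLE t1 ADD COLUMN d DATETIME NOT NULL;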
my_decimal in some cases can contain more decimal digits than
is officially supported (DECIMAL_MAX_PRECISION), so we need to
prepare bigger buffer for the resulting string.
Problem: we may get syntactically incorrect queries in the binary log
if we use a string-valued user variable when executing a PS which
contains a '... limit ?' clause, e.g.
prepare s from "select 1 limit ?";
set @a='qwe'; execute s using @a;
Fix: raise an error in such cases.
is involved.
The Arg_comparator::compare_datetime() comparator caches its arguments if
they are constants, i.e. const_item() returns true.
Item_func_get_user_var::const_item() returns true or false based on
the current query_id and the query_id where the variable was created.
Thus even if a query can change the variable's value, its const_item() will
still return true. All this leads to a wrong comparison result when an object
of the Item_func_get_user_var class is involved.
Now the Arg_comparator::can_compare_as_dates() and
get_datetime_value() functions never cache the result of the GET_USER_VAR()
function (the Item_func_get_user_var class).
Conversion errors when constructing the condition for an
IN predicate were treated as if the affected column contained
NULL. If such an IN predicate is inside NOT we got wrong
results.
Corrected the handling of conversion errors in an IN predicate
that is resolved by unique_subquery (through
subselect_uniquesubquery_engine).
The command-line and configuration-file option 'key_cache_block_size'
was reduced by MALLOC_OVERHEAD (8 in a production server, 36 in a
debug server) from the user-supplied value and then restricted to
the greatest multiple of 512 less than or equal to the reduced value.
This patch changes option 'key_cache_block_size' to not deduct
MALLOC_OVERHEAD from the input value. However, the restriction
to a multiple of 512 is still applied.
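For illustration (hypothetical value): with --key_cache_block_size=1536 the
old behaviour computed 1536 - 8 = 1528 and rounded it down to 1024; with this
patch the value stays at 1536, as it is already a multiple of 512.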
Made year 2000 handling more uniform
Removed year 2000 handling from calc_days().
The above removes some bugs in dates/datetimes with years between 0 and 200.
Now we get a note when we insert a datetime value into a date column.
For default values to CREATE, don't give errors for warning level NOTE
Fixed some compiler failures
Added library ws2_32 for windows compilation (needed if we want to compile with IOCP support)
Removed duplicate typedef TIME and replaced it with MYSQL_TIME
Better (more complete) fix for: Bug#21103 "DATE column not compared as DATE"
Fixed properly Bug#18997 "DATE_ADD and DATE_SUB perform year2K autoconversion magic on 4-digit year value"
Fixed Bug#23093 "Implicit conversion of 9912101 to date does not match cast(9912101 as date)"
- Since isinf() portability across various platforms and
compilers is a complicated question, we should not use
it directly. Instead, the my_isinf() macro should be used,
which is defined as an alias to the system-defined isinf()
if it is safe to use, or as a workaround implementation otherwise.
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
implicit insert"
Also fixes and adds test cases for bugs:
20497 "Trigger with INSERT DELAYED causes Error 1165"
21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
table with a trigger".
Post-review fixes.
Problem:
In MySQL INSERT DELAYED is a way to pipe all inserts into a
given table through a dedicated thread. This is necessary for
simplistic storage engines like MyISAM, which do not have internal
concurrency control or threading and thus cannot
achieve efficient INSERT throughput without support from the SQL layer.
DELAYED INSERT works as follows:
For every distinct table that can accept DELAYED inserts and has
pending data to insert, a dedicated thread is created to write data
to disk. All user connection threads that attempt to
delayed-insert into this table interact with the dedicated thread in
producer/consumer fashion: all records to be inserted are pushed
into a queue of the dedicated thread, which fetches the records and
writes them.
In this design, client connection threads never open or lock
the delayed insert table.
This functionality was introduced in version 3.23 and does not take
into account the existence of triggers, views, or pre-locking.
E.g. if INSERT DELAYED is called from a stored function, which,
in turn, is called from another stored function that uses the delayed
table, a deadlock can occur, because delayed locking bypasses
pre-locking. Besides:
* the delayed thread works directly with the subject table through
the storage engine API and does not invoke triggers
* even if it was patched to invoke triggers, if triggers,
in turn, used other tables, the delayed thread would
have to open and lock the involved tables (use pre-locking).
* even if it was patched to use pre-locking, without deadlock
detection the delayed thread could easily lock out user
connection threads in the case when the same table is used both
in a trigger and on the right side of the insert query:
the delayed thread would not release locks until all inserts
are complete, and the user connection cannot complete its inserts
without having locks on the tables used on the right side of the
query.
Solution:
These considerations suggest two general alternatives for the
future of INSERT DELAYED:
* it is considered a full-fledged alternative to normal INSERT
* it is regarded as an optimisation that is only relevant
for simplistic engines.
Since we missed our chance to provide complete support for new
features when 5.0 was in development, the first alternative
is currently infeasible.
However, even the second alternative, which is to detect
new features and convert DELAYED insert into a normal insert,
is not easy to implement.
The catch-22 is that we don't know if the subject table has triggers
or is a view before we open it, and we only open it in the
delayed thread. We don't know if the query involves pre-locking
until we have opened all tables, and we always first create
the delayed thread, and only then open the remaining tables.
This patch detects the problematic scenarios and converts
DELAYED INSERT to a normal INSERT using the following approach:
* if the statement is executed under pre-locking (e.g. from
within a stored function or trigger) or the right
side may require pre-locking, we detect the situation
before creating a delayed insert thread and convert the statement
to a conventional INSERT.
* if the subject table is a view or has triggers, we shut down
the delayed thread and convert the statement to a conventional
INSERT.
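A hedged sketch of one of the converted scenarios (all names are hypothetical):
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @cnt = @cnt + 1;
  -- The target table has a trigger, so with this patch the statement is
  -- converted to a conventional INSERT instead of going through the
  -- delayed insert thread.
  INSERT DELAYED INTO t1 VALUES (1);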
function.
A wrong condition was used to check whether the
Arg_comparator::can_compare_as_dates() function had calculated the value of the
string constant. When comparing a non-const STRING function with a constant
DATETIME function this led to saving an arbitrary value as the cached value of
the DATETIME function.
Now the Arg_comparator::set_cmp_func() function initializes the const_value
variable to the impossible DATETIME value (-1) and this const_value is
cached only if it was changed by the Arg_comparator::can_compare_as_dates()
function.
to NULL
For queries of the form SELECT MIN(key_part_k) FROM t1
WHERE key_part_1 = const and ... and key_part_k-1 = const,
the opt_sum_query optimization tries to
use an index to substitute MIN/MAX functions with their values according
to the following rules:
1) Insert the minimum non-null values where the WHERE clause still matches, or
3) A row of NULLs.
However, the correct semantics requires a third, missing case 2) in between,
such that a NULL value is substituted if there are only NULL values for
key_part_k.
The patch modifies opt_sum_query() to handle this missing case.
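A hedged sketch of the missing case (names and values are hypothetical):
  CREATE TABLE t1 (a INT NOT NULL, b INT, KEY (a, b));
  INSERT INTO t1 VALUES (1, NULL), (1, NULL), (2, 3);
  -- All b values for a = 1 are NULL, so the optimization must substitute
  -- NULL for MIN(b) here rather than a minimum value or a full row of NULLs.
  SELECT MIN(b) FROM t1 WHERE a = 1;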
for a query over an empty table right after its creation.
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have been actually read.
The added test case can reproduce the crash only with InnoDB tables and
only with 5.0.x.
statement from a UNION query with ORDER BY an expression containing
RAND().
The crash happened because the global order by list in the union query
was not re-initialized for execution.
(Local order by lists were re-initialized though).
a crash when the left operand of the predicate is evaluated to NULL.
It happens when the rows from the inner tables (tables from the subquery)
are accessed by index methods with key values obtained by evaluation of
the left operand of the subquery predicate. When this predicate is
evaluated to NULL an alternative access with full table scan is used
to check whether the result set returned by the subquery is empty or not.
The crash was due to the fact that the info about the access methods used for
regular key values was not properly restored after a switch back from the
full scan access method had occurred.
The patch restores this info properly.
The same problem existed for queries with IN subquery predicates if they
were not used at the top level of the queries.
database.
If a user had the right to update anything in the current database then
access was granted and further checks of access rights for the underlying tables
weren't done correctly. The check is done before a view is opened and thus no
check of access rights for the underlying tables can be carried out.
This allowed a user to update, through a view, a table from another database for
which he didn't have enough rights.
Now the mysql_update() and mysql_test_update() functions force
re-checking of access rights after a view is opened.
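A hedged sketch of the privilege bypass (all names are hypothetical):
  -- The view in db1 selects from a table in db2.
  CREATE VIEW db1.v1 AS SELECT * FROM db2.t1;
  -- Before the fix a user with update rights only on db1 (the current
  -- database) could update db2.t1 through the view.
  UPDATE db1.v1 SET a = 0;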
Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
with locked tables"
Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
temporary table exists"
A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
EXISTS ... SELECT statement under LOCK TABLES which held a
read lock on the target table.
An attempt to execute the same statement for an already existing
target table with triggers caused server crashes.
Also concurrent execution of a CREATE TABLE ... SELECT statement
and other statements involving the target table suffered from
various races (some of which might've led to deadlocks).
Finally, an attempt to execute CREATE TABLE ... SELECT in the case
when a temporary table with the same name was already present
led to the insertion of data into this temporary table and
the creation of an empty non-temporary table.
All the above problems stemmed from the old implementation of CREATE
TABLE ... SELECT in which we created, opened and locked the target
table without any special protection, in a separate step and not
with the rest of the tables used by this statement.
This undermined the deadlock-avoidance approach used in the server
and created a window for races. It also excluded the target table
from prelocking, causing problems with trigger execution.
The patch solves these problems by implementing a new approach to
the handling of CREATE TABLE ... SELECT for base tables.
We try to open and lock the table to be created at the same time as
the rest of the tables used by this statement. If such a table does not
exist at this moment we create and place in the table cache a special
placeholder for it which prevents its creation or any other usage
by other threads.
We still use the old approach for creation of temporary tables.
Also note that we decided to postpone introduction of some tests
for the concurrent behaviour of CREATE TABLE ... SELECT till 5.1.
The main reason for this is the absence in 5.0 of the ability to set the
@@debug variable at runtime, which can be circumvented only by using several
test files with individual .opt files. Since the latter is likely
to slow down the test suite unnecessarily, we chose not to push these tests
into 5.0, but to run them manually for this version and later push
their optimized version into 5.1.
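A hedged sketch of the first deadlock scenario (names are hypothetical):
  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 READ;
  -- Before the patch this could deadlock, because the target table was
  -- opened and locked in a separate step from the other tables of the
  -- statement; now it is opened together with the rest.
  CREATE TABLE IF NOT EXISTS t1 SELECT 1 AS a;
  UNLOCK TABLES;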
When using GROUP_CONCAT with ORDER BY, a tree is used for the sorting, as
opposed to the normal nested loops join used when there is no ORDER BY.
The tree traversal that generates the result counts the rows that have been
cut down (as they get cut down to the field's max size).
But the check of that count was done before the tree traversal, so no
warning was generated if the output was truncated.
Fixed by moving the check to after the tree traversal.
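A hedged sketch (names and values are hypothetical):
  CREATE TABLE t1 (a VARCHAR(10));
  INSERT INTO t1 VALUES ('aaaa'), ('bbbb'), ('cccc');
  SET SESSION group_concat_max_len = 5;
  -- The result is truncated to 5 bytes; with the fix the truncation
  -- warning is now produced even though ORDER BY is used.
  SELECT GROUP_CONCAT(a ORDER BY a) FROM t1;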
The bug occurs in INSERT IGNORE ... SELECT ... ON DUPLICATE KEY UPDATE
statements, when the SELECT returns duplicated values and the UPDATE clause
tries to assign NULL values to NOT NULL fields.
NOTE: by current design the MySQL server treats INSERT IGNORE ... ON
DUPLICATE statements as INSERT ... ON DUPLICATE with update of the
duplicated records, but the MySQL manual lacks this information.
After this fix such behaviour becomes legalized.
The write_record() function was returning error values even within
INSERT IGNORE, because the ignore_errors parameter of
the fill_record_n_invoke_before_triggers() function call was
always set to FALSE. FALSE is replaced by info->ignore.
TIMESTAMP field when no value has been provided.
LOAD DATA sets the current time in a TIMESTAMP field with the
CURRENT_TIMESTAMP default value when the field is detected as null.
But when the LOAD DATA command loads data from a file that doesn't contain
enough data for all fields, the remaining fields are simply set to null
without any check. This leads to no value being inserted into such a
TIMESTAMP field.
Now the read_sep_field() and read_fixed_length() functions set the current
time in a TIMESTAMP field with the CURRENT_TIMESTAMP default value in all
cases when a NULL value is loaded into the field.
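A hedged sketch (file path, contents and names are hypothetical):
  CREATE TABLE t1 (a INT, b INT, ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
  -- Suppose the data file contains lines with only the first column.
  -- Before the fix ts could end up without a value for such short lines;
  -- now the current time is set, as it is for explicit NULLs.
  LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE t1;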
- Queries in the query cache are identified by the individual
characters in the query statement, the current database and
the current environment expressed as a set of system variable
flags.
- Since the set of environment flags didn't properly describe the
current environment, unexpected results were returned from the
query cache.
- The query cache is now cleared when the variable ft_boolean_syntax is
updated.
- An identification flag for the variable default_week_format is
added to the query cache record.
Thanks to Martin Friebe who has supplied significant parts of this patch.
This patch corrects a bug involving a LOAD DATA INFILE operation on a
transactional table. It corrects a problem in the error handler by moving
the transactional table check and the autocommit_or_rollback operation to the
end of the error handler. An additional test case was added to detect this
condition.
This bug affects multi-row INSERT ... ON DUPLICATE into a table
with a PRIMARY KEY on an AUTO_INCREMENT field and some additional UNIQUE indices.
If the first row in the multi-row INSERT contains duplicated values of the
UNIQUE indices, then the following rows of the multi-row INSERT (with either
duplicated or unique key field values) may be applied to _arbitrary_ records
of the table as updates.
This bug was introduced in 5.0. The related code was widely rewritten in 5.1, and
5.1 is already free of this problem. 4.1 was not affected either.
When updating the row during INSERT ON DUPLICATE KEY UPDATE, we called
restore_auto_increment(), which set next_insert_id back to 0, but we
forgot to set clear_next_insert_id back to 0.
The restore_auto_increment() function has been fixed.
The IN function was comparing DATE/DATETIME values either as ints or as
strings. Both methods have their disadvantages and may lead to a wrong
result.
Now the IN function checks whether all of its arguments have the STRING result
type and at least one of them is a DATE/DATETIME item. If so it uses either
an object of the in_datetime class or an object of the cmp_item_datetime
class to perform its work. If the IN() function arguments are rows then the
row columns are checked to decide whether the DATE/DATETIME comparator
should be used to compare them.
The in_datetime class is used to find an occurrence of the item to be checked
in the vector of the constant DATE/DATETIME values. The cmp_item_datetime
class is used to compare items one by one in the DATE/DATETIME context.
Both classes obtain values from items with the help of the get_datetime_value()
function and cache the left item if it is a constant one.
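A hedged sketch of a comparison now done in the DATETIME context (names and
values are hypothetical):
  CREATE TABLE t1 (d DATETIME);
  INSERT INTO t1 VALUES ('2007-04-15 00:00:00');
  -- The string constants are no longer compared byte-wise or as plain ints;
  -- they are interpreted as DATETIME values.
  SELECT * FROM t1 WHERE d IN ('2007-4-15', '2007-04-16 10:00:00');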
- In some cases, flow control optimization implemented in sp::optimize
removes hreturn instructions, causing SQL exception handlers to:
* never return
* execute wrong logic
- This patch overrides the default short-cut optimization on hreturn instructions
to avoid this problem.
The LEAST/GREATEST functions compared DATE/DATETIME values as
strings which in some cases could lead to a wrong result.
A new member function called cmp_datetimes() is added to the
Item_func_min_max class. It compares arguments in the DATETIME context
and returns the index of the least/greatest argument.
The Item_func_min_max::fix_length_and_dec() function now detects when
arguments should be compared in DATETIME context and sets the newly
added flag compare_as_dates. It indicates that the cmp_datetimes() function
should be called to get a correct result.
Item_func_min_max::val_xxx() methods are corrected to call the
cmp_datetimes() function when needed.
Objects of the Item_splocal class now store and report the correct original
field type.
When checking for the applicability of the join cache
we must disable its usage only if there is no
temp table in use.
When a temp table is used we can use the join
cache (and it will not make the result set
unordered) to fill the temp table. The filesort()
operation is then applied to the data in the temp
table and hence is not affected by join cache
usage.
Fixed by narrowing the condition for disabling the
join cache to exclude the case where a temp table
is used.
Non-correlated scalar subqueries may get executed
in EXPLAIN at the optimization phase if they are
part of a right hand sargable expression.
If the scalar subquery uses a temp table to
materialize its results it will replace the
subquery structure from the parser with a simple
select from the materialization table.
As a result the EXPLAIN will crash as the
temporary materialization table is not to be shown
in EXPLAIN at all.
Fixed by preserving the original query structure
right after calling optimize() for scalar subqueries
with temp tables executed during EXPLAIN.
The generic string-to-int conversion was used by the Item_func_signed and
Item_func_unsigned classes to convert DATE/DATETIME values to the
SIGNED/UNSIGNED type. But this conversion produces wrong results for such
values.
Now, if the item whose result has to be converted can return its result as a
longlong, the item->val_int() method is used to allow the item to carry
out the conversion itself and return the correct result.
This condition is checked in the Item_func_signed::val_int() and
Item_func_unsigned::val_int() functions.
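A hedged sketch of the affected conversion (value is hypothetical):
  -- The DATETIME value is now converted by the item itself through
  -- val_int(), instead of going through the generic string-to-int
  -- conversion of '2007-04-15 10:00:00'.
  SELECT CAST(CAST('2007-04-15 10:00:00' AS DATETIME) AS SIGNED);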
'not exists' optimization is applied.
In fact the 'not exists' optimization did not work anymore after the patch
introducing the evaluate_join_record function had been applied.
Corrected the evaluate_join_record function to respect the 'not_exists'
optimization.
some rollup rows (rows with NULLs for grouping attributes) if GROUP BY
list contained constant expressions.
This happened because the results of constant expressions were not put
in the temporary table used for duplicate elimination. In fact a constant
item from the GROUP BY list of a ROLLUP query can be replaced with an
Item_null_result object when a rollup row is produced.
Now the JOIN::rollup_init function wraps any constant item referenced in
the GROUP BY list of a ROLLUP query into an Item_func object of a special
class that is never detected as a constant item. This ensures creation of
fields for such constant items in temporary tables and guarantees correct
results when the result of the rollup operation first has to be written
into a temporary table, e.g. in the cases when duplicate elimination is
required.
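A hedged sketch of the query shape described above (names and values are
hypothetical); a constant expression appears in the GROUP BY list of a
ROLLUP query and duplicate elimination forces a temporary table:
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
  -- Before the fix some rollup rows (with NULLs for grouping attributes)
  -- could be missing from the result.
  SELECT DISTINCT 'x' AS c, a, SUM(b) FROM t1 GROUP BY c, a WITH ROLLUP;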
INSERT...ON DUPLICATE KEY UPDATE may cause error 1032:
"Can't find record in ..." if we are inserting into
InnoDB table unique index of partial key with
underlying UTF-8 string field.
This error occurs because INSERT...ON DUPLICATE uses a wrong
procedure to copy string fields of multi-byte character sets
for index search.
- the unsigned flag was not handled correctly for a number of mathematical functions, which led to incorrect results
- passing large values as the number of decimals to ROUND() resulted in incorrect results and even server crashes in some cases
- reverted the fix and the testcase for bug #10083 as it violates the manual
- fixed some testcases which relied on broken ROUND() behavior
on a BLACKHOLE table
Using INSERT DELAYED on BLACKHOLE tables could lead to server
crash.
This happens because delayed thread wants to upgrade a lock,
but BLACKHOLE tables do not have locks at all.
This patch rejects attempts to use INSERT DELAYED on BLACKHOLE
tables.
Before this fix, the parser would sometimes change where a token starts by
altering Lex_input_stream::tok_start, which later confused the code in
sql_yacc.yy that needs to capture the source code of a SQL statement,
for example to represent the body of a stored procedure.
This line of code in sql_lex.cc :
case MY_LEX_USER_VARIABLE_DELIMITER:
lip->tok_start= lip->ptr; // Skip first `
would <skip the first back quote> ... and cause the bug reported.
In general, the responsibility of sql_lex.cc is to *find* where tokens are
in the SQL text, but it is *not* to make up fake or incomplete tokens.
With a quoted label like `my_label`, the token starts on the first quote.
Extracting the token value should not change that (it did).
With this fix, the lexical analysis has been cleaned up to not change
lip->tok_start (in the case found for this bug).
The functions get_token() and get_quoted_token() now have an extra
parameter, used when some characters from the beginning of the token need
to be skipped when extracting a token value, like when extracting 'AB' from
'0xAB', for example, for a HEX_NUM token.
This exposed a bad assumption in Item_hex_string and Item_bin_string,
which has been fixed:
The assumption was that the string given, 'AB', was in fact preceded in
memory by '0x', which might be false (it can be preceded by "x'" and
followed by "'" -- or not be preceded by valid memory at all)
If a name is needed for Item_hex_string or Item_bin_string, the name is
taken from the original and true source code ('0xAB'), and assigned in
the select_item rule, instead of relying on assumptions related to how
memory is used.
The BETWEEN function was comparing DATE/DATETIME values either as ints or as
strings. Both methods have their disadvantages and may lead to a wrong
result.
Now the BETWEEN function checks whether all of its arguments have the STRING
result type and at least one of them is a DATE/DATETIME item. If so it sets up
two Arg_comparator objects to compare with the compare_datetime() comparator
and uses them to compare such items.
Added two Arg_comparator object members and one flag to the
Item_func_between class for the correct DATE/DATETIME comparison.
The Item_func_between::fix_length_and_dec() function now detects whether
it's used for DATE/DATETIME comparison and sets up the newly added
Arg_comparator objects to do this.
The Item_func_between::val_int() now uses the Arg_comparator objects to perform
the correct DATE/DATETIME comparison.
The owner variable of the Arg_comparator class can now be set to NULL if the
caller wants to handle NULL values by itself.
Now the Item_date_add_interval::get_date() function adjusts cached_field_type according to the detected type.