"load data" statements were written to the binlog as a mix of the original statement
and bits recreated from parse-info. This relied on implementation details and broke
with IGNORE_SPACES and versioned comments.
We now completely resynthesize the query for LOAD DATA for binlog (which among other
things normalizes them somewhat with regard to case, spaces, etc.).
We have already parsed the query properly, so we make use of that rather
than mix-and-match string literals and parsed items.
This should make us safe with regard to versioned comments, even those
spanning multiple tokens. Also no longer affected by IGNORE_SPACES.
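For illustration, a statement of roughly this shape (file and table names are made up)
is the kind of input whose binlog image is now resynthesized entirely from the parsed
representation, versioned comment included:
LOAD DATA /*!50000 LOCAL */ INFILE 'data.txt'
  INTO TABLE t1
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';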
The problem is that the argument buffer can be used as the result buffer,
which leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
The Archive engine returns wrong values for average record length
and max data length.
With this fix they are calculated as follows:
- max data length is 2^63 where large files are supported,
and INT_MAX32 where they are not;
- average record length is data length / records in the data file.
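A quick way to observe the corrected values (table name is illustrative) is the table
status output, where Avg_row_length should now equal Data_length / Rows for an ARCHIVE table:
SHOW TABLE STATUS LIKE 't_archive';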
Creating a temporary InnoDB table fails on case-insensitive
filesystems when lower_case_table_names is 2 (e.g. OS X)
and the temporary directory path contains upper-case letters.
The problem was that the tmpdir prefix was converted to lower
case when the table was created, but was passed as-is when the
table was opened.
Fixed by leaving the tmpdir prefix intact.
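A sketch of the failing scenario, assuming a server started with lower_case_table_names=2
and a tmpdir path containing upper-case letters (the path and table name are invented):
-- e.g. server started with --tmpdir=/Users/Example/TmpDir on a case-insensitive filesystem
CREATE TEMPORARY TABLE t_tmp (a INT) ENGINE=InnoDB;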
The parser rule for expressions in a udf parameter list contains
two hacks:
First, the parser input stream is read verbatim, bypassing
the lexer.
Second, the Item::name field is overwritten. If the argument to a
udf was a field, the field's name as seen by name resolution was
overwritten this way.
If the field name was quoted or escaped, it would appear as e.g. "`field`".
Fixed by not overwriting field names.
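An illustrative call (my_udf stands in for any loadable UDF, names are made up): with the
old behaviour, name resolution could see the argument's name as the literal string
"`col`", backquotes included.
SELECT my_udf(`col`) FROM t1;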
This test case uses mysqlbinlog to dump the content of master-bin.000001,
but the content of master-bin.000001 is not what this test needs.
MTR runs many test cases on one server, so when this test starts, the current binlog file
might not be master-bin.000001, or it may already contain events written by earlier tests.
The 'RESET MASTER' command must be called at the beginning (as sketched below); it ensures
that this test's binlog is written to master-bin.000001 correctly.
Three other tests have the same problem; they were fixed together:
mysqlbinlog-cp932
binlog_incident
binlog_tmp_table
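A sketch of the prologue now expected at the start of the affected .test files:
# start from a fresh binlog so this test's events land in master-bin.000001
RESET MASTER;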
The outer 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by just starting over without advancing
to the next record, which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
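An illustrative query shape that could hit this on such data (column and table names
are made up):
SELECT DISTINCT a, COUNT(*) AS cnt
FROM t1
GROUP BY a
HAVING cnt > 1;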
on subquery inside a SP
Problem: repeated calls to an SP containing an incorrect query with a
subselect may lead to a failed ASSERT().
Fix: set the proper subselect state in case an error occurs during
subquery transformation.
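One possible shape of the pattern, assuming the error comes from a missing column (the
procedure, table and column names are invented); on debug builds the repeated CALL could
previously trip the assertion:
DELIMITER //
CREATE PROCEDURE p1()
BEGIN
  -- the subselect refers to a column that does not exist, so the query fails
  SELECT (SELECT no_such_column FROM t1) AS x;
END//
DELIMITER ;
CALL p1();  -- fails with an error
CALL p1();  -- repeated call used to hit the failed ASSERT()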
A SELECT with a join (not only a self-join) from an archive table may
return an incomplete result set when the result set size exceeds the
join buffer size.
The problem was that the archive row counter was initialized too
early, when the ha_archive::info() method was called. Later,
when the optimizer exceeds the join buffer, it attempts to reuse the
handler without calling ha_archive::info() again (which is
correct).
Fixed by moving row counter initialization from
ha_archive::info() to ha_archive::rnd_init().
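An illustrative shape of the affected query (table and column names are invented); with
a small join buffer, the join over the ARCHIVE table could return an incomplete result
set before the fix:
SET SESSION join_buffer_size = 131072;
SELECT a1.f1, t2.f2
FROM t_archive a1 JOIN t2 ON a1.f1 = t2.f1;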
name as existing view
When trying to create a table with the same name as an existing view
defined over a join, the MySQL server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
while verifying against the existing tables we assume that a base table
object has always been created.
In this case, since it is a view over multiple tables, we don't have the
MySQL derived table object.
Fixed the logic which checks whether there is an existing table so that it
does not assume a table object has been created when the base table is a
view over multiple tables.
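A sketch of the crashing scenario before the fix (all names are made up):
CREATE VIEW v1 AS
  SELECT a.x, b.y FROM t1 a JOIN t2 b ON a.x = b.x;
CREATE TABLE v1 (x INT);  -- same name as the multi-table view; used to crash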
function,file sql_base.cc
When uncacheable queries are written to a temp table the optimizer must
preserve the original JOIN structure, because it is re-using the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable sub-queries.
But top level queries can also benefit from this mechanism, especially if
they're using index access and need a reset.
Fixed by not limiting the saving of JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
extraneous file
Online/fast ALTER TABLE of a partitioned table may leave a
temporary file in the database directory.
Fixed by removing an unnecessary call to
handler::ha_create_handler_files(), which was creating the
partitioning definition file.
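For illustration (table and column names are invented), a metadata-only change such as
setting a column default counts as a fast ALTER; before the fix it could leave the
partitioning definition file behind in the database directory:
ALTER TABLE t_part ALTER COLUMN b SET DEFAULT 0;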
results in server crash
check_group_min_max_predicates() assumed the input condition
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM.
Since a condition of the form "field" is also a valid condition
equivalent to "field <> 0", using such a condition in a query
where the loose index scan was chosen resulted in a debug
assertion failure.
Fixed by handling conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
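An illustrative query shape (names invented): a bare column used as a condition,
equivalent to "c <> 0", combined with a MIN/MAX GROUP BY query that can use a loose
index scan:
SELECT a, MIN(b) FROM t1 WHERE c GROUP BY a;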
field references
This error requires a combination of factors:
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. It will not
call make_join_statistics() either, which is the function that fills
in various structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However if this SELECT list happens to contain a correlated
subquery this subquery is evaluated in a normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized, due to the "impossible WHERE" at this level,
we get a NULL pointer dereference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
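A sketch of the failing combination (table and column names are made up): an impossible
WHERE, an aggregate in the outer SELECT, and a correlated subquery whose WHERE clause
references an outer field:
SELECT MAX(t1.a),
       (SELECT t2.b FROM t2 WHERE t2.b = t1.a LIMIT 1) AS sub
FROM t1
WHERE 1 = 0;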
When a connection is dropped any remaining temporary table is also automatically
dropped and the SQL statement of this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary log as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if the
field count exceeds 10 (the count of values for PROCEDURE ANALYSE()),
then we get a crash in the fill_record() function.
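An illustrative crash case before the fix: a derived table with more than 10 columns
combined with PROCEDURE ANALYSE() (the column names and values are arbitrary):
SELECT * FROM
  (SELECT 1 a, 2 b, 3 c, 4 d, 5 e, 6 f, 7 g, 8 h, 9 i, 10 j, 11 k) AS d
PROCEDURE ANALYSE();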
Problem was that the partition containing NULL values
was pruned away, since '2001-01-01' < '2001-02-00' but
TO_DAYS('2001-02-00') is NULL.
Fixed by also scanning the NULL partition for RANGE/LIST partitioning
on the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
There was also a problem since pruning uses the field
for comparison (while evaluate_join_record uses longlong),
resulting in pruning failures when comparing DATE to DATETIME.
The fix was to always compare DATE vs DATETIME as DATETIME,
by appending ' 00:00:00' to the DATE string.
Also added an optimization for comparisons with 23:59:59, so that
DATETIME_col > '2001-02-03 23:59:59' becomes
TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead
of '>='.
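A sketch of the setup (names invented): RANGE partitioning on TO_DAYS(), where a
predicate against an invalid date such as '2001-02-00' must also scan the partition
holding NULL values, since TO_DAYS('2001-02-00') is NULL:
CREATE TABLE t_days (d DATE)
PARTITION BY RANGE (TO_DAYS(d)) (
  PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),
  PARTITION p1 VALUES LESS THAN MAXVALUE
);
SELECT * FROM t_days WHERE d < '2001-02-00';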
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
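For illustration (the table name is invented), deriving a DECIMAL column from a literal
with a scale above 30 used to hit the assertion; the value is now truncated to fit the
deduced column type:
CREATE TABLE t_dec AS
  SELECT 1.1111111111111111111111111111111111 AS c;  -- scale > 30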
INSERT ... SELECT ...
The problem was that when bulk insert is used on an empty
table/partition, it disables the indexes for better
performance, but in this specific case it also tries
to read from that partition using an index, which is
not possible since the index has been disabled.
The solution was to allow index reads on disabled indexes
if there are no records.
Also reverted the patch for bug#38005, since that was a workaround
in the partitioning engine instead of a fix in MyISAM.
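An illustrative statement shape (table names invented): an INSERT ... SELECT into an
empty partitioned MyISAM table triggers the bulk-insert path; the ON DUPLICATE KEY
UPDATE clause here is just one example of how the same statement can also need an index
read on that partition.
INSERT INTO t_part (pk, val)
SELECT pk, val FROM t_src
ON DUPLICATE KEY UPDATE val = VALUES(val);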