A user variable assignment expression that is
evaluated in a logical expression context
(Item::val_bool()) can be pre-calculated in a
temporary table for GROUP BY.
However, when the expression value was used after the
temporary table creation, it was re-evaluated instead of
being read from the temporary table, due to a missing
val_bool_result() method.
Fixed by implementing the method.
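A hedged sketch of the expression shape involved (the table and the
exact query are hypothetical, not the original repro):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  -- (@v := a) is evaluated in a boolean context; with GROUP BY it can
  -- be materialized in the temporary table and must then be read back
  -- from there rather than re-evaluated.
  SELECT ((@v := a) OR a) AS x FROM t1 GROUP BY x;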
variable assignments
The assert() that fired checks that expressions which cannot be
NULL do not return NULL when evaluated.
The MAKEDATE() function can return NULL if its second argument is
less than or equal to 0. Thus its nullability depends not only on
the nullability of its arguments but also on their values.
Fixed by (overoptimistically) marking MAKEDATE() as nullable
regardless of the nullability of its arguments.
Test added.
Had to update one test result to reflect the metadata change.
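For illustration (this follows from the MAKEDATE() definition):

  SELECT MAKEDATE(2011, 32);  -- '2011-02-01'
  SELECT MAKEDATE(2011, 0);   -- NULL, because the second argument is <= 0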
Since impossible partitions are checked for earlier, it is possible
that no suitable partitions are found at all, so the assertion has to
be corrected for this case.
per-file comments:
mysql-test/r/partition_innodb.result
Bug#55146 Assertion `m_part_spec.start_part == m_part_spec.end_part' in index_read_idx_map
test result updated.
mysql-test/t/partition_innodb.test
Bug#55146 Assertion `m_part_spec.start_part == m_part_spec.end_part' in index_read_idx_map
test case added.
sql/ha_partition.cc
Bug#55146 Assertion `m_part_spec.start_part == m_part_spec.end_part' in index_read_idx_map
Assertion changed to '>=' because prune_partition_set(), called from
get_partition_set(), can set start_part= end_part + 1 when no possible
partitions are found.
feature
The test for bug no 50939 was put in range.test, which is not a good
idea since it requires partitioning. Fixed by moving the test case to
partitioning_range.test.
The problem was that the handler call ::extra(HA_EXTRA_CACHE) was
cached but ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) was not.
The solution is to cache the latter call as well and forward it when
moving to a new partition to scan.
mysql-test/r/partition.result:
test result
mysql-test/t/partition.test:
New test from bug report.
sql/ha_partition.cc:
cache the HA_EXTRA_PREPARE_FOR_UPDATE just like HA_EXTRA_CACHE.
sql/ha_partition.h:
Added cache flag for HA_EXTRA_PREPARE_FOR_UPDATE
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE. This causes all errors to be ignored.
However, not all errors can be ignored. Some, such as
ER_FIELD_SPECIFIED_TWICE, will cause the INSERT to fail no matter what.
But since lex->no_error was set,
the critical errors were ignored, the INSERT failed and neither OK nor the
error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
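A hedged sketch of the statement shape involved (table is hypothetical):

  CREATE TABLE t1 (a INT, b INT);
  -- Listing column a twice raises ER_FIELD_SPECIFIED_TWICE, an error
  -- that cannot be ignored; before the fix neither OK nor an error was
  -- sent to the client.
  INSERT IGNORE INTO t1 (a, a) SELECT 1, 2 UNION SELECT 3, 4;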
The CONVERT_TZ function crashes the server when the
timezone argument is an empty SET field value.
1) CONVERT_TZ looks up the timezone argument string in the
tz_names hash.
2) The string representation of an empty SET is a
String of zero length with a NULL pointer.
3) If the key argument length is zero, hash functions
do the comparison using the length of the record being
compared against.
I.e. a zero-length String buffer is an invalid
argument for hash search functions, and if the String
points to a NULL buffer, hashcmp() fails with a SEGV
accessing that memory.
The my_tz_find function has been modified to
treat empty Strings as invalid timezone values
and skip the unnecessary hash search.
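A hedged repro sketch (table and data are hypothetical):

  CREATE TABLE t1 (tz SET('a','b') NOT NULL);
  INSERT INTO t1 VALUES ('');
  -- The empty SET value is a zero-length String with a NULL pointer;
  -- before the fix the hash lookup on it could crash the server.
  SELECT CONVERT_TZ('2010-01-01 12:00:00', tz, 'UTC') FROM t1;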
mysql-test/r/timezone2.result:
Test case for bug #55424.
mysql-test/t/timezone2.test:
Test case for bug #55424.
sql/sql_string.h:
Bug #55424: convert_tz crashes when fed invalid data
Added "const" modifier to String::is_empty().
sql/tztime.cc:
Bug #55424: convert_tz crashes when fed invalid data
The my_tz_find function has been modified to
treat empty Strings as invalid timezone values
and skip the unnecessary hash search.
file .\item_subselect.cc, line 836
IN quantified predicates are never executed directly. They are instead
wrapped inside nodes called IN Optimizers (Item_in_optimizer) which
take care of the execution. However, this wrapping is not done during
query preparation. Unfortunately the LIKE predicate pre-evaluates
constant right-hand side arguments even during name resolution, likely
as an optimization.
Fixed by not pre-evaluating LIKE arguments in view prepare mode.
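A hedged sketch of the statement shape involved (hypothetical; the
exact original repro is not given here):

  CREATE TABLE t1 (a VARCHAR(10));
  -- The LIKE pattern is a constant IN predicate; pre-evaluating it
  -- while the view is being prepared executed an unwrapped subquery.
  CREATE VIEW v1 AS
    SELECT a FROM t1 WHERE a LIKE ('x' IN (SELECT a FROM t1));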
if() treated any non-numeric string as false.
Fixed to treat such strings as true instead.
Added some test cases.
Fixed a missing $ in a variable name in include/mix2.inc.
Queries may crash if
1) the GREATEST or the LEAST function has a mixed list of
numeric and LONGBLOB arguments, and
2) the result of such a function goes through an intermediate
temporary table.
An Item that references a LONGBLOB field has a max_length of
UINT_MAX32 == (2^32 - 1).
The current implementation of GREATEST/LEAST returns a REAL
result for a mixed list of numeric and string arguments (which
contradicts the current documentation; this contradiction
was discussed and it was decided to update the documentation).
The max_length of such a function call was calculated as the
maximum of the argument max_length values (i.e. UINT_MAX32).
That max_length value of UINT_MAX32 was used as the length of
the intermediate temporary table Field_double holding the
GREATEST/LEAST function result.
The Field_double::val_str() method call on that field
allocates a String value.
Since an allocation of String reserves an additional byte
for zero-termination, the size of the String buffer was
set to (UINT_MAX32 + 1), which caused an integer overflow:
an empty buffer of size 0 was actually allocated.
An initialization of the "first" byte of that zero-size
buffer with '\0' caused a crash.
Item_func_min_max::fix_length_and_dec() has been modified to
calculate max_length for the REAL result the same way as is done
for arithmetic operators.
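A hedged repro sketch (table and data are hypothetical):

  CREATE TABLE t1 (b LONGBLOB);
  INSERT INTO t1 VALUES ('1');
  -- Mixed numeric/LONGBLOB arguments give a REAL result; the UNION
  -- forces it through a temporary table, which crashed before the fix.
  SELECT GREATEST(1, b) FROM t1 UNION SELECT 1.0;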
mysql-test/r/func_misc.result:
Test case for bug #54461.
mysql-test/t/func_misc.test:
Test case for bug #54461.
sql/item_func.cc:
Bug #54461: crash with longblob and union or update with subquery
Item_func_min_max::fix_length_and_dec() has been modified to
calculate max_length for the REAL result the same way as is done
for arithmetic operators.
In order to be able to check whether the set of grouping fields in a
GROUP BY has changed (and thus a new group starts), the optimizer
caches the current values of these fields in a set of Cached_item
derived objects.
Cached_item_str, used for caching varchar and TEXT columns,
is limited in length by the max_sort_length variable.
A String buffer to store the value is allocated in Cached_item_str's
constructor, with an alloced length of either the maximum length of
the string or the value of max_sort_length (whichever is smaller).
Then, at compare time, the value of the string to compare to was
truncated to the alloced length of the string buffer inside
Cached_item_str.
This is all fine and valid, but only as long as the assigned values
are not near or equal to the alloced length of this buffer: when such
values are assigned, the alloced length is rounded up, and as a result
the next set of data no longer matches the group buffer, leading to
wrong results because of the changed alloced_length.
Fixed by preserving the original maximum length in
Cached_item_str's constructor and using it instead of the
alloced_length to limit the string to compare to.
Test case added.
Fix a regression (due to a typo) which caused spurious incorrect
argument errors for long data stream parameters if all forms of
logging were disabled (binary, general and slow logs).
mysql-test/t/mysql_client_test.test:
Save the status of the slow_log.
sql/sql_prepare.cc:
Add a missing logical NOT operator.
tests/mysql_client_test.c:
Disable all query logs when running the C tests. Fixes an
omission: the slow log should have been disabled too.
Run test case for Bug#54041 with query logs enabled and disabled.
prepared statements
Using GROUP_CONCAT() together with the WITH ROLLUP modifier
could crash the server.
The reason was a combination of several facts:
1. The Item_func_group_concat class stores pointers to ORDER
objects representing the columns in the ORDER BY clause of
GROUP_CONCAT().
2. find_order_in_list() called from
Item_func_group_concat::setup() modifies the ORDER objects so
that their 'item' member points to the arguments list
allocated in the Item_func_group_concat constructor.
3. In some cases (e.g. in JOIN::rollup_make_fields) a copy of
the original Item_func_group_concat object could be created by
using the Item_func_group_concat::Item_func_group_concat(THD
*thd, Item_func_group_concat *item) copy constructor. The
latter essentially creates a shallow copy of the source
object. Memory for the arguments array is allocated on
thd->mem_root, but the pointers for arguments and ORDER are
copied verbatim.
What happens in the test case is that when executing the query
for the first time, after a copy of the original
Item_func_group_concat object has been created by
JOIN::rollup_make_fields(), find_order_in_list() is called for
this new object. It then resolves ORDER BY by modifying the
ORDER objects so that they point to elements of the arguments
array which is local to the cloned object. When thd->mem_root
is freed upon completing the execution, pointers in the ORDER
objects become invalid. Those ORDER objects, however, are also
shared with the original Item_func_group_concat object which is
preserved between executions of a prepared statement. So the
first call to find_order_in_list() for the original object on
the second execution tries to dereference an invalid pointer.
The solution is to create copies of the ORDER objects when
copying Item_func_group_concat, so as not to leave any stale
pointers in other instances with different lifecycles.
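A hedged repro sketch (table and data are hypothetical):

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1), (2, 1);
  PREPARE stmt FROM
    'SELECT b, GROUP_CONCAT(a ORDER BY a) FROM t1 GROUP BY b WITH ROLLUP';
  EXECUTE stmt;
  EXECUTE stmt;  -- before the fix, re-execution dereferenced stale pointers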
mysql-test/r/func_gconcat.result:
Test case for bug #54476.
mysql-test/t/func_gconcat.test:
Test case for bug #54476.
sql/item_sum.cc:
Copy the ORDER objects pointed to by the elements of the
'order' array in the copy constructor of
Item_func_group_concat.
sql/table.h:
Removed the unused 'item_copy' member of the ORDER class.
SHOW DATABASES LIKE ... was not converting the pattern to lowercase
for comparison, as the documentation suggests it should.
Fixed it to behave similarly to SHOW TABLES LIKE ... and updated the
lowercase_table2 test case, which was failing on Mac OS X.
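A hedged illustration (hypothetical database name, assuming
lower_case_table_names is non-zero):

  CREATE DATABASE test_db;
  -- Before the fix the mixed-case pattern found nothing; now it is
  -- lowercased for comparison, like SHOW TABLES LIKE.
  SHOW DATABASES LIKE 'TEST_db';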
table with active trx
Essentially, the problem is that InnoDB does an implicit commit
when a cursor (table handler) is unlocked/closed, creating
a dissonance between the transaction state within the server
layer and the storage engine layer. Theoretically, a statement
transaction can encompass several table instances in a similar
manner to a multiple statement transaction, hence it does not
make sense to limit a statement transaction to the lifetime of
the table instances (cursors) used within it.
Since this particular instance of the problem is only triggerable
on 5.1 and is masked on 5.5 due to 2PC being skipped (the assertion
is in the prepare phase of 2PC), the solution (which is less risky)
is to explicitly end the transaction before the cached table is
unlocked on rename table.
The patch is to be null merged into trunk.
mysql-test/include/commit.inc:
Fix counters, the binlog engine does not get involved anymore.
mysql-test/suite/innodb_plugin/r/innodb_bug54453.result:
Add test case result for Bug#54453
mysql-test/suite/innodb_plugin/t/innodb_bug54453.test:
Add test case for Bug#54453
sql/sql_table.cc:
End transaction as otherwise InnoDB will end it behind our backs.
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
and the original engine is disabled
Missing check that engine is available.
mysql-test/include/not_blackhole.inc:
new include file
mysql-test/r/partition_not_blackhole.result:
new result file
mysql-test/std_data/parts/t1_blackhole.frm:
blackhole partitioned table .frm file:
create table `t1` (`id` int primary key) engine=blackhole
partition by key () partitions 1;
mysql-test/std_data/parts/t1_blackhole.par:
.par file matching blackhole partitioned .frm
mysql-test/t/partition_not_blackhole-master.opt:
new master-opt to disable blackhole if compiled in.
mysql-test/t/partition_not_blackhole.test:
New test
sql/ha_partition.cc:
Added check that engine is available.
Merge up to sunny.bains@oracle.com-20100625081841-ppulnkjk1qlazh82.
There are 8 more changesets in mysql-5.1-innodb, but PB2 shows a
failure for a test added in one of them. If that is resolved quickly
then those 8 more changesets will be merged too.
Fixed an incomplete historical ALTER TABLE MODIFY that was trimming
the trigger privilege bit from the mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the
selected best_key_parts value if the engine supports clustered
keys. But used_key_parts was overwritten with the best_key_parts
value, which prevented correct selection of the index
access method. The fix is to preserve the used_key_parts
value for further use in QUICK_SELECT_DESC.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
preserve used_key_parts value for further use in QUICK_SELECT_DESC
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
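A hedged sketch of the deadlock scenario (two connections; database
and table names are hypothetical):

  -- connection 1:
  HANDLER db1.t1 OPEN;
  -- connection 2: blocks on the open HANDLER while holding
  -- LOCK_mysql_create_db:
  DROP DATABASE db1;
  -- connection 1: now also needs LOCK_mysql_create_db, so before the
  -- fix this deadlocked:
  CREATE DATABASE db2;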
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
returns nothing
When looking for table or database names inside INFORMATION_SCHEMA
we must convert the table and database names to lowercase (just as is
done in the rest of the server) when lower_case_table_names is
non-zero.
This allows us to find the same tables that would be found if there
were no condition.
Fixed by converting to lowercase when extracting the database and
table name conditions.
Test case added.
During creation of the table list of
processed tables, the hidden I_S table 'VARIABLES'
was erroneously added into the table list.
This leads to an ER_UNKNOWN_TABLE error in the
TABLE_LIST::add_table_to_list() function.
The fix is to skip adding hidden I_S
tables to the table list.
mysql-test/r/information_schema.result:
test case
mysql-test/t/information_schema.test:
test case
sql/sql_show.cc:
The fix is to skip addition of hidden I_S
tables into the table list.
require O(#scans) memory
When an index merge operation was restarted, it would
re-allocate the Unique object controlling the duplicate row
ID elimination. Fixed by making the Unique object a member
of QUICK_INDEX_MERGE_SELECT and thus reusing it throughout
the lifetime of this object.
file .\filesort.cc, line 149 (part II)
Problem: the server didn't disregard sort order
for some zero-length tuples.
Fix: skip the sort order in such cases
(zero-length NOT NULL string functions).
mysql-test/r/select.result:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- test result.
mysql-test/t/select.test:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- test case.
sql/sql_select.cc:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- disregard sort order for zero length NOT NULL string functions
along with zero length NOT NULL fields.
and reverse() function
Three problems fixed:
1. The reported problem: caused by incorrect parsing of
the file as UCS2 data, resulting in a wrong length of the parsed
string. Fixed by truncating the invalid trailing bytes
(incomplete multibyte characters) when reading from the file.
2. LOAD DATA, when reading from a proper UCS2 file, wasn't
recognizing the newline characters. Fixed by first checking
whether a byte is a newline (or any other special) character
before reading it as part of a multibyte character.
3. When using user variables to hold the column data in LOAD
DATA, the character set of the user variable was set incorrectly
to the database charset. Fixed by setting it to the charset
specified by LOAD DATA (if any). A sketch of the statement shape
involved follows below.
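A hedged sketch (file, table, and column names are hypothetical):

  CREATE TABLE t1 (c1 VARCHAR(100));
  LOAD DATA INFILE 'data.txt' INTO TABLE t1
    CHARACTER SET ucs2
    (@v) SET c1 = REVERSE(@v);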
The problem is that the HAVING condition evaluates the const
parts of the condition even when the condition has references
to aggregate functions. In the reported case, table t1 becomes
a const table after make_join_statistics() due to table1.pk = 1,
so HAVING is transformed into MAX(1) < 7 and taken away from
HAVING ahead of time.
The fix is to skip evaluation of the const parts of a HAVING
condition if the condition has references to aggregate functions.
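A hedged repro sketch (table and data are hypothetical):

  CREATE TABLE t1 (pk INT PRIMARY KEY);
  INSERT INTO t1 VALUES (1);
  -- t1 is a const table here, so MAX(pk) is rewritten to MAX(1);
  -- the const part of HAVING must still not be evaluated ahead of
  -- time because of the aggregate.
  SELECT pk FROM t1 WHERE pk = 1 HAVING MAX(pk) < 7;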
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
Skip evaluation of the const parts of a HAVING condition if the
condition has references to aggregate functions.
Incorrect handling of NULL arguments could lead to a crash on
the IN or CASE operations when either NULL arguments were
passed explicitly as arguments (IN) or implicitly generated by
the WITH ROLLUP modifier (both IN and CASE).
Item_func_case::find_item() assumed all necessary comparators
to be instantiated in fix_length_and_dec(). However, in the
presence of WITH ROLLUP modifier, arguments could be
substituted with an Item_null leading to an "unexpected"
STRING_RESULT comparator being invoked.
In addition to the problem identical to the above,
Item_func_in::val_int() could crash even with explicitly passed
NULL arguments due to an optimization in fix_length_and_dec()
leading to NULL arguments being ignored during comparators
creation.
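A hedged repro sketch (table and data are hypothetical):

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 2);
  -- WITH ROLLUP substitutes NULL for the grouped columns, so the IN
  -- and CASE arguments implicitly become NULLs; both queries could
  -- crash before the fix.
  SELECT a IN (a, b) FROM t1 GROUP BY a, b WITH ROLLUP;
  SELECT CASE a WHEN a THEN 1 END FROM t1 GROUP BY a WITH ROLLUP;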
mysql-test/r/func_in.result:
Test cases for bug#54477.
mysql-test/t/func_in.test:
Test cases for bug#54477.
sql/item_cmpfunc.cc:
Added additional checks for Item_nulls in
Item_func_case::find_item() and Item_func_in::val_int().
During record search it was not taken into account
that the initial quick->file->ref value could be inapplicable
to the range interval. After the proper row was found, this stale
value was stored into the record buffer, and later the record was
filtered out at the condition evaluation stage.
The fix is to store a reference to the found row in the handler's
ref field.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/std_data/intersect-bug50389.tsv:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/opt_range.cc:
Store a reference to the found row in the handler's ref field.
Problem: a flaw (dereferencing a NULL pointer) in the LIKE optimization
code may lead to a server crash in some rare cases.
Fix: check the pointer before dereferencing it.
mysql-test/r/func_like.result:
Fix for bug #54575: crash when joining tables with unique set column
- test result.
mysql-test/t/func_like.test:
Fix for bug #54575: crash when joining tables with unique set column
- test case.
sql/item_cmpfunc.cc:
Fix for bug #54575: crash when joining tables with unique set column
- check the res2 buffer pointer before dereferencing it,
as it may be NULL in some cases.
Item*) at opt_sum.cc:305
Queries applying MIN/MAX functions to indexed columns are
optimized to read directly from the index if all key parts
of the index preceding the aggregated key part are bound to
constants by the WHERE clause. A prefix length is also
produced, equal to the total length of the bound key
parts. If the aggregated column itself is bound to a
constant, however, it is also included in the prefix.
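For illustration, a sketch of the query shapes this optimization
targets (table and index are hypothetical):

  CREATE TABLE t1 (a INT, b INT, c INT, KEY k (a, b, c));
  -- All key parts preceding the aggregated part are bound to
  -- constants, so MIN(c) is read directly from index k:
  SELECT MIN(c) FROM t1 WHERE a = 1 AND b = 2;
  -- If the aggregated column itself is bound to a constant, it is
  -- also included in the prefix:
  SELECT MIN(c) FROM t1 WHERE a = 1 AND b = 2 AND c = 3;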
Such full search keys are read as closed intervals for
reasons beyond the scope of this bug. However, the procedure
missed one case where a key part meant for use as a range
endpoint was overwritten with a NULL value destined for
equality checking. In this case the key part was overwritten
but the range flag remained, causing open interval reading
to be performed.
The bug was fixed by adding more stringent checking to the
search key building procedure (matching_cond), never allowing
overwrites of range predicates with non-range predicates.
An assertion was added to make sure open intervals are never
used with full search keys.
Problem: the server missed the fact that one can read from
two indexes alternately using the HANDLER interface.
Fix: check whether the same (initialized) index is involved
when reading next/prev values from the index.
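A hedged sketch of the access pattern involved (table is hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY ka (a), KEY kb (b)) ENGINE=MyISAM;
  HANDLER t1 OPEN;
  HANDLER t1 READ ka FIRST;
  HANDLER t1 READ kb FIRST;  -- switch to the other index
  HANDLER t1 READ ka NEXT;   -- hit the assert before the fix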
mysql-test/r/handler_myisam.result:
Fix for bug #54007: assert in ha_myisam::index_next, HANDLER
- test result.
mysql-test/t/handler_myisam.test:
Fix for bug #54007: assert in ha_myisam::index_next, HANDLER
- test case.
sql/sql_handler.cc:
Fix for bug #54007: assert in ha_myisam::index_next, HANDLER
- check if we use the same (initialized) index
to read next/prev values from the index.
The problem is in the Item_func_isnull::update_used_tables() function:
a bracket is in the wrong place. Because of that, the isnull item is
erroneously treated as a const item. The fix is to put the brackets in
the right place.
mysql-test/r/func_isnull.result:
test case
mysql-test/t/func_isnull.test:
test case
sql/item_cmpfunc.h:
set brackets in the right place.