One of the tests introduced for this bug was failing
because of the path length restriction on Windows.
Moved the test case to a new test file that is disabled on Windows.
In create_myisam_from_heap(), mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. when a temporary
MyISAM table gets overrun due to its MAX_ROWS limit while
executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was converted
to a warning due to the IGNORE clause, so neither an 'ok' nor
an 'error' packet could be sent back to the client. This
condition led to a hanging client with a 5.0 server, or to an
assertion failure in 5.1.
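A minimal sketch of the kind of statement affected (table name t1 is
hypothetical; assumes session limits low enough that the implicit
temporary table is converted from HEAP to MyISAM and then hits its
MAX_ROWS limit):
  SET SESSION max_heap_table_size = 16384;
  -- selecting from the target table forces the SELECT result to be
  -- materialized in a temporary table
  INSERT IGNORE INTO t1 SELECT * FROM t1;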
The problem was that a failed rename left the partitions in whatever
state they were in at the point of failure.
The solution is to try to revert the started rename if a failure occurs.
When an item is moved to the upper select during optimization, the
item's context was left unchanged. This caused wrong results in
PS/SP mode.
Item_ident::remove_dependence_processor now sets the item's context
to that of the select to which the item is moved.
In a subselect, all fields from outer selects are marked as dependent
on the selects they belong to. In some cases the optimizer substitutes
such a subselect with an equivalent expression. For example,
"a_field IN (SELECT outer_field)" is substituted with
"a_field = outer_field". Since outer_field is moved to the upper
select, it is not really outer anymore, but it was left marked as
outer. If an index over a_field exists, the optimizer chooses a wrong
execution plan and thus returns a wrong result.
Now the Item_in_subselect::single_value_transformer function removes
the dependent marking from fields when a subselect is optimized away.
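An illustrative query shape (hypothetical tables; an index over t2.b
is assumed); the subselect references only the field t1.a of the
enclosing select, so it is rewritten to an equality:
  SELECT * FROM t1, t2
  WHERE t2.b IN (SELECT t1.a);
  -- rewritten by the optimizer to: ... WHERE t2.b = t1.a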
table
The MERGE storage engine does not support the HA_CAN_SQL_HANDLE
feature, so any attempt to open a MERGE table through the HANDLER
interface fails with ER_ILLEGAL_HA.
After such an error the tables that were opened must be closed again,
or they are left in an inconsistent state. However, the code for
closing and registering handler tables assumed that only one table
would be opened, which is not true for MERGE tables, where multiple
tables are opened.
The next time a SELECT was issued on the MERGE table, the server
froze.
This patch fixes the issue by making sure that all tables which were
opened are also closed in the event of an error.
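An illustrative sequence (m1 is a hypothetical MERGE table):
  HANDLER m1 OPEN;    -- fails with ER_ILLEGAL_HA but left child tables open
  SELECT * FROM m1;   -- before the fix this subsequent SELECT could freeze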
match against.
The server crashed when executing a prepared statement with duplicate
MATCH() function calls in the SELECT and ORDER BY expressions, e.g.:
SELECT MATCH(a) AGAINST('test') FROM t1 ORDER BY MATCH(a) AGAINST('test')
This query gets optimized by the server, so the value returned by
MATCH() in the SELECT list is reused for ORDER BY purposes.
To perform this optimization the server compares items from the
SELECT and ORDER BY lists. The server crashed because the
comparison function for the MATCH() item is not intended to be called
at this point of execution.
In 5.0 and 5.1 this problem is worked around by resetting the MATCH()
item to the state it was in during PREPARE.
In 6.0 a correct comparison function will be implemented and
duplicate MATCH() items in the ORDER BY list will be optimized away.
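For example (t1 with a FULLTEXT index on column a is assumed):
  PREPARE stmt FROM 'SELECT MATCH(a) AGAINST(''test'') FROM t1
                     ORDER BY MATCH(a) AGAINST(''test'')';
  EXECUTE stmt;   -- crashed before the fix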
without error
When using quick access methods to search for rows in UPDATE or
DELETE, there was no check whether a fatal error had already been
sent to the client while evaluating the quick condition.
As a result a false OK (following the error) was sent to the client
and the error was thus transformed into a warning.
Fixed by checking for errors sent to the client during
SQL_SELECT::check_quick() and treating them as real errors.
Fixed a wrong test case in group_min_max.test.
Fixed a wrong return code in mysql_update() and mysql_delete().
mutually-nested subqueries
Queries of the form
SELECT * FROM (SELECT 1) AS t1,
(SELECT 2) AS t2,...
(SELECT 32) AS t32
caused the "Too high level of nesting for select" error
as if the query had the form
SELECT * FROM (SELECT 1 FROM (SELECT 2 FROM (SELECT 3 FROM...
The table_factor parser rule has been modified to adjust
the LEX::nest_level variable value after every derived table.
sort_buffer_size cannot allocate
The NULL return value from tree_insert() (on low memory) was not
checked in Item_func_group_concat::add(). As a result, a crash
happened under low-memory conditions.
Fixed by properly checking the return value.
1. BUG#21704 - Renaming column does not update FK definition
2. Changes in mysql-test/include/mtr_warnings.sql so that the testcase
for BUG#21704 doesn't fail because of the warnings generated.
Detailed revision comments:
r5488 | vasil | 2009-07-09 19:16:44 +0300 (Thu, 09 Jul 2009) | 13 lines
branches/5.1:
Fix Bug#21704 Renaming column does not update FK definition
by checking whether a column that participates in a FK definition is being
renamed and denying the ALTER in this case.
The patch was originally developed by Davi Arnaut <Davi.Arnaut@Sun.COM>:
http://lists.mysql.com/commits/77714
and was later adjusted to conform to InnoDB coding style by me (Vasil);
I also added some more comments and moved the bug-specific mysql-test to
a separate file to make it more manageable and flexible.
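An illustrative example of a statement that is now denied (table and
column names are hypothetical):
  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (fk_id INT,
                      FOREIGN KEY (fk_id) REFERENCES parent(id)) ENGINE=InnoDB;
  -- renaming a column that participates in a FK definition is rejected
  ALTER TABLE child CHANGE fk_id new_id INT;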
and base tables
myrg_attach_children() could reuse a buffer that was allocated
previously based on the definition of a child table. The problem
was that the child's definition might have changed, so reusing
the buffer could lead to crashes or valgrind errors under some
circumstances.
Fixed by changing myrg_attach_children() so that the
rec_per_key_part buffer is reused only when the child table
has not changed, and is reallocated otherwise (the old buffer is
deallocated if necessary).
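An illustrative scenario (hypothetical MERGE table m1 with child table
c1); the exact circumstances may differ:
  SELECT * FROM m1;               -- children attached, buffer allocated
  ALTER TABLE c1 ADD INDEX (b);   -- child definition changes
  SELECT * FROM m1;               -- before the fix the stale buffer could be reused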
BUG#45749 - Race condition in SET GLOBAL innodb_commit_concurrency=DEFAULT
Detailed revision comments:
r5419 | marko | 2009-06-25 16:11:57 +0300 (Thu, 25 Jun 2009) | 18 lines
branches/5.1: Merge r5418 from branches/zip:
------------------------------------------------------------------------
r5418 | marko | 2009-06-25 15:55:52 +0300 (Thu, 25 Jun 2009) | 5 lines
Changed paths:
M /branches/zip/ChangeLog
M /branches/zip/handler/ha_innodb.cc
M /branches/zip/mysql-test/innodb_bug42101-nonzero.result
M /branches/zip/mysql-test/innodb_bug42101-nonzero.test
M /branches/zip/mysql-test/innodb_bug42101.result
M /branches/zip/mysql-test/innodb_bug42101.test
branches/zip: Fix a race condition caused by
SET GLOBAL innodb_commit_concurrency=DEFAULT. (Bug #45749)
When innodb_commit_concurrency is initially set nonzero,
DEFAULT would change it back to 0, triggering Bug #42101.
rb://139 approved by Heikki Tuuri.
------------------------------------------------------------------------
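For example (assuming the server was started with a nonzero setting
such as --innodb_commit_concurrency=10):
  SET GLOBAL innodb_commit_concurrency = DEFAULT;
  -- before the fix, DEFAULT changed the value back to 0,
  -- triggering Bug #42101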
use partial primary key if another index can prevent filesort
The fix for bug #28404 caused covering ordering indexes to be
preferred unconditionally over non-covering and ref indexes.
Fixed by comparing the cost of using a covering index to the cost of
using a ref index even for covering ordering indexes.
Added an assertion to make the expected state of the local variables
explicit.
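A sketch of the kind of query shape involved (entirely hypothetical
schema): a ref lookup on a prefix of the primary key can be cheaper
than reading a covering ordering index to avoid the filesort:
  CREATE TABLE t1 (a INT, b INT, c INT,
                   PRIMARY KEY (a, b), KEY k (c)) ENGINE=InnoDB;
  -- ref access on the primary key prefix (a = 10) now competes on cost
  -- with scanning index k to avoid the filesort for ORDER BY c
  SELECT c FROM t1 WHERE a = 10 ORDER BY c;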
The crash happened because for views that are joins we have
table_list->table == 0, and any call of the form
table_list->table-><method>() leads to a crash.
The fix is to call the table->file->extra()
method for all tables belonging to the view.
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was an inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating the value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up having a length less
than its precision or decimals, which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
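For example (illustrative; the literal below has more than 65 digits,
exceeding DECIMAL_MAX_PRECISION):
  CREATE TABLE t1 SELECT
    1111111111111111111111111111111111111111111111111111111111111111111111 AS c1;
  -- bogus error or debug assertion before the fix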
contains ONLY_FULL_GROUP_BY
The partitioning code needs to call Item::fix_fields()
on the partitioning expression in order to prepare
it for evaluation.
It does this by creating a special table and table list
for the scope of the partitioning expression.
But when checking ONLY_FULL_GROUP_BY,
Item_field::fix_fields() relied on cached_table always being
set and tried to use it to get the
select_lex of the SELECT the field's table belongs to.
However, cached_table was not set by the partitioning code
that creates the artificial TABLE_LIST used to resolve the
partitioning expression, and this resulted in a crash.
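An illustrative statement of the kind that exercises this code path
(hypothetical table; exact reproduction details may differ):
  SET sql_mode = 'ONLY_FULL_GROUP_BY';
  CREATE TABLE t1 (a INT, b INT)
    PARTITION BY HASH (a + b) PARTITIONS 2;
  SELECT * FROM t1;   -- resolving the partitioning expression could crash before the fix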
Fixed by rectifying the following errors:
1. Item_field::fix_fields(): the code that checks for
ONLY_FULL_GROUP_BY relies on having tables with
cacheable_table set. This is mostly true, the only
two exceptions being the partitioning context table
and the trigger context table.
Fixed by taking the current parsing context if no pointer
to the TABLE_LIST instance is present in cached_table.
2. fix_fields_part_func():
2a. The code that adds the table being created to the
scope for the partitioning expression is mostly a copy
of add_table_to_list() and friends, with one exception:
it was not marking the table as cacheable (something that
the normal add_table_to_list() does). This caused the
problem in the ONLY_FULL_GROUP_BY check in
Item_field::fix_fields() to appear.
Fixed by setting the correct members to make the table
cacheable.
The ideal structural fix for this is to use a unified
interface for adding a table to a table list
(add_table_to_list?); noted in a TODO comment.
2b. Item::fix_fields() was called with a NULL destination
pointer. This caused uninitialized memory reads in the
overloaded ::fix_fields() function (namely
Item_field::fix_fields()), as it expects a non-zero pointer
there. Fixed by passing the source pointer similarly to how
it's done in JOIN::prepare().