Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size may crash the server.
The estimation of the number of index entries was done incorrectly,
causing a subsequent assertion failure or server crash.
Docs note: min value for myisam_sort_buffer_size has been
changed from 4 to 4096.
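A rough sketch of the scenario the new test case covers (table layout and
values here are illustrative, not the actual test):
  SET SESSION myisam_sort_buffer_size=4;  -- old minimum; 4096 is the minimum after this fix
  CREATE TABLE t1 (a VARCHAR(255), FULLTEXT KEY (a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (REPEAT('a', 255)), (REPEAT('b', 255));
  REPAIR TABLE t1;  -- could hit the assertion failure/crash with a too-small sort buffer
  DROP TABLE t1;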
mysql-test/r/fulltext.result:
A test case for BUG#51866.
mysql-test/r/myisam.result:
Min value for myisam_sort_buffer_size is 4096.
mysql-test/r/variables.result:
Min value for myisam_sort_buffer_size is 4096.
mysql-test/suite/sys_vars/r/myisam_sort_buffer_size_basic_32.result:
Min value for myisam_sort_buffer_size is 4096.
mysql-test/t/fulltext.test:
A test case for BUG#51866.
sql/mysqld.cc:
Min value for myisam_sort_buffer_size is 4096.
storage/myisam/mi_check.c:
When estimating the number of index entries for an external
fulltext parser, take into account that key_length may be
bigger than myisam_sort_buffer_size. Reuse the logic from
_create_index_by_sort(): force MIN_SORT_BUFFER to be the
minimum value for myisam_sort_buffer_size.
Another problem is that ftkey_nr has no meaning other than
the serial number of the fulltext index, starting with 1.
We cannot tell from its value whether this key uses the
built-in or an external parser; in other words, we always
entered the if-branch for the external parser. At this point,
the only way to check whether we use the default parser is to
compare keyinfo::parser with &ft_default_parser.
storage/myisam/sort.c:
Get rid of MIN_SORT_MEMORY, use MIN_SORT_BUFFER instead
(defined in myisamdef.h, has the same value and purpose).
Invalid memory read if HANDLER ... READ NEXT is executed
after a failed (e.g. on an empty table) HANDLER ... READ FIRST.
The problem was that we attempted to perform READ NEXT even
though a failed READ FIRST leaves no pivot to continue from.
With this fix, READ NEXT after a failed READ FIRST is
equivalent to READ FIRST.
This bug affects MyISAM tables only.
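A minimal sketch of the failing sequence (names are illustrative; the actual
test case is in handler_myisam.test):
  CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM;  -- intentionally left empty
  HANDLER t1 OPEN;
  HANDLER t1 READ a FIRST;  -- fails: the table is empty
  HANDLER t1 READ a NEXT;   -- used to read invalid memory; now behaves like READ FIRST
  HANDLER t1 CLOSE;
  DROP TABLE t1;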
mysql-test/r/gis-rtree.result:
Restore a test case for BUG#51357.
mysql-test/r/handler_myisam.result:
A test case for BUG#51877.
mysql-test/t/gis-rtree.test:
Restore a test case for BUG#51357.
mysql-test/t/handler_myisam.test:
A test case for BUG#51877.
storage/myisam/mi_rnext.c:
"search first" failed. This means we have no pivot for
"search next", or in other words MI_INFO::lastkey is
likely uninitialized.
Normally SQL layer would never request "search next" if
"search first" failed. But HANDLER may do anything.
As mi_rnext() without preceeding mi_rkey()/mi_rfirst()
equals to mi_rfirst(), we must restore original state
as if failing mi_rfirst() was not called.
Detailed revision comments:
r6785 | vasil | 2010-03-10 09:04:38 +0200 (Wed, 10 Mar 2010) | 11 lines
branches/5.1:
Add the missing --reap statements in innodb_bug38231.test. MySQL probably
started enforcing their presence recently, and the test began failing like:
main.innodb_bug38231 [ fail ]
Test ended at 2010-03-10 08:48:32
CURRENT_TEST: main.innodb_bug38231
mysqltest: At line 49: Cannot run query on connection between send and reap
r6788 | vasil | 2010-03-10 10:53:21 +0200 (Wed, 10 Mar 2010) | 8 lines
branches/5.1:
In innodb_bug38231.test: replace the fragile sleep 0.2 that depends on timing
with a more robust condition which waits for the TRUNCATE and LOCK commands
to appear in information_schema.processlist. This could also break if there
are other sessions executing the same SQL commands, but there are none during
the execution of the mysql test.
Detailed revision comments:
r6783 | jyang | 2010-03-09 17:54:14 +0200 (Tue, 09 Mar 2010) | 9 lines
branches/5.1: Fix bug #47621 "MySQL and InnoDB data dictionaries
will become out of sync when renaming columns". MySQL does not
provide the new column name information to the storage engine to
update the system table. To avoid a column name mismatch, we shall
just request a table copy for now.
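For example, a column rename of the following form now goes through the
table-copy path (illustrative statement, not taken from the test suite):
  CREATE TABLE t1 (c1 INT) ENGINE=InnoDB;
  ALTER TABLE t1 CHANGE c1 c2 INT;  -- rebuilt via copy so both data dictionaries stay in sync
  DROP TABLE t1;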
rb://246 approved by Marko.
If the columns listed in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement mismatched, a debug assertion would
trigger in the query cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
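A sketch of the kind of statement involved (illustrative; the actual test
case is in query_cache_with_views.test):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  INSERT INTO t1 SELECT b FROM v1;  -- ER_BAD_FIELD_ERROR; the lost error could previously
                                    -- trip a debug assertion in the cache invalidation
  DROP VIEW v1;
  DROP TABLE t1;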
mysql-test/r/query_cache_with_views.result:
* added test for bug 46615
mysql-test/t/query_cache_with_views.test:
* added test for bug 46615
sql/sql_parse.cc:
* added check for handle_select() return code before attempting to invalidate the cache.
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect join table sorting,
which is performed before greedy_search starts.
In our case the table which really has no
dependencies should be put on top of the list,
but this does not happen because the inner tables
appear to have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from
the condition which checks whether table dependencies
should be updated.
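A sketch of the query shape involved (illustrative only; the actual test case
is in join.test):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  SELECT * FROM t1
    JOIN t2 ON t2.a = t1.a AND RAND() < 0.5  -- RAND() brings RAND_TABLE_BIT into the ON condition
    JOIN t3 ON t3.a = t2.a;
  DROP TABLE t1, t2, t3;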
mysql-test/r/join.result:
test result
mysql-test/t/join.test:
test case
sql/sql_select.cc:
The RAND_TABLE_BIT mask should not be counted, as it
prevents the update of inner table dependencies. This
can happen, for example, if the RAND() function is used
in a JOIN ON clause.
col equal to itself!
There's no need to copy the value of a field into itself.
While this is generally harmless (apart from a performance penalty),
it may be dangerous when the copy code doesn't expect it.
Fixed by checking if the source field is the same as the destination
field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
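For example (illustrative):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  UPDATE t1 SET a = a;  -- source and destination field are the same; the copy is now skipped
  DROP TABLE t1;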
function on windows
When making sure that the directory path ends up with a
slash/backslash we need to check for the correct length of
the buffer and trim at the appropriate location so we don't
write past the end of the buffer.
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now we distinguish between IGNORE and ERROR_FOR_NULL and
save/restore the check-field options.
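A sketch of the behaviour being fixed (illustrative; the actual test case is
in trigger.test):
  CREATE TABLE t1 (a INT NOT NULL DEFAULT 0);
  INSERT INTO t1 VALUES (1);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW SET @x = 1;
  UPDATE IGNORE t1 SET a = NULL;  -- must warn and store the implicit default,
                                  -- the same as when no trigger has fired
  DROP TABLE t1;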
mysql-test/r/trigger.result:
Show that UPDATE...SET...NULL on NOT NULL columns doesn't behave differently
when run after a trigger.
mysql-test/t/trigger.test:
Show that UPDATE...SET...NULL on NOT NULL columns doesn't behave differently
when run after a trigger.
sql/field_conv.cc:
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL.
Distinguish between the two.
sql/sp_head.cc:
Raise error as needed.
sql/sql_class.cc:
Save and restore check-fields options.
sql/sql_class.h:
Make room so we can save check-fields options.
sql/sql_insert.cc:
Raise error as needed.
myisam tables
Queries following TRUNCATE of a partitioned MyISAM table
may crash the server if myisam_use_mmap is true.
Internally this is a MyISAM bug, but it is limited to
partitioned tables, because MyISAM doesn't use the
::delete_all_rows() method for TRUNCATE and goes via
table re-creation instead.
MyISAM didn't properly fall back to non-mmapped I/O after an
mmap() failure. This was not repeatable on Linux before,
likely because (quoting from man mmap):
SUSv3 specifies that mmap() should fail if length is 0.
However, in kernels before 2.6.12, mmap() succeeded in
this case: no mapping was created and the call returned
addr. Since kernel 2.6.12, mmap() fails with the error
EINVAL for this case.
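A sketch of the affected scenario (illustrative; the actual test case is in
partition.test):
  -- server running with --myisam-use-mmap=1
  CREATE TABLE t1 (a INT) ENGINE=MyISAM
    PARTITION BY HASH (a) PARTITIONS 2;
  INSERT INTO t1 VALUES (1), (2);
  TRUNCATE TABLE t1;  -- partitioned tables go via ::delete_all_rows()
  SELECT * FROM t1;   -- could crash when mmap of the now-empty data file failed
  DROP TABLE t1;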
mysql-test/r/partition.result:
A test case for BUG#51868.
mysql-test/t/partition.test:
A test case for BUG#51868.
storage/myisam/mi_delete_all.c:
_mi_unmap_file() is specific to the compressed record
format, which is read-only. As compressed MyISAM data files
are read-only, we must never use _mi_unmap_file() in
mi_delete_all_rows().
storage/myisam/mi_dynrec.c:
Make MyISAM mmap code more robust against errors:
- set file_read/file_write handlers if mmap succeeded;
- reset file_read/file_write handlers on unmap.
storage/myisam/mi_extra.c:
Moved file_read/file_write handlers initialization to
mi_dynmap_file().
storage/myisam/myisamdef.h:
Added mi_munmap_file() declaration.
Problem: the caseup_multiply and casedn_multiply members
were not initialized for a dynamic collation, so the
UPPER() and LOWER() functions returned empty strings.
Fix: initialize the members properly.
Adding tests:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
Applying the fix:
mysys/charset.c
(Original patch by Sinisa Milivojevic)
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified to not adjust bad
YEAR(4) values to 2000.
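Illustrative comparison (the actual test case is in type_year.test):
  CREATE TABLE t1 (y YEAR(4));
  INSERT INTO t1 VALUES (2000), (0);  -- the integer 0 is stored as the "bad" value 0000
  SELECT * FROM t1 WHERE y = 2000;    -- must return only the 2000 row, not the 0000 row as well
  DROP TABLE t1;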
mysql-test/r/type_year.result:
Test case for bug #49910.
mysql-test/t/type_year.test:
Test case for bug #49910.
sql/item_cmpfunc.cc:
Bug #49910: Behavioural change in SELECT/WHERE on YEAR(4) data type
The get_year_value() function has been modified to not adjust bad
YEAR(4) values to 2000.
The problem is that when we build the condition for the
grouped result, the const part of the condition is cut off.
It happens because some parts of the 'having' condition
which refer to an outer join become const after
make_join_statistics(). These parts may be lost during
further 'having' condition transformation in JOIN::exec().
The fix is to add a 'having' condition check for const
tables after make_join_statistics() is performed.
mysql-test/r/having.result:
test result
mysql-test/t/having.test:
test case
sql/sql_select.cc:
added 'having' condition check for const tables
after make_join_statistics is performed.
This has been back-ported from 6.0 as the problems proved to afflict
5.1 as well.
The fix exposed two new bugs. They were reported as follows.
Bug#52174: Sometimes wrong plan when reading a MAX value
from non-NULL index
Bug#52173: Reading NULL value from non-NULL index gives wrong
result in embedded server
Both bugs taken together affect a much smaller class of queries than #47762,
so the fix stays for now.
The optimizer erroneously translated LEFT JOIN into INNER JOIN,
which leads to cutting off rows with a NULL right side. It
happens because Item_row uses the not_null_tables() method from
the base (Item) class and does not calculate 'not null tables'
properly. The fix is to add calculation of 'not null tables'
to Item_row.
mysql-test/r/join_outer.result:
test result
mysql-test/t/join_outer.test:
test case
sql/item_row.cc:
adding calculation of 'not null tables' to Item_row.
sql/item_row.h:
adding calculation of 'not null tables' to Item_row.
The crash happens because of a discrepancy between the values of
const_tables and join->const_table_map (make_join_statistics).
The calculation of const_tables uses a condition with a
HA_STATS_RECORDS_IS_EXACT flag check; the calculation of
join->const_table_map does not use this flag check.
In the case of a MERGE table without a UNION list but with an
index, the table does not become a const table and thus
join_read_const_table() is not called for it.
join->const_table_map assumes this table is const, and later
in make_join_select the table is used for building and
evaluating the const condition. As the table record buffer
is not populated, this leads to a crash.
The fix is to add a check whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
mysql-test/r/merge.result:
test result
mysql-test/t/merge.test:
test case
sql/sql_select.cc:
added a check whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
reverse DNS lookup of "localhost" returns "broadcasthost" on Snow Leopard, and NULL on most others.
Simply ignore the output, as this is not an essential part of UDF testing.
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification) we need to disable
in-place alter table because we can't ask the storage engine to have
two copies of the index with the same name even temporarily (if we
first do the ADD INDEX and then DROP INDEX) and we can't modify
indexes that are needed by e.g. foreign keys if we first do
DROP INDEX and then ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
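For example (illustrative):
  CREATE TABLE t1 (a INT, KEY k1 (a)) ENGINE=InnoDB;
  ALTER TABLE t1 DROP INDEX k1, ADD INDEX k1 (a);  -- in-place (fast) ALTER is disabled for this shape
  DROP TABLE t1;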
NULL column for NULL
The optimization to read MIN() and MAX() values from an
index did not properly handle comparisons with NULL
values. Fixed by giving up the particular optimization step
if there are non-NULL safe comparisons with NULL values, as
the result is NULL anyway.
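For example (illustrative):
  CREATE TABLE t1 (a INT NOT NULL, KEY (a));
  INSERT INTO t1 VALUES (1), (2);
  SELECT MIN(a) FROM t1 WHERE a = NULL;  -- not a NULL-safe comparison; the index shortcut is
                                         -- skipped and the result is NULL anyway
  DROP TABLE t1;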
Also, the Oracle copyright notice was added to all files.
Base Tables
The type inference of a view column caused the result to be
interpreted as the wrong type: DATE columns were interpreted
as TIME and TIME as DATETIME. This happened because view
columns are represented by Item_ref objects as opposed to
Item_field objects. Item_ref had no method for retrieving a
TIME value and thus was forced to depend on the default
implementation for any expression, which caused the
expression to be evaluated as a string and then parsed into
a TIME/DATETIME value.
Fixed by letting the Item_ref classes forward the request for
a TIME value to the referred Item (which is a field in this
case); this reads the TIME value directly without conversion.
index cardinalities=1
Parallel repair didn't properly update index cardinality
in certain cases.
When myisam_sort_buffer_size is not large enough to hold all
keys, the index cardinality was updated before the index was
actually written, that is, before any index statistics were
available.
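A sketch of the conditions involved (illustrative; the actual test case is in
myisam.test):
  CREATE TABLE t1 (a VARCHAR(255), KEY (a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (REPEAT('a', 255)), (REPEAT('b', 255)), (REPEAT('c', 255));
  SET SESSION myisam_repair_threads=2;       -- parallel repair
  SET SESSION myisam_sort_buffer_size=4096;  -- too small to hold all keys at once
  REPAIR TABLE t1;
  SHOW INDEX FROM t1;  -- cardinality could come out as 1 before the fix
  DROP TABLE t1;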
mysql-test/r/myisam.result:
A test case for BUG#47444.
mysql-test/t/myisam.test:
A test case for BUG#47444.
storage/myisam/sort.c:
update_key_parts() must be called after all index
entries are written, when index statistics are available.
- Now DELETE IGNORE skips over rows that cannot be deleted due to a foreign key constraint (as it was supposed to do)
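A sketch of the intended behaviour (illustrative; the actual test case is in
foreign_key.test):
  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT, FOREIGN KEY (id) REFERENCES parent (id)) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1);
  DELETE IGNORE FROM parent;  -- skips the row still referenced from child and warns,
                              -- instead of aborting with an error
  DROP TABLE child, parent;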
mysql-test/r/foreign_key.result:
Test case for Bug#44987 DELETE IGNORE and FK constraint
mysql-test/t/foreign_key.test:
Test case for Bug#44987 DELETE IGNORE and FK constraint
sql/sql_delete.cc:
Fix for Bug#44987 DELETE IGNORE and FK constraint
Now DELETE IGNORE skips over rows that cannot be deleted due to a foreign key constraint (as it was supposed to do)
Bug fix inspired by: Moritz Mertinkat
(regression)
The problem was that partition pruning did not exclude the
last partition if the range was beyond it
(i.e. when not using MAXVALUE).
The fix was to not include the last partition if the
partitioning function value was not within the partition
range.
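A sketch of the pruning being fixed (illustrative; the actual tests are in
partition_innodb.test and partition_pruning.test):
  CREATE TABLE t1 (a INT) ENGINE=InnoDB
  PARTITION BY RANGE (a)
  (PARTITION p0 VALUES LESS THAN (10),
   PARTITION p1 VALUES LESS THAN (20));  -- no MAXVALUE partition
  INSERT INTO t1 VALUES (5), (15);
  EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 100;  -- must not list p1; the range lies
                                                      -- entirely beyond the last partition
  DROP TABLE t1;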
mysql-test/r/partition_innodb.result:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Updated result
mysql-test/r/partition_pruning.result:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Updated result
mysql-test/t/partition_innodb.test:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Added test for pruning in InnoDB, since it does not show
for MyISAM due to 'Impossible WHERE noticed after reading
const tables'.
mysql-test/t/partition_pruning.test:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Added test
sql/sql_partition.cc:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Also increase the partition id if not inside the last partition
(and no MAXVALUE is defined).
Added comments and DBUG_ASSERT.
SET autocommit=1 while an XA transaction is active may
cause various side effects, including memory corruption
and a server crash.
The problem is that SET autocommit=1 and subsequent queries
attempt to commit the local transaction, whereas the XA
transaction is still active.
As local and XA transactions are mutually exclusive, this
patch forbids enabling autocommit mode while an XA
transaction is active.
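A sketch of the now-forbidden sequence (illustrative; the actual test case is
in xa.test):
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  XA START 'x1';
  INSERT INTO t1 VALUES (1);
  SET autocommit=1;  -- now rejected with an error while the XA transaction is active
  XA END 'x1';
  XA ROLLBACK 'x1';
  DROP TABLE t1;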
mysql-test/r/xa.result:
A test case for BUG#51342.
mysql-test/t/xa.test:
A test case for BUG#51342.
sql/set_var.cc:
Forbid enabling autocommit mode while XA transaction is
active.