during an UPDATE
Extended the fix for bug 29310 to multi-table update:
When a table is being updated it has two sets of fields: fields required for
checking conditions and fields to be updated. A storage engine is allowed
not to retrieve the columns marked for update. Because of this, records can't
be compared to see whether the data has been changed or not, which made the
server always update records regardless of whether the data changed.
Now, when an auto-updatable timestamp field is present and the server sees that
the table handler is not going to retrieve write-only fields, all such
fields are marked as to be read to force the handler to retrieve them.
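A minimal illustration of the scenario, using hypothetical table and column names:
  CREATE TABLE t1 (id INT,
                   ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP);
  CREATE TABLE t2 (id INT);
  -- Multi-table UPDATE that may assign values equal to the current ones;
  -- ts should only be bumped for rows whose data actually changes.
  UPDATE t1, t2 SET t1.id = t2.id WHERE t1.id = t2.id;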
Problem: ALTER TABLE ADD INDEX may lead to table copying if there is a
numeric field with a non-default display width modifier specified.
Fix: compare numeric fields' storage lengths when we decide whether
they can be considered 'equal' for table alteration purposes.
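A minimal sketch of the case described, with hypothetical names:
  CREATE TABLE t1 (a INT(5));          -- non-default display width
  ALTER TABLE t1 ADD INDEX idx_a (a);  -- should use fast index creation, not a table copy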
mysql-test/r/error_simulation.result:
Fix for bug#50946: fast index creation still seems to copy the table
- test result.
mysql-test/t/error_simulation.test:
Fix for bug#50946: fast index creation still seems to copy the table
- test case.
sql/field.cc:
Fix for bug#50946: fast index creation still seems to copy the table
- check numeric fields' pack lengths instead of their display lengths
when comparing field equality for table alteration purposes.
sql/sql_table.cc:
Fix for bug#50946: fast index creation still seems to copy the table
- check compare_tables() result for testing purposes.
Previously installed dynamic plugins are explicitly not loaded
on startup with --skip-grant-tables enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN commands are allowed and result in
inconsistent error messages (reporting a duplicate plugin or
that the plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
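A hedged illustration of the new behavior (the plugin name is just an example):
  -- server started with --skip-grant-tables
  INSTALL PLUGIN example SONAME 'ha_example.so';
  -- now fails with ER_OPTION_PREVENTS_STATEMENT instead of a misleading
  -- 'duplicate plugin' / 'plugin does not exist' message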
Correcting a patch mistake. The converted file path is placed in 'buff', not in opt_secure_file_priv.
mysql-test/r/loaddata.result:
* Updated test case: since secure_file_priv is now normalized, the previous values have changed.
sql/mysqld.cc:
* Fixed patch mistake
Arg_comparator initializes the 'comparators' array in case of
ROW comparison and does not free this array on destruction.
This leads to memory leaks.
The fix:
- added an Arg_comparator::cleanup() method which frees the
'comparators' array;
- added an Item_bool_func2::cleanup() method which calls
Arg_comparator::cleanup().
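A sketch of the kind of statement that exercised the leak (a row comparison,
optionally re-executed as a prepared statement, cf. the ps.test/row.test cases below):
  SELECT (1, 2, 3) = (1, 2, 3);
  PREPARE stmt FROM 'SELECT (1, 2) = (1, 2)';
  EXECUTE stmt;
  EXECUTE stmt;              -- before the fix the 'comparators' array was never freed
  DEALLOCATE PREPARE stmt;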
mysql-test/r/ps.result:
test case
mysql-test/r/row.result:
test case
mysql-test/t/ps.test:
test case
mysql-test/t/row.test:
test case
sql/item_cmpfunc.h:
-added Arg_comparator::cleanup() method which frees
'comparators' array.
-added Item_bool_func2::cleanup() method which calls
Arg_comparator::cleanup() method
When re-setting the GLOBAL debug settings (SET GLOBAL debug='') the
server set the data elements of the top (initial) frame to 0 without
freeing the underlying memory. As these are global settings there's
a chance that something is already allocated there.
Fixed by:
1. making sure the allocated data is cleaned up before being re-set
while parsing a debug string
2. making sure the data allocated in the global settings is freed on
shutdown.
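A hedged illustration of the leaking sequence (debug builds only; the keyword is hypothetical):
  SET GLOBAL debug = 'd,some_keyword';  -- allocates data in the top (initial) frame
  SET GLOBAL debug = '';                -- before the fix the old data was not freed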
Problem: EXPLAIN EXTENDED was trying to resolve references to
freed temporary table fields for GROUP_CONCAT()'s ORDER BY arguments.
Fix: use the stored original GROUP_CONCAT() arguments in such a case.
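A hypothetical query shape of the kind that triggered the problem (the actual
cases are in func_gconcat.test):
  EXPLAIN EXTENDED
  SELECT GROUP_CONCAT(a ORDER BY b) FROM t1 GROUP BY c;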
mysql-test/r/func_gconcat.result:
Fix for bug#52397: another crash with explain extended and group_concat
- test result.
mysql-test/t/func_gconcat.test:
Fix for bug#52397: another crash with explain extended and group_concat
- test case.
sql/item_sum.cc:
Fix for bug#52397: another crash with explain extended and group_concat
- use "pargs", printing ORDER BY arguments in the
Item_func_group_concat::print() instead of "order" to avoid
possible reference resolving to (freed) temporary table fields.
function on windows
When making sure that the directory path ends up with a
slash/backslash we need to check for the correct length of
the buffer and trim at the appropriate location so we don't
write past the end of the buffer.
When mysqlbinlog was given the --database=X flag, it always printed
'ROLLBACK TO', but the corresponding 'SAVEPOINT' statement was not
printed. The replication filters (replicate-do/ignore-db) and the binlog
filters (binlog-do/ignore-db) had the same problem; both are solved by
this patch.
After this patch, we always check whether the query is a 'SAVEPOINT'
statement or not. Because this is a literal check, 'SAVEPOINT' and
'ROLLBACK TO' statements are also binlogged in uppercase, with no
preceding comments.
A binlog written before this patch can still be handled correctly, except
in one case: when comments appear in front of the keywords, for example:
/* bla bla */ SAVEPOINT a;
/* bla bla */ ROLLBACK TO a;
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have actually been read.
The fix is to remove the erroneous 'outer_join' variable check.
mysql-test/r/join.result:
test result
mysql-test/t/join.test:
test case
sql/sql_select.cc:
removed erroneous 'outer_join' variable check.
The crash happens because of an incorrect max_length calculation
in the QUOTE function (due to overflow). max_length is set
to 0, which leads to an assertion failure.
The fix is to cast the expression result to
a ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
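A hypothetical expression of the affected kind, assuming c is a LONGTEXT column
(maximum length 4294967295, so a result-length estimate of roughly twice that
overflows a 32-bit max_length):
  SELECT QUOTE(c) FROM t1;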
mysql-test/r/func_str.result:
test case
mysql-test/t/func_str.test:
test case
sql/item_strfunc.h:
cast expression result to ulonglong variable and
adjust it if the result exceeds MAX_BLOB_WIDTH.
There was no way to repair a corrupt ARCHIVE data file
when unrecoverable data loss is inevitable.
With this fix REPAIR ... EXTENDED attempts to restore
as many rows as possible, ignoring unrecoverable data.
Normal REPAIR is still only able to repair the meta-data
file.
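Illustration of the two repair modes, assuming a corrupt ARCHIVE table t1:
  REPAIR TABLE t1;           -- repairs the meta-data file only
  REPAIR TABLE t1 EXTENDED;  -- additionally salvages as many rows as possible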
mysql-test/r/archive.result:
A test case for BUG#46565.
mysql-test/std_data/bug46565.ARZ:
A test case for BUG#46565.
mysql-test/std_data/bug46565.frm:
A test case for BUG#46565.
mysql-test/t/archive.test:
A test case for BUG#46565.
storage/archive/ha_archive.cc:
Allow unrecoverable data loss when extended repair
is requested.
Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size may crash the server.
The estimation of the number of index entries was done incorrectly,
causing an assertion failure or server crash.
Docs note: the minimum value for myisam_sort_buffer_size has been
changed from 4 to 4096.
mysql-test/r/fulltext.result:
A test case for BUG#51866.
mysql-test/r/myisam.result:
Min value for myisam_sort_buffer_size is 4096.
mysql-test/r/variables.result:
Min value for myisam_sort_buffer_size is 4096.
mysql-test/suite/sys_vars/r/myisam_sort_buffer_size_basic_32.result:
Min value for myisam_sort_buffer_size is 4096.
mysql-test/t/fulltext.test:
A test case for BUG#51866.
sql/mysqld.cc:
Min value for myisam_sort_buffer_size is 4096.
storage/myisam/mi_check.c:
When estimating the number of index entries for an external
fulltext parser, take into account that key_length may
be bigger than myisam_sort_buffer_size. Reuse the logic
from _create_index_by_sort(): force MIN_SORT_BUFFER to
be the minimum value for myisam_sort_buffer_size.
Another problem is that ftkey_nr has no meaning other
than the serial number of the fulltext index, starting with 1.
We cannot tell whether a key uses the built-in or an external
parser based on its value. In other words we always
entered the if-branch for the external parser. At this point,
the only way to check whether we use the default parser is to
compare keyinfo::parser with &ft_default_parser.
storage/myisam/sort.c:
Get rid of MIN_SORT_MEMORY, use MIN_SORT_BUFFER instead
(defined in myisamdef.h, has the same value and purpose).
Invalid memory read if HANDLER ... READ NEXT is executed
after a failed (e.g. on an empty table) HANDLER ... READ FIRST.
The problem was that we attempted to perform READ NEXT
even though no pivot is available from the failed READ FIRST.
With this fix, READ NEXT after a failed READ FIRST is equivalent
to READ FIRST.
This bug affects MyISAM tables only.
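Illustration of the statement sequence described, assuming an empty MyISAM table t1:
  HANDLER t1 OPEN;
  HANDLER t1 READ FIRST;  -- fails (the table is empty)
  HANDLER t1 READ NEXT;   -- previously read invalid memory; now behaves like READ FIRST
  HANDLER t1 CLOSE;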
mysql-test/r/gis-rtree.result:
Restore a test case for BUG51357.
mysql-test/r/handler_myisam.result:
A test case for BUG#51877.
mysql-test/t/gis-rtree.test:
Restore a test case for BUG51357.
mysql-test/t/handler_myisam.test:
A test case for BUG#51877.
storage/myisam/mi_rnext.c:
"search first" failed. This means we have no pivot for
"search next", or in other words MI_INFO::lastkey is
likely uninitialized.
Normally the SQL layer would never request "search next" if
"search first" failed. But HANDLER may do anything.
As mi_rnext() without a preceding mi_rkey()/mi_rfirst()
is equivalent to mi_rfirst(), we must restore the original state
as if the failed mi_rfirst() had not been called.
Detailed revision comments:
r6783 | jyang | 2010-03-09 17:54:14 +0200 (Tue, 09 Mar 2010) | 9 lines
branches/5.1: Fix bug #47621 "MySQL and InnoDB data dictionaries
will become out of sync when renaming columns". MySQL does not
provide the new column name information to the storage engine to
update the system table. To avoid a column name mismatch, we shall
just request a table copy for now.
rb://246 approved by Marko.
If the columns listed in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement mismatched, a debug assertion would
trigger in the cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
mysql-test/r/query_cache_with_views.result:
* added test for bug 46615
mysql-test/t/query_cache_with_views.test:
* added test for bug 46615
sql/sql_parse.cc:
* added check for handle_select() return code before attempting to invalidate the cache.
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting
which is performed before greedy_search starts.
In our case the table which really has
no dependencies should be put at the top of
the list, but this does not happen because the
inner tables appear to have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from
the condition which checks whether table dependencies
should be updated.
mysql-test/r/join.result:
test result
mysql-test/t/join.test:
test case
sql/sql_select.cc:
The RAND_TABLE_BIT mask should not be counted as it
prevents the update of inner table dependencies.
This can happen, for example, when the RAND() function
is used in a JOIN ON clause.
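A hypothetical query shape matching this description (RAND() in the JOIN ON
clause of an outer join):
  SELECT * FROM t1 LEFT JOIN t2 ON t2.a = t1.a AND t2.b > RAND();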
col equal to itself!
There's no need to copy the value of a field into itself.
While generally harmless (except for some performance penalties)
it may be dangerous when the copy code doesn't expect this.
Fixed by checking if the source field is the same as the destination
field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
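A hypothetical statement that ends up copying a column onto itself:
  UPDATE t1 SET a = a;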
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now distinguishes between IGNORE and ERROR_FOR_NULL and saves/restores
the check-field options.
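A hedged illustration of the behavior being verified, with hypothetical names:
  CREATE TABLE t1 (a INT NOT NULL);
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;
  INSERT INTO t1 VALUES (1);
  UPDATE t1 SET a = NULL;  -- must behave the same whether or not a trigger has run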
mysql-test/r/trigger.result:
Show that UPDATE...SET...NULL on NOT NULL columns doesn't behave differently
when run after a trigger.
mysql-test/t/trigger.test:
Show that UPDATE...SET...NULL on NOT NULL columns doesn't behave differently
when run after a trigger.
sql/field_conv.cc:
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL.
Distinguish between the two.
sql/sp_head.cc:
Raise error as needed.
sql/sql_class.cc:
Save and restore check-fields options.
sql/sql_class.h:
Make room so we can save check-fields options.
sql/sql_insert.cc:
Raise error as needed.
myisam tables
Queries following TRUNCATE of a partitioned MyISAM table
may crash the server if myisam_use_mmap is true.
Internally this is a MyISAM bug, but it is limited to partitioned
tables because MyISAM doesn't use the ::delete_all_rows()
method for TRUNCATE and goes via table recreation instead.
MyISAM didn't properly fall back to non-mmapped I/O after
an mmap() failure. This was not repeatable on Linux before, likely
because (quote from man mmap):
SUSv3 specifies that mmap() should fail if length is 0.
However, in kernels before 2.6.12, mmap() succeeded in
this case: no mapping was created and the call returned
addr. Since kernel 2.6.12, mmap() fails with the error
EINVAL for this case.
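A hedged illustration of the failing sequence, assuming a partitioned MyISAM
table t1 and a server running with myisam_use_mmap enabled:
  TRUNCATE TABLE t1;
  SELECT * FROM t1;  -- could crash the server before the fix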
mysql-test/r/partition.result:
A test case for BUG#51868.
mysql-test/t/partition.test:
A test case for BUG#51868.
storage/myisam/mi_delete_all.c:
_mi_unmap_file() is specific to the compressed record format,
which is read-only. As compressed MyISAM data files are
read-only, we must never use _mi_unmap_file() in
mi_delete_all_rows().
storage/myisam/mi_dynrec.c:
Make the MyISAM mmap code more resilient to errors:
- set file_read/file_write handlers if mmap succeeded;
- reset file_read/file_write handlers on unmap.
storage/myisam/mi_extra.c:
Moved file_read/file_write handlers initialization to
mi_dynmap_file().
storage/myisam/myisamdef.h:
Added mi_munmap_file() declaration.
Problem: caseup_multiply and casedn_multiply members
were not initialized for a dynamic collation, so
UPPER() and LOWER() functions returned empty strings.
Fix: initializing the members properly.
Adding tests:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
Applying the fix:
mysys/charset.c
(Original patch by Sinisa Milivojevic)
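A hedged illustration of the symptom; utf8_hypothetical_ci stands for any
collation loaded dynamically from the LDML file:
  SELECT UPPER(_utf8'abc' COLLATE utf8_hypothetical_ci);
  -- returned an empty string before the fix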
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified to not adjust a bad
YEAR(4) value to 2000.
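A hedged illustration, assuming y is a YEAR(4) column:
  CREATE TABLE t1 (y YEAR(4));
  INSERT INTO t1 VALUES (0), (2000);  -- 0 is stored as the "bad" value 0000
  SELECT * FROM t1 WHERE y = 2000;    -- before the fix this could also match the 0000 row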
mysql-test/r/type_year.result:
Test case for bug #49910.
mysql-test/t/type_year.test:
Test case for bug #49910.
sql/item_cmpfunc.cc:
Bug #49910: Behavioural change in SELECT/WHERE on YEAR(4) data type
The get_year_value() function has been modified to not adjust bad
YEAR(4) value to 2000.
The problem is that when we build the condition for the
grouped result, the const part of the condition is cut off.
It happens because some parts of the 'having' condition
which refer to the outer join become const after
make_join_statistics. These parts may be lost
during further 'having' condition transformation
in JOIN::exec. The fix is to add a 'having'
condition check for const tables after
make_join_statistics is performed.
mysql-test/r/having.result:
test result
mysql-test/t/having.test:
test case
sql/sql_select.cc:
added 'having' condition check for const tables
after make_join_statistics is performed.
This has been back-ported from 6.0 as the problems proved to afflict
5.1 as well.
The fix exposed two new bugs. They were reported as follows.
Bug no 52174: Sometimes wrong plan when reading a MAX value
from non-NULL index
Bug no 52173: Reading NULL value from non-NULL index gives wrong
result in embedded server
Both bugs taken together affect a much smaller class of queries than #47762,
so the fix stays for now.
The optimizer erroneously translated LEFT JOIN into INNER JOIN,
which leads to cutting rows with a NULL right side. It happens
because Item_row uses the not_null_tables() method from the
base (Item) class and does not calculate 'not null tables'
properly. The fix is to add calculation of 'not null tables'
to Item_row.
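A hypothetical query shape of the kind described (a row comparison over
inner-table columns that does not actually reject NULL-complemented rows):
  SELECT * FROM t1 LEFT JOIN t2 ON t2.a = t1.a
  WHERE (IFNULL(t2.b, 0), IFNULL(t2.c, 0)) = (0, 0);
  -- before the fix the LEFT JOIN could be turned into an INNER JOIN,
  -- cutting rows where the t2 side is NULL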
mysql-test/r/join_outer.result:
test result
mysql-test/t/join_outer.test:
test case
sql/item_row.cc:
adding calculation of 'not null tables' to Item_row.
sql/item_row.h:
adding calculation of 'not null tables' to Item_row.
The crash happens because of a discrepancy between the values of
const_tables and join->const_table_map (in make_join_statistics).
The calculation of const_tables uses a condition with the
HA_STATS_RECORDS_IS_EXACT flag check; the calculation of
join->const_table_map does not use this flag check.
In the case of a MERGE table with an index but without an
underlying table union, the table does not become a const table
and thus join_read_const_table() is not called
for it. join->const_table_map nevertheless assumes
this table is const, and later in make_join_select
this table is used for building and evaluating the const
condition. As the table's record buffer is not populated,
this leads to a crash.
The fix is to add a check whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
mysql-test/r/merge.result:
test result
mysql-test/t/merge.test:
test case
sql/sql_select.cc:
adding a check if an engine supports
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
reverse DNS lookup of "localhost" returns "broadcasthost" on Snow Leopard, and NULL on most others.
Simply ignore the output, as this is not an essential part of UDF testing.
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification), we need to disable
in-place ALTER TABLE because we can't ask the storage engine to keep
two copies of the index with the same name, even temporarily (if we
first do the ADD INDEX and then the DROP INDEX), and we can't modify
indexes that are needed by e.g. foreign keys if we first do
DROP INDEX and then ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
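A hedged illustration of the statement shape that now falls back to the copy algorithm:
  ALTER TABLE t1 DROP INDEX idx, ADD INDEX idx (b);  -- same index name dropped and re-added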