WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code that checks the above condition so that subqueries
are no longer excluded and hence are handled in the same way as
top-level SELECTs.
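A minimal sketch of the kind of statement affected (table and column
names are hypothetical):
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);   -- t2 stays empty
    INSERT INTO t1 VALUES (1);
    -- WHERE predicate referencing the empty table from within a subquery
    EXPLAIN SELECT * FROM t1 WHERE a IN (SELECT b FROM t2 WHERE b > 0);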
The problem was that a syntactically invalid trigger could cause
the server to crash when trying to list triggers. The crash would
happen due to a mishap in the backup/restore procedure that should
protect parser items which are not associated with the trigger. The
backup/restore is used to isolate the parse tree (and context) of
a statement from the load (and parsing) of a trigger. In this case,
an error during the parsing of a trigger could cause an improper
backup/restore sequence.
The solution is to properly restore the original statement context
before the parser is exited due to syntax errors in the trigger body.
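A hedged sketch of the scenario (names are hypothetical; the trigger
body is assumed to have become syntactically invalid out of band, for
example through a damaged .TRG file):
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;
    -- once the stored trigger body no longer parses:
    SHOW TRIGGERS;   -- previously could crash while listing the triggers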
during an UPDATE
Extended the fix for bug 29310 to multi-table update:
When a table is being updated it has two sets of fields - fields required
for condition checks and fields to be updated. A storage engine is allowed
not to retrieve columns that are only marked for update. Because of this,
records cannot be compared to see whether the data has actually changed,
which makes the server always update records regardless of whether the
data changed.
Now, when an auto-updatable TIMESTAMP field is present and the server sees
that the table handler is not going to retrieve write-only fields, all such
fields are marked to be read so that the handler is forced to retrieve them.
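A sketch of the affected statement shape (tables are hypothetical;
t1.ts is an auto-updatable TIMESTAMP column):
    CREATE TABLE t1 (id INT, v INT,
                     ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                        ON UPDATE CURRENT_TIMESTAMP);
    CREATE TABLE t2 (id INT, v INT);
    -- multi-table UPDATE: the write-only fields of t1 are now also read,
    -- so ts is only bumped for rows whose data actually changed
    UPDATE t1, t2 SET t1.v = t2.v WHERE t1.id = t2.id;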
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
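A hedged example of the valid query shape the assertion rejected
(hypothetical tables): the same WHERE expression is repeated and also
appears in the ORDER BY list:
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    SELECT t1.a FROM t1, t2
    WHERE t2.b = t1.a AND t2.b = t1.a
    ORDER BY t1.a;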
Problem: ALTER TABLE ADD INDEX may lead to table copying if there are
numeric fields with a non-default display width modifier specified.
Fix: compare numeric fields' storage lengths when we decide whether
they can be considered 'equal' for table alteration purposes.
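A minimal illustration (hypothetical table): the display width
modifier does not change the storage length, so adding the index
should no longer force a table copy:
    CREATE TABLE t1 (a INT(12) NOT NULL);
    ALTER TABLE t1 ADD INDEX idx_a (a);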
Previously installed dynamic plugins are explicitly not loaded
on startup with --skip-grant-tables enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN commands are allowed and result in
inconsistent error messages (reporting a duplicate plugin or
that the plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
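A sketch, assuming the server was started with --skip-grant-tables
(the plugin name and soname are only examples):
    INSTALL PLUGIN archive SONAME 'ha_archive.so';
    -- now fails with ER_OPTION_PREVENTS_STATEMENT instead of a
    -- misleading "duplicate plugin" / "plugin does not exist" error
    UNINSTALL PLUGIN archive;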
Arg_comparator initializes the 'comparators' array for ROW
comparisons but does not free this array on destruction,
which leads to memory leaks.
The fix:
- added an Arg_comparator::cleanup() method which frees the
'comparators' array.
- added an Item_bool_func2::cleanup() method which calls the
Arg_comparator::cleanup() method.
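The 'comparators' array is only allocated for ROW comparisons, for
example:
    SELECT (1, 2) = (1, 2), ('a', 'b') <> ('a', 'c');
    -- each ROW comparison made Arg_comparator allocate its
    -- 'comparators' array, which previously was never freed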
When re-setting the GLOBAL debug settings (SET GLOBAL debug=''), the
server set the data elements of the top (initial) frame to 0 without
freeing the underlying memory. As these are global settings, there is
a chance that something is already allocated there.
Fixed by:
1. making sure the allocated data is cleaned up before re-setting it
while parsing a debug string
2. making sure the memory allocated in the global settings is freed on
shutdown.
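A sketch of the statements involved (this assumes a debug build,
where the 'debug' system variable is available):
    SET GLOBAL debug = 'd,info';   -- allocates data in the global (top) frame
    SET GLOBAL debug = '';         -- re-setting now frees the previous allocations first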
We should disable const subselect item evaluation because
subselect transformation does not happen in view_prepare_mode
and thus the val_...() methods cannot be called.
The problem is that we cannot use make_cond_for_table().
This function relies on the used_tables() information,
which is not set properly for subqueries.
As a result, the subquery is not filtered out.
The fix is to use the remove_eq_conds() function instead
of make_cond_for_table(). The remove_eq_conds()
algorithm relies on the const_item() value, which allows
it to handle subqueries in the right way.
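A hedged sketch (hypothetical names) of a view definition whose WHERE
clause contains a constant subquery; in view prepare mode such a
subquery is no longer evaluated:
    CREATE TABLE t1 (a INT);
    CREATE VIEW v1 AS SELECT a FROM t1 WHERE a = (SELECT 1);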
Procedure, while DECIMAL works
Selecting the result of CONCAT(...<SP variable>...) into
a user variable could return wrong data.
Item_func_concat::val_str contains a number of memory
allocation-saving tricks. One of them concatenates
strings in place by inserting the value of one string
at the beginning of the other string. However,
this trick did not account for strings that point
to the same data buffer: this is possible when
a CONCAT() parameter is a stored procedure variable -
Item_sp_variable::val_str() uses the intermediate
Item_sp_variable::str_value field, where it may
store a reference to an external buffer.
The Item_func_concat::val_str function has been
modified to take into account val_str functions
(such as Item_sp_variable::val_str) that return
a pointer to an internal Item member variable
that may reference an external buffer.
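A sketch of the affected pattern (procedure and variable names are
hypothetical):
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE v VARCHAR(10) DEFAULT 'ab';
      -- CONCAT of an SP variable, selected into a user variable
      SELECT CONCAT(v, 'cd') INTO @result;
    END//
    DELIMITER ;
    CALL p1();
    SELECT @result;   -- previously could contain wrong data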
on index
The 'my_decimal' class has two members which can be used to access the
value. The member variable buf (inherited from the parent class decimal_t)
is set to the member variable buffer so that both point to the same value.
Item_copy_decimal::copy() uses memcpy to clone 'my_decimal'. The member
buffer is declared as an array and memcpy results in copying the values
of the array, but the inherited member buf, which should point at
the beginning of the array 'buffer', ends up pointing to the beginning of
the buffer in the original object (which is being cloned). Further updates
to 'my_decimal' update only the inherited member 'buf' but leave
buffer unchanged.
Later, when the new object (which now holds an inconsistent value) is cloned
again using the proper cloning function 'my_decimal2decimal', the buf pointer
is fixed, resulting in loss of the current value.
Using my_decimal2decimal instead of memcpy in Item_copy_decimal::copy()
fixed this problem.
data and index files
It was possible if DATA/INDEX DIRECTORY pointed to a
symlinked MySQL data home directory.
Do not allow dropping data/index files that are implicitly
symlinked to the data home directory. For such tables, remove
only the symlink.
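A hedged sketch (the directory path is hypothetical and is assumed to
be a symlink that resolves to the MySQL data home directory):
    CREATE TABLE t1 (a INT) ENGINE=MyISAM
      DATA DIRECTORY = '/path/symlinked/to/datadir'
      INDEX DIRECTORY = '/path/symlinked/to/datadir';
    DROP TABLE t1;   -- now removes only the symlinks, not the underlying files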
Problem: EXPLAIN EXTENDED was trying to resolve references to
freed temporary table fields for GROUP_CONCAT()'s ORDER BY arguments.
Fix: use the stored original GROUP_CONCAT() arguments in such a case.
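The affected construct is GROUP_CONCAT() with an ORDER BY clause
examined under EXPLAIN EXTENDED; a minimal shape (hypothetical names,
not necessarily enough to reproduce the original crash):
    EXPLAIN EXTENDED
    SELECT GROUP_CONCAT(a ORDER BY b) FROM t1 GROUP BY c;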
function on windows
When making sure that the directory path ends up with a
slash/backslash we need to check for the correct length of
the buffer and trim at the appropriate location so we don't
write past the end of the buffer.
When mysqlbinlog was given the --database=X flag, it always printed
'ROLLBACK TO', but the corresponding 'SAVEPOINT' statement was not
printed. The replication filters (replicate-do/ignore-db) and binlog
filters (binlog-do/ignore-db) have the same problem; they are fixed
together in this patch.
After this patch, we always check whether the query is a 'SAVEPOINT'
statement or not. Because this is a literal check, 'SAVEPOINT' and
'ROLLBACK TO' statements are also binlogged in uppercase and without
any comments.
Binlogs written before this patch can be handled correctly except in
one case: when comments appear in front of the keywords, for example:
/* bla bla */ SAVEPOINT a;
/* bla bla */ ROLLBACK TO a;
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have been actually read.
The fix is to remove the erroneous 'outer_join' variable check.
The crash happens because of an incorrect max_length calculation
in the QUOTE() function (due to overflow). max_length is set
to 0, which leads to an assertion failure.
The fix is to cast the expression result to a
ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
There was no way to repair a corrupt ARCHIVE data file
when unrecoverable data loss is inevitable.
With this fix, REPAIR ... EXTENDED attempts to restore
as many rows as possible, ignoring unrecoverable data.
Normal REPAIR is still only able to repair the meta-data
file.
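Assuming t1 is an ARCHIVE table with a corrupt data file (the name is
hypothetical):
    REPAIR TABLE t1;            -- still repairs the meta-data file only
    REPAIR TABLE t1 EXTENDED;   -- now salvages as many rows as possible,
                                -- skipping unrecoverable data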
Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size could crash the server.
The estimation of the number of index entries was done
incorrectly, causing a subsequent assertion failure or server
crash.
Docs note: min value for myisam_sort_buffer_size has been
changed from 4 to 4096.
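A sketch of the affected scenario (hypothetical table with a FULLTEXT
index), also showing the new minimum for the variable:
    SET SESSION myisam_sort_buffer_size = 4096;   -- 4096 is now the minimum value
    REPAIR TABLE t1;   -- previously could crash with a low sort buffer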
Invalid memory read if HANDLER ... READ NEXT is executed
after a failed (e.g. on an empty table) HANDLER ... READ FIRST.
The problem was that we attempted to perform READ NEXT even
though no pivot is available after a failed READ FIRST.
With this fix, READ NEXT after a failed READ FIRST is
equivalent to READ FIRST.
This bug affects MyISAM tables only.
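With an empty MyISAM table t1 (the name is hypothetical):
    HANDLER t1 OPEN;
    HANDLER t1 READ FIRST;   -- fails, e.g. because the table is empty
    HANDLER t1 READ NEXT;    -- previously read invalid memory; now acts like READ FIRST
    HANDLER t1 CLOSE;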
Detailed revision comments:
r6783 | jyang | 2010-03-09 17:54:14 +0200 (Tue, 09 Mar 2010) | 9 lines
branches/5.1: Fix bug #47621 "MySQL and InnoDB data dictionaries
will become out of sync when renaming columns". MySQL does not
provide new column name information to storage engine to
update the system table. To avoid column name mismatch, we shall
just request a table copy for now.
rb://246 approved by Marko.
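The statement shape concerned is a column rename on an InnoDB table
(hypothetical names); it now requests a table copy so that the MySQL
and InnoDB data dictionaries stay in sync:
    ALTER TABLE t1 CHANGE old_col new_col INT;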
If the listed columns in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement mismatched, a debug assertion would
trigger in the cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
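A hedged sketch (hypothetical names) of an INSERT .. SELECT through a
view that references a column not matching the view definition; the
ER_BAD_FIELD_ERROR is now propagated instead of tripping the later
assertion:
    CREATE TABLE t1 (a INT);
    CREATE VIEW v1 (x) AS SELECT a FROM t1;
    INSERT INTO v1 (y) SELECT 1;   -- 'y' is not a column of v1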
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting
which is performed before greedy_search starts.
In our case the table which really has no dependencies
should be put at the top of the list, but that does not
happen because the inner tables have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from the
condition which checks whether table dependencies
should be updated.
col equal to itself!
There's no need to copy the value of a field into itself.
While generally harmless (apart from some performance penalty),
it may be dangerous when the copy code doesn't expect this.
Fixed by checking if the source field is the same as the destination
field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
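The trivial case that is now detected and skipped looks like this
(hypothetical table):
    UPDATE t1 SET a = a;   -- source and destination field are the same, so no copy is done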
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now the code distinguishes between IGNORE and ERROR_FOR_NULL and
saves/restores the check-field options.
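A hedged sketch of the affected pattern (hypothetical names): after a
trigger had run, an UPDATE IGNORE setting a NOT NULL column to NULL
was no longer treated with the IGNORE semantics:
    CREATE TABLE t1 (a INT NOT NULL);
    INSERT INTO t1 VALUES (1);
    CREATE TABLE t2 (b INT);
    CREATE TRIGGER trg1 BEFORE INSERT ON t2 FOR EACH ROW SET @x = 1;
    INSERT INTO t2 VALUES (1);        -- fires the trigger
    UPDATE IGNORE t1 SET a = NULL;    -- should keep converting NULL to the
                                      -- implicit default with a warning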
myisam tables
Queries following TRUNCATE of a partitioned MyISAM table
may crash the server if myisam_use_mmap is true.
Internally this is a MyISAM bug, but it is limited to partitioned
tables, because plain MyISAM doesn't use the ::delete_all_rows()
method for TRUNCATE and goes via table recreation instead.
MyISAM didn't properly fall back to non-mmapped I/O after an
mmap() failure. This was not repeatable on Linux before, likely
because (quoting from man mmap):
SUSv3 specifies that mmap() should fail if length is 0.
However, in kernels before 2.6.12, mmap() succeeded in
this case: no mapping was created and the call returned
addr. Since kernel 2.6.12, mmap() fails with the error
EINVAL for this case.
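A sketch of the scenario (a hypothetical partitioned MyISAM table,
with memory mapping enabled):
    SET GLOBAL myisam_use_mmap = 1;
    CREATE TABLE t1 (a INT) ENGINE=MyISAM
      PARTITION BY HASH (a) PARTITIONS 2;
    INSERT INTO t1 VALUES (1), (2);
    TRUNCATE TABLE t1;
    SELECT * FROM t1;   -- queries after the TRUNCATE could previously crash the server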
Problem: caseup_multiply and casedn_multiply members
were not initialized for a dynamic collation, so
UPPER() and LOWER() functions returned empty strings.
Fix: initializing the members properly.
Adding tests:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
Applying the fix:
mysys/charset.c
(Original patch by Sinisa Milivojevic)
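A hedged sketch, assuming a user-defined LDML collation (the
collation name utf8_test_ci here is hypothetical) has been loaded
from the server's Index.xml configuration:
    SELECT UPPER(_utf8 'abc' COLLATE utf8_test_ci),
           LOWER(_utf8 'ABC' COLLATE utf8_test_ci);
    -- previously both returned empty strings for dynamic collations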
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified not to adjust the bad
YEAR(4) value to 2000.
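An illustration of the stated behaviour (hypothetical table): the bad
zero value must not compare equal to 2000:
    CREATE TABLE t1 (y1 YEAR(4), y2 YEAR(4));
    INSERT INTO t1 VALUES (2000, 0);   -- y2 holds the bad value 0000
    SELECT * FROM t1 WHERE y1 = y2;    -- must return no rows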
The problem is that when we build the condition for the
grouped result, the const part of the condition is cut off.
It happens because some parts of the 'having' condition
which refer to an outer join become const after
make_join_statistics(). These parts may be lost
during further HAVING condition transformation
in JOIN::exec(). The fix is to add a check of the 'having'
condition for const tables after
make_join_statistics() is performed.