WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code checking the above condition so that subqueries
are not excluded and hence are handled in the same way as
top-level SELECTs.
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
union...order by (select... where...)
The problem is that MySQL tries to materialize and
cache scalar subqueries at JOIN::optimize
even for EXPLAIN, where the number of columns can be
totally different from what is expected.
Fixed by not executing the scalar subqueries
for EXPLAIN.
on LOAD DATA
Two problems:
1. LOAD DATA was not checking for SQL errors and was sending an OK
packet even when errors were already reported. Fixed to check for
SQL errors in addition to the error conditions already detected.
2. There was an over-ambitious assert() on the server to check that the
protocol is always followed by the client. This can cause crashes on
debug servers when clients do not complete the protocol exchange for
some reason (e.g. the --send command in mysqltest). Fixed by keeping
the assert only on the client side, since the server always completes
the protocol exchange.
We should disable const subselect item evaluation because
subselect transformation does not happen in view_prepare_mode
and thus val_...() methods cannot be called.
The problem is that we cannot use make_cond_for_table().
This function relies on the used_tables() result,
which is not set properly for subqueries.
As a result, the subquery is not filtered out.
The fix is to use the remove_eq_conds() function instead
of make_cond_for_table(). The remove_eq_conds()
algorithm relies on the const_item() value, which
allows it to handle subqueries in the right way.
Procedure, while DECIMAL works
Selecting the CONCAT(...<SP variable>...) result into
a user variable could return wrong data.
Item_func_concat::val_str contains a number of memory
allocation-saving tricks. One of them concatenates
strings in place, inserting the value of one string
at the beginning of the other string. However,
this trick didn't account for strings that point
to the same data buffer: this is possible when
a CONCAT() parameter is a stored procedure variable -
Item_sp_variable::val_str() uses the intermediate
Item_sp_variable::str_value field, where it may
store a reference to an external buffer.
The Item_func_concat::val_str function has been
modified to take into account val_str functions
(such as Item_sp_variable::val_str) that return
a pointer to an internal Item member variable
that may reference an external buffer.
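As a rough standalone illustration of the aliasing hazard
(hypothetical helper, not the server code), consider an in-place
"prepend" that shifts one string right and copies the other in front
of it; it silently corrupts the result when the second argument
points into the same buffer:

  #include <cstdio>
  #include <cstring>

  // Build CONCAT(a, b) inside b's own buffer by shifting b right and
  // copying a into the hole. Only safe when a does not alias b's buffer.
  static void prepend_inplace(char *b, std::size_t b_len,
                              const char *a, std::size_t a_len) {
    std::memmove(b + a_len, b, b_len); // shift b right to make a hole
    std::memmove(b, a, a_len);         // copy a into the hole
    b[a_len + b_len] = '\0';
  }

  int main() {
    char ok[32] = "xyabc";
    prepend_inplace(ok, 5, "abc", 3);  // distinct buffers: "abcxyabc"
    std::printf("%s\n", ok);

    char bad[32] = "xyabc";
    // Aliased argument, as when val_str() returns a pointer into the
    // same buffer: the shift clobbers 'a' before it is copied.
    prepend_inplace(bad, 5, bad + 2, 3);
    std::printf("%s\n", bad);          // "axyxyabc", not "abcxyabc"
  }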
on index
The 'my_decimal' class has two members which can be used to access
the value. The member variable buf (inherited from the parent class
decimal_t) is set to the member variable buffer, so that both point
to the same value.
Item_copy_decimal::copy() used memcpy to clone 'my_decimal'. The
member buffer is declared as an array, and memcpy copies the values
of the array, but the inherited member buf, which should point at
the beginning of the array 'buffer', keeps pointing to the beginning
of the buffer in the original object (the one being cloned). Further
updates of the 'my_decimal' update only the inherited member 'buf'
and leave buffer unchanged.
Later, when the new object (which now holds an inconsistent value)
is cloned again using the proper cloning function
'my_decimal2decimal', the buf pointer is fixed, resulting in loss of
the current value.
Using my_decimal2decimal instead of memcpy in Item_copy_decimal::copy()
fixed this problem.
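The effect of the shallow copy can be shown with a minimal model of
the layout (names simplified, not the real class):

  #include <cstdio>
  #include <cstring>

  struct decimal_like {
    int *buf;      // stands in for the inherited decimal_t::buf
    int buffer[8]; // stands in for my_decimal::buffer
    decimal_like() : buf(buffer) { std::memset(buffer, 0, sizeof buffer); }
  };

  int main() {
    decimal_like a;
    a.buffer[0] = 42;

    decimal_like b;
    std::memcpy(&b, &a, sizeof a); // shallow: b.buf still points at a.buffer

    b.buf[0] = 7;                  // "updating b" actually writes into a
    std::printf("a=%d b=%d\n", a.buffer[0], b.buffer[0]); // a=7 b=42

    // A proper clone copies the digits and re-anchors the pointer,
    // which is what my_decimal2decimal() does for the real type.
    decimal_like c;
    std::memcpy(c.buffer, a.buffer, sizeof a.buffer);
    c.buf = c.buffer;
    return 0;
  }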
data and index files
It was possible if DATA/INDEX DIRECTORY pointed to the
symlinked MySQL data home directory.
Do not allow dropping data/index files that are implicitly symlinked
to the data home directory. For such tables, remove the symlink only.
The problem was that a syntactically invalid trigger could cause
the server to crash when trying to list triggers. The crash would
happen due to a mishap in the backup/restore procedure that should
protect parser items which are not associated with the trigger. The
backup/restore is used to isolate the parse tree (and context) of
a statement from the load (and parsing) of a trigger. In this case,
an error during the parsing of a trigger could cause an improper
backup/restore sequence.
The solution is to properly restore the original statement context
before the parser is exited due to syntax errors in the trigger body.
during an UPDATE
Extended the fix for bug 29310 to multi-table update:
When a table is being updated, it has two sets of fields - fields
required for checks of conditions and fields to be updated. A storage
engine is allowed not to retrieve the columns marked for update. Due
to this fact, records can't be compared to see whether the data has
been changed or not. This makes the server always update records,
regardless of whether the data has changed.
Now, when an auto-updatable timestamp field is present and the server
sees that the table handler isn't going to retrieve write-only
fields, all such fields are marked to be read in order to force the
handler to retrieve them.
Problem: ALTER TABLE ADD INDEX could lead to table copying if there
were numeric fields with a non-default display width modifier specified.
Fix: compare the numeric fields' storage lengths when deciding whether
they can be considered 'equal' for table alteration purposes.
Previously installed dynamic plugins are explicitly not loaded
on startup with --skip-grant-tables enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN commands are allowed, and result in
inconsistent error messages (reporting a duplicate plugin or
that the plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
Arg_comparator initializes the 'comparators' array in the case of
ROW comparison but does not free this array on destruction,
which leads to memory leaks.
The fix:
- added the Arg_comparator::cleanup() method, which frees
  the 'comparators' array.
- added the Item_bool_func2::cleanup() method, which calls the
  Arg_comparator::cleanup() method.
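A simplified sketch of the ownership pattern introduced by the fix
(illustrative signatures, not the exact server declarations):

  // The comparator owns the per-column sub-comparators it allocates
  // for ROW comparisons, so it must free them; the Item chains into it.
  class Arg_comparator_sketch {
    Arg_comparator_sketch *comparators = nullptr;
  public:
    void alloc_row_comparators(unsigned n) {
      comparators = new Arg_comparator_sketch[n];
    }
    void cleanup() {           // added by the fix: release the array
      delete[] comparators;
      comparators = nullptr;
    }
    ~Arg_comparator_sketch() { cleanup(); }
  };

  class Item_bool_func2_sketch {
    Arg_comparator_sketch cmp;
  public:
    void cleanup() {           // added by the fix: chain the cleanup
      cmp.cleanup();
    }
  };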
When re-setting (SET GLOBAL debug='') the GLOBAL debug settings, the
server was setting the data elements of the top (initial) frame to 0
without freeing the underlying memory. As these are global settings,
there's a chance that something is there already.
Fixed by:
1. making sure the allocated data are cleaned up before re-setting them
while parsing a debug string
2. making sure the data allocated in the global settings is freed on
shutdown.
Problem: EXPLAIN EXTENDED was trying to resolve references to
freed temporary table fields for GROUP_CONCAT()'s ORDER BY arguments.
Fix: use the stored original GROUP_CONCAT() arguments in such a case.
function on windows
When making sure that the directory path ends up with a
slash/backslash we need to check for the correct length of
the buffer and trim at the appropriate location so we don't
write past the end of the buffer.
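A hedged sketch of the idea (hypothetical helper, not the actual
server function): append the separator only when it fits, trimming
instead of writing past the end:

  #include <cstring>

  static void ensure_trailing_slash(char *path, std::size_t bufsize) {
    std::size_t len = std::strlen(path);
    if (len == 0 || path[len - 1] == '/' || path[len - 1] == '\\')
      return;                       // already ends with a separator
    if (len + 2 <= bufsize) {       // room for the separator plus '\0'?
      path[len] = '/';
      path[len + 1] = '\0';
    } else if (bufsize >= 2) {      // trim so the separator still fits
      path[bufsize - 2] = '/';
      path[bufsize - 1] = '\0';
    }
  }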
When mysqlbinlog was given the --database=X flag, it always printed
'ROLLBACK TO', but the corresponding 'SAVEPOINT' statement was not
printed. The replication filter (replicate-do/ignore-db) and binlog
filter (binlog-do/ignore-db) had the same problem. Both are solved
in this patch.
After this patch, we always check whether the query is a 'SAVEPOINT'
statement or not. Because this is a literal check, 'SAVEPOINT' and
'ROLLBACK TO' statements are also binlogged in uppercase with no
comments.
Binlogs created before this patch can be handled correctly except in
one case: when comments appear in front of the keywords, for example:
/* bla bla */ SAVEPOINT a;
/* bla bla */ ROLLBACK TO a;
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have actually been read.
The fix is to remove the erroneous 'outer_join' variable check.
The crash happens because of an incorrect max_length calculation
in the QUOTE function (due to overflow). max_length is set
to 0, which leads to an assertion failure.
The fix is to cast the expression result to a
ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
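A sketch of the overflow-safe calculation (the MAX_BLOB_WIDTH value
below is an assumption; the real constant lives in the server
headers):

  #include <cstdint>

  static const unsigned long long MAX_BLOB_WIDTH_SKETCH = 16777216ULL;

  // Worst case for QUOTE(): every byte escaped (2x) plus two enclosing
  // quotes. Doing the math in 64 bits avoids the 32-bit wraparound
  // that produced max_length == 0 and tripped the assertion.
  uint32_t quote_max_length(uint32_t arg_max_length) {
    unsigned long long len = 2ULL * arg_max_length + 2ULL;
    if (len > MAX_BLOB_WIDTH_SKETCH)
      len = MAX_BLOB_WIDTH_SKETCH;
    return static_cast<uint32_t>(len);
  }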
There was no way to repair a corrupt ARCHIVE data file
when unrecoverable data loss is inevitable.
With this fix, REPAIR ... EXTENDED attempts to restore
as many rows as possible, ignoring unrecoverable data.
Normal REPAIR is still able to repair the metadata file
only.
Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size could crash the server.
The number of index entries was estimated incorrectly,
causing a subsequent assertion failure or server crash.
Docs note: the minimum value for myisam_sort_buffer_size has been
changed from 4 to 4096.
Invalid memory read if HANDLER ... READ NEXT is executed
after a failed (e.g. empty table) HANDLER ... READ FIRST.
The problem was that we attempted to perform READ NEXT,
whereas there is no pivot available from the failed READ FIRST.
With this fix, READ NEXT after a failed READ FIRST is
equivalent to READ FIRST.
This bug affects MyISAM tables only.
Detailed revision comments:
r6783 | jyang | 2010-03-09 17:54:14 +0200 (Tue, 09 Mar 2010) | 9 lines
branches/5.1: Fix bug #47621 "MySQL and InnoDB data dictionaries
will become out of sync when renaming columns". MySQL does not
provide new column name information to storage engine to
update the system table. To avoid column name mismatch, we shall
just request a table copy for now.
rb://246 approved by Marko.
If the listed columns in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement mismatched, a debug assertion would
trigger in the cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting
which is performed before greedy_search starts.
In our case, the table which really has no
dependencies should be put at the top of the
list, but this does not happen because the
inner tables have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from
the condition which checks whether table dependencies
should be updated.
col equal to itself!
There's no need to copy the value of a field into itself.
While generally harmless (except for some performance penalties),
it may be dangerous when the copy code doesn't expect this.
Fixed by checking if the source field is the same as the destination
field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
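In outline, the guard looks like this (hypothetical field type,
heavily simplified from the server's copy routine):

  #include <cstring>

  struct FieldSketch {
    char *ptr;        // record data
    bool is_null;     // null flag
  };

  void copy_field_value(FieldSketch *to, const FieldSketch *from,
                        std::size_t len) {
    to->is_null = from->is_null;   // null flag first, preserving order
    if (to == from || to->ptr == from->ptr)
      return;                      // never copy a field onto itself
    std::memcpy(to->ptr, from->ptr, len);
  }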
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
The code now distinguishes between IGNORE and ERROR_FOR_NULL and
saves/restores the check-field options.
myisam tables
Queries following TRUNCATE of a partitioned MyISAM table
could crash the server if myisam_use_mmap is true.
Internally this is a MyISAM bug, but it is limited to partitioned
tables, because MyISAM doesn't use the ::delete_all_rows()
method for TRUNCATE and goes via table recreation instead.
MyISAM didn't properly fall back to non-mmapped I/O after an
mmap() failure. This was not repeatable on Linux before, likely
because (quote from man mmap):
SUSv3 specifies that mmap() should fail if length is 0.
However, in kernels before 2.6.12, mmap() succeeded in
this case: no mapping was created and the call returned
addr. Since kernel 2.6.12, mmap() fails with the error
EINVAL for this case.
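The fallback pattern, sketched outside of MyISAM (POSIX mmap; not
the server code itself):

  #include <cstddef>
  #include <sys/mman.h>

  struct DataFileSketch {
    int fd = -1;
    void *map = nullptr;
    std::size_t map_len = 0;
    bool use_mmap = false;        // false => plain read()/pread() paths
  };

  void try_enable_mmap(DataFileSketch *f, std::size_t len) {
    // A zero-length mmap() fails with EINVAL on kernels >= 2.6.12, so
    // treat it as a failure everywhere and keep the non-mmapped path.
    void *p = (len > 0)
        ? mmap(nullptr, len, PROT_READ, MAP_SHARED, f->fd, 0)
        : MAP_FAILED;
    if (p == MAP_FAILED) {
      f->use_mmap = false;        // the fix: fall back instead of crashing
      return;
    }
    f->map = p;
    f->map_len = len;
    f->use_mmap = true;
  }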
Problem: caseup_multiply and casedn_multiply members
were not initialized for a dynamic collation, so
UPPER() and LOWER() functions returned empty strings.
Fix: initializing the members properly.
Adding tests:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
Applying the fix:
mysys/charset.c
(Original patch by Sinisa Milivojevic)
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified to not adjust the bad
YEAR(4) value to 2000.
The problem is that when we build the condition for the
grouped result, the const part of the condition is cut off.
It happens because some parts of the HAVING condition
which refer to the outer join become const after
make_join_statistics. These parts may be lost
during further HAVING condition transformation
in JOIN::exec. The fix is to add a check of the
HAVING condition for const tables after
make_join_statistics is performed.
This has been back-ported from 6.0 as the problems proved to afflict
5.1 as well.
The fix exposed two new bugs. They were reported as follows.
Bug no 52174: Sometimes wrong plan when reading a MAX value
from non-NULL index
Bug no 52173: Reading NULL value from non-NULL index gives wrong
result in embedded server
Both bugs taken together affect a much smaller class of queries than #47762,
so the fix stays for now.
The optimizer erroneously translated LEFT JOIN into INNER JOIN.
This leads to cutting off rows with a NULL right side. It happens
because Item_row uses the not_null_tables() method from the
base (Item) class and does not calculate 'null tables'
properly. The fix is to add calculation of 'not null tables'
to Item_row.
The crash happens because of a discrepancy between the values of
const_tables and join->const_table_map (make_join_statistics).
The calculation of const_tables used a condition with an
HA_STATS_RECORDS_IS_EXACT flag check. The calculation of
join->const_table_map does not use this flag check.
In the case of a MERGE table without a union list, with an
index, the table does not become a const table and
thus join_read_const_table() is not called
for the table. join->const_table_map assumes
this table is const, and later in make_join_select
this table is used for building and evaluating a const
condition. As the table record buffer is not populated,
this leads to a crash.
The fix is to add a check whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
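The guard reduces to a check like the following (the flag's bit
value here is a placeholder; the real definition is in the handler
header):

  typedef unsigned long long ulonglong;
  // Placeholder bit for the server's HA_STATS_RECORDS_IS_EXACT flag.
  static const ulonglong HA_STATS_RECORDS_IS_EXACT_SKETCH = 1ULL << 9;

  // Only let a table into const_table_map by record count when the
  // engine reports exact row statistics; MERGE, for one, does not.
  bool may_mark_const_by_records(ulonglong table_flags,
                                 ulonglong records) {
    return (table_flags & HA_STATS_RECORDS_IS_EXACT_SKETCH) != 0
        && records <= 1;
  }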
reverse DNS lookup of "localhost" returns "broadcasthost" on Snow Leopard, and NULL on most others.
Simply ignore the output, as this is not an essential part of UDF testing.
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification) we need to disable
in-place alter table because we can't ask the storage engine to have
two copies of the index with the same name even temporarily (if we
first do the ADD INDEX and then DROP INDEX) and we can't modify
indexes that are needed by e.g. foreign keys if we first do
DROP INDEX and then ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
NULL column for NULL
The optimization to read MIN() and MAX() values from an
index did not properly handle comparisons with NULL
values. Fixed by giving up the particular optimization step
if there are non-NULL-safe comparisons with NULL values, as
the result is NULL anyway.
Also, the Oracle copyright notice was added to all files.
Base Tables
The type inference of a view column caused the result to be
interpreted as the wrong type: DATE columns were interpreted
as TIME and TIME as DATETIME. This happened because view
columns are represented by Item_ref objects, as opposed to
Item_field objects. Item_ref had no method for retrieving a
TIME value and thus was forced to depend on the default
implementation for any expression, which caused the
expression to be evaluated as a string and then parsed into
a TIME/DATETIME value.
Fixed by letting the Item_ref classes forward the request for
a TIME value to the referred Item - which is a field in this
case - reading the TIME value directly without
conversion.
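The shape of the fix is plain virtual delegation (class shapes
simplified for illustration):

  struct TimeValueSketch { int hour, minute, second; };

  struct ItemSketch {
    virtual ~ItemSketch() = default;
    virtual bool get_time(TimeValueSketch *t) {
      // default: evaluate as a string, then parse - the lossy path
      return true;
    }
  };

  struct ItemRefSketch : ItemSketch {
    ItemSketch *ref;               // the referred Item (a field here)
    bool get_time(TimeValueSketch *t) override {
      return ref->get_time(t);     // read the TIME value directly
    }
  };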
index cardinalities=1
Parallel repair didn't properly update the index cardinality
in certain cases.
When myisam_sort_buffer_size is not enough to store all
keys, the index cardinality was updated before the index was
actually written, when no index statistics were available.
(regression)
The problem was that partition pruning did not exclude the
last partition if the range was beyond it
(i.e. when not using MAXVALUE).
The fix was to not include the last partition if the
partitioning function value was not within the partition
range.
SET autocommit=1 while XA transaction is active may
cause various side effects, including memory corruption
and server crash.
The problem is that SET autocommit=1 and further queries
attempt to commit the local transaction, whereas the XA
transaction is still active.
As local and XA transactions are mutually exclusive, this
patch forbids enabling autocommit mode while an XA
transaction is active.
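The rule itself amounts to a one-line state check (enum names as in
the server's XA state machine; the surrounding code is simplified):

  enum xa_states { XA_NOTR, XA_ACTIVE, XA_IDLE, XA_PREPARED };

  // Local and XA transactions are mutually exclusive: enabling
  // autocommit would commit a local transaction, so refuse it mid-XA.
  bool may_enable_autocommit(xa_states xa_state) {
    return xa_state == XA_NOTR;  // "no transaction" is the only safe state
  }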
The problem was that UNINSTALL PLUGIN wasn't performing privilege
checks before removing a plugin. Any user (including users without
any kind of privileges) could uninstall any plugin.
The solution is to verify if the user has the DELETE privilege for
the mysql.plugin table before uninstalling a plugin.
The problem is that Item_direct_view_ref, which is inherited
from Item_ident, updates orig_table_name and table_name with
the same values. The fix is the introduction of a new
constructor into Item_ident and up the hierarchy, which
updates orig_table_name and table_name separately.
The problem is that not all column names retrieved from a SELECT
statement can be used as view column names due to length and format
restrictions. The server failed to properly check the conformity
of those automatically generated column names before storing the
final view definition on disk.
Since columns retrieved from a SELECT statement can be anything
ranging from functions to constant values of any format and length,
the solution is to rewrite to a pre-defined format any names that
are not acceptable as a view column name.
The name is rewritten to "Name_exp_%u", where %u translates to the
position of the column. To avoid this conversion scheme, define
explicit names for the view columns via the column_list clause.
Also, aliases are now only generated for top-level statements.
The problem was that bits of the destructive equality propagation
optimization weren't being undone after the execution of a stored
program. Modifications to the parse tree that are based on transient
properties must be undone to enable the re-execution of stored
programs.
The solution is to clean up any references to predicates generated
by the equality propagation during the execution of a stored program.
Spatial indexes were not checking for the out-of-record condition
in the handler "next" command when the previous command didn't find
rows.
Fixed by making the rtree index check for the end-of-rows condition
before reusing the key from the previous search.
Fixed another crash if the tree has changed since the last search.
Added a test case for the other error.
auto_increment on duplicate entry
The bug was that when INSERT_ID was used and the storage
engine was told to release any reserved but not used
auto_increment values, it set the highest auto_increment
value to INSERT_ID.
The fix was to check whether the auto_increment value was
forced by the user (INSERT_ID) or by the slave thread,
i.e. not auto-generated, so that only auto-generated values
are released.
consider clustered primary keys
When choosing the shortest index for a covering index scan,
the optimizer ignored the fact that a clustered primary
key read involves the whole table data.
The find_shortest_key function has been modified to
take into account the fact that a clustered PK has the
longest key of the possible covering indexes.
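A sketch of the adjusted ranking (illustrative types, not the
server's find_shortest_key itself):

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct CoveringIndexSketch {
    uint32_t key_length;
    bool is_clustered_pk;
  };

  // Pick the cheapest covering index; a clustered PK "key" is really
  // the whole row, so rank it as the longest possible candidate.
  int pick_shortest_covering(const std::vector<CoveringIndexSketch> &idx) {
    int best = -1;
    uint64_t best_len = UINT64_MAX;
    for (std::size_t i = 0; i < idx.size(); i++) {
      uint64_t len = idx[i].is_clustered_pk ? UINT64_MAX - 1
                                            : idx[i].key_length;
      if (len < best_len) {
        best_len = len;
        best = static_cast<int>(i);
      }
    }
    return best;
  }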
The problem was that block_size on partitioned tables was not set,
so keys_per_block was not correct, which affects the cost
calculation for the read time of indexes (including the
cost for group min/max). This resulted in bad optimizer
decisions.
Fixed by setting stats.block_size correctly.