The checks in the test for bug #12480 were too wide and
made the test depend on the procedures and triggers
present in the server.
Corrected the test to check only for the procedure and
trigger it creates.
The reason for this bug is that when mysqlbinlog dumps a query, the query is written to
the output with a delimiter appended right after it. If the query string ends with a '--'
comment, the delimiter is considered part of the comment, and if there are any
statements after this query, this causes a syntax error.
Start a new line before appending the delimiter after a query string.
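For illustration, output of the shape below (hypothetical statement, assuming the
/*!*/; delimiter that mysqlbinlog typically emits) triggered the problem, because the
delimiter is swallowed by the trailing comment:

  INSERT INTO t1 VALUES (1) -- trailing comment/*!*/;

With the fix, the delimiter is written on a new line:

  INSERT INTO t1 VALUES (1) -- trailing comment
  /*!*/;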
In a union without parentheses, the ORDER BY at the end applies to the
union as a whole. It therefore should not interfere with the individual
SELECT parts of the union.
Fixed by changing our parser rules appropriately.
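As a hypothetical illustration, in the following query the ORDER BY belongs
to the union as a whole, not to the second SELECT:

  SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY a;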
Problem: it is unsafe to read base64-printed events without first
reading the Format_description_log_event (FD). Currently, mysqlbinlog
cannot print the FD.
As a side effect, another bug has also been fixed: When mysqlbinlog
--start-position=X was specified, no ROLLBACK was printed. I changed
this, so that ROLLBACK is always printed.
This patch does several things:
- Format_description_log_events (FD) now print themselves in base64
  format.
- mysqlbinlog is now able to print FD events. It has three modes:
--base64-output=auto Print row events in base64 output, and print
FD event. The FD event is printed even if
it is outside the range specified with
--start-position, because it would not be
safe to read row events otherwise. This is
the default.
--base64-output=always Like --base64-output=auto, but also print
base64 output for query events. This is
like the old --base64-output flag, which
is also a shorthand for
--base64-output=always
--base64-output=never Never print base64 output, generate error if
row events occur in binlog. This is
useful to suppress the FD event in binlogs
known not to contain row events (e.g.,
because BINLOG statement is unsafe,
requires root privileges, is not SQL, etc)
- the BINLOG statement now handles FD events correctly, by setting
the thread's rli's relay log's description_event_for_exec to the
loaded event.
In fact, executing a BINLOG statement is almost the same as reading
an event from a relay log. Before my patch, the code for this was
separated (exec_relay_log_event in slave.cc executes events from
the relay log, mysql_client_binlog_statement in sql_binlog.cc
executes BINLOG statements). I needed to augment
mysql_client_binlog_statement to do parts of what
exec_relay_log_event does. Hence, I did a small refactoring and
moved parts of exec_relay_log_event to a new function, which I
named apply_event_and_update_pos. apply_event_and_update_pos is
called both from exec_relay_log_event and from
mysql_client_binlog_statement.
- When a non-FD event is executed in a BINLOG statement without
  previously executing an FD event in a BINLOG statement, it generates
  an error, because that's unsafe (see the sketch at the end of this
  entry). I took a new error code for that:
  ER_NO_FORMAT_DESCRIPTION_EVENT_BEFORE_BINLOG_STATEMENTS.
In order to get a decent error message containing the name of the
event, I added the class method char*
Log_event::get_type_str(Log_event_type type), which returns a
string name for the given Log_event_type. This is just like the
existing char* Log_event::get_type_str(), except it is a class
method that takes the log event type as parameter.
I also added PRE_GA_*_ROWS_LOG_EVENT to Log_event::get_type_str(),
so that names of old rows event are properly printed.
- When reading an event, I added a check that the event type is known
by the current Format_description_log_event. Without this, it may
crash on bad input (and I was bitten by this several times).
- I patched the following test cases, which all contain BINLOG
statements for row events which must be preceded by BINLOG
statements for FD events:
- rpl_bug31076
While I was here, I fixed some small things in log_event.cc:
- replaced hard-coded 4 by EVENT_TYPE_OFFSET in 3 places
- replaced return by DBUG_VOID_RETURN in one place
- The name of the logfile can be '-' to indicate stdin. Before my
patch, the code just checked if the first character is '-'; now it
does a full strcmp(). Probably, all arguments that begin with a -
are already handled somewhere else as flags, but I still think it
is better that the code reflects what it is supposed to do, with as
few dependencies as possible on other parts of the code. If we
one day implement that all command line arguments after -- are
files (as most unix tools do), then we will need this.
I also fixed the following in slave.cc:
- next_event() was declared twice, and queue_event was not static but
should be static (not used outside the file).
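To illustrate the BINLOG statement requirement above, a hypothetical sketch
with placeholder base64 payloads: replayed output must begin with an FD event.

  BINLOG '<base64 of Format_description_log_event>'; -- must come first
  BINLOG '<base64 of a row event>'; -- errors without the FD event above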
Now, every transaction (including autocommit transactions) starts with
a BEGIN and ends with a COMMIT/ROLLBACK in the binlog.
Added a test case, and updated lots of test case result files.
with null values
For queries containing GROUP_CONCAT(DISTINCT fields ORDER BY fields), there
was a limitation that the DISTINCT fields had to be the same as ORDER BY
fields, owing to the fact that one single sorted tree was used for keeping
track of tuples, ordering and uniqueness. Fixed by introducing a second
structure to handle uniqueness so that the original structure has only to
order the result.
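A hypothetical query of the kind that was previously rejected and now works,
with differing DISTINCT and ORDER BY columns:

  SELECT GROUP_CONCAT(DISTINCT a ORDER BY b) FROM t1;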
subselects into account
It is forbidden to use the SELECT INTO construction inside UNION statements
except in the last SELECT of the union. The parser records whether it
has seen INTO or not when parsing a UNION statement. But if the INTO was
legally used in an outer query, an error was thrown if a UNION was seen in a
subquery. Fixed in 5.0 by remembering the nesting level of INTO tokens and
suppressing the error unless it collides with the UNION.
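A hypothetical example that was wrongly rejected before the fix (the INTO
belongs to the outer query, not to the UNION in the subquery):

  SELECT MAX(a) INTO @m FROM (SELECT 1 AS a UNION SELECT 2) AS t;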
The problem is that some DDL statements (ALTER TABLE, CREATE
TRIGGER, FLUSH TABLES, ...) when under LOCK TABLES need to
momentarily drop the lock, reopen the table and grab the write
lock again (using reopen_tables). When grabbing the lock again,
reopen_tables doesn't pass a flag to mysql_lock_tables in
order to ignore the impending global read lock, which causes an
assertion because LOCK_open is being held. Also, dropping the
lock must not signal to any threads that the table has been
relinquished (related to the locking/flushing protocol).
The solution is to correct the way the table is reopened
and the locks are grabbed. When reopening the table under
LOCK TABLES, the table version should be set to 0 so other
threads have to wait for the table. When grabbing the lock,
any other flush should be ignored because it's theoretically
an atomic operation. The chosen solution also fixes a potential
discrepancy between binlog and GRL (global read lock) because
table placeholders were being ignored; now a FLUSH TABLES WITH
READ LOCK will properly wait for tables with open placeholders.
It's also important to mention that this patch doesn't fix
a potential deadlock if one uses two GRLs under LOCK TABLES
concurrently.
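A hypothetical sketch of the scenario this patch addresses:

  LOCK TABLES t1 WRITE;
  ALTER TABLE t1 ADD COLUMN b INT; -- must momentarily drop and re-grab the write lock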
cause ROLLBACK of statement", part 1. Review fixes.
Do not send OK/EOF packets to the client until we reached the end of
the current statement.
This is a consolidation, to keep the functionality that is shared by all
SQL statements in one place in the server.
Currently this functionality includes:
- close_thread_tables()
- log_slow_statement().
After this patch and the subsequent patch for Bug#12713, it shall also include:
- ha_autocommit_or_rollback()
- net_end_statement()
- query_cache_end_of_result().
In future it may also include:
- mysql_reset_thd_for_next_command().
There were two problems when inferring the correct field types resulting from
UNION queries.
- If the type is NULL for all corresponding fields in the UNION, the resulting
type would be NULL, while the type is BINARY(0) if there is just a single
SELECT NULL.
- If one SELECT in the UNION uses a subselect, a temporary table is created
  to represent the subselect, and the result type defaults to a STRING type,
  hiding the fact that the type was unknown (just a NULL value).
Fixed by remembering whenever a field was created from a NULL value and passing
type NULL to the type coercion in that case, and creating a string field
as the result of the UNION only if the type would otherwise be NULL.
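For instance (illustrative):

  SELECT NULL UNION SELECT NULL; -- result column type is now BINARY(0), as for a single SELECT NULL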
HOUR(), MINUTE(), ... returned spurious results when used on a DATE-cast.
This happened because DATE-cast object did not overload get_time() method
in superclass Item. The default method was inappropriate here and
misinterpreted the data.
Patch adds the missing method; get_time() on DATE casts now returns SQL NULL
on NULL input, 0 otherwise. This coincides with the way DATE columns
behave.
Also fixes a similar bug in the DATE field type.
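A hypothetical example of the affected construct:

  SELECT HOUR(CAST('2007-01-01' AS DATE)); -- now returns 0 rather than a spurious value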
The problem was that when convert_constant_item is called for subqueries,
this happens when we already started executing the top-level query, and
the field argument of convert_constant_item pointed to a valid table row.
In turn convert_constant_item used the field buffer to compute the value
of its item argument. This copied the item's value into the field,
and made equalities with outer references always true.
The fix saves/restores the original field's value when it belongs to an
outer table.
Both arguments of the function NAME_CONST must be constant expressions.
This constraint is checked in the Item_name_const::fix_fields method.
Yet if the argument of the function was not a constant expression, no
error message was reported. As a result, the client hung waiting for a
response.
Now the function Item_name_const::fix_fields reports an error message
when any of the additional context conditions imposed on the function
NAME_CONST is not satisfied.
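A hypothetical call that now produces an error instead of hanging the client:

  SELECT NAME_CONST('price', RAND()); -- RAND() is not a constant expression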
The index (key_part_1, key_part_2) was erroneously considered as compatible
with the required ordering in the function test_if_order_by_key when
a query with an ORDER BY clause contained a condition of the form
key_part_1=const OR key_part_1 IS NULL
and the order list contained only key_part_2. This happened because the value
of the const_key_parts field in the KEYUSE structure was not formed correctly
for the keys that could be used for ref_or_null access.
This was fixed in the code of the update_ref_and_keys function.
The problem could not manifest itself for MyISAM databases because the
implementation of the keys_to_use_for_scanning() handler function always
returns an empty bitmap for the MyISAM engine.
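A hypothetical query shape affected by this bug:

  SELECT * FROM t1 WHERE key_part_1 = 1 OR key_part_1 IS NULL ORDER BY key_part_2;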
Change the LAST_EXECUTED time to the execution start time instead of the
execution completion time. This ensures the END time is always the same as
or later than the LAST_EXECUTED time.
The Item_func_set_user_var::register_field_in_read_map() did not check
that the result_field was null. This caused server crashes for queries that
required ordering by such a field and were executed without using a temporary
table.
The Item_func_set_user_var::register_field_in_read_map() now checks that the
result_field is not null.
When read_only option was enabled, a user without SUPER privilege could
perform CREATE DATABASE and DROP DATABASE operations.
This patch adds a check to make sure this isn't possible. It also attempts to
simplify the logic used to determine if relevant tables are updated,
making it more human readable.
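A hypothetical sketch of the hole being closed:

  SET GLOBAL read_only = 1;
  CREATE DATABASE d1; -- now rejected for users without the SUPER privilege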
Problem: when altering a table to a partitioned table,
the server does not see it as a change of engine.
Solution: if the alter includes partitioning, check whether it is possible
to change engines (e.g., is the table referenced by a FK).
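A hypothetical statement now subject to the engine-change check:

  ALTER TABLE t1 PARTITION BY HASH (a) PARTITIONS 4;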
ha_partition::update_create_info() just calls update_create_info()
of the first partition, so it only gets the auto-increment maximum
of the first partition, and SHOW CREATE TABLE can show a too-small
AUTO_INCREMENT value.
Fixed by implementing ha_partition::update_create_info() the way
other handlers do.
HA_ARCHIVE:stats.auto_increment handling made consistent with other engines
Anti-patch. This patch undoes the previously pushed patch. It is
null-merged in versions 5.1 and above since there the original
patch is still desired.
When executing a DROP VIEW statement on the master, the statement was written
into the bin-log without checking for possible errors, so the statement would
always be bin-logged with the error code cleared even if some error occurred,
for example, when some of the views being dropped do not exist. This would
cause failure on the slave.
The bin-log is now written after the check for errors; if at least one view
has been dropped, the query is bin-logged, possibly with an error.
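A hypothetical statement illustrating the case:

  DROP VIEW v1, v_does_not_exist; -- v1 is dropped; the query is now logged with the error code set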
numbers into char fields" and bug #12860 "Difference in zero padding of
exponent between Unix and Windows"
Rewrote the code that determines what 'precision' argument should be
passed to sprintf() to fit the string representation of the input number
into the field.
We get finer control over conversion by pre-calculating the exponent, so
we are able to determine which conversion format, 'e' or 'f', will be
used by sprintf().
We also remove the leading zero from the exponent on Windows to make it
compatible with the sprintf() output on other platforms.
PS-protocol data is stored in a different format: the MYSQL_RECORDS->data
contains a link to the record content, not to an array of links to
the fields' contents. So we have to handle it separately for
the embedded-server query cache.
1. A bad error message was given when one tried to use a MERGE table with
an InnoDB child table.
2. After selecting from a correct MERGE table and then altering
one of the children to InnoDB, incorrect results were returned.
These bugs have been fixed with the patch for bug 26379 (Combination
of FLUSH TABLE and REPAIR TABLE corrupts a MERGE table).
For verification, I added the test case from the bug report.
filesort() uses the file->estimate_rows_upper_bound() call to allocate
internal buffers. If this function returns a value smaller than
the number of rows that will be returned later in find_all_keys(),
that can cause a server crash.
Fixed by implementing ha_federated::estimate_rows_upper_bound() to
return the maximum possible number of rows.
The present estimation for FEDERATED always returns 0 if the table is
linked to a VIEW.
The parser rejects valid INTERVAL() expressions when associated with
arithmetic operators. The problem is that the way the expression
and interval grammar rules were organized caused shift/reduce conflicts.
The solution is to tweak the interval rules to avoid shift/reduce
conflicts by removing the broken interval_expr rule and explicitly
specifying its content where necessary.
Original fix by Davi Arnaut, revised and improved rules by Marc Alff
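A hypothetical expression of the kind previously rejected, using the
INTERVAL() comparison function together with an arithmetic operator:

  SELECT INTERVAL(3, 1, 2) + 1;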
The following clarification should be made in The Manual:
Standard SQL is quite clear that, if new columns are added
to a table after a view on that table is created with
"select *", the new columns will not become part of the view.
In all cases, the view definition (view structure) is frozen
at CREATE time, so changes to the underlying tables do not
affect the view structure.
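For example (illustrative):

  CREATE VIEW v1 AS SELECT * FROM t1;
  ALTER TABLE t1 ADD COLUMN c2 INT; -- c2 does not become part of v1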
Default values of variables were not subject to upper/lower bounds
and step, while setting variables was. Bounds and step are also
applied to defaults now; defaults are corrected quietly, while values
given by the user are corrected with a warning as needed. Lastly,
very large values could wrap around, starting from 0 again. They
are now bounded at the maximum value for the respective data type
if no lower maximum is specified in the variable's definition.
This bug is actually two bugs in one, one of which is CREATE TRIGGER under
LOCK TABLES and the other is CREATE TRIGGER under LOCK TABLES simultaneous
to a FLUSH TABLES WITH READ LOCK (global read lock). Both situations could
lead to a server crash or deadlock.
The first problem arises from the fact that when under LOCK TABLES, if the
table is in the set of locked tables, the table is already open and it doesn't
need to be reopened (not a placeholder). Also in this case, if the table is
not write locked, an exclusive lock can't be acquired because of a possible
deadlock with another thread also holding a (read) lock on the table. The
second issue arises from the fact that one should never wait for a global
read lock if it's holding any locked tables, because the global read lock
is waiting for these tables and this leads to a circular wait deadlock.
The solution for the first case is to check if the table is write locked
and upgrade the write lock to an exclusive lock, failing for tables that
are not write locked. Grabbing the exclusive lock in this case also means
ensuring that the table is opened only by the calling thread. The second
issue is partly fixed by not waiting for the global read lock if the thread
is holding any locked tables.
The second issue is only partly addressed in this patch because it turned
out to be much wider and also affects other DDL statements. Reported as
Bug#32395
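A hypothetical sketch of the first scenario:

  LOCK TABLES t1 READ;
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @a = 1;
  -- t1 is not write locked, so this now fails instead of risking a deadlock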
Kill of a CREATE TABLE source_table LIKE statement waiting for a
name-lock on the source table causes a bad lock interaction.
The mysql_create_like_table() has a bug: if the connection is
killed while waiting for the name-lock on the source table, it
jumps to the wrong error path and tries to unlock the source table
and LOCK_open, though neither was locked.
The solution is to simply return when the name-lock request is killed;
it's safe to do so because no lock was acquired and no cleanup is needed.
The original bug report also contains descriptions of other problems
related to this scenario, but they are either already fixed in 5.1 or
will be addressed separately (see the bug report for details).
The bug was that for ordered index scans, ha_partition::index_init() did
not put index columns into table->read_set if the underlying storage
engine did not have HA_PARTIAL_COLUMN_READ flag.
This was causing an assertion failure when handle_ordered_index_scan() tried
to sort the records according to index order.
Fixed by making ha_partition::index_init() put index columns into table->read_set
for all ordered scans.
There's currently no way of knowing the determinism of a UDF.
And the optimizer and the sequence() UDFs were making wrong
assumptions about what the is_const member means.
Plus there was no implementation of update_used_tables(),
causing the optimizer to overwrite the information returned by
the <udf>_init function.
Fixed by aligning the assumptions about the semantics of
is_const and providing an implementation of update_used_tables().
Added a TODO item for the UDF API change needed to make a better
implementation.
Problem: passing a non-constant name to the NAME_CONST function results in a crash.
Fix: check the NAME_CONST name argument; return a fake item type if we got
non-constant argument(s).
Loading 4.1 privilege tables into 5.0 or 5.1 failed silently because the
procs_priv table was missing.
This caused the server to crash on any attempt to store new grants, because
of uninitialized structures.
This patch breaks the grant-loading function up into two phases to allow
a missing procs_priv table to fail with a warning instead of crashing the
server.
self-join
When doing DELETE with self-join on a MyISAM or MERGE table, it could
happen that a record being retrieved in join_read_next_same() has
already been deleted by previous iterations. That caused the engine's
index_next_same() method to fail with HA_ERR_RECORD_DELETED error and
the whole DELETE query to be aborted with an error.
Fixed by suppressing the HA_ERR_RECORD_DELETED error in
ha_myisam::index_next_same() and ha_myisammrg::index_next_same(). Since
HA_ERR_RECORD_DELETED can only be returned by MyISAM, there is no point
in filtering this error in the SQL layer.
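A hypothetical self-join DELETE of the affected kind:

  DELETE t1 FROM t1 JOIN t1 AS t2 ON t1.a = t2.b;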
crashes MySQL 5.1.22
There was a difference in how UNIONs are handled
on top level and when in sub-query.
Because the rules for sub-queries were syntactically
allowing cases that are not currently supported by
the server we had crashes (this bug) or wrong results
(bug 32051).
Fixed by making the syntax rules for UNIONs match the
ones at top level.
These rules however do not support nesting UNIONs, e.g.
(SELECT a FROM t1 UNION ALL SELECT b FROM t2)
UNION
(SELECT c FROM t3 UNION ALL SELECT d FROM t4)
Support for statements with nested UNIONs will be
added in a future version.
Problems:
1. When looking for a matching partition we miss the fact that the maximum
allowed value is in the PARTITION p VALUES LESS THAN MAXVALUE.
2. One can insert the maximum value if the numeric maximum value is the last
range (this should only work with LESS THAN MAXVALUE).
3. One cannot have both the numeric maximum value and the MAXVALUE string as
ranges (the same value, but different meanings).
Fix: consider the maximum value as a supremum.
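A hypothetical illustration of point 2, using the INT maximum as the last range:

  CREATE TABLE t1 (a INT)
  PARTITION BY RANGE (a)
  (PARTITION p0 VALUES LESS THAN (2147483647));
  INSERT INTO t1 VALUES (2147483647); -- now rejected; only LESS THAN MAXVALUE accepts it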
table cache is full
After reading the last record from a freshly opened archive table
(e.g. after a flush table, or if there is no room in the table cache),
the table is reported as crashed.
The problem was that azio wrongly invalidated the azio_stream when it
met EOF.
The client program 'mysqlbinlog' crashed when trying to print a User_var_log_event holding
a floating-point value, since my_b_printf() does not support
floating-point format specifiers.
This patch prints the floating-point number to an internal buffer, and then writes
that buffer to the output instead.
FLUSH TABLES WITH READ LOCK fails to properly detect write-locked
tables when running under low-priority updates.
The problem is that when trying to acquire a global read lock, the
reload_acl_and_cache() function fails to properly check if the thread
has a low-priority write lock, which may later cause a server crash or
deadlock.
The solution is to simply check if the thread holds any of the
possible exclusive write lock types.
is_last_prefix <= 0, file .\opt_range.cc.
SELECT ... GROUP BY bit field failed with an assertion if the
bit length of that field was not divisible by 8.
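A hypothetical case, with a bit length not divisible by 8:

  CREATE TABLE t1 (b BIT(3));
  SELECT b FROM t1 GROUP BY b; -- no longer triggers the assertion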
Problem: setting the Item_func_rollup_const::null_value property to the argument's
null_value before (or without) evaluating the argument may result in a crash due to
a wrong null_value.
Fix: use is_null() to set Item_func_rollup_const::null_value instead, as it evaluates
the argument if necessary and returns a proper value.
Problem: even if an Item_xml_str_func successor returns NULL, it doesn't have
the corresponding property (maybe_null) set, which leads to a failed assertion.
Fix: set the nullability property of Item_xml_str_func.
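For instance (illustrative; ExtractValue() is one such Item_xml_str_func
successor):

  SELECT ExtractValue(NULL, '/a'); -- returns NULL; maybe_null is now set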
Index lookup does not always guarantee that we can
simply remove the relevant conditions from the WHERE
clause. Reasons can be e.g. conversion errors,
partial indexes etc.
The optimizer was removing these parts of the WHERE
condition without any further checking.
This leads to "false positives" when using indexes.
Fixed by checking the index reference conditions
(using WHERE) when using indexes with sub-queries.
Problem: we have CHECK TABLE options allowed (by accident?) for
ANALYZE/OPTIMIZE TABLE.
Fix: disable them.
Note: it might require additional fixes in 5.1/6.0
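A hypothetical statement that used to be accepted and is now rejected
(QUICK is a CHECK TABLE option):

  OPTIMIZE TABLE t1 QUICK;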
Problem: when caching 00000000-00000099 dates as integer values, we were
improperly shifting them up twice in get_datetime_value().
Fix: don't shift cached DATETIME values up for the second time.
In several cases, an error when processing the query would cause mysql to
return to the top level without printing warnings. Fix is to always
print any available warnings before returning to the top level.
only on some occasions
Referencing an element from the SELECT list in a WHERE
clause is not permitted. The namespace of the WHERE
clause is the table columns only. This was not enforced
correctly when resolving outer references in sub-queries.
Fixed by not allowing references to aliases in a
sub-query in WHERE.
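A hypothetical query of the rejected kind, where the subquery references an
alias from the outer SELECT list:

  SELECT a + 1 AS x FROM t1
  WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.b = x);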
The problem is that DROP TABLE and other DDL statements failed to
automatically close handlers associated with tables that were marked
for reopen (FLUSH TABLES).
The current implementation fails to properly discard handlers of
dropped tables (that were marked for reopen) because it searches
the open handler tables list using the current alias of the
table being dropped. The problem is that it must not use the open
handler tables list for the search, because the table might have been
closed (marked for reopen) by a flush tables command, and it also
must not use the current table alias at all, since multiple different
aliases may be associated with a single table. This is especially
visible when a user has two open handlers (using aliases) of the same
table and a flush tables command is issued before the table is
dropped (see test case). Scanning the handler table list is also
useless for dropping handlers associated with temporary tables,
because temporary tables are not kept in the THD::handler_tables
list.
The solution is to simply scan the handler hash table, searching
for and deleting all handlers with matching table names, if the
reopen flag is not passed to the flush function (indicating that
the handlers should be deleted). All matching handlers are deleted
even if the associated table is not open.
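A hypothetical sketch of the scenario:

  HANDLER t1 OPEN AS h1;
  FLUSH TABLES;  -- h1's table is marked for reopen
  DROP TABLE t1; -- now also closes h1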