The problem was an unclear error message, since it could suggest that
MyISAM did not support INSERT DELAYED.
Changed the error message to say that DELAYED is not supported by the
table, instead of the table's storage engine.
The confusion is that a partitioned table is, in some sense, using
the partitioning storage engine, which in turn uses the ordinary
storage engine. By saying that the table does not support DELAYED we
do not give any extra information about the storage engine or about
whether the table is partitioned.
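A minimal SQL sketch of the scenario (table and column names are illustrative only):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM
    PARTITION BY HASH (a) PARTITIONS 2;
  INSERT DELAYED INTO t1 VALUES (1);
  -- the error now says that the table does not support DELAYED,
  -- without implying anything about MyISAM or about partitioning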
PREPARE", review fixes:
- make the patch follow the specification of WL#4166 and remove
the new error that was originally introduced.
Now the client never gets an error from re-prepare unless it failed.
That is, even if the statement at hand returns a completely different
result set, this is not considered a server error.
The C API library, which cannot handle this situation, was modified to
return a client error.
Added additional test coverage.
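A rough illustration of the intended behaviour at the SQL level (object names are made up for the example):

  PREPARE ps FROM 'SELECT a FROM t1';
  ALTER TABLE t1 ADD COLUMN b INT;  -- metadata of t1 changes after PREPARE
  EXECUTE ps;                       -- statement is transparently re-prepared;
                                    -- no error is returned to the client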
WL#4165 Prepared statements: validation
WL#4166 Prepared statements: automatic re-prepare
Fixes
Bug#27430 Crash in subquery code when in PS and table DDL changed after PREPARE
Bug#27690 Re-execution of prepared statement after table was replaced with a view crashes
Bug#27420 A combination of PS and view operations cause error + assertion on shutdown
The basic idea of the patch is to keep track of table metadata between
prepared statement prepare and execute. If some table used in the statement
has changed, the prepared statement is re-prepared before execution.
See WL#4165 and WL#4166 contents and comments in the code for details
of the implementation.
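As an illustration, the Bug#27690 scenario looks roughly like this (sketch only; object names are arbitrary):

  PREPARE ps FROM 'SELECT * FROM t1';
  DROP TABLE t1;
  CREATE VIEW t1 AS SELECT 1 AS a;  -- t1 is now a view, its metadata differs
  EXECUTE ps;                       -- re-prepared against the view instead of crashing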
Based on contributed patch from Martin Friebe, CLA from 2007-02-24.
The parser lacked support for field sizes beyond the range of a signed long,
when they should extend to 2**32-1.
Now we correct that limitation, and also make the error handling
consistent for casts.
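A hedged sketch of the kind of statement affected, assuming the CAST length is the size in question:

  SELECT CAST(REPEAT('a', 5) AS BINARY(4294967295));
  -- a length up to 2**32-1 is now accepted by the parser; per the commit
  -- message, out-of-range lengths in casts get consistent error handling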
---
Fix minor complaints of Marc Alff, for patch against B-g#15776.
---
Merge zippy.cornsilk.net:/home/cmiller/work/mysql/bug15776/my50-bug15776
into zippy.cornsilk.net:/home/cmiller/work/mysql/bug15776/my51-bug15776
---
Merge zippy.cornsilk.net:/home/cmiller/work/mysql/bug15776/my51-bug15776
into zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-5.1-build
---
testing
The problem was that the LOAD DATA code (sql_load.cc) didn't take into
account that there may be items representing references to other
columns, which is the usual case with views. The crash happened because
Item_direct_view_ref was cast to Item_user_var_as_out_param,
which is not a base class.
The fix is to
1) Handle references properly;
2) Ensure that an item is treated as a user variable only when
it is indeed a user variable;
3) Report an error if LOAD DATA is used to load data into a
non-updatable column.
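A rough sketch of the failing case (file name and schema are hypothetical):

  CREATE TABLE t1 (a INT, b INT);
  CREATE VIEW v1 AS SELECT a, a + 1 AS b FROM t1;   -- v1.b refers to an expression,
                                                    -- hence it is not updatable
  LOAD DATA INFILE 'data.txt' INTO TABLE v1 (a, b);
  -- previously crashed; now reports an error for the non-updatable column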
Bug #18453 Warning/error message if there is a mismatch between ...
There were three problems:
1. the reported lack of warnings for the BEFORE syntax of PURGE;
2. a similar lack of warnings for the TO syntax;
3. inconsistent behaviour between the two: the latter removed index
entries regardless of whether the actual file corresponding to an
index record existed, while the former gave up at the first mismatch.
Fixed by deploying the warning generation and synchronizing the logic of
purge_logs() and purge_logs_before_date().
my_stat() is now called in both branches of purge_logs() (responsible
for the TO syntax of PURGE), similarly to how it behaves for the BEFORE syntax.
If there is no actual binlog file, my_stat() returns NULL and my_delete() is
not invoked.
A critical error is reported to the user if information about a file from
the index could not be retrieved, or the file could not be deleted, with a
system error code different from ENOENT.
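Both statement forms are now aligned; sketched below with placeholder file names and dates:

  PURGE BINARY LOGS TO 'mysql-bin.000010';
  PURGE BINARY LOGS BEFORE '2008-01-01 00:00:00';
  -- if a file listed in the index no longer exists, both forms emit a
  -- warning and continue; only non-ENOENT system errors are reported
  -- as critical errors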
The error message produced when a default value for an extra field is
missing was not as informative as it should be.
Fixed by improving the scheme of gathering, propagating and reporting
errors in applying rows events.
The scheme is as follows.
Any error that occurs while processing a row event is registered with
my_error().
In the end, Rows_log_event::do_apply_event() invokes rli->report() with a
message to display consisting of all the errors.
This mimics the output of `show warnings'.
A simple test checks three errors in processing an event.
Two hunks - a user-level error and pushing it into the list -
belong to the already fixed Bug#31702.
Some open issues relating to this artifact are listed on the Bug#21842 page
and on WL#3679.
Todo: synchronize the statement in the test comments - that Update and Delete
events may not stop when an extra field does not have a default - with the WL#3228 spec.
Problem: it is unsafe to read base64-printed events without first
reading the Format_description_log_event (FD). Currently, mysqlbinlog
cannot print the FD.
As a side effect, another bug has also been fixed: When mysqlbinlog
--start-position=X was specified, no ROLLBACK was printed. I changed
this, so that ROLLBACK is always printed.
This patch does several things:
- Format_description_log_event (FD) events now print themselves in
base64 format.
- mysqlbinlog is now able to print FD events. It has three modes:
--base64-output=auto Print row events in base64 output, and print
FD event. The FD event is printed even if
it is outside the range specified with
--start-position, because it would not be
safe to read row events otherwise. This is
the default.
--base64-output=always Like --base64-output=auto, but also print
base64 output for query events. This is
like the old --base64-output flag, which
is also a shorthand for
--base64-output=always
--base64-output=never Never print base64 output, generate error if
row events occur in binlog. This is
useful to suppress the FD event in binlogs
known not to contain row events (e.g.,
because BINLOG statement is unsafe,
requires root privileges, is not SQL, etc)
- the BINLOG statement now handles FD events correctly, by setting
the thread's rli's relay log's description_event_for_exec to the
loaded event.
In fact, executing a BINLOG statement is almost the same as reading
an event from a relay log. Before my patch, the code for this was
separated (exec_relay_log_event in slave.cc executes events from
the relay log, mysql_client_binlog_statement in sql_binlog.cc
executes BINLOG statements). I needed to augment
mysql_client_binlog_statement to do parts of what
exec_relay_log_event does. Hence, I did a small refactoring and
moved parts of exec_relay_log_event to a new function, which I
named apply_event_and_update_pos. apply_event_and_update_pos is
called both from exec_relay_log_event and from
mysql_client_binlog_statement.
- When a non-FD event is executed in a BINLOG statement without a FD
event having previously been executed in a BINLOG statement, it
generates an error, because that is unsafe (see the sketch at the end
of this entry). I added a new error code for that:
ER_NO_FORMAT_DESCRIPTION_EVENT_BEFORE_BINLOG_STATEMENTS.
In order to get a decent error message containing the name of the
event, I added the class method char*
Log_event::get_type_str(Log_event_type type), which returns a
string name for the given Log_event_type. This is just like the
existing char* Log_event::get_type_str(), except it is a class
method that takes the log event type as parameter.
I also added PRE_GA_*_ROWS_LOG_EVENT to Log_event::get_type_str(),
so that names of old rows event are properly printed.
- When reading an event, I added a check that the event type is known
by the current Format_description_log_event. Without this, it may
crash on bad input (and I was struck by this several times).
- I patched the following test cases, which all contain BINLOG
statements for row events which must be preceded by BINLOG
statements for FD events:
- rpl_bug31076
While I was here, I fixed some small things in log_event.cc:
- replaced hard-coded 4 by EVENT_TYPE_OFFSET in 3 places
- replaced return by DBUG_VOID_RETURN in one place
- The name of the logfile can be '-' to indicate stdin. Before my
patch, the code just checked whether the first character was '-'; now it
does a full strcmp(). Probably, all arguments that begin with a -
are already handled somewhere else as flags, but I still think it
is better that the code reflects what it is supposed to do, with as
few dependencies as possible on other parts of the code. If we
one day implement that all command-line arguments after -- are
files (as most unix tools do), then we will need this.
I also fixed the following in slave.cc:
- next_event() was declared twice, and queue_event was not static but
should be static (not used outside the file).
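A sketch of the resulting ordering requirement for BINLOG statements (the base64 payloads below are placeholders, not real event data):

  BINLOG '<base64 of a Format_description_log_event>';  -- must be executed first
  BINLOG '<base64 of a Write_rows_log_event>';
  -- without the preceding FD event the second statement now fails with
  -- ER_NO_FORMAT_DESCRIPTION_EVENT_BEFORE_BINLOG_STATEMENTS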
without PK
Bug#31609 Not all RBR slave errors reported as errors
bug#32468 delete rows event on a table with foreign key constraint fails
The first two bugs comprise idempotency issues.
First, there was no error code reported under the conditions of the bug
description, although the slave SQL thread halted.
Second, execution was different with and without a primary key in
the table.
Third, there was no way to instruct the slave whether to ignore an error
and skip to the following event or to halt.
Fourth, there are handler errors which might happen due to idempotent
applying of the binlog, but those were not listed among the "idempotent"
errors.
All the named issues are addressed.
For the third issue, there is a new global system variable, changeable at
run time, which controls the slave SQL thread behaviour.
The new variable allows further extensions that mimic the sql_mode
session/global variable.
To address the fourth issue, the new Bug#32468 had to be fixed as it was
standing in the way.
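The commit message does not name the variable; assuming it is slave_exec_mode (the variable introduced around this change), usage would look roughly like:

  SET GLOBAL slave_exec_mode = 'IDEMPOTENT';  -- skip the "idempotent" class of row errors
  SET GLOBAL slave_exec_mode = 'STRICT';      -- halt the slave SQL thread on them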
Problem: there was no standard syntax error message when
creating partitions with a syntax error in
the partitioning clause.
Solution: added "Syntax error: " to the error message.
If the FIELDS TERMINATED BY string in the SELECT INTO OUTFILE clause
starts with a special character (one of n, t, r, b, 0, Z or N) and ENCLOSED BY
is empty, every occurrence of this character within a
field value is duplicated.
The duplication has been avoided.
A new warning message has been added: "First character of
the FIELDS TERMINATED string is ambiguous; please use
non-optional and non-empty FIELDS ENCLOSED BY".
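A hedged example of the affected statement (the output path is arbitrary):

  SELECT * FROM t1 INTO OUTFILE '/tmp/t1.txt'
    FIELDS TERMINATED BY 'N' ENCLOSED BY '';
  -- 'N' is in the special set, so this statement now raises the new
  -- warning instead of duplicating every 'N' found inside field values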
The problem was that the RETURNS column in mysql.proc was
CHAR(64), which was not enough for storing long-named data types.
The fix is to change CHAR(64) to LONGBLOB, and to throw warnings
at the time a stored routine is created if some data is truncated
while being written into mysql.proc.
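For example, a return type whose textual form exceeds 64 characters (names are illustrative):

  CREATE FUNCTION f1() RETURNS ENUM('first-rather-long-enum-value',
                                    'second-rather-long-enum-value',
                                    'third-rather-long-enum-value')
  RETURN 'first-rather-long-enum-value';
  -- before the fix the type text was silently truncated in mysql.proc;
  -- with LONGBLOB it is stored in full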
Declaring an all-space column name in SELECT FROM DUAL or in a view
leads to a misleading warning message:
"Leading spaces are removed from name ' '".
The Item::set_name method has been modified to raise warnings like
"Name ' ' has become ''" when an all-space identifier is truncated
to an empty string identifier, instead of the
"Leading spaces are removed from name ' '" warning message.
Based on contributed patch from Martin Friebe, CLA from 2007-02-24.
The parser lacked support for field sizes beyond the range of a signed long,
when they should extend to 2**32-1.
Now we correct that limitation, and also make the error handling
consistent for casts.
Two character mappings were way off (backtick and tilde were "E"
and "Y"!), and three others were slightly rotated. The former
would cause collisions; the latter was probably benign.
Now the character mappings are assigned exactly their normal values.
A SELECT query with more than 31 nested dependent SELECT queries returned
a wrong result.
A new error message has been added: ER_TOO_HIGH_LEVEL_OF_NESTING_FOR_SELECT.
It is reported as: "Too high level of nesting for select".
Bug#25422 (Hang with log tables)
Bug 17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug 23044 (Warnings on flush of a log table)
Bug 29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
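Sketch of the statements that used to hang (the exact reproduction depends on concurrent logging activity):

  TRUNCATE TABLE mysql.general_log;
  ALTER TABLE mysql.slow_log ENGINE = MyISAM;
  -- with the reworked LOGGER these no longer wait forever on the fake THDs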
The root cause traces to the following code:
in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so that waiting for these non-existent threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- remove totally the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db(),
but forgot to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal error handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
in_use->some_tables_deleted=1;
against another thread, without any consideration of thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
results.
When executing a CREATE EVENT statement with an ON COMPLETION NOT PRESERVE
clause (explicit or implicit) and a completion date in the past, we do not
create the event. Or, to put it differently, we create it and then drop it
immediately.
A warning is issued in this case, not an error -- we want old database
dumps to load successfully, and such dumps may contain events
that are no longer valid.
Update the warning text to not imply an erroneous condition.
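Illustrative example (event name and date are arbitrary):

  CREATE EVENT e1
    ON SCHEDULE AT '2000-01-01 00:00:00'  -- execution time already in the past
    ON COMPLETION NOT PRESERVE
    DO SET @a = 1;
  SHOW WARNINGS;
  -- the event is dropped right away and a warning, not an error, is reported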