The problem was that the RETURNS column in the mysql.proc table was
CHAR(64). That was not enough for storing long-named datatypes.
The fix is to change CHAR(64) to LONGBLOB, and to throw warnings
at the time a stored routine is created if some data is truncated
during writing into mysql.proc.
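A hypothetical routine whose RETURNS definition exceeds 64 characters
(the function name and enum values are illustrative only); with the old
CHAR(64) column its return type text could not be stored in full:

  CREATE FUNCTION f_pick_status(i INT)
    RETURNS ENUM('created','scheduled','running','paused','resumed',
                 'cancelled','completed','archived')
    DETERMINISTIC
    RETURN ELT(i, 'created','scheduled','running','paused','resumed',
                  'cancelled','completed','archived');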
Declaring an all-space column name in a SELECT FROM DUAL or in a view
led to a misleading warning message:
"Leading spaces are removed from name ' '".
The Item::set_name method has been modified to raise warnings like
"Name ' ' has become ''" when an all-space identifier is truncated
to an empty string identifier, instead of the
"Leading spaces are removed from name ' '" warning message.
Two character mappings were way off (backtick and tilde were mapped to
"E" and "Y"!), and three others were slightly rotated. The first
would cause collisions; the latter were probably benign.
Now the character mappings are assigned exactly their normal values.
A SELECT query with more than 31 nested dependent SELECT queries returned
a wrong result.
A new error message has been added: ER_TOO_HIGH_LEVEL_OF_NESTING_FOR_SELECT.
It is reported as: "Too high level of nesting for select".
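A sketch of the nesting pattern in question, using dependent (correlated)
subqueries; the table and column names are illustrative. With more than 31
such levels the server now reports the new error instead of returning a
wrong result:

  SELECT a FROM t1 WHERE a =
    (SELECT a FROM t2 WHERE t2.a =
      (SELECT a FROM t3 WHERE t3.a = t1.a));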
Bug#25422 (Hang with log tables)
Bug#17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug#23044 (Warnings on flush of a log table)
Bug#29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
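Statements of the kind that could hang when executed while logging to the
log tables was active (illustrative examples):

  TRUNCATE TABLE mysql.slow_log;
  ALTER TABLE mysql.general_log ENGINE = MyISAM;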
The root cause traces to the following code:
in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so that waiting for these non-existing threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- completely remove the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db(),
but forgot to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal error handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
in_use->some_tables_deleted=1;
against another thread, without any consideration of thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
When executing a CREATE EVENT statement with an ON COMPLETION NOT PRESERVE
clause (explicit or implicit) and a completion date in the past, we do not
create the event. Or, put differently, we create it and then drop it
immediately.
A warning is issued in this case, not an error -- we want old database
dumps to load successfully, and such dumps may contain events that are
no longer valid.
Update the warning text so that it does not imply an erroneous condition.
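A hypothetical example (event, table and column names are illustrative):
the schedule lies entirely in the past and ON COMPLETION NOT PRESERVE is
in effect, so the event is dropped right after creation and only a warning
is raised:

  CREATE EVENT ev_expired
    ON SCHEDULE AT '2000-01-01 00:00:00'
    ON COMPLETION NOT PRESERVE
    DO DELETE FROM test.t1 WHERE created < NOW() - INTERVAL 1 YEAR;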
- BUG#11986: Stored routines and triggers can fail if the code
has a non-ascii symbol
- BUG#16291: mysqldump corrupts string-constants with non-ascii-chars
- BUG#19443: INFORMATION_SCHEMA does not support charsets properly
- BUG#21249: Character set of SP-var can be ignored
- BUG#25212: Character set of string constant is ignored (stored routines)
- BUG#25221: Character set of string constant is ignored (triggers)
There were a few general problems that caused these bugs:
1. Character set information of the original (definition) query for views,
triggers, stored routines and events was lost.
2. mysqldump wrote its output in the client character set, which can be
inappropriate for encoding the definition query.
3. INFORMATION_SCHEMA used strings with mixed encodings to display object
definitions.
1. No query-definition character set.
In order to compile a query into execution code, some extra data (such as
environment variables or the database character set) is used. The problem
here was that this context was not preserved. So, on the next load it could
differ from the original one, and thus the result would be different.
The context contains the following data:
- client character set;
- connection collation (character set and collation);
- collation of the owner database;
The fix is to store this context and use it each time we parse (compile)
and execute the object (stored routine, trigger, ...).
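An illustrative mapping of the stored context to session and database
settings (the character set, collation and database name below are
examples only):

  SET character_set_client = cp1251;             -- client character set
  SET collation_connection = cp1251_general_ci;  -- connection collation
  ALTER DATABASE db1 COLLATE cp1251_general_ci;  -- collation of the owner database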
2. Wrong mysqldump-output.
The original query can contain several encodings (by means of character set
introducers). The problem here was that we tried to convert the original
query to the mysqldump client character set.
Moreover, we stored queries in different character sets for different
objects (views, for one, used UTF8; triggers used the original character set).
The solution is
- to store definition queries in the original character set;
- to change SHOW CREATE statements to output the definition query in the
binary character set (i.e. without any conversion);
- to introduce a SHOW CREATE TRIGGER statement;
- to dump special statements that switch the context to the original one
before dumping and restore it afterwards.
Note: in order to preserve the database collation at creation time, an
additional ALTER DATABASE might be used (to temporarily switch the database
collation back to the original value). In this case, the ALTER DATABASE
privilege will be required. This is a backward-incompatible change.
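A rough sketch of the kind of context-switching statements a dump now
contains around an object definition (the names, character sets and exact
statement sequence below are illustrative assumptions, not the exact
mysqldump output):

  SET @saved_cs_client = @@character_set_client;
  SET character_set_client = cp1251;
  ALTER DATABASE db1 COLLATE cp1251_general_ci;
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.c1 = 'Hello';
  ALTER DATABASE db1 COLLATE utf8_general_ci;
  SET character_set_client = @saved_cs_client;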
3. INFORMATION_SCHEMA showed non-UTF8 strings.
The fix is to generate a UTF8 query during parsing, store it in the object
and show it in INFORMATION_SCHEMA.
Basically, the idea is to create a copy of the original query and convert
it to UTF8. Character set introducers are removed and all text literals are
converted to UTF8.
This UTF8 query is intended to provide user-readable output. It must not be
used to recreate the object. Specialized SHOW CREATE statements should be
used for this.
The reason for this limitation is the following: the original query can
contain symbols from several character sets (by means of character set
introducers).
Example:
- original query:
CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1;
- UTF8 query (for INFORMATION_SCHEMA):
CREATE VIEW v1 AS SELECT 'Hello' AS c1;
Patch to add binlog format capabilities to the InnoDB storage engine.
The engine will not allow statement format logging when in READ COMMITTED
or READ UNCOMMITTED transaction isolation level.
In addition, an error is generated when trying to use READ COMMITTED
or READ UNCOMMITTED transaction isolation level in STATEMENT binlog
mode.
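A hypothetical illustration of the restriction (the table name is an
assumption): with statement-based binary logging in effect, DML on an
InnoDB table under READ COMMITTED is rejected with an error:

  SET SESSION binlog_format = STATEMENT;
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  INSERT INTO innodb_t1 VALUES (1);  -- error: statement format not allowed here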
Adding new fields Last_{IO,SQL}_Errno and Last_{IO,SQL}_Error to the output
of SHOW SLAVE STATUS to hold errors from the I/O and SQL threads respectively.
The old fields Last_Error and Last_Errno are aliases for Last_SQL_Error and
Last_SQL_Errno respectively.
The fields are added last in the output of SHOW SLAVE STATUS to allow old
applications to use the same positional arguments into the row, while
allowing new applications to benefit from the added information.
In addition, some new error codes are added (especially for the I/O
thread) to be able to provide sensible error messages.
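The new columns are appended after the existing ones, so old applications
that read the result row positionally keep working:

  SHOW SLAVE STATUS;
  -- ..., Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, Last_SQL_Error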
SHOW CREATE TABLE fails
Underlying table names that the merge engine fails to open were not
reported.
With this fix, CHECK TABLE issued against a merge table reports all
underlying table names that it fails to open. Other statements
are unaffected, that is, underlying table names are not included
in the error message.
This fix doesn't solve the SHOW CREATE TABLE issue.
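A hypothetical illustration (table names are assumptions): the MERGE table
refers to an underlying table that cannot be opened, and CHECK TABLE now
names it in its report:

  CREATE TABLE t1 (a INT) ENGINE = MyISAM;
  CREATE TABLE m1 (a INT) ENGINE = MERGE UNION = (t1, t_missing);
  CHECK TABLE m1;  -- the report now includes t_missing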
Adding support to allow engines to tell what formats they can handle.
The server will generate an error if it is not possible to log the
statement according to the logging mode in effect.
Adding flags to several storage engines to state what they can handle.
Changes to the NDB handler removing code that forces row-based mode and
adding a flag saying that NDB can only handle row format.
Adding a check that binlog flags are only used for real tables that are
opened for writing.
Replacing binlog_row_based_if_mixed with the variable binlog_stmt_flags,
which holds several flags, and adding member functions to manipulate the
flags.
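A hypothetical illustration of the error case (the table name is an
assumption): with statement-based logging in effect, a write to a table in
an engine that declares row-only capability, such as NDB, is rejected:

  SET SESSION binlog_format = STATEMENT;
  INSERT INTO ndb_t1 VALUES (1);  -- error: the engine can only log in row format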
Added code to generate a warning when an attempt is made to log an unsafe
statement to the binary log. The warning is both pushed to the
SHOW WARNINGS table and written to the error log. To prevent flooding
the error log, the warning is only written to the error log once per
open session.
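A hypothetical example of an unsafe statement under statement-based logging
(using UUID() as a typical nondeterministic function is an assumption of
what counts as unsafe; the table name is illustrative):

  SET SESSION binlog_format = STATEMENT;
  CREATE TABLE t2 (id CHAR(36));
  INSERT INTO t2 VALUES (UUID());  -- pushes the new warning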
when there are no up-to-date system tables to support it:
- initialize the scheduler before reporting "Ready for connections".
This ensures that warnings, if any, are printed before "Ready for
connections", and this message is not mangled.
- do not abort the scheduler if there are no system tables
- check the tables once at start up, remember the status and disable
the scheduler if the tables are not up to date.
If one attempts to use the scheduler with bad tables,
issue an error message.
- clean up the behaviour of the module under LOCK TABLES and pre-locking
mode
- make sure implicit commit of Events DDL works as expected.
- add more tests
Collateral clean ups in the events code.
This patch fixes Bug#23631 Events: SHOW VARIABLES doesn't work
when mysql.event is damaged
Adding an event that can be used to denote that an incident occurred
on the master. The event can be used to denote a gap in the replication
stream, but can also be used to denote other incidents.
In addition, the injector interface is extended with functions to
generate an incident event. The function will also rotate the binary
log after generating an incident event to get a fresh binary log.
BUG#26429: SHOW CREATE EVENT is incorrect for an event that
STARTS NOW()
BUG#26431: Impossible to re-create an event from backup if its
STARTS clause is in the past
WL#3698: Events: execution in local time zone
The problem was that local times specified by the user in the AT, STARTS
and ENDS clauses of CREATE EVENT/ALTER EVENT statements were converted to
UTC, and the original time zone was forgotten. This way, the event scheduler
couldn't honor Daylight Saving Time shifts, and times shown to the
user were also in UTC. Additionally, CREATE EVENT didn't allow times
in the past, thus preventing straightforward event restoration from
old backups.
This patch reworks event scheduler time computations, performing them
in the time zone associated with the event. It also allows times to
be in the past.
The patch adds a time_zone column to the mysql.event table.
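A hypothetical illustration (the named time zone and the event body are
assumptions, and using a named zone requires the time zone tables to be
loaded): the event keeps the time zone of the creating session, and a
STARTS value in the past is now accepted:

  SET time_zone = 'Europe/Moscow';
  CREATE EVENT ev_daily
    ON SCHEDULE EVERY 1 DAY STARTS '2005-01-01 03:00:00'
    DO DELETE FROM test.log WHERE created < NOW() - INTERVAL 30 DAY;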
NOTE: The patch is almost final, but bug#9953 should be pushed
first.
The problem was that some facilities (like the CONVERT_TZ() function or the
server HELP statement) may require implicit access to some tables in the
'mysql' database. This access was done by ordinary means of adding
such tables to the list of tables the query is going to open.
However, if we issued LOCK TABLES before that, we would get a "table
was not locked" error when trying to open such implicit tables.
The solution is to treat certain tables as MySQL system tables, like
we already do for mysql.proc. Such tables may be opened for reading
at any moment regardless of any locks in effect. The cost of this is
that a system table may be locked for writing only together with other
system tables; it is disallowed to lock system tables for writing while
holding any other lock on any other table.
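A hypothetical illustration (the table and column names are assumptions):
under LOCK TABLES, the implicit read of the time zone tables used to fail
with "table was not locked"; it is now allowed:

  LOCK TABLES t1 READ;
  SELECT CONVERT_TZ(ts, 'UTC', 'Europe/Moscow') FROM t1;
  UNLOCK TABLES;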
After this patch the following tables are treated as MySQL system
tables:
mysql.help_category
mysql.help_keyword
mysql.help_relation
mysql.help_topic
mysql.proc (it already was)
mysql.time_zone
mysql.time_zone_leap_second
mysql.time_zone_name
mysql.time_zone_transition
mysql.time_zone_transition_type
These tables are now opened with open_system_tables_for_read() and
closed with close_system_tables(), or one table may be opened with
open_system_table_for_update() and closed with close_thread_tables()
(the latter is used for the mysql.proc table, which is updated as part of
normal MySQL server operation). These functions may be used even when
some tables have already been opened and locked.
NOTE: online update of the time zone tables is not possible during
replication, because there is no time zone cache flush on either LOCK
TABLES or FLUSH TABLES, so the master may serve stale time zone
data from its cache, while on the slave updated data will be loaded from
the time zone tables.