The first problem was that SHOW CREATE TRIGGER took a stronger metadata
lock than required. This caused the statement to be blocked unnecessarily.
For example, LOCK TABLE WRITE in one connection would block
SHOW CREATE TRIGGER in another connection.
Another problem was that a SHOW CREATE TRIGGER statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TRIGGER is an
information statement. The consequence was that SHOW CREATE TRIGGER
was able to block other connections from accessing the table
(e.g. using ALTER TABLE).
This patch fixes the problem by changing SHOW CREATE TRIGGER to take
an MDL_SHARED_HIGH_PRIO metadata lock, similar to what is already done
for SHOW CREATE TABLE. The patch also changes SHOW CREATE TRIGGER to
explicitly release any metadata locks taken by the statement after
it completes.
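A minimal sketch of the first scenario; the table t1 and trigger trg1 are
illustrative names, not taken from the patch:

  -- connection 1
  CREATE TABLE t1 (a INT);
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 1;
  LOCK TABLE t1 WRITE;

  -- connection 2: before the fix this was blocked by connection 1's
  -- LOCK TABLE WRITE; with MDL_SHARED_HIGH_PRIO it no longer blocks
  SHOW CREATE TRIGGER trg1;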
Test case added to show_check.test.
concurrent SHOW CREATE
The problem was that a SHOW CREATE TABLE statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TABLE is an
information statement.
The consequence was that SHOW CREATE TABLE was able to block other
connections from accessing the table (e.g. using ALTER TABLE).
This patch fixes the problem by explicitly releasing any metadata
locks taken by SHOW CREATE TABLE after the statement completes.
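A sketch of the blocking that could occur before this change (t1 is an
illustrative table name):

  -- connection 1
  CREATE TABLE t1 (a INT);
  START TRANSACTION;
  SHOW CREATE TABLE t1;  -- before the fix, the metadata lock was not
                         -- released when the statement completed

  -- connection 2: used to wait on connection 1's metadata lock
  ALTER TABLE t1 ADD COLUMN b INT;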
Test case added to show_check.test.
for write by another connection
The problem was that if a table was locked in one connection by
LOCK TABLES ... WRITE, REPAIR TABLE or OPTIMIZE TABLE, SHOW CREATE
TABLE from another connection would be blocked. As SHOW CREATE TABLE
only reads metadata about the table, such blocking is not needed.
The cause was that when SHOW CREATE TABLE tried to get a metadata
lock on the table in order to open it, it used the wrong type of
metadata lock request. It used MDL_SHARED_READ, which is used when
the intent is to read both table metadata and table data. Instead
it should have used MDL_SHARED_HIGH_PRIO, which signifies an intent
to only read metadata.
This patch fixes the problem by making sure SHOW CREATE TABLE uses
the MDL_SHARED_HIGH_PRIO metadata lock request type when trying to
open the table. The patch also fixes a similar problem with the
mysql_list_fields API call.
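A small illustration of the previously blocked case (t1 is a placeholder
name):

  -- connection 1
  CREATE TABLE t1 (a INT) ENGINE = MyISAM;
  LOCK TABLES t1 WRITE;

  -- connection 2: used to wait for UNLOCK TABLES; with MDL_SHARED_HIGH_PRIO
  -- it is answered right away, since only metadata is read
  SHOW CREATE TABLE t1;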
Test case added to show_check.test.
Bug#16565 mysqld --help --verbose does not order variables
Bug#20413 sql_slave_skip_counter is not shown in show variables
Bug#20415 Output of mysqld --help --verbose is incomplete
Bug#25430 variable not found in SELECT @@global.ft_max_word_len;
Bug#32902 plugin variables don't know their names
Bug#34599 MySQLD Option and Variable Reference need to be consistent in formatting!
Bug#34829 No default value for variable and setting default does not raise error
Bug#34834 ? Is accepted as a valid sql mode
Bug#34878 Few variables have default value according to documentation but error occurs
Bug#34883 ft_boolean_syntax cant be assigned from user variable to global var.
Bug#37187 `INFORMATION_SCHEMA`.`GLOBAL_VARIABLES`: inconsistent status
Bug#40988 log_output_basic.test succeeded though syntactically false.
Bug#41010 enum-style command-line options are not honoured (maria.maria-recover fails)
Bug#42103 Setting key_buffer_size to a negative value may lead to very large allocations
Bug#44691 Some plugins configured as MYSQL_PLUGIN_MANDATORY in can be disabled
Bug#44797 plugins w/o command-line options have no disabling option in --help
Bug#46314 string system variables don't support expressions
Bug#46470 sys_vars.max_binlog_cache_size_basic_32 is broken
Bug#46586 When using the plugin interface the type "set" for options caused a crash.
Bug#47212 Crash in DBUG_PRINT in mysqltest.cc when trying to print octal number
Bug#48758 mysqltest crashes on sys_vars.collation_server_basic in gcov builds
Bug#49417 some complaints about mysqld --help --verbose output
Bug#49540 DEFAULT value of binlog_format isn't the default value
Bug#49640 ambiguous option '--skip-skip-myisam' (double skip prefix)
Bug#49644 init_connect and \0
Bug#49645 init_slave and multi-byte characters
Bug#49646 mysql --show-warnings crashes when server dies
When checking for an error after removing the special view error handler,
the code did not take into account that open_tables() may fail because
the current statement has been killed.
Added a check for thd->killed.
Added a client program to test it.
Text conflict in mysql-test/collections/default.experimental
Text conflict in mysql-test/r/show_check.result
Text conflict in mysql-test/r/sp-code.result
Text conflict in mysql-test/suite/binlog/r/binlog_tmp_table.result
Text conflict in mysql-test/suite/rpl/t/disabled.def
Text conflict in mysql-test/t/show_check.test
Text conflict in mysys/my_delete.c
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/log.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/repl_failsafe.cc
Text conflict in sql/slave.cc
Text conflict in sql/sql_parse.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_yacc.yy
Text conflict in storage/myisam/ha_myisam.cc
Corrected results for stm_auto_increment_bug33029.reject
(2009-12-01 20:01:49.000000000 +0300):

@@ -42,9 +42,6 @@
 RETURN i;
 END//
 CALL p1();
-Warnings:
-Note 1592 Statement may not be safe to log in statement format.
-Note 1592 Statement may not be safe to log in statement format.
There should indeed be no Note present, because there is in fact an
auto-increment top-level query in sp() that triggers inserting into yet
another auto-inc table.
(todo: alert DaoGang to improve the test).
+ Fix for Bug#43114 wait_until_count_sessions too restrictive, random PB failures
+ Removal of a number of other weaknesses that were found
+ Modifications according to review
The Length value is the length of the field, while Max_length is the
length of the field value, so Max_length cannot be more than Length.
The fix: fixed the calculation of the Item_empty_string item length.
(Patch applied and queued on demand of Trudy/Davi.)
Addendum to the fix for bug 30000:
show procedure/function code defined only in debug builds
show_check.test:
Addendum to the fix for bug 30000:
show procedure/function code defined only in debug builds
Bug#25422 (Hang with log tables)
Bug#17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug#23044 (Warnings on flush of a log table)
Bug#29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
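For illustration, statements of the following kind, issued at the same time
from two connections, could make the server hang (this assumes the general
query log and slow query log are written to the log tables):

  -- connection 1
  TRUNCATE TABLE mysql.general_log;
  -- connection 2
  ALTER TABLE mysql.slow_log ENGINE = MyISAM;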
The root cause traces to the following code in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so waiting for these non-existent threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- remove totally the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db(),
but forgot to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal error handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:

  in_use->some_tables_deleted=1;

against another thread, without any consideration of thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
binary SHOW CREATE TABLE or SELECT FROM I_S.
The problem is that mysqldump generates an incorrect dump for a table
with a non-ASCII column name if the mysqldump character set is ASCII.
The fix is to:
1. Switch character_set_client for the mysqldump connection to binary
before issuing the SHOW CREATE TABLE statement, in order to avoid
conversion.
2. Dump statements that switch character_set_client to UTF8 and back
around the CREATE TABLE statement.
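For illustration, a dump produced this way wraps each table definition in
statements along these lines (the exact versioned-comment form and the
table/column names are illustrative, not quoted from the patch):

  /*!40101 SET @saved_cs_client     = @@character_set_client */;
  /*!40101 SET character_set_client = utf8 */;
  CREATE TABLE `t1` (
    `имя` varchar(10) DEFAULT NULL
  );
  /*!40101 SET character_set_client = @saved_cs_client */;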
SHOW CREATE TABLE or SELECT FROM I_S.
This is the last patch for this bug, which depends on the big
CS patch and was pending.
The problem was that SHOW CREATE statements returned original
queries in the binary character set. That could cause the query
to be unreadable.
The fix is to use the original character_set_client when sending
the original query to the client. In order to preserve the query
in mysqldump, the 'binary' character set should be set for results
when issuing the SHOW CREATE statement. If either the source or the
destination character set is 'binary', no conversion is performed.
The idea is that since the source character set is no longer
'binary', we fix the destination character set to still produce
valid dumps.
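A sketch of how a dump tool can request the untouched definition under this
scheme (t1 is a placeholder table name):

  CREATE TABLE t1 (a INT);
  SET SESSION character_set_results = 'binary';  -- no conversion of the result
  SHOW CREATE TABLE t1;
  SET SESSION character_set_results = DEFAULT;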
Problem: when logging queries not using indexes, we checked a special flag
which was set only at server startup and did not change together with the
corresponding server variable.
Fix: check the variable value instead of the flag.
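Assuming the variable in question is log_queries_not_using_indexes (an
assumption based on the description), the point of the fix is that a runtime
change like the following now takes effect:

  CREATE TABLE t1 (a INT);                        -- no index on a
  SET GLOBAL log_queries_not_using_indexes = ON;
  SELECT * FROM t1 WHERE a = 1;  -- now written to the slow query log
                                 -- without restarting the server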
- BUG#11986: Stored routines and triggers can fail if the code
has a non-ascii symbol
- BUG#16291: mysqldump corrupts string-constants with non-ascii-chars
- BUG#19443: INFORMATION_SCHEMA does not support charsets properly
- BUG#21249: Character set of SP-var can be ignored
- BUG#25212: Character set of string constant is ignored (stored routines)
- BUG#25221: Character set of string constant is ignored (triggers)
There were a few general problems that caused these bugs:
1. Character set information of the original (definition) query for views,
triggers, stored routines and events was lost.
2. mysqldump output the query in the client character set, which can be
inappropriate for encoding the definition query.
3. INFORMATION_SCHEMA used strings with mixed encodings to display object
definitions.
1. No query-definition-character set.
In order to compile a query into execution code, some extra data (such as
environment variables or the database character set) is used. The problem
here was that this context was not preserved. So, on the next load it could
differ from the original one, and thus the result would be different.
The context contains the following data:
- client character set;
- connection collation (character set and collation);
- collation of the owner database;
The fix is to store this context and use it each time we parse (compile)
and execute the object (stored routine, trigger, ...).
2. Wrong mysqldump-output.
The original query can contain several encodings (by means of character set
introducers). The problem here was that we tried to convert the original
query to the mysqldump-client character set.
Moreover, we stored queries in different character sets for different
objects (views, for one, used UTF8; triggers used the original character set).
The solution is
- to store definition queries in the original character set;
- to change SHOW CREATE statements to output the definition query in the
binary character set (i.e. without any conversion);
- to introduce the SHOW CREATE TRIGGER statement;
- to dump special statements to switch the context to the original one
before dumping and restore it afterwards.
Note that in order to preserve the database collation at creation time,
an additional ALTER DATABASE statement might be used (to temporarily switch
the database collation back to the original value). In this case, the
ALTER DATABASE privilege will be required. This is a backward-incompatible
change.
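A hedged sketch of such a dump fragment; the database, table, trigger and
collation names are illustrative only:

  CREATE DATABASE db1 CHARACTER SET utf8 COLLATE utf8_general_ci;
  USE db1;
  CREATE TABLE t1 (a INT);
  -- temporarily restore the collation the trigger was originally created under
  ALTER DATABASE db1 CHARACTER SET latin1 COLLATE latin1_swedish_ci;
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 1;
  -- switch the database collation back afterwards
  ALTER DATABASE db1 CHARACTER SET utf8 COLLATE utf8_general_ci;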
3. INFORMATION_SCHEMA showed non-UTF8 strings
The fix is to generate a UTF8 query during parsing, store it in the object
and show it in INFORMATION_SCHEMA.
Basically, the idea is to create a copy of the original query and convert it
to UTF8. Character set introducers are removed and all text literals are
converted to UTF8.
This UTF8 query is intended to provide user-readable output. It must not be
used to recreate the object. Specialized SHOW CREATE statements should be
used for this.
The reason for this limitation is the following: the original query can
contain symbols from several character sets (by means of character set
introducers).
Example:
- original query:
CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1;
- UTF8 query (for INFORMATION_SCHEMA):
CREATE VIEW v1 AS SELECT 'Hello' AS c1;
SHOW CREATE TABLE or SELECT FROM I_S.
Actually, the bug reveals two problems:
- the original query is not preserved properly. This is the problem
of BUG#16291;
- the result set of the SHOW CREATE TABLE statement is binary.
This patch fixes the second problem for 5.0.
Both problems will be fixed in 5.1.
An attempt to open a file with the name '????????.frm'
can produce different errors:
- ER_NO_SUCH_TABLE on Unix
- ER_FILE_NOT_FOUND on Windows
because QUESTION MARK has a special meaning on Windows.
Make sure that either of these two errors is accepted.
Problem: crash on an attempt to open a table
having the "#mysql50#" prefix in its db or table name.
Fix: this prefix is reserved for "mysql_upgrade"
to access 5.0 tables whose file names are not encoded
according to the 5.1 tablename-to-filename encoding.
Don't try to open tables whose db name or table name
has this prefix.
The patch for WL 1563 added a new duplicate key error message so that the
key name could be provided instead of the key number. But a new error code
was used for the new message even though the error code did not need to change.
This could cause unnecessary problems for applications that used the old
ER_DUP_ENTRY error code to detect duplicate key errors.
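For context, the compatibility concern is with code that matches the classic
duplicate-key error, roughly as in this sketch (1062 is the long-standing
numeric value of ER_DUP_ENTRY; the table name is illustrative):

  CREATE TABLE t1 (id INT PRIMARY KEY);
  INSERT INTO t1 VALUES (1);
  INSERT INTO t1 VALUES (1);
  -- fails with error 1062 (ER_DUP_ENTRY); applications matching on 1062
  -- keep working even though the message now names the key instead of
  -- giving its number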