sporadically.
The cause of the sporadic timeout was a leaked protection
against the global read lock, taken by the RENAME statement
and not released if an error occurred during RENAME.
The leaked protection would keep the value of
protect_against_global_read from ever dropping to 0.
Consequently FLUSH TABLES in all connections, including the
one that leaked the protection, could not proceed.
The fix is to ensure that all branches in the RENAME code properly
release the GRL protection.
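A minimal sketch of the failure scenario (table names are invented
for illustration):

CREATE TABLE t1 (a INT);
-- Any error path in RENAME leaked the protection, e.g. a rename
-- involving a nonexistent table:
RENAME TABLE t1 TO t2, no_such_table TO t3;  -- fails with an error
-- Afterwards, taking the global read lock hung forever, since
-- protect_against_global_read never dropped back to 0:
FLUSH TABLES WITH READ LOCK;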
The problem is that creating an event could fail if the value of
the variable server_id didn't fit in the originator column of
the event system table. The cause is two-fold: it was possible
to set server_id to a value outside the documented range (from
0 to 2^32-1) and the originator column of the event table didn't
have enough room for values in this range.
The log tables (general_log and slow_log) also don't have a proper
column type to store the server_id and having a large server_id
value could prevent queries from being logged.
The solution is to ensure that all system tables that store the
server_id value have a proper column type (int unsigned) and that
the variable can't be set to a value that is not within the range.
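A sketch of the intended behavior (the actual patch adjusts the
system table creation scripts; the ALTER below only illustrates the
resulting column type):

SET GLOBAL server_id = 4294967295;  -- 2^32-1: the documented maximum
SET GLOBAL server_id = 4294967296;  -- out of range: no longer accepted
-- System tables storing server_id now use a column wide enough
-- for the whole documented range:
ALTER TABLE mysql.event MODIFY originator INT UNSIGNED NOT NULL;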
Problem: Many test cases don't clean up after themselves (fail
to drop tables or fail to reset variables). This implies that:
(1) check-testcase in the new mtr that currently lives in
5.1-rpl fails; (2) it may cause unexpected results in
subsequent tests.
Fix: make all tests clean up.
Also: cleaned away unnecessary output in rpl_packet.result
Also: fixed bug where rpl_log called RESET MASTER with a running
slave. This is not supposed to work.
Also: removed unnecessary code from rpl_stm_EE_err2 and made it
verify that an error occurred.
Also: removed unnecessary code from rpl_ndb_ctype_ucs2_def.
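A typical cleanup block at the end of an mtr test file might look
like this (table and variable names are illustrative):

--disable_warnings
DROP TABLE IF EXISTS t1, t2;
--enable_warnings
SET GLOBAL max_allowed_packet = DEFAULT;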
The problem is that when statement-based replication is enabled,
statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE
.. SELECT FROM need to grab a read lock on the source table that
does not permit concurrent inserts, which would in turn be denied
if the source table is a log table because log tables can't be
locked exclusively.
The solution is to not take such a lock when the source table is
a log table, as it is unsafe to replicate log tables under statement-
based replication. Furthermore, the read lock that does not permit
concurrent inserts is now only taken if statement-based replication
is enabled and the source table is not a log table.
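For example, with statement-based replication enabled, a statement
like the following used to be denied (t1 is a hypothetical user
table):

-- Previously denied: the source is a log table, which cannot be
-- locked against concurrent inserts; after the fix no such lock is
-- taken, as log tables are unsafe to replicate statement-based anyway:
INSERT INTO t1 SELECT * FROM mysql.general_log;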
- Updated slow_query_log_file_basic and general_log_file_basic
instead of the func versions, as the func versions run fine but the
basic versions fail.
- Sent innodb.test to dev@innodb.com.
- variables.test has differences probably due to a bug in mtr or in the SET statement (see bug#39369).
- general_log_file_basic.test and slow_query_log_file_basic.test have differences, which might be
produced by the new mtr (see bug#38124).
"CSV does not work with NULL value in datetime fields"
Attempting to insert a row with a NULL value for a DATETIME field
results in a CSV file which the storage engine cannot read.
Don't blindly assume that "0" is acceptable for all field types;
since CSV does not support NULL, we obtain the default non-NULL
value from the field.
Do not permit the creation of a table with nullable columns.
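A sketch of the resulting behavior (table and column names invented):

CREATE TABLE bad (d DATETIME) ENGINE = CSV;            -- now rejected
CREATE TABLE good (d DATETIME NOT NULL) ENGINE = CSV;  -- accepted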
The general log write function (general_log_print) uses printf-style
arguments which need to be pre-processed, meaning that all arguments
are copied to a single buffer. The problem is that the buffer size is
constant (1022 characters) but queries can be much larger than this.
The solution is to introduce a new log write function that accepts a
buffer and its length as arguments. The function is to be used when
formatted output is not required, which is the case for almost all
query write-to-log calls.
This is an incompatible change with respect to the log format of
prepared statements.
Before this patch, failures to write to the log tables (mysql.slow_log
and mysql.general_log) were improperly printed (the time was printed twice),
or not printed at all.
With this patch, failures to write to the log tables are reported in
the error log, for all cases of failure.
Bug#25422 (Hang with log tables)
Bug#17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug#23044 (Warnings on flush of a log table)
Bug#29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
The root cause traces to the following code:
in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so that waiting for these non-existent threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- remove totally the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db(),
but forgot to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal errors handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
  in_use->some_tables_deleted=1;
against another thread, without any consideration about thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
numbers)
Before this patch, the code in the class Log_to_csv_event_handler, which is
used by the global LOGGER object to write to the tables mysql.slow_log and
mysql.general_log, supported only records of the format defined for
these tables in the database creation scripts.
Also before this patch, the server would allow, with certain
limitations, ALTER TABLE on the LOG TABLES.
As implemented, the behavior of the server with regard to LOG TABLES
is inconsistent:
- either ALTER TABLE on LOG TABLES should be prohibited,
and the code writing to these tables can make assumptions on the record
format,
- or ALTER TABLE on LOG TABLES is permitted, in which case the code
writing a record to these tables should be more flexible and honor
new fields.
In particular, adding an AUTO_INCREMENT column to the logs
does not work as expected (per the bug report).
Given that ALTER TABLE on log tables has been explicitly
implemented to check that the log is off before performing the
operation, and that current test cases already cover this, the user
expectation is already set that this is a "feature" and should be
supported.
With this patch, the server will:
- populate AUTO_INCREMENT columns if present,
- populate any additional column with its default value
when writing a record to the LOG TABLES.
Tests are provided that detail the precise sequence of statements
a SUPER user might want to perform to add more columns to the log tables.
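For instance, a SUPER user might extend the general log like this
(a sketch under the assumption that the table is first switched to
MyISAM, since CSV does not support indexes; column name invented):

SET GLOBAL general_log = 'OFF';
ALTER TABLE mysql.general_log ENGINE = MyISAM;
ALTER TABLE mysql.general_log
  ADD COLUMN seq BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY;
SET GLOBAL general_log = 'ON';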
"Server Variables for Plugins"
Implement support for plugins to declare server variables.
Demonstrate functionality by removing InnoDB specific code from sql/*
New feature for HASH - HASH_UNIQUE flag
New feature for DYNAMIC_ARRAY - initializer accepts preallocated ptr.
Completed support for plugin reference counting.
Bug #21785 "Server crashes after rename of the log table" and
Bug #21966 "Strange warnings on create like/repair of the log
tables"
According to the patch, from now on, one should use RENAME to
perform a log table rotation (this should also be reflected in
the manual).
Here is a sample:
use mysql;
CREATE TABLE IF NOT EXISTS general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
The rules for RENAME of the log tables are as follows:
IF   1. Log tables are enabled
AND  2. The rename operates on a log table and nothing is being
        renamed to the log table,
DO   3. Throw an error message.
ELSE 4. Perform the rename.
The very RENAME query will go to the old (backup) table. This is
consistent with the behaviour we have with the binlog ROTATE LOGS
statement.
Other problems, which are solved by the patch are:
1) REPAIR of a log table is now an exclusive operation (as it should
be), which also eliminates lock-related warnings; and
2) CREATE TABLE ... LIKE now uses a usual read lock on the source
table rather than a name lock, which is too restrictive. This way we
get rid of another log-table-related warning, which occurred because
of the above fact (as a side effect, the name lock resulted in a
warning).
Bug #18559 "log tables cannot change engine, and
gets deadlocked when dropping w/ log on":
1) Add more generic error messages
2) Add new handlerton flag for engines which support
log tables
3) Remove (log-tables related) mutex lock in myisam to
improve performance
gets deadlocked when dropping w/ log on"
Log tables rely on the concurrent insert machinery to add data.
This means that log tables are always opened and locked by
special (artificial) logger threads. Because of this, a thread
that tries to drop a log table starts to wait for the table
to be unlocked, which will happen only if the log table is disabled.
A similar situation occurs if one tries to alter a log table.
However, in addition to the problem above, ALTER TABLE calls the
check_if_locking_is_allowed() routine for the engine. The
routine does not allow altering the log tables. So, ALTER does
not wait forever for logs to be disabled, but
returns with an error.
Another problem is that not all engines can be used for
the log tables, because they need concurrent insert.
In this patch we:
(1) Explicitly disallow dropping/altering a log table if it
is currently used by the logger.
(2) Update MyISAM to support log tables.
(3) Allow log tables to be dropped/altered when logging is
disabled.
At the same time we (4) disallow altering log tables to an
unsupported engine (after this patch only CSV and MyISAM are
allowed).
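A sketch of the resulting rules (assuming logging is toggled via
SET GLOBAL general_log):

ALTER TABLE mysql.general_log ENGINE = MyISAM;  -- error: log is in use
SET GLOBAL general_log = 'OFF';
ALTER TABLE mysql.general_log ENGINE = MyISAM;  -- ok
ALTER TABLE mysql.general_log ENGINE = InnoDB;  -- error: unsupported
SET GLOBAL general_log = 'ON';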
Recommit with review fixes.
Due to incorrect handling of FLUSH TABLES, log tables were marked for flush
but not reopened. Later we started to wait for the log table to be closed
(disabled) after the flush, and as nobody disabled the logs in concurrent
threads, the command lasted forever.
After internal consultations it was decided to skip logs during FLUSH TABLES.
The reasoning is that logging is done in the "log device", whatever it is,
which is always active and controlled by FLUSH LOGS. So, to flush the logs
one should use FLUSH LOGS, not FLUSH TABLES.
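In other words:

FLUSH TABLES;  -- now skips the log tables and does not block on them
FLUSH LOGS;    -- the supported way to flush the logs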
Move plugin declarations after system functions have been checked
(Fixes problem with ndb_config failing because SHM is not declared)
Fixed some memory leaks
The log content is obviously different in the two modes,
as queries are translated into Prepare and Execute
commands. Thus we should run the test in only one mode
so that the output matches the result file.