Add CMakeLists.txt to source distribution
CMakeLists.txt:
Added missing '${MYSQLD_EXE_SUFFIX}' to "mysqld" targets new in 5.1
Manual merge from 5.0 (bug#30118)
CMakeLists.txt, mysqlbinlog.cc, lib_sql.cc:
No need to test on USING_CMAKE, it is the only Windows build
Aligned client library build and use with the Unix version when it
comes to what source to include directly in the builds, and what
libraries to link with (bug#30118).
Also reviewed, corrected and clarified when static or dynamic
Thread Local Storage is to be used. Some code duplication and some
redundant library usage were removed, reducing the risk of
incorrect TLS usage.
Fixed bug in query cache that made it impossible to run mysqld with --debug
Fixed memory leaks in mysqldump and mysqltest
Memory leaks associated with wrong usage of mysqltest are not fixed. To find these, run
mysql-test-run --debug mysqltest
Fixed compiler warnings, errors and link errors
Fixed new bug on Solaris with gethrtime()
Added --debug-check option to all mysql clients to print errors and memory leaks
Added --debug-info option to all clients. This works like --debug-check but also prints memory and CPU usage
They had been introduced in 5.1 and were only later backported to 5.0;
as a consequence, the files in the 5.1 tree do not depend on the 5.0 ones,
and changes in 5.0 do not propagate into the 5.1 files.
To fix this, the (previous) files in 5.1 are now deleted ("bk rm"),
and the previously deleted files depending on 5.0 are now moved to the
respective source directories ("bk mv").
The current 5.1 content is restored in these files.
If you need the previous history of the 5.1 files ("bk revtool"),
access those in "BitKeeper/deleted".
Contrary to the original plan, I did not introduce the name
"CMakeLists.historic" - mostly in order not to clutter the source tree.
This fixes bug#29982.
mysqldump generates view definitions in two stages:
- dump CREATE TABLE statements for the temporary tables. For each view a
temporary table that has the same structure as the view is created.
- dump DROP TABLE statements for the temporary tables and CREATE VIEW
statements for the views.
This approach is required because views can have dependencies on each other
(a view can use other views), so they must be created in a particular
order. mysqldump, however, is not smart enough to work out that order, so
in order to resolve dependencies it creates temporary tables first of all.
The problem was that mysqldump might generate an incorrect dump for the
temporary table when a view has a non-ASCII column name. That happened when
default-character-set is not utf8.
The fix is to:
1. Switch character_set_client for the mysqldump connection to binary
before issuing the SHOW FIELDS statement in order to avoid conversion.
2. Dump statements that switch character_set_client to UTF8 and back
around the CREATE TABLE statement that is issued to create the temporary
table.
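As an illustration of point 2, the fragment written to the dump for the
temporary table looks roughly like this (the view and column names are
hypothetical, and the statements in the real dump may be wrapped in
versioned comments):

  -- save the character set of whoever replays the dump
  SET @saved_cs_client = @@character_set_client;
  -- the temporary table definition is written in UTF8, so switch to it
  SET character_set_client = utf8;
  CREATE TABLE `v1` (
    `имя` varchar(10)  -- non-ASCII column name, stored as UTF8 bytes
  );
  -- restore the character set the client had before
  SET character_set_client = @saved_cs_client;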
Bug#25422 (Hang with log tables)
Bug#17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug#23044 (Warnings on flush of a log table)
Bug#29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
The root cause traces to the following code:
in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so waiting for these non-existing threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- remove totally the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db(),
but forgot to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal errors handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
  in_use->some_tables_deleted=1;
against another thread, without any consideration of thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
binary SHOW CREATE TABLE or SELECT FROM I_S.
The problem is that mysqldump generates an incorrect dump for a table
with a non-ASCII column name if the mysqldump character set is
ASCII.
The fix is to:
1. Switch character_set_client for the mysqldump connection
to binary before issuing the SHOW CREATE TABLE statement in order
to avoid conversion.
2. Dump statements that switch character_set_client to UTF8 and back
around the CREATE TABLE statement.
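A minimal sketch of point 1, i.e. what the mysqldump connection itself
issues (the table name is hypothetical):

  -- as described in point 1, binary avoids character set conversion
  SET character_set_client = binary;
  SHOW CREATE TABLE `t1`;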
After dumping triggers, mysqldump copied
the value of the OLD_SQL_MODE variable to the SQL_MODE
variable. If the --compact option of mysqldump was
not set, the OLD_SQL_MODE variable had the value
of the uninitialized SQL_MODE variable, so
usually the NO_AUTO_VALUE_ON_ZERO option of the
SQL_MODE variable was discarded.
This fix is for the non-"--compact" mode of mysqldump,
because mysqldump --compact never set SQL_MODE to the
value of NO_AUTO_VALUE_ON_ZERO.
The dump_triggers_for_table function has been modified
to restore the previous value of the SQL_MODE variable after
dumping triggers, using the SAVE_SQL_MODE temporary
variable.
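A sketch of the save/restore pattern the fixed dump produces around the
trigger section (the SQL_MODE value shown is only an example, and the real
dump may wrap these statements in versioned comments):

  SET @SAVE_SQL_MODE = @@SQL_MODE;
  SET SQL_MODE = 'NO_AUTO_VALUE_ON_ZERO';
  -- CREATE TRIGGER statements are dumped here
  SET SQL_MODE = @SAVE_SQL_MODE;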
For each view the mysqldump utility creates a temporary table
with the same name and the same columns as the view,
in order to satisfy views that depend on this view.
After the creation of all tables, mysqldump drops all
temporary tables and creates the actual views.
However, the --skip-add-drop-table and --compact flags disable
DROP TABLE statements for those temporary tables. Thus, it was
impossible to create the views because temporary tables with
the same names already existed.
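A minimal sketch of the sequence described above, with hypothetical table
and view names; with --skip-add-drop-table or --compact the DROP TABLE
lines were missing, so the CREATE VIEW statements failed:

  -- stage 1: placeholder tables stand in for the views
  CREATE TABLE `v1` (`c1` int);
  CREATE TABLE `v2` (`c1` int);
  -- stage 2: replace each placeholder with the real view
  DROP TABLE IF EXISTS `v1`;
  CREATE VIEW `v1` AS SELECT `c1` FROM `t1`;
  DROP TABLE IF EXISTS `v2`;
  CREATE VIEW `v2` AS SELECT `c1` FROM `v1`;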
Added an option to yassl to allow "quiet shutdown" like openssl does.
This option causes the SSL libs to NOT perform the close_notify handshake
during shutdown. This fixes a hang we experienced because we hold a lock
during socket shutdown.
SHOW CREATE TABLE or SELECT FROM I_S.
This is the last patch for this bug, which depends on the big
CS patch and was pending.
The problem was that SHOW CREATE statements returned original
queries in the binary character set. That could cause the query
to be unreadable.
The fix is to use the original character_set_client when sending
the original query to the client. In order to preserve the query
in mysqldump, the character set for results should be set to 'binary'
when issuing SHOW CREATE statements. If either the source or destination
character set is 'binary', no conversion is performed.
The idea is that since the source character set is no longer
'binary', we fix the destination character set to still produce
valid dumps.
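A short sketch of what the mysqldump connection does when fetching a
definition (the view name is hypothetical); with 'binary' results the
server sends the stored bytes back untouched:

  -- disable result conversion on the dump connection
  SET character_set_results = 'binary';
  SHOW CREATE VIEW `v1`;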
The Ctrl-C handler in mysql closes the console while ReadConsole()
waits for console input.
But the main thread did not handle the case where ReadConsole() had not
read anything and processed the buffer as if it contained data.
Fixed to handle this error condition correctly.
No test case added, as the test relies on Ctrl-C being sent to the client
from its console.
1. Fix ddl_i18n_koi8r, ddl_i18n_utf8: explicitly specify the character-sets
directory for mysqldump;
2. Fix crash in mysqldump if a collation is not found;
3. Use the proper way to compare character set names.
- BUG#11986: Stored routines and triggers can fail if the code
has a non-ascii symbol
- BUG#16291: mysqldump corrupts string-constants with non-ascii-chars
- BUG#19443: INFORMATION_SCHEMA does not support charsets properly
- BUG#21249: Character set of SP-var can be ignored
- BUG#25212: Character set of string constant is ignored (stored routines)
- BUG#25221: Character set of string constant is ignored (triggers)
There were a few general problems that caused these bugs:
1. Character set information of the original (definition) query for views,
triggers, stored routines and events was lost.
2. mysqldump output queries in the client character set, which can be
inappropriate for encoding the definition query.
3. INFORMATION_SCHEMA used strings with mixed encodings to display object
definitions.
1. No definition-query character set.
In order to compile a query into execution code, some extra data (such as
environment variables or the database character set) is used. The problem
here was that this context was not preserved, so on the next load it could
differ from the original one, and thus the result would be different.
The context contains the following data:
- client character set;
- connection collation (character set and collation);
- collation of the owner database.
The fix is to store this context and use it each time we parse (compile)
and execute the object (stored routine, trigger, ...).
2. Wrong mysqldump output.
The original query can contain several encodings (by means of character set
introducers). The problem here was that we tried to convert the original
query to the mysqldump client character set.
Moreover, we stored queries in different character sets for different
objects (views, for one, used UTF8; triggers used the original character set).
The solution is:
- to store definition queries in the original character set;
- to change SHOW CREATE statements to output the definition query in the
binary character set (i.e. without any conversion);
- to introduce a SHOW CREATE TRIGGER statement;
- to dump special statements that switch the context to the original one
before dumping and restore it afterwards.
Note, in order to preserve the database collation at creation time, an
additional ALTER DATABASE might be used (to temporarily switch the database
collation back to the original value). In this case, the ALTER DATABASE
privilege will be required. This is a backward-incompatible change.
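A sketch of the context-switch statements the dump emits around an object
definition (all names, character sets and collations here are hypothetical,
and the real dump may wrap them in versioned comments):

  SET @saved_cs_client      = @@character_set_client;
  SET @saved_col_connection = @@collation_connection;
  SET character_set_client  = cp1251;
  SET collation_connection  = cp1251_general_ci;
  -- only when the database collation must be switched back to its original
  -- value; this is the part that requires the ALTER DATABASE privilege
  ALTER DATABASE `db1` CHARACTER SET cp1251 COLLATE cp1251_general_ci;

  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.c1 = 'Hello';

  SET character_set_client  = @saved_cs_client;
  SET collation_connection  = @saved_col_connection;
  ALTER DATABASE `db1` CHARACTER SET utf8 COLLATE utf8_general_ci;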
3. INFORMATION_SCHEMA showed non-UTF8 strings.
The fix is to generate a UTF8 query during parsing, store it in the object
and show it in INFORMATION_SCHEMA.
Basically, the idea is to create a copy of the original query and convert
it to UTF8. Character set introducers are removed and all text literals are
converted to UTF8.
This UTF8 query is intended to provide user-readable output. It must not be
used to recreate the object. Specialized SHOW CREATE statements should be
used for this.
The reason for this limitation is the following: the original query can
contain symbols from several character sets (by means of character set
introducers).
Example:
- original query:
CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1;
- UTF8 query (for INFORMATION_SCHEMA):
CREATE VIEW v1 AS SELECT 'Hello' AS c1;