The pattern used in close_temporary_tables() to generate the binlog
DROP statement for temporary tables was buggy: it could not deal with a
table whose name contains a grave accent. The fix uses
`append_identifier()' to quote the name and double embedded accents.
The binlog also carries no encoding info for a DROPped temporary table.
The idea of the fix is to switch to system_charset_info when a temporary
table is DROPped for the binlog: since it is the server, not the client,
that automatically generates this query, the binlog must record the
server's encoding for the coming DROP.
`write_binlog_with_system_charset()' is introduced and used to replace
similar problematic places in the code.
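For illustration (the table name is made up), the kind of identifier
that broke the old pattern, and roughly what the server now writes:

    -- A temporary table whose name contains a grave accent: the generated
    -- DROP must backtick-quote the identifier and double the embedded
    -- accent, which is what append_identifier() produces.
    CREATE TEMPORARY TABLE `t``1` (a INT);
    -- On disconnect, the server (not the client) logs something like:
    --   DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t``1`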
too many open statements". The patch adds a new global variable
@@max_prepared_stmt_count. This variable limits the total number
of prepared statements in the server. The default value of
@@max_prepared_stmt_count is 16382. 16382 small statements
(a select against 3 tables with GROUP, ORDER and LIMIT) consume
100MB of RAM. Once this limit has been reached, the server will
refuse to prepare a new statement and return ER_UNKNOWN_ERROR
(unfortunately, we can't add new errors to 4.1 without breaking 5.0). The limit is changeable after startup
and accepts any value from 0 to 1 million. If
the new value of the limit is less than the current
statement count, no new statements can be prepared, but the old
ones can still be used. Additionally, the current count of prepared
statements is now available through a global read-only variable
@@prepared_stmt_count.
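A short illustration of the new variables (the values are made up):

    -- Cap the total number of prepared statements in the server.
    SET GLOBAL max_prepared_stmt_count = 100;
    -- Read the current number of prepared statements (read-only).
    SELECT @@prepared_stmt_count;
    -- PREPARE succeeds until the cap is reached, then returns an error.
    PREPARE stmt FROM 'SELECT 1';
    DEALLOCATE PREPARE stmt;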
GROUP_CONCAT uses its own temporary table. When ROLLUP is present,
it creates a second copy of Item_func_group_concat. This copy receives
the same list of arguments as the original group_concat does. When the
copy is set up, the result_fields of the functions from the argument
list are repointed to the temporary table of this copy.
As a result, data from those functions flows directly to the ROLLUP copy,
and the original group_concat function shows a wrong result.
Since queries with COUNT(DISTINCT ...) also use temporary tables to store
the results of the COUNT function, they are affected by this bug as well.
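A minimal sketch of the kind of query that was affected (table and data
are made up):

    CREATE TABLE t1 (a INT, b CHAR(10));
    INSERT INTO t1 VALUES (1,'x'), (1,'y'), (2,'z');
    -- Before the fix, the per-group rows could come out wrong because the
    -- ROLLUP copy of the function had taken over the result fields.
    SELECT a, GROUP_CONCAT(b) FROM t1 GROUP BY a WITH ROLLUP;
    SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a WITH ROLLUP;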
The idea of the fix is to copy the content of the result_field for the
function under GROUP_CONCAT/COUNT from the first temporary table to the
second one, rather than setting result_field to point to the second
temporary table.
To achieve this, a force_copy_fields flag is added to the
Item_func_group_concat and Item_sum_count_distinct classes. This flag is
initialized to 0 and set to 1 in the make_unique() member function of
both classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as
well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, create_tmp_field() uses result_field as the source
field and does not repoint it to the newly created field for
Item_func_result_field and its descendants. Because of this, a copy
function is created to copy data from the old result_field to the newly
created field.
Problem #1: INSERT ... SELECT, version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
Duplicate detection is now done on the locks, after
open_and_lock_tables().
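A sketch of the now-working case (tables are made up):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1) INSERT_METHOD=LAST;
    INSERT INTO t1 VALUES (1), (2);
    -- t1 is on both sides, hidden below the MERGE table m1; the select
    -- result is buffered before any row is inserted.
    INSERT INTO m1 SELECT a + 10 FROM t1;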
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway as it should in theory
fix a possible MERGE table problem with multi-table update.
CREATE TABLE and PS/SP": make sure that the 'typelib' object for
ENUM values and the 'Item_string' object for the DEFAULT clause are
created in the statement memory root.
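The scenario, sketched with a made-up statement: re-execution of a
prepared CREATE TABLE must not touch objects allocated in the
already-freed execution memory root:

    PREPARE stmt FROM
      'CREATE TABLE t1 (c ENUM(''a'',''b'') DEFAULT ''b'')';
    EXECUTE stmt;
    DROP TABLE t1;
    -- The second execution reuses the parsed statement; the ENUM typelib
    -- and the DEFAULT value item must live in the statement memory root.
    EXECUTE stmt;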
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked,
though the function comment clearly says it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
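The primary fix matters for sequences like this one (table is made up):

    CREATE TABLE t1 (a INT, KEY (a));
    INSERT INTO t1 VALUES (1), (2);
    HANDLER t1 OPEN;
    HANDLER t1 READ FIRST;
    -- Meanwhile another connection runs: ALTER TABLE t1 ADD COLUMN b INT;
    -- The version check now notices that the table was replaced instead
    -- of silently reading from the old copy.
    HANDLER t1 READ NEXT;
    HANDLER t1 CLOSE;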
present): the problem was originally that the tables in auxilliary_tables
did not have the correct real_name, which caused problems in the second
call to tables_ok(). The fix corrects the real_name problem and also sets
the updating flag properly, which makes the second call to tables_ok()
unnecessary.
When creating the temporary table for a UNION, pass the
TMP_TABLE_FORCE_MYISAM flag to create_tmp_table() if fulltext
function(s) will be used when reading from the temporary table.
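A rough sketch of an affected query shape (table and data are made up):

    CREATE TABLE t1 (msg VARCHAR(100), FULLTEXT (msg)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES ('once upon a time'), ('hello world');
    -- If the MATCH result is read from the UNION's temporary table, that
    -- table must be MyISAM: a HEAP table cannot serve the fulltext
    -- function.
    SELECT MATCH (msg) AGAINST ('hello') AS score FROM t1
    UNION
    SELECT MATCH (msg) AGAINST ('time') FROM t1;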
Fixed a portability problem with bool in C programs.
Moved close_thread_tables() out of the LOCK_thread_count mutex (safety fix).
my_sleep() -> pthread_cond_timedwait()
of system vars at PREPARE time": implement a special Item
to handle system variables. This Item substitutes itself with
a basic constant containing the variable's value at fix_fields() time.
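For example (the variable choice is arbitrary):

    -- The special Item replaces the system variable with a basic constant
    -- holding its value at fix_fields() time, so this prepares cleanly.
    PREPARE stmt FROM 'SELECT @@max_join_size';
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;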
data": remove the fix for another bug (8807) that
added OUTER_REF_TABLE_BIT to all subqueries that used a placeholder
to prevent their evaluation at prepare. As this bit hanged in
Item_subselect::used_tables_cache for ever, a constant subquery with
a placeholder was never evaluated as such, which caused wrong
choice of the execution plan for the statement.
- to fix Bug#8807 backport a better fix from 5.0
- post-review fixes.
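The kind of statement whose plan used to go wrong (table is made up):

    CREATE TABLE t1 (a INT, KEY (a));
    -- The subquery is constant within one execution; with the stale
    -- OUTER_REF_TABLE_BIT gone, it is evaluated once again and the
    -- optimizer can choose a plan based on its value.
    PREPARE stmt FROM
      'SELECT * FROM t1 WHERE a = (SELECT MAX(a) FROM t1 WHERE a < ?)';
    SET @v = 10;
    EXECUTE stmt USING @v;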
#9728 'Decreased functionality in "on duplicate key update"' and
#8147 'a column proclaimed ambiguous in INSERT ... SELECT .. ON DUPLICATE'.
This ensures that fields are uniquely qualified and that one can't
update other tables in the ON DUPLICATE KEY UPDATE part.
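For example (tables are made up):

    CREATE TABLE t1 (id INT PRIMARY KEY, cnt INT);
    CREATE TABLE t2 (id INT, cnt INT);
    -- Columns in the ON DUPLICATE KEY UPDATE part may be qualified, must
    -- be unambiguous, and may refer only to the table being inserted into.
    INSERT INTO t1 (id, cnt)
      SELECT id, cnt FROM t2
      ON DUPLICATE KEY UPDATE t1.cnt = t1.cnt + 1;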
Bug#8367 "low log doesn't gives complete information about prepared
statements"
Implement status variables for prepared statements commands (a port of
the patch by Andrey Hristov).
See details in comments to the changed files.
No test case as there is no way to test slow log/general log in
mysqltest.
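The new counters can be inspected like any other status variables:

    -- One counter per prepared-statement command, e.g. Com_stmt_prepare,
    -- Com_stmt_execute, Com_stmt_close.
    SHOW STATUS LIKE 'Com_stmt%';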
ALTER, OPTIMIZE and ANALYZE statements".
In 4.1 we disabled logging of slow admin statements. The fix adds an
option to re-enable it.
No test case (slow log is not tested in the test suite), but tested
manually.
+ post-review fixes (word police mainly).
Ensure that 'null_value' is not accessed before val() is called in FIELD() functions.
Fixed initialization of key maps; this fixes some problems with keys when you have more than 64 keys.
Fixed ROLLUP so that it does not always create a temporary table; this fix ensures that func_gconcat.test results are now predictable.
1.) Added a new option to mysql_lock_tables() for ignoring FLUSH TABLES,
and used the new option in create_table_from_items().
This is necessary to prevent the SELECT table from being reopened:
it would get new storage assigned for its fields, while the
SELECT part of the command would still use the old (freed) storage.
2.) Protected the CREATE TABLE and CREATE TABLE ... SELECT commands
against a global read lock. This prevents a deadlock between
CREATE TABLE ... SELECT and FLUSH TABLES WITH READ LOCK,
and avoids the creation of new tables during a global read lock
(see the sketch after this list).
3.) Replaced set_protect_against_global_read_lock() and
unset_protect_against_global_read_lock() with
wait_if_global_read_lock() and start_waiting_global_read_lock()
in the INSERT DELAYED handling.
Added protection against the global read lock while creating and
initializing a delayed insert handler.
Allowed the global read lock to be ignored when locking the table
inside the delayed insert handler.
Added some minor improvements.
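A sketch of the scenario from item 2, run from two connections (tables
are made up):

    -- Connection 1:
    FLUSH TABLES WITH READ LOCK;
    -- Connection 2: this used to be able to deadlock against the global
    -- read lock; now it waits, and no new table appears while the lock
    -- is held.
    CREATE TABLE t2 SELECT * FROM t1;
    -- Connection 1: releasing the lock lets connection 2 proceed.
    UNLOCK TABLES;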
in slave SQL thread: if a transaction fails because of an InnoDB
deadlock or because innodb_lock_wait_timeout was exceeded, optionally
retry the transaction a certain number of times (new variable
--slave_transaction_retries).
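For example (the value is arbitrary), on the slave:

    -- Set at server startup: mysqld --slave_transaction_retries=10
    -- Verify the setting afterwards:
    SHOW VARIABLES LIKE 'slave_transaction_retries';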