column is increasing when table is recreated with PS/SP":
make use of create_field::char_length more consistently in the code.
Reinit create_field::length from create_field::char_length
for every execution of a prepared statement (actually fixes the
bug).
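A minimal repro sketch for the scenario above (table and column names are made up), checking that re-executing a prepared CREATE TABLE does not widen the column:
    PREPARE stmt FROM 'CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8)';
    EXECUTE stmt;
    SHOW CREATE TABLE t1;      -- expect varchar(10)
    DROP TABLE t1;
    EXECUTE stmt;              -- re-execution must not grow the column length
    SHOW CREATE TABLE t1;      -- still varchar(10)
    DROP TABLE t1;
    DEALLOCATE PREPARE stmt;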
CREATE TABLE and PS/SP": make sure that the 'typelib' object for
ENUM values and the 'Item_string' object for the DEFAULT clause are
created in the statement memory root.
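A sketch of the kind of statement this affects (names are illustrative); the ENUM typelib and the DEFAULT string must survive repeated executions of the prepared statement:
    PREPARE stmt FROM 'CREATE TABLE t2 (c ENUM(''yes'',''no'') DEFAULT ''no'')';
    EXECUTE stmt;
    DROP TABLE t2;
    EXECUTE stmt;              -- must behave identically on re-execution
    SHOW CREATE TABLE t2;
    DROP TABLE t2;
    DEALLOCATE PREPARE stmt;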
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked,
although the function comment clearly says it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
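A rough sketch of the primary problem (two connections, table name is illustrative): after the table is replaced, HANDLER ... READ must notice the new table version instead of continuing to read from the old one:
    -- connection 1
    HANDLER t1 OPEN;
    HANDLER t1 READ FIRST;
    -- connection 2
    ALTER TABLE t1 ADD COLUMN b INT;   -- replaces the table with a new one
    -- connection 1
    HANDLER t1 READ NEXT;              -- must not silently use the old table version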
produce a warning for CREATE DATABASE IF NOT EXISTS if the database already exists,
and do not update the database options in this case;
produce a warning for CREATE TABLE IF NOT EXISTS if the table already exists
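Expected behaviour after this change, roughly (names are illustrative):
    CREATE DATABASE IF NOT EXISTS db1;
    CREATE DATABASE IF NOT EXISTS db1 CHARACTER SET latin2;
    SHOW WARNINGS;    -- a note that db1 already exists; its options stay unchanged
    CREATE TABLE IF NOT EXISTS db1.t1 (a INT);
    CREATE TABLE IF NOT EXISTS db1.t1 (a INT);
    SHOW WARNINGS;    -- a note that t1 already exists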
We binlog the DROP TABLE for each table that was actually dropped. Per Sergei's
suggestion, a fixed buffer for the DROP TABLE query is pre-allocated from the THD pool,
and logging is now done in batches: a new batch is started when the buffer becomes full.
Reduced memory usage by reusing the table list instead of accumulating a list of
dropped table names. Also fixed the case where a table was not actually dropped, e.g.
due to missing permissions. Extended the test case to make sure that batched query
logging does work.
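A sketch of the intended behaviour (table names are illustrative): only tables that were actually dropped end up in the binlogged DROP TABLE statements, split into several statements if the query buffer fills up:
    DROP TABLE t1, t2, t3;    -- suppose only t1 and t3 could actually be dropped
    SHOW BINLOG EVENTS;       -- expect the logged DROP TABLE to mention t1 and t3 only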
adding test case
sql_table.cc:
- do not create a new item when charsets are the same
- return ER_INVALID_DEFAULT if default value cannot
be converted into the column character set.
item.cc:
- Allow conversion not only to Unicode,
but also to and from "binary".
- Adding safe_charset_converter() for Item_num
and Item_varbinary, returning a fixed const Item.
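A hedged example of the new check (column and literal are made up): a DEFAULT value that cannot be represented in the column character set is rejected:
    CREATE TABLE t1 (a CHAR(10) CHARACTER SET latin1 DEFAULT _utf8 X'D094');
    -- expected: ER_INVALID_DEFAULT, since the Cyrillic character has no latin1 equivalent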
- Added better error messages when trying to open a table that can't be discovered or unpacked. The most likely cause is that the table has no frm data, probably because it was created from NdbApi or is an NDB system table.
- Separated the functionality that was in ha_create_table_from_engine into two functions: one that checks whether the table exists and one that tries to create the table from the engine.
Ensure that 'null_value' is not accessed before val() is called in FIELD() functions
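For reference, FIELD() returns the 1-based position of its first argument among the remaining ones (0 if it is not found or is NULL); the fix concerns evaluations like these:
    SELECT FIELD('b', 'a', NULL, 'b');   -- 3
    SELECT FIELD(NULL, 'a', 'b');        -- 0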
Fixed initialization of key maps. This fixes some problems with keys when a table has more than 64 keys.
Fixed ROLLUP so that it does not always create a temporary table. This fix ensures that the func_gconcat.test results are now predictable.
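The kind of query affected (schema is illustrative):
    CREATE TABLE t1 (a INT, b VARCHAR(10));
    INSERT INTO t1 VALUES (1,'x'), (1,'y'), (2,'z');
    SELECT a, GROUP_CONCAT(b) FROM t1 GROUP BY a WITH ROLLUP;
    DROP TABLE t1;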
1.) Added a new option to mysql_lock_tables() for ignoring FLUSH TABLES.
Used the new option in create_table_from_items().
It is necessary to prevent the table read by the SELECT part from being reopened:
it would get new storage assigned for its fields, while the
SELECT part of the command would still use the old (freed) storage.
2.) Protected the CREATE TABLE and CREATE TABLE ... SELECT commands
against a global read lock. This prevents a deadlock when
CREATE TABLE ... SELECT is used in conjunction with FLUSH TABLES WITH READ LOCK
and avoids the creation of new tables during a global read lock (see the sketch after this list).
3.) Replaced set_protect_against_global_read_lock() and
unset_protect_against_global_read_lock() by
wait_if_global_read_lock() and start_waiting_global_read_lock()
in the INSERT DELAYED handling.
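A sketch of the scenario point 2 protects against (two connections, names are illustrative):
    -- connection 1
    FLUSH TABLES WITH READ LOCK;
    -- connection 2
    CREATE TABLE t2 SELECT * FROM t1;   -- now waits until the global read lock is
                                        -- released instead of deadlocking or creating
                                        -- a new table under the read lock
    -- connection 1
    UNLOCK TABLES;                      -- connection 2 can proceed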
Count null_bits separately from field offsets and adjust them in case of primary key parts.
(Previously a CREATE TABLE with many nullable fields that were part of a primary key caused MySQL to wrongly count the number of bytes needed to store the null bits.)
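A hedged illustration (the layout is made up): a table mixing nullable columns with primary key parts, where the bytes used for null bits must be counted separately from the key part offsets:
    CREATE TABLE t1 (
      a INT, b INT, c INT, d INT, e INT,
      f INT, g INT, h INT, i INT,
      PRIMARY KEY (a, b)
    );
    -- a and b are implicitly made NOT NULL as primary key parts; the remaining
    -- nullable columns still need null bits, and those bytes must not be
    -- miscounted when computing the key part offsets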
This is a more complete bug fix for #6236:
when we fall back to mysql_alter_table() (for InnoDB), do not do the binlogging in mysql_alter_table(),
as mysql_admin_table() is not supposed to do any binlogging (it is done by the caller).
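The user-visible case, roughly (names are illustrative): for an InnoDB table OPTIMIZE TABLE is handled via the ALTER TABLE fallback, and the statement should end up in the binary log only once, written by the caller of mysql_admin_table():
    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    OPTIMIZE TABLE t1;     -- InnoDB: mapped to the mysql_alter_table() fallback
    SHOW BINLOG EVENTS;    -- expect the statement to be logged a single time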