backport ce6c0e584e
MDEV-8960: Can't refer the same column twice in one ALTER TABLE
Problem was that when a column was created in an ALTER TABLE and
then referred to again, it was not searched for in the list
of current columns.
mysql_prepare_alter_table:
There are two cases:
(1) If the ALTER TABLE adds a new column and a later alter clause
changes that field's definition, the list of new columns was not
checked; instead an incorrect error was given.
(2) If the ALTER TABLE adds a new column and a later alter clause
changes its default, the list of new columns was not checked;
instead an incorrect error was given.
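A minimal sketch of the corrected lookup, with made-up types and names
(the real code in mysql_prepare_alter_table() walks the server's internal
lists): a referenced column must also be searched for among the columns
added earlier in the same ALTER before reporting it as unknown.

  #include <string>
  #include <vector>

  struct ColumnDef { std::string name; };

  // Hypothetical helper: look for a column first among the table's
  // existing fields, then among columns added by this same ALTER.
  const ColumnDef* find_column(const std::vector<ColumnDef>& existing,
                               const std::vector<ColumnDef>& added,
                               const std::string& name)
  {
    for (const ColumnDef& c : existing)
      if (c.name == name)
        return &c;
    for (const ColumnDef& c : added)  // this second lookup was missing
      if (c.name == name)
        return &c;
    return nullptr;  // only now is "unknown column" justified
  }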
Uncertain cases are deliberately not silenced, and no bugs are fixed.
The only functional change should be that ha_federated::extra()
no longer calls DBUG_PRINT to report an unhandled case for
HA_EXTRA_PREPARE_FOR_DROP.
Problem was with deleting a non-existing .frm file for a storage engine that
doesn't have .frm files (yet).
Fixed by not giving an error for non-existing .frm files for storage engines
that use discovery.
Also fixed a valgrind suppression related to the given test case.
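A sketch of the adjusted deletion logic, with hypothetical names (the
discovery flag would come from the handlerton in the real code): ENOENT
on the .frm is simply not an error when the engine discovers its tables.

  #include <cerrno>
  #include <cstdio>

  // Hypothetical sketch, not the server code.
  int delete_frm(const char* frm_path, bool engine_uses_discovery)
  {
    if (std::remove(frm_path) != 0)
    {
      if (errno == ENOENT && engine_uses_discovery)
        return 0;      // no .frm exists yet: fine for discovery engines
      return errno;    // a real error
    }
    return 0;
  }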
Be consistent and don't include the table name in the error message;
no other CREATE TABLE error does it.
(the crash happened because thd->lex->query_tables was NULL)
ANALYSIS:
=========
'CREATE TABLE' query with a large value for 'CONNECTION'
string reports an incorrect error.
The length of the connection string is stored in the .frm in two
bytes (max value 65535). When the string length exceeds that
maximum, the length is truncated to fit the two-byte limit.
Further processing then reads only part of the string, since the
stored length is incorrect. The remaining part of the string is
treated as an engine type and hence results in an error.
FIX:
====
We are now restricting the connection string length to 1024.
An appropriate error is reported if the length crosses this
limit.
NOTE:
=====
The 'PASSWORD' table option is documented as unused and
is processed within dead code. Hence it will not cause a
similar issue with large strings.
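The truncation can be shown with a short, self-contained example (not
the server code; CONNECT_STRING_MAX is a stand-in for the new limit):
storing a length in two bytes silently wraps, so later reads see a
shorter, wrong length.

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    std::size_t conn_len = 70000;                    // user-supplied length
    std::uint16_t stored = (std::uint16_t)conn_len;  // two-byte .frm field
    std::printf("%zu -> %u\n", conn_len, (unsigned)stored);  // 70000 -> 4464
    // The fix: reject the string early instead of letting it wrap.
    const std::size_t CONNECT_STRING_MAX = 1024;
    if (conn_len > CONNECT_STRING_MAX)
      std::printf("error: connection string too long\n");
    return 0;
  }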
Code flow hit an incorrect branch while closing table instances before
removal. This branch expects the thread to hold an open table instance,
whereas CREATE OR REPLACE doesn't actually hold one.
Before CREATE OR REPLACE TABLE it was impossible to hit this condition in
LTM_PRELOCKED mode, thus the problem didn't expose itself during DROP TABLE
or DROP DATABASE.
Fixed by adjusting condition to take into account LTM_PRELOCKED mode, which can
be set during CREATE OR REPLACE TABLE.
The main.merge test case was failing when tested using the row-based
binlog format.
While analyzing it, the following issues were found:
a) The server is calling binlog-related code even when a statement will
not be binlogged;
b) The child table list was not present in the table structure by the
time the CREATE TABLE statement was generated;
c) The tables in the child table list are not yet opened when
generating table create info under row-based replication;
d) CREATE TABLE LIKE TEMP_TABLE does not preserve the original table
storage engine when using row-based replication;
This patch addresses all of the above issues.
@ sql/sql_class.h
Added a function to determine if the binary log is disabled for
the current session. This is related to issue (a) above.
@ sql/sql_table.cc
Added code to skip binary-logging-related code if the statement
will not be binlogged. This is related to issue (a) above.
Added code to add the children to the query list of the table that
will have its CREATE TABLE generated. This is related to issue (b)
above.
Added code to force the storage engine to be written into the
CREATE TABLE. This is related to issue (d) above.
@ storage/myisammrg/ha_myisammrg.cc
Added a check to skip getting info about a child table if the
child table is not opened. This is related to issue (c) above.
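For issue (a), a hedged model of the kind of check that was added (the
real function lives in sql/sql_class.h and consults the actual session
state; these fields are stand-ins):

  // Hypothetical model of a per-session "is binlogging off?" test.
  struct Session
  {
    bool binlog_open;   // stand-in for the global binary log being open
    bool sql_log_bin;   // stand-in for the session's binlog option
    bool binlog_disabled() const { return !binlog_open || !sql_log_bin; }
  };

  void log_create_table(const Session& s)
  {
    if (s.binlog_disabled())
      return;  // skip all binlog-only work (issue (a))
    /* ... generate the CREATE TABLE statement and write the event ... */
  }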
fix a few cases where a successful ALTER was not binlogged:
* on errors after the completed ALTER, binlog it, then return an error
* don't let thd->killed abort open_table() after a completed
online ALTER.
ANALYSIS:
=========
A valgrind error is reported when CREATE TABLE .. SELECT
involving BIT columns triggers a column type redefinition.
In general the pack_flag is set for BIT columns in
'mysql_prepare_create_table()'. However, during the above
operation, redefined column types were handled after the
special handling for BIT columns, and thus pack_flag ended
up not being set correctly, triggering the valgrind error.
FIX:
====
The patch fixes this problem by setting pack_flag correctly
for BIT columns in the case of column type redefinition.
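A simplified model of the ordering bug, with made-up types and flag
values: the BIT special case must be (re)applied after the redefinition
replaces the column type.

  enum class Type { INT, BIT };

  struct Column
  {
    Type type;
    unsigned pack_flag = 0;
  };

  // Stand-in for the BIT special case in mysql_prepare_create_table().
  void set_bit_pack_flag(Column& c)
  {
    if (c.type == Type::BIT)
      c.pack_flag |= 1u;  // illustrative flag bit only
  }

  void redefine_from_select(Column& c, Type redefined)
  {
    c.type = redefined;
    set_bit_pack_flag(c);  // the fix: recompute after the redefinition
  }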
On shutdown the feedback plugin was sending a short report without
creating a THD. At that point current_thd was still pointing to the
already destroyed THD from the previous full report.
backport from 10.1:
commit bfe703a
Author: Sergei Golubchik <serg@mariadb.org>
Date: Tue Feb 3 18:19:56 2015 +0100
don't let current_thd to point to a destroyed THD
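The bug pattern, reduced to a self-contained sketch (THD and the
current-thread pointer are modelled with plain local types):

  struct THD { /* session state */ };

  thread_local THD* current_thd_ptr = nullptr;

  void full_report()
  {
    THD thd;
    current_thd_ptr = &thd;
    /* ... send the full report ... */
    current_thd_ptr = nullptr;  // the fix: don't leave a dangling pointer
  }                             // thd is destroyed here

  void short_report_on_shutdown()
  {
    // Before the fix this path could dereference the destroyed THD.
    if (current_thd_ptr == nullptr)
    {
      /* create a THD for the report, or skip THD-dependent work */
    }
  }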
Problem:
========
1) Drop table queries are re-generated by the server
before writing the events (queries) into the binlog,
for various reasons. If the table name/db name contains
non-regular characters (like Latin characters),
the generated query is wrong and hence breaks
replication.
2) In the edge case when the table name/db name contains
64 characters, the server throws an assert:
assert(M_TBLLEN < 128)
3) In the edge case when the db name contains 64 Latin
characters, the binlog content is interpreted badly,
leading to replication failure.
Analysis & Fix :
================
1) The parser reads the table name from the query, converts
it to the standard charset (utf8) and stores it in the table_name
variable. When the drop table query is regenerated from the same
table_name variable, it should be converted back from the standard
charset (utf8) to the original charset.
2) A Latin character takes two bytes per character. The identifier
limit is 64 characters and SYSTEM_CHARSET_MBMAXLEN is set to '3',
so a table name/db name may take up to 3 * 64 bytes. Hence the
assert is changed to
(M_TBLLEN <= NAME_CHAR_LEN*SYSTEM_CHARSET_MBMAXLEN)
3) db_len in the binlog event header takes 1 byte, and
db_len ranges from 0 to 192 bytes (3 * 64).
While reading db_len from the event, the server
was casting to uint instead of uchar, which led
to a bad db_len. This problem is fixed by changing the
cast type to uchar.
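Point (3) is the classic sign-extension trap; a self-contained
illustration (assuming char is signed, as on x86):

  #include <cstdio>

  int main()
  {
    const char* event = "\xC0";  // db_len byte = 192, read from a char buffer
    unsigned int bad  = (unsigned int)event[0];   // sign-extends: 4294967232
    unsigned int good = (unsigned char)event[0];  // 192, as intended
    std::printf("bad=%u good=%u\n", bad, good);
    return 0;
  }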
This includes fixing all utilities to not have any memory leaks,
as safemalloc warnings stopped tests from passing on MacOSX.
- Ensure that all clients take character-set-dir, as the
libmysqlclient library will use it.
- mysql-test-run now passes character-set-dir to all external clients.
- Changed dynstr_free() so that it can be called twice (made freeing code easier)
- Changed rpl_global_gtid_slave_state to be allocated dynamically, as it
includes a mutex that needs to be initialized/destroyed before my_end() is called.
- Removed rpl_slave_state::init() and rpl_slave_state::deinit(), as
their jobs are better handled by the constructor and destructor.
- Print the alias instead of table_name in check_duplicate_key(), as
table_name may have been converted to lower case.
Other things:
- Fixed a case in time_to_datetime_with_warn() where we were
using && instead of & when testing warning flags.
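The && vs & difference matters when testing bit flags; a minimal
illustration (the flag names and values are made up):

  #include <cstdio>

  int main()
  {
    const unsigned WARN_TRUNCATED    = 1u << 0;  // illustrative flags
    const unsigned WARN_OUT_OF_RANGE = 1u << 1;
    unsigned warnings = WARN_OUT_OF_RANGE;

    if (warnings && WARN_TRUNCATED)  // wrong: true for *any* warning
      std::printf("looks truncated (the bug)\n");
    if (warnings & WARN_TRUNCATED)   // right: tests the specific bit
      std::printf("really truncated\n");
    return 0;
  }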
PROBLEM
In 5.5, when doing a rename of a column, we ignore the case of the
old and new column names while comparing them. So if the change is only
in the case, we don't even mark the field FIELD_IS_RENAMED; we just update
the .frm file but don't recreate the table, as is the norm when ALTER is
used. This leads to an inconsistency in the InnoDB data dictionary, which
causes index creation to fail.
FIX
According to the documentation, any InnoDB column rename should trigger
a rebuild of the table. Therefore, for InnoDB tables we now do a strcmp()
between the column names, and if there is a case change in the column name
we trigger a rebuild.
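Why the binary comparison is needed, as a self-contained sketch
(strcasecmp is the POSIX case-insensitive compare):

  #include <cstdio>
  #include <cstring>
  #include <strings.h>  // strcasecmp (POSIX)

  int main()
  {
    const char* old_name = "c1";
    const char* new_name = "C1";
    // Case-insensitive compare: the rename is invisible, no rebuild.
    std::printf("ci sees rename: %d\n", strcasecmp(old_name, new_name) != 0);
    // Binary compare: the case-only rename is detected, rebuild triggered.
    std::printf("cs sees rename: %d\n", strcmp(old_name, new_name) != 0);
    return 0;
  }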
The fix was to add a test in Query_log_event::Query_log_event(): if we are
using CREATE ... SELECT, use the trans cache, like we do on the master.
This avoids using the statement cache (which doesn't have a checksum).
Other things:
- Removed dummy call my_checksum(0L, NULL, 0)
- More DBUG_PRINT
- Cleaned up Log_event::need_checksum() to make it more readable (similar as in MySQL 5.6)
- Renamed variable that was hiding another one in create_table_imp()
Problem :
---------
Issue-1: The root cause of the issues is that (col1 > 1) is not a
valid partition function, and we should have thrown an error at a much
earlier stage [partition_info::check_partition_info]. We were not
checking the sub-partition expression when the partition expression
was NULL.
Issue-2: A potential future issue: if any partition function needs to
change the item tree during open/fix_fields, we should release the changed
items, if any, before doing closefrm when we open the partitioned table
during creation in create_table_impl.
Solution :
----------
1. check_partition_info() - Check the sub-partition expression even if
there is no partition expression.
[partition by ... columns(...) subpartition by hash(<expr>)]
2. create_table_impl() - Assert that the change list is empty before doing
closefrm for a partitioned table. Currently no supported partition function
seems to change the item tree during open.
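The shape of fix (1), as a hedged control-flow sketch (the types and
helper are placeholders for the real partition_info members):

  struct Expr { bool is_valid_partition_function; };

  static bool check_part_function(const Expr* e)
  {
    return e->is_valid_partition_function;
  }

  // Before the fix, the sub-partition expression was only validated
  // when a partition expression was present.
  bool check_partition_exprs(const Expr* part_expr, const Expr* subpart_expr)
  {
    if (part_expr && !check_part_function(part_expr))
      return false;
    if (subpart_expr && !check_part_function(subpart_expr))  // checked even
      return false;                                          // when part_expr
    return true;                                             // is NULL
  }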
Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9345
in ha_delete_table()
* only convert ENOENT and HA_ERR_NO_SUCH_TABLE to warnings
* only return real error codes (that is, not ENOENT and
not HA_ERR_NO_SUCH_TABLE)
* intercept HA_ERR_ROW_IS_REFERENCED to generate backward
compatible ER_ROW_IS_REFERENCED
in mysql_rm_table_no_locks()
* no special code to handle HA_ERR_ROW_IS_REFERENCED
* no special code to handle ENOENT and HA_ERR_NO_SUCH_TABLE
* return multi-table error ER_BAD_TABLE_ERROR <table list> only
when there were many errors, not when there were many
tables to drop (but only one table generated an error)
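A hedged sketch of the resulting error policy in ha_delete_table()
(the numeric codes are illustrative stand-ins for the real constants):

  #include <cerrno>

  enum { HA_ERR_ROW_IS_REFERENCED = 152, HA_ERR_NO_SUCH_TABLE = 155,
         ER_ROW_IS_REFERENCED = 1217 };

  // Returns 0 when the "error" should only be reported as a warning.
  int filter_delete_error(int err)
  {
    if (err == ENOENT || err == HA_ERR_NO_SUCH_TABLE)
      return 0;                     // downgraded to a warning
    if (err == HA_ERR_ROW_IS_REFERENCED)
      return ER_ROW_IS_REFERENCED;  // the backward compatible error
    return err;                     // any other real error passes through
  }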
When RENAME TABLE is executed, it apparently does not check whether the engine
is available (unlike ALTER TABLE .. RENAME, which does). This means that if the
engine in question was not loaded for some reason, the table may become
unusable, since the engine won't know about the change.
With this patch RENAME TABLE fails if the storage engine is not available.