The '@' symbol cannot be used in the host name according to RFC 952.
The fix:
Added the function check_host_name(LEX_STRING *str), which checks that all
characters in the host name string are valid and that the host name length
does not exceed the maximum host name length (the check_string_length()
function was simply moved from the parser into check_host_name()).
Problem: repeating "CREATE ... (AUTO_INCREMENT) ... SELECT" may lead to
an assertion failure in open_table().
Fix: reset table->auto_increment_field_not_null after each record is
written.
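A hypothetical sketch of the statement shape involved (the table and column
names are made up; this entry does not give the exact repro):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) SELECT 1 AS b;
  DROP TABLE t1;
  -- repeating the same CREATE ... SELECT could hit the assertion before the fix
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) SELECT 1 AS b;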
The problem was incompatible syntax for key definitions in CREATE TABLE,
which shows up in dumps of table creates.
5.0 supports only the following syntax for key definitions (see "CREATE
TABLE syntax" in the manual):
{INDEX|KEY} [index_name] [index_type] (index_col_name,...)
While the 5.1 parser supports the above syntax, the "preferred" syntax was
changed to:
{INDEX|KEY} [index_name] (index_col_name,...) [index_type]
The above syntax is used in 5.1 for the SHOW CREATE TABLE output, which
led to dumps generated by 5.1 being incompatible with 5.0.
Fixed by changing the parser in 5.0 to support both the 5.0 and the 5.1
syntax for key definitions.
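For illustration, the two forms as they might appear in a dump (table and
index names are made up):

  CREATE TABLE t1 (
    a INT,
    KEY k1 USING BTREE (a)     -- 5.0 syntax: index type before the column list
  );

  CREATE TABLE t2 (
    a INT,
    KEY k1 (a) USING BTREE     -- 5.1 "preferred" syntax, as emitted by SHOW CREATE TABLE in 5.1
  );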
When the SQL_BIG_RESULT flag is specified, SELECT should store items from the
select list in the filesort data and use them when sending results to the
client. The get_addon_fields function is responsible for creating the
necessary structures for that, but it was allowed to do so only for SELECT
and INSERT .. SELECT queries. This made SQL_BIG_RESULT useless for
CREATE .. SELECT queries.
Now get_addon_fields also allows storing select list items in the filesort
data for CREATE .. SELECT queries.
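An illustrative statement of the kind affected (table and column names are
made up):

  CREATE TABLE t2
    SELECT SQL_BIG_RESULT a, COUNT(*) AS cnt FROM t1 GROUP BY a;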
Thanks to Martin Friebe for finding and submitting a fix for this bug!
A table with the maximum number of key segments and a maximum-length key name
would have a corrupted .frm file, due to an incorrect calculation of the
complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
with locked tables"
Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
temporary table exists"
A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
EXISTS ... SELECT statement under LOCK TABLES holding a read lock on the
target table.
An attempt to execute the same statement for an already existing
target table with triggers caused server crashes.
Also, concurrent execution of CREATE TABLE ... SELECT statements
and other statements involving the target table suffered from
various races (some of which might have led to deadlocks).
Finally, an attempt to execute CREATE TABLE ... SELECT when a temporary
table with the same name was already present led to data being inserted
into this temporary table and an empty non-temporary table being created.
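A minimal sketch of the first scenario (table names are illustrative):

  LOCK TABLES t1 READ;
  -- before the fix this could deadlock: the target table is already
  -- read-locked by LOCK TABLES
  CREATE TABLE IF NOT EXISTS t1 (a INT) SELECT 1 AS a;
  UNLOCK TABLES;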
All of the above problems stemmed from the old implementation of CREATE
TABLE ... SELECT, in which we created, opened and locked the target
table in a separate step, without any special protection and not together
with the rest of the tables used by the statement.
This undermined the deadlock-avoidance approach used in the server
and created a window for races. It also excluded the target table
from prelocking, causing problems with trigger execution.
The patch solves these problems by implementing a new approach to
handling CREATE TABLE ... SELECT for base tables.
We try to open and lock the table to be created at the same time as
the rest of the tables used by the statement. If the table does not
exist at that moment, we create a special placeholder for it in the
table cache, which prevents its creation or any other use by other
threads.
We still use the old approach for the creation of temporary tables.
Also note that we decided to postpone the introduction of some tests
for the concurrent behaviour of CREATE TABLE ... SELECT until 5.1.
The main reason for this is the absence in 5.0 of the ability to set the
@@debug variable at runtime, which can be circumvented only by using several
test files with individual .opt files. Since the latter is likely
to slow down the test suite unnecessarily, we chose not to push these tests
into 5.0, but to run them manually for this version and later push
their optimized version into 5.1.
- Use mysql_system_tables.sql to create MySQL system tables in
all places where we create them (mysql_install_db, mysql-test-run.pl
and mysql_fix_privilege_tables.sql)
Fixed compiler warnings (mostly in DBUG_PRINT() and unused arguments)
Fixed bug in the query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and always make a deep copy
(strmake, strdup)
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- TABLE_LIST object table->db should always be initialized in the parser or
  by the creator of the object.
For prepared statements this means that if the current database is changed
after a statement is prepared, the database that was current at prepare time
remains active. This also means that you cannot prepare a statement that
implicitly refers to the current database if the latter is not set.
This is not documented, and therefore needs documentation. This is NOT a
change in behavior for almost all SQL statements; the exceptions are:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1
Until this patch, t1 or t2 could be evaluated at the first execution of
the prepared statement.
CURRENT_DATABASE() still works OK and is evaluated at every execution
of the prepared statement.
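A sketch of the new behaviour for one of the statements listed above
(database and table names are made up):

  USE db1;
  PREPARE stmt FROM 'OPTIMIZE TABLE t1';
  USE db2;
  EXECUTE stmt;   -- still operates on db1.t1: the database current at prepare time is used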
Note that in stored routines this is not an issue, as the default
database is the database of the stored procedure and the "use" statement
is prohibited in stored routines.
This patch makes the use of check_db_used obsolete (it was not really used
in the old code either), as well as all other places that check table->db
and assign it from THD::db if it is NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place used thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
  (via strcmp/my_strcasecmp, whichever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
See also comments to individual files.
When a too long field is used for a key, only a prefix of the field is
used. The length is reduced to the maximum key length allowed for storage.
But if the field has a multibyte character set, it is possible to break a
multibyte character sequence. This led to a failed assertion in the InnoDB
code and a server crash when a record was inserted.
The make_prepare_table() function now aligns the truncated key length to a
multibyte character boundary.
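A sketch of the kind of table involved, assuming the server truncates the
key to the storage engine's maximum key length (names, sizes and engine are
illustrative):

  CREATE TABLE t1 (
    a VARCHAR(500) CHARACTER SET utf8,
    KEY (a)                      -- longer than the maximum key length, so a prefix is used
  ) ENGINE=InnoDB;
  -- before the fix, inserting multibyte data could crash the server if the
  -- truncated prefix ended in the middle of a character
  INSERT INTO t1 VALUES (REPEAT(_utf8 0xD0B1, 300));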
- Fixed tests
- Optimized new code
- Fixed some unlikely core dumps
- Better bug fixes for:
- #14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
- #14850 - ERROR 1062 when querying a view using a GROUP BY on a column that can be null
Problem #1: INSERT...SELECT, Version for 5.0.
Extended the unique table check by a check of lock data.
Merge sub-tables cannot be detected by doing name checks only.
Problem #1: INSERT...SELECT, Version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
The duplicate detection now works on the locks, after
open_and_lock_tables().
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway, as it should in theory
fix a possible MERGE table problem with multi-table update.
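An illustrative setup of the kind now detected (table names are made up):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  -- the target table also appears on the SELECT side, hidden below the MERGE table
  INSERT INTO t1 SELECT a FROM m1;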
Produce a warning for 'create database if not exists' if the database
exists, and do not update the database options in this case.
Produce a warning for 'create table if not exists' if the table exists.
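A sketch of the behaviour (names and options are illustrative; the exact
warning text may differ):

  CREATE DATABASE IF NOT EXISTS db1 CHARACTER SET latin1;
  CREATE DATABASE IF NOT EXISTS db1 CHARACTER SET utf8;
  SHOW WARNINGS;   -- now reports that db1 already exists
  -- the utf8 option from the second statement is NOT applied to db1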
Item::tmp_table_field_from_field_type() and create_tmp_field_from_item()
were converting a string field to a blob based on its byte length instead
of its character length, which resulted in converting a valid varchar
string with length == 86 to longtext.
Made the functions above take the maximum character width into account
when converting string fields to blobs.
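An illustrative case (hypothetical table names), assuming a multibyte
character set where the byte length exceeds the character length:

  CREATE TABLE t1 (a VARCHAR(86) CHARACTER SET utf8);
  CREATE TABLE t2 SELECT a FROM t1;
  -- before the fix, t2.a could come out as longtext instead of varchar(86)
  SHOW CREATE TABLE t2;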