The value of the "low-priority-updates" option and the LOW_PRIORITY
prefix were taken into account at parse time.
This caused triggers (among others) to ignore this flag (if
supplied for the DML statement).
Moved reading of the LOW_PRIORITY flag to run time.
Fixed an inconsistency in the handling of
SET GLOBAL LOW_PRIORITY_UPDATES: it now takes effect for
delayed INSERTs.
Tested by checking the effect of the LOW_PRIORITY flag via a
trigger.
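A minimal sketch of the kind of scenario the test exercises (table and
trigger names are illustrative, not the actual test objects):

  CREATE TABLE t1 (i INT);
  CREATE TABLE t2 (i INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
    FOR EACH ROW INSERT INTO t2 VALUES (NEW.i);
  INSERT LOW_PRIORITY INTO t1 VALUES (1);

Since the flag is now read at run time, the priority in effect when the
statement (and the trigger body it fires) actually executes is honoured,
rather than whatever was seen at parse time.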
Bug#4968 ""Stored procedure crash if cursor opened on altered table"
Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
stored procedure."
Bug#19733 "Repeated alter, or repeated create/drop, fails"
Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
Bug#24879 "Prepared Statements: CREATE TABLE (UTF8 KEY) produces a
growing key length" (this bug is not fixed in 5.0)
Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
statements in stored routines or as prepared statements caused
incorrect results (and crashes in versions prior to 5.0.25).
In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE ...
SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.
The problem of bugs 4968, 19733, 19182 and 6895 was that the functions
mysql_prepare_table, mysql_create_table and mysql_alter_table were not
re-execution friendly: during their operation they modified the contents
of LEX (members create_info, alter_info, key_list, create_list),
thus making the LEX unusable for the next execution.
In particular, these functions removed processed columns and keys from
create_list, key_list and drop_list. Search the code in sql_table.cc
for drop_it.remove() and similar patterns to find evidence.
The fix is to supply to these functions a usable copy of each of the
above structures at every re-execution of an SQL statement.
To simplify memory management, LEX::key_list and LEX::create_list
were added to LEX::alter_info, a fresh copy of which is created for
every execution.
The problem of the crash in bug 22060 stemmed from the fact that the above
mentioned functions were not only modifying the HA_CREATE_INFO structure
in LEX, but also changing it to point to areas in the volatile memory
of the execution memory root.
The patch solves this problem by creating and using an on-stack
copy of HA_CREATE_INFO in mysql_execute_command.
Additionally, this patch splits the part of mysql_alter_table
that analyzes and rewrites information from the parser into
a separate function - mysql_prepare_alter_table, in analogy with
mysql_prepare_table, which is renamed to mysql_prepare_create_table.
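For illustration, roughly the kind of sequence that used to misbehave
(table and statement names are arbitrary; compare bugs 6895 and 19733):

  CREATE TABLE t1 (a INT, b INT);
  PREPARE stmt FROM 'ALTER TABLE t1 DROP COLUMN b';
  EXECUTE stmt;                     -- first execution works
  ALTER TABLE t1 ADD COLUMN b INT;
  EXECUTE stmt;                     -- used to do nothing: drop_list had
                                    -- already been emptied by the first run

With alter_info (now also holding the former key_list and create_list)
copied afresh for every execution, each EXECUTE operates on intact parser
output.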
Bug #23667 "CREATE TABLE LIKE is not isolated from alteration
by other connections"
Bug #18950 "CREATE TABLE LIKE does not obtain LOCK_open"
As well as:
Bug #25578 "CREATE TABLE LIKE does not require any privileges
on source table".
The first and the second bugs resulted in various errors and an incorrect
binary log order when one tried to concurrently execute a CREATE TABLE LIKE
statement and DDL statements on the source table, or DML/DDL statements on
its target table.
The problem was caused by incomplete protection/table-locking against
concurrent statements implemented in the mysql_create_like_table() routine.
We solve it by implementing such protection in a proper way.
Most of the actual work for 5.1 was already done by the fix for bug 20662
and a preliminary patch changing locking in ALTER TABLE.
The third bug allowed a user who didn't have any privileges on a table to
create its copy and therefore circumvent the privilege check for
SHOW CREATE TABLE.
This patch solves the problem by adding the missing privilege check.
Finally it also removes some duplicated code from mysql_create_like_table()
and thus fixes bug #26869 "TABLE_LIST::table_name_length inconsistent with
TABLE_LIST::table_name".
Bug #23667 "CREATE TABLE LIKE is not isolated from alteration
by other connections"
Bug #18950 "CREATE TABLE LIKE does not obtain LOCK_open"
As well as:
Bug #25578 "CREATE TABLE LIKE does not require any privileges
on source table".
The first and the second bugs resulted in various errors and an incorrect
binary log order when one tried to concurrently execute a CREATE TABLE LIKE
statement and DDL statements on the source table, or DML/DDL statements on
its target table.
The problem was caused by incomplete protection/table-locking against
concurrent statements implemented in the mysql_create_like_table() routine.
We solve it by implementing such protection in a proper way (see the
comment in sql_table.cc for details).
The third bug allowed a user who didn't have any privileges on a table to
create its copy and therefore circumvent the privilege check for
SHOW CREATE TABLE.
This patch solves the problem by adding the missing privilege check.
Finally it also removes some duplicated code from mysql_create_like_table().
Note that, although tests covering concurrency-related aspects of CREATE TABLE
LIKE behaviour will only be introduced in 5.1, they were run manually for
this patch as well.
Problem: we may get syntactically incorrect queries in the binary log
if we use a string-valued user variable when executing a prepared statement
which contains a '... limit ?' clause, e.g.
prepare s from "select 1 limit ?";
set @a='qwe'; execute s using @a;
Fix: raise an error in such cases.
Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
with locked tables"
Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
temporary table exists"
A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
EXISTS ... SELECT statement under LOCK TABLES holding a
read lock on the target table.
An attempt to execute the same statement for an already existing
target table with triggers caused server crashes.
Also, concurrent execution of a CREATE TABLE ... SELECT statement
and other statements involving the target table suffered from
various races (some of which might have led to deadlocks).
Finally, an attempt to execute CREATE TABLE ... SELECT when
a temporary table with the same name was already present
led to the insertion of data into this temporary table and
the creation of an empty non-temporary table.
All of the above problems stemmed from the old implementation of CREATE
TABLE ... SELECT, in which we created, opened and locked the target
table in a separate step, without any special protection and not
together with the rest of the tables used by the statement.
This undermined the deadlock-avoidance approach used in the server
and created a window for races. It also excluded the target table
from prelocking, causing problems with trigger execution.
The patch solves these problems by implementing a new approach to
the handling of CREATE TABLE ... SELECT for base tables.
We try to open and lock the table to be created at the same time as
the rest of the tables used by the statement. If such a table does not
exist at this moment, we create and place in the table cache a special
placeholder for it, which prevents its creation or any other usage
by other threads.
We still use the old approach for the creation of temporary tables.
Note that we have a separate fix for 5.0, since there we use a slightly
different, less intrusive approach.
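For reference, a sketch of the simplest of these scenarios, the bug 20662
case (table names are illustrative):

  CREATE TABLE t1 (i INT);
  LOCK TABLES t1 READ;
  CREATE TABLE IF NOT EXISTS t1 SELECT 1 AS i;   -- used to hang in an
                                                 -- infinite loop

With the new approach the target table is opened and locked together with
the other tables of the statement, so the conflict with the read lock is
handled in the same way as for any other statement executed under
LOCK TABLES.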
Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
with locked tables"
Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
temporary table exists"
A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
EXISTS ... SELECT statement under LOCK TABLES holding a
read lock on the target table.
An attempt to execute the same statement for an already existing
target table with triggers caused server crashes.
Also, concurrent execution of a CREATE TABLE ... SELECT statement
and other statements involving the target table suffered from
various races (some of which might have led to deadlocks).
Finally, an attempt to execute CREATE TABLE ... SELECT when
a temporary table with the same name was already present
led to the insertion of data into this temporary table and
the creation of an empty non-temporary table.
All of the above problems stemmed from the old implementation of CREATE
TABLE ... SELECT, in which we created, opened and locked the target
table in a separate step, without any special protection and not
together with the rest of the tables used by the statement.
This undermined the deadlock-avoidance approach used in the server
and created a window for races. It also excluded the target table
from prelocking, causing problems with trigger execution.
The patch solves these problems by implementing a new approach to
the handling of CREATE TABLE ... SELECT for base tables.
We try to open and lock the table to be created at the same time as
the rest of the tables used by the statement. If such a table does not
exist at this moment, we create and place in the table cache a special
placeholder for it, which prevents its creation or any other usage
by other threads.
We still use the old approach for the creation of temporary tables.
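As an illustration of the temporary-table case described above (table
names are illustrative):

  CREATE TEMPORARY TABLE t1 (i INT);
  CREATE TABLE t1 SELECT 2 AS i;   -- data used to go into the temporary
                                   -- table, leaving the new base table empty

This is the inconsistency reported as bug 24508.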
Also note that we decided to postpone the introduction of some tests
for the concurrent behaviour of CREATE TABLE ... SELECT until 5.1.
The main reason for this is the absence in 5.0 of the ability to set the
@@debug variable at runtime, which can be circumvented only by using several
test files with individual .opt files. Since the latter is likely
to slow down the test suite unnecessarily, we chose not to push these tests
into 5.0, but to run them manually for this version and later push
their optimized version into 5.1.
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint
- Functions that used a pointer to a string length were changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed the type of the 'frmdata' argument in readfrm(), writefrm(),
ha_discover() and handler::discover() to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new casts required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string lengths
need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be done
explicitly, as this conflict was often hidden by casting the function to
hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Included zlib.h in some files as we needed the declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
(portability fix)
- Removed Windows-specific code to restore the cursor position, as this
causes a slowdown on Windows and we should not mix read() and pread()
calls anyway, as this is not thread-safe. Updated the function comment to
reflect this. Changed the function that depended on the original behavior of
my_pwrite() to restore the cursor position itself (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
Before this fix, the parser would sometimes change where a token starts by
altering Lex_input_stream::tok_start, which later confused the code in
sql_yacc.yy that needs to capture the source code of an SQL statement,
for example to represent the body of a stored procedure.
This line of code in sql_lex.cc :
case MY_LEX_USER_VARIABLE_DELIMITER:
lip->tok_start= lip->ptr; // Skip first `
would <skip the first back quote> ... and cause the bug reported.
In general, the responsibility of sql_lex.cc is to *find* where tokens are
in the SQL text, but is *not* to make up fake or incomplete tokens.
With a quoted label like `my_label`, the token starts on the first quote.
Extracting the token value should not change that (it did).
With this fix, the lexical analysis has been cleaned up to not change
lip->tok_start (in the case found for this bug).
The functions get_token() and get_quoted_token() now have an extra
parameter, used when some characters from the beginning of the token need
to be skipped when extracting a token value, like when extracting 'AB' from
'0xAB', for example, for a HEX_NUM token.
This exposed a bad assumption in Item_hex_string and Item_bin_string,
which has been fixed:
The assumption was that the string given, 'AB', was in fact preceded in
memory by '0x', which might be false (it can be preceded by "x'" and
followed by "'" -- or not be preceded by valid memory at all)
If a name is needed for Item_hex_string or Item_bin_string, the name is
taken from the original and true source code ('0xAB'), and assigned in
the select_item rule, instead of relying on assumptions related to how
memory is used.
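As a small illustration of the HEX_NUM case mentioned above:

  SELECT 0xAB;

The token starts at the '0'; get_token()/get_quoted_token() extract the
value 'AB' by skipping the leading characters via the new parameter; and the
item name ('0xAB') is taken from the real source text in the select_item
rule, not from memory assumed to precede the extracted value.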
The issue found with bug 25411 is due to the function skip_rear_comments(),
which damages the source code while implementing a workaround.
The root cause of the problem is in the lexical analyser, which does not
process special comments properly.
For special comments like :
[1] aaa /*!50000 bbb */ ccc
since 5.0 is a version older than the current code, the parser is inlining
the content of the special comment, so that the query to process is
[2] aaa bbb ccc
However, the text of the query captured when processing a stored procedure,
stored function or trigger (or event in 5.1) can, after rebuilding, be:
[3] aaa bbb */ ccc
which is wrong.
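A concrete instance of the abstract example above (the procedure name is
made up):

  CREATE PROCEDURE p1() SELECT 1 /*!50000 + 1 */ + 2;

The statement executes as SELECT 1 + 1 + 2 (form [2]), while the captured
body can come out as SELECT 1 + 1 */ + 2 (form [3]).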
To fix bug 25411 properly, the lexical analyser needs to return [2] when
inlining special comments.
In order to implement this, some preliminary cleanup is required in the code,
which is implemented by this patch.
Before this change, the structure named LEX (or st_lex) contained attributes
that belong to lexical analysis, as well as attributes that represent the
abstract syntax tree (AST) of a statement.
Creating a new LEX structure for each statement (which makes sense for the
AST part) also re-initialized the lexical analysis phase each time, which
is conceptually wrong.
With this patch, the previous st_lex structure has been split in two:
- st_lex represents the Abstract Syntax Tree for a statement. The name "lex"
has not been changed to avoid a bigger impact on the code base.
- class lex_input_stream represents the internal state of the lexical
analyser, which by definition should *not* be reinitialized when parsing
multiple statements from the same input stream.
This change is a prerequisite for bug 25411, since the implementation of
lex_input_stream will later be improved to deal properly with special
comments, and this processing cannot be done with the current implementation
of sp_head::reset_lex and sp_head::restore_lex, which interfere with the lexer.
This change set alone does not fix bug 25411.
This patch removes the SLAVESIDE_DISABLED token from the event status clause
and adds the ability to mark an event as status = SLAVESIDE_DISABLED by using
the syntax DISABLE ON SLAVE instead.
The patch also adds tests to rpl_events to check the new syntax.
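A brief example of the new syntax (event name, schedule and body are
illustrative):

  CREATE EVENT ev1
    ON SCHEDULE EVERY 1 DAY
    DISABLE ON SLAVE
    DO DELETE FROM db1.old_rows;

The same DISABLE ON SLAVE clause is accepted in ALTER EVENT, replacing the
removed SLAVESIDE_DISABLED keyword in the status clause.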