Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
with locked tables"
Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
temporary table exists"
A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
EXISTS ... SELECT statement under LOCK TABLES holding a
read lock on the target table.
An attempt to execute the same statement for an already existing
target table with triggers caused server crashes.
Concurrent execution of a CREATE TABLE ... SELECT statement
and other statements involving the target table also suffered from
various races (some of which might have led to deadlocks).
Finally, an attempt to execute CREATE TABLE ... SELECT when a
temporary table with the same name was already present led to
insertion of the data into this temporary table and creation of
an empty non-temporary table.
All of the above problems stemmed from the old implementation of
CREATE TABLE ... SELECT, in which we created, opened and locked the
target table in a separate step, without any special protection and
not together with the rest of the tables used by the statement.
This undermined the deadlock-avoidance approach used in the server
and created a window for races. It also excluded the target table
from prelocking, causing problems with trigger execution.
The patch solves these problems by implementing a new approach to
the handling of CREATE TABLE ... SELECT for base tables.
We try to open and lock the table to be created at the same time as
the rest of the tables used by the statement. If the table does not
exist at that moment, we create a special placeholder for it and put
it in the table cache; the placeholder prevents creation or any
other use of the table by other threads.
We still use the old approach for the creation of temporary tables.
Note that we have a separate fix for 5.0, since there we use a
slightly different, less intrusive approach.
Also note that we decided to postpone the introduction of some tests
for the concurrent behaviour of CREATE TABLE ... SELECT until 5.1.
The main reason for this is the absence in 5.0 of the ability to set
the @@debug variable at runtime, which can be worked around only by
using several test files with individual .opt files. Since the latter
is likely to slow down the test suite unnecessarily, we chose not to
push these tests into 5.0, but to run them manually for this version
and later push their optimized versions into 5.1.
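A minimal sketch of the placeholder idea, assuming a toy table cache
keyed by table name; all names here (cache_entry, reserve_table_name(),
and so on) are illustrative, not the actual server functions:

  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical, simplified table-cache entry: a real entry holds an
     open table, a placeholder merely reserves the name. */
  struct cache_entry {
    char name[64];
    int  is_placeholder;            /* 1 while CREATE ... SELECT runs */
    struct cache_entry *next;
  };

  static struct cache_entry *cache_head;
  static pthread_mutex_t cache_lock= PTHREAD_MUTEX_INITIALIZER;

  /* Atomically check for the table and reserve its name.  Returns 0 on
     success, -1 if the table (or a placeholder for it) already exists,
     in which case IF NOT EXISTS falls back to using the existing table
     instead of creating one. */
  static int reserve_table_name(struct cache_entry *ph, const char *name)
  {
    struct cache_entry *e;
    pthread_mutex_lock(&cache_lock);
    for (e= cache_head; e; e= e->next)
      if (strcmp(e->name, name) == 0)
      {
        pthread_mutex_unlock(&cache_lock);
        return -1;                  /* another thread owns the name */
      }
    snprintf(ph->name, sizeof(ph->name), "%s", name);
    ph->is_placeholder= 1;
    ph->next= cache_head;
    cache_head= ph;                 /* placeholder now blocks others */
    pthread_mutex_unlock(&cache_lock);
    return 0;
  }

  int main(void)
  {
    static struct cache_entry ph;
    if (reserve_table_name(&ph, "t1") == 0)
      puts("name reserved: no other thread can create or open t1");
    if (reserve_table_name(&ph, "t1") != 0)
      puts("second reservation fails: t1 is already taken");
    return 0;
  }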
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed the declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
  instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use
  size_t*.
- Changed malloc(), free() and related functions from using gptr to using
  void *, as this requires fewer casts in the code and is more in line with
  how the standard functions work. (See the sketch after this list.)
- Added an extra length argument to dirname_part() so it returns the length
  of the created string.
- Changed (at least) the following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
  argument of my_uncompress() from a pointer to a value, as we only return
  one value (this makes the function easier to use).
- Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm(), writefrm(), ha_discover() and handler::discover()
  the type of the 'frmdata' argument to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (this removed
  a lot of casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
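As an illustration of these signature changes (the function below is a
stand-in, not one of the actual mysys functions): a length parameter
widened from uint to size_t, and a void * result that needs no cast in C:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Stand-in: before, the length would have been uint; with size_t,
     64-bit lengths are no longer truncated. */
  static void *my_memdup_sketch(const void *from, size_t length)
  {
    void *to= malloc(length);       /* void *: no cast needed in C */
    if (to != NULL)
      memcpy(to, from, length);
    return to;                      /* caller must check for NULL  */
  }

  int main(void)
  {
    char *copy= my_memdup_sketch("hello", 6);  /* again, no cast   */
    if (copy != NULL)
      puts(copy);
    free(copy);
    return 0;
  }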
Other changes:
- Removed a lot of unneeded casts
- Added a few new casts required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string
  lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This needed to
  be done explicitly, as the conflict was often hidden by casting the
  function to hash_get_key.)
- Changed the types of some buffers/memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files, as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables that hold the result of my_read() / my_write() to
  be size_t. This was needed to properly detect errors (which are returned
  as (size_t) -1); see the sketch after this list.
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
  (portability fix)
- Removed Windows-specific code to restore the cursor position, as this
  caused a slowdown on Windows, and we should not mix read() and pread()
  calls anyway, as that is not thread safe. Updated the function comment to
  reflect this. Changed the one function that depended on the original
  behavior of my_pwrite() to restore the cursor position itself.
- Added some missing checks of the return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed the type of table_def::m_size from my_size_t to ulong to reflect
  that m_size is the number of elements in the array, not a string/memory
  length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
  Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
  - Replaced some calls to alloc_root() + memcpy() with
    strmake_root()/strdup_root().
  - Changed some calls from memdup() to strmake() (safety fix)
  - Simpler loops in client-simple.c
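The error-checking pattern behind the MY_FILE_ERROR items above, sketched
with a stand-in for my_read() (MY_FILE_ERROR is defined as in the change
above; everything else here is illustrative):

  #include <stdio.h>
  #include <unistd.h>

  #define MY_FILE_ERROR ((size_t) -1)

  /* Stand-in for my_read(): returns bytes read, or (size_t) -1. */
  static size_t my_read_sketch(int fd, unsigned char *buf, size_t count)
  {
    ssize_t nread= read(fd, buf, count);
    return nread < 0 ? MY_FILE_ERROR : (size_t) nread;
  }

  int main(void)
  {
    unsigned char buf[128];
    size_t len= my_read_sketch(0, buf, sizeof(buf));
    /* The result variable must be size_t: on a 64-bit platform a uint
       holding the truncated error value would never compare equal to
       (size_t) -1, so the error would go undetected. */
    if (len == MY_FILE_ERROR)
      perror("read");
    return 0;
  }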
The issue found with bug 25411 is due to the function skip_rear_comments(),
which damages the source code while implementing a workaround.
The root cause of the problem is in the lexical analyser, which does not
process special comments properly.
For special comments like:
[1] aaa /*!50000 bbb */ ccc
since 5.0 is a version older than the current code, the parser inlines the
content of the special comment, so that the query to process becomes:
[2] aaa bbb ccc
However, the text of the query captured when processing a stored procedure,
stored function or trigger (or event in 5.1) can, after rebuilding, be:
[3] aaa bbb */ ccc
which is wrong.
To fix bug 25411 properly, the lexical analyser needs to return [2] when
inlining special comments.
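A sketch of the intended inlining, reduced to a plain string rewrite (the
real fix belongs in the lexer; inline_version_comments() is a hypothetical
name):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Rewrite [1] into [2]: when the version in a special comment is at
     or below the server version, keep the comment body but drop BOTH
     comment markers, so the closing marker can never leak into the
     rebuilt query text as in [3]. */
  static void inline_version_comments(const char *in, char *out,
                                      unsigned long server_version)
  {
    while (*in)
    {
      if (in[0] == '/' && in[1] == '*' && in[2] == '!')
      {
        char *end;
        unsigned long ver= strtoul(in + 3, &end, 10);
        const char *close= strstr(end, "*/");
        if (close == NULL)
          break;                        /* unterminated: give up    */
        if (ver <= server_version)      /* inline the body ("bbb")  */
          while (end < close)
            *out++= *end++;
        in= close + 2;                  /* skip the closing marker  */
      }
      else
        *out++= *in++;
    }
    *out= '\0';
  }

  int main(void)
  {
    char buf[64];
    inline_version_comments("aaa /*!50000 bbb */ ccc", buf, 50045);
    puts(buf);                  /* "aaa  bbb  ccc", no stray marker */
    return 0;
  }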
In order to implement this, some preliminary cleanup is required in the
code, which is what this patch implements.
Before this change, the structure named LEX (or st_lex) contained attributes
that belong to lexical analysis, as well as attributes that represent the
abstract syntax tree (AST) of a statement.
Creating a new LEX structure for each statement (which makes sense for the
AST part) also re-initialized the lexical analysis phase each time, which
is conceptually wrong.
With this patch, the previous st_lex structure has been split in two:
- st_lex represents the Abstract Syntax Tree of a statement. The name "lex"
  has not been changed, to avoid a bigger impact on the code base.
- class lex_input_stream represents the internal state of the lexical
  analyser, which by definition should *not* be reinitialized when parsing
  multiple statements from the same input stream.
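The shape of the split, sketched with hypothetical member names (the real
definitions live in sql_lex.h):

  /* Per-statement state: rebuilt for every statement parsed. */
  struct st_lex_sketch {
    int sql_command;                /* what kind of statement this is */
    /* ... the rest of the abstract syntax tree ... */
  };

  /* Per-input-stream state: survives across all statements of one
     input buffer, so re-creating st_lex no longer resets the lexer's
     read position. */
  struct lex_input_stream_sketch {
    const char *buf;                /* start of the input              */
    const char *ptr;                /* next character to scan          */
    const char *end_of_query;       /* one past the last character     */
  };

  int main(void)
  {
    static const char query[]= "SELECT 1; SELECT 2";
    struct lex_input_stream_sketch lip= { query, query,
                                          query + sizeof(query) - 1 };
    struct st_lex_sketch lex_per_statement;  /* one of these per      */
    (void) lip; (void) lex_per_statement;    /* statement; lip lives  */
    return 0;                                /* across both           */
  }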
This change is a prerequisite for bug 25411, since the implementation of
lex_input_stream will later be improved to deal properly with special
comments, and this processing cannot be done with the current implementation
of sp_head::reset_lex and sp_head::restore_lex, which interfere with the
lexer.
This changeset alone does not fix bug 25411.
Add "two liner" to mysqld --bootstrap that allows
wo write scripts that can be run both by mysql and mysqld --bootstrap
Remove duplicate create of MySQL system tables
Print information if net_clear() skipped bytes (As this otherwise hides critical timeing bugs)
Added DBUG_ASSERT if we get packets out of order
mysql_change_user() could on error send multiple packets, which caused mysql_client_test to randomly fail
Make the Event Scheduler behave gracefully when there are no up-to-date
system tables to support it:
- initialize the scheduler before reporting "Ready for connections".
This ensures that warnings, if any, are printed before "Ready for
connections", and this message is not mangled.
- do not abort the scheduler if there are no system tables
- check the tables once at startup, remember the status and disable
the scheduler if the tables are not up to date.
If one attempts to use the scheduler with bad tables,
issue an error message.
- clean up the behaviour of the module under LOCK TABLES and pre-locking
mode
- make sure implicit commit of Events DDL works as expected.
- add more tests
Collateral cleanups in the events code.
This patch fixes Bug#23631 Events: SHOW VARIABLES doesn't work
when mysql.event is damaged
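A sketch of the check-once-and-remember logic, with hypothetical names
(events_init(), check_event_tables()); the real code lives in the events
module:

  #include <stdio.h>

  enum table_state { TABLES_UNCHECKED, TABLES_OK, TABLES_BAD };
  static enum table_state event_tables= TABLES_UNCHECKED;

  /* Assumed helper: returns 0 if mysql.event exists and has the
     expected definition.  Stubbed out for the sketch. */
  static int check_event_tables(void) { return 0; }

  /* Called once during startup, before "Ready for connections", so
     any warning is printed first and the message is not mangled. */
  static void events_init(void)
  {
    event_tables= (check_event_tables() == 0) ? TABLES_OK : TABLES_BAD;
    if (event_tables == TABLES_BAD)
      fprintf(stderr, "Event Scheduler: disabled (bad system tables)\n");
  }

  /* Later attempts to use the scheduler consult the remembered status
     and raise an error instead of re-checking on every call. */
  static int events_usable(void) { return event_tables == TABLES_OK; }

  int main(void)
  {
    events_init();
    printf("scheduler usable: %d\n", events_usable());
    return 0;
  }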
The problem was that the THD::db_access variable was not restored after
a database switch in the stored-routine-execution code.
The fix is to restore THD::db_access in this case.
Unfortunately, this fix requires additional changes, because in
prepare_schema_table(), called at the parsing stage, we checked privileges.
That was wrong according to our design, but this flaw hadn't struck so far
because it was masked. All privilege checks must be done at the execution
stage in order to be compatible with prepared statements and stored
routines. So, this patch also contains a fix for prepare_schema_table(),
which moves the checks to the execution phase.
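The save-and-restore pattern of the fix, reduced to a sketch (THD and
db_access are the real names; the surrounding types and helper below are
illustrative):

  /* Minimal stand-in for the server's THD. */
  struct THD_sketch {
    const char *db;                 /* current database               */
    unsigned long db_access;        /* cached privileges for that db  */
  };

  /* Switch to the routine's database for the duration of the call,
     then restore BOTH the database and the cached privilege set:
     restoring only thd->db (and not db_access) was the bug. */
  static void call_routine_in_db(struct THD_sketch *thd,
                                 const char *routine_db,
                                 unsigned long routine_db_access,
                                 void (*body)(struct THD_sketch *))
  {
    const char *saved_db= thd->db;
    unsigned long saved_access= thd->db_access;  /* the missing save */
    thd->db= routine_db;
    thd->db_access= routine_db_access;
    body(thd);
    thd->db= saved_db;
    thd->db_access= saved_access;             /* the missing restore */
  }

  static void routine_body(struct THD_sketch *thd) { (void) thd; }

  int main(void)
  {
    struct THD_sketch thd= { "test", 0xff };
    call_routine_in_db(&thd, "other_db", 0x01, routine_body);
    return thd.db_access == 0xff ? 0 : 1;      /* restored correctly */
  }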
Adding an event that can be used to denote that an incident occurred
on the master. The event can be used to denote a gap in the replication
stream, but it can also be used to denote other incidents.
In addition, the injector interface is extended with a function to
generate an incident event. The function also rotates the binary
log after generating the incident event, to get a fresh binary log.
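A sketch of the write-incident-then-rotate sequence; the function names
below are illustrative stand-ins for the injector interface, and
INCIDENT_LOST_EVENTS is an assumed incident code:

  #include <stdio.h>

  enum incident_type { INCIDENT_NONE, INCIDENT_LOST_EVENTS };

  /* Assumed primitives, stubbed out for the sketch. */
  static void binlog_write_incident(enum incident_type t)
  { printf("wrote incident event %d to the binary log\n", (int) t); }
  static void binlog_rotate(void)
  { printf("rotated to a fresh binary log\n"); }

  /* Record that something happened on the master that may have left a
     gap in the replication stream, then rotate so that everything
     after the incident lands in a fresh binary log. */
  static void inject_incident(enum incident_type t)
  {
    binlog_write_incident(t);
    binlog_rotate();
  }

  int main(void)
  {
    inject_incident(INCIDENT_LOST_EVENTS);
    return 0;
  }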