Make the callers of the Query_log_event and Execute_load_log_event
constructors and of THD::binlog_query provide the error code
instead of having the constructors figure it out.
The error happens because sp_head::MULTI_RESULTS is not set for a stored
program that contains a SHOW TABLE STATUS command.
The fix is to add a SQLCOM_SHOW_TABLE_STATUS case to the
sp_get_flags_for_command() function.
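A minimal sketch of the failing pattern (names are illustrative):

    CREATE PROCEDURE p1()
      SHOW TABLE STATUS LIKE 't1';
    CALL p1();  -- failed before the fix: the procedure was not marked
                -- as returning a result set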
When the thread executing a DDL was killed after it had finished
executing but before writing the binlog event, the error code in
the binlog event could wrongly be set to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixes the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also includes the following changes in order to
provide the test case.
1) modified mysqltest to support variables in the connect command
2) modified mysql-test-run.pl to add a new variable, MYSQL_SLAVE, that
runs the mysql client against the slave mysqld.
due to name_const substitution
Problem:
"In general, statements executed within a stored procedure
are written to the binary log using the same rules that
would apply were the statements to be executed in standalone
fashion. Some special care is taken when logging procedure
statements because statement execution within procedures
is not quite the same as in non-procedure context".
For example, each reference to a local variable in SP's
statements is replaced by NAME_CONST(var_name, var_value).
Queries like
"CREATE TABLE ... SELECT FUNC(local_var ..."
are logged as
"CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ..."
which leads to different column names and
may result in an "Incorrect column name" error if var_value is long enough.
Fix: in 5.x we'll issue a warning in such a case.
In 6.0 we should get rid of NAME_CONST().
Note: this issue and change should be described in the documentation
("Binary Logging of Stored Programs").
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
from stored procedure.
Problem: we replace all references to local variables in stored procedures
with NAME_CONST(name, value) when logging to the binary log. However, if the
value's collation differs we might get an 'illegal mix of collations'
error, as we don't pass the collation to the function.
Fix: pass the value's collation to NAME_CONST().
Note: we should actually pass the value's derivation to NAME_CONST() as
well, but that is impossible without modifying the parser. For now we always
set the derivation to DERIVATION_IMPLICIT, the same as local variables have.
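A hedged sketch of the logged substitution after the fix (variable name,
value and collation are illustrative):

    SELECT NAME_CONST('v', _koi8r'...' COLLATE 'koi8r_general_ci');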
The JOIN for the subselect wasn't cleaned up if an error occurred during
sub_select() execution, which led to an assertion failure in
close_thread_tables().
part of the 6.0 code backported
per-file comments:
mysql-test/r/sp-error.result
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test result
mysql-test/t/sp-error.test
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test case
sql/sp_head.cc
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
lex->unit.cleanup() call added if not substatement
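A minimal sketch of the crashing call (table name illustrative):

    CREATE PROCEDURE p1(arg INT) SELECT arg;
    CALL p1((SELECT a FROM t1));  -- crashed when the subquery returned more
                                  -- than one row; now fails with
                                  -- "Subquery returns more than 1 row"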
This fix is for 5.0 only: backporting the 6.0 patch manually.
The parser code in sql/sql_yacc.yy needs to be more robust to out of
memory conditions, so that when parsing a query fails due to OOM,
the thread gracefully returns an error.
Before this fix, a new/alloc returning NULL could:
- cause a crash, if dereferencing the NULL pointer,
- produce a corrupted parsed tree, containing NULL nodes,
- alter the semantics of a query, by silently dropping token values or nodes
With this fix:
- C++ constructors are *not* executed with a NULL "this" pointer
when operator new fails.
This is achieved by declaring "operator new" with a "throw ()" clause,
so that a failed new gracefully returns NULL on OOM conditions.
- calls to new/alloc are tested for a NULL result,
- The thread diagnostic area is set to an error status when OOM occurs.
This ensures that a request failing in the server properly returns an
ER_OUT_OF_RESOURCES error to the client.
- OOM conditions cause the parser to stop immediately (MYSQL_YYABORT).
This prevents causing further crashes when using a partially built parsed
tree in further rules in the parser.
No test scripts are provided, since automating OOM failures is not
instrumented in the server.
Tested under the debugger, to verify that an error in alloc_root causes the
thread to return gracefully all the way to the client application, with
an ER_OUT_OF_RESOURCES error.
build)
The crash was caused by freeing the internal parser stack during the parser
execution.
This occurred only for complex stored procedures, after reallocating the
parser stack using my_yyoverflow(), with the following C call stack:
- MYSQLparse()
- any rule calling sp_head::restore_lex()
- lex_end()
- x_free(lex->yacc_yyss), xfree(lex->yacc_yyvs)
The root cause is the implementation of stored procedures, which breaks the
assumption from 4.1 that there is only one LEX structure per parser call.
The solution is to separate the LEX structure into:
- attributes that represent a statement (the current LEX structure),
- attributes that relate to the syntax parser itself (Yacc_state),
so that parsing multiple statements in stored programs can create multiple
LEX structures while not changing the unique Yacc_state.
Now, Yacc_state and the existing Lex_input_stream are aggregated into
Parser_state, a structure that represents the complete state of the
(lexical + syntax) parser.
slave
The stored-routine code took the contents of the (lowest) parser
and copied it directly to the binlog, which caused problems if there
was a special case of interpretation at the parser level -- which
there is, in the "/*!VER */" comments. The trailing "*/" caused
errors on the slave, naturally.
Now, since by that point we have a /properly/ created parse tree (as
the rest of the server should do!) for the stored-routine CREATE, we
can construct a correct statement from that information, instead of
writing uncertain information from an unknown parser state.
Fortunately, there's already a function nearby that does exactly
that.
---
Update for Bug#36570. Qualify routine names with db name when
writing to the binlog ONLY if the source text is qualified.
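A hedged sketch of the kind of body that exposed the problem:

    CREATE PROCEDURE p1()
    BEGIN
      SELECT 1 /*!50000 + 1 */;
    END
    -- the body text previously copied to the binlog kept a stray trailing */
    -- after the comment was inlined, breaking the slave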
Add metadata validation to ~20 more SQL commands. Make sure that
these commands actually work in ps-protocol, since until now they
were enabled, but not carefully tested.
Fixes the ml003 bug found by Matthias during internal testing of the
patch.
between 5.0 and 5.1.
The problem was that in the patch for Bug#11986 it was decided
to store the original query in UTF8 encoding for the INFORMATION_SCHEMA.
This approach however turned out to be quite difficult to implement
properly. The main problem is to preserve the same IS-output after
dump/restore.
So, the fix is to roll back to the previous functionality, but also
to fix it to support multi-character-set queries properly. The idea
is to generate INFORMATION_SCHEMA-query from the item-tree after
parsing view declaration. The IS-query should:
- be completely in UTF8;
- not contain character set introducers.
For more information, see WL4052.
a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the current
statement at the end of dispatch_command(). In order to not issue
redundant disc syncs, an optimization of the two-phase commit
protocol is implemented to bypass the two phase commit if
the transaction is read-only.
pre-locking.
The crash was caused by an implicit assumption in check_table_access() that
the table_list parameter is always a part of lex->query_tables.
When iterating over the passed list of tables, check_table_access() used
to stop only when lex->query_tables_own_last was reached.
In case of pre-locking, lex->query_tables_own_last is not NULL and points
into lex->query_tables. When the parameter
of check_table_access() was not part of lex->query_tables, the loop's end
condition could never be met, and a crash would happen when the current
table pointer moved beyond the end of the provided list.
The fix is to change the signature of check_table_access() to also accept
a numeric limit of loop iterations, similarly to check_grant(), and
supply this limit in all places when we want to check access of tables
that are outside lex->query_tables, or just want to check access to one table.
Bug 33983 (Stored Procedures: wrong end <label> syntax is accepted)
The server used to crash when REPEAT or another control instruction
was used in conjunction with labels and a LEAVE instruction.
The crash was caused by a missing "pop" of handlers or cursors in the
code representing the stored program. When executing the code in a loop,
this missing "pop" would result in a stack overflow, corrupting memory.
Code generation has been fixed to produce the missing h_pop/c_pop
instructions.
Also, the logic checking that labels at the beginning and the end of a
statement are matched was incorrect, causing Bug 33983.
End labels, when used, must match the label used at the beginning of a block.
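A minimal legal example of the label-matching rule now enforced:

    CREATE PROCEDURE p1()
    BEGIN
      lbl: REPEAT
        LEAVE lbl;
      UNTIL TRUE END REPEAT lbl;  -- the end label must match the beginning label
    END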
cause ROLLBACK of statement", part 1. Review fixes.
Do not send OK/EOF packets to the client until we reached the end of
the current statement.
This is a consolidation, to keep the functionality that is shared by all
SQL statements in one place in the server.
Currently this functionality includes:
- close_thread_tables()
- log_slow_statement().
After this patch and the subsequent patch for Bug#12713, it shall also include:
- ha_autocommit_or_rollback()
- net_end_statement()
- query_cache_end_of_result().
In future it may also include:
- mysql_reset_thd_for_next_command().
When the server was out of memory it crashed because of invalid memory access.
This patch adds detection of failed memory allocations and makes the server
output a proper error message.
If a stored function that contains a DROP TEMPORARY TABLE statement
is invoked by a CREATE TEMPORARY TABLE statement for a table of the same
name, the server may crash. The problem is that when dropping a table no
check is done to ensure that the table is not being used by some outer
query (or outer statement), potentially leaving the outer query with a
reference to a stale (freed) table.
The solution is, when dropping a temporary table, to always check whether
the table is being used by some outer statement, since a temporary
table can be dropped inside stored procedures.
The check is performed by looking at the TABLE::query_id value for
temporary tables. To simplify this check and to solve a bug related
to handling of temporary tables in prelocked mode, this patch changes
the way in which this member is used to track whether a table is
used or unused. Now we ensure that TABLE::query_id is zero for unused
temporary tables (which means that all temporary tables which were
used by a statement are marked as free for reuse after its
execution has completed).
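A minimal sketch of the crashing pattern (names illustrative):

    CREATE FUNCTION f1() RETURNS INT
    BEGIN
      DROP TEMPORARY TABLE IF EXISTS t1;
      RETURN 1;
    END;
    CREATE TEMPORARY TABLE t1 SELECT f1();  -- formerly could crash; dropping a
                                            -- table used by the outer statement
                                            -- is now rejected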
The value of an actual argument of BIT type passed to a stored procedure
was binlogged as a non-escaped sequence of bytes corresponding to the
internal representation of the bit value.
The patch enforces binlogging of the bit argument as a valid literal:
prefixing the quoted byte sequence with _binary.
Note that the behaviour of Item_field::val_str for a field_type() of
MYSQL_TYPE_BIT is exceptional in that the returned string contains the
binary representation even though the result_type() of the item is
INT_RESULT.
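An illustrative sketch of why escaping matters (table and byte values
illustrative):

    -- a BIT(8) argument whose value is 0x27 (the quote character) is now
    -- logged as a quoted, escaped byte sequence with the _binary introducer:
    INSERT INTO t1 VALUES (_binary'\'');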
The general log write function (general_log_print) uses printf-style
arguments which need to be pre-processed, meaning that all arguments
are copied to a single buffer. The problem is that the buffer size is
constant (1022 characters) but queries can be much larger than this.
The solution is to introduce a new log write function that accepts a
buffer and its length as arguments. The function is to be used when
formatted output is not required, which is the case for almost all
query write-to-log calls.
This is an incompatible change with respect to the log format of prepared
statements.
Bug#29816 Syntactically wrong query fails with misleading error message
The core problem is that an SQL-invoked function name can be a <schema
qualified routine name> that contains no <schema name>, but the mysql
parser insists that all stored routines (functions, procedures and
triggers) must have a <schema name>, which is not true for functions.
This problem is especially visible when trying to create a function,
or when a query contains a syntax error after a function call (in the
same query): both fail with a "No database selected" message if
the session is not attached to a particular schema, but the first
should succeed and the second fail with a "syntax error" message.
Part of the fix is to revamp the sp name handling so that a schema
name may be omitted for functions -- this means that the internal
function name representation may not have a dot, which represents
that the function doesn't have a schema name. The other part is
to place schema checks after the type (function, trigger or procedure)
of the routine is known.
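An illustrative failure, with no default database selected (myfunc is
hypothetical):

    SELECT myfunc() FRO x;  -- before: "No database selected"; after: syntax error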
of statement breaks binlog.
There were two problems discovered by this bug:
1. The default (current) database is not fixed at PREPARE time,
which leads to wrong output of the DATABASE() function.
2. Database attributes (@@collation_database) are not fixed at
PREPARE time, which leads to a wrong result set.
The binlog breakage and the wrong Query Cache output happened because of
the first problem.
The fix is to remember the current database at PREPARE time and
to restore it on each EXECUTE.
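An illustrative session (database names hypothetical):

    USE db1;
    PREPARE s FROM 'SELECT DATABASE()';
    USE db2;
    EXECUTE s;  -- now returns 'db1', the database current at PREPARE time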
Bug#21422 GRANT/REVOKE possible inside stored function, probably in a trigger
Bug#17244 GRANT gives strange error message when used in a stored function
GRANT/REVOKE statements are non-transactional (no explicit transaction
boundaries) in nature and hence are forbidden inside stored functions and
triggers, but they weren't being effectively forbidden. Furthermore, the
absence of implicit commits meant that changes made by GRANT/REVOKE
statements were not rolled back.
The implemented fix is to issue an implicit commit with every GRANT/REVOKE
statement, effectively prohibiting these statements in stored functions
and triggers. The implicit commit also fixes the replication bug, and is
consistent with the behavior of DDL and administrative statements.
Since this is an incompatible change, the following sentence should be
added to the Manual at the very end of the 3rd paragraph of subclause
13.4.3 "Statements That Cause an Implicit Commit": "Beginning with
MySQL 5.0.??, the GRANT and REVOKE statements cause an implicit commit."
Patch contributed by Vladimir Shebordaev
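An illustrative consequence of the implicit commit (names hypothetical):

    START TRANSACTION;
    INSERT INTO t1 VALUES (1);
    GRANT SELECT ON db1.* TO 'u'@'localhost';  -- implicit commit happens here
    ROLLBACK;  -- too late: the INSERT was already committed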
Faster thr_alarm()
Added 'Opened_files' status variable to track calls to my_open()
Don't give warnings when running mysql_install_db
Added option --source-install to mysql_install_db
I had to do the following renames as the polymorphism used didn't work with the Forte compiler on 64-bit systems:
index_read() -> index_read_map()
index_read_idx() -> index_read_idx_map()
index_read_last() -> index_read_last_map()
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
Once set, the flag might later get reset inside a stored routine
execution stack.
The reason was that there was no check whether a new statement had started
at the time of resetting.
The artifact affects most binloggable DML queries. Note that multi-update
is covered by the fix for bug@27716, and multi-delete by bug@29136.
Fixed by saving the parent statement's flag of whether the statement
modified a non-transactional table, and merging that value with the one
gained in mysql_execute_command.
Moved the thd->no_trans_update members into thd->transaction.`member`;
added asserting code.
Effectively, the following properties now hold.
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
reflects whether such a table was modified by the substatement.
This also respects THD::really_abort_on_warning() requirements.
2. thd->transaction.all.modified_non_trans_table is eventually computed as
the union of the values of all invoked sub-statements.
That fixes this bug#27417.
Computing thd->transaction.all.modified_non_trans_table is refined to be
based on the stmt value in all cases, including INSERT ... SELECT, which
before the patch had an extra issue (bug@28960).
Minor issues are covered in mysql_load, mysql_delete, and the binlogging
of INSERT ... SELECT into a temporary table.
The supplied test verifies in a limited way, mostly via asserts. The
ultimate testing is deferred to bug@13270 and bug@23333.
SP with local variables with non-ASCII names crashed the server.
The server replaces SP local variable names with NAME_CONST calls
when putting statements into the binary log. It used UTF8-encoded
item names as variable names for the replacement inside NAME_CONST
calls. However, the statement string may be encoded in any
known character set, as set by the SET NAMES statement.
The server used the byte length of the UTF8-encoded names to increment
the position in the query string, which led to an array index overrun.
The subst_spvars function is used to create query string with SP variables
substituted with their values. This string is used later for the binary log
and for the query cache. The problem is that the
query_cache_send_result_to_client function requires some additional space
after the query to store the database name and query cache flags. This
space wasn't reserved by the subst_spvars function, which led to memory
corruption and a crash.
Now the subst_spvars function reserves additional space for the query cache.
causes full table lock on InnoDB table.
Also fixes Bug#28502 Triggers that update another InnoDB table
will block on X lock unnecessarily (duplicate).
Code review fixes.
Both bugs' synopses are misleading: InnoDB table is
not X locked. The statements, however, cannot proceed concurrently,
but this happens due to lock conflicts for tables used in triggers,
not for the InnoDB table.
If a user had an InnoDB table, and two triggers, AFTER UPDATE and
AFTER INSERT, competing for different resources (e.g. two distinct
MyISAM tables), then these two triggers would not be able to execute
concurrently. Moreover, INSERTS/UPDATES of the InnoDB table would
not be able to run concurrently.
The problem had other side-effects (see respective bug reports).
This behavior was a consequence of a shortcoming of the pre-locking
algorithm, which would not distinguish between different DML operations
(e.g. INSERT and DELETE) and pre-lock all the tables
that are used by any trigger defined on the subject table.
The idea of the fix is to extend the pre-locking algorithm to keep track,
for each table, of which DML operation it is used for, and to not
load triggers that can never be fired.
SHOW CREATE TABLE or SELECT FROM I_S.
This is the last patch for this bug, which depends on the big
CS patch and was pending.
The problem was that SHOW CREATE statements returned original
queries in the binary character set. That could cause the query
to be unreadable.
The fix is to use the original character_set_client when sending
the original query to the client. In order to preserve the query
in mysqldump, 'binary' character set results should be set when
issuing a SHOW CREATE statement. If either the source or destination
character set is 'binary', no conversion is performed.
The idea is that since the source character set is no longer
'binary', we fix the destination character set to still produce
valid dumps.
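A sketch of what mysqldump now does around SHOW CREATE (procedure name
illustrative):

    SET character_set_results = binary;
    SHOW CREATE PROCEDURE p1;  -- the body is returned without any conversion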
- BUG#11986: Stored routines and triggers can fail if the code
has a non-ascii symbol
- BUG#16291: mysqldump corrupts string-constants with non-ascii-chars
- BUG#19443: INFORMATION_SCHEMA does not support charsets properly
- BUG#21249: Character set of SP-var can be ignored
- BUG#25212: Character set of string constant is ignored (stored routines)
- BUG#25221: Character set of string constant is ignored (triggers)
There were a few general problems that caused these bugs:
1. Character set information of the original (definition) query for views,
triggers, stored routines and events was lost.
2. mysqldump output queries in the client character set, which can be
inappropriate for encoding the definition query.
3. INFORMATION_SCHEMA used strings with mixed encodings to display object
definitions.
1. No query-definition character set.
In order to compile a query into execution code, some extra data (such as
environment variables or the database character set) is used. The problem
here was that this context was not preserved, so on the next load it could
differ from the original one, and thus the result would be different.
The context contains the following data:
- client character set;
- connection collation (character set and collation);
- collation of the owner database;
The fix is to store this context and use it each time we parse (compile)
and execute the object (stored routine, trigger, ...).
2. Wrong mysqldump-output.
The original query can contain several encodings (by means of character set
introducers). The problem here was that we tried to convert the original
query to the mysqldump client character set.
Moreover, we stored queries in different character sets for different
objects (views used UTF8, triggers used the original character set).
The solution is
- to store definition queries in the original character set;
- to change SHOW CREATE statement to output definition query in the
binary character set (i.e. without any conversion);
- introduce SHOW CREATE TRIGGER statement;
- to dump special statements to switch the context to the original one
before dumping and restore it afterwards.
Note, in order to preserve the database collation at creation time, an
additional ALTER DATABASE might be used (to temporarily switch the database
collation back to the original value). In this case, the ALTER DATABASE
privilege will be required. This is a backward-incompatible change.
3. INFORMATION_SCHEMA showed non-UTF8 strings
The fix is to generate a UTF8 query during parsing, store it in the object,
and show it in the INFORMATION_SCHEMA.
Basically, the idea is to create a copy of the original query and convert
it to UTF8. Character set introducers are removed and all text literals are
converted to UTF8.
This UTF8 query is intended to provide user-readable output. It must not be
used to recreate the object. Specialized SHOW CREATE statements should be
used for this.
The reason for this limitation is the following: the original query can
contain symbols from several character sets (by means of character set
introducers).
Example:
- original query:
CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1;
- UTF8 query (for INFORMATION_SCHEMA):
CREATE VIEW v1 AS SELECT 'Hello' AS c1;
The patch contains the following changes:
- Introduce auxiliary functions for convenient work with character sets:
- resolve_charset();
- resolve_collation();
- get_default_db_collation();
- Introduce lex_string_set();
- Refactor Table_triggers_list::process_triggers() &
sp_head::execute_trigger() to be consistent with other code;
- Move reusable code from add_table_for_trigger() into
build_trn_path(), check_trn_exists() and load_table_name_for_trigger()
to be used in the following patch.
- Rename triggers_file_ext and trigname_file_ext into TRN_EXT and
TRG_EXT respectively.
Bug 28127 (Some valid identifiers names are not parsed correctly)
Bug 26302 (MySQL server cuts off trailing "*/" from comments in SP/func)
This patch is the second part of a major cleanup, required to fix
Bug 25411 (trigger code truncated).
The root cause of the issue stems from the function skip_rear_comments,
which was a work around to remove "extra" "*/" characters from the query
text, when parsing a query and reusing the text fragments to represent a
view, trigger, function or stored procedure.
The reason for this work around is that "special comments",
like /*!50002 XXX */, were not parsed properly, so that a query like:
AAA /*!50002 BBB */ CCC
would be seen by the parser as "AAA BBB */ CCC" when the current version
is greater than or equal to 5.0.2.
The root cause of this stems from how special comments are parsed.
Special comments are really out-of-band text that appears inside a query
and affects how the parser behaves.
In nature, /*!50002 XXX */ in MySQL is similar to the C concept
of preprocessing:
#if VERSION >= 50002
XXX
#endif
Depending on the current VERSION of the server, the special comment
should either be expanded or be ignored, but in all cases the "text" of
the query should be rewritten to strip the "/*!50002" and "*/" markers,
which do not belong to the SQL language itself.
Prior to this fix, these markers would leak into :
- the storage format for VIEW,
- the storage format for FUNCTION,
- the storage format for FUNCTION parameters, in mysql.proc (param_list),
- the storage format for PROCEDURE,
- the storage format for PROCEDURE parameters, in mysql.proc (param_list),
- the storage format for TRIGGER,
- the binary log used for replication.
In all cases, not only does this cause format corruption, it also provides
a vector for dormant security issues, by allowing code to be tunneled in
that will be activated after an upgrade.
The proper solution is to deal with special comments strictly during parsing,
when accepting a query from the outside world.
Once a query is parsed and an object is created with a persistent
representation, this object should not arbitrarily mutate after an upgrade.
In short, special comments are a useful but limited feature for mysqldump,
when used at an *interface* level to facilitate import/export,
but bloating the server *internal* storage format is *not* the proper way
to deal with configuration management of the user logic.
With this fix:
- the Lex_input_stream class now acts as a comment pre-processor,
and either expands or ignores special comments on the fly.
- MYSQLlex and sql_yacc.yy have been cleaned up to strictly use the
public interface of Lex_input_stream. In particular, how the input stream
accepts or rejects a character is private to Lex_input_stream, and the
internal buffer pointers of that class are strictly private, and should not
be tampered with during parsing.
This caused many changes mostly in sql_lex.cc.
During the code cleanup in case MY_LEX_NUMBER_IDENT,
Bug 28127 (Some valid identifiers names are not parsed correctly)
was found and fixed.
By parsing special comments properly, and removing the function
'skip_rear_comments' [sic],
Bug 26302 (MySQL server cuts off trailing "*/" from comments in SP/func)
has been fixed as well.
Coding style: classes start with a capital letter.
Rename some classes related to parsing:
create_field -> Create_field
foreign_key -> Foreign_key
key_part_spec -> Key_part_spec
The root cause of this bug is related to the function skip_rear_comments,
in sql_lex.cc
Recent code changes in skip_rear_comments changed the prototype from
"const uchar*" to "const char*", which had an unforeseen impact on this test:
(endp[-1] < ' ')
With unsigned characters, this code filters bytes of value [0x00 - 0x20].
With *signed* characters, this also filters bytes of value [0x80 - 0xFF].
This caused the regression reported, considering cyrillic characters in the
parameter name to be whitespace, and truncating them.
Note that the regression is present both in 5.0 and 5.1.
With this fix:
- [0x80 - 0xFF] bytes are no longer considered whitespace.
This alone fixes the regression.
In addition, filtering [0x00 - 0x20] was found bogus and abusive,
so the code now uses my_isspace() when looking for whitespace.
Note that this fix is only addressing the regression affecting UTF-8
in general, but does not address a more fundamental problem with
skip_rear_comments: parsing a string *backwards*, starting at end[-1],
is not safe with multi-byte characters, since end[-1] can confuse the
last byte of a multi-byte character with a character to filter out.
The only known impact of this remaining issue affects objects that have to
meet all the conditions below:
- the object is a FUNCTION / PROCEDURE / TRIGGER / EVENT / VIEW
- the body consists of only *1* instruction, and does *not* contain a
BEGIN-END block
- the instruction ends, lexically, with <ident> <whitespace>* ';'?
For example, "select <ident>;" or "return <ident>;"
- the last character of <ident> is a multi-byte character
- the last byte of this character is ';', '*', '/' or whitespace
In this case, the body of the object will be truncated after parsing,
and stored in an invalid format.
This last issue has not been fixed in this patch, since the real fix
will be implemented by Bug 25411 (trigger code truncated), which is caused
by the very same code.
The real problem is that the function skip_rear_comments is only a
work-around, and should be removed entirely: see the proposed patch for
bug 25411 for details.
If a stored function or a trigger was killed, it aborted but no error
was thrown, which allowed the calling statement to continue without notice.
This may lead to wrong data being inserted, updated, or deleted, as in such
cases the correct result of a stored function isn't guaranteed. In the case
of triggers it allowed the calling statement to ignore the kill signal and
to waste time re-evaluating triggers that will always fail
because the thd->killed flag is still on.
Now the Item_func_sp::execute() and sp_head::execute_trigger() functions
check whether the function or trigger was killed during execution and
throw an appropriate error if so.
Now the fill_record() function stops filling the record if an error was
reported through thd->net.report_error.
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
implicit insert"
Also fixes and adds test cases for bugs:
20497 "Trigger with INSERT DELAYED causes Error 1165"
21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
table with a trigger".
Post-review fixes.
Problem:
In MySQL INSERT DELAYED is a way to pipe all inserts into a
given table through a dedicated thread. This is necessary for
simplistic storage engines like MyISAM, which do not have internal
concurrency control or threading and thus cannot
achieve efficient INSERT throughput without support from the SQL layer.
DELAYED INSERT works as follows:
For every distinct table, which can accept DELAYED inserts and has
pending data to insert, a dedicated thread is created to write data
to disk. All user connection threads that attempt to
delayed-insert into this table interact with the dedicated thread in
producer/consumer fashion: all records to be inserted are pushed
into a queue of the dedicated thread, which fetches the records and
writes them.
In this design, client connection threads never open or lock
the delayed insert table.
This functionality was introduced in version 3.23 and does not take
into account existence of triggers, views, or pre-locking.
E.g. if INSERT DELAYED is called from a stored function, which,
in turn, is called from another stored function that uses the delayed
table, a deadlock can occur, because delayed locking by-passes
pre-locking. Besides:
* the delayed thread works directly with the subject table through
the storage engine API and does not invoke triggers
* even if it was patched to invoke triggers, if triggers,
in turn, used other tables, the delayed thread would
have to open and lock involved tables (use pre-locking).
* even if it was patched to use pre-locking, without deadlock
detection the delayed thread could easily lock out user
connection threads in case when the same table is used both
in a trigger and on the right side of the insert query:
the delayed thread would not release locks until all inserts
are complete, and user connection can not complete inserts
without having locks on the tables used on the right side of the
query.
Solution:
These considerations suggest two general alternatives for the
future of INSERT DELAYED:
* it is considered a full-fledged alternative to normal INSERT
* it is regarded as an optimisation that is only relevant
for simplistic engines.
Since we missed our chance to provide complete support of new
features when 5.0 was in development, the first alternative
is currently infeasible.
However, even the second alternative, which is to detect
new features and convert DELAYED insert into a normal insert,
is not easy to implement.
The catch-22 is that we don't know if the subject table has triggers
or is a view before we open it, and we only open it in the
delayed thread. We don't know if the query involves pre-locking
until we have opened all tables, and we always first create
the delayed thread, and only then open the remaining tables.
This patch detects the problematic scenarios and converts
DELAYED INSERT to a normal INSERT using the following approach:
* if the statement is executed under pre-locking (e.g. from
within a stored function or trigger) or the right
side may require pre-locking, we detect the situation
before creating a delayed insert thread and convert the statement
to a conventional INSERT.
* if the subject table is a view or has triggers, we shut down
the delayed thread and convert the statement to a conventional
INSERT.
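A sketch of the conversion (names illustrative):

    CREATE TABLE t (i INT) ENGINE=MyISAM;
    CREATE TRIGGER t_bi BEFORE INSERT ON t FOR EACH ROW SET NEW.i = NEW.i + 1;
    INSERT DELAYED INTO t VALUES (1);  -- silently executed as a normal INSERT,
                                       -- because t has a trigger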
Replaced binlog_row_based_if_mixed with the variable binlog_stmt_flags,
which holds several flags, and added member functions to manipulate the
flags.
Added code to generate a warning when an attempt is made to log an unsafe
statement to the binary log. The warning is both pushed to the
SHOW WARNINGS table and written to the error log. To prevent flooding
the error log, the warning is written there only once per
open session.
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
instead of uint for string lengths.
- All read()/write() functions were changed to use size_t (including vio).
- All protocol functions were changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*.
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) the following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed in readfrm() and writefrm(), ha_discover() and handler::discover()
the type of the argument 'frmdata' to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of unneeded casts
- Added a few new casts required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string
lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (This needed to
be done explicitly, as the conflict was often hidden by casting the function
to hash_get_key.)
- Changed some buffers to memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Include zlib.h in some files as we needed declaration of crc32()
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
(portability fix)
- Removed Windows-specific code to restore the cursor position, as this
causes a slowdown on Windows, and we should not mix read() and pread()
calls anyway as this is not thread safe. Updated the function comment to
reflect this. Changed the one function that depended on the original
behavior of my_pwrite() to restore the cursor position itself.
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it's not depending on THD).
Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some alloc_root + memcpy call sequences with
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (safety fix)
- Simpler loops in client-simple.c
- In some cases, flow control optimization implemented in sp::optimize
removes hreturn instructions, causing SQL exception handlers to:
* never return
* execute wrong logic
- This patch overrides the default shortcut optimization on hreturn
instructions to avoid this problem.
The issue found with bug 25411 is due to the function skip_rear_comments()
which damages the source code while implementing a work around.
The root cause of the problem is in the lexical analyser, which does not
process special comments properly.
For special comments like :
[1] aaa /*!50000 bbb */ ccc
since 5.0 is a version older than the current code, the parser is inlining
the content of the special comment, so that the query to process is
However, the text of the query captured when processing a stored procedure,
stored function or trigger (or event in 5.1) can be, after rebuilding it:
[3] aaa bbb */ ccc
which is wrong.
To fix bug 25411 properly, the lexical analyser needs to return [2] when
inlining special comments.
In order to implement this, some preliminary cleanup is required in the code,
which is implemented by this patch.
Before this change, the structure named LEX (or st_lex) contained attributes
that belong to lexical analysis, as well as attributes that represent the
abstract syntax tree (AST) of a statement.
Creating a new LEX structure for each statement (which makes sense for the
AST part) also re-initialized the lexical analysis phase each time, which
is conceptually wrong.
With this patch, the previous st_lex structure has been split in two:
- st_lex represents the Abstract Syntax Tree for a statement. The name "lex"
has not been changed to avoid a bigger impact in the code base.
- class Lex_input_stream represents the internal state of the lexical
analyser, which by definition should *not* be reinitialized when parsing
multiple statements from the same input stream.
This change is a pre-requisite for bug 25411, since the implementation of
Lex_input_stream will later be improved to deal properly with special
comments, and this processing can not be done with the current
implementation of sp_head::reset_lex and sp_head::restore_lex, which
interfere with the lexer.
This change set alone does not fix bug 25411.
streamline the event worker thread work flow and try to eliminate
possibilities for memory corruptions that might have been
lurking in previous (complicated) code.
This patch:
* removes Event_job_data::compile that was never used
* cleans up Event_job_data::execute to minimize juggling with
thread context and eliminate unneeded code paths
* Implements Security_context::change/restore_security_context
to be able to re-use these methods in all stored programs
This is to maybe fix Bug#27733 "Valgrind failures in
remove_table_from_cache".
Review comments applied.
when there are no up-to-date system tables to support it:
- initialize the scheduler before reporting "Ready for connections".
This ensures that warnings, if any, are printed before "Ready for
connections", and this message is not mangled.
- do not abort the scheduler if there are no system tables
- check the tables once at start up, remember the status and disable
the scheduler if the tables are not up to date.
If one attempts to use the scheduler with bad tables,
issue an error message.
- clean up the behaviour of the module under LOCK TABLES and pre-locking
mode
- make sure implicit commit of Events DDL works as expected.
- add more tests
Collateral clean ups in the events code.
This patch fixes Bug#23631 Events: SHOW VARIABLES doesn't work
when mysql.event is damaged
execution breaks replication.
When a stored routine is executed, we switch current
database to the database, in which the routine
has been created. When the stored routine finishes,
we switch back to the original database.
The problem was that if the original database does not
exist (anymore) after routine execution, we raised an error.
The fix is to report a warning, and switch to the NULL database.
the lexer API which internally uses unsigned char variables to
address its state map. The implementation of the lexer should be
internal to the lexer, and not influence the rest of the code.
thd->options' OPTION_STATUS_NO_TRANS_UPDATE bit was not restored at the
end of an SF() invocation when SF() modified a non-transactional table.
As a result of this artifact it was not possible to detect whether there
were any side effects when the top-level query ended.
If the top-level query's table was not modified and the bit was lost, there
would be no binlogging.
Fixed by preserving the bit inside the thd->no_trans_update struct. The
struct aggregates two bool flags telling whether the current statement and
the current transaction modified any non-transactional table.
The stmt and all flags are reset at the end of the statement and the
transaction, respectively.
TABLE ... WRITE".
Memory and CPU hogging occurred when a connection which had to wait for a
table lock was serviced by a thread which previously serviced a connection
that was killed (note that connections can reuse threads if the thread
cache is enabled).
One possible scenario which exposed this problem was when thread which
provided binlog dump to replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through
different thread/connection.
The problem also occurred when one killed a particular query in a
connection (using KILL QUERY) and later this connection had to wait for
some table lock.
This problem was caused by the fact that the thread-specific
mysys_var::abort variable, which indicates that waiting operations on the
mysys layer should be aborted (this includes waiting for table locks), was
set by the kill operation but was never reset. So this value was "inherited"
by the following statements or even other connections (which reused the same
physical thread). Such a discrepancy between this variable and the
THD::killed flag broke logic on the SQL layer and caused CPU and memory
hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test-case associated with this patch since it is hard to test
for memory/CPU hogging conditions in our test-suite.
Before this fix, the parser would accept illegal code in SQL exception
handlers, which later caused the runtime to crash when executing the code,
due to memory violations in the exception handler stack.
The root cause of the problem is instructions within an exception handler
that jump to code located outside of the handler. This is illegal according
to the SQL 2003 standard, since labels located outside the handler are not
supposed to be visible (they are "out of scope"), so any instruction that
jumps to these labels, like ITERATE or LEAVE, should not parse.
The section of the standard that is relevant for this is :
SQL:2003 SQL/PSM (ISO/IEC 9075-4:2003)
section 13.1 <compound statement>,
syntax rule 4
<quote>
The scope of the <beginning label> is CS excluding every <SQL schema
statement> contained in CS and excluding every
<local handler declaration list> contained in CS. <beginning label> shall
not be equivalent to any other <beginning label>s within that scope.
</quote>
With this fix, the C++ class sp_pcontext, which represents the "parsing
context" tree (a.k.a. symbol table) of a stored procedure, has been changed
as follows:
- constructors have been cleaned up, so that only building a root node for
the tree is public; building nodes inside a tree is not public.
- a new member, m_label_scope, indicates if a given syntactic context
belongs to a DECLARE HANDLER block,
- label resolution, in the method find_label(), has been changed to
implement the restriction of scope regarding labels used in a compound
statement.
The actions in the parser, when parsing the body of a SQL exception handler,
have been changed as follows:
- the implementation of an exception handler (DECLARE HANDLER) now creates
explicitly a new sp_pcontext, to isolate the code inside the handler from
the containing compound statement context.
- registering exception handlers as a result occurs in the parent context,
see the rule sp_hcond_element
- the code in sp_hcond_list has been cleaned up, to avoid code duplication
In addition, the flags IN_SIMPLE_CASE and IN_HANDLER, declared in sp_head.h,
have been removed, since they are unused and broken by design (as seen with
Bug 19194, Right recursion in parser for CASE causes excessive stack usage,
limitation): representing a stack in a single flag is not possible.
Tests in sp-error have been added to show that illegal constructs are now
rejected.
Tests in sp have been added for code coverage, to show that ITERATE or LEAVE
statements are legal when jumping to a label in scope, inside the body of
an exception handler.
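An illustrative construct that is now rejected at parse time:

    CREATE PROCEDURE p1()
    lbl: BEGIN
      DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
      BEGIN
        LEAVE lbl;  -- lbl is out of scope inside the handler: parse error
      END;
    END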
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
error)
Bug 25345 (Cursors from Functions)
This fix resolves a long-standing issue originally reported with bug 8407,
which affects the behavior of stored procedures, stored functions and
triggers in many different ways, causing the symptoms reported by all the
bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved with sub-statements.
Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers
- all the stored procedures
involved, and recursively inspect these objects definition to find more
references to more objects, until the list of every object referenced does
not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the tables (possibly) used at once is to
prevent dead locks.
One problem with this approach is that, if the execution path the code
really takes during runtime does not use a given table, and if the table is
missing, the server would not execute the statement.
This in particular has a major impact on triggers, since a missing table
referenced by an update/delete trigger would prevent an insert trigger from
running.
Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.
With this fix, the internal implementation of the pre-locking code has been
relaxed of some constraints, so that failure to open a table does not
necessarily prevent execution of a stored routine.
In particular, the pre-locking mechanism is now behaving as follows:
1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.
2) the next step, which is to open all the tables involved, still attempts
to open the tables added by the pre-locking code, but silently fails without
reporting any error or invoking any exception handler if a table is not
present. This is achieved by trapping internal errors with
Prelock_error_handler
3) the locking step only locks tables that were successfully opened.
4) when executing sub-statements, the list of tables used by each statement
is evaluated as before. The tables needed by the sub-statement are expected
to be already opened and locked. Statements referencing tables that were
not opened in step 2) will fail to find the table in the open list, and only
at this point will execution of the user code fail.
5) when a runtime exception is raised at 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables()
6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.
With all these changes, a side effect is that view security is impacted, in
two different ways.
First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap exceptions
during execution and still return a valid result, there is no way to decide,
when the view is created, whether a missing table really causes the view to
be invalid.
Secondly, testing for the existence of tables is now done later during
execution. View security, which consists of trapping errors and returning a
generic ER_VIEW_INVALID (to prevent disclosing information), was only
implemented at very specific phases covering *opening* tables, but not
covering the runtime execution. Because of this existing limitation,
errors that were previously trapped and converted into ER_VIEW_INVALID are
no longer trapped, causing table names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.
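A sketch of user logic that the relaxed pre-locking now allows to run
(names illustrative; 1146 is ER_NO_SUCH_TABLE):

    CREATE FUNCTION f1() RETURNS INT
    BEGIN
      DECLARE CONTINUE HANDLER FOR 1146 RETURN 0;
      RETURN (SELECT COUNT(*) FROM missing_table);
    END;
    SELECT f1();  -- now returns 0; previously failed before the function ran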
SF/Triggers in SBR mode."
BUG#14914 "SP: Uses of session variables in routines are not always replicated"
BUG#25167 "Dupl. usage of user-variables in trigger/function is not replicated
correctly"
This patch corrects a minor error in the previous patch for BUG#20141
(an errant code change to sp_head.cc). The comments for the first patch follow:
User-defined variables used inside of stored functions/triggers in
statements which did not update tables directly were not replicated.
We also had problems with replication of user-defined variables which
were used in triggers (or stored functions called from table-updating
statements) more than once.
This patch addresses the first issue by enabling logging of all
references to user-defined variables in triggers/stored functions
and not only references from table-updating statements.
The second issue stemmed from the fact that for user-defined
variables used from triggers or stored functions called from
table-updating statements we were writing binlog events for each
reference instead of only one event for the first reference.
This problem is already solved for stored functions called from
non-updating statements with help of "event unioning" mechanism.
So the patch simply extends this mechanism to the case affected.
It also fixes a small problem in this mechanism which caused wrong
logging of references to user variables when a non-updating
statement called several stored functions that used the same
variable and some of these function calls were omitted from the binlog
as they did not update any tables.
to a single statement.
---
Bug#24795: SHOW PROFILE
Profiling is only partially functional on some architectures. Where
there is no getrusage() system call, NULL values are presently
returned where real values would be required. Notably, Windows needs some
love applied to make profiling as useful there.
Syntax this adds:
SHOW PROFILES
SHOW PROFILE [types] [FOR QUERY n] [OFFSET n] [LIMIT n]
where "n" is an integer
and "types" is zero or many (comma-separated) of
"CPU"
"MEMORY" (not presently supported)
"BLOCK IO"
"CONTEXT SWITCHES"
"PAGE FAULTS"
"IPC"
"SWAPS"
"SOURCE"
"ALL"
It also adds a session variable (boolean) "profiling", set to "no"
by default, and (integer) profiling_history_size, set to 15 by
default.
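An illustrative session (query text arbitrary):

    SET profiling = 1;
    SELECT COUNT(*) FROM t1;
    SHOW PROFILES;                            -- lists recent statements and durations
    SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;   -- per-stage details for query 1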
This patch abstracts setting THDs' "proc_info" behind a macro that
can be used as a hook into the profiling code when profiling
support is compiled in. All future code in this line should use
that mechanism for setting thd->proc_info.
---
Tests are now set to omit the statistics.
---
Adds an Information_schema table, "profiling" for access to
"show profile" data.
---
Merge zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-5.0-community-3--bug24795
into zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-5.0-community
---
Fix merge problems.
---
Fixed one bug where query_source was NULL.
Updated test results.
---
Include more thorough profiling tests.
Improve support for prepared statements.
Use session-specific query IDs, starting at zero.
---
Selecting from I_S.profiling is no longer quashed in profiling, as
requested by Giuseppe.
Limit the size of captured query text.
No longer log queries that are zero length.
Before this change, local variables in stored procedures, stored functions
or triggers, when declared with a type of bit(N), would not evaluate their
value properly.
The problem was that the data was incorrectly typed as a string,
causing for example bit b'1', implemented as a byte 0x01, to be interpreted
as a string starting with the character 0x01. This later would cause
implicit conversions to integers or booleans to fail.
The root cause of this problem was an incorrect translation between field
types, like bit(N), and internal types used when representing values in Item
objects.
Also, before this change, the function HEX() would sometimes print extra
"0" characters when invoked with bit(N) values.
With this fix, the type translation (sp_map_result_type, sp_map_item_type)
has been changed so that bit(N) fields are represented with integer values.
A consequence is that, for the function HEX(), when called with a stored
procedure local variable of type bit(N) as argument, HEX() is provided with an
integer instead of a string, and therefore does not print "0" padding.
A test case for Bug 12976 was present in the test suite, and has been updated.
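An illustrative case of the corrected behavior:

    CREATE PROCEDURE p1()
    BEGIN
      DECLARE b BIT(8) DEFAULT b'10';
      SELECT HEX(b);  -- the variable now evaluates as an integer; HEX()
                      -- prints '2' without extra "0" padding
    END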
Removed a lot of compiler warnings
Removed unused variables, functions and labels
Initialized some variables that could be used uninitialized (fatal bugs)
%ll -> %l
correctly in some cases".
In short, calls to a stored function located in another database
than the default database, may fail to replicate if the call was made
by SET, SELECT, or DO.
Longer: when a stored function is called from a statement which does not go
to binlog ("SET @a=somedb.myfunc()", "SELECT somedb.myfunc()",
"DO somedb.myfunc()"), this crafted statement is binlogged:
"SELECT myfunc();" (accompanied with a mention of the default database
if there is one). So, if "somedb" is not the default database,
the slave would fail to find myfunc(). The fix is to specify the
function's database name in the crafted binlogged statement, like this:
"SELECT somedb.myfunc();". Test added in rpl_sp.test.
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
I ran gcov on the changes and added 'purecov' comments on almost all lines
that were not just variable name changes