procedures causes crashes!
The problem in that bug report was mostly fixed by the
patch for bug 38691.
However, the attached test case focuses on another crash/valgrind
warning problem: a SHOW PROCESSLIST query accesses freed memory of an
SP instruction that runs in a parallel connection.
Changes to thd->query/thd->query_length in dangerous
places are now guarded by the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
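A minimal two-connection sketch of the reported race (the procedure name is
illustrative, not taken from the actual test case):
-- connection 1
CALL long_running_proc();
-- connection 2, repeatedly, while connection 1 is inside the procedure
SHOW PROCESSLIST;
Before the fix, connection 2 could read the thd->query memory of connection 1
after it had already been freed when the SP moved on to its next instruction.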
due to name_const substitution
Problem:
"In general, statements executed within a stored procedure
are written to the binary log using the same rules that
would apply were the statements to be executed in standalone
fashion. Some special care is taken when logging procedure
statements because statement execution within procedures
is not quite the same as in non-procedure context".
For example, each reference to a local variable in an SP's
statements is replaced by NAME_CONST(var_name, var_value).
Queries like
"CREATE TABLE ... SELECT FUNC(local_var ..."
are logged as
"CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ..."
which leads to different field names and
may result in an "Incorrect column name" error if var_value is long enough.
Fix: in 5.x we'll issue a warning in such a case.
In 6.0 we should get rid of NAME_CONST().
Note: this issue and change should be described in the documentation
("Binary Logging of Stored Programs").
Fine-tuning. Broke the comparison out into a method at
Davi's suggestion. Clarified comments. Reverting the
test case, which I find too brittle; a proper test
case is in 5.1+.
BUG#39325 Server crash inside MYSQL_LOG::purge_first_log halts replication
The patch reverses the order of purging the logs and updating the relay-log.info/index files.
This removes the possibility of holes caused by crashes happening between updating the info/index files and purging the logs.
NOTE: This is a combined patch for BUG#38826 and BUG#39325. The patch is based on the bugteam tree and takes the reviewers' suggestions into account.
The problem was that the server did not robustly handle a
unilateral roll back issued by the Resource Manager (RM)
due to a resource deadlock within the transaction branch.
By not acknowledging the roll back, the server (TM) would
eventually corrupt the XA transaction state and crash.
The solution is to mark the transaction as rollback-only
if the RM indicates that it rolled back its branch of the
transaction.
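A hedged session sketch of the scenario (the xid and table are illustrative):
XA START 'xid1';
UPDATE t1 SET a = a + 1 WHERE id = 1;
If the RM (e.g. InnoDB) unilaterally rolls back its branch at this point to
resolve a deadlock, the server now marks the whole XA transaction as
rollback-only: any attempt to commit the branch is rejected and it can only
be rolled back.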
When running stored routines, the status variable "Questions" was wrongly
incremented. According to the manual it should contain the "number of
statements that clients have sent to the server".
A new status variable 'questions' has been introduced to replace the query_id
variable, which corresponds poorly with the number of statements sent by the
client.
The new behavior is meant to be backward compatible with 4.0 and at the
same time to work with new features in a similar way.
This is a backport from 6.0
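For illustration (the procedure name p1 is hypothetical):
SHOW GLOBAL STATUS LIKE 'Questions';
CALL p1();
SHOW GLOBAL STATUS LIKE 'Questions';
The CALL counts as one statement sent by the client; statements executed inside
p1 no longer inflate 'Questions'.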
build)
The crash was caused by freeing the internal parser stack during parser
execution.
This occurred only for complex stored procedures, after reallocating the parser
stack using my_yyoverflow(), with the following C call stack:
- MYSQLparse()
- any rule calling sp_head::restore_lex()
- lex_end()
- x_free(lex->yacc_yyss), xfree(lex->yacc_yyvs)
The root cause is the implementation of stored procedures, which breaks the
assumption from 4.1 that there is only one LEX structure per parser call.
The solution is to separate the LEX structure into:
- attributes that represent a statement (the current LEX structure),
- attributes that relate to the syntax parser itself (Yacc_state),
so that parsing multiple statements in stored programs can create multiple
LEX structures while not changing the unique Yacc_state.
Now, Yacc_state and the existing Lex_input_stream are aggregated into
Parser_state, a structure that represents the complete state of the (Lexical +
Syntax) parser.
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was introduced
to avoid replication problems (e.g. replicating the statement with a
string argument would cause a parse failure on the slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and use this converted value when persisting the query to the log.
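A short hypothetical example (the table name is made up) of what now works:
PREPARE stmt FROM 'SELECT * FROM t1 LIMIT ?';
SET @lim = '10';
EXECUTE stmt USING @lim;
The string argument is converted to the integer 10, and that converted value is
what gets written to the log.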
subselects into account
It is forbidden to use the SELECT INTO construction inside UNION statements
except in the last SELECT of the union. The parser records whether it
has seen INTO or not when parsing a UNION statement. But if the INTO was
legally used in an outer query, an error was thrown if UNION was seen in a
subquery. Fixed in 5.0 by remembering the nesting level of INTO tokens and
raising the error only when the INTO actually collides with the UNION.
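For example, a statement of the following (hypothetical) shape is legal and is
no longer rejected, because the INTO belongs to the outer query while the UNION
is nested one level deeper:
SELECT a INTO @v
FROM (SELECT 1 AS a UNION SELECT 2) AS t
LIMIT 1;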
insert ... select.
The 5.0 manual page for mysql_insert_id() does not mention anything
about INSERT ... SELECT, though its current behavior is inconsistent
with what the manual says about plain INSERT.
Fixed by changing the AUTO_INCREMENT and mysql_insert_id() handling
logic in INSERT ... SELECT to be consistent with the INSERT behavior,
the manual, and the changes in 5.1 introduced by WL3146:
- mysql_insert_id() now returns the first automatically generated
AUTO_INCREMENT value that was successfully inserted by INSERT ... SELECT
- if an INSERT ... SELECT statement is executed, and no automatically
generated value is successfully inserted, mysql_insert_id() now returns
the ID of the last inserted row.
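A sketch of the resulting behavior (table names are hypothetical; t2.id is
AUTO_INCREMENT):
CREATE TABLE t2 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, a INT);
INSERT INTO t2 (a) SELECT a FROM t1;
After this statement, mysql_insert_id() in the client reports the first
AUTO_INCREMENT value generated by the INSERT ... SELECT; if the SELECT had
supplied explicit id values so that nothing was auto-generated, it would report
the ID of the last inserted row.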
led to creating corrupted index.
Corrected fix. A new method called prepare2 has been added to the select_create
class. Because all preparations are done by the select_create::prepare function,
it does nothing. The algorithm for calling the start_bulk_insert function has
been changed slightly: it is now called from the select_insert::prepare2
function when the SQL_BUFFER_RESULT flag is set.
The is_bulk_insert_mode flag is removed as it is no longer needed.
UNIQUE (eq-ref) lookups result in the table being considered a "constant" table.
Queries that consist of only constant tables are processed in do_select() in a
special way that doesn't invoke evaluate_join_record(), and therefore doesn't
increase the counters join->examined_rows and join->thd->row_count.
The patch increases these counters in this special case.
NOTICE:
This behavior seems to contradict what the documentation says in Sect. 5.11.4:
"Queries handled by the query cache are not added to the slow query log, nor
are queries that would not benefit from the presence of an index because the
table has zero rows or one row."
No test case in 5.0 as issue shows only in slow query log, and other counters
can give subtly different values (with regard to counting in create_sort_index(),
synthetic rows in ROLLUP, etc.).
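For instance (a hypothetical table with a primary key on id):
SELECT * FROM t1 WHERE id = 1;
The eq-ref lookup makes t1 a "constant" table, so the query is resolved in
do_select() without evaluate_join_record(); examined_rows and row_count are now
incremented for this case as well, which is what the slow query log accounting
relies on.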
in the SELECT INTO OUTFILE clause starts with a special
character (one of n, t, r, b, 0, Z or N) and ENCLOSED BY
is empty, every occurrence of this character within a
field value is duplicated.
The duplication has been avoided.
A new warning message has been added: "First character of
the FIELDS TERMINATED string is ambiguous; please use
non-optional and non-empty FIELDS ENCLOSED BY".
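A hypothetical statement that now raises the warning (file path and table name
are made up):
SELECT a, b
INTO OUTFILE '/tmp/t1.txt'
FIELDS TERMINATED BY 'n' ENCLOSED BY ''
FROM t1;
The terminator starts with the special character 'n' while ENCLOSED BY is
empty, so the warning is issued and the character is no longer duplicated
inside field values.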
similar to bug_27716, but the synopsis stresses that there is another
side of the artifact, affecting behaviour inside a transaction.
Fixed by deploying multi_delete::send_error() - otherwise never called - and refining its logic
to perform the binlogging job when needed.
The changeset includes the following side effects:
- added tests to check bug_23333's scenarios on a mixture of tables for multi_update;
- fixed bug@30763 with a two-line patch and a test coinciding with the one added for bug_23333.
Problem: one thread could read uninitialized memory from (the stack of) another
thread.
Fix: swapped order of initializing the memory and making it available to the
other thread.
Fix: put a lock around the statement that makes the memory available to the
other thread.
Fix: all fields of the struct are now initialized in the constructor, to avoid
future problems.
led to creating corrupted index.
During execution of a CREATE .. SELECT SQL_BUFFER_RESULT statement the
engine->start_bulk_insert function was called twice. On the first call
MyISAM disabled all non-unique indexes, and on the second call it decided
not to re-enable them because all indexes were already disabled.
Due to this no indexes were actually created during CREATE TABLE, thus
producing a crashed table.
Now the select_insert class has an is_bulk_insert_mode flag which prevents
calling the start_bulk_insert function twice.
The flag is set in the select_create::prepare and select_insert::prepare2
functions and in the select_insert class constructor.
The flag is reset in the select_insert::send_eof function.
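A sketch of the kind of statement affected (table names are hypothetical):
CREATE TABLE t2 (a INT, KEY(a)) ENGINE=MyISAM
SELECT SQL_BUFFER_RESULT a FROM t1;
Before the fix the index on t2(a) could end up missing because
start_bulk_insert ran twice; with is_bulk_insert_mode it runs only once.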
restores from mysqlbinlog out
Problem: when using "mysqlbinlog | mysql" for recovery, the connection_id()
result may differ from what was used when the statement was originally issued.
Fix: if there is a connection_id() in a statement, write "SET pseudo_thread_id= XXX;"
to the binlog before it and use that value later on.
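Conceptually (the value 4 and the table are illustrative), the binlog now
contains something like:
SET pseudo_thread_id= 4;
INSERT INTO t1 VALUES (CONNECTION_ID());
so that replaying the log through "mysqlbinlog | mysql" reproduces the original
connection_id() value instead of the id of the replaying connection.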
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
Once set, the flag could later get reset inside a stored routine execution
stack.
The reason was that there was no check whether a new statement had started at
the time of resetting.
The artifact affects most binloggable DML queries. Note that multi-update is
covered by the bug@27716 fix, and multi-delete by bug@29136.
Fixed by saving the parent statement's flag of whether the statement modified a
non-transactional table, and merging (unioning) that value with the one obtained
in mysql_execute_command.
The thd->no_trans_update members have been relocated into thd->transaction.`member`;
assertions were added.
Effectively, the following properties now hold.
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
reflects whether such a table was modified by the substatement.
That also respects THD::really_abort_on_warning() requirements.
2. Eventually thd->transaction.stmt.modified_non_trans_table is computed as
the union of the values of all invoked sub-statements.
That fixes bug#27417.
Computation of thd->transaction.all.modified_non_trans_table is refined to be
based on the stmt's value in all cases, including the INSERT .. SELECT statement,
which before the patch had an extra issue, bug@28960.
Minor issues are covered in mysql_load, mysql_delete, and binlogging of INSERT
into a temp_table SELECT.
The supplied test verifies only a limited part, mostly via asserts. The ultimate
testing is deferred to bug@13270 and bug@23333.
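As a hedged illustration of property 1 (all names are made up; client delimiter
handling is omitted):
CREATE TABLE tm (a INT) ENGINE=MyISAM;
CREATE TABLE ti (a INT) ENGINE=InnoDB;
CREATE FUNCTION f1() RETURNS INT MODIFIES SQL DATA
BEGIN INSERT INTO tm VALUES (1); RETURN 1; END;
BEGIN;
INSERT INTO ti VALUES (f1());
The sub-statement inside f1() modifies the non-transactional table tm, so
thd->transaction.stmt.modified_non_trans_table must stay set for the whole
top-level INSERT, and its value is merged into the transaction-wide flag.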
When InnoDB detects a deadlock it calls ha_rollback_trans() to roll back the
main transaction. But such an action isn't allowed from inside triggers and
functions. When it happens, the 'Explicit or implicit commit' error is thrown
even though there are no commit/rollback statements in the trigger/function,
which confuses users.
Now the convert_error_code_to_mysql() function doesn't call the
ha_rollback_trans() function directly but rather calls the
mark_transaction_to_rollback function and returns an error.
The sp_rcontext::find_handler() now doesn't allow errors to be caught by the
trigger/function error handlers when the thd->is_fatal_sub_stmt_error flag
is set. Procedures are still allowed to catch such errors.
The sp_rcontext::find_handler function now accepts a THD handle as a parameter.
The transaction_rollback_request and is_fatal_sub_stmt_error flags are
added to the THD class. They are initialized by the THD class constructor.
Now the ha_autocommit_or_rollback function rolls back the main transaction
when not in a sub-statement and thd->transaction_rollback_request
is set.
The THD::restore_sub_statement_state function now resets the
thd->is_fatal_sub_stmt_error flag on exit from a sub-statement.
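A rough sketch of the affected situation (tables and trigger are hypothetical):
CREATE TRIGGER tr1 AFTER INSERT ON t1
FOR EACH ROW UPDATE t2 SET b = b + 1 WHERE id = NEW.a;
If the UPDATE inside the trigger hits an InnoDB deadlock, the client now
receives a deadlock error and the transaction is marked for rollback instead of
the confusing 'Explicit or implicit commit' error.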
between perm and temp tables. Review fixes.
The original bug report complains that if we locked a temporary table
with the LOCK TABLES statement, we would not leave LOCK TABLES mode
when this temporary table is dropped.
Additionally, the bug was escalated when it was discovered that
when a temporary transactional table that was previously
locked with the LOCK TABLES statement was dropped, further actions with
this table, such as UNLOCK TABLES, would lead to a crash.
The problem originates from incomplete support of transactional temporary
tables. When we added calls to handler::store_lock()/handler::external_lock()
to operations that work with such tables, we only covered the normal
server code flow and did not cover LOCK TABLES mode.
In LOCK TABLES mode, ::external_lock(LOCK) would sometimes be called without
a matching ::external_lock(UNLOCK), e.g. when a transactional temporary table
was dropped. Additionally, such a table would be left in the list of LOCKed
TABLES.
The patch aims to address this inadequacy. Now, whenever an instance
of 'handler' is destroyed, we assert that it was previously
external_lock(UNLOCK)-ed. All the places that violated this assert
have been fixed.
This patch introduces no changes in behavior -- the discrepancy in
behavior will be fixed when we start calling ::store_lock()/::external_lock()
for all tables, regardless of whether they are transactional or not,
temporary or not.
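The reported scenario, roughly (engine and names are illustrative):
CREATE TEMPORARY TABLE t1 (a INT) ENGINE=InnoDB;
LOCK TABLES t1 WRITE;
DROP TABLE t1;
UNLOCK TABLES;
The session still does not leave LOCK TABLES mode when the temporary table is
dropped (that behavior is unchanged by this patch), but follow-up statements
such as UNLOCK TABLES should no longer crash.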
gettimeofday() can fail and, presumably, so can time().
Keep an eye on it.
Since we have no data on this at all so far, we just
retry on failure (and log the event), assuming that
this is just an intermittent failure. This might of
course hang the thread until we succeed. Once we know
more about these failures, a more appropriate, cleverer
scheme may be picked (only try so many times per thread,
etc.; if that fails, return the last "good" time() we got or
some such). Using sql_print_information() to log, as this
probably only occurs in high-load scenarios where the debug
trace is likely disabled (or might interfere with testing
the effect). No test case, as this is a non-deterministic
issue.
By default MyISAM overwrites existing .MYD and .MYI files if no
DATA DIRECTORY option is used. This can lead to two tables
using the same .MYD and .MYI files (which then can't be dropped).
To prevent CREATE TABLE from overwriting a file, a new option
is introduced: keep_files_on_create.
When it is on, CREATE TABLE throws an error if either
the .MYD or the .MYI file already exists for a MyISAM table.
The option is off by default (resulting in backward-compatible behavior).
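For example (the database directory and table name are illustrative):
SET SESSION keep_files_on_create = ON;
CREATE TABLE t1 (a INT) ENGINE=MyISAM;
The CREATE TABLE fails with an error if a stray t1.MYD or t1.MYI file already
exists in the database directory, instead of silently overwriting it.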
A SELECT INTO OUTFILE with FIELDS ENCLOSED BY a digit or a minus sign,
followed by the matching LOAD DATA INFILE statement, used wrong encoding
of non-string fields whose text representation contained the enclosing
character.
Example:
SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
Old encoded result in the text file:
5155 595
^ was decoded as the 1st enclosing character of the 2nd field;
^ was skipped as garbage;
^ ^ was decoded as a pair of enclosing characters of the 1st field;
^ was decoded as trailing space of the first field;
^^ was decoded as a doubled enclosed character.
New encoded result in the text file:
51\55 595
^ ^ pair of enclosing characters of the 1st field;
^^ escaped enclosed character.