The problem was that the server did not robustly handle a
unilateral rollback issued by the Resource Manager (RM)
due to a resource deadlock within the transaction branch.
By not acknowledging the rollback, the server (acting as the
Transaction Manager, TM) would eventually corrupt the XA
transaction state and crash.
The solution is to mark the transaction as rollback-only
if the RM indicates that it rolled back its branch of the
transaction.
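A minimal sketch of the affected flow (the xid and table names are hypothetical):
  XA START 'xid1';
  UPDATE t SET a = a + 1;  -- suppose the RM hits a resource deadlock here
                           -- and unilaterally rolls back its branch
  XA END 'xid1';
  -- the transaction is now marked rollback-only, so instead of corrupting
  -- the XA state the TM only accepts:
  XA ROLLBACK 'xid1';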
When running stored routines, the status variable "Questions" was wrongly
incremented. According to the manual it should contain the "number of
statements that clients have sent to the server".
Introduced a new status variable 'questions' to replace the query_id
variable, which corresponds badly with the number of statements sent by
the client.
The new behavior is meant to be backward compatible with 4.0 and at the
same time work with new features in a similar way.
This is a backport from 6.0
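A hypothetical way to observe the corrected counter (proc_p is illustrative):
  SHOW STATUS LIKE 'Questions';
  CALL proc_p();  -- statements executed inside the routine body no longer
                  -- inflate 'Questions'; only the CALL itself is counted
  SHOW STATUS LIKE 'Questions';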
The crash was caused by freeing the internal parser stack during parser
execution.
This occurred only for complex stored procedures, after reallocating the parser
stack using my_yyoverflow(), with the following C call stack:
- MYSQLparse()
- any rule calling sp_head::restore_lex()
- lex_end()
- x_free(lex->yacc_yyss), xfree(lex->yacc_yyvs)
The root cause is the implementation of stored procedures, which breaks the
assumption from 4.1 that there is only one LEX structure per parser call.
The solution is to separate the LEX structure into:
- attributes that represent a statement (the current LEX structure),
- attributes that relate to the syntax parser itself (Yacc_state),
so that parsing multiple statements in stored programs can create multiple
LEX structures while not changing the unique Yacc_state.
Now, Yacc_state and the existing Lex_input_stream are aggregated into
Parser_state, a structure that represents the complete state of the (lexical +
syntax) parser.
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was introduced
to avoid replication problems (e.g. replicating the statement with a
string argument would cause a parse failure on the slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and use this converted value when persisting the query to the log.
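A sketch of a statement that now works (table t is illustrative):
  PREPARE stmt FROM 'SELECT * FROM t LIMIT ?';
  SET @lim = '10';          -- a non-integer argument used to fail
  EXECUTE stmt USING @lim;  -- '10' is now converted to the integer 10, and
                            -- the converted value is written to the log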
It is forbidden to use the SELECT INTO construct inside UNION statements,
except in the last SELECT of the union. The parser records whether it
has seen INTO or not when parsing a UNION statement, but if the INTO was
used legally in an outer query, an error was thrown when a UNION was seen
in a subquery. Fixed in 5.0 by remembering the nesting level of INTO tokens,
taking subselects into account, and raising the error only when the INTO
actually collides with the UNION.
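A sketch of a query that was wrongly rejected before the fix (table names are illustrative):
  SELECT a INTO @v FROM t1
  WHERE a IN (SELECT a FROM t2 UNION SELECT a FROM t3);
  -- the INTO belongs to the outer query, so the UNION in the subquery is
  -- legal; using INTO in a non-last SELECT of a union remains an error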
The 5.0 manual page for mysql_insert_id() does not mention anything
about INSERT ... SELECT, and its current behavior for INSERT ... SELECT
is inconsistent with what the manual says about plain INSERT.
Fixed by changing the AUTO_INCREMENT and mysql_insert_id() handling
logic in INSERT ... SELECT to be consistent with the INSERT behavior,
the manual, and the changes in 5.1 introduced by WL3146:
- mysql_insert_id() now returns the first automatically generated
AUTO_INCREMENT value that was successfully inserted by INSERT ... SELECT
- if an INSERT ... SELECT statement is executed, and no automatically
generated value is successfully inserted, mysql_insert_id() now returns
the ID of the last inserted row.
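A sketch of the first case (table names are illustrative):
  CREATE TABLE dst (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
  CREATE TABLE src (v INT);
  INSERT INTO src VALUES (1), (2);
  INSERT INTO dst (v) SELECT v FROM src;
  -- mysql_insert_id() in the client now reports the first AUTO_INCREMENT
  -- value generated by this INSERT ... SELECT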
Using the SQL_BUFFER_RESULT option in CREATE ... SELECT led to creating
a corrupted index.
Corrected fix. A new method, prepare2, is added to the select_create
class. As all preparations are done by the select_create::prepare function,
it doesn't do anything itself. The algorithm for calling the
start_bulk_insert function is slightly changed: it is now called from the
select_insert::prepare2 function when the SQL_BUFFER_RESULT flag is set.
The is_bulk_insert_mode flag is removed as it is not needed anymore.
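The affected statement shape, for reference (names are illustrative):
  CREATE TABLE t2 (KEY (a))
    SELECT SQL_BUFFER_RESULT a FROM t1;
  -- with the corrected fix, start_bulk_insert is invoked exactly once and
  -- the index on t2 is built correctly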
UNIQUE (eq-ref) lookups result in a table being considered a "constant" table.
Queries that consist of only constant tables are processed in do_select() in a
special way that doesn't invoke evaluate_join_record(), and therefore doesn't
increase the counters join->examined_rows and join->thd->row_count.
The patch increases these counters in this special case.
NOTICE:
This behavior seems to contradict what the documentation says in Sect. 5.11.4:
"Queries handled by the query cache are not added to the slow query log, nor
are queries that would not benefit from the presence of an index because the
table has zero rows or one row."
No test case in 5.0, as the issue shows only in the slow query log, and other
counters can give subtly different values (with regard to counting in
create_sort_index(), synthetic rows in ROLLUP, etc.).
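A sketch of the special case (table t is illustrative):
  CREATE TABLE t (id INT PRIMARY KEY, v INT);
  INSERT INTO t VALUES (1, 10);
  SELECT v FROM t WHERE id = 1;
  -- the eq-ref lookup on the primary key makes t a "constant" table; the
  -- row examined here is now counted in examined_rows and thd->row_count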
When the first character of the FIELDS TERMINATED BY string
in the SELECT INTO OUTFILE clause is a special
character (one of n, t, r, b, 0, Z or N) and ENCLOSED BY
is empty, every occurrence of this character within a
field value is duplicated.
The duplication has been avoided.
A new warning message has been added: "First character of
the FIELDS TERMINATED string is ambiguous; please use
non-optional and non-empty FIELDS ENCLOSED BY".
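A sketch of the affected statement (the output path is illustrative):
  SELECT 'line' INTO OUTFILE '/tmp/out.txt'
    FIELDS TERMINATED BY 'n' ENCLOSED BY '';
  -- the 'n' inside 'line' was previously written twice; it no longer is,
  -- and the new warning about the ambiguous first character is raised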
This is similar to bug_27716, but the synopsis of this bug stressed that there
is another side of the artifact, affecting behaviour inside a transaction.
Fixed by deploying multi_delete::send_error() - which was otherwise never
called - and refining its logic to perform the binlogging job when needed.
The changeset includes the following side effects:
- added tests to check bug_23333's scenarios on a mixture of table types for
  multi_update;
- fixed bug@30763 with a two-line patch and a test coinciding with the one
  added for bug_23333.
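A sketch of the scenario (engine choices and names are illustrative):
  -- t1 is InnoDB (transactional), t2 is MyISAM (non-transactional)
  DELETE t1, t2 FROM t1 JOIN t2 ON t1.a = t2.a;
  -- if the statement fails part-way through, multi_delete::send_error()
  -- now binlogs the changes already applied to the non-transactional table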
Problem: one thread could read uninitialized memory from (the stack of) another
thread.
Fix: swapped order of initializing the memory and making it available to the
other thread.
Fix: put lock around the statement that makes the memory available to the other
thread.
Fix: all fields of the struct are now initialized in the constructor, to avoid
future problems.
Using the SQL_BUFFER_RESULT option in CREATE ... SELECT led to creating
a corrupted index.
During execution of a CREATE ... SELECT SQL_BUFFER_RESULT statement, the
engine->start_bulk_insert function was called twice. On the first call
MyISAM disabled all non-unique indexes; on the second call it decided not
to re-enable them because all indexes were already disabled.
Due to this, no indexes were actually created during CREATE TABLE, thus
producing a crashed table.
Now the select_insert class has an is_bulk_insert_mode flag which prevents
calling the start_bulk_insert function twice.
The flag is set in the select_create::prepare and select_insert::prepare2
functions and in the select_insert class constructor.
The flag is reset in the select_insert::send_eof function.
Statements using connection_id() did not restore correctly from mysqlbinlog
output.
Problem: when using "mysqlbinlog | mysql" for recovery, the connection_id()
result may differ from what was in effect when the statement was issued.
Fix: if a statement contains connection_id(), write SET pseudo_thread_id= XXX;
to the binlog before it and use that value later on.
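A sketch of the recovery path (table t is illustrative):
  INSERT INTO t VALUES (CONNECTION_ID());
  -- the binlog now contains, immediately before the statement:
  --   SET pseudo_thread_id=<original connection id>;
  -- so that "mysqlbinlog | mysql" reproduces the original value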
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
Once set, the flag might later get reset inside a stored routine execution
stack.
The reason was that there was no check whether a new statement had started
at the time of resetting.
The artifact affects most binloggable DML queries. Note that multi-update is
covered by the bug@27716 fix, and multi-delete by bug@29136.
Fixed by saving the parent statement's flag of whether the statement modified
a non-transactional table, and unioning (merging) that value with the one
gained in mysql_execute_command.
Moved the thd->no_trans_update members into thd->transaction.`member`;
added asserting code.
Effectively, the following properties now hold:
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
reflects whether such a table got modified by the substatement.
That also respects THD::really_abort_on_warning() requirements.
2. Eventually thd->transaction.stmt.modified_non_trans_table will be computed as
the union of the values of all invoked sub-statements.
That fixes this bug#27417.
Computation of thd->transaction.all.modified_non_trans_table is refined to be
based on the stmt's value for all cases, including the INSERT .. SELECT
statement, which before the patch had an extra issue, bug@28960.
Minor issues are covered with mysql_load, mysql_delete, and binlogging of
INSERT INTO temp_table SELECT.
The supplied test verifies this only to a limited extent, mostly via asserts.
The ultimate testing is deferred to bug@13270, bug@23333.
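A sketch of property 2 (names and engine choices are illustrative):
  -- t_trans is InnoDB, t_non_trans is MyISAM
  CREATE FUNCTION f() RETURNS INT
  BEGIN
    INSERT INTO t_non_trans VALUES (1);  -- this sub-statement sets
                                         -- stmt.modified_non_trans_table
    RETURN 1;
  END;
  INSERT INTO t_trans VALUES (f());
  -- the flag set inside f() is now merged into the top-level statement's
  -- value instead of being lost when the sub-statement state is restored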
When InnoDB detects a deadlock it calls ha_rollback_trans() to roll back the
main transaction. But such an action isn't allowed from inside triggers and
functions. When it happened, the 'Explicit or implicit commit' error was
thrown even if there were no commit/rollback statements in the
trigger/function, which confused users.
Now the convert_error_code_to_mysql() function doesn't call the
ha_rollback_trans() function directly but rather calls the
mark_transaction_to_rollback function and returns an error.
The sp_rcontext::find_handler() now doesn't allow errors to be caught by the
trigger/function error handlers when the thd->is_fatal_sub_stmt_error flag
is set. Procedures are still allowed to catch such errors.
The sp_rcontext::find_handler function now accepts a THD handle as a parameter.
The transaction_rollback_request and the is_fatal_sub_stmt_error flags are
added to the THD class. They are initialized by the THD class constructor.
Now the ha_autocommit_or_rollback function rolls back the main transaction
when not in a sub-statement and thd->transaction_rollback_request is set.
The THD::restore_sub_statement_state function now resets the
thd->is_fatal_sub_stmt_error flag on exit from a sub-statement.
Locking behaviour was inconsistent between permanent and temporary tables.
Review fixes.
The original bug report complains that if we locked a temporary table
with a LOCK TABLES statement, we would not leave LOCK TABLES mode
when this temporary table was dropped.
Additionally, the bug was escalated when it was discovered that
when a temporary transactional table that had previously been
locked with a LOCK TABLES statement was dropped, further actions with
this table, such as UNLOCK TABLES, would lead to a crash.
The problem originates from incomplete support of transactional temporary
tables. When we added calls to handler::store_lock()/handler::external_lock()
to operations that work with such tables, we only covered the normal
server code flow and did not cover LOCK TABLES mode.
In LOCK TABLES mode, ::external_lock(LOCK) would sometimes be called without
a matching ::external_lock(UNLOCK), e.g. when a transactional temporary table
was dropped. Additionally, such a table would be left in the list of LOCKed
TABLES.
The patch aims to address this inadequacy. Now, whenever an instance
of 'handler' is destroyed, we assert that it was previously
external_lock(UNLOCK)-ed. All the places that violated this assert
were fixed.
This patch introduces no changes in behavior -- the discrepancy in
behavior will be fixed when we start calling ::store_lock()/::external_lock()
for all tables, regardless of whether they are transactional or not,
temporary or not.
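A sketch of the reported scenario (names are illustrative):
  CREATE TEMPORARY TABLE t (a INT) ENGINE=InnoDB;
  LOCK TABLES t WRITE;
  DROP TABLE t;    -- the server stayed in LOCK TABLES mode
  UNLOCK TABLES;   -- and further actions like this could crash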
gettimeofday() can fail, and presumably so can time().
Keep an eye on it.
Since we have no data on this at all so far, we just
retry on failure (and log the event), assuming that
this is just an intermittent failure. This might of
course hang the thread until we succeed. Once we know
more about these failures, a more appropriate, cleverer
scheme may be picked (only try so many times per thread,
etc.; if that fails, return the last "good" time() we got,
or some such). Using sql_print_information() to log, as this
probably only occurs in high-load scenarios where the debug
trace is likely disabled (or might interfere with testing
the effect). No test case, as this is a non-deterministic
issue.
By default MyISAM overwrites the .MYD and .MYI files if no
DATA DIRECTORY option is used. This can lead to two tables
using the same .MYD and .MYI files (which then can't be dropped).
To prevent CREATE TABLE from overwriting a file, a new option
is introduced: keep_files_on_create.
When it is on, CREATE TABLE throws an error if either
the .MYD or the .MYI file exists for a MyISAM table.
The option is off by default (resulting in compatible behavior).
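A sketch of the new option (assuming stray t.MYD/t.MYI files already exist in the database directory):
  SET GLOBAL keep_files_on_create = ON;
  CREATE TABLE t (a INT) ENGINE=MyISAM;  -- now fails with an error instead
                                         -- of silently overwriting the files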
A SELECT INTO OUTFILE with FIELDS ENCLOSED BY a digit or a minus sign,
followed by the matching LOAD DATA INFILE statement, used a wrong encoding
of non-string fields that contained the enclosing character in their text
representation.
Example:
SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
Old encoded result in the text file:
5155 595
^ was decoded as the 1st enclosing character of the 2nd field;
^ was skipped as garbage;
^ ^ was decoded as a pair of enclosing characters of the 1st field;
^ was decoded as trailing space of the first field;
^^ was decoded as a doubled enclosed character.
New encoded result in the text file:
51\55 595
^ ^ pair of enclosing characters of the 1st field;
^^ escaped enclosed character.
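The corresponding round trip now restores the original values (path and table are illustrative):
  SELECT 15, 9 INTO OUTFILE '/tmp/t.txt' FIELDS ENCLOSED BY '5';
  LOAD DATA INFILE '/tmp/t.txt' INTO TABLE t FIELDS ENCLOSED BY '5';
  -- t receives 15 and 9 again, thanks to the escaped enclosing character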
The `SELECT 'r' INTO OUTFILE ... FIELDS ENCLOSED BY 'r'` statement
encoded the 'r' string to a 4-byte string of value x'725c7272'
(a sequence of 4 characters: r\rr).
The LOAD DATA statement decoded this string to a 1-byte string of
value x'0d' (the ASCII Carriage Return character) instead of the original
'r' character.
The same error also happened with the FIELDS ENCLOSED BY clause
followed by special characters: 'n', 't', 'r', 'b', '0', 'Z' and 'N'.
NOTE 1: This is a result of an undocumented feature: LOAD DATA INFILE
recognises 2-byte input sequences like \n, \t, \r and \Z in addition
to the documented 2-byte sequences \0 and \N. This feature should be
documented (here the backslash character is the default ESCAPED BY character;
in a real-life example it may be any ESCAPED BY character).
NOTE 2, changed behaviour:
Now the `SELECT INTO OUTFILE' statement with the `FIELDS ENCLOSED BY'
clause followed by one of: 'n', 't', 'r', 'b', '0', 'Z' or 'N' characters
encodes this special character itself by doubling it ('r' --> 'rr'),
not by prepending it with an escape character.
ALTER VIEW is currently not supported as a prepared statement
and should be disabled as such, as it could otherwise cause server crashes.
ALTER VIEW is also not supported when called from stored
procedures or functions, for related reasons, and should likewise be disabled.
This patch disables these DDL statements and adjusts the appropriate test
cases accordingly.
Additional tests have been added to reflect the fact that we do support
CREATE/ALTER/DROP TABLE for Prepared Statements (PS), Stored Procedures (SP)
and PS within SP.
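A sketch of the now-rejected usage (view v is illustrative):
  PREPARE s FROM 'ALTER VIEW v AS SELECT 1';
  -- this now fails with an error instead of risking a crash; the same
  -- applies to ALTER VIEW inside a stored procedure or function body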
The method select_insert::send_error does two things: it rolls back the
statement being executed and outputs an error message. But when a
nonexistent column is referenced, an error message has already been published,
and there is no need to publish another.
Fixed by moving all functionality beyond publishing an error message into
select_insert::abort() and calling only that function.
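A sketch of the user-visible effect (names are illustrative):
  INSERT INTO t1 SELECT no_such_column FROM t2;
  -- a single "Unknown column" error is reported; the statement rollback
  -- now happens in select_insert::abort()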
The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of tables used in the CHECK OPTION expression were
added to temporary table rows. The multi_update::do_updates()
method was modified to restore necessary record buffers
before evaluation of the CHECK OPTION condition.
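A sketch of the affected construct (names are illustrative; t_merge is a MERGE table):
  CREATE VIEW v AS SELECT * FROM t_merge WHERE a < 10 WITH CHECK OPTION;
  UPDATE v, t2 SET v.a = v.a + 1 WHERE v.id = t2.id;
  -- the CHECK OPTION condition is now evaluated over record buffers
  -- restored by rowid, not over whatever the buffers happened to hold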
The reason for the bug was that replaying a query on the slave was not
possible, since its event was recorded with the killed error. Due to the
specifics of handling INSERT, whose per-row while-loop cannot be interrupted
by killing, the query on a transactional table should not have appeared in the
binlog unless there was a call to a stored routine that got interrupted by the
kill (in which case an error must be returned out of the loop).
The offered solution adds the following rule for binlogging INSERT, which
accounts for the above specifics:
For INSERT on a transactional table, if no error was set, the raised KILLED
flag alone is harmless and is ignored by masking it out when the binlog event
is created.
For both table types, the combination of a raised error and the KILLED flag
indicates that there was potentially partial execution on the master and
consistency is in question.
In that case the code continues to binlog the event with the appropriate
killed error.
The fix relies on the specified behaviour of stored routines, which must
propagate the error to the top-level query handling if the thd->killed flag
was raised during the routine's execution.
The patch adds an argument with a default killed-status-unset value to
Query_log_event::Query_log_event.
Made year 2000 handling more uniform.
Removed year 2000 handling from calc_days().
The above removes some bugs in date/datetime handling with year between 0 and 200.
Now we get a note when we insert a datetime value into a date column.
For default values to CREATE, don't give errors for warning level NOTE.
Fixed some compiler failures
Added library ws2_32 for windows compilation (needed if we want to compile with IOCP support)
Removed duplicate typedef TIME and replaced it with MYSQL_TIME
Better (more complete) fix for: Bug#21103 "DATE column not compared as DATE"
Fixed properly Bug#18997 "DATE_ADD and DATE_SUB perform year2K autoconversion magic on 4-digit year value"
Fixed Bug#23093 "Implicit conversion of 9912101 to date does not match cast(9912101 as date)"
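Two sketches of the user-visible changes (table t is illustrative):
  CREATE TABLE t (d DATE);
  INSERT INTO t VALUES ('2007-08-15 10:30:00');  -- now raises a note about
                                                 -- the truncated time part
  SELECT CAST(9912101 AS DATE);  -- now consistent with the implicit
                                 -- conversion of 9912101 to a date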