CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now the code distinguishes between IGNORE and ERROR_FOR_NULL and saves and
restores the check-field options.
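For illustration only, a minimal sketch of the save/restore pattern referred to
above; the enum and THD member names are taken from the server sources, but the
surrounding call is a hypothetical placeholder:

  /* Sketch: keep the invoker's check-field option from being clobbered
     by whatever the trigger body sets.  execute_trigger_body() is a
     hypothetical placeholder for the real call. */
  enum enum_check_fields saved_check_opt= thd->count_cuted_fields;
  execute_trigger_body(thd);
  thd->count_cuted_fields= saved_check_opt;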
SHOW CREATE TABLE on a view (v1) that contains a function whose
statement uses another view (v2) could trigger an infinite loop
if the view referenced within the function caused a warning to
be raised while opening that view (v2).
The problem was an infinite loop over the stack of internal error
handlers. It was triggered when the stack contained two or more
handlers and the first two did not handle the raised condition;
in that case the loop variable always pointed back at the second
handler in the stack.
The solution is to correct the loop variable assignment so that
the loop iterates over all handlers in the stack.
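In outline, only the loop's update expression changes; the member names
(m_internal_handler, m_prev_internal_handler) follow the server's handler stack
but are given here from memory, as a sketch rather than the exact patch:

  /* Broken: the update expression restarted from the head of the stack,
     so with two or more non-handling handlers the loop never advanced
     past the second one:
       error_handler= m_internal_handler->m_prev_internal_handler;
     Fixed: advance from the current handler instead. */
  for (Internal_error_handler *error_handler= m_internal_handler;
       error_handler;
       error_handler= error_handler->m_prev_internal_handler)
  {
    if (error_handler->handle_error(sql_errno, message, level, this))
      return TRUE;
  }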
error causes debug assertion
The IGNORE option of the multiple-table UPDATE command was
not intended to suppress errors caused by the
sql_safe_updates mode. That mode raises an error if the
execution of an UPDATE does not use a key for row retrieval,
and it should continue to do so regardless of the IGNORE option.
However, the implementation of IGNORE does not support
exceptions to the rule: it always converts errors to
warnings and cannot be extended. The Internal_error_handler
interface offers the infrastructure to handle individual
errors, making sure that the error raised by
sql_safe_updates is not silenced.
Fixed by implementing an Internal_error_handler and using it
for UPDATE IGNORE commands.
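A sketch of the kind of handler the fix describes; the class name is invented
here and the handle_error() signature is the 5.1-era interface as best recalled,
so treat both as assumptions:

  /* Hypothetical handler for UPDATE IGNORE: downgrade ignorable errors
     to warnings, but refuse to handle the safe-updates error so it is
     still raised as an error. */
  class Update_ignore_error_handler : public Internal_error_handler
  {
  public:
    virtual bool handle_error(uint sql_errno, const char *message,
                              MYSQL_ERROR::enum_warning_level level,
                              THD *thd)
    {
      if (sql_errno == ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE)
        return FALSE;                 /* not handled: error is reported */
      push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN, sql_errno, message);
      return TRUE;                    /* handled: converted to a warning */
    }
  };

  /* Installed only for UPDATE IGNORE: */
  Update_ignore_error_handler ignore_handler;
  thd->push_internal_handler(&ignore_handler);
  /* ... execute the multi-table update ... */
  thd->pop_internal_handler();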
subquery returning multiple rows
Error handling was missing when evaluating subqueries in WHERE
clauses and when assigning a SELECT result to a @variable.
This caused crashes.
Fixed by adding error handling code to both the WHERE
condition evaluation and the assignment to a @variable.
Bug#46539 Various crashes on INSERT IGNORE SELECT + SELECT FOR UPDATE.
If a transaction was rolled back inside InnoDB due to a deadlock
or lock wait timeout, and the statement had the IGNORE clause,
the server could crash at the end of the statement or on shutdown.
This was caused by the error handling infrastructure's attempt to
ignore a non-ignorable error.
When a transaction rollback request is raised, switch off the
current_select->no_error flag so that the error that follows
will not be ignored.
Instead, we could add !thd->is_fatal_sub_stmt_error to
my_message_sql(), but since in write_record() we switch
off no_error, the same approach is used in
thd_mark_transaction_to_rollback().
@todo: call thd_mark_transaction_to_rollback() from
handler::print_error(), then we can easily make sure
that the error reported by print_error is not ignored.
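A sketch of what the change amounts to in the rollback-request path (the
function body below is paraphrased from the description, not quoted from
the patch):

  void mark_transaction_to_rollback(THD *thd, bool all)
  {
    if (thd)
    {
      thd->is_fatal_sub_stmt_error= TRUE;
      thd->transaction_rollback_request= all;
      /* An aborted transaction cannot be IGNOREd: make sure the error
         that follows is reported rather than silenced. */
      if (thd->lex->current_select)
        thd->lex->current_select->no_error= FALSE;
    }
  }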
Implemented the server infrastructure for the fix:
1. Added a function LEX_STRING *thd_query_string(THD) that returns
a LEX_STRING structure instead of a char *.
This is the function that must be called in InnoDB instead of
thd_query().
2. Did some encapsulation in THD: aggregated thd_query and
thd_query_length into a LEX_STRING and added accessor and mutator
methods for easy code updating (sketched below).
3. Updated the server code to use the new methods where applicable.
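A sketch of points 1 and 2; the definitions below are reconstructed from the
description above, not copied from the patch:

  /* In class THD: the query text and its length live in one LEX_STRING,
     reachable only through accessors/mutators. */
  LEX_STRING query_string;
  char  *query() const          { return query_string.str; }
  uint32 query_length() const   { return (uint32) query_string.length; }
  void   set_query(char *query_arg, uint32 query_length_arg);

  /* Exported accessor for storage engines; InnoDB calls this instead of
     thd_query(): */
  extern "C" LEX_STRING *thd_query_string(MYSQL_THD thd)
  {
    return &thd->query_string;
  }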
Backport from 6.0 to 5.1.
Only those sync points that are used in debug_sync.test are included.
The Debug Sync Facility allows placing synchronization points
in the code:
open_tables(...)
DEBUG_SYNC(thd, "after_open_tables");
lock_tables(...)
When activated, a sync point can
- Send a signal and/or
- Wait for a signal
Nomenclature:
- signal: A value of a global variable that persists
until overwritten by a new signal. The global
variable can also be seen as a "signal post"
or "flag mast". Then the signal is what is
attached to the "signal post" or "flag mast".
- send a signal: Assign the value (the signal) to the global
variable ("set a flag") and broadcast a
global condition to wake those waiting for
a signal.
- wait for a signal: Wait on the global condition in a loop until
the global value matches the wait-for signal (see the
sketch below).
Please find more information in the top comment in debug_sync.cc
or in the worklog entry.
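The signal/wait mechanism described above boils down to a value guarded by a
mutex plus a broadcast condition variable. A simplified, self-contained sketch
(this is not the debug_sync.cc code; it only illustrates the idea):

  #include <pthread.h>
  #include <string.h>

  static pthread_mutex_t sync_mutex= PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  sync_cond=  PTHREAD_COND_INITIALIZER;
  static char            sync_signal[64]= "";     /* the "signal post" */

  /* send a signal: set the flag and wake all waiters */
  void debug_sync_send(const char *sig)
  {
    pthread_mutex_lock(&sync_mutex);
    strncpy(sync_signal, sig, sizeof(sync_signal) - 1);
    pthread_cond_broadcast(&sync_cond);
    pthread_mutex_unlock(&sync_mutex);
  }

  /* wait for a signal: loop until the posted value matches */
  void debug_sync_wait(const char *sig)
  {
    pthread_mutex_lock(&sync_mutex);
    while (strcmp(sync_signal, sig) != 0)
      pthread_cond_wait(&sync_cond, &sync_mutex);
    pthread_mutex_unlock(&sync_mutex);
  }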
binlog-do-db / binlog-ignore-db
InnoDB will return an error if statement-based replication is used
along with transaction isolation level READ-COMMITTED (or weaker),
even if the statement in question is filtered out according to the
binlog-do-db rules. In this case, an error should not be printed.
This patch addresses the issue by extending the existing check in
external_lock to take the filter rules into account before deciding to
print an error. Furthermore, it also changes decide_logging_format to
take into consideration whether the statement is filtered out from
the binlog before the decision is made.
procedures causes crashes!
The problem of that bug report was mostly fixed by the
patch for bug 38691.
However, the attached test case exposed another problem
(a crash or a valgrind warning): a SHOW PROCESSLIST query
accessing freed memory of an SP instruction running in a
parallel connection.
Changes of thd->query/thd->query_length in dangerous
places have been guarded with the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
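A sketch of the guarded update; the mutex and field names follow the text
above, while the surrounding code is illustrative only:

  /* Any change of the query text that a concurrent SHOW PROCESSLIST
     might read is wrapped in the per-thread mutex: */
  pthread_mutex_lock(&thd->LOCK_thd_data);
  thd->query= new_query;                 /* illustrative assignment */
  thd->query_length= new_query_length;
  pthread_mutex_unlock(&thd->LOCK_thd_data);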
when used with --tab
1) New syntax: added a CHARACTER SET clause to
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE).
mysqldump is updated to use this in --tab mode.
2) The ESCAPED BY/ENCLOSED BY field parameters are documented as
accepting a CHAR argument; however, SELECT ... INTO OUTFILE
silently ignored everything after the first character of
multi-character arguments.
For symmetry with LOAD DATA INFILE, the server has been
modified to fail with the same error:
ERROR 42000: Field separator argument is not what is
expected; check the manual
3) The current LOAD DATA INFILE recognizes field/line separators
"as is", without converting from the client character set to the
data file character set. So it is assumed that the input file of
LOAD DATA INFILE consists of data in one character set and
separators in another. For compatibility with that [buggy]
behaviour, the SELECT INTO OUTFILE implementation has been kept
"as is" too, but a new warning message has been added:
Non-ASCII separator arguments are not fully supported
This message warns about field/line separators that contain
non-ASCII characters.
When using statement-based replication (SBR), repeatedly calling
statements which are unsafe for SBR causes a warning message
to be written to the error log for each statement. This can fill
up the error log, and there is no way to disable this
behavior.
The solution is to log these messages (about statements unsafe
for statement-based replication) only if the log_warnings option
is set; a sketch of the check follows the example below.
For example:
SET GLOBAL LOG_WARNINGS = 0;
INSERT INTO t1 VALUES(UUID());
SET GLOBAL LOG_WARNINGS = 1;
INSERT INTO t1 VALUES(UUID());
In this case the message will be printed only once:
[Warning] Statement may not be safe to log in statement format.
Statement: INSERT INTO t1 VALUES(UUID())
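A sketch of the gating check; the placement and the thd->query() accessor are
assumptions, and the message text is the one shown above:

  if (global_system_variables.log_warnings)
    sql_print_warning("Statement may not be safe to log in statement "
                      "format. Statement: %s", thd->query());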
format." warnings
Even though a statement would be filtered out from the binlog, a
warning would still be thrown if it was issued with a LIMIT clause.
This patch addresses the issue by checking the filtering rules before
printing the warning.
such as quit and shutdown
Logging to the slow log could produce an indeterminate value for
Rows_examined in special cases. In debug mode this manifests
itself as any of the various marker values used to mark
uninitialized memory on various platforms.
If logging happens on a THD object that has not performed any
row reads (on this or any previous connection),
THD::examined_row_count may be uninitialized. This patch adds
initialization for this attribute.
No automated test cases are added, as for this to be
meaningful, we need to ensure that we're using a THD
fulfilling the above conditions. This is hard to do in the
mysql-test-run framework. The patch has been verified
manually, however, by restarting mysqld and running the test
included with the bug report.
variable. The problem was that THD::connect_utime could be
used without being initialized when the main thread is used
to handle connections (--thread-handling=no-threads).
Make the callers of the Query_log_event and Execute_load_log_event
constructors, and of THD::binlog_query, provide the error code
instead of having the constructors figure it out.
MySQL crashes if a user without proper privileges attempts to create a procedure.
The crash happens because more than one error state is pushed onto the Diagnostics
area. In this particular case the user is denied the implicit creation of a new user
account with the implicitly granted ALTER ROUTINE and EXECUTE privileges.
The new account is needed if the original user account contained a host mask;
a user account with a host mask is a distinct user account in this context.
An alternative would be to first get the most permissive user account which
includes the current user connection and then assign privileges to that
account. This behavior change is considered out of scope for this bug patch.
The implicit assignment of privileges when a user creates a stored routine is
considered a feature for user convenience, and as such it is not
a critical operation. Any failure to complete this operation is thus considered
non-fatal (an error becomes a warning).
The patch backports a stack implementation of the internal error handler interface.
This enables the use of multiple error handlers, making it possible to intercept
and cancel errors thrown by lower layers. This is needed because an error handler
is already in use further down the call stack that emits the errors which need
to be converted.
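In outline, the backported stack chains handlers through a previous-handler
pointer on each handler; the method and member names below follow later server
versions and are shown only as a sketch:

  void THD::push_internal_handler(Internal_error_handler *handler)
  {
    /* Link the new handler on top of the current chain. */
    handler->m_prev_internal_handler= m_internal_handler;
    m_internal_handler= handler;
  }

  void THD::pop_internal_handler()
  {
    DBUG_ASSERT(m_internal_handler != NULL);
    m_internal_handler= m_internal_handler->m_prev_internal_handler;
  }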
Problem: when storing "SELECT ... INTO @var ..." results in variables, we used
the val_xxx() methods, which return results for the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use the val_xxx_result() counterparts to get proper results.
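A sketch of the change, using Item's result-field accessors; the exact call
sites of the patch are not shown, and the accessor names are given as best
recalled:

  /* Read values through the ..._result() interface so that, for
     DISTINCT/GROUP BY/HAVING, the value of the finished group is used
     rather than the first row of the next group. */
  longlong v= item->val_int_result();      /* instead of item->val_int() */
  String buf, *s= item->str_result(&buf);  /* instead of item->val_str(&buf) */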
The MySQL server crashes because the unsafe-statement warning is wrongly
elevated to an error, which sets the error status of the thread's
Diagnostics_area in THD::binlog_query().
Yet the caller assumes that binary logging does not touch the status, so it
will also set the status later, via my_ok(), my_error() or my_message(),
separately, according to the execution result of the statement or transaction.
But the status of the thread's Diagnostics_area may only be set once.
Fixed by clearing the error wrongly set by binary logging, while keeping the
warning message.
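A sketch of the cleanup described above; whether the real patch clears the
Diagnostics_area exactly this way is an assumption, only the intent (drop the
spurious error, keep the warning already pushed to the warning list) is taken
from the text:

  /* After binary logging: if it wrongly set an error status, clear it.
     The warning stays, since warnings live outside the Diagnostics_area
     status. */
  if (thd->main_da.is_error())
    thd->main_da.reset_diagnostics_area();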
due to name_const substitution
Problem:
"In general, statements executed within a stored procedure
are written to the binary log using the same rules that
would apply were the statements to be executed in standalone
fashion. Some special care is taken when logging procedure
statements because statement execution within procedures
is not quite the same as in non-procedure context".
For example, each reference to a local variable in SP's
statements is replaced by NAME_CONST(var_name, var_value).
Queries like
"CREATE TABLE ... SELECT FUNC(local_var ..."
are logged as
"CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ..."
that leads to different field names and
might result in an "Incorrect column name" error if var_value is long enough.
Fix: in 5.x we'll issue a warning in such a case.
In 6.0 we should get rid of NAME_CONST().
Note: this issue and change should be described in the documentation
("Binary Logging of Stored Programs").
Fine-tuning. Broke out the comparison into a method at
Davi's suggestion. Clarified comments. Reverted the
test case, which I find too brittle; a proper test
case exists in 5.1+.
This is a backport from 5.1 to 5.0.
Fix for BUG 20023: mysql_change_user() resets the value
of SQL_BIG_SELECTS.
The bug was that SQL_BIG_SELECTS was not properly set
in COM_CHANGE_USER.
The fix is to update SQL_BIG_SELECTS properly.
When binlog_format is STATEMENT and the statement is unsafe,
the unsafe warning/error message was issued without checking
whether SQL_LOG_BIN was turned on.
Fixed by adding a sql_log_bin_toplevel flag to THD that tracks whether
SQL_LOG_BIN is ON in the current session, regardless of whether execution
is currently inside a stored program.
Problem: when storing "SELECT ... INTO @var ..." results in variables, we used
the val_xxx() methods, which return results for the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use the val_xxx_result() counterparts to get proper results.
date_format functions
String::realloc() did not check whether the existing string data fits in
the newly allocated buffer when reallocating a String object
with an external buffer (i.e. alloced == FALSE). This could lead to memory
overruns in some cases.
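One possible shape of the missing check, for the branch of String::realloc()
that handles a string with an external buffer; this is paraphrased from the
description, not quoted from the patch:

  /* alloced == FALSE: the String does not own Ptr.  Allocate a new
     buffer and copy only as much of the old data as actually fits. */
  char *new_ptr;
  if (!(new_ptr= (char*) my_malloc(len, MYF(MY_WME))))
    return TRUE;                         /* allocation failed */
  if (str_length >= len)                 /* old data does not fit */
    str_length= len - 1;
  if (str_length)
    memcpy(new_ptr, Ptr, str_length);
  new_ptr[str_length]= '\0';
  Ptr= new_ptr;
  Alloced_length= len;
  alloced= 1;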
The problem is that an unfiltered user query was being passed as
the format string parameter of sql_print_warning(), which
performs printf-like formatting, leading to crashes if the user
query contains formatting directives (e.g. %s). Also, the code was
using THD::query as the source of the user query, but this
variable is not meaningful in some situations -- in a delayed
insert, it points to the table name.
The solution is to pass the user query as an argument for the
format string and to use the function parameter query_arg as the
source of the user query.
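The difference, in a line or two; the message text here is made up, only the
pattern matters:

  /* Unsafe: user-controlled text interpreted as a format string. */
  sql_print_warning(query_arg);

  /* Safe: constant format string, query passed as an argument. */
  sql_print_warning("Aborted query: %s", query_arg);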
functions
String::realloc() did not check whether the existing string data fits in the
newly allocated buffer when reallocating a String object with an external
buffer (i.e. alloced == FALSE). This could lead to memory overruns in some
cases.