TRUNCATE TABLE fails to replicate when stmt-based binlogging is not supported.
Correcting some tests that were failing in pushbuild, as well as fixing the
result files for some tests that are not executed in the default MTR run.
The problem is that an unfiltered user query was being passed as
the format string parameter of sql_print_warning(), which later
performs printf-like formatting, leading to crashes if the user
query contains formatting instructions (i.e. %s). Also, it was
using THD::query as the source of the user query, but this
variable is not meaningful in some situations -- in a delayed
insert, it points to the table name.
The solution is to pass the user query as a parameter to the
format string and to use the function parameter query_arg as the
source of the user query.
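For illustration, a minimal sketch of the two call patterns
(sql_print_warning() is the server's printf-style logger; query_arg
stands for the function parameter carrying the user query):

  /* Unsafe: the user query itself becomes the format string, so a
     query containing "%s" is interpreted as a formatting directive. */
  sql_print_warning(query_arg);

  /* Safe: a fixed format string; the query is only substituted as an
     argument and never interpreted by the formatter. */
  sql_print_warning("%s", query_arg);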
In the fix for bug 37553 we declared longlong results for
class Item_str_timefunc, as per the comments/docs,
but didn't add a method for that, and the
default just wasn't good enough for some
cases.
This changeset adds a dedicated val_int() to the class.
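A rough sketch of what such a dedicated val_int() can look like
(illustrative only, not the actual changeset): the default
string-to-integer conversion of "12:34:56" stops at the first ':' and
yields 12, while converting via the TIME value yields 123456.

  longlong Item_str_timefunc::val_int()
  {
    MYSQL_TIME ltime;
    if (get_time(&ltime))                 /* NULL or error */
      return 0;
    longlong v= ltime.hour * 10000LL + ltime.minute * 100 + ltime.second;
    return ltime.neg ? -v : v;
  }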
TRUNCATE TABLE fails to replicate when stmt-based binlogging is not supported.
There were two separate problems with the code, both of which are fixed with
this patch:
1. An error was printed by InnoDB for TRUNCATE TABLE in statement mode when
running in the isolation levels READ COMMITTED and READ UNCOMMITTED, since
InnoDB does not permit statement-based replication for DML statements in
those isolation levels. However, TRUNCATE TABLE is not transactional, but is
DDL, and should therefore be allowed to be replicated as a statement.
2. The statement was not logged in mixed mode because of the error above, but
the error was not reported to the client.
This patch fixes the problem by treating TRUNCATE TABLE as DDL; that is, it is
always logged as a statement, and InnoDB no longer reports an error for
TRUNCATE TABLE.
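Schematically (with an assumed helper name binlog_format_allowed_by_engine(),
not the actual patch):

  bool should_log_as_statement(const THD *thd)
  {
    /* TRUNCATE TABLE is DDL: always statement-logged, and the storage
       engine is not consulted, so InnoDB cannot veto it with an error. */
    if (thd->lex->sql_command == SQLCOM_TRUNCATE)
      return true;
    /* DML still goes through the usual capability checks. */
    return binlog_format_allowed_by_engine(thd);
  }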
- Allow the new process to break away from any job that this
process is part of so that it can be assigned to the new JobObject
we just created. This is safe since the new JobObject is created with
the JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE flag, making sure it will be
terminated when the last handle to it is closed (which is owned
by this process).
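A minimal Win32 sketch of this scheme ("child.exe" is a placeholder
command line, and most error handling is omitted):

  #include <windows.h>

  int spawn_in_new_job(void)
  {
    /* Create a job object that kills its members when the last handle
       to it is closed. */
    HANDLE job= CreateJobObjectA(NULL, NULL);
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli;
    ZeroMemory(&jeli, sizeof(jeli));
    jeli.BasicLimitInformation.LimitFlags= JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &jeli, sizeof(jeli));

    /* Start the child suspended and allow it to break away from any
       job this process belongs to, so it can join our new job. */
    char cmdline[]= "child.exe";
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    ZeroMemory(&si, sizeof(si));
    si.cb= sizeof(si);
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE,
                        CREATE_BREAKAWAY_FROM_JOB | CREATE_SUSPENDED,
                        NULL, NULL, &si, &pi))
      return 1;

    AssignProcessToJobObject(job, pi.hProcess);
    ResumeThread(pi.hThread);
    return 0;
  }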
Documented behaviour was broken by the patch for bug 33699,
which actually is not a bug.
This fix reverts the patch for bug 33699, restoring the old
behaviour for UPDATE statements that set a NOT NULL field
to NULL.
Problem: some queries using NAME_CONST(.. COLLATE ...)
led to a server crash due to a failed type cast.
Fix: return the underlying item's type in the case of
NAME_CONST(.. COLLATE ...) to avoid wrong casting.
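A simplified sketch of the idea (names from the Item class hierarchy,
but not the exact patch): NAME_CONST over a COLLATE clause wraps the
literal in a collation function, so type() must report the wrapped
literal's type instead of FUNC_ITEM.

  Item::Type Item_name_const::type() const
  {
    Item *value= value_item;
    /* Peel off a COLLATE wrapper so callers don't cast the item to
       the wrong class based on a FUNC_ITEM answer. */
    if (value->type() == FUNC_ITEM)
      value= ((Item_func *) value)->arguments()[0];
    return value->type();
  }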
Accessing a well-defined MERGE table may return an error
stating that the merge table is incorrectly defined. This
happens if MERGE child tables were accessed before and we
failed to open another, incorrectly defined MERGE table in
this connection.
myrg_open() internally used my_errno as a variable for determining
failure, and thus could be tricked into a wrong decision by other
uses of my_errno.
With this fix we use a function-local boolean flag instead of my_errno
to determine failure.
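A simplified illustration of the anti-pattern and the fix (hypothetical
helper, not the real myrg_open() body):

  int open_children(my_bool (*open_child)(const char *),
                    const char **names, int count)
  {
    my_bool failed= FALSE;               /* local flag, not my_errno */
    for (int i= 0; i < count; i++)
    {
      if (open_child(names[i]))
        failed= TRUE;                    /* record failure locally */
    }
    /* Deciding success by inspecting the global my_errno here could
       misreport failure left over from an earlier, unrelated call. */
    return failed ? -1 : 0;
  }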
Problem:
RelativeLocationPath can appear only after a node-set expression
in the third and the fourth branches of this rule:
PathExpr ::= LocationPath
| FilterExpr
| FilterExpr '/' RelativeLocationPath
| FilterExpr '//' RelativeLocationPath
XPath code didn't check the type of FilterExpr and crashed.
Fix:
If FilterExpr is a scalar expression
(variable reference, literal, number, or scalar function call),
return an error.
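A sketch of the added check, with hypothetical parser names: a '/' or
'//' after FilterExpr is only legal when FilterExpr produced a node set.

  Item *parse_path_expr(Parser *p)
  {
    Item *filter= parse_filter_expr(p);
    if (p->lookahead("/") || p->lookahead("//"))
    {
      if (!returns_nodeset(filter))      /* scalar expression: reject */
        return parse_error(p, "node-set expected before '/'");
      return parse_relative_location_path(p, filter);
    }
    return filter;
  }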
The bug happened because filtering out a STMT_END_F-flagged event meant that
the transaction COMMIT found traces of an incomplete statement commit.
Such a situation is only possible with NDB circular replication. The
filtered-out rows event is the one that immediately precedes the COMMIT
query event.
Fixed by performing the rows-log-event statement commit when executing
the transaction COMMIT event.
Resources that were allocated by events of the last statement other than
the STMT_END_F-flagged one are cleaned up prior to execution of the commit
logic.
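Schematically (hypothetical names, not the actual patch): the COMMIT
handler first finishes any rows statement whose STMT_END_F event was
filtered out.

  int apply_commit_event(Relay_log_info *rli, THD *thd)
  {
    /* Traces of a filtered-out STMT_END_F event: complete the pending
       rows-log-event statement commit and release its resources. */
    if (rli->has_unfinished_rows_statement())
      rli->rows_event_stmt_cleanup(thd);
    return trans_commit(thd);            /* then the usual commit logic */
  }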
functions
String::realloc() did not check whether the existing string data fits in the
newly allocated buffer when reallocating a String object with an external
buffer (i.e. alloced == FALSE). This could lead to memory overruns in some
cases.
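A simplified sketch of the fixed path (only the external-buffer branch;
the alloced == TRUE branch, which uses my_realloc(), is omitted):

  bool String::realloc(uint32 arg_length)
  {
    uint32 len= arg_length + 1;          /* room for a terminating 0 */
    if (Alloced_length < len)
    {
      char *new_ptr= (char *) my_malloc(len, MYF(MY_WME));
      if (!new_ptr)
        return TRUE;                     /* allocation failure */
      if (str_length >= len)             /* existing data would overrun */
        str_length= len - 1;             /* the new buffer: truncate    */
      memcpy(new_ptr, Ptr, str_length);
      new_ptr[str_length]= 0;
      Ptr= new_ptr;
      Alloced_length= len;
      alloced= TRUE;
    }
    return FALSE;
  }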
upgrading lock, even with low_priority_updates
The problem is that there is no mechanism to control whether a
delayed insert takes a high or low priority lock on a table.
The solution is to modify the delayed insert thread ("handler")
to take into account the global value of low_priority_updates
when taking table locks. The value of low_priority_updates is
retrieved when the insert delayed thread is created and will
remain the same for the duration of the thread.
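Schematically, with assumed variable names (the lock type is computed
once, when the handler thread is created):

  /* Sampled once when the delayed-insert handler thread is created;
     later changes to the global do not affect this thread. */
  thr_lock_type delayed_lock=
    global_system_variables.low_priority_updates ?
    TL_WRITE_LOW_PRIORITY : TL_WRITE;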
The original symptoms of this bug have been fixed as a consequence of other bug fixes.
Taking this time to correct some formatting, such as replacing error numbers with names.
Beginning this with 5.0.
- If missing: add "disconnect <session>"
- If the physical disconnect of non-"default" sessions is not finished
at test end: add a routine which waits until this has happened
+ additional improvements like
- remove superfluous files created by the test
- replace error numbers with error names
- remove trailing spaces, replace tabs with spaces
- unify the writing of bug references within comments
- correct comments
- minor changes of formatting
Modifications according to the code review are included.
Fixed tests:
grant2
grant3
lock_tables_lost_commit
mysqldump
openssl_1
outfile
There are two issues:
1. 6.0 uses the obsolete master-*** server options;
2. the test is not deterministic in that, although master vs slave
consistency is fine, two runs of the test can have different results. The
reason for the non-determinism is the combination of
the chosen way of presenting results and the ndb_autoincrement_prefetch_sz feature.
The current patch fixes the 2nd issue by putting out results via the diff_table
macro instead of the former run-sensitive method.
The 1st issue is going to be fixed by a separate patch to 6.0.
Problem:
Custom UCA collations didn't set the MY_CS_STRNXFRM flag,
which resulted in the "prefix_search" method being used
instead of the required "seq_search".
Problem 2 (not mentioned in the bug report):
Custom UCA collations also didn't set the MY_CS_UNICODE flag,
so an attempt to compare a column with a custom UCA collation
to another column with a non-Unicode character set led to
an "illegal mix of collations" error.
Fix:
The two missing flags were added to the collation initialization.
Upgrade:
- All fulltext indexes with custom UCA collations should be rebuilt.
- Non-fulltext custom UCA indexes should likely be rebuilt as well.
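A minimal illustration of the flag fix (CHARSET_INFO::state holds the
MY_CS_* flags; simplified, not the exact init code):

  static void init_custom_uca_flags(CHARSET_INFO *cs)
  {
    cs->state|= MY_CS_STRNXFRM;  /* use "seq_search", not "prefix_search" */
    cs->state|= MY_CS_UNICODE;   /* legal to compare with non-Unicode
                                    character sets */
  }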