The enum system variables were handled inconsistently
as int, unsigned int, and unsigned long in various places.
This caused problems on platforms where
sizeof(int) != sizeof(long).
Fixed by homogenizing the type of the enum variables
to unsigned int, since it is size-compatible with the C enum
type.
Removed the test from the experimental list.
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we switch to row-based logging. In
SBR, we document the behavior to remind users that the temporary file
is left behind in a temporary directory.
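A minimal sketch of the kind of statement that triggers this behavior
(the table and file names are hypothetical): with statement-based
logging the file contents are embedded in the binlog as special load
events, and mysqlbinlog later re-creates them under a generated name
in a temporary directory.

  CREATE TABLE t1 (a INT, b VARCHAR(32));
  -- With binlog_format=STATEMENT this statement is written to the binlog
  -- as special load-data events; mysqlbinlog re-creates the file under a
  -- generated name and rewrites the statement to point at that file,
  -- which is not removed afterwards.
  LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;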
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we go to row-based. In SBR, we
document it to remind user the tmpfile is left in a temporary
directory.
******
This patch fixes the following bugs:
- Bug#5889: Exit handler for a warning doesn't hide the warning in
trigger
- Bug#9857: Stored procedures: handler for sqlwarning ignored
- Bug#23032: Handlers declared in a SP do not handle warnings generated
in sub-SP
- Bug#36185: Incorrect precedence for warning and exception handlers
The problem was in the way warnings/errors during stored routine execution
were handled. Prior to this patch the logic was as follows:
- when a warning/an error happens: if we're executing a stored routine,
and there is a handler for that warning/error, remember the handler,
ignore the warning/error and continue execution.
- after a stored routine instruction is executed: check for a remembered
handler and activate one (if any).
This logic caused several problems:
- if one instruction generated several warnings (errors), it was
impossible to choose the right handler -- a handler for the first
generated condition was chosen and remembered for activation.
- confusion when handling conditions in scopes other than the current one.
- not putting generated warnings/errors into the Warning Info (Diagnostic
Area) violates the SQL standard.
The patch changes the logic as follows:
- the Diagnostic Area is cleared at the beginning of each statement that
either can generate warnings or can work with tables.
- at the end of a stored routine instruction, the Diagnostic Area is left
intact.
- the Diagnostic Area is checked after each stored routine instruction. If
an instruction generates several conditions, it is now possible to look
at all of them and determine an appropriate handler.
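A minimal sketch (the table and procedure names are hypothetical) of
the Bug#23032 scenario: with the new logic, a handler declared in the
calling procedure can now be considered for a warning raised inside a
sub-procedure.

  CREATE TABLE t1 (c CHAR(1));
  DELIMITER //
  CREATE PROCEDURE p_inner()
  BEGIN
    -- raises a truncation warning in non-strict SQL mode
    INSERT INTO t1 VALUES ('abc');
  END//
  CREATE PROCEDURE p_outer()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLWARNING
      SELECT 'warning handled in caller';
    -- the handler above is now a candidate for warnings from p_inner()
    CALL p_inner();
  END//
  DELIMITER ;
  CALL p_outer();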
In order to be able to check if the set of the grouping fields in a
GROUP BY has changed (and thus to start a new group) the optimizer
caches the current values of these fields in a set of Cached_item
derived objects.
The Cached_item_str, used for caching varchar and TEXT columns,
is limited in length by the max_sort_length variable.
A String buffer to store the value is allocated in Cached_item_str's
constructor, with an alloced length of either the maximum length of the
string or the value of max_sort_length (whichever is smaller).
Then, at compare time the value of the string to compare to was
truncated to the alloced length of the string buffer inside
Cached_item_str.
This is all fine and valid, but only as long as values near or equal to
the alloced length of this buffer are not assigned. When such values are
assigned, the alloced length is rounded up, and as a result the next set
of data will not match the group buffer, leading to wrong results because
of the changed alloced_length.
Fixed by preserving the original maximum length in the
Cached_item_str's constructor and using this instead of the
alloced_length to limit the string to compare to.
Test case added.
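A hedged illustration of the affected pattern (the table name and the
max_sort_length value are made up): grouping on long string values whose
length is close to max_sort_length could previously split groups
incorrectly.

  SET SESSION max_sort_length = 1024;   -- illustrative value
  CREATE TABLE t_grp (txt TEXT);
  -- rows whose txt values agree in roughly the first 1024 bytes belong to
  -- the same group; before the fix, values near that length could be
  -- compared against a rounded-up buffer and end up in the wrong group
  SELECT txt, COUNT(*) FROM t_grp GROUP BY txt;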
Fix a regression (due to a typo) which caused spurious incorrect
argument errors for long data stream parameters if all forms of
logging were disabled (binary, general and slow logs).
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we go to row-based. In SBR, we
document it to remind user the tmpfile is left in a temporary
directory.
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we go to row-based. In SBR, we
document it to remind user the tmpfile is left in a temporary
directory.
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the MySQL server version (e.g. /*!50200 ... */),
is a special comment: the query inside it is executed only on servers
whose version is equal to or greater than the version appearing in the
comment. This leads to a security issue when the slave's version is
greater than the master's: a malicious user can elevate his privileges
on the slave, because the slave SQL thread runs with SUPER privileges
and can therefore execute queries for which he/she has no privileges
on the master.
This bug is fixed with the logic below:
- Replace '!' with ' ' in the version comments that were not applied on
the master, so they become ordinary comments and will not be applied on
the slave either.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/'
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/'
DELETE statement
Single-table delete ordered by a field that has a hash-type index
may cause an assertion failure or a crash.
An optimization added by the fix for bug 36569 forced the
optimizer to use ORDER BY-compatible indices when applicable.
However, the existence of unsorted indices (HASH index algorithm
for some engines such as MEMORY/HEAP, NDB) was ignored.
The test_if_order_by_key function has been modified to skip
unsorted indices.
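A minimal sketch (hypothetical table) of the failing pattern: a
single-table DELETE ordered by a column whose only index uses the HASH
algorithm.

  CREATE TABLE t_mem (a INT, KEY k_a (a) USING HASH) ENGINE=MEMORY;
  INSERT INTO t_mem VALUES (3), (1), (2);
  -- before the fix the optimizer could pick the unsorted HASH index to
  -- satisfy the ORDER BY, causing an assertion failure or a crash
  DELETE FROM t_mem ORDER BY a LIMIT 1;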
The problem is that the fix for Bug#29784 was mistakenly
reverted when updating YaSSL to a newer version.
The solution is to re-apply the fix and this time
actually add a meaningful test case so that possible
regressions are caught.
This patch also fixes Bug#55452 "SET PASSWORD is
replicated twice in RBR mode".
The goal of this patch is to remove the release of
metadata locks from close_thread_tables().
This is necessary to not mistakenly release
the locks in the course of a multi-step
operation that involves multiple close_thread_tables()
or close_tables_for_reopen().
By the same token, move statement commit outside of
close_thread_tables().
Other cleanups:
Cleanup COM_FIELD_LIST.
Don't call close_thread_tables() in COM_SHUTDOWN -- there
are no open tables there that can be closed (we leave
the locked tables mode in THD destructor, and this
close_thread_tables() won't leave it anyway).
Make open_and_lock_tables() and open_and_lock_tables_derived()
call close_thread_tables() upon failure.
Remove the calls to close_thread_tables() that are now
unnecessary.
Simplify the back off condition in Open_table_context.
Streamline metadata lock handling in LOCK TABLES
implementation.
Add asserts to ensure correct life cycle of
statement transaction in a session.
Remove a piece of dead code that has also become redundant
after the fix for Bug 37521.
The problem was that the optimize method of the ARCHIVE storage
engine was not preserving the FRM embedded in the ARZ file when
rewriting the ARZ file for optimization. The ARCHIVE engine stores
the FRM in the ARZ file so it can be transferred from machine to
machine without also copying the FRM -- the engine restores the
embedded FRM during discovery.
The solution is to copy over the FRM when rewriting the ARZ file.
In addition, some initial error checking is performed to ensure
garbage is not copied over.
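A small illustration (hypothetical table name) of the affected
operation; OPTIMIZE TABLE rewrites the ARZ file, and with the fix the
embedded FRM is carried over.

  CREATE TABLE t_arch (id INT, payload VARCHAR(64)) ENGINE=ARCHIVE;
  INSERT INTO t_arch VALUES (1, 'a'), (2, 'b');
  -- rewrites the ARZ file; the embedded FRM must be copied into the new
  -- file so the table definition can still be restored during discovery
  OPTIMIZE TABLE t_arch;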
table copy".
This patch only adds a test case, as the bug itself was addressed
by Ramil's fix for bug 50946 "fast index creation still seems
to copy the table".
The first problem was that SHOW CREATE TRIGGER took a stronger metadata
lock than required. This caused the statement to be blocked when it was
not needed. For example, LOCK TABLE WRITE in one connection would block
SHOW CREATE TRIGGER in another connection.
Another problem was that a SHOW CREATE TRIGGER statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TRIGGER is an
information statement. The consequence was that SHOW CREATE TRIGGER
was able to block other connections from accessing the table
(e.g. using ALTER TABLE).
This patch fixes the problem by changing SHOW CREATE TRIGGER to take
an MDL_SHARED_HIGH_PRIO metadata lock, similar to what is already done
for SHOW CREATE TABLE. The patch also changes SHOW CREATE TRIGGER to
explicitly release any metadata locks taken by the statement after
it completes.
Test case added to show_check.test.
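A hedged sketch of the first problem (the table and trigger names are
hypothetical):

  -- connection 1
  LOCK TABLE t1 WRITE;
  -- connection 2: before the fix this blocked waiting for a metadata
  -- lock, even though SHOW CREATE TRIGGER only reads metadata
  SHOW CREATE TRIGGER trg1;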
A change in the default values of some config parameters
caused this test to fail. Adjust the test and make it more
robust so that it does not fail for the same reason in the future.
concurrent SHOW CREATE
The problem was that a SHOW CREATE TABLE statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TABLE is an
information statement.
The consequence was that SHOW CREATE TABLE was able to block other
connections from accessing the table (e.g. using ALTER TABLE).
This patch fixes the problem by explicitly releasing any metadata
locks taken by SHOW CREATE TABLE after the statement completes.
Test case added to show_check.test.
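A hedged sketch of the scenario (hypothetical table name):

  -- connection 1
  START TRANSACTION;
  SHOW CREATE TABLE t1;
  -- before the fix, the metadata lock on t1 was kept until the
  -- transaction ended; with the fix it is released when the statement
  -- completes, so this no longer blocks
  -- connection 2
  ALTER TABLE t1 ADD COLUMN extra INT;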
SHOW DATABASES LIKE ... was not converting the pattern to lowercase for
comparison, as the documentation suggests.
Fixed it to behave similarly to SHOW TABLES LIKE ... and updated the
lowercase_table2 test case, which was failing on Mac OS X.
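A hedged illustration, assuming a database named 'test' and a
case-insensitive lower_case_table_names setting:

  -- with the fix, the pattern is lowercased for comparison just as it is
  -- for SHOW TABLES LIKE, so both of these match the 'test' database
  SHOW DATABASES LIKE 'test%';
  SHOW DATABASES LIKE 'TEST%';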
The reason for the bug above is unclear. The following changes should
either fix the bug, reduce its probability, or at least make the
analysis of failures easier:
- Modify pfs_upgrade so that its result is easier to analyze in case
something fails.
- Fix several minor weaknesses which could cause a succeeding test
(either an already existing one or one to be developed) to fail because
of imperfect cleanup, too slow disconnected sessions, etc.