The problem is that XML functions (items) do not reset null_value
before their execution, so a subsequent item evaluation may reuse the
null_value left over from the previous result.
The fix is to reset null_value.
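As an illustration of the pattern (a minimal standalone sketch with a made-up
Evaluator type, not the server's Item code), the idea is to clear the cached
null flag at the start of every evaluation so a NULL from the previous call
cannot leak into the next result:

  // Illustrative sketch only: reset the sticky null flag before each
  // evaluation instead of relying on whatever the previous call left behind.
  #include <optional>
  #include <string>

  struct Evaluator {
    bool null_value = false;            // sticky result flag, like Item::null_value

    std::optional<std::string> eval(const std::string &input) {
      null_value = false;               // the fix: reset before each execution
      if (input.empty()) {              // stand-in for "the function returned NULL"
        null_value = true;
        return std::nullopt;
      }
      return input;                     // normal non-NULL result
    }
  };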
The server crashes on an assert in net_end_statement indicating that the
Diagnostics area wasn't set properly during execution.
This happened on a multi table DELETE operation using the IGNORE keyword.
The keyword is supposed to allow execution to continue on a best-effort
basis despite some non-fatal errors. Instead, execution stopped and no client
response was sent, which would have led to a protocol error if it hadn't been
for the assert.
This patch corrects this issue by checking for the existence of an IGNORE
option before setting an error state during row-by-row delete iteration.
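A hedged sketch of the intended control flow (hypothetical names, not the
actual multi-table DELETE code): a non-fatal per-row error is only turned into
an error state when IGNORE is absent, so with IGNORE the loop keeps going and
the statement can finish with a normal client response:

  #include <vector>

  enum class RowStatus { OK, NON_FATAL_ERROR, FATAL_ERROR };

  // Returns true if the statement completes and a reply can be sent.
  bool delete_rows(const std::vector<RowStatus> &rows, bool ignore_errors) {
    for (RowStatus s : rows) {
      if (s == RowStatus::FATAL_ERROR)
        return false;                    // always abort and report
      if (s == RowStatus::NON_FATAL_ERROR) {
        if (!ignore_errors)
          return false;                  // no IGNORE: raise the error state
        continue;                        // IGNORE: best effort, keep going
      }
      // delete the row ...
    }
    return true;
  }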
An attempt to execute a trigger or stored function containing a multi-UPDATE
which used - but didn't update - a table that was also used by
the calling statement led to an error. Read-only references to
tables used in the calling statement should be allowed.
This problem was caused by the fact that the check for conflicting
use of tables in SP/triggers was performed in open_tables(),
and in the case of multi-UPDATE we didn't know the exact lock type at
that stage.
We solve the problem by moving this check to lock_tables(), so
it can be performed after the exact lock types for the tables used by
the multi-UPDATE have been determined.
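A simplified, hypothetical illustration of the ordering change (not the real
open_tables()/lock_tables() signatures): the conflict check needs the final
lock type, which for multi-UPDATE is only known in the locking phase:

  #include <string>
  #include <vector>

  enum class LockType { UNKNOWN, READ, WRITE };

  struct TableRef {
    std::string name;
    LockType lock = LockType::UNKNOWN;   // multi-UPDATE: decided only later
    bool used_by_caller = false;         // also referenced by the calling statement
  };

  void open_tables(std::vector<TableRef> &) {
    // Open table definitions; lock types for multi-UPDATE are not final here,
    // so no conflict check is performed at this stage anymore.
  }

  bool lock_tables(std::vector<TableRef> &tables) {
    for (TableRef &t : tables) {
      if (t.lock == LockType::UNKNOWN) t.lock = LockType::READ;  // now resolved
      // The moved check: only a write lock on a caller's table is a conflict.
      if (t.used_by_caller && t.lock == LockType::WRITE) return false;
    }
    return true;
  }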
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a UNION have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
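A rough sketch of that aggregation rule, using made-up types rather than the
server's internals: the (M,D) specification is preserved only when every UNION
branch agrees on it, otherwise the column falls back to plain FLOAT/DOUBLE:

  #include <optional>
  #include <vector>

  struct FloatSpec {
    unsigned m;  // total digits
    unsigned d;  // digits after the decimal point
    bool operator==(const FloatSpec &o) const { return m == o.m && d == o.d; }
  };

  // nullopt means "plain FLOAT/DOUBLE without an (M,D) specification".
  std::optional<FloatSpec> aggregate(const std::vector<FloatSpec> &branches) {
    if (branches.empty()) return std::nullopt;
    for (const FloatSpec &s : branches)
      if (!(s == branches.front())) return std::nullopt;  // specs differ: fall back
    return branches.front();                              // all equal: preserve (M,D)
  }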
The problem is that the read and write methods of the shared
memory transport (protocol) didn't react to asynchronous close
events, which could lead to a lockup as the client would wait
(until timeout) for a server response that would never come.
The solution is to also wait for close events while waiting
for I/O from or to the server.
Bug report and patch submitted by: Armin Schöffmann
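A minimal Windows sketch of the waiting pattern, with assumed handle names
rather than the actual transport code: the close-connection event is waited on
together with the I/O event, so an asynchronous close wakes the client instead
of leaving it blocked until the timeout:

  #include <windows.h>

  // Returns true if data is ready, false if the peer closed or the wait failed.
  bool wait_for_data_or_close(HANDLE data_ready_event,        // assumed I/O event
                              HANDLE connection_closed_event, // assumed close event
                              DWORD timeout_ms) {
    HANDLE handles[2] = {data_ready_event, connection_closed_event};
    DWORD rc = WaitForMultipleObjects(2, handles, FALSE /* wait for any */,
                                      timeout_ms);
    return rc == WAIT_OBJECT_0;  // index 0 is the data event; anything else: give up
  }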
Including modifications according to code review,
plus a backport of the fix for
Bug 41932 funcs_1: is_collation_character_set_applicability path
too long for tar,
which was missing in 5.0 (just a renaming of two files).
When an alias is added after NAME_CONST, the alias name gets overwritten.
NAME_CONST now re-sets the field's name in fix_fields() only if there is
no alias; if there is an alias, NAME_CONST does not re-set the field's
name and keeps the existing one.
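A tiny, hypothetical sketch of that rule (not the server's Item_name_const
code): the literal name is assigned only when no alias was supplied, so a
user-given alias survives fix_fields():

  #include <string>

  struct Column {
    std::string name;        // empty until named
    bool has_alias = false;  // true when the user wrote "... AS alias"
  };

  void name_const_fix_fields(Column &col, const std::string &literal_name) {
    if (!col.has_alias)      // the fix: do not overwrite a user-supplied alias
      col.name = literal_name;
  }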
After a table is compressed by the myisampack utility,
opening the table in the server produces valgrind warnings.
This happens because when we read a record into the buffer
we always assume that the remaining buffer to read is at least
a full word (2, 4 or 8 bytes). Sometimes the remaining buffer
is smaller than the word size, and reading an entire word then
results in valgrind errors.
Fixed by reading byte by byte when we detect that the remaining
buffer is smaller than the word size.
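An illustrative standalone sketch of the copying strategy (a hypothetical
helper, not the actual MyISAM unpack code): full machine words are copied
while a whole word remains, and the tail is copied byte by byte so the read
never goes past the end of the buffer:

  #include <cstddef>
  #include <cstring>

  void copy_record(unsigned char *dst, const unsigned char *src, size_t len) {
    const size_t word = sizeof(unsigned long);   // 4 or 8 bytes
    while (len >= word) {                        // safe to read a whole word
      std::memcpy(dst, src, word);
      dst += word; src += word; len -= word;
    }
    while (len--)                                // remaining tail: byte by byte
      *dst++ = *src++;
  }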
expired timeout on debx86-b in PB
Turned off the general log when importing the DB dump in the test
case for bug #41486, due to a bug in the CSV engine code that
makes logging long SQL queries too slow.
expired timeout on debx86-b in PB
Moved the resource-intensive test case for bug #41486 into
a separate test file to reduce execution time for mysql.test.
LOAD_FILE
LOAD_FILE is not safe to replicate in STATEMENT mode, because it
depends on a file (which is loaded on the master and may not exist on
the slave(s)). This leads to scenarios in which the slave replicates the
statement with LOAD_FILE and tries to load the file from its local
file system. Given that the file may not exist on the slave filesystem,
the operation will not succeed (probably returning NULL), causing
master and slave(s) to diverge. However, when using MIXED mode
replication, this can be made to work if a statement including
LOAD_FILE is marked as unsafe, triggering a switch to ROW mode,
meaning that the contents of the file are written to the binlog as row
events. Consequently, the contents of the file on the master
reach the slave via the binlog.
This patch addresses the bug by marking the load_file function as
unsafe. When in mixed mode, issuing LOAD_FILE triggers
a switch to row mode. Furthermore, when in statement mode,
LOAD_FILE raises a warning that the statement is unsafe in that
mode.
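A hedged sketch of that decision, with made-up enums rather than the server's
binlog machinery: an unsafe statement is logged as row events under MIXED and
only produces a warning under STATEMENT:

  #include <iostream>

  enum class BinlogFormat { STATEMENT, MIXED, ROW };

  BinlogFormat effective_format(BinlogFormat configured, bool stmt_is_unsafe) {
    if (!stmt_is_unsafe) return configured;
    switch (configured) {
      case BinlogFormat::MIXED:
        return BinlogFormat::ROW;          // switch to row events for this statement
      case BinlogFormat::STATEMENT:
        std::cerr << "Warning: statement is unsafe for statement-based logging\n";
        return BinlogFormat::STATEMENT;    // logged anyway, with a warning
      case BinlogFormat::ROW:
        return BinlogFormat::ROW;
    }
    return configured;
  }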
The problem is that after a disconnect, the DROP TEMPORARY TABLE event had
not yet been written into the binlog. So after syncing with the slave, the
TEMPORARY table on the slave was not removed.
The fix is to wait for the DROP TEMPORARY TABLE event to be written into the
binlog before syncing the slave with the master.
When an 'INSERT DELAYED' operation is performed, the time_zone info is not
kept with the row info. So when the insert is actually executed some time
later, the time_zone is not written into the binlog.
This causes wrong results for TIMESTAMP columns on the slave.
Our solution is to add the time_zone info to the delayed row and to
restore the time_zone from the row info when that row is later executed by
another thread.
This way the correct time_zone info is written into the binlog and the slave
gets correct results.
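A simplified, hypothetical sketch of the idea (not the actual delayed-insert
code): each queued row carries the client's time zone, and the handler thread
restores it before executing and binlogging the row:

  #include <queue>
  #include <string>

  struct DelayedRow {
    std::string values;     // the row data to insert (stand-in)
    std::string time_zone;  // captured from the client session at queue time
  };

  struct HandlerThread {
    std::string current_time_zone;  // session time zone of the delayed thread

    void process(std::queue<DelayedRow> &q) {
      while (!q.empty()) {
        DelayedRow row = q.front(); q.pop();
        current_time_zone = row.time_zone;  // restore before insert + binlog write
        // ... perform the insert and write the binlog event with this time zone
      }
    }
  };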
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements to this test
  in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- move the subtests for bugs 38499 and 38691 into separate scripts
- runtime under excessive parallel I/O load after applying the fix
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498