updates
An attempt to execute a trigger or stored function with a multi-UPDATE
that used - but did not update - a table also used by the calling
statement led to an error. Read-only references to tables used in the
calling statement should be allowed.
The problem was caused by the fact that the check for conflicting use
of tables in SPs/triggers was performed in open_tables(), and in the
case of multi-UPDATE the exact lock types are not yet known at that
stage.
We solve the problem by moving this check to lock_tables(), so that it
is performed after the exact lock types for the tables used by the
multi-UPDATE have been determined.
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a union have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
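A minimal C++ sketch of the aggregation rule described above (the type
and function names are made up for illustration, not the server's
internal representation):

    #include <optional>
    #include <utility>
    #include <vector>

    // (M, D) as in FLOAT(M,D)/DOUBLE(M,D); std::nullopt stands for a
    // plain FLOAT/DOUBLE column with no precision specification.
    using PrecisionSpec = std::optional<std::pair<unsigned, unsigned>>;

    // Aggregate one result column across the SELECT branches of a UNION:
    // keep the (M,D) specification only if every branch declares exactly
    // the same one, otherwise fall back to an unconstrained FLOAT/DOUBLE.
    PrecisionSpec aggregate_union_precision(
        const std::vector<PrecisionSpec>& branches) {
      if (branches.empty()) return std::nullopt;
      for (const PrecisionSpec& b : branches)
        if (b != branches.front()) return std::nullopt;  // mismatch
      return branches.front();  // identical everywhere: keep FLOAT(M,D)
    }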
The problem is that the read and write methods of the shared memory
transport (protocol) didn't react to asynchronous close events, which
could lead to a lockup, as the client would wait (until timeout) for a
server response that would never come.
The solution is to also wait for close events while waiting for I/O
from or to the server.
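A minimal sketch of the waiting pattern, assuming Win32 event handles
for "data ready" and "connection closed" (the function and handle names
are illustrative, not the actual transport code):

    #include <windows.h>

    // Block on the data event *and* the close event, so an asynchronous
    // close from the other side wakes us up instead of leaving us
    // waiting for data that will never arrive.
    bool wait_for_data_or_close(HANDLE data_ready, HANDLE connection_closed,
                                DWORD timeout_ms) {
      HANDLE events[2] = {data_ready, connection_closed};
      DWORD rc = WaitForMultipleObjects(2, events, FALSE /* wait for any */,
                                        timeout_ms);
      if (rc == WAIT_OBJECT_0) return true;        // data available: read it
      if (rc == WAIT_OBJECT_0 + 1) return false;   // peer closed: give up
      return false;                                // timeout or error
    }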
When an alias was added after NAME_CONST, the alias name was
overwritten.
NAME_CONST now re-sets the field's name in fix_fields() only if there
is no alias; if an alias is present, NAME_CONST does not re-set the
field's name and keeps the existing one.
After a table is compressed by the myisampack utility, opening it in
the server produces valgrind warnings.
This happens because, when reading a record into the buffer, we always
assume that the remaining buffer is at least one word (2, 4 or 8 bytes)
long. Sometimes the remaining buffer is smaller than the word size, and
reading a full word then results in valgrind errors.
Fixed by reading byte by byte when we detect that the remaining buffer
is smaller than the word size.
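The technique, as a self-contained C++ sketch (not the actual MyISAM
code): copy whole words while a full word remains, then finish the tail
byte by byte so we never read past the end of the source buffer.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    void copy_words_then_bytes(uint8_t* dst, const uint8_t* src, size_t len,
                               size_t word_size /* 2, 4 or 8 */) {
      size_t i = 0;
      // Fast path: full words, only while at least word_size bytes remain.
      for (; i + word_size <= len; i += word_size)
        std::memcpy(dst + i, src + i, word_size);
      // Tail: fewer than word_size bytes left, read them one at a time.
      for (; i < len; ++i)
        dst[i] = src[i];
    }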
expired timeout on debx86-b in PB
Moved the resource-intensive test case for bug #41486 into
a separate test file to reduce execution time for mysql.test.
When doing an INSERT DELAYED operation, the time_zone info was not kept
with the row info, so when the row was actually inserted later, the
time_zone was not written into the binlog. This caused wrong results
for TIMESTAMP columns on the slave.
Our solution is to add the time_zone info to the delayed row and to
restore the time_zone from the row info when another thread executes
that row later. This way the correct time_zone info is written into the
binlog and the slave gets the correct result.
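A rough C++ sketch of the idea, with made-up structures (not the
server's session or delayed-insert classes): the queued row carries the
producer's time zone, and the thread that finally inserts it restores
that time zone before writing to the binlog.

    #include <string>
    #include <vector>

    struct DelayedRow {
      std::vector<unsigned char> record;  // packed row image
      std::string time_zone;              // e.g. "+05:00", captured at queue time
    };

    struct Session {
      std::string time_zone;
      void insert_and_binlog(const DelayedRow&) { /* ... */ }
    };

    void execute_delayed_row(Session& handler, const DelayedRow& row) {
      std::string saved = handler.time_zone;
      handler.time_zone = row.time_zone;  // restore the producer's time zone
      handler.insert_and_binlog(row);     // binlog now gets the right zone
      handler.time_zone = saved;
    }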
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements to this test
in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- move the subtests for the bugs 38499 and 36691 into separate scripts
- runtime under excessive parallel I/O load after applying the fix
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498
Don't throw an error after checking only the first and second
arguments. Continue checking the third and subsequent arguments, and if
one of them is stronger according to the coercibility rules, set that
argument's collation as the result collation.
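A simplified C++ sketch of scanning every argument instead of stopping
after the first two (the enum values are invented for the example and
the illegal-mix error handling is left out; all that matters here is
that a smaller value means a stronger coercibility):

    #include <vector>

    enum class Coercibility { Explicit = 0, Implicit = 2, Coercible = 4,
                              Ignorable = 5 };

    struct ArgCollation {
      const char* collation_name;
      Coercibility coercibility;
    };

    // Returns the collation of the strongest argument seen; a tie keeps
    // the earlier argument.
    const char* aggregate_result_collation(
        const std::vector<ArgCollation>& args) {
      const ArgCollation* best = nullptr;
      for (const ArgCollation& a : args)
        if (best == nullptr || a.coercibility < best->coercibility)
          best = &a;            // a stronger argument takes over the result
      return best ? best->collation_name : nullptr;
    }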
When loading a dump created by the mysqldump tool, an error is thrown
saying the storage engine for the table doesn't have an option.
mysqldump tries to re-insert the data into the FEDERATED table, which
causes the error. Since the data is already available on the remote
server, mysqldump shouldn't try to dump the data again for FEDERATED
tables.
As stated in the bug page, this can be considered similar to the MERGE
engine with its "view only" nature.
Fixed by adding the FEDERATED engine to the exception list so that its
data is ignored.
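Sketch of the exception-list idea in C++ (the list contents here are
only for illustration of the check, not a claim about mysqldump's exact
list):

    #include <set>
    #include <string>

    // Data for these engines is not stored locally, so a dump tool
    // should emit only the table definition, not the rows.
    bool should_dump_table_data(const std::string& engine) {
      static const std::set<std::string> ignore_data_engines = {
          "FEDERATED",   // rows live on the remote server
      };
      return ignore_data_engines.count(engine) == 0;
    }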
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
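A stand-alone C++ analogue of such an interface (not the real
batch_readline implementation): the reader reports through an
out-parameter whether the chunk was cut off at the buffer limit, and
the caller appends the newline only once it has seen the end of the
logical line.

    #include <cstddef>
    #include <cstring>
    #include <iostream>
    #include <string>

    // Returns false when the input is exhausted; *truncated tells the
    // caller whether the chunk in buf ends because the buffer filled up
    // rather than because a line terminator was reached.
    bool read_line_chunk(std::istream& in, char* buf, size_t buf_size,
                         bool* truncated) {
      if (!in.getline(buf, static_cast<std::streamsize>(buf_size), '\n')) {
        if (in.eof() && std::strlen(buf) == 0) return false;  // nothing left
        if (in.fail() && !in.eof()) {   // line did not fit into the buffer
          *truncated = true;
          in.clear();                   // continue with the rest next call
          return true;
        }
      }
      *truncated = false;               // complete line, '\n' was consumed
      return true;
    }

    void echo_lines(std::istream& in) {
      char buf[16];                     // deliberately tiny buffer
      bool truncated = false;
      std::string line;
      while (read_line_chunk(in, buf, sizeof(buf), &truncated)) {
        line += buf;
        if (!truncated) {               // only now is the line complete
          std::cout << line << '\n';    // newline appended exactly once
          line.clear();
        }
      }
      if (!line.empty()) std::cout << line << '\n';  // unterminated tail
    }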
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
depending on the client's max_allowed_packet value. This does not
actually make any sense, as those variables are not related. The input
buffer size limit is now always set to 1 MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified limit due
to insufficient checks in intern_read_line().
Since there is more than one duplicate value in the table, when adding the
unique index it is not deterministic which value will be reported as causing a
problem. Replace the reported value with '' so that it doesn't affect the
results.
--ignore-table option
mysqldump would correctly omit temporary tables for views, but would
incorrectly still emit all CREATE VIEW statements.
Backport a fix from 5.1: capture the names of the views we want to emit
in one pass (the placeholder tables), and in the pass where we actually
emit the views, skip any view that was not in that list.
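Schematically, as a C++ sketch (made-up names, and the tool itself is
not structured this way): the first pass collects the view names we
emitted placeholders for, the second pass emits CREATE VIEW only for
names in that set.

    #include <set>
    #include <string>

    struct ViewFilter {
      std::set<std::string> views_to_emit;

      // Pass 1: remember every view we wrote a placeholder table for.
      void placeholder_written(const std::string& view_name) {
        views_to_emit.insert(view_name);
      }

      // Pass 2: emit CREATE VIEW only if the name was collected in pass 1.
      bool should_emit_view(const std::string& view_name) const {
        return views_to_emit.count(view_name) != 0;
      }
    };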
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there is an error processing the statement)
print() can be called without a corresponding fix_fields() call.
Fixed by adding a check whether the Item is fixed before using the
arguments copy.
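A toy C++ illustration of the guard (made-up classes, not the server's
Item hierarchy): the printable copy of the arguments exists only after
fixing, so print() falls back to the original arguments when the object
was never fixed.

    #include <iostream>
    #include <string>
    #include <vector>

    struct AggregateFunc {
      std::vector<std::string> args;       // original arguments, always valid
      std::vector<std::string> args_copy;  // filled in by fix() only
      bool fixed = false;

      void fix() {
        args_copy = args;                  // the copy is initialized here
        fixed = true;
      }

      void print(std::ostream& out) const {
        // Use the copy only when it is known to be initialized.
        const std::vector<std::string>& a = fixed ? args_copy : args;
        for (const std::string& s : a) out << s << ' ';
      }
    };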
+ Fix for Bug#43114 wait_until_count_sessions too restrictive, random PB failures
+ Removal of a lot of other weaknesses found
+ modifications according to review
return no rows
The algorithm that determines the best key for loose index scan loops
over the available indexes and selects the one with the best cost.
It retrieves the parameters of the current index into a set of
variables.
If the cost of using the current index is lower than the best cost so
far, it copies these variables into another set of variables that holds
the information for the best index so far.
After having checked all the indexes, it uses these variables (outside
of the index loop) to create the table read plan object instance.
There was a single omission: the key_infix/key_infix_len variables were
used outside of the loop without being preserved in the loop for the
best index so far.
This caused these variables to be overwritten by the next index(es)
checked.
Fixed by adding variables to hold the data for the current index, passing
the new variables to the function that assigns values to them and copying
the new variables into the existing ones when selecting a new current best
index.
To avoid further such problems, the declarations of the variables used
to keep information about the current index were moved inside the
loop's compound statement.
Started the fix in 5.0 as the same issue exists there.
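Schematic C++ version of the pattern and the fix (names invented for
the example): everything the read plan needs, key_infix/key_infix_len
included, has to be copied into the best-so-far variables inside the
loop, otherwise it reflects whichever index happened to be examined
last.

    #include <limits>
    #include <vector>

    struct IndexCandidate {
      int id;
      double cost;
      std::vector<unsigned char> key_infix;  // the data that used to leak
      unsigned key_infix_len;
    };

    IndexCandidate pick_best_index(const std::vector<IndexCandidate>& all) {
      IndexCandidate best{};
      double best_cost = std::numeric_limits<double>::max();
      for (const IndexCandidate& cur : all) {  // per-index data lives here
        if (cur.cost < best_cost) {
          best_cost = cur.cost;
          best = cur;       // copy *everything*, key_infix/key_infix_len too
        }
      }
      return best;          // used after the loop to build the read plan
    }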
Revised the queries used, given what appears to be the scope of this
test, to select only the manipulated variables.
Added tests for values that are / are not multiples of 1024 to test
rounding / constraints.
This behavior is not currently documented (a docs bug has been opened).
There was a problem when a DELIMITER command was not the first command
on the line. In this case an extra line feed was added to the glob
buffer, and this was causing subsequent attempts to enter the new
delimiter to fail.
Fixed by not adding a newline to the glob buffer if the command being
added is a DELIMITER command.
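Condensed to its core as a C++ sketch (the real client's buffer
handling is more involved; this shows only the conditional newline):

    #include <string>

    // Append a command that shares the input line with others; a
    // DELIMITER command must not leave a stray '\n' in the pending
    // buffer, or the freshly set delimiter is never matched afterwards.
    void add_to_glob_buffer(std::string& glob_buffer, const std::string& piece,
                            bool command_is_delimiter) {
      glob_buffer += piece;
      if (!command_is_delimiter)   // the fix: skip the newline for DELIMITER
        glob_buffer += '\n';
    }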