When an 'INSERT DELAYED' operation is performed, the time_zone information is not kept with the row.
So when the actual insert happens some time later, the time_zone is not written into the binlog.
This causes wrong results for TIMESTAMP columns on the slave.
Our solution is to store the time_zone information together with the delayed row and to
restore the time_zone from the row info when that row is executed later by another thread.
This way we write the correct time_zone information into the binlog and get the correct result on the slave.
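A minimal, self-contained sketch of the idea; all names here are hypothetical stand-ins for the server's delayed-insert structures, not the actual patch:

    #include <iostream>
    #include <queue>
    #include <string>

    // Hypothetical stand-ins for the server's session and time zone objects.
    struct TimeZone { std::string name; };

    struct DelayedRow {
      std::string record;     // the packed row data
      TimeZone    time_zone;  // time zone captured from the client thread
    };

    struct Session {
      TimeZone time_zone;     // zone used when the row is written and binlogged
    };

    // Client thread: queue the row together with its time zone.
    void queue_row(std::queue<DelayedRow> &q, const Session &client,
                   std::string rec) {
      q.push(DelayedRow{std::move(rec), client.time_zone});
    }

    // Delayed-insert handler thread: restore the time zone before executing
    // the row, so the binlog event carries the client's zone.
    void handle_row(std::queue<DelayedRow> &q, Session &handler) {
      DelayedRow row = q.front();
      q.pop();
      handler.time_zone = row.time_zone;  // restore the client's zone
      std::cout << "insert '" << row.record << "' with time_zone "
                << handler.time_zone.name << "\n";
    }

    int main() {
      std::queue<DelayedRow> q;
      Session client{{"Europe/Moscow"}};
      Session handler{{"SYSTEM"}};
      queue_row(q, client, "row1");
      handle_row(q, handler);
    }
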
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements of this test
in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- move the subtests for the bugs 38499 and 38691 into separate scripts
- runtime under excessive parallel I/O load after applying the fix
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498
Don't throw an error after checking only the first and the second arguments.
Continue checking the third and subsequent arguments, and if one of
them is stronger according to the coercibility rules,
then that argument's collation is set as the result collation.
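A simplified sketch of that aggregation loop (a toy model: lower coercibility value means stronger, and the conflict/error handling of the real rules is left out):

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Simplified stand-in for an argument's collation and coercibility
    // (lower coercibility value = stronger).
    struct ArgCollation {
      std::string collation;
      int coercibility;
    };

    // Aggregate over *all* arguments instead of stopping after the first two:
    // whenever a later argument is stronger, its collation becomes the result.
    ArgCollation aggregate_collations(const std::vector<ArgCollation> &args) {
      ArgCollation result = args[0];
      for (std::size_t i = 1; i < args.size(); ++i)
        if (args[i].coercibility < result.coercibility)
          result = args[i];
      return result;
    }

    int main() {
      std::vector<ArgCollation> args = {
        {"latin1_swedish_ci", 4},  // coercible constant
        {"latin1_swedish_ci", 4},  // coercible constant
        {"utf8_general_ci",   2},  // implicit column collation: stronger
      };
      std::cout << aggregate_collations(args).collation << "\n";  // utf8_general_ci
    }
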
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
an option.
mysqldump tries to re-insert the data into the federated
table, which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated in the bug page, this can be considered similar
to the MERGE engine with its "view only" nature.
Fixed by adding the FEDERATED engine to the exception
list so that its data is ignored.
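In outline, the data-dump decision becomes a check against an engine-name exception list; a hedged sketch (the exact list and function names in mysqldump differ):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Engines whose row data mysqldump should not dump; FEDERATED is added
    // because the rows already live on the remote server.
    static const std::vector<std::string> ignore_data_engines = {
      "MRG_MyISAM", "MRG_ISAM", "FEDERATED"
    };

    bool should_dump_data(const std::string &engine) {
      return std::find(ignore_data_engines.begin(), ignore_data_engines.end(),
                       engine) == ignore_data_engines.end();
    }

    int main() {
      std::cout << should_dump_data("MyISAM")      // 1: dump the rows
                << should_dump_data("FEDERATED")   // 0: structure only
                << "\n";
    }
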
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. It does
not actually make any sense, as those variables are not
related. The input buffer size limit is now always set to 1
MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
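A self-contained sketch of the changed interface idea; this is a simplified reimplementation, not the actual batch_readline() code:

    #include <cstdio>
    #include <cstring>
    #include <iostream>

    // Read at most buf_size-1 bytes of one line; *truncated is set when the
    // line did not fit, so the caller knows not to append its own newline
    // and to keep reading the remainder of the line.
    char *readline_sketch(std::FILE *in, char *buf, std::size_t buf_size,
                          bool *truncated) {
      if (!std::fgets(buf, static_cast<int>(buf_size), in))
        return nullptr;                    // EOF or error
      std::size_t len = std::strlen(buf);
      if (len && buf[len - 1] == '\n') {
        buf[len - 1] = '\0';               // a complete line was read
        *truncated = false;
      } else {
        *truncated = !std::feof(in);       // no newline seen: the line was cut
      }
      return buf;
    }

    int main() {
      char buf[16];
      bool truncated;
      while (readline_sketch(stdin, buf, sizeof(buf), &truncated))
        std::cout << buf << (truncated ? "" : "\n");  // re-add '\n' only if whole
    }
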
Since there is more than one duplicate value in the table, when adding the
unique index it is not deterministic which value will be reported as causing a
problem. Replace the reported value with '' so that it doesn't affect the
results.
--ignore-table option
mysqldump would correctly omit temporary tables for views, but would
incorrectly still emit all CREATE VIEW statements.
Backport a fix from 5.1: in one pass we capture the names we want to
emit views for (the placeholder tables), and in the pass where
we actually emit the views we skip any view that is not in that
list.
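A toy sketch of the two-pass idea (hypothetical data, not mysqldump's real data structures):

    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> all_views = {"v1", "v2", "v3"};
      std::set<std::string> ignored = {"v2"};  // e.g. --ignore-table=db.v2

      // Pass 1: emit placeholder tables and remember which view names survived.
      std::set<std::string> seen_views;
      for (const std::string &v : all_views)
        if (!ignored.count(v))
          seen_views.insert(v);              // placeholder table emitted for v

      // Pass 2: only emit CREATE VIEW for names captured in pass 1.
      for (const std::string &v : all_views)
        if (seen_views.count(v))
          std::cout << "CREATE VIEW " << v << " ...\n";
    }
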
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there's an error processing the statement)
the print() can be called with no corresponding fix_fields() call.
Fixed by checking whether the Item is fixed before using the copy of
the arguments.
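A minimal model of that guard (toy types, not the server's Item classes):

    #include <iostream>
    #include <vector>

    // Simplified model: the copy of the original arguments only becomes valid
    // in fix_fields(), so print() must fall back to the current arguments when
    // the item has not been fixed (e.g. on an error path).
    struct AggregateItem {
      bool fixed = false;
      std::vector<int> args = {1, 2};   // current arguments
      std::vector<int> orig_args;       // copy made during fix_fields()

      void fix_fields() {
        orig_args = args;
        fixed = true;
      }

      void print() const {
        const std::vector<int> &use = fixed ? orig_args : args;
        for (int a : use)
          std::cout << a << ' ';
        std::cout << '\n';
      }
    };

    int main() {
      AggregateItem item;
      item.print();       // error path: fix_fields() never ran, still safe
      item.fix_fields();
      item.print();       // normal path: uses the preserved original arguments
    }
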
+ Fix for Bug#43114 wait_until_count_sessions too restrictive, random PB failures
+ Removal of a lot of other weaknesses found
+ modifications according to review
return no rows
The algorithm that determines the best key for loose index scan loops
over the available indexes and selects the one with the best cost.
It retrieves the parameters of the current index into a set of variables.
If the cost of using the current index is lower than the best cost so far it
copies these variables into another set of variables that contain the
information for the best index so far.
After having checked all the indexes it uses these variables (outside of the
index loop) to create the table read plan object instance.
There was a single omission: the key_infix/key_infix_len variables were used
outside of the loop without being preserved in the loop for the best index
so far.
This caused these variables to get overwritten by the next index(es) checked.
Fixed by adding variables to hold the data for the current index, passing
the new variables to the function that assigns values to them and copying
the new variables into the existing ones when selecting a new current best
index.
To avoid further such problems, the declarations of the variables used
to keep information about the current index were moved inside the loop's
compound statement.
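A self-contained sketch of the pattern, with hypothetical names: the data describing the current index lives inside the loop, and is copied into the best-so-far variables only when a new best index is selected:

    #include <cstring>
    #include <iostream>
    #include <vector>

    struct IndexCandidate {
      const char *name;
      double      cost;
      const char *key_infix;       // data that must survive for the winner
      std::size_t key_infix_len;
    };

    int main() {
      std::vector<IndexCandidate> indexes = {
        {"idx_a", 3.0, "ab", 2}, {"idx_b", 1.5, "xyz", 3}, {"idx_c", 2.0, "q", 1}};

      // Best-so-far variables live outside the loop and are only written when
      // a new best index is selected.
      double      best_cost      = 1e100;
      const char *best_name      = "";
      char        best_infix[16] = "";
      std::size_t best_infix_len = 0;

      for (const IndexCandidate &cur : indexes) {
        // 'cur' describes the current index only; copying into the best-so-far
        // variables here prevents later iterations from clobbering the winner.
        if (cur.cost < best_cost) {
          best_cost = cur.cost;
          best_name = cur.name;
          std::memcpy(best_infix, cur.key_infix, cur.key_infix_len);
          best_infix_len = cur.key_infix_len;
        }
      }
      std::cout << best_name << " infix_len=" << best_infix_len << "\n";
    }
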
Started fix in 5.0 as the same issue is here.
Revised the queries used, given what appears to be the scope of this test, so that they only select the manipulated variables.
Added tests for values that are / are not multiples of 1024 to test rounding / constraints.
This behavior is not currently documented (a docs bug has been opened).
Problem: when storing "SELECT ... INTO @var ..." results in variables, we used the val_xxx()
methods, which return results for the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use val_xxx_result() counterparts to get proper results.
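A toy illustration of the difference (not the server's Item API, just a model of "current row value" vs. "buffered group result"):

    #include <iostream>

    // Toy model: val_int() reads the row currently being evaluated, while
    // val_int_result() reads the value buffered for the group being output
    // (which is what belongs in @var).
    struct GroupedItem {
      long current_row_value = 0;  // updated while scanning the next group
      long group_result      = 0;  // frozen value for the finished group

      long val_int() const        { return current_row_value; }
      long val_int_result() const { return group_result; }
    };

    int main() {
      GroupedItem item;
      item.group_result      = 42;  // value of the group being sent to @var
      item.current_row_value = 7;   // first row of the next group

      std::cout << "val_int()        = " << item.val_int() << "\n";         // wrong: 7
      std::cout << "val_int_result() = " << item.val_int_result() << "\n";  // right: 42
    }
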
There was a problem when a DELIMITER command is not the first
command on the line. In this case an extra line feed was added
to the glob buffer, and this was causing subsequent attempts
to enter this delimiter to fail.
Fixed by not adding a newline to the glob buffer if the
command being added is a DELIMITER.
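A toy model of the accumulation logic (hypothetical, much simpler than the client's real add_line()):

    #include <iostream>
    #include <string>

    // Toy model of the client's line accumulation: when the command being
    // appended is DELIMITER, skip the trailing newline so later input is
    // matched against the new delimiter correctly.
    void add_line_sketch(std::string &glob_buffer, const std::string &line,
                         bool is_delimiter_command) {
      glob_buffer += line;
      if (!is_delimiter_command)
        glob_buffer += '\n';   // the old code appended this unconditionally
    }

    int main() {
      std::string glob_buffer;
      add_line_sketch(glob_buffer, "SELECT 1;", false);
      add_line_sketch(glob_buffer, "DELIMITER //", true);  // no stray '\n'
      std::cout << glob_buffer << "\n";
    }
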
Both of our own implementations of rint(3) were inconsistent with the
most common behavior of rint() on those platforms that have it: round
to nearest, break ties by rounding to nearest even.
Fixed by leaving just one implementation of rint() in our source tree,
and changing its behavior to match the most common native
implementations on other platforms.
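For reference, the behaviour being matched, shown with the standard library rint():

    #include <cfenv>
    #include <cmath>
    #include <cstdio>

    // With the default rounding mode, rint() rounds to the nearest integer and
    // breaks ties by choosing the even neighbour, so 2.5 and 3.5 do not both
    // round away from zero.
    int main() {
      std::fesetround(FE_TONEAREST);       // the default rounding mode
      std::printf("%.1f %.1f %.1f %.1f\n",
                  std::rint(1.5),          // 2.0
                  std::rint(2.5),          // 2.0  (tie -> even)
                  std::rint(3.5),          // 4.0
                  std::rint(-2.5));        // -2.0 (tie -> even)
    }
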
A signed integer format specifier forced the binlog header to print server_id
as negative when the unsigned value has the sign bit set.
Fixed by correcting the specifier to correspond to typeof(server_id) == ulong.
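The effect in isolation (plain printf behaviour, nothing MySQL-specific):

    #include <climits>
    #include <cstdio>

    // server_id is an unsigned long; printing it with a signed conversion makes
    // values with the sign bit set come out negative.
    int main() {
      unsigned long server_id = ULONG_MAX;              // sign bit set
      std::printf("signed:   %ld\n", (long)server_id);  // typically prints -1
      std::printf("unsigned: %lu\n", server_id);        // prints the real value
    }
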
Moved the test case for the bug into a separate file (and restored the
original innodb_mysql test setup).
Used the new wait_show_condition test macro to avoid the usage of sleep
Replaced Unix calls with mysql-test-run's built-in functions / SQL manipulation where possible.
Replaced error codes with error names as well.
Disabled two tests on Windows due to more complex Unix command usage
See Bug#41307, Bug#41308
mysqldump included character_set_client magic
that is unknown before 4.1 even when asked for
an appropriate compatibility mode.
In compatibility (3.23, 4.0) mode, we do not
output charset statements (not even in a
"comment conditional"), nor do we do magic on
the server, even if the server is sufficiently
new (4.1+). Table names will be output converted
to the charset requested by mysqldump; if such
a conversion is not possible (Ivrit -> Latin),
mysqldump will fail.
connections
The problem is that tables can enter the open table cache for a thread without
being properly cleaned up. This can happen if make_join_statistics() fails
to read a const table because of e.g. a deadlock. It does set a member of the
TABLE structure to a value it allocates, but doesn't clean up this setting
on error, nor does it set the rest of the members in JOIN to allow for
automatic cleanup.
As a result, when such an error occurs and the next statement re-uses
the table from the open tables cache, it will get it with
TABLE::reginfo.join_tab pointing to a memory area that has been freed.
Fixed by making sure make_join_statistics() cleans up TABLE::reginfo.join_tab
on error.
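A toy model of the dangling-pointer scenario and the cleanup (hypothetical types, not the server's TABLE/JOIN structures):

    #include <iostream>

    // Toy model: a cached object keeps a pointer that is set during statement
    // execution and must be reset on the error path, or the next statement
    // reusing the cached object sees a dangling pointer.
    struct JoinTab { int id; };

    struct CachedTable {
      JoinTab *join_tab = nullptr;  // stand-in for TABLE::reginfo.join_tab
    };

    bool join_statistics_sketch(CachedTable &t, bool fail) {
      JoinTab *jt = new JoinTab{1};
      t.join_tab = jt;
      if (fail) {                   // e.g. reading a const table hit a deadlock
        delete jt;
        t.join_tab = nullptr;       // the fix: clean up before returning
        return false;
      }
      return true;
    }

    int main() {
      CachedTable cached;           // lives on in the open-table cache
      join_statistics_sketch(cached, /*fail=*/true);
      std::cout << (cached.join_tab == nullptr ? "clean" : "dangling") << "\n";
      delete cached.join_tab;       // no-op here; cleanup for the success path
    }
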
In the case of a ROW item, each compared pair does not
check whether the argument collations can be aggregated, and
thus the appropriate item conversion does not happen.
The fix is to add the check and conversion for ROW
pairs.
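A toy sketch of the per-pair step that the fix adds (simplified; the real aggregation follows the server's coercibility rules and may create conversion items):

    #include <iostream>
    #include <string>
    #include <vector>

    // Toy model: in a ROW comparison every (left, right) pair has to go through
    // the same collation aggregation/conversion step that scalar comparisons
    // already perform; the fix adds this per-pair call.
    struct ComparedPair {
      std::string left_cs;
      std::string right_cs;
    };

    void aggregate_pair_charsets(ComparedPair &p) {
      if (p.left_cs == p.right_cs)
        return;                                  // nothing to do
      // Pretend one side is convertible and agree on a common collation.
      p.left_cs = p.right_cs = "utf8_general_ci";
    }

    int main() {
      std::vector<ComparedPair> row_pairs = {
        {"latin1_swedish_ci", "utf8_general_ci"},
        {"utf8_general_ci",   "utf8_general_ci"}};

      for (ComparedPair &pair : row_pairs)       // the added per-pair check
        aggregate_pair_charsets(pair);

      for (const ComparedPair &pair : row_pairs)
        std::cout << pair.left_cs << " vs " << pair.right_cs << "\n";
    }
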