Testing for the presence of entries in a hash from within the function
that fills in that hash creates a chicken-and-egg problem.
This resulted in test suite failures in mysql-pe in debug mode and
added a bad initialization dependency in 5.1.
Fixed by removing the debug code.
In RBR, statements operating on temporary tables should not be binlogged.
Despite this, after executing 'TRUNCATE ...' on a temporary table,
the statement was still logged, even in row-based mode. Consequently,
this caused problems on the slave, where the table may not exist,
resulting in an execution failure and ultimately causing the slave to
report an error and abort.
After this patch, a 'TRUNCATE ...' statement on a temporary table is no
longer binlogged in RBR.
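A minimal sketch of the scenario, assuming row-based logging is active
(the table name is hypothetical):

    SET SESSION binlog_format = 'ROW';
    -- Temporary tables are never created on the slave under RBR, so no
    -- statement operating on them should reach the binary log.
    CREATE TEMPORARY TABLE tmp1 (a INT);
    -- Before the patch this TRUNCATE was written to the binary log; a
    -- slave, having no tmp1, failed to apply it and aborted.
    TRUNCATE TABLE tmp1;
    SHOW BINLOG EVENTS;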
The problem is that the server could crash when attempting
to access a non-conformant proc system table. One such case
was a crash when invoking stored procedure related statements
on a 5.1 server with a proc system table in the 5.0 format.
The solution is to validate the proc system table format
before attempts to access it are made. If the table is not
in the format that the server expects, a message is written
to the error log and the statement that caused the table to
be accessed fails.
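A sketch of the failure scenario, under the assumption that the 5.1
server runs on a data directory taken from 5.0 without mysql_upgrade
having been run (the procedure name is hypothetical):

    -- mysql.proc is still in the 5.0 format here.
    CALL p1();
    -- Before the fix: the server could crash while reading mysql.proc.
    -- After the fix: the statement fails and a message about the
    -- unexpected mysql.proc format is written to the error log.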
This patch introduces a limit on the time the query cache lock can
block a SELECT. Other operations, which cause a change in the table
data, will still be blocked.
Bug #48370 Absolutely wrong calculations with GROUP BY and
decimal fields when using IF
Added test cases for the above two bugs for regression
testing.
Added additional tests that demonstrate an incomplete fix.
Added a new factory method for Field_new_decimal to
create a field from a (decimal-returning) Item.
The new method makes sure that all precision and
length variables are capped properly.
This is required because Items can have larger precision
than decimal fields allow, and must therefore be capped when
creating a field based on an Item type.
Fixed the wrong typecast to Item_decimal.
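A hypothetical sketch of the kind of query affected (names and values
are illustrative, not the original test case):

    CREATE TABLE t1 (d DECIMAL(10,2), g INT);
    INSERT INTO t1 VALUES (100.00, 1), (200.00, 1);
    -- IF() returns a decimal here; creating the tmp-table field for
    -- the aggregation from that Item with uncapped precision/length
    -- could produce wrong aggregate results before the fix.
    SELECT g, SUM(IF(g = 1, d, 0)) FROM t1 GROUP BY g;
    DROP TABLE t1;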
When merging ranges while calculating the result of OR applied
to two range sets, the current range may be made obsolete by the
resulting merged range.
The first overlapping range can be made obsolete as well.
Fixed by making the pointer to the first overlapping range point to
the resulting union range.
Added a few comments at key places in key_or().
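For illustration, a hypothetical query shape whose two range sets
overlap and are merged by key_or() (assuming key1 is indexed):

    SELECT * FROM t1
    WHERE (key1 BETWEEN 10 AND 20) OR (key1 BETWEEN 15 AND 30);
    -- The overlapping intervals merge into [10, 30]; the bug concerned
    -- pointers to ranges obsoleted by such a merge.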
Fixed two errors in the comp_err executable:
1. A wrong (off-by-one) length was passed to my_checksum().
2. strmov() was used on overlapping strings. This is
   not legal according to the documentation for stpcpy(). Used
   the overlap-safe memmove() instead.
Problem: Some system functions that could return different values on
master and slave were not marked unsafe. In particular:
GET_LOCK
IS_FREE_LOCK
IS_USED_LOCK
MASTER_POS_WAIT
RELEASE_LOCK
SLEEP
SYSDATE
VERSION
Fix: Mark these functions unsafe.
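A hypothetical illustration of the effect, assuming statement-based
logging:

    SET SESSION binlog_format = 'STATEMENT';
    CREATE TABLE t1 (s VARCHAR(64));
    -- After the fix, statements using the functions above are flagged
    -- as unsafe: a warning is raised here, and under MIXED format the
    -- statement is logged in row format instead.
    INSERT INTO t1 VALUES (VERSION());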
Fixed a problem with the test case when executed with ps-protocol.
There, the conflicting lock is noticed during prepare, not
during execution of the INSERT, leading to a different (but
equally appropriate) error message.
DELETE IGNORE
The ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG error was set in the
diagnostics area when it happened, but the DELETE cleanup code
never checked for a non-fatal error condition, thus trying to
set diag.area to "ok". This triggered an assert checking that
the diag.area was empty.
The fix is to test whether a non-fatal error condition exists
(thd->is_error()) before ok'ing the operation.
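A hypothetical repro sketch (table and function names are
illustrative):

    CREATE TABLE t1 (a INT);
    CREATE FUNCTION f1() RETURNS INT READS SQL DATA
      RETURN (SELECT COUNT(*) FROM t1);
    -- f1() reads t1 while the DELETE modifies it, raising
    -- ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG. With IGNORE this is
    -- non-fatal, and before the fix the cleanup code then hit the
    -- assert described above in debug builds.
    DELETE IGNORE FROM t1 WHERE f1() > 0;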
The problem was a "self-deadlock" if the connection issuing INSERT DELAYED
had both the global read lock (FLUSH TABLES WITH READ LOCK) and LOCK TABLES
mode active. The table being inserted into had to be different from the
table(s) locked by LOCK TABLES.
For INSERT DELAYED, the connection thread waits until the handler thread has
opened and locked its table before returning. But since the global read lock
was active, the handler thread would be unable to lock and would wait for the
global read lock to go away.
So the handler thread would be waiting for the connection thread to release
the global read lock while the connection thread was waiting for the handler
thread to lock the table. This gave a "self-deadlock" (same connection,
different threads).
The deadlock would only happen in LOCK TABLES mode, since the
INSERT otherwise tries to get protection against the global read lock
before starting the handler thread. It then notices that the global
read lock is owned by the same connection and reports
ER_CANT_UPDATE_WITH_READLOCK.
This patch removes the deadlock by reporting ER_CANT_UPDATE_WITH_READLOCK
also when we are inside LOCK TABLES mode.
Test case added to delayed.test.
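A hypothetical sketch of the deadlocking sequence, all in one
connection (the exact statement order is an assumption; t1 and t2 are
distinct tables):

    FLUSH TABLES WITH READ LOCK;  -- take the global read lock
    LOCK TABLES t1 READ;          -- enter LOCK TABLES mode
    -- Before the patch this statement hung: its handler thread blocked
    -- on the global read lock held by this very connection. After the
    -- patch it fails immediately with ER_CANT_UPDATE_WITH_READLOCK.
    INSERT DELAYED INTO t2 VALUES (1);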
WHERE conditions
check_group_min_max() checks if the loose index scan
optimization is applicable for a given WHERE condition, that is
if the MIN/MAX attribute participates only in range predicates
comparing the corresponding field with constants.
The problem was that it considered the whole predicate suitable
for the loose index scan optimization as soon as it encountered
a constant as a predicate argument. This is obviously wrong
when a constant is the first argument of a predicate
that does not satisfy the above condition.
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the test,
even after a constant has been encountered.
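A hypothetical shape of an affected query, where the constant is the
first argument of a predicate that is not a plain field-vs-constant
comparison:

    CREATE TABLE t1 (a INT, b INT, KEY (a, b));
    -- "2 = (b + 0)" starts with a constant; before the fix
    -- check_group_min_max() stopped checking at the constant and could
    -- wrongly allow loose index scan, yielding wrong results.
    SELECT a, MIN(b) FROM t1 WHERE 2 = (b + 0) GROUP BY a;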
The MySQL manual describes values of the YEAR(2) field type as follows:
values 00-69 mean the years 2000-2069 and values 70-99 mean the years
1970-1999. MIN/MAX and comparison functions were comparing them as
integer values, thus producing wrong results.
Now the Arg_comparator class is extended with a compare_year function
which performs correct comparison of the YEAR type.
The Item_sum_hybrid class now uses Item_cache and Arg_comparator objects
to calculate its value correctly.
To allow Arg_comparator to use the func_name() function for Item_func
and Item_sum objects, the func_name declaration is moved to the
Item_result_field class.
A helper function, is_owner_equal_func, is added to the Arg_comparator
class. It checks whether the owner of the Arg_comparator object is the
<=> function or not.
A helper function, setup, is added to the Item_sum_hybrid class. It sets
up the cache item and the comparator.
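A minimal illustration of the YEAR(2) semantics involved (the table
name is hypothetical):

    CREATE TABLE t1 (y YEAR(2));
    INSERT INTO t1 VALUES ('99'), ('00');  -- stored as 1999 and 2000
    -- Comparing the raw two-digit values made MAX(y) return 99 (1999)
    -- instead of 00 (2000) before the fix.
    SELECT MIN(y), MAX(y) FROM t1;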
init_read_record() - (records.cc:274)
Item_cond::used_tables_cache was accessed in
records.cc#init_read_record() without being initialized. It had
not been initialized because it was wrongly assumed that the
Item's variables would not be accessed, and hence
quick_fix_field() was used instead of fix_fields() to save a few
CPU cycles at creation time.
The fix is to properly initialize the Item by replacing
quick_fix_field() with fix_fields().