WHERE conditions
check_group_min_max() checks whether the loose index scan
optimization is applicable for a given WHERE condition, that is,
whether the MIN/MAX attribute participates only in range predicates
comparing the corresponding field with constants.
The problem was that it considered the whole predicate suitable
for the loose index scan optimization as soon as it encountered
a constant as a predicate argument. This is obviously wrong in
cases where a constant is the first argument of a predicate
that does not satisfy the above condition.
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the test,
even after a constant has already been encountered.
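A hypothetical query of the affected shape (table and column names are
illustrative only, assuming an index on (a, b)); the constant 3 is the first
argument of a predicate that is not a plain range comparison of b with a
constant, so loose index scan must not be chosen here:

    SELECT a, MIN(b) FROM t1 WHERE 3 > b + 1 GROUP BY a;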
The MySQL manual describes values of the YEAR(2) field type as follows:
values 00 - 69 mean the years 2000 - 2069 and values 70 - 99 mean the years
1970 - 1999. MIN/MAX and comparison functions were comparing them as plain
integer values, thus producing wrong results.
Now the Arg_comparator class is extended with a compare_year function which
performs correct comparison of the YEAR type.
The Item_sum_hybrid class now uses Item_cache and Arg_comparator objects to
correctly calculate its value.
To allow Arg_comparator to use the func_name() function for Item_func and Item_sum
objects, the func_name() declaration is moved to the Item_result_field class.
A helper function is_owner_equal_func is added to the Arg_comparator class.
It checks whether the Arg_comparator object's owner is the <=> function or not.
A helper function setup is added to the Item_sum_hybrid class. It sets up
the cache item and the comparator.
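A small illustration of the documented YEAR(2) semantics (table name is
illustrative):

    CREATE TABLE t1 (y YEAR(2));
    INSERT INTO t1 VALUES (99), (01);  -- stored as 1999 and 2001
    -- Plain integer comparison would report MIN = 01 and MAX = 99;
    -- the correct result is MIN = 99 (1999) and MAX = 01 (2001).
    SELECT MIN(y), MAX(y) FROM t1;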
Post-push fix: Removed MTRv1 arguments according to the
original patch. Although there is a version check, the patch
was pushed to a 5.1 GA staging tree, while the version check
considers version 5.2. This caused the deprecated parameters
to be used even though they are no longer valid.
Part of MTRv1 is currently used in the RQG semisync test, and this
was causing the test to fail on slave startup.
It should be safe to uncomment when merging up to celosia.
After the fix for Bug#46322, logging to tables is turned off by default; each test
that needs it should turn it on in its .opt file.
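(For instance, a test that still needs the general log in a table can pass the
standard --log-output=table server option through its .opt file.)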
Add suppression to unsafe statement warnings for binlog_unsafe.test.
Replication info files are not being flushed and synced when the
command 'STOP SLAVE' is issued. This means that one cannot just
rely on existing values on those files when the slave has been
stopped. Having consistent, uncorrupted and up-to-date info files
when stopping the slave would be most useful, for instance, for
snapshotting purposes (a procedure that is often used for
restoring slaves).
This patch addresses this by instrumenting the
terminate_slave_threads function so that it also flushes and
syncs the *info files as well as the relay log whenever it gets
called, i.e., on 'STOP SLAVE'. Although this imposes a performance
trade-off (specifically when stopping the slave), it should have
no negative influence on overall replication performance (the impact
is only noticeable on 'STOP SLAVE').
init_read_record() - (records.cc:274)
Item_cond::used_tables_cache was accessed in
records.cc#init_read_record() without being initialized. It had
not been initialized because it was wrongly assumed that the
Item's variables would not be accessed, and hence
quick_fix_field() was used instead of fix_fields() to save a few
CPU cycles at creation time.
The fix is to properly initialize the Item by replacing
quick_fix_field() with fix_fields().
A statement that updates more than one table with autoinc
columns was marked as unsafe only in mixed mode, so the unsafe
warning could not be produced in statement mode.
To fix the problem, the statement is now marked as unsafe in statement mode too.
This patch borrows ideas, text and code from Kristofer
Pettersson's patch.
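One possible shape of such a statement (hypothetical tables; a single
INSERT generates AUTO_INCREMENT values in two different tables via a
trigger):

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
    CREATE TABLE t2 (id INT AUTO_INCREMENT PRIMARY KEY, b INT);
    CREATE TRIGGER tr1 AFTER INSERT ON t1
      FOR EACH ROW INSERT INTO t2 (b) VALUES (NEW.a);
    -- This one statement updates two tables with autoinc columns and should
    -- now produce the unsafe warning in statement mode as well.
    INSERT INTO t1 (a) VALUES (1);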
An assignment of a system variable sharing the same base
name as a declared stored procedure variable in the same
context could lead to a crash.
The reason was that during the parsing of the syntactic
rule 'option_value' an uninitialized set_var object was
pushed to the parameter stack of the SET statement. The
parent rule 'option_type_value' interpreted the existence
of variables on the parameter stack as an assignment and
wrapped it in a sp_instr_set object.
When the procedure was later executed, an attempt was made
to run the method 'check()' on an uninitialized member
object (NULL value) belonging to the previously created
but uninitialized set_var object.
This patch refactors the 'internal_variable_name' rule and
copies the semantic analysis part to the depending parent
rule: 'option_value'. This makes it possible to account
for any prefixes affecting the interpretation of the
internal_variable_name.
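A hypothetical illustration of the affected pattern (the procedure name and
the concrete system variable are examples only):

    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE sort_buffer_size INT DEFAULT 0;   -- same base name as a system variable
      SET GLOBAL sort_buffer_size = 262144;     -- assignment of the system variable
    END//
    DELIMITER ;
    CALL p1();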
Create a set of test cases to check whether certain DDL statements implicitly commit
a transaction on NDB and are written directly to the binary log without
going through either the Statement- or the Transactional-Cache.
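A sketch of the kind of scenario covered (table names are illustrative):

    CREATE TABLE t1 (a INT) ENGINE=NDB;
    BEGIN;
    INSERT INTO t1 VALUES (1);
    -- DDL such as the following implicitly commits the open transaction and
    -- is expected to go directly to the binary log, bypassing both caches.
    CREATE TABLE t2 (b INT) ENGINE=NDB;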
deadlock was encountered
The bug is caused by an inconsistent handling of the IGNORE
clause. A read from a const table caused a lock timeout
(ER_LOCK_TIMEOUT) in InnoDB. Since the IGNORE clause was
given, the timeout was converted into a warning instead of
an error, thus not populating the diagnostics area. When
InnoDB subsequently marked the transaction for rollback,
the server hit an assertion since the diagnostics area was empty.
This patch consists of only a test case, as the bug itself
was fixed by the patch for Bug #46539
on any access
The Archive engine in 5.1 (and later) versions uses a modified
version of zlib (azlib). These two versions are incompatible,
so a proper upgrade is needed before tables created in 5.0
can be used reliably.
This upgrade can be performed using repair, but due to the lack
of testing it is risky to allow the upgrade for now. This patch addresses
only the crashing issue; any attempt to repair will be blocked.
Eventually repair can be allowed to run through (which will also
cause an upgrade from the older version to the newer one), but only after
thorough testing.
The crash occurs because SAFEMALLOC is defined for the MySQL server
but not for the Archive or Federated engines, resulting in a
parameter mismatch between the function prototype and definition
for functions using the CALLER_INFO macro.
memory
The server was doing a bad class typecast, causing a wrong
value to be set for the maximum number of items in an internal
structure used in equality propagation.
Fixed by not doing the wrong typecast and by asserting the type
of the Item where the cast should be done.
values
We should re-set the access method functions when changing the access
method while switching to another index to avoid sorting.
Fixed by doing a little re-engineering: encapsulating all the function
assignments into a dedicated function and calling it when flipping the
indexes.
When values of different types are compared, they are converted to a type that
allows correct comparison. This conversion is done for each comparison and
takes some time. When a constant is being compared, it is possible to cache its
value after conversion to speed up comparisons. In some cases (large data set,
complex WHERE condition with many type conversions) a query might be executed
7% faster.
A test case isn't provided because all changes are internal and aren't visible
from the outside.
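As an illustration (table and column names are hypothetical), in a query such
as the following the string literal must be converted to a temporal value to
be compared with the DATETIME column; caching the converted constant avoids
repeating the conversion for every examined row:

    SELECT * FROM orders WHERE created_at > '2009-06-15 00:00:00';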
The behavior of the Item_cache is changed to cache values on the first request
for the cached value rather than at the moment the item to be cached is stored.
A flag named value_cached is added to the Item_cache class. It's set to TRUE
when the cache holds the value of the last stored item.
A function named cache_value() is added to the Item_cache class and derived classes.
This function actually caches the value of the saved item.
Item_cache_xxx::store functions now only store the item to be cached and set the
value_cached flag to FALSE.
Item_cache_xxx::val_xxx functions are changed to call the cache_value function
prior to returning the cached value if value_cached is FALSE.
The Arg_comparator::set_cmp_func function now calls cache_converted_constant
to cache constants if they need a type conversion.
The Item_cache::get_cache function is overloaded to allow setting of the
cache type.
The cache_converted_constant function is added to the Arg_comparator class.
It checks whether a value can and should be cached and if so caches it.
only const tables
The problem was caused by two shortcuts in the optimizer that
are inapplicable in the ROLLUP case.
Normally, when only const tables are involved in a
query, the DISTINCT clause can be safely optimized away since only
one row can be produced by the join. Similarly, we don't
need to create a temporary table to resolve DISTINCT/GROUP
BY/ORDER BY. Both of these are inapplicable when the WITH
ROLLUP modifier is present.
Fixed by disabling the said optimizations for the WITH ROLLUP
case.
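A hypothetical query of the affected shape (t1 is read as a const table
through its primary key):

    -- With a single group, the regular group row and the ROLLUP
    -- super-aggregate row both contain the same SUM(a); DISTINCT must still
    -- be evaluated (using a temporary table) to remove the duplicate.
    SELECT DISTINCT SUM(a) FROM t1 WHERE pk = 1 GROUP BY a WITH ROLLUP;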
Just change mysql_foo to mysql_cv_foo for one cache-id variable name. There
was only one bad variable name, present in 5.0 and 5.1, but not in the -pe
branch.