Analysis:
Range analysis detects that the subquery is expensive and does not
build a range access method. Later, the applicability test for loose
scan does not take this into account and builds a loose scan access
method without a range scan on the min/max column. As a result, loose
scan fetches the first key in each group rather than the first key
that satisfies the condition on the min/max column.
Solution:
Since there is no SEL_ARG tree to be used for the min/max column,
it is not possible to use loose scan if the min/max column is compared
with an expensive scalar subquery. Make the test for loose scan
applicability be in sync with the range analysis code by checking
whether the min/max argument is compared with an expensive predicate.
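A minimal sketch of the affected query shape; the table and column names are my own illustration, not taken from the patch:

  -- The MIN argument b is compared with a non-correlated (potentially expensive)
  -- subquery, so range analysis builds no SEL_ARG tree for b; after this fix the
  -- optimizer must not pick loose (GROUP_MIN_MAX) scan for such a query.
  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  CREATE TABLE t2 (c INT);
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1), (2, 3);
  INSERT INTO t2 VALUES (1);

  SELECT a, MIN(b)
  FROM t1
  WHERE b > (SELECT MAX(c) FROM t2)
  GROUP BY a;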
This bug happened because the executor tried to use the wrong
TABLE_REF object when building access keys. It constructed
keys from the fields of a materialized table using a ref object
created to construct keys from the fields of the underlying
base table. This could happen only when the materialized table
was created for a non-correlated IN subquery, and only
when the materialized table was used for lookups.
In this case we are guaranteed to be able to construct the
keys from the fields of tables that would be outer tables
for the tables of the IN subquery.
The patch makes sure that no ref objects constructed from
fields of materialized lookup tables are used.
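A hypothetical example of the query shape involved (names are assumptions, not from the patch):

  -- A non-correlated IN subquery that the optimizer may materialize and then probe
  -- with lookups; the lookup keys must be built from the outer table's fields (t1),
  -- not from a ref object describing the materialized temporary table itself.
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT, d INT);
  INSERT INTO t1 VALUES (1, 10), (2, 20);
  INSERT INTO t2 VALUES (1, 10), (3, 30);

  SELECT * FROM t1
  WHERE (a, b) IN (SELECT c, d FROM t2);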
Analysis:
The cause of the wrong result was that the optimizer
incorrectly chose a min/max loose scan when it was not
applicable. The applicability test missed the case when
a condition on the MIN/MAX argument was OR-ed with a
condition on some other field. In this case, the MIN/MAX
condition cannot be used for loose scan.
Solution:
Extend the test check_group_min_max_predicates() to check
that the WHERE clause is of the form: "cond1 AND cond2"
where
cond1 - does not use min_max_column at all.
cond2 - is an AND/OR tree with leaves of the form "min_max_column $CMP$ const",
where $CMP$ is a comparison operator or one of the functions BETWEEN and IS [NOT] NULL.
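For illustration only (table and column names are assumptions), a query of the kind the extended test now rejects for loose scan:

  -- The condition on the MIN argument b is OR-ed with a condition on another
  -- field c, so it cannot be used to position the loose scan within each group.
  CREATE TABLE t1 (a INT, b INT, c INT, KEY (a, b));
  INSERT INTO t1 VALUES (1, 1, 0), (1, 2, 5), (2, 1, 0), (2, 3, 5);

  SELECT a, MIN(b)
  FROM t1
  WHERE b > 1 OR c = 0
  GROUP BY a;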
Export client_errors under its original name, to enable recompiling mysqli or ODBC
from sources in addition to the drop-in replacement functionality.
The case in question is compiling mysqli from sources, which needs client_errors via the ER() macro.
Previously, we exported it as mysql_client_errors (compatible with Fedora-style symbol renaming, see MDEV-3842).
However, if MariaDB header files are used when compiling mysqli, client_errors needs to be exported with its original name.
Analysis:
Range analysis discovers that the query can be executed via loose index scan for GROUP BY.
Later, GROUP BY analysis fails to confirm that the GROUP operation can be computed via an
index because there is no logic to handle duplicate field references in the GROUP BY clause.
As a result the optimizer produces an inconsistent plan: it constructs a temporary table,
but the group fields are not set to point into it.
Solution:
Make the loose scan analysis work in sync with the ORDER BY analysis. In the case of
duplicate columns, loose scan is not applicable. This limitation will be lifted in 10.0 by
removing duplicate columns.
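A minimal sketch (names are assumptions) of a GROUP BY list with a duplicate column, for which loose scan is now simply not chosen:

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1);

  -- The duplicate column a in the GROUP BY clause used to confuse the
  -- loose scan applicability check; with the fix a regular plan is used.
  SELECT a, MIN(b)
  FROM t1
  GROUP BY a, a;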
The assertion happens in a replication thread during THD destruction, when the thread calls my_sync(), which in turn calls the thd_wait_begin() callback. The connection count can be 0 because the counter was decremented before the THD destructor ran.
This assertion is currently reproducible only in Percona Server and not in MariaDB, due to differences in the replication code.
Fixed by moving the code that decrements the connection counter to after the THD destructor.
Backported the following fix from MariaDB 10.0.
The bug in mdev-3948 was an instance of the problem fixed by Sergey's patch
in 10.0 - namely that the range optimizer could change table->read_set and
table->write_set, and not restore them.
revno: 3471
committer: Sergey Petrunya <psergey@askmonty.org>
branch nick: 10.0-serg-fix-imerge
timestamp: Sat 2012-11-03 12:24:36 +0400
message:
# MDEV-3817: Wrong result with index_merge+index_merge_intersection, InnoDB table, join, AND and OR conditions
Reconcile the fixes from:
#
# guilhem.bichot@oracle.com-20110805143029-ywrzuz15uzgontr0
# Fix for BUG#12698916 - "JOIN QUERY GIVES WRONG RESULT AT 2ND EXEC. OR
# AFTER FLUSH TABLES [-INT VS NULL]"
#
# guilhem.bichot@oracle.com-20111209150650-tzx3ldzxe1yfwji6
# Fix for BUG#12912171 - ASSERTION FAILED: QUICK->HEAD->READ_SET == SAVE_READ_SET
# and
#
and related fixes from: BUG#1006164, MDEV-376:
Now, ROR-merged QUICK_RANGE_SELECT objects make no assumptions about the values
of table->read_set and table->write_set.
Each QUICK_ROR_SELECT has (and had before) its own column bitmap, but now all
QUICK_ROR_SELECT functions that care about it - reset(), init_ror_merged_scan(), and
get_next() - set table->read_set when invoked and restore it to its previous value
before they return.
This avoids the mess that arises when somebody else modifies table->read_set for
some reason.
The problem was that a temporary table was re-created as a non-temporary table.
mysql-test/suite/maria/truncate.result:
Added test cases
mysql-test/suite/maria/truncate.test:
Added test cases
sql/sql_truncate.cc:
Mark that table to be created is a temporary table
storage/maria/ha_maria.cc:
Ensure that temporary tables are not transactional.
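A minimal sketch of the statement pattern involved (names are assumptions, not the actual test case):

  -- TRUNCATE re-creates the table; the re-created table must remain TEMPORARY
  -- (and, for Aria, non-transactional).
  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=Aria;
  INSERT INTO t1 VALUES (1), (2);
  TRUNCATE TABLE t1;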
Miscellaneous workarounds for drop-in compatibility problems with Linux distributions,
around versioning of the MySQL 5.5 client shared library. There seem to be three
different ways major distributions handle versioning:
1. The Fedora way (also Mageia, and likely other Red Hat descendants):
   old 5.1 API functions are given version libmysqlclient_16,
   new API functions (client plugins, mysql_stmt_next) are given version libmysqlclient_18,
   some extra functions beyond the API are exported,
   some functions are renamed.
2. The Debian Wheezy way:
   all functions are given the libmysqlclient_18 version.
3. The Ubuntu way (also MySQL/MariaDB download packages):
   no versioning.
Up to this fix, MariaDB distributions did not have any versioning in the libraries.
This rendered the client library incompatible with the distributions', so exchanging
a distribution's libmysqlclient.so.18.0.0 with MariaDB's did not work nicely
(anywhere but on Ubuntu).
THE FIX is to build the libraries the same way the distributions do:
- when building RPMs, use the same version script as Fedora does, and make sure to
  export the same extra symbols as Fedora exports;
- when building DEBs, use the same version script as Debian Wheezy;
- do not use version scripts otherwise.
Also, make sure that the extensions of the MySQL API (asynchronous client
functionality) are exported by the shared libraries.
An item was reached by fix_fields() (via a reference) before the row to which it
belongs (on the second execution), and fix_fields() for the row did not follow the
usual protocol for Items with arguments
(first check that the item is already fixed, then call fix_fields()).
Item_row::fix_fields() was fixed.
Allow only three failed change_user attempts per connection.
A successful change_user does NOT reset the counter.
tests/mysql_client_test.c:
Make --error work for --change_user errors.
This bug could result in returning 0 for expressions of the form
<aggregate_function>(DISTINCT field) when the system variable
max_heap_table_size was set to a small enough number.
It happened because the method Unique::walk() did not support
the case when more than one pass was needed to merge the trees
of distinct values saved in an external file.
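A hedged sketch of how this could be hit (table, column, and sizes are assumptions; the SEQUENCE engine is used only as a convenient data generator):

  CREATE TABLE t1 (f INT);
  INSERT INTO t1 SELECT seq FROM seq_1_to_100000;  -- any sufficiently large data set works
  -- A small max_heap_table_size forces the tree of distinct values to be flushed
  -- to disk in many chunks, so merging them back needs more than one pass.
  SET SESSION max_heap_table_size = 16384;
  SELECT COUNT(DISTINCT f) FROM t1;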
Backported a fix in grant_lowercase.test from MariaDB 5.5.
Early evaluation of subqueries in the WHERE conditions on I_S.*_STATUS tables;
otherwise a subquery on the same table would try to acquire LOCK_status twice.
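As an illustration only (the exact statement is an assumption), a query where both the outer query and its subquery read an I_S *_STATUS table:

  -- Without early evaluation of the subquery, the same LOCK_status mutex
  -- would have to be acquired a second time while it is already held.
  SELECT VARIABLE_NAME
  FROM INFORMATION_SCHEMA.GLOBAL_STATUS
  WHERE VARIABLE_VALUE =
        (SELECT MAX(VARIABLE_VALUE) FROM INFORMATION_SCHEMA.GLOBAL_STATUS);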