The assertion happens in a replication thread during THD destruction, when the thread calls my_sync(), which in turn calls the thd_wait_begin() callback. The connection count can be 0 because the counter was decremented before the THD destructor ran.
This assertion is currently reproducible only in Percona Server, not in MariaDB, due to differences in the replication code.
Fixed by decrementing the connection counter after the THD destructor instead of before it.
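A minimal sketch of the ordering problem, using hypothetical names (connection_count, end_connection()) rather than the actual server code:

  #include <atomic>
  #include <cassert>

  // Hypothetical stand-ins for the server's global counter and THD object.
  static std::atomic<int> connection_count{1};

  struct THD {
    ~THD() {
      // The destructor may sync files, which reaches the thd_wait_begin()
      // callback; that path asserts the connection is still counted.
      assert(connection_count.load() > 0);
    }
  };

  void end_connection(THD *thd) {
    // Buggy order:  --connection_count; delete thd;   -> assertion fires.
    // Fixed order:  destroy the THD first, then decrement the counter.
    delete thd;
    --connection_count;
  }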
from MariaDB 10.0.
The bug in MDEV-3948 was an instance of the problem fixed by Sergey's patch
in 10.0, namely that the range optimizer could change table->[read|write]_set
and not restore it.
revno: 3471
committer: Sergey Petrunya <psergey@askmonty.org>
branch nick: 10.0-serg-fix-imerge
timestamp: Sat 2012-11-03 12:24:36 +0400
message:
# MDEV-3817: Wrong result with index_merge+index_merge_intersection, InnoDB table, join, AND and OR conditions
Reconcile the fixes from:
#
# guilhem.bichot@oracle.com-20110805143029-ywrzuz15uzgontr0
# Fix for BUG#12698916 - "JOIN QUERY GIVES WRONG RESULT AT 2ND EXEC. OR
# AFTER FLUSH TABLES [-INT VS NULL]"
#
# guilhem.bichot@oracle.com-20111209150650-tzx3ldzxe1yfwji6
# Fix for BUG#12912171 - ASSERTION FAILED: QUICK->HEAD->READ_SET == SAVE_READ_SET
# and
#
and related fixes from: BUG#1006164, MDEV-376:
Now, ROR-merged QUICK_RANGE_SELECT objects make no assumptions about the values
of table->read_set and table->write_set.
Each QUICK_ROR_SELECT has (and had before) its own column bitmap, but now all
QUICK_ROR_SELECT functions that care about it: reset(), init_ror_merged_scan(), and
get_next(), set table->read_set when invoked and restore it to its previous
value before they return.
This avoids the mess that arises when something else modifies table->read_set
for some reason.
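A sketch of the save/restore pattern these functions now follow, with simplified stand-in types (not the real TABLE/MY_BITMAP/QUICK_ROR_* classes):

  #include <cstdint>

  struct MY_BITMAP_SKETCH { uint64_t bits; };
  struct TABLE_SKETCH { MY_BITMAP_SKETCH *read_set; };

  struct QUICK_ROR_SKETCH {
    TABLE_SKETCH *table;
    MY_BITMAP_SKETCH column_bitmap;      // each quick select keeps its own bitmap

    int get_next() {
      MY_BITMAP_SKETCH *save_read_set= table->read_set;  // remember caller's bitmap
      table->read_set= &column_bitmap;                   // install our own columns
      int res= read_one_row();
      table->read_set= save_read_set;                    // restore before returning
      return res;
    }
    int read_one_row() { return 0; }
  };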
The problem was that a temporary table was re-created as a non-temporary table.
mysql-test/suite/maria/truncate.result:
Added test cases
mysql-test/suite/maria/truncate.test:
Added test cases
sql/sql_truncate.cc:
Mark that the table to be created is a temporary table
storage/maria/ha_maria.cc:
Ensure that temporary tables are not transactional.
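A hedged sketch of the idea, with hypothetical names rather than the actual sql_truncate.cc/ha_maria.cc changes: the re-create path must carry the "temporary" property forward, and a temporary table must come back non-transactional:

  #include <string>

  // Hypothetical, simplified create info; illustrative only.
  struct CreateInfoSketch {
    std::string table_name;
    bool is_temporary= false;
    bool transactional= false;
  };

  CreateInfoSketch recreate_for_truncate(const CreateInfoSketch &old_info) {
    CreateInfoSketch info= old_info;
    info.is_temporary= old_info.is_temporary;  // the lost property: keep it
    if (info.is_temporary)
      info.transactional= false;               // temporary tables: not transactional
    return info;
  }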
reached by fix_fields() (via a reference) before the row it belongs to (on the second execution),
and fix_fields() for the row did not follow the usual protocol for Items with arguments
(first check that the item is fixed, then call fix_fields()).
Item_row::fix_fields() fixed.
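A sketch of that usual protocol, assuming a simplified Item hierarchy (the real fix_fields() takes THD* and Item** arguments):

  #include <vector>

  struct ItemSketch {
    bool fixed= false;
    virtual bool fix_fields() { fixed= true; return false; }   // false = success
    virtual ~ItemSketch() {}
  };

  struct ItemRowSketch : ItemSketch {
    std::vector<ItemSketch*> args;
    bool fix_fields() override {
      for (ItemSketch *arg : args) {
        // The argument may already have been fixed through a reference
        // (as on re-execution here), so check before fixing it again.
        if (!arg->fixed && arg->fix_fields())
          return true;                                         // propagate error
      }
      fixed= true;
      return false;
    }
  };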
Allow only three failed change_user attempts per connection.
A successful change_user does NOT reset the counter.
tests/mysql_client_test.c:
make --error work for --change_user errors
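A sketch of the counting rule, with hypothetical per-connection names (illustrative only):

  // Hypothetical per-connection state, not the server's actual fields.
  struct ConnectionSketch {
    int failed_change_user= 0;
    static const int MAX_FAILED_CHANGE_USER= 3;

    // Returns true if the connection may continue.
    bool on_change_user(bool auth_ok) {
      if (auth_ok)
        return true;                        // success does NOT reset the counter
      return ++failed_change_user < MAX_FAILED_CHANGE_USER;   // else drop the client
    }
  };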
This bug could result in returning 0 for expressions of the form
<aggregate_function>(distinct field) when the system variable
max_heap_table_size was set to a small enough number.
It happened because the method Unique::walk() did not support
the case when more than one pass was needed to merge the trees
of distinct values saved in an external file.
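A sketch of the general shape of a multi-pass merge, with illustrative names and plain vectors instead of the trees and external file used by Unique::walk():

  #include <algorithm>
  #include <iterator>
  #include <vector>

  // Illustrative only: merge sorted runs when more than one pass is needed
  // because at most max_merge runs can be merged at a time.
  std::vector<int> merge_all(std::vector<std::vector<int>> runs, size_t max_merge) {
    while (runs.size() > 1) {                 // one pass may not be enough
      std::vector<std::vector<int>> next_pass;
      for (size_t i= 0; i < runs.size(); i+= max_merge) {
        std::vector<int> merged;
        size_t end= std::min(i + max_merge, runs.size());
        for (size_t j= i; j < end; j++) {
          std::vector<int> tmp;
          std::merge(merged.begin(), merged.end(),
                     runs[j].begin(), runs[j].end(), std::back_inserter(tmp));
          merged.swap(tmp);
        }
        next_pass.push_back(std::move(merged));
      }
      runs.swap(next_pass);
    }
    return runs.empty() ? std::vector<int>() : runs[0];
  }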
Backported a fix in grant_lowercase.test from MariaDB 5.5.
Early evaluation of subqueries in the WHERE conditions on I_S.*_STATUS tables,
otherwise the subquery on this same table will try to acquire LOCK_status twice.
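A sketch of why the early evaluation matters, using std::mutex as a stand-in for LOCK_status, which is not recursive:

  #include <mutex>
  #include <string>
  #include <vector>

  std::mutex lock_status_sketch;      // stand-in for LOCK_status (non-recursive)

  // Filling the I_S table takes the lock; a subquery on the same table,
  // evaluated while the lock is held, would try to take it a second time
  // on the same thread and self-deadlock. Hence: evaluate it beforehand.
  std::vector<std::string> fill_status_table(bool subquery_evaluated_early) {
    std::lock_guard<std::mutex> guard(lock_status_sketch);
    if (!subquery_evaluated_early)
      return {};
    return {"Threads_connected", "Uptime"};
  }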
sql/item.h:
remove unused method
This fixed a failing test in group_by.test
mysql-test/r/join_outer.result:
Updated test case
mysql-test/r/join_outer_jcl6.result:
Updated test case
sql/item.cc:
Don't reset maybe_null in update_used_tables(); This breaks ROLLUP
sql/item.h:
Don't reset maybe_null in update_used_tables(); This breaks ROLLUP
sql/item_cmpfunc.h:
Don't reset maybe_null in update_used_tables(); This breaks ROLLUP
The problem was that maybe_null of Item_row and its components was out of sync after update_used_tables() (so pushed_cond_guards was not initialized but was then requested).
The fix updates Item_row::maybe_null in update_used_tables().
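A sketch of the fix, assuming a simplified Item_row model (the real update_used_tables() also aggregates used tables):

  #include <vector>

  struct ItemSketch {
    bool maybe_null= false;
    virtual void update_used_tables() {}
    virtual ~ItemSketch() {}
  };

  struct ItemRowSketch : ItemSketch {
    std::vector<ItemSketch*> items;
    void update_used_tables() override {
      maybe_null= false;
      for (ItemSketch *item : items) {
        item->update_used_tables();
        maybe_null= maybe_null || item->maybe_null;  // keep the row's flag in sync
      }
    }
  };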
Analysis
The reason for the less efficient plan was the result of a prior design decision:
to limit the evaluation of constant expressions during optimization to only
non-expensive ones. With this approach all stored procedures were considered
expensive and were not evaluated during optimization. As a result, SPs didn't
participate in range optimization, which resulted in a plan with a table scan
rather than an index range scan.
Solution
Instead of considering all SPs expensive, consider expensive only those SPs
that are non-deterministic. If an SP is deterministic, the optimizer will
check whether it is constant, and may eventually evaluate it during optimization.
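A sketch of the new rule, with hypothetical names (the real check lives in the stored-function Item class):

  // Illustrative only, not the actual optimizer code.
  struct StoredFuncItemSketch {
    bool deterministic;   // declared DETERMINISTIC
    bool const_args;      // all arguments are constants

    // Before: every SP was expensive. After: only non-deterministic ones are.
    bool is_expensive() const { return !deterministic; }

    // A deterministic SP with constant arguments may be treated as a constant
    // and evaluated once during optimization (e.g. for range analysis).
    bool const_item() const { return deterministic && const_args; }
  };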
The original patch with the implementation of virtual columns
did not support INSERT DELAYED into tables with virtual columns.
This patch fixes the problem.
(because it's conceptually wrong: only the user can decide whether the kill is
allowed to leave tables in an inconsistent state; the storage engine has no say in that)
The previous fix for MDEV-3992 was incomplete, because it still incorrectly
computed the number of keyparts of the extended secondary key in the
case when columns of the PK participate in the secondary key.
This patch by Monty corrects the above problem.
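An illustrative sketch of the counting rule involved (not Monty's actual patch): the extended key gets the secondary key's own parts plus only those PK parts whose columns are not already in the secondary key:

  #include <algorithm>
  #include <vector>

  // Illustrative: key parts identified here by field number.
  unsigned ext_key_parts_sketch(const std::vector<int> &secondary_fields,
                                const std::vector<int> &pk_fields) {
    unsigned parts= (unsigned) secondary_fields.size();
    for (int pk_field : pk_fields) {
      // A PK column already present in the secondary key must not be counted twice.
      if (std::find(secondary_fields.begin(), secondary_fields.end(),
                    pk_field) == secondary_fields.end())
        parts++;
    }
    return parts;
  }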
Analysis:
The crash is a result of incorrect analysis of whether a secondary key
can be extended with the primary key in order to compute ORDER BY. The analysis
is done in test_if_order_by_key(). This function doesn't take into account
that the primary key may in fact index the same columns as the secondary
key. For the test query, test_if_order_by_key() says that there is an extended
key with a total of 2 keyparts.
At the same time, the condition
if (pkinfo->key_part[i].field->key_start.is_set(nr))
in test_if_cheaper_ordering() becomes true for (i == 0), which results in
an invalid access to rec_per_key[-1].
Solution:
The best solution would be to reuse KEY::ext_key_parts that is already computed
by open_binary_frm(), however after detailed analysis the conclusion is that
the change would be too intrusive for a GA release.
The solution for 5.5 is to add a guard for the case when the 0-th key part is
considered, and to assume that all keys will be scanned in this case.
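A sketch of the guard's shape, with a hypothetical helper rather than the real test_if_cheaper_ordering() code:

  // Illustrative only: pick the expected rows for a key prefix of i keyparts.
  double rows_per_key_prefix_sketch(const unsigned long *rec_per_key,
                                    unsigned i, double table_records) {
    if (i == 0)
      return table_records;   // guard: no rec_per_key[i-1] exists, assume a full scan
    return (double) rec_per_key[i - 1];
  }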
The bug could lead to a wrong estimate of the number of expected rows
in the output of the EXPLAIN commands for queries with GROUP BY.
This could be observed in the test case for LP bug 934348.