- Filesort has an optimization where it reads only the columns that are
needed before the sorting is done.
- When ref(_or_null) is picked by the join optimizer, it may remove parts
of the WHERE clause that are guaranteed to be true.
- However, if we use a quick select, we must put all of the range columns into
the read set; not doing so may cause us to fail to detect the end of the range
(as sketched below).
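A minimal sketch of the idea (types and names invented for illustration, not
the actual server code): before filesort trims the read set down to the
columns needed for sorting, every column used by the active quick (range)
select is added back, so the end-of-range check still sees valid values.

    #include <bitset>
    #include <cstddef>

    static const std::size_t MAX_COLS= 64;

    struct QuickRangeSelect {
      std::bitset<MAX_COLS> used_key_columns;  // columns in the range condition
    };

    struct TableColumns {
      std::bitset<MAX_COLS> read_set;          // columns the engine must fetch
    };

    // Called while setting up filesort's reduced read set.
    static void include_range_columns(TableColumns *table,
                                      const QuickRangeSelect *quick)
    {
      if (quick)                               // a quick select drives the scan
        table->read_set|= quick->used_key_columns;
    }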
That particular part of the slave's connect-to-master sequence was missing
code to handle retries in case of network errors. The same problem is present
in MySQL 5.5, but fixed in MySQL 5.6.
Fixed with this patch by adding the code (mostly identical to MySQL 5.6) and
adding a test case.
I checked the other queries sent to the master during slave connect, and they
now all seem to handle reconnect in case of network failures.
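A minimal sketch of the retry pattern, assuming the MySQL C API (the helper
name and the reconnect step are placeholders; the real code also honours the
configured retry count and the thread's kill flag):

    #include <mysql.h>
    #include <errmsg.h>
    #include <string.h>

    static bool is_network_error(unsigned int err)
    {
      return err == CR_CONNECTION_ERROR || err == CR_CONN_HOST_ERROR ||
             err == CR_SERVER_GONE_ERROR || err == CR_SERVER_LOST;
    }

    // Returns 0 on success, 1 on permanent failure.
    static int query_master_with_retry(MYSQL *mysql, const char *query,
                                       int max_retries)
    {
      for (int attempt= 0; attempt <= max_retries; attempt++)
      {
        if (!mysql_real_query(mysql, query, strlen(query)))
          return 0;                            // success
        if (!is_network_error(mysql_errno(mysql)))
          return 1;                            // real error: do not retry
        // Network error: reconnect to the master, then loop and retry.
      }
      return 1;
    }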
When plugin=mysql_native_password (or mysql_old_password), take the password
from *either* password *or* authentication_string, whichever is set.
This makes no sense, but alas, that's what MySQL-5.6 does.
Deadlocks during parallel replication causing replication to fail.
Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.
The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.
With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.
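A sketch of the idea with invented names (the real hook lives in the
server/InnoDB interface): before a transaction starts waiting on a gap lock,
InnoDB asks the server layer whether the two transactions already have a
fixed commit order; if so, the gap wait is skipped.

    struct Trx {
      unsigned long long commit_seq;  // >0 when the server fixed a commit order
    };

    // Server-layer answer: do these transactions have a commit order that
    // is fixed in advance (as parallel replication workers do)?
    static bool commit_order_fixed(const Trx *a, const Trx *b)
    {
      return a->commit_seq != 0 && b->commit_seq != 0;
    }

    // Gap-lock conflict check: the wait is skipped when ordering is
    // already enforced by the server layer.
    static bool gap_lock_has_to_wait(const Trx *requester, const Trx *holder)
    {
      return !commit_order_fixed(requester, holder);
    }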
Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.
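A sketch of the victim choice, reusing the same kind of invented Trx type
extended with InnoDB's usual weight metric: when the commit order of T1 and
T2 is fixed by the server layer, the transaction that must commit later is
preferred as the deadlock victim; otherwise the normal heuristic applies.

    struct Trx {
      unsigned long long commit_seq;   // 0 = no fixed commit order
      unsigned long long weight;       // InnoDB's usual victim metric
    };

    // Returns the transaction to roll back.
    static const Trx *choose_deadlock_victim(const Trx *t1, const Trx *t2)
    {
      if (t1->commit_seq != 0 && t2->commit_seq != 0)
        return t1->commit_seq < t2->commit_seq ? t2 : t1; // later one rolls back
      return t1->weight <= t2->weight ? t1 : t2;          // normal heuristic
    }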
Also some misc. fixes found during development and testing:
- Remove thd_rpl_is_parallel(), it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
worker thread is killed to resolve a deadlock with fixed commit
ordering. There are some cases, eg. in sql/sql_parse.cc, where a KILL_QUERY
can be ignored if the query otherwise completed successfully, and this
could cause the deadlock kill to be lost, so that the deadlock was not
correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel
replication are not printed to the error log; there were some places
around the replication code with extra error logging. These conditions can
occur occasionally and are handled automatically without breaking
replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
the end of a transaction, to be able to detect and resolve deadlocks due to
commit ordering. But this value was also used as a flag to mark whether
record_gtid() had been called, by being set to zero, losing the value. Now,
introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
valid for the entire duration of the transaction.
- Fix one place where the code to handle ignored errors called reset_killed()
unconditionally, even if no error was caught that should be ignored. This
could cause loss of a deadlock kill signal, breaking deadlock detection and
resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command(). This could
cause a prior error condition to remain for the next event executed,
causing assertions about errors already being set and possibly giving
incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker
threads after each event execution; this caused the deadlock detection and
handling code to not be able to correctly process the associated
transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at
  wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
  error condition to wrongly propagate to a later wait_for_prior_commit(),
  causing spurious ER_PRIOR_COMMIT_FAILED errors (see the sketch after this
  list).
- Do not put the binlog background thread into the processlist. It causes
too many result differences in mtr, but also it probably is not useful
for users to pollute the process list with a system thread that does not
really perform any user-visible tasks...
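For the wakeup_error fix above, a minimal sketch with simplified types (the
real wait_for_commit class does considerably more):

    struct wait_for_commit_sketch {
      int wakeup_error;          // error reported by the prior commit, if any
      bool registered;

      void unregister_wait_for_prior_commit()
      {
        registered= false;
        wakeup_error= 0;         // the fix: clear any stale error here
      }

      int wait_for_prior_commit()
      {
        // ... wait until the prior transaction wakes us up ...
        return wakeup_error;     // now guaranteed not to be a stale value
      }
    };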
SLOW/CRASHES SEMAPHORE
Problem:
There are 200,000 tables - fk_000001, fk_000002 ... fk_200000 - all of them
related to the same parent_table through a foreign key constraint.
When the parent_table is loaded into the dictionary cache, all the child
tables are also loaded. This takes a lot of time, and since the operation
happens while the dictionary latch is held, the scenario leads to a "long
semaphore wait" situation and the server gets killed.
Analysis:
A simple performance analysis showed that the slowness is caused by the
dict_foreign_find() function. It does a linear search over two linked
lists, table->foreign_list and table->referenced_list, looking for a
particular foreign key object using foreign->id as the key. It is called
twice for each foreign key object.
Solution:
Introduce red-black trees table->foreign_rbt and table->referenced_rbt,
which serve as indexes on table->foreign_list and table->referenced_list
respectively, using foreign->id as the key. These rbt structures are used
solely by dict_foreign_find() (see the sketch below).
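A sketch of the lookup using std::map in place of InnoDB's red-black tree
(names simplified): dict_foreign_find() becomes an O(log n) lookup keyed on
foreign->id instead of a linear scan of the two lists.

    #include <map>
    #include <string>

    struct dict_foreign_sketch { std::string id; /* ... */ };

    struct dict_table_sketch {
      std::map<std::string, dict_foreign_sketch*> foreign_rbt;     // child side
      std::map<std::string, dict_foreign_sketch*> referenced_rbt;  // parent side
    };

    static dict_foreign_sketch *
    dict_foreign_find_sketch(dict_table_sketch *table, const std::string &id,
                             bool search_referenced)
    {
      auto &rbt= search_referenced ? table->referenced_rbt : table->foreign_rbt;
      auto it= rbt.find(id);
      return it == rbt.end() ? nullptr : it->second;
    }

With 200,000 child tables this replaces repeated linear scans of
200,000-element lists with logarithmic lookups.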
rb#5599 approved by Vasil
The method JOIN_CACHE::init may fail (return 1) if some conditions on the
used join buffer are not satisfied. For example, it fails if join_buffer_size
is greater than join_buffer_space_limit. These conditions should also be
checked when running the EXPLAIN command for the query; that's why the method
JOIN_CACHE::init has to be called for EXPLAIN commands as well.
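A minimal sketch (invented names) of the failing condition and why EXPLAIN
must run the same check; otherwise EXPLAIN could report a join-buffer plan
that actual execution would refuse:

    struct JoinCacheSketch {
      unsigned long long join_buffer_size;
      unsigned long long join_buffer_space_limit;

      // Returns 1 on failure, as JOIN_CACHE::init() does.
      int init()
      {
        if (join_buffer_size > join_buffer_space_limit)
          return 1;              // buffer cannot be allocated
        // ... allocate and set up the join buffer ...
        return 0;
      }
    };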
Part#1.
table_cond_selectivity() should discount the selectivity of a table's
conditions only when it counted that selectivity to begin with.
For non-ref-based access methods (ALL/range/index_merge/etc),
we start with sel=1.0 and hence do not need to discount any
selectivities. (For example, with ref access the ref key's selectivity
is already included in the estimate, so the matching parts of the WHERE
clause must be discounted; with a full table scan nothing was counted,
so there is nothing to discount.)
- When the range optimizer cannot fit the lookup value into the [VAR]CHAR(n)
  column, it should produce (as sketched below):
  = "Impossible range" for equalities
  = "no range" for non-equalities.
Merge from mysql-5.6:
revno: 3257
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug11756966
timestamp: Thu 2011-07-14 09:32:01 +0200
message:
Bug#11756966 - 48958: STORED PROCEDURES CAN BE LEVERAGED TO BYPASS
DATABASE SECURITY
The problem was that CREATE PROCEDURE/FUNCTION could be used to
check the existence of databases for which the user had no
privileges and which the user therefore should not be allowed to see.
The reason was that existence of a given database was checked
before privileges. So trying to create a stored routine in
a non-existent database would give a different error than trying
to create a stored routine in a restricted database.
This patch fixes the problem by changing the order of the checks
for CREATE PROCEDURE/FUNCTION so that privileges are checked first.
This means that trying to create a stored routine in a
non-existent database and in a restricted database both will
give ER_DBACCESS_DENIED_ERROR error.
Test case added to grant.test.
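A sketch (invented names) of the reordered checks. Note that for a
non-existent database the user has no grants either, so the privilege check
already fails and both cases produce the same access-denied error:

    enum SpCreateResult { SP_OK, SP_ACCESS_DENIED, SP_BAD_DB };

    static SpCreateResult check_sp_create(bool has_priv_on_db, bool db_exists)
    {
      if (!has_priv_on_db)
        return SP_ACCESS_DENIED;   // checked FIRST: hides existence info
      if (!db_exists)
        return SP_BAD_DB;          // only privileged users learn this
      return SP_OK;
    }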
MDEV-6099 Bad results for DATE_ADD(.., INTERVAL 2000000000000000000.0 SECOND)
MDEV-6097 Inconsistent results for CAST(int,decimal,double AS DATETIME)
MDEV-6100 No warning on CAST(9000000 AS TIME)
We have to run the derived table prepare before the unique table check, to mark the derived table (the unique table check can turn that table into a materialized one).
Bug #18167356: EXPLAIN W/ EXISTS(SELECT* UNION SELECT*)
WHERE ONE OF SELECT* IS DISTINCT FAILS.
The bugfix itself was not merged: MariaDB doesn't have this bug.
Deadlocks during parallel replication causing replication to fail.
In parallel replication, we run transactions from the master in parallel, but
force them to commit in the same order they did on the master. If we force T1
to commit before T2, but T2 holds eg. a row lock that is needed by T1, we get
a deadlock when T2 waits until T1 has committed.
Usually, we do not run T1 and T2 in parallel if there is a chance that they
can have conflicting locks like this, but there are certain edge cases where
it can occasionally happen (eg. MDEV-5914, MDEV-5941, MDEV-6020). The bug was
that this would cause replication to hang, eventually getting a lock timeout
and causing the slave to stop with an error.
With this patch, InnoDB will report back to the upper layer whenever a
transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel
replication transactions, and T2 needs to commit later than T1, we can thus
detect the deadlock; we then kill T2, setting a flag that causes it to catch
the kill and convert it to a deadlock error; this error will then cause T2 to
roll back and release its locks (so that T1 can commit), and later T2 will be
re-tried and eventually also committed.
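A sketch (invented names) of the report from the lock code: when the blocking
transaction is ordered to commit after the waiter, a future deadlock is
certain, so a kill of the blocker is scheduled.

    struct RplTrx {
      unsigned long long commit_seq;   // 0 if not a replication worker
    };

    // Called from deep inside the lock code; it must not block or take
    // THD mutexes, so it only *schedules* the kill.
    static void report_lock_wait(const RplTrx *waiter, const RplTrx *holder)
    {
      if (waiter->commit_seq != 0 && holder->commit_seq != 0 &&
          holder->commit_seq > waiter->commit_seq)
      {
        // holder must commit after waiter yet blocks it: kill holder so
        // it rolls back with a deadlock error and gets retried later.
        // (asynchronous scheduling sketched after the next paragraph)
      }
    }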
The kill happens asynchronously in a slave background thread; this is
necessary, as the reporting from InnoDB about lock waits happens deep inside
the locking code, at a point where it is not possible to directly call
THD::awake() due to the mutexes held.
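A minimal sketch of that asynchronous path: the report only enqueues a
request, and a dedicated background thread delivers the kill outside the
lock-system mutexes (where THD::awake() is safe to call in the real code):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    struct KillRequest { void *victim_thd; };

    static std::mutex bg_mutex;
    static std::condition_variable bg_cond;
    static std::queue<KillRequest> bg_queue;

    static void background_kill_request(void *victim_thd)
    {
      std::lock_guard<std::mutex> lk(bg_mutex);
      bg_queue.push(KillRequest{victim_thd});
      bg_cond.notify_one();                    // wake the background thread
    }

    static void slave_background_thread_loop()
    {
      for (;;)
      {
        std::unique_lock<std::mutex> lk(bg_mutex);
        bg_cond.wait(lk, []{ return !bg_queue.empty(); });
        KillRequest req= bg_queue.front();
        bg_queue.pop();
        lk.unlock();
        // No lock-system mutexes held here, so the kill can be delivered.
        (void)req;
      }
    }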
Deadlocks are assumed to occur (very) rarely, so this patch tries to
minimise the performance impact on the normal case where no deadlocks occur,
rather than optimise the handling of the occasional deadlock.
Also fix transaction retry due to deadlock when it happens after a transaction
already signalled to later transactions that it started to commit. In this
case we need to undo this signalling (and later redo it when we commit again
during retry), so following transactions will not start too early.
Also add a missing thd->send_kill_message() that got triggered during testing
(this corrects an incorrect fix for MySQL Bug#58933).
- When the optimizer chose LooseScan, make_join_readinfo() should
use the index that was chosen for LooseScan, and should not try
to find a better (shortest) index.
Handle retry of event groups that span multiple relay log files.
- If retry reaches the end of one relay log file, move on to the next.
- Handle refcounting of relay log files (sketched below), and avoid purging
  relay log files until all event groups that might still need them for
  transaction retry have completed.
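A sketch (simplified) of the refcounting: each event group holds a reference
on every relay log file it might still need for retry, and purge skips any
file whose count is non-zero:

    #include <atomic>

    struct RelayLogFile {
      std::atomic<int> refcount{0};
    };

    static void retry_acquire(RelayLogFile *f) { f->refcount.fetch_add(1); }
    static void retry_release(RelayLogFile *f) { f->refcount.fetch_sub(1); }

    static bool can_purge(const RelayLogFile *f)
    {
      return f->refcount.load() == 0;  // not needed for any possible retry
    }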
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both
AGG(DISTINCT) and MIN()/MAX() functions, the result values
of MIN()/MAX() are set improperly.
When the query has AGG(DISTINCT), end_select is set to
end_send_group. end_send_group keeps aggregating until it
sees a record from the next group, and only then sends out
the result row of the completed group.
Since the query also has MIN()/MAX() and loose index scan
is used, the values of MIN()/MAX() are set as part of the
loose index scan itself. This overwrites the values
computed in end_send_group and produced invalid results.
For such queries to work, loose index scan would have to
stop performing MIN()/MAX() aggregation and let
end_send_group do it instead. But in the current design,
loose index scan can produce only one row per group key.
If we have both MIN() and MAX(), it would have to hand out
two records, which is not possible as the interface has to
use the common buffer record[0] for both records at a time.
SOLUTION:
---------
For such queries to work, we would need a new interface for
loose index scan. Hence, do not choose loose index scan for
such cases; a new rule, SA7, is introduced to take care of
this.
SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
MIN/MAX() functions then loose index scan access
method is not used."
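A sketch (invented names) of rule SA7 in the access-method choice:

    struct QueryProps {
      bool has_agg_distinct;   // e.g. SUM(DISTINCT ...), COUNT(DISTINCT ...)
      bool has_min_max;        // MIN() and/or MAX()
    };

    static bool loose_index_scan_allowed(const QueryProps &q)
    {
      if (q.has_agg_distinct && q.has_min_max)
        return false;          // SA7: let end_send_group do the aggregation
      return true;             // the other rules (SA1..SA6) still apply
    }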
mysql-test/r/group_min_max.result:
Expected result.
mysql-test/t/group_min_max.test:
1. Test with various combination of AGG(DISTINCT) and
MIN(), MAX() functions.
2. Corrected the plan for old queries.
sql/opt_range.cc:
A new rule SA7 is introduced.
Implement that if the first retry fails, we can make another attempt.
Add test cases for a multi-retry that succeeds on the second attempt, and a
multi-retry that eventually fails due to exceeding slave_trans_retries.