- Item::get_seconds() now skips decimal arithmetic if decimals is 0. This significantly speeds up from_unixtime() if no fractional part is passed.
- Replace the sprintf() calls used to format temporal values with hand-coded formatting
Query1 (original query in the bug report)
BENCHMARK(10000000,DATE_SUB(FROM_UNIXTIME(RAND() * 2147483648), INTERVAL (FLOOR(1 + RAND() * 365)) DAY))
Query2 (Variation of query1 that does not use fractional part in FROM_UNIXTIME parameter)
BENCHMARK(10000000,DATE_SUB(FROM_UNIXTIME(FLOOR(RAND() * 2147483648)), INTERVAL (FLOOR(1 + RAND() * 365)) DAY))
Prior to the patch, the runtimes were (32 bit compilation/AMD machine)
Query1: 41.53 sec
Query2: 23.90 sec
With the patch, the runtimes are
Query1: 32.32 sec (speed up due to removing sprintf)
Query2: 12.06 sec (speed up due to skipping decimal arithmetic)
Analysis:
When the method JOIN::choose_subquery_plan() decided to apply
the IN-TO-EXISTS strategy, it unconditionally set the unit and
select_lex uncacheable flags to UNCACHEABLE_DEPENDENT_INJECTED.
As a result, even if IN-TO-EXISTS injected non-correlated predicates,
the subquery was still treated as correlated.
Solution:
Set the subquery as correlated only if the injected predicate(s) depend
on the outer query.
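A minimal sketch of the kind of query affected (tables t1/t2 are hypothetical, not from the original report): here IN-TO-EXISTS injects the equality t2.b = 5, which does not refer to the outer query, so the subquery should stay non-correlated and cacheable.
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
-- after the fix the rewritten EXISTS subquery is not re-executed
-- for every row of t1, since the injected predicate is constant:
SELECT * FROM t1 WHERE 5 IN (SELECT b FROM t2);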
The constructor for Query_log_event allocated 2 bytes too few for
the extra space needed by the Query cache. (Not sure if this is
reproducible in practice, as there are often a couple of extra bytes
allocated for unused string zero terminators, but better safe than
sorry.)
WHEN KILLING
Suppose there is a query waiting for a lock. If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file. Since this is a
user-requested interruption, no spurious error message should be
logged in the server log. This patch removes the error message from
the log.
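A hedged sketch of the scenario (table name and connection ids are illustrative):
-- connection 1: hold a table lock
LOCK TABLES t1 WRITE;
-- connection 2: this SELECT blocks waiting for the lock
SELECT * FROM t1;
-- connection 3: the user interrupts the waiting query
KILL QUERY <connection-2-id>;
-- before this patch the server log received "Got error -1 when
-- reading table"; now nothing is logged for the interruption.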
approved by joh and tatjana
Analysis:
When a subquery that needs a temp table is executed during
the prepare or optimize phase of the outer query, at the end
of the subquery execution all the JOIN_TABs of the subquery
are replaced by a new JOIN_TAB that selects from the temp table.
However that temp table has no corresponding TABLE_LIST.
Once EXPLAIN execution reaches its last phase, it tries to print
the names of the subquery tables through its TABLE_LISTs, but in
the case of this bug there is no such TABLE_LIST (it is NULL),
hence a crash.
Solution:
The fix is to block subquery evaluation inside
Item_func_like::fix_fields and Item_func_like::select_optimize()
using the Item::is_expensive() test.
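A hypothetical query of the affected shape (table invented for illustration): the LIKE pattern is a subquery that needs a temporary table, and evaluating it while fixing/optimizing the LIKE condition under EXPLAIN is what left the subquery without printable TABLE_LISTs.
CREATE TABLE t1 (a VARCHAR(10));
-- the GROUP BY forces the subquery through a temporary table;
-- with the fix, EXPLAIN no longer evaluates it prematurely:
EXPLAIN SELECT * FROM t1
WHERE a LIKE (SELECT MAX(a) FROM t1 GROUP BY a LIMIT 1);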
Problem: mysqlbinlog exits without any error code in case of a
file write error. This is because calls to the Log_event::print()
method do not return a value, and thus any errors were being
ignored.
Resolution: We resolve this problem by checking for
IO_CACHE::error == -1 after every call to Log_event::print()
and terminating further execution.
client/mysqlbinlog.cc:
- handled error conditions during event->print() calls
- added check for error in end_io_cache()
mysys/my_write.c:
Added debug code to simulate a file write error.
The error returned will be ENOSPC, i.e. no space left on the disk.
sql/log_event.cc:
Added debug code to simulate a file write error by reducing the size of the IO cache.
- In JOIN::exec(), make the having->update_used_tables() call before we've
made the JOIN::cleanup(full=true) call. The latter frees SJ-Materialization
structures, which correlated subquery predicate items attempt to walk afterwards.
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.
Analysis:
When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.
The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes Item_func_set_user_var::check
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately
copy_field.
However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.
Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().
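A rough sketch of the kind of statement that exercises this path (tables, variables and subqueries are invented, not the original test case): GROUP BY is resolved first, a second temporary table computes the DISTINCT, and the user-variable assignments re-evaluate their subquery arguments during that last phase.
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1,1),(2,2);
-- the subqueries use GROUP BY so that they need their own temp tables:
SELECT DISTINCT
  @mx:=(SELECT MAX(b) FROM t1 GROUP BY b LIMIT 1),
  @mn:=(SELECT MIN(b) FROM t1 GROUP BY b LIMIT 1)
FROM t1 GROUP BY a;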
Analysis:
The optimizer detects an empty result through constant table optimization.
Then it calls return_zero_rows(), which in turn indirectly calls
Item_maxmin_subselect::no_rows_in_result(). The latter method sets "value=0",
however "value" is a pointer to Item_cache, and not just an integer value.
All of the Item_[maxmin | singlerow]_subselect::val_XXX methods do:
if (forced_const)
  return value->val_real();
which of course crashes when value is a NULL pointer.
Solution:
When the optimizer discovers an empty result set, set
Item_singlerow_subselect::value to a FALSE constant Item instead of NULL.
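One plausible shape of an affected query (hypothetical tables; t1 is left empty so that constant table optimization detects the empty result):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
-- the > ALL predicate is rewritten to a MAX-based subquery item;
-- with t1 empty, return_zero_rows() reaches its no_rows_in_result():
SELECT MAX(a) FROM t1 WHERE a > ALL (SELECT b FROM t2);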
Handle 'set read_only=1' in a lighter way than FLUSH TABLES WITH READ LOCK.
For transactional engines we don't wait for operations on those tables to finish.
per-file comments:
mysql-test/r/read_only_innodb.result
MDEV-136 Non-blocking "set read_only".
test result updated.
mysql-test/t/read_only_innodb.test
MDEV-136 Non-blocking "set read_only".
test case added.
sql/mysql_priv.h
MDEV-136 Non-blocking "set read_only".
The close_cached_tables_set_readonly() declared.
sql/set_var.cc
MDEV-136 Non-blocking "set read_only".
Call close_cached_tables_set_readonly() for the read_only::set_var.
sql/sql_base.cc
MDEV-136 Non-blocking "set read_only".
Parameters added to the close_cached_tables implementation,
close_cached_tables_set_readonly declared.
Prevent blocking on the transactional tables if the
set_readonly_mode is on.
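An illustrative sketch of the behavior change (table and session numbering invented):
-- session 1: leave a transaction on an InnoDB table open
BEGIN;
UPDATE t1 SET a = a + 1;
-- session 2: with this patch, setting read_only no longer waits
-- for the transactional table operation to finish:
SET GLOBAL read_only = 1;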
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous. In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement. So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.
Modified the fix based on review comment by Svoj.
Problem
========
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
The reason why this failure occurs is that the event length is
more than the total of max_allowed_packet plus the maximum event
header length. Since the event length exceeds this size, the master
Dump thread is unable to send the packet on to the slave.
This can happen e.g. with row-based replication in an Update_rows event.
Fix
====
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option, included to
regulate the max_allowed_packet of the slave threads (IO/SQL).
This allows the large packets to be received by the slave and
applied successfully.
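A hedged usage sketch; the option name below is an assumption made for illustration, since the message does not name the new option:
-- on the slave, the replication threads' packet limit is regulated
-- separately from the ordinary global max_allowed_packet
-- (slave_max_allowed_packet is the assumed option name):
SET GLOBAL slave_max_allowed_packet = 1073741824;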
sql/log_event.h:
Added the new option in the log_event.h file.
sql/mysqld.cc:
Added a new option to the server.
sql/slave.cc:
Increased the session max_allowed_packet to a large value,
i.e. not taking the global max_allowed_packet into consideration, for the slave's threads.
- make make_cond_after_sjm() correctly handle OR clauses where one branch refers to the semi-join table
while the other branch refers to the non-semijoin table.
Problem: After the fix for Bug#12589870, a new field that
stores the length of db name was added in the buffer that
stores the query to be executed. Unlike for the plain user
session, the replication execution did not allocate the
necessary chunk in Query-event constructor. This caused an
invalid read while accessing this field.
Solution: We fix this problem by allocating the necessary chunk
in the buffer created in Query_log_event::Query_log_event()
and storing the length of the database name.
sql/log_event.cc:
Added a new field in the buffer created in the
Query_log_event constructor and stored the length
of the database name.
PROBLEM:
Threads end-up in deadlock due to locks acquired as described
below,
con1: Run a query on a table.
It is important that this SELECT must back off while
trying to open t1 and enter wait_for_condition().
The SELECT is then blocked trying to lock mysys_var->mutex,
which is held by con3. The very significant fact here is
that mysys_var->current_mutex will still point to LOCK_open,
even if LOCK_open is no longer held by con1 at this point.
con2: Try dropping table used in con1 or query some table.
It will hold LOCK_open and be blocked trying to lock
kernel_mutex held by con4.
con3: Try killing the query run by con1.
It will hold THD::LOCK_thd_data belonging to con1 while
trying to lock mysys_var->current_mutex belonging to con1.
But current_mutex will point to LOCK_open which is held
by con2.
con4: Get innodb engine status
It will hold kernel_mutex, trying to lock THD::LOCK_thd_data
belonging to con1 which is held by con3.
So while technically only con2, con3 and con4 participate in the
deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
is a vital component of the deadlock.
CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
kernel_mutex -> THD::LOCK_thd_data)
FIX:
LOCK_thd_data has the responsibility of protecting:
1) thd->query, thd->query_length
2) VIO
3) thd->mysys_var (used by KILL statement and shutdown)
4) THD during thread delete.
Among the above responsibilities, 1), 2) and (3,4) seem to be three
independent groups of responsibility. If a different lock owns
responsibility for (3,4), the above-mentioned deadlock cycle
can be avoided. This fix introduces LOCK_thd_kill to handle
responsibility (3,4), which eliminates the deadlock issue.
Note: The problem is not found in 5.5. The introduction of the MDL
subsystem moved metadata locking responsibility from TDC/TC to the
MDL subsystem. Due to this, the responsibility of LOCK_open is reduced.
As the use of LOCK_open is removed in open_table() and
mysql_rm_table() the above mentioned CYCLE does not form.
Revision ID for changes,
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous. In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement. So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.
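An illustrative setup (names invented): the statement inserts three rows while a trigger is present, and with innodb_autoinc_lock_mode=1 the generated values should still be contiguous.
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
CREATE TABLE audit (v INT) ENGINE=InnoDB;
CREATE TRIGGER t_ai AFTER INSERT ON t
FOR EACH ROW INSERT INTO audit VALUES (NEW.v);
-- t.id should come out as three contiguous values, e.g. 1,2,3:
INSERT INTO t (v) VALUES (1),(2),(3);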
rb://1074 approved by Alexander Nozdrin
If we did nothing in resolving a unique table conflict, we should not retry (it led to an infinite loop).
Now we retry (recheck) the unique table check only in the case where we materialized a table.
- Let fix_semijoin_strategies_for_picked_join_order() set
POSITION::prefix_record_count for POSITION records that it copies from
SJ_MATERIALIZATION_INFO::tables.
(These records do not have prefix_record_count set, because they are optimized
as joins-inside-semijoin-nests, without full advance_sj_state() processing).
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
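A plausible illustration (hypothetical tables) of why such predicates are not null-rejecting: with t3 empty, "t2.b > ALL (SELECT c FROM t3)" is TRUE even when t2.b is NULL, so NULL-padded rows from the outer join can still match and the LEFT JOIN must not become an inner join.
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE TABLE t3 (c INT);
INSERT INTO t1 VALUES (1);
-- t2 and t3 stay empty; the correct result is one row, NULL-padded:
SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.b
WHERE t2.b > ALL (SELECT c FROM t3);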
Optimization of aggregate functions detected a constant under max() and evaluated it, but a condition in the WHERE clause (which is always FALSE) was not taken into account.
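A minimal sketch of the symptom (hypothetical table):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
-- the constant under MAX() was evaluated while the always-FALSE
-- WHERE was ignored; the correct result is NULL:
SELECT MAX(1) FROM t1 WHERE 1 = 0;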
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY
Original comment:
-----------------
3714 Jorgen Loland 2012-03-01
BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT
QUERY OUTPUT
For all but simple grouped queries, temporary tables are used to
resolve grouping. In these cases, the list of grouping fields is
stored in the temporary table and grouping is resolved
there (e.g. by adding a unique constraint on the involved
fields). Because of this, grouping is already done when the rows
are read from the temporary table.
In the case where a group clause may be optimized away, grouping
does not have to be resolved using a temporary table. However, if
a temporary table is explicitly requested (e.g. because the
SQL_BUFFER_RESULT hint is used, or the statement is
INSERT...SELECT), a temporary table is used anyway. In this case,
the temporary table is created with an empty group list (because
the group clause was optimized away) and it will therefore not
create groups. Since the temporary table does not take care of
grouping, JOIN::group shall not be set to false in
make_simple_join(). This was fixed in bug 12578908.
However, there is an exception where make_simple_join() should
set JOIN::group to false even if the query uses a temporary table
that was explicitly requested but is not strictly needed. That
exception is if the loose index scan access method (explain
says "Using index for group-by") is used to read into the
temporary table. With loose index scan, grouping is resolved
by the access method. This is exactly what happens in this bug.
This is a backport of the fix for MySQL bug #13723054 in 5.6.
Original comment:
The crash is caused by overwriting an arbitrary memory area in the
case of BLOB fields, during an attempt to copy a BLOB field key image
into the record buffer (the record buffer is too small to hold the
BLOB key part image).
note:
QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
because it uses the record buffer as a temporary buffer for key values.
However, this case is filtered out by the covering_keys() check
in get_best_group_min_max(), as BLOBs always require a key length
modifier in the key declaration, and a key that includes a BLOB
can therefore never be a covering key.
The fix is to use 'max_used_key_length' key length instead of 0.
Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field which is not used
for index access. Copying the BLOB field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it is connected by OR with another expression.
The solution is to mark as constant only top-level equalities connected by AND.
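A rough sketch of the shape involved (tables invented; t2_1 is the alias named in the original test): the equality on t2_1.c sits under OR, so its index part must not be marked constant.
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (c INT, KEY(c));
SELECT * FROM t1, t2 AS t2_1
WHERE t1.a = t2_1.c AND (t2_1.c = 1 OR t1.b = 2);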
Create an Item_cache based on the item's cmp_type, not its result_type, in
subselect_engine.
Use result_field in Item_cache_temporal::cache_value(),
just like all the other Item_cache*::cache_value() methods do.