The item was reached by fix_fields() (via a reference) before the row it belongs to (on the second execution),
and fix_fields() for the row did not follow the usual protocol for Items with arguments
(first check that the item is already fixed, then call fix_fields()).
Item_row::fix_fields() fixed.
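For illustration, a minimal sketch of that protocol with simplified stand-in types
(hypothetical names, not the actual sql/item.h classes): a composite item skips
arguments that are already fixed before calling fix_fields() on them.

  #include <vector>

  struct Item {
    bool fixed= false;                  // set once the item has been resolved
    virtual bool fix_fields() { fixed= true; return false; }  // false = success
    virtual ~Item() {}
  };

  struct Item_row_model : Item {
    std::vector<Item*> args;
    bool fix_fields() override {
      for (Item *arg : args) {
        // Usual protocol: check 'fixed' first, because an argument may already
        // have been fixed via a reference before the row itself is reached.
        if (!arg->fixed && arg->fix_fields())
          return true;                  // propagate the error
      }
      fixed= true;
      return false;
    }
  };

  int main() {
    Item a; a.fixed= true;              // already fixed through a reference
    Item b;
    Item_row_model row;
    row.args= {&a, &b};
    return row.fix_fields() ? 1 : 0;    // b gets fixed, a is skipped
  }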
Allow only three failed change_user attempts per connection.
A successful change_user does NOT reset the counter (as sketched below).
tests/mysql_client_test.c:
Make --error work for --change_user errors.
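As a rough illustration of the policy only (hypothetical names, not the actual
server code): failed attempts are counted per connection, and success does not
clear the counter.

  struct Connection_model {
    static const int MAX_FAILED_CHANGE_USER= 3;
    int failed_change_user= 0;

    // Returns false when the attempt is rejected or authentication fails.
    bool change_user(bool auth_ok) {
      if (failed_change_user >= MAX_FAILED_CHANGE_USER)
        return false;                   // too many failures: refuse outright
      if (!auth_ok) {
        ++failed_change_user;
        return false;
      }
      // Note: a successful change_user does NOT reset failed_change_user.
      return true;
    }
  };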
This bug could result in returning 0 for the expressions of the form
<aggregate_function>(distinct field) when the system variable
max_heap_table_size was set to a small enough number.
It happened because the method Unique::walk() did not support
the case when more than one pass was needed to merge the trees
of distinct values saved in an external file.
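The essence can be modelled outside the server: walking the distinct values is
only correct if all sorted runs written to the file are merged, however many
there are. The sketch below uses a single priority-queue merge as a simplified
substitute (hypothetical walk_distinct(); the real Unique::walk() instead has to
perform buffered merge passes when the runs do not fit into one pass).

  #include <cstddef>
  #include <queue>
  #include <utility>
  #include <vector>

  // Count distinct values spread over several sorted runs (a stand-in for the
  // trees of distinct values that were flushed to the external file).
  static size_t walk_distinct(const std::vector<std::vector<int>> &runs) {
    using Cur= std::pair<int, size_t>;               // value, run index
    auto cmp= [](const Cur &a, const Cur &b) { return a.first > b.first; };
    std::priority_queue<Cur, std::vector<Cur>, decltype(cmp)> heap(cmp);
    std::vector<size_t> pos(runs.size(), 0);

    for (size_t r= 0; r < runs.size(); r++)
      if (!runs[r].empty())
        heap.push({runs[r][0], r});

    size_t distinct= 0;
    bool have_last= false;
    int last= 0;
    while (!heap.empty()) {
      auto [val, r]= heap.top();
      heap.pop();
      if (!have_last || val != last) {               // a new distinct value
        distinct++;
        last= val;
        have_last= true;
      }
      if (++pos[r] < runs[r].size())
        heap.push({runs[r][pos[r]], r});
    }
    return distinct;
  }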
Backported a fix in grant_lowercase.test from mariadb 5.5.
Early evaluation of subqueries in the WHERE conditions on I_S.*_STATUS tables;
otherwise the subquery on this same table will try to acquire LOCK_status twice.
sql/item.h:
Remove an unused method.
This fixed a failing test in group_by.test.
mysql-test/r/join_outer.result:
Updated test case
mysql-test/r/join_outer_jcl6.result:
Updated test case
sql/item.cc:
Don't reset maybe_null in update_used_tables(); resetting it breaks ROLLUP.
sql/item.h:
Don't reset maybe_null in update_used_tables(); resetting it breaks ROLLUP.
sql/item_cmpfunc.h:
Don't reset maybe_null in update_used_tables(); resetting it breaks ROLLUP.
The problem was that maybe_null of Item_row and its components got out of sync after
update_used_tables() (so pushed_cond_guards was not initialized but was later requested).
The fix updates Item_row::maybe_null during update_used_tables().
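A rough sketch of the idea with hypothetical model types (not the real Item_row):
recomputing used tables also refreshes the row's own maybe_null from its
components, so the two can no longer get out of sync.

  #include <vector>

  struct Item_model {
    bool maybe_null= false;
    virtual void update_used_tables() {}
    virtual ~Item_model() {}
  };

  struct Item_row_model : Item_model {
    std::vector<Item_model*> items;
    void update_used_tables() override {
      maybe_null= false;
      for (Item_model *it : items) {
        it->update_used_tables();
        maybe_null|= it->maybe_null;    // keep the row's flag in sync
      }
    }
  };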
Analysis
The less efficient plan was the result of a prior design decision: to limit
the evaluation of constant expressions during optimization to only
non-expensive ones. With this approach all stored procedures were considered
expensive, and were not evaluated during optimization. As a result, SPs didn't
participate in range optimization, which resulted in a plan with table scan
rather than index range scan.
Solution
Instead of considering all SPs expensive, consider expensive only those SPs
that are non-deterministic. If an SP is deterministic, the optimizer will
check if it is constant, and may eventually evaluate it during optimization.
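In outline (hypothetical names, not necessarily the real Item_func_sp
interface), the change amounts to:

  // Only non-deterministic stored functions are treated as "expensive";
  // deterministic ones with constant arguments may be folded into constants
  // during optimization and thus take part in range optimization.
  struct Stored_function_call {
    bool deterministic;                 // declared DETERMINISTIC by the user
    bool arguments_are_constant;

    bool is_expensive() const { return !deterministic; }

    bool can_evaluate_during_optimization() const {
      return !is_expensive() && arguments_are_constant;
    }
  };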
Analysis:
The crash is a result of incorrect analysis of whether a secondary key
can be extended with a primary in order to compute ORDER BY. The analysis
is done in test_if_order_by_key(). This function doesn't take into account
that the primary key may in fact index the same columns as the secondary
key. For the test query, test_if_order_by_key() says that there is an extended
key with a total of 2 keyparts.
At the same time, the condition
if (pkinfo->key_part[i].field->key_start.is_set(nr))
in test_if_cheaper_ordering() becomes true for (i == 0), which results in
an invalid access to rec_per_key[-1].
Solution:
The best solution would be to reuse KEY::ext_key_parts that is already computed
by open_binary_frm(), however after detailed analysis the conclusion is that
the change would be too intrusive for a GA release.
The solution for 5.5 is to add a guard for the case when the 0-th key part is
considered, and to assume that all keys will be scanned in this case.
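A hedged sketch of that guard with hypothetical names (the real code is in
test_if_cheaper_ordering()): rec_per_key[i-1] exists only for i > 0, so for the
0-th key part assume every row is read.

  #include <cstdint>

  static double expected_rows_per_scan(const uint64_t *rec_per_key, unsigned i,
                                       uint64_t table_rows) {
    if (i == 0)
      return static_cast<double>(table_rows);        // guard: no rec_per_key[-1]
    return static_cast<double>(rec_per_key[i - 1]);
  }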
The bug could lead to a wrong estimate of the number of expected rows
in the output of EXPLAIN for queries with GROUP BY.
This could be observed in the test case for LP bug 934348.
Fixed MDEV-4002: Server crash or valgrind errors in Item_func_group_concat::setup and Item_func_group_concat::add
mysql-test/r/group_by.result:
Added test case for failing GROUP_CONCAT ... ROLLUP queries
mysql-test/t/group_by.test:
Added test case for failing GROUP_CONCAT ... ROLLUP queries
sql/item_sum.cc:
Fixed an issue where field->table pointed to a different temporary table than expected.
Ensure that order->next points to the right object (a wrong pointer could cause
problems in setup_order()).
- Fixed broadcast without a proper mutex
- Don't break existing locks if we are just testing if we can get the lock
mysql-test/r/create_delayed.result:
Added test case for failures with INSERT DELAYED with CREATE and DROP TABLE
mysql-test/t/create_delayed.test:
Added test case for failures with INSERT DELAYED with CREATE and DROP TABLE
sql/mdl.cc:
Don't break existing locks for timeout=0 (i.e., just check whether there are conflicting locks).
This fixed the bug that INSERT DELAYED didn't work properly with CREATE TABLE
sql/sql_base.cc:
One needs to hold the mutex before doing a mysql_cond_broadcast() (sketched below).
This fixed the bug that INSERT DELAYED didn't work properly with DROP TABLE
sql/sql_insert.cc:
Protect setting of mysys_var->current_mutex.
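The broadcast note above is the standard condition-variable rule: signal while
holding the mutex the waiter uses. A generic sketch with std::condition_variable
(the server uses its own mysql_mutex_t/mysql_cond_t wrappers, so the names here
are only illustrative):

  #include <condition_variable>
  #include <mutex>

  std::mutex mu;
  std::condition_variable cond;
  bool table_dropped= false;

  // Waiter (e.g. a delayed-insert handler waiting for the table to go away).
  void wait_for_drop() {
    std::unique_lock<std::mutex> lock(mu);
    cond.wait(lock, [] { return table_dropped; });
  }

  // Signaller: take the mutex *before* broadcasting; otherwise the waiter can
  // miss the wakeup between testing the predicate and going to sleep.
  void signal_drop() {
    std::lock_guard<std::mutex> lock(mu);
    table_dropped= true;
    cond.notify_all();
  }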
from a MERGE view.
The problem was that the ability to be NULL was lost for a left join table when it
is a view or derived table.
It happened because setup_table_map() was called before the view or derived
table was merged.
Fixed by propagating the new maybe_null flag during Item::update_used_tables().
The changes in join_outer.test and join_outer_jcl6.test appeared because
IS NULL reported no used tables (i.e. constant) for an argument that could not be
NULL, and the new maybe_null flag is now propagated to the IS NULL argument (an
Item_field) because the table the Item_field belongs to changed its maybe_null status.
Analysis:
The following call stack shows that it is possible to set Item_cache::value_cached and
the relevant value without setting Item_cache::example.
#0 Item_cache_temporal::store_packed at item.cc:8395
#1 get_datetime_value at item_cmpfunc.cc:915
#2 resolve_const_item at item.cc:7987
#3 propagate_cond_constants at sql_select.cc:12264
#4 propagate_cond_constants at sql_select.cc:12227
#5 optimize_cond at sql_select.cc:13026
#6 JOIN::optimize at sql_select.cc:1016
#7 st_select_lex::optimize_unflattened_subqueries at sql_lex.cc:3161
#8 JOIN::optimize_unflattened_subqueries at opt_subselect.cc:4880
#9 JOIN::optimize at sql_select.cc:1554
The fix is to set Item_cache_temporal::example even when the value is
set directly by Item_cache_temporal::store_packed. This makes the
Item_cache_temporal object consistent.
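In outline (a hypothetical simplified model, not the real Item_cache_temporal
class): the path that stores a packed value also records which item it came
from, keeping the cache consistent.

  struct Item_cache_model {
    bool value_cached= false;
    long long packed_value= 0;
    const void *example= nullptr;       // the item the cached value belongs to

    void store_packed(long long val, const void *item) {
      packed_value= val;
      value_cached= true;
      example= item;                    // previously left unset on this path
    }
  };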
Analysis:
In the beginning of JOIN::cleanup there is code that is supposed to
free all filesort buffers. The code assumes that the table being sorted
is the first non-constant table. To get this table it calls:
first_top_level_tab(this, WITHOUT_CONST_TABLES)
However, first_top_level_tab() returned the wrong table: the first
one in the plan instead of the first non-constant table. There is no other
place outside filesort() where sort buffers may be freed. As a result, the
sort buffer was not freed, and there was a memory leak.
Solution:
Change first_top_level_tab() to test for WITH_CONST_TABLES instead of
WITHOUT_CONST_TABLES.
mysql-test/r/create.result:
Updated test results
mysql-test/t/create.test:
Updated test
sql/sql_base.cc:
Use push_internal_handler()/pop_internal_handler() instead of clear_error() to suppress
errors and warnings.
Give a warning instead of an error for CREATE TABLE IF NOT EXISTS.
sql/sql_parse.cc:
Check if we failed because the table exists (can only happen from CREATE).
sql/sql_table.cc:
Check if we failed because the table exists (can only happen from CREATE).
Analysis:
The reason for the suboptimal plan when querying IS tables through a view
was that the view columns that participate in an equality were wrapped in
an Item_direct_view_ref and so were not recognized as direct column
references.
Solution:
Use the original Item_field objects via the real_item() method.
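Schematically (hypothetical model types; real_item() itself is an existing Item
method, per the fix above): look through reference wrappers before deciding
whether an equality member is a direct column reference.

  struct Item_model {
    virtual bool is_field() const { return false; }
    virtual Item_model *real_item() { return this; }  // default: the item itself
    virtual ~Item_model() {}
  };

  struct Field_item_model : Item_model {
    bool is_field() const override { return true; }
  };

  struct View_ref_model : Item_model {
    Item_model *ref;
    explicit View_ref_model(Item_model *r) : ref(r) {}
    Item_model *real_item() override { return ref->real_item(); }
  };

  static bool is_direct_column_ref(Item_model *item) {
    return item->real_item()->is_field();  // a column behind a view still counts
  }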