Item_float::neg() did not preserve the "presentation" from "this".
So
CAST(-1e0 AS UNSIGNED) -- cast from double to unsigned
changes its meaning to:
CAST(-1 AS UNSIGNED) -- cast signed to unsigned
Fixing Item_float::neg() to construct the new value for
Item_float::presentation as follows:
- if the old value starts with minus, then the minus is truncated:
'-2e0' -> '2e0'
- otherwise, the new value is a minus sign followed by the old value:
'1e0' -> '-1e0'
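One place where the printed form of an item is reused is a stored view
definition. A hypothetical illustration (the view name is made up):

  CREATE VIEW v1 AS SELECT CAST(-1e0 AS UNSIGNED) AS c1;
  SHOW CREATE VIEW v1;
  -- without the fix the stored definition contains CAST(-1 AS UNSIGNED),
  -- a signed-to-unsigned cast; with the fix it keeps CAST(-1e0 AS UNSIGNED)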
If a query contained a CTE whose name coincided with the name of one of
the base tables used in the specification of the CTE and the query had at
least two references to this CTE in the specifications of other CTEs then
processing of the query led to unlimited recursion that ultimately caused
a crash of the server.
Any secondary non-recursive reference to a CTE requires creation of a copy
of the CTE specification. All the references to CTEs in this copy must be
resolved. If the specification contains a reference to a base table whose
name coincides with the name of the CTE, then it must be ensured that
this reference can in no way be resolved against the name of the CTE.
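A hypothetical query of the shape described above (names are made up):

  CREATE TABLE t1 (a INT);
  WITH t1 AS (SELECT a FROM t1),   -- CTE t1 named after the base table t1
       c1 AS (SELECT a FROM t1),   -- first reference to the CTE t1
       c2 AS (SELECT a FROM t1)    -- second reference to the CTE t1
  SELECT * FROM c1, c2;
  -- resolving the copy of the CTE specification made for the secondary
  -- reference must bind "FROM t1" to the base table, not to the CTE itself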
If a query has a HAVING clause that contains a predicate with a constant
IN subquery whose left part is in its turn a subquery, and the predicate is
subject to pushdown from HAVING to WHERE then execution of the query could
cause a crash of the server.
The cause of the problem was the missing implementation of the walk()
method for the class Item_in_optimizer. As a result, in some cases the left
operand of the Item_in_optimizer condition could be traversed twice by
the walk procedure. For many call-back functions used as an argument of
this procedure this does not matter. Yet it matters for the call-back
function cleanup_excluding_immutables_processor() used in pushdown of
predicates from HAVING to WHERE. If the processed item is marked with
the IMMUTABLE_FL flag then the processor just removes this flag, otherwise
it performs cleanup of the item making it unfixed. If an item is marked
with the IMMUTABLE_FL flag and is traversed with this processor twice, then
it becomes unfixed after the second traversal though the flag indicates
that the item should not be cleaned up.
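A hypothetical query of the shape described above (tables are made up):

  SELECT a, MAX(b) FROM t1 GROUP BY a
  HAVING (SELECT MAX(c) FROM t2) IN (SELECT d FROM t3);
  -- the IN predicate is constant and its left part is itself a subquery,
  -- so the predicate is subject to pushdown from HAVING to WHERE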
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
sp_cache erroneously looked up fully qualified SP names (e.g. `DB`.`SP`),
in a case insensitive manner. This was wrong, because only the "name"
part is always case insensitive, while the "db" part should be compared
according to lower_case_table_names (case sensitively for 0,
case insensitively for 1 and 2).
Fix:
Adding a "casedn_name" parameter make_qname() to tell
if the name part should be lower cased:
`DB1`.`SP` -> "DB1.SP" (when casedn_name=false)
`DB1`.`SP` -> "DB1.sp" (when casedn_name=true)
and using make_qname() with casedn_name=true when creating
sp_cache hash lookup keys.
Details:
As a result, it now works as follows:
- sp_head::m_db is converted to lower case if lower_case_table_names>0
during the sp_name initialization phase. So when make_qname() is called,
sp_head::m_db is already normalized. There are no changes here.
- The initialization phase of sp_head when creating sp_head::m_qname
now calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to sp_head::m_qname in lower case.
- sp_cache_lookup() now also calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to the temporary lookup key in lower case.
- sp_cache::m_hashtable now uses case sensitive comparison.
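A hypothetical illustration of why this matters, assuming
lower_case_table_names=0 (i.e. a case sensitive "db" part):

  CREATE DATABASE DB1;
  CREATE DATABASE db1;
  CREATE PROCEDURE DB1.sp() SELECT 1;
  CREATE PROCEDURE db1.sp() SELECT 2;
  CALL DB1.SP();  -- must resolve to DB1.sp ("name" is case insensitive)
                  -- and never to db1.sp ("db" is case sensitive here)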
Consider this query:
SELECT t1.* FROM t1, (SELECT t2.b FROM t2 WHERE NOT EXISTS
(SELECT 1 FROM t3) GROUP BY b) sq WHERE sq.b = t1.a;
If SELECT 1 FROM t3 is expensive, for example if t3 has more rows than
thd->variables.expensive_subquery_limit, the first evaluation is deferred to
mysql_derived_fill(). There it is noted that, in the above case,
NOT EXISTS (SELECT 1 FROM t3) is constant and false.
This causes the join variable zero_result_cause to be set to
"Impossible WHERE noticed after reading const tables" and the handler
for this join is never "opened" via handler::ha_open.
When mysql_derived_fill() is called for the next group of results, this
unopened handler is not taken into account.
reviewed by Igor Babaev (igor@mariadb.com)
Calculate the auto-inc value even if the long unique duplicate check fails -
this is what the engine does for normal uniques.
The auto-inc value is needed if it's a REPLACE.
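A hypothetical example (the UNIQUE over a BLOB is a long unique):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY,
                   b BLOB, UNIQUE KEY (b));
  REPLACE INTO t1 (b) VALUES ('x');
  REPLACE INTO t1 (b) VALUES ('x');
  -- the second REPLACE fails the long unique duplicate check,
  -- but still needs a generated auto-inc value for the replacing row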
* treat FUNC/ARRAY variables as SESSION (otherwise they won't be shown)
* allow SHOW_SIMPLE_FUNC everywhere where SHOW_FUNC is
* increase row buffer size to avoid "too short" assert
The test uses +d,getnameinfo_fake_long_host to return a fake long
hostname in ip_to_hostname(). But this dbug keyword is only checked
after the lookup in the hostname cache.
The test has to flush the hostname cache in case previous tests
had it populated with fake IP addresses (perfschema tests do that).
Also, remove redundant `connection` directives.
The testcase had a race in two places where a KILL QUERY is made towards a
running query in another connection. The query can complete early so the kill
is lost, and the test fails due to expecting ER_QUERY_INTERRUPTED.
Fix by removing the KILL QUERY. It is not needed, as the query completes by
itself after SHOW EXPLAIN FOR.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The crash was caused by dereferencing a null pointer when getting
the number of nesting levels of the set function for the current
select_lex in the method Item_field::fix_fields().
The current select for processing is taken from the Name_resolution_context
that is filled in by the function set_new_item_local_context(), where
initialization of the data member Name_resolution_context::select_lex
was mistakenly removed by the commit
d6ee351bbb
(Revert "MDEV-24454 Crash at change_item_tree")
To fix the issue, the initialization of the data member
Name_resolution_context::select_lex
that was removed by the commit d6ee351bbb
is restored.
This patch also fixes a too strong condition in the assert in the method
Item_func_group_concat::fix_fields
that holds in case of a stored routine but is obviously broken
for a prepared statement.
Enable unusable key notes for non-equality predicates:
<, <=, >=, >, BETWEEN, IN, LIKE
Note, in some scenarios it displays duplicate notes, e.g.
for queries with ORDER BY:
SELECT * FROM t1
WHERE indexed_string_column >= 10
ORDER BY indexed_string_column
LIMIT 5;
This should be tolerable. Getting rid of the duplicate note
completely would need a much more complex patch, which is
not desirable in 10.6.
Details:
- Changing RANGE_OPT_PARAM::note_unusable_keys from bool
to a new data type Item_func::Bitmap, so the caller can
choose with better granularity which predicates
should raise unusable key notes inside the range optimizer:
a. all predicates (=, <=>, <, <=, >=, >, BETWEEN, IN, LIKE)
b. all predicates except equality (=, <=>)
c. none of the predicates
"b." is needed because in some scenarios equality predicates (=, <=>)
send unusable key notes at an earlier stage, before the range optimizer,
during update_ref_and_keys(). Calling the range optimizer with
"all predicates" would produce duplicate notes for = and <=> in such cases
(an illustration follows at the end of these details).
- Fixing get_quick_record_count() to call the range optimizer
with "all predicates except equality" instead of "none of the predicates".
Before this change the range optimizer suppressed all notes for
non-equality predicates: <, <=, >=, >, BETWEEN, IN, LIKE.
This actually fixes the reported problem.
- Fixing JOIN::make_range_rowid_filters() to call the range optimizer
with "all predicates except equality" instead of "all predicates".
Before this change the range optimizer produced duplicate notes
for = and <=> during a rowid_filter optimization.
- Cleanup:
Adding the op_collation argument to Field::raise_note_cannot_use_key_part()
and displaying the operation collation rather than the argument collation
in the unusable key note. This is important for operations with more than
two arguments: BETWEEN and IN, e.g.:
SELECT * FROM t1
WHERE column_utf8mb3_general_ci
BETWEEN 'a' AND 'b' COLLATE utf8mb3_unicode_ci;
SELECT * FROM t1
WHERE column_utf8mb3_general_ci
IN ('a', 'b' COLLATE utf8mb3_unicode_ci);
The note for 'a' now prints utf8mb3_unicode_ci as the collation,
which is the collation of the entire operation:
Cannot use key key1 part[0] for lookup:
"`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
"'a'" of collation `utf8mb3_unicode_ci`
Before this change it printed the collation of 'a',
so the note was confusing:
Cannot use key key1 part[0] for lookup:
"`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
"'a'" of collation `utf8mb3_general_ci`"
Also, during startup, let's not raise an "Error" on attempting to install a
mysql.plugin entry that is already there. We set the 'if_not_exists'
parameter to true to downgrade this to a "Note".
Also corrects: MDEV-32041 "plugin already loaded" should be a Warning, not an Error
Because --delete-master-logs immediately purges logs after flushing,
it is possible the binlog dump thread would still be using the old
log when the purge executes, preventing the file from being
deleted.
This patch institutes a work-around in the test as follows:
1) temporarily stop the slave so there is no chance the old binlog
is still being referenced.
2) set master_use_gtid=Slave_pos so the slave can still appear
up-to-date on the master after the master flushes/purges its logs
(while the slave is offline). Otherwise (i.e. if using binlog
file/pos), the slave would point to a purged log file, and receive
an error immediately upon connecting to the master.
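A sketch of the slave-side part of this workaround (the exact test
syntax may differ):

  STOP SLAVE;
  CHANGE MASTER TO master_use_gtid= Slave_pos;
  -- the master flushes and purges its binary logs at this point
  -- (--delete-master-logs), while the slave is offline
  START SLAVE;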
Reviewed By
============
Andrei Elkin <andrei.elkin@mariadb.com>