log_slow_filter=admin has been available for a long time.
Users can migrate from log_slow_admin_statements=OFF by removing
'admin' from the default log_slow_filter variable setting.
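For example (a sketch; the filter value shown is the assumed default
list minus 'admin', so check SHOW VARIABLES on your server):

  -- Deprecated way of excluding admin statements from the slow log:
  SET GLOBAL log_slow_admin_statements=OFF;
  -- Equivalent: remove 'admin' from log_slow_filter:
  SET GLOBAL log_slow_filter='filesort,filesort_on_disk,filesort_priority_queue,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk';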
Try to make version strings less confusing for users.
Hopefully, if the version string is changed like this:
- mariadb Ver 15.1 Distrib 10.11.2-MariaDB for Linux (x86_64)
+ mariadb from 10.11.2-MariaDB, client 15.1 for Linux (x86_64)
users will be less inclined to reply "15.1" to the question
"what mariadb version are you using?"
Analysis:
When we skip a level after the path is found, it changes the state of
the JSON engine. This breaks the sequence for json_get_path_next(),
which is called at the end to ensure the JSON document is valid, and
leads to a crash.
Fix:
Use json_scan_next() at the end to check whether the JSON document has
correct syntax (is valid).
Analysis:
Parsing the JSON path happens only once. When parsing, we record the
types of the path (types_used) for later use. Multiple values get added
to the result set only if the path type has a range or wildcard.
However, for each row in the table, types_used still gets overwritten
to the default (no multiple values) and is not set again (because the
path is already parsed). Since multiple values depend on the type of
the path, they don't get added to the result set either.
Fix:
Set the default for types_used only if the path is not already parsed.
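A minimal illustration (hypothetical table and data, not from the
original report); the wildcard path should yield all matches for every
row, not just the first:

  CREATE TABLE t1 (j TEXT);
  INSERT INTO t1 VALUES ('[1,2,3]'), ('[4,5,6]');
  -- '$[*]' is a wildcard path, so each row should produce all of
  -- its matches: [1, 2, 3] and [4, 5, 6]
  SELECT JSON_EXTRACT(j, '$[*]') FROM t1;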
The reason things fail in 10.5 and above is that test_quick_select()
returns -1 (impossible range) for empty tables if there are any
conditions attached.
This didn't happen in 10.4, as the cost for a range was more than for
a table scan with 0 rows, so get_key_scan_params() did not create any
range plans and thus did not mark the range as impossible.
The code that checked the 'impossible range' conditions did not take
into account all cases of LEFT JOIN usage.
Adding an extra check for whether the table is used with an ON
condition in the 'impossible range' case fixes the issue.
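A sketch of the affected shape (hypothetical tables, not the original
test case): the inner table is empty and has a condition attached, so
the range over it is 'impossible', yet the LEFT JOIN must still return
the rows of t1 NULL-complemented:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, KEY(a));
  INSERT INTO t1 VALUES (1),(2);
  -- t2 is empty; the 'impossible range' over t2 must not make the
  -- whole LEFT JOIN return nothing
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a AND t2.a > 0;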
This issue was caused by the bug fix for
MDEV-30325 Wrong result upon range query using index condition
The bug could happen in the case of several overlapping key ranges
combined with OR.
Before commit 6112853cda in MySQL 4.1.1
introduced the parameter innodb_file_per_table, all InnoDB data was
written to the InnoDB system tablespace (often named ibdata1).
A serious design problem is that once the system tablespace has grown to
some size, it cannot shrink even if the data inside it has been deleted.
There are also other design problems, such as the server hang MDEV-29930
that should only be possible when using innodb_file_per_table=0 and
innodb_undo_tablespaces=0 (storing both tables and undo logs in the
InnoDB system tablespace).
The parameter innodb_change_buffering was deprecated
in commit b5852ffbee.
Starting with commit baf276e6d4
(MDEV-19229) the number of innodb_undo_tablespaces can be increased,
so that the undo logs can be moved out of the system tablespace
of an existing installation.
If all these things (tables, undo logs, and the change buffer) are
removed from the InnoDB system tablespace, the only variable-size
data structure inside it is the InnoDB data dictionary.
DDL operations on .ibd files were optimized in
commit 86dc7b4d4c (MDEV-24626).
That should have removed any conceivable performance advantage of
using innodb_file_per_table=0.
Since there should be no benefit of setting innodb_file_per_table=0,
the parameter should be deprecated. Starting with MySQL 5.6 and
MariaDB Server 10.0, the default value is innodb_file_per_table=1.
(Variant 3, initial variant was by Rex Jonston)
A LEFT JOIN with a constant as a column of the inner table produced a
wrong query result if the optimizer had to write the inner table column
into a temp table. Query pattern:
SELECT ...
FROM (SELECT /*non-mergeable select*/
FROM t1 LEFT JOIN (SELECT 'Y' as Val) t2 ON ...) as tbl
Fixed this by adding Item_direct_view_ref::save_in_field() which follows
the pattern of Item_direct_view_ref's save_org_in_field(),
save_in_result_field() and val_XXX() functions:
* call check_null_ref() and handle NULL value
* if we didn't get a NULL-complemented row, call Item_direct_ref's function.
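A runnable variant of the pattern above (hypothetical data; DISTINCT is
one way to make the derived select non-mergeable):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  -- The row with a=2 gets a NULL-complemented t2; writing Val into
  -- the temp table for the derived select must keep that NULL
  SELECT * FROM (SELECT DISTINCT t1.a, t2.Val
                 FROM t1 LEFT JOIN (SELECT 'Y' AS Val) t2
                 ON t1.a = 1) AS tbl;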
The MDEV-25004 test innodb_fts.versioning is omitted because ever since
commit 685d958e38 InnoDB would not allow
writes to a database where the redo log file ib_logfile0 is missing.
It's incorrect to use change_item_tree() to replace arguments
of top-level AND/OR, because they (arguments) are stored in a List,
so a pointer to an argument is in the list_node, and individual
list_node's of top-level AND/OR can be deleted in Item_cond::build_equal_items().
In that case rollback_item_tree_changes() will modify the deleted object.
Luckily, there's no need to use change_item_tree() for top-level
AND/OR, because the whole top-level item is copied and preserved
in prep_where and prep_on, and restored from there.
So, just don't.
In addition to the test case in this commit, it fixes:
* ASAN failure of main.opt_tvc --ps
* ASAN failure of main.having_cond_pushdown --ps
When an internal temporary table field is created from a real field,
the new temp field should only copy a default from the source field
when the latter has one.
When creating a temp table field from an actual table field, these two
fields are supposed to be mostly identical (except for BIT field
storage); in particular, the temp field should have the same default as
the original field, even if the sql_mode has been changed meanwhile
(e.g. to include NO_ZERO_DATE).
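An illustrative sequence (assumed shape, not the original test case):

  SET sql_mode='';
  CREATE TABLE t1 (d DATE DEFAULT '0000-00-00', i INT);
  INSERT INTO t1 (i) VALUES (1),(1);
  SET sql_mode='NO_ZERO_DATE';
  -- The GROUP BY materializes d into an internal temp table; the
  -- temp field must inherit d's default even though the current
  -- sql_mode would reject it
  SELECT d, COUNT(*) FROM t1 GROUP BY d;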
Regression from MDEV-29540 / 8c38939369.
INSERT SELECT errors needed to be unconditionally ignored. As this
touches the CREATE .. SELECT functionality, show the equivalent test
there.
This bug affected queries with nested left joins having the same last
inner table, such that not_exists optimization could be applied to the
innermost outer join when the optimizer chose to use join buffers. The
bug could lead to producing a wrong result set.
If the WHERE condition of a query contains a conjunctive IS NULL
predicate over a non-nullable column of an inner table of a non-nested
outer join, then not_exists optimization can be applied to the outer
join. With this optimization, when looking for matches for a certain
record from the outer table of the join, the records of the inner table
can be ignored right after the first match satisfying the ON condition
is found.
In the case of nested outer joins having the same last inner table,
this optimization can still be applied, but only if all ON conditions
of the embedding outer joins are satisfied. Such a check was missing in
the code that tried to apply not_exists optimization when join buffers
were used for outer join operations.
This problem had already been fixed in the patch for bug MDEV-7992.
Yet there it was resolved only for the cases when join buffers were not
used for outer joins.
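An illustrative query shape (hypothetical tables; t3 is the last inner
table of both outer joins and t3.c is non-nullable):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE TABLE t3 (b INT, c INT NOT NULL);
  -- IS NULL over the non-nullable t3.c enables not_exists
  -- optimization; with join buffering, rows of t3 may only be
  -- skipped once the embedding ON condition t1.a = t2.a also holds
  SELECT * FROM t1
    LEFT JOIN (t2 LEFT JOIN t3 ON t2.b = t3.b) ON t1.a = t2.a
  WHERE t3.c IS NULL;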
Approved by Oleksandr Byelkin <sanja@mariadb.com>
MDEV-22224 caused the parsing of keys with hyphens to break by setting
the state transitions for parsing keys to JE_SYN (syntax error) when
they encounter a hyphen. However, JSON key names may contain hyphens
and still be considered valid JSON.
This patch changes the state transition table so that key names with
hyphens remain valid. Note that unquoted key names in paths like
$.key-name are also valid again. This restores the previous behaviour
when hyphens were considered part of the P_ETC character class.
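For example, both of these are accepted again:

  -- A hyphen inside a key name is valid JSON
  SELECT JSON_VALID('{"key-name": 1}');
  -- and an unquoted hyphenated name in a path works too
  SELECT JSON_EXTRACT('{"key-name": 1}', '$.key-name');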
This was caused by a bug in key_or() when SEL_ARG* key1 has been cloned
and is overlapping with SEL_ARG *key2
Cloning of SEL_ARG's happens only in very special cases, which is why this
bug has remained undetected for years.
It happened in the following query:
SELECT COUNT(*) FROM lineitem force index
(i_l_orderkey_quantity,i_l_shipdate) WHERE
l_shipdate < '1994-01-01' AND l_orderkey < 800 OR
l_quantity > 3 AND l_orderkey NOT IN ( 157, 1444 );
There are two different indexes that can be used, and the code for IN
causes a 'tree_or', which causes all SEL_ARG's to be cloned.
Other things:
- While checking the code, I found a bug in SEL_ARG::SEL_ARG(SEL_ARG &arg)
- This was incrementing next_key_part->use_count as part of creating a
copy of an existing SEL_ARG.
  This is, however, not enough, as the 'reverse operation' when the
  copy is not needed is 'key2_cpy.increment_use_count(-1)', which does
  something completely different.
Fixed by calling increment_use_count(1) in SEL_ARG::SEL_ARG.
The problem was that when storing rows into a temporary table, MIN/MAX
items that were marked as constants (as their value had been computed
at the start of the query) would be reset.
Fixed by not resetting MIN/MAX items that are marked as const in
Item_sum_min_max::clear().
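An illustrative shape (assumed, not the original test case): MIN/MAX
over an indexed column can be computed from the index at the start of
the query and marked const; a later materialization step must not clear
those values:

  CREATE TABLE t1 (a INT, KEY(a));
  INSERT INTO t1 VALUES (1),(2),(3);
  -- MIN(a)/MAX(a) can be resolved from the index before execution;
  -- the aggregating derived table is non-mergeable, so its rows are
  -- stored into an internal temp table, which must not reset the
  -- computed constants
  SELECT * FROM (SELECT MIN(a) AS mn, MAX(a) AS mx FROM t1) dt;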