The function JOIN_TAB::choose_best_splitting() did not take into
account that, for some tables whose fields are used in the GROUP BY
list of the specification of a splittable materialized derived table,
there may be no elements in the array ext_keyuses_for_splitting.
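A query of roughly this shape exercises that code path (a minimal
sketch; table and column names are illustrative, not from the original
report):
CREATE TABLE t1 (id INT, a INT);
CREATE TABLE t2 (id INT, b INT, KEY(id));
SELECT t1.a, dt.max_b
FROM t1,
     (SELECT id, MAX(b) AS max_b FROM t2 GROUP BY id) AS dt
WHERE t1.id = dt.id;
Here the derived table dt is splittable, and the splitting
optimization keys on the GROUP BY field t2.id.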
The crash occurs when the Spider node server attempts to create an
error message stating that the temporary table is not found. The
function that creates the error message was called with incorrect
parameters.
I fixed the crash by correcting the parameter values.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Now that ha_innobase::prepare_inplace_alter_table() is accessing
ha_alter_info->create_info->option_struct, we must initialize it in
the Mroonga wrapper for ALTER TABLE based on the parsed table options
for the wrap_altered_table.
The table option page_compression_level only affects future writes,
not the actual data format. Therefore, we can allow instant changes of
this option.
Similarly, the table option page_compressed can be set on a
previously uncompressed table without rebuilding the table,
because an uncompressed page would be considered valid when
reading a page_compressed table.
Removing the page_compressed option will continue to require
the table to be rebuilt.
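For illustration, after this change statements like the following
(hypothetical table) are expected to be instant, while removing the
option still forces a rebuild:
CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB;
ALTER TABLE t PAGE_COMPRESSED=1, ALGORITHM=INSTANT;
ALTER TABLE t PAGE_COMPRESSION_LEVEL=5, ALGORITHM=INSTANT;
ALTER TABLE t PAGE_COMPRESSED=0; -- rebuilds the table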
ha_innobase_inplace_ctx::page_compression_level: The requested
page_compression_level at the start of ALTER TABLE, or 0 if
page_compressed=OFF.
alter_options_need_rebuild(): Renamed from
create_option_need_rebuild(). Allow page_compression_level and
page_compressed to be changed as above, without rebuilding the table.
ha_innobase::check_if_supported_inplace_alter(): Allow ALGORITHM=INSTANT
for ALTER_OPTIONS if the table is not to be rebuilt. If rebuild is
needed, set ha_alter_info->unsupported_reason.
innobase_page_compression_try(): Update SYS_TABLES.TYPE according
to the table flags, for an instant change of page_compression_level
or page_compressed.
commit_cache_norebuild(): Adjust dict_table_t::flags, fil_space_t::flags
and (if needed) FSP_SPACE_FLAGS if page_compression_level was specified.
In MySQL 5.7, a follow-up to WL#6671 removed the unused
fields ha_innobase::lock and INNOBASE_SHARE::lock, but
MariaDB did not remove them, even though a counterpart of
WL#6671 itself was implemented as MDEV-7660 in
commit d665e79c5b.
INNOBASE_SHARE was removed in MDEV-16557. Thus, all that
needs to be removed is the unused member ha_innobase::lock
and related code.
Thanks to Monty (and Valgrind) for noticing that
ha_innobase::lock was uninitialized.
The problem was that the original alias was replaced with a newly
allocated string, but the constraint items were still pointing to the
original alias.
Fixed by storing the original alias, which is used when printing the
constraint, in the table's mem_root.
The optimizer erroneously allowed the join cache to be used together
with the splitting optimization when joining a splittable materialized
table. As a consequence, in some rare cases the server returned wrong
result sets for queries with materialized derived tables.
This patch allows either the join cache to be used without the
splitting technique for the materialization of a splittable derived
table, or splitting to be used without the join cache when joining
such a table. The costs of these alternatives are compared and the
cheaper variant is chosen.
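As a hypothetical illustration (names invented), for a join like
SELECT t1.a, dt.cnt
FROM t1
JOIN (SELECT id, COUNT(*) AS cnt FROM t2 GROUP BY id) AS dt
  ON t1.id = dt.id;
the optimizer now compares materializing dt as a whole and joining it
through the join cache against splitting the materialization of dt per
join key without the join cache.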
This patch fixes a serious flaw in the implementation of common table
expressions. Before this patch, an attempt to prepare a statement from
a query with a parameter marker in a CTE that was used more than once
in the query ended up with a bogus error message. Similarly, if a
statement in a stored procedure contained a CTE whose specification
used local variables and this CTE was referred to more than once in
the statement, the server failed to execute the stored procedure,
returning a bogus error message about a non-existent field.
The problems appeared due to incorrect handling of parameter markers /
local variables in CTEs that were referred to more than once.
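For example, a prepared statement like the following (hypothetical
table t1) used to fail with such a bogus error because the CTE is
referred to twice:
PREPARE stmt FROM
  'WITH cte AS (SELECT a FROM t1 WHERE a > ?)
   SELECT * FROM cte AS r1, cte AS r2 WHERE r1.a = r2.a';
SET @val = 1;
EXECUTE stmt USING @val;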
This patch fixes the problems by differentiating between the original
occurrences of a parameter marker / local variable in the
specification of a CTE and the corresponding occurrences in copies of
this specification. These copies are substituted for the second and
subsequent references to the CTE.
The idea of the fix and even some code were taken from the MySQL
implementation of the common table expressions.
A debug assertion would fail if an instant ADD COLUMN operation
involves splitting the leftmost leaf page and storing a default
value off-page. Another debug assertion could fail if the
default value does not fit in an undo log page.
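A hypothetical way to hit the first scenario (the exact default value
size needed depends on the page size):
CREATE TABLE t (id INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t VALUES (1);
-- Instantly add a column whose default value has to be stored off-page:
ALTER TABLE t ADD COLUMN b BLOB DEFAULT (REPEAT('x', 20000)),
  ALGORITHM=INSTANT;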
btr_cur_pessimistic_update(): Invoke rec_offs_make_valid()
in order to prevent rec_offs_validate() assertion failure.
innobase_add_instant_try(): Invoke btr_cur_pessimistic_update()
with the BTR_KEEP_POS_FLAG, which is the correct course of action
when BLOBs may need to be written. Whenever returning true,
ensure that my_error() will have been called.
and debug builds. Modified version for 10.1 from the following commit:
commit 8e68876477
Author: Oleksandr Byelkin <sanja@mariadb.com>
Date: Thu Sep 13 15:06:44 2018 +0200
Fix of the test which has debug version
The problem occurs in 10.2 and earlier releases of MariaDB Server
because the Partition Engine was not pushing the engine conditions
down to the underlying storage engine of each partition. With the data
provided by the customer, this caused Spider to return the first 5
rows in the table. Two of those 5 rows did not satisfy the WHERE
clause, so they were removed from the result set by the server.
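Schematically (Spider connection options omitted; names are
illustrative), the affected query shape was:
CREATE TABLE t (id INT, flag INT) ENGINE=SPIDER
  PARTITION BY RANGE (id)
  (PARTITION p0 VALUES LESS THAN (100),
   PARTITION p1 VALUES LESS THAN MAXVALUE);
SELECT * FROM t WHERE flag = 1 LIMIT 5;
Without the pushdown, the WHERE condition was applied only after
Spider had already fetched and returned its first rows.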
To fix the problem, I have back-ported support for engine condition pushdown
in the Partition Engine from MariaDB Server 10.3 to 10.2 and 10.1. In 10.3
and 10.4 I have merged the comments and the test case.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged:
Commit eb2ca3d on branch bb-10.2-MDEV-16912
The problem occurs in 10.2 and earlier releases of MariaDB Server
because the Partition Engine was not pushing the engine conditions
down to the underlying storage engine of each partition. With the data
provided by the customer, this caused Spider to return the first 5
rows in the table. Two of those 5 rows did not satisfy the WHERE
clause, so they were removed from the result set by the server.
To fix the problem, I have back-ported support for engine condition pushdown
in the Partition Engine from MariaDB Server 10.3.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit eb2ca3d on branch bb-10.2-MDEV-16912
The problem occurs in 10.2 and earlier releases of MariaDB Server
because the Partition Engine was not pushing the engine conditions
down to the underlying storage engine of each partition. With the data
provided by the customer, this caused Spider to return the first 5
rows in the table. Two of those 5 rows did not satisfy the WHERE
clause, so they were removed from the result set by the server.
To fix the problem, I have back-ported support for engine condition pushdown
in the Partition Engine from MariaDB Server 10.3.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged:
Commit eb2ca3d on branch bb-10.2-MDEV-16912
When the counter increment is not within the expected range, print the
number instead of just FAIL.
This doesn't solve the bug, but it will help with the diagnostics.
The bug was this scenario:
1. Join optimizer picks a range plan on index IDX1
(This index doesn't match the ORDER BY clause, so sorting will be needed)
2. Index Condition Pushdown pushes a part of WHERE down. The pushed
condition is removed from SQL_SELECT::cond
3. test_if_skip_sort_order() figures that it's better to use IDX2
(as it will match ORDER BY ... LIMIT and so will execute faster)
3.1 It sees that there was a possible range access on IDX2. It tries to
construct it by calling SQL_SELECT::test_quick_select(), but alas,
SQL_SELECT::cond doesn't have all parts of WHERE anymore.
So it uses a full index scan, which is slow.
(Execution still works correctly because code further down in
test_if_skip_sort_order() "un-pushes" the index condition and restores
the original WHERE clause. It was just the test_quick_select() call
that suffered.)
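A hypothetical query shape for this scenario (indexes named to match
the steps above):
CREATE TABLE t (a INT, b INT, c INT, KEY idx1 (a, c), KEY idx2 (b));
-- A range scan on idx1 is chosen first and part of the WHERE is
-- pushed down via ICP; test_if_skip_sort_order() then reconsiders
-- idx2 for ORDER BY ... LIMIT, but cannot re-construct the range
-- access because SQL_SELECT::cond is missing the pushed condition.
SELECT * FROM t WHERE a BETWEEN 1 AND 10 AND c = 5 ORDER BY b LIMIT 10;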
The blob key length could be shorter than the length of the entire blob,
for example,
CREATE TABLE t1 (b BLOB, i INT, KEY(b(8)));
INSERT INTO t1 VALUES (REPEAT('a',9),1);
The key length is 8, while the blob length is 9.
So we need to set the correct key length in Field_blob::sort_string().
While executing CTAS, a Galera applier thread can cause the CTAS to
abort and roll back. The rollback can take time, causing the applier
thread to shut down the node after a series of unsuccessful retries to
apply the transaction. Do not set lock_wait_timeout to zero; instead,
wait for the lock.
The problem occurs in 10.2 and earlier releases of MariaDB Server
because the Partition Engine was not pushing the engine conditions
down to the underlying storage engine of each partition. With the data
provided by the customer, this caused Spider to return the first 5
rows in the table. Two of those 5 rows did not satisfy the WHERE
clause, so they were removed from the result set by the server.
To fix the problem, I have back-ported support for engine condition pushdown
in the Partition Engine from MariaDB Server 10.3.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
On Unix, it is the compiled-in datadir value.
On Windows, the directory is ..\data, relative to the directory
containing mariabackup.exe.
The server uses the same logic to determine the datadir.