The syntax error produced on running the test engines/iuds.insert_time
in PS mode was caused by the presence of a C-style comment containing
a single quote at the end of an SQL statement, something like
the following:
/* doesn't throw error */;
The mysqltest utility interpreted the single quote as the start of a real
string literal, which resulted in every character following that mark being
consumed as part of the literal, including the delimiter character ';'.
This led to several lines being concatenated into a single multi-statement
that was sent from mysqltest to the MariaDB server for processing.
When mysqltest runs in regular mode (that is, not PS mode), the
multi-statement is handled successfully on the server side, but when PS mode
is on, the multi-statement is supplied in COM_STMT_PREPARE, which causes the
parsing error since multi-statements are not supported by the statement
prepare command.
To fix the issue, when mysqltest encounters a C-style comment it should
read the following characters without any processing until it hits the
closing C-style comment mark '*/', with one exception: if the character
sequence '/*' is followed by an exclamation mark, the hint introducer has
been read and normal processing continues.
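As a rough standalone sketch of the intended scanning behaviour (names such
as skip_c_style_comment are made up here; this is not the actual mysqltest
code):

// Standalone sketch: skip a C-style comment verbatim so that a quote inside
// it does not start a string literal. "/*!" (hint introducer) is left for
// normal parsing.
#include <cstddef>
#include <iostream>
#include <string>

// Returns the position just past the closing "*/", or std::string::npos if
// the comment is unterminated. Assumes input[pos] == '/' and input[pos+1] == '*'.
static size_t skip_c_style_comment(const std::string &input, size_t pos)
{
  if (input.compare(pos, 3, "/*!") == 0)
    return pos;                       // hint introducer: do not skip, parse normally
  size_t end= input.find("*/", pos + 2);
  return end == std::string::npos ? end : end + 2;
}

int main()
{
  std::string line= "SELECT 1 /* doesn't throw error */;";
  size_t comment= line.find("/*");
  size_t after= skip_c_style_comment(line, comment);
  // The quote inside the comment is ignored; the trailing ';' is still
  // visible to the caller as a statement delimiter.
  std::cout << "rest after comment: '" << line.substr(after) << "'\n";
  return 0;
}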
Loose index scan currently only supports ASC keys. When searching for
the next MAX value, the search starts from the rightmost range and
moves towards the left. If the key has "moved" as the result of a
successful ha_index_read_map() call when handling a previous range, we
check if the key is to the left of the current range, and if so, we
can skip to the next range.
The existing check on whether the loop has iterated at least once is
not sufficient, as an unsuccessful ha_index_read_map() often (always?)
does not "move" the key.
stats_deinit(): Replaces dict_stats_deinit().
Deinitialize the statistics for persistent tables,
so that they will be reloaded or recalculated
on a subsequent ha_innobase::open().
ha_innobase::rename_table(): Invoke stats_deinit() so that the
subsequent ha_innobase::open() will reload the InnoDB persistent
statistics. That is, it will remain possible to have the InnoDB
persistent statistics reloaded by executing the following:
RENAME TABLE t TO tmp, tmp TO t;
dict_table_close(table): Replaced with table->release().
There will no longer be any logic that would attempt to ensure
that the InnoDB persistent statistics will be reloaded after
FLUSH TABLES has been executed. This also fixes the problem that
dict_table_t::stat_modified_counter would be frequently reset to 0,
whenever ha_innobase::open() is invoked after the table reference
count had dropped to 0.
dict_table_close(table, thd, mdl): Remove the parameter "dict_locked".
Do not try to invalidate the statistics.
ha_innobase::statistics_init(): Replaces dict_stats_init(table).
Reviewed by: Thirunarayanan Balathandayuthapani
If a query has many OR-ed constructs that can use multiple indexes
(key1=1 AND key2=10) OR
(key1=2 AND key2=20) OR
(key1=3 AND key2=30) OR
...
The range optimizer would construct and then discard a lot of potential
index_merge plans. This process
1. is CPU-intensive
2. can hit the @@optimizer_max_sel_args limitation after which all
potential range or index_merge plans are discarded.
The fix is to apply a heuristic: if there is an OR clause with more than
MAX_OR_ELEMENTS_FOR_INDEX_MERGE=100 branches (hard-coded constant),
disallow construction of index_merge plans for the OR branches.
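Sketched in isolation (hypothetical types, not the range optimizer's real
SEL_TREE/SEL_IMERGE structures), the heuristic just counts the OR branches
up front:

// Decide once, before any plans are built, whether index_merge plans may be
// constructed for this OR clause.
#include <cstddef>
#include <iostream>
#include <vector>

static const size_t MAX_OR_ELEMENTS_FOR_INDEX_MERGE= 100;

struct OrBranch { /* e.g. key1=1 AND key2=10 */ };

static bool allow_index_merge(const std::vector<OrBranch> &or_branches)
{
  return or_branches.size() <= MAX_OR_ELEMENTS_FOR_INDEX_MERGE;
}

int main()
{
  std::vector<OrBranch> small(10), large(500);
  std::cout << "10 branches:  index_merge allowed = " << allow_index_merge(small) << "\n";
  std::cout << "500 branches: index_merge allowed = " << allow_index_merge(large) << "\n";
  return 0;
}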
In the `check_join_cache_usage()` function there is a branching issue
where an accidental fall-through to BKA/BKAH buffers may occur, even
when the join_cache_level setting does not permit their use.
This patch corrects the condition to ensure that BKA/BKAH join caching
is only enabled when explicitly allowed by join_cache_level.
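The shape of the fix, sketched as a tiny decision function (illustrative
thresholds only; the real check_join_cache_usage() works on JOIN_TAB and
several other conditions besides join_cache_level):

#include <iostream>

static const char *pick_join_cache(int join_cache_level, bool access_is_ref)
{
  if (join_cache_level == 0)
    return "no cache";
  // Only enter the BKA/BKAH branch when the level explicitly permits it;
  // no other condition may accidentally fall through into it.
  if (access_is_ref && join_cache_level >= 5)   // 5..8 = BKA/BKAH (illustrative)
    return "BKA/BKAH";
  return "BNL/BNLH";
}

int main()
{
  std::cout << pick_join_cache(2, true) << "\n";  // level too low: no BKA
  std::cout << pick_join_cache(6, true) << "\n";  // BKA allowed
  return 0;
}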
Reviewer: Sergei Petrunia <sergey@mariadb.com>
Added caching of database directories that did not have a db.opt file.
This was common for older MariaDB installations or if a user created
a database with 'mkdir'.
Other things:
- Give a note "no db.opt file" if one uses SHOW CREATE DATABASE on
a database without a db.opt file.
Backport of commit 74f70c3944 to 10.11.
The new logic is disabled by default, to enable, use
optimizer_adjust_secondary_key_costs=fix_derived_table_read_cost.
== Original commit comment ==
Fixed costs in JOIN_TAB::estimate_scan_time() and HEAP
estimate_scan_time() calculates the cost of scanning a derived table.
The old code did not take into account that the temporary HEAP table
may be converted to Aria.
Things fixed:
- Added a check of whether the temporary table's data will fit in the heap.
If not, then calculate the cost based on the designated internal
temporary table engine (Aria).
- Removed MY_MAX(records, 1000) and instead trust the optimizer's
estimate of records. This reduces the cost of temporary tables a bit
for small tables, which caused a few changes in mtr results.
- Fixed cost calculation for HEAP.
- HEAP costs->row_next_find_cost was not set. This does not affect the old
cost calculation as this cost slot was not used anywhere.
Now HEAP cost->row_next_find_cost is set, which allowed me to remove
some duplicated computation in ha_heap::scan_time()
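A back-of-the-envelope sketch of the added check (placeholder cost numbers,
not the server's real cost structures):

// If the estimated data volume does not fit within the heap limit, cost the
// scan with Aria-like numbers instead of HEAP ones.
#include <cstdint>
#include <iostream>

struct EngineCosts { double row_next_find_cost; double row_copy_cost; };

static const EngineCosts heap_costs= { 0.00005, 0.00006 };  // in-memory engine
static const EngineCosts aria_costs= { 0.00030, 0.00006 };  // on-disk engine

static double derived_scan_cost(double records, double row_size,
                                uint64_t max_heap_table_size)
{
  // If the data will not fit in memory, the temporary table will be
  // converted to the internal disk-based engine (Aria), so cost it as such.
  const EngineCosts &c= (records * row_size <= max_heap_table_size)
                        ? heap_costs : aria_costs;
  // Trust the optimizer's row estimate; no MY_MAX(records, 1000) clamping.
  return records * (c.row_next_find_cost + c.row_copy_cost);
}

int main()
{
  std::cout << derived_scan_cost(500, 100, 16*1024*1024) << "\n"; // fits in heap
  std::cout << derived_scan_cost(1e6, 100, 16*1024*1024) << "\n"; // spills to Aria
  return 0;
}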
(Variant 2)
Multi-table UPDATE ... ORDER BY ... LIMIT could update the wrong rows when
ORDER BY was resolved by Using temporary + Using filesort.
== Background: ref_pointer_array ==
join->order[->next*]->item points into join->ref_pointer_array, which
has pointers to the used Item objects.
This indirection is employed so that we can switch the ORDER BY expressions
from using the original Items to using the values of their "image" fields
in the temporary table.
The variant of ref_pointer_array that has pointers to temp table fields
is created when JOIN::make_aggr_tables_info() calls
change_refs_to_tmp_fields().
== The problem ==
The created array didn't match element-by-element the original
ref_pointer_array. When arrays were switched, ORDER BY elements started
to point to the wrong temp.table fields, causing the wrong sorting.
== The cause ==
The cause is JOIN::add_fields_for_current_rowid(). This function is
called for UPDATE statements to make the rowids of rows in the original
tables be saved in the temporary tables.
It adds extra columns to the select list in the table_fields argument.
However, select lists are organized in such a way that extra elements must
be added *to the front* of the list, and then change_refs_to_tmp_fields()
will add the extra fields *to the end* of ref_pointer_array.
Instead, add_fields_for_current_rowid() added the new fields to the back of
the table_fields list. This caused change_refs_to_tmp_fields() to produce a
ref_pointer_array slice with extra elements in the front, causing any
references through ref_pointer_array to resolve to the wrong values.
== The fix ==
Make JOIN::add_fields_for_current_rowid() add fields to the front of
the select list.
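A simplified model of why prepending keeps the slices aligned (plain strings
stand in for Item pointers; this is not the server code):

// ORDER BY resolves expressions through indexes into ref_pointer_array;
// extra rowid columns must not shift those indexes when the array is
// switched to its temp-table variant.
#include <iostream>
#include <string>
#include <vector>

int main()
{
  // Original slice: ORDER BY remembers that its expression is at index 1.
  std::vector<std::string> orig= { "t1.a", "t2.b" };
  size_t order_by_idx= 1;                       // refers to "t2.b"

  // Buggy slice: extras ended up at the FRONT, shifting everything.
  std::vector<std::string> bad=  { "rowid1", "rowid2", "tmp.a", "tmp.b" };
  // Fixed slice: extras are at the END, original positions are preserved.
  std::vector<std::string> good= { "tmp.a", "tmp.b", "rowid1", "rowid2" };

  std::cout << "original: " << orig[order_by_idx] << "\n";  // t2.b
  std::cout << "buggy:    " << bad[order_by_idx]  << "\n";  // tmp.a (wrong column)
  std::cout << "fixed:    " << good[order_by_idx] << "\n";  // tmp.b (correct image)
  return 0;
}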
Disable the assert.
Also, use the same check for check_that_all_fields_are_given_values()
as is used in not_null_fields_have_null_values(), to avoid
issuing the same warning twice.
This bug prevented building conditions that could be pushed into a derived
table if the derived table was used in a query of a stored procedure and
the conditions contained local variables of the procedure. This could lead
to a slow execution of the procedure.
Also in some cases the bug prevented building conditions that could be
pushed from the HAVING condition into the WHERE condition of a query if
the conditions to be built used local variables of a stored procedure.
The failure to build such pushable conditions was due to the lack of a
proper implementation of the virtual method to copy items for objects of
the class Item_splocal.
Approved by Igor Babaev <igor@mariadb.com>,
who had to change the original fix (which just added the regular copying of
the nodes of the Item_splocal class) to take into account the wrappers
do_get_copy() and do_build_clone() introduced after the fix had been
prepared. He also changed the test case to demonstrate that the fix
was really needed for pushdown from HAVING into WHERE.
Cannot add a foreign key on a table with a long UNIQUE multi-column index
that contains a foreign key as a prefix.
Check that index algorithms match during the removal of duplicate
"generated" keys.
table->move_fields has some limitations:
1. It cannot be used in cascade
2. It should always have a restoring pair
In this case, an error occurred before the field ptr was restored, and the
function returned in that state. Even in case of an error, the table can be
reused afterwards, and table->field[i]->ptr is not reset in between.
The solution is to restore the field pointers immediately whenever they have
been deviated.
Also add an assertion that ensures that table fields are restored after the use
in close_thread_tables.
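A generic sketch of the "always restore what you moved" pattern, here
expressed as an RAII guard (illustrative only; the commit itself simply
restores the pointers whenever they have been moved):

#include <cassert>
#include <cstddef>
#include <vector>

struct Field { unsigned char *ptr; };

class Field_ptr_guard
{
  std::vector<Field*> &fields_;
  std::vector<unsigned char*> saved_;
public:
  explicit Field_ptr_guard(std::vector<Field*> &fields) : fields_(fields)
  {
    for (Field *f : fields_)
      saved_.push_back(f->ptr);
  }
  ~Field_ptr_guard()                       // runs on success AND on early error returns
  {
    for (size_t i= 0; i < fields_.size(); i++)
      fields_[i]->ptr= saved_[i];
  }
};

static bool do_work_with_moved_fields(std::vector<Field*> &fields,
                                      unsigned char *new_record)
{
  Field_ptr_guard guard(fields);
  for (size_t i= 0; i < fields.size(); i++)
    fields[i]->ptr= new_record + i;        // "move" the fields
  return false;                            // even on an early error return,
                                           // the guard restores the pointers
}

int main()
{
  unsigned char rec1[4]= {0}, rec2[4]= {0};
  Field f0{rec1}, f1{rec1 + 1};
  std::vector<Field*> fields;
  fields.push_back(&f0);
  fields.push_back(&f1);

  do_work_with_moved_fields(fields, rec2);
  assert(f0.ptr == rec1 && f1.ptr == rec1 + 1);  // pointers restored after use
  return 0;
}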
Calling a stored routine that executes a join on three or more tables
and references a non-existent column name in the USING clause resulted in
a crash on its second invocation.
The server crash took place because a null pointer was dereferenced
in the condition of a DBUG_ASSERT inside the method
Field_iterator_natural_join::next().
There the data member
cur_column_ref->table_field->field
has the value nullptr: it was reset at the end of the first
execution of the stored routine, when the standalone function
cleanup_items() was called by the method sp_head::execute().
Later this data member is not re-initialized and is never referenced
anywhere except in the DBUG_ASSERT on second and later invocations
of the stored routine.
To fix the issue, the assert's condition should be augmented with
the condition '|| !cur_column_ref->table_field', checked before
cur_column_ref->table_field is dereferenced. Such extra checking is
aligned with the conditions used by the DBUG_ASSERT macros in the
implementation of the class Field_iterator_table_ref, which aggregates
the class Field_iterator_natural_join.
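A minimal sketch of the guarded assertion (assert() stands in for
DBUG_ASSERT; the structures are simplified stand-ins modelled on the
description, and the exact operand order in the real assert may differ):

#include <cassert>
#include <cstddef>

struct Field { int dummy; };
struct Natural_join_column { Field *field; };
struct Column_ref { Natural_join_column *table_field; };

static void check(const Column_ref *cur_column_ref)
{
  // Old: assert(cur_column_ref->table_field->field != nullptr); crashes when
  // table_field itself is nullptr on the second SP invocation.
  // New: the extra null check short-circuits before the dereference.
  assert(!cur_column_ref->table_field ||
         cur_column_ref->table_field->field != nullptr);
}

int main()
{
  Column_ref ref{nullptr};      // state left behind by cleanup_items()
  check(&ref);                  // no longer dereferences a null pointer
  return 0;
}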
Also fixes:
MDEV-32190 Index corruption with unique key and nopad collation (without DESC or HASH keys)
MDEV-28328 Assertion failures in btr0cur.cc upon INSERT or in row0sel.cc afterwards
The code in strings/strcoll.inl, when comparing an empty string
to a string like 0x0001050001, did not take into account
that the leftmost weight in the latter can be zero, while
more weights can follow the zero weight.
Rewrite the code to treat the shorter string as smaller than
the longer string.
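A simplified model of the corrected comparison over weight sequences (not
the actual strings/strcoll.inl code):

// Leading zero weights in the longer string must not make it compare equal
// to a shorter (here: empty) string.
#include <iostream>
#include <vector>

static int compare_weights(const std::vector<unsigned> &a,
                           const std::vector<unsigned> &b)
{
  size_t i= 0;
  for (; i < a.size() && i < b.size(); i++)
  {
    if (a[i] != b[i])
      return a[i] < b[i] ? -1 : 1;
  }
  // One string is a prefix of the other: the shorter one is smaller, even if
  // the remaining weights of the longer one start with zero.
  if (a.size() == b.size())
    return 0;
  return a.size() < b.size() ? -1 : 1;
}

int main()
{
  std::vector<unsigned> empty;
  // Leftmost weight is zero (ignorable character), but more weights follow.
  std::vector<unsigned> weights= { 0x0000, 0x0105, 0x0001 };
  std::cout << compare_weights(empty, weights) << "\n";  // -1: empty < non-empty
  return 0;
}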
This bug affected queries containing degenerated single-value subqueries
with window functions. The bug led mostly to wrong results for such
queries.
A subquery is called degenerated if it is of the form (SELECT <expr>...).
For degenerated subqueries of the form (SELECT <expr>) the transformation
(SELECT <expr>) => <expr>
usually is applied. However, if <expr> contains set functions or window
functions, such rewriting is not valid for an obvious reason. The code
before this patch erroneously applied the transformation when <expr>
contained window functions and did not contain set functions.
Approved by Rex Johnston <rex.johnston@mariadb.com>
It's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an INSERT uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults.
* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
all fields definitely have a value
* make not_null_fields_have_null_values() send
ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value,
otherwise creation of a trigger with an empty body would change the
error message
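A freestanding sketch of the bitmap handling described above
(std::vector<bool> stands in for the server's bitmap; the field layout is
made up for the example):

#include <iostream>
#include <vector>

struct FieldDef { const char *name; bool has_default; };

int main()
{
  std::vector<FieldDef> fields= { {"a", true}, {"b", false}, {"c", false} };

  // extra_null_bitmap_init: 1 for every field that has no default value.
  std::vector<bool> extra_null_bitmap_init;
  for (const FieldDef &f : fields)
    extra_null_bitmap_init.push_back(!f.has_default);

  // INSERT: start from the template, so omitted default-less fields read as NULL.
  std::vector<bool> extra_null_bitmap= extra_null_bitmap_init;

  // UPDATE/DELETE: every field definitely has a value, so zero it out.
  std::vector<bool> extra_null_bitmap_for_update(fields.size(), false);

  for (size_t i= 0; i < fields.size(); i++)
    std::cout << fields[i].name << ": insert=" << extra_null_bitmap[i]
              << " update=" << extra_null_bitmap_for_update[i] << "\n";
  return 0;
}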
Item::print_for_table_def() uses QT_TO_SYSTEM_CHARSET to print
the DEFAULT expression into the FRM file during CREATE TABLE.
Therefore, the expression is encoded in utf8 in the FRM file.
get_field_default_value() erroneously used field->charset() to
print the DEFAULT expression at SHOW CREATE TABLE time.
Fixing get_field_default_value() to use &my_charset_utf8mb4_general_ci instead.
This makes DEFAULT work in the same way as:
- virtual column expressions:
if (field->vcol_info)
{
StringBuffer<MAX_FIELD_WIDTH> str(&my_charset_utf8mb4_general_ci);
field->vcol_info->print(&str);
- check constraint expressions:
if (field->check_constraint)
{
StringBuffer<MAX_FIELD_WIDTH> str(&my_charset_utf8mb4_general_ci);
field->check_constraint->print(&str);
Additional cleanup:
Fixing system_charset_info to &my_charset_utf8mb4_general_ci in a few
places to make non-BMP characters work in DEFAULT, virtual column,
check constraint expressions.
The val_buffer variable can come to Field_set::val_str()
with the Ptr member equal to nullptr. This caused UBSAN errors
"applying zero offset to null pointer" in my_strnncollsp_simple()
and other strnncollsp() virtual implementations. Fixing the code to
make sure its Ptr is not equal to nullptr.
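The general shape of the defensive fix, shown standalone (a tiny stand-in
for the server's String class; not the actual Field_set code):

#include <cstddef>
#include <iostream>

struct MiniString
{
  const char *Ptr;
  size_t str_length;
  void set(const char *s, size_t len) { Ptr= s; str_length= len; }
};

static void val_str(MiniString *val_buffer)
{
  // val_buffer may arrive with Ptr == nullptr; passing that to functions that
  // compute "Ptr + length" is undefined behaviour even for length 0.
  if (val_buffer->Ptr == nullptr)
    val_buffer->set("", 0);              // make sure Ptr is never nullptr
  // ... now safe to pass Ptr/str_length to strnncollsp()-style functions
}

int main()
{
  MiniString s{nullptr, 0};
  val_str(&s);
  std::cout << "length=" << s.str_length
            << " ptr_is_null=" << (s.Ptr == nullptr) << "\n";
  return 0;
}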
The part of the test that checks that a frame of a recursive SP takes the
same amount of stack relies on the fact that 3 calls to that
SP are executed in the same OS thread. This is not guaranteed with the
threadpool (not with the Windows native threadpool, anyway).
Fix the test to execute the SP calls in the same thread, as a
semicolon-separated batched command.
(Variant 2: use stack for buffers)
my_malloc_size_cb_func() has a call to thd->alloc() to produce an error message.
thd->alloc() calls alloc_root(), so one can end up with this stack trace:
alloc_root()
THD::alloc()
my_malloc_size_cb_func()
my_malloc()
alloc_root()
where alloc_root() calls itself. This is a problem, as alloc_root() is not
re-entrant.
Fixed this by switching my_malloc_size_cb_func() to use space on the stack
instead.
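A minimal illustration of the approach (generic code, not the server's
my_malloc_size_cb_func): format the message into a fixed-size buffer on the
stack so the callback never re-enters the allocator it is reporting on.

#include <cstddef>
#include <cstdio>

static void report_oom(size_t requested_bytes)
{
  // Before: building the message via an allocating helper would re-enter the
  // allocator from its own callback. After: a stack buffer avoids that.
  char buf[256];
  snprintf(buf, sizeof(buf),
           "Out of memory while trying to allocate %zu bytes", requested_bytes);
  fputs(buf, stderr);
  fputc('\n', stderr);
}

int main()
{
  report_oom(1024 * 1024);
  return 0;
}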
mysql_compare_tables() failed because the prepared tmp_create_info did not
contain the generated fields of long hash indexes. In the comparison we
should skip these fields in the original table by:
1. skipping INVISIBLE_SYSTEM fields;
2. getting key_info from table->s instead of table as TABLE_SHARE
contains unwrapped key_info.
`limit >= trx_id' failed in purge_node_t::skip
For fast alter partition, ALTER lost hash fields in the FRM field
count. mysql_prepare_create_table() did not call add_hash_field()
because the logic of ALTER-ing field types implies automatic
promotion/demotion to/from hash index. So we don't pass the hash algorithm
to mysql_prepare_create_table() and let it decide itself, but it
cannot decide correctly for fast alter partition.
So now mysql_prepare_alter_table() is a bit more sophisticated about what
to pass in the algorithm. If no fields were changed, it will force
mysql_prepare_create_table() to re-add hash fields by setting
HA_KEY_ALG_HASH.
The problem with the original logic is that mysql_prepare_alter_table()
does not fully track the hash property, so the decision is blurred
between mysql_prepare_alter_table() and mysql_prepare_create_table().
MDEV-28127 implemented is_equal(), which compared vcol expressions
literally. But another table's vcol expression is not literally equal
because of the different table name.
We implement another comparison method, is_identical(), which tolerates
a different table name in the vcol comparison. If a field item points to
table_A and the compared field item points to table_B, such items are
treated as equal in the (table_A, table_B) comparison. This is done by
cloning table_B's expression and renaming any table_B entries in it
to table_A.
TABLE::use_all_columns turns on all bits of read_set, which is
interpreted by InnoDB as a request to read all columns. Without doing
so before calling init_read_record(), InnoDB will not retrieve any
columns if the mysql.servers table has been altered to use InnoDB as the
engine, and any foreign servers stored in the table are "lost".
When testing MDEV-34539, create a table specifically for the test;
don't use a system table as a shortcut to save a couple of lines.
Followup for 8d813f080b:
use the same condition in
fill_schema_table_from_frm() when open_table_from_share() fails, as in
fill_schema_table_from_frm() when tdc_acquire_share() fails, and as in
fill_schema_table_from_open() when open_table_from_share() fails.
The issue is caused by a logic error in the Item_sum::get_tmp_table_item()
method: it resets the arguments of the item to point to the result fields
during the change_ref_to_tmp_fields() call. However, Item_sum arguments
must not be modified.
It is enough for Item_sum objects to call the ancestor's implementation,
Item::get_tmp_table_item().
This fix is in accordance with MySQL commit 2e3dc09087c24798c90e05163ed3d931f6b93db3
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Add a simple test to verify the server behaves in a safe manner if configured
with ciphers that aren't compatible with the server certificate.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Add a simple test to verify that the server will fail to start up when no valid
cipher suites are passed to `ssl-cipher`.
As different TLS libraries and versions have differing cipher suite support, it
would be a good idea to ensure the server behaves in a safe manner if it is
configured with invalid cipher suites.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.