While cleaning up a failed CREATE TABLE LIKE <sequence>, `mysql_rm_table_no_locks`
erroneously attempted to remove all tables involved in the query, including
the source table (sequence).
The fix temporarily modifies `table_list` to ensure that only the
intended table is removed during the cleanup.
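For illustration, a minimal sketch of the affected scenario (object
names are made up):

  CREATE SEQUENCE s1;
  CREATE TABLE t1 LIKE s1;
  -- if this CREATE TABLE fails, cleanup must remove only t1,
  -- not the source sequence s1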
The memory allocated for an instance of the class Item_direct_ref_to_item
was leaked on second execution of a query run as a prepared statement and
involving conversion of strings with different character sets.
The leak was caused by the fact that a statement arena could already
be set by the time the method
Type_std_attributes::agg_item_set_converter() is called.
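For illustration only, the rough shape of a query exercising this path
(names are made up; the key ingredients are a prepared statement
executed twice and a character set conversion between items):

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
  PREPARE stmt FROM 'SELECT CONCAT(a, _utf8mb4''x'') FROM t1';
  EXECUTE stmt;
  EXECUTE stmt; -- the second execution used to leak the converter item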
Don't forget to reset mdl_context.m_deadlock_overweight when
taking the THD out of the cache - the history of previous connections
should not affect the weight in deadlock victim selection.
(Also a small cleanup of the test to help a correct merge.)
When aggregating pairs BIT+NULL and NULL+BIT for result, e.g.
in COALESCE(), preserve the BIT data type (ignore explicit NULLs).
The same fix is applied to YEAR.
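For example (a sketch; t1 and t2 are made up):

  CREATE TABLE t1 (b BIT(8));
  CREATE TABLE t2 AS SELECT COALESCE(b, NULL) AS c FROM t1;
  -- column c should keep the BIT data type; the explicit NULL
  -- must not change the aggregated result type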
This bug could affect queries with IN subqueries in the WHERE clause
that use derived tables to which split optimization could potentially
be applied.
When looking for the best split of a splittable derived table T, any
key access to T from a semi-join materialized table S used for lookups
must be excluded from consideration, because in the current
implementation of such tables as S the values from its records cannot
be used to access other tables.
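For illustration only, an approximate shape of an affected query
(all names are made up): T is a splittable derived table and the IN
subquery may be converted to a semi-join using materialization:

  SELECT *
  FROM t1,
       (SELECT grp, MAX(val) AS mx FROM t2 GROUP BY grp) AS T
  WHERE t1.a = T.grp
    AND t1.b IN (SELECT c FROM t3);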
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Author: Sergei Petrunia <sergey@mariadb.com>
Date: Wed Oct 11 19:02:25 2023 +0300
MDEV-32301: Server crashes at Arg_comparator::compare_row
In Item_bool_rowready_func2::build_clone(): if we're setting
clone->cmp.comparators=0
also set
const_item_cache=0
as the Item is currently in a state where one cannot compute it.
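For illustration only, an approximate query shape where such a row
comparison gets cloned, e.g. by condition pushdown into a derived
table (names are made up):

  CREATE TABLE t1 (a INT, b INT);
  SELECT * FROM (SELECT a, b FROM t1 GROUP BY a, b) AS dt
  WHERE (dt.a, dt.b) = (1, 1);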
A subquery in the form "(SELECT not_null_value LIMIT 1 OFFSET 1)" will
produce no rows, which will translate into a scalar SQL NULL value.
The code in Item_singlerow_subselect::fix_length_and_dec() failed to
take the LIMIT/OFFSET clause into account and used to set
item_subselect->maybe_null=0, despite the fact that SQL NULL will be
produced.
If such subselect was used in ORDER BY, this would cause a crash in
filesort() code when it would get a NULL value for a not-nullable item.
Also made subselect_engine::no_tables() a const function.
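A minimal sketch of the affected pattern (t1 is made up):

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1),(2);
  -- the subquery selects a not-null value but, because of
  -- OFFSET 1, returns no rows, i.e. NULL; before the fix the
  -- item was still marked as not nullable
  SELECT a FROM t1 ORDER BY (SELECT 1 LIMIT 1 OFFSET 1);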
The code inside Item_subselect::fix_fields() could fail to check
that the left expression had an Item_row, like this:
(('x', 1.0) ,1) IN (SELECT 'x', 1.23 FROM ... UNION ...)
In order to hit the failure, the first SELECT of the subquery had
to be a degenerate no-tables select. In this case, execution will
not enter into Item_in_subselect::create_row_in_to_exists_cond()
and will not check if left_expr is composed of scalars.
But the subquery is a UNION so as a whole it is not degenerate.
We try to create an expression cache for the subquery.
We create a temp.table from left_expr columns. No field is created
for the Item_row. Then, we crash when trying to add an index over a
non-existent field.
Fixed by moving the left_expr cardinality check to a point in
check_and_do_in_subquery_rewrites() which gets executed for all
cases.
It's better to make the check early so we don't have to care about
subquery rewrite code hitting Item_row in left_expr.
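A runnable sketch of the crashing shape described above (the first
SELECT of the UNION is a degenerate no-tables select):

  SELECT (('x', 1.0), 1) IN (SELECT 'x', 1.23 UNION SELECT 'y', 2.34);
  -- with the fix, the left expression's cardinality is checked
  -- early and an "Operand should contain ..." error is returned
  -- instead of a crash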
Fixed missing initialization of Alter_info().
This could cause crashes in some CREATE TABLE ... LIKE scenarios
where some generated indexes were automatically dropped.
I also added a test that we do not try to drop from index_stats for
temporary tables.
The intention was always to not create histograms for single value
unique keys (as histograms are not useful in this case), but because of
a bug in the code this was still done.
The changes in the test cases are mainly because hist_size is now NULL
for these kinds of columns.
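For example (a sketch; t1 is made up):

  CREATE TABLE t1 (id INT PRIMARY KEY, b INT);
  INSERT INTO t1 VALUES (1,10),(2,20);
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  SELECT column_name, hist_size FROM mysql.column_stats
  WHERE table_name='t1';
  -- hist_size is now NULL for the single-column PRIMARY KEY id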
The MDEV-29693 conflict resolution is from Monty, as well as is
a bug fix where ANALYZE TABLE wrongly built histograms for
single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid an mtr issue
Disabled test:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
When the option require_secure_transport is on, the user couldn't
establish a secure SSL connection over the TCP protocol. The inability
to set up an SSL session over TCP was caused by the fact that the type
of the client's connection was checked before the SSL handshake was
performed (the SSL handshake happens in the function
acl_authenticate()). At that moment the vio type has the value
VIO_TYPE_TCPIP for a client connection that uses TCP transport. As a
result, the check of the vio type against the allowed values fails
despite the fact that an SSL session is being established. To fix the
issue, the check of the vio type against the allowed values is moved
inside the function parse_client_handshake_packet(), right after it is
discovered from the client's capabilities that SSL is not requested by
the client.
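For example, with the fix the following is expected to work for a TCP
client that negotiates SSL in its handshake:

  SET GLOBAL require_secure_transport=ON;
  -- a TCP connection using SSL is now accepted;
  -- only non-secure TCP connections are rejected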
The problem was that sometimes InnoDB returned a slightly wrong record
count for the table, which caused the optimizer to disregard the result
from the range optimizer. The end result was that the optimizer chose a
ref access instead of a range access, which caused errors in buildbot.
Fixed by adding more rows to the table to ensure that a table scan is
more costly than a range scan of the given interval.
Fix order_by_optimizer_innodb and order_by_innodb tests.
The problem was that the query could be run before InnoDB was
ready to provide a realistic statistic for the number of records in
the table. It provided a number that was too low, which caused the
optimizer to decide that the range access plan wasn't advantageous
and discard it.
An "ITERATE innerLoop" did not work properly inside
a WHILE loop, which itself is inside an outer FOR loop:
outerLoop:
FOR
...
innerLoop:
WHILE
...
ITERATE innerLoop;
...
END WHILE;
...
END FOR;
It erroneously generated integer increment code for the outer FOR loop.
There were two problems:
1. "ITERATE innerLoop" worked like "ITERATE outerLoop"
2. It was always an integer increment, even in the case of FOR cursor
loops.
Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry,
to put the iteration and bound variables on it.
- Other loop types (LOOP, WHILE, REPEAT) do not generate a dedicated
stack entry.
The old code erroneously assumed that sp_pcontext::m_for_loop
either describes the innermost loop (in case the inner loop is FOR),
or is empty (in case the inner loop is not FOR).
But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop:
it describes the closest FOR loop, even if this FOR loop has nested
non-FOR loops inside.
So when we're near the ITERATE statement in the above script,
sp_pcontext::m_for_loop is not empty - it stores information about
the FOR loop labeled as "outerLoop:".
Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label,
to remember the explicit or the auto-generated label corresponding
to the start of the FOR body. It's used during generation
of "ITERATE loop_label" code to check if "loop_label" belongs
to the current FOR loop pointed by sp_pcontext::m_for_loop,
or belongs to a non-FOR nested loop.
- Adding LEX methods sp_for_loop_intrange_iterate() and
sp_for_loop_cursor_iterate() to reuse the code between
methods handling:
* ITERATE
* END FOR
- Adding a check of Lex_for_loop::is_for_loop_cursor() to generate
code for either a cursor fetch or an integer increment.
Before this change, it always erroneously generated the integer
increment version.
- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside
Lex_for_loop_st::init(), to avoid uninitialized members.
- Cleanup: Removing a redundant method:
Lex_for_loop_st::init(const Lex_for_loop_st &other)
Using Lex_for_loop_st::operator=(const Lex_for_loop_st &other) instead.
Problem:
Item_func_date_format::val_str() and make_date_time() did not take into
account that the format string and the result string
(separately or at the same time) can be of a tricky character set
like UCS2, UTF16, UTF32. As a result, DATE_FORMAT() could generate
an ill-formed result which crashed on DBUG_ASSERTs testing well-formedness
in other parts of the code.
Fix:
1. class String changes
Removing String::append_with_prefill(). It was not compatible with
tricky character sets. Also it was inconvenient to use and required
too much duplicate code on the caller side.
Adding String::append_zerofill() instead. It's compatible with tricky
character sets and is easier to use.
Adding helper methods Static_binary_string::q_append_wc() and
String::append_wc(), to append a single wide character
(a Unicode code point in my_wc_t).
2. storage/spider changes
Removing spider_string::append_with_prefill().
It used String::append_with_prefill() inside, but was itself unused.
3. Changing tricky charset incompatible code pieces in make_date_time()
to compatible replacements:
- Fixing the loop scanning the format string to iterate in terms
of Unicode code points (using mb_wc()) rather than in terms
of "char" items.
- Using append_wc(my_wc_t) instead of append(char) to append
a single character to the result string.
- Using append_zerofill() instead of append_with_prefill() to
append date/time numeric components to the result string.
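For example, a sketch of a call that was affected before the fix,
with the format string in a tricky character set:

  SELECT DATE_FORMAT(DATE'2023-01-09',
                     CONVERT('%W %M %Y' USING ucs2));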
The problem was that we did not handle errors properly in
JOIN::get_best_combination. In case of an early error, JOIN->join_tab
would contain uninitialized values, which would cause errors in
cleanup().
The error in question was reported earlier, but not noticed until later.
One cause of this is that most of the sql_select.cc code checks only
thd->fatal_error and not thd->is_error().
Fixed by changing the checks of fatal_error to is_error().
This allows a user to change the default value of MAX_SEL_ARGS (16000)
in the rare case where they need more generated SEL_ARGS (as part of
the range optimizer).
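A sketch, assuming the limit is exposed as a system variable named
optimizer_max_sel_args (the exact variable name here is an assumption):

  -- allow more SEL_ARG objects during range analysis of one query
  SET SESSION optimizer_max_sel_args = 64000;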
Raise notes if indexes cannot be used:
- in case of data type or collation mismatch (different error messages).
- in case a table field was replaced with something else
(e.g. Item_func_conv_charset) during a condition rewrite.
Added option to write warnings and notes to the slow query log for
slow queries.
New variables added/changed:
- note_verbosity, which is a set of the following options:
basic - All old notes
unusable_keys - Print warnings about keys that cannot be used
for select, delete or update.
explain - Print unusable_keys warnings for EXPLAIN queries.
The default is 'basic,explain'. This means that for old installations
the only notable new behavior is that one will get notes about
unusable keys when one does an EXPLAIN for a query. One can turn off
all notes by either setting note_verbosity to "" or setting sql_notes=0.
- log_slow_verbosity has a new option 'warnings'. If this is set
then warnings and notes generated are printed in the slow query log
(up to log_slow_max_warnings times per statement).
- log_slow_max_warnings - Max number of warnings written to
slow query log.
Other things:
- One can now use =ALL for any 'set' variable to set all options at once.
For example using "note_verbosity=ALL" in a config file or
"SET @@note_verbosity=ALL' in SQL.
- mysqldump will in the future use @@note_verbosity="" instead of
@@sql_notes=0 to disable notes.
- Added "enum class Data_type_compatibility" and changing the return type
of all Field::can_optimize*() methods from "bool" to this new data type.
Reviewer & Co-author: Alexander Barkov <bar@mariadb.com>
- The code that prints out the notes comes mainly from Alexander
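For example (all variable names and values come from the description
above):

  SET GLOBAL note_verbosity='basic,unusable_keys,explain';
  SET GLOBAL log_slow_verbosity='warnings';
  SET GLOBAL log_slow_max_warnings=10;
  -- or enable every option of a set variable at once:
  SET @@note_verbosity=ALL;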
The warning is given in case the table is not found or there is a lock
timeout. The warning is needed because in case of a lock timeout the
persistent table stats are going to be wrong.
Example of what causes the problem:
T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields,
then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames temp table in place of the old table.
Now the statistics from analyze matches the old deleted tables.
Fixed by waiting to delete old statistics until ALTER TABLE is the
only user of the old table, and by ensuring that rename of columns can
handle swapping of column names.
rename_columns_in_stat_table() (former rename_column_in_stat_tables())
now takes a list of columns to rename. It uses the following algorithm
to update column_stats to be able to handle circular renames:
- While there are columns to be renamed, and this is the first loop or
  the last rename loop changed something:
  - Loop over all columns to be renamed:
    - Change the column name in column_stat.
    - If this fails because of a duplicate key:
      - If this is the first change attempt for this column:
        - Change the column name to a temporary column name.
        - If there was a conflicting row, replace it with the current
          row.
    - else:
      - Remove the entry from the column list.
- Loop over all remaining columns in the list:
  - Remove the conflicting row.
  - Change the column from its temporary name to the final name in
    column_stat.
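For example, the following swap of two column names is a circular
rename that needs the temporary-name step above (t1 is made up):

  CREATE TABLE t1 (a INT, b INT);
  ALTER TABLE t1 CHANGE a b INT, CHANGE b a INT;
  -- renaming a->b directly in column_stats would hit a duplicate
  -- key on the existing row for b, so a temporary name is used first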
Other things:
- Don't flush tables for every operation. Only flush when all updates
are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
- Fixed that we do not collect statistics for hidden hash columns
used by UNIQUE constraint on long values.
- Fixed that we do not collect statistics for blob columns referred by
generated virtual columns. This was achieved by storing the fields for
which we want to have statistics in table->has_value_set instead of
in table->read_set.
- Rename of indexes was not handled for persistent statistics.
- This is now handled similarly to rename of columns. Renamed indexes
are now stored in 'rename_stat_indexes' and handled in
Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may, instead of creating a new index, rename
an existing generated foreign key index. This was not reflected in
the index_stats table because this was handled in
mysql_prepare_create_table() instead of in the mysql_alter() code.
Fixed by adding a call in mysql_prepare_create_table() to drop the
changed index.
I also had to replace the code that marked the index to be ignored
with code that would not destroy the original index name.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
MDEV-31455: main.events_stress or events.events_stress fails with view-protocol
MDEV-31457: main.delete_use_source fails (hangs) with view-protocol
Fixed tests:
main.sum_distinct-big, main.delete_use_source - disabled view-protocol
for some cases because they use transactions without autocommit.
main.events_stress, main.merge-big - disabled the service connection
for some queries since it is necessary that the SELECT query run
in the same session.
The fix is to return a 3-state value from the
Range_rowid_filter::build() call:
1. The filter was built successfully;
2. The filter was not built, but the error was not fatal, i.e. there is
no need to roll back the transaction. For example, if the size of the
container to store row ids is not enough;
3. The filter was not built because of a fatal error, for example a
deadlock or a lock wait timeout from the storage engine. In this case
we should stop query plan execution and roll back the transaction.
Reviewed by: Sergey Petrunya