Create templates thd->alloc<X>(n) to use instead of
(X*)thd->alloc(sizeof(X)*n), and the same for thd->calloc().
By default the type is char, so the old usage thd->alloc(size)
still works.
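A minimal self-contained sketch of the idea (the real templates live in
sql_class.h and allocate from the statement MEM_ROOT; the malloc-backed
stand-in below is only illustrative):

  #include <cstdlib>
  #include <cstring>

  struct THD
  {
    // stand-in for the MEM_ROOT-based allocator
    void *raw_alloc(std::size_t size) { return std::malloc(size); }

    template <typename X= char>
    X *alloc(std::size_t n= 1)
    { return static_cast<X*>(raw_alloc(n * sizeof(X))); }

    template <typename X= char>
    X *calloc(std::size_t n= 1)
    {
      void *p= raw_alloc(n * sizeof(X));
      if (p)
        std::memset(p, 0, n * sizeof(X));
      return static_cast<X*>(p);
    }
  };

  struct KEY { int dummy; };

  int main()
  {
    THD thd;
    KEY *keys= thd.alloc<KEY>(10); // was: (KEY*) thd->alloc(sizeof(KEY)*10)
    char *buf= thd.alloc(256);     // X defaults to char: old usage works
    (void) keys; (void) buf;
    return 0;
  }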
This partially reverts 43623f04a9.
Engines have to set ::position() after ::write_row(); otherwise
the server won't be able to refer to the row just inserted.
This is important for high-level indexes.
The heap part isn't reverted, so heap doesn't support high-level
indexes. To fix this, it would need info->lastpos in addition to
info->current_ptr.
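A hypothetical engine-side sketch of this contract (handler::write_row()
and handler::position() are real handler methods; the class below is a
stand-in, not server code):

  #include <cstring>
  typedef unsigned char uchar;

  struct ha_example
  {
    uchar ref[8];                        // position of the "current" row
    unsigned long long last_insert_pos;  // hypothetical engine state

    unsigned long long store_row(const uchar *) { return 0; } // stub

    int write_row(const uchar *buf)
    {
      // store the row and remember where it went
      last_insert_pos= store_row(buf);
      return 0;
    }

    void position(const uchar *)
    {
      // After write_row() this must refer to the row just inserted,
      // so the server (e.g. high-level indexes) can re-find it.
      std::memcpy(ref, &last_insert_pos, sizeof(last_insert_pos));
    }
  };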
Single-table UPDATE/DELETE didn't provide an outer_lookup_keys value
for subqueries. This made it impossible to make a meaningful choice
between the IN->EXISTS and Materialization strategies for subqueries.
Fix this:
* Make UPDATE/DELETE save Sql_cmd_dml::scanned_rows,
* The subquery's JOIN::choose_subquery_plan() can then fetch it from
  there for outer_lookup_keys.
Details:
UPDATE/DELETE now calls select_lex->optimize_unflattened_subqueries()
twice, like SELECT does (SELECT first calls
optimize_constant_subqueries() in JOIN::optimize_inner(), then calls
optimize_unflattened_subqueries() in JOIN::optimize_stage2()); see the
sketch after the note below:
1. Call it with const_only=true before any optimizations. This allows
   the range optimizer and others to use the values of cheap const
   subqueries.
2. Call it with const_only=false after the range optimizer, partition
   pruning, etc. The outer_lookup_keys value is provided, so it is
   possible to pick a good subquery strategy.
Note: PROTECT_STATEMENT_MEMROOT requires that the first SP execution
performs subquery optimization for all subqueries, even for degenerate
query plans like "Impossible WHERE". Because of that, we ensure that
the call to optimize_unflattened_subqueries() (with const_only=false)
still happens for degenerate query plans, as was the case before this
change.
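A hypothetical sketch of the two-phase call pattern (the method name
matches the commit; the class is a stand-in, not server code):

  struct SELECT_LEX_stub
  {
    // stand-in for st_select_lex::optimize_unflattened_subqueries();
    // returns true on error
    bool optimize_unflattened_subqueries(bool const_only)
    { (void) const_only; return false; }
  };

  bool optimize_update_delete(SELECT_LEX_stub *sl)
  {
    // Phase 1: before any optimization, so the range optimizer and
    // others can use the values of cheap const subqueries
    if (sl->optimize_unflattened_subqueries(/*const_only=*/ true))
      return true;

    // ... range optimization, partition pruning; the row estimate is
    //     saved in Sql_cmd_dml::scanned_rows here ...

    // Phase 2: outer_lookup_keys is now available, so a good strategy
    // (IN->EXISTS vs Materialization) can be chosen
    if (sl->optimize_unflattened_subqueries(/*const_only=*/ false))
      return true;
    return false;
  }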
The memory leak happened on the second execution of a prepared
statement that runs an UPDATE statement with a correlated subquery on
the right-hand side of the SET clause. In this case, the invocation of
the method
table->stat_records()
could return zero, which results in taking the 'if' branch that
handles an impossible WHERE condition. The issue is that this branch
missed saving the leaf tables, which has to be performed as the first
condition optimization activity. Later, the PS statement memory root
is marked as read-only when the first execution of the prepared
statement finishes. The next time the same statement is executed, it
hits an assertion on an attempt to allocate memory on the PS memory
root marked as read-only. This memory allocation happens through the
following sequence of invocations:
Prepared_statement::execute
mysql_execute_command
Sql_cmd_dml::execute
Sql_cmd_update::execute_inner
Sql_cmd_update::update_single_table
st_select_lex::save_leaf_tables
List<TABLE_LIST>::push_back
To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to track
whether the method SELECT_LEX::save_leaf_tables() has already been
invoked, so that it is not called again.
A similar issue could occur when running a DELETE statement with a
LIMIT clause in PS/SP mode. The cause of the memory leak is the same
as in the UPDATE case, and it is fixed in the same way.
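A hypothetical sketch of the guard (the flag and method names come from
the commit; the class is a stand-in):

  struct SELECT_LEX_stub
  {
    bool leaf_tables_saved= false;
    bool save_leaf_tables() { return false; }  // stand-in; true = error
  };

  bool save_leaf_tables_once(SELECT_LEX_stub *sl)
  {
    if (!sl->leaf_tables_saved)
    {
      if (sl->save_leaf_tables())
        return true;                  // e.g. OOM on the statement mem-root
      sl->leaf_tables_saved= true;    // never re-save on PS re-execution
    }
    return false;
  }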
Running an UPDATE statement in PS mode with positional parameters
bound to an array of actual values (that is, prepared to be run in
bulk mode) results in incorrect behaviour in the presence of an
ON UPDATE trigger that also executes an UPDATE statement. The same is
true for handling a DELETE statement in the presence of an ON DELETE
trigger. Typically, the visible effect of this incorrect behaviour is
a wrong number of updated/deleted rows in the target table.
Additionally, in the UPDATE case, the number of modified rows and the
state message returned by the statement contain wrong information
about the number of modified rows.
The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding positional arguments to their
actual values is stored in THD (this is thd->bulk_param) and reused
while processing every INSERT/UPDATE/DELETE statement. This leads to
the actual values bound to the top-level UPDATE/DELETE statement being
consumed by other DML statements in the triggers' bodies.
To fix the issue, temporarily reset thd->bulk_param to nullptr before
invoking triggers and restore its value when their execution finishes.
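A hypothetical sketch of the save/restore around trigger invocation
(thd->bulk_param is named in the commit; the rest is illustrative):

  struct THD_stub { void *bulk_param= nullptr; };

  bool fire_triggers_without_bulk_param(THD_stub *thd)
  {
    void *save_bulk_param= thd->bulk_param;
    thd->bulk_param= nullptr;          // triggers must not consume arrays
                                       // bound to the top-level statement
    bool err= false;                   // real code invokes the triggers here
    thd->bulk_param= save_bulk_param;  // restore for the top-level statement
    return err;
  }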
The second part of the problem, a wrong affected-rows value reported
by the Connector/C API, is caused by the fact that the diagnostics
area is reused by the original DML statement and by a statement
invoked from a trigger. This has to be taken into account when
finalizing the state of the diagnostics area on statement completion.
Important remark: when the macro DBUG_OFF is defined, a call to the
method
Diagnostics_area::reset_diagnostics_area()
resets the data members m_affected_rows and m_statement_warn_count.
The values of these Diagnostics_area data members are used when
sending OK and EOF messages. If a DML statement is executed in PS bulk
mode, such resetting sends wrong affected-rows values to the client
whenever the DML statement fires triggers. So, reset these data
members only when the current statement is not run in bulk mode.
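A hypothetical sketch of the conditional reset (the member names come
from the commit; the class is a stand-in for Diagnostics_area):

  struct Diagnostics_area_stub
  {
    unsigned long long m_affected_rows= 0;
    unsigned int m_statement_warn_count= 0;
    bool bulk_op= false;               // stand-in for the bulk-mode check

    void reset_diagnostics_area()
    {
  #ifdef DBUG_OFF
      if (!bulk_op)                    // keep counters across bulk iterations
      {
        m_affected_rows= 0;
        m_statement_warn_count= 0;
      }
  #endif
      // ... the rest of the reset ...
    }
  };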
The re-design of the way the DELETE statement is handled, introduced
by the task MDEV-28883, added a regression caused by a missing reset
of the data member current_select->first_cond_optimization when
handling the DELETE statement, which results in a memory leak on the
second execution of the same DELETE statement in PS mode.
To fix the memory leak, set the data member
current_select->first_cond_optimization
to false when execution of the DELETE statement finishes.
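A hypothetical sketch of where the reset happens (the member name comes
from the commit; the type is a stand-in):

  struct SELECT_LEX_stub { bool first_cond_optimization= true; };

  void finish_delete_statement(SELECT_LEX_stub *current_select)
  {
    // Re-execution in PS mode must not redo first-time-only allocations
    // on the statement mem-root, which is read-only by then:
    current_select->first_cond_optimization= false;
  }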
According to the standard, the autoincrement column (i.e. *identity
column*) should be advanced on each insert implicitly made by
UPDATE/DELETE ... FOR PORTION.
This is very inconvenient in several notable cases. Consider a
WITHOUT OVERLAPS key with an autoinc column:
id int auto_increment, unique(id, p without overlaps)
An UPDATE or DELETE with FOR PORTION creates the expectation that id
will remain unchanged in such a case.
The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, but the
generation rules differ in many ways. For example, there is also the
notion of an autoincrement index, which is bound to the autoincrement
field.
We will define our own generation rule for the PORTION OF operations
involving AUTO_INCREMENT (see the sketch below):
* If an autoincrement index contains a WITHOUT OVERLAPS specification,
  then a new value should not be generated; otherwise it should.
Apart from WITHOUT OVERLAPS there is also another notable case,
referred to by the reporter: a unique key that has an autoincrement
column and a field from the period specification:
id int auto_increment, unique(id, s), period for p(s, e)
For this case, no exception is made, and the autoincrement rules
proceed according to the standard (i.e. the value is advanced on
implicit inserts).
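A hypothetical sketch of the rule (HA_WITHOUT_OVERLAPS is the server's
key flag for WITHOUT OVERLAPS; the flag value and the function are
illustrative):

  typedef unsigned long key_flags_t;
  static const key_flags_t HA_WITHOUT_OVERLAPS= 1UL << 27; // illustrative

  // Should the implicit insert made by UPDATE/DELETE ... FOR PORTION OF
  // generate a new AUTO_INCREMENT value?
  bool portion_of_advances_autoinc(key_flags_t autoinc_key_flags)
  {
    // keep the id stable when the autoincrement index is declared
    // WITHOUT OVERLAPS; otherwise follow the standard and advance it
    return !(autoinc_key_flags & HA_WITHOUT_OVERLAPS);
  }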
Enable unusable key notes for non-equality predicates:
<, <=, >=, >, BETWEEN, IN, LIKE
Note, in some scenarios it displays duplicate notes, e.g.
for queries with ORDER BY:
SELECT * FROM t1
WHERE indexed_string_column >= 10
ORDER BY indexed_string_column
LIMIT 5;
This should be tolerable. Getting rid of the duplicate note
completely would need a much more complex patch, which is
not desirable in 10.6.
Details:
- Changing RANGE_OPT_PARAM::note_unusable_keys from bool
to a new data type Item_func::Bitmap, so the caller can
choose with better granularity which predicates
should raise unusable key notes inside the range optimizer
(see the sketch after this list):
a. all predicates (=, <=>, <, <=, >=, >, BETWEEN, IN, LIKE)
b. all predicates except equality (=, <=>)
c. none of the predicates
"b." is needed because in some scenarios equality predicates (=, <=>)
raise unusable key notes at an earlier stage, before the range
optimizer, during update_ref_and_keys(). Calling the range optimizer
with "all predicates" would produce duplicate notes for = and <=> in
such cases.
- Fixing get_quick_record_count() to call the range optimizer
with "all predicates except equality" instead of "none of the predicates".
Before this change the range optimizer suppressed all notes for
non-equality predicates: <, <=, >=, >, BETWEEN, IN, LIKE.
This actually fixes the reported problem.
- Fixing JOIN::make_range_rowid_filters() to call the range optimizer
with "all predicates except equality" instead of "all predicates".
Before this change the range optimizer produced duplicate notes
for = and <=> during a rowid_filter optimization.
- Cleanup:
Adding the op_collation argument to Field::raise_note_cannot_use_key_part()
and displaying the operation collation rather than the argument collation
in the unusable key note. This is important for operations with more than
two arguments: BETWEEN and IN, e.g.:
SELECT * FROM t1
WHERE column_utf8mb3_general_ci
BETWEEN 'a' AND 'b' COLLATE utf8mb3_unicode_ci;
SELECT * FROM t1
WHERE column_utf8mb3_general_ci
IN ('a', 'b' COLLATE utf8mb3_unicode_ci);
The note for 'a' now prints utf8mb3_unicode_ci as the collation,
which is the collation of the entire operation:
Cannot use key key1 part[0] for lookup:
"`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
"'a'" of collation `utf8mb3_unicode_ci`
Before this change it printed the collation of 'a',
so the note was confusing:
Cannot use key key1 part[0] for lookup:
"`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
"'a'" of collation `utf8mb3_general_ci`"
Statements affected by this bug are DELETE statements that meet all
of these conditions:
1) single-table delete syntax
2) an IN (subquery) predicate
3) semi-join optimization enabled
4) an ORDER BY clause
Semi-join optimization on an innocent-looking query, such as
DELETE FROM t1 WHERE c1 IN (select c2 from t2) ORDER BY c1;
turns it from a single-table delete into a multi-table delete.
During multi_delete::initialize_tables() for the top-level join
object, a table is initialized without the keep_current_rowid flag,
which is needed to position a handler for removal of the correct row
after the filesort structure has been built.
Fix provided by Monty (monty@mariadb.com)
OK'd in slack:#askmonty 2023-12-01
Applicable to 11.1 onwards.
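A hypothetical sketch of the missing piece (TABLE::keep_current_rowid
is the real flag; the types and loop are illustrative of
multi_delete::initialize_tables()):

  struct TABLE_stub { bool keep_current_rowid= false; };
  struct TABLE_LIST_stub { TABLE_stub *table; TABLE_LIST_stub *next_local; };

  void initialize_delete_tables(TABLE_LIST_stub *delete_tables)
  {
    for (TABLE_LIST_stub *tl= delete_tables; tl; tl= tl->next_local)
      tl->table->keep_current_rowid= true; // the handler keeps the rowid so
                                           // the correct row can be found
                                           // again after filesort
  }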
(Same fix as the previous entry: single-table DELETE with an
IN (subquery) predicate, semi-join optimization, and ORDER BY; the
keep_current_rowid flag was missing in
multi_delete::initialize_tables().)
Fix provided by Monty (monty@mariadb.com)
Pushed into 10.5 at Monty's request.
Applicable to 10.6, 10.11, 11.0.
OK'd by Monty in slack:#askmonty 2023-12-01
MDEV-32441 SENT_ROWS shows random wrong values when a stored function
is selected.
MDEV-32281 EXAMINED_ROWS is not populated in
information_schema.processlist upon SELECT.
Added SENT_ROWS to information_schema.processlist.
This is to have the same information as Percona Server (ROWS_SENT).
To ensure that information_schema.processlist has correct values for
sent_rows and examined_rows, I introduced two new variables to hold
the total counts so far (see the sketch below). This was needed
because stored functions and stored procedures reset the normal
counters so that rows can be counted for each statement individually
for the slow query log.
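A hypothetical sketch of the per-statement vs. total split (the member
names are illustrative; the real counters live in THD):

  struct Row_counters
  {
    unsigned long long examined_rows= 0;       // reset per statement
    unsigned long long sent_rows= 0;           // (for the slow query log)
    unsigned long long total_examined_rows= 0; // never reset; PROCESSLIST
    unsigned long long total_sent_rows= 0;     // reads these totals

    void inc_examined(unsigned long long n)
    { examined_rows+= n; total_examined_rows+= n; }

    void inc_sent(unsigned long long n)
    { sent_rows+= n; total_sent_rows+= n; }

    // stored functions/procedures reset only the per-statement counters:
    void reset_per_statement() { examined_rows= 0; sent_rows= 0; }
  };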
Other things:
- SELECTs with functions show in the processlist the total
  examined_rows and sent_rows of the main statement and all functions.
- Stored procedures show in the processlist examined_rows and
  sent_rows per stored procedure statement.
- Fixed some double accounting for sent_rows and examined_rows.
- HANDLER operations now also support sent_rows and examined_rows.
- Display sizes for MEMORY_USED, MAX_MEMORY_USED, EXAMINED_ROWS and
  QUERY_ID in information_schema.processlist changed to 10 characters.
- EXAMINED_ROWS and SENT_ROWS changed to bigint.
- INSERT RETURNING and DELETE RETURNING now update SENT_ROWS.
- As thd is always up to date with examined_rows, we do not need to
  handle examined-row counting for unions or filesort.
- Renamed SORT_INFO::examined_rows to m_examined_rows to ensure that
  we don't get bugs in merges that try to use examined_rows.
- Removed calls of the form "thd->set_examined_row_count(0)" as they
  are no longer needed.
- Removed JOIN::join_examined_rows.
- Removed the now-unused function THD::set_examined_row_count().
- Made inline some functions that were called for each row.
(Review input addressed)
(Added handling of UPDATE/DELETE and partitioning w/o index)
If the properties of the used collation allow, do the following
equivalent rewrites:
1. UPPER(key_col)=expr -> key_col=expr
expr=UPPER(key_col) -> expr=key_col
(also rewrite both sides of the equality at the same time)
2. UPPER(key_col) IN (constant-list) -> key_col IN (constant-list)
- Mark utf8mb{3,4}_general_ci as collations that allow this.
- Add optimizer_switch='sargable_casefold=ON' to control this.
(ON by default in this patch)
- Cover the rewrite in Optimizer Trace, rewrite name is
"sargable_casefold_removal".
Do not attempt to produce "r_engine_stats" for temporary (= work)
tables. These tables may be
- re-created during query execution
- freed during query execution (this is done e.g. in JOIN::cleanup(),
  before we produce the ANALYZE FORMAT=JSON output).
Also, make the save_explain_data() functions not set handler_for_stats
to point to handler objects that do not have handler->handler_stats
set. If the storage engine is not collecting handler_stats, it will
not have them when we're producing the ANALYZE FORMAT=JSON output
either (see the sketch below).
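A hypothetical sketch of the guard (handler_for_stats comes from the
commit; the type and function are stand-ins):

  struct handler_stub { void *handler_stats; }; // null if not collecting

  // Work tables may be re-created or freed (e.g. in JOIN::cleanup())
  // before the ANALYZE FORMAT=JSON output is produced, so never keep a
  // stats pointer to them, nor to handlers that aren't collecting stats:
  handler_stub *pick_handler_for_stats(handler_stub *file, bool is_work_table)
  {
    if (is_work_table || !file || !file->handler_stats)
      return nullptr;
    return file;
  }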