Commit graph

2079 commits

Author SHA1 Message Date
Sergei Petrunia
5cd3fa81ef Merge 10.11 -> 11.2 2024-09-17 12:34:33 +03:00
Yuchen Pei
60b93cdd30
Merge branch '10.5' into 10.6 2024-09-06 13:52:57 +10:00
Yuchen Pei
e886c2ba02
MDEV-34757 Check leaf_tables_saved in partition pruning in UPDATE and DELETE 2024-09-06 11:41:59 +10:00
Yuchen Pei
2c3e07df47
MDEV-34447: Memory leakage is detected on running the test main.ps against the server 11.1
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery on the right-hand
side of the SET clause. In this case, an invocation of the method
  table->stat_records()
could return zero, which results in taking the 'if' branch that handles an
impossible WHERE condition. The issue is that this branch skipped saving
the leaf tables, which has to be performed as the first condition
optimization step. Later, the PS memory root is marked as read-only when
the first execution of the prepared statement finishes. The next time the
same statement is executed, it hits an assertion on an attempt to allocate
memory on the read-only PS memory root. This memory allocation takes place
through the following chain of invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to track
whether the method SELECT_LEX::save_leaf_tables() has already been
invoked, so that it is not called again.

A similar issue could occur when running a DELETE statement with a LIMIT
clause in PS/SP mode. The cause of the memory leak is the same as in the
UPDATE case, and it is fixed in the same way.
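
A minimal sketch of the guard this flag enables (placement inside
Sql_cmd_update::update_single_table is illustrative, following the call
chain above):

  /* Save leaf tables only on the first execution, so later executions
     do not allocate on the read-only PS memory root. */
  if (!select_lex->leaf_tables_saved)
  {
    if (select_lex->save_leaf_tables(thd))
      DBUG_RETURN(1);
    select_lex->leaf_tables_saved= true;
  }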
2024-09-06 11:41:58 +10:00
Oleksandr Byelkin
6197e6abc4 Merge branch '10.11' into 11.2 2024-08-21 07:58:46 +02:00
Oleksandr Byelkin
70afc62750 Merge branch '10.6' into 10.11 2024-08-20 10:00:39 +02:00
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode with positional parameter(s)
bound to an array of actual values (that is, prepared to be run in bulk
mode) results in incorrect behaviour in the presence of an ON UPDATE
trigger that also executes an UPDATE statement. The same is true for
handling a DELETE statement in the presence of an ON DELETE trigger.
Typically, the visible effect of such incorrect behaviour is a wrong
number of updated/deleted rows in the target table. Additionally, in the
UPDATE case, the number of modified rows and the state message returned
by the statement contain wrong information about the number of modified
rows.

The reason for the incorrect number of updated/deleted rows is that the
data structure used for binding positional arguments to their actual
values is stored in THD (this is thd->bulk_param) and reused when
processing every INSERT/UPDATE/DELETE statement. This leads to the actual
values bound to the top-level UPDATE/DELETE statement being consumed by
the DML statements executed by the triggers' bodies.

To fix the issue, temporarily reset thd->bulk_param to nullptr before
invoking triggers and restore its value when they finish.
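
A minimal sketch of the save/restore pattern (the call site and the
process_triggers() arguments are illustrative):

  /* Triggers must not consume the array of values bound to the
     top-level statement. */
  void *save_bulk_param= thd->bulk_param;
  thd->bulk_param= nullptr;
  bool err= triggers->process_triggers(thd, event, time_type,
                                       old_row_is_record1);
  thd->bulk_param= save_bulk_param;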

The second part of the problem, a wrong affected-rows value reported via
the Connector/C API, is caused by the fact that the diagnostics area is
reused by the original DML statement and a statement invoked by a trigger.
This fact should be taken into account when finalizing the state of the
diagnostics area on completion of a statement.

Important remark: when the macro DBUG_OFF is defined, a call to the method
  Diagnostics_area::reset_diagnostics_area()
results in a reset of the data members
  m_affected_rows, m_statement_warn_count.
The values of these data members of the class Diagnostics_area are used
when sending OK and EOF messages. When a DML statement is executed in PS
bulk mode, such resetting results in sending wrong affected-rows values to
the client if the DML statement fires triggers. So, reset these data
members only when the current statement being processed is not run in bulk
mode.
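
A sketch of the conditional reset (the bulk-mode predicate name is a
hypothetical shorthand for "the current statement runs in bulk mode"):

  /* Inside Diagnostics_area::reset_diagnostics_area(), release builds: */
  #ifdef DBUG_OFF
    if (!is_bulk_op)               /* hypothetical bulk-mode flag */
    {
      m_affected_rows= 0;
      m_statement_warn_count= 0;
    }
  #endif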
2024-08-19 12:13:43 +07:00
Dmitry Shulga
e012407397 MDEV-34447: Memory leakage is detected on running the test main.ps against the server 11.1
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery on the right-hand
side of the SET clause. In this case, an invocation of the method
  table->stat_records()
could return zero, which results in taking the 'if' branch that handles an
impossible WHERE condition. The issue is that this branch skipped saving
the leaf tables, which has to be performed as the first condition
optimization step. Later, the PS memory root is marked as read-only when
the first execution of the prepared statement finishes. The next time the
same statement is executed, it hits an assertion on an attempt to allocate
memory on the read-only PS memory root. This memory allocation takes place
through the following chain of invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to track
whether the method SELECT_LEX::save_leaf_tables() has already been
invoked, so that it is not called again.

A similar issue could occur when running a DELETE statement with a LIMIT
clause in PS/SP mode. The cause of the memory leak is the same as in the
UPDATE case, and it is fixed in the same way.
2024-07-02 18:40:11 +07:00
Marko Mäkelä
27a3366663 Merge 10.6 into 10.11 2024-06-27 10:26:09 +03:00
Marko Mäkelä
0076eb3d4e Merge 10.5 into 10.6 2024-06-24 13:09:47 +03:00
Dave Gosselin
db0c28eff8 MDEV-33746 Supply missing override markings
Find and fix missing virtual override markings. Update CMake maintainer
flags to include -Wsuggest-override and -Winconsistent-missing-override.
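
A minimal example of the kind of marking these warnings enforce (the class
is illustrative, not from the patch):

  class ha_example : public handler
  {
    /* 'override' makes the compiler verify this really overrides
       handler::rnd_init(); -Wsuggest-override flags the omission. */
    int rnd_init(bool scan) override;
  };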
2024-06-20 11:32:13 -04:00
Sergei Petrunia
fe41171c96 MDEV-33533: Crash at execution of DELETE when trying to use rowid filter
(Based on original patch by Oleksandr Byelkin)

Multi-table DELETE can execute via "buffered" mode: at phase #1 it collects
rowids of rows to be deleted, then at phase #2 in multi_delete::do_deletes()
it calls handler->rnd_pos() to read rows to be deleted and deletes them.

The problem occurred when phase #1 used Rowid Filter on the table that
phase #2 would be deleting from.
In InnoDB, h->rnd_init(scan=false) followed by h->rnd_pos() is an index
scan over the PK under the hood. So, at phase #2, ha_innobase::rnd_init()
would try to use the Rowid Filter and hit an assertion inside
ha_innobase::rnd_init().

Note that multi-table UPDATE works similarly but was not affected, because
the patch for MDEV-7487 added code to disable the rowid filter for phase #2
in multi_update::do_updates().

This patch changes the approach:
- It makes InnoDB not use Rowid Filter in rnd_pos() scans: it is disabled in
  ha_innobase::rnd_init() and enabled back in ha_innobase::rnd_end().
- multi_update::do_updates() no longer disables Rowid Filter for phase#2 as
  it is no longer necessary.
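
A sketch of the disable/enable pair (the saved-filter member is
hypothetical; the point is that the pushed filter is parked for the
duration of rnd_pos() scans):

  int ha_innobase::rnd_init(bool scan)
  {
    /* rnd_pos() is a PK index scan internally; a pushed rowid filter
       must not apply to it. Park it until rnd_end(). */
    m_saved_rowid_filter= pushed_rowid_filter;  /* hypothetical member */
    pushed_rowid_filter= nullptr;
    /* existing initialization continues here */
    return 0;
  }

  int ha_innobase::rnd_end()
  {
    pushed_rowid_filter= m_saved_rowid_filter;  /* restore the filter */
    return 0;
  }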
2024-05-13 09:52:39 +02:00
Marko Mäkelä
683fbced6b Merge 11.0 into 11.1 2024-03-28 12:15:36 +02:00
Marko Mäkelä
fec2fd6add Merge 10.11 into 11.0 2024-03-28 10:51:36 +02:00
Marko Mäkelä
788953463d Merge 10.6 into 10.11
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
2024-03-28 09:16:57 +02:00
Monty
cfa8268ef9 MDEV-33622 Server crashes when the UPDATE statement (which has duplicate key) is run after setting a low thread_stack
This was caused by a wrong allocation of a variable on the stack
(4K of data was allocated instead of 512 bytes).

No test case, as the original MDEV test case is not usable for mtr.
2024-03-12 19:00:41 +02:00
Marko Mäkelä
d73baa402a Merge 10.11 into 11.0 2024-02-20 12:02:01 +02:00
Marko Mäkelä
64cce8d5bf Merge 10.6 into 10.11 2024-02-14 16:12:53 +02:00
Marko Mäkelä
691f923906 Merge 10.5 into 10.6 2024-02-13 20:42:59 +02:00
Marko Mäkelä
8ec12e0d6d Merge 10.4 into 10.5 2024-02-12 11:38:13 +02:00
Dmitry Shulga
e48bd474a2 MDEV-15703: Crash in EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT
This patch fixes the issue with passing DEFAULT or IGNORE values to
positional parameters for some kinds of SQL statements to be executed
as prepared statements.

The main idea of the patch is to associate the actual value passed by
the USING clause with the positional parameter represented by the
Item_param class. Such an association must be performed on execution of
an UPDATE statement in PS/SP mode. Other corner cases that resulted in a
server crash are the handling of CREATE TABLE with a positional parameter
placed after the DEFAULT clause, and a CALL statement passing either
DEFAULT or IGNORE as the actual value of a positional parameter. These
cases are fixed by checking whether an error is set in the diagnostics
area in the function pack_vcols() on return from the function
pack_expression()
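
A sketch of the added check (pack_vcols() and pack_expression() are named
in the message; the surrounding code is illustrative):

  /* In pack_vcols(), right after pack_expression() returns: */
  if (thd->is_error())
    return true;   /* DEFAULT/IGNORE could not be bound; error already set */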
2024-02-08 09:21:54 +01:00
Sergei Golubchik
b6680e0101 Merge branch '11.0' into 11.1 2024-02-02 11:30:47 +01:00
Sergei Golubchik
87e13722a9 Merge branch '10.6' into 10.11 2024-02-01 18:36:14 +01:00
Oleksandr Byelkin
fe490f85bb Merge branch '10.11' into 11.0 2024-01-30 08:54:10 +01:00
Oleksandr Byelkin
14d930db5d Merge branch '10.6' into 10.11 2024-01-30 08:17:58 +01:00
Monty
3c63e2f890 Temporary table name used by multi-update has 'null' as part of the name
The problem was that TMP_TABLE_PARAM was not properly initialized
2024-01-23 13:03:12 +02:00
Sergei Golubchik
7a5448f8da Merge branch '11.0' into 11.1 2023-12-19 20:11:54 +01:00
Sergei Golubchik
8c8bce05d2 Merge branch '10.11' into 11.0 2023-12-19 15:53:18 +01:00
Sergei Golubchik
fd0b47f9d6 Merge branch '10.6' into 10.11 2023-12-18 11:19:04 +01:00
Alexander Barkov
4ced4898fd MDEV-32958 Unusable key notes do not get reported for some operations
Enable unusable key notes for non-equality predicates:
   <, <=, >=, >, BETWEEN, IN, LIKE

Note, in some scenarios it displays duplicate notes, e.g.
for queries with ORDER BY:

  SELECT * FROM t1
  WHERE    indexed_string_column >= 10
  ORDER BY indexed_string_column
  LIMIT 5;

This should be tolerable. Getting rid of the duplicate note
completely would need a much more complex patch, which is
not desirable in 10.6.

Details:

- Changing RANGE_OPT_PARAM::note_unusable_keys from bool
  to a new data type Item_func::Bitmap, so the caller can
  choose with better granularity which predicates
  should raise unusable key notes inside the range optimizer:
    a. all predicates (=, <=>, <, <=, >=, >, BETWEEN, IN, LIKE)
    b. all predicates except equality (=, <=>)
    c. none of the predicates

  "b." is needed because in some scenarios equality predicates (=, <=>)
  send unusable key notes at an earlier stage, before the range optimizer,
  during update_ref_and_keys(). Calling the range optimizer with
  "all predicates" would produce duplicate notes for = and <=> in such cases.

- Fixing get_quick_record_count() to call the range optimizer
  with "all predicates except equality" instead of "none of the predicates".
  Before this change the range optimizer suppressed all notes for
  non-equality predicates: <, <=, >=, >, BETWEEN, IN, LIKE.
  This actually fixes the reported problem.

- Fixing JOIN::make_range_rowid_filters() to call the range optimizer
  with "all predicates except equality" instead of "all predicates".
  Before this change the range optimizer produced duplicate notes
  for = and <=> during a rowid_filter optimization.

- Cleanup:
  Adding the op_collation argument to Field::raise_note_cannot_use_key_part()
  and displaying the operation collation rather than the argument collation
  in the unusable key note. This is important for operations with more than
  two arguments: BETWEEN and IN, e.g.:

    SELECT * FROM t1
    WHERE column_utf8mb3_general_ci
          BETWEEN 'a' AND 'b' COLLATE utf8mb3_unicode_ci;

    SELECT * FROM t1
    WHERE column_utf8mb3_general_ci
          IN ('a', 'b' COLLATE utf8mb3_unicode_ci);

    The note for 'a' now prints utf8mb3_unicode_ci as the collation,
    which is the collation of the entire operation:

      Cannot use key key1 part[0] for lookup:
      "`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
      "'a'" of collation `utf8mb3_unicode_ci`

    Before this change it printed the collation of 'a',
    so the note was confusing:

      Cannot use key key1 part[0] for lookup:
      "`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
      "'a'" of collation `utf8mb3_general_ci`"
2023-12-11 08:55:27 +04:00
Oleksandr Byelkin
f5fae75652 Merge branch '11.0' into 11.1 2023-08-09 08:25:14 +02:00
Oleksandr Byelkin
51f9d62005 Merge branch '10.11' into 11.0 2023-08-09 07:53:48 +02:00
Oleksandr Byelkin
036df5f970 Merge branch '10.10' into 10.11 2023-08-08 14:57:31 +02:00
Oleksandr Byelkin
34a8e78581 Merge branch '10.6' into 10.9 2023-08-04 08:01:06 +02:00
Oleksandr Byelkin
6bf8483cac Merge branch '10.5' into 10.6 2023-08-01 15:08:52 +02:00
Igor Babaev
adc13e2c16 MDEV-31150 Crash on 2nd execution of update using mergeable derived table
This bug caused crashes of the server on the second execution of update
statements that used mergeable derived tables. The crashes happened in
the function multi_update_check_table_access() when the code tried to
dereference the pointer stored in the field TABLE_LIST::table for a
mergeable derived table. The fact is, this field is set to NULL after the
first execution of the query. At the same time, any action performed by
the function is actually not needed for derived tables.
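
A sketch of the kind of guard that avoids the dereference (the loop
context is illustrative):

  /* In multi_update_check_table_access(): a mergeable derived table
     has no base TABLE here and needs no access check. */
  if (tl->is_derived())
    continue;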

Approved by Oleksandr Byelkin <sanja@mariadb.com>
2023-07-26 23:11:17 -07:00
Oleksandr Byelkin
7564be1352 Merge branch '10.4' into 10.5 2023-07-26 16:02:57 +02:00
Marko Mäkelä
c6ac1e39b6 Merge 11.0 into 11.1 2023-07-26 15:13:43 +03:00
Marko Mäkelä
f2b4972bd4 Merge 10.11 into 11.0 2023-07-26 15:13:06 +03:00
Marko Mäkelä
bce3ee704f Merge 10.10 into 10.11 2023-07-26 14:44:43 +03:00
Aleksey Midenkov
14cc7e7d6e MDEV-25644 UPDATE not working properly on transaction precise system versioned table
The first UPDATE under START TRANSACTION does nothing (nstate= nstate),
but still generates history. Since the update vector is empty, we get into
the (!uvect->n_fields) branch, which only adds a history row but does not
do the update. After that we get the current row with a wrong (old)
row_start value, and because of that the second UPDATE tries to insert a
history row again: it sees trx->id != row_start, which is the guard
against inserting multiple trx_id-based history rows under the same
transaction (we have the same trx_id, we get a duplicate error, and this
bug demonstrates that). But this attempt fails anyway, because the PK is
based on row_end, which is constant under the same transaction, so the PK
did not change.

The fix moves vers_make_update() to an earlier stage of
calc_row_difference(): it prepares the update vector before the
(!uvect->n_fields) check and never gets into that branch, hence there is
no need to handle versioning inside that condition anymore.

Now trx->id and row_start are equal after the first UPDATE and we don't
try to insert a second history row.

== Cleanups and improvements ==

ha_innobase::update_row():

vers_set_fields and vers_ins_row are cleaned up into a direct condition
check. The SQLCOM_ALTER_TABLE check is no longer used, as this is dead
code; an assertion is done instead.

upd_node->is_delete is set in calc_row_difference() just to keep the
versioning code in one place as much as possible. vers_make_delete()
is still located in row_update_for_mysql(), as this is required for
ha_innobase::delete_row() as well.

row_ins_duplicate_error_in_clust():

Restrict DB_FOREIGN_DUPLICATE_KEY to more precise conditions.
VERSIONED_DELETE is used specifically to help the lower stack
understand what caused the current insert. Related to MDEV-29813.
2023-07-20 18:22:31 +03:00
Marko Mäkelä
7cde5c539b Merge 10.6 into 10.9 2023-07-10 11:22:21 +03:00
Monty
99bd226059 MDEV-31558 Add InnoDB engine information to the slow query log
The new statistics are enabled by adding the "engine", "innodb" or "full"
option to --log-slow-verbosity

Example output:

 # Pages_accessed: 184  Pages_read: 95  Pages_updated: 0  Old_rows_read: 1
 # Pages_read_time: 17.0204  Engine_time: 248.1297

Pages_read_time is the time spent doing physical reads inside a storage
engine. (Writes cannot be tracked as these are usually done in the
background.)
Engine_time is the time spent inside the storage engine for the full
duration of the read/write/update calls. It uses the same code as
'analyze statement' for calculating the time spent.

The engine statistics are collected through a generic interface that
should be easy for any engine to use. It can also easily be extended to
provide even more statistics.

Currently only InnoDB has counters for Pages_% and Undo_% status.
Engine_time works for all engines.

Implementation details:

class ha_handler_stats holds all engine stats. This class is included
in the handler and THD classes.
While a query is running, all statistics are updated in the handler. In
close_thread_tables() the statistics are added to the THD.

handler::handler_stats is a pointer to where statistics should be
collected. This is set to point to handler::active_handler_stats if
stats are requested; if not, it is set to 0.
handler_stats also has a member, 'active', that is 1 if stats are
requested. This allows engines to avoid extra 'if's while updating
the statistics.
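
A sketch of how an engine bumps a counter through this interface (the
counter name is hypothetical):

  /* Engine side: collect only when stats were requested. */
  if (handler_stats)
    handler_stats->pages_read_count++;   /* hypothetical counter */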

Cloned or partition tables have the pointer set to the base table if
stats are requested.

There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Counters are incremented on local variables and accesses are inlined,
  so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each
  used handler. This is only 40 bytes, which should be negligible.
- For partition tables we have to loop over all partitions to update
  the handler_status as part of table_init(). This can be optimized in
  the future to only do it when log-slow-verbosity changes. For this to
  work we have to update handler_status for all opened partitions and
  also for all partitions opened in the future.

Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona Server, which
  has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are
  only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added THD argument to save_explain_data_intern().
- Switched arguments for save_explain_.*_data() to always have THD
  first (this generates better code, as other functions also have THD
  first).
2023-07-07 12:53:18 +03:00
Sergei Golubchik
cbabb95915 Merge branch '11.0' into 11.1 2023-06-05 20:15:15 +02:00
Oleg Smirnov
af0e0ad18d MDEV-30946 Index usage for DATE(datetime_column) = const does not work for DELETE and UPDATE
Add the date condition transformation to the execution paths of the
DELETE and UPDATE commands, so that, for example, a condition like
DATE(datetime_column) = const can use an index on datetime_column.

Strip excessive prepared statements from the test file.
2023-04-25 20:21:36 +07:00
Sergei Petrunia
c7fe8e51de Merge 10.11 into 11.0 2023-04-17 16:50:01 +03:00
Marko Mäkelä
656c2e18b1 Merge 10.10 into 10.11 2023-04-14 13:08:28 +03:00
Marko Mäkelä
44281b88f3 Merge 10.8 into 10.9 2023-04-14 11:32:36 +03:00
Marko Mäkelä
1d1e0ab2cc Merge 10.6 into 10.8 2023-04-12 15:50:08 +03:00