Commit graph

1543 commits

Author SHA1 Message Date
Oleksandr Byelkin
22d455612b Merge branch '10.8' into 10.9 2022-08-09 09:57:13 +02:00
Marko Mäkelä
9cbf8ccf29 Merge 10.7 into 10.8 2022-08-02 08:52:57 +03:00
Thirunarayanan Balathandayuthapani
3330f8d156 MDEV-28400 Leak in trx_mod_time_t::start_bulk_insert()
- Change partition unnecessarily does undo logging of all rows and
invokes bulk insert during DDL. It is better to avoid logging undo
records during the copy of the partition.
2022-08-01 14:44:55 +05:30
Marko Mäkelä
f53f64b7b9 Merge 10.8 into 10.9 2022-07-28 10:47:33 +03:00
Marko Mäkelä
f79cebb4d0 Merge 10.7 into 10.8 2022-07-28 10:33:26 +03:00
Marko Mäkelä
742e1c727f Merge 10.6 into 10.7 2022-07-27 18:26:21 +03:00
Marko Mäkelä
30914389fe Merge 10.5 into 10.6 2022-07-27 17:52:37 +03:00
Marko Mäkelä
098c0f2634 Merge 10.4 into 10.5 2022-07-27 17:17:24 +03:00
Oleksandr Byelkin
3bb36e9495 Merge branch '10.3' into 10.4 2022-07-27 11:02:57 +02:00
Nayuta Yanagisawa
2f3f1cd05b MDEV-26544 Assertion `part_share->auto_inc_initialized' failed in ha_partition::get_auto_increment on INSERT
The partition storage engine ignores the return (error) values of
handler::info(). As a result, a query that should be aborted is
not aborted, and the server then hits the assertion.
2022-07-05 23:07:49 +09:00
Marko Mäkelä
3c2a5ad3e8 Merge 10.7 into 10.8 2022-07-01 17:53:06 +03:00
Marko Mäkelä
3dff84cd15 Merge 10.6 into 10.7 2022-07-01 17:45:29 +03:00
Marko Mäkelä
62a20f8047 Merge 10.5 into 10.6 2022-07-01 15:24:50 +03:00
Marko Mäkelä
f09687094c Merge 10.4 into 10.5 2022-07-01 14:42:02 +03:00
Marko Mäkelä
392ee571c1 Merge 10.3 into 10.4 2022-07-01 13:10:36 +03:00
Marko Mäkelä
045771c050 Fix most clang-15 -Wunused-but-set-variable
Also, refactor trx_i_s_common_fill_table() to remove dead code.

Warnings about yynerrs in Bison-generated yyparse() will remain for now.
2022-07-01 09:48:36 +03:00
Marko Mäkelä
404d4820af Merge 10.8 into 10.9 2022-06-28 10:59:01 +03:00
Marko Mäkelä
9523986299 Merge 10.7 into 10.8 2022-06-28 10:06:00 +03:00
Marko Mäkelä
ac0af4ec4a Merge 10.6 into 10.7 2022-06-28 08:34:12 +03:00
Marko Mäkelä
87bd79b1e7 Merge 10.5 into 10.6 2022-06-27 10:59:31 +03:00
Marko Mäkelä
ea847cbeaf Merge 10.4 into 10.5 2022-06-27 10:51:20 +03:00
Marko Mäkelä
01d757036f Merge 10.3 into 10.4 2022-06-27 10:14:37 +03:00
Shunsuke Tokunaga
c4f65d8fed MDEV-21027 Assertion `part_share->auto_inc_initialized || !can_use_for_auto_inc_init()' failed in ha_partition::set_auto_increment_if_higher
ha_partition::set_auto_increment_if_higher expects that
part_share->auto_inc_initialized is true or that can_use_for_auto_inc_init()
is false (but, as the comment of this method says, it returns false
only if we use the Spider engine with a DROP TABLE or ALTER TABLE query).
However, part_share->auto_inc_initialized becomes true only after all
partitions are opened (since 6dce6aeceb).

Therefore, I added a conditional expression in order to read all
partitions when we execute REPLACE on a table that has an
AUTO_INCREMENT column.

Reviewed by: Nayuta Yanagisawa
Reviewed by: Alexey Botchkov
2022-06-16 13:28:24 +09:00
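
A minimal SQL sketch of the kind of statement involved above (table and column names are illustrative, not taken from the commit):

    CREATE TABLE t1 (
      pk INT AUTO_INCREMENT,
      val INT,
      PRIMARY KEY (pk)
    ) PARTITION BY HASH (pk) PARTITIONS 4;

    -- REPLACE needs the table's auto-increment state, which is only
    -- initialized once all partitions have been opened.
    REPLACE INTO t1 (pk, val) VALUES (1, 10);
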
Sergei Golubchik
bf2bdd1a1a Merge branch '10.8' into 10.9 2022-05-19 14:07:55 +02:00
Sergei Golubchik
443c2a715d Merge branch '10.7' into 10.8 2022-05-11 12:21:36 +02:00
Sergei Golubchik
fd132be117 Merge branch '10.6' into 10.7 2022-05-11 11:25:33 +02:00
Sergei Golubchik
3bc98a4ec4 Merge branch '10.5' into 10.6 2022-05-10 14:01:23 +02:00
Sergei Golubchik
ef781162ff Merge branch '10.4' into 10.5 2022-05-09 22:04:06 +02:00
Sergei Golubchik
a70a1cf3f4 Merge branch '10.3' into 10.4 2022-05-08 23:03:08 +02:00
Aleksey Midenkov
92bfc0e8c4 MDEV-17554 Auto-create new partition for system versioned tables with history partitioned by INTERVAL/LIMIT
:: Syntax change ::

Keyword AUTO enables history partition auto-creation.

Examples:

    CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;

    CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
    STARTS '2021-01-01 00:00:00' AUTO PARTITIONS 12;

    CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;

Or with explicit partitions:

    CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);

To disable or enable auto-creation one can use ALTER TABLE by adding
or removing AUTO from the partitioning specification:

    CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;

    # Disables auto-creation:
    ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR;

    # Enables auto-creation:
    ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;

If the rest of the partitioning specification is identical to the one in
CREATE TABLE, no repartitioning will be done (for details see MDEV-27328).

:: Description ::

Before executing a history-generating DML command (see the list of commands below),
add N history partitions, so that N is sufficient for the potentially
generated history. N > 1 may be required when history partitions are switched
by INTERVAL and current_timestamp is N intervals beyond the interval
boundary of the last history partition.

If the last history partition reaches or exceeds LIMIT records, then a new history
partition is created and selected as the working partition. According to
MDEV-28411 partitions cannot be switched (or created) while the command is
running. Thus LIMIT is not a strict limit, and the history partition
size must be planned as the LIMIT value plus the average amount of history one
DML command can generate.

Auto-creation is implemented by a synchronous fast_alter_partition_table() call
from the thread of the executed DML command before the command itself is run
(by a fallback-and-retry mechanism similar to the Discovery feature,
see Open_table_context).

The names for newly added partitions are generated like default partition names,
with the extension of MDEV-22155 (which avoids name clashes by advancing the
assignment counter to the next free-enough gap).

These DML commands can trigger auto-creation:

    DELETE (including multitable DELETE, excluding DELETE HISTORY)
    UPDATE (including multitable UPDATE)
    REPLACE (including REPLACE .. SELECT)
    INSERT .. ON DUPLICATE KEY UPDATE (including INSERT .. SELECT .. ODKU)
    LOAD DATA .. REPLACE

:: Bug fixes ::

MDEV-23642 Locking timeout caused by auto-creation affects original DML

    The reasons for this are:

    - Do not disrupt the main business process (the history is an auxiliary service);

    - The consequences are non-fatal (history is not lost, but goes into the wrong
      partition; fixed by a partitioning rebuild);

    - The application has more freedom to decide whether to fail in this case or
      not: it may read the warning info and find the corresponding error number;

    - While a non-failing command is easy for an application to handle and fail
      explicitly, the opposite is hard to handle: there are no automatic actions
      to fix a failed command and retry, DBA intervention is required, and until
      then the application is non-functional.

MDEV-23639 Auto-create does not work under LOCK TABLES or inside triggers

    Don't do tdc_remove_table() for OT_ADD_HISTORY_PARTITION because it is
    not possible in locked tables mode.

    LTM_LOCK_TABLES mode (and LTM_PRELOCKED_UNDER_LOCK_TABLES) works out
    of the box as fast_alter_partition_table() can reopen tables via
    locked_tables_list.

    In LTM_PRELOCKED we reopen and relock the table manually.

:: More fixes ::

* some_table_marked_for_reopen flag fix

  some_table_marked_for_reopen affects only the reopen of
  m_locked_tables. I.e. Locked_tables_list::reopen_tables() reopens only
  tables from m_locked_tables.

* Unused can_recover_from_failed_open() condition

  Can recover_from_failed_open() really be used after
  open_and_process_routine()?

:: Reviewed by ::

Sergei Golubchik <serg@mariadb.org>
2022-05-06 15:11:02 +03:00
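
A hedged sketch of a history-generating statement that can trigger the auto-creation described above (names and data are illustrative):

    CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;

    INSERT INTO t1 VALUES (1);
    -- UPDATE generates history; if the last history partition is already
    -- full, a new history partition is added before the statement runs.
    UPDATE t1 SET x = 2;
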
Aleksey Midenkov
ddc416c606 MDEV-20077 Warning on full history partition is delayed until next DML statement
Moved the LIMIT warning from vers_set_hist_part() to a new call,
vers_check_limit(), at the table unlock phase. At that point the
read_partitions bitmap is already pruned by the DML code (see
prune_partitions(), find_used_partitions()), so we have to set the
corresponding bits for the working history partition.

Also we don't do my_error(ME_WARNING|ME_ERROR_LOG), because at that
point it doesn't update the warning count, so the command reports 0 warnings
(but the warning list is still updated). Instead we do
push_warning_printf() and sql_print_warning() separately.

Under LOCK TABLES external_lock(F_UNLCK) is not executed. There is
start_stmt(), but no corresponding "stop_stmt()". So for that mode we
call vers_check_limit() directly from close_thread_tables().

Test results have been changed according to the new LIMIT and warning
printing algorithm. For convenience all LIMIT warnings are marked with
"You see warning above ^".

The TODO of MDEV-20345 is fixed. Now vers_history_generating() contains a
fine-grained list of DML commands that can generate history (and the TODO
mechanism worked well).
2022-04-29 13:31:42 +03:00
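
A minimal sketch of the new warning behavior described above (names and data are illustrative):

    CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 1
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);

    INSERT INTO t1 VALUES (1), (2);
    -- This DELETE fills the history partition beyond LIMIT; the warning is
    -- now reported with this statement instead of with the next DML statement.
    DELETE FROM t1;
    SHOW WARNINGS;
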
Aleksey Midenkov
ea2f09979f MDEV-28271 Assertion on TRUNCATE PARTITION for PARTITION BY SYSTEM_TIME
Like in MDEV-27217 vers_set_hist_part() for LIMIT depends on all
partitions selected in read_partitions. That bugfix just disabled
partition selection for DELETE with this check:

  if (table->pos_in_table_list &&
      table->pos_in_table_list->partition_names)
  {
    return HA_ERR_PARTITION_LIST;
  }

ALTER TABLE ... TRUNCATE PARTITION is a different story. First, it doesn't
update pos_in_table_list->partition_names, but
thd->lex->alter_info.partition_names. We cannot depend on that
since alter_info will be stale for DML. Second, we should not disable
TRUNCATE PARTITION here, to stay consistent with TRUNCATE TABLE
behavior.

Now we don't do vers_set_hist_part() for ALTER TABLE as this command
is not DML, so it does not produce history.
2022-04-29 13:31:41 +03:00
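
A minimal sketch of the statement that used to hit the assertion (names are illustrative):

    CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 100
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);

    -- ALTER TABLE is not DML and produces no history, so
    -- vers_set_hist_part() is no longer invoked for it.
    ALTER TABLE t1 TRUNCATE PARTITION p0;
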
Marko Mäkelä
133c2129cd Merge 10.7 into 10.8 2022-04-27 10:43:00 +03:00
Marko Mäkelä
638afc4acf Merge 10.6 into 10.7 2022-04-26 18:59:40 +03:00
Marko Mäkelä
fae0ccad6e Merge 10.5 into 10.6 2022-04-21 17:46:40 +03:00
Marko Mäkelä
620c55e708 Merge 10.4 into 10.5 2022-04-21 15:33:50 +03:00
Vlad Lesin
188aae65e4 MDEV-26224 InnoDB fails to remove AUTO_INCREMENT attribute
Reset dict_table_t::persistent_autoinc when inplace alter table is
committed successfully.
2022-04-21 15:23:21 +03:00
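
A hedged sketch of the scenario (names are illustrative; whether the inplace path is taken can depend on the server version and options):

    CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
    -- Removing the AUTO_INCREMENT attribute through an inplace ALTER should
    -- also reset dict_table_t::persistent_autoinc inside InnoDB.
    ALTER TABLE t1 MODIFY a INT NOT NULL, ALGORITHM=INPLACE;
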
Marko Mäkelä
394784095e Merge 10.3 into 10.4 2022-04-21 11:33:59 +03:00
Alexander Barkov
9d734cdd61 Merge remote-tracking branch 'origin/10.2' into 10.3 2022-04-14 11:50:34 +04:00
Nayuta Yanagisawa
27b5d814e2 MDEV-27065 Partitioning tables with custom data directories moves data back to default directory
The partitioning engine does not support the table-level DATA/INDEX
DIRECTORY specification.

If one creates a non-partitioned table with the DATA/INDEX DIRECTORY
option and then performs ALTER TABLE ... PARTITION BY on it, the
DATA/INDEX DIRECTORY specification of the old schema is ignored.

The behavior might be a bit surprising for users because the value
of a usual table option applies to all the partitions. Thus, we raise
a warning on such an ALTER TABLE ... PARTITION BY.
2022-04-08 16:49:10 +09:00
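
A minimal sketch of the case that now raises a warning (the directory path is illustrative):

    CREATE TABLE t1 (a INT) DATA DIRECTORY = '/mnt/data1';

    -- The partitioning engine ignores the table-level DATA DIRECTORY of the
    -- old schema; a warning is now raised instead of silently moving the
    -- data back to the default directory.
    ALTER TABLE t1 PARTITION BY HASH (a) PARTITIONS 2;
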
Daniel Black
88ce8a3d8b Merge 10.7 into 10.8 2022-03-25 15:06:56 +11:00
Daniel Black
e86986a157 Merge 10.6 into 10.7 2022-03-24 18:57:07 +11:00
Daniel Black
065f995e6d Merge branch 10.5 into 10.6 2022-03-18 12:17:11 +11:00
Daniel Black
b73d852779 Merge 10.4 to 10.5 2022-03-17 17:03:24 +11:00
Daniel Black
069139a549 Merge 10.3 to 10.4
extra2_read_len was resolved by keeping the implementation
in sql/table.cc and exposing it for use by ha_partition.cc.

Removed the identical implementation in unireg.h
(ref: bfed2c7d57)
2022-03-16 16:39:10 +11:00
Sergei Golubchik
bfed2c7d57 MDEV-27753 Incorrect ENGINE type of table after crash for CONNECT table
whenever possible, partitioning should use the full
partition plugin name, not the one-byte legacy code.

Normally, ha_partition can get the engine plugin from
table_share->default_part_plugin.

But in some cases, e.g. in DROP TABLE, the table isn't
opened, table_share is NULL, and ha_partition has to parse
the frm, much like dd_frm_type() does.

temporary_tables.cc, sql_table.cc:

When dropping a table, it must be deleted in the engine
first, then the frm file, because the frm can be the only true
source of metadata that the engine might need for DROP.

table.cc:

when opening a partitioned table, if the engine for
partitions is not found, do not fall back to MyISAM.
2022-03-14 08:55:59 +01:00
Oleksandr Byelkin
4fb2cb1a30 Merge branch '10.7' into 10.8 2022-02-04 14:50:25 +01:00
Oleksandr Byelkin
9ed8deb656 Merge branch '10.6' into 10.7 2022-02-04 14:11:46 +01:00
Oleksandr Byelkin
f5c5f8e41e Merge branch '10.5' into 10.6 2022-02-03 17:01:31 +01:00
Oleksandr Byelkin
cf63eecef4 Merge branch '10.4' into 10.5 2022-02-01 20:33:04 +01:00
Oleksandr Byelkin
a576a1cea5 Merge branch '10.3' into 10.4 2022-01-30 09:46:52 +01:00
Nayuta Yanagisawa
c5d09f731a MDEV-5271 Support engine-defined attributes per partition
Make it possible to specify engine-defined attributes on partitions
as well as tables.

If an engine-defined attribute is only specified at the table level,
it applies to all the partitions in the table.
This is a backward-compatible behavior.

If the same attribute is specified both at the table level and the
partition level, the per-partition one takes precedence.
So, we can consider per-table attributes as default values.

One cannot specify engine-defined attributes on subpartitions.

Implementation details:

* We store per-partition attributes in the partition_element class
  because we already have the part_comment field, which is for
  per-partition comments.

* In the case of ALTER TABLE statements, the partition_elements in
  table->part_info are set up by mysql_unpack_partition().
  So, we parse per-partition attributes after the call to that function.
2022-01-24 19:26:09 +09:00
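
A hedged sketch of the precedence rule described above, assuming Spider-style engine-defined options (REMOTE_SERVER, REMOTE_TABLE) added in the same development series; actual attribute names depend on the engine and version:

    CREATE TABLE t1 (a INT) ENGINE=Spider
      REMOTE_SERVER="srv1" REMOTE_TABLE="t_default"
    PARTITION BY LIST (a) (
      -- p1 inherits the table-level attributes (they act as defaults)
      PARTITION p1 VALUES IN (1),
      -- the per-partition value takes precedence over the table-level one
      PARTITION p2 VALUES IN (2) REMOTE_TABLE="t_special"
    );
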
Aleksey Midenkov
7c61fb2fe2 MDEV-27217 ha_partition::start_stmt() ignored error fix
ha_partition::start_stmt() continued processing in an error state. Though
there are no known bug cases yet, it seems the break was made incorrect by
f93a2a3b3b (2016), while the original break was correct since cd483c5520
(2005).
2022-01-13 23:35:17 +03:00
Aleksey Midenkov
4d5ae2b325 MDEV-27217 DELETE partition selection doesn't work for history partitions
LIMIT history switching requires a number of history partitions to
be marked for read: from the first to the last non-empty one, plus one
empty one. The least we can do is to fail with an error message if the
needed partition was not marked for read. As this is the handler interface,
we require a new handler error code to display a user-friendly error message.

Switching by INTERVAL works out of the box with the
ER_ROW_DOES_NOT_MATCH_GIVEN_PARTITION_SET error.
2022-01-13 23:35:16 +03:00
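
A minimal sketch of a DELETE with explicit partition selection that now fails cleanly (names are illustrative):

    CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 100
    (PARTITION p0 HISTORY, PARTITION p1 HISTORY, PARTITION pn CURRENT);

    -- Explicit partition selection leaves the history partitions needed for
    -- LIMIT switching unmarked, so this is rejected with the new handler
    -- error instead of silently misbehaving.
    DELETE FROM t1 PARTITION (pn) WHERE x = 1;
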
Marko Mäkelä
06988bdcaa Merge 10.6 into 10.7 2021-11-09 09:40:29 +02:00
Marko Mäkelä
25ac047baf Merge 10.5 into 10.6 2021-11-09 09:11:50 +02:00
Marko Mäkelä
9c18b96603 Merge 10.4 into 10.5 2021-11-09 08:50:33 +02:00
Marko Mäkelä
47ab793d71 Merge 10.3 into 10.4 2021-11-09 08:40:14 +02:00
Aleksey Midenkov
1be39f86cc MDEV-25552 system versioned partitioned by LIMIT tables break CHECK TABLE
Replaced the HA_ADMIN_NOT_IMPLEMENTED error code with HA_ADMIN_OK. Now CHECK
TABLE does not fail because of the unsupported check_misplaced_rows(). The
admin message is not needed either.

The test case is the same as for MDEV-21011 (a7cf0db3d8); the result has
been changed.
2021-11-02 04:52:03 +03:00
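
A minimal sketch of the now-working check (names and data are illustrative):

    CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 1
    (PARTITION p0 HISTORY, PARTITION p1 HISTORY, PARTITION pn CURRENT);

    INSERT INTO t1 VALUES (1), (2);
    DELETE FROM t1;
    -- Previously failed because check_misplaced_rows() is not supported for
    -- SYSTEM_TIME LIMIT partitioning; now it reports OK.
    CHECK TABLE t1;
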
Marko Mäkelä
42ad4fa346 Merge 10.6 into 10.7 2021-09-06 16:23:49 +03:00
Marko Mäkelä
7730dd392b Merge 10.5 into 10.6 2021-09-06 10:31:32 +03:00
Monty
49ae199604 Added support for ANALYZE TABLE to S3 tables
Other things
- Cleaned up error messages for CHECK, REPAIR and OPTIMIZE
2021-09-01 13:47:02 +03:00
Sergei Golubchik
0299ec29d4 cleanup: MY_BITMAP mutex
of about a hundred users of MY_BITMAP, only two were using its
built-in mutex, and only one of those two actually needed it.

Remove the mutex from MY_BITMAP, remove all associated conditions
and checks in bitmap functions. Use an external LOCK_temp_pool
mutex and temp_pool_set_next/temp_pool_clear_bit accessors.

Remove bitmap_init/bitmap_free, always use my_* versions.
2021-08-26 23:39:52 +02:00
Marko Mäkelä
f3fcf5f45c Merge 10.5 to 10.6 2021-08-19 12:25:00 +03:00
Marko Mäkelä
4a25957274 Merge 10.4 into 10.5 2021-08-18 18:22:35 +03:00
Marko Mäkelä
f84e28c119 Merge 10.3 into 10.4 2021-08-18 16:51:52 +03:00
Aleksey Midenkov
dc3a350df6 MDEV-18734 ASAN additional fix for 10.3
Do swap_blobs() for new partition_read_multi_range mode.
2021-08-18 13:31:56 +03:00
Marko Mäkelä
cd65845a0e Merge 10.2 into 10.3
MDEV-18734 FIXME: vcol.partition triggers ASAN heap-use-after-free
2021-08-18 12:26:58 +03:00
Aleksey Midenkov
160d97a4aa MDEV-18734 ASAN heap-use-after-free upon sorting by blob column from partitioned table
ha_partition stores records in the m_ordered_rec_buffer array and uses
it for the priority queue in ordered index scans. When the records are
restored from the array, the blob buffers may already be freed or rewritten.

The solution is to take temporary ownership of the cached blob buffers via
String::swap(). When the record is restored from m_ordered_rec_buffer
the ownership is returned to the table fields.

Cleanups:

init_record_priority_queue(): removed a needless !m_ordered_rec_buffer
check as there is the same assertion a few lines before.

dbug_print_row() for an arbitrary row pointer
2021-08-05 23:48:02 +03:00
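
A hedged sketch of the kind of query affected (names and data are illustrative; whether the ordered index scan is chosen depends on the optimizer):

    CREATE TABLE t1 (
      pk INT PRIMARY KEY,
      b BLOB,
      KEY (b(64))
    ) PARTITION BY HASH (pk) PARTITIONS 2;

    INSERT INTO t1 VALUES (1, REPEAT('a', 100)), (2, REPEAT('b', 100));
    -- An ordered index scan merges rows from several partitions through
    -- m_ordered_rec_buffer; the cached blob buffers must stay owned until
    -- each row is handed back.
    SELECT pk, b FROM t1 FORCE INDEX (b) ORDER BY b;
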
Oleksandr Byelkin
6efb5e9f5e Merge branch '10.5' into 10.6 2021-08-02 10:11:41 +02:00
Oleksandr Byelkin
ae6bdc6769 Merge branch '10.4' into 10.5 2021-07-31 23:19:51 +02:00
Oleksandr Byelkin
7841a7eb09 Merge branch '10.3' into 10.4 2021-07-31 22:59:58 +02:00
Sergei Golubchik
6190a02f35 Merge branch '10.2' into 10.3 2021-07-21 20:11:07 +02:00
Nikita Malyavin
7d9ba57da4 [1/2] MDEV-18166 ASSERT_COLUMN_MARKED_FOR_READ failed on tables with vcols
This is the 10.2+ part of the Jira task.

The two bugs regarding virtual column marking have been fixed:

1. UPDATE of a partitioned table, where the optimizer has chosen a
 secondary index to make a filesort;
2. INSERT into a table with a nonblob field generated from a blob, with
 binlog enabled and binlog_row_image=noblob.

3. DELETE from a view on a table with virtual column.

Generally the assertion happens from the update_virtual_fields() call.

These bugs are root-caused by missing field marking for dependent fields
of a virtual column.

Therefore the fix is: mark all the fields involved in the vcol expression by
calling field->register_field_in_read_map() instead of just setting a single
bit.

3 was reproducible only on 10.4+; however, the problem might have just been
invisible in the earlier versions. The fix is applicable to 10.2-10.3 as
well.
2021-07-12 22:00:39 +03:00
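
A hedged sketch of case 1 above, an UPDATE of a partitioned table ordered by a secondary index on a virtual column (names and data are illustrative):

    CREATE TABLE t1 (
      pk INT PRIMARY KEY,
      a INT,
      v INT AS (a + 1) VIRTUAL,
      KEY (v)
    ) PARTITION BY HASH (pk) PARTITIONS 2;

    INSERT INTO t1 (pk, a) VALUES (1, 1), (2, 2);
    -- The base fields that v depends on (here: a) must be marked for read
    -- when the index on v is used to order the update.
    UPDATE t1 SET a = a + 10 ORDER BY v LIMIT 1;
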
Marko Mäkelä
a722ee88f3 Merge 10.5 into 10.6 2021-06-01 11:39:38 +03:00
Marko Mäkelä
9c7a456a92 Merge 10.4 into 10.5 2021-06-01 10:38:09 +03:00
Marko Mäkelä
77d8da57d7 Merge 10.3 into 10.4 2021-06-01 09:14:59 +03:00
Marko Mäkelä
950a220060 Merge 10.2 into 10.3 2021-06-01 08:40:59 +03:00
Marko Mäkelä
ab87fc6c7a Cleanup: Remove handler::update_table_comment()
The only call of the virtual member function
handler::update_table_comment() was removed in
commit 82d28fada7 (MySQL 5.5.53)
but the implementation was not removed.

The only non-trivial implementation was for InnoDB. The information
is now returned via handler::get_foreign_key_create_info() and
ha_statistics::delete_length.
2021-05-27 09:31:19 +03:00
Monty
83e529eced MDEV-18465 Logging of DDL statements during backup
Many of the changes were needed to be able to collect and print the engine
name and table version ids in the ddl log.
2021-05-19 22:54:13 +02:00
Monty
7762ee5dbe MDEV-25180 Atomic ALTER TABLE
MDEV-25604 Atomic DDL: Binlog event written upon recovery does not
           have default database

The purpose of this task is to ensure that ALTER TABLE is atomic even if
the MariaDB server is killed at any point during the alter table.
This means that either the ALTER TABLE succeeds (including that triggers,
the status tables and the binary log are updated) or things are
reverted to their original state.

If the server crashes before the new version is fully up to date and
committed, it will revert to the original table and remove all
temporary files and tables.
If the new version is committed, crash recovery will use the new version,
and update triggers, the status tables and the binary log.
The one exception is ALTER TABLE .. RENAME .., where no changes are done
to the table definition. This one will work as RENAME and roll back unless
the whole statement completed, including updating the binary log (if
enabled).

Other changes:
- Added handlerton->check_version() function to allow the ddl recovery
  code to check, in case of inplace alter table, if the table in the
  storage engine is of the new or old version.
- Added handler->table_version() so that an engine can report the current
  version of the table. This should be changed each time the table
  definition changes.
- Added  ha_signal_ddl_recovery_done() and
  handlerton::signal_ddl_recovery_done() to inform all handlers when
  ddl recovery has been done. (Needed by InnoDB).
- Added handlerton call inplace_alter_table_committed, to signal engine
  that ddl_log has been closed for the alter table query.
- Added a new handlerton flag
  HTON_REQUIRES_NOTIFY_TABLEDEF_CHANGED_AFTER_COMMIT to signal when we
  should call hton->notify_tabledef_changed() during
  mysql_inplace_alter_table. This was required as MyRocks and InnoDB
  needed the call at different times.
- Added function server_uuid_value() to be able to generate a temporary
  xid when ddl recovery writes the query to the binary log. This is
  needed to be able to handle crashes during ddl log recovery.
- Moved freeing of the frm definition to end of mysql_alter_table() to
  remove duplicate code and have a common exit strategy.

-------
InnoDB part of atomic ALTER TABLE
(Implemented by Marko Mäkelä)
innodb_check_version(): Compare the saved dict_table_t::def_trx_id
to determine whether an ALTER TABLE operation was committed.

We must correctly recover dict_table_t::def_trx_id for this to work.
Before purge removes any trace of DB_TRX_ID from system tables, it
will make an effort to load the user table into the cache, so that
the dict_table_t::def_trx_id can be recovered.

ha_innobase::table_version(): return garbage, or the trx_id that would
be used for committing an ALTER TABLE operation.

In InnoDB, table names starting with #sql-ib will remain special:
they will be dropped on startup. This may be revisited later in
MDEV-18518 when we implement proper undo logging and rollback
for creating or dropping multiple tables in a transaction.

Table names starting with #sql will retain some special meaning:
dict_table_t::parse_name() will not consider such names for
MDL acquisition, and dict_table_rename_in_cache() will treat such
names specially when handling FOREIGN KEY constraints.

Simplify InnoDB DROP INDEX.
Prevent purge wakeup

To ensure that dict_table_t::def_trx_id will be recovered correctly
in case the server is killed before ddl_log_complete(), we will block
the purge of any history in SYS_TABLES, SYS_INDEXES, SYS_COLUMNS
between ha_innobase::commit_inplace_alter_table(commit=true)
(purge_sys.stop_SYS()) and purge_sys.resume_SYS().
The completion callback purge_sys.resume_SYS() must be between
ddl_log_complete() and MDL release.

--------

MyRocks support for atomic ALTER TABLE
(Implemented by Sergei Petrunia)

Implement these SE API functions:
- ha_rocksdb::table_version()
- hton->check_version = rocksdb_check_version
  MyRocks data dictionary now stores a table version for each table.
  (Absence of a table version record is interpreted as table_version=0,
  that is, no upgrade changes are needed)
- For inplace alter table of a partitioned table, call the underlying
  handlerton when checking if the table is ok. This assumes that the
  partition engine commits all changes at once.
2021-05-19 22:54:13 +02:00
Monty
08bc062e3c Remove some usage of Check_level_instant_set and Sql_mode_save
The reasons for the removal are:
- Generates more code
  - Storing and retrieving THD
  - Causes extra code and data to be generated to handle possibly thrown
    exceptions (which never happens in MariaDB code)
- Uses more stack space

Other things:
- Changed convert_const_to_int() to use item->save_in_field_no_warnings(),
  which made the code shorter and simpler.
- Removed unneeded code in Sp_handler::sp_create_routine()
- Added thd as an argument to store_key.copy() to make the function simpler
- Added thd as an argument to some subselect* constructors that inherit
  from Item_subselect.
2021-05-19 22:54:12 +02:00
Monty
188b0b99cf Rename all external ddl_log function to start with ddl_log_ prefix
Rename deactivate_ddl_log_entry to ddl_log_increment_phase
2021-05-19 22:54:11 +02:00
Monty
02b6cef45e Move all ddl log code to ddl_log.cc and ddl_log.h
Part of preparation for: MDEV-17567 Atomic DDL

No notable code changes except moving code around
2021-05-19 22:54:11 +02:00
Monty
b6ff139aa3 Reduce usage of strlen()
Changes:
- To detect automatic strlen() I removed the methods in String that
  uses 'const char *' without a length:
  - String::append(const char*)
  - Binary_string(const char *str)
  - String(const char *str, CHARSET_INFO *cs)
  - append_for_single_quote(const char *)
  All usage of append(const char*) is changed to either use
  String::append(char), String::append(const char*, size_t length) or
  String::append(LEX_CSTRING)
- Added STRING_WITH_LEN() around constant string arguments to
  String::append()
- Added overflow argument to escape_string_for_mysql() and
  escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow.
  This was needed as most usage of the above functions never tested the
  result for -1 and would have given wrong results or crashes in case
  of overflows.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING.
  Changed all Item_func::func_name()'s to func_name_cstring()'s.
  The old Item_func_or_sum::func_name() is now an inline function that
  returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return
  LEX_CSTRING.
- Changed for some functions the name argument from const char * to
  to const LEX_CSTRING &:
  - Item::Item_func_fix_attributes()
  - Item::check_type_...()
  - Type_std_attributes::agg_item_collations()
  - Type_std_attributes::agg_item_set_converter()
  - Type_std_attributes::agg_arg_charsets...()
  - Type_handler_hybrid_field_type::aggregate_for_result()
  - Type_handler_geometry::check_type_geom_or_binary()
  - Type_handler::Item_func_or_sum_illegal_param()
  - Predicant_to_list_comparator::add_value_skip_null()
  - Predicant_to_list_comparator::add_value()
  - cmp_item_row::prepare_comparators()
  - cmp_item_row::aggregate_row_elements_for_comparison()
  - Cursor_ref::print_func()
- Removed String_space() as it was only used in one case, and that case
  could be simplified to not use String_space(), thanks to the fixed
  my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
  - NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to
  clarify what the function really does.
- Rename of protocol function:
  bool store(const char *from, CHARSET_INFO *cs) to
  bool store_string_or_null(const char *from, CHARSET_INFO *cs).
  This was done both to clarify the difference between this 'store' function
  and to make it easier to find suboptimal usage of store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to instead be of type LEX_CSTRING.
- class Item_func_units now used LEX_CSTRING for name.

Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character
  in the prompt would cause some part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to
  append is known or easily obtained but was not used.
- Removed some unneeded 'virtual' definitions for functions that were
  inherited from the parent. I added override to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. The old code
  could cause memory overruns.
- Simplified some loops when adding char * to a String with delimiters.
2021-05-19 22:27:48 +02:00
Vicențiu Ciorbaru
13cf8f5e9a cleanup: Refactor select_limit in select lex
Replace
  * select_lex::offset_limit
  * select_lex::select_limit
  * select_lex::explicit_limit
with select_lex::Lex_select_limit

The Lex_select_limit already existed with the same elements and was used
by the yacc parser.

This commit is in preparation for FETCH FIRST implementation, as it
simplifies a lot of the code.

Additionally, the parser is simplified by making use of the stack to
return Lex_select_limit objects.

Cleanup of init_query() too. Removes explicit_limit= 0 as it's done a bit later
in init_select() with limit_params.empty()
2021-04-21 14:08:58 +03:00
Marko Mäkelä
4930f9c94b Merge 10.5 into 10.6 2021-04-21 11:45:00 +03:00
Alexey Botchkov
22e0a317be The ha_partition::table_type() method was just never called before. 2021-04-21 10:21:44 +04:00
Marko Mäkelä
80ed136e6d Merge 10.4 into 10.5 2021-04-21 09:01:01 +03:00
Monty
031f11717d Fix all warnings given by UBSAN
The easiest way to compile and test the server with UBSAN is to run:
./BUILD/compile-pentium64-ubsan
and then run mysql-test-run.
After this commit, one should be able to run this without any UBSAN
warnings. There is still a few compiler warnings that should be fixed
at some point, but these do not expose any real bugs.

The 'special' cases where we disable, suppress or circumvent UBSAN are:
- ref10 source (as here we intentionally do some shifts that UBSAN
  complains about).
- x86 version of optimized int#korr() methods. UBSAN does not like unaligned
  memory access of integers.  Fixed by using byte_order_generic.h when
  compiling with UBSAN
- We use smaller thread stack with ASAN and UBSAN, which forced me to
  disable a few tests that prints the thread stack size.
- Verifying class types does not work for shared libraries. I added
  suppression in mysql-test-run.pl for this case.
- Added '#ifdef WITH_UBSAN' when using integer arithmetic where it is
  safe to have overflows (two cases, in item_func.cc).

Things fixed:
- Don't left shift signed values
  (byte_order_generic.h, mysqltest.c, item_sum.cc and many more)
- Don't assign non-existing values to enum variables.
- Ensure that bool and enum values are properly initialized in
  constructors.  This was needed as UBSAN checks that these types have
  correct values when one copies an object.
  (gcalc_tools.h, ha_partition.cc, item_sum.cc, partition_element.h ...)
- Ensure we do not call handler functions on unallocated objects or
  deleted objects.
  (events.cc, sql_acl.cc).
- Fixed bugs in Item_sp::Item_sp() where we did not call constructor
  on Query_arena object.
- Fixed several casts of objects to an incompatible class!
  (Item.cc, Item_buff.cc, item_timefunc.cc, opt_subselect.cc, sql_acl.cc,
   sql_select.cc ...)
- Ensure we do not do integer arithmetic that causes over or underflows.
  This includes also ++ and -- of integers.
  (Item_func.cc, Item_strfunc.cc, item_timefunc.cc, sql_base.cc ...)
- Added JSON_VALUE_UNITIALIZED to json_value_types and ensure that
  value_type is initialized to this instead of to -1, which is not a valid
  enum value for json_value_types.
- Ensure we do not call memcpy() when second argument could be null.
- Fixed that Item_func_str::make_empty_result() creates an empty string
  instead of a null string (safer as it ensures we do not do arithmetic
  on null strings).

Other things:

- Changed struct st_position to an OBJECT and added an initialization
  function to it to ensure that we do not copy or use uninitialized
  members. The change to a class was also motivated by the fact that we used
  "struct st_position" and POSITION randomly through the code, which was
  confusing.
- Notably big rewrite in sql_acl.cc to avoid using deleted objects.
- Changed in sql_partition to use '^' instead of '-'. This is safe as
  the operator is either 0 or 0x8000000000000000ULL.
- Added check for select_nr < INT_MAX in JOIN::build_explain() to
  avoid bug when get_select() could return NULL.
- Reordered elements in POSITION for better alignment.
- Changed sql_test.cc::print_plan() to use pointers instead of objects.
- Fixed a bug in find_set() where we could execute '1 << -1'.
- Added variable have_sanitizer, used by mtr.  (This variable was before
  only in 10.5 and up).  It can now have one of two values:
  ASAN or UBSAN.
- Moved ~Archive_share() from ha_archive.cc to ha_archive.h and marked
  it virtual. This was an effort to get UBSAN to work with loaded storage
  engines. I kept the change as the new place is better.
- Added in CONNECT engine COLBLK::SetName(), to get around a wrong cast
  in tabutil.cpp.
- Added HAVE_REPLICATION around usage of rgi_slave, to get embedded
  server to compile with UBSAN. (Patch from Marko).
- Added #ifdef for powerpc64 to avoid a bug in old gcc versions related
  to integer arithmetic.

Changes that should not be needed but had to be done to suppress warnings
from UBSAN:

- Added static_cast<uint16_t> around shifts to get rid of a LOT of
  compiler warnings when using UBSAN.
- Had to change some '/' of 2 base integers to shift to get rid of
  some compile time warnings.

Reviewed by:
- Json changes: Alexey Botchkov
- Charset changes in ctype-uca.c: Alexander Barkov
- InnoDB changes & Embedded server: Marko Mäkelä
- sql_acl.cc changes: Vicențiu Ciorbaru
- build_explain() changes: Sergey Petrunia
2021-04-20 12:30:09 +03:00
Daniel Black
058484687a Add TL_FIRST_WRITE in SQL layer for determining R/W
Use < TL_FIRST_WRITE for determining a READ transaction.

Use TL_FIRST_WRITE as the relative operator replacing TL_WRITE_ALLOW_WRITE
as the minimum WRITE lock type.
2021-04-08 16:51:36 +10:00
Marko Mäkelä
03ff588d15 Merge 10.5 into 10.6 2021-03-05 16:05:47 +02:00
Marko Mäkelä
10d544aa7b Merge 10.4 into 10.5 2021-03-05 12:54:43 +02:00
Marko Mäkelä
fcc9f8b10c Remove unused HA_EXTRA_FAKE_START_STMT
This is a fixup for commit f06a0b5338.
2021-03-05 10:40:16 +02:00
Marko Mäkelä
94b4578704 Merge 10.5 into 10.6 2021-02-17 19:39:05 +02:00
Sergei Golubchik
25d9d2e37f Merge branch 'bb-10.4-release' into bb-10.5-release 2021-02-15 16:43:15 +01:00
Sergei Golubchik
00a313ecf3 Merge branch 'bb-10.3-release' into bb-10.4-release
Note: the fix for "MDEV-23328 Server hang due to Galera lock conflict resolution"
was null-merged. The 10.4 version of the fix is coming up separately.
2021-02-12 17:44:22 +01:00
Marko Mäkelä
1110beccd4 Merge 10.5 into 10.6 2021-02-02 15:15:53 +02:00
Marko Mäkelä
6d1f1b61b5 MDEV-24564 Statistics are lost after ALTER TABLE
Ever since commit 007f68c37f,
ALTER TABLE no longer invokes handler::open() after
handler::commit_inplace_alter_table().

ha_innobase::reload_statistics(): Reload or recompute statistics
after ALTER TABLE.

innodb_notify_tabledef_changed(): A new function to invoke
ha_innobase::reload_statistics().

handlerton::notify_tabledef_changed(): Add the parameter handler*
so that ha_innobase::reload_statistics() can be invoked.

ha_partition::notify_tabledef_changed(),
partition_notify_tabledef_changed(): Pass through the call
to any partitions or subpartitions.

This is based on code that was supplied by Monty.
2021-01-28 14:15:01 +02:00
Nikita Malyavin
21809f9a45 MDEV-17556 Assertion `bitmap_is_set_all(&table->s->all_set)' failed
The assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from table with index on virtual column.

This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map, the latter is a raw
 bitmap.
* read_set, write_set and vcol_set of TABLE are the pointers to MY_BITMAP
* The rest of MY_BITMAPs are stored in TABLE and TABLE_SHARE
* The pointers to the stored MY_BITMAPs, like orig_read_set etc, and
 sometimes all_set and tmp_set, are assigned to the pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
 directly with all_set.bitmap
* Sometimes even bitmaps are directly modified, like in
TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.

The last three bullets in the list, when used together (which is mostly
always) make the program flow cumbersome and impossible to follow,
notwithstanding the errors they cause, like this MDEV-17556, where tmp_set
pointer was assigned to read_set, write_set and vcol_set, then its bitmap
was substituted with all_set.bitmap by dbug_tmp_use_all_columns() call,
and then bitmap_clear_all(&tmp_set) was applied to all this.

To untangle this knot, the rule should be applied:
* Never substitute bitmaps! This patch is about this.
 orig_*, all_set bitmaps are never substituted already.

This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
 to accept MY_BITMAP** and to return MY_BITMAP * instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
 MY_BITMAP* instead of my_bitmap_map*

These functions now will substitute read_set/write_set/vcol_set directly,
and won't touch underlying bitmaps.
2021-01-27 00:50:55 +10:00
Marko Mäkelä
3cef4f8f0f MDEV-515 Reduce InnoDB undo logging for insert into empty table
We implement an idea that was suggested by Michael 'Monty' Widenius
in October 2017: When InnoDB is inserting into an empty table or partition,
we can write a single undo log record TRX_UNDO_EMPTY, which will cause
ROLLBACK to clear the table.

For this to work, the insert into an empty table or partition must be
covered by an exclusive table lock that will be held until the transaction
has been committed or rolled back, or the INSERT operation has been
rolled back (and the table is empty again), in lock_table_x_unlock().

Clustered index records that are covered by the TRX_UNDO_EMPTY record
will carry DB_TRX_ID=0 and DB_ROLL_PTR=1<<55, and thus they cannot
be distinguished from what MDEV-12288 leaves behind after purging the
history of row-logged operations.

Concurrent non-locking reads must be adjusted: If the read view was
created before the INSERT into an empty table, then we must continue
to imagine that the table is empty, and not try to read any records.
If the read view was created after the INSERT was committed, then
all records must be visible normally. To implement this, we introduce
the field dict_table_t::bulk_trx_id.

This special handling only applies to the very first INSERT statement
of a transaction for the empty table or partition. If a subsequent
statement in the transaction is modifying the initially empty table again,
we must enable row-level undo logging, so that we will be able to
roll back to the start of the statement in case of an error (such as
duplicate key).

INSERT IGNORE will continue to use row-level logging and locking, because
implementing it would require the ability to roll back the latest row.
Since the undo log that we write only allows us to roll back the entire
statement, we cannot support INSERT IGNORE. We will introduce a
handler::extra() parameter HA_EXTRA_IGNORE_INSERT to indicate to storage
engines that INSERT IGNORE is being executed.

In many test cases, we add an extra record to the table, so that during
the 'interesting' part of the test, row-level locking and logging will
be used.

Replicas will continue to use row-level logging and locking until
MDEV-24622 has been addressed. Likewise, this optimization will be
disabled in Galera cluster until MDEV-24623 enables it.

dict_table_t::bulk_trx_id: The latest active or committed transaction
that initiated an insert into an empty table or partition.
Protected by exclusive table lock and a clustered index leaf page latch.

ins_node_t::bulk_insert: Whether bulk insert was initiated.

trx_t::mod_tables: Use C++11 style accessors (emplace instead of insert).
Unlike earlier, this collection will cover also temporary tables.

trx_mod_table_time_t: Add start_bulk_insert(), end_bulk_insert(),
is_bulk_insert(), was_bulk_insert().

trx_undo_report_row_operation(): Before accessing any undo log pages,
invoke trx->mod_tables.emplace() in order to determine whether undo
logging was disabled, or whether this is the first INSERT and we are
supposed to write a TRX_UNDO_EMPTY record.

row_ins_clust_index_entry_low(): If we are inserting into an empty
clustered index leaf page, set the ins_node_t::bulk_insert flag for
the subsequent trx_undo_report_row_operation() call.

lock_rec_insert_check_and_lock(), lock_prdt_insert_check_and_lock():
Remove the redundant parameter 'flags' that can be checked in the caller.

btr_cur_ins_lock_and_undo(): Simplify the logic. Correctly write
DB_TRX_ID,DB_ROLL_PTR after invoking trx_undo_report_row_operation().

trx_mark_sql_stat_end(), ha_innobase::extra(HA_EXTRA_IGNORE_INSERT),
ha_innobase::external_lock(): Invoke trx_t::end_bulk_insert() so that
the next statement will not be covered by table-level undo logging.

ReadView::changes_visible(trx_id_t) const: New accessor for the case
where the trx_id_t is not read from a potentially corrupted index page
but directly from the memory. In this case, we can skip a sanity check.

row_sel(), row_sel_try_search_shortcut(), row_search_mvcc():
row_sel_try_search_shortcut_for_mysql(),
row_merge_read_clustered_index(): Check dict_table_t::bulk_trx_id.

row_sel_clust_sees(): Replaces lock_clust_rec_cons_read_sees().

lock_sec_rec_cons_read_sees(): Replaced with lower-level code.

btr_root_page_init(): Refactored from btr_create().

dict_index_t::clear(), dict_table_t::clear(): Empty an index or table,
for the ROLLBACK of an INSERT operation.

ROW_T_EMPTY, ROW_OP_EMPTY: Note a concurrent ROLLBACK of an INSERT
into an empty table.

This is joint work with Thirunarayanan Balathandayuthapani,
who created a working prototype.
Thanks to Matthias Leich for extensive testing.
2021-01-25 18:41:27 +02:00
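
A hedged SQL sketch of when the table-level undo logging applies (names are illustrative; seq_1_to_1000 assumes the built-in Sequence engine):

    CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;

    BEGIN;
    -- First INSERT into the empty table: covered by a single TRX_UNDO_EMPTY
    -- record, so ROLLBACK would simply clear the table.
    INSERT INTO t1 SELECT seq FROM seq_1_to_1000;
    -- A later statement modifying the same table in this transaction
    -- switches back to row-level undo logging.
    INSERT INTO t1 VALUES (1001);
    COMMIT;

    -- INSERT IGNORE keeps using row-level logging and locking.
    INSERT IGNORE INTO t1 VALUES (1);
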
Nikita Malyavin
e25623e78a MDEV-17556 Assertion `bitmap_is_set_all(&table->s->all_set)' failed
The assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from table with index on virtual column.

This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map, the latter is a raw
 bitmap.
* read_set, write_set and vcol_set of TABLE are the pointers to MY_BITMAP
* The rest of MY_BITMAPs are stored in TABLE and TABLE_SHARE
* The pointers to the stored MY_BITMAPs, like orig_read_set etc, and
 sometimes all_set and tmp_set, are assigned to the pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
 directly with all_set.bitmap
* Sometimes even bitmaps are directly modified, like in
TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.

The last three bullets in the list, when used together (which is mostly
always) make the program flow cumbersome and impossible to follow,
notwithstanding the errors they cause, like this MDEV-17556, where tmp_set
pointer was assigned to read_set, write_set and vcol_set, then its bitmap
was substituted with all_set.bitmap by dbug_tmp_use_all_columns() call,
and then bitmap_clear_all(&tmp_set) was applied to all this.

To untangle this knot, the rule should be applied:
* Never substitute bitmaps! This patch is about this.
 orig_*, all_set bitmaps are never substituted already.

This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
 to accept MY_BITMAP** and to return MY_BITMAP * instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
 MY_BITMAP* instead of my_bitmap_map*

These functions now will substitute read_set/write_set/vcol_set directly,
and won't touch underlying bitmaps.
2021-01-08 16:04:29 +10:00
Marko Mäkelä
6a1e655cb0 Merge 10.4 into 10.5 2020-12-02 18:29:49 +02:00
Marko Mäkelä
589cf8dbf3 Merge 10.3 into 10.4 2020-12-01 19:51:14 +02:00
Alexey Botchkov
75e7132fca MDEV-21842 auto_increment does not increment with compound primary key on partitioned table.
The idea of this fix is that it's enough to prevent
next_auto_inc_val from incrementing on error to fix this problem
and also MDEV-17333.
So this patch basically reverts the existing fix for MDEV-17333.
2020-11-23 14:12:30 +04:00
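
A minimal sketch of the schema involved (names and data are illustrative):

    CREATE TABLE t1 (
      a INT NOT NULL AUTO_INCREMENT,
      b INT NOT NULL,
      PRIMARY KEY (a, b)
    ) PARTITION BY HASH (b) PARTITIONS 2;

    -- a should advance to 1, 2, 3 here; with the earlier MDEV-17333 fix in
    -- place, the partitioning handler could fail to increment it.
    INSERT INTO t1 (b) VALUES (10), (10), (20);
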
Marko Mäkelä
46957a6a77 Merge 10.3 into 10.4 2020-10-22 13:27:18 +03:00
Kentoku SHIBA
b30ad01d40 MDEV-20100 MariaDB 13.3.9 Crash "[ERROR] mysqld got signal 11 ;"
Some functions on ha_partition call functions on all partitions, but handler->reset() is only called for the partitions selected by m_partitions_to_reset. So Spider did not clear the pointer on unpruned partitions; if the unpruned partitions are used by the next query, Spider references a pointer that has already been freed.
2020-10-22 05:25:53 +09:00
Kentoku SHIBA
ac8d205795 MDEV-20100 MariaDB 13.3.9 Crash "[ERROR] mysqld got signal 11 ;"
Some functions on ha_partition call functions on all partitions, but handler->reset() is only called for the partitions selected by m_partitions_to_reset. So Spider did not clear the pointer on unpruned partitions; if the unpruned partitions are used by the next query, Spider references a pointer that has already been freed.
2020-10-22 05:21:35 +09:00
Monty
2c8c15483d MDEV-23730 s3.replication_partition 'innodb,mix' segv
This failure was caused because of several bugs:
- Someone had removed s3-slave-ignore-updates=1 from slave.cnf, which
  caused the slave to remove files that the master was working on.
- Bug in ha_partition::change_partitions() that didn't reset m_new_file
  in case of errors. This caused crashes in ha_maria::extra() as the
  maria handler was called on files that were already closed.
- In ma_pagecache there was a bug that when one got a read error on a
  big block (S3 block), it left the flag PCBLOCK_BIG_READ on for the page,
  which caused an assert when the page was flushed.
- Flush all cached tables in case of ignored ALTER TABLE

Note that when merging code from 10.3, that fixes the partition bug, use
the code from this patch instead.

Changes to ma_pagecache.cc written or reviewed by Sanja
2020-10-21 03:09:29 +03:00
Kentoku SHIBA
88d22f0e65 MDEV-20100 MariaDB 13.3.9 Crash "[ERROR] mysqld got signal 11 ;"
Some functions on ha_partition call functions on all partitions, but handler->reset() is only called for the partitions selected by m_partitions_to_reset. So Spider did not clear the pointer on unpruned partitions; if the unpruned partitions are used by the next query, Spider references a pointer that has already been freed.
2020-10-20 22:32:12 +09:00
Monty
311b7f94e6 MDEV-23248 Server crashes in mi_extra / ha_partition::loop_extra_alter upon REORGANIZE
This also fixes some issues with
MDEV-23730 s3.replication_partition 'innodb,mix' segv

The problem was that mysql_change_partitions() closes all handler files
in case of error, which was not properly reflected in
fast_alter_partition_table(). This caused handle_alter_part_error() to
try to close already closed tables, which caused the crash.

Fixed fast_alter_partition_table() to reflect when tables are opened.
I also fixed that ha_partition::change_partitions() resets m_new_file in
case of errors.
Either of the above changes fixes the issue, but both are needed to ensure
that the code works as expected.
2020-10-16 19:48:36 +03:00
Marko Mäkelä
cf87f3e08c Merge 10.4 into 10.5 2020-08-14 11:33:35 +03:00
Marko Mäkelä
2f7b37b021 Merge 10.3 into 10.4, except MDEV-22543
Also, fix GCC -Og -Wmaybe-uninitialized in run_backup_stage()
2020-08-13 18:48:41 +03:00
Marko Mäkelä
b811c6ecc7 Fix GCC 10.2.0 -Og -Wmaybe-uninitialized
Fix some more cases after merging
commit 31aef3ae99.
Some warnings look possibly genuine, others are clearly bogus.
2020-08-13 18:21:30 +03:00
Oleksandr Byelkin
48b5777ebd Merge branch '10.4' into 10.5 2020-08-04 17:24:15 +02:00
Oleksandr Byelkin
57325e4706 Merge branch '10.3' into 10.4 2020-08-03 14:44:06 +02:00
Oleksandr Byelkin
c32f71af7e Merge branch '10.2' into 10.3 2020-08-03 13:41:29 +02:00
Oleksandr Byelkin
ef7cb0a0b5 Merge branch '10.1' into 10.2 2020-08-02 11:05:29 +02:00
Ian Gilfillan
d2982331a6 Code comment spellfixes 2020-07-22 23:18:12 +02:00
Marko Mäkelä
4d4865de6f Merge 10.4 into 10.5 2020-07-20 15:55:59 +03:00
Marko Mäkelä
4b959bd8df Merge 10.3 into 10.4 2020-07-20 15:34:59 +03:00
Alexey Botchkov
2cae58f891 MDEV-18371 Server crashes in ha_innobase::cmp_ref upon UPDATE with PARTITION clause.
m_file[0] is not always a good sample.
2020-07-17 12:20:23 +04:00
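
A minimal sketch of an UPDATE with an explicit PARTITION clause (names and data are illustrative):

    CREATE TABLE t1 (
      pk INT PRIMARY KEY,
      a INT
    ) ENGINE=InnoDB
    PARTITION BY RANGE (pk) (
      PARTITION p0 VALUES LESS THAN (10),
      PARTITION p1 VALUES LESS THAN MAXVALUE
    );

    INSERT INTO t1 VALUES (1, 1), (20, 2);
    -- Only p1 is used here, so the handler must not rely on m_file[0]
    -- (which belongs to p0) as a reference handler.
    UPDATE t1 PARTITION (p1) SET a = a + 1 WHERE pk = 20;
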
Sergei Golubchik
c55c292832 introduce hton->drop_table() method
first step in moving drop table out of the handler.
todo: other methods that don't need an open table

for now hton->drop_table is optional, for backward compatibility
reasons
2020-07-04 01:44:46 +02:00
Monty
5211af1c16 Merge remote-tracking branch 'origin/10.3' into 10.4 2020-07-03 00:35:28 +03:00
Monty
65f831d17c Fixed bugs found by valgrind
- Some of the bug fixes are backports from 10.5!
- The fix in innobase/fil/fil0fil.cc is just a backport to get fewer
  error messages in mysqld.1.err when running with valgrind.
- Renamed HAVE_valgrind_or_MSAN to HAVE_valgrind
2020-07-02 17:57:34 +03:00
Monty
d35616aab3 Fixed crash in failing instant alter table with partitioned table
MDEV-22649 SIGSEGV in ha_partition::create_partitioning_metadata on ALTER
MDEV-22804 SIGSEGV in ha_partition::create_partitioning_metadata
2020-06-14 19:39:42 +03:00
Sergei Petrunia
d7d80689b3 MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache
Apply this patch from Percona Server (amended for 10.5):

commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
Date:   Tue Nov 14 06:34:19 2017 +0200

    Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)

    Make ANALYZE TABLE stop flushing affected tables from the table
    definition cache, which has the effect of not blocking any subsequent
    new queries involving the table if there's a parallel long-running
    query:

    - new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
      tables;
    - in mysql_admin_table, if we are performing ANALYZE TABLE, and the
      table flag is set, do not remove the table from the table
      definition cache, do not invalidate query cache;
    - in partitioning handler, refresh the query optimizer statistics
      after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
    - new testcases main.percona_nonflushing_analyze_debug,
      parts.percona_nonflushing_abalyze_debug and a supporting debug sync
      point.

    For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
    updated for handler::info(HA_STATUS_CONST), not often enough for
    tokudb_cardinality_scale_percent). TokuDB may return different
    rec_per_key values depending on dynamic variable
    tokudb_cardinality_scale_percent value. The server does not have a way
    of knowing that changing this variable invalidates the previous
    rec_per_key values in any opened table shares, and so does not call
    info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
    HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
    of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
    seeming to be more correct.
2020-06-12 20:29:05 +03:00
Sergei Golubchik
89a33303c4 remove dead code
reduce the amount of engine-specific code in the server,
particularly as it does not serve any purpose now.

may be needed for VP engine,
to be reconsidered in MDEV-7795
2020-06-09 14:32:43 +02:00
Marko Mäkelä
4a0b56f604 Merge 10.4 into 10.5 2020-05-31 10:28:59 +03:00
Marko Mäkelä
6da14d7b4a Merge 10.3 into 10.4 2020-05-30 11:04:27 +03:00
Marko Mäkelä
e9aaa10c11 Merge 10.2 into 10.3 2020-05-29 22:21:19 +03:00
Aleksey Midenkov
4783494a5e MDEV-22283 Server crashes in key_copy or unexpected error 156
(The table already existed in the storage engine)

The wrong algorithm of closing partitions on error doesn't close the last
partition.
2020-05-29 16:19:15 +03:00
Sergei Golubchik
e64dc07125 assert(a && b); -> assert(a); assert(b); 2020-05-27 15:56:40 +02:00
Monty
4102f1589c Aria will now register its transactions
MDEV-22531 Remove maria::implicit_commit()
MDEV-22607 Assertion `ha_info->ht() != binlog_hton' failed in
           MYSQL_BIN_LOG::unlog_xa_prepare

From the handler point of view, Aria now looks like a transactional
engine. One effect of this is that we don't need to call
maria::implicit_commit() anymore.

This change also forces the server to call trans_commit_stmt() after doing
any read or writes to system tables.  This work will also make it easier
to later allow users to have system tables in other engines than Aria.

To handle the case that Aria doesn't support rollback, a new
handlerton flag, HTON_NO_ROLLBACK, was added to engines that has
transactions without rollback (for the moment only binlog and Aria).

Other things
- Moved freeing of MARIA_SHARE to a separate function as the MARIA_SHARE
  can still be part of a transaction even if the table has been closed.
- Changed Aria checkpoint to use the new MARIA_SHARE free function. This
  fixes a possible memory leak when using S3 tables
- Changed testing of binlog_hton to instead test for HTON_NO_ROLLBACK
- Removed checking of has_transaction_manager() in handler.cc as we can
  assume that as the transaction was started by the engine, it does
  support transactions.
- Added a new class 'start_new_trans' that can be used to start independent
  sub transactions, for example while reading mysql.proc, using help or
  status tables etc.
- open_system_tables...() and open_proc_table_for_Read() doesn't anymore
  take a Open_tables_backup list. This is now handled by 'start_new_trans'.
- Split thd::has_transactions() to thd::has_transactions() and
  thd::has_transactions_and_rollback()
- Added handlerton code to free cached transactions objects.
  Needed by InnoDB.

squash! 2ed35999f2a2d84f1c786a21ade5db716b6f1bbc
2020-05-23 12:29:10 +03:00
Sergei Golubchik
67aaf51cf9 cleanup: ha_external_unlock() helper
as mentioned in f9f33b85be and generally to make it
easier to talk about
2020-05-05 19:41:12 +02:00
Monty
eca5c2c67f Added support for more functions when using partitioned S3 tables
MDEV-22088 S3 partitioning support

All ALTER PARTITION commands should now work on S3 tables except

REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION

In addition, PARTITIONED S3 TABLES can also be replicated.
This is achieved by storing the partitioned tables' .frm and .par files on S3
for partitioned shared (S3) tables.

The discovery methods are enhanced by allowing engines that support
discovery to also support discovery of the partitioned tables' .frm and .par files

Things in more detail

- The .frm and .par files of partitioned tables are stored in S3 and kept
  in sync.
- Added hton callback create_partitioning_metadata to inform the handler
  that metadata for a partitioned file has changed
- Added back handler::discover_check_version() to be able to check if
  a table's or a part table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
  table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
  don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around
- Appended an extension to the path for writefrm() to be able to reuse the
  function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of
  non-critical tracing.
2020-04-19 17:33:51 +03:00
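
A hedged sketch of the now-supported partition management on an S3 table (table name and partition layout are illustrative; S3 connection options are assumed to be configured):

    -- An existing RANGE-partitioned table converted to the read-only S3 engine:
    ALTER TABLE sales ENGINE=S3;

    -- Partition management commands now work on the S3 table:
    ALTER TABLE sales ADD PARTITION (PARTITION p2021 VALUES LESS THAN (2022));
    ALTER TABLE sales DROP PARTITION p2019;

    -- Still unsupported: REBUILD, TRUNCATE and REORGANIZE PARTITION.
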
Sergei Golubchik
0515577d12 cleanup: prepare "update_handler" for WITHOUT OVERLAPS
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong
2020-03-31 17:42:34 +02:00
Nikita Malyavin
b9df4d2a35 Fix real keyread count for partitions
Sergei's commit ac6b3c4430 implemented handler status counters
compensation for underlying handlers like ha_partition.
`index_read_idx_map` is missing there, but it should have been fixed as
well (proof: ha_partition::index_read_idx_map never calls
ha_partition::index_read_map).

Note: all this compensation logic could be broken for subpartitions! (We
could experience a double decrement.)
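
A small self-contained C++ model of the counting problem described above;
the class and counter names are invented. The parent handler must compensate
once for each call it forwards to a counting child wrapper; a forwarding path
that forgets the compensation leaves the counter too high, and compensating
twice (as could happen with subpartitions) would leave it too low:

  #include <cassert>

  // Invented model of the shared status counter and the parent/child
  // handler relation; not the real classes or counter names.
  static long read_key_count= 0;

  struct Child
  {
    int ha_index_read() { ++read_key_count; return 0; }  // wrapper counts
  };

  struct Parent                                 // models ha_partition
  {
    Child child;

    // The parent's own ha_* wrapper already counted the call once, so
    // forwarding it to the child's counting wrapper needs a compensating
    // decrement to keep the statistics right.
    int index_read()
    { int res= child.ha_index_read(); --read_key_count; return res; }

    // A forwarding path that forgot the compensation: every call leaves
    // the counter one too high.
    int index_read_idx_without_compensation()
    { return child.ha_index_read(); }
  };

  int main()
  {
    Parent p;
    ++read_key_count; p.index_read();                          // counts 1
    ++read_key_count; p.index_read_idx_without_compensation(); // counts 2
    assert(read_key_count == 3);   // 2 reads were made, 3 were counted
    return 0;
  }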
2020-03-31 17:42:34 +02:00
Nikita Malyavin
e6af62189e unify "partitioning cannot do X" error messages 2020-03-31 17:42:34 +02:00
Marko Mäkelä
37c14690fc Merge 10.4 into 10.5 2020-03-30 19:07:25 +03:00
Marko Mäkelä
e2f1f88fa6 Merge 10.3 into 10.4 2020-03-30 14:50:23 +03:00
Thirunarayanan Balathandayuthapani
f8ec3ba01b MDEV-21832 Force all partitions to rebuild if any one of the
partitions requires a rebuild

- ha_innobase::commit_inplace_alter_table() assumes that all partitions
perform the same kind of ALTER operation. During DDL, if one partition
requires a table rebuild and another partition does not, then all
partitions should be forced to rebuild.
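
A minimal sketch of the decision logic described above (the type and helper
names are placeholders, not the real InnoDB structures): if any partition's
ALTER requires a rebuild, the requirement is propagated to every partition:

  #include <cassert>
  #include <vector>

  // Placeholder for a per-partition ALTER context; not the real struct.
  struct PartAlterCtx { bool need_rebuild= false; };

  // If any partition must be rebuilt, force all of them to rebuild so
  // that every partition performs the same kind of operation.
  static void force_common_rebuild(std::vector<PartAlterCtx> &parts)
  {
    bool any_rebuild= false;
    for (const PartAlterCtx &p : parts)
      any_rebuild|= p.need_rebuild;
    if (any_rebuild)
      for (PartAlterCtx &p : parts)
        p.need_rebuild= true;
  }

  int main()
  {
    std::vector<PartAlterCtx> parts(3);
    parts[1].need_rebuild= true;        // only one partition needs it
    force_common_rebuild(parts);
    assert(parts[0].need_rebuild && parts[2].need_rebuild);
    return 0;
  }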
2020-03-30 12:41:59 +03:00
Monty
f36ca142f7 Added page_range to records_in_range() to improve range statistics
Prototype change:
-  virtual ha_rows records_in_range(uint inx, key_range *min_key,
-                                   key_range *max_key)
+  virtual ha_rows records_in_range(uint inx, const key_range *min_key,
+                                   const key_range *max_key,
+                                   page_range *res)

The handler can ignore the page_range parameter. If the handler fills in
the parameter, the optimizer can deduce the following:
- whether the previous range's last key is on the same block as the next
  range's first key
- whether the current key range fits within one block
- We can also assume that the first and last blocks read are cached!
  This can be used for a better calculation of IO seeks when we
  estimate the cost of a range index scan.

The parameter is fully implemented for MyISAM, Aria and InnoDB.
A separate patch will update handler::multi_range_read_info_const() to
take advantage of this change and also remove the duplicate
records_in_range() calls that are no longer needed.
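
A hedged, self-contained C++ sketch of how an engine might fill in the new
output parameter; the page_range field names used here are an assumption for
illustration and may not match handler.h:

  #include <cstdint>

  // Assumed shape of the output parameter; real field names may differ.
  struct page_range { uint64_t first_page, last_page; };

  struct key_range;                    // opaque here, only passed around

  // Sketch of an engine-side implementation of the new prototype:
  // estimate rows as before and, in addition, report which pages the
  // two endpoints of the range fall on, so the optimizer can see when
  // adjacent ranges share a block (and assume that block is cached).
  struct ExampleHandler
  {
    uint64_t records_in_range(unsigned inx, const key_range *min_key,
                              const key_range *max_key, page_range *res)
    {
      (void) inx;
      uint64_t first= locate_page(min_key);
      uint64_t last=  locate_page(max_key);
      if (res)                         // the parameter may be ignored
      {
        res->first_page= first;
        res->last_page=  last;
      }
      return estimate_rows(first, last);
    }

    // Stubs standing in for a real B-tree descent and statistics.
    uint64_t locate_page(const key_range *) const { return 42; }
    uint64_t estimate_rows(uint64_t a, uint64_t b) const
    { return (b - a + 1) * 100; }
  };

  int main()
  {
    ExampleHandler h;
    page_range r;
    return h.records_in_range(0, nullptr, nullptr, &r) == 100 ? 0 : 1;
  }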
2020-03-27 03:54:45 +02:00
Vladislav Vaintroub
6ef3dbb1ff Fix unused variable warning in optimized build. 2020-03-25 15:53:38 +01:00
Monty
37393bea23 Replace handler::primary_key_is_clustered() with handler::pk_is_clustering_key()
This was done both to simplify the code and to make it easier to handle
storage engines that are clustered on some index other than the primary
key.

As pk_is_clustering_key() and is_clustering_key() now use only
index_flags, these were removed from all storage engines.
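
A small illustrative C++ model of deriving both checks purely from index
flags; the flag name, the MAX_KEY marker and the signatures are assumptions
made for the sake of a self-contained example:

  #include <cassert>

  // Illustrative model only; flag name/value and signatures are assumed.
  static const unsigned HA_CLUSTERED_INDEX_FLAG= 1U << 0;
  static const unsigned MAX_KEY= 64;             // "no primary key" marker

  struct HandlerModel
  {
    unsigned primary_key;                        // MAX_KEY if none
    unsigned idx_flags[2];                       // toy per-index flags

    unsigned index_flags(unsigned idx) const { return idx_flags[idx]; }

    // Both checks are derived purely from index flags, which also covers
    // engines that cluster on some index other than the primary key.
    bool is_clustering_key(unsigned idx) const
    {
      return idx != MAX_KEY && (index_flags(idx) & HA_CLUSTERED_INDEX_FLAG);
    }
    bool pk_is_clustering_key() const { return is_clustering_key(primary_key); }
  };

  int main()
  {
    HandlerModel h{0, {HA_CLUSTERED_INDEX_FLAG, 0}};
    assert(h.pk_is_clustering_key());
    assert(!h.is_clustering_key(1));
    HandlerModel no_pk{MAX_KEY, {0, 0}};
    assert(!no_pk.pk_is_clustering_key());
    return 0;
  }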
2020-03-24 21:00:04 +02:00
Monty
91ab42a823 Clean up and speed up interfaces for binary row logging
MDEV-21605 Clean up and speed up interfaces for binary row logging
MDEV-21617 Bug fix for previous version of this code

The intention is to have as few 'if's as possible in ha_write() and
related functions. This is done by pre-calculating, once per statement,
the row_logging state for all tables.

Benefits are simpler and faster code both when binary logging is disabled
and when it's enabled.

Changes:
- Added handler->row_logging to make it easy to check if a table should be
  row logged. This also made it easier to disable row logging for system,
  internal and temporary tables.
- The tables' row_logging capabilities are checked once per statement that
  updates tables, in THD::binlog_prepare_for_row_logging(), which is called
  when needed from THD::decide_logging_format().
- Removed most usage of tmp_disable_binlog(), reenable_binlog() and
  temporary saving and setting of thd->variables.option_bits.
- Moved checks that can't change during a statement from
  check_table_binlog_row_based() to check_table_binlog_row_based_internal()
- Removed flag row_already_logged (used by sequence engine)
- Moved binlog_log_row() to be a handler member function
- Moved write_locked_table_maps() to THD::binlog_write_table_maps() as
  most other related binlog functions are in THD.
- Removed binlog_write_table_map() and binlog_log_row_internal() as
  they are now obsolete as 'has_transactions()' is pre-calculated in
  prepare_for_row_logging().
- Removed the 'is_transactional' argument from binlog_write_table_map() as
  this can now be read from the handler.
- Changed order of 'if's in handler::external_lock() and wsrep_mysqld.h
  to first evaluate fast and likely cases before more complex ones.
- Added error checking in ha_write_row() and related functions if
  binlog_log_row() failed.
- Don't clear check_table_binlog_row_based_result in
  clear_cached_table_binlog_row_based_flag() as it's not needed.
- THD::clear_binlog_table_maps() has been replaced with
  THD::reset_binlog_for_next_statement()
- Added 'MYSQL_OPEN_IGNORE_LOGGING_FORMAT' flag to open_and_lock_tables()
  to avoid calculating of binary log format for internal opens. This flag
  is also used to avoid reading statistics tables for internal tables.
- Added OPTION_BINLOG_LOG_OFF as a simple way to turn off binlogging
  temporarily for CREATE (instead of using THD::sql_log_bin_off).
- Removed flag THD::sql_log_bin_off (not needed anymore)
- Speed up THD::decide_logging_format() by remembering if blackhole engine
  is used and avoid a loop over all tables if it's not used
  (the common case).
- THD::decide_logging_format() is not called anymore if no tables are used
  for the statement. This will speed up pure stored procedure code by
  about 5% according to some simple tests.
- We now get annotated events on slave if a CREATE ... SELECT statement
  is transformed on the slave from statement to row logging.
- In the original code, the master could get into a state where row
  logging was enforced for all future events even if statement-based
  logging could be used. This is now partly fixed.

Other changes:
- Ensure that all tables used by a statement have query_id set.
- Had to restore the row_logging flag for unused tables in
  THD::binlog_write_table_maps (not a normal scenario)
- Removed injector::transaction::use_table(server_id_type sid, table tbl)
  as it's not used.
- Cleaned up set_slave_thread_options()
- Some more DBUG_ENTER/DBUG_RETURN, code comments and minor indentation
  changes.
- Ensure we only call THD::decide_logging_format_low() once in
  mysql_insert() (fixing an inefficiency).
- Don't annotate INSERT DELAYED
- Removed zeroing pos_in_table_list in THD::open_temporary_table() as it's
  already 0
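
A toy self-contained C++ sketch of the central idea above (pre-calculate one
boolean per table per statement so the per-row write path does a single cheap
test); all names are invented for illustration, not the server classes:

  #include <vector>

  // Invented stand-ins for THD/handler state.
  struct TableModel
  {
    bool is_temporary= false;
    bool engine_supports_row_logging= true;
    bool row_logging= false;               // pre-calculated per statement
  };

  struct SessionModel                       // models the THD side
  {
    bool binlog_enabled= true;
    bool row_format= true;

    // Done once per statement that updates tables (the real code does
    // this from the logging-format decision):
    void prepare_for_row_logging(std::vector<TableModel> &tables)
    {
      for (TableModel &t : tables)
        t.row_logging= binlog_enabled && row_format &&
                       !t.is_temporary && t.engine_supports_row_logging;
    }
  };

  // The per-row write path then needs a single cheap test instead of
  // re-evaluating the logging rules for every row.
  static int write_row(TableModel &t)
  {
    if (t.row_logging)
    {
      // a row event would be written to the binary log here
    }
    return 0;
  }

  int main()
  {
    std::vector<TableModel> tables(2);
    tables[1].is_temporary= true;           // temporary: never row logged
    SessionModel session;
    session.prepare_for_row_logging(tables);
    return write_row(tables[0]) + write_row(tables[1]);
  }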
2020-03-24 21:00:03 +02:00
Monty
4ef437558a Improve update handler (long unique keys on blobs)
MDEV-21606 Improve update handler (long unique keys on blobs)
MDEV-21470 MyISAM and Aria start_bulk_insert doesn't work with long unique
MDEV-21606 Bug fix for previous version of this code
MDEV-21819 2 Assertion `inited == NONE || update_handler != this'

- Move update_handler from TABLE to handler
- Move out initialization of update handler from ha_write_row() to
  prepare_for_insert()
- Fixed that INSERT DELAYED works with update handler
- Give an error if using long unique with an autoincrement column
- Added a handler function to check if a table has long unique hash indexes
- Disable the write cache in MyISAM and Aria when using the update_handler:
  if the cache is used, the row is not inserted until the end of the
  statement and the update_handler would not find conflicting rows.
- Removed the unused handler argument from
  check_duplicate_long_entries_update()
- Syntax cleanups
  - Indentation fixes
  - Don't use single-character identifiers for arguments
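
A compact, self-contained C++ illustration (all names invented) of why a
separate lookup handler is used for long unique hash keys: conflicts are
checked through a second handler instance before the row is written, which is
also why write caching must be off, since a cached row would not yet be
visible to the lookup:

  #include <cassert>
  #include <string>
  #include <unordered_map>

  // Invented stand-ins; the real code keeps a clone of the table handler
  // (the "update handler") just for looking up long-unique hash keys.
  struct StorageModel                          // shared table data
  {
    std::unordered_map<std::size_t, std::string> hash_index;  // hash -> row
  };

  struct HandlerModel
  {
    StorageModel *share;
    explicit HandlerModel(StorageModel *s) : share(s) {}

    // Used by the lookup handler: find a row with the same hash and
    // confirm whether the full values really collide.
    bool has_conflict(const std::string &blob) const
    {
      auto it= share->hash_index.find(std::hash<std::string>{}(blob));
      return it != share->hash_index.end() && it->second == blob;
    }
  };

  struct WritePath
  {
    StorageModel *share;
    HandlerModel lookup;                       // the separate handler
    explicit WritePath(StorageModel *s) : share(s), lookup(s) {}

    bool write_row(const std::string &blob)
    {
      if (lookup.has_conflict(blob))           // checked before the insert
        return false;
      share->hash_index.emplace(std::hash<std::string>{}(blob), blob);
      return true;
    }
  };

  int main()
  {
    StorageModel share;
    WritePath w(&share);
    assert(w.write_row("long blob value"));
    assert(!w.write_row("long blob value"));   // duplicate rejected
    return 0;
  }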
2020-03-24 21:00:02 +02:00
Sergey Vojtovich
da82e75901 handler::rebind()
- rename PFS specific rebind_psi() to generic rebind()
- call rebind independently of PFS compilation status
- allow rebind() to return an error
2020-03-24 20:47:41 +02:00
Oleksandr Byelkin
fad47df995 Merge branch '10.4' into 10.5 2020-03-11 17:52:49 +01:00
Oleksandr Byelkin
b8c0e49670 Merge commit '10.3' into 10.4 2020-03-11 13:27:10 +01:00