mysql_write_frm(): Correctly enclose code inside
#ifdef WITH_PARTITION_STORAGE_ENGINE
so that cmake -DPLUGIN_PARTITION=NO builds can succeed.
This was broken in
commit b7bba721ee.
Also fixes:
MDEV-25399 Assertion `name.length == strlen(name.str)' failed in Item_func_sp::make_send_field
Also fixes a problem where, in this scenario:
SET NAMES binary;
SELECT 'some not well-formed utf8 string';
the auto-generated column name copied the binary string value directly
to the Item name, without checking utf8 well-formedness.
After this change auto-generated column names work as follows:
- Zero bytes 0x00 are copied to the name using HEX notation
- In case of "SET NAMES binary", all byte sequences that do not form
well-formed utf8 characters are copied to the name using HEX notation.
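For example (a minimal sketch; the exact HEX rendering in the generated
name follows the rules above):
SET NAMES binary;
-- the literal contains a 0x00 byte, so the auto-generated column name
-- renders that byte in HEX notation instead of copying it verbatim
SELECT 'a\0b';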
ALTER TABLE IMPORT doesn't properly handle instant alter metadata.
This patch makes IMPORT read, parse and apply instant alter metadata at the
very beginning of the operation. So cases where the source table has some
instant metadata and the destination table doesn't now work fine.
DISCARD already removes instant metadata, so importing a normal table into
an instant table worked fine even before this patch.
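A sketch of the transportable-tablespace flow that this patch affects
(table and file names are made up):
ALTER TABLE t1 DISCARD TABLESPACE;
-- copy t1.ibd (and t1.cfg, if present) from the source server into the
-- destination datadir
ALTER TABLE t1 IMPORT TABLESPACE;
-- IMPORT now reads and applies the source table's instant ALTER metadata
-- first, so a source with instant metadata can be imported into a
-- destination that was created without it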
decrypt_decompress(): decrypts and decompresses page if needed
handle_instant_metadata(): this should be the first thing to read the source
table. Basically, it applies instant metadata to the destination
dict_table_t object. This is also the first place where the FSP flags are
read, so all checks of them were moved to this function.
PageConverter::update_index_page(): it no longer reads instant metadata.
This logic was moved into handle_instant_metadata().
row_import::match_flags(): this is the first part of row_import::match_schema().
As a separate function it is reused by handle_instant_metadata().
fil_space_t::is_full_crc32_compressed(): added a convenience function.
ha_innobase::discard_or_import_tablespace(): do not reload the table
definition to read instant metadata, because handle_instant_metadata()
does it better.
The reverted code was originally added in
4e7ee166a9
ANONYMOUS_VAR: a convenience helper to use together with make_scope_exit().
full_crc32_import.test shows different results, because dict_table_close()
and dict_table_open_on_id() no longer happen.
Thus, SHOW CREATE TABLE shows a slightly older table definition.
Part 1: Fix ER_TRUNCATED_WRONG_VALUE for DELETE without ORDER BY
Analysis: m_current_row_for_warning is not incremented and keeps its default
value, which is then used by ROW_NUMBER.
Fix: Increment m_current_row_for_warning for each processed row.
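A sketch of the affected scenario (table, values and the diagnostics query
are illustrative only):
CREATE TABLE t1 (a VARCHAR(10));
INSERT INTO t1 VALUES ('1x'),('2'),('3y');
-- converting '1x' and '3y' for the comparison raises ER_TRUNCATED_WRONG_VALUE
DELETE FROM t1 WHERE a > 0;
-- with the fix, the ROW_NUMBER condition item reflects the row being processed
GET DIAGNOSTICS CONDITION 1 @rnum = ROW_NUMBER;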
CHECK violation
Analysis: When a constraint fails, view_check_option() returns a non-zero
value, so we continue the loop without incrementing the counter, because it
is incremented at the end of the loop.
Fix: Increment m_current_row_for_warning at the beginning of the loop. This
also fixes any similar bugs where the counter is not incremented correctly
because of continue.
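A sketch of this scenario (objects are made up; with IGNORE the CHECK OPTION
failure becomes a warning and the loop continues):
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT a FROM t1 WHERE a > 0 WITH CHECK OPTION;
-- the 2nd row fails the CHECK OPTION; its warning should report row 2
INSERT IGNORE INTO v1 VALUES (1), (-2), (3);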
ER_WRONG_VALUE_COUNT_ON_ROW for the 1st row
Analysis: The current row for warning is not incremented during the prepare
phase.
Fix: Increment the current row for warning if the number of fields in the
table and the number of row values don't match and the number of values in
the row is greater than the number of fields.
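For example (illustrative only), the error should report row 1 here:
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 2, 3);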
Analysis: When the row number is passed as a parameter to set_warning(), it
is only used for the error/warning text, but m_current_row_for_warning is not
updated, so its default value is assumed.
Fix: Update m_current_row_for_warning when an error/warning occurs.
In case of a bulk insert the server sends all rows to the engine, and
then the engine replies that there was ER_DUP_ENTRY somewhere;
the exact number of the row that caused the error is unknown.
Analysis: The parser was missing ROW_NUMBER in the SIGNAL and RESIGNAL
syntax.
Fix: Fix the parser, and copy m_row_number like the other condition
attributes so that ROW_NUMBER does not fall back to its default value.
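With the parser fix, ROW_NUMBER can be set explicitly, e.g. (values are
made up):
SIGNAL SQLSTATE '45000'
  SET MESSAGE_TEXT = 'bad input row', ROW_NUMBER = 3;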
Syntax for CONVERT TABLE
ALTER TABLE tbl_name CONVERT TABLE tbl_name TO PARTITION partition_name partition_spec
Examples:
ALTER TABLE t1 CONVERT TABLE tp2 TO PARTITION p2 VALUES LESS THAN MAX_VALUE();
The new ALTER_PARTITION_CONVERT_IN command for
fast_alter_partition_table() is implemented in the
alter_partition_convert_in() function, which basically does
ha_rename_table().
The table structure and data checks are basically the same as for the
EXCHANGE PARTITION command, and are done by
compare_table_with_partition() and check_table_data().
Atomic DDL follows the scheme from MDEV-22166 (see the
corresponding commit message). The only difference is that it also has
to drop the source table's frm, and that is done by
WFRM_DROP_CONVERTED_FROM.
Initial patch was done by Dmitry Shulga <dmitry.shulga@mariadb.com>
Syntax for CONVERT keyword
ALTER TABLE tbl_name
[alter_option [, alter_option] ...] |
[partition_options]
partition_option: {
...
| CONVERT PARTITION partition_name TO TABLE tbl_name
}
Examples:
ALTER TABLE t1 CONVERT PARTITION p2 TO TABLE tp2;
The new ALTER_PARTITION_CONVERT_OUT command for
fast_alter_partition_table() is implemented in the
alter_partition_convert_out() function, which basically does
ha_rename_table().
The partition to extract is marked with the same flag as a dropped
partition: PART_TO_BE_DROPPED. Note that we cannot have multiple
partitioning commands in one ALTER.
For DDL logging the principle is basically the same as for other
fast_alter_partition_table() commands. The only difference is that it
integrates the late Atomic DDL functions and introduces an additional
phase, WFRM_BACKUP_ORIGINAL. That is required for binlog consistency:
otherwise we could not revert back after WFRM_INSTALL_SHADOW is done,
and if we crash or fail before the DDL log is complete, the altered
table would already be the new one but the binlog would miss that
ALTER command. Note that this is different from all other atomic DDL
in that it rolls back until ddl_log_complete() is done, even if
everything was fully done before the crash.
Test cases added to:
parts.alter_table \
parts.partition_debug \
versioning.partition \
atomic.alter_partition
Instead of
create or replace table t1 (x int)
partition by range(x) (
partition p1 values less than (10),
partition pn values less than maxvalue);
it should be possible to use the shorter form:
create or replace table t1 (x int)
partition by range(x) (
p1 values less than (10),
pn values less than maxvalue);
As the examples above demonstrate, this makes the PARTITION keyword in a
partition definition optional.
The static analyzer built into Eclipse CDT complained about missing
initializers in the constructors of the class Alter_table_ctx, so I've added
them in order to eliminate the annoying warnings.
Dead code cleanup:
part_info->num_parts usage was wrong and worked incorrectly in
mysql_drop_partitions() because num_parts is already updated in
prep_alter_part_table(). We don't have to update part_info->partitions
because part_info is destroyed at alter_partition_lock_handling().
Cleanups:
- DBUG_EVALUATE_IF() macro replaced by shorter form DBUG_IF();
- Typo in ER_KEY_COLUMN_DOES_NOT_EXITS.
Refactorings:
- Split write_log_replace_delete_frm() into write_log_delete_frm()
and write_log_replace_frm();
- partition_info via DDL_LOG_STATE;
- set_part_info_exec_log_entry() removed.
DBUG_EVALUATE removed
DBUG_EVALUATE was only added for consistency together with
DBUG_EVALUATE_IF. It is not used anywhere in the code.
DBUG_SUICIDE() fix on release build
On release builds DBUG_SUICIDE() was a statement. That was wrong, as
DBUG_SUICIDE() is used in expression context.
Also fixes MDEV-24619 Wrong result or Assertion `0' in Item::val_native / Type_handler_inet6::Item_val_native_with_conversion
Type_handler_inet6::create_item_copy() created a generic Item_copy_string,
which does not implement val_native() - it has a dummy implementation
with DBUG_ASSERT(0), which made the server crash.
Fix:
- Adding a new class Item_copy_inet6,
  which implements val_native().
- Fixing Type_handler_inet6::create_item_copy()
to make Item_copy_inet6 instead of Item_copy_string.
When inserting a number of rows into an empty table,
InnoDB will buffer and pre-sort the records for each index, and
build the indexes one page at a time.
For each index, a buffer of innodb_sort_buffer_size will be created.
If the buffer runs out of memory, we will create temporary files
for storing the data.
At the end of the statement, we will sort and apply the buffered
records. Ideally, we would do this at the end of the transaction
or only when starting to execute a non-INSERT statement on the table.
However, it could be awkward if duplicate keys or similar errors
were reported during the execution of a later statement.
This will be addressed in MDEV-25036.
Any columns longer than 2000 bytes will be buffered in temporary files.
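A minimal sketch of a statement that benefits from this (table and values
are made up):
CREATE TABLE t1 (id INT PRIMARY KEY, b TEXT) ENGINE=InnoDB;
-- rows are buffered and pre-sorted per index; the b values longer than
-- 2000 bytes go to a temporary file; everything is applied at the end
-- of the statement
INSERT INTO t1 VALUES
  (2, REPEAT('x', 3000)), (1, REPEAT('y', 3000)), (3, 'short');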
innodb_prepare_commit_versioned(): Apply all buffered bulk insert
operations at the end of each statement.
ha_commit_trans(): Handle errors from innodb_prepare_commit_versioned().
row_merge_buf_write(): This function now also accepts a blob
file handle and writes field data longer than 2000 bytes to it.
row_merge_bulk_t: Data structure to maintain the data during
bulk insert operation.
trx_mod_table_time_t::start_bulk_insert(): Notify the start of a
bulk insert operation and create a new buffer for the given table.
trx_mod_table_time_t::add_tuple(): Buffer a record.
trx_mod_table_time_t::write_bulk(): Write all buffered insert
operations of the transaction for the given table.
trx_mod_table_time_t::bulk_buffer_exist(): Whether buffer
storage exists for the bulk transaction.
row_ins_clust_index_entry_low(): Insert the data into the
bulk buffer if it already exists.
row_ins_sec_index_entry(): Insert the secondary tuple
if the bulk buffer already exists.
row_merge_bulk_buf_add(): Insert the tuple into the bulk
insert buffer.
row_merge_buf_blob(): Write field data whose length is
more than 2000 bytes into the blob temporary file. Write the
file offset and length into the tuple field.
row_merge_copy_blob_from_file(): Copy the blob from the blob file
handle based on the reference in the given tuple.
row_merge_insert_index_tuples(): Handle blobs for the bulk insert
operation.
row_merge_bulk_t::row_merge_bulk_t(): Constructor. Initialize
the buffer and file for all the indexes except the fts index.
row_merge_bulk_t::create_tmp_file(): Create new temporary file
for the given index.
row_merge_bulk_t::write_to_tmp_file(): Write the content from the
buffer to the disk file for the given index.
row_merge_bulk_t::add_tuple(): Insert the tuple into the merge
buffer for the given index. If memory runs out, InnoDB
sorts the buffer and writes it into the file.
row_merge_bulk_t::write_to_index(): Do the bulk insert operation
from the merge file/merge buffer for the given index.
row_merge_bulk_t::write_to_table(): Do the bulk insert operation
for all the indexes.
dict_stats_update(): If a bulk insert transaction is in progress,
treat the table as empty. The index creation could hold latches
for extended amounts of time.
rollback_inplace_alter_table(): Tolerate a case where the transaction
is not in an active state. If ha_innobase::commit_inplace_alter_table()
failed with a deadlock, the transaction would already have been
rolled back. This omission of error handling was introduced in
commit 1bd681c8b3 (MDEV-25506 part 3).
After commit c3c53926c4 (MDEV-26554)
it became easier to trigger DB_DEADLOCK during exclusive table lock
acquisition in ha_innobase::commit_inplace_alter_table().
lock_table_low(): Add DBUG injection "innodb_table_deadlock".
`mytop` and `my_print_defaults` for RPM
- Add `mytop` to client package
- Add man page of `my_print_defaults` to client package
- Add dependencies for RPMs
- Remove old comment
- Remove dead link
Reviewed by: serg@mariadb.com
We have observed hangs of the io_uring subsystem when using a
Linux kernel newer than 5.10. Also 5.15-rc6 is affected by this.
The exact cause of the hangs has not been diagnosed yet.
As a safety measure, we will disable innodb_use_native_aio by default
when the server has been configured with io_uring and the kernel
version is between 5.11 and 5.15.
If the start-up parameter innodb_use_native_aio=ON is set, then
we will issue a warning to the server error log.
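To check the effective value on an affected kernel (illustrative only;
0 means the workaround disabled native AIO):
SELECT @@innodb_use_native_aio;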
This implements memory transaction support for:
* Intel Restricted Transactional Memory (RTM), also known as TSX-NI
(Transactional Synchronization Extensions New Instructions)
* POWER v2.07 Hardware Transactional Memory (HTM) on GNU/Linux
transactional_lock_guard, transactional_shared_lock_guard:
RAII lock guards that try to elide the lock acquisition
when transactional memory is available.
buf_pool.page_hash: Try to elide latches whenever feasible.
Due to the InnoDB change buffer and ROW_FORMAT=COMPRESSED
tables, this is not always possible.
In buf_page_get_low(), memory transactions only work reasonably
well for validating a guessed block address.
TMLockGuard, TMLockTrxGuard, TMLockMutexGuard: RAII lock guards
that try to elide lock_sys.latch and related latches.
Since commit bd5a6403ca (MDEV-26033)
we can actually calculate the buf_pool.page_hash cell and latch
addresses while not holding buf_pool.mutex.
buf_page_alloc_descriptor(): Remove the MEM_UNDEFINED.
We now expect buf_page_t::hash to be zero-initialized.
buf_pool_t::hash_chain: Dedicated data type for buf_pool.page_hash.array.
buf_LRU_free_one_page(): Merged into the only caller,
buf_pool_t::corrupted_evict().
page_hash_latch: Only use the spinlock implementation on
SUX_LOCK_GENERIC platforms (those for which we do not implement
a futex-like interface). Use srw_spin_mutex on 32-bit systems
(except Microsoft Windows) to satisfy the size constraints.
rw_lock::is_read_locked(): Remove. We will use the slightly
broader assertion is_locked().
srw_lock_: Implement is_locked(), is_write_locked() in a hacky
way for the Microsoft Windows SRWLOCK. This should be acceptable,
because we are only using these predicates in debug assertions
(or later, in lock elision), and false positives should not matter.