Classes that handle privilege tables (like Tables_priv_table)
may read some columns conditionally, but they expect a certain
minimal number of columns to always exist.
Add a check for the minimal required number of columns in privilege tables,
and do not use a table that has fewer columns than required (a minimal
sketch of the idea follows).
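Below is a standalone C++ sketch of that idea (illustrative only; the
struct and function names are made up and are not the server's actual
classes): a table whose field count is below the expected minimum is
simply not used.

  // Illustrative sketch: refuse to use a privilege table that exposes
  // fewer columns than the access code requires (names are hypothetical).
  #include <cstddef>
  #include <iostream>

  struct Priv_table_def
  {
    const char *name;
    size_t field_count;   // number of columns found in the opened table
    size_t min_columns;   // smallest layout the reader can handle
  };

  static bool priv_table_usable(const Priv_table_def &t)
  {
    return t.field_count >= t.min_columns;
  }

  int main()
  {
    Priv_table_def tables_priv= {"tables_priv", 6, 8};
    if (!priv_table_usable(tables_priv))
      std::cout << "ignoring " << tables_priv.name << ": too few columns\n";
    return 0;
  }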
1. Fixing ROUND(x) and TRUNCATE(x,0) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
input to preserve the exact data type of the argument when it's possible.
2. Fixing FLOOR(x) and CEILING(x) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
to preserve the exact data type of the argument.
3. Adding a dedicated Type_handler_year::Item_func_round_fix_length_and_dec()
to handle ROUND(x) and TRUNCATE(x,y) for YEAR(2) and YEAR(4) input more
easily. They still return INT(2) UNSIGNED and INT(4) UNSIGNED respectively,
as before.
Implementing dedicated fixing methods:
- Type_handler_bit::Item_func_round_fix_length_and_dec()
- Type_handler_bit::Item_func_int_val_fix_length_and_dec()
- Type_handler_typelib::Item_func_round_fix_length_and_dec()
because the inherited methods did not work well.
Fixing:
- Type_handler_typelib::Item_func_int_val_fix_length_and_dec
It did not work correctly, because it used args[0]->max_length to
calculate the result data type. For ENUM and SET this was wrong,
because in a FLOOR() and CEILING() context ENUM and SET return at most
5 digits (65535 is the largest possible value); the sketch after the
Misc list below illustrates these digit counts.
Misc:
- Changing the API of
Type_handler_bit::Bit_decimal_notation_int_digits(const Item *item)
to a more generic form:
Type_handler_bit::Bit_decimal_notation_int_digits_by_nbits(uint nbits)
- Fixing Type_handler_bit::Bit_decimal_notation_int_digits_by_nbits() to
return the exact number of decimal digits for all nbits 1..64.
The old implementation was approximate.
This change gives better (more precise) data types (see the sketch
after this list).
- Type_handler_hex_hybrid did not override
Type_handler_string_result::Item_func_round_fix_length_and_dec(),
so the result type of ROUND(0xFFFFFFFFFFFFFFFF) was erroneously
calculated as DOUBLE with a wrong length.
Overriding Item_func_round_fix_length_and_dec() to calculate
the result type as INT/BIGINT.
Also, fixing Item_func_round::fix_arg_int() to use
args[0]->decimal_precision() instead of args[0]->max_length
when calculating this->max_length, to get a correct result
for hex hybrids.
- Type_handler_hex_hybrid::Item_func_int_val_fix_length_and_dec()
called item->fix_length_and_dec_int_or_decimal(), which did not
produce a correct result data type for hex hybrids.
Implementing dedicated code instead, to return INT UNSIGNED or
BIGINT UNSIGNED depending on the number of digits in the argument
(see the sketch after this list).
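The following standalone sketch illustrates the digit-related points above:
the exact decimal digit count for an n-bit value (digits_by_nbits(16) == 5
also matches the ENUM/SET bound of 65535), and choosing INT UNSIGNED vs
BIGINT UNSIGNED from a digit count. It is illustrative only; the function
names and the 9-digit cut-off are assumptions, not the server's actual code.

  #include <cstdint>
  #include <iostream>

  // Exact count of decimal digits in the largest n-bit unsigned integer,
  // valid for nbits 1..64 (the old implementation was approximate).
  static unsigned digits_by_nbits(unsigned nbits)
  {
    uint64_t max_value= nbits >= 64 ? UINT64_MAX : (uint64_t{1} << nbits) - 1;
    unsigned digits= 1;
    while (max_value >= 10) { max_value/= 10; digits++; }
    return digits;
  }

  // Hypothetical result-type choice for a hex hybrid: values whose precision
  // fits a 32-bit integer map to INT UNSIGNED, wider ones to BIGINT UNSIGNED.
  static const char *int_type_for_digits(unsigned decimal_digits)
  {
    return decimal_digits <= 9 ? "INT UNSIGNED" : "BIGINT UNSIGNED";
  }

  int main()
  {
    std::cout << digits_by_nbits(16) << "\n";   // 5 (65535, the ENUM/SET bound)
    std::cout << digits_by_nbits(64) << "\n";   // 20
    std::cout << int_type_for_digits(digits_by_nbits(64)) << "\n";  // BIGINT UNSIGNED
    return 0;
  }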
Do not allow instant ALTER of a field which is a prefix part of some key.
is_part_of_a_key_prefix(): new helper; the name should speak for itself.
ha_innobase::can_convert_string()
ha_innobase::can_convert_varstring()
ha_innobase::can_convert_blob(): conversion is not possible when the field
is_part_of_a_key_prefix() (a rough sketch of the check follows).
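A rough standalone sketch of the prefix-part detection (the data structures
here are simplified stand-ins, not InnoDB's actual dictionary objects): a
column is a prefix part of a key when some key part references it with a
length shorter than the whole column.

  #include <iostream>
  #include <string>
  #include <vector>

  struct Key_part { std::string column; unsigned prefix_len; };  // 0 = whole column
  struct Key { std::vector<Key_part> parts; };

  static bool is_part_of_a_key_prefix(const std::vector<Key> &keys,
                                      const std::string &column)
  {
    for (const Key &key : keys)
      for (const Key_part &part : key.parts)
        if (part.column == column && part.prefix_len != 0)
          return true;   // instant conversion of this column must be refused
    return false;
  }

  int main()
  {
    std::vector<Key> keys{ Key{ { Key_part{"t", 100} } } };  // KEY(t(100))
    std::cout << is_part_of_a_key_prefix(keys, "t") << "\n"; // 1
    return 0;
  }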
An instant ADD/DROP/reorder column operation could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
dict_table_t::prepare_instant(): Add debug assertions to catch
ROW_FORMAT mismatch.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
The mutex was already locked on startup when server_audit was loaded
and orphan records existed in mysql.plugin.
Pre-acquire auditing plugins before LOCK_plugin to avoid double locking.
In commit 0e5a4ac253 (MDEV-15562)
we introduced a bogus debug assertion, demanding that dict_col_t::len
never exceed some maximum value. The intention was to match what
dict_index_add_col() is doing. But, the assignment field->fixed_len = 0
applies to dict_field_t::fixed_len, not dict_col_t::len!
An assertion
`server_state_.rollback_mode() == wsrep::server_state::rm_async`
fired in before_command() when
- thread-handling was set to pool-of-threads and
- a BF abort happened between client session calls to
wait_rollback_complete_and_acquire_ownership() and before_command().
This commit introduces a test case to reproduce the crash and
updates the wsrep-lib submodule to a fixed version.
The sporadic test hangs happen because of a mutex deadlock between InnoDB
background threads and the two test connection executions.
The test sets the variable innodb_disallow_writes, which blocks all writes
to the filesystem. The test logic is to execute an INSERT, which should hang
because filesystem writes are blocked, and through another session
verify by SELECT that this hang happens. The SELECT session will then
release the innodb_disallow_writes blocking.
However, filesystem write blocking also affects InnoDB background threads,
and they may hang while keeping some other resources locked.
As an example, in one test hang situation, buffer pool access was blocked.
And if the buffer pool is blocked, the test connections will be blocked as
well, and the SELECT session will not be able to continue and release
innodb_disallow_writes.
The fix in this commit is a refactoring of the test logic.
The test will now first set innodb_disallow_writes blocking, and then record
a hash of the data directory's filesystem contents. This works as a checksum
of the state of the data in the data directory.
Then some SQL load is attempted on both nodes; these sessions will block
due to the frozen filesystem state. The test will have a short sleep to allow
InnoDB background threads to loop and possibly encounter the
innodb_disallow_writes blocking as well.
After the sleep, the test will record the filesystem checksum for the second
time, and then release the innodb_disallow_writes blocking.
Finally, the two checksums are compared; they should be identical, verifying
that nothing was written to the data directory during the test execution.
The checksum is implemented as an md5sum hash over all files found in the
data directory by the find command. All these file hashes are hashed together
by one more md5sum (a rough sketch of the idea follows).
The test therefore depends on md5sum and find. find may work differently on
some OS distributions; e.g. FreeBSD may be problematic.
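For illustration, here is a rough C++17 stand-in for that checksum idea
(std::filesystem instead of find, std::hash instead of md5sum; this is not
the mtr test itself, just a sketch of "hash every file, then hash the
combined per-file digests"):

  #include <algorithm>
  #include <filesystem>
  #include <fstream>
  #include <functional>
  #include <iostream>
  #include <iterator>
  #include <sstream>
  #include <string>
  #include <vector>

  static size_t datadir_checksum(const std::filesystem::path &datadir)
  {
    // Collect and sort all regular files so the result is order-independent.
    std::vector<std::filesystem::path> files;
    for (const auto &entry :
         std::filesystem::recursive_directory_iterator(datadir))
      if (entry.is_regular_file())
        files.push_back(entry.path());
    std::sort(files.begin(), files.end());

    // Hash each file's contents, then hash the concatenated digests.
    std::ostringstream combined;
    for (const auto &path : files)
    {
      std::ifstream in(path, std::ios::binary);
      std::string contents((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
      combined << path.string() << ':'
               << std::hash<std::string>{}(contents) << '\n';
    }
    return std::hash<std::string>{}(combined.str());
  }

  int main(int argc, char **argv)
  {
    if (argc > 1)
      std::cout << datadir_checksum(argv[1]) << "\n";
    return 0;
  }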
The outline-atomics compilation flag changes the behaviour of builtin
atomics by adding runtime detection of LSE atomics. If they are supported,
they will be used. This gains the use of LSE atomics without hurting
compatibility with older aarch64 machines.
The aarch64 timer is available to userspace via an arch register.
clang's __builtin_readcyclecounter is wrong for aarch64 (it reads the PMU
cycle counter instead of the arch-timer register), so we don't use it.
my_rdtsc unit-test on AWS m6g shows:
frequency: 121830845
resolution: 1
overhead: 1
This counter is not strictly increasing, but it is non-decreasing.
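As a hedged illustration (not the actual my_rdtsc implementation), reading
the AArch64 generic timer from userspace goes through the CNTVCT_EL0 system
register rather than __builtin_readcyclecounter:

  #include <cstdint>
  #include <iostream>

  static inline uint64_t read_arch_counter()
  {
  #if defined(__aarch64__)
    uint64_t value;
    // CNTVCT_EL0: the virtual counter of the ARM generic (arch) timer.
    asm volatile("mrs %0, cntvct_el0" : "=r"(value));
    return value;
  #else
    return 0;   // not available on this architecture
  #endif
  }

  int main()
  {
    // Non-decreasing but not strictly increasing: two consecutive reads may
    // return the same value, because the counter ticks slower than the CPU.
    uint64_t a= read_arch_counter();
    uint64_t b= read_arch_counter();
    std::cout << a << " " << b << "\n";
    return 0;
  }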
Starting with MDEV-17441 we would no longer have os_once,
and we would always initialize zip_pad_info_t::mutex and
dict_table_t::autoinc_mutex, even for tables that are not in
ROW_FORMAT=COMPRESSED and do not include any AUTO_INCREMENT column.
mutex_free() on those unnecessary objects would make shutdown very slow
compared to older versions.
Let us use std::mutex for those two mutexes, to reduce the overhead.
The critical sections protected by these mutexes are very small, and
therefore contention or the need for any instrumentation should
be unlikely.
InnoDB should replace the FSP_FLAGS_HAS_PAGE_COMPRESSION check with
fil_space_t::is_compressed(). fil_space_t::is_compressed() checks
both the full_crc32 and non-full_crc32 formats.
trx_update_mod_tables_timestamp(): When implementing
innodb_evict_tables_on_commit_debug, do not evict tables
on which transactional locks exist.
This debug variable was broken since its introduction in
commit 947b0b5722.
The issue was:
T1, a parallel slave worker thread, is waiting for another worker thread to
commit. While waiting, it has the MDL_BACKUP_COMMIT lock.
T2, working for mariabackup, is doing BACKUP STAGE BLOCK_COMMIT and blocks
all commits.
This causes a deadlock, because the thread that T1 is waiting for cannot
commit.
Fixed by moving the locking of MDL_BACKUP_COMMIT from ha_commit_trans() to
commit_one_phase_2().
Other things:
- Added a new argument to ha_commit_one_phase() to signal if the
transaction was a write transaction.
- Ensured that ha_maria::implicit_commit() is always called under
MDL_BACKUP_COMMIT. This code is not needed in 10.5
- Ensure that MDL_Request values 'type' and 'ticket' are always
initialized. This makes it easier to check the state of the MDL_Request.
- Moved thd->store_globals() earlier in handle_rpl_parallel_thread() as
thd->init_for_queries() could use an MDL that could crash if store_globals()
had not been called.
- Don't call ha_enable_transactions() in THD::init_for_queries() as this
is both slow (uses MDL locks) and not needed.
is_part_of_a_key(): detect whether a TEXT field is part of some key
ha_innobase::can_convert_blob(): now correctly detects whether our blob
is part of some key. Previously the check did not work in some cases.
Fix a stale virtual field value in 4 cases: when the virtual field depends
on row_start or row_end in a timestamp- or trx_id-versioned table. The
row_start dependency is recalculated in vers_update_fields() (SQL and InnoDB
layers). The row_end dependency is recalculated on history row insert.
make_versioned_helper() appended a new update field unconditionally,
while it should check whether this field already exists in the update vector.
Miscellaneous renames to conform to the versioning prefix. The
vers_update_fields() name conforms with the SQL-layer
TABLE::vers_update_fields().
When InnoDB is extending a data file, it is updating the FSP_SIZE
field in the first page of the data file.
In commit 8451e09073 (MDEV-11556)
we removed a work-around for this bug and made recovery stricter,
by making it track changes to FSP_SIZE via redo log records, and
extend the data files before any changes are applied to them.
It turns out that the function fsp_fill_free_list() is not crash-safe
with respect to this when it is initializing the change buffer bitmap
page (page 1, or generally, N*innodb_page_size+1). It uses a separate
mini-transaction that is committed (and will be written to the redo
log file) before the mini-transaction that actually extended the data
file. Hence, recovery can observe a reference to a page that is
beyond the current end of the data file.
fsp_fill_free_list(): Initialize the change buffer bitmap page in
the same mini-transaction.
The rest of the changes are fixing a bug that the use of the separate
mini-transaction was attempting to work around. Namely, we must ensure
that no other thread will access the change buffer bitmap page before
our mini-transaction has been committed and all page latches have been
released.
That is, for read-ahead as well as neighbour flushing, we must avoid
accessing pages that might not yet be durably part of the tablespace.
fil_space_t::committed_size: The size of the tablespace
as persisted by mtr_commit().
fil_space_t::max_page_number_for_io(): Limit the highest page
number for I/O batches to committed_size.
MTR_MEMO_SPACE_X_LOCK: Replaces MTR_MEMO_X_LOCK for fil_space_t::latch.
mtr_x_space_lock(): Replaces mtr_x_lock() for fil_space_t::latch.
mtr_memo_slot_release_func(): When releasing MTR_MEMO_SPACE_X_LOCK,
copy space->size to space->committed_size. In this way, read-ahead
or flushing will never be invoked on pages that do not yet exist
according to FSP_SIZE.
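A simplified sketch of that bookkeeping (the field and function names only
mirror the wording above; this is not the real fil_space_t): I/O batches are
capped at the committed size, which only advances when the mini-transaction
that extended the file releases its tablespace latch.

  #include <algorithm>
  #include <cstdint>
  #include <iostream>

  struct Space
  {
    uint32_t size= 0;             // current size in pages, possibly uncommitted
    uint32_t committed_size= 0;   // size as persisted by mtr_commit()

    // Highest page number that read-ahead or neighbour flushing may touch.
    uint32_t max_page_number_for_io(uint32_t limit) const
    {
      return std::min(limit, committed_size);
    }

    // Invoked when the mini-transaction that extended the file commits
    // and releases its exclusive latch on the tablespace.
    void on_space_x_lock_release() { committed_size= size; }
  };

  int main()
  {
    Space space;
    space.committed_size= space.size= 100;
    space.size= 164;   // file extended, extension not yet durably committed
    std::cout << space.max_page_number_for_io(164) << "\n";   // 100
    space.on_space_x_lock_release();
    std::cout << space.max_page_number_for_io(164) << "\n";   // 164
    return 0;
  }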
commit 854c219a7f (MDEV-17301)
broke a constraint: Fixed-length columns cannot be extended in InnoDB
without rebuilding the table.
ha_innobase::can_convert_string(): Correct the condition. We must
not allow any instantaneous change to the length of CHAR columns
measured in characters. For any format other than ROW_FORMAT=REDUNDANT,
we can allow the length in bytes to be extended if mbminlen<mbmaxlen held
before the change of the character set.
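Stated as a standalone sketch (the parameter and struct names are
assumptions, not the real handler API), the rule above reads roughly as:

  #include <cstdint>
  #include <iostream>

  struct Char_col
  {
    uint32_t char_length;   // column length measured in characters
    uint32_t byte_length;   // column length measured in bytes
    uint32_t mbminlen;      // minimum bytes per character of the charset
    uint32_t mbmaxlen;      // maximum bytes per character of the charset
  };

  static bool can_convert_char_instantly(const Char_col &before,
                                         const Char_col &after,
                                         bool row_format_redundant)
  {
    if (before.char_length != after.char_length)
      return false;                     // never change the character count
    if (before.byte_length == after.byte_length)
      return true;
    if (row_format_redundant || after.byte_length < before.byte_length)
      return false;
    // The byte length may only grow if mbminlen < mbmaxlen held before.
    return before.mbminlen < before.mbmaxlen;
  }

  int main()
  {
    Char_col latin1{10, 10, 1, 1}, utf8mb3{10, 30, 1, 3}, utf8mb4{10, 40, 1, 4};
    std::cout << can_convert_char_instantly(latin1, utf8mb4, false) << "\n";  // 0
    std::cout << can_convert_char_instantly(utf8mb3, utf8mb4, false) << "\n"; // 1
    return 0;
  }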
The server auto-sets the lower_case_file_system value based on the default
datadir's behaviour instead of using the directory specified
by the user through the configuration file or command-line options.
This patch fixes this problem.