ha_partition: Remove redundant 'virtual' keywords and add
missing 'override'.
FIXME: handler::table_type() is not declared virtual, yet ha_partition
and ha_sequence are seemingly trying to override it.
- The flag ALTER_STORED_COLUMN_TYPE was set while doing VARCHAR
extension for a partitioned table. If all partitions support
can_be_converted_by_engine(), the flag should instead be set to
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE.
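A minimal sketch of the intended logic (the flag values and the
part_can_convert() helper are illustrative stand-ins, not the actual
ha_partition code):

  #include <cstdint>

  typedef uint64_t alter_flags_t;
  static const alter_flags_t ALTER_STORED_COLUMN_TYPE= 1ULL << 0;
  static const alter_flags_t ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE= 1ULL << 1;

  /* stand-in for the per-partition can_be_converted_by_engine() check */
  extern bool part_can_convert(unsigned part_id);

  static void adjust_alter_flags(alter_flags_t &flags, unsigned n_parts)
  {
    for (unsigned i= 0; i < n_parts; i++)
      if (!part_can_convert(i))
        return;                     /* keep ALTER_STORED_COLUMN_TYPE */
    /* every partition can convert natively: switch the flag */
    flags&= ~ALTER_STORED_COLUMN_TYPE;
    flags|= ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE;
  }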
Inserting a record into an index page involves updating multiple
fields in the page header as well as updating the next-record links
and potentially updating fields related to the sparse page directory.
Let us cover the insert operations by higher-level log records, to avoid
'redundant' logging about the writes.
The code for applying the high-level log records will check the
consistency of the page thoroughly, to avoid crashes during recovery.
We will refuse to replay the inserts if any inconsistency is detected.
With innodb_force_recovery=1, recovery will continue, but the affected
pages may be more inconsistent if some changes were omitted.
mrec_ext_t: Introduce the EXTENDED record subtypes
INSERT_HEAP_REDUNDANT, INSERT_REUSE_REDUNDANT,
INSERT_HEAP_DYNAMIC, INSERT_REUSE_DYNAMIC.
The record will explicitly identify the page type and whether
the space will be allocated from PAGE_HEAP_TOP or reused from
the PAGE_FREE list. It will also tell how many bytes to copy
from the preceding record header and payload, and how to
initialize the rest of the record header and payload.
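For orientation, a sketch of the shape of these subtypes (the numeric
values here are illustrative; the authoritative definitions are in the
mrec_ext_t enumeration):

  #include <cstdint>

  enum mrec_ext_t : uint8_t
  {
    INSERT_HEAP_REDUNDANT= 4,  /* allocate from PAGE_HEAP_TOP, REDUNDANT */
    INSERT_REUSE_REDUNDANT= 5, /* reuse from PAGE_FREE list, REDUNDANT */
    INSERT_HEAP_DYNAMIC= 6,    /* allocate from PAGE_HEAP_TOP, DYNAMIC */
    INSERT_REUSE_DYNAMIC= 7    /* reuse from PAGE_FREE list, DYNAMIC */
  };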
mtr_t::page_insert(): Write the high-level log records.
log_phys_t::apply(): Parse the high-level log records.
page_apply_insert_redundant(), page_apply_insert_dynamic():
Apply the high-level log records.
page_dir_split_slot(): Introduce a variant that does not write log
nor deal with ROW_FORMAT=COMPRESSED pages.
page_mem_alloc_heap(): Remove the mtr_t parameter.
page_cur_insert_rec_low(): Write log only via mtr_t::page_insert().
This is a follow-up to commit 572d20757b
where we introduced the EXTENDED log record subtypes
DELETE_ROW_FORMAT_REDUNDANT and DELETE_ROW_FORMAT_DYNAMIC.
log_phys_t::apply(): If corruption was noticed, stop applying the log
unless innodb_force_recovery is set.
This is a follow-up to commit 84e3f9ce84
that introduced the EXTENDED log record of UNDO_APPEND subtype.
mtr_t::undo_append(): Accurately enforce the mtr_buf_t::MAX_DATA_SIZE
limit. Also, replace mtr_buf_t::push() with simpler code to append
1 byte to the log.
log_phys_t::undo_append(): Return whether the page was found to
be in an inconsistent state.
log_phys_t::apply(): If corruption was noticed, stop applying log
unless innodb_force_recovery is set.
added CMake checks for the pam_ext.h and pam_appl.h headers
added a check for pam_syslog()
added a fallback pam_syslog() for when it doesn't exist
all CMake checks are performed from inside the plugin
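A sketch of what such a fallback could look like (assuming the usual
vsyslog()-based shim; the HAVE_PAM_SYSLOG macro name is a stand-in for
the result of the CMake check):

  #include <stdarg.h>
  #include <syslog.h>
  #include <security/pam_appl.h>

  #ifndef HAVE_PAM_SYSLOG
  static void pam_syslog(const pam_handle_t *pamh, int priority,
                         const char *fmt, ...)
  {
    va_list args;
    (void) pamh;                /* the handle is unused in this shim */
    va_start(args, fmt);
    vsyslog(priority, fmt, args);
    va_end(args);
  }
  #endif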
When there are E empty partitions left, auto-create N new empty
partitions for SYSTEM_TIME partitioning rotated by INTERVAL/LIMIT and
marked by the AUTO_INCREMENT keyword. Syntax change: the AUTO_INCREMENT
keyword (or the shorter AUTO) may be used after the LIMIT/INTERVAL
clause.
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 100000 AUTO_INCREMENT;
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK AUTO_INCREMENT;
The current revision hard-codes the values E = 1 and N = 1, as well as
the auto-creation thresholds MinInterval = 1 hour and MinLimit = 1000.
The name for a newly added partition is first chosen as "pX", where X
is the partition number and "p" is a hard-coded name prefix. If this
name is already taken, X is incremented until the resulting name is
free to use.
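A sketch of the probing loop (partition_name_taken() is a hypothetical
stand-in for the actual name lookup):

  #include <stdio.h>
  #include <stddef.h>

  extern bool partition_name_taken(const char *name);

  static void choose_partition_name(char *buf, size_t size,
                                    unsigned next_part_no)
  {
    unsigned x= next_part_no;
    do
      snprintf(buf, size, "p%u", x++);  /* "p" is the hard-coded prefix */
    while (partition_name_taken(buf));  /* bump X until the name is free */
  }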
ALTER TABLE ADD PARTITION is now always fast. If history partition
overflow occurs, a manual ALTER TABLE REBUILD PARTITION is needed.
join_cache_level=6+
The patch fixes two similar bugs in the commit 8eeb689e9f
that added multi_range_read support to partitions. The commit opened
the possibility of joining a partitioned table using BKA+MRR. However,
in some cases this could lead to wrong results or even crashes.
This could happen when
- index condition pushdown was used to join the table, or
- the joined table was an inner table of an outer join and the
'not exists' optimization was applied, or
- the joined table was the inner table of a semi-join and the first
match optimization was applied.
The bugs were in the code of the callback functions
- partition_multi_range_key_skip_record() and
- partition_multi_range_key_skip_index_tuple().
Each of these functions consists only of an invocation of another
function, yet a wrong parameter was passed in that invocation.
The fix was suggested by Sergey Petrunia, and it is apparently in line
with the original design.
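In outline, the bug pattern was roughly the following (types and names
are simplified stand-ins; the actual functions are in ha_partition.cc):

  /* the wrapper's range info embeds the inner handler's range info */
  struct part_range_info { void *inner_range_info; };

  extern bool underlying_skip_record(void *seq, void *range_info,
                                     unsigned char *rowid);

  static bool partition_skip_record(void *seq, part_range_info *ri,
                                    unsigned char *rowid)
  {
    /* The bug: the outer range info was forwarded as-is.
       The fix: forward the inner handler's range info instead. */
    return underlying_skip_record(seq, ri->inner_range_info, rowid);
  }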
The corresponding comprehensive test cases demonstrating the problems
caused by the bugs were constructed by me.
This bug was introduced in
commit 7ae21b18a6
(the main commit of MDEV-12353).
page_cur_insert_rec_low(): Before entering the comparison loop, make
sure that the range does not exceed c_end already at the start of the
loop. The loop only compares for pointer equality, and the condition
cdm == c_end would never hold if the end had already been exceeded at
the beginning. Also, skip the comparison altogether if we could find
at most 2 equal bytes.
PageBulk::insertPage(): Apply a similar change. It seems that this
code was correct, because the loop checks for cdm < c_end.
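The hazard in miniature (a generic sketch, not the actual InnoDB code):
with an equality-only termination test, the end pointer must be clamped
before the loop starts, or the loop can run past the buffer:

  #include <stddef.h>

  /* Count matching leading bytes of a and b, comparing at most max_len
     bytes and never reading past a_end. */
  static size_t common_prefix(const unsigned char *a, const unsigned char *b,
                              const unsigned char *a_end, size_t max_len)
  {
    const unsigned char *cdm= a;
    /* Clamp first: with the cdm != c_end test below, a c_end that lies
       before cdm would never be reached. */
    const unsigned char *c_end= a_end < a + max_len ? a_end : a + max_len;
    while (cdm != c_end && *cdm == *b)
      cdm++, b++;
    return size_t(cdm - a);
  }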
If an async replication slave thread conflicts with cluster
replication, then the async slave transaction should be BF aborted and,
depending on the state of its execution, potentially also replayed.
There were problems in this BF abort implementation, and the replaying
was not started.
This pull request contains fixes which make sure that if the async
slave thread is marked for abort and replay, it will carry out the
rollback and release all locks and resources before starting the
replaying. After replaying, the async slave transaction is treated as
successful, so the slave thread will continue as usual, handling the
next replication event.
There is also a new mtr test, galera.galera_slave_replay, which
stresses both a certification failure for an async slave thread and a
successful BF abort followed by replaying.
* The `--defaults-file` option is shown in `--help --verbose` output
only if it was applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`;
previously it was treated as a directory with `my.cnf` appended
mrec_ext_t: Introduce DELETE_ROW_FORMAT_REDUNDANT,
DELETE_ROW_FORMAT_DYNAMIC.
mtr_t::page_delete(): Write DELETE_ROW_FORMAT_REDUNDANT or
DELETE_ROW_FORMAT_DYNAMIC log records. We log the byte offset
of the preceding record, so that on recovery we can easily
find everything to update. For DELETE_ROW_FORMAT_DYNAMIC,
we must also write the header and data size of the record.
We will retain the physical logging for ROW_FORMAT=COMPRESSED pages.
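Schematically, the new records carry the following (an exposition-only
sketch; the actual encoding uses the log format's variable-length
integers, not a struct):

  #include <cstdint>

  struct delete_rec_payload        /* hypothetical, for exposition only */
  {
    uint16_t prev_offset;  /* byte offset of the record preceding the
                              deleted one */
    /* DELETE_ROW_FORMAT_DYNAMIC additionally carries: */
    uint16_t extra_size;   /* header size of the deleted record */
    uint16_t data_size;    /* payload size of the deleted record */
  };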
page_zip_dir_balance_slot(): Renamed from page_dir_balance_slot(),
and specialized for ROW_FORMAT=COMPRESSED only.
page_rec_set_n_owned(), page_dir_slot_set_n_owned(),
page_dir_balance_slot(): New variants that do not write any log.
page_mem_free(): Take data_size, extra_size as parameters.
Always zerofill the record payload.
page_cur_delete_rec(): For other than ROW_FORMAT=COMPRESSED,
only write log by mtr_t::page_delete().
* Remove those tests that will not be supported in that release.
* Make sure that the correct tests are disabled and have MDEVs
* Sort test names
This should not be merged upwards.
There is no reason for the dummy index object dict_ind_redundant
to exist any more. It was only being passed to btr_create().
btr_create(): If !index, assume that a ROW_FORMAT=REDUNDANT
table is being created.
We could pass ibuf.index, dict_sys.sys_tables->indexes.start
and so on, if those objects had been initialized before the
function btr_create() is called.
recv_sys_t opened redo log files along with log_sys_t. That's why I
removed the file sharing logic from InnoDB
in 9ef2d29ff4.
But it was actually used to ensure that only one MariaDB instance
would touch the same InnoDB files.
os0file.cc: revert some changes done previously
mapped_file_t::map(): now has arguments read_only, nvme
file_io::open(): now has argument read_only
class file_os_io: make final
log_file_t::open(): now has argument read_only
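The rough shape of the changed interfaces (dberr_t reduced to a
stand-in; parameter lists abbreviated):

  typedef int dberr_t;             /* stand-in for InnoDB's error type */

  class file_io
  {
  public:
    virtual ~file_io()= default;
    virtual dberr_t open(const char *path, bool read_only)= 0;
  };

  class file_os_io final : public file_io   /* now final */
  {
  public:
    dberr_t open(const char *path, bool read_only) override
    { return 0; /* the real code opens the file via the OS */ }
  };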
Part#2: cleanup:
In the part 1 of the fix, DS-MRR implementation would peek into
the JOIN_TAB to get the rowid filter from
table->reginfo.join_tab->rowid_filter
This doesn't look good from a code isolation standpoint (why should a
storage engine assume it is used through a JOIN_TAB?).
Fixed this by storing the 'un-pushed' rowid_filter in the DsMrr_impl
structure. The filter survives across multi_range_read_init() calls.
It is discarded when somebody calls index_end() or rnd_end() and cleans
up the DsMrr_impl.
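Schematically (the member name is an assumption based on the
description above):

  class Rowid_filter;              /* only a pointer is needed here */

  class DsMrr_impl
  {
  public:
    /* the 'un-pushed' filter; survives multi_range_read_init() calls */
    Rowid_filter *rowid_filter= nullptr;

    void dsmrr_close()             /* reached from index_end()/rnd_end() */
    { rowid_filter= nullptr; }     /* discard the filter */
  };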
Detecting CPUs based on sysconf's count of online CPUs can
significantly overestimate the number of CPUs available.
Whether via numactl, cgroups, taskset, systemd constraints, docker
containers, or probably other mechanisms, the number of CPUs mysqld
can run on can be considerably smaller.
As such, we use the pthread_getaffinity_np function on Linux and
FreeBSD (identical API) to get the number of CPUs.
The number of CPUs is the default for thread_pool_size, and a too high
default will result in large memory usage and high context switching
overhead.
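A minimal sketch of the detection on Linux (FreeBSD's call has the same
shape), falling back to the old sysconf() behaviour on error:

  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE
  #endif
  #include <pthread.h>
  #include <sched.h>
  #include <unistd.h>

  static long num_usable_cpus()
  {
    cpu_set_t set;
    if (pthread_getaffinity_np(pthread_self(), sizeof set, &set) == 0)
      return CPU_COUNT(&set);    /* CPUs this process may actually use */
    return sysconf(_SC_NPROCESSORS_ONLN);  /* old behaviour: all online */
  }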
Closes PR #922
(Backport to 10.3)
The partitioning storage engine now supports MRR but doesn't support
Index Condition
Pushdown (aka ICP). This causes counter-intuitive query plans for queries
that use BKA and conditions that depend on index fields:
- If the condition refers to other tables, BKA's variant of ICP is used
to handle it.
- If the condition depends on this table only, the optimizer will try to
use regular ICP for it, which will fail because the storage engine
doesn't support ICP.
Make the optimizer smarter in the second case: if we were not able to
use regular ICP, use BKA's variant of ICP.
We introduce an EXTENDED log record for appending an undo log record
to an undo log page. This is equivalent to the MLOG_UNDO_INSERT record
that was removed in commit f802c989ec,
only using a more compact encoding.
mtr_t::log_write(): Fix a bug that affects longer log
record writes in the !same_page && !have_offset case.
Similar code is already implemented for the have_offset code path.
The bug was unobservable before we started to write longer
EXTENDED records. All !have_offset records (FREE_PAGE, INIT_PAGE,
EXTENDED) that were written so far are short, and we never write
RESERVED or OPTION records.
mtr_t::undo_append(): Write an UNDO_APPEND record.
log_phys_t::undo_append(): Apply an UNDO_APPEND record.
trx_undo_page_set_next_prev_and_add(),
trx_undo_page_report_modify(),
trx_undo_page_report_rename():
Invoke mtr_t::undo_append() instead of emitting WRITE records.