With INTERSECT/EXCEPT, the fact that the subquery item of IN/ALL/ANY
was not assigned a value does not mean that the temporary table used
for calculating the unit is empty (records could have been deleted).
row_log_table_apply_op(): Remove references to dict_table_t::n_vcols.
Virtual column information is no longer being written to the log.
row_log_t: Remove the unused fields n_old_col, n_old_vcol.
When MySQL 5.7 introduced indexed virtual columns, it introduced
several bugs into the online table-rebuilding ALTER, that is,
the row_log_table_apply() family of functions.
The online_log format that was introduced for online table-rebuilding
ALTER in MySQL 5.6 should be sufficient. Ideally, any indexed virtual
column values would be evaluated based on the log records in the temporary
file. There is no need to log virtual column values.
(For ADD INDEX, that is row_log_apply(), we must always log the values
of the keys, whether or not the columns are virtual.)
Because omitting the virtual column values removes any chance of
row_log_table_apply() working with indexed virtual columns, we
will for now refuse LOCK=NONE in table-rebuilding ALTER operations
when indexes on virtual columns exist. This restriction will be
lifted in MDEV-14341.
innobase_indexed_virtual_exist(): New predicate, to determine if
indexed virtual columns exist in a table definition.
ha_innobase::check_if_supported_inplace_alter(): Refuse online rebuild
if indexed virtual columns exist.
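As a standalone sketch of what such a predicate amounts to (the struct
names below are illustrative assumptions, not the server's actual
TABLE/KEY definitions):

  #include <vector>

  // Illustrative model only: a table definition as a list of keys, where
  // each key part refers to a column that is either stored or virtual.
  struct ColumnDef { bool is_virtual; };
  struct KeyDef    { std::vector<const ColumnDef*> parts; };
  struct TableDef  { std::vector<KeyDef> keys; };

  // True if any index covers at least one virtual column, mirroring what
  // innobase_indexed_virtual_exist() is described to check above.
  static bool indexed_virtual_exist(const TableDef& table)
  {
    for (const KeyDef& key : table.keys)
      for (const ColumnDef* col : key.parts)
        if (col->is_virtual)
          return true;
    return false;
  }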
rec_get_converted_size_temp_v(), rec_convert_dtuple_to_temp_v(): Remove.
row_log_table_delete(), row_log_table_update(), row_log_table_insert():
Remove parameters for virtual columns.
trx_undo_read_v_rows(): Remove the col_map parameter.
row_log_table_apply(): Do not deal with virtual columns.
Differentiate between pullout for merge and pullout of an outer field
during the exists2in transformation.
In the latter case the field was outer, so we can safely start from the
name resolution context of the SELECT from which it was pulled.
The old behavior led to an inconsistency between the list of tables and
the outer name resolution context (which skips one SELECT for merge
purposes), which caused problems for name resolution.
Recording mtr.add_suppression("Could not use .*")
instead of mtr.add_suppression("Could not open .*")
in tests:
- binlog_encryption.rpl_binlog_errors
- binlog_encryption.binlog_index
There was a problem in the recent patch for MDEV-10817:
get_date() must "return true" on null_value rather than "return 0"
(forgot to fix Item_sum_hybrid::get_date() after copying and pasting
from Item_sum_hybrid::val_str()).
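As a minimal standalone illustration of the convention (not the actual
Item_sum_hybrid code; the names below are assumptions for this sketch):
get_date()-style functions signal SQL NULL by returning true, while
val_str()-style functions signal it by returning a null pointer, so a
copied "return 0" inverts the meaning.

  #include <cassert>

  struct HybridAggregate {
    bool null_value = true;
    long long cached_date = 0;

    bool get_date(long long* out) const {
      if (null_value)
        return true;        // correct: true means "the value is NULL"
      *out = cached_date;   // "return 0" here would claim a valid date
      return false;
    }
  };

  int main() {
    HybridAggregate agg;
    long long date;
    assert(agg.get_date(&date));  // NULL must be reported as true
  }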
FlushObserver::flush(): Never discard unwritten changes.
We do not want to risk corrupting the system tablespace
or .ibd files that are not part of a table-rebuilding ALTER.
Ever since MDEV-10813 cleaned up InnoDB use of atomic memory operations
and made buf_block_fix() an atomic operation, some conditions around
buf_block_fix() have been unnecessary.
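As a standalone model of the point (not the real buf_block_fix()): once
the fix count is a plain atomic counter, pinning a block needs no
surrounding mutex or extra precondition checks.

  #include <atomic>
  #include <cstdint>

  // Illustrative sketch of an atomically maintained buf_fix_count.
  struct BlockModel {
    std::atomic<uint32_t> buf_fix_count{0};

    uint32_t fix()   { return buf_fix_count.fetch_add(1) + 1; }
    uint32_t unfix() { return buf_fix_count.fetch_sub(1) - 1; }
  };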
With a big buffer pool that contains many data pages,
DISCARD TABLESPACE took a long time, because it would scan the
entire buffer pool to remove any pages that belong to the tablespace.
This was slow even when the table to be discarded was empty.
The minimum amount of work that DISCARD TABLESPACE must do is to
remove the pages of the to-be-discarded table from the
buf_pool->flush_list because any writes to the data file must be
prevented before the file is deleted.
If DISCARD TABLESPACE does not evict the pages from the buffer pool,
then IMPORT TABLESPACE must do it, because we must prevent pre-DISCARD,
not-yet-evicted pages from being mistaken for pages of the imported
tablespace.
It would not be a useful fix to simply move the buffer pool scan to
the IMPORT TABLESPACE step. What we can do is to actively evict those
pages that could be mistaken for imported pages. In this way, when
importing a small table into a big buffer pool, the import should
still run relatively fast.
IMPORT bypasses the buffer pool when reading pages for the
adjustment phase. In the adjustment phase, if a page exists in
the buffer pool, we could replace it with the page from the imported
file. Unfortunately I did not get this to work properly, so instead
we will simply evict any matching page from the buffer pool.
buf_page_get_gen(): Implement BUF_EVICT_IF_IN_POOL, a new mode
where the requested page will be evicted if it is found. There
must be no unwritten changes for the page.
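As a standalone model of the new mode (not the real buf_page_get_gen(),
whose signature carries many more parameters): look the page up and, if
a clean copy is cached, drop it instead of returning it.

  #include <cassert>
  #include <cstdint>
  #include <unordered_map>

  // Illustrative buffer-pool stand-in; the page id type and the dirty
  // flag are simplifications for this sketch.
  struct PageModel { bool has_unwritten_changes; };

  class PoolModel {
    std::unordered_map<uint64_t, PageModel> cache;
  public:
    // BUF_EVICT_IF_IN_POOL-style lookup: evict the page if it is cached.
    // The caller guarantees that the page has no unwritten changes.
    bool evict_if_in_pool(uint64_t page_id) {
      auto it = cache.find(page_id);
      if (it == cache.end())
        return false;                   // not cached: nothing to do
      assert(!it->second.has_unwritten_changes);
      cache.erase(it);                  // discard the stale copy
      return true;
    }
  };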
buf_remove_t: Remove. Instead, use trx!=NULL to signify that a write
to file is desired, and use a separate parameter bool drop_ahi.
buf_LRU_flush_or_remove_pages(), fil_delete_tablespace():
Replace buf_remove_t.
buf_LRU_remove_pages(), buf_LRU_remove_all_pages(): Remove.
PageConverter::m_mtr: A dummy mini-transaction buffer
PageConverter::PageConverter(): Complete the member initialization list.
PageConverter::operator()(): Evict any 'shadow' pages from the
buffer pool so that pre-existing (garbage) pages cannot be mistaken
for pages that exist in the being-imported file.
row_discard_tablespace(): Remove a bogus comment that seems to
refer to IMPORT TABLESPACE, not DISCARD TABLESPACE.
String comparison with utf8_bin collation is case sensitive.
Hence "DELIMITER" did not match "delimiter".
The delimiter command matching now uses my_charset_latin1.
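For illustration only (not the actual client code): the command keyword
has to be matched case-insensitively, which a latin1 comparison provides
and a binary collation does not.

  #include <strings.h>   // strncasecmp (POSIX)

  // Sketch: case-insensitive recognition of the delimiter command.
  static bool is_delimiter_command(const char* line)
  {
    return strncasecmp(line, "delimiter", 9) == 0;
  }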
ibuf_check_bitmap_on_import(): Only access the pages that
are below FSP_FREE_LIMIT. It is possible that especially with
ROW_FORMAT=COMPRESSED, the FSP_SIZE will be much bigger than
the FSP_FREE_LIMIT, and the bitmap pages (page_size*N, 1+page_size*N)
are filled with zero bytes.
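A sketch of the intended iteration, using a stub in place of the real
page check; it simply mirrors the page numbers quoted above.

  #include <cstdint>

  // Stub standing in for the actual bitmap page validation.
  static void check_bitmap_page(uint32_t /*page_no*/) {}

  // Visit the bitmap pages (page_size*N, 1 + page_size*N) only while
  // they lie below FSP_FREE_LIMIT.
  static void check_bitmaps_below_free_limit(uint32_t page_size,
                                             uint32_t free_limit)
  {
    for (uint32_t page_no = 0; page_no + 1 < free_limit;
         page_no += page_size) {
      check_bitmap_page(page_no);
      check_bitmap_page(page_no + 1);
    }
  }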
buf_page_is_corrupted(), buf_page_io_complete(): Make the
fault injection compatible with MariaDB 10.2.
Backport the IMPORT tests from 10.2.
On some old GNU/Linux systems, invoking posix_fallocate() with
offset=0 would sometimes cause already allocated bytes in the
data file to be overwritten.
Fix a correctness regression that was introduced in
commit 420798a81a
by invoking posix_fallocate() in a safer way.
A similar change was made in MDEV-5746 earlier.
os_file_get_size(): Avoid changing the state of the file handle,
by invoking fstat() instead of lseek().
os_file_set_size(): Determine the current size of the file
by os_file_get_size(), and then extend the file from that point
onwards.
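A minimal POSIX sketch of these two points (not the actual os_file_*
code): fstat() reports the size without touching the file offset, unlike
lseek(fd, 0, SEEK_END), and the extension then starts from that size.

  #include <sys/stat.h>
  #include <sys/types.h>

  // Sketch: determine the current file size without changing the state
  // of the file handle.
  static off_t get_file_size(int fd)
  {
    struct stat st;
    return fstat(fd, &st) == 0 ? st.st_size : (off_t) -1;
  }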
os_file_set_size(): If posix_fallocate() returns EINVAL, fall back
to writing zero bytes to the file. Also, remove some error log output,
and make it possible for a server shutdown to interrupt the fall-back
code.
MariaDB used to ignore any possible return value from posix_fallocate()
ever since innodb_use_fallocate was introduced in MDEV-4338. If EINVAL
was returned, the file would not be extended.
Starting with MDEV-11520, MariaDB would treat EINVAL as a hard error.
Why is EINVAL returned? The GNU posix_fallocate() function
would first try the fallocate() system call, which would return
-EOPNOTSUPP for many file systems (notably, not ext4). Then, it
would fall back to extending the file one block at a time by invoking
pwrite(fd, "", 1, offset) where offset is 1 less than a multiple of
the file block size. This would fail with EINVAL if the file is in
O_DIRECT mode, because O_DIRECT requires aligned operation.
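A hedged sketch of the fallback described above (not the actual
os_file_set_size()); it assumes that current_size and new_size are
multiples of a 4096-byte block, which in this sketch also satisfies the
O_DIRECT alignment requirement.

  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>

  // Extend a data file from current_size to new_size.  If
  // posix_fallocate() fails with EINVAL (for example, glibc's
  // pwrite()-based emulation clashing with O_DIRECT), append aligned
  // zero-filled blocks instead.
  static int extend_file(int fd, off_t current_size, off_t new_size)
  {
    int err = posix_fallocate(fd, current_size, new_size - current_size);
    if (err != EINVAL)
      return err;                       // 0 on success, or a hard error

    alignas(4096) static const unsigned char zeroes[4096] = {};

    for (off_t offset = current_size; offset < new_size; ) {
      ssize_t written = pwrite(fd, zeroes, sizeof zeroes, offset);
      if (written <= 0)
        return errno;                   // the real code would also let a
                                        // server shutdown interrupt this
      offset += written;
    }
    return 0;
  }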
It's better to prohibit pushdown of conditions that involve
regexp_substr() and regexp_replace() into materialized derived
tables / views until proper implementations of the get_copy()
virtual method are provided for those functions.
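As a standalone model of the reasoning (the class names are assumptions,
not the server's Item hierarchy): condition pushdown has to clone the
condition's item tree, so an item whose get_copy() cannot produce a
proper copy must not be pushed down.

  #include <memory>

  struct ItemModel {
    virtual ~ItemModel() = default;
    // Items without a proper clone implementation return nullptr.
    virtual ItemModel* get_copy() const { return nullptr; }
  };

  struct PushableItem : ItemModel {
    ItemModel* get_copy() const override { return new PushableItem(*this); }
  };

  // Pushdown into a materialized derived table/view needs a usable clone.
  static bool can_push_into_derived(const ItemModel& cond)
  {
    std::unique_ptr<ItemModel> clone(cond.get_copy());
    return clone != nullptr;
  }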
The support for embedded CTEs was not correct when embedded CTEs
were used multiple times. The problems occurred with
both non-recursive (bug mdev-13780) and recursive (bug mdev-14184)
embedded CTEs.
This was done to get more information about where time is spent.
Now we can get proper timing for time spent in commit, rollback,
binlog write etc.
The following stages were added:
- Commit
- Commit_implicit
- Rollback
- Rollback implicit
- Binlog write
- Init for update
- This is used instead of "Init" for insert, update and delete.
- Starting cleanup
The following stages were changed:
- The "Unlocking tables" stage resets the stage to the previous stage at its end
- The "binlog write" stage resets the stage to the previous stage at its end
- "end" -> "end of update loop"
- "cleaning up" -> "Reset for next command"
- Added stage_searching_rows_for_update when searching for rows
to be deleted.
Other things:
- Renamed all stages to start with a capital letter (before there was
  no consistency)
- Increased performance_schema_max_stage_classes from 150 to 160.
- Most of the test changes in performance schema come from the renaming
  of stages.
- Removed duplicate output of variables and initial state in a lot of
performance schema tests.
This was done to make it easier to change a default value for a
performance variable without affecting all tests.
- Added start_server_variables.test to check configuration
- Removed some duplicate "closing tables" stages
- Updated position for "stage_init_update" and "stage_updating" for
delete, insert and update to be just before the update loop (for more
exact timing).
- Don't set "Checking permissions" twice in a row.
- Removed the stage_end stage from creating views (not done for create table
either).
- Updated default performance history size from 10 to 20 because of new
stages
- Ensure that ps_enabled is correct (to be used in a later patch)
- innodb_buffer_pool_dump_now_basic is modified to make sure it
really performs a dump and waits for its completion, to avoid
the apparent or hidden failure similar to MDEV-9713 / MDEV-10651
- innodb_buffer_pool_dump_pct_basic is modified to re-use the new
code from innodb_buffer_pool_dump_now_basic and thus avoid
the MDEV-10651 failure
- innodb_buffer_pool_load_now_basic is re-written to simplify
the logic by re-using the code from innodb_buffer_pool_dump_now_basic
and is given an opt file to avoid race conditions with
buffer pool load performed upon server startup, which causes
the MDEV-14196 failure
The test would pass even with skipped partitioning, because
CHECK PARTITION for a view works identically with enabled/disabled
partitioning; but if the server is compiled without partitioning
at all, it cannot execute the statement, and the test would fail.
Checking for the presence of partitioning allows the test to be skipped
in this case, rather than letting it fail.