Introduce the configuration option innodb_log_optimize_ddl
for controlling whether native index creation or table-rebuild
in InnoDB should keep optimizing the redo log
(and writing MLOG_INDEX_LOAD records to ensure that
concurrent backup would fail).
By default, innodb_log_optimize_ddl=ON, that is, the default
behaviour that was introduced in MariaDB 10.2.2
(with the merge of InnoDB from MySQL 5.7) remains unchanged.
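A minimal usage sketch (assuming the variable is dynamic; otherwise the
setting would go under [mysqld] in my.cnf): disabling the optimization
makes native index creation and table rebuilds write full redo log, so
that a concurrent backup can succeed.
  SET GLOBAL innodb_log_optimize_ddl = OFF;  -- write full redo log for ADD INDEX / table rebuild
  SHOW GLOBAL VARIABLES LIKE 'innodb_log_optimize_ddl';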
BtrBulk::m_trx: Replaces m_trx_id. We must be able to check for
KILL QUERY even if !m_flush_observer (innodb_log_optimize_ddl=OFF).
page_cur_insert_rec_write_log(): Declare globally, so that this
can be called from PageBulk::insert().
row_merge_insert_index_tuples(): Remove the unused parameter trx_id.
row_merge_build_indexes(): Enable or disable redo logging based on
the innodb_log_optimize_ddl parameter.
PageBulk::init(), PageBulk::insert(), PageBulk::finish(): Write
redo log records if needed. For ROW_FORMAT=COMPRESSED, redo log
will be written in PageBulk::compress() unless we called
m_mtr.set_log_mode(MTR_LOG_NO_REDO).
The problem occurs on Ubuntu where a Spider package is installed on the system
separately from the MariaDB package. MariaDB and Spider upgrades leave the
Spider plugin improperly installed. Spider is present in the mysql.plugin
table but is not present in information_schema.
The problem has been corrected in Spider's installation script. Logic has
been added to check for Spider entries in both information_schema and
mysql.plugin. If Spider is present in mysql.plugin but is not present in
information_schema, then Spider is first removed from mysql.plugin. The
subsequent plugin install of Spider will insert entries in both mysql.plugin
and information_schema.
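A sketch of the check described above (illustrative SQL only, not the
literal installation script):
  -- is Spider registered in mysql.plugin but missing from information_schema?
  SELECT name FROM mysql.plugin WHERE name LIKE 'spider%';
  SELECT plugin_name FROM information_schema.plugins WHERE plugin_name LIKE 'spider%';
  -- if so, drop the stale registration first; the subsequent plugin install
  -- then recreates entries in both places
  DELETE FROM mysql.plugin WHERE name LIKE 'spider%';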
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 0897d81 on branch bb-10.3-MDEV-15786
During a table-rebuilding operation, the function table_name_parse()
can encounter a table name that starts with #sql. Here is an example
of a failure:
CURRENT_TEST: gcol.innodb_virtual_basic
mysqltest: At line 1204: query 'alter table t drop column d ' failed:
2013: Lost connection to MySQL server during query
Let us just remove these bogus debug assertions.
If the final renaming phase during ALTER TABLE never fails, it
should not do any harm to skip the purge. If it does fail, then
we might end up 'leaking' some delete-marked records in the
indexes on virtual columns of the original table, and these
garbage records would keep consuming space until the indexes are
dropped or the table is successfully rebuilt.
This is a regression caused by the fix of MDEV-15855.
purge_vcol_info_t::set_used(): Add a missing condition.
row_purge_poss_sec(): Invoke set_used() in order to
have !is_first_fetch() when retrying.
There is only one temporary tablespace. Reserving a data member in
each read-write lock object for a Boolean flag seems like a huge waste,
especially because this field was only actually used in debug builds.
LatchDebug::check_order(): Compare to fil_system.temp_space->latch.
The problem occurred because the Spider node was incorrectly handling
timestamp values sent to and received from the data nodes.
The problem has been corrected as follows:
- Added logic to set and maintain the UTC time zone on the data nodes.
To prevent timestamp ambiguity, it is necessary for the data nodes to use
a time zone such as UTC which does not have daylight savings time.
- Removed the spider_sync_time_zone configuration variable, which did not
solve the problem and which interfered with the solution.
- Added logic to convert to the UTC time zone all timestamp values sent to
and received from the data nodes. This is done for both unique and
non-unique timestamp columns. It is done for WHERE clauses, applying to
SELECT, UPDATE and DELETE statements, and for UPDATE columns.
- Disabled Spider's use of direct update when any of the columns to update is
a timestamp column. This is necessary to prevent false duplicate key value
errors.
- Added a new test spider.timestamp to thoroughly test Spider's handling of
timestamp values.
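A sketch of the UTC handling described above (illustrative statements
only; Spider performs the conversion internally):
  -- on the data nodes: use a time zone without daylight savings time
  SET GLOBAL time_zone = '+00:00';
  -- conceptually, a timestamp value sent to a data node is converted like this
  SELECT CONVERT_TZ('2018-07-01 12:00:00', '+03:00', '+00:00');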
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 97cc9d3 on branch bb-10.3-MDEV-16246
The parameter innodb_lock_schedule_algorithm was introduced in
MariaDB Server 10.1.19, 10.2.13, 10.3.4 as part of MDEV-11039.
In MariaDB 10.1, the default value of the parameter is 'fcfs',
that is, the existing algorithm is used by default. But in
later versions of MariaDB Server, the default value was 'vats',
enabling the new algorithm.
Because the new algorithm is triggering a debug assertion failure
that suggests corruption of the transactional lock data structures,
we will revert to the old algorithm by default until we have
resolved the problem.
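A minimal sketch for checking or pinning the algorithm (hypothetical
session; the value can also be given as a startup option):
  SHOW GLOBAL VARIABLES LIKE 'innodb_lock_schedule_algorithm';
  -- the old first-come-first-served behaviour can be requested explicitly,
  -- e.g. with --innodb-lock-schedule-algorithm=fcfs at server startup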
Problem:
========
Truncate operation holds MDL on the table (t1) and tries to
acquire InnoDB dict_operation_lock. Purge holds dict_operation_lock
and tries to acquire MDL on the table (t1) to evaluate virtual
column expressions for indexed virtual columns.
It leads to deadlock of purge and truncate table (DDL).
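A hypothetical schema that exercises this scenario (names are
illustrative only):
  CREATE TABLE t1 (a INT, b INT GENERATED ALWAYS AS (a + 1) VIRTUAL, KEY(b)) ENGINE=InnoDB;
  INSERT INTO t1(a) VALUES (1);
  DELETE FROM t1;       -- leaves undo records that purge must process for the index on b
  TRUNCATE TABLE t1;    -- holds MDL on t1 while purge may need MDL to evaluate b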
Solution:
=========
If purge tries to acquire MDL on the table then it should do the following:
i) Purge should release all innodb latches (including dict_operation_lock)
before acquiring metadata lock on the table.
ii) After acquiring metadata lock on the table, it should check whether the
table was dropped or renamed. If the table is dropped then purge should
ignore the undo log record. If the table is renamed then it should
release the old MDL and acquire MDL on the new name.
iii) Once purge acquires the MDL, it should use the SQL table handle for all
the remaining virtual index operations on the purge record.
purge_node_t: Introduce new virtual column information to know whether
the MDL was acquired successfully.
This is joint work with Marko Mäkelä.
Make dict_table_t::n_ref_count private, and protect it with
a combination of dict_sys->mutex and atomics. We want to be
able to invoke dict_table_t::release() without holding
dict_sys->mutex.
In InnoDB, an INSERT will not create an explicit lock object. Instead,
the inserted record is initially implicitly locked by the transaction
that wrote its trx_t::id to the hidden system column DB_TRX_ID.
(Other transactions would check if DB_TRX_ID is referring to a
transaction that has not been committed.)
If a record was inserted in the current transaction, it would be
implicitly locked by that transaction. Only if some other transaction
is requesting access to the record, the implicit lock should be
converted to an explicit one, so that the waits-for graph can be
constructed for detecting deadlocks and lock wait timeouts.
Before this fix, InnoDB would convert implicit locks to
explicit ones, even if no conflict exists.
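A sketch of the scenario (hypothetical table t; two client connections):
  CREATE TABLE t (a INT) ENGINE=InnoDB;
  -- connection 1
  START TRANSACTION;
  INSERT INTO t(a) VALUES (1);     -- no lock object; the record carries trx 1's DB_TRX_ID
  UPDATE t SET a = 2 WHERE a = 1;  -- same transaction touches its own insert: no conversion needed
  -- connection 2
  UPDATE t SET a = 3 WHERE a = 2;  -- conflicting access: the implicit lock becomes explicit and this statement waits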
lock_rec_convert_impl_to_expl(): Return whether caller_trx
already holds an explicit lock that covers the record.
row_vers_impl_x_locked_low(): Avoid a lookup if the record matches
caller_trx->id.
lock_trx_has_expl_x_lock(): Renamed from lock_trx_has_rec_x_lock().
row_upd_clust_step(): In a debug assertion, check for implicit lock
before invoking lock_trx_has_expl_x_lock().
rw_trx_hash_t::find(): Make do_ref_count a mandatory parameter.
Assert that trx_id is not 0 (the caller should check it).
trx_sys_t::is_registered(): Only invoke find() if id != 0.
trx_sys_t::find(): Add the optional parameter do_ref_count.
lock_rec_queue_validate(): Avoid lookup for trx_id == 0.
Disks with native 4K sectors need 4K alignment and size for unbuffered I/O
(i.e. for files opened with FILE_FLAG_NO_BUFFERING).
InnoDB opens the redo log with FILE_FLAG_NO_BUFFERING, but it always does
512-byte I/Os. Thus, the I/O on native 4K sectors will fail, rendering
InnoDB non-functional.
The fix is to check whether OS_FILE_LOG_BLOCK_SIZE is a multiple of the logical
sector size, and if it is not, to reopen the redo log without the
FILE_FLAG_NO_BUFFERING flag.
buf_LRU_drop_page_hash_for_tablespace(): Return whether any adaptive
hash index entries existed. If yes, the caller should keep retrying to
drop the adaptive hash index.
row_import_for_mysql(), row_truncate_table_for_mysql(),
row_drop_table_for_mysql(): Ensure that the adaptive hash index was
entirely dropped for the table.
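The callers listed above correspond to statements of this kind
(hypothetical table t):
  ALTER TABLE t IMPORT TABLESPACE;
  TRUNCATE TABLE t;
  DROP TABLE t;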
NULL values when there is no DEFAULT
The copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql_mode: should give an error.
(2) non-strict sql_mode: should give warnings only.
(3) ALTER IGNORE TABLE command: should give warnings only.
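A sketch of the three cases (hypothetical table; assume each ALTER runs
against a fresh copy of the table containing a NULL in c):
  CREATE TABLE t (c INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (NULL);
  SET sql_mode = 'STRICT_TRANS_TABLES';
  ALTER TABLE t MODIFY c INT NOT NULL;          -- (1) strict: fails with an error
  SET sql_mode = '';
  ALTER TABLE t MODIFY c INT NOT NULL;          -- (2) non-strict: warning, NULL becomes 0
  ALTER IGNORE TABLE t MODIFY c INT NOT NULL;   -- (3) IGNORE: warning, NULL becomes 0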
It only worked if the mroonga plugin wasn't installed before (the normal case),
but then it didn't need to delete anything.
If, by some glitch, mroonga was already installed, it would delete
mroonga from mysql.plugin, but INSTALL would fail (as mroonga was still running),
and the script aborted, leaving mroonga missing from mysql.plugin altogether.
Don't let SET PASSWORD set the password if auth_string is set.
Now SET PASSWORD always sets the plugin/auth_string fields and clears
the password field (on a pre-plugin mysql.user table it works as before).
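A sketch of the resulting behaviour (hypothetical account):
  SET PASSWORD FOR 'app'@'localhost' = PASSWORD('secret');
  -- the hash is now stored via plugin/authentication_string; the Password column is cleared
  SELECT plugin, authentication_string, password
    FROM mysql.user WHERE user = 'app' AND host = 'localhost';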
In RPM/DEB packages - always ld-preload jemalloc, instead
of linking ha_tokudb.so with it.
Keep linking in bintars, because they don't install cnf files
in the correct locations.