innobase_get_charset(), innobase_get_stmt_safe(): Remove.
It is more efficient and readable to invoke thd_charset()
and thd_query_safe() directly, without a non-inlined wrapper function.
As part of the SPATIAL INDEX implementation in InnoDB,
dict_index_t was expanded by a rtr_ssn_t field. There are only
3 operations for this field, all protected by rtr_ssn_t::mutex:
* btr_cur_search_to_nth_level() stores the least significant 32 bits
of the 64-bit value that is stored in the index root page.
(This would better be done when the table is opened for the
very first time.)
* rtr_get_new_ssn_id() increments the value by 1.
* rtr_get_current_ssn_id() reads the current value.
All these operations can be implemented equally safely by using
atomic memory access operations.
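A minimal sketch of the idea, assuming std::atomic; the member and function
names below are placeholders, not the actual InnoDB ones:

#include <atomic>
#include <cstdint>

/* Hypothetical stand-in for the SSN counter embedded in dict_index_t. */
struct rtr_ssn_sketch_t {
  std::atomic<uint64_t> seq_no{0};

  /* btr_cur_search_to_nth_level(): publish the value read from the
  index root page; a plain atomic store replaces the mutex. */
  void init_from_root_page(uint64_t value) {
    seq_no.store(value, std::memory_order_relaxed);
  }

  /* rtr_get_new_ssn_id(): increment the value by 1 and return it. */
  uint64_t get_new_ssn_id() {
    return seq_no.fetch_add(1, std::memory_order_relaxed) + 1;
  }

  /* rtr_get_current_ssn_id(): read the current value. */
  uint64_t get_current_ssn_id() const {
    return seq_no.load(std::memory_order_relaxed);
  }
};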
Let us limit the maximum value of the debug parameter
innodb_data_file_size to 256 MiB. It is only being used
in the test innodb.log_data_file_size, and the size
of the system tablespace should never exceed some 70 MiB
in ./mtr. Thus, 256 MiB should be a reasonable limit.
The fact that negative values that are passed to unsigned parameters
wrap around to the maximum value appears to be a regression due to
commit 18ef02b04d
and has been filed as bug MDEV-22219.
This is a partial backport of
commit 5e7e7153b4 from 10.4.
assert_trx_is_free(): Assert !is_wsrep().
trx_init(): Do not initialize trx->wsrep, because it must have been
initialized already.
trx_commit_in_memory(): Invoke wsrep_commit_ordered(). This call
was being skipped, because the transaction object had already been
freed to the pool.
trx_rollback_for_mysql(), innobase_commit_low(),
innobase_rollback_trx(): Always reset trx->wsrep.
Several MYSQL_SYSVAR_STR parameters that employ a validate
function callback fail to copy the string for saving the
validated value. The affected variables include the following:
innodb_ft_aux_table
innodb_ft_server_stopword_table
innodb_ft_user_stopword_table
innodb_buffer_pool_filename
The test case is an enhanced version of
mysql/mysql-server@0b0c30641f
and the code changes are inspired by their fixes.
We are also importing and adjusting the test innodb_fts.stopword
to get coverage for the variable innodb_ft_user_stopword_table.
buf_dump(), buf_load(): Protect srv_buf_dump_filename with
LOCK_global_system_variables.
fts_load_user_stopword(): Minor cleanup
fts_load_stopword(): Remove the parameter global_stopword_table.
innobase_fts_load_stopword(): Protect innodb_server_stopword_table
against concurrent SET GLOBAL.
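A minimal sketch of the protection pattern, using a generic mutex; in the
server the guard is LOCK_global_system_variables and the protected string
is, for example, srv_buf_dump_filename:

#include <mutex>
#include <string>

/* Hypothetical stand-ins for LOCK_global_system_variables and the
string-valued system variable. */
static std::mutex sysvar_mutex;
static const char *buf_dump_filename = "ib_buffer_pool";

/* Take a private copy of the value while holding the mutex, so that a
concurrent SET GLOBAL cannot free or replace the string underneath us. */
std::string buf_dump_filename_copy() {
  std::lock_guard<std::mutex> guard(sysvar_mutex);
  return std::string(buf_dump_filename);
}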
innodb_buffer_pool_evict_uncompressed(): Restart the loop when
prev_block might not enjoy mutex protection.
This is based on
mysql/mysql-server@eccaecac07
The function wsrep_on() was being called rather frequently
in InnoDB and XtraDB. Let us cache it in trx_t and invoke
trx_t::is_wsrep() instead.
innobase_trx_init(): Cache trx->wsrep = wsrep_on(thd).
ha_innobase::write_row(): Replace many repeated calls to current_thd,
and test the cheapest condition first.
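A rough sketch of the caching pattern; the stub below stands in for the
server's wsrep_on(THD*) check and the struct for trx_t:

/* Hypothetical stand-in for the relatively expensive wsrep_on(THD*). */
static bool wsrep_on_stub(const void *thd) { return thd != nullptr; }

struct trx_sketch_t {
  const void *mysql_thd = nullptr;
  bool wsrep = false;              /* cached once per transaction */

  /* innobase_trx_init(): cache the flag when the transaction is set up. */
  void init(const void *thd) { mysql_thd = thd; wsrep = wsrep_on_stub(thd); }

  /* trx_t::is_wsrep(): cheap inline test used in hot paths such as
  ha_innobase::write_row(), instead of repeated wsrep_on(current_thd) calls. */
  bool is_wsrep() const { return wsrep; }
};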
Optionally roll back prepared XA transactions on "mariabackup --prepare".
The fix MUST NOT be ported to 10.5+, as the MDEV-742 fix solves the issue
for slaves.
Temporary tables are typically short-lived, and they are assumed
to be accessed only by the thread that is handling
the owning connection. Hence, they must not be subject to
defragmenting.
ha_innobase::optimize(): Do not add temporary tables to
the defragment_table() queue.
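A minimal sketch of the check; the predicate name is a placeholder for the
flag test that the real ha_innobase::optimize() performs on the InnoDB
table object:

/* Hypothetical table descriptor; the real code consults dict_table_t. */
struct table_sketch_t {
  bool is_temporary;
};

/* Only persistent tables are enqueued for defragmentation; temporary
tables are session-private and short-lived, so defragmenting them
would be wasted work. */
bool should_defragment(const table_sketch_t &table, bool defragment_enabled) {
  return defragment_enabled && !table.is_temporary;
}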
All tablespace metadata is buffered in fil_system. There is an LRU
mechanism, but that only controls the opening and closing of
fil_node_t::handle.
It is much more efficient and less error-prone to access data file names
by looking up the fil_space_t object rather than by essentially joining
each row with an access to SYS_DATAFILES via the InnoDB internal SQL parser.
dict_get_first_path(): Declare static. The function may only be needed
when loading or updating the data dictionary. Also, change a condition
in order to avoid a bogus GCC 10 -Wstringop-overflow warning for
mem_strdupl() about len==ULINT_UNDEFINED.
i_s_sys_tablespaces_fill_table(): Do not access other InnoDB internal
dictionary tables than SYS_TABLESPACES.
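A sketch of the lookup idea with placeholder container and field names;
the real code walks fil_system under its mutex rather than querying
SYS_DATAFILES:

#include <cstdint>
#include <map>
#include <string>
#include <vector>

/* Hypothetical in-memory tablespace cache standing in for fil_system. */
struct space_sketch_t {
  uint32_t id;
  std::vector<std::string> node_names;  /* data file names (fil_node_t) */
};

static std::map<uint32_t, space_sketch_t> fil_spaces;

/* Return the data file names of a tablespace straight from the cache,
instead of joining each SYS_TABLESPACES row with SYS_DATAFILES through
the InnoDB internal SQL parser. */
const std::vector<std::string> *datafile_names(uint32_t space_id) {
  auto it = fil_spaces.find(space_id);
  return it == fil_spaces.end() ? nullptr : &it->second.node_names;
}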
The column INFORMATION_SCHEMA.INNODB_MUTEXES.NAME has not been populated
ever since commit 2e814d4702 applied the InnoDB changes from
MySQL 5.7.9 to MariaDB Server 10.2.2.
Since the same commit, the view is only providing information about
rw_lock_t, not any mutexes.
For now, let us convert the source code file name and line number of
the rw_lock_t creation into a name. A better option in the future might
be to store the information somewhere where it can be looked up by
mysql_pfs_key_t, and possibly to remove the CREATE_FILE and CREATE_LINE
columns.
INNOBASE_ALTER_NOVALIDATE: Remove the set of operations
INNOBASE_ONLINE_CREATE that was accidentally included in the
definition.
In the merge of 82187a1221 to 10.3
(in commit eda719793a) the flags
were defined correctly.
This bug was caught by the test innodb_zip.index_large_prefix.
By default (innodb_strict_mode=ON), InnoDB attempts to guarantee
at DDL time that any INSERT to the table can succeed.
MDEV-19292 recently revised the "row size too large" check in InnoDB.
The check still is somewhat inaccurate;
that should be addressed in MDEV-20194.
Note: If a table contains multiple long string columns so that each column
is part of a column prefix index, then an UPDATE that attempts to modify
all those columns at once may fail, because the undo log record might
not fit in a single undo log page (of innodb_page_size). In the worst case,
the undo log record would grow by about 3KiB for each updated column.
The DDL-time check (since the InnoDB Plugin for MySQL 5.1) is optional
in the sense that when the maximum B-tree record size or undo log
record size would be exceeded, the DML operation will fail and the
transaction will be properly rolled back.
create_table_info_t::row_size_is_acceptable(): Add the parameter
'bool strict' so that innodb_strict_mode=ON can be overridden during
TRUNCATE, OPTIMIZE and ALTER TABLE...FORCE (when the storage format
is not changing).
create_table_info_t::create_table(): Perform a sloppy check for
TRUNCATE TABLE (create_fk=false).
prepare_inplace_alter_table_dict(): Perform a sloppy check for
simple operations.
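A rough sketch of how the 'bool strict' override could work inside
row_size_is_acceptable(); everything except that parameter name is
hypothetical:

/* Hypothetical result of the row size computation. */
struct row_size_sketch_t {
  bool overflows;
};

/* With strict=true (innodb_strict_mode=ON and a format-changing DDL),
an overflow rejects the DDL; with strict=false (TRUNCATE, OPTIMIZE,
ALTER TABLE...FORCE keeping the format), only a warning is issued. */
bool row_size_is_acceptable_sketch(const row_size_sketch_t &info, bool strict) {
  if (!info.overflows)
    return true;
  if (strict)
    return false;   /* report "row size too large" as an error */
  return true;      /* push a warning, accept the table */
}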
trx_is_strict(): Remove. The function became unused in
commit 98694ab0cb (MDEV-20949).
offset_t: this is a type that represents one record offset.
It is an unsigned short int.
a lot of functions: replace ulint with offset_t
btr_pcur_restore_position_func(),
page_validate(),
row_ins_scan_sec_index_for_duplicate(),
row_upd_clust_rec_by_insert_inherit_func(),
row_vers_impl_x_locked_low(),
trx_undo_prev_version_build():
allocate record offsets on the stack instead of waiting for rec_get_offsets()
to allocate them from mem_heap_t, thus reducing memory allocations.
RECORD_OFFSET, INDEX_OFFSET:
storing pointers in an offset_t* array is now less convenient, because
one pointer now occupies several offset_t elements. These constants are the
starting indexes in the array at which the pointer values are stored.
REC_OFFS_HEADER_SIZE: adjusted for the new reality
REC_OFFS_NORMAL_SIZE:
increase the size from 100 to 300, which means fewer heap allocations.
sizeof(offset_t[REC_OFFS_NORMAL_SIZE]) is now 600 bytes, which
is smaller than the previous 800 bytes.
REC_OFFS_SEC_INDEX_SIZE: adjusted for the new reality
rem0rec.h, rem0rec.ic, rem0rec.cc:
the types of various arguments, return values and local variables were
changed to fix numerous integer conversion issues.
enum field_type_t:
the concept of offset types was introduced, replacing the old offset flags.
As in the earlier version, the 2 upper bits are used to store the offset type,
and this enum represents those types.
REC_OFFS_SQL_NULL, REC_OFFS_MASK: removed
get_type(), set_type(), get_value(), combine():
these are convenience functions for working with offsets and their types
rec_offs_base()[0]:
still uses an old scheme with flags REC_OFFS_COMPACT and REC_OFFS_EXTERNAL
rec_offs_base()[i]:
these now have type offset_t. The two upper bits contain the type.
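An illustrative sketch of the encoding; the enumerator names and bit values
are placeholders, not the exact ones in rem0rec.h:

#include <cstdint>

typedef uint16_t offset_t;

/* The 2 upper bits of an offset store its type (placeholder values). */
enum field_type_t : uint16_t {
  FIELD_NORMAL   = 0U << 14,
  FIELD_NULL     = 1U << 14,   /* SQL NULL: no data bytes stored */
  FIELD_EXTERNAL = 2U << 14    /* stored off-page (externally) */
};

static const offset_t TYPE_MASK  = offset_t(3U << 14);
static const offset_t VALUE_MASK = offset_t(~(3U << 14));

inline field_type_t get_type(offset_t n) { return field_type_t(n & TYPE_MASK); }
inline void set_type(offset_t &n, field_type_t type) {
  n = offset_t((n & VALUE_MASK) | type);
}
inline offset_t get_value(offset_t n) { return offset_t(n & VALUE_MASK); }
inline offset_t combine(offset_t value, field_type_t type) {
  return offset_t(get_value(value) | type);
}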
The InnoDB RNG maintains global state, causing otherwise unnecessary bus
traffic. Even worse, this is cross-mutex traffic; that is, mutexes that are
otherwise unrelated suffer from contention with each other.
A fixed delay of 4 was verified to give the best throughput by OLTP update
index and read-write benchmarks on Intel Broadwell (2/20/40) and
ARM (1/46/46).
This is a backport of ce04790065 from
MariaDB Server 10.3.
Move row size check to early CREATE/ALTER TABLE phase. Stop checking
on table open.
dict_index_add_to_cache(): remove parameter 'strict', stop checking row size
dict_index_t::record_size_info_t: this is the result of the row size check operation
create_table_info_t::row_size_is_acceptable(): performs the row size check.
Issues an error or warning. Writes the first overflowing field to the InnoDB log.
create_table_info_t::create_table(): add row size check
dict_index_t::record_size_info(): this is a refactored version
of dict_index_t::rec_potentially_too_big(). The new version does not change
the global state of the program but returns all the relevant information,
and it is the callers who decide how to handle row size overflow.
dict_index_t::rec_potentially_too_big(): removed
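A sketch of the refactoring pattern: the check computes and returns a plain
value object, and the callers decide how to react. Field names are
illustrative, in the spirit of record_size_info_t:

#include <cstddef>

struct record_size_info_sketch_t {
  size_t max_record_size = 0;   /* worst-case index record size */
  size_t page_limit = 0;        /* largest record the page format allows */
  bool row_is_too_big() const { return max_record_size > page_limit; }
};

/* Pure computation: no error reporting, no changes to global state. */
record_size_info_sketch_t record_size_info_sketch(size_t fixed_part,
                                                  size_t variable_part,
                                                  size_t page_limit) {
  record_size_info_sketch_t info;
  info.max_record_size = fixed_part + variable_part;
  info.page_limit = page_limit;
  return info;
}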
Apply the changes to InnoDB and XtraDB that had been
inadvertently skipped in the merge
commit ae476868a5
That merge failure sabotaged part of MDEV-20127:
>Revert a problematic auto_increment_increment 'fix' from 2014.
>This involves replacing the MDEV-8827 fix and in 10.1,
>removing some WSREP instrumentation.
The code changes were re-merged manually by executing the following:
# Get the parent of the problematic merge.
git checkout ae476868a5394041a00e75a29c7d45917e8dfae8^
# Perform the merge again.
git merge ae476868a5394041a00e75a29c7d45917e8dfae8^2
# Get the conflict resolution from that merge.
git checkout ae476868a5 .
# Note: Any changes to these files were removed (empty diff)!
git diff HEAD storage/{innobase,xtradb}/handler/ha_innodb.cc
# Apply the code changes:
git diff cf40393471b10ca68cc1d2804c22ab9203900978^2..MERGE_HEAD \
storage/{innobase,xtradb}/handler/ha_innodb.cc|
patch -p1
To diagnose a hang in slow shutdown (innodb_fast_shutdown=0),
let us introduce a Boolean startup option in debug builds
that will cause the contents of the InnoDB change buffer
to be dumped to the server error log at startup.
This is another follow-up fix to
commit b393e2cb0c
which turned out to be still broken.
Replace the C++11 keyword 'constexpr' with #define.
debug_sync_t::str: Remove the zero-length array.
Replace sync->str with reinterpret_cast<char*>(&sync[1]).
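A sketch of the resulting allocation pattern: the string is placed directly
after the struct in a single allocation and reached via
reinterpret_cast<char*>(&sync[1]). The struct layout shown is illustrative:

#include <cstddef>
#include <cstdlib>
#include <cstring>

/* Illustrative stand-in for debug_sync_t after the zero-length array
member was removed. */
struct debug_sync_sketch_t {
  size_t str_len;
  /* the NUL-terminated string bytes follow this struct in memory */
};

debug_sync_sketch_t *debug_sync_create(const char *str, size_t len) {
  /* one allocation holds the struct plus the string */
  debug_sync_sketch_t *sync = static_cast<debug_sync_sketch_t*>(
      malloc(sizeof *sync + len + 1));
  if (!sync) return nullptr;
  sync->str_len = len;
  char *dst = reinterpret_cast<char*>(&sync[1]);
  memcpy(dst, str, len);
  dst[len] = '\0';
  return sync;
}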
Remove unused variables and a type mismatch that were introduced
in commit b393e2cb0c
Also, fix a typo in the documentation of the parameter, and
update the test.
Problem:
=======
While dropping a fulltext index, InnoDB waits for fts_optimize_remove_table()
and it holds dict_sys->mutex and dict_operation_lock even though the
table id is not present in the queue. But fts_optimize_thread does wait
for dict_sys->mutex in order to process an unrelated table id from the slot.
Solution:
========
Whenever a table is added to fts_optimize_wq, update the fts_status
of the in-memory fts subsystem to TABLE_IN_QUEUE. Whenever DROP INDEX
wants to remove the table from the queue, it can check fts_status
to decide whether it should send MSG_DELETE_TABLE to the queue.
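A rough sketch of the handshake; TABLE_IN_QUEUE and MSG_DELETE_TABLE come
from the description above, the rest is hypothetical:

enum fts_queue_status_t { TABLE_NOT_QUEUED, TABLE_IN_QUEUE };

struct fts_sketch_t {
  fts_queue_status_t status = TABLE_NOT_QUEUED;
};

/* Adding the table to fts_optimize_wq marks it as queued. */
void fts_optimize_add_table_sketch(fts_sketch_t &fts) {
  fts.status = TABLE_IN_QUEUE;
  /* ... post the ADD message to fts_optimize_wq ... */
}

/* DROP INDEX only needs to post MSG_DELETE_TABLE (and rendezvous with
fts_optimize_thread) if the table was ever queued; otherwise it can
return immediately instead of blocking while holding dict_sys->mutex. */
bool fts_optimize_need_remove_sketch(const fts_sketch_t &fts) {
  return fts.status == TABLE_IN_QUEUE;
}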
Removed the following functions because they are all dead code:
dict_table_wait_for_bg_threads_to_exit(),
fts_wait_for_background_thread_to_start(), fts_start_shutdown(), fts_shudown().
The setting innodb_change_buffering_debug=2 was supposed to inject
a crash during change buffer merge. There is no public test for
that functionality, and even if there were, it would be better
to use DEBUG_SYNC to halt the thread that does change buffer merge,
force a redo log flush from another thread, and finally kill the
server externally.
ha_innobase::open(): Always ignore problems with FOREIGN KEY constraints
(pass DICT_ERR_IGNORE_FK_NOKEY), no matter whether foreign_key_checks
is enabled. Instead, we must report errors when enforcing the FOREIGN KEY
constraints. As a result of ignoring these errors, the tables will be
loaded with dict_foreign_t objects whose foreign_index or referenced_index
will be NULL.
Also, pass DICT_ERR_IGNORE_FK_NOKEY instead of DICT_ERR_IGNORE_NONE
to dict_table_open_on_id_low() in many other cases. Notably, on
CREATE TABLE and ALTER TABLE, we will keep validating the FOREIGN KEY
constraints as before.
dict_table_open_on_name(): If no other flags than
DICT_ERR_IGNORE_FK_NOKEY are set, refuse access to unreadable tables.
Some encryption tests rely on this code path.
For the DML code path, we used to have the problem that when
one of the indexes was missing in dict_foreign_t, we would ignore
the FOREIGN KEY constraint altogether. The following changes
address that.
row_ins_check_foreign_constraints(): Add the parameter pk.
For the primary key, consider also foreign key constraints for which
foreign->foreign_index=NULL (no underlying index is available).
row_ins_check_foreign_constraint(): Report errors also for !check_ref.
Remove a redundant check for srv_read_only_mode.
row_ins_foreign_report_add_err(): Tolerate foreign->foreign_index=NULL.
fkerr_t: Errors for the foreign key checks. Replaces ulint,
which used #define constants that looked like dberr_t literals.
wsrep_dict_foreign_find_index(): Remove. Use
dict_foreign_find_index() instead, with default parameters.
dict_foreign_push_index_error(): Do not add redundant quotes
around quoted table names.
The general reason why the InnoDB redo log file size is limited to 512G is
that log_block_convert_lsn_to_no() returns a value limited to 1G. But there
is no need for unique log block numbers within a log group. The fix removes
the 512G limit and limits the log group size to
(maximum uint32_t value) * (minimum page size), which, in turn, can be
removed if fil_io() is no longer used for InnoDB redo log I/O.
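As a back-of-the-envelope check of the two limits (assuming a 512-byte redo
log block and a 4 KiB minimum page size; these sizes are assumptions here,
not quoted from the headers):

#include <cstdint>

const uint64_t LOG_BLOCK_SIZE = 512;   /* assumed bytes per redo log block */
const uint64_t MIN_PAGE_SIZE  = 4096;  /* assumed minimum innodb_page_size */

/* Old limit: log_block_convert_lsn_to_no() yields at most 1G distinct
block numbers, so the group size was capped at 2^30 * 512 B = 512 GiB. */
const uint64_t OLD_LIMIT = (uint64_t(1) << 30) * LOG_BLOCK_SIZE;

/* New limit: (maximum uint32_t value) * (minimum page size),
i.e. just under 2^32 * 4 KiB = 16 TiB. */
const uint64_t NEW_LIMIT = uint64_t(UINT32_MAX) * MIN_PAGE_SIZE;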
- The commit ab6dd77408 wrongly sets the
condition inside innobase_srv_conc_enter_innodb(). The problem is that
InnoDB makes the thread sleep indefinitely if it is a replication
slave thread.
Thanks to Sujatha Sivakumar for contributing the replication test case.
fts_sync(): Remove the constant parameter has_dict=false.
fts_sync_table(): Remove the constant parameter has_dict=false,
and the redundant parameter unlock_cache = !wait.
Make wait=true the default parameter.
PROBLEM
-------
An index defined on a virtual column whose base column was in a fk
relation was not getting updated. This is because, while getting
the updated field information from the update vector of the parent
table, we were comparing the column number of the base column (for
the virtual column) in the child table with the associated column number
in the parent table. There was a mismatch in this column number,
because of which this update field information was skipped and
consequently the index was not getting updated.
FIX
The function pointer ut_timer() was only used by the
InnoDB defragmenting thread. Let InnoDB use a single monotonic
high-precision timer, my_interval_timer() [in nanoseconds],
occasionally wrapped by microsecond_interval_timer().
srv_defragment_interval: Change from "timer" units to nanoseconds.
This concludes the InnoDB time function cleanup that was
motivated by MDEV-14154. Only ut_time_ms() will remain for now,
wrapping my_interval_timer().
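A minimal sketch of the wrapping relationship; my_interval_timer() returns
nanoseconds, and the derivation of the microsecond wrapper shown here is an
assumption:

#include <chrono>
#include <cstdint>

/* Stand-in for my_interval_timer(): a monotonic clock in nanoseconds. */
static uint64_t my_interval_timer_sketch() {
  using namespace std::chrono;
  return uint64_t(duration_cast<nanoseconds>(
      steady_clock::now().time_since_epoch()).count());
}

/* microsecond_interval_timer(): the same clock scaled to microseconds. */
static uint64_t microsecond_interval_timer_sketch() {
  return my_interval_timer_sketch() / 1000;
}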
This is a regression due to MDEV-16515 that affects some versions in
the MariaDB 10.1 server series starting with 10.1.35, and possibly
all versions starting with 10.2.17, 10.3.8, and 10.4.0.
The idea of MDEV-16515 is to allow DROP TABLE to be interrupted,
in case it was stuck due to some concurrent activity. We already
made some cases of internal DROP TABLE immune to kill in MDEV-18237,
MDEV-16647, MDEV-17470. We must include the cleanup of
CREATE TABLE...SELECT in the list of such internal DROP TABLE.
ha_innobase::delete_table(): Pass create_failed=true if the current
SQL statement is CREATE, so that the table will be dropped.
row_drop_table_for_mysql(): If create_failed=true, do not allow
the operation to be interrupted.
This is a race between DELETE and INSERT (or any other two operations accessing the table).
What should happen in good case:
1. ALTER TABLE is issued. vc_templ->default_rec is initialized with the temporary share's default_fields
2. the temporary share is freed, but the data dictionary entry is still there, with garbage in vc_templ->default_rec
3. DELETE is issued. It is the first statement after ALTER TABLE finished.
4. ha_innobase::open() is called, ib_table->get_ref_count() should be one
5. we reinitialize vc_templ, so no garbage anymore
What actually happens:
3. DELETE is issued.
4. ha_innobase::open() is called and ib_table->get_ref_count() is 1
5. INSERT (or SELECT etc.) is issued in parallel
6. ha_innobase::open() is called and ib_table->get_ref_count() is 1
7. we check ib_table->get_ref_count() and it is 2 in both threads when we want to reinitialize vc_templ
8. garbage is there
Fix:
* Do not store pointers to SHARE memory in the table dict; copy the data instead.
* And then we no longer need to refresh it each time when refcount=1.