MDEV-7618 introduced configuration parameter innodb_instrument_semaphores
in MariaDB Server 10.1. The parameter seems to only affect the rw-lock
X-latch acquisition. Extra fields are added to rw_lock_t to remember one
X-latch holder or waiter. These fields are not being consulted or reported
anywhere. This is basically only adding code bloat.
If the intention is to debug hangs or deadlocks, we have better tools for
that in the debug server, and for the non-debug server, core dumps can
reveal a lot. For example, the mini-transaction memo records the
currently held buffer block or index rw-locks, to be released at
mtr_t::commit().
The configuration parameter innodb_instrument_semaphores will be
deprecated in 10.2.5 and removed in 10.3.0.
rw_lock_t: Remove the members lock_name, file_name, line, thread_id
which did not affect any output.
The change buffer was not applied to tables in the system tablespace.
This is a regression caused by MDEV-11585, which accidentally
changed Tablespace::is_undo_tablespace() in an incorrect way,
causing the InnoDB system tablespace to be reported as a dedicated
undo tablespace, for which the change buffer is not applicable.
Tablespace::is_undo_tablespace(): Remove. There were only 2
calls from the function buf_page_io_complete(). Replace those
calls as appropriate.
Also, merge changes to tablespace import/export tests from
MySQL 5.7, and clean up the tests a little further, allowing
them to be run with any innodb_page_size.
Remove duplicated error injection instrumentation for the
import/export tests. In MySQL 5.7, the error injection label
buf_page_is_corrupt_failure was renamed to
buf_page_import_corrupt_failure.
fil_space_extend_must_retry(): Correct a debug assertion
(tablespaces can be extended during IMPORT), and remove a
TODO comment about compressed temporary tables that was
already addressed in MDEV-11816.
dict_build_tablespace_for_table(): Correct a comment that
no longer holds after MDEV-11816, and assert that
ROW_FORMAT=COMPRESSED can only be used in .ibd files.
Note: At least one test is unstable, failing with the following:
./mtr --mysqld=--innodb-purge-threads=9 --big-test --no-reorder \
galera.galera_parallel_autoinc_largetrx galera.galera_var_slave_threads
The result difference is dependent on innodb_purge_threads.
buf_dump(): Correct the printf format passed to buf_dump_status()
to match the argument types.
Revert the changes to storage/xtradb. XtraDB is not being compiled
for 10.2. The unused copy that we have in the 10.2 branch is only
getting merges from 10.1.
Disable the test sys_vars.innodb_buffer_pool_dump_pct_function
because it is unstable on buildbot.
Revert the MDEV-4396 tweak to innodb.innodb_bug14676111.
We must fix the root cause instead.
Allow gcol.innodb_virtual_purge to run on a non-debug build
(If wait_innodb_all_purged.inc is used in a non-debug test,
it will have no effect.)
Add the test innodb.index_merge_threshold from MySQL 5.7.
The test innodb.log_file_size_checkpoint was originally added to
MySQL 5.7 by me in a bug fix, to fix the interaction of WL#6494
(redo log resizing, introduced in MySQL 5.6) and WL#7142
(data file discovery based on MLOG_FILE_NAME records,
introduced in MySQL 5.7):
commit 70f9ef4e1220827132b50275ca7272f2bcca1864
Author: Marko Mäkelä <marko.makela@oracle.com>
Date: Wed May 21 13:31:29 2014 +0300
Bug#18755095 REDO LOG SIZE CHANGE AFTER CRASH RESULTS IN CHECKPOINT AGE
ERROR MESSAGE
This is a regression from fixing
Bug#18730524 REPEATED KILL+RESTART FAILS DUE TO MISSING MLOG_FILE_NAME
RECORD
innobase_start_or_create_for_mysql(): Invoke fil_names_clear() before
creating the "checkpoint" when changing redo log files.
Approved by Jimmy Yang on IM.
The relevant part of the test is that fil_names_clear() is invoked to
emit an MLOG_CHECKPOINT record before the redo log files are deleted.
In case the server is killed before ib_logfile0 has been deleted,
the old (not-yet-resized) redo log will be treated as valid. We do not
need to create a large number of tables for that.
The issue was that JOIN::rollup_write_data() used
JOIN::tmp_table_param::[start_]recinfo, which hold uninitialized data,
because JOIN::tmp_table_param currently only stores some grouping-related
data fields. The data about the work (temporary) tables themselves is
stored in join->join_tab[...].tmp_table_param.
The fix is to make JOIN::rollup_write_data() follow this convention
and look at the right TMP_TABLE_PARAM object.
The bug was not visible in the current HEAD. A test case is introduced to
catch regressions. Also improve the error messages regarding the use of
DISTINCT in window functions.
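For reference only, the rollup write path concerns GROUP BY ... WITH ROLLUP
queries whose rows go through a temporary table; a hypothetical query of
this general shape (table and column names are made up):

  SELECT a, SUM(b) FROM t1 GROUP BY a WITH ROLLUP;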
The bug was caused by several issues.
There were two problems in seek_io_cache(). Due to wrong offsets being
used, we would end up seeking much too far (first change), or past the
intended seek point (second change). Fixing it requires correctly
detecting the data available in the buffer (first change), and not using
"IO_SIZE aligned" reads. The second is needed because _my_b_cache_read()
adjusts pos_in_file itself based on read_pos and read_end; pretending the
buffer is empty when we want to force a read alleviates this problem.
Secondly, the big-table cursors did not respect the interface definition
of always returning the row number that Table_read_cursor::fetch() would
activate. At the same time, next(), prev() and move_to() should not
perform any row activation.
Window functions need to be computed after the HAVING clause is applied.
An optimization that we have for regular, non-window-function cases is to
apply HAVING only while sending rows to the client. This allows rows that
should be filtered out to remain stored in the temporary table that holds
the aggregation results.
This behaviour is undesirable for window functions, as we have to compute
window functions on the result set after HAVING is applied. Storing the
extra rows in the table leads to wrong values, as the frame bounds might
capture those rows, which are only filtered out afterwards.
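As an illustration (table and column names are hypothetical), in a query
of this shape the window function must only see the groups that survive
HAVING:

  SELECT a, SUM(b) AS s,
         ROW_NUMBER() OVER (ORDER BY SUM(b)) AS rn
  FROM t1
  GROUP BY a
  HAVING SUM(b) > 10;

If the filtered-out groups remained in the temporary table, rn would be
computed over them as well and the results would be wrong.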
Write only one encryption key to the checkpoint page.
Use 4 bytes of nonce. Encrypt more of each redo log block,
only skipping the 4-byte field LOG_BLOCK_HDR_NO which the
initialization vector is derived from.
Issue notes, not warning messages, when rewriting the redo log files.
recv_recovery_from_checkpoint_finish(): Do not generate any redo log,
because that must be avoided before rewriting the redo log files;
otherwise a crash during a redo log rewrite (removing or adding
encryption) may end up making the database unrecoverable.
Instead, do these tasks in innobase_start_or_create_for_mysql().
Issue a firm "Missing MLOG_CHECKPOINT" error message. Remove some
unreachable code and duplicated error messages for log corruption.
LOG_HEADER_FORMAT_ENCRYPTED: A flag for identifying an encrypted redo
log format.
log_group_t::is_encrypted(), log_t::is_encrypted(): Determine
if the redo log is in encrypted format.
recv_find_max_checkpoint(): Interpret LOG_HEADER_FORMAT_ENCRYPTED.
srv_prepare_to_delete_redo_log_files(): Display NOTE messages about
adding or removing encryption. Do not issue warnings for redo log
resizing any more.
innobase_start_or_create_for_mysql(): Rebuild the redo logs also when
the encryption changes.
innodb_log_checksums_func_update(): Always use the CRC-32C checksum
if innodb_encrypt_log. If needed, issue a warning
that innodb_encrypt_log implies innodb_log_checksums.
log_group_write_buf(): Compute the checksum on the encrypted
block contents, so that transmission errors or incomplete blocks can be
detected without decrypting.
Rewrite most of the redo log encryption code. Only remember one
encryption key at a time (but remember up to 5 when upgrading from the
MariaDB 10.1 format).
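A sketch of the resulting innodb_log_checksums behaviour, assuming the
server was started with innodb_encrypt_log=ON:

  SET GLOBAL innodb_log_checksums = OFF;
  -- Expected: the value stays ON and a warning is issued, because
  -- innodb_encrypt_log implies innodb_log_checksums.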
The InnoDB redo log consists of a list of files that logically form
a bigger file, as if the individual files were concatenated together.
The first file will always be written on redo log checkpoint, because
the two checkpoint pages are at the start of the single logical
redo log file.
There is no technical reason why InnoDB requires at least 2 files
to exist. Let us reduce the minimum number to 1. In that way,
restoring from backups will become easier, since InnoDB can directly
deal with a single backed-up redo log file.
The test restarts the server, giving it 60 seconds to shut down and then
killing it mercilessly. Make sure the server closes all MyISAM tables
before shutdown, as we cannot reliably expect it to meet the deadline.
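In the test this can be done with something along these lines right
before the restart (a sketch; the exact statement used may differ):

  FLUSH TABLES;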
The options --log-slow-admin-statements, --log-queries-not-using-indexes
and --log-slow-slave-statements have no effect if --log_slow_queries is
not set.
1. s/--log_slow_queries/--log-slow-queries/
2. Disable log-slow-admin-statements etc. in mtr when doing mysqld --help
PART 2 of the fix adds the logic of not using the password column unless
it exists. If the password column is missing, we attempt to use the
plugin && authentication_string columns.
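For illustration only (the exact statements issued by the code are not
quoted here), these are the two column layouts involved:

  -- layout that still has a Password column:
  SELECT user, host, password FROM mysql.user;
  -- layout without it; fall back to these columns instead:
  SELECT user, host, plugin, authentication_string FROM mysql.user;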
The same approach is needed for LAST_VALUE, otherwise the LAST_VALUE sum
functions are not cleared correctly. Now LAST_VALUE behaves like NTH_VALUE
with offset 0, except that the frame bound it examines is the bottom
bound, not the top bound.
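A hypothetical example of the behaviour (table and column names are made
up):

  SELECT a, b,
         LAST_VALUE(b) OVER (PARTITION BY a ORDER BY b
                             ROWS BETWEEN UNBOUNDED PRECEDING
                                      AND CURRENT ROW) AS lv
  FROM t1;
  -- LAST_VALUE(b) returns b from the row at the bottom frame bound
  -- (here the current row), i.e. internally it acts like NTH_VALUE with
  -- offset 0 applied to the bottom of the frame instead of the top.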
The problematic queries involve unions. For unions we have an
optimization where we skip the ORDER BY clause in a query from one side
of the union if the ordering will be performed later as part of the UNION.
EX:
(SELECT a from t1 ORDER BY a) ORDER BY b;
The first ordering by a is not necessary and it gets removed.
The problem is that we still need to resolve the Items before removing
the ORDER BY list from the SELECT_LEX structure. During this final
resolve step, however, we forgot to allow SET functions within the ORDER
BY clause. This caused us to return an "Invalid use of group function"
error during the checking performed by fix_fields in
Item_sum::init_sum_func_check.
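A query of roughly this shape (hypothetical tables) exercises the
problem, because the ORDER BY that gets removed contains a set function:

  (SELECT a, SUM(b) FROM t1 GROUP BY a ORDER BY SUM(b))
  UNION
  (SELECT a, b FROM t2)
  ORDER BY a;

The inner ORDER BY SUM(b) is redundant and is removed, but its items must
still be resolved first, and that resolution has to accept the set
function instead of raising "Invalid use of group function".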
Change the parser not to allow SERIAL as a normal data type.
Make a special rule for it, so that it can be used for defining fields,
but not for generated fields, not as the return type of a stored
function, etc.
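A sketch of what the special rule allows and rejects (hypothetical
examples):

  CREATE TABLE t1 (id SERIAL);                   -- OK: SERIAL defines a field
  CREATE TABLE t2 (g SERIAL AS (1) VIRTUAL);     -- rejected: generated field
  CREATE FUNCTION f() RETURNS SERIAL RETURN 1;   -- rejected: return type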
JOIN_CACHEs were initialized in check_join_cache_usage(), called from
make_join_readinfo(). After that, make_join_readinfo() checked whether it
is possible to use keyread. Later, after make_join_readinfo(), the
optimizer decided whether to use filesort. And even later, at execution
time, in join_read_first(), keyread was actually enabled.
The problem is, that if a query uses a vcol, base columns that it
depends on are automatically added to the read_set - because they're
needed to calculate the vcol. But if we're doing keyread, vcol is taken
from the index, not calculated, and base columns do not need to be
in the read set (even should not be - as they aren't getting values).
The bug was that JOIN_CACHE used a read_set that still contained the base
columns; because of keyread they were never read, so it was caching
garbage.
So the read_set is only known after keyread has been decided, and after
filesort has been decided, as filesort doesn't use keyread. But
check_join_cache_usage() needs to be done in make_join_readinfo(),
as the code below depends on these checks.
Fix: keep JOIN_CACHE checks where they were, but move initialization
down to the very end of JOIN::optimize_inner. If keyread was enabled,
update the read_set to include only columns that are part of the index.
Copy the keyread logic from join_read_first() to happen at optimize time.
Also, move TABLE::key_read into handler, because in index merge and
DS-MRR there can be many handlers per table, and some of them use keyread
while others don't. "keyread" is really a per-handler property, not a
per-TABLE property.
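A rough sketch of the kind of query that was affected (table, column and
index names are hypothetical):

  CREATE TABLE t1 (a INT, v INT AS (a + 1) VIRTUAL, KEY (v));
  CREATE TABLE t2 (b INT);
  -- A join where t1.v is read through its index (keyread) while the join
  -- buffer is used; before the fix the cache could pick up garbage for
  -- the base column a, which is not read when keyread is enabled.
  SELECT t1.v, t2.b FROM t1 JOIN t2 ON t1.v = t2.b;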
Optionally do table->update_default_fields() even for an INSERT
that supposedly provides values for all columns, because these
"values" might be DEFAULT, which would need table->update_default_fields()
at the end.
Also set Item_default_value::used_tables() from the default expression.
A non-zero used_tables() means that mysql_insert() will initialize all
fields to their default values (with restore_record()) even if
all columns are later provided with values, because default expressions
may refer to other columns and those must be initialized.
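A hypothetical example of the case being handled (table and defaults are
made up):

  CREATE TABLE t1 (a INT DEFAULT 1, b INT DEFAULT (a + 1));
  -- Both columns are listed, but b still receives its DEFAULT expression,
  -- which refers to a; so all fields must first be initialized to their
  -- defaults (restore_record()) before the supplied values are applied.
  INSERT INTO t1 (a, b) VALUES (10, DEFAULT);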
Oracle introduced a Memcached plugin interface to the InnoDB
storage engine in MySQL 5.6. That interface is essentially a
fork of Memcached development snapshot 1.6.0-beta1 of an old
development branch 'engine-pu'.
To my knowledge, there have not been any updates to the Memcached code
between MySQL 5.6 and 5.7; only bug fixes and extensions related to
the Oracle modifications.
The Memcached plugin is not part of the MariaDB Server. Therefore it
does not make sense to include the InnoDB interfaces for the Memcached
plugin, or to have any related configuration parameters:
innodb_api_bk_commit_interval
innodb_api_disable_rowlock
innodb_api_enable_binlog
innodb_api_enable_mdl
innodb_api_trx_level
Removing this code in one commit makes it possible to easily restore
it, in case it turns out to be needed later.
These are different bugs, but the fix is the same:
if window functions are used over implicit grouping, then
the execution should now follow the general path, calling
the function set in JOIN::first_select.
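A minimal example of the affected case (hypothetical table):

  -- Implicit grouping (an aggregate with no GROUP BY) combined with a
  -- window function; execution now goes through the general path set
  -- in JOIN::first_select.
  SELECT SUM(a), ROW_NUMBER() OVER () FROM t1;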