ha_partition creates temporary ha_XXX objects for its partitions when
performing DDL operations. The objects were created on a MEM_ROOT and
never deleted.
This works as long as ha_XXX objects free all of their data in
ha_XXX::close() and don't rely on a proper destructor invocation.
Unfortunately, ha_rocksdb includes String members which need to be
destroyed properly.
Fixed the bug by having ha_partition::~ha_partition delete these temporary
objects.
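Below is a minimal stand-alone sketch of the underlying C++ pitfall; the
Arena and PartitionHandler types are hypothetical stand-ins for MEM_ROOT
and the handler objects, not the actual server classes. An object that is
placement-constructed on a bulk-freed arena leaks any heap-owning member
(such as a String) unless its destructor is invoked explicitly, which is
what the ~ha_partition fix does for the temporary objects.

  #include <cstdlib>
  #include <new>
  #include <string>
  #include <vector>

  // Hypothetical arena standing in for MEM_ROOT: memory is released in
  // bulk, and destructors of objects placed on it never run automatically.
  struct Arena {
    std::vector<void*> blocks;
    void* alloc(std::size_t n) {
      void* p = std::malloc(n);
      blocks.push_back(p);
      return p;
    }
    ~Arena() { for (void* p : blocks) std::free(p); }  // no destructors
  };

  // Hypothetical per-partition handler; the std::string member owns heap
  // memory, much like the String members of ha_rocksdb.
  struct PartitionHandler {
    std::string table_name;
    explicit PartitionHandler(const char* name) : table_name(name) {}
  };

  int main() {
    Arena root;
    // Placement-new on the arena: the object's storage comes from the arena...
    auto* h = new (root.alloc(sizeof(PartitionHandler)))
        PartitionHandler("t1#P#p0");

    // ...so freeing the arena alone would leak table_name's heap buffer.
    // The fix mirrors ha_partition::~ha_partition: run the destructor
    // explicitly before the arena releases the raw storage.
    h->~PartitionHandler();
    return 0;
  }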
This is a bogus debug assertion failure that became possible
starting with MariaDB 10.2.2 (which merged WL#7142 via MySQL 5.7.9).
While generating page-change redo log records is strictly out of the
question during certain parts of crash recovery,
fil_names_clear() is only emitting informational MLOG_FILE_NAME
and MLOG_CHECKPOINT records to guarantee that if the server is killed
during or soon after the crash recovery, subsequent crash recovery
will be possible.
The metadata buffer that fil_names_clear() is flushing to the redo log
is being filled by recv_init_crash_recovery_spaces(), right before
starting to apply redo log, by invoking fil_names_dirty() on every
discovered tablespace for which there are changes to apply.
When it comes to Mariabackup (xtrabackup --prepare), it is strictly out
of the question to generate any redo log whatsoever, because that could
break the restore of incremental backups by causing LSN deviation.
So, the fil_names_dirty() call must be skipped when restoring backups.
recv_recovery_from_checkpoint_start(): Do not invoke fil_names_clear()
when restoring a backup.
mtr_t::commit_checkpoint(): Remove the failing assertion. The only
caller is fil_names_clear(), and it must be called by
recv_recovery_from_checkpoint_start() for normal server startup to be
crash-safe. The debug assertion in mtr_t::commit() will still
catch rogue redo log writes.
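As an illustration only (the enum and functions below are hypothetical
stand-ins, not actual InnoDB symbols), the shape of the fix is a guard in
the recovery path so that nothing is appended to the redo log when recovery
runs as part of a backup prepare:

  #include <cstdio>

  // Hypothetical stand-ins; the real code distinguishes these cases inside
  // recv_recovery_from_checkpoint_start().
  enum class Operation { NORMAL_STARTUP, BACKUP_RESTORE };

  // Placeholder for the fil_names_clear() call that appends MLOG_FILE_NAME
  // and MLOG_CHECKPOINT records to the redo log.
  static void emit_file_name_records() { std::puts("redo records written"); }

  static void finish_checkpoint_recovery(Operation op) {
    if (op == Operation::BACKUP_RESTORE) {
      // mariabackup --prepare: appending even informational records would
      // advance the LSN and break subsequent incremental backups.
      return;
    }
    // Normal startup: write the records so that a crash right after
    // recovery remains recoverable.
    emit_file_name_records();
  }

  int main() {
    finish_checkpoint_recovery(Operation::BACKUP_RESTORE);  // writes nothing
    finish_checkpoint_recovery(Operation::NORMAL_STARTUP);  // writes records
    return 0;
  }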
Disable the memory leak check in the debug server if RocksDB is loaded.
There is a subtle bug somewhere in third-party code that we cannot
do much about.
The bug manifests as follows:
RocksDB does not shut down its worker threads when the plugin is shut down.
Thus the OS does not unload the library, since some active threads are
still using the library's code. Consequently, the global destructors in the
library do not run, and some memory is still allocated when the server exits.
The workaround disables the server's memory leak check if the RocksDB
engine was loaded.
The option innodb_log_compressed_pages was contributed by
Facebook to MySQL 5.6. It was disabled in the 5.6.10 GA release
due to problems that were fixed in 5.6.11, which is when the
option was enabled.
The option was set to innodb_log_compressed_pages=ON by default
(which disables the optimization of omitting compressed page images
from the redo log), because safety was considered more
important than speed. The option innodb_log_compressed_pages=OFF
can *CORRUPT* ROW_FORMAT=COMPRESSED tables on crash recovery
if the zlib deflate function is behaving differently (producing
a different amount of compressed data) from how it behaved
when the redo log records were written (prior to the crash recovery).
In MDEV-6935, the default value was changed to
innodb_log_compressed_pages=OFF. This is inherently unsafe, because
there are very many different environments where MariaDB can be
running, using different zlib versions. While zlib can decompress
data just fine, there are no guarantees that different versions will
always compress the same data to the exactly same size. To avoid
problems related to zlib upgrades or version mismatch, we must
use a safe default setting.
This will reduce the write performance for users of
ROW_FORMAT=COMPRESSED tables. If you configure
innodb_log_compressed_pages=ON, please make sure that you will
always cleanly shut down InnoDB before upgrading the server
or zlib.
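To make the zlib hazard concrete, here is a small self-contained example
(not MariaDB code; it varies only the compression level as a stand-in for
"different deflate behavior", such as a different zlib build). With
innodb_log_compressed_pages=OFF, crash recovery assumes that recompressing
a page produces the same amount of data as before the crash, which zlib
does not guarantee across versions:

  #include <cstddef>
  #include <cstdio>
  #include <vector>
  #include <zlib.h>

  int main() {
    // Moderately compressible input, similar in spirit to a database page.
    std::vector<unsigned char> src(16384);
    unsigned seed = 42;
    for (std::size_t i = 0; i < src.size(); i++) {
      seed = seed * 1103515245 + 12345;        // simple LCG
      src[i] = 'a' + (seed >> 16) % 16;        // small alphabet, compressible
    }

    // Compress the same bytes with different deflate parameters: the output
    // size depends on how the compressor behaves, which is exactly the
    // assumption that innodb_log_compressed_pages=OFF makes about recovery.
    for (int level = 1; level <= 9; level += 4) {
      uLongf dst_len = compressBound(src.size());
      std::vector<unsigned char> dst(dst_len);
      if (compress2(dst.data(), &dst_len, src.data(), src.size(), level)
          == Z_OK)
        std::printf("deflate level %d -> %lu bytes\n",
                    level, (unsigned long) dst_len);
    }
    return 0;
  }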
Since MariaDB 10.2.2, temporary table metadata is not written
to the InnoDB data dictionary tables. Therefore,
the DICT_TF2_TEMPORARY flag cannot be set in SYS_TABLES,
except if there exist orphan temporary tables that were created
before MariaDB 10.2.2.
trx_resurrect_table_locks(): Do not skip temporary tables.
If a resurrected transaction had modified a temporary table that was
created before MariaDB 10.2.2, that table would be treated
internally as a persistent table. It is safer to resurrect the
locks than to skip the table, because the table will be modified
during the transaction rollback.
buf_flush_page_cleaner_coordinator: In the first loop, use an
appropriate termination condition, waiting for !recv_writer_thread_active.
logs_empty_and_mark_files_at_shutdown(): Signal recv_sys->flush_start
in case the recv_writer_thread was never started, or
buf_flush_page_cleaner_coordinator failed to notice its termination.
innobase_start_or_create_for_mysql(): Remove a redundant, unreachable
condition, and properly release resources when aborting startup due to
recv_sys->found_corrupt_log.
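A generic sketch of the shutdown pattern being fixed, using standard C++
threads rather than the InnoDB event and thread primitives (all names here
are hypothetical stand-ins for recv_writer_thread_active and its wakeup):
the coordinator should wait on the actual "writer has exited" condition,
and that condition must also hold trivially when the writer was never
started at all.

  #include <atomic>
  #include <condition_variable>
  #include <mutex>
  #include <thread>

  // Hypothetical stand-ins for recv_writer_thread_active and its event.
  std::atomic<bool> writer_thread_active{false};
  std::mutex m;
  std::condition_variable cv;

  void recv_writer() {
    // ... flush work during recovery ...
    {
      std::lock_guard<std::mutex> lk(m);
      writer_thread_active = false;   // announce termination under the lock
    }
    cv.notify_all();
  }

  // Coordinator side: leave the first loop only once the writer has really
  // exited; this also holds if the writer was never started.
  void wait_for_recv_writer_exit() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return !writer_thread_active.load(); });
  }

  int main() {
    // Case 1: recovery never started a writer -- the wait returns at once.
    wait_for_recv_writer_exit();

    // Case 2: mark the writer active before spawning it, so the waiter
    // cannot miss it, then wait until it has finished.
    writer_thread_active = true;
    std::thread t(recv_writer);
    wait_for_recv_writer_exit();
    t.join();
    return 0;
  }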
Don't write to a temporary file; use String.
Remove strange one-liner "helpers"; use String methods.
Don't use current_thd, don't allocate memory for 1-byte strings, etc.
When opening a 10.1- table (created before 10.2) that has virtual columns:
1. Don't error out if it has vcols over autoinc columns;
just issue a warning.
2. Set the vcol type properly.
3. In InnoDB: use table->s->stored_fields instead of table->s->fields,
because that is what was stored in the InnoDB data dictionary.
Don't use the thd->query_id check in background purge threads
(it doesn't work, because thd->query_id is never incremented there).
Instead, use thd->open_tables directly; there can be only one table
there anyway, and it is the table opened by this purge thread.
When using innodb_page_size=16k, InnoDB tables
that were created in MariaDB 10.1.0 to 10.1.20 with
PAGE_COMPRESSED=1 and
PAGE_COMPRESSION_LEVEL=2 or PAGE_COMPRESSION_LEVEL=3
would fail to load.
fsp_flags_is_valid(): When using innodb_page_size=16k, use a
more strict check for .ibd files, with the assumption that
nobody would try to use different-page-size files.
This is a regression caused by
commit bb60a832ed
srv_shutdown_all_bg_threads(): If os_thread_count indicates that
no threads are running, do not bother checking thread status.
This avoids a crash when InnoDB startup is aborted before
os_aio_init() has been invoked. (os_aio_all_slots_free() would
dereference AIO::s_reads even though it is NULL.)
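A hedged sketch of the guard (the globals below are hypothetical stand-ins
for os_thread_count and AIO::s_reads): when startup is aborted before
os_aio_init() ran, the shutdown helper should return early rather than
inspect I/O state that was never allocated.

  #include <atomic>
  #include <cstddef>

  // Hypothetical stand-ins for os_thread_count and AIO::s_reads.
  std::atomic<std::size_t> thread_count{0};
  struct AioArray { bool all_slots_free() const { return true; } };
  AioArray* aio_reads = nullptr;   // os_aio_init() was never reached

  bool shutdown_all_bg_threads() {
    if (thread_count.load() == 0) {
      // No background threads were ever started: skip the status checks,
      // which would otherwise dereference the NULL AIO array and crash.
      return true;
    }
    return aio_reads->all_slots_free();
  }

  int main() { return shutdown_all_bg_threads() ? 0 : 1; }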
InnoDB I/O and buffer pool interfaces and the redo log format
have been changed between MariaDB 10.1 and 10.2, and the backup
code has to be adjusted accordingly.
The code has been simplified, and many memory leaks have been fixed.
Instead of the file name xtrabackup_logfile, the file name ib_logfile0
is being used for the copy of the redo log. Unnecessary InnoDB startup and
shutdown and some unnecessary threads have been removed.
Some help was provided by Vladislav Vaintroub.
Parameters have been cleaned up and aligned with those of MariaDB 10.2.
The --dbug option has been added, so that in debug builds,
--dbug=d,ib_log can be specified to enable diagnostic messages
for processing redo log entries.
By default, innodb_doublewrite=OFF, so that --prepare works faster.
If more crash-safety for --prepare is needed, double buffering
can be enabled.
The parameter innodb_log_checksums=OFF can be used to ignore redo log
checksums in --backup.
Some messages have been cleaned up.
Unless --export is specified, Mariabackup will not deal with undo log.
The InnoDB mini-transaction redo log is not only about user-level
transactions; it is actually about mini-transactions. To avoid confusion,
call it the redo log, not the transaction log.
We disable any undo log processing in --prepare.
Because MariaDB 10.2 supports indexed virtual columns, the
undo log processing would need to be able to evaluate virtual column
expressions. To reduce the amount of code dependencies, we will not
process any undo log in prepare.
This means that the --export option must be disabled for now.
This also means that the following options are redundant
and have been removed:
xtrabackup --apply-log-only
innobackupex --redo-only
In addition to disabling any undo log processing, we will disable any
further changes to data pages during --prepare, including the change
buffer merge. This means that restoring incremental backups should
reliably work even when change buffering is being used on the server.
Because of this, preparing a backup will not generate any further
redo log, and the redo log file can be safely deleted. (If the
--export option is enabled in the future, it must generate redo log
when processing undo logs and buffered changes.)
In --prepare, we cannot easily know if a partial backup was used,
especially when restoring a series of incremental backups. So, we
simply warn about any missing files, and ignore the redo log for them.
FIXME: Enable the --export option.
FIXME: Improve the handling of the MLOG_INDEX_LOAD record, and write
a test that initiates a backup while an ALGORITHM=INPLACE operation
is creating indexes or rebuilding a table. An error should be detected
when preparing the backup.
FIXME: In --incremental --prepare, xtrabackup_apply_delta() should
ensure that if FSP_SIZE is modified, the file size will be adjusted
accordingly.
The POINT data type is being treated just like any other
geometry data type in InnoDB. The fixed-length data type
DATA_POINT had been introduced in WL#6942 based on a
misunderstanding and without appropriate review.
Because of fundamental design problems (such as a
DEFAULT POINT(0 0) value secretly introduced by InnoDB),
the code was disabled in the Oracle Bug#20415831 fix.
This patch removes the dead code and definitions that were
left behind by the Oracle Bug#20415831 patch.
This is preparation for MDEV-12288, which would set DB_TRX_ID=0
when purging history. Also with that change in place, delete-marked
records must always refer to an undo log record via a nonzero
DB_TRX_ID column. (The DB_TRX_ID is only present in clustered index
leaf page records.)
btr_cur_parse_del_mark_set_clust_rec(), rec_get_trx_id():
Statically allocate the offsets
(should never use the heap). Add some debug assertions.
Replace some use of rec_get_trx_id() with row_get_rec_trx_id().
trx_undo_report_row_operation(): Add some sanity checks that are
common for all operations that produce undo log.
- Added variable tmp_disk_table_size
- Added variable tmp_memory_table_size as an alias for tmp_table_size
- Changed internal variable tmp_table_size to tmp_memory_table_size
- create_info.data_file_length is now set with tmp_disk_table_size
- Fixed so that Aria does not reset max_data_file_length for internal tables
- Added a status flag that is set when the table is full, so that we can
detect this on the next insert. This ensures that the table is always
'correct', but the error is reported one row after the row that grew the
table too big (see the sketch after this list).
- Removed some mutex locks for internal temporary tables
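A minimal sketch of the "table is full" status flag described above
(generic C++, not the actual Aria/temporary-table code): the insert that
pushes the table over its size limit still succeeds and merely sets the
flag, so the table stays consistent and the error surfaces on the
following insert.

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Hypothetical in-memory "table" with a size limit and a full-flag.
  struct TmpTable {
    std::size_t max_bytes;
    std::size_t used_bytes = 0;
    bool is_full = false;
    std::vector<int> rows;

    bool insert(int row, std::size_t row_bytes) {
      if (is_full)
        return false;                 // error reported one row "late"
      rows.push_back(row);
      used_bytes += row_bytes;
      if (used_bytes > max_bytes)
        is_full = true;               // remember for the next insert
      return true;
    }
  };

  int main() {
    TmpTable t{100};                  // limit: 100 bytes
    for (int i = 0; i < 10; i++) {
      if (!t.insert(i, 30)) {         // 30 bytes per row
        std::printf("table full, error reported at row %d\n", i);
        break;
      }
    }
    return 0;
  }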