and make sure that the private CA key is not deleted at the end of
the procedure, so that we can generate additional certificates
at any time without regenerating everything
These changes are comparable to Percona's modifications in innodb in the
Percona Xtrabackup repository.
- If functions are used in backup as well as in innodb, make them non-static.
- Define IS_XTRABACKUP() macro for special handling of innodb running
inside backup (see the sketch after this list).
- Extend some functions for backup.
fil_space_for_table_exists_in_mem() gets an additional parameter
'remove_from_data_dict_if_does_not_exist', for partial backups.
fil_load_single_table_tablespaces() gets an optional predicate parameter
which tells whether to load a tablespace based on database or table name,
also for partial backups.
srv_undo_tablespaces_init() gets an optional parameter 'backup_mode'
- Allow single redo log file (for backup "prepare")
- Do not read doublewrite buffer pages in backup, they are outdated
- Add function fil_remove_invalid_table_from_data_dict(), to remove non-existing
tables from the data dictionary in case of partial backups.
- On Windows, fix file share modes when opening tablespaces,
to allow mariabackup to read tablespaces while the server is online.
- Avoid access to THDVARs in backup, because the innodb plugin is not loaded,
and THDVAR would crash in this case.
- Backup will load encryption plugins outside of mysqld. Thus, do not
force loading MyISAM plugin in plugin_load.
- init_signals() will be used in backup, make it global, not static.
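As an illustration of the IS_XTRABACKUP() item above, a minimal, self-contained
sketch; the flag and the example function are hypothetical, not the actual
mariabackup sources:

  /* Hypothetical flag set by the backup binary; shared innodb code can then
     branch on whether it runs inside mariabackup or inside mysqld. */
  bool xtrabackup_mode = false;
  #define IS_XTRABACKUP() (xtrabackup_mode)

  void buf_read_page_example()
  {
    if (IS_XTRABACKUP())
    {
      /* backup code path, e.g. skip reading doublewrite buffer pages */
    }
    else
    {
      /* regular server code path */
    }
  }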
- Do not throw away the output of the exec command if disable_result_log
is set; save it and dump it if exec fails. Need that to meaningfully
analyze errors from mariabackup.
- rmdir now removes the entire tree. Need that because xtrabackup tests
clean the whole directory.
- All filesystem-modifying commands now require the argument to
be under MYSQLTEST_VARDIR or MYSQL_TMP_DIR.
Do not export mysqld entry points directly.
This is needed for mariabackup, to load encryption plugins on Windows.
All plugins are "pure" by default. To mark a plugin "impure",
it should use the RECOMPILE_FOR_EMBEDDED or STORAGE_ENGINE keyword.
Because mysql->net.thd was reset to NULL in mysql_real_connect(),
thd_increment_bytes_received() didn't do anything.
Fix:
* set mysql->net.thd to current_thd instead.
* remove the test for a non-NULL THD from the very frequently used
function thd_increment_bytes_received().
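A rough, self-contained sketch of that fix (the THD type here is a placeholder
and the signature is an assumption; only the removal of the NULL test is the
point):

  #include <cstddef>

  struct THD { unsigned long long bytes_received = 0; };  /* placeholder THD */

  /* Before the fix, the function tested thd for NULL and silently did nothing,
     because mysql->net.thd had been reset to NULL.  After the fix, callers
     store current_thd in net.thd, so the test can be dropped from this very
     frequently called function. */
  void thd_increment_bytes_received(THD* thd, size_t length)
  {
    thd->bytes_received += length;   /* no NULL test on the hot path */
  }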
On several buildbot machines, the test fails like this:
CURRENT_TEST: wsrep.binlog_format
mysqltest: At line 12: query 'SET binlog_format=STATEMENT' failed:
1231: Variable 'binlog_format' can't be set to the value of 'STATEMENT'
ANALYSIS
This is a regression caused by worklog 6742, which
implemented ha_innobase::records(), which always
uses the clustered index to get the row count. Previously
the optimizer chose a secondary index, smaller in
size than the clustered index, to scan for rows, which
resulted in a quicker scan.
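For context, a self-contained sketch of the earlier behaviour (types and names
are illustrative, not the optimizer's actual code): pick the index with the
fewest leaf pages for a full row count.

  #include <cstdint>
  #include <vector>

  struct Index { const char* name; uint64_t leaf_pages; };

  /* indexes[0] is the clustered index; a smaller secondary index means fewer
     pages to read, hence a quicker full scan for the row count. */
  const Index& pick_index_for_row_count(const std::vector<Index>& indexes)
  {
    const Index* best = &indexes[0];
    for (const Index& ix : indexes)
      if (ix.leaf_pages < best->leaf_pages)
        best = &ix;
    return *best;
  }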
FIX
After discussion it was decided to remove this feature in 5.7.
[#rb14040 Approved by Kevin and Oystein ]
- Allow the server to start if innodb_force_recovery is set to 6
even though the change buffer is not empty.
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
Problem:
========
- DROP TABLE asserts if innodb_force_recovery is set to 5 or 6.
For innodb_force_recovery 5 and 6, InnoDB does not scan the undo log
and leaves the redo rollback segments as NULL, so there is no way for
a transaction to write any undo log.
- If innodb_force_recovery is set to 6, InnoDB does not do the
redo log roll-forward in connection with recovery. In this case,
log_sys will only be initialized and will not have the latest
checkpoint information. A checkpoint is still done during shutdown even
when innodb_force_recovery is set to 6, which leads to an incorrect
update of the checkpoint header.
Solution:
========
1) Allow DROP TABLE only if innodb_force_recovery < 5.
2) Make InnoDB read-only if innodb_force_recovery is set to 6.
3) During shutdown, remove the checkpoint if innodb_force_recovery
is set to 6.
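A self-contained sketch of the resulting decision logic (the constants mirror
the levels described above; the function names are illustrative):

  /* Levels as described above: 5 = no undo log scan, 6 = no redo roll-forward. */
  enum { FORCE_NO_UNDO_LOG_SCAN = 5, FORCE_NO_LOG_REDO = 6 };

  /* 1) DROP TABLE must write undo log, which is impossible at levels 5 and 6. */
  bool drop_table_allowed(unsigned force_recovery)
  {
    return force_recovery < FORCE_NO_UNDO_LOG_SCAN;
  }

  /* 2) At level 6 there is no redo roll-forward, so treat InnoDB as read-only.
     (3, removing the checkpoint at shutdown, is not shown here.) */
  bool innodb_is_read_only(bool read_only_option, unsigned force_recovery)
  {
    return read_only_option || force_recovery >= FORCE_NO_LOG_REDO;
  }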
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 15075
Problem:
========
During a checkpoint, we write all MLOG_FILE_NAME records in one mtr,
and the parse buffer cannot be processed until MLOG_MULTI_REC_END is seen.
Eventually the parse buffer exceeds RECV_PARSING_BUF_SIZE and overflows.
Fix:
===
1) During the checkpoint, break the large mtr into multiple mtrs if it exceeds LOG_CHECKPOINT_FREE_PER_THREAD.
2) Move the parsing buffer if we are encountering only MLOG_FILE_NAME
records, so that it never exceeds RECV_PARSING_BUF_SIZE.
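A rough sketch of fix (1), with stand-in names for the mini-transaction
interface (everything below is illustrative, not the actual checkpoint code):

  #include <cstddef>
  #include <string>
  #include <vector>

  const size_t CHUNK_LIMIT = 4096;  /* stand-in for LOG_CHECKPOINT_FREE_PER_THREAD */

  /* Write the tablespace names in several mini-transactions instead of one huge
     one, so recovery never has to buffer a single oversized record group. */
  void write_file_names_at_checkpoint(const std::vector<std::string>& names,
                                      void (*mtr_start)(),
                                      void (*mtr_commit)(),  /* emits MLOG_MULTI_REC_END */
                                      size_t (*write_name_rec)(const std::string&))
  {
    size_t bytes_in_mtr = 0;
    mtr_start();
    for (const std::string& name : names)
    {
      if (bytes_in_mtr > CHUNK_LIMIT)
      {
        mtr_commit();           /* close the group before it grows too large */
        mtr_start();
        bytes_in_mtr = 0;
      }
      bytes_in_mtr += write_name_rec(name);
    }
    mtr_commit();
  }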
Reviewed-by: Debarun Bannerjee <debarun.bannerjee@oracle.com>
Reviewed-by: Rahul M Malik <rahul.m.malik@oracle.com>
RB: 14743
Description:
===========
Add my_thread_init() and my_thread_exit() for background threads, which
initialize and free the st_my_thread_var structure.
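A minimal sketch of the expected thread body; the stubs below merely stand in
for the real mysys functions, and the thread signature is only an example:

  /* Stand-in stubs; the real functions manage the per-thread st_my_thread_var. */
  static void my_thread_init_stub() { /* allocate st_my_thread_var */ }
  static void my_thread_exit_stub() { /* free st_my_thread_var */ }

  extern "C" void* example_background_thread(void*)
  {
    my_thread_init_stub();   /* first thing a background thread does */
    /* ... background work (purge, page cleaner, etc.) ... */
    my_thread_exit_stub();   /* last thing before the thread terminates */
    return nullptr;
  }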
Reviewed-by: Jimmy Yang<jimmy.yang@oracle.com>
RB: 15003
Problem:
During read-ahead, the wrong page size is used to calculate the tablespace size.
Fix:
Use physical page size to calculate tablespace size
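A small illustration of the corrected calculation (the helper name is
hypothetical). For a compressed tablespace the physical on-disk page size
differs from the logical page size, so the file size must be divided by the
physical one:

  #include <cstdint>

  /* Size of the tablespace in pages; dividing by the wrong page size yields a
     wrong page count, which misleads the read-ahead logic. */
  uint64_t tablespace_size_in_pages(uint64_t file_size_bytes,
                                    uint32_t physical_page_size)
  {
    return file_size_bytes / physical_page_size;
  }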
Reviewed-by: Satya Bodapati
RB: 14993
Split the test case so that a server restart is not needed.
Reduce the test cases and use a simpler mechanism for triggering
and waiting for purge.
fil_table_accessible(): Check if a table can be accessed, without
relying on MDL protection.
PROBLEM
When truncating single-tablespace tables, we need to scan the entire
buffer pool to remove the pages of the table from the buffer pool.
During this scan and removal, dict_sys->mutex is being held, causing
stalls in other DDL operations.
FIX
Release dict_sys->mutex during the scan and reacquire it after the
scan. Make sure that the purge thread does not purge records of the table
being truncated and that the background stats collection thread skips
updating stats for the table being truncated.
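A schematic sketch of the locking change (std::mutex stands in for
dict_sys->mutex, and the scan callback is a placeholder):

  #include <mutex>

  std::mutex dict_mutex;   /* stand-in for dict_sys->mutex */

  /* Caller enters and leaves with dict_mutex held; the long buffer pool scan
     itself runs without it, so other DDL is not stalled in the meantime. */
  void truncate_evict_pages(void (*scan_buffer_pool_and_remove_pages)())
  {
    dict_mutex.unlock();
    scan_buffer_pool_and_remove_pages();
    dict_mutex.lock();
  }

The guards that keep purge and the statistics thread away from the table being
truncated are separate flags and are not shown here.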
[#rb 14564 Approved by Jimmy and satya ]
srv_sys_t::n_threads_active[]: Protect writes by both the mutex and
by atomic memory access.
srv_active_wake_master_thread_low(): Reliably wake up the master
thread if there is work to do. The trick is to atomically read
srv_sys->n_threads_active[].
srv_wake_purge_thread_if_not_active(): Atomically read
srv_sys->n_threads_active[] (and trx_sys->rseg_history_len),
so that the purge should always be triggered when there is work to do.
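A self-contained sketch of that check, with std::atomic standing in for the
InnoDB counters (names are stand-ins, not the real symbols):

  #include <atomic>

  std::atomic<unsigned> purge_threads_active{0};   /* stand-in for srv_sys->n_threads_active[] */
  std::atomic<unsigned long> rseg_history_len{0};  /* stand-in for trx_sys->rseg_history_len */

  /* Wake the purge coordinator only if it is idle and there is history to purge;
     both counters are read atomically, so a concurrent commit is not missed. */
  void wake_purge_thread_if_not_active(void (*wake_purge)())
  {
    if (purge_threads_active.load(std::memory_order_relaxed) == 0
        && rseg_history_len.load(std::memory_order_relaxed) > 0)
      wake_purge();
  }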
trx_commit_in_memory(): Invoke srv_wake_purge_thread_if_not_active()
whenever a transaction is committed. Purge could have been prevented by
the read view of the currently committing transaction, even if it is
a read-only transaction.
trx_purge_add_update_undo_to_history(): Do not wake up the purge.
This is only called by trx_undo_update_cleanup(), as part of
trx_write_serialisation_history(), which in turn is only called by
trx_commit_low() which will always call trx_commit_in_memory().
Thus, the added call in trx_commit_in_memory() will cover also
this use case where a committing read-write transaction added
some update_undo log to the purge queue.
trx_rseg_mem_restore(): Atomically modify trx_sys->rseg_history_len.
Problem:
=======
A concurrent UPDATE DML statement is not reflected in the virtual index
during an inplace table rebuild. This results in mismatched values between
the virtual index and the clustered index. Deleting the table contents
tries to search for the mismatched value in the virtual index but cannot
find it. During the log update apply phase, virtual column information is
ignored while constructing the new entry.
Solution:
=========
In the row_log_update_apply phase, build the entry with virtual column
information, so that it is reflected in the newly constructed virtual index.
Reviewed-by: Jimmy Yang<jimmy.yang@oracle.com>
RB: 14974
Issue:
======
Disabling macros such as UNIV_PFS_MUTEX/UNIV_PFS_RWLOCK/UNIV_PFS_THREAD,
which are defined in InnoDB, causes compilation errors.
Fix:
====
Fix all the compilation errors.
RB: 14893
Reviewed-by: Jimmy Yang <Jimmy.Yang@oracle.com>
Reviewed-by: Satya Bodapati <satya.bodapati@oracle.com>
Issue:
======
The issue is that if an FTS index is present in a table, the space size is
incorrectly calculated in the case of truncate, which results in an invalid
read.
Fix:
====
Use a different space size calculation in truncate if FTS indexes are
present.
RB:14755
Reviewed-by: Shaohua Wang <shaohua.wang@oracle.com>
Analysis:
========
There was a missing bracket for an IF condition in dict_stats_analyze_index_level(),
which led to a wrong result.
Fix:
====
Fix the IF condition in dict_stats_analyze_index_level() so that it is satisfied
only when the level is zero.
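The bug class looks roughly like this (illustrative code, not the actual
dict_stats_analyze_index_level() body):

  /* Without braces, only the first statement is guarded by the level check. */
  void analyze_level_buggy(int level, int& n_leaf, int& n_sampled)
  {
    if (level == 0)
      n_leaf++;      /* intended to be conditional on level == 0 */
      n_sampled++;   /* BUG: runs for every level */
  }

  void analyze_level_fixed(int level, int& n_leaf, int& n_sampled)
  {
    if (level == 0) {   /* braces make both statements conditional */
      n_leaf++;
      n_sampled++;
    }
  }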
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
Analysis:
========
Field name comparison happens while filling in the virtual columns
affected by a foreign key constraint. But the field name is NULL in the
virtual index for a newly added virtual column.
Fix:
===
Ignore the index if it has a newly added virtual column. Information
about virtual columns affected by foreign keys is filled in during the
table loading operation.
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 14895
Problem :
---------
Information_Schema.referential_constraints (UNIQUE_CONSTRAINT_NAME)
shows NULL for a foreign key constraint after restarting the server.
If any DML or query (SELECT/INSERT/UPDATE/DELETE) is done on the
referenced table, then the constraint name is shown correctly.
Solution :
----------
UNIQUE_CONSTRAINT_NAME column is the key name of the referenced table.
In innodb, FK reference is stored as a list of columns in referenced
table in INNODB_SYS_FOREIGN and INNODB_SYS_FOREIGN_COLS. The referenced
column must have at least one index/key with the referenced column as
prefix but the key name itself is not included in FK metadata. For this
reason, the UNIQUE_CONSTRAINT_NAME is only filled up when the
referenced table is actually loaded in innodb dictionary cache.
The information_schema view calls handler::get_foreign_key_list() on the
foreign key table to read the FK metadata. The UNIQUE_CONSTRAINT_NAME
information therefore shows either NULL or the real key name, depending on
whether the referenced table is already loaded or not.
One way to fix this issue is to load the referenced table while reading
the FK metadata information, if needed.
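A hypothetical sketch of that idea; the types and the load callback are
placeholders, not the actual InnoDB dictionary API:

  struct dict_table { bool loaded_in_cache; const char* referenced_key_name; };

  /* While reading FK metadata for the I_S view, load the referenced table on
     demand so that its key name can be reported instead of NULL. */
  const char* unique_constraint_name(dict_table* referenced,
                                     dict_table* (*load_table)(dict_table*))
  {
    if (!referenced->loaded_in_cache)
      referenced = load_table(referenced);
    return referenced->referenced_key_name;
  }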
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
RB: 14654