MDEV-9469: 'Incorrect key file' on ALTER TABLE
InnoDB needs to rebuild the table if a column is renamed and an index
(or foreign key) added in the same ALTER TABLE is defined on the new
name.
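A minimal sketch of the rebuild condition (hypothetical helper, not the
actual ha_innobase ALTER code): if any index added in the same
ALTER TABLE refers to a column by its new, post-rename name, the table
must be rebuilt rather than altered in place.

    #include <set>
    #include <string>
    #include <vector>

    // Hypothetical illustration of the condition described above.
    static bool needs_rebuild(
        const std::set<std::string>& renamed_to_names,
        const std::vector<std::vector<std::string>>& added_index_columns)
    {
        for (const auto& index : added_index_columns) {
            for (const auto& col : index) {
                if (renamed_to_names.count(col)) {
                    return true;    // index uses a renamed column
                }
            }
        }
        return false;
    }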
Fix the doubly questionable fix for MySQL Bug#17250787:
* it detected the autoinc index by taking the first index that starts
  with the autoinc column, ignoring that one column can be part of
  many indexes.
* it used autoinc_field->field_index to look up the column in the
  internal InnoDB dictionary. But field_index counts virtual columns
  too, while the InnoDB dictionary ignores them.
Find the index by its name, like elsewhere in ha_innobase.
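A hedged sketch of "find the index by its name" (types and names are
hypothetical, not the InnoDB dictionary API): scan the table's indexes
and compare names instead of relying on the first index that starts
with the autoinc column or on field_index, which also counts virtual
columns.

    #include <cstring>
    #include <vector>

    struct index_t { const char* name; };

    static const index_t* find_index_by_name(
        const std::vector<index_t>& indexes, const char* name)
    {
        for (const auto& idx : indexes) {
            if (strcmp(idx.name, name) == 0) {
                return &idx;
            }
        }
        return nullptr;    // no index with that name
    }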
During ALTER TABLE, when the server renames the table to a temporary
name, the old name follows the normal partitioned table naming rules.
However, if the tables were created on Windows and then transferred to
Linux with lower_case_table_names=1, we must convert the old name to
lower case on rename to be able to find it in the InnoDB dictionary.
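A minimal sketch of the idea, assuming a hypothetical helper (this is
not the actual rename code): with lower_case_table_names=1, fold the
old name to lower case before the InnoDB dictionary lookup.

    #include <algorithm>
    #include <cctype>
    #include <string>

    static std::string dict_lookup_name(std::string old_name,
                                        int lower_case_table_names)
    {
        if (lower_case_table_names == 1) {
            // Names created on Windows may contain upper-case letters;
            // the dictionary entry is searched in lower case.
            std::transform(old_name.begin(), old_name.end(), old_name.begin(),
                           [](unsigned char c) {
                               return static_cast<char>(std::tolower(c));
                           });
        }
        return old_name;
    }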
IBM System Z may have an outdated compiler version that supports the
GCC sync builtins but not the GCC atomic builtins. It also has a weak
memory model.
InnoDB attempted to verify whether __sync_lock_test_and_set() is
available by checking IB_STRONG_MEMORY_MODEL. That macro has nothing to
do with the availability of __sync_lock_test_and_set(); the right one
is HAVE_ATOMIC_BUILTINS.
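A hedged sketch of the corrected guard (the helper name is
hypothetical; the macros are the ones discussed above): use of
__sync_lock_test_and_set() is gated on HAVE_ATOMIC_BUILTINS, not on the
unrelated IB_STRONG_MEMORY_MODEL.

    #ifdef HAVE_ATOMIC_BUILTINS
    static inline long
    my_test_and_set(volatile long* lock_word, long new_val)
    {
        /* Returns the previous value of *lock_word. */
        return __sync_lock_test_and_set(lock_word, new_val);
    }
    #endif /* HAVE_ATOMIC_BUILTINS */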
Backport pull request #125 from grooverdan/MDEV-8923_innodb_buffer_pool_dump_pct to 10.0
WL#6504 InnoDB buffer pool dump/load enhancements
This patch consists of two parts:
1. Dump only the hottest N% of the buffer pool(s)
2. Prevent hogging the server during BP load
From MySQL - commit b409342c43ce2edb68807100a77001367c7e6b8e
Add testcases for innodb_buffer_pool_dump_pct_basic.
Part of the code authored by Daniel Black
Column names were compared case sensitively, while according to the
Storage Engine API column names should be compared case insensitively.
This can cause the FRM file and the InnoDB data dictionary to go out of
sync.
thd_progress_init() was called from several threads used for FT-index
creation. FT indexes need a better way to report progress; remove the
current reporting for them.
Correct the InnoDB calls to the progress report API:
- the second argument of thd_progress_init() is the number of stages;
  one stage is enough for the row merge
- the third argument of thd_progress_report() is the threshold (maximum
  progress); for the row merge it is file->offset
- the second argument of thd_progress_report() is the current state;
  for the row merge it is file->offset - num_runs.
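A minimal sketch of the corrected call pattern, using MariaDB's
progress report service (mysql/service_progress_report.h); the
offset/num_runs parameters stand in for the row-merge counters named
above.

    #include <mysql/plugin.h>
    #include <mysql/service_progress_report.h>

    static void report_merge_progress(MYSQL_THD thd,
                                      unsigned long long offset,
                                      unsigned long long num_runs)
    {
        thd_progress_init(thd, 1);      /* one stage for the row merge */

        /* current state = file->offset - num_runs,
           threshold      = file->offset */
        thd_progress_report(thd, offset - num_runs, offset);

        thd_progress_end(thd);
    }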
fix innodb auto-increment handling
three bugs:
1. innobase_next_autoinc treated the case of current<offset incorrectly
2. ha_innobase::get_auto_increment didn't recalculate current when increment changed
3. ha_innobase::get_auto_increment didn't pass offset down to innobase_next_autoinc
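A hedged sketch of the arithmetic involved (this is not the actual
innobase_next_autoinc() signature or code): with an increment 'step'
and an 'offset', the generated values are offset, offset + step,
offset + 2*step, ..., so current < offset must map to offset, and the
next value must be recomputed whenever the increment changes.

    #include <cstdint>

    // Assumes step >= 1; overflow handling omitted for brevity.
    static uint64_t next_autoinc(uint64_t current, uint64_t step,
                                 uint64_t offset)
    {
        if (current < offset) {
            return offset;      // the case that used to be mishandled
        }
        // Round up to the next value in the sequence offset + n*step.
        uint64_t n = (current - offset) / step + 1;
        return offset + n * step;
    }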
Analysis: There were two problems. (1) If a partitioned table was
created with lower_case_table_names=1 on Windows, we found the correct
table but did not set share->ib_table correctly. (2) We opened the
table in the dictionary but did not increase mysql_open_tables.
Fix: In XtraDB, allow access to tables whose names are in the wrong
lettercase (a warning is printed to the error log). If such a table is
opened, increase the mysql_open_tables count to avoid a crash on
FLUSH TABLES.
- Added some extra commands to rpl_start_stop to ensure that the
  IO thread has connected to the master before we shut down the server.
- If signal() returns signalhandler_t, use this with the alarm code
- Added missing tests to sys_vars
- Fixed some possible overflow bugs in tabxml.cpp
Analysis: The current implementation writes and reads at least one
block (sort_buffer_size bytes) from disk / index even if that block
does not contain any records.
Fix: Avoid writing/reading empty blocks to/from temporary files (disk).
Note: Backporting the patch from mysql-5.6.
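A minimal illustration of the fix, with hypothetical types standing in
for the sort buffer block and temporary file: a block that holds no
records is not written at all.

    #include <cstddef>
    #include <cstdio>

    struct sort_block_t {   // hypothetical stand-in for a sort buffer block
        const char* data;
        size_t      used_bytes;
        size_t      num_records;
    };

    static bool write_block_if_nonempty(const sort_block_t& block,
                                        FILE* tmp_file)
    {
        if (block.num_records == 0) {
            return true;    // skip the disk round trip entirely
        }
        return fwrite(block.data, 1, block.used_bytes, tmp_file)
               == block.used_bytes;
    }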
Problem:
A CREATE TABLE with an invalid table name is detected at the SQL layer,
so the table name is reset to an empty string. But the storage engine
is called with this empty table name. The table name is specified as
"database/table", so in this scenario we get only "database/".
Solution:
Within InnoDB, detect this error and report it to the higher layer.
rb#9274 approved by Jimmy.
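A hedged illustration of the check (not the actual InnoDB code): with
names of the form "database/table", an empty table name leaves the
normalized name ending in '/', which can be rejected before touching
the data dictionary.

    #include <cstring>

    static bool table_name_part_is_empty(const char* norm_name)
    {
        size_t len = strlen(norm_name);
        return len == 0 || norm_name[len - 1] == '/';
    }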
recv_find_max_checkpoint(): Amend the error message to give advice
about downgrading. The 5.7.9 redo log format was intentionally changed
so that older MySQL versions will not find a valid redo log checkpoint.
PROBLEM
Whenever we insert into a unique secondary index, we take shared locks
on all possible duplicate records present in the table. But during a
REPLACE on the unique secondary index, we take exclusive locks on all
the duplicate records.
When records are deleted, they are first delete-marked and later purged
by the purge thread. While purging a record we call
lock_update_delete(), which in turn calls lock_rec_inherit_to_gap() to
inherit the locks of the deleted record. In REPEATABLE READ mode we
inherit all the locks from the record to the next record, but in READ
COMMITTED mode we skip inheriting them as gap type locks. We make an
exception if the lock on the record is in shared mode: we assume it was
set during an insert into a unique secondary index and needs to be
inherited to prevent constraint violation.
We did not handle the case when exclusive locks are set during REPLACE;
we skipped inheriting the locks of these records and hence caused
constraint violation.
FIX
While inheriting the locks, check whether the transaction is allowed to
do TRX_DUP_REPLACE/TRX_DUP_IGNORE; if so, inherit the locks.
[Reviewed by Jimmy, rb#9709]
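A simplified sketch of the inheritance condition described in the FIX
above (boolean parameters stand in for the real lock and transaction
state; this is not the actual lock_rec_inherit_to_gap() code):

    // In READ COMMITTED, locks are normally not inherited as gap locks.
    // The exceptions are locks protecting duplicate checking on a unique
    // secondary index: shared locks taken by INSERT and, with this fix,
    // exclusive locks taken under TRX_DUP_REPLACE/TRX_DUP_IGNORE.
    static bool inherit_as_gap_lock(bool read_committed,
                                    bool lock_is_shared,
                                    bool trx_allows_dup_replace_or_ignore)
    {
        if (!read_committed) {
            return true;    // REPEATABLE READ: inherit everything
        }
        return lock_is_shared || trx_allows_dup_replace_or_ignore;
    }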
The root cause is that x86 has a stronger memory model than the ARM
processors, and the GCC builtins did not issue the correct fences when
setting/unsetting the lock word, in particular during the mutex
release.
The solution is to rewrite the atomic TAS operations: replace '__sync_'
with '__atomic_' where possible.
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
Reviewed-by: Bin Su <bin.x.su@oracle.com>
Reviewed-by: Debarun Banerjee <debarun.banerjee@oracle.com>
Reviewed-by: Krunal Bauskar <krunal.bauskar@oracle.com>
RB: 9782
RB: 9665
RB: 9783
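A minimal sketch of the '__atomic_' form of a test-and-set lock word
(illustrative only, not the actual patch): the __atomic_ builtins let
the caller request the acquire/release fences that weakly ordered CPUs
such as ARM need.

    static bool lock_word = false;

    static inline bool tas_try_lock(void)
    {
        /* ACQUIRE: accesses in the critical section cannot be reordered
           before the lock is taken. */
        return !__atomic_test_and_set(&lock_word, __ATOMIC_ACQUIRE);
    }

    static inline void tas_unlock(void)
    {
        /* RELEASE: stores in the critical section become visible before
           the lock word is cleared. */
        __atomic_clear(&lock_word, __ATOMIC_RELEASE);
    }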