Problem:
=======
During the copy algorithm, InnoDB fails to detect duplicate key errors for a unique hash key blob index. A unique HASH index is treated as a virtual index inside InnoDB. When a table has a unique hash key, the server searches on the hash key before each insert operation and finds duplicate values in check_duplicate_long_entry_key(). Bulk insert, however, applies all the inserts together once the copy of the intermediate table has finished. This means the duplicate key error goes undetected while building the index.

Solution:
========
Avoid the bulk insert operation when the table has a unique hash key blob index.

dict_table_t::can_bulk_insert(): Checks whether the table is eligible for the bulk insert operation during the ALTER copy algorithm. It checks whether any virtual column name starts with DB_ROW_HASH_, which indicates that a blob column has a unique index on it.
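The eligibility check described above can be illustrated with a minimal standalone sketch. This is not the actual dict_table_t::can_bulk_insert() implementation (which walks InnoDB's dictionary structures); the function signature and the use of std::string here are illustrative assumptions. Only the DB_ROW_HASH_ prefix is taken from the commit message:

```cpp
#include <string>
#include <vector>

// Prefix MariaDB uses for the hidden hash columns backing long unique
// (unique blob) indexes, per the commit message above.
static const char HASH_PREFIX[] = "DB_ROW_HASH_";

// Hypothetical sketch: a table is eligible for bulk insert during the
// ALTER copy algorithm only if none of its virtual column names carries
// the DB_ROW_HASH_ prefix, i.e. it has no unique hash key blob index.
bool can_bulk_insert(const std::vector<std::string>& virtual_col_names)
{
  for (const std::string& name : virtual_col_names)
    if (name.compare(0, sizeof(HASH_PREFIX) - 1, HASH_PREFIX) == 0)
      return false;  // unique hash index present: fall back to row-by-row
                     // inserts so check_duplicate_long_entry_key() runs
  return true;       // no hash columns: bulk insert is safe
}
```

With this check, a table whose only virtual columns are ordinary generated columns remains eligible, while one carrying a hidden DB_ROW_HASH_1 column is forced onto the row-by-row path where duplicates are detected before insertion.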