1. Revert the incorrect treatment of m_needs_reopen;
2. Close a single instance of TABLE instead of all instances, since we
reopen only those instances that are marked for reopen.
Optimize the test by dropping the table early and by using only
one undo log thread, so that purge will be doing more useful work
and less busy work of suspending and resuming the worker threads.
The test used to cause a shutdown timeout on 10.4 on buildbot, and
for me locally when using --mysqld=--innodb-sync-debug.
With these tweaks, it passes for me with --mysqld=--innodb-sync-debug.
It looks like the merge of MySQL 5.7.9 to MariaDB 10.2.2 conflicted with
earlier changes that were made in MDEV-8588.
row_search_mvcc(): If the page is corrupted, avoid invoking
btr_cur_store_position(). The caller should not try to fetch
the next record after a hard error.
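A sketch of the intended control flow (the condition name is illustrative,
and the snippet is not the literal patch):

    /* Inside row_search_mvcc(), sketched: on a hard error such as page
       corruption, bail out before saving the cursor position, so that a
       subsequent fetch-next cannot resume from a bogus stored position. */
    if (page_is_corrupted)                 /* illustrative condition */
    {
        err = DB_CORRUPTION;
        goto lock_wait_or_error;           /* btr_cur_store_position() skipped */
    }
    btr_cur_store_position(btr_cur, &mtr); /* reached only on success */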
No memory access violated the bounds of fake_extra_buf[],
but GCC does not like the fact that the pointer fake_extra
ends up pointing before the array.
Allocate a dummy element at the start of fake_extra_buf[]
in order to silence the warning.
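A minimal sketch of that workaround (array size and context simplified):

    /* The dummy element at index 0 is never read or written; it only
       ensures that fake_extra never points before the array, which is
       what GCC warned about. */
    byte  fake_extra_buf[1 + REC_N_NEW_EXTRA_BYTES];
    byte* fake_extra = fake_extra_buf + 1;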
ins_node_create() does not initialize all members of que_common_t, so
zero-init them with mem_heap_zalloc().
Handle out-of-memory correctly.
Init insert_node->common.parent to fulfill the contract of thr usage.
Free insert_node subtree at row_update_vers_insert() exit.
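A sketch of the allocation pattern described above, using InnoDB-style
names (simplified, not the exact patch):

    /* Zero-initialize the whole node so that every member of
       que_common_t starts out cleared, and propagate allocation failure
       instead of dereferencing a NULL pointer. */
    ins_node_t* node = static_cast<ins_node_t*>(
        mem_heap_zalloc(heap, sizeof *node));
    if (node == NULL)
        return NULL;                 /* out-of-memory handled by caller */
    node->common.type   = QUE_NODE_INSERT;
    node->common.parent = thr;       /* fulfil the contract of thr usage */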
Problem:
=======
DROP TABLE IF EXISTS was killed. The table still exists on
the master, but the DDL was still written to the binary log.
Analysis:
=========
During the execution of a DROP TABLE command, the "ha_delete_table" call is
invoked to delete the table. If the query is killed at this point, the kill
is not handled within the code. This results in two issues:
1) A table which was not dropped also gets written into the binary log.
2) The code continues execution even after receiving 'KILL QUERY'.
Fix:
===
Upon receiving the KILL command, the query should stop its current execution.
Tables which were successfully dropped prior to KILL command should be
included in the binary log.
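A sketch of the fixed loop structure (simplified; per-table error handling
and the binlog write itself are omitted):

    /* Stop dropping further tables at the first KILL, but keep the
       record of tables already dropped so that they still get written
       to the binary log. */
    for (TABLE_LIST* table = tables; table; table = table->next_local)
    {
        if (thd->killed)
        {
            error = true;    /* stop current execution on KILL */
            break;           /* tables dropped so far are binlogged */
        }
        /* ... ha_delete_table() and per-table bookkeeping ... */
    }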
If there are multiple row versions in InnoDB, reading one row via the
primary key may have O(N) complexity, and reading via secondary keys
may have O(N^2) complexity.
The problem occurs when there are many pending versions of the same
row, meaning that the primary key is the same, but a secondary key is
different. The slowdown occurs when the secondary index is
traversed. This patch creates a helper class for the function
row_sel_get_clust_rec_for_mysql() which can remember and re-use
cached_clust_rec & cached_old_vers so that rec_get_offsets() does not
need to be called over and over for the clustered record.
Corrections by Kevin Lewis <kevin.lewis@oracle.com>
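The caching idea, sketched with illustrative member names (the real helper
wraps row_sel_get_clust_rec_for_mysql()):

    /* Remember the last clustered-index record and the offsets computed
       for it, so that revisiting the same record while walking old
       versions does not call rec_get_offsets() again. */
    struct Clust_rec_cache
    {
        const rec_t* cached_clust_rec = nullptr;
        rec_offs*    cached_offsets   = nullptr;

        bool hit(const rec_t* rec) const { return rec == cached_clust_rec; }

        void store(const rec_t* rec, rec_offs* offsets)
        {
            cached_clust_rec = rec;
            cached_offsets   = offsets;
        }
    };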
MDEV-20341 Unstable innodb.innodb_bug14704286
Removed a test that exercised the ability to interrupt a long query
which is no longer long.
Exclude SELECT and INSERT SELECT from vers_set_hist_part(). We cannot
likewise exclude REPLACE SELECT because it may REPLACE into itself
(and REPLACE generates history).
INSERT also does not generate history, but there is a history
modification setting that this might interfere with.
- Include the valgrind suppressions from the FB upstream
- Use HAVE_Valgrind, not HAVE_Purify (like the rest of MariaDB code does)
The call to DisownData() is now actually disabled under Valgrind
Starting with MDEV-12288 in MariaDB Server 10.3,
the transaction identifiers on records will be reset on purge.
Because purge might or might not run to completion before shutdown,
it could happen that the bogus transaction identifier that our
test is writing will be reset by purge after restart, and the
expected warning message on SELECT will fail to appear.
We resolve the race condition by ensuring that purge runs to
completion before the shutdown.
Use DEBUG_SYNC to hang the execution at the interesting point,
and then kill and restart the server externally. This will work
also with Valgrind. DBUG_SUICIDE() causes Valgrind to hang,
and it could also cause uninteresting reports about memory leaks.
While we are at it, let us clean up innodb.innodb_bulk_create_index_debug
so that it will actually test the desired functionality also in future
versions (with instant ADD COLUMN and DROP COLUMN) and avoid
some unnecessary restarts.
We are adding two DEBUG_SYNC points for ALTER TABLE, because there were
none that would be executed right before ha_commit_trans().
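For illustration, such a sync point would look like this (the name here is
made up, not necessarily what the patch adds):

    /* A sync point executed right before the transaction commit of
       ALTER TABLE, so a test can pause the DDL exactly there. */
    DEBUG_SYNC(thd, "alter_table_before_commit"); /* hypothetical name */
    if (ha_commit_trans(thd, /* all = */ true))
        goto rollback;                            /* illustrative label */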
Galera threads were not registered with performance schema, and
pthread_create was used where mysql_thread_create should have been
used.
Added a test case to verify that the current Galera performance schema
instrumentation works.
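A sketch of the change pattern (the PSI key name is hypothetical):

    /* Before: the thread was invisible to performance schema.
       pthread_create(&thread_id, &attr, worker_func, arg); */

    /* After: create the thread through the PSI wrapper so that it is
       registered with performance schema under the given key. */
    mysql_thread_create(key_wsrep_worker,     /* hypothetical PSI key */
                        &thread_id, &attr, worker_func, arg);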
Skip the test on big-endian systems.
In MariaDB Server 10.0 and 10.1 (as well as MySQL 5.6),
the implementation of innodb_checksum_algorithm=crc32
wrongly assumes little-endian byte order.
MDEV-17614 flags INSERT…ON DUPLICATE KEY UPDATE unsafe for statement-based
replication when there are multiple unique indexes. This correctly fixes
something whose attempted fix in MySQL 5.7
in mysql/mysql-server@c93b0d9a97
caused lock conflicts. That change was reverted in MySQL 5.7.26
in mysql/mysql-server@066b6fdd43
(with a substantial amount of other changes).
In MDEV-17073 we already disabled the unfortunate MySQL change when
statement-based replication was not being used. Now, thanks to MDEV-17614,
we can actually remove the change altogether.
This reverts commit 8a346f31b9 (MDEV-17073)
and mysql/mysql-server@c93b0d9a97 while
keeping the test cases.
- mysqltest didn't free read_command_buf
- wait_for_slave_param would write different things to the log depending
on whether valgrind was used.
- Table open cache should not write the initial variable value, as it
can depend on the configuration or on whether valgrind is used
- A variable in GetResult was used uninitialized
initialize_readline() is called with the full pathname of the executable,
which sets rl_readline_name to that value.
rl_readline_name is expected to be initialized with a static name that
does not depend on the file name at all. This is needed for setting
custom hotkeys in .inputrc.
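A minimal sketch of the intended initialization, assuming GNU readline:

    #include <stdio.h>
    #include <readline/readline.h>

    static void initialize_readline()
    {
        /* A static name, independent of argv[0], lets users scope custom
           key bindings in ~/.inputrc, e.g.:  $if mysql ... $endif */
        rl_readline_name = "mysql";
    }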
1. Fix DBUG_ASSERT(!table->pos_in_locked_tables) in tc_release_table();
2. Fix access of a prematurely freed MDL_ticket: don't close the ticket if the table was not closed;
3. Fix deadlock after an erroneous ALTER.
mysql_alter_table() leaves a dirty table->m_needs_reopen on the error
exit path, which is then incorrectly treated by mysql_lock_tables().
For MODE_SIMULTANEOUS_ASSIGNMENT it is required to move the field
offsets back from record[1] to record[0]. The 'continue' in the warning
branch skipped the rfield->move_field_offset() call.
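A sketch of the corrected loop shape (the value-storing helper is
hypothetical):

    /* Compute values against record[1], but move the field offset back
       on every path, including the warning path that 'continue's. */
    my_ptrdiff_t diff = table->record[1] - table->record[0];
    for (Field** fp = fields; *fp; fp++)
    {
        Field* rfield = *fp;
        rfield->move_field_offset(diff);       /* point into record[1] */
        bool warned = store_new_value(rfield); /* hypothetical helper */
        rfield->move_field_offset(-diff);      /* restore on all paths */
        if (warned)
            continue;                          /* offset already restored */
    }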
... produces "bytes lost" warnings
When rocksdb_validate_update_cf_options() returns an error,
the update won't happen.
Free the copy of the string in this case.
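A sketch of the fix pattern (variable names illustrative):

    /* The validate callback made a copy of the option string. If
       validation fails, the update callback that would consume the copy
       never runs, so free the copy here instead of leaking it. */
    if (validation_failed)
    {
        my_free(options_copy);    /* nothing else will free it */
        return HA_EXIT_FAILURE;   /* the update won't happen */
    }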
Problem:-
When mysql executes INSERT ... ON DUPLICATE KEY UPDATE, the storage engine
checks if the inserted row would generate a duplicate key error. If yes, it
returns the existing row to mysql, mysql updates it and sends it back to the
storage engine. When the table has more than one unique or primary key, this
statement is sensitive to the order in which the storage engine checks the
keys. Depending on this order, the storage engine may return different rows
to mysql, and hence mysql can update different rows. The order in which the
storage engine checks keys is not deterministic. For example, InnoDB checks
keys in an order that depends on the order in which the indexes were added
to the table: the first added index is checked first. So if master and slave
have added their indexes in different orders, the slave may go out of sync.
Solution:-
Make INSERT ... ON DUPLICATE KEY UPDATE unsafe when using statement or mixed
binlog format and the table has more than one unique key, with two
exceptions:
1. An auto-increment key is not counted, because InnoDB will take a gap lock
for the failed insert and a concurrent insert will get the next increment
value. However, if the user supplies the auto-increment value, it can be
unsafe.
2. Only unique keys for which insertion is performed are counted.
This patch also addresses bug #72921.
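The counting rule, sketched (the helper for exception 2 is hypothetical;
the real check lives in the binlog statement-safety logic):

    /* Count the unique keys that this INSERT can actually hit, applying
       the two exceptions, and flag the statement as unsafe for
       statement-based binlogging when more than one remains. */
    static bool duplicate_key_update_is_unsafe(const TABLE* table)
    {
        uint counted = 0;
        for (uint k = 0; k < table->s->keys; k++)
        {
            const KEY& key = table->key_info[k];
            if (!(key.flags & HA_NOSAME))
                continue;                    /* only unique keys matter */
            if (k == table->s->next_number_index)
                continue;                    /* exception 1: auto-inc key */
            if (!insert_touches_key(table, key)) /* hypothetical helper */
                continue;                    /* exception 2 */
            counted++;
        }
        return counted > 1;
    }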