Function `signal_waiters` assigned the `m_committed_seqno` variable outside
of the mutex lock, which caused incorrect behavior of
WSREP_SYNC_WAIT_UPTO_GTID. Fixed by moving the assignment inside the lock.
Also added handling of out-of-memory conditions; an error is now reported.
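A minimal sketch of the locking pattern behind the fix, using generic
std::mutex based names rather than the real wsrep code; the point is that
m_committed_seqno is only written while the mutex that the waiters use is
held.

  #include <mutex>
  #include <cstdint>

  class seqno_waiters
  {
    std::mutex m_mutex;
    uint64_t   m_committed_seqno= 0;

  public:
    void signal_waiters(uint64_t seqno)
    {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_committed_seqno= seqno;  // assignment now happens under the lock
      // ... wake up threads waiting in WSREP_SYNC_WAIT_UPTO_GTID ...
    }
  };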
Remove the hard-coded seqno value and read the seqno directly from the
current node state.
Also added support for MAP_SYNC. It makes it possible to achieve decent
performance with DAX devices even when libpmem is unavailable.
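A sketch of a MAP_SYNC mapping with a fallback to a plain shared mapping,
assuming a Linux build with <sys/mman.h>; the helper name and the
file-descriptor handling are illustrative, not the actual server code.

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <stddef.h>

  static void *map_for_dax(int fd, size_t size)
  {
  #if defined(MAP_SYNC) && defined(MAP_SHARED_VALIDATE)
    /* On a DAX file system, MAP_SYNC guarantees that ordinary CPU stores
       followed by cache flushes are durable, avoiding msync() round trips
       even without libpmem. */
    void *p= mmap(NULL, size, PROT_READ | PROT_WRITE,
                  MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p != MAP_FAILED)
      return p;
    /* The kernel or file system rejected MAP_SYNC; fall back below. */
  #endif
    return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  }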
Fixed the Windows version of my_msync(): according to the documentation,
FlushViewOfFile() may return before the flush has actually completed. It is
advised to issue FlushFileBuffers() after FlushViewOfFile().
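A minimal sketch of that flush sequence, assuming a Windows build with
<windows.h>; the wrapper name and the way the file handle is passed are
illustrative, not the actual my_msync() signature.

  #include <windows.h>

  static BOOL flush_mapped_view(HANDLE file, void *addr, SIZE_T length)
  {
    /* FlushViewOfFile() only initiates write-back of the dirty pages. */
    if (!FlushViewOfFile(addr, length))
      return FALSE;
    /* FlushFileBuffers() waits until the data has actually reached disk. */
    return FlushFileBuffers(file);
  }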
ha_innobase::commit_inplace_alter_table(): After
ALTER_STORED_COLUMN_ORDER, ensure that the virtual column metadata
will be reloaded even when the table is not being rebuilt.
Fix:
===
Add "REPLICA" as an alias for "SLAVE". All commands which use "SLAVE" keyword
can be used with new alias "REPLICA".
List of commands:
On Master:
=========
SHOW REPLICA HOSTS <--> SHOW SLAVE HOSTS
Privilege "SLAVE" <--> "REPLICA"
On Slave:
=========
START SLAVE <--> START REPLICA
START ALL SLAVES <--> START ALL REPLICAS
START SLAVE UNTIL <--> START REPLICA UNTIL
STOP SLAVE <--> STOP REPLICA
STOP ALL SLAVES <--> STOP ALL REPLICAS
RESET SLAVE <--> RESET REPLICA
RESET SLAVE ALL <--> RESET REPLICA ALL
SLAVE_POS <--> REPLICA_POS
We need to release the global system variables mutex before
doing wsrep_init to avoid a race with a subsequent SHOW STATUS, and
we need to save the wsrep_on value because it is changed by wsrep_init.
Added a test case.
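A minimal sketch of the pattern under generic names (std::mutex stand-ins
for LOCK_global_system_variables and a stubbed wsrep_init); it only shows
saving the flag, dropping the mutex around the heavy call, and re-acquiring
it afterwards.

  #include <mutex>

  std::mutex global_sysvar_mutex;  // stand-in for LOCK_global_system_variables
  bool wsrep_on= true;             // stand-in for the wsrep_on system variable

  void wsrep_init_stub() { /* may change wsrep_on internally */ }

  void reinit_wsrep_provider()
  {
    std::unique_lock<std::mutex> lock(global_sysvar_mutex);
    bool saved_wsrep_on= wsrep_on; // save before wsrep_init() changes it
    lock.unlock();                 // avoid racing with a concurrent SHOW STATUS
    wsrep_init_stub();             // heavy initialization without the mutex held
    lock.lock();
    wsrep_on= saved_wsrep_on;      // restore the saved value
  }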
The column INFORMATION_SCHEMA.INNODB_MUTEXES.NAME is not populated ever since
commit 2e814d4702 applied the InnoDB changes from
MySQL 5.7.9 to MariaDB Server 10.2.2.
Since the same commit, the view has only been providing information about
rw_lock_t, not about any mutexes.
For now, let us convert the source code file name and line number of
the rw_lock_t creation into a name. A better option in the future might
be to store the information somewhere where it can be looked up by
mysql_pfs_key_t, and possibly to remove the CREATE_FILE and CREATE_LINE
columns.
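A tiny sketch of converting a creation file name and line number into such a
name; the helper name, buffer handling and output format are illustrative
assumptions, not the actual InnoDB code.

  #include <cstddef>
  #include <cstdio>
  #include <cstring>

  /* Build a name such as "buf0buf.cc:1423" from the rw_lock_t creation site. */
  static void make_latch_name(char *buf, size_t size,
                              const char *create_file, unsigned create_line)
  {
    const char *base= strrchr(create_file, '/');
    base= base ? base + 1 : create_file;   // keep only the base file name
    snprintf(buf, size, "%s:%u", base, create_line);
  }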
Problem:-
When we do a bulk insert with more rows than
MI_MIN_ROWS_TO_DISABLE_INDEXES (100), we try to disable the indexes to
speed up the insert. But the current logic also disables the long unique
indexes.
Solution:- In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH) we do not disable that index.
This commit also refactors the mi_disable_indexes_for_rebuild function.
Since this function is called in only one place, it is inlined into
start_bulk_insert.
mi_clear_key_active is added to myisamdef.h because it is now also used
in the ha_myisam.cc file.
(The same is done for the Aria storage engine.)
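A simplified, self-contained sketch of the check; the key and share
structures below are hypothetical stand-ins, not the real MyISAM definitions
(which use MI_KEYDEF, mi_clear_key_active() and friends).

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  enum key_algorithm { KEY_ALG_BTREE, KEY_ALG_LONG_HASH /* HA_KEY_ALG_LONG_HASH */ };

  struct key_def { key_algorithm algorithm; };

  struct share_def
  {
    std::vector<key_def> keys;
    uint64_t key_active= ~0ULL;                       // one bit per enabled key

    void clear_key_active(size_t i) { key_active&= ~(1ULL << i); }
  };

  /* Disable indexes for the bulk insert, but keep long unique (long hash)
     keys enabled, because they are needed to detect duplicates. */
  static void disable_indexes_for_bulk_insert(share_def &share)
  {
    for (size_t i= 0; i < share.keys.size(); i++)
      if (share.keys[i].algorithm != KEY_ALG_LONG_HASH)
        share.clear_key_active(i);
  }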
See also original report:
http://bugs.debian.org/946671
Using mysqlhotcopy, the following error occurs:
DBD::mysql::db do failed: You can't use locks with log tables at
/usr/bin/mysqlhotcopy line 545.
Author:
Paul Szabo psz@maths.usyd.edu.au http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics, University of Sydney, Australia
Race condition when innodb_lock_wait_timeout (default 50 seconds)
is exceeded for the 'send update', but information_schema.innodb_lock_waits
still sees this wait, or it may exit by timeout. May occur on an overloaded
host.
fil_space_encrypt(): Remove the debug check that decrypts the
just encrypted page. We are exercising the decryption of encrypted
pages enough via --suite=encryption,mariabackup. It is a waste of
computing resources to decrypt every page immediately after encrypting it.
The redundant check had been added in
commit 2bedc3978b (MDEV-9931).
In commit 0e5a4ac253 (MDEV-15562)
we introduced a bogus debug check failure that does not affect
the correctness of the release build.
With a fixed-length PRIMARY KEY, we do not have to recompute
the rec_get_offsets() after restarting the mini-transaction,
because the offsets of DB_TRX_ID,DB_ROLL_PTR are not going
to change.
row_undo_mod_clust(): Invoke rec_offs_make_valid() to keep the
debug check in page_zip_write_trx_id_and_roll_ptr() happy.
The scenario to reproduce this bug should be rather unlikely:
In the time frame when row_undo_mod_clust() has committed its
first mini-transaction and has not yet started the next one,
another mini-transaction must do something that causes the page
to be reorganized, split or merged.
Fixed a bug introduced in MDEV-11345: the server did not start if
non-English error messages were set in the startup parameters.
Added lc_messages=de_DE option into an existing test case.
Variable `wsrep_new_cluster` should be set to false after `wsrep_init_startup`.
The problem was that, when mysqldump is used as the SST method, this was done
before `wsrep_init_startup`, so the option wsrep-new-cluster did not have any
effect.
Support for Galera GTID consistency through the cluster. All nodes in the
cluster should have the same GTID for replicated events originating from the
cluster.
Commands originating from the cluster need to contain a sequential WSREP GTID
seqno. Manual settings of gtid_seq_no=X are ignored.
In a master-slave scenario where the master is a non-Galera node, the
replicated GTID is preserved on all nodes.
To achieve this, domain_id, server_id and the seqnos should be the same on
all nodes. The node that bootstraps the cluster therefore sends its domain_id
and server_id to the other nodes, and this combination is used to write the
GTID for events that are replicated inside the cluster.
Cluster nodes that execute non-replicated events will have a different GTID
than the replicated ones; the difference is visible in the domain part of the
GTID. With wsrep_gtid_domain_id you can set the domain_id for the WSREP
cluster.
The functions WSREP_LAST_WRITTEN_GTID, WSREP_LAST_SEEN_GTID and
WSREP_SYNC_WAIT_UPTO_GTID now work with the "native" GTID format.
Fixed the Galera tests to reflect these changes.
Add a variable to manually update the WSREP GTID seqno in the cluster.
Added a variable to manipulate and change the WSREP GTID seqno. The next
command originating from the cluster on the same thread will use the set
seqno, and the cluster should change its internal counter to that value.
The behavior is the same as using @@gtid_seq_no for a non-WSREP transaction.
Starting with commit 373443903b
we would invoke memcmp() unconditionally, even if the length is zero.
But, a call to memcmp() is undefined if any parameter is a null pointer,
even if the length is zero.
In the following tests, a null pointer is being passed to the comparison:
vcol.vcol_keys_innodb gcol.gcol_keys_innodb main.func_group_innodb
innodb.innodb_bug53592
cmp_data(): Keep WITH_UBSAN happy and avoid potential future bugs
in optimized builds, like the one addressed by
commit fc168c3a5e (MDEV-15587).
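A minimal, self-contained illustration of the guard; cmp_data() itself is not
reproduced here, only the zero-length short-circuit that avoids passing a
null pointer to memcmp().

  #include <cstring>

  static int compare_bytes(const void *a, const void *b, size_t len)
  {
    /* memcmp() with a null pointer is undefined behaviour even when len is
       zero, so a zero-length comparison must not reach the call at all. */
    if (len == 0)
      return 0;
    return memcmp(a, b, len);
  }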
Problem:
-------
Accessing a member within the 'xid_count_per_binlog' structure results in the
following error when 'UBSAN' is enabled.
member access within address 0xXXX which does not point to an object of type
'xid_count_per_binlog'
Analysis:
---------
The problem appears to be that no constructor for 'xid_count_per_binlog' is
being called, and thus the vtable will not be initialized.
Fix:
---
Defined a parameterized constructor for the 'xid_count_per_binlog' class.
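A simplified, hypothetical reconstruction of the failure mode and the fix: a
class with a virtual member whose storage comes from malloc() has no
initialized vtable until a constructor actually runs, so the parameterized
constructor must be invoked (here via placement new) before the object is
used. The members below are illustrative, not the real server definition.

  #include <cstdlib>
  #include <new>

  struct xid_count_per_binlog_like
  {
    char *binlog_name;
    int   binlog_name_len;
    long  xid_count;

    /* Parameterized constructor: initializes the members and, because of
       the virtual destructor, the vtable as well. */
    xid_count_per_binlog_like(char *name, int len)
      : binlog_name(name), binlog_name_len(len), xid_count(0) {}

    virtual ~xid_count_per_binlog_like() {}
  };

  int main()
  {
    void *raw= malloc(sizeof(xid_count_per_binlog_like));
    /* Placement new runs the constructor in the malloc'ed storage; without
       it, any member access through the pointer is undefined behaviour and
       is what UBSAN reports. */
    xid_count_per_binlog_like *e=
      new (raw) xid_count_per_binlog_like(nullptr, 0);
    e->~xid_count_per_binlog_like();
    free(raw);
    return 0;
  }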
InnoDB crash recovery used a special type of mem_heap_t that
allocates backing store from the buffer pool. That incurred
a significant overhead, leading to underutilization of memory,
and limiting the maximum contiguous allocated size of a log record.
recv_sys_t::blocks: A linked list of buf_block_t that are allocated
by buf_block_alloc() for redo log records. Replaces recv_sys_t::heap.
We repurpose buf_block_t::unzip_LRU for linking the elements.
recv_sys_t::max_log_blocks: Renamed from recv_n_pool_free_frames.
recv_sys_t::max_blocks(): Accessor for max_log_blocks.
recv_sys_t::alloc(): Allocate memory from the current recv_sys_t::blocks
element, or allocate another block. In debug builds, various free()
member functions must be invoked, because we repurpose
buf_page_t::buf_fix_count for tracking allocations.
recv_sys_t::free_corrupted_page(): Renamed from recv_recover_corrupt_page()
recv_sys_t::is_memory_exhausted(): Renamed from recv_sys_heap_check()
recv_sys_t::pages and its elements are allocated directly by the
system memory allocator.
recv_parse_log_recs(): Remove the parameter available_memory.
We rename some variables 'store_to_hash' to 'store', because
recv_sys.pages is not actually a hash table.
This is joint work with Thirunarayanan Balathandayuthapani.
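A generic, self-contained sketch of the allocation pattern described above:
carve allocations out of the most recently added fixed-size block and append
a new block when the current one is full. The real recv_sys_t::alloc() uses
buf_block_t objects obtained from buf_block_alloc(); the types, block size
and the omission of alignment and oversize handling here are simplifications.

  #include <cstddef>
  #include <list>

  class block_list_allocator
  {
    static const size_t BLOCK_SIZE= 16384;   // stand-in for the page size

    struct block
    {
      size_t used= 0;                        // bytes handed out from data[]
      unsigned char data[BLOCK_SIZE];
    };

    std::list<block*> blocks;                // analogous to recv_sys_t::blocks

  public:
    void *alloc(size_t len)
    {
      /* Use the most recent block if it still has room; otherwise append a
         new block, as recv_sys_t::alloc() does with buf_block_alloc(). */
      if (blocks.empty() || blocks.back()->used + len > BLOCK_SIZE)
        blocks.push_back(new block());
      block *b= blocks.back();
      void *p= b->data + b->used;
      b->used+= len;
      return p;
    }

    ~block_list_allocator()
    {
      for (block *b : blocks)
        delete b;
    }
  };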
Move tokuftdump and tokuft_logprint man pages to storage/tokudb.
The man pages are now part of the tokudb-engine CMake component. This change
is mostly for RPM & DEB based packaging generated through CMake & CPack.
Debian upstream already handles this change via the custom scripts in debian/
Problem:
=======
The problem is that InnoDB doesn't add the table to fts_slots if DROP TABLE
fails. InnoDB marks the table as being in fts_slots while processing a sync
message, so the subsequent ALTER statement assumes that the table is in the
queue and tries to remove it, but InnoDB can't find the table in fts_slots.
Solution:
=========
i) Remove in_queue in fts_t while processing the fts sync message.
ii) Add the table to fts_slots when DROP TABLE fails.