Rewriting the GRANT/REVOKE grammar to make more use of the bison stack and to follow the Sql_cmd_ style
1. Removing a few members from LEX:
- uint grant, grant_to_col, which_columns
- List<LEX_COLUMN> columns
- bool all_privileges
2. Adding classes Grant_object_name, Lex_grant_object_name
3. Adding classes Grant_privilege, Lex_grant_privilege
4. Adding struct Lex_column_list_privilege_st, class Lex_column_list_privilege
5. Rewriting the GRANT/REVOKE grammar to use new classes and pass them through
bison stack (rather than directly access LEX members)
6. Adding classes Sql_cmd_grant* and Sql_cmd_revoke*,
changing GRANT/REVOKE to use LEX::m_sql_cmd (see the sketch after this list).
7. Adding the "sp_handler" grammar rule and removing some duplicate grammar
for GRANT/REVOKE for different kinds of SP objects.
8. Adding a new rule comma_separated_ident_list, reusing it in:
- with_column_list
- column_list_privilege
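For illustration, a minimal standalone sketch of the Sql_cmd_ style that
items 5 and 6 refer to: the grammar builds a command object and stores it
in LEX::m_sql_cmd, and execution later dispatches through a virtual
execute() method. The types below are simplified stand-ins, not the
actual server declarations.

  #include <memory>

  struct THD;                              // connection state, opaque here

  struct Sql_cmd
  {
    virtual ~Sql_cmd()= default;
    virtual bool execute(THD *thd)= 0;     // returns true on error
  };

  struct Sql_cmd_grant : Sql_cmd
  {
    bool execute(THD *) override
    {
      // ... perform the GRANT using members collected by the parser
      return false;
    }
  };

  struct LEX_sketch
  {
    std::unique_ptr<Sql_cmd> m_sql_cmd;    // set by the GRANT/REVOKE rules
  };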
ut_align_down(): Preserve the const qualifier. Use C++ casts.
ha_delete_hash_node(): Correct an assertion expression.
fil_page_get_type(): Perform an assumed-aligned read (see the sketch after this list).
page_align(): Preserve the const qualifier. Assume (some) alignment.
page_get_max_trx_id(): Check the index page type.
page_header_get_field(): Perform an assumed-aligned read.
page_get_autoinc(): Perform an assumed-aligned read.
page_dir_get_nth_slot(): Perform an assumed-aligned read.
Preserve the const qualifier.
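A minimal sketch of the "assumed-aligned read" technique, assuming
GCC/Clang; the helper name is illustrative, and the real helpers such as
fil_page_get_type() and page_header_get_field() also convert from
InnoDB's big-endian field encoding.

  #include <cstdint>

  // The caller guarantees alignment; the builtin lets the optimizer
  // emit a single aligned 16-bit load instead of two byte loads.
  static inline uint16_t aligned_read16(const void *ptr)
  {
    const uint16_t *p= static_cast<const uint16_t*>
      (__builtin_assume_aligned(ptr, 2));
    return *p;
  }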
When using LTO, the compiler may optimize away stack variables that are
passed to check_stack_overrun() as the argument buf. That prevents
proper stack overrun detection.
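A minimal standalone sketch of the address-based check that this
affects; the names and the 292 KiB size are illustrative, and the
pointer arithmetic is the usual stack-depth approximation rather than
strictly portable C++.

  #include <cstddef>

  static char *thread_stack_base;          // recorded at thread start
  static size_t thread_stack_size= 292 * 1024;

  // 'buf' must point into the caller's live stack frame.  If the
  // optimizer eliminates the variable whose address is passed here,
  // the distance no longer reflects the caller's stack usage.
  bool stack_overrun_sketch(long margin, const char *buf)
  {
    size_t used= size_t(thread_stack_base - buf);  // stack grows downward
    return used + size_t(margin) >= thread_stack_size;
  }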
Use bit-fields for some mtr_t members to improve locality of reference.
Because mtr_t is never shared between threads, there are no considerations
regarding concurrent access.
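As a sketch (the member names below are illustrative stand-ins, not the
actual mtr_t declaration), bit-fields let several small state members
share one word and thus one cache line:

  #include <cstdint>

  struct mtr_state_sketch
  {
    uint32_t log_mode:2;       // logging mode of the mini-transaction
    uint32_t modifications:1;  // whether any page was modified
    uint32_t made_dirty:1;     // whether a clean page was made dirty
    uint32_t inside_ibuf:1;    // executing inside the change buffer
  };
  // No std::atomic or padding is needed, because an mtr_t is only ever
  // accessed by the thread that owns it.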
Since commit 5e62b6a5e0 (MDEV-16264),
purge_sys_t::stop() no longer waited for all purge activity to stop.
This caused problems on FLUSH TABLES...FOR EXPORT because of
purge running concurrently with the buffer pool flush.
The assertion at the end of buf_flush_dirty_pages() could fail.
The fix, implemented by Vladislav Vaintroub, aims to eliminate race
conditions when stopping or resuming purge (a sketch of the
disable()/enable() idea follows this list):
waitable_task::disable(): Wait for the task to complete, then replace
the task callback function with noop.
waitable_task::enable(): Restore the original task callback function
after disable().
purge_sys_t::stop(): Invoke purge_coordinator_task.disable().
purge_sys_t::resume(): Invoke purge_coordinator_task.enable().
purge_sys_t::running(): Add const qualifier, and clarify the comment.
The purge coordinator task will remain active as long as any purge
worker task is active.
purge_worker_callback(): Assert purge_sys.running().
srv_purge_wakeup(): Merge with the only caller purge_sys_t::resume().
purge_coordinator_task: Use static linkage.
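A minimal standalone sketch of the disable()/enable() idea, not the
actual tpool implementation; all names are illustrative.

  #include <condition_variable>
  #include <functional>
  #include <mutex>

  class waitable_task_sketch
  {
    std::mutex m;
    std::condition_variable cv;
    int running= 0;
    std::function<void()> callback, original;
  public:
    explicit waitable_task_sketch(std::function<void()> f)
      : callback(f), original(f) {}

    void execute()                     // called by a worker thread
    {
      std::unique_lock<std::mutex> lk(m);
      running++;
      std::function<void()> f= callback;
      lk.unlock();
      f();                             // run outside the mutex
      lk.lock();
      if (!--running)
        cv.notify_all();
    }

    void disable()                     // wait, then neutralize the task
    {
      std::unique_lock<std::mutex> lk(m);
      cv.wait(lk, [this] { return running == 0; });
      callback= [] {};
    }

    void enable()                      // restore the original callback
    {
      std::lock_guard<std::mutex> lk(m);
      callback= original;
    }
  };

Because disable() waits for any in-flight execution under the same mutex
that execute() uses for bookkeeping, a disabled task can still be
scheduled safely; it merely runs the no-op.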
srv_export_innodb_status(): While gathering
innodb_mem_adaptive_hash, acquire btr_search_latches[i]
in order to prevent a race condition with buffer pool resizing.
Release memory as soon as redo log records are processed.
Because the memory allocation and deallocation of parsed redo log
records must be protected by recv_sys.mutex, it is better to avoid
using a std::atomic field for bookkeeping.
buf_page_t::access_time: Keep track of the recv_sys.pages record
allocations. The most significant 16 bits will count allocated
blocks (which were previously counted by buf_page_t::buf_fix_count
in the debug version), and the least significant 16 bits indicate
the number of allocated bytes in the block (which was previously
managed in buf_block_t::modify_clock), which must be a positive
number, up to innodb_page_size. The byte offset 65536 is represented
as the value 0 (see the packing sketch below).
recv_recover_page(): Let the caller erase the log.
recv_validate_tablespace(): Acquire recv_sys_t::mutex.
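A sketch of the 16+16-bit bookkeeping described above for
buf_page_t::access_time; the helper names are illustrative.

  #include <cstdint>

  // High 16 bits: number of allocated blocks.  Low 16 bits: bytes used
  // in the current block, 1..65536, with 65536 encoded as 0.
  static inline uint32_t pack(uint16_t blocks, uint32_t bytes)
  {
    return uint32_t(blocks) << 16 | (bytes & 0xffff);
  }
  static inline uint32_t bytes_used(uint32_t packed)
  {
    uint32_t b= packed & 0xffff;
    return b ? b : 65536;              // 0 stands for the full 65536
  }
  static inline uint16_t blocks_allocated(uint32_t packed)
  {
    return uint16_t(packed >> 16);
  }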
row_log_table_get_pk_old_col(): When replacing a NULL value for a
column of the primary key being added, look up the correct
default value, even if columns had been instantly reordered or
dropped earlier. This ought to have been broken ever since
commit 0e5a4ac253 (MDEV-15562).
The function `signal_waiters` assigned the `m_committed_seqno` variable
outside of the mutex lock, which caused incorrect behavior of
WSREP_SYNC_WAIT_UPTO_GTID. Fixed by moving the assignment inside the
lock. Also added handling of OOM, so that an error is now reported.
Remove the hard-coded seqno value and read the seqno directly from the
current node state.
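A minimal standalone sketch of the fixed pattern, assuming a
condition-variable-based waiter; the names are illustrative.

  #include <condition_variable>
  #include <cstdint>
  #include <mutex>

  struct commit_monitor_sketch
  {
    std::mutex m;
    std::condition_variable cv;
    uint64_t m_committed_seqno= 0;

    void signal_waiters(uint64_t seqno)
    {
      {
        std::lock_guard<std::mutex> lk(m);
        m_committed_seqno= seqno;      // was previously assigned unlocked
      }
      cv.notify_all();
    }

    void wait_upto(uint64_t seqno)     // the SYNC_WAIT_UPTO side
    {
      std::unique_lock<std::mutex> lk(m);
      cv.wait(lk, [&] { return m_committed_seqno >= seqno; });
    }
  };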
Also added support for MAP_SYNC. It makes it possible to achieve decent
performance with DAX devices even when libpmem is unavailable.
Fixed the Windows version of my_msync(): according to the documentation,
FlushViewOfFile() may return before the flush is actually completed, so
it is advised to issue FlushFileBuffers() after FlushViewOfFile().
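A sketch of the fixed flush sequence on Windows; error handling is
simplified and the function name is illustrative, but FlushViewOfFile()
and FlushFileBuffers() are the documented Win32 calls.

  #ifdef _WIN32
  #include <windows.h>

  // FlushViewOfFile() queues the dirty mapped pages for writing but may
  // return before they reach the disk; FlushFileBuffers() then waits
  // for the data to become durable.
  static int my_msync_sketch(HANDLE file, void *addr, size_t len)
  {
    if (!FlushViewOfFile(addr, len))
      return -1;
    if (!FlushFileBuffers(file))
      return -1;
    return 0;
  }
  #endif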
ha_innobase::commit_inplace_alter_table(): After
ALTER_STORED_COLUMN_ORDER, ensure that the virtual column metadata
will be reloaded also when the table is not being rebuilt.
Fix:
===
Add "REPLICA" as an alias for "SLAVE". All commands which use "SLAVE" keyword
can be used with new alias "REPLICA".
List of commands:
On Master:
=========
SHOW REPLICA HOSTS <--> SHOW SLAVE HOSTS
Privilege "SLAVE" <--> "REPLICA"
On Slave:
=========
START SLAVE <--> START REPLICA
START ALL SLAVES <--> START ALL REPLICAS
START SLAVE UNTIL <--> START REPLICA UNTIL
STOP SLAVE <--> STOP REPLICA
STOP ALL SLAVES <--> STOP ALL REPLICAS
RESET SLAVE <--> RESET REPLICA
RESET SLAVE ALL <--> RESET REPLICA ALL
SLAVE_POS <--> REPLICA_POS
We need to release the global system variables mutex before doing
wsrep_init() to avoid a race with a concurrent SHOW STATUS, and we need
to save the wsrep_on value because it is changed by wsrep_init().
Added a test case.
Problem:-
When we do a bulk insert with more rows than
MI_MIN_ROWS_TO_DISABLE_INDEXES (100), we try to disable the indexes to
speed up the insert. But the current logic also disables the long
unique indexes.
Solution:- In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH), we do not disable it.
This commit also refactors the mi_disable_indexes_for_rebuild function:
since it is called in only one place, it is inlined into
start_bulk_insert.
mi_clear_key_active is added to myisamdef.h because it is now also used
in ha_myisam.cc.
(The same is done for the Aria storage engine.)
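A simplified standalone sketch of the idea; the types below are
stand-ins for the MyISAM ones (MI_KEYDEF, HA_KEY_ALG_LONG_HASH, the key
map bitmap), not the actual declarations.

  #include <cstdint>

  enum key_alg { KEY_ALG_BTREE, KEY_ALG_LONG_HASH };

  struct key_def { key_alg algorithm; };

  static inline void clear_key_active(uint64_t &key_map, unsigned i)
  {
    key_map&= ~(uint64_t(1) << i);     // like mi_clear_key_active()
  }

  // Disable regular indexes for the bulk load, but keep long unique
  // (hash) indexes active so that duplicates are still detected.
  void disable_indexes_for_bulk(const key_def *keys, unsigned n_keys,
                                uint64_t &key_map)
  {
    for (unsigned i= 0; i < n_keys; i++)
      if (keys[i].algorithm != KEY_ALG_LONG_HASH)
        clear_key_active(key_map, i);
  }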
In commit 0e5a4ac253 (MDEV-15562)
we introduced a bogus debug check failure that does not affect
the correctness of the release build.
With a fixed-length PRIMARY KEY, we do not have to recompute
the rec_get_offsets() after restarting the mini-transaction,
because the offsets of DB_TRX_ID,DB_ROLL_PTR are not going
to change.
row_undo_mod_clust(): Invoke rec_offs_make_valid() to keep the
debug check in page_zip_write_trx_id_and_roll_ptr() happy.
The scenario to reproduce this bug should be rather unlikely:
In the time frame when row_undo_mod_clust() has committed its
first mini-transaction and has not yet started the next one,
another mini-transaction must do something that causes the page
to be reorganized, split or merged.
The variable `wsrep_new_cluster` should be set to false after
`wsrep_init_startup`. The problem was that this happened earlier when
mysqldump is used as the SST method, so the option wsrep-new-cluster
did not have any effect.
Support for Galera GTID consistency through the cluster. All nodes in
the cluster should have the same GTID for replicated events originating
from the cluster.
Commands originating from the cluster need to contain a sequential
WSREP GTID seqno; manual setting of gtid_seq_no=X is ignored.
In a master-slave scenario where the master is a non-Galera node, the
replicated GTID is preserved on all nodes.
To achieve this, domain_id, server_id and the seqnos should be the same
on all nodes. The node that bootstraps the cluster sends its domain_id
and server_id to the other nodes, and this combination is used to write
the GTID for events that are replicated inside the cluster.
Cluster nodes that execute non-replicated events will have a different
GTID than the replicated ones; the difference will be visible in the
domain part of the GTID.
With wsrep_gtid_domain_id you can set the domain_id for the WSREP
cluster.
The functions WSREP_LAST_WRITTEN_GTID, WSREP_LAST_SEEN_GTID and
WSREP_SYNC_WAIT_UPTO_GTID now work with the "native" GTID format.
Fixed the Galera tests to reflect these changes.
Add a variable to manually update the WSREP GTID seqno in the cluster.
Add a variable to manipulate and change the WSREP GTID seqno. The next
command originating from the cluster on the same thread will use the
set seqno, and the cluster should change its internal counter to that
value. The behavior is the same as using @@gtid_seq_no for a non-WSREP
transaction.
Starting with commit 373443903b
we would invoke memcmp() unconditionally, even if the length is zero.
But a call to memcmp() is undefined if any parameter is a null pointer,
even if the length is zero.
In the following tests, a null pointer is being passed to the comparison:
vcol.vcol_keys_innodb gcol.gcol_keys_innodb main.func_group_innodb
innodb.innodb_bug53592
cmp_data(): Keep WITH_UBSAN happy and avoid potential future bugs
in optimized builds, like the one addressed by
commit fc168c3a5e (MDEV-15587).
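The guard amounts to the following pattern; safe_cmp is an illustrative
name, not the actual cmp_data() signature.

  #include <cstring>

  // Calling memcmp() with a null pointer is undefined behavior even
  // when the length is zero, so skip the call entirely in that case.
  static inline int safe_cmp(const void *a, const void *b, size_t len)
  {
    return len ? memcmp(a, b, len) : 0;
  }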
InnoDB crash recovery used a special type of mem_heap_t that
allocates backing store from the buffer pool. That incurred
a significant overhead, leading to underutilization of memory,
and limiting the maximum contiguous allocated size of a log record.
recv_sys_t::blocks: A linked list of buf_block_t that are allocated
by buf_block_alloc() for redo log records. Replaces recv_sys_t::heap.
We repurpose buf_block_t::unzip_LRU for linking the elements.
recv_sys_t::max_log_blocks: Renamed from recv_n_pool_free_frames.
recv_sys_t::max_blocks(): Accessor for max_log_blocks.
recv_sys_t::alloc(): Allocate memory from the current recv_sys_t::blocks
element, or allocate another block; a sketch of this bump-allocation
idea appears at the end of this message. In debug builds, various free()
member functions must be invoked, because we repurpose
buf_page_t::buf_fix_count for tracking allocations.
recv_sys_t::free_corrupted_page(): Renamed from recv_recover_corrupt_page()
recv_sys_t::is_memory_exhausted(): Renamed from recv_sys_heap_check()
recv_sys_t::pages and its elements are allocated directly by the
system memory allocator.
recv_parse_log_recs(): Remove the parameter available_memory.
We rename some variables 'store_to_hash' to 'store', because
recv_sys.pages is not actually a hash table.
This is joint work with Thirunarayanan Balathandayuthapani.
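A standalone sketch of the bump-allocation idea behind
recv_sys_t::alloc(); all names and the 4096-byte block size are
illustrative, and the real code tracks allocations in buf_block_t
objects taken from the buffer pool.

  #include <cassert>
  #include <cstddef>
  #include <forward_list>

  struct log_block
  {
    size_t used= 0;
    alignas(8) unsigned char frame[4096];
  };

  class block_arena
  {
    std::forward_list<log_block> blocks;   // newest block at the front
  public:
    void *alloc(size_t len)
    {
      len= (len + 7) & ~size_t(7);         // keep allocations aligned
      assert(len <= sizeof(log_block::frame));
      if (blocks.empty() ||
          blocks.front().used + len > sizeof(log_block::frame))
        blocks.emplace_front();            // current block full: add one
      log_block &b= blocks.front();
      void *p= b.frame + b.used;
      b.used+= len;
      return p;
    }
  };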