With wsrep_gtid_mode=ON, the appropriate commit hooks were not
called in all cases for applied streaming transactions.
As a fix, removed all special handling of the commit order critical
section from Wsrep_high_priority_service and Wsrep_storage_service.
Now the commit order critical section is always entered in ha_commit_trans().
The check for wsrep_run_commit_hook is now done in handler.cc and log.cc.
This makes it explicit that the transaction is an active wsrep
transaction which must go through the commit hooks.
When the chosen execution plan accesses a join table through a range
rowid filter, a quick select to scan this range has to be built. This
quick select is built by a call to SQL_SELECT::test_quick_select().
For this call the function must be restricted to evaluating only
single-index range scans, so a new parameter was added to the function.
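For illustration, a hypothetical join where such a plan can be chosen (not
from the original test case; whether the rowid filter is actually used
depends on the optimizer's cost estimates):

  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT, KEY(a)) ENGINE=InnoDB;
  -- ref access on t1.a can be combined with a rowid filter built from
  -- the single-index range scan on t1.b ("Using rowid filter" in EXPLAIN)
  EXPLAIN SELECT * FROM t2 JOIN t1 ON t1.a = t2.a
          WHERE t1.b BETWEEN 10 AND 20;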
* update system versioning fields before generated columns
* don't presume that ha_write_row() means INSERT. It could still be UPDATE
* use the correct handler in check_duplicate_long_entry_key()
Close table->update_handler in close_thread_tables().
It's not enough to do it in sql_update.cc only, because
sql_insert.cc can also do updates (REPLACE) and even
sql_delete.cc can (DELETE ... FOR PORTION OF), as the sketch below shows.
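For instance (an illustrative table; both statements can internally take the
update path):

  CREATE TABLE t1 (id INT, v VARCHAR(10) UNIQUE,
                   s DATE NOT NULL, e DATE NOT NULL,
                   PERIOD FOR p (s, e));
  -- REPLACE resolves a unique-key conflict, which may be done as an UPDATE
  REPLACE INTO t1 VALUES (1, 'x', '2000-01-01', '2010-01-01');
  -- rows only partially covered by the portion are updated, not deleted
  DELETE FROM t1 FOR PORTION OF p FROM '2001-01-01' TO '2002-01-01';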
When auto-adding a virtual LONG_UNIQUE_HASH_FIELD, fill in
a Virtual_column_info for it, so that fill_alter_inplace_info()
knows we're adding a virtual field (ALTER_ADD_VIRTUAL_COLUMN).
The bug manifested itself when executing a query with a materialized
view/derived table/CTE whose specification was a SELECT query that
contained another materialized derived table, and an impossible
WHERE/HAVING condition was detected for this SELECT.
As soon as such a condition is detected, the join structures of all
derived tables used in the SELECT are destroyed, so optimization
of the queries specifying these derived tables is impossible. Besides,
it's not needed.
In 10.3, optimization of a materialized derived table is performed before
the detection of an impossible WHERE/HAVING condition in the embedding SELECT.
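A minimal query of that shape (an illustrative example, not the original
test case; the UNION forces the inner derived table to be materialized):

  WITH cte AS (
    SELECT a FROM (SELECT 1 AS a UNION SELECT 2) dt  -- materialized derived
    WHERE 1 = 0              -- impossible WHERE detected for this SELECT
  )
  SELECT * FROM cte;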
This fixes a race condition in the server shutdown code: an active
connection could write, or be about to write, to the socket after it
is closed. The previous call to socket shutdown() is fully sufficient
to wake up an idle connection, so close_connection() is obsolete and
dangerous.
Refactored the wsrep patch to not use LOCK_thread_count and COND_thread_count anymore.
This has partially been replaced by using the old LOCK_wsrep_slave_threads mutex.
For waiting on slave thread count changes, a new COND_wsrep_slave_threads condition variable has been added.
Added a LOCK_wsrep_cluster_config mutex to ensure that cluster address changes cannot happen in parallel.
Protected wsrep_slave_threads variable changes with the LOCK_wsrep_cluster_config mutex.
This avoids concurrent slave thread count changes and cluster joining operations.
Fixes according to Teemu's review
The prtype & DATA_LONG_TRUE_VARCHAR flag only plays a role when
converting between InnoDB internal format and the MariaDB SQL layer
row format. Ideally this flag would never have been persisted in the
InnoDB data dictionary.
There were bogus assertion failures when an instant ADD, DROP, or
column reordering was combined with extending a VARCHAR
from less than 256 bytes to more than 255 bytes. Such changes are
allowed starting with MDEV-15563 in MariaDB 10.4.3.
dict_table_t::instant_column(), dict_col_t::same_format(): Ignore
the DATA_LONG_TRUE_VARCHAR flag, because it does not affect the
persistent storage format.
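For example, DDL of the following shape used to hit the bogus assertion
(an illustrative sketch; latin1 is chosen so that VARCHAR(300) needs a
two-byte length, i.e. DATA_LONG_TRUE_VARCHAR):

  CREATE TABLE t (id INT PRIMARY KEY,
                  v VARCHAR(200) CHARACTER SET latin1) ENGINE=InnoDB;
  -- instant ADD combined with extending the VARCHAR across the 255-byte limit
  ALTER TABLE t MODIFY v VARCHAR(300) CHARACTER SET latin1,
                ADD COLUMN c INT, ALGORITHM=INSTANT;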
The problem happened because Item_ident_for_show did not implement val_native().
Solution:
- Removing the class Item_ident_for_show
- Implementing a new method Protocol::send_list_fields() instead,
  which accepts a List<Field> instead of a List<Item> as input.
Now no Item creation is done during mysqld_list_fields().
Adding helper methods to make the code easier to reuse:
- Moved the part of Protocol::send_result_set_metadata() that is
  responsible for sending the metadata of an individual field
  into a new method Protocol_text::store_field_metadata(),
  reusing it in both send_list_fields() and send_result_set_metadata().
- Adding Protocol_text::store_field_metadata_for_list_fields()
Note, this patch also automatically fixed another bug:
MDEV-18685 mysql_list_fields() returns DEFAULT 0 instead of DEFAULT NULL for view columns
The reason for this bug was that Item_ident_for_show::val_xxx() and get_date()
did not check field->is_null() before calling field->val_xxx()/get_date().
Now the default value is correctly sent by Protocol_text::store(Field*).
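The MDEV-18685 case can be seen on a view column (an illustrative schema;
mysql_list_fields() is the C API call that ends up in mysqld_list_fields()
on the server):

  CREATE TABLE t1 (a INT DEFAULT NULL);
  CREATE VIEW v1 AS SELECT a FROM t1;
  -- mysql_list_fields("v1", ...) used to report DEFAULT 0 for column a;
  -- it now correctly reports DEFAULT NULL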
Disabled GCF-437, which relies on an InnoDB redo log size limitation
that does not seem to exist, or has been increased, in MariaDB 10.4.
Require debug sync for mysql-wsrep#215.
Wsrep-lib is now guaranteed to hold the underlying mutex
which is wrapped in the lock object passed to the Wsrep_client_service
interrupted() call. The library part will now take care of
checking the wsrep::transaction specific state, so it is
enough to check the thd->killed state for the result.
The InnoDB DeadlockChecker::check_and_resolve() was missing a
call to wsrep_handle_SR_rollback() in the case when the
transaction running deadlock detection was chosen as victim.
Refined wsrep_handle_SR_rollback() to skip store_globals() calls
if the transaction was BF aborting itself.
Made mysql-wsrep-features#165 more deterministic by waiting until
the update is in progress before sending the next update.
sql_field->key_length was 0 for blob fields when a field was
being added, but it was Field_blob::character_octet_length() on
subsequent ALTER TABLEs (when the Field object in the old table
already existed). This meant mysql_prepare_create_table() couldn't
reliably detect whether the keyseg was a prefix.
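A sketch of the scenario (hypothetical tables):

  CREATE TABLE t1 (b BLOB, KEY (b(20)));
  -- on this subsequent ALTER the Field object for b already exists, so
  -- key_length comes from Field_blob::character_octet_length(); the b(20)
  -- keyseg must still be detected as a prefix
  ALTER TABLE t1 ADD COLUMN c INT;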
This patch implements engine-independent unique hash indexes.
Usage:- A unique HASH index can be created automatically for a blob/varchar/text column whose key
length > handler->max_key_length(),
or it can be explicitly specified.
Automatic Creation:-
CREATE TABLE t1 (a blob unique);
Explicit Creation:-
CREATE TABLE t1 (a int, unique(a) using HASH);
Internal KEY_PART Representations:-
A long unique key_info will have 2 representations.
(Let's understand this with an example: create table t1 (a blob, b blob, unique(a,b));)
1. User Given Representation:- The key_info->key_part array will be similar to what the user has defined.
So in the case of the example it will have 2 key_parts (a, b).
2. Storage Engine Representation:- In this case there will be only one key_part and it will point to the
HASH_FIELD. This key_part is always placed after the user-defined key_parts.
So:-
  User Given Representation      [a] [b] [hash_key_part]
  key_info->key_part ------------^
  Storage Engine Representation  [a] [b] [hash_key_part]
  key_info->key_part --------------------^
table->s->key_info will have the User Given Representation, while table->key_info will have the Storage
Engine Representation. The representations can be changed into each other by calling the
re/setup_keyinfo_hash functions.
Working:-
1. When the user specifies a HASH index or key_length > handler->max_key_length(), one extra vfield
is added in mysql_prepare_create_table() (one for each long unique key), and key_info->algorithm is
set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash key_part are set (like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list of Item_fields;
when an explicit length is given by the user, Item_left is used on the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which creates the hash key from
table->record[0] and then calls ha_index_read_map; if a duplicate hash is found, the rows are compared
field by field.
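Duplicate detection then works for values far beyond max_key_length, e.g.:

  CREATE TABLE t1 (a BLOB UNIQUE);
  INSERT INTO t1 VALUES (REPEAT('x', 10000));
  -- the hash matches and the field-by-field comparison confirms a true
  -- duplicate (not just a hash collision), so this fails with ER_DUP_ENTRY
  INSERT INTO t1 VALUES (REPEAT('x', 10000));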
Debian part. Do not ask to set a root password,
do not create the debian-sys-maint user (but preserve an existing one
on upgrades - user scripts might be relying on it).
Just create an empty /etc/mysql/debian.cnf so that --defaults-file does not fail.
and, again, *don't use thd->clear_error()*
This fixes the main.sp_notembedded failure on various amd64 platforms
(where ER_STACK_OVERRUN_NEED_MORE happens to fire in open_stat_tables()
under Dummy_error_handler).
After FLUSH PRIVILEGES remember if the connection started under
--skip-grant-tables and keep it all-powerful, not a lowly anonymous one.
One could use this connection to reset passwords as needed.
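A sketch of the intended recovery flow (illustrative statements):

  -- server started with --skip-grant-tables; this connection stays
  -- all-powerful even after the grant tables are loaded
  FLUSH PRIVILEGES;
  SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new-password');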
Also fix a crash in SHOW CREATE USER
post-merge changes:
* handle password expiration on old tables like everything else -
make changes in memory, even if they cannot be done on disk
* merge "debug" tests with non-debug tests, they don't use dbug anyway
* only run rpl password expiration in MIXED mode, it doesn't replicate
anything, so no need to repeat it thrice
* restore update_user_table_password() prototype, it should not change
ACL_USER, this is done in acl_user_update()
* don't parse json twice in get_password_lifetime and get_password_expired
* remove LEX_USER::is_changing_password, see if there was any auth instead
* avoid overflow in expiration calculations
* don't initialize Account_options in the constructor, it's bzero-ed later
* don't create ulong sysvars - they're not portable, prefer uint or ulonglong
* misc simplifications