In the function test_if_cheaper_ordering we decide whether using an index is better than
using filesort for ordering. If we choose to do range access, then in test_quick_select we
should make sure that the cost for a table scan is set to DBL_MAX so that it is not picked.
This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- Some innodb options
etc
Changes:
- Don't print out the value of system variables as one can't depend on
  them being constant.
- Don't set global variables to 'default' as the default may not
  be the same as what the test was started with if there was an additional
  option file. Instead, save the original value and restore it at the end of the test.
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity); see the
  sketch after this list.
- Split subselect3.test into subselect3.test and subselect3.inc to
make it easier to set and reset system variables.
- Added .opt files for tests that required specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
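A minimal sketch of the save-and-restore pattern mentioned above (the
variable chosen here is only an example):

  # save the original value instead of relying on 'default'
  SET @save_optimizer_use_condition_selectivity= @@optimizer_use_condition_selectivity;
  SET optimizer_use_condition_selectivity= 4;
  # ... test body ...
  # restore the saved value at the end of the test
  SET optimizer_use_condition_selectivity= @save_optimizer_use_condition_selectivity;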
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
The problem was that the code in maria_extra assumed that there could be
only one table open when doing maria_extra(MA_FORCE_REOPEN).
However, in the case of triggers, there can be multiple copies of
the table open.
Fixed by removing the assert.
The test cases for the MDEV found several independent bugs
in MariaDB server and Aria:
- If a temporary table was marked as crashed, it could never
be deleted.
- Opening of a crashed temporary table gave an error message,
  but the error was never forwarded to the caller, which caused
  an assert() in my_ok()
- init_read_record() did mmap of all temporary tables, which is
probably not a good idea as this area can potentially be
very big. Changed code to only mmap internal temporary tables.
- mmap-ed tables were not unmapped in case of repair/optimize,
  which caused bad data in the table and crashes if the original
  table files were replaced with new ones (as the old mmap
  was still in place). Fixed by removing the mmap in case
  of repair.
- Cleaned up usage of code that disabled mmap in Aria
There were two separate problems:
- Aria pagecache didn't properly handle re-reading of blocks
  that had given errors before (this triggered an assert)
- temporary tables that were opened several times were
  not properly closed in ALTER, REPAIR or OPTIMIZE TABLE
Other things:
- Added a couple of asserts that will make it easier to
find problems like this in the future.
To push an index condition for each join tab, we have to calculate the index condition that can be
pushed and then remove this index condition from the original condition. This is done through the
function make_cond_remainder.
The problem is that make_cond_remainder did not remove the index condition when there was an OR operator.
Fixed this by making make_cond_remainder aware of the OR operator.
Also updated results for multiple test files which were incorrectly updated by the commit e0c1b3f242:
the code which was supposed to remove the condition present in the index
condition was not getting executed when the condition had an OR operator; with AND, the pushed
index condition was getting removed from the WHERE clause.
This problem affects all versions starting from 5.5, but as this is a performance improvement, it is fixed in 10.4.
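A hedged sketch of the kind of query affected (table, column and index names
are only illustrative):

  CREATE TABLE t1 (a INT, b INT, c VARCHAR(10), KEY k1 (a, c));
  # The part of the WHERE clause covered by index k1 can be pushed as an
  # index condition; with this fix the pushed part is also removed from the
  # remaining attached condition even though it contains an OR, so it is not
  # evaluated twice.
  EXPLAIN FORMAT=JSON
  SELECT * FROM t1 WHERE a = 5 AND (c = 'foo' OR c = 'bar') AND b > 10;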
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, use disable_ps_protocol
for the SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
The symptom of the bug was that one got the following in
the aria recovery log:
"Table 'xxx', id 57, has create_rename_lsn (1,0x12dee) more recent than LOGREC_FILE_ID's LSN (1,0x12dc4), ignoring open request"
After this, all future redo entries were marked with
"For table of short id 57, table skipped, so skipping record"
Analysis:
When ending a batch insert, create_rename_lsn for the table
is updated to signal that earlier redo entries for the
table can't be applied. The problem was that future redo
entries were also ignored, as the redo code assumed they were
for the old table.
Fixed by calling translog_dessign_id, which causes
future redo entries to be seen as belonging to the
updated table.
This patch implements an engine-independent unique hash index.
Usage:- A unique HASH index can be created automatically for blob/varchar/text columns whose key
length > handler->max_key_length(),
or it can be explicitly specified.
Automatic Creation:-
Create TABLE t1 (a blob unique);
Explicit Creation:-
Create TABLE t1 (a int , unique(a) using HASH);
Internal KEY_PART Representations:-
Long unique key_info will have 2 representations.
(Let's understand this with an example: create table t1(a blob, b blob, unique(a, b));)
1. User Given Representation:- The key_info->key_part array will be similar to what the user has defined.
So in the case of the example it will have 2 key_parts (a, b).
2. Storage Engine Representation:- In this case there will be only one key_part and it will point to
HASH_FIELD. This key_part will always come after the user-defined key_parts.
So:-
  User Given Representation:       [a] [b] [hash_key_part]
  key_info->key_part --------------^
  Storage Engine Representation:   [a] [b] [hash_key_part]
  key_info->key_part ----------------------^
Table->s->key_info will have the User Given Representation, while table->key_info will have the Storage Engine
Representation. The representations can be converted into each other by calling the re/setup_keyinfo_hash functions.
Working:-
1. When the user specifies a HASH index or key_length > handler->max_key_length(), in mysql_prepare_create_table
one extra vfield is added (for each long unique key), and key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image, values for the hash key_part are set (like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list of Item_fields;
when an explicit length is given by the user, Item_left is used to concatenate the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which will create the hash key from
table->record[0] and then call ha_index_read_map; if we find a duplicate hash, we compare the result
field by field (see the example below).
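A minimal sketch of the user-visible effect described in step 4 (the error
wording is indicative only):

  CREATE TABLE t1 (a BLOB, b BLOB, UNIQUE (a, b) USING HASH);
  INSERT INTO t1 VALUES ('foo', 'bar');
  # The second insert computes the same hash; the matching row is then
  # compared field by field and the statement fails with a duplicate-key error.
  INSERT INTO t1 VALUES ('foo', 'bar');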
The error message was modified.
The TABLE_SHARE::error_table_name() implementation was taken from 10.3,
to be used as the name of the table in this message.
Part of MDEV-5336 Implement LOCK FOR BACKUP
Originally both table metadata lock and global read lock protection
were acquired before getting TABLE from table cache. This will be
reordered in a future commit with MDL_BACKUP_XXX locks so that we
first take the table metadata lock, then get the TABLE from the table cache, and then
acquire an analogue of the global read lock.
This patch simplifies the FLUSH TABLES code, makes FLUSH TABLES take
fewer locks, and also enables the FLUSH TABLES code to be used with backup
locks.
The usage of FLUSH TABLES changes slightly (see the sketch after this list):
- FLUSH TABLES without any arguments will now only close unused tables
  and tables locked by the FLUSH TABLES connection. All unused table
  shares will be closed.
  Tables locked by the FLUSH TABLES connection will be reopened and
  re-locked after all others have stopped using the table (as before).
  If there were no locked tables, then FLUSH TABLES is instant and will
  not cause any waits.
  FLUSH TABLES will not wait for any table that is in use.
- FLUSH TABLES with a table list will ensure that the tables are closed
  before the statement returns. The code now uses only MDL locks and not
  table share versions, which simplifies the code greatly. One visible
  change is that the server will wait for the end of transactions that
  are using the tables. Before, FLUSH TABLES only waited for the statements
  to end.
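A brief sketch of the two forms described above (table names are only
examples):

  # Without arguments: closes unused tables and tables locked by this
  # connection; instant if this connection holds no table locks.
  FLUSH TABLES;
  # With a table list: waits until t1 and t2 are closed, including waiting
  # for the end of transactions that are using them.
  FLUSH TABLES t1, t2;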
Signed-off-by: Monty <monty@mariadb.org>
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
Two bugs in Aria, related to 2-level fulltext indexes:
* REPAIR calculated the key number incorrectly
* CHECK copied the key into last_key too early and
checking the second-level btree was overwriting it
- Made the output aligned in aria_chk -d
- Aria engine error texts are now written instead of "Undefined error"
- When running with --check --force, tables with wrong TRNs but otherwise
  correct are now zerofilled
- Fixed several bugs in check and recovery related to fulltext
- When doing recovery, store highest found TRID in aria_control_file
Before this, the problem was that the SQL level tried to read a record with rnd_pos()
that had already been deleted by the same statement.
In the case where the page for the record had been deleted, this
caused an assert.
Fixed by extending the assert to also handle empty pages and by
returning HA_ERR_RECORD_DELETED for reads of deleted pages.
- Changed ERROR to WARNING for MyISAM/Aria messages
  that are warnings in the check utilities.
  This affects for example "client is using or
  hasn't closed the table properly".
- Print "Table is fixed" if the check succeeded in
  fixing the table.