specific temporary errors
The optimistic parallel slave's worker thread can hit a run-time error
because the algorithm by design admits conflicts, such as the reported
"Can't find record in 'table'".
A typical stack trace looks like:
{noformat}
#0 handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1 0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2 0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3 0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4 0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5 0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6 0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7 0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8 apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9 0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}
Here {{handler::print_error}} decides whether to write the current
error to the error log when --log-warnings > 1. The decision flag is
consulted by {{my_message_sql()}}, which may eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier any conflict is resolved
by rolling back and retrying until success. Hence the logging is at
best extraneous.
The case is fixed by adding a new flag {{ME_LOG_AS_WARN}}, which
{{handler::print_error}} may propagate further through {{my_error}}
when the error comes from an optimistically running slave worker thread.
The new flag effectively requests the warning level for the errlog
record, while the thread's diagnostics area (DA) still records the
actual error (which the parallel slave error handler regards as
temporary).
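As a rough, self-contained illustration only (the helper names, the bit
value, and the condition below are hypothetical; in the server the flag
travels through the real {{my_error}}/{{my_message_sql}} call chain),
flag propagation of this shape looks like:
{noformat}
// Sketch, not server code: an error-reporting flag such as ME_LOG_AS_WARN
// downgrades the errlog record to a warning while the caller still
// observes the real error code.
#include <cstdio>

typedef unsigned long myf;
const myf ME_LOG_AS_WARN= 1UL << 10;          // illustrative bit value

// Stand-in for my_message_sql(): consults the flag when writing the errlog.
static void log_error(int nr, myf flags)
{
  const char *level= (flags & ME_LOG_AS_WARN) ? "Warning" : "Error";
  std::fprintf(stderr, "[%s] Got error %d\n", level, nr);
}

// Stand-in for handler::print_error(): adds the flag for an optimistic worker.
static void print_error(int error, myf errflag, bool optimistic_slave_worker)
{
  if (optimistic_slave_worker)
    errflag|= ME_LOG_AS_WARN;  // the conflict will be rolled back and retried
  log_error(error, errflag);   // the DA would still record the actual error
}

int main()
{
  print_error(149, 0, true);   // optimistic worker: logged as a warning
  print_error(149, 0, false);  // ordinary thread: logged as an error
}
{noformat}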
When attempting to rename a table to a non-existing database,
InnoDB would misleadingly report "OS error 71" when in fact the
error code is InnoDB's own (OS_FILE_NOT_FOUND), and it did not
report both pathnames. Errors on rename can be caused by either
pathname.
os_file_handle_rename_error(): New function, to report errors in
renaming files.
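A minimal sketch of such a reporter (illustrative only; the real
{{os_file_handle_rename_error()}} uses InnoDB's error-logging
machinery rather than {{strerror}}):
{noformat}
// Hypothetical reporter: name the real error and include both pathnames.
#include <cerrno>
#include <cstdio>
#include <cstring>

static void handle_rename_error(const char *from, const char *to)
{
  std::fprintf(stderr, "Cannot rename file '%s' to '%s': %s\n",
               from, to, std::strerror(errno));
}
{noformat}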
virtual Item_null_result::get_date() was not overridden.
It used the inherited Item::get_date(), which tests field_type(),
which in the case of Item_null_result calls result_field->field_type(),
and result_field is not always set (e.g. it is not set in the
test case from the bug report).
Fixed by overriding Item_null::get_date(), as is done for the other
val_xxx() methods. This makes the code more symmetric across data
types. In the new code, get_date() immediately returns NULL without
entering any data-type-specific code.
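A minimal self-contained sketch of the fix's shape (the class names and
the simplified signature are illustrative; the real method takes a
MYSQL_TIME* and fuzzy-date flags):
{noformat}
#include <iostream>

struct DateValue { bool is_null; };

struct ItemBase
{
  // Base behavior: would consult field_type()/result_field (may be unset).
  virtual DateValue get_date() { /* type-specific path */ return {false}; }
  virtual ~ItemBase() = default;
};

struct ItemNull : ItemBase
{
  // Override: report SQL NULL without entering any type-specific code.
  DateValue get_date() override { return {true}; }
};

int main()
{
  ItemNull n;
  std::cout << (n.get_date().is_null ? "NULL" : "value") << "\n";
}
{noformat}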
On OS X, (u)int64_t is defined as (unsigned) long long int, while on
64-bit Linux it is defined as (unsigned) long int.
Ensure the format specifier matches the type when doing printf on all
systems.
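For instance (a generic illustration, not the patched server code), the
<cinttypes> macros or an explicit cast keep the format specifier
correct on both platforms:
{noformat}
#include <cinttypes>
#include <cstdio>

static void print_id(int64_t id)
{
  std::printf("id=%" PRId64 "\n", id);                   // matches either typedef
  std::printf("id=%lld\n", static_cast<long long>(id));  // or cast explicitly
}

int main() { print_id(42); }
{noformat}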
The order in which stored procedures are output is important: they must
already exist at view creation time for views that use them. Make sure
to print them before outputting tables.
GCC 8 introduced multiple new warnings and an increased level of
strictness.
* -Wshadow now warns if a local variable shadows a typedef.
* GCC also warns when memsetting a non-trivial type. Here the type is
non-trivial only because it has a custom constructor; for all intents
and purposes the class is trivially copyable (both cases are
illustrated after this list).
* GCC also warns about redundant parentheses.
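Hedged illustrations of the first two diagnostics (generic code, not
the classes touched by this fix):
{noformat}
#include <cstring>
#include <type_traits>

typedef int index_t;

void shadow_example()
{
  int index_t= 0;          // -Wshadow: local variable shadows the typedef
  (void) index_t;
}

struct Rec
{
  Rec() {}                 // custom constructor: the type is not trivial...
  int a, b;
};
// ...but it is still trivially copyable, so zero-filling it is safe:
static_assert(std::is_trivially_copyable<Rec>::value, "memset-safe");

void clear(Rec *r)
{
  // -Wclass-memaccess fires on memset(r, ...); casting through void*
  // acknowledges that the zero-fill is intentional and safe here.
  memset(static_cast<void*>(r), 0, sizeof(*r));
}
{noformat}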
Assign all tests added via MY_ADD_TEST to a bogus default_ignore target,
so that they are not run by default on a bare make test. Add a default
test named MTR that invokes the mysql-test-run suite, which is now the
single test run by make test.
Consequently, unit/suite.pm was modified to exclude the MTR test and run
the real ctests flagged for the default_ignore target, thus avoiding a
circular loop.
Several issues were encountered and fixed as explained below:
* missing link to the dbug lib;
* use the proper fprintf format specifier;
* ZERO_COND_INITIALIZER was using the wrong toku_cond_t struct
initializer for the first member of type pthread_cond_t and
did not consider the TOKU_PTHREAD_DEBUG case, which has
one extra struct member of type pfs_key_t (see the sketch after
this list);
* remove likely(!opt_debug_sync_timeout): the argument is
declared extern and not available to Toku;
* pthread_mutex_timedlock() is not available in pthreads
on Mac, as it is not part of the POSIX pthreads spec.
The encompassing event_t::wait(ms) methods are unused and
have therefore been removed.
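As a hedged sketch of the initializer issue (the struct and macro names
are simplified, and the member added under TOKU_PTHREAD_DEBUG is
illustrative):
{noformat}
#include <pthread.h>

typedef unsigned int pfs_key_t;   // stand-in for the real PFS key type

// Simplified layout of the debug-mode struct: pthread_cond_t first,
// plus one extra instrumentation member.
struct toku_cond_sketch
{
  pthread_cond_t pcond;
  pfs_key_t instr_key_id;
};

// The first member needs pthread's own initializer, and the extra
// debug member must be covered as well; the buggy macro matched neither.
#define ZERO_COND_INITIALIZER_SKETCH { PTHREAD_COND_INITIALIZER, 0 }

static toku_cond_sketch cond= ZERO_COND_INITIALIZER_SKETCH;
{noformat}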
This is happening because they are declared as packed,
and clang raises -Waddress-of-packed-member when the address of a
packed member is passed around, a legitimate concern on some
architectures. The easiest way to get rid of the errors is to
remove the packed attribute from said structs.
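A minimal illustration of the diagnostic (generic code, not the structs
from this fix):
{noformat}
// Taking the address of a member of a packed struct may yield a
// misaligned pointer, which some architectures cannot dereference.
struct __attribute__((packed)) packed_rec
{
  char tag;
  int  value;                // misaligned inside the packed struct
};

static void consume(const int *p) { (void) p; }

void f(packed_rec *r)
{
  consume(&r->value);        // clang: -Waddress-of-packed-member
}
{noformat}
Removing __attribute__((packed)) restores natural alignment and
silences the warning.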
Fix build on macOS 10.13:
39dceaae60 MDEV-10983: TokuDB does not compile on OS X 10.12
Make use of a different function to get the current tid.
Additionally, librt doesn't exist on OS X; use the System library
instead.
storage/tokudb/PerconaFT/cmake_modules/TokuFeatureDetection.cmake | 4 +++-
storage/tokudb/PerconaFT/portability/portability.cc | 9 ++++++++-
storage/tokudb/PerconaFT/portability/tests/test-xid.cc | 9 ++++++++-
storage/tokudb/PerconaFT/portability/toku_config.h.in | 1 +
4 files changed, 20 insertions(+), 3 deletions(-)
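A hedged sketch of a portable current-tid helper (the macOS call
{{pthread_threadid_np()}} is real; whether the fix uses exactly this
shape is an assumption):
{noformat}
#include <cstdint>
#if defined(__APPLE__)
#include <pthread.h>
#else
#include <sys/syscall.h>
#include <unistd.h>
#endif

static uint64_t current_tid()
{
#if defined(__APPLE__)
  uint64_t tid;
  pthread_threadid_np(nullptr, &tid);                  // macOS-native thread id
  return tid;
#else
  return static_cast<uint64_t>(syscall(SYS_gettid));   // Linux thread id
#endif
}
{noformat}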
subquery when ORDER BY is present
Currently, to set r_limit we divide by the number of iterations for
which we invoke the dependent subquery. This is not needed in the case
of LIMIT. For varying limits we report in the output that the limit
varies between executions.
Also there is a typo for filtered: we forgot to multiply by 100, as it
is represented as a percentage.
materialization scan over materialization lookup
For non-mergeable semi-joins we don't store the estimates of the IN
subquery in table->file->stats.records.
In the function TABLE_LIST::fetch_number_of_rows, we store the number
of rows in the tables (estimates in the case of derived tables/views).
Currently we don't store the estimates for non-mergeable semi-joins,
which leads to the problem of selecting a materialization scan over a
materialization lookup.
Fixed this by storing these estimates appropriately.
The bug arises when one uses --auto-generate-sql-guid-primary (and
--auto-generate-sql-secondary-indexes) with mysqlslap and also has
sql_mode=STRICT_TRANS_TABLES.
When using this option, mysqlslap should create the column as
varchar(36), but it creates it as varchar(32) only: a textual UUID such
as 550e8400-e29b-41d4-a716-446655440000 is 36 characters (32 hex digits
plus 4 hyphens). Then, with sql_mode=STRICT_TRANS_TABLES, it throws an
error like:
mysqlslap: Cannot run query INSERT INTO t1 VALUES (...)
ERROR : Data too long for column 'id' at row 1
Upstream bug report: BUG#80329.
fil_iterate(): Invoke fil_encrypt_buf() correctly when
a ROW_FORMAT=COMPRESSED table with a physical page size of
innodb_page_size is being imported. Also, validate the page checksum
before decryption, and reduce the scope of some variables.
AbstractCallback::operator()(): Remove the parameter 'offset'.
The check for it in FetchIndexRootPages::operator() was basically
redundant and dead code since the previous refactoring.
InnoDB insisted on closing the file handle before renaming a file.
Renaming a file should never be a problem on POSIX systems. Also on
Windows it should work if the file was opened in FILE_SHARE_DELETE
mode.
fil_space_t::stop_ios: Remove. We no longer need to stop file access
during rename operations.
fil_mutex_enter_and_prepare_for_io(): Remove the wait for stop_ios.
fil_rename_tablespace(): Remove the retry logic; do not close the
file handle. Remove the unused fault injection that was added along
with the DATA DIRECTORY functionality (MySQL WL#5980).
os_file_create_simple_func(), os_file_create_func(),
os_file_create_simple_no_error_handling_func(): Include FILE_SHARE_DELETE
in the share_mode. (We will still prevent multiple InnoDB instances
from using the same files by not setting FILE_SHARE_WRITE.)
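As a hedged, Windows-only illustration of the sharing mode described
above (generic code, not the server's os_file_create_func()):
{noformat}
#include <windows.h>

// Open with FILE_SHARE_DELETE so another handle may rename or delete the
// file while this one stays open; omitting FILE_SHARE_WRITE still keeps a
// second writer (e.g. another InnoDB instance) locked out.
HANDLE open_shared(const wchar_t *path)
{
  return CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                     FILE_SHARE_READ | FILE_SHARE_DELETE,
                     nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
}
{noformat}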
ha_innobase::optimize(): If both innodb_defragment and
innodb_optimize_fulltext_only are at their default settings (OFF),
fall back to ALTER TABLE. Else process one or both options.
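A minimal sketch of the described control flow (the return codes mirror
the server's HA_ADMIN_* constants, but the numeric values and helper
stubs here are illustrative):
{noformat}
enum { HA_ADMIN_OK_SKETCH= 0, HA_ADMIN_TRY_ALTER_SKETCH= -7 };

static void defragment_table_sketch() {}   // stand-in for defragmentation
static void optimize_fts_sketch() {}       // stand-in for fulltext optimize

int optimize_sketch(bool innodb_defragment, bool optimize_fulltext_only)
{
  if (!innodb_defragment && !optimize_fulltext_only)
    return HA_ADMIN_TRY_ALTER_SKETCH;      // both OFF: fall back to ALTER TABLE
  if (innodb_defragment)
    defragment_table_sketch();             // process the defragment option
  if (optimize_fulltext_only)
    optimize_fts_sketch();                 // process the fulltext option
  return HA_ADMIN_OK_SKETCH;
}
{noformat}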
ha_innobase::set_partition_owner_stats(): Remove (unused function).
ha_innobase::ha_partition_stats: Remove (the variable is never read).
Remove unused ut_timer functions.
For non-semi-join subquery optimization we make a cost-based decision
between materialization and the IN->EXISTS transformation. The issue in
this case is that for the IN->EXISTS transformation we run
JOIN::reoptimize with the IN->EXISTS conditions and come up with a new
query plan. But when we compare the cost with materialization and
decide to choose materialization, we need to restore the query plan for
materialization.
The saving and restoring of the keyuse array and join_tab keyuse was
only done when we had at least one element in the keyuse_array; we now
do it even for 0 elements to maintain generality.
Also fixes MDEV-14727, MDEV-14491.
Fix the error
InnoDB: Error: Waited for 5 secs for hash index ref_count (1) to drop to 0
by replacing the flawed wait logic in dict_index_remove_from_cache_low().
On DISCARD TABLESPACE, there is no need to drop the adaptive hash index.
We must drop it on IMPORT TABLESPACE, and eventually on DROP TABLE or
DROP INDEX. As long as the dict_index_t object remains in the cache
and the table remains inaccessible, the adaptive hash index entries
to orphaned pages would not do any harm. They would be dropped when
buffer pool pages are reused for something else.
btr_search_drop_page_hash_when_freed(), buf_LRU_drop_page_hash_batch():
Remove the parameter zip_size, and pass 0 to buf_page_get_gen().
buf_page_get_gen(): Ignore zip_size if mode==BUF_PEEK_IF_IN_POOL.
buf_LRU_drop_page_hash_for_tablespace(): Drop the adaptive hash index
even if the tablespace is inaccessible.
buf_LRU_drop_page_hash_for_tablespace(): New global function, to drop
the adaptive hash index.
buf_LRU_flush_or_remove_pages(), fil_delete_tablespace():
Remove the parameter drop_ahi.
dict_index_remove_from_cache_low(): Actively drop the adaptive hash index
if entries exist. This should prevent InnoDB hangs on DROP TABLE or
DROP INDEX.
row_import_for_mysql(): Drop any adaptive hash index entries for the table.
row_drop_table_for_mysql(): Drop any adaptive hash index for the table,
except if the table resides in the system tablespace. (DISCARD TABLESPACE
does not apply to the system tablespace, and we do not want to drop the
adaptive hash index for tables other than the one being dropped.)
row_truncate_table_for_mysql(): Drop any adaptive hash index entries for
the table, except if the table resides in the system tablespace.
When the transaction isolation level is SERIALIZABLE, or when
a locking read is performed in the REPEATABLE READ isolation level,
InnoDB must lock delete-marked records in order to prevent another
transaction from inserting something.
However, at READ UNCOMMITTED or READ COMMITTED isolation level or
when the parameter innodb_locks_unsafe_for_binlog is set, the
repeatability of the reads does not matter, and there is no need
to lock any records.
row_search_for_mysql(): Skip locks on delete-marked committed records
upfront, instead of invoking row_unlock_for_mysql() afterwards.
The unlocking never worked for secondary index records.
The crash happened because, during discovery, table->work_part_info was
not properly reset before execution.
Fixed by resetting it before calling execute alter table, create table,
or mysql_create_frm_image.
Close the MYSQL (and destroy the THD) in the same thread where it was
used, because the THD embeds an MDL_context that owns some LF_PINS,
which remember a pointer to my_thread_var->stack_ends_here.
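As a hedged usage sketch with the C API (the embedded-server build,
where the MYSQL handle carries a THD, is the case this rule targets;
the connection parameters below are placeholders):
{noformat}
#include <mysql.h>
#include <thread>

static void worker()
{
  MYSQL *con= mysql_init(nullptr);
  if (mysql_real_connect(con, "localhost", "user", "pass", "db",
                         0, nullptr, 0))
  {
    /* ... run queries ... */
  }
  mysql_close(con);   // same thread: pointers tied to this thread's TLS stay valid
}

int main()
{
  std::thread t(worker);   // the whole MYSQL lifecycle stays on one thread
  t.join();
}
{noformat}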