In this case we are accessing incorrect memory when we have mergeable semi-joins.
When we have mergeable semi-joins, the parent select has a table count
covering all the tables in that select plus all the tables involved in the IN-subquery.
But this table count does not include the "sjm table" (it only includes the inner and outer tables),
denoted as <subquery#> in EXPLAIN.
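As a minimal illustration of the bug class (a hypothetical sketch, not the actual
server code; the names are made up), a per-table structure sized from that table
count is one element too small once the <subquery#> table is addressed:

  #include <vector>
  int main() {
    const unsigned table_count = 3;       // outer + inner tables only
    const unsigned sjm_table_no = 3;      // the uncounted <subquery#> table
    std::vector<int> per_table(table_count);
    // per_table[sjm_table_no] would read past the end of the allocation;
    // sizing such structures to include the sjm table avoids that.
    return sjm_table_no < per_table.size() ? per_table[sjm_table_no] : 0;
  }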
IN predicate defined with non-constant values is pushed down
The problem appears because of incorrect changes made in MDEV-16090 in the
Item_func_in::build_clone() method.
For the clone of the IN predicate it copied the 'cmp_fields' array values,
which become dirty after Item::cleanup_excluding_const_fields_processor
has run in pushdown. That causes the crash.
There is no need to copy the 'cmp_fields' field: the array values should be
NULLs so that fix_fields() for the cloned IN predicate can set them
correctly. fix_fields() computes the values for the 'cmp_fields' array only
if they were not set earlier.
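A minimal sketch of the fix idea, assuming simplified, made-up names (this is not
the actual Item_func_in code): the clone leaves the cached comparison slots unset
so that fix_fields() fills them in for the clone itself:

  #include <cstring>

  struct InPredicate {
    void *cmp_fields[4];                  // cached state, normally set by fix_fields()
    InPredicate() { std::memset(cmp_fields, 0, sizeof cmp_fields); }

    InPredicate *build_clone() const {
      InPredicate *clone = new InPredicate();
      // Bug:  copying cmp_fields here leaves the clone pointing at data that
      //       the cleanup processor later invalidates.
      // Fix:  keep the clone's cmp_fields as NULLs; fix_fields() will set them.
      return clone;
    }

    void fix_fields() {
      for (void *&slot : cmp_fields)
        if (!slot)
          slot = this;                    // placeholder for the real comparator setup
    }
  };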
The bug was that innobase_get_computed_value() trashed record[0] and data
in Field_blob::value
Fixed by using a record on the heap for innobase_get_computed_value()
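A simplified sketch of the approach (illustrative only, not the actual InnoDB code):
evaluate the computed value in a separate heap-allocated record buffer so that
record[0], and the BLOB buffers pointing into it, stay intact:

  #include <cstring>
  #include <vector>

  void compute_virtual_value(const unsigned char *record0, size_t reclen) {
    std::vector<unsigned char> heap_rec(reclen);   // scratch record on the heap
    std::memcpy(heap_rec.data(), record0, reclen);
    // ... evaluate the generated column using heap_rec.data() ...
    // record0 (table->record[0]) is never written to, so Field_blob::value
    // buffers that reference it are not trashed.
  }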
Reviewer: Marko Mäkelä
This is to mark that a field is indirectly part of a key, which simplifies
checking whether we need to have this field up to date to evaluate a key.
For example:
CREATE TABLE t1 (a int, b int as (a) virtual,
c int as (b) virtual, index(c))
would mark a and b with PART_INDIRECT_KEY_FLAG.
c is marked with PART_KEY_FLAG as before.
The following conditions decide query cache retrieval or storing inside InnoDB:
(1) There must not be any locks on the table.
(2) No other transaction should have invalidated the cache before the
current transaction started.
(3) A read view should not exist. If one exists, then the view's
low_limit_id should be greater than or equal to the transaction that
invalidated the cache for the particular table.
A read-only transaction must satisfy (1) and (3) above.
A read-write transaction must satisfy (1), (2) and (3) above.
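A sketch of this decision with the conditions above encoded as plain flags and ids
(illustrative only; the real check lives in ha_innodb.cc and uses InnoDB's lock and
read-view structures):

  #include <cstdint>

  struct cache_check {
    bool     table_has_locks;             // (1)
    bool     invalidated_before_start;    // (2) cache invalidated by another trx
    bool     has_read_view;               // (3)
    uint64_t read_view_low_limit_id;
    uint64_t invalidating_trx_id;         // trx that invalidated the cache for the table
    bool     read_write;                  // read-write vs read-only transaction
  };

  bool query_cache_permitted(const cache_check &c) {
    if (c.table_has_locks)                                    // (1)
      return false;
    if (c.read_write && c.invalidated_before_start)           // (2), read-write only
      return false;
    if (c.has_read_view &&
        c.read_view_low_limit_id < c.invalidating_trx_id)     // (3)
      return false;
    return true;
  }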
- Changed the variable from query_cache_inv_id to query_cache_inv_trx_id.
- Moved the function row_search_check_if_query_cache_permitted from
row0sel.h and made it a static function in ha_innodb.cc.
* xtrabackup no longer supports --compact
* wsrep_sst_mysqldump wasn't always using --defaults-file
* wsrep_sst.cc over-quoted --defaults-file for wsrep_sst_mysqldump
Also remove redundant config and test lines and compiler warnings,
and mark big tests as big.
srv_print_verbose_log: Introduce the value 2 to refer to
mariabackup --verbose.
recv_recover_page(), recv_parse_log_recs(): Add output for
mariabackup --verbose.
Commit dc9c555415 moved the final phase of
the redo log copying to the background thread. This would sometimes cause
too little redo log to be copied at the end of the backup. We would only
guarantee copying up to the latest redo log checkpoint. This would produce
a consistent backup, but it could refer to too old a point in time.
xtrabackup_copy_log(), xtrabackup_copy_logfile(): Add the parameter 'last'.
xtrabackup_backup_low(): Copy any remaining part of the log after the
backup threads have terminated.
When altering from DECIMAL to *INT UNSIGNED or to BIT, go through val_decimal(),
to avoid truncation to the biggest possible signed integer
(0x7FFFFFFFFFFFFFFF / 9223372036854775807).
Merge following change from 10.2
revision-id: d52cff9f10aeea208a1058f7b5527e602125584c (mariadb-10.2.14-25-gd52cff9)
parent(s): bc2501453c
author: Sachin Setiya
committer: Sachin Setiya
timestamp: 2018-04-04 12:26:06 +0530
message:
MDEV-15611 Due to the failure of foreign key detection, Galera...
slave node killed itself.
Problem: If we try to delete a table with a foreign key and the table it
references with wsrep_slave_threads > 1, then Galera tries to execute both
Delete_rows_log_events in parallel, which should not happen.
Solution: This happens because we do not have the foreign key info in the
write set. Up to version 10.2.7 it used to work fine; it actually broke
because of an issue in commit 2f342c4. wsrep_must_process_fk should be used
with negation.
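A heavily simplified sketch of that last point (hypothetical stand-ins, not the
actual wsrep code): the check has to be applied negated, so that the foreign-key
information is only skipped when FK processing is not required:

  static bool wsrep_must_process_fk_stub() { return true; }   // stand-in predicate

  static void append_foreign_key_info() {
    // Only skip the foreign-key information when FK processing is NOT required;
    // before the fix the condition was effectively the opposite.
    if (!wsrep_must_process_fk_stub())
      return;
    // ... add the foreign-key information to the write set ...
  }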
materialized derived table/view that uses aliases is done
The problem appears when a column alias inside the definition of the
materialized derived table/view t1 coincides with the column name used in the
GROUP BY clause of t1. If a condition that can be pushed into t1
uses that ambiguous column name, the column is resolved to the column
used in the GROUP BY clause instead of the alias used in the projection
list of t1. That causes a wrong result.
To prevent this, resolve_ref_in_select_and_group() was changed.
Note that ${A#foo} expands to $A if $A has no prefix foo. That is why
Galera nodes tried to connect to 127.0.0.1:127.0.0.1 when there was
no port in the address.
Followup for 2b35db5ac4
Since MariaDB Server 10.2.2 (and MySQL 5.7), the default value of
innodb_checksum_algorithm is crc32 (CRC-32C), not the inefficient "innodb"
checksum. Change Mariabackup to use the same default, so that checksum
validation (when using the default algorithm on the server) will take less
time during mariabackup --backup. Also, mariabackup --prepare should be
a little faster, and the server should read backups faster, because the
page checksums would only be validated against CRC-32C.
fil_page_decompress(): Replaces fil_decompress_page().
Allow the caller to detect errors. Remove
duplicated code. Use the "safe" instead of the "fast" variants of the
decompression routines.
fil_page_compress(): Replaces fil_compress_page().
The length of the input buffer always was srv_page_size (innodb_page_size).
Remove printouts, and remove the fil_space_t* parameter.
buf_tmp_buffer_t::reserved: Make private; the accessors acquire()
and release() will use atomic memory access.
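A minimal sketch of that reservation scheme (illustrative only; buf_tmp_buffer_t's
real interface may differ): acquire() claims the slot with an atomic
compare-and-swap, and release() simply clears the flag:

  #include <atomic>

  class tmp_slot {
    std::atomic<bool> reserved{false};
  public:
    bool acquire() {                      // true if this caller got the slot
      bool expected = false;
      return reserved.compare_exchange_strong(expected, true,
                                              std::memory_order_acquire);
    }
    void release() { reserved.store(false, std::memory_order_release); }
  };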
buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter.
Do not acquire any mutex. Remove the allocation of the buffers.
buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf():
Refactored away from buf_pool_reserve_tmp_slot().
buf_page_decrypt_after_read(): Make static, and simplify the logic.
Use the encryption buffer also for decompressing.
buf_page_io_complete(), buf_dblwr_process(): Check more failures.
fil_space_encrypt(): Simplify the debug checks.
fil_space_t::printed_compression_failure: Remove.
fil_get_compression_alg_name(): Remove.
fil_iterate(): Allocate a buffer for compression and decompression
only once, instead of allocating and freeing it for every page
that uses compression, during IMPORT TABLESPACE. Also, validate the
page checksum before decryption, and reduce the scope of some variables.
fil_node_get_space_id(), fil_page_is_index_page(),
fil_page_is_lzo_compressed(): Remove (unused).
AbstractCallback::operator()(): Remove the parameter 'offset'.
The check for it in FetchIndexRootPages::operator() was basically
redundant and dead code since the previous refactoring.
Problem:
The problem was most likely introduced by the fix for MDEV-11597
(commit 5f0c31f928) which removed
the assignment "killed= KILL_BAD_DATA" from THD::raise_condition().
Before MDEV-11597, sp_head::execute() tested thd->killed after
looping through the SP instructions and exited with an error
if thd->killed was set. After MDEV-11597, sp_head::execute()
stopped noticing errors and set the OK status on top of the
error status, which crashed on an assert.
Fix:
Make sp_cursor::fetch() return -1 if server_side_cursor->fetch(1)
left an error in the diagnostics area. This makes the statement
"err_status= i->execute(thd, &ip)" in sp_head::execute() set the
error code, correctly break the SP instruction loop, and
return on error without setting the OK status.
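A self-contained sketch of the fix's shape (stand-in types, not the real sp_cursor
code): fetch() reports -1 whenever the underlying fetch left an error in the
diagnostics area, so the caller's err_status becomes non-zero and the loop stops:

  struct diagnostics_area { bool is_error = false; };
  struct server_cursor {
    bool fetch(unsigned rows, diagnostics_area &da) {
      (void) rows;
      da.is_error = true;                 // simulate a fetch that leaves an error
      return false;
    }
  };

  int sp_cursor_fetch(server_cursor &cur, diagnostics_area &da) {
    cur.fetch(1, da);
    return da.is_error ? -1 : 0;          // -1 propagates as err_status to the caller
  }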
specific temporary errors
The optimistic parallel slave's worker thread could face a run-time error due to
the algorithm's specifics, which allow for conflicts like the reported
"Can't find record in 'table'".
A typical stack looks like this:
{noformat}
#0 handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1 0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2 0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3 0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4 0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5 0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6 0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7 0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8 apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9 0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}
Here {{handler::print_error}} computes whether to write the current error
to the error log when --log-warnings > 1. The decision flag is consulted
by {{my_message_sql()}}, which can eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier case the applier attempts to
resolve any conflict with a rollback and a retry until success. Hence the
logging is at least extraneous.
The case is fixed by adding a new flag {{ME_LOG_AS_WARN}} which
{{handler::print_error}} may propagate further on through {{my_error}}
when the error comes from an optimistically running slave worker thread.
The new flag effectively requests the warning level for the errlog record,
while the thread's DA records the actual error (which is regarded as a temporary one
by the parallel slave error handler).
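A minimal sketch of the flag's effect (simplified and hypothetical, not the actual
error-reporting code; the flag value is illustrative): the flag only demotes the
error-log record to warning level, while the caller still records the real error:

  #include <cstdio>

  enum { ME_LOG_AS_WARN = 1 };            // illustrative flag value

  static void errlog_handler_error(int error_code, unsigned flags) {
    const char *level = (flags & ME_LOG_AS_WARN) ? "Warning" : "Error";
    std::fprintf(stderr, "[%s] handler error %d\n", level, error_code);
  }

  int main() {
    // An optimistic parallel slave worker would pass ME_LOG_AS_WARN here,
    // while its diagnostics area still carries the original (temporary) error.
    errlog_handler_error(149, ME_LOG_AS_WARN);
    return 0;
  }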