don't initialize error_log_handler_list in set_handlers()
* error_log_handler_list is initialized to LOG_FILE early, in init_base()
* set_handlers always reinitializes it to LOG_FILE, so it's pointless
* after init_base(), concurrent threads start using sql_log_warning,
so a later set_handlers() call shouldn't modify error_log_handler_list
without some protection
The problem was not with the server behavior;
it was that wrong test results had been recorded.
This change enables both the type_set and type_enum tests.
type_set.result was OK; the test did not fail.
type_enum.result was incorrectly recorded in this commit:
eb483c5181
Recording correct results.
Change `exit;` to `exit(1);` in the function `usage()`.
When mtr is run with a wrong option, the function `usage()` is called with the wrong option as its argument. In this case, because the function calls plain `exit` in its first if statement, mtr terminates with exit status 0 even though it failed.
`m_status == DA_ERROR' failed on SELECT after setting tmp_disk_table_size.
Analysis: The mismatch between "194 warnings" and "64 rows in set" is
because of the max_error_count variable, which has a default value of 64.
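A quick illustration of that cap (using the default settings):

  SHOW VARIABLES LIKE 'max_error_count';  -- 64 by default
  -- A statement may raise more warnings than this (e.g. "194 warnings"),
  -- but only max_error_count of them are kept, so SHOW WARNINGS
  -- returns at most 64 rows ("64 rows in set").
  SHOW WARNINGS;
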
As for the corrupted tables, the error that occurs because of an
insufficient tmp_disk_table_size is not reported correctly and we
continue to execute the statement. But because the previous error (about
the table being full) is not reported correctly, this error moves up the
stack and is later wrongly reported as a parsing error while parsing the
frm file of one of the information schema tables. This parsing error is
then reported as a corrupted-table error.
As for the InnoDB error, it occurs even when tmp_disk_table_size is not
insufficient (i.e. at its default), but the internal error handler takes
care of it and the error doesn't show. When tmp_disk_table_size is
insufficient, the fatal error which wasn't reported correctly moves up
the stack, so the internal error handler is not called and the errors
become visible.
Fix: Report the error correctly.
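A rough repro sketch of the scenario (the exact size needed to overflow
the internal temporary table depends on the data set, so the value and
the query below are illustrative only):

  SET SESSION tmp_disk_table_size = 1024;
  SELECT * FROM information_schema.columns;
  -- expected after the fix: a correctly reported "table ... is full"
  -- error instead of a misleading corrupted-table error or an
  -- assertion failure
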
This is a temporary fix for 10.2.
This problem was permanently fixed in 10.9 under terms of MDEV-27743.
This patch should propagate up to 10.8 and then be null-merged into 10.9.
The issue was that the value of MARIA_FOUND_WRONG_KEY was a value
that could also be returned by ha_key_cmp.
This was already fixed in MyISAM; the same fix is now applied to Aria:
the value is set to INT_MAX32, which should be impossible in any
normal case.
I also fixed it so that if there is a wrong key, we now get a proper
error message instead of an assert.
- The purge thread tries to access the table name while another thread
renames the table. This leads to a heap-use-after-free error in the
purge thread. The purge thread should check whether the table name
is temporary while holding dict_sys.mutex.
Creating a temporary table with Spider is nonsense because a Spider
table cannot hold any physical data and it would require additional
effort to manage even if it were configured correctly.
Set the HTON_TEMPORARY_NOT_SUPPORTED flag in spider_hton->flags.
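With this flag set, an attempt like the following (the connection
parameters in the COMMENT are hypothetical) is expected to be rejected:

  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=Spider
    COMMENT='wrapper "mysql", srv "srv1", table "t1"';
  -- fails with an error along the lines of:
  -- Table storage engine 'SPIDER' does not support the create option
  -- 'TEMPORARY'
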
Reviewed-by: nayuta.yanagisawa@hey.com
Co-authored-by: d8sk4ueun@gmail.com
This crash happens on a combination of multiple conditions:
- There is a thread#1 running an "ANALYZE FORMAT=JSON" query for a
"SELECT .. FROM INFORMATION_SCHEMA.COLUMNS WHERE .. "
- The WHERE clause contains a stored function call, say f1().
- The WHERE clause is built in such a way that the function f1()
is never actually called, e.g.
WHERE .. AND (TRUE OR f1()=expr)
- The database contains multiple VIEWs that have the function f1() call,
e.g. in their <select list>
- The WHERE clause is built in such a way that these VIEWs match
the condition.
- There is a parallel thread#2 running. It creates or drops or recreates
some other stored routine, say f2(), which is not used in the ANALYZE query.
It effectively invalidates the stored routine cache for thread#1
without locking.
Note, it is important that f2() is NOT used by ANALYZE query.
Otherwise, thread#2 would be locked until the ANALYZE query
finishes.
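A sketch of such a setup (all object names and definitions are
illustrative only):

  -- thread#1
  CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;
  CREATE VIEW v1 AS SELECT f1() AS c1;
  CREATE VIEW v2 AS SELECT f1() AS c1;
  ANALYZE FORMAT=JSON
  SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_SCHEMA='test' AND (TRUE OR f1()=1);

  -- thread#2, concurrently
  CREATE OR REPLACE FUNCTION f2() RETURNS INT DETERMINISTIC RETURN 2;
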
When all of the above conditions are met, the following happens:
1. thread#1 starts the ANALYZE query. It notices a call for the stored function
f1() in the WHERE condition. The function f1() gets parsed and cached
to the SP cache. Its address also gets assigned to Item_func_sp::m_sp.
2. thread#1 starts iterating through all tables that
match the WHERE condition to find the information about their columns.
3. thread#1 processes columns of the VIEW v1.
It notices a call for f1() in the VIEW v1 definition.
But f1() was already cached in step #1 and is up to date.
So nothing happens with the SP cache.
4. thread#2 re-creates f2() in a non-locking mode.
It effectively invalidates the SP cache in thread#1.
5. thread#1 processes columns of the VIEW v2.
It notices a call for f1() in the VIEW v2 definition.
It also notices that the cached version of f1() is not up to date.
It frees the old definition of f1(), parses it again, and puts a
new version of f1() to the SP cache.
6. thread#1 finishes processing rows and generates the JSON output.
When printing the "attached_condition" value, it calls
Item_func_sp::print() for f1(). But this Item_func_sp links
to the old (freed) version of f1().
The above scenario demonstrates that Item_func_sp::m_sp can point to an
already freed instance when Item_func_sp::func_name() is called,
so accessing Item_sp::m_sp->m_handler is not safe.
This patch rewrites the code to use Item_func_sp::m_handler instead,
which is always reliable.
Note, this patch is only a cleanup for MDEV-28166 to quickly fix the regression.
It fixes MDEV-28267. But it does not fix the core problem:
The code behind I_S does not take into account that the SP
cache can be updated while evaluating rows of the COLUMNS table.
This is a corner case and it never happens with any other tables.
I_S.COLUMNS is very special.
Another example of the core problem is reported in MDEV-25243.
There, the code again accesses Item_sp::m_sp->m_chistics of an
already freed m_sp. It will be addressed separately.
The check on the SQL command type in ha_spider::external_lock() was
deleted by e954d9de. This resulted in a wrong call of
spider_internal_start_trx() (and thus Ha_trx_info::register_ha()).
I reverted the check and refactored ha_spider::external_lock().
The partitioning engine does not support the table-level DATA/INDEX
DIRECTORY specification.
If one creates a non-partitioned table with the DATA/INDEX DIRECTORY
option and then performs ALTER TABLE ... PARTITION BY on it, the
DATA/INDEX DIRECTORY specification of the old schema is ignored.
The behavior might be a bit surprising for users because the value
of a usual table option applies to all the partitions. Thus, we raise
a warning on such ALTER TABLE ... PARTITION BY.
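An illustrative example (the directory path is hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB DATA DIRECTORY='/some/dir';
  ALTER TABLE t1 PARTITION BY HASH(a) PARTITIONS 2;
  -- the ALTER is expected to succeed but to raise a warning that the
  -- table-level DATA DIRECTORY specification is ignored
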
The original query "SELECT IF(COUNT(a.`id`)>=0,'Y','N') FROM t" is
transformed to "SELECT COUNT(a.`id`), IF(ref >= 0, 'Y', 'N') FROM t",
where ref is an Item_ref to "COUNT(a.`id`)", by split_sum_func().
Spider walks the item list twice, invoking spider_db_print_item_type().
The first invocation is in spider_create_group_by_handler() with
str == NULL. The second one is in spider_group_by_handler::init_scan()
with str != NULL.
spider_db_print_item_type() prints nothing at the first invocation,
and it prints the item at the second invocation. However, at the second
invocation, the above-mentioned ref to "COUNT(a.`id`)" points to
a field in a temporary table where the result will be stored. Thus,
to look behind the item_ref, Spider would need to generate the query earlier.
A possible fix would be to generate a query to send in
spider_create_group_by_handler(). However, that fix would require a
considerable amount of changes to Spider's GROUP BY handler.
I'd like to avoid that.
So, I fix the problem by not using the GROUP BY handler when a
query contains an Item_ref whose table_name, name, and alias_name_used
are not set.
Currently the SST script for mariabackup stops on any failure while
archiving logs, e.g. when it is unable to create a directory, has
insufficient permissions, gzip fails, etc. However, in case of such
problems, the script should issue a warning and continue without
archiving, and not exit with a fatal error.
This commit adds this fix to the SST script for mariabackup.
Rather than have the Debian logs contain a barely decipherable mix
of build commands and the output of other commands, we
use the make option --output-sync=target. This makes the compile
line and its output stay together in the output stream.
This option exists even in the make version in debian:stretch,
so it should work everywhere.
Tested on debian:stretch and ubuntu:18.04, the two lowest versions that
use debian/rules.
This failure was caused by MDEV-25975, which removed the parameter
innodb_disallow_writes.
Added a check for wsrep_sst_disable_writes to the function
ibuf_merge_in_background().
The cause of the bug is an overflow of the uint16 KEY_PART_INFO::length
and/or uint16 KEY_PART_INFO::store_length members. The solution is to
change those members to the 'uint' type (which is 32 bits wide).
If JOIN::create_postjoin_aggr_table encounters errors during execution,
free_tmp_table() is then called twice for JOIN_TAB::aggr.
The solution is to initialize JOIN_TAB::aggr only on successful completion
of JOIN::create_postjoin_aggr_table.
We will remove the parameter innodb_disallow_writes because it is badly
designed and implemented. The parameter was never allowed at startup.
It was only internally used by Galera snapshot transfer.
If a user executed
SET GLOBAL innodb_disallow_writes=ON;
the server could hang even on subsequent read operations.
During Galera snapshot transfer, we will block writes
to implement an rsync-friendly snapshot, as follows:
sst_flush_tables() will acquire a global lock by executing
FLUSH TABLES WITH READ LOCK, which will block any writes
at the high level.
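At the SQL level this corresponds to the usual backup-friendly pattern
(sketch only; the SST scripts drive this internally):

  FLUSH TABLES WITH READ LOCK;  -- block writes while the snapshot is taken
  -- ... take the rsync snapshot ...
  UNLOCK TABLES;                -- release the global read lock
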
sst_disable_innodb_writes(), invoked via ha_disable_internal_writes(true),
will suspend or disable InnoDB background tasks or threads that could
initiate writes. As part of this, log_make_checkpoint() will be invoked
to ensure that anything in the InnoDB buf_pool.flush_list will be written
to the data files. This has the nice side effect that the Galera joiner
will avoid crash recovery.
The changes to sql/wsrep.cc and to the tests are based on a prototype
that was developed by Jan Lindström.
Reviewed by: Jan Lindström
Some SQL statements that involve subqueries or stored routines could
fail since execution of subqueries or stored routines is not supported
for these statements. Unfortunately, a parsing error could result in
abnormal termination by firing the following assert
DBUG_ASSERT(m_thd == NULL);
in the destructor of the class sp_head.
The reason for the assert firing is that the method
sp_head::restore_thd_mem_root()
is not called from the semantic action code to clean up resources allocated
during parsing. This happens because the macro YYABORT is called instead of
MYSQL_YYABORT by the semantic action code for some grammar rules.
So, to fix the bug, YYABORT was simply replaced with MYSQL_YYABORT.
The reason for this fix was that when I tried to run mysql_upgrade
at home to update an old 10.5 installation, mysql_upgrade failed
with warnings about the mariadb.sys user not existing.
If the server was started with --skip-grants, there would be no warnings
from mysql_upgrade, but in some cases running mysql_upgrade again could
produce new warnings.
The reason for the warnings was that any access to the mysql.user view
will produce a warning if the mariadb.sys user does not exist.
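For example (assuming the mariadb.sys user is missing):

  SELECT COUNT(*) FROM mysql.user;
  SHOW WARNINGS;  -- expected to warn about the missing mariadb.sys definer
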
Fixed with the following changes:
- Disable warnings about mariadb.sys user not existing
- Don't overwrite old mariadb.sys entries in tables_priv and global_priv
- Ensure that tables_priv has an entry for mariadb.sys if the user exists.
This fixes an issue where tables_priv would not be updated if there
was a failure directly after global_priv was updated.
Server crashed during shutdown with:
"corrupted double-linked list"
when running mysql_upgrade multiple times against the server.
The reason was that db_repository could be freed twice.
This commit contains a fix to use modern syntax for selecting
character classes in the tr utility options.
Also, one of the tests for SST via rsync (galera_sst_rsync2) is made
more reliable (to avoid rare failures during automatic testing).
The issue was caused by commit 59a0236da4.
The initial intention of the commit was to speed up
"mariabackup --prepare".
The call stack of binlog position reading is the following:
▾ trx_rseg_mem_restore
▾ trx_rseg_array_init
▾ trx_lists_init_at_db_start
▸ srv_start
Both trx_lists_init_at_db_start() and trx_rseg_mem_restore() contain
special cases for the srv_operation == SRV_OPERATION_RESTORE condition,
and under this condition only the rseg headers are read to parse the
binlog position. The performance impact is not significant.
The solution is to revert 59a0236da4.
ib_id_t is a uint64. On AIX this isn't an unsigned long long, so to
prevent compiler warnings and a potentially wrong type, the UINT64PFx
definition is corrected.
As INT64PF is unused (last used by XtraDB in 10.2), it is removed to
avoid the confusion that INT64PF and UINT64PFx would otherwise appear
to be for different types.