don't initialize error_log_handler_list in set_handlers()
* error_log_handler_list is initialized to LOG_FILE early, in init_base()
* set_handlers() always reinitialized it to LOG_FILE anyway, so doing it
  there was pointless
* after init_base(), concurrent threads start using sql_log_warning,
  so a later set_handlers() must not modify error_log_handler_list
  without some protection (see the sketch below)
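A minimal sketch of the intended split, with simplified stand-in types
(the real code has more handler lists and different signatures):

    #include <atomic>

    enum log_handler_t { LOG_NONE, LOG_FILE, LOG_TABLE };

    // Written once during single-threaded startup, then read-only.
    static log_handler_t error_log_handler_list;
    static std::atomic<log_handler_t> slow_log_handler{LOG_NONE};

    void init_base() {
      error_log_handler_list = LOG_FILE;  // safe: no readers exist yet
    }

    void set_handlers(log_handler_t slow) {
      // Deliberately do NOT touch error_log_handler_list here: after
      // init_base(), concurrent threads read it via sql_log_warning,
      // so rewriting it (even to the same value) would be an
      // unprotected concurrent write.
      slow_log_handler.store(slow, std::memory_order_relaxed);
    }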
The problem was not with the server behavior;
it was with incorrectly recorded test results.
Enabling both type_set and type_enum.
type_set.result was OK. The test did not fail.
type_enum.result was incorrectly recorded in this commit:
eb483c5181
Recording correct results.
- A trigger statement initiates an update statement after the bulk insert
  operation, which leads to disabling of the bulk operation. During commit,
  InnoDB expects the transaction to be in bulk insert mode before
  applying the bulk insert operations. The InnoDB transaction should
  apply all bulk insert operations before disabling bulk insert
  mode, as sketched below.
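A hedged sketch of the required ordering, with stand-in types (not
InnoDB's actual transaction structures):

    struct trx_sketch_t {
      bool bulk_insert = false;
      void apply_buffered_bulk_inserts() { /* flush buffered rows */ }

      void disable_bulk_insert() {
        // Apply everything that was buffered while still in bulk
        // insert mode; commit expects the mode to be set when the
        // buffered operations are applied.
        apply_buffered_bulk_inserts();
        bulk_insert = false;  // only now leave bulk insert mode
      }
    };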
Remove unneeded paths from mariadb-server-10.6.postinst, as the directory
is already in the exported PATH and the full paths trigger the
'command-with-path-in-maintainer-script' tag in Lintian checks.
Change `exit;` to `exit(1);` in the function `usage()`.
When we try to run mtr with a wrong option, the function `usage()` is called with the wrong option as its argument. In this case, because the function called the bare `exit` in its first if statement, we got exit status 0 even though an error had occurred.
`m_status == DA_ERROR' failed on SELECT after setting tmp_disk_table_size.
Analysis: The mismatch between "194 warnings" and "64 rows in set" is
caused by the max_error_count variable, which has a default value of 64.
As for the corrupted tables: the error that occurs because of an
insufficient tmp_disk_table_size is not reported correctly, and we
continue to execute the statement. Because that previous error (about
the table being full) is not reported correctly, it moves up the stack
and is later wrongly reported as a parsing error while parsing the frm
file of one of the information schema tables. This parsing error then
surfaces as a corrupted-table error.
As for the InnoDB error, it occurs even when tmp_disk_table_size is at
its default (i.e. not insufficient), but the internal error handler takes
care of it and the error doesn't show. When tmp_disk_table_size is
insufficient, however, the fatal error that wasn't reported correctly
moves up the stack, so the internal error handler is not called and the
errors show.
Fix: Report the error correctly.
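A hedged sketch of the idea, with stand-in types (the real reporting
goes through the handler and diagnostics-area interfaces):

    struct Handler { void print_error(int code) { /* raise real error */ } };
    struct Table   { Handler *file; };

    bool tmp_table_write_row(Table *table, int write_error) {
      if (write_error) {
        // Report at the point of failure so the diagnostics area holds
        // the real cause ("table is full") instead of letting a bare
        // failure code bubble up and be misread as a parse error.
        table->file->print_error(write_error);
        return true;
      }
      return false;
    }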
This is a temporary fix for 10.2.
This problem was permanently fixed in 10.9 under terms of MDEV-27743.
This patch should propagate up to 10.8 and then be null-merged to 10.9.
The issue was that MARIA_FOUND_WRONG_KEY had a value
that could also be returned by ha_key_cmp.
This was already fixed in MyISAM; now the same fix is used in Aria:
setting the value to INT_MAX32, which should be impossible in any
normal case.
I also fixed things so that if there is a wrong key, we now get a proper
error message and not an assert.
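The fix amounts to making the sentinel a value that ha_key_cmp() can
never produce, roughly:

    #define MARIA_FOUND_WRONG_KEY 2147483647 /* INT_MAX32: impossible
                                                result from ha_key_cmp */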
- The purge thread tries to access the table name while another thread
  renames the table. This leads to a heap-use-after-free error in the
  purge thread. The purge thread should check whether the table name
  is temporary while holding dict_sys.mutex, as sketched below.
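A hedged sketch of the locking rule, with simplified stand-ins for the
dict_sys types:

    #include <mutex>

    struct dict_sys_sketch { std::mutex mutex; };
    static dict_sys_sketch dict_sys;

    bool is_temporary_name(const char *name);  // assumed helper

    bool purge_check_table_name(const char *table_name) {
      // Hold dict_sys.mutex for the whole check so a concurrent
      // RENAME cannot free or swap the name string under us.
      std::lock_guard<std::mutex> guard(dict_sys.mutex);
      return is_temporary_name(table_name);
    }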
Creating a temporary table with Spider makes no sense because a Spider
table cannot hold any physical data, and it would require additional
effort to manage even if it were configured correctly.
Set HTON_TEMPORARY_NOT_SUPPORTED in spider_hton->flags.
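The change itself is the one-liner in Spider's handlerton setup:

    spider_hton->flags |= HTON_TEMPORARY_NOT_SUPPORTED;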
Reviewed-by: nayuta.yanagisawa@hey.com
Co-authored-by: d8sk4ueun@gmail.com
dict_acquire_mdl_shared(): Invoke the correct variant
dict_table_t::parse_name<true>() also when trylock=true,
that is, we are being called from fts_optimize_sync_table().
Ever since commit 82b7c561b7 (MDEV-24258)
this code was prone to a hang. If another thread requested an
exclusive dict_sys.latch between the time
dict_acquire_mdl_shared<trylock=true>() acquired a shared dict_sys.latch
and dict_table_t::parse_name<false>() was trying to acquire another
shared dict_sys.latch, InnoDB would get into an infinite livelock
of threads, because the futex-based srw-lock implementation prioritizes
exclusive latch requests.
dict_table_t::parse_name(): Rename the template parameter dict_locked
into dict_frozen.
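A hedged sketch of the call-site change, with simplified stand-ins:

    struct dict_table_t_sketch {
      template <bool dict_frozen> void parse_name() { /* ... */ }
    };

    template <bool trylock>
    void dict_acquire_mdl_shared(dict_table_t_sketch *table) {
      // A shared ("frozen") dict_sys.latch is already held here, also
      // when trylock=true (the fts_optimize_sync_table() path), so the
      // name must be parsed with dict_frozen=true; parse_name<false>()
      // would acquire the shared latch a second time and could livelock
      // behind a queued exclusive latch request.
      table->parse_name<true>();
    }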
buf_page_t::set_state(): Relax a debug assertion. It is fine to update
a read-fixed block descriptor to be both read-fixed and buffer-fixed.
buf_pool_t::watch_unset(): Fix some incorrect logic that was implemented
in commit e9e6db9355.
Thanks to Elena Stepanova for the test case.
This crash happens on a combination of multiple conditions:
- There is a thread#1 running an "ANALYZE FORMAT=JSON" query for a
"SELECT .. FROM INFORMATION_SCHEMA.COLUMNS WHERE .. "
- The WHERE clause contains a stored function call, say f1().
- The WHERE clause is built in such a way that the function f1()
is never actually called, e.g.
WHERE .. AND (TRUE OR f1()=expr)
- The database contains multiple VIEWs that have the function f1() call,
e.g. in their <select list>
- The WHERE clause is built in such a way that these VIEWs match
the condition.
- There is a parallel thread#2 running. It creates or drops or recreates
some other stored routine, say f2(), which is not used in the ANALYZE query.
It effectively invalidates the stored routine cache for thread#1
without locking.
Note, it is important that f2() is NOT used by ANALYZE query.
Otherwise, thread#2 would be locked until the ANALYZE query
finishes.
When all of the above conditions are met, the following happens:
1. thread#1 starts the ANALYZE query. It notices a call for the stored function
f1() in the WHERE condition. The function f1() gets parsed and cached
to the SP cache. Its address also gets assigned to Item_func_sp::m_sp.
2. thread#1 starts iterating through all tables that
match the WHERE condition to find the information about their columns.
3. thread#1 processes columns of the VIEW v1.
It notices a call for f1() in the VIEW v1 definition.
But f1() was already cached in step#1 and is up to date.
So nothing happens with the SP cache.
4. thread#2 re-creates f2() in a non-locking mode.
It effectively invalidates the SP cache in thread#1.
5. thread#1 processes columns of the VIEW v2.
It notices a call for f1() in the VIEW v2 definition.
It also notices that the cached version of f1() is not up to date.
It frees the old definition of f1(), parses it again, and puts a
new version of f1() to the SP cache.
6. thread#1 finishes processing rows and generates the JSON output.
When printing the "attached_condition" value, it calls
Item_func_sp::print() for f1(). But this Item_func_sp links
to the old (freed) version of f1().
The above scenario demonstrates that Item_func_sp::m_sp can point to an
already freed instance when Item_func_sp::func_name() is called,
so accessing Item_sp::m_sp->m_handler is not safe.
This patch rewrites the code to use Item_func_sp::m_handler instead,
which is always reliable.
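A minimal sketch of the difference, with stand-in classes (not the
actual declarations):

    struct Sp_handler { const char *type_str() const { return "FUNCTION"; } };
    struct sp_head    { const Sp_handler *m_handler; };

    struct Item_func_sp_sketch {
      sp_head          *m_sp;       // SP cache entry: may be freed/replaced
      const Sp_handler *m_handler;  // owned by the Item: always valid

      const char *handler_type() const {
        // unsafe: return m_sp->m_handler->type_str();  // use-after-free
        return m_handler->type_str();  // safe: no SP cache access
      }
    };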
Note, this patch is only a cleanup for MDEV-28166 to quickly fix the regression.
It fixes MDEV-28267. But it does not fix the core problem:
The code behind I_S does not take into account that the SP
cache can be updated while evaluating rows of the COLUMNS table.
This is a corner case and it never happens with any other tables.
I_S.COLUMNS is very special.
Another example of the core problem is reported in MDEV-25243.
The code accesses Item_sp::m_sp->m_chistics of an
already freed m_sp, again. It will be addressed separately.
The check on the SQL command type in ha_spider::external_lock() was deleted
by e954d9de. This resulted in wrong calls of spider_internal_start_trx()
(and thus Ha_trx_info::register_ha()).
I reverted the check and refactored ha_spider::external_lock().
The partitioning engine does not support the table-level DATA/INDEX
DIRECTORY specification.
If one creates a non-partitioned table with the DATA/INDEX DIRECTORY
option and then performs ALTER TABLE ... PARTITION BY on it, the
DATA/INDEX DIRECTORY specification of the old schema is ignored.
The behavior might be a bit surprising for users because the value
of a usual table option applies to all the partitions. Thus, we raise
a warning on such an ALTER TABLE ... PARTITION BY, as sketched below.
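A hedged sketch of the check (names are illustrative, not the exact
server identifiers):

    struct THD_sketch;
    void push_ignored_option_warning(THD_sketch *thd, const char *opt);

    void warn_on_partition_by(THD_sketch *thd,
                              bool has_data_or_index_directory,
                              bool adding_partitioning) {
      if (has_data_or_index_directory && adding_partitioning)
        push_ignored_option_warning(thd, "DATA/INDEX DIRECTORY");
    }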
The original query "SELECT IF(COUNT(a.`id`)>=0,'Y','N') FROM t" is
transformed to "SELECT COUNT(a.`id`), IF(ref >= 0, 'Y', 'N') FROM t",
where ref is Item_ref to "COUNT(a.`id`)", by split_sum_func().
Spider walks the item list twice, invoking spider_db_print_item_type().
The first invocation is in spider_create_group_by_handler() with
str == NULL. The second one is in spider_group_by_handler::init_scan()
with str != NULL.
spider_db_print_item_type() prints nothing at the first invocation,
and prints the item at the second invocation. However, at the second
invocation, the above-mentioned ref to "COUNT(a.`id`)" points to
a field in a temporary table where the result will be stored. Thus,
to look behind the Item_ref, Spider would need to generate the query earlier.
A possible fix would be to generate a query to send in
spider_create_group_by_handler(). However, the fix requires a
considerable amount of changes to Spider's GROUP BY handler.
I'd like to avoid that.
So, I fix the problem by not using the GROUP BY handler when a
query contains an Item_ref whose table_name, name, and alias_name_used
are not set, as sketched below.
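A hedged sketch of the guard, with a simplified Item_ref stand-in:

    struct Item_ref_sketch {
      const char *table_name;
      const char *name;
      bool        alias_name_used;
    };

    // If none of these are set, Spider cannot print the reference into
    // the query it sends to the data node, so the GROUP BY handler is
    // not created and the ordinary execution path is used instead.
    bool item_ref_is_printable(const Item_ref_sketch *ref) {
      return ref->table_name || ref->name || ref->alias_name_used;
    }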
Currently, the SST script for mariabackup stops on any failure while
archiving logs, e.g. when it is unable to create a directory, permissions
are insufficient, gzip fails, etc. However, in case of such problems, the
script should issue a warning and continue without archiving, not exit
with a fatal error.
This commit adds this fix to the SST script for mariabackup.
Rather than having Debian logs contain a barely decipherable mix
of build commands and the output of some other command, we
use the make option --output-sync=target. This makes the compile
line and its output stay together in the output stream.
This option exists even in the make version in debian:stretch,
so it should work everywhere.
Tested on debian:stretch and ubuntu:18.04, the two lowest versions that
use debian/rules.
This failure was caused by MDEV-25975, which removed the parameter
innodb_disallow_writes.
Added a check for wsrep_sst_disable_writes to the function
ibuf_merge_in_background().
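A hedged sketch of the added guard (simplified signature and return
type, not InnoDB's actual ones):

    extern bool wsrep_sst_disable_writes;  // true while SST forbids writes

    unsigned long ibuf_merge_in_background(bool /*full*/) {
      if (wsrep_sst_disable_writes)
        return 0;  // skip the merge: donor files must stay unmodified
      /* ... existing change-buffer merge work ... */
      return 0;
    }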
- There is a race condition between the purge thread and DDL:
  the purge thread can increment n_ref_count even after DDL has
  invoked purge_sys_t::stop_FTS().
- dict_table_open_on_id() for the purge thread should check
  purge_sys.must_wait_FTS() before acquiring the table, as sketched below.
- purge_sys.stop_FTS() acquires dict_sys.latch to set the purge
  system flag and checks the table reference count on the auxiliary tables.
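A hedged sketch of the ordering, with stand-in declarations:

    struct purge_sys_sketch { bool must_wait_FTS() const; };
    extern purge_sys_sketch purge_sys;
    struct dict_table_t;
    dict_table_t *dict_table_open_on_id(unsigned long long id);

    dict_table_t *purge_open_table(unsigned long long table_id) {
      // Check the stop flag BEFORE pinning the table; otherwise purge
      // can still increment n_ref_count after DDL has called
      // purge_sys.stop_FTS().
      if (purge_sys.must_wait_FTS())
        return nullptr;  // back off; purge will retry later
      return dict_table_open_on_id(table_id);
    }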
The cause of the bug is overflow of uint16 KEY_PART_INFO::length and/or
uint16 KEY_PART_INFO::store_length. The solution is to widen those
variables to the 'uint' type (which is 32 bits long).
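A sketch of the widened members (only the affected fields shown):

    typedef unsigned int uint;  // 32-bit

    struct KEY_PART_INFO_sketch {
      uint length;        // was uint16: overflowed for long key parts
      uint store_length;  // was uint16: same overflow
    };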
If JOIN::create_postjoin_aggr_table() encounters errors during execution,
free_tmp_table() is called twice for JOIN_TAB::aggr.
The solution is to initialize JOIN_TAB::aggr only on successful completion
of JOIN::create_postjoin_aggr_table(), as sketched below.
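A hedged sketch of the pattern, with stand-in helpers: publish the table
into JOIN_TAB::aggr only after every step has succeeded, so the error
path frees it exactly once.

    struct TMP_TABLE;
    struct JOIN_TAB_sketch { TMP_TABLE *aggr = nullptr; };

    TMP_TABLE *create_tmp_table_sketch();
    bool finish_setup(TMP_TABLE *t);
    void free_tmp_table(TMP_TABLE *t);

    bool create_postjoin_aggr_table_sketch(JOIN_TAB_sketch *tab) {
      TMP_TABLE *t = create_tmp_table_sketch();
      if (!t)
        return true;          // error: nothing published, nothing leaked
      if (finish_setup(t)) {  // any later failure:
        free_tmp_table(t);    // free the local pointer once; tab->aggr
        return true;          // was never set, so the caller's cleanup
      }                       // cannot free it a second time
      tab->aggr = t;          // success: publish only now
      return false;
    }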