When doing condition pushdown from HAVING into WHERE,
Item_equal::create_pushable_equalities() calls
item->set_extraction_flag(IMMUTABLE_FL) for constant items.
Then, Item::cleanup_excluding_immutables_processor() checks for this flag
to see if it should call item->cleanup() or leave the item as-is.
The failure happens when a constant item has a non-constant one inside it,
like:
(tbl.col=0 AND impossible_cond)
item->walk(cleanup_excluding_immutables_processor) works in a bottom-up
way, so it
1. will call Item_func_eq(tbl.col=0)->cleanup()
2. will not call Item_cond_and->cleanup() (as the AND is constant)
This creates an item tree where a fixed Item has an un-fixed Item inside
it, which eventually causes an assertion failure.
Fixed by introducing this rule: instead of just calling
item->set_extraction_flag(IMMUTABLE_FL);
we call Item::walk() to set the flag for all sub-items of the item.
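A minimal illustration of the idea (simplified stand-in types, not the
actual Item classes): the immutability flag is set by a processor that a
tree walk applies to the whole subtree, so the later bottom-up cleanup walk
sees a consistent marking on every node.

    #include <vector>

    // Simplified stand-in for the Item hierarchy.
    struct Node
    {
      unsigned flags= 0;
      std::vector<Node*> args;          // sub-items

      // Depth-first walk, visiting sub-items before the item itself,
      // mirroring the bottom-up cleanup described above.
      void walk(void (*processor)(Node*))
      {
        for (Node *arg : args)
          arg->walk(processor);
        processor(this);
      }
    };

    static const unsigned IMMUTABLE_FL= 1u << 0;

    static void set_immutable_processor(Node *item)
    {
      item->flags|= IMMUTABLE_FL;
    }

    // Old behaviour: flag the top item only.
    // New behaviour: item->walk(set_immutable_processor);  // whole subtree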
The only purpose of ibuf_bitmap_mutex is to prevent a deadlock between
two concurrent invocations of ibuf_update_free_bits_for_two_pages_low()
on the same pair of bitmap pages, but in opposite order.
The mutex is unnecessarily serializing the execution of the function
even when it is being invoked on totally different tablespaces.
To avoid deadlocks, it suffices to ensure that the two page latches
are acquired in a deterministic (sorted) order.
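A minimal sketch of the latch-ordering idea (plain std::mutex instead of
InnoDB page latches): both concurrent callers agree on the same acquisition
order, so no global mutex is needed to prevent the deadlock.

    #include <functional>
    #include <mutex>
    #include <utility>

    // Acquire the latches of a pair of pages in a deterministic (sorted)
    // order, regardless of the order the caller passed them in.
    static void latch_two_pages(std::mutex &a, std::mutex &b)
    {
      if (&a == &b)
      {
        std::lock_guard<std::mutex> l(a);
        // ... single-page case ...
        return;
      }
      std::mutex *first= &a, *second= &b;
      if (std::less<std::mutex*>()(second, first))
        std::swap(first, second);             // sort to fix the order
      std::lock_guard<std::mutex> l1(*first);
      std::lock_guard<std::mutex> l2(*second);
      // ... update the free bits on both bitmap pages here ...
    }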
- Simplified Chinese translation added
- Character encoding is gbk
-- gbk covers more characters
-- gbk includes both Simplified and Traditional
-- best option I think; may need to work along with other locale settings
- Other cleanup
-- Within each error, messages are sorted according to language code
-- More consistent formatting (8 spaces preceding each translation)
-- jps removed as duplicate of jpn translation
This should be a good starting point. More refinement is appreciated
and will be needed down the road.
English "containt" (sic) spelling fixes on ER_FK_NO_INDEX_{CHILD,PARENT}
resulting in mtr test case adjustments.
Edited/reviewed by Daniel Black
The following conditions have to be added:
1) InnoDB fails to include the offset of the node pointer field
in non-leaf records for the redundant row format.
2) If a fixed-length field has only a prefix length, then
calculate the field's maximum size as the prefix length (see the
sketch below).
- Added a test case for (2) and to check the maximum number of
fields that can exist in an index.
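A minimal sketch of rule (2), with invented names (not the actual InnoDB
dictionary code): when a fixed-length column is indexed through a prefix,
the prefix length bounds the field's contribution to the record size.

    // Returns the maximum number of bytes the field can occupy in the
    // index record. prefix_len == 0 means "no prefix, full column".
    static unsigned long index_field_max_size(unsigned long fixed_len,
                                              unsigned long prefix_len)
    {
      return (prefix_len != 0 && prefix_len < fixed_len)
             ? prefix_len : fixed_len;
    }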
When we enable writes after a Galera SST, srv_n_fil_crypt_threads needs
to be set temporarily to 0 (as was done when writes were disabled)
to make sure that the encryption threads will really be started based
on the old value of the encryption thread count.
Fix provided by Marko Mäkelä.
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains must
be allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
When a table is reopened from the table cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) which meets
the following requirements (a minimal sketch follows the list):
1. As noted above, the expression update must be done on expr_arena, as
new items may be created. This was a bug in fix_session_expr_for_read()
that simply was not reproduced because there was no second refix. Now the
refix is done in more cases, so it does reproduce. Tests affected:
vcol.binlog
2. The name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean so it does not fail the expression update.
Modes such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like
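A minimal RAII sketch of these three requirements (member names simplified,
not the actual Vcol_expr_context): the guard switches to the expression
arena, narrows name resolution to the single table and clears sql_mode for
the duration of the refix, then restores everything on scope exit.

    // Stand-ins for the session state the real code adjusts.
    struct Session
    {
      unsigned long long sql_mode;
      void *current_arena;              // where new items get allocated
      void *name_resolution_context;    // tables visible to fix_fields()
    };

    class Vcol_refix_guard
    {
      Session &ses;
      unsigned long long saved_sql_mode;
      void *saved_arena, *saved_context;
    public:
      Vcol_refix_guard(Session &s, void *expr_arena, void *single_table_ctx)
        : ses(s), saved_sql_mode(s.sql_mode),
          saved_arena(s.current_arena),
          saved_context(s.name_resolution_context)
      {
        ses.current_arena= expr_arena;                   // requirement 1
        ses.name_resolution_context= single_table_ctx;   // requirement 2
        ses.sql_mode= 0;                                 // requirement 3
      }
      ~Vcol_refix_guard()                                // restore on exit
      {
        ses.current_arena= saved_arena;
        ses.name_resolution_context= saved_context;
        ses.sql_mode= saved_sql_mode;
      }
    };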
Reviewed by: Sergei Golubchik <serg@mariadb.org>
1. Moved the fix_vcol_exprs() call to open_table():
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit from
fix_vcol_exprs() being called from there. Tests affected:
main.default_session
2. Vanilla cleanups and comments.
UNION ALL queries are subject to an optimization introduced in MDEV-334
whereby the creation of a temporary table is skipped.
While there is a check for this optimization in Explain_union::print_explain(),
there was no such check in Explain_union::print_explain_json(). This resulted
in printing irrelevant data like:
"union_result": {
"table_name": "<union2,3>",
"access_type": "ALL",
"r_loops": 0,
"r_rows": null
in the case when creation of the temporary table was actually optimized out.
This commit adds a check of whether the temporary table was actually created
during UNION ALL processing and eliminates printing of the irrelevant data.
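A minimal sketch of the added check (simplified types, not the actual
Explain_union code): the JSON printer emits the union_result node only when
the temporary table was really created, mirroring what print_explain()
already does.

    // Simplified stand-in for the JSON output side.
    struct Json_writer_sketch
    {
      void start_object(const char *name) { (void) name; }
      void end_object() {}
    };

    static void print_union_result_json(Json_writer_sketch *writer,
                                        bool union_result_table_created)
    {
      if (!union_result_table_created)
        return;                       // UNION ALL optimized the table away
      writer->start_object("union_result");
      // ... table_name, access_type, r_loops, r_rows ...
      writer->end_object();
    }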
on_table_fill_finished() should always be called at the end of open(),
even if the result is not Select_materialize but (for example)
Select_fetch_into_spvars.
This commit fixes problems with parsing ipv6 addresses given via
the wsrep_sst_receive_address and wsrep_node_address options.
Also, this commit removes extra lines in the configuration files
in the mtr test suites for Galera related to these parameters.
don't initialize error_log_handler_list in set_handlers()
* error_log_handler_list is initialized to LOG_FILE early, in init_base()
* set_handlers always reinitializes it to LOG_FILE, so it's pointless
* after init_base() concurrent threads start using sql_log_warning,
so a subsequent set_handlers() shouldn't modify error_log_handler_list
without some protection
I changed `exit;` to `exit(1);` in the function `usage()`.
When we try to run mtr with a wrong option, the function `usage()` is called with the wrong option as its argument. In this case, because the function calls `exit` in the first if statement, we get exit status 0 even though an error occurred.
`m_status == DA_ERROR' failed on SELECT after setting tmp_disk_table_size.
Analysis: The mismatch in the number of warnings between "194 warnings" and
"64 rows in set" is because of the max_error_count variable, which has a
default value of 64.
As for the corrupted tables, the error that occurs because of an insufficient
tmp_disk_table_size value is not reported correctly and we continue to
execute the statement. Because the previous error (about the table being
full) is not reported correctly, this error moves up the stack and is later
wrongly reported as a parsing error while parsing the frm file of one
of the information schema tables. This parsing error produces the
corrupted-table error.
As for the InnoDB error, it occurs even when tmp_disk_table_size is at its
default (i.e. not insufficient), but the internal error handler takes care
of it and the error doesn't show. When tmp_disk_table_size is insufficient,
the fatal error which wasn't reported correctly moves up the stack, so the
internal error handler is not called and the errors become visible.
Fix: Report the error correctly.
This is a temporary fix for 10.2.
This problem was permanently fixed in 10.9 under the terms of MDEV-27743.
This patch should propagate up to 10.8 and then be null-merged into 10.9.
The issue was that the value of MARIA_FOUND_WRONG_KEY could also be
returned by ha_key_cmp.
This was already fixed in MyISAM; the same fix is now used in Aria:
the value is set to INT_MAX32, which should be impossible as a comparison
result in any normal case.
I also fixed it so that if there is a wrong key, we now get a proper error
message and not an assert.
Creating a temporary table with Spider makes no sense because a Spider
table cannot hold any physical data and it requires additional
effort to manage even if it is configured correctly.
Set HTON_TEMPORARY_NOT_SUPPORTED in spider_hton->flags.
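A minimal sketch of the change (stub handlerton type and a placeholder bit
value; the real flag constant comes from the server headers): setting the
flag in the plugin's init makes the server reject temporary tables for the
engine.

    // Stand-ins; the real handlerton and HTON_TEMPORARY_NOT_SUPPORTED
    // are defined by the server.
    struct handlerton_sketch { unsigned long long flags= 0; };
    static const unsigned long long
      HTON_TEMPORARY_NOT_SUPPORTED_BIT= 1ULL << 5;   // placeholder value

    static int spider_db_init_sketch(handlerton_sketch *spider_hton)
    {
      spider_hton->flags|= HTON_TEMPORARY_NOT_SUPPORTED_BIT;
      return 0;
    }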
Reviewed-by: nayuta.yanagisawa@hey.com
Co-authored-by: d8sk4ueun@gmail.com
This crash happens on a combination of multiple conditions:
- There is a thread#1 running an "ANALYZE FORMAT=JSON" query for a
"SELECT .. FROM INFORMATION_SCHEMA.COLUMNS WHERE .. "
- The WHERE clause contains a stored function call, say f1().
- The WHERE clause is built in such a way that the function f1()
is never actually called, e.g.
WHERE .. AND (TRUE OR f1()=expr)
- The database contains multiple VIEWs that have the function f1() call,
e.g. in their <select list>
- The WHERE clause is built in such a way that these VIEWs match
the condition.
- There is a parallel thread#2 running. It creates or drops or recreates
some other stored routine, say f2(), which is not used in the ANALYZE query.
It effectively invalidates the stored routine cache for thread#1
without locking.
Note, it is important that f2() is NOT used by the ANALYZE query.
Otherwise, thread#2 would be locked until the ANALYZE query
finishes.
When all of the above conditions are met, the following happens:
1. thread#1 starts the ANALYZE query. It notices a call to the stored function
f1() in the WHERE condition. The function f1() gets parsed and cached
in the SP cache. Its address also gets assigned to Item_func_sp::m_sp.
2. thread#1 starts iterating through all tables that
match the WHERE condition to find the information about their columns.
3. thread#1 processes columns of the VIEW v1.
It notices a call to f1() in the VIEW v1 definition.
But f1() was already cached in step#1 and it is up to date.
So nothing happens with the SP cache.
4. thread#2 re-creates f2() in a non-locking mode.
It effectively invalidates the SP cache in thread#1.
5. thread#1 processes columns of the VIEW v2.
It notices a call to f1() in the VIEW v2 definition.
It also notices that the cached version of f1() is not up to date.
It frees the old definition of f1(), parses it again, and puts a
new version of f1() to the SP cache.
6. thread#1 finishes processing rows and generates the JSON output.
When printing the "attached_condition" value, it calls
Item_func_sp::print() for f1(). But this Item_func_sp links
to the old (freed) version of f1().
The above scenario demonstrates that Item_func_sp::m_sp can point to an
already freed instance when Item_func_sp::func_name() is called,
so accessing Item_sp::m_sp->m_handler is not safe.
This patch rewrites the code to use Item_func_sp::m_handler instead,
which is always reliable.
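A minimal sketch of the change (stub types and invented helper names, not
the real Sp_handler API): func_name() derives its output only from state
owned by the Item (the handler and the stored name), never from the cached
sp_head that another thread's cache refresh may have freed.

    #include <string>

    struct Sp_handler_sketch              // stand-in for the real handler
    {
      std::string type_name;              // e.g. "FUNCTION"
    };

    struct sp_head_sketch { std::string qname; };   // may be freed/replaced

    struct Item_func_sp_sketch
    {
      const Sp_handler_sketch *m_handler; // owned by the Item, always valid
      sp_head_sketch *m_sp;               // cached pointer, can go stale
      std::string m_name;                 // name stored in the Item itself

      std::string func_name() const
      {
        // Before: derived from m_sp (unsafe after an SP cache refresh).
        // After:  derived from m_handler and the Item's own data only.
        return m_handler->type_name + " " + m_name;
      }
    };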
Note, this patch is only a cleanup for MDEV-28166 to quickly fix the regression.
It fixes MDEV-28267. But it does not fix the core problem:
The code behind I_S does not take into account that the SP
cache can be updated while evaluating rows of the COLUMNS table.
This is a corner case and it never happens with any other tables.
I_S.COLUMNS is very special.
Another example of the core problem is reported in MDEV-25243.
The code accesses Item_sp::m_sp->m_chistics of an
already freed m_sp, again. It will be addressed separately.
The partitioning engine does not support the table-level DATA/INDEX
DIRECTORY specification.
If one creates a non-partitioned table with the DATA/INDEX DIRECTORY
option and then performs ALTER TABLE ... PARTITION BY on it, the
DATA/INDEX DIRECTORY specification of the old schema is ignored.
The behavior might be a bit surprising for users because the value
of a usual table option applies to all the partitions. Thus, we raise
a warning on such ALTER TABLE ... PARTITION BY.
The original query "SELECT IF(COUNT(a.`id`)>=0,'Y','N') FROM t" is
transformed to "SELECT COUNT(a.`id`), IF(ref >= 0, 'Y', 'N') FROM t",
where ref is an Item_ref to "COUNT(a.`id`)", by split_sum_func().
Spider walks the item list twice, invoking spider_db_print_item_type().
The first invocation is in spider_create_group_by_handler() with
str == NULL. The second one is in spider_group_by_handler::init_scan()
with str != NULL.
spider_db_print_item_type() prints nothing at the first invocation,
and it prints the item at the second invocation. However, at the second
invocation, the above-mentioned ref to "COUNT(a.`id`)" points to
a field in a temporary table where the result will be stored. Thus,
to look behind the Item_ref, Spider needs to generate the query earlier.
A possible fix would be to generate the query to be sent already in
spider_create_group_by_handler(). However, that fix would require a
considerable amount of changes to Spider's GROUP BY handler.
I'd like to avoid that.
So, I fix the problem by not using the GROUP BY handler when a
query contains an Item_ref whose table_name, name, and alias_name_used
are not set.
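A minimal sketch of the added condition (member names taken from the commit
message, surrounding types simplified): the GROUP BY handler is skipped
when it would have to print such an unnamed Item_ref.

    // Simplified view of the fields the check looks at.
    struct Item_ref_sketch
    {
      const char *table_name;
      const char *name;
      bool alias_name_used;
    };

    // True if the ref can be printed back into an SQL fragment; false for
    // refs produced by split_sum_func() that only point into a temporary
    // table, in which case the GROUP BY handler must not be used.
    static bool ref_printable_for_group_by_handler(const Item_ref_sketch &ref)
    {
      return ref.table_name != nullptr || ref.name != nullptr ||
             ref.alias_name_used;
    }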
Currently the SST script for mariabackup stops on any failure while archiving
logs, e.g. when it is unable to create a directory, permissions are
insufficient, gzip fails, etc. However, in case of such problems, the script
should issue a warning and continue without archiving rather than exit with
a fatal error.
This commit adds this fix to the SST script for mariabackup.
This failure was caused by MDEV-25975, which removed the parameter
innodb_disallow_writes.
Added a check for wsrep_sst_disable_writes to the function
ibuf_merge_in_background().
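A minimal sketch of the added check (the flag is represented by a plain
bool here; not the actual ibuf code): the background merge becomes a no-op
while writes are disabled for the SST.

    // Stand-in for the real wsrep_sst_disable_writes flag.
    static bool wsrep_sst_disable_writes_sketch= false;

    static unsigned long ibuf_merge_in_background_sketch(bool full)
    {
      if (wsrep_sst_disable_writes_sketch)
        return 0;               // skip merging while SST blocks writes
      // ... normal change-buffer merge work ...
      (void) full;
      return 0;
    }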
The cause of the bug is an overflow of the uint16 KEY_PART_INFO::length and/or
uint16 KEY_PART_INFO::store_length. The solution is to increase the size
of those variables to the 'uint' type (which is 32 bits long).
If JOIN::create_postjoin_aggr_table() encounters errors during execution,
then free_tmp_table() is called twice for JOIN_TAB::aggr.
The solution is to initialize JOIN_TAB::aggr only on successful completion
of JOIN::create_postjoin_aggr_table().
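A minimal sketch of the fix pattern (simplified types, not the actual JOIN
code): the temporary table is published into JOIN_TAB::aggr only after it
has been created successfully, so the error path and the later cleanup
cannot both free it.

    struct Tmp_table_sketch { /* ... */ };
    struct Join_tab_sketch  { Tmp_table_sketch *aggr= nullptr; };

    // Returns true on error, as the server functions do.
    static bool create_postjoin_aggr_table_sketch(Join_tab_sketch *tab,
                                                  bool simulate_error)
    {
      Tmp_table_sketch *table= new Tmp_table_sketch();
      if (simulate_error)       // any failure while setting the table up
      {
        delete table;           // freed exactly once, here
        return true;
      }
      tab->aggr= table;         // assigned only on successful completion
      return false;
    }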
We will remove the parameter innodb_disallow_writes because it is badly
designed and implemented. The parameter was never allowed at startup.
It was only internally used by Galera snapshot transfer.
If a user executed
SET GLOBAL innodb_disallow_writes=ON;
the server could hang even on subsequent read operations.
During Galera snapshot transfer, we will block writes
to implement an rsync-friendly snapshot, as follows:
sst_flush_tables() will acquire a global lock by executing
FLUSH TABLES WITH READ LOCK, which will block any writes
at the high level.
sst_disable_innodb_writes(), invoked via ha_disable_internal_writes(true),
will suspend or disable InnoDB background tasks or threads that could
initiate writes. As part of this, log_make_checkpoint() will be invoked
to ensure that anything in the InnoDB buf_pool.flush_list will be written
to the data files. This has the nice side effect that the Galera joiner
will avoid crash recovery.
The changes to sql/wsrep.cc and to the tests are based on a prototype
that was developed by Jan Lindström.
Reviewed by: Jan Lindström