Item_{date|datetime}_typecast::get_date() erroneously passed the
TIME_INTERVAL_DAY flag from the caller to args[0], which made
CAST('100000:00:00' AS DATETIME) parse '100000:00:00' as TIME
rather than DATETIME.
Suppressing this flag.
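A minimal way to observe this at the SQL level (a sketch; the exact
pre- and post-fix results are not shown here):
  -- Before the fix the TIME_INTERVAL_DAY flag leaked to args[0], so the
  -- string literal was parsed as a TIME value; with the flag suppressed
  -- it is parsed as a DATETIME literal.
  SELECT CAST('100000:00:00' AS DATETIME);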
The cursor protocol cannot handle SELECT ... INTO.
Disable it for loaddata.
For grant_plugin and innodb_fts.fulltext, changed
the tests to use a temporary table rather than a
user variable.
LEAST() and GREATEST() erroneously calculated the result as signed
for BIGINT UNSIGNED arguments.
Adding a new method for unsigned arguments:
Item_func_min_max::val_uint_native()
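For illustration, a sketch of the kind of query affected (the exact
pre-fix result is an assumption):
  -- With signed evaluation the larger unsigned argument compares as a
  -- negative number, so LEAST() can pick the wrong value;
  -- val_uint_native() keeps the comparison unsigned.
  SELECT LEAST(CAST(1 AS UNSIGNED),
               CAST(18446744073709551615 AS UNSIGNED));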
Item_func_substr::fix_length_and_dec() incorrectly calculated its max_length
as 0 when a huge number was passed as the third argument:
substring('hello', 1, 4294967295)
Fixing this.
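One way the wrong max_length becomes visible (a sketch, assuming the
created column type follows the item's max_length):
  -- With max_length computed as 0 the derived column metadata is
  -- zero-length; after the fix it reflects the source string length.
  CREATE TABLE t1 AS SELECT substring('hello', 1, 4294967295) AS c;
  SHOW CREATE TABLE t1;
  DROP TABLE t1;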
Limit only signed integer fields to LONGLONG_MAX.
Double and decimal fields do not need this limit, as they
can store integers up to ULONGLONG_MAX without problems.
Sys_var_typelib did not work when assigned an expression
with a character set with mbminlen > 1.
Using val_str_ascii() instead of val_str() to fix this.
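One possible way to hit this class of problem (which variables are
affected is an assumption; sql_mode is used here only as an example of a
TYPELIB-based variable):
  -- The assigned expression has a character set with mbminlen > 1;
  -- val_str_ascii() converts it to an ASCII-compatible charset first.
  SET @@sql_mode = CONVERT('STRICT_ALL_TABLES' USING utf16);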
Fixed main.mysql_upgrade to pass when unix_socket plugin is unavailable.
Also don't redefine _GNU_SOURCE, which was previously defined by command
line/environment. This fixes silent auth_socket build failure with
MYSQL_MAINTAINER_MODE=ERR.
Reported in Debian bug #1084293, from the tzdata changelog:
* Upstream obsoleted the System V names CET, CST6CDT, EET, EST*, HST, MET,
MST*, PST8PDT, and WET. They are symlinks now. Move those zones to
tzdata-legacy and update /etc/localtime on package update to the new names.
Please use Etc/GMT* in case you want to avoid DST changes.
As a result, the timezone output started to show CET (or CEST) as the
current timezone. Due to the way the test was written, it is only
possible to hit this error when running mtr from a package; the
internals of MTR fix the timezone, so this will never be hit in a build.
As such, Europe/Budapest was added as the timezone, being the Central
Europe Standard Time entry (per sql/win_tzname_data.h and its derived
unicode.org source), hard-fixed by a timezone.opt file so it will
always run. The have_cet_timezone check is there to verify the zone
data is installed (it was absent on buildbot Ubuntu 22.04 and Windows).
A replace_result maps the output to CET and treats MET/MEST as the
same while it is on its way out.
Thanks Santiago Vila for the bug report and Otto for forwarding it.
MDEV-35350 consolidated two methods by which MTR tests
could wait until a file had certain content
written to it; these were only available in 10.6+.
This patch only backports the functionality to
10.5 in case some test wants to use it (nothing
uses it in 10.5 at present).
The cleanup bc46f1a7d9 from 10.6 is also
backported so SEARCH_TYPE doesn't need to be
accounted for in the new search_pattern_in_file.inc
logic.
- Replace statement fails with a duplicate key error when a multiple
unique key conflict happens (see the sketch below). The reason is that
the server expects the InnoDB engine to store the conflicting keys in
ascending order, but InnoDB does not store the conflicting keys
in ascending order.
Fix:
===
- Enable HA_DUPLICATE_KEY_NOT_IN_ORDER for the InnoDB storage engine
only when the unique index order differs between the .frm file and the
InnoDB dictionary.
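A sketch of the kind of scenario described above (the table and values
are hypothetical):
  CREATE TABLE t1 (a INT UNIQUE, b INT UNIQUE) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 10), (2, 20);
  -- This REPLACE conflicts with both unique keys; when the engine
  -- reported the conflicting keys out of order, the statement could
  -- fail with a duplicate key error instead of replacing the rows.
  REPLACE INTO t1 VALUES (1, 20);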
Problem:
========
- dict_stats_table_clone_create() does not initialize the
flag stats_error_printed in either dict_table_t or dict_index_t.
Because dict_stats_save_index_stat() is operating on a copy
of a dict_index_t object, it appears that
dict_index_t::stats_error_printed will always be false
for actual metadata objects, and uninitialized in
dict_stats_save_index_stat().
Solution:
=========
dict_stats_table_clone_create(): Assign stats_error_printed
for table and index while copying the statistics
Ignoring a configured server_id should not be a warning, because
the correct configuration is documented. Changed the message to info
level, with a more detailed message stating what was configured and
what will actually be used.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
While applying a CTAS log event, we peek into the relay log to see if the
CTAS contains inserted rows or if it is empty.
The peek function didn't check for the end-of-file condition when trying to
get the next event from the log, and thus it hung.
The fix includes checking for end-of-file while peeking for log events
and treating a returned XID_EVENT value as a sign of an empty CTAS.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
It was found that unnecessary work of building Ordered_key structures
is being done when processing NULL-aware materialization for IN predicates
having only one column. In fact, the logic for that simplified case can be
expressed as follows.
Say we have the predicate left_expr IN (SELECT <subq1>), where left_expr is
a scalar (not a tuple).
Then
if (left_expr is NULL) {
  if (subq1 produced any rows) {
    // note that we don't care if subq1 has produced
    // NULLs or not.
    NULL IN (<some values>) -> UNKNOWN, i.e. NULL.
  } else {
    NULL IN ({empty-set}) -> FALSE.
  }
} else {
  // left_expr is a non-NULL value
  if (subq1 output has a match for left_expr) {
    left_expr IN (..., left_expr ...) -> TRUE
  } else {
    // no "known" matches.
    if (subq1 output has a NULL) {
      left_expr IN ( ... NULL ...) ->
        (NULL could have been a match or not)
        -> NULL.
    } else {
      // subq1 didn't produce any "UNKNOWNs" so
      // we're positive there weren't any matches
      -> FALSE.
    }
  }
}
This commit introduces subselect_single_column_partial_engine class
implementing the logic described.
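For illustration, the same cases at the SQL level (a sketch; the table
is hypothetical):
  CREATE TABLE t1 (a INT);
  SELECT NULL IN (SELECT a FROM t1);   -- FALSE: empty set
  INSERT INTO t1 VALUES (1), (NULL);
  SELECT NULL IN (SELECT a FROM t1);   -- NULL: subquery produced rows
  SELECT 1 IN (SELECT a FROM t1);      -- TRUE: a match exists
  SELECT 2 IN (SELECT a FROM t1);      -- NULL: no match, but a NULL row
  DELETE FROM t1 WHERE a IS NULL;
  SELECT 2 IN (SELECT a FROM t1);      -- FALSE: no match and no NULLs
  DROP TABLE t1;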
Reviewer: Sergei Petrunia <sergey@mariadb.com>
Two problems solved:
1) Item_default_value makes a shallow copy, so the copy
should not delete the field belonging to the Item
2) Item_default_value should not inherit
derived_field_transformer_for_having and
derived_field_transformer_for_where (in this variant
pushing DEFAULT(f) is prohibited (returns NULL), but
if "this" were returned it would be allowed (that should
go with a lot of tests))
create_process() temporarily redirects STDOUT/STDERR output to the error
log file. This might redirect mtr output on Windows, so avoid it by holding
flush_lock.
The code in best_access_path() uses PREV_BITS(uint, N) to
compute a bitmap of all keyparts: {keypart0, ..., keypart{N-1}}.
The problem is that the PREV_BITS($type, N) macro code can't handle the case
when N = <number of bits in $type>.
Also, why use PREV_BITS(uint, ...) for key part map computations when
we could have used PREV_BITS(key_part_map, ...)?
Fixed both:
- Changed PREV_BITS(type, N) to handle any N in [0; n_bits(type)].
- Changed PREV_BITS() to use key_part_map when computing key_part_map bitmaps.
MDEV-34447 removed setting first_cond_optimization to 0 in update and
delete when leaf_tables_saved. This can cause problems when two ps
executions of an update go through different paths, where the first ps
execution goes through single-table update only and the second ps
execution also goes through multi-table update. When this happens, the
first_cond_optimization of the outer query is not set to false during
the first ps execution because optimize() is not called for the outer
query. But then the second ps execution will call optimize() on the
outer query, which with first_cond_optimization==true trips the 2nd-ps
mem leak detection.
This is not a problem in higher versions as both executions go through
multi-table updates, possibly due to MDEV-28883.
We fix this problem by restoring the FALSE assignments to
first_cond_optimization.
Post-fix for MDEV-35144.
Cannot allocate option values on the statement arena, because
HA_CREATE_INFO is shallow-copied for every execution, so if the
option_list was initially empty, it will be reset for every execution
and any values allocated on the statement arena will be lost.
Cannot allocate option values on the execution arena, because
HA_CREATE_INFO is shallow-copied for every execution, so if the
option_list was initially NOT empty, any values appended to the
end will be preserved and if they're on the execution arena their
content will be destroyed.
Let's use thd->change_item_tree() to save and restore necessary pointers
for every execution.
Follow-up for 3da565c41d
Clean up configuration and tests. Add wait conditions to make
sure the test continues from a clean state.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- InnoDB fulltext rebuilds the FTS COMMON table while adding a
new fulltext index. This can be optimized by avoiding rebuilding
the FTS COMMON table in case it already exists.
Reviewed-by: Marko Mäkelä <marko.makela@mariadb.com>
When adding a column or index that uses plugin-defined
sysvar-based options with CREATE ... LIKE, the server
was using the current value of the sysvar, not the default one.
This is because the parse_option_list() function was used both in create
and open, and it tried to guess whether it was a create (need to use the
current sysvar value and add a new name=value pair to the list)
or an open (need to use the default, without extending the list).
Let's move the list-extending functionality into a separate
function and call it explicitly when needed. Operations that
add new objects (CREATE, ALTER ... ADD) will extend the list,
other operations (ALTER, CREATE ... LIKE, open) will not.
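A sketch of the behavior (the engine and the option name are
hypothetical stand-ins for any plugin-defined sysvar-based option):
  SET GLOBAL example_varopt = 42;         -- hypothetical plugin sysvar
  CREATE TABLE t1 (a INT) ENGINE=EXAMPLE;
  SET GLOBAL example_varopt = 1000;
  -- Before the fix this picked up the current sysvar value (1000) for
  -- the option instead of the default; now CREATE ... LIKE behaves
  -- like an open and does not extend the option list.
  CREATE TABLE t2 LIKE t1;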
The reason for the crash was that the code assumed that
SELECT_LEX.ref_pointer_array would be initialized with zero, which was
not the case. This caused the test of
if (!select->ref_pointer_array[counter]) in item.cc to be unpredictable
and caused crashes.
Fixed by zero-filling ref_pointer_array on allocation.
Remove the workaround for MDEV-13941; it served for 5 years, and all affected
pre-release 10.2 installations should have been fixed in the meantime.
Apparently InnoDB is using the is_sparse parameter in os_file_set_size()
inconsistently, and it now passes is_sparse=false during the first file
extension. With the MDEV-13941 workaround in place, that would unsparse
the file, which makes compression not work at all anymore.
Implement the variable legacy_xa_rollback_at_disconnect to support
backwards compatibility for applications that rely on the pre-10.5
behavior for connection disconnect, which is to roll back the
transaction (in violation of the XA specification).
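A sketch of the difference (the table is hypothetical, and the exact
variable scope and default are not shown):
  -- With legacy_xa_rollback_at_disconnect enabled:
  XA START 'trx1';
  INSERT INTO t1 VALUES (1);
  XA END 'trx1';
  XA PREPARE 'trx1';
  -- Disconnecting now rolls the prepared transaction back (the pre-10.5
  -- behavior, violating the XA specification); with the variable off,
  -- the prepared transaction survives the disconnect.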
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Problem:
=======
- Insert into a ROW_FORMAT=REDUNDANT table fails after an
instant drop of a BLOB column. Instant drop column only marks
the column as hidden, and a subsequent insert statement tries
to insert a NULL value for the dropped BLOB column and returns
the fixed length of the blob type as 65535. This leads to a
"row size too large" error.
Fix:
====
For a redundant-format table, if the non-fixed dropped column can be NULL,
then set the length of the field type to 0.
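A sketch of the failing scenario (the exact DDL is an assumption):
  CREATE TABLE t1 (a INT, b BLOB) ROW_FORMAT=REDUNDANT ENGINE=InnoDB;
  ALTER TABLE t1 DROP COLUMN b, ALGORITHM=INSTANT;
  -- Before the fix, the dropped BLOB column was still accounted for
  -- with a fixed length of 65535, so this insert could fail with
  -- "Row size too large".
  INSERT INTO t1 VALUES (1);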
- InnoDB fails to set the index information or index number
for the spatial index error HA_ERR_NULL_IN_SPATIAL.
row_build_spatial_index_key(): Initialize the tmp_mbr array completely.
check_if_supported_inplace_alter(): Fix the spelling mistake of alter