Item_null_result did not override type_handler() because of a wrong merge
of d8a9b524f2 (MDEV-14221) from 10.1.
Overriding type_handler().
Removing the old-style field_type() method. It's not relevant any more.
The code incorrectly assumed in multiple places that TYPELIB
values cannot have 0x00 bytes inside. In fact they can:
CREATE TABLE t1 (a ENUM(0x61, 0x0062) CHARACTER SET BINARY);
Note, the TYPELIB value encoding used in FRM is ambiguous about 0x00.
So this fix is partial.
It fixes 0x00 bytes in many (but not all) places:
- In the middle or in the end of a value:
CREATE TABLE t1 (a ENUM(0x6100) ...);
CREATE TABLE t1 (a ENUM(0x610062) ...);
- In the beginning of the first value:
CREATE TABLE t1 (a ENUM(0x0061));
CREATE TABLE t1 (a ENUM(0x0061), b ENUM('b'));
- In the beginning of the second (and following) value of the *last* ENUM/SET
in the table:
CREATE TABLE t1 (a ENUM('a',0x0061));
CREATE TABLE t1 (a ENUM('a'), b ENUM('b',0x0061));
However, it does not fix 0x00 when:
- a 0x00 byte is in the beginning of a value of a non-last ENUM/SET;
this causes an error:
CREATE TABLE t1 (a ENUM('a',0x0061), b ENUM('b'));
ERROR 1033 (HY000): Incorrect information in file: './test/t1.frm'
This is an ambiguous case and will be fixed separately.
We need a new TYPELIB encoding to fix this.
Details:
- unireg.cc
The function pack_header() incorrectly used strlen() to detect
a TYPELIB value length. Adding a new function typelib_values_packed_length()
which uses TYPELIB::type_lengths[n] to detect the n-th value length,
and reusing the new function in pack_header() and packed_fields_length().
- table.cc
fix_type_pointers() assumed in multiple places that values cannot have
0x00 inside and used strlen(TYPELIB::type_names[n]) to set
the corresponding TYPELIB::type_lengths[n].
Also, fix_type_pointers() did not check the encoded data for consistency.
Rewriting fix_type_pointers() code to populate TYPELIB::type_names[n] and
TYPELIB::type_lengths[n] at the same time, so no additional loop
with strlen() is needed any more.
Adding many data consistency tests.
Fixing the main loop in fix_type_pointers() to use memchr() instead of
strchr() to handle 0x00 properly (see the sketch after this list).
Fixing create_key_infos() to return the result in a LEX_STRING rather
than in a char*.
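For illustration, a minimal standalone sketch (value and variable names
hypothetical) of why memchr() is needed here:

  #include <assert.h>
  #include <string.h>

  int main()
  {
    /* The ENUM value 0x610062 ('a', 0x00, 'b') followed by a ',' separator */
    const char value[]= {'a', '\0', 'b', ','};
    /* strchr() treats 0x00 as the string terminator, so it never sees past
       the embedded 0x00 byte and "loses" the separator */
    assert(strchr(value, ',') == NULL);
    /* memchr() scans an explicit number of bytes and is 0x00-safe */
    assert(memchr(value, ',', sizeof(value)) == value + 3);
    return 0;
  }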
Executing CHECK TABLE with streaming replication enabled reports
error "Streaming replication not supported with
binlog_format=STATEMENT".
Administrative commands, such as CHECK TABLE, are not replicated and
temporarily set the binlog format to statement.
To avoid the problem, report the error only for active transactions
for which streaming replication is enabled.
Analysis:
========
The RESET MASTER TO # command deletes all binary log files listed in the index
file, resets the binary log index file to be empty, and creates a new binary
log with number #. When the user-provided binary log number is greater than
the maximum allowed value '2147483647', the server fails to generate a new
binary log. The RESET MASTER statement marks the binlog closure status as
'LOG_CLOSE_TO_BE_OPENED' and exits. Statements that follow RESET MASTER and
try to write to the binary log find log_state != LOG_CLOSED, proceed to write
to the binary log cache, and this results in a crash.
Fix:
===
During MYSQL_BIN_LOG open, if generation of the new binary log name fails,
then "log_state" needs to be marked as "LOG_CLOSED". With this, further
statements will find the binary log closed and will skip writing to it.
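A minimal sketch of the shape of the fix (the name generate_new_name() and
the surrounding open logic are assumptions):

  /* Sketched fragment of MYSQL_BIN_LOG::open(): if the new binlog name
     cannot be generated (e.g. the user-supplied number exceeds 2147483647),
     mark the log closed so that later statements skip binlog writes instead
     of writing to the cache of a half-opened log and crashing. */
  if (generate_new_name(new_name, log_name))
  {
    log_state= LOG_CLOSED;
    return 1;                              /* open failed */
  }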
Problem:
When handling a query like this:
VALUES ('') UNION SELECT _utf16 0x0020 COLLATE utf16_bin;
Type_handler_string_result::Item_hybrid_func_fix_attributes()
tried to apply character set conversion to Item_type_holder,
which caused a crash on DBUG_ASSERT(0) inside Item_type_holder::val_str().
Fix:
Overriding Item_type_holder's methods to avoid this, as follows:
bool const_item() const { return false; }
bool is_expensive() { return true; }
Removing a wrong DBUG_ASSERT:
When Item_param gets "unfixed" in cleanup(), its "fixed" member is assigned
false, while item_type keeps its value. So the assert was wrong.
Perhaps, instead of removing the assert, it would have been possible to reset
item_type to NO_VALUE in cleanup(). But this is not very important:
it's implemented in 10.4 in a better way:
Item_param::is_fixed() always returns true and it does not need to be "unfixed".
1. Code simplification:
Item_default_value handled all these values:
a. DEFAULT(field)
b. DEFAULT
c. IGNORE
and had various conditions to distinguish (a) from (b) and from (c).
Introducing a new abstract class Item_contextually_typed_value_specification,
to handle (b) and (c), so the hierarchy now looks as follows:
Item
  Item_result_field
    Item_ident
      Item_field
        Item_default_value                      - DEFAULT(field)
  Item_contextually_typed_value_specification
    Item_default_specification                  - DEFAULT
    Item_ignore_specification                   - IGNORE
2. Introducing a new virtual method is_evaluable_expression() to
determine if an Item is:
- a normal expression, so its val_xxx()/get_date() methods can be called
- or just an expression substitute, whose value methods cannot be called.
3. Disallowing Items that are not evaluable expressions in table value
constructors.
TIME_ZONE_ID_UNKNOWN return code from GetDynamicTimeZoneInformation()
does not mean failure.
It only means that the daylight saving dates in the returned struct are not
valid. TIME_ZONE_ID_INVALID means failure; in this case "unknown" should be
returned.
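A sketch of the corrected return-code handling (the wrapper function is
hypothetical; the Windows API calls are as documented):

  #include <windows.h>

  /* TIME_ZONE_ID_INVALID is the only failure code. TIME_ZONE_ID_UNKNOWN
     just means the daylight saving dates in *tzinfo are not valid, while
     the time zone name itself is still usable. */
  BOOL get_time_zone_info(DYNAMIC_TIME_ZONE_INFORMATION *tzinfo)
  {
    return GetDynamicTimeZoneInformation(tzinfo) != TIME_ZONE_ID_INVALID;
  }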
In a multithreaded build (at least confirmed with Windows ninja and msbuild),
at the end of the "sql" target compilation, only 2 processors are used,
compiling either sql_yacc.cc or sql_yacc_ora.cc.
Thus, linking of dependent executables or libraries is delayed while the
build is underusing the CPU.
Rearrange the source list to improve parallelism.
The assert was caused by early cleanup of a user variable participating
in BINLOG @var,@var, where it is used twice. The base code did not expect
its value to be cleared prematurely.
Fixed by relocating the user variable destruction to after all operations
with its value are over.
The code erroneously allowed both:
INSERT INTO t1 (vcol) VALUES (DEFAULT);
INSERT INTO t1 (vcol) VALUES (DEFAULT(non_virtual_column));
The former is OK, but the latter is not.
Adding a new virtual method in Item:
virtual bool vcol_assignment_allowed_value() const { return false; }
Item_null, Item_param and Item_default_value override it.
Item_default_value overrides it in such a way as to:
- allow DEFAULT
- disallow DEFAULT(col)
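A sketch of how the Item_default_value override can distinguish the two
forms, assuming the optional column argument is kept in a member called arg
(NULL for bare DEFAULT):

  bool Item_default_value::vcol_assignment_allowed_value() const
  {
    return arg == NULL;    /* allow DEFAULT, disallow DEFAULT(col) */
  }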
Apparently, in stats_reset_table(), the innocuous
memset(&group->counters, 0, sizeof(group->counters));
is converted by clang to SSE2 instructions.
The problem is that "group" is not correctly aligned,
despite MY_ALIGNED(CPU_LEVEL1_DCACHE_LINESIZE) in the thread_group_t
declaration.
It is not aligned because it was allocated with my_malloc, since
commit fd9f1638, MDEV-5205. Previously all_groups was a
statically allocated array.
Fix is to remove MY_ALIGNED, and pad the struct instead.
When neither MSAN nor Valgrind are enabled, declare
Field::mark_unused_memory_as_defined() as an empty inline function,
instead of declaring it as a virtual function.
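The declaration pattern, sketched (the exact MSAN/Valgrind guard macro names
are assumptions):

  #if defined(HAVE_valgrind) || defined(HAVE_MSAN)   /* guard names assumed */
    virtual void mark_unused_memory_as_defined();
  #else
    /* Empty non-virtual inline: calls compile away entirely and no
       vtable slot is spent on a no-op. */
    void mark_unused_memory_as_defined() {}
  #endif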
The same array instance was shared by two Item_func_in instances. The first
Item_func_in instance is freed on table close; the second one is freed in
cleanup_after_query().
get_copy() depends on the copy constructor for copying an item and hence does
a shallow copy with the default copy constructor. Use build_clone() for a
deep copy of Item_func_in.
MDEV-22073 MSAN use-of-uninitialized-value in
collect_statistics_for_table()
Other things:
innodb.analyze_table was changed to mainly test statistics
collection. This was discussed with Marko.
The issue here was that when the schema was changed, the value of
THD::server_status was OR-ed with SERVER_SESSION_STATE_CHANGED.
For custom aggregate functions, we checked whether server_status was equal
to SERVER_STATUS_LAST_ROW_SENT to decide that execution of the custom
aggregate function should terminate, as there are no more rows to fetch.
The check should instead be that if the server status has the bit set for
SERVER_STATUS_LAST_ROW_SENT, then we should terminate the execution of the
custom aggregate function.
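The difference, sketched (the terminate call is a hypothetical placeholder):

  /* Wrong: fails as soon as any other status bit is also set, e.g.
     SERVER_SESSION_STATE_CHANGED after a schema change. */
  if (thd->server_status == SERVER_STATUS_LAST_ROW_SENT)
    terminate_custom_aggregate();

  /* Right: test only the bit we care about. */
  if (thd->server_status & SERVER_STATUS_LAST_ROW_SENT)
    terminate_custom_aggregate();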
Problem was that trx->lock.was_chosen_as_wsrep_victim variable was
not set back to false after it was set true.
wsrep_thd_bf_abort
  Add assertions for correct mutex status and take the necessary
  mutexes before calling thd->awake_no_mutex().
innobase_rollback_trx()
  Reset trx->lock.was_chosen_as_wsrep_victim.
wsrep_abort_slave_trx()
  Removed unused function.
wsrep_innobase_kill_one_trx()
  Added a function comment, removed unnecessary parameters
  and added debug assertions to enforce correct usage. Added
  more debug output to help out on error analysis.
wsrep_abort_transaction()
  Added debug assertions and removed unused variables.
trx0trx.h
  Removed the assert_trx_is_free macro and replaced it with
  an assert_freed() member function.
trx_create()
  Use the above assert_freed() and initialize wsrep variables.
trx_free()
  Use assert_freed().
trx_t::commit_in_memory()
  Reset lock.was_chosen_as_wsrep_victim.
trx_rollback_for_mysql()
  Reset trx->lock.was_chosen_as_wsrep_victim.
Add test case galera_bf_kill
For the case when the optimizer does the IN-EXISTS transformation,
the equality condition is injected in the WHERE OR HAVING clause of
the subquery. If the select list of the subquery has a reference to
the parent select, make sure to use the reference and not the original
item.
The DECIMAL data type branch in Item_func_int_val::fix_length_and_dec()
incorrectly used DOUBLE-style length calculation, which resulted in
a smaller data type than the actual result of FLOOR()/CEIL() needs.
Type_handler_xxx::Item_const_eq() can handle only non-NULL values.
The code in Item_basic_value::eq() did not take this into account.
Adding a test to detect three different combinations:
- Both values are NULLs, return true.
- Only one value is NULL, return false.
- Both values are not NULL, call Type_handler::Item_const_eq()
to check equality.
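The three-way logic, sketched as a standalone helper (the function name and
the exact Item_const_eq() signature are assumptions):

  bool const_values_eq(const Item *a, const Item *b, bool binary_cmp)
  {
    if (a->null_value && b->null_value)
      return true;                  /* both values are NULLs */
    if (a->null_value || b->null_value)
      return false;                 /* only one value is NULL */
    /* both values are not NULL: safe to delegate to the type handler */
    return a->type_handler()->Item_const_eq(a, b, binary_cmp);
  }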
The function thd_query_safe() is used in the implementation of the
following INFORMATION_SCHEMA views:
information_schema.innodb_trx
information_schema.innodb_locks
information_schema.innodb_lock_waits
information_schema.rocksdb_trx
The implementation of the InnoDB views is in trx_i_s_common_fill_table().
This function invokes trx_i_s_possibly_fetch_data_into_cache(),
which will acquire lock_sys->mutex and trx_sys->mutex in order to
protect the set of active transactions and explicit locks.
While holding those mutexes, it will traverse the collection of
InnoDB transactions. For each transaction, thd_query_safe() will be
invoked.
When called via trx_i_s_common_fill_table(), thd_query_safe()
acquires THD::LOCK_thd_data while holding the InnoDB locks.
This will cause a deadlock with THD::awake() (such as when executing
KILL QUERY), because THD::awake() could invoke lock_trx_handle_wait(),
which attempts to acquire lock_sys->mutex while already holding
THD::LOCK_thd_data.
thd_query_safe(): Invoke mysql_mutex_trylock() instead of
mysql_mutex_lock(). Return the empty string if the mutex
cannot be acquired without waiting.
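The pattern, sketched (the THD accessors and the exact signature are
assumptions; buffer handling simplified):

  #include <string.h>

  size_t thd_query_safe(THD *thd, char *buf, size_t buflen)
  {
    size_t len= 0;
    /* Never wait for LOCK_thd_data while InnoDB mutexes are held:
       trylock returns nonzero if the mutex is busy. */
    if (mysql_mutex_trylock(&thd->LOCK_thd_data) == 0)
    {
      if (const char *query= thd->query())
      {
        len= thd->query_length();
        if (len >= buflen)
          len= buflen - 1;
        memcpy(buf, query, len);
      }
      mysql_mutex_unlock(&thd->LOCK_thd_data);
    }
    buf[len]= '\0';              /* empty string when the mutex was busy */
    return len;
  }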
The code did not take into account that:
- U+005C (backslash) can occupy more than mbminlen characters (e.g. in sjis)
- Some character sets do not have a code for U+005C (e.g. swe7)
Adding a new function my_wc_to_printable into MY_CHARSET_HANDLER to
cover all special cases more easily.
MDEV-22488 test failures: parts.partition_debug_innodb /
parts.partition_debug_myisam
The reason for the failure was a wrong printf() that accessed non-existent
data on the stack.
The reason the failure was hard to find was that the partition_debug_...
tests disable core dumps, so there was no trace in the logs that the server
had crashed.
Fixed by fixing the faulty push_warning_printf() and splitting the tests
into two parts, one that tests failures (with core dumps enabled) and one
that tests crash recovery.
The review and test splitting was done by Monty.
Rowid Filter check is just like Index Condition Pushdown check: before
we check the filter, we must check if we have walked out of the range
we are scanning. (If we did, we should return, and not continue the scan).
Consequences of this:
- Rowid filtering doesn't work for keys that have partially-covered
blob columns (just like Index Condition Pushdown)
- The rowid filter function has three return values: CHECK_POS (passed),
CHECK_NEG (filtered out), and CHECK_OUT_OF_RANGE.
All of the above is implemented in this patch.
Only MDL-prelock, but do not open, FK child tables for read-only (RESTRICT)
FK actions.
Tables still need to be opened for CASCADE actions, see 9180e8666b
Even if we're *allowed to* convert DELETE .. FOR PERIOD OF
into an update internally, it doesn't mean we'll *be able to*.
We always have to prepare for insert.
Fixing a race condition while collecting the engine independent statistics.
Thread1>
1) start running "ANALYZE TABLE t PERSISTENT FOR COLUMNS (..) INDEXES ($list)"
2) Walk through $list and save it in TABLE::keys_in_use_for_query
3) Close/re-open tables
Thread2>
1) Make some use of table t. This involves taking table t from
the table cache and putting it back (with TABLE::keys_in_use_for_query
reset to 0).
Thread1>
continues collecting EITS stats. Since TABLE::keys_in_use_for_query is now 0,
we will not collect statistics for the indexes in $list.
Disable IPO (interprocedural optimization, aka /GL) on Windows
for libraries from which server.dll exports symbols - exporting symbols
does not work for objects compiled with /GL.
- ALTER_ALGORITHM should be substituted when there is no mention of an
algorithm in the ALTER statement.
- Introduced algorithm(thd) in Alter_info. It returns the user-requested
algorithm. If the user doesn't specify an algorithm explicitly, it returns
the alter_algorithm variable.
- Changed algorithm() to get_algorithm(thd) to return the algorithm name for
displaying the error.
- Added set_requested_algorithm(algo_value) to avoid direct assignment to
the requested_algorithm variable.
- Avoid direct access to requested_algorithm, to encapsulate the
requested_algorithm variable.
For a unique key, if all the keyparts are NOT NULL or the predicates involving
the keyparts are NULL-rejecting, then we can use EQ_REF access instead of ref
access with the unique key.
The event scheduler has a THD which is used for e.g. keeping track
of the timing of the events. Thus, each scheduling of an event will
make use of this THD, which in turn allocates memory in the THD's
mem root. However, the mem root was never cleared, and hence, the
memory occupied would monotonically increase throughout the life
time of the server.
The root cause was found by Jon Olav Hauglid, and this fix clears the
THD's mem root for each event being scheduled.
Change-Id: I462d2b9fd9658c9f33ab5080f7cd0e0ea28382df
In fact, in MariaDB it cannot, but it can show spurious slaves
in SHOW SLAVE HOSTS.
A slave was registered in COM_REGISTER_SLAVE and unregistered after
COM_BINLOG_DUMP. If there was no COM_BINLOG_DUMP, it would never
be unregistered.
Post push fix.
when "replicate_wild_do_table" and "replicate_wild_ignore_table" filters
and changed dynamically the filter list gets cleared but the corresponding
"wild_do_table_inited" and "wild_ignore_table_inited" flags are not getting
cleared.
Fix: Clear the flags.
KEY_MULTI_RANGE::range_flag does not have correct flag bits for
per-endpoint flags (NEAR_MIN, NEAR_MAX, NO_MIN_RANGE, NO_MAX_RANGE).
It only has bits for flags that describe both endpoints.
So
- Document this.
- Switch optimizer trace to using {start|end}_key.flag values, instead.
This fixes the bug.
- Switch records_in_column_ranges() to doing that too. (This used to
work, because KEY_MULTI_RANGE::range_flag had correct flag value
for the last key component, and EITS only uses one-component
pseudo-indexes)
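A sketch of reading an exclusive lower bound from the endpoint itself rather
than from range_flag (the helper is hypothetical; flag semantics assumed
from the handler interface):

  /* An exclusive ("NEAR_MIN") lower bound is expressed by the start
     endpoint's own flag, not by KEY_MULTI_RANGE::range_flag. */
  bool min_endpoint_is_open(const KEY_MULTI_RANGE *range)
  {
    return range->start_key.flag == HA_READ_AFTER_KEY;
  }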
Cause:
In case of version-based conditional comments, if the condition evaluates
to false, the comment is converted to a regular comment for replication by
replacing "!" with " ".
A nested comment inside a conditional comment is replicated as is. Nested
comments are supported only inside conditional comments, so when the
comment on the slave is no longer a conditional comment, the statement
execution fails on the slave.
Fix:
Convert the nested comment start from "/*" to "(*" and the comment end from
"*/" to "*)" for replication.
Change-Id: I1a8e385a267b2370529eade094f0258fa96886c0
sig_return: Solaris/OSX return different function ptrs.
Move the definition to my_alarm.h as it's the only use.
prevents compile warnings (copied from 10.3 branch)
mysys/my_sync.c:136:19: error: 'cur_dir_name' defined but not used [-Werror=unused-const-variable=]
136 | static const char cur_dir_name[]= {FN_CURLIB, 0};
| ^~~~~~~~~~~~
Fix compile error: DEPRECATED leaked from SSL headers.
In file included from /export/home/dan/mariadb-server-10.4/sql/sys_vars.cc:37:
/export/home/dan/mariadb-server-10.4/sql/sys_vars.ic:69: error: "DEPRECATED" redefined [-Werror]
69 | #define DEPRECATED(X) X
|
In file included from /export/home/dan/mariadb-server-10.4/include/violite.h:150,
from /export/home/dan/mariadb-server-10.4/sql/sql_class.h:38,
from /export/home/dan/mariadb-server-10.4/sql/sys_vars.cc:36:
/usr/include/openssl/ssl.h:2356: note: this is the location of the previous definition
2356 | # define DEPRECATED __attribute__((deprecated))
|
Avoid Werror condition on non-Linux:
plugin/server_audit/server_audit.c:2267:7: error: variable 'db_len_off' set but not used [-Werror=unused-but-set-variable]
2267 | int db_len_off;
| ^~~~~~~~~~
plugin/server_audit/server_audit.c:2266:7: error: variable 'db_off' set but not used [-Werror=unused-but-set-variable]
2266 | int db_off;
| ^~~~~~
auth_gssapi: fix include path for Solaris.
Consistent with the upstream packaged patch:
https://github.com/OpenIndiana/oi-userland/blob/oi/hipster/components/database/mariadb-103/patches/06-gssapi.h.patch
compile warnings on Solaris
[ 91%] Building C object plugin/server_audit/CMakeFiles/server_audit.dir/server_audit.c.o
/plugin/server_audit/server_audit.c: In function 'auditing_v8':
/plugin/server_audit/server_audit.c:2194:20: error: unused variable 'db_len_off' [-Werror=unused-variable]
2194 | static const int db_len_off= 128;
| ^~~~~~~~~~
/plugin/server_audit/server_audit.c:2193:20: error: unused variable 'db_off' [-Werror=unused-variable]
2193 | static const int db_off= 120;
| ^~~~~~
/plugin/server_audit/server_audit.c:2192:20: error: unused variable 'cmd_off' [-Werror=unused-variable]
2192 | static const int cmd_off= 4432;
| ^~~~~~~
At top level:
/plugin/server_audit/server_audit.c:2192:20: error: 'cmd_off' defined but not used [-Werror=unused-const-variable=]
/plugin/server_audit/server_audit.c:2193:20: error: 'db_off' defined but not used [-Werror=unused-const-variable=]
2193 | static const int db_off= 120;
| ^~~~~~
/plugin/server_audit/server_audit.c:2194:20: error: 'db_len_off' defined but not used [-Werror=unused-const-variable=]
2194 | static const int db_len_off= 128;
| ^~~~~~~~~~
cc1: all warnings being treated as errors
tested on:
$ uname -a
SunOS openindiana 5.11 illumos-b97b1727bc i86pc i386 i86pc
Problem:
=======
SET @@GLOBAL.replicate_wild_ignore_table='';
SET @@GLOBAL.replicate_wild_do_table='';
Reports the following valgrind error.
Conditional jump or move depends on uninitialised value(s)
Rpl_filter::set_wild_ignore_table(char const*) (rpl_filter.cc:439)
Conditional jump or move depends on uninitialised value(s)
at 0xF60390: delete_dynamic (array.c:304)
by 0x74F3F2: Rpl_filter::set_wild_do_table(char const*) (rpl_filter.cc:421)
Analysis:
========
Lists of values provided for the options "wild_do_table" and
"wild_ignore_table" are stored in DYNAMIC_ARRAYs. When an empty list is
provided, these dynamic arrays are not initialized. The existing code treats
an empty element list as an error and tries to clean up the uninitialized
list. This results in the above valgrind issue.
Fix:
===
The cleanup should be initiated only when there is an error while parsing the
'wild_do_table' or 'wild_ignore_table' list and the dynamic array is in an
initialized state. Otherwise, for an empty list, it should simply return
success.
- Inplace alter shouldn't set a default date column to '0000-00-00' when the
table is not empty. So mysql_inplace_alter_table() copies
alter_ctx.error_if_not_empty to a new field of Alter_inplace_info.
ha_innobase::check_if_supported_inplace_alter() should check the
error_if_not_empty flag and return INPLACE_NOT_SUPPORTED if the table
is not empty.
This is a continuation of the MDEV-22153 bug, where the contiguity of history
partitions is broken. ha_partition::open_read_partitions() cannot
handle a non-contiguous list of default partitions.
Fix: when a default partition is dropped, convert the list of partitions to
non-default.
When opening the `user` table separately, reset `thd->open_tables`
for the duration of the open; otherwise the auto-repair fallback-and-retry
will close *all* tables (but reopen only `user`).
This is a backport of the applicable part of
commit 93475aff8d and
commit 2c39f69d34
from 10.4.
Before 10.4 and Galera 4, WSREP_ON is a macro that points to
a global Boolean variable, so it is not that expensive to
evaluate, but we will add an unlikely() hint around it.
WSREP_ON_NEW: Remove. This macro was introduced in
commit c863159c32
when reverting WSREP_ON to its previous definition.
We replace some use of WSREP_ON with WSREP(thd), like it was done
in 93475aff8d. Note: the macro
WSREP() in 10.1 is equivalent to WSREP_NNULL() in 10.4.
Item_func_rand::seed_random(): Avoid invoking current_thd
when WSREP is not enabled.
_ma_fetch_keypage(): Correct an assertion that used to always hold.
Thanks to clang -Wint-in-bool-context for flagging this.
double_to_datetime_with_warn(): Suppress -Wimplicit-int-float-conversion
by adding a cast. LONGLONG_MAX converted to double will actually be
LONGLONG_MAX+1.
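The shape of the change, sketched (helper name hypothetical; LLONG_MAX
standing in for the server's LONGLONG_MAX):

  #include <limits.h>

  /* (double) LLONG_MAX rounds up to 2^63, i.e. LLONG_MAX + 1. The explicit
     cast silences -Wimplicit-int-float-conversion and makes the actual
     comparison value visible. */
  bool exceeds_longlong_range(double value)
  {
    return value >= (double) LLONG_MAX;
  }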
Since commit 7198c6ab2d
the ./mtr --embedded tests would fail to start innodb_plugin
because of an undefined reference to the symbol wsrep_log().
Let us define a stub for that function. The embedded server
is never built WITH_WSREP, but there are no separate storage
engine builds for the embedded server. Hence, by default,
the dynamic InnoDB storage engine plugin would be built WITH_WSREP
and it would fail to load into the embedded server library due to
a reference to the undefined symbol.
Read TLS with my_thread_var; write TLS with set_mysys_var().
my_thread_var is no longer __attribute__((const)): this attribute
is simply incorrect here; read the GCC manual for more information.
sql/threadpool_generic.cc fails with that attribute.
The reason why we have wsrep_on() at all is that the macro WSREP(thd)
depends on the definition of THD, and that is intentionally an opaque
data type for InnoDB. So, we cannot avoid invoking wsrep_on(), but
we can evaluate the less expensive conditions thd && WSREP_ON before
calling the function.
Global_read_lock: Use WSREP_NNULL(thd) instead of wsrep_on(thd)
because we not only know the definition of THD but also that
the pointer is not null.
wsrep_open(): Use WSREP(thd) instead of wsrep_on(thd).
InnoDB: Replace thd && wsrep_on(thd) with wsrep_on(thd), now that
the condition has been merged to the definition of the macro
wsrep_on().
If the server is compiled WITH_WSREP=OFF, we should avoid evaluating
conditions on a global variable that is constant.
WSREP_ON_: Renamed from WSREP_ON. Defined only WITH_WSREP=ON.
WSREP_ON: Defined as unlikely(WSREP_ON_).
wsrep_on(): Defined as WSREP_ON && wsrep_service->wsrep_on_func().
The reason why we have wsrep_on() at all is that the macro WSREP(thd)
depends on the definition of THD, and that is intentionally an opaque
data type for InnoDB. So, we cannot avoid invoking wsrep_on(), but
we can evaluate the less expensive condition WSREP_ON before calling
the function.
mysqld_exit(): Change the assertion failure on
global_status_var.global_memory_used == 0
to fprintf, like in 0bcb65d358
It appears that in some cases, that variable may be nonzero
even when LeakSanitizer (WITH_ASAN) would not report errors.
This was observed in 10.4 88cf6f1c7f
with the MDEV-22348 test case (Aria startup failure when running
main.default_storage_engine).
commit 105b879d0f introduced this
warning. The warning looks harmless, but GCC does not understand
that the initialization and the use of the variables are guarded
by the same predicate.
The sprintf() format of double changed from '%lg' to '%-.11lg'.
The change was made to make it easier to read optimizer trace output
with tables that have millions of records.
The reason for this is to make all temporary file names similar and
also to be able to figure out from where a #sql-xxx name originates.
The new format is, for most cases:
'#sql-name-current_pid-thread_id[-increment]'
where name is one of subselect, alter, exchange, temptable or backup.
The exceptions are:
ALTER PARTITION shadow files:
'#sql-shadow-thread_id-original_table_name'
Names used with the temp pool:
'#sql-name-current_pid-pool_number'
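For the common case, the name could be assembled like this (a sketch; the
helper and its arguments are hypothetical):

  #include <stdio.h>
  #include <unistd.h>

  /* Builds the common form '#sql-name-current_pid-thread_id'. */
  void make_tmp_name(char *buf, size_t len, const char *what,
                     unsigned long long thread_id)
  {
    snprintf(buf, len, "#sql-%s-%lu-%llu",
             what,                          /* subselect/alter/exchange/... */
             (unsigned long) getpid(), thread_id);
  }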
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
In addition, partitioned S3 tables can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files on S3
for partitioned shared (S3) tables.
The discovery methods are enhanced by allowing engines that support
discovery to also support discovery of the partitioned table's .frm and
.par files.
Things in more detail:
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added the hton callback create_partitioning_metadata to inform the handler
that metadata for a partitioned file has changed.
- Added back handler::discover_check_version() to be able to check if
a table's or a partition table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed the CHF_xxx handler flags to an enum.
- Changed some checks from using table->file->ht to using
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around.
- Appended the extension to the path in writefrm() to be able to reuse the
function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of non-critical
tracing.
MDEV-22199 Add VISIBLE attribute for indexes in CREATE TABLE
This was done to make it easier to read in dumps from MySQL 8.0 generated
with MySQL workbench
>= M_TOT_PARTS' FAILED.
This patch is taken from MySQL, originally written by Mattias Jonsson
Here follows the original commit message:
The problem was in handle_alter_part_error(), which resulted in the
altered partition_info object still being used
if the table was under LOCK TABLES.
The solution was to always close and destroy all table
and table_share instances if an exclusive MDL lock was
possible.
If not succeeding in getting an exclusive lock (only possible
during rollback of DDL), at least close and destroy this
table instance.
rb#7361.
Approved by Mikael and Aditya.
Part of:
MDEV-21056 Assertion `global_status_var.global_memory_used == 0'
failed upon shutdown after query with DEFAULT on a geometry
field
Fixed by changing the ASSERT for memory leaks to a printf() on
stderr. This is needed as all mutexes in mysys have been deleted and we
can't call functions like my_open() anymore.
Also added printing of leaks if safemalloc is used (like we do in 10.5).
If a transaction had no effect due to INSERT IGNORE and a new
transaction was started with START TRANSACTION without committing
the previous one, the server crashed on assertion when starting
a new wsrep transaction.
As a fix, refined the condition for doing wsrep_commit_empty() at the end
of ha_commit_trans().
The first patch for the bug was erroneous: it did not take into account
the fact that the modified function get_key_scans_params() was called in
different contexts. As a result the patch caused a regression bug MDEV-22191.
The patch for this bug introduced an extra parameter. Actually we can
do without this parameter and use the fourth parameter for the same
purpose - to differentiate between the calls of the function for range
access and for index merge access.
Also removed the call of get_key_scans_params() in the code of the function
merge_same_index_scans() as not needed.
Several tests that involve stored procedures fail on 10.4 kvm-asan
(clang 10) due to stack overrun. The main contributor to this stack
overrun is mysql_execute_command(), which is invoked recursively
during stored procedure execution.
Rebuilding with cmake -DWITH_WSREP=OFF shrunk the stack frame size
of mysql_execute_command() by more than 10 kilobytes in a
WITH_ASAN=ON, CMAKE_BUILD_TYPE=Debug build. The culprit
turned out to be the macro WSREP_LOG, which is allocating a
separate 1KiB buffer for every occurrence.
We replace the macro with a function, so that the stack will be
allocated only when the function is actually invoked. In this way,
no stack space will be wasted by default (when WSREP and Galera
are disabled).
This backports commit b6c5657ef2
from MariaDB 10.3.1.
Without ASAN, compilers can be smarter and optimize the stack usage.
The original commit message mentions that 1KiB was saved on GCC 5.4,
and 4KiB on Mac OS X Lion, which presumably uses a clang-based compiler.
- Made WSREP_LOG a function and moved the body out of header.
- Reduced the stack-allocated buffer size and implemented a reprint into
a dynamically allocated buffer if the stack buffer is not large enough
to hold the message.
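A sketch of the macro-to-function conversion (names and buffer sizes
illustrative):

  #include <stdarg.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Unlike the old macro, which placed a 1KiB buffer on the stack at every
     call site, this function allocates its buffer only when it runs. */
  void wsrep_log(void (*fun)(const char *fmt, ...), const char *format, ...)
  {
    char buf[128];                /* stack buffer for short messages */
    va_list args;
    va_start(args, format);
    int n= vsnprintf(buf, sizeof(buf), format, args);
    va_end(args);
    if (n < 0)
      return;                     /* formatting error */
    if (n >= (int) sizeof(buf))   /* long message: retry on the heap */
    {
      char *big= (char *) malloc(n + 1);
      if (big)
      {
        va_start(args, format);
        vsnprintf(big, (size_t) n + 1, format, args);
        va_end(args);
        fun("%s", big);
        free(big);
        return;
      }
    }
    fun("%s", buf);               /* short or truncated message */
  }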
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
`inited == NONE` at the initialization time does not always mean
that it'll be `NONE` later, at the execution time. Use more complex,
caller-specific logic to decide whether to create a cloned lookup handler.
Besides LOAD (as in the original bug report) make sure that all
prepare_for_insert() invocations are covered by tests. Add tests for
CREATE ... SELECT, multi-UPDATE, and multi-DELETE.
Don't enable write cache with long uniques.
Stop linking plugins to the server executable on Windows.
Instead, extract whole server functionality into a large DLL, called
server.dll. Link both plugins, and small server "stub" exe to it.
This eliminates plugin dependency on the name of the server executable.
It also reduces the size of the packages (since tiny mysqld.exe
and mariadbd.exe are now both linked to one big DLL).
Also, simplify the functionality of exporting all symbols from selected
static libraries. Rely on WINDOWS_EXPORT_ALL_SYMBOLS rather than the old
home-grown solution.
Fix compile error.
Replace GetProcAddress(GetModuleHandle(NULL), "variable_name")
for server-exported data with actual variable names.
Runtime loading was never required and was error-prone, since symbols could
be missing at runtime; and now it actually failed, because we no longer
export symbols from the executable, but from a shared library.
This did require a MYSQL_PLUGIN_IMPORT decoration for the plugin,
but made the code more straightforward, and avoids missing symbols at
runtime (as mentioned before).
The audit plugin is still doing some dynamic loading, as it aims to work
cross-version. Now it won't work cross-version on Windows, as it already
uses some symbols that are *not* dynamically loaded, e.g. fn_format,
and those symbols are now exported from server.dll, whereas earlier they
were exported by mysqld.exe.
Windows: fixes for storage engine plugin loading
after various rebranding changes.
Create server.dll containing the functionality of the whole server;
make mariadbd.exe/mysqld.exe a stub that only calls mysqld_main().
Fix build.
When index_merge_sort_union is turned off, only ROR scans were considered for
range scans, which is wrong.
To fix the problem, ensure that both ROR scans and non-ROR scans are
considered for range access.
Changed wording in error messages from MySQL to MariaDB. In
cases where the word "server" could be used instead, that was done.
Tests that have these errors recorded were updated.
was restored.
Optionally rollback prepared XA's on "mariabackup --prepare".
The fix MUST NOT be ported to 10.5+, as the MDEV-742 fix solves the issue for
slaves.
ADD default history partitions generates a wrong partition name,
e.g. p2 instead of p1. A gap in the sequence of partition names leads to
ha_partition::open_read_partitions() failing on a nonexistent name.
Manually fixing such a broken table requires:
1. create empty table by any name (t_empty) with correct number
of partitions;
2. stop the server;
3. rename data files (.myd, .myi or .ibd) of broken table to t_empty
fixing the partition sequence (#p2 to #p1, #p3 to #p2);
4. start the server;
5. drop the broken table;
6. rename t_empty to correct table name.
This bug could happen only with a stored procedure containing queries with
more than one reference to a CTE that used local variables / parameters.
This bug was the result of an incomplete merge of the fix for the bug
MDEV-17154. The merge covered usage of parameter markers occurring in a CTE
that was referenced more than once, but missed coverage of local variables.
Turn read cache off for periodic update.
Like 498a96a4 says:
Aria with row_format=fixed uses IO_CACHE of type READ_CACHE for
sequential read in update loop. When history row is inserted inside
this loop the cache misses it and fails with error.
This is applicable to any additional row inserts on UPDATE. In this case
it was initiated by UPDATE FOR PORTION.
Related to MDEV-20441.
- Fixed mysql_prepare_create_table() constraint duplicate checking;
- Refactored period constraint handling in mysql_prepare_alter_table():
* No need to allocate new objects;
* Keep old constraint name but exclude it from dup checking by automatic_name;
- Some minor memory leaks fixed;
- Some conceptual TODOs.
TDC_RT_REMOVE_ALL -> tdc_remove_table(). Some occurrences replaced with
TDC_element::flush() (whenver TABLE_SHARE is available).
TDC_RT_REMOVE_NOT_OWN[_KEEP_SHARE] -> TDC_element::flush(). These modes
assume that the current thread owns a TABLE_SHARE reference, which means we
can avoid the hash lookup and flush unused TABLE instances directly.
TDC_RT_REMOVE_UNUSED -> TDC_element::flush_unused(). Only [ab]used by
mysql_admin_table() currently. Should be removed eventually.
Part of MDEV-17882 - Cleanup refresh version
Aim of this patch is to remove tdc_remove_table(TDC_RT_REMOVE_UNUSED),
which was mistakenly introduced by 055a3334a.
InnoDB allows only one open TABLE instance while performing table
truncation. To fulfill this requirement:
1. MDL_EXCLUSIVE has to be acquired to block concurrent threads from
accessing given table
2. cached TABLE instances have to be flushed
3. another InnoDB requirement is such that TABLE_SHARE and remaining
TABLE instance have to be invalidated and re-opened after truncation
This goes more or less in line with what regular TRUNCATE TABLE does.
An alternative solution would be handler::ha_delete_all_rows(), but InnoDB
unfortunately doesn't implement it.
Part of MDEV-17882 - Cleanup refresh version
Let DROP SERVER and ALTER SERVER perform fair affected tables flushing.
That is acquire MDL_EXCLUSIVE and do tdc_remove_table(TDC_RT_REMOVE_ALL).
Aim of this patch is elimination of another inconsistent use of
TDC_RT_REMOVE_UNUSED. It fixes (to some extent) a problem described at the
beginning of sql_server.cc, where close_cached_connection_tables()
interferes with a concurrent transaction.
A better fix should probably introduce proper MDL locks for server
objects?
Part of MDEV-17882 - Cleanup refresh version
Removed redundant tdc_remove_table(TDC_RT_REMOVE_ALL). Share was marked
flushed by preceding wait_while_table_is_used() and eventually flushed by
close_all_tables_for_name().
Part of MDEV-17882 - Cleanup refresh version
close_all_tables_for_name() is always preceded by
wait_while_table_is_used(), which makes tdc_remove_table() redundant.
The only (now fixed) exception was close_cached_tables().
Part of MDEV-17882 - Cleanup refresh version
Rather than flushing caches with tdc_remove_table(TDC_RT_REMOVE_UNUSED)
flush them with extra(HA_EXTRA_FLUSH) instead. This goes in line with
regular FTWRL.
Part of MDEV-17882 - Cleanup refresh version