Plugins can have unused variables too. If they use a literal "Unused"
string, a compiler might or might not merge two identical strings into
one (-fmerge-constants), and depending on that the server will or will
not issue a "variable is ignored" warning.
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
- Added --update-history option to mariadb-dump to change 2038
row_end timestamp to 2106.
- Updated ALTER TABLE ... to convert old row_end timestamps to
2106 timestamp for tables created before MariaDB 11.4.0.
- Fixed a bug in CHECK TABLE where we wrongly suggested using REPAIR
TABLE when ALTER TABLE...FORCE is needed.
- mariadb-check printed table names that were used with REPAIR TABLE but
did not print table names used with ALTER TABLE or with name repair.
Fixed by always printing a table that is fixed if --silent is not
used.
- Added TABLE::vers_fix_old_timestamp() that will change the max timestamp
for versioned tables when replicating from a pre-11.4.0 server.
A few test cases changed. This is caused by:
- CHECK TABLE now prints 'Please do ALTER TABLE...' instead of
'Please do REPAIR TABLE' when there is a problem with the information
in the .frm file (for example a very old .frm file).
- mariadb-check now prints repaired table names.
- mariadb-check also now prints nicer error message in case ALTER TABLE
is needed to repair a table.
This task is to ensure we have a clear definition and rules of how to
repair or optimize a table.
The rules are:
- REPAIR should be used with tables that are crashed and are
unreadable (hardware issues with unreadable blocks, blocks with
'unexpected data' etc.)
- OPTIMIZE TABLE should be used to optimize the storage layout for the
table (recover space from deleted rows and optimize the index
structure).
- ALTER TABLE table_name FORCE should be used to rebuild the .frm file
(the table definition) and the table (with the original table row
format). If the table is from an older MariaDB/MySQL release with a
different storage format, it will convert the data to the new
format. ALTER TABLE ... FORCE is used as part of mariadb-upgrade.
Here follows some more background:
The 3 ways to repair a table are:
1) "ALTER TABLE table_name FORCE" (no other options).
As an alias we allow: "ALTER TABLE table_name ENGINE=original_engine"
2) "REPAIR TABLE" (without FORCE)
3) "OPTIMIZE TABLE"
All of the above commands will optimize row space usage (which means that
space will be needed to hold a temporary copy of the table) and
re-generate all indexes. They will also try to replicate the original
table definition as exactly as possible.
For ALTER TABLE and "REPAIR TABLE without FORCE", the following will hold:
If the table is from an older MariaDB version and data conversion is
needed (for example for old type HASH columns, MySQL JSON type or new
TIMESTAMP format) "ALTER TABLE table_name FORCE, algorithm=COPY" will be
used.
The differences between the algorithms are:
1) Will use the fastest algorithm the engine supports to do a full repair
of the table (except if data conversions are needed).
2) Will use the storage engine internal REPAIR facility (MyISAM, Aria).
If the engine does not support REPAIR then
"ALTER TABLE FORCE, ALGORITHM=COPY" will be used.
If there were data incompatibilities (which means that FORCE was used)
then there will be a warning after REPAIR that ALTER TABLE FORCE is
still needed.
The reason for this is that REPAIR may be able to work around data
errors (wrong incompatible data, crashed or unreadable sectors) that
ALTER TABLE cannot handle.
3) Will use the storage engine internal OPTIMIZE. If the engine does not
support OPTIMIZE, then "ALTER TABLE FORCE" is used.
The above will ensure that ALTER TABLE FORCE is able to
correct almost any errors in the row or index data. In case of
corrupted blocks, REPAIR, possibly followed by ALTER TABLE, is needed.
This is important as mariadb-upgrade executes ALTER TABLE table_name
FORCE for any table that must be re-created.
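For illustration, a hedged SQL sketch of the three repair methods discussed
above (the table name t1 and the engine are placeholders):
  ALTER TABLE t1 FORCE;           -- rebuild the .frm file and the table data
  ALTER TABLE t1 ENGINE=InnoDB;   -- accepted alias when InnoDB is the original engine
  REPAIR TABLE t1;                -- engine-native repair; falls back to ALTER TABLE ... FORCE, ALGORITHM=COPY
  REPAIR TABLE t1 FORCE;          -- new FORCE argument (see below): repair damaged blocks first
  OPTIMIZE TABLE t1;              -- reclaim space from deleted rows and rebuild indexes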
Bugs fixed with InnoDB tables when using ALTER TABLE FORCE:
- No error for INNODB_DEFAULT_ROW_FORMAT=COMPACT even if row length
would be too wide. (Independent of innodb_strict_mode).
- Tables using symlinks will be symlinked after any of the above commands
(Independent of the setting of --symbolic-links)
If one specifies an algorithm together with ALTER TABLE FORCE, things
will work as before (except if data conversion is required as then
the COPY algorithm is enforced).
ALTER TABLE .. OPTIMIZE ALL PARTITIONS will work as before.
Other things:
- FORCE argument added to REPAIR to allow one to first run internal
repair to fix damaged blocks and then follow it with ALTER TABLE.
- REPAIR will not update frm_version if ha_check_for_upgrade() finds
that table is still incompatible with current version. In this case the
REPAIR will end with an error.
- REPAIR for storage engines that do not have native repair, like InnoDB,
now uses ALTER TABLE FORCE.
- REPAIR csv-table USE_FRM now works.
- It did not work before as CSV tables had the extension list in the wrong
order.
- Default error message length for %M increased from 128 to 256 to not
cut information from REPAIR.
- Documented HA_ADMIN_XX variables related to repair.
- Added HA_ADMIN_NEEDS_DATA_CONVERSION to signal that we have to
do data conversions when converting the table (and thus ALTER TABLE
copy algorithm is needed).
- Fixed typo in error message (caused test changes).
Remove alter_algorithm but keep the variable as a no-op (with a warning).
The reasons for removing alter_algorithm are:
- alter_algorithm was introduced as a replacement for the
old_alter_table variable that was used to force the usage of the original
ALTER TABLE algorithm (copy) in the cases where the new alter
algorithm did not work. The new option was added as a way to force
the usage of a specific algorithm, when it should instead have made
it possible to disable algorithms that would not work for some
reason.
- alter_algorithm introduced some cases where ALTER TABLE would not
work without specifying the ALGORITHM=XXX option together with
ALTER TABLE.
- Having different values of alter_algorithm on master and slave could
cause slave to stop unexpectedly.
- ALTER TABLE FORCE, as used by mariadb-upgrade, would not always work
if alter_algorithm was set for the server.
- As part of the MDEV-33449 "improving repair of tables" work it became
clear that alter_algorithm made it harder to provide a better and
more consistent ALTER TABLE FORCE and REPAIR TABLE, and it would be
better to remove it.
Also fixed that all unused variables are using the same variable comment.
The warning will be tested with the next commit that deprecates the
variable alter_algorithm.
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
- Changed usage of timeval to my_timeval as the timeval parts on Windows
are 32 bits long, which causes some compiler issues on Windows.
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
This is done by changing my_time_t from long to unsigned long.
The effect of this is that on Windows, compiling old clients may
produce warnings if they compare my_time_t with a signed variable.
Other things
- Removed my_time_t from include/*.pp files as it is different on Windows
and Linux.
- Changed do_abi_check.cmake to first print abi_check and then the
conflicting file (this makes it easier to find the cause of the error).
This patch extends the timestamp from
2038-01-19 03:14:07.999999 to 2106-02-07 06:28:15.999999
for 64 bit hardware and OS where 'long' is 64 bits.
This is true for 64 bit Linux but not for Windows.
This is done by treating the 32 bit stored int as unsigned instead of
signed. This is safe as MariaDB has never accepted dates before the epoch
(1970).
The benefit of this approach is that for normal timestamps the storage is
compatible with earlier versions.
However, for tables using system versioning we previously stored a
timestamp with the year 2038 as the 'max timestamp', which is used to
detect current values. This patch stores the new 2106 year max value
as the max timestamp. This means that old tables using system
versioning need to be updated with mariadb-upgrade when moving them
to 11.4. That will be done in a separate commit.
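A minimal SQL sketch of the extended range (assuming a 64-bit platform where
'long' is 64 bits and time_zone='+00:00'):
  CREATE TABLE t1 (ts TIMESTAMP);
  INSERT INTO t1 VALUES ('2038-01-19 03:14:08');  -- beyond the old limit, now accepted
  INSERT INTO t1 VALUES ('2106-02-07 06:28:15');  -- the new maximum value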
- Slave_IO thread time is now reset between reading events. Before
this commit Slave_IO always showed "Waiting for master to send
event" and the time was counted from SLAVE START. Now it shows the
time since the last event was read.
Fixed so that no tables from the 'mysql' schema are included in userstat.
A benefit of this is that the server is not reading statistics tables
if mysql.proc or other tables in mysql are accessed.
Other changes:
- Do not collect index statistics for system tables like index_stats,
table_stats, performance_schema, information_schema etc. as the user
has no control over these and they generate noise in the statistics.
This is to update the plugin to be compatible with Percona's
query_response_time plugin, with some additions.
Some of the tests are taken from Percona server.
- Added plugins QUERY_RESPONSE_TIME_READ, QUERY_RESPONSE_TIME_WRITE and
QUERY_RESPONSE_TIME_READ_WRITE.
- Added option query_response_time_session_stats, with possible values
GLOBAL, ON or OFF, to the query_response_time plugin.
Notes:
- All modules are dependent on QUERY_RESPONSE_READ_TIME. This must always
be enabled if any of the other modules are used.
This will be auto-enabled in the near future.
- Accounting is done per statement. Stored functions are regarded
as part of the original statement.
- For stored procedures the accounting is done per statement executed
in the stored procedure. CALL will not be accounted because of this.
- FLUSH commands will not be accounted for. This is to ensure that
FLUSH QUERY_RESPONSE_TIME is not part of the statistics.
(This helps when testing with mtr and otherwise).
- FLUSH QUERY_RESPONSE_TIME_READ and FLUSH QUERY_RESPONSE_TIME_WRITE
only reset the corresponding status.
- FLUSH QUERY_RESPONSE_TIME and FLUSH QUERY_RESPONSE_TIME_READ_WRITE, or
changing the value of query_response_time_range_base followed by
any FLUSH of QUERY_RESPONSE_TIME, reset all statuses.
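A usage sketch, assuming the plugin library soname 'query_response_time' and
I_S table names matching the plugin names above (names are not confirmed here):
  INSTALL SONAME 'query_response_time';
  SET GLOBAL query_response_time_stats= ON;
  SET GLOBAL query_response_time_session_stats= GLOBAL;   -- new option from this patch
  SELECT * FROM information_schema.QUERY_RESPONSE_TIME_READ_WRITE;
  FLUSH QUERY_RESPONSE_TIME;                               -- resets all statistics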
Various help message improvements:
* MySQL->MariaDB, mysqld->mariadbd, "mysqld daemon" -> "mariadbd process"
* typos
* don't specify defaults directly in the help message
* don't say that an option is deprecated, mark it as such
* missing spaces in the middle of the text
etc
* use new deprecated printer for all deprecated server options
* restore alphabetic option sorting order
* move deprecated printer from mysqld.cc to my_getopt.c
* in --help print deprecation message at the end of the option help
* move 'ALL' help text where it belongs - to other SET options, and
with a correct indentation.
* consistently end all or none of the command-line option help strings
with a dot - my_print_help() needs that.
It's about 50/50 now, so let's do none; fewer line wraps in --help
* remove trailing spaces from command-line option help strings
Currently there are mechanisms to mark a system variable as
deprecated, but they are only used to print warning messages
when a deprecated variable is set.
Leverage the existing mechanisms in order to make the
deprecation information available at the --help output of mysqld by:
* Moving the deprecation information (i.e `deprecation_substitute`
attribute) from the `sys_var` class into the `my_option` struct.
As every `sys_var` contains its own `my_option` struct, the access
to the deprecation information remains available to `sys_var`
objects. `my_getopt` functions, which work directly with
`my_option` structs, gain access to this information while building
the --help output.
* For plugin variables, leverage the `PLUGIN_VAR_DEPRECATED` flag
and set the `deprecation_substitute` attribute accordingly when
building the `my_option` objects.
* Change the `option_cmp` function to use the `deprecation_substitute`
attribute instead of the name when sorting the options. This way
deprecated options and the substitutes will be grouped together.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
nullptr+0 is UB (undefined behavior).
- Fixing my_string_metadata_get_mb() to handle {nullptr,0} without UB.
- Fixing THD::copy_with_error() to disallow {nullptr,0} by DBUG_ASSERT().
- Fixing parse_client_handshake_packet() to call THD::copy_with_error()
with an empty string {"",0} instead of NULL string {nullptr,0}.
- Fixing the code in get_interval_value() to use Longlong_hybrid_null.
This allows handling the following correctly:
- Signed and unsigned arguments
(the old code assumed the argument to be signed)
- The corner case with LONGLONG_MIN, avoiding undefined negation behavior
This fixes the UBSAN warning:
negation of -9223372036854775808 cannot be represented
in type 'long long int';
- Fixing the code in get_interval_value() to avoid overflow in
the INTERVAL_QUARTER and INTERVAL_WEEK branches.
This fixes the UBSAN warning:
signed integer overflow: -9223372036854775808 * 7 cannot be represented
in type 'long long int'
- Fixing the INTERVAL_WEEK branch in date_add_interval() to handle
huge numbers correctly. Before the change, huge positive numbers
were treated as their negative complements.
Note, some other branches still can be affected by this problem
and should also be fixed eventually.
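For illustration, a statement shape (hypothetical) that exercises the
LONGLONG_MIN corner case which previously triggered the UBSAN warnings:
  SELECT DATE_ADD('2024-01-01', INTERVAL -9223372036854775808 WEEK);
  -- expected to return NULL with a warning instead of relying on undefined behaviour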
Step#2 - Adding a new collation derivation level for CAST and CONVERT.
Now character string cast functions:
- CAST(string_expr AS CHAR)
- CONVERT(expr USING charset_name)
have a new collation derivation level between:
- string literals
- utf8 metadata functions, e.g. user() and database()
Before the change these cast functions had a collation derivation equal to
that of table columns, which caused more "Illegal mix of collations" conflicts.
Note, binary string cast functions:
- BINARY(expr)
- CAST(string_expr AS BINARY)
- CONVERT(expr USING binary)
did not change their collation derivation, to preserve the behaviour of
queries like these:
SELECT database()=BINARY'test';
SELECT user()=CAST('root' AS BINARY);
SELECT current_role()=CONVERT('role' USING binary);
Derivation levels after the change look as follows:
DERIVATION_IGNORABLE= 7, // Explicit NULL
DERIVATION_NUMERIC= 6, // Numbers in string context,
// Numeric user variables
// CAST(numeric_expr AS CHAR)
DERIVATION_COERCIBLE= 5, // Literals, string user variables
DERIVATION_CAST= 4, // CAST(string_expr AS CHAR),
// CONVERT(string_expr USING cs)
DERIVATION_SYSCONST= 3, // utf8 metadata functions, e.g. user(), database()
DERIVATION_IMPLICIT= 2, // Table columns, SP variables, BINARY(expr)
DERIVATION_NONE= 1,       // A mix (e.g. CONCAT) of two different collations
DERIVATION_EXPLICIT= 0 // An explicit COLLATE clause
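A sketch of the effect (collation names are illustrative): a comparison that
previously failed with an implicit-vs-implicit conflict now resolves to the
column collation, because the cast side has the weaker DERIVATION_CAST:
  CREATE TABLE t1 (a VARCHAR(10) COLLATE utf8mb4_unicode_ci);
  SET NAMES utf8mb4 COLLATE utf8mb4_general_ci;
  SELECT * FROM t1 WHERE a = CAST('x' AS CHAR);
  -- before: both sides DERIVATION_IMPLICIT with different collations
  --         -> "Illegal mix of collations"
  -- after:  the column (DERIVATION_IMPLICIT) wins over the cast (DERIVATION_CAST)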
Step#1 - Changing collation derivation for string user variables
from IMPLICIT to COERCIBLE.
Rationale:
Without this preparatory change, switching the default collation for
Unicode character sets from xxx_general_ci to uca1400_ai_ci would cause
"Illegal mix of collations" errors in scenarios comparing a column with
a non-default collation to a string user variable.
This is especially important for queries to INFORMATION_SCHEMA tables,
whose columns use utf8mb3_general_ci.
See the description of MDEV-25829 for more details and SQL script examples.
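A sketch of the scenario this prepares for (collation names are illustrative):
  SET NAMES utf8mb3 COLLATE utf8mb3_uca1400_ai_ci;
  SET @name = 'users';
  SELECT table_name FROM information_schema.tables WHERE table_name = @name;
  -- the I_S column uses utf8mb3_general_ci; with @name now COERCIBLE the column
  -- collation wins instead of producing an "Illegal mix of collations" error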
to_natsort_key(): Zero-initialize also num_start. This silences a
compiler warning. There is no impact on correctness, because
before the first read of num_start, !n_digits would always hold
and hence num_start would have been initialized.
Correct the second parameter for strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
input buffer to avoid writing past the end of the buffer.
While the second parameter is usually correct, there are exceptions
that need fixing.
This commit addresses the issue within frm_file_exists() and other
affected places.
MDEV-33672 (10.6) added checks/tests for malformed events which end
before the flags describe (which would lead to reading of un-owned
memory). MDEV-7850 (11.5) extended all GTID events with a thread id
at the end of the event. This GTID event extension invalidates the
tests added in MDEV-33672 because the thread id is appended after the
event (and thereby the event isn't cut short).
This patch fixes these MDEV-33672 tests by not writing the GTID
thread id when writing the Gtid events just for these tests. This
preserves tests for backwards compatibility, rather than getting rid
of the tests altogether.
Preserve compatibility with 7.1.3 by including the previous non-const
function.
The error was:
fmt/format.h:3466:8: note: candidate function template not
viable: no known conversion from 'const formatter<String, [2 * ...]>' to
'formatter<fmt::basic_string_view<char>, [2 * ...]>' for object argument
3466 | auto format(const T& val, FormatContext& ctx) ->
decltype(ctx.out()) {
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-cast-table-names=0
Backporting the fixes from 11.5 to 10.5
Similar to #2480.
567b681 introduced safe_strcpy() to minimize the use of strcpy(), whose
potential for unsafe memory overflow is the reason its use is
discouraged.
Replace instances of strcpy() with safe_strcpy() where possible, limited
here to files in the `sql/` directory.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Alternative operator name keywords like `and`, `or`, `xor`, etc., are
uncommon in MariaDB and can cause obscure build errors when the GCC
flag `-fno-operator-names` is applied.
Description of `-fno-operator-names`:
https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html
> Do not treat the operator name keywords `and`, `bitand`, `bitor`,
> `compl`, `not`, `or` and `xor` as synonyms as keywords.
Part of the build errors:
/local/p4clients/pkgbuild-LdLa_/workspace/src/RDSMariaDB/sql/sql_select.cc:11171:28: error: expected ‘)’ before ‘and’
11171 | DBUG_ASSERT(sel >= 0.0 and sel <= 1.00001);
| ^~~
/local/p4clients/pkgbuild-LdLa_/workspace/src/RDSMariaDB/include/my_global.h:372:44: note: in definition of macro ‘unlikely’
372 | #define unlikely(x) __builtin_expect(((x) != 0),0)
| ^
...
The build failure is caused by using the alternative operator name keyword
`and` introduced in commit b66cdbd1e.
Replace the `and` keyword with `&&` and target MariaDB 11.0+ branches
which include the commit.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
A memory leak happens on the second execution of a query that is run in PS mode
and uses the function ROWNUM().
The memory leak took place on allocation of an instance of the class Item_int
for storing a limit value, which is performed in the function set_limit_for_unit
indirectly called from JOIN::optimize_inner. A typical trace to the place where
the memory leak occurred is below:
JOIN::optimize_inner
optimize_rownum
process_direct_rownum_comparison
set_limit_for_unit
new (thd->mem_root) Item_int(thd, lim, MAX_BIGINT_WIDTH);
To fix this memory leak, the function optimize_rownum()
has to be called only once, on the first execution, and never
after that. To control this, the new data member
first_rownum_optimization
was added to the structure st_select_lex.
(Based on original patch by Oleksandr Byelkin)
Multi-table DELETE can execute via "buffered" mode: at phase #1 it collects
rowids of rows to be deleted, then at phase #2 in multi_delete::do_deletes()
it calls handler->rnd_pos() to read rows to be deleted and deletes them.
The problem occurred when phase #1 used Rowid Filter on the table that
phase #2 would be deleting from.
In InnoDB, h->rnd_init(scan=false) and h->rnd_pos() is an index scan over PK
under the hood. So, at phase #2 ha_innobase::rnd_init() would try to use the
Rowid Filter and hit an assertion inside ha_innobase::rnd_init().
Note that multi-table UPDATE works similarly but was not affected, because
the patch for MDEV-7487 added code to disable the Rowid Filter for phase #2 in
multi_update::do_updates().
This patch changes the approach:
- It makes InnoDB not use Rowid Filter in rnd_pos() scans: it is disabled in
ha_innobase::rnd_init() and enabled back in ha_innobase::rnd_end().
- multi_update::do_updates() no longer disables Rowid Filter for phase#2 as
it is no longer necessary.
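A hypothetical query shape that hits this path (table and column names are
illustrative); the optimizer may build a Rowid Filter on t1 for phase #1 while
phase #2 re-reads the t1 rows via rnd_pos():
  DELETE t1, t2 FROM t1 JOIN t2 ON t1.pk = t2.t1_pk
  WHERE t1.key_col BETWEEN 100 AND 200 AND t2.other_col = 5;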
Workaround patch: Do not remove the GROUP BY clause when it has
one or more subqueries in it.
remove_redundant_subquery_clauses() removes redundant GROUP BY clause
from queries in form:
expr IN (SELECT no_aggregates GROUP BY ...)
expr {CMP} {ALL|ANY|SOME} (SELECT no_aggregates GROUP BY ...)
This hits problems when the GROUP BY clause itself has one or more subqueries.
This patch is just a workaround: it disables removal of GROUP BY clause
if the clause has one or more subqueries in it.
Tests:
- subselect_elimination.test has all known crashing cases.
- subselect4.result, insert_select.result are updated.
Note that in some cases results of SELECT are changed too (not just
EXPLAINs). These are caused by non-deterministic SQL: when running a
query like:
x > ANY( SELECT col1 FROM t1 GROUP BY constant_expression)
without removing the GROUP BY, the executor is free to pick the value
of t1.col1 from any row in the GROUP BY group (denote it $COL1_VAL).
Then, it computes x > ANY(SELECT $COL1_VAL).
When running the same query and removing the GROUP BY:
x > ANY( SELECT col1 FROM t1)
the executor will actually check all rows of t1.
Due to this bug a wrong result might be produced by queries with
an IN subquery predicate in the WHERE clause and a derived table in the
FROM clause to which split optimization could be applied.
The function JOIN::fix_all_splittings_in_plan() used the value of the
bitmap JOIN::sjm_lookup_tables() as it had been left after the
search for the best plan for the select containing the splittable
derived table. That value could not be guaranteed to be correct. So a
recalculation of this bitmap is needed to exclude the plans with key
accesses from SJM lookup tables.
Approved by Igor Babaev <igor@maridb.com>
Reset the connection_name to contain a null string, if the pointer
points to the same space as that of the system variable
default_master_connection.
We do this because the system variable may be updated which could free
the pointer and create a new one, causing use-after-free for
re-execution of prepared statements and stored procedures where the
LEX may be reused.
This allows connection_name to be set again to the system variable
pointer in the next call of this function (see earlier in this
function), after any possible updates to the system variable.
The optimization code replacing DATETIME comparison to TIMESTAMP comparison
in conditions like:
- WHERE timestamp_col=const_expr
- WHERE const_expr IN (SELECT timestamp_column FROM t1)
worked as follows:
- Install an internal condition handler (suppressing and counting warnings).
- Convert const_expr from its data type to TIMESTAMP
- Check the warning count collected by the internal condition handler:
* If any warnings happened during the constant conversion,
then continue with DATETIME comparison.
* Otherwise, go to the next stage of switching to TIMESTAMP comparison.
This scenario did not take into account that in some cases warnings
are escalated to errors. Errors were not caught by the internal handler,
so Type_handler_datetime_common::convert_item_for_comparison()
returned with an SQL error in the diagnostics area.
The calling code did not expect this.
Fixing the code to suppress and count both errors and warnings, to make sure
Type_handler_datetime_common::convert_item_for_comparison() returns without
adding any errors to DA if the conversion to TIMESTAMP fails and it decides
to go with DATETIME comparison.
on disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique is logically unique, because on the
engine level it is not. And so the engine disables it.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
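A sketch of the behaviour this protects, assuming a MyISAM table whose UNIQUE
over a BLOB is implemented as a long unique hash key:
  CREATE TABLE t1 (b BLOB, UNIQUE KEY (b)) ENGINE=MyISAM;
  ALTER TABLE t1 DISABLE KEYS;          -- the hash key backing the long unique must stay enabled
  INSERT INTO t1 VALUES ('x'), ('x');   -- must still fail with a duplicate-key error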
This commit fixes sporadic failures in galera_3nodes_sr.GCF-336
test. The following changes have been made here:
1) A small addition to the test itself which should make
it more deterministic by waiting for non-primary state
before COMMIT;
2) More careful handling of the wsrep_ready variable in
the server code (it should always be protected with a mutex).
No additional tests are required.
Refinement of the original patch.
Move the code to reset the kill up into the parent class
Xid_apply_log_event, to also fix the similar issue for XA COMMIT.
Increase the number of slave retries in the test case
rpl.rpl_parallel_multi_domain_xa to fix some sporadic failures. The test
generates massive amounts of conflicting transactions in multiple
independent domains, which can cause multiple rollback+retry for a
transaction as it conflicts with transactions in other domains one-by-one.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Don't deadlock kill event groups in other domains if they are not
SPECULATE_OPTIMISTIC. Such event groups may not be able to safely roll back
and retry (eg. DDL).
But do deadlock kill a transaction T2 from a blocked transaction U in another
domain, even if T2 has lower sub_id than U. Otherwise, in case of a cycle
T2->T1->U->T2, we might not break the cycle if U is not SPECULATE_OPTIMISTIC
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When HA_DUPLICATE_POS is not supported, the row to replace was located by
ha_index_read_idx_map, which uses only the hash to navigate.
Thus, given a hash collision it may choose an incorrect row.
handler::position would be correct and very convenient to use here.
dup_ref is already set by the handler independently of the engine
capabilities; when an extra lookup is made (for long unique or something else,
for example WITHOUT OVERLAPS) such an error will be indicated by
file->lookup_errkey != -1.
In strict mode a timestamp(0) column could be directly assigned from
another timestamp(N>0) column with the value '1970-01-01 00:00:00.1'
(at time zone '+00:00'), or with any other value '1970-01-01 00:00:00.XXXXXX'
with non-zero microsecond value XXXXXX.
This assignment happened silently without warnings or errors.
It worked as follows:
- The value {tv_sec=0, tv_usec=100000}, which is '1970-01-01 00:00:00.1'
was rounded to {tv_sec=0, tv_usec=0}, which is '1970-01-01 00:00:00.0'
- Then {tv_sec=0, tv_usec=0} was silently re-interpreted as zero datetime.
After the fix this assignment always raises a warning,
which in case of the strict mode is escalated to an error.
The problem in this scenario is that '1970-01-01 00:00:00' cannot be stored,
because its timeval value {tv_sec=0, tv_usec=0} is reserved for zero datetimes.
Thus the warning should be raised no matter if sql_mode allows or disallows
zero dates.
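A sketch of the affected assignment (time zone and table are illustrative, and
it assumes the fractional value can be stored as described above):
  SET time_zone='+00:00', sql_mode='STRICT_ALL_TABLES';
  CREATE TABLE t1 (a TIMESTAMP(1) NULL, b TIMESTAMP(0) NULL);
  INSERT INTO t1 (a) VALUES ('1970-01-01 00:00:00.1');
  UPDATE t1 SET b = a;  -- previously stored a zero datetime silently;
                        -- now raises a warning, escalated to an error in strict mode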
Field_timestampf::val_native() checked only the
first four bytes to detect zero dates.
That was not enough. Fixing the code to check all packed_length()
bytes to detect zero dates.
The code in Field_timestamp::save_in_field() did not catch
zero datetimes and stored them into the other field like a usual value
using store_timestamp_dec(), which knows nothing about zero dates and
treats {tv_sec=0, tv_usec=0} as a normal timeval value corresponding to
'1970-01-01 00:00:00 +00:00'.
Fixing the code to catch the special combination (ts==0 && sec_part==0) and
store it using store_time_dec() with a zero datetime passed as an argument.
Assertion "from->s->online_alter_binlog == NULL" fails in
copy_data_between_tables, signaling that a table share is being reused
(in another alter) after a lock upgrade to EXCLUSIVE fails.
Commit 3059f27 relaxed the lock to be upgraded to MDL_SHARED_NO_WRITE, leaving
the upgrade to happen later via a common wait_while_table_is_used() call.
However, the error handling there is not enough for the online alter case, where
we require (for now) the table to be flushed in order to clean up the memory
properly.
* Add another lock upgrade (to MDL_EXCLUSIVE) after the second replication stage
in copy_data_between_tables.
The error from this upgrade will be handled by the branch presented further in
the function.
MDEV-33450 Assertion fails in main.alter_table_online_debug
`TABLE_SHARE` that is being online-altered has a shared `s->online_alter_binlog`
member that all concurrent DMLs are writing to. Online alter thread deletes it
under the MDL_EXCLUSIVE lock. If upgrading the lock to MDL_EXCLUSIVE fails, the
table is marked as `flushed` and it's freed automatically when its usage drops to zero.
In commit 3059f27 the lock upgrade was relaxed to MDL_SHARED_NO_WRITE to allow
concurrent SELECT threads during the final `online_alter_read_from_binlog()`
pass. An attempt to upgrade the lock to MDL_EXCLUSIVE was still happening, but
much later — after the code that marked the table `flushed`.
That is, if the upgrade failed, the table was left with a stale
`s->online_alter_binlog` triggering an assert in a future online alter.
To fix this, upgrade the lock to MDL_EXCLUSIVE earlier, after the final
`online_alter_read_from_binlog()`.
When binlog_get_pending_rows_event was refactored, one usage in
binlog_need_stmt_format was not taken into account.
As binlog_get_pending_rows_event now requires an existing cache_mngr, this check
is now made first.
When calculating the next wakeup timepoint for the timer thread, with large
thread_pool_stall_limit values, a 32-bit int overflow can happen.
Fixed by making one operand 8 bytes large.
Also fixed the type of tick_interval to be unsigned, so it does not
go negative for very large thread_pool_stall_limit values.
This patch augments Gtid_log_event with the user thread-id.
In particular that compensates for the loss of this info in
Rows_log_events.
Gtid_log_event::thread_id gets visible in mysqlbinlog output like
#231025 16:21:45 server id 1 end_log_pos 537 CRC32 0x1cf1d963 GTID 0-1-2 ddl thread_id=10
as a 32 bit unsigned integer. Note this is a 32-bit value, as
the connection id can only be 32 bits (see MDEV-15089 for
details).
While the size of the Gtid event has grown by 4 bytes,
replication between OLD <-> NEW is not affected by it. This patch
also slightly changes the logic to convert Gtid events to Query
events for older replicas which don't support Gtid. Instead of
hard-coding the padding of the sys var section of the generated
Query event, the length to pad is dynamically calculated based
on the length of the Gtid event.
This work was started by the late Sujatha Sivakumar.
Brandon Nesterenko took it over, reviewed initial patches and
extended the work.
Also thanks to Andrei for his help in finalizing the fixes for
MDEV-33924, which were squashed into this patch.
Reviewed-by:
=============
Andrei Elkin <andrei.elkin@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
- ZLIB_LIBRARIES, not ZLIB_LIBRARY
- ZLIB_INCLUDE_DIRS, not ZLIB_INCLUDE_DIR
For building libmariadb, ZLIB_LIBRARY/ZLIB_INCLUDE_DIR are still defined
This workaround will be removed later.
The two I_S plugins SPIDER_ALLOC_MEM and SPIDER_WRAPPER_PROTOCOL
only make sense if the main SPIDER plugin is installed. Further,
SPIDER_ALLOC_MEM requires a mutex that requires SPIDER init to fill
the table.
We also update the spider init query to override
--transaction_read_only=on so that it does not affect the spider init.
Also fixed error handling in spider_db_init() so that failure in
spider table init does not result in a memory leak.
Clear any pending deadlock kill after completing XA PREPARE, and before
updating the mysql.gtid_slave_pos table in a separate transaction.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
One case is conflicting transactions T1 and T2 with different domain id, in
optimistic parallel replication in non-GTID mode. Then T2 will
wait_for_prior_commit on T1; and if T1 got a row lock wait on T2 it would
hang, as different domains caused the deadlock kill to be skipped in
thd_rpl_deadlock_check().
More generally, if we have transactions T1 and T2 in one domain/master
connection, and independent transactions U in another, then we can
still deadlock like this:
T1 row lock wait on U
U row lock wait on T2
T2 wait_for_prior_commit on T1
This commit enforces the deadlock kill in these cases. If the waited-for
transaction is speculatively applied, then it will be deadlock killed in
case of a conflict, even if the two transactions are in different domains
or master connections.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Improve detection for DES support in OpenSSL, to allow compilation
against system OpenSSL without DES.
Note that MariaDB needs to be compiled against an OpenSSL-like library
that itself has DES support, which cmake detects. Positive detection
is indicated with the CMake variable HAVE_des being set to 1.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@surgut.co.uk>
The problem was that the signal thread was not killed when using
unireg_abort().
The bug was introduced by:
MDEV-30260: Slave crashed:reload_acl_and_cache during shutdown
Other things fixed:
- Don't produce memory leaks with safemalloc if all threads were not
ended properly (not useful)
It was possible to create virtual columns with incompatible
GENERATED ALWAYS expression data types:
CREATE TABLE t1 (a INT, b POINT GENERATED ALWAYS AS (a));
CREATE TABLE t1 (a POINT, b INT GENERATED ALWAYS AS (a));
These data type combinations are not allowed in other cases,
e.g. INSERT, UPDATE, SP variable assignment.
Fix:
Disallowing bad combinations of the column data type and its
GENERATED ALWAYS expression data type.
Analysis:
When we scan json to get to a beginning according to the path, we end up
scanning the json even if we have exhausted it, which eventually returns an error.
Fix:
Continue scanning the json only if we have not exhausted it, and return the result
accordingly.
Analysis:
When scanning json and getting the exact path at each step, if a path
is reached, we end up adding the item to the result and immediately getting the
next item, which results in the current path changing.
Fix:
Instead of immediately returning the item, count the occurrences of the path
in the argument and append to the result as needed.
(returns NULL) and for Date/DateTime returns "INTEGER"
Analysis:
When the first character of the json is scanned and it is a number,
INTEGER is returned based on that.
Fix:
Scan the rest of the json before returning the final result to ensure the json is
valid in the first place, in order to have a valid type.
Regexp_processor_pcre::fix_owner() called Regexp_processor_pcre::compile(),
which could fail on the regex syntax error in the pattern and put
an error into the diagnostics area. However, the callers:
- Item_func_regex::fix_length_and_dec()
- Item_func_regexp_instr::fix_length_and_dec()
still returned "false" in such cases, which made the code
crash later inside Diagnostics_area::set_ok_status().
Fix:
- Change the return type of fix_owner() from "void" to "bool"
and return "true" whenever an error is put to the DA
(e.g. on the syntax error in the pattern).
- Fixing fix_length_and_dec() of the mentioned Item_func_xxx
classes to return "true" if fix_owner() returned "true".
The negation in this line:
ulonglong abs_dec= dec_negative ? -dec : dec;
did not take into account that 'dec' can be the smallest possible
signed negative value -9223372036854775808. Its negation is
an operation with undefined behavior.
Fixing the code to use Longlong_hybrid, which implements a safe
method to get an absolute value.
Fixing the problem that an operation involving a mix of
two or more GEOMETRY operands did not preserve their SRIDs.
Now SRIDs are preserved by hybrid functions, subqueries, TVCs, UNIONs, VIEWs.
Incompatible change:
An attempt to mix two different SRIDs now raises an error.
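A sketch of the new behaviour (SRID numbers are arbitrary):
  SELECT ST_SRID(COALESCE(ST_GeomFromText('POINT(1 1)', 4326),
                          ST_GeomFromText('POINT(2 2)', 4326)));  -- SRID 4326 is preserved
  SELECT COALESCE(ST_GeomFromText('POINT(1 1)', 4326),
                  ST_GeomFromText('POINT(2 2)', 0));              -- mixing SRIDs now raises an error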
Details:
- Adding a new class Type_extra_attributes. It's a generic
container which can store very specific data type attributes.
For now it can store one uint32 and one const pointer attribute
(for GEOMETRY's SRID and for ENUM/SET TYPELIB respectively).
In the future it can grow as needed.
Type_extra_attributes will also be reused soon to store "const Type_zone*"
pointers for the TIMESTAMP's "WITH TIME ZONE 'tz'" attribute
(a timestamp data type with a fixed time zone independent from @@time_zone).
The time zone attribute will be stored in exactly the same way like
a TYPELIB pointer is stored by ENUM/SET.
- Removing Column_definition_attributes members "interval" and "srid".
Deriving Column_definition_attributes from the generic attribute container
Type_extra_attributes instead.
- Adding a new class Type_typelib_attributes, to store
the TYPELIB of the ENUM and SET data types. Deriving Field_enum from it.
Removing the member Field_enum::typelib.
- Adding a new class Type_geom_attributes, to store
the GEOMETRY related attributes. Deriving Field_geom from it.
Removing the member Field_geom::srid.
- Removing virtual methods:
Field::get_typelib()
Type_all_attributes::get_typelib() and
Type_all_attributes::set_typelib()
They were very specific to TYPELIB.
Adding more generic virtual methods instead:
* Field::type_extra_attributes() - to get extra attributes
* Type_all_attributes::type_extra_attributes() - to get extra attributes
* Type_all_attributes::type_extra_attributes_addr() - to set extra attributes
- Removing Item_type_holder::enum_set_typelib. Deriving Item_type_holder
from the generic attribute container Type_extra_attributes instead.
This makes it possible for UNION to preserve SRID
(in addition to preserving TYPELIB).
- Deriving Item_hybrid_func from Type_extra_attributes.
This makes it possible for hybrid functions (e.g. CASE, COALESCE,
LEAST, GREATEST etc) to preserve SRID.
- Deriving Item_singlerow_subselect from Type_extra_attributes and
overriding methods:
* Item_cache::type_extra_attributes()
* subselect_single_select_engine::fix_length_and_dec()
* Item_singlerow_subselect::type_extra_attributes()
* Item_singlerow_subselect::type_extra_attributes_addr()
This is needed to preserve SRID in subqueries and TVCs
- Cleanup: fixing the data type of members
* Binlog_type_info::m_enum_typelib
* Binlog_type_info::m_set_typelib
from "TYPELIB *" to "const TYPELIB *"
Replicated events have time associated with them from the originating
node, which will be used for the commit timestamp. The associated time can
be set in the past before the event is even applied.
For WSREP replication we don't need to use the time information from
the event.
Addressed review comments:
Jan Lindström <jan.lindstrom@galeracluster.com>
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
0ccdf54 removed stack allocated THD objects from functions such as
Wsrep_schema::replay_transaction(). However, it inadvertently
anticipated the destruction of the THD, causing assertions and usage
of the THD after it was destroyed.
The fix consists in extracting the original function into a separate
function, and leave the allocation and destruction of the THD object
in Wsrep_schema::replay_transaction(), making sure that using the heap
allocated THD has no side effects.
Same for Wsrep_schema::recover_sr_transactions().
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that Item_default_value::associate_with_target_field
assigned the field passed as an argument, which changed the argument
in the case of a default() call with a certain field (i.e. default(field)).
There is no way to get a wrong field in the constructor, so we will not
reassign the parameter.
Extends 89c907bd4f to account for
binlog_two_phase_alter flags in a Gtid log event. I.e., if the
FL_COMMIT_ALTER_E1 or FL_ROLLBACK_ALTER_E2 flags are set in the
event flags, yet the length of the event is too short to hold
the value, then set the event as invalid
Fix the previous patch:
- Only enable handler_stats if thd->should_collect_handler_stats()==true.
- Make handler_index_cond_check() work when handler_stats are not enabled.
instance, for ICP accounting, which is created
during ha_handler_stats_reset(). Always invoke that method
during table init to ensure that the handler, regardless of
implementation, has the pointer set correctly
Part#2, variant 2: Make the printed r_ values in JSON output consistent.
After this patch, ANALYZE output has:
- r_index_rows (NEW) - Observed number of rows before ICP or Rowid Filtering
checks. This is a per-scan average, like r_rows and "rows" are.
- r_rows (AS BEFORE) - Observed number of rows after ICP and Rowid Filtering.
- r_icp_filtered (NEW) - Observed selectivity of ICP condition.
- (AS BEFORE) observed selectivity of Rowid Filter is in
$.rowid_filter.r_selectivity_pct
- r_total_filtered - Observed combined selectivity: fraction of rows left
after applying ICP condition, Rowid Filter, and attached_condition.
This is now comparable with "filtered" and is printed right after it.
- r_filtered (AS BEFORE) - Observed selectivity of "attached_condition".
Tabular ANALYZE output is not changed. Note that JSON's r_filtered and
r_rows have the same meanings as before and have the same meaning as in
tabular output.
(Based on the original patch by Jason Cu)
Part #1:
- Add ha_handler_stats::{icp_attempts,icp_match}, make
handler_index_cond_check() increment them.
- ANALYZE FORMAT=JSON now prints r_icp_filtered based on these counters.
I checked all stack overflow potential problems found with
gcc -Wstack-usage=16384
and
clang -Wframe-larger-than=16384 -no-inline
Fixes:
Added '#pragma clang diagnostic ignored "-Wframe-larger-than="'
to a lot of functions where the stack usage is large but reasonable.
- Added stack check warnings to BUILD scripts when using clang and debug.
Functions changed to use malloc instead of allocating things on the stack:
- read_bootstrap_query() now allocates line_buffer (20000 bytes) with
malloc() instead of using the stack. This has a small performance impact
but this is not relevant for bootstrap.
- mroonga grn_select() used 65856 bytes on stack. Changed it to use
malloc().
- Wsrep_schema::replay_transaction() and
Wsrep_schema::recover_sr_transactions().
- Connect zipOpen3()
Not fixed:
- mroonga/vendor/groonga/lib/expr.c grn_proc_call() uses
43712 bytes on stack. However this is not easy to fix as the stack
used is caused by a lot of code generated by defines.
- Most changes in mroonga/groonga were only the addition of pragmas to disable
stack warnings.
- rocksdb/options/options_helper.cc uses 20288 of stack space.
(no reason to fix except to get rid of the compiler warning)
- Cases using alloca() where the allocation size is reasonable.
- An issue in libmariadb (reported to connectors).
The problem was an assertion assuming we always hold the
THD::LOCK_thd_data mutex, which is not true.
In most cases this is true, but the function is
also used from the InnoDB lock manager, and
there we can't take THD::LOCK_thd_data to
obey the mutex ordering. Removed the assertion, as
the wsrep transaction state can't change even in
that case.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
There is a convention that Item::val_int() and Item::val_real() return
SQL NULL by doing effectively what this code does:
null_value= true;
return 0; // Always return 0 for SQL NULL
This is done to optimize boolean value evaluation:
if Item::val_int() or Item::val_real() returned 1,
that always means TRUE and can never mean SQL NULL.
This convention helps to avoid unnecessarily testing
Item::null_value after getting a non-zero return value.
Item_func_min_max did not follow this convention.
It could return a non-zero value together with null_value==true.
This made evaluate_join_record() erroneously misinterpret
SQL NULL as TRUE in this call:
select_cond_result= MY_TEST(select_cond->val_int());
Fixing Item_func_min_max to follow the convention.
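A sketch of the visible effect (table and data are illustrative):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  SELECT * FROM t1 WHERE LEAST(a, NULL);
  -- LEAST() returns SQL NULL here, so the row must not be returned;
  -- before the fix the NULL could be misread as TRUE in evaluate_join_record()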
in the $case=2 - it's wrong to kill after the first binlog EOF,
because that might happen between INSERT(4) and INSERT(5).
So, wait for the slave to acknowledge INSERT(5) before killing
the master, that is, both connection threads must pass
repl_semisync_master.wait_after_sync()
The slave IO thread sets MYSQL_SET_CHARSET_DIR. The code for this option
however is not thread-safe in sql-common/client.c. The value set is
temporarily written to mysys global variable `charsets-dir` and can be seen
by other threads running in parallel, which can result in use-after-free
error.
Problem was visible as random failures of test cases in suite multi_source
with Valgrind or MSAN.
Worked around by not setting this option for the slave connection; it is redundant
anyway as it just sets the default value.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
it's a pointer into the net buffer, so it might be overwritten by the
next read or write. And the next plugin switch (in multi-auth) will
try to compare it (in send_plugin_request_packet) which is normally
harmless but fails the assert with Lex_ident::is_valid_ident()
This patch also fixes:
MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-cast-table-names=0
- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER
- Adding a wrapper function CHARSET_INFO::streq(), to compare
two strings for equality. For now it calls strnncoll() internally.
In the future it will turn into a virtual function.
- Adding new accent sensitive case insensitive collations:
- utf8mb4_general1400_as_ci
- utf8mb3_general1400_as_ci
They implement accent sensitive case insensitive comparison.
The weight of a character is equal to the code point of its
upper case variant. These collations use Unicode-14.0.0 casefolding data.
The result of
my_charset_utf8mb3_general1400_as_ci.strcoll()
is very close to the former
my_charset_utf8mb3_general_ci.strcasecmp()
There is only a difference in a couple dozen rare characters, because:
- the switch from "tolower" to "toupper" comparison, to make
utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
- the switch from Unicode-3.0.0 to Unicode-14.0.0
This difference should be tolerable. See the list of affected
characters in the MDEV description.
Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
Unlike utf8mb4_general_ci, it does not treat all BMP characters
as equal.
- Adding classes representing names of the file based database objects:
Lex_ident_db
Lex_ident_table
Lex_ident_trigger
Their comparison collation depends on the underlying
file system case sensitivity and on --lower-case-table-names
and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.
- Adding classes representing names of other database objects,
whose names have case insensitive comparison style,
using my_charset_utf8mb3_general1400_as_ci:
Lex_ident_column
Lex_ident_sys_var
Lex_ident_user_var
Lex_ident_sp_var
Lex_ident_ps
Lex_ident_i_s_table
Lex_ident_window
Lex_ident_func
Lex_ident_partition
Lex_ident_with_element
Lex_ident_rpl_filter
Lex_ident_master_info
Lex_ident_host
Lex_ident_locale
Lex_ident_plugin
Lex_ident_engine
Lex_ident_server
Lex_ident_savepoint
Lex_ident_charset
engine_option_value::Name
- All the mentioned Lex_ident_xxx classes implement a method streq():
if (ident1.streq(ident2))
do_equal();
This method works as a wrapper for CHARSET_INFO::streq().
- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
in class members and in function/method parameters.
- Replacing all calls like
system_charset_info->coll->strcasecmp(ident1, ident2)
to
ident1.streq(ident2)
- Taking advantage of the c++11 user defined literal operator
for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
data types. Use example:
const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
is now a shorter version of:
const Lex_ident_column primary_key_name=
Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
Add assertions about limitations one has when using Index Condition
Pushdown:
- add handler::assert_icp_limitations()
- call this function from functions that may attempt violations.
Verified that assert_icp_limitations() as well as calls to it are
compiled away in release build.
If replicating an event in ROW format, and InnoDB detects a deadlock
while searching for a row, the row event will error and rollback in
InnoDB and indicate that the binlog cache also needs to be cleared,
i.e. by marking thd->transaction_rollback_request. In the normal
case, this will trigger an error in Rows_log_event::do_apply_event()
and cause a rollback. During the Rows_log_event::do_apply_event()
cleanup of a successful event application, there is a DBUG_ASSERT in
log_event_server.cc::rows_event_stmt_cleanup(), which sets the
expectation that thd->transaction_rollback_request cannot be set
because the general rollback (i.e. not the InnoDB rollback) should
have happened already. However, if the replica is configured to skip
deadlock errors, the rows event logic will clear the error and
continue on, as if no error happened. This results in
thd->transaction_rollback_request being set while in
rows_event_stmt_cleanup(), thereby triggering the assertion.
This patch fixes this in the following ways:
1) The assertion is invalid, and thereby removed.
2) The rollback case is forced in rows_event_stmt_cleanup() if
transaction_rollback_request is set.
Note the differing behavior between transactions which are skipped
due to deadlock errors and other errors. When a transaction is
skipped due to an ignored deadlock error, the entire transaction is
rolled back and skipped (though note MDEV-33930 which allows
statements in the same transaction after the deadlock-inducing one
to commit). When a transaction is skipped due to ignoring a
different error, only the erroring statements are rolled-back and
skipped - the rest of the transaction will execute as normal. The
effect of this can be seen in the test results. The added test case
to rpl_skip_error.test shows that only statements which are ignored
due to non-deadlock errors are ignored in larger transactions. A
diff between rpl_temporary_error2_skip_all.result and
rpl_temporary_error2.result shows that all statements in the errored
transaction are rolled back (diff pasted below):
: diff rpl_temporary_error2.result rpl_temporary_error2_skip_all.result
49c49
< 2 1
---
> 2 NULL
51c51
< 4 1
---
> 4 NULL
53c53
< * There will be two rows in t2 due to the retry.
---
> * There will be one row in t2 because the ignored deadlock does not retry.
57d56
< 1
59c58
< 1
---
> 0
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Support index condition pushdown within partitioned tables.
- ha_partition will pass the pushed index condition into all of the used
partitions.
- We require all of the partitions to handle the pushed index
condition in the same way.
- When using ICP, one may read rows (e.g. call h->index_read_map(buf, ...))
only to buf= table->record[0], for two reasons:
* Pushed index condition's Item_field objects point into record[0]
* InnoDB requires this: it calls offset() which assumes record[0].
So, when using ICP, ha_partition will read partition records to
table->record[0] and then will copy record away if it needs it to be
elsewhere.
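A sketch of a query where the pushed condition can now reach the partitions
(table layout is illustrative; whether ICP is chosen is up to the optimizer):
  CREATE TABLE t1 (a INT, b INT, c INT, KEY (a, b))
    PARTITION BY HASH (c) PARTITIONS 4;
  EXPLAIN SELECT * FROM t1 WHERE a = 5 AND b MOD 3 = 0;
  -- "Using index condition" may now appear for the partitioned table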
create_partitioning_metadata() should only mark the transaction r/w
if it actually did anything (that is, the table is partitioned).
Otherwise it's a no-op, called even for temporary tables, and
it shouldn't do anything at all.
Ideally our methods and functions should do one thing, do that well,
and do only that. add_table_to_list does far more than adding a
table to a list, so this commit factors the TABLE_LIST creation out
to a new TABLE_LIST constructor. It then uses placement new()
to create it in the correct memory area (result of thd->calloc).
Benefits of this approach:
1. add_table_to_list now returns as early as possible on an error
2. fewer side-effects incurred on creating the TABLE_LIST object
3. TABLE_LIST won't be calloc'd if copy_to_db fails
4. local declarations moved closer to their respective first uses
5. improved code readability and logical flow
Also factored a couple of other functions to keep the happy path
more to the left, which makes them easier to follow at a glance.
Synopsis: If a SELECT returns its answer from the Query Cache it is not really executed.
The reason for firing of assertion
DBUG_ASSERT((mem_root->flags & ROOT_FLAG_READ_ONLY) == 0);
is that in case the query cache is on and the same query is run by different
stored routines, the following use case can take place:
First, let's say that the bodies of the routines used by the test case are the same
and contain only the query 'SELECT * FROM t1';
call p1() -- a result set is stored in query cache for further use.
call p2() -- the same query is run against the table t1, which results in
not running the actual query but using its cached result.
On finishing execution of this routine, its memory root is
marked for read only since every SP instruction that this
routine contains has been executed.
INSERT INTO t1 VALUES (1); -- force the following invalidation of the query cache
call p2() -- querying the table t1 will result in an assertion failure since its
           execution would require allocation on the memory root that
           has already been marked as a read only memory root.
The root cause of firing the assertion is that the memory root of the stored
routine 'p2' was marked as read only although the actual execution of the query
contained inside hadn't been performed.
To fix the issue, mark an SP instruction as not yet run in case its execution
doesn't result in real query processing and the result set is taken from the
query cache instead.
Note that this issue relates to a server built in debug mode AND with the protect
statement memory root feature turned on. It doesn't affect a server built
in release mode.
This way, if the manager thread somehow starts and stops again quickly before
the main thread wakes up to check if it started correctly, we will not hang.
Patch suggested by Monty as follow-up to
7f498fbab8
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
- Add additional MTRs for more coverage on invalid options
- Updating a few error messages to be more informative
- Use the exit code from handle_options() when there is an error processing
user options
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
When passing in an invalid value (e.g. incorrect data type) for a variable, the
server startup will fail with misleading error messages.
The behavior **before** this change:
For server options:
- The error message will indicate that the argument is being adjusted to a valid value
- Server startup still fails
For plugin options:
- The error message will indicate that the argument is being adjusted to a valid value
- The plugin is still disabled
- Server startup fails with a message that it does not recognize the plugin option
The behavior **after** this change:
For server options:
- Output that an invalid argument was provided
- Exit server startup
For plugin options:
- Output that an invalid argument was provided
- Disable the plugin
- Attempt to continue server startup
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
The crash when running mysqlbinlog on a binlog file containing a SEQUENCE
was caused by the MDEV-29621 fixes, which did not check whether the slave
or the binlog applier executes a block introduced there.
The block is meaningful only for the parallel slave applier, so
it's safe to fix this bug by identifying the actual applier and
skipping the block when it is the mysqlbinlog one.
Fixed that internal temporary tables were not waiting for freed disk space.
Other things:
- 'kill id' will now kill a query waiting for free disk space instantly.
Before, it could take up to 60 seconds for the kill to be noticed.
- Fixed that sorting of one index was not using MY_WAIT_IF_FULL for temp files.
- Fixed a bug where share->write_flag set MY_WAIT_IF_FULL for temp files.
It is quite hard to do a test case for this. Instead I tested all
combinations interactively.
TIME-alike string and numeric arguments to TIMEDIFF()
can get additional fractional seconds during the supported
TIME range adjustment in get_time().
For example, during TIMEDIFF('839:00:00','00:00:00') evaluation
in Item_func_timediff::get_date(), the call for args[0]->get_time()
returns MYSQL_TIME '838:59:59.999999'.
Item_func_timediff::get_date() did not handle these extra digits
and returned a MYSQL_TIME result with fractional digits outside
of Item_func_timediff::decimals. This mismatch could further be
caught by a DBUG_ASSERT() in various other pieces of the code,
leading to a crash.
Fix:
In case get_time() returned MYSQL_TIMESTAMP_TIME,
let's truncate all extra digits using my_time_trunc(&l_time,decimals).
This guarantees that the rest of the code returns a MYSQL_TIME
with second_part not conflicting with Item_func_timediff::decimals.
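Conceptually, the truncation keeps only 'decimals' fractional digits of the microsecond part, so the result cannot carry more precision than Item_func_timediff::decimals advertises. A self-contained sketch of that truncation (a simplified stand-in; the server does this with my_time_trunc() on the full MYSQL_TIME value):

  #include <cassert>

  // Simplified stand-in for MYSQL_TIME: second_part holds microseconds,
  // i.e. up to 6 fractional digits.
  struct TimeValue
  {
    unsigned hour, minute, second;
    unsigned long second_part;                 // 0..999999
  };

  // Keep only 'decimals' (0..6) fractional digits, dropping the rest.
  void trunc_fractional(TimeValue *t, unsigned decimals)
  {
    assert(decimals <= 6);
    static const unsigned long scale[7]=
      { 1000000, 100000, 10000, 1000, 100, 10, 1 };
    t->second_part= t->second_part / scale[decimals] * scale[decimals];
  }
  // Example: '838:59:59.999999' with decimals=0 becomes '838:59:59'.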
When the system variable @@debug_dbug was assigned
some expression, Sys_debug_dbug::do_check() did not properly
convert the value from the expression character set to utf8.
So the value was erroneously re-interpreted as utf8 without
conversion. In case of a tricky expression character set
(e.g. utf16le), this led to unexpected results.
Fix:
Re-using Sys_var_charptr::do_string_check() in Sys_debug_dbug::do_check().
Values of all session tracking system variables will be sent in the
first ok packet upon connection after successful authentication.
Also updated mtr to print session track info on connection (h/t Sergei
Golubchik) so that we can write mtr tests for this change.
The signal handler thread can use various different runtime
resources when processing a SIGHUP (e.g. master-info information)
due to calling into reload_acl_and_cache(). Currently, the shutdown
process waits for the termination of the signal thread after
performing cleanup. However, this could cause resources actively
used by the signal handler to be freed while reload_acl_and_cache()
is processing.
The specific resource that caused MDEV-30260 is a race condition for
the hostname_cache, such that mysqld would delete it in
clean_up()::hostname_cache_free(), before the signal handler would
use it in reload_acl_and_cache()::hostname_cache_refresh().
Another similar resource is the active_mi/master_info_index. There
was a race between its deletion by the main thread in end_slave(),
and its usage by the Signal Handler as a part of
Master_info_index::flush_all_relay_logs.read(active_mi) in
reload_acl_and_cache().
This patch fixes these race conditions by relocating the point where server
shutdown waits for the signal handler to die to after
server-level threads have been killed (i.e., as a last step of
close_connections()). With respect to the hostname_cache, active_mi
and master_info_cache, this ensures that they cannot be destroyed
while the signal handler is still active, and potentially using
them.
Additionally:
1) This requires that Events memory is still in place for SIGHUP
handling's mysql_print_status(). So event deinitialization is moved
into clean_up(), but the event scheduler still needs to be stopped
in close_connections() at the same spot.
2) The function kill_server_thread is no longer used, so it is
deleted
3) The timeout to wait for the death of the signal thread was not
consistent with the comment. The comment mentioned up to 10 seconds,
whereas it was actually 0.01s. The code has been fixed to wait up to
10 seconds.
4) A warning has been added if the signal handler thread fails to
exit in time.
5) Added pthread_join() to end of wait_for_signal_thread_to_end()
if it hadn't ended in 10s with a warning. Note this also removes
the pthread_detached attribute from the signal_thread to allow
for the pthread_join().
Reviewed By:
===========
Vladislav Vaintroub <wlad@mariadb.com>
Andrei Elkin <andrei.elkin@mariadb.com>
What's happening:
1. Query_cache::insert() locks the QC and verifies that it's enabled
2. parallel thread tries to disable it. trylock fails (QC is locked)
so the status becomes DISABLE_REQUEST
3. Query_cache::insert() calls Query_cache::write_result_data()
which allocates a new block and unlocks the QC.
4. Query_cache::unlock() notices there are no more QC users and a
pending DISABLE_REQUEST so it disables the QC and frees all the
memory, including the new block that was just allocated
5. Query_cache::write_result_data() proceeds to write into the freed block
Fix: change m_cache_status under a mutex.
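A generic sketch of the fix's locking pattern (not the actual Query_cache code): the status word is only read and written while holding the same lock that protects the cache memory, so a pending disable cannot free blocks that a writer still holds.

  #include <mutex>

  enum cache_status { ENABLED, DISABLE_REQUEST, DISABLED };

  struct Cache
  {
    std::mutex lock;                  // guards both the status and the storage
    cache_status status= ENABLED;

    void request_disable()
    {
      std::lock_guard<std::mutex> g(lock);
      if (status == ENABLED)
        status= DISABLE_REQUEST;      // real free happens when the last user leaves
    }

    bool still_enabled()
    {
      std::lock_guard<std::mutex> g(lock);
      return status == ENABLED;       // never cache this across an unlocked region
    }
  };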
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The problem was a too tight condition in ha_commit_trans that did not
allow non-transactional storage engines to participate in 2PC
in the Galera case. This is required because a transaction
using e.g. procedures might read the mysql.proc table inside
a transaction, and that table currently uses the Aria
storage engine, which does not support 2PC.
Fixed by allowing read-only transactions on storage
engines that do not support two-phase commit to participate in a
2PC transaction. These will be committed later separately.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Keep track of each recently active XID, recording which worker it was queued
on. If an XID might still be active, choose the same worker to queue event
groups that refer to the same XID to avoid conflicts.
Otherwise, schedule the XID freely in the next round-robin slot.
This way, XA PREPARE can normally be scheduled without restrictions (unless
duplicate XID transactions come close together). This improves scheduling
and parallelism over the old method, where the worker thread to schedule XA
PREPARE on was fixed based on a hash value of the XID.
XA COMMIT will normally be scheduled on the same worker as XA PREPARE, but
can be a different one if the XA PREPARE is far back in the event history.
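A rough sketch of the bookkeeping (hypothetical names; the real map lives in the parallel-replication scheduling state): remember which worker a recently seen XID was queued on and reuse it, otherwise fall back to round-robin.

  #include <string>
  #include <unordered_map>

  struct Xid_scheduler
  {
    unsigned n_workers;
    unsigned next_rr= 0;                                   // round-robin cursor
    std::unordered_map<std::string, unsigned> recent_xid;  // XID -> worker

    explicit Xid_scheduler(unsigned workers) : n_workers(workers) {}

    unsigned choose_worker(const std::string &xid)
    {
      auto it= recent_xid.find(xid);
      if (it != recent_xid.end())
        return it->second;                // possibly active XID: same worker
      unsigned w= next_rr;                // otherwise schedule freely
      next_rr= (next_rr + 1) % n_workers;
      recent_xid[xid]= w;                 // remember for a later XA COMMIT
      return w;
    }
    // Entries for XIDs that can no longer be active would be purged here.
  };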
Testcase and code for trimming the dynamic array are due to Andrei.
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This is a preparatory patch to facilitate the next commit to improve
the scheduling of XA transactions in parallel replication.
When choosing the scheduling bucket for the next event group in
rpl_parallel_entry::choose_thread(), use an explicit FIFO for the
round-robin selection instead of a simple cyclic counter i := (i+1) % N.
This allows scheduling XA COMMIT/ROLLBACK dependencies explicitly without
changing the round-robin scheduling of other event groups.
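A small sketch of the idea (simplified): the worker slots sit in an explicit FIFO, so one specific worker can be taken out of turn for an XA dependency without disturbing the rotation of the others.

  #include <deque>

  struct Round_robin
  {
    std::deque<unsigned> fifo;               // worker indexes, oldest first

    explicit Round_robin(unsigned n)
    {
      for (unsigned i= 0; i < n; i++)
        fifo.push_back(i);
    }

    unsigned next()                          // the normal round-robin step
    {
      unsigned w= fifo.front();
      fifo.pop_front();
      fifo.push_back(w);                     // back to the end of the queue
      return w;
    }

    // Take a given worker out of turn, e.g. to put an XA COMMIT on the same
    // worker as its XA PREPARE. Assumes w is currently somewhere in the queue.
    unsigned take_specific(unsigned w)
    {
      for (auto it= fifo.begin(); it != fifo.end(); ++it)
        if (*it == w) { fifo.erase(it); break; }
      fifo.push_back(w);
      return w;
    }
  };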
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Only locked partitions will participate in the query in this case. Chances are
that not-locked partitions were not opened, which is the cause of the
crash in the added test case spider/bugfix.mdev_33731 without this
patch.
The JSON test about max statement time fails on FreeBSD because on some
architectures the test might execute faster and the statement may not fail.
To simulate failure regardless of architecture, introduce a wait of several
seconds longer than max_statement_time.
A GTID event can have variable length, with contributing factors
such as the variable length from the flags2 and optional extra flags
fields. These fields are bitmaps, where each set bit indicates an
additional value that should be appended to the event, e.g.
multi-engine transactions append a number to indicate the number of
additional engines a transaction uses. However, if a flags bit is
set, and no additional fields are appended to the event, MDEV-33672
reports that the server can still try to read from memory as if they
did exist. Note, however, that in debug builds this condition is
asserted for FL_EXTRA_MULTI_ENGINE.
This patch fixes this to check that the length of the event is
aligned with the expectation set by the flags for FL_PREPARED_XA,
FL_COMPLETED_XA, and FL_EXTRA_MULTI_ENGINE.
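Conceptually the check is a bounds test before each optional field is read: every set flag promises a known number of extra bytes, and the reader must verify those bytes are really present. A simplified sketch with a hypothetical flag value and field size (the real event uses the FL_* flags above with their own layouts):

  #include <cstddef>
  #include <cstdint>

  static const uint8_t FLAG_EXTRA_ENGINES= 0x01;   // hypothetical: adds 1 byte

  // Returns false if the flags promise more data than the event contains.
  bool read_optional_fields(const uint8_t *buf, size_t len, uint8_t flags,
                            uint8_t *extra_engines)
  {
    size_t pos= 0;
    if (flags & FLAG_EXTRA_ENGINES)
    {
      if (pos + 1 > len)
        return false;              // flag set but field missing: malformed event
      *extra_engines= buf[pos++];
    }
    return true;                   // further optional fields are checked the same way
  }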
Reviewed By
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Commit 6dce6aeceb breaks out of a loop
in ha_partition::info when some partitions aren't opened, in which
case auto_increment_value assertion will fail. This commit patches
that hole.
We add an extra condition that makes the inequality testing in
SEQUENCE::increment_value() mathematically watertight, and we cast to
and from unsigned in potential underflow and overflow addition and
subtractions to avoid undefined behaviour.
Let's start by distinguishing between c++ expressions and mathematical
expressions. By a c++ expression I mean an expression with the outcome
determined by the compiler/runtime. By a mathematical expression I mean
an expression whose value is mathematically determined. So the c++
expression -9223372036854775806 - 1000 at worst can evaluate to any
value due to underflow. The mathematical expression -9223372036854775806
- 1000 evaluates to -9223372036854776806.
The problem boils down to how to write a c++ expression equivalent to
a mathematical expression x + y < z, where x and z can take any values
of long long int, and y < 0 is also a long long int. Ideally we want
to avoid underflow, but I'm not sure how this can be done.
The correct c++ form should be (x + y < z || x < z - y || x < z).
Let M=9223372036854775808 i.e. LONGLONG_MAX + 1. We have
-M < x < M - 1
-M < y < 0
-M < z < M - 1
Let's consider the case where x + y < z is true as a mathematical
expression.
Suppose the first disjunct underflows, i.e. the mathematical expression
x + y < -M. If the arbitrary value resulting from the underflow causes
the c++ expression to hold too, then we are done. Otherwise we move
onto the next expression x < z - y. If there's no overflow in z
- y then we are done. If there's overflow i.e. z - y > M - 1,
and the c++ expression evals to false, then we are onto x < z.
There's no over or underflow here, and it will eval to true. To see
this, note that
x + y < -M means x < -M - y < -M - (-M) = 0
z - y > M - 1 means z > y + M - 1 > - M + M - 1 = -1
so x < z.
Now let's consider the case where x + y < z is false as a mathematical
expression.
The first disjunct will not underflow in this case, so we move to (x <
z - y). This will not overflow. To see this, note that
x + y >= z means z - y <= x < M - 1
So it evals to false too. And the third disjunct x < z also evals to
false because x >= z - y > z.
I suspect that in either case the expression x < z does not determine
the final value of the disjunction in the vast majority of cases, which
is why we leave it as the final disjunct, for the rare cases where both
an underflow and an overflow happen.
Here's an example of both underflow and overflow happening and the
added inequality x < z saves the day:
x = - M / 2
y = - M / 2 - 1
z = M / 2
x + y evals to M - 1 which is > z
z - y evals to - M + 1 which is < x
We can do the same to test x + y > z where the increment y is positive:
(x > z - y || x + y > z || x > z)
And the same analysis applies to unsigned cases.
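A self-contained sketch of both checks (the additions and subtractions are done in unsigned arithmetic to avoid signed-overflow undefined behaviour, matching the casts described above; converting back to signed assumes the usual two's complement wrap-around):

  #include <cassert>

  typedef long long          ll;
  typedef unsigned long long ull;

  // Wrapping add/sub done in unsigned, then converted back to signed.
  static ll wrap_add(ll a, ll b) { return (ll) ((ull) a + (ull) b); }
  static ll wrap_sub(ll a, ll b) { return (ll) ((ull) a - (ull) b); }

  // True iff mathematically x + y < z, given y < 0.
  static bool lt_with_negative_incr(ll x, ll y, ll z)
  {
    return wrap_add(x, y) < z || x < wrap_sub(z, y) || x < z;
  }

  // True iff mathematically x + y > z, given y > 0.
  static bool gt_with_positive_incr(ll x, ll y, ll z)
  {
    return x > wrap_sub(z, y) || wrap_add(x, y) > z || x > z;
  }

  int main()
  {
    // The worked example above: both an underflow and an overflow happen,
    // and the final 'x < z' disjunct gives the right answer.
    ll half_M= 4611686018427387904LL;                   // M / 2 with M = 2^63
    assert(lt_with_negative_incr(-half_M, -half_M - 1, half_M));
    return 0;
  }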
It was wrong to derive Item_func_uuid from Item_func_sys_guid,
because the former is a function returning the UUID data type,
while the latter is a string function returning VARCHAR.
As a result of the wrong hierarchy, Item_func_uuid erroneously inherited
Item_str_func::fix_fields(), which contains this code:
/*
In Item_str_func::check_well_formed_result() we may set null_value
flag on the same condition as in test() below.
*/
if (thd->is_strict_mode())
set_maybe_null();
This code is not relevant to UUID() at all.
A simple fix would be to set_maybe_null(false) in
Item_func_uuid::fix_length_and_dec(). However,
it'd fix only exactly this single consequence of the wrong
class hierarchy, and similar bugs could appear again in
the future. Moreover, we're going to add functions UUIDv4()
and UUIDv7() soon (in 11.6). So it's better to fix the class hierarchy
in the right way before adding these new functions.
Fix:
- Adding a new abstract class Item_fbt_func in the template
in sql_type_fixedbin.h
- Deriving Item_typecast_fbt from Item_fbt_func
- Deriving Item_func_uuid from Item_fbt_func
- Adding a new helper class UUIDv1. It derives from UUID, and additionally
initializes the value to "UUID version 1" right in the constructor.
Note, the new coming soon SQL functions UUIDv4() and UUIDv7()
will also have corresponding classes UUIDv4 and UUIDv7.
So now UUID() is a pure "returning UUID" function,
like CAST(expr AS UUID) used to be, without any unintentional
artifacts of functions returning VARCHAR/TEXT.
Cleanup:
- Removing the member Item_func_sys_guid::with_dashes,
as it's not needed any more:
* Item_func_sys_guid now does not have any descendants any more
* Item_func_sys_guid::val_str() itself always displays without dashes
The fix in this commit handles foreign key value appending into the write set
so that db and table names are converted from the filepath format
to the tablename format. This is compatible with key values appended from
elsewhere in the code base.
There is an mtr test, galera.galera_table_with_hyphen, for regression testing.
Reviewer: monty@mariadb.com
JOIN_CACHE has a light-weight initialization mode that's targeted at
EXPLAINs. In that mode, JOIN_CACHE objects are not able to execute.
Light-weight mode was used whenever the statement was an EXPLAIN. However
the EXPLAIN can execute subqueries, provided they enumerate less than
@@expensive_subquery_limit rows.
Make sure we use light-weight initialization mode only when the select is
more expensive than @@expensive_subquery_limit.
Also add an assert into JOIN_CACHE::put_record() which prevents its use
if it was initialized for EXPLAIN only.
MDEV-26473 fixed a segmentation fault at startup between the handle
manager thread and the binlog background thread, such that the
binlog background thread could be started and submit a job to the
handle manager, before it had initialized. Where MDEV-26473 made it
so the handle manager would initialize before the main thread
started the normal binary logs, it did not account for the recovery
case. That is, there is still a possibility of a segmentation fault
when a server is recovering using the binary logs such that it can
open the binary logs, start the binlog background thread, and submit
a job to the handle manager before it is initialized.
This patch fixes this by moving the initialization of the mysql
handler manager to happen prior to recovery.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Delayed_insert has its own THD (initialized at mysql_insert()) and
hence its own LEX. Delayed_insert initializes only a few parameters for
the LEX, and 'duplicates' is not in this list. Now we copy this missing
parameter from the parser LEX (as well as sql_command).
When INSERT does auto-create for t1 all its handler instances are
closed by alter_close_table(). At this time down the stack
maria_close() clears share->state_history. Later when we unlock the
tables Aria transaction manager accesses old share instance (the one
before t1 was closed) and tries to reset its state_history.
The problem is that maria_close() didn't remove the table from the transaction's
list (used_tables). The fix does _ma_remove_table_from_trnman(), which
is triggered by HA_EXTRA_PREPARE_FOR_RENAME.
Item_func_dyncol_create::print_arguments() printed only CHARSET clause
without COLLATE.
Therefore,
HEX(column_create(1,'1212' AS CHAR CHARACTER SET utf8mb3 COLLATE utf8mb3_bin))
inside a VIEW changed to just:
HEX(column_create(1,'1212' AS CHAR CHARACTER SET utf8mb3))
which changed the collation ID seen in the HEX output.
Note, the collation ID inside column_create() is not really that important
(it's only important what the character set is).
And for COLLATE, the more important thing is what's later written
in the AS clause of COLUMN_GET:
SELECT
  COLUMN_GET(
    column_create(1,'1212' AS CHAR CHARACTER SET utf8mb3 COLLATE utf8mb3_bin),
    column_nr AS type -- this type is more important
  );
Still, let's add the COLLATE clause into the COLUMN_CREATE() print output,
although it's not important for now for anything else than just the HEX output.
At least to make VIEW work in a more predictable way with HEX(COLUMN_CREATE()).
Also, in the future we can start using somehow the collation ID written inside
COLUMN_CREATE(), for example by making the `AS type` clause optional in
COLUMN_GET():
COLUMN_GET(dyncol_blob, column_nr [AS type]);
instead of:
COLUMN_GET(dyncol_blob, column_nr AS type);
SQL Server compatibility layer may need this for
the SQL_Variant data type support.
Fix a case of stack-use-after-return reported by ASAN in
Wsrep_schema_impl::open_table(). This function had a stack-allocated
TABLE_LIST object and returned TABLE_LIST::table to the caller.
Changed the function to take a TABLE_LIST pointer as an argument.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The discovered memory leak was introduced by the commit
762bf7a03b
(MDEV-22602 Disable UPDATE CASCADE for SQL constraints)
The reason why memory leaked on running the test main.constraints
is that a statement arena was used for allocating memory
for storing a constraint name. A constraint name is an entity of
temporary nature by design, so the runtime arena should be used for its
allocation.
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
The memory leaks found were introduced by the commit
a896bebfa6
MDEV-18844 Implement EXCEPT ALL and INTERSECT ALL operations
and were caused by using a statement arena instead of a runtime arena for
allocation of objects having a temporary life span by their nature.
The aforementioned memory leaks were produced by running queries
that typically use select with intersect, union or table value
constructors.
To fix these memory leaks, use the runtime arena for allocation
of the Item_field objects used by set operations.
Additionally, OOM handling was added on allocation of the aforementioned
Item_field objects.
$MYSQL_TZINFO_TO_SQL works by truncating tables. Truncation is an
operation that cannot be done in-place and therefore is fundamentally
incompatible with alter_algorithm='INPLACE'. As a result, we override
the default alter_algorithm setting in tztime.cc to
alter_algorithm='COPY' so that timezones can be loaded regardless
of the previously set alter_algorithm.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Item_func_group_concat::print() did not take into account
that Item_func_group_concat::separator can be of a different character set
than the "String *str" (when the printing is being done to).
Therefore, printing did not work correctly for:
- non-ASCII separators when GROUP_CONCAT is done on 8bit data
or multi-byte data with mbminlen==1.
- all separators (even including simple ones like comma)
when GROUP_CONCAT is done on ucs2/utf16/utf32 data (mbminlen>1).
Because of this problem, VIEW definitions did not print correctly to
their FRM files. This later led to a wrong SELECT and SHOW CREATE output.
Fix:
- Adding new String methods:
bool append_for_single_quote_using_mb_wc(const char *str, size_t length,
CHARSET_INFO *cs);
bool append_for_single_quote_opt_convert(const char *str,
size_t length,
CHARSET_INFO *cs)
which perform both escaping and character set conversion at the same time.
- Adding a new String method escaped_wc_for_single_quote(),
to reuse the code between the old and the new methods.
- Fixing Item_func_group_concat::print() to use the new
method append_for_single_quote_opt_convert().
Queries that select concatenated constant strings now have
colname and value that match. For example,
SELECT '123' 'x';
will return a result where the column name and value both
are '123x'.
Review: Daniel Black
- Fix to avoid mysqltest client getting killed abruptly during
mysql_shutdown(). When Galera replication is shutdown, wait for
THDs with `thd->stmt_da()->is_eof()` to disconnect (these are about
to disconnect anyway).
- Extract duplicate code from `wsrep_stop_replication()` and
`wsrep_shutdown_replication()` in a new function.
- No need to use a custom `shutdown_mysqld.inc` in galera
suite. Delete it, so that the one in `mysql-test/include/` is used.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
We should not set a debug sync point while holding a mutex,
to avoid mutex ordering failures.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
User transactions may acquire explicit MDL locks from InnoDB level
when persistent statistics is re-read for a table.
If such a transaction would be subject to BF-abort, it was improperly
detected as a system transaction and wouldn't get aborted.
The fix: Check if a transaction holding explicit MDL locks is a user
transaction in the MDL conflict handling code.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
Fix function `remove_fragment()` in wsrep_schema so that no error is
raised if the fragment to be removed is not found in the
wsrep_streaming_log table. This is necessary to handle the case where
a streaming transaction in idle state is BF aborted. This may result in
the case where the rollbacker thread successfully removes the
transaction's fragments, followed by the applier's attempt to remove
the same fragments, causing the node to leave the cluster after
reporting a "Failed to apply write set" error.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
When we commit an empty transaction we should allow the wsrep
transaction to be in the s_must_replay state for DDL that
was killed during certification.
The fix is tested with RQG because a deterministic mtr testcase
was not found.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
When using semi-sync replication with
rpl_semi_sync_master_wait_point=AFTER_COMMIT, the performance of the
primary can be significantly reduced compared to AFTER_SYNC's
performance for workloads with many concurrent users executing
transactions. This is because all connections on the primary share
the same cond_wait variable/mutex pair, so any time an ACK is
received from a replica, all waiting connections are awoken to check
if the ACK was for itself, which is done in mutual exclusion.
This patch changes this such that the waiting THD will use its own
local condition variable, and the ACK receiver thread only signals
connections which have been ACKed for wakeup. That is, the
THD::LOCK_wakeup_ready condition variable is re-used for this
purpose, and the Active_tranx queue nodes are extended to hold the
waiting thread, so it can be signalled once ACKed.
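A generic sketch of the new waiting scheme (simplified; in the server the per-connection condition variable is THD::LOCK_wakeup_ready and the waiter is recorded on the Active_tranx queue node): each committing connection waits on its own condition variable, and the ACK receiver wakes only the connection that was ACKed.

  #include <condition_variable>
  #include <mutex>

  // One waiter per committing connection instead of one shared
  // condition variable for everybody.
  struct Ack_waiter
  {
    std::mutex m;
    std::condition_variable cond;
    bool acked= false;

    void wait_for_ack()                  // called by the committing connection
    {
      std::unique_lock<std::mutex> lk(m);
      cond.wait(lk, [this] { return acked; });
    }

    void signal_ack()                    // called by the ACK receiver thread,
    {                                    // for this waiter only
      std::lock_guard<std::mutex> lk(m);
      acked= true;
      cond.notify_one();
    }
  };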
Additionally:
1) Removed part of MDEV-11853 additions, which allowed suspended
connection threads awaiting their semi-sync ACKs to live until their
ACKs had been received. This part, however, wasn't needed. That is,
all that was needed was for the Ack_thread to survive. So now the
connection threads are killed during phase 1. Thereby
THD::is_awaiting_semisync_ack, and all its related code was removed.
2) COND_binlog_send is repurposed to signal on the condition when
Active_tranx is emptied during clear_active_tranx_nodes.
3) At master shutdown (when waiting for slaves), instead of the
main loop individually waiting for each ACK, await_slave_reply()
(renamed await_all_slave_replies()) just waits once for the
repurposed COND_binlog_send to signal it is empty.
4) Test rpl_semi_sync_shutdown_await_ack is updated as follows:
4.1) Added test case (adapted from Kristian Nielsen) to ensure
that if a thread awaiting its ACK is killed while SHUTDOWN WAIT FOR
ALL SLAVES is issued, the primary will still wait for the ACK from
the killed thread.
4.2) As connections which by-passed phase 1 of thread killing no
longer are delayed for kill until phase 2, we can no longer query
yes/no tx after receiving an ACK/timeout. The check for these
variables is removed.
4.3) Comment descriptions which mention that the connection is alive
are updated and adjusted to refer to the Ack_thread.
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
sql_sequence.h:233:19: runtime error: signed integer overflow: -9223372036854775808 + -1 cannot be represented in type 'long long int'
followup for 374783c3d9
https://jepsen.io/analyses/mysql-8.0.34 highlights that the
transaction isolation levels in the InnoDB storage engine do not
correspond to any widely accepted definitions, such as
"Generalized Isolation Level Definitions"
https://pmg.csail.mit.edu/papers/icde00.pdf
(PL-1 = READ UNCOMMITTED, PL-2 = READ COMMITTED, PL-2.99 = REPEATABLE READ,
PL-3 = SERIALIZABLE).
Only READ UNCOMMITTED in InnoDB seems to match the above definition.
The issue is that InnoDB does not detect write/write conflicts
(Section 4.4.3, Definition 6) in the above.
It appears that as soon as we implement write/write conflict detection
(SET SESSION innodb_snapshot_isolation=ON), the default isolation level
(SET TRANSACTION ISOLATION LEVEL REPEATABLE READ) will become
Snapshot Isolation (similar to Postgres), as defined in Section 4.2 of
"A Critique of ANSI SQL Isolation Levels", MSR-TR-95-51, June 1995
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf
Locking reads inside InnoDB used to read the latest committed version,
ignoring what should actually be visible to the transaction.
The added test innodb.lock_isolation illustrates this. The statement
UPDATE t SET a=3 WHERE b=2;
is executed in a transaction that was started before a read view or
a snapshot of the current transaction was created, and committed before
the current transaction attempts to execute
UPDATE t SET b=3;
If SET innodb_snapshot_isolation=ON is in effect when the second
transaction was started, the second transaction will be aborted with
the error ER_CHECKREAD. By default (innodb_snapshot_isolation=OFF),
the second transaction would execute inconsistently, displaying an
incorrect SELECT COUNT(*) FROM t in its read view.
If innodb_snapshot_isolation=ON, if an attempt to acquire a lock on a
record that does not exist in the current read view is made, an error
DB_RECORD_CHANGED (HA_ERR_RECORD_CHANGED, ER_CHECKREAD) will
be raised. This error will be treated in the same way as a deadlock:
the transaction will be rolled back.
lock_clust_rec_read_check_and_lock(): If the current transaction has
a read view where the record is not visible and
innodb_snapshot_isolation=ON, fail before trying to acquire the lock.
row_sel_build_committed_vers_for_mysql(): If innodb_snapshot_isolation=ON,
disable the "semi-consistent read" logic that had been implemented by
myself on the directions of Heikki Tuuri in order to address
https://bugs.mysql.com/bug.php?id=3300 that was motivated by a customer
wanting UPDATE to skip locked rows that do not match the WHERE condition.
It looks like my changes were included in the MySQL 5.1.5
commit ad126d90e019f223470e73e1b2b528f9007c4532; at that time, employees
of Innobase Oy (a recent acquisition of Oracle) had lost write access to
the repository.
The only reason why we set innodb_snapshot_isolation=OFF by default is
backward compatibility with applications, such as the one that motivated
the implementation of "semi-consistent read" back in 2005. In a later
major release, we can default to innodb_snapshot_isolation=ON.
Thanks to Peter Alvaro, Kyle Kingsbury and Alexey Gotsman for their work
on https://github.com/jepsen-io/ and to Kyle and Alexey for explanations
and some testing of this fix.
Thanks to Vladislav Lesin for the initial test for MDEV-26643,
as well as reviewing these changes.
Starting with clang-16, MemorySanitizer appears to check that
uninitialized values not be passed by value nor returned.
Previously, it was allowed to copy uninitialized data in such cases.
get_foreign_key_info(): Remove a local variable that was passed
uninitialized to a function.
DsMrr_impl: Initialize key_buffer, because DsMrr_impl::dsmrr_init()
is reading it.
test_bind_result_ext1(): MYSQL_TYPE_LONG is 32 bits, hence we must
use a 32-bit type, such as int. sizeof(long) differs between
LP64 and LLP64 targets.
Calling a stored function that uses a cursor inside its body
could produce the error ER_NO_SUCH_TABLE on the second execution
in case the cursor uses a multi-table query and one of the tables
is a temporary table just created before querying the cursor and
dropped just after the query has been executed.
The reason for the issue is that re-parsing of a failed SP instruction,
caused by the create/drop of the temporary table, used the LEX object
left over from the previous parsing of the SP instruction's query
instead of re-initializing the LEX object before parsing.
To fix the issue, add initialization of the LEX for the cursor's
statement before re-parsing the query of a failed SP instruction.
Remove work-around that disables bulk insert optimization in replication
The root cause of the original problem is now fixed (MDEV-33475). Though the
bulk insert optimization will still be disabled in replication, as it is
only enabled in special circumstances meant for loading a mysqldump.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
An earlier patch for MDEV-13577 fixed the most common instances of this, but
missed one case for tables without primary key when the scan reaches the end
of the table. This patch adds similar code to handle this case, converting
the error to HA_ERR_RECORD_CHANGED when doing optimistic parallel apply.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This patch makes the server wait for the manager thread to actually start
before proceeding with server startup.
Without this, if thread scheduling is really slow and the server shutdowns
quickly, then it is possible that the manager thread is not yet started when
shutdown_performance_schema() is called. If the manager thread starts at just
the wrong moment and just before the main server reaches exit(), the thread
can try to access no longer available performance schema data. This was seen
as occasional assertion in the main.bootstrap test.
As an additional improvement, make sure to run all pending actions before
exiting the manager thread.
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
There were several races in the main.kill_processlist-6619 testcase:
- Lingering connections from a previous test case could be visible in SHOW
PROCESSLIST and cause .result diff.
- A sync point "dispatch_command_end" was ineffective, as it was consumed at
the end of the SET DEBUG command itself.
- The signal from sync point "before_execute_sql_command" could override an
earlier signal, causing DEBUG_SYNC timeout and test failure.
- The final SHOW PROCESSLIST could occasionally see a connection in state
"Busy" instead of the expected "Sleep".
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
eliminate_item_equal() uses quick_fix_field() for Item objects it creates.
It computes some of their attributes on its own (see update_used_tables()
call) but it doesn't update not_null_tables_cache.
Recompute not_null_tables_cache also. Not computing it is currently
harmless, except for producing an MSAN error when some other code
propagates the wrong value of not_null_tables_cache to another item.
In case there is a view that is queried from a stored routine or
a prepared statement, and this temporary table is dropped between
executions of the SP/PS, it leads to hitting an assertion
at SELECT_LEX::fix_prepare_information. The fired assertion
was added by the commit 85f2e4f8e8
(MDEV-32466: Potential memory leak on executing of create view statement).
Firing of this assertion means memory leaking on execution of SP/PS.
Moreover, if the added assert is commented out, different result sets
can be produced by the statement SELECT * FROM the hidden table.
Both hitting the assertion and different result sets have the same root
cause. This cause is the usage of the temporary table's metadata after the table
itself has been dropped. To fix the issue, reload the cache of stored
routines. To do it, the cache of stored routines is reset at the end of
execution of the function dispatch_command(). The next time any stored routine
is called it will be loaded from the table mysql.proc. This happens inside
the method Sp_handler::sp_cache_routine, where loading of a stored routine
is performed in case it is missing from the cache. Loading is now performed
unconditionally, while previously it was controlled by the parameter lookup_only.
For that reason the signature of the method Sroutine_hash_entry::sp_cache_routine
was changed by removing the unused parameter lookup_only.
Clearing of sp caches affects the test main.lock_sync since it forces
opening and locking the table mysql.proc but the test assumes that each
statement locks its tables once during its execution. To keep this invariant,
the debug sync points with names "before_lock_tables_takes_lock" and
"after_lock_tables_takes_lock" are not activated when handling the table
mysql.proc.
When rolling back and retrying a transaction in parallel replication, don't
release the domain ownership (for --gtid-ignore-duplicates) as part of the
rollback. Otherwise another master connection could grab the ownership and
double-apply the transaction in parallel with the retry.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This was caused by wrong allocation of a variable on the stack
(it was allocating 4K of data instead of 512 bytes).
No test case, as the original MDEV test case is not usable for mtr.
An UPDATE statement that is run in PS mode and uses a positional parameter
handles columns declared with the clause DEFAULT NULL incorrectly in
case the clause DEFAULT is passed as the actual value for the positional
parameter of the prepared statement. A similar issue happens in case
an expression is specified in the DEFAULT clause of the table's column definition.
The reason for incorrect processing of columns declared as DEFAULT NULL
is that setting of the null flag for the field being updated was missed
in the implementation of the method Item_param::assign_default().
The reason for incorrect handling of an expression in the DEFAULT clause is
also a missed saving of the field inside the implementation of the method
Item_param::assign_default().
signal_hand(): Remove the cmake -DWITH_DBUG_TRACE=ON instrumentation.
It can cause a crash on shutdown when the only other thread is
waiting in wait_for_signal_thread_to_end().
In the JSON functions, the debug injection for stack overflows is
inaccurate and may cause actual stack overflows. Let us simply
inject stack overflow errors without actually relying on the ability
of check_stack_overrun() to do so.
Reviewed by: Rucha Deodhar
The patch for MDEV-15530 incorrectly added a column in the middle of SHOW
SLAVE STATUS output. This is wrong, as it breaks backwards compatibility
with existing applications and scripts. In this case, it even broke
mariadb-dump, which is included in the server source tree!
Revert the incorrect change, putting the new Replicate_Rewrite_DB at the end
of SHOW SLAVE STATUS output.
Add a testcase for the mariadb-dump --dump-slave wrong output problem. Also
add a testcase rpl.rpl_show_slave_status to hopefully prevent any future
incorrect additions to SHOW SLAVE STATUS.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
MDEV-33502 Slowdown when running nested statement with many partitions
caused this error as I failed to take into account bigendian architectures.
This patch also introduces bitmap_import() and bitmap_export() to be used
when one wants to store bitmaps in files/logs in a portable way.
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This will make it easier to find out what replication workers are
doing and what they are waiting for.
Things changed in processlist:
- Slave_SQL time was not consistent. Now time for state "Slave has
read all relay log; waiting for more updates" shows how long it has
waited for getting the next event.
- Slave_worker threads often showed "Closing tables" for a long
time. Now the state is reverted to the previous state after
"Closing tables" is done.
- Commit and Rollback states were not shown for replication (and some
other threads). Now Commit and Rollback states are always shown and
the state is reverted to the previous state when the Commit/Rollback
has finished.
Code changes:
- Added thd->set_time_for_next_stage() for parallel replication when
starting to wait for prior transactions to commit, group commit,
and FTWRL and for free space in the thread pool.
Before, we reset the time only after the above events.
- Moved THD_STAGE_INFO(stage_rollback) and THD_STAGE_INFO(stage_commit)
from sql_parse.cc to transaction.cc to ensure this is done for
all commits and not only 'normal connection queries'.
Test case changes:
- close_thread_tables() reverting stage to previous stage caused the
counter in performance_schema to be increased. In many cases it is
the 'sql/starting' stage that was affected.
- We only change to the "Commit" stage if there is a need for a commit.
This caused some "Commit" stages to disappear from perfschema reports.
TODO in 11.#:
- Slave_IO always shows "Waiting for master to send event" and the time is
from SLAVE START. We should in 11.# change this to be the time since
reading the last event.
Warnings are added to net_server.cc when
global_system_variables.log_warnings >= 4.
When the above condition holds then:
- All communication errors from net_serv.cc are also written to the
error log.
- In case of not being able to read or write a packet, a more
detailed error is given.
Other things:
- Added detection of slaves that have hung up to Ack_receiver::run()
- vio_close() is now first marking the socket closed before closing it.
The reason for this is to ensure that the connection that gets a read
error can check if the reason was that the socket was closed.
- Add a new state to vio to be able to detect if vio is active, shutdown or
closed. This is used to detect if socket is closed by another thread.
- Testing of the new warnings is done in rpl_get_lock.test
- Suppress some of the new warnings in mtr to allow one to run some of
the tests with --mysqld=--log-warnings=4. All tests in the 'rpl' suite
can now be run with this option.
- Ensure that global.log_warnings are restored at test end in a way
that allows one to use mtr --mysqld=--log-warnings=4.
Reviewed-by: <serg@mariadb.org>,<brandon.nesterenko@mariadb.com>
If a server has a default configuration (e.g. in a my.cnf file) with
rpl_semi_sync_slave_enabled set, on server start, the corresponding
rpl_semi_sync_slave_status variable will also be ON initially, even
if the slave was never configured/started. This is because the
Repl_semi_sync_slave initialization logic (function init_object())
sets the running status to the enabled value during
init_server_components().
This patch fixes this by removing the statement which sets the
semi-sync slave running status from the initialization logic. An
additional change needed from this is to semi-sync recovery: this
status variable was used as a condition to determine binlog
truncation during server recovery. This patch also switches this
condition to reference the global rpl_semi_sync_slave_enabled
variable. Though note, the semi-sync recovery condition is to be
changed entirely with the MDEV-33424 agenda.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Under the terms of MDEV-27490 we'll add support for non-BMP identifiers
and upgrade the casefolding information to Unicode version 14.0.0.
In Unicode 14.0.0, conversion to lower and upper case can increase the octet
length of the string, so conversion won't be possible in-place any more.
This patch removes virtual functions performing in-place casefolding:
- my_charset_handler_st::casedn_str()
- my_charset_handler_st::caseup_str()
and fixes the code to use the non-inplace functions instead:
- my_charset_handler_st::casedn()
- my_charset_handler_st::caseup()
In most cases, ha_partition forwards calls to extra() to all
locked_partitions. It doesn't make sense to forward some calls for
partitions that were pruned away.
This patch introduces ha_partition::loop_read_partitions and makes
these calls use it:
- ha_partition::extra_opt(HA_EXTRA_KEYREAD)
- ha_partition::extra(HA_EXTRA_KEYREAD)
- ha_partition::extra(HA_EXTRA_NO_KEYREAD)
Reviewed-by: Monty
If mariabackup with backup locks is used for SST, we do not
pause and desync the galera provider at all. In the WSREP_MODE_BF_MARIABACKUP
case the provider is paused and desynced at the BLOCK_COMMIT phase. In
other cases the provider is paused and desynced at the BLOCK_DDL phase.
MDEV-33502 Slowdown when running nested statement with many partitions
This change was triggered to help some MariaDB users with close to
10000 bits in their bitmaps.
- Change the underlying storage to be 64 bit instead of 32 bit.
- This reduces the number of loops needed to scan bitmaps.
- This can cause some bitmaps to be 4 bytes larger.
- Ensure that all unused top bits are always 0 (simplifies code, as
the last 64-bit word is not a special case anymore).
- Use my_find_first_bit() to find the first set bit, which is much faster
than scanning through things byte by byte and then bit by bit (see the
sketch below).
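A tiny sketch of the word-at-a-time scan (using the GCC/Clang builtin; the server's my_find_first_bit() plays this role):

  #include <cstddef>
  #include <cstdint>

  // Return the index of the first set bit, or n_bits if none is set.
  // Assumes the unused top bits of the last word are always 0.
  size_t find_first_bit(const uint64_t *words, size_t n_bits)
  {
    size_t n_words= (n_bits + 63) / 64;
    for (size_t i= 0; i < n_words; i++)
      if (words[i])                                        // skip 64 bits at a time
        return i * 64 + (size_t) __builtin_ctzll(words[i]);
    return n_bits;
  }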
Other things:
- Added a bool to remember if my_bitmap_init() did allocate the bitmap
array. my_bitmap_free() will only free arrays it did allocate.
This allowed me to remove setting 'bitmap=0' before calling
my_bitmap_free() for cases where the bitmaps were allocated externally.
- my_bitmap_init() sets bitmap to 0 in case of failure.
- Added 'universal' asserts to most bitmap functions.
- Change all remaining calls to bitmap_init() to my_bitmap_init().
- To finish the change from 2014.
- Changed all usage of uint32 in my_bitmap.h to my_bitmap_map.
- Updated bitmap_copy() to handle bitmaps of different size.
- Removed const from bitmap_exists_intersection() as this caused casts
on all usage.
- Removed not used function bitmap_set_above().
- Renamed create_last_word_mask() to create_last_bit_mask() (to match
name changes in my_bitmap.cc)
- Extended bitmap-t with test for more bitmap functions.
- Updated prototype for is_binary_frm_header().
- Added extra argument to ma_control_file_open().
- Added ma_control_file_open_or_create() for usage by tests.
(to make test a bit simpler).
If a query with GROUP_CONCAT is executed then the server reports a warning
every time the length of the result of this function exceeds the set
value of the system variable group_concat_max_len. This bug led to a set
of warnings from the second execution of the prepared statement that did
not coincide with the one from the first execution, if the executed query
was a grouping query over a join of tables using the GROUP_CONCAT function
and the join cache was not allowed to be employed.
The discrepancy of the sets of warnings was due to a lack of cleanup for
Item_func_group_concat::row_count after execution of the query.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
REPAIR TABLE executed for a pre-MDEV-29959 table (with the old UUID format)
updated the server version in the FRM file without rewriting the data,
so it created a new FRM for old UUIDs. After that MariaDB could not
read UUIDs correctly.
Fix:
- Adding a new virtual method in class Type_handler:
virtual bool type_handler_for_implicit_upgrade() const;
* For the up-to-date data types it returns "this".
* For the data types which need to be implicitly upgraded
during REPAIR TABLE or ALTER TABLE, it returns a pointer
to a new replacement data type handler.
Old VARCHAR and old UUID type handlers override this method.
See more comments below, and the sketch at the end of this entry.
- Changing the semantics of the method
Type_handler::Column_definition_implicit_upgrade(Column_definition *c)
to the opposite, so now:
* c->type_handler() references the old data type (to upgrade from)
* "this" references the new data type (to upgrade to).
Before this change Column_definition_implicit_upgrade() was supposed
to be called with the old data type handler (to upgrade from).
Renaming the method to Column_definition_implicit_upgrade_to_this(),
to avoid automatic merges in this method.
Reflecting this change in Create_field::upgrade_data_types().
- Replacing the hard-coded data type tests inside handler::check_old_types()
to a call for the new virtual method
Type_handler::type_handler_for_implicit_upgrade()
- Overriding Type_handler_fbt::type_handler_for_implicit_upgrade()
to call a new method FbtImpl::type_handler_for_implicit_upgrade().
Reasoning:
Type_handler_fbt is a template, so it has access only to "this".
So in case of UUID data types, the type handler for old UUID
knows nothing about the type handler of new UUID inside sql_type_fixedbin.h.
So let's have Type_handler_fbt delegate type_handler_for_implicit_upgrade()
to its Type_collection, which knows both new UUID and old UUID.
- Adding Type_collection_uuid::type_handler_for_implicit_upgrade().
It returns a pointer to the new UUID type handler.
- Overriding Type_handler_var_string::type_handler_for_implicit_upgrade()
to return a pointer to type_handler_varchar (true VARCHAR).
- Cleanup: these two methods:
handler::check_old_types()
handler::ha_check_for_upgrade()
were always called consecutively.
So moving the call for check_old_types() inside ha_check_for_upgrade(),
and making check_old_types() private.
- Cleanup: removing the "bool varchar" parameter from fill_alter_inplace_info(),
as it's not used any more.
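A bare-bones sketch of the virtual-method pattern described above (simplified, hypothetical class names; the real methods live in the Type_handler hierarchy): up-to-date handlers return themselves, legacy handlers return their replacement, and the caller only has to compare the two pointers.

  struct Type_handler_base
  {
    virtual ~Type_handler_base() {}
    // Default: the type is up to date, no upgrade needed.
    virtual const Type_handler_base *type_handler_for_implicit_upgrade() const
    { return this; }
  };

  struct Type_handler_new_uuid : Type_handler_base {};

  struct Type_handler_old_uuid : Type_handler_base
  {
    const Type_handler_base *type_handler_for_implicit_upgrade() const override
    {
      static Type_handler_new_uuid new_uuid;   // the upgrade target
      return &new_uuid;
    }
  };

  // Caller side (e.g. the check-for-upgrade path): a column needs rewriting
  // whenever the returned handler differs from the current one.
  bool needs_upgrade(const Type_handler_base *h)
  {
    return h->type_handler_for_implicit_upgrade() != h;
  }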
When sending the server default collation ID to the client
in the handshake packet, translate a 2-byte collation ID
to the ID of the default collation for the character set.
Functions extracting non-negative datetime components:
- YEAR(dt), EXTRACT(YEAR FROM dt)
- QUARTER(td), EXTRACT(QUARTER FROM dt)
- MONTH(dt), EXTRACT(MONTH FROM dt)
- WEEK(dt), EXTRACT(WEEK FROM dt)
- HOUR(dt),
- MINUTE(dt),
- SECOND(dt),
- MICROSECOND(dt),
- DAYOFYEAR(dt)
- EXTRACT(YEAR_MONTH FROM dt)
did not set their max_length properly, so in the DECIMAL
context they created a too small DECIMAL column, which
led to the 'Out of range value' error.
The problem is that most of these functions historically
returned the signed INT data type.
There were two simple ways to fix these functions:
1. Add +1 to max_length.
But this would also change their size in the string context
and create too long VARCHAR columns, with +1 excessive size.
2. Preserve max_length, but change the data type from INT to INT UNSIGNED.
But this would break backward compatibility.
Also, using UNSIGNED is generally not desirable,
it's better to stay with signed when possible.
This fix implements another solution, which makes all these functions
work well in all contexts: int, decimal, string.
Fix details:
- Adding a new special class Type_handler_long_ge0 - the data type
handler for expressions which:
* should look like normal signed INT
* but which are known not to return negative values
Expressions handled by Type_handler_long_ge0 store in Item::max_length
only the number of digits, without adding +1 for the sign.
- Fixing Item_extract to use Type_handler_long_ge0
for non-negative datetime components:
YEAR, YEAR_MONTH, QUARTER, MONTH, WEEK
- Adding a new abstract class Item_long_ge0_func, for functions
returning non-negative datetime components.
Item_long_ge0_func uses Type_handler_long_ge0 as the type handler.
The class hierarchy now looks as follows:
Item_long_ge0_func
Item_long_func_date_field
Item_func_to_days
Item_func_dayofmonth
Item_func_dayofyear
Item_func_quarter
Item_func_year
Item_long_func_time_field
Item_func_hour
Item_func_minute
Item_func_second
Item_func_microsecond
- Cleanup: EXTRACT(QUARTER FROM dt) created an excessive VARCHAR column
in string context. Changing its length from 2 to 1.
As a result of this bug the second execution of the prepared statement
created for a select from a materialized view could return a wrong result set if
- the specification of the view used a left join
- an inner table of the left join was a mergeable derived table
- the derived table contained a constant column.
The problem appeared because the flag 'maybe-null' of the wrapper
Item_direct_view_ref constructed for the constant field of the mergeable
derived table was not set to 'true' on the second execution of the
prepared statement.
The patch always sets this flag properly when calling the function
Item_direct_view_ref::set_null_ref_table(). The latter is invoked in
Item_direct_view_ref constructor if it is created for some reference of
a constant column belonging to a mergeable derived table.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Add `as <int_type>` to sequence creation options
- int_type can be signed or unsigned integer types, including
tinyint, smallint, mediumint, int and bigint
- Limitation: when altering a sequence AS <new_int_type>, one cannot have any
other alter options in the same statement
- Limitation: increment remains signed longlong, and the hidden
constraint (cache_size x abs(increment) < longlong_max) stays for
unsigned types. This means for bigint unsigned, neither
abs(increment) nor (cache_size x abs(increment)) can be between
longlong_max and ulonglong_max
- Truncating maxvalue and minvalue from user input to the nearest max
or min value of the type, plus or minus 1. When the truncation
happens, a warning is emitted (see the sketch below)
- Information schema table for sequences
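A small sketch of the clamping rule for MAXVALUE (hypothetical helper; MINVALUE is handled symmetrically against the type minimum plus 1): a requested value outside the chosen type's range is pulled back to the type maximum minus 1, and the caller emits a warning when that happens.

  // Clamp a requested MAXVALUE to what the chosen integer type allows.
  // Returns true when the value had to be truncated (caller emits a warning).
  bool clamp_maxvalue(long long requested, long long type_max, long long *out)
  {
    if (requested > type_max - 1)
    {
      *out= type_max - 1;            // nearest allowed value: type max minus 1
      return true;                   // truncated -> warn
    }
    *out= requested;
    return false;
  }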
This removes the error:
"Failed to load slave replication state from table mysql.gtid_slave_pos:
1017: Can't find file: './mysql/' (errno: 2 "No such file or directory")"