This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. This change leads to the
following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which leads
to more optimization possibilities. Additionally, it enables detecting
unused variables that have an empty default constructor but are not
explicitly marked as such.
One particular change was required following this patch: in sql/opt_range.cc,
result_keys, an unused variable of the template class Bitmap, now correctly
triggers an unused-variable warning.
Setting the Bitmap template class constructor to default allows the compiler
to see that instantiating the class has no side effects.
Previously the compiler could not issue the warning, as it had to assume that
the Bitmap class (being a template) was not performing a no-op in its default
constructor. This suppressed the unused-variable warning.
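For illustration, a minimal sketch of the kind of change the check applies; the
Bitmap class here is a simplified stand-in, not the server's actual definition:

template <unsigned width>
class Bitmap
{
  unsigned long long buffer[(width + 63) / 64];
public:
  Bitmap() = default;  // was: Bitmap() {} -- a user-provided, non-trivial constructor
};

int main()
{
  Bitmap<64> result_keys;  // now trivially constructible: -Wunused-variable can fire
  return 0;
}

With the user-provided empty body the type is not trivially constructible, so
compilers typically suppress the unused-variable warning; with "= default" they
can prove construction has no side effects and warn.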
The -D flag was not passed to the asm compiler, despite SET_PROPERTY(COMPILE_OPTIONS).
The exact reason for this remains unknown. It was not seen with gcc,
nor could it be reproduced on newer CMake.
The existing storage/rocksdb/CMakeCache.txt defined
ATOMIC_EXTRA_LIBS when atomics were required. This was
determined by the toplevel configure.cmake test
(HAVE_GCC_C11_ATOMICS_WITH_LIBATOMIC).
As build_rocksdb.cmake is included after ATOMIC_EXTRA_LIBS
was set, we just need to use it. As such no riscv64
specific macro is needed in build_rocksdb.cmake.
As highlighted by Gianfranco Costamagna (@LocutusOfBorg)
in #2472, overwriting SYSTEM_LIBS was problematic.
This is corrected in case SYSTEM_LIBS is changed
elsewhere in the future.
Closes #2472.
The error string for ER_KILL_QUERY_DENIED_ERROR took a different
type from ER_KILL_DENIED_ERROR for the thread id. This showed up as
differences on 32-bit big-endian arches like powerpc (Debian notation).
Normalize passing THD->id as its real type, my_thread_id, and
cast to (long long) on output. Likewise, normalize
ER_KILL_QUERY_DENIED_ERROR to that convention.
Note for the upward merge: convert the type to %lld in new translations
of ER_KILL_QUERY_DENIED_ERROR.
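A standalone illustration of the convention; this is not the server's code, the
typedef and the call are simplified stand-ins, and only the cast-to-(long long)
paired with %lld reflects the commit:

#include <cstdio>
#include <cstdint>

typedef uint32_t my_thread_id;   /* simplified stand-in for the server typedef */

int main()
{
  my_thread_id id= 42;
  /* Passing a narrower integer where the format string expects a wider type
     misreads the argument on some ABIs (seen on 32-bit big-endian targets),
     so always widen explicitly to match %lld. */
  printf("KILL QUERY %lld denied\n", (long long) id);
  return 0;
}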
This patch allows the transformation of EXISTS subqueries into equivalent
IN predicands at the top level of WHERE conditions for multi-table UPDATE
and DELETE statements. There was no reason to prohibit the transformation
for such statements. The transformation provides more opportunities for
using semi-join optimizations.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The following tests no longer test what they were intended to test:
deleted: suite/galera/t/MDEV-24143.test
deleted: suite/galera/t/galera_bf_abort_get_lock.test
Enable use of Rowid Filter optimization with eq_ref access.
Use the following assumptions (a small worked example follows this list):
- Assume index-only access cost is 50% of non-index-only access cost.
- Take into account that "Eq_ref access cache" reduces the number of
lookups eq_ref access will make.
= This means the number of Rowid Filter checks is also reduced.
= Eq_ref access cost is computed using that assumption (see the
prev_record_reads() call), so we should use it in all cost
computations.
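A small worked example of the arithmetic behind these assumptions; the names and
numbers are purely illustrative and do not reproduce the optimizer's actual cost
model:

#include <cstdio>

int main()
{
  double full_lookup_cost= 1.0;                    /* non-index-only lookup */
  double index_only_cost= 0.5 * full_lookup_cost;  /* the 50% assumption */
  double probe_rows= 1000;                         /* rows probing the eq_ref table */
  double cache_adjusted_lookups= 100;              /* prev_record_reads()-style estimate */
  /* The rowid filter is consulted once per actual lookup, so its checks drop
     in the same proportion as the eq_ref cache reduces lookups. */
  printf("lookups: %g -> %g, filter checks: %g -> %g, index-only cost: %g\n",
         probe_rows, cache_adjusted_lookups,
         probe_rows, cache_adjusted_lookups, index_only_cost);
  return 0;
}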
This is Kentoku's patch for MDEV-22979 (e6e41f04f4 + 22a0097727),
which fixes MDEV-30370.
It changes the wait to a timed wait for the first sts thread, which
waits on server start to execute the init queries for spider. It also
flips the flag init_command to false when the sts thread is being
freed. With these changes the sts thread can check the flag regularly
and abort the init queries when it finds that init_command is
false. This avoids the deadlock that causes the problem in MDEV-30370.
It also fixes MDEV-22979 for 10.4, but not 10.5. I have not tested
higher versions for MDEV-22979.
A test has also been done on MDEV-29904 to avoid regression, given
MDEV-27233 is a similar problem and its patch caused the
regression. The test passes for 10.4-11.0.
However, this ad-hoc test only works consistently when placed in the
main test suite. We should not place spider tests in the main suite, so
we do not include it in this commit. A patch for MDEV-27912 should fix
this problem and allow a proper test for MDEV-29904. See the comments in
the JIRA tickets MDEV-30370/MDEV-29904 for the ad-hoc testcase used for
this commit.
ANALYZE was observed to race with a DML preceding it in binlog order
when updating the binlog and slave gtid states.
Tagging ANALYZE and other admin-class commands in the binlog, as done by
the fixes of MDEV-17515, left a flaw allowing such a race, leading to
the gtid mode out-of-order error.
This is now fixed so that ADMIN commands observe ordered access to
the slave gtid status variables and the binlog.
This commit merely adds a Read-Committed version of the MDEV-30225 test,
solely to prove that RC isolation yields ROW binlog format, as it is
supposed to per the docs.
This bug manifested itself in very rare situations when the splitting
optimization was applied to a materialized derived table with a group clause,
using a key over a constant mergeable derived table that was in the inner part
of an outer join. In this case the used tables for the key to access the
split table were incorrectly evaluated to a non-empty table map.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem
========
On a parallel, delayed replica, Seconds_Behind_Master will not be
calculated until after MASTER_DELAY seconds have passed and the
event has finished executing, resulting in potentially very large
values of Seconds_Behind_Master (which could be much larger than the
MASTER_DELAY parameter) for the entire duration the event is
delayed. This contradicts the documented MASTER_DELAY behavior,
which specifies how many seconds to withhold replicated events from
execution.
Solution
========
After a parallel replica idles, the first event after idling should
immediately update last_master_timestamp with the time that it began
execution on the primary.
Reviewed By
===========
Andrei Elkin <andrei.elkin@mariadb.com>
This patch fixes the patch for bug MDEV-30248 that unsatisfactorily
resolved the problem of resolution of references to CTEs. In some cases,
when such a reference had the same table name as the name of one of the
CTEs containing it, the reference could be resolved incorrectly,
leading to an invalid select tree where units could be mutually dependent.
This in turn could lead to an infinite sequence of recursive calls or
to infinite loops.
The patch also removes LEX::resolve_references_to_cte_in_hanging_cte(), as
with the new code for resolution of CTE references the call of this
function is no longer needed.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
(Initial patch by Varun Gupta. Amended and added comments).
When the query has both
1. Aggregate functions that require sorting data by group, and
2. Window functions
we need to use two temporary tables. The first temp. table will hold the
join output. Then it is passed to filesort(). Reading it in sorted
order allows computing the aggregate functions.
Then, we need to write their values into the second temp. table. Then, the
window function computation step can pass that to filesort() and read
the rows in the order it needs.
Failure to create the second temp. table would cause an assertion
failure: the window function code would not find where to get the values
of the aggregate functions.
Disable the bulk insert optimization if long uniques are used, because they
need to read the table (index_read) after every inserted row, and the bulk
insert optimization might disable indexes.
Bulk insert is already disabled in other cases when there is a chance
that the table will be read during the bulk insert.
plugin_vars_free_values() was walking only plugin sysvars and thus
did not free the memory of plugin PLUGIN_VAR_NOSYSVAR vars.
* change it to walk all plugin vars
* add the pluginname_ prefix to NOSYSVAR var names too,
so that plugin_vars_free_values() would be able to find their
bookmarks
The MariaDB code base uses strcat() and strcpy() in several
places. These are known to have memory safety issues and their usage is
discouraged. Common security scanners like Flawfinder flag them. In MariaDB we
should start using modern and safer variants of these functions.
This is similar to the memory issue fixes in 19af1890b5
and 9de9f105b5, but now replaces uses of strcat()
and strcpy() with the safer options strncat() and strncpy().
However, add '\0' forcefully to make sure the resulting string is correct, since
for these two functions it is not guaranteed that the new string will be
null-terminated.
Example:
  size_t dest_len = sizeof(g->Message);
  strncpy(g->Message, "Null json tree", dest_len);
  strncat(g->Message, ":", sizeof(g->Message) - strlen(g->Message));
  size_t wrote_sz = strlen(g->Message);
  size_t cur_len = wrote_sz >= dest_len ? dest_len - 1 : wrote_sz;
  g->Message[cur_len] = '\0';
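For comparison, a hedged sketch of the snprintf() form mentioned in the reviewer
notes below, reusing the same hypothetical g->Message buffer as the example;
snprintf() never writes past the buffer and always NUL-terminates when its size
argument is non-zero, so no manual termination fix-up is needed:

  /* Builds the same "Null json tree:" content as the sequence above. */
  snprintf(g->Message, sizeof(g->Message), "%s%s", "Null json tree", ":");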
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services
-- Reviewer and co-author Vicențiu Ciorbaru <vicentiu@mariadb.org>
-- Reviewer additions:
* The initial function implementation was flawed. Replaced with a simpler
and also correct version.
* Simplified code by making use of snprintf instead of chaining strcat.
* Simplified code by removing dynamic string construction in the first
place and using static strings if possible. See connect storage engine
changes.
Item_singlerow_subselect may be converted to Item_cond during
optimization. So there is a possibility of constructing nested
Item_cond_and or Item_cond_or, which is not allowed (such
conditions must be flattened).
This commit checks whether such an optimization has been applied
and flattens the condition if needed.
There are no source code changes in this commit!
This is an empty follow-up commit for
284ac6f2b7
to comment what was done, as the patch itself did not have
change comments.
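A generic, self-contained illustration of the flattening idea; this is not the
server's Item_cond code, and the node type and helper below are invented for
the sketch:

#include <memory>
#include <vector>

/* Simplified condition node: a leaf predicate or an AND/OR argument list. */
struct Cond
{
  enum Type { LEAF, AND, OR } type;
  std::vector<std::unique_ptr<Cond>> args;   /* empty for LEAF */
  explicit Cond(Type t) : type(t) {}
};

/* Splice the arguments of same-typed nested nodes into the parent list, so
   AND(a, AND(b, c)) becomes AND(a, b, c). */
void flatten(Cond *cond)
{
  if (cond->type == Cond::LEAF)
    return;
  std::vector<std::unique_ptr<Cond>> flat;
  for (auto &arg : cond->args)
  {
    flatten(arg.get());
    if (arg->type == cond->type)
      for (auto &sub : arg->args)
        flat.push_back(std::move(sub));   /* lift nested arguments up */
    else
      flat.push_back(std::move(arg));
  }
  cond->args= std::move(flat);
}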
Problems solved in this patch:
1. The function calc_hash_for_unique() erroneously takes into account
the string length, so equal strings (in terms of the collation)
with different lengths got different hash values.
For example:
- LATIN LETTER A - 1 byte
- LATIN LETTER A WITH ACUTE - 2 bytes
are equal in utf8_general_ci, but as their lengths
are different, calc_hash_for_unique() returned
different hash values.
2. calc_hash_for_unique() also erroneously used val_str()
result to calculate hashes. This may not be correct for
some data types, e.g. TIMESTAMP, as its string
value depends on the session environment (e.g. @@time_zone).
Change summary:
Instead of doing Item::val_str(), we should always call
Field::hash() of the underlying Field. It properly
handles both cases (equal strings with different
lengths, as well as tricky data types like TIMESTAMP).
Detailed change description:
Non-functional changes (make the code cleaner):
- Adding a helper class Hasher, to pass hash parts
nr1 and nr2 through function arguments more easily.
- Splitting virtual Field::hash() into a non-virtual
wrapper Field::hash() and virtual Field::hash_not_null()
(see the sketch after this change list). This helps to get rid of
duplicate code handling SQL NULL, which was identical in all
Field_xxx implementations.
- Adding a new method THD::my_ok_with_recreate_info().
Actual fix changes (make new tables work properly):
- Adding a virtual method Item::hash_not_null().
This helps to handle hashes on full fields (Item_field)
and hashes on prefix fields (Item_func_left(Item_field))
in a polymorphic way.
Implementing overrides for Item_field and Item_func_left.
- Rewriting Item_func_hash::val_int() to use Item::hash_not_null(),
instead of the combination of val_str() and calc_hash_for_unique().
Backward compatibility changes (make old tables work in the new server):
- Adding a new class Item_func_hash_mariadb_100403.
Moving the old version of Item_func_hash::val_int()
into Item_func_hash_mariadb_100403::val_int().
The Item_func_hash_mariadb_100403 class (preserving the old hash) is still
needed to open old tables before the upgrade is done.
- Adding TABLE_SHARE::old_long_hash_function() and
handler::check_long_hash_compatibility() to test
if a table is using an old hash function.
- Adding a helper method TABLE_SHARE::make_long_hash_func()
to instantiate either Item_func_hash_mariadb_100403 (for old
not upgraded tables) or Item_func_hash (for new tables).
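A minimal sketch of the wrapper/virtual split mentioned above; the class and
signatures are simplified stand-ins, not the server's actual Field API:

typedef unsigned long ulong;

class Field_sketch
{
public:
  bool null_flag= false;
  bool is_null() const { return null_flag; }

  /* Non-virtual wrapper: SQL NULL is handled once, in one place. */
  void hash(ulong *nr1, ulong *nr2)
  {
    if (is_null())
    {
      *nr1^= (*nr1 << 1) | 1;   /* the NULL mixing every Field_xxx used to repeat */
      return;
    }
    hash_not_null(nr1, nr2);
  }

  /* Each Field_xxx subclass only implements the non-NULL case. */
  virtual void hash_not_null(ulong *nr1, ulong *nr2)= 0;
  virtual ~Field_sketch() = default;
};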
Upgrade changes (make old tables upgrade in the new server properly):
Upgrading an old table to a new hash can be done using either
of these two statements:
ALTER IGNORE TABLE t1 FORCE;
REPAIR TABLE t1;
!!! These statements find and filter out erroneous duplicates !!!
The table after these statements will have fewer records
if there were erroneous duplicates (such as A and A WITH ACUTE).
The information about the filtered-out records is reported by both statements.
- Adding a new class Recreate_info to return information
about copied and duplicate rows from these functions:
- mysql_alter_table()
- mysql_recreate_table()
- admin_recreate_table()
This helps to print a warning during REPAIR:
MariaDB [test]> repair table mdev27653_100422_text;
+----------------------------+--------+----------+------------------------------------+
| Table | Op | Msg_type | Msg_text |
+----------------------------+--------+----------+------------------------------------+
| test.mdev27653_100422_text | repair | Warning | Number of rows changed from 2 to 1 |
| test.mdev27653_100422_text | repair | status | OK |
+----------------------------+--------+----------+------------------------------------+
2 rows in set (0.018 sec)
When built with ubsan and trying to load the spider plugin, the
hidden-visibility mysqld compile flag causes ha_spider.so to be missing
the symbol ha_partition. This commit fixes that, as well as some
memcpy null pointer issues when built with ubsan.
Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
Use SELECT_LEX to save lists for ORDER BY and GROUP BY before parsing
WINDOW clauses / specifications. This is needed for proper parsing
of a nested WINDOW clause when a WINDOW clause is used in a subquery
contained in another WINDOW clause.
Fix assignment of an empty SQL_I_List to another one (in the case of an empty
list, next should point to first).
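A simplified, self-contained illustration of that fix; this stand-in mirrors the
idea of the server's SQL_I_List (elements / first / next tail pointer) but is
not its actual code:

template <class T>
struct SQL_I_List_sketch
{
  unsigned elements= 0;
  T *first= nullptr;
  T **next= &first;            /* tail pointer: where the next element is linked */

  void copy_from(const SQL_I_List_sketch &src)
  {
    elements= src.elements;
    first= src.first;
    /* For an empty source list, 'next' must point at our own 'first', not at
       the source's; otherwise later appends to this list write into the other
       object. */
    next= src.elements ? src.next : &first;
  }
};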