Do not allow setting wsrep_sst_donor to NULL, as it is an
incorrect value. The user can use the value '' (default), which
represents the same as NULL. Setting wsrep_cluster_address to NULL is
already handled correctly.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem for Galera is the fact that sequences are not really
transactional. A sequence operation is committed immediately
in sql_sequence.cc, and later Galera could find out that
we have changes but the actual statement is not there anymore.
Therefore, we must place some restrictions on what kinds
of sequences Galera can support.
(1) A Galera cluster supports only sequences implemented
by the InnoDB storage engine. This is because Galera replication
currently supports only InnoDB.
(2) We do not allow LOCK TABLE on a sequence object and
we do not allow sequence creation under LOCK TABLE; instead
the lock is released and we issue a warning.
(3) We allow sequences with a NOCACHE definition or with an
INCREMENT BY 0 CACHE=n definition. This makes sure that
sequence values are unique across the Galera cluster.
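For illustration, a sketch of definitions that would and would not be
allowed under these restrictions (hypothetical names, not the exact
test cases):

  CREATE SEQUENCE s1 NOCACHE ENGINE=InnoDB;                   -- allowed
  CREATE SEQUENCE s2 INCREMENT BY 0 CACHE 100 ENGINE=InnoDB;  -- allowed
  CREATE SEQUENCE s3 CACHE 100 ENGINE=InnoDB;   -- not allowed: cached
                                                -- with non-0 increment
  CREATE SEQUENCE s4 ENGINE=MyISAM;             -- not allowed: not InnoDB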
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
lower_case_table_names=2 means "table names and database names are
stored as declared, but they are compared in lowercase".
But names of objects in grants are stored in lowercase for any value
of lower_case_table_names. This caused an error when checking grants
for objects containing uppercase letters, since table_hash_search()
didn't take the lower_case_table_names value into account.
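A hypothetical illustration of the affected scenario, with
lower_case_table_names=2 (identifiers are illustrative only):

  CREATE TABLE db1.MyTable (a INT);
  GRANT SELECT ON db1.MyTable TO u1@localhost;  -- stored as 'mytable'
  -- before the fix, the grant check for statements like the following
  -- could fail:
  SELECT * FROM db1.MyTable;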
EXPLAIN EXTENDED should always print the field item used in the left part
of an equality expression from the SET clause of an update statement as a
reference to a table column.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This bug affected EXPLAIN EXTENDED command for single-table DELETE that
used an IN subquery in its WHERE clause. A crash happened if the optimizer
chose to employ index_subquery or unique_subquery access when processing
such a command.
The crash happened when the command tried to print the transformed query.
In the current code of 10.4 for single-table DELETE statements the output
of any explain command is produced after the join structures of all used
subqueries have been destroyed. JOIN::destroy() sets the field tab of the
JOIN_TAB structures created for subquery tables to NULL. As a result
subselect_indexsubquery_engine::print() cannot use this field to get
the alias name of the joined table.
This patch suggests using the field TABLE_LIST::TAB, which can be
accessed from JOIN_TAB::tab_list, to get the alias name of the joined
table.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Remove virtual from get_min_join_buffer_size() and
get_max_join_buffer_size().
- Avoid some calls to get_min_buffer_size()
- Simplify cache usage in get_..._join_buffer_size()
- Simplify get_max_join_buffer_size() when using optimize_buff_size
- Reindented some long comments
Reviewer: Sergei Petrunia <sergey@mariadb.com>
The old code set max_records to either the number of rows
(partial_join_cardinality) or the memory size (join_buffer_space_limit),
which did not make sense.
Fixed by setting max_records to number of rows that fits into
join_buffer_size.
Other things:
- Initialize buffer cache values in JOIN_CACHE constructors (safety)
Reviewer: Sergei Petrunia <sergey@mariadb.com>
The problem, introduced in the patch for MDEV-26301:
When check_join_cache_usage() decides not to use a join buffer, it must
adjust the access method accordingly. For BNL-H joins this means switching
from pseudo-"ref access" (with index=MAX_KEY) to some other access method.
Failing to do this will cause assertions down the line when code that is
not aware of BNL-H will try to initialize index use for ref access with
index=MAX_KEY.
The fix is to follow the regular code path to disable the join buffer for
the join_tab ("goto no_join_cache") instead of just returning from
check_join_cache_usage().
The problem was that join_buffer_size conflicted with
join_buffer_space_limit, which caused the query to be run without a join
buffer. However, this caused wrong results as the optimizer assumed
that the hash+join buffer would ensure that the equi-join condition
would be satisfied, and didn't check it itself.
Fixed by not using join_buffer_space_limit when
optimize_join_buffer_size=off. This matches the documentation at
https://mariadb.com/kb/en/block-based-join-algorithms
Other things:
- Removed the unused variable JOIN_TAB::join_buffer_size_limit
- Give an error if we cannot allocate a join buffer. This can
only happen if the join_buffer variables are wrongly configured or
we are running out of memory.
In the future, instead of returning an error, we could properly
convert the query plan that uses BNL-H join into one that doesn't
use join buffering:
make sure the equi-join condition is checked where appropriate.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
select_insert::store_values() must reset the has_value_set bitmap
before every row, just like mysql_insert() does, because
ON DUPLICATE KEY UPDATE and triggers modify it.
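A sketch of the kind of statement affected (hypothetical tables):

  INSERT INTO t1 (a)
  SELECT a FROM t2
  ON DUPLICATE KEY UPDATE b= 1;
  -- ON DUPLICATE KEY UPDATE marks b in has_value_set, so the bitmap
  -- must be reset before each row is stored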
This patch optimizes the number of refills for the lateral derived table
to which a materialized derived table subject to split optimization is
converted. This optimized number of refills is now considered as the
expected number of refills of the materialized derived table when searching
for the best possible splitting of the table.
When a query does implicit grouping and join operation produces an empty
result set, a NULL-complemented row combination is generated.
However, constant table fields still show non-NULL values.
What happens is that end_send_group() is called with a
const row but without any rows matching the WHERE clause.
This last part is shown by 'join->first_record' not being set.
This causes item->no_rows_in_result() to be called for all items to reset
all sum functions to their initial state. However, fields are not set
to NULL.
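A hypothetical illustration of the wrong result (t1 is a one-row
constant table, t2 produces no matching rows; ONLY_FULL_GROUP_BY off):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  CREATE TABLE t2 (b INT);
  SELECT MAX(t2.b), t1.a FROM t1, t2 WHERE t2.b > 0;
  -- expected: NULL, NULL   (implicit grouping over an empty result)
  -- observed before the fix: NULL, 1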
The used fix is to produce NULL-complemented records for constant tables
as well. Also, reset the constant table's records back in case we're
in a subquery which may get re-executed.
An alternative fix would have item->no_rows_in_result() also work
with Item_field objects.
There are some other issues with the code:
- join->no_rows_in_result_called is used but never set.
- Tables that are used with group functions are not properly marked as
maybe_null, which is required if the table rows should be regarded as
null-complemented (not existing).
- The code that tries to detect if mixed_implicit_grouping should be set
didn't take into account all usage of fields and sum functions.
- Item_func::restore_to_before_no_rows_in_result() called the wrong
function.
- join->clear() does not use a table_map argument to clear_tables(),
which caused it to ignore constant tables.
- unclear_tables() does not correctly restore the status to what it
  was before clear_tables().
Main bug fix was to always use a table_map argument to clear_tables() and
always use join->clear() and clear_tables() together with unclear_tables().
Other fixes:
- Fixed Item_func::restore_to_before_no_rows_in_result()
- Set 'join->no_rows_in_result_called' when no_rows_in_result_set()
is called.
- Removed the unused argument from setup_end_select_func().
- More code comments
- Ensure that end_send_group() modifies the same fields as are in the
result set.
- Changed return_zero_rows() to use pointers instead of references,
similar to the rest of the code.
The problem was that mutex_init() was called after the worker was
put into the domain_hash, which allowed other threads to access it
before the mutex was initialized.
- Update wsrep-lib which contains fix for the assertion
- Fix error handling for appending fragment to streaming log,
make sure tables are closed after rollback.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Adding virtual methods to class Schema:
make_item_func_replace()
make_item_func_substr()
make_item_func_trim()
This is a non-functional preparatory change for MDEV-27744.
Variant #2.
When Histogram::point_selectivity() sees that the point value of interest
falls into one bucket, it tries to guess whether the bucket has many
different (unpopular) values or a few popular values. (The number of
rows is fixed, as it's a Height-balanced histogram).
The basis for this guess is the "width" of the value range the bucket
covers. Buckets covering wider value ranges are assumed to contain
values with proportionally lower frequencies.
This is just [brave] guesswork. For a very narrow bucket, it may
produce an estimate that's larger than the total #rows in the bucket
or even in the whole table.
Remove the guesswork and replace it with basic logic: return
either the per-table average selectivity of col=const, or selectivity
of one bucket, whichever is lower.
Fix-up for commit 476b24d084
Author: Monty
Date: Thu Feb 16 14:19:33 2023 +0200
MDEV-20057 Distinct SUM on CROSS JOIN and grouped returns wrong result
which missed initializing sorder->suffix_length.
In this commit the initialization is implemented by passing the
MY_ZEROFILL flag to the allocation of SORT_FIELD elements.
When using binlog_row_image=FULL with sequence table inserts, a
replica can deadlock because it treats full inserts in a sequence as DDL
statements by getting an exclusive lock on the sequence table. It
has been observed that with parallel replication, this exclusive
lock on the sequence table can lead to a deadlock where one
transaction has the exclusive lock and is waiting on a prior
transaction to commit, whereas this prior transaction is waiting on
the MDL lock.
The fix for this is on the master side: raise the FL_DDL
flag on the GTID of a full binlog_row_image write of a sequence table.
This forces the slave to execute the statement serially so a deadlock
cannot happen.
A test verifies the deadlock, also proving that it happens on the OLD
(pre-fix) slave. An OLD (buggy master) -replication-> NEW (fixed slave)
test is provided.
As the pre-fix master's full row-image may represent both
SELECT NEXT VALUE and INSERT, the parallel slave pessimistically
waits for the prior transaction to have committed before taking on the
critical part of the second (like INSERT in the test) event execution.
The waiting exploits a parallel slave's retry mechanism which is
controlled by `@@global.slave_transaction_retries`.
Note that in order to avoid any persistent 'Deadlock found' 2013 error
in OLD -> NEW, `slave_transaction_retries` may need to be set to a
value higher than the default.
START SLAVE is an effective work-around if this still happens.
The error was seen in a number of mtr tests and was caused
by the late initialization of rpl_parallel::LOCK_parallel_entry.
Specifically, SHOW SLAVE STATUS might find in
rpl_parallel::workers_idle() a gtid domain hash entry
already inserted whose mutex had not yet gone through
mysql_mutex_init().
Fixed by swapping the mutex init and its entry's insertion.
Tested with a generous number of `mtr --repeat` runs of a few of the
tests reported to fail, including rpl.parallel_backup.
When replicating MDL events for a table that uses system versioning
without primary keys, ensure that for data sets with duplicate
records, the updates to these records with duplicates are enacted on
the correct row. That is, there was a bug (reported in MDEV-30430)
such that the function to find the row to update would stop after
finding the first matching record. However, in the absence of
primary keys, the version of the record is needed to compare the row
to ensure we are updating the correct one.
The fix, therefore, updates the record comparison functionality to
use system version columns when there are no primary keys on the
table.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:
========
A master can segfault if it can't set up decryption for its binary
log during a binlog dump with Using_Gtid=Slave_Pos. If slave
connects using GTID mode, the master will call into
log.cc::get_gtid_list_event(), which iterates through binlog events
looking for a Gtid_list_log_event. On an encrypted binlog that the
master cannot decrypt, the first event will be a
START_ENCRYPTION_EVENT, which will call into the following decryption
branch:
  if (fdle->start_decryption((Start_encryption_log_event*) ev))
    errormsg= "Could not set up decryption for binlog.";
The event iteration however, does not stop in spite of this error.
The master will try to read the next event, but segfault while
trying to decrypt it because decryption failed to initialize.
Solution:
========
Break the event iteration if decryption cannot be set up.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This bug could manifest itself at the first execution of a prepared
statement created for queries using a materialized view defined as a
union. A crash could happen for sure if the query contained a condition
pushable into the view and this condition was over a column defined via
a complex string expression requiring implicit conversion from one
charset to another for
some of its sub-expressions. The bug could cause crashes when executing
PS for some other queries whose optimization needed building clones for
such expressions.
This bug was introduced in the patch for MDEV-29988 where the class
Item_direct_ref_to_item was added. The implementations of the virtual
methods get_copy() and build_clone() were invalid for the class and this
could cause crashes after the method build_clone() was called for
expressions containing objects of the Item_direct_ref_to_item type.
Approved by Sergei Golubchik <serg@mariadb.com>
This bug caused server crash when processing a multi-update statement that
used views if optimizer tracing was enabled.
The bug was introduced in the patch for MDEV-30539 that could incorrectly
detect the most top level selects of queries if views were used in them.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Assertion `thd->mdl_context.is_lock_owner()` fires when a client is
disconnected while a transaction is active and a table is opened
through the `HANDLER` interface.
The reason for the assertion is that when a connection closes, its
ongoing transaction is eventually rolled back in
`Wsrep_client_state::bf_rollback()`. This method also releases explicit
MDL locks, which are expected to survive beyond the transaction lifetime.
This patch also removes calls to `mysql_ull_cleanup()`. User level
locks are not supported in combination with Galera, making these calls
unnecessary.
This bug could affect multi-update statements as well as single-table
update statements processed as multi-updates when the where condition
contained a range condition over a non-indexed varchar column. The
optimizer calculates selectivity of such range conditions using histograms.
For each range, the buckets containing the endpoints of the range are
determined with a procedure that stores the values of the endpoints in
the space of the record buffer where values of the columns are usually
stored. For a range over a varchar column the value of an endpoint may
exceed the size of the buffer, and in such a case the value is stored
with truncation. This truncation cannot affect the result of the
calculation of the range selectivity, as the calculation employs only
the beginning of the value string. However, it can trigger generation
of an unexpected error on this truncation if an update statement is
processed.
This patch prohibits truncation messages when selectivity of a range
condition is calculated for a non-indexed column.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
If we are inside stored function or trigger we should not commit
or rollback current statement transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Adding a new argument "flag" to MY_COLLATION_HANDLER::strnncollsp_nchars()
and a flag MY_STRNNCOLLSP_NCHARS_EMULATE_TRIMMED_TRAILING_SPACES.
The flag defines if strnncollsp_nchars() should emulate trailing spaces
which were possibly trimmed earlier (e.g. in InnoDB CHAR compression).
This is important for NOPAD collations.
For example, with this input:
- str1= 'a ' (Latin letter a followed by one space)
- str2= 'a ' (Latin letter a followed by two spaces)
- nchars= 3
if the flag is given, strnncollsp_nchars() will virtually restore
one trailing space to str1 up to nchars (3) characters and compare two
strings as equal:
- str1= 'a ' (one extra trailing space emulated)
- str2= 'a ' (as is)
If the flag is not given, strnncollsp_nchars() does not add trailing
virtual spaces, so in case of a NOPAD collation, str1 will be compared
as less than str2 because it is shorter.
- Field_string::cmp_prefix() now passes the new flag.
Field_varstring::cmp_prefix() and Field_blob::cmp_prefix() do
not pass the new flag.
- The branch in cmp_whole_field() in storage/innobase/rem/rem0cmp.cc
(which handles the CHAR data type) now also passes the new flag.
- Fixing UCA collations to respect the new flag.
Other collations are possibly also affected, however
I had no success in making an SQL script demonstrating the problem.
Other collations will be extended to respect this flag in a separate
patch later.
- Changing the meaning of the last parameter of Field::cmp_prefix()
from "number of bytes" (internal length)
to "number of characters" (user visible length).
The code calling cmp_prefix() from handler.cc was wrong.
After this change, the call in handler.cc became correct.
The code calling cmp_prefix() from key_rec_cmp() in key.cc
was adjusted according to this change.
- Old strnncollsp_nchars() related tests in unittest/strings/strings-t.c
  now pass the new flag.
A few new tests also were added, without the flag.
This is allowed:
STRING_WITH_LEN("string literal")
This is not:
char *str = "pointer to string";
... STRING_WITH_LEN(str) ..
In C++ this is also allowed:
const char str[] = "string literal";
... STRING_WITH_LEN(str) ...
CREATE [TEMPORARY] SEQUENCE is internally CREATE+INSERT (initial value)
and it is replicated using statement based replication. In Galera
we use either TOI or RSU so we should skip commit time hooks
for it.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
With binlogs enabled, debug assertion ut_ad(xid_seqno > wsrep_seqno)
fired in trx_rseg_update_wsrep_checkpoint() when an applier thread
synced the seqno out of order for write set which had failed
certification. This was caused by releasing commit
order too early when binlogs were on, allowing group
commit to run in parallel and commit following transactions
too early.
Fixed by extending the commit order critical section to cover
call to wsrep_set_SE_checkpoint() also when binlogs are on.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
When using the LEFT() function with a string that has no character
set, the function crashes. This is because the function assumes that
the string has a charset and tries to use it to calculate the
length of the string.
Two functions, UNHEX and WEIGHT_STRING, returned a string without
the charset being set to a non-null value.
The fix is to set the charset when calling val_str on these two
functions.
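Queries of roughly this shape could hit the problem (a sketch, not the
exact reproducer from the bug report):

  SELECT LEFT(UNHEX('4D617269614442'), 5);
  SELECT LEFT(WEIGHT_STRING('ab'), 1);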
Reviewed-by: Alexander Barkov <bar@mariadb.com>
Reviewed-by: Daniel Black <daniel@mariadb.org>
Problem:
UNIX_TIMESTAMP() called for an expression of the TIME data type
returned NULL.
Inside Type_handler_timestamp_common::Item_val_native_with_conversion
the call for item->get_date() did not convert TIME to DATETIME
automatically (because it does not have to, by design).
As a result, Type_handler_timestamp_common::TIME_to_native() received
a MYSQL_TIME value with zero date 0000-00-00 and therefore returned "true"
(indicating SQL NULL value).
Fix:
Removing the call for item->get_date().
Instantiating Datetime(item) instead.
This forces automatic TIME to DATETIME conversion
(unless @@old_mode is zero_date_time_cast).
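For illustration, a sketch of the behavior (assuming @@old_mode does
not include zero_date_time_cast):

  SELECT UNIX_TIMESTAMP(TIME'10:20:30');
  -- before the fix: NULL
  -- after the fix: the TIME value is converted to a DATETIME on the
  -- current date and a valid timestamp is returned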
EXPLAIN EXTENDED for an UPDATE/DELETE/INSERT/REPLACE statement did not
produce the warning containing the text representation of the query
obtained after the optimization phase. Such warning was produced for
SELECT statements, but not for DML statements.
The patch fixes this defect of EXPLAIN EXTENDED for DML statements.
- Description:
- Before 10.3.8 semisync was a plugin; it was built into the server
  with MDEV-13073, starting with commit cbc71485e2.
  There are still some usages of `rpl_semi_sync_master` in mtr.
Note:
- To recognize the replica in the `dump_thread`, the replica creates a
  local variable `rpl_semi_sync_slave` (the keyword of the plugin) in
  the function `request_transmit`, which is caught by the primary in
  `is_semi_sync_slave()`. This is a user variable and as such not
  related to the obsolete plugin.
- Found in `sys_vars.all_vars` and `rpl_semi_sync_wait_point` tests,
usage of plugins `rpl_semi_sync_master`, `rpl_semi_sync_slave`.
The former test is disabled by default (`sys_vars/disabled.def`)
and marked as `obsolete`, however this patch will remove the queries.
- Add cosmetic fixes to semisync codebase
Reviewer: <brandon.nesterenko@mariadb.com>
Closes PR #2528, PR #2380
The hang could be seen as show slave status displaying an error like
Last_Error: Could not execute Write_rows_v1
along with
Slave_SQL_Running: Yes
accompanied with one of the replication threads in show-processlist
characteristically having status like
2394 | system user | | NULL | Slave_worker | 50852| closing tables
It turns out that the 'closing tables' worker got trapped in an endless
loop in mark_start_commit_inner() across already garbage-collected gco
items.
The reclaimed gco links are explained by the actually possible
out-of-order termination of groups of events due to the Last_Error.
This patch reinforces the correct ordering of
finish_event_group's cleanup actions, including unlinking gcos
from the active list.
and my_getwd(). The cause is the my_errno define, which
depends on my_thread_var being a non-null pointer;
otherwise it will be dereferenced and cause
a SEGV already in the signal handler.
Replaced uses of these functions in output_core_info()
with POSIX read()/getcwd() functions instead.
The getwd() fallback in my_getcwd() isn't needed as
it has been obsolete for a very long time.
Thanks to Vladislav Vaintroub for the diagnosis and the POSIX
recommendation.
MDEV-30668 Set function aggregated in outer select used in view definition
This patch fixes two bugs concerning views whose specifications contain
subqueries with set functions aggregated in outer selects.
Due to the first bug, such views that have implicit grouping were
considered mergeable. This led to wrong result sets for selects from
these views.
Due to the second bug the aggregation select was determined incorrectly and
this led to bogus error messages.
The patch added several test cases for these two bugs and for four other
duplicate bugs.
The patch also enables view-protocol for many other test cases.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Avoid passing the real field cache as a parameter when we check for
  duplicates.
- Correct cache cleanup (the cached field number also has to be reset).
- A simple test for the name resolution cache added.
1. Adding a separate MY_COLLATION_HANDLER
my_collation_ucs2_general_mysql500_ci_handler
implementing a proper order for ucs2_general_mysql500_ci
The problem happened because ucs2_general_mysql500_ci
erroneously used my_collation_ucs2_general_ci_handler.
2. Cosmetic changes: Renaming:
- plane00_mysql500 to my_unicase_mysql500_page00
- my_unicase_pages_mysql500 to my_unicase_mysql500_pages
to use the same naming style with:
- my_unicase_default_page00
- my_unicase_default_pages
3. Moving code fragments from
- handler::check_collation_compatibility() in handler.cc
- upgrade_collation() in table.cc
into new methods in class Charset, to reuse the code easier.
Subselect_single_value_engine cannot handle a table value constructor
used as a subquery. That's why any table value constructor TVC used as
a subquery is converted into a select over a derived table whose
specification is the TVC.
Currently the names of the columns of the derived table DT are taken from
the first element of TVC and if the k-th component of the element happens
to be a subquery the text representation of this subquery serves as the
name of the k-th column of the derived table. References of all columns of
the derived table DT compose the select list of the result of the conversion.
If a definition of a view contained a table value constructor used as a
subquery and the view was registered after this conversion had been
applied, we could register an invalid view definition if the first
element of the TVC contained a subquery as its component: the name of
this component was taken from the original subquery, while the name of
the corresponding column of the derived table was taken from the text
representation of the subquery produced by the function
SELECT_LEX::print(), and these names usually differed from each other.
To avoid registration of such invalid views the function SELECT_LEX::print()
now prints the original TVC instead of the select in which this TVC has
been wrapped. Now the specification of a registered view looks as if no
conversions from TVC to selects were done.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
note that `KILL USER foo` should *not* fail with ER_KILL_DENIED_ERROR
when SHOW PROCESSLIST doesn't show connections of that user.
Because no connections exist or because the caller has no PROCESS -
doesn't matter.
also, fix the error message to make sense
("You are not owner of thread <current connection id>" is ridiculous)
SELECT DISTINCT did not work with expressions with sum functions.
Distinct was only done on the values stored in the intermediate temporary
tables, which only stored the value of each sum function.
In other words:
SELECT DISTINCT sum(a),sum(b),avg(c) ... worked.
SELECT DISTINCT sum(a),sum(b) > 2,sum(c)+sum(d) would not work.
The latter query would ONLY apply distinct to the sum(a) part.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
This was fixed by extending remove_dup_with_hash_index() and
remove_dup_with_compare() to take into account the columns in the result
list that were not stored in the temporary table.
Note that in many cases the above dup removal functions are not used as
the optimizer may be able to either remove duplicates early or it will
discover that duplicate removal is not needed. The latter happens, for
example, if the group by fields are part of the result.
Other things:
- Backported from 11.0 the change of Sort_param.tmp_buffer from char* to
String.
- Changed Type_handler::make_sort_key() to take String as a parameter
instead of Sort_param. This was done to allow make_sort_key() functions
to be reused by distinct elimination functions.
This makes Type_handler_string_result::make_sort_key() similar to code
in 11.0
- Simplified error handling in remove_dup_with_compare() to remove code
duplication.
MDEV-28227 added the error messages in simplified characters.
Let's use these for those running a zh_CN profile.
Per Haidong Ji in the MDEV, Taiwan/Hong Kong (zh_TW/zh_HK) users
would expect traditional characters, so this is left for when
we have those.
ha_partition doesn't forward Rowid Filter API calls to the storage
engines handling partitions.
An attempt to use rowid filtering with a partitioned table caused either
- Rowid Filter being shown in EXPLAIN but not actually used
- Assertion failure when subquery code tried to disable/enable rowid
filter, which was not present.
Fixed by returning correct flags from ha_partition::index_flags()
One of the constraints added in the MDEV-29639 patch is that only
the first event after idling should update last_master_timestamp;
and as long as the replica has more events to execute, the variable
should not be updated. The corresponding test,
rpl_delayed_parallel_slave_sbm.test, aims to verify this; however,
if the IO thread takes too long to queue events, the SQL thread can
appear to catch up too fast.
This fix ensures that the relay log has been fully written before
executing the events.
Note that the underlying cause of this test failure needs to be
addressed as a bug-fix, this is a temporary fix to stop test
failures. To track work on the bug-fix for the underlying issue,
please see MDEV-30619.
The parser code for single-table DELETE missed the call of the function
LEX::check_main_unit_semantics(). As a result the nesting level field
of SELECT_LEX structures remained set to 0 for all non-top-level
selects. This could lead to different kinds of problems. In particular,
it did not allow properly determining the selects where set functions
had to be aggregated when they were used in inner subqueries.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. Compiler can better understand the intent of the code, thus it leads
to more optimization possibilities. Additionally it enabled detecting
unused variables that had an empty default constructor but not marked
so explicitly.
A particular change was required following this patch in
sql/opt_range.cc: result_keys, an unused Bitmap template class instance,
now correctly issues unused variable warnings.
Setting Bitmap template class constructor to default allows the compiler
to identify that there are no side-effects when instantiating the class.
Previously the compiler could not issue the warning as it assumed Bitmap
class (being a template) would not be performing a NO-OP for its default
constructor. This prevented the "unused variable warning".
The error string from ER_KILL_QUERY_DENIED_ERROR took a different
type from ER_KILL_DENIED_ERROR for the thread id. This shows
up as differences on 32-bit big endian arches like powerpc (Deb
notation).
Normalize the passing of the THD->id to its real type of my_thread_id,
and cast to (long long) on output. As such normalize the
ER_KILL_QUERY_DENIED_ERROR to that convention too.
Note for upwards merge, convert the type to %lld on new translations
of ER_KILL_QUERY_DENIED_ERROR.
This patch allowed transformation of EXISTS subqueries into equivalent
IN predicands at the top level of WHERE conditions for multi-table UPDATE
and DELETE statements. There was no reason to prohibit the transformation
for such statements. The transformation provides more opportunities of
using semi-join optimizations.
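A sketch of a statement that now becomes eligible for the
transformation (hypothetical tables):

  DELETE t1 FROM t1, t3
  WHERE t1.b = t3.b
    AND EXISTS (SELECT 1 FROM t2 WHERE t2.a = t1.a);
  -- the EXISTS can be rewritten into t1.a IN (SELECT a FROM t2),
  -- opening the door to semi-join optimizations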
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Enable use of Rowid Filter optimization with eq_ref access.
Use the following assumptions:
- Assume index-only access cost is 50% of non-index-only access cost.
- Take into account that "Eq_ref access cache" reduces the number of
lookups eq_ref access will make.
= This means the number of Rowid Filter checks is reduced also
= Eq_ref access cost is computed using that assumption (see
prev_record_reads() call), so we should use it in all cost
computations.
This patch fixes the problem by adding a new rule, boolean_test.
This makes the grammar clearer and less conflicting.
Additionally, fixing %prec in this grammar branch:
- | boolean_test IS NULL_SYM %prec PREC_BELOW_NOT
+ | boolean_test IS NULL_SYM %prec IS
to have consistently "%prec IS" in all grammar branches starting
with "boolean_test IS ...".
It's not clear why these three rules needed different %prec before the fix:
- boolean_test IS TRUE
- boolean_test IS UNKNOWN
- boolean_test IS NULL
This bug manifested itself when the server processed a query containing
a derived table over a union whose ORDER BY clause included a subquery
with an unresolvable column reference. For such a query the server
crashed when trying to resolve column references in the ORDER BY clause
used by the union.
For any union with ORDER BY clause an extra SELECT_LEX structure is created
and it is attached to SELECT_LEX_UNIT structure of the union via the field
fake_select_lex. The outer context for fake_select_lex must be the same as
for other selects of the union. If the union is used in the FROM list of
a derived table then the outer context for fake_select_lex must be set to
NULL in line with other selects of the union. It was not done and it
caused a crash when searching for a possible resolution of an
unresolvable column reference occurring in a subquery used in the
ORDER BY clause.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
ANALYZE was observed to race over a preceding, in binlog order, DML
in updating the binlog and slave gtid states.
Tagging ANALYZE and other admin-class commands in the binlog by the
fixes of MDEV-17515 left a flaw allowing such a race, leading to
the gtid mode out-of-order error.
This is now fixed by making ADMIN commands observe the ordered access
to the slave gtid status variables and binlog.
This bug manifested itself in very rare situations when splitting
optimization was applied to a materialized derived table with a group
clause by a key over a constant mergeable derived table that was in the
inner part of an outer join. In this case the used tables for the key
to access the split table were incorrectly evaluated to a non-empty
table map.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem
========
On a parallel, delayed replica, Seconds_Behind_Master will not be
calculated until after MASTER_DELAY seconds have passed and the
event has finished executing, resulting in potentially very large
values of Seconds_Behind_Master (which could be much larger than the
MASTER_DELAY parameter) for the entire duration the event is
delayed. This contradicts the documented MASTER_DELAY behavior,
which specifies how many seconds to withhold replicated events from
execution.
Solution
========
After a parallel replica idles, the first event after idling should
immediately update last_master_timestamp with the time that it began
execution on the primary.
Reviewed By
===========
Andrei Elkin <andrei.elkin@mariadb.com>
This patch fixes the patch for bug MDEV-30248 that unsatisfactorily
resolved the problem of resolution of references to CTE. In some cases
when such a reference had the same table name as the name of one of the
CTEs containing this reference, the reference could be resolved
incorrectly, which led to an invalid select tree where units could be
mutually dependent. This in its turn could lead to an infinite sequence
of recursive calls or to falling into infinite loops.
The patch also removes LEX::resolve_references_to_cte_in_hanging_cte() as
with the new code for resolution of CTE references the call of this
function is not needed anymore.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
(Initial patch by Varun Gupta. Amended and added comments).
When the query has both
1. Aggregate functions that require sorting data by group, and
2. Window functions
we need to use two temporary tables. The first temp.table will hold the
join output. Then it is passed to filesort(). Reading it in sorted
order allows to compute the aggregate functions.
Then, we need to write their values into the second temp. table. Then,
Window Function computation step can pass that to filesort() and read
them in the order it needs.
Failure to create the second temp. table would cause an assertion
failure: window function code would not find where to get the values
of the aggregate functions.
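A hypothetical shape of a query that needs both steps:

  SELECT b, SUM(a), ROW_NUMBER() OVER (ORDER BY SUM(a))
  FROM t1
  GROUP BY b;
  -- SUM(a) needs grouping via the first temp table plus filesort();
  -- ROW_NUMBER() then sorts the aggregated rows via the second one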
disable bulk insert optimization if long uniques are used, because they
need to read the table (index_read) after every inserted row. And bulk
insert optimization might disable indexes.
bulk insert is already disabled in other cases when there are chances
that the table will be read during the bulk insert.
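A sketch of the affected pattern (hypothetical table):

  CREATE TABLE t1 (a BLOB, UNIQUE KEY(a));  -- long unique, hash-based
  INSERT INTO t1 VALUES ('x'), ('x');
  -- detecting the duplicate requires an index_read of the table after
  -- the first row is inserted, so bulk insert must stay disabled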
plugin_vars_free_values() was walking plugin sysvars and thus
did not free memory of plugin PLUGIN_VAR_NOSYSVAR vars.
* change it to walk all plugin vars
* add the pluginname_ prefix to NOSYSVARS var names too,
so that plugin_vars_free_values() would be able to find their
bookmarks
The MariaDB code base uses strcat() and strcpy() in several
places. These are known to have memory safety issues and their usage is
discouraged; common security scanners like Flawfinder flag them. In
MariaDB we should start using modern and safer variants of these
functions.
This is similar to the memory issue fixes in 19af1890b5
and 9de9f105b5, but now replaces the use of strcat()
and strcpy() with the safer options strncat() and strncpy().
However, we add '\0' forcefully to make sure the result string is
correct, since for these two functions it is not guaranteed that the
new string will be null-terminated.
Example:
  size_t dest_len= sizeof(g->Message);
  strncpy(g->Message, "Null json tree", dest_len);
  strncat(g->Message, ":", sizeof(g->Message) - strlen(g->Message));
  size_t wrote_sz= strlen(g->Message);
  size_t cur_len= wrote_sz >= dest_len ? dest_len - 1 : wrote_sz;
  g->Message[cur_len]= '\0';
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services
-- Reviewer and co-author Vicențiu Ciorbaru <vicentiu@mariadb.org>
-- Reviewer additions:
* The initial function implementation was flawed. Replaced with a simpler
and also correct version.
* Simplified code by making use of snprintf instead of chaining strcat.
* Simplified code by removing dynamic string construction in the first
place and using static strings if possible. See connect storage engine
changes.
Item_singlerow_subselect may be converted to Item_cond during
optimization. So there is a possibility of constructing nested
Item_cond_and or Item_cond_or which is not allowed (such
conditions must be flattened).
This commit checks whether such an optimization has been applied
and flattens the condition if needed.
When built with ubsan and trying to load the spider plugin, the hidden
visibility of mysqld compiling flag causes ha_spider.so to be missing
the symbol ha_partition. This commit fixes that, as well as some
memcpy null pointer issues when built with ubsan.
Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
Use SELECT_LEX to save lists for ORDER BY and GROUP BY before parsing
WINDOW clauses / specifications. This is needed for proper parsing
of a nested WINDOW clause when a WINDOW clause is used in a subquery
contained in another WINDOW clause.
Fix assignment of an empty SQL_I_List to another one (in case of an
empty list, next should point to first).
Updated wsrep-lib to version in which server_state
wait_until_state() and sst_received() were changed to report
errors via return codes instead of throwing exceptions. Added
error handling accordingly.
Tested manually that failure in sst_received() which was
caused by server misconfiguration (unknown configuration variable
in server configuration) does not cause crash due to uncaught
exception.
If two high priority threads have a lock conflict, we look at the
order of these transactions and honor the earlier transaction.
The for_locking parameter in lock_rec_has_to_wait() has become
obsolete and is now removed from the code.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
The rather recent thd_need_ordering_with() function does not take
high priority transactions' order into consideration. Changed this
function to also compare transaction seqnos and favor the earlier
transaction.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Created an mtr test for reproducing the crash and
developed the actual fix for the issue:
setting THD::system_thread_info.rpl_sql_info for the replayer thread,
the same way as it is handled for appliers.
Recorded the test result with the fix.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
A cluster conflict victim's THD is marked with wsrep_aborter.
THD::wsrep_aborter holds the thread ID of the high priority thread
which is currently carrying out BF aborting for this victim.
However, the BF abort operation is not always successful,
and in such a case the wsrep_aborter mark should be removed.
In the old code, this wsrep_aborter resetting did not happen,
and this could lead to a situation where the sticky wsrep_aborter
mark prevents any further attempt to BF abort this transaction.
This commit fixes this issue, and resets wsrep_aborter after an
unsuccessful BF abort attempt.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
node->is_delete was incorrectly set to NO_DELETE for a set of operations.
In general we shouldn't rely on sql_command and should look for more
abstract ways to control the behavior.
trg_event_map seems to be a suitable way. To account for replica nodes,
it is ORed with slave_fk_event_map, which stores trg_event_map when the
replica has triggers disabled.
Problem:
=======
Mysqlbinlog cannot show the type of a compressed
column when two levels of verbosity are provided.
Solution:
========
Extend the log event printing logic to handle and
tag compressed types.
Behavioral Changes:
==================
Old: When mysqlbinlog is called in verbose mode and
the database uses compressed columns, an error is
returned to the user.
New: The output will append " COMPRESSED" to the
type of compressed columns.
Reviewed By
===========
Andrei Elkin <andrei.elkin@mariadb.com>
(Variant 3, initial variant was by Rex Jonston)
A LEFT JOIN with a constant as a column of the inner table produced a
wrong query result if the optimizer had to write the inner table column
into a temp table. Query pattern:
SELECT ...
FROM (SELECT /*non-mergeable select*/
FROM t1 LEFT JOIN (SELECT 'Y' as Val) t2 ON ...) as tbl
Fixed this by adding Item_direct_view_ref::save_in_field() which follows
the pattern of Item_direct_view_ref's save_org_in_field(),
save_in_result_field() and val_XXX() functions:
* call check_null_ref() and handle NULL value
* if we didn't get a NULL-complemented row, call Item_direct_ref's function.
it's incorrect to use change_item_tree() to replace arguments
of top-level AND/OR, because they (arguments) are stored in a List,
so a pointer to an argument is in the list_node, and individual
list_node's of top-level AND/OR can be deleted in Item_cond::build_equal_items().
In that case rollback_item_tree_changes() will modify the deleted object.
Luckily, it's not needed to use change_item_tree() for top-level
AND/OR, because the whole top-level item is copied and preserved
in prep_where and prep_on, and restored from there.
So, just don't.
Additionally to the test case in the commit it fixes
* ASAN failure of main.opt_tvc --ps
* ASAN failure of main.having_cond_pushdown --ps
when an internal temporary table field is created from a real field,
a new temp field should only copy a default from the source field
when the latter has it
when creating a temp table field from an actual table field,
these two fields are supposed to be mostly identical
(except for BIT field storage), in particular, temp field should
have the same default as the orig field, even if the sql_mode has
been changed meanwhile (e.g. to include NO_ZERO_DATE)
regression from MDEV-29540 / 8c38939369.
INSERT SELECT errors needed to be unconditionally ignored.
As this touches the CREATE .. SELECT functionality, show
the equivalent test there.
This bug affected queries with nested left joins having the same last
inner table, such that not_exists optimization could be applied to the
most inner outer join when the optimizer chose to use join buffers. The
bug could lead to producing a wrong result set.
If the WHERE condition of a query contains a conjunctive IS NULL
predicate over a non-nullable column of an inner table of a non-nested
outer join, then not_exists optimization can be applied to the outer
join. With this optimization, when looking for matches for a certain
record from the outer table of the join, the records of the inner table
can be ignored right after the first match satisfying the ON condition
is found.
In the case of nested outer joins having the same last inner table this
optimization still can be applied but only if all ON conditions of the
embedding outer joins are satisfied. Such check was missing in the code
that tried to apply not_exists optimization when join buffers were used
for outer join operations.
This problem had already been fixed in the patch for bug MDEV-7992. Yet
there it was resolved only for the cases when join buffers were not
used for outer joins.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
MariaDB MDEV-12583 added a `SOURCE_REVISION` variable that exposes the
SHA1 of the source code commit that the currently running engine was
built from. This info is useful for troubleshooting and debugging.
This commit does the following:
- adds the `SOURCE_REVISION` value to the engine error log.
- when a crash triggers handle_fatal_signal, the `SOURCE_REVISION` will
  be included in the crash report.
- resolves MDEV-20344: startup messages belong in stderr/error-log,
  not stdout
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
the parser couldn't parse `1=2 not between 3 and 5`
after `2` it expected only NOT2_SYM, but not NOT_SYM
(visible from the sql_yacc.output file), which resulted in
Syntax error ... near 'not between 3 and 4'
The parser was confused by a rather low NOT_SYM precedence and
%prec BETWEEN_SYM didn't resolve this confusion.
As a fix, let's remove any %precedence from NOT_SYM and
specify %prec explicitly in the only place where it matters for NOT_SYM.
In other places, such as for NOT BETWEEN, NOT_SYM won't have a
precedence, so bison won't be confused about it.
The idea is to put Item_direct_ref_to_item as a transparent and
permanent wrapper before a string which requires conversion.
So that Item_direct_ref_to_item would be the only place where
the pointer to the string item is stored, this pointer can be changed
and restored during PS execution as needed. And if any permanent
(subquery) optimization would need a pointer to the item,
it'll use a pointer to the Item_direct_ref_to_item - which is
a permanent item and won't go away.
1. In case of system-versioned table add row_end into FTS_DOC_ID index
in fts_create_common_tables() and innobase_create_key_defs().
fts_n_uniq() returns 1 or 2 depending on whether the table is
system-versioned.
After this patch recreate of FTS_DOC_ID index is required for
existing system-versioned tables. If you see this message in error
log or server warnings: "InnoDB: Table db/t1 contains 2 indexes
inside InnoDB, which is different from the number of indexes 1
defined in the MariaDB" use this command to fix the table:
ALTER TABLE db.t1 FORCE;
2. Fix duplicate history for secondary unique index like it was done
in MDEV-23644 for the clustered index (932ec586aa). In case of an
existing history row which conflicts with the currently inserted row,
we
was inserted as part of current transaction. In that case we
indicate with DB_FOREIGN_DUPLICATE_KEY that new history row is not
needed and should be silently skipped.
3. Some parts of MDEV-21138 (7410ff436e) are reverted. Skipping of the
FTS_DOC_ID index for history rows caused problems with the purge
system. Now this is fixed differently, by p.2.
4. wait_all_purged.inc checks that we didn't affect non-history rows
so they are deleted and purged correctly.
Additional FTS fixes
fts_init_get_doc_id(): exclude history rows from max_doc_id
calculation. fts_init_get_doc_id() callback is used only for crash
recovery.
fts_add_doc_by_id(): set max value for row_end field.
fts_read_stopword(): stopwords table can be system-versioned too. We
now read stopwords only for current data.
row_insert_for_mysql(): exclude history rows from doc_id validation.
row_merge_read_clustered_index(): exclude history_rows from doc_id
processing.
fts_load_user_stopword(): for versioned table retrieve row_end field
and skip history rows. For non-versioned table we retrieve 'value'
field twice (just for uniformity).
FTS tests for System Versioning now include maybe_versioning.inc which
adds 3 combinations:
'vers' for debug build sets sysvers_force and
sysvers_hide. sysvers_force makes every created table
system-versioned, sysvers_hide hides WITH SYSTEM VERSIONING
for SHOW CREATE.
Note: basic.test, stopword.test and versioning.test do not
require debug for 'vers' combination. This is controlled by
$modify_create_table in maybe_versioning.inc and these
tests run WITH SYSTEM VERSIONING explicitly which allows to
test 'vers' combination on non-debug builds.
'vers_trx' like 'vers' sets sysvers_force_trx and sysvers_hide. That
tests FTS with trx_id-based System Versioning.
'orig' works like before: no System Versioning is added, no debug is
required.
Upgrade/downgrade test for System Versioning is done by
innodb_fts.versioning. It has 2 combinations:
'prepare' makes binaries in std_data (requires old server and OLD_BINDIR).
It tests upgrade/downgrade against old server as well.
'upgrade' tests upgrade against binaries in std_data.
Cleanups:
Removed innodb-fts-stopword.test as it duplicates stopword.test
Works like vers_force but forces trx_id-based system-versioned tables
if the storage supports it (currently InnoDB-only). Otherwise creates
timestamp-based system-versioned table.
The incorrect type handler caused an incorrect result_type() for
Item_cache_row (STRING_RESULT rather than ROW_RESULT). By updating the
constructor of Item_cache_row with the correct type handler, it fixes
this problem.
Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
Reviewed-by: Sergei Golubchik <serg@mariadb.com>
Consistent with MDEV-4206, an empty log_slow_filter still means
no explicit filtering. Since 21518ab2e4, however, the
log_queries_not_using_indexes setting became stored in the same variable.
As we need to test for the absence of log_queries_not_using_indexes,
the SERVER_QUERY_NO_INDEX_USED part of log_slow_statement, the empty
criteria resulted in always logging queries not using indexes if
log_slow_filter was set to empty.
Adjusted the log_slow.test for MDEV-4206, as slow_log_query has been
global and session for a while, and it was relying on the MDEV-21187
buggy behavior to detect a slow query.
Reviewer: Monty
(Patch from Monty, slightly amended)
Fix rowid filtering optimization in best_access_path():
== Ref access + rowid filtering ==
The cost computations compare #records and index-only scan cost
(keyread_tmp) to find out the per-record advantage one will get if
they skip reading full table record.
The computations produce wrong result when:
- the #records are "clipped down" with s->worst_seeks or
thd->variables.max_seeks_for_key. keyread_tmp is not clipped
this way so the numbers are not comparable.
- access_factor is negative. This means index_only read is
cheaper than non-index-only read.
This patch makes the optimizer not to consider Rowid Filtering in
such cases.
The decision is logged in the Optimizer Trace using
"rowid_filter_skipped" name.
== Range access + rowid filtering ==
when considering to use Rowid Filter with range access, do multiply
keyread_tmp by record_count. That way, it is comparable with the
range access's estimate, which is multiplied by record_count.
mysql_discard_or_import_tablespace(): On successful
ALTER TABLE...DISCARD TABLESPACE, evict the table handle from the
table definition cache, so that ha_innobase::close() will be invoked,
like InnoDB expects to be the case. This will avoid an assertion failure
ut_a(table->get_ref_count() == 0) during IMPORT TABLESPACE.
ha_innobase::open(): Do not issue any ER_TABLESPACE_DISCARDED warning.
Member functions for DML will do that.
ha_innobase::truncate(), ha_innobase::check_if_supported_inplace_alter():
Issue ER_TABLESPACE_DISCARDED warnings, to compensate for the removal of
the warning in ha_innobase::open().
row_quiesce_write_indexes(): Only write information about committed
indexes. The ALTER TABLE t NOWAIT ADD INDEX(c) in the nondeterministic
test case will most of the time fail due to a metadata lock (MDL) timeout
and leave behind an uncommitted index.
Reviewed by: Sergei Golubchik
The geometry type requires Type: "Feature", but the feature need
not be first in the JSON structure.
Adjust the code to return an error if the geometry isn't a JSON object,
but continue parsing, searching for Type: "Feature" to trigger
the geometry parsing.
Thanks to Derick Magnusen for the bug report.
The bug is caused by a similar mechanism as MDEV-21027.
The function check_insert_or_replace_autoincrement failed to open
all the partitions on REPLACE SELECT statements, and that resulted in
the assertion error.
Item_func_not_all::print() either uses Item_func::print() or
directly invokes args[0]->print(). Thus the precedence should be
either the one of Item_func or of args[0].
Item_allany_subselect::print() prints args[0], then a comparison op,
then a subquery. That is, the precedence should be the one of
a comparison.
rename to stress that it is a specific hack for Item_func_nextval
and should not be used for other items.
If a vcol uses Item_func_nextval, a corresponding table for the sequence
should be added to the prelocking list (in that sense NEXTVAL is not
simply a function, but more like a subquery), see add_internal_tables()
in DML_prelocking_strategy::handle_table(). At the moment it is only
implemented for DEFAULT, not for GENERATED ALWAYS AS, thus the
VCOL_NEXTVAL hack.
select_union_direct::send_data() only sends a record when
the LIMIT ... OFFSET clause of the individual select won't skip it.
Thus, select_union_direct::send_data() should not do any actions
related to sending a record if the offset of a select isn't
reached yet.
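A sketch of the affected pattern (hypothetical tables):

  (SELECT a FROM t1 ORDER BY a LIMIT 2 OFFSET 1)
  UNION ALL
  (SELECT a FROM t2);
  -- the first row of the first select is skipped by OFFSET and must
  -- not be acted upon in send_data()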
Like in MDEV-16110 we must release items allocated on thd->mem_root by
reopening the table.
MDEV-16290 relocated MDEV-16110 fix in 10.5 so it works for MDEV-28576
as well. 10.3 without MDEV-16290 now duplicates this fix.
The change from MDEV-29465 exposed a flaw in replace_column_table
where again we were not properly updating the column-level bits.
replace_table_table was changed in MDEV-29465 to properly update
grant_table->init_cols; however, replace_column_table still only
modified grant_column->rights when the GRANT_COLUMN already existed.
This led to a mismatch between GRANT_COLUMN::init_rights and
GRANT_COLUMN::rights, *if* the GRANT_COLUMN already existed.
As an example:
GRANT SELECT (col1) ...
Here:
For col1
GRANT_COLUMN::init_rights and GRANT_COLUMN::rights are set to 1 (SELECT) in
replace_column_table.
GRANT INSERT (col1) ...
Here, without this patch GRANT_COLUMN::init_rights is still 1 and
GRANT_COLUMN::rights is 3 (SELECT_PRIV | INSERT_PRIV)
Finally, if before this patch, one does:
REVOKE SELECT (col1) ...
replace_table_table will see that init_rights loses bit 1 thus it
considers there are no more rights granted on that particular table.
This prompts the whole GRANT_TABLE to be removed via the first revoke,
when the GRANT_COLUMN corresponding to it should still have init_rights == 2.
By also updating replace_column_table to keep init_rights in sync
properly, the issue is resolved.
Reviewed by <serg@mariadb.com>
Test MDEV-26575 fails when it runs after MDEV-25389. This is because
the latter simulates a failure while an applier thread is
created in `start_wsrep_THD()`. The failure was not handled correctly
and would not clean up the created THD from the global
`server_threads`. A subsequent shutdown would hang and eventually fail
trying to close this THD.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Fix `wsrep_table_accessible_when_detached()` so that commands that
access no tables are rejected while a node is disconnected from a
cluster.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This is a DELETE-only case. Normally this statement doesn't make
inserts, but DELETE ... FOR PORTION changes that. UPDATE and INSERT
initialize autoinc by calling handler::info(HA_STATUS_AUTO). Also,
MyISAM and InnoDB can lazily initialize it in their update_create_info
overrides.
The solution is to initialize autoinc during delete preparation,
if a period (DELETE FOR PORTION) is specified.
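A sketch of the scenario (hypothetical table and period names):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY,
                   s DATE, e DATE,
                   PERIOD FOR apptime(s, e));
  DELETE FROM t1 FOR PORTION OF apptime
    FROM '2000-01-01' TO '2000-02-01';
  -- rows overlapping the portion are split, i.e. new rows are
  -- inserted, so autoinc must already be initialized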
The initial work was done by Kento Takeuchi in his PR #2048;
however, this commit also holds a few technical modifications by
Nikita Malyavin.
Virtual column values are updated inside the handler in reading
commands, like ha_index_next, etc. This was missing for ha_ft_read.
handler::ha_ft_read(): add a table->update_virtual_fields() call.
This patch adds the correct setting of the "--tls-version" and
"--ssl-verify-server-cert" options in the client-side utilities
such as mysqltest, mysqlcheck and mysqlslap, as well as the correct
setting of the "--ssl-crl" option when executing queries on the
slave side, and also the correct option codes in the "sslopts-logopts.h"
file (in the latter case, incorrect values are not a problem right
now, but may cause subtle test failures in the future, if the option
handling code changes).
This patch adds the correct setting of the "--ssl-verify-server-cert"
option in the client-side utilities such as mysqlcheck and mysqlslap,
as well as the correct setting of the "--ssl-crl" option when executing
queries on the slave side, and also adds the correct option codes in
the "sslopts-logopts.h" file (in the latter case, incorrect values
are not a problem right now, but may cause subtle test failures in
the future, if the option handling code changes).
Because of the default warning level, aborted unauthenticated
connections end up in the error log. These errors frequently occur
in production environments because cancelled connections happen
all the time when web pages are shut down.
Rather than flood our users' error logs with these ordinary
messages, let's push them down to the warning level at the
log-warnings=4 level.
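Users who still want to see these messages can raise the level, e.g.:
  SET GLOBAL log_warnings = 4;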
Concept approved by Monty.
Fixing a few problems revealed by UBSAN in type_float.test
- multiplication overflow in dtoa.c
- uninitialized Field::geom_type (and Field::srid as well)
- Wrong call-back function types used in combination with SHOW_FUNC.
Changes in the mysql_show_var_func data type definition were not
properly addressed all around the code by the following commits:
b4ff64568c18feb62fee0ee879ff8a
Adding a helper SHOW_FUNC_ENTRY() function and replacing
all mysql_show_var_func declarations using SHOW_FUNC
to SHOW_FUNC_ENTRY, to catch mysql_show_var_func in the future
at compilation time.
The issue is that record_should_be_deleted() returns true in
mysql_delete() even if a sub-select with a join gets an error from the
storage engine when a DELETE FROM ... WHERE ... IN (SELECT ...)
statement is executed.
The same is true for mysql_update(), where select->skip_record() returns
true even if a sub-select with a join gets an error from the storage
engine.
In the test case, if the sub-select is chosen as the deadlock victim, the
whole transaction is rolled back during sub-select execution, but
mysql_delete()/mysql_update() continues transaction execution and invokes
table->delete_row(), as record_should_be_deleted() wrongly returns true
in mysql_delete(), and table->update_row(), as select->skip_record(thd)
wrongly returns 1 for mysql_update().
record_should_be_deleted() wrongly returns true because thd->is_error()
returns false in SQL_SELECT::skip_record() invoked from
record_should_be_deleted().
It's supposed that THD error should be set in rr_handle_error() called
from rr_sequential() during sub-select JOIN::exec_inner() execution.
But rr_handle_error() does not set THD error because
READ_RECORD::print_error is not set in JOIN_TAB::read_record.
READ_RECORD::print_error should be initialized in
init_read_record()/init_read_record_idx(). But make_join_readinfo() does
not invoke init_read_record()/init_read_record_idx() for
JOIN_TAB::read_record.
The fix is to set JOIN_TAB::read_record.print_error in
make_join_readinfo(), i.e. in the same place where
JOIN_TAB::read_record.table is set.
Reviewed by Sergey Petrunya.
Problem:
We are able to insert duplicate values into the table because
cmp_binary_offset is not able to differentiate between NULL and an
empty string. So check_duplicate_long_entry_key is never called and we
don't check for duplicates.
Solution:
Added an if condition with is_null() on the field, which can
differentiate between NULL and an empty string.
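A minimal illustration (hypothetical table), assuming a hash-based long
unique key on a TEXT column:
  CREATE TABLE t1 (a TEXT, UNIQUE KEY (a));
  INSERT INTO t1 VALUES (NULL), (NULL);  -- OK: NULLs are never duplicates
  INSERT INTO t1 VALUES ('');
  INSERT INTO t1 VALUES ('');  -- must fail with a duplicate-key error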
when assigning the cached item to the Item_cache for the first time
make sure to use Item_cache::setup(), not Item_cache::store().
Because the former copies the metadata (and allocates memory, in case
of Item_cache_row), and Item_cache::decimal must be set for
comparisons to work correctly.
Deallocation of TABLE_LIST::dt_handler and TABLE_LIST::pushdown_derived
was performed in multiple places in the code. This not only made the code
more difficult to maintain but also led to memory leaks and
ASAN heap-use-after-free errors.
This commit moves deallocation of TABLE_LIST::dt_handler and
TABLE_LIST::pushdown_derived to a single point: JOIN::cleanup().
Per the call to my_set_max_open_files() 3 lines earlier, we attempt
to set the nofile (number of open files) rlimit to max_open_files.
We should use this value in the warning, because wanted_files may not
be that number.
When a range rowid filter was used with an index ref access the cost of
accessing the index entries for the records rejected by the filter was not
taken into account. For a ref access by an index with big average number
of records per key this led to poor execution plans if selectivity of the
used filter was high.
The patch resolves this problem. It also introduces a minor optimization
that skips look-ups into a filter that turns out to be empty.
With this patch the output of ANALYZE stmt reports the number of look-ups
into used rowid filters.
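For example (hypothetical query; the exact shape of the JSON output is
not reproduced here):
  ANALYZE FORMAT=JSON
  SELECT * FROM t1 JOIN t2 ON t2.key1 = t1.a
  WHERE t2.key2 BETWEEN 1 AND 100;
  -- the rowid filter node of the plan now reports how many look-ups
  -- were made into the filter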
The patch also back-ports from 10.5 the code that properly sets the field
TABLE::file::table for opened temporary tables.
The test cases that were supposed to use rowid filters have been adjusted
in order to use similar execution plans after this fix.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The ALTER related code cannot do at the same time both:
- modify partitions
- change column data types
Explicit changing of a column data type together with a partition change is
prohibited by the parser, so this is not allowed and returns a syntax error:
ALTER TABLE t MODIFY ts BIGINT, DROP PARTITION p1;
This fix additionally disables implicit data type upgrade
(e.g. from "MariaDB 5.3 TIME" to "MySQL 5.6 TIME", or the other way
around according to the current mysql56_temporal_format) in case of
an ALTER modifying partitions, e.g.:
ALTER TABLE t DROP PARTITION p1;
In such commands now only the partition change happens, while
the data types stay unchanged.
One can additionally run:
ALTER TABLE t FORCE;
either before or after the ALTER modifying partitions to
upgrade data types according to mysql56_temporal_format.
Abort startup, if SSL setup fails.
Also, for the server always check that certificate matches private key
(even if ssl_cert is not set, OpenSSL will try to use default one)
Read the version of the view share when we read the definition, to prevent
simultaneous access to a view table SHARE (and so its MEM_ROOT)
from different threads.
OpenSSL handles memory management using **OPENSSL_xxx** API[^1]. For
allocation, there is `OPENSSL_malloc`. To free it, `OPENSSL_free` should
be called.
We've been lucky that OpenSSL's (and wolfSSL's) implementation allowed the
usage of `free` for memory cleanup. However, other OpenSSL forks, such
as AWS-LC[^2], are not as forgiving. This will cause a server crash.
Test case `openssl_1` provides good coverage for this issue. If a user
is created using:
`grant select on test.* to user1@localhost require SUBJECT "...";`
user1 will crash the instance during connection under AWS-LC.
There have been numerous OpenSSL forks[^3]. Due to FIPS[^4] and other
related regulatory requirements, MariaDB will be built using them. This
fix will increase MariaDB's adaptability by using more compliant and
generally accepted API.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
[^1]: https://www.openssl.org/docs/man1.1.1/man3/OPENSSL_malloc.html
[^2]: https://github.com/awslabs/aws-lc
[^3]: https://en.wikipedia.org/wiki/OpenSSL#Forks
[^4]: https://en.wikipedia.org/wiki/FIPS_140-2
st_select_lex::init_query is called during the execution of EXECUTE
IMMEDIATE 'alter table ...', so reset the initialization at the
same point where we set join= 0.
and also MDEV-25564, MDEV-18157.
Attempt to produce EXPLAIN output caused a crash in
Explain_node::print_explain_for_children. The cause of this was that an
Explain_node (actually a derived) had a link to child select#N, but
there was no query plan present for select#N.
The query plan wasn't present because the subquery was eliminated.
- Either it was a degenerate subquery like "(SELECT 1)" in MDEV-25564.
- Or it was a subquery in a UNION subquery's ORDER BY clause:
col IN (SELECT ... UNION
SELECT ... ORDER BY (SELECT FROM t1))
In such cases, legacy code structure in subquery/union processing code(*)
makes it hard to detect that the subquery was eliminated, so we end up
with EXPLAIN data structures (Explain_node::children) having dangling
links to child subqueries.
Do make the checks and don't follow the dangling links.
(In an ideal world, we should not have these dangling links. But fixing
the code (*) would carry a high risk for the stable versions.)
The population of default values in INSERT SELECT was being
performed twice. With sequences, this resulted in every
second sequence value being used.
With INSERT SELECT we remove the second invocation of
table->update_default_fields(). This was already performed
in store_values() invoking fill_record_n_invoke_before_triggers(),
which invoked update_default_fields() previously.
We do need to return an error on duplicate values, so
::store_values is extended to take the ignore option.
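A minimal sketch of the affected scenario (hypothetical names):
  CREATE SEQUENCE s1;
  CREATE TABLE t (a BIGINT DEFAULT NEXTVAL(s1), b INT);
  INSERT INTO t (b) SELECT 10 UNION ALL SELECT 20 UNION ALL SELECT 30;
  SELECT a FROM t;
  -- expected: 1, 2, 3; before the fix each row consumed two sequence
  -- values (e.g. 2, 4, 6) because defaults were populated twice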
=========== Problem =============
- `show columns` does not work for temporary tables, even though the user
has a sufficient privilege: `create temporary tables`.
=========== Solution =============
- Append `TMP_TABLE_ACLS` privilege when running `show columns` for temp
tables.
- Additionally, run `check_access()` for the database only once, not for
each field
=========== Additionally =============
- Update comments for function `check_table_access` arguments
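A minimal illustration of the fixed behavior (hypothetical names):
  CREATE TEMPORARY TABLE tmp1 (a INT, b VARCHAR(10));
  SHOW COLUMNS FROM tmp1;
  -- must succeed for a user having only the CREATE TEMPORARY TABLES
  -- privilege on the database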
Reviewed by: <vicentiu@mariadb.org>
For some queries that involve tables with different but convertible
character sets for columns taking part in the query, repeatable
execution of such queries in PS mode or as part of a stored routine
would result in server abnormal termination.
For example,
CREATE TABLE t1 (a2 varchar(10));
CREATE TABLE t2 (u1 varchar(10) CHARACTER SET utf8);
CREATE TABLE t3 (u2 varchar(10) CHARACTER SET utf8);
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = t1.a2))";
EXECUTE stmt;
EXECUTE stmt; <== Running this prepared statement the second time
results in server crash.
The reason for the server crash is that an instance of the class
Item_func_conv_charset, created for conversion of a column
from one character set to another, is allocated on the execution
memory root, but a pointer to this instance is stored in an item
placed on the prepared statement memory root. Below is the call trace to
the place where an instance of the class Item_func_conv_charset
is created.
setup_conds
Item_func::fix_fields
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func_or_sum::agg_arg_charsets_for_comparison
Item_func_or_sum::agg_arg_charsets
Item_func_or_sum::agg_item_set_converter
Item::safe_charset_converter
And the following trace shows the place where a pointer to
the instance of the class Item_func_conv_charset is passed
to the class Item_func_eq, that is created on a memory root of
the prepared statement.
Prepared_statement::execute
mysql_execute_command
execute_sqlcom_select
handle_select
mysql_select
JOIN::optimize
JOIN::optimize_inner
convert_join_subqueries_to_semijoins
convert_subq_to_sj
To fix the issue, switch to the Prepared Statement memory root
before calling the method Item_func::setup_args_and_comparator
in order to place any created Items on permanent memory root.
It may seem that such an approach would result in a memory
leak in case the parameter marker '?' is used in the query,
as in the following example:
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = ?))";
EXECUTE stmt USING convert('A' using latin1);
but it wouldn't, since in such cases any parameter marker
is treated as a constant and no subquery-to-semijoin optimization
is performed.
See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario INSERT chose to check if delete unmarking is available for
a just-deleted record. To build an update vector, it needed to calculate
the vcols as well. Since this INSERT was not IGNORE-flagged, recalculation
failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for the delete-unmarked insert.
As of now InnoDB does not store trx_id for each record in a secondary
index. The idea behind this is the following: let us store only a
per-page max_trx_id, and delete-mark the records when they are
deleted/updated.
When a read starts, it remembers the lowest id of the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When the page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not delete-marked,
then this page is just safe to read as is. Else, an access to the
clustered index could be needed. See the page_get_max_trx_id call in
row_search_mvcc, and the corresponding switch
(row_search_idx_cond_check(...)) below.
Virtual columns are required to be updated in case the record was
delete-marked. The motivation behind this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
This was basically a description of why virtual column computation can
normally happen during SELECT and, generally, during a vcol index access.
Sometimes stats tables are updated by InnoDB. This starts a new
transaction, and it can happen that it hasn't finished by the moment of
SELECT execution, forcing virtual column recomputation. If the result was
something that normally outputs a warning, like division by zero, then
it could be output in a racy manner.
The solution is to suppress the warnings when a column is computed
for the described purpose.
An ignore_warnings argument is added to innobase_get_computed_value.
Currently, it is only true for a call from
row_sel_sec_rec_is_for_clust_rec.
MDEV-19243 introduced a regression on Windows.
In the (supposedly rare) case where the environment variable TZ was set,
@@system_time_zone no longer derives from TZ. Instead, it incorrectly
refers to the system default time zone, even though UTC time conversion
takes TZ into account.
The fix is to restore the TZ-aware handling (the timezone name derives
from tzname) if TZ is set.
Adding debug output for key and keyseg flags at ha_myisam::open() time.
So now there are three points of debug output:
1. In the very end of mysql_prepare_create_table()
2. In ha_myisam::create(), after the table2myisam() call
3. In ha_myisam::open(), after the mi_open() call
mi_create(), which is called between 2 and 3, modifies flags for
some data types, so the output in 2 and 3 is different.
Fix the error message to contain the correct errno. This commit was
tested interactively because mtr will notice if you provide a
wrong wsrep_provider in the config, and you may not change
wsrep_provider dynamically.
If repl.max_ws_size is set too low, the following CREATE TABLE could fail
during commit. In this case wsrep_commit_empty should allow rolling
it back if the provider state is s_aborted.
Furthermore, the original ER_ERROR_DURING_COMMIT does not really tell
the user anything clear. Therefore, this commit adds a new error,
ER_TOO_BIG_WRITESET. This will change some test cases' output.
The problem was that during ALTER TABLE execution the variables were set
to 1 even when wsrep_auto_increment_control is OFF. We should
set them only when wsrep_auto_increment_control is ON.
In the test, the user has set WSREP_ON=OFF; this causes streaming
replication recovery to fail, and this caused a call to unireg_abort().
However, this call is not necessary and we can let the transaction fail.
Naturally, if a real user does this, they need to bootstrap their cluster.
When, for example, a table is partitioned by HASH(a) and we rename column
`a' to `b', the partitioning filter stays unchanged: HASH(a). That's the
wrong behavior.
The patch updates the partitioning filter in accordance with the new
column names. That includes the partition/subpartition expression and
the partition/subpartition field list.
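A minimal illustration (hypothetical table):
  CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 4;
  ALTER TABLE t1 CHANGE a b INT;
  SHOW CREATE TABLE t1;  -- must now show PARTITION BY HASH (`b`)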
For "const char *" replace() and after() accepted const as "T *" and
passed forward "void *". This cannot be cast implicitly, so we better
use "const void *" instead of "void *" in the input interface. This
way we avoid problems with using List for any const type.
The problem is that if the table definition cache (TDC) is full of real
tables which are in the table cache, a view definition cannot stay there
and so will be pushed out by its own underlying tables.
In the situation above, the old mechanism of detecting a matching
definition in a PS and the current version always requires a reprepare
and so prevents executing the PS.
One workaround is to increase the TDC; the other is to improve the
version check for views/triggers (which is done here). Now in suspicious
cases we check:
- the timestamp (microseconds) of the view, to be sure that the version
really has changed;
- the time (microseconds) of creation of a trigger related to the time
(microseconds) of statement preparation.
- Added missing information about the database of the corresponding table
for various types of commands
- Fixed some typos
- Reviewed by: <vicentiu@mariadb.org>
This patch resolves the problem of improper name resolution of table
references to embedded CTEs for some queries. This improper binding could
lead to
- infinite sequence of calls of recursive functions
- crashes due to resolution of null pointers
- wrong result sets returned by queries
- bogus error messages
If the definition of a CTE contains with clauses then such a CTE is called
an embedding CTE, while the CTEs from the with clauses are called embedded
CTEs.
If a table reference used in the definition of an embedded CTE cannot be
resolved within the unit that contains this reference, it still may be
resolved against a CTE definition from the with clause of one of the
embedding CTEs.
A table reference can be resolved against a CTE definition if it is used
in the scope of this definition and it refers to the name of the CTE.
Table reference t is in the scope of the CTE definition of CTE cte if
- the definition of cte is an element of a with clause declared as
RECURSIVE and the reference t belongs either to the unit to which
this with clause is attached or to one of the elements of this clause
- the definition of cte is an element of a with clause without RECURSIVE
specifier and the reference t belongs either to the unit to which this
with clause is attached or to one of the elements from this clause that
are placed before the definition of cte.
If a table reference can be resolved against several CTE definitions then
it is bound to the most embedded one.
The code before this patch did not always resolve table references used
in embedded CTEs according to the above rules.
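To illustrate the terminology (hypothetical table t1):
  WITH cte1 AS (                       -- embedding CTE
    WITH cte2 AS (SELECT a FROM t1)    -- embedded CTE
    SELECT * FROM cte2
  )
  SELECT * FROM cte1;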
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Continue with similar changes as done in 19af1890 to replace sprintf(buf, ...)
with snprintf(buf, sizeof(buf), ...), specifically in the "easy" cases where buf
is allocated with a size known at compile time.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
Nowadays a subquery in a UNION's ORDER BY is placed correctly in the fake
select; the only problem was an incorrect Name_resolution_context, which
is fixed by this patch in parsing, so we do not need scanning/resetting
of the ORDER BY of a union.
There are separate flags DBUG_OFF for disabling the DBUG facility
and ENABLED_DEBUG_SYNC for enabling the DEBUG_SYNC facility.
Let us allow debug builds without DEBUG_SYNC.
Note: For CMAKE_BUILD_TYPE=Debug, CMakeLists.txt will continue to
define ENABLED_DEBUG_SYNC.
In commit 28325b0863
a compile-time option was introduced to disable the macros
DBUG_ENTER and DBUG_RETURN or DBUG_VOID_RETURN.
The parameter name WITH_DBUG_TRACE would hint that it also
covers DBUG_PRINT statements. Let us do that: WITH_DBUG_TRACE=OFF
shall disable DBUG_PRINT() as well.
A few InnoDB recovery tests used to check that some output from
DBUG_PRINT("ib_log", ...) is present. We can live without those checks.
Reviewed by: Vladislav Vaintroub
The fix for MDEV-29352 was pushed to 10.6+ but the code causing the
bug is old and the bug is unlikely to be a recent regression in 10.6.
So, we apply the fix also to older versions, 10.3-10.5.
The original commit message:
MDEV-29352 SIGSEGV's in strlen and unknown location on optimized builds at SHUTDOWN
When the UDF creation frails to write the newly created UDF into
the related system table, the UDF is still created in memory.
However, as it is now, the related DLL is unloaded in this case right
in the mysql_create_function. And failure happens when the UDF handle
is freed and tries to unload the respective DLL which is still unloaded.
check_audit_mask(mysql_global_audit_mask, event_class_mask) is tested in
mysql_audit_general_log() and then an assert in
mysql_audit_acquire_plugins() verifies that the condition still holds.
But this code path is not protected by LOCK_audit_mask, so
mysql_global_audit_mask can change its value between the if() and the
assert. That is, the assert is invalid and will fire if the
audit plugin is unloaded concurrently with mysql_audit_general_log().
Nothing bad will happen in this case though, we'll just do a useless
loop over all remaining installed audit plugins.
That is, the fix is simply to remove the assert.
- Commit c8948b0d0d introduced `get_one_variable()`; update the missing
argument.
- Remove the caller's setting of an empty string in `rpl_filter`, since
the underlying functions will do the same (introduced in commit
9584cbe7fc).
Reviewed by: <brandon.nesterenko@mariadb.com>