When executing a statement of the form
SELECT AGGR_FN(DISTINCT c1, c2,..,cn) FROM t1,
where AGGR_FN is an aggregate function such as COUNT(), AVG() or SUM(),
and a unique index exists on table t1 covering some or all of the
columns (c1, c2,..,cn), the retrieved values are inherently unique.
Consequently, the need for de-duplication imposed by the DISTINCT
clause can be eliminated, leading to optimization of aggregation
operations.
This optimization applies under the following conditions:
- only one table involved in the join (not counting const tables)
- some arguments of the aggregate function are fields
(not functions/subqueries)
This optimization extends to queries of the form
SELECT AGGR_FN(DISTINCT c1, c2,..,cn) FROM t1 GROUP BY cx,..cy
when a unique index covers some or all of the columns
(c1, c2,..cn, cx,..cy)
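For illustration, consider queries of the following shape (table, column
and index names are assumptions, not part of the patch):
  CREATE TABLE t1 (c1 INT NOT NULL, c2 INT NOT NULL, cx INT NOT NULL,
                   UNIQUE KEY (c1, c2), UNIQUE KEY (c1, cx));
  -- (c1, c2) pairs are unique per row, so DISTINCT can be eliminated
  SELECT COUNT(DISTINCT c1, c2) FROM t1;
  -- within each cx group, c1 values are unique because of UNIQUE(c1, cx)
  SELECT SUM(DISTINCT c1) FROM t1 GROUP BY cx;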
MDEV-33308 CHECK TABLE is modifying .frm file even if --read-only
As noted in commit d0ef1aaf61,
MySQL as well as older versions of MariaDB server would, during
ALTER TABLE ... IMPORT TABLESPACE, write bogus values to the
PAGE_MAX_TRX_ID field in pages of the clustered index, instead of
letting that field remain 0.
In commit 8777458a6e this field
was repurposed for PAGE_ROOT_AUTO_INC in the clustered index root page.
To avoid trouble when upgrading from MySQL or older versions of MariaDB,
we will try to detect and correct bogus values of PAGE_ROOT_AUTO_INC
when opening a table for the first time from the SQL layer.
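For reference, the import flow during which the bogus values could have
been written is the usual transportable-tablespace sequence (a generic
sketch; the table name is an assumption):
  -- on the destination server: create the table, then
  ALTER TABLE t1 DISCARD TABLESPACE;
  -- on the source server:
  FLUSH TABLES t1 FOR EXPORT;
  -- copy t1.ibd (and t1.cfg) into the destination datadir, then
  UNLOCK TABLES;
  -- on the destination server:
  ALTER TABLE t1 IMPORT TABLESPACE;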
btr_read_autoinc_with_fallback(): Add the parameters mysql_version and max
to indicate the TABLE_SHARE::mysql_version of the .frm file and the
maximum value allowed for the type of the AUTO_INCREMENT column.
In case the table was originally created in MySQL or an older version of
MariaDB, read also the maximum value of the AUTO_INCREMENT column from
the table and reset the PAGE_ROOT_AUTO_INC if it is above the limit.
dict_table_t::get_index(const dict_col_t &) const: Find an index that
starts with the specified column.
ha_innobase::check_for_upgrade(): Return HA_ADMIN_FAILED if InnoDB
needs upgrading but is in read-only mode. In this way, the call to
update_frm_version() will be skipped.
row_import_autoinc(): Adjust the AUTO_INCREMENT column at the end of
ALTER TABLE...IMPORT TABLESPACE. This refinement was suggested by
Debarun Banerjee.
The changes outside InnoDB were developed by Michael 'Monty' Widenius:
Added print_check_msg() service for easy reporting of check/repair messages
in ENGINE=Aria and ENGINE=InnoDB.
Fixed CHECK TABLE so that it does not update the .frm file under --read-only.
Added 'handler_flags' to HA_CHECK_OPT as a way for storage engines to
store state from handler::check_for_upgrade().
Reviewed by: Debarun Banerjee
This bug led to wrong result sets returned by the second execution of
prepared statements from selects using mergeable derived tables pushed
into an external engine. Such derived tables are always materialized. The
decision that they have to be materialized is taken late, in the function
mysql_derived_optimized(). For regular derived tables this decision is
usually taken at the prepare phase. However, in some cases this decision
is made for some derived tables in mysql_derived_optimized() too. It can be
seen in the code of mysql_derived_fill() that for such a derived table it's
critical to change its translation table to tune it to the fields of the
temporary table used for materialization of the derived table and this
must be done after each refill of the derived table. The same actions are
needed for derived tables pushed into external engines.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Variant#3: moved the logic out of create_key_parts_for_pseudo_indexes
Range Analyzer (get_mm_tree functions) can only process up to MAX_KEY=64
indexes. The problem was that calculate_cond_selectivity_for_table used
it to estimate selectivities for columns, and since a table can
have > MAX_KEY columns, would invoke Range Analyzer with more than MAX_KEY
"pseudo-indexes".
Fixed by making calculate_cond_selectivity_for_table() run the Range
Analyzer with at most MAX_KEY pseudo-indexes. If there are more
columns to process, the Range Analyzer is invoked multiple times.
Also made this change:
- param.real_keynr[0]= 0;
+ MEM_UNDEFINED(&param.real_keynr, sizeof(param.real_keynr));
The Range Analyzer should have no use for real_keynr when it is run with
pseudo-indexes.
According to the standard, the autoincrement column (i.e. *identity
column*) should be advanced each insert implicitly made by
UPDATE/DELETE ... FOR PORTION.
This is very inconvenient in several notable cases. Consider a
WITHOUT OVERLAPS key with an autoinc column:
id int auto_increment, unique(id, p without overlaps)
An update or delete with FOR PORTION creates the expectation that id will
remain unchanged in such a case.
The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, but the
generation rules differ in many ways. For example, there is also the
notion of an autoincrement index, which is bound to the autoincrement field.
We will define our own generation rule for the PORTION OF operations
involving AUTO_INCREMENT:
* If the autoincrement index contains a WITHOUT OVERLAPS specification,
then a new value should not be generated; otherwise it should.
Apart from WITHOUT OVERLAPS there is also another notable case, referred
to by the reporter - a unique key that has an autoincrement column and a
field from the period specification:
id int auto_increment, unique(id, s), period for p(s, e)
For this case, no exception is made, and the autoincrement rules will
proceed according to the standard (i.e. the value will be advanced on
implicit inserts).
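A sketch of the two cases, following the rule above (the schema is
hypothetical):
  -- case 1: WITHOUT OVERLAPS on the autoincrement index;
  -- id is not re-generated by the implicit inserts
  CREATE TABLE t1 (id INT AUTO_INCREMENT, a INT, s DATE, e DATE,
                   PERIOD FOR p(s, e),
                   UNIQUE(id, p WITHOUT OVERLAPS));
  UPDATE t1 FOR PORTION OF p FROM '2000-02-01' TO '2000-03-01' SET a = 1;
  -- case 2: a plain unique key over (id, s);
  -- id is advanced on the implicit inserts, as the standard requires
  CREATE TABLE t2 (id INT AUTO_INCREMENT, a INT, s DATE, e DATE,
                   PERIOD FOR p(s, e),
                   UNIQUE(id, s));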
rpl.rpl_seconds_behind_master_spike uses the DEBUG_SYNC mechanism to
count how many format descriptor events (FDEs) have been executed,
to attempt to pause on a specific relay log FDE after executing
transactions. However, depending on when the IO thread is stopped,
it can send an extra FDE before sending the transactions, forcing
the test to pause before executing any transactions, which results in
an attempt to read (for the COUNT) from a table that does not yet exist.
This patch fixes this by no longer counting FDEs, but rather by
programmatically waiting until the SQL thread has executed the
transaction and then automatically activating the DEBUG_SYNC point
to trigger at the next relay log FDE.
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT
This patch changes the way of detecting whether this optimization
can be applied when the table has long (hash-based) unique
(i.e. UNIQUE..USING HASH) constraints.
Problem:
The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.
Fix:
- If the current key is a long unique, it now works as follows:
UPDATE can be done if the current long unique is the last
long unique, and there are no in-engine (normal) uniques.
- For in-engine uniques nothing changes, it still works as before:
If the current key is an in-engine (normal) unique:
UPDATE can be done if it is the last normal unique.
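A hypothetical table shape where the old condition could go wrong:
  CREATE TABLE t1 (a INT, b TEXT, c TEXT,
                   UNIQUE KEY (a),              -- in-engine unique
                   UNIQUE KEY (b) USING HASH,   -- long unique
                   UNIQUE KEY (c) USING HASH);  -- long unique
  -- on a duplicate in the hash unique over b, the UPDATE shortcut must not
  -- be used: the hash unique over c and the in-engine unique over a may
  -- still reject the new row, so DELETE+INSERT is required
  REPLACE INTO t1 VALUES (1, 'x', 'y');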
Turning REGEXP_REPLACE into two schema-qualified functions:
- mariadb_schema.regexp_replace()
- oracle_schema.regexp_replace()
Fixing oracle_schema.regexp_replace(subj,pattern,replacement) to treat
NULL in "replacement" as an empty string.
Adding new classes implementing oracle_schema.regexp_replace():
- Item_func_regexp_replace_oracle
- Create_func_regexp_replace_oracle
Adding helper methods:
- String *Item::val_str_null_to_empty(String *to)
- String *Item::val_str_null_to_empty(String *to, bool null_to_empty)
and reusing these methods in both Item_func_replace and
Item_func_regexp_replace.
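For example (illustrative only):
  -- oracle_schema.regexp_replace(): NULL replacement is treated as '',
  -- so the result is 'bnn'
  SET sql_mode=ORACLE;
  SELECT REGEXP_REPLACE('banana', 'a', NULL);
  -- mariadb_schema.regexp_replace(): a NULL argument makes the result NULL
  SET sql_mode=DEFAULT;
  SELECT REGEXP_REPLACE('banana', 'a', NULL);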
use the original, not the truncated, field in the long unique prefix,
that is, in the hash(left(field, length)) expression, because MyISAM
CHECK/REPAIR in compute_vcols() moves table->field but not the prefix
fields from keyparts.
Also, implement Field_string::cmp_prefix() so that prefix comparison
of CHAR columns works.
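A sketch of an affected table (names are illustrative):
  CREATE TABLE t1 (c CHAR(100), UNIQUE KEY (c(10)) USING HASH) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (REPEAT('a', 100));
  -- the hash must be computed over LEFT(c, 10) of the original field,
  -- not over the field copy moved by compute_vcols()
  CHECK TABLE t1;
  REPAIR TABLE t1;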
use thd->start_time for the "start_time" column of the slow_log table.
"current_time" here refers to the current_time() function return value
not to the actual *current* time.
also fixes
MDEV-33267 User with minimal permissions can intentionally corrupt mysql.slow_log table
Item_func_quote did not calculate its max_length correctly for nullable
arguments.
Fix:
If the argument is nullable, reserve at least 4 characters
so that the string "NULL" fits.
Statements affected by this bug are all SQL statements that
1) are prefixed with "EXPLAIN"
2) have a lower-level join structure created for a union subquery.
A bug in select_describe() passed an incorrect "result" object to
mysql_explain_union(), resulting in unpredictable behaviour and
out of context calls.
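A statement of the affected shape looks like this (hypothetical tables):
  EXPLAIN
  SELECT * FROM t1
  WHERE t1.a IN (SELECT b FROM t2 UNION SELECT c FROM t3);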
Reviewed by: Oleksandr Byelkin, sanja@mariadb.com
An existing binlog checksum can be overwritten with 0 when writing a NULL
payload while using Zlib for the computation. That is, calling into
Zlib's crc32 with empty data initializes an incremental CRC
computation to 0.
This patch changes the Log_event_writer::write_data() to exit
immediately if there is nothing to write, thereby bypassing the
checksum computation. This follows the pattern of
Log_event_writer::encrypt_and_write(), which also exits immediately
if there is no data to write.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This patch corrects the fix for MDEV-32569. The latter did not take into
account the fact that not every statement uses the SELECT_LEX structure. In
particular, CALL statements do not use such a structure. However, the
parameter passed to the stored procedure used in such a statement may
require an invocation of Type_std_attributes::agg_item_set_converter().
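A CALL of this kind could look as follows (purely illustrative, not the
original test case; the argument expression mixes character sets and
therefore needs converter items):
  CREATE PROCEDURE p1(par TEXT CHARACTER SET utf8mb4)
    SELECT par;
  CALL p1(CONCAT(_latin1 'a', _utf8mb4 'b'));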
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Fixed a typo in mysql_admin_table which caused
close_unused_temporary_table_instances to always be called for the first
table instead of the current table.
Added an assertion that close_unused_temporary_table_instances does not
remove all instances of a user-created temporary table.
initialize THD::rand in THD::init() not in THD::THD(),
because the former is also called when a THD is reused -
in COM_CHANGE_USER and in taking a THD from the cache.
Also, use the current cycle timer for more unpredictability.
Item_float::neg() did not preserve the "presentation" from "this".
So
CAST(-1e0 AS UNSIGNED) -- cast from double to unsigned
changes its meaning to:
CAST(-1 AS UNSIGNED) -- cast signed to unsigned
Fixing Item_float::neg() to construct the new value for
Item_float::presentation as follows:
- if the old value starts with minus, then the minus is truncated:
'-2e0' -> '2e0'
- otherwise, a minus sign is prepended to the old value:
'1e0' -> '-1e0'
The bitmap is temporarily flipped to ~0 for the sake of checking all
fields. It needs to be restored because it will be reused in the second
and subsequent executions of the prepared statement.
In commit b4ff64568c the
signature of mysql_show_var_func was changed, but not all functions
of that type were adjusted.
When the server is configured with `cmake -DWITH_ASAN=ON` and
compiled with clang, runtime errors would be flagged for invoking
functions through an incompatible function pointer.
Reviewed by: Michael 'Monty' Widenius
If a query contained a CTE whose name coincided with the name of one of
the base tables used in the specification of the CTE and the query had at
least two references to this CTE in the specifications of other CTEs then
processing of the query led to unlimited recursion that ultimately caused
a crash of the server.
Any secondary non-recursive reference to a CTE requires creation of a copy
of the CTE specification. All the references to CTEs in this copy must be
resolved. If the specification contains a reference to a base table whose
name coincides with the name of the CTE, then it must be ensured that
this reference can in no way be resolved against the name of the CTE.
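A query of the problematic shape (hypothetical; t1 is also a base table):
  WITH
    t1 AS (SELECT a FROM t1 WHERE a > 0),  -- CTE named after the base table it reads
    c1 AS (SELECT a FROM t1),              -- first reference to the CTE t1
    c2 AS (SELECT a FROM t1)               -- second reference forces a copy
  SELECT * FROM c1, c2;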
wsrep_plugin_init(), wsrep_plugin_deinit(): Remove these dummy functions
in order to fix an error that would be flagged by cmake -DWITH_UBSAN=ON
when using clang.
wsrep_show_ready(), wsrep_show_bf_aborts(): Correct the signature.
If a query has a HAVING clause that contains a predicate with a constant
IN subquery whose left part is in its turn a subquery, and the predicate is
subject to pushdown from HAVING to WHERE, then execution of the query could
cause a crash of the server.
The cause of the problem was the missing implementation of the walk()
method for the class Item_in_optimizer. As a result in some cases the left
operand of the Item_in_optimizer condition could be traversed twice by
the walk procedure. For many call-back functions used as an argument of
this procedure it does not matter. Yet it matters for the call-back
function cleanup_excluding_immutables_processor() used in pushdown of
predicates from HAVING to WHERE. If the processed item is marked with
the IMMUTABLE_FL flag then the processor just removes this flag, otherwise
it performs cleanup of the item making it unfixed. If an item is marked
with the IMMUTABLE_FL flag and is traversed with this processor twice, then
it becomes unfixed after the second traversal, even though the flag
indicates that the item should not be cleaned up.
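A query of the shape described above (hypothetical tables):
  SELECT a, COUNT(*) FROM t1
  GROUP BY a
  -- the predicate does not depend on the grouping, so it is pushed to WHERE;
  -- its left part is itself a subquery
  HAVING (SELECT b FROM t2 LIMIT 1) IN (SELECT b FROM t3);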
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
sp_cache erroneously looked up fully qualified SP names (e.g. `DB`.`SP`)
in a case-insensitive way. This was wrong, because only the "name"
part is always case insensitive, while the "db" part should be compared
according to lower_case_table_names (case sensitively for 0,
case insensitively for 1 and 2).
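For example, with lower_case_table_names=0 (illustrative names):
  CREATE DATABASE DB1;
  CREATE DATABASE db1;  -- a different database on case-sensitive systems
  CREATE PROCEDURE DB1.sp1() SELECT 1;
  CREATE PROCEDURE db1.sp1() SELECT 2;
  CALL DB1.sp1();  -- must not be served from the cache entry of db1.sp1
  CALL db1.sp1();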
Fix:
Adding a "casedn_name" parameter make_qname() to tell
if the name part should be lower cased:
`DB1`.`SP` -> "DB1.SP" (when casedn_name=false)
`DB1`.`SP` -> "DB1.sp" (when casedn_name=true)
and using make_qname() with casedn_name=true when creating
sp_cache hash lookup keys.
Details:
As a result, it now works as follows:
- sp_head::m_db is converted to lower case if lower_case_table_names>0
during the sp_name initialization phase. So when make_qname() is called,
sp_head::m_db is already normalized. There are no changes in here.
- The initialization phase of sp_head when creating sp_head::m_qname
now calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to sp_head::m_qname in lower case.
- sp_cache_lookup() now also calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to the temporary lookup key in lower case.
- sp_cache::m_hashtable now uses case sensitive comparison
Part#1 A non-functional change
Changing the signature of Identifier_chain2::make_qname() from
bool make_qname(MEM_ROOT *mem_root, LEX_CSTRING *dst) const;
to
LEX_CSTRING make_qname(MEM_ROOT *mem_root) const;
Now the result is returned as LEX_CSTRING from the function rather than
is passed as a parameter.
The return value {NULL,0} means "EOM".
This is a preparatory step to make it easier to fix and merge
MDEV-33019 The database part is not case sensitive in SP names
The original MDEV-31991 commit comment:
- Moving some of Database_qualified_name methods into a new class
Identifier_chain2.
- Changing the data type of the following variables from
Database_qualified_name to Identifier_chain2:
* q_pkg_proc in LEX::call_statement_start()
* q_pkg_func in LEX::make_item_func_call_generic()
Rationale:
The data type of Database_qualified_name::m_db will be changed
to Lex_ident_db soon. So Database_qualified_name won't be able
to store the `pkg.routine` part of `db.pkg.routine` any more,
because `pkg` must not depend on lower-case-table-names.
Attempting to set a SAVEPOINT when one of the involved storage engines
does not support savepoints raises an error and results in statement
rollback. If Galera is enabled with binlog emulation, the above
scenario was not handled correctly, and resulted in cluster wide
inconsistency.
The problem was in wsrep_register_binlog_handler(), which is called
towards the beginning of SAVEPOINT execution. This function is
supposed to mark the beginning of statement position in trx cache
through `set_prev_position()`. However, it did so only on condition
that `get_prev_position()` returns `MY_OFF_T_UNDEF`.
This before-statement position is typically reset to undefined at the
end of the statement in `binlog_commit()` / `binlog_rollback()`.
However that's not the case with Galera and binlog emulation, for
which binlog commit / rollback hooks are not called due to the
optimization that avoids internal 2PC (MDEV-16509).
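The failing scenario is roughly the following (the engines are
assumptions; any engine without savepoint support triggers it):
  START TRANSACTION;
  INSERT INTO t1 VALUES (1);  -- t1 uses InnoDB
  INSERT INTO t2 VALUES (1);  -- t2 uses an engine without savepoint support
  SAVEPOINT s1;               -- raises an error, statement is rolled back
  INSERT INTO t1 VALUES (2);
  COMMIT;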
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
This commit fixes a GTID inconsistency which was injected by mariabackup SST.
The donor node now writes a new info file, donor_galera_info, which is
streamed along with the mariabackup donation to the joiner node. The
donor_galera_info file contains both the GTID and the gtid domain_id, and
the joiner will use these to initialize the GTID state.
The commit adds a new mtr test case, galera_3nodes.galera_gtid_consistency,
which exercises potentially harmful mariabackup SST scenarios. The test
also includes a scenario with IST joining.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
This patch fixes cases where a transaction caused an empty writeset to be
replicated. This could happen when a transaction executes a statement that
initially manages to modify some data and therefore appends some keys for
certification. The statement is however rolled back at some later stage due
to an error (for example, a duplicate key error). After the statement
rollback the transaction is still alive but has no other changes. When
committing such a transaction, an empty writeset was replicated through
Galera.
The fix is to call into the commit hook only when the transaction has
appended one or more keys for certification *and* has some data in the
binlog cache to replicate. Otherwise, the commit is considered empty and
goes through the usual empty-commit path.
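For example (a sketch):
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  START TRANSACTION;
  INSERT INTO t1 VALUES (1), (1);  -- keys are appended for the first row,
                                   -- then ER_DUP_ENTRY rolls the statement back
  COMMIT;  -- the transaction has certification keys but nothing in the
           -- binlog cache; previously this replicated an empty writeset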
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The previous patch for MDEV-10653 changes the rpl_parallel::workers_idle()
function to use Relay_log_info::last_inuse_relaylog to check for idle
workers. But the code was missing a NULL check. Also, there was one place
during SQL slave thread start which was missing mutex synchronisation when
updating inuse_relaylog.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The error-injection inject_mdev8031 simulates a deadlock kill in a specific
place, by setting killed_for_retry to RETRY_KILL_KILLED directly. If a real
deadlock kill triggers at the same time, it is possible for the thread to
complete its transaction retry and set rgi_slave to NULL before the real
deadlock kill can complete in the background. This will cause a segfault
due to null-pointer access.
Fix by changing the error injection to do a real background deadlock kill,
which ensures that the thread will wait for any pending background kills to
complete.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Item::val_str() sets the Item::null_value flag, so call it before checking
the flag, not after.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Consider this query
SELECT t1.* FROM t1, (SELECT t2.b FROM t2 WHERE NOT EXISTS
(SELECT 1 FROM t3) GROUP BY b) sq where sq.b = t1.a;
If SELECT 1 FROM t3 is expensive, for example when t3 has more than
thd->variables.expensive_subquery_limit rows, its first evaluation is
deferred to mysql_derived_fill(). There it is noted that, in the above
case, NOT EXISTS (SELECT 1 FROM t3) is constant and false.
This causes the join variable zero_result_cause to be set to
"Impossible WHERE noticed after reading const tables" and the handler
for this join is never "opened" via handler::ha_open.
When mysql_derived_fill() is called for the next group of results, this
unopened handler is not taken into account.
reviewed by Igor Babaev (igor@mariadb.com)