make the TRANSACTIONAL table option behave similarly to other engine-defined
table options. If the engine doesn't support it:
* if specified explicitly in CREATE or ALTER - it's ER_UNKNOWN_OPTION,
raised as an error or a warning depending on sql_mode IGNORE_BAD_TABLE_OPTIONS
* in ALTER TABLE from an engine that supports it to an engine that
doesn't - silently preserved (no warning)
* it is commented out in SHOW CREATE unless IGNORE_BAD_TABLE_OPTIONS is set
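A hedged sketch of the intended behavior (assuming Aria supports the
TRANSACTIONAL option and MyISAM does not):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM TRANSACTIONAL=1; -- ER_UNKNOWN_OPTION
  SET sql_mode= CONCAT(@@sql_mode, ',IGNORE_BAD_TABLE_OPTIONS');
  CREATE TABLE t1 (a INT) ENGINE=MyISAM TRANSACTIONAL=1; -- warning only
  SHOW CREATE TABLE t1; -- option commented out, e.g. /* `TRANSACTIONAL`=1 */
  CREATE TABLE t2 (a INT) ENGINE=Aria TRANSACTIONAL=1;
  ALTER TABLE t2 ENGINE=MyISAM; -- option silently preserved, no warning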
* invoke check_expression() for all vcol_info's in
mysql_prepare_create_table() to check for FK CASCADE actions
* also check for SET NULL and SET DEFAULT actions
* to check against existing FKs when a vcol is added in ALTER TABLE,
old FKs must be added to the new_key_list just like other indexes are
* check columns recursively, if vcol1 references vcol2,
flags of vcol2 must be taken into account
* remove check_table_name_processor(), put that logic under
check_vcol_func_processor() to avoid walking the tree twice
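As a hedged illustration, the new checks target definitions like this
(table and column names are made up):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    pid INT,
    v INT AS (pid) VIRTUAL,
    FOREIGN KEY (pid) REFERENCES parent (id) ON DELETE CASCADE
  ) ENGINE=InnoDB;
  -- assumption: pid is used in a vcol expression, so cascading
  -- referential actions (CASCADE / SET NULL / SET DEFAULT) on it
  -- are rejected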
A simple "SET SESSION gtid_seq_no= DEFAULT" did not work, it would straight
up crash the server! Also, explicitly setting gtid_seq_no to 0 gave an error
in --gtid-strict-mode=1.
Setting to DEFAULT or 0 should disable any prior setting of
gtid_seq_no, so that the next transaction is allocated the next GTID
in sequence, as normal.
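For example, both of the following should now simply clear any prior
setting (sketch; assumes the binlog is enabled):

  SET SESSION gtid_seq_no= 100;     -- reserve a specific sequence number
  SET SESSION gtid_seq_no= DEFAULT; -- used to crash the server
  SET SESSION gtid_seq_no= 0;       -- used to error in --gtid-strict-mode=1
  -- the next transaction is again allocated the next GTID in sequence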
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
prep_alter_part_table upon re-partitioning by system time
memcmp() tried to compare beyond the last member of interval because
sizeof(Vers_part_info::interval) is 80: that is the sizeof of the
variable, while the sizeof of the type is 76.
Now the interval_t structs are compared the C++ way instead.
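A minimal sketch of the kind of statement that exercises this comparison
(table name is made up):

  CREATE TABLE t (a INT) WITH SYSTEM VERSIONING
  PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);
  -- re-partitioning with the same interval compares the old and new
  -- interval definitions:
  ALTER TABLE t PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);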
The assertion was there to make sure we don't do vers_set_hist_part() for
SELECT (or any non-DML). But actually we must do it if the SELECT calls
some function that does DML. The patch restricts the assertion to
non-routine statements only.
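For example (a made-up stored function; DELIMITER handling omitted):

  CREATE TABLE t (a INT) WITH SYSTEM VERSIONING
  PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);
  CREATE FUNCTION f() RETURNS INT
  BEGIN
    UPDATE t SET a= a + 1; -- DML inside the routine
    RETURN 1;
  END;
  SELECT f(); -- a SELECT, yet vers_set_hist_part() must still run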
Engines react differently to SQL_MODE => unusable SHOW CREATE
Use abort_on_warning dependent on strict mode when creating the new table,
like it is done for copying data and for inplace alter.
- InnoDB fails to update the autoinc value persistently after
a bulk insert operation.
row_merge_bulk_t::write_to_index(): Update the autoinc value
persistently
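A hedged sketch of the symptom (the exact conditions that enable the
bulk insert code path are an assumption here):

  CREATE TABLE t (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
  SET unique_checks= 0, foreign_key_checks= 0; -- may enable bulk insert
  BEGIN;
  INSERT INTO t VALUES (100);
  COMMIT;
  -- after a server restart, the next INSERT should continue from 101;
  -- without the fix the autoinc value was not persisted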
This patch adds a second execution of "SELECT" queries for
"--ps-protocol".
Also this patch adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in test cases.
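In a test file this looks like:

  --disable_ps2_protocol
  # the SELECT below is executed only once even under --ps-protocol
  SELECT 1;
  --enable_ps2_protocol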
MDEV-31749 sporadic assert in MDEV-30619 new test
If the workers of a parallel replica are busy (potentially with long
queues), but the SQL thread has no events left to distribute (so it
goes idle), then the next event that comes from the primary will
update mi->last_master_timestamp with its timestamp, even if the
workers have not yet finished.
This patch changes the parallel replica logic which updates
last_master_timestamp after idling from relying solely on
sql_thread_caught_up (added in MDEV-29639) to combining it with the
rli queued/dequeued event counters.
That is, if the queued count is equal to the dequeued count, it
means all events have been processed and the replica is considered
idle when the driver thread has also distributed all events.
Low-level details of the commit include:
- to make a more generalized test for Seconds_Behind_Master on
the parallel replica, rpl_delayed_parallel_slave_sbm.test
is renamed to rpl_parallel_sbm.test.
- usage of pause_sql_thread_on_next_event was removed
with the MDEV-30619 fixes. Rather than removing the mechanism entirely,
we adapt it to the needs of this test case
- added test case to cover SBM spike of relay log read and LMT
update that was fixed by MDEV-29639
- rpl_seconds_behind_master_spike.test is made to use
the negate_clock_diff_with_master debug eval.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
ANALYZE FORMAT=JSON output now includes table.r_engine_stats which
has the engine statistics. Only non-zero members are printed.
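For example (abridged output; the member names shown are illustrative):

  ANALYZE FORMAT=JSON SELECT * FROM t1;
  -- ...
  -- "r_engine_stats": {
  --   "pages_accessed": 184,
  --   "pages_read_count": 95
  -- }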
Internally: EXPLAIN data structures Explain_table_access and
Explain_update now have a handler* handler_for_stats pointer.
It is used to read statistics from handler_for_stats->handler_stats.
The following applies only to 10.9+, backport doesn't use it:
Explain data structures exist after the tables are closed. We avoid
walking invalid pointers as follows:
- SQL layer calls Explain_query::notify_tables_are_closed() before
closing tables.
- After that call, printing of JSON output is disabled. Non-JSON output
can be printed but we don't access handler_for_stats when doing that.
Restrict vcol_cleanup_expr() in close_thread_tables() to only simple
locked tables mode. Prelocked mode is cleaned up like a normal statement:
in close_thread_table().
The first UPDATE under START TRANSACTION does nothing (nstate= nstate),
but nevertheless generates history. Since the update vector is empty we
get into the (!uvect->n_fields) branch, which only adds a history row but
does not do the update. After that we get the current row with a wrong
(old) row_start value, and because of that the second UPDATE tries to
insert a history row again: it sees trx->id != row_start, which is the
guard against inserting multiple trx_id-based history rows under the same
transaction (we have the same trx_id, so we get a duplicate error, and
that is what this bug demonstrates). That attempt fails anyway because
the PK is based on row_end, which is constant under the same transaction,
so the PK didn't change.
The fix moves vers_make_update() to an earlier stage of
calc_row_difference(). It thus prepares the update vector before the
(!uvect->n_fields) check and never gets into that branch, hence there is
no need to handle versioning inside that condition anymore.
Now trx->id and row_start are equal after the first UPDATE and we don't
try to insert a second history row.
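A minimal scenario of the kind described (sketch):

  CREATE TABLE t (a INT) WITH SYSTEM VERSIONING ENGINE=InnoDB;
  INSERT INTO t VALUES (1);
  START TRANSACTION;
  UPDATE t SET a= 1; -- empty update vector, but history was generated
  UPDATE t SET a= 2; -- tried to insert a history row a second time
  COMMIT;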
== Cleanups and improvements ==
ha_innobase::update_row():
vers_set_fields and vers_ins_row are cleaned up into a direct condition
check. The SQLCOM_ALTER_TABLE check is not used anymore as it was dead
code; an assertion is done instead.
upd_node->is_delete is set in calc_row_difference() just to keep the
versioning code in one place as much as possible. vers_make_delete()
is still located in row_update_for_mysql() as it is required for
ha_innobase::delete_row() as well.
row_ins_duplicate_error_in_clust():
Restrict DB_FOREIGN_DUPLICATE_KEY to tighter conditions.
VERSIONED_DELETE is used specifically to help the lower stack
understand what caused the current insert. Related to MDEV-29813.
On CREATE TABLE tmp AS SELECT ... we exited Item_func::fix_fields()
with an error. fix_fields_if_needed('foo' or 'bar') failed and we
returned true, but had already changed const_item_cache. So the item was
in an inconsistent state: fixed == false and const_item_cache == false.
Now we clean up the item before returning when Item_func::fix_fields()
fails.
Constraints processing (row_ins_check_foreign_constraint()) was not
called because row_upd_check_references_constraints() didn't see the
update as a delete: node->is_delete was false.
Since MDEV-30378 we check for TRG_EVENT_DELETE to detect versioned
delete in ha_innobase::update_row().
Now we can use TRG_EVENT_DELETE to set upd_node->is_delete, so
constraints processing is triggered correctly.
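A hedged example of the statement flow this affects:

  CREATE TABLE parent (id INT PRIMARY KEY) WITH SYSTEM VERSIONING
    ENGINE=InnoDB;
  CREATE TABLE child (pid INT, FOREIGN KEY (pid) REFERENCES parent (id))
    ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1);
  -- a versioned DELETE is internally an UPDATE that sets row_end; it
  -- must still be checked against referencing rows and fail here:
  DELETE FROM parent;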
1. Exclude merging history rows into fts index.
The check !history_fts && (index->type & DICT_FTS) was just an incorrect
attempt to avoid history in the fts index.
2. Don't check for duplicates for history rows.
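A hedged sketch of a table and rebuild this affects:

  CREATE TABLE t (b TEXT, FULLTEXT (b)) WITH SYSTEM VERSIONING
    ENGINE=InnoDB;
  INSERT INTO t VALUES ('first');
  UPDATE t SET b= 'second'; -- creates a history row
  ALTER TABLE t FORCE; -- rebuild: history rows are neither merged into
                       -- the fts index nor checked for duplicates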
There was a memory leak under these conditions:
- YYABORT was called in the end-of-rule action of a rule containing expr_lex
- This expr_lex was not bound to any sp_lex_keeper
Bison did not call %destructor <expr_lex> in this case, because its stack
already contained a reduced upper-level rule.
Fixing rules starting with RETURN, CONTINUE, EXIT keywords:
Turning end-of-rule actions with YYABORT into mid-rule actions
by adding an empty trailing { } block. This prevents the upper level
rule from being reduced without calling %destructor <expr_lex>.
In other rules, expr_lex is not used immediately before the last
end-of-rule { } block, so they don't need changes.
- Moving the code from the public function trim_whitespaces()
to the class Lex_cstring as methods. This code may
be useful in other contexts, and it also becomes
visible inside sql_class.h
- Adding a helper method THD::strmake_lex_cstring_trim_whitespaces()
- Unifying the way how CREATE PROCEDURE/CREATE FUNCTION and
CREATE PACKAGE/CREATE PACKAGE BODY work:
a) Now CREATE PACKAGE/CREATE PACKAGE BODY also calls
Lex->sphead->set_body_start() to remember the cpp body start inside
an sp_head member.
b) adding a "const char *cpp_body_end" parameter to
sp_head::set_stmt_end().
These changes made it possible to reuse sp_head::set_stmt_end() inside
LEX::create_package_finalize() and remove the duplicate code.
- Renaming sp_head::m_body_begin to m_cpp_body_begin and adding a comment
to make it clear that this member is used only during parsing, and
points to a fragment inside the cpp buffer.
- Changed sp_head::set_body_start() and sp_head::set_stmt_end()
to skip the calls related to "body_utf8" in cases when m_parent is not NULL.
A non-NULL m_parent means that we're inside a package routine.
"body_utf8" in such case belongs not to the current sphead itself,
but to parent (the package) sphead.
So an sphead instance of a package routine should neither initialize,
nor finalize, nor change in any other way the "body_utf8" related
members of Lex_input_stream, and should not take over or copy "body_utf8"
data from Lex_input_stream to "this".
MDEV-31503 ALTER SEQUENCE ends up in optimistic parallel slave binlog out-of-order
The OOO error was still possible even after MDEV-31077. This time
it occurred through open_table() when the sequence table was not in
the table cache *and* the table was created before the last server
restart.
In such a context an internal (read-only) transaction is committed,
and it was not blocked from doing a wakeup() call to subsequent
transactions.
Fixed by extending the effect of suspend_subsequent_commits() to the
entirety of Sql_cmd_alter_sequence::execute().
An extended MDEV-31077 test proves the fixes for both failure scenarios.
Also, the bug condition suggests a workaround: pre-SELECT the sequence
tables before START SLAVE, as sketched below.
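For example (sequence name is made up):

  SELECT * FROM s; -- pre-load the sequence table into the table cache
  START SLAVE;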
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Item_func_tochar::check_arguments() didn't check if its arguments
each had one column. Failing to make this check and proceeding would
eventually cause either an assertion failure or the execution would
reach "MY_ASSERT_UNREACHABLE();" which would produce a crash with
a misleading stack trace.
* Fixed Item_func_tochar::check_arguments() to do the required check.
* Also fixed MY_ASSERT_UNREACHABLE() to terminate the program. Just
"executing" __builtin_unreachable() used to cause "undefined results",
which in my experience was a crash with corrupted stack trace.
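For example (hedged; the exact error code may differ):

  SELECT TO_CHAR((SELECT 1, 2), 'YYYY-MM-DD');
  -- the row argument has two columns; this must now fail with an
  -- "Operand should contain 1 column(s)" error instead of reaching
  -- MY_ASSERT_UNREACHABLE()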
The largest_started_sub_id needs to be set under LOCK_parallel_entry
together with testing stop_sub_id. However, in-between was the logic for
do_ftwrl_wait(), which temporarily releases the mutex. This could lead to
inconsistent stopping amongst worker threads and lost data.
Fix by moving all the stop-related logic out from unrelated do_gco_wait()
and do_ftwrl_wait() and into its own function do_stop_handling().
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Various test cases for the bugs around MDEV-31448.
Test cases due to Brandon Nesterenko, thanks!
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Where a read-only server permits writes through replication, it
should not permit user connections to commit/rollback XA
transactions prepared via replication. The bug reported in
MDEV-30978 shows that this can happen. This is because there is no
read-only check in the XA transaction logic; the most relevant one
occurs in ha_commit_trans() for normal statements/transactions.
This patch extends the XA transaction logic to check the read only
status of the server before performing an XA COMMIT or ROLLBACK.
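For example, on a server running with read_only=1 (xid made up):

  XA COMMIT 'xid1';   -- 'xid1' was prepared by the replication applier;
                      -- now rejected for user connections
  XA ROLLBACK 'xid1'; -- likewise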
Reviewed By:
Andrei Elkin <andrei.elkin@mariadb.com>
Before MDEV-24671, the wait time was derived from my_interval_timer() /
1000 (nanoseconds converted to microseconds, and not microseconds to
milliseconds like I must have assumed). The lock_sys.wait_time and
lock_sys.wait_time_max are already in milliseconds; we should not divide
them by 1000.
In MDEV-24738 the millisecond counts lock_sys.wait_time and
lock_sys.wait_time_max were changed to a 32-bit type. That would
overflow in 49.7 days. Keep using a 64-bit type for those millisecond
counters.
Reviewed by: Marko Mäkelä
recv_log_recover_10_5(): Make reads aligned by 4096 bytes, to avoid
any trouble in case the file was opened in O_DIRECT mode and
the physical block size is larger than 512 bytes.
Because innodb_log_file_size used to be defined in whole megabytes,
reading multiples of 4096 bytes instead of 512 should not be an issue.
The new statistics are enabled by adding the "engine", "innodb" or "full"
option to --log-slow-verbosity.
Example output:
# Pages_accessed: 184 Pages_read: 95 Pages_updated: 0 Old_rows_read: 1
# Pages_read_time: 17.0204 Engine_time: 248.1297
Pages_read_time is the time spent doing physical reads inside the storage
engine. (Writes cannot be tracked as these are usually done in the
background.)
Engine_time is the time spent inside the storage engine for the full
duration of the read/write/update calls. It uses the same code as
'analyze statement' for calculating the time spent.
The engine statistics are collected through a generic interface that
should be easy for any engine to use. It can also easily be extended to
provide even more statistics.
Currently only InnoDB has counters for Pages_% and Undo_% status.
Engine_time works for all engines.
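For example:

  SET GLOBAL log_slow_verbosity= 'engine';
  -- or on the command line / in the configuration file:
  --   --log-slow-verbosity=query_plan,engine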
Implementation details:
class ha_handler_stats holds all engine stats. This class is included
in handler and THD classes.
While a query is running, all statistics are updated in the handler. In
close_thread_tables() the statistics are added to the THD.
handler::handler_stats is a pointer to where statistics should be
collected. This is set to point to handler::active_handler_stats if
stats are requested. If not, it is set to 0.
handler_stats also has an element, 'active', that is 1 if stats are
requested. This is to allow engines to avoid doing any 'if's while
updating the statistics.
Cloned or partition tables have the pointer set to the base table if
stats are requested.
There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Incrementing counters is done on local variables and accesses
are inline, so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each
used handler. This is only 40 bytes, which should be negligible.
- For partition tables we have to loop over all partitions to update
the handler_status as part of table_init(). This can be optimized in the
future to only be done when log-slow-verbosity changes. For that to work
we have to update handler_status for all opened partitions and
also for all partitions opened in the future.
Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona Server, which
has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are
only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added THD argument to save_explain_data_intern().
- Switched arguments for save_explain_.*_data() to always have
THD first (generates better code as other functions also have THD
first).
There was no actual execution of the SQL of a pushed derived table,
which caused "r_rows" to be always displayed as 0 and "r_total_time_ms"
to show inaccurate numbers.
This commit makes the derived table's SQL be executed by the storage
engine, so the server is able to calculate the number of rows returned
and measure the execution time more accurately.
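With an engine that supports derived table pushdown, the effect can be
seen with something like (sketch):

  ANALYZE FORMAT=JSON
  SELECT * FROM (SELECT a, COUNT(*) FROM t1 GROUP BY a) dt;
  -- "r_rows" of the pushed derived table now reflects the actual row
  -- count, and "r_total_time_ms" the measured execution time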
PROBLEM:
A deadlock was possible when a transaction tried to "upgrade" an already
held Record Lock to Next Key Lock.
SOLUTION:
This patch is based on observations that:
(1) a Next Key Lock is equivalent to Record Lock combined with Gap Lock
(2) a GAP Lock never has to wait for any other lock
In case we request a Next Key Lock, we check if we already own a Record
Lock of equal or stronger mode, and if so, then we change the requested
lock type to GAP Lock, which we either already have, or can be granted
immediately, as GAP locks don't conflict with any other lock types.
(We don't consider Insert Intention Locks a Gap Lock in the above
statements.)
The reason why we don't upgrade a Record Lock to a Next Key Lock is the
following.
following.
Imagine a transaction which does something like this:
for each row {
request lock in LOCK_X|LOCK_REC_NOT_GAP mode
request lock in LOCK_S mode
}
If locks were upgraded from Record Lock to Next Key Lock, one could
expect that only two lock_t structs would be created for each page, one
for LOCK_X|LOCK_REC_NOT_GAP mode and one for LOCK_S mode, with their
bitmaps marking all records from the same page. Instead, the situation
looks like this:
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 1:
// -> creates new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode and sets bit for
// 1
request lock in LOCK_S mode on row 1:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 1,
// so it upgrades it to X
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 2:
// -> creates a new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode (because we
// don't have any after we've upgraded!) and sets bit for 2
request lock in LOCK_S mode on row 2:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 2,
// so it upgrades it to X
...etc...etc..
Each iteration of the loop creates a new lock_t struct, and in the end we
have a lot (one for each record!) of LOCK_X locks, each with single bit
set in the bitmap. Soon we run out of space for lock_t structs.
If we create a LOCK_GAP instead of upgrading the lock, the above scenario
works like the following:
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 1:
// -> creates new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode and sets bit for
// 1
request lock in LOCK_S mode on row 1:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 1,
// so it creates LOCK_S|LOCK_GAP only and sets bit for 1
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 2:
// -> reuses the lock_t for LOCK_X|LOCK_REC_NOT_GAP by setting bit for 2
request lock in LOCK_S mode on row 2:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 2,
// so it reuses LOCK_S|LOCK_GAP setting bit for 2
In the end we have just two locks per page, one for each mode:
LOCK_X|LOCK_REC_NOT_GAP and LOCK_S|LOCK_GAP.
Another benefit of this solution is that it avoids the not-entirely
const-correct (and otherwise risky-looking) "upgrading".
The fix was ported from
mysql/mysql-server@bfba840dfa and
mysql/mysql-server@75cefdb1f7
Reviewed by: Marko Mäkelä
ha_innobase::delete_table(): Also on DROP SEQUENCE, do try to drop any
persistent statistics. They should really not be created for
SEQUENCE objects (which internally are 1-row no-rollback tables),
but that is how it happened to always work.
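E.g.:

  CREATE SEQUENCE s ENGINE=InnoDB;
  DROP SEQUENCE s;
  -- must not leave orphan rows behind in mysql.innodb_table_stats
  -- or mysql.innodb_index_stats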
i_s_innodb_buffer_page_get_info(): Correct a condition.
After crash recovery, there may be some buffer pool pages in FREED state,
containing garbage (invalid data page contents). Let us ignore such pages
in the INFORMATION_SCHEMA output.
The test innodb.innodb_defragment_fill_factor will be removed, because
the queries that it is invoking on information_schema.innodb_buffer_page
would start to fail. The defragmentation feature was removed in
commit 7ca89af6f8 in MariaDB Server 11.1.
Tested by: Matthias Leich