Binary logging is now disabled for the queries run by SQL SERVICE.
Binary logging can be turned back on with the 'SET SQL_LOG_BIN=On' query.
Conflicts:
sql/sql_prepare.cc
The reason was that Event e11 was re-executed before
"ALTER EVENT e11 DISABLE" had been executed.
Fixed by increasing the re-schedule time.
Other things:
- Removed double accounting of 'execution_count'. It was incremented in
top->mark_last_executed(thd), which was executed a few lines earlier.
There were two errors left from the previous fix.
- subselect.test assumes that mysql.slow_log is empty. This was not
enforced.
- subselect.test dropped a file that does not exist (for safety).
This was fixed by ensuring we don't get a warning if the file does
not exist.
Before MariaDB 10.3.5, the binlog position was stored in the TRX_SYS page,
while after that it is stored in rollback segments. There is code to read the
legacy position from TRX_SYS to handle upgrades. The problem was if the
legacy position happens to compare larger than the position found in
rollback segments; in this case, the old TRX_SYS position would incorrectly
be preferred over the newer position from rollback segments.
Fixed by always preferring a position from rollback segments over a legacy
position.
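A minimal sketch of the fixed selection rule (the struct and function names
below are illustrative placeholders, not the actual InnoDB code):

  #include <cstdint>
  #include <optional>
  #include <string>

  struct binlog_pos_sketch { std::string file; uint64_t offset; };

  // A position recovered from the rollback segments always wins over the
  // legacy TRX_SYS position, regardless of how the two happen to compare.
  static binlog_pos_sketch
  choose_recovered_pos(const std::optional<binlog_pos_sketch> &rseg_pos,
                       const std::optional<binlog_pos_sketch> &legacy_pos)
  {
    if (rseg_pos)
      return *rseg_pos;            // >= 10.3.5 format: always preferred
    if (legacy_pos)
      return *legacy_pos;          // upgrade case: only TRX_SYS has a position
    return binlog_pos_sketch{};    // nothing recorded
  }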
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This commit can cause the wrong (old) binlog position to be recovered by
mariabackup --prepare. It compares FIL_PAGE_LSN values to determine which
binlog position is the latest one and should be recovered. However, it is not
guaranteed that the FIL_PAGE_LSN order matches the
commit order, as is assumed by the code. This is because the page LSN could be
modified by an unrelated update of the page after the commit.
In one example, the recovery first encountered this in trx_rseg_mem_restore():
lsn=27282754 binlog position (./master-bin.000001, 472908)
and then later:
lsn=27282699 binlog position (./master-bin.000001, 477164)
The last one 477164 is the correct position. However, because the LSN
encountered for the first one is higher, that position is recovered instead.
This results in a too-old binlog position, and a newly provisioned slave will
start replicating too early and get a duplicate key error or similar.
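A toy program illustrating the failure mode with the numbers from this
example (only the LSN/offset values come from the log above; the types and
names are made up):

  #include <cstdint>
  #include <cstdio>

  struct rseg_entry_sketch { uint64_t page_lsn; uint64_t binlog_offset; };

  int main()
  {
    // The two entries seen by trx_rseg_mem_restore() in the example above.
    rseg_entry_sketch entries[]= { {27282754, 472908}, {27282699, 477164} };

    // Flawed heuristic: pick the entry whose FIL_PAGE_LSN is largest.
    rseg_entry_sketch best= entries[0];
    for (const rseg_entry_sketch &e : entries)
      if (e.page_lsn > best.page_lsn)
        best= e;

    // Prints 472908 (the stale position) even though 477164 is the last
    // committed one: page LSN order need not match commit order.
    std::printf("recovered offset: %llu\n",
                (unsigned long long) best.binlog_offset);
    return 0;
  }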
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Revert the patch for MDEV-18917, which removed this functionality.
This restores that mariabackup --prepare recovers the transactional
binlog position from the redo log, and writes it to the file
xtrabackup_binlog_pos_innodb.
This position is updated only on InnoDB commits. This means that
if the last event in the binlog at the time of backup is a DDL or
non-transactional update, the recovered position from --no-lock will
be behind the state of the backup.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Make the profiler play a bit better with memory allocators. A test suite can
not be included because it leads to non-freed memory on shutdown (IMHO OK for
memory shortage emulation).
As an alternative, all of this should be rewritten and the ability to return
errors to the upper level should be added.
In commit 75e82f71f1 the code to
rename internal tables for FULLTEXT INDEX that had been
created on Microsoft Windows using incompatible names
was removed. Let us also remove the related fault injection.
Problem:
=======
- InnoDB fails to find the foreign key index for the newly
added foreign key relation. This is caused by commit
5f09b53bdb (MDEV-31086).
FIX:
===
In check_col_is_in_fk_indexes(), while iterating through
the newly added foreign key relations, InnoDB should
consider that a foreign key relation may not have a foreign
index when foreign key checks are disabled.
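A hedged sketch of the intended iteration (simplified stand-in types, not the
real dict_foreign_t or the actual check_col_is_in_fk_indexes() signature):

  #include <vector>

  struct fk_relation_sketch
  {
    const void *foreign_index;   // may be nullptr when foreign_key_checks=0
  };

  // A relation without a foreign index is simply skipped when foreign key
  // checks are disabled, instead of being dereferenced unconditionally.
  inline bool check_new_fk_relations_sketch(
      const std::vector<fk_relation_sketch> &new_fks, bool foreign_key_checks)
  {
    for (const fk_relation_sketch &fk : new_fks)
    {
      if (fk.foreign_index == nullptr)
      {
        if (!foreign_key_checks)
          continue;              // allowed: no index yet, checks are off
        return false;            // with checks on this would be an error
      }
      // ... the real code validates the column against fk.foreign_index ...
    }
    return true;
  }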
Dyncol functions like column_create() encode the
current character set inside the value.
So they cannot be used with --view-protocol.
This patch changes only --disable_view_protocol to
--disable_service_connection.
This bug affected only multi-table update statements and in very rare
cases: one of the tables used at the top level of the statement must be
a derived table containing a row construct with a subquery including a hanging
CTE.
Before this patch was applied, the function prepare_unreferenced() of the
class With_element, when invoked for the hanging CTE, did not properly
restore the value of thd->lex->context_analysis_only. As a result it
became 0 after the call of this function.
For a query affected by the bug this function is called when
JOIN::prepare() is called for the subquery with a hanging CTE. This happens
when Item_row::fix_fields() calls fix_fields() for the subquery. This zero
value of thd->lex->context_analysis_only forces the caller function
Item_row::fix_fields() to invoke the virtual method is_null() for the
subquery, which leads to its execution. This causes an assertion failure
because the call of Item_row::fix_fields() happens during the invocation
of Multiupdate_prelocking_strategy::handle_end() that calls the function
mysql_derived_prepare() for the derived table used by the UPDATE at the
time when proper locks for the statement tables have not been acquired yet.
With this patch the value of thd->lex->context_analysis_only is restored
to CONTEXT_ANALYSIS_ONLY_DERIVED, which is set in the function
mysql_multi_update_prepare().
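The shape of the fix, reduced to a self-contained sketch (in the server the
flag lives in thd->lex; here it is a plain struct member):

  struct lex_sketch { unsigned int context_analysis_only; };

  // Sketch of prepare_unreferenced(): the flag is saved on entry and put
  // back on exit, so it keeps the CONTEXT_ANALYSIS_ONLY_DERIVED value set
  // earlier by mysql_multi_update_prepare() instead of dropping to 0.
  void prepare_unreferenced_sketch(lex_sketch *lex)
  {
    const unsigned int save_context_analysis_only= lex->context_analysis_only;
    // ... do the context analysis of the hanging CTE here ...
    lex->context_analysis_only= save_context_analysis_only;
  }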
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The MTR test rpl.rpl_semi_sync_slave_compressed_protocol scans the
log file to ensure there is no magic number error. It attempts to
only scan the log files of the current test; however, the variable
which controls this, assert_only_after, is initialized incorrectly,
and it thereby scans the entire log file, which includes output from
prior tests. This causes it to fail if a test which expects this
error runs previously on the same worker.
This patch fixes the assert_only_after initialization so the test only scans
through its own log contents.
The test rpl.rpl_sql_thd_start_errno_cleared can lose a debug_sync
signal, as there is a RESET immediately following a SIGNAL. When the
signal is lost, the sql_thread is stuck in a WAIT_FOR clause until
it times out, resulting in long test times (albeit still
successful).
This patch extends the test to ensure the debug_sync signal was
received before issuing the RESET.
Item_char_typecast::print() did not print the "binary" keyword
in such cases:
CAST('a' AS CHAR CHARACTER SET latin1 BINARY)
This caused a difference between "mtr" and "mtr --view-protocol" output.
This patch corrects the fix for MDEV-32369. No Item_direct_ref_to_item
objects should be allocated at the optimizer phase after permanent
rewritings have been done.
The patch also adds another test case for MDEV-32369 that uses MyISAM
with more than one row.
Approved by Rex Johnston <rex.johnston@mariadb.com>
This commit fixes several bugs in error handling around disk full when
writing the statement/transaction binlog caches:
1. If the error occurs during a non-transactional statement, the code
attempts to binlog the partially executed statement (as it cannot roll
back). The stmt_cache->error was still set from the disk full error. This
caused MYSQL_BIN_LOG::write_cache() to get an error while trying to read the
cache to copy it to the binlog. This was then wrongly interpreted as a disk
full error writing to the binlog file. As a result, a partial event group
containing just a GTID event (no query or commit) was binlogged. Fixed by
checking if an error is set in the statement cache, and if so binlog an
INCIDENT event instead of a corrupt event group, as for other errors.
2. For LOAD DATA LOCAL INFILE, if a disk full error occurred while writing to
the statement cache, the code would attempt to abort and read-and-discard
any remaining data sent by the client. The discard code would however
continue trying to write data to the statement cache, and wrongly interpret
another disk full error as end-of-file from the client. This left the client
connection with extra data which corrupts the communication for the next
command, as well as again causing a corrupt/incomplete event to be
binlogged. Fixed by restoring the default read function before reading any
remaining data from the client connection.
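A rough sketch of the decision in fix 1 (placeholder names, not the actual
MYSQL_BIN_LOG or binlog cache interface):

  #include <cstdio>

  struct stmt_cache_sketch { bool error; /* plus the cached events */ };

  static bool write_incident_event_sketch()
  { return std::puts("binlog: INCIDENT event") >= 0; }

  static bool copy_cache_to_binlog_sketch(const stmt_cache_sketch *)
  { return std::puts("binlog: copy statement cache") >= 0; }

  // If the statement cache already failed (e.g. disk full), never try to
  // read it back into the binlog; record an INCIDENT event instead so no
  // partial/corrupt event group is ever written.
  bool binlog_stmt_cache_sketch(stmt_cache_sketch *cache)
  {
    if (cache->error)
      return write_incident_event_sketch();
    return copy_cache_to_binlog_sketch(cache);
  }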
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Updates to specific replication system variables need to target the
active primary connection to support multi-source replication. These
variables use the Sys_var_multi_source_ulonglong type. This class
uses offsets of the Master_info C++ class to generalize access to
its member variables.
The problem is that the Master_info class is not of standard layout,
and neither are many of its member variables, e.g. rli and
rli->relay_log. Because the class is not of standard layout, using
offsets to access member variables invokes undefined behavior.
This patch changes how Sys_var_multi_source_ulonglong accesses the
member variables of Master_info from using parameterized memory
offsets to “getter” function pointers.
Note that the size parameter and assertion are removed, as they are
no longer needed because the condition is guaranteed by compiler
type-safety checks.
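The shape of the change can be shown with a self-contained sketch (the
classes below are simplified stand-ins, not the real Master_info or sys_var
hierarchy):

  #include <cstdint>
  #include <string>

  // Stand-in for a non-standard-layout class like Master_info: virtual
  // functions and non-trivial members make offset-based access to its
  // members undefined behavior.
  struct master_info_sketch
  {
    virtual ~master_info_sketch()= default;
    std::string connection_name;
    uint64_t gtid_seq_no= 0;
  };

  // The sysvar stores a type-checked getter instead of a byte offset, so
  // no pointer arithmetic on the object is needed and the compiler can
  // verify the member type.
  using mi_getter_sketch= uint64_t &(*)(master_info_sketch &);

  struct sysvar_multi_source_sketch
  {
    mi_getter_sketch get;
    uint64_t read(master_info_sketch &mi) const { return get(mi); }
    void update(master_info_sketch &mi, uint64_t v) const { get(mi)= v; }
  };

  // Example getter for one variable:
  static uint64_t &gtid_seq_no_getter(master_info_sketch &mi)
  { return mi.gtid_seq_no; }

  static const sysvar_multi_source_sketch sys_gtid_seq_no_sketch{
    &gtid_seq_no_getter };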
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
This commit addresses column naming issues with CTEs in the use of prepared
statements and stored procedures. Usage of either prepared statements or
procedures with Common Table Expressions and column renaming may be affected.
There are three related but different issues addressed here.
1) First execution issue. Consider the following
prepare s from "with cte (col1, col2) as (select a as c1, b as c2 from t
order by c1) select col1, col2 from cte";
execute s;
After parsing, items in the select are named (c1,c2), order by (and group by)
resolution is performed, then item names are set to (col1, col2).
When the statement is executed, context analysis is again performed, but
resolution of elements in the order by clause will not be able to find c1,
because it was renamed to col1 and remains this way.
The solution is to save the names of these items during context resolution
before they have been renamed. We can then reset item names back to those after
parsing so first execution can resolve items referred to in order and group by
clauses.
2) Second Execution Issue
When the derived table contains more than one select 'unioned' together we could
reasonably think that dealing with only items in the first select (which
determines names in the resultant table) would be sufficient. This can lead to
a different problem. Consider
prepare st from "with cte (c1,c2) as
(select a as col1, sum(b) as col2 from t1 where a > 0 group by col1
union select a as col3, sum(b) as col4 from t2 where b > 2 group by col3)
select * from cte where c1=1";
When the optimizer (only run during the first execution) pushes the outside
condition "c1=1" into every select in the derived table union, it renames the
items to make the condition valid. In this example, this leaves the first item
in the second select named 'c1'. The second execution will now fail 'group by'
resolution.
Again, the solution is to save the names during context analysis, resetting
before subsequent resolution, but making sure that we save/reset the item
names in all the selects in this union.
3) Memory Leak
During parsing Item::set_name() is used to allocate memory in the statement
arena. We cannot use this call during statement execution as this represents
a memory leak. We directly set the item list names to those in the column list
of this CTE (also allocated during parsing).
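A toy sketch of the save/reset idea used for issues 1) and 2), and of
re-pointing names without Item::set_name() for 3) (simplified containers,
not the server's Item lists; for a union this has to be done for every
select in the union):

  #include <cstddef>
  #include <string_view>
  #include <vector>

  struct item_sketch { std::string_view name; };

  struct select_sketch
  {
    std::vector<item_sketch>      items;
    std::vector<std::string_view> saved_names;  // captured at parse time

    // Remember the post-parse names before any renaming takes place.
    void save_names()
    {
      saved_names.clear();
      for (const item_sketch &it : items)
        saved_names.push_back(it.name);
    }

    // Before each re-execution, point the names back at the saved strings
    // (allocated in the parse arena) instead of allocating new copies.
    void restore_names()
    {
      for (std::size_t i= 0; i < items.size() && i < saved_names.size(); i++)
        items[i].name= saved_names[i];
    }
  };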
Approved by Igor Babaev <igor@mariadb.com>
For some reason, in the embedded server, a command
let $a=`$query`
ignores local context. As a workaround, use SET STATEMENT to set
debug_dbug in the same statement.
This patch fixes a performance regression introduced in the patch for the
bug MDEV-21104. The performance regression could affect queries for which
join buffer was used for an outer join whose ON expression allowed a
conjunctive condition depending only on outer tables to be extracted.
If the number of records in the join buffer for which this
condition was false greatly exceeded the number of other records, the
slowdown could be significant.
If there is a conjunctive condition extracted from the ON expression
depending only on outer tables, this condition is evaluated when the interesting
fields of each surviving record of the outer tables are put into the join buffer.
Each such set of fields for any join operation is supplied with a match
flag field used to generate null complemented rows. If the result of the
evaluation of the condition is false the flag is set to MATCH_IMPOSSIBLE.
When looking in the join buffer for records matching a record of the
right operand of the outer join operation, the records with such flags
do not need to be unpacked into record buffers for the evaluation of ON
expressions.
The patch for MDEV-21104, which fixed a problem of wrong results with the
'not exists' optimization, by mistake broke the code that allowed ignoring
records with the match flag set to MATCH_IMPOSSIBLE when looking
for matching records. As a result such records were unpacked for each
record of the right operand of the outer join operation. This caused
a significant execution penalty in some cases.
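A toy model of the restored fast path (simplified types; the real code
operates on the packed join buffer inside the JOIN_CACHE classes):

  #include <cstdint>
  #include <vector>

  enum match_flag_sketch : uint8_t
  { MATCH_NOT_FOUND_S, MATCH_FOUND_S, MATCH_IMPOSSIBLE_S };

  struct buffered_rec_sketch
  { match_flag_sketch flag; /* packed fields of the outer tables... */ };

  // Records flagged MATCH_IMPOSSIBLE when they were written to the join
  // buffer are skipped outright; they are never unpacked into record
  // buffers for evaluation of the ON expression.
  template <typename UnpackAndMatch>
  void scan_join_buffer_sketch(const std::vector<buffered_rec_sketch> &buf,
                               UnpackAndMatch &&unpack_and_match)
  {
    for (const buffered_rec_sketch &rec : buf)
    {
      if (rec.flag == MATCH_IMPOSSIBLE_S)
        continue;
      unpack_and_match(rec);
    }
  }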
One of the test cases added in the patch can be used only for demonstration
of the restored performance for the reported query. The second test case is
needed to demonstrate the validity of the fix.
When mysql/mysql-server@0c954c2289
added a plugin interface for FULLTEXT INDEX tokenization to MySQL 5.7,
fts_tokenize_ctx::processed_len got a second meaning, which is only
partly implemented in row_merge_fts_doc_tokenize().
This inconsistency could cause a crash when using FULLTEXT...WITH PARSER.
A test case that would crash MySQL 8.0 when using an n-gram parser and
single-character words would fail to crash in MySQL 5.7, because the
buf_full condition in row_merge_fts_doc_tokenize() was not met.
This change is inspired by
mysql/mysql-server@38e9a0779a
that appeared in MySQL 5.7.44.
After two concurrent FTWRL/UNLOCK TABLES, the node stays in a paused state
and the following CREATE TABLE fails with
ER_UNKNOWN_COM_ERROR (1047): Aborting TOI: Replication paused on
node for FTWRL/BACKUP STAGE.
The cause is the use of global `wsrep_locked_seqno` to determine
if the node should be resumed on UNLOCK TABLES. In some executions
the `wsrep_locked_seqno` is cleared by the first UNLOCK TABLES
after the second FTWRL gets past `make_global_read_lock_block_commit()`.
As a fix, use `thd->wsrep_desynced_backup_stage` to determine
if the thread should resume the node on UNLOCK TABLES.
Add MTR test galera.galera_ftwrl_concurrent to reproduce the
race. The test also contains cases for BACKUP STAGE, which
uses a similar mechanism for desyncing and pausing the node.
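A minimal sketch of the changed resume condition (reduced to a plain struct;
in the server the flag is thd->wsrep_desynced_backup_stage):

  struct thd_sketch { bool wsrep_desynced_backup_stage; };

  // Before the fix a global seqno (wsrep_locked_seqno) decided whether to
  // resume, so two concurrent FTWRL sessions could race and leave the node
  // paused. After the fix, only the session that desynced/paused the node
  // resumes it on UNLOCK TABLES.
  inline bool should_resume_node_sketch(const thd_sketch *thd)
  {
    return thd->wsrep_desynced_backup_stage;
  }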
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
remove the hack where NO_DEFAULT_VALUE_FLAG was temporarily removed
from a field to initialize DEFAULT() functions in CHECK constraints
while disabling self-reference field checks.
Instead, initialize DEFAULT() functions in CHECK explicitly,
don't call check_field_expression_processor() for CHECK at all.
The semisync ack receiver thread (master side) is made to report
details of the errors it encounters.
In the case of a 'magic byte' error, a hexdump of the received packet
is always written to the error log at the Note level.
In other cases the exact server-level error is printed out
as a warning (as it may not be critical) under log_warnings > 2.
An MTR test is added for the magic byte error. For the others, existing MTR
tests cover that, provided log_warnings > 2 is set.