The original code is correct.
valgrind and asan binaries should be built with a specialized version of
mem_root that makes it easier to find memory overwrites.
This is what the BUILD scripts are doing.
The specialized mem_root code allocates a new block for every allocation,
which is visible to any test that depends on the default original malloc
size and usage.
Running an UPDATE statement in PS mode with positional
parameter(s) bound to an array of actual values (that is,
prepared to be run in bulk mode) results in incorrect behaviour
in the presence of an ON UPDATE trigger that also executes an UPDATE
statement. The same is true for handling a DELETE statement in the
presence of an ON DELETE trigger. Typically, the visible effect of
such incorrect behaviour is a wrong number of updated/deleted rows
in the target table. Additionally, in the case of an UPDATE
statement, the number of modified rows and the state message returned
by the statement contain wrong information about the number of modified rows.
The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding a positional argument to its
actual values is stored in THD (this is thd->bulk_param) and reused
on processing every INSERT/UPDATE/DELETE statement. This leads to
the actual values bound to the top-level UPDATE/DELETE statement
being consumed by other DML statements executed by the triggers' bodies.
To fix the issue, temporarily reset thd->bulk_param to nullptr before
invoking triggers and restore its value when their execution has
finished.
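A minimal sketch of this save/reset/restore pattern (THD and the
trigger-firing function here are simplified stand-ins, not the actual
server code):

    // Illustrative stand-ins for the real server types.
    struct THD { void *bulk_param; };

    static void fire_triggers(THD *thd)
    {
      // Hide the top-level statement's bound array parameters from any
      // DML statement executed by the trigger body.
      void *save_bulk_param= thd->bulk_param;
      thd->bulk_param= nullptr;

      // ... execute the ON UPDATE / ON DELETE trigger body here ...

      // Restore the parameters for the rest of the bulk operation.
      thd->bulk_param= save_bulk_param;
    }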
The second part of the problem, relating to the wrong value of affected
rows reported via the Connector/C API, is caused by the fact that the
diagnostics area is shared by the original DML statement and a statement
invoked by a trigger. This fact should be taken into account when
finalizing the state of the diagnostics area on completion of a statement.
Important remark: in case the macro DBUG_OFF is defined, a call of the method
Diagnostics_area::reset_diagnostics_area()
resets the data members m_affected_rows and m_statement_warn_count.
Values of these data members of the class Diagnostics_area are used when
sending OK and EOF messages. In case a DML statement is executed in PS bulk
mode, such resetting results in sending wrong values for affected rows to
a client if the DML statement fires triggers. So, reset these data members
only in case the current statement being processed is not run in bulk mode.
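The guarded reset could look like the following sketch; the member
names follow the commit text, while the surrounding class and the
is_bulk_op flag are simplified stand-ins:

    class Diagnostics_area
    {
      unsigned long long m_affected_rows;
      unsigned int m_statement_warn_count;

    public:
      void reset_diagnostics_area(bool is_bulk_op)
      {
        // In PS bulk mode a trigger's statement shares this object with
        // the top-level DML statement; keep the accumulated counters.
        if (!is_bulk_op)
        {
          m_affected_rows= 0;
          m_statement_warn_count= 0;
        }
      }
    };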
- Added plugin_debug.test, multiple_index.test to the innodb_fts suite
from mysql-5.7.
- commit c5b28e55f6 removes the warning
for InnoDB rebuilding the table to add FTS_DOC_ID
- In the multiple_index test case, MATCH(a) values are smaller
than in MySQL because ROLLBACK updates the stat_n_rows.
- The st_mysql_ftparser_boolean_info structure conveys boolean
metadata to the MySQL search engine for every word in the query.
This structure lacks a position value to store the correct
offset of every word, so phrase search queries in the plugin_debug
test case in boolean mode with the simple parser return
wrong results.
With wsrep_sst_rsync, the node goes into an endless loop when trying
to establish a connection to the donor for IST/SST if the database
is bound to a specific IP address, not "*".
This commit fixes this problem. Separate tests are not
required - the problem can occur in normal configurations
on a number of systems when selecting a bind address other
than "*", especially on FreeBSD and with IPv6 addresses.
int wsrep_thd_append_key(THD*, const wsrep_key*, int, Wsrep_service_key_type)
CREATE TABLE [SELECT|REPLACE SELECT] is CTAS, and the idea was that
we force ROW format. However, this was not correctly enforced
and keys were appended before the wsrep transaction was started.
In THD::decide_logging_format we should force the used stmt binlog
format to ROW in the CTAS case and produce a warning if the used
binlog format was not ROW.
In ha_innodb::update_row we should not append keys, similarly to
ha_innodb::write_row, if sql_command is SQLCOM_CREATE_TABLE.
Improved error logging in ::write_row, ::update_row and ::delete_row
if wsrep key append fails.
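A rough sketch of the intended behaviour; the is_ctas flag and
push_warning() below are simplified stand-ins for the real server
checks and warning machinery, not the actual THD::decide_logging_format
code:

    #include <cstdio>

    enum binlog_format { BINLOG_FORMAT_STMT, BINLOG_FORMAT_ROW };

    // Stand-in for the server's warning machinery.
    static void push_warning(const char *msg)
    { fprintf(stderr, "Warning: %s\n", msg); }

    static void decide_logging_format(bool is_ctas, binlog_format *fmt)
    {
      if (is_ctas && *fmt != BINLOG_FORMAT_ROW)
      {
        // CREATE TABLE [SELECT|REPLACE SELECT] must replicate as ROW
        // under wsrep; warn that the configured format was overridden.
        push_warning("CTAS forced to ROW binlog format");
        *fmt= BINLOG_FORMAT_ROW;
      }
    }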
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
A mixture of a multi-byte *TEXT column and a short binary column
produced a column that was too large.
For example, COALESCE(tinytext_utf8mb4, short_varbinary)
produced a BLOB column instead of the expected TINYBLOB.
- Adding a virtual method Type_all_attributes::character_octet_length(),
returning max_length by default (see the sketch after this list).
- Overriding Item_field::character_octet_length() to extract
the octet length from the underlying Field.
- Overriding Item_ref::character_octet_length() to extract
the octet length from the referenced Item (e.g. VIEW fields).
- Fixing Type_numeric_attributes::find_max_octet_length() to
take the octet length using the new method character_octet_length()
instead of accessing max_length directly.
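A condensed sketch of the scheme; Field and its octet accessor are
reduced to stand-ins here, the real hierarchy is larger:

    #include <cstdint>

    struct Field { uint32_t octet_len; };  // stand-in for the real Field

    struct Type_all_attributes
    {
      uint32_t max_length;
      // By default the octet length is simply max_length.
      virtual uint32_t character_octet_length() const { return max_length; }
      virtual ~Type_all_attributes() {}
    };

    struct Item_field: public Type_all_attributes
    {
      const Field *field;
      // Take the true octet length from the underlying Field, so e.g.
      // TINYTEXT keeps its 255-octet limit during type aggregation.
      uint32_t character_octet_length() const override
      { return field->octet_len; }
    };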
fprintf() on Windows, when used on an unbuffered FILE*, writes byte by
byte. This can make crash handler messages harder to read if they are
mixed up with other error log output.
Fixed, on Windows, by using a small buffer for formatting, and fwrite
instead of fprintf, if the buffer is large enough for the message.
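A sketch of the fix, assuming only standard stdio; the buffer size and
the function name are illustrative:

    #include <cstdarg>
    #include <cstdio>

    static void safe_fprintf(FILE *f, const char *fmt, ...)
    {
      char buf[256];
      va_list args;
      va_start(args, fmt);
      int n= vsnprintf(buf, sizeof(buf), fmt, args);
      va_end(args);

      if (n >= 0 && (size_t) n < sizeof(buf))
      {
        // One fwrite() keeps the message in a single chunk, instead of
        // the byte-wise writes fprintf() does on an unbuffered FILE*.
        fwrite(buf, 1, (size_t) n, f);
      }
      else
      {
        // Message does not fit the small buffer: fall back to fprintf().
        va_start(args, fmt);
        vfprintf(f, fmt, args);
        va_end(args);
      }
    }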
The reason is that on Windows, OpenSSL can be built with a different C
runtime than the server (e.g. the Debug runtime in a debug vcpkg build).
Overwriting only malloc(), with the CRT that the server is using, can
cause a mixup of incompatible malloc() and free() inside OpenSSL.
To fix, overwrite all memory functions.
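CRYPTO_set_mem_functions() is the OpenSSL 1.1+ API for this. A sketch,
with plain malloc/realloc/free standing in for the server's allocator:

    #include <openssl/crypto.h>
    #include <cstdlib>

    static void *ssl_malloc(size_t n, const char *, int)
    { return malloc(n); }
    static void *ssl_realloc(void *p, size_t n, const char *, int)
    { return realloc(p, n); }
    static void ssl_free(void *p, const char *, int)
    { free(p); }

    bool init_ssl_memory_functions()
    {
      // Must run before OpenSSL allocates anything; replacing all three
      // functions keeps malloc() and free() paired in one C runtime.
      return CRYPTO_set_mem_functions(ssl_malloc, ssl_realloc,
                                      ssl_free) == 1;
    }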
Assertion `table->field[0]->ptr >= table->record[0] &&
table->field[0]->ptr <= table->record[0] + table->s->reclength' failed in
handler::assert_icp_limitations.
table->move_fields has some limitations:
1. It cannot be used in cascade
2. It should always have a restoring pair.
Rule 1 is covered by assertions in handler::assert_icp_limitations
and handler::ptr_in_record (commit 30894fe9a9).
Rule 2 should be manually maintained with care. Hopefully, the rule 1 assertions
may sometimes help as well.
In ha_myisam::repair, both rules are broken. table->move_fields is used
asymmetrically there: it is set on every param->fix_record call
(i.e. in compute_vcols) but is restored only once, at the end of repair.
The reason for updating the field pointers on every call is that
compute_vcols can (supposedly) be called in parallel, that is, with the
same table, but different records.
The condition for "unmoving" the pointers in
ha_myisam::restore_vcos_after_repair is incorrect when stored vcols are
available, and myisam stores a VIRTUAL field if it's the only field in
the table (the record cannot be of zero length).
This patch solves the problem by "unmoving" the pointers symmetrically,
in compute_vcols. That is, both rules will be maintained.
Modified node config with longer timeouts for the suspect,
inactive, install and wait_prim timeouts. Increased
node_1 weight to keep it in the primary component when
other nodes are voted out.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Replication of MyISAM and Aria DML is experimental and best
effort only. An earlier change made INSERT SELECT on both
MyISAM and Aria replicate using TOI and STATEMENT
replication. Replication should happen only if the user
has set the needed wsrep_mode setting.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Fixed the used configuration and added a suppression for a warning
message. Test case changes only.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Modified the test configuration file to use wsrep_sync_wait
to make sure committed transactions are replicated before
the next operation.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The new runtime type diagnostic (MDEV-34490) has detected that the
classes Item_func_eq, Item_default_value and
Item_date_literal_for_invalid_dates incorrectly return an instance of
their ancestor classes when being cloned. This commit fixes that.
Additionally, it fixes a bug in Item_func_case_simple::do_build_clone()
which led to an endless loop of cloning function calls.
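A generic illustration of this bug class; the class names below are
placeholders, not the real Item hierarchy:

    struct Item_base
    {
      virtual Item_base *do_build_clone() const
      { return new Item_base(*this); }
      virtual ~Item_base() {}
    };

    struct Item_derived: public Item_base
    {
      int extra_state;
      // Wrong: `return new Item_base(*this);` clones only the ancestor
      // part of the object. Right: construct the most-derived class.
      Item_derived *do_build_clone() const override
      { return new Item_derived(*this); }
    };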
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Reason:
======
- InnoDB fails to load the instant alter table metadata from the
clustered index while loading the table definition.
The reason is that the InnoDB metadata blob has a column length
that exceeds the maximum fixed length column size.
Fix:
===
- InnoDB should treat the long fixed length column as a variable
length field that needs external storage while initializing
the field map for the instant alter operation (as sketched below).
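The rule, sketched; DICT_MAX_FIXED_COL_LEN is InnoDB's fixed-length
storage limit, with its value reproduced here only for illustration:

    // InnoDB's maximum length for a column stored as fixed-length
    // (see dict0mem.h in the source tree).
    static const unsigned DICT_MAX_FIXED_COL_LEN= 768;

    static bool store_as_variable_length(unsigned fixed_len)
    {
      // A "fixed length" column longer than the limit must be treated
      // as a variable-length field that may need external storage.
      return fixed_len > DICT_MAX_FIXED_COL_LEN;
    }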
Before this patch, the crash occurred when a single row dataset was used
and Item::remove_eq_conds() was called for HAVING. This function is not
supposed to be called after the elimination of multiple equalities.
To fix this problem, Item::val_int() is used instead of
Item::remove_eq_conds(). In this case the optimizer tries to evaluate
the condition for the single row dataset and discovers impossible
HAVING immediately. So the execution phase is skipped.
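In sketch form, with Item and JOIN reduced to stubs:

    struct Item_stub
    {
      virtual long long val_int()= 0;
      virtual ~Item_stub() {}
    };

    struct JOIN_stub
    {
      Item_stub *having;
      const char *zero_result_cause;
    };

    static bool check_having_on_single_row(JOIN_stub *join)
    {
      // Evaluate HAVING for the single-row dataset directly instead of
      // calling remove_eq_conds() after multiple-equality elimination.
      if (join->having && join->having->val_int() == 0)
      {
        join->zero_result_cause= "Impossible HAVING";
        return true;                        // skip the execution phase
      }
      return false;
    }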
Approved by Igor Babaev <igor@maridb.com>
Problem:
========
- After the commit ada1074bb1 (MDEV-14398),
fil_crypt_set_encrypt_tables() iterates through all tablespaces to
fill the default_encrypt tables list. This was a trigger to
encrypt or decrypt when the key rotation age is set to 0. But import
tablespace calls fil_crypt_set_encrypt_tables() unnecessarily.
The motivation for the call is to signal the encryption threads.
Fix:
====
ha_innobase::discard_or_import_tablespace: Remove the
fil_crypt_set_encrypt_tables() call and add the imported tablespace
to the default encrypt list if necessary.
Remove an assert added by the fix for MDEV-34417. BNL-H join can be
used with prefix keys. This happens when there are real prefix indexes
on the equi-join columns (although it probably doesn't make a lot of
sense). Anyway, remove the assert. The code receives properly truncated
key values for hashing/comparison, so it can handle them just fine.
- commit 85db534731 (MDEV-33400)
retains the instantness in the table definition after discarding the
tablespace. So there is no need to assign n_core_null_bytes
during instant table preparation unless it has not been
initialized.
Statements affected by this bug need all the following to be true:
1) a derived table or view whose specification contains a set
operation at the top level.
2) a grouping operator (group by/having) operating on a column alias
other than in the first select of the union/intersect.
3) an outer condition that will be pushed into all selects in this
union/intersect, either into the where or having clause.
When pushing a condition into all selects of a unit with more than one
select, pushdown_cond_for_derived() renames items so we can re-use the
condition being pushed.
These names need to be saved and reset for correct name resolution on
second execution of prepared statements.
Reviewed by Igor Babaev (igor@mariadb.com)
We need to work around deficiencies of Valgrind, and apparently
the previous work-around attempts
(such as d247d64988) do not work
anymore, definitely not on recent clang-based compilers.
MemorySanitizer should be fine; unfortunately we set HAVE_valgrind for it
as well.
It is not sufficient to check that the CPU supports the necessary
instructions. The operating system (or virtual machine hypervisor)
must also enable all the AVX registers to be saved and restored on a
context switch.
Because clang 8 does not support the compiler intrinsic _xgetbv(),
we will require clang 9 or later for enabling the use of VPCLMULQDQ
and the related AVX512 features.
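The OS-level part of the check, sketched; call it only after CPUID has
reported OSXSAVE, and the XCR0 bit mask follows the Intel SDM:

    #include <cstdint>
    #ifdef _MSC_VER
    # include <intrin.h>
    #endif

    static uint64_t read_xcr0()
    {
    #ifdef _MSC_VER
      return _xgetbv(0);
    #else
      uint32_t eax, edx;
      __asm__("xgetbv" : "=a"(eax), "=d"(edx) : "c"(0));
      return ((uint64_t) edx << 32) | eax;
    #endif
    }

    static bool os_enables_avx512()
    {
      // XCR0 bits 1 (SSE), 2 (AVX), 5 (opmask), 6 (ZMM0-15 upper half)
      // and 7 (ZMM16-31) must all be enabled by the OS.
      const uint64_t mask= 0xE6;
      return (read_xcr0() & mask) == mask;
    }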
When there are no bounds on the upper or lower part of the window,
it doesn't matter if the type is numeric.
It also doesn't matter how many ORDER BY items there are in the
query.
Reviewers: Sergei Petrunia and Oleg Smirnov
The new runtime diagnostic introduced with MDEV-34490 has detected
that `Item_int_with_ref` incorrectly returns an instance of its
ancestor class `Item_int`. This commit fixes that.
In addition, this commit reverts a part of the diagnostic related
to `clone_item()` checks. As it turned out, `clone_item()` is not required
to return an object of the same class as the cloned one. For example,
look at `Item_param::clone_item()`: it can return objects of `Item_null`,
`Item_int`, `Item_string`, etc., depending on the object state.
So the runtime type diagnostic is not applicable to `clone_item()` and
is disabled with this commit.
As similar diagnostic failures are expected to appear again
in the future, this commit introduces a new test file in the main
suite, item_types.test, to which new test cases may be added.
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
When mysqldump is run to dump the `mysql` system database, it generates
INSERT statements into the table `mysql.gtid_slave_pos`.
After running the backup script, those inserts did not produce the
expected gtid state on the slave. In particular, the maximum of
mysql.gtid_slave_pos.sub_id did not make it into
rpl_global_gtid_slave_state.last_sub_id,
an in-memory object that is supposed to match the current state of the
table. And that was regardless of whether the --gtid option was
specified or not. Later, when the backup recipient server starts as a
slave in *non-gtid* mode, this desynchronization may lead to a
duplicate key error.
This effect is corrected for --gtid mode mysqldump/mariadb-dump only,
as follows. The fixes ensure the insert block of the dump script is
followed by a "summing-up" SET @@global.gtid_slave_pos assignment.
For the implementation part, note that a deferred print-out of
SET-gtid_slave_pos and the associated comments is preferred over
relocating the entire blocks of if (opt_master,slave_data &&
do_show_master,slave_status) ... because of compatibility
concerns. Namely, an error inside do_show_*() is handled in the new
code the same way, and as early, as before.
A regression test can be run in how-to-reproduce mode as well.
One affected mtr test was observed:
the rpl_mysqldump_slave.result "mismatch" now shows the new deferred
printing of the SET-gtid_slave_pos policy in action.
The test was missing a save_master_gtid.inc on the master,
leading to the slave thinking it was in sync after executing
sync_with_master_gtid.inc, despite not having executed the
latest transaction. This skipped transaction, an XA COMMIT,
was supposed to raise an error (because its XID could not
be found) that would then be ignored, as the replication
filters would filter out the target database. However, if the
slave was able to stop before executing the transaction, then
the replication filter is reset (to empty), and when the
slave is later restarted, that transaction's error would
no longer be ignored.
Additionally, as the test cases added in MDEV-33921 rely
on GTID synchronization, the test cases now force
master_use_gtid=slave_pos for consistency.