vsnprintf() takes the space needed for the trailing '\0' into consideration and copies at most n-1 characters to the destination buffer.
With the old code, only sizeof(buf)-2 characters were copied, so the last character of the message could be lost.
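A minimal standalone illustration of these size semantics (using snprintf(),
which shares them with vsnprintf(); the buffer and message are made up here,
this is not the server code):

  #include <cstdio>

  int main()
  {
    char buf[8];
    /* Old pattern: passing sizeof(buf)-1 reserves room for '\0' twice,
       so only sizeof(buf)-2 = 6 message characters survive. */
    std::snprintf(buf, sizeof(buf) - 1, "%s", "abcdefg");
    std::printf("%s\n", buf);                 /* prints "abcdef" */
    /* snprintf()/vsnprintf() already count the '\0' within the given size,
       so passing sizeof(buf) keeps sizeof(buf)-1 = 7 message characters. */
    std::snprintf(buf, sizeof(buf), "%s", "abcdefg");
    std::printf("%s\n", buf);                 /* prints "abcdefg" */
    return 0;
  }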
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Since commit fb335b48b5 we may have
a null pointer in purge_sys.query when fetch_data_into_cache() is
invoked and innodb_force_recovery>4. This is because the call to
purge_sys.create() would be skipped.
fetch_data_into_cache(): Load the purge_sys pseudo transaction pointer
into a local variable (null pointer if purge_sys is not initialized).
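A hedged, self-contained sketch of that pattern, assuming the pseudo
transaction hangs off purge_sys.query (the types and names below are
illustrative stand-ins, not the actual InnoDB declarations):

  #include <cstdio>

  struct trx_t { int id; };
  struct que_t { trx_t *trx; };
  struct PurgeSys { que_t *query= nullptr; };  /* stays null when create() is skipped */

  static PurgeSys purge_sys;

  void fetch_data_into_cache()
  {
    /* Read the possibly-null pointer once into a local variable and
       dereference it only when purge_sys was actually initialized. */
    trx_t *purge_trx= purge_sys.query ? purge_sys.query->trx : nullptr;
    if (purge_trx == nullptr)
    {
      std::puts("purge_sys not initialized (innodb_force_recovery > 4)");
      return;
    }
    /* ... use purge_trx as before ... */
  }

  int main() { fetch_data_into_cache(); return 0; }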
This commit contains a fix where the replication write set for a CREATE TABLE
will contain, as certification keys, the table names of all FK references.
With this, all DML on the FK parent tables will conflict with the CREATE TABLE
statement.
There is also a new test, galera.MDEV-27276, to verify the fix.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This commit contains an mtr test where two transactions delete a row from
two separate tables, which cascades an FK delete for the same row in
a third table. The second replica node is configured with 2 applier threads,
and the test will fail if these two transactions are applied in parallel.
The actual fix, in this commit, is to mark a transaction as unsafe for
parallel applying when it traverses into a cascading delete operation.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
1) Removed symlinks that are not very well supported in tar under Windows.
2) Added comment + changed code formatting in viosslfactories.c
3) Fixed a small bug in the yassl code.
4) Fixed a typo in the script code.
MDEV-25803 excluded some cases from the key sort upon ALTER TABLE. That
exclusion depends in particular on the ALTER_ADD_INDEX flag. Creating a column
of the SERIAL data type missed that flag, although the equivalent operation
  alter table t1 add x bigint unsigned not null auto_increment unique;
has the ALTER_ADD_INDEX flag set.
Repeated execution of a query containing the IN clause with string literals,
in an environment where the server variable in_predicate_conversion_threshold
is set, results in abnormal server termination when the query is run
as a prepared statement and conversion of charsets for string values in the
query is required.
The reason for the abnormal termination is that the instances of the class
Item_string created when transforming the IN clause into a subquery were
allocated on the runtime memory root, which is deallocated when execution of
the prepared statement finishes. On the other hand, references to Items placed
on the deallocated memory root still exist in objects of the class
table_value_constr. Subsequent execution of the same prepared statement
dereferences pointers into already deallocated memory, which can lead to
undefined behaviour.
To fix the issue, the values pushed into a values list for the TVC are created
by cloning their original items. This way, the cloned items are allocated on
the PS mem_root and, as a consequence, no dangling pointers remain.
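A self-contained sketch of the lifetime problem and of the fix pattern
(MemRoot, clone() and the literal below are illustrative stand-ins, not the
server's classes):

  #include <string>
  #include <vector>

  /* Stand-in for a memory root: everything allocated from it is freed
     when the root itself goes away. */
  struct MemRoot
  {
    std::vector<std::string*> allocations;
    std::string *clone(const std::string &s)
    { allocations.push_back(new std::string(s)); return allocations.back(); }
    ~MemRoot() { for (std::string *p : allocations) delete p; }
  };

  int main()
  {
    MemRoot statement_root;                 /* lives as long as the PS */
    const std::string *tvc_value= nullptr;  /* reference kept across executions */

    for (int execution= 0; execution < 2; execution++)
    {
      MemRoot runtime_root;                 /* deallocated after every EXECUTE */
      if (tvc_value == nullptr)
      {
        /* Bug pattern: tvc_value= runtime_root.clone("string literal");
           would leave a dangling pointer for the second execution.
           Fix pattern: clone the value onto the statement-lifetime root. */
        tvc_value= statement_root.clone("string literal");
      }
      /* ... every execution can now safely read *tvc_value ... */
    }
    return 0;
  }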
Consider the following use case:
MariaDB [test]> CREATE TABLE t1 (field1 BIGINT DEFAULT -1);
MariaDB [test]> CREATE VIEW v1 AS SELECT DISTINCT field1 FROM t1;
Repeated execution of the following query as a Prepared Statement
MariaDB [test]> PREPARE stmt FROM 'SELECT * FROM v1 WHERE field1 <=> NULL';
MariaDB [test]> EXECUTE stmt;
results in a crash for a server built with DEBUG.
MariaDB [test]> EXECUTE stmt;
ERROR 2013 (HY000): Lost connection to MySQL server during query
Assertion failed: (!result), function convert_const_to_int, file item_cmpfunc.cc, line 476.
Abort trap: 6 (core dumped)
The crash inside the function convert_const_to_int() happens because the
value -1 is stored in an instance of the class Field_longlong
when its original value is restored in the statement
  result= field->store(orig_field_val, TRUE);
which assigns the value 1 to the variable 'result', with a subsequent
crash in the DBUG_ASSERT statement that follows it:
  DBUG_ASSERT(!result);
The main question here is why this assertion failure happens on the second
execution of the prepared statement and doesn't on the first one.
On first handling of the statement
'EXECUTE stmt;'
a temporary table is created for serving the query involving the view 'v1'.
The table is created by the function create_tmp_table() in the following
call trace (trace #1):
JOIN::prepare (at sql_select.cc:725)
st_select_lex::handle_derived
LEX::handle_list_of_derived
TABLE_LIST::handle_derived
mysql_handle_single_derived
mysql_derived_prepare
select_union::create_result_table
create_tmp_table
Note that the data member TABLE::status of the TABLE instance returned by the
function create_tmp_table() has the value 0.
Later, the function setup_table_map() is called on the TABLE instance just
created for the temporary table (call trace #2 below):
JOIN::prepare (at sql_select.cc:737)
setup_tables_and_check_access
setup_tables
setup_table_map
where the data member TABLE::status is set to the value STATUS_NO_RECORD.
After that, when execution of the method JOIN::prepare reaches the call to
the function setup_without_group(), the following call trace is invoked:
JOIN::prepare
setup_without_group
setup_conds
Item_func::fix_fields
Item_func_equal::fix_length_and_dec
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func::convert_const_compared_to_int_field
convert_const_to_int
There is the following code snippet in the function convert_const_to_int()
at line item_cmpfunc.cc:448:
  bool save_field_value= (field_item->const_item() ||
                          !(field->table->status & STATUS_NO_RECORD));
Since field->table->status has the STATUS_NO_RECORD bit set, the variable
save_field_value is false and therefore neither the method
Field_longlong::val_int() nor the method Field_longlong::store() is called
on the Field instance that holds the numeric value -1.
That is the reason why the first execution of the prepared statement for the query
  'SELECT * FROM v1 WHERE field1 <=> NULL'
succeeds.
On the second run of the statement 'EXECUTE stmt' a new temporary table
is also created through call trace #1, but trace #2 is not executed
because the data member SELECT_LEX::first_cond_optimization was set
to false on the first execution of the prepared statement (in the method
JOIN::optimize_inner()). As a consequence, the data member TABLE::status of
the newly created temporary table doesn't have the flag STATUS_NO_RECORD set,
and therefore on re-execution of the prepared statement the methods
Field_longlong::val_int() and Field_longlong::store() are called for the field
holding the value -1, and the DBUG_ASSERT(!result) fires.
To fix the issue, the data member TABLE::status has to be assigned the value
STATUS_NO_RECORD in every place where the macro empty_record() is called
to clear the record of a just-instantiated TABLE object created on behalf of
a new temporary table.
Example build: ./BUILD/compile-pentium64-valgrind-max
Fixes:
- sp-no-valgrind failed if the binary was built for valgrind, as in this case
mem_root is allocated in very small chunks which the test cannot handle.
Fixed by testing for a valgrind build.
- truncate_notembedded failed in reap because more memory was used.
Fixed by allowing reap to fail too.
Follow-up to the fix for MDEV-25858: when test_if_skip_sort_order() decides
to use an index to satisfy an ORDER BY ... LIMIT clause, it should
disable the "Range Checked for Each Record" optimization.
Do this in all cases.
This enables optimizer_trace output for the next SQL command.
It is identical to what one would get by doing:
- Store the value of @@optimizer_trace
- Set @@optimizer_trace="enabled=on"
- Run the query
- SELECT * FROM OPTIMIZER_TRACE
- Restore the value of @@optimizer_trace
This is a great time saver when one wants to quickly check the optimizer
trace for a query in an mtr test.
The issue is that when recovery is about to create a new data or index
page, it checks whether the page already exists.
If the page does not exist (the file is too short) or contains a wrong checksum,
then the recovery code will recreate the page.
The bug was that the code that checked whether the page existed didn't take
encrypted pages into account.
Fixed by adding a check for the case where the page could not be encrypted.
I also added some code to silence decryption errors for new pages.
The test case and some inspiration for how to solve this come from
the pull request by alexandr.miloslavsky.
create_log_files(): Check log_set_capacity() before modifying
or creating any log files.
innobase_start_or_create_for_mysql(): If create_log_files()
fails and we were initializing a new database, delete the
system tablespace files before exiting.
1. Galera SST scripts should use ssl_capath (not ssl_ca) for CA
directory. The current implementation tries to automatically
detect the path using the trailing slash in the ssl_ca variable
value, but this approach is not compatible with the server
configuration. Now, by analogy with the server, SST scripts
also use a separate ssl_capath variable. In addition, a similar
tcapath variable has been added for the old-style configuration
(in the "sst" section).
2. Openssl utility detection made more reliable.
3. Removed extra spaces in automatically generated command lines -
to simplify debugging of the SST scripts.
4. In general, the code for detecting the presence or absence of
auxiliary utilities has been improved - it is made more reliable
in some configurations (and for shells other than bash).
The mtr test galera.galera_UK_conflict has rather convoluted logic in test scenario 2.
The test would always fail with:
CURRENT_TEST: galera.galera_UK_conflict
mysqltest: In included file "./include/galera_wait_sync_point.inc":
included from /home/seppo/work/wsrep/mariadb-server/mysql-test/suite/galera/t/galera_UK_conflict.test at line 216:
At line 3: query 'SET SESSION wsrep_on = 0' failed: 1179: You are not allowed to execute this command in a transaction
This happens because wait_condition is called in the wrong connection (node_1, which is executing the MST transaction).
The test is fixed by using the control connection for the wait_condition check, and the new result is recorded as well.
Fixes to make the galera.MDEV-20793 test deterministic.
Specifically, after --send COMMIT, there is now a sync point to catch a known state of the COMMIT execution
It's not printed and not cleaned up without perfschema,
so it isn't supposed to be written into either.
This fixes "Memory not freed" warnings when early command-line
options produce warnings in non-perfschema builds.
CMake Error in wsrep-lib/CMakeLists.txt:
The custom command generating
/Users/name/build/mariadb-server/sql/lex_token.h
is attached to multiple targets:
GenServerSource
sql
but none of these is a common dependency of the other(s). This is not
allowed by the Xcode "new build system".
A part of the test main.long_unique attempts to insert records
with two 60,000,001-byte columns. Let us move that test into
a separate file main.long_unique_big, declared as a big test,
so that it can be skipped in environments with limited memory.
Hi,
if the pid-file option is configured more than once (e.g. multiple times in
different files), my_print_defaults prints it twice, which makes the logrotate
postrotate script fail with a syntax error. Debian fixed this already
(https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830976#42).
Perhaps you could implement this small change in the other branches as well?
Thanks,
Christopher
* galera_kill_applier : we should make sure that the node has
the correct number of wsrep appliers
* galera_bad_wsrep_new_cluster : This test restarts both nodes,
so it does not play well with mtr. Make sure it is run alone.
* galera_update_limit : Make sure we have a PK when needed
* galera_as_slave_replay : bf abort was not consistent
* galera_unicode_pk : Add wait_conditions so that all nodes
are part of the cluster and the DDL and INSERT have replicated before
any further operations are done.
* galera_bf_abort_at_after_statement : Add wait_conditions
to make sure all nodes are part of the cluster and that the DDL
and INSERT have replicated. Make sure we reset DEBUG_SYNC.
MariaDB server crashes on ARM (a weak memory model architecture) while
concurrently executing l_find to load node->key and add_to_purgatory
to store node->key = NULL. l_find then uses the key (which is NULL) and
passes it to a comparison function.
The specific problem is the out-of-order execution that happens on a
weak memory model architecture. Two essential reorderings are possible,
which need to be prevented.
a) As l_find has no barriers in place between the optimistic read of
the key field lf_hash.cc#L117 and the verification of link lf_hash.cc#L124,
the processor can reorder the load to happen after the while-loop.
In that case, a concurrent thread executing add_to_purgatory on the same
node can be scheduled to store NULL at the key field lf_alloc-pin.c#L253
before key is loaded in l_find.
b) A node is marked as deleted by a CAS in l_delete lf_hash.cc#L247 and
taken off the list with a follow-up CAS lf_hash.cc#L252. Only if both
CAS operations succeed is the key field written to by add_to_purgatory. However,
due to a missing barrier, the relaxed store of key lf_alloc-pin.c#L253
can be moved ahead of the two CAS operations, which makes the key value
stored by add_to_purgatory for its local purgatory list visible to all threads
operating on the list. As the node is not marked as deleted yet, the
same error occurs in l_find.
This change makes three accesses atomic (see the sketch below):
* optimistic read of key in l_find lf_hash.cc#L117
* read of link for verification lf_hash.cc#L124
* write of key in add_to_purgatory lf_alloc-pin.c#L253
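A self-contained illustration of the required ordering, written with C++
std::atomic (the member names mirror lf_hash, but this is only a sketch of the
constraints above, not necessarily the primitives used by the actual patch):

  #include <atomic>
  #include <cstdint>

  struct Node
  {
    std::atomic<const char*>    key{"payload"};
    std::atomic<std::uintptr_t> link{0};   /* low bit set == logically deleted */
  };

  /* add_to_purgatory side: the key may only be cleared after the node is
     visibly marked as deleted; acq_rel on the CAS keeps the later key store
     from being hoisted above it (problem b). */
  void mark_deleted_then_clear_key(Node *n)
  {
    std::uintptr_t old_link= n->link.load(std::memory_order_relaxed);
    /* Stand-in for the CAS steps performed by l_delete. */
    while (!n->link.compare_exchange_weak(old_link, old_link | 1,
                                          std::memory_order_acq_rel,
                                          std::memory_order_relaxed))
    {}
    n->key.store(nullptr, std::memory_order_release);
  }

  /* l_find side: the optimistic key read must stay before the link
     verification; the acquire load pins it in place (problem a). */
  const char *key_if_still_live(Node *n)
  {
    const char *k= n->key.load(std::memory_order_acquire);
    if (n->link.load(std::memory_order_acquire) & 1)
      return nullptr;        /* node already deleted; the caller would retry */
    return k;                /* verification passed, so k cannot be NULL here */
  }

  int main()
  {
    Node n;
    return key_if_still_live(&n) == nullptr;  /* live node: returns 0 */
  }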
Reviewers: Sergei Vojtovich, Sergei Golubchik
Fixes: MDEV-23510 / d30c1331a18d875e553f3fcf544997e4f33fb943
Print this piece when we've just made the choice to convert to semi-join.
Also, print it when we've already made that choice before:
transformation": {
"select_id": 2,
"from": "IN (SELECT)",
"to": "semijoin",
"chosen": true
}
This bug was introduced by commit be00e279c6.
The commit was applied for the task MDEV-6480, which allowed removing top-level
disjuncts from WHERE conditions if the range optimizer evaluated them
as always equal to FALSE/NULL.
If such disjuncts are removed, the WHERE condition may become an AND formula,
and if this formula contains multiple equalities, the field JOIN::item_equal
must be updated to refer to these equalities. The above-mentioned commit
forgot to do this, which could cause crashes for some queries.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
close_connections() in mysqld.cc sends a signal to all threads.
But InnoDB is too busy purging and doesn't react immediately.
close_connections() waits 20 seconds, which isn't enough in this
particular case, and then unlinks all threads from
the list and forcibly closes their vio connection.
InnoDB background threads have no vio connection to close, but
they're unlinked all the same. So when they eventually notice
the shutdown request and try to unlink themselves, they fail to
assert that they're still linked.
Fix: don't assert_linked, as another thread can unlink this THD at any time.