Small follow-up fix to MDEV-23175 to ensure the faster clock option is used
on FreeBSD and to keep compatibility with Solaris, whose clock isn't high
resolution. ftime is left as a fallback in case an implementation doesn't
provide any of these clocks.
FreeBSD
$ ./unittest/mysys/my_rdtsc-t
1..11
# ----- Routine ---------------
# myt.cycles.routine : 5
# myt.nanoseconds.routine : 11
# myt.microseconds.routine : 13
# myt.milliseconds.routine : 11
# myt.ticks.routine : 17
# ----- Frequency -------------
# myt.cycles.frequency : 3610295566
# myt.nanoseconds.frequency : 1000000000
# myt.microseconds.frequency : 1000000
# myt.milliseconds.frequency : 899
# myt.ticks.frequency : 136
# ----- Resolution ------------
# myt.cycles.resolution : 1
# myt.nanoseconds.resolution : 1
# myt.microseconds.resolution : 1
# myt.milliseconds.resolution : 7
# myt.ticks.resolution : 1
# ----- Overhead --------------
# myt.cycles.overhead : 26
# myt.nanoseconds.overhead : 19140
# myt.microseconds.overhead : 19036
# myt.milliseconds.overhead : 578
# myt.ticks.overhead : 21544
ok 1 - my_timer_init() did not crash
ok 2 - The cycle timer is strictly increasing
ok 3 - The cycle timer is implemented
ok 4 - The nanosecond timer is increasing
ok 5 - The nanosecond timer is implemented
ok 6 - The microsecond timer is increasing
ok 7 - The microsecond timer is implemented
ok 8 - The millisecond timer is increasing
ok 9 - The millisecond timer is implemented
ok 10 - The tick timer is increasing
ok 11 - The tick timer is implemented
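Below is a minimal sketch of the kind of clock selection this change is about.
The clock IDs and the preference order are illustrative assumptions, not the
actual my_rdtsc.c code: prefer a cheaper monotonic clock where the platform
provides one (e.g. CLOCK_MONOTONIC_FAST on FreeBSD), fall back to
CLOCK_MONOTONIC, and use ftime() only when none of these clocks is available.

  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>
  #include <sys/timeb.h>   /* for the ftime() fallback */

  static uint64_t my_milliseconds(void)
  {
  #if defined(CLOCK_MONOTONIC_FAST)       /* FreeBSD: faster, coarser clock */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_FAST, &ts);
    return (uint64_t) ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
  #elif defined(CLOCK_MONOTONIC_COARSE)   /* Linux: faster, coarser clock */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
    return (uint64_t) ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
  #elif defined(CLOCK_MONOTONIC)          /* generic POSIX monotonic clock */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t) ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
  #else                                   /* backup when none of these clocks exist */
    struct timeb tb;
    ftime(&tb);
    return (uint64_t) tb.time * 1000 + tb.millitm;
  #endif
  }

  int main(void)
  {
    printf("%llu\n", (unsigned long long) my_milliseconds());
    return 0;
  }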
The old code erroneously used default_charset_info to compare field names.
default_charset_info can point to any arbitrary collation,
including ucs2*, utf16* and utf32* collations, and others that do not
support strcasecmp().
my_charset_utf8mb4_unicode_ci, which is used in this scenario:
CREATE TABLE t1 ENGINE=InnoDB WITH SYSTEM VERSIONING AS SELECT 0;
does not support strcasecmp().
Fixing the code to use Lex_ident::streq(), which uses
system_charset_info instead of default_charset_info.
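A minimal self-contained sketch of the principle behind the fix follows. The
types and collation labels are stand-ins (the server uses CHARSET_INFO,
system_charset_info and Lex_ident::streq()): identifier comparison is pinned
to a collation known to support case-insensitive comparison instead of being
routed through whatever collation default_charset_info happens to point to.

  #include <cassert>
  #include <strings.h>               /* POSIX strcasecmp() */

  struct Collation                   /* stand-in for CHARSET_INFO */
  {
    const char *name;
    /* null for collations (ucs2*, utf16*, utf32*, ...) without strcasecmp() support */
    int (*casecmp)(const char *, const char *);
  };

  static Collation system_collation= { "system charset (utf8)", strcasecmp };
  static Collation session_default=  { "some utf16 collation",  nullptr    };

  /* Field-name comparison pinned to the system collation. */
  static bool fields_eq(const char *a, const char *b)
  {
    assert(system_collation.casecmp != nullptr);   /* always holds by construction */
    return system_collation.casecmp(a, b) == 0;
  }

  int main()
  {
    assert(fields_eq("Field1", "field1"));
    assert(session_default.casecmp == nullptr);    /* comparing through it would crash */
    return 0;
  }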
Since commit fb335b48b5 we may have
a null pointer in purge_sys.query when fetch_data_into_cache() is
invoked and innodb_force_recovery>4. This is because the call to
purge_sys.create() would be skipped.
fetch_data_into_cache(): Load the purge_sys pseudo-transaction pointer
into a local variable (it is a null pointer if purge_sys is not initialized).
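A minimal self-contained sketch of the pattern described above, with stand-in
types whose names only loosely follow the purge_sys members mentioned in this
message: the possibly missing pointer is loaded into a local variable once,
and a null value is treated as "purge subsystem not created" instead of being
dereferenced.

  #include <cstdio>

  struct Trx   { long id; };
  struct Query { Trx *trx; };

  struct PurgeSys                         // stand-in for purge_sys
  {
    Query *query= nullptr;                // stays null when create() was skipped
    Trx *pseudo_trx() const { return query ? query->trx : nullptr; }
  };

  static void fetch_data_into_cache(const PurgeSys &purge_sys)
  {
    const Trx *trx= purge_sys.pseudo_trx();     // single load into a local variable
    if (!trx)                                   // innodb_force_recovery > 4 case
    {
      std::puts("purge_sys not initialized; skip the purge-view checks");
      return;
    }
    std::printf("purge view trx id: %ld\n", trx->id);
  }

  int main()
  {
    PurgeSys uninitialized;                     // purge_sys.create() was skipped
    fetch_data_into_cache(uninitialized);

    Trx t{42};
    Query q{&t};
    PurgeSys initialized;
    initialized.query= &q;
    fetch_data_into_cache(initialized);
    return 0;
  }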
1) Removed symlinks that are not very well supported in tar under Windows.
2) Added comment + changed code formatting in viosslfactories.c
3) Fixed a small bug in the yassl code.
4) Fixed a typo in the script code.
MDEV-25803 excluded some cases from key sorting on ALTER TABLE. That
exclusion depends in particular on the ALTER_ADD_INDEX flag. Creating a
column of the SERIAL data type missed setting that flag, although the
equivalent operation
alter table t1 add x bigint unsigned not null auto_increment unique;
does set ALTER_ADD_INDEX.
Repeated execution of a query containing an IN clause with string literals,
in an environment where the server variable in_predicate_conversion_threshold
is set, results in abnormal server termination when the query is run as a
Prepared Statement and charset conversion of the string values in the query
is required.
The reason for the abnormal termination is that the instances of the class
Item_string created while transforming the IN clause into a subquery were
allocated on the runtime memory root, which is deallocated when execution of
the Prepared Statement finishes. However, references to Items placed on the
deallocated memory root still exist in objects of the class
table_value_constr. Subsequent execution of the same prepared statement then
dereferences pointers to already deallocated memory, which can lead to
undefined behaviour.
To fix the issue, the values pushed into the values list of the TVC are
created by cloning their original items. This way the cloned items are
allocated on the PS memory root and no dangling pointers remain.
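A minimal self-contained sketch of the idea behind the fix, with stand-in
types (the server uses MEM_ROOT and Item cloning; the Arena class and its
methods below are assumptions for illustration): anything referenced across
executions of a prepared statement must be cloned onto the statement-lifetime
memory root, not kept as a pointer into the per-execution root that is freed
after every EXECUTE.

  #include <memory>
  #include <string>
  #include <vector>

  struct Arena                            // stand-in for MEM_ROOT
  {
    std::vector<std::unique_ptr<std::string>> items;
    std::string *clone(const std::string &src)
    {
      items.push_back(std::make_unique<std::string>(src));
      return items.back().get();
    }
    void clear() { items.clear(); }       // frees everything allocated on this arena
  };

  int main()
  {
    Arena statement_arena;                // lives as long as the prepared statement
    Arena execution_arena;                // freed after every EXECUTE

    std::string *literal= execution_arena.clone("'abc'");

    // Wrong: keeping a pointer into the execution arena inside a long-lived
    // structure (what the TVC values list effectively did before the fix).
    // Right: clone the item onto the statement arena before storing it.
    std::string *kept= statement_arena.clone(*literal);

    execution_arena.clear();              // end of the first EXECUTE
    return kept->size() == 5 ? 0 : 1;     // safe: 'kept' does not dangle
  }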
Consider the following use case:
MariaDB [test]> CREATE TABLE t1 (field1 BIGINT DEFAULT -1);
MariaDB [test]> CREATE VIEW v1 AS SELECT DISTINCT field1 FROM t1;
Repeated execution of the following query as a Prepared Statement
MariaDB [test]> PREPARE stmt FROM 'SELECT * FROM v1 WHERE field1 <=> NULL';
MariaDB [test]> EXECUTE stmt;
results in a crash for a server built with DEBUG.
MariaDB [test]> EXECUTE stmt;
ERROR 2013 (HY000): Lost connection to MySQL server during query
Assertion failed: (!result), function convert_const_to_int, file item_cmpfunc.cc, line 476.
Abort trap: 6 (core dumped)
The crash inside the function convert_const_to_int() happens because the
value -1 is stored into an instance of the class Field_longlong when its
original value is restored by the statement
  result= field->store(orig_field_val, TRUE);
which assigns the value 1 to the variable 'result' and triggers the crash in
the DBUG_ASSERT statement that follows it:
  DBUG_ASSERT(!result);
The main question here is why this assertion failure happens on the second
execution of the prepared statement and not on the first one.
On first handling of the statement
'EXECUTE stmt;'
a temporary table is created for serving the query involving the view 'v1'.
The table is created by the function create_tmp_table() in the following
call trace (trace #1):
JOIN::prepare (at sql_select.cc:725)
st_select_lex::handle_derived
LEX::handle_list_of_derived
TABLE_LIST::handle_derived
mysql_handle_single_derived
mysql_derived_prepare
select_union::create_result_table
create_tmp_table
Note, that the data member TABLE::status of a TABLE instance returned by the
function create_tmp_table() has the value 0.
Later the function setup_table_map() is called on the TABLE instance just
created for the temporary table (call trace #2 below):
JOIN::prepare (at sql_select.cc:737)
setup_tables_and_check_access
setup_tables
setup_table_map
where the data member TABLE::status is set to the value STATUS_NO_RECORD.
After that, when execution of the method JOIN::prepare reaches the call of
the function setup_without_group(), the following call trace is invoked:
JOIN::prepare
setup_without_group
setup_conds
Item_func::fix_fields
Item_func_equal::fix_length_and_dec
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func::convert_const_compared_to_int_field
convert_const_to_int
There is the following code snippet in the function convert_const_to_int()
at the line item_cmpfunc.cc:448:
  bool save_field_value= (field_item->const_item() ||
                          !(field->table->status & STATUS_NO_RECORD));
Since field->table->status has the STATUS_NO_RECORD bit set, the variable
save_field_value is false and therefore neither the method
Field_longlong::val_int() nor the method Field_longlong::store() is called
on the Field instance holding the numeric value -1.
That is why the first execution of the Prepared Statement for the query
'SELECT * FROM v1 WHERE field1 <=> NULL'
succeeds.
On the second run of the statement 'EXECUTE stmt' a new temporary table
is again created via call trace #1, but trace #2 is not executed because
the data member SELECT_LEX::first_cond_optimization was set to false during
the first execution of the prepared statement (in the method
JOIN::optimize_inner()). As a consequence, the data member TABLE::status of
the just-created temporary table does not have the STATUS_NO_RECORD flag set,
and therefore on re-execution of the prepared statement the methods
Field_longlong::val_int() and Field_longlong::store() are called for the
field holding the value -1, and the DBUG_ASSERT(!result) fires.
To fix the issue, the data member TABLE::status has to be assigned the value
STATUS_NO_RECORD in every place where the macro empty_record() is called
to clear the record of a just-instantiated TABLE object created for a new
temporary table.
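A minimal self-contained sketch of the invariant described above, with
stand-in types for TABLE, STATUS_NO_RECORD and the empty_record() macro (the
helper name is an assumption, not the actual patch): clearing the record
buffer of a freshly instantiated temporary table goes together with marking
the table as having no current record, so that code like
convert_const_to_int() does not trust the meaningless record contents.

  #include <cassert>
  #include <cstring>

  static const unsigned STATUS_NO_RECORD= 1;    // stand-in flag value

  struct Table                                  // stand-in for TABLE
  {
    unsigned char record[16];
    unsigned status= 0;
  };

  static void empty_record(Table *t)            // stand-in for the macro
  {
    memset(t->record, 0, sizeof t->record);
  }

  // Pair every empty_record() on a new temporary table with setting
  // STATUS_NO_RECORD, so that optimizations such as convert_const_to_int()
  // do not trust the (meaningless) contents of the record buffer.
  static void init_tmp_table_record(Table *t)
  {
    empty_record(t);
    t->status|= STATUS_NO_RECORD;
  }

  int main()
  {
    Table tmp;
    init_tmp_table_record(&tmp);
    assert(tmp.status & STATUS_NO_RECORD);
    return 0;
  }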
Follow-up to the fix for MDEV-25858: when test_if_skip_sort_order() decides
to use an index to satisfy an ORDER BY ... LIMIT clause, it should
disable the "Range Checked for Each Record" optimization.
Do this in all cases.
create_log_files(): Check log_set_capacity() before modifying
or creating any log files.
innobase_start_or_create_for_mysql(): If create_log_files()
fails and we were initializing a new database, delete the
system tablespace files before exiting.
1. Galera SST scripts should use ssl_capath (not ssl_ca) for CA
directory. The current implementation tries to automatically
detect the path using the trailing slash in the ssl_ca variable
value, but this approach is not compatible with the server
configuration. Now, by analogy with the server, SST scripts
also use a separate ssl_capath variable. In addition, a similar
tcapath variable has been added for the old-style configuration
(in the "sst" section).
2. Openssl utility detection made more reliable.
3. Removed extra spaces in automatically generated command lines -
to simplify debugging of the SST scripts.
4. In general, the code for detecting the presence or absence of
auxiliary utilities has been improved - it is made more reliable
in some configurations (and for shells other than bash).
it's not printed and not cleaned up without perfschema,
so it isn't supposed to be written into either.
this fixes "Memory not freed" warnings when early command line
options produce warnings in non-perfschema builds
CMake Error in wsrep-lib/CMakeLists.txt:
The custom command generating
/Users/name/build/mariadb-server/sql/lex_token.h
is attached to multiple targets:
GenServerSource
sql
but none of these is a common dependency of the other(s). This is not
allowed by the Xcode "new build system".
MariaDB server crashes on ARM (a weak memory model architecture) while
concurrently executing l_find to load node->key and add_to_purgatory
to store node->key = NULL. l_find then uses key (which is NULL) and
passes it to a comparison function.
The specific problem is the out-of-order execution that happens on a
weak memory model architecture. Two essential reorderings are possible,
which need to be prevented.
a) As l_find has no barriers in place between the optimistic read of
the key field lf_hash.cc#L117 and the verification of link lf_hash.cc#L124,
the processor can reorder the load to happen after the while-loop.
In that case, a concurrent thread executing add_to_purgatory on the same
node can be scheduled to store NULL at the key field lf_alloc-pin.c#L253
before key is loaded in l_find.
b) A node is marked as deleted by a CAS in l_delete lf_hash.cc#L247 and
taken off the list with a subsequent CAS lf_hash.cc#L252. Only if both
CAS operations succeed is the key field written to by add_to_purgatory.
However, due to a missing barrier, the relaxed store of key lf_alloc-pin.c#L253
can be moved ahead of the two CAS operations, which makes the value of
the local purgatory list stored by add_to_purgatory visible to all threads
operating on the list. As the node is not marked as deleted yet, the
same error occurs in l_find.
This change makes three accesses atomic:
* optimistic read of key in l_find lf_hash.cc#L117
* read of link for verification lf_hash.cc#L124
* write of key in add_to_purgatory lf_alloc-pin.c#L253
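The following is a minimal self-contained illustration of the two orderings
described in a) and b), expressed with standard C++11 atomics; the server
uses its own my_atomic/lf_hash primitives, so the names and the particular
memory orders below are a sketch of one way to express the constraints, not
the actual patch.

  #include <atomic>
  #include <cassert>
  #include <thread>

  struct Node
  {
    std::atomic<const char *> key{"k"};
    std::atomic<bool> deleted{false};          // stands in for the marked link
  };

  static void delete_side(Node *n)             // l_delete() + add_to_purgatory()
  {
    n->deleted.store(true, std::memory_order_relaxed);  // node taken off the list
    // Release: the NULL key cannot become visible before the delete mark (b).
    n->key.store(nullptr, std::memory_order_release);
  }

  static void find_side(Node *n)               // l_find()
  {
    // Acquire: the verification load below cannot be satisfied ahead of this
    // optimistic load of the key (a).
    const char *k= n->key.load(std::memory_order_acquire);
    if (!n->deleted.load(std::memory_order_relaxed))
      assert(k != nullptr);                    // safe to hand k to the comparator
  }

  int main()
  {
    Node n;
    std::thread t1(delete_side, &n), t2(find_side, &n);
    t1.join();
    t2.join();
    return 0;
  }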
Reviewers: Sergei Vojtovich, Sergei Golubchik
Fixes: MDEV-23510 / d30c1331a18d875e553f3fcf544997e4f33fb943
This bug was introduced by commit be00e279c6
The commit was applied for the task MDEV-6480, which allowed removal of
top-level disjuncts from WHERE conditions if the range optimizer evaluated
them as always equal to FALSE/NULL.
If such disjuncts are removed, the WHERE condition may become an AND formula,
and if this formula contains multiple equalities, the field JOIN::item_equal
must be updated to refer to these equalities. The above-mentioned commit
forgot to do this, which could cause crashes for some queries.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
close_connections() in mysqld.cc sends a signal to all threads.
But InnoDB is too busy purging and doesn't react immediately.
close_connections() waits 20 seconds, which isn't enough in this
particular case, and then unlinks all threads from
the list and forcibly closes their vio connections.
InnoDB background threads have no vio connection to close, but
they're unlinked all the same. So when they finally notice
the shutdown request and try to unlink themselves, they fail the
assertion that they're still linked.
Fix: don't assert_linked, as another thread can unlink this THD at any time.
The bug was that a float token containing a dot followed by an 'e'
notation was dropped from the request completely.
This caused all manner of invalid SQL statements like:
select id 1.e, char 10.e(id 2.e), concat 3.e('a'12356.e,'b'1.e,'c'1.1234e)1.e, 12 1.e*2 1.e, 12 1.e/2 1.e, 12 1.e|2 1.e, 12 1.e^2 1.e, 12 1.e%2 1.e, 12 1.e&2 from test;
to be parsed as if they were:
select id, char(id), concat('a','b','c'), 12*2, 12/2, 12|2, 12^2, 12%2, 12&2 from test.test;
This misparsing occurred when the 'e' was followed by any of:
( ) . , | & % * ^ /
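A minimal self-contained sketch of a lexer rule that avoids the problem (this
is an illustration, not the server's sql_lex.cc code): an 'e' after
"<digits>." is consumed as part of the float literal only when it is followed
by an optional sign and at least one digit; otherwise the literal ends before
the 'e' and the 'e' is left to be scanned as the next token instead of being
silently dropped.

  #include <cctype>
  #include <cstddef>
  #include <iostream>
  #include <string>

  // Returns the length of the numeric literal starting at 'pos', consuming the
  // exponent marker only when it really introduces an exponent.
  static std::size_t scan_number(const std::string &s, std::size_t pos)
  {
    std::size_t i= pos;
    while (i < s.size() && isdigit((unsigned char) s[i])) i++;
    if (i < s.size() && s[i] == '.')
    {
      i++;
      while (i < s.size() && isdigit((unsigned char) s[i])) i++;
    }
    if (i < s.size() && (s[i] == 'e' || s[i] == 'E'))
    {
      std::size_t j= i + 1;
      if (j < s.size() && (s[j] == '+' || s[j] == '-')) j++;
      if (j < s.size() && isdigit((unsigned char) s[j]))
      {
        while (j < s.size() && isdigit((unsigned char) s[j])) j++;
        i= j;                         // a real exponent: include it in the literal
      }
      // else: the 'e' is not part of the number and must not be dropped
    }
    return i - pos;
  }

  int main()
  {
    std::cout << scan_number("1.5e3,", 0) << "\n";  // 5: a complete float literal
    std::cout << scan_number("10.e(", 0) << "\n";   // 3: "10." only, 'e' left for next token
    return 0;
  }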
Currently, SST scripts assume that the filename specified in
the --log-bin-index argument either does not contain an extension
or uses the standard ".index" extension. Similar assumptions are
used for the log_bin_index parameter read from the configuration
file. This commit adds support for arbitrary extensions for the
index file paths.
If the server is started with the --innodb-force-recovery argument
on the command line, then during SST this argument can be passed to
mariabackup only at the --prepare stage, and accordingly it must be
removed from the --mysqld-args list (and it should not be passed
to mariabackup otherwise).
This commit fixes a flaw in the SST scripts and adds a test that
checks the ability to run the joiner node in a configuration that
uses --innodb-force-recovery=1.
This bug led to reporting a bogus "No database selected" message for DELETE
statements if they used subqueries in their WHERE conditions and these
subqueries contained references to CTEs.
The bug happened because the grammar rule for the DELETE statement did not
call the function LEX::check_cte_dependencies_and_resolve_references() and
as a result references to CTEs were not identified as such.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This bug concerned only CREATE TABLE statements of the form
CREATE TABLE <table name> AS <with clause> <union>.
For such a statement not all references to CTEs used in <union> were resolved.
As a result a bogus message was reported for the first unresolved reference.
This happened because for such statements the function resolving references
to CTEs, LEX::check_cte_dependencies_and_resolve_references(), was called
prematurely in the parser.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Remove the section that was trying to rename default-character-set to
character-set-server.
This seems to be an old workaround for some upgrade warning; it has not
worked for some time already because the ini filename was not initialized.
fil_space_decrypt(): change signature to return status via dberr_t only.
Also replace impossible condition with an assertion and prove it via
test cases.
This bug affected queries with two or more references to a CTE referring
to another CTE if the definition of the latter contained an invocation of
a stored function that used a base table. The bug could lead to a bogus
error message or to an assertion failure.
For any non-first reference to a CTE cte1, With_element::clone_parsed_spec()
is called to parse the specification of cte1 and construct the unit
structure for this usage of cte1. If cte1 refers to another CTE cte2
outside of the specification of cte1 then With_element::clone_parsed_spec()
has to be called for cte2 as well. This call is made by the function
LEX::resolve_references_to_cte() within the invocation of the function
With_element::clone_parsed_spec() for cte1.
When the specification of a CTE is parsed, all table references encountered
in it must be added to the global list of table references for the query.
As the specification for a non-first usage of a CTE is parsed in a
recursive call of the parser, the function With_element::clone_parsed_spec()
invoked at this recursive call must take care of appending the list of
table references encountered in the specification of this CTE cte1 to the
list of table references created for the query. And it must do this after
the call of LEX::resolve_references_to_cte() that resolves references to
CTEs defined outside of the specification of cte1, because this call may
invoke the parser again for the specifications of other CTEs, and the table
references from their specifications must ultimately appear in the global
list of table references of the query.
The code of With_element::clone_parsed_spec() misplaced the call of
LEX::resolve_references_to_cte(). As a result LEX::query_tables_last for
the query, which was supposed to point to the field 'next_global' of the
last element in the global list of table references, actually pointed to
'next_global' of the previous element.
This inconsistency caused serious problems when table references used in
the stored functions invoked in cloned specifications of CTEs were added
to the global list of table references.
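A minimal self-contained sketch of the broken invariant, with stand-in types
(the server keeps this list in TABLE_LIST::next_global, with
LEX::query_tables_last pointing to the last element's next_global field):
appending to the list is only correct while the tail pointer refers to the
true last element's next-pointer; if it is left pointing at the previous
element, as the misplaced call caused, later appends corrupt the list.

  #include <cassert>

  struct TableRef                        // stand-in for TABLE_LIST
  {
    const char *name;
    TableRef *next_global= nullptr;
  };

  struct QueryTables
  {
    TableRef *first= nullptr;
    TableRef **last= &first;             // analogue of LEX::query_tables_last

    // Append one reference and keep 'last' pointing at the final next_global.
    void append(TableRef *t)
    {
      *last= t;
      last= &t->next_global;
    }
  };

  int main()
  {
    TableRef t1{"t1"}, t2{"t2"}, t3{"t3"};
    QueryTables q;
    q.append(&t1);
    q.append(&t2);                       // 'last' must now point at t2.next_global
    q.append(&t3);                       // e.g. a table used by a stored function
                                         // invoked from a cloned CTE specification
    assert(q.first == &t1 && t1.next_global == &t2 && t2.next_global == &t3);
    return 0;
  }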