When a range rowid filter was used with an index ref access, the cost of
accessing the index entries for the records rejected by the filter was not
taken into account. For a ref access by an index with a big average number
of records per key, this led to poor execution plans if the selectivity of
the used filter was high.
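As an illustration, here is a hedged sketch of the intended accounting, with
hypothetical names and a simplified cost model (the real optimizer code is
structured differently):

  // Hypothetical cost model: even when the rowid filter rejects a row, its
  // index entry still has to be read, so that part of the cost must stay.
  double ref_access_cost_with_filter(double records_per_key,
                                     double filter_selectivity, // fraction kept
                                     double index_entry_cost,
                                     double row_fetch_cost,
                                     double filter_lookup_cost)
  {
    double index_cost=  records_per_key * index_entry_cost;   // all entries read
    double filter_cost= records_per_key * filter_lookup_cost; // one probe each
    double fetch_cost=  records_per_key * filter_selectivity * row_fetch_cost;
    // Before the fix, the index part was effectively scaled by the filter
    // selectivity as well, underestimating the work done for rejected rows.
    return index_cost + filter_cost + fetch_cost;
  }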
The patch resolves this problem. It also introduces a minor optimization
that skips look-ups into a filter that turns out to be empty.
With this patch the output of ANALYZE stmt reports the number of look-ups
into used rowid filters.
The patch also back-ports from 10.5 the code that properly sets the field
TABLE::file::table for opened temporary tables.
The test cases that were supposed to use rowid filters have been adjusted
in order to use similar execution plans after this fix.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The ALTER-related code cannot do both of the following at the same time:
- modify partitions
- change column data types
Explicitly changing a column data type together with a partition change is
prohibited by the parser, so it returns a syntax error:
ALTER TABLE t MODIFY ts BIGINT, DROP PARTITION p1;
This fix additionally disables implicit data type upgrade
(e.g. from "MariaDB 5.3 TIME" to "MySQL 5.6 TIME", or the other way
around according to the current mysql56_temporal_format) in case of
an ALTER modifying partitions, e.g.:
ALTER TABLE t DROP PARTITION p1;
In such commands now only the partition change happens, while
the data types stay unchanged.
One can additionally run:
ALTER TABLE t FORCE;
either before or after the ALTER modifying partitions to
upgrade data types according to mysql56_temporal_format.
with C/C.
The patch introduces mariadb_capi_rename.h, which is included into
mysql.h. The new header contains macro definitions for the names being
renamed. In versions 10.6+ (i.e. where the sql service exists) the renaming
condition in mariadb_capi_rename.h should additionally include
&& !defined(MYSQL_DYNAMIC_PLUGIN)
and look roughly like the sketch below.
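This is an illustrative sketch only; the macro names and the exact condition
below are made up, while the real header contains one definition per renamed
C API function:

  /* Illustrative: rename C API symbols for the server build, but keep the
     original names for the embedded library and (in 10.6+) dynamic plugins. */
  #if !defined(EMBEDDED_LIBRARY) && !defined(MYSQL_DYNAMIC_PLUGIN)
  #define mysql_real_connect  server_mysql_real_connect
  #define mysql_close         server_mysql_close
  /* ... one #define per renamed C API function ... */
  #endif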
The patch also contains removal of mysql.h from the api check.
Disabling false_duper-6543 test for embedded.
ha_federated.so uses the C API. C API functions are being renamed in the server,
but not in embedded, since the embedded server library should provide the proper
C API expected by programs using it.
Thus the same ha_federated.so cannot work for both the server and the embedded
server library.
As all federated tests are already disabled for embedded,
federated isn't supposed to work for embedded anyway, and thus the test
is being disabled.
Abort startup if SSL setup fails.
Also, for the server, always check that the certificate matches the private key
(even if ssl_cert is not set, OpenSSL will try to use a default one).
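A minimal sketch of the kind of check meant here, using standard OpenSSL
calls; the variable names and error handling are illustrative, not the actual
server code:

  #include <openssl/ssl.h>
  #include <openssl/err.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Load certificate and key, then verify that they match; on any failure
     print the OpenSSL error and abort startup instead of running without SSL. */
  static void setup_ssl_or_die(SSL_CTX *ctx, const char *cert, const char *key)
  {
    if (SSL_CTX_use_certificate_chain_file(ctx, cert) != 1 ||
        SSL_CTX_use_PrivateKey_file(ctx, key, SSL_FILETYPE_PEM) != 1 ||
        SSL_CTX_check_private_key(ctx) != 1)   /* catches cert/key mismatch */
    {
      fprintf(stderr, "SSL setup failed: %s\n",
              ERR_error_string(ERR_get_error(), NULL));
      exit(1);
    }
  }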
When the only query of an XA transaction is on a non-transactional table
and errors out:
XA BEGIN 'x';
--error ER_DUP_ENTRY
INSERT INTO t1 VALUES (1),(1);
XA END 'x';
XA PREPARE 'x';
The binlogging pattern is correctly started, as expected, with the
errored-out Query event or its ROW format events, but there is
no empty XA_prepare_log_event group.
The following
XA COMMIT 'x';
therefore should not be logged either, but it is.
The bug is fixed by properly maintaining the read-write binlog hton
property and using it to enforce correct binlogging decisions.
Specifically, in the case from the bug description, XA COMMIT is no longer
binlogged, whether it is issued in the same connection or externally after
disconnect.
The same continues to apply to an empty XA transaction that does not change
any data in any of the transactional engines involved.
The lock is created during page splitting, after moving records and
locks (lock_move_rec_list_(start|end)()) to the new page and inheriting
the locks to the supremum of the left page from the successor of the infimum
on the right page.
There is no need for such inheritance under the READ COMMITTED isolation level
for non-gap locks, so the fix is to add the corresponding condition to the
gap lock inheritance function.
One more fix is to forbid gap lock inheritance if the XA transaction has been
prepared. Use the most significant bit of trx_t::n_ref to indicate that gap
lock inheritance is forbidden. This fix is based on
mysql/mysql-server@b063e52a83
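A self-contained sketch of the two conditions, using hypothetical simplified
types; the real change lives in InnoDB's gap lock inheritance code:

  #include <cstdint>

  enum iso_level { READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE };
  constexpr uint32_t LOCK_GAP= 0x200;        // illustrative flag value only

  struct lock_desc
  {
    uint32_t  type_mode;                     // lock flags
    iso_level trx_isolation;                 // owning transaction's isolation
    bool      trx_xa_prepared;               // owning XA transaction prepared
  };

  // Should this lock be inherited to the gap (left page supremum) on split?
  static bool inherit_to_gap(const lock_desc &lock)
  {
    if (lock.trx_xa_prepared)                // second fix: no inheritance
      return false;                          // after XA PREPARE
    if (lock.trx_isolation == READ_COMMITTED && !(lock.type_mode & LOCK_GAP))
      return false;                          // first fix: record-only lock
                                             // under READ COMMITTED
    return true;
  }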
Read the version of the view share when we read the definition, to prevent
simultaneous access to a view's table SHARE (and so its MEM_ROOT)
from different threads.
This is a new version of the patch instead of the reverted:
MDEV-28727 ALTER TABLE ALGORITHM=NOCOPY does not work after upgrade
Ignore the difference in key packing flags HA_BINARY_PACK_KEY and HA_PACK_KEY
during ALTER to allow ALGORITHM=INSTANT and ALGORITHM=NOCOPY in more cases.
If for some reason (e.g. due to a bug fix such as MDEV-20704) these
cumulative (over all segments) flags in KEY::flags differ between
the old and new table inside compare_keys_but_name(), the difference
in HA_BINARY_PACK_KEY and HA_PACK_KEY in KEY::flags is not really important:
MyISAM and Aria can handle such cases well: per-segment flags are stored in
the MYI and MAI files anyway, and they are read at ha_myisam::open() /
ha_maria::open() time. So indexes get opened with the correct per-segment
flags that were calculated at table CREATE time, no matter
what the old (CREATE time) and new (ALTER time) per-index compression
flags are, and no matter whether they are equal or not.
All other engines ignore key compression flags, so this change
is safe for them as well.
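A self-contained sketch of the comparison change; the flag values and the
helper name below are placeholders, not the real server definitions:

  #include <cstdint>

  constexpr uint32_t HA_PACK_KEY=        1U << 0;  // placeholder value
  constexpr uint32_t HA_BINARY_PACK_KEY= 1U << 1;  // placeholder value

  // Compare the cumulative KEY flags of the old and new table definitions
  // while ignoring the two packing bits, so a difference in them alone does
  // not force a table-copying ALTER.
  static bool key_flags_equal_ignoring_packing(uint32_t old_flags,
                                               uint32_t new_flags)
  {
    const uint32_t mask= ~(HA_PACK_KEY | HA_BINARY_PACK_KEY);
    return (old_flags & mask) == (new_flags & mask);
  }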
On Linux this pthread_attr_setstacksize() fails with EINVAL,
"The stack size is less than PTHREAD_STACK_MIN (16384) bytes".
But on FreeBSD it succeeds and causes a crash later, as 8196 bytes is too little.
Let's keep the stack at its default size in the timer thread.
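For illustration, the pattern looks roughly like this (the thread and handler
names are hypothetical); the fix is simply to drop the setstacksize() call and
keep the platform default:

  #include <pthread.h>

  static void *timer_handler(void *arg) { (void) arg; return nullptr; }

  static int start_timer_thread(pthread_t *thr)
  {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* pthread_attr_setstacksize(&attr, 8196);
       Removed: 8196 < PTHREAD_STACK_MIN on Linux (EINVAL), while on FreeBSD
       the call succeeds and the tiny stack later overflows. */
    int res= pthread_create(thr, &attr, timer_handler, nullptr);
    pthread_attr_destroy(&attr);
    return res;
  }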
OpenSSL handles memory management using the **OPENSSL_xxx** API[^1]. For
allocation, there is `OPENSSL_malloc`. To free that memory, `OPENSSL_free`
must be called.
We've been lucky that OpenSSL's (and wolfSSL's) implementation allowed the
usage of `free` for memory cleanup. However, other OpenSSL forks, such
as AWS-LC[^2], are not this forgiving; there, calling `free` causes a
server crash.
Test case `openssl_1` provides good coverage for this issue. If a user
is created using:
`grant select on test.* to user1@localhost require SUBJECT "...";`
then connecting as user1 will crash the instance under AWS-LC.
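For illustration, this is the kind of pattern affected (the function below is
a simplified sketch, not the actual server code): the subject string is
allocated by OpenSSL and must be returned to OpenSSL's allocator.

  #include <openssl/ssl.h>
  #include <openssl/x509.h>
  #include <openssl/crypto.h>

  // The string returned by X509_NAME_oneline(..., NULL, 0) is allocated with
  // OPENSSL_malloc() and must be released with OPENSSL_free(), not free().
  static void check_peer_subject(SSL *ssl)
  {
    X509 *cert= SSL_get_peer_certificate(ssl);
    if (!cert)
      return;
    char *subject= X509_NAME_oneline(X509_get_subject_name(cert), nullptr, 0);
    /* ... compare subject against the value from REQUIRE SUBJECT ... */
    OPENSSL_free(subject);       // free(subject) here crashes under AWS-LC
    X509_free(cert);
  }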
There have been numerous OpenSSL forks[^3]. Due to FIPS[^4] and other
related regulatory requirements, MariaDB will be built using them. This
fix increases MariaDB's adaptability by using the more compliant and
generally accepted API.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
[^1]: https://www.openssl.org/docs/man1.1.1/man3/OPENSSL_malloc.html
[^2]: https://github.com/awslabs/aws-lc
[^3]: https://en.wikipedia.org/wiki/OpenSSL#Forks
[^4]: https://en.wikipedia.org/wiki/FIPS_140-2
st_select_lex::init_query is called during the execution of EXECUTE
IMMEDIATE 'alter table ...', so reset the initialization at the
same point where we set join= 0.
and also MDEV-25564, MDEV-18157.
An attempt to produce EXPLAIN output caused a crash in
Explain_node::print_explain_for_children. The cause was that an
Explain_node (actually a derived table's node) had a link to child select#N,
but there was no query plan present for select#N.
The query plan wasn't present because the subquery had been eliminated:
- Either it was a degenerate subquery like "(SELECT 1)" in MDEV-25564.
- Or it was a subquery in a UNION subquery's ORDER BY clause:
col IN (SELECT ... UNION
SELECT ... ORDER BY (SELECT FROM t1))
In such cases, the legacy code structure in the subquery/union processing
code(*) makes it hard to detect that the subquery was eliminated, so we end up
with EXPLAIN data structures (Explain_node::children) holding dangling
links to child subqueries.
The fix is to perform the checks and not follow the dangling links.
(In an ideal world we would not have these dangling links, but fixing
the code (*) would carry a high risk for the stable versions.)
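For illustration, a self-contained sketch of the defensive check, with heavily
simplified, hypothetical structures standing in for the Explain_* classes:

  #include <cstdio>
  #include <vector>

  struct Node { std::vector<int> children; };   // child select numbers

  // A select# has no saved plan if its subquery was eliminated.
  static Node *get_plan(std::vector<Node*> &plans, int select_no)
  {
    return (select_no >= 0 && select_no < (int) plans.size())
           ? plans[select_no] : nullptr;
  }

  static void print_children(std::vector<Node*> &plans, const Node &node)
  {
    for (int child_no : node.children)
    {
      Node *child= get_plan(plans, child_no);
      if (!child)                // dangling link: the subquery was eliminated
        continue;                // skip it instead of dereferencing it
      std::printf("select #%d\n", child_no);
    }
  }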
The official deb.mariadb.org mirrors are intended for distribution of the
current MariaDB releases. When a version goes end-of-life, it is
removed from those mirrors.
The upgrade tests should however keep working even after EOL. While we do want
users to stop using EOL versions, we still expect the newer versions to
support upgrades from old versions to the current ones. Therefore we
should continue testing upgrades from EOL versions, and for that to work,
switch the CI to use the archive.mariadb.org repositories instead.
MERGE NOTE: This commit was made on the oldest branch with the salsa-ci.yml
file. When merging 10.5->10.6->...->10.12 please include this commit in
the merge and ensure all files end up with the change:
deb.mariadb.org/10.([0-9]+)/ -> archive.mariadb.org/mariadb-10.$1/repo/
The bug is that we don't have a lock on the trigger name, so it is
possible for two threads to try to create the same trigger at the same
time and both to think that they have succeeded.
The same thing can happen with DROP TRIGGER, or with a combination of
CREATE and DROP TRIGGER.
Fixed by adding an MDL lock on the trigger name for the duration of the
create/drop.
This was caused by the short_option_1-master.opt file that had the
option -T12, which means (among other things) to use blocking
sockets. This was supported up to MariaDB 10.4, but not in 10.5, where
we removed the code that changes blocking sockets to non-blocking ones in
case of errors.
Fixed by ignoring the TEST_BLOCKING flag and also by not using the -T12
argument in short_option_1.
Other things:
- Added back support for valgrind (the original issue had nothing to
do with valgrind).
- While debugging I noticed that the retry loop in
handle_connections_sockets() was doing a lot of work during shutdown.
Fixed by not doing retries during shutdown.
The population of default values in INSERT SELECT was being
performed twice. With sequences, this resulted in every
second sequence value being used.
With INSERT ... SELECT we remove the second invocation of
table->update_default_fields(). This was already performed
in store_values(), which invokes fill_record_n_invoke_before_triggers(),
which in turn invoked update_default_fields() previously.
We do still need to return an error on duplicate values, so
::store_values() is extended to take the ignore option.
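For illustration, a self-contained sketch of the symptom; next_val() below is
only a stand-in for fetching NEXTVAL from a sequence, and evaluating the
default twice per row makes every second sequence value disappear:

  #include <cstdio>

  static int next_val()                 // stand-in for NEXTVAL(seq)
  { static int v= 0; return ++v; }

  int main()
  {
    for (int row= 1; row <= 3; row++)
    {
      int first=  next_val();           // first population of defaults
      int second= next_val();           // redundant second population (the bug)
      std::printf("row %d stores %d; value %d is lost\n", row, second, first);
    }
    return 0;                           // stores 2, 4, 6 instead of 1, 2, 3
  }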
=========== Problem =============
- `show columns` is not working for temporary tables, even though the
  `create temporary tables` privilege should be enough.
=========== Solution =============
- Append the `TMP_TABLE_ACLS` privilege when running `show columns` for
  temporary tables.
- Additionally, call `check_access()` for the database only once, not for
  each field.
=========== Additionally =============
- Update comments for function `check_table_access` arguments
Reviewed by: <vicentiu@mariadb.org>
For some queries that involve tables with different but convertible
character sets for the columns taking part in the query, repeated
execution of such a query in PS mode or as part of a stored routine
would result in abnormal server termination.
For example,
CREATE TABLE t1 (a2 varchar(10));
CREATE TABLE t2 (u1 varchar(10) CHARACTER SET utf8);
CREATE TABLE t3 (u2 varchar(10) CHARACTER SET utf8);
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = t1.a2))";
EXECUTE stmt;
EXECUTE stmt; <== Running this prepared statement the second time
results in a server crash.
The reason for the server crash is that an instance of the class
Item_func_conv_charset, created to convert a column
from one character set to another, is allocated on the execution
memory root, but a pointer to this instance is stored in an item
placed on the prepared statement memory root. Below is the call trace to
the place where the instance of the class Item_func_conv_charset
is created:
setup_conds
Item_func::fix_fields
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func_or_sum::agg_arg_charsets_for_comparison
Item_func_or_sum::agg_arg_charsets
Item_func_or_sum::agg_item_set_converter
Item::safe_charset_converter
And the following trace shows the place where a pointer to
the instance of the class Item_func_conv_charset is passed
to the class Item_func_eq, which is created on the memory root of
the prepared statement:
Prepared_statement::execute
mysql_execute_command
execute_sqlcom_select
handle_select
mysql_select
JOIN::optimize
JOIN::optimize_inner
convert_join_subqueries_to_semijoins
convert_subq_to_sj
To fix the issue, switch to the prepared statement memory root
before calling the method Item_func::setup_args_and_comparator,
so that any created Items are placed on the permanent memory root.
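For illustration only, here is a self-contained sketch of the underlying rule
with a trivial stand-in for MEM_ROOT (none of the names below are real server
types): anything referenced from the prepared statement's Item tree must be
allocated on the root that outlives individual executions.

  #include <memory>
  #include <string>
  #include <vector>

  // Trivial stand-in for MEM_ROOT: everything on it is freed as one unit.
  struct Root { std::vector<std::unique_ptr<std::string>> objs; };

  static std::string *alloc_on(Root &root, const char *what)
  {
    root.objs.emplace_back(new std::string(what));
    return root.objs.back().get();
  }

  int main()
  {
    Root stmt_root;                // lives as long as the prepared statement
    Root exec_root;                // cleared after every EXECUTE

    // Before the fix: the converter was created on the execution root, so the
    // pointer kept in the PS tree dangled after the first EXECUTE.
    std::string *dangling= alloc_on(exec_root, "Item_func_conv_charset");
    exec_root.objs.clear();
    (void) dangling;

    // After the fix: it is created on the statement root and stays valid
    // when the statement is executed again.
    std::string *persistent= alloc_on(stmt_root, "Item_func_conv_charset");
    (void) persistent;
    return 0;
  }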
It may seem that such an approach would result in a memory
leak in case the parameter marker '?' is used in the query,
as in the following example:
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = ?))";
EXECUTE stmt USING convert('A' using latin1);
but it doesn't, since in such a case any parameter marker
is treated as a constant and no subquery-to-semijoin optimization
is performed.
* ODBC Connect cosmetic fixes
- Update the command for connecting with the default `peer` authentication
  for user `postgres` (unless changed in `pg_hba.conf`).
- Update the privilege-granting command to be more verbose.
- Update the path of the `.sql` file.
- Update the instructions for the `pg_hba.conf` file to use the unix socket
  (`local`) type as well as the TCP/IP type `host`.
- Update the instruction about preferring a user DSN (data source file)
  over a system DSN.
- Update the path of the `odbc-postgresql` driver in a comment.
* Connect SE: update ODBC result file