When assigning the cached item to the Item_cache for the first time,
make sure to use Item_cache::setup(), not Item_cache::store():
the former copies the metadata (and allocates memory, in the case
of Item_cache_row), and Item_cache::decimals must be set for
comparisons to work correctly.
Deallocation of TABLE_LIST::dt_handler and TABLE_LIST::pushdown_derived
was performed in multiple places in the code. This not only made the code
more difficult to maintain but also led to memory leaks and
ASAN heap-use-after-free errors.
This commit moves the deallocation of TABLE_LIST::dt_handler and
TABLE_LIST::pushdown_derived to a single point: JOIN::cleanup().
FederatedX table may refer to a table with a different name on the
remote server:
test> CREATE TABLE t2 (...) ENGINE="FEDERATEDX"
CONNECTION="mysql://user:pass@192.168.1.111:9308/federatedx/t1";
test> select * from t2 where ...;
This could cause an issue with federated_pushdown=1, because FederatedX
pushes the text of the query (or derived table) to the remote server. The
remote server would then try to read from table t2, while it should read from t1.
Solution: do not allow pushing down queries with tables that have different
db_name.table_name on the local and remote servers.
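For illustration, a hedged sketch (hypothetical credentials and column list)
of a setup that stays eligible for pushdown, because the local and the remote
name are both federatedx.t1:
test> SET GLOBAL federated_pushdown=1;
test> CREATE TABLE federatedx.t1 (a INT) ENGINE="FEDERATEDX"
      CONNECTION="mysql://user:pass@192.168.1.111:9308/federatedx/t1";
test> SELECT * FROM federatedx.t1 WHERE a > 0;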
This patch also fixes:
MDEV-29863 Server crashes in federatedx_txn::acquire after select from the
FederatedX table with partitions
Solution: disallow pushdown when partitioned FederatedX tables are used.
When a range rowid filter was used with an index ref access, the cost of
accessing the index entries for the records rejected by the filter was not
taken into account. For a ref access by an index with a big average number
of records per key, this led to poor execution plans if the selectivity of
the used filter was high.
The patch resolves this problem. It also introduces a minor optimization
that skips look-ups into a filter that turns out to be empty.
With this patch the output of ANALYZE stmt reports the number of look-ups
into used rowid filters.
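As a hedged illustration (hypothetical table), the affected plan shape is a
ref access on one index combined with a range rowid filter built from another
index; for such a query the ANALYZE statement now also reports the filter
look-ups:
CREATE TABLE t1 (a INT, b INT, KEY idx_a(a), KEY idx_b(b)) ENGINE=InnoDB;
ANALYZE SELECT * FROM t1 WHERE a = 10 AND b BETWEEN 1 AND 100;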
The patch also back-ports from 10.5 the code that properly sets the field
TABLE::file::table for opened temporary tables.
The test cases that were supposed to use rowid filters have been adjusted
in order to use similar execution plans after this fix.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The ALTER-related code cannot do both of the following at the same time:
- modify partitions
- change column data types
Explicitly changing a column data type together with a partition change is
prohibited by the parser, so this is not allowed and returns a syntax error:
ALTER TABLE t MODIFY ts BIGINT, DROP PARTITION p1;
This fix additionally disables implicit data type upgrade
(e.g. from "MariaDB 5.3 TIME" to "MySQL 5.6 TIME", or the other way
around according to the current mysql56_temporal_format) in case of
an ALTER modifying partitions, e.g.:
ALTER TABLE t DROP PARTITION p1;
In such commands now only the partition change happens, while
the data types stay unchanged.
One can additionally run:
ALTER TABLE t FORCE;
either before or after the ALTER modifying partitions to
upgrade data types according to mysql56_temporal_format.
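A hedged end-to-end sketch (assuming mysql56_temporal_format=ON and a table t
with old-format temporal columns):
ALTER TABLE t DROP PARTITION p1;   -- only the partition change happens
ALTER TABLE t FORCE;               -- data types are upgraded now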
C API functions in the server are being renamed to avoid name conflicts
with C/C (Connector/C).
The patch introduces mariadb_capi_rename.h, which is included into
mysql.h. The new header contains macro definitions for the names being
renamed. In versions 10.6+ (i.e. where the sql service exists) the renaming
condition in mariadb_capi_rename.h should additionally include
&& !defined(MYSQL_DYNAMIC_PLUGIN)
The patch also removes mysql.h from the API check.
Disabling false_duper-6543 test for embedded.
ha_federated.so uses the C API. The C API functions are being renamed in the
server, but not in embedded, since the embedded server library should keep the
proper C API, as expected by the programs using it.
Thus the same ha_federated.so cannot work for both the server and the embedded
server library.
As all federated tests are already disabled for embedded,
federated isn't supposed to work for embedded anyway, and thus the test
is being disabled.
Abort startup if SSL setup fails.
Also, for the server, always check that the certificate matches the private key
(even if ssl_cert is not set, OpenSSL will try to use a default one).
This is a new version of the patch instead of the reverted:
MDEV-28727 ALTER TABLE ALGORITHM=NOCOPY does not work after upgrade
Ignore the difference in key packing flags HA_BINARY_PACK_KEY and HA_PACK_KEY
during ALTER to allow ALGORITHM=INSTANT and ALGORITHM=NOCOPY in more cases.
If for some reason (e.g. due to a bug fix such as MDEV-20704) these
cumulative (over all segments) flags in KEY::flags are different for
the old and new table inside compare_keys_but_name(), the difference
in HA_BINARY_PACK_KEY and HA_PACK_KEY in KEY::flags is not really important:
MyISAM and Aria can handle such cases well: the per-segment flags are stored
in the MYI and MAI files anyway and are read at ha_myisam::open() /
ha_maria::open() time. So indexes get opened with the correct per-segment
flags that were calculated at table CREATE time, no matter
what the old (CREATE time) and new (ALTER time) per-index compression
flags are, and no matter whether they are equal or not.
All other engines ignore key compression flags, so this change
is safe for other engines as well.
st_select_lex::init_query is called in the execution of EXECUTE
IMMEDIATE 'alter table ...', so reset the initialization at the
same point where we set join= 0.
This commit also fixes MDEV-25564 and MDEV-18157.
An attempt to produce EXPLAIN output caused a crash in
Explain_node::print_explain_for_children. The cause of this was that an
Explain_node (actually, a node for a derived table) had a link to child
select#N, but there was no query plan present for select#N.
The query plan wasn't present because the subquery was eliminated.
- Either it was a degenerate subquery like "(SELECT 1)" in MDEV-25564.
- Or it was a subquery in a UNION subquery's ORDER BY clause:
col IN (SELECT ... UNION
SELECT ... ORDER BY (SELECT FROM t1))
In such cases, the legacy code structure in the subquery/union processing
code(*) makes it hard to detect that the subquery was eliminated, so we end up
with EXPLAIN data structures (Explain_node::children) having dangling
links to child subqueries.
Make the checks and don't follow the dangling links.
(In an ideal world, we should not have these dangling links. But fixing
the code (*) would carry a high risk for the stable versions.)
The population of default values in INSERT SELECT was being
performed twice. With sequences, this resulted in every
second sequence value being used.
With INSERT SELECT we remove the second invocation of
table->update_default_fields(). This was already performed
in store_values() invoking fill_record_n_invoke_before_triggers(),
which invoked update_default_fields() previously.
We do need to return an error on duplicate values, so
::store_values() is extended to take the ignore option.
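A hedged illustration (hypothetical tables) of the symptom with a
sequence-based default value:
CREATE SEQUENCE s;
CREATE TABLE t1 (id INT DEFAULT NEXTVAL(s), a INT);
CREATE TABLE t2 (a INT);
INSERT INTO t2 VALUES (10),(20),(30);
INSERT INTO t1 (a) SELECT a FROM t2;
SELECT id, a FROM t1;  -- before the fix every second sequence value was
                       -- skipped, e.g. id = 1, 3, 5 instead of 1, 2, 3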
=========== Problem =============
- `show columns` does not work for temporary tables, even though the user has
a sufficient privilege (`create temporary tables`).
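A hedged sketch (hypothetical user and table) of the affected scenario:
CREATE USER u1@localhost;
GRANT CREATE TEMPORARY TABLES ON db1.* TO u1@localhost;
-- as user u1:
CREATE TEMPORARY TABLE db1.tmp1 (a INT);
SHOW COLUMNS FROM db1.tmp1;  -- was rejected with an access error before this fix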
=========== Solution =============
- Append `TMP_TABLE_ACLS` privilege when running `show columns` for temp
tables.
- Additionally, call `check_access()` for the database only once, not for each
field.
=========== Additionally =============
- Update comments for function `check_table_access` arguments
Reviewed by: <vicentiu@mariadb.org>
For some queries that involve tables with different but convertible
character sets for columns taking part in the query, repeated
execution of such queries in PS mode or as part of a stored routine
would result in abnormal server termination.
For example,
CREATE TABLE t1 (a2 varchar(10));
CREATE TABLE t2 (u1 varchar(10) CHARACTER SET utf8);
CREATE TABLE t3 (u2 varchar(10) CHARACTER SET utf8);
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = t1.a2))";
EXECUTE stmt;
EXECUTE stmt; <== Running this prepared statement the second time
results in server crash.
The reason for the server crash is that an instance of the class
Item_func_conv_charset, created for conversion of a column
from one character set to another, is allocated on the execution
memory root, but the pointer to this instance is stored in an item
placed on the prepared statement memory root. Below is the call trace to
the place where the instance of the class Item_func_conv_charset
is created.
setup_conds
Item_func::fix_fields
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func_or_sum::agg_arg_charsets_for_comparison
Item_func_or_sum::agg_arg_charsets
Item_func_or_sum::agg_item_set_converter
Item::safe_charset_converter
And the following trace shows the place where the pointer to
the instance of the class Item_func_conv_charset is passed
to the class Item_func_eq, which is created on the memory root of
the prepared statement.
Prepared_statement::execute
mysql_execute_command
execute_sqlcom_select
handle_select
mysql_select
JOIN::optimize
JOIN::optimize_inner
convert_join_subqueries_to_semijoins
convert_subq_to_sj
To fix the issue, switch to the Prepared Statement memory root
before calling the method Item_func::setup_args_and_comparator
in order to place any created Items on the permanent memory root.
It may seem that such an approach would result in a memory
leak in case the parameter marker '?' is used in the query,
as in the following example:
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = ?))";
EXECUTE stmt USING convert('A' using latin1);
but it wouldn't, since in that case any parameter marker
is treated as a constant and no subquery-to-semijoin optimization
is performed.
See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario, INSERT chose to check whether delete-unmarking is available
for a just-deleted record. To build an update vector, it needed to calculate
the vcols as well. Since this INSERT was not IGNORE-flagged, the recalculation
failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for the delete-unmarked insert.
As of now, InnoDB does not store a trx_id for each record in a secondary index.
The idea behind this is the following: store only a per-page max_trx_id, and
delete-mark the records when they are deleted/updated.
When a read starts, it remembers the lowest id of the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When a page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not delete-marked,
then this page is safe to read as is. Otherwise, a clustered index access
may be needed. See the page_get_max_trx_id call in row_search_mvcc, and the
corresponding switch (row_search_idx_cond_check(...)) below.
Virtual columns are required to be updated in case the record was
delete-marked. The motivation behind this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
This was basically a description of why virtual column computation can
normally happen during a SELECT and, more generally, during a vcol index access.
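For reference, a hedged sketch (hypothetical table) of a virtual column index
whose computation involves an expression that can produce a warning
(division by zero):
CREATE TABLE t1 (a INT, b INT, v INT AS (a / b) VIRTUAL, KEY(v)) ENGINE=InnoDB;
INSERT IGNORE INTO t1 (a, b) VALUES (1, 0);  -- v computes to NULL with a
                                             -- 'Division by 0' warning
SELECT a FROM t1 WHERE v IS NULL;            -- may recompute v via the vcol index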
Sometimes stats tables are updated by InnoDB. This starts a new
transaction, and it can happen that it has not finished by the moment of the
SELECT execution, forcing virtual column recomputation. If the result was
something that normally produces a warning, like division by zero, then
the warning could be emitted in a racy manner.
The solution is to suppress the warnings when a column is computed
for the described purpose.
An ignore_warnings argument is added to innobase_get_computed_value.
Currently, it is true only for the call from
row_sel_sec_rec_is_for_clust_rec.
Adding debug output for key and keyseg flags at ha_myisam::open() time.
So now there are three points of debug output:
1. At the very end of mysql_prepare_create_table()
2. In ha_myisam::create(), after the table2myisam() call
3. In ha_myisam::open(), after the mi_open() call
mi_create(), which is called between 2 and 3, modifies the flags for
some data types, so the output in 2 and 3 is different.
If repl.max_ws_size is set too low, a following CREATE TABLE could fail
during commit. In this case wsrep_commit_empty should allow rolling
it back if the provider state is s_aborted.
Furthermore, the original ER_ERROR_DURING_COMMIT does not really tell the user
anything clear. Therefore, this commit adds a new error,
ER_TOO_BIG_WRITESET. This will change the output of some test cases.
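A hedged sketch of how the limit can be provoked on a running Galera node
(the exact size value is illustrative):
SET GLOBAL wsrep_provider_options = 'repl.max_ws_size=1024';
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
-- with this patch the failure is reported as ER_TOO_BIG_WRITESET
-- instead of the generic ER_ERROR_DURING_COMMIT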
The problem was that during ALTER TABLE execution the auto-increment variables
were set to 1 even when wsrep_auto_increment_control is OFF. We should
set them only when wsrep_auto_increment_control is ON.
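A hedged sketch of the expected behaviour (assuming a Galera node where the
user manages the auto-increment settings manually):
SET GLOBAL wsrep_auto_increment_control = OFF;
SET GLOBAL auto_increment_increment = 3;
ALTER TABLE t1 ADD COLUMN c INT;  -- must not reset auto_increment_increment/offset to 1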
In the test, the user has set WSREP_ON=OFF; this causes streaming replication
recovery to fail, which in turn caused a call to unireg_abort(). However,
this call is not necessary and we can let the transaction fail. Naturally,
if a real user does this, they need to bootstrap their cluster.