The population of default values in INSERT SELECT was being
performed twice. With sequences, this resulted in every
second sequence value being used.
For INSERT SELECT we remove the second invocation of
table->update_default_fields(). This was already performed
in store_values(), which invokes fill_record_n_invoke_before_triggers(),
which in turn invokes update_default_fields().
We do need to return an error on duplicate values, so
::store_values() is extended to take the ignore option.
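For illustration, a sketch of the kind of statement affected (table and
sequence names are hypothetical):
CREATE SEQUENCE s1;
CREATE TABLE t1 (id INT DEFAULT NEXT VALUE FOR s1, a INT);
CREATE TABLE t2 (a INT);
INSERT INTO t2 VALUES (1),(2),(3);
INSERT INTO t1 (a) SELECT a FROM t2;
-- before the fix, id came out as e.g. 1,3,5 rather than 1,2,3, because
-- the sequence-based default was evaluated twice per inserted row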
=========== Problem =============
- `show columns` does not work for temporary tables, even though the user
has the `create temporary tables` privilege, which should be sufficient.
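A minimal illustration of the failing case (user and table names are
hypothetical):
GRANT CREATE TEMPORARY TABLES ON test.* TO 'tmp_user'@'localhost';
-- connected as tmp_user:
CREATE TEMPORARY TABLE test.tmp1 (a INT);
SHOW COLUMNS FROM test.tmp1;  -- was rejected with an access error before this fix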
=========== Solution =============
- Append the `TMP_TABLE_ACLS` privilege when running `show columns` for temp
tables.
- Additionally, call `check_access()` for the database only once, not for each
field.
=========== Additionally =============
- Update the comments for the `check_table_access` function arguments
Reviewed by: <vicentiu@mariadb.org>
For some queries that involve tables with different but convertible
character sets for columns taking part in the query, repeated
execution of such queries in PS mode or as part of a stored routine
would result in abnormal server termination.
For example,
CREATE TABLE t1 (a2 varchar(10));
CREATE TABLE t2 (u1 varchar(10) CHARACTER SET utf8);
CREATE TABLE t3 (u2 varchar(10) CHARACTER SET utf8);
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = t1.a2))";
EXECUTE stmt;
EXECUTE stmt; <== Running this prepared statement the second time
results in a server crash.
The reason for the server crash is that an instance of the class
Item_func_conv_charset, created to convert a column
from one character set to another, is allocated on the execution
memory root, but the pointer to this instance is stored in an item
placed on the prepared statement memory root. Below is the call trace to
the place where the instance of the class Item_func_conv_charset
is created.
setup_conds
Item_func::fix_fields
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func_or_sum::agg_arg_charsets_for_comparison
Item_func_or_sum::agg_arg_charsets
Item_func_or_sum::agg_item_set_converter
Item::safe_charset_converter
And the following trace shows the place where the pointer to
the instance of the class Item_func_conv_charset is passed
to the class Item_func_eq, which is created on the memory root of
the prepared statement.
Prepared_statement::execute
mysql_execute_command
execute_sqlcom_select
handle_select
mysql_select
JOIN::optimize
JOIN::optimize_inner
convert_join_subqueries_to_semijoins
convert_subq_to_sj
To fix the issue, switch to the Prepared Statement memory root
before calling the method Item_func::setup_args_and_comparator
in order to place any created Items on the permanent memory root.
It may seem that such an approach would result in a memory
leak in case the parameter marker '?' is used in the query,
as in the following example:
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = ?))";
EXECUTE stmt USING convert('A' using latin1);
but it wouldn't, since in that case every parameter marker
is treated as a constant and no subquery-to-semijoin optimization
is performed.
See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario INSERT chose to check whether delete unmarking is available
for a just-deleted record. To build an update vector, it needed to calculate
the vcols as well. Since this INSERT was not IGNORE-flagged, recalculation
failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for the delete-unmarked insert.
As of now InnoDB does not store a trx_id for each record in a secondary index.
The idea behind this is the following: store only a per-page max_trx_id, and
delete-mark the records when they are deleted/updated.
When a read starts, it remembers the lowest id of the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When a page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not delete-marked,
then this page is safe to read as is. Otherwise, a clustered index access
may be needed. See the page_get_max_trx_id call in row_search_mvcc, and the
corresponding switch (row_search_idx_cond_check(...)) below it.
Virtual columns are required to be updated in case the record was
delete-marked. The motivation behind this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
This was basically a description of why virtual column computation can
normally happen during a SELECT and, generally, during a vcol index access.
Sometimes the statistics tables are updated by InnoDB. This starts a new
transaction, and it can happen that it has not finished by the moment of
the SELECT execution, forcing virtual column recomputation. If the result was
something that normally outputs a warning, like division by zero, then
the warning could be output in a racy manner.
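For illustration, a schema of the kind where this race could surface (names
are hypothetical):
CREATE TABLE t1 (
  a INT,
  v INT AS (1 / a) VIRTUAL, -- computing v for a row with a = 0
  KEY (v)                   -- normally raises a division-by-zero warning
) ENGINE=InnoDB;
-- a SELECT that uses the index on v could then print the warning at an
-- unpredictable point, depending on the background statistics update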
The solution is to suppress the warnings when a column is computed
for the described purpose.
An ignore_warnings argument is added to innobase_get_computed_value.
Currently, it is true only for the call from
row_sel_sec_rec_is_for_clust_rec.
Adding debug output for key and keyseg flags at ha_myisam::open() time.
So now there are three points of debug output:
1. At the very end of mysql_prepare_create_table()
2. In ha_myisam::create(), after the table2myisam() call
3. In ha_myisam::open(), after the mi_open() call
mi_create(), which is called between 2 and 3, modifies flags for
some data types, so the output in 2 and 3 is different.
If repl.max_ws_size is set too low, a following CREATE TABLE could fail
during commit. In this case wsrep_commit_empty should allow rolling
it back if the provider state is s_aborted.
Furthermore, the original ER_ERROR_DURING_COMMIT does not really tell the user
anything clear. Therefore, this commit adds a new error,
ER_TOO_BIG_WRITESET. This will change the output of some test cases.
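A sketch of the scenario on a Galera node (the option value is illustrative):
SET GLOBAL wsrep_provider_options = 'repl.max_ws_size=1024';
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
-- if the resulting writeset exceeds repl.max_ws_size the commit fails;
-- previously this surfaced as ER_ERROR_DURING_COMMIT, now as ER_TOO_BIG_WRITESET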
The problem was that during ALTER TABLE execution the variables were set
to 1 even when wsrep_auto_increment_control is OFF. We should
set them only when wsrep_auto_increment_control is ON.
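A sketch of the affected behavior (table name is hypothetical; assuming the
variables in question are auto_increment_increment and auto_increment_offset,
the settings that wsrep_auto_increment_control manages):
SET GLOBAL wsrep_auto_increment_control = OFF;
SET GLOBAL auto_increment_increment = 3;
SET GLOBAL auto_increment_offset = 2;
ALTER TABLE t1 ADD COLUMN b INT;
SHOW GLOBAL VARIABLES LIKE 'auto_increment%';
-- before the fix the auto-increment settings could come back as 1
-- even though wsrep_auto_increment_control was OFF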
In the test the user has set WSREP_ON=OFF. This causes streaming replication
recovery to fail, which led to a call to unireg_abort(). However,
this call is not necessary and we can let the transaction fail. Naturally,
if a real user does this, they need to bootstrap their cluster.
This test is not slow, but it reliably produces an EXPLAIN difference
(number of rows) on the Valgrind builder. A possible explanation could be
that the purge threads are not being scheduled. Valgrind runs all threads
in a single thread.
When, for example, a table is partitioned by HASH(a) and we rename column `a'
to `b', the partitioning filter stays unchanged: HASH(a). That is the wrong
behavior.
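A minimal illustration:
CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 2;
ALTER TABLE t1 CHANGE a b INT;
SHOW CREATE TABLE t1;
-- before the fix the output still showed PARTITION BY HASH (`a`)
-- even though the column is now named `b`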
The patch updates the partitioning filter in accordance with the new column
names. That includes the partition/subpartition expression and the
partition/subpartition field list.
Let us disable Valgrind on tests that would fail because a
server shutdown or a STOP SLAVE command would take longer,
causing the test harness to forcibly and silently kill the server
due to an exceeded timeout.
row_purge_get_partial(): Replaces trx_undo_rec_get_partial_row().
Also copy the purge_node_t::ref to the purge_node_t::row.
In this way, the clustered index key fields will always be
available, even if thanks to
commit d384ead0f0 (MDEV-14799)
they would no longer be repeated in the remaining part of the
undo log record.
This commit adds automation that will reduce the possibility
of user errors when customizing wsrep_notify.sh (in particular
errors caused by user-specified parameters). Now all leading and trailing
spaces are removed from the user-specified parameters, and automatic
port and host address substitution has been added to the scripts,
as well as automatic password substitution on the client command line,
but only if the password is specified in wsrep_notify.sh and is not an
empty string. Also added support for automatic substitution of all
SSL-related parameters and improved parsing of IPv6 addresses
(to allow "[...]" notation for IPv6 addresses). Also added a
test to check that the wsrep notify script works with SSL.
The problem is that if the table definition cache (TDC) is full of real tables
which are in the table cache, a view definition cannot stay there and so will
be pushed out by its own underlying tables.
In the situation above, the old mechanism of detecting whether the definition
in the PS matches the current version always requires a reprepare and so
prevents executing the PS.
One workaround is to increase the TDC size, as sketched after the list below;
the other is to improve the version check for views/triggers (which is done
here). Now in suspicious cases we check:
- the timestamp (microseconds) of the view, to be sure that the version really
has changed;
- the time (microseconds) of creation of a trigger, relative to the time
(microseconds) of statement preparation.
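For reference, the first workaround mentioned above amounts to something like
the following (the value is illustrative):
SET GLOBAL table_definition_cache = 2000;
-- a larger table definition cache makes it less likely that a view
-- definition is pushed out by its own underlying tables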
- Added missing information about the database of the corresponding table for various types of commands
- Fixed some typos
- Reviewed by: <vicentiu@mariadb.org>
This patch resolves the problem of improper name resolution of table
references to embedded CTEs for some queries. This improper binding could
lead to
- infinite sequence of calls of recursive functions
- crashes due to resolution of null pointers
- wrong result sets returned by queries
- bogus error messages
If the definition of a CTE contains with clauses then such a CTE is called
an embedding CTE, while the CTEs from these with clauses are called embedded
CTEs.
If a table reference used in the definition of an embedded CTE cannot be
resolved within the unit that contains this reference, it still may be
resolved against a CTE definition from the with clause of one of the
embedding CTEs.
A table reference can be resolved against a CTE definition if it is used in
the scope of this definition and it refers to the name of the CTE.
Table reference t is in the scope of the CTE definition of CTE cte if
- the definition of cte is an element of a with clause declared as
RECURSIVE and the reference t belongs either to the unit to which
this with clause is attached or to one of the elements of this clause
- the definition of cte is an element of a with clause without the RECURSIVE
specifier and the reference t belongs either to the unit to which this
with clause is attached or to one of the elements from this clause that
are placed after the definition of cte.
If a table reference can be resolved against several CTE definitions then
it is bound to the most embedded one.
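A sketch illustrating the scope rules above (table and CTE names are
hypothetical):
WITH base AS (SELECT 1 AS a),
embedding AS (
  WITH embedded AS (SELECT a FROM base)
  SELECT a FROM embedded
)
SELECT a FROM embedding;
Here the reference to base inside embedded cannot be resolved within the inner
with clause, so it is resolved against the definition of base from the outer
with clause (placed before embedding). If the inner with clause also defined
a CTE named base, the reference would be bound to that more embedded
definition.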
The code before this patch did not always resolve table references used in
embedded CTEs according to the above rules.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Nowadays a subquery in a UNION's ORDER BY is placed correctly in the fake
select; the only remaining problem was an incorrect Name_resolution_context,
which is fixed by this patch in the parser, so we do not need
scanning/resetting of the ORDER BY of a union.
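An example of the construct in question (table names are hypothetical):
SELECT a FROM t1
UNION
SELECT a FROM t2
ORDER BY (SELECT MAX(b) FROM t3);
Here the ORDER BY belongs to the union as a whole and is handled by the fake
select, and the subquery in it now gets the correct Name_resolution_context
at parse time.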