ha_innobase::check_if_supported_inplace_alter(): Do not invoke
innobase_table_is_empty() if the tablespace has been discarded.
That is, native ALTER TABLE in InnoDB will treat an empty table
in the same way as a table whose tablespace has been discarded.
(Note: ALTER TABLE...ALGORITHM=COPY will fail if the tablespace
has been discarded.)
This fixes a crash that was introduced
in commit c755974775 (MDEV-19611).
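A minimal sketch of the affected shape (table and column names are illustrative,
not the exact test from the MDEV report):

CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
ALTER TABLE t1 DISCARD TABLESPACE;
-- check_if_supported_inplace_alter() must not probe the discarded tablespace
-- via innobase_table_is_empty(); ALGORITHM=COPY would fail here anyway.
ALTER TABLE t1 ADD COLUMN b INT NOT NULL DEFAULT 0;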
A delete-marked record is present in the secondary index while the corresponding
record has already been purged from the clustered index. We cannot detect whether
such a record is historical, and we do not need to: the algorithm of
row_ins_check_foreign_constraint() skips such records anyway.
If the lock type is LOCK_GAP or LOCK_ORDINARY, and the transaction holds an
implicit lock for the record, then an explicit gap lock will not be set for
the record, because lock_rec_convert_impl_to_expl() returns true and in that
case the lock_rec_lock() call is bypassed.
The fix converts the explicit lock to an implicit one if the requested lock
type is not LOCK_REC_NOT_GAP.
The innodb_information_schema test result is also changed because, after the
fix, execution of the following statements:
SET autocommit=0;
INSERT INTO t1 VALUES (5,10);
SELECT * FROM t1 FOR UPDATE;
leads to additional gap lock requests.
An import operation without a .cfg file fails when there is a mismatch of
indexes between the metadata table and the .ibd file. Moreover, MDEV-19022 shows
that InnoDB can end up with an index tree where a non-leaf page has only
one child page. So it is unsafe to locate the secondary index root page.
This patch does the following when importing a table without a .cfg file:
1) If the metadata contains more than one index then InnoDB stops
the import operation and asks the user to drop all secondary
indexes before retrying the import operation.
2) If the metadata contains only the clustered index then InnoDB finds the
index id by reading page 0 and page 3 instead of traversing the
whole tablespace.
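A hedged sketch of the resulting workflow when no .cfg file is available
(table and index names are illustrative; the file copying step is elided):

CREATE TABLE t1 (a INT PRIMARY KEY, b INT, KEY k1(b)) ENGINE=InnoDB;
-- Secondary indexes must be dropped before the import.
ALTER TABLE t1 DROP INDEX k1;
ALTER TABLE t1 DISCARD TABLESPACE;
-- copy the source .ibd file (without .cfg) into the datadir, then:
ALTER TABLE t1 IMPORT TABLESPACE;
-- Secondary indexes can be recreated afterwards.
ALTER TABLE t1 ADD INDEX k1(b);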
ha_partition stores records in the m_ordered_rec_buffer array and uses
it for the priority queue in an ordered index scan. When records are restored
from the array, the blob buffers may already be freed or rewritten.
The solution is to take temporary ownership of the cached blob buffers via
String::swap(). When a record is restored from m_ordered_rec_buffer,
ownership is returned to the table fields.
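An illustrative shape of the affected scenario (schema is hypothetical): an
ordered index scan over a partitioned table with a blob field, which merges
rows from the partitions through m_ordered_rec_buffer:

CREATE TABLE t1 (a INT, b BLOB, KEY (a)) ENGINE=InnoDB
  PARTITION BY HASH (a) PARTITIONS 4;
INSERT INTO t1 VALUES (1, REPEAT('x', 1000)), (2, REPEAT('y', 1000));
-- The blob values must stay valid while rows sit in the priority queue.
SELECT a, LENGTH(b) FROM t1 FORCE INDEX (a) WHERE a > 0 ORDER BY a;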
Cleanups:
init_record_priority_queue(): removed a needless !m_ordered_rec_buffer
check as there is the same assertion a few lines earlier.
dbug_print_row() for arbitrary row pointer
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes a problem has surfaced:
Since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: copy vcol_info from the table field when it is set up.
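A hedged reproduction sketch matching the description above (column layout and
data are invented):

CREATE TABLE t1 (
  a INT,
  b VARCHAR(100),
  c VARCHAR(100) AS (b) VIRTUAL,
  KEY (c(10), a)
) ENGINE=InnoDB
  PARTITION BY HASH (a) PARTITIONS 2;
INSERT INTO t1 (a, b) VALUES (1, 'foo'), (2, 'bar');
-- The prefix key forces a duplicated key field whose vcol_info->expr
-- had not been copied, leading to the crash on this SELECT.
SELECT * FROM t1 WHERE c = 'foo';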
fix main.processlist_notembedded test
* before EXPLAINing `select sleep` wait for select to start
(fixes "Target is not running an EXPLAINable command")
* after killing sleep, wait for it to die
(fixes test failures on --repeat when old sleep shows on a test rerun)
* unify with 10.3, copy minor changes from there
(`--echo End of 5.5` vs `--echo # End of 5.5`, etc)
len contained garbage, since vctempl->mysql_col_offset still held the old
value when row_mysql_store_col_in_innobase_format() was called
from innobase_get_computed_value().
It was not updated after the first ALTER TABLE call, because its INPLACE
logic considered there was nothing to update and exited immediately from
ha_innobase::inplace_alter_table().
However, the vcol metadata needs an update, since the vcol structure is changed
in the mysql record.
The regression was introduced by 12614af1fe. There, the refcount == 1 condition
was removed, which turned out to be crucial, though racy. The idea was to
update vc_templ after each (sequential) ALTER TABLE.
We should achieve the same in another way; there may be plenty of solutions,
but the simplest one is to add the following condition:
if the vcol structure is changed, drop vc_templ; it will be recreated on the
next ha_innobase::open() call.
This is done in prepare_inplace_alter_table. It is safe, since InnoDB inplace
changes require at least HA_ALTER_INPLACE_SHARED_LOCK_AFTER_PREPARE, which
guarantees MDL_EXCLUSIVE at this stage.
alter_templ_needs_rebuild() also has to track the columns that are not indexed,
to keep vc_templ correct.
Note that vc_templ is always kept constructed and available after the
ha_innobase::open() call, even on INSERT, though no virtual columns are
evaluated inside InnoDB during that statement.
In the test case supplied, it will be recreated on the second ALTER TABLE.
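A hedged illustration of the consecutive ALTER TABLE scenario described above
(column names are made up):

CREATE TABLE t1 (a INT, v1 INT AS (a + 1) VIRTUAL) ENGINE=InnoDB;
-- The first ALTER changes only virtual columns: the INPLACE logic sees nothing
-- to change inside InnoDB and returns early, but the vcol layout of the
-- mysql record has changed.
ALTER TABLE t1 ADD COLUMN v2 INT AS (a + 2) VIRTUAL;
-- The second ALTER (and subsequent DML) used to read stale vc_templ offsets;
-- after the fix vc_templ is dropped and rebuilt on the next ha_innobase::open().
ALTER TABLE t1 ADD COLUMN v3 INT AS (a + 3) VIRTUAL;
INSERT INTO t1 (a) VALUES (1);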
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes a problem has surfaced:
Since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: initialize vcols before key initialization.
Also, key initialization is moved to a separate function.
MDEV-16026: Forbid global system_versioning_asof in non-default time zone
* store `system_versioning_asof` in unix time;
* both session and global vars are processed in session timezone;
* setting `default` does not copy global variable anymore. Instead, it sets
system_time to SYSTEM_TIME_UNSPECIFIED, which means that no 'AS OF' time
is applied and `now()` can be assumed
As a regression, we cannot assign values below 1970 (UTC) anymore.
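A minimal illustration of the intended behaviour (table name is hypothetical):

CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING;
INSERT INTO t1 VALUES (1);
SET time_zone= '+00:00';
-- The value is interpreted in the session time zone and stored as unix time.
SET GLOBAL system_versioning_asof= '2020-01-01 00:00:00';
-- DEFAULT no longer copies the global value; it resets the session variable to
-- SYSTEM_TIME_UNSPECIFIED, so queries see the current data again.
SET SESSION system_versioning_asof= DEFAULT;
SELECT * FROM t1;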
MDEV-16481: set global system_versioning_asof=sf() crashes in specific case
* sys_vars.h: add `MYSQL_TIME` field to `set_var::save_result`
* sys_vars.ic: get rid of calling `var->value->get_date()` from
`Sys_var_vers_asof::update()`
* versioning.sysvars: add test; remove double warning
refactor Sys_var_vers_asof
* inherit from sys_var rather than Sys_var_enum
* remove junk "DEFAULT" keyword. There is DEFAULT in SQL grammar for it.
* make all conversions in check() to avoid possible errors
* avoid double var->value evaluation, which could
result in undefined behavior
The problem was that not all normal error codes were handled
after the wsrep_row_upd_check_foreign_constraints() call. Furthermore,
the debug assertion did not cover all normal error cases. Changed
ib:: calls to WSREP_ calls to use wsrep instrumentation.
Problem:
=========
As a part of the MDEV-14398 patch, InnoDB added and removed
the tablespace from the default encrypt list. But InnoDB removes
the tablespace from the default encrypt list too early due to
i) another encryption thread working on the tablespace
ii) the tablespace being flushed at the end of
key rotation
InnoDB fails to decrypt/encrypt the tablespace since
the tablespace was removed too early, and this leads to
a test case failure.
Solution:
=========
Avoid the removal of a tablespace from default_encrypt_list
only when
1) another active encryption thread is working on the tablespace
2) the tablespace is eligible for key rotation
3) the tablespace is in the flushing phase
Removed the workaround in encryption.innodb_encryption_filekeys test case.
Set tests to non-valgrind:
oqgraph.social
encryption.innodb-page_encryption
binlog_encryption.encrypted_master
innodb.innodb-page_compression_lz4
main.lock_multi_bug38499
main.lock_multi_bug38691
This patch fixes parsing problems concerning derived tables that use table
value constructors (TVC) with LIMIT and ORDER BY clauses of the form
((VALUES ... LIMIT ...) ORDER BY ...) as dt
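For example, a query of this shape failed to parse before the fix (the values
are illustrative):

SELECT * FROM ((VALUES (2), (1), (3) LIMIT 2) ORDER BY 1) AS dt;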
The fix has to be applied only to 10.3, as 10.4, which employs different
grammar rules, has no such problems. The test cases should be merged
upstream.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
In commit 83d2e0841e (MDEV-24041)
we failed to notice that in addition to the bug with
DELETE and ON DELETE CASCADE, there is another bug with
UPDATE and ON UPDATE CASCADE.
row_ins_foreign_fill_virtual(): Use the correct memory heap
for everything that will be reachable from the cascade->update
that we return to the caller.
Note: It is correct to use the shorter-lived cascade->heap for
rec_get_offsets(), because that memory will be abandoned when
row_ins_foreign_fill_virtual() returns.
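A hedged sketch of the UPDATE / ON UPDATE CASCADE shape with a virtual column
on the child table (schema is illustrative):

CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (
  id INT,
  v INT AS (id + 1) VIRTUAL,
  KEY (v),
  FOREIGN KEY (id) REFERENCES parent (id) ON UPDATE CASCADE
) ENGINE=InnoDB;
INSERT INTO parent VALUES (1);
INSERT INTO child (id) VALUES (1);
-- The cascaded update rebuilds the virtual column value for the child row;
-- cascade->update must live in a heap that outlives row_ins_foreign_fill_virtual().
UPDATE parent SET id = 2;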
ha_innobase::prepare_inplace_alter_table(): Unless the table is
being rebuilt, determine the maximum column length based on the
current ROW_FORMAT of the table. When TABLE_SHARE (and the .frm file)
contains no explicit ROW_FORMAT, InnoDB table creation or rebuild
will use innodb_default_row_format.
Based on mysql/mysql-server@3287d33acd
InnoDB tablespace identifiers and page numbers are 32-bit numbers.
Let us use a 32-bit type for them in innochecksum.
The changes in commit 1918bdf32c
broke the build on 32-bit Windows.
Thanks to Vicențiu Ciorbaru for an initial version of this fixup.
The SQL processor failed to catch references to unknown columns and other
errors of the semantic analysis phase in the specification of a
hanging recursive CTE. This happened because the function
With_clause::prepare_unreferenced_elements() failed to detect a CTE as
a hanging CTE if the CTE was recursive.
Fixing this problem in the code of the mentioned function opened another
problem: EXPLAIN started including the lines for the specifications of
hanging recursive CTEs in its output. This problem was also fixed in this
patch.
Approved by Dmitry Shulga <dmitry.shulga@mariadb.com>
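A hedged example of a hanging recursive CTE (not used by the main query) whose
specification contains an unknown column and should now be rejected during
semantic analysis (names invented):

CREATE TABLE t1 (a INT);
WITH RECURSIVE cte AS
  (SELECT no_such_column FROM t1 UNION ALL SELECT * FROM cte)
SELECT * FROM t1;
-- Expected after the fix: an 'Unknown column' error for no_such_column.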
from view
A crash of the server happened when executing a stored procedure whose only
query calculated window functions over a mergeable view specified
as a select from a non-mergeable view. The crash could be reproduced if
the window specifications of the window functions were identical and both
contained PARTITION lists and ORDER BY lists. A crash also happened on
the second execution of the prepared statement created for such a query.
If derived tables or CTEs are used instead of views, the problem still
manifests itself, crashing the server.
When optimizing the window specifications of a window function the
server can substitute the partition lists and the order lists for
the corresponding lists from another window specification in the case
when the lists are identical. This substitution is not permanent and should
be rolled back before the second execution. This rollback was not done, which
ultimately led to a crash when resolving the column names at the second
execution of the SP/PS.
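A hedged reconstruction of the described scenario (names invented): a
non-mergeable view, a mergeable view on top of it, and a stored procedure whose
only query uses two window functions with identical window specifications:

CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
-- v1 is non-mergeable (GROUP BY); v2 is a mergeable view over v1.
CREATE VIEW v1 AS SELECT a, SUM(b) AS s FROM t1 GROUP BY a;
CREATE VIEW v2 AS SELECT * FROM v1;
CREATE PROCEDURE p1()
  SELECT SUM(s) OVER (PARTITION BY a ORDER BY s) AS w1,
         AVG(s) OVER (PARTITION BY a ORDER BY s) AS w2
  FROM v2;
CALL p1();
CALL p1();  -- the second execution used to crash before the fix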
This bug appeared after the patch for bug MDEV-23886. Due to this bug,
execution via prepared statements or stored procedures of queries with CTEs
that used the same CTE at least twice caused crashes of the server.
It happened because the select created for any usage of a CTE other than the
first one erroneously was not included into all_selects_list.
This patch corrects the patch applied to fix the bug MDEV-26108.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
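A hedged sketch of a query that uses the same CTE twice, run through a
prepared statement (names are illustrative):

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2);
PREPARE stmt FROM
  'WITH cte AS (SELECT a FROM t1)
   SELECT * FROM cte AS r1, cte AS r2';
EXECUTE stmt;
EXECUTE stmt;  -- repeated execution used to crash before the fix
DEALLOCATE PREPARE stmt;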
When building with `make`, gcov files use full path names; when building with
`ninja`, gcov files use paths relative to the source root.
In gcov_one_file() the current directory is somewhere under CMakeFiles/,
so if a file exists in the specified location, this location
must have been a full path name.
For every file.gcda file, gcov <7.x created file.cc.gcda.gcov,
while gcov 7.x and 8.x create file.cc.gcov,
and sometimes otherfile.h.gcov or otherfile.ic.gcov for included files.
(gcov 9.x+ creates .json.gz files, see MDEV-26102)
So we use `gcov -l`, which will create file.cc.gcda##file.cc.gcov,
file.cc.gcda##otherfile.h.gcov, etc., and we search and parse all
those file.cc.gcda*.gcov files.
The bug affected execution of queries with WITH clauses containing so-called
hanging recursive CTEs in PREPARE mode. A CTE is hanging if it is not used
in the query. Preparation of a prepared statement from a query with a
hanging CTE caused a leak in the server, and execution of this prepared
statement led to an assertion failure in a server built in debug mode.
This happened because the units specifying recursive CTEs erroneously were
not cleaned up if those CTEs were hanging.
The patch enforces cleanup of hanging recursive CTEs in the same way as
other hanging CTEs.
Approved by dmitry.shulga@mariadb.com
Test cases like the following one produce different result sets when run
with and without the option --ps-protocol.
CREATE TABLE t1(a INT);
--enable_metadata
(SELECT MAX(a) FROM t1) UNION (SELECT MAX(a) FROM t1);
--disable_metadata
DROP TABLE t1;
Result sets differ in metadata for the query
(SELECT MAX(a) FROM t1) UNION (SELECT MAX(a) FROM t1);
The reason for the different query metadata content is that, for queries
with UNION, the items created in the JOIN preparation phase are placed into
the item_list of SELECT_LEX_UNIT, whereas for queries without UNION the
item_list of SELECT_LEX is used instead.
In case a stored procedure is invoked in PS mode with an argument of type ROW(),
like the following one:
CALL p1(ROW(10,20))
such a statement fails with the error
ER_OPERAND_COLUMNS (1241): Operand should contain 1 column(s)
The reason for emitting the error is that the wrong method was invoked
when fixing the item corresponding to the stored procedure argument:
the method fix_fields_if_needed_for_scalar() was called instead of
fix_fields_if_needed() that should have been called.
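A hedged end-to-end example of the failing shape (the procedure body is
illustrative):

CREATE PROCEDURE p1(r ROW(x INT, y INT))
  SELECT r.x + r.y;
PREPARE stmt FROM 'CALL p1(ROW(10,20))';
EXECUTE stmt;  -- used to fail with ER_OPERAND_COLUMNS before the fix
DEALLOCATE PREPARE stmt;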
The columns that are part of a DEFAULT expression were not read-marked
in statements like UPDATE...SET b=DEFAULT.
The problem is that an `F(DEFAULT)` expression depends on the left-hand side of
the assignment. However, setup_fields accepts only the right-hand side value,
and so does Item::fix_fields.
Thus, b=DEFAULT(b) works fine, because Item_default_value has
information on which field it is the default of:
if (thd->mark_used_columns != MARK_COLUMNS_NONE)
def_field->default_value->expr->update_used_tables();
in Item_default_value::fix_fields().
It is not reasonable to pass the left-hand side to Item::fix_fields, because
the case is rare, so the rewrite
b= F(DEFAULT) -> b= F(DEFAULT(b))
is made instead.
Both UPDATE and multi-UPDATE are affected; however, no form of INSERT
is: INSERT marks all the fields in DEFAULT expressions for read in
TABLE::mark_default_fields_for_write().
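A hedged illustration (hypothetical table, assuming a DEFAULT expression that
reads another column):

CREATE TABLE t1 (a INT, b INT DEFAULT (a + 1));
INSERT INTO t1 (a, b) VALUES (1, 100);
-- F(DEFAULT) is rewritten to F(DEFAULT(b)), so column a, which b's DEFAULT
-- expression reads, gets marked for read.
UPDATE t1 SET b = COALESCE(DEFAULT, 0);
UPDATE t1 SET b = DEFAULT;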
If test_if_skip_sort_order() decides to use an index to produce the required
ordering, it should disable the "Range checked for each record" optimization.
This is because Range-checked-for-each-record may decide to use an index
(or an index_merge) which will not produce the required ordering.
The problem is the same as in MDEV-18166: columns in a virtual field
expression are not marked for read, while the field itself is.
field->register_field_in_read_map() should be called for read-marking all
fields.
The test is reproducible only in 10.4+; however, the fix is applicable to
10.2+.
Several different test cases were failing for the same reason: the
fields in a vcol expression were not marked while marking the columns of a key
containing a virtual column for read.
Fix: make marking columns of a key for read a special case where
register_field_in_read_map() is done instead of plain bitmap_set_bit().
Some test cases are only reproducible in 10.4+, but the fix is applicable
to 10.2+.
This is the 10.2+ part of a jira task.
The following bugs regarding virtual column marking have been fixed:
1. UPDATE of a partitioned table, where the optimizer has chosen a
secondary index to make a filesort;
2. INSERT into a table with a nonblob field generated from a blob, with
binlog enabled and binlog_row_image=noblob.
3. DELETE from a view on a table with a virtual column.
Generally, the assertion happens in the update_virtual_fields() call.
These bugs are root-caused by missing field marking for dependent fields
of a virtual column.
Therefore the fix is: mark all the fields involved in the vcol expression by
calling field->register_field_in_read_map() instead of just setting a single
bit.
Bug 3 was reproducible only on 10.4+; however, the problem might have just been
invisible in the earlier versions. The fix is applicable to 10.2-10.3 as
well.
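A hedged sketch of case 2 above (names invented; ROW-format binary logging is
assumed):

SET SESSION binlog_row_image = 'NOBLOB';
CREATE TABLE t1 (
  pk INT PRIMARY KEY,
  b BLOB,
  v INT AS (LENGTH(b)) STORED
) ENGINE=InnoDB;
-- Writing v requires reading b, so b must be read-marked even though
-- binlog_row_image=NOBLOB excludes unneeded blobs from the row image.
INSERT INTO t1 (pk, b) VALUES (1, REPEAT('x', 100));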