When WSREP(thd) is not true we use my_error(...) to print the error. This
sets thd->is_error() to true, so we no longer get the generic error.
Signed-off-by: Sachin Setiya <sachin.setiya@mariadb.com>
In the current code temporary tables are identified and opened before
other tables. CTE tables are identified in the same procedure as
regular tables. When a temporary table and a CTE table have the same
name T, any reference to T that is in the scope of the CTE declaration
must be associated with this CTE. Yet this was not done properly.
When a reference to T was found in the scope of the declaration
of CTE T, a pointer to this CTE was set in the reference. No check
was done that the reference had already been associated with a
temporary table. As a result, if the temporary table T had been created,
the reference to T was considered simultaneously a reference to the CTE
named T and a reference to the temporary table named T. This
confused the code executed later and caused a crash of the server.
Now when a table reference is associated with a CTE, any previous
association with a temporary table is dropped.
This problem could have been easily avoided if the temporary tables
were not identified prematurely.
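A minimal sketch of the conflicting scenario (table and column names are
hypothetical):

CREATE TEMPORARY TABLE t (a INT);
WITH t AS (SELECT 1 AS a)
SELECT * FROM t; -- within the scope of the CTE declaration, t must
                 -- resolve to the CTE, not to the temporary table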
This is what started happening after the patch that fixed
bug mdev-10454. The problem could be reproduced with the query
reported for that bug:
SELECT * FROM t t1 right JOIN t t2 ON (t2.pk = t1.pk)
WHERE (t2.i, t2.pk) NOT IN ( SELECT t3.i, t3.i FROM t t3, t t4 )
AND t1.c = 'foo';
The patch added an implementation of propagate_equal_fields() for
the class Item_row and thus opened the possibility of equal field
substitutions.
At the prepare stage, after setup_conds() called for the WHERE condition
had completed, the maybe_null flag of the Item_row object created
for (t2.i, t2.pk) was set to false, because the maybe_null flags of
both elements were false. However, the maybe_null flag for
t1.pk from the ON condition was set to true, because t1 was an inner
table of an outer join.
At the optimization stage the outer join was converted to an inner join,
but the maybe_null flags were not corrected and remained the same.
So after the substitution t2.pk/t1.pk the maybe_null flag for the
row remained false, while the maybe_null flag for the second element of
the row was true. As a result, when the IN-to-EXISTS transformation
was performed for the NOT IN predicate, the guard variables were
not created for the elements of the row, yet a guard object for
the second element was created. The object was not valid because
it referred to NULL as a guard variable. This ultimately caused
a crash when the expression with the guard was evaluated at the
execution stage.
The patch makes sure that guard objects are not created without
guard variables.
Yet it does not resolve the problem of inconsistent maybe_null flags,
and the problem may pop up in other pieces of code. The resolution
of this problem is not easy, but it should be resolved in future
versions.
PROBLEM
By design, stats estimation always reads uncommitted data. In this scenario
an uncommitted transaction has deleted all rows in the table. In InnoDB,
uncommitted delete records are delete-marked but not actually removed
from the B-tree until the transaction has committed or a read view for the
rows is present. While calculating persistent stats we were ignoring the
delete-marked records; since all the records were delete-marked, we
estimated the number of rows in the table as zero, which led to bad plans
in other transactions operating on the table.
FIX
Introduced a system variable called innodb_stats_include_delete_marked
which, when enabled, includes delete-marked records in stats
calculations.
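A hedged sketch of the scenario and of the new variable (table name is
hypothetical):

-- session 1: delete all rows, but do not commit
BEGIN;
DELETE FROM t1;
-- session 2: with the variable enabled, the delete-marked records are
-- still counted when the persistent stats are recalculated
SET GLOBAL innodb_stats_include_delete_marked = ON;
ANALYZE TABLE t1;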
Apparently, WL#8149 QA did not cover the code changes made to
online table rebuild (which was introduced in MySQL 5.6.8 by WL#6255)
for ROW_FORMAT=REDUNDANT tables.
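A hedged sketch of the affected scenario (table definition is hypothetical);
concurrent DML against such a table during the rebuild is logged by the
row_log_table_* functions discussed below:

CREATE TABLE t1 (a INT PRIMARY KEY, b INT AS (a + 1) VIRTUAL, KEY(b))
ROW_FORMAT=REDUNDANT ENGINE=InnoDB;
ALTER TABLE t1 FORCE, ALGORITHM=INPLACE, LOCK=NONE; -- online table rebuild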
row_log_table_low_redundant(): Log the new values of indexed virtual
columns (ventry) only once.
row_log_table_low(): Assert that if o_ventry is specified, the
logged operation must not be ROW_T_INSERT, and ventry must be specified
as well.
row_log_table_low(): When computing the size of old_pk, pass v_entry=NULL to
rec_get_converted_size_temp(), to be consistent with the subsequent call
to rec_convert_dtuple_to_temp() for logging old_pk. Assert that
old_pk never contains information on virtual columns, thus proving that this
change is a no-op.
RB: 13822
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
Problem:
=======
The autoincrement value gives duplicate values for the following reasons:
(1) In the InnoDB handler function, the current autoincrement value is not
changed based on a newly set auto_increment_increment or
auto_increment_offset variable.
(2) The handler function does the rounding logic and changes the current
autoincrement value, and InnoDB is not aware of the change to the current
autoincrement value.
Solution:
========
To fix problem (1), InnoDB now always respects the auto_increment_increment
and auto_increment_offset values when computing the current autoincrement
value.
By fixing problem (2), the handler layer no longer changes the current
autoincrement value.
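A hedged sketch of the behaviour the fix guarantees (table name is
hypothetical):

CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
SET SESSION auto_increment_increment = 10, auto_increment_offset = 5;
INSERT INTO t1 VALUES (NULL), (NULL), (NULL);
-- InnoDB itself now generates 5, 15, 25 and advances its counter,
-- instead of relying on the handler layer to round the values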
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 13748
Problem:
=======
The inplace alter algorithm determines that the table must be rebuilt
if the table undergoes a row format or key block size change, i.e. if
the handler flags contain only a change of table create options. If an
alter operation combines inplace-ignorable flags with a change of table
create options, it wrongly leads to a table rebuild.
Solution:
========
During the check for rebuild, ignore the inplace-ignorable flags and
check only the table create options.
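A hedged illustration, assuming that changing a column default is one of
the inplace-ignorable operations (table and column names are hypothetical):

-- AUTO_INCREMENT is a create option that needs no rebuild; combining it
-- with an ignorable default change must not force a rebuild either
ALTER TABLE t1 AUTO_INCREMENT = 100, ALTER COLUMN b SET DEFAULT 5,
ALGORITHM=INPLACE;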
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
Reviewed-by: Marko Makela <marko.makela@oracle.com>
RB: 13172
This patch fixed some problems that occurred with subqueries that
contained directly or indirectly recursive references to recursive CTEs.
1. A [NOT] IN predicate with a constant left operand and a non-correlated
subquery as the right operand used in the specification of a recursive CTE
was considered as a constant predicate and was evaluated only once.
Now such a predicate is re-evaluated after every iteration of the process
that produces the records of the recursive CTE.
2. The Exists-To-IN transformation could be applied to [NOT] IN predicates
with recursive references. This opened the possibility of materialization
for the subqueries used as right operands. Yet materialization
is prohibited for subqueries that contain a recursive reference.
Now the Exists-To-IN transformation cannot be applied to subquery
predicates with recursive references.
The function st_select_lex::check_subqueries_with_recursive_references()
is called now only for the first execution of the SELECT.
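A hedged sketch of case 1 (CTE name is hypothetical): a constant left
operand and a non-correlated subquery with a recursive reference, which
must now be re-evaluated after every iteration:

WITH RECURSIVE r AS (
  SELECT 1 AS a
  UNION
  SELECT a + 1 FROM r
  WHERE 5 NOT IN (SELECT a FROM r) AND a < 10
)
SELECT * FROM r;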
Alias the InnoDB ulint and lint data types to size_t and ssize_t,
which are the standard names for the machine-word-width data types.
Correspondingly, define ULINTPF as "%zu" and introduce ULINTPFx as "%zx".
In this way, better compiler warnings for type mismatch are possible.
Furthermore, use PRIu64 for the 64-bit format, and define
the feature macro __STDC_FORMAT_MACROS to enable it on Red Hat systems.
Fix some errors in error messages, and replace some error messages
with assertions.
Most notably, an IMPORT TABLESPACE error message in InnoDB was
displaying the number of columns instead of the mismatching flags.
This is a reduced version of an originally much larger patch.
We will keep the definition of the ulint, lint data types unchanged,
and we will not be replacing fprintf() calls with ib_logf().
On Windows, use the standard format strings instead of nonstandard
extensions.
This patch fixes some errors in format strings.
Most notably, an IMPORT TABLESPACE error message in InnoDB was
displaying the number of columns instead of the mismatching flags.
When MDEV-6076 repurposed the field PAGE_MAX_TRX_ID, it was assumed
that the field always was 0 in the clustered index of old data files.
This was not the case in IMPORT TABLESPACE (introduced in MySQL 5.6
and MariaDB 10.0), which is writing the transaction ID to all index
pages, including clustered index pages.
This means that on a data file that was at some point of its life
IMPORTed to an InnoDB instance, MariaDB 10.2.4 or later could interpret
the transaction ID as a persistent AUTO_INCREMENT value.
This also means that future changes that repurpose PAGE_MAX_TRX_ID
in the clustered index may cause trouble with files that were imported
at some point of their life.
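For reference, a hedged sketch of the import flow that writes the
transaction ID to all index pages (table name is hypothetical):

ALTER TABLE t1 DISCARD TABLESPACE;
-- copy in the .ibd file (and .cfg) exported from another instance, then:
ALTER TABLE t1 IMPORT TABLESPACE;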
There is a separate minor issue that InnoDB is writing PAGE_MAX_TRX_ID
to every secondary index page, even though it is only needed on leaf
pages. From now on we will write PAGE_MAX_TRX_ID as 0 to non-leaf pages,
just to be able to keep stricter debug assertions.
btr_root_raise_and_insert(): Reset the PAGE_MAX_TRX_ID field on non-root
pages of the clustered index, and on the no-longer-leaf root page of
secondary indexes.
AbstractCallback::is_root_page(): Remove. Use page_is_root() instead.
PageConverter::update_index_page(): Reset the PAGE_MAX_TRX_ID to 0
on other pages than the clustered index root page or secondary index
leaf pages.
Fixed the handling of default values with cached temporal functions so that
the CREATE TABLE statement now succeeds; see the sketch below.
Fixed virtual column session cleanup.
Fixed the error message.
Added quoting of date/time values in cases when this was omitted.
Added a test case in default.test.
Updated test result files.
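A hedged sketch of the kind of statement that now succeeds, assuming cached
temporal functions such as CURRENT_TIMESTAMP and CURRENT_DATE in DEFAULT
clauses (table name is hypothetical):

CREATE TABLE t1 (d DATETIME DEFAULT CURRENT_TIMESTAMP,
                 e DATE DEFAULT CURRENT_DATE);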
Fixed the bug by failing the statement with an error message that explains
that an auto-increment column may not be used in an expression for a
check constraint.
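A hedged sketch of a statement that now fails with the new error message
(table and column names are hypothetical):

CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT, CHECK (a > b));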
Added a test case in check_constraint.test.
Updated existing tests and results.
In case of an error on opening a VIEW (an absent base table, for example)
it is still possible to print its definition, but some variables are not
set (table_list->derived->derived), so it is better not to test them when
there is a safer alternative (table_list itself).
On Windows it cannot properly append to the file, because it
doesn't use CreateFile(..., FILE_APPEND_DATA, ...).
This fixes main.shutdown failures on Windows.
The automatic shortening of a too-long non-unique key should
be a note, not a warning. It is a normal optimization that
doesn't affect correctness, and it should never be converted to
an error, no matter how strict sql_mode is.
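A hedged sketch of a statement that triggers the shortening, assuming a
MyISAM table whose non-unique key exceeds the maximum key length (names
are hypothetical):

CREATE TABLE t1 (a VARCHAR(2000), KEY(a)) ENGINE=MyISAM;
-- the key is automatically shortened to the maximum supported length;
-- this now produces a note instead of a warning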
When a CTE referring to another CTE from the same WITH clause
was used twice, the server could not find the second CTE and
reported a bogus error message.
This happened because, for any unit created as a clone of
a CTE specification, the pointer to the WITH clause that owned the CTE
was not set.
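A hedged sketch of the failing pattern (CTE names are hypothetical):

WITH cte1 AS (SELECT 1 AS a),
     cte2 AS (SELECT a FROM cte1)
SELECT * FROM cte2
UNION ALL
SELECT * FROM cte2; -- the second use of cte2 could not find cte1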
It is assumed that both INSERT ... SELECT statements take so
long that the DROP DATABASE from node2 gets to abort them both,
but on fast machines the insert was too small. Increased the size
of the insert.
* remove the part of galera_var_cluster_address.test that cannot be tested reliably
* reduce running time for galera_gcache_recover_manytrx.test
* Additional wait_conditions for GAL-401.test
Signed-off-by: Sachin Setiya <sachin.setiya@mariadb.com>
* a dedicated test for wsrep_retry_autocommit
* some galera_toi_* tests were only passing because wsrep_retry_autocommit
  was in effect; the tests were changed not to use autocommit
* higher timeout values in galera_2nodes.cnf, galera_3nodes.cnf
Signed-off-by: Sachin Setiya <sachin.setiya@mariadb.com>