Lex_input_stream::scan_ident_delimited() could go beyond the end
of the input when a starting backtick (`) delimiter did not have a
corresponding ending backtick.
Fix: catch the case when yyGet() returns 0, which means
either end-of-query or a straight 0x00 byte inside backticks,
and make the parser fail with a syntax error, reporting the opening
backtick as the location of the error.
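An illustrative statement of the broken kind (the identifier and table
name are hypothetical):
SELECT `unterminated FROM t1;
The opening backtick has no matching closing backtick, so the scanner used
to run past the end of the query; now the statement fails with a syntax
error pointing at the opening backtick.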
In the case of the 'filename' character set in a script like this:
SET CHARACTER_SET_CLIENT=17; -- 17 is 'filename'
SELECT doc.`Children`.0 FROM t1;
the ending backtick was not recognized as such because my_charlen() returns 0 for
a straight backtick (backticks must normally be encoded as @0060 in filename).
The same fix works for 'filename': the execution skips the backtick
and reaches the end of the query, then yyGet() returns 0.
This fix is OK for now. But eventually 'filename' should either be disallowed
as a parser character set, or fixed to handle encoded punctuation properly.
- Adding optional qualifiers to data types:
CREATE TABLE t1 (a schema.DATE);
Qualifiers now work only for three pre-defined schemas:
mariadb_schema
oracle_schema
maxdb_schema
These schemas are virtual (hard-coded) for now, but may turn into real
databases on disk in the future.
- mariadb_schema.TYPE now always resolves to a true MariaDB data
type TYPE without sql_mode specific translations.
- oracle_schema.DATE translates to MariaDB DATETIME.
- maxdb_schema.TIMESTAMP translates to MariaDB DATETIME.
- Fixing SHOW CREATE TABLE to use a qualifier for a data type TYPE
if the current sql_mode translates TYPE to something else.
The above changes fix the reported problem, so this script:
SET sql_mode=ORACLE;
CREATE TABLE t2 AS SELECT mariadb_date_column FROM t1;
is now replicated as:
SET sql_mode=ORACLE;
CREATE TABLE t2 (mariadb_date_column mariadb_schema.DATE);
and the slave can unambiguously treat DATE as the true MariaDB DATE
without ORACLE specific translation to DATETIME.
Similarly,
SET sql_mode=MAXDB;
CREATE TABLE t2 AS SELECT mariadb_timestamp_column FROM t1;
is now replicated as:
SET sql_mode=MAXDB;
CREATE TABLE t2 (mariadb_timestamp_column mariadb_schema.TIMESTAMP);
so the slave treats TIMESTAMP as the true MariaDB TIMESTAMP
without MAXDB specific translation to DATETIME.
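A small sketch of the qualified syntax (the table and column names are
hypothetical):
SET sql_mode=ORACLE;
CREATE TABLE t1 (a mariadb_schema.DATE, b oracle_schema.DATE);
Here column a is a true MariaDB DATE despite sql_mode=ORACLE, while
column b is translated to MariaDB DATETIME.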
* Fix the crash: when the IN-to-EXISTS rewrite causes an error (and so
JOIN::optimize() fails with an error, too), don't call
update_used_tables(); terminate the query execution instead.
* Fix the cause of the error in the IN-to-EXISTS rewrite: don't do
the rewrite if doing it would cause an error of this kind (an example
query is shown after this list):
This version of MariaDB doesn't yet support 'SUBQUERY in ROW in left
expression of IN/ALL/ANY'
* Fix another issue exposed by this testcase:
JOIN::setup_subquery_caches() may be invoked before any select has
saved its query plan, and will crash because none of the SELECTs
has called create_explain_query_if_not_exists() to create the Explain
Data Structure for this SELECT.
TODO: When merging this to 10.2, remove the poorly-placed call to
create_explain_query_if_not_exists made by the fix for MDEV-16153.
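An example query of the previously failing kind (the table and column names
are hypothetical):
SELECT * FROM t1 WHERE ((SELECT a FROM t2), t1.b) IN (SELECT c, d FROM t3);
The left IN operand is a ROW containing a subquery, which previously caused
the error above; with these fixes the rewrite is skipped for such queries,
and an error during the rewrite terminates the query instead of crashing.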
Allocate space for the fields used inside the window function (its arguments
and the fields in the PARTITION BY and ORDER BY clauses) in the ref pointer
array. All fields used inside the window function are part of the temporary
table that is required for the window function computation.
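A minimal example (the table and column names are hypothetical):
SELECT a, SUM(b) OVER (PARTITION BY c ORDER BY d) FROM t1;
Here the argument b and the PARTITION BY / ORDER BY fields c and d all need
slots in the ref pointer array, because they become part of the temporary
table used to compute the window function.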
The opt_for_user subrule was incorrectly scanned before sp_create_assignment_lex(),
so the user name and the host were created on the wrong memory root.
- Reorganizing the grammar to make sure that sp_create_assignment_lex()
is called immediately after PASSWORD_SYM is scanned, so all attributes
are then allocated on its memory root.
- Moving the semantic code as methods to LEX, so the grammar looks as simple as possible.
- Changing text_or_password to be of the data type USER_AUTH*.
As a side effect, the LEX::definer member is no longer used when processing
the SET PASSWORD statement. Everything is done using Bison's stack.
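An illustrative statement of the affected kind (the user, host and password
are hypothetical):
SET PASSWORD FOR 'user1'@'localhost' = PASSWORD('secret');
Here the opt_for_user part is FOR 'user1'@'localhost' and text_or_password
is PASSWORD('secret'); both are now allocated on the memory root created by
sp_create_assignment_lex().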
The bug was introduced by this commit:
commit bf5a144e16
Cause:
In the case of version-based conditional comments, if the condition evaluates
to false, the comment is converted to a regular comment for replication by
replacing "!" with " ".
A nested comment inside a conditional comment is replicated as is. Nested
comments are supported only inside conditional comments, so when the
comment on the slave is no longer a conditional comment, statement
execution fails on the slave.
Fix:
For replication, convert the nested comment start from "/*" to "(*" and the
comment end from "*/" to "*)".
Change-Id: I1a8e385a267b2370529eade094f0258fa96886c0
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
This bug could happen only with a stored procedure containing queries with
more than one reference to a CTE that used local variables / parameters.
This bug was the result of an incomplete merge of the fix for the bug
MDEV-17154. The merge covered the usage of parameter markers occurring in a CTE
that was referenced more than once, but missed coverage of local variables.
Fix spelling mistakes, e.g.:
- dont -> don't
- occurence -> occurrence
- succesfully -> successfully
- easyly -> easily
Also remove trailing space in selected files.
These changes span:
- server core
- Connect and Innobase storage engine code
- OQgraph, Sphinx and TokuDB storage engines
Related to MDEV-21769.
The existing syntax for renaming a column uses the "ALTER TABLE ... CHANGE"
command. This requires the full column specification to rename the column.
This patch adds the new syntax "ALTER TABLE ... RENAME COLUMN", which does not
require users to provide the full column specification. The new
syntax picks the in-place or copy algorithm in the same way as the
existing "ALTER TABLE ... CHANGE" command. The existing syntax
"ALTER TABLE ... CHANGE" will continue to work.
Syntax changes
==============
ALTER TABLE tbl_name
[alter_specification [, alter_specification] ...]
[partition_options]
The following new <alter_specification> is added:
| RENAME COLUMN <oldname> TO <newname>
Where <oldname> and <newname> are identifiers for old name and new
name of the column.
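For example (the table and column names are hypothetical):
ALTER TABLE t1 RENAME COLUMN old_col TO new_col;
renames the column without repeating its definition, whereas the existing
syntax needs the full specification:
ALTER TABLE t1 CHANGE old_col new_col <full column definition>;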
Related to: WL#10761
Rewriting the GRANT/REVOKE grammar to make more use of the bison stack and the Sql_cmd_ style
1. Removing a few members from LEX:
- uint grant, grant_to_col, which_columns
- List<LEX_COLUMN> columns
- bool all_privileges
2. Adding classes Grant_object_name, Lex_grant_object_name
3. Adding classes Grant_privilege, Lex_grant_privilege
4. Adding struct Lex_column_list_privilege_st, class Lex_column_list_privilege
5. Rewriting the GRANT/REVOKE grammar to use new classes and pass them through
bison stack (rather than directly access LEX members)
6. Adding classes Sql_cmd_grant* and Sql_cmd_revoke*,
changing GRANT/REVOKE to use LEX::m_sql_cmd.
7. Adding the "sp_handler" grammar rule and removing some duplicate grammar
for GRANT/REVOKE for different kinds of SP objects.
8. Adding a new rule comma_separated_ident_list, reusing it in:
- with_column_list
- column_list_privilege
with condition_pushdown_from_having
This bug could manifest itself for queries with GROUP BY and HAVING clauses
when the HAVING clause was a conjunctive condition that depended
exclusively on grouping fields and at least one conjunct contained an
equality of the form fld=sq where fld is a grouping field and sq is a
constant subquery.
In this case the optimizer tries to perform a pushdown of the HAVING
condition into WHERE. To construct the pushable condition the optimizer
first transforms all multiple equalities in HAVING into simple equalities.
This has to be done for a proper processing of the pushed conditions
in WHERE. The multiple equalities at all AND/OR levels must be converted
to simple equalities because any multiple equality may refer to a multiple
equality at the upper level.
Before this patch the conversion was performed like this:
multiple_equality(x,f1,...,fn) => x=f1 and ... and x=fn.
When an equality item for x=fi was constructed, both the items for x and fi
were cloned. If x happened to be a constant subquery that could not be
cloned, the conversion failed. If the conversions of multiple equalities
performed earlier had succeeded, then the whole condition was left in an
inconsistent state that could cause various failures.
The solution provided by the patch is:
1. to use a different conversion rule if x is a constant
multiple_equality(x,f1,...,fn) => f1=x and f2=f1 and ... and fn=f1
2. not to clone x if it's a constant.
Such conversions cannot fail and besides the result of the conversion
preserves the equivalence of f1,...,fn that can be used for other
optimizations.
This patch also made sure that expensive predicates are not pushed from
HAVING to WHERE.
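An example of the affected shape (the table and column names are
hypothetical):
SELECT a, MAX(b)
FROM t1
GROUP BY a
HAVING a = (SELECT MAX(c) FROM t2) AND a > 1;
Here the HAVING conjunct a=(SELECT MAX(c) FROM t2) puts a constant subquery
into a multiple equality; the old conversion rule tried to clone it and
failed, while the new rule uses the subquery item as is.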
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that sp_head object owns)
(10.3 requires extra work due to sp_package, will commit a separate
patch for it)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that didn't hold a valid sp_head object (it was
not created/already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that sp_head object owns)
(10.3 version of the fix, with handling for class sp_package)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that didn't hold a valid sp_head object (it was
not created/already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
Set the read_set bitmap for a view from the JOIN::all_fields list instead of
JOIN::fields_list, as split_sum_func would have added items to the all_fields list.
Add support for referential constraints directly in column definitions:
create table t1 (id1 int primary key);
create table t2 (id2 int references t1(id1));
Referenced field name can be omitted if equal to foreign field name:
create table t1 (id int primary key);
create table t2 (id int references t1);
Until 10.5 this syntax was understood by the parser but was silently
ignored.
For generated columns this syntax is disabled at the parser level
with ER_PARSE_ERROR. Note that the separate FOREIGN KEY clause for generated
columns is disabled at the storage engine level.
In order to make it easier to unify the two *.yy files,
this patch collects all differing rules at the end of the *.yy files,
so the rules section looks like this:
%%
common rules
different rules
Adding:
- new class sp_expr_lex
- new grammar rule expr_lex, which includes both reset_lex()
and its corresponding restore_lex()
Also:
- Moving a few methods from LEX to sp_expr_lex.
- Moving code from *.yy into the new sp_expr_lex methods
sp_repeat_loop_finalize() and sp_if_expr().
This change makes it easier to edit the related grammar
(and makes it easier to unify sql_yacc.yy and sql_yacc_ora.yy later).
* Explicit STARTS syntax
* SHOW CREATE
* Default STARTS rounding depending on INTERVAL type
* Warn when STARTS timestamp is later than query time
* Fix uninitialized Lex->create_last_non_select_table under
mysql_unpack_partition()
Default STARTS rounding depending on INTERVAL type
If the STARTS clause is omitted, a default one is assigned with a value
derived from the query timestamp. The STARTS value is rounded
depending on the INTERVAL type:
SECOND: no rounding is done;
MINUTE: the timestamp seconds are set to 0;
HOUR: the timestamp seconds and minutes are set to 0;
DAY, WEEK, MONTH and YEAR: the timestamp seconds, minutes and hours are
set to 0 (the date of rotation is kept as the current date).
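A hedged sketch of the syntax (the table, interval and timestamp are
hypothetical):
CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 DAY STARTS '2021-01-01 00:00:00'
(PARTITION p0 HISTORY, PARTITION pn CURRENT);
If the STARTS clause were omitted here, the default would be the query
timestamp with hours, minutes and seconds set to 0, because the interval
unit is DAY.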
In order to:
- make it easier to unify sql_yacc.yy and sql_yacc_ora.yy
- move more functionality from the parser to Type_handler
(so plugins can override the behavior)
this patch:
- removes rules sp_param_field_type_string and sp_param_field_type
from sql_yacc_ora.yy
- adds a new virtual method Type_handler::Column_definition_set_attributes()
LEX::parsed_select_expr_cont(): Replace a condition with an
assertion DBUG_ASSERT(!s2->next_select()), and always
initialize sel1=s2, because all subsequent code paths will
assign to sel1->first_nested.
This was flagged by GCC reporting -Wmaybe-uninitialized
for the statement last->link_neighbour(sel1).
Shift-Reduce conflicts prevented parsing some queries with subqueries that
used set operations when the subqueries occurred in expressions or in IN
predicands.
The grammar rules for query expression were transformed in order to avoid
these conflicts. New grammar rules employ an idea taken from MySQL 8.0.
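A query of the previously unparsable kind (the table and column names are
hypothetical):
SELECT * FROM t1
WHERE a IN ((SELECT b FROM t2) UNION (SELECT c FROM t3));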
Pruning fix for SYSTEM_TIME INTERVAL partitioning.
Allocating one more element in range_int_array for CURRENT partition
is required for RANGE pruning to work correctly
(get_partition_id_range_for_endpoint()).
SYSTEM_TIME partitioning: COLUMN properties removed. Partitioning is
now pure RANGE based on UNIX_TIMESTAMP(row_end).
The DECIMAL type is now allowed for RANGE partitioning, so we can partition by
UNIX_TIMESTAMP() (but not by DATETIME, which of course depends on the local
time zone).
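A rough sketch of the now-permitted DECIMAL-based RANGE partitioning
(the table, column and boundary are hypothetical):
CREATE TABLE t1 (a INT, ts TIMESTAMP(6) NOT NULL)
PARTITION BY RANGE (UNIX_TIMESTAMP(ts))
(PARTITION p0 VALUES LESS THAN (UNIX_TIMESTAMP('2021-01-01 00:00:00')),
 PARTITION pmax VALUES LESS THAN MAXVALUE);
With TIMESTAMP(6), UNIX_TIMESTAMP(ts) returns a DECIMAL value, which RANGE
partitioning now accepts.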
With --skip-debug-assert, DBUG_ASSERT(false) will allow execution to
continue. Hence, we will need /* fall through */ after them.
Some DBUG_ASSERT(0) were replaced by break; when the switch () statement
was followed by DBUG_ASSERT(0).
- Initialize variables that could be used uninitialized
- Added extra end space to DbugStringItemTypeValue to get rid of warnings
from c_ptr()
- Session_sysvars_tracker::update() accessed uninitialized memory if called
with a NULL value.
- get_schema_stat_record() accessed uninitialized memory if HA_KEY_LONG_HASH
was used.
- parse_vcol_defs() accessed random memory for tables without keys.
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the warnings,
mostly about the deprecated `register` keyword.
Too many warnings came from Mroonga and I gave up on it.
This patch includes:
- MDEV-19639 sql_mode=ORACLE: Wrong SHOW PROCEDURE output for sysvar:=expr
- MDEV-19640 Wrong SHOW PROCEDURE output for SET GLOBAL sysvar1=expr, sysvar2=expr
- Preparatory refactoring for MySQL WL#4179
Detailed change list:
1. Changing sp_create_assignment_lex() to accept the position
in the exact query buffer instead of a "bool no_lookahead".
This actually fixes MDEV-19639.
Previously, sp_create_assignment_lex() was
called too late, when the parser had gone far from the beginning
of the statement, so only a part of the statement got into
sp_instr_stmt.
2. Generating "SET" or "SET GLOBAL" inside sp_create_assignment_instr()
depending on the option type.
This fixes MDEV-19640.
Previously, the code passed (through no_lookahead)
the position of the word GLOBAL into sp_create_assignment_lex(), which
worked only for the left-most assignment (an illustrative statement is
shown after this list).
3. Fixing the affected rules to use:
- ident_cli instead of ident
- ident_cli_set_usual_case instead of ident_set_usual_case
4. Changing the input parameter in:
- LEX::set_system_variable()
- LEX::call_statement_start()
- LEX::set_variable()
from just LEX_CSTRING to Lex_ident_sys_st for stricter data type control:
to make sure that no one passes an ident_cli
(a fragment of the original query in the client character set)
instead of a server-side identifier
(a utf8 identifier allocated on THD when needed).
5. Adding Lex_ident_sys() in places where the affected functions are called.
6. Moving all calls of sp_create_assignment_lex() to the places
just before parsing set_expr_or_default.
This makes the grammar clearer, because
sp_create_assignment_lex() and sp_create_assignment_instr()
now stay near each other, so the balance of LEX's push/pop
is easier to follow.
This will also help with WL#4179.
7. Adding the class sp_lex_set_var and
moving the initialization code from
sp_create_assignment_lex() to the constructor of sp_lex_set_var.
This will also help with WL#4179.
8. Moving a part of the "set" grammar rule into a separate
rule "set_param".
This makes the grammar easier to read and removes
one shift/reduce conflict.
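For illustration of items 1 and 2 (the procedure name and the assignments
are hypothetical):
CREATE PROCEDURE p1()
  SET GLOBAL max_connections=100, sql_mode=DEFAULT;
After the fix each assignment becomes its own sp_instr_stmt covering the
full text of that assignment, and the generated statements read
"SET GLOBAL max_connections=100" and "SET sql_mode=DEFAULT" (the second
assignment has session scope), instead of only the left-most assignment
being handled correctly.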
This patch complements the patch that fixes bug MDEV-18479.
This patch takes care of possible overflow when calculating the
estimated number of rows in a materialized derived table / view.
query with VALUES()
A table value constructor can be used in all contexts where a select
can be used. In particular an ORDER BY clause or a LIMIT clause or both
of them can be attached to a table value constructor to produce a new
query. Unfortunately execution of such queries was not supported.
This patch fixes the problem.
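For example, a query roughly like this now works as expected (the values
are arbitrary):
VALUES (2,'b'), (1,'a'), (3,'c') ORDER BY 1 LIMIT 2;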
Only take LOCK_plugin for plugin system variables.
Reverted an optimisation that was originally done for the session tracker: it
makes much less sense now. Specifically, it would only matter if connections
wanted to track plugin session variable changes and these changes actually
happened frequently. If this ever becomes an issue, there are much better
ways to optimise this workload.
Part of MDEV-14984 - regression in connect performance
This patch corrects the patch for MDEV-19324. The latter did not
work properly in the cases when the transformation
(SELECT ... ORDER BY ...) LIMIT ... =>
SELECT ... ORDER BY ... LIMIT ...
was applied to the operands of a set operation.
If a select query was of the form (SELECT ... ORDER BY ...) LIMIT ...
then in most cases it returned an incorrect result. This happened because
SELECT ... ORDER BY ... was wrapped into a select with a materialized
derived table:
SELECT ... ORDER BY ... =>
SELECT * FROM (SELECT ... ORDER BY ...) dt.
Yet for any materialized derived table ORDER BY without LIMIT is ignored.
This patch resolves the problem by the conversion
(SELECT ... ORDER BY ...) LIMIT ... =>
SELECT ... ORDER BY ... LIMIT ...
at the parser stage.
Similarly
((SELECT ... UNION ...) ORDER BY ...) LIMIT ...
is converted to
(SELECT ... UNION ...) ORDER BY ... LIMIT ...
This conversion optimizes execution of the query because the result of
(SELECT ... UNION ...) ORDER BY ... is not materialized into a temporary
table anymore.
A sequence of <digits>e<mbhead><mbtail>, e.g.:
SELECT 123eXYzzz FROM t1;
was not scanned correctly (where XY is a multi-byte character).
The multi-byte head byte X was appended to 123e separately from
the multi-byte tail byte Y, so a pointer to "Yzzz" was passed
into scan_ident_start(), which failed on a bad multi-byte sequence.
After this change, scan_ident_start() gets a pointer to "XYzzz",
so it correctly sees the whole multi-byte character.
When pushing a condition from HAVING into WHERE the function
st_select_lex::pushdown_from_having_into_where() transforms column
references in the pushed condition then performs cleanup of
items of the condition and finally calls fix_fields() for the condition
items. The cleanup is performed by a call of the method walk() with
cleanup_processor as the first parameter. Unfortunately this sequence
of calls does not work if the condition contains cached items, because
fix_fields() cannot go through Item_cache items and this leaves
underlying items unfixed.
The solution used in this patch is simply not to process Item_cache
objects when performing cleanup of the pushed condition. To make the
traversal procedure walk() skip Item_cache objects, the third parameter
of this call of walk() is set to the fictitious pointer (void *) 1, and
Item_cache::walk() is changed to take no action when it receives this
value as the third parameter.
A syntax error was reported for any INSERT statement with explicit
partition selection if it used a column list.
Fixed by saving the parsing place before parsing the clause for explicit
partition selection and restoring it when the clause has been parsed.
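An example of a statement that was previously rejected (the table, partition
and column names are hypothetical):
INSERT INTO t1 PARTITION (p0) (a, b) VALUES (1, 2);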
This bug is caused by pushdown from HAVING into WHERE.
It appears because the condition that is pushed wasn't fixed.
It was also discovered that condition pushdown from HAVING into
WHERE was done incorrectly: there is no need to build clones for some
conditions that can be pushed, as they can simply be moved from HAVING
into WHERE without cloning.
The methods build_pushable_cond_for_having_pushdown() and
remove_pushed_top_conjuncts_for_having() are changed.
It was also found that no transformation was made for the fields of the
pushed condition, so the field_transformer_for_having_pushdown transformer
is added.
New tests are added. Some comments are changed.
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, use disable_ps_protocol
for the SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
This solves the following issues:
* unlike lex->m_sql_cmd and lex->sql_command, thd->query_plan_flags
is not reset in Prepared_statement::execute; it survives
till log_slow_statement(), so slow logging behaves correctly in --ps
* using thd->query_plan_flags for both slow_log_filter and
log_slow_admin_statements means the definition of "admin" statements
for the slow log is the same no matter how it is filtered out.
Part#2 (final): rewriting the code to pass the correct enum_sp_aggregate_type
to the sp_head constructor, so sp_head never changes its aggregation type
later on. The grammar has been simplified and defragmented.
This made it possible to check aggregate-specific instructions right after
a routine body has been scanned, by calling new LEX methods:
sp_body_finalize_{procedure|function|trigger|event}()
Moving some C++ code from *.yy to a few new helper methods in LEX.
1. Always drop the merged_for_insert flag on cleanup (there could be errors
which prevent a TABLE from being assigned).
2. Make the cleanup of the select parts that were touched more precise.
st_select_lex::handle_derived() and mysql_handle_list_of_derived() had
exactly the same implementations.
- Adding a new method LEX::handle_list_of_derived() instead
- Removing public function mysql_handle_list_of_derived()
- Reusing LEX::handle_list_of_derived() in st_select_lex::handle_derived()
with UNION ALL after INTERSECT
EXPLAIN EXTENDED erroneously showed UNION instead of UNION ALL in
the warning if UNION ALL followed INTERSECT or EXCEPT operations.
The bug was in the function st_select_lex_unit::print() that printed
the text of the query used in the warning.
* inject portion of time updates into mysql_delete main loop
* triggered case emits delete+insert, no updates
* PORTION OF `SYSTEM_TIME` is forbidden
* `DELETE HISTORY .. FOR PORTION OF ...` is forbidden as well
Optimized the code that removed multiple equalities pushed from HAVING
into WHERE. Now this removal is postponed until all multiple equalities
are eliminated in substitute_for_best_equal_field().
Condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on the fields that are used in the GROUP BY list
or depends on the fields that are equal to grouping fields.
Aggregate functions can't be pushed down.
An example of how the pushdown is performed:
SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);
=>
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);
The implementation scheme:
1. Extract the most restrictive condition cond from the HAVING clause of
the select that depends only on the fields that are used in the GROUP BY
list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
of the select
3. Remove cond from the HAVING clause if it is possible
The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().
New test file having_cond_pushdown.test is created.
1. Renaming Type_handler_json to Type_handler_json_longtext
There will be other JSON handlers soon, e.g. Type_handler_json_varchar.
2. Making the code more symmetric for data types:
- Adding a new virtual method
Type_handler::Column_definition_validate_check_constraint()
- Moving JSON-specific code from sql_yacc.yy to
Type_handler_json_longtext::Column_definition_validate_check_constraint()
3. Adding new files sql_type_json.cc and sql_type_json.h
and moving Type_handler+JSON related code into these files.
When creating a field of type JSON, it will be automatically
converted to TEXT with CHECK (json_valid(`a`)), if there wasn't any
previous check for the column.
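For illustration (the table and column names are hypothetical):
CREATE TABLE t1 (a JSON);
SHOW CREATE TABLE t1;
is expected to show column a as a TEXT-family (longtext) column with
CHECK (json_valid(`a`)).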
Additional things:
- Added two bug fixes that were found while testing JSON. These bug
fixes have also been pushed to 10.3 (with a test case), but as they
were minimal and needed to get this task done and tested, the fixes
are repeated here.
- CREATE TABLE ... SELECT drops constraints for columns that
are both in the create and select part.
- If one has both a default expression and a check constraint for a
column, one can get the error "Expression for field `a` is referring
to uninitialized field `a`".
- Removed some duplicate MYSQL_PLUGIN_IMPORT symbols
MDEV-17631 select_handler for a full query pushdown
Interfaces + Proof of Concept for federatedx with test cases.
The interfaces have been developed for the integration of the ColumnStore engine.
Calling st_select_lex::update_used_tables in JOIN::optimize_unflattened_subqueries
only when we are sure that the join has not been cleaned up.
The cleanup can happen for a case when we have a non-merged semi-join and an
impossible WHERE, which leads to the cleanup of the join that has the
non-merged semi-join.
When we have a nested subquery, a subquery that was a dependent subquery
may change to an independent one when we optimize the inner subqueries.
This is handled in st_select_lex::optimize_unflattened_subqueries.
Currently a subquery that changed from dependent to independent after the
optimization phase incorrectly shows as dependent in the EXPLAIN output;
this happens because we don't update used_tables for the WHERE clause,
ON clause, etc. after the optimization phase.
MDEV-17660 sql_mode=ORACLE: Some keywords do not work as label names: history, system, versioning, without
MDEV-17661 Add sql_mode specific tokens for the keyword DECODE
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
The test, and also rpl_gtid_delete_domain, failed on the PPC64 platform
due to an incorrectly specified actual key for searching
in the gtid domain system hash. While the correct size is 32 bits,
the supplied value was 8 bytes of long int size on the platform.
The problem became evident thanks to the big endianness, which
cut off the *least* significant part of the value field.
Fixed by correcting a dynamic array initialization to hold
uint32 values, as well as the value extraction for
searching in the gtid domain system hash.
A newly added test ensures no overflowed values are accepted
for deletion, which prevents inadvertent actions. Notice though:
MariaDB [test]> set @@session.gtid_domain_id=(1 << 32) + 1;
MariaDB [test]> show warnings;
+---------+------+--------------------------------------------------------+
| Level   | Code | Message                                                |
+---------+------+--------------------------------------------------------+
| Warning | 1292 | Truncated incorrect gtid_domain_id value: '4294967297' |
+---------+------+--------------------------------------------------------+
MariaDB [test]> select @@session.gtid_domain_id;
+--------------------------+
| @@session.gtid_domain_id |
+--------------------------+
|               4294967295 |
+--------------------------+
This patch fixes a serious flaw in the implementation of common table
expressions. Before this patch an attempt to prepare a statement from
a query with a parameter marker in a CTE that was used more than once
in the query ended up with a bogus error message. Similarly, if a statement
in a stored procedure contained a CTE whose specification used
local variables and this CTE was referred to more than once in the
statement, the server failed to execute the stored procedure, returning
a bogus error message about a non-existing field.
The problems appeared due to incorrect handling of parameter markers /
local variables in CTEs that were referred to more than once.
This patch fixes the problems by differentiating between the original
occurrences of a parameter marker / local variable used in the
specification of a CTE and the corresponding occurrences used
in copies of this specification. These copies are substituted
instead of non-first references to the CTE.
The idea of the fix and even some code were taken from the MySQL
implementation of the common table expressions.