There were several races in the main.kill_processlist-6619 test case:
- Lingering connections from a previous test case could be visible in SHOW
PROCESSLIST and cause a .result diff.
- A sync point "dispatch_command_end" was ineffective, as it was consumed at
the end of the SET DEBUG command itself.
- The signal from sync point "before_execute_sql_command" could override an
earlier signal, causing DEBUG_SYNC timeout and test failure.
- The final SHOW PROCESSLIST could occasionally see a connection in state
"Busy" instead of the expected "Sleep".
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
In case there is a view that is queried from a stored routine or
a prepared statement and is based on a temporary table, and this temporary
table is dropped between executions of the SP/PS, it leads to hitting an
assertion at SELECT_LEX::fix_prepare_information. The fired assertion
was added by the commit 85f2e4f8e8
(MDEV-32466: Potential memory leak on executing of create view statement).
Firing of this assertion means that memory is leaked on execution of the SP/PS.
Moreover, if the added assert is commented out, different result sets
can be produced by the statement SELECT * FROM the hidden table.
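One hypothetical shape of this scenario (names are illustrative; the view
resolves to a temporary table that shadows a base table and later disappears):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  CREATE TEMPORARY TABLE t1 (a INT);  -- shadows the base table
  PREPARE stmt FROM 'SELECT * FROM v1';
  EXECUTE stmt;
  DROP TEMPORARY TABLE t1;
  EXECUTE stmt;  -- before the fix: stale temporary table metadata is reused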
Both hitting the assertion and the different result sets have the same root
cause. This cause is usage of the temporary table's metadata after the table
itself has been dropped. To fix the issue, the cache of stored routines is
reloaded: it is reset at the end of execution of the function
dispatch_command(). The next time any stored routine is called, it is loaded
from the table mysql.proc. This happens inside the method
Sp_handler::sp_cache_routine, where loading of a stored routine is performed
in case it is missing from the cache. Loading is now performed unconditionally,
while previously it was controlled by the parameter lookup_only. For that
reason the signature of the method Sroutine_hash_entry::sp_cache_routine
was changed by removing the unused parameter lookup_only.
Clearing of the SP caches affects the test main.lock_sync since it forces
opening and locking the table mysql.proc, but the test assumes that each
statement locks its tables only once during its execution. To keep this
invariant, the debug sync points with names "before_lock_tables_takes_lock"
and "after_lock_tables_takes_lock" are not activated on handling the table
mysql.proc.
An UPDATE statement that is run in PS mode and uses a positional parameter
handles columns declared with the clause DEFAULT NULL incorrectly in
case the DEFAULT clause is passed as the actual value for the positional
parameter of the prepared statement. A similar issue happens in case
an expression is specified in the DEFAULT clause of a table column's
definition.
The reason for incorrect processing of columns declared as DEFAULT NULL
is that setting of the null flag for a field being updated was missing
in the implementation of the method Item_param::assign_default().
The reason for incorrect handling of an expression in the DEFAULT clause
is likewise a missing saving of the field inside the implementation of
the method Item_param::assign_default().
This patch fixes the issue with passing the DEFAULT or IGNORE values to
positional parameters for some kinds of SQL statements executed as
prepared statements.
The main idea of the patch is to associate the actual value passed via
the USING clause with the positional parameter represented by
the Item_param class. Such association must be performed on execution of
an UPDATE statement in PS/SP mode. Another corner case that results in
a server crash is handling of a CREATE TABLE statement with a positional
parameter placed after the DEFAULT clause, or of a CALL statement, when
either the value DEFAULT or IGNORE is passed as the actual value for the
positional parameter. This case is fixed by checking whether an error is
set in the diagnostics area at the function pack_vcols() on return from
the function pack_expression().
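A minimal sketch of the UPDATE case (table, column names and the default
expression are illustrative):
  CREATE TABLE t1 (a INT DEFAULT NULL, b INT DEFAULT (1 + 1));
  INSERT INTO t1 VALUES (10, 10);
  PREPARE stmt FROM 'UPDATE t1 SET a = ?, b = ?';
  EXECUTE stmt USING DEFAULT, DEFAULT;
  -- expected: a is reset to NULL (its null flag must be set) and b to the
  -- value of its default expression; before the fix both were mishandled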
In commit 179424db the file lowercase_table2.result was made executable
for no known reason, most likely just a mistake. Test result files
definitely should not be executable.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
This patch corrects the fix for MDEV-32569. The latter did not take into
account the fact that not every statement uses the SELECT_LEX structure. In
particular, CALL statements do not use such a structure. However, a parameter
passed to the stored procedure used in such a statement may require an
invocation of Type_std_attributes::agg_item_set_converter().
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Modify the NS_ZERO state in the JSON number parser to allow
exponential notation with a zero coefficient (e.g. 0E-4).
The NS_ZERO state transition on 'E' was updated to move to the
NS_EX state rather than returning a syntax error. A similar change
was made for the NS_ZE1 (negative zero) starter state.
This allows the accepted number grammar to include cases like:
- 0E4
- -0E-10
which were previously disallowed. Numeric parsing remains
the same for all other states.
Test cases are added to func_json.test to validate parsing for
various exponential numbers starting with zero coefficients.
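For example (a sketch; the exact queries live in func_json.test):
  SELECT JSON_VALID('{"n": 0E4}');     -- 1 after this change, 0 before
  SELECT JSON_VALID('{"n": -0E-10}');  -- 1 after this change, 0 before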
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
Fixed a typo in mysql_admin_table which caused
close_unused_temporary_table_instances to always be called for the first
table instead of the current table.
Added an ASSERT that close_unused_temporary_table_instances should not
remove all instances of a user-created temporary table.
Item_float::neg() did not preserve the "presentation" from "this".
So
CAST(-1e0 AS UNSIGNED) -- cast from double to unsigned
changes its meaning to:
CAST(-1 AS UNSIGNED) -- cast from signed to unsigned
Fixing Item_float::neg() to construct the new value for
Item_float::presentation as follows:
- if the old value starts with minus, then the minus is truncated:
'-2e0' -> '2e0'
- otherwise, a minus sign followed by the old value:
'1e0' -> '-1e0'
If a query contained a CTE whose name coincided with the name of one of
the base tables used in the specification of the CTE and the query had at
least two references to this CTE in the specifications of other CTEs then
processing of the query led to unlimited recursion that ultimately caused
a crash of the server.
Any secondary non-recursive reference to a CTE requires creation of a copy
of the CTE specification. All the references to CTEs in this copy must be
resolved. If the specification contains a reference to a base table whose
name coincides with the name of the CTE, then it should be ensured that
this reference can in no way be resolved against the name of the CTE.
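A hypothetical query of the shape described above (names are illustrative):
  CREATE TABLE t1 (a INT);
  WITH t1 AS (SELECT * FROM t1),  -- CTE name coincides with the base table
       c2 AS (SELECT * FROM t1),  -- first secondary reference to the CTE
       c3 AS (SELECT * FROM t1)   -- second secondary reference to the CTE
  SELECT * FROM c2, c3;
  -- before the fix, resolving t1 inside the copied specifications could
  -- recurse without limit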
If a query has a HAVING clause that contains a predicate with a constant
IN subquery whose left part in its turn is a subquery, and the predicate is
subject to pushdown from HAVING to WHERE, then execution of the query could
cause a crash of the server.
The cause of the problem was the missing implementation of the walk()
method for the class Item_in_optimizer. As a result in some cases the left
operand of the Item_in_optimizer condition could be traversed twice by
the walk procedure. For many call-back functions used as an argument of
this procedure it does not matter. Yet it matters for the call-back
function cleanup_excluding_immutables_processor() used in pushdown of
predicates from HAVING to WHERE. If the processed item is marked with
the IMMUTABLE_FL flag then the processor just removes this flag, otherwise
it performs cleanup of the item, making it unfixed. If an item is marked
with the IMMUTABLE_FL flag and is traversed with this processor twice, then
it becomes unfixed after the second traversal, though the flag indicates
that the item should not be cleaned up.
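A hypothetical query of the affected shape (names are illustrative):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  SELECT a FROM t1 GROUP BY a
  HAVING (SELECT MAX(b) FROM t2) IN (SELECT b FROM t2 WHERE b > 0);
  -- the IN predicate is constant, its left operand is itself a subquery,
  -- and the predicate is subject to pushdown from HAVING to WHERE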
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
sp_cache erroneously looked up fully qualified SP names (e.g. `DB`.`SP`),
in case insensitive style. It was wrong, because only the "name"
part is always case insensitive, while the "db" part should be compared
according to lower_case_table_names (case sensitively for 0,
case insensitively for 1 and 2).
Fix:
Adding a "casedn_name" parameter make_qname() to tell
if the name part should be lower cased:
`DB1`.`SP` -> "DB1.SP" (when casedn_name=false)
`DB1`.`SP` -> "DB1.sp" (when casedn_name=true)
and using make_qname() with casedn_name=true when creating
sp_cache hash lookup keys.
Details:
As a result, it now works as follows:
- sp_head::m_db is converted to lower case if lower_case_table_names>0
during the sp_name initialization phase. So when make_qname() is called,
sp_head::m_db is already normalized. There are no changes here.
- The initialization phase of sp_head when creating sp_head::m_qname
now calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to sp_head::m_qname in lower case.
- sp_cache_lookup() now also calls make_qname() with casedn_name=true,
so sp_head::m_name gets written to the temporary lookup key in lower case.
- sp_cache::m_hashtable now uses case sensitive comparison.
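For example, with lower_case_table_names=0 (the db part is compared case
sensitively; names are illustrative):
  CREATE DATABASE DB1;
  CREATE DATABASE db1;
  CREATE PROCEDURE DB1.SP() SELECT 1;
  CREATE PROCEDURE db1.SP() SELECT 2;
  CALL DB1.SP();  -- cache key "DB1.sp": must not hit db1's entry
  CALL db1.SP();  -- cache key "db1.sp"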
Consider this query:
  SELECT t1.* FROM t1, (SELECT t2.b FROM t2 WHERE NOT EXISTS
    (SELECT 1 FROM t3) GROUP BY b) sq WHERE sq.b = t1.a;
If SELECT 1 FROM t3 is expensive, for example when t3 has more than
thd->variables.expensive_subquery_limit rows, its first evaluation is
deferred to mysql_derived_fill(). There it is noted that, in the above case,
NOT EXISTS (SELECT 1 FROM t3) is constant and false.
This causes the join variable zero_result_cause to be set to
"Impossible WHERE noticed after reading const tables" and the handler
for this join is never "opened" via handler::ha_open.
When mysql_derived_fill() is called for the next group of results, this
unopened handler is not taken into account.
reviewed by Igor Babaev (igor@mariadb.com)
This patch fixes a too strong condition in the assert at the method
Item_func_group_concat::fix_fields
that holds in case of a stored routine but is obviously broken
for a prepared statement.
Also, during startup, do not report an "Error" on attempting to install a
plugin that is already present in mysql.plugin. We set the 'if_not_exists'
parameter to true to downgrade this to a "Note".
Also corrects: MDEV-32041 "plugin already loaded" should be a Warning, not an Error.
Because --delete-master-logs immediately purges logs after flushing,
it is possible the binlog dump thread would still be using the old
log when the purge executes, preventing the file from being
deleted.
This patch institutes a work-around in the test as follows:
1) temporarily stop the slave so there is no chance the old binlog
is still being referenced.
2) set master_use_gtid=Slave_pos so the slave can still appear
up-to-date on the master after the master flushes/purges its logs
(while the slave is offline). Otherwise (i.e. if using binlog
file/pos), the slave would point to a purged log file, and receive
an error immediately upon connecting to the master.
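In SQL terms the slave-side part of the work-around looks roughly like
this (a sketch, not the literal test code):
  STOP SLAVE;
  CHANGE MASTER TO MASTER_USE_GTID=Slave_pos;
  -- the master runs mysqldump --delete-master-logs here, flushing and
  -- purging its binary logs while the slave is offline
  START SLAVE;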
Reviewed By
============
Andrei Elkin <andrei.elkin@mariadb.com>
When QUICK_GROUP_MIN_MAX_SELECT is initialized or being reset,
it stores the prefix of the last group of the index chosen for
retrieving data (last_value). Later, when looping through records
in the get_next() method, the server checks whether the retrieved
group is the last one, and if so, it finishes processing.
At the same time, it looks like there is no need for that additional
check, since the method next_prefix() returns HA_ERR_KEY_NOT_FOUND
or HA_ERR_END_OF_FILE when there are no more satisfying records.
If we do not perform the check, we do not need to retrieve and
store last_value either.
This commit removes the use of last_value from QUICK_GROUP_MIN_MAX_SELECT.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
Extend rpl_mysqldump_slave.test to incorporate the --delete-master-logs
option. This option is an alias for the sequence: get the binlogs
(SHOW MASTER STATUS) -> FLUSH LOGS -> PURGE BINARY LOGS TO <new_binlog>,
and the test is derived using the same pattern.
We also measure the pre and post log state on the master following the
mysqldump process, introducing assertions to validate the correct state.
Using mysql.slow_log as a test table would generate more than
one row if there was more than one row in the table.
Replace this table with an empty table with a primary key.
Reviewer: Rex Johnston
- Added the parameter stats_persistent=0 for the InnoDB engine.
- Before printing the metadata_lock_info query, make sure that
  InnoDB has completed purging.
Reviewed by: Marko Mäkelä
The problem is that background statistics can race with the statistics
update during INSERT and cause a slightly inaccurate `Rows` count in table
statistics (this is deliberate, to avoid excessive locking overhead). This
was seen as an occasional .result difference in the test.
Mask out the unstable `Rows` column from SHOW TABLE STATUS; the value is not
related to what is being tested in this part of the test case.
Run ANALYZE TABLE before SHOW EXPLAIN to get stable row count in output.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
mysql_prepare_alter_table(): ALTER TABLE should check whether the
foreign key exists when it is expected to exist and
report the error at an early stage.
dict_foreign_parse_drop_constraints(): Don't throw an error if the
foreign key constraint doesn't exist when IF EXISTS is given
in the statement.
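A sketch of the two behaviors (the constraint name is illustrative):
  ALTER TABLE t1 DROP FOREIGN KEY IF EXISTS fk_missing;
  -- no error: IF EXISTS downgrades the missing constraint to a note
  ALTER TABLE t1 DROP FOREIGN KEY fk_missing;
  -- error, now reported at the early (prepare) stage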
This bug was caused by a patch for the task MDEV-32733.
An incorrect memory root was used for allocation of the memory
pointed to by the data member Item_func_json_contains_path::p_found.
The bug is fixed by the patch ported from MySQL. See the comprehensive
description below.
commit 455c4e8810c76430719b1a08a63ca0f69f44678a
Author: Guilhem Bichot <guilhem.bichot@oracle.com>
Date: Fri Mar 13 17:51:27 2015 +0100
Bug#17668844: CRASH/ASSERT AT ITEM_TYPE_HOLDER::VAL_STR IN ITEM.C
We have a predicate of the form:
literal_row <=> (a UNION)
The subquery is constant, so Item_cache objects are used for its
SELECT list.
In order, this happens:
- Item_subselect::fix_fields() calls select_lex_unit::prepare,
where we create Item_type_holder's
(appended to unit->types list), create the tmp table (using type info
found in unit->types), and call fill_item_list() to put the
Item_field's of this table into unit->item_list.
- Item_subselect::fix_length_and_dec() calls set_row() which
makes Item_cache's of the subquery wrap the Item_type_holder's
- When/if a first result row is found for the subquery,
Item_cache's are re-pointed to unit->item_list
(i.e. Item_field objects which reference the UNION's tmp table
columns) (see call to Item_singlerow_subselect::store()).
- In our subquery, no result row is found, so the Item_cache's
still wrap Item_type_holder's; evaluating '<=>' reads the
value of those, but Item_type_holder objects are not expected to be
evaluated.
Fix: instead of putting unit->types into Item_cache, and later
replacing with unit->item_list, put unit->item_list in Item_cache from
the start.
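A minimal sketch of the predicate shape (the table name is illustrative):
  CREATE TABLE t1 (a INT, b INT);  -- left empty on purpose
  SELECT ROW(1, 2) <=> (SELECT a, b FROM t1 UNION SELECT a, b FROM t1);
  -- the UNION produces no row, so before the fix the Item_cache's still
  -- wrapped Item_type_holder's when '<=>' was evaluated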
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch is actually a follow-up for the task
MDEV-23902: MariaDB crash on calling function
to use the correct query arena for a statement. In case invocation of
a function is in progress, use its call arena; otherwise use the current
query arena, which can be either a statement or a regular query arena.
- Allow database-level access via `LOCK TABLES` to execute the statement
  `BACKUP [UN]LOCK <object>`.
- `BACKUP UNLOCK` works only with the `RELOAD` privilege.
  In case there is the `LOCK TABLES` privilege without the `RELOAD`
  privilege, we check if the backup lock was taken before.
  If it was not, we raise an error about the missing `RELOAD` privilege.
- We had to remove any errors/warnings from the calling functions because
  `thd->get_stmt_da()->m_status` would be set to error and would break
  `my_ok()`.
- Added missing test coverage of the `RELOAD` privilege to `main.grant.test`.
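A sketch of the intended privilege behavior (user and object names are
illustrative):
  GRANT LOCK TABLES ON db1.* TO u1@localhost;
  -- connected as u1:
  BACKUP LOCK db1.t1;  -- allowed via the database-level LOCK TABLES privilege
  BACKUP UNLOCK;       -- allowed here only because the lock above was taken;
                       -- otherwise the missing RELOAD privilege is reported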
Reviewer: <daniel@mariadb.org>
Statements affected by this bug have all of the following:
1) a SELECT statement with a sub-query
2) that sub-query includes a GROUP BY clause
3) that GROUP BY clause contains an expression
4) that expression has a reference to a view
When a view is used in a group-by expression, and that group-by can be
eliminated during sub-query simplification as part of an outer condition
that could be IN, EXISTS, > or <, then the table structure left behind
will have a unit that contains a null select_lex pointer.
If this happens as part of a prepared statement, or of a second execution
in a stored procedure, then, when the statement is executed, the table
list entry for that, now eliminated, view is "opened" and "reinit"ialized.
This table entry's unit no longer has a select_lex pointer.
Prior to MDEV-31995 this was of little consequence, but now following this
null pointer will cause a crash.
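One hypothetical statement matching conditions 1-4 above (names are
illustrative):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  PREPARE stmt FROM
    'SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM v1 GROUP BY v1.a + 1)';
  EXECUTE stmt;
  EXECUTE stmt;  -- before the fix the second execution followed the null
                 -- select_lex pointer of the eliminated view's unit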
Reviewed by Igor Babaev (igor@mariadb.com)
This is caused by the fact that VARIABLE_VALUE is defined in
variables_fields_info (sql_show.cc) as 2048 characters.
wsrep_provider_options contains a few path variables, and
this could cause string truncation on deep and long
directory paths.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The server doesn't use the enforced storage engine in ALTER TABLE
without an ENGINE clause, to avoid an unwanted engine change.
However, the server tries to use the enforced engine in CREATE
INDEX. As a result, a false positive error is raised. The server
should not apply the enforced engine in CREATE INDEX either.
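A sketch using the enforce_storage_engine variable (the engines are
illustrative):
  SET SESSION enforce_storage_engine=InnoDB;
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  SET SESSION enforce_storage_engine=MyISAM;
  ALTER TABLE t1 ADD COLUMN b INT;  -- keeps InnoDB, enforced engine not applied
  CREATE INDEX i1 ON t1 (a);        -- raised a false positive error before
                                    -- the fix; now also leaves the engine alone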
The tests main.func_json and json.json_no_table fail on a server built with
the option -DWITH_PROTECT_STATEMENT_MEMROOT=YES for the reason that memory
is allocated on the statement's memory root on the second execution of
a query that uses the function json_contains_path().
The reason that memory is allocated on the second execution of a prepared
statement that invokes the function json_contains_path() is that memory is
allocated on every call of the method Item_json_str_multipath::fix_fields.
To fix the issue, memory allocation should be done only once, on the first
call of the method Item_json_str_multipath::fix_fields. A similar issue
takes place inside the method Item_func_json_contains_path::fix_fields.
Both methods are modified to perform memory allocation only once, on their
first execution, and later re-use the allocated memory.
Before this patch the memory referenced by the pointers stored in the array
tmp_paths was released by the method Item_func_json_contains_path::cleanup
that is called on finishing execution of a prepared statement. Now that
the memory allocation performed inside the method
Item_json_str_multipath::fix_fields is done only once, the item clean-up
has a degenerate form and can be delegated to the cleanup() method of the
base class, and memory deallocation can be performed in the destructor.
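A sketch of a statement whose re-execution exercised the allocation (the
JSON document and paths are illustrative):
  PREPARE stmt FROM
    'SELECT JSON_CONTAINS_PATH(''{"a": 1, "b": 2}'', ''one'', ?)';
  EXECUTE stmt USING '$.a';
  EXECUTE stmt USING '$.b';
  -- before the fix, every execution allocated path storage on the
  -- statement memory root, which the protected-memroot build detects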
When parsing statements like (SELECT .. FROM ..) ORDER BY <expr>,
there is a step LEX::add_tail_to_query_expression_body_ext_parens()
which calls LEX::wrap_unit_into_derived(). After that the statement
looks like SELECT * FROM (SELECT .. FROM ..), and the parser's
Lex_order_limit_lock structure (ORDER BY <expr>) is assigned to
the new SELECT. But what is missing here is that Items in
Lex_order_limit_lock are left with their original name resolution
contexts, and fix_fields() later resolves the names incorrectly.
For example, when processing
(SELECT * FROM t1 JOIN t2 ON a=b) ORDER BY a
Item_field 'a' in the ORDER BY clause is left with the name resolution
context of the derived table (first_name_resolution_table='t1'), so
it is resolved to 't1.a', which is incorrect.
After LEX::wrap_unit_into_derived() the statement looks like
SELECT * FROM (SELECT * FROM t1 JOIN t2 ON a=b) AS '__2' ORDER BY a,
and the name resolution context for Item_field 'a' in the ORDER BY
must be set to the wrapping SELECT's one.
This commit fixes the issue by changing context for Items in
Lex_order_limit_lock after LEX::wrap_unit_into_derived().
The crash happened with an indexed virtual column whose
value is evaluated using a function that has a different meaning
in sql_mode='' vs sql_mode=ORACLE:
- DECODE()
- LTRIM()
- RTRIM()
- LPAD()
- RPAD()
- REPLACE()
- SUBSTR()
For example:
CREATE TABLE t1 (
b VARCHAR(1),
g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL,
KEY g(g)
);
So far we had replacement XXX_ORACLE() functions for all mentioned
functions, e.g. SUBSTR_ORACLE() for SUBSTR(). So it was possible to
correctly re-parse SUBSTR_ORACLE() even in sql_mode=''.
But it was not possible to re-parse the MariaDB version of SUBSTR()
after switching to sql_mode=ORACLE. It was erroneously mis-interpreted
as SUBSTR_ORACLE().
As a result, this combination worked fine:
SET sql_mode=ORACLE;
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode='';
INSERT ...
But the other way around it crashed:
SET sql_mode='';
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode=ORACLE;
INSERT ...
At CREATE time, SUBSTR was instantiated as Item_func_substr and printed
in the FRM file as substr(). At re-open time with sql_mode=ORACLE, "substr()"
was erroneously instantiated as Item_func_substr_oracle.
Fix:
The fix proposes a symmetric solution. It provides a way to re-parse reliably
all sql_mode dependent functions to their original CREATE TABLE time meaning,
no matter what the open-time sql_mode is.
We take advantage of the same idea we previously used to resolve sql_mode
dependent data types.
Now all sql_mode dependent functions are printed by SHOW using a schema
qualifier when the current sql_mode differs from the function sql_mode:
SET sql_mode='';
CREATE TABLE t1 ... SUBSTR(a,b,c) ..;
SET sql_mode=ORACLE;
SHOW CREATE TABLE t1; -> mariadb_schema.substr(a,b,c)
SET sql_mode=ORACLE;
CREATE TABLE t2 ... SUBSTR(a,b,c) ..;
SET sql_mode='';
SHOW CREATE TABLE t2; -> oracle_schema.substr(a,b,c)
Old replacement names like substr_oracle() are still understood for
backward compatibility and used in FRM files (for downgrade compatibility),
but they are not printed by SHOW any more.
There were two errors left from the previous fix.
- subselect.test assumed that mysql.slow_log is empty. This was not
enforced.
- subselect.test dropped a file that does not exist (for safety).
This was fixed by ensuring we don't get a warning if the file does
not exist.
Dyncol functions like column_create() encode the
current character set inside the value.
So they cannot be used with --view-protocol.
This patch changes only --disable_view_protocol to
--disable_service_connection.
This bug affected only multi-table UPDATE statements, and only in very rare
cases: one of the tables used at the top level of the statement must be
a derived table containing a row construct with a subquery including a
hanging CTE.
Before this patch was applied, the function prepare_unreferenced() of the
class With_element, when invoked for the hanging CTE, did not properly
restore the value of thd->lex->context_analysis_only. As a result it
became 0 after the call of this function.
For a query affected by the bug this function is called when
JOIN::prepare() is called for the subquery with a hanging CTE. This happens
when Item_row::fix_fields() calls fix_fields() for the subquery. The zeroed
value of thd->lex->context_analysis_only forces the caller function
Item_row::fix_fields() to invoke the virtual method is_null() for the
subquery, which leads to its execution. It causes an assertion failure
because the call of Item_row::fix_fields() happens during the invocation
of Multiupdate_prelocking_strategy::handle_end() that calls the function
mysql_derived_prepare() for the derived table used by the UPDATE at the
time when proper locks for the statement tables have not been acquired yet.
With this patch the value of thd->lex->context_analysis_only is restored
to CONTEXT_ANALYSIS_ONLY_DERIVED, which is set in the function
mysql_multi_update_prepare().
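One hypothetical statement of the affected shape (names are illustrative;
the 'hanging' CTE c is declared but never referenced):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  UPDATE t1,
         (SELECT (1, (WITH c AS (SELECT 1) SELECT MAX(b) FROM t2)) =
                 (1, 1) AS x
          FROM t2) dt
  SET t1.a = dt.x;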
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Item_char_typecast::print() did not print the "binary" keyword
in such cases:
CAST('a' AS CHAR CHARACTER SET latin1 BINARY)
This caused a difference between "mtr" and "mtr --view-protocol" output.