The problem is that if the table definition cache (TDC) is full of real tables
that are present in the table cache, a view definition cannot stay there and
gets evicted by its own underlying tables.
In the situation above, the old mechanism for detecting whether the definition
in the PS matches the current version always required a reprepare and so
prevented the PS from executing at all.
One workaround is to increase the TDC size; the other is to improve the
version check for views/triggers, which is what is done here. Now in
suspicious cases we check:
- the timestamp (in microseconds) of the view, to be sure the version has
really changed;
- the creation time (in microseconds) of a trigger against the time
(in microseconds) the statement was prepared.
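
A minimal sketch of that check, using hypothetical stand-in types (View_ref,
Prep_stmt) rather than the real server structures:

#include <cstdint>

/* Hypothetical stand-ins for the server structures involved. */
struct View_ref { uint64_t create_time_us; };  /* current view definition */
struct Prep_stmt
{
  uint64_t view_create_time_us;  /* view timestamp recorded at PREPARE */
  uint64_t prepare_time_us;      /* when the statement was prepared */
};

/*
  A version-number mismatch may be a false positive caused by TDC
  eviction, so confirm with the microsecond creation timestamp before
  forcing a reprepare.
*/
bool view_really_changed(const View_ref &v, const Prep_stmt &ps)
{
  return v.create_time_us != ps.view_create_time_us;
}

/* A trigger created after the statement was prepared invalidates it. */
bool trigger_invalidates_ps(uint64_t trigger_create_time_us,
                            const Prep_stmt &ps)
{
  return trigger_create_time_us > ps.prepare_time_us;
}
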
Remove table_count from Query_tables_list (not used there; moved to MYSQL_LOCK).
Rename table_count in LEX to avoid confusing it with other table counters.
* Fix test galera.MW-44 to make it work with --ps-protocol
* Skip test galera.MW-328C under --ps-protocol. This test
relies on wsrep_retry_autocommit, which has no effect
under --ps-protocol.
* Return WSREP-related errors on COM_STMT_PREPARE commands
Change wsrep_command_no_result() to allow sending back errors
when a statement is prepared, for example to handle a deadlock
error due to a BF-aborted transaction during prepare.
* Add sync waiting before statement prepare
When a statement is prepared, the tables used in the statement may be
opened and checked for existence. Because of that, some tests (for
example galera_create_table_as_select) that CREATE a table on one node
and then SELECT from the same table on another node may fail because
the table does not exist yet.
To make tests behave the same under the normal and the PS protocol, we add a
call to sync wait before preparing statements that would sync wait
during normal execution.
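
A rough sketch of where this wait fits; the types and helper names below are
illustrative stand-ins, not the server's actual wsrep API:

/* Minimal stand-in types; names are illustrative only. */
struct THD { bool wsrep_sync_wait_enabled; };
struct Prepared_statement { /* ... */ };

/* Hypothetical: block until this node has applied all preceding
   cluster writesets, as a SELECT would under wsrep_sync_wait. */
static bool sync_wait(THD *) { return false; /* false = success */ }

static bool do_prepare(THD *, Prepared_statement *) { return false; }

bool prepare_with_sync_wait(THD *thd, Prepared_statement *stmt)
{
  /* Statements that would sync wait during normal execution now also
     sync wait before PREPARE opens and existence-checks their tables,
     so a table created on another node is already visible here. */
  if (thd->wsrep_sync_wait_enabled && sync_wait(thd))
    return true;   /* error from the sync wait */
  return do_prepare(thd, stmt);
}
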
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Handle BF abort for prepared statement execution so that EXECUTE processing
continues until parameter setup is complete, before the BF abort bails out
of the statement execution.
The THD class has a new boolean member, wsrep_delayed_BF_abort, which is set
if a BF abort is observed in do_command() right after reading the client's
packet and the client has sent a PS execute command. In that case the
deadlock error is not returned to the client immediately; instead, the PS
execution is started. The PS execution loop, however, now checks whether
wsrep_delayed_BF_abort is set and stops the PS execution once the type
information has been assigned to the PS.
With this, the PS protocol type information, which is present in the first
PS EXECUTE command, is not lost even if that first EXECUTE command was
marked to abort.
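
A condensed sketch of that flow; the command codes and helpers below are
stand-ins, with only the wsrep_delayed_BF_abort member taken from the
description above:

/* Illustrative stand-ins for the server types. */
enum Command { COM_STMT_EXECUTE, COM_OTHER };
struct THD { bool wsrep_delayed_BF_abort= false; };

static bool bf_abort_observed(THD *) { return false; /* placeholder */ }

void do_command_sketch(THD *thd, Command cmd)
{
  if (bf_abort_observed(thd) && cmd == COM_STMT_EXECUTE)
  {
    /* Do not return the deadlock error yet: let EXECUTE run far
       enough to consume the parameter type information that is only
       present in the first EXECUTE packet. */
    thd->wsrep_delayed_BF_abort= true;
  }
  /* ... dispatch the command as usual ... */
}

/* In the PS execution loop, once the type information has been
   assigned to the statement: */
bool bail_out_if_delayed_abort(THD *thd)
{
  return thd->wsrep_delayed_BF_abort;  /* true: report deadlock now */
}
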
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This patch reverts the fixes for bugs MDEV-24454 and MDEV-25631 from
the commit 3690c549c6.
It leaves intact the changes in plugin/feedback/feedback.cc and the
corresponding test files introduced in that commit.
Proper fixes for the bugs MDEV-24454 and MDEV-25631 will follow immediately.
Assertion failure in Diagnostics_area::set_ok_status, reached via my_ok()
from mysql_sql_stmt_prepare.
Analysis: Before the PREPARE is executed, binlog_format is STATEMENT.
The PREPARE has a SET STATEMENT clause that sets binlog_format to ROW. After
the PREPARE is done we reset binlog_format back to STATEMENT, but since a
temporary table exists, changing binlog_format from ROW to STATEMENT is not
allowed, and the resulting error goes unreported. This unreported error
eventually causes the assertion failure.
Fix: Change the return type of LEX::restore_set_statement_var() to bool and
make it return the error state.
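
A compact sketch of the fix under stand-in types; only
LEX::restore_set_statement_var() is named by the patch, the rest is
illustrative:

struct THD { /* ... */ };

struct LEX
{
  /* Now returns true on error, e.g. when switching binlog_format
     back to STATEMENT is refused because a temporary table exists. */
  bool restore_set_statement_var(THD *thd);
};

bool LEX::restore_set_statement_var(THD *)
{
  /* Placeholder: restore each variable and propagate any error. */
  return false;
}

bool finish_prepare(THD *thd, LEX *lex)
{
  /* ... PREPARE work ... */

  /* Previously this returned void, so the error vanished and the
     later my_ok() hit the Diagnostics_area::set_ok_status assert. */
  if (lex->restore_set_statement_var(thd))
    return true;   /* error already reported; do not send OK */
  return false;
}
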
Use in_sum_func (and hence nest_level) only in the LEX to which the SELECT_LEX belongs.
Reduce the usage of current_select, because it does not always point to the
correct SELECT_LEX (for example during prepare).
Change the context for all classes inherited from Item_ident (previously only
Item_field) when pushing an item down to HAVING.
A name resolution context now has to hold a SELECT_LEX reference if the context is present.
Fixed the feedback plugin's stack usage.
Test cases like the following one produce different result sets when run
with and without the option --ps-protocol.
CREATE TABLE t1(a INT);
--enable_metadata
(SELECT MAX(a) FROM t1) UNION (SELECT MAX(a) FROM t1);
--disable_metadata
DROP TABLE t1;
Result sets differ in metadata for the query
(SELECT MAX(a) FROM t1) UNION (SELECT MAX(a) FROM t1);
The reason for the different query metadata is that for queries with a
UNION the items created during the JOIN preparation phase are placed into the
item_list of SELECT_LEX_UNIT, whereas for queries without a UNION the
item_list of SELECT_LEX is used instead.
When a stored procedure is invoked in PS mode with an argument of type ROW(),
like the following one:
CALL p1(ROW(10,20))
the statement fails with the error
ER_OPERAND_COLUMNS (1241): Operand should contain 1 column(s)
The error is emitted because the wrong method was invoked when fixing the
item corresponding to the stored procedure argument:
fix_fields_if_needed_for_scalar() was called where fix_fields_if_needed()
should have been.
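
A reduced sketch of the difference between the two methods; the Item class
here is a stand-in that keeps only what matters for this bug:

struct THD { /* ... */ };

struct Item
{
  bool fixed= false;
  virtual ~Item() {}
  virtual unsigned cols() const { return 1; }   /* ROW(10,20) => 2 */
  virtual bool fix_fields(THD *) { fixed= true; return false; }

  bool fix_fields_if_needed(THD *thd)
  { return fixed ? false : fix_fields(thd); }

  /* Also insists on exactly one column, which is wrong for a ROW()
     argument and is what raised ER_OPERAND_COLUMNS. */
  bool fix_fields_if_needed_for_scalar(THD *thd)
  { return fix_fields_if_needed(thd) || cols() != 1; }
};

/* The fix: SP arguments use the plain variant so ROW values pass. */
bool fix_sp_argument(THD *thd, Item *arg)
{
  return arg->fix_fields_if_needed(thd);   /* was ..._for_scalar() */
}
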
In the code that existed just before this patch, binding a table reference to
the specification of the corresponding CTE happened in the function
open_and_process_table(). If the table reference was not the first one in the
query, the specification was cloned in the same way the specification of
a view is cloned for each reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions,
for the following reason.
When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. As each query of SP is parsed, the info on every
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
references to derived tables and information schema tables, is put
into a hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and to link them into the list of such
objects passed to the pre-locking process, which calls open_and_process_table()
for each table in the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For every table reference occurring in
the specification, a new TABLE_LIST object is created and included in
the pre-locking list. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that
objects referencing CTEs are simply skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
The statements from the body of SP are then executed one by one.
At the very beginning of the execution of a query, the tables used in the
query are opened, and open_and_process_table() is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurs. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parse tree of the query as a
sub-tree. If this sub-tree contains references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables get opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE the way references
to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. Saving the info
about that context is not trivial, yet without this context the table
reference cannot be resolved, and consequently the specification for the
table reference cannot be determined.
This patch solves the above problem by moving the resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring
in a query are resolved right after parsing of the query has finished. After
resolution, each CTE reference is marked as a reference to a derived
table, so it is excluded from the hash table created for pre-locking the
base tables and views used when the first call of a stored procedure /
function is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
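
A toy sketch of such a reentrant entry point, assuming a much simplified
THD; the real THD::sql_parser() saves and restores far more parser state:

#include <string>

struct LEX { /* parse tree under construction */ };

struct THD
{
  LEX *lex= nullptr;            /* the parser works on thd->lex */

  bool raw_parse(const std::string &) { return false; /* placeholder */ }

  /* Recursive invocation: parse a CTE specification into its own
     LEX while the outer query is still being parsed. */
  bool sql_parser(LEX *cte_lex, const std::string &cte_spec)
  {
    LEX *saved= lex;            /* outer query's parse state */
    lex= cte_lex;
    bool err= raw_parse(cte_spec);
    lex= saved;                 /* resume the outer parse */
    return err;
  }
};
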
When you only need the view structure, don't call handle_derived with
DT_CREATE and rely on its internal hackish check to skip DT_CREATE,
because handle_derived is called from many different places
and this internal hackish check is indiscriminate.
Instead, just don't ask handle_derived to do DT_CREATE
if you don't want it to do DT_CREATE.
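
A small sketch of the calling convention this implies; the phase constants
and the signature are simplified stand-ins:

enum { DT_INIT= 1, DT_PREPARE= 2, DT_CREATE= 4 };

struct LEX;   /* opaque here */

bool handle_derived(LEX *, unsigned phases)
{
  /* Placeholder: run exactly the phases that were requested. */
  (void) phases;
  return false;
}

bool open_view_structure_only(LEX *lex)
{
  /* Before: passing DT_CREATE too and relying on an internal check
     inside handle_derived to skip it. After: just don't request it. */
  return handle_derived(lex, DT_INIT | DT_PREPARE);
}
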
Plugin variables in SET only locked the plugin until the end of the
statement. If a SET with a plugin variable was prepared, it was possible
to uninstall the plugin before EXECUTE. EXECUTE would then crash,
trying to resolve a now-invalid pointer to the disappeared variable.
Fix: keep plugins locked until the prepared statement is closed.
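
A sketch of the lifetime change with stand-in types; the
plugin_lock()/plugin_unlock() stubs stand in for the server's plugin
refcounting:

struct plugin_ref { /* refcounted handle keeping the plugin loaded */ };

static plugin_ref locked;
plugin_ref *plugin_lock(const char *) { return &locked; }  /* stub */
void plugin_unlock(plugin_ref *) {}                        /* stub */

struct Prepared_statement
{
  plugin_ref *var_plugin= nullptr;  /* plugin owning a SET variable */

  void on_prepare_set_var(const char *plugin_name)
  {
    /* Hold the lock for the whole PS lifetime, not just until the
       end of the statement, so the sysvar pointer resolved here is
       still valid at EXECUTE time. */
    var_plugin= plugin_lock(plugin_name);
  }

  ~Prepared_statement()             /* PS closed: release the plugin */
  {
    if (var_plugin)
      plugin_unlock(var_plugin);
  }
};
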
A user connected to a server with an expired password
can't change the password with the statement "SET password=..."
if the statement is run in PS mode. In this use case the user
gets the error ER_MUST_CHANGE_PASSWORD on attempting to run
the statement PREPARE stmt FROM "SET password=...";
The reason a locked user fails to reset the password with
PREPARE stmt FROM "SET password=..." is that PS-related
statements are not listed among the commands allowed for execution
by a user with an expired password. However, simply adding the PS-related
statements (PREPARE/EXECUTE/DEALLOCATE PREPARE) to the list of
statements allowed for a locked user is not enough,
since it would open the opportunity for a locked user
to execute any statement in PS mode.
To exclude this opportunity, an additional check that the statement
being prepared for execution in PS mode is a SET statement has to be added.
This extra check has been added by this patch to the method
Prepared_statement::prepare(), which is executed when any statement is
prepared for execution in PS mode.
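
A minimal sketch of that extra check; SQLCOM_SET_OPTION is the server's
command code for SET, while the surrounding types are stand-ins:

enum enum_sql_command { SQLCOM_SET_OPTION, SQLCOM_SELECT /* , ... */ };

struct THD
{
  bool password_expired;
  enum_sql_command prepared_command;  /* command of the parsed stmt */
};

/* true => raise ER_MUST_CHANGE_PASSWORD instead of preparing. */
bool deny_prepare_for_expired_user(const THD *thd)
{
  /* PREPARE/EXECUTE/DEALLOCATE themselves are now allowed for a user
     with an expired password, but the statement being prepared must
     be SET; otherwise the PS path would bypass the expired-password
     restriction entirely. */
  return thd->password_expired &&
         thd->prepared_command != SQLCOM_SET_OPTION;
}
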
Statements with a SET STATEMENT ... FOR clause are handled incorrectly when
the whole statement is executed in prepared statement mode.
For example, running the following statement
SET STATEMENT sql_mode = 'NO_ENGINE_SUBSTITUTION' FOR CREATE TABLE t1 AS SELECT CONCAT('abc') AS c1;
results in different definitions of the table t1 depending on whether
the statement is executed as a prepared or as a regular statement.
In the first case the column c1 is defined as
`c1` varchar(3) DEFAULT NULL
and in the second case as
`c1` varchar(3) NOT NULL
The differing definitions of the column c1 arise from the fact that
the value of the data member Item_func_concat::maybe_null depends on
whether strict mode is on or off. Below is the definition of the method
fix_fields() of the class Item_str_func, the base class of the
class Item_func_concat that is created when parsing the statement under
the SET STATEMENT ... FOR clause.
bool Item_str_func::fix_fields(THD *thd, Item **ref)
{
  bool res= Item_func::fix_fields(thd, ref);
  /*
    In Item_str_func::check_well_formed_result() we may set null_value
    flag on the same condition as in test() below.
  */
  maybe_null= maybe_null || thd->is_strict_mode();
  return res;
}
Although the clause SET STATEMENT sql_mode = 'NO_ENGINE_SUBSTITUTION' FOR
is parsed in the PREPARE phase of processing the prepared statement,
the actual setting of the sql_mode system variable is done in the EXECUTE
phase. On the other hand, the method Item_str_func::fix_fields() is called in
the PREPARE phase. As a result, thd->is_strict_mode() returns true while the
method Item_str_func::fix_fields() is running, the data member maybe_null is
assigned the value true, and the column c1 is defined as DEFAULT NULL.
To fix the issue, the system variables listed in the SET STATEMENT ... FOR
clause are set at the beginning of the PREPARE phase, right before
calling the function check_prepared_statement(), and their original values
are restored immediately after returning from this function.
Additionally, to avoid code duplication, the code in the function
mysql_execute_command() that sets the variables specified by a SET STATEMENT
clause was extracted into the standalone function
run_set_statement_if_requested(). This new function is called from
the function mysql_execute_command() and from the method
Prepared_statement::prepare().
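
A condensed sketch of the shared call; run_set_statement_if_requested() and
check_prepared_statement() are named by the patch, while the restore helper
and the types are stand-ins:

struct THD { /* ... */ };
struct LEX { /* parsed statement incl. SET STATEMENT variables */ };

/* Named in the patch; the body here is a placeholder. */
bool run_set_statement_if_requested(THD *, LEX *) { return false; }

static void restore_session_vars(THD *, LEX *) {}    /* stand-in */
static bool check_prepared_statement(THD *) { return false; }

bool prepare_sketch(THD *thd, LEX *lex)
{
  /* Apply the SET STATEMENT ... FOR variables before items are
     fixed, so maybe_null is computed under the statement's sql_mode
     exactly as it would be during regular execution ... */
  if (run_set_statement_if_requested(thd, lex))
    return true;

  bool err= check_prepared_statement(thd);

  /* ... and restore the session values immediately afterwards. */
  restore_session_vars(thd, lex);
  return err;
}
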
The reason for the failure is that
thd->mdl_context.release_transactional_locks()
was called after commit & rollback even in cases where the current
transaction was still active.
For 10.2, 10.3 and 10.4 the fix is simple:
- Replace all calls to thd->mdl_context.release_transactional_locks() with
thd->release_transactional_locks(). The THD function will only call
the mdl_context function if there is no active transaction (see the sketch
after the notes below).
In 10.6 we will fix this better by changing the return value of some
trans_xxx() functions to indicate whether the call closed the transaction
or not. That will avoid the need for the indirect call.
Other things:
- trans_xa_commit() and trans_xa_rollback() will automatically
call release_transactional_locks() if the transaction is closed.
- We can't do that for the other functions, as the callers of many of them
do additional work (like close_thread_tables) before calling
release_transactional_locks().
- Added missing abort_result_set() and missing DBUG_RETURN in
select_create::send_eof()
- Fixed wrong indentation in injector::transaction::commit()
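
A sketch of the guarded wrapper mentioned above, with stand-in types:

struct MDL_context
{
  void release_transactional_locks() { /* drop the metadata locks */ }
};

struct THD
{
  MDL_context mdl_context;
  bool transaction_active= false;

  /* New wrapper: keep the MDL locks while a transaction is still
     open; only release them once it has really ended. */
  void release_transactional_locks()
  {
    if (!transaction_active)
      mdl_context.release_transactional_locks();
  }
};
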
- mysqlnd from PHP < 7.3
- mysql-connector-python any version
- mysql-connector-java any version
Relaxed the check for garbage at the end of the packet in the case of no parameters.
Added a check for array binding.
Fixed the test according to the new paradigm (allow junk at the end of the packet).
Prepared statements run over the binary protocol crashed
the server if the statement did not have the CF_PS_ARRAY_BINDING_OPTIMIZED
flag, was executed in bulk mode, and a BF abort occurred.
This was because the bulk execution resulted in several statements without
wsrep_after_statement() being called in between, which confused the wsrep
transaction state tracking.
As a fix, call wsrep_after_statement() in the bulk loop after each execution
if CF_PS_ARRAY_BINDING_OPTIMIZED is not set.
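
A reduced sketch of the loop after the fix; execute_once() and the flag
plumbing are stand-ins, wsrep_after_statement() is the call named above:

struct THD { /* ... */ };

static bool execute_once(THD *, unsigned) { return false; }   /* stub */
static bool wsrep_after_statement(THD *) { return false; }    /* stub */

bool execute_bulk(THD *thd, unsigned rows, bool array_binding_optimized)
{
  for (unsigned i= 0; i < rows; i++)
  {
    if (execute_once(thd, i))
      return true;
    /* Without CF_PS_ARRAY_BINDING_OPTIMIZED each iteration is a
       separate statement as far as wsrep is concerned, so close it
       out here; otherwise a BF abort mid-loop leaves the transaction
       state tracking inconsistent. */
    if (!array_binding_optimized && wsrep_after_statement(thd))
      return true;
  }
  return false;
}
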
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
In the case of direct execution (stmt id = -1, mariadb_stmt_execute_direct in
the C API), the application controls how many parameters the client sends to
the server. If this number is not equal to the actual number of query
parameters, the server may start to interpret the packet data incorrectly,
e.g. starting from the size of the null bitmap, and that could cause it to
crash at some point. The commit introduces some additional COM_STMT_EXECUTE
packet sanity checks:
- checking that the "types sent" byte is set and its value is equal to 1;
if it's not direct execution, that value may be 0 or 1;
- checking that the parameter type value is a valid type, and that the
parameter flags value is 0 or has only the "unsigned" bit set;
- added more checks that reads do not go beyond the end of the packet.
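
A sketch of the kind of validation this describes; the offsets, the 0x80
"unsigned" bit, and is_valid_type() are assumptions standing in for the
exact wire-format code:

#include <cstdint>
#include <cstddef>

static const uint8_t UNSIGNED_FLAG_BIT= 0x80;      /* assumption */

static bool is_valid_type(uint8_t)
{ return true; /* placeholder: check against the MYSQL_TYPE_* list */ }

bool check_execute_packet(const uint8_t *pos, const uint8_t *end,
                          unsigned param_count, bool direct_execution)
{
  size_t null_bytes= (param_count + 7) / 8;        /* null bitmap */
  if (end - pos < (ptrdiff_t) (null_bytes + 1))
    return false;                                  /* would read past end */
  pos+= null_bytes;

  uint8_t types_sent= *pos++;
  /* Direct execution must resend the types; otherwise 0 or 1 is legal. */
  if (types_sent != 1 && (direct_execution || types_sent != 0))
    return false;

  if (types_sent == 1)
  {
    if (end - pos < (ptrdiff_t) (2 * param_count))
      return false;                                /* truncated type block */
    for (unsigned i= 0; i < param_count; i++)
    {
      uint8_t type=  pos[2 * i];
      uint8_t flags= pos[2 * i + 1];
      if (!is_valid_type(type))
        return false;
      if (flags != 0 && flags != UNSIGNED_FLAG_BIT)
        return false;          /* only the "unsigned" bit may be set */
    }
  }
  return true;
}
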