Updated tests: cases that hit bugs or that cannot be run
with the cursor protocol were excluded with
"--disable_cursor_protocol"/"--enable_cursor_protocol".
Fix for v.10.5
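For example, a test case can wrap a single statement in these directives
(the statement and table name are illustrative assumptions):
  --disable_cursor_protocol
  SELECT COUNT(*) FROM t1;
  --enable_cursor_protocol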
Safety first: tell the mariadb client not to execute dangerous
CLI commands; they cannot be present in the dump anyway.
Wrapping the command in /*!999999 ..... */ guarantees that
if a non-mariadb-cli client loads the dump and sends it to the
server, the server will ignore the command it doesn't understand.
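As an illustration (the exact text of the emitted line is an assumption),
the dump can begin with a line like:
  /*!999999\- enable the sandbox mode */
A mariadb client that knows the sandbox command executes it; any other
client sends the whole line to the server, which skips the executable
comment because its version can never reach 999999.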
This bug led to wrong result sets returned by the second execution of
prepared statements from selects using mergeable derived tables pushed
into an external engine. Such derived tables are always materialized. The
decision that they have to be materialized is taken late, in the function
mysql_derived_optimized(). For regular derived tables this decision is
usually taken at the prepare phase. However, in some cases this decision
is made for some derived tables in mysql_derived_optimized() too. It can
be seen in the code of mysql_derived_fill() that for such a derived table
it's critical to change its translation table to tune it to the fields of
the temporary table used for materialization of the derived table, and
this must be done after each refill of the derived table. The same actions
are needed for derived tables pushed into external engines.
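A minimal scenario of this kind could look as follows (table and engine
names are assumptions; fed_t1 stands for a table whose derived table is
pushed into the external engine):
  # illustrative only
  PREPARE stmt FROM 'SELECT * FROM (SELECT a, b FROM fed_t1) AS dt WHERE dt.a > 0';
  EXECUTE stmt;   # first execution: correct result set
  EXECUTE stmt;   # second execution returned a wrong result set before this fix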
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch adds a second execution of "SELECT" queries for
"--ps-protocol".
It also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) this second
execution for "--ps-protocol" in test cases.
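For example, a .test file can switch the second execution off around a
single statement (the statement and table name are illustrative assumptions):
  --disable_ps2_protocol
  SELECT COUNT(*) FROM t1;
  --enable_ps2_protocol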
There was no actual execution of the SQL of a pushed derived table,
which caused "r_rows" to always be displayed as 0 and "r_total_time_ms"
to show inaccurate numbers.
This commit makes the SQL of a pushed derived table be executed by the
storage engine, so the server is able to calculate the number of rows
returned and measure the execution time more accurately.
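The effect is visible in ANALYZE output; for instance (table and column
names are assumptions):
  # illustrative only
  ANALYZE FORMAT=JSON
  SELECT * FROM (SELECT a, COUNT(*) FROM fed_t1 GROUP BY a) AS dt;
  # before this change the pushed derived table always reported r_rows: 0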
The problem was that the federated engine does not support comparable
rowids, which was not taken into account by the semijoin code.
Fixed by checking that we don't use semijoin with tables that do not
support comparable rowids.
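For example, a query like the following (table names are assumptions)
must not be resolved with a semijoin when the subquery table's engine
cannot provide comparable rowids:
  # illustrative only
  SELECT * FROM t1 WHERE t1.a IN (SELECT a FROM federated_t2);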
Other things:
- Fixed some typos in the code comments
Deallocation of TABLE_LIST::dt_handler and TABLE_LIST::pushdown_derived
was performed in multiple places in the code. This not only made the code
more difficult to maintain but also led to memory leaks and
ASAN heap-use-after-free errors.
This commit moves deallocation of TABLE_LIST::dt_handler and
TABLE_LIST::pushdown_derived to a single point - JOIN::cleanup().
A FederatedX table may refer to a table with a different name on the
remote server:
test> CREATE TABLE t2 (...) ENGINE="FEDERATEDX"
CONNECTION="mysql://user:pass@192.168.1.111:9308/federatedx/t1";
test> select * from t2 where ...;
This could cause an issue with federated_pushdown=1, because FederatedX
pushes the query's (or derived table's) text to the remote server. The remote
server will try to read from table t2 (while it should read from t1).
Solution: do not allow pushing down queries with tables that have a
different db_name.table_name on the local and the remote server.
This patch also fixes:
MDEV-29863 Server crashes in federatedx_txn::acquire after select from the
FederatedX table with partitions
Solution: disallow pushdown when partitioned FederatedX tables are used.
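With pushdown enabled, the scenario above could look like this (the exact
statements are an illustration, reusing the t1/t2 setup from the CREATE
TABLE above):
  # illustrative only
  SET GLOBAL federated_pushdown=1;
  SELECT * FROM t2 WHERE t2.a > 10;
  # the pushed-down query text refers to t2, which does not exist on the
  # remote server, so the remote read fails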
Federated and FederatedX cannot be used with ROR scans
Federated::position() and FederatedX::position() store in 'ref' a
pointer into a local result set buffer. This means that one cannot
compare 'ref' values from different handler instances to see if they
point to the same physical record.
This bug caused federated.federatedx to return wrong results when the
optimizer tried to use index_merge to resolve some queries.
Fixed by introducing table flag HA_NON_COMPARABLE_ROWID and using this
with the above handlers.
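An index_merge plan of the kind that went wrong could be triggered by a
query such as (table and column names are assumptions):
  # illustrative only; fed_t has separate indexes on a and b
  SELECT * FROM fed_t WHERE a = 1 OR b = 2;
  # index_merge needs to compare row positions from the two index scans,
  # which requires comparable rowids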
Todo:
- Fix multi_delete(), multi_update and read_records() to use the primary key
  instead of 'ref' in case HA_NON_COMPARABLE_ROWID is set. The current
code only works if we have only one range (like table scan) for the
tables that will be updated in the second pass.
- Enable DBUG_ASSERT() in ha_federated::cmp_ref() and
ha_federatedx::cmp_ref().
ha_partition stores records in the array m_ordered_rec_buffer and uses
it for the priority queue in ordered index scans. When the records are
restored from the array the blob buffers may already be freed or rewritten.
The solution is to take temporary ownership of cached blob buffers via
String::swap(). When the record is restored from m_ordered_rec_buffer
the ownership is returned to the table fields.
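A scenario of the kind affected is an ordered read over a partitioned
table with blob columns (the table layout and query are assumptions):
  # illustrative only
  CREATE TABLE tp (a INT, b BLOB, KEY(a))
    PARTITION BY HASH(a) PARTITIONS 4;
  SELECT a, b FROM tp WHERE a BETWEEN 1 AND 10 ORDER BY a;
  # the ordered index scan merges rows from all partitions through the
  # priority queue built over m_ordered_rec_buffer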
Cleanups:
init_record_priority_queue(): removed needless !m_ordered_rec_buffer
check as there is the same assertion a few lines before.
dbug_print_row() for an arbitrary row pointer
Cause: shared federatedx_io cannot store table-specific data.
Fix: move current row reference `federatedx_io_mysql::current` to
ha_federatedx.
A FederatedX connection (represented by federatedx_io) is stored in
federatedx_txn::txn_list of per-server connections (see
federatedx_txn::acquire()). The federatedx_txn object is stored in the THD
(see ha_federatedx::external_lock()). When multiple handlers acquire a
FederatedX connection they get a single federatedx_io instance. The
handlers do their operations via federatedx_io_mysql::mark_position()
and federatedx_io_mysql::fetch_row() in an arbitrary order. They access
the same federatedx_io_mysql instance and the same MYSQL_ROWS *current
pointer, so one handler disrupts the work of the other.
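For example, a join of two FederatedX tables that point to the same remote
server (table names are assumptions) makes two handlers fetch rows through
the same shared connection:
  # illustrative only
  SELECT * FROM fed_t1 JOIN fed_t2 ON fed_t1.id = fed_t2.id;
  # both handlers read through one federatedx_io_mysql, so a shared
  # `current` row pointer lets one scan disturb the other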
Related to "MDEV-14551 Can't find record in table on multi-table update
with ORDER BY".
This bug happened when the HEAP temporary table used for the derived table
created for a derived handler of a remote engine of the federated type
became full and was converted to an Aria table. For this conversion
the tmp_table_param parameter must always be taken from the select_unit
object created for the result of the derived table.
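The overflow can be provoked by shrinking the in-memory temporary table
limits before filling the derived table (the variable values and table
name are assumptions):
  # illustrative only
  SET SESSION max_heap_table_size = 16384;
  SET SESSION tmp_table_size = 16384;
  SELECT * FROM (SELECT * FROM big_fed_table) AS dt;
  # the materialized derived table starts as a HEAP table and is converted
  # once it exceeds the limit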
- select_describe() should not attempt to produce query plans
for subqueries if the query is handled by a Select Handler
(illustrated below).
- JOIN::save_explain_data_intern should not add links to Explain_select
for child selects if:
1. The whole query is handled by the Select Handler, or
2. this select (and so its children) is handled by a Derived Handler.
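An EXPLAIN of a pushed-down query with a subquery shows the effect (the
query and table names are assumptions):
  # illustrative only
  EXPLAIN
  SELECT a, COUNT(*) FROM fed_t1
  WHERE a IN (SELECT b FROM fed_t2) GROUP BY a;
  # when the whole SELECT is taken over by the select handler, no separate
  # plan nodes are produced for the subquery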
Respect system fields in NO_ZERO_DATE mode.
This is the subject for refactoring in MDEV-19597
Conflict resolution from 7d5223310789f967106d86ce193ef31b315ecff0
Backport to 10.4:
- Don't try to push down SELECTs that have a side effect
- In case the storage engine did support pushdown of SELECT with an INTO
clause, write the rows we've got from it into select->join->result,
and not thd->protocol. This way, SELECT ... INTO ... FROM
smart_engine_table will put the result where instructed, and
NOT send it to the client (see the illustration below).
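As an illustration (the statements, column, and file names are assumptions),
both forms below must deliver rows to the INTO target even when the SELECT
itself is pushed down:
  # illustrative only
  SELECT a, b INTO OUTFILE '/tmp/t1_dump.txt' FROM smart_engine_table WHERE a > 0;
  SELECT COUNT(*) INTO @cnt FROM smart_engine_table;
  # the rows go to the OUTFILE / user variable via select->join->result,
  # not to the client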