Handling of top level conjuncts in WHERE whose used_tables() contained
RAND_TABLE_BIT in the function make_join_select() was incorrect.
As a result, if such a conjunct referred to fields none of which belonged
to the last joined table, it was pushed twice. (This could be seen
for a test case from subselect.test whose output was changed after this
patch had been applied. In 10.1 when running EXPLAIN FORMAT=JSON for
the query from this test case we clearly see that one of the conjuncts
is pushed twice.) This by itself was not good. Moreover, if such a
conjunct was pushed to a table that was the result of materialization
of a semi-join, the query could return a wrong result set. In particular,
this could be observed for queries with semi-join subqueries whose left
parts used stored functions without the DETERMINISTIC specifier.
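A minimal sketch of the affected pattern (hypothetical names; f() is not
declared DETERMINISTIC, so the left part of the IN predicate has
RAND_TABLE_BIT set in its used_tables()):

  CREATE FUNCTION f(x INT) RETURNS INT RETURN x + 1;
  -- the subquery may be converted to a materialized semi-join
  SELECT * FROM t1 WHERE f(t1.a) IN (SELECT b FROM t2);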
Adding destructor for Group_bound_tracker to free Cached_item_str.
The Cached_item objects for window functions are allocated on THD::mem_root,
but Cached_item_str holds a value of type String which is allocated on
the heap, so we need to free it explicitly.
- --default-character-set can now be disabled in mysqldump
- --skip-resolve can now be disabled in mysqld
- mysql_client_test now resets global variables it changes
- mtr couldn't handle [mysqldump] in config files (wrong regexp used)
as well as
MDEV-19500 Update with join stopped working if there is a call to a procedure in a trigger
MDEV-19521 Update Table Fails with Trigger and Stored Function
MDEV-19497 Replication stops because table not found
MDEV-19527 UPDATE + JOIN + TRIGGERS = table doesn't exists error
Reimplement the fix (5d510fdbf0) for
MDEV-18507 can't update temporary table when joined with table with triggers on read-only:
instead of calling open_tables() twice, put the multi-update
prepare code inside the open_tables() loop.
Add a test for an MDL backoff-and-retry loop inside open_tables()
across the multi-update prepare code.
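A hedged sketch of the kind of statement involved, based on the MDEV
titles above (hypothetical names; the trigger body calls a stored
procedure, which must be opened during multi-update prepare):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE PROCEDURE p1() SET @x = 1;
  CREATE TRIGGER tr1 BEFORE UPDATE ON t1 FOR EACH ROW CALL p1();
  UPDATE t1 JOIN t2 ON t1.a = t2.b SET t1.a = t1.a + 1;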
Problem:
========
The following typo appears in the error log:
2019-03-13 15:58:10 0 [Note] Reading of all Master_info entries succeded
Should be 'succeeded'
Fix:
===
Fixed the typo by using the correct spelling 'succeeded'.
This patch complements the patch that fixes bug MDEV-18479.
This patch takes care of possible overflow when calculating the
estimated number of rows in a materialized derived table / view.
This bug could happen when queries with nested outer joins were
executed employing join buffers. In such an execution, if the method
JOIN_CACHE::join_records() is called when a join buffer has become
full, the 'first_unmatched' field must not be cleared in the JOIN_TAB
structure to which the join cache with this buffer is attached.
or server crashes in JOIN::fix_all_splittings_in_plan after EXPLAIN
This patch resolves the problem of overflow when performing
calculations to estimate the cost of an evaluated query execution plan.
Such an overflow in a non-debug build could cause different kinds of
problems, including crashes of the server.
This patch corrects the patch for bug 10006. The latter incorrectly
calculates the attribute TABLE_LIST::dep_tables for inner tables
of outer joins that are to be converted into inner joins.
As a result, after that patch some valid join orders were not evaluated
and the optimizer could choose an execution plan that was far from
optimal.
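A hedged example of the kind of query affected (hypothetical tables; the
WHERE condition on the inner table allows the outer join to be converted
into an inner join, after which TABLE_LIST::dep_tables matters for the
choice of join order):

  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a WHERE t2.b > 0;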
In the best_access_path function, when no key suitable for ref access
is found and join_cache_level is set to a value that makes hash join
possible, we build a hash key. Later in the function we compare the cost
of ref access with that of a table scan (or index scan, or quick select).
There is no need to do this when we have got a hash key.
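A hedged illustration of when this code path is taken (hypothetical
tables with no index on the join columns, so ref access is impossible
but a hash key can be built from the equi-join condition):

  SET join_cache_level = 4;  -- a level that enables hashed join variants
  SELECT * FROM t1 JOIN t2 ON t1.c = t2.c;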
Problem:
========
The test now fails with the following trace:
CURRENT_TEST: rpl.rpl_parallel_temptable
--- /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.result
+++ /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.reject
@@ -194,7 +194,6 @@
30 conservative
31 conservative
32 optimistic
-33 optimistic
Analysis:
=========
The part of the test which fails with a result content mismatch is given below.
CREATE TEMPORARY TABLE t4 (a INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t4 VALUES (32);
INSERT INTO t4 VALUES (33);
INSERT INTO t1 SELECT a, "optimistic" FROM t4;
slave_parallel_mode=optimistic
The expectation of the above test script is that the INSERT ... SELECT should
read both 32 and 33 and populate table 't1'. But this expectation fails occasionally.
All three INSERT statements are handed over to three different slave parallel
workers. Temporary tables are not safe for parallel replication. They were
designed to be visible to one thread only, so they have no table locking; thus
there is no protection against things like two conflicting transactions
committing in parallel.
So under parallel replication anything that uses temporary tables is
serialized with anything before it by means of a "wait_for_prior_commit"
function call. This ensures that each such transaction is executed
sequentially.
But there exists a code path in which the above wait doesn't happen. Because
of this, at times the INSERT ... SELECT doesn't wait for the INSERT (33) to
complete; it finishes its execution and enters the commit stage. Hence only
row 32 is found in those cases, resulting in the test failure.
The wait needs to be added within the "open_temporary_table" call. The code
there looks like this.
Each thread tries to open a temporary table in 3 different ways:
case 1: Find a temporary table which is already in use by using
find_temporary_table(tl) && wait_for_prior_commit()
case 2: If the above failed, look for a temporary table which is marked
free for reuse. This internally calls "wait_for_prior_commit()" if a
table is found.
find_and_use_tmp_table(tl, &table)
case 3: If none of the above succeeds, open a new table handle from the table share.
if (!table && (share= find_tmp_table_share(tl)))
{ table= open_temporary_table(share, tl->get_table_name(), true); }
At present the "wait_for_prior_commit" happens only in cases 1 & 2.
Fix:
====
On the slave, add a call to "wait_for_prior_commit" for case 3 as well.
This wait on the slave solves the issue. A more thorough fix would be to
mark temporary tables as not safe for parallel execution on the master side.
In order to do that, on the master side, mark the Gtid_log_event specific
flag FL_TRANSACTIONAL as false all the time, so that such transactions are
not scheduled in parallel.
This patch complements the original patch for MDEV-18896 that prevents
conversions to semi-joins in tableless selects used in INSERT statements
in post-5.5 versions of the server.
The test case was corrected as well to ensure that potential conversion
to jtbm semi-joins is also checked (the problem was that one of the
preceding test cases in subselect_sj.test did not restore the state of
the optimizer switch, leaving 'materialization' in the state 'off' and
so blocking this check).
Noticed an inconsistency in the state of select_lex::table_list used
in INSERT statements and left a comment about this.
The bug was that when using mysql_list_fields,
table_list->schema_table_name was not filled in.
Fixed by using table_list->schema_table instead, which is always
filled in.
We had the statistics tables in the FROM list of the select.
The statistics for tables are not read in such cases, so we need
to check this case separately.
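A hedged example of such a case, assuming the engine-independent
statistics tables in the mysql schema are the ones being selected from:

  SELECT * FROM mysql.table_stats, mysql.column_stats;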
truncating a temporary table
TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
itself) to be open. However this requirement wasn't enforced after
"MDEV-5535: Cannot reopen temporary table".
Fixed by closing unused table instances before performing TRUNCATE.
Problem:
========
We have a Master/Master setup on two servers, but are only writing to one
of those servers (so it is essentially Master/Slave). We upgraded from
10.1.* to 10.2.22 last week and, starting with the upgrade, we are getting
duplicate key errors on the slave. BINLOG=mixed.
Analysis:
=========
This issue happens with the combination of LOCK TABLES and
binlog_format=MIXED. When an unsafe statement is encountered in 'MIXED'
mode, it is logged in 'ROW' format. For all tables that are part of the
LOCK TABLES list, table maps are written into the binary log. For each
table in the list a check is done to see whether the
'check_table_binlog_row_based_done' flag is set or not. If it is not set,
a check process is initiated to see whether the table qualifies for
row-based binary logging, and 'check_table_binlog_row_based_done' is set.
This flag is cleared at the time of closing the thread's tables.
But there can be special cases where LOCK TABLES contains more tables
than the unsafe query actually uses.
For example: LOCK TABLES locks t1,t2,t3 but the unsafe statement makes use
of only two tables, t1 and t3. In this case the
'check_table_binlog_row_based_done' flag is enabled for table 't2' while
writing its table map, but the 'close_thread_tables' call will not reset
this flag. Since the flag is not cleared for table 't2', even a safe
statement which uses t2 will be logged in row-based format.
This leads to an assert on debug builds and causes duplicate entries in
release builds, where a statement is logged in both ROW and STATEMENT
format. This causes the slave to fail with a duplicate key error.
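A hedged sketch of the scenario (hypothetical statements; UUID() makes
the first insert unsafe):

  LOCK TABLES t1 WRITE, t2 WRITE, t3 WRITE;
  -- unsafe: logged as ROW; table maps written for t1, t2 and t3,
  -- setting check_table_binlog_row_based_done on all three
  INSERT INTO t1 SELECT UUID(), a FROM t3;
  -- safe, but t2's stale flag makes it get logged in row format as well
  INSERT INTO t2 VALUES (1);
  UNLOCK TABLES;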
Fix:
===
During 'close_thread_tables', when LOCK TABLE modes are active, "ha_reset"
is done for all the tables which were part of the current statement. As
mentioned in the example, 'ha_reset' is called for tables 't1' and 't3';
this clears the 'check_table_binlog_row_based_done' flag. At this point,
add a check for the rest of the tables to see whether
'check_table_binlog_row_based_done' is enabled, and if so clear the flag.
Some places didn't match the previous rules, making the Floor
address wrong.
Additional sed rules:
sed -i -e 's/Place.*Suite .*, Boston/Street, Fifth Floor, Boston/g'
sed -i -e 's/Suite .*, Boston/Fifth Floor, Boston/g'
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
No functional change.
Call my_timer_init() only once and then reuse it from InnoDB and
perfschema storage engines.
This patch speeds up an empty test for me like this:
./mtr -mem innodb.kevg,xtradb 1.21s user 0.84s system 34% cpu 5.999 total
./mtr -mem innodb.kevg,xtradb 1.12s user 0.60s system 31% cpu 5.385 total
The crash happens when writing into the log file.
The reason is likely that the call to WriteFile() was missing a valid
parameter for lpNumberOfBytesWritten. This seems only to happen on ancient
versions of Windows.
Since the fix to MDEV-16430 in 141bc58ac9, a null pointer was passed
instead of a valid pointer.
The fix is to provide a valid lpNumberOfBytesWritten parameter.
A sequel to 9180e86 and 149b754.
ALTER TABLE ... ADD FOREIGN KEY may crash if parent table is updated
concurrently.
Block FK parent table updates even earlier, before the intermediate child
table is created.
Use proper charset info for my_casedn_str() and don't update the original
identifiers, so that lower_case_table_names == 2 is honoured.
The bug occurs because Item_func_set_user_var is allowed to be pushed
into a materialized derived table/view.
To fix it, excl_dep_on_table() was added to the Item_func_set_user_var
class to prevent the pushdown.
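A hedged example of a query that could hit this (hypothetical names;
pushing the WHERE condition into the derived table would drag the
@a := f1 assignment along with it):

  SELECT * FROM (SELECT @a := f1 AS af1 FROM t1) AS dt WHERE af1 > 1;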
Need to call split_sum_func if an aggregate function is part of the ORDER BY
or PARTITION BY clause, so that we have the required fields inside the
temporary table: all the fields inside the PARTITION BY and ORDER BY clauses
of the window function need to be present in the temp table used for window
function computation.
The issue here is that for a window function in the ORDER BY clause, we were
not creating an extra field in the temporary table for the window function
(which is contained in an expression).
So a call to split_sum_func is added to handle this case.
Also, we need to update all items that contain a window function in the temp
table during window function computation, as filesort needs these values
updated to evaluate the ORDER BY clause of the select.
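A hedged illustration of the affected construct (hypothetical table; the
window function sits inside an expression in the ORDER BY clause, so
split_sum_func must create a temp-table field for it):

  SELECT a FROM t1
  ORDER BY ROW_NUMBER() OVER (PARTITION BY b ORDER BY c) + 1;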
For degenerate joins we may have JOIN::table_list as NULL, so instead
of using JOIN::top_join_tab_count, use the function JOIN::exec_join_tab_cnt
to get the number of tables joined at the top level.