Note: this patch is for 5.6.
Detected by ASAN.
The patch fixes the cleanup of parser stack pointers.
Reviewed-by: Guilhem Bichot <guilhem.bichot@oracle.com>
The parser returned a syntax error message for queries with join
expressions of the form t1 JOIN t2 [LEFT | RIGHT] JOIN t3 ON ... ON ... where
the second operand of the outer JOIN operation with an ON clause was another
join expression with an ON clause. In this expression the JOIN operator is
right-associative, i.e. the expression has to be parsed as
t1 JOIN (t2 [LEFT | RIGHT] JOIN t3 ON ... ) ON ...
Such join expressions are hard to parse because the outer JOIN is
left-associative if there is no ON clause for the first outer JOIN operator.
The patch implements a solution in which the JOIN operator is always parsed
as right-associative and the right-associative tree is built first. If it
turns out that there is no corresponding ON clause for this operator, the
tree is converted to a left-associative one.
The idea of the solution was taken from the patch by Martin Hansson
"WL#8083: Fixed the join_table rule" from MySQL-8.0 code line.
As the grammar rules related to join expressions in MySQL-8.0 and
MariaDB-5.5+ are quite different, the MariaDB solution could not borrow
any code from the MySQL-8.0 one.
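For illustration, here is a query of the problematic shape (table and column
names are hypothetical) that formerly failed with a syntax error and is now
accepted:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  SELECT * FROM t1 JOIN t2 LEFT JOIN t3 ON t2.a = t3.a ON t1.a = t2.a;
  -- parsed as: t1 JOIN (t2 LEFT JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a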
Problem:
========
The test now fails with the following trace:
CURRENT_TEST: rpl.rpl_parallel_temptable
--- /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.result
+++ /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.reject
@@ -194,7 +194,6 @@
30 conservative
31 conservative
32 optimistic
-33 optimistic
Analysis:
=========
The part of the test which fails with a result content mismatch is given below.
CREATE TEMPORARY TABLE t4 (a INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t4 VALUES (32);
INSERT INTO t4 VALUES (33);
INSERT INTO t1 SELECT a, "optimistic" FROM t4;
slave_parallel_mode=optimistic
The expectation of the above test script is that the INSERT ... SELECT should
read both rows 32 and 33 and populate table 't1'. But this expectation fails
occasionally.
All three INSERT statements are handed over to three different slave parallel
workers. Temporary tables are not safe for parallel replication: they were
designed to be visible to one thread only and have no table locking, so there
is no protection against two conflicting transactions committing in parallel.
So anything that uses temporary tables will be serialized with anything before
it when using parallel replication, by means of a "wait_for_prior_commit"
function call. This ensures that each such transaction is executed sequentially.
But there exists a code path in which the above wait doesn't happen. Because of
this, at times the INSERT from SELECT doesn't wait for the INSERT (33) to
complete; it finishes its execution and enters the commit stage. Hence only
row 32 is found in those cases, resulting in the test failure.
The wait needs to be added within the "open_temporary_table" call. The code
within "open_temporary_table" looks like this.
Each thread tries to open a temporary table in 3 different ways:
case 1: Find a temporary table which is already in use:
          find_temporary_table(tl) && wait_for_prior_commit()
case 2: If the above failed, try to look for a temporary table which is marked
        free for reuse. This internally calls "wait_for_prior_commit()" if a
        table is found:
          find_and_use_tmp_table(tl, &table)
case 3: If none of the above, open a new table handle from the table share:
          if (!table && (share= find_tmp_table_share(tl)))
          { table= open_temporary_table(share, tl->get_table_name(), true); }
At present the "wait_for_prior_commit" happens only in cases 1 & 2.
Fix:
====
On the slave, add a call to "wait_for_prior_commit" for case 3 as well.
The above wait on the slave solves the issue. A more thorough fix would be to
mark temporary tables as not safe for parallel execution on the master side.
In order to do that, on the master side, mark the Gtid_log_event specific flag
FL_TRANSACTIONAL as false all the time, so that such events are not scheduled
in parallel.
The server always required the UPDATE privilege on views, not being able to
detect when a view was not actually updated in a multi-update.
Fix: instead of marking all tables as "updating" by default,
only set "updating" on tables that will actually be updated
by the multi-update. And mark the view "updating" if any of the
view's tables is.
1. Always drop the merged_for_insert flag on cleanup (there could be errors
   which prevent the TABLE from being assigned)
2. Make the cleanup of the touched select parts more precise
st_select_lex::handle_derived() and mysql_handle_list_of_derived() had
exactly the same implementations.
- Adding a new method LEX::handle_list_of_derived() instead
- Removing public function mysql_handle_list_of_derived()
- Reusing LEX::handle_list_of_derived() in st_select_lex::handle_derived()
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event exceeds its
vanilla byte representation by a factor of 4/3.
When a binlogged event's size is about 1GB, mysqlbinlog generates
a BINLOG query that can't be sent out due to its size.
It is fixed by fragmenting the BINLOG argument C-string into
(approximate) halves when the base64-encoded event is over 1GB in size.
In such a case mysqlbinlog puts out
SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;
to represent a big BINLOG.
For prompt memory release, the BINLOG handler is made to reset the BINLOG
argument user variables in the middle of processing, as if
@binlog_fragment_{0,1} = NULL were assigned.
Notice that the 2 fragments are enough, though the client and server may still
need to tweak their @@max_allowed_packet to accommodate the fragment
size (which they would have to do anyway with a greater number of
fragments, should that be desired).
On the lower level the following changes are made:
Log_event::print_base64()
  still calls the encoder and stores the encoded data into a cache, but
  now *without* doing any formatting. The latter is left for the time
  when the cache is copied to an output file (e.g. the mysqlbinlog output).
  The no-formatting behavior is also reflected by the change in the meaning
  of the last argument, which now specifies whether to cache the encoded data.
Rows_log_event::print_helper()
  is made to invoke a specialized fragmenting cache-to-file copying function,
  copy_cache_to_file_wrapped(),
  which takes care of fragmenting and optionally wraps the encoded
  strings (fragments) into SQL stanzas.
my_b_copy_to_file()
  is refactored into my_b_copy_all_to_file(). The former function
  is generalized to accept a limit argument constraining the copying, and
  no longer reinitializes the cache into reading mode.
  The limit has no effect on a fully read cache.
The test and also rpl_gtid_delete_domain failed on the PPC64 platform
due to an incorrectly specified actual key for searching
in the gtid domain system hash. While the correct size is 32 bits,
the supplied value was 8 bytes, the size of a long int on that platform.
The problem became evident thanks to big endianness, which
cut off the *least* significant part of the value field.
Fixed by correcting the dynamic array initialization to hold
uint32 values, as well as the value extraction for
searching in the gtid domain system hash.
A newly added test ensures no overflowed values are accepted
for deletion, which prevents inadvertent actions. Notice though:
MariaDB [test]> set @@session.gtid_domain_id=(1 << 32) + 1;
MariaDB [test]> show warnings;
+---------+------+--------------------------------------------------------+
| Level | Code | Message |
+---------+------+--------------------------------------------------------+
| Warning | 1292 | Truncated incorrect gtid_domain_id value: '4294967297' |
+---------+------+--------------------------------------------------------+
MariaDB [test]> select @@session.gtid_domain_id;
+--------------------------+
| @@session.gtid_domain_id |
+--------------------------+
| 4294967295 |
+--------------------------+
This patch fixes a serious flaw in the implementation of common table
expressions. Before this patch an attempt to prepare a statement from
a query with a parameter marker in a CTE that was used more than once
in the query ended up with a bogus error message. Similarly, if a statement
in a stored procedure contained a CTE whose specification used
local variables and this CTE was referred to more than once in the
statement, then the server failed to execute the stored procedure, returning
a bogus error message about a non-existing field.
The problems appeared due to incorrect handling of parameter markers /
local variables in CTEs that were referred to more than once.
This patch fixes the problems by differentiating between the original
occurrences of a parameter marker / local variable used in the
specification of a CTE and the corresponding occurrences used
in copies of this specification. These copies are substituted
for the non-first references to the CTE.
The idea of the fix and even some code were taken from the MySQL
implementation of the common table expressions.
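A minimal sketch of the first scenario (the table, data and names are
hypothetical): a prepared statement with a parameter marker in a CTE that is
referred to twice.
  CREATE TABLE t1 (a INT);
  PREPARE stmt FROM
    'WITH cte AS (SELECT a FROM t1 WHERE a > ?)
     SELECT * FROM cte AS r1, cte AS r2 WHERE r1.a = r2.a';
  SET @val = 1;
  EXECUTE stmt USING @val;  -- formerly ended up with a bogus error message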
Before this patch, if no default database was set, the server threw
an error for any table name reference that was not fully qualified by
a database name. In particular it happened for table names that referenced
CTE tables. This was incorrect.
The error message was thrown at the parser stage, when the names referencing
different tables had not been resolved yet.
Now if no default database is set and a WITH clause is used in the
processed statement, any table reference is just supplied with a dummy
database name "*none*" at the parser stage. Later, after a call
of check_dependencies_in_with_clauses(), when the names for CTE tables
can be resolved, error messages are thrown only for those names that
refer to non-CTE tables. This is done in open_and_process_table().
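For example (a hypothetical session where no USE <db> has been issued):
  WITH cte AS (SELECT 1 AS a) SELECT a FROM cte;
  -- formerly: ERROR 1046 (3D000): No database selected
  -- now: the name resolves to the CTE and the query succeeds, while an
  -- unqualified reference to a genuine non-CTE table still fails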
Condition pushdown into a materialized derived table/view that uses
aliases was done incorrectly.
The problem appears when a column alias inside the materialized derived
table/view t1 definition coincides with the column name used in the
GROUP BY clause of t1. If a condition that can be pushed into t1
uses that ambiguous column name, the column is resolved as the column
used in the GROUP BY clause instead of the alias used in the projection
list of t1. That causes a wrong result.
To prevent this, resolve_ref_in_select_and_group() was changed.
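A hypothetical query of the affected shape: the alias f2 in the derived
table shadows the base column f2 used in its GROUP BY clause, and the
pushed-down condition dt.f2 > 3 must bind to the alias (the projected f1
values), not to the GROUP BY column.
  CREATE TABLE t1 (f1 INT, f2 INT);
  SELECT * FROM
    (SELECT f1 AS f2, COUNT(*) AS c FROM t1 GROUP BY f2) AS dt
  WHERE dt.f2 > 3;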
The current code does not support recursive CTEs whose specifications
contain a mix of UNION ALL and UNION DISTINCT operations.
This patch catches such specifications and reports errors for them.
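A hypothetical specification of the kind that is now rejected, mixing
UNION ALL and UNION DISTINCT within one recursive CTE:
  WITH RECURSIVE cte AS (
    SELECT 1 AS a
    UNION ALL
    SELECT a + 1 FROM cte WHERE a < 10
    UNION
    SELECT a + 2 FROM cte WHERE a < 10
  )
  SELECT * FROM cte;  -- now reported as an error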
In this issue we hit the assert because we are adding additional fields to the
JOIN::all_fields list. This is done because HEAP tables can't index BIT fields,
so we need to use an additional hidden field for grouping, because later it
will be converted to a LONG field. The original field will remain of the BIT
type and will be returned. This happens when we convert DISTINCT to GROUP BY.
The solution is to take into account the number of such hidden fields that
would be added to the JOIN::all_fields list while calculating the size of the
ref_pointer_array.
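A minimal sketch of a query that exercises this path (table and column names
are hypothetical): DISTINCT over a BIT column is internally converted to
GROUP BY, and grouping in a HEAP temporary table requires the hidden field
described above.
  CREATE TABLE t1 (b BIT(8), i INT);
  INSERT INTO t1 VALUES (b'00000001', 1), (b'00000001', 1);
  SELECT DISTINCT b, i FROM t1;  -- DISTINCT is converted to GROUP BY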
The counter for select numbering is now stored with the statement (before it
was global). So it now always has an accurate value which does not depend on
interruption of statement preparation by errors like a missing table in
a view definition.
TRASH was mapped to TRASH_FREE and was supposed to be used for memory
that should not be accessed anymore, while TRASH_ALLOC() is to be
used for uninitialized but to-be-used memory.
But sometimes TRASH() was used in the latter sense.
Remove TRASH() macro, always use explicit TRASH_ALLOC() or TRASH_FREE().
Issue:
------
VALUES() doesn't have a type() function and is considered an
Item_field.
Solution for 5.7:
-----------------
Add a new type() function for Item_values_insert.
On 8.0 and trunk it was fixed by Mithun's Bug#19601973.
Solution for 5.6:
-----------------
Additionally Bug#17458914 is backported.
This will address the problem of using VALUES() in
INSERT ... ON DUPLICATE KEY UPDATE. Create a field object
only if it is in the UPDATE clause, else return a NULL
item.
This will also address the problems mentioned in
Bug#14789787 and Bug#16756402.
Solution for 5.5:
-----------------
As mentioned above Bug#17458914 is backported.
Additionally Bug#14786324 is also backported.
When VALUES() is detected outside its meaningful place,
it should be treated as NULL and is thus replaced with a
Field_null object, with the same name as the original
field.
Fields with type NULL are generally not handled well inside
the server (e.g. InnoDB will not accept them and it is
impossible to create them in regular tables). So create a
new const NULL item instead.
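To illustrate both behaviors (the table and values are hypothetical):
  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  INSERT INTO t1 VALUES (1, 10);
  -- meaningful place: VALUES(b) is the value that would have been inserted
  INSERT INTO t1 VALUES (1, 20)
    ON DUPLICATE KEY UPDATE b = VALUES(b) + 1;  -- b becomes 21
  -- outside that place VALUES() is treated as NULL
  SELECT VALUES(b) FROM t1;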
As reported in MDEV-11969, "there's no way to ditch knowledge" about some
domain that is no longer updated on a server. Besides cluttering the output
in the DBA console, stale domains can prevent the slave
from connecting to the master, as MDEV-12012 witnesses.
Which domain is obsolete must be evaluated by the user (DBA) according
to whether the domain info is still relevant and whether the domain will
ever receive any update.
This patch introduces a method to discard obsolete gtid domains from
the server binlog state. The removal requires that no event group from such
a domain be present in the existing binlog files though. If there are any,
the containing logs must first be PURGEd in order for
FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
to succeed. Otherwise the command returns an error.
The list of obsolete domains can be computed through
intersecting two sets - the earliest (first) binlog's Gtid_list
and the current value of @@global.gtid_binlog_state - and extracting
the domain id components from the intersection list items.
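A possible DBA workflow (binlog file names and domain ids here are
hypothetical):
  SHOW BINARY LOGS;                                    -- find the earliest log
  SHOW BINLOG EVENTS IN 'master-bin.000001' LIMIT 2;   -- its Gtid_list
  SELECT @@global.gtid_binlog_state;
  -- having decided that domains 11 and 12 are obsolete, purge any logs
  -- that still contain their event groups, then delete the domains:
  PURGE BINARY LOGS TO 'master-bin.000042';
  FLUSH BINARY LOGS DELETE_DOMAIN_ID=(11,12);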
The new DELETE_DOMAIN_ID-featured FLUSH continues to rotate the binlog,
omitting the deleted domains from the active binlog file's Gtid_list.
Notice though that when the command is ineffective - none of the domains
requested for deletion exists in the binlog state - rotation does not occur.
Obsolete domain deletion is not harmful to connected slaves as long as the
master-side binlog file *purge* is synchronized with FLUSH-DELETE_DOMAIN_ID.
The slaves must have processed the last event from the purged files as usual,
in order not to bump later into requesting a gtid from a file which
is already gone.
While the command is not replicated (unlike ordinary FLUSH BINARY LOGS),
slaves, even though they have extra domains, won't suffer from reconnection
errors, thanks to the master-slave gtid connection protocol allowing the
master to be ignorant about a gtid domain.
Should such a slave be promoted to the master role at failover, it may run
the ex-master's
FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
to clean its own binlog state.
NOTES.
suite/perfschema/r/start_server_low_digest.result
is re-recorded as a consequence of internal parser code changes.
A reference to a CTE may occur in a select that is not the master of the CTE
specification. In this case, if the reference to the CTE is
the first one, the specification should be detached from its
master and attached to the referencing select.
Also fixed the TYPE column in the lines of the EXPLAIN output
created for CTE tables.
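A hypothetical example of such a reference: the CTE is defined at the top
level, while its first (and only) reference occurs in a subordinate select
that is not the master of the specification.
  WITH cte AS (SELECT 1 AS a)
  SELECT * FROM (SELECT a FROM cte) AS dt;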
This patch fixes a serious flaw in the
code that supports condition pushdown into
materialized views / derived tables.
If a predicate happened to contain a reference
to a mergeable view / derived table and it did
not depend directly on the target materialized
view / derived table, then the predicate was not
considered as a subject for pushdown to this view
/ derived table.
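A hypothetical query shape affected by the flaw: v2 is mergeable, v1 is
materialized, and the predicate references v2 rather than v1 directly.
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE VIEW v2 AS SELECT a FROM t2;                    -- mergeable
  CREATE VIEW v1 AS SELECT a, COUNT(*) AS c
                    FROM t1 GROUP BY a;                  -- materialized
  SELECT * FROM v1 JOIN v2 ON v1.a = v2.a WHERE v2.a > 10;
  -- via the equality v1.a = v2.a the condition v1.a > 10 should be
  -- pushed into v1; formerly the reference to v2 prevented that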
This is another attempt to fix the bug mdev-12992.
This patch introduces st_select_lex::context_analysis_place for
the place in SELECT where context analysis is currently performed.
It's similar to st_select_lex::parsing_place, but it is used at
the preparation stage.
Significantly reduce the amount of InnoDB, XtraDB and Mariabackup
code changes by defining pfs_os_file_t as something that is
transparently compatible with os_file_t.