Code in QUICK_RANGE_SELECT::init_ror_merged_scan() could theoretically
have caused crashes if it was ever called from an UPDATE or DELETE statement.
This also uncovered a bug in the vcol/range.result file.
Locked_tables_list::unlock_locked_tables
Similarly to regular DROP TABLE, don't leave locked tables mode if CREATE OR
REPLACE dropped a temporary table but failed to create the new one.
The problem is that there is no record of which temporary table was "locked" by
LOCK TABLES.
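A minimal sketch of the affected scenario, assuming a hypothetical base table
t1 and temporary table t2 (not the actual test case):

  CREATE TABLE t1 (a INT);
  CREATE TEMPORARY TABLE t2 (a INT);
  LOCK TABLES t1 WRITE, t2 WRITE;
  -- CREATE OR REPLACE drops the temporary t2 first; if creating the
  -- replacement then fails, the session must stay in LOCK TABLES mode.
  CREATE OR REPLACE TEMPORARY TABLE t2 SELECT no_such_column FROM t1;
  UNLOCK TABLES;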
Window definitions are resolved during the fix_fields phase. Updating used
tables for window functions must be done after all window functions have had a
chance to be resolved.
There was an additional problem with the implementation: expressions that
contained window functions never updated the expression's used tables.
To fix both these issues, make sure to call "update_used_tables" on all
items that contain window functions after we have passed through all
items.
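A hypothetical example of such an expression (table and column names are made
up): the addition below contains a window function, so the expression's used
tables can only be finalized after all window functions have been resolved:

  CREATE TABLE t1 (a INT, b INT);
  SELECT a,
         b + ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) AS rn
  FROM t1;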
for a query that uses a CTE
The first reference to a CTE in the processed query uses the unit
built by the parser for the CTE specification. This unit is considered
the specification of the derived table created for the first reference
to the CTE. This requires some transformation of the original query
tree: the unit of the specification must be moved to a new position as
a slave of the select where the first reference to the CTE occurs. The
transformation is performed by the function
st_select_lex_node::move_as_slave(). There was an obvious bug in this
function; as a result, in many cases the moved unit turned out to be
lost in the query tree, which could cause various problems. In
particular, prepared statements for queries that used CTEs could miss,
for some selects, the cleanup that is performed at the end of the
preparation/execution of the PS. If such cleanup is not done for a PS,
the next execution of the PS causes an assertion abort or a crash.
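A hedged illustration of the affected pattern, i.e. re-executing a prepared
statement whose query uses a CTE (names are illustrative, not the actual
failing query):

  CREATE TABLE t1 (a INT);
  PREPARE stmt FROM 'WITH cte AS (SELECT a FROM t1) SELECT * FROM cte';
  EXECUTE stmt;            -- first execution
  EXECUTE stmt;            -- re-execution hit the missing-cleanup problem
  DEALLOCATE PREPARE stmt;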
don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call,
do it once in open() (because they don't change) on TABLE::mem_root
(so they stay valid until the table is closed)
PKG_CONFIG does not really work on Windows: Strawberry Perl's pkg-config uses
MinGW libraries, which the VS compiler cannot use.
BOOST is not used.
Tests main.query_cache_debug and main.mdev-504 timed out on the debug build
at 2 minutes, so increase the timeout to 4 minutes.
Overall build time was 30 min 44 seconds, so there is currently plenty of headroom.
Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
Problem: GTIDs are not transferred in Galera Cluster.
Solution: We need to transfer the GTID when the cluster is a slave or master
in async replication. In normal GTID replication the GTID is generated on the
receiving node itself and is always in sync with the other nodes, because
Galera keeps the nodes in sync, so all nodes get the same number of event
groups. The issue arises when, say, Galera is a slave in async replication:
A
| (Async replication)
D <-> E <-> F {Galera replication}
What should happen is that all nodes apply the master's GTID, but this does
not happen, because nodes E and F do not receive the GTID from D in the write
set. Instead, E (or F) builds a GTID from wsrep_gtid_domain_id, D's server_id
and E's next GTID sequence number. This generated GTID does not always work,
for example when A has a different domain id.
With this commit, when a Galera node sees that an event was received from the
async master, it simply writes a Gtid_Log_Event into the write set and sends
it to the other nodes.
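A rough way to observe the intended outcome, assuming the topology above (the
exact positions are illustrative):

  -- Run on each Galera node (D, E, F); after the fix the binlog GTID
  -- position should carry the async master's GTID (with A's domain id)
  -- instead of a locally generated one.
  SELECT @@gtid_binlog_pos;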
- Make my.cnf include rpl_1slave_base.cnf (needed for tests that
  actually use replication, i.e. need a functioning slave)
- Adjust and enable singledelete_idempotent_table.test
- More edits in disabled.def
A suggestion from serg@mariadb.org to make role propagation simpler.
Instead of gathering the leaf roles in an array, which for very wide
graphs could potentially mean a big part of the whole roles schema, keep
the previous logic. When finally merging a role, set its counter
to something positive.
This effectively marks the role as merged, so a random pass through the
roles hash that touches a previously merged role no longer causes the problem
described in MDEV-12366, as propagate_role_grants_action stops attempting to
merge from that role.
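A small hypothetical role graph, just to picture the propagation (role and
database names are made up):

  CREATE ROLE r_leaf1;
  CREATE ROLE r_leaf2;
  CREATE ROLE r_mid;
  CREATE ROLE r_top;
  GRANT r_leaf1 TO r_mid;
  GRANT r_leaf2 TO r_mid;
  GRANT r_mid TO r_top;
  -- Granting a privilege to a leaf triggers propagation up the graph;
  -- once a role's grants are merged its counter is set to a positive
  -- value, so later passes do not attempt to merge from it again.
  GRANT SELECT ON db1.* TO r_leaf1;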
Sometimes, the test would fail with a result difference for
the READ UNCOMMITTED read, because the incremental backup
would finish before the redo log was written for all the rows
that were inserted in the second batch.
To fix that, cause a redo log write by creating another
transaction. The transaction rollback (which internally does commit)
will be flushed to the redo log, and before that, all the preceding
changes will be flushed to the redo log as well.
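A sketch of such an extra transaction, assuming an illustrative table t2 (not
necessarily the one used in the test):

  BEGIN;
  INSERT INTO t2 VALUES (1);
  -- The rollback itself is written to the redo log, and everything
  -- preceding it is flushed to the redo log before it.
  ROLLBACK;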
row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
These assertions were disabled in MariaDB 10.1.1 in
commit df4dd593f2
with a bogus comment referring to the function wsrep_fake_trx_id()
that was introduced in the very same commit.
Problem:
The command was:
find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+
which was supposed to work as follows:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes, don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)
Now, -exec ... \+ works like this:
every newly found path is appended to the end of the command line;
when the accumulated command line length reaches `getconf ARG_MAX` (~2MB)
it is executed, and find continues, appending to a new command line.
What happens here is that find appends some directory to the command line,
then dives into it and starts appending files from that directory.
At some point the command line overflows, rm -rf gets executed and removes
the whole directory. Then find tries to continue scanning the directory
that was already removed.
Fix: don't dive into directories that will be recursively removed
anyway, use -prune for them. Basically, we should be pruning both paths
that have matched $cpat and paths that have not matched it. This is
achieved by pruning unconditionally, before the regex is tested:
find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+
Patch credit: Serg