(from 10.1 to 10.0-galera)
This conflicted significantly with 7d550c76be
which added --defaults-group-suffix support.
Took the approach of 4bb49d84a9 and adapted the
--defaults-group-suffix handling to be consistent.
The changes are as follows:
SST scripts now use $MY_PRINT_DEFAULTS rather than the lowercase form, for
consistency, and this includes all required --defaults arguments.
Backport/merge by Daniel Black <daniel@linux.vnet.ibm.com>
The problem occurred in the following scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for the unlock
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the query to be unlocked
T1 a) - has not yet unlocked the query in the QC and crashes on the attempt to
unlock it, because the QC signals have been destroyed
   b) - if the above happened before the destruction, it executes end_of_result
the first time at exit, after a try_lock() that sees the QC disabled and
returns TRUE. But it does not reset query_cache_tls->first_query_block,
which leads to a second call of end_of_result() when the diagnostics area
already has an inappropriate status (not is_eof()).
The fix is:
1) wait for all queries to be unlocked before destroying the signals,
by locking and unlocking the QC
2) clear query_cache_tls->first_query_block if the QC is disabled
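A minimal sketch of the two parts of the fix, with hypothetical helper
names (this illustrates the locking pattern, not the actual Query_cache
code):

    /* 1) Destruction side: taking and releasing the lock guarantees
       that a query still registering (T1) has left the critical
       section before the synchronization objects are destroyed. */
    void disable_query_cache()
    {
      lock_query_cache();             /* waits for T1 to unlock     */
      cache_disabled= true;           /* new try_lock() calls fail  */
      unlock_query_cache();
      destroy_query_cache_signals();  /* safe: nobody can hold them */
    }

    /* 2) Query side: when try_lock() finds the cache disabled, also
       clear the per-thread pointer, so end_of_result() is not run a
       second time on a diagnostics area that is no longer is_eof(). */
    if (cache_disabled)
      thd->query_cache_tls.first_query_block= NULL;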
with joins, SQ, ORDER BY, semijoin=on
A bug in get_sort_by_table() could mislead the function
setup_semijoin_dups_elimination(). As a result the optimizer
could produce invalid execution plans for queries with ORDER BY
and subquery predicates that could be converted to semi-joins.
Remove non-prepared FT functions (those belonging to removed clauses) from the list.
In a later version this will be fixed by building the list during preparation.
This bug happens when locking the same Aria "transactional" table
(page format) more than once with LOCK TABLES and inserting into one
of them with INSERT ... SELECT when the table is empty.
Fixed by ensuring we don't use fast bulk insert if table is opened
twice with LOCK TABLES (as this changes table->s->state)
Code changes:
- Added use_count to MARIA_USED_TABLES to be able to check if a
  table is opened twice for a statement/LOCK TABLES
- Don't clear the history or reset info->start_state if we
  don't have versioning. One reason for the bug was
  that info->start_state was set to point to different
  states for the two tables. If there is no versioning,
  info->start_state should always point to info->s->state.common.
Other things:
- Also fixed some typos that were noticed while scanning the code
- More DBUG_PRINT
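A rough sketch of the bookkeeping; only the use_count field and the
state-sharing rule come from the change itself, the surrounding code is
invented for illustration:

    struct MARIA_USED_TABLES
    {
      uint use_count;   /* how many times this share is used by the
                           current statement / LOCK TABLES */
      /* ... */
    };

    /* Fast bulk insert assumes exclusive ownership of
       table->s->state, which does not hold when the same table is
       locked twice. */
    static bool can_use_fast_bulk_insert(const MARIA_USED_TABLES *used)
    {
      return used->use_count == 1;
    }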
The warning was originally added in
commit c67663054a
(MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
was analyzed in https://lists.mysql.com/mysql/176250
on November 9, 2004.
Originally, the limit was 20,000 undo log headers or transactions,
but in commit 9d6d1902e0
in MySQL 5.5.11 it was increased to 2,000,000.
The message can be triggered when the progress of purge is prevented
by a long-running transaction (or just an idle transaction whose
read view was started a long time ago), by running many transactions
that UPDATE or DELETE some records, then starting another transaction
with a read view, and finally by executing more than 2,000,000
transactions that UPDATE or DELETE records in InnoDB tables. Finally,
when the oldest long-running transaction is completed, purge would
run up to the next-oldest transaction, and there would still be more
than 2,000,000 transactions to purge.
Because the message can be triggered when the database is obviously
not corrupted, it should be removed. Heavy users of InnoDB should be
monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
there is no need to spam the error log.
The XtraDB option innodb_track_changed_pages causes
the function log_group_read_log_seg() to be invoked
even when recv_sys==NULL, leading to the SIGSEGV.
This regression was caused by
MDEV-11027 InnoDB log recovery is too noisy
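The crash suggests a guard along these lines (a sketch only; the actual
patch may be structured differently):

    /* log_group_read_log_seg() is also reached from the XtraDB
       changed-page tracker, outside of crash recovery.  Anything
       that reports through recv_sys must be skipped in that case. */
    if (recv_sys != NULL) {
      /* ... report invalid log blocks via recv_sys ... */
    }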
innodb/buf_LRU_get_free_block
Add debug instrumentation to produce an error message when no
free pages are available. Print the error message only once and
do not enable the InnoDB monitor.
xtradb/buf_LRU_get_free_block
Add debug instrumentation to produce an error message when no
free pages are available. Print the error message only once and
do not enable the InnoDB monitor. Remove code that does not seem
to be used.
innodb-lru-force-no-free-page.test
New test case that forces the desired error message to be produced.
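The "only once" part can be done with a static flag, roughly as follows
(a sketch; the message text and the trigger condition are illustrative):

    static bool buf_lru_no_free_page_warned = false;

    if (no_free_page && !buf_lru_no_free_page_warned) {
      buf_lru_no_free_page_warned = true;  /* warn only once */
      ib_logf(IB_LOG_LEVEL_ERROR,
              "No free pages available in the buffer pool;"
              " consider increasing innodb_buffer_pool_size.");
      /* intentionally do not enable the InnoDB monitor here */
    }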
dict_foreign_find_index(): Ignore incompletely created indexes.
After a failed ADD UNIQUE INDEX, an incompletely created index
could be left behind until the next ALTER TABLE statement.
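A sketch of the filter; in 10.1 an incompletely created index can be
recognized by its TEMP_INDEX_PREFIX name, but treat the exact condition
as an assumption:

    for (dict_index_t* index = dict_table_get_first_index(table);
         index != NULL;
         index = dict_table_get_next_index(index)) {
      if (*index->name == TEMP_INDEX_PREFIX) {
        /* leftover of a failed ADD UNIQUE INDEX; not usable */
        continue;
      }
      /* ... existing column-matching logic ... */
    }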
This bug affects both writing and reading encrypted redo log in
MariaDB 10.1, starting from version 10.1.3 which added support for
innodb_encrypt_log. That is, InnoDB crash recovery and Mariabackup
will sometimes fail when innodb_encrypt_log is used.
MariaDB 10.2 or Mariabackup 10.2 or later versions are not affected.
log_block_get_start_lsn(): Remove. This function would cause trouble
when a log segment being read crosses a 32-bit boundary of the LSN,
because it does not allow the most significant 32 bits of the LSN
to change.
log_blocks_crypt(), log_encrypt_before_write(), log_decrypt_after_read():
Add the parameter "lsn" for the start LSN of the block.
log_blocks_encrypt(): Remove (unused function).
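For reference, the removed helper worked roughly like this
(reconstructed for illustration); the fixed code instead passes the true
start LSN down to the functions listed above:

    static lsn_t log_block_get_start_lsn(lsn_t ref_lsn, ulint block_no)
    {
      /* The upper 32 bits of ref_lsn are reused unchanged, so the
         result is wrong for any block on the far side of a 2^32
         LSN boundary. */
      return (ref_lsn & ~(lsn_t) 0xffffffffU)
             | ((lsn_t) ((block_no - 1) & 0x3fffffffU) << 9);
    }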
2cd3169113 broke conf_to_src, because the strings library now depends
on mysys (my_alloc etc. are now used directly in the string library).
Fix by adding the appropriate dependency.
Also exclude conf_to_src from VS IDE builds; EXCLUDE_FROM_ALL
is not enough for that.
debug_key_management
encrypt_and_grep
innodb_encryption
If the real table count differs from what the test expects, the test
just hangs, waiting for the hardcoded number to be reached, and then
exits with **failed** after a 10-minute wait: quite unfriendly and
hard to figure out what is going on.
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a
column prefix of an externally stored column, the already parsed
part of the undo log record may contain a reference to
an off-page column. This is the case in the bug58912 test in
innodb.innodb.
This is a regression caused by MDEV-14051 'Undo log record is too big.'
Purge in the secondary index is wrongly skipped in
row_purge_upd_exist_or_extern() because node->row does not contain
all indexed columns.
trx_undo_rec_get_partial_row(): Add the parameter for node->update
so that the updated columns will be copied from the initial part
of the undo log record.
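Roughly, the new signature (comments and parameter placement are an
approximation):

    byte*
    trx_undo_rec_get_partial_row(
        const byte*    ptr,     /* in: remainder of the undo record */
        dict_index_t*  index,   /* in: clustered index */
        const upd_t*   update,  /* in: updated columns, parsed from
                                   the initial part of the undo log
                                   record; copied into the row */
        dtuple_t**     row,     /* out: partial row */
        ibool          ignore_prefix,
        mem_heap_t*    heap);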
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
If a translation table is present when we materialize the derived
table, then change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
don't allocate them on THD::mem_root on every init(HA_STATUS_CONST)
call; do it once in open() (because they don't change), on
TABLE::mem_root (so they stay valid until the table is closed)
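A minimal sketch of the lifetime change, with a hypothetical handler
member; only the choice of mem_root comes from the change itself:

    int ha_example::open(const char *name, int mode, uint test_if_locked)
    {
      /* Constant statistics: allocated once on the TABLE's root, so
         they stay valid until the table is closed, instead of being
         re-allocated on THD::mem_root at every info(HA_STATUS_CONST)
         call. */
      cached_rec_per_key= (ulong*) alloc_root(&table->mem_root,
                                              sizeof(ulong) * n_key_parts);
      /* ... */
      return 0;
    }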
Problem: GTIDs are not transferred in Galera Cluster.
Solution: We need to transfer the GTID when the cluster is either slave or
master in asynchronous replication. In normal GTID replication, GTIDs are
generated on the receiving node itself, and that node is always in sync with
the other nodes: because Galera keeps the nodes in sync, all nodes get the
same number of event groups. So the issue arises when, say, Galera is the
slave in asynchronous replication:
A
| (Async replication)
D <-> E <-> F {Galera replication}
What should happen is that all nodes apply the master's GTID, but this
does not happen, because nodes E and F do not receive the GTID from D
in the write set. What E (or F) does instead is apply a GTID composed of
wsrep_gtid_domain_id, D's server-id, and E's next GTID sequence number.
This generated GTID does not always work, for example when A has a
different domain id. So in this commit, when a Galera node sees that an
event was received from the master, it simply writes a Gtid_Log_Event
into the write set and sends it to the other nodes.
A suggestion from serg@mariadb.org to make role propagation simpler.
Instead of gathering the leaf roles in an array, which for very wide
graphs could potentially mean a big part of the whole roles schema, keep
the previous logic. When finally merging a role, set its counter
to something positive.
This effectively means that the role has been merged; thus a random pass
through the roles hash that touches a previously merged role won't cause
the problem described in MDEV-12366 any more, as
propagate_role_grants_action will stop attempting to merge from that role.
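A sketch of the marker; the function body is illustrative, the counter
field is the one named above:

    static int merge_role_privileges_sketch(ACL_ROLE *role)
    {
      /* ... merge privileges from the roles granted to this role ... */
      role->counter= 1;  /* positive: already merged, so a later pass
                            through the roles hash stops here */
      return 0;
    }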
Sometimes, the test would fail with a result difference for
the READ UNCOMMITTED read, because the incremental backup
would finish before redo log was written for all the rows
that were inserted in the second batch.
To fix that, cause a redo log write by creating another
transaction. The transaction rollback (which internally does commit)
will be flushed to the redo log, and before that, all the preceding
changes will be flushed to the redo log as well.
row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
These assertions were disabled in MariaDB 10.1.1 in
commit df4dd593f2
with a bogus comment referring to the function wsrep_fake_trx_id()
that was introduced in the very same commit.
Problem:
The command was:
find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+
which was supposed to work as follows:
* skipping $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes - don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)
Now, -exec ... \+ works like this:
every newly found path is appended to the end of the command line.
When the accumulated command line length reaches `getconf ARG_MAX` (~2Gb),
it is executed, and find continues, appending to a new command line.
What happens here is that find appends some directory to the command
line, then dives into it and starts appending files from that directory.
At some point the command line overflows, rm -rf gets executed and
removes the whole directory. Now find tries to continue scanning a
directory that has already been removed.
Fix: don't dive into directories that will be recursively removed
anyway; use -prune for them. Basically, we should be pruning both paths
that match $cpat and paths that do not match it. This is
achieved by pruning unconditionally, before the regex is tested:
find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+
Patch credit: Serg