The warning was originally added in
commit c67663054a
(MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
was analyzed in https://lists.mysql.com/mysql/176250
on November 9, 2004.
Originally, the limit was 20,000 undo log headers or transactions,
but in commit 9d6d1902e0
in MySQL 5.5.11 it was increased to 2,000,000.
The message can be triggered when the progress of purge is prevented
by a long-running transaction (or just an idle transaction whose
read view was started a long time ago): run many transactions
that UPDATE or DELETE some records, then start another transaction
with a read view, and then execute more than 2,000,000
transactions that UPDATE or DELETE records in InnoDB tables. When
the oldest long-running transaction finally completes, purge would
run up to the next-oldest transaction, and there would still be more
than 2,000,000 transactions to purge.
Because the message can be triggered when the database is obviously
not corrupted, it should be removed. Heavy users of InnoDB should be
monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
there is no need to spam the error log.
The XtraDB option innodb_track_changed_pages causes
the function log_group_read_log_seg() to be invoked
even when recv_sys==NULL, leading to the SIGSEGV.
This regression was caused by
MDEV-11027 InnoDB log recovery is too noisy
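For illustration, a minimal sketch of the kind of guard that is needed
when the changed-page tracker invokes the log read path outside of crash
recovery. This is not the actual MariaDB code; recovery_state, recv_state
and read_log_segment() are simplified stand-ins for the real types and
functions:

#include <cstdint>

// Stand-in for recv_sys_t; NULL whenever crash recovery is not running,
// e.g. when the XtraDB changed-page tracker reads the redo log on its own.
struct recovery_state { uint64_t scanned_lsn; };
static recovery_state* recv_state = nullptr;

static void read_log_segment(uint64_t start_lsn)
{
    /* ... read the log block from disk here ... */

    /* The regression dereferenced the recovery state unconditionally.
    Guarding it keeps the changed-page tracker from crashing: */
    if (recv_state != nullptr) {
        recv_state->scanned_lsn = start_lsn; /* recovery is running */
    }
}

int main()
{
    read_log_segment(1234); /* no SIGSEGV even though recv_state == NULL */
    return 0;
}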
innodb/buf_LRU_get_free_block
Add debug instrumentation to produce an error message about
no free pages. Print the error message only once (see the
print-once sketch below) and do not enable the InnoDB monitor.
xtradb/buf_LRU_get_free_block
Add the same debug instrumentation to produce an error message about
no free pages. Print the error message only once, do not
enable the InnoDB monitor, and remove code that does not seem to
be used.
innodb-lru-force-no-free-page.test
New test case to force the desired error message to be produced.
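A minimal print-once sketch (illustrative only; the actual
buf_LRU_get_free_block() instrumentation in MariaDB differs) showing how
the condition can be reported a single time instead of flooding the error
log:

#include <atomic>
#include <cstdio>

static std::atomic<bool> no_free_pages_reported{false};

static void report_no_free_pages(unsigned n_iterations)
{
    // Only the first caller prints; later callers find the flag already
    // set, and the InnoDB monitor is never enabled.
    bool expected = false;
    if (no_free_pages_reported.compare_exchange_strong(expected, true)) {
        std::fprintf(stderr,
                     "InnoDB: Error: no free pages after %u iterations\n",
                     n_iterations);
    }
}

int main()
{
    for (unsigned i = 0; i < 5; i++) {
        report_no_free_pages(i);   // prints exactly once
    }
    return 0;
}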
dict_foreign_find_index(): Ignore incompletely created indexes.
After a failed ADD UNIQUE INDEX, an incompletely created index
could be left behind until the next ALTER TABLE statement.
This bug affects both writing and reading the encrypted redo log in
MariaDB 10.1, starting from version 10.1.3, which added support for
innodb_encrypt_log. That is, InnoDB crash recovery and Mariabackup
will sometimes fail when innodb_encrypt_log is used.
MariaDB 10.2, Mariabackup 10.2, and later versions are not affected.
log_block_get_start_lsn(): Remove. This function would cause trouble if
the log segment being read crosses a 32-bit boundary of the LSN,
because the function does not allow the most significant 32 bits of the
LSN to change.
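To illustrate the 32-bit boundary problem, here is a small, self-contained
sketch (the helper name is hypothetical, not the removed function itself):
reconstructing a block's start LSN by keeping the most significant 32 bits
of a reference LSN fixed loses the carry as soon as the segment crosses a
2^32 boundary, which is why the callers now receive the start LSN
explicitly:

#include <cstdint>
#include <cstdio>

// Hypothetical reconstruction: high 32 bits from a reference LSN, low 32
// bits derived from the block itself. This is the pattern that breaks.
static uint64_t start_lsn_keeping_high_bits(uint64_t ref_lsn, uint32_t low_bits)
{
    return (ref_lsn & 0xFFFFFFFF00000000ULL) | low_bits;
}

int main()
{
    const uint64_t ref_lsn   = 0x00000000FFFFFE00ULL; // just below a 2^32 boundary
    const uint64_t block_lsn = 0x0000000100000200ULL; // a block past the boundary

    uint64_t wrong = start_lsn_keeping_high_bits(ref_lsn, (uint32_t) block_lsn);
    std::printf("expected %llx, got %llx\n",
                (unsigned long long) block_lsn, (unsigned long long) wrong);
    // Passing the actual start LSN (the new "lsn" parameter) avoids the
    // reconstruction entirely.
    return 0;
}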
log_blocks_crypt(), log_encrypt_before_write(), log_decrypt_after_read():
Add the parameter "lsn" for the start LSN of the block.
log_blocks_encrypt(): Remove (unused function).
2cd3169113 broke conf_to_src,
because the strings library is now dependent on mysys (my_alloc etc. are
now used directly in the string library).
Fix by adding the appropriate dependency.
Also exclude conf_to_src from VS IDE builds; EXCLUDE_FROM_ALL
is not enough for that.
debug_key_management
encrypt_and_grep
innodb_encryption
If the real table count differs from what the test expects, the test
just hangs, waiting for the hardcoded number to be reached, and then exits
with **failed** after 10 minutes of waiting: quite unfriendly and hard to
figure out what's going on.
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a
column prefix of an externally stored column, the already parsed
part of the undo log record may contain a reference to
an off-page column. This is the case in the bug58912 test in
innodb.innodb.
This is a regression caused by MDEV-14051 'Undo log record is too big.'
Purge in the secondary index is wrongly skipped in
row_purge_upd_exist_or_extern() because node->row does not contain all
indexed columns.
trx_undo_rec_get_partial_row(): Add the parameter for node->update
so that the updated columns will be copied from the initial part
of the undo log record.
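As an illustration of that last point, a simplified sketch (the types and
names are hypothetical; the real upd_t/dtuple_t handling in
trx_undo_rec_get_partial_row() differs): the columns whose new values were
already parsed from the update-vector part of the undo record are copied
into the partial row, so purge sees every indexed column:

#include <cstddef>
#include <vector>

struct column_value { std::size_t col_no; long value; bool present; };

// Parsed earlier from the initial (update vector) part of the undo record.
using update_vector = std::vector<column_value>;
// Built from the rest of the undo record; may be missing updated columns.
using partial_row   = std::vector<column_value>;

static void copy_updated_columns(const update_vector& update, partial_row& row)
{
    for (const column_value& field : update) {
        if (field.col_no < row.size()) {
            row[field.col_no] = field;   // the indexed column is now present
        }
    }
}

int main()
{
    partial_row   row(3);                // columns 0..2, all missing
    update_vector update{{1, 42, true}}; // column 1 was updated
    copy_updated_columns(update, row);   // column 1 is filled in
    return 0;
}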
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
If a translation table is present when we materialize the derived table,
then change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call,
do it once in open() (because they don't change) on TABLE::mem_root
(so they stay valid until the table is closed)
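A minimal sketch of the intended allocation pattern (simplified stand-in
types; the real handler, TABLE and MEM_ROOT APIs differ): allocate once in
open() from the table's own root instead of from the THD's root on every
constant-status call:

#include <cstddef>
#include <memory>
#include <vector>

// Stand-in for a MEM_ROOT: everything allocated from it is freed when the
// owner goes away.
struct mem_root_t {
    std::vector<std::unique_ptr<char[]>> blocks;
    char* alloc(std::size_t n)
    { blocks.emplace_back(new char[n]); return blocks.back().get(); }
};

struct table_t   { mem_root_t mem_root; };  // lives until the table is closed
struct session_t { mem_root_t mem_root; };  // reset per statement

struct handler_t {
    table_t* table       = nullptr;
    char*    const_stats = nullptr;          // statistics that never change

    void open(table_t* t) {
        table = t;
        // Allocate once; stays valid for as long as the table is open.
        const_stats = table->mem_root.alloc(64);
    }
    void info_const(session_t&) {
        // Nothing is allocated here any more; just read const_stats.
    }
};

int main()
{
    table_t tab; session_t ses; handler_t h;
    h.open(&tab);
    h.info_const(ses);   // repeated calls no longer grow the session's root
    return 0;
}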
Problem: GTIDs are not transferred in a Galera cluster.
Solution: We need to transfer the GTID when the cluster is either the slave
or the master in async replication. In normal GTID replication, the GTID is
generated on the receiving node itself, and it is always in sync with the
other nodes. Because Galera keeps the nodes in sync, all nodes get the same
number of event groups. So the issue arises when, say, Galera is a slave in
async replication.
A
| (Async replication)
D <-> E <-> F {Galera replication}
What should happen is that all nodes apply the master's GTID, but this does
not happen, because nodes E and F do not receive the GTID from D in the
write set. What E (or F) does instead is apply wsrep_gtid_domain_id, D's
server-id, and E's next GTID sequence number. This generated GTID does not
always work when, say, A has a different domain id.
So in this commit, when a Galera node sees that the event was received from
the master, we simply write a Gtid_Log_Event into the write set and send it
to the other nodes.
A suggestion from serg@mariadb.org to make role propagation simpler.
Instead of gathering the leaf roles in an array, which for very wide
graphs could potentially mean a big part of the whole roles schema, keep
the previous logic. When finally merging a role, set its counter
to something positive.
This will effectively mean that the role has been merged, so a random pass
through the roles hash that touches a previously merged role won't cause the
problem described in MDEV-12366 any more, as propagate_role_grants_action
will stop attempting to merge from that role.
Sometimes, the test would fail with a result difference for
the READ UNCOMMITTED read, because the incremental backup
would finish before the redo log was written for all the rows
that were inserted in the second batch.
To fix that, cause a redo log write by creating another
transaction. The transaction rollback (which internally does commit)
will be flushed to the redo log, and before that, all the preceding
changes will be flushed to the redo log as well.
row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
These assertions were disabled in MariaDB 10.1.1 in
commit df4dd593f2
with a bogus comment referring to the function wsrep_fake_trx_id()
that was introduced in the very same commit.
Problem:
The command was:
find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+
It was supposed to work as follows:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes, don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)
Now, -exec ... \+ works like this:
every newly found path is appended to the end of the command line;
when the accumulated command-line length reaches `getconf ARG_MAX` (~2Gb),
the command is executed, and find continues, appending to a new command line.
What happens here is that find appends some directory to the command line,
then dives into it and starts appending files from that directory.
At some point the command line overflows, rm -rf gets executed and removes
the whole directory. Then find tries to continue scanning the directory
that was already removed.
Fix: don't dive into directories that will be recursively removed
anyway; use -prune for them. Basically, we should be pruning both paths
that have matched $cpat and paths that have not matched it. This is
achieved by pruning unconditionally, before the regex is tested:
find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+
Patch credit: Serg
Using systemd we can automate creating users and directories. So
generate and install the configuration files.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
A small change in cmake/install_layout.cmake compared to the original
contributor patch: also install the SYSTEMD_SYSUSERS and SYSTEMD_TMPFILES
directories. The variables were being set, but the loop which defines the
final install files was not updated.
In the function make_sortkey, a tmp buffer was defined, and in the absence
of param->tmp_buffer, the tmp buffer used the sort_keys buffer. The
sort_keys buffer has a length defined in sort_field->length, while the
length of param->tmp_buffer is stored in param->rec_length. Make sure to
use the appropriate length based on which buffer we are using, otherwise
we'll overflow.
Also added a type cast to size_t during the calculation of the sort keys
buffer size to avoid an overflow if the buffer size exceeds 32 bits.
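A small illustration of the size_t cast (hypothetical variable names, not
the actual filesort code): with 32-bit operands the multiplication wraps
before it is widened, while casting one operand first makes the product
64-bit on an LP64 platform:

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t max_keys   = 1U << 20;   // 1,048,576 keys
    uint32_t rec_length = 8192;       // bytes per record

    size_t wrong = max_keys * rec_length;            // 32-bit multiply, wraps to 0
    size_t right = (size_t) max_keys * rec_length;   // 64-bit multiply, 8 GiB

    std::printf("wrong=%zu right=%zu\n", wrong, right);
    return 0;
}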
The galera_events test shows a regression with the original fix for MW-416.
The reason was that Events::drop_event() can also be called from inside event
execution, and there we have special treatment for an event that executes a
"DROP EVENT" statement and runs TOI replication inside the event processing body.
This resulted in executing WSREP_TO_ISOLATION twice for such a DROP EVENT statement.
The fix is to call WSREP_TO_ISOLATION_BEGIN only in Events::drop_event().
Changed the return code for a replication error to TRUE.
This is aligned with the native MySQL convention of returning TRUE (defined as 1) or FALSE (defined as 0) from a bool function.
This is wrong, but at least it follows the MySQL convention...