The column INFORMATION_SCHEMA.INNODB_MUTEXES.NAME has not been populated
ever since commit 2e814d4702 applied the InnoDB changes from
MySQL 5.7.9 to MariaDB Server 10.2.2.
Since the same commit, the view is only providing information about
rw_lock_t, not any mutexes.
For now, let us convert the source code file name and line number of
the rw_lock_t creation into a name. A better option in the future might
be to store the information somewhere where it can be looked up by
mysql_pfs_key_t, and possibly to remove the CREATE_FILE and CREATE_LINE
columns.
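As a rough C++ illustration (not the actual server code; the helper
name and buffer size here are assumptions), the conversion could look
like this:

  #include <cstddef>
  #include <cstdio>

  // Hypothetical helper: build a human-readable latch name such as
  // "buf0buf.cc:1234" from the file name and line number recorded
  // when the rw_lock_t was created.
  static void make_latch_name(char* buf, std::size_t size,
                              const char* cfile_name, unsigned cline)
  {
      std::snprintf(buf, size, "%s:%u", cfile_name, cline);
  }

  int main() {
      char name[64];
      make_latch_name(name, sizeof name, "buf0buf.cc", 1234);
      std::puts(name);  // prints "buf0buf.cc:1234"
  }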
Problem:-
When we do a bulk insert with more rows than
MI_MIN_ROWS_TO_DISABLE_INDEXES (100), we try to disable the indexes to
speed up the insert. But the current logic also disables the long
unique indexes.
Solution:- In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH) we will not disable the index.
This commit also refactors the mi_disable_indexes_for_rebuild function.
Since this function is called in only one place, it is inlined into
start_bulk_insert.
mi_clear_key_active is added to myisamdef.h because it is now also used
in the ha_myisam.cc file.
(Same is done for Aria Storage engine)
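For illustration, a minimal sketch of the new check (simplified
placeholder types; only HA_KEY_ALG_LONG_HASH is from the actual code,
the rest is assumed):

  #include <cstddef>
  #include <vector>

  // Simplified stand-ins for the MyISAM key metadata.
  enum ha_key_alg { HA_KEY_ALG_BTREE, HA_KEY_ALG_LONG_HASH };
  struct KEY { ha_key_alg algorithm; };

  // Hypothetical sketch of the loop in ha_myisam::start_bulk_insert:
  // disable ordinary indexes for the bulk load, but keep long unique
  // (long hash) indexes active so uniqueness is still enforced.
  void disable_indexes_for_bulk_insert(const std::vector<KEY>& keys,
                                       std::vector<bool>& key_active)
  {
      for (std::size_t i = 0; i < keys.size(); i++) {
          if (keys[i].algorithm == HA_KEY_ALG_LONG_HASH)
              continue;            // do not disable long unique indexes
          key_active[i] = false;   // stands in for mi_clear_key_active()
      }
  }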
See also original report:
http://bugs.debian.org/946671
Using mysqlhotcopy, the following error occurs:
DBD::mysql::db do failed: You can't use locks with log tables at
/usr/bin/mysqlhotcopy line 545.
Author:
Paul Szabo <psz@maths.usyd.edu.au> http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics University of Sydney Australia
Race condition when innodb_lock_wait_timeout (default 50 seconds)
is exceeded for 'send update', but information_schema.innodb_lock_waits
still sees this wait, or the wait may exit by timeout. May occur on an
overloaded host.
fil_space_encrypt(): Remove the debug check that decrypts the
just encrypted page. We are exercising the decryption of encrypted
pages enough via --suite=encryption,mariabackup. It is a waste of
computing resources to decrypt every page immediately after encrypting it.
The redundant check had been added in
commit 2bedc3978b (MDEV-9931).
In commit 0e5a4ac253 (MDEV-15562)
we introduced a bogus debug check failure that does not affect
the correctness of the release build.
With a fixed-length PRIMARY KEY, we do not have to recompute
rec_get_offsets() after restarting the mini-transaction,
because the offsets of DB_TRX_ID,DB_ROLL_PTR are not going
to change.
row_undo_mod_clust(): Invoke rec_offs_make_valid() to keep the
debug check in page_zip_write_trx_id_and_roll_ptr() happy.
The scenario to reproduce this bug should be rather unlikely:
In the time frame when row_undo_mod_clust() has committed its
first mini-transaction and has not yet started the next one,
another mini-transaction must do something that causes the page
to be reorganized, split or merged.
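To illustrate why the offsets stay valid (a standalone sketch, not
InnoDB code; the 8-byte key is an assumed example and the record header
is ignored), the system column offsets depend only on the fixed key
length:

  #include <cstddef>
  #include <cstdio>

  // In a clustered index record the system columns follow the PRIMARY
  // KEY fields; DB_TRX_ID is 6 bytes and DB_ROLL_PTR is 7 bytes.
  constexpr std::size_t PK_LEN        = 8;  // e.g. a BIGINT key
  constexpr std::size_t DB_TRX_ID_LEN = 6;

  int main() {
      // With a fixed-length key these offsets are constants, so the
      // cached rec_get_offsets() result cannot be invalidated by a
      // mini-transaction restart.
      std::size_t trx_id_offset   = PK_LEN;
      std::size_t roll_ptr_offset = PK_LEN + DB_TRX_ID_LEN;
      std::printf("DB_TRX_ID at %zu, DB_ROLL_PTR at %zu\n",
                  trx_id_offset, roll_ptr_offset);
  }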
Fixed a bug introduced in MDEV-11345: the server did not start if
non-English error messages were set in the startup parameters.
Added the lc_messages=de_DE option to an existing test case.
Problem:
-------
Accessing a member within the 'xid_count_per_binlog' structure results
in the following error when 'UBSAN' is enabled:
member access within address 0xXXX which does not point to an object of type
'xid_count_per_binlog'
Analysis:
---------
The problem appears to be that no constructor for 'xid_count_per_binlog' is
being called, and thus the vtable will not be initialized.
Fix:
---
Defined a parameterized constructor for the 'xid_count_per_binlog' class.
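A standalone sketch of the pattern behind this fix (the members shown
are hypothetical; the real class lives in the server's binlog code):

  #include <cstdlib>
  #include <new>

  struct xid_count_per_binlog {
      char*         binlog_name;
      unsigned long xid_count;

      // The fix: a parameterized constructor, so creating the object
      // always initializes its members (and, for polymorphic classes,
      // the vtable pointer).
      xid_count_per_binlog(char* name) : binlog_name(name), xid_count(0) {}

      virtual ~xid_count_per_binlog() {}  // makes the class polymorphic
  };

  int main() {
      // Before the fix the object came from raw allocated memory with
      // no constructor call; UBSAN then reports a member access within
      // an address that does not point to an object of this type.
      void* raw = std::malloc(sizeof(xid_count_per_binlog));
      // After the fix: run the constructor, e.g. via placement new.
      xid_count_per_binlog* e = new (raw) xid_count_per_binlog(nullptr);
      e->~xid_count_per_binlog();
      std::free(raw);
      return 0;
  }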
Move tokuftdump and tokuft_logprint man pages to storage/tokudb.
The man pages are now part of tokudb-engine cmake component. This change
is mostly for RPM & DEB based packaging generated through CMake & CPack.
Debian upstream already handles this change via the custom scripts in debian/
Problem:
=======
The problem is that InnoDB doesn't add the table back to fts_slots if
DROP TABLE fails. InnoDB marks the table as being in fts_slots while
processing the sync message. So the subsequent ALTER statement assumes
that the table is in the queue and tries to remove it, but InnoDB can't
find the table in fts_slots.
Solution:
=========
i) Removal of in_queue in fts_t while processing the fts sync message.
ii) Add the table to fts_slots when DROP TABLE fails (see the sketch
below).
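A minimal sketch of fix (ii), with simplified placeholder types (the
real code manipulates InnoDB's fts_slots; the helper name here is
hypothetical):

  #include <string>
  #include <vector>

  // Simplified stand-ins; the real InnoDB structures differ.
  struct dict_table_t { std::string name; bool fts_in_queue = false; };
  static std::vector<dict_table_t*> fts_slots;

  // If DROP TABLE fails after the table was taken off the background
  // FTS queue, put it back into fts_slots so that a later ALTER or
  // DROP can still find and remove it.
  void fts_requeue_on_drop_failure(dict_table_t* table)
  {
      if (!table->fts_in_queue) {
          fts_slots.push_back(table);
          table->fts_in_queue = true;
      }
  }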
[Variant 2 of the fix: collect the attached conditions]
Problem:
make_join_select() has a section of code which starts with
"We plan to scan all rows. Check again if we should use an index."
the code in that section will [unnecessarily] re-run the range
optimizer using this condition:
condition_attached_to_current_table AND current_table's_ON_expr
Note that the original invocation of range optimizer in
make_join_statistics was done using the whole select's WHERE condition.
Taking the whole select's WHERE condition and using multiple-equalities
allowed the range optimizer to infer more range restrictions.
The fix:
- Do range optimization using a condition that is an AND of this table's
condition and all of the previous tables' conditions.
- Also, fix the range optimizer to prefer SEL_ARGs with type=KEY_RANGE
over SEL_ARGs with type=MAYBE_KEY, regardless of the key part.
Computing
key_and(
SEL_ARG(type=MAYBE_KEY, key_part=1),
SEL_ARG(type=KEY_RANGE, key_part=2)
)
will now produce the SEL_ARG with type=KEY_RANGE.
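A standalone sketch of the new preference rule (a simplified stand-in
for the real SEL_ARG type, not the optimizer code itself):

  #include <cstdio>

  struct SEL_ARG {
      enum Type { MAYBE_KEY, KEY_RANGE } type;
      unsigned key_part;
  };

  // Sketch of the preference in key_and(): a real range (KEY_RANGE)
  // now wins over MAYBE_KEY, regardless of which key part each covers.
  SEL_ARG* key_and(SEL_ARG* a, SEL_ARG* b)
  {
      if (a->type == SEL_ARG::MAYBE_KEY && b->type == SEL_ARG::KEY_RANGE)
          return b;
      if (b->type == SEL_ARG::MAYBE_KEY && a->type == SEL_ARG::KEY_RANGE)
          return a;
      return a;  // (the real key_and() does much more work here)
  }

  int main() {
      SEL_ARG maybe{SEL_ARG::MAYBE_KEY, 1};
      SEL_ARG range{SEL_ARG::KEY_RANGE, 2};
      std::puts(key_and(&maybe, &range)->type == SEL_ARG::KEY_RANGE
                    ? "picked KEY_RANGE" : "picked MAYBE_KEY");
  }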