Analysis: The problem is that InnoDB does not support generating
CURRENT_TIMESTAMP or constant defaults.
Fix: Add an additional check for whether a column has changed from NULL to
NOT NULL and its default has changed. If this is the first column definition
whose SQL type is TIMESTAMP, it is defined as NOT NULL, and it has either a
constant default or a function default, we must use the "Copy" method for
ALTER TABLE.
Post-push fix. The function cmp_dtuple_rec() was used without a prototype
in the file row0purge.c. Add the include file rem0cmp.h to row0purge.c
to resolve this issue.
approved by Krunal over IM.
Problem:
If we add a referential integrity constraint with a duplicate
name, an error occurs. The foreign key object would not have
been added to the dictionary cache. In the error path, there
is an attempt to remove this foreign key object. Since this
object is not there, the search returns a NULL result.
De-referencing the null object results in this crash.
Solution:
If the search for the foreign key object fails, do not
attempt to access it.
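A minimal sketch of the guard in the error path; foreign_find() and
foreign_cache_remove() are illustrative names, not the actual dict_
functions:

    /* Look the constraint up again in the dictionary cache; on a
       duplicate-name error it was never added, so the lookup is NULL. */
    foreign = foreign_find(table, constraint_name);

    if (foreign != NULL) {
        /* The object really is in the cache, so it is safe to remove. */
        foreign_cache_remove(table, foreign);
    }
    /* If foreign is NULL the constraint was never added; skip removal. */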
rb#9309 approved by Marko.
Added a new dynamic configuration variable, innodb_buf_dump_status_frequency,
to configure how often the buffer pool dump status is printed in the logs.
It is a number in [0, 100] that tells how often, as a percentage of the
number of buffer pool pages, the dump status should be printed. E.g. 10 means
that the status is printed after every 10% of the buffer pool pages have been
dumped. The default is 0 (only the start and end status are printed).
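A rough sketch of the check inside the dump loop, assuming the variable is
exposed as srv_buf_dump_status_frequency; everything else here is
illustrative, not the actual buf_dump() code:

    #include <stdio.h>

    /* Sketch only: print a status line every
       srv_buf_dump_status_frequency percent of the dumped pages;
       0 disables the periodic messages. */
    static void
    buf_dump_progress_sketch(unsigned long n_pages,
                             unsigned long srv_buf_dump_status_frequency)
    {
        unsigned long limit = 0;

        if (srv_buf_dump_status_frequency > 0) {
            limit = (n_pages * srv_buf_dump_status_frequency) / 100;
            if (limit == 0) {
                limit = 1;
            }
        }

        for (unsigned long i = 0; i < n_pages; i++) {
            /* ... dump page i to the dump file here ... */
            if (limit != 0 && (i + 1) % limit == 0) {
                printf("Dumped %lu/%lu buffer pool pages\n",
                       i + 1, n_pages);
            }
        }
    }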
Add progress info for the InnoDB/XtraDB row0merge phase. Note that we
do not know the exact number of rounds the merge sort needs at the start,
so the progress report may not be accurate.
Analysis: The problem was that the actual payload size (page size) after
compression was handled incorrectly on encryption. Additionally, some of
the variables were not initialized.
Fixed by encrypting/decrypting only the actual compressed page size.
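A minimal sketch of the idea; page_is_compressed(), compressed_payload_size()
and encrypt_bytes() are hypothetical helper names, not the actual
buf0buf/fil0fil functions:

    /* Encrypt only the bytes the page actually occupies: the compressed
       payload length for page-compressed pages, the full page otherwise. */
    unsigned long payload_len = page_is_compressed(frame)
        ? compressed_payload_size(frame)
        : srv_page_size;

    encrypt_bytes(frame, payload_len, key_version, dst_frame);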
Analysis: The problem is that there are not enough temporary buffer slots
for pending IO requests.
Fixed by allocating the same number of temporary buffer slots as the
maximum number of pending IO requests.
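A sketch of the sizing; the exact formula, the slot type and the per-thread
constant are assumptions for illustration only:

    #include <stdlib.h>

    /* Allocate one temporary buffer slot per possible in-flight IO
       request, so a free slot is always available. */
    unsigned long max_pending_ios =
        (srv_n_read_io_threads + srv_n_write_io_threads)
        * PENDING_IOS_PER_THREAD;

    tmp_slot_t *slots = calloc(max_pending_ios, sizeof(tmp_slot_t));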
Analysis: The problem is that the SQL layer calls the handler API after the
storage engine has already returned an error state. InnoDB performs an
internal rollback when it notices a transaction error (e.g. lock wait
timeout or deadlock), and after this the transaction is no longer in a
correct state to continue.
Fix: Do not continue fetch operations if the transaction is not started.
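A sketch of the early-out in the fetch path; the state-check helper, the
prebuilt member access and the chosen error code are illustrative:

    /* If InnoDB already rolled the transaction back internally, its
       state is no longer "started"; refuse further fetches instead of
       operating on a dead transaction. */
    if (!trx_is_started(prebuilt->trx)) {
        return HA_ERR_END_OF_FILE;  /* illustrative choice of error */
    }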
Analysis: The problem is that both encrypted tables and compressed tables use
the FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION to store the
required metadata. Furthermore, for tables that are both compressed and
encrypted, the code currently skips compression.
Fixes:
- Only encrypted pages store key_version in the FIL header offset
  FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION; no need to fix.
- Only compressed pages store the compression algorithm in the FIL header
  offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION; no need to fix, as they
  have a different page type, FIL_PAGE_PAGE_COMPRESSED.
- Pages that are both compressed and encrypted now use a new page type,
  FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED. The key_version is stored in the FIL
  header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION, and the compression
  method is stored after the FIL header in the same way as the compressed
  size, so that FIL_PAGE_COMPRESSED_SIZE is stored first, followed by
  FIL_PAGE_COMPRESSION_METHOD (see the sketch after this list).
- Fix the buf_page_encrypt_before_write function to really compress pages
  if compression is enabled.
- Fix the buf_page_decrypt_after_read function to really decompress pages
  if compression is used.
- Small style fixes.
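A sketch of the metadata layout for FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED
pages, where FIL_PAGE_DATA is the first byte after the FIL header and
mach_write_to_2()/mach_write_to_4() are InnoDB's big-endian writers; the
field widths and exact offsets used here are assumptions:

    /* key_version lives in the FIL header itself. */
    mach_write_to_4(page + FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION,
                    key_version);

    /* After the FIL header: compressed size first, then the method.
       The 2-byte widths below are assumptions for illustration. */
    mach_write_to_2(page + FIL_PAGE_DATA, compressed_size);
    mach_write_to_2(page + FIL_PAGE_DATA + 2, compression_method);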
Problem :
---------
This is a regression of Bug#19138298. In purge_node_t::validate_pcur
we try to get offsets for all columns of the clustered index from the
record stored in the persistent cursor. This fails when the stored record
does not have all the fields of the index. The stored record contains only
the fields that are needed to uniquely identify the entry.
Solution :
----------
1. Use pcur.old_n_fields to get the number of fields that are stored
   (see the sketch below).
2. Add a comment to note the dependency between the fields stored in the
   purge node ref and the stored cursor.
3. Return if the cursor record is not already stored, as it is not safe
   to access the cursor record directly without a latch.
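A sketch of point 1, assuming the usual rec_get_offsets() call and that the
node's persistent cursor holds the stored record; the member access shown
is illustrative:

    /* Compute offsets only for the fields that were actually stored
       in the persistent cursor, not for all fields of the index. */
    offsets = rec_get_offsets(pcur->old_rec, clust_index, offsets,
                              pcur->old_n_fields, &heap);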
Reviewed-by: Marko Makela <marko.makela@oracle.com>
RB: 9139
Problem :
---------
This is a regression of Bug#19138298. During purge, if
btr_pcur_restore_position() fails, we set found_clust to FALSE
so that a possible clustered index record can be found in future
calls for the same undo entry. This, however, overwrites the
old_rec_buf while initializing pcur again in the next call.
The leak is reproducible in a local environment and with the
test provided along with Bug#19138298.
Solution :
----------
If btr_pcur_restore_position() fails, close the cursor.
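A sketch of the fix in the purge path, assuming the usual
btr_pcur_restore_position()/btr_pcur_close() calls; the surrounding control
flow is illustrative:

    if (!btr_pcur_restore_position(BTR_SEARCH_LEAF, &node->pcur, &mtr)) {
        /* Closing the cursor frees old_rec_buf, so a later
           re-initialization of pcur cannot leak it. */
        btr_pcur_close(&node->pcur);
        node->found_clust = FALSE;
    }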
Reviewed-by: Marko Makela <Marko.Makela@oracle.com>
Reviewed-by: Annamalai Gurusami <Annamalai.Gurusami@oracle.com>
RB: 9074
that was apparently lost in 20c23048:
commit 20c23048c1
Author: Jan Lindström <jan.lindstrom@mariadb.com>
Date: Sun May 17 14:14:16 2015 +0300
MDEV-8164: Server crashes in pfs_mutex_enter_func after fil_crypt_is_closing
This also reverts 8635c4b4:
commit 8635c4b4e6
Author: Jan Lindström <jan.lindstrom@mariadb.com>
Date: Thu May 21 11:02:03 2015 +0300
Fix test failure.
This is an addendum to the fix for MDEV-7026. The ARM memory model is
similar to that of PowerPC and thus needs the same semantics with
respect to memory barriers. That is, os_atomic_test_and_set_*_release()
must be a store with a release barrier followed by a full barrier.
Unlike on x86, using __sync_lock_test_and_set(), which is implemented
as an exclusive load with acquire barriers plus an exclusive store,
is insufficient in contexts where the os_atomic_test_and_set_*_release()
macros are used.
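A minimal sketch of the required semantics using the GCC __atomic builtins;
the function name and the unsigned long operand type are illustrative, not
the actual macro definitions:

    /* Release exchange followed by a full barrier, as required on
       ARM (and PowerPC); illustrative only. */
    static inline unsigned long
    test_and_set_release(volatile unsigned long *ptr, unsigned long new_val)
    {
        unsigned long old =
            __atomic_exchange_n(ptr, new_val, __ATOMIC_RELEASE);
        __atomic_thread_fence(__ATOMIC_SEQ_CST);  /* full barrier */
        return old;
    }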