Don't let mysql_install_db set the SUID bit for auth_pam_tool in RPM/DEB
packages. Instead, package the files with correct permissions and
only fix the ownership of auth_pam_tool_dir (which can only be done
after the mysql user is created, so in post-install).
Keep the old mysql_install_db behavior for bintars.
The only change is to the version number.
In MySQL 5.6.46, the copyright comments in a number of files were changed
in mysql/mysql-server@f1a006ece7,
but there was no functional change to InnoDB code.
This change was also reflected in XtraDB. We are not changing the copyright
comments in MariaDB Server for now.
Between MySQL 5.6.46 and 5.6.47, InnoDB was not changed at all.
Actually, we had forgotten to update the InnoDB version number to
5.6.46. With this change, we are updating InnoDB
from 5.6.45 to 5.6.47 and XtraDB from 5.6.45-86.1 to 5.6.46-86.2.
- Fixed a warning, visible in optimized builds, about calling
memcpy with length parameters larger than the maximum value of ptrdiff_t.
rb#23333 approved by Annamalai Gurusami <annamalai.gurusami@oracle.com>
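As a minimal sketch of this class of fix (safe_memcpy and its assertion are
illustrative, not the actual patch):
```
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Compilers warn when a memcpy() length could exceed PTRDIFF_MAX,
// because no single object may be that large. Guard the length
// before the call; safe_memcpy is a hypothetical helper.
static void *safe_memcpy(void *dst, const void *src, size_t len)
{
  assert(len <= static_cast<size_t>(PTRDIFF_MAX));
  return memcpy(dst, src, len);
}
```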
IndexPurge::next(): Replace btr_pcur_move_to_next_user_rec()
with some equivalent code that performs sanity checks without
killing the server. Perform some additional sanity checks as well.
This change is motivated by
mysql/mysql-server@48de4d74f4
which unnecessarily introduces storage overhead to btr_pcur_t
and uses a test case that injects a fault somewhere else,
not in the code path that was modified.
MySQL 5.7.29 includes the following fix:
Bug #30287668 INNODB: A LONG SEMAPHORE WAIT
mysql/mysql-server@5cdbb22b51
There is no test case. It seems that the problem could occur when
a spatial index is large and peculiar enough that multiple R-tree
leaf pages have exactly the same minimum bounding rectangle (MBR).
The commit message suggests that the hang can occur when R-tree
non-leaf pages are being merged, which should only be possible
during transaction rollback or the purge of transaction history,
when the R-tree index is at least 2 levels high and very many records
are being deleted. The message says that when two spatial index node
pointer records compare as equal, an infinite loop will occur in
rtr_page_copy_rec_list_end_no_locks(). Hence, we must include the
child page number in the comparison to be consistent with
mysql/mysql-server@2e11fe0e15.
We fix this bug in a simpler way, involving fewer code changes.
cmp_rec_rec(): Renamed from cmp_rec_rec_with_match().
Assert that rec2 always resides in an index page.
Treat non-leaf spatial index pages specially.
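A minimal sketch of the tie-breaking idea (the types and names below are
illustrative stand-ins, not InnoDB's actual record comparison):
```
#include <array>
#include <cstdint>

// A node pointer in a non-leaf R-tree page: a bounding rectangle
// plus the number of the child page it points to (illustrative).
struct NodePtr
{
  std::array<double, 4> mbr;  // xmin, ymin, xmax, ymax
  uint32_t child_page_no;
};

// Compare two node pointers. When the MBRs are exactly equal, fall
// back to the child page number, so that distinct records never
// compare as equal and the copy loop always makes progress.
static int cmp_node_ptr(const NodePtr &a, const NodePtr &b)
{
  if (a.mbr != b.mbr)
    return a.mbr < b.mbr ? -1 : 1;
  if (a.child_page_no != b.child_page_no)
    return a.child_page_no < b.child_page_no ? -1 : 1;
  return 0;
}
```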
Now that we will be invoking dtuple_get_n_ext() instead of
letting btr_push_update_extern_fields() update an already
calculated value, it is unnecessary to calculate n_ext upfront.
row_rec_to_index_entry(), row_rec_to_index_entry_low():
Remove the output parameter n_ext.
During update, rollback, or MVCC read, we may miscalculate
the number of off-page columns, and thus the size of the
clustered index record. The function btr_push_update_extern_fields()
is mostly redundant, because the off-page columns would also be
moved by row_upd_index_replace_new_col_val(), which is invoked
via row_upd_index_replace_new_col_vals().
btr_push_update_extern_fields(): Remove.
This is based on
mysql/mysql-server@1fa475b85d
which refines a recovery bug fix,
mysql/mysql-server@ce0a1e85e2,
from MySQL 5.7.5.
No test case was provided by Oracle.
Some of the changed code is covered by the existing test
innodb.blob-crash.
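A minimal sketch of the simplification, with Field and count_n_ext as
illustrative stand-ins for InnoDB's field type and dtuple_get_n_ext():
```
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative stand-in for a data tuple field that may be stored
// off-page (externally).
struct Field { bool is_ext; };

// Compute the number of externally stored fields on demand, instead
// of threading an n_ext output parameter through record conversion.
static size_t count_n_ext(const std::vector<Field> &tuple)
{
  return static_cast<size_t>(
      std::count_if(tuple.begin(), tuple.end(),
                    [](const Field &f) { return f.is_ext; }));
}
```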
WL#6326 in MariaDB 10.2.2 introduced a potential hang on purge or rollback
when an index tree is being shrunk by multiple levels.
This fix is based on
mysql/mysql-server@f2c5852630
with the main difference that our version of the test case uses
DEBUG_SYNC instrumentation on ROLLBACK, not on purge.
btr_cur_will_modify_tree(): Simplify the check further.
This is the actual bug fix.
row_undo_mod_remove_clust_low(), row_undo_mod_clust(): Add DEBUG_SYNC
instrumentation for the test case.
Remove the offending test case. This sort of error is hard to test in
all possible corner cases, which makes the test less valuable. The
overflow error is instead covered by warnings generated by the compiler,
which are much more reliable in the general case.
* size represents the size of an element in the Unique class
* full_size is used when the Unique class counts the number of
duplicates stored per element. This requires additional space per Unique
element.
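A hypothetical layout illustrating the distinction (not the server's
actual Unique implementation):
```
#include <cstddef>
#include <cstdint>

typedef uint32_t element_count;  // per-element duplicate counter

struct UniqueSizes
{
  size_t size;        // size of one element (the key itself)
  bool   with_counts; // is the number of duplicates tracked?

  // When duplicates are counted, every element carries an extra
  // counter, so full_size exceeds size by sizeof(element_count).
  size_t full_size() const
  {
    return size + (with_counts ? sizeof(element_count) : 0);
  }
};
```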
The write-heavy test innodb_zip.wl6501_scale_1 timed out for me
on 10.2 commit 60d7011c5f.
Out of os_aio_n_segments=6, 5 are waiting for an event in
os_aio_simulated_handler(). One thread is waiting for a
write to complete in buf_dblwr_add_to_batch(), but that
would never happen, because nothing is waking up the simulated AIO
handler threads.
This hang appears to have been introduced in MySQL 5.6.12
in mysql/mysql-server@26cfde776c.
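The shape of the bug, as a generic sketch (plain std::condition_variable
code, not the server's os_event implementation):
```
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool work_pending = false;

// The submitter must wake a handler after queuing work. If the
// notify_one() call is skipped, the handler below sleeps forever,
// which is exactly the shape of the hang described above.
void submit_write()
{
  { std::lock_guard<std::mutex> lk(m); work_pending = true; }
  cv.notify_one();
}

void aio_handler()
{
  std::unique_lock<std::mutex> lk(m);
  cv.wait(lk, [] { return work_pending; });
  // ... complete the queued write ...
}
```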
with condition_pushdown_from_having
This bug could manifest itself for queries with GROUP BY and HAVING clauses
when the HAVING clause was a conjunctive condition that depended
exclusively on grouping fields and at least one conjunct contained an
equality of the form fld=sq where fld is a grouping field and sq is a
constant subquery.
In this case the optimizer tries to perform a pushdown of the HAVING
condition into WHERE. To construct the pushable condition the optimizer
first transforms all multiple equalities in HAVING into simple equalities.
This has to be done for a proper processing of the pushed conditions
in WHERE. The multiple equalities at all AND/OR levels must be converted
to simple equalities because any multiple equality may refer to a multiple
equality at the upper level.
Before this patch the conversion was performed like this:
multiple_equality(x,f1,...,fn) => x=f1 and ... and x=fn.
When the equality item for x=fi was constructed, both the items for x and fi
were cloned. If x happened to be a constant subquery that could not be
cloned, the conversion failed. If the conversions of multiple equalities
performed earlier had succeeded, the whole condition was left in an
inconsistent state that could cause various failures.
The solution provided by the patch is:
1. use a different conversion rule if x is a constant:
multiple_equality(x,f1,...,fn) => f1=x and f2=f1 and ... and fn=f1
2. do not clone x if it is a constant.
Such conversions cannot fail, and the result of the conversion also
preserves the equivalence of f1,...,fn, which can be used for other
optimizations.
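A sketch of conversion rule 1 on plain strings standing in for items;
note that the constant x is referenced only in f1=x, so it never needs
to be cloned:
```
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// multiple_equality(x,f1,...,fn) with constant x becomes
// f1=x AND f2=f1 AND ... AND fn=f1 (each pair below is one equality).
static std::vector<std::pair<std::string, std::string>>
convert_const_mult_eq(const std::string &x,
                      const std::vector<std::string> &fields)
{
  std::vector<std::pair<std::string, std::string>> out;
  if (fields.empty())
    return out;
  out.push_back({fields[0], x});            // f1 = x
  for (size_t i = 1; i < fields.size(); i++)
    out.push_back({fields[i], fields[0]});  // fi = f1
  return out;
}
```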
This patch also made sure that expensive predicates are not pushed from
HAVING to WHERE.
Item_cond inherits from Item_args but doesn't store its arguments
as function arguments, which means it has zero arguments.
Don't call memcpy in this case.
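The underlying issue is that memcpy with a null source pointer is
undefined behavior even for a zero length; a minimal sketch of the guard
(Item is an opaque stand-in for the server's class):
```
#include <cstddef>
#include <cstring>

struct Item;  // opaque stand-in for the server's Item class

// With zero arguments the source pointer may be null, and
// memcpy(dst, nullptr, 0) is undefined behavior, so skip the call.
static void copy_args(Item **dst, Item **src, size_t arg_count)
{
  if (arg_count != 0)
    memcpy(dst, src, arg_count * sizeof(Item *));
}
```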
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that the sp_head object owns)
(10.3 requires extra work due to sp_package, will commit a separate
patch for it)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that did not hold a valid sp_head object (it was
either not yet constructed or already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
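A hedged sketch of the factory pattern, with a hypothetical Arena
standing in for MEM_ROOT (the real patch is more involved):
```
#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical arena allocator standing in for MEM_ROOT.
struct Arena
{
  void *block;
  void *alloc(size_t n) { return block = malloc(n); }
  void free_all() { free(block); block = nullptr; }
};

class Routine
{
  Arena owned_root;  // the object owns the arena it lives in
  explicit Routine(const Arena &root) : owned_root(root) {}
public:
  // Placement new constructs the object inside its own arena;
  // operator new/delete are never involved, so nothing dereferences
  // a not-yet-constructed or already-destroyed object.
  static Routine *create()
  {
    Arena root = { nullptr };
    void *mem = root.alloc(sizeof(Routine));
    return new (mem) Routine(root);
  }
  static void destroy(Routine *r)
  {
    Arena root = r->owned_root;  // save before destroying the object
    r->~Routine();
    root.free_all();             // releases the object's own storage
  }
};
```
Call sites then use Routine::create() and Routine::destroy() instead of
new and delete, so type information is only ever looked up on a fully
constructed object.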
The long semaphore wait appeared to be caused by the following
pattern in the MTR test:
```
SET DEBUG_SYNC = "now SIGNAL wsrep_after_certification_continue";
SET DEBUG_SYNC = "now SIGNAL signal.wsrep_apply_cb";
```
Raising two signals, one right after the other, caused the second signal
to overwrite the first before the first was consumed by the waiting
thread. This left one thread stuck until the debug sync point timed out.
A certification failure followed by a clean shutdown would cause an
inconsistency between the sequence number stored in InnoDB and the
sequence number stored in the provider.
This happened both in the case of a local certification failure, and in
the case where a dummy writeset is applied.
The fix consists of:
- updating wsrep position after dummy writeset is delivered in
`Wsrep_high_priority_service::log_dummy_write_set()`
- updating wsrep position while releasing commit order on the wsrep-lib
  side
Added two tests which stress the situation where a server is shut down
after a certification failure.
The string doesn't appear to be null-terminated when binlog checksums are
enabled. This causes a corrupt binlog name in the error message when a
slave is ahead of the master.
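A minimal sketch of the class of fix (copy_log_name is a hypothetical
helper, not the actual patch):
```
#include <cstddef>
#include <cstring>

// Copy a log file name into a fixed-size buffer and always
// NUL-terminate it, so the name stays printable even when the
// source buffer is not terminated.
static void copy_log_name(char *dst, size_t dst_size, const char *src)
{
  size_t len = 0;
  while (len < dst_size - 1 && src[len] != '\0')
    len++;
  memcpy(dst, src, len);
  dst[len] = '\0';
}
```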
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that the sp_head object owns)
(10.3 version of the fix, with handling for class sp_package)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that did not hold a valid sp_head object (it was
either not yet constructed or already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
Compilation failed when the XML table type is not supported.
This was because XMLDEF was called unconditionally from the REST table.
modified: storage/connect/tabrest.cpp
- Make cmake less verbose
modified: storage/connect/CMakeLists.txt
- Hide Switch_to_definer_security_ctx, which is not defined for 10.1 and 10.0
modified: storage/connect/ha_connect.cc
In this scenario:
- There is a possible range access for table T
- And there is a ref access on the same index which uses fewer key parts
- The join optimizer picks the ref access (because it is cheaper)
- make_join_select applies this heuristic to switch to range:
/* Range uses longer key; Use this instead of ref on key */
In this case, the join buffer will be used without
JOIN_TAB::make_scan_filter() having been called. This means that
conditions which should be checked when reading table T will instead
be checked after T is joined with the contents of the join buffer.
Fixed this by adding a make_scan_filter() check.
(updated patch after backport to 10.3)
(Fix testcase on Windows)
Compilation failed when the XML table type is not supported.
This was because XMLDEF was called unconditionally from the REST table.
modified: storage/connect/tabrest.cpp
Analysis:
========
'max_binlog_cache_size' is configured and a huge transaction is executed. When
the size of the transaction's events exceeds 'max_binlog_cache_size', the
events cannot be written to the binary log cache, and a cache write error is
raised. Upon a cache write error the statement is rolled back and the
transaction cache should be truncated to the position of the previous
statement. The truncate operation should reset the cache to the earlier valid
position and flush the new changes. Even though the flush succeeds, the cache
write error remains marked. The truncate code interprets the cache write error
as a cache flush failure and returns abruptly without modifying the write
cache parameters. Hence the cache is left in an invalid state. When a COMMIT
statement is executed in this session, it tries to flush the contents of the
transaction cache to the binary log. Since the cache holds partial events, the
cache write operation hits the 'writer.remains' assertion.
Fix:
===
The binlog truncate function resets the cache to a specified size. As a first
step of truncation, clear the cache write error flag that was raised during
the earlier statement execution. With this, new errors that surface during
cache truncation can be clearly identified.
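A hedged sketch of the shape of the fix (BinlogCache and its members are
illustrative, not the server's IO_CACHE API):
```
#include <cstddef>

struct BinlogCache
{
  size_t write_pos;
  bool   write_error;

  // Truncate the cache back to the previous statement's position.
  void truncate(size_t prev_stmt_pos)
  {
    // First clear the stale error raised by the oversized statement,
    // so that an error during truncation itself is distinguishable
    // from the earlier overflow.
    write_error = false;
    write_pos   = prev_stmt_pos;
  }
};
```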