When discounting the selectivity of ref access, don't discount the
selectivity we have already discounted for range access.
This is the 10.1 version of the fix. Condition filtering test results
will need to be adjusted in 10.4.
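A minimal sketch of the intent, using invented names rather than the
optimizer's real data structures: when combining per-condition selectivities
for ref access, skip the conditions whose selectivity the chosen range
access has already applied.

  #include <vector>

  struct CondSel
  {
    double selectivity;            // fraction of rows satisfying the condition
    bool   used_by_range_access;   // already discounted for the range plan
  };

  // Illustrative only: combine selectivities for ref access without
  // re-applying what the range access already accounted for.
  static double ref_access_filter(const std::vector<CondSel> &conds)
  {
    double filter = 1.0;
    for (const CondSel &c : conds)
      if (!c.used_by_range_access)     // avoid discounting twice
        filter *= c.selectivity;
    return filter;
  }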
A histogram whose size is odd while using DOUBLE precision leaves the last
byte unwritten. When collecting histograms, this causes the last byte of
the record to be uninitialized. memset the buffer to 0 first to make
sure this does not happen.
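A minimal sketch of the idea (the names are illustrative, not the actual
histogram-collection code): zero the whole destination buffer before
serializing the values, so any trailing byte the values do not cover is
still initialized.

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Illustrative only: pack DOUBLE histogram values into a fixed-size record.
  // If rec_size is not a multiple of sizeof(double), the trailing byte(s)
  // would remain uninitialized without the memset.
  static void pack_histogram(const std::vector<double> &values,
                             uint8_t *rec, size_t rec_size)
  {
    memset(rec, 0, rec_size);                  // initialize every byte first
    size_t n = rec_size / sizeof(double);
    if (n > values.size())
      n = values.size();
    memcpy(rec, values.data(), n * sizeof(double));
  }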
Skip the test on big-endian systems.
In MariaDB Server 10.0 and 10.1 (as well as MySQL 5.6),
the implementation of innodb_checksum_algorithm=crc32
wrongly assumes little-endian byte order.
Problem:-
When mysql executes INSERT ... ON DUPLICATE KEY UPDATE, the storage engine checks
if the inserted row would generate a duplicate key error. If yes, it returns
the existing row to mysql, mysql updates it and sends it back to the storage
engine. When the table has more than one unique or primary key, this statement
is sensitive to the order in which the storage engine checks the keys.
Depending on this order, the storage engine may return different rows
to mysql, and hence mysql can update different rows. The order in which the
storage engine checks keys is not deterministic. For example, InnoDB checks
keys in an order that depends on the order in which the indexes were added to
the table: the first added index is checked first. So if the master and the slave
have added their indexes in different orders, the slave may go out of sync.
Solution:-
Make INSERT ... ON DUPLICATE KEY UPDATE unsafe when using statement or mixed
binlog format and the table has more than one unique key.
There are two exceptions:
1. An auto-increment key is not counted, because InnoDB takes a gap lock for the
failed insert and a concurrent insert will get the next increment value. However,
if the user supplies the auto-increment value explicitly, it can still be unsafe.
2. Only unique keys for which the insertion actually supplies values are counted.
This patch also addresses bug #72921.
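A simplified sketch of the decision (hypothetical structures, not the
server's actual implementation): count the unique keys that the insert
actually supplies values for, skip an auto-increment key unless the user
gives it an explicit value, and flag the statement unsafe when more than
one such key remains.

  #include <cstddef>

  struct KeyInfo
  {
    bool is_unique;
    bool is_auto_increment;
    bool autoinc_value_supplied;   // user gave an explicit auto-inc value
    bool insert_supplies_values;   // the INSERT covers this key's columns
  };

  // Illustrative only: should INSERT ... ON DUPLICATE KEY UPDATE be marked
  // unsafe for statement-based or mixed binlog format?
  static bool odku_is_unsafe(const KeyInfo *keys, size_t n_keys)
  {
    size_t counted = 0;
    for (size_t i = 0; i < n_keys; i++)
    {
      const KeyInfo &k = keys[i];
      if (!k.is_unique || !k.insert_supplies_values)
        continue;                                      // exception 2
      if (k.is_auto_increment && !k.autoinc_value_supplied)
        continue;                                      // exception 1
      counted++;
    }
    return counted > 1;          // more than one relevant unique key
  }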
- The commit ab6dd77408 wrongly sets the
condition inside innobase_srv_conc_enter_innodb(). The problem is that
InnoDB makes the thread sleep indefinitely if it is a replication
slave thread.
Thanks to Sujatha Sivakumar for contributing the replication test case.
Problem:
=======
A CREATE OR REPLACE TEMPORARY TABLE statement which dropped the table but
failed at a later stage of creating the temporary table is not written to
the binary log in row-based replication. This causes the slave to diverge.
Analysis:
========
CREATE OR REPLACE statements work as shown below.
CREATE OR REPLACE TABLE table_name (a int);
is basically the same as:
DROP TABLE IF EXISTS table_name;
CREATE TABLE table_name (a int);
Hence every CREATE OR REPLACE TABLE command which dropped a table should be
written to the binary log, even when the following CREATE TABLE part fails. In order
to achieve this, during the execution of a CREATE OR REPLACE command, when a
table is dropped the 'thd->log_current_statement' flag is set. When table creation
results in an error within the 'mysql_create_table' code, the error handling part
looks for this flag. If it is set, the failed CREATE OR REPLACE statement is
written into the binary log in spite of the error. This ensures that the slave doesn't
diverge from the master. In the case of row-based replication, the error handling
code returns very early if the table is a temporary table. This is done based
on the assumption that temporary tables are not replicated in row-based
replication.
It fails to handle the case where a temporary table was created under
statement-based replication at an earlier stage and the binary log format was
later changed to row because of an unsafe statement. In this case, when a CREATE OR
REPLACE statement is executed on this temporary table, it will be dropped but the
query will not be written to the binary log. Hence the slave diverges.
Fix:
===
In the error handling code, check the return status of the create table
operation. If it is successful, the replication mode is row-based and the
table is a temporary table, then return. Otherwise proceed further to the
code which checks the thd->log_current_statement flag and does the
appropriate logging.
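A condensed sketch of this control flow (hedged, using stand-in types
rather than the actual mysql_create_table() code): take the early
temporary-table exit only when the create succeeded; on failure, fall
through to the thd->log_current_statement check so a statement that already
dropped a table still reaches the binary log.

  // Illustrative only.
  struct THD_like
  {
    bool log_current_statement;   // set when CREATE OR REPLACE dropped a table
    bool row_based;               // current binlog format is ROW
  };

  static void handle_create_result(THD_like *thd, bool create_succeeded,
                                   bool is_temporary,
                                   void (*write_to_binlog)(THD_like *))
  {
    if (create_succeeded && thd->row_based && is_temporary)
      return;   // temporary tables are not replicated in row-based mode

    // The statement may have already dropped a table; it must be binlogged
    // even though it ultimately failed, so the slave does not diverge.
    if (thd->log_current_statement)
      write_to_binlog(thd);
  }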
The function pointer ut_timer() was only used by the
InnoDB defragmenting thread. Let InnoDB use a single monotonic
high-precision timer, my_interval_timer() [in nanoseconds],
occasionally wrapped by microsecond_interval_timer().
srv_defragment_interval: Change from "timer" units to nanoseconds.
This concludes the InnoDB time function cleanup that was
motivated by MDEV-14154. Only ut_time_ms() will remain for now,
wrapping my_interval_timer().
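For illustration, a self-contained sketch of the pattern
(std::chrono::steady_clock stands in for my_interval_timer(), which likewise
returns monotonic nanoseconds; the frequency parameter is a stand-in for a
setting such as srv_defragment_frequency):

  #include <chrono>
  #include <cstdint>

  // Stand-in for my_interval_timer(): monotonic time in nanoseconds.
  static uint64_t monotonic_ns()
  {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(
        steady_clock::now().time_since_epoch()).count();
  }

  // Stand-in for microsecond_interval_timer(): same clock, coarser unit.
  static uint64_t monotonic_us() { return monotonic_ns() / 1000; }

  // Keeping an interval in nanoseconds, as the commit does for
  // srv_defragment_interval.
  static uint64_t interval_ns_from_frequency(double frequency_hz)
  {
    return static_cast<uint64_t>(1e9 / frequency_hz);
  }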
The MDEV-5589 commit set up a policy to skip DROP TEMPORARY TABLE binary logging
in case the target table has not been "CREATE"-ed in the binlog (no CREATE
Query-log-event was logged into the binary log).
It turns out that
1. the rule did not cover a non-existing table DROPped with the IF-EXISTS clause.
The logged-create knowledge for the non-existing table does not even need the
MDEV-5589 patch, and
2. connection close disobeys the rule and triggers automatic DROP-IF-EXISTS
binlogging.
Either 1 or 2, or both, are also responsible for the unexpected binlog
records observed in MDEV-17863, actually rendering the referred
@@global.read_only irrelevant as far as the described stored procedure
definition *and* the ROW binlog-format are concerned.
The FTS optimizer thread made a false assumption that time(NULL)
is monotonic. The system clock can be adjusted to the past,
for example if the hardware clock has been drifting towards the future
and is adjusted back by NTP.
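For illustration, a wrap-around-safe way to measure elapsed seconds when
code still uses the adjustable wall clock (a generic sketch, not the code of
the functions listed below):

  #include <ctime>

  // If the system clock was set backwards since 'last_run', the naive
  // difference would be negative; clamp it so callers see "no time elapsed"
  // instead of a huge value or a failed assertion.
  static double elapsed_seconds(time_t last_run)
  {
    time_t now = time(nullptr);
    return now >= last_run ? difftime(now, last_run) : 0.0;
  }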
fts_slot_t::interval_time: Replace with the constant
FTS_OPTIMIZE_INTERVAL_IN_SECS.
fts_slot_t::last_run, fts_slot_t::completed: Clarify the
documentation.
fts_optimize_get_time_limit(): Remove a type cast, and
add a FIXME comment about domain mismatch.
fts_optimize_compact(), fts_optimize_words(): Limit the time
also when the current time has been moved to the past.
fts_optimize_table_bk(): Check for wrap-around.
fts_optimize_how_many(): Check for wrap-around, and remove the
failing assertions.
fts_is_sync_needed(): Remove a redundant call to time(NULL).
lock_t::requested_time: Document what the field is used for.
lock_t::wait_time: Document that the field is only used for
diagnostics and may be garbage if the system time is being adjusted.
srv_slot_t::suspend_time: Document that this is duplicating
trx_lock_t::wait_started.
lock_table_print(), lock_rec_print(): Declare in static scope.
Add a parameter for the current time.
lock_deadlock_check_and_resolve(), lock_deadlock_lock_print(),
lock_deadlock_joining_trx_print():
Add a parameter for the current time.
srv_slot_t::suspend_time, os_aio_slot_t::reservation_time,
sync_cell_t::reservation_time: Explain what could happen
if the system time is being adjusted.
fts_sync_t::start_time: Document that the field is mostly unused.
Replace ut_usectime() with my_interval_timer(),
which is equivalent, but monotonically counting nanoseconds
instead of counting the microseconds of real time.
os_event_wait_time_low(): Use my_hrtime() instead of ut_usectime().
FIXME: Set a clock attribute on the condition variable that allows
a monotonic clock to be chosen as the time base, so that the wait
is immune to adjustments of the system clock.
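The FIXME refers to the standard POSIX mechanism; a minimal sketch (not
InnoDB's os_event code) of binding a condition variable to CLOCK_MONOTONIC
so that timed waits ignore adjustments of the system clock:

  #include <pthread.h>
  #include <time.h>

  static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cond;

  // Create the condition variable with a monotonic time base.
  static void init_monotonic_cond(void)
  {
    pthread_condattr_t attr;
    pthread_condattr_init(&attr);
    pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    pthread_cond_init(&cond, &attr);
    pthread_condattr_destroy(&attr);
  }

  // Timed wait; the absolute deadline must use the same clock.
  static int wait_up_to_one_second(void)
  {
    struct timespec abstime;
    clock_gettime(CLOCK_MONOTONIC, &abstime);
    abstime.tv_sec += 1;
    pthread_mutex_lock(&mtx);
    int err = pthread_cond_timedwait(&cond, &mtx, &abstime);
    pthread_mutex_unlock(&mtx);
    return err;   // ETIMEDOUT once the second elapses
  }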
Analysis
========
Point-in-time recovery using mysqlbinlog output containing queries
operating on temporary tables results in an error.
While writing the query log event to the binary log, the
thread id used for the execution of DROP TABLE and DELETE commands
was incorrect. The thread variable 'thread_specific_used'
is used to determine whether a specific thread id is to be used
while executing the statements, i.e. using 'SET
@@session.pseudo_thread_id'. This variable was not set
correctly for the DROP TABLE query and was never set for the DELETE
query. The thread id is important for temporary tables
since the tables are session specific. DROP TABLE and DELETE
queries executed using a wrong thread id resulted in errors
while applying the queries generated by the mysqlbinlog utility.
Fix
===
Set the 'thread_specific_used' THD variable for DROP TABLE and
DELETE queries.
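A hedged sketch of the effect of the fix (the structure is illustrative; in
the server the flag lives on the THD object): marking the statement as
thread-specific makes the binary log precede it with
SET @@session.pseudo_thread_id, which is what a replayed DROP TABLE or
DELETE on a temporary table needs.

  // Illustrative only.
  struct THD_like
  {
    unsigned long thread_id;
    bool thread_specific_used;   // when true, the logged query is preceded by
                                 // SET @@session.pseudo_thread_id=<thread_id>
  };

  static void prepare_tmp_table_stmt_for_binlog(THD_like *thd)
  {
    // Temporary tables are session-specific; the replayed statement must run
    // under the same pseudo thread id as the original session.
    thd->thread_specific_used = true;
  }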
ReviewBoard: 21833
Note: this patch is for 5.6.
Detected by ASAN.
The patch fixes the cleanup of parser stack pointers.
Reviewed-by: Guilhem Bichot <guilhem.bichot@oracle.com>
check_valid_path() uses my_strcspn(), which cannot handle invalid characters
properly. This is fixed by a big refactoring in 10.2 (MDEV-6353).
For 5.5, let's simply swap the tests, because check_string_char_length()
rejects invalid characters just fine.
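A sketch of the reordering only (the surrounding function and its parameters
are invented; the two named checks above are the real ones): run the
character/length check first, so invalid characters are rejected before the
path check that depends on my_strcspn() ever runs.

  // Illustrative control flow, not the server's code.
  static bool name_is_rejected(bool (*char_length_ok)(const char *),
                               bool (*path_ok)(const char *),
                               const char *name)
  {
    if (!char_length_ok(name))   // reliably rejects invalid characters
      return true;
    if (!path_ok(name))          // safe now: input characters are valid
      return true;
    return false;
  }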
Description:- During server startup, the server exits if
the 'mysql.plugin' system table has any rows with an empty
value for the 'name' field (the plugin name).
The xpath parsing function was using a local string buffer that was
deallocated when going out of scope. However, references to it were
preserved in the XPath parse tree, causing a read-after-free.
Fixed by making the xpath buffer a member of the Item
class for the relevant xpath function, so that it is preserved for the
duration of the query.
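A minimal sketch of the lifetime fix (class and member names are invented
for illustration): keep the buffer in the Item object so that pointers
stored in the parse tree stay valid for the whole query instead of pointing
into a stack buffer that dies with the parsing function.

  #include <string>

  // Buggy shape: build_tree() keeps pointers into a buffer that is destroyed
  // when the parsing function returns, causing a later read-after-free.
  //
  // Fixed shape: the buffer lives as long as the Item does.
  class Item_xpath_func_like
  {
    std::string xpath_buf;               // survives for the whole query
  public:
    void parse(const char *xp)
    {
      xpath_buf.assign(xp);
      build_tree(&xpath_buf[0]);         // stored pointers stay valid
    }
  private:
    void build_tree(char *) {}           // placeholder for the real parser
  };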
DESCRIPTION
===========
PVS-Studio static code analyzer found several suspicious
fragments of code across various files.
i) sizeof() is applied to a pointer
ii) memcpy() doesn't copy the whole string.
iii) the enumeration constant 'wkb_multilinestring' is used as
a Boolean value.
iv) the 'throw' keyword is missing before std::runtime_error()
FIX
===
i) Use sizeof({actual object/data type})
ii) Use strncpy() and set the last char to '\0'
iii) N/A (the issue has already been fixed)
iv) Add 'throw' before the exception.
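Two of these fixes in miniature (the struct and buffer names are
illustrative only):

  #include <cstring>

  struct Packet { char payload[64]; };

  static void examples(const char *src, Packet *p)
  {
    // (i) Measure the pointed-to object, not the pointer: sizeof(p) is just
    // the pointer size and would silently truncate the zero-fill.
    memset(p, 0, sizeof(*p));

    // (ii) strncpy() does not terminate the destination when the source is
    // longer than the limit, so terminate it explicitly.
    strncpy(p->payload, src, sizeof(p->payload) - 1);
    p->payload[sizeof(p->payload) - 1] = '\0';
  }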
RB: 21502
This is motivated by PS-5221 in
percona/percona-server@2817c561fc
The coarser-precision ut_time() will still refer to the
system clock, meaning that bad things can happen if the
real time clock is adjusted backwards.
Valgrind has supported the CRC32 instruction since version 3.6.1,
released in 2011. Thus, remove the fallback to the software
implementation when running under Valgrind.
There is one directly applicable change to InnoDB:
commit 739f5239f1 in the
5.5 branch will be merged before the next MariaDB releases.
Another potentially applicable change will be tracked
separately as MDEV-20126.
Thus, here we only update the InnoDB version number and do
not change anything else.
Problem: Clients running with different values of auto_increment_increment
and doing concurrent inserts get a "Duplicate key" error in one of them.
Analysis:
When the auto_increment_increment value is reduced in a session,
InnoDB uses the last auto_increment_increment value
to recalculate the autoinc value.
If some other session has inserted a value
with a different auto_increment_increment, InnoDB recalculates the
autoinc values based on the current session's previous auto_increment_increment
instead of considering the auto_increment_increment used for the last insert
across all sessions.
Fix:
revert 7acdf29cb4
a.k.a. 7c12a9e5c3
as it is causing the bug.
Reviewed By:
Bin <bin.x.su@oracle.com>
Kevin <kevin.lewis@oracle.com>
RB#21777
Note: In MariaDB Server, earlier changes in
ae5bc05988
for MDEV-533 require that the original test in
mysql/mysql-server@1ccd472d63
be adjusted for MariaDB.
Also, ef47b62551 (MDEV-8827)
had to be reverted after the upstream fix had been backported.