There was a race. The test case expected the slave to start processing a
particular DELETE statement, at which point the test would stop the slave.
But nothing waited for the slave to actually reach this point; thus,
depending on timing, the slave could be stopped too early, causing a
.result file difference.
Fixed by adding an appropriate wait to the test case.
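A minimal sketch of such a wait in mysqltest, using the standard
include/wait_condition.inc (the table name and the LIKE pattern are
hypothetical, for illustration only):

--let $wait_condition= SELECT COUNT(*) = 1 FROM information_schema.processlist WHERE info LIKE 'DELETE FROM t1%'
--source include/wait_condition.inc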
The problem with the test is that it causes OS errors whose numbers vary
between platforms and runs. The idea of the test is just to check that the
server does not crash; no other output is necessary.
Fix rare failures in test case rpl.rpl_gtid_basic:
- Add another possible error code when a connection is killed.
- Make sure that the IO thread has had time to complete its stop after START
SLAVE UNTIL. Otherwise, START SLAVE might run before the IO thread stops,
leaving the test case with a stopped IO thread that eventually causes a
wait timeout.
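A sketch of the shape of the second fix (the UNTIL position is illustrative;
the standard mtr include is assumed to be what the patch uses):

START SLAVE UNTIL master_gtid_pos= '0-1-100';
--source include/wait_for_slave_io_to_stop.inc
START SLAVE;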
There was a race: a small window between updating the slave position and
updating Seconds_Behind_Master, during which the test case could see the
wrong value.
Fix by waiting for the expected status to appear.
Applied the fix previously pushed into 10.0.
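The wait referred to above, sketched with the standard
include/wait_for_slave_param.inc (the expected value is illustrative):

--let $slave_param= Seconds_Behind_Master
--let $slave_param_value= 0
--source include/wait_for_slave_param.inc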
Jan's initial commit comment:
The problem is that the test could open Microsoft C++ Client Debugger
windows with an abort exception. Let's not try to test this on
Windows.
The problem is that the tests restart the server, and "shutdown_server"
looks for a pid file, which is not there in embedded mode.
Fix the tests so that they are not run in embedded mode.
innodb.innodb_stats_drop_locked fail and
innodb.innodb_stats_fetch_nonexistent fails in buildbot on Windows
Analysis: The problem is that the innodb_stats_create_on_corrupted
test renames mysql.innodb_index_stats, and all the rest of the tests
depend on this table.
Fix: After renaming back to the original, restart mysqld to
make sure that the table is correct.
The replication relay log position was sometimes updated incorrectly at the
end of a transaction in parallel replication. This happened because the relay
log file name was taken from the current Relay_log_info (SQL driver thread),
not the correct value for the transaction in question.
The result was that if a transaction was applied while the SQL driver thread
was at least one relay log file ahead, _and_ the SQL thread was subsequently
stopped before applying any events from the most recent relay log file, then
the relay log position would be incorrect - wrong relay log file name. Thus,
when the slave was started again, usually a relay log read error would result,
or in rare cases, if the position happened to be readable, the slave might
even skip arbitrary amounts of events.
In GTID mode, the relay log position is reset when both slave threads are
restarted, so this bug would only be seen in non-GTID mode, or in GTID mode
when only the SQL thread, not the IO thread, was stopped.
MDEV-7106: Sporadic test failure in multi_source.gtid
MDEV-7153: Yet another sporadic failure of multi_source.gtid in buildbot
This patch fixes three races in the multi_source.gtid test case that could
cause sporadic failures:
1. Do not put SHOW ALL SLAVES STATUS in the output; its output is not stable.
2. Ensure that slave1 has replicated as far as expected, before stopping its
connection to master1 (otherwise the following wait will time out due to rows
not replicated from master1).
3. Ensure that slave2 has replicated far enough before connecting slave1 to it
(otherwise we get an error during connect that slave1 is ahead of slave2).
I saw two test failures in rpl.rpl_gtid_crash where we get this in the error
log:
141123 12:47:54 [Note] InnoDB: Restoring possible half-written data pages
141123 12:47:54 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 6 page 3.
InnoDB: Trying to recover it from the doublewrite buffer.
141123 12:47:54 [Note] InnoDB: Recovered the page from the doublewrite buffer.
This test case deliberately crashes the server, and if this crash happens
right in the middle of writing a buffer pool page to disk, it is not
unexpected that we can get a half-written page. The page is recovered
correctly from the doublewrite buffer.
So this patch adds a suppression for this warning in the error log for this
test case.
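The suppression follows the usual mtr pattern; roughly (the exact regex is
per the patch, this one is illustrative):

CALL mtr.add_suppression("InnoDB: Warning: database page corruption or a failed");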
When a master server restarts, it logs a special restart format description
event in its binlog. When the slave sees this event, it knows it needs to roll
back any active partial transaction, in case the master crashed previously in
the middle of writing such transaction to its binlog.
However, there was a bug where this rollback did not reset rgi->pending_gtid.
This caused the @@gtid_slave_pos to be updated incorrectly with the GTID of
the partial transaction that was rolled back.
Fix this by always clearing rgi->pending_gtid in cleanup_context(), hopefully
preventing similar bugs from turning up in other special cases where a
transaction is rolled back during replication.
Thanks to Pavel Ivanov for tracking down the issue and providing a test case.
after Operating system error number 36 in a file operation.
Analysis: os_file_get_status did not handle the error ENAMETOOLONG
correctly.
Fix: Add correct handling for the error ENAMETOOLONG. Note that in the
InnoDB case the error is not passed all the way up to the server; that
would be a bigger revamp.
innodb_stats_sample_pages
Analysis: If you set the number of analyzed pages
to a very low number compared to the actual pages in
the table/index, ANALYZE randomly picks pages
(default 8 pages). This leads to the fact that a query
after ANALYZE TABLE can return different results each
time. If the index tree is small, smaller than
10 * n_sample_pages + total_external_size, then the
estimate is ok. For bigger index trees it is
common that we do not see any borders between
key values in the few pages we pick. But still
there may be n_sample_pages different key values,
or even more. The code then just tries to
approximate to n_sample_pages (8).
Fix: (1) Introduced new dynamic configuration variable
innodb_stats_sample_traditional that retains
the current design. Default false.
(2) If traditional sampling is not used, we use
n_sample_pages = max(min(srv_stats_sample_pages,
                         index->stat_index_size),
                     log2(index->stat_index_size) *
                         srv_stats_sample_pages);
(3) Introduced new dynamic configuration variable
innodb_stats_modified_counter (default = 0); if set, it
sets the lower bound for row updates before statistics are re-estimated.
If the user has provided an upper bound for how many rows need to be
updated before we calculate new statistics, we use the minimum of the
provided value and 1/16 of the table every 16th round. If no upper bound
is provided (srv_stats_modified_counter = 0, default), then we calculate
new statistics if 1/16 of the table has been modified
since the last time a statistics batch was run.
We calculate statistics at most every 16th round, since we may have
a counter table which is very small and updated very often.
/**
@param t table
@return true if the table has changed too much and stats need to be
recalculated */
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
16 + (t)->stat_n_rows / 16))
The problem is that page compressed tables currently require atomic_blobs,
and that feature is currently not available for row_format=redundant.
Fix: Disallow page compressed create option if table row_format=redundant.
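An illustrative reproduction of the now-rejected combination (the column
definition is arbitrary):

CREATE TABLE t1 (a INT) ENGINE=InnoDB ROW_FORMAT=REDUNDANT PAGE_COMPRESSED=1;
# fails with an error instead of creating an unusable table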
The problem is that the binlog position is updated before
Executed_log_entries and Slave_SQL_State. So, it's possible to hit
the moment when MASTER_POS_WAIT (and hence sync_with_master) already
returned success, but Slave_SQL_State and Executed_log_entries were not
modified yet.
Fixing it by adding a wait on the expected Executed_log_entries value.
use the same restriction for character_set_client on the command line
and from SQL.
Also: remove strange hack from thd_init_client_charset() that contradicted
the manual (collation_connection and character_set_result were not always set)
MDEV-6789 segfault in Item_func_from_unixtime::get_date on updating table with virtual columns
* prohibit VALUES in partitioning expression
* prohibit user and system variables in virtual column expressions
* fix Item_func_date_format to cache locale (for %M/%W to return the same as MONTHNAME/DAYNAME)
* fix Item_func_from_unixtime to cache time_zone directly, not THD (and not to crash)
* added tests for other incorrectly allowed (in vcols) functions to see that they don't crash
A "field" could be either an Item_field or
(if loading into a view) an Item_direct_ref that references Item_field.
Also: when iterating fields, use fields of the TABLE_LIST (table or view),
not fields of a TABLE (actual underlying table - might have more columns).
Do not allow server to start if binlog_format is set
to a format other than ROW. Also restrict the change
of GLOBAL/SESSION binlog_format value at runtime.
The test case rpl.rpl_parallel_temptable deliberately crashes the master
server as part of the testing. This makes it unsuitable for Valgrind
testing. So make sure that it will be skipped when testing with Valgrind.
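Skipping under Valgrind is normally a one-line guard at the top of the test
(a sketch, assuming the standard include is what the fix uses):

--source include/not_valgrind.inc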
Merge Facebook commit cd063ab930
authored by Peng Tian from https://github.com/facebook/mysql-5.6
Introduced a new configuration variable innodb_fatal_semaphore_wait_threshold,
which makes the fatal semaphore timeout configurable. Modified the original
commit so that no MariaDB server files are changed; instead, introduced a new
InnoDB/XtraDB configuration variable.
Its default/min/max values are 600/1/2^32-1 in seconds (it was hardcoded
as 600; now its default value is 600, so the default behavior of this diff
should be no change).
The real problem here was inconsistent handling of entry->commit_errno in
MYSQL_BIN_LOG::write_transaction_or_stmt(). Some return paths were setting it
to the value of errno, some were not. And the setting was redundant anyway,
as it is set consistently by the caller.
Fix by consistently setting it in the caller, and not in each return path in
the function.
The test failure happened because a DBUG_EXECUTE_IF() used in the test case
set an entry->commit_errno that was immediately overwritten in the caller with
whatever happened to be the value of errno. This could lead to different error
message in the .result file.
This bug was seen when parallel replication experienced a deadlock between
transactions T1 and T2, where T2 has reached the commit phase and is waiting
for T1 to commit first. In this case, the deadlock is broken by sending a kill
to T2; that kill error is then later detected and converted to a deadlock
error, which causes T2 to be rolled back and retried.
The problem was that the kill caused ha_commit_trans() to erroneously call
wakeup_subsequent_commits() on T3, signalling it to abort because T2 failed
during commit. This is incorrect, because the error in T2 is only a temporary
error, which will be resolved by normal transaction retry. We should not
signal error to the next transaction until we have executed the code that
handles such temporary errors.
So this patch just removes the calls to wakeup_subsequent_commits() from
ha_commit_trans(). They are incorrect in this case, and they are not needed in
general, as wakeup_subsequent_commits() must in any case be called in
finish_event_group() to wakeup any transactions that may have started to wait
after ha_commit_trans(). And normally, wakeup will in fact have happened
earlier, either from the binlog group commit code, or (in case of no
binlogging) after the fast part of InnoDB/XtraDB group commit.
The symptom of this bug was that replication would break on some transaction
with "Commit failed due to failure of an earlier commit on which this one
depends", but with no such failure of an earlier commit visible anywhere.
The retry of an event group in parallel replication set the wrong value for
the end log position of the event that was retried
(qev->future_event_relay_log_pos). It was too large by the size of the event,
so it pointed into the middle of the following event.
If the retry happened in the very last event of the event group, _and_ the SQL
thread was stopped just after successfully retrying that event, then the SQL
threads's relay log position would be left incorrect. Restarting the SQL
thread could then try to read events from a garbage offset in the relay log,
usually leading to an error about not being able to read the event.
The code in binlog group commit around wait_for_commit that controls commit
order did the wakeup of subsequent commits early, as soon as a following
transaction was put into the group commit queue, but before any such commit
had actually taken place. This caused problems with too early wakeup of
transactions that need to wait for prior transactions to commit, but do
not take part in the binlog group commit for one reason or another.
This patch solves the problem, by moving the wakeup to happen only after the
binlog group commit is completed.
This requires a new solution to ensure that transactions that arrive later
than the leader are still able to participate in group commit. This patch
introduces a flag wait_for_commit::commit_started. When this is set, a waiter
can queue itself up in the group commit queue.
This way, effectively the wait_for_prior_commit() is skipped only for
transactions that participate in group commit, so that skipping the wait is
safe. Other transactions still wait as needed for correctness.
setting of innodb_io_capacity_max
(a) Changed the behaviour so that if you set innodb_io_capacity to a
value > innodb_io_capacity_max, the value is accepted AND
innodb_io_capacity_max is set to innodb_io_capacity * 2.
(b) If someone reduces innodb_io_capacity_max
below innodb_io_capacity, then innodb_io_capacity
is reduced to the same level as innodb_io_capacity_max.
In both cases a warning is given to the user.
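An illustrative session showing both rules (the values are arbitrary):

SET GLOBAL innodb_io_capacity_max = 2000;
SET GLOBAL innodb_io_capacity = 5000;     # rule (a): accepted with a warning, max raised to 10000
SET GLOBAL innodb_io_capacity_max = 3000; # rule (b): warning, innodb_io_capacity lowered to 3000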
The test complains that the server failed to disappear upon shutdown /
wait_for_disconnect. Trying to solve the probable race condition by
adding a wait before restart.
The test case had a classic mistake: SET DEBUG_SYNC='now SIGNAL xxx'
followed immediately by SET DEBUG_SYNC='RESET'. This makes it
possible for the waiter to miss the signal, if it does not manage
to wake up prior to the RESET.
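The racy pattern and the shape of the fix, sketched (the sync point and
signal names are hypothetical):

# connection con1 is blocked inside a sync point armed earlier with
# SET DEBUG_SYNC= 'some_point WAIT_FOR go';
SET DEBUG_SYNC= 'now SIGNAL go';
SET DEBUG_SYNC= 'RESET';   # racy: may clear the signal before con1 wakes up
# fix: first let con1 consume the signal (e.g. reap its statement),
# then issue SET DEBUG_SYNC= 'RESET'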
The test cases had some --replace_result $USER USER. The problem is that the
value of $USER can be anything, depending on the name of the unix account that
runs the test suite. So random parts of the result could be erroneously
replaced, causing test failures.
Fix by making the replacements more specific, so they will match only the
intended stuff regardless of the value of $USER.
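The difference, sketched (the path is hypothetical):

# too broad: if $USER happens to be a short or common word, unrelated
# parts of the output get mangled
--replace_result $USER USER
# more specific: anchor the replacement to the context the name appears in
--replace_result /home/$USER/tmp /home/USER/tmp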
Merged Facebook commit 617aef9f911d825e9053f3d611d0389e02031225
authored by Inaam Rana to InnoDB storage engine (not XtraDB)
from https://github.com/facebook/mysql-5.6
WL#7047 - Optimize buffer pool list scans and related batch processing
Reduce excessive scanning of pages when doing flush list batches. The
fix is to introduce the concept of "Hazard Pointer"; this reduces the
time complexity of the scan from O(n*n) to O(n).
The concept of hazard pointer is reversed in this work. Academically a
hazard pointer is a pointer that the thread working on it will declare as
such, and as long as that thread is not done, no other thread is allowed
to do anything with it.
In this WL we declare the pointer as a hazard pointer and then if any
thread attempts to work on it, it is allowed to do so, but it has to adjust
the hazard pointer to the next valid value. We use hazard pointers solely
for reverse traversal of lists within a buffer pool instance.
Add an event to control the background flush thread. The background flush
thread wait has been converted to an os event timed wait so that it can be
signalled by threads that want to kick-start a background flush when the
buffer pool is running low on free/dirty pages.
Merge Facebook commit 154c579b828a60722a7d9477fc61868c07453d08
and e8f0052f9b112dc786bf9b957ed5b16a5749f7fd authored
by Steaphan Greene from https://github.com/facebook/mysql-5.6
Optimize prefix index queries to skip cluster index lookup when possible.
Currently InnoDB will always fetch the clustered index (primary key
index) for all prefix columns in an index, even when the value of a
particular record is smaller than the prefix length. This change
optimizes that case to use the record from the secondary index and avoid
the extra lookup.
Also adds two status vars that track how effective this is:
innodb_secondary_index_triggered_cluster_reads:
Times secondary index lookup triggered cluster lookup.
innodb_secondary_index_triggered_cluster_reads_avoided:
Times prefix optimization avoided triggering cluster lookup.
The debug configuration parameter innodb_optimistic_insert_debug
which was introduced for testing corner cases in B-tree handling
had a bug in it. The value 1 would trigger an infinite sequence
of page splits.
Fix: When the value 1 is specified, disable this debug feature.
Approved by Yasufumi Kinoshita
Merge Facebook commit 4f3e0343fd2ac3fc7311d0ec9739a8f668274f0d
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
Adds innodb_idle_flush_pct to enable tuning of the page flushing rate
when the system is relatively idle. We care about this, since doing
extra unnecessary flash writes shortens the lifespan of the flash.
Merge Facebook commit ca40b4417fd224a68de6636b58c92f133703fc68
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
Change the default value for innodb_log_compressed_pages to false
Logging these pages is a waste. We don't want this to be enabled.
One caution here: If the zlib version used by innodb is changed, but
the running version is still the previous version, and the running
version crashes, it is possible crash recovery could fail.
When crash recovery uses a zlib version at all different than the
version used by the crashed instance, it is possible that a redone
compression could fail, where the original did not, because the new
zlib version compresses the same data to a slightly larger size.
Because of the nature of compression, this is even possible when
upgrading to a version of zlib which actually performs overall better
compression than the previous version.
If this happens, mysql will fail to recover, since a page split can
not be safely triggered during crash recovery.
So, either the exact zlib version must be controlled between builds,
or these rare recovery failures must be accepted. The cost of
logging these pages is quite high, so we consider this limitation to
be worthwhile.
This failure scenario can not happen if there was a clean shutdown.
This is only relevant to restarting crashed instances, or starting an
instance built via a hot backup tool (XtraBackup).
New generation hard drives, SSDs and NVM devices support 4K
sector size. Supported sector size can be found using fstatvfs()
or GetDiskFreeSpace() functions.
dict_set_corrupted(): Use the canonical way of searching for
less-than-equal (PAGE_CUR_LE) and then checking low_match.
The code that was introduced in MySQL 5.5.17 in
Bug#11830883 SUPPORT "CORRUPTED" BIT FOR INNODB TABLES AND INDEXES
could position the cursor on the page supremum, and then attempt
to overwrite non-existing 7th field of the 1-field supremum record.
Approved by Jimmy Yang
Merged Facebook commit dd2d11be7aaf3be270e740fb95cbc4eacb52f4d7
authored by Rongrong Zhong from https://github.com/facebook/mysql-5.6
This fixes MySQL Bug #68220 innodb_rows_updated is misleading on slave
http://bugs.mysql.com/bug.php?id=68220
Added innodb_system_rows_read/inserted/updated/deleted counters
that are the equivalent of innodb_rows_* but that only account for
changes made to system databases (mysql, information_schema and
performance_schema). These counters will be used on slaves to
differentiate the updates made on system databases from those made on
user databases.
innodb_rows_* status counters are not updated when innodb_system_rows_*
are updated.
Merged Facebook commit ecff018632c6db49bad73d9233c3cdc9f41430e9
authored by Steaphan Greene from https://github.com/facebook/mysql-5.6
This change is to fix: http://bugs.mysql.com/62534
This makes innodb_max_dirty_pages_pct a double with min,default,max values
0.001, 75, 99.999.
This also makes innodb_max_dirty_pages_pct_lwm and adaptive_flushing_lwm
doubles, as these sysvars are inter-dependent.
Added more to the BUFFER POOL AND MEMORY section of SHOW INNODB STATUS:
Percent pages dirty: X.X
This is n_dirty_pages / used_pages
Percent all pages dirty: X.X
This is all n_dirty_pages / all-pages
Max dirty pages percent: X.X
This is innodb_max_dirty_pages_pct
Also changed all of buf from 2 to 3 digits of precision (%.2f -> %.3f).
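With the switch to doubles, fractional settings become expressible; an
illustrative session:

SET GLOBAL innodb_max_dirty_pages_pct = 0.001;
SET GLOBAL innodb_max_dirty_pages_pct_lwm = 0.001;
SELECT @@innodb_max_dirty_pages_pct;  # 0.001, no longer truncated to an integer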
some of the tables are created in InnoDB and some tables are created in MyISAM.
We need to create all tables in InnoDB. The fix is to add engine=innodb to the
CREATE TABLE statements.
approved in IM by Marko and Vasil.
The test case waits for other threads to complete, but the wait is only 2
seconds. This is likely to sometimes be too little on our heavily loaded
buildbot VMs, that can easily stall for more than 2 seconds from time to time.
So let's try to increase the timeout (to about 40 seconds) and see if it
helps.
The test case runs SHOW SLAVE HOSTS. The output of this is only stable after
all slaves have had time to register with the master; this happens
asynchronously.
The test was waiting for the slave with server_id=3 to appear in the output,
but it was missing a similar wait for server_id=2. Thus, if server_id=2 was
much slower to connect for some reason, it could be missing from the output,
causing the test to fail.
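The missing wait, sketched with the standard include/wait_show_condition.inc
(the parameter values are illustrative):

--let $show_statement= SHOW SLAVE HOSTS
--let $field= Server_id
--let $condition= = '2'
--source include/wait_show_condition.inc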
innodb_buffer_pool_pages_total depends on page size. On Power8 it is 65k
compared to 4k on Intel. As we round allocations to the page size, we may
get slightly more memory for the buffer pool.
Extend existing plugins to support
* SHOW QUERY_RESPONSE_TIME
* FLUSH QUERY_RESPONSE_TIME
* SHOW LOCALE
move userstat tables to use the new API instead of
hand-coded syntax
* ignore the OPTIMIZER_SWITCH_ENGINE_CONDITION_PUSHDOWN bit
* issue a deprecation warning on 'engine_condition_pushdown=on'
* remove unused remains of the old pre-5.5 engine_condition_pushdown variable
instead of having it 1K everywhere, make it 1K on 32-bit and 2K on 64-bit.
As the latter has larger pointers (and a larger sizeof(SHOW_VAR)),
it needs a larger buffer to store the same number of SHOW_VARs.
* fix debian patch
* update the copyright
* rename include guards to follow conventions
* restore incorrectly deleted test file, add clarification in a comment
* capitalize the first letter of the status variable
Added MAX_STATEMENT_TIME user variable to automatically kill queries after a given time limit has expired.
- Added timer functions based on pthread_cond_timedwait
- Added kill_handlerton() to signal storage engines about kill/timeout
- Added support for GRANT ... MAX_STATEMENT_TIME=#
- Copy max_statement_time to current user, if stored in mysql.user
- Added status variable max_statement_time_exceeded
- Added KILL_TIMEOUT
- Removed digest hash from performance schema tests as they change all the time.
- Updated test results that changed because of the new user variables or new fields in mysql.user
This functionality is inspired by work done by Davi Arnaut at Twitter.
The test case is copied from Davi's work.
Documentation can be found at
https://kb.askmonty.org/en/how-to-limittimeout-queries/
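A minimal usage sketch (the timeout value and table name are arbitrary;
exceeding the limit kills the query with ER_QUERY_TIMEOUT as described
above):

SET SESSION max_statement_time = 10;
SELECT * FROM some_big_table;  # killed once it has run for 10 seconds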
mysql-test/r/mysqld--help.result:
Updated for new help message
mysql-test/suite/perfschema/r/all_instances.result:
Added new mutex
mysql-test/suite/sys_vars/r/max_statement_time_basic.result:
Added testing of max_statement_time
mysql-test/suite/sys_vars/t/max_statement_time_basic.test:
Added testing of max_statement_time
mysql-test/t/max_statement_time.test:
Added testing of max_statement_time
mysys/CMakeLists.txt:
Added thr_timer
mysys/my_init.c:
mysys/mysys_priv.h:
Added new mutex and condition variables
Added new mutex and condition variables
mysys/thr_timer.c:
Added timer functions based on pthread_cond_timedwait()
This can be compiled with HAVE_TIMER_CREATE to benchmark against timer_create()/timer_settime()
sql/lex.h:
Added MAX_STATEMENT_TIME
sql/log_event.cc:
Safety fix (a timeout should be treated as an interrupted query)
sql/mysqld.cc:
Added support for timers
Added status variable max_statement_time_exceeded
sql/share/errmsg-utf8.txt:
Added ER_QUERY_TIMEOUT
sql/signal_handler.cc:
Added support for KILL_TIMEOUT
sql/sql_acl.cc:
Added support for GRANT ... MAX_STATEMENT_TIME=#
Copy max_statement_time to current user
sql/sql_class.cc:
Added timer functionality to THD.
Added thd_kill_timeout()
sql/sql_class.h:
Added timer functionality to THD.
Added KILL_TIMEOUT
Added max_statement_time variable in a similar manner to how long_query_time was done.
sql/sql_connect.cc:
Added handling of max_statement_time_exceeded
sql/sql_parse.cc:
Added starting and stopping timers for queries.
sql/sql_show.cc:
Added max_statement_time_exceeded for user/connects status in MariaDB 10.0
sql/sql_yacc.yy:
Added support for GRANT ... MAX_STATEMENT_TIME=# syntax, to be enabled in 10.0
sql/structs.h:
Added max_statement_time user resource
sql/sys_vars.cc:
Added max_statement_time variables
mysql-test/suite/roles/create_and_drop_role_invalid_user_table.test
Removed test as we require all fields in mysql.user table.
scripts/mysql_system_tables.sql
scripts/mysql_system_tables_data.sql
scripts/mysql_system_tables_fix.sql
Updated mysql.user with new max_statement_time field
merge from MySQL-5.6, revision:
revno: 3677.2.1
committer: Alfranio Correia <alfranio.correia@oracle.com>
timestamp: Tue 2012-02-28 16:26:37 +0000
message:
BUG#13627921 - MISSING FLAGS IN SQL_COMMAND_FLAGS MAY LEAD TO REPLICATION PROBLEMS
Flags in sql_command_flags[command] are not correctly set for the following
commands:
. SQLCOM_SET_OPTION is missing CF_CAN_GENERATE_ROW_EVENTS;
. SQLCOM_BINLOG_BASE64_EVENT is missing CF_CAN_GENERATE_ROW_EVENTS;
. SQLCOM_REVOKE_ALL is missing CF_CHANGES_DATA;
. SQLCOM_CREATE_FUNCTION is missing CF_AUTO_COMMIT_TRANS;
This may lead to a wrong sequence of events in the binary log. To fix
the problem, we correctly set the flags in sql_command_flags[command].
Normally, SET SESSION SQL_LOG_BIN is used by DBAs to run a
non-conflicting command locally only, ensuring it does not
get replicated.
Setting GLOBAL SQL_LOG_BIN would not require all sessions to
disconnect. When SQL_LOG_BIN is changed globally, it does not
immediately take effect for any sessions. It takes effect by
becoming the session-level default inherited at the start of
each new session, and this setting is kept and cached for the
duration of that session. Setting it intentionally is unlikely
to have a useful effect under any circumstance; setting it
unintentionally, such as while intending to use SET [SESSION],
is potentially disastrous. Accidentally using SET GLOBAL
SQL_LOG_BIN will not show an immediate effect to the user,
instead not having the desired session-level effect, and thus
causing other potential problems with local-only maintenance
being binlogged and executed on slaves; And transactions from
new sessions (after SQL_LOG_BIN is changed globally) are not
binlogged and replicated, which would result in irrecoverable
or difficult data loss.
This is the regular way GLOBAL variables work, but in a replication
context it does not look right: on a working server (with connected
sessions) 'set global sql_log_bin' is issued and none of those
connections is affected. An inexperienced DBA, after noticing
that the command did "nothing", will change the session variable and
most probably won't unset the global variable, causing new sessions
to not be binlogged.
Setting GLOBAL SQL_LOG_BIN allows DBA to stop binlogging on all
new sessions, which can be used to make a server "replication
read-only" without restarting the server. But this has such big
requirements, stop all existing connections, that it is more
likely to make a mess, it is too risky to allow the GLOBAL variable.
The statement 'SET GLOBAL SQL_LOG_BIN=N' will produce an error
in 5.5, 5.6 and 5.7. Reading the GLOBAL SQL_LOG_BIN will produce
a deprecation warning in 5.7.
Added comments
Ensure that tokudb test works even if jemalloc is not installed
Removed not referenced function Item::remove_fixed()
mysql-test/suite/rpl/t/rpl_gtid_reconnect.test:
Fixed race condition
sql/item.cc:
Indentation fix
sql/item.h:
Removed not used function
Added comment
sql/sql_select.cc:
Fixed indentation
storage/tokudb/mysql-test/rpl/include/have_tokudb.opt:
Ensure that tokudb test works even if jemalloc is not installed
storage/tokudb/mysql-test/tokudb/suite.opt:
Ensure that tokudb test works even if jemalloc is not installed
storage/tokudb/mysql-test/tokudb_add_index/suite.opt:
Ensure that tokudb test works even if jemalloc is not installed
storage/tokudb/mysql-test/tokudb_alter_table/suite.opt:
Ensure that tokudb test works even if jemalloc is not installed
storage/tokudb/mysql-test/tokudb_bugs/suite.opt:
Ensure that tokudb test works even if jemalloc is not installed
storage/tokudb/mysql-test/tokudb_mariadb/suite.opt:
Ensure that tokudb test works even if jemalloc is not installed
FROM A FUNCTION
Scenario:
In a stored procedure, CREATE TABLE statement is not allowed. But an
exception is provided for CREATE TEMPORARY TABLE. We can create a temporary
table in a stored procedure.
Let there be two stored functions f1 and f2 and two stored procedures p1 and
p2. Their properties are as follows:
. stored function f1() calls stored procedure p1().
. stored function f2() calls stored procedure p2().
. stored procedure p1() creates temporary table t1.
. stored procedure p2() does DML on t1.
Consider the following situation:
1. Autocommit mode is on.
2. select f1()
3. select f2()
Step 2: In this step, t1 would be created via p1(). A table-level transaction
lock would have been taken. The ::external_lock() would not have been called
on this table. At the end of step 2, because autocommit mode is on, this
table-level lock will be released.
Step 3: When we execute DML on table t1 via p2() we have two problems:
Problem 1:
The function ha_innobase::external_lock() would have been called but since
it is a select query no table level locks would have been taken. Hence the
following assert will fail:
ut_ad(lock_table_has(thr_get_trx(thr), index->table, LOCK_IX));
Solution:
The solution is to identify this situation, take a table-level lock,
and use the proper lock type prebuilt->select_lock_type = LOCK_X for DML
operations.
Problem 2:
Another problem is that in step 3, ha_innobase::open() is never called on
the table t1.
Solution:
The solution is to identify this situation and re-initialize the handler
of table t1.
rb#6429 approved by Krunal.
Problem:
Creation of a table fails when innodb_strict_mode is enabled, but the same
table is created without any warning when innodb_strict_mode is disabled.
Solution:
If creation of a table fails with an error when innodb_strict_mode is
enabled, it must issue a warning when innodb_strict_mode is disabled.
rb#6723 approved by Krunal.
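An illustrative reproduction of the intended behavior described above (the
conflicting table options are chosen for illustration):

SET SESSION innodb_strict_mode = ON;
CREATE TABLE t1 (a INT) ENGINE=InnoDB ROW_FORMAT=COMPACT KEY_BLOCK_SIZE=8;
# fails with an error
SET SESSION innodb_strict_mode = OFF;
CREATE TABLE t1 (a INT) ENGINE=InnoDB ROW_FORMAT=COMPACT KEY_BLOCK_SIZE=8;
# now succeeds, with a warning about the ignored KEY_BLOCK_SIZE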
We should assume that the storage engine will report the first duplicate key for this case.
The old code for suppressing the unsafe logging error with LIMIT didn't work because of wrong usage of my_interval_timer().
Suppress unsafe logging errors to the error log if we get too many unsafe logging errors in a short time.
This is to not overflow the error log with meaningless errors.
- Each error code is suppressed and counted separately.
- We do a 5 minute suppression of new errors if we get more than 10 errors in that time.
Only print unsafe logging errors if log_warnings > 1.
mysql-test/suite/binlog/r/binlog_stm_unsafe_warning.result:
Update test results as INSERT ... ON DUPLICATE KEY UPDATE doesn't get logged anymore
mysql-test/suite/binlog/r/binlog_unsafe.result:
Update test results as INSERT ... ON DUPLICATE KEY UPDATE doesn't get logged anymore
mysql-test/suite/engines/README:
Fixed typos
mysql-test/suite/rpl/r/rpl_known_bugs_detection.result:
Update test results as INSERT ... ON DUPLICATE KEY UPDATE doesn't get logged anymore
sql/sql_base.cc:
Don't log warning if there are two unique keys used with INSERT .. ON DUPLICATE KEY UPDATE.
We should assume that the storage engine will report the first duplicate key for this case.
sql/sql_class.cc:
Suppress error in binary log if we get too many unsafe logging errors in a short time.
Only print unsafe logging errors if log_warnings > 1
The problem was that my_hash_sort didn't properly remove end-space characters, so strings that should compare
as identical were seen as different strings. (Space was handled correctly, but not NBSP.)
This caused duplicate key errors when a heap table was converted to Aria as part of overflow in GROUP BY.
Fixed by removing all characters that compare as end space when creating the hash.
Other things:
- Fixed that --sorted_results also works for errors in mysqltest.
- Speed up hashing by not comparing strings that have different hashes.
- Speed up many my_hash_sort functions by using registers to calculate hash instead of pointers.
This was previously done for some functions, but not for all.
- Made a macro of the hash function, to simplify code and to be able to experiment with new hash functions.
client/mysqltest.cc:
Fixed that --sorted_results also works for error messages.
mysql-test/r/ctype_partitions.result:
New test to ensure that partitioning on hash works
mysql-test/suite/multi_source/gtid.result:
Updated result
mysql-test/suite/multi_source/gtid.test:
Test that --sorted_result works for error messages
mysql-test/suite/multi_source/gtid_ignore_duplicates.result:
Updated result
mysql-test/suite/multi_source/gtid_ignore_duplicates.test:
Updated result
mysql-test/suite/multi_source/load_data.result:
Updated result
mysql-test/suite/multi_source/load_data.test:
Updated result
mysql-test/t/ctype_partitions.test:
New test to ensure that partitioning on hash works
storage/heap/hp_write.c:
Speed up hashing by not comparing strings that have different hashes.
storage/maria/ma_check.c:
Extra debug
strings/ctype-bin.c:
Use macro for hash function
strings/ctype-latin1.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-mb.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-simple.c:
Use macro for hash function
Use same variable names as in other my_hash_sort functions.
Update my_hash_sort_simple() to properly remove end space (patch by Bar)
strings/ctype-uca.c:
Ignore duplicated space inside strings and end space in my_hash_sort_uca(). This fixed MDEV-6255
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-ucs2.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/ctype-utf8.c:
Use macro for hash function
Use registers to calculate hash (speedup)
strings/strings_def.h:
Made a macro of the hash function, to simplify code and to be able to experiment with new hash functions.
MDEV-6560 Assertion `! is_set() ' failed in Diagnostics_area::set_ok_status on killing CREATE OR REPLACE
MDEV-6525 Assertion `table->pos_in_locked_tables == __null || table->pos_in_locked_tables->table == table' failed in mark_used_tables_as_free_for_reuse, locking problems and binlogging problems on CREATE OR REPLACE under lock.
mysql-test/r/create_or_replace.result:
Added test for MDEV-6560
mysql-test/t/create_or_replace.test:
Added test for MDEV-6560
mysql-test/valgrind.supp:
Added suppression for OpenSuse 12.3
sql/sql_base.cc:
More DBUG
sql/sql_class.cc:
Changed thd_sqlcom_can_generate_row_events() so that it does not report that CREATE OR REPLACE is generating row events.
This is safe as this function is only used by InnoDB/XtraDB to check if a query is generating row events as part of another transaction. As CREATE is always run as its own transaction, this isn't a problem.
This fixed MDEV-6525.
sql/sql_table.cc:
Remember if reopen_tables() generates an error (which can only happen in case of KILL).
This fixed MDEV-6560
DELETE FROM ports WHERE ports.id = 'f37aa3fe-ab99-4d0f-a566-6cd3169d7516'
where table ports has foreign keys.
The problem is repeatable with 10.0.12-galera but not with 10.0.13-galera.
Added test case to regression set.
If the slave gets a reconnect in the middle of a GTID event group, normally
it will re-fetch that event group, skipping the first part that was already
queued for the SQL thread.
However, if the master crashed while writing the event group, the group is
incomplete. This patch detects this case and makes sure that the
transaction is rolled back and nothing is skipped from any following
event groups.
Similarly, a network proxy might cause the reconnect to end up on a
different master server. Detect this by noticing a different server_id,
and similarly in this case roll back the partially received group.
MDEV-6656: Test wsrep.variables hangs
Analysis: wsrep_applier_thread shutdown signaling does not always work
correctly, causing a timing problem where the main thread waits on a
condition variable for a signal that all worker threads have ended.