The reason for the problem was that the hash of changed files in the key
cache was too small (128 entries). Fixed by making the hash size larger and
configurable.
- Introduced key-cache-file-hash-size (default 512) for MyISAM and aria_pagecache_file_hash_size (default 512) for Aria.
- Added a new status variable "Feature_delay_key_write", which counts the number of tables opened that use delay_key_write.
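A minimal sketch of the idea, with hypothetical names (the real structures
live in mysys/mf_keycache.c): the per-file hash of changed blocks is sized
from a runtime variable instead of a fixed 128-entry compile-time constant.

    #include <stdlib.h>

    /* Hypothetical, simplified structures; the real code is in
       mysys/mf_keycache.c.  The point: the changed-blocks hash size is a
       runtime value (default 512) instead of a fixed constant (128). */
    typedef struct st_block_link BLOCK_LINK;

    typedef struct st_simple_key_cache
    {
      unsigned int changed_blocks_hash_size; /* key_cache_file_hash_size */
      BLOCK_LINK **changed_blocks;           /* one bucket per hash slot */
      BLOCK_LINK **file_blocks;
    } SIMPLE_KEY_CACHE;

    static int init_changed_blocks_hash(SIMPLE_KEY_CACHE *kc,
                                        unsigned int hash_size)
    {
      kc->changed_blocks_hash_size= hash_size;
      kc->changed_blocks= calloc(hash_size, sizeof(BLOCK_LINK *));
      kc->file_blocks=    calloc(hash_size, sizeof(BLOCK_LINK *));
      return !kc->changed_blocks || !kc->file_blocks;
    }

    /* A larger hash means fewer collisions when many files have changed
       blocks in the cache at the same time. */
    static unsigned int changed_blocks_bucket(SIMPLE_KEY_CACHE *kc,
                                              unsigned int file)
    {
      return file % kc->changed_blocks_hash_size;
    }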
mysql-test/r/features.result:
Added test of Feature_delay_key_write
mysql-test/r/key_cache.result:
Updated tests as the number of blocks has changed
mysql-test/r/mysqld--help.result:
Updated result
mysql-test/suite/maria/maria3.result:
Updated result
mysql-test/suite/sys_vars/r/key_cache_file_hash_size_basic.result:
Test new variable
mysql-test/suite/sys_vars/t/aria_pagecache_file_hash_size_basic.test:
Test new variable
mysql-test/suite/sys_vars/t/key_cache_file_hash_size_basic.test:
Test new variable
mysql-test/t/features.test:
Added test of Feature_delay_key_write
mysql-test/t/key_cache.test:
Updated tests as the number of blocks has changed
mysys/mf_keycache.c:
Made CHANGED_BLOCKS_HASH dynamic
sql/handler.cc:
Updated call to init_key_cache()
sql/mysqld.cc:
Added "Feature_delay_key_write"
Added support for key-cache-file-hash-size
sql/mysqld.h:
Added support for key-cache-file-hash-size
sql/sql_class.h:
Added feature_files_opened_with_delayed_keys
sql/sys_vars.cc:
Added key_cache_file_hash_size
storage/maria/ha_maria.cc:
Added pagecache_file_hash_size
Added counting of files with delay_key_write
storage/maria/ma_checkpoint.c:
Fixed compiler warning
storage/maria/ma_pagecache.c:
Made PAGECACHE_CHANGED_BLOCKS_HASH into a variable
storage/maria/ma_pagecache.h:
Made PAGECACHE_CHANGED_BLOCKS_HASH into a variable
storage/maria/ma_rt_test.c:
Updated parameters for init_pagecache()
storage/maria/ma_test1.c:
Updated parameters for init_pagecache()
storage/maria/ma_test2.c:
Updated parameters for init_pagecache()
storage/maria/ma_test3.c:
Updated parameters for init_pagecache()
storage/maria/maria_chk.c:
Updated parameters for init_pagecache()
storage/maria/maria_ftdump.c:
Updated parameters for init_pagecache()
storage/maria/maria_pack.c:
Updated parameters for init_pagecache()
storage/maria/maria_read_log.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_consist.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_rwconsist.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_rwconsist2.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_single.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_first_lsn-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_max_lsn-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_multigroup-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_multithread-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_noflush-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_nologs-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_pagecache-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_purge-t.c:
Updated parameters for init_pagecache()
storage/myisam/ha_myisam.cc:
Added counting of files with delay_key_write
storage/myisam/mi_check.c:
Updated call to init_key_cache()
storage/myisam/mi_test1.c:
Updated call to init_key_cache()
storage/myisam/mi_test2.c:
Updated call to init_key_cache()
storage/myisam/mi_test3.c:
Updated call to init_key_cache()
storage/myisam/mi_test_all.sh:
Fixed broken test
storage/myisam/myisam_ftdump.c:
Updated call to init_key_cache()
storage/myisam/myisamchk.c:
Updated call to init_key_cache()
storage/myisam/myisamlog.c:
Updated call to init_key_cache()
The bug was that my_real_read() called net_before_header_psi() multiple times for long packets.
Fixed by adding a flag that is set while we are reading a packet header.
Also cleaned up the interface of my_net_read() to avoid unnecessary calls
when the performance schema is not used.
- Added my_net_read_packet(), a new version of my_net_read() with a new parameter that tells whether we are reading the first packet of a new command. my_net_read() is still in the client library for old clients.
- Removed THD->m_server_idle (not needed anymore, as this is now given as an argument to my_net_read_packet()).
- Added tests for compressed protocol and big packets
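A rough sketch of the new call interface (simplified, not the actual server
code): the extra flag makes sure the performance schema hook runs once per
command, not once per fragment of a long multi-packet read.

    #include <stddef.h>

    typedef unsigned long ulong;
    typedef char my_bool;
    typedef struct st_net NET;          /* opaque here */

    /* Assumed provided by the server; simplified signatures. */
    ulong my_real_read(NET *net, size_t *complen, my_bool header);
    void net_before_header_psi(NET *net);

    /* Sketch: the flag is true only when the server starts reading a new
       command, so the performance schema hook runs once per command. */
    ulong my_net_read_packet(NET *net, my_bool read_from_server)
    {
      size_t complen;
      if (read_from_server)
        net_before_header_psi(net);     /* exactly once per command */
      return my_real_read(net, &complen, 1);
    }

    /* Old entry point, kept for old clients; no PSI notification. */
    ulong my_net_read(NET *net)
    {
      return my_net_read_packet(net, 0);
    }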
include/mysql.h.pp:
Added my_net_read_packet() as a replacement for my_net_read()
include/mysql_com.h:
Added my_net_read_packet() as a replacement for my_net_read()
mysql-test/r/mysql_client_test_comp.result:
New test
mysql-test/t/mysql_client_test-master.opt:
Added max_allowed_packet to be able to test big packets and packet size overflows.
mysql-test/t/mysql_client_test_comp-master.opt:
New test
mysql-test/t/mysql_client_test_nonblock-master.opt:
Added max_allowed_packet to be able to test big packets and packet size overflows.
sql-common/client.c:
Use my_net_read_packet()
sql/mf_iocache.cc:
Use my_net_read_packet()
sql/mysqld.cc:
Removed THD->m_server_idle (not needed anymore, as this is now given as an argument to my_net_read_packet()).
sql/net_serv.cc:
Added an argument to my_real_read() to indicate whether we are reading the first block of the next statement and should notify the performance schema.
Added 'compatibility function' my_net_read().
Added my_net_read_packet(), a new version of my_net_read() with a new parameter that tells whether we are reading the first packet of a new command.
sql/sql_class.cc:
Removed m_server_idle (not needed anymore)
sql/sql_class.h:
Removed m_server_idle (not needed anymore)
sql/sql_parse.cc:
Removed m_server_idle (not needed anymore)
tests/mysql_client_test.c:
Added tests for compressed protocol and big packets
Merge the patches into MariaDB 10.0 main.
With this patch, parallel replication will now automatically retry a
transaction that fails due to deadlock or other temporary error, same as
single-threaded replication.
We catch deadlocks with InnoDB transactions due to enforced commit order. If
T1 must commit before T2 in parallel replication and T1 ends up waiting for T2
inside InnoDB, we kill T2 and retry it later to resolve the deadlock
automatically.
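A sketch of the kill logic under stated assumptions (rpl_commits_before()
and slave_background_kill() are hypothetical helpers standing in for the
real server code):

    typedef struct THD THD;

    /* Hypothetical helpers standing in for the real server code. */
    int  rpl_commits_before(const THD *a, const THD *b); /* a before b? */
    void slave_background_kill(THD *victim);  /* kill, roll back, retry */

    /* Called from InnoDB when `waiter` starts waiting on a lock that
       `holder` owns.  If the waiter must commit first, the holder can
       never get to commit, so the wait is a deadlock: kill the holder
       and let parallel replication retry it later. */
    void report_wait_for(THD *waiter, THD *holder)
    {
      if (rpl_commits_before(waiter, holder))
        slave_background_kill(holder);
    }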
Fix a bug discovered by valgrind in Buildbot. The logic in checking for slave
init thread completion was reversed, so depending on thread scheduling,
server startup could hang.
Also add another variant of SSL valgrind suppression, needed for different
library version.
When a MyISAM query is killed midway, the query is logged to the binlog marked
with the error.
The slave does not attempt to run the query, but aborts with a suitable error
message in the error log for the DBA to act on.
In this case, the parallel replication code would check the sql_errno() code
even though no my_error() had been set. In debug builds, this causes an assertion.
Fixed the code to check that we actually have an error set before querying for
an error code.
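A minimal sketch of the fixed pattern (a simplified stand-in for the
THD/Diagnostics_area interface, not the real declarations):

    /* Simplified stand-in for THD/Diagnostics_area: only ask for an
       error code once we know an error has actually been raised. */
    struct diag_area
    {
      int is_error;                /* set by my_error() */
      unsigned int sql_errno;
    };

    static unsigned int slave_expected_error(const struct diag_area *da)
    {
      /* Before the fix, da->sql_errno was read unconditionally, which
         triggers an assertion in debug builds when no error is set. */
      return da->is_error ? da->sql_errno : 0;
    }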
After-review changes.
For this patch in 10.0, we do not introduce a new public storage engine API,
we just fix the InnoDB/XtraDB issues. In 10.1, we will make a better public
API that can be used for all storage engines (MDEV-6429).
Eliminate the background thread that did deadlock kills asynchronously.
Instead, we ensure that the InnoDB/XtraDB code can handle doing the kill from
inside the deadlock detection code (when thd_report_wait_for() needs to kill a
later thread to resolve a deadlock).
(We preserve the part of the original patch that introduces a dedicated mutex
and condition for the slave init thread, to remove the abuse of
LOCK_thread_count for start/stop synchronisation of the slave init thread).
The sql_slave_skip_counter is important to be able to recover replication from
certain errors. Often, an appropriate solution is to set
sql_slave_skip_counter to skip over a problem event. But setting
sql_slave_skip_counter produced an error in GTID mode, with a suggestion to
instead set @@gtid_slave_pos to point past the problem event. This however is
not always possible; for example, in case of an INCIDENT event, that event
does not have any GTID to assign to @@gtid_slave_pos.
With this patch, sql_slave_skip_counter now works in GTID mode the same way
as in non-GTID mode. When set, that many initial events are skipped when the
SQL thread starts, plus as many extra events as are needed to completely skip any
partially skipped event group. The GTID position is updated to point past the
skipped event(s).
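A sketch of the skip logic, with hypothetical names; the real implementation
lives in the SQL thread's event loop:

    typedef struct Event Event;

    /* Hypothetical helpers for the sketch. */
    int  event_ends_group(const Event *ev);
    void update_gtid_position_for(const Event *ev);

    /* Returns 1 if the event must be skipped, 0 if it should execute. */
    int maybe_skip_event(const Event *ev,
                         unsigned long *skip_counter,
                         int *skipping_group)
    {
      if (*skip_counter > 0)
      {
        (*skip_counter)--;
        *skipping_group= !event_ends_group(ev); /* may end mid-group */
      }
      else if (*skipping_group)
        *skipping_group= !event_ends_group(ev); /* finish the group */
      else
        return 0;                       /* execute the event normally */

      update_gtid_position_for(ev);     /* point past the skipped event */
      return 1;
    }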
MDEV-6344: mysqldump issues FLUSH TABLES, which gets written into binlog and replicated
Add a --gtid option (for compatibility, the original behaviour is preserved
when --gtid is not used).
With --gtid, --master-data and --dump-slave output the GTID position (the
old-style file/offset position is still output, but commented out). Also, a
CHANGE MASTER TO master_use_gtid=slave_pos is output to ensure a provisioned
slave is configured in GTID, as requested.
Without --gtid, the GTID position is still output, if available, but commented
out.
Also fix MDEV-6344, to avoid FLUSH TABLES getting into the binlog. Otherwise a
mysqldump on a slave server will silently inject a GTID which does not exist
on the master, which is highly undesirable.
Also fix an incorrect error handling around obtaining binlog position with
--master-data (was probably unlikely to trigger in most cases).
Analysis: For some reason, the table stats for a table pointed to from an
index are not initialized. Added a warning for this situation, plus
initialization of the table stats. This is better than asserting.
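A minimal sketch of the recovery pattern described above (hypothetical
names, not the actual storage engine code):

    /* Hypothetical names: warn and initialize instead of asserting when
       the stats for a table referenced from an index are missing. */
    typedef struct table_stats_sketch
    {
      int stat_initialized;
    } table_stats_sketch;

    void log_warning(const char *msg);        /* assumed logging helper */
    void init_table_stats(table_stats_sketch *t);

    void ensure_table_stats(table_stats_sketch *t)
    {
      if (!t->stat_initialized)               /* previously: an assert */
      {
        log_warning("table stats not initialized for table of index");
        init_table_stats(t);                  /* recover and continue */
      }
    }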
These tests use search_pattern_in_file.inc to search the error log for
expected output. However, search_pattern_in_file.inc by default searched only
the first 50000 bytes, so if the error log grew too big the tests would fail.
This patch extends search_pattern_in_file.inc with an option to specify how
much of the file to search, and whether to search from the start of the file
or from the end. Then the rpl.rpl_checksum and rpl.rpl_gtid_errorlog test
cases are fixed to search the last 50000 bytes of the error log, which will
work no matter how large prior tests have made it.
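The include file itself is mysqltest/perl code; here is a small C sketch of
the underlying idea of searching only the last N bytes of a possibly huge
file:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* C sketch of the technique behind the extended
       search_pattern_in_file.inc: search only the last `range` bytes of
       the error log instead of only the first 50000 bytes. */
    static int pattern_in_file_tail(const char *path, const char *pattern,
                                    long range)
    {
      FILE *f= fopen(path, "rb");
      char *buf;
      size_t got;
      int found= 0;
      long size, start;
      if (!f)
        return 0;
      fseek(f, 0, SEEK_END);
      size= ftell(f);
      start= size > range ? size - range : 0;   /* search from the end */
      fseek(f, start, SEEK_SET);
      if (!(buf= malloc((size_t) range + 1)))
      {
        fclose(f);
        return 0;
      }
      got= fread(buf, 1, (size_t) range, f);
      buf[got]= '\0';               /* note: stops at embedded NUL bytes */
      found= strstr(buf, pattern) != NULL;
      free(buf);
      fclose(f);
      return found;
    }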
The direct cause of the assertion was missing error handling in
record_gtid(). If ha_commit_trans() fails for the statement commit, there was
missing code to catch the error and do ha_rollback_trans() in this case; this
caused close_thread_tables() to assert.
Normally, this error case is not hit, but in this case it was triggered due to
another bug: When a transaction T1 fails during parallel replication, the code
would signal following transactions that they could start to run without
properly marking the error condition. This caused subsequent transactions to
incorrectly start replicating, only to get an error later during their own
commit step. This was particularly serious if the subsequent transactions were
DDL or MyISAM updates, which cannot be rolled back and would leave replication
in an inconsistent state.
Fixed by 1) in case of error, only signal following transactions to continue
once the error has been properly marked and those transactions will know not
to start; and 2) implement proper error handling in record_gtid() in the case
that statement commit fails.
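A sketch of the fixed error path, using hypothetical wrappers for the
statement-level commit and rollback calls:

    typedef struct THD THD;

    /* Hypothetical wrappers for the statement-level commit/rollback. */
    int  stmt_commit(THD *thd);
    void stmt_rollback(THD *thd);

    /* Sketch of the fixed path in record_gtid(): if the statement commit
       fails, roll the statement back; otherwise close_thread_tables()
       asserts because the statement transaction is still open. */
    int record_gtid_commit(THD *thd)
    {
      if (stmt_commit(thd))
      {
        stmt_rollback(thd);             /* the previously missing step */
        return 1;                       /* propagate the error */
      }
      return 0;
    }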
If replication breaks in GTID mode, it is not trivial to determine the GTID of
the failing event group. This is a problem, as such GTID is needed eg. to
explicitly set @@gtid_slave_pos to skip to after that event group, or to
compare errors on different servers, etc.
Fix by ensuring that relevant slave errors logged to the error log include the
GTID of the event group containing the problem event.
This is MySQL Bug#59123. The message string stored in an INCIDENT event was
not zero-terminated. This caused any following checksum bytes (if enabled on
the master) to be output to the error log as trailing garbage when the message
was printed to the error log.
Backport the patch from MySQL 5.6:
revno: 2876.228.200
revision-id: zhenxing.he@sun.com-20110111051323-w2xnzvcjn46x6h6u
committer: He Zhenxing <zhenxing.he@sun.com>
timestamp: Tue 2011-01-11 13:13:23 +0800
message:
BUG#59123 rpl_stm_binlog_max_cache_size fails sporadically with found warnings
Also add a test case.
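A minimal sketch of the zero-termination fix for the incident message
(hypothetical helper, not the actual event code):

    #include <string.h>

    /* Copy the incident message using its explicit length and
       zero-terminate it, so checksum bytes following it in the event
       buffer are never printed as trailing garbage. */
    static void copy_incident_message(char *dst, size_t dst_size,
                                      const char *src, size_t src_len)
    {
      if (dst_size == 0)
        return;
      if (src_len >= dst_size)
        src_len= dst_size - 1;          /* truncate, don't overflow */
      memcpy(dst, src, src_len);
      dst[src_len]= '\0';               /* the missing terminator */
    }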
MySQL 5.6 implemented WL#344, which is about a MASTER_DELAY option to CHANGE
MASTER. But as part of this worklog, the format of the relay-log.info file
was changed. The new format is not understood by earlier versions, nor by
MariaDB 10.0, so switching the server to one of those versions would cause
the slave to abort with an error due to reading incorrect data out of relay-log.info.
Fix this by backporting from the WL#344 patch just the code that understands
the new relay-log.info format. We still write out the old format, and none of
the MASTER_DELAY feature is backported with this commit.
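A simplified sketch of how the two relay-log.info formats can be told apart
(under the assumption that the 5.6 format begins with a numeric line count,
while the old format starts directly with the relay log file name):

    #include <stdio.h>
    #include <stdlib.h>
    #include <ctype.h>

    /* Simplified format detection; assumption: the new format's first
       line is a line count, the old format's first line is data. */
    static int relay_log_info_lines(FILE *f)
    {
      char first[512];
      if (!fgets(first, sizeof(first), f))
        return -1;                      /* empty/unreadable file */
      if (isdigit((unsigned char) first[0]))
        return atoi(first);             /* new format: count of lines */
      return 0;                         /* old format */
    }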
The INCIDENT_EVENT always caused slave error and abort, without checking
--slave-skip-errors.
Now, if error 1590, ER_SLAVE_INCIDENT is included in the --slave-skip-errors
list, incident events will be ignored.
This is a merge of this MySQL 5.6 patch:
revision-id: frazer@mysql.com-20110314170916-ypgin17otj3ucx95
committer: Frazer Clement <frazer@mysql.com>
timestamp: Mon 2011-03-14 17:09:16 +0000
message:
Bug#11799671 NOT POSSIBLE TO SKIP INCIDENT ERRORS
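A minimal sketch of the new check (error_is_skipped() is a hypothetical
stand-in for the --slave-skip-errors bitmap lookup):

    /* Hypothetical stand-in for the --slave-skip-errors lookup. */
    #define ER_SLAVE_INCIDENT 1590
    int error_is_skipped(unsigned int err);

    /* Sketch: the incident event now consults --slave-skip-errors like
       other errors, instead of always stopping the slave. */
    int handle_incident_event(void)
    {
      if (error_is_skipped(ER_SLAVE_INCIDENT))
        return 0;                  /* ignore incident, keep replicating */
      return ER_SLAVE_INCIDENT;    /* old behaviour: stop with an error */
    }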
When plugin=mysql_native_password (or mysql_old_password), take the password
from *either* password *or* authentication_string, whichever is set.
This makes no sense, but alas, that's what MySQL-5.6 does.
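A sketch of the selection rule (hypothetical struct; the real code reads
the mysql.user columns):

    /* Hypothetical struct mirroring the two credential columns. */
    struct user_cred
    {
      const char *password;               /* Password column */
      const char *authentication_string;  /* authentication_string */
    };

    /* Whichever of the two columns is set wins, matching the MySQL 5.6
       behaviour described above. */
    static const char *effective_password(const struct user_cred *u)
    {
      if (u->password && u->password[0])
        return u->password;
      return u->authentication_string;
    }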
replication causing replication to fail.
Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.
The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.
With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.
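A sketch of the check, with hypothetical names for the server-layer hook:

    typedef struct THD THD;

    /* Hypothetical server-layer hook: nonzero if the two transactions
       already have a fixed commit order (as in parallel replication). */
    int thd_commit_order_fixed(const THD *waiter, const THD *holder);

    /* Sketch of the InnoDB-side decision: a gap lock only guards
       against phantoms, and two transactions with a fixed commit order
       cannot race each other, so the gap lock need not block. */
    int gap_lock_must_wait(const THD *waiter, const THD *holder,
                           int lock_is_gap)
    {
      if (lock_is_gap && thd_commit_order_fixed(waiter, holder))
        return 0;                       /* proceed in parallel */
      return 1;                         /* normal blocking behaviour */
    }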
Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.
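A sketch of the victim preference, again with hypothetical helper names:

    #include <stddef.h>

    typedef struct THD THD;

    int rpl_commits_before(const THD *a, const THD *b); /* hypothetical */

    /* Prefer to roll back the transaction that has to wait anyway (T2,
       ordered after T1); NULL means no preference, letting InnoDB's own
       heuristics pick the victim. */
    const THD *deadlock_victim_preference(const THD *t1, const THD *t2)
    {
      if (rpl_commits_before(t1, t2))
        return t2;       /* T2 waits for T1 anyway; kill T2, keep T1 */
      if (rpl_commits_before(t2, t1))
        return t1;
      return NULL;
    }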
Also some misc. fixes found during development and testing:
- Remove thd_rpl_is_parallel(), it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
worker thread is killed to resolve a deadlock with fixed commit
ordering. There are some cases, eg. in sql/sql_parse.cc, where a KILL_QUERY
can be ignored if the query otherwise completed successfully, and this
could cause the deadlock kill to be lost, so that the deadlock was not
correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel
replication are not printed to the error log; there were some places
around the replication code with extra error logging. These conditions can
occur occasionally and are handled automatically without breaking
replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
the end of a transaction, to be able to detect and resolve deadlocks due to
commit ordering. But this value was also used as a flag to mark whether
record_gtid() had been called, by being set to zero, losing the value. Now
introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
valid for the entire duration of the transaction (see the sketch after this list).
- Fix one place where the code to handle ignored errors called reset_killed()
unconditionally, even if no error was caught that should be ignored. This
could cause loss of a deadlock kill signal, breaking deadlock detection and
resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command(). This could
cause a prior error condition to remain for the next event executed,
causing assertions about errors already being set and possibly giving
incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker
threads after each event execution; this caused the deadlock detection and
handling code to not be able to correctly process the associated
transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at
wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
error condition to wrongly propagate to a later wait_for_prior_commit(),
causing spurious ER_PRIOR_COMMIT_FAILED errors.
- Do not put the binlog background thread into the processlist. It causes
too many result differences in mtr, but also it probably is not useful
for users to pollute the process list with a system thread that does not
really perform any user-visible tasks...
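The sketch referenced in the rgi->gtid_sub_id item above (hypothetical field
and function names, following the description of rgi->gtid_pending):

    /* A separate flag records whether the GTID still has to be written,
       so gtid_sub_id is no longer zeroed and stays valid for deadlock
       handling at the end of the transaction. */
    struct rpl_group_info_sketch
    {
      unsigned long long gtid_sub_id; /* valid for the whole trx now */
      int gtid_pending;               /* 1 until record_gtid() has run */
    };

    void record_gtid_once(struct rpl_group_info_sketch *rgi)
    {
      if (rgi->gtid_pending)
      {
        /* ... write the GTID to the gtid_slave_pos table ... */
        rgi->gtid_pending= 0;         /* previously: gtid_sub_id= 0 */
      }
    }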
The method JOIN_CACHE::init may fail (return 1) if some conditions on the
used join buffer are not satisfied. For example, it fails if join_buffer_size
is greater than join_buffer_space_limit. These conditions should be checked
when running the EXPLAIN command for the query. That's why the method
JOIN_CACHE::init has to be called for EXPLAIN commands as well.
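A simplified sketch of the buffer-size check and why EXPLAIN must run it
too (hypothetical names; the real method is JOIN_CACHE::init):

    /* Hypothetical, simplified version of the check inside
       JOIN_CACHE::init; EXPLAIN must run the same check so the plan it
       prints matches what execution would actually do. */
    typedef struct join_cache_sketch
    {
      unsigned long long buffer_size;  /* join_buffer_size */
      unsigned long long space_limit;  /* join_buffer_space_limit */
    } join_cache_sketch;

    static int join_cache_init(join_cache_sketch *jc)
    {
      if (jc->buffer_size > jc->space_limit)
        return 1;                      /* join buffer cannot be used */
      /* ... allocate and set up the buffer ... */
      return 0;
    }

    /* Called both for EXPLAIN and for execution. */
    static int use_join_buffer(join_cache_sketch *jc)
    {
      return join_cache_init(jc) == 0;
    }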
Part#1.
table_cond_selectivity() should discount the selectivity of a table's
conditions only when it counted that selectivity to begin with.
For non-ref-based access methods (ALL/range/index_merge/etc),
we start with sel=1.0 and hence do not need to discount any
selectivities.
- When the range optimizer cannot convert the lookup value to fit into a
[VAR]CHAR(n) column, it should produce:
= "Impossible range" for equalities
= "no range" for non-equalities.
Merge from mysql-5.6:
revno: 3257
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug11756966
timestamp: Thu 2011-07-14 09:32:01 +0200
message:
Bug#11756966 - 48958: STORED PROCEDURES CAN BE LEVERAGED TO BYPASS
DATABASE SECURITY
The problem was that CREATE PROCEDURE/FUNCTION could be used to
check the existence of databases for which the user had no
privileges and therefore should not be allowed to see.
The reason was that the existence of a given database was checked
before privileges. So trying to create a stored routine in
a non-existent database would give a different error than trying
to create a stored routine in a restricted database.
This patch fixes the problem by changing the order of the checks
for CREATE PROCEDURE/FUNCTION so that privileges are checked first.
This means that trying to create a stored routine in a
non-existent database and in a restricted database both will
give ER_DBACCESS_DENIED_ERROR error.
Test case added to grant.test.