- No need to use --skip-ndb in collections files anymore; after the recent mtr.pl fixes the logic is longer but clearer, and ndb tests are never run in MySQL Server unless explicitly requested.
- Remove sys_vars.ndb_log_update_as_write_basic.test and sys_vars.ndb_log_updated_only_basic.result, since MySQL Server does not have those options.
- Only sys_vars.have_ndbcluster_basic is left, since MySQL Server has that variable hardcoded.
btr_lift_page_up() writes a wrong page number (off by -1) for pages above the father page. In almost all cases, though, the father page is the root page and there are no upper pages, so this is a very rare path.
In addition, a leaf page should not be lifted unless the father page is the root, because branch pages must not become leaf pages.
rb://1336 approved by Marko Makela.
main.mysqlbinlog_row_innodb are skipped by mtr
=== Problem ===
The following tests are wrongly placed in the main suite and as a
result are not run with the proper binlog format combinations.
Some are always skipped by mtr.
1) mysqlbinlog_row_myisam
2) mysqlbinlog_row_innodb
3) mysqlbinlog_row.test
4) mysqlbinlog_row_trans.test
5) mysqlbinlog-cp932
6) mysqlbinlog2
7) mysqlbinlog_base64
=== Background ===
mtr runs the tests placed in the main suite with binlog format=stmt.
Tests that need to be run against binlog format=row or mixed, or
against more than one binlog format, and that require only one mysql
server, are placed in the binlog suite. mtr runs tests in the binlog
suite with all three binlog formats (stmt, row and mixed).
=== Fix ===
1) Moved the tests listed in the problem section above to the binlog suite.
2) Added the prefix "binlog_" to the name of each moved test case.
Renamed the corresponding result files and option files accordingly.
mysql-test/extra/binlog_tests/mysqlbinlog_row_engine.inc:
include file for mysqlbinlog_row_innodb.test and
mysqlbinlog_row_myisam.test which are being moved to the
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog-cp932.result:
result file for mysqlbinlog-cp932.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog2.result:
result file for mysqlbinlog2.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_base64.result:
result file for mysqlbinlog_base64.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row.result:
result file for mysqlbinlog_row.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_innodb.result:
result file for mysqlbinlog_row_innodb.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_myisam.result:
result file for mysqlbinlog_row_myisam.test which is being moved to
binlog suite.
mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_trans.result:
result file for mysqlbinlog_row_trans.test which is being moved to
binlog suite.
mysql-test/suite/binlog/t/binlog_mysqlbinlog-cp932-master.opt:
option file for mysqlbinlog-cp932.test which is being moved to
binlog suite.
mysql-test/suite/binlog/t/binlog_mysqlbinlog-cp932.test:
the test requires binlog format=stmt or mixed. Since it was placed in
the main suite earlier, it was only run with binlog format=stmt; hence
this test was never run with binlog format=mixed.
mysql-test/suite/binlog/t/binlog_mysqlbinlog2.test:
the test requires binlog format=stmt or mixed. Since it was placed in
the main suite earlier, it was only run with binlog format=stmt; hence
this test was never run with binlog format=mixed.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_base64.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt; hence this
test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt; hence this
test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_innodb.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt; hence this
test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_myisam.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt; hence this
test was always skipped by mtr.
mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_trans.test:
the test requires binlog format=row. Since it was placed in the main
suite earlier, it was only run with binlog format=stmt; hence this
test was always skipped by mtr.
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is used for select query evaluation and the
query is operating with a consistent read snapshot, it might take a
long time for the secondary index to return control to mysql, as MVCC
kicks in. If the user issues "kill query <id>" while the query is
actively accessing the secondary index, it is not honored, as there is
no hook to check for this condition. Added a hook for this check.
-----
In parallel, the case of the secondary index taking too long to
evaluate under a consistent read snapshot is being examined for
performance improvement in WL#6540.
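As an illustration of the kind of hook added (a minimal sketch only; the flag and loop are toy stand-ins, not the actual InnoDB code):

  #include <atomic>
  #include <cstdio>
  #include <vector>

  static std::atomic<bool> killed{false};    // set by "kill query <id>"

  // Toy scan loop: poll the kill flag every 1000 rows so that a long
  // MVCC-heavy secondary index scan can give up control promptly.
  static bool scan_index(const std::vector<int> &index)
  {
    size_t visited = 0;
    for (int entry : index)
    {
      (void) entry;                          // process the entry (omitted)
      if (++visited % 1000 == 0 &&
          killed.load(std::memory_order_relaxed))
      {
        std::puts("query interrupted");
        return false;                        // abort the scan
      }
    }
    return true;                             // scan completed
  }

  int main()
  {
    std::vector<int> index(100000, 42);
    killed = true;                           // simulate "kill query"
    return scan_index(index) ? 0 : 1;
  }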
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error is propagated up to the SQL layer, but the
my_error() api is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails:
  case Diagnostics_area::DA_EMPTY:
  default:
    DBUG_ASSERT(0);
    error= send_ok(thd->server_status, 0, 0, 0, NULL);
    break;
The fix is to backport the fixes for bugs 14365043, 11761652
and 11746399:
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
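As a minimal sketch of the failure mode (toy types, not the server's actual Diagnostics_area or handler API): if a handler error code is dropped instead of being recorded, the end-of-statement code lands in the DA_EMPTY branch and the assertion fires.

  #include <cassert>
  #include <cstdio>

  enum da_status { DA_EMPTY, DA_ERROR, DA_OK };

  struct diagnostics_area
  {
    da_status status = DA_EMPTY;
    int err = 0;
    void set_error(int code) { status = DA_ERROR; err = code; }
  };

  // Toy stand-in for an index/rnd init call that fails.
  static int index_init() { return 1; /* think HA_ERR_TABLE_DEF_CHANGED */ }

  static void end_statement(diagnostics_area &da)
  {
    switch (da.status)
    {
    case DA_ERROR: std::printf("error %d sent\n", da.err); break;
    case DA_OK:    std::puts("ok sent"); break;
    case DA_EMPTY:
    default:       assert(0);   // the assertion from the report
    }
  }

  int main()
  {
    diagnostics_area da;
    int rc = index_init();
    if (rc != 0)
      da.set_error(rc);   // the backported fix: record the result code
                          // (before the fix, rc was silently ignored)
    end_statement(da);    // reaches DA_ERROR, not DA_EMPTY
    return 0;
  }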
rb://1227 approved by guilhem and mattiasj.
QUOTING IN REPLICATION
Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.
Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
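As a sketch of such a quoting helper (toy code, not the server's actual function): wrap the identifier in backticks and double any backtick embedded in the name.

  #include <cstdio>
  #include <string>

  // Quote an identifier for a generated SQL statement: `name`,
  // with any embedded backtick doubled.
  static std::string quote_identifier(const std::string &name)
  {
    std::string out = "`";
    for (char c : name)
    {
      out += c;
      if (c == '`')
        out += '`';        // escape by doubling
    }
    out += '`';
    return out;
  }

  int main()
  {
    // prints `weird``table` -- safe to embed in a logged statement
    std::printf("%s\n", quote_identifier("weird`table").c_str());
    return 0;
  }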
THOUGH IT IS NOT.
The following error message is misleading because it claims
that the BLOB space is not counted:
"ERROR 1118 (42000): Row size too large. The maximum row size for
the used table type, not counting BLOBs, is 8126. You have to
change some columns to TEXT or BLOBs"
When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row. So
the above error message is changed as follows, depending on
the row format used:
For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format,
BLOB prefix of 0 bytes is stored inline."
For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
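A back-of-the-envelope sketch of the difference, using the numbers quoted above (the ~8126-byte row size limit, a 768-byte inline BLOB prefix for COMPACT/REDUNDANT, and an assumed 20-byte external pointer per BLOB):

  #include <cstdio>

  // Approximate inline bytes that n externally stored BLOB columns
  // contribute to the row, per row format family.
  static unsigned long inline_blob_bytes(unsigned n, bool compact_or_redundant)
  {
    const unsigned prefix  = 768;  // stored inline in COMPACT/REDUNDANT
    const unsigned pointer = 20;   // external pointer, always inline
    return (unsigned long) n * (compact_or_redundant ? prefix + pointer
                                                     : pointer);
  }

  int main()
  {
    const unsigned long limit = 8126;  // from the error message
    for (unsigned n : {5u, 10u, 11u})
      std::printf("%2u blobs: COMPACT/REDUNDANT %5lu, DYNAMIC/COMPRESSED %4lu"
                  " (limit %lu)\n",
                  n, inline_blob_bytes(n, true),
                  inline_blob_bytes(n, false), limit);
    // Around 11 BLOB columns the inline prefixes alone exceed the
    // COMPACT/REDUNDANT limit, while DYNAMIC/COMPRESSED stays far below it.
    return 0;
  }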
rb://1252 approved by Marko Makela
Backport from mysql-5.6 of the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)
for Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC.
The assertion introduced in the fix for Bug#13817703
is too strong: a negative number can be greater
than the column max value when the column value is
a negative number.
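A guess at the arithmetic involved (toy code; the real function is innobase_next_autoinc() and the real assertion macro is ut_a()): once a negative column value is viewed through unsigned arithmetic, it compares greater than the column's max value, so an assertion of the form value <= max fires even for a legitimate row.

  #include <cstdio>

  int main()
  {
    long long current = -5;               // valid value in a signed column
    unsigned long long col_max = 127;     // e.g. TINYINT max

    // Autoinc bookkeeping is done in unsigned arithmetic; the negative
    // value wraps to a huge number ...
    unsigned long long as_unsigned = (unsigned long long) current;
    std::printf("%llu > %llu\n", as_unsigned, col_max);

    // ... so asserting as_unsigned <= col_max is too strong: it would
    // fail here although the stored value is perfectly legal.
    return 0;
  }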
rb://978 Approved by Jimmy Yang.
rb://1236 approved by Marko Makela
NUMBERS
If a system variable was declared as deprecated without mention of an
alternative, the message would look funny, e.g. for @@delayed_insert_limit:
Warning 1287 '@@delayed_insert_limit' is deprecated and
will be removed in MySQL .
The message was meant to display the version number, but it's not
possible to give one when declaring a system variable.
The fix does two things:
1) The definition of the message
ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT is changed so that it does
not display a version number. I.e. in English the message now reads:
Warning 1287 The syntax '@@delayed_insert_limit' is deprecated and
will be removed in a future version.
2) The message ER_WARN_DEPRECATED_SYNTAX_WITH_VER is discontinued in
favor of ER_WARN_DEPRECATED_SYNTAX for system variables. This change
was already done in versions 5.6 and above as part of wl#5265. This
part is simply back-ported from the worklog.
WARNING
This patch is for mysql-5.5 only,
to be null-merged to mysql-5.6 and mysql-trunk.
This is a partial rollback of the file io instrumentation,
removing the instrumentation for mysql_file_stat in the archive engine.
See the bug comments for details.
Problem description:
Table 't' is created with two columns having a compound index on both
columns, under the innodb/myisam engine, on a remote machine. On the
local machine the same table is created under the federated engine.
A select having a where clause with an 'AND' operation gives wrong
results on the local machine.
Analysis:
The given query is wrongly transformed at the federated engine by the
ha_federated::create_where_from_key() function and sent as such to the
remote machine, hence the local machine shows wrong results.
Given query "select c1 from t where c1 <= 2 and c2 = 1;"
Query transformed, after ha_federated::create_where_from_key() function is:
SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL ) AND
( (`c1` >= 2) AND (`c2` <= 1) ) and the same sent to real_query().
In the above the '<=' and '=' conditions were transformed to '>=' and
'<=' respectively.
ha_federated::create_where_from_key() behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to get the "(`c1` IS NOT NULL )" part of the where clause;
this transformation is correct. The end_key is used to get
"( (`c1` >= 2) AND (`c2` <= 1) )", which is wrong: the given
conditions ('<=' and '=') are changed into wrong conditions
('>=' and '<=').
The end_key is {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY} and store_length is '5'. Based on the
store_length and length values, the condition is built in the
HA_READ_AFTER_KEY switch case. The 'HA_READ_AFTER_KEY' case applies
only to the last part of the end_key; for the previous parts execution
goes to the 'HA_READ_KEY_OR_NEXT' case, where '>=' is added as the
condition instead of '<='.
Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it
applies to all parts of the end_key, i.e. added 'i > 0', which
identifies the end_key, to the if condition.
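A simplified sketch of the intended end_key handling (toy strings, not ha_federated::create_where_from_key() itself): every part of the end_key must produce the '<=' form of the bound, not just the last part.

  #include <cstdio>
  #include <string>
  #include <vector>

  struct key_part { std::string column; std::string value; };

  // Build the end_key side of the WHERE clause. Before the fix only the
  // last part took the HA_READ_AFTER_KEY branch; earlier parts fell into
  // the HA_READ_KEY_OR_NEXT branch and got '>='. After the fix every
  // end_key part gets the '<=' form.
  static std::string end_key_clause(const std::vector<key_part> &parts)
  {
    std::string where;
    for (size_t i = 0; i < parts.size(); i++)
    {
      if (i) where += " AND ";
      where += "(`" + parts[i].column + "` <= " + parts[i].value + ")";
    }
    return "(" + where + ")";
  }

  int main()
  {
    // select c1 from t where c1 <= 2 and c2 = 1
    std::printf("%s\n", end_key_clause({{"c1", "2"}, {"c2", "1"}}).c_str());
    // prints ((`c1` <= 2) AND (`c2` <= 1)) instead of the wrong
    // ((`c1` >= 2) AND (`c2` <= 1))
    return 0;
  }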
mysql-test/suite/federated/federated.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_archive.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_13118.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_25714.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_35333.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_debug.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_innodb.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_server.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_transactions.test:
modified the federated.inc file location
mysql-test/suite/federated/include/federated.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/federated_cleanup.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/have_federated_db.inc:
moved the file from federated suite to federated/include folder
storage/federated/ha_federated.cc:
updated the 'if condition' in ha_federated::create_where_from_key()
function.
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1175 approved by Jimmy Yang.
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.
rpl_cant_read_event_incident:
The slave applies updates from the bug11747416_32228_binlog.000001
file, which contains a CREATE TABLE t statement and an incident. When
the SQL thread is running slowly, the IO thread may reach the incident
before the SQL thread executes the create table statement.
Execute "drop table if exists t" and also perform a RESET MASTER to
clean the slave binary logs.
rpl_bug41902:
The suppression for the error "MYSQL_BIN_LOG::purge_logs was called
with file ./master-bin.000001 not listed in the index." does not
consider Windows paths, where the file is ".\master-bin.000001".
Changed the suppression to "MYSQL_BIN_LOG::purge_logs was called with
file ..master-bin.000001 not listed in the index", to match both ".\"
and "./".
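A quick check of the new pattern (the suppression text is treated as a regular expression, so each '.' in ".." matches any character):

  #include <cstdio>
  #include <regex>

  int main()
  {
    // ".." matches "./" (unix) as well as ".\" (windows).
    std::regex pat("..master-bin");
    std::printf("%d\n", std::regex_search("./master-bin.000001", pat));   // 1
    std::printf("%d\n", std::regex_search(".\\master-bin.000001", pat));  // 1
    return 0;
  }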
Print the warning (note):
YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
Problem
========
Replication breaks when the event length exceeds
the size of the master Dump thread's max_allowed_packet.
This failure occurs because the event length, with the
max_event_header length added, exceeds the max_allowed_packet
of the Dump thread. This causes the Dump thread to break
replication and throw an error.
That can happen e.g. with row-based replication in an Update_rows event.
Fix
====
The problem is fixed in 2 steps:
1.) The Dump thread's limit for reading events is increased to the
upper limit, i.e. the Dump thread reads whatever gets logged in the
binary log.
2.) On the slave side, the max_allowed_packet for the slave's threads
(IO/SQL) is increased to 1GB.
This is done via the new server option slave_max_allowed_packet, which
lets the DBA regulate the max_allowed_packet of the slave threads
(IO/SQL) and facilitates the sending of large packets from the master
to the slave. Large packets are thus received by the slave and applied
successfully (a sketch of the resulting limits follows the file notes
below).
sql/log_event.cc:
After the fix, the event length is no longer checked against
max_allowed_packet but against the new option slave_max_allowed_packet.
sql/log_event.h:
Added the new option in the log_event.h file.
sql/mysqld.cc:
Added a new option to the server.
sql/slave.cc:
The session max_allowed_packet of the slave's threads is increased
to a large value, i.e. the global max_allowed_packet is not taken
into consideration.
sql/sql_repl.cc:
The dump thread's max_allowed_packet is set to the upper limit,
which makes it independent; it now reads whatever gets logged in
the binary log.
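A sketch of the resulting limits (toy code; only slave_max_allowed_packet is a real option name, the rest is illustrative): the dump thread reads without the usual cap, and the slave IO/SQL threads use the dedicated limit instead of the global max_allowed_packet.

  #include <cstdio>

  enum thread_kind { DUMP_THREAD, SLAVE_IO_SQL_THREAD, CLIENT_THREAD };

  static const unsigned long MAX_PACKET_LIMIT   = 1UL << 30; // 1GB cap
  static unsigned long max_allowed_packet       = 4UL << 20; // global, e.g. 4MB
  static unsigned long slave_max_allowed_packet = 1UL << 30; // new option, 1GB

  // Which packet limit applies to a thread after the fix.
  static unsigned long effective_packet_limit(thread_kind t)
  {
    switch (t)
    {
    case DUMP_THREAD:         return MAX_PACKET_LIMIT;   // reads whatever was logged
    case SLAVE_IO_SQL_THREAD: return slave_max_allowed_packet;
    default:                  return max_allowed_packet; // ordinary clients
    }
  }

  int main()
  {
    std::printf("dump:  %lu\n", effective_packet_limit(DUMP_THREAD));
    std::printf("slave: %lu\n", effective_packet_limit(SLAVE_IO_SQL_THREAD));
    std::printf("other: %lu\n", effective_packet_limit(CLIENT_THREAD));
    return 0;
  }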
Details:
- Archive storage engine file accesses were not instrumented and thus
were not shown in PS tables.
Fix:
- Added instrumentation code using the PS APIs for I/O.
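A toy illustration of the idea (not the actual Performance Schema API; the real code goes through the instrumented mysql_file_* wrappers): route file operations through a wrapper that records them, so per-file statistics can be exposed later.

  #include <cstdio>
  #include <map>
  #include <string>

  // Count file opens per name, the way instrumented file wrappers feed
  // the performance_schema file tables.
  static std::map<std::string, unsigned long> open_counts;

  static FILE *instrumented_fopen(const char *name, const char *mode)
  {
    ++open_counts[name];              // record the event
    return std::fopen(name, mode);    // then do the real work
  }

  int main()
  {
    if (FILE *f = instrumented_fopen("archive_test.ARZ", "wb"))
      std::fclose(f);
    for (const auto &e : open_counts)
      std::printf("%s opened %lu time(s)\n", e.first.c_str(), e.second);
    return 0;
  }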
Updating the result file: because a multi-row insert now reserves the
auto increment values beforehand, if any explicitly specified auto
increment values are present, some of the reserved values are lost.
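A toy model of the reservation behaviour described above (assumed semantics, for illustration only): the insert reserves one consecutive value per row up front, and a row that brings its own explicit value leaves its reserved value unused.

  #include <cstdio>
  #include <vector>

  int main()
  {
    long long next_autoinc = 1;                  // next value to hand out
    std::vector<long long> rows = {0, 0, 50, 0}; // 0 = "no explicit value"

    // Reserve one value per row before executing the multi-row insert.
    long long cursor = next_autoinc;
    next_autoinc += (long long) rows.size();     // counter skips the range

    for (long long v : rows)
    {
      if (v != 0)   // explicit value supplied: reserved value is lost
        std::printf("row uses explicit %lld, reserved %lld is lost\n", v, cursor);
      else
        std::printf("row gets reserved %lld\n", cursor);
      cursor++;
    }
    std::printf("next auto_increment: %lld\n", next_autoinc);
    return 0;
  }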