INDEX_READ_MAP HAD NO MATCH
If index_read_map is called for an exact search and no matching record
exists, it will position the cursor on the next record, while still
leaving the relative position set to BTR_PCUR_ON.
This makes a subsequent call to index_next read yet another record,
instead of returning the record the cursor points to.
Fixed by setting pcur->rel_pos = BTR_PCUR_BEFORE when an exact
[prefix] search was done, but failed.
Also avoids optimistic restoration if rel_pos != BTR_PCUR_ON,
since btr_cur may be different from old_rec.
rb#3324, approved by Marko and Jimmy
DURING INNODB RECOVERY
Problem:
=======
The connection 'master' is dropped by mysqltest after
rpl_end.inc. At this point, the temporary tables dropped at
the connection 'master' are not yet synced to the slave.
So, the temporary tables replicated from the master remain
on the slave, leading to an inconsistent close of the test.
The following test then complains about the presence of
temporary table(s) left over from the previous test.
Fix:
===
- Put explicit drop commands in replication tests so
that the temporary tables are dropped at the slave as well
(see the sketch below).
- Added a check for Slave_open_temp_tables in
mtr_check.sql to warn about any remaining temporary
tables at the close of a test.
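A minimal mysqltest-style sketch of the explicit cleanup intended
here (connection and table names hypothetical):

  --connection master
  DROP TEMPORARY TABLE IF EXISTS tmp1;
  --sync_slave_with_master
  # the slave should now report no leftover temporary tables
  SHOW STATUS LIKE 'Slave_open_temp_tables';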
IN TIME RECOVERY FAILURE ON SLAVES
Problem:
DROP TEMPORARY TABLE IF EXISTS commands can cause point
in time recovery (re-applying the binlog) failures.
Analysis:
In RBR, 'DROP TEMPORARY TABLE' commands are
always binlogged by adding 'IF EXISTS' clauses.
Also, the slave SQL thread will not check replicate.* filter
rules for "DROP TEMPORARY TABLE IF EXISTS" queries.
If log-slave-updates is enabled on the slave, these queries
will be binlogged in the format of "USE `db`;
DROP TEMPORARY TABLE IF EXISTS `t1`;" irrespective
of the filtering rules and irrespective of whether `db` exists.
When users try to recover a slave from its own binlog, the
'USE `db`' command might fail if `db` is not present on the slave.
Fix:
At the time of writing the 'DROP TEMPORARY TABLE
IF EXISTS' query into the binlog, 'use `db`' will not be
present and the table name in the query will be a fully
qualified table name.
E.g.:
'USE `db`; DROP TEMPORARY TABLE IF EXISTS `t1`;'
will be logged as
'DROP TEMPORARY TABLE IF EXISTS `db`.`t1`;'.
Problem:
sys_vars.rpl_init_slave_func test was failing sporadically
on 5.5+.
Fix:
Added an assert condition after the wait checks.
Re-recorded the test and enabled it.
BUG#12535301- SYS_VARS.RPL_INIT_SLAVE_FUNC MISMATCHES IN DAILY-5.5
Problem:
sys_vars.rpl_init_slave_func test was not re-recorded after
the last edit. It was disabled on 5.1 after seeing failures
due to the above reason.
There are no old failures, as this suite never ran with PB2 on 5.1.
Fix:
Added an assert condition after the wait checks.
Re-recorded the test and enabled it.
BY BINLOG_KILLED_SIMULATE.TEST
The 'mysqlbinlog' tool creates a temporary file while
preparing a LOAD DATA query. These files need to be deleted
at the end of the test script, otherwise they are
left behind on the daily-run machines, causing
"no space left on device" issues.
Fix:
Delete them at the end of the test scripts:
1) execute mysqlbinlog with the --local-load option to
create these files in a specified tmpdir
2) delete the tmpdir at the end of the test script
--BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE
Problem:
=======
An ALTER TABLE statement is not written to the binlog if the server
was started with "--binlog-ignore-db <some database>" and 'fully
qualified' table names are used in the ALTER TABLE statement,
altering a table outside the current database context.
Analysis:
========
The above mentioned problem affects not only ALTER TABLE
statements but all kinds of statements. Once the
current default database becomes NULL, none of the
statements will be binlogged.
The current behaviour is such that if the user has specified
restrictions on which databases need to be replicated and the
default db is not specified, then the statement is not replicated.
This means that NULL is considered equivalent to
everything (default db = NULL implies: ignore, don't log the
statement).
Fix:
===
"NULL" should not be considered as equivalent to everything.
Since the filtering criteria is not equal to "NULL" the
statement should be logged into binlog.
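A hedged illustration of the scenario (database and table names
hypothetical):

  # server started with --binlog-ignore-db=ignored_db;
  # no default database selected on the connection
  ALTER TABLE other_db.t1 ADD COLUMN c INT;
  # before the fix: not binlogged, since the NULL default db was
  #   treated as matching the ignore filter
  # after the fix: binlogged, since NULL no longer matches the filter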
PLATFORM= MACOSX10.6 X86_64 MAX
Problem: The test was failing on pb2's Mac machine because
it was not cleaned up properly. The test checks whether
the command 'start slave until' throws a proper
error when issued with a wrong number/type of
parameters. After this, the replication stream was
stopped using the include file 'rpl_end.inc'.
The errors thrown earlier left the slave in an
inconsistent state that the include file could not
close cleanly, which was caught on the Mac machine.
Fix: Started the slave by invoking start_slave.inc to have a
working slave before calling rpl_reset.inc.
Problem: The test file was not in good shape. It tested the
start slave until relay log file/pos combination
incorrectly. A couple of commands were executed on the
master and replicated to the slave. Next, the
coordinates in terms of relay log file and pos
were noted down, followed by reset slave and start
slave until the saved relay log file/pos. Reset slave
deletes all relay log files and makes the slave
forget its replication position. So, using the
saved coordinates after reset slave is wrong.
Fix: Split the test in two parts:
a) Test for start slave until master log file/pos and
checking for correct errors in the failure
scenarios.
b) Test for start slave until relay log file/pos.
Problem: The variables auto_increment_increment and
auto_increment_offset were set in the include
file rpl_init.inc. This was only configured for
some connections that are rarely used by test
cases, so it was likely to cause confusion.
If replication tests want to set up these variables,
they should do so explicitly.
Fix:
a) Removed the code that set the variables
auto_increment_increment and auto_increment_offset
in the include file.
b) Updated the test files using the same.
Post-push fix:
rpl_stm_until.test was disabled because of
this bug. Enabled and fixed it.
Removed a part of the test that was obsolete:
it tested replication from a 4.0 master to a 5.0
slave.
DOWNGRADED FROM 5.6.11 TO 5.6.10
The problem was that the new syntax was not accepted by previous versions.
Fixed by adding a version comment of /*!50611 around the
new syntax.
Like this in the .frm file:
'PARTITION BY KEY /*!50611 ALGORITHM = 2 */ () PARTITIONS 3'
and also changing the output from SHOW CREATE TABLE to:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
The ALGORITHM is always added to the .frm for KEY [sub]partitioned
tables, but SHOW CREATE TABLE will only include it when it is the
non-default ALGORITHM = 1.
Also notice that for 5.5, it will say /*!50531 instead of /*!50611, which
will make an upgrade from 5.5 > 5.5.31 to 5.6 < 5.6.11 fail!
If one downgrades a fixed version to the same major version (5.5 or 5.6),
bug 14521864 will be visible again, but unless the .frm is updated, it will
work again when upgrading again.
Also fixed so that the .frm version does not get updated
if a check of a single partition passes.
innodb_bug12400341.test is disabled for the Valgrind daily test.
It might be affected by undo slots left over from the previous test,
because of slower execution.
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character based hash calculation).
Also, ENUM/SET changed from latin1 ci based hash calculation to
binary hash between 5.1 and 5.5 (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs.
Also, since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and otherwise give an error listing
the row's partitioning fields. This avoids asserts in InnoDB and
also alerts the user that there is a misplaced row. A detailed error
message is given, including an entry in the error log containing
the table name, partition, and row content (PK if it exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER, it
defaults to 2 (see the example below).
This new syntax should probably be ignored by NDB.
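A minimal example of the new clause (table and column names
hypothetical):

  CREATE TABLE t1 (a INT)
  PARTITION BY KEY ALGORITHM = 1 (a)
  PARTITIONS 3;
  # ALGORITHM = 1 selects the 5.1 hashing,
  # ALGORITHM = 2 (the default) the 5.5 hashing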
3) Since there is a demand for avoiding a scan through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before. This allows manually
upgrading such a table by something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows (also works for ALTER TABLE t CHECK/REPAIR PARTITION;
example statements are sketched after the descriptions below).
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each row is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows; every misplaced row is moved into its correct
partition.
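Sketch of the statements involved (table name hypothetical):

  CHECK TABLE t1 FOR UPGRADE;  # suggests the ALTER ... ALGORITHM = 1 to run
  CHECK TABLE t1 EXTENDED;     # scans all rows; fails on first misplaced row
  REPAIR TABLE t1 EXTENDED;    # scans all rows; moves misplaced rows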
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM
is not set, it is not possible to know whether it contains rows hashed
as in 5.1 or as in 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA query, user variables are allowed
inside the "Into_list" and "Set_list". The user variables used
inside these two lists were not properly guarded with backticks
while the server wrote the query to the binlog. Hence user
variable names like a` could not be used in this context.
Fix: Properly quote these variables while
writing to the binlog (see the sketch below).
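A hedged illustration (file, table, and column names hypothetical):
a user variable named a` must be written to the binlog quoted as
@`a``` rather than left unquoted:

  LOAD DATA INFILE 'data.txt' INTO TABLE t1 (@`a```) SET c1 = @`a```;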
buf_page_get_gen(): Do not attempt to decompress a compressed-only
page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by
btr_search_drop_page_hash_when_freed(). There cannot be any adaptive
hash index pointing to a page that does not exist in uncompressed
format in the buffer pool.
innodb_buffer_pool_evict_update(): New function for debug builds, to handle
SET GLOBAL innodb_buffer_pool_evict='uncompressed'
by evicting all uncompressed page frames of compressed tablespaces
from the buffer pool.
rb#1873 approved by Jimmy Yang
The test binlog.binlog_spurious_ddl_errors was failing on pb2 at the statement
"UNINSTALL PLUGIN example;" with this warning:
"Warning 1620 Plugin is busy and will be uninstalled on shutdown"
Fix:
The spurious warnings occur because the query cache, used by the
example plugin when tables are created with the plugin, was not emptied.
Hence, the query cache is flushed before uninstalling the plugin.
Also, as part of making the test run across platforms, the plugin
installation script is changed.
WITH A VARIABLE AND ORDER BY
Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX
This is a fix for a regression introduced by Bug#12667154:
Bug#12667154 attempted to fix a performance problem with subqueries
that did filesort. For doing filesort, the optimizer creates a quick
select object to use when building the sort index. This quick select
object was deleted after the first call to create_sort_index(). Thus,
for queries where the subquery was executed multiple times, the quick
object was only used for the first execution. For all later executions
of the subquery, filesort used a complete table scan for building the
sort index. The fix for Bug#12667154 tried to fix this by not deleting
the quick object after the first execution of create_sort_index() so
that it would be re-used for building the sort index by the following
executions of the subquery.
The regression introduced by Bug#12667154 is that, due to not deleting
the quick select object after building the sort index, the quick
object could in some cases also be used during the second phase of the
execution of the subquery, instead of the created sort
index. This caused wrong results to be returned.
The fix for this issue is to delete the reference to the select object
after it has been used in create_sort_index(). In this way the select
and quick objects will not be available when doing the second phase
of the execution of the select operation. To ensure that the select
object can be re-used for the following executions of the subquery
we make a copy of the select pointer. This is used for restoring the
select object after the select operation is completed.
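A hedged sketch of the kind of query shape affected (tables and
columns hypothetical): a correlated subquery that does filesort and
is executed once per outer row, so create_sort_index() is called
repeatedly with the same quick select object:

  SELECT t1.a,
         (SELECT t2.b FROM t2 WHERE t2.a = t1.a
           ORDER BY t2.b LIMIT 1) AS first_b
  FROM t1;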
Before this fix, configuring the server with:
- performance_schema_events_waits_history_size=0
- performance_schema_events_waits_history_long_size=0
could cause a crash in the performance schema.
Setting these to 0 is intended to be valid and supported,
and in fact already works properly in MySQL 5.6 and up.
This fix backports the code fix and test cases from MySQL 5.6
to the MySQL 5.5 release.
Problem:
Before the ALTER TABLE statement, the array
dict_index_t::stat_n_diff_key_vals had proper values calculated
and updated. But after the ALTER TABLE statement, all the values
of this array are 0.
Because of this, the statistics returned by innodb_rec_per_key() are
different before and after the ALTER TABLE statement. Running the
ANALYZE TABLE command populates the statistics correctly.
Solution:
After the ALTER TABLE statement, set the flag dict_table_t::stat_initialized
correctly so that the table statistics will be recalculated properly when
the table is next loaded. Note that we still don't choose loose
index scans; this fix only ensures that an ALTER TABLE does not change
the optimizer plan.
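For example (table name hypothetical), the workaround before this
fix was to recalculate the statistics manually:

  ALTER TABLE t1 ADD COLUMN c INT;  # stat_n_diff_key_vals became all 0
  ANALYZE TABLE t1;                 # repopulated the statistics correctly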
rb://1639 approved by Marko and Jimmy.
RPL_ROTATE_LOGS has been failing sporadically in what seems to be a
problem related to routines that update the coordinates. However,
the test lacks proper assert statements, and because of this the
debug information upon failure simply points to the content
mismatch between the test and the result file.
Not as a solution, but as an improvement to the test to better
debug this failure, new assert statements were added to the test.
@rpl_rotate_logs.test
Added new assert statements, reducing the
dependency on the result file.
@rpl_rotate_logs.result
Added new content to the result file to
match the test changes.
Problem: The problem with the test is that the slave returns
from the start_slave.inc call too early, before the list
is actually updated. This caused stale slave
data to be reported.
Fix: Added a wait in the test till the slave's IO status is
changed to "Waiting for master to send event", which
ensures the list is correctly updated.
=== Problem ===
The test is dependent on binlog positions and checks
to see if the command 'START SLAVE' functions correctly
with the 'UNTIL' clause added to it. The 'UNTIL' clause
is added to specify that the slave should start and run
until the SQL thread reaches a given point in the master
binary log or in the slave relay log.
The test uses hard coded values for MASTER_LOG_POS and
RELAY_LOG_POS, instead of extracting them using the
query_get_value() function (a sketch of that approach is given
after the fix below). There is a test,
'rpl.rpl_row_until', which does a similar thing but uses the
query_get_value() function to set the values of
MASTER_LOG_POS/RELAY_LOG_POS. To be precise,
rpl.rpl_row_until is a modified version of
engines/funcs.rpl_row_until.test.
The use of hard coded values may lead the slave to stop at a position
which may differ from the expected position in the binlog file,
an example being the failure of engines/funcs.rpl_row_until in
mysql-5.1 given as:
"query 'select * from t2' failed. Table 'test.t2' doesn't exist".
In this case, the slave actually ran a couple of extra commands,
as a result of which the slave first deleted the table and then
ran a select query on it, leading to the above mentioned failure.
=== Fix ===
1) Fixed the code for the failure seen in rpl.rpl_row_until.
This test was also failing, although the symptoms of
the failure were different.
2) Copied the contents from rpl.rpl_row_until
into engines/funcs.rpl_row_until.
3) Updated engines/funcs.rpl_row_until.result accordingly.
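A minimal mysqltest-style sketch of extracting coordinates with
query_get_value() instead of hard coding them:

  --connection master
  let $master_file= query_get_value(SHOW MASTER STATUS, File, 1);
  let $master_pos= query_get_value(SHOW MASTER STATUS, Position, 1);
  --connection slave
  eval START SLAVE UNTIL MASTER_LOG_FILE='$master_file',
       MASTER_LOG_POS=$master_pos;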
When a binlog is replayed into a server, e.g.:
$ mysqlbinlog binlog.000001 | mysql
it sets a pseudo slave mode on the client connection so that the server
is able to read binlog events; that is, a format description event is
needed to correctly read the following events.
This pseudo slave mode also applies to the current connection the
replication rules that are needed to correctly apply binlog events.
If a binlog dump is sourced on a connection, this pseudo slave mode
remains set afterwards, applying rules to subsequent commands that
are unexpected from the customer's perspective.
Added a new SET statement to the binlog dump output that unsets the
pseudo slave mode at the end of the dump file (see the sketch below).
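A hedged sketch of the mechanism (the /*!50530 version comment number
is an assumption): mysqlbinlog output enables the mode at the top of
the dump and, with this fix, disables it again at the end:

  /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
  # ... binlog events ...
  /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;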