low myisam_sort_buffer_size
Repair by sort (the default) or parallel repair of a MyISAM table
(partitioned or not), as well as bulk insert and enable-indexes
operations, sometimes did not fail over to repair with key cache.
The problem was that after an unsuccessful attempt the data file was
closed, whereas repair with key cache requires an open data file.
Fixed by reopening the data file.
Also fixed a valgrind warning that may appear during repair by sort
or parallel repair, depending on the combination of
myisam_sort_buffer_size, number of rows and index entry length.
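As an illustration of the fallback, here is a minimal standalone
sketch; MI_INFO_SKETCH, repair_by_sort(), reopen_data_file() and
repair_with_key_cache() are hypothetical stand-ins, not the actual
MyISAM functions.

    #include <cstdio>

    // Hypothetical stand-in for the MyISAM handler state.
    struct MI_INFO_SKETCH {
      int dfile;          // data file descriptor; -1 means "closed"
    };

    // Pretend repair-by-sort: closes (invalidates) the data file on failure.
    static bool repair_by_sort(MI_INFO_SKETCH *info) {
      info->dfile = -1;   // simulate: temporary data file removed
      return false;       // repair failed (e.g. sort buffer too small)
    }

    static bool reopen_data_file(MI_INFO_SKETCH *info) {
      info->dfile = 3;    // simulate a successful open() of the original file
      return info->dfile >= 0;
    }

    static bool repair_with_key_cache(MI_INFO_SKETCH *info) {
      return info->dfile >= 0;   // key-cache repair needs an open data file
    }

    int main() {
      MI_INFO_SKETCH info = { 3 };
      if (!repair_by_sort(&info)) {
        // The fix: make sure the data file is open again before falling
        // back, otherwise the key-cache repair below cannot work.
        if (!reopen_data_file(&info) || !repair_with_key_cache(&info))
          return 1;
      }
      std::printf("table repaired\n");
      return 0;
    }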
mysql-test/r/myisam.result:
A test case for BUG#47073.
mysql-test/t/myisam.test:
A test case for BUG#47073.
storage/myisam/ha_myisam.cc:
Reverted fix for BUG25289. Not needed anymore.
storage/myisam/mi_check.c:
Reopen the data file when repair by sort or parallel repair fails.
When repair by sort is requested to rebuild the data file, the data
file is rebuilt while fixing the first index. Once the rebuild is
complete, info->dfile points to the temporary data file and the
original data file is closed.
Repair may successfully fix the first index and rebuild the data
file, but then fail to fix the second index, e.g. because
myisam_sort_buffer_size was big enough for the first, shorter index
but not for a subsequent, longer one.
In this case we end up with info->dfile pointing to the temporary
file, which is then removed, and info->dfile is set to -1.
Even though repair by sort failed, the upper layer may still want to
try repair with key cache, but that requires info->dfile to point to
a valid data file.
storage/myisam/sort.c:
When copying an IO_CACHE structure, current_pos and current_end
must be updated separately to point to the memory we are copying to
(not to the memory we are copying from).
As t_file2 is always a WRITE cache, the relevant members are
write_pos and write_end.
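The pointer problem can be reproduced outside MySQL. Below is a
self-contained sketch where io_cache_sketch is a simplified,
hypothetical stand-in for IO_CACHE (not the real definition): after
a plain structure copy the position pointers still point into the
source buffer and must be rebased onto the destination buffer.

    #include <cstring>
    #include <cassert>

    // Hypothetical, simplified stand-in for a WRITE cache.
    struct io_cache_sketch {
      char  buffer[64];
      char *write_pos;   // next byte to write
      char *write_end;   // end of the usable buffer
    };

    static void init_cache(io_cache_sketch *c) {
      c->write_pos = c->buffer;
      c->write_end = c->buffer + sizeof(c->buffer);
    }

    int main() {
      io_cache_sketch src, dst;
      init_cache(&src);

      // A plain structure copy copies the pointer values verbatim ...
      std::memcpy(&dst, &src, sizeof(src));

      // ... so they still point into src.buffer. The fix: rebase them onto
      // the memory we are copying to, preserving the current offset.
      dst.write_pos = dst.buffer + (src.write_pos - src.buffer);
      dst.write_end = dst.buffer + sizeof(dst.buffer);

      assert(dst.write_pos >= dst.buffer && dst.write_pos < dst.write_end);
      return 0;
    }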
The bug is not related to MERGE tables or triggers. A more accurate
description would be 'assertion on multi-table UPDATE + NATURAL JOIN
+ MERGEABLE VIEW'.
On the PREPARE stage (see the test case) we call
mark_common_columns(), which creates the ON condition for the
NATURAL JOIN and sets the appropriate table read_set bitmaps for the
fields used in that condition.
On the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field is
already processed, and the related item is already fixed before
setup_conds() because it is an updated field, so setup_conds()
cannot set its read_set bitmap.
The fix is to set the read_set bitmap for the appropriate table
field even if the Item_direct_view_ref item that represents a
reference to this field is already fixed.
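A minimal sketch of the idea, assuming simplified stand-ins
(table_sketch, view_ref_sketch) rather than the real TABLE and
Item_direct_view_ref types.

    #include <bitset>
    #include <cassert>

    // Simplified model: a table with a per-column read_set bitmap and a
    // view reference that may already have been resolved ("fixed") on
    // PREPARE.
    struct table_sketch {
      std::bitset<16> read_set;     // which columns the statement reads
    };

    struct view_ref_sketch {
      table_sketch *table;
      unsigned      field_index;
      bool          fixed;          // already resolved during PREPARE
    };

    // Resolve the reference for EXECUTE. The point of the fix: mark the
    // column as read even if the item is already fixed, instead of
    // returning early.
    static void fix_view_ref(view_ref_sketch *ref) {
      ref->table->read_set.set(ref->field_index);  // always record the read
      if (ref->fixed)
        return;                                    // nothing else to resolve
      ref->fixed = true;
    }

    int main() {
      table_sketch t;
      view_ref_sketch ref = { &t, 1, true };  // fixed on PREPARE, bit unset
      fix_view_ref(&ref);
      assert(t.read_set.test(1));   // EXECUTE now sees the column as read
      return 0;
    }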
mysql-test/r/join.result:
test result
mysql-test/t/join.test:
test case
sql/item.cc:
Set the read_set bitmap for the appropriate table field even if the
Item_direct_view_ref item that represents a reference to this field
is already fixed (see the changeset comment above for details).
trigger, merge table
The problem with break statements is that they have very local
effects. Hence a break statement within the inner loop of a
nested-loops join caused execution to proceed to the next table even
though a serious error had occurred. The problem was fixed by
factoring the inner loop out into its own method; the change allows
any error to terminate the execution.
The errors that will now halt multi-DELETE execution
altogether are
- triggers returning errors
- handler errors
- server being killed
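The refactoring pattern is roughly the following standalone sketch;
delete_matching_rows() and the surrounding names are hypothetical,
not the actual method added to sql_delete.cc.

    #include <cstdio>

    enum loop_status { LOOP_OK, LOOP_ERROR };

    // The inner loop lives in its own function, so any error becomes a
    // return value the caller can propagate, instead of a 'break' whose
    // effect ends at the innermost loop.
    static loop_status delete_matching_rows(int table, bool trigger_fails) {
      for (int row = 0; row < 3; row++) {
        if (trigger_fails)
          return LOOP_ERROR;  // trigger/handler error or kill: stop everything
        std::printf("deleted row %d from table %d\n", row, table);
      }
      return LOOP_OK;
    }

    int main() {
      for (int table = 0; table < 2; table++) {
        if (delete_matching_rows(table, /*trigger_fails=*/table == 0) != LOOP_OK) {
          // Previously a 'break' here only left the inner loop and the join
          // moved on to the next table; now the whole statement stops.
          std::printf("multi-DELETE aborted\n");
          return 1;
        }
      }
      return 0;
    }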
mysql-test/r/delete.result:
Bug#46958: Test result.
mysql-test/t/delete.test:
Bug#46958: Test case.
sql/sql_class.h:
Bug#46958: New method declaration.
sql/sql_delete.cc:
Bug#46958: New method implementation.
All statements executed by mysql_upgrade were binlogged and then
replicated to the slave, which can result in errors; the bug report
demonstrates some examples.
Master and slave should be upgraded separately, so the statements
executed by mysql_upgrade are no longer binlogged.
The --write-binlog and --skip-write-binlog options are added to
mysql_upgrade; they control whether its SQL statements are written
to the binary log or not.
In RBR, a 'DROP TEMPORARY TABLE IF EXISTS...' statement was binlogged
when the table did not exist.
In fact, a 'DROP TEMPORARY TABLE ...' statement should never be
binlogged in RBR, whether the table exists or not.
This patch addresses this by checking whether we are dropping a
temporary table when building the custom drop statement.
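A minimal sketch of the decision, assuming hypothetical helpers
(build_drop_statement() and its flags are illustrative only, not the
actual server code).

    #include <cstdio>
    #include <string>

    // Decide what, if anything, to write to the binary log for a DROP of
    // table 't1'. In row-based replication a temporary table is never
    // binlogged, whether it exists or not.
    static std::string build_drop_statement(bool is_temporary, bool row_based) {
      if (is_temporary && row_based)
        return "";   // nothing to write
      return is_temporary ? "DROP TEMPORARY TABLE IF EXISTS t1"
                          : "DROP TABLE IF EXISTS t1";
    }

    int main() {
      std::string stmt =
          build_drop_statement(/*is_temporary=*/true, /*row_based=*/true);
      std::printf("binlogged: '%s'\n", stmt.c_str());  // prints an empty string
      return 0;
    }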
HA_ERR_WRONG_INDEX
In RBR, disabling keys on a slave table would break replication when
updating or deleting a record. When the slave thread tries to find
the row by searching in the storage engine, it checks whether the
table has a key. If it has one, the slave thread uses it to search
for the record.
However, the slave only checks whether the key exists; it does not
verify that the key is active. Should the key be disabled (e.g., the
DBA has issued ALTER TABLE ... DISABLE KEYS), the lookup fails with
the error HA_ERR_WRONG_INDEX.
This patch addresses the issue by making the slave thread also check
whether the key is active before actually using it.
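Roughly, the added check looks like the standalone sketch below;
table_def_sketch and pick_search_key() are hypothetical
simplifications, not the real slave data structures.

    #include <bitset>
    #include <cstdio>

    // Simplified table definition: how many keys it has and which of them
    // are currently enabled (ALTER TABLE ... DISABLE KEYS can turn a key
    // off without removing it).
    struct table_def_sketch {
      unsigned        key_count;
      std::bitset<64> keys_in_use;   // bit set => key is active
    };

    // Pick an index for the row lookup, or -1 to fall back to a table scan.
    static int pick_search_key(const table_def_sketch &tbl) {
      for (unsigned k = 0; k < tbl.key_count; k++) {
        // The fix: besides "does a key exist", also check that it is
        // active; using a disabled key fails with HA_ERR_WRONG_INDEX.
        if (tbl.keys_in_use.test(k))
          return (int)k;
      }
      return -1;   // no usable key: scan the table instead
    }

    int main() {
      table_def_sketch tbl;
      tbl.key_count = 1;          // the table has one key ...
      tbl.keys_in_use.reset();    // ... but it is disabled
      std::printf("key chosen: %d\n", pick_search_key(tbl));  // -1
      return 0;
    }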
The failure is not reproducible on 5.1, so the 'rpl_cross_version'
test is enabled again.
mysql-test/suite/rpl/t/disabled.def:
Removed the line so that the 'rpl_cross_version' test is enabled.
A network error happened here, but it can be caused by
CR_CONNECTION_ERROR, CR_CONN_HOST_ERROR, CR_SERVER_GONE_ERROR,
CR_SERVER_LOST, ER_CON_COUNT_ERROR or ER_SERVER_SHUTDOWN. We only
checked for CR_SERVER_LOST, so the test failed.
To fix the problem, check all errors that can be caused by the
master shutdown.
mysql-test/extra/rpl_tests/rpl_get_master_version_and_clock.test:
Added an 'if' statement to check all errors that can be caused by the master shutdown.
mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result:
Test result is updated due to the patch for Bug#46931.
Invalid (old?) table or database name in logs
The problem was still not completely fixed, due to quoting.
This is the server-side-only fix (in explain_filename); the change
in the InnoDB code from filename_to_tablename to explain_filename
must also be done before the bug is fully fixed.
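A sketch of why explain_filename needs the session: the quote
character depends on the session's SQL mode (ANSI_QUOTES). The
session_sketch type and quote_identifier() function below are
hypothetical stand-ins for THD and the real code.

    #include <cstdio>
    #include <string>

    // The only part of the session this sketch needs: whether ANSI_QUOTES
    // is in effect.
    struct session_sketch {
      bool ansi_quotes;
    };

    // Quote an identifier for an error/log message with the session's
    // quote character. This is why the fixed explain_filename takes the
    // session (thd) as a parameter instead of always using backticks.
    static std::string quote_identifier(const session_sketch *thd,
                                        const std::string &name) {
      char q = thd->ansi_quotes ? '"' : '`';
      return q + name + q;
    }

    int main() {
      session_sketch classic = { false }, ansi = { true };
      std::printf("%s\n", quote_identifier(&classic, "t1").c_str());  // `t1`
      std::printf("%s\n", quote_identifier(&ansi, "t1").c_str());     // "t1"
      return 0;
    }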
mysql-test/include/have_not_innodb_plugin.inc:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Added an include file to allow testing with only the 'old'
built-in innodb engine.
mysql-test/r/not_true.require:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Added require to match 'not' TRUE
mysql-test/r/partition_innodb_builtin.result:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New result file for partitioning specific to
the 'old' built-in innodb engine
mysql-test/r/partition_innodb_plugin.result:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New result file for partitioning specific to
the new plugin innodb engine
mysql-test/t/disabled.def:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Disabling the new test until the fix is
included in the InnoDB source too.
mysql-test/t/partition_innodb_builtin.test:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New test file for partitioning specific to
the 'old' built-in innodb engine
mysql-test/t/partition_innodb_plugin.test:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New test file for partitioning specific to
the new plugin innodb engine
sql/mysql_priv.h:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Added thd as a parameter to explain_filename
to be able to use the correct quote character
sql/sql_table.cc:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Changed explain_filename so that it does quoting correctly
according to the session's quote character.
- Create the "dummy" thread joinable and wait for it to
exit before continuing in 'my_thread_global_init'
- This way we know that the pthread library is initialized
by one thread only
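A minimal standalone illustration of the pattern using the plain
pthreads API (this is not the actual my_thread_global_init code).

    #include <pthread.h>
    #include <cstdio>

    // The "dummy" thread does nothing; it exists only to force the pthread
    // library to initialize itself.
    static void *dummy_thread(void *) {
      return 0;
    }

    int main() {
      pthread_t tid;
      pthread_attr_t attr;
      pthread_attr_init(&attr);
      // Create the thread joinable (instead of detached) ...
      pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
      if (pthread_create(&tid, &attr, dummy_thread, 0) != 0)
        return 1;
      // ... and wait for it to exit before continuing, so we know the
      // pthread library initialization was done by exactly one thread.
      pthread_join(tid, 0);
      pthread_attr_destroy(&attr);
      std::printf("pthread library initialized\n");
      return 0;
    }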
when compiled with Sun Studio compiler).
The issue is that the Sun Studio compiler calls destructors of
stack objects when pthread_exit() is called. That triggered an
assertion in the DBUG_ENTER()/DBUG_RETURN() validation logic (if
DBUG_ENTER() is used at the beginning of a function, all returns
should go through the DBUG_RETURN/DBUG_VOID_RETURN macros).
The fix is to explicitly use the DBUG_LEAVE macro.
value from the index (PRIMARY)
With the fix for BUG#46760, we correctly flag the presence of
row_type only when it has actually changed, which re-enables the
fast ALTER TABLE that was disabled by the fix for BUG#39200.
The changes made for BUG#46760 therefore cause the MySQL data
dictionaries to go out of sync, but that case is already handled by
InnoDB as part of BUG#44030.
The test was originally written to handle this, but we asked InnoDB
to update the test because the data dictionaries were in sync after
the fix for BUG#39200.
Adjusting the innodb-autoinc test case as mentioned in the comments.
mysql-test/lib/mtr_cases.pm:
Re-enable the innodb-autoinc test case for plugin as we have a common
result file.
mysql-test/r/innodb-autoinc.result:
Additional Fix for BUG#44030 - Error: (1500) Couldn't read the MAX(ID) autoinc
value from the index (PRIMARY)
Adjust the innodb-autoinc test case, as the patch for BUG#46760
enables the fast ALTER TABLE and makes the data dictionaries go out
of sync. This is expected in the test case.
mysql-test/t/innodb-autoinc.test:
Additional Fix for BUG#44030 - Error: (1500) Couldn't read the MAX(ID) autoinc
value from the index (PRIMARY)
Adjust the innodb-autoinc test case, as the patch for BUG#46760
enables the fast ALTER TABLE and makes the data dictionaries go out
of sync. This is expected in the test case.
The fix is reverted from 5.1 and mysql-pe as unnecessary (no
valgrind warnings there).
sql/sql_select.cc:
The fix is reverted from 5.1 and mysql-pe as unnecessary (no
valgrind warnings there).
The "socket" variable is not available on Windows even though
the --socket option can be used to specify the pipe name for
local connections that use a named pipe.
The solution is to ensure that the variable is always defined.
mysql-test/r/windows.result:
Add test case result for Bug#45498
mysql-test/t/windows.test:
Add test case for Bug#45498
sql/set_var.cc:
socket variable must always be present.
checksum)"
The problem was that the checksum of the GEOMETRY type used memory
addresses in the computation, making it non-repeatable and thus
useless.
(This patch is a backport from the 6.0 branch.)
mysql-test/r/myisam.result:
Test case for Bug#35570: identical tables must give identical checksums.
mysql-test/t/myisam.test:
Test case for Bug#35570: identical tables must give identical checksums.
sql/sql_table.cc:
Type GEOMETRY is implemented on top of type BLOB, so, just like for
BLOB, its 'field' contains pointers, which it makes no sense to
include in the checksum; the value rather has to be converted to a
string, and then we can compute the checksum over that.
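A self-contained sketch of the difference; the toy checksum()
function and the blob_in_row layout below are illustrative
stand-ins, not the real Field_blob representation.

    #include <cstdio>

    // Toy checksum; the point is what we feed it, not the algorithm.
    static unsigned long checksum(const unsigned char *p, unsigned len) {
      unsigned long crc = 0;
      while (len--)
        crc = crc * 31 + *p++;
      return crc;
    }

    int main() {
      // A BLOB/GEOMETRY column does not store its value in the row buffer;
      // the row holds a (length, pointer) pair pointing at external storage.
      const char *value = "POINT(1 1)";
      struct { unsigned length; const char *ptr; } blob_in_row = { 10, value };

      // Wrong: checksumming the row bytes mixes in a memory address, so two
      // identical tables produce different checksums.
      unsigned long wrong =
          checksum((const unsigned char *)&blob_in_row, sizeof(blob_in_row));

      // Fixed: fetch the value as a string and checksum those bytes.
      unsigned long right =
          checksum((const unsigned char *)blob_in_row.ptr, blob_in_row.length);

      std::printf("address-dependent: %lu, repeatable: %lu\n", wrong, right);
      return 0;
    }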
Despite copying the value of the old table's row type, we do not
always have to mark the row type as being specified. InnoDB uses
this flag to check whether it can do a fast ALTER TABLE or not.
Fixed by correctly flagging the presence of row_type only when it
has actually changed.
Added a test case for Bug#39200.
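A sketch of the flagging rule, using hypothetical names
(create_info_sketch, USED_ROW_FORMAT) instead of the real
HA_CREATE_INFO fields.

    #include <cstdio>

    enum row_type_sketch { ROW_DEFAULT, ROW_COMPACT, ROW_DYNAMIC };
    static const unsigned USED_ROW_FORMAT = 1u << 0;  // "row format specified"

    struct create_info_sketch {
      row_type_sketch row_type;
      unsigned        used_fields;
    };

    // Copy the old table's row type when ALTER TABLE did not mention one,
    // but only mark it as "specified" if it really differs; otherwise the
    // engine would needlessly refuse the fast ALTER TABLE path.
    static void inherit_row_type(create_info_sketch *info,
                                 row_type_sketch old_row_type) {
      if (info->row_type == ROW_DEFAULT) {
        info->row_type = old_row_type;
        return;                               // unchanged: leave flag unset
      }
      if (info->row_type != old_row_type)
        info->used_fields |= USED_ROW_FORMAT; // genuinely changed
    }

    int main() {
      create_info_sketch info = { ROW_DEFAULT, 0 };
      inherit_row_type(&info, ROW_COMPACT);
      std::printf("used_fields=%u (0 => fast ALTER TABLE possible)\n",
                  info.used_fields);
      return 0;
    }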
query
The fix for Bug#46749 removed the check for OUTER_REF_TABLE_BIT and
substituted it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should still
be checked in addition to depended_from (because it is not set in
all cases and does not contradict the depended_from check).
Fixed by bringing the old condition back as a complement to the new
one.
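The restored condition, in a standalone sketch with hypothetical
types (item_sketch is not the real Item_ident).

    #include <cstdio>

    // Simplified model: the bit marking a reference to a table of an outer
    // query block, and an item that may also record which outer query
    // block it depends on.
    static const unsigned long long OUTER_REF_BIT = 1ULL << 63;

    struct item_sketch {
      unsigned long long used_tables;    // table map, may contain OUTER_REF_BIT
      const void        *depended_from;  // outer query block, or null
    };

    // The restored check: treat the item as an outer reference if either
    // signal is present, since neither one is set in all cases.
    static bool is_outer_reference(const item_sketch &item) {
      return (item.used_tables & OUTER_REF_BIT) != 0 ||
             item.depended_from != 0;
    }

    int main() {
      item_sketch a = { OUTER_REF_BIT, 0 };  // only the bit is set
      item_sketch b = { 0, &a };             // only depended_from is set
      std::printf("%d %d\n", is_outer_reference(a), is_outer_reference(b));
      return 0;
    }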
But there is no Last_IO_Error reported.
On the master, if a binary log event is larger than
max_allowed_packet, ER_MASTER_FATAL_ERROR_READING_BINLOG and the
specific reason for the error are sent to a slave when it requests a
dump from the master, causing the I/O thread to stop.
On a slave, the I/O thread stops when it receives a packet larger
than max_allowed_packet.
In both cases, however, no Last_IO_Error was reported.
This patch adds code to report Last_IO_Error and the exact reason
before stopping the I/O thread, and also reports the case where an
out-of-memory error occurs while handling packets from the master.
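A minimal sketch of the reporting idea; io_thread_sketch and
report_and_stop() are hypothetical stand-ins for the slave's
reporting code, not the actual implementation.

    #include <cstdio>
    #include <cstring>

    // Simplified stand-in for the slave's status bookkeeping: the string
    // shown as Last_IO_Error in SHOW SLAVE STATUS.
    struct io_thread_sketch {
      char last_io_error[256];
    };

    // Record the reason before stopping, so SHOW SLAVE STATUS can display
    // it instead of leaving Last_IO_Error empty.
    static void report_and_stop(io_thread_sketch *io, const char *reason) {
      std::strncpy(io->last_io_error, reason, sizeof(io->last_io_error) - 1);
      io->last_io_error[sizeof(io->last_io_error) - 1] = '\0';
      // ... the real code would now terminate the I/O thread ...
    }

    int main() {
      io_thread_sketch io = { "" };
      report_and_stop(&io,
          "Got a packet bigger than 'max_allowed_packet' bytes");
      std::printf("Last_IO_Error: %s\n", io.last_io_error);
      return 0;
    }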
sql/sql_repl.cc:
The master sends the specific reason instead of "error reading log
entry" to the slave that is requesting a dump, if a fatal error is
returned by the read_log_event function.