A patch for this bug has already been pushed; this is a minor follow-up change.
The database to be used after re-enabling the disabled code is 'TEST',
but 'MYSQL' was being used instead. This change corrects that.
Don't do this for echo; instead:
1) Enable replacements also for assignment from backquoted SQL
2) Allow replace_regex to take a variable for the *entire* argument list
With this, the test can be amended, but only in its trunk version.
Post-push fixes for show_slave_io_error=1 of wait_for_slave_io_error.inc:
handle Unix- and Windows-format paths specifically, so that few tests have to
change show_slave_io_error to zero.
The bug case is similar to one fixed earlier, bug_49536.
A deadlock involving LOCK_log is possible because the thread running the purge
holds LOCK_log even though there is no need to do so; that fact was
exploited by the earlier bug fixes.
Fixed with a small reengineering of rotate_and_purge(): two new methods are added,
and a policy is set up to execute those instead of the former
rotate_and_purge(RP_LOCK_LOG_IS_ALREADY_LOCKED).
The policy for using rotate() and purge() is that if the caller acquires LOCK_log itself,
it should call rotate(), release the mutex, and then run purge().
A side effect of this patch is a refinement of the error message of bug@11747416 to print
the whole path.
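A minimal SQL sketch of the two operations whose locking this patch reorganizes
(an illustration only, not the added regression test; it assumes binary logging is enabled):
  FLUSH LOGS;                        -- rotates the binary log
  PURGE BINARY LOGS BEFORE NOW();    -- purges binary logs older than the current time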
mysql-test/suite/rpl/r/rpl_cant_read_event_incident.result:
the file name printing is changed to a relative path instead of just the file name.
mysql-test/suite/rpl/r/rpl_log_pos.result:
the file name printing is changed to a relative path instead of just the file name.
mysql-test/suite/rpl/r/rpl_manual_change_index_file.result:
the file name printing is changed to a relative path instead of just the file name.
mysql-test/suite/rpl/r/rpl_packet.result:
the file name printing is changed to a relative path instead of just the file name.
mysql-test/suite/rpl/r/rpl_rotate_purge_deadlock.result:
new result file is added.
mysql-test/suite/rpl/t/rpl_cant_read_event_incident.test:
The test for that bug cannot satisfy both the Windows and Unix backslash
interpretation, so execution on Windows is bypassed.
mysql-test/suite/rpl/t/rpl_rotate_purge_deadlock-master.opt:
new opt file is added.
mysql-test/suite/rpl/t/rpl_rotate_purge_deadlock.test:
regression test is added as well as verification of a
possible side effect of the fixes is tried.
sql/log.cc:
LOCK_log is never taken during execution of the log purging routine.
The former MYSQL_BIN_LOG::rotate_and_purge() is made to always
acquire and release LOCK_log itself.
If the caller takes the mutex itself, it has to use the new rotate() and purge()
methods in combination and must never let purge() run with LOCK_log held.
The old method is split apart so that the caller can choose either approach.
Simulation of concurrently rotating/purging threads is added.
sql/log.h:
new rotate(), purge() methods are added to be used instead of
the former rotate_and_purge(RP_LOCK_LOG_IS_ALREADY_LOCKED).
rotate_and_purge() signature is changed. A caller that has acquired LOCK_log itself
should not call rotate_and_purge() but rather {rotate(), purge()}.
sql/rpl_injector.cc:
changes to reflect the new rotate_and_purge() signature.
sql/sql_class.h:
unnecessary constants are removed.
sql/sql_parse.cc:
changes to reflect the new rotate_and_purge() signature.
sql/sql_reload.cc:
changes to reflect the new rotate_and_purge() signature.
sql/sql_repl.cc:
followup for bug@11747416: the file name printing is changed to a relative
path instead of just the file name.
This was an attempt to address problems with the Bug#12612184 fix.
Even with this follow-up fix, crash recovery can be broken.
Let us fix the bug later.
In the ON UPDATE CASCADE clause of FOREIGN KEY constraints, the
calculated update vector was not fully initialized. This bug was
introduced in the InnoDB Plugin when implementing support for
ROW_FORMAT=DYNAMIC.
Additionally, the data type information was not initialized, but
apparently it has never been needed in this case. Nevertheless, it is
not good programming practice to pass uninitialized values around.
calc_row_difference(): Declare the update field uninitialized in
Valgrind. Copy the data type information as well, except when the
field is SQL NULL. In the built-in InnoDB, initialize
ufield->extern_storage = FALSE (an initialization bug that had gone
unnoticed thus far). The InnoDB Plugin and later have this flag in
dfield_t and have always initialized it properly.
row_ins_cascade_calc_update_vec(): Reduce the scope of some
pointers. Initialize orig_len. (This caused the bug in InnoDB Plugin
and later.)
row_ins_foreign_check_on_constraint(): Simplify a condition. Declare
the update vector uninitialized.
rb:771 approved by Jimmy Yang
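A hedged SQL sketch of the construct that exercises the code path fixed above
(names are illustrative; ROW_FORMAT=DYNAMIC additionally assumes
innodb_file_per_table=1 and innodb_file_format=Barracuda):
  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT PRIMARY KEY, pid INT,
                      FOREIGN KEY (pid) REFERENCES parent (id) ON UPDATE CASCADE)
    ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1, 1);
  UPDATE parent SET id = 2;   -- the cascade computes the update vector discussed above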
PARENT FOR OTHER ONE
Do not try to look up the key_nr'th key in 'table', because there may be no such
key there. key_nr is the number of the key in the _child_ table, not
in the parent table.
Instead, just print the fields of the record that are covered by the first key
defined on the parent table.
This bug gets a better fix in MySQL 5.6, which is too risky for 5.1 and 5.5.
Approved by: Jon Olav Hauglid (via IM)
TESTS: CRASH, CORRUPTION, 4G MEMOR
Issue: Valgrind errors due to checksum and optimize
query against archive tables with null columns.
Table record buffer was not initialized.
Solution: Initialize the record buffer.
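A hedged reproduction sketch for the scenario above (table and column names are illustrative):
  CREATE TABLE t_archive (a INT, b VARCHAR(32)) ENGINE=ARCHIVE;
  INSERT INTO t_archive VALUES (1, NULL), (2, 'filled');
  CHECKSUM TABLE t_archive;   -- reads the record buffer, including the NULL column
  OPTIMIZE TABLE t_archive;   -- rebuilds the archive data file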
USING MYISAM_USE_MMAP ON WINDOWS
When OPTIMIZE/REPAIR TABLE is switching to new data file,
old data file is removed while memory mapping is still
active.
With the 5.1 implementation of nt_share_delete() it is not
permitted to remove a memory-mapped file.
This fix disables memory mapping for mi_repair() operations.
mysql-test/r/myisam.result:
A test case for BUG#11757032.
mysql-test/t/myisam.test:
A test case for BUG#11757032.
storage/myisam/ha_myisam.cc:
mi_repair*() functions family use file I/O even if memory
mapping is available.
Since mixing mmap I/O and file I/O may cause various artifacts,
memory mapping must be disabled.
storage/myisam/mi_delete_all.c:
Clean-up: do not attempt to remap file after truncate, since
there is nothing to map.
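A hedged SQL sketch of the affected operations (assumes the privilege to set the
global variable; the table name is illustrative):
  SET GLOBAL myisam_use_mmap = 1;      -- memory-map MyISAM data files
  CREATE TABLE t_myisam (a INT, b CHAR(10)) ENGINE=MyISAM;
  INSERT INTO t_myisam VALUES (1, 'one'), (2, 'two');
  OPTIMIZE TABLE t_myisam;             -- goes through mi_repair*(), now with mmap disabled
  REPAIR TABLE t_myisam;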
The assertion in InnoDB is triggered in this way:
1. The MySQL server does a lookup on the primary key with the full key;
   InnoDB decides not to store the cursor position because
   "any index_next/prev call will return EOF anyway".
2. The server asks InnoDB to return the next record in the index, and the
   assertion is triggered because no cursor position is stored.
It happens when a unique search (match_mode=ROW_SEL_EXACT)
in the clustered index is performed. InnoDB has never stored
the cursor position after a unique key lookup in the
clustered index because storing the position is an expensive
operation. The bug was introduced by
WL3220 'Loose index scan for aggregate functions'.
The fix is to disallow loose index scan optimization
for AGG_FUNC(DISTINCT ...) if GROUP_MIN_MAX quick select
uses clustered key.
mysql-test/r/group_min_max_innodb.result:
test case
mysql-test/t/group_min_max_innodb.test:
test case
sql/opt_range.cc:
disallow loose index scan optimization for
AGG_FUNC(DISTINCT ...) if GROUP_MIN_MAX
quick select uses clustered key.
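A hedged sketch of the affected query shape (illustrative data; before the fix a
GROUP_MIN_MAX plan over the clustered key could be chosen for such a query):
  CREATE TABLE t_gmm (a INT, b INT, PRIMARY KEY (a, b)) ENGINE=InnoDB;
  INSERT INTO t_gmm VALUES (1, 1), (1, 2), (2, 1), (2, 3);
  SELECT a, COUNT(DISTINCT b) FROM t_gmm GROUP BY a;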
Modified the function do_get_error() in mysqltest.cc to handle multiple variables passed.
Added a test case to mysqltest.test to verify the handling of multiple errors passed.
This patch corrects a defect in the building of the DELETE commands for
disabling a plugin whereby only the original plugin data was deleted. If there
were other plugins, the delete did not remove the rows. The code has been
changed to remove all rows from the mysql.plugin table that were inserted when
the plugin was loaded. The test has also been changed to correctly identify if
all rows have been deleted.
Buffer over-run on all platforms, crash on windows, wrong result on other platforms,
when rounding numbers which start with 999999999 and have
precision = 9 or 18 or 27 or 36 ...
mysql-test/r/type_newdecimal.result:
New test cases.
mysql-test/t/type_newdecimal.test:
New test cases.
sql/my_decimal.h:
Add sanity checking code, to catch buffer over/under-run.
strings/decimal.c:
The original initialization of intg1 (add 1 if buf[0] == DIG_MAX)
will set p1 to point outside the buffer, and the loop to copy the original value
while (buf0 < p0)
*(--p1) = *(--p0);
will overwrite memory outside the my_decimal object.
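Hedged illustrations of the affected query shape (not necessarily the exact test
cases added): rounding values whose digits begin with 999999999 at a precision
that is a multiple of 9.
  SELECT ROUND(999999999, -9);              -- expect 1000000000
  SELECT ROUND(999999999999999999, -18);    -- expect 1000000000000000000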
When a temporary table is used for result sorting,
the result field for the gconcat function is created using
the group_concat_max_len size. This leads to result truncation
when character_set_results is a multi-byte character set, due
to an insufficient tmp table field size.
The fix is to increase the temporary table field size for
gconcat. The method make_string_field() is overloaded
for the Item_func_group_concat class and uses a
max_characters * collation.collation->mbmaxlen size for the
result field. max_characters is the maximum number of characters
that can fit into max_length.
mysql-test/r/ctype_utf16.result:
test result
mysql-test/r/ctype_utf32.result:
test result
mysql-test/r/ctype_utf8.result:
test result
mysql-test/t/ctype_utf16.test:
test case
mysql-test/t/ctype_utf32.test:
test case
mysql-test/t/ctype_utf8.test:
test case
sql/item.h:
make Item::make_string_field() virtual
sql/item_sum.cc:
added the Item_func_group_concat::make_string_field(TABLE *table) method,
which uses a max_characters * collation.collation->mbmaxlen size for the
result item. max_characters is the maximum number of characters that can
fit into max_length.
sql/item_sum.h:
added Item_func_group_concat::make_string_field(TABLE *table) method
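A hedged illustration of the affected case (illustrative names; the point is a
multi-byte character_set_results combined with a grouped GROUP_CONCAT evaluated
through a temporary table):
  SET NAMES utf8;
  CREATE TABLE t_gc (grp INT, s VARCHAR(200) CHARACTER SET utf8);
  INSERT INTO t_gc VALUES (1, REPEAT('я', 150)), (1, REPEAT('ю', 150));
  SET SESSION group_concat_max_len = 1024;
  SELECT grp, GROUP_CONCAT(s ORDER BY s) FROM t_gc GROUP BY grp;   -- expect the full, untruncated result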
This is a redo for 5.5.
Added 'innodb_file_format_max' as a variable whose changes are ignored.
Tests that had to restore this variable are amended.
Two tests assumed it to be Antelope; make sure these run on a freshly
started server.
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed in another connection it could lead to a crash.
The problem is the following:
An attempt to obtain table statistics for the subselect table in a killed query
fails with an error. So JOIN::optimize() for the subquery fails, but
this does not prevent further subquery evaluation.
At the first subquery execution JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE, which prevents
further subquery evaluation. At the second call,
JOIN::optimize() does not happen as 'JOIN::optimized' is TRUE,
and in case of an uncacheable subquery the 'executed' flag is set
to FALSE before subquery evaluation. So we lose the 'optimize stage'
error indication (see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: for
error indication at the JOIN::optimize() stage and as an
indication of subquery execution. That seems wrong,
as the flag can be reset.
mysql-test/r/error_simulation.result:
test case
mysql-test/t/error_simulation.test:
test case
sql/item_subselect.cc:
added new flag subselect_single_select_engine::optimize_error
which is used for error detection which could happen at optimize
stage.
sql/item_subselect.h:
added new flag subselect_single_select_engine::optimize_error
sql/sql_select.cc:
test case
Fix for commit_1innodb failure on pb2.
Background: as status increment differs for an unsafe statement
when logged in stmt and row format, mtr throws a content mismatch
error.
Fix: call p_verify_status_increment with different arguments
for the logging format as stmt and row/mixed, and disable the query log.
Problem: commit_1innodb fails on pb2 after the patch for BUG#11758262
Background: Certain statements threw warnings only in statement mode, causing
a result content mismatch.
Fix: disabled warnings from the statements.
Binary log of master can get a partially logged event if the server
runs out of disk space and, while waiting for some space to be freed,
is shut down (or crashes). If the server is not stopped, it will just
wait endlessly for space to be freed, thus no partial event anomaly
occurs. The restarted master server has had a dubious policy of sending
the incomplete event to the slave, which apparently can't handle it.
Although an error was printed out, sending the event with an unclear
error message was a source of confusion.
Actually, the problem of the presence of an incomplete event in the binary log
was already fixed by WL 5493 (which was merged to our current trunk
branch, major version 5.6). That fix makes the server truncate the
binary log on server restart and recovery.
However, the 5.5 master can't do that. So the current issue is the problem of
sending incomplete events to the slave by a 5.5 master.
It is fixed in this patch by changing the policy so that only complete
events are pushed by the dump thread to the IO thread. In addition,
the error text that master sends to the slave when an incomplete event
is found, now states that incomplete event may have been caused by an
out-of-disk space situation and provides coordinates of
the first and the last event bytes read.
mysql-test/std_data/bug11747416_32228_binlog.000001:
a binlog is added with the last event only partially written.
mysql-test/suite/rpl/r/rpl_cant_read_event_incident.result:
new result file is added.
mysql-test/suite/rpl/r/rpl_log_pos.result:
results updated.
mysql-test/suite/rpl/r/rpl_manual_change_index_file.result:
results updated.
mysql-test/suite/rpl/r/rpl_packet.result:
results updated.
mysql-test/suite/rpl/t/rpl_cant_read_event_incident.test:
regression test for bug#11747416 : 32228 A disk full makes binary log corrupt
is added.
sql/share/errmsg-utf8.txt:
The explanatory part of the ER_MASTER_FATAL_ERROR_READING_BINLOG error message is doubled in size
in order to fit the updated version, which carries some more info.
sql/sql_repl.cc:
The error text that the master delivers to the slave when reading from the binlog fails
is made clearer.
The policy of sending a partial event out to the slave anyway is removed.
1 - If a user had SHOW VIEW and SELECT privileges on a view and
this view was referencing another view, EXPLAIN SELECT on the outer
view (that the user had privileges on) could reveal the structure
of the underlying "inner" view as well as the number of rows in
the underlying tables, even if the user had privileges on none of
these referenced objects.
This happened because we used DEFINER's UID ("SUID") not just for
the view given in EXPLAIN, but also when checking privileges on
the underlying views (where we should use the UID of the EXPLAIN's
INVOKER instead).
We no longer run the EXPLAIN SUID (with DEFINER's privileges).
This prevents a possible exploit and makes permissions more
orthogonal.
2 - EXPLAIN SELECT would reveal a view's structure even if the user
did not have SHOW VIEW privileges for that view, as long as they
had SELECT privilege on the underlying tables.
Instead of requiring both SHOW VIEW privilege on a view and SELECT
privilege on all underlying tables, we were checking for presence
of either of them.
We now explicitly require SHOW VIEW and SELECT privileges on
the view we run EXPLAIN SELECT on, as well as all its
underlying views. We also require SELECT on all relevant
tables.
mysql-test/r/view_grant.result:
add extensive tests to illustrate desired behavior and
prevent regressions (as always).
mysql-test/t/view_grant.test:
add extensive tests to illustrate desired behavior and
prevent regressions (as always).
sql/sql_view.cc:
We no longer run the EXPLAIN SUID (with DEFINER's privileges).
To achieve this, we use a temporary, SUID-less TABLE_LIST for
the views while checking privileges.
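A hedged sketch of the new privilege requirements (all names are illustrative):
  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT);
  CREATE VIEW db1.v_inner AS SELECT a FROM db1.t1;
  CREATE VIEW db1.v_outer AS SELECT a FROM db1.v_inner;
  CREATE USER 'viewer'@'localhost';
  GRANT SELECT, SHOW VIEW ON db1.v_outer TO 'viewer'@'localhost';
  -- As 'viewer': SELECT on v_outer works, but EXPLAIN SELECT on it is now rejected,
  -- because the user lacks SHOW VIEW/SELECT on v_inner and SELECT on t1.
  EXPLAIN SELECT * FROM db1.v_outer;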
Problem: The following statements can cause the slave to go out of sync
if logged in statement format:
INSERT IGNORE ... SELECT
INSERT ... SELECT ... ON DUPLICATE KEY UPDATE
REPLACE ... SELECT
UPDATE IGNORE
CREATE ... IGNORE SELECT
CREATE ... REPLACE SELECT
Background: Since the order of the rows returned by the SELECT
statement (or otherwise) may differ on master and slave,
the above statements may cause the slave to go out of sync with
the master.
Fix:
Issue a warning when statements like the above are executed and
the binlogging format is statement. If the logging format is mixed,
use row based logging. Marking a statement as unsafe has been
done in sql/sql_parse.cc instead of sql/sql_yacc.cc, because while
parsing for a token has been done we cannot be sure if the parsing
of the other tokens has been done as well.
Six new warning messages have been added, one for each unsafe statement.
binlog.binlog_unsafe.test has been updated to incorporate these additional unsafe statements.
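A hedged illustration of one of the newly flagged statements (assumes binary logging is
enabled and the session is allowed to set binlog_format; the warning text is paraphrased):
  SET SESSION binlog_format = 'STATEMENT';
  CREATE TABLE src (a INT PRIMARY KEY);
  CREATE TABLE dst (a INT PRIMARY KEY);
  INSERT IGNORE INTO dst SELECT a FROM src;   -- raises an "unsafe for statement-based replication" warning
  SHOW WARNINGS;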
mysql-test/extra/rpl_tests/rpl_insert_duplicate.test:
Test removed: the test is redundant, as the same scenario is covered by
rpl.rpl_insert_ignore; its content was added to rpl.rpl_insert_ignore.test.
mysql-test/extra/rpl_tests/rpl_insert_id.test:
Warnings disabled for the unsafe statements.
mysql-test/extra/rpl_tests/rpl_insert_ignore.test:
1. Disabled warnings for the unsafe statements.
2. As INSERT ... IGNORE is an unsafe statement, an INSERT IGNORE not changing any rows
will not be logged in the binary log in the ROW and MIXED modes. It will, however, be logged
in STATEMENT mode.
mysql-test/r/commit_1innodb.result:
updated result file
mysql-test/suite/binlog/r/binlog_stm_blackhole.result:
Updated result file.
mysql-test/suite/binlog/r/binlog_unsafe.result:
updated result file
mysql-test/suite/binlog/t/binlog_unsafe.test:
added tests for the statements marked as unsafe.
mysql-test/suite/rpl/r/rpl_insert_duplicate.result:
File removed: result file of rpl_insert_duplicate, which has been removed.
mysql-test/suite/rpl/r/rpl_insert_ignore.result:
Added the content of rpl.rpl_insert_duplicate here.
mysql-test/suite/rpl/r/rpl_insert_select.result:
Result file removed, as the corresponding test has been removed.
mysql-test/suite/rpl/r/rpl_known_bugs_detection.result:
Updated result file.
mysql-test/suite/rpl/t/rpl_insert_duplicate.test:
File Removed: this was a wrapper for rpl.rpl_insert_duplicate.test, which has been removed.
mysql-test/suite/rpl/t/rpl_insert_select.test:
File removed: this test became redundant after this fix; it showed how INSERT IGNORE ... SELECT breaks replication, which is now handled by this fix.
mysql-test/suite/rpl/t/rpl_known_bugs_detection.test:
Since all the tests here exercise statement-based bugs, having mixed format
forces the events to be written in row format, which causes the
test to fail, as certain known bugs do not occur when the event is logged in row format.
sql/share/errmsg-utf8.txt:
added 6 new Warning messages.
sql/sql_lex.cc:
Added 6 new error identifiers [ER_BINLOG_STMT_UNSAFE_*].
sql/sql_lex.h:
Added 6 new BINLOG_STMT_UNSAFE_* enums to identify the type of unsafe statement dealt with in this bug.
sql/sql_parse.cc:
added check for specific queries and marked them as unsafe.
Let CMake parse files with a ".in" suffix containing includes
Added default.release.in to replace default.release
Explained in README
New patch: replace 'include' with '#include' to avoid accidental matches
SYSTEM VARIABLE NAME SQL_MAX_JOIN_SI
BACKGROUND:
ER_TOO_BIG_SELECT refers to SQL_MAX_JOIN_SIZE, which is the
old name for MAX_JOIN_SIZE.
FIX:
Support for the old name SQL_MAX_JOIN_SIZE is removed in MySQL 5.6;
it is renamed to MAX_JOIN_SIZE. So the errmsg.txt
and mysql.cc files have been updated, and the corresponding result
files have also been updated.
CREATE_TIME IN INFORMATION_SC
It was impossible to determine MEMORY table creation time,
since it wasn't stored/exposed.
With this patch creation time is saved and it is available via
I_S.TABLES.CREATE_TIME.
Note: it was decided that additional analysis is required before
implementing UPDATE_TIME. Thus this patch doesn't store UPDATE_TIME.
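A hedged illustration (the table name is illustrative):
  CREATE TABLE t_mem (a INT) ENGINE=MEMORY;
  SELECT CREATE_TIME, UPDATE_TIME
    FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 't_mem';
  -- CREATE_TIME is now populated; UPDATE_TIME remains NULL, as noted above.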
Added 'innodb_file_format_check' as a variable whose changes are ignored.
Tests that had to restore this variable are amended.
Two tests assumed it to be Antelope; make sure these run on a freshly
started server.
For 5.5, apparently innodb_file_format_max is the one to ignore
The problem occurred when indexes are added between the time that an
UNDO record is created and the time that the purge thread comes around
and deletes the old secondary index entries. The purge thread would
hit an assert when trying to build a secondary index entry for
searching. The problem was that the old value of those fields were not
in the UNDO record since they were not part of an index when the UPDATE
occurred.
A test case was added to innodb-index.test.
mysql-test/r/func_str.result:
New test cases.
mysql-test/t/func_str.test:
New test cases.
strings/dtoa.c:
Increasing the buffer size slightly made some queries pass without leaks.
Adding Bfree(p51, alloc) fixed the remaining leaks.
Added options --boot-gdb etc.
Extended gdb_arguments() with optional input argument
Cannot use set args in gdb init file, as run < <input> kills them (?)
For some reason the test authentication plugin accepted a connection with an arbitrary password. But the intention of the plugin is that the password should equal the authentication string, and in later versions of the server the connection fails if the password is wrong. So I have updated the auth_rpl test to specify the correct password.
FULLTEXT INDEXES
myisamchk may create incorrect fulltext index for compressed
tables. Incorrect data pointer size was used while creating
fulltext index.
mysql-test/r/myisampack.result:
A test case for BUG#11761180.
mysql-test/t/myisampack.test:
A test case for BUG#11761180.
storage/myisam/ft_boolean_search.c:
rec_reflength on share may have adjustments required for
compressed tables and must be used instead of rec_reflength
on base info.
storage/myisam/ft_nlq_search.c:
rec_reflength on share may have adjustments required for
compressed tables and must be used instead of rec_reflength
on base info.
storage/myisam/mi_check.c:
rec_reflength on share may have adjustments required for
compressed tables and must be used instead of rec_reflength
on base info.
storage/myisam/mi_write.c:
rec_reflength on share may have adjustments required for
compressed tables and must be used instead of rec_reflength
on base info.
(also 5.5+ solution for bug#11766879/bug#60106)
The valgrind warning was due to an unused 'new handler_add_index(...)'
which was never freed.
The error handling did not work (it fails as in bug#11766879) and
the implementation was not as transparent as it could be; therefore I
made it a bit simpler and more transparent to the underlying handlers.
This way it follows the API better, and the error handling works and
is also now tested.
Also added a debug test to verify the error handling.
Improved according to Jon Olav's review:
Added class ha_partition_add_index.
Also added base class Sql_alloc to handler_add_index.
Update 3.
Quick fix: run mysql-stress-test.pl via a wrapper test
Amend mtr to run just that test when using --stress
Updated mysql-stress-test.pl to exit(1) if wrong options
Amendment to previous patch:
Failure in CONV() should return NULL instead of
an empty set.
When compiled on Windows or Solaris the function
Item_func_conv::val_str() doesn't fail on
longlong2str() but finds an earlier exit path
based on the attributes of the arguments.
This exit path returns NULL on failure and as a
consequence the original patch caused different
test results depending on the OS used.
Connection of slave to master using a replication account which authenticates
with an external plugin was not possible.
Fixed by making sure that the CLIENT_PLUGIN_AUTH capability is set when client connects using mysql_real_connect(). Also, a plugin-dir path used by client library to locate authentication plugins is set based on the analogous server setting. This is done in connect_to_master() function before a call to mysql_real_connect().
Call handle_error() instead of die() when evaluating these
Must remember "current command" with link to errors to ignore
Added test cases to mysqltest.test
Added a keyword ONCE to add to those commands
Some internal tables keep track of which properties are
temporarily overridden.
Added tests in mysqltest.test
Updates to other tests will be done later
This patch corrects an error encountered in PB where Windows machines
built in release mode have an extraneous parameter added in place
of the --console option. This is caused by the insertion of '(null)'
instead of an empty string. In non-debug mode, the string is explicitly
set to an empty string.
Patch also fixes a result mismatch on Windows machines.
Failure to check the return state of a longlong2str() call
caused a crash. This could happen if a user executed the sql
function CONV() with certain parameters.
The patch fixes the issue by checking that the returned pointer
isn't NULL.
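A hedged illustration of CONV() calls that take a failure path (the exact crashing
parameters from the bug report are not reproduced here); after the fix such failures
return NULL instead of crashing:
  SELECT CONV('f', 16, 10);    -- normal use, returns '15'
  SELECT CONV(10, 2, 37);      -- invalid target base, expect NULL
  SELECT CONV('zzz', 37, 10);  -- invalid source base, expect NULL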
This fix was accidentally pushed to mysql-5.1 after the 5.1.59 clone-off in
bzr revision id marko.makela@oracle.com-20110829081642-z0w992a0mrc62s6w
with the fix of Bug#12704861 Corruption after a crash during BLOB update
but not merged to mysql-5.5 and upwards.
In the Barracuda formats, the clustered index record no longer
contains a prefix of off-page columns. Because of this, the undo log
must contain these prefixes, so that purge and multi-versioning will
continue to work. However, this also means that an undo log record can
become too big to fit in an undo log page. (It is a limitation of the
undo log that undo records cannot span across multiple pages.)
In case the checks for undo log size fail when CREATE TABLE or CREATE
INDEX is executed, we need a fallback that blocks a modification
operation when the undo log record would exceed the maximum size.
trx_undo_free_last_page_func(): Renamed from trx_undo_free_page_in_rollback().
Define the trx_t parameter only in debug builds.
trx_undo_free_last_page(): Wrapper for trx_undo_free_last_page_func().
Pass the trx_t parameter only in debug builds.
trx_undo_truncate_end_func(): Renamed from trx_undo_truncate_end().
Define the trx_t parameter only in debug builds. Rewrite a for(;;) loop
as a while loop for clarity.
trx_undo_truncate_end(): Wrapper for trx_undo_truncate_end_func().
Pass the trx_t parameter only in debug builds.
trx_undo_erase_page_end(): Return TRUE if the page was non-empty
to begin with. Refuse to erase empty pages.
trx_undo_report_row_operation(): If the page for which the undo log
was too big was empty, free the undo page and return DB_TOO_BIG_RECORD.
rb:749 approved by Inaam Rana
GROUPING BY FUNCTIONS.... (PART
The bug was introduced in a patch for bug 49897.
Problem: The assertion inserted by the original patch to guard against
zero-lenght sort keys during merge phase triggers also when the whole
set fits in memory.
Fix: Move assert so that it does not trigger if the whole set is in
memory.
mysql-test/r/group_by.result:
Add test for bug#11765254
mysql-test/t/group_by.test:
Add test for bug#11765254
sql/filesort.cc:
Move assertion
Converting the number zero to binary and back yielded the number zero,
but with no digits, i.e. zero precision.
This made the multiply algorithm go haywire in various ways.
include/decimal.h:
Document struct st_decimal_t
mysql-test/r/type_newdecimal.result:
New test case (valgrind warnings)
mysql-test/t/type_newdecimal.test:
New test case (valgrind warnings)
sql/my_decimal.h:
Remove the HAVE_purify enabled/disabled code.
strings/decimal.c:
Make a proper zero, with non-zero precision.
The fix of Bug#12612184 broke crash recovery. When a record that
contains off-page columns (BLOBs) is updated, we must first write redo
log about the BLOB page writes, and only after that write the redo log
about the B-tree changes. The buggy fix would log the B-tree changes
first, meaning that after recovery, we could end up having a record
that contains a null BLOB pointer.
Because we will be redo logging the writes of the off-page columns
before the B-tree changes, we must make sure that the pages chosen for
the off-page columns are free both before and after the B-tree
changes. In this way, the worst thing that can happen in crash
recovery is that the BLOBs are written to free pages, but the B-tree
changes are not applied. The BLOB pages would correctly remain free in
this case. To achieve this, we must allocate the BLOB pages in the
mini-transaction of the B-tree operation. A further quirk is that BLOB
pages are allocated from the same file segment as leaf pages. Because
of this, we must temporarily "hide" any leaf pages that were freed
during the B-tree operation by "fake allocating" them prior to writing
the BLOBs, and freeing them again before the mtr_commit() of the
B-tree operation, in btr_mark_freed_leaves().
btr_cur_mtr_commit_and_start(): Remove this faulty function that was
introduced in the Bug#12612184 fix. The problem that this function was
trying to address was that when we did mtr_commit() the BLOB writes
before the mtr_commit() of the update, the new BLOB pages could have
overwritten clustered index B-tree leaf pages that were freed during
the update. If recovery applied the redo log of the BLOB writes but
did not see the log of the record update, the index tree would be
corrupted. The correct solution is to make the freed clustered index
pages unavailable to the BLOB allocation. This function is also a
likely culprit of InnoDB hangs that were observed when testing the
Bug#12612184 fix.
btr_mark_freed_leaves(): Mark all freed clustered index leaf pages of
a mini-transaction allocated (nonfree=TRUE) before storing the BLOBs,
or freed (nonfree=FALSE) before committing the mini-transaction.
btr_freed_leaves_validate(): A debug function for checking that all
clustered index leaf pages that have been marked free in the
mini-transaction are consistent (have not been zeroed out).
btr_page_alloc_low(): Refactored from btr_page_alloc(). Return the
number of the allocated page, or FIL_NULL if out of space. Add the
parameter "mtr_t* init_mtr" for specifying the mini-transaction where
the page should be initialized, or if this is a "fake allocation"
(init_mtr=NULL) by btr_mark_freed_leaves(nonfree=TRUE).
btr_page_alloc(): Add the parameter init_mtr, allowing the page to be
initialized and X-latched in a different mini-transaction than the one
that is used for the allocation. Invoke btr_page_alloc_low(). If a
clustered index leaf page was previously freed in mtr, remove it from
the memo of previously freed pages.
btr_page_free(): Assert that the page is a B-tree page and it has been
X-latched by the mini-transaction. If the freed page was a leaf page
of a clustered index, link it by a MTR_MEMO_FREE_CLUST_LEAF marker to
the mini-transaction.
btr_store_big_rec_extern_fields_func(): Add the parameter alloc_mtr,
which is NULL (old behaviour in inserts) and the same as local_mtr in
updates. If alloc_mtr!=NULL, the BLOB pages will be allocated from it
instead of the mini-transaction that is used for writing the BLOBs.
fsp_alloc_from_free_frag(): Refactored from
fsp_alloc_free_page(). Allocate the specified page from a partially
free extent.
fseg_alloc_free_page_low(), fseg_alloc_free_page_general(): Add the
parameter "mtr_t* init_mtr" for specifying the mini-transaction where
the page should be initialized, or NULL if this is a "fake allocation"
that prevents the reuse of a previously freed B-tree page for BLOB
storage. If init_mtr==NULL, try harder to reallocate the specified page
and assert that it succeeded.
fsp_alloc_free_page(): Add the parameter "mtr_t* init_mtr" for
specifying the mini-transaction where the page should be initialized.
Do not allow init_mtr == NULL, because this function is never to be
used for "fake allocations".
mtr_t: Add the operation MTR_MEMO_FREE_CLUST_LEAF and the flag
mtr->freed_clust_leaf for quickly determining if any
MTR_MEMO_FREE_CLUST_LEAF operations have been posted.
row_ins_index_entry_low(): When columns are being made off-page in
insert-by-update, invoke btr_mark_freed_leaves(nonfree=TRUE) and pass
the mini-transaction as the alloc_mtr to
btr_store_big_rec_extern_fields(). Finally, invoke
btr_mark_freed_leaves(nonfree=FALSE) to avoid leaking pages.
row_build(): Correct a comment, and add a debug assertion that a
record that contains NULL BLOB pointers must be a fresh insert.
row_upd_clust_rec(): When columns are being moved off-page, invoke
btr_mark_freed_leaves(nonfree=TRUE) and pass the mini-transaction as
the alloc_mtr to btr_store_big_rec_extern_fields(). Finally, invoke
btr_mark_freed_leaves(nonfree=FALSE) to avoid leaking pages.
buf_reset_check_index_page_at_flush(): Remove. The function
fsp_init_file_page_low() already sets
bpage->check_index_page_at_flush=FALSE.
There is a known issue in tablespace extension. If the request to
allocate a BLOB page leads to the tablespace being extended, crash
recovery could see BLOB writes to pages that are off the tablespace
file bounds. This should trigger an assertion failure in fil_io() at
crash recovery. The safe thing would be to write redo log about the
tablespace extension to the mini-transaction of the BLOB write, not to
the mini-transaction of the record update. However, there is no redo
log record for file extension in the current redo log format.
rb:693 approved by Sunny Bains
Suppress the known warnings generated by filesort().
The real fix belongs to worklog 1509:
Pack values of non-sorted fields in the sort buffer
(which is basically the same issue, but in an optimization context:
We are writing the entire sort buffer to disk,
including un-used space for varchar columns.)
mysql-test/valgrind.supp:
Add new Memcheck suppressions for filesort.
sql/filesort.cc:
Remove the ifdef HAVE_purify/bzero code, use valgrind suppressions instead.
PARTITIONING, ON INDEX CREATE
If the first partition succeeded in adding an index, but a subsequent partition failed,
then the first partition still had the new index.
The fix reverts the added indexes from previous partitions on failure.
Added a second internal variable $mysql_errname
This is set the same way as $mysql_errno
Can be used like "if ($mysql_errname == ER_NO_SUCH_TABLE)...."
CRASHES SERVER
Flushing of MERGE table or one of its child tables, which was
locked by flushing thread using LOCK TABLES, might have caused
crashes or assertion failures if the thread failed to reopen
child or parent table.
Particularly, this might have happened when another connection
killed this FLUSH TABLE statement/connection.
Also this problem might have occurred when we failed to reopen
MERGE table or one of its children when executing DDL statement
under LOCK TABLES.
The problem was caused by the fact that reopen_tables() might
have failed to reopen a child table but still tried to reopen,
reattach children for, and re-lock its parent. Vice versa, it
might have failed to reopen the parent but kept references from
children to the parent around. Since reopen_tables() closes a table
it has failed to reopen and therefore frees all associated
memory, such dangling references led to crashes when followed.
This patch solves this problem by ensuring that we always close
parent table and all its children if we fail to reopen this
table or one of its children. Same happens if we fail to reattach
children to parent.
Affects 5.1 only.
mysql-test/r/merge.result:
A test case for BUG#11763712.
mysql-test/t/merge.test:
A test case for BUG#11763712.
sql/sql_base.cc:
When flushing tables under LOCK TABLES, all locked
and flushed tables are released and then reopened.
It may happen that we fail to reopen some tables;
in this case we reopen as many tables as possible.
If it was not possible to reopen MERGE child, MERGE
parent is unusable and must be removed from thread
open tables list.
If it was not possible to reopen MERGE parent, all
MERGE child table objects are unusable as well, at
least because their locks are handled by MERGE parent.
They must also be removed from thread open tables
list.
In other words if it was impossible to reopen any
object of a MERGE table or reattach child tables,
all objects of this MERGE table must be considered
unusable and closed.
Also addressed issues in bug #11745133, where we could mark a table
as corrupted instead of crashing the server when a corrupted buffer/page
is found, if the table was created with innodb_file_per_table on.
Undo previous fix, it is not reliable
Drop setting $MYSQL_LIBDIR, mtr can't be sure anyway
Test is set to override plugin-dir to some known existing dir
to mysql-5.5.16-release.
Original revision:
# revision-id: dmitry.lenev@oracle.com-20110811155849-feyt3h7tj48padiu
# parent: tatjana.nuernberg@oracle.com-20110811120945-c6x9a5d2du8s9oj2
# committer: Dmitry Lenev <Dmitry.Lenev@oracle.com>
# branch nick: mysql-5.5-12828477
# timestamp: Thu 2011-08-11 19:58:49 +0400
# message:
# Fix for bug #12828477 - "MDL SUBSYSTEM CREATES BIG OVERHEAD
# FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
#
# The problem was that metadata locking subsystem introduced
# too much overhead for queries to I_S which were processed by
# opening only .FRM or .TRG files and had to scanned a lot of
# tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
# The same effect was not observed for similar queries which
# performed full-blown table open in order to fill I_S table.
#
# The problem stemmed from the fact that in case when I_S
# implementation opened only .FRM or .TRG file for each table
# processed it didn't release metadata lock it has acquired on
# the table after finishing its processing. As result, list
# of acquired metadata locks were growing until the end of
# statement. Since acquisition of each new lock required
# search in the list of already acquired locks performance
# degraded.
#
# The same effect is not observed when I_S implementation
# performs full-blown table open for each table being
# processed, as in the latter cases metadata lock on the
# table is released right after table processing.
#
# This fix addressed the problem by ensuring that I_S
# implementation releases metadata lock after processing
# the table in both cases of full-blown table open and in
# case when only .FRM or .TRG file is read.
SET STATEMENT.
A server built with debug asserts, and one built without debug crashes, if a user tries
to run a stored procedure containing a query with a subquery that includes
either a LIMIT or a LIMIT ... OFFSET clause.
The problem was that Item::fix_fields() was not called for the items
representing the LIMIT or OFFSET clauses.
The solution is to call Item::fix_fields() right before evaluation in
st_select_lex_unit::set_limit().
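A hedged reproduction sketch (not the actual test case) of the construct described above:
  CREATE TABLE t_sp (a INT);
  INSERT INTO t_sp VALUES (1), (2), (3);
  CREATE PROCEDURE p_limit_demo()
    SELECT (SELECT a FROM t_sp ORDER BY a LIMIT 1 OFFSET 1) AS second_value;
  CALL p_limit_demo();   -- runs the subquery with LIMIT/OFFSET from inside the procedure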
mysql-test/r/sp.result:
Added testcase result for bug#12621017. Updated testcase result for
bug 11918.
mysql-test/t/sp.test:
Added testcase for bug#12621017. Addressed review comments for Bug 11918
(added tests for use LIMIT at stored function).
sql/item.h:
Addressed review comments for Bug 11918.
sql/share/errmsg-utf8.txt:
Addressed review comments for Bug 11918.
sql/sp_head.cc:
Addressed review comments for Bug 11918.
sql/sql_lex.cc:
Added call fix_fields() for item just before its evaluation.
sql/sql_yacc.yy:
Addressed review comments for Bug 11918.
PAM AUTHENTICATION SETTINGS
The SET PASSWORD code on an account with plugin authentication was erroneously
resetting the in-memory plugin pointer for the user back to the native password
plugin, despite the fact that it was sending a warning that the command has
no immediate effect.
Fixed by not updating the user's plugin if it's already set to a non-default value.
Note that the bug affected only the in-memory cache of the user definitions.
Any restart of the server will fix the problem.
Also, the salt and the password hash are still stored in the user tables (just as
it's documented now).
Test case added.
One old test case result updated to have the correct value.
FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
The problem was that the metadata locking subsystem introduced
too much overhead for queries to I_S which were processed by
opening only .FRM or .TRG files and had to scan a lot of
tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
The same effect was not observed for similar queries which
performed a full-blown table open in order to fill the I_S table.
The problem stemmed from the fact that in the case when the I_S
implementation opened only a .FRM or .TRG file for each table
processed, it didn't release the metadata lock it had acquired on
the table after finishing its processing. As a result, the list
of acquired metadata locks kept growing until the end of the
statement. Since acquisition of each new lock required a
search in the list of already acquired locks, performance
degraded.
The same effect is not observed when the I_S implementation
performs a full-blown table open for each table being
processed, as in the latter case the metadata lock on the
table is released right after table processing.
This fix addressed the problem by ensuring that the I_S
implementation releases the metadata lock after processing
the table, both in the case of a full-blown table open and in
the case when only the .FRM or .TRG file is read.
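The kind of I_S query affected, as cited above; it is served by reading only
.FRM/.TRG files, and before this fix it accumulated one metadata lock per table scanned:
  SELECT COUNT(*) FROM INFORMATION_SCHEMA.TRIGGERS;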
mysql-test/r/information_schema.result:
Added coverage for bug #12828477 - "MDL SUBSYSTEM CREATES BIG
OVERHEAD FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
mysql-test/t/information_schema.test:
Added coverage for bug #12828477 - "MDL SUBSYSTEM CREATES BIG
OVERHEAD FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
sql/sql_show.cc:
Changed fill_schema_table_from_frm() to release the metadata lock
it has acquired after processing the .FRM or .TRG file for a
table.
Without this step, the metadata locks acquired for each table
processed would accumulate. In a situation when a lot of
tables are processed by an I_S query this would result in a
transaction with too many metadata locks. As a result,
the performance of acquiring a new lock would degrade.
This patch corrects a problem found in PB. Some platforms have very
different locations for the mysql installation. The client was not
able to find either my_print_defaults or mysqld predictably.
The patch adds two new command options --mysqld and --my-print-defaults
which can be used to provide the location of mysqld and
my_print_defaults by providing the paths.
The patch also changes the concatenation of the soname extension to
fix a problem found on some Ubuntu systems.
The patch contains changes to the test to ensure it will run on all
platforms. A trap is set in the test to skip testing if the location
of mysqld, my_print_defaults, or the daemon_example.ini files cannot
be determined.
for compressed InnoDB tables
ha_innodb::info_low(): For calculating data_length or index_length,
use the compressed page size for compressed tables instead of UNIV_PAGE_SIZE.
rb:714 approved by Sunny Bains
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), which
leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
mysql-test/r/sp.result:
test case
mysql-test/t/sp.test:
test case
sql/sql_base.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_class.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_class.h:
THD::used_tables is replaced with LEX::used_tables
sql/sql_insert.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.h:
THD::used_tables is replaced with LEX::used_tables
sql/sql_prepare.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_select.cc:
THD::used_tables is replaced with LEX::used_tables
The problem is that TIME_FUZZY_DATE is explicitly used for the get_arg0_date()
call in the Item_date_typecast::get_date() method. The fix is to use the real
fuzzy_date value.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
use real fuzzy_date value
WITH UTF32
The 5.5 version of the UTF32 collation was not enforcing the BMP range that
it currently supports when comparing with LIKE.
Fixed by backporting the checks for the BMP from trunk.
Added a named constant for the maximum character that can have a weight
in the weight table.
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
The problem was that CHECK/REPAIR TABLE for a MERGE table which
had several children missing or in wrong engine reported only
issue with the first such table in its result-set. While in 5.0
this statement returned the whole list of problematic tables.
Ability to report problems for all children was lost during
significant refactorings of MERGE code which were done as part
of work on 5.1 and 5.5 releases.
This patch restores status quo ante refactorings by changing
code in such a way that:
1) Failure to open child table due to its absence during CHECK/
REPAIR TABLE for a MERGE table is not reported immediately
when its absence is discovered in open_tables(). Instead
handling/error reporting in such a situation is postponed
until the moment when children are attached.
2) Code performing attaching of children no longer stops when
it encounters first problem with one of the children during
CHECK/REPAIR TABLE. Instead it continues iteration through
the child list until all problems caused by child absence/
wrong engine are reported.
Note that even after this change problem with mismatch of
child/parent definition won't be reported if there is also
another child missing, but this is how it was in 5.0 as well.
mysql-test/r/merge.result:
Added test case for bug #11754210 - "45777: CHECK TABLE DOESN'T
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
Adjusted results of existing tests to the fact that CHECK/REPAIR
TABLE statements now try to report problems about missing table/
wrong engine for all underlying tables, and to the fact that
mismatch of parent/child definitions is always reported as an
error and not a warning.
mysql-test/t/merge.test:
Added test case for bug #11754210 - "45777: CHECK TABLE DOESN'T
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
sql/sql_base.cc:
Changed code responsible for opening tables to ignore the fact
that underlying tables of a MERGE table are missing, if this
table is opened for CHECK/REPAIR TABLE.
The absence of underlying tables in this case is now detected and
appropriate error is reported at the point when child tables are
attached. At this point we can produce full list of problematic
child tables/errors to be returned as part of CHECK/REPAIR TABLE
result-set.
storage/myisammrg/ha_myisammrg.cc:
Changed myisammrg_attach_children_callback() to handle new
situation, when during CHECK/REPAIR TABLE we do not report
error about missing child immediately when this fact is
discovered during open_tables() but postpone error-reporting
till the time when children are attached.
Also this callback is now responsible for pushing an error
mentioning problematic child table to the list of errors to
be reported by CHECK/REPAIR TABLE statements.
Finally, since now myrg_attach_children() no longer relies on
return value from callback to determine the end of the children
list, callback no longer needs to set my_errno value and can
be simplified.
Changed myrg_print_wrong_table() to always report a problem
with child table as an error and not as a warning. This makes
reporting for different types of issues with child tables
more consistent and compatible with 5.0 behavior.
storage/myisammrg/myrg_open.c:
Changed code in myrg_attach_children() not to abort on the
first problem with a child table when attaching children to
parent MERGE table during CHECK/REPAIR TABLE statement
execution. This allows CHECK/REPAIR TABLE to report problems
about absence/wrong engine for all underlying tables as
part of their result-set.
mysql-test/t/implicit_commit.test:
Test fails if server is compiled with -DENABLED_PROFILING=0
sql/sql_class.cc:
Let class PROFILING do its own handling of the input file name.
sql/sql_profile.cc:
Store only basename of file argument.
Patch fixes an issue with reading basedir on Windows. It fixes how
the code interprets opt_basedir on Windows by adding the correct
path separators and quotes for paths with spaces.
BUG#12664302 : mysql_plugin cannot recognize the plugin config file
Patch fixes an issue with reading a plugin config file. It adds
more information to the error messages to ensure the user is
using the options correctly. It also deals with paths containing spaces on
Windows.
This patch changes the plugin configuration file format to make it
easier to add new plugins and remove complexity. It also adds more
information when plugin configuration file reads fail.
This patch adds a new client utility that enables or disables plugin
features. The utility disables or enables a plugin using values (name,
soname, and symbols) provided via a configuration file by the same name.
For example, to ENABLE the daemon_example plugin, the utility will read
the daemon_example.ini configuration file and use the values contained to
enable or disable the plugin.
Before BUG#28796, an empty host was used to identify that an instance was no
longer a slave. However, BUG#28796 changed this behavior and one cannot set
an empty host. Besides, a RESET SLAVE only cleans up information on the next
event to retrieve from the master, disables ssl and resets heartbeat period.
So a call to SHOW SLAVE STATUS after issuing a RESET SLAVE still returns some
valid information, such as host, port, user and password.
To fix this problem, we have introduced the command RESET SLAVE ALL that does
what a regular RESET SLAVE does and also clears host, port, user and password
information thus allowing users to identify when an instance is no longer a
slave.
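A hedged usage sketch of the new command, run on the slave:
  STOP SLAVE;
  RESET SLAVE ALL;      -- also clears host, port, user and password
  SHOW SLAVE STATUS;    -- the instance no longer reports itself as a slave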
Truncate result of decimal division before converting to integer.
mysql-test/r/func_math.result:
New test case.
mysql-test/t/func_math.test:
New test case.
sql/item_func.cc:
Item_func_int_div::val_int():
Truncate result of decimal division before converting to integer.
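A hedged example of the behaviour after the fix (with a decimal divisor the quotient is
truncated before conversion to an integer):
  SELECT 7 DIV 1.5;   -- 7 / 1.5 = 4.666..., expect 4 (truncated, not rounded)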
mysql-test/r/type_float.result:
New test case.
mysql-test/t/type_float.test:
New test case.
sql/item_strfunc.cc:
There was a buffer over/under-run when inserting decimal point into an empty string.
Turns out the DBUG_ASSERT added by fix for Bug#11792200 was overly pessimistic:
'stop0' is used in the main loop of do_div_mod, but we only dereference 'buf0'
for div operations, not for mod.
mysql-test/r/func_math.result:
New test case.
mysql-test/t/func_math.test:
New test case.
strings/decimal.c:
Move DBUG_ASSERT down to where we actually dereference the loop pointer.
The buffer was simply too small.
In 5.5 and trunk, the size is 311 + 31,
in 5.1 and below, the size is 331
client/sql_string.cc:
Increase buffer size in String::set(double, ...)
include/m_string.h:
Increase FLOATING_POINT_BUFFER
mysql-test/r/type_float.result:
New test cases.
mysql-test/t/type_float.test:
New test cases.
sql/sql_string.cc:
Increase buffer size in String::set(double, ...)
sql/unireg.h:
Move definition of FLOATING_POINT_BUFFER
non-latin1 server error message
The problem was a one byte buffer overflow in the conversion
of an error message between character sets. Ahead of explaining
the problem further, some background information. Before an
error message is sent to the user, the message is converted
to the character set specified in the character_set_results
variable. For various reasons, this conversion might cause
the message to increase in length -- for example, if certain
characters can't be represented in the result character set.
If the final message length is greater than the maximum allowed
length of an error message (MYSQL_ERRMSG_SIZE), the message
is truncated. The message is also always null-terminated
regardless of the character set. The problem arises from this
null-termination. If a message length reached the maximum,
the terminating null character would be placed one byte past
the end of the message buffer.
The solution is to reserve the end of the message buffer for
the null character.
mysql-test/t/ctype_errors.test:
Add test case for Bug#12736295.
sql/sql_error.cc:
The to_end pointer was actually pointing past the end of
the buffer. Since the message is always null terminated,
point to_end to the last position of the buffer.
The server crashes if it processes table map events that are
corrupted, especially if they map different tables to the same
identifier. This could happen, for instance, due to BUG 56226.
We fix this by checking whether the table map has already been
mapped before actually applying the event. If it has been mapped
with different settings an error is raised and the slave SQL
thread stops. If it has been mapped with same settings the event
is skipped. If the table is set to be ignored by the filtering
rules, there is no change in behavior: the event is skipped and
ids are not checked.
mysql-test/suite/rpl/t/rpl_row_corruption.test:
Added a simple test case that checks both cases:
- multiple table maps with the same identifier
- multiple table maps with the same identifier, but only one
is processed (the others are filtered out)
When CREATE TABLE wasn't given ENGINE=... it would determine
the default ENGINE at parse-time rather than at execution
time, leading to incorrect behaviour (namely, later changes
to the default engine being ignored) when calling CREATE TABLE
from a stored procedure.
We now defer working out the default engine till execution of
CREATE TABLE.
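A hedged sketch of the corrected behaviour (storage_engine is the 5.5-era session
variable name; default_storage_engine is the later alias):
  CREATE PROCEDURE p_make_table()
    CREATE TABLE t_default_engine (a INT);   -- no ENGINE= clause
  SET SESSION storage_engine = MyISAM;       -- change the default after the procedure was created
  CALL p_make_table();
  SHOW CREATE TABLE t_default_engine;        -- expect ENGINE=MyISAM, resolved at execution time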
mysql-test/r/sp_trans.result:
results!
mysql-test/t/sp_trans.test:
Show that CREATE TABLE (called from a stored routine) heeds
any changes after CREATE SP / parse-time. Show that explicitly
requesting an ENGINE still works.
sql/sql_parse.cc:
If no ENGINE=... was given at parse-time, determine default
engine at execution time of CREATE TABLE.
sql/sql_yacc.yy:
If CREATE TABLE is not given ENGINE=..., don't bother
figuring out the default engine during parsing; we'll
do it at execution time instead to be aware of the
latest updates.
We must allocate a larger ref_pointer_array. We failed to account for extra
items allocated here:
#0 find_order_in_list
uint el= all_fields.elements;
all_fields.push_front(order_item); /* Add new field to field list. */
ref_pointer_array[el]= order_item;
order->item= ref_pointer_array + el;
#1 setup_order
#2 setup_without_group
#3 JOIN::prepare
mysql-test/r/order_by.result:
New test case.
mysql-test/r/union.result:
New test case.
mysql-test/t/order_by.test:
New test case.
mysql-test/t/union.test:
New test case.
sql/sql_lex.cc:
find_order_in_list() may need some extra space, so multiply og_num by two.
sql/sql_union.cc:
For UNION, the 'n_sum_items' are accumulated in the "global_parameters" select_lex.
This number must be propagated to setup_ref_array()
When preparing a 'fake_select_lex' we need to use global_parameters->order_list
rather than fake_select_lex->order_list (see comments inside st_select_lex_unit::cleanup)
Bug#12637786 was fixed with rb:692 by Marko. But that fix has a remaining
bug. It added this assert:
ut_ad(ind_field->prefix_len);
before a section of code that assumes there is a prefix_len.
The patch replaced code that explicitly avoided this with a check for
prefix_len. It turns out that the purge thread can get to that assert
without a prefix_len because it does not use a row_ext_t* .
When UNIV_DEBUG is not defined, the effect of this is that the purge thread
sets the dfield->len to zero and then cannot find the entry in the index to
purge. So secondary index entries remain unpurged.
This patch does not do the assert. Instead, it uses
'if (ind_field->prefix_len) {...}'
around the section of code that assumes a prefix_len. This is the way the
patch I provided to Marko did it.
The test case is simply modified to do a sleep(10) in order to give the
purge thread a chance to run. Without the code change to row0row.c, this
modified testcase will assert if InnoDB was compiled with UNIV_DEBUG.
I tried to sleep(5), but it did not always assert.