Fixed a typo in the comment.
Fixing test cases which were previously not throwing due to the
disable warnings macro.
sql/sql_base.cc:
Change in indentation and fixing a typo in the comment.
During FIC (fast index creation) error handling, trx->error_state was not
being reset to DB_SUCCESS after a failure, before attempting the next DDL
SQL operation. This reset to DB_SUCCESS is somewhat of a requirement, though
not explicitly stated anywhere. The fix is to reset it to DB_SUCCESS in
row0merge.cc if the row_merge_rename_indexes or row_merge_drop_index
functions fail, and also to reset it to DB_SUCCESS at trx commit.
rb://935 Approved by Jimmy Yang.
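A minimal standalone sketch of the idea (trx_t, error_state and DB_SUCCESS mirror the InnoDB names; the step functions here are hypothetical stand-ins, not the row0merge.cc code): after a failed drop/rename step, the error is noted and trx->error_state is put back to DB_SUCCESS so the next DDL SQL operation does not start out in a failed state.
  // Hypothetical sketch: reset trx->error_state before the next DDL step.
  #include <cstdio>

  enum db_err { DB_SUCCESS = 0, DB_ERROR = 1 };

  struct trx_t {
      db_err error_state = DB_SUCCESS;   // sticky error from the last operation
  };

  // Stand-in for row_merge_drop_index()/row_merge_rename_indexes().
  db_err row_merge_step_that_may_fail(trx_t* trx) {
      trx->error_state = DB_ERROR;       // simulate a failure
      return DB_ERROR;
  }

  db_err run_next_ddl_step(trx_t* trx) {
      // The next step relies on starting with a clean error_state.
      return (trx->error_state == DB_SUCCESS) ? DB_SUCCESS : DB_ERROR;
  }

  int main() {
      trx_t trx;
      db_err err = row_merge_step_that_may_fail(&trx);
      if (err != DB_SUCCESS) {
          std::printf("step failed with %d, resetting error_state\n", err);
          trx.error_state = DB_SUCCESS;  // the fix: reset before the next DDL SQL operation
      }
      return run_next_ddl_step(&trx) == DB_SUCCESS ? 0 : 1;
  }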
Problem: Statements that write to tables with auto_increment columns
based on a selection from another table may lead to the master
and slave going out of sync, as the order in which the rows
are retrieved from the table may differ on master and slave.
Solution: We mark writing to a table with an auto_increment column
based on rows selected from another table as unsafe. This
will cause the execution of such statements to throw a warning
and force the statement to be logged in ROW format if the logging
format is MIXED.
Changes:
1. All statements that write to a table with auto_increment
column(s) based on rows fetched from another table will now
be unsafe.
2. CREATE TABLE with SELECT will now be unsafe.
sql/share/errmsg-utf8.txt:
Added new warning messages.
sql/sql_base.cc:
-Created a function to check for statements that write to
tables with an auto_increment column and contain a SELECT.
-Marked all statements that write to a table
with an auto_increment column based on rows fetched
from other table(s) as unsafe.
sql/sql_table.cc:
Mark CREATE TABLE [with auto_increment column] as unsafe.
Problem: Statements that write to tables with auto_increment columns
based on a selection from another table may lead to the master
and slave going out of sync, as the order in which the rows
are retrieved from the table may differ on master and slave.
Solution: We mark writing to a table with an auto_increment column
as unsafe. This will cause the execution of such statements to
throw a warning and force the statement to be logged in ROW format if
the logging format is MIXED.
Changes:
1. All statements that write to a table with auto_increment
column(s) based on rows fetched from another table will now
be unsafe.
2. CREATE TABLE with SELECT will now be unsafe.
sql/share/errmsg-utf8.txt:
Added new warning messages.
sql/sql_base.cc:
Created a new function that checks for SELECT + write on an autoinc table;
made all such statements unsafe.
sql/sql_parse.cc:
Made CREATE autoincrement table + SELECT unsafe.
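A minimal standalone sketch of the classification (the helper and struct below are hypothetical, not the actual sql_base.cc code): a statement is flagged unsafe when it writes to a table that has an auto_increment column and also fetches its rows from another table.
  // Hypothetical sketch: flag "write to autoinc table + read from another table" as unsafe.
  #include <cstdio>

  struct StmtInfo {
      bool writes_to_autoinc_table;   // target table has an AUTO_INCREMENT column
      bool selects_from_other_table;  // rows come from another table (INSERT..SELECT, CREATE..SELECT, ...)
  };

  bool is_unsafe_for_statement_binlog(const StmtInfo& s) {
      return s.writes_to_autoinc_table && s.selects_from_other_table;
  }

  int main() {
      StmtInfo insert_select{true, true};   // e.g. INSERT INTO t1 SELECT * FROM t2
      StmtInfo plain_insert{true, false};   // e.g. INSERT INTO t1 VALUES (...)
      std::printf("insert..select unsafe: %d\n", is_unsafe_for_statement_binlog(insert_select));
      std::printf("plain insert unsafe  : %d\n", is_unsafe_for_statement_binlog(plain_insert));
      return 0;
  }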
The actual Bug#11754376 does not exist in MySQL 5.5 because at startup
we drop entries for temporary tables from InnoDB dictionary cache (only
if ROW_FORMAT is not REDUNDANT). But nevertheless the bug in
normalize_table_name_low() is present so we fix it.
GRACEFUL SHUTDOWN
During startup mysql picks up .frm files from the tmpdir directory and
tries to drop those tables in the storage engine.
The problem is that when tmpdir ends in '/', ha_innobase::delete_table()
is passed a string like "/var/tmp//#sql123", which it wrongly normalizes
to "/#sql123" before calling row_drop_table_for_mysql(), which of course fails
to delete the table entry from the InnoDB dictionary cache.
ha_innobase::delete_table() returns an error but nevertheless mysql wipes
away the .frm file and the entry in the InnoDB dictionary cache remains
orphaned with no easy way to remove it.
The "no easy" way to remove it is to create a similar temporary table again,
copy its .frm file to tmpdir under "#sql123.frm" and restart mysqld with
tmpdir=/var/tmp (no trailing slash) - this way mysql will pick the .frm file
after restart and will try to issue drop table for "/var/tmp/#sql123"
(notice: no double slash), ha_innobase::delete_table() will normalize it to
"tmp/#sql123" and row_drop_table_for_mysql() will successfully remove the
table entry from the dictionary cache.
The solution is to fix normalize_table_name_low() to normalize things like
"/var/tmp//table" correctly to "tmp/table".
This patch also adds a test function which invokes
normalize_table_name_low() with various inputs to make sure it works
correctly, and an mtr test that calls this test function.
Reviewed by: Marko (http://bur03.no.oracle.com/rb/r/929/)
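A minimal standalone sketch of the intended normalization (this is not the actual normalize_table_name_low() code; the helper below is illustrative): take the last two path components, skipping empty components produced by repeated '/', so that "/var/tmp//#sql123" becomes "tmp/#sql123" rather than "/#sql123".
  // Hypothetical sketch: normalize ".../dbdir//table" to "dbdir/table".
  #include <cstdio>
  #include <string>
  #include <vector>

  std::string normalize_table_name(const std::string& path) {
      std::vector<std::string> parts;
      std::string comp;
      for (char c : path) {
          if (c == '/') {
              if (!comp.empty()) parts.push_back(comp);   // skip empty parts from "//"
              comp.clear();
          } else {
              comp += c;
          }
      }
      if (!comp.empty()) parts.push_back(comp);
      if (parts.empty()) return "";
      if (parts.size() == 1) return parts[0];
      return parts[parts.size() - 2] + "/" + parts.back();
  }

  int main() {
      std::printf("%s\n", normalize_table_name("/var/tmp//#sql123").c_str());  // tmp/#sql123
      std::printf("%s\n", normalize_table_name("/var/tmp/#sql123").c_str());   // tmp/#sql123
      return 0;
  }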
rpl_heartbeat_basic test fails sporadically on pushbuild because it did
not receive all heartbeats from the slave in circular replication.
Removed from experimental collection.
CORRUPTED WHEN RUN CONCURRENTLY WITH
ISSUE: Table corruption due to concurrent queries.
Different threads running CHECK and REPAIR queries
along with INSERT. Locks were not properly acquired
in the REPAIR query, so rows could be inserted in the
middle of the repair.
SOLUTION: A mutex lock is acquired before the
repair call, so concurrent queries won't
affect the call to repair.
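A minimal standalone sketch of the fix's shape (the lock and the insert/repair routines below are hypothetical placeholders, not the MyISAM code): the repair path takes the same lock as the insert path, so no row can be inserted while the repair runs.
  // Hypothetical sketch: serialize repair against concurrent inserts with one mutex.
  #include <cstdio>
  #include <mutex>

  std::mutex table_lock;   // stands in for the per-table lock

  void insert_row(int v) {
      std::lock_guard<std::mutex> guard(table_lock);
      std::printf("insert %d\n", v);
  }

  void repair_table() {
      // The fix: take the same lock before repairing, so no insert can interleave.
      std::lock_guard<std::mutex> guard(table_lock);
      std::printf("repairing...\n");
  }

  int main() {
      insert_row(1);
      repair_table();
      insert_row(2);
      return 0;
  }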
ON 64 BIT MACHINES
PROBLEM: When sorting the index during repair of
MyISAM tables, due to improper casting
of the buffer size variables, the value of
myisam_sort_buffer_size could not be set
greater than 4GB.
SOLUTION: Proper casting of the buffer size variable.
myisam_sort_buffer_size changed to unsigned
long long to handle sizes > 4GB on
Linux as well as Windows.
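A minimal standalone sketch of the truncation being fixed (the variable names and sizes below are illustrative, not the myisam code): on platforms where unsigned long is 32 bits (e.g. Windows), the product wraps before it is stored, so the size must be computed and kept as unsigned long long.
  // Hypothetical sketch: why the width/cast matters for sizes > 4GB.
  #include <cstdio>

  int main() {
      unsigned long keys   = 5UL * 1024 * 1024;   // number of keys to sort
      unsigned long keylen = 1024;                // bytes per key

      // Broken where unsigned long is 32 bits: the product wraps before assignment.
      unsigned long truncated = (unsigned long)(keys * keylen);

      // Fixed: widen to unsigned long long before multiplying.
      unsigned long long full = (unsigned long long)keys * keylen;

      std::printf("possibly truncated: %lu\n", truncated);
      std::printf("full size         : %llu\n", full);
      return 0;
  }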
print page dump
buf_page_print(): Remove the ut_ad(0) from the beginning. Add two flags
(enum buf_page_print_flags) that can be bitwise-ORed together:
BUF_PAGE_PRINT_NO_CRASH:
Do not crash debug builds at the end of buf_page_print().
BUF_PAGE_PRINT_NO_FULL:
Do not print the full page dump. This can be useful when adding
diagnostic printout to flushing or to the doublewrite buffer.
trx_sys_doublewrite_init_or_restore_page(): Replace exit(1) with ut_error,
so that we can get a core dump if this extraordinary condition happens.
rb:924 approved by Sunny Bains
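A minimal standalone sketch of the flag pattern described above (the flag names come from the text, but the numeric values and the sketch function are illustrative, not the exact InnoDB definitions): the two flags are bitwise-ORed into one argument and tested independently.
  // Hypothetical sketch of bitwise-OR'able print flags.
  #include <cstdio>

  enum buf_page_print_flags {
      BUF_PAGE_PRINT_NO_CRASH = 1,  // do not crash debug builds at the end
      BUF_PAGE_PRINT_NO_FULL  = 2   // do not print the full page dump
  };

  void buf_page_print_sketch(unsigned flags) {
      if (!(flags & BUF_PAGE_PRINT_NO_FULL)) {
          std::printf("<full page dump would go here>\n");
      }
      if (!(flags & BUF_PAGE_PRINT_NO_CRASH)) {
          std::printf("a debug build would assert here (ut_ad(0))\n");
      }
  }

  int main() {
      buf_page_print_sketch(BUF_PAGE_PRINT_NO_CRASH | BUF_PAGE_PRINT_NO_FULL);
      return 0;
  }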
CASES RESETS DATA POINTER TO SMAL
ISSUE: myisamchk doing sort recover
on a table reduces data_file_length.
The maximum size of the data file decreases,
so fewer rows can be stored.
SOLUTION: data_file_length is fixed
to the original length.
rb://914
approved by: Marko Makela
Polling in fil_rename_tablespace() after setting the ::stop_ios flag can
result in a hang, because the other thread actually dispatching the IO
won't wake the IO helper threads or flush the tablespace before starting to
wait in fil_mutex_enter_and_prepare_for_io().
+get_tty_password: this is the only external symbol in get_password.c,
which is explicitly listed in CLIENT_SOURCES
+handle_options: this is in mysys/my_getopt.c;
adding this symbol pulls in the other externals:
T getopt_compare_strings
T getopt_double_limit_value
T getopt_ll_limit_value
T getopt_ull_limit_value
T handle_options
T my_cleanup_options
T my_getopt_register_get_addr
T my_print_help
T my_print_variables
This fix does not remove the underlying cause of the assertion
failure. It just works around the problem, allowing a corrupted
secondary index to be fixed by DROP INDEX and CREATE INDEX (or in the
worst case, by re-creating the table).
ibuf_delete(): If the record to be purged is the last one in the page
or it is not delete-marked, refuse to purge it. Instead, write an
error message to the error log and let a debug assertion fail.
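A minimal standalone sketch of the refuse-to-purge guard described above (the record struct and function are illustrative, not the actual ibuf_delete() code): if the record is the last one on the page or is not delete-marked, the purge is refused, an error is logged, and a debug assertion fails.
  // Hypothetical sketch of the ibuf_delete() guard.
  #include <cassert>
  #include <cstdio>

  struct rec_t {
      bool delete_marked;
      bool last_on_page;
  };

  bool ibuf_delete_sketch(const rec_t& rec) {
      if (rec.last_on_page || !rec.delete_marked) {
          // Refuse to purge; report and let a debug assertion fail.
          std::fprintf(stderr, "error: refusing to purge record "
                               "(last_on_page=%d, delete_marked=%d)\n",
                       rec.last_on_page, rec.delete_marked);
          assert(!"record not purgeable");   // debug-build assertion
          return false;
      }
      return true;                            // purge proceeds
  }

  int main() {
      rec_t rec{true, false};                 // delete-marked, not last on page
      return ibuf_delete_sketch(rec) ? 0 : 1;
  }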
ibuf_set_del_mark(): If the record to be delete-marked is not found,
display some more information in the error log and let a debug
assertion fail.
row_undo_mod_del_unmark_sec_and_undo_update(),
row_upd_sec_index_entry(): Let a debug assertion fail when the record
to be delete-marked is not found.
buf_page_print(): Add ut_ad(0) so that corruption will be more
prominent in stress testing with debug binaries. Add ut_ad(0) here and
there where corruption is noticed.
btr_corruption_report(): Display some data on page_is_comp() mismatch.
btr_assert_not_corrupted(): A wrapper around btr_corruption_report().
Assert that page_is_comp() agrees with the table flags.
rb:911 approved by Inaam Rana
BUG#13519696 - 62940: SELECT RESULTS VARY WITH VERSION AND
WITH/WITHOUT INDEX RANGE SCAN
BUG#13453382 - REGRESSION SINCE 5.1.39, RANGE OPTIMIZER WRONG
RESULTS WITH DECIMAL CONVERSION
BUG#13463488 - 63437: CHAR & BETWEEN WITH INDEX RETURNS WRONG
RESULT AFTER MYSQL 5.1.
Those are all cases where the range optimizer got it wrong
with > and >=.
mysql-test/r/range.result:
Without the code fix for DECIMAL, "select count(val) from t2 where val > 0.1155"
(which uses a range scan) returned 127 instead of 128;
Moreover, both
select * from t1 force index (primary) where a=1 and c>= 2.9;
and
select * from t1 force index (primary) where a=1 and c> 2.9;
would miss "1 1 3".
Without the code fix for strings, both
SELECT * FROM t1 WHERE F1 >= 'A ';
and
SELECT * FROM t1 WHERE F1 BETWEEN 'A ' AND 'AAAAA';
would miss "A A A".
sql/item.cc:
Preamble to the explanations below: opt_range.cc:get_mm_leaf() does
this (this is not changed by the patch): changes
column > value
to
column OP V
where:
* V is what is in "column" after we stored "value" in it
(such a store operation may have done rounding...)
* OP is > or >=, depending on what's correct.
For example, if c is an INT column,
c > 2.9 is changed to
c OP 3
where OP is >= ('>' would not be correct).
The bugs below are cases where we chose OP wrongly.
Note that such transformations are visible in the optimizer trace.
1) Fix for STRING. In the scenario with CHAR(5) in range.test, this happens,
in get_mm_tree(), for the condition F1>='A ':
* value->save_in_field_no_warnings(field, 1) wants to store the right argument
(named 'item') into the CHAR(5) field; this stores 'A ' (the item's value)
padded with spaces (which changes nothing: still 'A ')
* we come to
case Item_func::GE_FUNC:
  /* Don't use open ranges for partial key_segments */
  if ((!(key_part->flag & HA_PART_KEY_SEG)) &&
      (stored_field_cmp_to_item(param->thd, field, value) < 0))
    tree->min_flag= NEAR_MIN;
  tree->max_flag= NO_MAX_RANGE;
What this wants to do is: if the field's value is strictly smaller
than the item's, then ">=" can be changed to ">" (this is an optimization,
it can help pruning one useless partition).
* stored_field_cmp_to_item() is called; it compares the field's
and item's values: the item's value (Item_string::val_str()) is
'A ' and the field's value (Field_string::val_str()) is
'A' (yes, val_str() removes end spaces unless sql_mode='PAD_CHAR_TO_FULL_LENGTH');
and the comparison is done with stringcmp() which considers
end spaces as relevant; as end spaces differ, function returns a
negative number, and ">='A '" becomes ">'A'" (i.e. the NEAR_MIN
flag is turned on).
During execution the index range scan code will search for "A", find
a match, but exclude it (because of ">"), wrongly.
The badness is the string comparison done by stored_field_cmp_to_item():
we use the reply of this function to determine where the index search
should start, so it should do comparison like index search does
comparisons; index search comparisons are ha_key_cmp() which uses
a collation-aware comparison (in our case, my_strnncollsp_simple(),
which ignores end spaces); so stored_field_cmp_to_item()
needs to do the same. When this is fixed, condition becomes
">='A '".
2) Fix for DECIMAL: just like in other comparisons in stored_field_cmp_to_item(),
we must first pass the field and then the item; otherwise expectations
on what <0 and >0 mean (inferiority, superiority) get violated.
In the test in range.test about c>2.9: c is an INT column, so 2.9
gets stored as 3, then stored_field_cmp_to_item() compares 3
and 2.9; because of the wrong order of arguments passed
to my_decimal_cmp(), range optimizer
thinks that 3 is < 2.9 and thus changes "c> 2.9" to "c> 3".
After fixing the order, it changes to the correct "c>= 3".
In the test in range.inc for val > 0.1155, it was changed to
val > 0.116, now it is changed to val >= 0.116.