Before this fix, the test myisam_file_io executed:
- (a) an update on setup_instruments to disable non-MyISAM file I/O instruments
- (b) a truncate on events_waits_history_long
and later
- (c) a select on events_waits_history_long
Surprisingly, events that were supposed to be disabled in (a) and removed in (b)
were still found in (c).
This happened for events such as
wait/io/file/innodb/innodb_data_file fil0fil.c: sync
because the sync was started before (a) and completed after (b),
and as a consequence was added to the performance schema history, as expected.
The presence of these records in the history made the test fail.
This fix makes the test script more robust, so that it accounts for such extra spill wait records in (c).
The problem was due to a misuse of GCC asm constraints used to
implement an atomic load. On x86_64, the load was implemented
as a cmpxchg, which implicitly uses the eax register as a
source and destination operand, yet the dummy value used for
comparison wasn't being properly loaded into eax (among other
problems).
The core problem is that cmpxchg is unnecessary for a load
on x86_64, as there are other, simpler instructions such
as xadd. Even so, such instructions are only needed to
provide a memory barrier, since loads and stores are atomic by
definition. Hence, the solution is to explicitly issue the
required CPU and compiler barriers.
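For illustration only, a minimal sketch of such a load with an explicit compiler barrier (my_atomic_load32 and compiler_barrier are illustrative names, not the server's own code):

    #include <stdint.h>

    /* Sketch: on x86_64 an aligned 32-bit load is already atomic, so no
       cmpxchg/xadd is needed.  What is needed is to keep the compiler from
       reordering or caching the access (a compiler barrier); a CPU fence
       such as mfence is only required for full sequential consistency. */

    #define compiler_barrier() __asm__ __volatile__("" ::: "memory")

    static inline int32_t my_atomic_load32(volatile int32_t *ptr)
    {
      int32_t value;
      compiler_barrier();          /* no reordering of earlier accesses */
      value = *ptr;                /* aligned load: atomic on x86_64    */
      compiler_barrier();          /* no reordering of later accesses   */
      return value;
    }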
Problem: When using temporary tables and closing a session, an
implicit DROP TEMPORARY TABLE IF EXISTS is written to the binary
log (while cleaning up the context of the session THD - see
sql_class.cc:THD::cleanup, which calls close_temporary_tables).
close_temporary_tables first checks whether the binary log is open
and then proceeds to create the DROP statements. Such statements
are then written to the binary log through
MYSQL_BIN_LOG::write(Log_event *). Inside, there is another check
whether the binary log is open, and if not, an error is returned.
This is where the faulty behavior is triggered. Given that the test
case replays a binary log containing temporary table statements and
issues RESET MASTER right afterwards, there is a chance that is_open
will report false (at the point when the mysql session is closed and
the temporary tables are written).
is_open may return false because MYSQL_BIN_LOG::reset_logs does
not set the correct flag (LOG_CLOSE_TO_BE_OPENED) on
MYSQL_BIN_LOG::log_state (instead it sets just the
LOG_CLOSE_INDEX flag, leaving log_state as LOG_CLOSED). Hence,
when writing the DROP statement as part of THD::cleanup, the
thread could get a return value of false from is_open inside
MYSQL_BIN_LOG::write, ultimately reporting that it can't write
the event to the binary log.
Fix: We fix this by adding the correct flag, which was missing in
the second close.
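A minimal sketch of the shape of the fix (illustrative code, not the actual MYSQL_BIN_LOG implementation; the flag and state names follow the description above):

    enum log_state_t { LOG_OPENED, LOG_CLOSED, LOG_TO_BE_OPENED };

    #define LOG_CLOSE_INDEX        1
    #define LOG_CLOSE_TO_BE_OPENED 2

    static enum log_state_t log_state = LOG_OPENED;

    static void binlog_close(int flags)
    {
      /* With LOG_CLOSE_TO_BE_OPENED the log is marked as "to be reopened",
         so a later write is still allowed to reopen it; without the flag
         the state falls back to LOG_CLOSED and later writes are rejected. */
      log_state = (flags & LOG_CLOSE_TO_BE_OPENED) ? LOG_TO_BE_OPENED
                                                   : LOG_CLOSED;
    }

    static void binlog_reset_logs(void)
    {
      /* The fix: also pass LOG_CLOSE_TO_BE_OPENED here, so the implicit
         DROP TEMPORARY TABLE written during THD::cleanup can still be
         logged after RESET MASTER. */
      binlog_close(LOG_CLOSE_INDEX | LOG_CLOSE_TO_BE_OPENED);
    }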
It is not necessary to support INSERT DELAYED for a single-value insert,
given that we do not support it for multi-value inserts when the binary
log is enabled in statement-based (SBR) mode.
The lock_type is upgraded from TL_WRITE_DELAYED to TL_WRITE for a
single-value INSERT DELAYED when the binary log is enabled, just as is
already done for multi-value inserts. This makes it safe, and the
statement is binlogged as an INSERT without DELAYED.
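A minimal sketch of the upgrade decision (hypothetical helper; only the TL_* lock type names come from the description above):

    enum thr_lock_type { TL_WRITE_DELAYED, TL_WRITE };

    /* With statement-based binlogging active, a delayed insert cannot be
       logged safely, so the delayed lock is upgraded to a normal write
       lock and the statement is logged as a plain INSERT. */
    static enum thr_lock_type
    upgrade_delayed_lock(enum thr_lock_type requested, int binlog_sbr_enabled)
    {
      if (requested == TL_WRITE_DELAYED && binlog_sbr_enabled)
        return TL_WRITE;
      return requested;
    }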
When using the BINLOG statement to execute row log events, the session
variables foreign_key_checks and unique_checks are changed temporarily,
because each row log event carries its own session environment whose
foreign_key_checks and unique_checks may differ from those of the
session executing the BINLOG statement. However, these variables were
not restored correctly after the BINLOG statement, which could cause
subsequent statements to fail or to generate unexpected data.
In this patch, code is added to back up and restore these two variables,
so that the BINLOG statement no longer affects the current session's
variables.
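A minimal sketch of the backup/restore idea (hypothetical structures and stub; only the OPTION_* flag names mirror the server's option bits):

    /* Save the two option bits before applying the row event, restore
       them afterwards, whatever the event did to them. */
    struct session { unsigned long long option_bits; };

    #define OPTION_NO_FOREIGN_KEY_CHECKS (1ULL << 0)
    #define OPTION_RELAXED_UNIQUE_CHECKS (1ULL << 1)

    /* Stub standing in for the real row-event applier: it may change the
       flags according to the event's own session environment. */
    static int apply_row_event(struct session *thd)
    {
      thd->option_bits |= OPTION_NO_FOREIGN_KEY_CHECKS;  /* example side effect */
      return 0;
    }

    static int execute_binlog_statement(struct session *thd)
    {
      unsigned long long saved = thd->option_bits &
          (OPTION_NO_FOREIGN_KEY_CHECKS | OPTION_RELAXED_UNIQUE_CHECKS);
      int error = apply_row_event(thd);

      /* Restore foreign_key_checks / unique_checks to their original state. */
      thd->option_bits &=
          ~(OPTION_NO_FOREIGN_KEY_CHECKS | OPTION_RELAXED_UNIQUE_CHECKS);
      thd->option_bits |= saved;
      return error;
    }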
The problem is that the logic which checks if a pointer is
valid relies on a poor heuristic based on the start and end
addresses of the data segment and heap.
Apart from miscalculating the heap bounds, this approach also
suffers from the fact that memory can come from places other
than the heap. See Bug#58528 for a more detailed explanation.
On Linux, the solution is to access the process's memory
through /proc/self/task/<tid>/mem, which allows for retrieving
the contents of pages within the virtual address space of
the calling process. If an address range is not mapped, an
input/output error is returned.
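A minimal sketch of the check on Linux (illustrative only; ptr_is_readable is a hypothetical helper, not the server's function):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Try to read the bytes behind the pointer through
       /proc/self/task/<tid>/mem: pread() fails for address ranges that
       are not mapped into the process, so a short or failed read means
       the pointer cannot be dereferenced safely. */
    static int ptr_is_readable(const void *addr, size_t len)
    {
      char path[64], buf[256];
      int fd;
      ssize_t nbytes;

      if (len > sizeof(buf))
        len = sizeof(buf);

      snprintf(path, sizeof(path), "/proc/self/task/%ld/mem",
               (long) syscall(SYS_gettid));
      if ((fd = open(path, O_RDONLY)) < 0)
        return 0;

      nbytes = pread(fd, buf, len, (off_t) (uintptr_t) addr);
      close(fd);
      return nbytes == (ssize_t) len;
    }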
Problem: MySQL's cp1251 did not support U+20AC EURO SIGN,
which was assigned to 0x88 a few years ago.
Fix: adding the mapping 0x88 <-> U+20AC
@ mysql-test/include/ctype_8bit.inc
New shared file to test 8bit character sets.
@ mysql-test/r/ctype_cp1251.result
@ mysql-test/t/ctype_cp1251.test
Adding tests
@ sql/share/charsets/cp1251.xml
Adding mapping
@ strings/ctype-extra.c
Regenerating ctype-extra.c using strings/conf_to_src
according to new cp1251.xml
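For illustration, a trivial C sketch of what the added 8-bit mapping amounts to (cp1251_to_uni here is an illustrative array, not the generated table in strings/ctype-extra.c):

    #include <stdio.h>

    /* An 8-bit character set maps each of its 256 byte values to one
       Unicode code point; this fix adds one missing entry. */
    static unsigned int cp1251_to_uni[256];

    int main(void)
    {
      cp1251_to_uni[0x88] = 0x20AC;   /* new mapping: 0x88 <-> U+20AC EURO SIGN */
      printf("0x88 -> U+%04X\n", cp1251_to_uni[0x88]);
      return 0;
    }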
After dropping and recreating the database specified with the --one-database
option on the command line, the mysql client keeps filtering statements even
after the execution of a 'USE' command on the same database.
The --one-database option enables the filtering of statements when the current
database is not the one specified on the command line. However, when that same
database is dropped and recreated, the variable (current_db) that holds the
initial database name gets altered. The bug exploits the fact that current_db
initially gets set to a null value (0) when a 'use db_name' follows the
recreation of the same database db_name (specified on the command line), and
hence skip_updates gets set to 1, which in turn triggers the further filtering
of statements.
Fixed by making get_current_db() a no-op function when one_database is set,
so that under that condition current_db will not get altered.
Note, however, that the value of current_db can change when we execute the
'connect' command with a different database to reconnect to the server, in
which case the behavior of --one-database will be based on this new database.
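A minimal sketch of the shape of the fix (hypothetical simplification of the client's get_current_db(); fetch_current_db_from_server is a stand-in, not the real client code):

    #include <stdio.h>

    static int   one_database = 1;        /* set when --one-database was given  */
    static char *current_db   = "test";   /* database named on the command line */

    /* Stand-in for the code that re-reads the current database from the
       server; right after DROP DATABASE it would report none. */
    static char *fetch_current_db_from_server(void)
    {
      return NULL;
    }

    static void get_current_db(void)
    {
      if (one_database)
        return;                           /* the fix: leave current_db untouched */
      current_db = fetch_current_db_from_server();
    }

    int main(void)
    {
      get_current_db();                   /* no-op under --one-database */
      printf("filtering against: %s\n", current_db ? current_db : "(none)");
      return 0;
    }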
Problem: LIKE over an indexed column optimized away good results,
because my_like_range_utf32/utf16 returned wrong ranges for contractions.
Contraction-related code was missing in my_like_range_utf32/utf16,
but did exist in my_like_range_ucs2/utf8.
It was forgotten in the utf32/utf16 versions (during the mysql-6.0 push/revert mess).
Fix:
The patch removes individual functions my_like_range_ucs2,
my_like_range_utf16, my_like_range_utf32 and introduces a single function
my_like_range_generic() instead. The new function handles contractions
correctly. It can handle any character set with cs->min_sort_char and
cs->max_sort_char represented in Unicode code points.
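For context, a simplified single-byte sketch of what a like_range function computes (illustrative only; it shows neither the multi-byte handling nor the contraction handling that my_like_range_generic() adds):

    #include <stddef.h>

    /* Given a LIKE pattern, build the smallest (min_str) and largest
       (max_str) keys any matching string can have, so that the optimizer
       can turn "col LIKE 'abc%'" into an index range scan.  The literal
       prefix is copied as-is; from the first wildcard on, min_str is
       padded with the smallest sorting character and max_str with the
       largest.  The real my_like_range_generic() does this per collation,
       including contractions such as the Czech "ch". */
    static void like_range_8bit(const char *pattern, size_t plen,
                                char escape, char w_one, char w_many,
                                char min_sort_char, char max_sort_char,
                                char *min_str, char *max_str, size_t res_len)
    {
      size_t i = 0, p = 0;

      for (; p < plen && i < res_len; p++, i++)
      {
        char c = pattern[p];
        if (c == escape && p + 1 < plen)
          c = pattern[++p];               /* escaped character is literal    */
        else if (c == w_one || c == w_many)
          break;                          /* wildcard: end of literal prefix */
        min_str[i] = max_str[i] = c;
      }
      for (; i < res_len; i++)
      {
        min_str[i] = min_sort_char;       /* e.g. 0x00 */
        max_str[i] = max_sort_char;       /* e.g. 0xFF */
      }
    }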
added:
@ mysql-test/include/ctype_czech.inc
@ mysql-test/include/ctype_like_ignorable.inc
@ mysql-test/r/ctype_like_range.result
@ mysql-test/t/ctype_like_range.test
Adding tests
modified:
@ include/m_ctype.h
- Adding helper functions for contractions.
- Prototypes: removing ucs2,utf16,utf32 functions, adding generic function.
@ mysql-test/r/ctype_uca.result
@ mysql-test/r/ctype_utf16_uca.result
@ mysql-test/r/ctype_utf32_uca.result
@ mysql-test/t/ctype_uca.test
@ mysql-test/t/ctype_utf16_uca.test
@ mysql-test/t/ctype_utf32_uca.test
- Adding tests.
@ strings/ctype-mb.c
- Pad function did not put the last character.
- Implementing my_like_range_generic() - a universal replacement
for three separate functions
my_like_range_ucs2(), my_like_range_utf16() and my_like_range_utf32(),
with correct contraction handling.
@ strings/ctype-ucs2.c
- my_fill_mb2 did not put the high byte, as previously
it was used to put only characters in the ASCII range.
Now it puts the high byte as well
(needed to populate cs->max_sort_char correctly).
- Adding DBUG_ASSERT()
- Removing character set specific functions:
my_like_range_ucs2(), my_like_range_utf16() and my_like_range_utf32().
- Using my_like_range_generic() instead of the old functions.
@ strings/ctype-uca.c
- Using generic function instead of the old character set specific ones.
@ sql/item_create.cc
@ sql/item_strfunc.cc
@ sql/item_strfunc.h
- Adding SQL functions LIKE_RANGE_MIN and LIKE_RANGE_MAX,
available only in debug build to make sure like_range()
works correctly for all character sets and collations.
It does work in general; the problem here was that the test name
'alter_table' matches 'main.alter_table-big', which had already been found.
Fixed by matching more explicitly (with/without suite name).
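A minimal sketch of the stricter matching idea (test_name_matches is a hypothetical helper, not the actual mysql-test-run code): compare the requested name against both the full 'suite.name' form and the bare name, instead of accepting a partial match.

    #include <stdio.h>
    #include <string.h>

    /* Accept a test only if the requested name equals either the full
       "suite.name" or the name without the suite prefix, so that
       "alter_table" no longer matches "main.alter_table-big". */
    static int test_name_matches(const char *full_name, const char *wanted)
    {
      const char *dot  = strchr(full_name, '.');
      const char *base = dot ? dot + 1 : full_name;
      return strcmp(full_name, wanted) == 0 || strcmp(base, wanted) == 0;
    }

    int main(void)
    {
      printf("%d\n", test_name_matches("main.alter_table-big", "alter_table")); /* 0 */
      printf("%d\n", test_name_matches("main.alter_table", "alter_table"));     /* 1 */
      return 0;
    }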