With InnoDB compressed pages formerly checksummed as crc32 changing to
the innodb checksum, optimize for speed by only calculating the
big-endian variant of crc32 after checking that the little-endian
variant doesn't validate the page.
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
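A minimal sketch of the check order described above; the little- and
big-endian crc32 implementations are passed in as hypothetical callbacks
here rather than reproducing the actual InnoDB functions:

    #include <cstdint>
    #include <cstddef>

    /* Checksum callback type; the server would plug in its real
       little-endian and big-endian crc32 implementations here. */
    typedef uint32_t (*crc32_fn)(const unsigned char *buf, size_t len);

    /* Compute the little-endian crc32 first and fall back to the
       big-endian variant only when the first result does not match
       the checksum stored on the page. */
    static bool page_crc32_matches(const unsigned char *page, size_t len,
                                   uint32_t stored,
                                   crc32_fn crc32_le, crc32_fn crc32_be)
    {
      if (crc32_le(page, len) == stored)
        return true;                        /* common case: one pass */
      return crc32_be(page, len) == stored; /* rare fallback */
    }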
Different fix. Don't allow Item_func_sp to be evaluated unless
all tables are prelocked.
Extend the test case to make sure Item_func_sp::val_str is called
(the table must have at least one row for that).
This reverts commit 035a5ac62a.
Two minor problems and one regression:
1. caching the value in str_result. Other Item methods may use it,
destroying the cache. See, for example, Item::save_in_field, where
str_result is moved to use a local buffer (this failed main.grant)
2. Item_func_conv_charset::safe is now set too late: it is initialized
only in val_str() but checked before that; this failed many tests
in optimized builds.
To fix 1, use tmp_result instead of str_result; to fix 2, use
the else branch in the Item_func_conv_charset constructor to set
safe purely from charset properties.
But this introduces a regression: constant strings can no longer be
converted, say, from utf8 to latin1 (because 'safe' will be false).
This fails a few tests too. There is no way to fix it without reverting
the commit and converting constants, as before, in the constructor.
The race condition happened if mark_xid_done was considerably delayed,
and an extra Binlog_checkpoint event was written into the binary log
which was later indicated in an error message. Fixed by ensuring
that the event is written before the binary log is rotated to the one
which is used in the output.
innobase_kill_query(): Remove the bogus warning message
(for the valid outcome err==DB_DEADLOCK) that was added in
commit fec844aca8
Author: Jan Lindström <jan.lindstrom@mariadb.com>
Date: Tue Sep 6 09:43:16 2016 +0300
Merge InnoDB 5.7 from mysql-5.7.14.
Also, remove some redundant variables and add a debug assertion for
enforcing the proper outcome of lock_trx_handle_wait().
The reason is a simple race condition. Initially the test was meant to
synchronize with the master before showing tables, but it turned out that
the slave IO thread should fail by this point, and the synchronization
was removed along with a server bugfix. Now an intermediate sync has been
added instead, to make sure that the slave has replicated the events
before the point of failure.
buffer pool size
The reason for the exception is an earlier overflow in the multiplication
of two 32-bit integers (the product was 0 rather than the expected 8GB,
due to truncation).
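A minimal illustration of that truncation; the chunk-based numbers
below are made up for the example and are not the real buffer pool
sizing variables:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
      /* Hypothetical sizing: 8192 chunks of 1 MiB should be 8 GiB. */
      uint32_t n_chunks    = 8192;
      uint32_t chunk_bytes = 1024 * 1024;

      /* 32-bit multiplication: 2^13 * 2^20 = 2^33, whose low 32 bits
         are all zero, so the product truncates to 0, not 8 GiB. */
      uint32_t truncated = n_chunks * chunk_bytes;

      /* Widening one operand before multiplying keeps the result. */
      uint64_t correct = (uint64_t) n_chunks * chunk_bytes;

      printf("truncated=%u correct=%llu\n",
             (unsigned) truncated, (unsigned long long) correct);
      return 0;
    }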
When JOIN::destroy() was called for a JOIN object that had
- join->tmp_join != NULL
- join->table[0]->sort set
the latter was not cleaned up.
This could cause a memory leak and/or asserts in the subsequent queries.
Fixed by adding a cleanup call.
The problem was that null_value was not set to "false" on a well-formed row.
If an ill-formed row was followed by a well-formed row, null_value remained
"true" in the call of Item::send() for the well-formed row.
- created binlog_encryption test suite and added it to the default list
- moved some tests from rpl, binlog and multisource suites to extra
so that they could be re-used in different suites
- made minor changes in include files
crashes server
This bug is the result of merging the Oracle MySQL follow-up fix
BUG#22963169 MYSQL CRASHES ON CREATE FULLTEXT INDEX
without merging the base bug fix:
Bug#79475 Insert a token of 84 4-bytes chars into fts index causes
server crash.
Unlike the above mentioned fixes in MySQL, our fix will not change
the storage format of fulltext indexes in InnoDB or XtraDB
when a character encoding with mbmaxlen=2 or mbmaxlen=3 is used
and the length of a word is between 128 and 84*mbmaxlen bytes.
The Oracle fix would allocate 2 length bytes for these cases.
Compatibility with other MySQL and MariaDB releases is ensured by
persisting the used maximum length in the SYS_COLUMNS table in the
InnoDB data dictionary.
This fix also removes some unnecessary strcmp() calls when checking
for the legacy default collation my_charset_latin1
(my_charset_latin1.name=="latin1_swedish_ci").
fts_create_one_index_table(): Store the actual length in bytes.
This metadata will be written to the SYS_COLUMNS table.
fts_zip_initialize(): Initialize only the first byte of the buffer.
Actually the code should not even care about this first byte, because
the length is set to 0.
FTS_MAX_WORD_LEN: Define as HA_FT_MAXCHARLEN * 4 aka 336 bytes,
not as 254 bytes.
row_merge_create_fts_sort_index(): Set the actual maximum length of the
column in bytes, similar to fts_create_one_index_table().
row_merge_fts_doc_tokenize(): Remove the redundant parameter word_dtype.
Use the actual maximum length of the column. Calculate the extra_size
in the same way as row_merge_buf_encode() does.
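For illustration, a sketch of the 1-byte vs. 2-byte length prefix
decision for a variable-length field, in the spirit of the compact row
format; the helper below is illustrative and not the actual
row_merge_buf_encode() logic:

    #include <cstddef>

    /* A column whose maximum length fits in 255 bytes always uses a
       single length byte; longer columns need two bytes once the
       actual length reaches 128.  With the old 254-byte word limit one
       byte was always enough; with a 336-byte maximum, tokens of
       128..336 bytes would need the two-byte form. */
    static size_t length_prefix_bytes(size_t actual_len, size_t max_col_len)
    {
      if (max_col_len <= 255)
        return 1;                        /* short column: one byte */
      return actual_len < 128 ? 1 : 2;   /* long column: maybe two */
    }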
Check for readline before checking for curses headers, because
MYSQL_CHECK_READLINE fails when curses is not found, but
CHECK_INCLUDE_FILES simply remembers the fact and continues. So if
there is no curses, MYSQL_CHECK_READLINE will abort; the user will then
install curses and continue the build. Thus, CHECK_INCLUDE_FILES
will remember that there is no curses, but other checks from
MYSQL_CHECK_READLINE will remember that curses is there. This
results in inconsistent HAVE_xxx defines.