- Make the accelerated checksum available to InnoDB and XtraDB.
- Fall back to the slice-by-eight implementation if the accelerated one is not
available. The mode used is printed on startup (a runtime-selection sketch
follows after this list).
- This will only build on POWER systems at the moment, until the CMakeLists are
modified to add the crc32_power8/ files only when building on POWER.
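As a rough illustration of the selection described above, here is a minimal
sketch in C; the symbol names (crc32_vpmsum, ut_crc32_slice8, ut_crc32_init)
are placeholders and not necessarily the real InnoDB/XtraDB identifiers:

    #include <stdint.h>
    #include <stddef.h>

    typedef uint32_t (*ut_crc32_func_t)(const void *buf, size_t len);

    /* Provided elsewhere; placeholder names, not the exact symbols. */
    uint32_t crc32_vpmsum(const void *buf, size_t len);     /* POWER8 asm  */
    uint32_t ut_crc32_slice8(const void *buf, size_t len);  /* sw fallback */

    static ut_crc32_func_t ut_crc32 = ut_crc32_slice8;
    static const char *ut_crc32_mode = "Using software crc32 implementation";

    void ut_crc32_init(void)
    {
    #if defined(__powerpc64__)
        /* On POWER, use the vpmsum-accelerated code from crc32_power8/. */
        ut_crc32 = crc32_vpmsum;
        ut_crc32_mode = "Using POWER8 crc32 implementation";
    #endif
        /* ut_crc32_mode is what gets printed on startup (and by the test). */
    }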
Running MySQL-5.7 unittest/gunit/innodb/ut0crc32-t:
Before:
1..2
Using software crc32 implementation, CPU is little-endian
ok 1
Using software crc32 implementation, CPU is little-endian
normal CRC32: real 0.148006 sec
normal CRC32: user 0.148000 sec
normal CRC32: sys 0.000000 sec
big endian CRC32: real 0.144293 sec
big endian CRC32: user 0.144000 sec
big endian CRC32: sys 0.000000 sec
ok 2
After:
1..2
Using POWER8 crc32 implementation, CPU is little-endian
ok 1
Using POWER8 crc32 implementation, CPU is little-endian
normal CRC32: real 0.008097 sec
normal CRC32: user 0.008000 sec
normal CRC32: sys 0.000000 sec
big endian CRC32: real 0.147043 sec
big endian CRC32: user 0.144000 sec
big endian CRC32: sys 0.000000 sec
ok 2
Author CRC32 ASM code: Anton Blanchard <anton@au.ibm.com>
ref: https://github.com/antonblanchard/crc32-vpmsum
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
IBM System Z has an outdated compiler version, which supports the GCC sync
builtins but not the GCC atomic builtins. It also has a weak memory model.
InnoDB attempted to verify whether __sync_lock_test_and_set() is available by
checking IB_STRONG_MEMORY_MODEL. That macro has nothing to do with the
availability of __sync_lock_test_and_set(); the right macro is HAVE_ATOMIC_BUILTINS.
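A hedged illustration of the distinction (not the actual InnoDB source):
HAVE_ATOMIC_BUILTINS answers whether the builtin exists at all, while
IB_STRONG_MEMORY_MODEL only says whether extra barriers can be omitted.

    /* Sketch of the intended guard; the ib_test_and_set name is made up. */
    #if defined(HAVE_ATOMIC_BUILTINS)
    /* The builtin is available, so it can back the lock word... */
    # define ib_test_and_set(ptr, val) __sync_lock_test_and_set(ptr, val)
    /* ...and IB_STRONG_MEMORY_MODEL only decides whether additional
       memory barriers around it may be skipped. */
    #else
    /* No sync builtins at all: fall back to a mutex-protected flag. */
    #endif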
1. define connect_EXPORTS; this causes the engine to use the MariaDB
versions of the timestamp<->struct tm conversion instead of the
TZ-dependent libc versions.
2. remove check_access() that was removed once, but re-appeared
during a complex merge.
3. disable a totally broken test
4. update test results
5. skip odbc_firebird test when no firebird DSN is available
rpl/rpl_mdev382 ; Wrong replace in show_binlog_events2.inc
binlog/database ; Different error on Solaris if file exists
mroonga/repair_table_no_index_file ; Different system error on Solaris
partition_not_blackhole ; Different error on Solaris
partition_myisam ; Different error on Solaris
Some other failures in mroonga were because have_32bit.inc didn't correctly
detect 64 bits on Solaris. Fixed by using DEFAULT_MACHINE instead of MACHINE_TYPE
for Sys_version_compile_machine.
Don't mark the SEQUENCE engine as XA-capable. The engine never
registers itself for any transaction, so it doesn't matter
whether it is XA-capable or not. The only effect of being
"XA-capable" is breaking the "number of XA-capable engines"
check of TC_LOG_MMAP.
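For context, a paraphrased sketch of how such a count could look; the real
logic lives around total_ha_2pc in sql/handler.cc, and the struct and function
names below are made up:

    #include <stddef.h>

    struct hton_view { int (*prepare)(void *, void *, int); };  /* minimal stand-in */

    unsigned count_xa_capable(struct hton_view **h, size_t n)
    {
        unsigned total_2pc = 0;
        for (size_t i = 0; i < n; i++)
            if (h[i] && h[i]->prepare)       /* engine counts as "XA-capable" */
                total_2pc++;
        /* Without a binlog, total_2pc > 1 brings in the TC_LOG_MMAP
           transaction coordinator; an engine that never registers for any
           transaction, like SEQUENCE, should not inflate this count. */
        return total_2pc;
    }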
Fixed test failures when running optimized code:
- Some assert() calls contained code with side effects that has to be executed
even in optimized builds (see the generic example below).
Fixed copying of some uninitialized data (fixed a valgrind warning).
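The assert() failure pattern, as a generic example rather than a quote from the
affected tests:

    #include <assert.h>

    int do_the_work(void);              /* hypothetical helper */

    void example(void)
    {
        /* Broken under -DNDEBUG: the whole expression, including the call,
           is compiled out, so the work is never executed. */
        assert(do_the_work() == 0);

        /* Fixed: always run the code, only assert on the result. */
        int err = do_the_work();
        assert(err == 0);
        (void) err;                     /* unused in optimized builds */
    }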
Maintainer: Michal Hrusecky <Michal.Hrusecky@opensuse.org>
(modified by O. Bertrand --> adding and using the XSTR macro)
modified: storage/connect/tabxml.cpp
When compiled with the "-Wl,-Bsymbolic-functions" linker flag
(e.g. when building a .deb package on Ubuntu) with TokuDB and jemalloc,
mysqld crashed in toku_get_processor_frequency_cpuinfo() when
free()-ing a buffer returned by getline().
getline() uses libc malloc() internally, while free() is aliased
to jemalloc's free() in this configuration.
Fixed by not using getline(); a static buffer is used instead.
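A sketch of the shape of the fix, with the body paraphrased rather than copied
from TokuDB's toku_get_processor_frequency_cpuinfo():

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read /proc/cpuinfo with fgets() into a fixed buffer, so no heap
       buffer crosses from libc malloc() to jemalloc's free(). */
    static uint64_t processor_frequency_from_cpuinfo(void)
    {
        uint64_t hz = 0;
        char line[512];                          /* fixed, caller-owned */
        FILE *fp = fopen("/proc/cpuinfo", "r");
        if (fp == NULL)
            return 0;
        while (fgets(line, sizeof line, fp) != NULL) {
            if (strncmp(line, "cpu MHz", 7) == 0) {
                const char *colon = strchr(line, ':');
                if (colon != NULL)
                    hz = (uint64_t)(atof(colon + 1) * 1000000.0);
                break;
            }
        }
        fclose(fp);
        return hz;
    }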
Analysis: Lengths which are not UNIV_SQL_NULL, but bigger than the following
number indicate that a field contains a reference to an externally
stored part of the field in the tablespace. The length field then
contains the sum of the following flag and the locally stored len.
This was incorrectly set to
    #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_MAX)
when it should be
    #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_DEF)
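To make the flag handling concrete, a small illustration using the constants
named above (assumed to be defined as in the InnoDB headers; not a quote from
the source):

    #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_DEF)

    /* A length other than UNIV_SQL_NULL that exceeds the flag value is
       "flag + locally stored prefix length" for an externally stored field. */
    static unsigned long local_part_len(unsigned long len)
    {
        if (len != UNIV_SQL_NULL && len > UNIV_EXTERN_STORAGE_FIELD)
            return len - UNIV_EXTERN_STORAGE_FIELD;   /* flag stripped off */
        return len;                                   /* fully local field */
    }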
Additionally, we need to disable support for > 16K page size for
row compressed tables because a compressed page directory entry
reserves 14 bits for the start offset and 2 bits for flags.
This limits the uncompressed page size to 16K. To support
larger pages, the page directory entry would need to be larger.
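The 16K ceiling follows directly from that layout; an illustrative calculation
with made-up constant names (not the actual InnoDB identifiers):

    enum {
        SLOT_OFFSET_BITS = 14,                          /* record start offset */
        SLOT_FLAG_BITS   = 2,                           /* e.g. owned/deleted  */
        SLOT_OFFSET_MASK = (1 << SLOT_OFFSET_BITS) - 1  /* 0x3fff, max 16383   */
    };
    /* The largest addressable offset is 2^14 - 1 = 16383 bytes, hence the
       16K uncompressed page limit; larger pages need a wider slot. */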
Restore changes that were lost in a merge. Originally from
commit 66fd45a
Author: Sergei Golubchik <serg@mariadb.org>
Date: Mon Jun 8 21:06:56 2015 +0200
MDEV-7398 mysqld segfaults on FreeBSD 10.1 i386 when built with clang 3.4
Analysis: The current implementation will write and read at least one block
(sort_buffer_size bytes) from disk / index even if that block does not
contain any records.
Fix: Avoid writing / reading empty blocks to temporary files (disk).
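A generic sketch of the fix's shape (not the actual filesort code): spill a
sort-buffer block to the temporary file only when it actually holds records,
and symmetrically skip zero-record blocks on merge.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdbool.h>

    static bool spill_block(FILE *tmp, const unsigned char *buf,
                            size_t bytes, unsigned long records)
    {
        if (records == 0)
            return true;          /* nothing to write, nothing to read back */
        return fwrite(buf, 1, bytes, tmp) == bytes;
    }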
Define O_RDONLY in jsonudf.cpp
Correct wrong deinit function names
Make Locate functions use the variable more
Avoid signed/unsigned warning in ha_connect.cc GetIntegerTableOption
Initialize oom in tabodbc MakeInsert
modified: storage/connect/ha_connect.cc
modified: storage/connect/jsonudf.cpp
modified: storage/connect/tabodbc.cpp