Passing a null pointer to a nonnull argument is not only undefined
behaviour, but it also grants the compiler permission to optimize
away subsequent checks of whether the pointer is null. GCC, at least
starting with version 8, may do that at -O2, potentially causing SIGSEGV.
These problems were caught in a WITH_UBSAN=ON build with the
Bug#7024 test in main.view.
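As a minimal sketch of the failure mode (illustrative functions, not the
affected server code):

    #include <stdio.h>

    static void log_name(const char *name) __attribute__((nonnull));

    static void log_name(const char *name)
    {
      fprintf(stderr, "%s\n", name);
    }

    int report(const char *name)
    {
      log_name(name);     /* name is now assumed to be non-null... */
      if (name == NULL)   /* ...so GCC 8+ at -O2 may delete this check */
        return -1;
      return 0;
    }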
It is already in libmariadb, and the server (including the client code
inside the server) does not need it.
It does not work in embedded builds either, since it relies on
non-blocking sockets.
When MDEV-22669 introduced CRC-32C acceleration to IA-32,
it worked around a compiler bug by disabling the acceleration
on GCC 4 for IA-32 altogether, even though the compiler bug
only affects -fPIC builds targeting IA-32.
Let us extend the solution from commit fe5dbfe723
and define HAVE_CPUID_INSTRUCTION, which allows us to implement
a necessary and sufficient work-around of the compiler bug.
GCC before version 5 would fail to emit the CPUID instruction
when targeting IA-32 in -fPIC mode. Therefore, we must add the
CPUID instruction to the HAVE_CLMUL_INSTRUCTION check.
This means that the PCLMUL accelerated crc32() function will
not be available on i686 executables that are compiled with
GCC 4. The limitation does not impact AMD64 builds or non-PIC
x86 builds, or other compilers (clang, or GCC 5 or newer).
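A compile check along these lines (a sketch; the actual CMake test may
differ) exposes the bug, because GCC 4 fails to emit CPUID when
targeting IA-32 with -fPIC:

    #include <cpuid.h>

    int main(void)
    {
      unsigned eax, ebx, ecx, edx;
      __cpuid(1, eax, ebx, ecx, edx);
      return !(ecx & (1U << 1));  /* ECX bit 1: PCLMULQDQ support */
    }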
MDEV-22641 in commit dec3f8ca69
refactored a SIMD implementation of CRC-32 for the ISO 3309 polynomial
that uses the IA-32/AMD64 carry-less multiplication (pclmul)
instructions. The code was previously only available in Mariabackup;
it was changed to be a general replacement of the zlib crc32().
There exist AMD64 systems where CMAKE_SYSTEM_PROCESSOR matches
the pattern i[36]86 but not x86_64 or amd64. This would cause a
link failure, because mysys/checksum.c basically assumed that
compiler support for the instructions is always available on
GCC-compatible compilers on AMD64.
Furthermore, we were unnecessarily disabling the SIMD acceleration
for 32-bit executables.
Note: Until MDEV-22749 has been implemented, the PCLMUL instruction
will not be used on Microsoft Windows.
Closes: #1660
Raspberry Pi 4 supports crc32 but does not support pmull (MDEV-23030).
PR #1645 offers a fix for this issue, but it does not consider the case
where the target platform supports crc32 but does not support PMULL.
In that case, we should still leverage the Arm64 crc32 instruction
(__crc32c) and only skip the parallel computation (pmull/vmull), rather
than skipping all hardware crc32 computation.
The PR also removes the unnecessary CRC32_ZERO branch in
'crc32c_aarch64' for MariaDB, and fixes the indentation and coding style.
Change-Id: I76371a6bd767b4985600e8cca10983d71b7e9459
Signed-off-by: Yuqi Gu <yuqi.gu@arm.com>
Depending on the build configuration the error might be hidden;
in particular, liblz4.so and libjemalloc.so make it disappear,
but with -DWITH_INNODB_LZ4=NO -DWITH_JEMALLOC=NO it reappears.
MariaDB adopted a hardware-optimized crc32c approach on ARM64 starting
with 10.5. Said implementation of crc32c needs support from the target
hardware for the crc32 and pmull instructions. The existing logic checked
only for crc32 support through a runtime check, so if the target hardware
did not support pmull, things would fail/crash.
Expanded the runtime check to ensure pmull support is also checked on the
target hardware along with the existing crc32 check.
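On Linux/aarch64, the extended check can be written roughly as follows
(a sketch; the function name is illustrative):

    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    static int crc32c_pmull_available(void)
    {
      unsigned long hwcap= getauxval(AT_HWCAP);
      /* require both capabilities before enabling the pmull path */
      return (hwcap & HWCAP_CRC32) && (hwcap & HWCAP_PMULL);
    }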
Thanks to Marko and Daniel for review.
I ran perf top during ./mtr testing and constantly saw the times()
function there. It is so slow that it makes no sense to call it
in a loop so many times.
This patch speeds up --suite=innodb for me from 218s to 208s.
That is 9s of the times() function!
Small follow-up to MDEV-23175 to ensure the faster option on FreeBSD
and compatibility with Solaris, whose clock isn't high resolution.
ftime is left as a fallback in case an implementation doesn't
provide any of these clocks.
FreeBSD
$ ./unittest/mysys/my_rdtsc-t
1..11
# ----- Routine ---------------
# myt.cycles.routine : 5
# myt.nanoseconds.routine : 11
# myt.microseconds.routine : 13
# myt.milliseconds.routine : 11
# myt.ticks.routine : 17
# ----- Frequency -------------
# myt.cycles.frequency : 3610295566
# myt.nanoseconds.frequency : 1000000000
# myt.microseconds.frequency : 1000000
# myt.milliseconds.frequency : 899
# myt.ticks.frequency : 136
# ----- Resolution ------------
# myt.cycles.resolution : 1
# myt.nanoseconds.resolution : 1
# myt.microseconds.resolution : 1
# myt.milliseconds.resolution : 7
# myt.ticks.resolution : 1
# ----- Overhead --------------
# myt.cycles.overhead : 26
# myt.nanoseconds.overhead : 19140
# myt.microseconds.overhead : 19036
# myt.milliseconds.overhead : 578
# myt.ticks.overhead : 21544
ok 1 - my_timer_init() did not crash
ok 2 - The cycle timer is strictly increasing
ok 3 - The cycle timer is implemented
ok 4 - The nanosecond timer is increasing
ok 5 - The nanosecond timer is implemented
ok 6 - The microsecond timer is increasing
ok 7 - The microsecond timer is implemented
ok 8 - The millisecond timer is increasing
ok 9 - The millisecond timer is implemented
ok 10 - The tick timer is increasing
ok 11 - The tick timer is implemented
Largely based on MySQL commit
75271e51d6
MySQL Ref:
BUG#24566529: BACKPORT BUG#23575445 TO 5.6
(cut)
Also, the PTR_SANE macro, which tries to check whether a pointer
is invalid (used when printing pointer values in stack traces),
gave false negatives on OSX/FreeBSD. On these platforms we
now simply check whether the pointer is non-null. This also removes
a sbrk() deprecation warning when building on OS X. (It was
previously only disabled when building with Xcode.)
Removed execinfo path of MySQL patch that was already included.
sbrk doesn't exist on FreeBSD aarch64.
Removed the HAVE_BSS_START based detection and replaced it with
__linux__, as __bss_start doesn't exist on OSX, Solaris or Windows,
but does exist on multiple Linux architectures.
Tested on FreeBSD and Linux x86_64. Being in FreeBSD ports for 2
years implies good testing on all FreeBSD architectures too.
The MySQL-8.0.21 code is functionally identical to the original commit.
The aarch64 timer is available to userspace via an architectural register.
clang's __builtin_readcyclecounter is wrong for aarch64 (it reads the PMU
cycle counter instead of the architected timer register), so we don't use it.
my_rdtsc unit-test on AWS m6g shows:
frequency: 121830845
resolution: 1
overhead: 1
This counter is not strictly increasing, but it is non-decreasing.
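A sketch of the userspace read of the architected timer (the usual
inline-asm idiom; the actual my_rdtsc.c code may differ in details):

    static inline unsigned long long aarch64_timer(void)
    {
      unsigned long long result;
      __asm__ __volatile__("mrs %0, cntvct_el0" : "=r"(result));
      return result;
    }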
This patch ensures that all identical character sets share the same
cs->csname.
This allows us to replace strcmp() in my_charset_same() with comparisons
of pointers. This fixes a long-standing performance issue that could cause
a strcmp() for every item sent through the protocol class to the end user.
One consequence of this patch is that we don't allow one to add a character
set definition in the Index.xml file that changes the csname of an existing
character set. This is by design, as changing the names of existing
character sets is extremely dangerous, especially as some storage engines
just record character set numbers.
As we now have a hash over the character sets' csname, we can in the future
use that for faster access to a specific character set. This could be done
by changing the hash to non-unique and using the hash to find the next
character set with the same csname.
Linux glibc has deprecated ftime, resulting in a compile error on Fedora 32.
Per the manual, clock_gettime is the suggested replacement. Because
my_timer_milliseconds is a relative time used largely by the performance
schema, CLOCK_MONOTONIC_COARSE is used. It has been available since
Linux 2.6.32.
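A sketch of the clock_gettime()-based millisecond timer (illustrative;
the actual my_timer_milliseconds() may differ in details):

    #include <time.h>

    static unsigned long long timer_milliseconds(void)
    {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
      return (unsigned long long) ts.tv_sec * 1000 +
             (unsigned long long) ts.tv_nsec / 1000000;
    }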
The low overhead is shown in the unittest:
$ unittest/mysys/my_rdtsc-t
1..11
# ----- Routine ---------------
# myt.cycles.routine : 5
# myt.nanoseconds.routine : 11
# myt.microseconds.routine : 13
# myt.milliseconds.routine : 18
# myt.ticks.routine : 17
# ----- Frequency -------------
# myt.cycles.frequency : 3596597014
# myt.nanoseconds.frequency : 1000000000
# myt.microseconds.frequency : 1000000
# myt.milliseconds.frequency : 1039
# myt.ticks.frequency : 103
# ----- Resolution ------------
# myt.cycles.resolution : 1
# myt.nanoseconds.resolution : 1
# myt.microseconds.resolution : 1
# myt.milliseconds.resolution : 1
# myt.ticks.resolution : 1
# ----- Overhead --------------
# myt.cycles.overhead : 118
# myt.nanoseconds.overhead : 234
# myt.microseconds.overhead : 222
# myt.milliseconds.overhead : 30
# myt.ticks.overhead : 4946
ok 1 - my_timer_init() did not crash
ok 2 - The cycle timer is strictly increasing
ok 3 - The cycle timer is implemented
ok 4 - The nanosecond timer is increasing
ok 5 - The nanosecond timer is implemented
ok 6 - The microsecond timer is increasing
ok 7 - The microsecond timer is implemented
ok 8 - The millisecond timer is increasing
ok 9 - The millisecond timer is implemented
ok 10 - The tick timer is increasing
ok 11 - The tick timer is implemented
The merge commit 0fd89a1a89
of commit b6ec1e8bbf
seems to cause occasional MemorySanitizer failures,
because it failed to replace some MEM_UNDEFINED() calls
with MEM_MAKE_ADDRESSABLE().
my_large_free(): Correctly invoke MEM_MAKE_ADDRESSABLE() after
freeing memory. Failure to do so could cause bogus
AddressSanitizer failures for memory allocated by my_large_malloc().
On MemorySanitizer, we will do nothing.
buf_pool_t::chunk_t::create(): Replace the MEM_MAKE_ADDRESSABLE()
that had been added in commit 484931325e
to work around the issue.
In AddressSanitizer, we only want memory poisoning to happen
in connection with custom memory allocation or freeing.
The primary use of MEM_UNDEFINED is for declaring memory uninitialized
in Valgrind or MemorySanitizer. We do not want MEM_UNDEFINED to
have the unwanted side effect that AddressSanitizer would no longer
be able to complain about accessing unallocated memory.
MEM_UNDEFINED(): Define as no-op for AddressSanitizer.
MEM_MAKE_ADDRESSABLE(): Define as MEM_UNDEFINED() or
ASAN_UNPOISON_MEMORY_REGION().
MEM_CHECK_ADDRESSABLE(): Wrap also __asan_region_is_poisoned().
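The resulting macro scheme looks roughly like this (a sketch;
HAVE_valgrind stands in for the actual configure-time detection, and the
MemorySanitizer branch is omitted):

    #ifdef __SANITIZE_ADDRESS__
    # include <sanitizer/asan_interface.h>
    # define MEM_UNDEFINED(a, len)        ((void) 0)  /* no-op under ASAN */
    # define MEM_MAKE_ADDRESSABLE(a, len) ASAN_UNPOISON_MEMORY_REGION(a, len)
    #elif defined(HAVE_valgrind)
    # include <valgrind/memcheck.h>
    # define MEM_UNDEFINED(a, len)        VALGRIND_MAKE_MEM_UNDEFINED(a, len)
    # define MEM_MAKE_ADDRESSABLE(a, len) MEM_UNDEFINED(a, len)
    #else
    # define MEM_UNDEFINED(a, len)        ((void) 0)
    # define MEM_MAKE_ADDRESSABLE(a, len) ((void) 0)
    #endif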
MDEV-21298: mariabackup doesn't read the [mariadbd] and [mariadbd-X.Y]
server option groups from configuration files
MDEV-21301: mariabackup doesn't read the [mariadb-backup] option group in
configuration files
All three issues require changing the same code; that is why their
fixes are joined in one commit.
The fix is in invoking load_defaults_or_exit() and handle_options() for
backup-specific groups separately from client-server groups to let the last
handle_options() call fail on unknown backup-specific options.
The order of option processing is the following:
1) Load server groups and process server options, ignore unknown
options
2) Load client groups and process client options, ignore unknown
options
3) Load backup groups and process client-server options, exit on
unknown option
4) Process --mysqld-args command line options, ignore unknown options
A new global flag, my_handle_options_init_variables, was added to make
it possible to invoke handle_options() for the same allowed options set
several times without re-initialising previously set option values.
Destroying of the --password value is moved from the option processing
callback to mariabackup's handle_options() function, to make it possible
to invoke the server's handle_options() several times for the same
possible allowed options set.
Galera invokes wsrep_sst_mariabackup.sh with mysqld command line
options to configure mariabackup as close to the server as possible.
It is not known which server options are supported by mariabackup when the
script is invoked. That is why the new mariabackup option "--mysqld-args"
is added; all unknown options that follow this option will be silently
ignored.
wsrep_sst_mariabackup.sh was also changed to:
- use "--mysqld-args" mariabackup option to pass mysqld options,
- remove deprecated innobackupex mode,
- remove unsupported mariabackup options:
--encrypt
--encrypt-key
--rebuild-indexes
--rebuild-threads
MCOL-3875 Columnstore write cache
The main change is to make the thr_lock function get_status
return a value that indicates we have to abort the lock.
Other things:
- Made start_bulk_insert() and end_bulk_insert() protected so that the
insert cache can use them
The code used is largely based on code from Tencent.
The problem is that in some rare cases there may be a conflict between .frm
files and the files in the storage engine. In this case DROP TABLE
was not able to properly drop the table.
Some MariaDB/MySQL forks have solved this by adding a FORCE option to
DROP TABLE. After some discussion among MariaDB developers, we concluded
that users expect DROP TABLE to always work, even if the
table is not consistent. There should not be a need to use a
separate keyword to ensure that the table is really deleted.
The solution used is:
- If a table's .frm file doesn't exist, try dropping the table from all
storage engines.
- If the .frm file exists but the table does not exist in the engine,
try dropping the table from all storage engines.
- Update storage engines that use many table files (CSV, MyISAM, Aria) to
succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
is not needed and always succeeds. This is used by ha_delete_table_force()
to know which handlers to ignore when trying to drop a table without
a .frm file.
The disadvantage of this solution is that a DROP TABLE on a non-existing
table will be a bit slower, as we have to ask all active storage engines
if they know anything about the table.
Other things:
- Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
if the file doesn't exist. This simplifies some of the code.
- Don't clear thd->error in ha_delete_table() if there was an active
error. This is a bug fix.
- handler::delete_table() will not abort if the first file doesn't exist.
This is a bug fix to handle the case when a drop table was aborted in
the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
same code path as when it's not used.
- Use non_existing_table_error() to detect whether the table didn't exist.
The old code used different error tests in different places.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if
it can't be parsed, instead of leaving it hanging around (bug fix).
- InnoDB no longer prints an error about the .frm file being out of sync
with the InnoDB dictionary if the .frm file does not exist. This change
was required to be able to try to drop an InnoDB table when the .frm
file doesn't exist.
- Fixed a bug in mi_delete_table() where the .MYD file would not be dropped
if the .MYI file didn't exist.
- Fixed a memory leak in Mroonga when deleting a non-existing table.
- Fixed a memory leak in Connect when deleting a non-existing table.
Bugs introduced by the original version of this commit, now fixed:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from
other engines
* FreeBSD calls amd64 what Linux calls x86_64
* signal returns void (*)(int)
* struct pam_message has char*, not const char*
* krb5_free_unparsed_name exists, but is deprecated
The existing implementation used my_checksum (from mysys)
for calculating the table checksum and binlog checksum.
That implementation was optimized for powerpc only, and instead of
SIMD implementations for x86 (using clmul) and ARM (using ACLE)
it used the zlib crc32.
mariabackup had its own copy of the crc32 implementation, with a
hardware-optimized implementation only for x86, and lacked
hardware-based implementations for powerpc and ARM.
This patch unifies all such calls and aggregates all of them
behind the unified interface my_checksum().
Said unification also enables hardware-optimized calls for all
architectures, viz. x86, ARM, POWERPC.
The default always falls back to the zlib crc32.
Thanks to Daniel Black for reviewing, fixing and testing the
PowerPC changes. Thanks to Marko and Daniel for early code feedback.
This ensures that directory permissions are correct in all cases, even if
bootstrap is passed non-standard locations for innodb.
Directory permissions are copied from the datadir.
The issue here is that end_of_file for an encrypted temporary IO_CACHE
(used by filesort) is updated using lseek.
Encryption adds storage overhead and hides it from the caller by
recalculating offsets and lengths.
Two different IO_CACHEs cannot possibly modify the same file,
because the encryption key is randomly generated and stored in the IO_CACHE.
So when the tempfiles are encrypted, DO NOT use lseek to change end_of_file.
Further observations about updating end_of_file using lseek:
1) The end_of_file update is only used for binlog index files.
2) The whole point is to update the file length when the file was modified
via a different file descriptor.
3) The temporary IO_CACHE files can never be modified via a different file
descriptor.
4) For an encrypted temporary IO_CACHE, end_of_file should not be updated
with lseek.
Disable IPO (interprocedural optimization, aka /GL) on Windows
on libraries from which server.dll exports symbols; exporting symbols
does not work for objects compiled with /GL.
queues.c cleanup and refactoring.
Restore the old version of _downhead() (from before cd483c5520)
that works well in the average case. Use it for queue_fix().
Move the existing specialized version of _downhead() to queue_replace(),
where it will handle the case it was specifically optimized for
(moving the element to the end of the queue).
Also correct it to fix the heap not only down, but also up
(this fixes BUG#30301356).
Add unit tests.
Collateral cosmetic fixes.
sig_return: Solaris/OSX return a different function ptr.
Move the definition to my_alarm.h, as that is its only use.
Prevents compile warnings (copied from the 10.3 branch):
mysys/my_sync.c:136:19: error: 'cur_dir_name' defined but not used [-Werror=unused-const-variable=]
136 | static const char cur_dir_name[]= {FN_CURLIB, 0};
| ^~~~~~~~~~~~
Fix a compile error (DEPRECATED) leaked from the ssl headers:
In file included from /export/home/dan/mariadb-server-10.4/sql/sys_vars.cc:37:
/export/home/dan/mariadb-server-10.4/sql/sys_vars.ic:69: error: "DEPRECATED" redefined [-Werror]
69 | #define DEPRECATED(X) X
|
In file included from /export/home/dan/mariadb-server-10.4/include/violite.h:150,
from /export/home/dan/mariadb-server-10.4/sql/sql_class.h:38,
from /export/home/dan/mariadb-server-10.4/sql/sys_vars.cc:36:
/usr/include/openssl/ssl.h:2356: note: this is the location of the previous definition
2356 | # define DEPRECATED __attribute__((deprecated))
|
Avoid Werror condition on non-Linux:
plugin/server_audit/server_audit.c:2267:7: error: variable 'db_len_off' set but not used [-Werror=unused-but-set-variable]
2267 | int db_len_off;
| ^~~~~~~~~~
plugin/server_audit/server_audit.c:2266:7: error: variable 'db_off' set but not used [-Werror=unused-but-set-variable]
2266 | int db_off;
| ^~~~~~
auth_gssapi: fix the include path for Solaris.
Consistent with the upstream packaged patch:
https://github.com/OpenIndiana/oi-userland/blob/oi/hipster/components/database/mariadb-103/patches/06-gssapi.h.patch
Fix compile warnings on Solaris:
[ 91%] Building C object plugin/server_audit/CMakeFiles/server_audit.dir/server_audit.c.o
/plugin/server_audit/server_audit.c: In function 'auditing_v8':
/plugin/server_audit/server_audit.c:2194:20: error: unused variable 'db_len_off' [-Werror=unused-variable]
2194 | static const int db_len_off= 128;
| ^~~~~~~~~~
/plugin/server_audit/server_audit.c:2193:20: error: unused variable 'db_off' [-Werror=unused-variable]
2193 | static const int db_off= 120;
| ^~~~~~
/plugin/server_audit/server_audit.c:2192:20: error: unused variable 'cmd_off' [-Werror=unused-variable]
2192 | static const int cmd_off= 4432;
| ^~~~~~~
At top level:
/plugin/server_audit/server_audit.c:2192:20: error: 'cmd_off' defined but not used [-Werror=unused-const-variable=]
/plugin/server_audit/server_audit.c:2193:20: error: 'db_off' defined but not used [-Werror=unused-const-variable=]
2193 | static const int db_off= 120;
| ^~~~~~
/plugin/server_audit/server_audit.c:2194:20: error: 'db_len_off' defined but not used [-Werror=unused-const-variable=]
2194 | static const int db_len_off= 128;
| ^~~~~~~~~~
cc1: all warnings being treated as errors
tested on:
$ uname -a
SunOS openindiana 5.11 illumos-b97b1727bc i86pc i386 i86pc
Read TLS with my_thread_var, write TLS with set_mysys_var().
my_thread_var is no longer __attribute__ ((const)): that attribute
is simply incorrect here; read the gcc manual for more information.
sql/threadpool_generic.cc fails with that attribute.
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except:
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
In addition, partitioned S3 tables can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files
on S3 for partitioned shared (S3) tables.
The discovery methods are enhanced by allowing engines that support
discovery to also support discovery of the partitioned table's .frm and
.par files.
Things in more detail:
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added the hton callback create_partitioning_metadata to inform the
handler that metadata for a partitioned table has changed.
- Added back handler::discover_check_version() to be able to check if
a table's or a partitioned table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed the CHF_xxx handler flags to an enum.
- Changed some checks from using table->file->ht to use
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around.
- Appended the extension to the path for writefrm() to be able to reuse
the function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of
non-critical tracing.
- On Windows, do not treat lack of the SeLockMemory privilege as a fatal
error. Just like on any other platform, there is a fallback to ordinary
pages. That is better than a server that silently does not start.
- On Windows, remove the incorrect, irritating "fallback to conventional
pages failed" part from the warning when allocating large pages fails.
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
Both the Windows and the MMAP-capable implementations fell back to a
non-MEM_LARGE_PAGES/HugeTLB allocation when the large page implementation
failed. These can be freed by the corresponding function.
Prior to this, falling back to conventional memory would result in
deallocation using munmap/VirtualFree of memory that was allocated
using my_malloc_lock. At worst this could succeed, and
my_malloc_lock would lose its memory without knowing about it.
* The `--defaults-file` option is shown in `--help --verbose` only if
it was applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`;
previously it was treated as a directory with `my.cnf` appended
Detecting the CPUs based on sysconf of the online CPUs can significantly
overestimate the number of CPUs available.
Whether via numactl, cgroups, taskset, systemd constraints, docker
containers or probably other mechanisms, the number of threads mysqld
can be run on can be quite a bit less.
As such, we use the pthread_getaffinity_np function on Linux and FreeBSD
(identical API) to get the number of CPUs, as sketched below.
The number of CPUs is the default for thread_pool_size, and a too-high
default will result in large memory usage and high context
switching overhead.
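A minimal sketch of the affinity-based count (Linux flavor shown;
FreeBSD uses cpuset_t from <pthread_np.h>; the helper name is illustrative):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static int my_available_cpus(void)
    {
      cpu_set_t set;
      /* count only the CPUs this thread is allowed to run on */
      if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) == 0)
        return CPU_COUNT(&set);
      return 1;  /* illustrative fallback if the affinity query fails */
    }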
Closes PR #922
Also added support for MAP_SYNC. It allows achieving decent performance
with DAX devices even when libpmem is unavailable.
Fixed the Windows version of my_msync(): according to the manual,
FlushViewOfFile() may return before the flush has actually completed.
It is advised to issue FlushFileBuffers() after FlushViewOfFile().
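Roughly, for the Windows path (a sketch, not the exact mysys code):

    #include <windows.h>

    static int msync_sketch(HANDLE file, void *addr, size_t length)
    {
      if (!FlushViewOfFile(addr, length))
        return -1;
      /* FlushViewOfFile() may return early; force the write-through */
      return FlushFileBuffers(file) ? 0 : -1;
    }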
If my_realpath() fails, don't return the error code;
get_defaults_options() returns the number of options consumed,
not 0=ok/1=error.
Instead, ignore the error from my_realpath(). If it fails, it internally
falls back to my_load_path(), which restores the 10.4 (and earlier)
behavior.
Use my_thread_var::stack_ends_here inside lf_pinbox_real_free() for the
address where the thread stack ends.
Remove LF_PINS::stack_ends_here.
It is not safe to assume that the mysys_var that was used during pin
allocation remains correct during free. E.g. with binlog group commit
in InnoDB, which frees pins for multiple InnoDB transactions, it does
not work correctly.
Threadpool will need functionality for a periodic thr_timer
(the threadpool maintenance task is a timer that runs periodically).
Also increase the stack size for the timer thread; 8k won't be enough.
Make load_defaults() store the file name in the generated option list
using a special marker option, ---file-marker---.
Pick up this filename in handle_options().
Remove ---args-separator---; use ---file-marker--- with an empty file
name instead. This simplifies checks on the caller: there is only one
special option to recognize.
Only my_getopt should use it, because it changes my_getopt's behavior.
If one simply wants to skip the separator, one should not ask for it to
be added in the first place.
Process all --defaults* options uniformly;
get rid of the special cases for --no-defaults and --print-defaults.
Use realpath() instead of blindly concatenating the pwd and the relative
path.
It turns out that practically every single user of handle_options()
used the get_one_option callback. Simplify the code:
make it mandatory, and adjust the unit tests.
Almost all my_getopt settings and callbacks are global variables,
directly assignable to configure my_getopt. Only getopt_get_addr
was using a setter function. Get rid of it; make it a directly
assignable global variable like all the other settings.
Also make getopt_compare_strings() static.
This is a remnant of "MySQL Instance Manager", which was removed in
MySQL-5.5.0 and never existed in MariaDB.
Remove the callback, and simplify and optimize the code accordingly.
Commit 536215e32f in MariaDB Server 10.3.1
introduced the compiler flag (not cmake option) DBUG_ASSERT_AS_PRINTF
that converts DBUG_ASSERT in non-debug builds into printouts.
For debug builds, it could be useful to be able to convert DBUG_ASSERT
into a warning or error printout, to allow execution to continue.
This would allow debug builds to be used for reproducing hard failures
that occur with release builds.
my_assert: A Boolean flag (set by default), tied to the new option
debug_assert that is available on debug builds only.
When set, DBUG_ASSERT() will invoke assert(), like it did until now.
When unset, DBUG_ASSERT() will invoke fprintf(stderr, ...)
with the file name, line number and assertion expression.
Limit increased from 1000 to 2000.
Avoid stack overflow by only storing keys and pages on the stack in
recursive functions if there is plenty of space on it.
Other things:
- Use less stack space for b-tree operations, as we now only allocate as
much space as needed instead of always allocating HA_MAX_KEY_LENGTH.
- Replaced most usage of my_safe_alloca() in Aria with the stack_alloc
interface.
- Moved my_setstacksize() to mysys/my_pthread.c
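The underlying pattern, as a hedged sketch (illustrative names and
cut-off, not the actual stack_alloc interface): take small allocations
from the stack and push big ones to the heap, so deep recursion cannot
overflow the stack:

    #include <alloca.h>
    #include <stdlib.h>

    #define ALLOCA_THRESHOLD 4096  /* illustrative cut-off */

    /* alloca() must run in the caller's frame, hence macros */
    #define stack_or_heap_alloc(size) \
      ((size) <= ALLOCA_THRESHOLD ? alloca(size) : malloc(size))
    #define stack_or_heap_free(ptr, size) \
      do { if ((size) > ALLOCA_THRESHOLD) free(ptr); } while (0)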
Even though the PAUSE instruction latency was increased from
about 10 to 140 clock cycles in the Intel Skylake microarchitecture,
it seems to be optimal to reduce the number of subsequently executed
PAUSE instructions not to 1/14, but to 1/2.
On clang, use __builtin_readcyclecounter() when available.
Hinted by Sergey Vojtovich. (This may lead to runtime failure
on ARM systems. The hardware should be available on ARMv8 (AArch64),
but access to it may require special privileges.)
We remove support for the proprietary Sun Microsystems compiler,
and rely on clang or the __GNUC__ assembler syntax instead.
For now, we retain support for IA-64 (Itanium) and 32-bit SPARC,
even though those platforms are likely no longer widely used.
We remove support for clock_gettime(CLOCK_SGI_CYCLE),
because Silicon Graphics ceased supporting IRIX in December 2013.
This was the only cycle timer interface available for MIPS.
On PowerPC, we rely on the GCC 4.8 __builtin_ppc_get_timebase()
(or clang __builtin_readcyclecounter()), which should be equivalent
to the old assembler code on both 64-bit and 32-bit targets.
The RDTSC instruction, which was introduced in the Intel Pentium,
has been used in MariaDB for a long time. But, the __rdtsc()
wrapper is not available by default in some x86 build environments.
The simplest solution seems to be to replace the inlined instruction
with a call to the wrapper function my_timer_cycles(). The overhead
of the call should not affect the measurement threshold.
On Windows and on AMD64, we will keep using __rdtsc() directly.
Starting with the Intel Skylake microarchitecture, the PAUSE
instruction latency is about 140 clock cycles instead of the earlier 10.
On AMD processors, the latency could be 10 or 50 clock cycles,
depending on the microarchitecture.
Because of this big range of latency, let us scale the loops around
the PAUSE instruction based on timing results at server startup.
my_cpu_relax_multiplier: New variable: How many times to invoke PAUSE
in a loop. Only defined for IA-32 and AMD64.
my_cpu_init(): Determine with RDTSC the time it takes to run 16 PAUSE
instructions in two unrolled loops, and based on the quicker of the two
runs, initialize my_cpu_relax_multiplier. This form of calibration was
suggested by Mikhail Sinyavin from Intel.
LF_BACKOFF(), ut_delay(): Use my_cpu_relax_multiplier when available.
ut_delay(): Define inline in my_cpu.h.
UT_COMPILER_BARRIER(): Remove. This does not seem to have any effect,
because in our ut_delay() implementation, no computations are being
performed inside the loop. The purpose of UT_COMPILER_BARRIER() was to
prohibit the compiler from reordering computations. It was not
emitting any code.
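A sketch of the calibration (simplified, with illustrative constants;
IA-32/AMD64 only):

    #include <x86intrin.h>  /* __rdtsc(), _mm_pause() */

    unsigned my_cpu_relax_multiplier= 200;  /* default for fast PAUSE */

    #define PAUSE4  _mm_pause(); _mm_pause(); _mm_pause(); _mm_pause();
    #define PAUSE16 PAUSE4 PAUSE4 PAUSE4 PAUSE4

    void my_cpu_init(void)
    {
      unsigned long long t0= __rdtsc();
      PAUSE16
      unsigned long long t1= __rdtsc();
      PAUSE16
      unsigned long long t2= __rdtsc();
      unsigned long long best= (t2 - t1 < t1 - t0) ? t2 - t1 : t1 - t0;
      if (best > 16 * 30)   /* PAUSE looks slow (e.g. Skylake or newer) */
        my_cpu_relax_multiplier= 20;
    }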
- Do not scan the registry to check whether TCP/IP is supported.
- Do not read the registry under HKEY_LOCAL_MACHINE\SOFTWARE\MySQL anymore.
- Do not load the threadpool functions dynamically; they are available
since Win7.
- Simplify win32_init_tcp_ip(), and return an error if WSAStartup() fails.
- Correct a comment in my_parameter_handler().
Restore the detection of the default charset in command line utilities.
It worked up to 10.1, but was broken by Connector/C.
Moved the code for detection of the default charset from sql-common/client.c
to mysys, and made the command line utilities use this code if the charset
was not specified on the command line.
There were two separate problems:
- The Aria pagecache didn't properly handle re-reading of blocks
that had given errors before (this triggered an assert)
- Temporary tables that were opened several times were
not properly closed in ALTER, REPAIR or OPTIMIZE TABLE
Other things:
- Added a couple of asserts that will make it easier to
find problems like this in the future.
Fix MDEV-18750: failed to flashback a large binlog file.
The mysqlbinlog flashback failure was caused by reading the io_cache
without the MY_FULL_IO flag.
InnoDB duplicated the file descriptor returned by create_temp_file() to
work around further inconsistent use of that descriptor.
Use mysys file descriptors consistently for innobase_mysql_tmpfile(path).
Mostly close them via the appropriate mysys wrappers.
- Add new submodule for WolfSSL
- Build and use wolfssl and wolfcrypt instead of yassl/taocrypt
- Use HAVE_WOLFSSL instead of HAVE_YASSL
- Increase MY_AES_CTX_SIZE, to avoid compile time asserts in my_crypt.cc
(sizeof(EVP_CIPHER_CTX) is larger on WolfSSL)
This patch is for MEM_ROOT only.
In debug mode, add 8 bytes of poisoned memory before every allocated chunk.
To the right of every chunk there will be either 1-7 trailing poisoned
bytes, or the next chunk's redzone, or poisoned non-allocated memory, or
the redzone of a malloc()ed buffer.
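The left-redzone idea, as a hedged sketch (an illustrative helper, not
the actual MEM_ROOT code):

    #include <stddef.h>
    #include <sanitizer/asan_interface.h>

    #define REDZONE 8

    /* carve a chunk out of a MEM_ROOT-style block, poisoning the
       8 bytes in front of it so ASAN catches buffer underruns */
    static char *alloc_chunk(char **free_ptr, size_t size)
    {
      char *chunk= *free_ptr + REDZONE;
      ASAN_POISON_MEMORY_REGION(*free_ptr, REDZONE);
      *free_ptr= chunk + size;
      return chunk;
    }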
Some places didn't match the previous rules, making the Floor
address wrong.
Additional sed rules:
sed -i -e 's/Place.*Suite .*, Boston/Street, Fifth Floor, Boston/g'
sed -i -e 's/Suite .*, Boston/Fifth Floor, Boston/g'
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
With MAX_INDEXES=64 (the default), key_map=Bitmap<64> is just a wrapper
around ulonglong and thus "trivial" (it can be bzero-ed or memcpy-ed and
stays valid).
With MAX_INDEXES=128, key_map=Bitmap<128> is not a "trivial" type
anymore. The implementation uses MY_BITMAP, and MY_BITMAP contains
pointers, which make the Bitmap invalid when it is memcpy-ed/bzero-ed.
The problem in 10.4 is that there are many new key_map members inside
TABLE or KEY, and those are often memcopied and bzeroed.
The fix makes Bitmap "trivial" by inlining most of the MY_BITMAP
functionality; pointers/heap allocations are not used anymore.
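In C terms, the trivial layout amounts to this (a sketch; the real Bitmap
is a C++ template):

    /* all state inline, no pointers: memcpy()/bzero() keep it valid */
    typedef struct { unsigned long long bits[2]; } key_map128;

    static void map_set_bit(key_map128 *m, unsigned n)
    { m->bits[n / 64] |= 1ULL << (n % 64); }

    static int map_is_set(const key_map128 *m, unsigned n)
    { return (int)((m->bits[n / 64] >> (n % 64)) & 1); }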
SHOW STATUS LIKE 'Open_files' was showing 18446744073709551615.
my_file_opened used statistic_increment/statistic_decrement,
so off-by-one errors were normal and expected. But they confused
monitoring tools, so let's move my_file_opened to use atomics.
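A sketch of the change (C11 atomics shown for illustration; mysys uses
its own atomic wrappers):

    #include <stdatomic.h>

    static atomic_ulong my_file_opened;

    /* atomic updates: no lost increments, so no underflow to 2^64-1 */
    static void count_open(void)  { atomic_fetch_add(&my_file_opened, 1); }
    static void count_close(void) { atomic_fetch_sub(&my_file_opened, 1); }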
Windows does atomic writes, as long as they are aligned and a multiple
of the sector size; this is documented in MSDN.
Fix the innodb.doublewrite test to always use the doublewrite buffer
(even if atomic writes are autodetected).
On Linux, <fcntl.h> declares open(2) as having a nonnull first argument.
In GCC 8, if a function with a nonnull argument is called, that argument
will be silently assumed to be nonnull along the same code path. Hence,
later nullness checks for this argument can be optimized away.
Similar to MDEV-15587, the fix is to ensure that functions with
nonnull arguments are not being called with NULL.
This bug caused a crash in mysqlbinlog, which was invoking
create_temp_file() with the argument dir=NULL. The affected test was
binlog.binlog_mysqlbinlog_base64. It would display the following message
before crashing:
mysqlbinlog: O_TMPFILE is not supported on (null) (disabling future attempts)
Segmentation fault
On some systems with 10,000+ binlogs, SHOW BINARY LOGS could block
log rotation for more than 10 seconds.
This patch fixes this by first caching all binary log names and then
releasing all mutexes while calculating the sizes of the binary logs.
Other things:
- Ensure that reinit_io_cache() sets end_of_file when moving to a read
cache. This ensures that external changes to the underlying file are
known to the cache.
- get_binlog_list() is made more efficient and show_binlogs() is changed
to call get_binlog_list()
Reviewed by Andrei Elkin
According to close(2), "Retrying the close() after a failure return is
the wrong thing to do".
Even in the EINTR case the descriptor may have been closed. Take the
prudent approach here and risk leaking one file descriptor rather than
closing one that is no longer ours.
If the rlimit.rlim_cur value returned by getrlimit is not the
RLIM_INFINITY magic constant, but a *very* large number, we can allocate
too many open files. Restrict set_max_open_files to return at most
max_file_limit, as passed via its parameter.
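Roughly (a sketch; the real set_max_open_files() also tries to raise the
soft limit first):

    #include <sys/resource.h>

    static unsigned long clamp_open_files(unsigned long max_file_limit)
    {
      struct rlimit rl;
      if (getrlimit(RLIMIT_NOFILE, &rl) == 0 &&
          rl.rlim_cur != RLIM_INFINITY &&
          (unsigned long) rl.rlim_cur < max_file_limit)
        max_file_limit= (unsigned long) rl.rlim_cur;
      return max_file_limit;  /* never more than what was requested */
    }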
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event exceeds its
vanilla byte representation by 4/3 times.
When a binlogged event's size is about 1GB, mysqlbinlog generates
a BINLOG query that can't be sent out due to its size.
It is fixed by fragmenting the BINLOG argument C-string into
(approximate) halves when the base64-encoded event is over 1GB in size.
In such a case mysqlbinlog puts out
SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;
to represent a big BINLOG statement.
For prompt memory release, the BINLOG handler is made to reset the BINLOG
argument user variables in the middle of processing, as if
@binlog_fragment_{0,1} = NULL were assigned.
Notice that 2 fragments are enough, though the client and server may still
need to tweak their @@max_allowed_packet to accommodate the fragment
size (which they would have to do anyway with a greater number of
fragments, should that be desired).
On the lower level the following changes are made:
Log_event::print_base64()
still calls the encoder and stores the encoded data into a cache, but
now *without* doing any formatting. The latter is left for the time
when the cache is copied to an output file (e.g. mysqlbinlog output).
The no-formatting behavior is also reflected by the change in the meaning
of the last argument, which specifies whether to cache the encoded data.
Rows_log_event::print_helper()
is made to invoke a specialized fragmented cache-to-file copying function,
which is
copy_cache_to_file_wrapped()
that takes care of fragmenting and also optionally wraps the encoded
strings (fragments) into SQL stanzas.
my_b_copy_to_file()
is refactored into my_b_copy_all_to_file(). The former function is
generalized to accept a limit argument that constrains the copying, and
it no longer reinitializes the cache into reading mode.
The limit has no effect on a fully read cache.
SIGHUP causes debug info in the error log and a reload of
logs/privileges/tables/etc. The server should only do that when
a user intentionally sends SIGHUP, not when a parent terminal gets
disconnected or something.
In particular, not ignoring a kernel SIGHUP causes FLUSH PRIVILEGES
at some random point during non-systemd Debian upgrades (Debian
restarts mysqld, the debian-start script runs mysql_upgrade in the
background, the postinst script ends and the kernel sends SIGHUP to all
background processes it has started). And during mysql_upgrade the
privilege tables aren't necessarily ready to be reloaded.