* in online ALTER it must include the complete new row,
note that an UPDATE should set all extra columns to their
default values, as if the UPDATE was completely done before the ALTER.
* in rpl WRITE_ROWS_EVENT it must include all extra slave columns,
but not existing columns that are unmarked in m_cols (sequences do that)
* in rpl UPDATE/DELETE events it should follow m_cols_ai
also: default values must be updated for WRITE_ROWS_EVENT and
for UPDATE/DELETE in the online ALTER mode, see above.
Update the result file accordingly.
Extend bitmap_copy() to support arguments of different lengths
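A minimal sketch of what such a length-aware copy can look like (illustrative
only; the type name and the zero-fill policy are assumptions, not the exact
mysys code):

#include <cstring>

typedef unsigned int bitmap_word;   /* stand-in for my_bitmap_map */

void bitmap_copy_sketch(bitmap_word *dst, unsigned dst_words,
                        const bitmap_word *src, unsigned src_words)
{
  unsigned common= dst_words < src_words ? dst_words : src_words;
  memcpy(dst, src, common * sizeof(bitmap_word));  /* copy the overlap */
  if (dst_words > common)                          /* clear any extra tail */
    memset(dst + common, 0, (dst_words - common) * sizeof(bitmap_word));
}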
Revert the old workaround for buggy fdatasync() on Linux ext3. This bug was
fixed in Linux more than 10 years ago, as far back as kernel version 3.0.
Reviewed-by: Marko Mäkelä <marko.makela@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When Docker on Windows mounts a configuration file into a container, it
exposes the file with permission 0777. These world-writable files are
ignored by MariaDB.
Add an access check so that a read-only filesystem or an immutable file
counts as sufficient protection for the file.
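A hedged sketch of the intended check (the helper name and the exact errno
handling are assumptions; as the failing test below shows, EACCES is not yet
treated as protection):

#include <unistd.h>
#include <errno.h>
#include <sys/stat.h>

/* returns 1 if the world-writable file should be ignored */
static int config_file_unsafe(const char *name, const struct stat *st)
{
  if (!(st->st_mode & S_IWOTH))
    return 0;                          /* not world-writable: fine */
  if (access(name, W_OK) == -1 && errno == EROFS)
    return 0;                          /* read-only filesystem: fine */
  return 1;                            /* really writable: ignore the file */
}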
Test:
$ mkdir /tmp/src
$ vi /tmp/src/my.cnf
$ chmod 666 /tmp/src/my.cnf
$ mkdir /tmp/dst
$ sudo mount --bind /tmp/src /tmp/dst -o ro
$ ls -la /tmp/dst
total 4
drwxr-xr-x. 2 dan dan 60 Jun 15 15:12 .
drwxrwxrwt. 25 root root 660 Jun 15 15:13 ..
-rw-rw-rw-. 1 dan dan 10 Jun 15 15:12 my.cnf
$ mount | grep dst
tmpfs on /tmp/dst type tmpfs (ro,seclabel,nr_inodes=1048576,inode64)
strace client/mariadb --defaults-file=/tmp/dst/my.cnf
newfstatat(AT_FDCWD, "/tmp/dst/my.cnf", {st_mode=S_IFREG|0666, st_size=10, ...}, 0) = 0
access("/tmp/dst/my.cnf", W_OK) = -1 EROFS (Read-only file system)
openat(AT_FDCWD, "/tmp/dst/my.cnf", O_RDONLY|O_CLOEXEC) = 3
One test still fails, but this isn't a regression, just not a total fix:
$ chmod u-w /tmp/src/my.cnf
$ ls -la /tmp/src/my.cnf
-r--rw-rw-. 1 dan dan 18 Jun 16 10:22 /tmp/src/my.cnf
$ strace -fe trace=access client/mariadb --defaults-file=/tmp/dst/my.cnf
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
access("/etc/system-fips", F_OK) = -1 ENOENT (No such file or directory)
access("/tmp/dst/my.cnf", W_OK) = -1 EACCES (Permission denied)
Warning: World-writable config file '/tmp/dst/my.cnf' is ignored
Windows test (Docker Desktop ~4.21) which was the important one to fix:
dan@LAPTOP-5B5P7RCK:~$ docker run --rm -v /mnt/c/Users/danie/Desktop/conf:/etc/mysql/conf.d/:ro -e MARIADB_ROOT_PASSWORD=bob quay.io/m
ariadb-foundation/mariadb-devel:10.4-MDEV-27038-ro-mounts-pkgtest ls -la /etc/mysql/conf.d
total 4
drwxrwxrwx 1 root root 512 Jun 15 13:57 .
drwxr-xr-x 4 root root 4096 Jun 15 07:32 ..
-rwxrwxrwx 1 root root 43 Jun 15 13:56 myapp.cnf
root@a59b38b45af1:/# strace -fe trace=access mariadb
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
access("/etc/mysql/conf.d/myapp.cnf", W_OK) = -1 EROFS (Read-only file system)
debug_sync refactoring introduced a statically instantiated object
debug_sync_global of the structure st_debug_sync_globals.
st_debug_sync_globals includes Hash_set<> which allocates memory
in the constructor. sf_malloc() calls _my_thread_var()->dbug_id
which is pthread_getspecific(THR_KEY_mysys), and THR_KEY_mysys is 0
before pthread_key_create(). pthread_getspecific(0) returns a valid
pointer, not EINVAL. And safemalloc dereferences it.
Let's statically initialize THR_KEY_mysys to -1; this makes
pthread_getspecific(THR_KEY_mysys) fail before pthread_key_create()
is called.
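A sketch of the idea (simplified from mysys; treat it as an assumption about
the shape of the fix, not the exact patch):

#include <pthread.h>

static pthread_key_t THR_KEY_mysys= (pthread_key_t) -1; /* invalid until
                                                            pthread_key_create() */

void *my_thread_var_sketch(void)
{
  /* with key -1, pthread_getspecific() fails (returns NULL) instead of
     returning whatever happens to live in TSD slot 0 */
  return pthread_getspecific(THR_KEY_mysys);
}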
followup for 8885225de6
The effect was that ROOT_FLAG_THREAD_SPECIFIC was cleared and
the memory allocated by the memroot would be accounted to the system,
not to the thread.
This exposed a bug in how "show explain for ..." allocated data.
- The thread that provided the explain allocated data in the
"show explain" thread's mem_root, which is marked as THREAD_SPECIFIC.
- Fixed by allocating the explain data in a temporary explain_mem_root
which is not THREAD_SPECIFIC.
Other things:
- Added extra checks when using update_malloc_size()
- Do not call update_malloc_size() for memory not registered with
update_malloc_size(). This avoids some wrong 'memory not freed' reports.
- Added a check of 'thd->killed' to ensure that
main.truncate_notembedded.test still works.
Reported by: Yury Chaikou
rw_trx_hash_t::find() acquires element->mutex, then unpins pins, used for
lf_hash element search. After that the "element" can be deallocated and
reused by some other thread.
If we take a look at the rw_trx_hash_t::insert()->lf_hash_insert()->lf_alloc_new()
calls, we will not find any element->mutex acquisition, as the mutex is not
yet initialized before the element's allocation. rw_trx_hash_t::insert() can
reuse the chunk that was unpinned in rw_trx_hash_t::find().
The scenario is the following:
1. Thread 1 has just executed lf_hash_search() in
rw_trx_hash_t::find(), but has not acquired element->mutex yet.
2. Thread 2 has removed the element from the hash table with a
rw_trx_hash_t::erase() call.
3. Thread 1 acquired element->mutex and unpinned pin 2 with a
lf_hash_search_unpin(pins) call.
4. Some thread purged memory of the element.
5. Thread 3 reused the memory for the element, filled element->id,
element->trx.
6. Thread 1 crashes with failed "DBUG_ASSERT(trx_id == trx->id)"
assertion.
Note that trx_t objects are also reused, see the code around trx_pools
for details.
The fix is to invoke "lf_hash_search_unpin(pins);" after element->trx is
stored in a local variable in rw_trx_hash_t::find().
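In excerpt-style sketch form (simplified; the real function does more work
under element->mutex):

/* rw_trx_hash_t::find(), after lf_hash_search() succeeded */
mutex_enter(&element->mutex);
/* was: lf_hash_search_unpin(pins);  -- too early, the chunk could be reused */
trx_t *trx= element->trx;       /* copy to a local while still pinned */
lf_hash_search_unpin(pins);     /* moved after the copy: now it is safe */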
Reviewed by: Nikita Malyavin, Marko Mäkelä.
Modern software (including text editors, static analysis software,
and web-based code review interfaces) often requires source code files
to be interpretable via a consistent character encoding, with UTF-8 or
ASCII (a strict subset of UTF-8) as the default. Several of the MariaDB
source files contain bytes that are not valid in either the UTF-8 or
ASCII encodings, but instead represent strings encoded in the
ISO-8859-1/Latin-1 or ISO-8859-2/Latin-2 encodings.
These inconsistent encodings may prevent software from correctly
presenting or processing such files. Converting all source files to
valid UTF8 characters will ensure correct handling.
Comments written in Czech were replaced with lightly-corrected
translations from Google Translate. Additionally, comments describing
the proper handling of special characters were changed so that the
comments are now purely UTF8.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Co-authored-by: Andrew Hutchings <andrew@linuxjedi.co.uk>
make the compile-time logic in my_timer_cycles() also #define
MY_TIMER_ROUTINE_CYCLES to indicate which implementation it is using.
Then, make my_timer_init() use MY_TIMER_ROUTINE_CYCLES.
This leaves us with just one set of compile-time #if's which determine
how we read time in #cycles.
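A compressed sketch of the pattern (the constant values and the x86-64 branch
are illustrative, not the exact my_rdtsc.h code):

#if defined(__GNUC__) && defined(__x86_64__)
#define MY_TIMER_ROUTINE_CYCLES 1          /* rdtsc implementation */
static inline unsigned long long my_timer_cycles_sketch(void)
{ return __builtin_ia32_rdtsc(); }
#else
#define MY_TIMER_ROUTINE_CYCLES 0          /* no cycle counter available */
static inline unsigned long long my_timer_cycles_sketch(void)
{ return 0; }
#endif

/* my_timer_init() no longer needs a second #if ladder: */
void my_timer_init_sketch(unsigned *cycles_routine)
{
  *cycles_routine= MY_TIMER_ROUTINE_CYCLES;
}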
Reviewer (and commit message author): Sergei Petrunia <sergey@mariadb.com>
MariaDB server prints the stack information if a crash happens.
It traverses the stack frames in function `print_with_addr_resolve`.
For *EACH* frame, it tries to parse the file name and line number of the
frame using `addr2line`, or prints `backtrace_symbols_fd` if `addr2line`
fails.
1. Logic in `addr_resolve` function uses addr2line to get the file name
and line numbers. It has a timeout of 500ms to wait for the response
from addr2line. However, that's not enough on small instances,
especially if the debug information is in a separate file or
compressed.
Increase the timeout to 5 seconds to support some edge cases, as
experiments showed addr2line may take 2-3 seconds on some frames.
2. While parsing a frame inside of a shared library using `addr2line`,
the file name and line numbers could be `??`, empty or `0` if the
debug info is not loaded.
It's easy to reproduce when glibc-debuginfo is not installed.
Instead of printing a meaningless frame like:
:0(__GI___poll)[0x1505e9197639]
...
??:0(__libc_start_main)[0x7ffff6c8913a]
We want to print the frame information using `backtrace_symbols_fd`,
with the shared library name and a hexadecimal offset.
Stacktrace example on a real instance with this commit:
/lib64/libc.so.6(__poll+0x49)[0x145cbf71a639]
...
/lib64/libc.so.6(__libc_start_main+0xea)[0x7f4d0034d13a]
`addr_resolve` already considered the case of a meaningless combination of
file name and line number returned by `addr2line`, e.g. `??:?`.
However, conditions like `:0` and `??:0` were not handled. So now the
function will fall back to `backtrace_symbols_fd` in the above cases.
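The extra guard can be pictured like this (variable and function names are
assumptions, not the exact stacktrace.c code):

#include <string.h>

/* returns 0 if addr2line output is usable, -1 to fall back */
static int usable_addr2line_output(const char *file_name, int line_no)
{
  if (!file_name[0] ||                  /* empty */
      !strcmp(file_name, "??") ||       /* unknown file */
      line_no == 0)                     /* unknown line */
    return -1;  /* caller falls back to backtrace_symbols_fd() */
  return 0;
}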
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Building of MariaDB server versions 11.0 and 11.1 fails
on macOS 12.x (Monterey). The build failure happens when generating
header files from the error messages contained in the file
errmsg-utf8.txt. This step is performed by the utility comp_err,
which crashes when it is run on macOS Monterey.
comp_err invokes my_init at the very beginning of its start, before
initialization of the thread environment is done. While executing my_init,
the function my_readlink is called. my_readlink is a wrapper around
the system call readlink with extra error handling. In case the
system call readlink returns an error, the following block of code is run
if (my_thread_var)
my_errno= errno;
my_thread_var is a macro that expands to an invocation of
my_pthread_getspecific() against the supplied thread-specific key
THR_KEY_mysys. Unfortunately, the TSD key THR_KEY_mysys is initialized
only after the call of my_init(), so the return value of pthread_getspecific
is platform dependent. On Linux, pthread_getspecific returns NULL
if the key is not a valid TSD key. On macOS, the effect of calling
pthread_getspecific() with a key value not obtained from pthread_key_create()
is undefined. So, on macOS pthread_getspecific() returns some invalid address,
and the errno value is written there. This leads to a crash later when the
library API function pthread_self() is called.
To fix the issue, initialization of the thread environment is moved to the
very beginning of the main() function, so that it runs as the first step
and the environment is fully initialized by the moment my_init()
is invoked.
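The shape of the fix, as a hedged sketch (the mysys function names here are
an assumption about the fix, not the exact comp_err patch):

#include <my_global.h>
#include <my_sys.h>

int main(int argc, char **argv)
{
  /* initialize the thread environment first, so THR_KEY_mysys is a valid
     TSD key by the time my_init() ends up calling my_readlink() */
  my_thread_global_init();
  MY_INIT(argv[0]);
  /* ... generate header files from errmsg-utf8.txt ... */
  my_end(0);
  return 0;
}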
Replace calls to `sprintf` and `strcpy` by the safer options `snprintf`
and `safe_strcpy` in the following directories:
- libmysqld
- mysys
- sql-common
- strings
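The replacement pattern, in sketch form (buffer and variable names are
illustrative):

#include <stdio.h>
#include <string.h>

void build_path_sketch(char *buf, size_t buflen,
                       const char *dir, const char *name)
{
  /* before: sprintf(buf, "%s/%s", dir, name);  -- overflow-prone */
  snprintf(buf, buflen, "%s/%s", dir, name);    /* bounded */
  /* similarly, strcpy(dst, src) becomes safe_strcpy(dst, sizeof(dst), src) */
}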
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
- Remove DBUG calls from my_winfile.c where call and parameters
are already printed by mysys.
- Remove DBUG from my_get_osfhandle() and my_get_open_flags() to remove
DBUG noise.
- Updated convert-debug-for-diff to take Windows into account.
- Changed some DBUG_RETURN(function()) to tmp=function(); DBUG_RETURN(tmp);
This is needed because, for DBUG binaries, Visual C++ prints a trace for
func_a()
{
DBUG_ENTER("func_a");
DBUG_RETURN(func_b())
}
as
>func_a
<func_a
>func_b
<func_b
instead of when using gcc:
>func_a
| >func_b
| <func_b
<func_a
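The affected places were rewritten along these lines (a sketch of the
pattern, not a specific changed function):

#include <my_global.h>
#include <my_dbug.h>

static int func_b(void) { return 0; }

static int func_a(void)
{
  DBUG_ENTER("func_a");
  int tmp= func_b();   /* evaluate before the macro, so Visual C++ traces
                          func_b while still inside func_a */
  DBUG_RETURN(tmp);
}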
This patch also fixes some bugs detected by valgrind after this
patch:
- Not enough copy_func elements were allocated by Create_tmp_table(), which
causes a memory overwrite in Create_tmp_table::add_fields().
I added an ASSERT() to be able to detect this also without valgrind.
The bug was that TMP_TABLE_PARAM::copy_fields was not correctly set
when calling create_tmp_table().
- Aria::empty_bits is not allocated if there are no varchar/char/blob
fields in the table. Fixed the code to take this into account.
This cannot cause any issues as this is just a memory access
into other Aria memory and the content of the memory would not be used.
- Aria::last_key_buff was not allocated big enough. This may have caused
issues with rtrees and ma_extra(HA_EXTRA_REMEMBER_POS) as they
would use the same memory area.
- Aria and MyISAM didn't take extended key parts into account, which
caused problems when copying rec_per_key from engine to sql level.
- Mark asan builds with 'asan' in the version string to detect these in
not_valgrind_build.inc.
This is needed to not have main.sp-no-valgrind fail with asan.
String length growth during upper/lower conversion
in Unicode collations depends only on the underlying MY_UNICASE_INFO
used in the collation.
Maintaining separate members CHARSET_INFO::caseup_multiply and
CHARSET_INFO::casedn_multiply duplicated this information
and caused bugs like this one (when MY_UNICASE_INFO and case??_multiply
went out of sync because of incomplete CHARSET_INFO initialization).
Fix:
Changing CHARSET_INFO::caseup_multiply and CHARSET_INFO::casedn_multiply
from members to virtual functions.
The virtual functions in Unicode collations calculate case conversion
growth factors from the MY_UNICASE_INFO. This guarantees that the growth
factors are always in sync with the MY_UNICASE_INFO.
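A minimal sketch of the idea (the struct layout and member names here are
illustrative, not MariaDB's actual CHARSET_INFO and MY_UNICASE_INFO):

struct unicase_info_sketch
{
  unsigned max_case_expansion;   /* hypothetical member */
};

struct charset_info_sketch
{
  const unicase_info_sketch *caseinfo;
  /* derived from caseinfo instead of being stored as separate members */
  virtual unsigned caseup_multiply() const
  { return caseinfo->max_case_expansion; }
  virtual unsigned casedn_multiply() const
  { return caseinfo->max_case_expansion; }
  virtual ~charset_info_sketch() {}
};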
avoid contaminating my_getopt with sysvar implementation details.
adjust variable values after my_getopt, like it's done for others.
this fixes --help to show correct values.
don't include my_progname in the error message, my_error prepends it
automatically, resulting in output like
/usr/bin/mysqladmin: Notice: /usr/bin/mysqladmin is deprecated and will be removed in a future release, use command 'mariadb-admin'
and remove "Notice" so that the problem description would directly
follow the executable name.
make the check work when the executable is in the PATH
(so, invoked simply like 'mysql', and thus readlink cannot find it)
fix the check in mysql_install_db and mysql_secure_installation to not
print the warning if the intermediate path contains "mysql" substring
add this message also to
* mysql_waitpid
* mysql_convert_table_format
* mysql_find_rows
* mysql_setpermissions
* mysqlaccess
* mysqld_multi
* mysqld_safe
* mysqldumpslow
* mysqlhotcopy
* mysql_ldb
Closes #2273
Eventually mysql symlinks will go away, as MariaDB and MySQL keep
diverging and we do not want to make it impossible to install
MariaDB and MySQL side-by-side when users want it.
It is also useful if people start using MariaDB tools with MariaDB.
If the exe doesn't begin with "mariadb" or is a symlink,
print a warning to use the resolved name.
In my_readlink, add a check on my_thread_var, as it's used by comp_err
and other build utils that also use my_init.
__gcov_flush was never an external symbol in the documentation.
It was removed in gcc-11. The correct function to use is __gcov_dump,
which is defined in the gcov.h header.
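The replacement is a one-liner; a sketch, guarded for coverage builds
(the HAVE_gcov guard is how such code is typically conditionalized here):

#ifdef HAVE_gcov
#include <gcov.h>

void flush_coverage_sketch(void)
{
  __gcov_dump();   /* replaces the removed __gcov_flush() */
}
#endif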
This includes:
- cleanup and optimization of filtering and pushdown engine code.
- Adjusted costs for rowid filters (based on extensive testing
and profiling).
This made two small changes to the handler_rowid_filter_is_active()
API:
- One should not call it with a zero pointer!
- One does not need to call handler_rowid_filter_is_active() for every
row anymore. It is enough to check whether the filter is active by
calling it during index_init() or when handler::rowid_filter_changed()
is called.
The changes were made to avoid unnecessary function calls and checks when
pushdown conditions and rowid filters are not used.
Updated costs for rowid_filter_lookup() to be closer to reality.
The old cost was based only on rowid_compare_cost. This is now
changed to take into account the overhead in checking the rowid.
Changed the Range_rowid_filter class to use DYNAMIC_ARRAY directly
instead of Dynamic_array<>. This was done to be able to use the new
append_dynamic() functions, which give a notable speed improvement
compared to the old code. Removing the abstraction also makes
the code easier to understand.
The cost of filtering is now slightly lower than before, which
is reflected in some test cases that now use rowid filters.
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
(9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
This makes it easy to apply different cost modifiers in ha_..time()
functions for io and cpu costs.
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
of keys with a given number of ranges and an optional number of blocks
that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to compile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
using 'records_out' (min rows seen for table) to calculate filtering.
This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that needs to
be fixed. These fixes are in the following commits. To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.
InnoDB changes:
- Updated InnoDB to not divide big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
prevents the optimizer from trying to use index-only scans on
the clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
ha_innobase::rnd_pos_time(), as the default engine cost functions now
work well for InnoDB.
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
the value that the user has given. This is used to change costs from
microseconds (user input) to milliseconds (what the server is
internally using).
- Added include/my_tracker.h ; Useful include file to quickly test
costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
shown in microseconds for the user but stored as milliseconds.
This is to make the numbers easier to read for the user (fewer
leading zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate usage space of keys in b-trees. (Before we used numeric
constants).
- Removed code that assumed that b-trees have costs similar to binary
trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
based on the new fields from best_access_path()
Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving
setting thd->...last_query_cost in JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
handlerton is created, we also create a new cost variable for the
handlerton. We also create a new variable if the user changes an
optimizer cost for a not yet loaded handlerton, either with command
line arguments or with SET
@@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
default_optimizer_costs The default costs + changes from the
command line without an engine specifier.
heap_optimizer_costs Heap table costs, used for temporary tables
tmp_table_optimizer_costs The cost for the default on disk internal
temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
accesses the handler has a pointer to this. The cost is copied
to the table on first access. If one wants to change the cost one
must first update the global engine cost and then do a FLUSH TABLES.
This was done to be able to access the costs for an open table
without any locks.
- When a handlerton is created, the costs are updated the following way
(see sql/keycaches.cc for details):
- Use 'default_optimizer_costs' as a base
- Call hton->update_optimizer_costs() to override with the engine's
default costs.
- Override the costs that the user has specified for the engine.
- On handler open, copy the engine costs from handlerton to TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
costs for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
structure. (Idea/Suggestion by Igor)
The MDEV-25004 test innodb_fts.versioning is omitted because ever since
commit 685d958e38 InnoDB would not allow
writes to a database where the redo log file ib_logfile0 is missing.
Since 7c58e97 the PSI_memory_key was added to some routines in the
mysys/. This commit fixes synopses of functions that were updated with
the PSI_memory_key parameter.
When trying to output stacktrace, and addr2line is not installed, the
child process forked by start_addr2line_fork() will fail to do exec(),
and finish with exit(1).
There is a problem with exit() though - it runs exit handlers,
and for the forked copy of the crashing process, that is a bad idea.
In 10.5+ code for example, exit handlers include
tpool::task_group static destructors, and it will hang infinitely
waiting for completion of the outstanding tasks.
The fix is to use _exit() instead, which skips the execution of exit
handlers.
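In sketch form, the child branch becomes:

#include <unistd.h>

static void child_after_failed_exec(void)
{
  /* not exit(1): that would run atexit()/static destructors, which must
     not execute in the forked copy of the crashing process */
  _exit(1);
}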
Fix build failure in comp_err, if git is configured with default,
platform-specific EOL.
The error happens because comp_err is not prepared to handle extraneous
CR characters in errmsg-utf8.txt. Use fopen in text mode to fix this.
To prevent ASAN heap-use-after-poison in the MDEV-16549 part of
./mtr --repeat=6 main.derived
the initialization of Name_resolution_context was cleaned up.
on Linux this pthread_attr_setstacksize() fails with EINVAL
"The stack size is less than PTHREAD_STACK_MIN (16384) bytes".
But on FreeBSD it succeeds and causes a crash later, as 8196 is too little.
Let's keep the stack at its default size in the timer thread.
In commit 28325b0863
a compile-time option was introduced to disable the macros
DBUG_ENTER and DBUG_RETURN or DBUG_VOID_RETURN.
The parameter name WITH_DBUG_TRACE would hint that it also
covers DBUG_PRINT statements. Let us do that: WITH_DBUG_TRACE=OFF
shall disable DBUG_PRINT() as well.
A few InnoDB recovery tests used to check that some output from
DBUG_PRINT("ib_log", ...) is present. We can live without those checks.
Reviewed by: Vladislav Vaintroub
- Added one neutral and 22 tailored (language specific) collations based on
Unicode Collation Algorithm version 14.0.0.
Collations were added for Unicode character sets
utf8mb3, utf8mb4, ucs2, utf16, utf32.
Every tailoring was added with four accent and case
sensitivity flag combinations, e.g.:
* utf8mb4_uca1400_swedish_as_cs
* utf8mb4_uca1400_swedish_as_ci
* utf8mb4_uca1400_swedish_ai_cs
* utf8mb4_uca1400_swedish_ai_ci
and their _nopad_ variants:
* utf8mb4_uca1400_swedish_nopad_as_cs
* utf8mb4_uca1400_swedish_nopad_as_ci
* utf8mb4_uca1400_swedish_nopad_ai_cs
* utf8mb4_uca1400_swedish_nopad_ai_ci
- Introducing a conception of contextually typed named collations:
CREATE DATABASE db1 CHARACTER SET utf8mb4;
CREATE TABLE db1.t1 (a CHAR(10) COLLATE uca1400_as_ci);
The idea is that there is no need to specify the character set prefix
in the new collation names. It's enough to type just the suffix
"uca1400_as_ci". The character set is taken from the context.
In the above example script the context character set is utf8mb4.
So the CREATE TABLE will make a column with the collation
utf8mb4_uca1400_as_ci.
Short collations names can be used in any parts of the SQL syntax
where the COLLATE clause is understood.
- New collations are displayed only one time
(without character set combinations) by these statements:
SELECT * FROM INFORMATION_SCHEMA.COLLATIONS;
SHOW COLLATION;
For example, all these collations:
- utf8mb3_uca1400_swedish_as_ci
- utf8mb4_uca1400_swedish_as_ci
- ucs2_uca1400_swedish_as_ci
- utf16_uca1400_swedish_as_ci
- utf32_uca1400_swedish_as_ci
have just one entry in INFORMATION_SCHEMA.COLLATIONS and SHOW COLLATION,
with COLLATION_NAME equal to "uca1400_swedish_as_ci", which is the suffix
without the character set name:
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+
| COLLATION_NAME |
+-----------------------+
| uca1400_swedish_as_ci |
+-----------------------+
Note, the behaviour of old collations did not change.
Non-unicode collations (e.g. latin1_swedish_ci) and
old UCA-4.0.0 collations (e.g. utf8mb4_unicode_ci)
are still displayed with the character set prefix, as before.
- The structure of the table INFORMATION_SCHEMA.COLLATIONS was changed.
The NOT NULL constraint was removed from these columns:
- CHARACTER_SET_NAME
- ID
- IS_DEFAULT
and from the corresponding columns in SHOW COLLATION.
For example:
SELECT COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+--------------------+------+------------+
| COLLATION_NAME | CHARACTER_SET_NAME | ID | IS_DEFAULT |
+-----------------------+--------------------+------+------------+
| uca1400_swedish_as_ci | NULL | NULL | NULL |
+-----------------------+--------------------+------+------------+
The NULL value in these columns now means that the collation
is applicable to multiple character sets.
The behaviour of old collations did not change.
Make sure your client programs can handle NULL values in these columns.
- The structure of the table
INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY was changed.
Three new NOT NULL columns were added:
- FULL_COLLATION_NAME
- ID
- IS_DEFAULT
New collations have multiple entries in COLLATION_CHARACTER_SET_APPLICABILITY.
The column COLLATION_NAME contains the collation name without the character
set prefix. The column FULL_COLLATION_NAME contains the collation name with
the character set prefix.
Old collations have full collation name in both FULL_COLLATION_NAME and
COLLATION_NAME.
SELECT COLLATION_NAME, FULL_COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY
WHERE FULL_COLLATION_NAME RLIKE '^(utf8mb4|latin1).*swedish.*ci$';
+-----------------------------+-------------------------------------+--------------------+------+------------+
| COLLATION_NAME | FULL_COLLATION_NAME | CHARACTER_SET_NAME | ID | IS_DEFAULT |
+-----------------------------+-------------------------------------+--------------------+------+------------+
| latin1_swedish_ci | latin1_swedish_ci | latin1 | 8 | Yes |
| latin1_swedish_nopad_ci | latin1_swedish_nopad_ci | latin1 | 1032 | |
| utf8mb4_swedish_ci | utf8mb4_swedish_ci | utf8mb4 | 232 | |
| uca1400_swedish_ai_ci | utf8mb4_uca1400_swedish_ai_ci | utf8mb4 | 2368 | |
| uca1400_swedish_as_ci | utf8mb4_uca1400_swedish_as_ci | utf8mb4 | 2370 | |
| uca1400_swedish_nopad_ai_ci | utf8mb4_uca1400_swedish_nopad_ai_ci | utf8mb4 | 2372 | |
| uca1400_swedish_nopad_as_ci | utf8mb4_uca1400_swedish_nopad_as_ci | utf8mb4 | 2374 | |
+-----------------------------+-------------------------------------+--------------------+------+------------+
- Other INFORMATION_SCHEMA queries:
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLUMNS;
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.PARAMETERS;
SELECT TABLE_COLLATION FROM INFORMATION_SCHEMA.TABLES;
SELECT DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.ROUTINES;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.EVENTS;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.EVENTS;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.ROUTINES;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.ROUTINES;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.VIEWS;
display full collation names, including character sets prefix,
for all collations, including new collations.
Corresponding SHOW commands also display full collation names
in collation related columns:
SHOW CREATE TABLE t1;
SHOW CREATE DATABASE db1;
SHOW TABLE STATUS;
SHOW CREATE FUNCTION f1;
SHOW CREATE PROCEDURE p1;
SHOW CREATE EVENT ev1;
SHOW CREATE TRIGGER tr1;
SHOW CREATE VIEW;
These INFORMATION_SCHEMA queries and SHOW statements may change in
the future, to display short collation names.
This avoids LF->CRLF conversion by the C runtime, which historically has
been rather buggy (see MDEV-9409)
Disabling text mode also fixes --binary-mode in the command line client
to work the same on Windows as it does elsewhere.
The user-visible effect is that some text files, e.g. the output of mysqldump
or mysqlbinlog, will not have CRLF end-of-lines, but LF. That should be
acceptable, as even Notepad can read Unix EOLs since 2018
(on older Windows, Wordpad can).
Leave the error log in text (CRLF) mode for now, for the sake of old Windows.
Take into account that, in preparation of a simple key cache for resizing,
no disk blocks might be assigned to it.
Reviewer: IgorBabaev <igor@mariadb.com>
Table_cache_instance: Define the structure aligned at
the CPU cache line, and remove a pad[] data member.
Krunal Bauskar reported this to improve performance on ARMv8.
aligned_malloc(): Wrapper for the Microsoft _aligned_malloc()
and the ISO/IEC 9899:2011 <stdlib.h> aligned_alloc(); a sketch of both
wrappers follows this list.
Note: The parameters are in the Microsoft order (size, alignment),
opposite of aligned_alloc(alignment, size).
Note: The standard defines that size must be an integer multiple
of alignment. It is enforced by AddressSanitizer but not by GNU libc
on Linux.
aligned_free(): Wrapper for the Microsoft _aligned_free() and
the standard free().
HAVE_ALIGNED_ALLOC: A new test. Unfortunately, support for
aligned_alloc() may still be missing on some platforms.
We will fall back to posix_memalign() for those cases.
HAVE_MEMALIGN: Remove, along with any use of the nonstandard memalign().
PFS_ALIGNEMENT (sic): Removed; we will use CPU_LEVEL1_DCACHE_LINESIZE.
PFS_ALIGNED: Defined using the C++11 keyword alignas.
buf_pool_t::page_hash_table::create(),
lock_sys_t::hash_table::create():
lock_sys_t::hash_table::resize(): Pad the allocation size to an
integer multiple of the alignment.
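The two wrappers sketched (simplified; the real code also covers the
posix_memalign() fallback mentioned above):

#include <stdlib.h>

static inline void *aligned_malloc(size_t size, size_t alignment)
{
#ifdef _WIN32
  return _aligned_malloc(size, alignment);  /* Microsoft argument order */
#else
  /* ISO C11; size should be an integer multiple of alignment */
  return aligned_alloc(alignment, size);
#endif
}

static inline void aligned_free(void *ptr)
{
#ifdef _WIN32
  _aligned_free(ptr);
#else
  free(ptr);
#endif
}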
Reviewed by: Vladislav Vaintroub
Also fix a failing spider partition test.
With some small datatype changes to the Linux/Solaris my_gethwaddr
implementation, the hardware address of AIX can be returned. This is an
important aspect in Spider (and UUID).
Spider test change reviewed by Nayuta Yanagisawa.
my_gethwaddr review by Monty in #2081
The purpose of the compress() wrapper my_compress_buffer() was twofold:
silence Valgrind warnings about uninitialized memory access before
zlib 1.2.4, and have PERFORMANCE_SCHEMA instrumentation of some zlib
related memory allocation. Because of PERFORMANCE_SCHEMA, we cannot
trivially replace my_compress_buffer() with compress().
az_open(): Remove a crc32() call. Any CRC of the empty string is 0.
On affected machine, the error happens sporadically in
innodb.instant_alter_limit.
Procmon shows SetRenameInformationFile failing with ERROR_ACCESS_DENIED.
In this case, the destination file was previously opened rsp oplocked by
Windows defender antivirus.
The fix is to retry MoveFileEx on ERROR_ACCESS_DENIED.
For some reason, the tests of the MemorySanitizer build on 10.5 failed
with both clang 13 and clang 14 with SIGSEGV. On 10.6 where it worked
better, some more places to work around were identified.
The MemorySanitizer implementation in clang includes some built-in
instrumentation (interceptors) for GNU libc. In GNU libc 2.33, the
interface to the stat() family of functions was changed. Until the
MemorySanitizer interceptors are adjusted, any MSAN code builds
will act as if the stat() family of functions failed to initialize
the struct stat.
A fix was applied in
https://reviews.llvm.org/rG4e1a6c07052b466a2a1cd0c3ff150e4e89a6d87a
but it fails to cover the 64-bit variants of the calls.
For now, let us work around the MemorySanitizer bug by defining
and using the macro MSAN_STAT_WORKAROUND().
When a server is compiled with -fPIE, my_addr_resolve needs to
subtract the info.dli_fbase from symbol addresses in memory for
addr2line to recognize them. When a server is compiled without -fPIE,
my_addr_resolve should not do it. Unfortunately not all compilers
define __PIE__ when -fPIE was used (e.g. older gcc doesn't), so we
have to resort to run-time detection.
We used to define a native unary function CRC32() that computes the CRC-32
of a string using the ISO 3309 polynomial that is being used by zlib
and many others.
Often, a CRC is computed in pieces. To facilitate this, we introduce a
2-ary variant of the function that inputs a previous CRC as the first
argument: CRC32('MariaDB')=CRC32(CRC32('Maria'),'DB').
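The piecewise computation mirrors what zlib does in C; a small illustration:

#include <zlib.h>

uLong crc_of_mariadb_sketch(void)
{
  uLong c= crc32(0L, Z_NULL, 0);              /* initial CRC */
  c= crc32(c, (const Bytef*) "Maria", 5);     /* first piece  */
  c= crc32(c, (const Bytef*) "DB", 2);        /* second piece */
  return c;                                   /* == CRC32('MariaDB') */
}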
InnoDB and MyRocks use a different polynomial, which was implemented
in SSE4.2 instructions that were introduced in the
Intel Nehalem microarchitecture. This is commonly called CRC-32C
(Castagnoli).
We introduce a native function that uses the Castagnoli polynomial:
CRC32C('MariaDB')=CRC32C(CRC32C('Maria'),'DB'). This allows
SELECT...INTO DUMPFILE to be used for the creation of files with
valid checksums, such as a logically empty InnoDB redo log file
ib_logfile0 corresponding to a particular log sequence number.
Two Problems
1. The upgrade wizard failed to retrieve the path to the service executable
if it contained non-ASCII.
Fixed by setlocale(LC_ALL, "en_US.UTF8"), which was missing in the upgrade
wizard.
2. mysql_upgrade_service only updated (converted to UTF8) the server's
sections, leaving the client's as-is.
Corrected a typo.
3. Fixed an assertion in my_getopt; it turned out to be too strict.
The previous default innodb_buffer_pool_chunk_size of 128M
made sense when the innodb buffer pool size was a few GB.
When the pool size is 128GB this means the chunk size is 0.1%
of this. Fine tuning the buffer pool size on such a fine
increment doesn't make practical sense. Also, on extremely
large buffer pool systems, initializing on the default 128M can
take a considerable amount of time.
When large pages are enabled, the chunk size has to be a multiple
of an available large page size, or memory can be allocated without
being used.
Previously the default 0 was documented as disabling resizing.
With the srv_buf_pool_chunk_unit > 0 assertions in the code and the
minimum value set, I doubt this was ever the case.
As such the autosizing (based on the default 0) takes place as follows:
* a 64th of the innodb_buffer_pool_size
* if large pages, this is rounded down to the nearest multiple
of the large page size.
* If less than 1MB, set to 1MB.
This does mean the new default innodb_buffer_pool_chunk_size is
2MB, derived from the above formula with 128MB as the buffer pool
size.
The innodb_buffer_pool_chunk_size is changed to a size_t for
better compatibility with the memory allocations, which use size_t. The
previous upper limit is changed to the maximum of a size_t. The
maximum value used is the buffer pool size anyway.
Getting this default value of the chunk size to a more practical
size facilitates further development of more automated resizing
without significant overhead or memory fragmentation.
innodb_buffer_pool_resize test adjusted based on the 1M default
chunk size, thanks Wlad.
Adapted from https://github.com/google/benchmark/pull/833
authored by Sam Elliot at lowRISC.
This requires the RISC-V kernel to set the CY bit of the mcounteren
register, which is done on Linux, but documenting here in case another OS
hits a SIGILL here.
When the CY bit of the mcounteren register is unset, reading the cycle
register will cause an illegal instruction exception in the next privilege
level (user mode or supervisor mode). See the privileged ISA manual section
3.1.11 in
https://github.com/riscv/riscv-isa-manual/releases/latest
A small postfix to MDEV-23175 to ensure the faster option is used on
FreeBSD, and compatibility with Solaris, which isn't high resolution.
ftime is left as a backup in case an implementation doesn't
contain any of these clocks.
FreeBSD
$ ./unittest/mysys/my_rdtsc-t
1..11
# ----- Routine ---------------
# myt.cycles.routine : 5
# myt.nanoseconds.routine : 11
# myt.microseconds.routine : 13
# myt.milliseconds.routine : 11
# myt.ticks.routine : 17
# ----- Frequency -------------
# myt.cycles.frequency : 3610295566
# myt.nanoseconds.frequency : 1000000000
# myt.microseconds.frequency : 1000000
# myt.milliseconds.frequency : 899
# myt.ticks.frequency : 136
# ----- Resolution ------------
# myt.cycles.resolution : 1
# myt.nanoseconds.resolution : 1
# myt.microseconds.resolution : 1
# myt.milliseconds.resolution : 7
# myt.ticks.resolution : 1
# ----- Overhead --------------
# myt.cycles.overhead : 26
# myt.nanoseconds.overhead : 19140
# myt.microseconds.overhead : 19036
# myt.milliseconds.overhead : 578
# myt.ticks.overhead : 21544
ok 1 - my_timer_init() did not crash
ok 2 - The cycle timer is strictly increasing
ok 3 - The cycle timer is implemented
ok 4 - The nanosecond timer is increasing
ok 5 - The nanosecond timer is implemented
ok 6 - The microsecond timer is increasing
ok 7 - The microsecond timer is implemented
ok 8 - The millisecond timer is increasing
ok 9 - The millisecond timer is implemented
ok 10 - The tick timer is increasing
ok 11 - The tick timer is implemented
If someone for whatever reason uses --default-character-set=cp850,
this will avoid incorrect display and inserting incorrect data.
Adjusting the console codepage sometimes also needs to happen with
--default-charset=auto, on older Windows. This is because autodetection
is not always exact. For example, the console codepage on US editions of
Windows is 437. The client autodetects it as cp850, a rather loose
approximation, given 46 code point differences. We change the console
codepage to cp850, so that there is no discrepancy.
That fix is currently Windows-only, and serves people who used a combination
of chcp to achieve a WYSIWYG effect (although this would most likely have
been used with utf8 in the past).
Now, --default-character-set would be a replacement for that.
Fix fs_character_set() detection of current codepage.
- Use corresponding entry in the manifest, as described in
https://docs.microsoft.com/en-us/windows/apps/design/globalizing/use-utf8-code-page
- If the ANSI codepage is UTF8 (i.e. for Windows 1903 and later)
Use UTF8 as the default client charset
Set console codepage(s) to UTF8, in case the process is using a console
- Re-enable some previously disabled MTR tests that used Unicode in "exec",
for recent Windows versions
Corresponding Windows bug: https://github.com/microsoft/terminal/issues/4551
Use ReadConsoleW instead and convert to the console's input codepage, to
work around it.
Also, disable VT sequences in the console output; as we do not know what
type of data comes with SELECT, we do not want VT escapes there.
Remove my_cgets()
Prior to the patch, get_password would echo multiple mask characters '*' for
a single multibyte input character.
Fixed the behavior by using the "wide" version of getch, _getwch.
Also take care of possible characters outside of the BMP (i.e. we do not print
'*' for high surrogates).
The function will now internally construct the "wide" password string,
and convert it to the console codepage. Some characters could still be lost
in that conversion, unless the codepage is utf8, but this is not a new
bug.
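A hedged Win32 sketch of the UTF8 console setup described above (simplified;
the real client also deals with redirected input/output):

#ifdef _WIN32
#include <windows.h>

static void set_console_utf8_sketch(void)
{
  if (GetACP() == CP_UTF8)         /* ANSI codepage is UTF8: Windows 1903+ */
  {
    SetConsoleCP(CP_UTF8);         /* input codepage  */
    SetConsoleOutputCP(CP_UTF8);   /* output codepage */
  }
}
#endif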
MariaDB server crashes on ARM (weak memory model architecture) while
concurrently executing l_find to load node->key and add_to_purgatory
to store node->key = NULL. l_find then uses key (which is NULL), to
pass it to a comparison function.
The specific problem is the out-of-order execution that happens on a
weak memory model architecture. Two essential reorderings are possible,
which need to be prevented.
a) As l_find has no barriers in place between the optimistic read of
the key field lf_hash.cc#L117 and the verification of link lf_hash.cc#L124,
the processor can reorder the load to happen after the while-loop.
In that case, a concurrent thread executing add_to_purgatory on the same
node can be scheduled to store NULL at the key field lf_alloc-pin.c#L253
before key is loaded in l_find.
b) A node is marked as deleted by a CAS in l_delete lf_hash.cc#L247 and
taken off the list with a subsequent CAS lf_hash.cc#L252. Only if both
CASes succeed is the key field written to by add_to_purgatory. However,
due to a missing barrier, the relaxed store of key lf_alloc-pin.c#L253
can be moved ahead of the two CAS operations, which makes the value of
the local purgatory list stored by add_to_purgatory visible to all threads
operating on the list. As the node is not marked as deleted yet, the
same error occurs in l_find.
This change makes three accesses atomic (see the sketch after this list):
* optimistic read of key in l_find lf_hash.cc#L117
* read of link for verification lf_hash.cc#L124
* write of key in add_to_purgatory lf_alloc-pin.c#L253
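A self-contained sketch of the idea, with C++11 atomics standing in for the
primitives actually used (type and function names are illustrative):

#include <atomic>

struct node_sketch
{
  std::atomic<const char*> key;
  std::atomic<node_sketch*> link;
};

const char *l_find_read_sketch(node_sketch *n, node_sketch **next)
{
  /* both racy reads become acquire loads, ordered before later use */
  const char *k= n->key.load(std::memory_order_acquire);
  *next= n->link.load(std::memory_order_acquire);
  return k;
}

void add_to_purgatory_sketch(node_sketch *n)
{
  /* release store: cannot be moved ahead of the preceding CASes */
  n->key.store(nullptr, std::memory_order_release);
}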
Reviewers: Sergei Vojtovich, Sergei Golubchik
Fixes: MDEV-23510 / d30c1331a18d875e553f3fcf544997e4f33fb943
The DYNAMIC_ARRAY copies values in and copies values out. Without a
comparator function, get_index_dynamic() does not make sense.
This function is not used. If we have a need for a function like it
in the future, I propose we write a new one with unit tests showing
how it is used and demonstrating that it behaves as expected.
Dead code cleanup:
part_info->num_parts usage was wrong and worked incorrectly in
mysql_drop_partitions() because num_parts is already updated in
prep_alter_part_table(). We don't have to update part_info->partitions
because part_info is destroyed at alter_partition_lock_handling().
Cleanups:
- DBUG_EVALUATE_IF() macro replaced by shorter form DBUG_IF();
- Typo in ER_KEY_COLUMN_DOES_NOT_EXITS.
Refactorings:
- Split write_log_replace_delete_frm() into write_log_delete_frm()
and write_log_replace_frm();
- partition_info via DDL_LOG_STATE;
- set_part_info_exec_log_entry() removed.
DBUG_EVALUATE removed
DBUG_EVALUATE was only added for consistency together with
DBUG_EVALUATE_IF. It is not used anywhere in the code.
DBUG_SUICIDE() fix on release build
In release builds DBUG_SUICIDE() was a statement. That was wrong, as
DBUG_SUICIDE() is used in expression context.
Some architectures (mips) require libatomic to support proper
atomic operations. Check first if support is available without
linking, otherwise use the library.
Contributors:
James Cowgill <jcowgill@debian.org>
Jessica Clarke <jrtc27@debian.org>
Vicențiu Ciorbaru <vicentiu@mariadb.org>
https://jira.mariadb.org/browse/MDEV-26221
my_sys DYNAMIC_ARRAY and DYNAMIC_STRING inconsistency
The DYNAMIC_STRING uses size_t for sizes, but DYNAMIC_ARRAY used uint.
This patch adjusts DYNAMIC_ARRAY to use size_t like DYNAMIC_STRING.
As the MY_DIR member number_of_files is copied from a DYNAMIC_ARRAY,
this is changed to be size_t.
As MY_TMPDIR members 'cur' and 'max' are copied from a DYNAMIC_ARRAY,
these are also changed to be size_t.
The lists of plugins and stored procedures use DYNAMIC_ARRAY,
but their APIs assume a size of 'uint'; these are unchanged.
Create minidump when server fails to shutdown. If process is being
debugged, cause a debug break.
Moves some code which is part of safe_kill into mysys, as both safe_kill
and mysqltest produce minidumps on different timeouts.
Small cleanup in wait_until_dead() - replace inefficient loop with a single
wait.
Thanks to Fabian Vogt for noticing the mutual exclusion
of these open flags on tmpfs, caused by mariadb opening the file
incorrectly.
As such we clear the O_CREAT flag while opening it as O_TMPFILE.
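A sketch of the corrected open (Linux-specific, flag handling simplified):

#define _GNU_SOURCE           /* O_TMPFILE is Linux-specific */
#include <fcntl.h>
#include <sys/stat.h>

static int open_tmpfile_sketch(const char *dir, int flags)
{
  flags&= ~O_CREAT;           /* O_CREAT and O_TMPFILE are mutually exclusive */
  return open(dir, flags | O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
}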
This is a side-effect of the my_large_malloc() introduction, MDEV-18851.
It removed a cast to size_t of the variable 'blocks' in the
multiplication blocks * keycache->key_cache_block_size, creating a ulong
value instead of the correct size_t.
Replaced a couple of ulongs with the appropriate data type, which is size_t.
Also, fixed casts to ulong in crash handler messages, so that people would
not be confused by that, too.
Interestingly, Aria did not expose the same problem even though it contains
copied and pasted code in ma_pagecache, because Aria had some ulongs removed
when fixing a similar problem in MDEV-9256.
init_mutex_v1_t: Stop lying that the mutex parameter is const.
GCC 11.2.0 assumes that it is and could complain about any mysql_mutex_t
being uninitialized even after mysql_mutex_init() as long as
PLUGIN_PERFSCHEMA is enabled.
init_rwlock_v1_t, init_cond_v1_t: Remove untruthful const qualifiers.
Note: init_socket_v1_t is expecting that the socket fd has already
been created before PSI_SOCKET_CALL(init_socket), and therefore that
parameter really is being treated as a pointer to const.
In about a hundred users of MY_BITMAP, only two were using its
built-in mutex, and only one of those two actually needed it.
Remove the mutex from MY_BITMAP, and remove all associated conditions
and checks in bitmap functions. Use an external LOCK_temp_pool
mutex and temp_pool_set_next/temp_pool_clear_bit accessors.
Remove bitmap_init/bitmap_free, always use my_* versions.
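The accessors can be pictured like this (a sketch; in the real code
temp_pool and its lock live in the server as file-level statics):

#include <my_global.h>
#include <my_bitmap.h>
#include <my_pthread.h>

static MY_BITMAP temp_pool;
static mysql_mutex_t LOCK_temp_pool;

uint temp_pool_set_next(void)
{
  mysql_mutex_lock(&LOCK_temp_pool);
  uint res= bitmap_set_next(&temp_pool);   /* find and set first free bit */
  mysql_mutex_unlock(&LOCK_temp_pool);
  return res;
}

void temp_pool_clear_bit(uint bit)
{
  mysql_mutex_lock(&LOCK_temp_pool);
  bitmap_clear_bit(&temp_pool, bit);
  mysql_mutex_unlock(&LOCK_temp_pool);
}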