The previous default innodb_buffer_pool_chunk_size of 128M
made sense when the InnoDB buffer pool size was a few GB.
When the pool size is 128GB, the chunk size is only 0.1% of
the pool, and fine-tuning the buffer pool size in such small
increments makes no practical sense. Also, on systems with
extremely large buffer pools, initializing in the default 128M
chunks can take a considerable amount of time.
When large pages are enabled, the chunk size has to be a multiple
of an available large page size, otherwise memory can be allocated
but remain unused.
Previously the default 0 was documented as disabling resizing.
Given the srv_buf_pool_chunk_unit > 0 assertions in the code and
the minimum value that is enforced, I doubt this was ever the case.
As such, the autosizing (based on the default 0) takes place as
follows:
* a 64th of the innodb_buffer_pool_size;
* if large pages are enabled, this is rounded down to the nearest
  multiple of the large page size;
* if less than 1MB, it is set to 1MB.
This means the new default innodb_buffer_pool_chunk_size is 2MB,
derived from the above formula with the 128MB default buffer pool
size (128MB / 64 = 2MB).
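A minimal sketch of the autosizing steps above (illustrative only;
the function and variable names do not match the server code):

  #include <stddef.h>

  static size_t autosize_chunk(size_t buf_pool_size,
                               size_t large_page_size)
  {
    size_t chunk= buf_pool_size / 64;    /* a 64th of the pool */
    if (large_page_size)                 /* round down to a large */
      chunk-= chunk % large_page_size;   /* page size multiple */
    if (chunk < 1024 * 1024)             /* enforce the 1MB floor */
      chunk= 1024 * 1024;
    return chunk;
  }

For the 128MB default pool, autosize_chunk(128 << 20, 0) yields 2MB.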
The innodb_buffer_pool_chunk_size is changed to a size_t for
better compatibility with the memory allocations, which use size_t.
The previous upper limit is changed to the maximum of a size_t; the
maximum value used is the buffer pool size anyway.
Bringing the default chunk size to a more practical value
facilitates further development of more automated resizing
without significant overhead or memory fragmentation.
The innodb_buffer_pool_resize test is adjusted based on the 1M
chunk size; thanks Wlad.
1) Removed symlinks that are not very well supported in tar under Windows.
2) Added comment + changed code formatting in viosslfactories.c
3) Fixed a small bug in the yassl code.
4) Fixed a typo in the script code.
The previously taken locks need to be released too. This is
needed if the initialization of any of the non-first
mutex/condition variables fails.
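One common shape of this cleanup, as a generic sketch (pthread
primitives assumed; this is not the actual patch, and the same
undo-in-reverse pattern applies whether the earlier objects were
locked or initialized):

  #include <pthread.h>
  #include <stddef.h>

  /* Initialize n mutexes; on failure, undo the ones already
     initialized, in reverse order, before returning. */
  static int init_mutexes(pthread_mutex_t *mutexes, size_t n)
  {
    size_t i;
    for (i= 0; i < n; i++)
      if (pthread_mutex_init(&mutexes[i], NULL))
      {
        while (i--)
          pthread_mutex_destroy(&mutexes[i]);
        return 1;
      }
    return 0;
  }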
This is a follow-up to commit 1193a793c4.
We will set innodb_use_native_aio=OFF by default also in mariadb-backup
when running on a potentially affected kernel.
because the plugin code is not only about encryption anymore
(it also loads provider plugins), and the xb_ prefix prevents
name clashes with the server code (which mariabackup links with).
bzip2/lz4/lzma/lzo/snappy compression is now provided via *services*.
They're almost like normal services, but live in include/providers/
and are supposed to provide exactly the same interface as the
original compression libraries (not everything, only enough of it
for the code to compile).
The services are implemented via dummy functions that return the
corresponding error values (LZMA_PROG_ERROR, LZO_E_INTERNAL_ERROR,
etc). The actual compression libraries are linked into the
corresponding provider plugins. Providers are daemon plugins that,
when loaded, replace the service pointers to point to the actual
compression functions.
That is, the run-time dependency on compression libraries is now on
the plugins; the server doesn't need any compression libraries to
run, but will automatically support the compression when a plugin
is loaded.
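A minimal sketch of the pattern, with hypothetical names (the real
service definitions live in include/providers/, and the plugin
declaration boilerplate is omitted):

  /* Service side: a function pointer with the library's signature,
     defaulting to a dummy stub that can only fail. */
  typedef int (*lz4_compress_t)(const char *src, char *dst,
                                int src_size, int dst_capacity);

  static int dummy_lz4_compress(const char *src, char *dst,
                                int src_size, int dst_capacity)
  {
    (void) src; (void) dst; (void) src_size; (void) dst_capacity;
    return 0;                        /* LZ4 reports failure as 0 */
  }

  lz4_compress_t provider_lz4_compress= dummy_lz4_compress;

  /* Provider side: the plugin links the real liblz4 and, in its
     init function, redirects the pointer to the real function. */
  #include <lz4.h>

  static int lz4_provider_init(void *unused)
  {
    (void) unused;
    provider_lz4_compress= LZ4_compress_default;
    return 0;
  }

A consumer then calls provider_lz4_compress() unconditionally;
whether the call can succeed depends only on whether the provider
plugin has been loaded.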
InnoDB and Mroonga use compression plugins now. RocksDB doesn't,
because it comes with standalone utility binaries that cannot
load plugins.
https://jira.mariadb.org/browse/MDEV-26221
my_sys DYNAMIC_ARRAY and DYNAMIC_STRING inconsistency
The DYNAMIC_STRING uses size_t for sizes, but DYNAMIC_ARRAY used uint.
This patch adjusts DYNAMIC_ARRAY to use size_t like DYNAMIC_STRING.
As the MY_DIR member number_of_files is copied from a DYNAMIC_ARRAY,
this is changed to be size_t.
As MY_TMPDIR members 'cur' and 'max' are copied from a DYNAMIC_ARRAY,
these are also changed to be size_t.
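The shape of the change, as a sketch (assuming the size-related
member names of DYNAMIC_ARRAY in include/my_sys.h; other members
are elided):

  typedef struct st_dynamic_array
  {
    uchar *buffer;
    size_t elements, max_element;    /* previously uint */
    size_t alloc_increment;          /* previously uint */
    size_t size_of_element;          /* previously uint */
    /* ... */
  } DYNAMIC_ARRAY;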
The lists of plugins and stored procedures use DYNAMIC_ARRAY,
but their APIs assume a size of 'uint'; these are unchanged.