mariadb/mysys/my_largepage.c
Marko Mäkelä b6923420f3 MDEV-29445: Reimplement SET GLOBAL innodb_buffer_pool_size
We deprecate and ignore the parameter innodb_buffer_pool_chunk_size
and let the buffer pool size be changed in arbitrary 1-megabyte
increments.

innodb_buffer_pool_size_max: A new read-only startup parameter
that specifies the maximum innodb_buffer_pool_size.  If 0 or
unspecified, it will default to the specified innodb_buffer_pool_size
rounded up to the allocation unit (2 MiB or 8 MiB).  The maximum value
is 4GiB-2MiB on 32-bit systems and 16EiB-8MiB on 64-bit systems.
This maximum is very likely to be limited further by the operating system.
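As a rough illustration of this rounding (a sketch only; the helper
name is hypothetical, and the server uses its own alignment macros):

  #include <stdint.h>

  /* Hypothetical sketch: round a requested buffer pool size up to the
     allocation unit (8 MiB on 64-bit systems, 2 MiB on 32-bit). */
  static uint64_t round_to_allocation_unit(uint64_t requested)
  {
    const uint64_t unit= sizeof(void *) >= 8
      ? (uint64_t) 8 << 20 : (uint64_t) 2 << 20;
    return (requested + unit - 1) & ~(unit - 1);
  }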

The status variable Innodb_buffer_pool_resize_status will reflect
the status of shrinking the buffer pool. When no shrinking is in
progress, the string will be empty.

Unlike before, the execution of SET GLOBAL innodb_buffer_pool_size
will block until the requested buffer pool size change has been
implemented, or the execution is interrupted by a KILL statement,
a client disconnect, or server shutdown.  If the
buf_flush_page_cleaner() thread notices that we are running out of
memory, the operation may fail with ER_WRONG_USAGE.

SET GLOBAL innodb_buffer_pool_size will be refused
if the server was started with --large-pages (even if
no HugeTLB pages were successfully allocated). This functionality
is somewhat exercised by the test main.large_pages, which now also
runs on Microsoft Windows.  On Linux, explicit HugeTLB mappings are
apparently excluded from the reported Resident Set Size (RSS), and
apparently cannot be shrunk in between mmap(2) and munmap(2).

The buffer pool will be mapped to a contiguous virtual memory area
that will be aligned and partitioned into extents of 8 MiB on
64-bit systems and 2 MiB on 32-bit systems.

Within an extent, the first few innodb_page_size blocks contain
buf_block_t objects that will cover the page frames in the rest
of the extent.  The number of such frames is precomputed in the
array first_page_in_extent[] for each innodb_page_size.
In this way, there is a trivial mapping between
page frames and block descriptors and we do not need any
lookup tables like buf_pool.zip_hash or buf_pool_t::chunk_t::map.
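To illustrate the arithmetic (a sketch only; the descriptor size and
all names here are assumptions, not the actual InnoDB definitions):

  #include <stdint.h>

  #define EXTENT_SIZE (8U << 20)  /* 8 MiB extents on 64-bit systems */
  #define PAGE_SIZE_  (16U << 10) /* innodb_page_size=16k */
  #define DESC_SIZE   512U        /* assumed sizeof(buf_block_t) */

  /* Pages at the start of the extent that hold the descriptors;
     this conservatively reserves one descriptor per frame and
     corresponds to first_page_in_extent[] in the description above.
     With the numbers here: 512 frames * 512 bytes = 16 pages. */
  enum { FIRST_PAGE_IN_EXTENT=
         (EXTENT_SIZE / PAGE_SIZE_ * DESC_SIZE + PAGE_SIZE_ - 1)
         / PAGE_SIZE_ };

  /* Map a usable page frame (page_no >= FIRST_PAGE_IN_EXTENT) to its
     block descriptor with pure arithmetic; no buf_pool.zip_hash or
     buf_pool_t::chunk_t::map lookup is needed. */
  static char *descriptor_of(char *frame)
  {
    uintptr_t extent= (uintptr_t) frame & ~(uintptr_t) (EXTENT_SIZE - 1);
    unsigned page_no= (unsigned) (((uintptr_t) frame - extent) / PAGE_SIZE_);
    return (char *) extent + (page_no - FIRST_PAGE_IN_EXTENT) * DESC_SIZE;
  }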

We will always allocate the same number of block descriptors for
an extent, even if we do not need all the buf_block_t in the last
extent when the innodb_buffer_pool_size is not an integer multiple
of the extent size.

The minimum innodb_buffer_pool_size is 256*5/4 = 320 pages.  At the
default innodb_page_size=16k this corresponds to 320 * 16 KiB = 5 MiB.
However, now that the innodb_buffer_pool_size includes the memory
allocated for the block descriptors, the minimum becomes
innodb_buffer_pool_size=6m.

my_large_virtual_alloc(): A new function, similar to my_large_malloc().

my_virtual_mem_reserve(), my_virtual_mem_commit(),
my_virtual_mem_decommit(), my_virtual_mem_release():
New interface mostly by Vladislav Vaintroub, to separately
reserve and release virtual address space, as well as to
commit and decommit memory within it.

After my_virtual_mem_decommit(), the virtual memory range will be
read-only or inaccessible, depending on whether the build option
cmake -DHAVE_UNACCESSIBLE_AFTER_MEM_DECOMMIT=1
has been specified.  This option is hard-coded on Microsoft Windows,
where VirtualFree(MEM_DECOMMIT) will make the memory inaccessible.
On IBM AIX, Linux, Illumos and possibly Apple macOS, the virtual memory
will be zeroed out immediately.  On other POSIX-like systems,
madvise(MADV_FREE) will be used if available, to give the operating
system kernel permission to zero out the virtual memory range.
We prefer immediate freeing so that the reported
resident set size (RSS) of the process will reflect the current
innodb_buffer_pool_size.  Shrinking the buffer pool is a rarely
executed, resource-intensive operation, and the immediate
reconfiguration of the MMU mappings should not incur a significant
additional penalty.
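A rough POSIX-only approximation of this interface, as a sketch under
stated assumptions (the real functions also cover Windows and the
build option above; MAP_ANONYMOUS is assumed, although some systems
spell it MAP_ANON):

  #include <stddef.h>
  #include <sys/mman.h>

  /* Reserve address space without committing memory to it. */
  static void *virtual_mem_reserve(size_t size)
  {
    void *p= mmap(NULL, size, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return p == MAP_FAILED ? NULL : p;
  }

  /* Commit a range within the reservation: make it read-write. */
  static int virtual_mem_commit(void *ptr, size_t size)
  {
    return mprotect(ptr, size, PROT_READ | PROT_WRITE);
  }

  /* Give memory back to the kernel; the address range stays reserved. */
  static void virtual_mem_decommit(void *ptr, size_t size)
  {
  #ifdef __linux__
    madvise(ptr, size, MADV_DONTNEED); /* freed and zeroed immediately */
  #elif defined MADV_FREE
    madvise(ptr, size, MADV_FREE);     /* kernel may reclaim lazily */
  #endif
  }

  /* Release the whole reservation. */
  static void virtual_mem_release(void *ptr, size_t size)
  {
    munmap(ptr, size);
  }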

opt_super_large_pages: Declare only on Solaris. Actually, this is
specific to the SPARC implementation of Solaris, but because we
lack access to a Solaris development environment, we will not revise
this for other MMU and ISA.

buf_pool_t::chunk_t::create(): Remove.

buf_pool_t::create(): Initialize all n_blocks of the buf_pool.free list.

buf_pool_t::allocate(): Renamed from buf_LRU_get_free_only().

buf_pool_t::LRU_warned: Changed to Atomic_relaxed<bool>,
only to be modified by the buf_flush_page_cleaner() thread.

buf_pool_t::shrink(): Attempt to shrink the buffer pool.
There are 3 possible outcomes: SHRINK_DONE (success),
SHRINK_IN_PROGRESS (the caller may keep trying),
and SHRINK_ABORT (we seem to be running out of buffer pool).
While traversing buf_pool.LRU, release the contended
buf_pool.mutex once every 32 iterations in order to
reduce starvation.  Use lru_scan_itr for efficient traversal,
similar to buf_LRU_free_from_common_LRU_list(); see the sketch below.
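The periodic release of a contended mutex during a long traversal is a
familiar pattern; a minimal pthreads sketch with hypothetical names:

  #include <pthread.h>

  struct node { struct node *next; };
  extern void process_block(struct node *); /* assumed per-block work */

  /* Yield a contended mutex once every 32 iterations so that waiters
     can make progress. */
  static void traverse(pthread_mutex_t *mutex, struct node *list)
  {
    unsigned i= 0;
    pthread_mutex_lock(mutex);
    for (struct node *n= list; n; n= n->next)
    {
      process_block(n);
      if (!(++i & 31))
      {
        pthread_mutex_unlock(mutex); /* let waiters acquire the mutex */
        pthread_mutex_lock(mutex);
        /* The list may have changed while unlocked; the real code
           resumes safely via lru_scan_itr instead of trusting n->next. */
      }
    }
    pthread_mutex_unlock(mutex);
  }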

buf_pool_t::shrunk(): Update the reduced size of the buffer pool
in a way that is compatible with buf_pool_t::page_guess(),
and invoke my_virtual_mem_decommit().

buf_pool_t::resize(): Before invoking shrink(), run one batch of
buf_flush_page_cleaner() in order to prevent LRU_warn().
Abort if shrink() recommends it, or no blocks were withdrawn in
the past 15 seconds, or the execution of the statement
SET GLOBAL innodb_buffer_pool_size was interrupted.

buf_pool_t::first_to_withdraw: The first block descriptor that is
out of the bounds of the shrunk buffer pool.

buf_pool_t::withdrawn: The list of withdrawn blocks.
If buf_pool_t::resize() is aborted before shrink() completes,
we must be able to resurrect the withdrawn blocks in the free list.

buf_pool_t::contains_zip(): Added a parameter for the
number of least significant pointer bits to disregard,
so that we can find any pointers into a block
that is supposed to be free, as the sketch below illustrates.
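Disregarding low pointer bits amounts to a mask comparison; a minimal
sketch with hypothetical names:

  #include <stdbool.h>
  #include <stdint.h>

  /* Does ptr point anywhere within the 2^shift-byte aligned block at
     "data"?  Masking off the shift least significant bits matches
     interior pointers as well as the block start. */
  static bool points_within(const void *ptr, const void *data,
                            unsigned shift)
  {
    const uintptr_t mask= ~(((uintptr_t) 1 << shift) - 1);
    return ((uintptr_t) ptr & mask) == ((uintptr_t) data & mask);
  }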

buf_pool_t::is_shrinking(): Return the total number of blocks that
were withdrawn or are to be withdrawn.

buf_pool_t::to_withdraw(): Return the number of blocks that will need to
be withdrawn.

buf_pool_t::usable_size(): Return the number of usable pages,
accounting for a possible in-progress attempt at shrinking the
buffer pool.

buf_pool_t::page_guess(): Try to buffer-fix a guessed block pointer.
If HAVE_UNACCESSIBLE_AFTER_MEM_DECOMMIT is set, the pointer will
be validated before being dereferenced.
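The validation reduces to a bounds-and-alignment check against the
contiguous buffer pool mapping; a sketch with hypothetical names:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Could "guess" be a valid block descriptor within the contiguous
     mapping [pool_base, pool_base + pool_size)?  block_align is the
     assumed spacing of descriptors. */
  static bool guess_is_plausible(const void *guess, const void *pool_base,
                                 size_t pool_size, size_t block_align)
  {
    const uintptr_t g= (uintptr_t) guess, b= (uintptr_t) pool_base;
    return g >= b && g < b + pool_size && (g - b) % block_align == 0;
  }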

buf_pool_t::get_info(): Replaces buf_stats_get_pool_info().

innodb_init_param(): Refactored. We must first compute
srv_page_size_shift and then determine the valid bounds of
innodb_buffer_pool_size.

buf_buddy_shrink(): Replaces buf_buddy_realloc().
Part of the work is deferred to buf_buddy_condense_free(),
which is executed while we are not holding any
buf_pool.page_hash latch.

buf_buddy_condense_free(): Do not relocate blocks.

buf_buddy_free_low(): Do not care about buffer pool shrinking.
This will be handled by buf_buddy_shrink() and
buf_buddy_condense_free().

buf_buddy_alloc_zip(): Assert !buf_pool.contains_zip()
when we are allocating from the binary buddy system.
Previously we were asserting this on multiple recursion levels.

buf_buddy_block_free(), buf_buddy_free_low():
Assert !buf_pool.contains_zip().

buf_buddy_alloc_from(): Remove the redundant parameter j.

buf_flush_LRU_list_batch(): Add the parameter to_withdraw
to keep track of buf_pool.n_blocks_to_withdraw.

buf_do_LRU_batch(): Skip buf_free_from_unzip_LRU_list_batch()
if we are shrinking the buffer pool. In that case, we want
to minimize the page relocations and just finish as quickly
as possible.

trx_purge_attach_undo_recs(): Limit purge_sys.n_pages_handled()
in every iteration, in case the buffer pool is being shrunk
in the middle of a purge batch.

Reviewed by: Debarun Banerjee
2025-03-26 17:05:44 +02:00


/* Copyright (c) 2004, 2010, Oracle and/or its affiliates. All rights reserved.
Copyright (c) 2019, 2020 IBM.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1335 USA */
#include "mysys_priv.h"
#include <mysys_err.h>
#ifdef __linux__
#include <dirent.h>
#endif
#if defined(__linux__) || defined(MAP_ALIGNED)
#include "my_bit.h"
#endif
#ifdef HAVE_LINUX_MMAN_H
#include <linux/mman.h>
#endif
#ifdef HAVE_SOLARIS_LARGE_PAGES
#if defined(__sun__) && defined(__GNUC__) && defined(__cplusplus) \
&& defined(_XOPEN_SOURCE)
/* memcntl exists in sys/mman.h, but what is needed to use it is under-defined */
extern int memcntl(caddr_t, size_t, int, caddr_t, int, int);
#endif /* __sun__ ... */
#endif /* HAVE_SOLARIS_LARGE_PAGES */
my_bool my_use_large_pages;
#ifdef _WIN32
static size_t my_large_page_size;
#endif
#if defined(HAVE_GETPAGESIZES) || defined(__linux__)
/* Descending sort */
static int size_t_cmp(const void *a, const void *b)
{
const size_t ia= *(const size_t *) a;
const size_t ib= *(const size_t *) b;
if (ib > ia)
{
return 1;
}
else if (ib < ia)
{
return -1;
}
return 0;
}
#endif /* defined(HAVE_GETPAGESIZES) || defined(__linux__) */
#if defined(__linux__) || defined(HAVE_GETPAGESIZES)
#define my_large_page_sizes_length 8
static size_t my_large_page_sizes[my_large_page_sizes_length];
#endif
/**
Linux-specific function to determine the sizes of large pages
*/
#ifdef __linux__
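/* Check whether n is a power of 2 (also true for n == 0) */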
static inline my_bool my_is_2pow(size_t n) { return !((n) & ((n) - 1)); }
static void my_get_large_page_sizes(size_t sizes[my_large_page_sizes_length])
{
DIR *dirp;
struct dirent *r;
int i= 0;
DBUG_ENTER("my_get_large_page_sizes");
dirp= opendir("/sys/kernel/mm/hugepages");
if (dirp == NULL)
{
my_error(EE_DIR, MYF(ME_BELL), "/sys/kernel/mm/hugepages", errno);
}
else
{
while (i < my_large_page_sizes_length && (r= readdir(dirp)))
{
if (strncmp("hugepages-", r->d_name, 10) == 0)
{
sizes[i]= strtoull(r->d_name + 10, NULL, 10) * 1024ULL;
if (!my_is_2pow(sizes[i]))
{
my_printf_error(0,
"non-power of 2 large page size (%zu) found,"
" skipping", MYF(ME_NOTE | ME_ERROR_LOG_ONLY),
sizes[i]);
sizes[i]= 0;
continue;
}
++i;
}
}
if (closedir(dirp))
{
my_error(EE_BADCLOSE, MYF(ME_BELL), "/sys/kernel/mm/hugepages", errno);
}
qsort(sizes, i, sizeof(size_t), size_t_cmp);
}
DBUG_VOID_RETURN;
}
#elif defined(HAVE_GETPAGESIZES)
static void my_get_large_page_sizes(size_t sizes[my_large_page_sizes_length])
{
int nelem;
nelem= getpagesizes(NULL, 0);
assert(nelem <= my_large_page_sizes_length);
getpagesizes(sizes, my_large_page_sizes_length);
qsort(sizes, nelem, sizeof(size_t), size_t_cmp);
if (nelem < my_large_page_sizes_length)
{
sizes[nelem]= 0;
}
}
#elif defined(_WIN32)
#define my_large_page_sizes_length 0
#define my_get_large_page_sizes(A) do {} while(0)
#else
#define my_large_page_sizes_length 1
static size_t my_large_page_sizes[my_large_page_sizes_length];
static void my_get_large_page_sizes(size_t sizes[])
{
sizes[0]= my_getpagesize();
}
#endif
/**
Returns the next large page size smaller than or equal to the passed-in size.
The search starts at my_large_page_sizes[*start].
Assumes my_get_large_page_sizes(my_large_page_sizes) has been called before
use.
For the first use, set *start= 0. There is no need to increment *start
between calls; it is updated during the search and can be used to search
again if 0 was not returned.
@param[in] sz size to be searched for
@param[in,out] start offset into my_large_page_sizes to start the search
from; advanced to the next potential size
@retval a large page size that is valid on this system, or 0 if no large
page size is possible
*/
#ifndef _WIN32
static size_t my_next_large_page_size(size_t sz, int *start)
{
DBUG_ENTER("my_next_large_page_size");
while (*start < my_large_page_sizes_length && my_large_page_sizes[*start] > 0)
{
size_t cur= *start;
(*start)++;
if (my_large_page_sizes[cur] <= sz)
{
DBUG_RETURN(my_large_page_sizes[cur]);
}
}
DBUG_RETURN(0);
}
#endif
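/**
Initialize large page support: on Windows, obtain the required
SE_LOCK_MEMORY_NAME privilege and the minimum large page size;
enumerate the available large page sizes; and on Solaris, advise the
kernel about the preferred page size for heap and stack.
*/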
int my_init_large_pages(void)
{
my_use_large_pages= 1;
#ifdef _WIN32
if (!my_obtain_privilege(SE_LOCK_MEMORY_NAME))
{
my_printf_error(EE_PERM_LOCK_MEMORY,
"Lock Pages in memory access rights required for use with"
" large-pages, see https://mariadb.com/kb/en/library/"
"mariadb-memory-allocation/#huge-pages", MYF(MY_WME));
my_use_large_pages= 0;
}
my_large_page_size= GetLargePageMinimum();
#endif
my_get_large_page_sizes(my_large_page_sizes);
#ifdef HAVE_SOLARIS_LARGE_PAGES
extern my_bool opt_super_large_pages;
/*
Tell the kernel that we want to use 4 MiB (or 256 MiB) pages for heap
storage and also for the stack.  We use 4 MiB by default, and if
super-large-pages is set, we increase it to 256 MiB.  256 MiB is meant
for server installations with gigabytes of RAM, where the server will
have page caches and other memory regions measured in gigabytes.
We use the largest available page size that is not bigger than the
desired page size.
Note: This refers to some implementations of the SPARC ISA,
where the supported page sizes are
8KiB, 64KiB, 512KiB, 4MiB, 32MiB, 256MiB, 2GiB, and 16GiB.
On implementations of the AMD64 ISA, the available page sizes
should be 4KiB, 2MiB, and 1GiB.
*/
int nelem= 0;
size_t max_desired_page_size= opt_super_large_pages ? 256 << 20 : 4 << 20;
size_t max_page_size= my_next_large_page_size(max_desired_page_size, &nelem);
if (max_page_size > 0)
{
struct memcntl_mha mpss;
mpss.mha_cmd= MHA_MAPSIZE_BSSBRK;
mpss.mha_pagesize= max_page_size;
mpss.mha_flags= 0;
if (memcntl(NULL, 0, MC_HAT_ADVISE, (caddr_t) &mpss, 0, 0))
{
my_error(EE_MEMCNTL, MYF(ME_WARNING | ME_ERROR_LOG_ONLY), "MC_HAT_ADVISE",
"MHA_MAPSIZE_BSSBRK");
}
mpss.mha_cmd= MHA_MAPSIZE_STACK;
if (memcntl(NULL, 0, MC_HAT_ADVISE, (caddr_t) &mpss, 0, 0))
{
my_error(EE_MEMCNTL, MYF(ME_WARNING | ME_ERROR_LOG_ONLY), "MC_HAT_ADVISE",
"MHA_MAPSIZE_STACK");
}
}
#endif /* HAVE_SOLARIS_LARGE_PAGES */
return 0;
}
/**
Large page size helper.
This rounds down, if needed, the size parameter to the largest
multiple of an available large page size on the system.
*/
void my_large_page_truncate(size_t *size)
{
if (my_use_large_pages)
{
size_t large_page_size= 0;
#ifdef _WIN32
large_page_size= my_large_page_size;
#elif defined(HAVE_MMAP)
int page_i= 0;
large_page_size= my_next_large_page_size(*size, &page_i);
#endif
if (large_page_size > 0)
*size-= *size % large_page_size;
}
}
#if defined(HAVE_MMAP) && !defined(_WIN32)
/* Solaris for example has only MAP_ANON, FreeBSD has MAP_ANONYMOUS and
MAP_ANON but MAP_ANONYMOUS is marked "for compatibility" */
#if defined(MAP_ANONYMOUS)
#define OS_MAP_ANON MAP_ANONYMOUS
#elif defined(MAP_ANON)
#define OS_MAP_ANON MAP_ANON
#else
#error unsupported mmap - no MAP_ANON{YMOUS}
#endif
#endif /* HAVE_MMAP && !_WIN32 */
/**
General large pages allocator.
Tries to allocate memory from the large pages pool, falling back to a
regular allocation (or, on systems without mmap, to my_malloc_lock())
on failure.
Every implementation returns a zero-filled buffer here.
*/
uchar *my_large_malloc(size_t *size, myf my_flags)
{
uchar *ptr= NULL;
#ifdef _WIN32
DWORD alloc_type= MEM_COMMIT | MEM_RESERVE;
size_t orig_size= *size;
DBUG_ENTER("my_large_malloc");
if (my_use_large_pages)
{
alloc_type|= MEM_LARGE_PAGES;
/* Align block size to my_large_page_size */
*size= MY_ALIGN(*size, (size_t) my_large_page_size);
}
ptr= VirtualAlloc(NULL, *size, alloc_type, PAGE_READWRITE);
if (!ptr)
{
if (my_flags & MY_WME)
{
if (my_use_large_pages)
{
my_printf_error(EE_OUTOFMEMORY,
"Couldn't allocate %zu bytes (MEM_LARGE_PAGES page "
"size %zu); Windows error %lu",
MYF(ME_WARNING | ME_ERROR_LOG_ONLY), *size,
my_large_page_size, GetLastError());
}
else
{
my_error(EE_OUTOFMEMORY, MYF(ME_BELL+ME_ERROR_LOG), *size);
}
}
if (my_use_large_pages)
{
*size= orig_size;
ptr= VirtualAlloc(NULL, *size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
if (!ptr && my_flags & MY_WME)
{
my_error(EE_OUTOFMEMORY, MYF(ME_BELL+ME_ERROR_LOG), *size);
}
}
}
#elif defined(HAVE_MMAP)
int mapflag;
int page_i= 0;
size_t large_page_size= 0;
size_t aligned_size= *size;
DBUG_ENTER("my_large_malloc");
while (1)
{
mapflag= MAP_PRIVATE | OS_MAP_ANON;
if (my_use_large_pages)
{
large_page_size= my_next_large_page_size(*size, &page_i);
/* this might be 0, in which case we do a standard mmap */
if (large_page_size)
{
#if defined(MAP_HUGETLB) /* linux 2.6.32 */
mapflag|= MAP_HUGETLB;
#if defined(MAP_HUGE_SHIFT) /* Linux-3.8+ */
mapflag|= my_bit_log2_size_t(large_page_size) << MAP_HUGE_SHIFT;
#else
# warning "No explicit large page (HUGETLB pages) support in Linux < 3.8"
#endif
#elif defined(MAP_ALIGNED)
mapflag|= MAP_ALIGNED(my_bit_log2_size_t(large_page_size));
#if defined(MAP_ALIGNED_SUPER)
mapflag|= MAP_ALIGNED_SUPER;
#endif
#endif
aligned_size= MY_ALIGN(*size, (size_t) large_page_size);
}
else
{
aligned_size= *size;
}
}
ptr= mmap(NULL, aligned_size, PROT_READ | PROT_WRITE, mapflag, -1, 0);
if (ptr == MAP_FAILED)
{
ptr= NULL;
if (my_flags & MY_WME)
{
if (large_page_size && errno == ENOMEM)
{
my_printf_error(EE_OUTOFMEMORY,
"Couldn't allocate %zu bytes (Large/HugeTLB memory "
"page size %zu); errno %u; continuing to smaller size",
MYF(ME_WARNING | ME_ERROR_LOG_ONLY),
aligned_size, large_page_size, errno);
}
else
{
my_error(EE_OUTOFMEMORY, MYF(ME_BELL+ME_ERROR_LOG), aligned_size);
}
}
/* try next smaller memory size */
if (large_page_size && errno == ENOMEM)
continue;
/* other errors are more serious */
break;
}
else /* success */
{
if (large_page_size)
{
/*
we do need to record the adjustment so that munmap gets called with
the right size. This is only the case for HUGETLB pages.
*/
*size= aligned_size;
}
break;
}
if (large_page_size == 0)
{
break; /* no more options to try */
}
}
#else
DBUG_RETURN(my_malloc_lock(*size, my_flags));
#endif /* defined(HAVE_MMAP) */
if (ptr != NULL)
{
MEM_MAKE_DEFINED(ptr, *size);
update_malloc_size(*size, 0);
}
DBUG_RETURN(ptr);
}
#ifdef _WIN32
/**
Special large pages allocator, with the possibility of committing
more memory within the reservation later.
Every implementation returns a zero-filled buffer here.
*/
char *my_large_virtual_alloc(size_t *size)
{
char *ptr;
DBUG_ENTER("my_large_virtual_alloc");
if (my_use_large_pages)
{
size_t s= *size;
s= MY_ALIGN(s, (size_t) my_large_page_size);
ptr= VirtualAlloc(NULL, s, MEM_COMMIT | MEM_RESERVE | MEM_LARGE_PAGES,
PAGE_READWRITE);
if (ptr)
{
*size= s;
DBUG_RETURN(ptr);
}
}
DBUG_RETURN(VirtualAlloc(NULL, *size, MEM_RESERVE, PAGE_READWRITE));
}
#elif defined HAVE_MMAP
/**
Special large pages allocator, with the possibility of committing
more memory within the reservation later.
Every implementation returns a zero-filled buffer here.
*/
char *my_large_mmap(size_t *size, int prot)
{
char *ptr;
DBUG_ENTER("my_large_virtual_alloc");
if (my_use_large_pages)
{
size_t large_page_size;
int page_i= 0;
prot= PROT_READ | PROT_WRITE;
while ((large_page_size= my_next_large_page_size(*size, &page_i)) != 0)
{
int mapflag= MAP_PRIVATE |
# ifdef MAP_POPULATE
MAP_POPULATE |
# endif
# if defined MAP_HUGETLB /* linux 2.6.32 */
MAP_HUGETLB |
# if defined MAP_HUGE_SHIFT /* Linux-3.8+ */
my_bit_log2_size_t(large_page_size) << MAP_HUGE_SHIFT |
# else
# warning "No explicit large page (HUGETLB pages) support in Linux < 3.8"
# endif
# elif defined MAP_ALIGNED
MAP_ALIGNED(my_bit_log2_size_t(large_page_size)) |
# if defined MAP_ALIGNED_SUPER
MAP_ALIGNED_SUPER |
# endif
# endif
OS_MAP_ANON;
size_t aligned_size= MY_ALIGN(*size, (size_t) large_page_size);
ptr= mmap(NULL, aligned_size, prot, mapflag, -1, 0);
if (ptr == MAP_FAILED)
{
ptr= NULL;
/* try next smaller memory size */
if (errno == ENOMEM)
continue;
/* other errors are more serious */
break;
}
else /* success */
{
/*
we do need to record the adjustment so that munmap gets called with
the right size. This is only the case for HUGETLB pages.
*/
*size= aligned_size;
DBUG_RETURN(ptr);
}
}
}
ptr= mmap(NULL, *size, prot,
# ifdef MAP_NORESERVE
MAP_NORESERVE |
# endif
MAP_PRIVATE | OS_MAP_ANON, -1, 0);
if (ptr == MAP_FAILED)
{
my_error(EE_OUTOFMEMORY, MYF(ME_BELL + ME_ERROR_LOG), *size);
ptr= NULL;
}
DBUG_RETURN(ptr);
}
/**
Special large pages allocator, with the possibility of committing
more memory within the reservation later.
Every implementation returns a zero-filled buffer here.
*/
char *my_large_virtual_alloc(size_t *size)
{
return my_large_mmap(size, PROT_READ | PROT_WRITE);
}
#endif
/**
General large pages deallocator.
Tries to deallocate memory as if it was from large pages pool and falls back
to my_free_lock() in case of failure
*/
void my_large_free(void *ptr, size_t size)
{
DBUG_ENTER("my_large_free");
/*
The following implementations can only fail if ptr was not allocated with
my_large_malloc(), i.e. my_malloc_lock() was used, so we should free it
with my_free_lock().
For ASAN, we need to explicitly unpoison this memory region because the OS
may reuse that memory for some TLS or stack variable. It will remain
poisoned if it was explicitly poisoned before release. If this happens,
we'll have hard-to-debug false positives like in MDEV-21239.
For valgrind, we mark it as UNDEFINED rather than NOACCESS because of the
implicit reuse possibility.
*/
#if defined(HAVE_MMAP) && !defined(_WIN32)
if (munmap(ptr, size))
{
my_error(EE_BADMEMORYRELEASE, MYF(ME_ERROR_LOG_ONLY), ptr, size, errno);
}
#if !__has_feature(memory_sanitizer)
else
{
MEM_MAKE_ADDRESSABLE(ptr, size);
}
#endif
update_malloc_size(- (longlong) size, 0);
#elif defined(_WIN32)
if (ptr)
{
/*
When releasing memory with MEM_RELEASE, the size parameter must be 0.
Do not use MEM_RELEASE together with MEM_DECOMMIT.
*/
if (!VirtualFree(ptr, 0, MEM_RELEASE))
{
my_error(EE_BADMEMORYRELEASE, MYF(ME_ERROR_LOG_ONLY), ptr, size,
GetLastError());
}
#if !__has_feature(memory_sanitizer)
else
{
MEM_MAKE_ADDRESSABLE(ptr, size);
}
#endif /* memory_sanitizer */
update_malloc_size(- (longlong) size, 0);
}
#else
my_free_lock(ptr);
#endif /* HAVE_MMAP */
DBUG_VOID_RETURN;
}