This patch adds support for controlling the memory allocations done
on behalf of an SP/PS on its second and following executions.
As soon as an SP or PS has been executed for the first time, its memory
root is marked as read only, since no further memory allocation should
be performed on it. If such an allocation does take place, it hits an
assertion enforcing the invariant that no new memory allocation happens
on an SP/PS memory root once it has been marked as read only.
This check of memory allocations made on behalf of an SP/PS is enabled
only when both a debug build is used and the cmake option
-DWITH_PROTECT_STATEMENT_MEMROOT is set.
The reason for introducing the new cmake option
-DWITH_PROTECT_STATEMENT_MEMROOT
to guard the check is that in the current server implementation there
are still too many places where such allocations take place. Once all
these incorrect allocations have been fixed, the cmake option
-DWITH_PROTECT_STATEMENT_MEMROOT
can be removed and the check enabled for every debug build. Until every
incorrect memory allocation is fixed, it makes sense to guard the check
behind an extra cmake option; otherwise we would get a lot of failing
tests on buildbot.
Moreover, fixing all the incorrect memory allocations could take a
fairly long time, so the new cmake option lets the feature be
introduced without having to wait until every such place throughout the
source code has been fixed.
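A minimal sketch of the idea, with illustrative names (the real change
concerns MEM_ROOT and alloc_root() in the server; the read_only flag
and helper names below are assumptions, not the actual API):

  // Sketch only: the server marks a statement's memory root read only
  // after the first SP/PS execution and asserts on later allocations.
  #include <cassert>
  #include <cstddef>
  #include <cstdlib>

  struct mem_root_sketch
  {
    bool read_only= false;  // set once the first execution has finished
    // ... block lists of the real MEM_ROOT elided ...
  };

  void *alloc_root_sketch(mem_root_sketch *root, size_t size)
  {
  #ifdef WITH_PROTECT_STATEMENT_MEMROOT
    // Invariant: no new allocation on the statement memory root once it
    // is read only, i.e. on second and following executions.
    assert(!root->read_only);
  #endif
    return malloc(size);    // the real code carves from the root's blocks
  }

  void mark_read_only_sketch(mem_root_sketch *root)
  { root->read_only= true; }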
- This commit is different from 10.6 commit c438284863.
Due to commit 045757af4c (MDEV-24621),
InnoDB buffers and pre-sorts the records for each index, and builds
the indexes one page at a time.
Multiple large INSERT IGNORE statements can abort the server during
the bulk insert operation. The problem is that an InnoDB merge record
exceeds the page size. To avoid this scenario, InnoDB should catch the
too-big record while buffering the insert operation itself.
row_merge_buf_encode(): Returns the length of the encoded index record
row_merge_buf_write(): Catches DB_TOO_BIG_RECORD earlier and
returns the error
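A rough sketch of the shape of the check, using simplified types (the
real functions in row0merge.cc operate on dict_index_t, merge tuples
and merge blocks; the _sketch names are illustrative):

  #include <cstddef>

  enum dberr_sketch { DB_SUCCESS_SKETCH, DB_TOO_BIG_RECORD_SKETCH };

  // row_merge_buf_encode() now reports the length of the encoded record.
  static size_t row_merge_buf_encode_sketch(const unsigned char *rec_begin,
                                            const unsigned char *rec_end)
  {
    return static_cast<size_t>(rec_end - rec_begin);
  }

  // row_merge_buf_write() can then reject a record that does not fit on
  // a merge-block page while buffering, instead of failing later during
  // the bulk insert.
  static dberr_sketch row_merge_buf_write_sketch(size_t encoded_len,
                                                 size_t page_size)
  {
    return encoded_len >= page_size ? DB_TOO_BIG_RECORD_SKETCH
                                    : DB_SUCCESS_SKETCH;
  }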
- The HA_EXTRA_IGNORE_INSERT call is made for every inserted row,
and on partitioned tables for every row in every partition.
This leads to slowness during the LOAD DATA operation.
- Under a bulk operation, the error handling for a multi-insert
statement ends up emptying the table. This behaviour was introduced
by commit 8ea923f55b (MDEV-24818).
It makes the HA_EXTRA_IGNORE_INSERT call redundant, and the same
behaviour can be used for INSERT..IGNORE statements as well.
- Removed the extra HA_EXTRA_IGNORE_INSERT call to improve the
performance of the LOAD command.
- Adding automatic conversion operator from LEX_STRING to LEX_CSTRING
Now a LEX_STRING can be passed directly to any function expecting
a LEX_CSTRING parameter passed by value or by reference.
- Removing a number of duplicate methods accepting LEX_STRING.
Now the code uses the LEX_CSTRING version.
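A self-contained sketch of the conversion; the struct layouts mirror
LEX_STRING/LEX_CSTRING, while the _sketch names are illustrative:

  #include <cstddef>

  struct lex_cstring_sketch
  {
    const char *str;
    size_t length;
  };

  struct lex_string_sketch
  {
    char *str;
    size_t length;
    // The added conversion operator: a LEX_STRING can now be passed
    // wherever a LEX_CSTRING is expected, by value or by reference.
    operator lex_cstring_sketch() const { return {str, length}; }
  };

  static size_t takes_cstring(lex_cstring_sketch s) { return s.length; }

  int main()
  {
    char buf[]= "hello";
    lex_string_sketch ls= {buf, 5};
    return takes_cstring(ls) == 5 ? 0 : 1;  // implicit conversion applies
  }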
This is also related to
MDEV-31348 Assertion `last_key_entry >= end_pos' failed in virtual bool
JOIN_CACHE_HASHED::put_record()
Valgrind exposed a problem with the join_cache for hash joins:
==25636== Conditional jump or move depends on uninitialised value(s)
==25636== at 0xA8FF4E: JOIN_CACHE_HASHED::init_hash_table()
(sql_join_cache.cc:2901)
The reason for this was that avg_record_length contained a random value
if one had used SET optimizer_switch='optimize_join_buffer_size=off'.
This caused either a 'random size' amount of memory to be allocated (up
to join_buffer_size), which could increase memory usage, or, if
avg_record_length was less than the row size, memory overwrites in
thd->mem_root, which is bad.
Fixed by setting avg_record_length in JOIN_CACHE_HASHED::init()
before it's used.
There is no test case for MDEV-31893, as running join_cache_notasan
under valgrind covers it.
I added a test case for MDEV-31348.
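The shape of the fix, as an illustrative reduction (the real code is
JOIN_CACHE_HASHED::init() and init_hash_table() in sql_join_cache.cc;
the fields and sizing formula below are simplified assumptions):

  #include <cstddef>

  struct join_cache_hashed_sketch
  {
    size_t avg_record_length;  // was read uninitialised when
                               // optimize_join_buffer_size=off
    size_t hash_entries;

    void init(size_t key_length, size_t record_length)
    {
      // The fix: give avg_record_length a defined value *before*
      // init_hash_table() uses it for sizing.
      avg_record_length= key_length + record_length;
      init_hash_table(1024 * 1024 /* buffer size */);
    }

    void init_hash_table(size_t buff_size)
    {
      hash_entries= buff_size / avg_record_length;  // now well defined
    }
  };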
Revert the old work-around for buggy fdatasync() on Linux ext3. This
bug was fixed in Linux more than 10 years ago, as far back as kernel
version 3.0.
Reviewed-by: Marko Mäkelä <marko.makela@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
HA_UNIQUE_CHECK was
* only used internally by MyISAM/Aria
* only used for internal temporary tables (for DISTINCT)
* never saved in frm
* saved in MYI/MAD but only for temporary tables
* only set, never checked
it's safe to remove it and free the bit (there are only 16 of them).
We introduce simple plugin dependency. A plugin init function may
return HA_ERR_RETRY_INIT. If this happens during server startup when
the server is trying to initialise all plugins, the failed plugins
will be retried, until no more plugins succeed in initialisation or
want to be retried.
This fixes Spider init bugs, which are caused in part by Spider's
dependency on Aria for initialisation.
The reason we need a new return code, instead of treating every
failure as a request for retry, is that it may be impossible to clean
up after a failed plugin initialisation. Take InnoDB for example: it
has a global variable `buf_page_cleaner_is_active`, which may not
satisfy an assertion during a second initialisation try, probably
because InnoDB does not expect the initialisation to be called
twice.
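A simplified sketch of the retry pass (illustrative types; the real
loop lives in the server's plugin initialisation during startup):

  #include <vector>

  enum init_result_sketch { INIT_OK, INIT_FAILED, RETRY_INIT };

  struct plugin_sketch
  {
    init_result_sketch (*init)();
    bool done= false;
  };

  // Retry failed plugins until a full pass makes no progress, i.e.
  // no plugin succeeds any more.
  static void init_all_plugins_sketch(std::vector<plugin_sketch> &plugins)
  {
    bool progress= true;
    while (progress)
    {
      progress= false;
      for (auto &p : plugins)
      {
        if (p.done)
          continue;
        switch (p.init())
        {
        case INIT_OK:
          p.done= true;
          progress= true;  // success may unblock a dependent plugin
          break;
        case RETRY_INIT:   // stands in for HA_ERR_RETRY_INIT
          break;           // leave the plugin for the next pass
        case INIT_FAILED:
          p.done= true;    // hard failure: no retry (simplified)
          break;
        }
      }
    }
  }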
Since TLS server certificate verification is a client-side-only
option, this flag is removed from both the client (Connector/C)
and the MariaDB server capability flags.
This patch reverts commit 89d759b93e
(MySQL Bug #21543) and stores the server certificate validation
option in mysql->options.extensions.
The `safe_strcpy()` function was added in
https://github.com/mariadb/server/commit/567b68129943#diff-23f88d0b52735bf79b7eb76e2ddbbebc96f3b1ca16e784a347525a9c43134d77
Unfortunately, its current implementation triggers many GCC 8+ string
truncation and array bounds warnings, particularly due to the potential
for a false positive `-Warray-bounds`.
For example, the line `safe_strcpy(delimiter, sizeof(delimiter), ";")` in
`client/mysqldump.c` causes the following warning:
[1669/1914] Building C object client/CMakeFiles/mariadb-dump.dir/mysqldump.c.o
In file included from /PATH/include/my_sys.h:20,
from /PATH/mysqldump.c:51:
In function ‘safe_strcpy’,
inlined from ‘dump_events_for_db.isra’ at /PATH/client/mysqldump.c:2595:3:
/PATH/include/m_string.h:258:39: warning: array subscript 1535 is outside array bounds of ‘const char[2]’ [-Warray-bounds=]
258 | if (dst[dst_size - 2] != '\0' && src[dst_size - 1] != '\0')
| ~~~^~~~~~~~~~~~~~
GCC is reporting that the `safe_strcpy` function *could* cause an
out-of-bounds read from the constant *source* string `";"`, however this
warning is unhelpful and confusing because it can only happen if the size of
the *destination* buffer is incorrectly specified, which is not the case
here.
In https://github.com/MariaDB/server/pull/2640, Andrew Hutchings proposed
fixing this by disabling the `-Warray-bounds` check in this function
(specifically in
be382d01d0 (diff-23f88d0b52735bf79b7eb76e2ddbbebc96f3b1ca16e784a347525a9c43134d77R255-R262)).
However, this was rejected because it also disables the *helpful*
`-Warray-bounds` check on the destination buffer.
Cherry-picking the commit
a7adfd4c52
from 11.2 by Monty Widenius solves the first two problems:
1. It reimplements `safe_strcpy` a bit more efficiently, skipping the
`memset(dst, 0, dst_size)`. This is unnecessary since `strncpy` already
pads `dst` with 0 bytes.
2. It will not trigger the `-Warray-bounds` warning, because `src` is
not read based on an offset determined from `dst_size`.
There is a third problem, however. Using `strncpy` triggers the
`-Wstringop-truncation` warning
(https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wstringop-truncation),
so we need to disable that. However, that warning is much less broadly
useful, so there is no loss of static-analysis value caused by
disabling it.
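For reference, a sketch along the lines of the cherry-picked
implementation (simplified; see m_string.h for the real definition):

  #include <cassert>
  #include <cstddef>
  #include <cstring>

  // strncpy() both truncates and zero-pads, so no memset() is needed.
  // The -Wstringop-truncation warning it triggers is suppressed, since
  // truncation is exactly the behaviour being relied upon.
  #if defined(__GNUC__) && __GNUC__ >= 8
  # pragma GCC diagnostic push
  # pragma GCC diagnostic ignored "-Wstringop-truncation"
  #endif
  static inline int safe_strcpy_sketch(char *dst, size_t dst_size,
                                       const char *src)
  {
    assert(dst_size > 0);
    strncpy(dst, src, dst_size);  // zero-pads dst if src is shorter
    if (dst[dst_size - 1])        // src did not fit: truncated
    {
      dst[dst_size - 1]= 0;       // ensure zero termination
      return 1;                   // report truncation
    }
    return 0;
  }
  #if defined(__GNUC__) && __GNUC__ >= 8
  # pragma GCC diagnostic pop
  #endif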
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Item_func_tochar::check_arguments() didn't check if its arguments
each had one column. Failing to make this check and proceeding would
eventually cause either an assertion failure or the execution would
reach "MY_ASSERT_UNREACHABLE();" which would produce a crash with
a misleading stack trace.
* Fixed Item_func_tochar::check_arguments() to do the required check.
* Also fixed MY_ASSERT_UNREACHABLE() to terminate the program. Merely
"executing" __builtin_unreachable() caused "undefined results", which
in my experience was a crash with a corrupted stack trace.
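Sketched shape of the MY_ASSERT_UNREACHABLE() change (the actual macro
lives in the server headers; this is the idea, not a verbatim copy):

  #include <cstdlib>

  // Before: only a compiler hint; actually executing it was undefined
  // behaviour, typically a crash with a corrupted stack trace.
  //   #define MY_ASSERT_UNREACHABLE() __builtin_unreachable()

  // After (sketch): guarantee deterministic termination.
  #define MY_ASSERT_UNREACHABLE_SKETCH() do { abort(); } while (0)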
Allocate a temporary buffer instead of using the same buffer in some
cases, and add assertions to verify that the buffers do not overlap.
See [1] for the reasoning.
[1] https://github.com/MariaDB/server/pull/2438#discussion_r1137403645
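A sketch of the kind of non-overlap assertion added (names and the
equal-length ranges are illustrative assumptions):

  #include <cassert>
  #include <cstddef>

  // The ranges [src, src + len) and [dst, dst + len) must not overlap.
  static void assert_no_overlap_sketch(const unsigned char *src,
                                       unsigned char *dst, size_t len)
  {
    assert(src + len <= dst || dst + len <= src);
  }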
Signed-off-by: Trevor Gross <tmgross@umich.edu>
This patch ensures that all direct and indirect calls to
encryption_crypt provide a `dlen` value correctly initialized to the
destination buffer length, allowing encryption plugins to verify
available space. It also adds assertions to verify related invariants.
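A minimal sketch of the contract, with an illustrative stand-in for
the crypt callback (the real entry point is encryption_crypt() in
mysql/service_encryption.h):

  #include <cassert>
  #include <cstring>

  // Stand-in that honours the *dlen protocol: capacity in, length out.
  static int crypt_sketch(const unsigned char *src, unsigned int slen,
                          unsigned char *dst, unsigned int *dlen)
  {
    assert(*dlen >= slen);  // the plugin can now verify available space
    memcpy(dst, src, slen);
    *dlen= slen;            // report how many bytes were written
    return 0;
  }

  int main()
  {
    unsigned char src[16]= {0}, dst[32];
    unsigned int dlen= sizeof dst;  // initialise to destination length
    return crypt_sketch(src, sizeof src, dst, &dlen);
  }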
Signed-off-by: Trevor Gross <tmgross@umich.edu>