Spider calls ha_spider::close() at least twice on ALTER TABLE ... ADD
PARTITION. The first call frees wide_handler and the second call
accesses wide_handler->trx->thd (heap-use-after-free).
In general, there seems to be no problem with using the THD obtained via
the macro current_thd() except in background threads. Thus, we simply
replace wide_handler->trx->thd with current_thd().
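A minimal sketch of the substitution; current_thd is modeled here as a
plain pointer (in the server it is a thread-local accessor) and the
surrounding Spider call sites are not reproduced:

  class THD;
  extern THD *current_thd;   // stand-in for the server's thread-local accessor

  static THD *thd_for_spider_close_sketch()
  {
    // before: wide_handler->trx->thd -- wide_handler may already be freed
    //         by an earlier ha_spider::close() call
    // after:  current_thd            -- valid except in background threads
    return current_thd;
  }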
Original author: Nayuta Yanagisawa
Remove the dead code in Spider related to its HandlerSocket support.
The code has been disabled for a long time and it is unlikely that it
will ever be enabled.
- rm all files under storage/spider/hs_client/ except hs_compat.h
- rm storage/spider/spd_db_handlersocket.*
- unifdef -UHS_HAS_SQLCOM -UHAVE_HANDLERSOCKET \
-m storage/spider/spd_* storage/spider/ha_spider.* storage/spider/hs_client/*
- remove relevant files from storage/spider/CMakeLists.txt
This commit updates the README to indicate that the "Get the code, build
it, test it" link will help decide the correct branch to work in.
Also fixes a grammar issue and cleans up the Markdown a little bit.
Fix various typos in comments and DEBUG statements; the code changes
are non-functional.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
- Change the comments in class ha_handler_stats to say the members
are in ticks, not milliseconds.
- In sql_explain.cc, adjust the scale to print milliseconds.
nullptr+0 is UB (undefined behavior).
- Fixing my_string_metadata_get_mb() to handle {nullptr,0} without UB.
- Fixing THD::copy_with_error() to disallow {nullptr,0} by DBUG_ASSERT().
- Fixing parse_client_handshake_packet() to call THD::copy_with_error()
with an empty string {"",0} instead of a NULL string {nullptr,0}.
- Fixing the code in get_interval_value() to use Longlong_hybrid_null.
This makes it possible to handle correctly:
- Signed and unsigned arguments
(the old code assumed the argument to be signed)
- The corner case with LONGLONG_MIN, avoiding undefined negation
behavior (see the sketch after this list)
This fixes the UBSAN warning:
negation of -9223372036854775808 cannot be represented
in type 'long long int';
- Fixing the code in get_interval_value() to avoid overflow in
the INTERVAL_QUARTER and INTERVAL_WEEK branches.
This fixes the UBSAN warning:
signed integer overflow: -9223372036854775808 * 7 cannot be represented
in type 'long long int'
- Fixing the INTERVAL_WEEK branch in date_add_interval() to handle
huge numbers correctly. Before the change, huge positive numbers
were treated as their negative complements.
Note that some other branches can still be affected by this problem
and should also be fixed eventually.
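A minimal sketch of the LONGLONG_MIN negation hazard and a UB-free
workaround; the actual fix uses Longlong_hybrid_null, and the helper
magnitude_sketch() below is only illustrative:

  #include <cstdint>

  static uint64_t magnitude_sketch(int64_t v)
  {
    // -v is undefined behavior when v == INT64_MIN (-9223372036854775808),
    // so negate in the unsigned domain instead:
    return v < 0 ? 0ULL - static_cast<uint64_t>(v)
                 : static_cast<uint64_t>(v);
  }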
- InnoDB page compression works only on COMPACT or DYNAMIC row
format tables. So InnoDB should throw an error when ALTER TABLE
tries to enable PAGE_COMPRESSED for a ROW_FORMAT=REDUNDANT table.
to_natsort_key(): Zero-initialize also num_start. This silences a
compiler warning. There is no impact on correctness, because
before the first read of num_start, !n_digits would always hold
and hence num_start would have been initialized.
Correct the second parameter for strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
destination buffer to avoid writing past the end of the buffer.
While the second parameter is usually correct, there are exceptions
that need fixing.
This commit addresses the issue within frm_file_exists() and other
affected places.
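A minimal sketch of the corrected length argument; strxnmov() and NullS
come from MariaDB's m_string.h, while the buffer size, sources and the
helper name below are illustrative rather than the frm_file_exists()
code:

  /* In server code, my_global.h is included before m_string.h. */
  #include <my_global.h>
  #include <m_string.h>

  static void build_frm_path_sketch(char *path, size_t path_size,
                                    const char *dir, const char *name)
  {
    /* strxnmov() copies at most the given number of bytes of the
       concatenated sources and then writes a terminating '\0', so the
       length argument must be path_size - 1, not path_size. */
    strxnmov(path, path_size - 1, dir, name, ".frm", NullS);
  }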
Fixing the condition to raise an overflow if the ulonglong
representation of the number is greater than or equal to
0x8000000000000000ULL.
Before this change the condition did not catch -9223372036854775808
(the smallest possible signed negative longlong number).
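A minimal sketch of the corrected boundary; the helper below is only
illustrative and not the actual parsing code:

  #include <cstdint>

  static bool longlong_overflow_sketch(uint64_t magnitude, bool negative)
  {
    /* 0x8000000000000000 is representable only as the negative value
       -9223372036854775808, so for non-negative numbers ">=" must raise
       an overflow, while for negative numbers only ">" does. */
    return negative ? magnitude >  0x8000000000000000ULL
                    : magnitude >= 0x8000000000000000ULL;
  }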
Preserve compatibility with 7.1.3 by including the previous non-const
function.
The error was:
fmt/format.h:3466:8: note: candidate function template not
viable: no known conversion from 'const formatter<String, [2 * ...]>' to
'formatter<fmt::basic_string_view<char>, [2 * ...]>' for object argument
3466 | auto format(const T& val, FormatContext& ctx) ->
decltype(ctx.out()) {
in buf_dblwr_t::init_or_load_pages()
- InnoDB fails to set the TRX_SYS_DOUBLEWRITE_SPACE_ID_STORED
flag in transaction system header page while recreating
the undo log tablespaces
buf_dblwr_t::init_or_load_pages(): Tries to reset the
space id and to write into the doublewrite buffer even
when read_only mode is enabled.
In srv_all_undo_tablespaces_open(), InnoDB should try to
open the extra unused undo tablespaces instead of trying to
create them.
According to https://discussions.apple.com/thread/8256853
an attempt to use AVX512 registers on macOS will result in #UD
(crash at runtime).
Also, starting with clang-18 and GCC 14, we must add "evex512" to the
target flags so that AVX and SSE instructions can use AVX512 specific
encodings. This flag was introduced together with the avx10.1-512 target.
Older compiler versions do not recognize "evex512". We do not want to
write "avx10.1-512" because it could enable some AVX512 subfeatures
that we do not have any CPUID check for.
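A minimal sketch of how the target attribute can be made conditional on
the compiler version; the exact feature list and the macro and function
names here are illustrative, not the actual code:

  #if __GNUC__ >= 14 || (defined __clang_major__ && __clang_major__ >= 18)
  # define TARGET_AVX512 "avx512f,avx512bw,avx512vl,evex512"
  #else
  # define TARGET_AVX512 "avx512f,avx512bw,avx512vl"
  #endif

  __attribute__((target(TARGET_AVX512)))
  static unsigned use_avx512_sketch(unsigned x)
  {
    return x + 1;  /* placeholder body; only the attribute matters here */
  }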
Reviewed by: Vladislav Vaintroub
Tested on macOS by: Valerii Kravchuk
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
Backporting the fixes from 11.5 to 10.5
BUF_LRU_MIN_LEN (256) is too high a value for small buffer pool (BP)
sizes. For example, for a BP size below 80M with a 16K page size, the
limit is more than 5% of the total BP, and for the smallest BP of 5M it
is 80% of the BP.
Non-data objects like explicit locks could occupy part of the BP,
reducing the pages available for the LRU. If the LRU reaches the
minimum limit and no free pages are available, the server would hang
with the page cleaner unable to free any more pages.
Fix: To avoid such a hang, we adjust the LRU limit to be lower than the
limit for data objects as checked in
buf_LRU_check_size_of_non_data_objects(), i.e. one page less than 5% of
the BP.
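A minimal sketch of the adjusted minimum; the helper name and the exact
interaction with buf_LRU_check_size_of_non_data_objects() are
assumptions:

  #include <algorithm>
  #include <cstddef>

  static constexpr size_t BUF_LRU_MIN_LEN= 256;

  static size_t effective_lru_min_len_sketch(size_t pool_size_in_pages)
  {
    /* Stay one page below 5% of the buffer pool so the LRU minimum can
       never exceed the space reserved for data pages. */
    size_t cap= pool_size_in_pages / 20 - 1;
    return std::min(BUF_LRU_MIN_LEN, cap);
  }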
trx_free_at_shutdown(): Similar to trx_t::commit_in_memory(),
clear the detailed_error (FOREIGN KEY constraint error) before
invoking trx_t::free(). We only do this on debug instrumented
builds in order to avoid a debug assertion failure on shutdown.
This regression was introduced in 10.6 by the following commit.
commit b6a2472489
MDEV-27891: SIGSEGV in InnoDB buffer pool resize
During DML, we check in buf_pool_t::running_out whether the buffer pool
is running out of data pages. Here, if 75% of the buffer pool is
occupied by non-data pages, we roll back the current transaction and
exit with ER_LOCK_TABLE_FULL.
The integer division (n_chunks_new / 4) becomes zero whenever the total
number of chunks is < 4, making the check completely ineffective in
such cases. The check is also inaccurate for larger chunks.
Fix-1: Correct the check in buf_pool_t::running_out.
Fix-2: While waiting for a free page, check
buf_LRU_check_size_of_non_data_objects().
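A minimal sketch of the truncation pitfall behind Fix-1; the names and
numbers are illustrative, not the actual buf_pool_t::running_out code:

  #include <cassert>
  #include <cstddef>

  int main()
  {
    size_t n_chunks_new= 3;
    assert(n_chunks_new / 4 == 0);     /* the old threshold collapses to 0 */

    /* Comparing in page units avoids the per-chunk truncation: */
    size_t pool_pages= 3 * 8192;       /* e.g. 3 chunks of 8192 pages each */
    size_t threshold= pool_pages / 4;  /* 25% of the pool, in pages */
    assert(threshold == 6144);
    return 0;
  }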
A wide_handler is shared among the ha_spider objects of partitions of
the same Spider table, where the last partition is designated the owner
of the wide_handler and is responsible for its deallocation. Therefore,
in case of failure, we only reset wide_handler in error handling if the
current ha_spider is the owner of the wide_handler; otherwise it would
result in a segv in the destructor of ha_spider, or during
ha_spider::close().
The init, init_error, and init_error_time fields of a SPIDER_SHARE
should only be assigned when actually doing the initialisation of a
SPIDER_SHARE, otherwise they could result in spurious failures from
spider_get_share() in a subsequent statement.
This regression was introduced in 10.6 by the following commit.
commit 898dcf93a8
(Cleanup the lock creation)
It removed one important optimization for lock bitmap pre-allocation.
We pre-allocate about 8 bytes of extra space along with every lock
object to account for similar locks on newly created records on the
same page by the same transaction. When it is exhausted, a new lock
object is created with a similar 8-byte pre-allocation. With this
optimization removed we are left with only 1 byte of pre-allocation.
When a large number of records are inserted and locked in a single
page, we end up creating too many new locks, almost in n^2 order.
Fix-1: Bring back LOCK_PAGE_BITMAP_MARGIN for pre-allocation.
Fix-2: Use the extra space (40 bytes) for bitmap in trx->lock.rec_pool.
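A minimal sketch of the idea behind the margin; the constant matches the
roughly 8 bytes (64 bits) of slack mentioned above, but the helper and
the trx->lock.rec_pool layout are not the actual code:

  #include <cstddef>

  static constexpr size_t LOCK_PAGE_BITMAP_MARGIN= 64;  /* bits, ~8 bytes */

  static size_t lock_rec_bitmap_bytes_sketch(size_t n_heap_entries_on_page)
  {
    /* One bit per heap number, rounded up to bytes, plus slack so that
       locks on rows inserted later on the same page can reuse the same
       lock object instead of forcing a new allocation. */
    return (n_heap_entries_on_page + LOCK_PAGE_BITMAP_MARGIN + 7) / 8;
  }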
Similar to #2480.
567b681 introduced safe_strcpy() to minimize the use of the C function
strcpy(), which is prone to memory overflows and whose use is
discouraged.
Replace instances of strcpy() with safe_strcpy() where possible, limited
here to files in the `sql/` directory.
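A minimal sketch of what such a bounded copy typically looks like; the
real safe_strcpy() in MariaDB may differ in signature, return value and
edge-case handling:

  #include <cstring>

  static inline bool safe_strcpy_sketch(char *dst, size_t dst_size,
                                        const char *src)
  {
    size_t len= std::strlen(src);
    if (len >= dst_size)                /* would not fit with the '\0' */
    {
      std::memcpy(dst, src, dst_size - 1);
      dst[dst_size - 1]= '\0';
      return true;                      /* truncated */
    }
    std::memcpy(dst, src, len + 1);
    return false;
  }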
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Spider calls info with HA_STATUS_AUTO to update auto increment info,
which may attempt to connect to the data node. If the connection fails,
it may emit an error and return the same error. This error should not
be of lower priority than any possible error from the later call to
handler::update_auto_increment().
Without this change, certain errors from update_auto_increment() such
as HA_ERR_AUTOINC_ERANGE may get ignored, causing my_insert() to call
my_ok(), which fails the assertion because the error was emitted in
the info() call (Diagnostics_area::is_set() returns true).
A memory leak happens on the second execution of a query that is run in
PS mode and uses the function ROWNUM().
The memory leak took place on allocation of an instance of the class
Item_int for storing a limit value, which is performed in the function
set_limit_for_unit, indirectly called from JOIN::optimize_inner. A
typical trace to the place where the memory leak occurred is below:
  JOIN::optimize_inner
    optimize_rownum
      process_direct_rownum_comparison
        set_limit_for_unit
          new (thd->mem_root) Item_int(thd, lim, MAX_BIGINT_WIDTH);
To fix this memory leak, the function optimize_rownum() has to be
called only once, on the first execution, and never called after that.
To control this, the new data member
first_rownum_optimization
is added to the structure st_select_lex.
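A minimal sketch of the guard; the member name comes from the
description above, while its exact type and the surrounding
st_select_lex and optimizer code are not reproduced:

  struct select_lex_sketch
  {
    bool first_rownum_optimization= true;
  };

  static void optimize_inner_sketch(select_lex_sketch *sl)
  {
    if (sl->first_rownum_optimization)
    {
      /* optimize_rownum(...) would run here, allocating the Item_int
         on thd->mem_root exactly once. */
      sl->first_rownum_optimization= false;  /* skip on PS re-execution */
    }
  }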