When building on a 64-bit kernel machine in a 32-bit Docker container,
CMake detects that the container runtime is also 64-bit. The detection
itself works as expected, but the result is wrong for this setup. Use the
linux32 command to change the runtime environment to 32-bit; CMake will
then correctly disable, for example, ColumnStore and not try to build it.
This change only applies to debian/autobake-debs.sh.
The problem was in the Aria part of the range optimizer,
maria_records_in_range(), which wrongly concluded that there were no rows
in the range.
This error would happen in the unlikely case when searching for a range
on a partial key and there was a match for the first key part in the
upper part of the b-tree (node) and also a match in the underlying
node page.
In other words, for this bug to happen one has to use Aria, have a
multi-part key with a lot of identical values for the first key part, and
do a range search on the second part of the key.
Fixed by ensuring that we do not stop searching for partial keys found
on node pages.
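Below is a toy sketch (not Aria's page format or APIs) of the descent rule
the fix restores: a node-level match on the first key part must not stop
the search, because rows with the same prefix also live in the child pages.

  // Toy B-tree, illustrative only: keys in internal nodes are real
  // entries (MyISAM/Aria style), child[i] holds entries sorted before key[i].
  #include <cstddef>
  #include <vector>

  struct Node
  {
    std::vector<int>    key;    // first key part of each entry
    std::vector<Node *> child;  // empty for leaf pages; else key.size() + 1
  };

  // Count all entries whose first key part equals v.
  static long count_prefix(const Node *n, int v)
  {
    long cnt= 0;
    for (size_t i= 0; i <= n->key.size(); i++)
    {
      if (!n->child.empty())
        cnt+= count_prefix(n->child[i], v);
      // The bug boiled down to treating a match here as the range boundary;
      // entries with the same first key part also live in child[i] above.
      if (i < n->key.size() && n->key[i] == v)
        cnt++;
    }
    return cnt;
  }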
Other things:
- Added some comments
- Changed a variable name to more clearly explain its purpose.
- Fixed a wrong cast in _ma_record_pos() that could cause problems on 32-bit
systems.
The shared counter template ib_counter_t uses the function
my_timer_cycles() as a source of pseudo-random numbers to pick a shard.
On some platforms, my_timer_cycles() could return the constant value 0.
get_rnd_value(): Remove.
my_pseudo_random(): Implement as an alias of my_timer_cycles() or
a wrapper for pthread_self().
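A minimal sketch of the described fallback; my_timer_cycles() is stubbed
with std::chrono here and the platform condition is a placeholder macro,
not the real mysys check:

  #include <chrono>
  #include <cstdint>
  #include <pthread.h>

  static inline uint64_t my_timer_cycles_stub()   // stand-in, not mysys
  {
    return (uint64_t)
        std::chrono::steady_clock::now().time_since_epoch().count();
  }

  static inline size_t my_pseudo_random()
  {
  #ifdef HAVE_WORKING_TIMER_CYCLES  /* assumption, not a real MariaDB macro */
    return (size_t) my_timer_cycles_stub();
  #else
    /* On platforms where the cycle counter is the constant 0, a per-thread
       value still spreads threads across the counter shards.
       (pthread_t is assumed to be an integral or pointer type here.) */
    return (size_t) (uintptr_t) pthread_self();
  #endif
  }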
Reviewed by: Vladislav Vaintroub
This bug could affect queries containing a join of derived tables over
grouping views such that one of the derived tables contains a window
function while another uses view V with dependent subquery DSQ containing
a set function aggregated outside of the subquery in the view V. The
subquery also refers to the fields from the group clause of the view. Due
to this bug, execution of such queries could produce wrong result sets.
When the fix_fields() method performs context analysis of a set function AF,
the function Item_sum::init_sum_func_check() is called at the very beginning.
The function copies the pointer to the embedding set function,
if any, stored in THD::LEX::in_sum_func into the corresponding field of the
set function AF simultaneously changing the value of THD::LEX::in_sum_func
to point to AF. When at the very end of the fix_fields() method the function
Item_sum::check_sum_func() is called it is supposed to restore the value
of THD::LEX::in_sum_func to point to the embedding set function. And in
fact Item_sum::check_sum_func() did it, but only for regular set functions,
not for those used in window functions. As a result, after the context
analysis of AF had finished, THD::LEX::in_sum_func still pointed to AF.
This confused further context analysis. In particular, it led to wrong
resolution of Item_outer_ref objects in the fix_inner_refs() function.
This wrong resolution forced reading the values of grouping fields referred
to in DSQ not from the temporary table used for aggregation from which they
were supposed to be read, but from the table used as the source table for
aggregation.
This patch guarantees that the value of THD::LEX::in_sum_func is properly
restored after the call of fix_fields() for any set function.
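A hedged sketch of the save/restore contract described above (names follow
this text; the bodies are illustrative, not the server source):

  // Illustrative only: models the THD::LEX::in_sum_func bookkeeping.
  struct Item_sum;
  struct LEX { Item_sum *in_sum_func; };

  // Called at the very beginning of fix_fields() for AF.
  void init_sum_func_check(LEX *lex, Item_sum *af, Item_sum **saved_embedding)
  {
    *saved_embedding= lex->in_sum_func;  // remember the embedding set function
    lex->in_sum_func= af;                // analysis now happens "inside" AF
  }

  // Called at the very end of fix_fields() for AF.
  void check_sum_func(LEX *lex, Item_sum *saved_embedding)
  {
    // The bug: this restore was skipped when AF was used in a window
    // function, leaving lex->in_sum_func pointing at AF afterwards.
    lex->in_sum_func= saved_embedding;
  }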
Like MDEV-28105, Spider may attempt to connect to the remote server in
info(), and it may emit an error upon failure to connect. In this
case, the downstream caller ha_partition::open() should return the
error to avoid inconsistency.
This fixes MDEV-27186, MDEV-27237, MDEV-27334, MDEV-28241, MDEV-34101.
Moving to the Debian systemd install and uninstall scripts caused
duplicated DEBHELPER content in the server postrm script. This commit
removes the unneeded parts, making postrm work better and pass lintian
tests.
A few man pages contain a non-standard formatting directive,
.it 1 an-trap, which specifies a formatting instruction
related to indentation (in this case it adds a tab to the man page).
There is no trace of what an-trap should do, and removing
it does not affect the rendering of the man pages.
Field_string::val_int(), Field_string::val_real(), Field_string::val_decimal()
passed the whole buffer of field_length bytes to data type conversion routines.
This made the conversion routines print redundant trailing spaces in case of warnings.
Adding a method Field_string::to_lex_cstring() and using it inside
val_int(), val_real(), val_decimal(), val_str().
After this change the conversion routines get the same value that val_str() returns,
and no redundant trailing spaces are displayed.
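A minimal sketch of the assumed semantics: CHAR(N) pads with spaces, so the
value ends at the last non-space byte.

  // Illustrative sketch, not the server implementation.
  #include <cstddef>

  struct LEX_CSTRING_sketch { const char *str; size_t length; };

  static LEX_CSTRING_sketch to_lex_cstring(const char *ptr,
                                           size_t field_length)
  {
    size_t len= field_length;
    while (len > 0 && ptr[len - 1] == ' ')  // strip the CHAR(N) space padding
      len--;
    return {ptr, len};
  }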
The @@global.character_set_client variable could erroneously be set
to a non-default collation of its character set, which further made
the `SET NAMES DEFAULT` statement crash the server.
Fixing the code to make sure that the global values of these variables:
@@character_set_client
@@character_set_connection
@@character_set_server
@@character_set_database
point to the default compiled collations of the character set.
- Added a counter innodb_num_bulk_insert_operation in
INFORMATION_SCHEMA.GLOBAL_STATUS. This counter is incremented
whenever InnoDB performs a bulk insert operation.
- Changed innodb_instant_alter_column to an atomic variable.
GTID events are applied without a running server transaction, so
we need to set the next transaction ID for the Wsrep transaction.
The whole Galera cluster now has a single GTID value (including
the server ID throughout the cluster), so the config is fixed accordingly.
Add force restart so that repeated MTR test execution prints
consistent GTID values, otherwise they would have been recovered
from the previous run.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
It's possible to establish Galera multi-cluster setups connected
through the native replication when every Galera cluster is configured
to have a separate domain ID.
For this setup to work, we need to replace domain ID values in generated
GTID events when they are written at transaction commit to the values
configured by Wsrep replication.
At the same time, it's possible that the GTID event already contains
a correct domain ID if it comes through the native replication from
another Galera cluster.
In this case, when such an event is applied either through a native
replication slave thread or through Wsrep applier, we write GTID event
on transaction start and avoid writing it during transaction commit.
The code contained multiple problems that were fixed:
- applying GTID events didn't work because they are applied without a
running server transaction and the Wsrep transaction was not started
- GTID event generation on transaction start didn't contain the proper
"standalone" and "is_transactional" flags that the original applied
GTID event contained
- the condition determining that the GTID event is written on transaction
start to avoid writing it on commit relied on the fact that the GTID
event is the first found in the transaction/statement caches, which
wasn't the case and resulted in duplicate GTID events being written
- instead of relying on the caches to find a GTID event, a simple check
is introduced that follows the exact rules for checking whether the event
is written at transaction start, as described above
- the test case is improved to check that the exact GTID events are
applied after the two Galera clusters have synced.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The test that triggers multi-master conflict between two CTAS commands
uses LOCK/UNLOCK TABLES to block local CTAS from progress. It could
result in a race when the UNLOCK TABLES command is issued a bit earlier
than needed, causing local CTAS to run further and change the wsrep
transaction state, so that a different code path is taken later and
the original error gets overridden, causing the test to fail.
The solution is to replace LOCK/UNLOCK TABLES with debug sync points.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Spider calls ha_spider::close() at least twice on ALTER TABLE ... ADD
PARTITION. The first call frees wide_handler and the second call
accesses wide_handler->trx->thd (heap-use-after-free).
In general, there seems to be no problem with using the THD obtained via
the macro current_thd() except in background threads. Thus, we simply
replace wide_handler->trx->thd with current_thd().
Original author: Nayuta Yanagasawa
Remove the dead code in Spider related to Spider's
HandlerSocket support. The code has been disabled for a long time
and it is unlikely that it will ever be enabled.
- rm all files under storage/spider/hs_client/ except hs_compat.h
- rm storage/spider/spd_db_handlersocket.*
- unifdef -UHS_HAS_SQLCOM -UHAVE_HANDLERSOCKET \
-m storage/spider/spd_* storage/spider/ha_spider.* storage/spider/hs_client/*
- remove relevant files from storage/spider/CMakeLists.txt
This commit updates the README to indicate that the "Get the code, build
it, test it" link will help decide the correct branch to work in.
Also fixes a grammar issue and cleans up the Markdown a little bit.
Fix various typos in comments and DEBUG statements; the code changes
are non-functional.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
- Change the comments in class ha_handler_stats to say the members
are in ticks, not milliseconds.
- In sql_explain.cc, adjust the scale to print milliseconds.
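A one-line sketch of the assumed scale adjustment; the tick frequency
parameter is a placeholder, not a real server variable:

  // Illustrative only: "ticks_per_ms" stands in for whatever frequency
  // the timer actually reports.
  static double ticks_to_milliseconds(unsigned long long ticks,
                                      double ticks_per_ms)
  {
    return (double) ticks / ticks_per_ms;
  }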
nullptr+0 is UB (undefined behavior).
- Fixing my_string_metadata_get_mb() to handle {nullptr,0} without UB.
- Fixing THD::copy_with_error() to disallow {nullptr,0} by DBUG_ASSERT().
- Fixing parse_client_handshake_packet() to call THD::copy_with_error()
with an empty string {"",0} instead of NULL string {nullptr,0}.
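A minimal sketch of why {nullptr,0} must be rejected: pointer arithmetic on
a null pointer is undefined even with a zero offset.

  // Illustrative sketch of the guard pattern, not the server code.
  #include <cassert>
  #include <cstddef>

  static void copy_with_error_sketch(const char *str, size_t length)
  {
    assert(str != nullptr);         // mirrors the DBUG_ASSERT: no {nullptr,0}
    const char *end= str + length;  // would be UB if str were nullptr
    (void) end;
  }

  // Callers pass an empty string instead of a null one:
  //   copy_with_error_sketch("", 0);        // OK
  //   copy_with_error_sketch(nullptr, 0);   // asserts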
- Fixing the code in get_interval_value() to use Longlong_hybrid_null.
This allows the following to be handled correctly:
- Signed and unsigned arguments
(the old code assumed the argument to be signed)
- The corner case with LONGLONG_MIN, avoiding undefined negation behavior
This fixes the UBSAN warning:
negation of -9223372036854775808 cannot be represented
in type 'long long int';
- Fixing the code in get_interval_value() to avoid overflow in
the INTERVAL_QUARTER and INTERVAL_WEEK branches.
This fixes the UBSAN warning:
signed integer overflow: -9223372036854775808 * 7 cannot be represented
in type 'long long int'
- Fixing the INTERVAL_WEEK branch in date_add_interval() to handle
huge numbers correctly. Before the change, huge positive numbers
were treated as their negative complements.
Note, some other branches still can be affected by this problem
and should also be fixed eventually.
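A hedged sketch of the two UBSAN fixes above; Hybrid is a stand-in for
Longlong_hybrid_null, not its real interface:

  #include <cstdint>

  struct Hybrid { uint64_t abs_value; bool neg; };

  static Hybrid hybrid_from_longlong(int64_t v)
  {
    // 0 - (uint64_t) v is well defined; -v would be UB for INT64_MIN.
    return v < 0 ? Hybrid{0ULL - (uint64_t) v, true}
                 : Hybrid{(uint64_t) v, false};
  }

  // Returns true on overflow; e.g. factor 3 for QUARTER, 7 for WEEK.
  static bool hybrid_scale(Hybrid in, uint64_t factor, Hybrid *out)
  {
    if (factor && in.abs_value > UINT64_MAX / factor)
      return true;                      // caller raises a warning instead
    *out= Hybrid{in.abs_value * factor, in.neg};
    return false;
  }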
- InnoDB page compression works only on COMPACT or DYNAMIC row
format tables. So InnoDB should throw an error when ALTER TABLE
tries to enable PAGE_COMPRESSED for a REDUNDANT table.
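A minimal sketch of the validation; the enum and names are illustrative,
not InnoDB's real row-format plumbing:

  enum row_format_sketch { ROW_REDUNDANT, ROW_COMPACT, ROW_DYNAMIC };

  static bool page_compressed_allowed(row_format_sketch f)
  {
    // Page compression works only on COMPACT or DYNAMIC tables, so
    // ALTER TABLE must reject PAGE_COMPRESSED=1 for REDUNDANT ones.
    return f == ROW_COMPACT || f == ROW_DYNAMIC;
  }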
to_natsort_key(): Zero-initialize also num_start. This silences a
compiler warning. There is no impact on correctness, because
before the first read of num_start, !n_digits would always hold
and hence num_start would have been initialized.
Correct the second parameter for strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
input buffer to avoid writing past the end of the buffer.
While the second parameter is usually correct, there are exceptions
that need fixing.
This commit addresses the issue within frm_file_exists() and other
affected places.
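A sketch of the off-by-one; the prototype matches the strings library, but
the path literal is made up:

  #include <cstddef>

  // MariaDB strings library; copies until the NullS ((char *) 0) sentinel,
  // writing at most `len` characters and then always a terminating NUL.
  extern "C" char *strxnmov(char *dst, size_t len, const char *src, ...);

  static void example()
  {
    char buf[512];
    // Wrong:   strxnmov(buf, sizeof(buf), ...) can write 512 chars + NUL.
    // Correct: leave room for the NUL that strxnmov appends.
    strxnmov(buf, sizeof(buf) - 1, "/data/db/t1", ".frm", (char *) 0);
  }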
Fixing the condition to raise an overflow when the ulonglong
representation of the number is greater than or equal to 0x8000000000000000ULL.
Before this change the condition did not catch -9223372036854775808
(the smallest possible signed negative longlong number).
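A minimal sketch of the corrected boundary check:

  // Illustrative only: magnitude is the number's ulonglong representation.
  #include <cstdint>

  static bool raises_overflow(uint64_t magnitude)
  {
    // Was a strict ">", which missed exactly 0x8000000000000000ULL,
    // the magnitude of -9223372036854775808.
    return magnitude >= 0x8000000000000000ULL;
  }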
Preserve compatibility with fmt 7.1.3 by including the previous non-const
function.
The error was:
fmt/format.h:3466:8: note: candidate function template not
viable: no known conversion from 'const formatter<String, [2 * ...]>' to
'formatter<fmt::basic_string_view<char>, [2 * ...]>' for object argument
3466 | auto format(const T& val, FormatContext& ctx) ->
decltype(ctx.out()) {
In buf_dblwr_t::init_or_load_pages():
- InnoDB fails to set the TRX_SYS_DOUBLEWRITE_SPACE_ID_STORED
flag in the transaction system header page while recreating
the undo log tablespaces.
- buf_dblwr_t::init_or_load_pages() tries to reset the
space id and to write into the doublewrite buffer even
when read_only mode is enabled.
In srv_all_undo_tablespaces_open(), InnoDB should try to
open the extra unused undo tablespaces instead of trying to
create them.
According to https://discussions.apple.com/thread/8256853
an attempt to use AVX512 registers on macOS will result in #UD
(crash at runtime).
Also, starting with clang-18 and GCC 14, we must add "evex512" to the
target flags so that AVX and SSE instructions can use AVX512 specific
encodings. This flag was introduced together with the avx10.1-512 target.
Older compiler versions do not recognize "evex512". We do not want to
write "avx10.1-512" because it could enable some AVX512 subfeatures
that we do not have any CPUID check for.
Reviewed by: Vladislav Vaintroub
Tested on macOS by: Valerii Kravchuk
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
Backporting the fixes from 11.5 to 10.5
BUF_LRU_MIN_LEN (256) is too high a value for small buffer pool (BP)
sizes. For example, with a BP size lower than 80M and a 16K page size, the
limit is more than 5% of the total BP, and for the lowest BP size of 5M,
it is 80% of the BP.
Non-data objects like explicit locks could occupy part of the BP,
reducing the pages available for the LRU. If the LRU reaches the minimum
limit and no free pages are available, the server would hang with the
page cleaner unable to free any more pages.
Fix: To avoid such a hang, we adjust the LRU limit to be lower than the
limit for data objects as checked in
buf_LRU_check_size_of_non_data_objects(), i.e. one page less than 5% of
the BP.
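The numbers behind the percentages, and a hedged sketch of the adjusted
limit (16 KiB pages assumed; the real adjustment lives in InnoDB's LRU
code):

  // 80 MiB pool / 16 KiB pages = 5120 pages -> BUF_LRU_MIN_LEN 256 is 5%.
  //  5 MiB pool / 16 KiB pages =  320 pages -> 256 is 80% of the pool.
  #include <algorithm>
  #include <cstddef>

  static size_t effective_lru_min_len(size_t pool_size_in_pages)
  {
    const size_t BUF_LRU_MIN_LEN= 256;
    size_t five_percent= pool_size_in_pages / 20;
    // One page below the buf_LRU_check_size_of_non_data_objects() threshold.
    return std::min(BUF_LRU_MIN_LEN,
                    five_percent > 1 ? five_percent - 1 : 1);
  }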
trx_free_at_shutdown(): Similar to trx_t::commit_in_memory(),
clear the detailed_error (FOREIGN KEY constraint error) before
invoking trx_t::free(). We only do this on debug instrumented
builds in order to avoid a debug assertion failure on shutdown.
This regression was introduced in 10.6 by the following commit.
commit b6a2472489
MDEV-27891: SIGSEGV in InnoDB buffer pool resize
During DML, we check in buf_pool_t::running_out whether the buffer pool
is running out of data pages. Here, if 75% of the buffer pool is occupied
by non-data pages, we roll back the current transaction and exit with
ER_LOCK_TABLE_FULL.
The integer division (n_chunks_new / 4) becomes zero whenever the total
number of chunks is < 4, making the check completely ineffective for
such cases. The check is also inaccurate for larger chunks.
Fix-1: Correct the check in buf_pool_t::running_out.
Fix-2: While waiting for a free page, check
buf_LRU_check_size_of_non_data_objects().
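A hedged sketch of why the chunk-based guard failed, next to a page-count
check; the shapes follow this text, not the exact buf_pool_t::running_out()
source:

  #include <cstddef>

  static bool running_out_old(size_t avail_pages, size_t n_chunks_new,
                              size_t pages_per_chunk)
  {
    // n_chunks_new / 4 == 0 whenever there are fewer than 4 chunks, so
    // the threshold collapses to 0 and the guard can never fire.
    return avail_pages < n_chunks_new / 4 * pages_per_chunk;
  }

  static bool running_out_fixed(size_t avail_pages, size_t total_pages)
  {
    // Compare page counts directly: running out once less than 25% of
    // the pool is available, i.e. non-data objects occupy over 75%.
    return avail_pages < total_pages / 4;
  }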
A wide_handler is shared among ha_spider of partitions of the same
spider table, where the last partition is designated the owner of the
wide_handler, and is responsible for its deallocation. Therefore, in
case of failure, we only reset wide_handler in error handling if the
current ha_spider is the owner of the wide_handler; otherwise it would
result in a segv in the destructor of ha_spider, or during
ha_spider::close().
The init, init_error, and init_error_time fields of a SPIDER_SHARE
should only be assigned when actually doing the initialisation of a
SPIDER_SHARE, otherwise they could result in spurious failures from
spider_get_share() in a subsequent statement.