Problem:
The empty-queries counter is incremented if no rows are sent to the
client in the EXECUTE phase of a SELECT query. With the cursor
protocol, rows are not sent during the EXECUTE phase; they are sent
later, in the FETCH phase. Hence, queries executed with the cursor
protocol were always falsely treated as empty in the EXECUTE phase.
Fix:
For the cursor protocol, empty queries are now counted during the
FETCH phase. This ensures the counter correctly reflects whether any
rows were actually sent to the client.
Tests included in `mysql-test/main/show.test`.
Log tables cannot work with transactional InnoDB or Aria; this is
checked by ALTER TABLE, which reports ER_UNSUPORTED_LOG_ENGINE. But it
was possible to circumvent this check with CREATE TABLE. The patch
makes the check of supported engines common to ALTER TABLE and CREATE
TABLE.
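A hedged sketch of the previously possible circumvention (column list
abridged and hypothetical; exact error conditions may differ):

  -- ALTER TABLE mysql.general_log ENGINE=InnoDB was already rejected
  -- with ER_UNSUPORTED_LOG_ENGINE, but the same engine could be set
  -- by dropping and re-creating the table:
  SET GLOBAL general_log = OFF;
  DROP TABLE mysql.general_log;
  CREATE TABLE mysql.general_log (event_time TIMESTAMP(6))
    ENGINE=InnoDB;  -- now rejected by the common engine check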
It was mysql_install_db, and this is changed to mariadb-install-db.
Likewise, all of the support-files references to mysql_install_db are
changed.
This install script is part of the service as a useful initialization
step, and a no-op in subsequent runs.
This script does, however, change the auth_pam_tool_dir ownership.
When running multiple instances based on username, changing
auth_pam_tool_dir will only cause trouble for the other users.
If you are running multiple instances per username, it seems unlikely
that you need PAM access for all users. Even if you did, the solution
for auth_pam_tool_dir would be group permissions and group access
based on those users.
As such, skip changing the ownership.
A deadlock forces the ongoing transaction to roll back implicitly.
Within a transaction block, started with START TRANSACTION / BEGIN,
the implicit rollback doesn't reset the OPTION_BEGIN flag. This causes
a new implicit transaction to start when the next statement is
executed. This behaviour is unexpected and should be fixed. However,
note that there is no issue with the rollback itself.
The fix keeps the behaviour of the implicit rollback (deadlock)
similar to explicit COMMIT and ROLLBACK, i.e. the next statement after
the deadlock error does not start a transaction block implicitly
unless autocommit is set to zero.
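A hedged sketch of the behaviour (two connections and an InnoDB table,
here called t, are assumed in order to provoke the deadlock):

  -- connection 1
  START TRANSACTION;
  UPDATE t SET v = v + 1 WHERE id = 1;
  -- connection 2 locks rows in the opposite order, and connection 1
  -- then gets ERROR 1213 (40001): Deadlock found when trying to get
  -- lock; its transaction is rolled back implicitly.
  -- Before the fix, the next statement on connection 1 still ran
  -- inside a transaction block because OPTION_BEGIN stayed set; after
  -- the fix it runs in autocommit mode, as after an explicit ROLLBACK.
  SELECT v FROM t WHERE id = 1;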
This controls which Linux implementation is used for
innodb_use_native_aio=ON.
innodb_linux_aio=auto is equivalent to innodb_linux_aio=io_uring when
it is available, falling back to innodb_linux_aio=aio when not.
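A hedged illustration (the variable is set at server startup, e.g. via
--innodb-linux-aio=auto, and can then be inspected):

  SELECT @@GLOBAL.innodb_use_native_aio, @@GLOBAL.innodb_linux_aio;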
Debian packaging is no longer exclusively aio or uring; for older
Debian or Ubuntu releases a remove_uring directive is used. For more
recent releases, liburing is added as a mandatory dependency for
consistent packaging.
WITH_LIBAIO is now an independent option from WITH_URING.
LINUX_NATIVE_AIO preprocessor constant is renamed to HAVE_LIBAIO,
analogous to existing HAVE_URING.
tpool::is_aio_supported(): A common feature check.
is_linux_native_aio_supported(): Remove. This had originally been added in
mysql/mysql-server@0da310b69d in 2012
to fix an issue where io_submit() on CentOS 5.5 would return EINVAL
for a /tmp/#sql*.ibd file associated with CREATE TEMPORARY TABLE.
But, starting with commit 2e814d4702 InnoDB
temporary tables will be written to innodb_temp_data_file_path.
The 2012 commit said that the error could occur on "old kernels".
Any GNU/Linux distribution that we currently support should be based
on a newer Linux kernel; for example, Red Hat Enterprise Linux 7
was released in 2014.
tpool::create_linux_aio(): Wraps the Linux implementations:
create_libaio() and create_liburing(), each defined in separate
compilation units (aio_linux.cc, aio_libaio.cc, aio_liburing.cc).
The CMake definitions are simplified using target_sources() and
target_compile_definitions(), all available since CMake 2.8.12.
With this change, there is no need to include ${CMAKE_SOURCE_DIR}/tpool
or add TPOOL_DEFINES flags anymore; target_link_libraries(lib tpool)
does all that.
This is joint work with Daniel Black and Vladislav Vaintroub.
In a UBSAN debug build, the comparisons with next_mrec_end are made
against index->online_log's head/tail members' block pointer plus a
sort buffer size offset (1048576).
The logic that flows through to this point means that even
srv_sort_buf_size bytes above a null pointer wouldn't contain the
value of next_mrec_end.
As such this is a UBSAN-type fix where we first check whether
head.block / tail.block is null before doing the asserts around this
debug condition. This would be required for the assertion conditions
not to segfault anyway.
log_hdr_buf: Align to an 8-byte boundary, because we will actually
assume at least 4-byte alignment in log_crypt_write_header().
This fixes a regression that had been introduced in
commit 685d958e38 (MDEV-14425)
where a 512-byte alignment requirement was relaxed too much.
Problem:
=========
(1) Mariabackup tries to read the history data from
mysql.mariadb_backup_history and fails with a segfault. The reason is
that mariabackup forces innodb_log_checkpoint_now since commit
652f33e0a44661d6093993d49d3e83d770904413 (MDEV-30000).
Mariabackup sends the "innodb_log_checkpoint_now=1" query to the
server and reads the result set for the query later in the code,
because the query may trigger the page cleaner thread to flush pages.
But before reading the query result for innodb_log_checkpoint_now=1,
mariabackup executes the select query on the history table
(mysql.mariadb_backup_history) and wrongly reads the query result of
innodb_log_checkpoint_now. This leads to an assertion failure in
mariabackup.
(2) Incremental backups were recorded with the format "tar" when
mbstream was used. xb_stream_fmt_t only had XB_STREAM_FMT_NONE and
XB_STREAM_FMT_XBSTREAM, and hence in the mysql.mariadb_backup_history
table the format was recorded as "tar" for "mbstream", due to the
offset in the xb_stream_name array within mariadb-backup.
(3) Also, under Windows the full path of mariabackup was recorded in
the history.
(4) select_incremental_lsn_from_history(): The name of the backup and
the UUID of the history record could lead to a buffer overflow while
copying their values from global variables.
Solution:
=========
(1) Move the reading of history data from mysql.mariadb_backup_history
to after reading the result of the innodb_log_checkpoint_now=1 query.
(2) We've removed the "tar" element from xb_stream_name. As the name
"xbstream" was never used, the format name is changed to "mbstream".
As the table needs alteration, "mbstream" is appended instead of the
unused "xbstream" in the table. "tar" is left in the enum as previous
recordings may still contain it.
(3) The Windows path separator is used so that just the executable
name is stored as the tool in the mariadb_backup_history table.
(4) select_incremental_lsn_from_history(): Check and validate the
length of the incremental history name and incremental history UUID
before copying them into the temporary buffer.
Thanks to Daniel Black for contributing the code for solutions (2)
and (3).
There was a wrong result due to table elimination when a
'unique_col IS NULL' condition was supplied in a left-join query.
An index on unique_col was created for the table, and it is used in
the plan.
As the query was a left join, no fields from the right table were
projected. The right table was wrongly eliminated although multiple
rows from the right could be matched to a single row from the left
with the condition 'unique_col IS NULL'.
This PR addresses the problem by including an additional check before
eliminating a table in the check_equality() function of
opt_table_elimination.cc.
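A hedged sketch of the affected pattern (table and column names are
hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (unique_col INT, UNIQUE KEY (unique_col));
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (NULL), (NULL);
  -- No t2 columns are selected, but t2 must not be eliminated:
  -- 'unique_col IS NULL' can match several t2 rows per t1 row, so the
  -- correct result has two rows, not one.
  SELECT t1.a FROM t1 LEFT JOIN t2 ON t2.unique_col IS NULL;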
innodb.doublewrite: Skip the test case if we get an unexpected
checkpoint. This could happen because the page cleaner thread could
still be active after reading the initial checkpoint information.
MSAN has been updated since 2022, when this macro was added, and as
such the workaround for MSAN's deficient understanding of the
fstat/stat syscall behaviour at the time is no longer required.
As it is effectively a no-op, a straight removal is sufficient.
Problem:
=======
- InnoDB poisons freed page memory to make sure that no other thread
uses the freed page. In buf_pool_t::close(), InnoDB unmaps the buffer
pool memory during shutdown, or when an error is encountered during
startup. Later at some point, the server re-uses the same virtual
address using mmap() and writes into the memory region. This leads to
a use_after_poison error.
This issue doesn't happen with the latest clang and gcc versions.
Older versions of clang and gcc can still fail with this error.
ASAN should unpoison the memory while reusing the same virtual
address. This issue was already raised in
https://github.com/google/sanitizers/issues/1705
Fix:
===
In order to avoid this failure, let's unpoison the buffer pool memory
explicitly during buf_pool_t::close() for versions older than gcc-14
and clang-18.
Backing up with mariabackup a datadir containing
ENGINE=Mroonga tables leaves behind the corresponding
*.mrn* files. Those tables are therefore broken once such a backup is
restored.
minor style/mtr changes by Daniel Black
Hurd doesn't have a mechanism to identify the user connecting to a
socket via a system call, as MDEV-8535 highlighted. As such it can't
be supported, so we disable it in Debian's mysql_release profile.
The Hurd string from uname -m appears as "SYSTEM processor:
i686-AT386" in MariaDB output; see also the wiki reference
https://en.wikipedia.org/wiki/Uname
Origin: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006531
CHECK constraints use caching items, and the value is cached even in
case of error. At the second execution the cached value is taken and
the error is not thrown. The fix does not cache the value in case an
error was thrown during value retrieval.
This issue arises when we have an outer reference resolved in another
outer reference, where that outer reference is resolved in an outer
select that is a grouping select.
Under these circumstances, the intermediate item is wrapped in an
Item_outer_ref and fixing is deferred until fix_inner_refs. As this
item wrapper isn't fixed, we fail this assertion. The fix here is to
resolve the item at the lowest level to the item inside the wrapper,
which is fixed. This item can then get its own wrapper pointing to the
ultimate resolution of this item.
Approved by Sanja Byelkin (sanja@mariadb.com) 2025-06-13
Now that RocksDB has been synced up to 6.29, which includes the
changes mentioned in the CMake comment, support for building on
non-Linux aarch64 OSes can be enabled.
replication problems
DELETE HISTORY did not process parameterized PS properly as the
history expression was checked at the prepare stage, when the
parameters were not yet substituted. In that case check_units()
succeeded as there is no invalid type: Item_param has
type_handler_null, which is inherited from the string type and is a
valid type for a history expression. The warning was thrown when the
expression was evaluated for comparison at delete execution (when the
parameter was already substituted).
The fix postpones check_units() until the first PS execution. We have
to postpone WHERE condition processing until the first execution and
update select_lex.where on every execution, as it is reset to its
post-prepare state.
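A hedged sketch of the affected statement shape (the table name t is
hypothetical; a system-versioned table is required):

  CREATE TABLE t (a INT) WITH SYSTEM VERSIONING;
  PREPARE stmt FROM 'DELETE HISTORY FROM t BEFORE SYSTEM_TIME ?';
  -- At prepare time the '?' parameter has no value yet, so the
  -- history expression cannot be fully checked; the check now happens
  -- at the first execution.
  SET @ts = '2038-01-01 00:00:00';
  EXECUTE stmt USING @ts;
  DEALLOCATE PREPARE stmt;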
Nullability is decided in two stages:
1. Based on argument NULL-ness
Problem:
- COALESCE currently uses the generic logic "the result of a function
is nullable if any of the arguments is nullable", which is wrong.
- IFNULL sets nullability using the second argument alone, which
incorrectly marks the result as nullable even when the first argument
is not nullable.
Fix:
- The result of COALESCE and IFNULL is marked nullable only if all
arguments are nullable.
2. Based on type conversion safety of fallback value
Problem:
- The generic `Item_hybrid_func_fix_attributes` logic would mark the
function's result as nullable if any argument involved a type
conversion that could yield NULL.
Fix:
- For COALESCE and IFNULL, nullability is set to NOT NULL if the first
non-null argument can be safely converted to the function's target
return type.
- For other functions, if any argument's conversion to target type could
result in NULL, the function is marked nullable.
Tests included in `mysql-test/main/func_hybrid_type.test`
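A hedged illustration (table and column names are hypothetical) of the
resulting column metadata:

  CREATE TABLE t (a INT NOT NULL, b INT NULL);
  CREATE TABLE t2 AS
    SELECT COALESCE(b, 0) AS c1, IFNULL(a, b) AS c2 FROM t;
  -- With the fix, c1 should be NOT NULL (not all COALESCE arguments
  -- are nullable: the literal 0 never is) and c2 should be NOT NULL
  -- (the first IFNULL argument, a, is never NULL). Previously c1 was
  -- marked nullable because b is nullable, and c2 because IFNULL took
  -- its nullability from the second argument alone.
  SHOW CREATE TABLE t2;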
Fix the AWS SDK build; it has changed substantially since the plugin
was introduced. There is now a bunch of intermediate C libraries,
aws-cpp-crt and others, and for static linking the link dependencies
must be declared.
Also support the AWS C++ SDK in the vcpkg package manager.
Since 10.11.12 (commit 8363d05f4d), executables are built with
the dynamic C runtime (/MD), introducing a dependency on
vcruntime140.dll.
VC++ runtime is typically installed via the VCCRT feature, which is
included by default as part of the hidden/default ALWAYSINSTALL feature
set. However, when users specify the ADDLOCAL property, it overrides the
default selection and may omit critical features like VCCRT. This leads to
installation failures.
Fix: Add a custom action that ensures mandatory features (e.g., VCCRT)
are appended to the ADDLOCAL property at install time. This guarantees
essential runtime components are always installed, even when a custom
feature set is selected.
- Add a testcase showing JSON_HB histograms handle multi-byte characters
correctly.
- Make Item_func_json_unquote::val_str() handle situation where
it is reading non-UTF8 "JSON" and transcoding it into UTF-8.
(the JSON spec only allows UTF8 but MariaDB's implementation
supports non-UTF8 as well)
- Make Item_func_json_search::compare_json_value_wild() handle
json_unescape()'s return values in the same way it's done in other
places.
- Coding style fixes.
Using report_json_error was incorrect as errors in the je have already
been handled earlier in the json function.
The errors related to json_unescape are now handled consistently with
other functions.
The OUTOFMEMORY error and ER_JSON_BAD_CHAR are now pushed as warnings
if json_unescape resulted in those errors.
Callers only expected a bool, so the prototype was changed.
Json_engine_scan::check_and_get_value_scalar failed to handle the
error condition, so now it sets *error if an error occurred and
returns the correct value.
json_unescape can return negative numbers, in which case we should
free the allocated buffer.
Also handle the NULL value in unsafe_str by not dereferencing NULL.
When json_escape changed[1] to return -1 in the case of a character
that didn't match the character set, json_unescape_to_string assumed
the -1 meant out of memory and just looped with more memory.
Problem 1 - json_escape needs to return a different code so that
charset incompatibility and out of memory can be distinguished. This
enables json_escape_to_string to handle it correctly (ignore and fail
seems the best option).
Problem 2 - JSON histograms need to support characters where the
column's JSON min/maximum values aren't in a character set represented
by a single byte.
Problem 2 was previously hidden as '?' was the result of the
conversion.
As JSON histograms can relate to columns that have an explicit
character set, use that and fall back to bin, which was the previous
default for non-string columns.
Replaces -1/-2 constants and handling with JSON_ERROR_ILLEGAL_SYMBOL /
JSON_ERROR_OUT_OF_SPACE defines.
[1] regression from: f699010c0f
Two new error codes ER_SEQUENCE_TABLE_HAS_TOO_FEW_ROWS and
ER_SEQUENCE_TABLE_HAS_TOO_MANY_ROWS were introduced in MDEV-36032 in
both 10.11 and, as part of MDEV-22491, 12.0. Here we remove them from
10.11, but they should remain in 12.0.
MariaDB server crashes when a query includes a derived table
containing an unnamed column (e.g. `SELECT '' from t`). When the
`Item` object representing such an unnamed column was checked for a
valid, non-empty name in `TABLE_LIST::create_field_translation`, the
server crashed (assertion `item->name.str && item->name.str[0]`
failed).
This fix removes the redundant assertion. The assert was a strict
debug guard that's no longer needed because the code safely handles
empty strings without it.
Selecting `''` from a derived table caused `item->name.str` to be an
empty string. While the pointer itself wasn't `NULL` (so
`item->name.str` is true), its first character (`item->name.str[0]`)
was the null terminator, which evaluates to `false` and eventually
made the assert fail. The code immediately after the assert can safely
handle empty strings, so the assert was guarding against something the
code can already handle.
Includes `mysql-test/main/derived.test` to verify the fix.
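A hedged reproducer sketch (the table name t is hypothetical):

  CREATE TABLE t (a INT);
  INSERT INTO t VALUES (1);
  -- The derived table column has an empty name; this previously hit
  -- the assertion in debug builds.
  SELECT * FROM (SELECT '' FROM t) AS dt;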
is_bulk_op())' failed after ALTER TABLE of versioned table
A missed error code resulted in my_ok() being called in a higher
frame, which failed on the assertion for m_status while in the error
state.
As of CMake 3.24, CMAKE_COMPILER_IS_GNU(CC|CXX) are deprecated and
should be replaced with CMAKE_(C|CXX)_COMPILER_ID, which were
introduced with CMake 2.6.
MDEV-33813 caused a regression in that when a disk got full while
writing to a MyISAM or Aria table, the MariaDB connection would,
instead of retrying after 60 seconds, hang until the query was killed.
Fixed by changing mysql_cond_wait() to mysql_cond_timedwait().
Author: Thomas Stangner
The handler::clone() call did not work with read-only tables like S3.
It gave a wrong error message (out of memory instead of a permission
error) and aborted the query.
The issue was that the clone call had a wrong parameter to ha_open().
This is now fixed. I also changed the clone call to provide the
correct error message if things fail.
This patch fixes an 'out of memory' error when using the S3 engine
for queries that could use multiple indexes together to find the matching
rows, like the following:
SELECT * FROM t1 WHERE key1 = 99 OR key2 = 2
This commit fixes a bug where Aria tables are used in
(master->slave1->slave2) and a backup is taken on slave2. In this case
it is possible that the replication position in the backup, stored in
mysql.gtid_slave_pos, will be wrong. This will lead to replication
errors if one is trying to use the backup as a new slave.
Analyze:
Replicated row events are committed with trans_commit_stmt() and
thd->transaction->all.ha_list != 0.
This means that backup_commit_lock is not taken for Aria tables,
which means the rows are committed and binary logged on the slave
under BLOCK_COMMIT, which should not happen.
This issue does not occur on the master as thd->transaction->all.ha_list
is == 0 under AUTO_COMMIT, which sets 'is_real_trans' and 'rw_trans'
which in turn causes backup_commit_lock to be taken.
Fixed by checking in ha_check_and_coalesce_trx_read_only() whether all
handlers support rollback and, if not, waiting for BLOCK_COMMIT also
for statement commits.
forever, cannot be killed
mysql_rm_table_no_locks() does TDC_RT_REMOVE_ALL, which waits while
the share is closed. The table is normally open only as an OPEN_STUB;
this is what the parser does for CREATE TABLE. But for SELECT the
table is opened not as a stub. If it is the same table name, we anyway
have two TABLE_LIST objects: a stub and a non-stub. So for the
non-stub one, TDC_RT_REMOVE_ALL sees an open count and decides to wait
until it is closed. And it hangs because that table was opened in the
same thread.
The fix disables subqueries in CHECK expressions at the parser level.
Thanks to Sergei Golubchik <serg@mariadb.org> for the patch.
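A hedged sketch of the kind of statement affected (table and column
names are hypothetical; the exact error message may differ):

  CREATE TABLE t1 (a INT);
  -- The CHECK subquery selects from the very table being replaced;
  -- this previously hung. Subqueries in CHECK expressions are now
  -- rejected at the parser level.
  CREATE OR REPLACE TABLE t1 (
    a INT CHECK (a IN (SELECT a FROM t1))
  );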