- Change to use 'mariadbd' instead of 'mysqld' in help texts and other
visible places.
- Start binary 'mariadbd' instead of 'mysqld'. This will remove a warning
in 11.0 when running mysql_install_db.
- Use my_print_defaults --mariadbd instead of --mysqld
- Use --skip-log-error if the user doesn't have access to the log-error file.
This is needed to allow mysql_install_db to work silently for users that
do not have write access to /var/log.
Other things:
- Updated my_print_defaults to support --mariadbd
Charset names in the 'languages' line are not used any more.
Remove them to avoid confusion.
All messages in errmsg-utf8.txt are in utf8 now.
The charset names should have been removed in MySQL 5.5 as part of: https://dev.mysql.com/worklog/task/?id=751
Bump version number.
While downgrades are not supported, and misguided attempts at them could
cause serious corruption, especially after
commit b07920b634,
it might be useful if InnoDB would start up even after an upgrade to
MariaDB Server 11.0 or later has removed the change buffer.
innodb_change_buffering_update(): Disallow anything other than
innodb_change_buffering=none when the change buffer is corrupted.
ibuf_init_at_db_start(): Mention a possible downgrade in the corruption
error message. If innodb_change_buffering=none, ignore the error but do
not initialize ibuf.index.
ibuf_free_excess_pages(), ibuf_contract(), ibuf_merge_space(),
ibuf_update_max_tablespace_id(), ibuf_delete_for_discarded_space(),
ibuf_print(): Check for !ibuf.index.
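As a rough, self-contained illustration of the guard pattern described above
(a minimal sketch with a simplified struct, not the actual InnoDB code):
```
/* Sketch: ibuf.index stays NULL when the change buffer could not be
   initialized, e.g. because an upgrade to 11.0 removed it or it is
   corrupted. The functions listed above then return early. */
struct ibuf_sketch_t { void *index= nullptr; };
static ibuf_sketch_t ibuf_sketch;

void ibuf_contract_sketch()
{
  if (!ibuf_sketch.index)
    return;            /* no change buffer: nothing to merge or print */
  /* ... normal change buffer processing ... */
}
```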
ibuf_check_bitmap_on_import(): Remove some unnecessary code.
This function is only accessing change buffer bitmap pages in a
data file that is not attached to the rest of the database.
It is not accessing the change buffer tree itself, hence it does
not need any additional mutex protection.
This has been tested both by starting up MariaDB Server 10.8 on
an 11.0 data directory, and by running ./mtr --big-test while
ibuf_init_at_db_start() was tweaked to always fail.
The mini-benchmark.sh script failed to run in the latest Fedora
distributions in GitLab CI. It requires `lscpu`, which is resolved by
installing util-linux.
Additionally, executing the benchmark inside a Docker container failed
because of increased Docker security in recent updates. In particular,
the `renice` and `taskset` operations are not permitted, and neither are
the required `perf` operations.
https://docs.docker.com/engine/security/seccomp/
Allow these operations to fail gracefully, test for `perf` and skip it if
unavailable, so that the remaining benchmark activities can proceed.
Other minor changes to the CI are included such as allowing sanitizer
jobs to fail and using "needs" in the mini-benchmark pipeline.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Removed an old '* 2' from the HASH join cost. This was made obsolete by
a later patch that added a cost for copying the data out from the join buffer
to table->record.
I also added some 'echo' commands to some test cases to make it easier to
debug test case changes.
Test case changes:
- subselect3_jcl6 and subselect_sj2_jcl6 results changed as materialized
tables were changed to hash join + first_match
Firstmatch_picker::check_qep() has an optimization that allows firstmatch
to be used together with join buffer under some conditions. In this
case the cost was assumed to be the same as what best_access_path()
had calculated.
However, if HASH+join_buffer was used, then
fix_semijoin_strategies_for_picked_join_order() would remove the
join_buffer (which would cause a full join to be used) and the cost
assumption by Firstmatch_picker::check_qep() would be wrong.
Later, check_join_cache_usage() sees that it is a full scan and decides
it can use join buffering (but not the hash join).
Fixed by also allowing HASH joins with firstmatch.
This removes the need to disable and re-enable the join buffer.
Test case changes:
- HASH join used with firstmatch (Using join buffer (flat, BNLH join))
- 'Filtered' could change with firstmatch, as the conversion with and without
join buffering lost the filtering information.
- The fact that the join buffer is no longer re-enabled is visible in
main.optimizer_trace
Original code by Sergei, optimized by Monty.
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
- Deadlock happens when bulk insert acquires the space latch
before acquiring the index root page and check table does the
opposite. The workaround is to avoid validating the index for
check table when a bulk insert is in progress for the table.
- This failure was caused by commit 358921ce32
row_ins_duplicate_online() should consider the record to be an exact
match of the tuple when the number of matching fields equals the number of
unique fields + DB_TRX_ID + DB_ROLL_PTR
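For illustration, the condition amounts to roughly the following (a sketch
with made-up names, not the actual row_ins_duplicate_online() code):
```
/* Sketch: the record is only an exact match of the tuple when the
   matched field count covers all unique fields plus the system columns
   DB_TRX_ID and DB_ROLL_PTR, hence the "+ 2". */
static inline bool is_exact_match(unsigned matched_fields,
                                  unsigned n_unique_fields)
{
  return matched_fields == n_unique_fields + 2;
}
```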
rec_init_offsets_comp_ordinary(), rec_init_offsets(),
rec_get_offsets_reverse(), rec_get_nth_field_offs_old():
Simplify some bitwise arithmetic to avoid conditional jumps,
and add branch prediction hints with the assumption that most
variable-length columns are short.
Tested by: Matthias Leich
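A minimal sketch of the kind of simplification meant here; the encoding and
names are illustrative only, the real row-format code in
rec_init_offsets_comp_ordinary() is considerably more involved:
```
#include <cstdint>

#if defined(__GNUC__)
# define SKETCH_LIKELY(x) __builtin_expect(!!(x), 1)
#else
# define SKETCH_LIKELY(x) (x)
#endif

/* Decode a 1- or 2-byte column length from a variable-length header,
   hinting the compiler that short (1-byte) lengths dominate and using
   plain bitwise arithmetic for the 2-byte case. */
static inline uint32_t decode_varlen(const uint8_t *&ptr)
{
  uint32_t len= *ptr++;
  if (SKETCH_LIKELY(len < 0x80))     /* most columns are short */
    return len;
  return ((len & 0x7f) << 8) | *ptr++;
}
```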
The AWS KMS plugin saves all key files under the root folder of the data
directory. Increasing key IDs and key rotations generate a lot of key
files under the root folder, which looks messy and makes it hard to
maintain folder permissions etc.
This introduces a new plugin parameter `aws_key_management_keyfile_dir` to
define the directory for saving the key files, for better maintenance.
Detailed parameter information is as follows:
```
VARIABLE_NAME: AWS_KEY_MANAGEMENT_KEYFILE_DIR
SESSION_VALUE: NULL
GLOBAL_VALUE: <Directory path>
GLOBAL_VALUE_ORIGIN: COMMAND-LINE
DEFAULT_VALUE:
VARIABLE_SCOPE: GLOBAL
VARIABLE_TYPE: VARCHAR
VARIABLE_COMMENT: Define the directory in which to save key files
for the AWS key management plugin. If not set,
the root datadir will be used
READ_ONLY: YES
COMMAND_LINE_ARGUMENT: REQUIRED
GLOBAL_VALUE_PATH: NULL
```
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
If you are running mariadb-install-db from a source tree instead of from
an installation, it was executing `mysqld` instead of `mariadbd`, which
showed the deprecation warning. This patch fixes that, as well as fixing
messages and links to other things that have been renamed.
This stabilizes main.order_by_optimizer_innodb, where the result varies
depending on the rec_per_key status from the engine.
The logic to prefer a range over a const ref:
- If the range has only one part and it uses more key parts than the ref,
then use the range.
Example:
WHERE key_part1=1 and key_part2 > #
Here we will prefer a range over (key_part1,key_part2) instead of a ref
over key_part1.
This fixes a regression due to MDEV-19229. InnoDB would fail to maintain
the maximum transaction ID when it changes and reinitializes the number
of undo tablespaces. InnoDB should maintain the maximum transaction ID
in TRX_RSEG_MAX_TRX_ID of system rollback segment header.
srv_undo_tablespaces_reinit(): Preserve the system-wide maximum
transaction identifier in the TRX_RSEG_MAX_TRX_ID field of
the first rollback segment. If needed, upgrade the page to the
MariaDB 10.3 format first. All this must be done in the same
atomic mini-transaction that will reinitialize the TRX_SYS page.
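A self-contained sketch of the write itself, with mock stand-ins for the
mini-transaction and the page frame (the real code uses InnoDB's mtr API
and the actual TRX_RSEG_MAX_TRX_ID field offset):
```
#include <cstdint>

typedef uint64_t trx_id_t;
typedef unsigned char byte;

struct mtr_sketch_t
{
  /* Stand-in for an 8-byte mtr write: in InnoDB the change would also be
     redo-logged so that it is applied atomically together with the
     reinitialization of the TRX_SYS page. */
  void write8(byte *ptr, trx_id_t value)
  {
    for (int i= 7; i >= 0; i--)
      *ptr++= static_cast<byte>(value >> (i * 8));
  }
};

void preserve_max_trx_id_sketch(mtr_sketch_t &mtr,
                                byte *rseg_max_trx_id_field,
                                trx_id_t max_trx_id)
{
  /* Persist the system-wide maximum transaction id in the
     TRX_RSEG_MAX_TRX_ID field of the first rollback segment header. */
  mtr.write8(rseg_max_trx_id_field, max_trx_id);
}
```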
Before MariaDB Server 10.3, InnoDB persisted the maximum transaction
identifier only in the TRX_SYS page. MariaDB 10.3 started to treat that
page as a read-only directory of rollback segments, and the maximum
transaction identifier will be recovered from TRX_RSEG_MAX_TRX_ID or
from undo logs. Since a change of innodb_undo_tablespaces is only
allowed when no undo log records exist, the only place to store the
persistent maximum transaction identifier is in TRX_RSEG_MAX_TRX_ID
of one of the rollback segment header pages.
The bug was observed when the database was upgraded directly from MySQL 5.7
or earlier, or from MariaDB Server 10.2 or earlier, to multiple
innodb_undo_tablespaces. On a restart of MariaDB after the upgrade,
the transaction identifier would be reported to be smaller than during
the upgrade:
2023-03-03 10:43:57 0 [Note] InnoDB: log sequence number 2762352; transaction id 1794
2023-03-03 10:44:17 0 [Note] InnoDB: log sequence number 2786076; transaction id 770
This error was discovered while working on
MDEV-30540 Wrong result with IN list length reaching
IN_PREDICATE_CONVERSION_THRESHOLD
If there is a read error from handler::ha_rnd_next() during a recursive
query, st_select_lex_unit::exec_recursive() will crash as it will try to
get the error code from a structure that was deleted by the callee.
The code was using the construct:
sl->join->exec();
saved_error=sl->join->error;
This does not work as sl->join was freed by the exec() and sl->join would
be set to 0.
Fixed by having JOIN::exec() return the error code.
The included test case simulates the error in ha_rnd_next(), which causes
a crash without the patch.
Failing test: cte_recursive.test
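A minimal self-contained sketch of the unsafe pattern and the fix; JOIN is
mocked here and the real JOIN::exec() signature may differ:
```
#include <cstdio>

struct JoinSketch
{
  int error= 0;
  /* After the fix, the error code is also returned, so callers do not
     have to read it back from an object that exec() may have freed. */
  int exec()
  {
    error= 1;            /* pretend ha_rnd_next() returned a read error */
    return error;
  }
};

int main()
{
  JoinSketch *join= new JoinSketch();
  /* Old pattern (unsafe):  join->exec(); saved_error= join->error;
     If exec() frees the JOIN and the pointer is reset to 0, the second
     read is a use-after-free or a null-pointer dereference.
     New pattern: take the return value directly. */
  int saved_error= join->exec();
  delete join;
  std::printf("saved_error=%d\n", saved_error);
  return 0;
}
```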
If one writes to a file, then truncates it and then calls mmap() over the
file_size + 7, then the file size changes to 7. (On Linux mmap() does not
change the file size).
This caused _ma_read_rnd_dynamic_record() to believe that there are more
records in the data file, which is not the case, and the table will be
marked as corrupted.
Fixed by disabling mmap() in Aria on Windows.
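A sketch of the kind of platform guard meant by the fix (illustrative names;
the actual Aria option handling differs):
```
/* Decide whether Aria may memory-map its data files. On Windows,
   mapping beyond the end of the file grows the file, which later makes
   _ma_read_rnd_dynamic_record() see phantom records and flag the table
   as corrupted, so mmap is disabled there. */
static bool aria_may_use_mmap()
{
#ifdef _WIN32
  return false;
#else
  return true;
#endif
}
```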
The problem was that mysql_derived_prepare() did not correctly set
'distinct' when creating a temporary derived table.
Fixed by separating checking for distinct for queries with and without
UNION.
Other things:
- Fixed bug in generate_derived_keys_for_table() where we set the wrong
bit for join_tab->keys
- Cleaned up JOIN::drop_unused_derived_keys()
- Changed TABLE::use_index() to keep unique keys and update
share->key_parts
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
- Remove DBUG calls from my_winfile.c where call and parameters
are already printed by mysys.
- Remove DBUG from my_get_osfhandle() and my_get_open_flags() to remove
DBUG noise.
- Updated convert-debug-for-diff to take Windows into account.
- Changed some DBUG_RETURN(function()) to tmp=function(); DBUG_RETURN(tmp);
This is needed because, for DBUG binaries, Visual C++ prints a trace for
func_a()
{
DBUG_ENTER("func_a");
DBUG_RETURN(func_b())
}
as
>func_a
<func_a
>func_b
<func_b
instead of when using gcc:
>func_a
| >func_b
| <func_b
<func_a
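The rewritten form from the last bullet then looks roughly like this
(a sketch; func_b() is the same placeholder as in the example above):
```
#include <my_dbug.h>   /* DBUG_ENTER / DBUG_RETURN from mysys */

int func_b(void);

int func_a(void)
{
  DBUG_ENTER("func_a");
  /* Store the result in a temporary so that Visual C++ DBUG traces show
     func_b() nested inside func_a(), matching the gcc output above. */
  int tmp= func_b();
  DBUG_RETURN(tmp);
}
```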
- InnoDB fails to reset check_foreigns and check_unique_secondary
in trx_t::free() and trx_t::commit_cleanup(). This led to bulk insert
in internal InnoDB FTS table operations.
Starting with commit 0de3be8cfd (MDEV-30671),
the field TRX_UNDO_NEEDS_PURGE lost its previous meaning.
The following scenario is possible:
(1) InnoDB is killed at a point of time corresponding to the durable
execution of some fseg_free_step_not_header() but not
trx_purge_remove_log_hdr().
(2) After restart, the affected pages are allocated for something else.
(3) Purge will attempt to access the newly reallocated pages when looking
for some old undo log records.
trx_purge_free_segment(): Invoke trx_purge_remove_log_hdr() as the first
thing, to be safe. If the server is killed, some pages will never be
freed. That is the lesser evil. Also, before each mtr.start(), invoke
log_free_check() to prevent ib_logfile0 overrun.
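A self-contained sketch of the reordering described above, with mock
functions standing in for the InnoDB calls named in the text (not the
actual trx_purge_free_segment() code):
```
#include <cstdio>

static void mock_trx_purge_remove_log_hdr() { std::puts("detach undo log header"); }
static void mock_log_free_check() { /* would wait if ib_logfile0 is nearly full */ }
static bool mock_fseg_free_step_not_header(int &pages_left)
{ return --pages_left <= 0; }

void purge_free_segment_sketch()
{
  /* Detach the undo log header first: if the server is killed afterwards,
     some pages are never freed (the lesser evil), but purge can no longer
     follow a stale header into pages that were reallocated for other use. */
  mock_trx_purge_remove_log_hdr();

  int pages_left= 3;
  for (bool finished= false; !finished; )
  {
    mock_log_free_check();           /* before each mini-transaction */
    /* mtr.start(); */
    finished= mock_fseg_free_step_not_header(pages_left);
    /* mtr.commit(); */
  }
}
```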
Because downgrades from 11.0 to older MariaDB servers are not possible
due to the removal of the InnoDB change buffer, there is no need to
access the field TRX_UNDO_NEEDS_PURGE anymore.
This patch also fixes some bugs detected by valgrind after this
change:
- Not enough copy_func elements were allocated by Create_tmp_table(), which
caused a memory overwrite in Create_tmp_table::add_fields().
I added an ASSERT() to be able to detect this also without valgrind.
The bug was that TMP_TABLE_PARAM::copy_fields was not correctly set
when calling create_tmp_table().
- Aria::empty_bits is not allocated if there are no varchar/char/blob
fields in the table. Fixed the code to take this into account.
This cannot cause any issues as this is just a memory access
into other Aria memory and the content of the memory would not be used.
- Aria::last_key_buff was not allocated big enough. This may have caused
issues with rtrees and ma_extra(HA_EXTRA_REMEMBER_POS) as they
would use the same memory area.
- Aria and MyISAM didn't take extended key parts into account, which
caused problems when copying rec_per_key from engine to sql level.
- Mark asan builds with 'asan' in the version string to detect these in
not_valgrind_build.inc.
This is needed to not have main.sp-no-valgrind fail with asan.