innobase_rename_indexes_cache(): fix corruption of index cache. Index ids
help distinguish indexes when their names clash.
innobase_rename_indexes_cache(): fix corruption of index statistics table.
Use unique temporary names to avoid name clashes.
Reviewed by: Marko Mäkelä
innodb_preshutdown(): Terminate the encryption threads before
the page cleaner thread can be shut down.
innodb_shutdown(): Always wait for the encryption threads and
page cleaner to shut down.
srv_shutdown_all_bg_threads(): Wait for the encryption threads and
the page cleaner to shut down. (After an aborted startup,
innodb_shutdown() would not be called.)
row_get_background_drop_list_len_low(): Remove.
os_thread_count: Remove. Alternatively, at the end of
srv_shutdown_all_bg_threads() we could try to wait longer
for the count to reach 0. On some platforms, an assertion
os_thread_count==0 could fail even after a small delay,
even though in the core dump all threads would have exited.
srv_shutdown_threads(): Renamed from srv_shutdown_all_bg_threads().
Do not wait for the page cleaner to shut down, because the later
innodb_shutdown(), which may invoke
logs_empty_and_mark_files_at_shutdown(), assumes that it exists.
Problem:
1. The server terminates abnormally when phrase search doesn't
filter out doc_ids correctly. This problem was fixed in bug #22709692.
2. Wrong query result: a regression from the bug #22709692 fix.
That fix optimized full-text search queries with a LIMIT clause:
when the FTS expression involves only union operations, we fetch only
the number of doc_ids specified by the LIMIT clause.
Full-text phrase search is not a union operation, yet phrase search
with a plugin parser was being treated as one.
In phrase search with a LIMIT clause, we fetch a limited number of
doc_ids for each token, and if any selected doc_id does not contain
all tokens in the correct order, we do not include that row_id in the
result set.
Therefore phrase search returns fewer rows than actually qualify in
the table.
Fix:
Added a condition so that phrase search with a plugin parser is not
treated as a union operation.
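A rough illustration of the added condition (the struct and names
below are assumptions, not the actual InnoDB FTS code):

  struct fts_query_t
  {
    bool is_phrase_search;
    bool only_union_ops;
  };

  /* Only a pure union query may fetch just LIMIT-many doc_ids per
     token. Before the fix, a phrase search using a plugin parser was
     still treated as a union operation; now any phrase search opts
     out, so the early LIMIT cut-off cannot drop qualifying rows. */
  bool can_apply_limit_early(const fts_query_t &q)
  {
    if (q.is_phrase_search)   /* includes the plugin-parser case */
      return false;
    return q.only_union_ops;
  }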
RB: 24925
Reviewed by : Annamalai Gurusami <annamalai.gurusami@oracle.com>
This is a cherry-pick of
mysql/mysql-server@5549920b7a
without a test case, because the test case depends on an n-gram
tokenizer that will be missing from MariaDB until MDEV-10267 is added.
Problem:
In full-text phrase search, we filter out rows that do not contain
all the tokens in the phrase.
If we do not filter out a doc_id that doesn't appear in all the
tokens' doc_id lists, then we hit an assert.
Fix:
If any token's last doc_id is equal to the i-th doc_id of the first
token's doc_id list, then filter out the rest of the higher doc_ids.
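A hedged sketch of that filtering with standalone types (illustrative,
not the actual fts0que.cc code); `first` is the first token's sorted
doc_id list and `min_last_doc_id` is the last doc_id of the token whose
list ends earliest:

  #include <cstdint>
  #include <vector>

  typedef uint64_t doc_id_t;

  std::vector<doc_id_t> filter_candidates(
      const std::vector<doc_id_t> &first, doc_id_t min_last_doc_id)
  {
    std::vector<doc_id_t> out;
    for (doc_id_t id : first)
    {
      if (id > min_last_doc_id)
        break;            /* filter out the rest of the higher doc_ids */
      out.push_back(id);
    }
    return out;
  }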
RB: 24909
Reviewed by : Annamalai Gurusami <annamalai.gurusami@oracle.com>
This is a cherry-pick of
mysql/mysql-server@5aa075277d
but without a test case, because the test case depends on an n-gram
tokenizer that will be missing from MariaDB until MDEV-10267 is added.
Recent gcc/clang versions failed to compile the existing code.
Updating to a later upstream SDK version was simple and required
only implementing a flush method. This was left empty, as
there was no strong requirement to keep the error log
atomic or durable.
Reviewed-by: wlad@mariadb.com
The upstream SDK version added a flush method which was simple
to complete.
This issue is caused by MDEV-22456 ad6171b91c. The fix involves a
backported version of the 10.4 patch MDEV-22778 5f2628d1ee and a few
parts of MDEV-17441 (e9a5f288f2).
dict_table_t::stats_latch_created: Removed.
dict_table_t::stats_latch: make it a value member and, for simplicity,
always lock it, even for a stats-cloned table.
zip_pad_info_t::mutex_created: Removed.
zip_pad_info_t::mutex: make it a value member instead of a pointer.
os0once.h: Removed
dict_table_remove_from_cache_low(): Ensure that fts_free() is always
called, even if dict_mem_table_free() is deferred until
btr_search_lazy_free().
InnoDB would always create zip_pad_info_t::mutex and
dict_table_t::autoinc_mutex, even for tables that are not in
ROW_FORMAT=COMPRESSED and do not include any AUTO_INCREMENT column.
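A minimal sketch of the value-member change, assuming std::mutex in
place of the server's own mutex type:

  #include <mutex>

  struct zip_pad_info_t
  {
    std::mutex mutex;   /* value member: initialized with the object,
                           no mutex_created flag, no os0once state */
    unsigned pad_pct= 0;
  };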
MariaDB 10.2.2 inherited from MySQL 5.7 a perceived optimization
of ALTER TABLE, which skips the writing of redo log records.
In MDEV-16809 we introduced a parameter that allows the redo log to
be written, so that Mariabackup would not be impacted, but we kept
the MySQL 5.7 behaviour enabled by default (innodb_log_optimize_ddl=ON).
As noted in MDEV-19747 (Deprecate and ignore innodb_log_optimize_ddl,
implemented in MariaDB 10.5.1), omitting the redo log writes can
actually reduce performance, because we will have to wait for the data
pages to be written out. When the redo log file is configured to be
large enough, it actually can be much faster to write the redo log and
avoid the extra page flushing.
When the redo log writes are omitted (innodb_log_optimize_ddl=ON),
Mariabackup may also have to perform a lot of extra work and re-copy
the entire data file if it is possible that any log was omitted during
the backup.
Starting with MariaDB 10.5.1, the parameter innodb_log_optimize_ddl
is deprecated and ignored. We hereby deprecate (but will not ignore)
the parameter in earlier versions as well.
Storage engines are generally initialized in some random order
(by iterating the hash of plugin names).
S3 fails if it is initialized before Aria.
But it appears that while S3 needs Aria, it does not need Aria to be
initialized before S3: S3 copies maria_hton and then overwrites every
single member of it, so it can handle Aria being initialized after S3.
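A hedged sketch of that pattern (handlerton is simplified and the s3_*
functions are stand-ins, not the real ha_s3 code):

  struct handlerton
  {
    int (*commit)(void *);
    int (*discover)(const char *);
  };

  handlerton maria_hton;                  /* filled in at Aria's init */

  static int s3_commit(void *)         { return 0; }
  static int s3_discover(const char *) { return 0; }

  static handlerton s3_hton;
  void s3_init()
  {
    s3_hton= maria_hton;      /* may copy a not-yet-initialized struct */
    s3_hton.commit= s3_commit;     /* ...but every member is overwritten, */
    s3_hton.discover= s3_discover; /* so Aria may initialize later */
  }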
There are 2 issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption needs a larger buffer (space for the
encrypted data, the decrypted data, an IO_CACHE_CRYPT struct describing
the encryption parameters, etc.).
Issue #2: IO_CACHE::seek_not_done
When IO_CACHE objects are cloned, they still share the file descriptor.
This means an operation on one IO_CACHE may change the file read
position, which will confuse the other IO_CACHEs using it.
The fix for these issues:
Allocate the buffer to also include the extra size needed for encryption.
Perform the seek again after one IO_CACHE reads the file.
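A minimal sketch of the seek fix in plain POSIX terms (not the actual
mysys code): because clones share one file descriptor, each reader must
re-seek to its own position instead of trusting the shared offset.

  #include <fcntl.h>
  #include <unistd.h>

  ssize_t read_at_own_pos(int shared_fd, void *buf, size_t len,
                          off_t my_pos)
  {
    /* another clone may have moved the file offset since our last read */
    if (lseek(shared_fd, my_pos, SEEK_SET) == (off_t) -1)
      return -1;
    return read(shared_fd, buf, len);
  }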
The characters parsed are always ASCII characters, hence one byte. This
means the code did not have "incorrect" logic, because the boolean
condition, if true, would also evaluate to the value 1.
The condition, however, is semantically wrong, equating a length with
the outcome of a comparison. Change the parentheses to make the code
also read according to the intent.
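A hypothetical sketch of the pattern being fixed (names are
illustrative, not the actual parsing code):

  #include <cstddef>

  size_t token_length(const char *ptr, const char *end)
  {
    /* Before: the parentheses assigned a comparison result (0 or 1),
       which only happened to work because each token is one ASCII byte:
         size_t len= (end - ptr == 1);
       After: compute the length; keep comparisons boolean. */
    size_t len= (size_t) (end - ptr);
    return len;
  }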
fix printing precedence for BETWEEN, LIKE/ESCAPE, REGEXP, IN
don't use precedence for printing CASE/WHEN/THEN/ELSE/END
fix parsing precedence of BETWEEN, LIKE/ESCAPE, REGEXP, IN
support predicate arguments for IN, BETWEEN, SOUNDS LIKE, LIKE/ESCAPE,
REGEXP
use %nonassoc for unary operators
fix parsing of IS TRUE/FALSE/UNKNOWN/NULL
remove parser_precedence test as superseded by the precedence test
Some GSS-API functions, like gss_import_name() and gss_release_buffer(),
used in plugin/auth_gssapi and libmariadb/plugins/auth, are marked
as deprecated on macOS starting from version 10.14. This results in
extra warnings when building the server.
To eliminate these warnings, the flag '-Wno-deprecated-declarations'
has been added to the compiler invocation for the source
files that invoke deprecated GSS-API functions.
Reimplement MDEV-14275 Improving memory utilization for information schema
Postpone temp table instantiation until after setup_fields().
Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (a TABLE_CATALOG alone is 1538 bytes per row).
This does not change the engine. If the table was decided to be Aria
(because of, say, blobs) then after optimization it'll stay Aria
even if all blobs were removed.
Note 1: when transforming the table structure, share->blob_fields is
preserved; otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for the deleted mark, which create_tmp_table
didn't provide.
Note 2: the optimizer was calling handler::info() (to learn the number
of rows) before the temp table was populated. That didn't make much
sense. Now it's done before the table is even instantiated. Preserve
the old behavior and report 0 rows.
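An illustrative sketch of the narrowing step with standalone types (not
the actual create_tmp_table() code): columns absent from the read_set
keep their definition but lose their storage.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct ColumnDef { const char *name; uint32_t byte_length; };

  void narrow_unused(std::vector<ColumnDef> &defs,
                     const std::vector<bool> &read_set)
  {
    for (size_t i= 0; i < defs.size(); i++)
      if (!read_set[i])
        defs[i].byte_length= 0; /* CHAR(0): column stays, footprint gone */
  }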
This reverts e2664ee836 and a8458a2345
This patch moves the definitions of the macros
HAVE_PAM_SYSLOG, HAVE_PAM_EXT_H, HAVE_PAM_APPL_H, HAVE_STRNDUP
from the command line (in the form -Dmacro) to the auto-generated
header file config_auth_pam.h
Compiler warnings like the one listed below are generated during a
server build on macOS:
[88%] Building C object plugin/auth_pam/CMakeFiles/pam_user_map.dir/mapper/pam_user_map.c.o
mariadb/server-10.2/plugin/auth_pam/mapper/pam_user_map.c:87:41: error: passing
'gid_t *' (aka 'unsigned int *') to parameter of type 'int *' converts between pointers to integer types
with different sign [-Werror,-Wpointer-sign]
if (getgrouplist(user, user_group_id, loc_groups, &ng) < 0)
^~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/unistd.h:650:43: note:
passing argument to parameter here
int getgrouplist(const char *, int, int *, int *);
^
In case the MariaDB server is built with -DCMAKE_BUILD_TYPE=Debug,
this results in a build error.
The reason for the compiler warnings is that the declaration of the
POSIX C API function getgrouplist() on macOS differs from the
declaration of getgrouplist() proposed by POSIX.
To suppress this compiler warning, the CMake configure step was adapted
to detect which kind of getgrouplist() function is declared on the
build platform, and to set the macro HAVE_POSIX_GETGROUPLIST in case
the build platform supports a POSIX-compatible interface for the
getgrouplist() function. Depending on whether this macro is set, the
compatible argument type is used to pass parameter values to the
function.
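A hedged sketch of how the macro can be used; HAVE_POSIX_GETGROUPLIST
comes from the CMake check, while the surrounding code is illustrative,
not the actual pam_user_map.c:

  #include <grp.h>
  #include <unistd.h>

  #ifdef HAVE_POSIX_GETGROUPLIST
  typedef gid_t group_id_t; /* POSIX-style: getgrouplist(..., gid_t*, ...) */
  #else
  typedef int group_id_t;   /* macOS: getgrouplist(..., int*, ...) */
  #endif

  int fetch_groups(const char *user, gid_t primary_gid,
                   group_id_t *groups, int *n_groups)
  {
    /* the cast keeps the second argument matching the detected
       declaration */
    return getgrouplist(user, (group_id_t) primary_gid, groups, n_groups);
  }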
The problem:
When an incremental backup is taken, delta files are created for
InnoDB tables which are marked as new tables during InnoDB DDL
tracking. When there is an attempt to open such a tablespace during
prepare in xb_delta_open_matching_space(), it is "created", i.e.
xb_space_create_file() is invoked instead of opening it, even if a
tablespace with the same name exists in the base backup directory.
xb_space_create_file() writes the page 0 header of the tablespace.
This header does not contain crypt data, as mariabackup does not have
any information about crypt data in delta file metadata for
tablespaces.
After the delta file is applied, the recovery process is started. As
the sequence of recovery for different pages is not defined, it can
happen that the crypt data redo log event is executed after some other
page has already been read for recovery. When a page is read for
recovery, it is decrypted using the crypt data stored in the tablespace
header on page 0; if there is no crypt data, the page is not decrypted
and does not pass the corruption test.
This causes an error during incremental backup --prepare for encrypted
tablespaces.
The error is not stable, because the crypt data redo log event updates
the crypt data on page 0, and recovery for different pages can be
executed in undefined order.
The fix:
When a delta file is created, the corresponding write filter copies
only the pages whose LSN is greater than some incremental LSN. When a
new file is created during incremental backup, the LSN of all its pages
must be greater than the incremental LSN, so there is no need to create
a delta for such a table; we can just copy it completely.
The fix is to copy the whole file that was tracked during incremental
backup by the InnoDB DDL tracker to the base directory during
--prepare, instead of applying a delta.
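A hedged sketch of that decision (names illustrative; the real logic
lives in mariabackup's DDL tracking code):

  enum class CopyMode { FULL_COPY, DELTA };

  CopyMode choose_copy_mode(bool created_during_backup)
  {
    /* a file created after the incremental LSN has every page newer
       than that LSN, so a delta would contain the whole file anyway */
    return created_during_backup ? CopyMode::FULL_COPY : CopyMode::DELTA;
  }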
There is also a DBUG_EXECUTE_IF() in the InnoDB code to avoid writing
the redo log record for the crypt data update on page 0, to make the
test case stable.
Note:
The issue is not reproducible in 10.5, as optimized DDL is deprecated
in 10.5. But the fix is still useful, because it decreases the amount
of data copied during backup, as a delta file contains some extra info.
The test case should be removed for 10.5, as it will always pass.
- The original patch was contributed by Jani Tolonen <jani.k.tolonen@gmail.com>
https://github.com/an3l/server/commits/bb-10.3-anel-MDEV-21786-dump-sequence
which distinguishes the data structure (a linked list) of sequences
from tables.
- Added standard SQL output to prevent future changes
to sequences, and disabled locks for sequences.
- Added test case for `MDEV-20070: mysqldump won't work correct on
sequences` where a table column depends on a sequence value.
- Restore the sequence last value in the following way:
- Find `next_not_cached_value` and use it in `setval()`
- We only need this for logical restore, so don't execute `setval()`
- `setval()` should also be shown in case of the `--no-data` option.
Reviewed-by: daniel@mariadb.org
When the first argument to JSON_MERGE_PATCH was NULL and the second
was invalid JSON, the error code was garbage, so it should be set to 0
initially.
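An illustrative sketch of the fix pattern (not the actual
Item_func_json_merge_patch code): give the error code a defined value
before any early NULL return can surface it.

  int merge_patch(const char *doc1, const char *doc2, int *error)
  {
    *error= 0;             /* the fix: initialize before parsing */
    if (doc1 == nullptr)
      return 1;            /* NULL first argument: *error is still 0 */
    /* ... parse doc2 and set *error if it is invalid JSON ... */
    return 0;
  }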