`LOCK TABLES view_name` should require
* the invoker to have SELECT and LOCK TABLES privileges on the view
* either the invoker or the definer (only with SQL SECURITY DEFINER) to
  have SELECT and LOCK TABLES privileges on the underlying tables/views.
According to https://stackoverflow.com/questions/22827510/how-to-avoid-bad-fd-set-buffer-overflow-crash
it seems that using select() instead of poll() can cause additional memory
allocations. As we are in a crashed state, we must avoid allocating any
memory if possible. Thus, switch the select() call to poll().
Also move some larger data structures to global space. The code is not
run in a multithreaded context, so it is best not to use up stack space
when it is not needed.
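A minimal sketch of the substitution, assuming a single watched
descriptor (illustrative only, not the actual crash-handler code):

```
#include <poll.h>

// poll() addresses the descriptor directly, so there is no fd_set
// bitmap that could overflow FD_SETSIZE and no hidden allocation;
// everything lives on the stack (or in globals, per this commit).
static int wait_readable(int fd, int timeout_ms)
{
  struct pollfd pfd= { fd, POLLIN, 0 };
  return poll(&pfd, 1, timeout_ms);
}
```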
Problem:
1. The server terminates abnormally when phrase search does not
filter out doc_ids correctly. This problem has been fixed in bug
2. Wrong query result: this is a regression from the fix for bug
#22709692, which optimizes full-text search queries with a LIMIT
clause: when the FTS expression involves only union operations, we
fetch only the number of doc_ids specified by the LIMIT clause.
Full-text phrase search is not a union operation, yet we were
treating phrase search with a plugin parser as one.
In phrase search with a LIMIT clause, we fetch a limited number of
doc_ids for each token, and if any selected doc_id does not contain
all the tokens in the correct order, that row_id is not included in
the result set.
Therefore phrase search returns fewer rows than actually qualify
in the table.
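A hedged, self-contained illustration of the effect (plain C++ with
invented doc_id lists; not the InnoDB FTS code):

```
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <iterator>
#include <vector>

int main()
{
  // doc_ids containing token "a" and token "b"; the phrase "a b"
  // actually occurs in docs 3 and 4.
  std::vector<int> tok_a= {1, 2, 3, 4};
  std::vector<int> tok_b= {3, 4};
  const std::size_t limit= 2;                  // LIMIT 2 in the query

  // Union-style fetch: keep only `limit` doc_ids per token.
  tok_a.resize(std::min(tok_a.size(), limit)); // {1, 2}
  tok_b.resize(std::min(tok_b.size(), limit)); // {3, 4}

  // The phrase check then intersects the truncated lists...
  std::vector<int> hits;
  std::set_intersection(tok_a.begin(), tok_a.end(),
                        tok_b.begin(), tok_b.end(),
                        std::back_inserter(hits));

  // ...and finds nothing: 0 rows instead of the 2 qualifying rows.
  std::printf("%zu phrase hits\n", hits.size());
  return 0;
}
```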
Fix:
Added a condition so that phrase search with a plugin parser is not
treated as a union operation.
RB: 24925
Reviewed by : Annamalai Gurusami <annamalai.gurusami@oracle.com>
This is a cherry-pick of
mysql/mysql-server@5549920b7a
without a test case, because the test case depends on an n-gram
tokenizer that will be missing from MariaDB until MDEV-10267 is added.
Problem:
In full-text phrase search, we filter out rows that do not contain
all the tokens in the phrase.
If we do not filter out the doc_ids that do not appear in all of the
tokens' doc_id lists, we hit an assertion.
Fix:
If any token's last doc_id is equal to the i-th doc_id of the first
token's doc_id list, filter out the rest of the higher doc_ids.
RB: 24909
Reviewed by : Annamalai Gurusami <annamalai.gurusami@oracle.com>
This is a cherry-pick of
mysql/mysql-server@5aa075277d
but without a test case, because the test case depends on an n-gram
tokenizer that will be missing from MariaDB until MDEV-10267 is added.
This issue is caused by MDEV-22456 ad6171b91c. The fix involves
backporting the 10.4 patch MDEV-22778 5f2628d1ee and a few parts of
MDEV-17441 (e9a5f288f2).
dict_table_t::stats_latch_created: Removed
dict_table_t::stats_latch: Make it a value member and, for simplicity,
always lock it, even for statistics-clone tables.
zip_pad_info_t::mutex_created: Removed
zip_pad_info_t::mutex: Make it a value member instead of a pointer.
os0once.h: Removed
dict_table_remove_from_cache_low(): Ensure that fts_free() is always
called, even if dict_mem_table_free() is deferred until
btr_search_lazy_free().
InnoDB would always create zip_pad_info_t::mutex and
dict_table_t::autoinc_mutex, even for tables that are neither in
ROW_FORMAT=COMPRESSED nor contain any AUTO_INCREMENT column.
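A hedged before/after sketch of the pattern (std::mutex stands in for
the InnoDB mutex types; not the actual declarations):

```
#include <cstdint>
#include <mutex>

struct zip_pad_info_before   // prior state: lazy creation via os0once.h
{
  std::mutex *mutex;         // allocated on first use
  int32_t     mutex_created; // os_once state flag tracking the creation
};

struct zip_pad_info_after    // this commit: plain value member
{
  std::mutex  mutex;         // constructed together with the object
};
```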
MariaDB 10.2.2 inherited from MySQL 5.7 a perceived optimization
of ALTER TABLE, which skips the writing of redo log records.
In MDEV-16809 we introduced a parameter that allows the redo log to
be written, so that Mariabackup would not be impacted, but we kept
the MySQL 5.7 behaviour enabled by default (innodb_log_optimize_ddl=ON).
As noted in MDEV-19747 (Deprecate and ignore innodb_log_optimize_ddl,
implemented in MariaDB 10.5.1), omitting the redo log writes can
actually reduce performance, because we will have to wait for the data
pages to be written out. When the redo log file is configured to be
large enough, it actually can be much faster to write the redo log and
avoid the extra page flushing.
When the redo log is omitted (innodb_log_optimize_ddl=ON), Mariabackup
may also have to perform a lot of extra work, re-copying the entire
data file if it is possible that any log was omitted during the backup.
Starting with MariaDB 10.5.1, the parameter innodb_log_optimize_ddl
is deprecated and ignored. We hereby deprecate (but will not ignore)
the parameter in earlier versions as well.
The maximum InnoDB key length is 3500, which is hardcoded in
ha_innobase::max_supported_key_length(). The maximum number of InnoDB
indexes is configured with the MAX_INDEXES macro (see also the MAX_KEY
definition).
The same is currently implemented for the BLACKHOLE storage engine.
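A hedged sketch of how a handler advertises these limits ('handler' is
stubbed here; in the server it comes from sql/handler.h, and MAX_KEY
from the server headers):

```
typedef unsigned int uint;
static const uint MAX_KEY= 64;  // illustrative; the real value is
                                // configured via the MAX_INDEXES macro

struct handler                  // stub of the storage engine API
{
  virtual uint max_supported_keys() const = 0;
  virtual uint max_supported_key_length() const = 0;
  virtual ~handler() {}
};

struct ha_blackhole_like : handler  // hypothetical engine
{
  uint max_supported_keys() const override { return MAX_KEY; }
  uint max_supported_key_length() const override { return 3500; }
};
```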
Cherry picked from percona-server 0d90d81c3c507a6b1476246a405504f6e4ef9d4d
Original lp bug 1733049
Reviewed-by: daniel@mariadb.org
There are 2 issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption needs a larger buffer (with space for
the encrypted data, the decrypted data, an IO_CACHE_CRYPT struct
describing the encryption parameters, etc.).
Issue #2: IO_CACHE::seek_not_done
When IO_CACHE objects are cloned, they still share the file descriptor.
This means that an operation on one IO_CACHE may change the file read
position, which will confuse the other IO_CACHEs using the same
descriptor.
The fix for these issues:
Allocate the buffer to also include the extra size needed for encryption.
Perform the seek again after one IO_CACHE has read from the file.
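A hedged, simplified illustration of Issue #2 and the re-seek remedy
(plain POSIX I/O, not the mysys implementation):

```
#include <unistd.h>

struct reader
{
  int   fd;             // shared with cloned readers
  off_t pos;            // this reader's own logical position
  bool  seek_not_done;  // set when a sibling may have moved the offset
};

static ssize_t reader_read(reader *r, void *buf, size_t count)
{
  if (r->seek_not_done)
    lseek(r->fd, r->pos, SEEK_SET);   // restore our position first
  ssize_t got= read(r->fd, buf, count);
  if (got > 0)
    r->pos+= got;
  r->seek_not_done= true;             // pessimistic: a sibling may read next
  return got;
}
```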
The characters parsed are always ASCII characters, hence one byte. This
means that the code did not have "incorrect" logic, because the boolean
condition, if true, would also evaluate to the value 1.
The condition, however, is semantically wrong, as it equates a length
with the outcome of a comparison. Change the parentheses so that the
code also reads according to the intent.
fix printing precedence for BETWEEN, LIKE/ESCAPE, REGEXP, IN
don't use precedence for printing CASE/WHEN/THEN/ELSE/END
fix parsing precedence of BETWEEN, LIKE/ESCAPE, REGEXP, IN
support predicate arguments for IN, BETWEEN, SOUNDS LIKE, LIKE/ESCAPE,
REGEXP
use %nonassoc for unary operators
fix parsing of IS TRUE/FALSE/UNKNOWN/NULL
remove parser_precedence test as superseded by the precedence test
Some GSS-API functions like gss_import_name(), gss_release_buffer()
used in plugin/auth_gssapi and libmariadb/plugins/auth are marked
as deprecated on macOS starting from version 10.14. This results in
extra warnings during the server build.
To eliminate these warnings, the flag '-Wno-deprecated-declarations'
has been added to the compiler invocation for the source files that
call the deprecated GSS-API functions.
This patch moves the definitions of the macros
HAVE_PAM_SYSLOG, HAVE_PAM_EXT_H, HAVE_PAM_APPL_H, HAVE_STRNDUP
from the command line (in the form -Dmacro) to the auto-generated
header file config_auth_pam.h.
Compiler warnings like the one listed below are generated during a server build on macOS:
[88%] Building C object plugin/auth_pam/CMakeFiles/pam_user_map.dir/mapper/pam_user_map.c.o
mariadb/server-10.2/plugin/auth_pam/mapper/pam_user_map.c:87:41: error: passing
'gid_t *' (aka 'unsigned int *') to parameter of type 'int *' converts between pointers to integer types
with different sign [-Werror,-Wpointer-sign]
if (getgrouplist(user, user_group_id, loc_groups, &ng) < 0)
^~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/unistd.h:650:43: note:
passing argument to parameter here
int getgrouplist(const char *, int, int *, int *);
^
If the MariaDB server is built with -DCMAKE_BUILD_TYPE=Debug, this
results in a build error.
The reason for the compiler warning is that the declaration of the POSIX
C API function getgrouplist() on macOS differs from the declaration
proposed by POSIX.
To suppress this compiler warning, the CMake configure step was adapted
to detect which kind of getgrouplist() function is declared on the build
platform, and to set the macro HAVE_POSIX_GETGROUPLIST in case the build
platform supports the POSIX-compatible interface for the getgrouplist()
function. Depending on whether this macro is set, compatible argument
types are used to pass parameter values to the function.
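A hedged sketch of the resulting call site (HAVE_POSIX_GETGROUPLIST is
the macro introduced by this patch; the surrounding code is simplified):

```
#include <grp.h>
#include <sys/types.h>
#include <unistd.h>

static int fetch_group_list(const char *user, gid_t group_id,
                            gid_t *groups, int *ng)
{
#ifdef HAVE_POSIX_GETGROUPLIST
  /* POSIX-compatible declaration: gid_t-based arguments. */
  return getgrouplist(user, group_id, groups, ng);
#else
  /* macOS declares int-based arguments, hence the casts. */
  return getgrouplist(user, (int) group_id, (int *) groups, ng);
#endif
}
```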
The problem:
When an incremental backup is taken, delta files are created for InnoDB
tables which are marked as new tables during InnoDB DDL tracking. When
such a tablespace is opened during --prepare in
xb_delta_open_matching_space(), it is "created", i.e.
xb_space_create_file() is invoked instead of opening the file, even if
a tablespace with the same name exists in the base backup directory.
xb_space_create_file() writes the page 0 header of the tablespace.
This header does not contain crypt data, as mariabackup does not have
any information about crypt data in the delta file metadata for
tablespaces.
After the delta file is applied, the recovery process starts. As the
order in which pages are recovered is not defined, it can happen that
the crypt data redo log record is applied only after some other page
has already been read for recovery. When a page is read for recovery,
it is decrypted using the crypt data stored in the tablespace header on
page 0; if there is no crypt data, the page is not decrypted and does
not pass the corruption test.
This causes an error during incremental backup --prepare for encrypted
tablespaces.
The error is not stable, because the crypt data redo log record updates
the crypt data on page 0, and the recovery of different pages can be
executed in undefined order.
The fix:
When a delta file is created, the corresponding write filter copies only
the pages whose LSN is greater than some incremental LSN. When a new
file is created during an incremental backup, the LSN of all its pages
must be greater than the incremental LSN, so there is no need to create
a delta for such a table; we can just copy it completely.
The fix is to copy the whole file that was tracked by the InnoDB DDL
tracker during the incremental backup, and to copy it to the base
directory during --prepare instead of applying a delta.
There is also a DBUG_EXECUTE_IF() in the InnoDB code that avoids writing
the redo log record for the crypt data update on page 0, to make the
test case stable.
Note:
The issue is not reproducible in 10.5, as optimized DDL is deprecated
there. The fix is still useful, because it allows decreasing the amount
of data copied during backup, as a delta file contains some extra
information. The test case should be removed in 10.5, as it will always
pass.
When including a generated file, always use <...>.
We need the compiler to find it in the BINDIR, not in the SRCDIR.
But when including with "...", the SRCDIR is always searched first.
The bug can only happen in out-of-source builds, and only if there was
an in-source build before.
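A hedged example of the rule (my_config.h is a generated header; the
local header here is hypothetical):

```
// The generated header lives in BINDIR.  With angle brackets the
// preprocessor searches only the -I include path, so the BINDIR copy
// wins; with "..." the directory of the including file (SRCDIR) is
// searched first, and a stale in-source copy would be picked up.
#include <my_config.h>   // generated file: always <...>
#include "my_local.h"    // ordinary source-tree header: "..." is fine
```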
When the first argument to JSON_MERGE_PATCH was NULL and the second was
an invalid JSON line, the error code was garbage, so it should be set
to 0 initially.
problem:
========
mysqltest: In included file "./include/assert.inc":
included from mysql-test/suite/sys_vars/t/rpl_init_slave_func.test at line 69:
Assertion text: '@@global.max_connections = @start_max_connections'
Assertion result: '0'
mysqltest: In included file "./include/assert.inc":
included from mysql-test/suite/sys_vars/t/rpl_init_slave_func.test at line 86:
Assertion text: '@@global.max_connections = @start_max_connections + 1'
Assertion result: '0'
Analysis:
=========
A slave SQL thread sets its Running state to Yes very early in its
initialisation, before the majority of initialisation actions, including
executing the init_slave command, are done. Thus the testcase has a race
condition where the initial replication setup might finish executing
later than the testcase's SET GLOBAL init_slave, making the testcase see
the command's effect where it checks for its absence.
Fix:
===
Include 'sync_slave_sql_with_master.inc' at the beginning of the test to
ensure that the slave applier has completed the execution of the
'init_slave' command and proceeded to event application. Replace the
apparently needless RESET MASTER / RESET SLAVE statements.
Patch is based on:
b91e2e6f90
Author: laurynas-biveinis
Compiler warnings like the one listed below are generated during a server build on macOS:
In file included from server-10.2-MDEV-23564/mysys_ssl/openssl.c:33:
In file included from /usr/local/include/openssl/evp.h:16:
In file included from /usr/local/include/openssl/bio.h:20:
/usr/local/include/openssl/crypto.h:206:10: warning: 'CRYPTO_cleanup_all_ex_data' macro redefined [-Wmacro-redefined]
^
/mariadb/server-10.2-MDEV-23564/include/ssl_compat.h:46:9: note: previous definition is here
^
If the MariaDB server is built with -DCMAKE_BUILD_TYPE=Debug, this
results in a build error.
The reason for the compiler warnings is that the header file
<ssl_compat.h> is included before the OpenSSL system header files. The
file ssl_compat.h contains some macros with the same names as SSL API
functions declared in the OpenSSL system header files, which results in
duplicate symbols and produces the compiler warnings.
To fix the issue, the header file ssl_compat.h should be included after
the line where the OpenSSL system headers are included.
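A hedged sketch of the corrected include order (cf. mysys_ssl/openssl.c):

```
// Let the system OpenSSL headers declare their symbols first, and only
// then apply the compatibility macros, so that crypto.h does not
// redefine the macros from ssl_compat.h.
#include <openssl/evp.h>   // OpenSSL system headers first
#include <ssl_compat.h>    // MariaDB compatibility macros afterwards
```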
It was only from CMake 3.14.0 onwards that CMAKE_REQUIRED_LINK_OPTIONS
was used in CHECK_CXX_SOURCE_COMPILES. Without this, it could be the
case (as it was on OSX) that a flag was never checked in
CHECK_CXX_SOURCE_COMPILES, so the CHECK successfully passed but the
build failed at link time.
As such, we use CMAKE_REQUIRED_LIBRARIES to include the flags to check,
as it is compatible enough with the CMake versions for non-Windows
compilers/linkers.
Tested on x86_64 with:
* 3.11.4
* 3.17.4
Corrects: 7473e1841c
In the future:
* cmake >=3.14.0 can use CMAKE_REQUIRED_LINK_OPTIONS
* cmake >=3.18.0 can use CHECK_LINKER_FLAG (with policy CMP0057 NEW)
(e.g: commit c7ac2deff9a2c965887dcc67cbf2a3a7c3e0123d)
CMAKE_REQUIRED_LIBRARIES suggested by serg@mariadb.com
Reviewed-by: anel@mariadb.org
fts_query_t::nested_sub_exp: Keep track of nested
fts_ast_visit_sub_exp() calls.
fts_ast_visit_sub_exp(): Return DB_OUT_OF_MEMORY if the
maximum recursion depth is exceeded.
This is motivated by a change in MySQL 5.6.50:
mysql/mysql-server@e2a46b4834
Bug #29929684 USING MANY NESTED ARGUMENTS WITH BOOLEAN FTS CAN LEAD
TO TERMINATE SERVER
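A hedged, generic illustration of the guard (simplified AST; in InnoDB
the depth is tracked in fts_query_t::nested_sub_exp and the error is
DB_OUT_OF_MEMORY):

```
#include <cstddef>

enum err_t { ERR_OK, ERR_OUT_OF_MEMORY };

struct ast_node
{
  ast_node *sub_exp;   // simplified: at most one nested sub-expression
};

// Refuse to recurse past a fixed maximum depth.
static err_t visit_sub_exp(const ast_node *node, std::size_t depth)
{
  if (depth > 31)                     // illustrative limit
    return ERR_OUT_OF_MEMORY;
  if (node->sub_exp)
    return visit_sub_exp(node->sub_exp, depth + 1);
  return ERR_OK;
}
```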
In row_undo_ins_remove_clust_rec() and similar places, the assertion
!node->trx->dict_operation_lock_mode could fail, because an online ALTER
is not supposed to be running at the same time that DDL operations on
the table are being rolled back.
This race condition would be fixed by always acquiring an InnoDB
table lock in ha_innobase::prepare_inplace_alter_table() or
prepare_inplace_alter_table_dict(), or by ensuring that recovered
transactions are protected by MDL that would block concurrent DDL
until the rollback has been completed.
This reverts commit 1509363970
and commit 22c4a7512f.
The function innodb_show_mutex_status() is the only ultimate caller of
LatchCounter::iterate() via MutexMonitor::iterate(). Because the call
is not protected by LatchCounter::m_mutex, if any mutex_create() or
mutex_free() is invoked concurrently during the execution, bad things
such as a crash could happen.
The most likely way for this to happen is buffer pool resizing,
which could cause buf_block_t::mutex (which existed before MDEV-15053)
to be created or freed. We could also register InnoDB mutexes in
TrxFactory::init() if trx_pools needs to grow.
The view INFORMATION_SCHEMA.INNODB_MUTEXES is not affected, because it
only displays information about rw-locks, not mutexes.
This commit intentionally also touches MutexMonitor::iterate()
and the only code that interfaces with LatchCounter::iterate(),
to make it clearer to future readers that the scattered,
template-obfuscated code belongs together.
This is based on
mysql/mysql-server@273a93396f
- Based on patch: d6a983351c5a454bd0cb113852f
- Update combination example for 10.2 (commit 2a3fe45dd2 added change
for 10.3+)
```
==============================================================================
TEST RESULT TIME (ms) or COMMENT
--------------------------------------------------------------------------
worker[1] Using MTR_BUILD_THREAD 300, with reserved ports 16000..16019
rpl.rpl_invoked_features 'innodb,mix' [ pass ] 1677
rpl.rpl_invoked_features 'innodb,row' [ pass ] 3516
rpl.rpl_invoked_features 'innodb,stmt' [ pass ] 1609
--------------------------------------------------------------------------
```
- `gdb` option will be added during the merge
To fix this, it is necessary to add an option to exclude the
database with the name "lost+found" from processing (the database
name will be checked by the check_if_skip_database_by_path() or
by the check_if_skip_database() function, and as a result
"lost+found" will be skipped).
In addition, it is necessary to slightly modify the verification
logic in the check_if_skip_database() function.
Also added a new test, galera_sst_mariabackup_lost_found.test.
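A hedged sketch of the name check (illustrative only; the real logic
lives in check_if_skip_database()/check_if_skip_database_by_path() and
also honors the new exclusion option and the pre-existing skip rules):

```
#include <cstring>

// "lost+found" is a filesystem artifact (e.g. on ext4), not a database,
// so SST/backup processing should skip it.
static bool is_lost_found_db(const char *db_name)
{
  return std::strcmp(db_name, "lost+found") == 0;
}
```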
is_bulk_op())' fails on UPDATE on a partitioned table with subquery
(MySQL:71630)
Analysis: The error is not checked, so the correct error state is not
returned.
Fix: Check for the error and return the error state.