- InnoDB aborts when a column is being dropped from a table. This is
caused by 5f09b53bdb (MDEV-31086).
While iterating over the altered table's fields, we fail to take
the dropped columns into account.
This patch adds a second execution of SELECT queries for "--ps-protocol".
It also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in test cases.
The first UPDATE under START TRANSACTION does nothing (nstate= nstate),
but still generates history. Since the update vector is empty we get into
the (!uvect->n_fields) branch, which only adds a history row but does not
do the update. After that we get the current row with a wrong (old)
row_start value, and because of that the second UPDATE tries to insert a
history row again: it sees trx->id != row_start, which is the guard
against inserting multiple trx_id-based history rows under the same
transaction (we have the same trx_id, so we get a duplicate error, which
is what this bug demonstrates). That attempt fails anyway because the PK
is based on row_end, which is constant within the same transaction, so
the PK did not change.
The fix moves vers_make_update() to an earlier stage of
calc_row_difference(). It therefore prepares the update vector before
the (!uvect->n_fields) check and never gets into that branch, hence there
is no need to handle versioning inside that condition anymore.
Now trx->id and row_start are equal after the first UPDATE and we do not
try to insert a second history row.
== Cleanups and improvements ==
ha_innobase::update_row():
vers_set_fields and vers_ins_row are cleaned up into a direct condition
check. The SQLCOM_ALTER_TABLE check is no longer used as it is dead code;
an assertion is done instead.
upd_node->is_delete is set in calc_row_difference() just to keep the
versioning code in one place as much as possible. vers_make_delete()
is still located in row_update_for_mysql() as it is required for
ha_innobase::delete_row() as well.
row_ins_duplicate_error_in_clust():
Restrict DB_FOREIGN_DUPLICATE_KEY to narrower conditions.
VERSIONED_DELETE is used specifically to help the lower stack
understand what caused the current insert. Related to MDEV-29813.
1. Exclude merging history rows into the fts index.
The check !history_fts && (index->type & DICT_FTS) was just an incorrect
attempt to avoid history rows in the fts index.
2. Don't check for duplicates for history rows.
Add the end-of-file marker 0x1A to the DBF file and handle it
correctly, to preserve interoperability with LibreOffice and others
that follow the DBF spec.
The file open mode "a+" was problematic because Linux and OSX (the
previous main development platform) handle it inconsistently (see man
fopen).
The main problem, per the bug report, was the inability to fseek back to
the beginning to update the records in the header.
As such, the "a+" mode is removed; "w+b" is used when inserting into a
new file and "r+b" is used when appending to an existing file.
In DBFFAM::CloseTableFile, move PlugCloseFile down to close the file in
all modes.
The year, unlike what the comments say, is always relative to 1900. Use
the YYYY-MM-DD form as an unambiguous format during tracing.
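A small C sketch of the combined idea, assuming the standard DBF header
layout (bytes 1-3 hold the last-update date as YY MM DD with YY counted
from 1900); the helper name is hypothetical and the actual CONNECT code
differs:

#include <stdio.h>
#include <time.h>

/* Hypothetical helper: create a new DBF with "w+b" or reopen an existing
   one with "r+b", then fseek back to the header and update the
   date-of-last-update bytes.  The DBF header stores the year since 1900. */
static int update_dbf_header_date(const char *fname, int append)
{
  FILE *fp= fopen(fname, append ? "r+b" : "w+b");
  if (!fp)
    return 1;
  time_t now= time(NULL);
  struct tm *tm= localtime(&now);
  if (fseek(fp, 1L, SEEK_SET) == 0)        /* header bytes 1-3: YY MM DD */
  {
    unsigned char ymd[3];
    ymd[0]= (unsigned char) tm->tm_year;   /* tm_year is also since 1900 */
    ymd[1]= (unsigned char) (tm->tm_mon + 1);
    ymd[2]= (unsigned char) tm->tm_mday;
    fwrite(ymd, 1, 3, fp);
  }
  /* unambiguous YYYY-MM-DD form for tracing */
  fprintf(stderr, "DBF %s updated %04d-%02d-%02d\n", fname,
          tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday);
  return fclose(fp);
}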
Thanks to Mr. Zoltan Duna for the descriptive bug report.
The existing (incorrect) overriding mechanism is:
a non-minus-one var value overrides the table param, which overrides the
default value.
Before MDEV-27169, an unspecified var value is -1. So if the user sets
both the var (to a value other than -1) and the table param, the var
value will prevail, which is incorrect.
After MDEV-27169, an unspecified var value is the default value. So if
the user does not set the var but sets the table param, the default value
will prevail, which is even more incorrect.
This patch fixes it so that the table param, if specified, always
overrides the var value, and the latter, if not specified or set to -1,
falls back to the default value.
We achieve this by replacing all such overriding in spd_param.cc with
macros that override in the correct way, and removing all the
"overriding -1" lines involving table params in
spider_set_connect_info_default(), except for those table params not
defined as a sysvar/thdvar in spd_param.cc.
We also introduce macros for non-overriding sysvars and thdvars, so
that the code is cleaner and less error-prone.
In server versions where MDEV-27169 has not been applied, we also
backport the patch, that is, replacing -1 default values with real
default values.
In server versions where MDEV-28006 has not been applied, we do the
same for udf params.
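A minimal sketch of the intended precedence, using THDVAR and a
hypothetical parameter name "foo"; the real macros in spd_param.cc
differ in detail:

/* Table param (share value), when specified, always wins; otherwise fall
   back to the sysvar/thdvar, which after MDEV-27169 already carries the
   real default value instead of -1. */
#define SPIDER_PARAM_WITH_OVERRIDE(thd, share_val, name) \
  ((share_val) == -1 ? THDVAR((thd), name) : (share_val))

/* usage sketch: foo= SPIDER_PARAM_WITH_OVERRIDE(thd, share->foo, foo); */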
PROBLEM:
A deadlock was possible when a transaction tried to "upgrade" an already
held Record Lock to Next Key Lock.
SOLUTION:
This patch is based on observations that:
(1) a Next Key Lock is equivalent to Record Lock combined with Gap Lock
(2) a GAP Lock never has to wait for any other lock
In case we request a Next Key Lock, we check if we already own a Record
Lock of equal or stronger mode, and if so, then we change the requested
lock type to GAP Lock, which we either already have, or can be granted
immediately, as GAP locks don't conflict with any other lock types.
(We don't consider Insert Intention Locks a Gap Lock in the above
statements.)
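A self-contained C sketch of that decision; the type and helper names
are illustrative, not the actual InnoDB lock module code:

enum lock_kind { NEXT_KEY_LOCK, GAP_LOCK, REC_NOT_GAP_LOCK };

static enum lock_kind adjust_requested_lock(enum lock_kind requested,
                                            int holds_stronger_rec_lock)
{
  if (requested == NEXT_KEY_LOCK && holds_stronger_rec_lock)
  {
    /* The record itself is already covered, so ask only for the gap part.
       Gap locks never have to wait, so this can be granted immediately. */
    return GAP_LOCK;
  }
  return requested;
}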
The reason why we don't upgrade a Record Lock to a Next Key Lock is the
following.
Imagine a transaction which does something like this:
for each row {
request lock in LOCK_X|LOCK_REC_NOT_GAP mode
request lock in LOCK_S mode
}
If we upgraded the lock from Record Lock to Next Key Lock, we would lose
the ability to create only two lock_t structs for each page (one for
LOCK_X|LOCK_REC_NOT_GAP mode and one for LOCK_S mode, with their bitmaps
marking all records from the same page). With upgrading, the situation
looks like this:
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 1:
// -> creates new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode and sets bit for
// 1
request lock in LOCK_S mode on row 1:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 1,
// so it upgrades it to X
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 2:
// -> creates a new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode (because we
// don't have any after we've upgraded!) and sets bit for 2
request lock in LOCK_S mode on row 2:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 2,
// so it upgrades it to X
...etc...etc..
Each iteration of the loop creates a new lock_t struct, and in the end we
have a lot (one for each record!) of LOCK_X locks, each with a single bit
set in the bitmap. Soon we run out of space for lock_t structs.
If we create a LOCK_GAP lock instead of upgrading, the above scenario
works like the following:
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 1:
// -> creates new lock_t for LOCK_X|LOCK_REC_NOT_GAP mode and sets bit for
// 1
request lock in LOCK_S mode on row 1:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 1,
// so it creates LOCK_S|LOCK_GAP only and sets bit for 1
request lock in LOCK_X|LOCK_REC_NOT_GAP mode on row 2:
// -> reuses the lock_t for LOCK_X|LOCK_REC_NOT_GAP by setting bit for 2
request lock in LOCK_S mode on row 2:
// -> notices that we already have LOCK_X|LOCK_REC_NOT_GAP on the row 2,
// so it reuses LOCK_S|LOCK_GAP setting bit for 2
In the end we have just two locks per page, one for each mode:
LOCK_X|LOCK_REC_NOT_GAP and LOCK_S|LOCK_GAP.
Another benefit of this solution is that it avoids the not-entirely
const-correct (and otherwise risky-looking) "upgrading".
The fix was ported from
mysql/mysql-server@bfba840dfa and mysql/mysql-server@75cefdb1f7.
Reviewed by: Marko Mäkelä
This fixes mdev_26541.test, and the new clean_up_spider.inc will be
useful for other tests where part of deinit_spider does not apply,
e.g. those testing spider initialisation only.
- When foreign_key_checks is disabled, allowing modification of a
column that is part of a foreign key constraint can lead to later
refusal of TRUNCATE TABLE or OPTIMIZE TABLE. So it makes
sense to block the column modify operation whenever a foreign key
is involved, irrespective of the foreign_key_checks variable.
The correct way to modify the charset of a column when a foreign key is involved:
SET foreign_key_checks=OFF;
ALTER TABLE child DROP FOREIGN KEY fk, MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE parent MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE child ADD CONSTRAINT FOREIGN KEY (m) REFERENCES parent(m);
SET foreign_key_checks=ON;
fk_check_column_changes(): Remove the FOREIGN_KEY_CHECKS check while
validating the column change against foreign key constraints. This
is a partial revert of commit 5f1f2fc0e4
and it changes the behaviour of the copy alter algorithm.
ha_innobase::prepare_inplace_alter_table(): Find the modified
column and check whether it is part of an existing or newly
added foreign key constraint.
spider_db_mbase_util::open_item_func() is a monster function.
It is difficult to maintain, and it is expected that we will need to
modify it whenever a new SQL function or a new func_type is added.
We split the function into two distinct functions: one handles the
case of str != NULL and the other handles the case of str == NULL.
This refactoring was done in a conservative way because we do not
have comprehensive tests on the function.
It also fixes MDEV-29447 and MDEV-31338, where field items that are
arguments of a func item may be used before being created/initialised.
Note this commit is adapted from a patch by Nayuta for MDEV-26285.
This is a backport from 10.5.
The problem seems to be a deadlock between KILL command execution
and BF abort issued by an applier, where:
* KILL has locked victim's LOCK_thd_kill and LOCK_thd_data.
* Applier has innodb side global lock mutex and victim trx mutex.
* KILL is calling innobase_kill_query, and is blocked by innodb
global lock mutex.
* Applier is in wsrep_innobase_kill_one_trx and is blocked by
victim's LOCK_thd_kill.
The fix in this commit removes the TOI replication of the KILL command
and makes KILL execution a less intrusive operation. Aborting the
victim now happens by using awake_no_mutex() and ha_abort_transaction().
If the KILL happens while the transaction is committing, the
KILL operation is postponed until the statement
has completed, in order to avoid KILL interrupting commit
processing.
Notable changes in this commit:
* A wsrep client connection's error state may remain sticky after
the client connection is closed. This error message would then pop
up for the next client session issuing its first SQL statement.
This problem was raised by the test galera.galera_bf_kill.
The fix is to reset the wsrep client error state before a THD is
reused for the next connection.
* Release THD locks in wsrep_abort_transaction when locking
innodb mutexes. This guarantees the same locking order as with applier
BF aborting.
* BF abort from MDL was changed to do BF abort on server/wsrep-lib
side first, and only then do the BF abort on InnoDB side. This
removes the need to call back from InnoDB for BF aborts which originate
from MDL and simplifies the locking.
* Removed wsrep_thd_set_wsrep_aborter() from service_wsrep.h.
The manipulation of the wsrep_aborter can be done solely on the
server side. Moreover, it is now a debug-only variable and
could be excluded from optimized builds.
* Removed LOCK_thd_kill from wsrep_thd_LOCK/UNLOCK to allow more
fine-grained locking for SR BF abort, which may require locking
of the victim's LOCK_thd_kill. Added explicit calls to
wsrep_thd_kill_LOCK/UNLOCK where appropriate.
* Wsrep-lib was updated to a version which allows external
locking for BF abort calls.
Changes to MTR tests:
* Disable galera_bf_abort_group_commit. This test is going to
be removed (MDEV-30855).
* Record galera_gcache_recover_manytrx as result file was incomplete.
Trivial change.
* Make galera_create_table_as_select more deterministic:
Wait until CTAS execution has reached the MDL wait in the multi-master
conflict case. The expected error from a multi-master conflict is
ER_QUERY_INTERRUPTED. This is because CTAS does not yet have an open
wsrep transaction while it is waiting for MDL, so the query gets
interrupted instead of BF aborted. This should be addressed in a
separate task.
* Added a new test, galera_kill_group_commit, to verify correct behavior
when KILL is executed while the transaction is committing.
Co-authored-by: Seppo Jaakola <seppo.jaakola@iki.fi>
Co-authored-by: Jan Lindström <jan.lindstrom@galeracluster.com>
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
rw_trx_hash_t::find() acquires element->mutex, then unpins the pins used
for the lf_hash element search. After that the element can be deallocated
and reused by some other thread.
If we take a look at the rw_trx_hash_t::insert()->lf_hash_insert()->
lf_alloc_new() call chain, we will not find any element->mutex
acquisition, as the mutex is not yet initialized before the element's
allocation. rw_trx_hash_t::insert() can thus reuse a chunk that was
unpinned in rw_trx_hash_t::find().
The scenario is the following:
1. Thread 1 has just executed lf_hash_search() in
rw_trx_hash_t::find(), but has not acquired element->mutex yet.
2. Thread 2 has removed the element from the hash table with a
rw_trx_hash_t::erase() call.
3. Thread 1 acquired element->mutex and unpinned pin 2 with the
lf_hash_search_unpin(pins) call.
4. Some thread purged the memory of the element.
5. Thread 3 reused the memory for the element, filling element->id and
element->trx.
6. Thread 1 crashes with a failed "DBUG_ASSERT(trx_id == trx->id)"
assertion.
Note that trx_t objects are also reused, see the code around trx_pools
for details.
The fix is to invoke "lf_hash_search_unpin(pins);" after element->trx is
stored in a local variable in rw_trx_hash_t::find().
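An illustrative C sketch of the corrected ordering (the element/trx
types, mutex and unpin call below are stand-ins for the real
rw_trx_hash_t / LF_HASH machinery):

#include <pthread.h>

struct trx_sketch { unsigned long id; };
struct element_sketch
{
  pthread_mutex_t mutex;
  unsigned long id;
  struct trx_sketch *trx;
};

static void unpin_sketch(void *pins) { (void) pins; /* lf_hash_search_unpin() */ }

static struct trx_sketch *find_sketch(struct element_sketch *element, void *pins)
{
  pthread_mutex_lock(&element->mutex);    /* element is still pinned here */
  struct trx_sketch *trx= element->trx;   /* copy while the chunk cannot be
                                             freed and reused by another
                                             thread */
  unpin_sketch(pins);                     /* only now drop the pin; doing
                                             this before reading element->trx
                                             was the bug */
  pthread_mutex_unlock(&element->mutex);
  return trx;
}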
Reviewed by: Nikita Malyavin, Marko Mäkelä.
Modern software (including text editors, static analysis software,
and web-based code review interfaces) often requires source code files
to be interpretable via a consistent character encoding, with UTF-8 or
ASCII (a strict subset of UTF-8) as the default. Several of the MariaDB
source files contain bytes that are not valid in either the UTF-8 or
ASCII encodings, but instead represent strings encoded in the
ISO-8859-1/Latin-1 or ISO-8859-2/Latin-2 encodings.
These inconsistent encodings may prevent software from correctly
presenting or processing such files. Converting all source files to
valid UTF-8 will ensure correct handling.
Comments written in Czech were replaced with lightly-corrected
translations from Google Translate. Additionally, comments describing
the proper handling of special characters were changed so that the
comments are now purely UTF-8.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Co-authored-by: Andrew Hutchings <andrew@linuxjedi.co.uk>
These files are currently not being used nor compiled in MariaDB. The
use of large lists of 'case' statements in these source files is also
not a great way to represent translated strings. This git history can
be referred to when a better translation interface can be implemented
in the future.
Therefore, these files can be removed to clean up the MariaDB codebase.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Old-style C functions `strcpy()`, `strcat()` and `sprintf()` are vulnerable to
security issues due to lacking memory boundary checks. Replace these in the
Connect storage engine with safer new and/or custom functions such as
`snprintf()`, `safe_strcpy()` and `safe_strcat()`.
With this change FlawFinder and other static security analyzers report 287
fewer findings.
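An illustration of the pattern only, not a quote of the actual Connect
changes (safe_strcpy()/safe_strcat() are MariaDB helpers; the standard
snprintf() is shown here):

#include <stdio.h>

/* Every write is capped by the destination size and the result is always
   NUL-terminated, unlike sprintf()/strcpy()/strcat(). */
static void format_not_found(char *msg, size_t msg_size, const char *name)
{
  /* before: sprintf(msg, "Column %s not found", name);  -- may overflow msg */
  snprintf(msg, msg_size, "Column %s not found", name);
}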
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
RocksDB (in a submodule) has to include <cstdint> to use uint64_t
but it doesn't. Until the submodule is upgraded, let's replace
problematic types with something that's available
The cause of the crash was that the test was setting
aria_sort_buffer_size to MAX_LONG_LONG, which caused an overflow in
my_malloc() when trying to allocate the buffer + 8 bytes.
Fixed by reducing the maximum size of the sort buffer for Aria and MyISAM.
Other things:
- Added code in maria_repair_parallel() to not allocate a big sort buffer
  for small files.
- Updated the size of the minimum sort buffer in Aria.
The reason for the failures reported in the MDEV is that the tests are
enabling encryption for Aria but not providing any encryption keys.
Fixed by checking if encryption keys exist before creating the table.
Other things:
- maria.encrypt_wrong-key changed as we now get the error on CREATE
  instead of during insert.
- `mariadb-backup --backup` was fixed to fetch the value of the
@@aria_log_dir_path server variable and copy aria_log* files
from @@aria_log_dir_path directory to the backup directory.
Absolute and relative (to --datadir) paths are supported.
Before this change aria_log* files were copied to the backup
only if they were in the default location in @@datadir.
- `mariadb-backup --copy-back` now understands a new my.cnf and command line
parameter --aria-log-dir-path.
`mariadb-backup --copy-back` in the main loop in copy_back()
(when copying back from the backup directory to --datadir)
was fixed to ignore all aria_log* files.
A new function copy_back_aria_logs() was added.
It consists of a separate loop copying back aria_log* files from
the backup directory to the directory specified in --aria-log-dir-path.
Absolute and relative (to --datadir) paths are supported.
If --aria-log-dir-path is not specified,
aria_log* files are copied to --datadir by default.
- The function is_absolute_path() was fixed to understand MTR style
paths on Windows with forward slashes, e.g.
--aria-log-dir-path=D:/Buildbot/amd64-windows/build/mysql-test/var/...
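A simplified C sketch of the adjusted check (the real mysys helper
differs); the point is to accept "D:/..." as well as "D:\..." on Windows:

#include <ctype.h>

static int is_absolute_path_sketch(const char *path)
{
#ifdef _WIN32
  if (isalpha((unsigned char) path[0]) && path[1] == ':' &&
      (path[2] == '\\' || path[2] == '/'))
    return 1;                      /* drive letter with either slash style */
  return path[0] == '\\' || path[0] == '/';
#else
  return path[0] == '/';
#endif
}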
fp->field_length was unsigned, and therefore the negative
check around it could never be true.
Backporting cc182aca93 fixes it; however, for consistent
use of types, pcf->Length needs to be unsigned
too.
At one point pcf->Precision is assigned from pcf->Length, so
that is made unsigned as well.
GetTypeSize()'s result is assigned to length, and it takes a length
argument. A -1 default value seemed dangerous to cast, so 0 is used
instead, which should at least assert if ever hit.
Similar to 567b6812, continue to replace uses of strcat() and
strcpy() with the safer options strncat() and strncpy().
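A hedged C sketch of the replacement pattern (not the exact code touched
here); note that strncpy() does not NUL-terminate on truncation, so the
terminator must be written explicitly:

#include <string.h>

static void copy_name(char *dst, size_t dst_size, const char *src)
{
  /* strcpy(dst, src) could overrun dst; cap the copy instead */
  strncpy(dst, src, dst_size - 1);
  dst[dst_size - 1]= '\0';   /* strncpy() leaves no terminator when src is
                                longer than dst_size - 1 */
}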
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services
Fixed-size columns can be stored externally.
row_merge_buf_add(): has a strict assert that a fixed-length mismatch
shouldn't happen while rebuilding a redundant row format table.
btr_index_rec_validate(): a fixed-size column can be stored externally,
so the sum of the inline stored length and the externally stored length
of the column should be equal to the total column length.
This issue happens due to a race condition between DDL
and the fts optimize thread. DDL adds the new index to the fts cache.
At the same time, the fts optimize thread clears the cache
and reinitializes it. Take the cache init lock before reinitializing
the cache. fts_sync_commit() should take the dict_sys mutex
to avoid the deadlock with CREATE INDEX.
The glibc headers declare fallocate only if _GNU_SOURCE is defined.
Without this change, the probe fails with C compilers which do not
support implicit function declarations even if the system does in
fact support the fallocate function.
Upstream rocksdb does not need this because the probe is run with the
C++ compiler, and current g++ versions define _GNU_SOURCE
automatically.
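A minimal sketch of such a compile probe (assuming a
CHECK_C_SOURCE_COMPILES-style check; the actual probe in the RocksDB
build files may differ):

#define _GNU_SOURCE          /* required for glibc to declare fallocate() */
#include <fcntl.h>

int main(void)
{
  /* compiles only if the declaration is visible; compilers that reject
     implicit function declarations would otherwise fail this probe even
     on systems that do provide fallocate() */
  return fallocate(-1, 0, 0, 0) != 0;
}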
- Adding a new argument "flag" to MY_COLLATION_HANDLER::strnncollsp_nchars()
and a flag MY_STRNNCOLLSP_NCHARS_EMULATE_TRIMMED_TRAILING_SPACES.
The flag defines if strnncollsp_nchars() should emulate trailing spaces
which were possibly trimmed earlier (e.g. in InnoDB CHAR compression).
This is important for NOPAD collations.
For example, with this input:
- str1= 'a ' (Latin letter a followed by one space)
- str2= 'a ' (Latin letter a followed by two spaces)
- nchars= 3
if the flag is given, strnncollsp_nchars() will virtually restore
one trailing space to str1 up to nchars (3) characters and compare two
strings as equal:
- str1= 'a ' (one extra trailing space emulated)
- str2= 'a ' (as is)
If the flag is not given, strnncollsp_nchars() does not add trailing
virtual spaces, so in case of a NOPAD collation, str1 will be compared
as less than str2 because it is shorter.
- Field_string::cmp_prefix() now passes the new flag.
Field_varstring::cmp_prefix() and Field_blob::cmp_prefix() do
not pass the new flag.
- The branch in cmp_whole_field() in storage/innobase/rem/rem0cmp.cc
(which handles the CHAR data type) now also passes the new flag.
- Fixing UCA collations to respect the new flag.
Other collations are possibly also affected; however,
I had no success in making an SQL script demonstrating the problem.
Other collations will be extended to respect this flag in a separate
patch later.
- Changing the meaning of the last parameter of Field::cmp_prefix()
from "number of bytes" (internal length)
to "number of characters" (user visible length).
The code calling cmp_prefix() from handler.cc was wrong.
After this change, the call in handler.cc became correct.
The code calling cmp_prefix() from key_rec_cmp() in key.cc
was adjusted according to this change.
- Old strnncollsp_nchars() related tests in unittest/strings/strings-t.c
now pass the new flag.
A few new tests were also added, without the flag.
This is allowed:
STRING_WITH_LEN("string literal")
This is not:
char *str = "pointer to string";
... STRING_WITH_LEN(str) ..
In C++ this is also allowed:
const char str[] = "string literal";
... STRING_WITH_LEN(str) ...
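The macro derives the length from its argument at compile time, which is
why a plain char* pointer cannot be used. A sketch of the classic
definition (the exact definition in the MariaDB headers may differ, and
the C++ build may add stricter checks):

/* sizeof("literal") includes the trailing '\0', hence the -1.  For a
   char* pointer, sizeof yields the size of the pointer, not the string
   length, which is why the macro must not be used with pointers. */
#define STRING_WITH_LEN(X) (X), ((size_t) (sizeof(X) - 1))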
Let us make innodb_buffer_pool_filename a read-only variable
so that a malicious user cannot cause an important file to be
deleted on InnoDB shutdown. An attempt to delete a directory
will fail because it is not a regular file, but what if the
variable pointed to (say) ibdata1, ib_logfile0 or some *.ibd file?
It does not seem to make much sense for this parameter to be
configurable in the first place, but we will not change that in order
to avoid breaking compatibility.
The solution is to suppress error messages for missing tablespaces if
mariabackup is launched with "--prepare --export" options.
"mariabackup --prepare --export" invokes itself with --mysqld parameter.
If the parameter is set, then it starts server to feed "FLUSH TABLES ...
FOR EXPORT;" queries for exported tablespaces. This is "normal" server
start, that's why new srv_operation value is introduced.
Reviewed by Marko Makela.
row_upd_rec_in_place(): Avoid calling page_zip_write_rec() if we
are not modifying any fields that are stored in compressed format.
btr_cur_update_in_place_zip_check(): New function to check if a
ROW_FORMAT=COMPRESSED record can actually be updated in place.
btr_cur_pessimistic_update(): If the BTR_KEEP_POS_FLAG is not set
(we are in a ROLLBACK and cannot write any BLOBs), ignore the potential
overflow and let page_zip_reorganize() or page_zip_compress() handle it.
This avoids a failure when an attempted UPDATE of a NULL column to 0 is
rolled back. During the ROLLBACK, we would try to move a non-updated
long column to off-page storage in order to avoid a compression failure
of the ROW_FORMAT=COMPRESSED page.
page_zip_write_trx_id_and_roll_ptr(): Remove an assertion that would fail
in row_upd_rec_in_place() because the uncompressed page would already
have been modified there.
Thanks to Jean-François Gagné for providing a copy of a page that
triggered these bugs on the ROLLBACK of UPDATE and DELETE.
A 10.6 version of this was tested by Matthias Leich using
cmake -DWITH_INNODB_EXTRA_DEBUG=ON a.k.a. UNIV_ZIP_DEBUG.
mtr uses the group suffix, but some existing .inc and test files use
server_id for expect files. This patch aims to fix that.
For spider:
With this change we will not have to maintain a separate version of
restart_mysqld.inc for spider that duplicates code just because
spider tests use different names for expect files and shutdown_mysqld
requires magical names for them.
With this change spider tests will also be able to use other features
provided by restart_mysqld.inc without code duplication, like the
parameter $restart_parameters (see e.g. the testcase mdev_29904.test
in commit ef1161e5d4f).
Tests run after this change: default, spider, rocksdb, galera, using
the following commands:
mtr --parallel=auto --force --max-test-fail=0 --skip-core-file
mtr --suite spider,spider/*,spider/*/* \
--skip-test="spider/oracle.*|.*/t\..*" --parallel=auto --big-test \
--force --max-test-fail=0 --skip-core-file
mtr --suite galera --parallel=auto
mtr --suite rocksdb --parallel=auto
buf_LRU_block_remove_hashed(): Ever since
commit 2e814d4702
we could get page_zip_validate() failures after an ALTER TABLE
operation was aborted and BtrBulk::pageCommit() had never been
executed on some blocks.