trx_purge_truncate_history(): Remove a debug assertion that
had originally been added in
commit 0de3be8cfd (MDEV-30671).
In trx_t::commit_empty() we do not have any efficient way to rewind
rseg.needs_purge to an accurate value that would satisfy this
debug assertion.
Note: No correctness property should be violated here. At the point
where the debug assertion was located, we had already established
that purge_sys.sees(rseg.needs_purge) holds, that is, it is safe
to remove everything from rseg.
trx_undo_reuse_cached(): Assert that this is being invoked on the
persistent rollback segment of the transaction, and remove dead code
that was handling cached temporary undo log. This was missed in
commit 51e62cb3b3 (MDEV-26782).
- The lifetime of temporary tables is expected to be short, so it seems
reasonable to assume that all temporary tablespace pages will remain
in the buffer pool. Hence, it does not make sense to have read-ahead
for pages of the temporary tablespace.
buf_flush_page_cleaner(): Before finishing a batch, wake up any threads
that are waiting for buf_pool.done_flush_LRU.
This should fix a hung shutdown that we observed after
SET GLOBAL innodb_buffer_pool_size was executed to shrink the
InnoDB buffer pool.
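As a minimal sketch of the pattern, using standard C++ primitives in
place of InnoDB's own (buf_pool.done_flush_LRU is the real condition
that shutdown waits on; everything else here is illustrative):

  #include <condition_variable>
  #include <mutex>

  std::mutex flush_mutex;                  // stands in for buf_pool.mutex
  std::condition_variable done_flush_LRU;  // the waited-on condition
  bool batch_active= true;

  // Page cleaner: before finishing a batch, wake up every waiter, so
  // that a concurrent shutdown or buffer pool resize cannot sleep
  // forever waiting for a notification that would never be sent.
  void finish_batch()
  {
    std::lock_guard<std::mutex> g(flush_mutex);
    batch_active= false;
    done_flush_LRU.notify_all();
  }

  // Shutdown path: wait until the LRU flush batch has completed.
  void wait_for_lru_flush()
  {
    std::unique_lock<std::mutex> lk(flush_mutex);
    done_flush_LRU.wait(lk, [] { return !batch_active; });
  }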
Starting with commit 4ff5311dec,
log_write_up_to(trx->commit_lsn, true) in DDL operations could end up
being a no-op, because trx->commit_lsn would be 0.
trx_flush_log_if_needed(): Revert an incorrect attempt to ensure
that DDL operations are crash-safe.
trx_t::commit(std::vector<pfs_os_file_t> &), ha_innobase::rename_table():
Set trx_t::flush_log_later so that trx_t::commit_in_memory() will
retain trx_t::commit_lsn for the final durability call.
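In outline, the intended flow is roughly the following (a simplified
sketch; flush_log_later and commit_lsn are real trx_t members, the
rest of the control flow is condensed):

  // In trx_t::commit_in_memory(), simplified:
  if (!flush_log_later)
    trx_flush_log_if_needed(commit_lsn, this);
  // else: commit_lsn is retained; the caller will flush the log.

  // The caller (e.g. ha_innobase::rename_table()), once the file
  // operations have completed, issues the final durability call:
  trx->flush_log_later= false;
  log_write_up_to(trx->commit_lsn, true);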
Tested by: Matthias Leich
lock_wait(): Never return the transient error code DB_LOCK_WAIT.
In commit 78a04a4c22 (MDEV-29869)
some assignments of trx->error_state = DB_SUCCESS were removed,
and it was possible that the field was left at its initial value
DB_LOCK_WAIT.
The test case for this is nondeterministic; without this fix, it
would only occasionally fail.
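A short sketch of the invariant that this fix establishes at the end
of lock_wait() (surrounding logic elided):

  // DB_LOCK_WAIT is only a transient state: by the time lock_wait()
  // returns, it must have been resolved into a definite outcome such
  // as DB_SUCCESS, DB_DEADLOCK or DB_LOCK_WAIT_TIMEOUT.
  if (trx->error_state == DB_LOCK_WAIT)
    trx->error_state= DB_SUCCESS;
  return trx->error_state;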
Reviewed by: Vladislav Lesin
lock_sys_t::cancel(trx_t*): Remove, and merge it into its only caller
innobase_kill_query().
innobase_kill_query(): Before reading trx->lock.wait_lock,
do acquire lock_sys.wait_mutex, like we did before
commit e71e613353 (MDEV-24671).
In this way, we should not miss a recently started lock wait
by the killee transaction.
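In outline, the change amounts to the following (a simplified sketch;
lock_sys.wait_mutex and trx->lock.wait_lock are the real objects, and
the cancellation call is condensed into an illustrative helper):

  void innobase_kill_query(THD *thd)
  {
    if (trx_t *trx= thd_to_trx(thd))
    {
      // Acquire lock_sys.wait_mutex before reading wait_lock, so that
      // a lock wait that has just started cannot be missed.
      mysql_mutex_lock(&lock_sys.wait_mutex);
      if (lock_t *wait_lock= trx->lock.wait_lock)
        cancel_lock_wait(trx, wait_lock); // cancellation details elided
      mysql_mutex_unlock(&lock_sys.wait_mutex);
    }
  }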
lock_rec_lock(): Add a DEBUG_SYNC "lock_rec" for the test case.
lock_wait(): Invoke trx_is_interrupted() before entering the wait,
in case innobase_kill_query() was invoked some time earlier and
some longer-running operation did not check for interrupts.
As suggested by Vladislav Lesin, do not overwrite
trx->error_state==DB_INTERRUPTED with DB_SUCCESS.
This would avoid a call to trx_is_interrupted() when the test is
modified to use the DEBUG_SYNC point lock_wait_start instead of lock_rec.
Avoid some redundant loads of trx->lock.wait_lock; cache the value
in the local variable wait_lock.
Deadlock::check_and_resolve(): Take wait_lock as a parameter and
return wait_lock (or -1 or nullptr). We only need to reload
trx->lock.wait_lock if lock_sys.wait_mutex had been released
and reacquired.
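Put together, the beginning of lock_wait() now looks roughly like this
(a hedged sketch combining the points above; error handling and the
actual wait are elided):

  dberr_t lock_wait(que_thr_t *thr)
  {
    trx_t *trx= thr_get_trx(thr);

    // Catch a kill that was issued before we got here, in case a
    // long-running operation did not check for interrupts.
    if (trx_is_interrupted(trx))
      trx->error_state= DB_INTERRUPTED;
    // Do not overwrite DB_INTERRUPTED set by innobase_kill_query().
    else if (trx->error_state != DB_INTERRUPTED)
      trx->error_state= DB_SUCCESS;

    mysql_mutex_lock(&lock_sys.wait_mutex);
    // Cache the field in a local variable to avoid redundant loads.
    lock_t *wait_lock= trx->lock.wait_lock;
    if (wait_lock)
      // Returns wait_lock (possibly reloaded), -1, or nullptr;
      // trx->lock.wait_lock only needs to be reloaded if wait_mutex
      // was released and reacquired inside.
      wait_lock= Deadlock::check_and_resolve(trx, wait_lock);
    // ... enter the actual wait and return the resolved error_state ...
  }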
trx_t::error_state: Correctly document the data member.
trx_lock_t::was_chosen_as_deadlock_victim: Clarify that other threads
may set the field (or flags in it) while holding lock_sys.wait_mutex.
Thanks to Johannes Baumgarten for reporting the problem and testing
the fix, as well as to Kristian Nielsen for suggesting the fix.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
Some s390x environments include
https://github.com/madler/zlib/pull/410
and a more pessimistic compressBound: (sourceLen * 16 + 2308) / 8 + 6.
Let us adjust the recently enabled tests accordingly.
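For a sense of scale, a small sketch comparing the two bounds (the
approximate value for standard zlib is from its compressBound();
the constants can vary between builds):

  // Pessimistic bound used in the s390x environments mentioned above:
  unsigned long s390x_bound(unsigned long n)
  { return (n * 16 + 2308) / 8 + 6; }

  // For sourceLen = 1000:
  //   standard zlib compressBound(1000) is about 1013
  //   s390x_bound(1000) = (16000 + 2308) / 8 + 6 = 2288 + 6 = 2294
  // Tests asserting exact lengths must therefore be adjusted.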
trx_undo_write_trx_xid(): Silence the debug assertion by passing
a template parameter that causes us to not care that the contents of
the page did not actually change and no log record would be written.
This debug assertion could fail if XA PREPARE was executed multiple
times with the same XID.
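The write could look roughly as follows (a hedged sketch: the block,
offset and length names are illustrative; MAYBE_NOP stands for the
mtr_t write_type that tolerates unchanged contents):

  // With the default write type, a debug assertion insists that a
  // logged write actually changes the page. MAYBE_NOP accepts that
  // the bytes may be identical, in which case no log record is written.
  mtr->memcpy<mtr_t::MAYBE_NOP>(*undo_block, xid_offset, xid_buf, xid_len);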
innodb_monitor_validate(): Let item_val_str() allocate the memory
in THD, so that it will be available to innodb_monitor_update().
In this way, there is no need to allocate another buffer, and
no problem if the call to innodb_monitor_update() is skipped due
to an invalid value that is passed to another configuration parameter.
There are some other callers to st_mysql_sys_var::val_str()
that validate configuration parameters that are related to FULLTEXT INDEX,
but they will allocate memory by invoking thd_strmake().
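A hedged sketch of the pattern (the check-function signature follows
the st_mysql_sys_var interface; thd_strmake() is the plugin service
that allocates from the THD; details are simplified):

  static int innodb_monitor_validate(THD *thd, st_mysql_sys_var *,
                                     void *save, st_mysql_value *value)
  {
    int len= 0;
    const char *name= value->val_str(value, nullptr, &len);
    if (!name)
      return 1;
    // Copy into THD memory: the string stays valid for the rest of the
    // statement, so innodb_monitor_update() can safely use it later,
    // and nothing leaks if the update function is never invoked.
    name= thd_strmake(thd, name, len);
    *static_cast<const char **>(save)= name;
    return name ? 0 : 1;
  }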
Currently include/have_innodb_4k.inc etc. files only check that the
server is running with the corresponding page size. I think it would
be more convenient if they actually enforced the setting.
The test innodb_zip.index_large_prefix_4k would not run unless it is
invoked as
./mtr --mysqld=--innodb-page-size=4k innodb_zip.index_large_prefix_4k
This test was originally developed to cover an option that was removed
in commit 0c92794db3. Starting with
MariaDB Server 10.2, which introduced innodb_default_row_format=dynamic,
the option innodb_large_prefix had become useless.
Let us remove some of the stale tests and adjust the outcome to the
expected behaviour.
Let us avoid inserting the rows fid=714 and fid=715, because we would
evaluate g=NULL for them, and NULL values are not allowed in InnoDB
SPATIAL INDEX.
Also, let the test run on any page size, and on non-debug builds.
-Wdeprecated-copy-with-user-provided-copy was causing a few errors on
classes that provided one copy special member function while relying
on the implicit definition of the other. By removing the redundant
code, it now compiles without warnings.
Tested with fc38 / clang-16.
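A minimal illustration of the pattern behind the warning (invented
names):

  struct Point
  {
    int x, y;
    Point(const Point &p) : x(p.x), y(p.y) {} // user-provided copy ctor
    // The copy assignment operator is still implicitly defined; doing
    // so when a user-provided copy constructor exists is deprecated,
    // which is what -Wdeprecated-copy-with-user-provided-copy flags.
  };

  // Fix: drop the redundant user-provided copy constructor and let the
  // compiler define both special member functions implicitly.
  struct PointFixed { int x, y; };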
Port the test case from MySQL to MariaDB:
MySQL fix Bug#33813951, Change-Id: I2448e3f2f36925fe70d882ae5681a6234f0d5a98.
Function test_simple_temporal() from MySQL ported from C++ to pure C.
This includes one change:
- DIE_UNLESS(field->type == MYSQL_TYPE_DATETIME);
+ DIE_UNLESS(field->type == MYSQL_TYPE_TIMESTAMP);
The bound param of SELECT ? is TIMESTAMP in this code.
MySQL returns it back as DATETIME. MariaDB preserves TIMESTAMP.
Code packaged for commit by Daniel Black.
The problem was that parallel replication of temporary tables using
statement-based binlogging could overlap the COMMIT in one thread with a DML
or DROP TEMPORARY TABLE in another thread using the same temporary table.
Temporary tables are not safe for concurrent access, so this caused
references to freed memory and possibly other nastiness.
The fix is to disable the optimisation that overlaps the commit of one
transaction with the start of a later transaction when temporary tables
are in use. Then the following event groups will be blocked from
starting until the one using temporary tables is completed.
This also fixes occasional test failures of rpl.rpl_parallel_temptable seen
in Buildbot.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Recalculate the long unique hash in Write_rows_log_event
and Update_rows_log_event.
Normally, generated columns (stored and indexed virtual ones)
are deterministic and their values don't need to be recalculated
on the slave, as they're already present in the row image.
But the long unique hash function was changed in MDEV-27653,
so a row event from an old master will have the old hash,
while a table created on the new slave will need the new hash.
32-bit MariaDB crashed in innodb.innodb-16k and a few other tests.
Fixed by using the correct sizeof() calls.
Histograms were not read if the first read was done without histograms.
The problem is that s390x is not using the default bzip library we use
on other platforms, which causes compressed string lengths to be
different from what the mtr tests expect.
Fixed by:
- Added have_normal_bzip.inc, which checks if compress() returns the
  expected length.
- Adjusted the results to match the expected ones:
  - main.func_compress.test & archive.archive
- Don't print lengths that depend on the compression library:
  - mysqlbinlog compress tests & connect.zip
- Don't print DATA_LENGTH for SET column_compression_zlib_level=1:
  - main.column_compression
The problem is in the manager/worker communication when the worker
sends WARNINGS and then TESTRESULT. If the manager has not yet read the
WARNINGS response, both responses get into the same buffer; can_read()
will indicate that we have data only once, so we must read all the data
from the socket at once. Otherwise the TESTRESULT response is lost and
the manager waits for it forever.
The fix reads the socket in a loop instead of reading a single line.
But if there is only one response in the buffer, the second read would
block, waiting until new data arrives. That can be overcome by
blocking(0), which sets the handle into non-blocking mode: if there is
no data, the second read just returns undef.
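The same drain-the-buffer pattern, sketched in C++ for illustration
(the actual fix is in mtr's Perl code; blocking(0) corresponds to
putting the descriptor into non-blocking mode here):

  #include <cerrno>
  #include <fcntl.h>
  #include <string>
  #include <unistd.h>

  // One readiness notification may cover several protocol responses,
  // so keep reading until the non-blocking read reports no more data.
  std::string drain(int fd)
  {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    std::string buf;
    char chunk[4096];
    for (;;)
    {
      ssize_t n= read(fd, chunk, sizeof chunk);
      if (n > 0)
        buf.append(chunk, n);
      else
        break; // no more data for now (EAGAIN), EOF, or error
    }
    return buf;
  }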
The problem is that non-blocking mode is not supported by all Perl
flavors on Windows. Strawberry and ActiveState do not support it;
Cygwin and MSYS2 do. There is some ioctl() hack that was known to
"work", but it does not do what is expected (it does not return data
when there is data). So for Windows, if it is not Cygwin, we disable
the fix.
MSYS2 is basically Cygwin, except that it has an easier installation
(but with tools that are not used) and it has some more control of path
conversion via MSYS2_ARG_CONV_EXCL and MSYS2_ENV_CONV_EXCL. So it
should be more Windows-friendly than Cygwin.
Installation
Similar to Cygwin, except that installing patch requires an additional
command run from the shell:
pacman -S patch
MSYS2 still doesn't work, as it returns a weird "Bad address" when
exec-ing a forked process from create_process(). The same exec from a
standalone perl -e runs just fine... :(
Cygwin is more Unix-oriented. It does not treat \n as \r\n in regexps
(fixed by \R), and it supplies Unix-style paths (fixed by
mixed_path()). It does some cleanup on paths when running an exe, so
they will be different in the exe output (as with $exe_mysqld;
comparing basename() is enough).
Cygwin installation
1. Install the latest Perl version (only the base package) and
   patchutils from cygwin-setup;
2. Don't forget to add c:\cygwin64\bin to the system path
   before any other Perl flavors;
3. There is a path-style conflict (see below): you must replace
   c:\cygwin64\bin\sh.exe with the wrapper. Run MTR with
   --cygwin-subshell-fix=do for that. Make sure you are running Cygwin
   perl for the option to work.
4. Restart buildbot via net stop buildbot; net start buildbot
Path-style conflict of Cygwin-ish Perl
Some exe paths are passed to mysqltest and are executed by a native
call. This requires native-style paths (\-style). These exe paths are
also executed by Perl itself: by MTR itself, which is not so critical,
but also by tests' --perl blocks, which are impossible to change. And
if Perl detects shell expansion or uses a pipe command, it passes this
exe path to /bin/sh, which is Cygwin-compiled bash that cannot work
with \-style (at least not in -c processing). Thus we require \-style
in some parts of MTR execution and /-style in other parts.
Examples of tests which cover these different parts are:
main.mysqlbinlog_row_compressed \
main.sp_trans_log
It would be great to force Perl to use something different from
/bin/sh, but unfortunately /bin/sh is compiled into the binary. So the
only solution left is to overwrite /bin/sh with a wrapper script
that passes the command to cmd.exe instead of bash.
See "Path-style conflict" in "MDEV-30836 MTR Cygwin fix" for explanation.
To install the subshell fix, use --cygwin-subshell-fix=do
To uninstall it, use --cygwin-subshell-fix=remove
This works only from a Cygwin environment. As long as the perl on PATH
is from Cygwin, you are in a Cygwin environment. Check it with
perl --version
This is perl 5, version 36, subversion 1 (v5.36.1) built for
x86_64-cygwin-threads-multi
run_test_server() is actually the manager main loop. We move this
function out into the Manager package and split it into run() and
parse_protocol(). The latter is needed for the fix. Moving it into a
separate package helps to share some common variables that were local
to run_test_server().
Functions from the main package are now prefixed with main:: (this
should be reorganized somehow later or auto-imported).
The check that creates a test file for case-insensitivity testing gives
only a basic warning on failing to create the test file, and the next
thing a user might see is an abort.
ProtectHome and other systemd settings protect system services from
accessing user data. Unfortunately some of our users do put things
under /home due to space or other reasons.
Rather than enumerate the systemd options in a very clunky, fragile
way, we attach an error to the "Can't create test file" message and
hope the user can work it out from there.
Thanks to Sergei for the %M tip.
Fixed a memory leak that took place on executing a prepared statement
or a stored routine that queries a view constructed on an information
schema table. For example, let's consider the following definition of
the view 'v1':
CREATE VIEW v1 AS SELECT table_name FROM information_schema.views
ORDER BY table_name;
Querying this view in PS mode results in an assertion failure:
PREPARE stmt FROM "SELECT * FROM v1";
EXECUTE stmt;
EXECUTE stmt; (*)
Running the statement marked with (*) leads to a crash in case the
server is built with the mode that controls allocation of memory from
the SP/PS memory root on the second and following executions of the
PS/SP.
The reason for the memory leak is that the memory allocated on
processing the FRM file for the view is requested from the PS/SP
memory root, meaning that this memory is released only when a stored
routine is evicted from the SP cache or a prepared statement is
deallocated, which typically happens on termination of a user session.
To fix the issue, switch to a memory root specially created for
allocation of the short-lived objects that are requested on parsing
the FRM file.
In case a table accessed by a PS/SP is dropped after the first
execution of the PS/SP, and a view is created with the same name as the
table just dropped, then the second execution of the PS/SP leads to
allocation of memory on the SP/PS memory root already marked as read
only on the first execution.
For example, the following test case:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM "INSERT INTO t1 VALUES (1)";
EXECUTE stmt;
DROP TABLE t1;
CREATE VIEW t1 AS SELECT 1;
--error ER_NON_INSERTABLE_TABLE
EXECUTE stmt; # (*)
DROP VIEW t1;
will hit an assertion on running the statement 'EXECUTE stmt' marked
with (*), when memory allocation is performed on parsing the view.
Memory allocation is requested inside the function mysql_make_view()
when a view definition is being parsed. In order to avoid the assertion
failure, the call of the function mysql_make_view() must be moved after
the invocation of the function check_and_update_table_version().
This results in re-preparing the whole PS statement or the current
SP instruction, which frees the currently allocated items and resets
the read_only flag of the memory root.
Moved the call of the function check_and_update_table_version() to just
before the place where the function extend_table_list() is invoked,
in order to avoid allocation of memory on a PS/SP memory root
marked as read only. That allocation happens because the function
extend_table_list() invokes sp_add_used_routine() to add a trigger
created for the table in the time frame between executions of the
statement EXECUTE `stmt_id`.
For example, the following test case
create table t1 (a int);
prepare stmt from "insert into t1 (a) value (1)";
execute stmt;
create trigger t1_bi before insert on t1 for each row
set @message= new.a;
execute stmt; # (*)
adds the trigger t1_bi to the list of used routines, which involves
allocation of memory on the PS memory root that has already been
marked as read only on the first run of the statement 'execute stmt'.
As a result, when the statement marked with (*) is executed, it hits
an assertion.
To fix the issue, call the function check_and_update_table_version()
before the invocation of extend_table_list() to force re-compilation
of the PS/SP, which resets the read-only flag of its memory root.