- Simplified Chinese translation added
- Character encoding is gbk
-- gbk covers more characters
-- gbk includes both Simplified and Traditional Chinese
-- best option I think, may need to work along with other locale
settings
- Other cleanup
-- Within each error, messages are sorted according to language code
-- More consistent formatting (8 spaces preceding each translation)
-- jps removed as duplicate of jpn translation
This should be a good starting point. More refinement is appreciated,
and needed down the road.
English "containt" (sic) spelling fixes on ER_FK_NO_INDEX_{CHILD,PARENT}
resulting in mtr test case adjustments.
Edited/reviewed by Daniel Black
`m_status == DA_ERROR' failed on SELECT after setting tmp_disk_table_size.
Analysis: The mismatch between "194 warnings" and "64 rows in set" is
caused by the max_error_count variable, which has a default value of 64.
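A minimal illustration of that cap (max_error_count is the standard server
variable, not something introduced by this fix):

    SHOW VARIABLES LIKE 'max_error_count';  -- 64 by default, hence "64 rows in set"
    SET SESSION max_error_count = 1024;     -- raise the cap so SHOW WARNINGS
                                            -- can list all 194 warnings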
As for the corrupted tables: the error that occurs because of an
insufficient tmp_disk_table_size value is not reported correctly, and we
continue to execute the statement. Because the previous error (about the
table being full) is not reported correctly, it moves up the stack and is
later wrongly reported as a parsing error while parsing the frm file of one
of the information schema tables. That parsing error produces the
corrupted-table error.
As for the InnoDB error, it occurs even when tmp_disk_table_size is at its
default (sufficient) value, but the internal error handler takes care of it
and the error is not shown. When tmp_disk_table_size is insufficient, the
fatal error that was not reported correctly moves up the stack, so the
internal error handler is not called and the errors become visible.
Fix: Report the error correctly.
We will remove the parameter innodb_disallow_writes because it is badly
designed and implemented. The parameter was never allowed at startup.
It was only internally used by Galera snapshot transfer.
If a user executed
SET GLOBAL innodb_disallow_writes=ON;
the server could hang even on subsequent read operations.
During Galera snapshot transfer, we will block writes
to implement an rsync friendly snapshot, as follows:
sst_flush_tables() will acquire a global lock by executing
FLUSH TABLES WITH READ LOCK, which will block any writes
at the high level.
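At the SQL level this is the usual global read lock; roughly:

    FLUSH TABLES WITH READ LOCK;  -- block all writes while the snapshot is taken
    -- ... rsync copies the data directory ...
    UNLOCK TABLES;                -- release the global read lock afterwards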
sst_disable_innodb_writes(), invoked via ha_disable_internal_writes(true),
will suspend or disable InnoDB background tasks or threads that could
initiate writes. As part of this, log_make_checkpoint() will be invoked
to ensure that anything in the InnoDB buf_pool.flush_list will be written
to the data files. This has the nice side effect that the Galera joiner
will avoid crash recovery.
The changes to sql/wsrep.cc and to the tests are based on a prototype
that was developed by Jan Lindström.
Reviewed by: Jan Lindström
This commit contains a fix to use modern syntax for selecting
character classes in the tr utility options.
Also one of the tests for SST via rsync (galera_sst_rsync2) is made
more reliable (to avoid rare failures during automatic testing).
rpl.rpl_semi_sync_slave_compressed_protocol.test was manually
re-enabled only in 10.3: the fix went into 10.3+, but the test was
left disabled in 10.4 and later versions. This commit re-enables the
test in 10.4+.
mariabackup does not load the data dictionary during backup, but it does
load tablespaces; that is why fil_system.max_assigned_id is not set by
dict_check_tablespaces_and_store_max_id(). There is no point in issuing the
warning during backup.
Task 6:
We can determine the type of the .frm file. If it is a sequence, then the
is_sequence flag passed to dd_frm_type() will be true. Since there is
already a check that gives an error message if the trigger is on a
temporary table or view, an additional condition is added to check whether
the .frm is a sequence (is_sequence==true), and the error message is
changed to show "Trigger's '%-.192s' is view, temporary table or sequence"
instead of "Trigger's '%-.192s' is view or temporary table".
Task 4 and 5:
Already fixed when I started working on the bug. Task 4 correctly gives a
syntax error ('(' and ')' are not part of the create sequence syntax).
Task 5 also gives the correct error because s1 is a table, not a sequence;
it fails with ER_NOT_SEQUENCE2.
Task 1:
If a table is added to the list using the option TL_OPTION_SEQUENCE (done
when we have sequence functions), then we are dealing with a sequence
instead of a table, so the global table list will have sequence set to
true. This is used to give the correct error message about an unknown
sequence instead of "table doesn't exist".
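For illustration (the sequence name is invented), a sequence function on a
missing object should now report an unknown-sequence error:

    SELECT NEXT VALUE FOR no_such_seq;
    -- expected: an "unknown sequence" error instead of "table doesn't exist"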
Fix a possible crash on my_free() due to the use of strdup() versus
my_strdup(), and a memory leak.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
As main() invokes parse_page() when -S or -D is set, parse_page() can be
invoked when the -D filename is not set; that is why any attempt to write
to the page dump file must be made only when the file name was set with -D.
The bug is caused by 2ef7a5a13a
(MDEV-13443).
records_are_comparable() requires this condition:
bitmap_is_subset(table->write_set, table->read_set)
On the first iteration vers_update_fields() changes the write_set and
read_set. On the second iteration the above condition fails.
Added missing read bit for ROW_START. Also reorganized
bitmap_set_bit() so it is called only when needed.
Throw ER_NOT_FORM_FILE if the FRM data is wrong (a warning with
ER_VERS_FIELD_WRONG_TYPE is still printed for deeper insight into what
happened).
Keep ER_VERS_FIELD_WRONG_TYPE for creating partitioned table with
trx-versioning. Tested by MDEV-15951 in trx_id.test
Problem: In regular replication, when the master binlogged using statement
format, the slave might not write an event to its own binary log when the
Query event targeted a temporary table.
Specifically, this was observed with LOAD DATA INFILE.
This was possible because, unlike the master, the slave holds temporary
tables in its own pool, so the master-side check for the existence of a
temporary table at the binlog-format decision point did not apply.
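A hedged sketch of the affected pattern (table and file names are
illustrative), executed on the master with binlog_format=STATEMENT:

    CREATE TEMPORARY TABLE tmp1 (a INT);
    LOAD DATA INFILE 'data.txt' INTO TABLE tmp1;
    -- a binlogging slave must also write the corresponding events to its
    -- own binary log, since downstream replicas rely on them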
Solution: replace THD::has_thd_temporary_tables() with
THD::has_temporary_tables(), which identifies temporary table presence
on either side.
--
Reviewed by Andrei Elkin.
This commit adds an mtr test reproducing a scenario where, despite
innodb_disallow_writes blocking, writes to the file system can still happen.
The test launches a garbd node, which triggers one of the cluster nodes to
switch to the SST donor state. In this state, all disk activity should be
halted, and e.g. innodb_disallow_writes has been set. The test records an
aggregate md5sum over the mariadb data directory when the node enters the
donor state, and another md5sum when the node leaves the donor state. If
there is no IO activity in the data directory, these hashes should be equal.
For this test, the donor state processing has been instrumented so that the
SST donor thread can be stopped when entering the donor state. The test uses
this new dbug sync point to control when to record the md5sums.
A new SST script was added: wsrep_sst_backup. garbd uses the backup method
to launch the donor node to call this script and to enter the donor state.
The backup script could later be extended into a general-purpose backup
method for the cluster.
This commit also fixes a race condition in wsrep_sst_rsync:
* the wsrep_rsync_sst script requests a flush of tables,
and then waits in a loop until mariadbd has created the file tables_flushed,
as confirmation that FLUSH TABLES has completed
* mariadbd's SST donor thread wakes up for the flush table request, performs
FTWRL, and after this creates the tables_flushed file
* note that the SST script will now continue and start up rsync sending
* mariadbd's SST donor thread now calls sst_disallow_writes(),
so that InnoDB sets up the disk IO blockage; however, rsyncing may already
be ongoing at this point
This race condition is fixed in this commit by performing all disk IO
blocking before creating the tables_flushed file.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
========
When the mysql.gtid_slave_pos table uses the InnoDB engine, and
mysqld starts, it reads the table and begins a transaction. After
reading the value, it should end the transaction and release all
associated locks. The bug reported in DBAAS-7828 shows that when
autocommit is off, the locks are not released, resulting in
indefinite hangs on future attempts to change gtid_slave_pos. In
particular, the transaction was not properly finalized because
thd->server_status was not updated to reflect the end of the
transaction.
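A hedged sketch of the resulting hang, assuming the server was started with
autocommit=0 and mysql.gtid_slave_pos on InnoDB:

    -- the locks taken while reading mysql.gtid_slave_pos at startup were
    -- never released, so a later position change would hang indefinitely:
    SET GLOBAL gtid_slave_pos = '0-1-100';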
Solution:
========
This patch updates the code to properly commit the transaction after
reading gtid_slave_pos during mysqld start-up.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:
========
When using mariadb-binlog with --raw and --stop-never, events from
the master's currently active log file should be written to their
respective log file specified by --result-file, and be visible on disk.
There is a bug where mariadb-binlog does not flush the result file
to disk when new events are received.
Solution:
========
Add a function call to flush mariadb-binlog’s result file after
receiving an event in --raw mode.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
When a thread is BF aborted by the high priority service, its user level
locks (ULL) need to be removed and released. Directly calling the lock
release for the MDL_EXPLICIT type does not also clear `thd->ull_hash`. The
method `mysql_ull_cleanup` properly clears all information about ULL locks
for the thread.
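For context, the user level locks in question are the ones taken with
GET_LOCK(), for example:

    SELECT GET_LOCK('my_lock', 10);    -- tracked in thd->ull_hash
    -- a BF abort must both release the lock and clear this bookkeeping,
    -- which is what mysql_ull_cleanup() does
    SELECT RELEASE_LOCK('my_lock');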
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
* Fix test galera.MW-44 to make it work with --ps-protocol
* Skip test galera.MW-328C under --ps-protocol. This test
relies on wsrep_retry_autocommit, which has no effect
under --ps-protocol.
* Return WSREP related errors on COM_STMT_PREPARE commands
Change wsrep_command_no_result() to allow sending back errors
when a statement is prepared, for example to handle a deadlock
error due to a BF aborted transaction during prepare.
* Add sync waiting before statement prepare
When a statement is prepared, tables used in the statement may be
opened and checked for existence. Because of that, some tests (for
example galera_create_table_as_select) that CREATE a table on one node
and then SELECT from the same table on another node may fail with a
non-existing-table error, as sketched below.
To make tests behave similarly under the normal and the PS protocol, we
add a call to sync wait before preparing statements that would sync wait
during normal execution.
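A hedged sketch of the failing pattern (the table name is illustrative):

    -- node_1:
    CREATE TABLE t1 AS SELECT 1 AS a;
    -- node_2, under --ps-protocol: preparing this statement opens t1, and
    -- without a preceding sync wait the table may not have replicated yet
    SELECT * FROM t1;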
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Update wsrep-lib which contains a fixup introduced with MDEV-27553.
Also, adapt the corresponding test: after an apply failure on ROLLBACK,
the node will disconnect from the cluster.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This commit contains a test for reproducing the issue in MDEV-27649,
where a transaction executing a prepared statement is BF aborted.
The scenario in MDEV-27649 has a transaction which has prepared a PS but
not yet executed it, and the transaction is then BF aborted in this state.
When the BF aborted transaction tries to execute the PS, it will receive a
deadlock error. But when it tries to execute the PS a second time, the
node crashes.
The mtr test galera.galera_bf_abort_ps_bind exercises this scenario.
However, the mtr test platform does not have a mechanism to control the
execution of a PS in the required detail.
For this purpose, mysqltest.cc was extended to contain 4 new commands:
PS_prepare - to prepare a prepared statement
PS_bind - to bind values for parameters for the PS
PS_execute - to execute the PS
PS_close - to close the PS
The support for controlling prepared statements in mtr scripts is quite
minimal in this commit. Limitations are:
* only one PS can be used by a connection at a time
* only input parameters can be bound for the PS
* only varchar, integer or float type parameters can be bound
added the result
fixes
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
extra2_read_len resolved by keeping the implementation
in sql/table.cc and exposing it for use by ha_partition.cc.
Remove the identical implementation in unireg.h.
(ref: bfed2c7d57)
The 10.5 test failure in main.grant_kill showed an incorrect
thread id on a big-endian architecture.
The cause of this is that the sql_kill_user function assumed the
error was ER_OUT_OF_RESOURCES, when the actual error was
ER_KILL_DENIED_ERROR. ER_KILL_DENIED_ERROR as an error message
requires a thread id to be passed as unsigned long; however, a
user/host was passed.
ER_OUT_OF_RESOURCES doesn't even take a user/host, despite
the optimistic comment. We remove this argument from the
function so that when MDEV-21978 is implemented, one less
compiler format warning is generated (a warning that would
have caught this error sooner).
Thanks Otto for reporting and Marko for analysis.
- InnoDB fails to skip a newly created column while checking for a
changed column when the table is in the redundant row format. This
issue was caused by MDEV-18035 (ccb1acbd3c).
Let us explicitly wait for purge before invoking a slow shutdown,
so that instrumented builds (such as ASAN or UBSAN) will not
exceed the 60-second timeout during shutdown.
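A minimal sketch of the resulting test sequence, assuming the usual
variables for this purpose (innodb_max_purge_lag_wait to wait for purge,
innodb_fast_shutdown=0 to request a slow shutdown):

    SET GLOBAL innodb_max_purge_lag_wait = 0;  -- block until purge has caught up
    SET GLOBAL innodb_fast_shutdown = 0;       -- the following shutdown is then a slow one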
The first step for deprecating innodb_autoinc_lock_mode (see MDEV-27844)
is (sketched below):
- to switch the statement binlog format to ROW if the binlog format is
MIXED and the statement changes autoincremented fields
- to issue warnings if innodb_autoinc_lock_mode == 2 and the binlog
format is STATEMENT
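A hedged sketch of both cases (the table definition is illustrative;
innodb_autoinc_lock_mode itself can only be set at server startup):

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10)) ENGINE=InnoDB;
    SET SESSION binlog_format = 'MIXED';
    INSERT INTO t1 (v) VALUES ('x');  -- now binlogged in ROW format
    SET SESSION binlog_format = 'STATEMENT';
    INSERT INTO t1 (v) VALUES ('y');  -- with innodb_autoinc_lock_mode=2, a warning is issued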