Due to 32-bit arithmetic, SRV_TMP_SPACE_ID page number 0x200002 would be
folded to 0, which is incompatible with the assumption that was made in
commit 7cffb5f6e8 (MDEV-23399).
page_id_t::fold(): Compute in the native word width instead of uint32_t.
On 64-bit platforms, an alternative would be to return the 64-bit m_id
directly, but that was measured to cause a performance regression.
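For illustration, a minimal standalone sketch of the arithmetic (the
shift-by-20 fold formula and the 0xFFFFFFFE temporary tablespace id are
written from memory here and should be treated as assumptions; the
authoritative definitions are in the InnoDB sources):

    #include <cstdint>
    #include <cstdio>

    static const uint32_t TMP_SPACE_ID= 0xFFFFFFFE; // assumed SRV_TMP_SPACE_ID

    // Old behaviour: the fold is computed in uint32_t, so the carry out of
    // bit 31 is lost and temporary tablespace page 0x200002 folds to 0.
    static uint32_t fold_u32(uint32_t space, uint32_t page_no)
    {
      return (space << 20) + space + page_no;
    }

    // New behaviour: compute in the native word width; on 64-bit platforms
    // the distinguishing high bits survive.
    static size_t fold_native(uint32_t space, uint32_t page_no)
    {
      return (size_t{space} << 20) + space + page_no;
    }

    int main()
    {
      std::printf("uint32_t fold: %#x\n", fold_u32(TMP_SPACE_ID, 0x200002));
      std::printf("native fold:   %#zx\n", fold_native(TMP_SPACE_ID, 0x200002));
      return 0;
    }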
fil_space_t::open(): Invoke fil_node_t::find_metadata() when the
tablespace is being created. In this way, we will actually detect
that the temporary tablespace resides on SSD. (During database
creation, the system tablespace will also correctly be detected as
residing on SSD.)
The only purpose of ibuf_bitmap_mutex is to prevent a deadlock between
two concurrent invocations of ibuf_update_free_bits_for_two_pages_low()
on the same pair of bitmap pages, but in opposite order.
The mutex is unnecessarily serializing the execution of the function
even when it is being invoked on totally different tablespaces.
To avoid deadlocks, it suffices to ensure that the two page latches
are being acquired in a deterministic (sorted) order.
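For illustration, a minimal standalone sketch of the latch-ordering idea
(std::mutex and the toy types below stand in for the real change buffer
bitmap page latches):

    #include <functional>
    #include <mutex>
    #include <thread>
    #include <utility>

    // Toy stand-in for a change buffer bitmap page and its latch.
    struct bitmap_page
    {
      std::mutex latch;
    };

    // Acquire the two latches in a deterministic order (here: by address),
    // so that two threads handling the same pair in opposite order cannot
    // deadlock.
    static void latch_pair(bitmap_page &a, bitmap_page &b)
    {
      bitmap_page *first= &a, *second= &b;
      if (std::less<bitmap_page*>()(second, first))
        std::swap(first, second);
      first->latch.lock();
      if (second != first)
        second->latch.lock();
      // ... update the free-bits information on both pages ...
      if (second != first)
        second->latch.unlock();
      first->latch.unlock();
    }

    int main()
    {
      bitmap_page p1, p2;
      std::thread t1([&] { latch_pair(p1, p2); });
      std::thread t2([&] { latch_pair(p2, p1); }); // opposite order, still safe
      t1.join();
      t2.join();
      return 0;
    }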
The --skip-write-binlog help message was confusing, implying that the
option only had an effect if Galera was enabled. There are uses beyond
Galera, so we apply SET SESSION SQL_LOG_BIN=0 as implied by the option,
without making it conditional on the wsrep status.
With --skip-write-binlog we now also check the session @@WSREP_ON
variable rather than the global server variable.
Since 10.6, wsrep_mode can replicate Aria and MyISAM, in which case no
conversion to InnoDB and back is needed.
By removing the conditions, we can use LOCK TABLES in the general case,
improving the load speed of Aria (MDEV-23326) regardless of the
--skip-write-binlog flag. The only case where we do not use LOCK TABLES
is when we are replicating via InnoDB, that is, when wsrep_on=1 and
wsrep_mode does not contain REPLICATE_ARIA{,MYISAM}; an InnoDB
transaction is used instead. When replicating via InnoDB we change the
table engine type back to what it was originally.
By removing the \d and other syntax that requires parsing by
the mariadb client, we can use the generated SQL more generally, like
in the embedded server.
We also save and restore the SQL_LOG_BIN and WSREP_ON session server
variables so the output can be included in the same session as other
data without leaving changes of state behind.
Remove wsrep.mysql_tzinfo_to_sql_symlink{,_skip} tests as they offered
no additional coverage beyond main.mysql_tzinfo_to_sql_symlink (no
server testing was done).
Add galera.mariadb_tzinfo_to_sql to actually test the replication
of tzinfo data through galera.
The conditional executable comment around /*M!100602 ...
START TRANSACTION .. LOCK TABLES.. */ is there so that we can provide
tzinfo files (MDEV-27113, MDBF-389) and, if a user runs the output on a
pre-10.6 server version, it will still work. Neither START TRANSACTION
nor LOCK TABLES is supported in prepared statements in MariaDB versions
earlier than 10.6.2.
Reviewed by Brandon Nesterenko
- Simplified Chinese translation added
- Character encoding is gbk
-- gbk covers more characters
-- gbk includes both Simplified and Traditional
-- best option I think, may need to work along with other locale
settings
- Other cleanup
-- Within each error, messages are sorted according to language code
-- More consistent formatting (8 spaces preceding each translation)
-- jps removed as duplicate of jpn translation
This should be a good starting point. More refinement is appreciated,
and needed down the road.
English "containt" (sic) spelling fixes on ER_FK_NO_INDEX_{CHILD,PARENT}
resulting in mtr test case adjustments.
Edited/reviewed by Daniel Black
The following conditions have to be added:
1) InnoDB fails to include the offset of the node pointer field
in the non-leaf record for the redundant row format.
2) If a fixed-length field has only a prefix length, then
calculate the field's maximum size as the prefix length.
- Added a test case to test (2) and to check the maximum number of
fields that can exist in the index.
- While loading the indexes, InnoDB fails to clear the index if the
system record has TEMP_INDEX_PREFIX_STR. This led to an ASAN failure.
The leak was introduced by commit cc2ddde4d8
(MDEV-18518)
When we enable writes after a Galera SST, srv_n_fil_crypt_threads needs
to be temporarily set to 0 (as was done when writes were disabled)
to make sure that the encryption threads will really be started based
on the old value of the encryption thread count.
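A hedged sketch of the pattern as I understand it (the setter name below
is made up, not the server's actual API): the thread-count setter only
acts when the value changes, so the count has to pass through 0 before
the saved value can take effect again.

    #include <cstdio>

    // Toy model: the setter only starts/stops worker threads when the
    // requested count differs from the current bookkeeping value.
    static unsigned n_crypt_threads= 0;

    static void set_crypt_thread_cnt(unsigned n)   // made-up name
    {
      if (n == n_crypt_threads)
        return;                                    // no change: nothing happens
      std::printf("adjusting encryption threads: %u -> %u\n",
                  n_crypt_threads, n);
      n_crypt_threads= n;
    }

    int main()
    {
      const unsigned saved= 4;     // value in effect before writes were disabled
      n_crypt_threads= saved;      // stale bookkeeping: no threads actually run
      set_crypt_thread_cnt(saved); // no-op, nothing is started
      set_crypt_thread_cnt(0);     // reset first, as when writes were disabled
      set_crypt_thread_cnt(saved); // now the threads are really started
      return 0;
    }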
Fix provided by Marko Mäkelä.
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains must
be allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
When a table is reopened from the cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:
1. As noted above, the expr update must be done on expr_arena, as new
items may be created. This was a bug in fix_session_expr_for_read() and
was just not reproduced because there was no second refix. Now the refix
is done in more cases, so it does reproduce. Tests affected: vcol.binlog
2. The name resolution context must also be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and must not fail the expr update.
sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like
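For illustration only, a standalone sketch of the shape of such a context
(all the names below are invented; the real Vcol_expr_context saves and
restores the corresponding THD state):

    #include <cstdint>

    // Toy THD with only the three pieces of state that the requirements
    // above touch.
    struct toy_thd
    {
      const char *active_arena;   // 1. where newly created items land
      const char *name_scope;     // 2. which tables name resolution sees
      uint64_t    sql_mode;       // 3. e.g. MODE_NO_BACKSLASH_ESCAPES bits
    };

    // RAII guard: switch to the vcol expression's own arena, narrow name
    // resolution to the single table and clear sql_mode for the duration
    // of the refix, then restore everything on scope exit.
    class toy_vcol_expr_context
    {
      toy_thd &thd;
      const char *saved_arena, *saved_scope;
      uint64_t saved_sql_mode;
    public:
      toy_vcol_expr_context(toy_thd &t, const char *expr_arena,
                            const char *single_table)
        : thd(t), saved_arena(t.active_arena), saved_scope(t.name_scope),
          saved_sql_mode(t.sql_mode)
      {
        thd.active_arena= expr_arena;
        thd.name_scope= single_table;
        thd.sql_mode= 0;
      }
      ~toy_vcol_expr_context()
      {
        thd.active_arena= saved_arena;
        thd.name_scope= saved_scope;
        thd.sql_mode= saved_sql_mode;
      }
    };

    int main()
    {
      toy_thd thd{"statement_arena", "all_tables", 0x40};
      {
        toy_vcol_expr_context ctx(thd, "expr_arena", "t1");
        // ... refix the vcol expression here ...
      }
      // thd state is restored here
      return 0;
    }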
Reviewed by: Sergei Golubchik <serg@mariadb.org>
1. Moved the fix_vcol_exprs() call to open_table():
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit from
fix_vcol_exprs() there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
UNION ALL queries are subject to an optimization introduced in MDEV-334
whereby the creation of a temporary table is skipped.
While there is a check for this optimization in Explain_union::print_explain(),
there was no such check in Explain_union::print_explain_json(). This resulted in
printing irrelevant data like:
"union_result": {
"table_name": "<union2,3>",
"access_type": "ALL",
"r_loops": 0,
"r_rows": null
in the case when creation of the temporary table was actually optimized away.
This commit adds a check of whether the temporary table was actually created
during UNION ALL processing and eliminates the printing of the irrelevant data.
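For illustration, a toy sketch of the added check (the type and members
below are simplified stand-ins, not the real Explain_union interface):

    #include <cstdio>

    // Simplified stand-in for the explain structure.
    struct toy_explain_union
    {
      bool tmp_table_created;   // false when the MDEV-334 optimization fired

      void print_explain_json() const
      {
        if (!tmp_table_created)
          return;               // same guard print_explain() already had
        std::printf("\"union_result\": {\n");
        std::printf("  \"table_name\": \"<union2,3>\",\n");
        std::printf("  \"access_type\": \"ALL\"\n");
        std::printf("}\n");
      }
    };

    int main()
    {
      toy_explain_union with_tmp{true}, union_all_no_tmp{false};
      with_tmp.print_explain_json();         // prints the union_result node
      union_all_no_tmp.print_explain_json(); // prints nothing
      return 0;
    }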
The test encryption.innodb-redo-nokeys did not actually test
recovery without valid keys, because due to the setting
innodb_encrypt_tables, InnoDB refused to start up at all,
without even attempting any crash recovery.
fil_ibd_load(): If the encryption key is not available,
refuse to load the file.
A few PERFORMANCE_SCHEMA instrumentation keys were not exposed
in all_innodb_mutexes[]. Let us remove them.
The keys fts_pll_tokenize_mutex_key and read_view_mutex_key were
internally used. Let us make ReadView::m_mutex use the simpler
and smaller srw_mutex, hoping to improve memory access patterns.
trx_lock_t: Remove byte pad[256] and use
alignas(CPU_LEVEL1_DCACHE_LINESIZE) instead.
trx_t: Declare n_ref (the first member) aligned at cache line.
Pool: Assert that the sizes are multiples of
CPU_LEVEL1_DCACHE_LINESIZE, and invoke an aligned allocator.
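For illustration, a compilable sketch of the padding change (64 below is
a stand-in for CPU_LEVEL1_DCACHE_LINESIZE, which the real build
determines per platform):

    #include <cstddef>
    #include <new>

    static constexpr std::size_t CACHE_LINE= 64; // stand-in for the real constant

    // Before: manual padding appended to the structure.
    struct lock_state_padded
    {
      int state;
      char pad[256];
    };

    // After: let the compiler align (and pad) the whole object to a cache line.
    struct alignas(CACHE_LINE) lock_state_aligned
    {
      int state;
    };

    // sizeof of an over-aligned type is a multiple of its alignment, which is
    // what a pool of such objects can assert before using an aligned allocator.
    static_assert(sizeof(lock_state_aligned) % CACHE_LINE == 0,
                  "pool element must be a multiple of the cache line size");

    int main()
    {
      void *p= ::operator new(sizeof(lock_state_aligned),
                              std::align_val_t{CACHE_LINE}); // aligned allocation
      ::operator delete(p, std::align_val_t{CACHE_LINE});
      return 0;
    }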
Access the all_charsets[] array directly in a hot loop.
This avoids get_charset(), which increments a shared counter via
my_collation_statistics_inc_use_count(), causing a scalability issue.
Instead, call get_charset() when a table is opened (and InnoDB
fills in dtype_t values) - this is enough, as a charset is marked
ready (MY_CS_READY) only once, on the first get_charset() call;
after that it can be accessed directly.
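For illustration, a toy model of the pattern (the array, flag and counter
below are stand-ins for all_charsets[], MY_CS_READY and the collation use
counter, not the real mysys definitions):

    #include <atomic>
    #include <cstdio>

    // Toy collation entry; 'ready' stands in for the MY_CS_READY flag.
    struct toy_charset
    {
      bool ready;
      unsigned mbmaxlen;
    };

    static toy_charset toy_all_charsets[256];
    static std::atomic<unsigned long> use_count; // shared counter that hurt scalability

    // Heavyweight lookup: initializes the charset once and bumps the shared
    // statistics counter.  Call this when the table is opened.
    static toy_charset *toy_get_charset(unsigned id)
    {
      use_count.fetch_add(1, std::memory_order_relaxed);
      if (!toy_all_charsets[id].ready)
      {
        toy_all_charsets[id].mbmaxlen= 4;  // pretend to load the collation
        toy_all_charsets[id].ready= true;
      }
      return &toy_all_charsets[id];
    }

    int main()
    {
      toy_get_charset(45);                      // once, at "table open" time
      unsigned long total= 0;
      for (int i= 0; i < 1000000; i++)          // hot loop: read the array directly,
        total+= toy_all_charsets[45].mbmaxlen;  // no shared-counter update per row
      std::printf("%lu %lu\n", total, use_count.load());
      return 0;
    }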
This also fixes a potential bug in InnoDB. It used to access charsets
directly when filling dtype_t values. This wasn't preceded by any
get_charset() call inside InnoDB. Normally the server would call
get_charset() on reading the frm, but if the table was first opened
from a purge thread InnoDB could, theoretically, access a not-ready
charset in innobase_get_cset_width().
on_table_fill_finished() should always be done at the end of open()
even if result is not Select_materialize but (for example)
Select_fetch_into_spvars.
This commit fixes problems with parsing ipv6 addresses given via
the wsrep_sst_receive_address and wsrep_node_address options.
Also, this commit removes extra lines in the configuration files
in the mtr test suites for Galera related to these parameters.
don't initialize error_log_handler_list in set_handlers()
* error_log_handler_list is initialized to LOG_FILE early, in init_base()
* set_handlers always reinitializes it to LOG_FILE, so it's pointless
* after init_base() concurrent threads start using sql_log_warning,
so a later set_handlers() call shouldn't modify error_log_handler_list
without some protection
The problem was not with the server behavior;
it was that wrong test results had been recorded.
Enabling both type_set and type_enum.
type_set.result was OK. The test did not fail.
type_enum.result was incorrectly recorded in this commit:
eb483c5181
Recording correct results.
Remove unneeded paths from mariadb-server-10.6.postinst, as the relevant
directory is already in the exported PATH and the hard-coded path triggers
'command-with-path-in-maintainer-script' in lintian checks.
Change `exit;` to `exit(1);` in the function `usage()`.
When we try to run mtr with a wrong option, `usage()` is called with the wrong option as its argument. In this case, because the function calls `exit` in the first if statement, we get exit status 0.
`m_status == DA_ERROR' failed on SELECT after setting tmp_disk_table_size.
Analysis: The mismatch in the number of warnings between "194 warnings" and
"64 rows in set" is because of the max_error_count variable, which has a
default value of 64.
As for the corrupted tables, the error that occurs because of an insufficient
tmp_disk_table_size is not reported correctly and we continue to
execute the statement. But because the previous error (about the table being
full) is not reported correctly, this error moves up the stack and is
wrongly reported as a parsing error later on, while parsing the frm file of
one of the information schema tables. This parsing error produces the
corrupted table error.
As for the InnoDB error, it occurs even when tmp_disk_table_size is at its
default (not insufficient), but the internal error handler takes care of it
and the error doesn't show. But when tmp_disk_table_size is insufficient,
the fatal error that wasn't reported correctly moves up the stack, so the
internal error handler is not called and the errors show.
Fix: Report the error correctly.
This is a temporary fix for 10.2.
This problem was permanently fixed in 10.9 under the terms of MDEV-27743.
This patch should propagate up to 10.8 and then be null-merged into 10.9.