The test requires a larger innodb log file size; this was lost as a
side-effect of d7699c51eb.
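A hedged sketch of how an MTR test typically requests this (file name and
value are illustrative only, not the actual ones from the test):

  # <testname>.opt
  --innodb-log-file-size=200M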
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Clarify confusing comments in the previous commit, and note that the failure
started after push of MDEV-34504.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
- Added plugin_debug.test, multiple_index.test to innodb_fts suite
from mysql-5.7.
- commit c5b28e55f6 removed the warning about InnoDB rebuilding
the table to add FTS_DOC_ID.
- In the multiple_index test case, the MATCH(a) values are smaller
than in MySQL because ROLLBACK updates stat_n_rows.
- The st_mysql_ftparser_boolean_info structure conveys boolean
metadata to the MySQL search engine for every word in the query.
This structure lacks a position value to store the correct
offset of every word, so phrase-search queries in the plugin_debug
test case, run in boolean mode with the simple parser, return
wrong results (a query sketch follows this list).
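For reference, the affected queries are boolean-mode phrase searches that go
through a parser plugin; a hedged sketch (table name and parser name are
illustrative only):

  CREATE TABLE t1 (a TEXT,
                   FULLTEXT INDEX (a) WITH PARSER simple_parser) ENGINE=InnoDB;
  SELECT * FROM t1
  WHERE MATCH(a) AGAINST ('"phrase search"' IN BOOLEAN MODE);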
int wsrep_thd_append_key(THD*, const wsrep_key*, int, Wsrep_service_key_type)
CREATE TABLE [SELECT|REPLACE SELECT] is CTAS, and the idea was that
we force ROW format. However, this was not correctly enforced,
and keys were appended before the wsrep transaction was started.
At THD::decide_logging_format we should force the statement binlog
format to ROW in the CTAS case and produce a warning if the used
binlog format was not ROW.
At ha_innobase::update_row we should not append keys, similarly
to ha_innobase::write_row, if sql_command is SQLCOM_CREATE_TABLE.
Improved error logging in ::write_row, ::update_row and ::delete_row
if the wsrep key append fails.
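For illustration, a hedged sketch of the intended behaviour on a Galera node
(table names are made up; the exact warning text is not reproduced here):

  SET SESSION binlog_format = STATEMENT;
  CREATE TABLE t2 AS SELECT * FROM t1;  -- CTAS: ROW format is enforced
  SHOW WARNINGS;                        -- expect a warning about the forced ROW format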
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
1. It links with ${SSL_LIBRARIES}; in WolfSSL builds that is a static
library, so when such a plugin is loaded there will be two copies of
WolfSSL in the same address space. It breaks the ODR (at least).
2. The plugin can be linked with OpenSSL and the server with WolfSSL, or
vice versa. It might load, but then we'd have both WolfSSL and
OpenSSL in the same process at the same time. Kind of risky.
Fix: link the plugin statically into the server if it's a WolfSSL build
adjust tests to work with static and dynamic parsec
here MSAN complains that
==218853==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x7f84a77c60a3 in _gnutls_rnd_init /tmp/msan/lib/random.c:69:6
#1 0x7f84a77c60a3 in gnutls_rnd /tmp/msan/lib/random.c:168:6
but the line lib/random.c:69 in gnutls-3.7.1 is
69 if (unlikely(!rnd_initialized)) {
and rnd_initialized is declared as
40 static _Thread_local unsigned rnd_initialized = 0;
which apparently MSAN isn't happy with
PARSEC: Password Authentication using Response Signed with Elliptic Curve
A new authentication plugin that uses salted passwords,
key derivation, an extensible password storage format,
and both server- and client-side scrambles.
It signs the response with ed25519, using the stock,
unmodified ed25519 as provided by OpenSSL/WolfSSL/GnuTLS.
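A hypothetical usage sketch (the plugin soname and exact syntax are
assumptions here, to be checked against the final implementation):

  INSTALL SONAME 'auth_parsec';
  CREATE USER 'u1'@'%' IDENTIFIED VIA parsec USING PASSWORD('secret');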
Edited by: Sergei Golubchik
Assertion `table->field[0]->ptr >= table->record[0] &&
table->field[0]->ptr <= table->record[0] + table->s->reclength' failed in
handler::assert_icp_limitations.
table->move_fields has some limitations:
1. It cannot be used in cascade
2. It should always have a restoring pair.
Rule 1 is covered by assertions in handler::assert_icp_limitations
and handler::ptr_in_record (commit 30894fe9a9).
Rule 2 should be manually maintained with care. Hopefully, the rule 1 assertions
may sometimes help as well.
In ha_myisam::repair, both rules are broken. table->move_fields is used
asymmetrically there: it is set on every param->fix_record call
(i.e. in compute_vcols) but is restored only once, at the end of repair.
The reason for updating the field pointers on every call is that compute_vcols
can (supposedly) be called in parallel, that is, with the same table but
different records.
The condition for "unmoving" the pointers in ha_myisam::restore_vcos_after_repair
is incorrect when stored vcols are available, and MyISAM stores a VIRTUAL field
if it is the only field in the table (the record cannot have zero length).
This patch solves the problem by "unmoving" the pointers symmetrically, in
compute_vcols. That is, both rules will be maintained.
Before this change, the unix_socket auth plugin returned true only when
the OS socket user id matched the MariaDB user name.
The authentication string was ignored.
Now, if an authentication string is defined within the `unix_socket`
authentication rule, the authentication string is compared
with the socket's user name, and the plugin returns a
positive result if they match.
Make the plugin fill in the @@external_user variable.
This change is similar to the MySQL commit
https://github.com/mysql/mysql-server/commit/6ddbc58e.
However, there is one difference from the above commit:
- For MySQL, a Unix user matching either the DB user name or the
authentication string is allowed to connect.
- For MariaDB, if an authentication string is defined, only a Unix user
matching the authentication string is allowed to connect.
This is because allowing both Unix user names has risks and cannot
handle the case where a customer wants to allow only a single Unix user,
one that does not match the DB user name, to connect.
If the DB user is created with multiple unix_socket options, for example:
`create user A identified via unix_socket as 'B' or unix_socket as 'C';`
then both Unix users B and C are accepted.
The existing MTR test `plugins.unix_socket` is not impacted.
Also added a new MTR test to verify authentication with an authentication
string. See the MTR test cases for supported/unsupported cases, and the
usage sketch below.
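For illustration, a hedged sketch of the new behaviour (user and OS account
names are made up):

  -- OS user 'deploy' may connect as DB user 'app' over the unix socket:
  CREATE USER 'app'@'localhost' IDENTIFIED VIA unix_socket AS 'deploy';
  -- Without an authentication string the old rule still applies:
  -- the OS user must match the DB user name ('deploy' here).
  CREATE USER 'deploy'@'localhost' IDENTIFIED VIA unix_socket;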
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
In bintars the server is linked with WolfSSL, while the connector
is linked with GnuTLS. Thus client_ed25519.so gets a GnuTLS
dependency and unresolved symbols, and it cannot be loaded into the
server, because GnuTLS symbols aren't present there.
Linking the plugin statically with GnuTLS fixes that and the test passes.
But when such a plugin is loaded into the client, the client gets
two copies of GnuTLS - they conflict and SSL doesn't work at all.
Let's detect this and disable the test for now.
Modified the node configuration with longer timeouts for the suspect,
inactive, install and wait_prim timeouts. Increased
node_1's weight to keep it in the primary component when the
other nodes are voted out (see the configuration sketch below).
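A hedged sketch of the kind of settings involved (option values are
illustrative, not the ones actually used by the test):

  [mysqld.1]
  wsrep_provider_options='evs.suspect_timeout=PT30S;evs.inactive_timeout=PT45S;evs.install_timeout=PT45S;pc.wait_prim_timeout=PT60S;pc.weight=3'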
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Replication of MyISAM and Aria DML is experimental and best
effort only. An earlier change made INSERT ... SELECT on both
MyISAM and Aria replicate using TOI and STATEMENT
replication. Replication should happen only if the user
has set the needed wsrep_mode setting, e.g. as sketched below.
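A hedged sketch of the setting involved (the exact combination of flags the
tests expect may differ):

  SET GLOBAL wsrep_mode = 'REPLICATE_MYISAM,REPLICATE_ARIA';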
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Fixed the used configuration and added a suppression for a warning
message. Test case changes only.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Modified the test configuration file to use wsrep_sync_wait
to make sure committed transactions are replicated before the
next operation (see the sketch below).
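A hedged sketch of the causality wait (the bitmask actually used by the test
configuration may differ):

  SET SESSION wsrep_sync_wait = 15;  -- wait for replication before reads/writes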
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
note that:
* unit.conc_tls is broken in mtr
* schannel now doesn't fail on invalid ca path unless
--ssl-verify-server-cert is used. openssl still does.
Reason:
======
- InnoDB fails to load the instant alter table metadata from the
clustered index while loading the table definition.
The reason is that the InnoDB metadata BLOB has a column length
that exceeds the maximum fixed-length column size.
Fix:
===
- InnoDB should treat a long fixed-length column as a variable-length
field that needs external storage while initializing
the field map for the instant alter operation.
Problem:
========
- After commit ada1074bb1 (MDEV-14398),
fil_crypt_set_encrypt_tables() iterates through all tablespaces to
fill the default_encrypt tables list. This was a trigger to
encrypt or decrypt when the key rotation age is set to 0. But import
tablespace calls fil_crypt_set_encrypt_tables() unnecessarily.
The motivation for the call is to signal the encryption threads.
Fix:
====
ha_innobase::discard_or_import_tablespace: Remove the
fil_crypt_set_encrypt_tables() call and add the imported tablespace
to the default encrypt list if necessary.
- commit 85db534731 (MDEV-33400)
retains the instantness in the table definition after discarding the
tablespace. So there is no need to assign n_core_null_bytes
during instant table preparation unless it has not been
initialized.
- During the copy algorithm, InnoDB should use bulk insert operations
instead of row-by-row inserts. By doing this, the copy algorithm
can build indexes effectively. This optimization is disabled
for temporary tables, versioned tables and tables which have
foreign key relations.
Introduced the variable innodb_alter_copy_bulk to allow
the bulk insert operation for the copy alter operation
inside InnoDB. This is enabled by default (see the sketch below).
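A hedged usage sketch (the table name is made up):

  SET GLOBAL innodb_alter_copy_bulk = ON;  -- the default
  ALTER TABLE t1 FORCE, ALGORITHM=COPY;    -- the rebuild now uses bulk inserts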
ha_innobase::extra(): HA_EXTRA_END_ALTER_COPY mode tries to apply
the buffered bulk insert operation and updates the non-persistent
table stats.
row_merge_bulk_t::write_to_index(): Update stat_n_rows after
applying the bulk insert operation.
row_ins_clust_index_entry_low(): In the case of the copy algorithm,
switch to the bulk insert operation.
copy_data_error_ignore(): Handles the error while copying
the data from the source to the target file.
(With trivial fixes by sergey@mariadb.com)
Added the option fix_innodb_cardinality to optimizer_adjust_secondary_key_costs.
Using fix_innodb_cardinality disables the 'divide by 2' of rec_per_key_int
in InnoDB that in effect doubles the Cardinality for secondary keys.
This has the biggest effect for indexes where a few rows have the same key
value. Using this may also cause table scans for very small tables (which
in some cases may be better than an index scan).
The user-visible effect is that 'SHOW INDEX FROM table_name' will for
InnoDB show the true Cardinality (and not 2x the real value). It will
also allow the optimizer to choose a better index in some cases, as the
division by 2 could have a bad effect for tables with 2-5 identical values
per key.
A few notes about using fix_innodb_cardinality:
- It has a direct effect on SHOW INDEX FROM table_name. SHOW INDEX
will also update the statistics in the table share.
- The effect of fix_innodb_cardinality on query plans or EXPLAIN
is only visible after the first open of the table. This is why one must
do a FLUSH TABLES or use SHOW INDEX for the option to take effect.
- Using fix_innodb_cardinality can thus affect all users in their query
plans if they are using the same tables.
Because of this, it is strongly recommended to set
optimizer_adjust_secondary_key_costs=fix_innodb_cardinality mainly
in configuration files, to not cause issues for other users
(see the sketch below).
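A hedged sketch of a quick check (normally the option would be set in the
configuration file instead; the table name is made up):

  SET GLOBAL optimizer_adjust_secondary_key_costs = 'fix_innodb_cardinality';
  FLUSH TABLES;        -- or SHOW INDEX, so the new statistics take effect
  SHOW INDEX FROM t1;  -- Cardinality now shows the true value, not 2x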
This commit adds 3 new status variables to 'SHOW ALL SLAVES STATUS':
- Master_last_event_time: timestamp of the last event read from the
master by the IO thread.
- Slave_last_event_time: master timestamp of the last event committed
on the slave.
- Master_Slave_time_diff: the difference between the above two timestamps.
All the above variables are NULL until the slave has started and the
slave has read one query event from the master that changes data.
- Added information_schema.slave_status, which allows us to remove:
- show_master_info(), show_master_info_get_fields(),
send_show_master_info_data(), show_all_master_info()
- class Sql_cmd_show_slave_status.
- Protocol::store(I_List<i_string_pair>* str_list), as it is not
used anymore.
- Changed the old SHOW SLAVE STATUS and SHOW ALL SLAVES STATUS to
use the SELECT code path, like all other SHOW ... STATUS commands
(see the sketch below).
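For illustration, a hedged sketch of how the new information can be inspected
(the information_schema column set is assumed to mirror the fields of SHOW ALL
SLAVES STATUS):

  SHOW ALL SLAVES STATUS\G
  SELECT * FROM information_schema.slave_status;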
Other things:
- Xid_log_time is set to the time of commit, to allow a slave that reads the
binary log to calculate Master_last_event_time and
Slave_last_event_time.
This is needed as there is no 'exec_time' for row events.
- Fixed that Load_log_event calculates exec_time identically to
Query_event.
- Updated RESET SLAVE to reset Master/Slave_last_event_time.
- Updated the SQL thread's update on the first transaction read-in to
only update Slave_last_event_time on group events.
- Fixed possible (unlikely) bugs in the sql_show.cc ...old_format() functions
if allocation of 'field' would fail.
Reviewed By:
Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
When there are no bounds on the upper or lower part of the window frame,
it doesn't matter whether the type is numeric.
It also doesn't matter how many ORDER BY items there are in the
query. A sketch of such a query follows.
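A hedged sketch of the kind of query this concerns (table and columns are made
up; the ORDER BY column is intentionally non-numeric):

  SELECT a, SUM(b) OVER (ORDER BY a
                         RANGE BETWEEN UNBOUNDED PRECEDING
                                   AND UNBOUNDED FOLLOWING) AS s
  FROM t1;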
Reviewers: Sergei Petrunia and Oleg Smirnov
When mysqldump is run to dump the `mysql` system database, it generates
INSERT statements into the table `mysql.gtid_slave_pos`.
After running the backup script,
those inserts did not produce the expected GTID state on the slave. In
particular, the maximum of mysql.gtid_slave_pos.sub_id did not make it
into
rpl_global_gtid_slave_state.last_sub_id,
an in-memory object that is supposed to match the current state of the
table. And that was regardless of whether the --gtid option was specified
or not. Later, when the backup recipient server starts as a slave
in *non-gtid* mode, this desynchronization may lead to a duplicate key
error.
This effect is corrected for --gtid mode mysqldump/mariadb-dump only,
as follows. The fix ensures the INSERT block of the dump
script is followed by a "summing-up" SET @@global.gtid_slave_pos
assignment.
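Roughly, the relevant part of the dump should now look like this sketch
(domain/server ids and sequence numbers are made up):

  INSERT INTO mysql.gtid_slave_pos VALUES (0, 5, 1, 100);
  -- ... more rows ...
  SET @@global.gtid_slave_pos = '0-1-100';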
For the implementation part, note that a deferred print-out of
SET-gtid_slave_pos and associated comments is preferred over relocating
the entire blocks if (opt_master,slave_data &&
do_show_master,slave_status) ... because of compatibility
concerns. Namely, an error inside do_show_*() is handled in the new code
the same way, and as early, as before.
A regression test can be run in how-to-reproduce mode as well.
One affected MTR test was observed:
the rpl_mysqldump_slave.result "mismatch" now shows the new deferred
printing of SET-gtid_slave_pos policy in action.