Before this change the unix socket auth plugin returned success only
when the OS user of the socket matched the MariaDB user name.
The authentication string was ignored.
Now, if an authentication string is defined in the `unix_socket`
authentication rule, the authentication string is compared with the
socket's user name instead, and the plugin returns success if they
match.
Make the plugin fill in the @@external_user variable.
This change is similar to the MySQL commit
https://github.com/mysql/mysql-server/commit/6ddbc58e.
However, there is one difference from the above commit:
- MySQL allows the connection when the Unix user matches either the
  DB user name or the authentication string.
- MariaDB only allows the Unix user that matches the authentication
  string to connect, if an authentication string is defined.
This is because allowing both Unix user names has risks and could not
handle the case where a customer wants to allow only one single Unix
user to connect, one that does not match the DB user name.
If a DB user is created with multiple unix_socket options, for example:
`create user A identified via unix_socket as 'B' or unix_socket as 'C';`
then both Unix users B and C are accepted.
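As a usage sketch (the DB user and OS account names are made up), a
rule that lets only the OS user alice connect as the DB user app would
look like:

  CREATE USER 'app'@'localhost' IDENTIFIED VIA unix_socket AS 'alice';
  -- connecting over the unix socket as OS user 'alice' succeeds;
  -- OS user 'app' is rejected, since an authentication string is defined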
The existing MTR test `plugins.unix_socket` is not affected.
Also add a new MTR test to verify authentication with an
authentication string. See the MTR test cases for
supported/unsupported cases.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
in bintars the server is linked with wolfssl, while the connector
is linked with gnutls. Thus client_ed25519.so gets a gnutls
dependency and unresolved symbols, and it cannot be loaded into the
server, where gnutls symbols aren't present.
linking the plugin statically with gnutls fixes that and the test passes.
but when such a plugin is loaded into the client, the client gets
two copies of gnutls - they conflict and ssl doesn't work at all.
let's detect this and disable the test for now.
note that:
* unit.conc_tls is broken in mtr
* schannel now doesn't fail on invalid ca path unless
--ssl-verify-server-cert is used. openssl still does.
Problem:
========
- After commit ada1074bb1 (MDEV-14398),
fil_crypt_set_encrypt_tables() iterates through all tablespaces to
fill the default_encrypt tables list. This was a trigger to
encrypt or decrypt when the key rotation age is set to 0. But import
tablespace calls fil_crypt_set_encrypt_tables() unnecessarily;
the only motivation for the call is to signal the encryption threads.
Fix:
====
ha_innobase::discard_or_import_tablespace(): Remove the
fil_crypt_set_encrypt_tables() call and add the imported tablespace
to the default_encrypt list if necessary.
This test ran `show status like '%Created_tmp%'`, which captures
`Created_tmp_files` as well as the intended `Created_tmp_tables`.
In 11.5, the former got moved to `FLUSH GLOBAL`, so when testing, the
result can now be random.
This fix makes the test use only `Created_tmp_tables`.
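A sketch of the tightened check:

  SHOW STATUS LIKE 'Created_tmp_tables';
  -- no longer matches Created_tmp_files, so the result is deterministic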
Remove an assert added by the fix for MDEV-34417. A BNL-H join can be
used with prefix keys. This happens when there are real prefix indexes
on the equi-join columns (although it probably doesn't make a lot of
sense). Anyway, remove the assert. The code receives properly
truncated key values for hashing/comparison, so it can handle them
just fine.
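A hypothetical repro sketch (the table names are made up, and the
join_cache_level value used here to force a hashed block-nested-loop
join is an assumption):

  CREATE TABLE t1 (a VARCHAR(100), KEY (a(10)));
  CREATE TABLE t2 (a VARCHAR(100));
  SET join_cache_level = 3;  -- flat BNL-H
  SELECT * FROM t1 STRAIGHT_JOIN t2 ON t1.a = t2.a;
  -- the equi-join column t1.a has a prefix index, so the hash join
  -- receives truncated key values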
- commit 85db534731 (MDEV-33400)
retains the instantness in the table definition after discarding the
tablespace. So there is no need to assign n_core_null_bytes
during instant table preparation unless it has not been
initialized yet.
- During the copy algorithm, InnoDB should use a bulk insert operation
instead of row-by-row inserts. By doing this, the copy algorithm
can build indexes effectively. This optimization is disabled
for temporary tables, versioned tables and tables which have
foreign key relations.
Introduced the variable innodb_alter_copy_bulk to allow
the bulk insert operation for the copy alter operation
inside InnoDB. It is enabled by default.
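A minimal usage sketch (the table and column names are made up, and a
global scope for the variable is assumed):

  SET GLOBAL innodb_alter_copy_bulk = ON;  -- the default
  ALTER TABLE t1 MODIFY b VARCHAR(200), ALGORITHM=COPY;
  -- rows are buffered and applied as a bulk insert instead of one by one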
ha_innobase::extra(): HA_EXTRA_END_ALTER_COPY mode tries to apply
the buffered bulk insert operation and updates the non-persistent
table stats.
row_merge_bulk_t::write_to_index(): Update stat_n_rows after
applying the bulk insert operation.
row_ins_clust_index_entry_low(): In case of the copy algorithm,
switch to the bulk insert operation.
copy_data_error_ignore(): Handles the error while copying
the data from the source to the target file.
Statements affected by this bug need all of the following to be true:
1) a derived table or view whose specification contains a set
operation at the top level.
2) a grouping operator (group by/having) operating on a column alias
other than in the first select of the union/intersect.
3) an outer condition that will be pushed into all selects in this
union/intersect, either into the where or having clause.
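A hypothetical sketch of an affected statement, with made-up names,
meeting all three conditions:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v AS
    SELECT a AS x FROM t1
    UNION
    SELECT b AS x FROM t2 GROUP BY x HAVING x > 0;
  -- the outer condition is pushed into both selects of the union:
  SELECT * FROM v WHERE x < 10;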
When pushing a condition into all selects of a unit with more than one
select, pushdown_cond_for_derived() renames items so we can re-use the
condition being pushed.
These names need to be saved and reset for correct name resolution on
second execution of prepared statements.
Reviewed by Igor Babaev (igor@mariadb.com)
(With trivial fixes by sergey@mariadb.com)
Added the option fix_innodb_cardinality to
optimizer_adjust_secondary_key_costs.
Using fix_innodb_cardinality disables the 'divide by 2' of
rec_per_key_int in InnoDB that in effect doubles the cardinality of
secondary keys.
This has the biggest effect for indexes where a few rows have the same
key value. Using this may also cause table scans for very small tables
(which in some cases may be better than an index scan).
The user-visible effect is that 'SHOW INDEX FROM table_name' will for
InnoDB show the true cardinality (and not 2x the real value). It will
also allow the optimizer to choose a better index in some cases, as
the division by 2 could have a bad effect for tables with 2-5
identical values per key.
A few notes about using fix_innodb_cardinality:
- It has a direct effect on SHOW INDEX FROM table_name. SHOW INDEX
  will also update the statistics in the table share.
- The effect of fix_innodb_cardinality on query plans or EXPLAIN
  is only visible after the first open of the table. This is why one
  must do a flush tables or use SHOW INDEX for the option to take
  effect.
- Using fix_innodb_cardinality can thus affect all users in their
  query plans if they are using the same tables.
  Because of this, it is strongly recommended to use
  optimizer_adjust_secondary_key_costs=fix_innodb_cardinality mainly
  in configuration files, to not cause issues for other users.
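A sketch of enabling the option at runtime (the table name is made up;
as noted above, a configuration file is the recommended place for this
setting):

  SET GLOBAL optimizer_adjust_secondary_key_costs =
      'fix_innodb_cardinality';
  FLUSH TABLES;         -- or SHOW INDEX; either makes it take effect
  SHOW INDEX FROM t1;   -- now reports the true cardinality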
This commit adds 3 new status variables to 'show all slaves status':
- Master_last_event_time: timestamp of the last event read from the
  master by the IO thread.
- Slave_last_event_time: master timestamp of the last event committed
  on the slave.
- Master_Slave_time_diff: the difference of the above two timestamps.
All the above variables are NULL until the slave has started and has
read one query event from the master that changes data.
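A sketch of reading the new fields (the information_schema table is
described below; its column names are assumed to match the
status-variable names):

  SHOW ALL SLAVES STATUS;
  -- or, via the new table:
  SELECT Master_last_event_time, Slave_last_event_time,
         Master_Slave_time_diff
  FROM information_schema.slave_status;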
- Added information_schema.slave_status, which allows us to remove:
- show_master_info(), show_master_info_get_fields(),
send_show_master_info_data(), show_all_master_info()
- class Sql_cmd_show_slave_status.
- Protocol::store(I_List<i_string_pair>* str_list) as it is not
used anymore.
- Changed old SHOW SLAVE STATUS and SHOW ALL SLAVES STATUS to
use the SELECT code path, as all other SHOW ... STATUS commands.
Other things:
- Xid_log_time is set to the time of commit to allow a slave that
  reads the binary log to calculate Master_last_event_time and
  Slave_last_event_time.
  This is needed as there is no 'exec_time' for row events.
- Fixed Load_log_event to calculate exec_time identically to
  Query_event.
- Updated RESET SLAVE to reset Master/Slave_last_event_time.
- Updated the SQL thread's update on the first transaction read-in to
  only update Slave_last_event_time on group events.
- Fixed possible (unlikely) bugs in the sql_show.cc ...old_format()
  functions if allocation of 'field' would fail.
Reviewed By:
Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
When there are no bounds on the upper or lower part of the window,
it doesn't matter if the type is numeric.
It also doesn't matter how many ORDER BY items there are in the
query.
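A hypothetical example that should be accepted under this rule: both
frame edges are unbounded and the ORDER BY has two non-numeric items
(all names are made up):

  SELECT SUM(a) OVER (ORDER BY s, d
                      RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING) AS total
  FROM t1;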
Reviewers: Sergei Petrunia and Oleg Smirnov
SP instructions, constituting the body of a stored routine, had the
same memory root as the instance of the class sp_head that represents
the stored routine itself. This resulted in memory leaks on re-parsing
a failed statement of a stored routine, in case the statement
re-compilation had to be performed because of changes in the metadata
of tables, triggers, etc. that the stored routine depends on.
To fix this kind of memory leak, every SP instruction requiring access
to a LEX object must do re-parsing of a failed statement on its own
memory root. These memory roots are allocated on sp_head's memory
root, and every instance of the sp_lex_instr class has a pointer to
the allocated memory root in case re-parsing of the corresponding SP
instruction was requested. On every subsequent re-parsing of the
failed statement, the memory allocated on the SP instruction's memory
root is released and the memory root re-initialized. Subsequent memory
allocations taking place on re-parsing the SP instruction's statement
are performed on the dedicated memory root. So, no memory leaks happen
on SP statement re-parsing.
A new runtime diagnostic introduced with MDEV-34490 has detected
that `Item_int_with_ref` incorrectly returns an instance of its
ancestor class `Item_int`. This commit fixes that.
In addition, this commit reverts a part of the diagnostic related
to `clone_item()` checks. As it turned out, `clone_item()` is not required
to return an object of the same class as the cloned one. For example,
look at `Item_param::clone_item()`: it can return objects of `Item_null`,
`Item_int`, `Item_string`, etc, depending on the object state.
So the runtime type diagnostic is not applicable to `clone_item()` and
is disabled with this commit.
As similar diagnostic failures are expected to appear again
in the future, this commit introduces a new test file in the main
suite, item_types.test; new test cases may be added to this file.
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
When mysqldump is run to dump the `mysql` system database, it generates
INSERT statements into the table `mysql.gtid_slave_pos`.
After running the backup script, those inserts did not produce the
expected gtid state on the slave. In particular, the maximum of
mysql.gtid_slave_pos.sub_id did not make it into
rpl_global_gtid_slave_state.last_sub_id,
an in-memory object that is supposed to match the current state of
the table. And that was regardless of whether the --gtid option was
specified or not. Later, when the backup recipient server starts as a
slave in *non-gtid* mode, this desynchronization may lead to a
duplicate key error.
This effect is corrected for --gtid mode mysqldump/mariadb-dump only,
as follows. The fixes ensure the insert block of the dump
script is followed by a "summing-up" SET @@global.gtid_slave_pos
assignment.
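A sketch of the resulting dump tail in --gtid mode (all values are
made up; mysql.gtid_slave_pos columns are domain_id, sub_id,
server_id, seq_no):

  INSERT INTO mysql.gtid_slave_pos VALUES (0, 5, 1, 100);
  -- the "summing-up" assignment emitted after the insert block:
  SET @@global.gtid_slave_pos = '0-1-100';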
For the implementation part, note that a deferred print-out of
SET gtid_slave_pos and the associated comments is preferred over
relocating the entire if (opt_master,slave_data &&
do_show_master,slave_status) ... blocks, because of compatibility
concerns. Namely, an error inside do_show_*() is handled in the new
code the same way, and as early, as before.
A regression test can be run in how-to-reproduce mode as well.
One affected MTR test was observed:
the rpl_mysqldump_slave.result "mismatch" now shows the new deferred
printing of SET gtid_slave_pos in action.
The test was missing a save_master_gtid.inc on the master,
leading to the slave thinking it was in sync after executing
sync_with_master_gtid.inc, despite not having executed the
latest transaction. This skipped transaction, an XA COMMIT,
was supposed to raise an error to be ignored, because its XID could
not be found, yet be thrown out because the replication filters
would filter out the target database. However, if the slave
was able to stop before executing the transaction, then
the replication filter is reset (to empty), and when the
slave is later restarted, that transaction's error would
no longer be ignored.
Additionally, as the test cases added in MDEV-33921 rely
on GTID synchronization, the test cases now force
master_use_gtid=slave_pos for consistency.
For ALTER_PARTITION_ADMIN (CHECK/REPAIR/LOAD INDEX/CACHE INDEX/etc),
partitioning marks affected partitions with the PART_ADMIN state.
The assumption is that the server will call a corresponding
method of ha_partition which will reset the state back to PART_NORMAL.
This assumption is invalid; the server is not required to do so.
Indeed, in CHECK ... FOR UPGRADE the server might decide early that
the table is fine and won't call ha_partition::check(), leaving
partitions in the wrong state. That state will thus leak into the next
statement, confusing the engine about what it is doing (see
ha_partition::create_handler_file()) and causing a crash later.
Let's force all partitions into the PART_NORMAL state after the admin
operation succeeds, in case it did so without consulting the engine.
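A hypothetical repro sketch (the table and partitioning scheme are
made up):

  CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 2;
  CHECK TABLE t1 FOR UPGRADE;  -- may return early, never calling
                               -- ha_partition::check()
  ALTER TABLE t1 ADD PARTITION PARTITIONS 1;  -- next statement sees
                                              -- the stale PART_ADMIN state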
There are 3 diffs in the result:
1) NULL value from SELECT.
Due to incorrect truncation of the hex value, an incorrect value is
written to the view frm instead of the original value. This results in
reading an incorrect value from the frm, so the eventual result is
NULL.
2) 'Name_exp1' in the column name (in gis.test).
This was because the identifier in SELECT is longer than 64
characters, so the 'Name_exp1' alias is also written to the view frm.
3) Diff in EXPLAIN EXTENDED.
This was because the query plan under view protocol doesn't
contain the database name. As a fix, disable view protocol for that
particular query.
This patch fixes two problems:
- The code inside my_strtod_int() in strings/dtoa.c could test the byte
behind the end of the string when processing the mantissa.
Rewriting the code to avoid this.
- The code in test_if_number() in sql/sql_analyse.cc called my_atof()
which is unsafe and makes the called my_strtod_int() look behind
the end of the string if the input string is not 0-terminated.
Fixing test_if_number() to use my_strtod() instead, passing the correct
end pointer.
Gain MySQL compatibility by allowing table aliases in single-table
statements.
This now supports the syntax:
DELETE [delete_opts] FROM tbl_name [[AS] tbl_alias] [PARTITION (partition_name [, partition_name] ...)] ....
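A minimal sketch of the newly supported form (names are made up):

  CREATE TABLE t1 (a INT);
  DELETE FROM t1 AS x WHERE x.a = 1;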
The delete.test is from MySQL commit 1a72b69778a9791be44525501960b08856833b8d
/ Change-Id: Iac3a2b5ed993f65b7f91acdfd60013c2344db5c0.
Co-Author: Gleb Shchepa <gleb.shchepa@oracle.com> (for delete.test)
Reviewed by Igor Babaev (igor@mariadb.com)
With that, it is possible to restore the full "instance" from a backup
made with mariadb-dump --dir.
The patch implements executing DDL (tables, views, triggers) using
statements that are stored in the .sql files created by mariadb-dump
--dir.
Care is taken to create triggers correctly after the data is loaded,
disabling foreign key and unique key checks, etc.
The files are loaded in descending order of datafile size,
to ensure better work distribution when running with the --parallel
option.
In addition to the --dir option, the following options are implemented
for partial restore.
Include-only options:
--database - import one or several databases
--table - import one or several tables
Exclude options:
--ignore-database - ignore one or several databases when importing
--ignore-table - ignore one or several tables when importing
All options above are only valid together with the --dir option,
and can be specified multiple times.
- Fix view-protocol: long expressions in SELECT
list should have "expr AS column_name".
- Also, moved the test from subselect*test to
suite/json/t/json_table.test.
Based on the current logic, objects of classes Item_func_charset and
Item_func_coercibility (responsible for CHARSET() and COERCIBILITY()
functions) are always considered constant.
However, SQL syntax allows their use in a non-constant manner, such as
CHARSET(t1.a), COERCIBILITY(t1.a).
In these cases, the `used_tables()` parameter corresponds to table names
in the function parameters, creating an inconsistency: the item is marked
as constant but accesses tables. This leads to crashes when
conditions with CHARSET()/COERCIBILITY() are pushed into derived tables.
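A hypothetical sketch of the crashing pattern (names are made up; the
group by forces materialization of the derived table so that the
condition is pushed down):

  CREATE TABLE t1 (a VARCHAR(10));
  SELECT * FROM (SELECT a, COUNT(*) AS c FROM t1 GROUP BY a) AS dt
  WHERE CHARSET(dt.a) = 'latin1';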
This commit addresses the issue by setting `used_tables()` to 0 for
`Item_func_charset` and `Item_func_coercibility`. Additionally, the
items now store their return values during the preparation phase and
return them during the execution phase. This ensures that the items do
not call their arguments' methods during execution and are truly
constant.
Reviewer: Alexander Barkov <bar@mariadb.com>
- During XA PREPARE, InnoDB releases the non-exclusive locks,
but it fails to remove the non-exclusive table lock from the
transaction's table locks. In the meantime, the main thread evicts
the table from the LRU cache. While rolling back the XA transaction,
InnoDB iterates through the table locks to check whether it
holds a lock on any system tables, and wrongly assumes the
evicted table is a system table, since its table id is 0.
Fix:
===
During XA PREPARE, remove the table locks of the transaction while
releasing the non-exclusive locks.
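A sketch of the affected sequence (statements are illustrative only):

  XA START 'x1';
  INSERT INTO t1 VALUES (1);
  XA END 'x1';
  XA PREPARE 'x1';   -- non-exclusive locks released, but a stale
                     -- table lock used to remain in the transaction
  -- ... the table may now be evicted from the LRU cache ...
  XA ROLLBACK 'x1';  -- iterating the stale lock could mis-identify
                     -- the evicted table as a system table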