Updated ha_mroonga::storage_check_if_supported_inplace_alter to support
new ALTER TABLE flags.
This fixes failing tests:
mroonga/storage.alter_table_add_index_unique_duplicated
mroonga/storage.alter_table_add_index_unique_multiple_column_duplicated
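As background, a rough sketch of the pattern such a change follows is shown
below; the flag names and the result enum are hypothetical stand-ins, not the
real storage-engine API constants:

  // Hypothetical, simplified sketch of an in-place ALTER support check:
  // the handler compares the requested operation flags against the mask
  // of operations it knows how to perform without a table copy.
  #include <cstdint>

  enum inplace_result { INPLACE_SUPPORTED, INPLACE_NOT_SUPPORTED };

  // Stand-in flag values, not the server's real ALTER_* constants.
  static const uint64_t FLAG_ADD_INDEX=        1ULL << 0;
  static const uint64_t FLAG_ADD_UNIQUE_INDEX= 1ULL << 1;  // newly accepted

  static const uint64_t SUPPORTED_INPLACE_FLAGS=
    FLAG_ADD_INDEX | FLAG_ADD_UNIQUE_INDEX;

  static inplace_result check_if_supported_inplace_alter(uint64_t requested)
  {
    // Anything outside the supported mask forces a copying ALTER TABLE.
    if (requested & ~SUPPORTED_INPLACE_FLAGS)
      return INPLACE_NOT_SUPPORTED;
    return INPLACE_SUPPORTED;
  }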
According to commit ea56841997
the stack normally grows downwards, except on HP PA-RISC where
it grows upwards. Because determining the stack direction is not
possible in a portable way, let us determine the default STACK_DIRECTION
in CMake based on the ISA.
On clang 16.0.6 running on and targeting AMD64, STACK_DIRECTION=1 is
being incorrectly detected, causing the failure of a number of tests.
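For illustration, the classic runtime probe for stack growth direction looks
roughly like the sketch below. It is exactly the kind of check that cannot be
trusted: comparing addresses of objects in different stack frames is
unspecified behaviour, and inlining or reordering can flip the result, which
is why the default is now chosen per ISA instead.

  // Unreliable stack-direction probe, shown only to illustrate the problem.
  #include <cstdio>

  static int stack_direction_probe(volatile char *caller_local)
  {
    volatile char callee_local;
    // 1 = stack grows upwards, -1 = stack grows downwards
    return (&callee_local > caller_local) ? 1 : -1;
  }

  int main()
  {
    volatile char caller_local;
    printf("probed STACK_DIRECTION=%d\n",
           stack_direction_probe(&caller_local));
    return 0;
  }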
row_vers_vc_matches_cluster(): Invoke dtype_get_at_most_n_mbchars()
to extract the correct number of bytes corresponding to the number
of characters in a virtual column prefix index, just like we do in
row_sel_sec_rec_is_for_clust_rec().
The test case would occasionally reproduce the failure when this
fix is not present.
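As a standalone illustration (not the InnoDB function itself) of what
"extract the correct number of bytes" means for a character-defined prefix
over a multi-byte character set, consider this sketch:

  // The prefix length is defined in characters, so the byte length must be
  // computed by walking the encoded string, not by multiplying blindly.
  #include <cstddef>
  #include <cstdio>
  #include <cstring>

  // Byte length of at most n_chars UTF-8 characters, capped at data_len.
  static size_t utf8_prefix_bytes(const char *s, size_t data_len,
                                  size_t n_chars)
  {
    size_t bytes= 0, chars= 0;
    while (bytes < data_len && chars < n_chars)
    {
      unsigned char c= (unsigned char) s[bytes];
      size_t char_len= c < 0x80 ? 1 : c < 0xE0 ? 2 : c < 0xF0 ? 3 : 4;
      bytes+= char_len;
      chars++;
    }
    return bytes < data_len ? bytes : data_len;
  }

  int main()
  {
    const char *s= "ab\xC3\xA9" "cd";                   // 'a' 'b' 'é' 'c' 'd'
    printf("%zu\n", utf8_prefix_bytes(s, strlen(s), 3)); // 4 bytes for 3 chars
    return 0;
  }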
On creation of a VIEW that depends on a stored routine, an instance of
the class Item_func_sp is allocated on the memory root of the SP statement.
This happens because mysql_make_view() calls the method
THD::activate_stmt_arena_if_needed()
before parsing the definition of the view.
On the other hand, when sp_head's rcontext is created, the instance of
the class Field referenced by the data member
Item_func_sp::result_field
is allocated on the Item_func_sp's Query_arena (call arena) that is set up
inside the method
Item_sp::execute_impl
just before calling the method
sp_head::execute_function()
On return from the method sp_head::execute_function(), all items allocated
on the Item_func_sp's Query_arena are released and its memory root is freed
(see the implementation of the method Item_sp::execute_impl). As a
consequence, the pointer
Item_func_sp::result_field
references deallocated memory. Later, when the method
sp_head::execute
cleans up the items allocated for the just-executed SP instruction, the
method Item_func_sp::cleanup is invoked and tries to delete the object
referenced by the data member Item_func_sp::result_field, which points to
already deallocated memory; this results in abnormal server termination.
To fix the issue, the currently active arena should not be switched to
a statement arena inside the function mysql_make_view(), which is invoked
indirectly by the method sp_head::rcontext_create. This is implemented by
introducing the new Query_arena state STMT_SP_QUERY_ARGUMENTS, which is set
when an explicit Query_arena is created for placing SP arguments and other
caller-side items used during SP execution. The method
THD::activate_stmt_arena_if_needed() then checks the Query_arena's state and
returns immediately without switching to the statement's arena.
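A minimal sketch of the early-return pattern described above, with simplified
stand-in types (this is not the actual server code):

  // Simplified sketch; the real THD/Query_arena classes are much larger.
  #include <cstddef>

  enum arena_state { STMT_REGULAR, STMT_SP_QUERY_ARGUMENTS };

  struct Query_arena { arena_state state; };

  struct THD : Query_arena {
    Query_arena *stmt_arena;   // per-statement arena

    // Switch allocations to the statement arena unless the currently
    // active arena was created explicitly for SP arguments; in that case
    // return at once so SP-lifetime items stay on the call arena.
    Query_arena *activate_stmt_arena_if_needed(Query_arena *backup) {
      if (state == STMT_SP_QUERY_ARGUMENTS)
        return NULL;                  // new behaviour: do not switch
      // (the original switching logic, omitted here, would follow)
      (void) backup;
      return stmt_arena;
    }
  };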
mariadb-install-db --auth-root-authentication-method=normal created 4
root accounts by default, but only two of these had PROXY privilege
granted.
Running mariadb-install-db (with the default option
--auth-root-authentication-method=socket) as a non-root user also didn't
grant the PROXY privilege to the created nonroot@localhost user.
To fix this, in mysql_system_tables_data.sql, we re-use tmp_user_nopasswd
as this contains the list of all root users.
REPLACE INTO tmp_proxies_priv SELECT @current_hostname, IFNULL(@auth_root_socket, 'root')
creates the $user@$current_host but will not error if @auth_root_socket
is null. Note that @current_hostname lines are filtered out with
--cross-bootstrap in mariadb-install-db, so this expression had to be
included for consistency, just like the existing mysql_system_tables.sql
is used to create the $user@localhost proxies_priv entry.
The test cases roles.acl_statistics and perfschema.privilege_table_io
depend on the number of proxy users.
After:
--auth-root-authentication-method=normal:
MariaDB [mysql]> select * from global_priv;
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
| Host | User | Priv |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615} |
| bark | root | {"access":18446744073709551615} |
| 127.0.0.1 | root | {"access":18446744073709551615} |
| ::1 | root | {"access":18446744073709551615} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
7 rows in set (0.001 sec)
MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:12:24 |
| 127.0.0.1 | root | | | 1 | | 2023-07-10 12:12:24 |
| ::1 | root | | | 1 | | 2023-07-10 12:12:24 |
| bark | root | | | 1 | | 2023-07-10 12:12:24 |
+-----------+------+--------------+--------------+------------+---------+---------------------+
--auth-root-authentication-method=socket:
MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:11:55 |
| localhost | dan | | | 1 | | 2023-07-10 12:11:55 |
| bark | dan | | | 1 | | 2023-07-10 12:11:55 |
+-----------+------+--------------+--------------+------------+---------+---------------------+
3 rows in set (0.017 sec)
MariaDB [mysql]> select * from global_priv;
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Host | User | Priv |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | dan | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
5 rows in set (0.000 sec)
MariaDB [mysql]> show grants;
+----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for dan@localhost |
+----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket WITH GRANT OPTION |
| GRANT PROXY ON ''@'%' TO 'dan'@'localhost' WITH GRANT OPTION |
+----------------------------------------------------------------------------------------------------------------------------------------+
Fixed tests:
main.order_by_pack_big - disabled view-protocol for some queries
because the view is created with a wrong column name if the column
name is longer than 64 characters
Removed some redundant hint related string literals from
spd_db_conn.cc
Clean up SPIDER_PARAM_*_[CHAR]LEN[S]
Adding tests covering monitoring_kind=2. All it does is read from
mysql.spider_link_mon_servers with matching db_name, table_name and
link_id, and nothing more is done with what it reads...
How monitoring_* can be useful: in the deprecated spider high
availability feature, when one remote fails, spider will try another
remote, which apparently makes use of these table parameters.
A test covering the query_cache_sync table param. Some further tests
on some spider table params.
Wrapper should be case insensitive.
Code documentation on spider priority binary tree.
Add an assertion that static_key_cardinality is always -1. All tests
still pass.
This helps eliminate "server exists" failures
Also, spider/bugfix.mdev_29676, when enabled after MDEV-29525 is
pushed, will fail because we have not --recorded the result. But the
failure will only emerge when working on MDEV-31138 where we manually
re-enable this test, so let's worry about that then.
The direct aggregate mechanism seems to be only intended to work when
otherwise a full table scan query would be executed from the spider
node and the aggregation done at the spider node too. Typically this
happens in sub_select(). In the test spider.direct_aggregate_part,
direct aggregate allows sending COUNT statements directly to the data
nodes and adding up the results at the spider node, instead of iterating
over the rows one by one at the spider node.
By contrast, the group by handler (GBH) typically sends aggregated
queries directly to the data nodes, in which case DA does not improve
the situation.
That is why we should fix it by disabling DA when the GBH is used.
There are other reasons supporting this change. First, the creation of
GBH results in a call to change_to_use_tmp_fields() (as opposed to
setup_copy_fields()) which causes the spider DA function
spider_db_fetch_for_item_sum_funcs() to work on wrong items. Second,
the spider DA function only calls direct_add() on the items, and the
follow-up add() needs to be called by the sql layer code. In
do_select(), after executing the query with the GBH, it seems that the
required add() would not necessarily be called.
Disabling DA when GBH is used does fix the bug. There are a few
other things included in this commit to improve the situation with
spider DA:
1. Add a session variable that allows user to disable DA completely,
this will help as a temporary measure if/when further bugs with DA
emerge.
2. Move the increment of direct_aggregate_count to the spider DA
function. Currently this is done in rather bizarre and random
locations.
3. Fix the spider_db_mbase_row creation so that the last of its row
fields (the sentinel) is NULL. The code already does a null check, but
somehow the sentinel field ends up at an invalid address, causing the
segfaults. With a correct implementation of the row creation, we can
avoid such segfaults.
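An illustrative sketch (not the actual Spider code) of the NULL-sentinel row
layout that item 3 describes:

  // Allocate one extra slot and set it to NULL so that iteration can stop
  // on a null check instead of running off the end of the array.
  #include <cstdlib>
  #include <cstring>

  static char **alloc_row_with_sentinel(char **src, unsigned n_fields)
  {
    // n_fields data pointers + 1 sentinel slot
    char **row= (char **) calloc(n_fields + 1, sizeof(char *));
    if (!row)
      return nullptr;
    for (unsigned i= 0; i < n_fields; i++)
      row[i]= src[i] ? strdup(src[i]) : nullptr;
    row[n_fields]= nullptr;   // sentinel: callers iterate until NULL
    return row;
  }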
- Remove extra connections in the form of `server_number_1` for the same server
during initialization of servers in the `rpl_init.inc` file.
- Remove disconnecting and reconnecting to the same connections,
since they are not used by the test.
- Update comments about the above.
- Reviewer: <knielsen@knielsen-hq.org>
<brandon.nesterenko@mariadb.com>
- Fix the calling of the assertion condition when the `rpl_check_server_ids`
parameter is used.
- Fix comments regarding the default usage and the configuration file
extension in this case.
- Reviewer: <knielsen@knielsen-hq.org>
<brandon.nesterenko@mariadb.com>
fseg_free_extent(): After fsp_free_extent() succeeded, properly
mark the affected pages as freed. We failed to write FREE_PAGE records.
This bug was revealed or caused by
commit e938d7c18f (MDEV-32028).
- `default_client` is already included in `rpl_1slave_base.cnf`, so
remove it from `my.cnf`
- Remove the option group for the `mysqld` server and add a comment on
how to override specific settings for a specific server
- Reviewer: <brandon.nesterenko@mariadb.com>
The SQL thread and a user connection executing SHOW SLAVE STATUS
have a race condition on Last_SQL_Errno, such that on the next start of
a slave which previously errored and stopped, SHOW SLAVE STATUS
can show that the SQL Thread is running while the previous error is
still shown.
The fix is to clear the last error when the SQL thread starts, before
setting the status of Slave_SQL_Running.
Thanks to Kristian Nielsen for his work diagnosing the problem!
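A minimal sketch of the ordering fix (assumed, simplified types; not the
server code): clear the stale error before publishing that the SQL thread is
running, so a concurrent SHOW SLAVE STATUS can never observe "running"
together with the previous error.

  #include <mutex>
  #include <string>

  struct Relay_log_info_sketch {
    std::mutex data_lock;
    bool sql_thread_running= false;
    int last_sql_errno= 0;
    std::string last_sql_error;

    void sql_thread_start() {
      std::lock_guard<std::mutex> guard(data_lock);
      last_sql_errno= 0;          // clear the previous error first...
      last_sql_error.clear();
      sql_thread_running= true;   // ...then expose the running status
    }
  };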
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Kristian Nielsen <knielsen@knielsen-hq.org>
- Removed commented out and unused lines.
- Updated test to reference the true failure, a timeout,
rather than a deadlock
- Switched saved variables from MTR variables to user variables
- Forced relay-log purge to not potentially re-execute
an already prepared transaction
Remove TLSv1.1 from the default tls_version system variable.
Output a warning if TLSv1.0 or TLSv1.1 are selected.
Thanks Tingyao Nian for the feature request.
There is a list of plugins in the WiX configuration file for HeidiSQL,
and the installer only installs DLLs from that list although the HeidiSQL
portable archive may include other plugins.
This commit adds client_ed25519.dll to this list and also rearranges
the list alphabetically, so it is easier to verify its contents
buf_read_page_low(): Use 64-bit arithmetics when computing the
file byte offset. In other calls to fil_space_t::io() the offset
was being computed correctly, for example by
buf_page_t::physical_offset().
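An illustrative sketch (not the actual function) of the 32-bit overflow that
the fix addresses: a page number multiplied by the page size must be widened
to 64 bits before the multiplication, otherwise offsets beyond 4 GiB wrap
around.

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    uint32_t page_no= 1048576;           // e.g. page 2^20
    uint32_t page_size= 16384;           // 16 KiB pages

    uint64_t wrong= page_no * page_size;             // 32-bit multiply: wraps to 0
    uint64_t right= (uint64_t) page_no * page_size;  // 64-bit multiply: 16 GiB

    printf("wrong=%llu right=%llu\n",
           (unsigned long long) wrong, (unsigned long long) right);
    return 0;
  }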
The MariaDB async replication SQL thread was stopped for any failure in
applying replication events, and the error message logged for the failure
was: "Node has dropped from cluster". The assumption was that an event
applying failure is always caused by the node dropping out of the cluster.
With optimistic parallel replication, event applying can fail for natural
reasons and applying should be retried to handle the failure. This retry
logic was never exercised because the slave SQL thread was stopped at the
first applying failure.
To support the optimistic parallel replication retry logic, this commit
now skips the replication slave abort if the node remains in the cluster
(wsrep_ready==ON) and replication is configured for optimistic or
aggressive retry logic.
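A simplified sketch (assumed names, not the actual wsrep code) of the
decision described above: only abort the slave SQL thread when the node has
really dropped from the cluster or no retry logic applies.

  enum parallel_mode { CONSERVATIVE, OPTIMISTIC, AGGRESSIVE };

  static bool should_abort_slave(bool wsrep_ready, parallel_mode mode)
  {
    // Node still in the cluster and optimistic/aggressive retries
    // configured: let the normal retry logic handle the failure.
    if (wsrep_ready && (mode == OPTIMISTIC || mode == AGGRESSIVE))
      return false;
    return true;   // otherwise keep the old behaviour and stop the SQL thread
  }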
During the development of this fix, the galera.galera_as_slave_nonprim test
showed some problems. The test was analyzed, and it appears to need some
attention. One excessive sleep command was removed in this commit, but it
will still need more fixes to be fully deterministic. After this commit,
galera_as_slave_nonprim is successful, though.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The test case was starting too many servers that are not really
needed for testing the original problem. This fix reduces the
number of servers to make the test case smaller and more
robust.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that if wsrep_notify_cmd was set, it was called
with the new status "joined"; the script then tries to connect to the
server to update some table, but the server isn't initialized yet and
isn't listening for connections. So the server waits for the
script to finish, the script waits for the mariadb client to connect,
and the client cannot connect because the server isn't listening.
The fix is to call the script only when Galera has already formed a
view or when it is synced or a donor.
This fix also enables the following test cases:
* galera.MW-284
* galera.galera_binlog_checksum
* galera_var_notify_ssl_ipv6
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Some s390x environments include
https://github.com/madler/zlib/pull/410
and a more pessimistic compressBound: (sourceLen * 16 + 2308) / 8 + 6.
Let us adjust the recently enabled tests accordingly.
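A worked example of the formula quoted above: the pessimistic s390x
compressBound() versus the stock zlib 1.2.x bound for a small input, which is
why tests that assert on compressed or bound sizes needed adjusting.

  #include <cstdio>

  int main()
  {
    unsigned long sourceLen= 100;

    // bound used by the patched s390x zlib, as quoted in the message
    unsigned long pessimistic= (sourceLen * 16 + 2308) / 8 + 6;     // 494

    // stock zlib 1.2.x compressBound() formula
    unsigned long stock= sourceLen + (sourceLen >> 12) + (sourceLen >> 14)
                         + (sourceLen >> 25) + 13;                  // 113

    printf("pessimistic=%lu stock=%lu\n", pessimistic, stock);
    return 0;
  }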
* version_compile_os can be "linux-systemd", not equal to "Linux"
* main.no-threads forces the no-threads scheduler, so a check whether it
has one_thread_per_connection is guaranteed to fail.
trx_undo_write_trx_xid(): Silence the debug assertion by passing
a template parameter that causes us to not care that the contents of
the page did not actually change and no log record would be written.
This debug assertion could fail if XA PREPARE was executed multiple
times with the same XID.
innodb_monitor_validate(): Let item_val_str() allocate the memory
in THD, so that it will be available to innodb_monitor_update().
In this way, there is no need to allocate another buffer, and
no problem if the call to innodb_monitor_update() is skipped due
to an invalid value that is passed to another configuration parameter.
There are some other callers to st_mysql_sys_var::val_str()
that validate configuration parameters that are related to FULLTEXT INDEX,
but they will allocate memory by invoking thd_strmake().
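A sketch (assumed, simplified; plugin include paths are presumed available)
of the allocation pattern described above: copy a validated string value
onto the THD's memory root via thd_strmake() so that it stays valid for the
later update function, instead of using a separate buffer.

  #include <mysql/plugin.h>
  #include <cstring>

  static const char *dup_on_thd(MYSQL_THD thd, const char *val)
  {
    if (!val)
      return nullptr;
    // thd_strmake() allocates on the THD memory root; the copy stays valid
    // for the rest of the statement, so the update function can still read it.
    return thd_strmake(thd, val, strlen(val));
  }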
Currently include/have_innodb_4k.inc etc. files only check that the
server is running with the corresponding page size. I think it would
be more convenient if they actually enforced the setting.
The test innodb_zip.index_large_prefix_4k would not run unless it is
invoked as
./mtr --mysqld=--innodb-page-size=4k innodb_zip.index_large_prefix_4k
This test was originally developed to cover an option that was removed
in commit 0c92794db3. Starting with
MariaDB Server 10.2, which introduced innodb_default_row_format=dynamic,
the option innodb_large_prefix had become useless.
Let us remove some of the stale tests and adjust the outcome to the
expected behaviour.
Let us avoid inserting the rows fid=714 and fid=715, because we would
evaluate g=NULL for them, and NULL values are not allowed in InnoDB
SPATIAL INDEX.
Also, let the test run on any page size, and on non-debug builds.
-Wdeprecated-copy-with-user-provided-copy was causing a few errors on
things that were defined in a way that relied on implicitly generated
copy operations. By removing that code, it now compiles without warnings.
Tested with fc38 / clang-16.
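A minimal example (assumed; not taken from the tree) of the pattern behind
this warning: a user-provided copy constructor makes the implicitly
generated copy assignment operator deprecated, so clang 16 warns wherever
that implicit assignment is used. Removing the user-provided copy
constructor (rule of zero) silences the warning.

  struct Legacy
  {
    int value;
    Legacy(int v) : value(v) {}
    Legacy(const Legacy &other) : value(other.value) {}  // user-provided copy ctor
    // no operator=: the implicit one is deprecated under this warning
  };

  int main()
  {
    Legacy a(1), b(2);
    b= a;      // clang 16: -Wdeprecated-copy-with-user-provided-copy
    return b.value;
  }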
Port the test case from MySQL to MariaDB:
MySQL fix Bug#33813951, Change-Id: I2448e3f2f36925fe70d882ae5681a6234f0d5a98.
Function test_simple_temporal() from MySQL ported from C++ to pure C.
This includes one change:
- DIE_UNLESS(field->type == MYSQL_TYPE_DATETIME);
+ DIE_UNLESS(field->type == MYSQL_TYPE_TIMESTAMP);
The bound param of SELECT ? is TIMESTAMP in this code.
MySQL returns it back as DATETIME. MariaDB preserves TIMESTAMP.
Code packaged for commit by Daniel Black.
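For reference, a trimmed sketch of the check the ported test performs with
the client C API (connection setup and error handling omitted; assumes an
existing connected MYSQL handle):

  // Bind a TIMESTAMP parameter to "SELECT ?" and inspect the result
  // metadata type: MariaDB reports MYSQL_TYPE_TIMESTAMP back, MySQL
  // reports MYSQL_TYPE_DATETIME.
  #include <mysql.h>
  #include <string.h>

  static int result_type_of_bound_timestamp(MYSQL *mysql)
  {
    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    mysql_stmt_prepare(stmt, "SELECT ?", 8);

    MYSQL_TIME tm;
    memset(&tm, 0, sizeof(tm));
    tm.year= 2023; tm.month= 7; tm.day= 10;
    tm.time_type= MYSQL_TIMESTAMP_DATETIME;

    MYSQL_BIND bind;
    memset(&bind, 0, sizeof(bind));
    bind.buffer_type= MYSQL_TYPE_TIMESTAMP;   // the bound param is TIMESTAMP
    bind.buffer= &tm;
    mysql_stmt_bind_param(stmt, &bind);
    mysql_stmt_execute(stmt);

    MYSQL_RES *meta= mysql_stmt_result_metadata(stmt);
    MYSQL_FIELD *field= mysql_fetch_field(meta);
    int type= field->type;                    // TIMESTAMP on MariaDB,
                                              // DATETIME on MySQL
    mysql_free_result(meta);
    mysql_stmt_close(stmt);
    return type;
  }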
The problem was that parallel replication of temporary tables using
statement-based binlogging could overlap the COMMIT in one thread with a DML
or DROP TEMPORARY TABLE in another thread using the same temporary table.
Temporary tables are not safe for concurrent access, so this caused
references to freed memory and possibly other nastiness.
The fix is to disable the optimisation with overlapping commits of one
transaction with the start of a later transaction, when temporary tables are
in use. Then the following event groups will be blocked from starting until
the one using temporary tables is completed.
This also fixes occasional test failures of rpl.rpl_parallel_temptable seen
in Buildbot.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>