At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and issue
a warning.
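As an illustration only, a minimal self-contained sketch of the fallback
decision; the enum values and effective_ctas_format() are hypothetical
stand-ins, not the server's actual symbols:

  #include <cstdio>

  // Hypothetical stand-ins for the binlog format values.
  enum binlog_format { BF_STATEMENT, BF_MIXED, BF_ROW };

  // During CREATE TABLE AS SELECT under Galera, MIXED/STATEMENT cannot
  // be honoured, so fall back to ROW and emit a warning.
  static binlog_format effective_ctas_format(binlog_format forced)
  {
    if (forced == BF_STATEMENT || forced == BF_MIXED)
    {
      std::puts("Warning: wsrep_forced_binlog_format=MIXED|STATEMENT is "
                "not supported for CREATE TABLE AS SELECT; using ROW");
      return BF_ROW;
    }
    return forced;
  }

  int main() { return effective_ctas_format(BF_STATEMENT) == BF_ROW ? 0 : 1; }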
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Otherwise, e.g.
./mtr main.mysql_install_db_win_admin,innodb
results in the sporadic error
mysql-test-run: *** ERROR: Could not run main.mysql_install_db_win_admin with 'innodb' combination(s)
depending on whether it processes the skip (not a Windows admin) or
the innodb.combinations file first (if the skip is processed first,
the innodb combination isn't, making the later code think that the
test doesn't have an innodb combination).
The error was specific to the threadpool and the compressed protocol.
set_thd_idle() set the socket state to idle twice, causing an assertion
failure. This happens if there is unread compressed data on the
connection after the query has finished. On the protocol level, this
means a single compression packet contains multiple command packets.
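A toy model of the guard, assuming the fix amounts to making the idle
transition idempotent; set_idle(), conn and the state names are
stand-ins, not the actual threadpool code:

  #include <cassert>

  enum socket_state { S_ACTIVE, S_IDLE };

  struct conn { socket_state state= S_ACTIVE; };

  // One compression packet may carry several command packets, so the
  // connection can be asked to go idle while it is already idle;
  // transition only when the state actually changes.
  static void set_idle(conn &c)
  {
    if (c.state == S_IDLE)
      return;               // previously this path hit the assertion
    c.state= S_IDLE;
  }

  int main()
  {
    conn c;
    set_idle(c);
    set_idle(c);            // second call is now a no-op
    assert(c.state == S_IDLE);
  }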
state() == s_prepared || state() == s_must_abort || state() == s_aborting ||
state() == s_cert_failed || state() == s_must_replay' failed
When the applier tries to execute a write rows event, it finds out
in table_def::compatible_with that a value is not compatible
and sets the error and thd->is_slave_error, but thd->is_error()
is false. Later, in rpl_group_info::slave_close_thread_tables,
we commit the statement. This is bad for Galera because later, in
apply_write_set, we notice that the event apply was not successful
and try to roll back the transaction, but the wsrep transaction
is already in the s_committed state.
This is fixed in rpl_group_info::slave_close_thread_tables
so that in the Galera case we roll back the statement if
thd->is_slave_error or thd->is_error() is set. Then we can later
roll back the wsrep transaction.
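A self-contained sketch of the decision; THD here is a toy stand-in and
the strings take the place of the real statement commit/rollback calls:

  #include <cstdio>

  struct THD                      // toy stand-in for the server's THD
  {
    bool is_slave_error= false;
    bool error= false;
    bool is_error() const { return error; }
  };

  static void slave_close_thread_tables(THD *thd, bool wsrep_on)
  {
    // Roll back the statement on either error flag, so that the wsrep
    // transaction can still be rolled back later in apply_write_set.
    if (thd->is_error() || (wsrep_on && thd->is_slave_error))
      std::puts("rollback stmt");
    else
      std::puts("commit stmt");
  }

  int main()
  {
    THD thd;
    thd.is_slave_error= true;     // table_def::compatible_with failed
    slave_close_thread_tables(&thd, /*wsrep_on=*/true); // "rollback stmt"
  }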
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem is that the mysql.galera_slave_pos table is replicated,
thus it should use InnoDB to allow rolling back in case
of replay.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The function setup_windows(), called at the prepare phase of processing a
select, builds a list of all window specifications used in the select. This
list is built on the statement memory and it must be built only once.
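A toy sketch of the build-once pattern; select_lex, first_execution and
the list type here are illustrative stand-ins:

  #include <string>
  #include <vector>

  struct select_lex                      // toy stand-in
  {
    bool first_execution= true;
    std::vector<std::string> win_specs;  // "statement memory" here
  };

  static void setup_windows(select_lex &sl)
  {
    // The list lives on the statement memory root, so rebuilding it on
    // re-execution of a prepared statement would duplicate entries;
    // build it only once.
    if (!sl.first_execution)
      return;
    sl.win_specs.push_back("w1 AS (PARTITION BY a ORDER BY b)");
    sl.first_execution= false;
  }

  int main()
  {
    select_lex sl;
    setup_windows(sl);
    setup_windows(sl);                   // no-op on re-execution
    return sl.win_specs.size() == 1 ? 0 : 1;
  }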
Approved by Oleksandr Byelkin <sanja@mariadb.com>
MDEV-31455: main.events_stress or events.events_stress fails with view-protocol
MDEV-31457: main.delete_use_source fails (hangs) with view-protocol
Fixed tests:
main.sum_distinct-big, main.delete_use_source - disabled view-protocol
for some cases because they use transactions without autocommit
main.events_stress, main.merge-big - disabled service connection
for some queries since the SELECT queries must run
in the same session
.snapshot exists as a directory on NetApp storage and
should not be copied during the SST process.
Thanks to Daniel Czadek for the bug report.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that with the BINLOG statement you can execute
binlog events on the master as well (not only in the applier).
The fix removes the too-strict part, wsrep_thd_is_applying, from the
assertion. Note that the actual event in the test is intentionally
corrupted to test whether this error is ignored.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|| state() == s_prepared || state() == s_committing
|| state() == s_must_abort || state() == s_replaying'
failed.
CACHE INDEX and LOAD INDEX INTO CACHE are local operations.
Therefore, do not replicate them with Galera.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
=========
During commit, the server calls prepare_commit_versioned to
determine whether the transaction modified system-versioned data.
Due to the binlog_do_db option, we disable the binlog for the
statement. But prepare_commit_versioned() was being
called only when the binlog was enabled for the statement.
Fix:
===
prepare_commit_versioned() should happen irrespective
of the binlog state. So if the server has any read-write operation,
then we should call prepare_commit_versioned().
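A minimal sketch of the corrected condition; trx and the function bodies
are stand-ins, the point being that the call is keyed off the
transaction being read-write rather than off binlogging:

  struct trx { bool is_read_write; bool binlog_enabled_for_stmt; };

  // Stand-in: handle commit-time work for system-versioned data.
  static void prepare_commit_versioned(const trx &) {}

  static void commit(const trx &t)
  {
    // Before: guarded by t.binlog_enabled_for_stmt, so binlog_do_db
    // filtering skipped it. After: any read-write transaction gets the
    // call, irrespective of the binlog state.
    if (t.is_read_write)
      prepare_commit_versioned(t);
  }

  int main()
  {
    trx t{ /*is_read_write=*/true, /*binlog_enabled_for_stmt=*/false };
    commit(t);     // versioning work still happens despite binlog_do_db
  }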
There are many filesystem-related errors that can occur with
MariaBackup. These are already output to stderr with a good description
of the error. Many of these are permission or resource (file descriptor)
limits, where the assertion and resulting core crash don't offer
developers anything more than the log message already does. To the user,
assertions and core crashes come across as poor error handling.
As such we return an error and handle this all the way up the stack.
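A sketch of the pattern, assuming the change boils down to replacing the
assertion with an error return that callers propagate; copy_one_file()
is illustrative, not mariabackup's actual function:

  #include <cerrno>
  #include <cstdio>
  #include <cstring>

  // Before: a failure to open a file hit an assertion and dumped core.
  // After: log the OS error to stderr and return failure up the stack.
  static bool copy_one_file(const char *name)
  {
    FILE *f= std::fopen(name, "rb");
    if (!f)
    {
      std::fprintf(stderr, "mariabackup: error: cannot open %s: %s\n",
                   name, std::strerror(errno));
      return false;          // was effectively: assert(f != nullptr)
    }
    std::fclose(f);
    return true;
  }

  int main()
  {
    return copy_one_file("/nonexistent/ibdata1") ? 0 : 1; // clean exit code
  }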
The fix is to return a 3-state value from Range_rowid_filter::build()
(sketched below):
1. The filter was built successfully;
2. The filter was not built, but the error was not fatal, i.e. there is
no need to roll back the transaction. For example, if the size of the
container to store row ids is not enough;
3. The filter was not built because of a fatal error, for example, a
deadlock or a lock wait timeout from the storage engine. In this case we
should stop query plan execution and roll back the transaction.
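A minimal sketch of the three states; the enum and helper names are
illustrative, the real change being the return type of
Range_rowid_filter::build():

  enum class build_result
  {
    SUCCESS,          // 1. filter built successfully
    NON_FATAL_ERROR,  // 2. not built (e.g. rowid container too small);
                      //    no rollback needed, run the plan unfiltered
    FATAL_ERROR       // 3. deadlock / lock wait timeout; stop execution
                      //    and roll back the transaction
  };

  static bool filter_usable(build_result r)
  { return r == build_result::SUCCESS; }

  static bool must_abort(build_result r)
  { return r == build_result::FATAL_ERROR; }

  int main()
  {
    build_result r= build_result::NON_FATAL_ERROR;
    return (!filter_usable(r) && !must_abort(r)) ? 0 : 1; // continue w/o filter
  }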
Reviewed by: Sergey Petrunya
Spider is part of the server, and there's no need to check the
version.
All spider plugins are uninstalled in clean_up_spider.inc.
DROP SERVER IF EXISTS makes things easier.
is_file_on_ssd() is more expensive than it should be.
It caches the results by volume name, but still calls GetVolumePathName()
every time, which, as procmon shows, opens multiple directories in the
filesystem hierarchy (db directory, datadir, and all ancestors).
The fix is to cache the SSD status by volume serial ID, which is cheap to
retrieve with GetFileInformationByHandleEx().
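A Windows-only sketch of the cheaper lookup (error handling trimmed; the
SSD probe itself is elided). GetFileInformationByHandleEx() with
FileIdInfo fills a FILE_ID_INFO whose VolumeSerialNumber identifies the
volume without walking the path:

  #include <windows.h>
  #include <unordered_map>

  // Volume serial -> "is on SSD", so GetVolumePathName() and the
  // directory opens it triggers are avoided on every call.
  static std::unordered_map<ULONGLONG, bool> ssd_cache;

  static bool is_file_on_ssd(HANDLE file)
  {
    FILE_ID_INFO info;
    if (!GetFileInformationByHandleEx(file, FileIdInfo, &info, sizeof info))
      return false;
    auto it= ssd_cache.find(info.VolumeSerialNumber);
    if (it != ssd_cache.end())
      return it->second;                 // cheap cache hit
    bool on_ssd= false;  // one-time probe of the volume would go here
    ssd_cache[info.VolumeSerialNumber]= on_ssd;
    return on_ssd;
  }

  int main()
  {
    HANDLE h= CreateFileW(L".", GENERIC_READ,
                          FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                          OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (h == INVALID_HANDLE_VALUE)
      return 1;
    bool ssd= is_file_on_ssd(h);
    CloseHandle(h);
    return ssd ? 0 : 1;
  }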
trx_t::set_skip_lock_inheritance() must be invoked at the very beginning
of lock_release_on_prepare(). Currently trx_t::set_skip_lock_inheritance()
is invoked at the end of lock_release_on_prepare(), when lock_sys and trx
are released, and there can be a case when the locks on prepare are
released but the "do not inherit gap locks" bit has not yet been set, so a
page split inherits a lock to the supremum record.
Also, reset the supremum bit and rebuild the waiting queue when XA is
prepared.
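A toy model of the ordering; the flag and functions stand in for
trx_t::set_skip_lock_inheritance() and lock_release_on_prepare():

  #include <cassert>

  struct trx                         // toy stand-in for trx_t
  {
    bool skip_lock_inheritance= false;
    bool holds_gap_locks= true;
  };

  static void lock_release_on_prepare(trx &t)
  {
    // Set the flag *before* releasing any locks: otherwise a concurrent
    // page split in the window between release and flag set can still
    // inherit a released gap lock to the supremum record.
    t.skip_lock_inheritance= true;   // was previously done at the end
    t.holds_gap_locks= false;        // ... locks released here ...
  }

  int main()
  {
    trx t;
    lock_release_on_prepare(t);
    assert(t.skip_lock_inheritance && !t.holds_gap_locks);
  }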
Reviewed by: Marko Mäkelä
When resolving a column from the HAVING clause, a new Item_field
object may be created inside Item_ref::fix_fields().
But the object is created with an empty name resolution context,
which then leads to a debug assertion failure during
Item_field::fix_fields().
The solution is to pass the correct name resolution context
when creating the Item_field object.
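A toy sketch of the fix; both classes are stand-ins, the point being
that the Item_field is constructed with the resolver's context instead
of an empty one:

  #include <cassert>

  struct Name_resolution_context { bool initialized= false; };

  struct Item_field                  // toy stand-in
  {
    const Name_resolution_context *context;
    explicit Item_field(const Name_resolution_context *ctx) : context(ctx) {}
    void fix_fields() const { assert(context && context->initialized); }
  };

  int main()
  {
    Name_resolution_context resolver_ctx;
    resolver_ctx.initialized= true;
    // Before: the object got an empty context and fix_fields() asserted.
    Item_field f(&resolver_ctx);     // pass the correct context instead
    f.fix_fields();
  }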
Reviewer: Oleksandr Byelkin (sanja@mariadb.com)
The table structure from MySQL-5.1.14 is:
CREATE TABLE `slow_log` (
`start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`user_host` mediumtext NOT NULL,
`query_time` time NOT NULL,
`lock_time` time NOT NULL,
`rows_sent` int(11) NOT NULL,
`rows_examined` int(11) NOT NULL,
`db` varchar(512) DEFAULT NULL,
`last_insert_id` int(11) DEFAULT NULL,
`insert_id` int(11) DEFAULT NULL,
`server_id` int(11) DEFAULT NULL,
`sql_text` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'
Even as far back as MySQL-5.5.40 this table could be created with NULLable
columns, which are not permitted in the CSV table type, so it seems they
were allowed at some time.
As the first part of mariadb-upgrade adds the column thread_id without
correcting the NULLable status of the existing columns, it fails.
We reorder the SQL statements in the upgrade as follows:
ALTER TABLE slow_log MODIFY {columns} {new types} NOT NULL,....
As thread_id doesn't exist yet at this point, it was removed from
this first ALTER TABLE statement to prevent failure.
The previous ALTER TABLE slow_log statements were moved later, appending
thread_id and rows_affected, and also enforcing the type of thread_id if
it was incorrect, like the now-first ALTER TABLE slow_log statement used
to do.
Updated ha_mroonga::storage_check_if_supported_inplace_alter to support
new ALTER TABLE flags.
This fixes failing tests:
mroonga/storage.alter_table_add_index_unique_duplicated
mroonga/storage.alter_table_add_index_unique_multiple_column_duplicated
According to commit ea56841997
the stack normally grows downwards, except on HP PA-RISC where
it grows upwards. Because determining the stack direction is not
possible in a portable way, let us determine the default STACK_DIRECTION
in CMake based on the ISA.
On clang 16.0.6 running on and targeting AMD64, STACK_DIRECTION=1 is
being incorrectly detected, causing the failure of a number of tests.
row_vers_vc_matches_cluster(): Invoke dtype_get_at_most_n_mbchars()
to extract the correct number of bytes corresponding to the number
of characters in a virtual column prefix index, just like we do in
row_sel_sec_rec_is_for_clust_rec().
The test case would occasionally reproduce the failure when this
fix is not present.
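A self-contained illustration of characters vs. bytes for a multibyte
prefix (UTF-8 here); dtype_get_at_most_n_mbchars() performs the
equivalent charset-aware truncation inside InnoDB:

  #include <cassert>
  #include <cstddef>
  #include <string>

  // Byte length of at most n_chars leading UTF-8 characters. A prefix
  // index on a multibyte column covers n characters, which may be more
  // than n bytes; truncating by raw byte count compares the wrong data.
  static std::size_t at_most_n_mbchars(const std::string &s,
                                       std::size_t n_chars)
  {
    std::size_t bytes= 0, chars= 0;
    while (bytes < s.size() && chars < n_chars)
    {
      unsigned char c= s[bytes];
      bytes+= c < 0x80 ? 1 : c < 0xE0 ? 2 : c < 0xF0 ? 3 : 4;
      chars++;
    }
    return bytes < s.size() ? bytes : s.size();
  }

  int main()
  {
    std::string s= "\xC3\xA4" "bc";         // 'ä' is 2 bytes, 1 character
    assert(at_most_n_mbchars(s, 2) == 3);   // "äb": 2 characters, 3 bytes
  }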
On creation of a VIEW that depends on a stored routine, an instance of
the class Item_func_sp is allocated on the memory root of the SP
statement. It happens since mysql_make_view() calls the method
THD::activate_stmt_arena_if_needed()
before parsing the definition of the view.
On the other hand, when the sp_head's rcontext is created, an instance of
the class Field referenced by the data member
Item_func_sp::result_field
is allocated on the Item_func_sp's Query_arena (call arena) that is set
up inside the method
Item_sp::execute_impl
just before calling the method
sp_head::execute_function()
On return from the method sp_head::execute_function(), all items allocated
on the Item_func_sp's Query_arena are released and its memory root is freed
(see the implementation of the method Item_sp::execute_impl). As a
consequence, the pointer
Item_func_sp::result_field
references deallocated memory. Later, when the method
sp_head::execute
cleans up the items allocated for the just-executed SP instruction, the
method Item_func_sp::cleanup is invoked and tries to delete the object
referenced by the data member Item_func_sp::result_field that points to
already deallocated memory, which results in abnormal server termination.
To fix the issue, the currently active arena shouldn't be switched to
a statement arena inside the function mysql_make_view() that is invoked
indirectly by the method sp_head::rcontext_create. It is implemented by
introducing the new Query_arena state STMT_SP_QUERY_ARGUMENTS, which is
set when an explicit Query_arena is created for placing SP arguments and
other caller-side items used during SP execution. Then the method
THD::activate_stmt_arena_if_needed() checks the Query_arena's state and
returns immediately without switching to the statement's arena.
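A toy model of the guard; THD and Query_arena here are reduced
stand-ins, with STMT_SP_QUERY_ARGUMENTS being the new state from this
fix:

  #include <cassert>

  enum arena_state { STMT_CONVENTIONAL_EXECUTION, STMT_SP_QUERY_ARGUMENTS };

  struct Query_arena { arena_state state; };

  struct THD                          // toy stand-in
  {
    Query_arena *active;
    Query_arena *stmt_arena;

    Query_arena *activate_stmt_arena_if_needed()
    {
      // Do not switch to the statement arena while the active arena
      // holds SP call arguments: items created now (e.g. Item_func_sp
      // for a view parsed inside the function) must die with the call
      // arena rather than leave dangling pointers such as result_field.
      if (active->state == STMT_SP_QUERY_ARGUMENTS)
        return nullptr;
      Query_arena *saved= active;
      active= stmt_arena;
      return saved;
    }
  };

  int main()
  {
    Query_arena call{STMT_SP_QUERY_ARGUMENTS};
    Query_arena stmt{STMT_CONVENTIONAL_EXECUTION};
    THD thd{&call, &stmt};
    assert(thd.activate_stmt_arena_if_needed() == nullptr); // no switch
  }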
mariadb-install-db --auth-root-authentication-method=normal created 4
root accounts by default, but only two of these had the PROXY privilege
granted.
Running mariadb-install-db (with the default option
--auth-root-authentication-method=socket) as a non-root user also didn't
grant the PROXY privilege to the created nonroot@localhost user.
To fix this, in mysql_system_tables_data.sql, we re-use tmp_user_nopasswd
as this contains the list of all root users.
REPLACE INTO tmp_proxies_priv SELECT @current_hostname, IFNULL(@auth_root_socket, 'root')
creates the $user@$current_host but will not error if @auth_root_socket
is null. Note that @current_hostname lines are filtered out with
--cross-bootstrap in mariadb-install-db, so this expression needed to be
included for consistency.
This mirrors how the existing mysql_system_tables.sql creates the
$user@localhost proxies_priv entry.
The test cases roles.acl_statistics, perfschema.privilege_table_io depend
on the number of proxy users.
After:
--auth-root-authentication-method=normal:
MariaDB [mysql]> select * from global_priv;
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
| Host | User | Priv |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615} |
| bark | root | {"access":18446744073709551615} |
| 127.0.0.1 | root | {"access":18446744073709551615} |
| ::1 | root | {"access":18446744073709551615} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
7 rows in set (0.001 sec)
MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:12:24 |
| 127.0.0.1 | root | | | 1 | | 2023-07-10 12:12:24 |
| ::1 | root | | | 1 | | 2023-07-10 12:12:24 |
| bark | root | | | 1 | | 2023-07-10 12:12:24 |
+-----------+------+--------------+--------------+------------+---------+---------------------+
--auth-root-authentication-method=socket:
MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:11:55 |
| localhost | dan | | | 1 | | 2023-07-10 12:11:55 |
| bark | dan | | | 1 | | 2023-07-10 12:11:55 |
+-----------+------+--------------+--------------+------------+---------+---------------------+
3 rows in set (0.017 sec)
MariaDB [mysql]> select * from global_priv;
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Host | User | Priv |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | dan | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
5 rows in set (0.000 sec)
MariaDB [mysql]> show grants;
+----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for dan@localhost |
+----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket WITH GRANT OPTION |
| GRANT PROXY ON ''@'%' TO 'dan'@'localhost' WITH GRANT OPTION |
+----------------------------------------------------------------------------------------------------------------------------------------+
Fixed tests:
main.order_by_pack_big - disabled view-protocol for some queries
because the view is created with a wrong column name if the column
name is longer than 64 symbols
Removed some redundant hint-related string literals from
spd_db_conn.cc
Clean up SPIDER_PARAM_*_[CHAR]LEN[S]
Adding tests covering monitoring_kind=2. What it does is read from
mysql.spider_link_mon_servers with matching db_name, table_name and
link_id, and then do nothing with the result...
How monitoring_* can be useful: in the deprecated spider high
availability feature, when one remote fails, spider will try another
remote, which apparently makes use of these table parameters.
A test covering the query_cache_sync table param. Some further tests
on some spider table params.
The wrapper should be case-insensitive.
Code documentation on the spider priority binary tree.
Add an assertion that static_key_cardinality is always -1. All tests
still pass.
This helps eliminate "server exists" failures.
Also, spider/bugfix.mdev_29676, when enabled after MDEV-29525 is
pushed, will fail because we have not --recorded the result. But the
failure will only emerge when working on MDEV-31138, where we manually
re-enable this test, so let's worry about that then.
The direct aggregate mechanism seems to be intended to work only when
otherwise a full table scan query would be executed from the spider
node and the aggregation done at the spider node too. Typically this
happens in sub_select(). In the test spider.direct_aggregate_part,
direct aggregate allows sending COUNT statements directly to the data
nodes and adding up the results at the spider node, instead of
iterating over the rows one by one at the spider node.
By contrast, the group by handler (GBH) typically sends aggregated
queries directly to the data nodes, in which case DA does not improve
the situation.
That is why we should fix it by disabling DA when the GBH is used.
There are other reasons supporting this change. First, the creation of
GBH results in a call to change_to_use_tmp_fields() (as opposed to
setup_copy_fields()) which causes the spider DA function
spider_db_fetch_for_item_sum_funcs() to work on wrong items. Second,
the spider DA function only calls direct_add() on the items, and the
follow-up add() needs to be called by the sql layer code. In
do_select(), after executing the query with the GBH, it seems that the
required add() would not necessarily be called.
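A toy sketch of the resulting decision; the struct and function are
stand-ins for where Spider decides whether to take the direct-aggregate
path:

  #include <cstdio>

  struct query_plan                    // toy stand-in
  {
    bool group_by_handler_created;     // GBH already pushes aggregation down
    bool full_table_scan;
  };

  static bool use_direct_aggregate(const query_plan &p, bool da_enabled)
  {
    // DA only helps when the spider node would otherwise fetch all rows
    // and aggregate locally; with a GBH the aggregated query already goes
    // to the data nodes, and DA on top of it works on wrong items.
    return da_enabled && !p.group_by_handler_created && p.full_table_scan;
  }

  int main()
  {
    query_plan p{ /*group_by_handler_created=*/true,
                  /*full_table_scan=*/false };
    std::puts(use_direct_aggregate(p, true) ? "DA" : "no DA"); // "no DA"
  }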
Disabling DA when GBH is used does fix the bug. There are a few
other things included in this commit to improve the situation with
spider DA:
1. Add a session variable that allows the user to disable DA completely;
this will help as a temporary measure if/when further bugs with DA
emerge.
2. Move the increment of direct_aggregate_count to the spider DA
function. Currently this is done in rather bizarre and random
locations.
3. Fix the spider_db_mbase_row creation so that the last of its row
fields (the sentinel) is NULL. The code is already doing a null check,
but somehow the sentinel field is at an invalid address, causing the
segfaults. With a correct implementation of the row creation, we can
avoid such segfaults.
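A minimal sketch of the corrected layout; row is a toy stand-in for
spider_db_mbase_row, the point being length + 1 pointers with a NULL
sentinel so the existing null check terminates:

  #include <cassert>
  #include <cstring>

  // Toy result row: an array of C strings terminated by a NULL
  // sentinel, as the iterating code already assumes.
  struct row
  {
    const char **fields;
    row(const char **src, int n)
    {
      fields= new const char *[n + 1];   // + 1 for the sentinel
      std::memcpy(fields, src, n * sizeof *src);
      fields[n]= nullptr;                // the previously missing/invalid
                                         // field the null check ran into
    }
    ~row() { delete[] fields; }
  };

  int main()
  {
    const char *src[]= { "1", "abc" };
    row r(src, 2);
    int n= 0;
    for (const char **f= r.fields; *f; f++)  // stops at the sentinel
      n++;
    assert(n == 2);
  }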