ha_innobase::check_if_supported_inplace_alter(): Require ALGORITHM=COPY
when creating a FULLTEXT INDEX on a versioned table.
row_merge_buf_add(), row_merge_read_clustered_index(): Remove the parameter
or local variable history_fts that had been added in the attempt to fix
MDEV-25004.
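An illustrative SQL sketch of the new requirement (names are
hypothetical; the exact error message is not shown):

  CREATE TABLE t1 (a TEXT) WITH SYSTEM VERSIONING ENGINE=InnoDB;
  ALTER TABLE t1 ADD FULLTEXT INDEX(a), ALGORITHM=INPLACE; -- refused
  ALTER TABLE t1 ADD FULLTEXT INDEX(a), ALGORITHM=COPY;    -- supported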
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
After MDEV-31400, plugins are allowed to ask for retries when their
initialisation fails. However, such a failure also causes the plugin's
system variables to be deleted (plugin_variables_deinit()) before the
retry, and they are not re-added during the retry.
We fix this by not deleting the variables if the plugin has requested
a retry. Because plugin_deinitialize() also calls
plugin_variables_deinit(), the variables will still be deleted if the
retry fails.
Alternatives considered:
- remove the plugin_variables_deinit() call from the
  plugin_initialize() error handling altogether. We decided to take a
  more conservative approach here.
- re-add the system variables during the retry. This is more
  complicated than simply iterating over plugin->system_vars and
  calling my_hash_insert(). For example, we would need to assign
  values to the test_load field and extract more code from
  test_plugin_options(), if that is possible.
This patch fixes the issue with passing the DEFAULT or IGNORE values
to positional parameters for some kinds of SQL statements executed
as prepared statements.
The main idea of the patch is to associate the actual value passed via
the USING clause with the positional parameter represented by
the Item_param class. Such an association must be performed on
execution of an UPDATE statement in PS/SP mode. Another corner case
that resulted in a server crash is the handling of CREATE TABLE with a
positional parameter placed after the DEFAULT clause, or of a CALL
statement where either DEFAULT or IGNORE is passed as the actual value
for the positional parameter. This case is fixed by checking whether
an error is set in the diagnostics area in the function pack_vcols()
on return from the function pack_expression().
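An illustrative sketch of the affected patterns (table and column
names are hypothetical):

  CREATE TABLE t1 (a INT DEFAULT 10);
  PREPARE stmt FROM 'UPDATE t1 SET a=?';
  EXECUTE stmt USING DEFAULT;  -- assign the column default
  EXECUTE stmt USING IGNORE;   -- leave the column value unchanged
  EXECUTE IMMEDIATE 'CREATE TABLE t2 (b INT DEFAULT ?)' USING DEFAULT;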
This is the prerequisite patch to refactor the method
Item_default_value::fix_fields.
The former implementation of this method was extracted and placed
into the standalone function make_default_field() and the method
Item_default_value::tie_field(). The motivation for this modification
is the upcoming changes for the core implementation of the task
MDEV-15703, since these functions will be used from several places
within the source code.
row_discard_tablespace(): Do not invoke dict_index_t::clear_instant_alter()
because that would corrupt any adaptive hash index entries in the table.
row_import_for_mysql(): Invoke dict_index_t::clear_instant_alter()
after detaching any adaptive hash index entries.
In commit 179424db the file lowercase_table2.result was made executable
for no known reason, most likely just a mistake. Test result files
definitely should not be executable.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
row_search_mvcc(): Revise an overflow check, disabling it on 64-bit
systems. The maximum number of consecutive record reads in a key range
scan should be limited by the maximum number of records per page
(less than 2^13) and the maximum number of pages per tablespace (2^32)
to less than 2^45. On 32-bit systems we can simplify the overflow check.
Reviewed by: Vladislav Lesin
rpl.rpl_seconds_behind_master_spike uses the DEBUG_SYNC mechanism to
count how many format descriptor events (FDEs) have been executed,
to attempt to pause on a specific relay log FDE after executing
transactions. However, depending on when the IO thread is stopped,
it can send an extra FDE before sending the transactions, forcing
the test to pause before executing any transactions, so the table
which the test then attempts to read for a COUNT does not yet exist.
This patch fixes this by no longer counting FDEs, but rather by
programmatically waiting until the SQL thread has executed the
transaction and then automatically activating the DEBUG_SYNC point
to trigger at the next relay log FDE.
Add wait_condition between debug sync SIGNAL points and other
expected state conditions, and refactor the actual sync point to make
it easier to use in test cases.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Correct the used configuration and force server restarts before the
test case. Add a wait condition instead of a sleep to verify that
all expected nodes are back in the cluster.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add more inserts before wsrep_slave_threads is set to 1, and add a
wait_condition to wait until all of them are replicated before the
wait_condition on the number of wsrep_slave_threads.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that if wsrep_load_data_splitting was used, streaming
replication (SR) parameters were set for a MyISAM table. Galera does
not currently support SR for MyISAM.
The fix is to ignore the wsrep_load_data_splitting setting (with a
warning) if the table is not an InnoDB table.
This is the 10.4-10.5 version of the fix.
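An illustrative sketch of the behaviour after the fix (table and file
names are hypothetical):

  SET SESSION wsrep_load_data_splitting=ON;
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  LOAD DATA INFILE 'data.csv' INTO TABLE t1;
  -- proceeds without SR; a warning reports that splitting is ignored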
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem is that Galera starts TOI (total order isolation), i.e.
it sends the query to all nodes, and only later is it discovered that
the used engine or some other feature is not supported by Galera.
Because TOI is executed in parallel on all nodes, appliers could
execute the given TOI, ignore the error, and start inconsistency
voting, causing the node to leave the cluster, or we might get a
crash as reported.
For example, the SEQUENCE engine does not support the GEOMETRY data
type, causing either an inconsistency between nodes (because some
errors are ignored on the applier) or a crash.
Fixed by adding a new function wsrep_check_support() to check whether
Galera can support the given CREATE TABLE/SEQUENCE before TOI is
started; if not, a clear error message is provided to the user.
Currently not supported cases (see the sketch after the list):
* CREATE TABLE ... AS SELECT when streaming replication is used
* CREATE TABLE ... WITH SYSTEM VERSIONING AS SELECT
* CREATE TABLE ... ENGINE=SEQUENCE
* CREATE SEQUENCE ... ENGINE!=InnoDB
* ALTER TABLE t ... ENGINE!=InnoDB where table t is SEQUENCE
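For example, statements that are now rejected before TOI starts (an
illustrative sketch; the exact error text is not shown):

  CREATE TABLE t1 (g GEOMETRY) ENGINE=SEQUENCE;  -- clear error, no TOI
  CREATE SEQUENCE s1 ENGINE=MyISAM;              -- clear error, no TOI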
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The problem was that REPLACE was using a consistency check that
started TOI, and we then tried to roll it back.
Do not use wsrep_before_rollback and wsrep_after_rollback if we are
running a consistency check, because no writeset keys are added in
that case. Do not allow consistency check usage if the storage engine
of the target table is not InnoDB; instead give a warning.
REPLACE|SELECT INTO ... SELECT will now use TOI if the storage engine
of the target table is not InnoDB, to maintain consistency between
Galera nodes.
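An illustrative sketch (table names are hypothetical):

  CREATE TABLE t2 (a INT PRIMARY KEY) ENGINE=MyISAM;
  REPLACE INTO t2 SELECT * FROM t1;  -- now runs under TOI, with a warning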
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
A debug_sync signal could remain for the SQL thread that should have begun
a wait_for upon seeing a GTID event, but would instead see the old signal
and continue on without waiting. This broke an "idle" condition in
SHOW SLAVE STATUS which should have automatically negated
Seconds_Behind_Master. Instead, because the SQL thread had already
processed the GTID event, it set sql_thread_caught_up to false and
thereby calculated the value of Seconds_Behind_Master, when the test
expected 0.
This patch fixes this by resetting the debug_sync state before
creating a new transaction which sends a GTID event to the replica.
With this patch, a 4-component MSI version can be used, e.g. by
setting the TINY_VERSION variable in CMake, or by adding a string, e.g.
MYSQL_VERSION_EXTRA=-2
which sets TINY_VERSION to 2 and also changes the package name.
The 4-component MSI versions do not support MSI major upgrades, only
minor ones, i.e. they do not reinstall components, just update
existing ones based on versioning rules.
To support these rules, add DefaultVersion for the files that won't
otherwise be versioned: headers, static and import libraries, PDBs,
and text files (XML, Python and Perl scripts). Also silence the WiX
warning that MSI won't store hashes for those files anymore.
There are two array fields in spider_share with similar names:
- share->use_sql_dbton_ids maps from i to the i-th distinct dbton used
  by the share. Thus it should be used only when i iterates over all
  distinct dbtons used by the share.
- share->sql_dbton_ids maps from i to the dbton used by the i-th link
  of the share. Thus it should be used only when i iterates over all
  links of the share.
We correct instances where share->sql_dbton_ids should be used instead
of share->use_sql_dbton_ids.
Turning REGEXP_REPLACE into two schema-qualified functions:
- mariadb_schema.regexp_replace()
- oracle_schema.regexp_replace()
Fixing oracle_schema.regexp_replace(subj,pattern,replacement) to treat
NULL in "replacement" as an empty string.
Adding new classes implementing oracle_schema.regexp_replace():
- Item_func_regexp_replace_oracle
- Create_func_regexp_replace_oracle
Adding helper methods:
- String *Item::val_str_null_to_empty(String *to)
- String *Item::val_str_null_to_empty(String *to, bool null_to_empty)
and reusing these methods in both Item_func_replace and
Item_func_regexp_replace.
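An illustrative sketch of the difference (result comments reflect the
intended NULL handling):

  SET sql_mode=ORACLE;
  SELECT REGEXP_REPLACE('abc','b',NULL);  -- 'ac': NULL treated as ''
  SET sql_mode=DEFAULT;
  SELECT REGEXP_REPLACE('abc','b',NULL);  -- NULL, as before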
When CMAKE_OSX_ARCHITECTURES isn't set we end up with
"Packaging as: mariadb-10.4.33-osx10.19-x86_64" on arm64 builders.
Instead of implying 64bit is x86, use CMAKE_SYSTEM_PROCESSOR to form
the filename.
While a replica may be reading events from the
primary, the primary is killed. Left to its own
devices, the IO thread may or may not stop in
error, depending on what it is doing when its
connection to the primary is killed (e.g. a
failed read results in an error, whereas if the
IO thread is idly waiting for events when the
connection dies, it will enter into a reconnect
loop and reconnect). MDEV-32168 changed the test
to always wait for the reconnect, thus breaking
the error case, as the IO thread would be stopped
at a time of expecting it to be running.
The fix is to manually stop/start the IO thread
to ensure it is in a consistent state.
Note that rpl_domain_id_filter_master_crash.test
will need additional changes after fixing MDEV-33268.
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
This patch introduces the following behaviour for Linux while
maintaining old behaviour for Windows:
Ctrl-C (SIGINT) clears the current buffer and redraws the prompt.
Ctrl-C no longer exits the client if no query is running.
Ctrl-C kills the current running query if there is one. If there is an
error communicating with the server while trying to issue a KILL QUERY,
the client exits. This is in line with the past behaviour of Ctrl-C.
On Linux Ctrl-D can be used to close the client.
On Windows, Ctrl-C and Ctrl-BREAK still exit the client if no query is running.
Windows can also exit the client via \q<enter> or exit<enter>.
== Implementation details ==
The Linux implementation has two corner cases, based on which library is
used: libreadline or libedit, both are handled in code to achieve the
same user experience.
Additional code is taken from MySQL, ensuring that the behaviour on
Windows is identical to MySQL's mysql client implementation for other
Ctrl-related signals:
* The CTRL_CLOSE, CTRL_LOGOFF and CTRL_SHUTDOWN events issue the
  equivalent of Ctrl-C and "end" the program. This ensures that the
  query is killed when the client is closed by closing the terminal,
  logging off the user or shutting down the system. The latter two
  signals are not sent to interactive applications, but handling them
  covers the case when a user has defined a service that uses the
  mysql client to issue a command. See
  https://learn.microsoft.com/en-us/windows/console/handlerroutine
This patch is built on top of the initial work done by Anel Husakovic
<anel@mariadb.org>.
Closes #2815
Modified `federatedx_io_mysql::actual_query` to set the time zone to
'+00:00' only upon establishing a new connection, instead of with
each query execution.
This patch corrects the fix for MDEV-32569. The latter had not taken
into account the fact that not every statement uses the SELECT_LEX
structure. In particular, CALL statements do not use such a structure.
However, the parameter passed to the stored procedure used in such a
statement may require an invocation of
Type_std_attributes::agg_item_set_converter().
Approved by Oleksandr Byelkin <sanja@mariadb.com>
os_file_set_size(): Let us invoke the Linux system call fallocate(2)
directly, because the GNU libc posix_fallocate() implements a fallback
that writes to the file 1 byte every 4096 or fewer bytes. In one
environment, invoking fallocate() directly would lead to 4 times the
file growth rate during ALTER TABLE. Presumably, what happened was
that the NFS server used a smaller allocation block size than 4096 bytes
and therefore created a heavily fragmented sparse file when
posix_fallocate() was used. For example, extending a file by 4 MiB
would create 1,024 file fragments. When the file is actually being
written to with data, it would be "unsparsed".
The built-in EOPNOTSUPP fallback in os_file_set_size() writes a buffer
of 1 MiB of NUL bytes. This was always used on musl libc and other
Linux implementations of posix_fallocate().
Modify the NS_ZERO state in the JSON number parser to allow
exponential notation with a zero coefficient (e.g. 0E-4).
The NS_ZERO state transition on 'E' was updated to move to the
NS_EX state rather than returning a syntax error. A similar change
was made for the NS_ZE1 (negative zero) starter state.
This allows the accepted number grammar to include cases like:
- 0E4
- -0E-10
which were previously disallowed. Numeric parsing remains
the same for all other states.
Test cases are added to func_json.test to validate parsing for
various exponential numbers starting with zero coefficients.
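An illustrative sketch using the literals mentioned above:

  SELECT JSON_VALID('0E-4');    -- now 1, previously 0
  SELECT JSON_VALID('-0E-10');  -- now 1, previously 0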
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.