rpl.rpl_seconds_behind_master_spike uses the DEBUG_SYNC mechanism to
count how many format descriptor events (FDEs) have been executed, in
order to pause on a specific relay log FDE after executing
transactions. However, depending on when the IO thread is stopped, it
can send an extra FDE before sending the transactions, forcing the
test to pause before any transactions have been executed; the table
that the test then tries to read for COUNT does not yet exist.
This patch fixes this by no longer counting FDEs, but rather by
programmatically waiting until the SQL thread has executed the
transaction, and only then activating the DEBUG_SYNC point that
triggers at the next relay log FDE.
A debug_sync signal could remain for the SQL thread, which should have
begun a wait_for upon seeing a GTID event, but would instead see the
old signal and continue on without waiting. This broke an "idle"
condition in SHOW SLAVE STATUS which should have automatically set
Seconds_Behind_Master to 0. Instead, because the SQL thread had
already processed the GTID event, it set sql_thread_caught_up to false
and thereby calculated a value for Seconds_Behind_Master, when the
test expected 0.
This patch fixes this by resetting the debug_sync state before
creating a new transaction which sends a GTID event to the replica.
With this patch, a 4-component MSI version can be used, e.g. by
setting the TINY_VERSION variable in CMake, or by adding a string, e.g.
MYSQL_VERSION_EXTRA=-2
which sets TINY_VERSION to 2 and also changes the package name.
The 4-component MSI versions do not support MSI major upgrades, only
minor ones, i.e. they do not reinstall components, just update
existing ones based on versioning rules.
To support these rules, add DefaultVersion for the files that won't
otherwise be versioned - headers, static and import libraries, PDBs,
and text files (XML, Python and Perl scripts). Also silence the WiX
warning that MSI won't store hashes for those files anymore.
There are two array fields in spider_share with similar names:
- share->use_sql_dbton_ids, which maps i to the i-th distinct dbton
  used by the share. Thus it should be used only when i iterates over
  all distinct dbtons used by the share.
- share->sql_dbton_ids, which maps i to the dbton used by the i-th
  link of the share. Thus it should be used only when i iterates over
  all links of the share.
We correct instances where share->sql_dbton_ids should be used instead
of share->use_sql_dbton_ids.
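As a toy illustration of the two index domains (the count fields and
loop bodies below are simplified assumptions, not the actual
SPIDER_SHARE layout):

  #include <cstdio>

  // Toy model: use_sql_dbton_ids[i] is the i-th *distinct* dbton used
  // by the share; sql_dbton_ids[i] is the dbton of the i-th *link*.
  struct share_model
  {
    unsigned use_dbton_count;       // number of distinct dbtons (assumed)
    unsigned link_count;            // number of links (assumed)
    unsigned use_sql_dbton_ids[8];  // index: distinct-dbton ordinal
    unsigned sql_dbton_ids[8];      // index: link ordinal
  };

  void walk(const share_model &share)
  {
    // iterating over distinct dbtons: use_sql_dbton_ids applies
    for (unsigned i = 0; i < share.use_dbton_count; i++)
      std::printf("distinct dbton %u\n", share.use_sql_dbton_ids[i]);
    // iterating over links: sql_dbton_ids applies
    for (unsigned i = 0; i < share.link_count; i++)
      std::printf("link %u uses dbton %u\n", i, share.sql_dbton_ids[i]);
  }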
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT
This patch changes the way of detecting whether this optimization
can be applied when the table has long (hash-based) unique
constraints (i.e. UNIQUE..USING HASH).
Problem:
The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.
Fix:
- If the current key is a long unique, it now works as follows:
  UPDATE can be done if the current long unique is the last long
  unique and there are no in-engine (normal) uniques.
- For in-engine uniques nothing changes, it still works as before:
  if the current key is an in-engine (normal) unique, UPDATE can be
  done if it is the last normal unique.
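As a toy model of the corrected rule (the key-kind enumeration and
check order are simplified assumptions, not the server's
write_record() code):

  #include <cstddef>
  #include <vector>

  // Unique constraints in the order they are checked, each either
  // hash-based ("long") or in-engine ("normal").
  enum class Unique { LongHash, InEngine };

  // May REPLACE convert a conflict on keys[k] into an UPDATE?
  bool can_update(const std::vector<Unique> &keys, std::size_t k)
  {
    bool later_long = false, later_normal = false, any_normal = false;
    for (std::size_t i = 0; i < keys.size(); i++)
    {
      if (keys[i] == Unique::InEngine)
      {
        any_normal = true;
        if (i > k)
          later_normal = true;
      }
      else if (i > k)
        later_long = true;
    }
    if (keys[k] == Unique::LongHash)
      // must be the last long unique, with no in-engine uniques at all
      return !later_long && !any_normal;
    // in-engine unique: unchanged rule, must be the last normal unique
    return !later_normal;
  }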
Turning REGEXP_REPLACE into two schema-qualified functions:
- mariadb_schema.regexp_replace()
- oracle_schema.regexp_replace()
Fixing oracle_schema.regexp_replace(subj,pattern,replacement) to treat
NULL in "replacement" as an empty string.
Adding new classes implementing oracle_schema.regexp_replace():
- Item_func_regexp_replace_oracle
- Create_func_regexp_replace_oracle
Adding helper methods:
- String *Item::val_str_null_to_empty(String *to)
- String *Item::val_str_null_to_empty(String *to, bool null_to_empty)
and reusing these methods in both Item_func_replace and
Item_func_regexp_replace.
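A toy model of the helper pair, with std::optional standing in for the
server's String*/null_value handling (not the actual Item code):

  #include <optional>
  #include <string>

  using sql_string = std::optional<std::string>; // nullopt models SQL NULL

  sql_string val_str_null_to_empty(const sql_string &v)
  {
    return v ? v : sql_string("");               // NULL -> ''
  }

  sql_string val_str_null_to_empty(const sql_string &v, bool null_to_empty)
  {
    // Item_func_replace and mariadb_schema.regexp_replace() keep NULL
    // semantics (false); oracle_schema.regexp_replace() passes true
    // for the "replacement" argument.
    return null_to_empty ? val_str_null_to_empty(v) : v;
  }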
When CMAKE_OSX_ARCHITECTURES isn't set we end up with
"Packaging as: mariadb-10.4.33-osx10.19-x86_64" on arm64 builders.
Instead of implying that 64-bit is x86_64, use CMAKE_SYSTEM_PROCESSOR
to form the filename.
use the original, not the truncated, field in the long unique prefix,
that is, in the hash(left(field, length)) expression, because MyISAM
CHECK/REPAIR in compute_vcols() moves table->field but not the prefix
fields from keyparts.
Also, implement Field_string::cmp_prefix() to make prefix comparison
of CHAR columns work.
use thd->start_time for the "start_time" column of the slow_log table.
"current_time" here refers to the current_time() function return
value, not to the actual *current* time.
Also fixes
MDEV-33267 User with minimal permissions can intentionally corrupt mysql.slow_log table
Item_func_quote did not calculate its max_length correctly for nullable
arguments.
Fix:
If the argument is nullable, reserve at least 4 characters so that
the string "NULL" fits.
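For illustration, a toy version of the corrected length calculation
(the 2*max_length + 2 quoting bound is a simplification of
Item_func_quote's real formula):

  #include <algorithm>
  #include <cstdint>

  uint32_t quote_max_length(uint32_t arg_max_length, bool arg_maybe_null)
  {
    uint32_t len = 2 * arg_max_length + 2;   // quotes plus escaping
    if (arg_maybe_null)
      len = std::max<uint32_t>(len, 4);      // reserve room for "NULL"
    return len;
  }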
MCOL-5611: with Boost-1.80, "next_prime" disappears from
https://github.com/boostorg/unordered/blob/boost-1.79.0/include/boost/unordered/detail/implementation.hpp
which makes 1.80 the currently highest supported version, so let's
check for this version. While CMake-3.19+ supports version ranges in
package determination, this isn't supported for Boost in CMake-3.28.
So we check for 1.80 and don't compile ColumnStore when the check
fails.
Statements affected by this bug are all SQL statements that
1) are prefixed with "EXPLAIN", and
2) have a lower-level join structure created for a union subquery.
A bug in select_describe() passed an incorrect "result" object to
mysql_explain_union(), resulting in unpredictable behaviour and
out-of-context calls.
Reviewed by: Oleksandr Byelkin, sanja@mariadb.com
While a replica is reading events from the
primary, the primary is killed. Left to its own
devices, the IO thread may or may not stop in
error, depending on what it is doing when its
connection to the primary is killed (e.g. a
failed read results in an error, whereas if the
IO thread is idly waiting for events when the
connection dies, it will enter into a reconnect
loop and reconnect). MDEV-32168 changed the test
to always wait for the reconnect, thus breaking
the error case, as the IO thread would be stopped
at a time of expecting it to be running.
The fix is to manually stop/start the IO thread
to ensure it is in a consistent state.
Note that rpl_domain_id_filter_master_crash.test
will need additional changes after fixing MDEV-33268.
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
An existing binlog checksum can be overwritten with 0 if a NULL
payload is written when using zlib for the computation. That is,
calling into zlib's crc32() with empty data initializes an incremental
CRC computation to 0.
This patch changes the Log_event_writer::write_data() to exit
immediately if there is nothing to write, thereby bypassing the
checksum computation. This follows the pattern of
Log_event_writer::encrypt_and_write(), which also exits immediately
if there is no data to write.
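A standalone demonstration of the zlib behaviour at the root of this
bug:

  #include <zlib.h>
  #include <cassert>

  int main()
  {
    uLong crc = crc32(0L, Z_NULL, 0);            // initial value: 0
    crc = crc32(crc, (const Bytef *) "abc", 3);  // running checksum != 0
    assert(crc != 0);
    // Passing a NULL buffer does not preserve the running value: zlib
    // returns the required *initial* value, i.e. 0, losing the checksum.
    crc = crc32(crc, Z_NULL, 0);
    assert(crc == 0);
    return 0;
  }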
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This patch introduces the following behaviour for Linux while
maintaining old behaviour for Windows:
Ctrl-C (SIGINT) clears the current buffer and redraws the prompt.
Ctrl-C no longer exits the client if no query is running.
Ctrl-C kills the currently running query if there is one. If there is
an error communicating with the server while trying to issue a KILL
QUERY, the client exits. This is in line with the past behaviour of
Ctrl-C.
On Linux, Ctrl-D can be used to close the client.
On Windows, Ctrl-C and Ctrl-BREAK still exit the client if no query is
running. The client can also be exited via \q<enter> or exit<enter>.
== Implementation details ==
The Linux implementation has two corner cases, depending on which
line-editing library is used, libreadline or libedit; both are handled
in code to achieve the same user experience.
Additional code is taken from MySQL to ensure that, on Windows, the
behaviour for other CTRL-related signals is identical to MySQL's mysql
client implementation.
* The CTRL_CLOSE, CTRL_LOGOFF and CTRL_SHUTDOWN events issue the
equivalent of Ctrl-C and "end" the program. This ensures that the query is killed
when the client is closed by closing the terminal, logging off the
user or shutting down the system. The latter two signals are not sent
for interactive applications, but it handles the case when a user has
defined a service to use mysql client to issue a command. See
https://learn.microsoft.com/en-us/windows/console/handlerroutine
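A condensed sketch of the Linux-side idea, assuming libreadline (the
libedit corner case, the KILL QUERY connection handling and the
signal-safety details of the real client are omitted):

  #include <signal.h>
  #include <readline/readline.h>

  static volatile sig_atomic_t query_in_progress = 0; // set around queries

  static void handle_sigint(int)
  {
    if (query_in_progress)
    {
      // real client: issue KILL QUERY over a separate connection here;
      // if that fails, exit the client (the historical Ctrl-C behaviour)
      return;
    }
    rl_replace_line("", 0);  // clear the current input buffer
    rl_on_new_line();
    rl_redisplay();          // redraw the prompt
  }

  void install_sigint_handler()
  {
    struct sigaction sa = {};
    sa.sa_handler = handle_sigint;
    sigaction(SIGINT, &sa, nullptr);
  }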
This patch is built on top of the initial work done by Anel Husakovic
<anel@mariadb.org>.
Closes #2815
Modified `federatedx_io_mysql::actual_query` to set the time zone to
'+00:00' only upon establishing a new connection, instead of with each
query execution.
This patch corrects the fix for MDEV-32569. The latter did not take
into account the fact that not every statement uses the SELECT_LEX
structure. In particular, CALL statements do not use such a structure.
However, the parameter passed to the stored procedure used in such a
statement may require an invocation of
Type_std_attributes::agg_item_set_converter().
Approved by Oleksandr Byelkin <sanja@mariadb.com>
recv_dblwr_t::find_first_page(): Free the memory allocated for
reading the first 3 pages from the tablespace.
innodb.doublewrite: Added a sleep to ensure that the page cleaner
thread wakes up from my_cond_wait().
os_file_set_size(): Let us invoke the Linux system call fallocate(2)
directly, because the GNU libc posix_fallocate() implements a fallback
that writes to the file 1 byte every 4096 or fewer bytes. In one
environment, invoking fallocate() directly would lead to 4 times the
file growth rate during ALTER TABLE. Presumably, what happened was
that the NFS server used a smaller allocation block size than 4096 bytes
and therefore created a heavily fragmented sparse file when
posix_fallocate() was used. For example, extending a file by 4 MiB
would create 1,024 file fragments. When the file is actually being
written to with data, it would be "unsparsed".
The built-in EOPNOTSUPP fallback in os_file_set_size() writes a buffer
of 1 MiB of NUL bytes. This was always used on musl libc and other
Linux implementations of posix_fallocate().
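An illustrative sketch (hypothetical helper with simplified error
handling, not the actual os_file_set_size() code; Linux-specific):

  #include <fcntl.h>
  #include <cerrno>

  // Extend a file from cur_size to new_size with fallocate(2) directly,
  // avoiding posix_fallocate()'s 1-byte-per-4096-bytes fallback writes.
  bool extend_file(int fd, off_t cur_size, off_t new_size)
  {
    while (fallocate(fd, 0, cur_size, new_size - cur_size) != 0)
    {
      if (errno == EINTR)
        continue;       // interrupted by a signal: retry
      // e.g. EOPNOTSUPP: the caller then falls back to writing a
      // 1 MiB buffer of NUL bytes, as described above
      return false;
    }
    return true;
  }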
Modify the NS_ZERO state in the JSON number parser to allow
exponential notation with a zero coefficient (e.g. 0E-4).
The NS_ZERO state transition on 'E' was updated to move to the
NS_EX state rather than returning a syntax error. A similar change
was made for the NS_ZE1 (negative zero) starter state.
This allows the accepted number grammar to include cases like:
- 0E4
- -0E-10
which were previously disallowed. Numeric parsing remains
the same for all other states.
Test cases are added to func_json.test to validate parsing for
various exponential numbers starting with zero coefficients.
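A toy model of the transition change (state names follow the
description above; the real parser is table-driven rather than a
switch):

  #include <cassert>

  enum num_state { NS_ZERO, NS_ZE1, NS_EX, NS_ERR };

  num_state on_char_E(num_state s)
  {
    switch (s)
    {
    case NS_ZERO:   // "0"  seen -> "0E...",  e.g. 0E4, 0E-4
    case NS_ZE1:    // "-0" seen -> "-0E...", e.g. -0E-10
      return NS_EX; // previously: syntax error
    default:
      return NS_ERR;
    }
  }

  int main()
  {
    assert(on_char_E(NS_ZERO) == NS_EX);
    assert(on_char_E(NS_ZE1) == NS_EX);
    return 0;
  }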
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
- InnoDB may fail to find the space id from page0 of
the tablespace. In that case, InnoDB can use the
doublewrite buffer to recover page0 and write it
into the file.
- buf_dblwr_t::init_or_load_pages(): Load only the pages
which are valid (page LSN >= checkpoint LSN). To do that,
InnoDB has to open the redo log before the system
tablespace and read the latest checkpoint information.
recv_dblwr_t::find_first_page():
1) Iterate the doublewrite buffer pages and find the 0th page.
2) Read the tablespace flags and space id from the 0th page.
3) Read the 1st, 2nd and 3rd pages from the tablespace file and
compare their space id with the space id which is stored
in the doublewrite buffer.
4) If it matches, then we can write into the file.
5) Return the space which matches the pages from the file.
SysTablespace::read_lsn_and_check_flags(): Remove the
retry logic for validating the first page. After
restoring the first page from the doublewrite buffer,
assign the tablespace flags by reading the first page.
recv_recovery_read_max_checkpoint(): Read the maximum
checkpoint information from the log file.
recv_recovery_from_checkpoint_start(): Avoid reading
the checkpoint header information from the log file.
Datafile::validate_first_page(): Throw an error in case
first page validation fails.
PageBulk::init(): Do not unnecessarily reserve an extent before
allocating a page for bulk insert; btr_page_alloc() is capable of
handling the extension of the tablespace.
The problem is that the test is skipped after sourcing
include/master-slave.inc. This leaves the slave threads running after
the test is skipped, causing a following test to fail during rpl
setup.
Also rename have_normal_bzip.inc to the more appropriate _zlib.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
- Closes PR #2839
- Usage of `Column_definition_fix_attributes()` suggested by Alexander
Barkov - thanks bar, that is better than a hook in server code
(reverted 22f3ebe4bf)
- This method is called after parsing the data type:
* in `CREATE/ALTER TABLE`
* in SP: return data type, parameter data type, variable data type
- We want to disallow all these use cases of MYSQL_JSON.
- Reviewer: bar@mariadb.com cvicentiu@mariadb.org
- We don't test `json` MySQL tables from `std_data` since the error
`ER_TABLE_NEEDS_REBUILD` is raised. However, MDEV-32235 will override
this test after merge; we leave it to show behavior and historical
changes.
- Closes PR #2833
Reviewer: <cvicentiu@mariadb.org>
<serg@mariadb.com>