row_search_mvcc(): Revise an overflow check, disabling it on 64-bit
systems. The maximum number of consecutive record reads in a key range
scan should be limited by the maximum number of records per page
(less than 2^13) and the maximum number of pages per tablespace (2^32)
to less than 2^45. On 32-bit systems we can simplify the overflow check.
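A minimal sketch of the resulting check, with the counter type and
function name invented for illustration:

  #include <cstdint>

  /* Worst case: fewer than 2^13 records per page times at most 2^32
     pages, i.e. fewer than 2^45 consecutive reads, which always fits
     in a 64-bit counter. */
  static bool read_counter_would_overflow(uintptr_t n_reads)
  {
  #if UINTPTR_MAX == UINT32_MAX
    /* 32-bit build: the counter could wrap, so keep a simple check. */
    return n_reads == UINT32_MAX;
  #else
    /* 64-bit build: n_reads < 2^45 < 2^64, so no check is needed. */
    (void) n_reads;
    return false;
  #endif
  }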
Reviewed by: Vladislav Lesin
There are two array fields in spider_share with similar names:
share->use_sql_dbton_ids, which maps i to the i-th distinct dbton used
by the share. Thus it should be used only when i iterates over all
distinct dbtons used by the share.
share->sql_dbton_ids, which maps i to the dbton used by the i-th
link of the share. Thus it should be used only when i iterates over
all links of a share.
We correct instances where share->sql_dbton_ids should be used instead
of share->use_sql_dbton_ids.
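A hedged sketch of the two mappings; the count fields and visitor
functions are invented for illustration and are not the actual spider
code:

  typedef unsigned int uint;

  struct share_sketch
  {
    uint use_dbton_count;      /* number of distinct dbtons used */
    uint use_sql_dbton_ids[8]; /* i -> i-th distinct dbton of the share */
    uint link_count;           /* number of links of the share */
    uint sql_dbton_ids[16];    /* i -> dbton used by the i-th link */
  };

  void visit_distinct_dbtons(share_sketch *share, void (*f)(uint))
  {
    for (uint i= 0; i < share->use_dbton_count; i++)
      f(share->use_sql_dbton_ids[i]);   /* correct: i ranges over dbtons */
  }

  void visit_link_dbtons(share_sketch *share, void (*f)(uint))
  {
    for (uint i= 0; i < share->link_count; i++)
      f(share->sql_dbton_ids[i]);       /* correct: i ranges over links */
  }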
Modified `federatedx_io_mysql::actual_query` to set the time zone to '+00:00' only upon establishing a new connection instead of with each query execution.
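A minimal sketch of the pattern under a hypothetical connection
wrapper; only the placement of the SET statement reflects the change:

  #include <string>

  struct io_mysql_sketch
  {
    bool connected= false;

    /* placeholder for sending sql to the remote server */
    void run(const std::string &sql) { (void) sql; }

    void connect()
    {
      /* ... establish the connection ... */
      run("SET time_zone = '+00:00'");   /* once, on the new connection */
      connected= true;
    }

    void actual_query(const std::string &query)
    {
      if (!connected)
        connect();
      /* previously: run("SET time_zone = '+00:00'") before every query */
      run(query);
    }
  };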
os_file_set_size(): Let us invoke the Linux system call fallocate(2)
directly, because the GNU libc posix_fallocate() implements a fallback
that writes to the file 1 byte every 4096 or fewer bytes. In one
environment, invoking fallocate() directly would lead to 4 times the
file growth rate during ALTER TABLE. Presumably, what happened was
that the NFS server used a smaller allocation block size than 4096 bytes
and therefore created a heavily fragmented sparse file when
posix_fallocate() was used. For example, extending a file by 4 MiB
would create 1,024 file fragments. When the file is actually being
written to with data, it would be "unsparsed".
The built-in EOPNOTSUPP fallback in os_file_set_size() writes a buffer
of 1 MiB of NUL bytes. This was always used on musl libc and other
Linux implementations of posix_fallocate().
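A simplified sketch of the approach (reduced error handling; the real
os_file_set_size() covers more cases): invoke fallocate(2) directly
and fall back to writing NUL bytes only when the file system reports
EOPNOTSUPP.

  #define _GNU_SOURCE 1      /* for fallocate() on glibc */
  #include <algorithm>
  #include <cerrno>
  #include <fcntl.h>
  #include <unistd.h>
  #include <vector>

  static bool extend_file(int fd, off_t old_size, off_t new_size)
  {
    if (fallocate(fd, 0, old_size, new_size - old_size) == 0)
      return true;
    if (errno != EOPNOTSUPP)
      return false;
    /* Fallback: physically write NUL bytes in 1 MiB chunks. */
    std::vector<char> buf(1 << 20, 0);
    for (off_t ofs= old_size; ofs < new_size; )
    {
      off_t n= std::min<off_t>(off_t(buf.size()), new_size - ofs);
      ssize_t w= pwrite(fd, buf.data(), size_t(n), ofs);
      if (w <= 0)
        return false;
      ofs+= w;
    }
    return true;
  }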
PageBulk::init(): Unnecessarily reserves the extent before
allocating a page for bulk insert. btr_page_alloc() is capable
of handling the extension of the tablespace on its own.
The spider init bug fixes remove any race conditions during spider
init.
Also remove the add_suppressions in spider/bugfix.mdev_27575, which
worked around a similar issue.
A new column was introduced to the SHOW INDEX output in 10.6 in
commit f691d9865b.
Thus we update the check of the number of columns to be at least 13,
rather than exactly 13.
Also backport an err number and format from 10.5 for better error
messages when the column number is wrong.
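A hedged sketch of the relaxed check; the constant name and the
surrounding error handling are illustrative, not the actual code:

  /* SHOW INDEX returns 13 columns before 10.6 and gained a new column
     in 10.6, so require "at least 13" rather than "exactly 13". */
  static const unsigned MIN_SHOW_INDEX_COLUMNS= 13;

  static bool show_index_width_ok(unsigned n_columns)
  {
    return n_columns >= MIN_SHOW_INDEX_COLUMNS;   /* previously: == 13 */
  }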
There are a large number of uses of `strerror` in the codebase, but
the local declaration in `storage/connect/tabvct.cpp` is the only one.
Given that no other use needs such a declaration, I conclude that this
instance can simply be removed.
In commit b4ff64568c the
signature of mysql_show_var_func was changed, but not all functions
of that type were adjusted.
When the server is configured with `cmake -DWITH_ASAN=ON` and
compiled with clang, runtime errors would be flagged for invoking
functions through an incompatible function pointer.
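A generic illustration of the class of error; the types below are
invented and are not the actual mysql_show_var_func signature. Calling
a function through a pointer of a different type is undefined
behaviour, which clang's sanitizers report at run time:

  /* New, longer signature that the registration table expects: */
  typedef int (*show_var_fn)(void *thd, void *var, char *buff, int scope);

  /* A callback still written against the old, shorter signature: */
  static int old_style(void *thd, void *var, char *buff)
  {
    (void) thd; (void) var; (void) buff;
    return 0;
  }

  int main()
  {
    char buff[16];
    /* The cast compiles, but the call through the mismatched pointer
       type is what the sanitizer flags; the fix is to change every
       such callback to the new signature instead of casting. */
    show_var_fn fn= reinterpret_cast<show_var_fn>(old_style);
    return fn(nullptr, nullptr, buff, 0);
  }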
Reviewed by: Michael 'Monty' Widenius
In spider_db_mbase_util::print_item_func(), if the sql item_func has
an UNKNOWN_FUNC type, the spider group by handler (gbh) by default
transforms it from infix to prefix notation. But REGEXP should remain
infix, so we add an if condition to account for this.
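A minimal sketch of the idea, with the function name and string
handling invented: UNKNOWN_FUNC items are printed in prefix form by
default, while REGEXP must stay infix.

  #include <string>

  static std::string print_binary_func(const std::string &name,
                                       const std::string &lhs,
                                       const std::string &rhs)
  {
    if (name == "regexp")                        /* keep infix       */
      return lhs + " " + name + " " + rhs;       /* a regexp b        */
    return name + "(" + lhs + "," + rhs + ")";   /* default: prefix   */
  }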
Remove DB_LOCK_WAIT return code check as it should have been resolved to
one of the other errors by that point.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
row_ins_clust_index_entry_low(): Invoke btr_set_instant() in the same
mini-transaction that has successfully inserted the metadata record.
In this way, if inserting the metadata record fails before any
undo log record was written for it, the index root page will remain
consistent.
innobase_instant_try(): Remove the btr_set_instant() call.
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
ha_innobase::check_if_supported_inplace_alter(): On ALTER_OPTIONS,
if innodb_file_per_table=1 and the table resides in the system tablespace,
require that the table be rebuilt (and moved to an .ibd file).
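A hedged sketch of just the condition (parameter names invented; the
real check lives in ha_innobase::check_if_supported_inplace_alter()):

  /* ALTER ... with table options: refuse the no-rebuild path when the
     table still resides in the system tablespace but
     innodb_file_per_table=1, so that the table is rebuilt into an
     .ibd file. */
  static bool alter_options_requires_rebuild(bool innodb_file_per_table,
                                             bool in_system_tablespace)
  {
    return innodb_file_per_table && in_system_tablespace;
  }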
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
Remove ORACLE from the (session) sql_mode in connections made with the
sql service to run init queries.
The connection is new, so the global variable value takes effect
rather than the session value from the caller of spider_db_init.
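A minimal sketch; the bit value is hypothetical, and only the idea of
masking ORACLE out of the sql_mode used for the init queries reflects
the change:

  typedef unsigned long long sql_mode_t;

  /* Hypothetical bit for MODE_ORACLE; the real value comes from the
     server headers. */
  static const sql_mode_t MODE_ORACLE_SKETCH= 1ULL << 30;

  static sql_mode_t init_query_sql_mode(sql_mode_t global_sql_mode)
  {
    /* The new internal connection starts from the global sql_mode, so
       strip ORACLE before running the init queries. */
    return global_sql_mode & ~MODE_ORACLE_SKETCH;
  }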
This should fix certain CI builds where the spider suite test files
and the main suite test files do not follow the same relative path
layout as in the mariadb source.
$MYSQLD_CMD uses .1 as the defaults-group-suffix, which could cause
the use of the default port (3306) or socket, and that will fail in
environments where these defaults are already in use by another server.
Adding an extra --defaults-group-suffix=.1.1 does not help, because
the first flag wins.
So we use $MYSQLD_LAST_CMD instead, which uses the correct suffix.
The extra innodb buffer pool warning is irrelevant to the goal of the
test (running --wsrep-recover with --plugin-load-add=ha_spider should
not cause a hang).
Fix spider init bugs (MDEV-22979, MDEV-27233, MDEV-28218) while
preventing regressions on old ones (MDEV-30370, MDEV-29904).
Two things are changed:
First, Spider initialisation is made fully synchronous, i.e. it no
longer happens in a background thread. Adapted from the original fix
by nayuta for MDEV-27233. This change by itself would cause failures
when spider is initialised early, by plugin-load-add, due to
dependencies on Aria and on udf function creation, which are addressed
below. It requires the SQL Service, so porting to earlier versions
requires MDEV-27595.
Second, if spider is initialised before udf_init(), create udf by
inserting into `mysql.func`, otherwise do it by `CREATE FUNCTION` as
usual. This change may be generalised in MDEV-31401.
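A hedged sketch of the branch; the query text is simplified and the
mysql.func column values are illustrative:

  #include <string>

  static std::string udf_install_query(bool udf_subsystem_ready,
                                       const std::string &name)
  {
    if (udf_subsystem_ready)
      /* Normal path once udf_init() has run. */
      return "CREATE FUNCTION " + name + " RETURNS INT SONAME 'ha_spider.so'";
    /* Early init (plugin-load-add): register the UDF by writing the
       row that CREATE FUNCTION would have written. */
    return "INSERT INTO mysql.func VALUES ('" + name +
           "', 2, 'ha_spider.so', 'function')";
  }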
Also factor out some clean-up queries from deinit_spider.inc for use
in spider init tests.
A minor caveat is that early spider initialisation will fail if the
server is bootstrapped for the first time, due to the missing `mysql`
database, which needs to be created by the bootstrap script.
Removing procedures that were created and dropped during init.
This also fixes a race condition where an mtr test with
plugin-load-add=ha_spider.so causes the post-test check to fail, as it
expects the procedures to still be there.
There are several plugins in ha_spider: spider, spider_alloc_mem,
spider_wrapper_protocols, spider_rewrite etc.
INSTALL PLUGIN foo SONAME ha_spider, where foo is any one of these
plugins, causes all the other ones to be installed by the init queries.
This introduces unnecessary complexity. For example, it reads
mysql.plugin to find all the other plugins, causing the hack of moving
spider plugin init to a separate thread.
To install all spider related plugins, INSTALL SONAME ha_spider should
be used instead.
This also fixes spurious rows in mysql.plugin when installing, say,
only the spider plugin with `plugin-load-add=SPIDER=ha_spider.so`:
select * from mysql.plugin;
name dl
spider_alloc_mem ha_spider.so # should not be here
spider_wrapper_protocols ha_spider.so # should not be here
Adapted from part of the reverted commit
c160a115b8.
Problem:
========
- InnoDB requires two keys on the same column for a self-referencing
foreign key relation.
Solution:
=========
- Allow a self-referential foreign key relation to work with one
key.
row_upd_clust_rec_by_insert(): If we are resuming from a lock wait,
reset the 'disowned' flag of the BLOB pointers in 'entry' that we
copied from 'rec' on which we had invoked btr_cur_disown_inherited_fields()
before the lock wait started. In this way, the inserted record with
the updated PRIMARY KEY value will have the BLOB ownership associated
with itself, like it is supposed to be.
Note: If the lock wait had been aborted, then rollback would have
invoked btr_cur_unmark_extern_fields() and no corruption would be possible.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
trx_t::commit_in_memory(): Empty the detailed_error string, so that
FOREIGN KEY error messages from an earlier transaction will not be
wrongly reused in ha_innobase::get_error_message().
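A minimal sketch with the transaction object reduced to the one field
that matters here:

  #include <string>

  struct trx_sketch
  {
    std::string detailed_error;   /* e.g. a FOREIGN KEY error message */

    void commit_in_memory()
    {
      /* ... the actual commit work ... */
      detailed_error.clear();     /* so get_error_message() cannot
                                     return text left over from an
                                     earlier transaction */
    }
  };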
Reviewed by: Thirunarayanan Balathandayuthapani
- Commit fc31e3114b2538d152194d17ff0f0439db565634 (MDEV-8179) doesn't
report the progress of inplace alter completely; it does so only
in row_merge_sort(). Remove the progress report function completely.
This avoids the scenario in MDEV-32849, where the unlock happens after
the connection has been freed, say in rollback. This is done in 10.5+
after the commit a26700cca5.
It may or may not prevent other potential scenarios where spider has
locked something, then for some reason the statement needs to be
rolled back and spider frees the connection, and then spider proceeds
to use the freed connection. But at least we fix the regression
introduced by MDEV-30014 into 10.4 and bring 10.4 closer to parity
with 10.5+.
mysql_prepare_alter_table(): ALTER TABLE should check whether the
foreign key exists when it is expected to exist, and report the
error at an early stage.
dict_foreign_parse_drop_constraints(): Don't throw an error if the
foreign key constraint doesn't exist when IF EXISTS is given
in the statement.
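A hedged sketch of the intended behaviour (names invented): a missing
constraint is an error only when IF EXISTS was not given.

  enum drop_fk_result { DROP_FK_OK, DROP_FK_NOT_FOUND };

  static drop_fk_result drop_foreign_key(bool constraint_found, bool if_exists)
  {
    if (constraint_found)
      return DROP_FK_OK;          /* drop it */
    /* IF EXISTS: silently ignore the missing constraint;
       otherwise report the error at an early stage of ALTER TABLE. */
    return if_exists ? DROP_FK_OK : DROP_FK_NOT_FOUND;
  }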
We remove the call to update spider persistent table stats (sts/crd)
in spider_free_share(). This prevents spider from opening and closing
further tables during close(), which fixes the following issues:
MDEV-28739: ha_spider::close() is called during tdc_start_shutdown(),
which is called after query_cache_destroy(). Closing the sts/crd Aria
tables will trigger a call to Query_cache::invalidate_table(), which
will attempt to use the query cache mutex structure_guard_mutex
destroyed previously.
MDEV-29421: during ha_spider::close(), spider_free_share() could
trigger another spider_free_share() through updating sts/crd table,
because open_table() calls tc_add_table(), which could trigger another
ha_spider::close()...
Since spider sts/crd system tables are only updated here, there's no
use for these tables any more, and we remove all uses of these tables
too.
The removal should not cause any performance issue, as the in-memory
spider table stats are only updated based on a time
interval (spider_sts_interval and spider_crd_interval), which defaults
to 10 seconds. It should not affect accuracy either, due to the
infrequency of server restarts. And inaccurate stats are not a problem
for the optimizer anyway.
To be on the safe side, we defer the removal of the spider sts/crd
tables themselves to the future.
btr_cur_update_in_place(): Update the DB_TRX_ID,DB_ROLL_PTR also on the
compressed copy of the page. In a test case, a server built with
cmake -DWITH_INNODB_EXTRA_DEBUG=ON would crash in page_zip_validate()
due to the inconsistency. In a normal debug build, a different assertion
would fail, depending on when the uncompressed page was restored from
the compressed page.
In MariaDB Server 10.5, this bug had already been fixed by
commit b3d02a1fcf (MDEV-12353).
Problem:
========
- InnoDB fails to free the buffer allocated for the stored cursor
when an information schema query is interrupted.
Solution:
=========
- During error handling, the information schema query should free
the buffer allocated to store the cursor.
This will avoid issues like MDEV-32486.
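A simplified sketch of the pattern (buffer size and names invented):
the buffer holding the cursor must be released on the interrupted path
as well as on the normal one.

  #include <cstdlib>

  static bool fill_schema_table_sketch(bool interrupted)
  {
    char *cursor_buf= static_cast<char*>(malloc(4096));
    if (!cursor_buf)
      return false;
    if (interrupted)
    {
      free(cursor_buf);           /* previously leaked on this path */
      return false;
    }
    /* ... store and use the cursor ... */
    free(cursor_buf);
    return true;
  }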
IDs used in
- spider_alloc_calc_mem_init()
- spider_string::init_calc_mem()
- spider_malloc()
- spider_bulk_alloc_mem()
- spider_bulk_malloc()
This fixes MDEV-30014, MDEV-29456, MDEV-29667, and MDEV-30049.
The server may ask storage engines to unlock when the original sql
command is not UNLOCK. This patch makes sure that spider honours these
requests, so that the server has the correct idea which tables are
locked and which are not.
MDEV-29456, MDEV-29667, MDEV-30049: a later LOCK statement would, as
the first step, unlock locked tables and clear the OPTION_TABLE_LOCK
bit in thd->variables.option_bits, as well as locked_tables_list,
indicating no tables are locked. If Spider does not unlock because the
sql command is not UNLOCK, and if after this the LOCK statement fails
to lock any tables, these indications that no tables are locked
remain, so a later UNLOCK TABLES statement would not try to unlock
any table. This causes later statements requiring MDL locks to hang,
waiting until lock_wait_timeout (default 1h) has passed.
MDEV-30014: when a LOCK statement tries to lock more than one table,
say t2 and t3 as in mdev_30014.test, and t2 succeeds but t3 fails, the
sql layer would try to undo by unlocking t2. Again, if Spider does
not honour this request, the sql layer would assume t2 has been
unlocked, but later actions on t2 or t2's remote table could hang
waiting for the MDL.
Spider populates its lock lists (a hash) in store_lock(), and normally
clears them in the actual lock_tables(). However, if lock_tables()
fails, there's no reset_lock() method for storage engine handlers,
which can cause bad things to happen. For example, if one of the
tables involved is dropped and recreated, or simply TRUNCATEd, then
when executing LOCK TABLES again, the lock lists would be queried
again in store_lock(), which could cause access to freed space
associated with the dropped table.