MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables
UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().
Additional DML cases in view.test
---------------------------------------------------------
- Temporarily fix MDEV-13782 by commenting out LIKE_FUNC in CondFilter
modified: storage/connect/ha_connect.cc
- Make Rest available for MariaDB binary distributed versions.
modified: storage/connect/CMakeLists.txt
- Remove unused declaration
modified: storage/connect/filter.h
Replace all io_context* occurrences with io_context_t
Even in release mode, die immediately when some io_* function returns
EINVAL. This always means a programming bug, and it is better to fail fast.
LinuxAIOHandler::resubmit(): fix the condition. Stop ignoring the -1 return
code, which corresponds to EPERM; io_submit() really can return it.
Use io_destroy() to stop leaking io_context_t.
Make m_aio_ctx a std::vector instead of a C array. I think the internal
check for index overflow might be useful.
Add debug assertions for EFAULT, because receiving it looks like a
programming bug.
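A minimal sketch of this fail-fast policy (illustrative code, not the
actual LinuxAIOHandler): io_submit() returns a negative errno value on
failure, and EINVAL/EFAULT are treated as programming bugs.

  #include <libaio.h>
  #include <cerrno>
  #include <cstdio>
  #include <cstdlib>

  // Submit one request; die on EINVAL/EFAULT even in release builds.
  // io_submit() returns a negative errno on failure (-1 is -EPERM).
  static int submit_one(io_context_t ctx, struct iocb *cb)
  {
    struct iocb *batch[1]= {cb};
    int ret= io_submit(ctx, 1, batch);
    if (ret == -EINVAL || ret == -EFAULT)
    {
      fprintf(stderr, "io_submit() failed: %d\n", ret);
      abort();      // fail fast: this is always a programming bug
    }
    return ret;     // other errors are for the caller to handle
  }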
DbugParse(): remove the mutex lock/unlock, which is supposed to protect
file writes only; no file writes happen in this function.
DbugFlush(): move the mutex_unlock out of this method, because fflush()
doesn't need any locking.
Slow operations such as mutex lock/unlock and errno (TLS) access
are moved into a narrower scope.
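A minimal sketch of the narrowed scope, with hypothetical names (the real
code is C using pthread mutexes; std::mutex stands in here): only the
shared file write is protected, while fflush() and errno access stay
outside the lock.

  #include <cstdio>
  #include <mutex>

  static std::mutex trace_mutex;

  void trace_write_line(FILE *f, const char *line)
  {
    {
      std::lock_guard<std::mutex> guard(trace_mutex); // narrow lock scope
      fputs(line, f);          // only the shared file write is protected
    }
    fflush(f);                 // flushing needs no locking
    if (ferror(f))
      perror("trace_write_line"); // errno (TLS) read outside the lock
  }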
For ROW_FORMAT=REDUNDANT, we must reserve fixed-length dummy values
for the CHAR columns in the metadata record. This is because in
MariaDB Server 10.4, btr_cur_instant_init_low() will rely on
dict_index_t::trx_id_offset being accurate for the metadata record.
In MariaDB Server 10.4, btr_cur_instant_init_low() assumes that
all PRIMARY KEY columns that are internally variable-length will
be encoded in 0 bytes in the metadata record. Sometimes, CHAR
columns can be encoded as variable-length. We should not
unnecessarily reserve space for a dummy string value in the
metadata record.
The fix consists of three commits backported from 10.3:
1) Cleanup isnan() portability checks
(cherry picked from commit 7ffd7fe962)
2) Cleanup isinf() portability checks
Original problem reported by Wlad: re-compilation of 10.3 on top of 10.2
build would cache undefined HAVE_ISINF from 10.2, whereas it is expected
to be 1 in 10.3.
std::isinf() seems to be available on all supported platforms.
(cherry picked from commit bc469a0bdf)
3) Use std::isfinite in C++ code
This is an addition to the parent revision, fixing build failures.
(cherry picked from commit 54999f4e75)
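For illustration (this snippet is not from any of the backported
commits), a minimal program exercising the portable <cmath>
classification functions, including the std::isfinite() that the third
commit standardizes on:

  #include <cmath>
  #include <cstdio>
  #include <limits>

  int main()
  {
    const double values[]= {1.0,
                            std::numeric_limits<double>::infinity(),
                            std::numeric_limits<double>::quiet_NaN()};
    for (double v : values)
      std::printf("finite=%d inf=%d nan=%d\n",
                  (int) std::isfinite(v), (int) std::isinf(v),
                  (int) std::isnan(v));
    return 0;
  }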
DropIndex, CreateIndex: Remove. The file row0trunc.cc only exists
in MariaDB Server 10.3 so that crash recovery of TRUNCATE TABLE
operations from older 10.2 and 10.3 servers will work. This dead code
was used for implementing the MySQL 5.7 WL#6501 TRUNCATE TABLE,
which was replaced with a backup-safe implementation in MDEV-13564.
buf_read_ibuf_merge_pages(): Discard any page numbers that are
outside the current bounds of the tablespace, by invoking the
function ibuf_delete_recs() that was introduced in MDEV-20934.
This avoids a potential infinite change buffer merge loop with
innodb_fast_shutdown=0, because normally a change buffer merge
would only be attempted after a page was successfully loaded into
the buffer pool, which is impossible for a page outside the
tablespace bounds.
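A hedged sketch of the bounds check, with stub types and hypothetical
helpers; only the comparison against the tablespace size and the call
corresponding to ibuf_delete_recs() reflect the actual change:

  #include <cstdint>
  #include <vector>

  struct space_sketch { uint32_t id; uint32_t size; }; // size in pages

  // Stand-ins for the real InnoDB functions.
  static void ibuf_delete_recs_sketch(uint32_t, uint32_t) {}
  static void schedule_read_sketch(uint32_t, uint32_t) {}

  static void merge_pages_sketch(const space_sketch &space,
                                 const std::vector<uint32_t> &pages)
  {
    for (uint32_t page_no : pages)
    {
      if (page_no >= space.size)   // outside current tablespace bounds
      {
        // Discard the buffered changes instead of looping forever
        // trying to read a nonexistent page.
        ibuf_delete_recs_sketch(space.id, page_no);
        continue;
      }
      schedule_read_sketch(space.id, page_no); // normal merge path
    }
  }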
dict_drop_index_tree(): Add the parameter trx_t*.
To prevent the DROP TABLE crash, do not invoke btr_free_if_exists()
if the entire .ibd file will be dropped. Thus, we will avoid a crash
if the BTR_SEG_LEAF or BTR_SEG_TOP of the index is corrupted,
and we will also avoid unnecessarily accessing the to-be-dropped
tablespace via the buffer pool.
In MariaDB 10.2, we disable the DROP TABLE fix if innodb_safe_truncate=0,
because the backup-unsafe MySQL 5.7 WL#6501 form of TRUNCATE TABLE
requires that the individual pages be freed inside the tablespace.
This PR contains an mtr test for reproducing a failure when replicating a create table as select statement (CTAS) through asynchronous MariaDB replication to a MariaDB Galera cluster.
The problem happens when the CTAS replication contains both a create table statement and subsequent row events for populating the table. In such a situation, the Galera node operating as a MariaDB replication slave first replicates only the create table part into the cluster, and then performs another replication containing both the create table statement and the row events. This leads all other nodes to fail with a duplicate table creation attempt, and to crash due to this failure.
The PR also contains a fix, which identifies the situation where a CTAS has been replicated and scans further in the asynchronous replication stream to see whether row events follow. The slave node replicates either a single TOI, in case the CTAS table is empty, or, if the CTAS table contains rows, a single bundled write set carrying both the create table statement and the row events.
This fix should keep the master server's GTIDs for CTAS replication in sync with the GTIDs in the Galera cluster.
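A very rough sketch of the decision described above, with hypothetical
event types and names (the real fix inspects replication events in the
slave applier):

  #include <cstddef>
  #include <vector>

  enum class ev_type_sketch { CTAS, ROW_EVENT, OTHER };
  struct event_sketch { ev_type_sketch type; };

  // After seeing a CTAS at position i, peek ahead: if row events follow,
  // the table is non-empty and CREATE + rows must go out as one bundled
  // write set; otherwise a single TOI suffices.
  static bool ctas_needs_bundled_write_set(
      const std::vector<event_sketch> &stream, size_t i)
  {
    return i + 1 < stream.size() &&
           stream[i + 1].type == ev_type_sketch::ROW_EVENT;
  }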
Make sure that the sort buffers can store at least one sort key.
This is needed to ensure that all merge buffers are read; otherwise,
with no sort keys in a buffer, some merge buffers are skipped because
the code concludes there is no data to be read.
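A minimal sketch of the invariant, with hypothetical names: the
effective sort buffer size is clamped so that at least one full sort
key always fits.

  #include <algorithm>
  #include <cstddef>

  // Never allocate less than one full sort key, so that no merge
  // buffer is ever mistaken for an empty one.
  static size_t effective_sort_buffer_size(size_t requested_bytes,
                                           size_t sort_key_length)
  {
    return std::max(requested_bytes, sort_key_length);
  }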
This new CONNECT version 1.07 fully implements NOSQL support.
It allows working on JSON or XML data retrieved as REST query results
from all binary distributions of MariaDB when cpprestsdk is installed
and the GetRest library is available.
=====================================================================
- Make Rest available for MariaDB binary distributed versions.
Change RestGet function so it can be called from a library.
modified: storage/connect/CMakeLists.txt
modified: storage/connect/restget.cpp
modified: storage/connect/tabrest.cpp
- Make column FLAG option available to discovery functions.
modified: storage/connect/ha_connect.cc
modified: storage/connect/plgdbsem.h
- Update CONNECT version number and date.
modified: storage/connect/ha_connect.cc
- Move OEMColumns function from mycat.cc to reldef.cpp.
modified: storage/connect/mycat.cc
modified: storage/connect/reldef.cpp
- Allocate tables as TABREF (was RELDEF)
modified: storage/connect/mycat.cc
modified: storage/connect/mycat.h
- Fix MDEV-20845 by commenting out TIMEOUT setting.
modified: storage/connect/myconn.cpp
- Call DefineAM before calling GetColCatInfo. Column offset
is now based on record format instead of table type.
The RECFM_VCT format was added.
This enables tables to specify the record format and is
useful in particular for OEM tables.
modified: storage/connect/plgdbsem.h
modified: storage/connect/reldef.cpp
modified: storage/connect/reldef.h
modified: storage/connect/tabdos.cpp
modified: storage/connect/tabdos.h
modified: storage/connect/tabfix.cpp
modified: storage/connect/tabfmt.cpp
modified: storage/connect/tabmysql.cpp
modified: storage/connect/tabutil.cpp
modified: storage/connect/tabutil.h
modified: storage/connect/tabvct.cpp
modified: storage/connect/xindex.cpp
Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
  other engines using DS-MRR) will have inited=RND when initialized for
  a DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition inquires partitions
  about how much memory their MRR implementation needs by passing
  *buffer_size=0. The DS-MRR code didn't know about this (actually, it
  used uint for the buffer size calculation and would have an underflow).
  Returning *buffer_size=0 made ha_partition assume that partitions do
  not need MRR memory and pass the same buffer to each of them.
  Now this is fixed: if DS-MRR gets *buffer_size=0, it returns
  the amount of buffer space needed, but not more than about
  @@mrr_buffer_size (see the sketch after this list).
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
  partitions, and the partitions use DS-MRR, the code will call
  handler->clone() with the TABLE (*not* the partition) name as an argument.
  DS-MRR has no way of knowing the partition name, so the solution is
  to have the ::clone() function of the affected storage engines ignore
  the name argument and obtain the name elsewhere.
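A hedged sketch of the *buffer_size handshake from the
choose_mrr_impl() item above (names and the size formula are
illustrative, not the actual DsMrr_impl code):

  #include <algorithm>
  #include <cstdint>

  static void choose_mrr_impl_sketch(uint64_t rows, uint32_t key_len,
                                     uint32_t mrr_buffer_size, // @@mrr_buffer_size
                                     uint32_t *buffer_size)
  {
    // 64-bit arithmetic avoids the uint underflow mentioned above.
    uint64_t needed= rows * key_len;
    if (*buffer_size == 0)
    {
      // The caller (e.g. ha_partition) is only asking for our memory
      // requirement; answer it instead of accepting a zero-size buffer.
      *buffer_size= (uint32_t) std::min<uint64_t>(needed, mrr_buffer_size);
      return;
    }
    // Otherwise *buffer_size is the buffer we were actually given.
  }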
Apart from page latches (buf_block_t::lock), mini-transactions
keep track of at most one dict_index_t::lock and
fil_space_t::latch at a time, and in a rare case, purge_sys.latch.
Let us introduce interfaces for acquiring an index latch
or a tablespace latch.
In a later version, we may want to introduce mtr_t members
for holding a latched dict_index_t* and fil_space_t*,
and replace the remaining use of mtr_t::m_memo
with std::set<buf_block_t*> or with a map<buf_block_t*,byte*>
pointing to log records.
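A hypothetical, self-contained sketch of what such an interface could
look like; the names and std::shared_mutex are illustrative only, not
the actual mtr_t API. The mini-transaction remembers the single latched
index so that commit() can release it.

  #include <shared_mutex>

  struct index_sketch { std::shared_mutex lock; };

  class mtr_sketch
  {
    index_sketch *m_index= nullptr; // at most one index latch at a time

  public:
    void s_lock(index_sketch &index)
    {
      index.lock.lock_shared();     // acquire the shared index latch
      m_index= &index;              // remember it for release at commit
    }

    void commit()
    {
      if (m_index)
      {
        m_index->lock.unlock_shared(); // release the tracked latch
        m_index= nullptr;
      }
    }
  };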
In the test innodb.instant_alter,4k we would be flagging an error
for a too-large row size. That error was previously only being reported
if the table was being rebuilt. Thus, this merge fixes a small
omission in MDEV-11369 (instant ADD COLUMN).
Fix incorrect change introduced in the fix for MDEV-20109.
The patch tried to compute a more precise estimate for the record_count
value in the SJ-Materialization-Scan strategy (in
Sj_materialization_picker::check_qep). However, the new formula is worse,
as it produces extremely optimistic results in common cases where
SJ-Materialization-Scan should be used.
The old formula produces pessimistic results in cases where SJ-Materialization-
Scan is unlikely to be a good choice anyway. So, the old behavior is better.
Move row size check to early CREATE/ALTER TABLE phase. Stop checking
on table open.
dict_index_add_to_cache(): remove parameter 'strict', stop checking row size
dict_index_t::record_size_info_t: the result of the row size check operation
create_table_info_t::row_size_is_acceptable(): performs the row size check,
issues an error or a warning, and writes the first overflow field to the
InnoDB log.
create_table_info_t::create_table(): add row size check
dict_index_t::record_size_info(): a refactored version
of dict_index_t::rec_potentially_too_big(). The new version doesn't change
the global state of the program but returns all the relevant info, and it
is the callers who decide how to handle row size overflow (a simplified
sketch of this pattern follows below).
dict_index_t::rec_potentially_too_big(): removed
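A simplified sketch of the refactoring pattern (field names are
hypothetical; the real record_size_info_t carries more detail): the
check returns a result object without touching global state, and the
caller chooses the policy.

  #include <cstddef>

  struct record_size_info_sketch
  {
    size_t max_record_size= 0;
    bool   row_is_too_big= false;
  };

  // Pure computation: no errors or warnings are issued here.
  static record_size_info_sketch
  check_row_size_sketch(size_t row_size, size_t page_limit)
  {
    record_size_info_sketch info;
    info.max_record_size= row_size;
    info.row_is_too_big= row_size > page_limit;
    return info;
  }

  // A caller (e.g. the CREATE TABLE path) decides the policy:
  //   auto info= check_row_size_sketch(...);
  //   if (info.row_is_too_big)
  //     /* error in strict mode, warning otherwise */;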
A search with PAGE_CUR_GE may land on the supremum record on
a leaf page that is not the rightmost leaf page.
This can occur when all keys on the current page are
smaller than the search key, and the smallest key on the
successor page is larger than the search key. (For example, if a leaf
page holds the keys 1,2,3 and the next page starts at 5, a PAGE_CUR_GE
search for 4 lands on the first page's supremum.)
ibuf_delete_recs(): Correct the debug assertion accordingly.
mtr_t::Impl, mtr_t::Command: Merge to mtr_t.
MTR_MAGIC_N: Remove.
MTR_STATE_COMMITTING: Remove. This state was only being set
internally during mtr_t::commit().
mtr_t::Command::m_locks_released: Remove (set-and-never-read member).
mtr_t::Command::m_start_lsn: Replaced with the return value of
finish_write() and a parameter to release_blocks().
mtr_t::Command::m_end_lsn: Removed as a duplicate of mtr_t::m_commit_lsn.
mtr_t::Command::prepare_write(): Replace a switch () with a
comparison against 0. Only two m_log_mode values are allowed.
Problem:
========
CURRENT_TEST: binlog_encryption.rpl_corruption
mysqltest: In included file "./include/wait_for_slave_io_error.inc":
...
At line 72: Slave stopped with wrong error code
**** Slave stopped with wrong error code: 1743 (expected 1595,1913) ****
Analysis:
========
The test emulates corruption at various stages of replication, for
example in the binlog file, in the network, and in the relay log. It
verifies that all corruption cases are handled through appropriate error
messages.
The test cases which emulate network failure expect the following errors:
--ER_SLAVE_RELAY_LOG_WRITE_FAILURE (1595)
--ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE (1743)
Ideally the test should expect the error codes 1595 and 1743,
but it actually waits on the incorrect error codes 1595,1913.
Fix:
===
Added appropriate error code for 'ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE'.
Replaced 1913 with 1743.
The XDES_CLEAN_BIT is always set for every element of
the page allocation bitmap in the extent descriptor pages.
Do not bother touching it, to avoid redundant writes.