Executing an INSERT statement in PS mode with a positional parameter bound
to an array could result in an incorrect number of inserted rows when there
is a BEFORE INSERT trigger that executes yet another INSERT statement to put
a copy of the row being inserted into some table.
The reason for the incorrect number of inserted rows is that the data
structure used for binding a positional argument to its actual values is
stored in THD (thd->bulk_param) and reused when processing every INSERT
statement. This leads to the actual values bound to the top-level INSERT
statement being consumed by the INSERT statements run from the triggers'
bodies.
To fix the issue, temporarily reset thd->bulk_param to nullptr before
invoking triggers and restore its value when their execution finishes.
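Below is a minimal, self-contained sketch of that save/restore pattern; the
THD stand-in and the fire_triggers() helper are hypothetical, illustrative
names, while the real patch wraps the server's actual trigger invocation.

  #include <cstddef>

  struct THD { void *bulk_param= nullptr; };          // stand-in for the server's THD
  static bool fire_triggers(THD *) { return false; }  // stand-in for the trigger body

  static bool invoke_triggers_without_bulk_param(THD *thd)
  {
    void *save_bulk_param= thd->bulk_param;
    thd->bulk_param= nullptr;          // hide the top-level INSERT's bound arrays

    bool err= fire_triggers(thd);      // any INSERT inside the trigger body now
                                       // sees bulk_param == nullptr

    thd->bulk_param= save_bulk_param;  // restore for the next bulk iteration
    return err;
  }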
I checked all potential stack overflow problems found with
gcc -Wstack-usage=16384
and
clang -Wframe-larger-than=16384 -no-inline
Fixes:
- Added '#pragma clang diagnostic ignored "-Wframe-larger-than="'
to a lot of functions where stack usage is large but reasonable.
- Added stack check warnings to BUILD scripts when using clang and debug.
Functions changed to use malloc() instead of allocating things on the stack
(a sketch of this conversion follows the list):
- read_bootstrap_query() now allocates line_buffer (20000 bytes) with
malloc() instead of using the stack. This has a small performance impact,
but that is not relevant for bootstrap.
- mroonga grn_select() used 65856 bytes on the stack. Changed it to use
malloc().
- Wsrep_schema::replay_transaction() and
Wsrep_schema::recover_sr_transactions().
- Connect zipOpen3()
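The conversion mentioned above follows the usual stack-to-heap pattern. The
sketch below is simplified and is not the exact read_bootstrap_query() code;
only the 20000-byte buffer size is taken from the change.

  #include <cstdio>
  #include <cstdlib>

  /* Simplified illustration: read one line with a heap-allocated buffer so
     the function's stack frame stays small. */
  static int read_bootstrap_line(FILE *file)
  {
    const size_t line_buffer_size= 20000;
    char *line_buffer= static_cast<char*>(malloc(line_buffer_size));
    if (!line_buffer)
      return -1;                                 /* out of memory */

    int res= fgets(line_buffer, (int) line_buffer_size, file) ? 0 : 1;
    /* ... process the line as before ... */

    free(line_buffer);                           /* was a 20000-byte stack array */
    return res;
  }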
Not fixed:
- mroonga/vendor/groonga/lib/expr.c grn_proc_call() uses
43712 bytes on the stack. However, this is not easy to fix as the stack
usage is caused by a lot of code generated by defines.
- Most changes in mroonga/groonga were only the addition of pragmas to
disable stack warnings.
- rocksdb/options/options_helper.cc uses 20288 bytes of stack space.
(no reason to fix except to get rid of the compiler warning)
- Cases using alloca() where the allocation size is reasonable.
- An issue in libmariadb (reported to connectors).
Update the URLs in two places in the test_upgrade script.
* boost-program-options dependency points to
archives.fedoraproject.org
* gpgkey for the MariaDB rpm repository points to
archive.mariadb.org/PublicKey
This ensures that the upgrade testing run in .gitlab-ci.yml continues to
function and refers to the correct GPG key.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
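As a rough illustration of the intent (hypothetical names and message format,
not the server's actual logging code), the extra part is appended only when a
real client address from the proxy protocol is known:

  #include <cstdio>

  /* Illustrative only: 'real_ip' is non-null only for proxy-protocoled
     connections, so ordinary connections keep the familiar message. */
  static void format_aborted_message(char *buf, size_t len,
                                     const char *user, const char *host,
                                     const char *real_ip)
  {
    char real_ip_part[80]= "";
    if (real_ip)
      snprintf(real_ip_part, sizeof(real_ip_part), " real ip: %s", real_ip);

    snprintf(buf, len, "Aborted connection (user: '%s' host: '%s'%s)",
             user, host, real_ip_part);
  }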
Starting with clang-16, MemorySanitizer appears to check that
uninitialized values are not passed by value or returned.
Previously, it was allowed to copy uninitialized data in such cases.
get_foreign_key_info(): Remove a local variable that was passed
uninitialized to a function.
DsMrr_impl: Initialize key_buffer, because DsMrr_impl::dsmrr_init()
is reading it.
test_bind_result_ext1(): MYSQL_TYPE_LONG is 32 bits, hence we must
use a 32-bit type, such as int. sizeof(long) differs between
LP64 and LLP64 targets.
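A short sketch of the binding in question, assuming an already prepared and
executed statement 'stmt' that returns one integer column (simplified from
the test):

  #include <mysql.h>
  #include <string.h>

  /* MYSQL_TYPE_LONG is a 32-bit value, so the result buffer must be a
     32-bit type such as int; 'long' would be 64 bits on LP64 targets. */
  static int bind_long_result(MYSQL_STMT *stmt, int *value)
  {
    MYSQL_BIND bind;
    memset(&bind, 0, sizeof(bind));
    bind.buffer_type= MYSQL_TYPE_LONG;
    bind.buffer= (void *) value;
    return mysql_stmt_bind_result(stmt, &bind);
  }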
Move MYSQL::fields down, replacing MYSQL::unused5.
This way only MYSQL::fields and MYSQL::field_alloc will still have
a different offset in C/C and the server, but all other MYSQL members
will get back in sync.
Luckily, plugins shouldn't need MYSQL::fields or MYSQL::field_alloc.
Added a check to ensure both MYSQL structures are always of
the same size.
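Such a size check can be expressed as a compile-time assertion; the sketch
below uses stand-in struct definitions rather than the real MYSQL
declarations, just to show the shape of the check.

  /* Stand-ins for the two MYSQL definitions (server vs. bundled C/C). */
  struct MYSQL_server_side { void *fields; void *field_alloc; char pad[64]; };
  struct MYSQL_client_side { void *fields; void *field_alloc; char pad[64]; };

  /* Build fails if the two layouts ever end up with different sizes. */
  static_assert(sizeof(MYSQL_server_side) == sizeof(MYSQL_client_side),
                "MYSQL struct must have the same size in the server and C/C");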
After a successful connection, the server always sets SERVER_STATUS_AUTOCOMMIT
in server_status in the OK packet. This is wrong if the global variable
autocommit=0.
Fixed THD::init(), added mysql_client_test test.
Thanks to Diego Dupin for providing the patch.
Signed-off-by: Vladislav Vaintroub <vvaintroub@gmail.com>
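A simplified sketch of the idea behind the THD::init() fix, using stand-in
types and an illustrative flag value rather than the server's actual
definitions:

  #include <cstdint>

  static const uint32_t SERVER_STATUS_AUTOCOMMIT_FLAG= 1U << 1;  // illustrative value

  struct SessionVars { bool autocommit; };

  /* Only advertise autocommit in the status flags when it is actually on. */
  static uint32_t initial_server_status(const SessionVars &vars)
  {
    uint32_t status= 0;
    if (vars.autocommit)
      status|= SERVER_STATUS_AUTOCOMMIT_FLAG;
    return status;
  }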
The MDEV-29693 conflict resolution is from Monty, as is a bug fix
where ANALYZE TABLE wrongly built histograms for a
single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid an mtr issue
Disabled tests:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
The error was specific to the threadpool/compressed protocol.
set_thd_idle() set the socket state to idle twice, causing an assertion
failure. This happens if there is unread compressed data on the connection
after the query has finished. At the protocol level, this means a single
compression packet contains multiple command packets.
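A minimal sketch of the kind of guard involved, with hypothetical state
tracking rather than the actual threadpool code: the connection must not be
marked idle while a compression packet still carries further command packets.

  enum class SockState { ACTIVE, IDLE };

  struct Conn
  {
    SockState state= SockState::ACTIVE;
    bool has_unread_data= false;   // e.g. more commands in the compression packet
  };

  static void maybe_set_idle(Conn *c)
  {
    if (c->has_unread_data)
      return;                      // keep the connection active: more work queued
    if (c->state != SockState::IDLE)
      c->state= SockState::IDLE;   // single active -> idle transition
  }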
Port the test case from MySQL to MariaDB:
MySQL fix Bug#33813951, Change-Id: I2448e3f2f36925fe70d882ae5681a6234f0d5a98.
Function test_simple_temporal() from MySQL ported from C++ to pure C.
This includes one change:
- DIE_UNLESS(field->type == MYSQL_TYPE_DATETIME);
+ DIE_UNLESS(field->type == MYSQL_TYPE_TIMESTAMP);
The bound param of SELECT ? is TIMESTAMP in this code.
MySQL returns it back as DATETIME. MariaDB preserves TIMESTAMP.
Code packaged for commit by Daniel Black.
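A condensed sketch of the binding the test exercises, assuming a prepared
"SELECT ?" statement 'stmt' (simplified from the ported test): a
MYSQL_TYPE_TIMESTAMP parameter is bound, and after execution MariaDB reports
the result column as MYSQL_TYPE_TIMESTAMP, whereas MySQL reports it as
MYSQL_TYPE_DATETIME.

  #include <mysql.h>
  #include <string.h>

  static int bind_timestamp_param(MYSQL_STMT *stmt, MYSQL_TIME *tm)
  {
    MYSQL_BIND param;
    memset(&param, 0, sizeof(param));
    memset(tm, 0, sizeof(*tm));
    tm->year= 2023; tm->month= 3; tm->day= 1;
    tm->time_type= MYSQL_TIMESTAMP_DATETIME;

    param.buffer_type= MYSQL_TYPE_TIMESTAMP;   /* bound param is a TIMESTAMP */
    param.buffer= (void *) tm;
    return mysql_stmt_bind_param(stmt, &param);
    /* after mysql_stmt_execute(): mysql_stmt_result_metadata() +
       mysql_fetch_field() give field->type == MYSQL_TYPE_TIMESTAMP on MariaDB */
  }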
* Log rows in online_alter_binlog.
* Table online data is replicated within a dedicated binlog file
* Cached data is written on commit.
* Versioning is fully supported.
* Works both with and without binlog enabled.
* For now, setting savepoints is forbidden while ONLINE ALTER is in progress.
Extra support is required: we could simply log the SAVEPOINT query events
and replicate them together with the row events, but this is not implemented
yet.
* Cache flipping:
We want to take care of the possible bottleneck in the online alter binlog
reading/writing in advance.
IO_CACHE does not provide anything better than sequential access;
besides, only a single write is mutex-protected, which is not suitable,
since we should write a transaction atomically.
To solve this, a special layer on top of Event_log is implemented.
There are two IO_CACHE files underneath: one for reading, and one for
writing.
Once the read cache is empty, an exclusive lock is acquired (we may have to
wait for a currently active transaction to finish writing), and flip() is
performed, i.e. the write cache is reopened for reading, and the read cache
is emptied and reopened for writing.
This resembles the buffer flip that happens in accelerated graphics
(DirectX/OpenGL/etc).
Cache_flip_event_log is considered non-blocking for a single reader and a
single writer in this sense, with the only lock being held by the reader
during the flip.
An alternative approach by implementing a fair concurrent circular buffer
is described in MDEV-24676.
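A condensed sketch of the flip idea, using generic in-memory buffers and
hypothetical names instead of IO_CACHE: writers append under a short lock,
and the single reader swaps the two buffers only when its read buffer runs
dry.

  #include <mutex>
  #include <string>
  #include <vector>

  class FlipLog
  {
    std::mutex flip_mutex;
    std::vector<std::string> read_buf, write_buf;
    size_t read_pos= 0;

  public:
    void write(const std::string &trx_events)     // one transaction, atomically
    {
      std::lock_guard<std::mutex> lk(flip_mutex);
      write_buf.push_back(trx_events);
    }

    bool read(std::string *out)                   // single reader only
    {
      if (read_pos == read_buf.size())            // read cache is empty: flip
      {
        std::lock_guard<std::mutex> lk(flip_mutex);
        read_buf.clear();
        read_buf.swap(write_buf);                 // write cache becomes readable
        read_pos= 0;
        if (read_buf.empty())
          return false;                           // nothing new was written
      }
      *out= read_buf[read_pos++];
      return true;
    }
  };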
* Cache managers:
We have two cache sinks: statement and transactional.
It is important that the changes are cached first per statement and then
per transaction.
If a statement fails, then only the statement data is rolled back. The
transaction moves along, however.
It turns out there is no guarantee that a TABLE will persist in
thd->open_tables until the transaction commit moment:
if an error occurs, the statement's tables are purged.
Therefore, we can't store the caches in TABLE. Ideally, they should be
stored in the handlerton, but we cut a corner and store them in THD, in a
list.
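A rough sketch of that two-level caching, with hypothetical structures rather
than the server's classes: row events land in a per-statement cache first, a
successful statement moves them into the per-transaction cache, a failed
statement simply drops them, and commit writes the transaction cache out.

  #include <string>
  #include <utility>
  #include <vector>

  struct OnlineAlterCache                 // stored per THD, e.g. in a list
  {
    std::vector<std::string> stmt_cache;  // current statement's row events
    std::vector<std::string> trx_cache;   // events of the whole transaction

    void log_row(std::string ev) { stmt_cache.push_back(std::move(ev)); }

    void end_statement(bool stmt_ok)
    {
      if (stmt_ok)
        trx_cache.insert(trx_cache.end(), stmt_cache.begin(), stmt_cache.end());
      stmt_cache.clear();                 // on error only this statement is lost
    }

    template <class Sink>
    void end_transaction(bool committed, Sink &&write_to_binlog)
    {
      if (committed)
        for (const std::string &ev : trx_cache)
          write_to_binlog(ev);            // flushed to the online alter binlog
      trx_cache.clear();
    }
  };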