fprintf() on Windows, when used on an unbuffered FILE*, writes byte by byte.
This can make crash handler messages harder to read if they get interleaved
with other error log output.
Fixed, on Windows, by formatting into a small buffer and writing it with
fwrite instead of fprintf, whenever the buffer is large enough for the message.
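A minimal sketch of the approach (not the server's actual code): format into a
stack buffer and emit it with a single fwrite() call, falling back to fprintf()
when the message does not fit.

  #include <cstdarg>
  #include <cstdio>

  /* Emit a message with one write call when possible, so crash handler lines
     are not interleaved byte by byte with other log output. */
  static void print_atomic(FILE *f, const char *fmt, ...)
  {
    char buf[1024];                          /* small formatting buffer */
    va_list args;
    va_start(args, fmt);
    int len= vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    if (len >= 0 && (size_t) len < sizeof(buf))
      fwrite(buf, 1, (size_t) len, f);       /* one write, not bytewise */
    else
    {
      va_start(args, fmt);
      vfprintf(f, fmt, args);                /* too long: fall back to stdio */
      va_end(args);
    }
  }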
The reason is that on Windows, OpenSSL can be built with a different C runtime
than the server (e.g. the Debug runtime in a debug vcpkg build).
Overriding only malloc() with the CRT that the server is using can cause a
mix-up of incompatible malloc() and free() inside OpenSSL.
To fix, override all memory functions.
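A minimal sketch of the idea, assuming OpenSSL 1.1+ where
CRYPTO_set_mem_functions() installs all three callbacks; the wrapper names are
placeholders, not the server's real functions.

  #include <openssl/crypto.h>
  #include <stdlib.h>

  /* Route every OpenSSL allocation through the server's CRT, so that
     malloc()/realloc()/free() always come from the same runtime. */
  static void *crt_malloc(size_t num, const char *file, int line)
  { (void) file; (void) line; return malloc(num); }
  static void *crt_realloc(void *ptr, size_t num, const char *file, int line)
  { (void) file; (void) line; return realloc(ptr, num); }
  static void crt_free(void *ptr, const char *file, int line)
  { (void) file; (void) line; free(ptr); }

  static int init_openssl_memory(void)
  {
    /* Must run before any OpenSSL function allocates memory. */
    return CRYPTO_set_mem_functions(crt_malloc, crt_realloc, crt_free);
  }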
Assertion `table->field[0]->ptr >= table->record[0] &&
table->field[0]->ptr <= table->record[0] + table->s->reclength' failed in
handler::assert_icp_limitations.
table->move_fields has some limitations:
1. It cannot be used in cascade
2. It should always have a restoring pair.
Rule 1 is covered by assertions in handler::assert_icp_limitations
and handler::ptr_in_record (commit 30894fe9a9).
Rule 2 should be manually maintained with care. Hopefully, the rule 1 assertions
may sometimes help as well.
In ha_myisam::repair, both rules are broken. table->move_fields is used
asymmetrically there: it is set on every param->fix_record call
(i.e. in compute_vcols), but is restored only once, at the end of repair.
The reason for updating the field ptrs on every call is that compute_vcols can
(supposedly) be called in parallel, that is, with the same table, but different
records.
The condition to "unmove" the pointers in ha_myisam::restore_vcos_after_repair
is incorrect when stored vcols are available, and myisam stores a VIRTUAL field
if it's the only field in the table (the record cannot be of zero length).
This patch solves the problem by "unmoving" the pointers symmetrically, in
compute_vcols. That is, both rules are now maintained.
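A self-contained toy of rule 2 (simplified stand-ins, not the server's real
TABLE/Field classes): every pointer move gets its restoring pair in the same
function, so compute_vcols leaves the table unchanged no matter how many times,
or from how many threads, it is called.

  #include <vector>

  struct Field { unsigned char *ptr; };
  struct Table { std::vector<Field*> field; unsigned char *record0; };

  /* Shift every field pointer from one record buffer to another. */
  static void move_fields(Table *t, unsigned char *to, unsigned char *from)
  {
    for (Field *f : t->field)
      f->ptr= f->ptr - from + to;
  }

  /* Rule 2 kept symmetric: move, compute, then "unmove" before returning. */
  static void compute_vcols(Table *t, unsigned char *record)
  {
    move_fields(t, record, t->record0);
    /* ... evaluate virtual column expressions into `record` ... */
    move_fields(t, t->record0, record);      /* restoring pair */
  }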
Before this change the unix socket auth plugin returned true only when
the OS socket user id matched the MariaDB user name.
The authentication string was ignored.
Now, if an authentication string is defined in the `unix_socket`
authentication rule, the authentication string is compared with the
socket's user name, and the plugin succeeds if they match.
Make the plugin fill in the @@external_user variable.
This change is similar to the MySQL commit
https://github.com/mysql/mysql-server/commit/6ddbc58e.
However, there is one difference from the above commit:
- For MySQL, a Unix user matching either the DB user name or the
  authentication string is allowed to connect.
- For MariaDB, if an authentication string is defined, only a Unix user
  matching the authentication string is allowed to connect.
This is because allowing both names has risks and cannot handle the case
where a customer wants to allow only one single Unix user, which does not
match the DB user name, to connect.
If the DB user is created with multiple unix_socket options, for example:
`create user A identified via unix_socket as 'B' or unix_socket as 'C';`
then both Unix users B and C are accepted.
The existing MTR test `plugins.unix_socket` is not impacted.
Also add a new MTR test to verify authentication with an authentication
string. See the MTR test cases for supported/unsupported cases.
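A hedged sketch of the matching rule only (Linux SO_PEERCRED; the function
and parameter names here are placeholders, not the plugin's real code): when
an authentication string is present it is the only accepted OS user name,
otherwise the DB user name is used.

  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE                        /* for struct ucred */
  #endif
  #include <pwd.h>
  #include <string.h>
  #include <sys/socket.h>

  /* Return 1 when the peer's OS user name matches what the account requires. */
  static int peer_name_ok(int fd, const char *db_user, const char *auth_string)
  {
    struct ucred cred;
    socklen_t len= sizeof(cred);
    if (getsockopt(fd, SOL_SOCKET, SO_PEERCRED, &cred, &len))
      return 0;
    struct passwd *pw= getpwuid(cred.uid);
    if (!pw)
      return 0;
    /* If an authentication string is defined, it is the only accepted name;
       otherwise fall back to the DB user name. */
    const char *expected= (auth_string && *auth_string) ? auth_string : db_user;
    return strcmp(pw->pw_name, expected) == 0;
  }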
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
in bintars the server is linked with wolfssl, while the connector
is linked with gnutls. Thus client_ed25519.so gets a gnutls
dependency with unresolved symbols and cannot be loaded into the
server, where gnutls symbols aren't present.
linking the plugin statically with gnutls fixes that and the test passes.
but when such a plugin is loaded into the client, the client gets
two copies of gnutls - they conflict and ssl doesn't work at all.
let's detect this and disable the test for now.
Modified node config with longer timeouts for the suspect,
inactive, install and wait_prim timeouts. Increased
node_1 weight to keep it in the primary component when
other nodes are voted out.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Replication of MyISAM and Aria DML is experimental and best
effort only. An earlier change made INSERT SELECT on both
MyISAM and Aria replicate using TOI and STATEMENT
replication. Replication should happen only if the user
has set the needed wsrep_mode setting.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Fixed the used configuration and added a suppression for a warning
message. Test case changes only.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Modified the test configuration file to use wsrep_sync_wait
to make sure committed transactions are replicated before the
next operation.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
note that:
* unit.conc_tls is broken in mtr
* schannel now doesn't fail on invalid ca path unless
--ssl-verify-server-cert is used. openssl still does.
New runtime type diagnostic (MDEV-34490) has detected that the classes
Item_func_eq, Item_default_value and Item_date_literal_for_invalid_dates
incorrectly return an instance of their ancestor classes when being cloned.
This commit fixes that.
Additionally, it fixes a bug in Item_func_case_simple::do_build_clone()
which led to an endless loop of cloning function calls.
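A self-contained toy of the cloning rule (simplified stand-ins, not the real
Item hierarchy): each class builds a copy of its own most derived type, and
copies its state instead of re-entering its own builder, which is what caused
the endless loop.

  struct Item
  {
    virtual ~Item() {}
    virtual Item *do_build_clone() const= 0;
  };

  struct Item_func_eq_like : Item            /* stand-in for e.g. Item_func_eq */
  {
    /* Correct: the clone is an instance of this class, not of an ancestor,
       and is produced by copying state rather than calling the builder again. */
    Item *do_build_clone() const override { return new Item_func_eq_like(*this); }
  };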
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Issue: During mtr_t::commit, if there is not enough space available in the
redo log buffer, we flush the buffer. During the flush, the LSN lock is
released, allowing other concurrent mtrs to commit. After the flush we
reacquire the lock but use the old LSN obtained before the check. This could
lead to redo log corruption, as the LSN moves backwards, with the possibility
of data loss and an unrecoverable server if the server aborts for any reason
or is shut down with innodb_fast_shutdown=2.
With a normal shutdown, recovery fails to map the checkpoint LSN to the
correct offset.
In debug mode it hits log0log.cc:863: lsn_t log_t::write_buf()
Assertion `new_buf_free == ((lsn - first_lsn) & write_size_1)' failed.
In release mode, after normal shutdown, restart fails.
[ERROR] InnoDB: Missing FILE_CHECKPOINT(8416546) at 8416546
[ERROR] InnoDB: Log scan aborted at LSN 8416546
Backup fails reading the corrupt redo log.
[00] 2024-07-31 20:59:10 Retrying read of log at LSN=7334851
[00] FATAL ERROR: 2024-07-31 20:59:11 Was only able to copy log from
7334851 to 7334851, not 8416446; try increasing innodb_log_file_size
Unless a backup is tried or the server is shut down or killed
immediately, the corrupt redo part is eventually truncated and there
may not be any visible issues in release mode.
This issue was introduced by the following commit.
commit a635c40648
MDEV-27774 Reduce scalability bottlenecks in mtr_t::commit()
Fix: If we need to release the latch and flush the redo log before writing the
mtr logs, make sure to get the latest system LSN after reacquiring the
redo system latch.
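A self-contained toy of the fix pattern (std::mutex stands in for the redo
system latch; this is not InnoDB's real log_sys): any LSN cached before the
latch was dropped must be re-read after the latch is reacquired.

  #include <cstdint>
  #include <mutex>

  struct ToyLog
  {
    std::mutex latch;                 /* stands in for the redo system latch */
    uint64_t   lsn= 0;                /* current system LSN */
    bool       has_room= false;
    void flush() { has_room= true; }  /* pretend the flush frees buffer space */
  };

  static uint64_t reserve_start_lsn(ToyLog &log)
  {
    std::unique_lock<std::mutex> lk(log.latch);
    uint64_t start_lsn= log.lsn;
    if (!log.has_room)
    {
      lk.unlock();                    /* other mtrs may commit and advance lsn */
      log.flush();
      lk.lock();
      start_lsn= log.lsn;             /* the fix: refresh the LSN after relock */
    }
    return start_lsn;                 /* append mtr log records from here */
  }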
The final touch to fixing memory leaks for PS/SP. This patch turns on the
option WITH_PROTECT_STATEMENT_MEMROOT by default in order to build the
server with the memory leak detection mechanism switched on.
Reason:
======
- InnoDB fails to load the instant alter table metadata from the
clustered index while loading the table definition.
The reason is that the InnoDB metadata blob has a column length that
exceeds the maximum fixed-length column size.
Fix:
===
- InnoDB should treat long fixed-length columns as variable-length
fields that need external storage while initializing the field map
for the instant alter operation.
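A minimal sketch of the decision only; DICT_MAX_FIXED_COL_LEN is assumed to
be InnoDB's limit for fixed-length storage (768 bytes in current sources),
and the helper name is hypothetical.

  /* Assumed InnoDB limit for storing a column as a fixed-length field. */
  static const unsigned DICT_MAX_FIXED_COL_LEN= 768;

  /* A nominally fixed-length column longer than the limit has to be handled
     as a variable-length field (possibly stored externally) when building
     the field map for instant ALTER. */
  static inline bool needs_variable_length_handling(unsigned fixed_len)
  {
    return fixed_len > DICT_MAX_FIXED_COL_LEN;
  }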
Before this patch the crash occurred when a single-row dataset was used and
Item::remove_eq_conds() was called for HAVING. This function is not supposed
to be called after the elimination of multiple equalities.
To fix this problem, Item::val_int() is used instead of
Item::remove_eq_conds(). In this case the optimizer tries to evaluate the
condition for the single-row dataset and discovers an impossible HAVING
immediately, so the execution phase is skipped.
Approved by Igor Babaev <igor@maridb.com>
The re-design of the way the DELETE statement is handled, introduced by
the task MDEV-28883, added a regression caused by a missing reset of
the data member current_select->first_cond_optimization when handling
the DELETE statement, which results in memory leaks on the second execution
of the same DELETE statement in PS mode.
To fix the memory leaks, set the data member
current_select->first_cond_optimization
to false on finishing execution of the DELETE statement.
Problem:
========
- After the commit ada1074bb1 (MDEV-14398),
fil_crypt_set_encrypt_tables() iterates through all tablespaces to
fill the default_encrypt tables list. This was a trigger to
encrypt or decrypt when the key rotation age is set to 0. But import
tablespace calls fil_crypt_set_encrypt_tables() unnecessarily.
The motivation for the call is to signal the encryption threads.
Fix:
====
ha_innobase::discard_or_import_tablespace: Remove the
fil_crypt_set_encrypt_tables() call and add the imported tablespace
to the default encrypt list if necessary.