Adjust test after fixing the C/C.
On Windows, use --host=127.0.0.2 to fake "insecure" transport
with a TCP connection for test purposes. 127.0.0.2 is a loopback address
that can be used instead of the usual 127.0.0.1.
Unfortunately, this technique does not work the same on all *nixes,
notably not on BSDs or Solaris. Thus the default --host=localhost
remains "insecure" transport when TCP is used. But this is not that critical,
as the "self-signed" case is not nearly as annoying on *nixes as it is on Windows.
Changing the format in error messages:
- ER_PACKAGE_ROUTINE_IN_SPEC_NOT_DEFINED_IN_BODY
- ER_PACKAGE_ROUTINE_FORWARD_DECLARATION_NOT_DEFINED
from
"Subroutine 'db.pkg.f1' ..."
to the clearer:
"FUNCTION `db.pkg.f1` ..."
"PROCEDURE `db.pkg.p1` ..."
This reverts commit c37b2087b4.
In c37b20887, when re-binlogging a GTID event on a replica,
the thread_id from the primary is overwritten with the value
of the slave applier (SQL thread or parallel worker).
It should instead be the value of the original thread_id on the
master connection, both to help track temporary tables and to be
consistent with Query_log_event.
Reverting the commit to re-target 11.5, so we can re-test
with the corrected thread_id.
When displaying the ER_SP_DOES_NOT_EXIST error, use
Sp_handler::type_lex_cstring() to get the underlying
object type:
- PROCEDURE
- FUNCTION
- PACKAGE
- PACKAGE BODY
instead of hard-coded "FUNCTION or PROCEDURE".
* --ssl-verify-server-cert was not enabled explicitly, and
* CA was not specified, and
* fingerprint was not specified, and
* protocol is TCP, and
* no password was provided
Insecure passwordless logins are common in test environments; let's
not break them. Practically, it hardly makes sense to have strong
MitM protection if an attacker can simply log in without a password.
Covers mariadb, mariadb-admin, mariadb-binlog, mariadb-dump
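As a rough illustration of the conditions above, the relaxation could be
expressed like this (a sketch only; every name below is hypothetical, not
the actual client code):

    /* sketch: skip strict certificate verification only when every one of
       the conditions listed above holds (all names are hypothetical) */
    static bool can_skip_cert_verification(bool verify_explicit,
                                           const char *ssl_ca,
                                           const char *ssl_capath,
                                           const char *ssl_fp,
                                           bool is_tcp,
                                           const char *password)
    {
      return !verify_explicit &&            /* not enabled explicitly */
             !ssl_ca && !ssl_capath &&      /* no CA specified */
             !ssl_fp &&                     /* no fingerprint specified */
             is_tcp &&                      /* protocol is TCP */
             (!password || !password[0]);   /* no password provided */
    }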
enable ssl + ssl_verify_server_cert in the internal client too
* fix replication tests to disable master_ssl_verify_server_cert
because accounts are passwordless - except rpl.rpl_ssl1
* fix federated/federatedx/connect to disable SSL_VERIFY_SERVER_CERT
because they cannot configure an ssl connection
* fix spider to disable ssl_verify_server_cert, if configuration
says so, as spider _can_ configure an ssl connection
* memory leak in embedded test-connect
mysql->connector_fd is not an ssl option, so it shouldn't be freed in
mysql_ssl_free(), which frees ssl options, and does so only when
CLIENT_REMEMBER_OPTIONS is not set. mysql->connector_fd must be freed
when mysql->net.vio is closed and the fd is no longer valid
use SSL_VERIFY_PEER with an "always ok" callback,
instead of SSL_VERIFY_NONE with no callback.
The latter doesn't work correctly in wolfSSL: it accepts self-signed
certificates just fine (as OpenSSL does), but after that
SSL_get_verify_result() returns X509_V_OK, while OpenSSL returns an error
(e.g. X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN).
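A minimal sketch of that setup with the OpenSSL-compatible API (the callback
name is illustrative):

    #include <openssl/ssl.h>

    /* accept every certificate during the handshake; the real decision is
       made later, based on SSL_get_verify_result() and the other checks */
    static int always_ok_verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
    {
      (void) preverify_ok;
      (void) ctx;
      return 1;
    }

    /* during SSL_CTX setup */
    SSL_CTX_set_verify(ssl_ctx, SSL_VERIFY_PEER, always_ok_verify_cb);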
X509_check_host() and X509_check_ip_asc() exist in all
supported SSL libraries: in OpenSSL >= 1.0.2 and in the bundled WolfSSL.
And X509_free() handles NULL pointers all right.
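For example (a sketch; error handling trimmed):

    #include <openssl/x509v3.h>

    X509 *cert= SSL_get_peer_certificate(ssl);
    /* X509_check_host(): 1 = match, 0 = no match, <0 = error */
    int host_ok= cert && X509_check_host(cert, hostname, 0, 0, NULL) == 1;
    X509_free(cert);                       /* fine even when cert is NULL */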
* type of mi->ssl_verify_server_cert must be my_bool, because it's
passed by address to mysql_options(), and the latter expects my_bool
(see the sketch after this list)
* explicitly disable ssl in MYSQL if mi->ssl is 0
* remove dead code (`#ifdef NOT_USED`)
* remove useless casts and checks replacing empty strings with NULL
(new_VioSSLFd() does that internally)
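A sketch of the intended usage for the first item above:

    /* the variable really must be my_bool; mysql_options() reads a
       my_bool through the pointer it is given */
    my_bool verify= (my_bool) mi->ssl_verify_server_cert;
    mysql_options(mysql, MYSQL_OPT_SSL_VERIFY_SERVER_CERT, &verify);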
if the server is started with --ssl but with neither --ssl-key nor
--ssl-cert, let it automatically generate a self-signed certificate.
It's generated in memory only and never saved to disk.
when neither --ssl-key nor --ssl-cert was set, the error
was "Private key does not match the certificate public key";
it is changed to "Unable to get certificate"
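A compressed sketch of the in-memory generation described above, using
OpenSSL 1.1+ names (key generation, error handling and extension setup are
omitted; pkey and server_hostname are placeholders):

    X509 *x= X509_new();
    X509_set_version(x, 2);                               /* X509v3 */
    ASN1_INTEGER_set(X509_get_serialNumber(x), 1);
    X509_gmtime_adj(X509_getm_notBefore(x), 0);
    X509_gmtime_adj(X509_getm_notAfter(x), 60*60*24*365);
    X509_set_pubkey(x, pkey);                             /* freshly generated key */
    X509_NAME *name= X509_get_subject_name(x);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char*) server_hostname, -1, -1, 0);
    X509_set_issuer_name(x, name);              /* self-signed: issuer == subject */
    X509_sign(x, pkey, EVP_sha256());
    /* x and pkey are installed into the SSL_CTX directly, never written to disk */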
implement --ssl-fp and --ssl-fplist for all clients.
--ssl-fp takes one certificate fingerprint, for example,
00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33
--ssl-fplist takes a path to a file with one fingerprint per line.
if the server's certificate fingerprint matches ssl-fp or is found
in the file - the certificate is considered verified.
If the fingerprint is specified but doesn't match - the connection
is aborted regardless of --ssl-verify-server-cert
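A sketch of a single-fingerprint check (the 20-byte example above corresponds
to a SHA1 digest; hex formatting and --ssl-fplist handling are omitted, and
fingerprint_matches() is a hypothetical helper):

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len;
    X509 *cert= SSL_get_peer_certificate(ssl);
    int fp_ok= 0;
    if (cert && X509_digest(cert, EVP_sha1(), md, &md_len))
    {
      /* format md as "AA:BB:..." and compare case-insensitively with the
         value of --ssl-fp (or with every line of the --ssl-fplist file) */
      fp_ok= fingerprint_matches(md, md_len);
    }
    X509_free(cert);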
if the client enabled --ssl-verify-server-cert, then
the server certificate is verified as follows:
* if --ssl-ca or --ssl-capath were specified, the cert must have
a proper signature by the specified CA (or CA in the path)
and the cert's hostname must match the server's hostname.
If the cert isn't signed by the CA or the hostname is wrong - the
connection is aborted.
* if MARIADB_OPT_TLS_PEER_FP was used and the fingerprint matches,
the connection is allowed, if it doesn't match - aborted.
* If the connection uses unix socket or named pipes - it's allowed.
(consistent with server's --require-secure-transport behavior)
otherwise the cert is still in doubt; we don't know whether we can
trust it or whether there's an active MitM in progress.
* If the user has provided no password or the server requested an
authentication plugin that sends the password in cleartext -
the connection is aborted.
* Perform the authentication. If the server accepts the password,
it'll send SHA2(scramble || password hash || cert fingerprint)
with the OK packet.
* Verify the SHA2 digest, if it matches - the connection is allowed,
otherwise it's aborted.
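Conceptually, the final check on the client side looks roughly like this
(a sketch only; the exact field order, encodings and buffer handling are
those of the actual implementation, and the variable names are placeholders):

    #include <openssl/sha.h>

    /* the server sends SHA256(scramble || password hash || cert fingerprint)
       with the OK packet; the client recomputes it and compares */
    unsigned char expected[SHA256_DIGEST_LENGTH];
    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    SHA256_Update(&ctx, scramble, scramble_len);
    SHA256_Update(&ctx, password_hash, password_hash_len);
    SHA256_Update(&ctx, cert_fingerprint, cert_fingerprint_len);
    SHA256_Final(expected, &ctx);
    if (memcmp(expected, digest_from_ok_packet, sizeof(expected)))
      abort_connection();                       /* hypothetical helper */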
not default_mysqld.cnf. The latter has only server settings;
it lacks the mtr-specific client configuration.
Except for spider, which doesn't use the mysqld.1 server
that default_my.cnf starts automatically.
Spider tests have to include both default_mysqld.cnf and
default_client.cnf
it's for client auth plugins only; server auth plugins should never
return it, because they cannot send a correct OK packet.
(The OK packet is quite complex and carries a lot of information that
only the server knows.)
This commit addresses multiple server shutdown problems observed on macOS,
Solaris, and FreeBSD:
1. Corrected a non-portable assumption where socket shutdown was expected
to wake up poll() with listening sockets in the main thread.
Use the more robust self-pipe trick to wake up poll(), by writing to the
pipe's write end (see the sketch after this list).
2. Fixed a random crash on macOS from pthread_kill(signal_handler)
when the signal_handler was detached and the thread had already exited.
Use the more robust `kill(getpid(), SIGTERM)` to wake up the signal handler
thread.
3. Made sure that the signal handler thread always exits once `abort_loop`
is set, and that it also calls `my_thread_end()` and clears
`signal_thread_in_use` when exiting.
This fixes the "1 thread did not exit" warning from `my_global_thread_end()`
seen on FreeBSD/macOS when the process is terminated via a signal.
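A minimal sketch of the self-pipe pattern referenced in item 1 (names are
illustrative):

    #include <poll.h>
    #include <unistd.h>

    static int wakeup_pipe[2];   /* created once with pipe() at server startup */

    /* shutdown path: one write reliably wakes up poll() on every platform */
    static void wake_up_connect_loop(void)
    {
      char c= 0;
      (void) write(wakeup_pipe[1], &c, 1);
    }

    /* in handle_connection_sockets(): poll the pipe's read end together with
       the listening sockets, e.g.
         fds[n_listen].fd= wakeup_pipe[0];
         fds[n_listen].events= POLLIN;
         poll(fds, n_listen + 1, -1);
       and treat POLLIN on wakeup_pipe[0] as "stop accepting connections" */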
Additionally, the shutdown code underwent light refactoring
for better readability and maintainability:
- Modified `break_connect_loop()` to no longer wait for the main thread,
aligning behavior with Windows (since 10.4).
- Removed dead code related to the unused `USE_ONE_SIGNAL_HAND`
preprocessor constant.
- Eliminated support for `#ifndef HAVE_POLL` in `handle_connection_sockets`.
This code has also been dead since 10.4.
This is done for symmetry with mariadb-dump, which does not use threads
but allows parallelism via --parallel
The traditional --use-threads can still be used; it is synonymous
with --parallel.
- --parallel=N with or without --single-transaction
- Error cases (too many connections, emulate error on one connection)
- Windows specific test for named pipe connections
Parallelism is achieved by using mysql_send_query on multiple connections
without waiting for results, and using IO multiplexing (poll/IOCP) to
wait for completions.
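A sketch of that pattern on POSIX (IOCP replaces poll() on Windows; error
handling, result processing and the surrounding declarations are omitted):

    /* fire the queries without waiting for the results */
    for (i= 0; i < n_conn; i++)
      mysql_send_query(conn[i], query[i], (unsigned long) strlen(query[i]));

    /* wait until connections become readable, then fetch their results */
    for (i= 0; i < n_conn; i++)
    {
      fds[i].fd= mysql_get_socket(conn[i]);
      fds[i].events= POLLIN;
    }
    poll(fds, n_conn, -1);
    for (i= 0; i < n_conn; i++)
      if (fds[i].revents & POLLIN)
        mysql_read_query_result(conn[i]);   /* followed by mysql_store_result() */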
Refresh libmariadb to pick up CONC-676 (fixes for IOCP use with named pipe)
- make connect_to_db() return MYSQL*; we'll reuse the function for the
connection pool.
- Remove variable 'mysql_connection', duplicated by variable 'mysql'
- do not attempt to start the slave if the connection did not succeed,
and fix mysqldump.result
Improve the performance of slave connect using B+-Tree indexes on each binlog
file. The index allows fast lookup of a GTID position to the corresponding
offset in the binlog file, as well as lookup of a position to find the
corresponding GTID position.
This eliminates a costly sequential scan of the starting binlog file
to find the GTID starting position when a slave connects. This is
especially costly if the binlog file is not cached in memory (IO
cost), or if it is encrypted or a lot of slaves connect simultaneously
(CPU cost).
The size of the index files is generally less than 1% of the binlog data,
so it is not expected to be an issue.
Most of the work writing the index is done as a background task, in
the binlog background thread. This minimises the performance impact on
transaction commit. A simple global mutex is used to protect index
reads and (background) index writes; this is fine as slave connect is
a relatively infrequent operation.
Here are the user-visible options and status variables. The feature is on by
default and is expected to need no tuning or configuration for most users.
binlog_gtid_index
On by default. Can be used to disable the indexes for testing purposes.
binlog_gtid_index_page_size (default 4096)
Page size to use for the binlog GTID index. This is the size of the nodes
in the B+-tree used internally in the index. A very small page-size (64 is
the minimum) will be less efficient, but can be used to stress the
BTree-code during testing.
binlog_gtid_index_span_min (default 65536)
Control sparseness of the binlog GTID index. If set to N, at most one
index record will be added for every N bytes of binlog file written.
This can be used to reduce the number of records in the index, at
the cost only of having to scan a few more events in the binlog file
before finding the target position.
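The sparseness rule above can be sketched as (the helper names are
hypothetical):

    /* called from the binlog background thread for each candidate record */
    if (current_offset - last_indexed_offset >= binlog_gtid_index_span_min)
    {
      gtid_index_append(current_gtid_binlog_state, current_offset);
      last_indexed_offset= current_offset;
    }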
Two status variables are available to monitor the use of the GTID indexes:
Binlog_gtid_index_hit
Binlog_gtid_index_miss
The "hit" status increments for each successful lookup in a GTID index.
The "miss" increments when a lookup is not possible. This indicates that the
index file is missing (eg. binlog written by old server version
without GTID index support), or corrupt.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This patch augments Gtid_log_event with the user thread-id.
In particular that compensates for the loss of this info in
Rows_log_events.
Gtid_log_event::thread_id becomes visible in mysqlbinlog output, as a
64-bit unsigned integer, like:
#231025 16:21:45 server id 1 end_log_pos 537 CRC32 0x1cf1d963 GTID 0-1-2 ddl thread_id=10
While the size of the Gtid event has grown by 8-9 bytes,
replication between OLD <-> NEW is not affected by it.
This work was started by the late Sujatha Sivakumar.
Brandon Nesterenko took it over, reviewed initial patches and extended
the work.
Reviewed-by: <andrei.elkin@mariadb.com>
Summary
=======
With FULL_NODUP mode, the before image includes all columns and the after
image includes only the changed columns. Flashback will swap the
values of changed columns from the after image to the before image.
For example:
BI: c1, c2, c3_old, c4_old
AI: c3_new, c4_new
flashback will reconstruct the before and after images to
BI: c1, c2, c3_new, c4_new
AI: c3_old, c4_old
Implementation
==============
When parsing the before and after images, the position and length of
the fields are collected into ai_fields and bi_fields, if it is an
Update_rows_event and the after image doesn't include all columns.
The changed fields are swapped between bi_fields and ai_fields.
Then the before image and after image are recreated from
bi_fields and ai_fields. The null bit is set to 1 if the
field is NULL, otherwise the null bit is 0.
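A sketch of the swap step (the structure and helpers here are illustrative
only, not the actual data structures):

    /* illustrative only: one slice per column of the row */
    struct field_slice { size_t pos, len; bool is_null; };

    /* every column present in the after image is a changed column, so its
       slice is exchanged with the corresponding before-image slice */
    for (uint col= 0; col < n_cols; col++)
      if (after_image_has_column(col))            /* hypothetical helper */
      {
        field_slice tmp= bi_fields[col];
        bi_fields[col]= ai_fields[col];
        ai_fields[col]= tmp;
      }
    /* both images are then re-serialized from bi_fields / ai_fields,
       setting the null bit for every slice marked is_null */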
It also optimized flashback a little bit.
- calc_row_event_length is used instead of print_verbose_one_row
- swap_buff1 and swap_buff2 are removed.
BASE 62 uses 0-9, A-Z and then a-z to give the numbers 0-61. This patch
increases the range of the string functions to cover this.
Based on ideas and tests in PR #2589, but re-written into the charset
functions.
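The digit mapping base 62 needs (0-9, then A-Z, then a-z) can be sketched as:

    /* map one base-62 digit character to its value 0..61, or -1 if invalid;
       unlike bases up to 36, upper and lower case are distinct digits here */
    static int base62_digit(char c)
    {
      if (c >= '0' && c <= '9') return c - '0';
      if (c >= 'A' && c <= 'Z') return c - 'A' + 10;
      if (c >= 'a' && c <= 'z') return c - 'a' + 36;
      return -1;
    }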
Includes a fix by Sergei; UBSAN complained:
ctype-simple.c:683:38: runtime error: negation of -9223372036854775808
cannot be represented in type 'long long int'; cast to an unsigned
type to negate this value to itself
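The usual remedy for this class of warning is to do the negation in unsigned
(modular) arithmetic, for example:

    /* absolute value of a long long, well-defined even for LLONG_MIN */
    static unsigned long long ull_abs(long long v)
    {
      return v < 0 ? 0ULL - (unsigned long long) v : (unsigned long long) v;
    }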
Co-authored-by: Weijun Huang <huangweijun1001@gmail.com>
Co-authored-by: Sergei Golubchik <serg@mariadb.org>
Field_new_decimal::store_value(const my_decimal*, int*)
Analysis
========
When the rpl applier is unpacking a before-row image, Field::reset() will be
called before setting a field to null if the null bit of the field is set in
the row image. For Field_new_decimal::reset(), it calls
Field_new_decimal::store_value() to reset the value. store_value() asserts
that the field is in the write_set bitmap, since it assumes the field is
being updated.
But that is not true for a row image generated in FULL_NODUP
mode. In that mode, the before image includes all fields and the after
image includes only updated fields.
Fix
===
When unpacking binlog row images, the assertion is meaningless.
So the field being unpacked is temporarily marked in the write_set to
avoid the assertion failure.
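One possible shape of that workaround, using the server's bitmap helpers
(a sketch; the actual patch may differ):

    /* temporarily pretend the field is being written, so that
       Field_new_decimal::store_value() doesn't assert during reset() */
    bool was_set= bitmap_is_set(table->write_set, field->field_index);
    if (!was_set)
      bitmap_set_bit(table->write_set, field->field_index);
    field->reset();
    if (!was_set)
      bitmap_clear_bit(table->write_set, field->field_index);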