Remove the extra loop over the result set and don't check for INVISIBLE columns
when not needed (XML output ignores complete_insert, and old servers cannot
have INVISIBLE columns).
To prevent an ASAN heap-use-after-poison in the MDEV-16549 part of
./mtr --repeat=6 main.derived,
the initialization of Name_resolution_context was cleaned up.
- Add a `replicate_rewrite_db` variable that accepts comma-separated
key-value pairs.
- Note that the option `OPT_REPLICATE_REWRITE_DB` already existed in `mysqld.h`
since commit 23d8586dbf
Reviewer: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Continue the changes started in 19af1890, replacing sprintf(buf, ...)
with snprintf(buf, sizeof(buf), ...), specifically in the "easy" cases where buf
is allocated with a size known at compile time.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
- Added one neutral and 22 tailored (language specific) collations based on
Unicode Collation Algorithm version 14.0.0.
Collations were added for Unicode character sets
utf8mb3, utf8mb4, ucs2, utf16, utf32.
Every tailoring was added with four accent and case
sensitivity flag combinations, e.g.:
* utf8mb4_uca1400_swedish_as_cs
* utf8mb4_uca1400_swedish_as_ci
* utf8mb4_uca1400_swedish_ai_cs
* utf8mb4_uca1400_swedish_ai_ci
and their _nopad_ variants:
* utf8mb4_uca1400_swedish_nopad_as_cs
* utf8mb4_uca1400_swedish_nopad_as_ci
* utf8mb4_uca1400_swedish_nopad_ai_cs
* utf8mb4_uca1400_swedish_nopad_ai_ci
- Introducing the concept of contextually typed named collations:
CREATE DATABASE db1 CHARACTER SET utf8mb4;
CREATE TABLE db1.t1 (a CHAR(10) COLLATE uca1400_as_ci);
The idea is that there is no need to specify the character set prefix
in the new collation names. It's enough to type just the suffix
"uca1400_as_ci"; the character set is taken from the context.
In the above example script, the context character set is utf8mb4,
so the CREATE TABLE will create a column with the collation
utf8mb4_uca1400_as_ci.
Short collation names can be used in any part of the SQL syntax
where the COLLATE clause is understood, for example:
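A sketch for illustration (assuming the connection character set is utf8mb4;
the ALTER statement reuses the table from the example above):
  SET NAMES utf8mb4;
  SELECT 'a' = 'A' COLLATE uca1400_ai_ci;  -- resolves to utf8mb4_uca1400_ai_ci
  ALTER TABLE db1.t1 MODIFY a CHAR(10) COLLATE uca1400_as_cs;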
- New collations are displayed only once
(not once per character set) by these statements:
SELECT * FROM INFORMATION_SCHEMA.COLLATIONS;
SHOW COLLATION;
For example, all these collations:
- utf8mb3_uca1400_swedish_as_ci
- utf8mb4_uca1400_swedish_as_ci
- ucs2_uca1400_swedish_as_ci
- utf16_uca1400_swedish_as_ci
- utf32_uca1400_swedish_as_ci
have just one entry in INFORMATION_SCHEMA.COLLATIONS and SHOW COLLATION,
with COLLATION_NAME equal to "uca1400_swedish_as_ci", which is the suffix
without the character set name:
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+
| COLLATION_NAME        |
+-----------------------+
| uca1400_swedish_as_ci |
+-----------------------+
Note, the behaviour of old collations did not change.
Non-unicode collations (e.g. latin1_swedish_ci) and
old UCA-4.0.0 collations (e.g. utf8mb4_unicode_ci)
are still displayed with the character set prefix, as before.
- The structure of the table INFORMATION_SCHEMA.COLLATIONS was changed.
The NOT NULL constraint was removed from these columns:
- CHARACTER_SET_NAME
- ID
- IS_DEFAULT
and from the corresponding columns in SHOW COLLATION.
For example:
SELECT COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+--------------------+------+------------+
| COLLATION_NAME        | CHARACTER_SET_NAME | ID   | IS_DEFAULT |
+-----------------------+--------------------+------+------------+
| uca1400_swedish_as_ci | NULL               | NULL | NULL       |
+-----------------------+--------------------+------+------------+
The NULL value in these columns now means that the collation
is applicable to multiple character sets.
The behaviour of old collations did not change.
Make sure your client programs can handle NULL values in these columns.
- The structure of the table
INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY was changed.
Three new NOT NULL columns were added:
- FULL_COLLATION_NAME
- ID
- IS_DEFAULT
New collations have multiple entries in COLLATION_CHARACTER_SET_APPLICABILITY.
The column COLLATION_NAME contains the collation name without the character
set prefix. The column FULL_COLLATION_NAME contains the collation name with
the character set prefix.
Old collations have the full collation name in both FULL_COLLATION_NAME and
COLLATION_NAME.
SELECT COLLATION_NAME, FULL_COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY
WHERE FULL_COLLATION_NAME RLIKE '^(utf8mb4|latin1).*swedish.*ci$';
+-----------------------------+-------------------------------------+--------------------+------+------------+
| COLLATION_NAME              | FULL_COLLATION_NAME                 | CHARACTER_SET_NAME | ID   | IS_DEFAULT |
+-----------------------------+-------------------------------------+--------------------+------+------------+
| latin1_swedish_ci           | latin1_swedish_ci                   | latin1             |    8 | Yes        |
| latin1_swedish_nopad_ci     | latin1_swedish_nopad_ci             | latin1             | 1032 |            |
| utf8mb4_swedish_ci          | utf8mb4_swedish_ci                  | utf8mb4            |  232 |            |
| uca1400_swedish_ai_ci       | utf8mb4_uca1400_swedish_ai_ci       | utf8mb4            | 2368 |            |
| uca1400_swedish_as_ci       | utf8mb4_uca1400_swedish_as_ci       | utf8mb4            | 2370 |            |
| uca1400_swedish_nopad_ai_ci | utf8mb4_uca1400_swedish_nopad_ai_ci | utf8mb4            | 2372 |            |
| uca1400_swedish_nopad_as_ci | utf8mb4_uca1400_swedish_nopad_as_ci | utf8mb4            | 2374 |            |
+-----------------------------+-------------------------------------+--------------------+------+------------+
- Other INFORMATION_SCHEMA queries:
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLUMNS;
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.PARAMETERS;
SELECT TABLE_COLLATION FROM INFORMATION_SCHEMA.TABLES;
SELECT DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.ROUTINES;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.EVENTS;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.EVENTS;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.ROUTINES;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.ROUTINES;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.VIEWS;
display full collation names, including the character set prefix,
for all collations, including the new ones.
The corresponding SHOW commands also display full collation names
in collation-related columns:
SHOW CREATE TABLE t1;
SHOW CREATE DATABASE db1;
SHOW TABLE STATUS;
SHOW CREATE FUNCTION f1;
SHOW CREATE PROCEDURE p1;
SHOW CREATE EVENT ev1;
SHOW CREATE TRIGGER tr1;
SHOW CREATE VIEW v1;
These INFORMATION_SCHEMA queries and SHOW statements may change in
the future, to display short collation names.
mysqlimport starts many worker threads. When one of the workers
encounters an error, it frees global memory and calls exit().
It suppresses the memory leak detector because, as the comment says,
"dirty exit, some threads are still running"; indeed, it cannot
free the memory owned by other threads.
But precisely because some threads are still running, they
might still use this global memory, so it must not be freed.
Fix: if we know that some threads are still running and accept
that we cannot free all memory anyway, let's not free global
allocations either.
This is particularly important for Azure where there is no
MyISAM support in their MariaDB cloud product.
Like mysqldump does, a view can satisfy the requirement just like a
table, without constraints. Views in frm files are stored in text form
and don't have column limits.
Thanks to Thomas Casteleyn for the suggestion.
A global non-default max-statement-time can kill the queries mysqldump
runs when doing a backup, making the dump fail.
To solve this, add a max-statement-time option to mysqldump, defaulting to 0
(unlimited time).
Also, like mariabackup, set the session wait_timeout=DEFAULT (28800). The
time/processing between mysqldump queries isn't ever expected to get that
close, but let's adopt the standard of mariabackup as no-one has
challenged it as having a detrimental effect.
Reviewer and test case author: Daniel Black
This avoids LF->CRLF conversion by the C runtime, which historically has
been rather buggy (see MDEV-9409).
Disabling text mode also fixes --binary-mode in the command line client
so that it works the same on Windows as it does elsewhere.
The user-visible effect is that some text files, e.g. the output of mysqldump
or mysqlbinlog, will not have CRLF end-of-lines, but LF. That should be
acceptable, as even Notepad can read these Unix EOLs since 2018
(on older Windows, Wordpad can).
Leave the error log in text (CRLF) mode for now, for the sake of old Windows.
Apparently newer libedit is readline-compatible enough
to be detected as readline, with USE_NEW_READLINE_INTERFACE defined
and USE_LIBEDIT_INTERFACE not defined.
Let's set the locale unconditionally, independently of the
readline/libedit variant. It's already happening anyway now,
unless one specifies --default-character-set explicitly.
For compatibility reasons, add the option to the MariaDB client without
any functional changes besides simply accepting the option and emitting
a warning that it is obsolete.
In MySQL this security-related option is compulsory in certain use
cases. When users switch to MariaDB, this client command that used to
work starts failing without a sensible error message. In the worst case,
users resort to re-installing the mysql client from MySQL.
In MariaDB the option is obsolete and should simply be ignored. Users
however don't have any opportunity to learn that unless the client
program tells them so.
Before:
mysql --enable-cleartext-plugin ...
mysql: unknown option '--enable-cleartext-plugin'
(program terminates)
After:
mysql --enable-cleartext-plugin ...
WARNING: option '--enable-cleartext-plugin' is obsolete.
(program executes)
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
or slow query log when log_output=TABLE.
When this happens, we temporarily disable logging by changing log_output
until we've created the general_log and slow_log tables again.
Move </database> in XML mode until after the transaction_registry.
The general_log and slow_log tables were moved to be dumped first, so
that the time the general/slow query logs are disabled is minimal.
Previously the correct SQL mode for a stored routine or
package was only set before doing the CREATE part. This
worked out for PROCEDUREs and FUNCTIONs, but with ORACLE
mode specific PACKAGEs the DROP also only works in ORACLE
mode.
Moving the setting of the sql_mode a few lines up, so that it happens
right before the DROP statement is written, fixes this.
New Feature:
============
Extend mariadb-binlog command-line tool to allow for filtering
events using GTID domain and server ids. The functionality mimics
that of a replica server’s DO_DOMAIN_IDS, IGNORE_DOMAIN_IDS, and
IGNORE_SERVER_IDS from CHANGE MASTER TO. For completeness, this
patch additionally adds the option --do-server-ids as an alias for
--server-id, which now accepts a list of server ids instead of a
single one.
Example usage:
mariadb-binlog --do-domain-ids=2,3,4 --do-server-ids=1,3
master-bin.000001
Functional Notes:
1. --do-domain-ids cannot be combined with --ignore-domain-ids
2. --do-server-ids cannot be combined with --ignore-server-ids
3. A domain id filter can be combined with a server id filter
4. When any new filter options are combined with the
--gtid-strict-mode option, events from excluded domains/servers are
not validated.
5. Domain/server id filters can be combined with GTID ranges (i.e.
specifications of --start-position and --stop-position). However,
because the --stop-position option implicitly undertakes filtering
to only output events within its range of domains, when combined
with --do-domain-ids or --ignore-domain-ids, output will consist of
the intersection between the filters. Specifically, with
--do-domain-ids and --stop-position, only events with domain ids
present in both argument lists will be output. Conversely, with
--ignore-domain-ids and --stop-position, only events with domain ids
present in the --stop-position and absent from the
--ignore-domain-ids options will be output.
Reviewed By
============
Andrei Elkin <andrei.elkin@mariadb.com>
The reason for this fix was that when I tried to run mysql_upgrade
at home to update an old 10.5 installation, mysql_upgrade failed
with warnings about mariadb.sys user not existing.
If the server was started with --skip-grants, there would be no warnings
from mysql_upgrade, but in some cases running mysql_upgrade again could
produce new warnings.
The reason for the warnings was that any access to the mysql.user view
produces a warning if the mariadb.sys user does not exist.
Fixed with the following changes:
- Disable warnings about mariadb.sys user not existing
- Don't overwrite old mariadb.sys entries in tables_priv and global_priv
- Ensure that tables_priv has an entry for mariadb.sys if the user exists.
This fixes an issue that tables_priv would not be updated if there
was a failure directly after global_priv was updated.
Fix a possible crash on my_free() due to the use of strdup() versus
my_strdup(), and a memory leak.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
========
When using mariadb-binlog with --raw and --stop-never, events from
the master's currently active log file should be written to their
respective log file specified by --result-file, and shown on-disk.
There is a bug where mariadb-binlog does not flush the result file
to disk when new events are received
Solution:
========
Add a function call to flush mariadb-binlog’s result file after
receiving an event in --raw mode.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This patch also fixes:
MDEV-27690 Crash on `CHARACTER SET csname COLLATE DEFAULT` in column definition
MDEV-27853 Wrong data type on column `COLLATE DEFAULT` and table `COLLATE some_non_default_collation`
MDEV-28067 Multiple conflicting column COLLATE clauses are not rejected
MDEV-28118 Wrong collation of `CAST(.. AS CHAR COLLATE DEFAULT)`
MDEV-28119 Wrong column collation on MODIFY + CONVERT
This commit contains a test for reproducing the issue in MDEV-27649,
where a transaction executing a prepared statement is BF aborted.
The scenario in MDEV-27649 has a transaction which has prepared a PS,
but not yet executed it, and this transaction is then BF aborted in this state.
When the BF-aborted transaction tries to execute the PS, it will receive a
deadlock error. But when it tries to execute the PS a second time, the node
crashes.
The mtr test galera.galera_bf_abort_ps_bind exercises this scenario.
However, the mtr test platform does not have a mechanism to control the
execution of a PS in the required detail.
For this purpose, mysqltest.cc was extended to contain 4 new commands:
PS_prepare - to prepare a prepared statement
PS_bind - to bind values for parameters for the PS
PS_execute - to execute the PS
PS_close - to close the PS
The support for controlling prepared statements in mtr scripts is quite
minimal in this commit. Limitations are:
* only one PS can be used by a connection at a time
* only input parameters can be bound for the PS
* only varchar, integer or float parameters can be bound
added the result
fixes
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Added the ability to disable/enable view-protocol in tests
(--disable_view_protocol/--enable_view_protocol).
When the option --disable_view_protocol is used, util connections are closed.
Added a new test for checking view-protocol.
Removed all dependencies on command line argument positions in
an array (this kind of code should never have been written).
Instead, use option names, which are stable.
Reviewer: Sergei Golubchik
This commit implements two phase binloggable ALTER.
When the new option
@@session.binlog_alter_two_phase
is set, an ALTER query gets logged in two parts: the START ALTER and the
COMMIT or ROLLBACK ALTER. START ALTER is written to the binlog as soon as
the necessary locks have been acquired for the table. The timing is
such that any concurrent DMLs that update the same table are either
committed, and thus logged into the binary log having done their work on
the old version of the table, or will be queued for execution on its new
version.
The completing COMMIT or ROLLBACK ALTER is written at the very point
where a normal "single-piece" ALTER would be written, that is, after most of
the query work is done. When the result is positive, COMMIT ALTER is
written; otherwise ROLLBACK ALTER is written together with the specific
error that happened after the START ALTER phase.
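A minimal illustration (a sketch; the table is hypothetical and the exact
binlog text is not reproduced here):
  SET @@session.binlog_alter_two_phase = 1;
  ALTER TABLE t1 ADD COLUMN b INT;
  -- the ALTER is now logged as two Query events: a START ALTER part,
  -- followed by a COMMIT ALTER (or ROLLBACK ALTER) part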
Replication of two-phase binloggable ALTER is
cross-version safe. Specifically, an old slave merely does not
recognize the START ALTER part, while still being able to process and
memorize its GTID.
A two-phase logged ALTER is read from the binlog by mysqlbinlog to produce
BINLOG 'string', where 'string' contains a base64-encoded
Query_log_event holding either the start part of the ALTER or a
completion part. The query details can be displayed with the `-v` flag,
similarly to ROW format events. Note that mysqlbinlog output containing
parts of a two-phase binloggable ALTER is processed correctly only by a
server that supports binlog_alter_two_phase.
@@log_warnings > 2 can reveal details of binlogging and slave-side
processing of the ALTER parts.
The current commit also carries fixes to the following list of
reported bugs:
MDEV-27511, MDEV-27471, MDEV-27349, MDEV-27628, MDEV-27528.
Thanks to all the people involved in the early discussion of the feature,
including Kristian Nielsen, and to those who helped to design, implement and
test it: Sergei Golubchik, Andrei Elkin, who took on the burden of completing
the implementation, Sujatha Sivakumar, Brandon
Nesterenko, Alice Sherepa, Ramesh Sivaraman, Jan Lindstrom.
New Feature:
===========
This commit extends the mariadb-binlog capabilities to allow events
to be filtered by GTID ranges. More specifically, the
--start-position and --stop-position arguments have been extended to
accept values formatted as a list of GTID positions, e.g.
--start-position=0-1-0,1-2-55. The following specific capabilities
are addressed:
1) GTIDs can be used to filter results on local binlog files
2) GTIDs can be used to filter results from remote servers
3) Implemented --gtid-strict-mode that ensures the GTID event
stream in each domain is monotonically increasing
4) Added new level of verbosity in mysqlbinlog -vvv to print
additional diagnostic information/warnings about invalid GTID
states
5) For a given GTID range, its start and stop position parameters
aim to mimic the behaviors of
CHANGE MASTER TO MASTER_USE_GTID=slave_pos and
START SLAVE UNTIL master_gtid_pos=<GTID>, respectively. In
particular, the start-position list expresses a gtid state of
the server, similarly to how @@global.gtid_slave_pos expresses
the gtid state of a slave server when connecting to a master
with MASTER_USE_GTID=slave_pos.
The GTID start-position list is exclusive and the
stop-position list is inclusive. This allows users to receive
events strictly after those that they already have, and is
useful in cases of point in (logical) time recovery including
1) events were received out of order and should be re-sent, or
2) specifying the gtid state of a slave to get events newer
than their current state. If a seq_no is 0 for start-position,
it means to include the entirety of the domain. If a seq_no is
0 for stop-position, it means to exclude all events from that
domain. The GTIDs provided in a start position argument must
match with the GTID state of the first processed log (i.e.
those listed in the Gtid_list event). If a stop position is
provided, the events that are output are limited to only those
with domain ids listed in the argument. When specifying
combinations of start and stop positions, the following
behaviors are expected:
[--start-position without --stop-position]: Events that have domain
ids in the start position are output if their seq_no occurs after
the respective start position. Events with domain ids that are
unspecified in the start position list are also output. Note that if
the Gtid_list event of the first binary log is populated (i.e.
non-empty), each domain in the Gtid_list must be present in the
start-position list with a seq_no at or after the listed value.
This behavior mimics how a slave only processes events after the
state provided by @@global.gtid_slave_pos when connecting to a
master with CHANGE MASTER TO MASTER_USE_GTID=slave_pos.
[--stop-position without --start-position]: Output is limited to
only events with both 1) domain ids that are present in the given
stop position list and 2) seq_nos that are less than or equal to
their respective stop GTID. Once all GTIDs in the stop position
list have been processed, the program will stop processing log
files. This behavior mimics how
START SLAVE UNTIL master_gtid_pos=<G>
has a slave only process events with domain ids present in G with
their seq_nos at or before the respective gtid.
[--start-position and --stop-position]: Output consists of the
intersection between the events permitted by both the start and stop
position rules. More concretely, the output can be defined by a
union of the following rules:
1. For domains which exist in both the start and stop position
lists, the events which exist in-between these positions
(exclusive start, inclusive stop) are output
2. For all other events, the rules of
[--stop-position without --start-position] are followed
This is due to the implicit filtering within each individual rule.
Even though the start position rule always includes events from
unspecified domains, the stop position rule takes precedence because
it always excludes events from unspecified domains. In other words,
events which the start position rule would have included would then
always be excluded by the stop position rule.
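An illustrative combined invocation (the GTID values and file name are only
examples):
  mariadb-binlog --start-position=0-1-100 --stop-position=0-1-200 master-bin.000001
Here only events of domain 0 with 100 < seq_no <= 200 are output; events from
any other domain are excluded by the stop-position rule described above.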
[neither --start-position nor --stop-position]: Events are not
omitted based on GTID positioning; however, --gtid-strict-mode and
-vvv can still analyze gtid correctness for warning and error
reporting.
[repeated specification of --start-position or --stop-position]:
Subsequent specifications of start and stop positions completely
override previous ones. E.g., if invoked as
mysqlbinlog --start-position=<G1> --start-position=<G2> ...
All GTIDs specified in G1 are ignored and only those specified in G2
are used for the start position.
A few additional notes:
1) this commit squashes together the commits:
f4319661120e-78a9d49907ba
2) Changed rpl.rpl_blackhole_row_annotate test because it has
out of order GTIDs in its binlog, so I added
--skip-gtid-strict-mode
3) After all binlog events have been written, the session server
id and domain id are reset to their values in the global state
Reviewed By:
===========
Andrei Elkin: <andrei.elkin@mariadb.com>
MDEV-27107 prevent two mariadb-upgrade running in parallel
MDEV-27279 mariadb_upgrade add --check-if-upgrade-is-needed /
restrict tests to major version
The code is based on a pull request from Daniel Black, but with several
extensions.
- mysql_upgrade now locks the mysql_upgrade file with my_lock()
(Advisory record locking). This ensures that two mysql_upgrades
cannot be run in parallel.
- Added --check-if-upgrade-is-needed to mysql_upgrade. This will return
0 if one has to run mysql_upgrade.
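Illustrative shell usage (assuming the exit-code semantics described above,
where 0 means an upgrade is needed):
  mariadb-upgrade --check-if-upgrade-is-needed && mariadb-upgrade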
Other changes:
- mysql_upgrade will now immediately exit if the major version and minor
version (the first two numbers in the version string) are the same as in the
last run.
Before this change, mysql_upgrade was run whenever the version string
differed from that of the last run.
- Better messages when there is no need to run mysql_upgrade.
- mysql_upgrade --verbose now prints out a lot more information about
the version checking.
- mysql_upgrade --debug now uses default debug arguments if --debug is
given no argument
- "MySQL" is renamed to MariaDB in the messages
- mysql_upgrade version increased to 2.0
Notes
Verifying "prevent two mariadb-upgrade running in parallel" was
done in a debugger as it would be a bit complex to do that in mtr.
Reviewer: Daniel Black <daniel@mariadb.org>
Translate username, password and database from UTF8 into the desired charset
if a non-auto default-character-set was used, on Windows 10 1903.
This change is implemented only in the command line client, and mainly to
allow users with non-UTF8 passwords to log in.
The user is supposed to use the same charset that was used when setting the
password (usually the console codepage, if it was set from the CLI).
Add a test to document the behavior.
If someone for whatever reason uses --default-character-set=cp850,
this will avoid incorrect display and inserting incorrect data.
Adjusting the console codepage sometimes also needs to happen with
--default-charset=auto on older Windows. This is because autodetection
is not always exact. For example, the console codepage on US editions of
Windows is 437. The client autodetects it as cp850, a rather loose
approximation, given 46 code point differences. We change the console
codepage to cp850, so that there is no discrepancy.
That fix is currently Windows-only, and serves people who used a combination
with chcp to achieve a WYSIWYG effect (although this would most likely have
been used with utf8 in the past).
Now, --default-character-set is a replacement for that.
Fix fs_character_set() detection of the current codepage.
Corresponding Windows bug: https://github.com/microsoft/terminal/issues/4551
Use ReadConsoleW instead and convert to the console's input codepage as a
workaround.
Also, disable VT sequences in the console output: as we do not know what
type of data comes with SELECT, we do not want VT escapes there.
Remove my_cgets().
This enables optimizer_trace output for the next SQL command.
Identical to what one would get by doing:
- Store the value of @@optimizer_trace
- Set @@optimizer_trace="enabled=on"
- Run query
- SELECT * from OPTIMIZER_TRACE
- Restore value of @@optimizer_trace
This is a great time saver when one wants to quickly check the optimizer
trace for a query in an mtr test.
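The manual sequence being replaced looks roughly like this (a sketch; the
query and the user variable used for save/restore are placeholders):
  SET @saved_trace = @@optimizer_trace;
  SET optimizer_trace = 'enabled=on';
  SELECT * FROM t1 WHERE a = 1;   -- the query to examine
  SELECT * FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
  SET optimizer_trace = @saved_trace;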
https://jira.mariadb.org/browse/MDEV-26221
my_sys DYNAMIC_ARRAY and DYNAMIC_STRING inconsistency
The DYNAMIC_STRING uses size_t for sizes, but DYNAMIC_ARRAY used uint.
This patch adjusts DYNAMIC_ARRAY to use size_t like DYNAMIC_STRING.
As the MY_DIR member number_of_files is copied from a DYNAMIC_ARRAY,
this is changed to be size_t.
As MY_TMPDIR members 'cur' and 'max' are copied from a DYNAMIC_ARRAY,
these are also changed to be size_t.
The lists of plugins and stored procedures use DYNAMIC_ARRAY,
but their APIs assume a size of 'uint'; these are unchanged.
This is a documentation-only patch to refine the description of
binary mode for the mariadb client.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Note: This patch backports commits 10cd281 and 1755ea4 from 10.3.
10cd281:
Problem:- Some binary data is inserted into the table using
Jconnector. When a binlog dump of the data is applied using the mysql
client, it gives a syntax error.
Reason:-
After investigating, it turns out to be an issue of the mysql client not
being able to properly handle \\0 <0 in binary>. In all binary files
where the mysql client fails to insert,
these 2 bytes are common (0x5c00)
Solution:-
I have changed mysql.cc to account for the possibility that a binary
string can have \\0 in it
1755ea4:
Changes on top of Sachin’s patch. Specifically:
1) Refined the parsing break condition to only change the parser’s
behavior for parsing strings in binary mode (behavior of \0 outside
of strings is unchanged).
2) Prefixed binary_zero_insert.test with ‘mysql_’ to more clearly
associate the purpose of the test.
3) As the input of the test contains binary zeros (0x5c00),
different text editors can visualize this sequence differently, and
Github would not display it at all. Therefore, the input itself was
consolidated into the test and created out of hex sequences to make
it easier to understand what is happening.
4) Extended test to validate that the rows which correspond to the
INSERTS with 0x5c00 have the correct binary zero data.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Changes on top of Sachin’s patch. Specifically:
1) Refined the parsing break condition to only change the parser’s
behavior for parsing strings in binary mode (behavior of \0 outside
of strings is unchanged).
2) Prefixed binary_zero_insert.test with ‘mysql_’ to more clearly
associate the purpose of the test.
3) As the input of the test contains binary zeros (0x5c00),
different text editors can visualize this sequence differently, and
Github would not display it at all. Therefore, the input itself was
consolidated into the test and created out of hex sequences to make
it easier to understand what is happening.
4) Extended test to validate that the rows which correspond to the
INSERTS with 0x5c00 have the correct binary zero data.
Reviewed By:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:- Some binary data is inserted into the table using Jconnector. When a
binlog dump of the data is applied using the mysql client it gives a syntax error.
Reason:-
After investigating it turns out to be an issue of the mysql client not being able
to properly handle \\\0 <0 in binary>. In all binary files where the mysql client
fails to insert, these 2 bytes are common (0x5c00)
Solution:-
I have changed mysql.cc to account for the possibility that a binary string can
have \\\0 in it
Per review by Serg, the first row also starts on a new line.
We are hoping we haven't annoyed people that preferred the old
way.
Adding an option for new lines seems like over-engineering in advance.
So if there are complaints, let them be known (JIRA), and we'll add
this under an option.
Test cases updated.
Currently, mysqldump --extended-insert outputs all rows on a single line,
which makes it difficult to use with various Unix command-line utilities
such as grep and diff.
We change mysqldump to emit a linebreak between rows, which makes the output
easier to work with, without significantly affecting dump or restore
performance, or affecting compatibility.
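Schematically (together with the follow-up change above that also starts the
first row on a new line), the output changes from
  INSERT INTO `t1` VALUES (1,'a'),(2,'b');
to
  INSERT INTO `t1` VALUES
  (1,'a'),
  (2,'b');
where the rows shown are only illustrative.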
Closes: #1865
Create a minidump when the server fails to shut down. If the process is being
debugged, cause a debug break.
Move some code that is part of safe_kill into mysys, as both safe_kill
and mysqltest produce minidumps on different timeouts.
Small cleanup in wait_until_dead(): replace an inefficient loop with a single
wait.
instead of the original name of the column
When refactoring temporary table field creation, a mistake was made when
copying the column name for internal temporary tables.
For internal temporary tables we should use the original field name, not
the item name (= alias).
There's no point in keeping it small.
This fixes numerous test failures in preview branches with
versions like 10.7.0-MDEV-xxxx-short-feature-description-MariaDB.
In commit 1bd681c8b3 (MDEV-25506 part 3)
we introduced a "fake instant timeout" when a transaction would wait
for a table or record lock while holding dict_sys.latch. This prevented
a deadlock of the server but could cause bogus errors for operations
on the InnoDB persistent statistics tables.
A better fix is to ensure that whenever a transaction is being
executed in the InnoDB internal SQL parser (which will for now
require dict_sys.latch to be held), it will already have acquired
all locks that could be required for the execution. So, we will
acquire the following locks upfront, before acquiring dict_sys.latch:
(1) MDL on the affected user table (acquired by the SQL layer)
(2) If applicable (not for RENAME TABLE): InnoDB table lock
(3) If persistent statistics are going to be modified:
(3.a) MDL_SHARED on mysql.innodb_table_stats, mysql.innodb_index_stats
(3.b) exclusive table locks on the statistics tables
(4) Exclusive table locks on the InnoDB data dictionary tables
(not needed in ANALYZE TABLE and the like)
Note: Acquiring exclusive locks on the statistics tables may cause
more locking conflicts between concurrent DDL operations.
Notably, RENAME TABLE will lock the statistics tables
even if no persistent statistics are enabled for the table.
DROP DATABASE will only acquire locks on statistics tables if
persistent statistics are enabled for the tables on which the
SQL layer is invoking ha_innobase::delete_table().
For any "garbage collection" in innodb_drop_database(), a timeout
while acquiring locks on the statistics tables will result in any
statistics not being deleted for any tables that the SQL layer
did not know about.
If innodb_defragment=ON, information may be written to the statistics
tables even for tables for which InnoDB persistent statistics are
disabled. But, DROP TABLE will no longer attempt to delete that
information if persistent statistics are not enabled for the table.
This change should also fix the hangs related to InnoDB persistent
statistics and STATS_AUTO_RECALC (MDEV-15020) as well as
a bug that running ALTER TABLE on the statistics tables
concurrently with running ALTER TABLE on InnoDB tables could
cause trouble.
lock_rec_enqueue_waiting(), lock_table_enqueue_waiting():
Do not issue a fake instant timeout error when the transaction
is holding dict_sys.latch. Instead, assert that the dict_sys.latch
is never being held here.
lock_sys_tables(): A new function to acquire exclusive locks on all
dictionary tables, in case DROP TABLE or similar operation is
being executed. Locking non-hard-coded tables is optional to avoid
a crash in row_merge_drop_temp_indexes(). The SYS_VIRTUAL table was
introduced in MySQL 5.7 and MariaDB Server 10.2. Normally, we require
all these dictionary tables to exist before executing any DDL, but
the function row_merge_drop_temp_indexes() is an exception.
When upgrading from MariaDB Server 10.1 or MySQL 5.6 or earlier,
the table SYS_VIRTUAL would not exist at this point.
ha_innobase::commit_inplace_alter_table(): Invoke
log_write_up_to() while not holding dict_sys.latch.
dict_sys_t::remove(), dict_table_close(): No longer try to
drop index stubs that were left behind by aborted online ADD INDEX.
Such indexes should be dropped from the InnoDB data dictionary by
row_merge_drop_indexes() as part of the failed DDL operation.
Stubs for aborted indexes may only be left behind in the
data dictionary cache.
dict_stats_fetch_from_ps(): Use a normal read-only transaction.
ha_innobase::delete_table(), ha_innobase::truncate(), fts_lock_table():
While waiting for purge to stop using the table,
do not hold dict_sys.latch.
ha_innobase::delete_table(): Implement a work-around for the rollback
of ALTER TABLE...ADD PARTITION. MDL_EXCLUSIVE would not be held if
ALTER TABLE hits lock_wait_timeout while trying to upgrade the MDL
due to a conflicting LOCK TABLES, such as in the first ALTER TABLE
in the test case of Bug#53676 in parts.partition_special_innodb.
Therefore, we must explicitly stop purge, because it would not be
stopped by MDL.
dict_stats_func(), btr_defragment_chunk(): Allocate a THD so that
we can acquire MDL on the InnoDB persistent statistics tables.
mysqltest_embedded: Invoke ha_pre_shutdown() before free_used_memory()
in order to avoid ASAN heap-use-after-free related to acquire_thd().
trx_t::dict_operation_lock_mode: Changed the type to bool.
row_mysql_lock_data_dictionary(), row_mysql_unlock_data_dictionary():
Implemented as macros.
rollback_inplace_alter_table(): Apply an infinite timeout to lock waits.
innodb_thd_increment_pending_ops(): Wrapper for
thd_increment_pending_ops(). Never attempt async operation for
InnoDB background threads, such as the trx_t::commit() in
dict_stats_process_entry_from_recalc_pool().
lock_sys_t::cancel(trx_t*): Make dictionary transactions immune to KILL.
lock_wait(): Make dictionary transactions immune to KILL, and to
lock wait timeout when waiting for locks on dictionary tables.
parts.partition_special_innodb: Use lock_wait_timeout=0 to instantly
get ER_LOCK_WAIT_TIMEOUT.
main.mdl: Filter out MDL on InnoDB persistent statistics tables
Reviewed by: Thirunarayanan Balathandayuthapani
Based on Aleksey Midenkov's patch
mysqldump changes:
* the --as-of option specifies the historical point;
* query forging protection for the --as-of parameter.
System-versioned tables are detected by querying I_S.TABLES:
* it transfers much less data when the full table definition is not needed
* it does not give false positives on x TEXT DEFAULT 'WITH SYSTEM VERSIONING'
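Illustrative usage (a sketch; the timestamp value and its exact accepted
format are assumptions):
  mysqldump --as-of="2022-01-01 10:20:30" db1 > db1_as_of.sql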
The test failed by firing an assert in append_warnings() when it is called
from run_query_stmt() and there are more results from the server.
Obviously, append_warnings() should be called after the last packet
received from the server. So, to fix the assertion failure, the function
mysql_more_results() has to be called to check that no more results
exist, and append_warnings() is invoked only when that condition is satisfied.
on response to COM_STMT_EXECUTE
Test cases like the following one
delimiter |;
CREATE PROCEDURE SP001()
BEGIN
DECLARE C1 CURSOR FOR SELECT 1;
OPEN C1;
SELECT 1;
CLOSE C1;
CLOSE C1;
END|
delimiter ;|
--error 1326
call SP001();
failed since the processing of the ERR packet was missed by mysqltest
when it is run with --ps-protocol.
Additionally, the test sp-error was changed to not run multi-statements,
since they are not supported by the PS protocol.
Within this task the following changes were made:
- Added sending of metadata info in the prepare phase for the admin-related
commands (check table, checksum table, repair, optimize, analyze).
- Refactored the implementation of the HELP command to support its execution
in PS mode
- Added support for execution of LOAD INTO and XA-related statements
in PS mode
- Modified mysqltest.cc to run statements in PS mode unconditionally
in case the option --ps-protocol is set. Formerly, only those statements
were executed using the PS protocol that matched a hard-coded regular expression
- Fixed the following issues:
The statement
explain select (select 2)
executed in regular and PS mode produces different results:
MariaDB [test]> prepare stmt from "explain select (select 2)";
Query OK, 0 rows affected (0,000 sec)
Statement prepared
MariaDB [test]> execute stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
|    1 | PRIMARY     | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
2 rows in set (0,000 sec)
MariaDB [test]> explain select (select 2);
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
|    1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
1 row in set, 1 warning (0,000 sec)
In case the statement
CREATE TABLE t1 SELECT * FROM (SELECT 1 AS a, (SELECT a+0)) a
is run in PS mode it fails with the error
ERROR 1054 (42S22): Unknown column 'a' in 'field list'.
- Uniform handling of read-only variables whether the SET var=val
statement is executed as a regular or a prepared statement.
- Fixed assertion firing on handling LOAD DATA statement for temporary tables
- Relaxed the assert condition in the function lex_end_stage1() by adding
the commands SQLCOM_ALTER_EVENT, SQLCOM_CREATE_PACKAGE,
SQLCOM_CREATE_PACKAGE_BODY to the list of supported commands
- Removed raising of the error ER_UNSUPPORTED_PS in the function
check_prepared_statement() for the ALTER VIEW command
- Added initialization of the data member st_select_lex_unit::last_procedure
(assign a NULL value) in the constructor.
Without this change the test case main.ctype_utf8 fails with the following
report in case it is run with the option --ps-protocol:
mysqltest: At line 2278: query 'VALUES (_latin1 0xDF) UNION VALUES(_utf8'a' COLLATE utf8_bin)' failed: 2013: Lost connection
- The following bug reports were fixed:
MDEV-24460: Multiple rows result set returned from stored
routine over prepared statement binary protocol is
handled incorrectly
CONC-519: mariadb client library doesn't handle server_status and
warning_count fields received in the packet
COM_STMT_EXECUTE_RESPONSE.
The reasons for these bug reports have the same nature and are caused by a
missing loop iteration over the results sent by the server in response to a
COM_STMT_EXECUTE packet.
Enclosing the statements that process the COM_STMT_EXECUTE response
in a construct like
do
{
...
} while (!mysql_stmt_next_result());
fixes the above mentioned bug reports.
* read_command_buf is a pointer now, so sizeof() no longer reflects its
length; read_command_buflen does.
* my_safe_print_str() prints multiple screens of '\0' bytes after the
query end and up to read_command_buflen. Use fprintf() instead.
* when setting connection->name to "-closed_connection-" update
connection->name_len to match.
This fixes the MySQL bug #20338 about misuse of the double underscore
prefix __WIN__, which was old MySQL's idea of identifying Windows.
Replace it with the standard _WIN32 symbol for targeting the Windows OS
(both 32 and 64 bit).
Note that the Connect storage engine is not fixed in this patch (it must be
fixed in the "upstream" branch).
- Major rewrite of ddl_log.cc and ddl_log.h
- ddl_log.cc describes at the beginning how the recovery works.
- ddl_log.log has a unique signature and is dynamic. It's easy to
add more information to the header and other ddl blocks while still
being able to execute old ddl entries.
- IO_SIZE for ddl blocks is now dynamic. It can be changed without affecting
recovery of old logs.
- Code is more modular and is now usable outside of partition handling.
- Renamed the log file to ddl_recovery.log and added the option
--log-ddl-recovery to allow one to specify the path & filename.
- Added ddl_log_entry_phase[], the number of phases for each DDL action,
which allowed me to greatly simplify set_global_from_ddl_log_entry()
- Changed how strings are stored in log entries, which allows us to
store much more information in a log entry.
- The ddl log is now always created at startup and deleted on normal shutdown.
This simplifies things notably.
- Added probes debug_crash_here() and debug_simulate_error() to simplify
crash testing and to allow a crash after a given number of times a probe
is executed. See comments in debug_sync.cc and rename_table.test for
how this can be used.
- Reverting failed table and view renames is done through the ddl log.
This ensures that the ddl log is tested also outside of recovery.
- Added helper function 'handler::needs_lower_case_filenames()'
- Extend the binary log with Q_XID events. ddl log handling uses this
to check if a ddl log entry was logged to the binary log (if yes,
it will be deleted from the log during ddl_log_close_binlogged_events()).
- If a DDL entry fails 3 times, disable it. This is to ensure that if
we have a crash in the ddl recovery code the server will not get stuck
in a forever crash-restart-crash loop.
mysqltest.cc changes:
- --die will now replace $variables with their values
- $error will contain the error of the last failed statement
storage engine changes:
- maria_rename() was changed to be more robust against crashes during
rename.
This change is to get rid of randomly failing tests, especially those
that read a random position of the binary log. From looking at the logs
it's clear that some failures happen because a read char (with a value >= 128)
is converted to a big long value. Using uchar everywhere makes this much
less likely to happen.
Another benefit is that a lot of casts of char to uchar could be removed.
Other things:
- Removed some extra space before '=' and '+=' in assignments
- Fixed indentations and lines > 80 characters
- Replace '16' with 'element_size' (from class definition) in
Gtid_list_log_event()
This change removed 68 explicit strlen() calls from the code.
The following renames were done to ensure we don't use the old names
when merging code from earlier releases, as using the new variables
for the print functions could result in crashes:
- charset->csname renamed to charset->cs_name
- charset->name renamed to charset->coll_name
Almost all changes were mechanical, except:
- Changed to use the new Protocol::store(LEX_CSTRING..) when possible
- Changed to use field->store(LEX_CSTRING*, CHARSET_INFO*) when possible
- Changed to use String->append(LEX_CSTRING&) when possible
Other things:
- There were compiler issues with ensuring that all character set names
point to the same string: gcc doesn't allow one to use integer constants
when defining global structures (constant char * pointers work fine).
To get around this, I declared defines for each character set name
length.
Changes:
- To detect automatic strlen() calls, I removed the methods in String that
use 'const char *' without a length:
- String::append(const char*)
- Binary_string(const char *str)
- String(const char *str, CHARSET_INFO *cs)
- append_for_single_quote(const char *)
All usage of append(const char*) is changed to either use
String::append(char), String::append(const char*, size_t length) or
String::append(LEX_CSTRING)
- Added STRING_WITH_LEN() around constant string arguments to
String::append()
- Added overflow argument to escape_string_for_mysql() and
escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow.
This was needed as most usage of the above functions never tested the
result for -1 and would have given wrong results or crashes in case
of overflows.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING.
Changed all Item_func::func_name()'s to func_name_cstring()'s.
The old Item_func_or_sum::func_name() is now an inline function that
returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return
LEX_CSTRING.
- Changed for some functions the name argument from const char *
to const LEX_CSTRING &:
- Item::Item_func_fix_attributes()
- Item::check_type_...()
- Type_std_attributes::agg_item_collations()
- Type_std_attributes::agg_item_set_converter()
- Type_std_attributes::agg_arg_charsets...()
- Type_handler_hybrid_field_type::aggregate_for_result()
- Type_handler_geometry::check_type_geom_or_binary()
- Type_handler::Item_func_or_sum_illegal_param()
- Predicant_to_list_comparator::add_value_skip_null()
- Predicant_to_list_comparator::add_value()
- cmp_item_row::prepare_comparators()
- cmp_item_row::aggregate_row_elements_for_comparison()
- Cursor_ref::print_func()
- Removed String_space(), as it was only used in one case, which could
be simplified to not use String_space(), thanks to the fixed
my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
- NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to
clarify what the function really does.
- Rename of protocol function:
bool store(const char *from, CHARSET_INFO *cs) to
bool store_string_or_null(const char *from, CHARSET_INFO *cs).
This was done both to clarify the difference between this 'store' function
and the others, and to make it easier to find suboptimal usage of store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to instead be of type LEX_CSTRING.
- class Item_func_units now uses LEX_CSTRING for the name.
Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character
in the prompt would cause some part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to
append() is known or easily obtainable but was not used.
- Removed some unneeded 'virtual' definitions for functions that were
inherited from the parent. I added override to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. The old code
could cause memory overruns.
- Simplified some loops when adding char * to a String with delimiters.
The --help comment for the --base64-output option in
mysqlbinlog was hard to decipher. This quick patch aims
to refine it.
Reviewed By:
==========
Andrei Elkin: <andrei.elkin@mariadb.com>
This patch changes the main name of the 3-byte character set from utf8 to
utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set to TRUE by
default, so that utf8 means utf8mb3. If it is not set, utf8 means utf8mb4.
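A sketch of the effect described above:
  CREATE TABLE t1 (a CHAR(10)) CHARACTER SET utf8;
  -- with UTF8_IS_UTF8MB3 set (the default), column a gets utf8mb3;
  -- with the flag not set, the same statement creates a utf8mb4 column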
Problem:
=======
The ALWAYS option of the mariadb-binlog --base64-output flag
formats its output incorrectly. This option is deprecated, and
MySQL 8.0 has removed it entirely.
Solution:
========
Adhere to MySQL and remove this option from MariaDB.
Behavioral Changes:
==================
Use Case: ./mariadb-binlog --base64-output
Previous Behavior: Sets base64-output mode to always
New Behavior: Error message indicating incomplete argument
Use Case: ./mariadb-binlog --base64-output=always
Previous Behavior: Sets base64-output mode to always
New Behavior: Error message indicating invalid argument value
Reviewed By:
==========
Andrei Elkin: <andrei.elkin@mariadb.com>
Problem:
=======
MariaDB's command line utilities (e.g., mysql,
mysqldump, etc.) silently ignore connection
property options (e.g., --port and --socket)
when the protocol is not explicitly set via the
command line for localhost connections.
Fix:
===
If connection properties are specified without a
protocol, override the protocol to be consistent.
For example, if --port is specified, automatically
set protocol=tcp.
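For example (illustrative):
  mysql --host=localhost --port=3307
now implies --protocol=tcp instead of silently ignoring the port and
connecting over the default socket.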
Caveats:
=======
* When multiple connection properties are
specified, nothing is overridden
* If the protocol is set via the command line,
its value is used
Reviewers:
========
Sergei Golubchik <serg@mariadb.com>
Vladislav Vaintroub <wlad@mariadb.com>
The easiest way to compile and test the server with UBSAN is to run:
./BUILD/compile-pentium64-ubsan
and then run mysql-test-run.
After this commit, one should be able to run this without any UBSAN
warnings. There is still a few compiler warnings that should be fixed
at some point, but these do not expose any real bugs.
The 'special' cases where we disable, suppress or circumvent UBSAN are:
- ref10 source (as here we intentionally do some shifts that UBSAN
complains about).
- x86 version of optimized int#korr() methods. UBSAN does not like unaligned
memory access of integers. Fixed by using byte_order_generic.h when
compiling with UBSAN.
- We use smaller thread stack with ASAN and UBSAN, which forced me to
disable a few tests that prints the thread stack size.
- Verifying class types does not work for shared libraries. I added
suppression in mysql-test-run.pl for this case.
- Added '#ifdef WITH_UBSAN' when using integer arithmetic where it is
safe to have overflows (two cases, in item_func.cc).
Things fixed:
- Don't left shift signed values
(byte_order_generic.h, mysqltest.c, item_sum.cc and many more)
- Don't assign non-existing values to enum variables.
- Ensure that bool and enum values are properly initialized in
constructors. This was needed as UBSAN checks that these types have
correct values when one copies an object.
(gcalc_tools.h, ha_partition.cc, item_sum.cc, partition_element.h ...)
- Ensure we do not call handler functions on unallocated objects or
deleted objects.
(events.cc, sql_acl.cc).
- Fixed bugs in Item_sp::Item_sp() where we did not call constructor
on Query_arena object.
- Fixed several casts of objects to an incompatible class!
(Item.cc, Item_buff.cc, item_timefunc.cc, opt_subselect.cc, sql_acl.cc,
sql_select.cc ...)
- Ensure we do not do integer arithmetic that causes over or underflows.
This includes also ++ and -- of integers.
(Item_func.cc, Item_strfunc.cc, item_timefunc.cc, sql_base.cc ...)
- Added JSON_VALUE_UNITIALIZED to json_value_types and ensure that
value_type is initialized to this instead of to -1, which is not a valid
enum value for json_value_types.
- Ensure we do not call memcpy() when second argument could be null.
- Fixed that Item_func_str::make_empty_result() creates an empty string
instead of a null string (safer as it ensures we do not do arithmetic
on null strings).
Other things:
- Changed struct st_position to an OBJECT and added an initialization
function to it to ensure that we do not copy or use uninitialized
members. The change to a class was also motivated by the fact that we used
"struct st_position" and POSITION randomly throughout the code, which was
confusing.
- Notably big rewrite in sql_acl.cc to avoid using deleted objects.
- Changed sql_partition to use '^' instead of '-'. This is safe as
the operand is either 0 or 0x8000000000000000ULL.
- Added check for select_nr < INT_MAX in JOIN::build_explain() to
avoid bug when get_select() could return NULL.
- Reordered elements in POSITION for better alignment.
- Changed sql_test.cc::print_plan() to use pointers instead of objects.
- Fixed a bug in find_set() where we could execute '1 << -1'.
- Added the variable have_sanitizer, used by mtr. (Previously this variable
existed only in 10.5 and up.) It can now have one of two values:
ASAN or UBSAN.
- Moved ~Archive_share() from ha_archive.cc to ha_archive.h and marked
it virtual. This was an effort to get UBSAN to work with loaded storage
engines. I kept the change as the new place is better.
- Added in CONNECT engine COLBLK::SetName(), to get around a wrong cast
in tabutil.cpp.
- Added HAVE_REPLICATION around usage of rgi_slave, to get embedded
server to compile with UBSAN. (Patch from Marko).
- Added #ifdef for powerpc64 to avoid a bug in old gcc versions related
to integer arithmetic.
Changes that should not be needed but had to be done to suppress warnings
from UBSAN:
- Added static_cast<uint16_t> around shifts to get rid of a LOT of
compiler warnings when using UBSAN.
- Had to change some '/' operations on base-2 integers into shifts to get
rid of some compile-time warnings.
Reviewed by:
- Json changes: Alexey Botchkov
- Charset changes in ctype-uca.c: Alexander Barkov
- InnoDB changes & Embedded server: Marko Mäkelä
- sql_acl.cc changes: Vicențiu Ciorbaru
- build_explain() changes: Sergey Petrunia
One should not change the program arguments!
This change also reduces warnings from the icc compiler.
Almost all changes are just syntax changes (adding const to
'get_one_option function' declarations).
Other changes:
- Added a few casts of 'argument' from 'const char*' to 'char *'. This
was mainly in calls to 'external' functions we don't have control of.
- Ensure that all reset of 'password command line argument' are similar.
(In almost all cases it was just adding a comment and a cast)
- In mysqlbinlog.cc and mysqld.cc there were a few cases that changed
the command line argument. These places were changed to instead allocate
the option in a MEM_ROOT to avoid changing the argument. Some of this
code was changed to ensure that different programs did the parsing the
same way. Added a test case for the changes in mysqlbinlog.cc.
- Changed a few variables that took their value from command line options
from 'char *' to 'const char *'.