In environments with load balancers or proxies, the audit plugin logs
only the IP address, making it difficult to differentiate individual client
connections from the same IP.
Add a new 'port' field to the appropriate event objects to capture the
client's TCP port number. Populate the port field with thd->port in the
appropriate functions. The audit plugin receives and logs this port field
along with other connection information, enabling better identification
of individual client connections.
All new code of the whole pull request, including one or several files that
are either new files or modified ones, is contributed under the BSD-new license.
I am contributing on behalf of my employer Amazon Web Services, Inc.
Add tls_version and tls_version_length variables to the audit plugin so
they can be logged. This is useful to help identify suspicious or
malformed connections attempting to use unsupported TLS versions. A log
with this information makes it possible to detect and block more
malicious connection attempts.
Users with 'server_audit_events' empty will have these two new fields
automatically visible in their logs; users who don't want them
can configure which fields to include by listing them in
'server_audit_events'.
In connection events, the TLS version is written to the `object` field
in key=value format, and the key-value pair is omitted when the
value is empty.
To ensure the MTR test result matches in all environments, the TLS
version string is replaced with a generic `TLS_VERSION` placeholder to
avoid the MTR test failing unexpectedly. The test fetches the version
with the query `SHOW STATUS LIKE 'Ssl_version'` and replaces it in the
output with the `replace_result` command.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
The fix for MDEV-34413 added support for Index Condition Pushdown with reverse
ordered scans. This makes Rowid filtering work with reverse-ordered scans, too,
so enable it. For example, InnoDB can now check the pushed index condition and
then check the rowid filter on success, in the ORDER BY ... DESC case.
Allows index condition pushdown for reverse-ordered scans, a feature
previously disabled due to poor performance. This patch adds a new
API to the handler class called set_end_range which allows callers to
tell the handler what the end of the index range will be when scanning.
Combined with a pushed index condition, the handler can scan the index
efficiently and not read beyond the end of the given range. When
checking if the pushed index condition matches, the handler will also
check if scanning has reached the end of the provided range and stop if
so.
If we instead only enabled ICP for reverse ordered scans without
also calling this new API, then the handler would perform unnecessary
index condition checks. In fact, these checks would continue until the
end of the index is reached.
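As a hypothetical user-level illustration (table and column names invented), a descending range scan can now both use ICP and stop at the range boundary:
```
CREATE TABLE t1 (a INT, b INT, c VARCHAR(32), KEY (a, b));
EXPLAIN SELECT * FROM t1
WHERE a BETWEEN 10 AND 20 AND b % 3 = 0
ORDER BY a DESC, b DESC;
-- Extra is expected to include "Using index condition"; the scan
-- stops once it passes the low end of the a BETWEEN 10 AND 20 range.
```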
These changes are agnostic of storage engine. That is, any storage
engine that supports index condition pushdown will inherit this new
behavior, as it is implemented in the SQL and storage engine
API layers.
The partitioned tables storage meta-engine (ha_partition) adds an
override of set_end_range which recursively calls set_end_range on its
child storage engine (handler) implementations.
This commit updates the test made in an earlier commit to show that
ICP matches happen for the reverse ordered case.
This patch is based on changes written by Olav Sandstaa in
MySQL commit da1d92fd46071cd86de61058b6ea39fd9affcd87
Adds tests which show that ICP was not enabled for reverse-ordered scans
prior to this MDEV. A later commit for this same MDEV re-records
these tests, showing that ICP for reverse-ordered scans is
enabled and working.
Item_func_sp::execute() was called twice per row in this scenario:
SELECT ROW(f1(),1) = ROW(1,1), @counter FROM seq_1_to_5;
Fix:
Adding a new bool member Item_func_sp::m_use_cached_value,
which in case of "true" means that the Item_func_sp::execute()
has already been executed for the current result set row and
its return value has been cached in Item_func_sp::sp_result_field.
All Item_func_sp value methods (like val_int() in the reported query)
check m_use_cached_value. If it is "true", execute() is not
called again and the function value is returned directly using
the sp_result_field value methods.
Item_func_sp::bring_value() calls execute() no matter what m_use_cached_value is.
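A sketch of the scenario with a counting function (f1 and @counter are from the report; the seq_1_to_5 table comes from the sequence engine):
```
DELIMITER $$
CREATE FUNCTION f1() RETURNS INT
BEGIN
  SET @counter = @counter + 1;
  RETURN 1;
END$$
DELIMITER ;

SET @counter = 0;
SELECT ROW(f1(),1) = ROW(1,1), @counter FROM seq_1_to_5;
-- With the fix, execute() runs once per row: @counter ends at 5
-- instead of 10.
```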
This patch adds support for SYS_REFCURSOR (a weakly typed cursor)
for both sql_mode=ORACLE and sql_mode=DEFAULT.
Works as a regular stored routine variable, parameter and return value:
- can be passed as an IN parameter to stored functions and procedures
- can be passed as an INOUT and OUT parameter to stored procedures
- can be returned from a stored function
Note, strongly typed REF CURSOR will be added separately.
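A sketch of the user-visible behaviour in sql_mode=ORACLE (function and variable names invented, exact client delimiter handling may differ):
```
SET sql_mode=ORACLE;
DELIMITER $$
CREATE FUNCTION f1 RETURN SYS_REFCURSOR
AS
  c SYS_REFCURSOR;
BEGIN
  OPEN c FOR SELECT 1;
  RETURN c;
END$$

DECLARE
  c SYS_REFCURSOR;
  v INT;
BEGIN
  c:= f1();           -- a function can return a SYS_REFCURSOR
  FETCH c INTO v;
  CLOSE c;
END$$
DELIMITER ;
```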
Note, to make dependencies easier to maintain, some parts of sql_class.h
and item.h were moved to new header files:
- select_results.h:
class select_result_sink
class select_result
class select_result_interceptor
- sp_cursor.h:
class sp_cursor_statistics
class sp_cursor
- sp_rcontext_handler.h
class Sp_rcontext_handler and its descendants
The implementation consists of the following parts:
- A new class sp_cursor_array deriving from Dynamic_array
- A new class Statement_rcontext which contains data shared
between sub-statements of a compound statement.
It has a member m_statement_cursors of the sp_cursor_array data type,
as well as an open cursor counter. THD inherits from Statement_rcontext.
- A new data type handler Type_handler_sys_refcursor in plugins/type_cursor/
It is designed to store uint16 references -
positions of the cursor in THD::m_statement_cursors.
- Type_handler_sys_refcursor suppresses some derived numeric features.
When a SYS_REFCURSOR variable is used as an integer, an error is raised.
- A new abstract class sp_instr_fetch_cursor. It's needed to share
the common code between "OPEN cur" (for static cursors) and
"OPER cur FOR stmt" (for SYS_REFCURSORs).
- New sp_instr classes:
* sp_instr_copen_by_ref - OPEN sys_ref_cursor FOR stmt;
* sp_instr_cfetch_by_ref - FETCH sys_ref_cursor INTO targets;
* sp_instr_cclose_by_ref - CLOSE sys_ref_cursor;
* sp_instr_destruct_variable - to destruct SYS_REFCURSOR variables when
the execution goes out of the BEGIN..END block
where SYS_REFCURSOR variables are declared.
- New methods in LEX:
* sp_open_cursor_for_stmt - handles "OPEN sys_ref_cursor FOR stmt".
* sp_add_instr_fetch_cursor - "FETCH cur INTO targets" for both
static cursors and SYS_REFCURSORs.
* sp_close - handles "CLOSE cur" both for static cursors and SYS_REFCURSORs.
- Changes in cursor functions to handle both static cursors and SYS_REFCURSORs:
* Item_func_cursor_isopen
* Item_func_cursor_found
* Item_func_cursor_notfound
* Item_func_cursor_rowcount
- A new system variable @@max_open_cursors - to limit the number
of cursors (static and SYS_REFCURSORs) opened at the same time.
Its allowed range is [0-65536], with 50 by default.
- A new virtual method Type_handler::can_return_bool() telling whether
calling item->val_bool() is allowed for Items of this data type,
or whether the "Illegal parameter for operation" error should be raised
at fix_fields() time instead.
- New methods in Sp_rcontext_handler:
* get_cursor()
* get_cursor_by_ref()
- A new class Sp_rcontext_handler_statement to handle top-level
statement-wide cursors which are shared by all substatements.
- A new virtual method expr_event_handler() in classes Item and Field.
It's needed to close (and make available for a new OPEN)
unused THD::m_statement_cursors elements which do not have any references
any more. This can happen at various points in time, e.g.
* after evaluating the parameters of an SQL routine
* after assigning a cursor expression into a SYS_REFCURSOR variable
* when leaving a BEGIN..END block with SYS_REFCURSOR variables
* after setting OUT/INOUT routine actual parameters from formal
parameters.
When a slave does not start the slave threads on restart, but does
not report anything about startup failures to the error log either,
this can be due to `skip-slave-start` being set in the
config file(s) or on the command line (and most likely is).
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Promote the last few SQL-inaccessible replication options (command line
or `mariadb.cnf`) to these GLOBAL read-only system variables:
```
@@master_info_file
@@replicate_same_server_id
@@show_slave_auth_info
```
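For example, they can now be read from SQL:
```
SELECT @@master_info_file, @@replicate_same_server_id, @@show_slave_auth_info;
```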
Side effect: The latter two options changed from no argument
to optional argument. Quoting `include/my_getopt.h`:
> It should be noted that for historical reasons variables with the
> combination arg_type=NO_ARG, my_option::var_type=GET_BOOL still
> accepts arguments. This is someone counter intuitive and care should
> be taken if the code is refactored.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Add two new variables:
file_key_management_digest={sha1|sha224|sha256|sha384|sha512}
This specifies the digest function to use in the key derivation of the key
used for decryption of the keyfile.
file_key_management_use_pbkdf2=N
This specifies whether PBKDF2 is used in the key derivation and, if
so (N != 0), how many iterations.
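A hypothetical configuration sketch (paths and iteration count are illustrative):
```
[mysqld]
plugin-load-add = file_key_management
file_key_management_filename = /etc/mysql/keys.enc
file_key_management_filekey = FILE:/etc/mysql/.filekey
file_key_management_digest = sha256
file_key_management_use_pbkdf2 = 100000
```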
Appliers need to verify foreign key constraints during normal
operation, in multi-active topologies, and for this reason appliers
are configured to enable FK checking.
However, during node joining, in IST and the later catch-up period,
the node is still idle (from local connections), and the only source
of incoming transactions is the cluster sending certified write
sets for applying. IST happens with parallel applying, and there
is a possibility that foreign key checks cause lock conflicts between
appliers accessing FK child and parent tables. Also, the excessive
FK checking will slow down the IST process somewhat.
For these reasons, we could relax FK checks for appliers during IST
and catch-up periods. The relaxed FK check mode should, however, be
configurable, e.g. by a wsrep_mode flag: SKIP_APPLIER_FK_CHECKS_IN_IST.
When this operation mode is set and the node is processing IST or
catch-up, appliers should skip FK checking.
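Assuming the flag is exposed through @@wsrep_mode as proposed, enabling it would look like:
```
SET GLOBAL wsrep_mode = 'SKIP_APPLIER_FK_CHECKS_IN_IST';
```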
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Return an error when reading a file-based password from locations readable
  by LOAD DATA or LOAD_FILE().
- If ssl_passphrase was set, SHOW VARIABLES will show "file:", "env:" or
  "pass:" (but won't reveal sensitive data).
Add the ssl_passphrase server parameter, which works similarly
to the --passout/--passin openssl command line parameters.
The passphrase value can be formatted as follows.
- pass:password
  Provide the actual password after the pass: prefix.
- env:var
  Obtain the password from the environment variable 'var'.
- file:pathname
  Read the password from the specified file pathname.
  Only the first line, up to the newline character, is read from the stream.
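For example, with a server started with a hypothetical --ssl_passphrase=env:SSL_KEY_PASS, only the prefix is revealed:
```
SHOW GLOBAL VARIABLES LIKE 'ssl_passphrase';
-- the Value column is expected to show just "env:", not the password
```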
This commit implements optimizer hints that allow affecting the order
of joining tables:
- JOIN_FIXED_ORDER similar to existing STRAIGHT_JOIN hint;
- JOIN_ORDER to apply the specified table order;
- JOIN_PREFIX to hint what tables should be first in the join;
- JOIN_SUFFIX to hint what tables should be last in the join.
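A sketch (table names invented):
```
-- Join t3, then t1, then t2, overriding the optimizer's choice:
SELECT /*+ JOIN_ORDER(t3, t1, t2) */ *
FROM t1, t2, t3
WHERE t1.a = t2.a AND t2.b = t3.b;
```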
- Make print_warn() not be a template.
- Introduce the Printable_parser_rule interface; grammar rules that
  emit warnings implement it (and print_warn invokes its function).
- Remove Parser::Hint::append_args() as it is not used anywhere
  (it used to be necessary to call print_warn(... (Parser::Hint*)NULL)).
1. Disable opt_hint_timeout.test for the embedded server. Make sure
the test does not crash even when started for the embedded server.
Disable view-protocol since hints are not supported inside views.
2. Hints are designed to behave like regular /* ... */ comments:
`SELECT /*+ " */ 1;` -- a valid SQL query
However, the mysql client program waits for the closing double-quote
character.
Another problem is observed when there is no space character between
the closing `*/` of the hint and the following `*`:
`SELECT /*+ some_hints(...) */* FROM t1;`
In this case the client treats `/*` as a comment section opening and
waits for the closing `*/` sequence.
This commit fixes all of these issues.
The MAX_EXECUTION_TIME hint places a limit N (a timeout value in
milliseconds) on how long a statement is permitted to execute before
the server terminates it.
Syntax:
SELECT /*+ MAX_EXECUTION_TIME(milliseconds) */ ...
Only top-level SELECT statements support the hint.
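For example (table name invented):
```
-- Give up after one second:
SELECT /*+ MAX_EXECUTION_TIME(1000) */ COUNT(*) FROM big_table;
```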
The BNL() hint effectively increases join_cache_level up to 4 if it is
set to a value less than 4.
This commit also makes the BKA() hint override not only the
`join_cache_bka` optimizer switch but `join_cache_level` as well.
I.e., the BKA() hint enables BKA and BKAH join buffers, both flat and
incremental, regardless of the `join_cache_level` and `join_cache_bka`
settings.
join_cache_level=0 disables join cache buffers, but the hint
BNL() now makes it possible to employ BNL(H) buffers for particular
tables or query blocks.
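A sketch (table names invented):
```
SET join_cache_level = 0;   -- join buffers disabled globally
SELECT /*+ BNL(t1, t2) */ *
FROM t1, t2
WHERE t1.a < t2.a;          -- BNL buffers are still employed here
```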
This commit also adds a number of test cases, including
OUTER JOINs, to make sure hints do not break the rules of
join buffer application.
Forbid adding optimizer hints to view definitions.
In the case when optimizer hints are added to the view definition
at a `CREATE (OR REPLACE) VIEW`/`ALTER VIEW` statement, a warning is
generated and the hints are ignored.
This commit also disables ps-protocol for test cases where
`Unresolved table/index name` warnings are generated. The reason
for this is that such warnings are generated during both the PREPARE
and EXECUTE stages. Since opt_hints.test has `--enable_prepare_warnings`,
running it with `--ps-protocol` duplicates the warning messages.
- Using Lex_ident_sys to scan identifiers, like the SQL parser does.
This fixes handling of double-quote-delimited and backtick-delimited identifiers,
as well as handling of non-ASCII identifiers.
Unescaping and converting from the client character set to the system
character set is now done using Lex_ident_cli_st and Lex_ident_sys,
like it's done in the SQL tokenizer/parser.
Adding helper methods to_ident_cli() and to_ident_sys()
in Optimizer_hint_parser::Token.
- Fixing the hint parser to report a syntax error on an empty identifier:
SELECT /*+ BKA(``) */ * FROM t1;
- Moving a part of the code from opt_hints_parser.h to opt_hints_parser.cc
Moving these method definitions:
- Optimizer_hint_tokenizer::find_keyword()
- Optimizer_hint_tokenizer::get_token()
to avoid huge pieces of the code in the header file.
- A Lex_ident_cli_st cleanup
Fixing a few Lex_ident_cli_st methods to return Lex_ident_cli_st &
instead of void, to make them easier to use in the caller code.
- Fixing the hint parser to display the correct line number
Adding a new data type Lex_comment_st
(a combination of LEX_CSTRING and a line number)
Using it in sql_yacc.yy
- Getting rid of redundant dependencies on sql_hints_parser.h
Moving void LEX::resolve_optimizer_hints() from sql_lex.h to sql_lex.cc
Adding a class Optimizer_hint_parser_output, deriving from
Optimizer_hint_parser::Hint_list. Fixing the hint parser to
return a pointer to an allocated instance of Optimizer_hint_parser_output
rather than an instance of Optimizer_hint_parser::Hint_list.
This makes it possible to use a forward declaration of
Optimizer_hint_parser_output in sql_lex.h and thus avoid dependencies
on sql_hints_parser.h.
This commit introduces:
- the infrastructure for optimizer hints;
- hints for join buffering: BNL(), NO_BNL(), BKA(), NO_BKA();
- NO_ICP() hint for disabling index condition pushdown;
- MRR(), NO_MRR() hints for multi-range read control;
- NO_RANGE_OPTIMIZATION() for disabling range optimization;
- QB_NAME() for assigning names for query blocks.
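A sketch of the new syntax (table and query block names invented):
```
SELECT /*+ QB_NAME(q1) NO_ICP(t1) NO_BNL(t2) */ *
FROM t1, t2
WHERE t1.a = t2.a AND t1.b < 10;
```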
Added capability to create a trigger associated with several trigger events.
For this goal, the syntax of the CREATE TRIGGER statement was extended
to support the syntax structure { event [ OR ... ] } for the `trigger_event`
clause. Since one trigger will be able to handle several events, a way
is needed to determine what kind of event is being handled on execution
of a trigger. For this goal, support for the clauses INSERTING, UPDATING,
DELETING was added by this patch. These clauses can be used inside a
trigger body to detect which trigger action is currently being processed,
using the following boilerplate:
IF INSERTING THEN ...
ELSIF UPDATING THEN ...
ELSIF DELETING THEN ...
In case one of the clauses INSERTING, UPDATING, DELETING specified in
a trigger's body does not match the trigger's event types, the error
ER_INCOMPATIBLE_EVENT_FLAG is emitted.
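A sketch of the extended syntax (table and column names invented; ELSIF follows the boilerplate above, some sql_mode settings may spell it ELSEIF):
```
DELIMITER $$
CREATE TRIGGER t1_audit
  AFTER INSERT OR UPDATE OR DELETE ON t1
  FOR EACH ROW
  IF INSERTING THEN
    INSERT INTO audit_log VALUES ('insert', NEW.id);
  ELSIF UPDATING THEN
    INSERT INTO audit_log VALUES ('update', NEW.id);
  ELSIF DELETING THEN
    INSERT INTO audit_log VALUES ('delete', OLD.id);
  END IF$$
DELIMITER ;
```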
After this patch is pushed, one Trigger object can be associated with several
trigger events. This means that the array Table_triggers_list::triggers can
contain several pointers to the same Trigger object in array members
corresponding to different events. Moreover, support of several trigger events
for the same trigger requires that the data members `next` and `action_order`
of the Trigger class be converted to arrays storing the related information
per trigger event.
The ability to specify the same trigger for different event types makes it
necessary to handle invalid cases on execution of a multi-event trigger,
when the OLD or NEW qualifier doesn't match the current event type the
trigger is run for. The OLD qualifier should produce the NULL value for the
INSERT event, whereas the NEW qualifier should produce the NULL value for
the DELETE event.
Extend loose index scan to support descending indexes.
This is achieved by removing a check that skipped creating loose index
scan plans for descending indexes, as well as by generalising the
execution of such plans.
The generalisation applies to all levels looking for min/max in loose
index scan. In the highest level (get_next), generalise min and max to
first and last, so that it still proceeds in the direction agreeing
with the index parity. In the lower levels, combine next_min and
next_max methods into next_min_max, and combine next_min_in_range and
next_max_in_range into next_min_max_in_range. This retains existing
logic of these functions and reduces code duplication, while allowing
handling of all four combinations (min, max) x (asc index, desc
index).
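A hypothetical illustration (table definition invented; seq_1_to_1000 comes from the sequence engine):
```
CREATE TABLE t1 (a INT, b INT, KEY k1 (a DESC, b DESC));
INSERT INTO t1 SELECT seq % 10, seq FROM seq_1_to_1000;
EXPLAIN SELECT a, MIN(b), MAX(b) FROM t1 GROUP BY a;
-- Extra is expected to show "Using index for group-by"
```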
Atomic CREATE OR REPLACE allows keeping an old table intact if the
command fails or the server crashes. That is done by renaming the
original table to a temporary name as a backup, and restoring it if the
CREATE fails. When the command is complete and logged, the backup
table is deleted.
Atomic replace algorithm
Two DDL chains are used for CREATE OR REPLACE:
ddl_log_state_create (C) and ddl_log_state_rm (D).
1. (C) Log rename of ORIG to TMP table (Rename TMP to original).
2. Rename original to TMP.
3. (C) Log CREATE_TABLE_ACTION of ORIG (drops ORIG);
4. Do everything with ORIG (like insert data)
5. (D) Log drop of TMP
6. Write query to binlog (this marks (C) to be closed in
case of failure)
7. Execute drop of TMP through (D)
8. Close (C) and (D)
If there is a failure before 6) we revert the changes in (C).
Chain (D) is only executed if 6) succeeded ((C) is closed on
crash recovery).
Foreign key errors will be found at stage 1).
Additional notes
- CREATE TABLE without REPLACE and temporary tables are not affected
  by this commit.
  set @@drop_before_create_or_replace=1 can be used to
  get the old behaviour, where existing tables are dropped
  in CREATE OR REPLACE.
- CREATE TABLE is reverted if binlogging the query fails.
- Engines having HTON_EXPENSIVE_RENAME flag set are not affected by
this commit. Conflicting tables marked with this flag will be
deleted with CREATE OR REPLACE.
- Replication execution is not affected by this commit.
- Replication will first drop the conflicting table and then
  create the new one.
- CREATE TABLE .. SELECT XID usage is fixed and now there is no need
to log DROP TABLE via DDL_CREATE_TABLE_PHASE_LOG (see comments in
do_postlock()). XID is now correctly updated so it disables
DDL_LOG_DROP_TABLE_ACTION. Note that binary log is flushed at the
final stage when the table is ready. So if we have XID in the
binary log we don't need to drop the table.
- Three variations of CREATE OR REPLACE handled:
1. CREATE OR REPLACE TABLE t1 (..);
2. CREATE OR REPLACE TABLE t1 LIKE t2;
3. CREATE OR REPLACE TABLE t1 SELECT ..;
- Test case uses 6 combinations for engines (aria, aria_notrans,
myisam, ib, lock_tables, expensive_rename) and 2 combinations for
binlog types (row, stmt). Combinations help to check differences
between the results. Error failures are tested for the above three
variations.
- expensive_rename tests CREATE OR REPLACE without atomic
replace. The effect should be the same as with the old behaviour
before this commit.
- Triggers mechanism is unaffected by this change. This is tested in
create_replace.test.
- LOCK TABLES is affected. Lock restoration must be done after the new
  table is created or TMP is renamed back to ORIG.
- Moved ddl_log_complete() from send_eof() to finalize_ddl(). This
checkpoint was not executed before for normal CREATE TABLE but is
executed now.
- CREATE TABLE will now also be rolled back if writing to the binary
  log failed. See rpl_gtid_strict.test.
backup ddl log changes
- In case of a successful CREATE OR REPLACE we only log
  the CREATE event, not the DROP TABLE event of the old table.
ddl_log.cc changes
ddl_log_execute_action() now properly returns error conditions.
ddl_log_disable_entry() was added to allow disabling a single entry.
The entry on disk is still reserved until ddl_log_complete() is
executed.
On XID usage
Like with all other atomic DDL operations XID is used to avoid
inconsistency between master and slave in the case of a crash after
binary log is written and before ddl_log_state_create is closed. On
recovery XIDs are taken from binary log and corresponding DDL log
events get disabled. That is done by
ddl_log_close_binlogged_events().
On linking two chains together
Chains are executed in the ascending order of entry_pos of execute
entries. But entry_pos assignment order is undefined: it may assign
a bigger number for the first chain and then a smaller number for the
second chain. So the execution order in that case will be reversed:
the second chain will be executed first.
To avoid that we link one chain to another. While the base chain
(ddl_log_state_create) is active the secondary chain
(ddl_log_state_rm) is not executed. That is: only one chain can be
executed in two linked chains.
The interface ddl_log_link_chains() was defined in "MDEV-22166
ddl_log_write_execute_entry() extension".
Atomic info parameters in HA_CREATE_INFO
Many functions in CREATE TABLE pass the same parameters. These
parameters are part of the table creation info and should be in
HA_CREATE_INFO (or similar). Passing parameters via a single
structure is much easier for adding new data and for
refactoring.
InnoDB changes
Added ha_innobase::can_be_renamed_to_backup() to check if
a table with foreign keys can be renamed.
Aria changes:
- Fixed an issue in the Aria engine with CREATE + locked tables
  where data was not properly committed in some cases in
  the event of a crash.
Known issues:
- InnoDB tables with foreign key definitions are not fully supported
with atomic create and replace:
- ha_innobase::can_be_renamed_to_backup() can detect some cases
where InnoDB does not support renaming table with foreign key
constraints. In this case MariaDB will drop the old table before
creating the new one.
The detected cases are:
- The new and old table is using the same foreign key constraint
name.
- The old table has self referencing constraints.
- If the old and new table use the same name for a constraint, the
  create of the new table will fail. The original table will be
  restored in this case.
- The above issues will be fixed in a future commit.
- CREATE OR REPLACE TEMPORARY table is not fully atomic. Any conflicting
  table will always be dropped before creating a new one (old behaviour).
REPAIR of InnoDB tables was logging ALTER TABLE and REPAIR to the ddl log.
ALTER TABLE contained the new tableid and REPAIR, wrongly, contained the
old rowid.
Now only REPAIR is logged.
ddl.log changes:
REPAIR TABLE and OPTIMIZE TABLE that are done through ALTER TABLE will
now contain the old and new table id. If not done through ALTER TABLE,
only the current rowid will be shown (as before).
The purpose of this commit is to ensure that creation and changes of
temporary tables are properly and predictably logged to the binary
log. It also fixes some bugs where ROW logging was used in MIXED mode,
when STATEMENT would be a better (and expected) choice.
In this commit message, STATEMENT stands for logging to the binary log in
STATEMENT format, MIXED stands for MIXED binlog format and ROW for ROW
binlog format.
New rules for logging of temporary tables
- CREATE of temporary tables is now by default binlogged only if the
STATEMENT binlog format is used. If it is binlogged, 1 is stored in
TABLE_SHARE->table_creation_was_logged. The user can change this
behavior by setting create_temporary_table_binlog_formats to
MIXED,STATEMENT, in which case the create is logged in statement
format also in MIXED mode (as before).
- Changes to temporary tables are binlogged if and only if
the CREATE was logged. The logging happens under STATEMENT or MIXED.
If binlog_format=ROW, temporary table changes are not binlogged. A
temporary table that is changed under ROW is marked as 'not up to
date in binlog' and no future row changes are logged. Any future
statement that uses the temporary table will then force row logging
of the other tables it uses.
- DROP TEMPORARY is binlogged only if the CREATE was binlogged.
Changes done:
- Row logging is forced for any statement using temporary tables that
are not up to date in the binary log.
(Before, row logging was forced if the user had a temporary table.)
- If there are any changes to the temporary table that are not binlogged,
the table is marked as not up to date.
- TABLE_SHARE->table_creation_was_logged has a new definition for
temporary tables:
0 Table creation was not logged to the binary log
1 Table creation was logged to the binary log and the table is up to date
2 Table creation was logged to the binary log but some changes were
not logged to the binary log
'Not up to date in the binary log' is defined as value 0 or 2.
- If a multi-table UPDATE or multi-table DELETE fails, then
all updated temporary tables are marked as not up to date.
- Enforce row logging if the query is using temporary tables
that are not up to date.
Before, row logging was enforced if the user had any
temporary tables.
- When dropping temporary tables, use IF EXISTS. This ensures
that a slave will not stop if it crashed and lost its
temporary tables.
- Remove the comment and version from the DROP /*!4000 TEMPORARY.. statement
generated when a connection with open temporary tables closes. Added
'generated by server' at the end of the DROP.
Bugs fixed:
- When using temporary tables with commands that forced row-based logging,
like INSERT INTO temporary_table VALUES (UUID()), the statement was never
logged, which caused the temporary table to become inconsistent on
master and slave.
- The binlog format used is now clearly defined. It now depends only
on the current binlog_format and the tables used.
Before, it depended on whether the user had ANY temporary tables and
the state of 'current_stmt_binlog_format' set by previous queries.
This also caused temporary tables to be logged to the binary log in
some cases.
- CREATE TABLE t1 LIKE not_logged_temporary_table caused replication
to stop.
- Rename of not-binlogged temporary tables was binlogged to the binary log,
which caused replication to stop.
Changes in behavior:
- By default create_temporary_table_binlog_formats=STATEMENT, which
means that CREATE TEMPORARY is not logged to binary log under MIXED
binary logging. This can be changed by setting
create_temporary_table_binlog_formats to MIXED,STATEMENT.
- Using temporary tables that were not logged to the binary log will
cause any query using them for updating other tables to be logged in
ROW format. Before, all queries were logged in ROW format if the user
had any temporary tables, even if they were not used by the query.
- Generated DROP TEMPORARY TABLE is now always using IF EXISTS and
has a "generated by server" comment in the binary log.
The consequence of the above is that manipulating a lot of rows
through temporary tables will by default be slower in mixed mode.
For example:
BEGIN;
CREATE TEMPORARY TABLE tmp AS SELECT a, b, c FROM
large_table1 JOIN large_table2 ON ...;
INSERT INTO other_table SELECT b, c FROM tmp WHERE a <100;
DROP TEMPORARY TABLE tmp;
COMMIT;
By default this will create a huge entry in the binary log, compared
to just a few hundred bytes in statement mode. However, the change in
this commit makes usage of temporary tables more reliable and
predictable and is thus worth it. Using statement mode or
create_temporary_table_binlog_formats can be used to avoid this issue.
when a definer for an SP/view is wrong - it should be ER_MALFORMED_DEFINER,
not ER_NO_SUCH_USER
when one uses current_role as a definer or grantee but there's no
current role - it should be ER_INVALID_ROLE, not ER_MALFORMED_DEFINER
when a non-existent user is specified - it should be ER_NO_SUCH_USER,
which should say "The user does not exist", not "Definer does not exist"
clarify ER_CANT_CHANGE_TX_CHARACTERISTICS to say what cannot be changed