:: Syntax change ::
Keyword AUTO enables history partition auto-creation.
Examples:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
STARTS '2021-01-01 00:00:00' AUTO PARTITIONS 12;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;
Or with explicit partitions:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO
(PARTITION p0 HISTORY, PARTITION pn CURRENT);
To disable or enable auto-creation, use ALTER TABLE and add or remove
AUTO in the partitioning specification:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# Disables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR;
# Enables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
If the rest of the partitioning specification is identical to the one
in CREATE TABLE, no repartitioning is done (for details see MDEV-27328).
:: Description ::
Before executing a history-generating DML command (see the list of
commands below), add N history partitions, where N is sufficient to
hold the potentially generated history. N > 1 may be required when
history partitions are switched by INTERVAL and current_timestamp lies
N intervals beyond the interval boundary of the last history partition.
If the last history partition equals or exceeds LIMIT records, then a
new history partition is created and selected as the working partition.
According to MDEV-28411 partitions cannot be switched (or created)
while the command is running. Thus LIMIT is not a strict limit, and the
history partition size must be planned as the LIMIT value plus the
average number of history rows one DML command can generate.
Auto-creation is implemented by a synchronous
fast_alter_partition_table() call from the thread of the executed DML
command, before the command itself is run (via a fallback-and-retry
mechanism similar to the Discovery feature, see Open_table_context).
The names for newly added partitions are generated like default
partition names, with the extension of MDEV-22155 (which avoids name
clashes by extending the assignment counter to the next free-enough
gap).
These DML commands can trigger auto-creation:
DELETE (including multitable DELETE, excluding DELETE HISTORY)
UPDATE (including multitable UPDATE)
REPLACE (including REPLACE .. SELECT)
INSERT .. ON DUPLICATE KEY UPDATE (including INSERT .. SELECT .. ODKU)
LOAD DATA .. REPLACE
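For example, a minimal hedged sketch using the LIMIT variant from the
syntax examples above (values are illustrative):
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;
INSERT INTO t1 VALUES (1);
-- each UPDATE moves the old row version into the working history
-- partition; once it reaches the LIMIT of 1000 records, the next
-- history-generating DML auto-creates a new history partition
UPDATE t1 SET x = x + 1;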
:: Bug fixes ::
MDEV-23642 Locking timeout caused by auto-creation affects original DML
The reasons for this are:
- Do not disrupt the main business process (the history is an
  auxiliary service);
- The consequences are non-fatal (history is not lost, but goes into
  the wrong partition; fixed by a partitioning rebuild);
- The application has more freedom to fail in this case or not: it may
  read the warning info and find the corresponding error number.
- While a non-failing command is easy for an application to detect and
  fail on its own, the opposite is hard to handle: there are no
  automatic actions to fix a failed command and retry; DBA intervention
  is required, and until then the application is non-functioning.
MDEV-23639 Auto-create does not work under LOCK TABLES or inside triggers
Don't do tdc_remove_table() for OT_ADD_HISTORY_PARTITION because it is
not possible in locked tables mode.
LTM_LOCK_TABLES mode (and LTM_PRELOCKED_UNDER_LOCK_TABLES) works out
of the box as fast_alter_partition_table() can reopen tables via
locked_tables_list.
In LTM_PRELOCKED we reopen and relock the table manually.
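As a hedged illustration of the LOCK TABLES case that now works
(schema borrowed from the syntax examples above):
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
LOCK TABLES t1 WRITE;
-- a history-generating DML under LOCK TABLES can now auto-create the
-- needed history partition
UPDATE t1 SET x = x + 1;
UNLOCK TABLES;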
:: More fixes ::
* some_table_marked_for_reopen flag fix
some_table_marked_for_reopen affects only the reopen of
m_locked_tables, i.e. Locked_tables_list::reopen_tables() reopens only
tables from m_locked_tables.
* Unused can_recover_from_failed_open() condition
Can recover_from_failed_open() really be used after
open_and_process_routine()?
:: Reviewed by ::
Sergei Golubchik <serg@mariadb.org>
Add the Chinese language to sql/share/CMakeLists.txt, from which it
was missing, so that the files are actually installed.
Also add bulgarian=bgn, which has existed for a long time.
Sort both lists properly.
Append both to debian/mariadb-server-core-10.4 too.
- Simplified Chinese translation added
- Character encoding is gbk
-- gbk covers more characters
-- gbk includes both Simplified and Traditional
-- best option I think; may need to work along with other locale
   settings
- Other cleanup
-- Within each error, messages are sorted according to language code
-- More consistent formatting (8 spaces preceding each translation)
-- jps removed as duplicate of jpn translation
This should be a good starting point. More refinement is appreciated,
and needed down the road.
English "containt" (sic) spelling fixes on ER_FK_NO_INDEX_{CHILD,PARENT}
resulting in mtr test case adjustments.
Edited/reviewed by Daniel Black
Range can be thought of in a similar manner as the wildcard (*), where
more than one element is processed. To implement range notation, the
JSON path parser was extended to parse the 'to' keyword, and
JSON_PATH_ARRAY_RANGE was added as a path type. If there is a 'to'
keyword, JSON_PATH_ARRAY_RANGE is used for the path type along with the
existing type.
A new integer, n_item_end, stores the end index of the range. When
there is a 'to' keyword, the end index is stored in n_item_end;
otherwise the index is stored in n_item.
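A hedged sketch of the resulting user-visible notation (the expected
output is my assumption, not taken from this commit):
-- elements with indexes 1 through 3
SELECT JSON_EXTRACT('[1, 2, 3, 4, 5]', '$[1 to 3]');
-- expected to return: [2, 3, 4]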
Task 6:
We can find the type of a .frm file. If it is a sequence, then the
is_sequence argument passed to dd_frm_type() will be true. Since there
is already a check that raises an error if the trigger is on a
temporary table or view, an additional condition is added to check
whether the .frm is a sequence (is_sequence==true), and the error
message is changed to show
"Trigger's '%-.192s' is view, temporary table or sequence" instead of
"Trigger's '%-.192s' is view or temporary table".
extra2_read_len resolved by keeping the implementation in sql/table.cc
and exposing it for use by ha_partition.cc.
Remove the identical implementation in unireg.h
(ref: bfed2c7d57)
Per Marko's comment in JIRA, sql_kill is passing the thread id
as long long. We change the format of the error messages to match,
and cast the thread id to long long in sql_kill_user.
This commit implements two-phase binloggable ALTER.
When the new
@@session.binlog_alter_two_phase = YES
is set, an ALTER query gets logged in two parts: the START ALTER and
the COMMIT or ROLLBACK ALTER. START ALTER is written to the binlog as
soon as the necessary locks have been acquired for the table. The
timing is such that any concurrent DMLs that update the same table are
either committed, thus logged into the binary log having done work on
the old version of the table, or will be queued for execution on its
new version.
The completing COMMIT or ROLLBACK ALTER is written at the very point
where a normal "single-piece" ALTER would be logged, that is, after
most of the query work is done. When the result is positive, COMMIT
ALTER is written; otherwise ROLLBACK ALTER is written together with the
specific error that happened after the START ALTER phase.
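A hedged sketch of enabling the feature (the boolean assignment form
and the SHOW BINLOG EVENTS observation are my assumptions, not from
this commit):
SET @@session.binlog_alter_two_phase = 1;
ALTER TABLE t1 ADD COLUMN b INT;
-- SHOW BINLOG EVENTS is then expected to show the statement in two
-- parts: a START ALTER event and a matching COMMIT (or ROLLBACK) ALTER.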
Replication of two-phase binloggable ALTER is cross-version safe.
Specifically, an old slave merely does not recognize the START ALTER
part, while still being able to process and memorize its GTID.
A two-phase logged ALTER is read from the binlog by mysqlbinlog to
produce BINLOG 'string', where 'string' contains a base64-encoded
Query_log_event containing either the start part of the ALTER or a
completion part. The query details can be displayed with the `-v` flag,
similarly to ROW-format events. Notice that mysqlbinlog output
containing parts of a two-phase binloggable ALTER is processed
correctly only by a binlog_alter_two_phase server.
@@log_warnings > 2 can reveal details of binlogging and slave-side
processing of the ALTER parts.
The current commit also carries fixes to the following list of
reported bugs:
MDEV-27511, MDEV-27471, MDEV-27349, MDEV-27628, MDEV-27528.
Thanks to all people involved in the early discussion of the feature,
including Kristian Nielsen, and to those who helped to design,
implement and test: Sergei Golubchik, Andrei Elkin who took the burden
of completing the implementation, Sujatha Sivakumar, Brandon
Nesterenko, Alice Sherepa, Ramesh Sivaraman, Jan Lindstrom.
Problem: Currently stored functions do not support IN/OUT/INOUT
parameter qualifiers. This is needed for Oracle compatibility
(sql_mode = ORACLE).
Solution: Implemented parameter qualifier support for CREATE FUNCTION
(reference: CREATE PROCEDURE), and implemented return by reference for
OUT/INOUT parameters in execute_function() (reference:
execute_procedure()).
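A hedged sketch of the feature in sql_mode = ORACLE (names, types and
values are illustrative, not taken from this commit):
SET sql_mode = ORACLE;
DELIMITER $$
CREATE FUNCTION f1(a IN INT, b OUT INT) RETURN INT AS
BEGIN
  b := a * 2;      -- OUT parameter returned by reference
  RETURN a + 1;
END$$
CREATE PROCEDURE p1 AS
  x INT;
  y INT;
BEGIN
  x := f1(10, y);  -- after this call, y is expected to be 20
END$$
DELIMITER ;
-- Calling f1 with its OUT parameter directly from an SQL query is
-- expected to fail with the new restriction error mentioned below.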
Files changed:
sql/sql_yacc.yy: Added IN, OUT, INOUT parameter qualifiers for CREATE FUNCTION.
sql/sp_head.cc: Added input and output parameter binding for IN/OUT/INOUT parameters in execute_function() so that OUT/INOUT can return by reference.
sql/share/errmsg-utf8.txt: Added an error message to restrict OUT/INOUT parameters when the function is called from an SQL query.
mysql-test/suite/compat/oracle/t/sp-inout.test: Added test cases
mysql-test/suite/compat/oracle/r/sp-inout.result: Added test results
Reviewed-by: iqbal@hasprime.com
The previous JSON parser was using an API which made the parsing
inefficient: the same JSON content was parsed again and again.
Switch to a lower-level parsing API which allows the parsing to be done
efficiently.
LIMIT history switching requires a number of history partitions to be
marked for read: from the first to the last non-empty one, plus one
empty partition. The least we can do is to fail with an error message
if the needed partition was not marked for read. As this is the handler
interface, we require a new handler error code to display a
user-friendly error message.
Switching by INTERVAL works out of the box with the
ER_ROW_DOES_NOT_MATCH_GIVEN_PARTITION_SET error.
Further changes to 3e0304884b
New changes in the translation:
* Converted to the LATAM countries' treatment: tú for vd. This way it
  works well for Spain and all LATAM countries.
* Minor changes
bzip2/lz4/lzma/lzo/snappy compression is now provided via *services*.
They're almost like normal services, but live in include/providers/,
and they're supposed to provide exactly the same interface as the
original compression libraries (not everything, only enough of it for
the code to compile).
The services are implemented via dummy functions that return the
corresponding error values (LZMA_PROG_ERROR, LZO_E_INTERNAL_ERROR, etc).
The actual compression libraries are linked into the corresponding
provider plugins. Providers are daemon plugins that, when loaded,
replace the service pointers to point to the actual compression
functions.
That is, the run-time dependency on compression libraries is now on the
plugins, and the server doesn't need any compression libraries to run,
but will automatically support the compression when a plugin is loaded.
InnoDB and Mroonga use compression plugins now. RocksDB doesn't,
because it comes with standalone utility binaries that cannot load
plugins.
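A hedged sketch of loading one of the providers (the provider_lz4
plugin name and the InnoDB variable are assumptions based on standard
MariaDB packaging and options):
INSTALL SONAME 'provider_lz4';
-- with the provider loaded, compression that needs lz4 is available:
SET GLOBAL innodb_compression_algorithm = lz4;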
Syntax for CONVERT keyword
ALTER TABLE tbl_name
[alter_option [, alter_option] ...] |
[partition_options]
partition_option: {
...
| CONVERT PARTITION partition_name TO TABLE tbl_name
}
Examples:
ALTER TABLE t1 CONVERT PARTITION p2 TO TABLE tp2;
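A fuller hedged sketch (the RANGE layout is illustrative, not taken
from this commit):
CREATE TABLE t1 (a INT)
PARTITION BY RANGE (a)
(PARTITION p1 VALUES LESS THAN (10),
 PARTITION p2 VALUES LESS THAN (20),
 PARTITION pm VALUES LESS THAN MAXVALUE);
ALTER TABLE t1 CONVERT PARTITION p2 TO TABLE tp2;
-- tp2 is now a standalone table holding the rows that were in p2,
-- and t1 no longer has partition p2.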
The new ALTER_PARTITION_CONVERT_OUT command for
fast_alter_partition_table() is handled in the
alter_partition_convert_out() function, which basically does
ha_rename_table(). The partition to extract is marked with the same
flag as a dropped partition: PART_TO_BE_DROPPED. Note that we cannot
have multiple partitioning commands in one ALTER.
For DDL logging the principle is basically the same as for other
fast_alter_partition_table() commands. The only difference is that it
integrates the late Atomic DDL functions and introduces an additional
phase, WFRM_BACKUP_ORIGINAL. That phase is required for binlog
consistency: otherwise we could not revert back after
WFRM_INSTALL_SHADOW is done, and if we crash or fail before the DDL log
is complete, the altered table would already be new but the binlog
would miss that ALTER command. Note that this is different from all
other atomic DDL in that it rolls back until ddl_log_complete() is
done, even if everything was fully done before the crash.
Test cases added to:
parts.alter_table \
parts.partition_debug \
versioning.partition \
atomic.alter_partition