When only E empty partitions remain, auto-create N new empty
partitions for SYSTEM_TIME partitioning rotated by INTERVAL/LIMIT and
marked with the AUTO_INCREMENT keyword. Syntax change: the
AUTO_INCREMENT keyword (or the shorter AUTO) may be used after the
LIMIT/INTERVAL clause.
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 100000 AUTO_INCREMENT;
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK AUTO_INCREMENT;
The current revision implements hard-coded values of 1 for both E and
N, as well as the auto-creation thresholds MinInterval = 1 hour and
MinLimit = 1000.
The name for a newly added partition is first chosen as "pX", where
X is the partition number and "p" is a hard-coded name prefix. If this
name is already taken, X is incremented until the resulting name is
free to use.
ALTER TABLE ADD PARTITION is now always fast. If a history partition
overflow occurs, a manual ALTER TABLE REBUILD PARTITION is needed.
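A minimal sketch of the manual recovery path, with illustrative table
and partition names:
ALTER TABLE t ADD PARTITION (PARTITION p2 HISTORY);
ALTER TABLE t REBUILD PARTITION p1;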
Introduced a new wsrep_strict_ddl configuration variable with which
Galera checks the storage engine of the affected table. If the table
is not InnoDB (currently the only storage engine fully supporting
Galera replication), the DDL statement returns the error code:
ER_GALERA_REPLICATION_NOT_SUPPORTED
eng "DDL-statement is forbidden as table storage engine does not support Galera replication"
However, when wsrep_replicate_myisam=ON we allow DDL statements on
MyISAM tables. If the affected table uses an allowed storage engine,
Galera runs the normal TOI.
For now, this new setting should be set globally on all nodes in a
cluster. When this setting is enabled, the following DDL statements
accessing tables that do not support Galera replication are refused:
* CREATE TABLE (e.g. CREATE TABLE t1 (a int) engine=Aria)
* ALTER TABLE
* TRUNCATE TABLE
* CREATE VIEW
* CREATE TRIGGER
* CREATE INDEX
* DROP INDEX
* RENAME TABLE
* DROP TABLE
Statements on PROCEDURE, EVENT and FUNCTION are allowed, as the
affected tables are known only at execution time. Furthermore, USER,
ROLE, SERVER and DATABASE statements are also allowed, as they do not
really have an affected table.
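A minimal sketch of the resulting behavior, assuming a node with
Galera replication active (engine names as in the examples above):
SET GLOBAL wsrep_strict_ddl = ON;
CREATE TABLE t1 (a INT) ENGINE=Aria;    -- refused with ER_GALERA_REPLICATION_NOT_SUPPORTED
CREATE TABLE t2 (a INT) ENGINE=InnoDB;  -- allowed and replicated through normal TOI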
* Explicit STARTS syntax
* SHOW CREATE
* Default STARTS rounding depending on INTERVAL type
* Warn when STARTS timestamp is later than query time
* Fix uninitialized Lex->create_last_non_select_table under
mysql_unpack_partition()
Default STARTS rounding depending on INTERVAL type
If the STARTS clause is omitted, a default one is assigned with a
value derived from the query timestamp. The STARTS value is rounded
depending on the INTERVAL type:
SECOND: no rounding is done;
MINUTE: the timestamp seconds are set to 0;
HOUR: the timestamp seconds and minutes are set to 0;
DAY, WEEK, MONTH and YEAR: the timestamp seconds, minutes and hours
are set to 0 (the date of rotation is kept as the current date).
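For contrast, a minimal sketch of the explicit STARTS syntax from the
list above (the timestamp value is illustrative):
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK STARTS '2018-01-01 00:00:00';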
InnoDB intentionally (it is documented behavior) ignores changes to
DATA DIRECTORY and INDEX DIRECTORY for partitions, but we should
issue a warning when this happens.
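A minimal sketch of a statement that should now raise such a warning,
assuming a RANGE-partitioned InnoDB table (the path is illustrative):
ALTER TABLE t REORGANIZE PARTITION p0 INTO
(PARTITION p0 VALUES LESS THAN (10) DATA DIRECTORY = '/mnt/data');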
Cherry picking:
Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95
Description:
============
In row based replication, when replicating from a table with a field
whose character set is utf8mb3 to the same table with the same field
set to character set utf8mb4, I get a confusing error message:
For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4'
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to
type 'varchar(1)'"
A similar issue exists with the CHAR type as well.
Issues with respect to BLOB types:
For BLOB: LONGBLOB to TINYBLOB - the error message displays an incorrect blob type.
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type
'tinyblob'"
For BINARY to BINARY - the error message displays an incorrect type for the
master side field.
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type
'binary(10)'"
A similar issue exists for the VARBINARY type; it is displayed as 'VARCHAR'.
Analysis:
=========
In row based replication, charset information is not sent as part of the
metadata from master to slave.
For a VARCHAR field, its character length is converted into the equivalent
number of octets/bytes and stored internally. At the time of displaying the
data to the user it is converted back to the original character length.
For example:
VARCHAR(2) with charset utf8mb3 is stored as 2*3 = VARCHAR(6).
At the time of displaying it to the user:
VARCHAR(6) with charset utf8mb3: 6/3 = VARCHAR(2).
At present the internally converted octet length is sent from master to slave
without the charset information. On the slave side, if the type conversion
fails, the 'show_sql_type' function is used to get the type-specific
information from the metadata. Since no charset information is available,
the field type is displayed as VARCHAR(6).
This results in a confusing error message.
For CHAR fields:
CHAR(1) with utf8mb3 is stored as CHAR(3)
CHAR(1) with utf8mb4 is stored as CHAR(4)
The 'show_sql_type' function, which retrieves type information from the
metadata, uses (bytes / local charset length) to get the actual character
length. If the slave's charset is 'utf8mb4', then
CHAR(3/4) --> CHAR(0)
CHAR(4/4) --> CHAR(1).
This results in a confusing error message.
Analysis of the BLOB type issue:
A BLOB's length is represented in two forms:
1. Actual length,
i.e.
(length < 256) type= MYSQL_TYPE_TINY_BLOB;
(length < 65536) type= MYSQL_TYPE_BLOB; ...
2. packlength - the number of bytes used to represent the length of the blob:
1 - tinyblob
2 - blob ...
In row based replication only the packlength is written to the binary log. On
the slave side this packlength is interpreted as the actual length of the
blob. Hence the length is always < 256 and the type is displayed as tinyblob.
Analysis of the BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char or
binary. Since the master side character set information is not available on
the slave side, both binary and char fields are displayed as char.
Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both the
source and target fields. For the target field, display the charset
information if it is relevant.
For blob types, the code was changed to use the packlength and display the
appropriate blob type in the error message.
For binary and varbinary fields, use the slave side character set as a
reference to map them to binary or varbinary fields.
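A minimal sketch of the VARCHAR scenario described above, assuming a
working master/slave pair (table names follow the error messages
quoted earlier):
-- on the master
CREATE TABLE test.t1 (a VARCHAR(1) CHARACTER SET utf8);    -- 3-byte utf8 (utf8mb3)
-- on the slave
CREATE TABLE test.t1 (a VARCHAR(1) CHARACTER SET utf8mb4);
-- replicating an INSERT triggers the type-conversion check; before the
-- fix the error reported 'varchar(3)' vs 'varchar(1)' without indicating
-- that these are octet lengths under differently-sized charsets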
This commit adds a new feature to the server: comments at the database
level. The maximum comment length allowed is 1024 bytes. If the comment
length exceeds this limit, the new error/warning code 4144 is raised as an
error or a warning, depending on whether thd->is_strict_mode() is true or
false. The database comment is also stored in the db.opt file, as well as
in the information_schema.schemata table.
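A minimal sketch of the resulting syntax (the database name is
illustrative, and the information_schema column name is assumed to be
SCHEMA_COMMENT):
CREATE DATABASE db1 COMMENT 'main application schema';
SELECT schema_comment FROM information_schema.schemata WHERE schema_name = 'db1';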
* add an error for truncation of versioned tables: `ER_TRUNCATE_ILLEGAL_VERS`
* make a full table open with `tdc_acquire_share` instead of just a `ha_table_exists` check
test suites run: main, parts, versioning
Closes #785
Fix partitioning for trx_id-versioned tables.
`partition by hash`, `range` and others now work.
`partition by system_time` is forbidden.
Currently we cannot use row_start and row_end in `partition by`, because
the versioned fields are inserted by the engine's handler, which also sets
up the row_start/row_end values (a transaction id), so this is also
forbidden.
The drawback is that it is now impossible to use `partition by key()`
without parameters for such tables, because it implicitly references
row_start and row_end.
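A minimal sketch of what is now accepted and refused, assuming InnoDB
trx_id-based versioning:
CREATE TABLE t (
  x INT,
  row_start BIGINT UNSIGNED GENERATED ALWAYS AS ROW START,
  row_end BIGINT UNSIGNED GENERATED ALWAYS AS ROW END,
  PERIOD FOR SYSTEM_TIME (row_start, row_end)
) WITH SYSTEM VERSIONING ENGINE=InnoDB
PARTITION BY HASH (x) PARTITIONS 4;  -- now works
-- PARTITION BY SYSTEM_TIME ...      -- forbidden for trx_id-versioned tables
-- PARTITION BY KEY ()               -- forbidden: implicitly references row_end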
* add handler::vers_can_native()
* drop Table_scope_and_contents_source_st::vers_native()
* drop partition_element::find_engine_flag as unused
* forbid versioning partitioning for trx_id as not supported
* adopt vers tests for trx_id partitioning
* forbid any row_end referencing in `partition by` clauses,
including implicit `by key()`
Part#2 (final): rewriting the code to pass the correct enum_sp_aggregate_type
to the sp_head constructor, so sp_head never changes its aggregation type
later on. The grammar has been simplified and defragmented.
This made it possible to check aggregate-specific instructions right after
a routine body has been scanned, by calling the new LEX methods:
sp_body_finalize_{procedure|function|trigger|event}()
Some C++ code was moved from *.yy to a few new helper methods in LEX.
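For context, the aggregation type in question distinguishes ordinary stored
functions from user-defined aggregate functions; a minimal illustrative
example of the latter (names are hypothetical):
CREATE AGGREGATE FUNCTION agg_count(x INT) RETURNS INT
BEGIN
  DECLARE cnt INT DEFAULT 0;
  DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN cnt;
  LOOP
    FETCH GROUP NEXT ROW;
    SET cnt = cnt + 1;
  END LOOP;
END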
This patch implements an engine independent unique hash index.
Usage: a unique HASH index can be created automatically for a
blob/varchar/text column whose key length > handler->max_key_length(),
or it can be specified explicitly.
Automatic creation:
CREATE TABLE t1 (a BLOB UNIQUE);
Explicit creation:
CREATE TABLE t1 (a INT, UNIQUE (a) USING HASH);
Internal KEY_PART representations:
A long unique key_info has 2 representations.
(Let's understand this with an example: create table t1 (a blob, b blob, unique(a, b));)
1. User Given Representation: the key_info->key_part array is similar to what
the user has defined. So in the example it has 2 key_parts (a, b).
2. Storage Engine Representation: here there is only one key_part and it
points to the HASH_FIELD. This key_part is always placed after the user
defined key_parts.
So:
User Given Representation      [a] [b] [hash_key_part]
key_info->key_part ------------^
Storage Engine Representation  [a] [b] [hash_key_part]
key_info->key_part --------------------^
table->s->key_info holds the User Given Representation, while table->key_info
holds the Storage Engine Representation. The representations can be converted
into each other by calling the re/setup_keyinfo_hash functions.
Working:
1. When the user specifies a hash index explicitly, or the key length is
> handler->max_key_length(), one extra vfield is added in
mysql_prepare_create_table (for each long unique key), and
key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash key_part are set
(like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is
used with a list of Item_fields; when an explicit length is given by the
user, Item_func_left is used to cut the Item_field values down to that
prefix.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called,
which creates the hash key from table->record[0] and then calls
ha_index_read_map; if a duplicate hash is found, the result is compared
field by field.
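A minimal sketch of the resulting behavior (values are illustrative):
CREATE TABLE t1 (a BLOB UNIQUE);
INSERT INTO t1 VALUES (REPEAT('x', 10000));
INSERT INTO t1 VALUES (REPEAT('x', 10000));
-- the second insert is refused: the hidden hash key matches, and the
-- field-by-field comparison confirms the values are a real duplicate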