- CREATE TABLE ... SELECT drops constraints for columns that
appear in both the CREATE and the SELECT part.
- Fixed by copying the constraint in
Column_definition::redefine_stage1_common()
- If one has both a default expression and a CHECK constraint for a
column, one could get the error "Expression for field `a` is
referring to uninitialized field `a`".
- Fixed by ignoring default expressions for the current column when
checking the CHECK constraint (see the sketch below)
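A minimal sketch of the two cases (illustrative only, not the exact
test cases from the report):

CREATE TABLE t1 (a INT CHECK (a > 10)) SELECT 1 AS a;
-- column `a` appears in both parts; its CHECK constraint was dropped

CREATE TABLE t2 (a INT DEFAULT (RAND() * 10) CHECK (a >= 0));
-- could fail with "Expression for field `a` is referring to
-- uninitialized field `a`"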
This was developed by Aleksey Midenkov based on my design.
In the original InnoDB storage format (that was retroactively named
ROW_FORMAT=REDUNDANT in MySQL 5.0.3), the length of each index field
is stored explicitly.
Because of this, we can and now will allow instant conversion from
VARCHAR to CHAR or VARBINARY to BINARY of equal or greater size,
as well as instant conversion of TINYINT to SMALLINT to MEDIUMINT
to INT to BIGINT (while not changing between signed and unsigned).
Theoretically, we could allow changing from an unsigned integer to
a bigger unsigned integer, as well as changing CHAR to VARCHAR, but
that would require additional metadata and conversions whenever
reading old records.
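For example, on a ROW_FORMAT=REDUNDANT table these conversions can now
be performed without rebuilding the table (a sketch, assuming the
ALGORITHM=INSTANT syntax is accepted for them):

CREATE TABLE t (a TINYINT, b VARCHAR(10)) ENGINE=InnoDB ROW_FORMAT=REDUNDANT;
ALTER TABLE t MODIFY a INT, MODIFY b CHAR(10), ALGORITHM=INSTANT;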
Field_str::is_equal(), Field_varstring::is_equal(), Field_num::is_equal():
Return the new result IS_EQUAL_PACK_LENGTH_EXT if the table advertises
HA_EXTENDED_TYPES_CONVERSION capability and we are considering the
above-mentioned conversions.
ALTER_COLUMN_EQUAL_PACK_LENGTH_EXT: A new ALTER TABLE flag, similar
to ALTER_COLUMN_EQUAL_PACK_LENGTH but requiring conversions when
reading the data. The Field::is_equal() result IS_EQUAL_PACK_LENGTH_EXT
will map to this flag.
dtype_get_fixed_size_low(): For BINARY, CHAR and integer columns
in ROW_FORMAT=REDUNDANT, return 0 (variable length) from now on.
dtype_get_sql_null_size(): Keep returning the current size for
BINARY, CHAR and integer columns, so that in ROW_FORMAT=REDUNDANT
it will remain possible to update in place between NULL and NOT NULL
values.
btr_index_rec_validate(): Relax a CHECK TABLE length check for
ROW_FORMAT=REDUNDANT tables.
btr_cur_instant_init_low(): No longer trust fixed_len
for ROW_FORMAT=REDUNDANT tables.
We cannot rely on fixed_len anymore, because records may have been
written with a shorter length before the instant extension. Note that
importing such a tablespace into earlier MariaDB versions produces
ER_TABLE_SCHEMA_MISMATCH when using a .cfg file.
In the original InnoDB storage format (which was retroactively named
ROW_FORMAT=REDUNDANT in MySQL 5.0.3), the length of each index field
is stored explicitly. Thus, we can and from now on will allow arbitrary
extension of VARBINARY and VARCHAR columns when the table is in
ROW_FORMAT=REDUNDANT.
ha_innobase::open(): Advertise a new HA_EXTENDED_TYPES_CONVERSION
capability for ROW_FORMAT=REDUNDANT tables.
Field_varstring::is_equal(): If the HA_EXTENDED_TYPES_CONVERSION
capability is advertised for the table, return IS_EQUAL_PACK_LENGTH
for any length extension.
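For example (a sketch, assuming the ALGORITHM=INSTANT syntax):

CREATE TABLE t (v VARBINARY(8)) ENGINE=InnoDB ROW_FORMAT=REDUNDANT;
ALTER TABLE t MODIFY v VARBINARY(2000), ALGORITHM=INSTANT;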
According to close(2), "Retrying the close() after a failure return is
the wrong thing to do". Even in the EINTR case the file descriptor may
already be closed. Take the prudent approach here and risk leaking one
file descriptor rather than closing one that is no longer ours.
For up to 127 bytes length, InnoDB would use 1 byte for length, and
that byte would always be less than 128. If the maximum length is
longer than 255 bytes, InnoDB would use a variable-length encoding
for the length, using 1 byte for lengths up to 127 bytes, and
2 bytes for longer lengths.
Thus, 1-byte lengths are always compatible when the maximum size
changes from less than 128 bytes to anything longer.
Field_varstring::is_equal(): Return IS_EQUAL_PACK_LENGTH also when
converting from VARCHAR less than 128 bytes to any longer VARCHAR.
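For example, the following extension crosses the 255-byte boundary yet
needs no table rebuild, because every already-stored length byte is
less than 128 (a sketch using a single-byte character set):

CREATE TABLE t (v VARCHAR(100) CHARACTER SET latin1) ENGINE=InnoDB;
ALTER TABLE t MODIFY v VARCHAR(300) CHARACTER SET latin1;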
No need to call list.empty(): the first call is already made by the
List constructor, and the second makes no sense as the object is
destroyed immediately afterwards.
This task implements the optimizer trace.
This feature produces a trace for any SELECT/UPDATE/DELETE statement,
which contains information about the decisions taken by the optimizer
during the optimization phase (choice of table access method, various
costs, transformations, etc). This feature helps explain why some
decisions were taken by the optimizer and why others were rejected.
The trace is session-local and controlled by the @@optimizer_trace
variable. To enable the optimizer trace, run:
SET optimizer_trace='enabled=on';
To display the trace one can run:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
This task also involves:
MDEV-18489: Limit the memory used by the optimizer trace
introduces a switch optimizer_trace_max_mem_size which limits
the memory used by the optimizer trace. This was implemented by
Sergei Petrunia.
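A typical session might look like this (t1 is an arbitrary table):

SET optimizer_trace='enabled=on';
SET optimizer_trace_max_mem_size=1048576;
SELECT * FROM t1 WHERE a > 10;
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';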
Change the default authentication for root@localhost to
IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket
which provides secure passwordless login, while still allowing
SET PASSWORD to work as expected.
Also create a second all-privilege account for the user that owns
datadir (and thus has full access to the data anyway).
Compile unix_socket plugin statically into the server.
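The resulting account is roughly equivalent to this sketch:

CREATE USER root@localhost
IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket;

Because 'invalid' can never match any real password hash, only the
unix_socket branch can succeed until SET PASSWORD stores a real hash.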
If wsrep_load_data_splitting is enabled, change the streaming
replication parameters internally to match the original behavior,
i.e. replicate every 10000 rows. After the load is over, restore the
original streaming replication settings, as sketched below.
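A sketch of the affected flow (file and table names are illustrative):

SET GLOBAL wsrep_load_data_splitting=ON;
LOAD DATA INFILE 'data.csv' INTO TABLE t1;
-- internally replicated as streaming replication fragments of
-- 10000 rows; the original settings are restored afterwards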
Removed redundant wsrep_tc_log_commit().
When importing a tablespace, we must initialize dummy DEFAULT NULL
values for any instantly added columns in order to avoid a debug
assertion failure when PageConverter::update_records() invokes
rec_get_offsets(). Finally, when the operation completes, we must
evict and reload the table definition, so that the correct
default values for instantly added columns will be loaded.
ha_innobase::discard_or_import_tablespace(): On successful
IMPORT TABLESPACE, evict and reload the table definition,
so that btr_cur_instant_init() will load the correct metadata.
PageConverter::update_index_page(): Fill in dummy DEFAULT NULL values
for instantly added columns. These will be replaced upon the
completion of the operation by evicting and reloading the metadata.
row_discard_tablespace(): Invoke dict_table_t::remove_instant().
After DISCARD TABLESPACE, the table is no longer in "instant ALTER"
format, because there is no data file attached.
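The affected flow, roughly (assuming a table t with an instantly
added column):

-- on the source server:
ALTER TABLE t ADD COLUMN c INT, ALGORITHM=INSTANT;
FLUSH TABLES t FOR EXPORT;
-- copy t.ibd and t.cfg to the destination server, then:
UNLOCK TABLES;
-- on the destination server:
ALTER TABLE t DISCARD TABLESPACE;
ALTER TABLE t IMPORT TABLESPACE;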
The code was rewritten in the same way as the code of
ha_partition::multi_range_read_info_const() had been rewritten
earlier.
The fix allowed the spider.partition_mrr test to run.
Find the indexes of a table whose parts participate in a given
constraint. Such indexes are called constraint correlated.
New methods were added: TABLE::find_constraint_correlated_indexes()
and the virtual method check_index_dependence().
For each index, its own constraint correlated index map is built, in
which all indexes that are constraint correlated with it are marked,
as shown in the sketch below.
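For example, in this sketch

CREATE TABLE t (
  a INT, b INT, c INT,
  KEY k1 (a), KEY k2 (b, c),
  CHECK (a < b)
);

the indexes k1 and k2 are constraint correlated, because a part of
each participates in the CHECK constraint.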
The results of this task are used for MDEV-16188 (Use in-memory
PK filters built from range index scans).
This is only a placeholder that allows an implementation later
during the development of MariaDB, so that downgrade to an earlier
version (with this code) will be possible.
We want to be able to zero out freed pages to reduce write amplification,
and to scrub old data. Zeroing out the pages is optional, not mandatory
for correctness. After all, the MLOG_INIT_FREE_PAGE record can only be
emitted for pages that are marked free in the allocation bitmap page.
Use ibuf_bitmap_page_init() only during recovery.
fsp_fill_free_list(): Initialize the FIL_PAGE_TYPE using MLOG_2BYTES.
The page contents will already have been zeroed out by
MLOG_INIT_FILE_PAGE2.
ibuf_bitmap_init_apply(): Replaces ibuf_parse_bitmap_init().
* Donor node will now provide the binlog-index argument to the wsrep_sst_rsync script if binlog is used.
* Write the correct path and binlog file names into the joiner's binlog-index file.
During parallel execution the mariabackup script can fail when trying to create an archive because of a non-unique name. Extend the archive timestamp with nanoseconds.