otherwise we'd need to store sql_mode *per vcol*
(consider CREATE INDEX ...), and how would SHOW CREATE TABLE
support that?
Additionally, get rid of vcol::expr_str, just to make sure
the string is always generated and never leaked in the
original form.
* remove old 5.2+ InnoDB support for virtual columns
* enable corresponding parts of the innodb-5.7 sources
* copy corresponding test cases from 5.7
* copy detailed Alter_inplace_info::HA_ALTER_FLAGS flags from 5.7
- and more detailed detection of changes in fill_alter_inplace_info()
* more "innodb compatibility hooks" in sql_class.cc to
- create/destroy/reset a THD (used by background purge threads)
- find a prelocked table by name
- open a table (from a background purge thread)
* different from 5.7:
- new service thread "thd_destructor_proxy" to make sure all THDs are
destroyed at the correct point in time during the server shutdown
- proper opening/closing of tables for vcol evaluations in
+ FK checks (use already opened prelocked tables)
+ purge threads (open the table, MDLock it, add it to tdc, close
when not needed)
- cache open tables in vc_templ
- avoid unnecessary allocations, reuse table->record[0] and table->s->default_values
- not needed in 5.7, because it overcalculates:
+ tell the server to calculate vcols for an on-going inline ADD INDEX
+ calculate vcols for correct error messages
* update other engines (mroonga/tokudb) accordingly
When updating a table with virtual BLOB columns, the following might happen:
- an old record is read from the table; it has no virtual blob values
- update_virtual_fields() is run, the vcol blob gets its value into the
  record. But only a pointer to the value is in table->record[0];
  the value itself is in the Field_blob::value String (though it doesn't
  have to be! it can be in the record, if the column is just a copy of
  another column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table, record[1]): the old record is now in record[1]
- fill_record() prepares new values in record[0]; the vcol blob is updated,
  and the new value replaces the old one in Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To resolve this we unlink vcol blobs in record[1] from the pointer to
the data. Because the value is not *always* in the Field_blob::value
String, we need to remember which blobs were unlinked. The orphan
memory must be freed manually.
To complicate the matter, ha_update_row() is also used in
multi-update, in REPLACE, in INSERT ... ON DUPLICATE KEY UPDATE,
and also in REPLACE ... SELECT, REPLACE DELAYED, LOAD DATA REPLACE, etc.
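A minimal sketch of the simplest case above, following the inline
example (table and column names are illustrative, not from the patch):
  CREATE TABLE t1 (b VARCHAR(100), c BLOB AS (b) VIRTUAL) ENGINE=InnoDB;
  INSERT INTO t1 (b) VALUES ('old value');
  -- reading the row evaluates the vcol blob (pointer in record[0], value
  -- in Field_blob::value); the update then replaces that value
  UPDATE t1 SET b = 'new value' WHERE c IS NOT NULL;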
multi-update was setting up read_set/vcol_set in
multi_update::initialize_tables(), which is invoked after
the optimizer (JOIN::optimize_inner()). But some rows - if they're from
const tables - will already have been read in the optimizer, and these rows
will not have all the necessary column/vcol values.
* multi_update::initialize_tables() uses results from the optimizer
and cannot be moved to be called earlier.
* multi_update::prepare() is called before the optimizer, but
it cannot set up read_set/vcol_set, because the optimizer
might reset them (see SELECT_LEX::update_used_tables()).
As a fix I've added a new method, select_result::prepare_to_read_rows();
it's called from inside the optimizer just before make_join_statistics().
This fixes a bug where handler::read_range_first (for example)
needed to compare vcol values that were not calculated yet.
As a bonus it fixes a few cases where vcols were calculated twice.
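Roughly, the affected case is a multi-table UPDATE where one table is a
const table; a hedged sketch (tables and columns are made up):
  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, v INT AS (a+1) VIRTUAL, KEY(v)) ENGINE=InnoDB;
  CREATE TABLE t2 (pk INT PRIMARY KEY, b INT) ENGINE=InnoDB;
  INSERT INTO t2 VALUES (1, 10);
  -- t2 is a const table: the optimizer reads its row before
  -- multi_update::initialize_tables() runs
  UPDATE t1, t2 SET t1.a = t2.b WHERE t2.pk = 1 AND t1.v > 5;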
We cannot use TABLE::merge_keys for that, because Field::part_of_key
was designed to mark fields for KEY_READ, so a field is not
"part of a key" if only a prefix of the field is.
* don't issue an error for ER_KEY_BASED_ON_GENERATED_VIRTUAL_COLUMN
* support keyread on vcols
* callback into the server to compute vcol values from mi_check/mi_repair
* DMLs just work. Automatically.
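For example (a hedged sketch; the table is illustrative and uses MyISAM
to match the mi_check/mi_repair context above), an index on a virtual
column can now serve a covering read:
  CREATE TABLE t1 (a INT, v INT AS (a*2) VIRTUAL, KEY k(v)) ENGINE=MyISAM;
  INSERT INTO t1 (a) VALUES (1),(2),(3);
  -- the SELECT can be satisfied from index k alone (keyread on the vcol)
  EXPLAIN SELECT v FROM t1 WHERE v > 2;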
crashes server
This bug is the result of merging the Oracle MySQL follow-up fix
BUG#22963169 MYSQL CRASHES ON CREATE FULLTEXT INDEX
without merging the base bug fix:
Bug#79475 Insert a token of 84 4-bytes chars into fts index causes
server crash.
Unlike the above-mentioned fixes in MySQL, our fix will not change
the storage format of fulltext indexes in InnoDB or XtraDB
when a character encoding with mbmaxlen=2 or mbmaxlen=3 is used
and the length of a word is between 128 and 84*mbmaxlen bytes.
The Oracle fix would allocate 2 length bytes for these cases.
Compatibility with other MySQL and MariaDB releases is ensured by
persisting the used maximum length in the SYS_COLUMNS table in the
InnoDB data dictionary.
This fix also removes some unnecessary strcmp() calls when checking
for the legacy default collation my_charset_latin1
(my_charset_latin1.name=="latin1_swedish_ci").
fts_create_one_index_table(): Store the actual length in bytes.
This metadata will be written to the SYS_COLUMNS table.
fts_zip_initialize(): Initialize only the first byte of the buffer.
Actually the code should not even care about this first byte, because
the length is set to 0.
FTX_MAX_WORD_LEN: Define as HA_FT_MAXCHARLEN * 4 aka 336 bytes,
not as 254 bytes.
row_merge_create_fts_sort_index(): Set the actual maximum length of the
column in bytes, similar to fts_create_one_index_table().
row_merge_fts_doc_tokenize(): Remove the redundant parameter word_dtype.
Use the actual maximum length of the column. Calculate the extra_size
in the same way as row_merge_buf_encode() does.
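For reference, a hedged sketch of the kind of statement from Bug#79475
(inserting a token of 84 4-byte characters into a fulltext-indexed
utf8mb4 column; the table and column names are illustrative):
  CREATE TABLE t1 (f TEXT CHARACTER SET utf8mb4, FULLTEXT KEY (f)) ENGINE=InnoDB;
  -- 84 repetitions of a 4-byte character, i.e. HA_FT_MAXCHARLEN chars / 336 bytes
  INSERT INTO t1 VALUES (REPEAT(CONVERT(0xF09F98B1 USING utf8mb4), 84));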
Since 10.2 (2e814d4702) this test
has changed, and the race condition of MDEV-10651 no longer forms part
of this test.
As such, re-enable this test.
Also include save/restore of the default values of this variable.
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
trx_state_eq(): Add the parameter bool relaxed=false, to
allow trx->state==TRX_STATE_NOT_STARTED where a different
state is expected, if an error has been reported.
trx_release_savepoint_for_mysql(): Pass relaxed=true to
trx_state_eq(). That is, allow the transaction to be idle
when ROLLBACK TO SAVEPOINT is attempted after an error
has been reported to the client.
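A hedged sketch of the statement sequence this allows (t1 and the
failing statement are illustrative):
  START TRANSACTION;
  SAVEPOINT sp1;
  INSERT INTO t1 VALUES (1);   -- assume this fails, e.g. with a duplicate key error
  -- after the error the InnoDB transaction may be in TRX_STATE_NOT_STARTED;
  -- ROLLBACK TO SAVEPOINT must still be accepted
  ROLLBACK TO SAVEPOINT sp1;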
MySQL 5.7 introduced WL#7943: InnoDB: Implement Information_Schema.Files
to provide a long-term alternative for accessing tablespace metadata.
The INFORMATION_SCHEMA.INNODB_* views are considered internal interfaces
that are subject to change or removal between releases. So, users should
refer to I_S.FILES instead of I_S.INNODB_SYS_TABLESPACES to fetch metadata
about CREATE TABLESPACE.
Because MariaDB 10.2 does not support CREATE TABLESPACE or
CREATE TABLE…TABLESPACE for InnoDB, it does not make sense to support
I_S.FILES either. So, let MariaDB 10.2 omit the code that was added in
MySQL 5.7. After this change, I_S.FILES will report an empty result,
unless some other storage engine in MariaDB 10.2 implements the interface.
(The I_S.FILES interface was originally created for the NDB Cluster.)
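That is, on a MariaDB 10.2 server where no other engine implements this
interface, a query such as the following is expected to return no rows:
  SELECT FILE_NAME, TABLESPACE_NAME, ENGINE FROM INFORMATION_SCHEMA.FILES;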
This patch adds DEFAULT as a possible dynamic SQL parameter, e.g.:
EXECUTE IMMEDIATE 'INSERT INTO t1 (column) VALUES(?)' USING DEFAULT;
EXECUTE IMMEDIATE 'UPDATE t1 SET column=?' USING DEFAULT;
and for similar PREPARE..EXECUTE queries.
This is done for symmetry with the STMT_INDICATOR_DEFAULT indicator in
the client-server PS protocol.
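The PREPARE..EXECUTE form mentioned above looks roughly like this
(t1 and its columns are illustrative):
  PREPARE stmt FROM 'INSERT INTO t1 (a, b) VALUES (?, ?)';
  SET @val = 10;
  EXECUTE stmt USING DEFAULT, @val;   -- column a gets its DEFAULT value
  DEALLOCATE PREPARE stmt;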
The changes include:
- Allowing DEFAULT as a possible option in the EXECUTE ... USING clause (sql_yacc.yy)
- Adding "virtual bool Item::save_in_param(THD *thd, Item_param *param)",
  because "normal" items (that have real values) and Item_default_value
  now need different actions when assigning themselves as an Item_param value.
- Fixing switch() statements in a few Item_param methods not to have a "default"
  branch, because it was easy to forget to add a new "case" when adding a new
  XXX_VALUE value to the enum Item_param::enum_item_param_state.
This is important, as we'll be adding new values soon, e.g. for MDEV-11359.
Removing "default" helped to find and report bugs MDEV-11361 and MDEV-11362,
because DECIMAL_VALUE is obviously not properly handled in some cases.
The problem was that the test assumes locks are granted on a
first-come-first-served (FCFS) basis. However, in 10.2 we use the
Variance-Aware Transaction Scheduling (VATS) algorithm by default.
The test failure is fixed by setting the lock scheduling policy to FCFS.
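Assuming the 10.2 variable controlling this is innodb_lock_schedule_algorithm
(my assumption), the test can pin the policy with a server/.opt option such as:
  --innodb-lock-schedule-algorithm=fcfs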
During total-order replication, Query_log_event's checksum_alg
should be explicitly set to the current binlog_checksum as it
is not set for DML queries.
Note: wsrep_replicate_myisam enables replication of DMLs on
MyISAM tables using TOI.
The .rdiff applied ok locally with my copy of patch, but failed with
"misordered hunks" on a test host. Maybe that host has a more strict
version of `patch`.
for InnoDB tables"
Don't use thr_lock.c locks for InnoDB tables. Below is list of changes that
were needed to implement this:
- HANDLER OPEN acquires MDL_SHARED_READ instead of MDL_SHARED
- HANDLER READ calls external_lock() even if the SE is not going to be locked by
  THR_LOCK
- InnoDB lock wait timeouts are now honored; they are much shorter by default
  than server lock wait timeouts (50 seconds vs. 1 year)
- with @@autocommit=1, LOCK TABLES disables autocommit implicitly, though the
  user still sees @@autocommit=1
- the above starts an implicit transaction (see the sketch after this list)
- transactions started by LOCK TABLES are now rolled back on disconnect
  (previously everything was committed due to autocommit)
- transactions started by LOCK TABLES are now rolled back by ROLLBACK
  (previously everything was committed due to autocommit)
- it is now impossible to change BINLOG_FORMAT under LOCK TABLES (at least
  to STATEMENT) due to the running transaction
- LOCK TABLES WRITE is additionally handled by MDL
- ...in contrast LOCK TABLES READ protection against DML is pure InnoDB
- combining transactional and non-transactional tables under LOCK TABLES
may cause rolled back changes in transactional table and "committed"
changes in non-transactional table
- the user may disable innodb_table_locks, which will basically make
  LOCK TABLES a no-op
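A hedged sketch of the autocommit/rollback items above (t1 is an
arbitrary InnoDB table):
  SET autocommit = 1;
  LOCK TABLES t1 WRITE;        -- implicitly disables autocommit and starts a transaction
  INSERT INTO t1 VALUES (1);
  ROLLBACK;                    -- now rolls back the INSERT (previously it was auto-committed)
  UNLOCK TABLES;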
Removed tests for BUG#45143 and BUG#55930, which cover InnoDB + THR_LOCK. To
operate properly these tests require the code flow to go through THR_LOCK debug
sync points, which is not the case after this patch. These tests are removed
by WL#6671 as well. An alternative is to port them to a different storage engine.