The ALTER-related code cannot do both of the following at the same time:
- modify partitions
- change column data types
Explicitly changing a column data type together with a partition change is
prohibited by the parser, so this is not allowed and returns a syntax error:
ALTER TABLE t MODIFY ts BIGINT, DROP PARTITION p1;
This fix additionally disables implicit data type upgrade
(e.g. from "MariaDB 5.3 TIME" to "MySQL 5.6 TIME", or the other way
around according to the current mysql56_temporal_format) in case of
an ALTER modifying partitions, e.g.:
ALTER TABLE t DROP PARTITION p1;
Such commands now perform only the partition change, while
the data types stay unchanged.
One can additionally run:
ALTER TABLE t FORCE;
either before or after the ALTER modifying partitions to
upgrade data types according to mysql56_temporal_format.
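A minimal sketch of the resulting workflow (the table, column and partition names are illustrative):
-- assuming t.ts was created while mysql56_temporal_format was OFF (MariaDB 5.3 TIME format)
SET GLOBAL mysql56_temporal_format=ON;
ALTER TABLE t DROP PARTITION p1;  -- only the partition is dropped, ts keeps its old format
ALTER TABLE t FORCE;              -- ts is now upgraded to the MySQL 5.6 TIME format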
10.5 part: test cases and comments.
The code is in the merge commit 74fe1c44aa
When, for example, a table is partitioned by HASH(a) and we rename column `a' to
`b', the partitioning filter stays unchanged: HASH(a). That is the wrong
behavior.
The patch updates the partitioning filter in accordance with the new column
names. That includes the partition/subpartition expression and the
partition/subpartition field list.
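A minimal sketch of the fixed behavior (names are illustrative):
CREATE TABLE t1 (a INT) PARTITION BY HASH(a) PARTITIONS 4;
ALTER TABLE t1 CHANGE a b INT;
-- before the patch SHOW CREATE TABLE t1 still showed PARTITION BY HASH (`a`);
-- now the partitioning expression follows the rename: PARTITION BY HASH (`b`)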
1. Store assignment failures on incompatible data types now raise errors if:
- STRICT_ALL_TABLES or STRICT_TRANS_TABLES sql_mode is used, and
- IGNORE is not used
Otherwise, only a warning is raised and the statement continues.
2. Changing the error/warning text as follows:
-ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
+ERROR HY000: Cannot cast 'int' as 'inet6' in assignment of `db`.`t`.`col`
so in case of a big table it's easier to see which column has the problem.
The new error text is also applied to SP variables.
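A hedged sketch of the resulting behavior (table and column names are illustrative):
CREATE TABLE t (col INET6, i INT);
SET sql_mode='STRICT_ALL_TABLES';
UPDATE t SET col=i;         -- error along the lines of: Cannot cast 'int' as 'inet6' in assignment of `test`.`t`.`col`
SET sql_mode='';
UPDATE t SET col=i;         -- only a warning is raised and the statement continues
UPDATE IGNORE t SET col=i;  -- IGNORE also downgrades the failure to a warning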
This is not the SPIDER issue - it happens with INSERT DELAYED.
Field::make_new_field() doesn't copy the LONG_UNIQUE_HASH_FIELD
flag to the new field, even though Delayed_insert::get_local_table()
copies the field->vcol_info for this field. As a result
parse_vcol_defs() doesn't create the expression for that column,
so field->vcol_info->expr is NULL, which leads to a crash.
Backported the fix for this from 10.5 - the flag is now added in
Delayed_insert::get_local_table().
Another problem with the USING HASH key is that
parse_vcol_defs() modifies the table->keys content. Then the same
parse_vcol_defs() is called on the table copy that has the keys already
modified. Backported the fix for that from 10.5 - key copying added
to Delayed_insert::get_local_table().
Finally, the created copy has to clear the expr_arena, as
this table is not in the thd->open_tables list and so won't be
cleared automatically.
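A minimal reproducer sketch (engine and values are illustrative; INSERT DELAYED only works with some engines, e.g. MyISAM):
CREATE TABLE t1 (a TEXT, UNIQUE KEY(a) USING HASH) ENGINE=MyISAM;
INSERT DELAYED INTO t1 VALUES (REPEAT('x',1000));
-- the delayed-insert copy of the table used to lose the LONG_UNIQUE_HASH_FIELD flag,
-- leaving field->vcol_info->expr NULL and crashing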
Now INSERT, UPDATE, ALTER statements involving incompatible data type pairs, e.g.:
UPDATE t1 SET col_inet6=col_int;
INSERT INTO t1 (col_inet6) SELECT col_int FROM t2;
ALTER TABLE t1 MODIFY col_inet6 INT;
consistently return an error at statement preparation time:
ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
and abort the statement before starting to iterate over rows.
This error is the same as the one raised for queries like:
SELECT col_inet6 FROM t1 UNION SELECT col_int FROM t2;
SELECT COALESCE(col_inet6, col_int) FROM t1;
Before this change the error was caught only at execution time,
when a Field_xxx::store_xxx() was called for the very first row.
The behavior was not consistent between various statements and could do different things:
- abort the statement
- set a column to the data type default value (e.g. '::' for INET6)
- set a column to NULL
A typical old error was:
ERROR 22007: Incorrect inet6 value: '1' for column `test`.`t1`.`a` at row 1
EXCEPTION:
Note, there is an exception: a multi-row INSERT..VALUES, e.g.:
INSERT INTO t1 (col_a,col_b) VALUES (a1,b1),(a2,b2);
checks assignment compatibility at preparation time for the very first row only:
(col_a,col_b) vs (a1,b1)
Other rows are still checked at execution time and return the old warnings
or errors in case of a failure. This is done because catching all rows at
preparation time would change the behavior significantly, so later rows still work
according to the STRICT_XXX_TABLES sql_mode flags and the table's transactional ability.
It is too late to change this behavior in 10.7.
There is no firm decision yet on whether the multi-row INSERT..VALUES
behavior will change in later versions.
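A hedged illustration of the exception (INET6 is just one example of an incompatible pair):
CREATE TABLE t1 (a INET6);
INSERT INTO t1 (a) VALUES ('::1'),(1);
-- the first row ('::1') is checked against the column at preparation time;
-- the incompatible int in the second row is only caught at execution time,
-- as an error or a warning depending on the STRICT_XXX_TABLES flags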
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains must be
allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
When a table is reopened from the cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) which meets
these requirements:
1. As noted above, the expr update must be done on expr_arena as new
items may be created. This was a bug in fix_session_expr_for_read() that
was just not reproduced because there was no second refix. Now the refix is
done in more cases, so it does reproduce. Tests affected: vcol.binlog
2. Also, the name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and must not make the expr update fail.
sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like
Reviewed by: Sergei Golubchik <serg@mariadb.org>
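A hedged sketch of the kind of table affected (the expression and names are illustrative):
CREATE TABLE t1 (d DATETIME,
  s VARCHAR(64) AS (IF(d IS NULL, 'none', DATE_FORMAT(d, '%W'))) VIRTUAL);
INSERT INTO t1 (d) VALUES (NOW());
FLUSH TABLES;      -- a later statement reopens t1, so the stale vcol expression must be refixed
SELECT s FROM t1;  -- the refix happens inside a proper Vcol_expr_context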
1. Moved the fix_vcol_exprs() call to open_table():
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit from
fix_vcol_exprs() being called from there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
Hybrid functions (IF, COALESCE, etc) did not preserve the JSON property
from their arguments. The same problem was reproducible with single-row subselects.
The problem happened because the method Item::is_json_type() was inconsistently
implemented across the Item hierarchy. For example, Item_hybrid_func
and Item_singlerow_subselect did not override is_json_type().
Solution:
- Removing Item::is_json_type()
- Implementing specific JSON type handlers:
Type_handler_string_json
Type_handler_varchar_json
Type_handler_tiny_blob_json
Type_handler_blob_json
Type_handler_medium_blob_json
Type_handler_long_blob_json
- Reusing the existing data type infrastructure to pass JSON
type handlers across all item types, including classes Item_hybrid_func
and Item_singlerow_subselect. Note, these two classes themselves do not
need any changes!
- Extending the data type infrastructure so data types can inherit
their properties (e.g. aggregation rules) from their base data types.
E.g. VARCHAR/JSON acts as VARCHAR, LONGTEXT/JSON acts as LONGTEXT
when mixed to a non-JSON data type. This is done by:
- adding virtual method Type_handler::type_handler_base()
- adding a helper class Type_handler_pair
- refactoring Type_handler_hybrid_field_type methods
aggregate_for_result(), aggregate_for_min_max(),
aggregate_for_num_op() to use Type_handler_pair.
This change also fixes:
MDEV-27361 Hybrid functions with JSON arguments do not send format metadata
Also, adding mtr tests for JSON replication: it was not covered yet,
and the current patch changes the replication code slightly.
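A hedged sketch of the user-visible effect (names are illustrative):
CREATE TABLE t1 (j JSON);
SELECT IF(1, j, j) FROM t1;               -- hybrid function over JSON arguments
SELECT (SELECT j FROM t1 LIMIT 1) AS sq;  -- single-row subselect
-- both results should now keep the JSON property of the argument,
-- e.g. so that clients receive the JSON format metadata (MDEV-27361)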
The range optimizer was incorrectly used for ENUM columns
when the operation collation did not match the column collation.
Adding a virtual implementation of Field_enum::can_optimize_range()
which tests whether the column and the operation collations match.
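A hedged sketch of the affected case (names and collations are illustrative):
CREATE TABLE t1 (e ENUM('a','b') COLLATE latin1_swedish_ci, KEY(e));
SELECT * FROM t1 WHERE e = 'a' COLLATE latin1_german2_ci;
-- the operation collation differs from the column collation,
-- so Field_enum::can_optimize_range() now disallows range access on the key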
A long UNIQUE HASH index silently creates a virtual column index, which should
be impossible when the base columns include AUTO_INCREMENT.
Fix: add the relevant check; add a new vcol type for a prettier error message.
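A hedged sketch of a definition that is now rejected (names are illustrative):
CREATE TABLE t1 (a BIGINT AUTO_INCREMENT, b TEXT,
  PRIMARY KEY (a),
  UNIQUE KEY (a,b) USING HASH);
-- the long unique key is backed by a hidden virtual hash column over (a,b),
-- and a virtual column must not depend on an AUTO_INCREMENT base column,
-- so this now fails with an error instead of silently creating the index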
ha_partition stores records in the m_ordered_rec_buffer array and uses
it for the priority queue in ordered index scans. When a record is restored
from the array the blob buffers may already be freed or rewritten.
The solution is to take temporary ownership of the cached blob buffers via
String::swap(). When the record is restored from m_ordered_rec_buffer
the ownership is returned to the table fields.
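A hedged sketch of a query that exercises this path (names are illustrative):
CREATE TABLE t1 (a INT, b BLOB, KEY(a)) PARTITION BY HASH(a) PARTITIONS 4;
SELECT a, b FROM t1 FORCE INDEX(a) ORDER BY a;
-- the ordered index scan merges rows from all partitions through the priority queue,
-- so the cached blob buffer must stay valid until the row is handed back to the table fields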
Cleanups:
init_record_priority_queue(): removed a needless !m_ordered_rec_buffer
check as there is the same assertion a few lines before.
dbug_print_row() for arbitrary row pointer
also avoid an oxymoron of using `MYSQL_PLUGIN_IMPORT` under
`#ifdef MYSQL_SERVER`, and empty_clex_str is so trivial that a plugin
can define it if needed.
- Changed order of class fields to remove dead alignment space.
- Changed bool fields in Item to bit fields.
- Used packed enums for some fields in common classes
- Removed not used Item::rsize.
- Changed some class variables from uint/int to smaller integer types.
- Ensured that field_index is uint16 in all classes and functions. Also fixed
that we properly compare with NO_CACHED_FIELD_INDEX when checking
whether the variable is not set.
- Removed checking of highest bit of unireg_check (has not been used in
a long time)
- Fixed a wrong argument to make_cond_for_table(): join_tab_idx_arg was
passed as false instead of 0.
One of the results was reducing the size of class Item by ~24 bytes.
aspects of decimals and integers
For fields and Items uint8 should be good enough. After
discussions with Alexander Barkov we chose uint16 (for now),
as some format functions may accept 256+ digits.
The reason for this patch was to make the usage and storage of decimal
digits similar. Before this patch decimals were stored/used as uint8,
int and uint. The lengths of numbers were also using a lot of
different types.
Changed most decimal variables and functions to use the new typedef.
Use decimal_digits_t for all aspects of digits (total, precision
and scale), both for decimals and integers.
table->move_fields wasn't undone in case of error.
1. move_fields is now unconditionally undone even when an error occurs
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5