The problem is that if the table definition cache (TDC) is full of real
tables that are also in the table cache, a view definition cannot stay
in the TDC and will be evicted by its own underlying tables.
In the situation above, the old mechanism for detecting whether the
definition in the PS matches the current version always required a
reprepare and so prevented the PS from executing.
One workaround is to increase the TDC size; another is to improve the
version check for views/triggers (which is done here). Now, in suspicious
cases, we check:
- the timestamp (microseconds) of the view, to be sure that the version
really has changed;
- the creation time (microseconds) of a trigger relative to the time
(microseconds) of statement preparation.
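As a hedged illustration, the first workaround amounts to nothing more
than the following (the value is an arbitrary example):
SET GLOBAL table_definition_cache = 4096;  -- make room so view definitions are not evicted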
In commit 28325b0863
a compile-time option was introduced to disable the macros
DBUG_ENTER and DBUG_RETURN or DBUG_VOID_RETURN.
The parameter name WITH_DBUG_TRACE would hint that it also
covers DBUG_PRINT statements. Let us do that: WITH_DBUG_TRACE=OFF
shall disable DBUG_PRINT() as well.
A few InnoDB recovery tests used to check that some output from
DBUG_PRINT("ib_log", ...) is present. We can live without those checks.
Reviewed by: Vladislav Vaintroub
Related to MDEV-24176.
1. vcol_fix_expr() generates new tree changes:
Type_std_attributes::agg_item_set_converter() does change_item_tree().
The changes are allocated on expr_arena (via Vcol_expr_context as per
MDEV-24176).
2. vcol_cleanup_expr() doesn't remove these changes (can be a bug of
Type_std_attributes or per design).
3. Atomic CREATE OR REPLACE renames the old table to a backup
(finalize_atomic_replace()). It does that via
rename_table_and_triggers(), which closes the table share and releases
the expr_arena root. Hence we now have Item corpses in thd->change_list.
4. The PS cleanup phase tries to roll back thd->change_list and accesses
the already freed Item corpses.
The fix saves and restores change_list on
vcol_fix_expr()/vcol_cleanup_expr().
trx_undo_page_report_rename(): Use the correct maximum length of
a table name. Both the database name and the table name can be up to
NAME_CHAR_LEN (64 characters) times 5 bytes per character in the
my_charset_filename encoding. They are not encoded in UTF-8!
fil_op_write_log(): Reserve the correct amount of log buffer for
a rename operation. The file name will be appended by
mlog_catenate_string().
rename_file_ext(): Reserve a large enough buffer for the file names.
An empty identifier specified as `` ends up with a NULL LEX_CSTRING::str
in the lexer. This is not considered correct in upper layers, for example
in Compare_identifiers::operator().
An empty column name is normally rejected by a check_column_name() call
during parsing, and the rules for a period name match those for a column
name completely. Hence, this fix uses the same call to verify period
names, too.
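A minimal sketch of the now-rejected pattern (the exact error is assumed
to be the same one raised for an empty column name):
CREATE TABLE t1 (s DATE, e DATE, PERIOD FOR `` (s, e));
-- rejected by the same check_column_name() verification as an empty column name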
1. Store assignment failures on incompatible data types now raise errors if:
- STRICT_ALL_TABLES or STRICT_TRANS_TABLES sql_mode is used, and
- IGNORE is not used
Otherwise, only a warning is raised and the statement continues.
2. Changing the error/warning text as follows:
-ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
+ERROR HY000: Cannot cast 'int' as 'inet6' in assignment of `db`.`t`.`col`
so that for a table with many columns it is easier to see which column has the problem.
The new error text is also applied to SP variables.
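A short behavior sketch, assuming a hypothetical table t with columns
col (INET6) and i (INT):
SET sql_mode='STRICT_ALL_TABLES';
UPDATE t SET col=i;          -- error: Cannot cast 'int' as 'inet6' in assignment of `db`.`t`.`col`
UPDATE IGNORE t SET col=i;   -- warning only, the statement continues
SET sql_mode='';
UPDATE t SET col=i;          -- warning only, the statement continues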
Not a SPIDER issue - it happens with INSERT DELAYED.
Field::make_new_field() doesn't copy the LONG_UNIQUE_HASH_FIELD
flag to the new field, although Delayed_insert::get_local_table()
copies field->vcol_info for this field. As a result
parse_vcol_defs() doesn't create the expression for that column,
so field->vcol_info->expr is NULL, which leads to a crash.
Backported the fix for this from 10.5 - the flag is now added in
Delayed_insert::get_local_table().
Another problem with the USING HASH key is that
parse_vcol_defs() modifies the table->keys content, and then the same
parse_vcol_defs() is called on the table copy whose keys were already
modified. Backported the fix for that from 10.5 - key copying added
to Delayed_insert::get_local_table().
Finally, the created copy has to clear expr_arena, as
this table is not in the thd->open_tables list and so won't be
cleared automatically.
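A hedged repro sketch (MyISAM, since INSERT DELAYED needs a
non-transactional engine; the long unique comes from USING HASH on a blob):
CREATE TABLE t1 (a BLOB, UNIQUE KEY (a) USING HASH) ENGINE=MyISAM;
INSERT DELAYED INTO t1 VALUES ('x');  -- used to crash: vcol_info->expr was NULL in the table copy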
Now INSERT, UPDATE, ALTER statements involving incompatible data type pairs, e.g.:
UPDATE t1 SET col_inet6=col_int;
INSERT INTO t1 (col_inet6) SELECT col_int FROM t2;
ALTER TABLE t1 MODIFY col_inet6 INT;
consistently return an error at the statement preparation time:
ERROR HY000: Illegal parameter data types inet6 and int for operation 'SET'
and abort the statement before starting to iterate over rows.
This error is the same as the one raised for queries like:
SELECT col_inet6 FROM t1 UNION SELECT col_int FROM t2;
SELECT COALESCE(col_inet6, col_int) FROM t1;
Before this change the error was caught only at execution time,
when a Field_xxx::store_xxx() was called for the very first row.
The behavior was not consistent between various statements and could do different things:
- abort the statement
- set a column to the data type default value (e.g. '::' for INET6)
- set a column to NULL
A typical old error was:
ERROR 22007: Incorrect inet6 value: '1' for column `test`.`t1`.`a` at row 1
EXCEPTION:
Note, there is an exception: a multi-row INSERT..VALUES, e.g.:
INSERT INTO t1 (col_a,col_b) VALUES (a1,b1),(a2,b2);
checks assignment compatibility at the preparation time for the very first row only:
(col_a,col_b) vs (a1,b1)
Other rows are still checked at the execution time and return the old warnings
or errors in case of a failure. This is done because catching all rows at the
preparation time would change behavior significantly. So it still works
according to the STRICT_XXX_TABLES sql_mode flags and the transactional
ability of the table. It is too late to change this behavior in 10.7.
There is no firm decision yet on whether the multi-row INSERT..VALUES
behavior will change in later versions.
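A sketch of the exception, assuming a table t1 with an INET6 column a:
INSERT INTO t1 (a) VALUES (1);           -- fails at preparation time: the first row is checked
INSERT INTO t1 (a) VALUES ('::1'), (1);  -- the second row is checked only at execution time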
Also fixes
MDEV-27782 Wrong columns when using table level `CHARACTER SET utf8mb4 COLLATE DEFAULT`
MDEV-28644 Unexpected error on ALTER TABLE t1 CONVERT TO CHARACTER SET utf8mb3, DEFAULT CHARACTER SET utf8mb4
:: Syntax change ::
Keyword AUTO enables history partition auto-creation.
Examples:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
STARTS '2021-01-01 00:00:00' AUTO PARTITIONS 12;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;
Or with explicit partitions:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO
(PARTITION p0 HISTORY, PARTITION pn CURRENT);
To disable or enable auto-creation one can use ALTER TABLE, adding
or removing AUTO from the partitioning specification:
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# Disables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR;
# Enables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
If the rest of the partitioning specification is identical to that of
CREATE TABLE, no repartitioning will be done (for details see MDEV-27328).
:: Description ::
Before executing a history-generating DML command (see the list of commands
below), add N history partitions, so that N is sufficient for the
potentially generated history. N > 1 may be required when history
partitions are switched by INTERVAL and current_timestamp is N times
further than the interval boundary of the last history partition.
If the last history partition equals or exceeds LIMIT records, a new
history partition is created and selected as the working partition.
According to MDEV-28411, partitions cannot be switched (or created) while a
command is running. Thus LIMIT is not a strict limit, and the history
partition size must be planned as the LIMIT value plus the average number
of history rows one DML command can generate; for example, with LIMIT 1000
and commands generating about 50 history rows each, partitions of roughly
1050 rows should be expected.
Auto-creation is implemented by synchronous fast_alter_partition_table() call
from the thread of the executed DML command before the command itself is run
(by the fallback and retry mechanism similar to Discovery feature,
see Open_table_context).
The names for newly added partitions are generated like default partition
names, with the extension of MDEV-22155 (which avoids name clashes by
advancing the assignment counter to the next free-enough gap).
These DML commands can trigger auto-creation:
DELETE (including multitable DELETE, excluding DELETE HISTORY)
UPDATE (including multitable UPDATE)
REPLACE (including REPLACE .. SELECT)
INSERT .. ON DUPLICATE KEY UPDATE (including INSERT .. SELECT .. ODKU)
LOAD DATA .. REPLACE
:: Bug fixes ::
MDEV-23642 Locking timeout caused by auto-creation affects original DML
A failed auto-creation does not fail the original DML; it only raises a
warning. The reasons for this are:
- Do not disrupt the main business process (the history is an auxiliary
  service);
- The consequences are non-fatal (history is not lost, but goes into the
  wrong partition; fixed by a partitioning rebuild);
- There is more freedom for the application to fail in this case or not: it
  may read the warning info and find the corresponding error number.
- While a non-failing command is easy for an application to handle and
  fail, the opposite is hard to handle: there are no automatic actions to
  fix a failed command and retry, DBA intervention is required, and until
  then the application is non-functional.
MDEV-23639 Auto-create does not work under LOCK TABLES or inside triggers
Don't do tdc_remove_table() for OT_ADD_HISTORY_PARTITION because it is
not possible in locked tables mode.
LTM_LOCK_TABLES mode (and LTM_PRELOCKED_UNDER_LOCK_TABLES) works out
of the box as fast_alter_partition_table() can reopen tables via
locked_tables_list.
In LTM_PRELOCKED we reopen and relock table manually.
:: More fixes ::
* some_table_marked_for_reopen flag fix
some_table_marked_for_reopen affects only the reopen of
m_locked_tables, i.e. Locked_tables_list::reopen_tables() reopens only
tables from m_locked_tables.
* Unused can_recover_from_failed_open() condition
Can recover_from_failed_open() really be used after
open_and_process_routine()?
:: Reviewed by ::
Sergei Golubchik <serg@mariadb.org>
This bug could cause a crash of the server at the second call of a stored
procedure when it executed a query containing a mergeable derived table /
view whose specification used another mergeable derived_table or view and a
subquery with outer reference in the select list of the specification.
Such queries could cause the same problem when they were executed for the
second time in a prepared mode.
The problem appeared due to a typo in the legacy code of the function
create_view_field() that prevented building an Item_direct_view_ref wrapper
for the mentioned outer reference at the second execution of the query and
setting the depended_from field for the outer reference.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains must be
allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
When a table is reopened from the cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:
1. As noted above, the expr update must be done on expr_arena, as new
items may be created. This was a bug in fix_session_expr_for_read() that
simply was not reproduced because there was no second refix. Now refix is
done in more cases, so it does reproduce. Tests affected: vcol.binlog
2. Also name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean so that the expr update does not fail:
sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like
Reviewed by: Sergei Golubchik <serg@mariadb.org>
1. Moved the fix_vcol_exprs() call to open_table():
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit from
fix_vcol_exprs() there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
When fixing vcols, fix_fields might call convert_const_to_int().
And that will try to read the field value (from record[0]).
Mark the table as having no data to prevent that, because record[0]
is not initialized yet.
The bug was that the in_vector array in Item_func_in was allocated on the
statement arena, not on table->expr_arena.
Revert part of 5acd391e8b; instead, change the arena correctly
in fix_all_session_vcol_exprs().
Remove TABLE_ARENA, which was introduced in 5acd391e8b to force
item tree changes to be rolled back (because they were allocated in the
wrong arena and didn't persist; now they do).
records_are_comparable() requires this condition:
bitmap_is_subset(table->write_set, table->read_set)
On the first iteration vers_update_fields() changes the write_set and
read_set. On the second iteration the above condition fails.
Added the missing read bit for ROW_START. Also reorganized
bitmap_set_bit() so it is called only when needed.
Throw ER_NOT_FORM_FILE if this is wrong FRM data (a warning with
ER_VERS_FIELD_WRONG_TYPE is still printed for deeper knowledge of what
happened).
Keep ER_VERS_FIELD_WRONG_TYPE for creating partitioned table with
trx-versioning. Tested by MDEV-15951 in trx_id.test
TYPELIBs for ENUM/SET columns could erroneously undergo redundant
hex-unescaping at the table open time.
Fix:
- Prevent multiple unescaping of the same TYPELIB
- Prevent sharing TYPELIBs between columns with different mbminlen
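A hedged sketch of the kind of definition affected (hex-specified ENUM
values; the two charsets have different mbminlen, so the columns must not
share one TYPELIB):
CREATE TABLE t1 (
  a ENUM (0x61) CHARACTER SET latin1,  -- mbminlen 1
  b ENUM (0x61) CHARACTER SET ucs2     -- mbminlen 2
);
SHOW CREATE TABLE t1;  -- each column keeps its own, exactly-once-unescaped TYPELIB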
extra2_read_len resolved by keeping the implementation
in sql/table.cc and exposing it for use by ha_partition.cc.
Removed the identical implementation in unireg.h
(ref: bfed2c7d57)
whenever possible, partitioning should use the full
partition plugin name, not the one-byte legacy code.
Normally, ha_partition can get the engine plugin from
table_share->default_part_plugin.
But in some cases, e.g. in DROP TABLE, the table isn't
opened, table_share is NULL, and ha_partition has to parse
the frm, much like dd_frm_type() does.
temporary_tables.cc, sql_table.cc:
When dropping a table, it must be deleted in the engine
first, then the frm file, because the frm can be the only true
source of metadata that the engine might need for DROP.
table.cc:
when opening a partitioned table, if the engine for
the partitions is not found, do not fall back to MyISAM.
Make it possible to specify engine-defined attributes on partitions
as well as tables.
If an engine-defined attribute is only specified at the table level,
it applies to all the partitions in the table.
This is a backward-compatible behavior.
If the same attribute is specified both at the table level and the
partition level, the per-partition one takes precedence.
So, we can consider per-table attributes as default values.
One cannot specify engine-defined attributes on subpartitions.
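A hedged sketch of the precedence rule; PAGE_COMPRESSED is used here only
as an example of an InnoDB engine-defined attribute:
CREATE TABLE t1 (a INT) ENGINE=InnoDB PAGE_COMPRESSED=1  -- table-level default
PARTITION BY RANGE (a)
(PARTITION p0 VALUES LESS THAN (10) PAGE_COMPRESSED=0,   -- partition level takes precedence
 PARTITION p1 VALUES LESS THAN MAXVALUE);                -- inherits the table-level value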
Implementation details:
* We store per-partition attributes in the partition_element class
because we already have the part_comment field, which is for
per-partition comments.
* In the case of ALTER TABLE statements, the partition_elements in
table->part_info is set up by mysql_unpack_partition().
So, we parse per-partition attributes after the call of the function.
The update was skipped (need_update was false) because compare_record()
used the HA_PARTIAL_COLUMN_READ branch and skipped the row_start check,
since has_explicit_value() was false. When we set the bit for row_start in
has_value_set, the row is updated with the new row_start value.
The bug was caused by a combination of MDEV-23446 and 3789692d17. The
latter one says:
... But generated columns that are written to the table are always
deterministic and cannot change unless normal non-generated columns
were changed. ...
Since MDEV-23446, a generated row_start can change while non-generated
columns are not changed.
Explicit value flag came from HAS_EXPLICIT_DEFAULT which was used to
distinguish default-generated value from user-supplied one.
Re-execution of a query containing a subquery in the FROM clause results
in an assert failure if the query is run as part of a stored routine or
as a prepared statement AND derived table merge optimization is off.
As an example, the following test case
CREATE TABLE t1 (a INT) ;
CREATE PROCEDURE sp() SELECT * FROM (SELECT a FROM t1) tb;
CALL sp();
SET optimizer_switch='derived_merge=off';
CALL sp();
results in assert failure on the second invocation of the 'sp' stored routine.
The reason for the assertion failure is that the expression
derived->is_excluded()
returns the value true where the value false is expected.
The method is_excluded() returns the value true for a derived table
that has been merged into a parent select. Such a transformation happens as
part of the Derived Table Merge Optimization that is performed on the first
invocation of a stored routine or a prepared statement containing a query
with a subquery in the FROM clause of the main SELECT.
When the same routine or prepared statement is run the second time and
Derived Table Merge Optimization is OFF, the MariaDB server tries to
materialize the derived table specified by the subquery, which fails since
this subquery has already been merged into the top-most SELECT. This
transformation is permanent and can't be reverted. That is the reason why
the assert
DBUG_ASSERT(!derived->is_excluded());
fails inside the function TABLE_LIST::set_check_materialized().
Similar behaviour can be observed when a stored routine or prepared
statement containing a SELECT statement with a subquery in the FROM clause
is first run with the optimizer_switch option set to derived_merge=off and
re-run after this option has been switched to derived_merge=on. In this
case a derived table for the subquery is materialized on the first
execution and marked as a merged derived table on the second execution,
which results in an error with a misleading message:
MariaDB [test]> CALL sp1();
ERROR 1030 (HY000): Got error 1 "Operation not permitted" from storage engine MEMORY
To fix the issue, a derived table that has already been optimized shouldn't
be re-marked for one more round of optimization.
One significant consequence of the suggested change is that the data
member TABLE_LIST::derived_type is not updated once the table optimization
has been done. This fact should be taken into account when a Prepared
Statement is handled, since once a table listed in a query has been
optimized on execution of the PREPARE statement, it won't be touched
anymore on handling the statement EXECUTE.
One side effect caused by this change could be observed for the following
test case:
CREATE TABLE t1 (s1 INT);
CREATE VIEW v1 AS
SELECT s1,s2 FROM (SELECT s1 as s2 FROM t1 WHERE s1 <100) x, t1 WHERE t1.s1=x.s2;
INSERT INTO v1 (s1) VALUES (-300);
PREPARE stmt FROM "INSERT INTO v1 (s1) VALUES (-300)";
EXECUTE stmt;
Execution of the above EXECUTE statement results in issuing the error
ER_COLUMNACCESS_DENIED_ERROR, since table_ref->is_merged_derived() is false
and check_column_grant_in_table_ref() is called for a temporary table when
it shouldn't be. To fix this issue, the function find_field_in_tables() has
been modified in such a way that check_column_grant_in_table_ref()
is not called for a temporary table.
InnoDB fails to apply a buffered insert operation for the
'mysql/transaction_registry' table during system versioning DDL.
To avoid this, the DDL calls extra(HA_EXTRA_IGNORE_INSERT) to
inform InnoDB to apply the buffered insert operation.
A long UNIQUE HASH index silently creates a virtual column index, which
should be impossible for base columns featuring AUTO_INCREMENT.
Fix: add the relevant check; add a new vcol type for a prettier error
message.
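A sketch of a definition that is now rejected instead of silently creating
a virtual hash column over an AUTO_INCREMENT base column (the exact error
text is the new, prettier message mentioned above):
CREATE TABLE t1 (a INT AUTO_INCREMENT, UNIQUE KEY (a) USING HASH);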
In commit 1811fd51fb the assertion
should have said error_reported instead of !error_reported.
But that revised assertion would still fail in main.defaults,
where ER_BAD_DATA is reported during CREATE TABLE.
This is a duplicate of MDEV-18278 89936f11e9, but I will add an
additional assertion.
Description:
The frm corruption should not be reported during CREATE TABLE. Normally
it isn't, and the data to fill TABLE is taken by the open_table_from_share
call. However, the vcol data is stored as an SQL string in
table->s->vcol_defs.str and is parsed anew on each table open anyway.
This is impossible (or hard) to avoid, because it's hard to clone the
expression tree in general (it's easier to parse).
Normally parse_vcol_defs should only fail on semantic errors. If so,
error_reported is set to true. Any other failure is not expected during
table creation: there is either an unhandled/unacknowledged error, or
something went really wrong, like a failed memory allocation. All of this
should be asserted anyway.
Solution:
* Set *error_reported=true for the forward references check;
* Assert for every unacknowledged error during table creation.
There were two independent problems which led to the crash
and to non-relevant records being returned in I_S queries:
- The code in the I_S implementation did not safely handle
values with 0x00 bytes.
It's fixed by using check_db_name() and check_table_name()
inside make_table_name_list(), and by adding the test for
0x00 inside check_table_name().
- The code in Item_string::print() did not convert
strings without introducers when restoring
the CREATE VIEW statement from an Item tree.
This produced wrong literals inside the "query" line in the view FRM file
in cases when, at VIEW parse time,
character_set_client != character_set_connection.
That's fixed by adding a proper conversion.
This change also fixed a similar problem in SHOW PROCEDURE CODE:
the literals were displayed in the wrong character set in SP instructions
in cases when, at SP parse time,
character_set_client != character_set_connection.
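A hedged sketch of the affected situation (the literal has to be converted,
not copied byte-for-byte, into the stored definition):
SET character_set_client = latin1;
SET character_set_connection = utf8;
CREATE VIEW v1 AS SELECT 'ü' AS s;  -- parsed as latin1 bytes, converted to utf8
SHOW CREATE VIEW v1;                -- the stored literal now prints in the correct character set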
Of about a hundred users of MY_BITMAP, only two were using its
built-in mutex, and only one of those two actually needed it.
Remove the mutex from MY_BITMAP, remove all associated conditions
and checks in bitmap functions. Use an external LOCK_temp_pool
mutex and temp_pool_set_next()/temp_pool_clear_bit() accessors.
Remove bitmap_init()/bitmap_free(); always use the my_* versions.
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes a problem has surfaced:
since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: copy vcol_info from the table field when it's set up.
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes a problem has surfaced:
since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: initialize vcols before key initialization.
Also key initialization is moved to a function.
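A hedged repro sketch (names follow the KEY (c(10),a) example above; the
vcol expression is arbitrary):
CREATE TABLE t1 (
  a INT,
  c VARCHAR(30) AS (REPEAT('x', a)) VIRTUAL,
  KEY (c(10), a)
) PARTITION BY HASH (a) PARTITIONS 2;
INSERT INTO t1 (a) VALUES (1), (2);
SELECT a FROM t1 WHERE c = 'x';  -- used to crash in Field::register_field_in_read_map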
Reformulate mark_columns_used_by_index* function family in a more laconic
way:
mark_columns_used_by_index -> mark_index_columns
mark_columns_used_by_index_for_read_no_reset -> mark_index_columns_for_read
mark_columns_used_by_index_no_reset -> mark_index_columns_no_reset
static mark_index_columns -> do_mark_index_columns
Several different test cases were failing for the same reason: the
fields in a vcol expression were not marked while marking the columns of a
key containing a virtual column for read.
Fix: make marking the columns of a key for read a special case where
register_field_in_read_map() is done instead of a plain bitmap_set_bit().
Some test cases are only reproducible in 10.4+, but the fix is applicable
to 10.2+.
This is the 10.2+ part of a Jira task.
The following bugs regarding virtual column marking have been fixed:
1. UPDATE of a partitioned table, where the optimizer has chosen a
secondary index to make a filesort;
2. INSERT into a table with a non-blob field generated from a blob, with
binlog enabled and binlog_row_image=noblob;
3. DELETE from a view on a table with a virtual column.
Generally the assertion happens from the update_virtual_fields() call.
These bugs are root-caused by missing field marking for the dependent
fields of a virtual column.
Therefore the fix is: mark all the fields involved in the vcol expression
by calling field->register_field_in_read_map() instead of just setting a
single bit.
Case 3 was reproducible only on 10.4+; however, the problem might have just
been invisible in the earlier versions. The fix is applicable to 10.2-10.3
as well.
We cannot revert the ALTER, so anything happening after
the point of no return should not be treated as an error. A
very unfortunate condition that a user needs to be warned about - yes,
but we cannot say "ALTER TABLE has failed" if the table was successfully
altered.
This replaces 8711adb786
If a temptable field is created for some JSON expression (is_json_type()
returns true), make this temptable field a proper JSON field.
A field is a JSON field (see Item_field::is_json_type()) if it
has a CHECK constraint of JSON_VALID(field).
Note that the check will never actually be evaluated for temptable fields,
so it won't cause a run-time slowdown.
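A sketch of where this matters, assuming a materialized derived table (in
MariaDB, JSON is LONGTEXT plus a JSON_VALID CHECK):
CREATE TABLE t1 (j JSON);
SELECT * FROM (SELECT j FROM t1 GROUP BY j) d;
-- the temptable column behind d.j is now created as a proper JSON field,
-- so is_json_type() keeps holding for it; the check itself is never evaluated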
For SELECT, the ROWNUM() function is mapped to JOIN->accepted_rows, which
is incremented for each accepted row.
For filesort, UPDATE, INSERT, DELETE and LOAD DATA, we map ROWNUM() to
internal variables incremented when the table is changed.
internal variables incremented when the table is changed.
The connection between the row counter and Item_func_rownum is done
in sql_select.cc::fix_items_after_optimize() and
sql_insert.cc::fix_rownum_pointers()
When ROWNUM() is used anywhere in a query, the optimization to ignore ORDER
BY in subqueries is disabled. This was done to get the following common
Oracle query to work:
select * from (select * from t1 order by a desc) as t where rownum() <= 2;
MDEV-3926 "Wrong result with GROUP BY ... WITH ROLLUP" contains a discussion
about this topic.
LIMIT optimization is enabled when a top-level WHERE clause compares
ROWNUM() with a numerical constant, using any of the following expressions:
- ROWNUM() < #
- ROWNUM() <= #
- ROWNUM() = 1
ROWNUM() can also be the right argument of the comparison function.
LIMIT optimization is done in two cases:
- For the current subquery, when the ROWNUM comparison is done at the top
  level:
SELECT * from t1 WHERE rownum() <= 2 AND t1.a > 0
- For an inner subquery, when the upper level has only a ROWNUM comparison
  in the WHERE clause:
SELECT * from (select * from t1) as t WHERE rownum() <= 2
In Oracle mode, one can also use ROWNUM without parentheses.
Other things:
- Fixed a bug where the optimizer tried to optimize away subqueries
  with RAND_TABLE_BIT set (non-deterministic queries). Now these
  subqueries will not be converted to joins. This bug fix was also
  needed to get rownum() working inside subqueries.
- In remove_const(), removed the setting of simple_order to FALSE if ROLLUP
  is used. This code was disabled a long time ago because of a wrong
  assignment in the following code. Instead we set simple_order to false if
  RAND_TABLE_BIT was used in the SELECT list. This ensures that
  we don't delete ORDER BY if the result set is not deterministic, as
  in 'SELECT RAND() AS r FROM t1 ORDER BY r';
- Updated parameters for Sort_param::init_for_filesort() to be able
to provide filesort with information where the number of accepted
rows should be stored
- Reordered fields in class Filesort to optimize storage layout
- Added a new error message to tell that a function can't be used in HAVING
- Added field 'with_rownum' to THD to mark that ROWNUM() is used in the
query.
Co-author: Oleksandr Byelkin <sanja@mariadb.com>
LIMIT optimization for subqueries
- Moved the creation of StringBuffers out of loops; instead create them
  outside and just reset the buffer if it was not allocated (to avoid
  a possible malloc/free for every entry)
Other things related to set_buffer_if_not_allocated()
- Changed Valuebuffer to not call set_buffer_if_not_allocated() when
it is created.
- Fixed geometry functions to reset the string length before calling
  String::reserve(). This is because one should not access the length()
  of an undefined string.
- Added Item_func_conv_charset::save_in_field() as the item is using
str_value to store cached values, which conflicts with
Item::save_str_in_field().
- Changed Item_proc_string to not store the string value in sql_string
as this clashes with Item::save_str_in_field().
- Locally store value of full_name_cstring() in analyse::end_of_records()
as Item::save_str_in_field() may overwrite it.
- Marked some strings as set_thread_specific()
- Added String::free_buffer() to be used internally in String functions
to just free the buffer but not reset other String values.
- Fixed uses_buffer_owned_by() to check the allocated length instead of
  the string length, which could be marked MEM_UNDEFINED().
This change removed 68 explicit strlen() calls from the code.
The following renames were done to ensure we don't use the old names
when merging code from earlier releases, as using the new variables
in print functions could result in crashes:
- charset->csname renamed to charset->cs_name
- charset->name renamed to charset->coll_name
Almost all changes were mechanical, except:
- Changed to use the new Protocol::store(LEX_CSTRING..) when possible
- Changed to use field->store(LEX_CSTRING*, CHARSET_INFO*) when possible
- Changed to use String->append(LEX_CSTRING&) when possible
Other things:
- There were compiler issues with ensuring that all character set names
  point to the same string: gcc doesn't allow one to use integer constants
  when defining global structures (constant char * pointers work fine).
  To get around this, I declared defines for each character set name
  length.
Changes:
- To detect implicit strlen() calls, I removed the methods in String that
  use 'const char *' without a length:
- String::append(const char*)
- Binary_string(const char *str)
- String(const char *str, CHARSET_INFO *cs)
- append_for_single_quote(const char *)
All usage of append(const char*) is changed to either use
String::append(char), String::append(const char*, size_t length) or
String::append(LEX_CSTRING)
- Added STRING_WITH_LEN() around constant string arguments to
String::append()
- Added overflow argument to escape_string_for_mysql() and
escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow.
This was needed as most usage of the above functions never tested the
result for -1 and would have given wrong results or crashes in case
of overflows.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING.
Changed all Item_func::func_name()'s to func_name_cstring()'s.
The old Item_func_or_sum::func_name() is now an inline function that
returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return
LEX_CSTRING.
- Changed for some functions the name argument from const char * to
to const LEX_CSTRING &:
- Item::Item_func_fix_attributes()
- Item::check_type_...()
- Type_std_attributes::agg_item_collations()
- Type_std_attributes::agg_item_set_converter()
- Type_std_attributes::agg_arg_charsets...()
- Type_handler_hybrid_field_type::aggregate_for_result()
- Type_handler_geometry::check_type_geom_or_binary()
- Type_handler::Item_func_or_sum_illegal_param()
- Predicant_to_list_comparator::add_value_skip_null()
- Predicant_to_list_comparator::add_value()
- cmp_item_row::prepare_comparators()
- cmp_item_row::aggregate_row_elements_for_comparison()
- Cursor_ref::print_func()
- Removed String_space(), as it was only used in one case, and that
  could be simplified to not use String_space(), thanks to the fixed
  my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
- NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to
clarify what the function really does.
- Renamed the protocol function
  bool store(const char *from, CHARSET_INFO *cs) to
  bool store_string_or_null(const char *from, CHARSET_INFO *cs).
  This was done both to clarify what this 'store' function does
  and to make it easier to find suboptimal usages of store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to instead be of type LEX_CSTRING.
- class Item_func_units now uses LEX_CSTRING for its name.
Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character
  in the prompt would cause part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to
  append() was known or easily obtainable but was not used.
- Removed some unneeded 'virtual' specifiers for functions that were
  inherited from the parent. I added override to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. The old
  code could cause memory overruns.
- Simplified some loops that add char * to a String with delimiters.
This was done to simplify the copying of with_* flags.
Other things:
- Changed the flags to C++ enums, which enables gdb to print
  out bit values for the flags. This also enables compiler
  errors if one tries to manipulate a non-existing bit in
  a variable.
- Added set_maybe_null() as a shortcut, as setting the
  MAYBE_NULL flag was done in a LOT of places.
- Renamed the PARAM flag to SP_VAR to ensure it's not confused with
  persistent statement parameters.
One should instead use Item::fixed() and Item::with_subquery()
Removed Item::is_fixed() and has_subquery() and did the following replace:
replace is_fixed() fixed() -- *.*
replace 'has_subquery()' 'with_subquery()' -- *.*
The reason for the change is that neither clang nor gcc can generate
efficient code when several bit fields are changed at the same time or
when copying one or more bits between identical bit fields.
Updating bits explicitly with & and | is MUCH more efficient than what
current compilers can do.
The problem was that when one used String::alloc() to allocate a string,
String ensures that there is space for an extra NULL byte in the
buffer and, if there is not, reallocates the string. This is a problem with
String::set_int(), which calls alloc(21) and thereby forces extra
malloc/free calls to happen.
- We no longer reallocate the String if alloc() is called with the
  Allocated_length. This reduces the number of malloc() allocations,
  especially one big reallocation in Protocol::send_result_set_metadata()
  for almost every query that produced a result to the connected client.
- Avoid extra mallocs when using LONGLONG_BUFFER_SIZE
  This can now be done as alloc() doesn't increase the buffer if the new
  length is not bigger than the old one.
- c_ptr() is redesigned to be safer (but a bit longer) than before.
- Remove wrong usage of c_ptr_quick()
c_ptr_quick() was used in many cases to get the pointer to the used
buffer, even when it didn't need to be \0 terminated. In this case
ptr() is a better substitute.
Another problem with c_ptr_quick() is that it did not guarantee that
the string would be \0 terminated.
- item_val_str(), an API function not currently used by the server,
  now always returns a null-terminated string (before, it didn't always
  do that).
- Ensure that all String allocations use STRING_PSI_MEMORY_KEY. The old
  mixed usage of performance keys caused asserts when String buffers
  were shrunk.
- Binary_string::shrink() is simplified.
- Fixed bug in String(const char *str, size_t len, CHARSET_INFO *cs) that
used Binary_string((char *) str, len) instead of Binary_string(str,len).
- Changed argument to String() creations and String.set() functions to use
'const char*' instead of 'char*'. This ensures that Alloced_length is
not set, which gives safety against someone trying to change the
original string. This also would allow us to use !Alloced_length in
c_ptr() if needed.
- Changed string_ptr_cmp() to use memcmp() instead of c_ptr() to avoid
  a possible malloc during string comparison.
This patch changes the main name of the 3-byte character set from utf8 to
utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set to TRUE by
default, so that utf8 means utf8mb3. If it is not set, utf8 means utf8mb4.
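A sketch of the effect (flag name as above; default behavior shown):
-- with UTF8_IS_UTF8MB3 set in @@old_mode (the default), utf8 means utf8mb3:
CREATE TABLE t1 (a CHAR(1) CHARACTER SET utf8);
SHOW CREATE TABLE t1;  -- reports CHARACTER SET utf8mb3
-- with the flag removed from @@old_mode, utf8 would mean utf8mb4 instead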
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED /
SELECT ... LOCK IN SHARE MODE SKIP LOCKED.
This is implemented only in InnoDB at the moment, not in RocksDB yet.
This adds a new handler flag, HA_CAN_SKIP_LOCKED, that
is used when the storage engine advertises the flag.
When a storage engine indicates this flag, it will get
the TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED lock types.
The Lex structure has been updated to store both the FOR UPDATE/LOCK IN
SHARE MODE part and the SKIP LOCKED part, so the SHOW CREATE VIEW
implementation is simpler.
"SELECT FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or
INSERT .. SELECT on the result set is not safe for STATEMENT-based
replication. MIXED replication will replicate this as row-based events.
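A usage sketch with two concurrent sessions on an InnoDB table (a
hypothetical job-queue pattern):
-- session 1:
START TRANSACTION;
SELECT * FROM jobs WHERE id = 1 FOR UPDATE;
-- session 2: skips the locked row instead of waiting for session 1
SELECT * FROM jobs FOR UPDATE SKIP LOCKED;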
Thanks to guidance from Facebook commit
193896c466
This helped verify the basic test case and the components that needed
implementing (even though every part was implemented differently).
Thanks to Marko for guidance on a simpler InnoDB implementation.
Reviewers: Marko, Monty
Use < TL_FIRST_WRITE for determining a READ transaction.
Use TL_FIRST_WRITE in relative comparisons, replacing TL_WRITE_ALLOW_WRITE
as the minimum WRITE lock type.
Problem:
The problem happened because of a conceptual flaw in the server code:
a. The table level CHARSET/COLLATE clause affected all data types,
including numeric and temporal ones:
CREATE TABLE t1 (a INT) CHARACTER SET utf8 [COLLATE utf8_general_ci];
In the above example, the Column_definition_attributes
(and then the FRM record) for the column "a" erroneously inherited
"utf8" as its character set.
b. The "ALTER TABLE t1 CONVERT TO CHARACTER SET csname" statement
also erroneously affected Column_definition_attributes::charset
for numeric and temporal data types and wrote "csname" as their
character set into FRM files.
So now we have arbitrary non-relevant charset ID values for numeric
and temporal data types in all FRM files in the world :)
The code in the server and the other engines did not seem to be affected
by this flaw. Only InnoDB inplace ALTER was affected.
Solution:
Fixing the code in the way that only character string data types
(CHAR,VARCHAR,TEXT,ENUM,SET):
- inherit the table level CHARSET/COLLATE clause
- get the charset value according to "CONVERT TO CHARACTER SET csname".
Numeric and temporal data types now always get &my_charset_numeric
in Column_definition_attributes::charset and always write its ID into FRM files:
- no matter what the table level CHARSET/COLLATE clause is, and
- no matter what "CONVERT TO CHARACTER SET" says.
Details:
1. Adding helper classes to pass small parts of HA_CREATE_INFO
into Type_handler methods:
- Column_derived_attributes - to pass table level CHARSET/COLLATE,
so columns that do not have explicit CHARSET/COLLATE clauses
can derive them from the table level, e.g.
CREATE TABLE t1 (a VARCHAR(1), b CHAR(1)) CHARACTER SET utf8;
- Column_bulk_alter_attributes - to pass bulk attribute changes
generated by the ALTER related code. These bulk changes affect
multiple columns at the same time:
ALTER TABLE ... CONVERT TO CHARACTER SET csname;
Note, passing the whole HA_CREATE_INFO directly to Type_handler
would not be good: HA_CREATE_INFO is huge and would introduce undesired
dependencies in sql_type.h and sql_type.cc. The Type_handler API should
use the smallest possible data types!
2. Type_handler::Column_definition_prepare_stage1() is now responsible
to set Column_definition::charset properly, according to the data type,
for example:
- For string data types, Column_definition_attributes::charset is set from
the table level CHARSET/COLLATE clause (if not specified explicitly in
the column definition).
- For numeric and temporal fields, Column_definition_attributes::charset is
set to &my_charset_numeric, no matter what the table level
CHARSET/COLLATE says.
- For GEOMETRY, Column_definition_attributes::charset is set to
&my_charset_bin, no matter what the table level CHARSET/COLLATE says.
Previously this code (setting `charset`) was outside of
Column_definition_prepare_stage1(), namely in
mysql_prepare_create_table(), and was erroneously called for
all data types.
3. Adding Type_handler::Column_definition_bulk_alter(), to handle
"ALTER TABLE .. CONVERT TO". Previously this code was inside
get_sql_field_charset() and was erroneously called for all data types.
4. Removing the Schema_specification_st parameter from
Type_handler::Column_definition_redefine_stage1().
Column_definition_attributes::charset is now fully properly initialized by
Column_definition_prepare_stage1(). So we don't need access to the
table level CHARSET/COLLATE clause in Column_definition_redefine_stage1()
any more.
5. Other changes:
- Removing global function get_sql_field_charset()
- Moving the part of the former get_sql_field_charset(), which was
responsible to inherit the table level CHARSET/COLLATE clause to
new methods:
-- Column_definition_attributes::explicit_or_derived_charset() and
-- Column_definition::prepare_charset_for_string().
This code is only needed for string data types.
Previously it was erroneously called for all data types.
- Moving another part, which was responsible to apply the
"CONVERT TO" clause, to
Type_handler_general_purpose_string::Column_definition_bulk_alter().
- Replacing the call to get_sql_field_charset() in sql_partition.cc
with sql_field->explicit_or_derived_charset() - it is perfectly sufficient.
The old code was redundant: get_sql_field_charset() was called from
sql_partition.cc only when there was no "CONVERT TO CHARACTER SET"
clause involved, so its purpose was only to inherit the table
level CHARSET/COLLATE clause.
- Moving the code handling the BINCMP_FLAG flag from
mysql_prepare_create_table() to
Column_definition::prepare_charset_for_string():
This code is responsible to resolve the BINARY comparison style
into the corresponding _bin collation, to do the following transparent
rewrite:
CREATE TABLE t1 (a VARCHAR(10) BINARY) CHARSET utf8; ->
CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_bin);
This code is only needed for string data types.
Previously it was erroneously called for all data types.
6. Renaming Table_scope_and_contents_source_pod_st::table_charset
to alter_table_convert_to_charset, because the only purpose it's used for
is handling "ALTER .. CONVERT". The new name is much more self-descriptive.
This feature adds the functionality of ignorability for indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED
that stores whether an index is to be ignored or not.
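A hedged usage sketch (shown with the IGNORED spelling that matches the new
INFORMATION_SCHEMA.STATISTICS column; treat the exact keywords as per the
syntax above):
CREATE TABLE t1 (a INT, b INT, KEY k1 (b) IGNORED);
SELECT index_name, ignored FROM information_schema.statistics
WHERE table_name = 't1';  -- k1 is reported as ignored
-- while ignored, k1 is still maintained but is not used by the optimizer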
The issue happens when secondary keys are extended with primary
key parts. The function TABLE_SHARE::init_from_binary_frm_image()
adds the length bytes of the primary key key parts to the length of the
secondary key. This is not needed, because when the extended keys are
used we recalculate the length for the used key parts.
Also removed TABLE_SHARE::total_key_length, as it is not used in the code.
Approved-by: Monty <monty@mariadb.org>
This patch solves two key problems.
1. There is a type number clash between MySQL and MariaDB. The number
245, used for MariaDB virtual fields, is the same as MySQL's JSON.
This leads to corrupt FRM errors if unhandled. The code properly
checks the frm table version number, and if it matches 5.7+ (until 10.0+)
it will assume it is dealing with a MySQL table with the JSON
datatype.
2. The MySQL JSON datatype uses a proprietary format to pack JSON data. The
patch introduces a datatype plugin which parses the format and converts
it to its string representation.
The intended conversion path is to only use the JSON datatype within
ALTER TABLE <table> FORCE, to force a table recreate. This happens
during mysql_upgrade or via a direct ALTER TABLE <table> FORCE.
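The intended usage, per the conversion path above (table name
hypothetical):
-- after pointing MariaDB at a MySQL 5.7 datadir:
ALTER TABLE mysql57_json_table FORCE;  -- recreates the table, converting the
                                       -- proprietary JSON format to its string form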
The problem was that the server was calling virtual functions on a record
that was not initialized with new data.
This happened when fill_record() was aborted in the middle because of an
error in save_val() or save_in_field().
Problem:
Queries like this showed performance degradation in 10.4 compared to 10.3:
SELECT temporal_literal FROM t1;
SELECT temporal_literal + 1 FROM t1;
SELECT COUNT(*) FROM t1 WHERE temporal_column = temporal_literal;
SELECT COUNT(*) FROM t1 WHERE temporal_column = string_literal;
Fix:
Replacing the universal member "MYSQL_TIME cached_time" in
Item_temporal_literal with data type specific containers:
- Date in Item_date_literal
- Time in Item_time_literal
- Datetime in Item_datetime_literal
This restores the performance and makes it even better in some cases.
See the benchmark results in the MDEV.
Also, this change makes a further separation of Date, Time, and Datetime
from each other, which will make it possible not to derive them from
the too heavy (40 bytes) MYSQL_TIME, and to replace them with smaller data
type specific containers.
* Allocate items on thd->mem_root while refixing vcol exprs.
* Register vcol tree changes and roll them back after the statement is
  executed.
Explanation:
Due to collation implementation specifics, an Item tree can change while
being fixed. The tricky thing here is to do it on a proper arena.
It's usually not a problem when a field is deterministic; however, it is
painful in the opposite case.
A non-deterministic field should be refixed on each statement, since it
depends on the environment state.
The tree change will be temporary and therefore should be reverted after
the statement execution.
Fix stale virtual field values in 4 cases: when a virtual field depends
on row_start/row_end in a timestamp/trx_id versioned table. The row_start
dependency is recalculated in vers_update_fields() (SQL and InnoDB
layer). The row_end dependency is recalculated on history row insert.
include/my_valgrind.h:88:112: error: ‘void* memset(void*, int, size_t)’ writing to an object of non-trivial type ‘key_map’ {aka ‘class Bitmap<64>’}; use assignment instead [-Werror=class-memaccess]
in this case it's safe, Bitmap<> is trivial enough
In AddressSanitizer, we only want memory poisoning to happen
in connection with custom memory allocation or freeing.
The primary use of MEM_UNDEFINED is for declaring memory uninitialized
in Valgrind or MemorySanitizer. We do not want MEM_UNDEFINED to
have the unwanted side effect that AddressSanitizer would no longer
be able to complain about accessing unallocated memory.
MEM_UNDEFINED(): Define as no-op for AddressSanitizer.
MEM_MAKE_ADDRESSABLE(): Define as MEM_UNDEFINED() or
ASAN_UNPOISON_MEMORY_REGION().
MEM_CHECK_ADDRESSABLE(): Wrap also __asan_region_is_poisoned().
- Some of the bug fixes are backports from 10.5!
- The fix in innobase/fil/fil0fil.cc is just a backport to get less
error messages in mysqld.1.err when running with valgrind.
- Renamed HAVE_valgrind_or_MSAN to HAVE_valgrind
Fixed by:
- Making all quick_* variables allocated according to the real number of
  keys instead of MAX_KEY
- Storing all the quick_* items in a separately allocated structure
  (OPT_RANGE)
- Ensuring we don't access any quick_* variable without first checking
  opt_range_keys.is_set(). Thanks to this, we don't need any
  pre-initialization of quick_* variables anymore.
Some renames were done to match the new structure:
table->quick_keys -> table->opt_range_keys
table->quick_rows[X] -> table->opt_range[X].rows
table->quick_key_parts[X] -> table->opt_range[X].key_parts
table->quick_costs[X] -> table->opt_range[X].cost
table->quick_index_only_costs[X] -> table->opt_range[X].index_only_cost
table->quick_n_ranges[X] -> table->opt_range[X].ranges
table->quick_condition_rows -> table->opt_range_condition_rows
This patch should both decrease the memory needed for TABLE objects
(3528 -> 984 + keyinfo) and increase performance, thanks to fewer
initializations per query and more localized memory, thanks to the
opt_range structure.
- Removed a not-needed bzero in TABLE::initialize_quick_structures().
- Replaced bzero with TRASH_ALLOC() to have this change verified by
  memory checkers
- Added a missing table->quick_keys.is_set check in table_cond_selectivity()
Make sure to initialize members of TABLE::reginfo when TABLE::init() is
called. In this case the problem was that table->reginfo.join_tab was set
for the SELECT query and then reused by the UPDATE query.
This case occurred only when the SELECT query had a degenerate join.
The problem was that FLUSH TABLES was trying to read the latest sequence
state, which conflicted with a running ALTER SEQUENCE. Removed the reading
of the state when opening a table for FLUSH, as it's not needed in this
case.
Other things:
- Fixed a potential issue with concurrently running ALTER SEQUENCE, where
  the later ALTER could potentially read old data
A CHECK constraint is checked by check_expression(), which walks its
items and gets into Item_field::check_vcol_func_processor() to check
for conformity with the foreign key list.
WITHOUT OVERLAPS is checked for the same conformity in
mysql_prepare_create_table().
Long uniques are already impossible with InnoDB foreign keys. See
ER_CANT_CREATE_TABLE in the test case.
2 accompanying bugs fixed (test main.constraints failed):
1. check->name.str lived on the SP execute mem_root while the "check"
object itself lives on the SP main mem_root. On the second SP execute,
check->name.str contained garbage data. Fixed by allocating from
thd->stmt_arena->mem_root, which is the SP main mem_root.
2. The CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed up with
VCOL_FIELD_REF. VCOL_FIELD_REF is assigned in check_expression() and was
then detected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
handle_if_exists_options().
Existing cases for MDEV-16932 in main.constraints cover both fixes.
reduce the amount of engine-specific code in the server,
particularly as it does not serve any purpose now.
It may be needed for the VP engine,
to be reconsidered in MDEV-7795.
Problem: Calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap on
for a virtual column, and henceforth mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps.
This bug is not specific to long uniques; it also fails for this case:
create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
Previously multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, one would happen sooner or later.
To avoid a scalability bottleneck, histogram loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever histograms were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.
Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If histograms are being loaded
concurrently, the statement waits until the load is completed.
- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri-state
atomic variable.
- Simplified away alloc_histograms_for_table_share().
Note: there's likely still a data race if a thread attempts to access
histogram data after it failed to load it (because of a concurrent load).
It was there previously and goes beyond the scope of this effort. One way
of fixing it could be reviving TABLE::histograms_are_read and adding
appropriate checks wherever needed.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
Previously multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, one would happen sooner or later.
To avoid a scalability bottleneck, statistics loading is protected by a
per-TABLE_SHARE atomic variable.
Whenever statistics were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.
Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If statistics are being loaded
concurrently, the statement waits until the load is completed.
TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri-state atomic
variable.
Part of MDEV-19061 - table_share used for reading statistical tables is
not protected
Respect system fields in NO_ZERO_DATE mode.
This is the subject for refactoring in MDEV-19597
Conflict resolution from 7d5223310789f967106d86ce193ef31b315ecff0
update_virtual_field() is called as part of an index rebuild in
ha_myisam::repair() (MDEV-5800), which is done when a bulk INSERT finishes.
The assertion in update_virtual_field() was added as part of MDEV-16222
because update_virtual_field() returns in_use->is_error(). The idea: the
error status from before update_virtual_field() and the status returned by
update_virtual_field() had wrongly mixed semantics, so the former could
falsely influence the latter.
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock
This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
The major change is to replace all instances of
table->m_needs_reopen= true;
with
table->mark_table_for_reopen();
The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list and that, when we
reopen the tables, we first close all tables before reopening
and locking them.
Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
are no tables marked for reopen. (performance)
The code incorrectly assumed in multiple places that TYPELIB
values cannot have 0x00 bytes inside. In fact they can:
CREATE TABLE t1 (a ENUM(0x61, 0x0062) CHARACTER SET BINARY);
Note, the TYPELIB value encoding used in FRM is ambiguous about 0x00.
So this fix is partial.
It fixes 0x00 bytes in many (but not all) places:
- In the middle or in the end of a value:
CREATE TABLE t1 (a ENUM(0x6100) ...);
CREATE TABLE t1 (a ENUM(0x610062) ...);
- In the beginning of the first value:
CREATE TABLE t1 (a ENUM(0x0061));
CREATE TABLE t1 (a ENUM(0x0061), b ENUM('b'));
- In the beginning of the second (and following) value of the *last* ENUM/SET
in the table:
CREATE TABLE t1 (a ENUM('a',0x0061));
CREATE TABLE t1 (a ENUM('a'), b ENUM('b',0x0061));
However, it does not fix 0x00 when:
- 0x00 byte is in the beginning of a value of a non-last ENUM/SET
causes an error:
CREATE TABLE t1 (a ENUM('a',0x0061), b ENUM('b'));
ERROR 1033 (HY000): Incorrect information in file: './test/t1.frm'
This is an ambiguous case and will be fixed separately.
We need a new TYPELIB encoding to fix this.
Details:
- unireg.cc
The function pack_header() incorrectly used strlen() to detect
a TYPELIB value length. Adding a new function typelib_values_packed_length()
which uses TYPELIB::type_lengths[n] to detect the n-th value length,
and reusing the new function in pack_header() and packed_fields_length()
- table.cc
fix_type_pointers() assumed in multiple places that values cannot have
0x00 inside and used strlen(TYPELIB::type_names[n]) to set
the corresponding TYPELIB::type_lengths[n].
Also, fix_type_pointers() did not check the encoded data for consistency.
Rewriting fix_type_pointers() code to populate TYPELIB::type_names[n] and
TYPELIB::type_lengths[n] at the same time, so no additional loop
with strlen() is needed any more.
Adding many data consistency tests.
Fixing the main loop in fix_type_pointers() to use memchr() instead of
strchr() to handle 0x00 properly.
Fixing create_key_infos() to return the result in a LEX_STRING rather
than in a char*.
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
In addition, partitioned S3 tables can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files on
S3 for partitioned shared (S3) tables.
The discovery methods are enhanced by allowing engines that support
discovery to also discover the partitioned table's .frm and .par files.
Things in more detail
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added an hton callback, create_partitioning_metadata, to inform the
  handler that metadata for a partitioned table has changed
- Added back handler::discover_check_version() to be able to check if
a table's or a part table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around
- Appended extension to path for writefrm() to be able to reuse the
function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of
non-critical tracing.
copy_keys_from_share(): Use reinterpret_cast instead of
manipulating a reference to a type-punned pointer.
This cleans up after the cleanup
commit 0515577d12.
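As an illustration of the pattern (hypothetical KEY type, not the
actual server code):

  struct KEY { int dummy; };

  // Before: writing through a reference to a type-punned pointer,
  // which runs afoul of strict-aliasing rules.
  KEY *keys_via_punning(void *mem)
  {
    KEY *keys;
    *reinterpret_cast<void**>(&keys)= mem;
    return keys;
  }

  // After: cast the pointer value itself.
  KEY *keys_via_cast(void *mem)
  {
    return reinterpret_cast<KEY*>(mem);
  }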
* The overlaps check is implemented on a handler level per row command.
It creates a separate cursor (actually, another handler instance) and
caches it inside the original handler, when ha_update_row or
ha_insert_row is issued. Cursor closes on unlocking the handler.
* Two index entries containing the same key mean a unique constraint
violation even in the usual sense. So we fetch the left and right
neighbours and check that they have the same key prefix, excluding only
the period part from the key. If it doesn't match, then there is no such
neighbour and the check passes. Otherwise, we check whether this
neighbour intersects with the considered key (see the sketch after this
list).
* The check does not introduce a new error code and fails with the
ER_DUPP_KEY error. This might break the REPLACE workflow and should be
fixed separately.
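A self-contained sketch of the neighbour test described above, using
hypothetical types and half-open [start, end) periods:

  struct Period_key
  {
    int prefix;    // the unique key parts, excluding the period
    long start;    // period start (inclusive)
    long end;      // period end (exclusive)
  };

  static bool overlaps(const Period_key &a, const Period_key &b)
  {
    return a.prefix == b.prefix && a.start < b.end && b.start < a.end;
  }

  // The handler-level check reduces to testing the two index
  // neighbours of the entry being inserted or updated.
  static bool violates_without_overlaps(const Period_key &row,
                                        const Period_key *left,
                                        const Period_key *right)
  {
    return (left && overlaps(row, *left)) ||
           (right && overlaps(row, *right));
  }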
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
not 1.0. In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed bug when using test_if_cheaper_ordering where we didn't use
keyread if index was changed
- Fixed a bug where we didn't use index only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and is not suitable
for HEAP. The effect was that HEAP preferred table scans over ranges for
btree indexes (see the sketch after this list).
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have the same cost for simple ranges.
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries.
- Fixed that matching_candidates_in_table() uses same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
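As a hedged sketch of the keyread_time() point above (hypothetical,
simplified interface; the constants are illustrative, not the real
server values):

  // Block-oriented default: cost grows with the number of blocks read,
  // plus a setup cost, which overestimates lookups in a memory engine.
  struct handler_sk
  {
    virtual ~handler_sk() {}
    virtual double keyread_time(double rows, double rows_per_block)
    {
      return rows / rows_per_block + 1;
    }
  };

  // Memory engine: no block I/O, so charge a small per-row cost and
  // drop the setup penalty; ranges then beat full scans again.
  struct heap_handler_sk : handler_sk
  {
    double keyread_time(double rows, double) override
    {
      return rows * 0.01;
    }
  };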
MDEV-21605 Clean up and speed up interfaces for binary row logging
MDEV-21617 Bug fix for previous version of this code
The intention is to have as few 'if's as possible in ha_write() and
related functions. This is done by pre-calculating the row_logging state
for all tables once per statement.
Benefits are simpler and faster code both when binary logging is disabled
and when it's enabled.
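A hedged sketch of the idea, with hypothetical names and an
illustrative set of conditions:

  // Decide once per statement whether rows of this table must be
  // binlogged, so the hot per-row path tests a single flag.
  struct handler_sk
  {
    bool row_logging;

    void prepare_for_row_logging(bool binlog_open, bool row_format,
                                 bool is_tmp_or_system_table)
    {
      row_logging= binlog_open && row_format && !is_tmp_or_system_table;
    }

    int write_row_sk()
    {
      // ... store the row ...
      if (row_logging)
      {
        // log the row image; check and propagate any logging error
      }
      return 0;
    }
  };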
Changes:
- Added handler->row_logging to make it easy to check if a table should
be row logged. This also made it easier to disable row logging for
system, internal and temporary tables.
- A table's row_logging capabilities are checked once per statement
that updates tables, in THD::binlog_prepare_for_row_logging(), which
is called when needed from THD::decide_logging_format().
- Removed most usage of tmp_disable_binlog(), reenable_binlog() and
temporary saving and setting of thd->variables.option_bits.
- Moved checks that can't change during a statement from
check_table_binlog_row_based() to check_table_binlog_row_based_internal()
- Removed flag row_already_logged (used by sequence engine)
- Moved binlog_log_row() to a handler member function
- Moved write_locked_table_maps() to THD::binlog_write_table_maps() as
most other related binlog functions are in THD.
- Removed binlog_write_table_map() and binlog_log_row_internal() as
they are now obsolete as 'has_transactions()' is pre-calculated in
prepare_for_row_logging().
- Remove 'is_transactional' argument from binlog_write_table_map() as this
can now be read from handler.
- Changed order of 'if's in handler::external_lock() and wsrep_mysqld.h
to first evaluate fast and likely cases before more complex ones.
- Added error checking in ha_write_row() and related functions if
binlog_log_row() failed.
- Don't clear check_table_binlog_row_based_result in
clear_cached_table_binlog_row_based_flag() as it's not needed.
- THD::clear_binlog_table_maps() has been replaced with
THD::reset_binlog_for_next_statement()
- Added 'MYSQL_OPEN_IGNORE_LOGGING_FORMAT' flag to open_and_lock_tables()
to avoid calculating the binary log format for internal opens. This flag
is also used to avoid reading statistics tables for internal tables.
- Added OPTION_BINLOG_LOG_OFF as a simple way to turn off binlogging
temporarily for CREATE (instead of using THD::sql_log_bin_off).
- Removed flag THD::sql_log_bin_off (not needed anymore)
- Speed up THD::decide_logging_format() by remembering if blackhole engine
is used and avoid a loop over all tables if it's not used
(the common case).
- THD::decide_logging_format() is not called anymore if no tables are used
for the statement. This will speed up pure stored procedure code by
about 5% or more according to some simple tests.
- We now get annotated events on slave if a CREATE ... SELECT statement
is transformed on the slave from statement to row logging.
- In the original code, the master could come into a state where row
logging was enforced for all future events even if statement logging
could be used. This is now partly fixed.
Other changes:
- Ensure that all tables used by a statement have query_id set.
- Had to restore the row_logging flag for unused tables in
THD::binlog_write_table_maps (not a normal scenario)
- Removed injector::transaction::use_table(server_id_type sid, table tbl)
as it's not used.
- Cleaned up set_slave_thread_options()
- Some more DBUG_ENTER/DBUG_RETURN, code comments and minor indentation
changes.
- Ensure we only call THD::decide_logging_format_low() once in
mysql_insert() (fixes an inefficiency).
- Don't annotate INSERT DELAYED
- Removed zeroing pos_in_table_list in THD::open_temporary_table() as it's
already 0
MDEV-21606 Improve update handler (long unique keys on blobs)
MDEV-21470 MyISAM and Aria start_bulk_insert doesn't work with long unique
MDEV-21606 Bug fix for previous version of this code
MDEV-21819 2 Assertion `inited == NONE || update_handler != this'
- Move update_handler from TABLE to handler
- Move out initialization of update handler from ha_write_row() to
prepare_for_insert()
- Fixed that INSERT DELAYED works with update handler
- Give an error if using long unique with an autoincrement column
- Added handler function to check if table has long unique hash indexes
- Disable the write cache in MyISAM and Aria when using update_handler,
because if the cache is used, the row will not be inserted until the end
of the statement and update_handler would not find conflicting rows.
- Removed not used handler argument from
check_duplicate_long_entries_update()
- Syntax cleanups
- Indentation fixes
- Don't use single character identifiers for arguments
compare_keys_but_name(): do not use KEY_PART_INFO::field for
Field::is_equal(). Following the logic of that code we need to
compare fields of a table. But KEY_PART_INFO::field sometimes
(when key part is shorter than table field) is a different field.
In that case Field::is_equal() returns incorrect result and
problems occur.
KEY_PART_INFO::field may become some strange field in
open_table_from_share(). I think this is incorrect logic, some
technical debt. I'm not fixing it right now, because I don't have
time. But I'm making Field::field_length a const class member. Then,
the only fishy code which changed that field now requires a
const_cast<>, which brings attention to that code. This change should
not affect the logic of the program in any way.
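A small illustration of the const-member technique (hypothetical,
simplified Field):

  struct Field_sk
  {
    const unsigned int field_length;   // immutable after construction
    Field_sk(unsigned int len) : field_length(len) {}
  };

  void fishy_shorten(Field_sk *f, unsigned int key_part_len)
  {
    // The one remaining mutation site now needs an explicit cast,
    // which makes the questionable code stand out in review:
    const_cast<unsigned int&>(f->field_length)= key_part_len;
  }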
"write set" for replication finally got its correct place
(mark_columns_per_binlog_row_image()). When done generally in
mark_columns_needed_for_update() it affects optimization
algorithm. used_key_is_modified, query_plan.using_io_buffer are
wrongly set and that leads to wrong prepare_for_keyread() which limits
read_set.
Turn read cache off for update and multi-update for versioned
table. no_cache is reinited on each TABLE open because it is
applicable for specific algorithms.
As a side fix vers_insert_history_row() honors vers_write setting.
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted
inside this loop, the cache misses it and fails with an error.
TODO:
Currently maria_extra() does not support SEQ_READ_APPEND. It might be
possible to use this type of cache.
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables
UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().
Additional DML cases in view.test
The issue here is the wrong estimate of the cardinality of a partial join,
the cardinality is too high because the function table_cond_selectivity()
returns an absurd number 100 while selectivity cannot be greater than 1.
When accessing table t by outer reference t1.a via index, we do not
perform any range analysis for t. Yet we see that
TABLE::quick_key_parts[key] and TABLE::quick_rows[key] contain non-zero
values, though these should have remained untouched and equal to 0.
Thus the real cause of the problem is that TABLE::init does not clean
the arrays TABLE::quick_key_parts[] and TABLE::quick_rows[].
It should have done it because the TABLE structure created for any
instance of a table can be reused for many queries.
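A minimal sketch of the required reset, with hypothetical names and
sizes:

  #include <cstring>

  struct Table_sk
  {
    unsigned int quick_key_parts[64];
    double quick_rows[64];

    // Called every time the object is reused for a new query; without
    // the zero-fill, stale range-analysis results from the previous
    // query leak into the selectivity calculations.
    void init_for_query()
    {
      memset(quick_key_parts, 0, sizeof quick_key_parts);
      memset(quick_rows, 0, sizeof quick_rows);
    }
  };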
- Any temporary tables created under read-only mode will never be logged
to the binary log. Any usage of these tables to update normal tables, even
after read-only has been disabled, will use row-based logging (as the
temporary table will not be on the slave).
- Analyze, check and repair table will not be logged in read-only mode.
Other things:
- Removed unused variables in
MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged.
This is used to update table_share->table_creation_was_logged.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged set will switch the
binlog format to row logging.
- Removed a not needed/wrong setting of
table->s->table_creation_was_logged in create_table_from_items()
TABLE::mark_columns_needed_for_update(): use_all_columns() assigns the
pointer of all_set into read_set and write_set, but this is not good
since all_set is changed later by
TABLE::mark_columns_used_by_index_no_reset() (see the sketch below).
Do column_bitmaps_signal() whenever we change read_set/write_set.
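A hedged sketch of the aliasing hazard, with std::bitset standing in
for the server's bitmap type:

  #include <bitset>

  struct Table_bitmaps_sk
  {
    std::bitset<64> all_set;            // shared "all columns" map
    std::bitset<64> read_copy;
    const std::bitset<64> *read_set;

    // Fragile: read_set now aliases all_set, so a later in-place
    // change to all_set silently changes read_set as well.
    void use_all_columns_aliased() { read_set= &all_set; }

    // Safer: keep a private copy, or (as the fix does) signal every
    // change of read_set/write_set so dependent state is refreshed.
    void use_all_columns_copied()
    {
      read_copy= all_set;
      read_set= &read_copy;
    }
  };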
1. Removed TIMESTAMP/TRANSACTION unit auto-detection in favor of default TIMESTAMP.
Reasons:
1.1. rare practical use and doubtful advantage of such auto-detection;
1.2. it conflicts with MDEV-16226 (TRX_ID-based versioned tables performance improvement).
The needless check_unit member was removed.
2. SQL: versioning type handling refactoring
The Vers_type_handler hierarchy stores the versioning properties of a
type. The virtual Type_handler::vers() accesses the specialization of
Vers_type_handler for a specific type. The virtual
Vers_type_handler::kind() returns the versioning kind
(timestamp/trx_id). See the sketch below.
Removed Type_handler::Vers_history_point_check_unit() in favor of
Type_handler::vers().
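A hedged sketch of the hierarchy, with simplified hypothetical
declarations:

  enum class Vers_kind_sk { TIMESTAMP, TRX_ID };

  struct Vers_type_handler_sk
  {
    virtual ~Vers_type_handler_sk() {}
    virtual Vers_kind_sk kind() const= 0;
  };

  struct Vers_type_timestamp_sk : Vers_type_handler_sk
  {
    Vers_kind_sk kind() const override { return Vers_kind_sk::TIMESTAMP; }
  };

  struct Vers_type_trx_sk : Vers_type_handler_sk
  {
    Vers_kind_sk kind() const override { return Vers_kind_sk::TRX_ID; }
  };

  struct Type_handler_sk
  {
    virtual ~Type_handler_sk() {}
    // Types unusable for row versioning return nullptr.
    virtual const Vers_type_handler_sk *vers() const { return nullptr; }
  };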
Renames:
require_timestamp() -> require_timestamp_error()
require_trx_id() -> require_trx_id_error()
EDIT by Alexander Barkov (@abarkov):
check_sys_fields() moved to Vers_type_handler::check_sys_fields()