Commit graph

2824 commits

Author SHA1 Message Date
Aleksey Midenkov
89936f11e9 MDEV-18278 Misleading error message in error log upon failed table creation
If error_reported is not set, the upper caller open_table_from_share()
throws the error ER_NOT_FORM_FILE itself via open_table_error().
2021-10-11 12:26:43 +03:00
Marko Mäkelä
2b66cd2493 Merge 10.3 into 10.4 2021-08-23 10:44:06 +03:00
Marko Mäkelä
8a33d36dac Fix GCC 11.2.0 -Wmaybe-uninitialized
TABLE_LIST::calc_md5(): Remove an untruthful const qualifier.

thd_get_query_start_data(): Pass empty_clex_str instead of
an uninitialized LEX_CSTRING.
2021-08-23 09:00:37 +03:00
Marko Mäkelä
4a25957274 Merge 10.4 into 10.5 2021-08-18 18:22:35 +03:00
Marko Mäkelä
f84e28c119 Merge 10.3 into 10.4 2021-08-18 16:51:52 +03:00
Oleksandr Byelkin
850b2ba15d Merge branch '10.4' into 10.5 2021-08-02 16:53:37 +02:00
Oleksandr Byelkin
4902b0fdc9 Merge branch '10.3' into 10.4 2021-08-02 16:50:28 +02:00
Oleksandr Byelkin
7f264997dd Merge branch '10.2' into 10.3 2021-08-02 11:41:00 +02:00
Nikita Malyavin
b549af6913 MDEV-26220 Server crashes with indexed by prefix virtual column
Server crashes in Field::register_field_in_read_map upon select from
partitioned table with indexed by prefix virtual column.

After several read-mark fixes a problem has surfaced:
Since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.

Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().

Solution: copy vcol_info from table field when it's set up.
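
A minimal sketch (hypothetical table and column names, not the original test case)
of the kind of schema involved:

  CREATE TABLE t1 (a INT, b VARCHAR(40),
                   c VARCHAR(40) AS (b) VIRTUAL,
                   KEY (c(10), a))
  PARTITION BY HASH (a) PARTITIONS 2;
  SELECT * FROM t1;  -- previously crashed in Field::register_field_in_read_map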
2021-08-02 10:31:22 +02:00
Oleksandr Byelkin
8b6c8a6ce9 Revert "MDEV-26220 Server crashes with indexed by prefix virtual column"
This reverts commit 9b8e207ce0.
2021-08-02 10:30:18 +02:00
Oleksandr Byelkin
ae6bdc6769 Merge branch '10.4' into 10.5 2021-07-31 23:19:51 +02:00
Oleksandr Byelkin
7841a7eb09 Merge branch '10.3' into 10.4 2021-07-31 22:59:58 +02:00
Nikita Malyavin
9b8e207ce0 MDEV-26220 Server crashes with indexed by prefix virtual column
Server crashes in Field::register_field_in_read_map upon select from
partitioned table with indexed by prefix virtual column.

After several read-mark fixes a problem has surfaced:
Since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.

Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().

Solution: initialize vcols before key initialization

Also key initialization is moved to a function.
2021-07-28 11:13:24 +02:00
Sergei Golubchik
6190a02f35 Merge branch '10.2' into 10.3 2021-07-21 20:11:07 +02:00
Nikita Malyavin
f64a4f672a follow-up MDEV-18166: rename marking functions
Reformulate mark_columns_used_by_index* function family in a more laconic
way:

mark_columns_used_by_index -> mark_index_columns
mark_columns_used_by_index_for_read_no_reset -> mark_index_columns_for_read
mark_columns_used_by_index_no_reset -> mark_index_columns_no_reset
static mark_index_columns -> do_mark_index_columns
2021-07-12 22:00:40 +03:00
Nikita Malyavin
0f6a5b4390 [2/2] MDEV-18166 ASSERT_COLUMN_MARKED_FOR_READ failed on tables with vcols
Several different test cases were failing for the same reason: the
fields in a vcol expression were not marked when marking the columns of a key
containing a virtual column for read.

Fix: make marking columns of a key for read a special case where
register_field_in_read_map() is done instead of plain bitmap_set_bit().

Some test cases are only reproducible in 10.4+, but the fix is applicable
to 10.2+
2021-07-12 22:00:39 +03:00
Nikita Malyavin
7d9ba57da4 [1/2] MDEV-18166 ASSERT_COLUMN_MARKED_FOR_READ failed on tables with vcols
This is the 10.2+ part of the Jira task.

The following bugs regarding virtual column marking have been fixed:

1. UPDATE of a partitioned table, where the optimizer has chosen a
 secondary index to make a filesort;
2. INSERT into a table with a nonblob field generated from a blob, with
 binlog enabled and binlog_row_image=noblob.

3. DELETE from a view on a table with virtual column.

Generally, the assertion happens from the update_virtual_fields() call.

These bugs are root-caused by missing field marking for dependent fields
of a virtual column.

Therefore the fix is: mark all the fields involved in the vcol expression by
calling field->register_field_in_read_map() instead of just setting a single
bit.

Case 3 was reproducible only on 10.4+; however, the problem might have just
been invisible in the earlier versions. The fix is applicable to 10.2-10.3 as
well.
2021-07-12 22:00:39 +03:00
Jan Lindström
f673277491 MDEV-25586 : SIGSEGV in my_strcasecmp_utf8mb3
Fixed a NULL pointer dereference of db.str
2021-05-05 07:29:10 +03:00
Marko Mäkelä
2a7810759d MDEV-22775: Merge 10.4 into 10.5 2021-04-08 08:08:53 +03:00
Alexander Barkov
58780b5afb MDEV-22775 [HY000][1553] Changing name of primary key column with foreign key constraint fails.
Problem:

The problem happened because of a conceptual flaw in the server code:

a. The table level CHARSET/COLLATE clause affected all data types,
  including numeric and temporal ones:

   CREATE TABLE t1 (a INT) CHARACTER SET utf8 [COLLATE utf8_general_ci];

  In the above example, the Column_definition_attributes
  (and then the FRM record) for the column "a" erroneously inherited
  "utf8" as its character set.

b. The "ALTER TABLE t1 CONVERT TO CHARACTER SET csname" statement
   also erroneously affected Column_definition_attributes::charset
   for numeric and temporal data types and wrote "csname" as their
   character set into FRM files.

So now we have arbitrary non-relevant charset ID values for numeric
and temporal data types in all FRM files in the world :)

The code in the server and the other engines did not seem to be affected
by this flaw. Only InnoDB inplace ALTER was affected.

Solution:

Fixing the code so that only character string data types
(CHAR, VARCHAR, TEXT, ENUM, SET):
- inherit the table level CHARSET/COLLATE clause
- get the charset value according to "CONVERT TO CHARACTER SET csname".

Numeric and temporal data types now always get &my_charset_numeric
in Column_definition_attributes::charset and always write its ID into FRM files:
- no matter what the table level CHARSET/COLLATE clause is, and
- no matter what "CONVERT TO CHARACTER SET" says.
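
For illustration, a sketch (hypothetical table name) of the two statements
whose effect on non-string columns is being corrected:

   CREATE TABLE t1 (a INT, b VARCHAR(10)) CHARACTER SET utf8;
   -- only "b" inherits utf8; "a" gets &my_charset_numeric in the FRM
   ALTER TABLE t1 CONVERT TO CHARACTER SET latin1;
   -- only "b" is converted; "a" keeps &my_charset_numeric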

Details:

1. Adding helper classes to pass small parts of HA_CREATE_INFO
   into Type_handler methods:

   - Column_derived_attributes - to pass table level CHARSET/COLLATE,
     so columns that do not have explicit CHARSET/COLLATE clauses
     can derive them from the table level, e.g.

       CREATE TABLE t1 (a VARCHAR(1), b CHAR(1)) CHARACTER SET utf8;

   - Column_bulk_alter_attributes - to pass bulk attribute changes
     generated by the ALTER related code. These bulk changes affect
     multiple columns at the same time:

       ALTER TABLE ... CONVERT TO CHARACTER SET csname;

   Note, passing the whole HA_CREATE_INFO directly to Type_handler
   would not be good: HA_CREATE_INFO is huge and would introduce undesired
   dependencies in sql_type.h and sql_type.cc. The Type_handler API should
   use the smallest possible data types!

2. Type_handler::Column_definition_prepare_stage1() is now responsible
   for setting Column_definition::charset properly, according to the data type,
   for example:

   - For string data types, Column_definition_attributes::charset is set from
     the table level CHARSET/COLLATE clause (if not specified explicitly in
     the column definition).

   - For numeric and temporal fields, Column_definition_attributes::charset is
     set to &my_charset_numeric, no matter what the table level
     CHARSET/COLLATE says.

   - For GEOMETRY, Column_definition_attributes::charset is set to
     &my_charset_bin, no matter what the table level CHARSET/COLLATE says.

   Previously this code (setting `charset`) was outside of
   Column_definition_prepare_stage1(), namely in
   mysql_prepare_create_table(), and was erroneously called for
   all data types.

3. Adding Type_handler::Column_definition_bulk_alter(), to handle
   "ALTER TABLE .. CONVERT TO". Previously this code was inside
   get_sql_field_charset() and was erroneously called for all data types.

4. Removing the Schema_specification_st parameter from
   Type_handler::Column_definition_redefine_stage1().
   Column_definition_attributes::charset is now properly initialized by
   Column_definition_prepare_stage1(). So we don't need access to the
   table level CHARSET/COLLATE clause in Column_definition_redefine_stage1()
   any more.

5. Other changes:
   - Removing global function get_sql_field_charset()

   - Moving the part of the former get_sql_field_charset(), which was
     responsible for inheriting the table level CHARSET/COLLATE clause, to
     new methods:
      -- Column_definition_attributes::explicit_or_derived_charset() and
      -- Column_definition::prepare_charset_for_string().
     This code is only needed for string data types.
     Previously it was erroneously called for all data types.

   - Moving another part, which was responsible to apply the
     "CONVERT TO" clause, to
     Type_handler_general_purpose_string::Column_definition_bulk_alter().

   - Replacing the call to get_sql_field_charset() in sql_partition.cc
     with sql_field->explicit_or_derived_charset() - it is sufficient there.
     The old code was redundant: get_sql_field_charset() was called from
     sql_partition.cc only when no "CONVERT TO CHARACTER SET"
     clause was involved, so its purpose was only to inherit the table
     level CHARSET/COLLATE clause.

   - Moving the code handling the BINCMP_FLAG flag from
     mysql_prepare_create_table() to
     Column_definition::prepare_charset_for_string():
     This code is responsible for resolving the BINARY comparison style
     into the corresponding _bin collation, to do the following transparent
     rewrite:
        CREATE TABLE t1 (a VARCHAR(10) BINARY) CHARSET utf8;  ->
        CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_bin);
     This code is only needed for string data types.
     Previously it was erroneously called for all data types.

6. Renaming Table_scope_and_contents_source_pod_st::table_charset
   to alter_table_convert_to_charset, because the only purpose it's used for
   is handling "ALTER .. CONVERT". The new name is much more self-descriptive.
2021-04-07 12:09:53 +04:00
Otto Kekäläinen
cebf9ee204 Fix various spelling errors still found in code
Reseting -> Resetting
Unknow -> Unknown
capabilites -> capabilities
choosen -> chosen
direcory -> directory
informations -> information
openned -> opened
refered -> referred
to access -> one to access
missmatch -> mismatch
succesfully -> successfully
dont -> don't
2021-03-22 18:10:39 +11:00
Sergei Golubchik
f33e57a9e6 Merge branch '10.4' into 10.5 2021-02-23 13:06:22 +01:00
Sergei Golubchik
e841957416 Merge branch '10.3' into 10.4 2021-02-23 09:25:57 +01:00
Sergei Golubchik
0ab1e3914c Merge branch '10.2' into 10.3 2021-02-22 22:42:27 +01:00
Varun Gupta
b87c342da5 MDEV-11172: EXPLAIN shows non-sensical value for key_len with type=index
The issue happens when secondary keys are extended with primary
key parts. The function TABLE_SHARE::init_from_binary_frm_image()
adds the length bytes for the primary key parts to the length of the
secondary key. This is not needed because, when extended keys are
used, we recalculate the length for the used key parts.
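
A hedged sketch (hypothetical schema, not the original test case) of the
kind of plan affected:

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY (a)) ENGINE=InnoDB;
  EXPLAIN SELECT pk, a FROM t1;
  -- type=index on key `a`; key_len should count only the key parts actually used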

Also removed TABLE_SHARE::total_key_length as it is not used in the code

Approved-by: Monty <monty@mariadb.org>
2021-01-30 14:41:43 +05:30
Marko Mäkelä
961c7938bb Merge 10.4 into 10.5 2021-01-25 12:44:24 +02:00
Marko Mäkelä
3467f63764 Merge 10.3 into 10.4 2021-01-25 11:02:07 +02:00
Sergei Golubchik
3ffd5f28f0 MDEV-17227 Server crash in TABLE_SHARE::init_from_sql_statement_string upon table discovery with non-existent database
* failed init_from_binary_frm_image can clear share->db_plugin,
  don't use it on the error path
* cleanup the test a bit
2021-01-12 10:25:04 +01:00
Sergei Golubchik
f144ce2cfa MDEV-20763 Table corruption or Assertion `btr_validate_index(index, 0, false)' failed in row_upd_sec_index_entry with virtual column and EMPTY_STRING_IS_NULL SQL mode
Unset empty_string_is_null mode when parsing generated columns in a table;
this mode affects parsing.
2021-01-12 10:25:04 +01:00
Marko Mäkelä
6a1e655cb0 Merge 10.4 into 10.5 2020-12-02 18:29:49 +02:00
Marko Mäkelä
589cf8dbf3 Merge 10.3 into 10.4 2020-12-01 19:51:14 +02:00
Vicențiu Ciorbaru
f6549e9544 MDEV-18323 Convert MySQL JSON type to MariaDB TEXT in mysql_upgrade
This patch solves two key problems.
1. There is a type number clash between MySQL and MariaDB. The number
   245, used for MariaDB Virtual Fields, is the same as MySQL's JSON.
   This leads to corrupt FRM errors if unhandled. The code properly
   checks the FRM table version number and, if it matches 5.7+ (until 10.0+),
   it will assume it is dealing with a MySQL table with the JSON
   datatype.
2. MySQL JSON datatype uses a proprietary format to pack JSON data. The
   patch introduces a datatype plugin which parses the format and converts
   it to its string representation.

The intended conversion path is to only use the JSON datatype within
ALTER TABLE <table> FORCE, to force a table recreate. This happens
during mysql_upgrade or via a direct ALTER TABLE <table> FORCE.
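
A sketch of that conversion step (hypothetical table name):

  -- a table imported from MySQL 5.7 with a JSON column:
  ALTER TABLE mysql57_table FORCE;
  -- the table is rebuilt and the JSON values end up as their TEXT string representation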
2020-10-28 11:38:14 +02:00
Marko Mäkelä
1657b7a583 Merge 10.4 to 10.5 2020-10-22 17:08:49 +03:00
Nikita Malyavin
5896a49820 MDEV-19130 Assertion failed in handler::update_auto_increment
add store/restore_auto_increment in period portion insert/update functions
2020-10-14 21:57:58 +10:00
Sujatha
25ede13611 Merge branch '10.4' into 10.5 2020-09-29 16:59:36 +05:30
Monty
0b73ef0688 MDEV-21470 ASAN heap-use-after-free in my_hash_sort_bin
The problem was that the server was calling virtual functions on a record
that was not initialized with new data.
This happened when fill_record() was aborted in the middle because of an
error in save_val() or save_in_field().
2020-09-25 13:07:04 +03:00
Marko Mäkelä
97a4a3872e Merge 10.4 into 10.5 2020-08-26 12:02:07 +03:00
Alexander Barkov
04ce29354b MDEV-23551 Performance degradation in temporal literals in 10.4
Problem:

Queries like this showed performance degradation in 10.4 over 10.3:

  SELECT temporal_literal FROM t1;
  SELECT temporal_literal + 1 FROM t1;
  SELECT COUNT(*) FROM t1 WHERE temporal_column = temporal_literal;
  SELECT COUNT(*) FROM t1 WHERE temporal_column = string_literal;

Fix:

Replacing the universal member "MYSQL_TIME cached_time" in
Item_temporal_literal with data type specific containers:
- Date in Item_date_literal
- Time in Item_time_literal
- Datetime in Item_datetime_literal

This restores the performance, and makes it even better in some cases.
See benchmark results in MDEV.

Also, this change makes further separation of Date, Time, Datetime
from each other, which will make it possible not to derive them from
the too heavy (40 bytes) MYSQL_TIME, and to replace them with smaller data
type specific containers.
2020-08-24 09:17:47 +04:00
Oleksandr Byelkin
48b5777ebd Merge branch '10.4' into 10.5 2020-08-04 17:24:15 +02:00
Marko Mäkelä
50a11f396a Merge 10.4 into 10.5 2020-08-01 14:42:51 +03:00
Marko Mäkelä
9216114ce7 Merge 10.3 into 10.4 2020-07-31 18:09:08 +03:00
Marko Mäkelä
66ec3a770f Merge 10.2 into 10.3 2020-07-31 13:51:28 +03:00
Marko Mäkelä
4ec032b492 Merge 10.4 into 10.5 2020-07-21 17:33:16 +03:00
Marko Mäkelä
b1538f4d60 Merge 10.3 into 10.4 2020-07-21 16:36:47 +03:00
Nikita Malyavin
ebca70ead3 fix c++98 build 2020-07-21 23:12:32 +10:00
Nikita Malyavin
5acd391e8b MDEV-16039 Crash when selecting virtual columns generated using functions with DAYNAME()
* Allocate items on thd->mem_root while refixing vcol exprs
* Register vcol tree changes and roll them back after the statement is executed.

Explanation:
Due to collation implementation specifics, an Item tree could change while being fixed.
The tricky thing here is to do it on a proper arena.
It's usually not a problem when a field is deterministic; however, it becomes a pain otherwise.
A non-deterministic field should be refixed on each statement, since it depends on the environment state.
Changing the tree is temporary and therefore it should be reverted after the statement execution.
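
A minimal sketch (hypothetical names) of such a non-deterministic virtual column:

  CREATE TABLE t1 (d DATE, dn VARCHAR(9) AS (DAYNAME(d)) VIRTUAL);
  INSERT INTO t1 (d) VALUES ('2020-07-21');
  SELECT dn FROM t1;  -- DAYNAME() depends on the session's lc_time_names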
2020-07-21 16:18:00 +10:00
Aleksey Midenkov
af83ed9f0e MDEV-20661 Virtual fields are not recalculated on system fields value assignment
Fix stale virtual field value in 4 cases: when a virtual field depends
on row_start/row_end in a timestamp/trx_id versioned table. The row_start
dependency is recalculated in vers_update_fields() (SQL and InnoDB
layers). The row_end dependency is recalculated on history row insert.
2020-07-20 18:28:08 +03:00
Marko Mäkelä
a85f81af03 MDEV-22535 fixup: Define a single-caller function inline
Let us avoid any overhead in release builds, for an empty function.
2020-07-04 14:28:11 +03:00
Monty
0fd89a1a89 Merge remote-tracking branch 'origin/10.4' into 10.5 2020-07-03 23:31:12 +03:00
Sergei Golubchik
7a4afad969 compilation fix
include/my_valgrind.h:88:112: error: ‘void* memset(void*, int, size_t)’ writing to an object of non-trivial type ‘key_map’ {aka ‘class Bitmap<64>’}; use assignment instead [-Werror=class-memaccess]

in this case it's safe, Bitmap<> is trivial enough
2020-07-03 15:01:21 +02:00
Monty
5211af1c16 Merge remote-tracking branch 'origin/10.3' into 10.4 2020-07-03 00:35:28 +03:00
Marko Mäkelä
b6ec1e8bbf MDEV-20377 post-fix: Introduce MEM_MAKE_ADDRESSABLE
In AddressSanitizer, we only want memory poisoning to happen
in connection with custom memory allocation or freeing.

The primary use of MEM_UNDEFINED is for declaring memory uninitialized
in Valgrind or MemorySanitizer. We do not want MEM_UNDEFINED to
have the unwanted side effect that AddressSanitizer would no longer
be able to complain about accessing unallocated memory.

MEM_UNDEFINED(): Define as no-op for AddressSanitizer.

MEM_MAKE_ADDRESSABLE(): Define as MEM_UNDEFINED() or
ASAN_UNPOISON_MEMORY_REGION().

MEM_CHECK_ADDRESSABLE(): Wrap also __asan_region_is_poisoned().
2020-07-02 17:59:28 +03:00
Monty
65f831d17c Fixed bugs found by valgrind
- Some of the bug fixes are backports from 10.5!
- The fix in innobase/fil/fil0fil.cc is just a backport to get fewer
  error messages in mysqld.1.err when running with valgrind.
- Renamed HAVE_valgrind_or_MSAN to HAVE_valgrind
2020-07-02 17:57:34 +03:00
Monty
6cee9b1953 MDEV-22535 TABLE::initialize_quick_structures() takes 0.5% in oltp_read_only
Fixed by:
- Make all quick_* variables allocated according to the real number of keys
  instead of MAX_KEY
- Store all the quick_* items in a separate allocated structure (OPT_RANGE)
- Ensure we don't access any quick* variable without first checking
  opt_range_keys.is_set().  Thanks to this, we don't need any
  pre-initialization of quick* variables anymore.

Some renames were done to use the new structure:
table->quick_keys                -> table->opt_range_keys
table->quick_rows[X]             -> table->opt_range[X].rows
table->quick_key_parts[X]        -> table->opt_range[X].key_parts
table->quick_costs[X]            -> table->opt_range[X].cost
table->quick_index_only_costs[X] -> table->opt_range[X].index_only_cost
table->quick_n_ranges[X]         -> table->opt_range[X].ranges
table->quick_condition_rows      -> table->opt_range_condition_rows

This patch should both decrease memory needed for TABLE objects
(3528 -> 984 + keyinfo) and increase performance, thanks to fewer
initializations per query, and more localized memory, thanks to the
opt_range structure.
2020-07-02 16:59:14 +03:00
Monty
3f2044ae99 MDEV-22535 TABLE::initialize_quick_structures() takes 0.5% in oltp_read_only
- Removed not needed bzero in void TABLE::initialize_quick_structures().
- Replaced bzero with TRASH_ALLOC() to have this change verified with
  memory checkers
- Added missing table->quick_keys.is_set in table_cond_selectivity()
2020-07-02 14:25:41 +03:00
Marko Mäkelä
1813d92d0c Merge 10.4 into 10.5 2020-07-02 09:41:44 +03:00
Varun Gupta
cc0dca3663 MDEV-22910: SIGSEGV in Opt_trace_context::is_started & SIGSEGV in Json_writer::add_table_name (on optimized builds)
Make sure to initialize members of TABLE::reginfo when TABLE::init is called. In this case the problem
was that table->reginfo.join_tab was set for the SELECT query and then was reused by the UPDATE query.
This case occurred only when the SELECT query had a degenerate join.
2020-06-30 18:29:02 +05:30
Monty
6a3b581b90 MDEV-19745 BACKUP STAGE BLOCK_DDL hangs on flush sequence table
The problem was that FLUSH TABLES was trying to read the latest sequence state,
which conflicted with a running ALTER SEQUENCE. Removed the reading
of the state when opening a table for FLUSH, as it's not needed in this
case.

Other thing:
- Fixed a potential issue with concurrently running ALTER SEQUENCE where
  the later ALTER could potentially read old data
2020-06-14 19:39:43 +03:00
Aleksey Midenkov
762bf7a03b MDEV-22602 Disable UPDATE CASCADE for SQL constraints
A CHECK constraint is checked by check_expression(), which walks its
items and gets into Item_field::check_vcol_func_processor() to check
for conformity with the foreign key list.

WITHOUT OVERLAPS is checked for same conformity in
mysql_prepare_create_table().

Long uniques are already impossible with InnoDB foreign keys. See
ER_CANT_CREATE_TABLE in test case.
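
A sketch (hypothetical table names, illustration only) of the kind of
definition the conformity check looks at - a CHECK constraint column that
also appears in an ON UPDATE CASCADE foreign key:

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT, CHECK (id > 0),
                      FOREIGN KEY (id) REFERENCES parent (id)
                      ON UPDATE CASCADE) ENGINE=InnoDB;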

2 accompanying bugs fixed (test main.constraints failed):

1. check->name.str lived on SP execute mem_root while "check" obj
itself lives on SP main mem_root. On second SP execute check->name.str
had garbage data. Fixed by allocating from thd->stmt_arena->mem_root
which is SP main mem_root.

2. CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed with
VCOL_FIELD_REF. VCOL_FIELD_REF is assigned in check_expression() and
then detected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
handle_if_exists_options().

Existing cases for MDEV-16932 in main.constraints cover both fixes.
2020-06-12 11:12:40 +03:00
Sergei Golubchik
89a33303c4 remove dead code
reduce the amount of engine-specific code in the server,
particularly as it does not serve any purpose now.

may be needed for VP engine,
to be reconsidered in MDEV-7795
2020-06-09 14:32:43 +02:00
Marko Mäkelä
0e69f601aa Merge 10.4 into 10.5 2020-06-07 12:22:06 +03:00
Sachin
e208f91ba8 MDEV-21804 Assertion `marked_for_read()' failed upon INSERT into table with long unique blob under binlog_row_image=NOBLOB
Problem: Calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap on
for a virtual column, and henceforth mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps().

This bug is not specific to long unique; it also fails for this case:
   create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
2020-06-07 12:07:36 +05:30
Kentoku SHIBA
23c8adda74 MDEV-6268 SPIDER table with no COMMENT clause causes queries to wait forever
Add looping check

Conflicts:
	sql/table.h
2020-06-05 17:29:59 +09:00
Nikita Malyavin
35d327fddb MDEV-22599 WITHOUT OVERLAPS does not work with prefix indexes
cmp_max is used instead of cmp to compare key_parts
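
A sketch (hypothetical names) of a WITHOUT OVERLAPS key using a prefix key part:

  CREATE TABLE t1 (id VARCHAR(100), s DATE, e DATE,
                   PERIOD FOR p (s, e),
                   PRIMARY KEY (id(10), p WITHOUT OVERLAPS));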
2020-06-05 20:04:37 +10:00
Marko Mäkelä
701efbb25b Merge 10.4 into 10.5 2020-06-03 09:45:39 +03:00
Marko Mäkelä
8059148154 Merge 10.3 into 10.4 2020-06-03 07:32:09 +03:00
Marko Mäkelä
8300f639a1 Merge 10.2 into 10.3 2020-06-02 10:25:11 +03:00
Marko Mäkelä
d72eebaa3d Merge 10.1 into 10.2 2020-06-01 09:33:03 +03:00
Marko Mäkelä
4a0b56f604 Merge 10.4 into 10.5 2020-05-31 10:28:59 +03:00
Marko Mäkelä
6da14d7b4a Merge 10.3 into 10.4 2020-05-30 11:04:27 +03:00
Sergey Vojtovich
c279878493 Thread safe histograms loading
Previously multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this. But given the amount of data
races in this code, it'd happen sooner or later.

To avoid scalability bottleneck, histograms loading is protected by
per-TABLE_SHARE atomic variable.

Whenever histograms were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.

Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by atomic variable. If histograms are being loaded
concurrently, statement waits until load is completed.

- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
  meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
  TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri state
  atomic variable.
- Simplified away alloc_histograms_for_table_share().

Note: there's still likely a data race if a thread attempts to access
histogram data after it failed to load it (because of a concurrent load).
It was there previously and goes out of the scope of this effort. One way
of fixing it could be reviving TABLE::histograms_are_read and adding
appropriate checks whenever it is needed.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
Sergey Vojtovich
609a0d3db3 Thread safe statistics loading
Previously multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this. But given the amount of data
races in this code, it'd happen sooner or later.

To avoid scalability bottleneck, statistics loading is protected by
per-TABLE_SHARE atomic variable.

Whenever statistics were loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.

Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by atomic variable. If statistics are being loaded
concurrently, statement waits until load is completed.

TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri state atomic
variable.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
Aleksey Midenkov
57f7b4866f MDEV-16937 Strict SQL with system versioned tables causes issues (10.4)
Respect system fields in NO_ZERO_DATE mode.

This is the subject for refactoring in MDEV-19597
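
A sketch (hypothetical table name) of the case being addressed - the generated
row_start/row_end system fields must not be rejected under this mode:

  SET SQL_MODE='STRICT_ALL_TABLES,NO_ZERO_DATE';
  CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING;
  INSERT INTO t1 VALUES (1);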

Conflict resolution from 7d5223310789f967106d86ce193ef31b315ecff0
2020-05-29 11:45:19 +03:00
Aleksey Midenkov
19da9a51ae MDEV-16937 Strict SQL with system versioned tables causes issues
Respect system fields in NO_ZERO_DATE mode.

This is the subject for refactoring in MDEV-19597
2020-05-28 22:22:20 +03:00
Marko Mäkelä
dad7a8ee7d Merge 10.2 into 10.3 2020-05-27 17:10:39 +03:00
Aleksey Midenkov
f1f14c2092 MDEV-20015 Assertion `!in_use->is_error()' failed in TABLE::update_virtual_field
update_virtual_field() is called as part of index rebuild in
ha_myisam::repair() (MDEV-5800) which is done on bulk INSERT finish.

The assertion in update_virtual_field() was added as part of MDEV-16222
because update_virtual_field() returns in_use->is_error(). The idea:
the error status set before update_virtual_field() and the status
returned by update_virtual_field() had wrongly mixed semantics. The former
can falsely influence the latter.
2020-05-26 13:15:45 +03:00
Marko Mäkelä
ca38b6e427 Merge 10.3 into 10.4 2020-05-26 11:54:55 +03:00
Marko Mäkelä
ecc7f305dd Merge 10.2 into 10.3 2020-05-25 19:41:58 +03:00
Monty
d8e2fa0c49 Fixed compiler failure on windows 2020-05-23 14:58:33 +03:00
Monty
be647ff14d Fixed deadlock with LOCK TABLES and ALTER TABLE
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock

This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
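
A sketch (hypothetical table name) of the locking pattern involved:

  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE, t1 AS t1_alias WRITE;  -- the same table locked twice
  ALTER TABLE t1 ADD COLUMN b INT;             -- previously could hang or assert
  UNLOCK TABLES;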

Major change is to change all instances of
table->m_needs_reopen= true;
to
table->mark_table_for_reopen();

The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list and when we
reopen the tables, we first close all tables before reopening
and locking them.

Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
  are no tables marked for reopen. (performance)
2020-05-23 14:58:33 +03:00
Alexander Barkov
bdab5b667e Merge remote-tracking branch 'origin/10.1' into 10.2 2020-05-22 14:22:45 +04:00
Alexander Barkov
cb9c49a9b2 MDEV-22111 ERROR 1064 & 1033 and SIGSEGV on CREATE TABLE w/ various charsets on 10.4/5 optimized builds | Assertion `(uint) (table_check_constraints - share->check_constraints) == (uint) (share->table_check_constraints - share->field_check_constraints)' failed
The code incorrectly assumed in multiple places that TYPELIB
values cannot have 0x00 bytes inside. In fact they can:

  CREATE TABLE t1 (a ENUM(0x61, 0x0062) CHARACTER SET BINARY);

Note, the TYPELIB value encoding used in FRM is ambiguous about 0x00.

So this fix is partial.

It fixes 0x00 bytes in many (but not all) places:

- In the middle or in the end of a value:
    CREATE TABLE t1 (a ENUM(0x6100) ...);
    CREATE TABLE t1 (a ENUM(0x610062) ...);

- In the beginning of the first value:
    CREATE TABLE t1 (a ENUM(0x0061));
    CREATE TABLE t1 (a ENUM(0x0061), b ENUM('b'));

- In the beginning of the second (and following) value of the *last* ENUM/SET
  in the table:

    CREATE TABLE t1 (a ENUM('a',0x0061));
    CREATE TABLE t1 (a ENUM('a'), b ENUM('b',0x0061));

However, it does not fix 0x00 when:

- 0x00 byte is in the beginning of a value of a non-last ENUM/SET
  causes an error:

   CREATE TABLE t1 (a ENUM('a',0x0061), b ENUM('b'));
   ERROR 1033 (HY000): Incorrect information in file: './test/t1.frm'

  This is an ambiguous case and will be fixed separately.
  We need a new TYPELIB encoding to fix this.

Details:

- unireg.cc

  The function pack_header() incorrectly used strlen() to detect
  a TYPELIB value length. Adding a new function typelib_values_packed_length()
  which uses TYPELIB::type_lengths[n] to detect the n-th value length,
  and reusing the new function in pack_header() and packed_fields_length()

- table.cc
  fix_type_pointers() assumed in multiple places that values cannot have
  0x00 inside and used strlen(TYPELIB::type_names[n]) to set
  the corresponding TYPELIB::type_lengths[n].

  Also, fix_type_pointers() did not check the encoded data for consistency.

  Rewriting fix_type_pointers() code to populate TYPELIB::type_names[n] and
  TYPELIB::type_lengths[n] at the same time, so no additional loop
  with strlen() is needed any more.

  Adding many data consistency tests.

  Fixing the main loop in fix_type_pointers() to use memchr() instead of
  strchr() to handle 0x00 properly.

  Fixing create_key_infos() to return the result in a LEX_STRING rather
  than in a char*.
2020-05-22 07:47:49 +04:00
Marko Mäkelä
23047d3ed4 Merge 10.4 into 10.5 2020-05-18 17:30:02 +03:00
Marko Mäkelä
4f29d776c7 Merge 10.3 into 10.4 2020-05-16 06:27:55 +03:00
Alexander Barkov
ef65c39ab3 Merge remote-tracking branch 'origin/10.2' into 10.3 2020-05-14 10:01:54 +04:00
Monty
edbf124515 Ensure that auto_increment fields are marked properly on update
MDEV-19622 Assertion failures in
ha_partition::set_auto_increment_if_higher upon UPDATE on Aria
table
2020-05-13 23:30:34 +03:00
Monty
eca5c2c67f Added support for more functions when using partitioned S3 tables
MDEV-22088 S3 partitioning support

All ALTER PARTITION commands should now work on S3 tables except

REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION

In addition, PARTITIONED S3 TABLES can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files on S3
for partitioned shared (S3) tables.

The discovery methods are enhanced by allowing engines that support
discovery to also support discovery of the partitioned table's .frm and .par files.

Things in more detail

- The .frm and .par files of partitioned tables are stored in S3 and kept
  in sync.
- Added hton callback create_partitioning_metadata to inform handler
  that metadata for a partitioned file has changed
- Added back handler::discover_check_version() to be able to check if
  a table's or a part table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
  table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
  don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around
- Appended extension to path for writefrm() to be able to reuse the function
  for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of
  non-critical tracing.
2020-04-19 17:33:51 +03:00
Aleksey Midenkov
6fb3e83d74 MDEV-21889 Typo fix: ER_KEY_DOES_NOT_EXISTS
libmariadb revision updated.
2020-04-04 00:52:54 +03:00
Marko Mäkelä
51a9dd6793 Fix GCC 9.3.0 -Wstrict-aliasing
copy_keys_from_share(): Use reinterpret_cast instead of
manipulating a reference to a type-punned pointer.

This cleans up after the cleanup
commit 0515577d12.
2020-04-01 11:33:58 +03:00
Nikita Malyavin
259fb1cbed MDEV-16978 Application-time periods: WITHOUT OVERLAPS
* The overlaps check is implemented on a handler level per row command.
  It creates a separate cursor (actually, another handler instance) and
  caches it inside the original handler, when ha_update_row or
  ha_insert_row is issued. Cursor closes on unlocking the handler.

* Containing the same key in the index means a unique constraint violation
  even in usual terms. So we fetch the left and right neighbours and check
  that they have the same key prefix, excluding from the key only the period part.
  If it doesn't match, then there's no such neighbour, and the check passes.
  Otherwise, we check if this neighbour intersects with the considered key.

* The check does not introduce a new error; it fails with the ER_DUPP_KEY error
  (see the sketch below). This might break the REPLACE workflow and should be
  fixed separately.
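
A short usage sketch (hypothetical names) of the constraint this implements:

  CREATE TABLE t1 (id INT, s DATE, e DATE,
                   PERIOD FOR p (s, e),
                   PRIMARY KEY (id, p WITHOUT OVERLAPS));
  INSERT INTO t1 VALUES (1, '2020-01-01', '2020-03-01');
  INSERT INTO t1 VALUES (1, '2020-02-01', '2020-04-01');  -- rejected: overlapping period for the same key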
2020-03-31 17:42:34 +02:00
Sergei Golubchik
0515577d12 cleanup: prepare "update_handler" for WITHOUT OVERLAPS
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong
2020-03-31 17:42:34 +02:00
Nikita Malyavin
3bef848226 cleanup: reduce code duplication in read_extra2() 2020-03-31 17:42:33 +02:00
Monty
eb483c5181 Updated optimizer costs in multi_range_read_info_const() and sql_select.cc
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
  not 1.0.  In this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
  TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed bug when using test_if_cheaper_ordering where we didn't use
  keyread if index was changed
- Fixed a bug where we didn't use index only read when using order-by-index
- Added keyread_time() to HEAP.
  The default keyread_time() was optimized for blocks and not suitable for
  HEAP. The effect was that HEAP preferred table scans over ranges for btree
  indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have the same cost for simple ranges
  Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
  we favor ref over range for simple queries.
- Fixed that matching_candidates_in_table() uses same number of records
  as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
  HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
  This was to ensure that handler::keyread_time() doesn't give
  higher cost for heap tables than for normal tables. One effect of
  this is that heap and derived tables stored in heap will prefer
  key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
  multi_range_read_info_const(). All index cost calculation is now
  done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
  so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_costs() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
2020-03-27 03:58:32 +02:00
Monty
91ab42a823 Clean up and speed up interfaces for binary row logging
MDEV-21605 Clean up and speed up interfaces for binary row logging
MDEV-21617 Bug fix for previous version of this code

The intention is to have as few 'if' as possible in ha_write() and
related functions. This is done by pre-calculating once per statement the
row_logging state for all tables.

Benefits are simpler and faster code both when binary logging is disabled
and when it's enabled.

Changes:
- Added handler->row_logging to make it easy to check if a table should be
  row logged. This also made it easier to disable row logging for system,
  internal and temporary tables.
- The tables row_logging capabilities are checked once per "statements
  that updates tables" in THD::binlog_prepare_for_row_logging() which
  is called when needed from THD::decide_logging_format().
- Removed most usage of tmp_disable_binlog(), reenable_binlog() and
  temporary saving and setting of thd->variables.option_bits.
- Moved checks that can't change during a statement from
  check_table_binlog_row_based() to check_table_binlog_row_based_internal()
- Removed flag row_already_logged (used by sequence engine)
- Moved binlog_log_row() to a handler::
- Moved write_locked_table_maps() to THD::binlog_write_table_maps() as
  most other related binlog functions are in THD.
- Removed binlog_write_table_map() and binlog_log_row_internal() as
  they are now obsolete as 'has_transactions()' is pre-calculated in
  prepare_for_row_logging().
- Remove 'is_transactional' argument from binlog_write_table_map() as this
  can now be read from handler.
- Changed order of 'if's in handler::external_lock() and wsrep_mysqld.h
  to first evaluate fast and likely cases before more complex ones.
- Added error checking in ha_write_row() and related functions if
  binlog_log_row() failed.
- Don't clear check_table_binlog_row_based_result in
  clear_cached_table_binlog_row_based_flag() as it's not needed.
- THD::clear_binlog_table_maps() has been replaced with
  THD::reset_binlog_for_next_statement()
- Added 'MYSQL_OPEN_IGNORE_LOGGING_FORMAT' flag to open_and_lock_tables()
  to avoid calculating of binary log format for internal opens. This flag
  is also used to avoid reading statistics tables for internal tables.
- Added OPTION_BINLOG_LOG_OFF as a simple way to turn off binlog temporarily
  for create (instead of using THD::sql_log_bin_off).
- Removed flag THD::sql_log_bin_off (not needed anymore)
- Speed up THD::decide_logging_format() by remembering if blackhole engine
  is used and avoid a loop over all tables if it's not used
  (the common case).
- THD::decide_logging_format() is not called anymore if no tables are used
  for the statement. This will speed up pure stored procedure code with
  about 5%+ according to some simple tests.
- We now get annotated events on slave if a CREATE ... SELECT statement
  is transformed on the slave from statement to row logging.
- In the original code, the master could come into a state where row
  logging is enforced for all future events even if statement logging could
  be used. This is now partly fixed.

Other changes:
- Ensure that all tables used by a statement has query_id set.
- Had to restore the row_logging flag for not used tables in
  THD::binlog_write_table_maps (not normal scenario)
- Removed injector::transaction::use_table(server_id_type sid, table tbl)
  as it's not used.
- Cleaned up set_slave_thread_options()
- Some more DBUG_ENTER/DBUG_RETURN, code comments and minor indentation
  changes.
- Ensure we only call THD::decide_logging_format_low() once in
  mysql_insert() (inefficiency).
- Don't annotate INSERT DELAYED
- Removed zeroing pos_in_table_list in THD::open_temporary_table() as it's
  already 0
2020-03-24 21:00:03 +02:00
Monty
4ef437558a Improve update handler (long unique keys on blobs)
MDEV-21606 Improve update handler (long unique keys on blobs)
MDEV-21470 MyISAM and Aria start_bulk_insert doesn't work with long unique
MDEV-21606 Bug fix for previous version of this code
MDEV-21819 2 Assertion `inited == NONE || update_handler != this'

- Move update_handler from TABLE to handler
- Move out initialization of update handler from ha_write_row() to
  prepare_for_insert()
- Fixed that INSERT DELAYED works with update handler
- Give an error if using long unique with an autoincrement column
- Added handler function to check if table has long unique hash indexes
- Disable write cache in MyISAM and Aria when using update_handler as,
  if the cache is used, the row will not be inserted until the end of the
  statement and update_handler would not find conflicting rows.
- Removed not used handler argument from
  check_duplicate_long_entries_update()
- Syntax cleanups
  - Indentation fixes
  - Don't use single character identifiers for arguments
2020-03-24 21:00:02 +02:00
Oleksandr Byelkin
fad47df995 Merge branch '10.4' into 10.5 2020-03-11 17:52:49 +01:00
Alexander Barkov
a1e330de5a MDEV-21743 Split up SUPER privilege to smaller privileges 2020-03-10 23:49:47 +04:00
Sergei Golubchik
c1c5222cae cleanup: PSI key is *always* the first argument 2020-03-10 19:24:23 +01:00
Sergei Golubchik
05779bc6f1 perfschema mdl related instrumentation changes 2020-03-10 19:24:22 +01:00
Sergei Golubchik
7c58e97bf6 perfschema memory related instrumentation changes 2020-03-10 19:24:22 +01:00
Sergei Golubchik
2ac3121af2 perfschema - various collateral cleanups and small changes 2020-03-10 19:24:22 +01:00
Eugene Kosov
1394216e3d MDEV-21669 InnoDB: Table ... contains <n> indexes inside InnoDB, which is different from the number of indexes <n> defined in the MariaDB
compare_keys_but_name(): do not use KEY_PART_INFO::field for
Field::is_equal(). Following the logic of that code we need to
compare fields of a table. But KEY_PART_INFO::field sometimes
(when key part is shorter than table field) is a different field.
In that case Field::is_equal() returns an incorrect result and
problems occur.

KEY_PART_INFO::field may become some strange field in
open_frm_error open_table_from_share(). I think this is
incorrect logic, some technical debt. I'm not fixing it right now,
because I don't have time. But I'm making Field::field_length
a const class member. Then, the only fishy code which changed that
field now requires a const_cast<>. I'm bringing attention to that
code with it. This change should not affect the logic of the
program in any way.
2020-02-13 15:10:25 +03:00
Oleksandr Byelkin
4b087e1754 Merge branch '10.4' into 10.5 2020-02-12 08:55:17 +01:00
Oleksandr Byelkin
646d1ec83a Merge branch '10.3' into 10.4 2020-02-11 14:40:35 +01:00
Alexander Barkov
83e75b39b3 MDEV-21702 Add a data type for privileges 2020-02-11 08:10:26 +04:00
Oleksandr Byelkin
58b70dc136 Merge branch '10.2' into 10.3 2020-02-10 20:34:16 +01:00
Oleksandr Byelkin
594282a534 Merge branch '10.1' into 10.2 2020-02-10 14:31:39 +01:00
Oleksandr Byelkin
716161ea03 Merge branch '5.5' into 10.1 2020-02-10 14:18:00 +01:00
Anel Husakovic
4932ec871f Clean the comment for table_f_count parameter
Deleted with commit: c70a9fa1e3
2020-01-29 12:50:17 +01:00
Alexander Barkov
f1e13fdc8d MDEV-21581 Helper functions and methods for CHARSET_INFO 2020-01-28 12:29:23 +04:00
Aleksey Midenkov
8ed646f071 Merge 10.4 into 10.5 2019-12-02 13:35:54 +03:00
Aleksey Midenkov
0b8b11b0b1 Merge 10.3 into 10.4 2019-12-02 12:51:53 +03:00
Aleksey Midenkov
97aa07abf5 MDEV-21147 Assertion `marked_for_read()' upon UPDATE on versioned table via view
"write set" for replication finally got its correct place
(mark_columns_per_binlog_row_image()). When done generally in
mark_columns_needed_for_update(), it affects the optimization
algorithm: used_key_is_modified and query_plan.using_io_buffer are
wrongly set, and that leads to a wrong prepare_for_keyread() which limits
read_set.
2019-12-02 11:48:37 +03:00
Aleksey Midenkov
498a96a478 MDEV-20441 ER_CRASHED_ON_USAGE upon update on versioned Aria table
Turn read cache off for update and multi-update for versioned
table. no_cache is reinited on each TABLE open because it is
applicable for specific algorithms.

As a side fix vers_insert_history_row() honors vers_write setting.

Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted inside
this loop, the cache misses it and fails with an error.

TODO:

Currently maria_extra() does not support SEQ_READ_APPEND. Probably it
might be possible to use this type of cache.
2019-12-02 11:48:37 +03:00
Aleksey Midenkov
0c05a2ed71 Merge 10.4 into 10.5 2019-11-25 17:24:09 +03:00
Aleksey Midenkov
33f55789d3 MDEV-18727 improve DML operation of System Versioning (10.4)
UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().

Additional DML cases in view.test
2019-11-25 16:01:43 +03:00
Aleksey Midenkov
0076dce2c8 MDEV-18727 improve DML operation of System Versioning
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables

UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().

Additional DML cases in view.test
2019-11-22 14:29:03 +03:00
Marko Mäkelä
1fe6e5a1a5 Merge 10.4 into 10.5 2019-11-12 17:21:37 +02:00
Marko Mäkelä
33cb10d4e9 Merge 10.3 into 10.4 2019-11-12 16:55:44 +02:00
Andrei Elkin
d103c5a489 merge 10.2->10.3 with conflict resolutions 2019-11-11 16:28:21 +02:00
Andrei Elkin
26fd880d5e manual merge 10.1->10.2 2019-11-11 16:03:43 +02:00
Varun Gupta
b1ab2ba583 MDEV-20519: Query plan regression with optimizer_use_condition_selectivity > 1
The issue here is the wrong estimate of the cardinality of a partial join:
the cardinality is too high because the function table_cond_selectivity()
returns an absurd number 100 while selectivity cannot be greater than 1.

When accessing table t by outer reference t1.a via index we do not perform any
range analysis for t. Yet we see that TABLE::quick_key_parts[key] and
TABLE::quick_rows[key] contain a non-zero value though these should have
remained untouched and equal to 0.

Thus the real cause of the problem is that TABLE::init does not clean the arrays
TABLE::quick_key_parts[] and TABLE::quick_rows[].
It should have done it because the TABLE structure created for any
instance of a table can be reused for many queries.
2019-11-07 15:24:21 +05:30
Oleksandr Byelkin
3ad37ed0eb Merge 10.4 into 10.5 2019-11-07 08:52:30 +01:00
Marko Mäkelä
ec40980ddd Merge 10.3 into 10.4 2019-11-01 15:23:18 +02:00
Monty
b62101f84b Fixes for binary logging --read-only mode
- Any temporary tables created under read-only mode will never be logged
  to the binary log.  Any usage of these tables to update normal tables, even
  after read-only has been disabled, will use row-based logging (as the
  temporary table will not be on the slave); see the sketch after this list.
- Analyze, check and repair table will not be logged in read-only mode.
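
A sketch (hypothetical table names) of the temporary-table case above:

  SET GLOBAL read_only=1;
  CREATE TEMPORARY TABLE tmp1 (a INT);   -- never written to the binary log
  SET GLOBAL read_only=0;
  INSERT INTO t1 SELECT * FROM tmp1;     -- row-based logging, since tmp1 is not on the slave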

Other things:
- Removed unused variables in
  MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged. This
  is used to update table_share->table_creation_was_logged.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged will set the binlog format
  to row logging.
- Removed not needed/wrong setting of table->s->table_creation_was_logged
  in create_table_from_items()
2019-10-20 11:52:29 +03:00
Marko Mäkelä
d04f2de80a Merge 10.4 into 10.5 2019-10-11 08:41:36 +03:00
Marko Mäkelä
09afd3da1a Merge 10.3 into 10.4 2019-10-10 21:30:40 +03:00
Aleksey Midenkov
a92f3146d2 MDEV-19406 Assertion on updating view of join with versioned table
TABLE::mark_columns_needed_for_update(): use_all_columns() assigns the
pointer of all_set into read_set and write_set, but this is not good
since all_set is changed later by
TABLE::mark_columns_used_by_index_no_reset().

Do column_bitmaps_signal() whenever we change read_set/write_set.
2019-10-10 00:20:34 +03:00
Alexander Barkov
c717483c9d MDEV-20016 Add MariaDB_DATA_TYPE_PLUGIN 2019-10-04 22:14:44 +04:00
Alexander Barkov
cefe5bb6b3 A cleanup for MDEV-20042 Implement EXTRA2_FIELD_DATA_TYPE_INFO in FRM
Adding error reporting (ER_UNKNOWN_DATA_TYPE) when a handler name read
from EXTRA2_FIELD_DATA_TYPE_INFO is not known to the server.
2019-10-02 18:10:58 +04:00
Alexander Barkov
02dea3ffd5 MDEV-20706 Store scale in Column_definition::decimals rather than Column_definition::pack_flag 2019-10-01 13:12:46 +04:00
Alexander Barkov
1ae09ec863 Merge remote-tracking branch 'origin/10.4' into 10.5 2019-10-01 11:44:27 +04:00
Aleksey Midenkov
58fdf5b2fa MDEV-16144 Default TIMESTAMP clause for SELECT from versioned
1. Removed TIMESTAMP/TRANSACTION unit auto-detection in favor of default TIMESTAMP.

Reasons:

1.1. rare practical use and doubtful advantage of such auto-detection;
1.2. it conflicts with MDEV-16226 (TRX_ID-based versioned tables performance improvement).

Needless check_unit membership removed.

2. SQL: versioning type handling refactoring

Vers_type_handler hierarchy stores versioning properties of type.

virtual Type_handler::vers() accesses specialization of
Vers_type_handler for specific type.

virtual Vers_type_handler::kind() returns versioning kind
(timestamp/trx_id).

Removed Type_handler::Vers_history_point_check_unit() in favor of
Type_handler::vers().

Renames:

require_timestamp() -> require_timestamp_error()
require_trx_id() -> require_trx_id_error()

EDIT by Alexander Barkov (@abarkov):

check_sys_fields() moved to Vers_type_handler::check_sys_fields()
2019-09-30 14:05:09 +03:00
Alexander Barkov
f1dcbc2d9a MDEV-20639 ASAN SEGV in get_prefix upon modifying base column type with existing indexed virtual column 2019-09-28 12:34:57 +04:00
Alexander Barkov
fdd00665c2 Revert "Part3: MDEV-18156 Assertion 0' failed or btr_validate_index(index, 0, false)' in row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE with PAD_CHAR_TO_FULL_LENGTH"
This reverts commit 5a9e2b77d4 in 10.5,
so it produces an error on "bad" generated columns.

Note, 10.2, 10.3, 10.4 produce only warnings, for backward compatibility.
2019-09-11 13:52:33 +04:00
Alexander Barkov
0636645e7e Merge remote-tracking branch 'origin/10.4' into 10.5 2019-09-11 11:46:31 +04:00
Sergei Golubchik
8885e7ba78 Merge branch '10.3' into 10.4 2019-09-06 20:12:11 +02:00
Sergei Golubchik
f80e02e043 Merge branch '10.2' into 10.3 2019-09-06 16:58:39 +02:00
Sergei Golubchik
5a9e2b77d4 Part3: MDEV-18156 Assertion 0' failed or btr_validate_index(index, 0, false)' in row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE with PAD_CHAR_TO_FULL_LENGTH
Don't break compatibility in GA releases.
Warn the user, but don't refuse to create a table until 10.5
2019-09-06 16:35:56 +02:00
Marko Mäkelä
4081b7b27a Merge 10.4 into 10.5 2019-09-06 17:16:40 +03:00
Marko Mäkelä
780d2bb8a7 Merge 10.4 into 10.5 2019-09-06 14:25:20 +03:00
Sergei Golubchik
244f0e6dd8 Merge branch '10.3' into 10.4 2019-09-06 11:53:10 +02:00
Marko Mäkelä
537f8594a6 Merge 10.2 into 10.3 2019-09-04 17:52:04 +03:00
Alexander Barkov
7e08ac0b41 Merge 10.2 (up to commit ef00ac4c86) into 10.3 2019-09-04 10:19:58 +04:00
Sergei Golubchik
17ab02f4b0 cleanup: on update default now
* remove one level of virtual functions
* remove redundant checks
* remove an if() as the value is always known at compilation time

don't pretend that "DEFAULT expr" and "ON UPDATE DEFAULT NOW"
are "basically the same thing"
2019-09-03 20:34:30 +02:00
Alexander Barkov
ef00ac4c86 Part2: MDEV-18156 Assertion 0' failed or btr_validate_index(index, 0, false)' in row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE with PAD_CHAR_TO_FULL_LENGTH
This patch allows the server to open old tables that have
"bad" generated columns (i.e. indexed virtual generated columns,
persistent generated columns) that depend on sql_mode,
for general things like SELECT, INSERT, DROP, etc.
Warnings are issued in such cases.

Only these commands are now disallowed and return an error:
- CREATE TABLE introducing a "bad" generated column
- ALTER TABLE introducing a "bad" generated column
- CREATE INDEX introducing a "bad" generated column
  (i.e. adding an index on a virtual generated column
   that depends on sql_mode).

Note, these commands are allowed:
- ALTER TABLE removing a "bad" generate column
- ALTER TABLE removing an index from a "bad" virtual generated column
- DROP INDEX removing an index from a "bad" virtual generated column
but only if the table does not have any "bad" columns as a result.
2019-09-03 09:51:35 +04:00
Alexander Barkov
dc719597ee MDEV-18156 Assertion 0' failed or btr_validate_index(index, 0, false)' in row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE with PAD_CHAR_TO_FULL_LENGTH
This change takes into account a column's GENERATED ALWAYS AS
expression dependency on sql_mode's PAD_CHAR_TO_FULL_LENGTH and
NO_UNSIGNED_SUBTRACTION flags.

Indexed virtual columns as well as persistent generated columns are
now not allowed to have such dependencies to avoid inconsistent data
or index files on sql_mode changes.
So an error is now returned in cases like this:

  CREATE OR REPLACE TABLE t1
  (
    a CHAR(5),
    v VARCHAR(5) AS (a) PERSISTENT -- CHAR->VARCHAR or CHAR->TEXT = ERROR
  );

Functions RPAD() and RTRIM() can now remove dependency on
PAD_CHAR_TO_FULL_LENGTH. So this can be used instead:

  CREATE OR REPLACE TABLE t1
  (
    a CHAR(5),
    v VARCHAR(5) AS (RTRIM(a)) PERSISTENT
  );

Note, unlike CHAR->VARCHAR and CHAR->TEXT this still works,
no RPAD(a) is needed:

  CREATE OR REPLACE TABLE t1
  (
    a CHAR(5),
    v CHAR(5) AS (a) PERSISTENT -- CHAR->CHAR is OK
  );

More sql_mode flags may affect values of generated columns.
They will be addressed separately.

See comments in sql_mode.h for implementation details.
2019-09-03 05:34:53 +04:00
Marko Mäkelä
db4a27ab73 Merge 10.3 into 10.4 2019-08-31 06:53:45 +03:00
Igor Babaev
9380850d87 MDEV-15777 Use inferred IS NOT NULL predicates in the range optimizer
This patch introduces the optimization that allows the range optimizer to
consider index range scans that are built employing NOT NULL predicates
inferred from WHERE conditions and ON expressions.
The patch adds a new optimizer switch not_null_range_scan.
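
A sketch (hypothetical tables) of enabling the switch for a query where a
NOT NULL predicate can be inferred:

  SET optimizer_switch='not_null_range_scan=on';
  SELECT * FROM t1 JOIN t2 ON t1.a = t2.b;
  -- t2.b IS NOT NULL is inferred from the equality, so a range scan on an
  -- index over t2.b can be considered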
2019-08-30 18:47:14 -07:00
Marko Mäkelä
f42a23178e MDEV-20425: Fix -Wimplicit-fallthrough
With --skip-debug-assert, DBUG_ASSERT(false) will allow execution to
continue. Hence, we will need /* fall through */ after them.

Some DBUG_ASSERT(0) were replaced by break; when the switch () statement
was followed by DBUG_ASSERT(0).
2019-08-30 14:11:59 +03:00