Commit graph

Monty
346d10a953 Fixed error messages from DROP VIEW to align with DROP TABLE
- Produce a "Note" for all wrongly dropped objects
  (Like doing DROP VIEW on a table).
- IF EXISTS ends with a list of all not existing objects, instead of a
  separate note for every not existing object.

Other things:
 - Fixed a bug where one could run CREATE TEMPORARY SEQUENCE multiple times
   and create multiple temporary sequences with the same name.
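
A hypothetical illustration of the new messages (object names invented):

  CREATE TABLE t1 (a INT);
  DROP VIEW IF EXISTS t1, v_nonexistent;
  -- t1 is a table, so a Note reports it was not dropped as a VIEW;
  -- one final Note lists all non-existing objects (here: v_nonexistent)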
2020-06-14 19:39:42 +03:00
Monty
5bcb1d6532 MDEV-11412 Ensure that table is truly dropped when using DROP TABLE
The code used is largely based on code from Tencent.

The problem is that in some rare cases there may be a conflict between .frm
files and the files in the storage engine. In this case the DROP TABLE
was not able to properly drop the table.

Some MariaDB/MySQL forks have solved this by adding a FORCE option to
DROP TABLE. After some discussion among MariaDB developers, we concluded
that users expect DROP TABLE to always work, even if the
table is not consistent. There should be no need to use a
separate keyword to ensure that the table is really deleted.

The solution used is:
- If the .frm file doesn't exist, try dropping the table from all storage
  engines.
- If the .frm file exists but the table does not exist in the engine,
  try dropping the table from all storage engines.
- Update storage engines that use multiple table files (CSV, MyISAM, Aria) to
  succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
  is not needed and always succeeds. This is used by ha_delete_table_force()
  to know which handlers to ignore when trying to drop a table without
  a .frm file.

The disadvantage of this solution is that a DROP TABLE on a non-existing
table will be a bit slower, as we have to ask all active storage engines
whether they know anything about the table (see the sketch below).
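
A hypothetical sketch of the user-visible effect (table name invented;
assumes t1's .frm file was lost, e.g. after an aborted DROP):

  -- The .frm for t1 is gone but the engine still has data files;
  -- DROP TABLE now asks every active engine and removes the leftovers:
  DROP TABLE t1;
  -- DROP TABLE IF EXISTS of a fully missing table still succeeds silently,
  -- just slightly slower, since all active engines must be consulted.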

Other things:
- Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
  if the file doesn't exist. This simplifies some of the code.
- Don't clear thd->error in ha_delete_table() if there was an active
  error. This is a bug fix.
- handler::delete_table() will no longer abort if the first file doesn't
  exist. This is a bug fix to handle the case when a drop table was
  aborted in the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
  same code path as when it's not used.
- Use non_existing_table_error() to detect whether the table didn't exist.
  The old code used different error tests in different places.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if
  it can't be parsed, instead of leaving it hanging around (bug fix).
- InnoDB no longer prints an error about the .frm file being out of sync
  with the InnoDB dictionary if the .frm file does not exist. This change
  was required to be able to try to drop an InnoDB table when the .frm
  doesn't exist.
- Fixed a bug in mi_delete_table() where the .MYD file would not be dropped
  if the .MYI file didn't exist.
- Fixed a memory leak in Mroonga when deleting a non-existing table.
- Fixed a memory leak in Connect when deleting a non-existing table.

Bugs introduced by the original version of this commit, since fixed:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from
           other engines
2020-06-14 19:39:42 +03:00
Marko Mäkelä
3dbc49f075 Merge 10.4 into 10.5 2020-06-14 10:13:53 +03:00
Sergei Golubchik
9ed08f3576 MDEV-22884 Assertion `grant_table || grant_table_role' failed on perfschema
when allowing access via perfschema callbacks, update
the cached GRANT_INFO to match
2020-06-13 21:22:07 +02:00
Sergei Golubchik
b58586aae9 MDEV-21560 Assertion `grant_table || grant_table_role' failed in check_grant_all_columns
With RETURNING it can happen that the user has some privileges on
the table (namely, DELETE), but later needs different privileges
on individual columns (namely, SELECT).

Do the same as in check_grant_column(): raise ER_COLUMNACCESS_DENIED_ERROR,
not an assert.
2020-06-13 18:49:42 +02:00
Marko Mäkelä
805340936a Merge 10.3 into 10.4 2020-06-13 19:01:28 +03:00
Marko Mäkelä
d83a443250 Merge 10.2 into 10.3 2020-06-13 15:11:43 +03:00
Alexander Barkov
6c30bc2181 MDEV-22268 virtual longlong Item_func_div::int_op(): Assertion `0' failed in Item_func_div::int_op
Item_func_div::fix_length_and_dec_temporal() set the return data type to
integer in case of @@div_precision_increment==0 for temporal input with FSP=0.
This caused Item_func_div to call int_op(), which is not implemented,
so a crash on DBUG_ASSERT(0) happened.

Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
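
A minimal sketch of the crashing case (values invented):

  SET @@div_precision_increment= 0;
  SELECT TIME'10:00:00' / 3;
  -- temporal input with FSP=0: the result type used to be integer, so the
  -- unimplemented int_op() was called and DBUG_ASSERT(0) fired;
  -- with the fix the result type is DECIMAL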
2020-06-13 09:30:04 +04:00
Sidney Cammeresi
114a843669 when printing Item_in_optimizer, use precedence of wrapped Item
when Item::print() is called with the QT_PARSABLE flag, WHERE i NOT IN
(SELECT ...) gets printed as WHERE !i IN (SELECT ...) instead of WHERE
!(i in (SELECT ...)) because Item_in_optimizer returns DEFAULT_PRECEDENCE.
it should return the precedence of the inner operation.
2020-06-12 12:00:10 -07:00
Varun Gupta
ab9bd6284c MDEV-22840: JSON_ARRAYAGG gives wrong results with NULL values and ORDER by clause
The problem here is similar to the case with DISTINCT: the tree used for ORDER BY
needs to also hold the null bytes of the record. This was not done for GROUP_CONCAT,
as NULLs are rejected by GROUP_CONCAT.

Also introduced a comparator function for the order by tree to handle null
values with JSON_ARRAYAGG.
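
A hypothetical illustration (table and data invented; the expected output
is an assumption):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(NULL),(2),(NULL);
  SELECT JSON_ARRAYAGG(a ORDER BY a) FROM t1;
  -- both NULLs must survive the ORDER BY tree, e.g. [null,null,1,2]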
2020-06-12 23:47:38 +05:30
Varun Gupta
0f6f0daa4d MDEV-22011: DISTINCT with JSON_ARRAYAGG gives wrong results
For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure
that the Unique tree also holds the NULL bytes of a table record
inside the node of the tree. This behaviour for JSON_ARRAYAGG is
different from GROUP_CONCAT because in GROUP_CONCAT we just reject
NULL values for columns.

Also introduced a comparator function for the unique tree to handle null
values for distinct inside JSON_ARRAYAGG.
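
A hypothetical illustration (same invented table as above; the expected
output is an assumption):

  SELECT JSON_ARRAYAGG(DISTINCT a) FROM t1;
  -- NULL is one distinct value and must appear exactly once, e.g. [null,1,2]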
2020-06-12 23:47:38 +05:30
Varun Gupta
a006e88cac MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list
Backported from MySQL
Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
Issue:
------
The problem occurs when:
1) GROUP_CONCAT (DISTINCT ....) is used in the query.
2) The data size is greater than the value of the system variable
tmp_table_size.

The result would contain values that are non-unique.

Root cause:
-----------
An in-memory structure is used to filter out non-unique
values. When the data size exceeds tmp_table_size, the
overflow is written to disk as a separate file. The
expectation here is that when all such files are merged,
the full set of unique values can be obtained.

But the Item_func_group_concat::add function is in a bit of a
hurry. Even as it is adding values to the tree, it wants to
decide if a value is unique and write it to the result
buffer. This works fine if the configured maximum size is
greater than the size of the data. But since tmp_table_size
is set to a low value, the size of the tree is smaller and
hence requires the creation of multiple copies on disk.

Item_func_group_concat currently has no mechanism to merge
all the copies on disk and then generate the result. This
results in duplicate values.

Solution:
---------
In case of the DISTINCT clause, don't write to the result
buffer immediately. Do the merge and only then put the
unique values in the result buffer. This has to be done in
Item_func_group_concat::val_str.

Note regarding result file changes:
-----------------------------------
Earlier, when a unique value was seen in
Item_func_group_concat::add, it was dumped to the output,
so the result was in the order stored in the storage engine.
But with this fix, we wait until all the data is read and the
final set of unique values is written to the output buffer,
so the data appears in sorted order.

This only fixes the cases when we have DISTINCT without ORDER BY clause
in GROUP_CONCAT.
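
A minimal sketch of the failing scenario (sizes and data invented):

  SET SESSION tmp_table_size= 1024;         -- deliberately small: forces disk copies
  SELECT GROUP_CONCAT(DISTINCT a) FROM t1;  -- before this fix the merged
                                            -- disk copies could repeat values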
2020-06-12 23:47:38 +05:30
Sergei Petrunia
d7d80689b3 MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache
Apply this patch from Percona Server (amended for 10.5):

commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
Date:   Tue Nov 14 06:34:19 2017 +0200

    Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)

    Make ANALYZE TABLE stop flushing affected tables from the table
    definition cache, which has the effect of not blocking any subsequent
    new queries involving the table if there's a parallel long-running
    query:

    - new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
      tables;
    - in mysql_admin_table, if we are performing ANALYZE TABLE, and the
      table flag is set, do not remove the table from the table
      definition cache, do not invalidate query cache;
    - in partitioning handler, refresh the query optimizer statistics
      after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
    - new testcases main.percona_nonflushing_analyze_debug,
      parts.percona_nonflushing_analyze_debug and a supporting debug sync
      point.

    For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
    updated for handler::info(HA_STATUS_CONST), not often enough for
    tokudb_cardinality_scale_percent). TokuDB may return different
    rec_per_key values depending on dynamic variable
    tokudb_cardinality_scale_percent value. The server does not have a way
    of knowing that changing this variable invalidates the previous
    rec_per_key values in any opened table shares, and so does not call
    info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
    HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
    of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
    seeming to be more correct.
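
A hypothetical two-session illustration of the non-flushing behaviour
(table name invented):

  -- session 1: a long-running query keeps t1 open
  SELECT COUNT(*) FROM t1 WHERE SLEEP(0.001) = 0;
  -- session 2: with this patch, ANALYZE TABLE no longer flushes t1 from the
  -- table definition cache, so new queries on t1 don't block behind session 1
  ANALYZE TABLE t1;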
2020-06-12 20:29:05 +03:00
Sergei Golubchik
0b5dc6268f more mysql_create_view link/unlink woes 2020-06-12 14:23:05 +02:00
Sergei Golubchik
fb70eb773c MDEV-22878 galera.wsrep_strict_ddl hangs in 10.5 after merge
if mysql_create_view is aborted when `view` isn't unlinked,
it should not be linked back on cleanup
2020-06-12 14:23:05 +02:00
Oleksandr Byelkin
82f3ceed12 MDEV-16470: switch off user variables (and fixes of its support) 2020-06-12 12:14:14 +02:00
Andrei Elkin
e156a8da08 MDEV-21851: Error in BINLOG_BASE64_EVENT is always error-logged as if it is done by Slave
The prefix of the error log message produced by a failed BINLOG statement
is corrected to be the SQL command name.
2020-06-12 11:25:27 +03:00
Aleksey Midenkov
762bf7a03b MDEV-22602 Disable UPDATE CASCADE for SQL constraints
A CHECK constraint is checked by check_expression(), which walks its
items and gets into Item_field::check_vcol_func_processor() to check
for conformity with the foreign key list.

WITHOUT OVERLAPS is checked for the same conformity in
mysql_prepare_create_table().

Long uniques are already impossible with InnoDB foreign keys. See
ER_CANT_CREATE_TABLE in the test case.

2 accompanying bugs fixed (test main.constraints failed):

1. check->name.str lived on the SP execute mem_root, while the "check"
object itself lives on the SP main mem_root. On the second SP execution,
check->name.str contained garbage data. Fixed by allocating from
thd->stmt_arena->mem_root, which is the SP main mem_root.

2. The CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed up with
VCOL_FIELD_REF. VCOL_FIELD_REF is assigned in check_expression() and
then detected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
handle_if_exists_options().

Existing cases for MDEV-16932 in main.constraints cover both fixes.
2020-06-12 11:12:40 +03:00
Vicențiu Ciorbaru
2fd2fd77e7 Fix wrong merge of commit d218d1aa49 2020-06-12 10:55:53 +03:00
Vicențiu Ciorbaru
8c67ffffe8 Merge branch '10.1' into 10.2 2020-06-11 22:35:30 +03:00
Alexander Barkov
e835881c47 MDEV-21619 Server crash or assertion failures in my_datetime_to_str
Item_cache_datetime::decimals was always copied from example->decimals
without being limited to 6 (the maximum number of fractional digits), so
val_str() later crashed on asserts inside my_time_to_str() and
my_datetime_to_str().
2020-06-11 15:33:16 +04:00
Varun Gupta
ade0f40ff1 MDEV-22819: Wrong result or Assertion `ix > 0' failed in read_to_buffer upon select with GROUP BY and GROUP_CONCAT
In the merge_buffers phase of sorting, the sort buffer size is divided among the number of chunks.
Each chunk has a start and end position (m_buffer_start and m_buffer_end).
We then read as many records as fit in this buffer for a chunk of the file.
The issue was that we were resetting the end of the buffer (m_buffer_end) to the number of bytes
read; this caused a problem because, with dynamically sized sort keys, it is possible that later
we would not be able to accommodate even one key inside a chunk of the file.
So the fix is to not reset the end of the buffer for a chunk of the file.
2020-06-11 12:04:21 +05:30
Sachin
ba2c2cfb20 Fix typo 2020-06-11 11:47:22 +05:30
Alexander Barkov
de20091f5c MDEV-22755 CREATE USER leads to indirect SIGABRT in __stack_chk_fail () from fill_schema_user_privileges + *** stack smashing detected *** (on optimized builds)
The code erroneously used buff[100] in a few places to make
a GRANTEE value in the form:
  'user'@'host'

Fix:
- Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
- Adding a DBUG_ASSERT to make sure the buffer is large enough.
- Wrapping the code into a class Grantee_str, to make it easier to reuse
  in 4 places.
2020-06-11 09:57:05 +04:00
Sachin
72776d4c49 MDEV-22722 Assertion "inited==NONE" failed in handler::ha_index_init on the slave during UPDATE
Add a missing call to handler->prepare_for_insert() in Rows_log_event::do_apply_event
2020-06-11 10:39:33 +05:30
Alexander Barkov
6e2d967b1b MDEV-14347 CREATE PROCEDURE returns no error when using an unknown variable
CREATE PROCEDURE did not detect unknown SP variables in assignments like this:

  SET var=a_long_var_name_with_a_typo;

The error happened only at SP execution time, and only if the control
flow reached the erroneous statement.

Fixing most expressions to detect unknown identifiers.
This includes simple subqueries without tables:

- Query specification: SELECT list, WHERE,
  HAVING (inside aggregate functions) clauses, e.g.
    SET var= (SELECT unknown_ident+1);
    SET var= (SELECT 1 WHERE unknown_identifier);
    SET var= (SELECT 1 HAVING SUM(unknown_identifier));

- Table value constructor: VALUES clause, e.g.:
    SET var= (VALUES(unknown_ident));

Note, in some more complex subquery cases unknown variables are still not detected
(this will be fixed separately):

- Derived tables:
  SET a=(SELECT unknown_ident FROM (SELECT 1 AS alias) t1);
  SET res=(SELECT * FROM t1 LEFT OUTER JOIN (SELECT unknown_ident) t2 USING (c1));

- CTE:
  SET a=(WITH cte1 (a) AS (SELECT unknown_ident) SELECT * FROM cte1);
  SET a=(WITH cte1 (a,b) AS (VALUES (unknown,2),(3,4)) SELECT * FROM cte1);
  SET a=(WITH cte1 (a,b) AS (VALUES (1,2),(3,4)) SELECT unknown_ident FROM cte1);

- SELECT .. GROUP BY unknown_identifier
- SELECT .. ORDER BY unknown_identifier
- HAVING with an unknown identifier outside of any aggregate functions:
  SELECT .. HAVING unknown_identifier;
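
A minimal sketch of a case that is now rejected at CREATE time
(names invented; client delimiter handling omitted):

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v INT;
    SET v= (SELECT unknown_ident+1);  -- error now raised at CREATE PROCEDURE
  END;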
2020-06-10 18:09:35 +04:00
Alexander Barkov
bf2a244406 MDEV-22854 Garbage returned with SELECT CASE..DEFAULT(timestamp_field_with_now_as_default)
Item_default_value did not override val_native(), so the inherited
Item_field::val_native() was called. As a result Item_default_value::calculate()
was not called and Item_field::val_native() was called on a Field
with a non-initialized ptr.

Implementing Item_default_value::val_native() properly.
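
A hypothetical reproducer matching the title (table name invented):

  CREATE TABLE t1 (ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
  INSERT INTO t1 VALUES (CURRENT_TIMESTAMP);
  SELECT CASE WHEN 1 THEN DEFAULT(ts) ELSE DEFAULT(ts) END FROM t1;
  -- previously returned garbage: val_native() ran on a Field with a
  -- non-initialized ptr because calculate() was never called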
2020-06-10 13:55:55 +04:00
Sujatha
840fb495ce MDEV-22059: MSAN report at replicate_ignore_table_grant
Analysis:
========
The lists of values provided for "replicate_ignore_table" and "replicate_do_table"
are stored in a HASH. When an empty list is provided, the HASH structure doesn't
get initialized. The existing code treats an empty element list as an error and
tries to clean up the uninitialized HASH. This results in the above MSAN issue.

Fix:
===
The cleanup should be initiated only when there is an error while parsing the
'replicate_do_table' or 'replicate_ignore_table' list and the HASH is in an
initialized state. Otherwise, for an empty list, it should simply return success.
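
A minimal sketch, assuming the dynamic filter variables are used (the empty
value is what triggers this code path):

  SET GLOBAL replicate_ignore_table= '';
  -- an empty list now simply succeeds instead of attempting to clean up
  -- an uninitialized HASH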
2020-06-10 14:44:27 +05:30
Oleksandr Byelkin
59717bbce4 MDEV-5924: MariaDB could crash after changing the query_cache size
The real problem was that the attempt to roll back changes after running out of
memory in the query cache was made incorrectly and led to using uninitialized
memory. (The bug has nothing to do with the resize operation; it is just an
out-of-resources error being processed incorrectly.)
2020-06-10 09:35:38 +02:00
Oleksandr Byelkin
61862d711d Revert "MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL"
This reverts commit 443391236d.
2020-06-10 09:34:56 +02:00
Marko Mäkelä
e76ca24bb1 Fix GCC -Wunused-function
debug_sync_set_action(): Declare the dummy function inline,
to silence a warning about a declared-but-unused static function.
This amends commit 3ccd6766d0.
2020-06-10 07:43:18 +03:00
Julius Goryavsky
3ccd6766d0 Fixed a compilation error in -DCMAKE_BUILD_TYPE=mysql_release mode when WSREP is enabled 2020-06-10 03:51:49 +02:00
Varun Gupta
648b54746c MDEV-22399: Remove multiple calls to enable and disable Handler::keyread and perform it after the plan refinement phase is done
Introduce a function to enable keyreads for indexes and use this
function when all the decisions of the plan refinement phase are done.
2020-06-10 02:29:28 +05:30
Varun Gupta
04c5cdffeb MDEV-22836: Server crashes in err_conv / ErrBuff::set_str
The issue here is that the charset for Sort_param::tmp_buffer is cleared when bzero is done on Sort_param.
Make sure to set the charset explicitly in the constructor for tmp_buffer.
2020-06-09 18:43:19 +05:30
Sergei Golubchik
89a33303c4 remove dead code
reduce the amount of engine-specific code in the server,
particularly as it does not serve any purpose now.

may be needed for VP engine,
to be reconsidered in MDEV-7795
2020-06-09 14:32:43 +02:00
Varun Gupta
81a08c5462 MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list
Backported from MySQL
 Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
    Issue:
    ------
    The problem occurs when:
    1) GROUP_CONCAT (DISTINCT ....) is used in the query.
    2) The data size is greater than the value of the system variable
    tmp_table_size.

    The result would contain values that are non-unique.

    Root cause:
    -----------
    An in-memory structure is used to filter out non-unique
    values. When the data size exceeds tmp_table_size, the
    overflow is written to disk as a separate file. The
    expectation here is that when all such files are merged,
    the full set of unique values can be obtained.

    But the Item_func_group_concat::add function is in a bit of a
    hurry. Even as it is adding values to the tree, it wants to
    decide if a value is unique and write it to the result
    buffer. This works fine if the configured maximum size is
    greater than the size of the data. But since tmp_table_size
    is set to a low value, the size of the tree is smaller and
    hence requires the creation of multiple copies on disk.

    Item_func_group_concat currently has no mechanism to merge
    all the copies on disk and then generate the result. This
    results in duplicate values.

    Solution:
    ---------
    In case of the DISTINCT clause, don't write to the result
    buffer immediately. Do the merge and only then put the
    unique values in the result buffer. This has to be done in
    Item_func_group_concat::val_str.

    Note regarding result file changes:
    -----------------------------------
    Earlier, when a unique value was seen in
    Item_func_group_concat::add, it was dumped to the output,
    so the result was in the order stored in the storage engine.
    But with this fix, we wait until all the data is read and the
    final set of unique values is written to the output buffer,
    so the data appears in sorted order.

    This only fixes the cases when we have DISTINCT without ORDER BY clause
    in GROUP_CONCAT.
2020-06-09 17:55:29 +05:30
rucha174
443391236d MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL
In the case of a SELECT without tables which returns either 0 or 1 rows,
JOIN::exec_inner() did not check whether the flag representing SQL_CALC_FOUND_ROWS
was set, and send_records was directly assigned 0, so SELECT FOUND_ROWS()
was giving 0 in the output. Now it checks if the flag is set: if it is,
send_records is set to 1, else 0. 1 is the number of rows that could have
been sent to the client if the SELECT query had SQL_CALC_FOUND_ROWS; it is
0 when no rows were sent because the SELECT query did not have
SQL_CALC_FOUND_ROWS.
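
A minimal sketch of the fixed behaviour (expected value per the
description above):

  SELECT SQL_CALC_FOUND_ROWS 1 FROM DUAL LIMIT 0;
  SELECT FOUND_ROWS();  -- now 1 (the row that could have been sent); was 0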
2020-06-09 14:43:15 +05:30
Alexander Barkov
76cb2f9dd6 MDEV-21765 Possibly inconsistent behavior of BIT_xx functions with INET6 field
Disallow BIT_AND(), BIT_OR(), BIT_XOR() for data types GEOMETRY and INET6,
as they cannot return any useful integer values.
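
A hypothetical illustration (table name invented):

  CREATE TABLE t1 (a INET6);
  SELECT BIT_AND(a) FROM t1;
  -- now rejected with an error instead of returning a meaningless integer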
2020-06-09 12:54:04 +04:00
Sujatha
e1045a768b MDEV-22717: Conditional jump or move depends on uninitialised value(s) in find_uniq_filename(char*, unsigned long)
Fix:
===
Initialize the 'number' variable to 0.
2020-06-08 21:55:12 +05:30
Alexander Barkov
86c50a255a MDEV-22734 Assertion `mon > 0 && mon < 13' failed in sec_since_epoch
When processing a condition like:
   WHERE timestamp_column='2010-00-01 00:00:00'
don't replace the constant with an Item_datetime_literal if the constant
has zeros (in the month or in the day).
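
A minimal sketch (table and column names invented):

  CREATE TABLE t1 (ts TIMESTAMP);
  SELECT * FROM t1 WHERE ts='2010-00-01 00:00:00';
  -- the zero month prevents the rewrite to Item_datetime_literal,
  -- avoiding the sec_since_epoch assertion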
2020-06-08 14:00:19 +04:00
Marko Mäkelä
d3681335b1 Merge 10.4 into 10.5 2020-06-08 12:58:11 +03:00
Marko Mäkelä
57022dfb25 Merge 10.3 into 10.4 2020-06-08 11:45:28 +03:00
Marko Mäkelä
befb0bed68 Merge 10.2 into 10.3 2020-06-08 11:09:49 +03:00
Monty
a9bee9884a Don't allow ALTER TABLE ... ORDER BY on SEQUENCE objects
MDEV-19320 Sequence gets corrupted and produces ER_KEY_NOT_FOUND
           (Can't find record) after ALTER .. ORDER BY
2020-06-07 16:32:00 +03:00
Monty
e6a6382f15 Don't allow illegal create options for SEQUENCE
MDEV-19977 Assertion `(0xFUL & mode) == LOCK_S ||
           (0xFUL & mode) == LOCK_X' failed in lock_rec_lock
2020-06-07 16:32:00 +03:00
Alexander Barkov
fad348a9a6 MDEV-22822 sql_mode="oracle" cannot declare without variable errors 2020-06-07 16:23:47 +04:00
Marko Mäkelä
0e69f601aa Merge 10.4 into 10.5 2020-06-07 12:22:06 +03:00
Sachin
eb14e073ea MDEV-22719 Long unique keys are not created when individual key_part->length < max_key_length but SUM(key_parts->length) > max_key_length
Make a UNIQUE HASH key in the case when key_info->key_length > max_key_length
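
A hypothetical sketch, assuming a 1000-byte key length limit (as for
MyISAM; lengths invented):

  CREATE TABLE t1 (
    a VARCHAR(800), b VARCHAR(800),
    UNIQUE KEY (a,b)  -- each part fits the limit, the sum does not:
                      -- a long UNIQUE HASH key is now created
  ) ENGINE=MyISAM CHARACTER SET latin1;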
2020-06-07 12:07:41 +05:30
Sachin
e208f91ba8 MDEV-21804 Assertion `marked_for_read()' failed upon INSERT into table with long unique blob under binlog_row_image=NOBLOB
Problem: calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap on
for a virtual column, and hence mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps.

This bug is not specific to long unique; it also fails for this case:
   create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
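
A minimal sketch of the long-unique-blob case from the title (names
invented; a row-based replication slave is assumed):

  SET SESSION binlog_row_image='NOBLOB';
  CREATE TABLE t1 (id INT PRIMARY KEY, a BLOB, UNIQUE KEY (a));
  INSERT INTO t1 VALUES (1, 'payload');
  -- applying this row event on the slave previously hit the
  -- marked_for_read() assertion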
2020-06-07 12:07:36 +05:30
Varun Gupta
d218d1aa49 MDEV-22728: SIGFPE in Unique::get_cost_calc_buff_size from prepare_search_best_index_intersect on optimized builds
For a low sort_buffer_size, in the cost calculation of using the Unique object,
the number of elements in the tree was evaluated to 0; make sure to have at
least 1 element in the Unique tree.

Also, for the function Unique::get(), allocate memory for at least
MERGEBUFF2+1 keys.
2020-06-07 04:19:58 +05:30