Commit graph

69722 commits

Author SHA1 Message Date
Marko Mäkelä
d83a443250 Merge 10.2 into 10.3 2020-06-13 15:11:43 +03:00
Alexander Barkov
6c30bc2181 MDEV-22268 virtual longlong Item_func_div::int_op(): Assertion `0' failed in Item_func_div::int_op
Item_func_div::fix_length_and_dec_temporal() set the return data type to
integer when @div_precision_increment==0 for temporal input with FSP=0.
This caused Item_func_div to call int_op(), which is not implemented,
so a crash on DBUG_ASSERT(0) happened.

Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
2020-06-13 09:30:04 +04:00
Vicențiu Ciorbaru
2fd2fd77e7 Fix wrong merge of commit d218d1aa49 2020-06-12 10:55:53 +03:00
Vicențiu Ciorbaru
8c67ffffe8 Merge branch '10.1' into 10.2 2020-06-11 22:35:30 +03:00
Alexander Barkov
e835881c47 MDEV-21619 Server crash or assertion failures in my_datetime_to_str
Item_cache_datetime::decimals was always copied from example->decimals
without being limited to 6 (the maximum possible number of fractional digits), so
val_str() later crashed on asserts inside my_time_to_str() and
my_datetime_to_str().
2020-06-11 15:33:16 +04:00
Alexander Barkov
de20091f5c MDEV-22755 CREATE USER leads to indirect SIGABRT in __stack_chk_fail () from fill_schema_user_privileges + *** stack smashing detected *** (on optimized builds)
The code erroneously used buff[100] in a few places to make
a GRANTEE value in the form:
  'user'@'host'

Fix:
- Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
- Adding a DBUG_ASSERT to make sure the buffer is large enough.
- Wrapping the code into a class Grantee_str, to make it easier to reuse in 4 places.
2020-06-11 09:57:05 +04:00
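A minimal sketch of the buffer handling described in the commit above; the value chosen for USER_HOST_BUFF_SIZE and the exact class shape are illustrative assumptions, not the actual server code:
```
// Illustrative sketch only; sizes and class layout are assumptions.
#include <cassert>
#include <cstddef>
#include <cstdio>

static const size_t USER_HOST_BUFF_SIZE= 96;  // assumed user+host length limit

class Grantee_str
{
  // 'user'@'host' needs four quotes, one '@' and a terminating '\0': +6 bytes.
  char buff[USER_HOST_BUFF_SIZE + 6];
public:
  Grantee_str(const char *user, const char *host)
  {
    int n= snprintf(buff, sizeof(buff), "'%s'@'%s'", user, host);
    assert(n >= 0 && (size_t) n < sizeof(buff));  // buffer must be large enough
  }
  const char *ptr() const { return buff; }
};
```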
Oleksandr Byelkin
59717bbce4 MDEV-5924: MariaDB could crash after changing the query_cache size
The real problem was that the attempt to roll back changes after running out of memory in the query cache was made incorrectly and led to using uninitialized memory.
(The bug has nothing to do with the resize operation; it is just an out-of-resources error that was processed incorrectly.)
2020-06-10 09:35:38 +02:00
Oleksandr Byelkin
61862d711d Revert "MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL"
This reverts commit 443391236d.
2020-06-10 09:34:56 +02:00
Varun Gupta
81a08c5462 MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list
Backported from MySQL
 Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
    Issue:
    ------
    The problem occurs when:
    1) GROUP_CONCAT (DISTINCT ....) is used in the query.
    2) The data size is greater than the value of the system variable:
    tmp_table_size.

    The result would contain values that are non-unique.

    Root cause:
    -----------
    An in-memory structure is used to filter out non-unique
    values. When the data size exceeds tmp_table_size, the
    overflow is written to disk as a separate file. The
    expectation here is that when all such files are merged,
    the full set of unique values can be obtained.

    But the Item_func_group_concat::add function is in a bit of a
    hurry. Even as it is adding values to the tree, it wants to
    decide if a value is unique and write it to the result
    buffer. This works fine if the configured maximum size is
    greater than the size of the data. But since tmp_table_size
    is set to a low value, the size of the tree is smaller and
    hence requires the creation of multiple copies on disk.

    Item_func_group_concat currently has no mechanism to merge
    all the copies on disk and then generate the result. This
    results in duplicate values.

    Solution:
    ---------
    In case of the DISTINCT clause, don't write to the result
    buffer immediately. Do the merge and only then put the
    unique values in the result buffer. This is now done in
    Item_func_group_concat::val_str.

    Note regarding result file changes:
    -----------------------------------
    Earlier, when a unique value was seen in
    Item_func_group_concat::add, it was dumped to the output,
    so the result was in the order stored in the SE. With this fix,
    we wait until all the data is read and the final set of
    unique values is written to the output buffer, so the data
    appears in sorted order.

    This only fixes the case where GROUP_CONCAT has DISTINCT without
    an ORDER BY clause.
2020-06-09 17:55:29 +05:30
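A conceptual sketch of the merge-then-emit behaviour described above, with std::set standing in for the Unique tree and its on-disk chunks (everything here is illustrative, not the server implementation):
```
// Each "chunk" models one tree copy flushed to disk; values are unique per
// chunk but not across chunks. Emitting only after the merge yields a truly
// distinct (and, as noted above, sorted) result.
#include <iostream>
#include <set>
#include <string>
#include <vector>

int main()
{
  std::vector<std::vector<std::string>> chunks= {{"a", "b"}, {"b", "c"}};

  std::set<std::string> merged;                 // the merge step
  for (const auto &chunk : chunks)
    merged.insert(chunk.begin(), chunk.end());

  std::string result;                           // emit unique values at the end
  for (const auto &value : merged)
    result+= (result.empty() ? "" : ",") + value;

  std::cout << result << "\n";                  // prints: a,b,c
}
```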
rucha174
443391236d MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL
For a SELECT without tables that returns either 0 or 1 rows,
JOIN::exec_inner() did not check whether the flag representing SQL_CALC_FOUND_ROWS
was set, and send_records was directly assigned 0, so SELECT FOUND_ROWS()
returned 0 in the output. Now it checks whether the flag is set: if it is,
send_records=1, otherwise 0. 1 is the number of rows that could have been sent
to the client if the SELECT query had SQL_CALC_FOUND_ROWS.
It is 0 when no rows were sent because the SELECT query did not have
SQL_CALC_FOUND_ROWS.
2020-06-09 14:43:15 +05:30
Sujatha
e1045a768b MDEV-22717: Conditional jump or move depends on uninitialised value(s) in find_uniq_filename(char*, unsigned long)
Fix:
===
Initialize the 'number' variable to 0.
2020-06-08 21:55:12 +05:30
Marko Mäkelä
befb0bed68 Merge 10.2 into 10.3 2020-06-08 11:09:49 +03:00
Monty
a9bee9884a Don't allow ALTER TABLE ... ORDER BY on SEQUENCE objects
MDEV-19320 Sequence gets corrupted and produces ER_KEY_NOT_FOUND
           (Can't find record) after ALTER .. ORDER BY
2020-06-07 16:32:00 +03:00
Monty
e6a6382f15 Don't allow illegal create options for SEQUENCE
MDEV-19977 Assertion `(0xFUL & mode) == LOCK_S ||
           (0xFUL & mode) == LOCK_X' failed in lock_rec_lock
2020-06-07 16:32:00 +03:00
Alexander Barkov
fad348a9a6 MDEV-22822 sql_mode="oracle" cannot declare without variable errors 2020-06-07 16:23:47 +04:00
Varun Gupta
d218d1aa49 MDEV-22728: SIGFPE in Unique::get_cost_calc_buff_size from prepare_search_best_index_intersect on optimized builds
For a low sort_buffer_size, the cost calculation for using the Unique object evaluated the number of elements in the tree to 0; make sure there is at least 1 element in the Unique tree.

Also, for the function Unique::get(), allocate memory for at least MERGEBUFF2+1 keys.
2020-06-07 04:19:58 +05:30
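A minimal sketch of the clamping idea, assuming the element count ends up as a divisor in the cost formula; the function and parameter names are illustrative, not the server's:
```
// Never let the estimated number of elements in the Unique tree be 0, so the
// subsequent cost calculation cannot divide by zero (SIGFPE).
#include <algorithm>
#include <cstddef>

static size_t elements_in_tree(size_t sort_buffer_size, size_t element_size)
{
  return std::max<size_t>(1, sort_buffer_size / element_size);
}
```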
Igor Babaev
e9dbbf1120 MDEV-22748 MariaDB crash on WITH RECURSIVE large query
This bug is the same as bug MDEV-17024. The crashes caused by these
bugs were due to premature cleanups of the unit specifying recursive CTEs
that happened in some cases when there were several outer references to the
same recursive CTE.
The problem of premature cleanups for recursive CTEs could already be
resolved by the correction in TABLE_LIST::set_as_with_table() introduced
in this patch. All other changes introduced by the patches for MDEV-17024
and MDEV-22748 guarantee that these cleanups are performed as soon as
possible: when the select containing the last outer reference to a
recursive CTE is being cleaned up, the specification of the recursive CTE
should be cleaned up as well.
2020-06-06 11:56:10 -07:00
Marko Mäkelä
b3e395a13e Merge 10.2 into 10.3 2020-06-06 18:50:25 +03:00
Marko Mäkelä
0df01ccb66 Merge 10.1 into 10.2 2020-06-06 18:07:04 +03:00
Igor Babaev
a8c200c73c MDEV-22042 Server crash in Item_field::print on ANALYZE FORMAT=JSON
When processing a query with a recursive CTE, a temporary table is used for
each recursive reference of the CTE. Like any temporary table, it uses its own
mem-root for table definition structures. Due to specifics of the current
implementation of the ANALYZE stmt command, this mem-root can be freed only at
the very end of query processing. Such deallocation of mem-root memory happens
in close_thread_tables(). The function looks through the list of the tmp
tables rec_tables attached to the THD of the query and frees the corresponding
mem-roots. If the query uses a stored function then such a list is created
for each query of the function. When a new rec_list has to be created, the
old one has to be saved and then restored at the proper moment.
The bug occurred because only one rec_list was created for the query containing
the CTE. As a result close_thread_tables() freed the tmp mem-roots used for
rec_tables prematurely, destroying some data needed for the output produced
by the ANALYZE command.
2020-06-05 11:00:07 -07:00
Julius Goryavsky
5f55f69e4a Merge 10.1 into 10.2 2020-06-05 18:32:37 +02:00
Marko Mäkelä
680463a8d9 Merge 10.2 into 10.3 2020-06-05 16:51:26 +03:00
Sergey Vojtovich
dce4c0f979 MDEV-22339 - Assertion `str_length < len' failed
When acquiring an SNW/SNRW/X MDL lock, DDL/admin statements may abort a pending
thr lock in a concurrent connection with an open HANDLER (or a delayed insert
thread).

This may lead to a race condition when table->alias is accessed
concurrently by such threads. Either assertion failure or memory leak
is a practical consequence of this race condition.

Specifically, HANDLER is opening a table and issuing alias.copy(), while
DDL is executing get_lock_data()/alias.c_ptr()/realloc()/realloc_raw().

Fixed by performing table->init() before the table is published via
thd->open_tables.
2020-06-04 23:52:10 +02:00
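A generic sketch of the initialize-before-publish ordering the fix relies on; the atomic pointer below is only an illustration, the server publishes the table through its own structures and locking:
```
#include <atomic>
#include <string>

struct Table
{
  std::string alias;
  void init(const char *name) { alias= name; }       // must complete first
};

// Stand-in for thd->open_tables: readers only ever see a fully set up Table.
std::atomic<Table*> open_tables{nullptr};

void open_and_publish(Table *t, const char *name)
{
  t->init(name);                                      // 1. initialize privately
  open_tables.store(t, std::memory_order_release);    // 2. then publish
}
```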
Varun Gupta
f30ff10c8d MDEV-22715: SIGSEGV in radixsort_for_str_ptr and in native_compare/my_qsort2 (optimized builds)
For the DECIMAL[(M[,D])] data type, max_sort_length was not being honoured, which led to a buffer
overflow while making the sort key. The fix is to create sort keys for decimals
with at most max_sort_length bytes.

Important:
The minimum value of max_sort_length has been raised to 8 (previously 4),
so fixed-size data types like DOUBLE and BIGINT are not truncated for
lower values of max_sort_length.
2020-06-05 01:11:03 +05:30
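An illustrative sketch of the key-truncation rule (not the server's sort-key code): copy at most max_sort_length bytes and pad, so a long value cannot overflow the fixed-width key slot:
```
#include <algorithm>
#include <cstddef>
#include <cstring>

static void make_sort_key(unsigned char *to, size_t max_sort_length,
                          const unsigned char *from, size_t from_len)
{
  size_t len= std::min(from_len, max_sort_length);  // honour max_sort_length
  memcpy(to, from, len);
  memset(to + len, 0, max_sort_length - len);       // pad to fixed key width
}
```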
Varun Gupta
f69278bcd0 MDEV-16230: Server crashes when Analyze format=json is run with a window function with empty PARTITION BY and ORDER BY clauses
Currently, when both the PARTITION BY and ORDER BY clauses are empty, we create an Item
with the first field in the select list and sort by that field.
It should be created as an Item_temptable_field instead of an Item_field because the
print() function then continues to work even if the table has been dropped.
2020-06-04 17:03:03 +05:30
Aleksey Midenkov
05693cf214 MDEV-22112 Assertion `tab_part_info->part_type == RANGE_PARTITION || tab_part_info->part_type == LIST_PARTITION' failed in prep_alter_part_table
Incorrect syntax for a SYSTEM_TIME partition: work_part_info is detected
as a HASH partition. We cannot add a partition of a different type, nor
can we reorganize SYSTEM_TIME into/from a different type of
partitioning.

The side fix for versions up to 10.5 corrects the message:
"For LIST partitions each partition must be defined"
2020-06-04 12:12:49 +03:00
sjaakola
8ec0e9111a MDEV-22763 backporting MDEV-20225 fix into 10.1
Backported the support for aborting and replaying stored procedures and the fix for trigger
key assignments from the 10.4 version.
Also backported two mtr tests: wsrep_sp_bf_abort and MDEV-20225.
2020-06-03 15:34:44 +02:00
Marko Mäkelä
8300f639a1 Merge 10.2 into 10.3 2020-06-02 10:25:11 +03:00
Marko Mäkelä
d72eebaa3d Merge 10.1 into 10.2 2020-06-01 09:33:03 +03:00
Marko Mäkelä
e9aaa10c11 Merge 10.2 into 10.3 2020-05-29 22:21:19 +03:00
Sergey Vojtovich
c279878493 Thread safe histograms loading
Previously multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would have happened sooner or later.

To avoid scalability bottleneck, histograms loading is protected by
per-TABLE_SHARE atomic variable.

Whenever histograms were already loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.

Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If histograms are being loaded
concurrently, the statement waits until the load is completed.

- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
  meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
  TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri-state
  atomic variable.
- Simplified away alloc_histograms_for_table_share().

Note: there's still likely a data race if a thread attempts to access
histogram data after it failed to load it (because of a concurrent load).
It was there previously and is out of the scope of this effort. One way
of fixing it could be reviving TABLE::histograms_are_read and adding
appropriate checks wherever needed.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
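A minimal sketch of the tri-state load-once pattern described above; the state names and the yield-based wait are assumptions, not the TABLE_STATISTICS_CB implementation:
```
#include <atomic>
#include <thread>

enum class LoadState { NOT_LOADED, LOADING, LOADED };

struct SharedStats
{
  std::atomic<LoadState> state{LoadState::NOT_LOADED};

  template <typename Loader>
  void ensure_loaded(Loader load)
  {
    // Hot path: a single load-acquire when a preceding statement already
    // loaded the data.
    if (state.load(std::memory_order_acquire) == LoadState::LOADED)
      return;

    LoadState expected= LoadState::NOT_LOADED;
    if (state.compare_exchange_strong(expected, LoadState::LOADING,
                                      std::memory_order_acq_rel))
    {
      load();                                       // we won the right to load
      state.store(LoadState::LOADED, std::memory_order_release);
      return;
    }
    // Another thread is loading: wait until it completes.
    while (state.load(std::memory_order_acquire) != LoadState::LOADED)
      std::this_thread::yield();
  }
};
```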
Sergey Vojtovich
609a0d3db3 Thread safe statistics loading
Previously multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would have happened sooner or later.

To avoid scalability bottleneck, statistics loading is protected by
per-TABLE_SHARE atomic variable.

Whenever statistics were already loaded by a preceding statement (hot path), a
scalable load-acquire check is performed.

Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If statistics are being loaded
concurrently, the statement waits until the load is completed.

TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri-state atomic
variable.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
Sergey Vojtovich
1055a7f4fc Simplified away statistics_for_tables_is_needed()
Removed redundant loops and integrated the logic into the caller instead.
Unified the condition in read_statistics_for_tables(): fewer
"table_share != NULL" checks and no more potential "table_share == NULL"
dereferencing.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
Alexander Barkov
a2932e86b5 MDEV-22744 *SAN: sql/item_xmlfunc.cc:791:43: runtime error: downcast of address ... which does not point to an object of type 'Item_func' note: object is of type 'Item_bool' (on optimized builds)
In Item_nodeset_func_predicate::val_nodeset, args[1] is not necessarily
an Item_func descendant. It can be an Item_bool.

Removed the wrong cast; it was not really needed anyway.
2020-05-29 15:31:24 +04:00
Vladislav Vaintroub
069200267d Fix cmake warning - custom command succeeds without creating own OUTPUT. 2020-05-29 12:28:34 +02:00
Aleksey Midenkov
19da9a51ae MDEV-16937 Strict SQL with system versioned tables causes issues
Respect system fields in NO_ZERO_DATE mode.

This is the subject for refactoring in MDEV-19597
2020-05-28 22:22:20 +03:00
Aleksey Midenkov
dd9773b723 MDEV-22413 Server hangs upon UPDATE on a view reading from versioned partitioned table
UPDATE gets access to history records because versioning conditions
are not set for a VIEW. This leads to an endless loop of inserting history
records when the clustered index is rebuilt and ha_rnd_next() returns a
newly inserted history record.

Return to the original behavior of failing on a write-locked table in a
historical query.

Commit 35b679b9 assumed that SELECT_LEX::lock_type influences anything, but
actually at this point the table is already locked. The original bug report
was tempesta-tech/mariadb#102.
2020-05-28 22:22:19 +03:00
Anel Husakovic
a1b3bebe1f fix pre-definition for embedded server for find_user_or_anon()
Pre-definitions are allowed for non-embedded builds.
Failure caught with:
```
cmake ../../10.1 -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_COMPILER=g++-9
-DCMAKE_C_COMPILER=gcc-9 -DWITH_EMBEDDED_SERVER=ON -DCMAKE_BUILD_TYPE=Debug
-DPLUGIN_{ARCHIVE,TOKUDB,MROONGA,OQGRAPH,ROCKSDB,PERFSCHEMA,SPIDER,SPHINX}=N
-DMYSQL_MAINTAINER_MODE=ON -DNOT_FOR_DISTRIBUTION=ON
```
An alternative fix would be:
```
--- a/sql/sql_acl.cc
+++ b/sql/sql_acl.cc
@@ -201,8 +201,10 @@ LEX_STRING current_user= { C_STRING_WITH_LEN("*current_user") };
 LEX_STRING current_role= { C_STRING_WITH_LEN("*current_role") };
 LEX_STRING current_user_and_current_role= { C_STRING_WITH_LEN("*current_user_and_current_role") };

+#ifndef EMBEDDED_LIBRARY
 class ACL_USER;
 static ACL_USER *find_user_or_anon(const char *host, const char *user, const char *ip);
+#endif
```
2020-05-28 20:18:25 +02:00
Aleksey Midenkov
3e9b96b6ff MDEV-18794 append_drop_column() small refactoring
Bogus if() logic inside the function.
2020-05-28 21:00:49 +03:00
Aleksey Midenkov
dac1280a65 MDEV-18794 Assertion `!m_innodb' failed in ha_partition::cmp_ref upon SELECT from partitioned table
System versioning assertion fix. Since DROP SYSTEM VERSIONING does not
change the list of dropped keys, we should handle a special case.

Caused by MDEV-19751. This fix deprecates MDEV-17091.
2020-05-28 20:55:59 +03:00
Anel Husakovic
957cb7b7ba MDEV-22312: Bad error message for SET DEFAULT ROLE when user account is not granted the role
- `SET DEFAULT ROLE xxx [FOR yyy]` should say:
  "User yyy has not been granted a role xxx" if:
    - The current user (not the user `yyy` in the FOR clause) can see the
    role xxx. It can see the role if:
      * the role exists in `mysql.roles_mappings` (traverse the graph),
      * the current user has read access on the `mysql.user` table - in
    that case, it can see all roles, granted or not.
    - Otherwise it should be "Invalid role specification".

In other words, it should not be possible to use `SET DEFAULT ROLE` to discover whether a specific role exists or not.
2020-05-28 17:08:40 +02:00
Marko Mäkelä
dad7a8ee7d Merge 10.2 into 10.3 2020-05-27 17:10:39 +03:00
Sergei Golubchik
e64dc07125 assert(a && b); -> assert(a); assert(b); 2020-05-27 15:56:40 +02:00
Sergei Golubchik
cceb965a79 Revert "MDEV-12445 : Rocksdb does not shutdown worker threads and aborts in memleak check on server shutdown"
This reverts commit 6f1f911497.

because it doesn't do anything now (the server doesn't check
my_disable_leak_check) and it never did anything before
(because without `extern` it simply created a local instance of
my_disable_leak_check and did not affect the server's my_disable_leak_check).
2020-05-27 15:56:40 +02:00
Sergei Golubchik
ad77247866 MDEV-21958 Query having many NOT-IN clauses running forever and causing available free memory to use completely
let thd->killed abort the range optimizer
2020-05-27 15:56:40 +02:00
Sergei Golubchik
1e951155bd bugfix: use THD::main_mem_root for kill error message
cannot use the current THD::mem_root, because it can be temporarily
reassigned to something with a very different lifetime
(e.g. to TABLE::mem_root or the range optimizer mem_root).
2020-05-27 15:56:40 +02:00
Sergei Golubchik
b01c8a6cc8 MDEV-22558 wrong error for invalid utf8 table comment 2020-05-27 15:56:40 +02:00
Monty
403dacf6a9 Fixed crash in aria recovery when using bulk insert
MDEV-20578 Got error 126 when executing undo undo_key_delete
upon Aria crash recovery

The crash happens in this scenario:
- Table with unique keys and non-unique keys
- Batch insert (LOAD DATA or INSERT ... SELECT) with REPLACE
- Some inserts succeed, followed by a duplicate key error

In the above scenario the table gets corrupted.

The bug was that we did not generate any undo entry for the
failed insert, as the whole insert can be ignored by undo.
The code did, however, not take into account that when bulk
insert is used, we would write cached keys to the file on
failure, and undo would wrongly ignore these.

Fixed by moving the writing of the cached keys to after we write
the aborted-insert event to the log.
2020-05-26 20:05:17 +03:00
Andrei Elkin
0c1f97b3ab MDEV-15152 Optimistic parallel slave doesnt cope well with START SLAVE UNTIL
The immediate bug was caused by a failure to recognize a correct
position to stop the slave applier run in optimistic parallel mode.
The analysis unveiled the following set of issues:
1 an incorrect estimate for the event binlog position passed to
  is_until_satisfied
2 the wait for workers to complete by the driver thread did not account for non-group events
  that could be left unprocessed and thus mixed up the last executed
  binlog group's file and position:
  the file remained old while the position related to the newly rotated file
3 an incorrect 'slave reached file:pos' report by the parallel slave in the error log
4 relay log UNTIL missed the parallel slave branch in
  is_until_satisfied.

The patch addresses all of them and simplifies the logic of log change
notification in both the master and relay-log UNTIL cases.
P.1 is addressed by passing the event into is_until_satisfied()
for proper analysis by the function.
P.2 is fixed by changes in handle_queued_pos_update().
P.4 required removing relay-log change notification by workers.
Instead, the driver thread updates the notion of the current relay-log
fully by itself with the aid of the introduced
bool Relay_log_info::until_relay_log_names_defer.

An extra printout of the requested until file:pos is arranged
with --log-warning=3.
2020-05-26 18:49:43 +03:00
Andrei Elkin
dbe447a789 MDEV-15152 Optimistic parallel slave doesnt cope well with START SLAVE UNTIL
The immediate bug was caused by a failure to recognize a correct
position to stop the slave applier run in optimistic parallel mode.
The analysis unveiled the following set of issues:
1 an incorrect estimate for the event binlog position passed to
  is_until_satisfied
2 the wait for workers to complete by the driver thread did not account for non-group events
  that could be left unprocessed and thus mixed up the last executed
  binlog group's file and position:
  the file remained old while the position related to the newly rotated file
3 an incorrect 'slave reached file:pos' report by the parallel slave in the error log
4 relay log UNTIL missed the parallel slave branch in
  is_until_satisfied.

The patch addresses all of them and simplifies the logic of log change
notification in both the master and relay-log UNTIL cases.
P.1 is addressed by passing the event into is_until_satisfied()
for proper analysis by the function.
P.2 is fixed by changes in handle_queued_pos_update().
P.4 required removing relay-log change notification by workers.
Instead, the driver thread updates the notion of the current relay-log
fully by itself with the aid of the introduced
bool Relay_log_info::until_relay_log_names_defer.

An extra printout of the requested until file:pos is arranged
with --log-warning=3.
2020-05-26 18:26:50 +03:00