The problem was that FLUSH TABLES was trying to read the latest sequence state,
which conflicted with a running ALTER SEQUENCE. Removed the reading
of the state when opening a table for FLUSH, as it is not needed in this
case.
Other thing:
- Fixed a potential issue with concurrently running ALTER SEQUENCE statements
where the later ALTER could read old data
MCOL-3875 Columnstore write cache
The main change is that the thr_lock get_status function now returns a
value that indicates we have to abort the lock.
Other thing:
- Made start_bulk_insert() and end_bulk_insert() protected so that the
insert cache can use them
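As a rough sketch of the new contract (a minimal self-contained model with illustrative types and names, not the real thr_lock API): the per-handler get_status callback, which used to return nothing, now returns a flag so the locking code can abort the lock when the engine asks for it.

  #include <iostream>

  struct LockData
  {
    void *param;
    // returns true when the lock has to be aborted
    bool (*get_status)(void *param, bool concurrent_insert);
  };

  static bool acquire_lock(LockData &lock)
  {
    if (lock.get_status && lock.get_status(lock.param, false))
      return false;                        // abort instead of granting the lock
    return true;                           // lock granted as before
  }

  int main()
  {
    LockData lock= { nullptr, [](void *, bool) { return true; } };
    std::cout << (acquire_lock(lock) ? "granted\n" : "aborted\n");
  }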
The reason for the change was to make it easier to find true errors
when searching trace logs.
"error:" should mainly be used when we have a real error
- IF EXISTS ends with a list of all non-existing objects, instead of a
separate note for every non-existing object
- Produce a "Note" for all wrongly dropped objects
(like trying to do DROP SEQUENCE for a normal table)
- Do not write existing tables that could not be dropped to binlog
Other things:
MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to drop
parent table referenced by FK
This was caused by an older version of this patch and was later fixed
- Produce a "Note" for all wrongly dropped objects
(like doing DROP VIEW on a table).
- IF EXISTS ends with a list of all non-existing objects, instead of a
separate note for every non-existing object.
Other things:
- Fixed a bug where one could do CREATE TEMPORARY SEQUENCE multiple times
and create multiple temporary sequences with the same name.
The code used is largely based on code from Tencent
The problem is that in some rare cases there may be a conflict between .frm
files and the files in the storage engine. In this case the DROP TABLE
was not able to properly drop the table.
Some MariaDB/MySQL forks have solved this by adding a FORCE option to
DROP TABLE. After some discussion among MariaDB developers, we concluded
that users expect that DROP TABLE should always work, even if the
table is not consistent. There should not be a need to use a
separate keyword to ensure that the table is really deleted.
The solution used is:
- If a .frm file doesn't exist, try dropping the table from all storage
engines.
- If the .frm file exists but the table does not exist in the engine,
try dropping the table from all storage engines.
- Update storage engines using many table files (CSV, MyISAM, Aria) to
succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
is not needed and always succeeds. This is used by ha_delete_table_force()
to know which handlers to ignore when trying to drop a table without
a .frm file.
The disadvantage of this solution is that a DROP TABLE on a non-existing
table will be a bit slower, as we have to ask all active storage engines
if they know anything about the table.
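As a rough illustration of the approach (a self-contained sketch with illustrative names, not the server's ha_delete_table_force() code): ask every active engine to drop the table, and skip engines whose delete_table() is flagged as automatic.

  #include <iostream>
  #include <string>
  #include <vector>

  struct Engine
  {
    std::string name;
    bool automatic_delete;                  // models HTON_AUTOMATIC_DELETE_TABLE
    bool (*drop)(const std::string &path);  // models handlerton::delete_table
  };

  static bool drop_table_force(const std::vector<Engine> &engines,
                               const std::string &path)
  {
    bool dropped= false;
    for (const auto &e : engines)
    {
      if (e.automatic_delete)
        continue;                           // nothing to delete for this engine
      if (e.drop(path))
        dropped= true;                      // at least one engine knew the table
    }
    return dropped;
  }

  int main()
  {
    std::vector<Engine> engines=
    {
      { "EngineA", false, [](const std::string &) { return false; } },
      { "EngineB", false, [](const std::string &) { return true;  } },
      { "NoFiles", true,  nullptr },        // skipped: automatic delete
    };
    std::cout << (drop_table_force(engines, "./db/t1") ? "dropped\n"
                                                       : "not found\n");
  }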
Other things:
- Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
if the file doesn't exist. This simplifies some of the code (see the
sketch after this list).
- Don't clear thd->error in ha_delete_table() if there was an active
error. This is a bug fix.
- handler::delete_table() will not abort if the first file doesn't exist.
This is a bug fix to handle the case when a drop table was aborted in
the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
same code path as when it's not used.
- Use non_existing_Table_error() to detect if the table didn't exist.
The old code used different error tests in different positions.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if
it can't be parsed, instead of leaving it hanging around (bug fix)
- InnoDB no longer prints an error about the .frm file being out of sync
with the InnoDB data dictionary if the .frm file does not exist. This
change was required to be able to try to drop an InnoDB table when the
.frm file doesn't exist.
- Fixed a bug in mi_delete_table() where the .MYD file would not be dropped
if the .MYI file didn't exist.
- Fixed a memory leak in Mroonga when deleting a non-existing table
- Fixed a memory leak in Connect when deleting a non-existing table
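The MY_IGNORE_ENOENT behaviour mentioned above can be modelled like this (a simplified self-contained sketch using plain C++ file APIs, not the mysys implementation; my_delete_like() is an illustrative name, and the errno check assumes POSIX behaviour):

  #include <cerrno>
  #include <cstdio>
  #include <iostream>

  enum { MY_IGNORE_ENOENT = 1 };

  // returns 0 on success, -1 on error
  static int my_delete_like(const char *path, int flags)
  {
    if (std::remove(path) == 0)
      return 0;
    if (errno == ENOENT && (flags & MY_IGNORE_ENOENT))
      return 0;                             // a missing file is not an error
    return -1;
  }

  int main()
  {
    // The file does not exist; with the flag the call still succeeds.
    std::cout << my_delete_like("./no_such_file.MYD", MY_IGNORE_ENOENT) << "\n";
  }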
Bugs introduced by the original version of this commit that are now fixed:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from
other engines
With RETURNING it can happen that the user has some privileges on
the table (namely, DELETE), but later needs different privileges
on individual columns (namely, SELECT).
Do the same as in check_grant_column(): raise ER_COLUMNACCESS_DENIED_ERROR,
not an assert.
Item_func_div::fix_length_and_dec_temporal() set the return data type to
integer in case of @div_precision_increment==0 for temporal input with FSP=0.
This caused Item_func_div to call int_op(), which is not implemented,
so a crash on DBUG_ASSERT(0) happened.
Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
When Item::print() is called with the QT_PARSABLE flag, WHERE i NOT IN
(SELECT ...) gets printed as WHERE !i IN (SELECT ...) instead of WHERE
!(i IN (SELECT ...)) because Item_in_optimizer returns DEFAULT_PRECEDENCE.
It should return the precedence of the inner operation.
The problem here is similar to the DISTINCT case: the tree used for ORDER BY
also needs to hold the null bytes of the record. This was not done for
GROUP_CONCAT, as NULLs are rejected by GROUP_CONCAT.
Also introduced a comparator function for the ORDER BY tree to handle NULL
values with JSON_ARRAYAGG.
For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure
that the Unique tree also holds the NULL bytes of a table record
inside the node of the tree. This behaviour for JSON_ARRAYAGG is
different from GROUP_CONCAT because in GROUP_CONCAT we just reject
NULL values for columns.
Also introduced a comparator function for the Unique tree to handle NULL
values for DISTINCT inside JSON_ARRAYAGG.
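Roughly, such a NULL-aware comparator works like this (a self-contained sketch with an illustrative record layout, not the server code): the node stores the record's null byte in front of the value, and the comparator checks that byte first so NULL is distinct from every non-NULL value and equal to itself.

  #include <cstring>
  #include <iostream>

  // record layout: 1 null byte (1 = NULL) followed by a 4-byte value
  static int cmp_with_null_byte(const unsigned char *a, const unsigned char *b)
  {
    if (a[0] != b[0])
      return a[0] ? -1 : 1;                 // NULL sorts before any value
    if (a[0])
      return 0;                             // both NULL: equal, kept once
    return std::memcmp(a + 1, b + 1, 4);    // both non-NULL: compare values
  }

  int main()
  {
    unsigned char null_rec[5]= {1, 0, 0, 0, 0};
    unsigned char one_rec[5]=  {0, 0, 0, 0, 1};
    std::cout << cmp_with_null_byte(null_rec, one_rec) << "\n";  // -1
    std::cout << cmp_with_null_byte(null_rec, null_rec) << "\n"; // 0
  }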
Backported from MYSQL
Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
Issue:
------
The problem occurs when:
1) GROUP_CONCAT (DISTINCT ....) is used in the query.
2) Data size greater than value of system variable:
tmp_table_size.
The result would contain values that are non-unique.
Root cause:
-----------
An in-memory structure is used to filter out non-unique
values. When the data size exceeds tmp_table_size, the
overflow is written to disk as a separate file. The
expectation here is that when all such files are merged,
the full set of unique values can be obtained.
But the Item_func_group_concat::add function is in a bit of a
hurry. Even as it is adding values to the tree, it wants to
decide if a value is unique and write it to the result
buffer. This works fine if the configured maximum size is
greater than the size of the data. But since tmp_table_size
is set to a low value, the size of the tree is smaller and
hence requires the creation of multiple copies on disk.
Item_func_group_concat currently has no mechanism to merge
all the copies on disk and then generate the result. This
results in duplicate values.
Solution:
---------
In case of the DISTINCT clause, don't write to the result
buffer immediately. Do the merge and only then put the
unique values in the result buffer. This has been done in
Item_func_group_concat::val_str.
Note regarding result file changes:
-----------------------------------
Earlier when a unique value was seen in
Item_func_group_concat::add, it was dumped to the output.
So the result is in the order stored in the SE. But with this fix,
we wait until all the data is read and the final set of
unique values is written to the output buffer. So the data
appears in sorted order.
This only fixes the cases when we have DISTINCT without ORDER BY clause
in GROUP_CONCAT.
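A self-contained toy model of the problem and the fix (illustrative only: a tiny capacity stands in for tmp_table_size, std::set stands in for the in-memory tree and the on-disk chunks, and the merge is simulated in memory):

  #include <cstddef>
  #include <iostream>
  #include <set>
  #include <string>
  #include <vector>

  int main()
  {
    const std::size_t tree_capacity= 2;   // stands in for a tiny tmp_table_size
    std::vector<std::set<std::string>> chunks(1);
    std::vector<std::string> input= {"a", "b", "a", "c", "a"};

    std::string emitted_early;            // what ::add() used to emit directly
    for (const auto &v : input)
    {
      if (chunks.back().size() >= tree_capacity)
        chunks.emplace_back();            // "flush" the in-memory tree to disk
      if (chunks.back().insert(v).second) // unique only within the current chunk
        emitted_early+= (emitted_early.empty() ? "" : ",") + v;
    }

    // The fix: merge all chunks first, then emit, as now done in val_str()
    std::set<std::string> merged;
    for (const auto &c : chunks)
      merged.insert(c.begin(), c.end());
    std::string emitted_late;
    for (const auto &v : merged)
      emitted_late+= (emitted_late.empty() ? "" : ",") + v;

    std::cout << "emit on add:      " << emitted_early << "\n"; // a,b,a,c,a
    std::cout << "merge, then emit: " << emitted_late  << "\n"; // a,b,c
  }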
Apply this patch from Percona Server (amended for 10.5):
commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
Date: Tue Nov 14 06:34:19 2017 +0200
Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)
Make ANALYZE TABLE stop flushing affected tables from the table
definition cache, which has the effect of not blocking any subsequent
new queries involving the table if there's a parallel long-running
query:
- new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
tables;
- in mysql_admin_table, if we are performing ANALYZE TABLE, and the
table flag is set, do not remove the table from the table
definition cache, do not invalidate query cache;
- in partitioning handler, refresh the query optimizer statistics
after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
- new testcases main.percona_nonflushing_analyze_debug,
parts.percona_nonflushing_analyze_debug and a supporting debug sync
point.
For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
updated for handler::info(HA_STATUS_CONST), not often enough for
tokudb_cardinality_scale_percent). TokuDB may return different
rec_per_key values depending on dynamic variable
tokudb_cardinality_scale_percent value. The server does not have a way
of knowing that changing this variable invalidates the previous
rec_per_key values in any opened table shares, and so does not call
info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
seeming to be more correct.
The CHECK constraint is checked by check_expression(), which walks its
items and gets into Item_field::check_vcol_func_processor() to check
for conformity with the foreign key list.
WITHOUT OVERLAPS is checked for the same conformity in
mysql_prepare_create_table().
Long uniques are already impossible with InnoDB foreign keys. See
ER_CANT_CREATE_TABLE in the test case.
2 accompanying bugs fixed (test main.constraints failed):
1. check->name.str lived on the SP execute mem_root while the "check" object
itself lives on the SP main mem_root. On the second SP execution
check->name.str contained garbage data. Fixed by allocating from
thd->stmt_arena->mem_root, which is the SP main mem_root.
2. CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed with
VCOL_FIELD_REF. VCOL_FIELD_REF is assigned in check_expression() and
then detected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
handle_if_exists_options().
Existing cases for MDEV-16932 in main.constraints cover both fixes.
Item_cache_datetime::decimals was always copied from example->decimals
without limiting to 6 (maximum possible fractional digits), so
val_str() later crashed on asserts inside my_time_to_str() and
my_datetime_to_str().
In the merge_buffers phase of sorting, the sort buffer size is divided among
the chunks. The chunks have a start and end position (m_buffer_start and
m_buffer_end). Then we read as many records as fit in this buffer from a
chunk of the file.
The issue here was that we were resetting the end of the buffer (m_buffer_end)
to the number of bytes that was read. This caused a problem because, with
dynamically sized sort keys, it is possible that later we would not be able
to accommodate even one key inside a chunk of the file.
So the fix was to not reset the end of the buffer for a chunk of the file.
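A sketch of the fix as described above (illustrative, not the server code; m_read_end is an invented name used only in this sketch): the fixed capacity boundary stays where it is, and only the amount actually read is tracked.

  #include <cstddef>

  struct ChunkBuffer
  {
    unsigned char *m_buffer_start;
    unsigned char *m_buffer_end;    // fixed capacity boundary: never moved
    unsigned char *m_read_end;      // end of the bytes read by the last refill

    void after_refill(std::size_t bytes_read)
    {
      m_read_end= m_buffer_start + bytes_read;
      // the buggy version effectively did:
      //   m_buffer_end= m_buffer_start + bytes_read;
      // which could leave the chunk too small for a later, longer key
    }
    std::size_t capacity() const
    { return static_cast<std::size_t>(m_buffer_end - m_buffer_start); }
  };

  int main()
  {
    unsigned char buf[256];
    ChunkBuffer chunk{ buf, buf + sizeof(buf), buf };
    chunk.after_refill(100);        // read 100 bytes; capacity stays 256
    return chunk.capacity() == 256 ? 0 : 1;
  }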
The code erroneously used buff[100] in a few places to make
a GRANTEE value in the form:
'user'@'host'
Fix:
- Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
- Adding a DBUG_ASSERT to make sure the buffer is large enough
- Wrapping the code in a class Grantee_str, to make it easier to reuse in 4 places.
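A self-contained sketch of the Grantee_str idea (the value of the size constant below is assumed; the commit only says the buffer is USER_HOST_BUFF_SIZE + 6 bytes): a small wrapper that owns a correctly sized buffer and formats 'user'@'host' into it.

  #include <cstddef>
  #include <cstdio>
  #include <iostream>

  constexpr std::size_t USER_HOST_BUFF_SIZE= 96;   // illustrative value only

  class Grantee_str
  {
    char buff[USER_HOST_BUFF_SIZE + 6];            // 4 quotes + '@' + '\0'
  public:
    Grantee_str(const char *user, const char *host)
    {
      std::snprintf(buff, sizeof(buff), "'%s'@'%s'", user, host);
    }
    const char *ptr() const { return buff; }
  };

  int main()
  {
    Grantee_str grantee("alice", "localhost");
    std::cout << grantee.ptr() << "\n";            // 'alice'@'localhost'
  }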
CREATE PROCEDURE did not detect unknown SP variables in assignments like this:
SET var=a_long_var_name_with_a_typo;
The error happened only at SP execution time, and only if the control
flow reached the erroneous statement.
Fixing most expressions to detect unknown identifiers.
This includes simple subqueries without tables:
- Query specification: SELECT list, WHERE,
HAVING (inside aggregate functions) clauses, e.g.
SET var= (SELECT unknown_ident+1);
SET var= (SELECT 1 WHERE unknown_identifier);
SET var= (SELECT 1 HAVING SUM(unknown_identifier));
- Table value constructor: VALUES clause, e.g.:
SET var= (VALUES(unknown_ident));
Note, in some more complex subquery cases unknown variables are still not detected
(this will be fixed separately):
- Derived tables:
SET a=(SELECT unknown_ident FROM (SELECT 1 AS alias) t1);
SET res=(SELECT * FROM t1 LEFT OUTER JOIN (SELECT unknown_ident) t2 USING (c1));
- CTE:
SET a=(WITH cte1 (a) AS (SELECT unknown_ident) SELECT * FROM cte1);
SET a=(WITH cte1 (a,b) AS (VALUES (unknown,2),(3,4)) SELECT * FROM cte1);
SET a=(WITH cte1 (a,b) AS (VALUES (1,2),(3,4)) SELECT unknown_ident FROM cte1);
- SELECT .. GROUP BY unknown_identifier
- SELECT .. ORDER BY unknown_identifier
- HAVING with an unknown identifier outside of any aggregate functions:
SELECT .. HAVING unknown_identifier;
Item_default_value did not override val_native(), so the inherited
Item_field::val_native() was called. As a result Item_default_value::calculate()
was not called and Item_field::val_native() was called on a Field
with a non-initialized ptr.
Implementing Item_default_value::val_native() properly.
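A simplified self-contained model of the fix (not the real server classes; the *_like types and the string-based "native" value are illustrative): the override must materialize the default value before the field buffer is read.

  #include <iostream>
  #include <string>

  struct Field
  {
    const std::string *ptr= nullptr;        // models Field::ptr (record buffer)
    std::string default_value= "42";
  };

  struct Item_field_like
  {
    Field *field;
    explicit Item_field_like(Field *f) : field(f) {}
    std::string val_native() const { return *field->ptr; }  // reads field->ptr
  };

  struct Item_default_value_like : Item_field_like
  {
    using Item_field_like::Item_field_like;
    void calculate() { field->ptr= &field->default_value; } // fill the buffer
    std::string val_native()
    {
      calculate();                          // the step the old code skipped
      return Item_field_like::val_native();
    }
  };

  int main()
  {
    Field f;
    Item_default_value_like item(&f);
    std::cout << item.val_native() << "\n"; // prints the default value
  }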
Analysis:
========
The list of values provided for "replicate_ignore_table" and "replicate_do_table"
is stored in a HASH. When an empty list is provided, the HASH structure doesn't
get initialized. The existing code treats an empty element list as an error and
tries to clean up the uninitialized HASH. This results in the above MSAN issue.
Fix:
===
The cleanup should be initiated only when there is an error while parsing the
'replicate_do_table' or 'replicate_ignore_table' list and the HASH is in an
initialized state. Otherwise, for an empty list, it should simply return success.
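A minimal model of the intended control flow (illustrative names, not the server's HASH API; parsing is elided): an empty list returns success without touching the hash, and cleanup only happens for an initialized hash on a real parse error.

  #include <iostream>

  struct TableRuleHash
  {
    bool inited= false;
    void init()    { inited= true; }
    void destroy() { inited= false; /* release buckets here */ }
  };

  // returns true on error, false on success (empty list included)
  static bool set_table_rule_list(TableRuleHash &hash, const char *list)
  {
    if (!list || !*list)
      return false;                 // empty list: success, hash stays untouched
    hash.init();
    bool parse_error= false;        // parsing of "db.table,db.table,..." omitted
    if (parse_error)
    {
      if (hash.inited)              // never clean up an uninitialized hash
        hash.destroy();
      return true;
    }
    return false;
  }

  int main()
  {
    TableRuleHash h;
    std::cout << set_table_rule_list(h, "") << "\n";   // 0: empty list is fine
  }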
The real problem was that the attempt to roll back changes after running out
of memory in the query cache (QC) was made incorrectly and led to the use of
uninitialized memory.
(The bug has nothing to do with the resize operation; it is just that a
lack-of-resources error was processed incorrectly.)
debug_sync_set_action(): Declare the dummy function inline,
to silence a warning about declared-but-unused static function.
This amends commit 3ccd6766d0.
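For illustration, the stand-in looks roughly like this (the exact parameter types in the server may differ; declaring it inline is what silences the unused-static-function warning):

  // no-op stand-in used when debug sync is compiled out
  static inline bool debug_sync_set_action(void * /* thd */,
                                           const char * /* action */,
                                           unsigned long /* len */)
  {
    return false;
  }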
The issue here is that the charset for Sort_param::tmp_buffer is cleared when
bzero is done on Sort_param.
Make sure to set the charset explicitly in the constructor for tmp_buffer.
Reduce the amount of engine-specific code in the server,
particularly as it does not serve any purpose now.
It may be needed for the VP engine,
to be reconsidered in MDEV-7795.
In the case of a SELECT without tables which returns either 0 or 1 rows,
JOIN::exec_inner() did not check whether the flag representing SQL_CALC_FOUND_ROWS
was set, and send_records was directly assigned 0. So SELECT FOUND_ROWS()
was giving 0 in the output. Now it checks if the flag is set: if it is,
send_records is set to 1, else to 0. 1 is the number of rows that could have been sent
to the client if the SELECT query had SQL_CALC_FOUND_ROWS.
It is 0 when no rows were sent because the SELECT query did not have
SQL_CALC_FOUND_ROWS.
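A sketch of the corrected assignment (the flag value below is assumed; OPTION_FOUND_ROWS models the bit that SQL_CALC_FOUND_ROWS sets in select_options, and the surrounding JOIN code is omitted):

  #include <iostream>

  // value of the flag is assumed here; in the server it is a bit in select_options
  static const unsigned long long OPTION_FOUND_ROWS= 1ULL << 1;

  static unsigned long long send_records_for(unsigned long long select_options)
  {
    // 1 = the row that could have been sent had SQL_CALC_FOUND_ROWS been given
    return (select_options & OPTION_FOUND_ROWS) ? 1 : 0;
  }

  int main()
  {
    std::cout << send_records_for(OPTION_FOUND_ROWS) << "\n";  // 1
    std::cout << send_records_for(0) << "\n";                  // 0
  }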