After this code
end_inplace:
if (thd->locked_tables_list.reopen_tables(thd, false))
goto err_with_mdl_after_alter;
the table is not reopened (need_reopen is false), but
some_table_marked_for_reopen is reset to false.
Item_field is allocated on table lock and assigned a new name on the first
ALTER, which is then freed at the end of the command. The second ALTER
accesses this Item_field and gets a garbage value.
MDEV-22689 MSAN use-of-uninitialized-value in decode_bytes()
This was not a user-visible issue, as the Huffman code lookup tables would
automatically ignore any of the uninitialized bits.
Fixed by adding an end-zero byte to the bit-stream buffer.
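For illustration, a minimal standalone sketch of the idea (hypothetical
helper, not the actual Aria/MyISAM decoding code):

  #include <cstdint>
  #include <cstring>
  #include <vector>

  // A bit reader that fetches whole bytes can touch one byte past the
  // last valid bit of the stream.  Appending an end-zero byte keeps
  // that read defined; decoders that consume only the valid bits
  // ignore the padding.
  std::vector<uint8_t> make_bit_stream(const uint8_t *bits, size_t len)
  {
    std::vector<uint8_t> buf(len + 1);   // one extra end-zero byte
    memcpy(buf.data(), bits, len);
    buf[len]= 0;                         // the over-read now sees 0
    return buf;
  }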
Other things:
- Fixed an assert in strmov() that was wrong for this case, in myisamchk
and aria_chk, by removing the strmov() call
- IF EXISTS ends with a list of all non-existing objects, instead of a
separate note for every non-existing object
- Produce a "Note" for all wrongly dropped objects
(like trying to do DROP SEQUENCE on a normal table)
- Do not write to the binlog existing tables that could not be dropped
Other things:
MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to drop
parent table referenced by FK
This was caused by an older version of this patch and was later fixed.
- Produce a "Note" for all wrongly dropped objects
(Like doing DROP VIEW on a table).
- IF EXISTS ends with a list of all not existing objects, instead of a
separate note for every not existing object.
Other things:
- Fixed a bug where one could do CREATE TEMPORARY SEQUENCE multiple times
and create multiple temporary sequences with the same name.
The code used is largely based on code from Tencent.
The problem is that in some rare cases there may be a conflict between .frm
files and the files in the storage engine. In this case the DROP TABLE
was not able to properly drop the table.
Some MariaDB/MySQL forks have solved this by adding a FORCE option to
DROP TABLE. After some discussion among MariaDB developers, we concluded
that users expect that DROP TABLE should always work, even if the
table is not consistent. There should be no need to use a
separate keyword to ensure that the table is really deleted.
The solution used is:
- If the .frm file doesn't exist, try dropping the table from all storage
engines.
- If the .frm file exists but the table does not exist in the engine,
try dropping the table from all storage engines.
- Update storage engines that use multiple table files (CSV, MyISAM, Aria)
to succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
is not needed and always succeeds. This is used by ha_delete_table_force()
to know which handlers to ignore when trying to drop a table without
a .frm file (a sketch of this loop follows below).
The disadvantage of this solution is that a DROP TABLE on a non-existing
table will be a bit slower, as we have to ask all active storage engines
if they know anything about the table.
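A minimal sketch of the loop referenced above, with simplified stand-in
types (not the real handlerton or ha_delete_table_force() code):

  #include <vector>

  struct handlerton_like                     // stand-in for handlerton
  {
    unsigned flags;
    bool (*delete_table)(const char *path);  // returns true on success
  };
  static const unsigned HTON_AUTOMATIC_DELETE_TABLE= 1U << 0;

  // Ask every active engine to drop the table, skipping engines whose
  // delete_table() is not needed (HTON_AUTOMATIC_DELETE_TABLE).
  bool delete_table_force(const std::vector<handlerton_like> &engines,
                          const char *path)
  {
    bool dropped= false;
    for (const handlerton_like &hton : engines)
    {
      if (hton.flags & HTON_AUTOMATIC_DELETE_TABLE)
        continue;                            // nothing to delete here
      if (hton.delete_table(path))           // best effort, per engine
        dropped= true;
    }
    return dropped;
  }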
Other things:
- Added a new flag, MY_IGNORE_ENOENT, to my_delete() to not give an error
if the file doesn't exist. This simplifies some of the code (a sketch of
the idea follows after this list).
- Don't clear thd->error in ha_delete_table() if there was an active
error. This is a bug fix.
- handler::delete_table() will not abort if the first file doesn't exist.
This is a bug fix to handle the case when a drop table was aborted in
the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
same code path as when it's not used.
- Use non_existing_Table_error() to detect if the table didn't exist.
The old code used different error tests in different places.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if
it can't be parsed, instead of leaving it around (bug fix)
- InnoDB no longer prints an error about the .frm file being out of sync
with the InnoDB dictionary if the .frm file does not exist. This change
was required to be able to try to drop an InnoDB table when the .frm file
doesn't exist.
- Fixed a bug in mi_delete_table() where the .MYD file would not be dropped
if the .MYI file didn't exist.
- Fixed a memory leak in Mroonga when deleting a non-existing table
- Fixed a memory leak in Connect when deleting a non-existing table
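As referenced in the MY_IGNORE_ENOENT item above, a sketch of the idea
using a portable stand-in for my_delete():

  #include <cerrno>
  #include <cstdio>

  // Treat "file does not exist" as success so that callers don't need
  // their own ENOENT special-casing (models
  // my_delete(path, MYF(MY_IGNORE_ENOENT))).
  bool delete_ignore_enoent(const char *path)
  {
    if (std::remove(path) == 0)
      return true;
    return errno == ENOENT;        // a missing file is not an error
  }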
Bugs fixed that were introduced by the original version of this commit:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from
other engines
With RETURNING it can happen that the user has some privileges on
the table (namely, DELETE), but later needs different privileges
on individual columns (namely, SELECT).
Do the same as in check_grant_column(): raise ER_COLUMNACCESS_DENIED_ERROR,
not an assert.
Item_func_div::fix_length_and_dec_temporal() set the return data type to
integer in case of @div_precision_increment==0 for temporal input with FSP=0.
This caused Item_func_div to call int_op(), which is not implemented,
so a crash on DBUG_ASSERT(0) happened.
Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
when Item::print() is called with the QT_PARSABLE flag, WHERE i NOT IN
(SELECT ...) gets printed as WHERE !i IN (SELECT ...) instead of WHERE
!(i in (SELECT ...)) because Item_in_optimizer returns DEFAULT_PRECEDENCE.
it should return the precedence of the inner operation.
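A simplified sketch of the shape of the fix (stand-in classes, not the
real Item hierarchy):

  // A wrapper item must report the precedence of the operation it
  // wraps; a fixed DEFAULT_PRECEDENCE makes the printer omit the
  // parentheses needed around "i IN (...)" under "!".
  struct Expr
  {
    virtual int precedence() const= 0;
    virtual ~Expr() {}
  };

  struct InOptimizerLike : Expr        // models Item_in_optimizer
  {
    const Expr *wrapped;               // the inner IN predicate
    int precedence() const override
    { return wrapped->precedence(); }  // was: DEFAULT_PRECEDENCE
  };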
The problem here is similar to the case with DISTINCT: the tree used for ORDER BY
needs to also hold the null bytes of the record. This was not done for GROUP_CONCAT
as NULLs are rejected by GROUP_CONCAT.
Also introduced a comparator function for the ORDER BY tree to handle null
values with JSON_ARRAYAGG.
For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure
that the Unique tree also holds the NULL bytes of a table record
inside the node of the tree. This behaviour for JSON_ARRAYAGG is
different from GROUP_CONCAT because in GROUP_CONCAT we just reject
NULL values for columns.
Also introduced a comparator function for the unique tree to handle null
values for distinct inside JSON_ARRAYAGG.
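A standalone sketch of the comparator's idea (simplified; the real change
is a callback for the Unique tree):

  #include <cstring>

  // Each key is a record image whose first null_bytes bytes are the
  // null-indicator bits.  Comparing those bytes first makes NULL
  // distinct from every non-NULL value, so DISTINCT can keep a single
  // NULL element in the JSON array.
  int cmp_record_with_nulls(const unsigned char *a,
                            const unsigned char *b,
                            size_t null_bytes, size_t rec_len)
  {
    if (int diff= memcmp(a, b, null_bytes))
      return diff;                         // null indicators differ
    return memcmp(a + null_bytes, b + null_bytes,
                  rec_len - null_bytes);   // then the field values
  }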
Backported from MySQL
Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
Issue:
------
The problem occurs when:
1) GROUP_CONCAT (DISTINCT ....) is used in the query.
2) Data size greater than value of system variable:
tmp_table_size.
The result would contain values that are non-unique.
Root cause:
-----------
An in-memory structure is used to filter out non-unique
values. When the data size exceeds tmp_table_size, the
overflow is written to disk as a separate file. The
expectation here is that when all such files are merged,
the full set of unique values can be obtained.
But the Item_func_group_concat::add function is in a bit of a
hurry. Even as it is adding values to the tree, it wants to
decide if a value is unique and write it to the result
buffer. This works fine if the configured maximum size is
greater than the size of the data. But since tmp_table_size
is set to a low value, the size of the tree is smaller and
hence requires the creation of multiple copies on disk.
Item_func_group_concat currently has no mechanism to merge
all the copies on disk and then generate the result. This
results in duplicate values.
Solution:
---------
In case of the DISTINCT clause, don't write to the result
buffer immediately. Do the merge and only then put the
unique values in the result buffer. This has been done in
Item_func_group_concat::val_str.
Note regarding result file changes:
-----------------------------------
Earlier, when a unique value was seen in
Item_func_group_concat::add, it was dumped to the output.
So the result is in the order stored in the SE. But with this fix,
we wait until all the data is read and the final set of
unique values is written to the output buffer. So the data
appears in sorted order.
This only fixes the cases when we have DISTINCT without ORDER BY clause
in GROUP_CONCAT.
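An in-memory sketch of the fixed control flow (std::set stands in for
the on-disk Unique tree and its merge):

  #include <set>
  #include <string>

  struct GroupConcatDistinctLike
  {
    std::set<std::string> unique_values;  // models the Unique tree

    void add(const std::string &v)        // no result writing here
    { unique_values.insert(v); }

    // Build the result once, after all values have been seen and
    // merged; as described above, output comes out in sorted order.
    std::string val_str(const char *sep= ",") const
    {
      std::string result;
      for (const std::string &v : unique_values)
      {
        if (!result.empty())
          result+= sep;
        result+= v;
      }
      return result;
    }
  };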
Apply this patch from Percona Server (amended for 10.5):
commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
Date: Tue Nov 14 06:34:19 2017 +0200
Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)
Make ANALYZE TABLE stop flushing affected tables from the table
definition cache, which has the effect of not blocking any subsequent
new queries involving the table if there's a parallel long-running
query:
- new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
tables;
- in mysql_admin_table, if we are performing ANALYZE TABLE, and the
table flag is set, do not remove the table from the table
definition cache, do not invalidate query cache;
- in partitioning handler, refresh the query optimizer statistics
after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
- new testcases main.percona_nonflushing_analyze_debug,
parts.percona_nonflushing_analyze_debug and a supporting debug sync
point.
For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
updated for handler::info(HA_STATUS_CONST), not often enough for
tokudb_cardinality_scale_percent). TokuDB may return different
rec_per_key values depending on dynamic variable
tokudb_cardinality_scale_percent value. The server does not have a way
of knowing that changing this variable invalidates the previous
rec_per_key values in any opened table shares, and so does not call
info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
seeming to be more correct.
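A sketch of the TokuDB-side adjustment (simplified flags and callback,
not the real handler::info() signature):

  static const unsigned HA_STATUS_CONST=    1U << 0;
  static const unsigned HA_STATUS_VARIABLE= 1U << 1;

  // The server re-requests HA_STATUS_CONST only when a share is
  // reopened, so it cannot notice tokudb_cardinality_scale_percent
  // changes.  Refreshing rec_per_key on HA_STATUS_VARIABLE as well
  // keeps the cached statistics current.
  void info_like(unsigned flag, void (*update_rec_per_key)())
  {
    if (flag & (HA_STATUS_CONST | HA_STATUS_VARIABLE))
      update_rec_per_key();         // was: only on HA_STATUS_CONST
  }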
In the merge_buffers phase for sorting, the sort buffer size is divided
between the number of chunks.
The chunks have a start and end position (m_buffer_start and m_buffer_end).
Then we read as many records as fit in this buffer for a chunk of the file.
The issue here was that we were resetting the end of the buffer
(m_buffer_end) to the number of bytes read. This was causing a problem
because, with dynamic sizes of sort keys, it is possible that later
we would not be able to accommodate even one key inside a chunk of the file.
So the fix was to not reset the end of the buffer for a chunk of the file.
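A sketch of the buffer bookkeeping (names modelled on
m_buffer_start/m_buffer_end; m_valid_end is a hypothetical extra field
for illustration):

  #include <cstddef>

  struct ChunkBuffer
  {
    char *m_buffer_start;  // fixed slice of the global sort buffer
    char *m_buffer_end;    // fixed slice end: must stay put
    char *m_valid_end;     // hypothetical: end of the data just read

    void after_read(size_t bytes_read)
    {
      // Buggy version: m_buffer_end= m_buffer_start + bytes_read;
      // with dynamic-length keys the shrunken slice may later be too
      // small for even a single key.  Track the fill level separately
      // and leave m_buffer_end untouched.
      m_valid_end= m_buffer_start + bytes_read;
    }
  };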
CREATE PROCEDURE did not detect unknown SP variables in assignments like this:
SET var=a_long_var_name_with_a_typo;
The error happened only during the SP execution time, and only if the control
flow reached the erroneous statement.
Fixing most expressions to detect unknown identifiers.
This includes simple subqueries without tables:
- Query specification: SELECT list, WHERE,
HAVING (inside aggregate functions) clauses, e.g.
SET var= (SELECT unknown_ident+1);
SET var= (SELECT 1 WHERE unknown_identifier);
SET var= (SELECT 1 HAVING SUM(unknown_identifier));
- Table value constructor: VALUES clause, e.g.:
SET var= (VALUES(unknown_ident));
Note, in some more complex subquery cases unknown variables are still not detected
(this will be fixed separately):
- Derived tables:
SET a=(SELECT unknown_ident FROM (SELECT 1 AS alias) t1);
SET res=(SELECT * FROM t1 LEFT OUTER JOIN (SELECT unknown_ident) t2 USING (c1));
- CTE:
SET a=(WITH cte1 (a) AS (SELECT unknown_ident) SELECT * FROM cte1);
SET a=(WITH cte1 (a,b) AS (VALUES (unknown,2),(3,4)) SELECT * FROM cte1);
SET a=(WITH cte1 (a,b) AS (VALUES (1,2),(3,4)) SELECT unknown_ident FROM cte1);
- SELECT .. GROUP BY unknown_identifier
- SELECT .. ORDER BY unknown_identifier
- HAVING with an unknown identifier outside of any aggregate functions:
SELECT .. HAVING unknown_identifier;
Item_default_value did not override val_native(), so the inherited
Item_field::val_native() was called. As a result Item_default_value::calculate()
was not called and Item_field::val_native() was called on a Field
with a non-initialized ptr.
Implementing Item_default_value::val_native() properly.
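A sketch of the override's shape (simplified stand-in classes and a
long long in place of a real native value; not the actual Item/Field
signatures):

  struct FieldLike
  {
    long long cached_value= 0;
    long long val_native() const { return cached_value; }
  };

  struct ItemDefaultValueLike
  {
    FieldLike field;

    // Models Item_default_value::calculate(): materialize the column
    // default into our own field before any read (the value here is
    // arbitrary for the sketch).
    void calculate() { field.cached_value= 1; }

    // The fix: override val_native() so calculate() runs first,
    // instead of inheriting a read from an uninitialized field.
    long long val_native()
    {
      calculate();
      return field.val_native();
    }
  };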
The issue here is that the charset for Sort_param::tmp_buffer is cleared
when bzero() is done for Sort_param.
Make sure to set the charset explicitly in the constructor for tmp_buffer.
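A sketch of the problem and the fix (trivial stand-in types):

  #include <cstring>

  struct CharsetInfoLike { const char *name; };
  struct StringLike      { const CharsetInfoLike *cs; };

  struct SortParamLike
  {
    StringLike tmp_buffer;

    void init(const CharsetInfoLike *cs)
    {
      memset(this, 0, sizeof(*this));  // the bzero(): wipes cs too
      tmp_buffer.cs= cs;               // the fix: set it explicitly
    }
  };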
When processing a condition like:
WHERE timestamp_column='2010-00-01 00:00:00'
don't replace the constant with an Item_datetime_literal if the constant
has zeros (in the month or in the day).
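A sketch of the guard (hypothetical helper name):

  struct DatetimeParts { unsigned year, month, day; };

  // Only rewrite the string constant into a datetime literal when the
  // value is fully valid; '2010-00-01' has month == 0 and must keep
  // its original form for the comparison.
  bool can_convert_to_literal(const DatetimeParts &dt)
  {
    return dt.month != 0 && dt.day != 0;
  }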
Problem:- Calling mark_columns_per_binlog_row_image() earlier may change the
result of mark_virtual_columns_for_write(), since it can set the bitmap on
for a virtual column, and henceforth mark_virtual_column_deps(field) will
never be called in mark_virtual_column_with_deps.
This bug is not specific to long unique; it also fails for this case:
create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
Bit operators (~ ^ | & << >>) and the function BIT_COUNT()
always called val_int() for their arguments.
It worked correctly only for INT type arguments.
In case of DECIMAL and DOUBLE arguments it did not work well:
the argument values were truncated to the maximum SIGNED BIGINT value
of 9223372036854775807.
Fixing the code as follows:
- If the argument is of an integer data type,
it works using val_int() as before.
- If the argument is of some other data type, it gets the argument value
using val_decimal(), to avoid truncation, and then converts the result
to ulonglong.
Using Item_handled_func makes switching between the two approaches easier.
As an additional advantage, with Item_handled_func it will be easier
to implement overloading in the future, so data type plugins will be able
to define their own behaviour of bit operators and BIT_COUNT().
Moving the code from the former val_int() implementations
into methods of Longlong_null, to avoid code duplication in the
INT and DECIMAL branches.
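A sketch of the NULL-propagating 64-bit operations moved onto
Longlong_null (simplified stand-in; the real class lives in the server):

  struct LonglongNullLike
  {
    long long value;
    bool      is_null;

    // Bitwise OR over the 64-bit two's-complement image; NULL in
    // either operand yields NULL, as in SQL.
    LonglongNullLike operator|(const LonglongNullLike &other) const
    {
      if (is_null || other.is_null)
        return {0, true};
      return {(long long) ((unsigned long long) value |
                           (unsigned long long) other.value),
              false};
    }
  };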
We have to include NULL in the result, which GROUP_CONCAT doesn't
always do. Also, converting should be done into another String instance,
as these can be the same.