remove HA_EXTRA_PREPARE_FOR_RENAME - neither OPTIMIZE nor REPAIR needs it
(was introduced in b58e79566c when replacing remove_table_from_cache()
with wait_while_table_is_used() even though remove_table_from_cache()
did not have it).
May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table"
I am not able to reproduce a crash, however there was no protection in
print_keydup_error() if the storage engine reported the wrong key number.
This patch adds such a protection and should stop any further crashes
in this case.
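As a minimal sketch of the kind of bounds check meant here (the call
site and variable names are assumptions, not the actual patch):

  uint keynr= table->file->get_dup_key(error);
  if (keynr >= table->s->keys)
  {
    /* The engine reported a bogus key number; print a generic
       duplicate-key error instead of indexing key_info out of
       bounds. */
    my_error(ER_DUP_KEY, MYF(0), table->s->table_name.str);
  }
  else
    print_keydup_error(table, &table->key_info[keynr], MYF(0));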
Other things:
- Added extra protection in Aria to not set errkey to more than the
  number of keys. (This is probably not the cause of this crash, but
  better safe than sorry.)
- Extended test_if_equal_repl_errors() to handle different cases of
  ER_DUP_ENTRY. This is mainly a precaution for the future.
MDEV-14957: JOIN::prepare gets unusable "conds" as argument
Do not touch merged derived tables (the merge is irreversible)
Fix the first argument of in_optimizer for calls that may happen before fix_fields()
escape all characters less than or equal to 0x1F (control characters)
(shorter sequences are not used, to keep the code simple; the long
encoding is always legal according to RFC 4627)
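For illustration only (a sketch of the rule, not the actual patch):

  /* Emit the six-character sequence \u00XX for any control
     character; the long form is always legal per RFC 4627. */
  if ((uchar) c <= 0x1F)
    len+= sprintf(buf + len, "\\u%04x", (int) (uchar) c);
  else
    buf[len++]= c;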
Problem:
If we create a table using myisam/aria, then the following crashes the server:
CREATE TABLE t1(a bit(1), b int auto_increment , index(a,b));
insert into t1 values(1,1);
Or this sequence of statements:
CREATE TABLE t1 (b BIT(1), pk INTEGER AUTO_INCREMENT PRIMARY KEY);
ALTER TABLE t1 ADD INDEX(b,pk);
INSERT INTO t1 VALUES (1,b'1');
ALTER TABLE t1 DROP PRIMARY KEY;
Reason:
1st- find_ref_key() finds what key an auto_increment field belongs to by
comparing key_part->offset and field->ptr. But BIT fields might have
zero length in the record, so a key might have many key parts with the
same offset. That is, comparing offsets cannot uniquely identify the
correct key part.
2nd- Since next_number_key_offset is zero, myisam/aria will think that
the auto_increment is in the first part of the key.
3rd- myisam/aria will call retrieve_auto_key, which will see the first
key_part field as a bit field and call assert(0).
Solution:
Many key parts might have the same offset, but BIT fields do not
support auto_increment. So, we can skip all key parts over BIT fields,
and then comparing offsets will be unambiguous.
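A sketch of the idea (the loop shape and names are assumed, not copied
from the patch):

  for (uint part= 0; part < key->user_defined_key_parts; part++)
  {
    KEY_PART_INFO *key_part= key->key_part + part;
    /* BIT fields can occupy zero bytes in the record and can never
       be auto_increment, so skip them before comparing offsets. */
    if (key_part->field->type() == MYSQL_TYPE_BIT)
      continue;
    if (key_part->offset == field_offset)
      return part;      /* now an unambiguous match */
  }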
The assertion failure was caused by an incorrectly set read_set for
functions in the ORDER BY clause of one part of a union, when a
mergeable view is used and the ORDER BY clause can be skipped (removed).
An ORDER BY clause can be skipped if it belongs to one part of the UNION,
as the ordering of an individual SELECT is not meaningful in the UNIONed
result set. The
server is aware of this optimization and tries to remove the order by
clause before JOIN::prepare. The problem is that we need to throw an
error when the ORDER BY clause contains invalid columns. To do this, we
attempt resolving the ORDER BY expressions, then subsequently drop them
if resolution succeeded. However, ORDER BY resolution had the side
effect of adding the expressions to the all_fields list, which is used
to construct temporary tables to store the result. We may be ignoring
the ORDER BY clause, but the tmp table still tries to compute the
values for the expressions, even though the columns are never used.
The assertion only shows itself if the order by clause contains members
which were not previously in the select list, and are part of a
function.
There is an additional question as to why this only manifests when using
VIEWS and not when using a regular table. The difference lies with the
"reset" of the read_set for the temporary table during
SELECT_LEX::update_used_tables() in JOIN::optimize(). The changes
introduced in fdf789a7ea cleared the
read_set when a mergeable view is encountered in the TABLE_LIST
definition.
Upon initial order_list resolution, the table's read_set is updated
correctly. JOIN::optimize() will only reset the read_set if it
encounters a VIEW. Since we no longer have ORDER BY clause in
JOIN::optimize() we never get to correctly update the read_set again.
Other relevant commit by Timour, which first introduced the order
resolution when we "can_skip_sort_order":
883af99e7d
Solution:
Don't add the resolved ORDER BY elements to all_fields. We only resolve
them to check if an error should be returned for the query. Ignore them
completely otherwise.
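Roughly (a hedged sketch; the names here are assumed):

  for (ORDER *ord= order; ord; ord= ord->next)
  {
    Item *item= *ord->item;
    /* Resolve only to be able to report an error for invalid
       columns ... */
    if (!item->fixed && item->fix_fields(thd, ord->item))
      return true;
    /* ... but do not append the item to all_fields, so no tmp
       table column is created for it. */
  }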
It has its limitations, e.g. it assumes that only one gdb and only one
valgrind process is running. And a hard-coded one-second delay might be
too short for slow machines.
Still, it's better than "doesn't work at all".
InnoDB limited the maximum number of bytes per character to 4.
But, the filename character set that was introduced in MySQL 5.1
uses up to 5 bytes per character.
To allow InnoDB tables to be created with wider characters, let
us split the mbminmaxlen fields into mbminlen and mbmaxlen, and increase
the limit to 7 bytes per character. This will increase the payload size
of dtype_t and dict_col_t by one bit. The storage size will be unchanged
(54 bits and 77 bits will use the same number of bytes as the
previous sizes 53 and 76 bits).
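Schematically (the exact declarations are assumed; the widths follow
the description above):

  struct dtype_t_sketch
  {
    /* previously: unsigned mbminmaxlen:5;  (one packed field) */
    unsigned mbminlen:3;   /* minimum bytes per character, up to 7 */
    unsigned mbmaxlen:3;   /* maximum bytes per character, up to 7 */
  };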
In this case we were using the derived_with_keys optimization, but we could
not create a key because the length of the key was greater than the maximum
allowed (MI_MAX_KEY_LENGTH).
To do the join we needed to create a hash join key instead, but the EXPLAIN
output still showed that we were referring to derived keys which were
created but not used.
In the function JOIN::shrink_join_buffers the iteration over joined
tables was organized in a wrong way. This could cause a crash if
the optimizer chose to materialize a semi-join that used join caches
for which the sizes must be adjusted.
For DATE and DATETIME columns defined as NOT NULL,
"date_notnull IS NULL" has to be modified to:
"date_notnull IS NULL OR date_notnull = 0"
(if date_notnull is from an inner table of an outer join);
"date_notnull = 0" - otherwise.
This must hold for such columns of mergeable views and derived
tables as well. So far the code did the above re-writing only
for columns of base tables and temporary tables.
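For illustration, the extended condition can be built like this (a
hedged sketch; isnull_item, field_item and the flag are made-up names):

  Item *eq_zero= new (thd->mem_root)
    Item_func_eq(thd, field_item, new (thd->mem_root) Item_int(thd, 0));
  Item *new_cond= field_is_from_outer_join_inner_table
    ? (Item*) new (thd->mem_root) Item_cond_or(thd, isnull_item, eq_zero)
    : eq_zero;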
The function trans_rollback_to_savepoint(), unlike trans_savepoint(),
did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT
inside an XA transaction incorrectly returned an error
"...command cannot be executed ... in the ACTIVE state...".
Partially merging a MySQL patch:
7fb5c47390311d9b1b5367f97cb8fedd4102dd05
This is WL#7193 (Decouple THD and st_transactions)...
The currently merged part includes these changes:
- Introducing st_xid_state::check_has_uncommitted_xa()
- Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(),
so now both allow XA_ACTIVE.
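The shared check looks roughly like this (a sketch based on the MySQL
patch; the exact set of states may differ):

  bool st_xid_state::check_has_uncommitted_xa() const
  {
    if (xa_state == XA_IDLE ||
        xa_state == XA_PREPARED ||
        xa_state == XA_ROLLBACK_ONLY)
    {
      my_error(ER_XAER_RMFAIL, MYF(0), xa_state_names[xa_state]);
      return true;
    }
    return false;   /* XA_NOTR and XA_ACTIVE are allowed */
  }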
The problem was in the following scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for UNLOCK
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the
query to be unlocked
T1 a) - has not yet unlocked the query in the QC and crashes on the
attempt to unlock it, because the QC signals are destroyed
b) - if the above happened before the destruction, it executes
end_of_result() a first time on exit after try_lock(), which sees that
the QC is disabled and returns TRUE. But it does not reset
query_cache_tls->first_query_block, which leads to a second call of
end_of_result() when the diagnostics arena already has an
inappropriate status (not is_eof()).
The fix is:
1) wait for all queries to be unlocked before destroying them, by
locking and unlocking each of them
2) clear query_cache_tls->first_query_block if the QC is disabled
A bug in get_sort_by_table() could mislead the function
setup_semijoin_dups_elimination(). As a result the optimizer
could produce invalid execution plans for queries with joins,
subqueries, ORDER BY, and semijoin=on, where the subquery
predicates could be converted to semi-joins.
Remove non-prepared FT functions (those belonging to removed clauses)
from the list.
In a later version this will be fixed by building the list during
preparation.
This bug happens when locking the same Aria "transactional" table
(page format) more than once with LOCK TABLES and inserting into one
of them with INSERT ... SELECT while the table is empty.
Fixed by ensuring we don't use fast bulk insert if the table is opened
twice with LOCK TABLES (as this changes table->s->state).
Code changes:
- Added use_count to MARIA_USED_TABLES to be able to check if a
  table is opened twice for a statement/LOCK TABLE
- Don't clear the history or reset info->start_state if we
  don't have versioning. One reason for the bug was
  that info->start_state was set to point to different
  states for the two tables. If there is no versioning,
  info->start_state should always point to info->s->state.common.
Other things:
- Also fixed some typos that were noticed while scanning the code
- More DBUG_PRINT
The XtraDB option innodb_track_changed_pages causes
the function log_group_read_log_seg() to be invoked
even when recv_sys==NULL, leading to the SIGSEGV.
This regression was caused by
MDEV-11027 InnoDB log recovery is too noisy
dict_foreign_find_index(): Ignore incompletely created indexes.
After a failed ADD UNIQUE INDEX, an incompletely created index
could be left behind until the next ALTER TABLE statement.
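Sketch of the added filter (the surrounding matching logic is omitted,
and the predicate name is an assumption):

  for (dict_index_t* index= dict_table_get_first_index(table);
       index != NULL;
       index= dict_table_get_next_index(index))
  {
    /* Skip indexes whose creation was never committed, e.g. ones
       left behind by a failed ADD UNIQUE INDEX. */
    if (!index->is_committed())
      continue;
    /* ... check whether this index matches the foreign key ... */
  }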
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
If a translation table is present when we materialize the derived
table, then change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
In the function make_sortkey a tmp buffer was defined, and in the absence
of param->tmp_buffer, the tmp buffer used the sort_keys buffer. The
sort_keys buffer has a length defined in sort_field->length, while the
length of param->tmp_buffer is stored in param->rec_length. Make sure to
use the appropriate length based on which buffer we are using, otherwise
we'll overflow.
Also added a type cast to size_t during the calculation of the sort keys
buffer size, to avoid an overflow if the buffer size exceeds 32 bits.
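In outline the fix amounts to (a sketch; variable names assumed):

  uchar *tmp_buf;
  size_t tmp_length;
  if (param->tmp_buffer)
  {
    tmp_buf= (uchar*) param->tmp_buffer;
    tmp_length= param->rec_length;     /* the length of tmp_buffer */
  }
  else
  {
    tmp_buf= to;                       /* reusing the sort_keys buffer */
    tmp_length= sort_field->length;    /* so use its length */
  }
  /* and for the sort keys buffer size: */
  size_t buf_size= (size_t) num_records * param->rec_length;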
Whenever we call merge_role_privileges on a role, we make use of
the role->counter variable to check if all its children have had their
privileges merged. Only if all children have had their privileges merged
do we update the privileges on the parent. This is done to prevent extra work.
The same idea is employed during flush privileges. You only begin merging
from "leaf" roles. The recursive calls will merge their parents at some point.
A problem arises when we try to "re-merge" a parent. Take the following graph:
{noformat}
A (0) ---- C (2) ---- D (2) ---- USER
/ /
B (0) ----/ /
/
E (0) --------------/
{noformat}
In parentheses we have the "counter" value right before we start to iterate
through the roles hash and propagate values. It represents the number of roles
granted to the current role. The order in which we iterate through the roles
hash is alphabetical.
* First merge A, which leads to decreasing the counter for C to 1. Since C is
not 0, we don't proceed with merging into C.
* Second we merge B, which leads to decreasing the counter for C to 0. Now
we proceed with merging into C. This leads to reducing the counter for D to 1
as part of C merge process.
* Third as we iterate through the hash, we see that C has counter 0, thus we
start the merge process *again*. This leads to reducing the counter for
D to 0! We then attempt to merge D.
* Fourth we start merging E. When E sees D as its parent (according to the code)
it attempts to reduce D's counter, which wraps around. Now D's counter is
a very large number, thus E's privileges are not forwarded to D yet.
To correct this behavior we must make sure to only start merging from initial
leaf nodes.
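Conceptually (a hedged sketch; the real walker and helper differ):

  static my_bool start_merge_if_leaf(ACL_ROLE *role, void *context)
  {
    /* Only a role with no granted roles of its own may seed the
       merge; intermediate roles are reached via recursion, so their
       counters are decremented exactly once per child. */
    if (role->counter == 0 && role->role_grants.elements == 0)
      merge_role_and_propagate(role, context);  /* hypothetical helper */
    return 0;
  }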
When granting a role to another role, DB privileges get propagated. If
the grantee had no previous DB privileges, an extra ACL_DB entry is created to
house those "indirectly received" privileges. If, afterwards, DB
privileges are granted to the grantee directly, we must make sure to not
create a duplicate ACL_DB entry.
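In outline (both helpers here are hypothetical names):

  ACL_DB *entry= find_acl_db_entry(grantee, db);      /* hypothetical */
  if (entry)
    entry->access|= new_privileges;  /* merge into the existing entry */
  else
    acl_insert_db_entry(grantee, db, new_privileges); /* hypothetical */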
row_undo_step(), trx_rollback_active(): Abort the rollback of a
recovered ordinary transaction if fast shutdown has been initiated.
trx_rollback_resurrected(): Convert an aborted-rollback transaction
into a fake XA PREPARE transaction, so that fast shutdown can proceed.
trx_rollback_resurrected(): If shutdown was initiated, fake all
remaining active transactions to XA PREPARE state, so that shutdown
can proceed. Also, make the parameter "all" an output that will be
assigned to FALSE in this case.
trx_rollback_or_clean_recovered(): Remove the shutdown check
(it was moved to trx_rollback_resurrected()).
trx_undo_free_prepared(): Relax assertions.
For a BIT field, null_bit is not set to 0 even when the field is defined
as NOT NULL. So now, in the function TABLE::create_key_part_by_field, if
the bit field is not nullable, then the null_bit is explicitly set to 0.
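The change is essentially (a sketch; surrounding code omitted):

  /* BIT fields keep a non-zero null_bit even when NOT NULL,
     so clear it explicitly for non-nullable bit fields. */
  if (field->type() == MYSQL_TYPE_BIT && !field->real_maybe_null())
    key_part_info->null_bit= 0;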