dict_foreign_find_index(): Ignore incompletely created indexes.
After a failed ADD UNIQUE INDEX, an incompletely created index
could be left behind until the next ALTER TABLE statement.
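As a rough illustration of the idea (this is not the actual
dict_foreign_find_index() body, and the is_committed() predicate is
only the name used by newer InnoDB versions), the index scan simply
skips any index whose creation was never committed:

    for (dict_index_t* index = dict_table_get_first_index(table);
         index != NULL;
         index = dict_table_get_next_index(index)) {
        if (!index->is_committed()) {
            /* Incompletely created index, e.g. left behind by a
            failed ADD UNIQUE INDEX: never consider it. */
            continue;
        }
        /* ... the existing checks on the indexed columns follow ... */
    }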
There is a race condition in a test. con1 is the older transaction,
as its query started waiting first, and the test continues so that
con1 gets the lock wait timeout first. It is possible that the
default connection also gets a lock wait timeout, or that, once con1
has been rolled back, it acquires the locks it was waiting for and
performs the update. Fixed by removing the query outputs, as they
could vary, and by accepting success from the default connection's
query.
This bug affects both writing and reading of the encrypted redo log
in MariaDB 10.1, starting from version 10.1.3, which added support
for innodb_encrypt_log. That is, InnoDB crash recovery and
Mariabackup will sometimes fail when innodb_encrypt_log is used.
MariaDB 10.2 and Mariabackup 10.2 or later versions are not
affected.
log_block_get_start_lsn(): Remove. This function would cause trouble
if a log segment being read crosses a 32-bit boundary of the LSN,
because it does not allow the most significant 32 bits of the LSN to
change (see the sketch below).
log_blocks_crypt(), log_encrypt_before_write(), log_decrypt_after_read():
Add the parameter "lsn" for the start LSN of the block.
log_blocks_encrypt(): Remove (unused function).
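For illustration only, a standalone sketch (not the removed
function) of why deriving a block's start LSN from the most
significant 32 bits of another LSN goes wrong when the read crosses
a 32-bit boundary (assumes a 64-bit lsn_t-like type):

    #include <cstdint>
    #include <cstdio>

    /* Reconstruct a start LSN by reusing the high 32 bits of a
    reference LSN, roughly the way the removed helper did. */
    static uint64_t start_lsn_from_high_word(uint64_t reference_lsn,
                                             uint32_t low_bits)
    {
        return (reference_lsn & 0xFFFFFFFF00000000ULL) | low_bits;
    }

    int main()
    {
        const uint64_t boundary = 1ULL << 32;
        /* A block that starts just below the 32-bit boundary, read
        together with data whose LSN is already above the boundary. */
        uint64_t true_start    = boundary - 512;
        uint64_t reference_lsn = boundary + 1024;
        uint64_t reconstructed = start_lsn_from_high_word(
            reference_lsn, (uint32_t) true_start);
        /* The reconstructed value is too large by 2^32. */
        std::printf("true=%llu reconstructed=%llu\n",
                    (unsigned long long) true_start,
                    (unsigned long long) reconstructed);
        return 0;
    }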
If the specification of a CTE contains a reference to a temporary
table, then THD::open_temporary_table() must be called for this
reference for every occurrence of the CTE in the query. By mistake
this was done only for the first occurrence of each CTE.
The patch fixes this problem in With_element::clone_parsed_spec().
It also moves the call of check_dependencies_in_with_clauses() to
its proper place, before the call of check_table_access().
Additionally, the patch reduces the number of calls of the
function check_dependencies_in_with_clauses().
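A hedged sketch of the intent (the names follow server conventions,
but this is not the actual clone_parsed_spec() code): the
temporary-table lookup must run for the table references of every
clone of the CTE specification, not only the first one:

    static bool open_temp_tables_for_clone(THD *thd,
                                           TABLE_LIST *clone_tables)
    {
        for (TABLE_LIST *tl= clone_tables; tl; tl= tl->next_global)
        {
            /* Binds tl to a matching temporary table of the session,
            if there is one; returns true on error. */
            if (thd->open_temporary_table(tl))
                return true;
        }
        return false;
    }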
debug_key_management
encrypt_and_grep
innodb_encryption
If the real table count differs from what the test expects, the test
just hangs, waiting for the hardcoded number to be reached, and then
exits with **failed** after a 10-minute wait: quite unfriendly, and
it is hard to figure out what is going on.
Fix and enable some of the tests; some remain disabled.
The tests innodb_gis.rtree_old and innodb_gis.row_format
duplicated some versions of the test main.gis-rtree.
Instead of duplicating it, source that test in a new test,
innodb_gis.innodb_gis_rtree.
Introduce innodb_row_format.combinations. Due to this,
ROW_FORMAT=COMPRESSED will not be covered in some tests
where it is covered in MySQL 5.7.
The function rtr_update_mbr_field_in_place() is generating
MLOG_REC_UPDATE_IN_PLACE or MLOG_COMP_REC_UPDATE_IN_PLACE records
on non-leaf pages, even though MLOG_WRITE_STRING would perfectly
suffice for updating a fixed-length data field.
btr_cur_parse_update_in_place(): If flags==7, the record may be
from rtr_update_mbr_field_in_place(), and we must check if the
page is a leaf page. Otherwise, assume that it is.
btr_cur_update_in_place(): Assert that the page is a leaf page.
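A sketch of the recovery-time decision described above (InnoDB-style
names; not the actual parsing code): only when all three flag bits
are set can the record have been written by
rtr_update_mbr_field_in_place(), so only then does the page itself
have to be consulted:

    static bool expect_leaf_page(ulint flags, const page_t* page)
    {
        if (flags == (BTR_NO_UNDO_LOG_FLAG | BTR_NO_LOCKING_FLAG
                      | BTR_KEEP_SYS_FLAG)) {
            /* flags==7: possibly rtr_update_mbr_field_in_place(),
            which also writes to non-leaf pages. */
            return page_is_leaf(page);
        }
        /* Otherwise the record came from btr_cur_update_in_place(),
        which operates on leaf pages only. */
        return true;
    }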
Other things, mainly to get the
create_mysqld_error_find_printf_error tool to work:
- Added protection to not include mysqld_error.h twice
- Include "unireg.h" instead of "mysqld_error.h" in server
- Added protection if ER_XX messages are already defined
- Removed wrong calls to my_error(ER_OUTOFMEMORY) as
my_malloc() and my_alloc will do this automatically
- Added missing %s to ER_DUP_QUERY_NAME
- Removed old and wrong calls to my_strerror() when using
MY_ERROR_ON_RENAME (wrong merge)
- Fixed the deadlock error message from Galera. Previously, the
extra information passed with ER_LOCK_DEADLOCK was lost, because
ER_LOCK_DEADLOCK does not accept any extra information.
I kept #ifdef mysqld_error_find_printf_error_used in sql_acl.h
to make it easy to do this kind of check again in the future.
trx_undo_page_report_modify(): For SPATIAL INDEX, keep logging
updated off-page columns twice, so that
the minimum bounding rectangle (MBR) will be logged.
Avoiding the redundant logging would require larger changes
to the undo log format.
row_build_index_entry_low(): Handle SPATIAL_UNKNOWN more robustly,
by refusing to purge the record from the spatial index.
We can get this code when processing old undo log from 10.2.10 or
10.2.11 (the releases affected by MDEV-14799, which was a regression
from MDEV-14051).
The InnoDB background tasks can modify tables while LOCK TABLES...WRITE
is in effect. The purge of InnoDB history always worked like this in
MariaDB, but in MySQL 5.7 it sometimes yields to LOCK TABLES.
Also, make gcol.innodb_virtual_index run the purge for an UPDATE
before DROP TABLE is executed.
Code in QUICK_RANGE_SELECT::init_ror_merged_scan() could
theoretically have caused crashes if this was ever called from an
update or delete.
This also found a bug in the vcol/range.result file.
Locked_tables_list::unlock_locked_tables
Similarly to regular DROP TABLE, don't leave locked tables mode if
CREATE OR REPLACE dropped a temporary table but failed to create the
new one.
The problem is that there is no record of which temporary table was
"locked" by LOCK TABLES.
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
for a query that uses CTE
The first reference to a CTE in the processed query uses the unit
built by the parser for the CTE specification. This unit is
considered the specification of the derived table created for the
first reference to the CTE. This requires some transformation of the
original query tree: the unit of the specification must be moved to
a new position, as a slave of the select where the first reference
to the CTE occurs. The transformation is performed by the function
st_select_lex_node::move_as_slave(). There was an obvious bug in
this function: as a result, in many cases the moved unit turned out
to be lost in the query tree. This could cause various problems. In
particular, prepared statements for queries that used CTEs could
miss the cleanup of some selects that is performed at the end of the
preparation/execution of a PS. If such cleanup is not done for a PS,
the next execution of the PS causes an assertion abort or a crash.
If a translation table is present when we materialize the derived
table, then change it to point to the materialized table.
Added debug info to see what really happens with which derived
table.
Problem: GTIDs are not transferred in a Galera cluster.
Solution: We need to transfer the GTID whenever the cluster acts as
a slave or master in async replication. In normal GTID replication,
the GTID is generated on the receiving node itself, and it is always
in sync with the other nodes. Because Galera keeps the nodes in
sync, all nodes get the same number of event groups. The issue
arises when, say, Galera is the slave in async replication:
A
| (async replication)
D <-> E <-> F {Galera replication}
What should happen is that all nodes apply the master's GTID, but
this does not happen, because nodes E and F do not receive the GTID
from D in the write set. So what E (or F) does is apply
wsrep_gtid_domain_id, D's server_id, and E's next GTID sequence
number. This generated GTID does not always work, for example when A
has a different domain id.
So in this commit, when a Galera node sees that an event was
received from the master, we simply write a Gtid_Log_Event into the
write set and send it to the other nodes.
Sometimes, the test would fail with a result difference for
the READ UNCOMMITTED read, because the incremental backup
would finish before redo log was written for all the rows
that were inserted in the second batch.
To fix that, cause a redo log write by creating another
transaction. The transaction rollback (which internally does commit)
will be flushed to the redo log, and before that, all the preceding
changes will be flushed to the redo log as well.
row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
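A minimal sketch of the numbering rule (a hypothetical helper, not
the actual row0log.cc code; it assumes the duplicate is never
attributed to a hidden clustered index): the clustered index only
occupies a key number at the SQL layer when it is a user-defined
PRIMARY KEY, so a hidden GEN_CLUST_INDEX must be skipped when
mapping an InnoDB index position to the reported key number:

    static unsigned sql_key_number(unsigned innodb_index_pos,
                                   bool clust_index_is_sql_key)
    {
        /* With a user-defined PRIMARY KEY the positions map 1:1;
        otherwise the hidden clustered index is not counted. */
        return clust_index_is_sql_key
            ? innodb_index_pos
            : innodb_index_pos - 1;
    }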
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
In the function make_sortkey, a tmp buffer was defined, and in the
absence of param->tmp_buffer, the tmp buffer used the sort_keys
buffer. The sort_keys buffer has a length defined by
sort_field->length, while the length of param->tmp_buffer is stored
in param->rec_length. Make sure to use the appropriate length based
on which buffer we are using, otherwise we will overflow.
Also added a type cast to size_t during the calculation of the sort
keys buffer size, to avoid an overflow if the buffer size exceeds
32 bits.
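For the overflow part, a standalone sketch (not the filesort code;
assumes a 64-bit size_t) of why the cast matters: without it, the
product of two 32-bit quantities is computed in 32 bits and wraps
before it ever reaches the wider destination:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        uint32_t num_records = 70000;
        uint32_t rec_length  = 70000;  /* product ~4.9e9 > UINT32_MAX */

        std::size_t wrong = num_records * rec_length;          /* wraps */
        std::size_t right = (std::size_t) num_records * rec_length; /* widened first */

        std::printf("wrong=%zu right=%zu\n", wrong, right);
        return 0;
    }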