ha_innobase::prepare_inplace_alter_table(): check the maximum column length for every
index in the table, not just the indexes added by this particular ALTER TABLE ... ADD INDEX.
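As a rough illustration of the intended check, here is a minimal sketch with hypothetical, simplified types; the real code walks the InnoDB index definitions and uses the row-format-dependent length limit:

  #include <cstddef>
  #include <vector>

  struct KeyPart { std::size_t prefix_len; };       // hypothetical key part
  struct Index   { std::vector<KeyPart> parts; };   // hypothetical index

  // Validate key part lengths for every index of the table,
  // not only for the indexes being added by the current ALTER TABLE.
  bool max_col_len_ok(const std::vector<Index> &all_indexes, std::size_t max_len)
  {
    for (const Index &idx : all_indexes)            // old and new indexes alike
      for (const KeyPart &kp : idx.parts)
        if (kp.prefix_len > max_len)
          return false;                             // would exceed the limit
    return true;
  }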
There was a race condition in the error handling of ALTER TABLE when
the table contains FULLTEXT INDEX.
During the error handling of an erroneous ALTER TABLE statement,
when InnoDB would drop the internally created tables for FULLTEXT INDEX,
it could happen that one of the hidden tables was being concurrently
accessed by a background thread. Because of this, InnoDB would defer
the drop operation to the background.
However, related to MDEV-13564 backup-safe TRUNCATE TABLE and its
prerequisite MDEV-14585, we had to make the background drop table queue
crash-safe by renaming the table to a temporary name before enqueueing it.
This renaming was introduced in a follow-up of the MDEV-13407 fix.
As part of this rename operation, we were unnecessarily parsing the
current SQL statement, because the same rename operation could also be
executed as part of ALTER TABLE via ha_innobase::rename_table().
If an ALTER TABLE statement was being refused due to an incorrectly formed
FOREIGN KEY constraint, then it could happen that the renaming of the hidden
internal tables for FULLTEXT INDEX could also fail, triggering a host of
error log messages, and causing a subsequent table-rebuilding ALTER TABLE
operation to fail due to the tablespace already existing.
innobase_rename_table(), row_rename_table_for_mysql(): Add the parameter
use_fk for suppressing the parsing of FOREIGN KEY constraints. It
will only be passed as use_fk=true by ha_innobase::rename_table(),
which can be invoked as part of ALTER TABLE...ALGORITHM=COPY.
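A minimal sketch of the intended call pattern, with a hypothetical, simplified signature (the real functions take more arguments): FOREIGN KEY parsing is requested only for the rename issued through the handler interface, not for internal renames of the hidden FULLTEXT tables.

  #include <cstdio>

  // Hypothetical, simplified stand-in for the rename path.
  bool rename_table(const char *old_name, const char *new_name, bool use_fk)
  {
    if (use_fk) {
      // Parse FOREIGN KEY clauses of the SQL statement being executed
      // (only meaningful for ALTER TABLE ... ALGORITHM=COPY).
    }
    std::printf("rename %s -> %s\n", old_name, new_name);
    return true;
  }

  int main()
  {
    rename_table("test/t1", "test/t2", true);            // handler rename path
    rename_table("test/fts_aux", "test/tmp_name", false); // internal rename, no FK parsing
  }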
The problem here is that EITS does not calculate statistics for the partitions of a table.
So a temporary solution is to not read EITS statistics for partitioned tables.
Also, disable reading of EITS statistics for columns that participate in the partition list of a table.
merge_role_db_privileges() was remembering pointers into the Dynamic_array
acl_dbs and was later using them while pushing more elements into the
array. But pushing can cause a realloc, which can invalidate all pointers.
Fix: remember and use indexes of elements, not pointers.
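The underlying hazard is generic. A minimal C++ sketch, using std::vector as a stand-in for Dynamic_array, of the dangling-pointer problem and the index-based fix:

  #include <cstddef>
  #include <vector>

  struct ACL_DB { int access = 0; };   // simplified stand-in for the real element type

  // Anti-pattern: a pointer into a growable array can dangle after push_back()
  // triggers a reallocation (assumes the array is non-empty).
  void merge_unsafe(std::vector<ACL_DB> &acl_dbs)
  {
    ACL_DB *first = &acl_dbs[0];     // pointer into the array
    acl_dbs.push_back(ACL_DB());     // may reallocate; 'first' may now dangle
    first->access |= 1;              // undefined behaviour after a reallocation
  }

  // Fix: remember the index; access through the array stays valid
  // across reallocations.
  void merge_safe(std::vector<ACL_DB> &acl_dbs)
  {
    std::size_t first = 0;           // index survives reallocation
    acl_dbs.push_back(ACL_DB());
    acl_dbs[first].access |= 1;      // always refers to the current storage
  }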
- This is a regression of commit b26e603aeb. While dropping
the incompletely created table, InnoDB shouldn't consider that operation as a non-atomic one.
When the WITH clause of a query contains a recursive CTE that is not used,
processing of EXPLAIN for this query does not require optimization
of the unit specifying this CTE. In this case, if 'derived' is the
TABLE_LIST object created for this CTE, then derived->derived_result is NULL
and any assignment to derived->derived_result->table causes a crash.
After fixing this problem in the code of st_select_lex_unit::prepare()
EXPLAIN for such a query worked without crashes. Yet an execution
plan for the unused recursive CTE still appeared in the output. The cause of this
was an incorrect condition used in JOIN::save_explain_data_intern() to
determine whether the CTE was to be optimized or not. A similar condition was
used in select_describe(), and this patch corrects it as well.
Remove unused parameter from PlugSubSet
modified: storage/connect/global.h
modified: storage/connect/plugutil.cpp
modified: storage/connect/jsonudf.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/user_connect.cc
- Fix a bug making column catalog XML tables fail
modified: storage/connect/tabxml.cpp
- Comment out wrong message
modified: storage/connect/ha_connect.cc
- Update error message when sorting an ODBC table fails
modified: storage/connect/tabodbc.cpp
- Add an error message when getting an address
from an OEM fails.
modified: storage/connect/reldef.cpp
- Make some modifications useful for OEM module writing
Export discovery functions for CSV, JDBC and XML
Remove unneeded include from tabjson.h
Move TDBXML::data_charset function from header file to source
modified: storage/connect/tabfmt.h
modified: storage/connect/tabjson.h
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
- Update test result
modified: storage/connect/mysql-test/connect/r/jdbc_oracle.result
There was an incorrect check of the field counts of MariaDB and InnoDB
tables: corruption was reported when there was no corruption.
Also, a warning message printed incorrect field numbers for both MariaDB and InnoDB
tables.
ha_innobase::open(): fixed the check and the message
dict_create_add_foreigns_to_dictionary(): Do not commit the transaction.
The operation can still fail in dict_load_foreigns(), and we want
to be able to roll back the transaction.
create_table_info_t::create_table(): Never reset m_drop_before_rollback,
and never commit the transaction. We use a single point of rollback
in ha_innobase::create(). Merge the logic from
row_table_add_foreign_constraints().
This is a regression due to MDEV-17816.
When creating a table fails, we must roll back the dictionary
transaction. Because the rollback may rename tables, and because
InnoDB lacks proper undo logging for CREATE operations, we must
drop the incompletely created table before rolling back the
transaction, which could include a RENAME operation.
But, we must not blindly drop the table by name; after all,
the operation could have failed because another table by the
same name already existed.
create_table_info_t::m_drop_before_rollback: A flag that is set
if the table needs to be dropped before transaction rollback.
create_table_info_t::create_table(): Remove some duplicated
error handling.
ha_innobase::create(): On error, only drop the table if it was
actually created.
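A minimal sketch of the resulting control flow, with hypothetical, simplified types (the real code sets m_drop_before_rollback inside create_table_info_t and drops and rolls back in ha_innobase::create()):

  // Hypothetical, simplified model of the create path.
  struct create_ctx {
    bool drop_before_rollback = false;   // stands in for m_drop_before_rollback
  };

  bool do_create(create_ctx &ctx, bool fail_after_create)
  {
    // ... dictionary operations; the table object now exists ...
    ctx.drop_before_rollback = true;
    // ... later steps (indexes, FOREIGN KEY constraints) may still fail ...
    return !fail_after_create;
  }

  int handler_create(bool fail_after_create)
  {
    create_ctx ctx;
    if (!do_create(ctx, fail_after_create)) {
      if (ctx.drop_before_rollback) {
        // Drop the incompletely created table first: CREATE is not properly
        // undo-logged, and the rollback may have to RENAME other tables.
      }
      // Single point of rollback of the dictionary transaction.
      return 1;
    }
    // Commit the dictionary transaction.
    return 0;
  }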
fil_space_t::add(): Replaces fil_node_create(), fil_node_create_low().
Let the caller pass fil_node_t::handle, to avoid having to close and
re-open files.
fil_node_t::read_page0(): Refactored from fil_node_open_file().
Read the first page of a data file.
fil_node_open_file(): Open the file only once.
srv_undo_tablespace_open(): Set the file handle for the opened
undo tablespace. This should ensure that ut_ad(file->is_open())
no longer fails in recv_add_trim().
xtrabackup_backup_func(): Remove some dead code.
xb_fil_cur_open(): Open files only if needed. Undo tablespaces
should already have been opened.
When dropping a partially created table due to failure,
use SQLCOM_TRUNCATE instead of SQLCOM_DROP_DB, so that
no foreign key constraints will be touched. If any
constraints were added as part of the creation, they would
be reverted as part of the transaction rollback.
We need an explicit call to row_drop_table_for_mysql(),
because InnoDB does not do proper undo logging for CREATE TABLE,
but would only drop the table at the end of the rollback.
This would not work if the transaction combines both
RENAME and CREATE, like TRUNCATE now does.
If a table had a KEY_BLOCK_SIZE attribute, but no ROW_FORMAT,
it would be created as ROW_FORMAT=COMPRESSED in InnoDB.
However, TRUNCATE TABLE would lose the KEY_BLOCK_SIZE attribute
and create the table with the innodb_default_row_format (DYNAMIC).
This is a regression that was introduced by MDEV-13564.
update_create_info_from_table(): Copy also KEY_BLOCK_SIZE.
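A minimal sketch of the kind of fix, with hypothetical, simplified structures (the real code copies the attribute from the table share into the create info used to re-create the table):

  // Hypothetical, simplified stand-ins for the create info and table share.
  struct CreateInfo { unsigned row_format = 0; unsigned key_block_size = 0; };
  struct TableShare { unsigned row_format = 0; unsigned key_block_size = 0; };

  // When TRUNCATE re-creates the table from its existing definition, the
  // create info must carry over KEY_BLOCK_SIZE as well; otherwise a table
  // with KEY_BLOCK_SIZE but no explicit ROW_FORMAT loses its COMPRESSED
  // format and is rebuilt with the default row format.
  void update_create_info_from_table(CreateInfo &ci, const TableShare &s)
  {
    ci.row_format     = s.row_format;
    ci.key_block_size = s.key_block_size;   // the part that was missing
  }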
The error handling in the MDEV-13564 TRUNCATE TABLE was broken
when an error occurred during table creation.
row_create_index_for_mysql(): Do not drop the table on error.
fts_create_one_common_table(), fts_create_one_index_table():
Do drop the table on error.
create_index(), create_table_info_t::create_table():
Let the caller handle the index creation errors.
ha_innobase::create(): If create_table_info_t::create_table()
fails, drop the incomplete table, roll back the transaction,
and finally return an error to the caller.
lock_rec_queue_validate(): Assert page_rec_is_leaf(rec), except when
the record is a page infimum or supremum.
lock_rec_validate_page(): Relax the assertion that failed.
The assertion was reachable when the record lock bitmap was empty.
lock_rec_insert_check_and_lock(): Assert page_is_leaf().
The problem was that the controlling connection, i.e. the connection that
executed the query SET GLOBAL wsrep_reject_queries = ALL_KILL;,
was also killed, but the server would still try to send the result of that
query to the controlling connection, resulting in the assertion
mysqld: /home/jan/mysql/10.2-sst/include/mysql/psi/mysql_socket.h:738: inline_mysql_socket_send: Assertion `mysql_socket.fd != -1' failed.
because the socket had been closed when the controlling connection was closed.
wsrep_close_client_connections()
Do not close the controlling connection; instead of
wsrep_close_thread(), do a soft kill via THD::awake().
wsrep_reject_queries_update()
Call wsrep_close_client_connections() using the current thd.
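A minimal model of the revised close logic, with hypothetical, simplified types (the real code walks the server's THD list and uses THD::awake() to request the kill):

  #include <vector>

  // Hypothetical, simplified stand-in for a client connection (THD).
  struct Conn {
    bool soft_killed = false;
    void awake_kill() { soft_killed = true; }   // victim notices the flag and exits
  };

  // Skip the controlling connection (the one that issued
  // SET GLOBAL wsrep_reject_queries = ALL_KILL) and soft-kill the rest,
  // so that the result of the SET statement can still be sent back.
  void close_client_connections(std::vector<Conn*> &conns, Conn *controlling)
  {
    for (Conn *c : conns) {
      if (c == controlling)
        continue;          // keep the controlling connection's socket open
      c->awake_kill();     // soft kill instead of closing the socket directly
    }
  }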