No need to lowercase table names on case-sensitive file systems, as the
cache won't contain the 'lowercased' table anyway. And it prevents the
UPPERCASE.frm from being deleted.
Problem:
=======
Mariabackup incremental prepare creates a new tablespace when it encounters
one, setting the initial size to FIL_IBD_FILE_INITIAL_SIZE (4 pages).
But while applying the redo log, it may try to access the 5th page,
which leads to an out-of-tablespace error.
Fix:
===
While parsing redo log records, track FSP_SIZE in recv_spaces for the
respective space id. Assign recv_size to the tablespace when it
is loaded, and extend the tablespace up to recv_size while applying
the redo log records.
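The idea, as a minimal standalone sketch (a std::map stands in for
recv_spaces; the names below are illustrative, not the actual InnoDB
definitions):

  #include <algorithm>
  #include <cstdint>
  #include <map>

  /* Largest FSP_SIZE seen in the redo log, per tablespace id. */
  static std::map<uint32_t, uint32_t> recv_space_sizes; /* id -> pages */

  /* Called while parsing a redo log record that writes FSP_SIZE. */
  void track_fsp_size(uint32_t space_id, uint32_t size_in_pages)
  {
    uint32_t &recv_size = recv_space_sizes[space_id];
    recv_size = std::max(recv_size, size_in_pages);
  }

  /* Called when the tablespace is loaded during prepare: grow it to
  recv_size instead of leaving it at the initial 4 pages, so that redo
  records touching page 5 and beyond can be applied. */
  void apply_recv_size(uint32_t space_id, uint32_t &current_pages)
  {
    auto it = recv_space_sizes.find(space_id);
    if (it != recv_space_sizes.end() && it->second > current_pages)
      current_pages = it->second; /* extend the tablespace */
  }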
Do not try to write the ER_SHUTDOWN error message to the socket when it has been forcefully closed by the shutdown.
This avoids a race condition (an attempt to write to a closed socket if the connection shuts down by itself).
Analysis:
========
Increasing the length of an indexed VARCHAR column is not an instant operation for
InnoDB.
Fix:
===
- Introduce the new handler flag 'Alter_inplace_info::ALTER_COLUMN_INDEX_LENGTH' to
indicate that the index length differs due to a change of the column length.
- InnoDB treats the ALTER_COLUMN_INDEX_LENGTH flag as an instant operation.
This is a port of the MySQL fix:
commit 913071c0b16cc03e703308250d795bc381627e37
Author: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
Date: Wed May 30 14:54:46 2018 +0530
BUG#26848813: INDEXED COLUMN CAN'T BE CHANGED FROM VARCHAR(15)
TO VARCHAR(40) INSTANTANEOUSLY
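The shape of the check, as a simplified sketch (the structures below are
minimal stand-ins for the server types, and the flag value is assumed):

  #include <cstdint>

  /* Minimal stand-ins for the server structures, only to show where the
  new flag would be raised. */
  struct KeyPart   { uint16_t fieldnr; uint16_t length; };
  struct AlterInfo { uint64_t handler_flags; };

  static const uint64_t ALTER_COLUMN_INDEX_LENGTH = 1ULL << 40; /* value assumed */

  void compare_key_part(const KeyPart &old_part, const KeyPart &new_part,
                        AlterInfo *ha_alter_info)
  {
    /* The same column is indexed and only the stored key length differs:
    this is what happens when an indexed VARCHAR is lengthened, and
    InnoDB can accept it as an instant (metadata-only) change. */
    if (old_part.fieldnr == new_part.fieldnr &&
        old_part.length  != new_part.length)
      ha_alter_info->handler_flags |= ALTER_COLUMN_INDEX_LENGTH;
  }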
If the TC log did not provide a list of XIDs to recover, the
commit by XID was skipped during wsrep recovery if binlog emulation
was on. However, with wsrep we want to commit every prepared transaction
with assigned wsrep XID since the transaction has already been
committed in the cluster.
Added a special condition to always proceed to commit by XID in
xarecover_handlerton() if binlog is off and the recovered transaction
has wsrep XID.
Clear wsrep XID in innobase_rollback_by_xid() for recovered wsrep
transaction in order to avoid resetting XID storage when rolling back
wsrep transaction during recovery.
Sort wsrep XIDs read from the storage engine in ascending order and
verify that the range is continuous during crash recovery. If binlog is off,
commit all recovered transactions in the continuous seqno range. This is safe
because all transactions with a wsrep XID have been certified and must be
committed in the cluster. On the other hand, if binlog is on, respect binlog
as the transaction coordinator in order to avoid missing transactions in the
binlog that have been committed in the storage engine.
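The seqno check described above, as a minimal sketch (plain int64_t seqnos
already extracted from the XIDs; the wsrep XID decoding itself is left out):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  /* Sort the recovered wsrep seqnos and verify that the range is
  continuous.  Returns true when every seqno between the smallest and
  largest value is present, i.e. the whole range can be committed when
  binlog is off. */
  bool wsrep_seqno_range_is_continuous(std::vector<int64_t> seqnos)
  {
    if (seqnos.empty())
      return true;
    std::sort(seqnos.begin(), seqnos.end());
    for (size_t i = 1; i < seqnos.size(); i++)
      if (seqnos[i] != seqnos[i - 1] + 1)
        return false; /* gap found: do not blindly commit the range */
    return true;
  }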
Backported wsrep-recover test from 10.4 to test the wsrep recovery
after database shutdown or crash. Renamed the test to wsrep-recover-v25
to avoid conflicts with existing test in 10.4 and to provide coverage
for wsrep-API v25 compatible behavior.
The test case in 10.2 is simpler because the out-of-order prepare
step cannot happen: the commit order critical section is grabbed
before the prepare phase.
The test is not recorded since it does not produce an expected result.
Remove fil_node_t::sync_event.
I had a discussion with kernel folks and they said it's safe to call
fsync() simultaneously, at least on VFS and ext4. So initially I wanted
to disable the check for recent Linux, but then I realized the code is buggy.
Consider a case when one thread is inside fsync() and two others are
waiting inside os_event. The first thread, after fsync(), calls
os_event_set(), which is a broadcast! So both waiting threads will wake
up and may call fsync() at the same time.
One fix would be to add notify_one() functionality to os_event, but I
decided to remove the incorrect check completely. Note that it works for
one waiting thread but not for more than one.
IMO it's ok to work around existing bugs, but there is not much sense in
working around possible(!) bugs, as this code does.
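The pattern in question, as a small illustration with std::condition_variable
standing in for os_event (this reproduces the race described above; it is not
the InnoDB code):

  #include <condition_variable>
  #include <mutex>
  #include <unistd.h> /* fsync() */

  std::mutex              mtx;
  std::condition_variable cond;
  bool flush_in_progress = false;

  void flush(int fd)
  {
    std::unique_lock<std::mutex> lk(mtx);
    if (flush_in_progress)
    {
      /* A flush is already running: wait for it to finish.  With one
      waiter this is fine.  With two waiters, notify_all() (what
      os_event_set() does) wakes both, and both fall through below and
      call fsync() concurrently. */
      cond.wait(lk);
    }
    flush_in_progress = true;
    lk.unlock();
    fsync(fd);
    lk.lock();
    flush_in_progress = false;
    cond.notify_all(); /* broadcast, like os_event_set() */
  }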
fil_space_t::is_in_rotation_list(), fil_space_t::is_in_unflushed_spaces():
Replace redundant bool fields with member functions.
fil_node_t::needs_flush: Replaces fil_node_t::modification_counter and
fil_node_t::flush_counter. We need to know whether there _are_ some
unflushed writes and we do not need to know _how many_ writes.
fil_system_t::modification_counter: Remove as not needed.
Even if we needed fil_node_t::modification_counter, every file
could have its own counter that would be incremented on each write.
fil_system_t::modification_counter is a global modification counter
for all files. It was incremented on every write. But whether some
file was flushed or not is internal fil_node_t state, which
makes fil_system_t::modification_counter useless.
Closes #1061
When innobase_allocate_row_for_vcol() returns true (for failure),
it may already have invoked mem_heap_create(). However, some callers
would fail to invoke mem_heap_free().
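A self-contained sketch of the corrected caller pattern (heap_create(),
heap_free() and allocate_row() are stand-ins for mem_heap_create(),
mem_heap_free() and innobase_allocate_row_for_vcol(); the real signatures
differ):

  /* Stand-in heap API for illustration only. */
  struct heap_t { int dummy; };
  static heap_t *heap_create()        { return new heap_t(); }
  static void    heap_free(heap_t *h) { delete h; }

  /* Models innobase_allocate_row_for_vcol(): returns true on failure,
  possibly after the heap has already been created. */
  static bool allocate_row(heap_t **heap)
  {
    *heap = heap_create();
    return true; /* pretend a later allocation step failed */
  }

  int caller()
  {
    heap_t *heap = nullptr;
    if (allocate_row(&heap)) /* failure reported ... */
    {
      if (heap != nullptr)
        heap_free(heap);     /* ... but the heap may already exist:
                                free it instead of leaking it (the fix) */
      return 1;
    }
    heap_free(heap);
    return 0;
  }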
Problem:
========
The server fails to notify the engine, by not setting the ADD_PK_INDEX and
DROP_PK_INDEX flags, when there is
i) a change in the candidate for the primary key, or
ii) a new candidate for the primary key.
Fix:
====
The server sets the ADD_PK_INDEX and DROP_PK_INDEX flags while doing ALTER
for the above problematic cases.
PROBLEM
-------
1. We insert a base column entry for which the function generating the
virtual column produces an invalid value, but we go ahead
and insert it because of the IGNORE keyword.
2. We then delete this record, making it delete-marked in InnoDB.
If we try to insert another record with the same PK as the deleted
record, and the record has not been purged yet, then we try to
un-delete-mark this record and build an update vector with the previous
and updated values; while calculating the value of the virtual column
we get an error from the server saying it cannot be calculated from the
base columns.
InnoDB assumes that innobase_get_computed_value() should always return
a valid value for the base columns present in the row. The failure of
this call was not handled, so we were crashing.
FIX
---
Handle the failure of innobase_get_computed_value() instead of crashing.
The row-based slave applier could not parse the table id correctly when
the value exceeded the maximum of a 32-bit unsigned int.
The reason turned out to be that the placeholder for the parsed value
was sized as 4 bytes.
The type is fixed to ulonglong.
Additionally, the patch works around the 4-byte size of
Rows_log_event::m_table_id on 32-bit platforms. In case the last_table_id
value overflows the 4-byte maximum, the zero value for m_table_id is never
generated and the first wrapped-around value is one; this is achieved by
excluding UINT_MAX32 + 1 from TABLE_SHARE::table_map_id.
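A small sketch of that wrap-around rule (last_table_id is modelled as a
plain atomic counter here, not the actual server variable):

  #include <atomic>
  #include <cstdint>

  static std::atomic<uint64_t> last_table_id(0);
  static const uint64_t UINT_MAX32 = 0xFFFFFFFFULL;

  /* Assign the next table map id, skipping any value whose low 32 bits
  are zero.  On 32-bit builds Rows_log_event::m_table_id holds only
  4 bytes, so an id must never truncate to the reserved value 0; the
  first wrapped-around id becomes 1. */
  uint64_t next_table_map_id()
  {
    uint64_t id = ++last_table_id;
    if ((id & UINT_MAX32) == 0)
      id = ++last_table_id;
    return id;
  }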
dict_sys_get_size(): Replace the time-consuming loop with
a crude estimate that can be computed without holding any mutex.
Even before dict_sys->size was removed in MDEV-13325,
not all memory allocations by the InnoDB data dictionary cache
were being accounted for. One example is foreign key constraints.
Another example is virtual column metadata, starting with 10.2.
On startup, InnoDB checks whether the files "ib_logfileN"
(for N from 1 to 100) exist and whether they are readable.
A non-existent file aborted the scan.
A directory instead of a file made InnoDB fail.
Now it treats "directory exists" as "file doesn't exist".
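The directory case can be handled with an ordinary stat() probe, roughly
like this sketch (not the actual os_file code):

  #include <string>
  #include <sys/stat.h>
  #include <unistd.h>

  enum LogFileProbe { LOG_FILE_MISSING, LOG_FILE_PRESENT, LOG_FILE_UNREADABLE };

  /* Probe one "ib_logfileN" candidate during the startup scan. */
  LogFileProbe probe_log_file(const std::string &path)
  {
    struct stat st;
    if (stat(path.c_str(), &st) != 0)
      return LOG_FILE_MISSING;     /* no such file: end of the scan */
    if (S_ISDIR(st.st_mode))
      return LOG_FILE_MISSING;     /* directory: treat as "file doesn't exist" */
    if (access(path.c_str(), R_OK) != 0)
      return LOG_FILE_UNREADABLE;  /* exists but cannot be read */
    return LOG_FILE_PRESENT;
  }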
When InnoDB is invoking posix_fallocate() to extend data files, it
was missing a call to fsync() to update the file system metadata.
If file system recovery is needed, the file size could be incorrect.
When the setting innodb_flush_method=O_DIRECT_NO_FSYNC
that was introduced in MariaDB 10.0.11 (and MySQL 5.6) is enabled,
InnoDB would wrongly skip fsync() after extending files.
Furthermore, the merge commit d8b45b0c00
inadvertently removed the XtraDB error checking for posix_fallocate(),
which this fix restores.
fil_flush(): Add the parameter bool metadata=false to request that
fil_buffering_disabled() be ignored.
fil_extend_space_to_desired_size(): Invoke fil_flush() with the
extra parameter. After successful posix_fallocate(), invoke
os_file_flush(). Note: The bookkeeping for fil_flush() would not be
updated in the posix_fallocate() code path, so the "redundant"
fil_flush() should be a no-op.
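A minimal sketch of the extension path (not the actual
fil_extend_space_to_desired_size() code; error handling is simplified):

  #include <cstdio>
  #include <fcntl.h>
  #include <unistd.h>

  /* Extend a data file with posix_fallocate() and make the new size
  durable.  The error check restores what the merge had dropped, and
  the explicit fsync() persists the file system metadata even when
  innodb_flush_method=O_DIRECT_NO_FSYNC would otherwise skip it. */
  bool extend_file(int fd, off_t current_size, off_t new_size)
  {
    int err = posix_fallocate(fd, current_size, new_size - current_size);
    if (err != 0)
    {
      fprintf(stderr, "posix_fallocate() failed: error %d\n", err);
      return false;
    }
    return fsync(fd) == 0; /* flush the metadata (new file size) */
  }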
thd_destructor_proxy() may miss the abort signal if innobase_end() is running
concurrently, which causes the server to hang in pthread_join() on shutdown.
The problem was that setting the abort flag wasn't protected by the mutex:
proxy thr: while (!myvar->abort)
end thr: running->abort = 1;
end thr: mysql_cond_broadcast(...);
proxy thr: mysql_cond_wait(...); // nobody to awake it
end thr: pthread_join(...); // waits for proxy thr
Also made main.mysqld_option_err reentrant.
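The fix follows the standard condition-variable pattern: the flag is set
and broadcast under the same mutex that the waiter holds while checking it.
As a sketch (std::mutex/std::condition_variable in place of the
mysql_mutex/mysql_cond wrappers):

  #include <condition_variable>
  #include <mutex>

  std::mutex              thd_lock;  /* stands in for the THD mutex */
  std::condition_variable thd_cond;
  bool abort_requested = false;

  /* Proxy thread: check the flag only while holding the mutex, so a
  broadcast sent between the check and the wait cannot be lost. */
  void proxy_thread_wait()
  {
    std::unique_lock<std::mutex> lk(thd_lock);
    while (!abort_requested)
      thd_cond.wait(lk);
  }

  /* Shutdown path: set the flag and broadcast under the same mutex. */
  void request_abort()
  {
    std::lock_guard<std::mutex> lk(thd_lock);
    abort_requested = true;
    thd_cond.notify_all();
  }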
In the function QUICK_RANGE_SELECT::init_ror_merged_scan we create a separate handler if the handler in
head->file cannot be reused. The flag free_file tells us whether we have a separate handler or not.
There are cases where we create a handler and then hit a failure (e.g. due to a running ALTER),
and then we have to revert the handler back to the original one. The code does that,
but it did not reset the flag 'free_file' in this case.
Also backported f2c418079d.
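The failure path then looks roughly like the following sketch (member names
taken from QUICK_RANGE_SELECT; the surrounding types are minimal stand-ins):

  /* Minimal stand-ins, only to show the shape of the fix. */
  struct handler  { virtual ~handler() {} };
  struct TableRef { handler *file; };

  struct QuickSelectSketch
  {
    TableRef *head;
    handler  *file;      /* handler actually used by this quick select  */
    bool      free_file; /* true when 'file' was created separately     */

    /* Revert to the shared handler after a failure during init. */
    void revert_to_shared_handler()
    {
      if (free_file)
      {
        delete file;      /* drop the separately created handler        */
        file = head->file;/* fall back to the shared handler            */
        free_file = false;/* the fix: we no longer own a private handler */
      }
    }
  };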