On macOS, a pthread id is a pointer to the struct _opaque_pthread_t type,
which requires an explicit cast to ulint (which in turn is size_t).
The Debug build was failing with Clang 9.1.0 on macOS 10.13.4:
sync0policy.h:53:4: error: cannot initialize a member subobject of type 'ulint'
(aka 'unsigned long') with an rvalue of type 'os_thread_id_t' (aka '_opaque_pthread_t *')
m_thread_id(os_thread_id_t(ULINT_UNDEFINED))
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sync0policy.h:79:4: error: cannot initialize a parameter of type 'int64' (aka 'long long') with
an rvalue of type 'os_thread_id_t' (aka '_opaque_pthread_t *')
my_atomic_storelint(&m_thread_id, os_thread_get_curr_id());
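A minimal, self-contained sketch of the explicit cast involved (illustrative
only, not the actual sync0policy.h patch):

  #include <pthread.h>
  #include <cstddef>

  typedef std::size_t ulint;  // as noted above, ulint is size_t

  int main() {
    // On macOS, pthread_t points to the opaque _opaque_pthread_t struct, so
    // converting it to an integer type needs an explicit cast; the implicit
    // conversion is what triggered the Clang errors quoted above.
    pthread_t self = pthread_self();
    ulint id = reinterpret_cast<ulint>(self);
    (void) id;
    return 0;
  }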
The remote users need the SUPER privilege because by default Spider sends a
'SET SQL_LOG_OFF' statement to the data nodes. This is controlled by the
spider_internal_sql_log_off configuration setting on the Spider node, which
can only be set to 0 or 1, with a default value of 1.
I have fixed the problem by changing this configuration setting so that if it
is NOT SET, which is the most likely case, the Spider node DOES NOT SEND the
'SET SQL_LOG_OFF' statement to the data nodes. However if the
spider_internal_sql_log_off setting IS EXPLICITLY SET to either 0 or 1, then
the Spider node DOES SEND the 'SET SQL_LOG_OFF' statement, requiring a remote
user with the SUPER privilege. The Spider documentation will be updated to
reflect this change.
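A self-contained sketch of the intended behaviour; the tri-state encoding and
all identifiers below are assumptions for illustration, not Spider's actual
code:

  #include <iostream>
  #include <string>

  // Assumed encoding: -1 = not set (the new default), 0 or 1 = explicitly set.
  int spider_internal_sql_log_off = -1;

  void send_to_data_node(const std::string& stmt) { std::cout << stmt << '\n'; }

  void init_data_node_session() {
    if (spider_internal_sql_log_off == -1)
      return;  // not set: send nothing, so the remote user does not need SUPER
    // explicitly set to 0 or 1: send the statement, which requires SUPER
    // on the data node
    send_to_data_node("SET SQL_LOG_OFF = "
                      + std::to_string(spider_internal_sql_log_off));
  }

  int main() { init_data_node_session(); }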
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 72f0efa on branch bb-10.3-MDEV-15697
The remote users need the SUPER privilege because by default Spider sends a
'SET SQL_LOG_OFF' statement to the data nodes. This is controlled by the
spider_internal_sql_log_off configuration setting on the Spider node, which
can only be set to 0 or 1, with a default value of 1.
I have fixed the problem by changing this configuration setting so that if it
is NOT SET, which is the most likely case, the Spider node DOES NOT SEND the
'SET SQL_LOG_OFF' statement to the data nodes. However if the
spider_internal_sql_log_off setting IS EXPLICITLY SET to either 0 or 1, then
the Spider node DOES SEND the 'SET SQL_LOG_OFF' statement, requiring a remote
user with the SUPER privilege. The Spider documentation will be updated to
reflect this change.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged:
Branch bb-10.3-MDEV-15697 into 10.3
Problem:
=======
The InnoDB master thread encounters the shutdown state SRV_SHUTDOWN_FLUSH_PHASE
when innodb_force_recovery >= 2 and the master thread is scheduled slowly
during shutdown.
Fix:
====
The master thread itself is not needed when innodb_force_recovery >= 2,
so do not create it in that case.
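A minimal sketch of the guard (self-contained and simplified; not the actual
InnoDB startup code):

  #include <thread>

  static unsigned long srv_force_recovery = 0;  // innodb_force_recovery value

  static void master_thread() { /* periodic background work elided */ }

  int main() {
    std::thread master;
    if (srv_force_recovery < 2)            // skip the master thread when >= 2
      master = std::thread(master_thread);
    // ... the server runs ...
    if (master.joinable())
      master.join();
    return 0;
  }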
The problem is hard to repeat, and I failed to create a deterministic
test case. Online index creation creates stubs for to-be-created indexes.
If index creation fails, we could remove these stubs while locks exist
in the indexes. (This would require that the index creation was completed,
and a concurrent DML operation acquired a lock on a record in the
uncommitted index. If a duplicate key error occurs in an uncommitted
index, the error will be reported for the CREATE UNIQUE INDEX, not for
the DML operation that tried to insert the duplicate.)
dict_table_try_drop_aborted(), row_merge_drop_indexes(): If transactional
locks exist on the table, keep the table->indexes intact.
ibuf_restore_pos(): Do not issue any messages if the tablespace
is being dropped or truncated. In MariaDB 10.2, TRUNCATE TABLE
would not change the tablespace ID, and therefore the tablespace
would be found, even though TRUNCATE is in progress.
Furthermore, do not commit suicide (deliberately crash the server) if
restoring the change buffer cursor fails. The worst that could happen is that
a secondary index becomes corrupted due to an incomplete change buffer merge.
The following variables are used in this project, but they are set to NOTFOUND.
LZ4_LIBS
The reason for the failure is that pkg_check_modules does not guarantee that
the <prefix>_LIBRARY_DIRS variable is set, according to its documentation.
When it is not set, we would force find_library to look in an empty path
and thus fail to correctly find LZ4_LIBS, although pkg_check_modules
had previously discovered that the library is installed.
To fix the problem and still keep the logic of first following
LIBLZ4_LIBRARY_DIRS and *then* looking at other paths, we call find_library
twice. This is the recommended approach, according to the CMake 3.11
documentation.
In reverse-ordered column families, if one wants to start reading at the
logical end of the index, they should Seek() to a key value that is not
covered by the index. This may (and typically does) prevent use of a bloom
filter.
The calls to setup_scan_iterator() that are made for index and table scans
didn't take this into account and passed eq_cond_len=INDEX_NUMBER_SIZE.
Fixed them to compute and pass the correct eq_cond_len.
Also, removed an incorrect assert in ha_rocksdb::setup_iterator_bounds.
If creating a secondary index fails (typically, ADD UNIQUE INDEX fails
due to duplicate key), it is possible that concurrently running UPDATE
or DELETE will access the index stub and hit the debug assertion.
It does not make any sense to keep updating an uncommitted index whose
creation has failed.
dict_index_t::is_corrupted(): Replaces dict_index_is_corrupted().
Also take online_status into account.
Replace some calls to dict_index_is_clust() with calls to
dict_index_t::is_primary().
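A simplified, self-contained sketch of the idea; the field names and constants
below are assumptions for illustration, not the actual InnoDB definitions:

  #include <cstdint>

  enum online_index_status {
    ONLINE_INDEX_COMPLETE, ONLINE_INDEX_CREATION,
    ONLINE_INDEX_ABORTED, ONLINE_INDEX_ABORTED_DROPPED
  };

  struct dict_index_t {
    static constexpr uint32_t DICT_CORRUPT = 1U << 4;  // assumed flag bit
    uint32_t type = 0;
    online_index_status online_status = ONLINE_INDEX_COMPLETE;

    // An index whose online creation was aborted is treated like a corrupted
    // one, so concurrent DML stops updating the failed index stub.
    bool is_corrupted() const {
      return (type & DICT_CORRUPT) || online_status >= ONLINE_INDEX_ABORTED;
    }
  };

  int main() {
    dict_index_t index;
    index.online_status = ONLINE_INDEX_ABORTED;
    return index.is_corrupted() ? 0 : 1;
  }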
Introduced new alter algorithm types called NOCOPY and INSTANT for
the inplace alter operation.
NOCOPY - The algorithm refuses any alter operation that would
rebuild the clustered index. It is a subset of the INPLACE algorithm.
INSTANT - The algorithm allows any alter operation that would
modify only metadata. It is a subset of the NOCOPY algorithm.
Introduce a new variable called alter_algorithm. The values are
DEFAULT(0), COPY(1), INPLACE(2), NOCOPY(3), INSTANT(4).
Add a message to deprecate the old_alter_table variable and make it an alias
for the alter_algorithm variable.
The alter_algorithm variable for the slave is always set to the default.
ha_innobase::commit_inplace_alter_table(): Defer the freeing of ctx->trx
until after the operation has been successfully committed. In this way,
rollback on a partitioned table will be possible.
rollback_inplace_alter_table(): Handle ctx->new_table == NULL when
ctx->trx != NULL.
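An illustrative, self-contained sketch of the ordering (simplified; not the
actual handler code):

  #include <memory>

  struct trx_t { void rollback() {} void commit() {} };
  struct inplace_ctx { std::unique_ptr<trx_t> trx = std::make_unique<trx_t>(); };

  bool commit_inplace_alter_table(inplace_ctx& ctx, bool ok_to_commit) {
    if (!ok_to_commit) {
      ctx.trx->rollback();   // ctx->trx must still be alive on this path
      return false;
    }
    ctx.trx->commit();
    ctx.trx.reset();         // free the transaction only after a successful commit
    return true;
  }

  int main() {
    inplace_ctx ctx;
    return commit_inplace_alter_table(ctx, true) ? 0 : 1;
  }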
The performance schema likely/unlikely hints assume that the performance
schema is enabled by default, which causes a performance degradation for
default installations that don't have the performance schema enabled.
Fixed by changing the likely/unlikely in PS to assume it's
not enabled. This can be changed by compiling with
-DPSI_ON_BY_DEFAULT
Other changes:
- Added psi_likely/psi_unlikely that is depending on
PSI_ON_BY_DEFAULT. psi_likely() is assumed to be true
if PS is enabled.
- Added likely/unlikely to some PS interface code.
- Moved pfs_enabled to mysys (was initialized but not used before)
- Added "if (pfs_likely(pfs_enabled))" around calls to PS to avoid
an extra call if PS is not enabled.
- Moved checking flag_global_instrumentation before other flags
  to speed up the case when PS is not enabled.
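A sketch of the shape of such hints (assumed macro definitions using the
GCC/Clang __builtin_expect builtin; the real mysys macros may differ):

  // Branch-prediction hints that flip their bias depending on whether the
  // performance schema is expected to be enabled at runtime.
  #ifdef PSI_ON_BY_DEFAULT
  #define psi_likely(x)   __builtin_expect(!!(x), 1)  // assume PS is enabled
  #define psi_unlikely(x) __builtin_expect(!!(x), 0)
  #else
  #define psi_likely(x)   __builtin_expect(!!(x), 0)  // assume PS is disabled
  #define psi_unlikely(x) __builtin_expect(!!(x), 1)
  #endif

  int pfs_enabled = 0;  // assumed mysys flag, as described above

  int main() {
    if (psi_likely(pfs_enabled))
      return 1;  // would call into the performance schema here
    return 0;
  }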
If the tablespace is dropped or truncated after the
space->is_stopping() check in fil_crypt_get_page_throttle_func(),
we would proceed to request the page, and eventually report a fatal
error.
buf_page_get_gen(): Do not retry reading if mode==BUF_GET_POSSIBLY_FREED.
lock_rec_block_validate(): Be prepared for a NULL return value when
invoking buf_page_get_gen() with mode=BUF_GET_POSSIBLY_FREED.
The fix for this bug was automatically merged from 10.2. However, that fix
was incomplete in 10.3. This commit is for the additional changes that are
necessary in 10.3.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
During an online table rebuild, a table could be emptied and converted
from 'instant ADD' format to plain (pre-10.3) format. All online_log
records for rebuilding the table must be written and parsed in the
format of the table that existed at the start of the operation.
row_log_t::n_core_fields: A new field for recording index->n_core_fields
when online ALTER is initiated in row_log_allocate().
row_log_t::is_instant(): Determine if the log is in the instant format.
Only invoked by the row_log_table_ family of functions.
dict_index_t::get_n_nullable(): Remove is_instant() debug assertions.
Because a table can be converted to non-instant format during a
table-rebuilding ALTER TABLE, these assertions would be bogus when
executing row_log_table_apply().
rec_init_offsets_temp(): Add the parameter n_core for passing the
original index->n_core_fields.
rec_init_offsets_temp(): Add a 3-parameter variant.
rec_init_offsets_comp_ordinary(): Add the parameter n_core for
passing the index->n_core_fields.
Issue:
------
Prefixes of externally stored columns were being stored in the online_log
when a table is altered and the alter causes the table to be rebuilt. Space in
the online_log is limited, and if the prefix of an externally stored column is
very long, it was written to the online log without checking whether it fits.
This leads to memory corruption.
Fix:
----
After the fix for Bug#16544143, there is no need to store prefixes of
externally stored columns in the online_log. Thus remove the code which stores
column prefixes for externally stored columns. Also, before writing anything
to the online_log, make sure it fits into the available memory to avoid memory
corruption.
Read the RB page for more details.
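A self-contained sketch of the bounds check principle (illustrative only; not
the actual online log code):

  #include <cstring>
  #include <cstddef>

  struct online_log_block {
    unsigned char buf[4096];   // assumed fixed-size log block
    std::size_t used = 0;

    // Refuse to append a record that does not fit, instead of overrunning
    // the buffer and corrupting memory.
    bool append(const void* rec, std::size_t len) {
      if (len > sizeof(buf) - used)
        return false;
      std::memcpy(buf + used, rec, len);
      used += len;
      return true;
    }
  };

  int main() {
    online_log_block log;
    const char rec[] = "row event";
    return log.append(rec, sizeof(rec)) ? 0 : 1;
  }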
Reviewed-by: Annamalai Gurusami <annamalai.gurusami@oracle.com>
RB: 18239
MDEV-12266 changed dict_table_t::space to a pointer.
Displaying pointer values in error messages would be even more
meaningless than displaying numeric tablespace identifiers.
row_create_table_for_mysql(): Do not display table->space when
deleting the file fails. We cannot dereference table->space here,
because fil_delete_tablespace() would have freed the object.
fil_wait_crypt_bg_threads(): Do not display table->space. We could
display table->space_id here, but it should not really add any value,
because the table reference-counts have no direct connection to files
or tablespaces.
btr_pcur_store_position(): Assert that the 'default row' record never
is the only record in a page. (If that would happen, an empty
root page would be re-created in the non-instant format, not containing
the special record.) When the cursor is positioned on the page infimum,
never use the 'default row' as the BTR_PCUR_BEFORE reference.
(This is additional cleanup, not fixing the bug.)
rec_copy_prefix_to_buf(): When converting a record prefix to
the non-instant-add format, copy the original number of null flags.
Rename the variable instant_len to instant_omit, and introduce a
few more variables to make the code easier to read.
Note: In purge, rec_copy_prefix_to_buf() is also used for storing the
persistent cursor position on a 'default row' record. The stored record
reference will be garbage, but row_search_on_row_ref() will do special
handling to reposition the cursor on the 'default row', based on
ref->info_bits.
innodb.dml_purge: Also cover the 'default row'.
When a comma separator is missing between COMMENT fields, Spider ignores the
parameter values that are beyond the last expected parameter value. There are
also some error messages that Spider does generate on COMMENT fields that are
incorrectly formed.
I have introduced additional infrastructure in Spider to fix these problems.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit c10da98 on branch bb-10.3-MDEV-15698
When a comma separator is missing between COMMENT fields, Spider ignores the
parameter values that are beyond the last expected parameter value. There are
also some error messages that Spider does generate on COMMENT fields that are
incorrectly formed.
I have introduced additional infrastructure in Spider to fix these problems.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged From:
Branch bb-10.3-MDEV-15698
The remote users need the SUPER privilege because by default Spider sends a
'SET SQL_LOG_OFF' statement to the data nodes. This is controlled by the
spider_internal_sql_log_off configuration setting on the Spider node, which
can only be set to 0 or 1, with a default value of 1.
I have fixed the problem by changing this configuration setting so that if it
is NOT SET, which is the most likely case, the Spider node DOES NOT SEND the
'SET SQL_LOG_OFF' statement to the data nodes. However if the
spider_internal_sql_log_off setting IS EXPLICITLY SET to either 0 or 1, then
the Spider node DOES SEND the 'SET SQL_LOG_OFF' statement, requiring a remote
user with the SUPER privilege. The Spider documentation will be updated to
reflect this change.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Remove unused InnoDB function parameters and functions.
i_s_sys_virtual_fill_table(): Do not allocate heap memory.
mtr_is_block_fix(): Replace with mtr_memo_contains().
mtr_is_page_fix(): Replace with mtr_memo_contains_page().
When an attempt to connect to the remote server fails, Spider retries to
connect to the remote server 1000 times or until the connection attempt
succeeds. This is perceived as a hang if the remote server remains
unavailable.
I have introduced changes in Spider's table status handler to fix this problem.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 6ee6933 on branch bb-10.3-MDEV-15712
When an attempt to connect to the remote server fails, Spider retries to
connect to the remote server 1000 times or until the connection attempt
succeeds. This is perceived as a hang if the remote server remains
unavailable.
I have introduced changes in Spider's table status handler to fix this problem.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged From:
Branch bb-10.3-MDEV-15712.