The issue here is that window function execution is not invoked for the correct join tab. When we have GROUP BY and create extra temporary tables, window function execution must be invoked for the last join tab, but the current code does not take JOIN::aggr_tables into account when determining it.
Fixed by introducing a new function, JOIN::total_join_tab_cnt(), that takes the temporary tables into account as well.
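As a rough illustration of the idea, here is a minimal sketch with hypothetical, heavily simplified structures (the real JOIN class is far more complex):

  #include <cstdio>

  // Hypothetical stand-in for the relevant parts of the server's JOIN class.
  struct Join_sketch
  {
    unsigned top_join_tab_count; // join tabs for the base tables
    unsigned aggr_tables;        // extra temporary tables (GROUP BY etc.)

    // Analogous to the new JOIN::total_join_tab_cnt(): the last join tab,
    // where window functions must be evaluated, is found only when the
    // temporary tables are counted too.
    unsigned total_join_tab_cnt() const
    { return top_join_tab_count + aggr_tables; }
  };

  int main()
  {
    Join_sketch join= {2, 1}; // two base tables, one GROUP BY temp table
    // Window function execution belongs to join_tab[total - 1],
    // not to join_tab[top_join_tab_count - 1].
    std::printf("last join tab index: %u\n", join.total_join_tab_cnt() - 1);
    return 0;
  }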
Don't try to convert a default value string from a user character set
into a column character set if this particular default value string did
not come from the user at all (that is, if it's an ALTER TABLE and the
default value string is the *old* default value of the unaltered
column).
This used to crash because old defaults are allocated on the old
table's memroot, which is freed mid-ALTER when the old table is closed,
so thd->rollback_item_tree_changes() at the end of the ALTER was writing
into the freed memory.
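A minimal sketch of the guard, with hypothetical names (only a default supplied by the user in the current statement needs conversion):

  #include <string>

  // Hypothetical simplified model of a column default during ALTER TABLE.
  struct Default_sketch
  {
    std::string str;
    bool came_from_user; // false: inherited from the old, unaltered column
  };

  std::string convert_charset(const std::string &s)
  { return s; /* the real code would transcode between character sets */ }

  void prepare_default(Default_sketch &def)
  {
    // Old defaults live on the old table's memroot, which is freed
    // mid-ALTER, so they must not be converted (or rewritten) at all.
    if (def.came_from_user)
      def.str= convert_charset(def.str);
  }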
table.cc:
  virtual columns must be computed for INSERT if they're part
  of the partitioning expression.
This change broke gcol.gcol_partition_innodb.
Fix CHECK TABLE for partitioned tables and vcols.
sql_partition.cc:
  mark prerequisite base columns in full_part_field_set
ha_partition.cc:
  initialize vcol_set accordingly
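A sketch of the column-marking part, using hypothetical simplified structures in place of the server's Field and bitmap types:

  #include <bitset>
  #include <cstddef>
  #include <vector>

  // Hypothetical model: a field may be virtual and depend on base columns.
  struct Field_sketch
  {
    bool is_virtual;
    std::vector<unsigned> base_deps; // indexes of base columns it reads
  };

  // Analogous to marking prerequisite base columns in full_part_field_set:
  // when a partitioning field is a virtual column, the base columns it is
  // computed from must be marked as well, so that vcol_set can be set up
  // from this set and the virtual value can actually be computed.
  template <std::size_t N>
  void mark_part_fields(const std::vector<Field_sketch> &fields,
                        const std::vector<unsigned> &part_fields,
                        std::bitset<N> &full_part_field_set)
  {
    for (unsigned f : part_fields)
    {
      full_part_field_set.set(f);
      for (unsigned dep : fields[f].base_deps)
        full_part_field_set.set(dep);
    }
  }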
This previously unreported warning comes from casting size_t to ulong
in sql_hset.h in Hash_set::at().
Change my_hash_element() to accept a size_t index parameter.
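A compilable sketch of the change (the real my_hash_element() operates on mysys' HASH; the names here are simplified stand-ins):

  #include <cstddef>
  #include <vector>

  struct Hash_sketch { std::vector<void*> elements; }; // stand-in for HASH

  // Before the fix the index parameter was 'ulong'; callers that pass a
  // size_t, such as Hash_set::at(size_t i) in sql_hset.h, implicitly
  // narrowed the value, which is exactly what the compiler warned about.
  void *hash_element_sketch(Hash_sketch *hash, std::size_t idx) // was: ulong idx
  {
    return idx < hash->elements.size() ? hash->elements[idx] : nullptr;
  }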
The remote users need the SUPER privilege because by default Spider sends a
'SET SQL_LOG_OFF' statement to the data nodes. This is controlled by the
spider_internal_sql_log_off configuration setting on the Spider node, which
can only be set to 0 or 1, with a default value of 1.
I have fixed the problem by changing this configuration setting so that if it
is NOT SET, which is the most likely case, the Spider node DOES NOT SEND the
'SET SQL_LOG_OFF' statement to the data nodes. However, if the
spider_internal_sql_log_off setting IS EXPLICITLY SET to either 0 or 1, then
the Spider node DOES SEND the 'SET SQL_LOG_OFF' statement, requiring a remote
user with the SUPER privilege. The Spider documentation will be updated to
reflect this change.
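A rough sketch of the resulting logic, with hypothetical names and a stubbed-out query call; treating -1 as "not set" here is an illustrative assumption:

  #include <string>

  void send_query_to_data_node(const std::string &) { /* stub */ }

  // Hypothetical tri-state for spider_internal_sql_log_off:
  //   -1 : not explicitly set (the new default) -> nothing is sent, so the
  //        remote user does not need SUPER
  //   0/1: explicitly set -> 'SET SQL_LOG_OFF' is sent, which requires a
  //        remote user with the SUPER privilege
  void maybe_send_sql_log_off(int internal_sql_log_off)
  {
    if (internal_sql_log_off == -1)
      return;
    send_query_to_data_node(internal_sql_log_off
                            ? "SET SQL_LOG_OFF = 1"
                            : "SET SQL_LOG_OFF = 0");
  }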
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Cherry-Picked: Commit 72f0efa on branch bb-10.3-MDEV-15697
Problem:
=======
The InnoDB master thread encounters the shutdown state SRV_SHUTDOWN_FLUSH_PHASE
when innodb_force_recovery >= 2 and the master thread is scheduled slowly
during shutdown.
Fix:
====
There is no need for the master thread itself if innodb_force_recovery >= 2.
Don't create the master thread if innodb_force_recovery >= 2.
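A minimal sketch of the guard (SRV_FORCE_NO_BACKGROUND corresponds to innodb_force_recovery=2; the helper below is hypothetical):

  enum srv_force_recovery_sketch { SRV_FORCE_NO_BACKGROUND = 2 };

  void start_master_thread_sketch() { /* stub for the real thread creation */ }

  void maybe_start_master_thread(unsigned long force_recovery)
  {
    // With innodb_force_recovery >= 2 no background work is wanted, so the
    // master thread is simply never created and shutdown cannot catch it
    // in SRV_SHUTDOWN_FLUSH_PHASE.
    if (force_recovery >= SRV_FORCE_NO_BACKGROUND)
      return;
    start_master_thread_sketch();
  }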
The problem is hard to repeat, and I failed to create a deterministic
test case. Online index creation creates stubs for to-be-created indexes.
If index creation fails, we could remove these stubs while locks exist
in the indexes. (This would require that the index creation was completed,
and a concurrent DML operation acquired a lock on a record in the
uncommitted index. If a duplicate key error occurs in an uncommitted
index, the error will be reported for the CREATE UNIQUE INDEX, not for
the DML operation that tried to insert the duplicate.)
dict_table_try_drop_aborted(), row_merge_drop_indexes(): If transactional
locks exist on the table, keep the table->indexes intact.
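A minimal sketch of that check, with hypothetical names:

  // Stand-in for the real dict_table_t; only the detail that matters here.
  struct Table_sketch
  {
    bool has_transactional_locks; // locks granted or waiting on the table
  };

  void drop_aborted_indexes_sketch(Table_sketch &table)
  {
    if (table.has_transactional_locks)
      return; // keep table->indexes intact; the stubs may still be referenced
    // ... otherwise the uncommitted index stubs can be removed safely
  }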
ibuf_restore_pos(): Do not issue any messages if the tablespace
is being dropped or truncated. In MariaDB 10.2, TRUNCATE TABLE
would not change the tablespace ID, and therefore the tablespace
would be found, even though TRUNCATE is in progress.
Furthermore, do not commit suicide if restoring the change buffer
cursor fails. The worst that could happen is that a secondary index
becomes corrupted due to incomplete change buffer merge.
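A sketch of the intended behaviour, with hypothetical names:

  enum Ibuf_restore_result { RESTORED, SPACE_BEING_DROPPED, RESTORE_FAILED };

  void report_incomplete_merge_sketch() { /* stub: log, flag the index */ }

  void handle_restore_sketch(Ibuf_restore_result r)
  {
    switch (r)
    {
    case RESTORED:
      break;                        // continue the change buffer merge
    case SPACE_BEING_DROPPED:
      break;                        // tablespace is going away: stay silent
    case RESTORE_FAILED:
      // Do not abort the server; the worst outcome of giving up here is an
      // incomplete change buffer merge, i.e. a corrupted secondary index.
      report_incomplete_merge_sketch();
      break;
    }
  }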
The following variables are used in this project, but they are set to NOTFOUND.
    LZ4_LIBS
The reason for the failure is that pkg_check_modules does not guarantee that
the <prefix>_LIBRARY_DIRS variable will be set, according to its documentation.
When it's not set, we would force find_library to look in an empty path
and thus fail to find LZ4_LIBS, even though pkg_check_modules
had previously discovered that the library is installed.
To fix the problem and still keep the logic of first following
LIBLZ4_LIBRARY_DIRS and only *then* looking at other paths, we call find_library
twice. This is the recommended approach, according to the CMake 3.11
documentation.
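A CMake sketch of the two-step lookup (variable names taken from the text above; the exact NAMES argument in the real module may differ):

  # First honour the directories reported by pkg_check_modules(LIBLZ4 ...);
  # NO_DEFAULT_PATH keeps this call from searching anywhere else.
  find_library(LZ4_LIBS NAMES lz4
               HINTS ${LIBLZ4_LIBRARY_DIRS} NO_DEFAULT_PATH)
  # If nothing was found (e.g. LIBLZ4_LIBRARY_DIRS was never set), fall back
  # to the standard search paths. find_library() does nothing once LZ4_LIBS
  # is already set in the cache, so the first successful lookup wins.
  find_library(LZ4_LIBS NAMES lz4)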
In reverse-ordered column families, if one wants to start reading at the
logical end of the index, they should Seek() to a key value that is not
covered by the index. This may (and typically does) prevent use of a bloom
filter.
The calls to setup_scan_iterator() that are made for index and table scans
didn't take this into account and passed eq_cond_len=INDEX_NUMBER_SIZE.
Fixed them to compute and pass the correct eq_cond_len.
Also removed an incorrect assertion in ha_rocksdb::setup_iterator_bounds.
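A rough sketch of the reasoning, with hypothetical names; the actual computation in the fix is more involved:

  #include <cstddef>

  const std::size_t INDEX_NUMBER_SIZE_SKETCH= 4; // index-number key prefix

  // eq_cond_len tells RocksDB how long a prefix of the Seek() key is shared
  // by every row the scan wants, which is what allows a prefix bloom filter.
  // For a scan of a reverse column family that starts at the logical end of
  // the index, the Seek() key lies outside the index, so not even the
  // index-number prefix is shared and no bloom filter may be used.
  std::size_t calc_eq_cond_len_sketch(bool reverse_cf, bool starts_outside_index)
  {
    if (reverse_cf && starts_outside_index)
      return 0;
    return INDEX_NUMBER_SIZE_SKETCH;
  }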
If creating a secondary index fails (typically, ADD UNIQUE INDEX fails
due to a duplicate key), it is possible that a concurrently running UPDATE
or DELETE will access the index stub and hit the debug assertion.
It does not make any sense to keep updating an uncommitted index whose
creation has failed.
dict_index_t::is_corrupted(): Replaces dict_index_is_corrupted().
Also take online_status into account.
Replace some calls to dict_index_is_clust() with calls to
dict_index_t::is_primary().
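A minimal sketch of the new predicate (the enum values mirror InnoDB's online index states; treat the exact condition as an assumption):

  enum Online_status_sketch
  {
    ONLINE_INDEX_COMPLETE, ONLINE_INDEX_CREATION,
    ONLINE_INDEX_ABORTED, ONLINE_INDEX_ABORTED_DROPPED
  };

  // An index counts as corrupted either when its corruption flag is set or
  // when its online creation has been aborted, so DML stops maintaining an
  // uncommitted index whose creation has failed.
  bool index_is_corrupted_sketch(bool corrupt_flag, Online_status_sketch status)
  {
    return corrupt_flag || status >= ONLINE_INDEX_ABORTED;
  }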
ha_innobase::commit_inplace_alter_table(): Defer the freeing of ctx->trx
until after the operation has been successfully committed. In this way,
rollback on a partitioned table will be possible.
rollback_inplace_alter_table(): Handle ctx->new_table == NULL when
ctx->trx != NULL.
If the tablespace is dropped or truncated after the
space->is_stopping() check in fil_crypt_get_page_throttle_func(),
we would proceed to request the page, and eventually report a fatal
error.
buf_page_get_gen(): Do not retry reading if mode==BUF_GET_POSSIBLY_FREED.
lock_rec_block_validate(): Be prepared for a NULL return value when
invoking buf_page_get_gen() with mode=BUF_GET_POSSIBLY_FREED.
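A sketch of the caller pattern, with stand-ins for buf_block_t and buf_page_get_gen():

  struct Block_sketch {};                   // stand-in for buf_block_t

  Block_sketch *get_page_possibly_freed()   // stand-in for buf_page_get_gen()
  { return nullptr; /* may be NULL: tablespace dropped or truncated */ }

  void lock_rec_block_validate_sketch()
  {
    if (Block_sketch *block= get_page_possibly_freed())
    {
      // ... validate the record locks that point into this block
      (void) block;
    }
    // else: nothing to validate; the page vanished with its tablespace,
    // which is not an error under BUF_GET_POSSIBLY_FREED
  }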