Do not try to set versioning conditions on every SP call. The result may
still be incorrect in some cases, but that is a general bug described in
MDEV-774.
This patch makes system versioning stuff consistent with other code and
also fixes a use-after-free bug.
Closes #756
Make the MariaDB crc32 library platform independent.
It looks strange that two CRC libraries (Power64 and AArch64) could be
used at the same time.
The patch introduces the 'CRC32_LIBRARY' macro so that the optional crc32
library is selected in a platform-independent way.
Change-Id: I68bbf73cafb6a12f7fb105ad57d117b114a8c4af
Signed-off-by: Yuqi Gu <yuqi.gu@arm.com>
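A minimal sketch of the selection idea. Only the CRC32_LIBRARY name comes
from the patch; the per-platform selector macros and the function name
below are hypothetical, and the fallback is a plain bitwise CRC-32.

    // Illustrative sketch only: the selector macros and my_crc32_sketch()
    // are hypothetical, not MariaDB's real build symbols.
    #include <cstddef>
    #include <cstdint>

    // Portable bitwise CRC-32 fallback (reflected polynomial 0xEDB88320).
    static uint32_t crc32_generic(uint32_t crc, const void *buf, size_t len)
    {
      const uint8_t *p = static_cast<const uint8_t *>(buf);
      crc = ~crc;
      while (len--)
      {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
          crc = (crc >> 1) ^ (0xEDB88320U & (0U - (crc & 1U)));
      }
      return ~crc;
    }

    // Exactly one implementation is selected at configure time; the build
    // never enables the Power64 and AArch64 CRC libraries simultaneously.
    #if defined(CRC32_LIBRARY_AARCH64)
    uint32_t my_crc32_sketch(uint32_t crc, const void *buf, size_t len); // ARMv8 CRC instructions
    #elif defined(CRC32_LIBRARY_POWER64)
    uint32_t my_crc32_sketch(uint32_t crc, const void *buf, size_t len); // POWER8 vpmsum
    #else
    uint32_t my_crc32_sketch(uint32_t crc, const void *buf, size_t len)
    {
      return crc32_generic(crc, buf, len);
    }
    #endif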
INNOBASE_DEFAULTS: Replace ALTER_ADD_COLUMN with the more appropriate
ALTER_ADD_STORED_BASE_COLUMN. This clean-up causes no change of
behaviour, because ALGORITHM=INPLACE would be refused for
ALTER_ADD_STORED_GENERATED_COLUMN, and default values are neither
computed nor substituted for ALTER_ADD_VIRTUAL_COLUMN.
commit 2dbeebdb16 accidentally changed
ALTER_COLUMN_OPTION and ALTER_COLUMN_STORAGE_TYPE to be separate flags.
InnoDB and Mroonga only check for the latter;
the example storage engine only checks for the former.
The impact of this bug should be incorrect operation of Mroonga when
the column options GROONGA_TYPE or FLAGS are changed.
InnoDB does not define any column options, only table options,
so the flag ALTER_COLUMN_OPTION should never have been set.
Also, remove the unused flag ALTER_DROP_HISTORICAL.
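The bug class can be illustrated with a self-contained sketch (the flag
values and the engine check below are stand-ins, not the real handler
alter API): once two operations that used to share a flag are split into
separate bits, an engine that tests only one bit stops seeing the other
operation.

    #include <cstdint>
    #include <iostream>

    // Stand-in alter flags; the values are illustrative, not MariaDB's.
    using alter_flags_t = uint64_t;
    constexpr alter_flags_t ALTER_COLUMN_OPTION       = 1ULL << 0;
    constexpr alter_flags_t ALTER_COLUMN_STORAGE_TYPE = 1ULL << 1;

    // An engine that only tests ALTER_COLUMN_STORAGE_TYPE (as InnoDB and
    // Mroonga did) never notices an ALTER_COLUMN_OPTION-only operation.
    bool engine_sees_column_change(alter_flags_t flags)
    {
      return flags & ALTER_COLUMN_STORAGE_TYPE;
    }

    int main()
    {
      // Changing a column option (e.g. Mroonga's GROONGA_TYPE) now sets only
      // ALTER_COLUMN_OPTION, so the check above misses the change:
      std::cout << engine_sees_column_change(ALTER_COLUMN_OPTION) << '\n'; // 0
      // Before the flags were accidentally separated, both operations mapped
      // to the same bit and either of them was caught.
    }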
Fix type mismatches in the unit test mdev10259().
btr_search_info_get_ref_count(): Do not return early if !table->space.
We can simply access table->space_id even after the tablespace has
been discarded.
btr_get_search_latch(): Relax a debug assertion to allow
!index->table->space.
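A minimal illustration of the pattern with simplified stand-in structs
(not the real dict_table_t): the numeric space_id stays valid after
DISCARD TABLESPACE, while the space pointer becomes null, so code that
only needs the id must not bail out on !table->space.

    #include <cassert>
    #include <cstdint>

    // Simplified stand-ins for illustration only.
    struct fil_space_t { uint32_t id; };

    struct dict_table_t
    {
      uint32_t     space_id;  // cached tablespace id; always readable
      fil_space_t *space;     // becomes nullptr after DISCARD TABLESPACE
    };

    // Returns the tablespace id; valid even for a discarded tablespace.
    uint32_t table_space_id(const dict_table_t &table)
    {
      // No early return on !table.space: the cached id is still meaningful.
      return table.space_id;
    }

    int main()
    {
      dict_table_t t{42, nullptr};      // tablespace has been discarded
      assert(table_space_id(t) == 42);  // fine without dereferencing t.space
    }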
Include all the Makefiles that define variables that can be useful
within debian/rules. This includes buildflags.mk as well.
Use the standard variable names and don't define our own.
Also fixes MDEV-14727 and MDEV-14491.
Fix the hang reported as "InnoDB: Error: Waited for 5 secs for hash index
ref_count (1) to drop to 0" by replacing the flawed wait logic in
dict_index_remove_from_cache_low().
On DISCARD TABLESPACE, there is no need to drop the adaptive hash index.
We must drop it on IMPORT TABLESPACE, and eventually on DROP TABLE or
DROP INDEX. As long as the dict_index_t object remains in the cache
and the table remains inaccessible, the adaptive hash index entries
pointing to orphaned pages would not do any harm. They would be dropped
when buffer pool pages are reused for something else.
btr_search_drop_page_hash_when_freed(), buf_LRU_drop_page_hash_batch():
Remove the parameter zip_size, and pass 0 to buf_page_get_gen().
buf_page_get_gen(): Ignore zip_size if mode==BUF_PEEK_IF_IN_POOL.
buf_LRU_drop_page_hash_for_tablespace(): Drop the adaptive hash index
even if the tablespace is inaccessible.
buf_LRU_drop_page_hash_for_tablespace(): New global function, to drop
the adaptive hash index.
buf_LRU_flush_or_remove_pages(), fil_delete_tablespace():
Remove the parameter drop_ahi.
dict_index_remove_from_cache_low(): Actively drop the adaptive hash index
if entries exist. This should prevent InnoDB hangs on DROP TABLE or
DROP INDEX.
row_import_for_mysql(): Drop any adaptive hash index entries for the table.
row_drop_table_for_mysql(): Drop any adaptive hash index for the table,
except if the table resides in the system tablespace. (DISCARD TABLESPACE
does not apply to the system tablespace, and we do not want to drop the
adaptive hash index for tables other than the one being dropped.)
row_truncate_table_for_mysql(): Drop any adaptive hash index entries for
the table, except if the table resides in the system tablespace.
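A sketch of the replaced wait logic, using simplified stand-in types and
a pluggable drop callback in place of
buf_LRU_drop_page_hash_for_tablespace(): instead of sleeping and
retrying while the reference count stays above zero, the adaptive hash
index entries are dropped actively.

    #include <atomic>
    #include <cstdint>
    #include <functional>

    // Simplified stand-in; the real code operates on dict_index_t and the
    // buffer pool.
    struct index_t
    {
      std::atomic<uint64_t> search_ref_count{0};  // AHI references to the index
      uint32_t              space_id = 0;
    };

    // drop_hash plays the role of buf_LRU_drop_page_hash_for_tablespace():
    // walk the buffer pool pages of the tablespace and remove their adaptive
    // hash index entries, which in turn lowers search_ref_count.
    void index_remove_from_cache_low(index_t &index,
                                     const std::function<void(uint32_t)> &drop_hash)
    {
      // Old (flawed) logic: sleep and retry, eventually logging
      // "Waited for 5 secs for hash index ref_count (1) to drop to 0".
      // New logic: actively drop the entries until nothing references the index.
      while (index.search_ref_count.load(std::memory_order_acquire) > 0)
        drop_hash(index.space_id);

      // ... proceed to free the index object ...
    }

    int main()
    {
      index_t idx;
      idx.search_ref_count = 3;
      idx.space_id = 7;
      // Simulated drop: each pass releases one reference.
      index_remove_from_cache_low(idx, [&](uint32_t) { --idx.search_ref_count; });
    }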
When the transaction isolation level is SERIALIZABLE, or when
a locking read is performed in the REPEATABLE READ isolation level,
InnoDB must lock delete-marked records in order to prevent another
transaction from inserting something.
However, at READ UNCOMMITTED or READ COMMITTED isolation level or
when the parameter innodb_locks_unsafe_for_binlog is set, the
repeatability of the reads does not matter, and there is no need
to lock any records.
row_search_for_mysql(): Skip locks on delete-marked committed records
upfront, instead of invoking row_unlock_for_mysql() afterwards.
The unlocking never worked for secondary index records.
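A self-contained sketch of the decision (simplified enum and record
state, not the actual row_search_for_mysql() code): at READ UNCOMMITTED
or READ COMMITTED, a delete-marked record whose deleting transaction has
committed can be skipped without acquiring a lock, while SERIALIZABLE or
a REPEATABLE READ locking read must still lock it to block concurrent
inserts.

    #include <iostream>

    // Simplified stand-ins for the isolation level and record state.
    enum class iso_level
    { READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE };

    struct rec_state
    {
      bool delete_marked;  // the record carries the delete mark
      bool committed;      // the delete-marking transaction has committed
    };

    // Returns true if a locking read may skip locking this record.
    bool can_skip_lock(iso_level level, bool unsafe_for_binlog,
                       const rec_state &rec)
    {
      // At the weaker isolation levels repeatability does not matter, so a
      // committed delete-marked record need not be locked.  SERIALIZABLE and
      // REPEATABLE READ locking reads must lock it to keep other transactions
      // from inserting a matching record.
      const bool weak_isolation =
          level == iso_level::READ_UNCOMMITTED ||
          level == iso_level::READ_COMMITTED   ||
          unsafe_for_binlog;  // innodb_locks_unsafe_for_binlog
      return weak_isolation && rec.delete_marked && rec.committed;
    }

    int main()
    {
      rec_state rec{true, true};
      std::cout << can_skip_lock(iso_level::READ_COMMITTED, false, rec) << '\n'; // 1
      std::cout << can_skip_lock(iso_level::SERIALIZABLE, false, rec) << '\n';   // 0
    }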
infos[]: Allocate enough entries to accommodate all keys from both
checkpoint pages.
infos_used: The size of infos[].
get_crypt_info(): Merge to the only caller, log_crypt_101_read_block().
log_crypt_101_read_block(): Do not validate the log block checksum,
because it will not be valid when upgrading from MariaDB 10.1.10.
Instead, check that the encryption key exists.
log_crypt_101_read_checkpoint(): Append to infos[] instead of overwriting.
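A sketch of the append logic with a stand-in array (the capacity below
is hypothetical; the real bound follows from the 10.1 checkpoint page
layout): the keys read from the second checkpoint page are appended
after those of the first instead of overwriting them, which is why
infos[] must hold the keys of both pages and infos_used tracks how much
of it is filled.

    #include <cstddef>
    #include <cstdint>

    // Stand-in for one key descriptor read from a MariaDB 10.1 redo log
    // checkpoint page.
    struct crypt_info_t { uint32_t checkpoint_no; uint32_t key_version; };

    // Hypothetical capacity: room for the keys of both checkpoint pages.
    constexpr size_t INFOS_CAPACITY = 2 * 5;

    static crypt_info_t infos[INFOS_CAPACITY];
    static size_t infos_used = 0;  // number of valid entries in infos[]

    // Called once per checkpoint page: append, never overwrite what the
    // previously read page contributed.
    void append_checkpoint_keys(const crypt_info_t *keys, size_t n_keys)
    {
      for (size_t i = 0; i < n_keys && infos_used < INFOS_CAPACITY; i++)
        infos[infos_used++] = keys[i];
    }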
All packages in group 'essential' are by default installed on all
Debian and Ubuntu systems, and there is no need to mark them as
dependencies for any binary.
We have carried along this patch as a patch inside our sources
since 2012 (commit cfd4fcb0bc).
The validity of this has thus been vetted in production for years
and the review done now did not find otherwise.
A race in dash causes mysqld_safe to occasionally loop infinitely.
Fix by using bash instead.
https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.0/+bug/675185
As this is the last patch, we can also clean away usage of dpatch.
We have carried along this patch as a patch inside our sources
since 2012 (commit cfd4fcb0bc).
This same patch has been used also in MySQL packaging at Oracle
and in downstream Debian.org packages for both MySQL and MariaDB.
The validity of this has thus been vetted in production for years
and the review done now did not find otherwise.
Code contributed to Oracle with
http://forge.mysql.com/wiki/Sun_Contributor_Agreement
Reported as http://bugs.mysql.com/bug.php?id=31361
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message on
check_killed(), which could lead to aborted queries without the error
being properly set. Fixed by setting a default error message if
check_error() noticed that killed had been called.
This allowed me to remove a lot of calls to thd->send_kill_message().
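A minimal sketch of the idea with stand-in names (not THD's real
members): when the session was killed and no error has been recorded
yet, record a default "interrupted" error right where the kill is
detected, so callers no longer need their own send_kill_message() calls.

    #include <string>

    // Simplified stand-in for the session object; not the real THD.
    struct session_t
    {
      bool        killed = false;  // set asynchronously by KILL
      bool        has_error = false;
      std::string error_message;

      void set_error(const std::string &msg)
      {
        has_error = true;
        error_message = msg;
      }

      // Returns true if execution must be aborted.  If the session was killed
      // and no error has been recorded yet, record a default message here so
      // an aborted query never ends up without a proper error.
      bool check_killed()
      {
        if (killed && !has_error)
          set_error("Query execution was interrupted");
        return killed;
      }
    };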
The crash happened because, in discover, table->work_part_info was not
properly reset before execution.
Fixed by resetting it before calling execute alter table, create table
or mysql_create_frm_image.
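A minimal sketch of the shape of the fix (stand-in types; the real code
resets TABLE::work_part_info on the discovery path): clear the leftover
pointer before re-executing the DDL so stale partitioning info from a
previous execution can never be picked up.

    // Simplified stand-ins, not the server's real types.
    struct partition_info;

    struct table_ctx
    {
      partition_info *work_part_info = nullptr;  // scratch pointer reused per statement
    };

    // Hypothetical wrapper around the discovery path: reset the scratch
    // pointer before handing control to ALTER TABLE / CREATE TABLE / frm
    // image creation.
    template <class Exec>
    void discover_and_execute(table_ctx &ctx, Exec execute_ddl)
    {
      ctx.work_part_info = nullptr;  // the actual fix: reset before execution
      execute_ddl(ctx);
    }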
Original problem reported by Wlad: re-compilation of 10.3 on top of 10.2
build would cache undefined HAVE_ISINF from 10.2, whereas it is expected
to be 1 in 10.3.
std::isinf() seems to be available on all supported platforms.
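For reference, a minimal use of std::isinf(), which avoids relying on a
cached HAVE_ISINF configure result:

    #include <cmath>
    #include <iostream>
    #include <limits>

    int main()
    {
      const double inf = std::numeric_limits<double>::infinity();
      std::cout << std::boolalpha
                << std::isinf(inf) << ' '    // true
                << std::isinf(1.5) << '\n';  // false
    }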
Close MYSQL (and destroy THD) in the same thread where it was used,
because THD embeds MDL_context, which owns some LF_PINS, which remember
a pointer to my_thread_var->stack_ends_here.
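A self-contained illustration of the general hazard with simplified
stand-in classes (not the real THD, MDL_context or LF_PINS): an object
that captures a pointer into a thread's thread-local storage must be
used and destroyed in the thread that created it.

    #include <cassert>
    #include <thread>

    // Stand-in for my_thread_var->stack_ends_here: per-thread storage.
    thread_local char stack_end_marker;

    // Stand-in for LF_PINS inside MDL_context: remembers a pointer into the
    // creating thread's thread-local data.
    struct pins_t
    {
      char *stack_ends_here = &stack_end_marker;
    };

    struct session_t  // stand-in for a THD owning an MDL_context
    {
      pins_t pins;
    };

    int main()
    {
      // Correct pattern: the session is created, used and destroyed in the
      // same thread, so pins.stack_ends_here always points at live
      // thread-local data.
      std::thread worker([]
      {
        session_t session;
        assert(session.pins.stack_ends_here == &stack_end_marker);
        // ... run queries ...
      });  // session destroyed here, inside the worker thread
      worker.join();

      // Had the session escaped the worker and been destroyed or used in
      // another thread after the worker exited, pins.stack_ends_here would
      // dangle.
    }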
When executed with slow_log ON, the test revealed a "side effect" of the
MDEV-8305 implementation, which inadvertently made trigger or stored
function statements reset the top-level query's THD::start_time et al.
(Details of the test failure analysis are footnoted.)
Unlike the SP case, the SF and trigger internal statements should not
do that.
Fixed by revising the MDEV-8305 decision to backup/reset/restore the
session timestamp inside sp_instr_stmt::execute(). In the SP case the
timestamp is still reset by the caller on a per-statement basis by
pre-existing logic.
Timestamp-related tests are extended to cover the trigger and stored
function cases.
Note: commit 3395ab7324 is reverted, as its struct QUERY_START_TIME_INFO
declaration is no longer used after this patch.
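A sketch of the intended behaviour with stand-in names (the real logic
lives in sp_instr_stmt::execute() and its callers; the per-statement
set_time() call is folded in here only to keep the sketch
self-contained): a top-level SP statement still gets a fresh timestamp,
while a statement running inside a trigger or stored function keeps the
invoking statement's timestamp.

    #include <ctime>

    // Simplified stand-in for the session; not the real THD.
    struct session_t
    {
      std::time_t start_time = 0;       // THD::start_time stand-in
      bool        in_sub_stmt = false;  // true inside a trigger or stored function
      void set_time() { start_time = std::time(nullptr); }
    };

    // Stand-in for executing one instruction of a stored routine.
    void sp_instr_execute(session_t &thd)
    {
      // After this patch the instruction no longer backs up / resets /
      // restores the session timestamp itself.  A top-level SP statement is
      // still given a fresh timestamp (by the caller's pre-existing logic);
      // a trigger or stored function body keeps the top-level statement's
      // THD::start_time.
      if (!thd.in_sub_stmt)
        thd.set_time();

      // ... execute the statement using thd.start_time ...
    }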
Footnote:
--------
Specifically, in the failing test a query on the master was logged
correctly with the timestamp of the query's top-level statement, but its
post-update trigger computed one more (later) timestamp, which got
inserted into another table. In the latter table, a master-vs-slave
discrepancy in the timestamp (which has no fractional part) became
evident due to the different execution time of the trigger, combined
with the fact that the master timestamp, logged with a microsecond
fractional part, was truncated on the slave. On the master, when the
fractional part was close to 1, the trigger execution added its own
latency and overflowed to the next second value. That is how the master
timestamp surprisingly turned out to be bigger than the slave's.