The predicate dict_table_is_discarded() checks whether
ALTER TABLE…DISCARD TABLESPACE has been executed.
Replace most occurrences of dict_table_is_discarded() with
checks of dict_table_t::space. A few checks for the flag
DICT_TF2_DISCARDED are necessary; write them inline.
Because !is_readable() implies !space, some checks for
dict_table_is_discarded() were redundant.
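A minimal sketch of the replacement pattern (call sites and error handling
are illustrative, not quoted from the patch):

  /* before: predicate wrapping the DICT_TF2_DISCARDED flag */
  if (dict_table_is_discarded(table)) {
          return(DB_TABLESPACE_DELETED);
  }

  /* after: a discarded tablespace leaves dict_table_t::space unset,
  so checking the field is enough at most call sites */
  if (!table->space) {
          return(DB_TABLESPACE_DELETED);
  }

  /* where "discarded" (as opposed to merely missing) matters,
  check the flag inline */
  const bool discarded = !!(table->flags2 & DICT_TF2_DISCARDED);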
MDEV-14823 Wrong error message upon selecting from a system_time partition
MDEV-15956 Strange ER_UNSUPPORTED_ACTION_ON_GENERATED_COLUMN upon ALTER on versioning column
Store the transaction start time in thd->transaction.start_time.
THD::transaction_time() wraps transaction.start_time, taking into
account the current status of BEGIN.
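A rough sketch of the wrapper (only transaction.start_time comes from this
change; the check used to detect an open BEGIN block is an assumption):

  my_time_t THD::transaction_time()
  {
    /* Inside an explicit BEGIN ... COMMIT all statements share the
       start time of the transaction; otherwise every statement
       refreshes it with its own start time. */
    if (!in_multi_stmt_transaction_mode())
      transaction.start_time= start_time;
    return transaction.start_time;
  }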
Don't use hidden system time in versioning,
but keep the system time logic in THD
to work around low-resolution system clocks and
replication from non-versioned to versioned tables.
This reverts MDEV-14788 (System versioning cannot
be based on local timestamps, as it is now).
Versioning is based on local timestamps again,
but timestamps are protected by MDEV-15923
(option to control who can set session @@timestamp).
Make sure that SELECT_LEX_UNIT::derived behaves as documented
(points to the "TABLE_LIST representing this union in the
embedding select"). For a recursive CTE this was not necessarily
the case: it could have pointed to the TABLE_LIST inside the CTE,
not to the one in the embedding select.
To fix:
* don't update unit->derived in mysql_derived_prepare(); pass derived
as an argument to st_select_lex_unit::prepare() (see the sketch after
this list)
* prefer to set unit->derived in TABLE_LIST::init_derived()
to the TABLE_LIST in the embedding select, not to the recursive
reference. Fail if there are multiple TABLE_LISTs in the embedding
select with conflicting FOR SYSTEM_TIME clauses.
cleanup:
* remove redundant THD* argument from st_select_lex_unit::prepare()
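A sketch of the resulting signature (parameter names are illustrative):

  bool st_select_lex_unit::prepare(TABLE_LIST *derived_arg,
                                   select_result *sel_result,
                                   ulong additional_options);

Callers pass the TABLE_LIST of the embedding select explicitly instead of
relying on mysql_derived_prepare() to set unit->derived, and the redundant
THD* parameter is dropped.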
Problem:
The fix for Bug #21348684 (#Rb9581) introduced a conditional debug execute
'buf_pool_resize_chunk_null', which causes the new chunks' memory for the
2nd buffer pool instance to be freed.
The buffer pool resize function removes all old chunk entries from
'buf_chunk_map_reg' and adds new chunk entries into it. But when
'buf_pool_resize_chunk_null' is set to true, the 2nd buffer pool
instance's chunk entries are not added into 'buf_chunk_map_reg'.
When the purge thread tries to access such a buffer chunk, it leads to a
debug assertion.
Fix:
Added the old chunk entries into 'buf_chunk_map_reg' for the 2nd buffer
pool instance when the 'buf_pool_resize_chunk_null' debug condition is set
to true.
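A sketch of the fix (the debug point and map type are the ones named above;
the loop itself is paraphrased):

  DBUG_EXECUTE_IF("buf_pool_resize_chunk_null",
          if (buf_pool_index(buf_pool) == 1) {
                  /* keep the 2nd instance's old chunks registered so
                  that frame -> chunk lookups by the purge thread
                  still succeed */
                  buf_chunk_t*    chunk = buf_pool->chunks;

                  for (ulint j = buf_pool->n_chunks; j--; chunk++) {
                          buf_chunk_map_reg->insert(
                                  buf_pool_chunk_map_t::value_type(
                                          chunk->blocks->frame, chunk));
                  }
          });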
Reviewed by: Jimmy <Jimmy.Yang@oracle.com>
RB: 18664
PROBLEM
The issue found during an ntest run is a regression of Bug #27141613. When
an index is being freed due to an error during its creation and the index
has not been added to the dictionary cache, its field columns are not set;
dereferencing the null column pointer while removing the index from the
virtual columns' index lists leads to a crash.
NOTE: The test i_innodb.virtual_debug was also failing on 32k page size and
above for the newly added scenario. Fixed that.
FIX
Added a check so that the index is removed from the virtual columns' index
lists only if the index has actually been added to the dictionary cache.
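A sketch of the guard (based on the description above; the helper it
protects is the one introduced for Bug #27141613, described in the next
entry):

  /* only an index that made it into the dictionary cache has its
  field columns set, so only then can it be detached safely */
  if (index->cached) {
          dict_index_remove_from_v_col_list(index);
  }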
Reviewed by: Satya Bodapati<satya.bodapati@oracle.com>
RB: 18670
PROBLEM
=======
When adding a virtual index fails with DB_TOO_BIG_RECORD, the virtual
index being freed isn't removed from the index list of the virtual
columns that are part of the index. Later, when the undo log is read
during rollback, a wrong value could be fetched, causing the assertion
reported in the bug.
FIX
===
Added a function that is called when the virtual index is being freed; it
removes the index from the index list of every virtual column which was a
field of that index.
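A sketch of such a helper (name and list handling paraphrased from memory
of dict0dict.cc, not quoted from the patch):

  void
  dict_index_remove_from_v_col_list(dict_index_t* index)
  {
          for (ulint i = 0; i < dict_index_get_n_fields(index); i++) {
                  dict_col_t*     col
                          = dict_index_get_nth_field(index, i)->col;

                  if (!col->is_virtual()) {
                          continue;
                  }

                  dict_v_col_t*   v_col
                          = reinterpret_cast<dict_v_col_t*>(col);

                  /* drop every reference to this index from the
                  virtual column's list of indexes */
                  for (dict_v_idx_list::iterator it
                               = v_col->v_indexes->begin();
                       it != v_col->v_indexes->end(); ) {
                          if (it->index == index) {
                                  it = v_col->v_indexes->erase(it);
                          } else {
                                  ++it;
                          }
                  }
          }
  }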
Reviewed by: Jimmy Yang<Jimmy.Yang@oracle.com>
RB: 18528
PROBLEM
-------
Whenever an FTS table is created it registers itself in a queue which
is operated by a background thread whose job is to optimize the
FTS tables in the background. Additionally we place these FTS tables in
the non-LRU list so that they cannot be evicted from the cache. But when
a node that already has FTS tables is brought up, we first load the FTS
tables into the dictionary, but we skip the part where they are added to
the background queue and to the non-LRU list, because the background
thread is not yet created. So these tables are loaded but they can be
evicted from the cache. Now coming to the deadlock scenario:
1. A server background thread is trying to evict a table from the cache
because the cache is full, so it scans the LRU list for tables it can
evict. It finds that the FTS table (for the reason explained above) can
be evicted, takes dict_sys->mutex (a system-wide mutex), submits a
request to the background thread to remove this table from the queue,
and waits for it to be completed.
2. In the meantime fts_optimize_thread() is processing another job
in the queue and needs dict_sys->mutex for a small amount of time,
but it cannot get it because it is held by the first background thread.
So Thread 1 is waiting for its job to be completed by Thread 2, whereas
Thread 2 is waiting for dict_sys->mutex held by Thread 1, causing the
deadlock.
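The wait cycle in sketch form (call sites abbreviated, not quoted from the
code):

  /* Thread 1: eviction path */
  mutex_enter(&dict_sys->mutex);
  fts_optimize_remove_table(table);   /* posts a message to the queue
                                      and waits for the FTS optimize
                                      thread to acknowledge it */

  /* Thread 2: fts_optimize_thread(), already busy with another job */
  mutex_enter(&dict_sys->mutex);      /* blocks: held by Thread 1 */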
FIX
Problem:
When an incorrect value is assigned to the innodb_data_file_path or
innodb_temp_data_file_path parameter, InnoDB returns an error and logs an
error message in the mysqld.err file, but the message contains no
information about the parameter that caused InnoDB initialization to fail.
Fix:
Added the parameter name and value to the error message that is logged
when InnoDB initialization fails.
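A sketch of the kind of message added (wording and call site paraphrased;
innobase_data_file_path is the variable behind innodb_data_file_path):

  if (!srv_sys_space.parse_params(innobase_data_file_path, true)) {
          ib::error() << "Unable to parse innodb_data_file_path="
                  << innobase_data_file_path;
          DBUG_RETURN(innobase_init_abort());
  }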
Reviewed by: Jimmy <Jimmy.Yang@oracle.com>
RB: 18206
Compressed blob columns didn't accept data at their capacity. E.g. storing
255 bytes into a TINYBLOB resulted in a "Data too long" error.
Now it is allowed, provided that the compression method was able to produce
a shorter string (so that both the metadata and the compressed data fit
into the blob) and that column_compression_threshold is lower than the blob
length.
If no compression was performed, we still have to reserve an additional
byte for metadata, and thus we perform normal data truncation and return
its status.
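A self-contained sketch of the acceptance rule (the helper and its names
are illustrative, not the actual Field_blob_compressed code):

  /* one byte of metadata always precedes the stored value */
  static bool value_fits(uint32 field_length,     /* 255 for TINYBLOB */
                         uint32 data_length,      /* original length  */
                         uint32 compressed_length,
                         bool   compressed)
  {
    uint32 stored= 1 + (compressed ? compressed_length : data_length);
    return stored <= field_length;
  }

So a 255-byte value goes into a TINYBLOB only when compression brings it
down to 254 bytes or fewer; an incompressible value still gets the usual
truncation, and its status is returned.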
Problem:
During ALTER, when filling stored column info, a wrong column number was
used. This is because virtual columns were not skipped when iterating over
the columns of the table, which led to a debug assertion.
Fix:
In the InnoDB table cache object, virtual columns are stored in one list,
while stored and normal columns are kept in another list.
When looking for a stored column, skip the virtual columns to get the
right column number of the stored column.
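A sketch of the corrected counting (the loop is paraphrased; variable names
are illustrative):

  ulint   stored_col_no = 0;

  for (uint i = 0; i < altered_table->s->fields; i++) {
          Field*  field = altered_table->field[i];

          if (!field->stored_in_db()) {
                  continue;       /* virtual columns have no slot in
                                  the stored column list */
          }

          if (field == sought_field) {    /* illustrative */
                  break;
          }

          stored_col_no++;
  }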
Reviewed by: Thiru <thirunarayanan.balathandayuth@oracle.com>,
Satya <satya.bodapati@oracle.com>
RB: 17939
This is the MariaDB equivalent of fixing the MySQL 5.7 regression
Bug #26935001 ALTER TABLE AUTO_INCREMENT TRIES TO READ
INDEX FROM DISCARDED TABLESPACE
Oracle did not publish a test case, but it is easy to guess
based on the commit message. The MariaDB code is different
due to MDEV-6076 implementing persistent AUTO_INCREMENT.
commit_set_autoinc(): Report ER_TABLESPACE_DISCARDED if the
tablespace is missing.
prepare_inplace_alter_table_dict(): Avoid accessing a discarded
tablespace. (This avoids generating warnings in fil_space_acquire().)
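A sketch of the added guard in commit_set_autoinc() (simplified; error
reporting paraphrased):

  if (!ctx->old_table->space) {
          my_error(ER_TABLESPACE_DISCARDED, MYF(0),
                   ctx->old_table->name.m_name);
          return(true);
  }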
Problem:
When an FTS index is added to a table which doesn't have an 'FTS_DOC_ID'
column, InnoDB rebuilds the table to add the 'FTS_DOC_ID' column. When this
FTS index is later dropped from the table, InnoDB does not rebuild the
table to remove the 'FTS_DOC_ID' column; it deletes the FTS index auxiliary
tables but not the FTS common auxiliary tables.
Later, when the database containing this table is renamed, the FTS
auxiliary tables are not renamed, because the DICT_TF2_FTS flag in the
table's flags2 (dict_table_t::flags2) was reset during the FTS index drop
operation. Now when we drop the old database, it leads to an assert.
Fix:
During renaming of FTS auxiliary tables, ORed a condition to also check
whether the table has the DICT_TF2_FTS_HAS_DOC_ID flag set.
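A sketch of the adjusted condition in the rename path (paraphrased):

  if (DICT_TF2_FLAG_IS_SET(table, DICT_TF2_FTS)
      || DICT_TF2_FLAG_IS_SET(table, DICT_TF2_FTS_HAS_DOC_ID)) {
          err = fts_rename_aux_tables(table, new_name, trx);
  }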
RB: 18769
Reviewed by : Jimmy.Yang@oracle.com
1. Adding THD::convert_string(LEX_CSTRING *to,...) as a wrapper
for convert_string(LEX_STRING *to,...), as LEX_CSTRING
is now frequently used for conversion purposes (a sketch of the
wrapper follows this list).
This reduced duplicate code in the TEXT_STRING_sys,
TEXT_STRING_literal, TEXT_STRING_filesystem grammar rules in *.yy.
2. Adding yet another THD::convert_string() with an extra parameter
"bool simple_copy_is_possible". This further reduced
repetitive code in the mentioned grammar rules in *.yy.
3. Deriving Lex_ident_cli_st from Lex_string_with_metadata_st,
as they have very similar functionality. Moving m_quote
from Lex_ident_cli_st to Lex_string_with_metadata_st,
as m_quote will be used later to optimize string literals anyway
(e.g. avoid redundant copying on the tokenizer stage).
Adjusting Lex_input_stream::get_text() accordingly.
4. Moving the remainder of the code in the TEXT_STRING_sys, TEXT_STRING_literal,
TEXT_STRING_filesystem grammar rules into new methods in THD:
- make_text_string_sys()
- make_text_string_connection()
- make_text_string_filesystem()
and changing *.yy to use these new methods.
This reduced the amount of similar code in
sql_yacc.yy and sql_yacc_ora.yy.
5. Removing duplicate code in Lex_input_stream::body_utf8_append_ident()
by reusing THD::make_text_string_sys(). Thanks to #3 and #4.
6. Making THD members charset_is_system_charset,
charset_is_collation_connection, charset_is_character_set_filesystem
private, as they are not needed externally any more.
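A sketch of the LEX_CSTRING wrapper from item 1 (body paraphrased):

  bool THD::convert_string(LEX_CSTRING *to, CHARSET_INFO *to_cs,
                           const char *from, size_t from_length,
                           CHARSET_INFO *from_cs)
  {
    LEX_STRING tmp;
    if (convert_string(&tmp, to_cs, from, from_length, from_cs))
      return true;
    to->str= tmp.str;
    to->length= tmp.length;
    return false;
  }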
Problem:
=======
Multiple insert statements into a table that contains a FULLTEXT KEY and an
FTS_DOC_ID column abort the server if the FTS_DOC_ID exceeds
FTS_DOC_ID_MAX_STEP.
Solution:
========
Remove the exception for the first committed insert statement.
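A sketch of the resulting check in fts0fts.cc (the constant is real; the
surrounding code and message are paraphrased):

  /* the sanity check now applies to every statement, including the
  first committed insert */
  if (doc_id - 1 > FTS_DOC_ID_MAX_STEP + cache->next_doc_id) {
          ib::error() << "Doc ID " << doc_id << " is too big."
                  " Its difference with largest used Doc ID "
                  << cache->next_doc_id << " cannot exceed or equal to "
                  << FTS_DOC_ID_MAX_STEP;
          return(DB_FTS_INVALID_DOCID);
  }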
Reviewed-by: Jimmy Yang<jimmy.yang@oracle.com>
RB: 18023