This patch does the following:
1. Makes Field_vers_trx_id::type_handler() return
&type_handler_vers_trx_id rather than &type_handler_longlong
(see the sketch after this list).
Fixes Item_func::convert_const_compared_to_int_field() to
test field_item->type_handler() against &type_handler_vers_trx_id,
instead of testing field_item->vers_trx_id().
2. Removes VERS_TRX_ID related code from
Type_handler_hybrid_field_type::aggregate_for_comparison(),
because "BIGINT UNSIGNED GENERATED ALWAYS AS ROW {START|END}"
columns behave just like a BIGINT in a regular comparison,
i.e. when not inside AS OF.
3. Removes
- Type_handler_hybrid_field_type::m_vers_trx_id;
- Type_handler_hybrid_field_type::m_flags;
because a "BIGINT UNSIGNED GENERATED ALWAYS AS ROW {START|END}"
behaves like a regular BIGINT column when in UNION.
4. Removes Field::vers_trx_id(), Item::vers_trx_id() and Item::field_flags().
They are not needed anymore. See N1.
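For illustration, a standalone C++ sketch (simplified stand-ins, not the
actual MariaDB classes) of the dispatch idea behind items 1 and 4: the field
advertises a dedicated handler singleton, and callers compare handler
pointers instead of querying a per-field flag like vers_trx_id():

    #include <cassert>

    struct Type_handler { virtual ~Type_handler() {} };
    struct Type_handler_longlong : Type_handler {};
    struct Type_handler_vers_trx_id : Type_handler_longlong {};

    static const Type_handler_longlong    type_handler_longlong{};
    static const Type_handler_vers_trx_id type_handler_vers_trx_id{};

    struct Field
    {
      virtual ~Field() {}
      virtual const Type_handler *type_handler() const
      { return &type_handler_longlong; }
    };

    struct Field_vers_trx_id : public Field
    {
      // After the patch: return the dedicated handler, not the generic one.
      const Type_handler *type_handler() const override
      { return &type_handler_vers_trx_id; }
    };

    int main()
    {
      Field_vers_trx_id f;
      // The kind of check that replaces testing field_item->vers_trx_id():
      assert(f.type_handler() == &type_handler_vers_trx_id);
      return 0;
    }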
fil_space_t::n_pending_ops, n_pending_ios: Use a combination of
fil_system.mutex and atomic memory access for protection.
fil_space_t::release(): Replaces fil_space_release().
Does not acquire fil_system.mutex.
fil_space_t::release_for_io(): Replaces fil_space_release_for_io().
Does not acquire fil_system.mutex.
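As a standalone sketch (illustrative names, not the real fil0fil code), the
release path boils down to an atomic decrement with no global mutex:

    #include <atomic>
    #include <cassert>

    struct space_sketch
    {
      std::atomic<unsigned> n_pending_ops{0};

      void acquire() { n_pending_ops.fetch_add(1, std::memory_order_relaxed); }

      // Like fil_space_t::release(): fil_system.mutex is not taken here.
      void release()
      {
        unsigned old= n_pending_ops.fetch_sub(1, std::memory_order_release);
        assert(old > 0);  // must have been acquired first
        (void) old;
      }
    };

    int main()
    {
      space_sketch s;
      s.acquire();
      s.release();
      return 0;
    }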
Problem:
=======
InnoDB cleans all temporary undo logs during commit. During the rollback
of a secondary index entry, InnoDB tries to build the previous version
of the clustered index record. This accesses an undo page that was freed
during the previous transaction's commit, leading to undo log corruption.
Solution:
=========
During rollback, temporary undo logs should not try to build
the previous version of the record.
Problems:
1. Unlike Item_field::fix_fields(),
Item_sum_sp::fix_length_and_dec() and Item_func_sp::fix_length_and_dec()
did not run the code which resided in adjust_max_effective_column_length(),
therefore they did not extend max_length for the integer return data types
from the user-specified length to the maximum length according to
the data type capacity.
2. The code in adjust_max_effective_column_length() was not correct
for TEXT data, because Field_blob::max_display_length()
multiplies by mbmaxlen. So TEXT variants were unintentionally
promoted to the next longer data type for multi-byte character
sets: TINYTEXT->TEXT, TEXT->MEDIUMTEXT, MEDIUMTEXT->LONGTEXT.
3. Item_sum_sp::create_table_field_from_handler()
Item_func_sp::create_table_field_from_handler()
erroneously called tmp_table_field_from_field_type(),
which converted VARCHAR(>512) to TEXT variants.
So "CREATE..SELECT spfunc()" erroneously converted
VARCHAR to TEXT. This was wrong, because stored
functions have explicitly declared data types,
which should be preserved.
Solution:
- Removing Type_std_attributes(const Field *)
and instead using Type_std_attributes::set() in combination
with field->type_std_attributes() all around the code, e.g.:
Type_std_attributes::set(field->type_std_attributes())
These two ways of copying attributes from a Field
to an Item duplicated each other, and were slightly
different in how they mixed max_length and mbmaxlen.
(A condensed sketch follows after this list.)
- Removing adjust_max_effective_column_length() and
fixing Field::type_std_attributes() to do all necessary
type-specific calculations, so no further adjustments
are needed.
Field::type_std_attributes() is now called from all affected methods:
Item_field::fix_fields()
Item_sum_sp::fix_length_and_dec()
Item_func_sp::fix_length_and_dec()
This fixes the problem N1.
- Making Field::type_std_attributes() virtual, to make
sure that type-specific adjustments are properly done
by individual Field_xxx classes. Implementing
Field_blob::type_std_attributes() in the way that
no TEXT promotion is done.
This fixes the problem N2.
- Fixing Item_sum_sp::create_table_field_from_handler()
Item_func_sp::create_table_field_from_handler() to
call create_table_field_from_handler() instead of
tmp_table_field_from_field_type() to avoid
VARCHAR->TEXT conversion on "CREATE..SELECT spfunc()".
- Recording mysql-test/suite/compat/oracle/r/sp-param.result
as "CREATE..SELECT spfunc()" now correctly
preserve the data type as specified in the RETURNS clause.
- Adding new tests
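A condensed standalone sketch (simplified stand-ins, not the real classes)
of the single remaining way of copying attributes, including a blob
override that avoids TEXT promotion:

    #include <cassert>

    struct Type_std_attributes
    {
      unsigned max_length;
      unsigned mbmaxlen;
      Type_std_attributes(unsigned len= 0, unsigned mb= 1)
        : max_length(len), mbmaxlen(mb) {}
      void set(const Type_std_attributes &other) { *this= other; }
    };

    struct Field
    {
      virtual ~Field() {}
      // Virtual, so each Field_xxx does its own type-specific math.
      virtual Type_std_attributes type_std_attributes() const
      { return Type_std_attributes(11, 1); }
    };

    struct Field_blob : public Field
    {
      // No multiplication by mbmaxlen here, so no promotion of TEXT
      // variants to the next longer type.
      Type_std_attributes type_std_attributes() const override
      { return Type_std_attributes(65535, 1); }
    };

    int main()
    {
      Field_blob field;
      Type_std_attributes item_attrs;
      item_attrs.set(field.type_std_attributes());  // the one copying path
      assert(item_attrs.max_length == 65535);
      return 0;
    }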
Problem:
The logic in store_column_type() with a switch on field type was
hard to follow. The part for MEDIUMINT (MYSQL_TYPE_INT24) was not correct.
It erroneously calculated the precision of MEDIUMINT UNSIGNED
as 7 instead of 8.
A similar hard-to-follow switch doing some type specific calculations
resided in adjust_max_effective_column_length(). It was also wrong for
MEDIUMINT (reported as a separate issue in MDEV-15946).
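The arithmetic behind the wrong precision can be checked with a few lines
of standalone code (7 digits is only correct for the signed range):

    #include <cstdio>

    static unsigned digits(unsigned long v)
    {
      unsigned n= 0;
      do { n++; v/= 10; } while (v);
      return n;
    }

    int main()
    {
      // MEDIUMINT ranges: signed -8388608..8388607, unsigned 0..16777215
      printf("signed   max 8388607  -> %u digits\n", digits(8388607UL));   // 7
      printf("unsigned max 16777215 -> %u digits\n", digits(16777215UL));  // 8
      return 0;
    }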
Solution:
1. Introducing a new class Information_schema_numeric_attributes
2. Adding a new virtual method Field::information_schema_numeric_attributes()
3. Splitting the logic in store_column_type() into virtual
implementations of information_schema_numeric_attributes().
4. In order to avoid adding duplicate code for the integer data types,
adding a new virtual method Field_int::numeric_precision(),
which returns the number of digits.
Additional changes:
1. Adding the "const" qualifier to Field::max_display_length()
2. Moving the code from adjust_max_effective_column_length()
directly to Field::max_display_length().
There was no sense in having two implementations:
- a set of wrong virtual implementations for Field_xxx::max_display_length()
- additional code in adjust_max_effective_column_length() fixing
bad results of Field_xxx::max_display_length()
This change is safe:
- The code using Field::max_display_length()
in field.cc, sql_show.cc, sql_type.cc is not affected.
- The code in rpl_utility.cc is also not affected.
See a new DBUG_ASSERT and new comments explaining why.
In the new implementation, Field_xxx::max_display_length() returns
correct results for all integer types (except MEDIUMINT, see below).
Putting implementations of numeric_precision() and max_display_length()
near each other in field.h made the logic much clearer and thus
helped to reveal bad results for Field_medium::max_display_length(),
which returns 9 instead of 8 for signed MEDIUMINT fields.
This problem will be addressed separately (MDEV-15946).
Note, this change is also useful for pluggable data types (see MDEV-4912),
as now a user defined Field_xxx has a way to control what's returned
in INFORMATION_SCHEMA.COLUMNS.NUMERIC_PRECISION and
INFORMATION_SCHEMA.COLUMNS.NUMERIC_SCALE by implementing
a desired behavior in Field_xxx::information_schema_numeric_attributes().
Character-set-safe truncation is done when storing a non-empty string
into a VARCHAR(0) COMPRESSED column, so that the string becomes empty.
The code didn't expect an empty string after truncation.
Fixed by moving the empty string check to after the truncation.
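The shape of the fix as a standalone sketch (the helper is a stand-in for
the real charset-aware truncation):

    #include <cassert>
    #include <cstddef>

    // Stand-in for the real character-set-safe truncation.
    static size_t charset_safe_truncate(size_t len, size_t field_max)
    {
      return len > field_max ? field_max : len;
    }

    static bool becomes_empty(size_t len, size_t field_max)
    {
      size_t new_len= charset_safe_truncate(len, field_max);
      // The empty-string check now happens AFTER the truncation,
      // so a non-empty input truncated by VARCHAR(0) is handled.
      return new_len == 0;
    }

    int main()
    {
      assert(becomes_empty(5, 0));   // non-empty string, VARCHAR(0)
      assert(!becomes_empty(5, 10));
      return 0;
    }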
Element_type& Bounds_checked_array<Element_type>::operator[]
(size_t) [with Element_type = Item*; size_t = long unsigned int]
In sql_yacc.yy the semantic actions for the MEDIAN window function
lacked a call of st_select_lex::prepare_add_window_spec().
This function saves the head of thd->lex->order_list into
lex->save_order_list so that this head can be restored in
st_select_lex::add_window_spec after the specification of the
window function has been parsed.
Without a call of prepare_add_window_spec(), when add_window_spec()
was called the head of an empty list was copied into
thd->lex->order_list (instead of the saved head of this list).
This made the list thd->lex->order_list invalid and potentially
could cause many different problems.
Corrected the result set in the test case for MDEV-15899 that
used the MEDIAN window function and could not be correct
without this fix.
after rebuilding under test_pseudo_invisible
If we are doing an alter related to partitioning, then a simple alter
statement (like adding a column, or any other alter statement) can't be
combined with a partition alter; this generates a syntax error.
But if we add
SET debug_dbug="+d,test_pseudo_invisible";
or test_completely_invisible,
this adds a column to a table that already has a partitioning-related
alter. The execution of this wrong statement crashes the server at later
stages (like on repair partition).
So we simply return 1 (and ER_INTERNAL_ERROR) if any of these
debug_dbug flags is turned on.
MyRocks internally will print non-critical messages via
sql_print_verbose_info(), which does what InnoDB does in similar cases:
checks if (global_system_variables.log_warnings > 2).
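A self-contained sketch of that gating, with vfprintf to stderr standing
in for the real server log sink:

    #include <cstdarg>
    #include <cstdio>

    // Stand-in for the server's global_system_variables.
    static struct { int log_warnings; } global_system_variables= { 3 };

    static void sql_print_verbose_info(const char *format, ...)
    {
      if (global_system_variables.log_warnings > 2)
      {
        va_list args;
        va_start(args, format);
        vfprintf(stderr, format, args);   // stand-in for the error log
        va_end(args);
      }
    }

    int main()
    {
      sql_print_verbose_info("MyRocks: %s\n", "non-critical message");
      return 0;
    }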
slave node killed itself.
Problem:- If we try to delete a table with a foreign key together with
the table it refers to, with wsrep_slave_threads>1, then Galera tries to
execute both Delete_rows_log_event events in parallel, which should not
happen.
Solution:- This is happening because we do not have the foreign key info
in the write set. Up to version 10.2.7 it used to work fine; it is
actually happening because of an issue in commit 2f342c4.
wsrep_must_process_fk has been changed to make it similar to the
original condition.
In this commit we are adding three more status variables to SHOW SLAVE
STATUS: Slave_DDL_Groups, Slave_Non_Transactional_Groups and
Slave_Transactional_Groups.
Slave_DDL_Groups:- This status variable counts the occurrences of DDL
statements.
Slave_Non_Transactional_Groups:- This variable counts the occurrences
of non-transactional event groups.
Slave_Transactional_Groups:- This variable counts the occurrences
of transactional event groups.
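A standalone sketch of the bookkeeping (names mirror the new status
variables; the classification logic itself is illustrative):

    #include <cassert>

    enum group_kind { GROUP_DDL, GROUP_NON_TRANSACTIONAL, GROUP_TRANSACTIONAL };

    struct slave_group_counters
    {
      unsigned long long ddl_groups;
      unsigned long long non_transactional_groups;
      unsigned long long transactional_groups;

      slave_group_counters()
        : ddl_groups(0), non_transactional_groups(0), transactional_groups(0) {}

      // Bump the counter matching the replicated event group's kind.
      void count(group_kind kind)
      {
        switch (kind)
        {
        case GROUP_DDL:               ddl_groups++; break;
        case GROUP_NON_TRANSACTIONAL: non_transactional_groups++; break;
        case GROUP_TRANSACTIONAL:     transactional_groups++; break;
        }
      }
    };

    int main()
    {
      slave_group_counters c;
      c.count(GROUP_DDL);
      c.count(GROUP_TRANSACTIONAL);
      assert(c.ddl_groups == 1 && c.transactional_groups == 1);
      return 0;
    }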
Patch Credit:- Kristian Nielsen
It changes the cmake WITH_NUMA option to have 3 values:
AUTO:- If libnuma is present, compile with NUMA (default value)
OFF:- Compile without libnuma
ON:- Compile with NUMA, throw an error if libnuma is not present
Patch Contributor:- Vesa
Patch Reviewer:- serg
insert into table with TIMESTAMP INVISIBLE
Problem:- The segfault occurs because value is null, but since the
timestamp field is VISIBLE it expects a value and tries to call
value->save_in_field(...). The timestamp field should not be visible;
this is the problem.
Solution:- While we clone fields for record0_field we don't honor the
field visibility; this patch changes that.
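In miniature, the invariant the patch restores (a standalone sketch, not
the server's field cloning code):

    #include <cassert>

    struct field_sketch
    {
      bool invisible;
      field_sketch(bool inv= false) : invisible(inv) {}

      field_sketch clone() const
      {
        field_sketch copy(*this);   // visibility is carried over
        return copy;
      }
    };

    int main()
    {
      field_sketch ts_field(true);            // e.g. TIMESTAMP ... INVISIBLE
      field_sketch cloned= ts_field.clone();
      assert(cloned.invisible);               // the clone stays invisible
      return 0;
    }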
- If the select query chooses the index 'b' over the clustered index,
the issue can happen. Changed the test case to use the primary index
for the select query.
- Inplace alter shouldn't be supported if the number of newly added fts
indexes exceeds 1, even though the table undergoes a rebuild. It is a
regression from MDEV-14016.
The crash happened because JOIN::check_for_splittable_materialized()
called by mistake the function JOIN_TAB::is_inner_table_of_outer_join()
instead of the function TABLE_LIST::is_inner_table_of_outer_join().
The former cannot be called before the call of make_outerjoin_info().
Usage of aggregate/window functions in non-recursive parts of recursive CTEs
is allowed. Error messages complaining about this were reported by mistake.
MDEV-13073 effectively made the master semisync component dependent on
the plugin one through the instantiation of THD by its Ack thread.
The thread therefore must close its resources prior to
plugin_shutdown(), which was not the case.
Fixed by implementing the requirement.
file IO, rather than int.
On Windows, it is suboptimal to depend on the C runtime, as it has a
limited number of file descriptors. This change eliminates
os_file_read_no_error_handling_int_fd(), os_file_write_int_fd(),
OS_FILE_FROM_FD() macro.
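The motivation in miniature: CRT descriptors come from a limited table,
while native HANDLEs do not. A Windows-only illustrative sketch of reading
through the native API instead of _open()/_read():

    #ifdef _WIN32
    #include <windows.h>

    static bool read_first_bytes(const wchar_t *path, char *buf, DWORD size)
    {
      HANDLE h= CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                            OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
      if (h == INVALID_HANDLE_VALUE)
        return false;
      DWORD got= 0;
      BOOL ok= ReadFile(h, buf, size, &got, NULL);
      CloseHandle(h);
      return ok != 0;
    }
    #endif

    int main() { return 0; }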
This bug manifested itself when the optimizer chose an execution plan with
an access of the recursive CTE in a recursive query by key and ARIA/MYISAM
temporary tables were used to store recursive tables.
The problem appeared due to passing an incorrect parameter to the call of
instantiate_tmp_table() in the function With_element::instantiate_tmp_tables().
This is to align the naming with compile-pentium64 and to
avoid mistakes of accidentally compiling a 32-bit binary on
a 64-bit system.
Also removed a few very old, no longer usable BUILD scripts.
This bug happened due to a defect of the implementation of the handler
function ha_delete_all_rows() for the ARIA engine.
The function maria_delete_all_rows() truncated the table, but it didn't
touch the write cache, so the cache's write offset was not reset.
In a scenario like the one in the function st_select_lex_unit::exec_recursive,
when first all records were deleted from the table and then several new
records were added, some metadata became inconsistent with the state of
the cache. As a result the table scan function could not read records
at the end of the table.
The same defect could be found in the implementation of ha_delete_all_rows()
for the MYISAM engine, mi_delete_all_rows().
Additionally, the temporary table used to store the rows produced for
each new iteration when executing a recursive CTE is now instantiated
late.
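Conceptually, the truncation must also reset the write cache; a standalone
sketch of the invariant (simplified, not the ARIA/MYISAM code):

    #include <cassert>

    struct write_cache_sketch
    {
      unsigned long long write_offset;
      write_cache_sketch() : write_offset(0) {}
      void reset() { write_offset= 0; }
    };

    struct table_sketch
    {
      unsigned long long data_file_length;
      write_cache_sketch cache;
      table_sketch() : data_file_length(0) {}

      void delete_all_rows()
      {
        data_file_length= 0;   // truncate the data file
        cache.reset();         // the previously missing step
      }
    };

    int main()
    {
      table_sketch t;
      t.data_file_length= 1024;
      t.cache.write_offset= 1024;
      t.delete_all_rows();
      // Without cache.reset(), the next write would land at a stale
      // offset and a scan could miss records at the end of the table.
      assert(t.cache.write_offset == t.data_file_length);
      return 0;
    }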
fil_crypt_rotate_pages
If the tablespace is marked as stopping, also stop page rotation.
fil_crypt_flush_space
If the tablespace is marked as stopping, do not try to read
page 0 and write it back.
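The shape of both early-outs, as a standalone sketch (is_stopping() models
the tablespace state; everything else is illustrative):

    #include <atomic>

    struct space_sketch
    {
      std::atomic<bool> stopping{false};
      bool is_stopping() const { return stopping.load(); }
    };

    static void rotate_pages_sketch(const space_sketch &space)
    {
      for (unsigned page= 1; page < 100; page++)
      {
        if (space.is_stopping())
          return;                 // stop page rotation
        /* ... rotate the encryption key for this page ... */
      }
    }

    static void flush_space_sketch(const space_sketch &space)
    {
      if (space.is_stopping())
        return;                   // do not read page 0 and write it back
      /* ... read page 0, update crypt info, write it back ... */
    }

    int main()
    {
      space_sketch s;
      rotate_pages_sketch(s);
      s.stopping= true;
      flush_space_sketch(s);
      return 0;
    }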