There was a race condition between the delayed insert thread and a connection
thread actually performing INSERT/REPLACE DELAYED. It was triggered by
concurrent INSERT/REPLACE DELAYED and statements that flush the same table,
either explicitly or implicitly (like FLUSH TABLE, ALTER TABLE, ...).
This race condition was caused by a gap in the delayed insert thread shutdown
logic, which allowed a concurrent connection running INSERT/REPLACE DELAYED
to change essential data, consequently leaving the table in a semi-consistent
state.
Specifically, the query thread could decrease the "tables_in_use" reference
counter in this gap, causing the delayed insert thread to shut down without
releasing the auto-increment value and the table lock.
Fixed by extending the shutdown condition so that the delayed insert thread
does not shut down while there are still locked tables.
Also removed volatile qualifier from tables_in_use and stacked_inserts since
they're supposed to be protected by mutexes.
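As an illustration of the extended shutdown condition, a minimal standalone
model (tables_in_use and stacked_inserts follow the names above; the
table_locked flag and the wrapper are assumptions, not the server's actual
Delayed_insert code): the worker must not shut down while rows are still
queued, connections still use the table, or it still holds the table lock.

    #include <condition_variable>
    #include <mutex>

    struct delayed_worker
    {
      std::mutex mtx;
      std::condition_variable cond;
      int  tables_in_use= 0;      // connections using the delayed table
      int  stacked_inserts= 0;    // queued rows not yet written
      bool table_locked= false;   // worker still holds the table lock

      void wait_until_safe_to_shutdown()
      {
        std::unique_lock<std::mutex> lk(mtx);
        // Checking only the first two terms left a gap in which the table
        // lock (and auto-increment state) could remain unreleased.
        cond.wait(lk, [this] {
          return tables_in_use == 0 && stacked_inserts == 0 && !table_locked;
        });
      }
    };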
The problem was that in-place online ALTER TABLE was used on a table that had
a mismatch between the MySQL frm file and the InnoDB data dictionary.
Fixed so that the traditional "Copy" method is used if the MySQL frm file and
the InnoDB data dictionary are not consistent.
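A minimal sketch of this decision (assumed names, not the actual handler
API): when the frm definition and the InnoDB dictionary disagree, in-place
ALTER is refused, which makes the server fall back to the copy algorithm.

    enum alter_inplace_support
    {
      ALTER_INPLACE_NOT_SUPPORTED,   // forces the traditional "Copy" method
      ALTER_INPLACE_OK
    };

    alter_inplace_support check_if_supported_inplace(bool frm_matches_dict)
    {
      if (!frm_matches_dict)
        return ALTER_INPLACE_NOT_SUPPORTED;
      return ALTER_INPLACE_OK;
    }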
INCORRECT ERROR.
Analysis
========
INSERT with DUPLICATE KEY UPDATE and REPLACE on a table
where a foreign key constraint is defined fail with an
incorrect 'duplicate entry' error rather than a foreign
key constraint violation error.
As part of the bug fix for BUG#22037930, a new flag
'HA_CHECK_FK_ERROR' was added while checking for non-fatal
errors, to manage FK errors based on the 'IGNORE' flag. For
INSERT with DUPLICATE KEY UPDATE and REPLACE queries, the
foreign key constraint violation error was marked as non-fatal
even though IGNORE was not set. Hence execution continued with
the duplicate key processing, resulting in an incorrect error.
Fix:
===
Foreign key violation errors are now treated as non-fatal only when
IGNORE is set in the above mentioned queries. Otherwise the appropriate
foreign key violation error is reported.
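A hedged sketch of this decision (assumed error codes and helper name, not
the actual HA_CHECK_FK_ERROR code path): a foreign key violation is
downgraded to a non-fatal error only under IGNORE; otherwise it is reported
instead of falling through to duplicate-key handling.

    enum write_error { ERR_DUP_ENTRY, ERR_FK_VIOLATION, ERR_OTHER };

    bool is_non_fatal_write_error(write_error err, bool ignore_flag)
    {
      switch (err)
      {
      case ERR_DUP_ENTRY:
        return true;          // handled by ON DUPLICATE KEY UPDATE / REPLACE
      case ERR_FK_VIOLATION:
        return ignore_flag;   // non-fatal only when IGNORE is set
      default:
        return false;
      }
    }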
Don't read from socket in yassl in SSL_pending().
Just return size of the buffered processed data.
This is what OpenSSL is documented to do too:
SSL_pending() returns the number of bytes which have been processed,
buffered and are available inside ssl for immediate read.
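A minimal model of this contract (assumed structure, not the yaSSL sources):
SSL_pending() only reports data that has already been received and
processed, and performs no socket I/O of its own.

    #include <cstddef>

    struct ssl_session
    {
      std::size_t buffered_app_data;   // decrypted bytes awaiting the caller
    };

    int ssl_pending(const ssl_session *s)
    {
      // No recv()/read() here: just the size of the buffered, processed data.
      return static_cast<int>(s->buffered_app_data);
    }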
1. The same message text for INSERT and INSERT IGNORE.
2. No new warnings in UPDATE IGNORE yet (big change for 5.5).
Also replace a commonly used expression with a named constant.
MDEV-9278 - Debian: the Lintian complains about "shlib-calls-exit" in ha_spider.so
HandlerSocket handles errors by aborting program execution. In most cases
this is done via abort(). One exception was a host/service resolution
failure, which was aborted with exit().
As a workaround, this exit() was replaced with abort() for symmetry with the
other error handling.
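A small sketch of the workaround (hypothetical wrapper, not the actual
handlersocket code): on a resolution failure the plugin now aborts like its
other fatal paths, instead of calling exit() from a shared library, which is
what lintian's "shlib-calls-exit" check flags.

    #include <cstdlib>

    void fail_host_resolution()
    {
      abort();   // was exit(); abort() keeps the fatal paths symmetric
    }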
For some reason check_cxx_compiler_flag() passes the result variable name
down to the compiler:
https://github.com/Kitware/CMake/blob/master/Modules/CheckCXXSourceCompiles.cmake#L57
But the compiler doesn't permit dashes in a macro name, as in
-DHAVE_CXX_-fimplicit-templates.
Worked around by renaming HAVE_CXX_-fimplicit-templates to
HAVE_CXX_IMPLICIT_TEMPLATES.
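For illustration (the renamed macro shown as a plain #define): a
preprocessor macro name may contain only letters, digits and underscores, so
a definition derived from the raw flag name is rejected, while the sanitized
name compiles.

    // #define HAVE_CXX_-fimplicit-templates 1   // invalid macro name
    #define HAVE_CXX_IMPLICIT_TEMPLATES 1        // renamed result variable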
This is a backport of the patch for MDEV-9653 (fixed earlier in 10.1.13).
The code in Item_func_case::fix_length_and_dec() did not
calculate max_length and decimals properly.
In case of any numeric result (DECIMAL, REAL, INT) a generic method
Item_func_case::agg_num_lengths() was called, which could erroneously result
in a DECIMAL item with max_length==0 and decimals==0, so the constructor of
Field_new_decimal tried to create a field of the DECIMAL(0,0) type,
which caused a crash.
Unlike Item_func_case, the code responsible for merging attributes in
Item_func_coalesce::fix_length_and_dec() works fine: it has specific execution
branches for all distinct numeric types and correctly creates a DECIMAL(1,0)
column instead of DECIMAL(0,0) for the same set of arguments.
The fix does the following:
- Moves the attribute merging code from Item_func_coalesce::fix_length_and_dec()
to a new method Item_func_hybrid_result_type::fix_attributes()
(a standalone sketch of this merging follows below)
- Removes the wrong code from Item_func_case::fix_length_and_dec()
and reuses fix_attributes() in both Item_func_coalesce::fix_length_and_dec()
and Item_func_case::fix_length_and_dec()
- Fixes count_real_length() and count_decimal_length() to get an array
of Items as an argument, instead of using Item::args directly.
This is needed for Item_func_case::fix_length_and_dec().
- Moves methods Item_func::count_xxx_length() from "public" to "protected".
- Removes Item_func_case::agg_num_length(), as it's not used any more.
- Additionally removes Item_func_case::agg_str_length(),
as it also was not used (dead code).
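A standalone illustration of that attribute merging for a DECIMAL result
(assumed types, not the server code): aggregate the attributes over all
arguments and never let the result fall below DECIMAL(1,0).

    #include <algorithm>
    #include <cstddef>

    struct dec_attr { unsigned max_length; unsigned decimals; };

    dec_attr merge_decimal_attributes(const dec_attr *args, std::size_t count)
    {
      dec_attr out= { 0, 0 };
      for (std::size_t i= 0; i < count; i++)
      {
        out.max_length= std::max(out.max_length, args[i].max_length);
        out.decimals=   std::max(out.decimals,   args[i].decimals);
      }
      out.max_length= std::max(out.max_length, 1u);   // never DECIMAL(0,0)
      return out;
    }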
SHOW PROCESSLIST output can be affected by concurrent queries that have not
completed yet. Removed the affected SHOW PROCESSLIST from the test, since it
does not seem relevant to the original problem.
Special treatment for temporal values in create_tmp_field_from_item():
the old code only did it when result_type() was STRING_RESULT,
but Item_cache_temporal::result_type() is INT_RESULT.
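A standalone sketch of the adjusted decision (assumed enums and helper, not
the server code): a cached temporal value needs a temporal tmp-table field
even though its result_type() is INT_RESULT rather than STRING_RESULT.

    enum result_type_t  { STRING_RESULT, INT_RESULT, REAL_RESULT };
    enum tmp_field_kind { TMP_FIELD_TEMPORAL, TMP_FIELD_STRING,
                          TMP_FIELD_NUMERIC };

    tmp_field_kind pick_tmp_field(result_type_t rt, bool is_temporal)
    {
      if (is_temporal)               // checked before result_type(), so
        return TMP_FIELD_TEMPORAL;   // INT_RESULT temporal caches are covered
      return rt == STRING_RESULT ? TMP_FIELD_STRING : TMP_FIELD_NUMERIC;
    }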
ANALYSIS:
=========
A LEX_STRING structure pointer is processed during the
validation of a stored program name. During this processing,
there is a possibility of a null pointer dereference.
FIX:
====
check_routine_name() is invoked by the parser, which supplies a
non-empty string as the SP name. To catch any potential calls
to check_routine_name() with a NULL value, a debug assert has
been added.
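A sketch of the added assertion (simplified signature and return convention,
not the actual server function body):

    #include <cassert>
    #include <cstddef>

    struct LEX_STRING { char *str; std::size_t length; };

    bool check_routine_name(const LEX_STRING *ident)
    {
      // The parser always supplies a non-empty name, so a NULL or empty
      // value here indicates a caller bug rather than bad user input.
      assert(ident != NULL && ident->str != NULL && ident->length != 0);
      // ... actual validation of the stored program name would follow
      return true;   // true = name accepted in this simplified sketch
    }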
Fixed wait condition in test case to actually wait for get_lock() completion
(not for lock acquisition as it was before). This removes sporadic extra row in
subsequent processlist queries.
FAILURES
Analysis:
=========
The test script does not ensure that "assert_grep.inc" is
called only after the 'Disk is full' error has been written
to the error log.
The test checks for the "Queueing master event to the relay log"
state, but this state is set before 'queue_event' is invoked.
The actual 'Disk is full' error happens at a much lower level.
It can happen that we reset the debug point before the actual
disk full simulation even occurs, so the "Disk is full" message
never appears in the error log.
To guarantee correct behaviour we need a mechanism whereby,
after the "Disk is full" error message is written to the error
log, the test is signalled to execute SSS and only then reset
the debug point, so that the test is deterministic.
Fix:
===
Added a debug sync point to make the script deterministic.
Item_func_ifnull::date_op() and Item_func_coalesce::date_op() could
erroneously return 0000-00-00 instead of NULL when get_date()
was called with the TIME_FUZZY_DATES flag, e.g. from LEAST().
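A minimal standalone model of the intended behaviour (assumed types and
helper, not the Item_func code): when every argument is NULL, the result
must be SQL NULL rather than the zero date, even when fuzzy dates are
permitted.

    #include <cstddef>

    struct date_value { bool is_null; int year, month, day; };

    // Returns the first non-NULL date; NULL (not 0000-00-00) if none exists.
    date_value coalesce_dates(const date_value *args, std::size_t count,
                              bool fuzzy_dates_allowed)
    {
      (void) fuzzy_dates_allowed;   // must not turn NULL into a zero date
      for (std::size_t i= 0; i < count; i++)
        if (!args[i].is_null)
          return args[i];
      date_value null_result= { true, 0, 0, 0 };
      return null_result;
    }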
The previous fix using a wait condition did not work because of MDEV-9867, so
we have to use a conditional sleep instead. The sleep will only happen if the
test is executed after another one that also ran a buffer pool dump, without
a server restart between the two tests.
This was a regression bug.
modified: storage/connect/ha_connect.cc
modified: storage/connect/mysql-test/connect/r/part_table.result
modified: storage/connect/mysql-test/connect/t/part_table.test