ISSUE: Queries that operate on a MEDIUMINT column together with
long long data result in a buffer overflow in the
store_long function.
The merging rule specified for (MYSQL_TYPE_LONGLONG,
MYSQL_TYPE_INT24) was MYSQL_TYPE_LONG. Because of this the
store_long function was called, which resulted in the buffer overflow.
SOLUTION:
The correct merging rule for (MYSQL_TYPE_LONGLONG,
MYSQL_TYPE_INT24) should be MYSQL_TYPE_LONGLONG.
With this rule, store_longlong is called instead of store_long,
and it correctly handles the type MYSQL_TYPE_LONGLONG.
External Bug #23645238 is a duplicate of this issue.
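For illustration, here is a hedged sketch (not the server source; the
enum and merge() helper are simplified stand-ins for the field-type
merging table) of the corrected rule, and of why a 4-byte store cannot
hold a long long value:

    // Hedged sketch: the enum and merge() are simplified stand-ins for
    // the server's field-type merging table, not its actual code.
    #include <cassert>
    #include <cstring>
    #include <iostream>

    enum Type { TYPE_INT24, TYPE_LONG, TYPE_LONGLONG };

    // Corrected rule: the merged type must be wide enough for both
    // inputs, so (LONGLONG, INT24) merges to LONGLONG, not LONG.
    Type merge(Type a, Type b) {
      if (a == TYPE_LONGLONG || b == TYPE_LONGLONG) return TYPE_LONGLONG;
      if (a == TYPE_LONG || b == TYPE_LONG) return TYPE_LONG;
      return TYPE_INT24;
    }

    int main() {
      assert(merge(TYPE_LONGLONG, TYPE_INT24) == TYPE_LONGLONG);
      long long v = 1LL << 40;          // needs all 8 bytes
      unsigned char slot8[8];
      std::memcpy(slot8, &v, sizeof v); // 8-byte slot: fits
      // Copying sizeof v (8) bytes into a 4-byte slot, which is what the
      // buggy MYSQL_TYPE_LONG rule led store_long-style code to do,
      // would overflow the buffer.
      std::cout << "ok\n";
    }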
DERIVED TABLE IN JOIN
ISSUE:
------
This problem occurs under the following conditions:
1) A parameter is used in the select-list of a derived table.
2) The derived table is part of a JOIN.
SOLUTION:
---------
When a derived table is materialized, a temporary table is
created. This temporary table contains one field for each of
the items in the select-list of the derived table. This set of
fields is later used to set up the join.
Currently no field is created in the temporary table if a
parameter is used in the select-list.
Create a field for the parameter. By default Item_param's
result type in a prepared statement is set to
STRING_RESULT. This can change during the execute phase
depending on the user variable. But since the execute phase
creates its own temporary table, it will be handled
separately.
This is a backport of the fix for BUG#22392374.
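A minimal sketch of the idea, with hypothetical Item/Field types
standing in for the server's classes: the temporary-table field
factory no longer skips a parameter, but gives it a string field,
matching Item_param's default STRING_RESULT:

    // Illustrative sketch only; Item and Field are simplified stand-ins
    // for the server's item/field hierarchy and its real signatures.
    #include <iostream>
    #include <memory>
    #include <vector>

    enum ResultType { STRING_RESULT, INT_RESULT };

    struct Item {
      bool is_param = false;
      ResultType result_type = STRING_RESULT; // Item_param's default
    };

    struct Field { const char *kind; };

    // Before the fix: parameters got no field at all. After: a
    // parameter yields a string field, matching STRING_RESULT.
    std::unique_ptr<Field> create_tmp_field(const Item &item) {
      if (item.is_param)
        return std::make_unique<Field>(Field{"string"});
      return std::make_unique<Field>(
          Field{item.result_type == INT_RESULT ? "int" : "string"});
    }

    int main() {
      std::vector<Item> select_list = {{true, STRING_RESULT},
                                       {false, INT_RESULT}};
      for (const auto &it : select_list)
        std::cout << create_tmp_field(it)->kind << "\n"; // string, int
    }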
Fix get_quick_keys(): When building the range tree from a condition
of the form
keypart1=const AND (keypart2 < 0 OR keypart2 >= 0)
the SEL_ARG for keypart2 represents the interval (-inf, +inf).
However, the logic that sets the UNIQUE_RANGE flag fails to recognize
this and sets the flag whenever (keypart1, keypart2) covers
a unique key.
As a result, the range access executor assumes the interval can have
at most one row and only reads the first row from it.
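The sketch below models the intended condition with simplified
structures (the names are illustrative, not the real range-optimizer
types): a range may be flagged unique only when every keypart of the
unique key is bound to a single value by a closed interval:

    // Simplified model of the UNIQUE_RANGE decision; illustrative only.
    #include <iostream>
    #include <string>
    #include <vector>

    struct KeypartRange {
      std::string min_val, max_val;
      bool min_open = false, max_open = false; // open endpoints (-inf/+inf)
    };

    // A range can return at most one row only if every keypart of a
    // unique key is constrained to a single value: closed, min == max.
    bool is_unique_range(const std::vector<KeypartRange> &parts,
                         size_t unique_key_parts) {
      if (parts.size() < unique_key_parts) return false;
      for (size_t i = 0; i < unique_key_parts; ++i) {
        const auto &p = parts[i];
        if (p.min_open || p.max_open || p.min_val != p.max_val)
          return false; // keypart2 in (-inf, +inf) disqualifies the range
      }
      return true;
    }

    int main() {
      // keypart1 = const AND keypart2 in (-inf, +inf): NOT unique.
      std::vector<KeypartRange> r = {{"5", "5"}, {"", "", true, true}};
      std::cout << std::boolalpha << is_unique_range(r, 2) << "\n"; // false
    }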
Problem:
In debug builds, there is a chance of an out-of-bounds
read when tables are locked in
LTM_PRELOCKED_UNDER_LOCK_TABLES mode. It can happen because
the debug code uses enum values as indexes into an array of
mode descriptions, but the array covers only 3 of the 4
enum values.
Fix:
This patch implements a getter for the enum that
returns its string representation,
effectively removing the out-of-bounds read.
It also fixes the lock mode descriptions that
are printed in debug builds.
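A minimal sketch of the pattern the fix uses, with the same lock-mode
enumerators but otherwise illustrative code: an exhaustive switch
getter cannot read out of bounds, unlike indexing a 3-element
description array:

    // Illustrative enum-to-string getter; the enumerators mirror the
    // lock modes above but this is not the server's actual code.
    #include <iostream>

    enum enum_locked_tables_mode {
      LTM_NONE,
      LTM_LOCK_TABLES,
      LTM_PRELOCKED,
      LTM_PRELOCKED_UNDER_LOCK_TABLES
    };

    // Before: const char *desc[3] indexed by the enum; the fourth value
    // read past the end. After: a switch covers every enumerator.
    const char *locked_tables_mode_name(enum_locked_tables_mode mode) {
      switch (mode) {
        case LTM_NONE:            return "LTM_NONE";
        case LTM_LOCK_TABLES:     return "LTM_LOCK_TABLES";
        case LTM_PRELOCKED:       return "LTM_PRELOCKED";
        case LTM_PRELOCKED_UNDER_LOCK_TABLES:
                                  return "LTM_PRELOCKED_UNDER_LOCK_TABLES";
      }
      return "UNKNOWN";
    }

    int main() {
      std::cout
        << locked_tables_mode_name(LTM_PRELOCKED_UNDER_LOCK_TABLES)
        << "\n";
    }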
Commit#ebd24626ca38e7fa1e3da2acdcf88540be70fabe obsoleted the THREAD and
THREAD_SAFE_CLIENT preprocessor symbols. They were not removed from
sql/net_serv.cc, so the code that retries on EINTR became dead code.
Remove the THREAD_SAFE_CLIENT preprocessor directive from sql/net_serv.cc.
Also check errno for EINTR only if the preceding read call returned an error.
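A hedged sketch of the retry pattern with plain POSIX read() (not the
net_serv.cc source): errno is consulted only after read() actually
reports an error, and only EINTR causes a retry:

    // Minimal EINTR retry loop; illustrative, not the server's net layer.
    #include <cerrno>
    #include <unistd.h>

    ssize_t read_retry(int fd, void *buf, size_t count) {
      for (;;) {
        ssize_t n = read(fd, buf, count);
        if (n >= 0)
          return n;      // success or EOF: errno is not meaningful here
        if (errno != EINTR)
          return -1;     // a real error: report it to the caller
        // interrupted by a signal before any data arrived: retry
      }
    }

    int main() {
      char buf[64];
      return read_retry(0, buf, sizeof buf) < 0 ? 1 : 0; // read stdin once
    }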
GET_SERVER_FROM_TABLE_TO_CACHE
Description:- Server received SIG11 in the function,
"get_server_from_table_to_cache()".
Analysis:- Defining a server with a blank name is not
handled properly.
Fix:- Modified "get_server_from_table_to_cache()" to
handle a blank server name.
FROM I_S
Issue:
------
There is a difference in the field type created when the
following DDLs are used:
1) CREATE TABLE t0 AS SELECT NULL;
2) CREATE TABLE t0 AS SELECT GREATEST(NULL,NULL);
The first statement creates a field of type Field_string and
the second one creates a field of type Field_null.
This creates a problem for the query mentioned in this bug,
since the null_ptr is calculated differently for
Field_null.
Solution:
---------
When there is a function returning null in the select list
as mentioned above, the field should be of type
Field_string.
This was fixed in 5.6+ as part of Bug#14021323. This is a
backport to mysql-5.5.
An incorrect comment in innodb_bug54044.test has been
corrected in all versions.
ASSERTION `0' FAILED ON SELECT AREA
Problem:
The optimizer tries to get the points to calculate the area
without checking the return value of uint4korr() for 0 "points".
As a result the server exits.
Solution:
Check the return value from uint4korr().
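A minimal sketch of the guard, with an illustrative little-endian
decoder standing in for uint4korr(): the point count is validated
before any point data is used:

    // Illustrative geometry-header check; uint4korr_le mimics the byte
    // decoding but this is not the server's spatial code.
    #include <cstddef>
    #include <cstdint>
    #include <iostream>

    static uint32_t uint4korr_le(const unsigned char *p) {
      return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
             ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    // Returns false instead of proceeding when the encoded ring has 0
    // points (or too few bytes), which previously led to the exit.
    bool get_ring_points(const unsigned char *data, size_t len,
                         uint32_t *n_points) {
      if (len < 4) return false;
      uint32_t n = uint4korr_le(data);
      if (n == 0 || len - 4 < (size_t)n * 16) // 16 bytes per (x, y) pair
        return false;
      *n_points = n;
      return true;
    }

    int main() {
      unsigned char bad[4] = {0, 0, 0, 0}; // count == 0: reject, no crash
      uint32_t n;
      std::cout << std::boolalpha
                << get_ring_points(bad, sizeof bad, &n) << "\n"; // false
    }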
shutdown)
There was a race condition between the shutdown thread and event worker
threads. The shutdown thread waits for thread_count to become 0 in
close_connections(). It may happen that an event worker thread has been
started but has not yet incremented thread_count. In this case the shutdown
thread may fail to wait for this worker thread and continue deinitialization.
The worker thread in turn may continue execution and crash on deinitialized
data.
Fixed by incrementing thread_count before the thread is actually created, as
is done for connection threads.
Also let the event scheduler not inc/dec the running threads counter, for
symmetry with other "service" threads.
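A hedged sketch of the ordering with std::thread and an atomic counter
(the server uses its own thread bookkeeping): the counter is
incremented before the worker exists and rolled back if creation
fails, so shutdown can never miss a worker:

    // Illustrative shutdown/worker accounting; not the server's code.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<int> thread_count{0};

    void worker() {
      // ... event work ...
      --thread_count; // announce completion
    }

    void start_worker() {
      ++thread_count; // count the thread BEFORE it exists,
                      // as is done for connection threads
      try {
        std::thread(worker).detach();
      } catch (...) {
        --thread_count; // creation failed: roll the count back
        throw;
      }
    }

    int main() {
      start_worker();
      // Shutdown waits until every counted thread has finished.
      while (thread_count.load() != 0)
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
      std::cout << "all workers accounted for\n";
    }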
MYSQL-5.5
The bug asks for a backport of bug#1463594 and bug#20682959. This
is required because, if replication is enabled, the master
transaction can commit whereas the slave cannot, since the
'environment' is not exactly the same. This manifestation is seen
in bug#22024200.
bytes are possibly lost
The timer thread of the threadpool is created "joinable" but is never
"joined" on completion. This leaks memory around its thread local storage.
Fixed by joining the timer thread.
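A minimal sketch with std::thread (the threadpool itself uses
pthreads): a joinable thread that is never joined leaks its resources;
joining it on shutdown releases them:

    // Illustrative timer-thread lifecycle; not the threadpool code.
    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> shutting_down{false};

    void timer_loop() {
      while (!shutting_down.load())
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    int main() {
      std::thread timer(timer_loop); // joinable by default
      shutting_down.store(true);
      timer.join();                  // without this join, the thread's
                                     // resources (TLS, stack bookkeeping)
                                     // are reported as "possibly lost"
    }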
NAME_CONST QUERY
ISSUE:
------
Using NAME_CONST with a non-constant negated expression as
value can result in incorrect behavior.
SOLUTION:
---------
The problem can be avoided by checking whether the argument
is a constant value.
The fix is a backport of Bug#12735545.
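A hedged sketch of the check with a simplified item model
(illustrative, not the server's Item hierarchy): NAME_CONST accepts a
value argument only if it is genuinely constant, which includes a
negated literal but not a negated column:

    // Simplified model of the NAME_CONST argument check; illustrative.
    #include <iostream>
    #include <memory>

    struct Item {
      virtual ~Item() = default;
      virtual bool const_item() const = 0;
    };

    struct Item_int : Item {
      long long v;
      explicit Item_int(long long v) : v(v) {}
      bool const_item() const override { return true; }
    };

    struct Item_func_neg : Item { // unary minus wrapping another item
      std::unique_ptr<Item> arg;
      explicit Item_func_neg(std::unique_ptr<Item> a) : arg(std::move(a)) {}
      // A negation is constant only if its argument is constant.
      bool const_item() const override { return arg->const_item(); }
    };

    struct Item_field : Item { // a column reference: never constant
      bool const_item() const override { return false; }
    };

    bool name_const_value_ok(const Item &value) { return value.const_item(); }

    int main() {
      Item_func_neg ok(std::make_unique<Item_int>(42));  // -42: fine
      Item_func_neg bad(std::make_unique<Item_field>()); // -col: reject
      std::cout << std::boolalpha << name_const_value_ok(ok) << " "
                << name_const_value_ok(bad) << "\n";     // true false
    }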
THAT ACTUALLY EXISTS
ANALYSIS:
=========
A stored function updating a view, where the view's table has a
trigger defined that updates another table, fails with an
error reporting that the table doesn't exist.
If there is a trigger defined on a table, a variable
'trg_event_map' will be set to a non-zero value after the
parsed tree creation. This indicates what triggers we need to
pre-load for the TABLE_LIST when opening an associated table.
During the prelocking phase, the variable 'trg_event_map'
will not be set for the view table. This value will be set
after the processing of triggers defined on the table. During
the processing of sub-statements, 'locked_tables_mode' will be
set to 'LTM_PRELOCKED' which denotes that further locking
of tables/functions cannot be done. This results in the other
table not being locked, and further processing therefore
reports an error.
FIX:
====
During the prelocking of view, the value of 'trg_event_map'
of the view is copied to 'trg_event_map' of the next table
in the TABLE_LIST. This results in the locking of tables
associated with the trigger as well.
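A minimal sketch of the propagation step with a simplified TABLE_LIST
(the field names mirror the description above, but the structure is
illustrative): while prelocking a view, its trg_event_map is OR-ed
into the next table in the list so the trigger's tables get locked too:

    // Simplified TABLE_LIST chain; illustrative, not sql-layer code.
    #include <cstdint>
    #include <iostream>

    struct TABLE_LIST {
      const char *alias;
      bool is_view = false;
      uint8_t trg_event_map = 0; // bitmap of trigger events to pre-load
      TABLE_LIST *next_global = nullptr;
    };

    // During prelocking of a view, propagate its trigger-event bitmap
    // to the next table so the trigger's targets are locked as well.
    void propagate_trg_event_map(TABLE_LIST *tl) {
      if (tl->is_view && tl->next_global)
        tl->next_global->trg_event_map |= tl->trg_event_map;
    }

    int main() {
      TABLE_LIST base{"t1"};
      TABLE_LIST view{"v1", true, /*event bits*/ 0b0110, &base};
      propagate_trg_event_map(&view);
      std::cout << int(base.trg_event_map) << "\n"; // 6: triggers loaded
    }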
Revert following bug fix:
Bug#20685029: SLAVE IO THREAD SHOULD STOP WHEN DISK IS
FULL
Bug#21753696: MAKE SHOW SLAVE STATUS NON BLOCKING IF IO
THREAD WAITS FOR DISK SPACE
This fix results in a deadlock between slave IO thread
and SQL thread.
(cherry picked from commit e3fea6c6dbb36c6ab21c4ab777224560e9608b53)
Revert following bug fix:
Bug#20685029: SLAVE IO THREAD SHOULD STOP WHEN DISK IS
FULL
Bug#21753696: MAKE SHOW SLAVE STATUS NON BLOCKING IF IO
THREAD WAITS FOR DISK SPACE
This fix results in a deadlock between slave IO thread
and SQL thread.
INSERTS/UPDATES ON TEMPORARY TABLES
Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON
READ-ONLY SERVERS
Problem:
========
Running 5.5.14 in read-only mode, we can create temporary tables
but cannot insert or update records in them. When we
try, we get Error 1290: The MySQL server is running with the
--read-only option so it cannot execute this statement.
Analysis:
=========
This bug is specific to the binary log being enabled with
binlog-format set to STATEMENT or MIXED. A standalone server without
the binary log enabled, or with row-based binlog format, works fine.
How standalone server and row based replication work:
=====================================================
A standalone server and row-based replication mark a
transaction as read_write only when it is modifying
non-temporary tables as part of the current transaction.
Because of this, when the code enters the commit phase it checks
whether a transaction is read_write or not. If the transaction
is read_write and global read-only mode is enabled, the
transaction fails with a 'server is in read only mode'
error.
In statement-based mode, at the time of writing
to the binary log a binlog handler is created and it is always
marked as read_write. With temporary tables, even
though the engine did not mark the transaction as read_write,
the new transaction started by the binlog handler is
considered read_write.
Hence in this case, when the code enters the commit phase it finds
one handler which has a read_write transaction even when
we are only modifying a temporary table. This causes the server
to throw an error when global read-only mode is enabled.
Fix:
====
At the time of commit in "ha_commit_trans", if a read_write
transaction is found, we check whether this transaction is
coming from a handler other than the binlog handler. This
ensures that the statement is blocked only when a genuine
read_write transaction is being reported by an engine apart
from the binlog handler.
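A hedged sketch of the commit-phase test with a simplified handler
list (not the real ha_commit_trans): the read-only error fires only if
some read_write handler other than the binlog handler is present:

    // Illustrative model of the commit-phase read-only check.
    #include <iostream>
    #include <vector>

    struct Ha_trx_info {
      bool is_read_write;
      bool is_binlog_handler; // the always-read_write binlog pseudo-engine
    };

    // Block the commit under --read-only only when a genuine engine
    // (not the binlog handler) registered a read_write transaction.
    bool must_block_in_read_only(const std::vector<Ha_trx_info> &handlers) {
      for (const auto &ha : handlers)
        if (ha.is_read_write && !ha.is_binlog_handler)
          return true;
      return false;
    }

    int main() {
      // stmt-binlog write to a TEMPORARY table: only the binlog handler
      // is read_write, so the commit must NOT be blocked.
      std::vector<Ha_trx_info> tmp_insert = {{true, true}, {false, false}};
      std::cout << std::boolalpha
                << must_block_in_read_only(tmp_insert) << "\n"; // false
    }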
Integer comparison of INT expressions with different signedness in BETWEEN
is not safe. Switch to DECIMAL comparison when the INT arguments
have different signedness.
INCORRECT ERROR.
Analysis
========
INSERT with ON DUPLICATE KEY UPDATE and REPLACE on a table
where a foreign key constraint is defined fail with an
incorrect 'duplicate entry' error rather than the foreign
key constraint violation error.
As part of the bug fix for BUG#22037930, a new flag
'HA_CHECK_FK_ERROR' was added to the check for non-fatal
errors, to manage FK errors based on the 'IGNORE' flag. For
INSERT with ON DUPLICATE KEY UPDATE and REPLACE queries, the
foreign key constraint violation error was marked as non-fatal,
even though IGNORE was not set. Hence execution continued with the
duplicate key processing, resulting in an incorrect error.
Fix:
===
Foreign key violation errors are now treated as non-fatal only when
IGNORE is set in the above-mentioned queries; otherwise the
appropriate foreign key violation error is reported.
1. The same message text is used for INSERT and INSERT IGNORE.
2. No new warnings in UPDATE IGNORE yet (a big change for 5.5).
Also, a commonly used expression is replaced with a
named constant.
This is a backport of the patch for MDEV-9653 (fixed earlier in 10.1.13).
The code in Item_func_case::fix_length_and_dec() did not
calculate max_length and decimals properly.
In case of any numeric result (DECIMAL, REAL, INT) a generic method
Item_func_case::agg_num_lengths() was called, which could erroneously result
in a DECIMAL item with max_length==0 and decimals==0, so the constructor of
Field_new_decimal tried to create a field of DECIMAL(0,0) type,
which caused a crash.
Unlike Item_func_case, the code responsible for merging attributes in
Item_func_coalesce::fix_length_and_dec() works fine: it has specific execution
branches for all distinct numeric types and correctly creates a DECIMAL(1,0)
column instead of DECIMAL(0,0) for the same set of arguments.
The fix does the following:
- Moves the attribute merging code from Item_func_coalesce::fix_length_and_dec()
to a new method Item_func_hybrid_result_type::fix_attributes()
- Removes the wrong code from Item_func_case::fix_length_and_dec()
and reuses fix_attributes() in both Item_func_coalesce::fix_length_and_dec()
and Item_func_case::fix_length_and_dec()
- Fixes count_real_length() and count_decimal_length() to take an array
of Items as an argument, instead of using Item::args directly.
This is needed for Item_func_case::fix_length_and_dec().
- Moves methods Item_func::count_xxx_length() from "public" to "protected".
- Removes Item_func_case::agg_num_lengths(), as it's not used any more.
- Additionally removes Item_func_case::agg_str_lengths(),
as it also was not used (dead code).
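A hedged sketch of the attribute merge with simplified attributes (not
the Item hierarchy): clamping the merged precision guarantees at least
DECIMAL(1,0) even when every argument is NULL:

    // Illustrative attribute merging for a CASE/COALESCE-style function.
    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Attrs { unsigned max_length; unsigned decimals; };

    // Merge argument attributes; clamp precision to at least 1 so that
    // a DECIMAL(0,0) field can never be requested (the crash above).
    Attrs fix_attributes(const std::vector<Attrs> &args) {
      Attrs r{0, 0};
      for (const auto &a : args) {
        r.max_length = std::max(r.max_length, a.max_length);
        r.decimals   = std::max(r.decimals, a.decimals);
      }
      r.max_length = std::max(r.max_length, r.decimals + 1u); // >= (1,0)
      return r;
    }

    int main() {
      std::vector<Attrs> all_null = {{0, 0}, {0, 0}}; // e.g. CASE of NULLs
      Attrs a = fix_attributes(all_null);
      std::cout << "DECIMAL(" << a.max_length << ","
                << a.decimals << ")\n"; // DECIMAL(1,0)
    }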
Special treatment for temporal values in
create_tmp_field_from_item().
The old code only did it when result_type() was STRING_RESULT,
but Item_cache_temporal::result_type() is INT_RESULT.
ANALYSIS:
=========
A LEX_STRING structure pointer is processed during the
validation of a stored program name. During this processing,
there is a possibility of a null pointer dereference.
FIX:
====
check_routine_name() is invoked by the parser with a
non-empty string as the SP name. To catch any potential calls
to check_routine_name() with a NULL value, a debug assert has
been added.
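A minimal sketch of the contract with a simplified LEX_STRING
(illustrative): the debug assert documents and enforces that the name
pointer is never NULL:

    // Illustrative contract check; LEX_STRING is a simplified stand-in.
    #include <cassert>
    #include <cstddef>
    #include <iostream>

    struct LEX_STRING { const char *str; size_t length; };

    bool check_routine_name(const LEX_STRING *name) {
      // Parser contract: callers always pass a non-NULL name. The assert
      // turns a potential null-pointer dereference in debug builds into
      // an immediate, diagnosable failure.
      assert(name != nullptr && name->str != nullptr);
      if (name->length == 0) return false; // empty name: reject
      // ... further validation (trailing spaces, length limits) ...
      return true;
    }

    int main() {
      LEX_STRING ok{"my_proc", 7};
      std::cout << std::boolalpha << check_routine_name(&ok) << "\n"; // true
    }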
FAILURES
Analysis:
=========
The test script does not ensure that "assert_grep.inc" is
called only after the 'Disk is full' error has been written to
the error log.
The test checks for the "Queueing master event to the relay log"
state, but this state is set before 'queue_event' is invoked,
and the actual 'Disk is full' error happens at a much lower level.
It can happen that we reset the debug point
before the actual disk-full simulation occurs, and the
"Disk is full" message then never appears in the error log.
To guarantee correct behaviour we need a mechanism whereby,
after writing the "Disk is full" error message into the error
log, we signal the test to execute SSS and only then reset
the debug point, so that the test is deterministic.
Fix:
===
Added debug sync point to make script deterministic.
Item_func_ifnull::date_op() and Item_func_coalesce::date_op() could
erroneously return 0000-00-00 instead of NULL when get_date()
was called with the TIME_FUZZY_DATES flag, e.g. from LEAST().
INTERVALS
ISSUE:
------
Some string functions return one of their parameters, or a
combination of them, as their result. Here the resultant
string's charset could be incorrectly set to that of the
chosen parameter.
This results in incorrect behavior when an ASCII string is
expected.
SOLUTION:
---------
Since an ASCII string is expected, val_str_ascii should
explicitly convert the string.
Part of the fix is a backport of Bug#22340858 for mysql-5.5
and mysql-5.6.
FULL
Bug#21753696: MAKE SHOW SLAVE STATUS NON BLOCKING IF IO
THREAD WAITS FOR DISK SPACE
Problem:
========
Currently SHOW SLAVE STATUS blocks if the IO thread is waiting
for disk space. This makes automation tools that verify
server health block on taking the relevant action, and
eventually SHOW SLAVE STATUS calls pile up.
Analysis:
=========
SHOW SLAVE STATUS hangs on mi->data_lock if a relay log write
is waiting for free disk space while holding mi->data_lock.
mi->data_lock is needed to protect the format description
event (mi->format_description_event), which is accessed by
clients running FLUSH LOGS and by the slave IO thread. Note that
relay log writes don't need to be protected by
mi->data_lock; LOCK_log is used to protect the relay log between
the IO and SQL threads (see MYSQL_BIN_LOG::append_event). The
code takes mi->data_lock to protect
mi->format_description_event during relay log rotation, which
might get triggered right after a relay log write.
Fix:
====
Release the data_lock just for the duration of the write into
the relay log.
A change was made to ensure the following lock order is
maintained to avoid deadlocks:
data_lock, LOCK_log
data_lock is held during relay log rotations to protect
the description event.
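A hedged sketch of the locking discipline with std::mutex (the server
uses mysql_mutex_t): the potentially blocking write holds only
LOCK_log, and whenever both locks are needed they are taken as
data_lock, then LOCK_log:

    // Illustrative lock-ordering discipline; not the relay-log code.
    #include <mutex>

    std::mutex data_lock; // protects mi->format_description_event
    std::mutex LOCK_log;  // protects the relay log between IO/SQL threads

    void write_event_to_relay_log() {
      // The write may block on a full disk, so data_lock must NOT be
      // held here; LOCK_log alone protects the relay log.
      std::lock_guard<std::mutex> log_guard(LOCK_log);
      // ... append event, possibly waiting for free disk space ...
    }

    void rotate_relay_log() {
      // Rotation touches the format description event, so it needs both
      // locks, always acquired in the order: data_lock, then LOCK_log.
      std::lock_guard<std::mutex> data_guard(data_lock);
      std::lock_guard<std::mutex> log_guard(LOCK_log);
      // ... rotate and update the format description event ...
    }

    int main() {
      write_event_to_relay_log();
      rotate_relay_log(); // SHOW SLAVE STATUS can take data_lock meanwhile
    }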
REPLICATION
Problem: In RBR mode, merge table updates are not successfully applied in a
cascading replication setup.
Analysis & Fix: Every type of row event is preceded by one or more
table_map_log_events that give information about all the tables involved in
the row event. The server maintains the list in RPL_TABLE_LIST and goes
through all the tables, checking for compatibility between master and slave.
Before checking for compatibility, it calls 'open_tables()', which takes the
list of all tables that need to be locked and opened. In RBR, because of the
Table_map_log_event, we already have all the tables, including base tables,
in the list. But open_tables(), being a generic call, takes care of appending
base tables if the list contains merge tables. The current replication layer
logic assumes that these tables (TABLE_LIST type objects) are always added at
the end of the list. The replication layer maintains a count of tables
(tables_to_lock_count) that need to be verified for the compatibility check,
runs through only that many tables from the list, and skips the rest of the
objects in the linked list. But this assumption is wrong.
open_tables()->..->add_children_to_list() adds base tables to the list
immediately after seeing the merge table in the list.
For example: If the list passed to open_tables() is t1->t2->t3 where t3 is a
merge table (and t1 and t2 are base tables), it adds t1'->t2' to the list
after t3. The new table list looks like t1->t2->t3->t1'->t2'. It looks as if
the children were added at the end of the list, but that is a coincidence. If
the list passed to open_tables() is t3->t1->t2 where t3 is the merge table
(and t1 and t2 are base tables), the new prepared list will be
t3->t1'->t2'->t1->t2, where t1' and t2' are the TABLE_LIST objects added by
the add_children_to_list() call, which the replication layer should not look
into. Here tables_to_lock_count does not help, as the objects are added in
the middle of the list.
Fix: After investigating the add_children_to_list() logic (which is called
from open_tables()), there is no flag/logic in it to skip adding the children
to the list even if the children are already included in the table list.
Hence, to fix the issue, logic is added in the replication layer to skip
children in the list by checking whether 'parent_l' is non-null. If a table
is a child, we skip the 'compatibility' check for it.
This patch also keeps the 'tables_to_lock_count' logic for performance
reasons: if there are any children at the end of the list, they can easily be
skipped directly by stopping the loop with the tables_to_lock_count check.
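A minimal sketch of the skip with a simplified TABLE_LIST
(illustrative, not the server's structure): children appended by
add_children_to_list() carry a non-null parent_l, so the compatibility
loop can recognize and skip them wherever they appear in the list:

    // Simplified TABLE_LIST walk for the slave's compatibility check.
    #include <iostream>

    struct TABLE_LIST {
      const char *alias;
      TABLE_LIST *parent_l = nullptr;    // non-null for merge children
      TABLE_LIST *next_global = nullptr;
    };

    void check_compatibility(TABLE_LIST *tables, int tables_to_lock_count) {
      for (TABLE_LIST *tl = tables; tl && tables_to_lock_count > 0;
           tl = tl->next_global, --tables_to_lock_count) {
        if (tl->parent_l)  // child added by add_children_to_list()
          continue;        // skip: replication must not verify it
        std::cout << "verify " << tl->alias << "\n";
      }
    }

    int main() {
      // List t3 -> t1' -> t2' -> t1 -> t2 (children in the middle,
      // as described above).
      TABLE_LIST t2{"t2"}, t1{"t1", nullptr, &t2};
      TABLE_LIST t3{"t3"};               // the merge table itself
      TABLE_LIST c2{"t2'", &t3, &t1}, c1{"t1'", &t3, &c2};
      t3.next_global = &c1;
      check_compatibility(&t3, 5);       // verifies t3, t1, t2 only
    }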