Description:
THREAD_CONCURRENCY is deprecated, but no deprecation
warning message is issued when this variable is set at
server startup.
Analysis:
This variable is specific to Solaris 8 and earlier systems
and is ignored on all other platforms. However, since many
customers on platforms other than Solaris still have this
variable in their configuration files, it is important to
issue a deprecation warning.
Fix:
A deprecation warning message is now issued when THREAD_CONCURRENCY is set.
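As a rough illustration, a minimal sketch of emitting such a warning at
startup, assuming a flag set by option parsing (the flag name and the
message text are illustrative, not the actual patch):

    /* Hypothetical sketch, not the actual patch. Assumes option parsing
       records whether --thread_concurrency was given explicitly. */
    if (thread_concurrency_specified)   /* illustrative flag name */
      sql_print_warning("'--thread_concurrency' is deprecated and is "
                        "ignored on this platform; it will be removed "
                        "in a future release.");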
BACKGROUND:
This bug is a follow-up to Bug#16368875.
The assertion failure happens because the SQL layer does not
promote the key to PRIMARY KEY, while InnoDB treats it as a
PRIMARY KEY.
ANALYSIS:
Here we are trying to create an index on a POINT (GEOMETRY)
column. POINT is a kind of BLOB (since GEOMETRY is a
subclass of BLOB).
In general, we cannot create an index over a field of the
GEOMETRY family unless we specify the length of the
key part (as with BLOB fields).
The only exception is the POINT field type, whose maximum
column size is 25 bytes. The problem is that the field is
not treated as a PRIMARY KEY when we create an index on a
POINT column using its maximum column size as the key part
prefix. The fix allows an index on a POINT column to be
treated as a PRIMARY KEY.
FIX:
The patch for Bug#16368875 is extended to take the GEOMETRY
data type, POINT in particular, into account so that such a
key is considered a PRIMARY KEY in the SQL layer.
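For illustration, a hedged sketch of the promotion condition; the
KeyPart struct and the constant below are invented for readability,
not actual mysql-server identifiers:

    /* Hypothetical sketch; names are illustrative. */
    struct KeyPart {
      unsigned length;            /* indexed prefix length            */
      unsigned field_key_length;  /* full key length of the column    */
      bool     field_is_nullable;
      bool     is_point;          /* column type is POINT             */
    };
    static const unsigned MAX_POINT_LEN= 25;  /* max stored POINT size */

    /* A key part blocks PRIMARY KEY promotion if it is nullable or a
       true prefix of its column; after the fix a 25-byte part over a
       POINT column counts as full length. */
    static bool key_part_blocks_pk_promotion(const KeyPart &kp)
    {
      if (kp.field_is_nullable)
        return true;
      if (kp.length == kp.field_key_length)
        return false;
      if (kp.is_point && kp.length == MAX_POINT_LEN)
        return false;
      return true;
    }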
server initialization
The ER() macro was used during server initialization, but it refers
to current_thd, which is not available that early.
Print the error to the error log in the "lc-messages" locale, and
avoid a duplicate error message during server initialization.
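A minimal sketch of the safe pattern, assuming the thread-independent
ER_DEFAULT() lookup (the error code used here is only an example):

    /* Illustrative sketch: during early startup current_thd is NULL,
       so ER(...), which resolves messages through the current THD's
       locale, must not be used. Resolve through the server default
       locale and write to the error log instead. */
    sql_print_error("%s", ER_DEFAULT(ER_OUT_OF_RESOURCES)); /* example */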
CORRUPTS FRM
Analysis:
---------
ALTER TABLE on a partitioned table resulted in the wrong
engine being written into the table's FRM file and displayed
in SHOW CREATE TABLE.
prep_alter_part_table() modifies the partition_info object
of the TABLE instance representing the old version of the table.
If the ALTER TABLE ENGINE statement fails, the partition_info
object for the TABLE contains the altered storage engine name.
The SHOW CREATE TABLE uses the TABLE object to display the table
information, hence displays incorrect storage engine for the table.
Also a subsequent successful ALTER TABLE operation will write the
incorrect engine information into the FRM file.
Fix:
---
A copy of the partition_info object is created before modification,
so that the changes do not affect the original partition_info object
if the ALTER TABLE fails. (Backported part of the code provided as
the fix for bug#14156617 in mysql-5.6.6.)
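As a rough sketch of that shape (partition_info::get_clone() is the
existing cloning helper; the surrounding control flow here is
simplified and illustrative):

    /* Simplified sketch: mutate a clone so a failed ALTER cannot leave
       the cached TABLE object describing a storage engine that was
       never committed. */
    partition_info *alt_part_info= table->part_info->get_clone();
    if (!alt_part_info)
      DBUG_RETURN(TRUE);                  /* out of memory */
    /* ... apply all ALTER-driven changes to alt_part_info only ...
       On failure, table->part_info is intact, so SHOW CREATE TABLE and
       any later ALTER see the original engine. */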
Backport of the fix:
: Bug 18017820: BISON 3 BREAKS MYSQL BUILD
: ========================================
:
: The source of the reported problem is the removal of a few
: deprecated features in Bison 3.x:
: * YYPARSE_PARAM macro (use the %parse-param bison directive instead),
: * YYLEX_PARAM macro (use %lex-param instead),
:
: The fix removes obsolete macro calls and introduces use of
: %parse-param and %lex-param directives.
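The replacement in the grammar file looks roughly like this (a sketch;
the exact declarations in sql_yacc.yy may differ):

    /* Old (removed by the fix): deprecated in Bison 2.x, gone in 3.x */
    #define YYPARSE_PARAM yythd
    #define YYLEX_PARAM   yythd

    /* New: declare the extra argument via directives, so yyparse() and
       yylex() receive the THD* explicitly. */
    %parse-param { class THD *thd }
    %lex-param   { class THD *thd }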
- Filesort has an optimization where it reads only columns that are
needed before the sorting is done.
- When ref(_or_null) is picked by the join optimizer, it may remove parts
of the WHERE clause that are guaranteed to be true.
- However, if we use a quick select, we must put all of the range columns
into the read set. Not doing so may cause us to fail to detect the end
of the range.
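A hedged sketch of what "putting the range columns into the read set"
means (the loop uses common handler/bitmap helpers, but is illustrative
rather than the actual patch):

    /* Illustrative sketch: make sure every column of the index used by
       the quick (range) select is read, so the end-of-range check does
       not compare uninitialized bytes. */
    if (select && select->quick)
    {
      KEY *key= table->key_info + select->quick->index;
      for (uint i= 0; i < key->key_parts; i++)
        bitmap_set_bit(table->read_set,
                       key->key_part[i].field->field_index);
    }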
That particular part of the slave's connect-to-master sequence was
missing code to handle retries in case of network errors. The same
problem is present in MySQL 5.5, but fixed in MySQL 5.6.
Fixed with this patch by adding the code (mostly identical to MySQL
5.6), and also adding a test case.
I checked other queries done towards master during slave connect, and they now
all seem to handle reconnect in case of network failures.
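The retry pattern, as a rough sketch (helper names follow sql/slave.cc,
but the signatures are simplified and should be treated as assumptions):

    /* Hedged sketch: re-issue a query towards the master, reconnecting
       on transient network errors instead of failing the IO thread. */
    while (mysql_real_query(mysql, query, (ulong) strlen(query)))
    {
      uint err= mysql_errno(mysql);
      if (err != CR_SERVER_GONE_ERROR && err != CR_SERVER_LOST)
        return 1;                      /* permanent error: give up */
      if (io_slave_killed(thd, mi) ||
          try_to_reconnect(thd, mysql, mi))  /* simplified signature */
        return 1;                      /* killed or retries exhausted */
    }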
NON-EXISTS RECORDS
Problem:
========
In RBR replication, the master deletes a record that does
not exist on the slave. When the slave tries to apply the
Delete_rows_log_event from the master, it results in an
assert on the slave.
Analysis:
========
This problem exists not only with the Delete_rows event but
also with the Update_rows event. Trying to update a
non-existent row on the slave from the master causes the
same assert. The assert occurs only for tables that do not
have primary keys and therefore require a sequential scan to
locate a record. The bug occurs only with the InnoDB engine,
not with MyISAM.
When an update or delete of rows is executed on the slave
for a table that has no primary key, the target record is
stored in a buffer named table->record[0] and the same data
is copied to table->record[1], so that during the sequential
scan table->record[0] can be reloaded with data fetched from
the table and compared against table->record[1]. In the
special case where the record does not exist on the slave,
the scan results in EOF; in that case we reinitialize the
scan and compare record[0] with record[1], which are
identical. This comparison is incorrect: since both buffers
hold the same data, record_compare() reports that the record
was found, and we go ahead and try to update/delete a
non-existent row. EOF from the scan means no data was found,
so there is no need to call record_compare() at all.
Fix:
===
Avoid comparison of records on EOF.
sql/log_event.cc:
Avoid record comparison on end of file.
sql/log_event_old.cc:
Avoid record comparison on end of file.
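A minimal sketch of the guard (simplified from the Rows_log_event
row-lookup loop; the control flow and return value are illustrative):

    /* Illustrative sketch: EOF from the table scan means the row is
       not on the slave, so report the error instead of falling through
       to record_compare() on two identical buffers. */
    for (;;)
    {
      int error= table->file->rnd_next(table->record[0]);
      if (error == HA_ERR_END_OF_FILE)
        return error;                  /* row absent: do NOT compare */
      if (error)
        return error;                  /* other handler error */
      if (!record_compare(table))
        break;                         /* genuine match found */
    }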
- When the range optimizer cannot fit the lookup value into the
[VAR]CHAR(n) column, it should produce:
= "Impossible range" for equalities
= "no range" for non-equalities.
MDEV-6099 Bad results for DATE_ADD(.., INTERVAL 2000000000000000000.0 SECOND)
MDEV-6097 Inconsistent results for CAST(int,decimal,double AS DATETIME)
MDEV-6100 No warning on CAST(9000000 AS TIME)
We have to run the derived table prepare before the unique table check in order to mark the derived table (in this case the unique table check can turn that table into a materialized one).
- When the optimizer chose LooseScan, make_join_readinfo() should
use the index that was chosen for LooseScan, and should not try
to find a better (shorter) index.
Problem:
The Load_log_event::print_query() function does not escape the file
name for the "LOAD DATA INFILE" statement.
Analysis:
When we have "'" in our file name for "LOAD DATA INFILE" statement,
Load_log_event::print_query() function does not put escape character
in our file name.
This one result that when we show binary-log, we get file name without
escape character.
Solution:
To escape "'" in the file name, use pretty_print_str() instead of a
simple memcpy() when writing the file name.
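Roughly, the change has this shape (a sketch; pretty_print_str() is the
existing escaping helper in log_event.cc, but the surrounding buffer
handling here is simplified):

    /* Before (sketch): raw copy, so an embedded ' breaks the output */
    memcpy(pos, fname, fname_len);
    pos+= fname_len;
    /* After (sketch): write the name through the helper, which emits a
       quoted string with characters such as ' escaped */
    pretty_print_str(&cache, fname, fname_len);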
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both AGG(DISTINCT)
and MIN()/MAX() functions, the result values of MIN()/MAX()
are set improperly.
When the query has AGG(DISTINCT), end_select is set to
end_send_group. "end_send_group" keeps aggregating until it
sees a record from the next group, and then it sends out the
result row of that group.
Since the query also has MIN()/MAX() and loose index scan is
used, the values of MIN()/MAX() are set as part of the loose
index scan itself. Setting the MIN()/MAX() values there
overwrites the values computed in end_send_group, which
causes invalid results.
For such queries to work, the loose index scan should stop
performing MIN()/MAX() aggregation and let end_send_group do
it. But in the current design the loose index scan can
produce only one row per group key; if we have both MIN()
and MAX(), it would have to return two records, which is not
possible because the interface has to use the common buffer
record[0] for both records at a time.
SOLUTION:
---------
For such queries to work, we would need a new interface for
the loose index scan. Hence, do not choose loose index scan
for such cases: a new rule, SA7, is introduced to take care
of this.
SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
MIN/MAX() functions then loose index scan access
method is not used."
mysql-test/r/group_min_max.result:
Expected result.
mysql-test/t/group_min_max.test:
1. Test with various combinations of AGG(DISTINCT) and
MIN()/MAX() functions.
2. Corrected the plan for old queries.
sql/opt_range.cc:
A new rule SA7 is introduced.
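A hedged sketch of the SA7 test as it might look in the loose index
scan cost analysis (the iteration shape and the has_with_distinct()
helper are assumptions; only the rule itself comes from this fix):

    /* Illustrative sketch of rule SA7: reject loose index scan when
       the query has both AGG(DISTINCT ...) and MIN()/MAX(). */
    bool has_agg_distinct= false, has_min_max= false;
    for (Item_sum **func= join->sum_funcs; *func; func++)
    {
      Item_sum *item= *func;
      has_agg_distinct|= item->has_with_distinct();  /* assumed helper */
      has_min_max|= (item->sum_func() == Item_sum::MIN_FUNC ||
                     item->sum_func() == Item_sum::MAX_FUNC);
    }
    if (has_agg_distinct && has_min_max)
      DBUG_RETURN(NULL);            /* SA7: loose scan not applicable */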
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and it has both
AGG(DISTINCT) and MIN()/MAX()functions. Then, result values
of MIN/MAX() is set improperly.
When query has AGG(DISTINCT) then end_select is set to
end_send_group. "end_send_group" keeps doing aggregation
until it sees a record from next group. And, then it will
send out the result row of that group.
Since query also has MIN()/MAX() and loose index scan is
used, values of MIN/MAX() are set as part of loose index
scan itself. Setting MIN()/MAX() values as part of loose
index scan overwrites values computed in end_send_group.
This caused invalid result.
For such queries to work loose index scan should stop
performing MIN/MAX() aggregation. And, let end_send_group to
do the same. But according to current design loose index
scan can produce only one row per group key. If we have both
MIN() and MAX() then it has to give two records out. This is
not possible as interface has to use common buffer
record[0]! for both records at a time.
SOLUTIONS:
----------
For such queries to work we need a new interface for loose
index scan. Hence, do not choose loose_index_scan for such
cases. So a new rule SA7 is introduced to take care of the
same.
SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
MIN/MAX() functions then loose index scan access
method is not used."
SHOW PROCESSLIST, SHOW BINLOGS
Problem: A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way:
Thread 1: Dump thread (the slave is reconnecting, so on the
          master a new dump thread is trying to kill zombie
          dump threads). It acquired the zombie thread's
          LOCK_thd_data and is about to acquire
          mysys_var->current_mutex (which is LOCK_log).
Thread 2: Application thread executing SHOW BINLOGS;
          acquired LOCK_log and is about to acquire
          LOCK_index.
Thread 3: Application thread executing PURGE BINARY LOGS;
          acquired LOCK_index and is about to acquire
          LOCK_thread_count.
Thread 4: Application thread executing SHOW PROCESSLIST;
          acquired LOCK_thread_count and is about to acquire
          the zombie dump thread's LOCK_thd_data.
Deadlock cycle:
Thread 1 -> Thread 2 -> Thread 3 -> Thread 4 -> Thread 1
The same deadlock was also observed when Thread 4 was executing
'SELECT * FROM information_schema.processlist' and had acquired
LOCK_thread_count, about to acquire the zombie dump thread's
LOCK_thd_data.
Analysis:
There are four locks involved in the deadlock. LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count, LOCK_index are global mutexes
whereas LOCK_thd_data is local to a thread.
We can divide these four locks in two groups.
Group 1 consists of LOCK_log and LOCK_index and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of other two mutexes
LOCK_thread_count, LOCK_thd_data and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order to follow in the
MySQL system when it comes to locks across these two groups. In the
problematic example above, there is no problem in the way each thread
acquires its locks if you look at each thread individually; but if
you combine all four threads, they end up in a deadlock.
Fix:
Since each thread takes its locks in a legitimate order, this patch
changes the duration of the locks in Thread 4 to break the deadlock.
Before the patch, Thread 4's mysqld_list_processes() function
('SHOW PROCESSLIST') acquired LOCK_thread_count for the complete
duration of the function, and also acquired/released each thread's
LOCK_thd_data.
LOCK_thread_count protects addition and deletion of threads in the
global threads list. While SHOW PROCESSLIST loops through all
existing threads, it is a problem if a thread exits, but there is no
problem if a new thread is added to the system. Hence a new mutex,
LOCK_thd_remove, is introduced to protect deletion of a thread from
the global threads list. All threads that are exiting should acquire
LOCK_thd_remove followed by LOCK_thread_count. (They must still take
LOCK_thread_count because other places in the code still assume that
thread exit is protected by LOCK_thread_count; this fix changes only
the 'SHOW PROCESSLIST' query logic.)
(E.g., the unlink_thd logic will be protected by LOCK_thd_remove.)
The logic of mysqld_list_processes (or fill_schema_processlist) is
now protected by 'LOCK_thd_remove' instead of 'LOCK_thread_count'.
Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
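A minimal sketch of the new iteration protocol (simplified; the loop
shape is illustrative, not the exact patch):

    /* Illustrative sketch: hold LOCK_thd_remove across the iteration
       so no thread can be deleted, while additions remain possible;
       take each thread's LOCK_thd_data only briefly. */
    mysql_mutex_lock(&LOCK_thd_remove);
    I_List_iterator<THD> it(threads);
    THD *tmp;
    while ((tmp= it++))
    {
      mysql_mutex_lock(&tmp->LOCK_thd_data); /* safe: deletion blocked */
      /* ... read the per-thread state for the processlist row ... */
      mysql_mutex_unlock(&tmp->LOCK_thd_data);
    }
    mysql_mutex_unlock(&LOCK_thd_remove);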