GOES AWAY, MYSQL QUITS WORKING.
Analysis:
-----------------
The issue in this bug and in bug 11907705 is that a socket file or
FIFO file is set as the general log file on the command line while
starting the server. Currently, only a regular file can be used for
the general log. Instead of reporting an error, the server opens the
provided file for writing and continues. This causes the issues
mentioned in the bug reports.
As mentioned, these issues are seen only when a non-regular file is
set for the general log on the command line at server startup. If the
general log file is set to a non-regular file through the system
variable general_log_file, an error is reported. The same issues can
also occur with the slow query log file if it is set to a non-regular
file.
Fix:
-----------------
Currently, if the server fails to open a log file at startup, it
reports an error, disables logging to the file and continues. To fix
the reported issue, the code is modified to check whether the file is
a regular file before opening it. If it is not a regular file, an
error is written to the error log and logging to the file is disabled.
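A minimal sketch of such a check (illustrative only; the function name
is hypothetical, not the actual server code):

  #include <sys/stat.h>

  /* Returns true if 'path' either does not exist yet or is a regular
     file, so it is safe to (re)open it as a log file. */
  static bool is_regular_or_missing(const char *path)
  {
    struct stat st;
    if (stat(path, &st) != 0)
      return true;               /* does not exist yet: will be created */
    return S_ISREG(st.st_mode);  /* false for sockets, FIFOs, devices */
  }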
Add some code adapted from 5.6 to check for "real" DTrace. If found,
and the system is Linux, we simply set DTRACE to OFF. Otherwise
nothing changes. The build will still break if one manually sets
DTRACE to ON.
IN RECOVERY
During redo log processing, the data dictionary is not yet available.
dict_find_table_by_space() must check for this to prevent a SEGV.
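A hedged sketch of the guard, with simplified stand-ins for the real
dictionary types (only the NULL check reflects the actual fix):

  #include <cstddef>

  struct dict_sys_t   { /* stand-in for the dictionary cache */ };
  struct dict_table_t { /* stand-in for a cached table object */ };

  static dict_sys_t *dict_sys = NULL;   /* not booted during redo apply */

  /* Return "not found" instead of dereferencing the dictionary while
     it is unavailable. */
  static dict_table_t *find_table_by_space(unsigned long space_id)
  {
    if (dict_sys == NULL)
      return NULL;                      /* recovery: no dictionary yet */
    (void) space_id;                    /* normal lookup omitted */
    return NULL;
  }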
rb#5678, approved by Jimmy.
Problem:
If the number of leading zeroes in the fractional part of a decimal
number exceeds 45, the MOD operation on it fails.
Analysis:
Currently there is a miscalculation of the fractional part for very
small decimals in do_div_mod().
For example, for 0.000...0003 (45 zeroes after the decimal point):
the length of the integer part becomes -5 (each buffer word holds 9
digits; since there are 45 leading zeroes, the integer part spans 5
words, and the length is negative because those zeroes belong to the
fractional part).
The fractional part is the number of digits after the decimal point,
which is 46, and is therefore rounded up to the nearest multiple of 9,
which is 54. So the length of the resulting fractional part becomes 6
words.
Because of this, the combined length of the integer part and the
fractional part exceeds the maximum allocated length of 9 words, and
the operation fails.
Solution:
A negative integer-part length indicates that there are leading zeroes
in the fractional part. In that case the stop1 pointer should be set
based not only on frac0 but also on intg0, because the destination
buffer will be filled with 0's for the length of intg0.
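The word arithmetic from the example above, as a self-contained sketch
(the variable names mirror the description; this is not the actual
decimal.c code):

  #include <cstdio>

  int main()
  {
    const int DIG_PER_WORD   = 9;   /* decimal digits per buffer word */
    const int leading_zeroes = 45;  /* zeroes after the decimal point */
    const int frac_digits    = 46;  /* the 45 zeroes plus the digit 3 */
    const int result_words   = 9;   /* maximum allocated length       */

    int intg0 = -(leading_zeroes / DIG_PER_WORD);                /* -5 */
    int frac0 = (frac_digits + DIG_PER_WORD - 1) / DIG_PER_WORD; /*  6 */

    /* Without accounting for the negative intg0, the zero-filled
       integer words plus the fractional words need 5 + 6 = 11 words,
       which exceeds the 9-word buffer. */
    printf("naive need: %d words (buffer holds %d)\n",
           -intg0 + frac0, result_words);

    /* The fix bounds the write pointer by frac0 + intg0 as well. */
    printf("adjusted stop offset: %d words\n", frac0 + intg0);
    return 0;
  }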
Problem:
When a unique secondary index is scanned for duplicate checking, gap
locks were not taken if the transaction had an isolation level
<= READ COMMITTED. This change was done while fixing Bug #16133801
UNEXPLAINABLE INNODB UNIQUE INDEX LOCKS ON DELETE + INSERT WITH SAME
VALUES (rb#2035). Because of this, the duplicate check logic failed,
resulting in duplicate values in the unique secondary index.
Solution:
When a unique secondary index is scanned for duplicate checking, gap locks
must be taken irrespective of the transaction isolation level. This is
achieved by reverting rb#2035.
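A hedged sketch of the essential change (simplified stand-ins for the
InnoDB lock-mode constants; the real code lives in the duplicate-scan
path of the insert code):

  /* Next-key lock covers the record and the gap before it. */
  enum dup_scan_lock { LOCK_ORDINARY,     /* record + gap */
                       LOCK_REC_NOT_GAP   /* record only  */ };

  /* Before the revert, isolation level <= READ COMMITTED used
     LOCK_REC_NOT_GAP here, leaving the gap unprotected while the
     duplicate check ran. After reverting rb#2035 the gap lock is
     taken unconditionally: */
  static dup_scan_lock duplicate_scan_lock_mode(int /* isolation_level */)
  {
    return LOCK_ORDINARY;   /* irrespective of the isolation level */
  }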
rb#5910 approved by Jimmy
Description:
THREAD_CONCURRENCY is deprecated, but there is no deprecation warning
message when this variable is set while starting the server.
Analysis:
This variable is specific to Solaris 8 and earlier systems and is
ignored on all other platforms. But since many customers on platforms
other than Solaris still have this variable in their configuration
files, it is important to have a deprecation warning.
Fix:
THREAD_CONCURRENCY deprecation warning message is added.
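A minimal sketch of emitting such a startup warning (the function name
and the message wording are assumptions, not the exact server code):

  #include <cstdio>

  /* Warn at startup when the deprecated option was given on the
     command line or in a configuration file. */
  static void warn_deprecated_thread_concurrency(bool option_was_set)
  {
    if (option_was_set)
      fprintf(stderr, "Warning: THREAD_CONCURRENCY is deprecated and "
                      "will be removed in a future release.\n");
  }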
Mysqldump overflows a stack buffer when copying a table name from the
command-line arguments, resulting in stack corruption and the ability
to execute arbitrary code.
Fix: Check that the length of every positional argument passed to
mysqldump is smaller than NAME_LEN.
Note: Mysqldump relies heavily on the fact that names of database
objects (databases, tablespaces, tables, etc.) are limited to a small
size (currently 64).
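A hedged sketch of such a length check (NAME_LEN is simplified here to
the 64-character identifier limit mentioned above; the real constant
and surrounding code differ):

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  enum { NAME_LEN = 64 };   /* simplified stand-in for the real limit */

  /* Reject over-long positional arguments before they are copied
     into fixed-size name buffers. */
  static void check_positional_args(int argc, char **argv)
  {
    for (int i = 1; i < argc; i++)
    {
      if (strlen(argv[i]) >= NAME_LEN)
      {
        fprintf(stderr, "Invalid (too long) argument: %s\n", argv[i]);
        exit(1);
      }
    }
  }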
The test case makes use of the DEBUG_SYNC facility. Furthermore, since
it needs synchronization on internal threads (the dump and SQL
threads), the server code has DEBUG_SYNC commands internally deployed
and activated through the DBUG_EXECUTE_IF macro. The internal
DEBUG_SYNC commands are then controlled from the test case through the
DEBUG variable.
There were three problems around the DEBUG + DEBUG_SYNC facility
usage:
1. When signaling the SQL thread to continue, the test would
immediately reset the DEBUG_SYNC variable. This meant that the SQL
thread might lose the signal and continue to wait forever;
2. A similar scenario happened with the dump thread on the master.
This thread was instructed to wait, and later signaled to continue,
but immediately afterwards the DEBUG_SYNC variable would be reset.
This could lead to the dump thread missing the signal and waiting
forever;
3. The test was not cleaning itself up with respect to the
instrumentation of the dump thread. This would leave the
conditional execution of an internal DEBUG_SYNC command active
(through the usage of DBUG_EXECUTE_IF).
We fix #1 and #2 by waiting for the threads to receive the signal and
only then issuing the reset. We fix #3 by resetting the DEBUG
variable, thus deactivating the dump thread's internal DEBUG_SYNC
command.
BACKGROUND:
This bug is a followup on Bug#16368875.
The assertion failure happens because in the SQL layer the key does
not get promoted to PRIMARY KEY, but InnoDB takes it as the PRIMARY
KEY.
ANALYSIS:
Here we are trying to create an index on a POINT (GEOMETRY) column.
POINT is a kind of BLOB, since GEOMETRY is a subclass of BLOB.
In general, we cannot create an index over a GEOMETRY-family field
unless we specify the length of the key part (as with BLOB fields).
The only exception is the POINT field type, whose maximum column size
is 25. The problem is that the field is not treated as a PRIMARY KEY
when we create an index on a POINT column using its maximum column
size as the key part prefix. The fix allows an index on a POINT
column to be treated as a PRIMARY KEY.
FIX:
The patch for Bug#16368875 is extended to take the GEOMETRY data type,
and POINT in particular, into account, so that it is considered a
PRIMARY KEY in the SQL layer.
Bug#16415173 CRLF INSTEAD OF LF IN SQL-BENCH SCRIPTS
Corrects permissions and converts Windows-style line endings to UNIX
style on some files.
Fixes permissions on installed ini files.
(MySQL 5.5 version)
WITH CERTAIN MAX_HEAP_TABLE_SIZE VALUES
Description:
When the system variable 'max_heap_table_size' is set to 20GB, the
server crashes on creation of a temporary table or a table using the
MEMORY storage engine.
Analysis:
The variable 'max_records' determines the amount of heap allocated for
the records of the table. Its value is derived from the
'max_heap_table_size' variable. 'records_in_block' in turn uses
max_records to determine the number of records per block.
When 'max_heap_table_size' is set to 20GB, 'records_in_block' is
calculated to be 2^28.
The size of the block, determined by multiplying 'records_in_block'
and 'recbuffer', overflows, and hence the value becomes zero. As a
result, zero bytes of heap are allocated for the table, which leads to
a server crash when the table is accessed.
Fix:
The variables 'records_in_block' and 'recbuffer' are cast to
'unsigned long' while calculating the size of the block.
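A self-contained sketch of the overflow, using 2^28 from the analysis
and a hypothetical record size (not the actual heap-engine code;
assumes an LP64 platform where 'unsigned long' is 64-bit):

  #include <cstdio>

  int main()
  {
    unsigned int records_in_block = 1U << 28; /* from the 20GB setting */
    unsigned int recbuffer        = 16;       /* hypothetical size     */

    /* The 32-bit multiplication wraps around to zero here ...         */
    unsigned int broken = records_in_block * recbuffer;

    /* ... while widening an operand first gives the intended size.    */
    unsigned long fixed = (unsigned long) records_in_block * recbuffer;

    printf("broken=%u fixed=%lu\n", broken, fixed);
    return 0;
  }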
CORRUPTS FRM
Analysis:
---------
ALTER TABLE on a partitioned table resulted in the wrong
engine being written into the table's FRM file and displayed
in SHOW CREATE TABLE.
prep_alter_part_table() modifies the partition_info object of the
TABLE instance representing the old version of the table.
If the ALTER TABLE ENGINE statement fails, the partition_info object
of the TABLE still contains the altered storage engine name.
SHOW CREATE TABLE uses the TABLE object to display the table
information and hence displays the incorrect storage engine.
A subsequent successful ALTER TABLE operation will also write the
incorrect engine information into the FRM file.
Fix:
---
A copy of the partition_info object is created before modification, so
that a failed ALTER TABLE cannot cause the original partition_info
object to be modified. (Backported part of the code provided as a fix
for bug#14156617 in mysql-5.6.6.)
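A hedged sketch of the copy-before-modify pattern (simplified
stand-ins; the real code clones partition_info inside
prep_alter_part_table()):

  #include <string>

  struct partition_info
  {
    std::string engine_name;
    partition_info *get_clone() const { return new partition_info(*this); }
  };

  /* Mutate only the clone; install it on success, discard it on
     failure, so the TABLE's original object is never corrupted. */
  static bool alter_partition_engine(partition_info *&table_part_info,
                                     const std::string &new_engine,
                                     bool alter_succeeds)
  {
    partition_info *work = table_part_info->get_clone();
    work->engine_name = new_engine;
    if (!alter_succeeds)
    {
      delete work;              /* the original stays untouched */
      return true;              /* report the error */
    }
    delete table_part_info;
    table_part_info = work;     /* commit the change */
    return false;
  }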
Backport of the fix:
: Bug 18017820: BISON 3 BREAKS MYSQL BUILD
: ========================================
:
: The source of the reported problem is the removal of a few
: deprecated things from Bison 3.x:
: * the YYPARSE_PARAM macro (use the %parse-param bison directive instead),
: * the YYLEX_PARAM macro (use %lex-param instead).
:
: The fix removes obsolete macro calls and introduces use of
: %parse-param and %lex-param directives.
DISCONNECT CON1 AND CON2
Problem:
The test suite/binlog/t/binlog_killed.test creates the connections
con1 and con2 but forgets to disconnect them and to wait until that
operation has finished at test end.
This mistake has the potential to harm subsequent tests in case these
tests depend on the content of the processlist.
Solution:
Added disconnect + wait_until_disconnected.inc
within the test cleanup.
NON-EXISTS RECORDS
Problem:
========
In RBR replication, the master deletes a record that does not exist on
the slave. When the slave tries to apply the Delete_rows_log_event
from the master, it results in an assert on the slave.
Analysis:
========
This problem exists not only with the Delete_rows event but with the
Update_rows event as well. Trying to update a non-existing row on the
slave from the master causes the same assert. The assert occurs only
for tables that do not have a primary key and therefore require a
sequential scan to locate a record. The bug occurs only with the
InnoDB engine, not with MyISAM.
When an update or delete of rows is executed on a slave table that has
no primary key, the updated record is stored in the buffer
table->record[0] and copied to table->record[1], so that during the
sequential scan table->record[0] can be reloaded with data fetched
from the table and compared against table->record[1]. In the special
case where the record does not exist on the slave side, the scan
results in EOF; we then reinitialize the scan and compare record[0]
with record[1], which are identical. This comparison is incorrect:
since both are the same, record_compare() reports that the record was
found, and we go ahead and try to update/delete a non-existing row.
If the scan results in EOF, no data was found, so there is no need to
call record_compare() at all.
Fix:
===
Avoid comparison of records on EOF.
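A minimal self-contained sketch of the EOF guard (everything here is a
simplified stand-in for the real scan-and-update loop in the slave
applier):

  #include <cstdio>

  enum scan_status { SCAN_ROW_FETCHED, SCAN_EOF };

  /* Stand-in for one step of the sequential table scan. */
  static scan_status scan_next() { return SCAN_EOF; /* empty table */ }

  /* Stand-in for record_compare(): record[0] vs record[1]. */
  static bool records_equal() { return true; /* identical buffers */ }

  int main()
  {
    /* Before the fix: EOF restarted the scan and still compared the
       two (identical) record buffers, "finding" a non-existent row.
       After the fix: EOF means the row does not exist, full stop. */
    if (scan_next() == SCAN_EOF)
      printf("row not found on slave: skip record_compare()\n");
    else if (records_equal())
      printf("row found: apply the update/delete\n");
    return 0;
  }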
SLOW/CRASHES SEMAPHORE
Problem:
There are 200,000 tables - fk_000001, fk_000002 ... fk_200000. All of
them are related to the same parent_table through a foreign key
constraint. When parent_table is loaded into the dictionary cache, all
the child tables are loaded as well. This takes a lot of time, and
since it happens while the dictionary latch is held, the scenario
leads to a "long semaphore wait" situation and the server gets killed.
Analysis:
A simple performance analysis showed that the slowness is caused by
the dict_foreign_find() function. It does a linear search on two
linked lists, table->foreign_list and table->referenced_list, looking
for a particular foreign key object with foreign->id as the key. It is
called twice for each foreign key object.
Solution:
Introduce red-black trees, table->foreign_rbt and
table->referenced_rbt, which act as indexes on table->foreign_list and
table->referenced_list respectively, using foreign->id as the key.
These rbt structures are used solely by dict_foreign_find().
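A hedged sketch of the idea, using std::map as the ordered index (the
server uses its own rb-tree implementation, not the STL; the types
here are simplified stand-ins):

  #include <cstddef>
  #include <list>
  #include <map>
  #include <string>

  struct dict_foreign { std::string id; };

  struct dict_table
  {
    std::list<dict_foreign*> foreign_list;            /* kept as before */
    std::map<std::string, dict_foreign*> foreign_rbt; /* index on ->id  */
  };

  /* O(log n) lookup instead of a linear scan over foreign_list. */
  static dict_foreign *find_foreign(dict_table &t, const std::string &id)
  {
    std::map<std::string, dict_foreign*>::const_iterator it =
        t.foreign_rbt.find(id);
    return it == t.foreign_rbt.end() ? NULL : it->second;
  }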
rb#5599 approved by Vasil
IN SSL_CTX_LOAD_VERIFY_LOCATIONS() AND OFF-BY-ONE PROBLEM IN
VOID CERTDECODER::GETDATE(DATETYPE DT) IN ASN.CPP
Description: Fixes corner cases in the yaSSL code. Refer to the bug
page for details.
MYSQLADMIN IN PROCESSES LIST
Description: Checking the process status (with ps -ef) while executing
"mysqladmin" with an old password and a new password on the command
line sporadically shows the new password in the process list.
Analysis: The old password is already masked by "mysqladmin", so
masking the new password in the same manner reduces the chance of
hitting the bug. This does not completely fix it: if the "ps -ef"
command catches mysqladmin before it masks the passwords, it will
still show both the old and the new passwords in the process list.
However, the chance of hitting this window is very small.
Fix: The new password is now masked in the same manner as the
--password argument.
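A minimal sketch of the usual client-side masking technique
(simplified; the function name is hypothetical). It would be called on
the new-password argv entry right after the value has been copied
away, just as is already done for --password:

  /* Overwrite a sensitive command-line argument in place so that it
     no longer shows up in /proc/<pid>/cmdline (and hence in ps). */
  static void mask_argument(char *arg)
  {
    while (*arg)
      *arg++ = 'x';
  }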
Problem:
The Load_log_event::print_query() function does not emit escape
characters in the file name of a "LOAD DATA INFILE" statement.
Analysis:
When the file name of a "LOAD DATA INFILE" statement contains a "'",
Load_log_event::print_query() does not emit an escape character for
it. As a result, when the binary log is displayed, the file name is
shown without the escape character.
Solution:
To emit the escape character when the file name contains a "'", use
pretty_print_str() to write the file name instead of a plain
memcpy().
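An illustrative stand-in for what a quoting helper like
pretty_print_str() has to do (the real helper lives in log_event.cc
and handles more escapes; this sketch covers only the quote case
described above):

  #include <cstdio>
  #include <string>

  /* Emit s as a single-quoted string, escaping embedded quotes
     instead of copying the bytes verbatim. */
  static std::string quote_file_name(const std::string &s)
  {
    std::string out = "'";
    for (std::string::size_type i = 0; i < s.size(); i++)
    {
      if (s[i] == '\'')
        out += '\\';            /* escape before the quote */
      out += s[i];
    }
    out += "'";
    return out;
  }

  int main()
  {
    std::printf("%s\n", quote_file_name("it's.txt").c_str()); /* 'it\'s.txt' */
    return 0;
  }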
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both AGG(DISTINCT) and
MIN()/MAX() functions, then the result values of MIN()/MAX() are set
improperly.
When a query has AGG(DISTINCT), end_select is set to end_send_group.
"end_send_group" keeps aggregating until it sees a record from the
next group, and then sends out the result row of that group.
Since the query also has MIN()/MAX() and loose index scan is used, the
values of MIN()/MAX() are set as part of the loose index scan itself.
Setting the MIN()/MAX() values as part of the loose index scan
overwrites the values computed in end_send_group. This causes invalid
results.
For such queries to work, loose index scan would have to stop
performing the MIN()/MAX() aggregation and let end_send_group do it.
But in the current design, loose index scan can produce only one row
per group key. If we have both MIN() and MAX(), it would have to emit
two records, which is not possible because the interface has to use
the common buffer record[0] for both records at a time.
SOLUTION:
---------
For such queries to work we would need a new interface for loose index
scan. Hence, do not choose loose index scan for such cases. A new
rule, SA7, is introduced to take care of this.
SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
MIN()/MAX() functions then the loose index scan access
method is not used."
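A hedged sketch of where rule SA7 would sit in the applicability test
(the function and parameter names are illustrative, not the actual
optimizer identifiers):

  /* Rule SA7: reject loose index scan when the query has both
     AGG_FUNC(DISTINCT ...) and MIN()/MAX(). The other applicability
     rules are not shown. */
  static bool loose_index_scan_applicable(bool has_agg_distinct,
                                          bool has_min_or_max)
  {
    if (has_agg_distinct && has_min_or_max)
      return false;            /* SA7: let end_send_group aggregate */
    return true;
  }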