MYISAM TABLE CAUSES THE SERVER TO CRASH
Issue:
-----
During index maintenance, an R-tree node might need a split.
In some cases the square of the MBR can evaluate to
infinity (as in this case) or to NaN. This is currently
not handled. This is specific to MyISAM.
SOLUTION:
---------
If the calculated value in "mbr_join_square" is infinite or
NaN, set it to the maximum double value.
Initialization of the output parameters of "pick_seeds" is
required if the calculation is infinite (or negative infinity).
Similar to the fix made for InnoDB as part of Bug#19533996.
RESERVATION AND SIGNAL COUNT
Problem:
The Reservation and Signal count values show up as negative in the
SHOW ENGINE INNODB STATUS output.
Solution:
This happens because of a counter overflow. The Reservation and Signal
count values are defined as unsigned long, but they were cast to long
when printed. Print the Reservation and Signal count values as
unsigned long instead.
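A minimal standalone sketch of the printing problem (the counter value
is illustrative, not taken from the server):

    #include <cstdio>
    #include <climits>

    int main() {
      /* A counter that has grown past LONG_MAX, as a long-running
         server can produce. */
      unsigned long signal_count = ULONG_MAX - 1;

      /* Old behaviour: the cast reinterprets the bit pattern. */
      printf("as long:          %ld\n", (long) signal_count);  /* -2 */
      /* Fixed behaviour: print with the unsigned conversion. */
      printf("as unsigned long: %lu\n", signal_count);
      return 0;
    }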
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Approved in bug page.
Problem:
This is a coding mistake in error handling. When the specified foreign
key constraint is wrong because of a data type mismatch, the resulting
foreign key object will not have a valid foreign->id (it will be NULL).
Solution:
While removing the foreign key object from the dictionary cache during
error handling, ensure that foreign->id is not NULL before using it.
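A sketch of the guard, with minimal stand-ins for the InnoDB structures
(the real dict_foreign_t and cache functions are richer than shown):

    #include <cstddef>

    struct dict_foreign_t { const char* id; };

    /* Stubs standing in for the real dictionary-cache functions. */
    static void dict_foreign_remove_from_cache(dict_foreign_t*) {}
    static void dict_foreign_free(dict_foreign_t*) {}

    /* Error-handling cleanup: an object built from a mis-typed
       constraint has foreign->id == NULL and was never entered into
       the cache under an id, so skip the cache removal for it. */
    void cleanup_failed_foreign(dict_foreign_t* foreign)
    {
      if (foreign->id != NULL) {
        dict_foreign_remove_from_cache(foreign);
      } else {
        dict_foreign_free(foreign);
      }
    }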
rb#8204 approved by Sunny.
ISSUE:
------
There can be up to MERGEBUFF2 sorted merge chunks, and we need enough
buffer space for at least one record from each chunk. If the estimate
is wrong (too low) and we allocate buffer space for fewer than
MERGEBUFF2 records, then merge_buffers runs into trouble when the
actual number of rows to be sorted is bigger than the estimate and an
external filesort is chosen.
SOLUTION:
---------
Set number of rows to sort to be at least MERGEBUFF2.
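A one-line sketch of the clamp (MERGEBUFF2 is 15 in the 5.5 sources;
treat the constant here as illustrative):

    #include <algorithm>

    static const unsigned long MERGEBUFF2 = 15;  /* merge fan-in limit */

    /* Never size the sort for fewer rows than the number of merge
       chunks we may need to hold one record from at the same time. */
    unsigned long rows_to_sort(unsigned long estimated_rows)
    {
      return std::max(estimated_rows, MERGEBUFF2);
    }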
CRASHES WITH AUTO_INCREMENT COLUMN
Description:- Creating a federated table with an AUTO_INCREMENT
column using the LIKE clause results in a server crash.
Analysis:- Creating a federated table with an AUTO_INCREMENT
column using the LIKE clause crashes the federated server
because of the uninitialized connection structure (mysql).
Also, because the connection string for the remote server is
unassigned at the time the "create_info" structure is prepared,
the creation of any federated table using the LIKE clause fails
with the error "ERROR 1 (HY000): server name: '' doesn't exist!".
This bug is not limited to AUTO_INCREMENT; it affects all
creations of federated tables with the LIKE clause.
Fix:- In ha_federated::info(), "mysql->insert_id" is assigned
to "stats.auto_increment_value" only when there is an active
connection. This fixes the crash. For creating a federated
table with the LIKE clause, the connection string is now
assigned at the time the "create_info" structure is prepared.
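A sketch of the guard, with hypothetical minimal stand-ins for the
handler members involved:

    #include <cstddef>

    struct MYSQL { unsigned long long insert_id; };
    struct Stats { unsigned long long auto_increment_value; };

    /* In ha_federated::info(): only dereference the connection
       structure when a connection has actually been established. */
    void update_auto_increment_stat(MYSQL* mysql, Stats* stats)
    {
      if (mysql != NULL) {
        stats->auto_increment_value = mysql->insert_id;
      }
      /* No active connection: leave the statistic untouched. */
    }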
CRASHES ON EVERY START ATTEMPT
Description:
------------
The push_warning_printf function is used to send a warning message
to the client, so it must not be invoked while the server is
recovering. Moreover, current_thd is NULL while the server is starting.
Solution:
---------
- Avoid printing the warning during recovery.
This patch was already pushed in mysql-5.6.
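A sketch of the session guard (the stand-in push_warning_printf below
has a simplified signature; the real function also takes a warning
level and an error code):

    #include <cstdarg>
    #include <cstdio>

    struct THD {};  /* opaque stand-in for the server session object */

    static void push_warning_printf(THD*, const char* fmt, ...)
    {
      va_list ap;
      va_start(ap, fmt);
      vprintf(fmt, ap);  /* stub: the real one queues a client warning */
      va_end(ap);
    }

    /* During recovery/startup there is no client session, so
       current_thd is NULL and the warning must be skipped. */
    void maybe_warn(THD* current_thd, const char* fmt)
    {
      if (current_thd == NULL) return;
      push_warning_printf(current_thd, fmt);
    }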
COMMUNICATION PACKETS; FEDERATED TABLE
Description:- Execution of FLUSH TABLES on a federated
table which has been idle for wait_timeout (on the remote
server) + tcp_keepalive_time fails with the error
"ERROR 1160 (08S01): Got an error writing communication
packets".
Analysis:- During FLUSH TABLES execution the federated
table is closed, which in turn closes the federated
connection. While closing the connection, the federated
server tries to communicate with the remote server. Since
the connection was idle for wait_timeout (on the remote
server) + tcp_keepalive_time, the socket has already been
closed, so this communication fails with a broken pipe and
the error is thrown. But federated connections are expected
to reconnect silently, and reconnection is impossible here
because the "auto_reconnect" variable is set to 0 in
"mysql_close()".
Fix:- Before closing the federated connection, in
"ha_federated_close()", a check is added to verify
whether the connection is still alive. If it is not,
"mysql->net.error" is set to 2, which indicates that the
connection is broken. Also, the setting of the
"auto_reconnect" variable to 0 is delayed until after the
"COM_QUIT" command.
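A sketch of the idea with minimal stand-ins (the liveness probe here is
an assumption, not the patch's exact check):

    struct NET { int error; };
    struct MYSQL { NET net; };

    /* Stand-in for a lightweight liveness probe on the socket. */
    static bool connection_alive(MYSQL*) { return false; }

    /* In ha_federated_close(): if the remote end has already dropped
       the idle connection, mark the link as broken so that closing it
       does not try to write a COM_QUIT packet and fail with "Got an
       error writing communication packets". */
    void close_federated_connection(MYSQL* mysql)
    {
      if (!connection_alive(mysql)) {
        mysql->net.error = 2;  /* 2 = connection broken: skip the write */
      }
      /* ... then mysql_close(); auto_reconnect is cleared only after
         the COM_QUIT step, so live connections can still reconnect. */
    }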
NOTE:- To reproduce this issue, "tcp_keepalive_time" has
to be set to a smaller value. This value is set in the
"/proc/sys/net/ipv4/tcp_keepalive_time" file on Unix
systems, so root permission is needed to change it, which
cannot be done through an mtr test. Hence the patch is
submitted without an mtr test.
Description:
Use the correct length when moving to the next field in cmp_ref. The
store length already includes the length bytes of blobs, which are
already accounted for earlier for blob types.
Approved by Mattias, Jimmy [rb-7088]
The debug configuration parameter innodb_optimistic_insert_debug
which was introduced for testing corner cases in B-tree handling
had a bug in it. The value 1 would trigger an infinite sequence
of page splits.
Fix: When the value 1 is specified, disable this debug feature.
Approved by Yasufumi Kinoshita
Problem:
In the function dict_foreign_remove_from_cache(), the rb tree was
updated without verifying whether the given foreign key object was
actually present in the rb tree. There can be an existing foreign key
object with the same id in the rb tree, which must not be removed.
Such a scenario arises when an attempt is made to add a foreign key
object with a duplicate identifier.
Solution:
When the foreign key object is removed from the dictionary cache, ensure
that the foreign key object removed from the rbt is the correct one.
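A sketch of the exact-match removal, using std::map as a stand-in for
the rb tree (std::map is itself typically a red-black tree):

    #include <map>
    #include <string>

    struct dict_foreign_t { std::string id; };

    typedef std::map<std::string, dict_foreign_t*> foreign_rbt_t;

    /* Only erase the entry if it refers to the very object being
       removed: a failed duplicate-id insert must not evict the
       existing, still-valid foreign key object. */
    void rbt_remove_exact(foreign_rbt_t& rbt, dict_foreign_t* foreign)
    {
      foreign_rbt_t::iterator it = rbt.find(foreign->id);
      if (it != rbt.end() && it->second == foreign) {
        rbt.erase(it);
      }
    }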
rb#7168 approved by Jimmy and Marko.
dict_set_corrupted(): Use the canonical way of searching for
less-than-equal (PAGE_CUR_LE) and then checking low_match.
The code that was introduced in MySQL 5.5.17 in
Bug#11830883 SUPPORT "CORRUPTED" BIT FOR INNODB TABLES AND INDEXES
could position the cursor on the page supremum, and then attempt
to overwrite the non-existing 7th field of the 1-field supremum record.
Approved by Jimmy Yang
Bug#17959689: MAKE GCC AND CLANG GIVE CONSISTENT COMPILATION WARNINGS
Bug#18313717: ENABLE -WERROR IN MAINTAINER MODE WHEN COMPILING WITH CLANG
Bug#18510941: REMOVE CMAKE WORKAROUNDS FOR OLDER VERSIONS OF OS X/XCODE
Backport from mysql-5.6 to mysql-5.5
FROM A FUNCTION
Scenario:
In a stored procedure, the CREATE TABLE statement is not allowed, but
an exception is made for CREATE TEMPORARY TABLE: we can create a
temporary table in a stored procedure.
Let there be two stored functions f1 and f2 and two stored procedures p1 and
p2. Their properties are as follows:
. stored function f1() calls stored procedure p1().
. stored function f2() calls stored procedure p2().
. stored procedure p1() creates temporary table t1.
. stored procedure p2() does DML on t1.
Consider the following situation:
1. Autocommit mode is on.
2. select f1()
3. select f2()
Step 2: In this step, t1 is created via p1(). A table level
transaction lock is taken, but ::external_lock() is not called on
this table. At the end of step 2, because autocommit mode is on,
this table level lock is released.
Step 3: When we execute DML on table t1 via p2() we have two problems:
Problem 1:
The function ha_innobase::external_lock() is called, but since this
is a SELECT query no table level locks are taken. Hence the
following assert fails:
ut_ad(lock_table_has(thr_get_trx(thr), index->table, LOCK_IX));
Solution:
The solution is to identify this situation, take a table level lock,
and use the proper lock type, prebuilt->select_lock_type = LOCK_X,
for DML operations.
Problem 2:
Another problem is that in step 3, ha_innobase::open() is never called on
the table t1.
Solution:
The solution is to identify this situation and re-initialize the
handler of table t1.
rb#6429 approved by Krunal.
Problem:
Creation of a table fails when innodb_strict_mode is enabled, but the
same table is created without any warning when innodb_strict_mode is
disabled.
Solution:
If creation of a table fails with an error when innodb_strict_mode is
enabled, it must issue a warning when innodb_strict_mode is disabled.
rb#6723 approved by Krunal.
CHECK.
Analysis:
----------
The issue is that, while creating or altering an InnoDB table, if a
foreign key defined on the table references a parent table on which
the user has no access privileges, the table is created without
reporting any error.
Currently the privilege level REFERENCES_ACL is unused and is not
considered for access evaluation while creating a table with a
foreign key constraint or adding a foreign key constraint to a
table. But even when no privileges are granted to the user, access
evaluation on the parent table is skipped.
Fix:
---------
For DMLs, regardless of the privileges granted, support does not
want any changes, to avoid permission checks on every operation.
So, as a fix, a function "check_fk_parent_table_access" is added
to check whether any of the SELECT_ACL, INSERT_ACL, UPDATE_ACL,
DELETE_ACL or REFERENCES_ACL privileges is granted to the user
at table level. If none of them is granted, an error is reported.
This function is called during table creation and alter
operations.
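A sketch of the access test (the bit values are illustrative; the
server's real *_ACL constants differ):

    enum {
      SELECT_ACL     = 1 << 0,
      INSERT_ACL     = 1 << 1,
      UPDATE_ACL     = 1 << 2,
      DELETE_ACL     = 1 << 3,
      REFERENCES_ACL = 1 << 4
    };

    /* check_fk_parent_table_access in spirit: the user must hold at
       least one of these privileges on the parent table, otherwise
       the CREATE/ALTER carrying the foreign key is rejected. */
    bool fk_parent_access_ok(unsigned long table_privs)
    {
      const unsigned long wanted = SELECT_ACL | INSERT_ACL | UPDATE_ACL
                                   | DELETE_ACL | REFERENCES_ACL;
      return (table_privs & wanted) != 0;
    }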
Problem:
We maintain two rb trees in each dict_table_t. The foreign_rbt must
be in sync with foreign_list, and the referenced_rbt must be in sync
with referenced_list. There is one function that checks this
consistency, and it failed, resulting in an assert failure.
The root cause was that the search order was lost in the
referenced_rbt, because while renaming the table we did not refresh
the referenced_rbt.
Solution:
When a foreign key is renamed, we must delete and re-insert into both
foreign_rbt and referenced_rbt.
rb#6412 approved by Jimmy.
IN RECOVERY
During redo log processing, the data dictionary is not available. We
should check for this in dict_find_table_by_space() to prevent a SEGV.
rb#5678, approved by Jimmy.
Problem:
When a unique secondary index is scanned for duplicate checking, gap locks
were not taken if the transaction had isolation level <= READ COMMITTED.
This change was done while fixing Bug #16133801 UNEXPLAINABLE INNODB
UNIQUE INDEX LOCKS ON DELETE + INSERT WITH SAME VALUES (rb#2035).
Because of this, the duplicate check logic failed, resulting in
duplicate values in the unique secondary index.
Solution:
When a unique secondary index is scanned for duplicate checking, gap locks
must be taken irrespective of the transaction isolation level. This is
achieved by reverting rb#2035.
rb#5910 approved by Jimmy
WITH CERTAIN MAX_HEAP_TABLE_SIZE VALUES
Description:
When the system variable 'max_heap_table_size' is set to 20GB, the
server crashes on creation of a temporary table or a table using the
MEMORY storage engine.
Analysis:
The variable 'max_records' determines the amount of heap allocated
for the records of the table. This value is derived from the
'max_heap_table_size' variable. 'records_in_block' in turn uses
'max_records' to determine the number of records per block.
When 'max_heap_table_size' is set to 20GB, 'records_in_block' is
calculated to be 2^28.
The size of the block, determined by multiplying 'records_in_block'
and 'recbuffer', overflows, and hence the value becomes zero. As a
result, zero bytes of heap are allocated for the table, which causes
a server crash when the table is accessed.
Fix:
The variables 'records_in_block' and 'recbuffer' are typecast to
'unsigned long' while calculating the size of the block.
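A standalone sketch of the overflow and the widening fix (values are
chosen so the 32-bit product is exactly 2^32; the corrected result
assumes an LP64 platform where unsigned long is 64 bits):

    #include <cstdio>

    int main() {
      unsigned int records_in_block = 1U << 28;  /* as with a 20GB size */
      unsigned int recbuffer = 1U << 4;          /* 16-byte record buffer */

      /* 32-bit multiplication: 2^28 * 2^4 wraps around to 0, so zero
         bytes get allocated for the table's block. */
      unsigned int broken = records_in_block * recbuffer;

      /* The fix: widen the operands before multiplying. */
      unsigned long fixed =
          (unsigned long) records_in_block * (unsigned long) recbuffer;

      printf("32-bit product:  %u\n", broken);  /* 0 */
      printf("widened product: %lu\n", fixed);  /* 4294967296 */
      return 0;
    }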
SLOW/CRASHES SEMAPHORE
Problem:
There are 200,000 tables - fk_000001, fk_000002 ... fk_200000. All
of them are related to the same parent_table through a foreign key
constraint. When the parent_table is loaded into the dictionary
cache, all the child tables are also loaded. This takes a lot of
time. Since this operation happens while the dictionary latch is
held, the scenario leads to a "long semaphore wait" situation and
the server gets killed.
Analysis:
A simple performance analysis showed that the slowness is caused by
the dict_foreign_find() function. It does a linear search on two
linked lists, table->foreign_list and table->referenced_list, looking
for a particular foreign key object using foreign->id as the key.
It is called twice for each foreign key object.
Solution:
Introduce rb trees, table->foreign_rbt and table->referenced_rbt,
which act as indexes on table->foreign_list and
table->referenced_list respectively, using foreign->id as the key.
These rbt structures are used solely by dict_foreign_find().
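A sketch of the indexed lookup, again using std::map as a stand-in for
the rb tree:

    #include <cstddef>
    #include <map>
    #include <string>

    struct dict_foreign_t { std::string id; /* ... */ };

    /* Stand-in for table->foreign_rbt / table->referenced_rbt. */
    typedef std::map<std::string, dict_foreign_t*> foreign_rbt_t;

    /* dict_foreign_find in spirit: an O(log n) tree lookup instead of
       a linear scan over 200,000 list nodes. */
    dict_foreign_t* foreign_find(const foreign_rbt_t& rbt,
                                 const std::string& id)
    {
      foreign_rbt_t::const_iterator it = rbt.find(id);
      return it == rbt.end() ? NULL : it->second;
    }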
rb#5599 approved by Vasil
Description: Using the temporary file vulnerability, an
attacker can create a file with arbitrary content at a
location of his choice. This can be used to create the
file /var/lib/mysql/my.cnf, which will be read as a
configuration file by MySQL, because it is located in the
home directory of the mysql user. With this configuration
file, the attacker can specify his own plugin_dir variable,
which then allows him to load arbitrary code via
"INSTALL PLUGIN...".
Analysis: While creating the ".TMD" file in the mi_repair()
function, we do not check whether the file already exists; if the
".TMD" file exists we truncate it and go ahead. This creates the
security breach.
Fix: We need to use the O_EXCL flag along with O_RDWR and O_TRUNC,
which ensures that if any user has pre-created the ".TMD" file, the
repair of the table fails with a "cannot create '.TMD' file" error.
"param.tmpfile_createflag" is in fact already initialized with
O_RDWR | O_TRUNC | O_EXCL in myisamchk_init(), but it is then
modified in ha_myisam::repair() to O_RDWR | O_TRUNC. So we need to
remove the line that modifies "param.tmpfile_createflag".
archive table which is using an auto increment column, the
server hangs. To recover, the mysqld process has to be
terminated forcibly using SIGKILL. The problem
is observed in mysql-5.5.
Bug #18065452 "PREPARING" STATE HOGS CPU WITH ARCHIVE
+ SUBQUERY
Analysis: This happens because the server is trapped inside
an infinite loop in the function
"subselect_indexsubquery_engine::exec()". This function resolves
the correlated subquery by doing an index lookup through the
appropriate engine. In the case of the archive engine, after
reaching the end of the records, "table->status" is not set to
STATUS_NOT_FOUND, and as a result the loop never terminates.
Fix: The "table->status" is set to STATUS_NOT_FOUND when
the end of records is reached.
THE PERFORMANCE UNDER HEAVY INSERT
Problem:
Three separate memset calls are made to initialize the system
fields on each insert.
Solution:
Instead of calling memset three times, we can combine the calls
into one. This reduces CPU usage under heavy insert load.
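A sketch of the change, assuming (as the commit implies) that the
three system fields sit back to back in one buffer; the length
constants match InnoDB's system column sizes:

    #include <cstring>

    enum { DATA_ROW_ID_LEN = 6, DATA_TRX_ID_LEN = 6, DATA_ROLL_PTR_LEN = 7 };

    void init_sys_fields(unsigned char* buf)
    {
      /* Before: one memset per field.
         memset(buf, 0, DATA_ROW_ID_LEN);
         memset(buf + DATA_ROW_ID_LEN, 0, DATA_TRX_ID_LEN);
         memset(buf + DATA_ROW_ID_LEN + DATA_TRX_ID_LEN, 0,
                DATA_ROLL_PTR_LEN); */

      /* After: one call over the combined, contiguous region. */
      memset(buf, 0,
             DATA_ROW_ID_LEN + DATA_TRX_ID_LEN + DATA_ROLL_PTR_LEN);
    }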
Approved by Marko rb-4916
FAILING ASSERTION: FLEN == LEN
Problem:
Broken invariant triggered when building a unique index on a
binary column and the input data contains duplicate keys. This was broken
in debug builds only.
Fix:
The fixed length of the binary datatype can be greater than the
length of the shorter prefix on which the index is being created.
Problem:
In the clustered index, when an update operation is done the overall
scenario (after rb#4479) is as follows:
1. Delete mark the old record that is to be updated.
2. The old record disowns the blobs.
3. Insert the new record into clustered index.
4. For non-updated blobs, the new record must own them. Verified by
an assert.
5. For non-updated blobs, they are marked as inherited in the new
record.
Scenario involving DB_LOCK_WAIT:
If step 3 times out, then we skip steps 1 and 2 and continue from
step 3. This skipping is achieved via the UPD_NODE_INSERT_BLOB state.
In this case, step 4 is not correct: because of step 1, the new
record need not own the blobs. Hence the assert failure.
Solution:
The assert in step 4 is removed. Instead, code is added to ensure
that the record owns the blob.
Note:
This is a regression caused by rb#4479.
rb#4571 approved by Marko
AUTO_INCREMENT_INCREMENT
Problem:
=======
When the auto_increment_increment system variable is decreased, the
immediate next value of the auto increment column is not affected.
Solution:
========
Get the previously inserted value of the auto increment column by
subtracting the previous auto_increment_increment from the next auto
increment value. Then calculate the current autoinc value using the
newly changed auto_increment_increment variable.
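A worked sketch of the arithmetic (values are illustrative;
auto_increment_offset handling is omitted):

    #include <cstdio>

    /* Step back with the old increment to recover the last value that
       was actually inserted, then step forward with the new one. */
    unsigned long long next_autoinc(unsigned long long next_value,
                                    unsigned long long old_step,
                                    unsigned long long new_step)
    {
      unsigned long long prev_inserted = next_value - old_step;
      return prev_inserted + new_step;
    }

    int main() {
      /* Last insert produced 16 with auto_increment_increment = 5, so
         the precomputed next value is 21. After lowering the increment
         to 2, the next value should be 16 + 2 = 18, not 21. */
      printf("%llu\n", next_autoinc(21, 5, 2));  /* 18 */
      return 0;
    }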
Approved by Sunny [rb#4394]
Performance schema tables are local to a server, and the slave
should not be allowed to execute them from the relay log.
From 5.6.10 onwards, P_S events are not written into the binary log.
But prior to that, from MySQL 5.5 onwards, P_S events were written
to the binary log by the master.
The following are problematic scenarios:
1. Master 5.5 -> Slave 5.5
========================
A) RBR: Slave crashes
B) SBR: P_S statements are replicated.
2. Master 5.5 -> Slave 5.6
========================
A) RBR: SQL thread generates an error
B) SBR: P_S statements are replicated
3. 5.5 binlog executed on a 5.5 server using mysqlbinlog|mysql
=================================================================
A) RBR: Server crash (because of the BINLOG '...' statement)
B) SBR: P_S statements are executed
4. 5.5 binlog executed on a 5.6 server using mysqlbinlog|mysql
================================================================
A) RBR: SQL error (because of the BINLOG '...' statement)
B) SBR: P_S statements are executed.
The generalized behaviour should be:
a) The slave SQL thread should certainly ignore P_S events read from
the relay log.
b) mysqlbinlog|mysql should replay the binlog successfully.
Problem:
The function row_upd_changes_ord_field_binary() is used to decide
whether to use row_upd_clust_rec_by_insert() or row_upd_clust_rec().
The function row_upd_changes_ord_field_binary() does not make use of
charset information, so based on a binary comparison it decides that
r1 and r2 differ in their ordering fields.
In the function row_upd_clust_rec_by_insert(), an update is done by delete +
insert. These operations internally make use of cmp_dtuple_rec_with_match()
to compare records r1 and r2. This comparison takes place with the use of
charset information.
This means that it is possible for the deleted record to be reused
by the subsequent insert. In the given scenario, the characters 'a'
and 'A' are considered equal in my_charset_latin1. When this
happens, the ownership information of externally stored blobs is
not correctly handled.
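A standalone sketch of why the two comparisons disagree (the
case-insensitive helper stands in for a my_charset_latin1
collation-aware compare):

    #include <cctype>
    #include <cstdio>
    #include <cstring>

    /* Collation-aware view: 'a' and 'A' compare equal. */
    static bool equal_latin1_ci(const char* a, const char* b)
    {
      for (; *a && *b; ++a, ++b)
        if (tolower((unsigned char) *a) != tolower((unsigned char) *b))
          return false;
      return *a == *b;
    }

    int main() {
      const char *r1 = "abc", *r2 = "ABC";
      /* Binary view (row_upd_changes_ord_field_binary): different. */
      printf("binary equal:  %d\n", memcmp(r1, r2, 3) == 0);  /* 0 */
      /* Charset view (cmp_dtuple_rec_with_match): equal, which is why
         the delete-marked record can be reused by the insert. */
      printf("charset equal: %d\n", equal_latin1_ci(r1, r2)); /* 1 */
      return 0;
    }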
Solution:
When an update is done by delete followed by insert, disown the relevant
externally stored fields during the delete marking itself (within the same
mtr). If the insert succeeds, then nothing with respect to blob ownership
needs to be done. If the insert fails, then the disown done earlier will be
removed when the operation is rolled back.
rb#4479 approved by Marko.
The maximum value for innodb_thread_sleep_delay is 4294967295
(32-bit) or 18446744073709551615 (64-bit) microseconds. This is way
too big, since the effective value of innodb_thread_sleep_delay is
limited by innodb_adaptive_max_sleep_delay whenever that value is
set to a non-zero value (its default is 150,000).
Solution:
The maximum value of innodb_thread_sleep_delay should be the same as
the maximum value of innodb_adaptive_max_sleep_delay, which is 1000000.
Approved by Jimmy, rb#4429
IN DOCUMENTATION
Problem
-------
The documentation says that we support the 'K' prefix when
specifying the size of an InnoDB data file in the
innodb_data_file_path server variable, but the function
srv_parse_megabytes() only handles 'M' (megabytes) and
'G' (gigabytes).
Fix
---
Modify srv_parse_megabytes() to handle kilobytes.
Also add to the documentation that a size specified in KB should be
a multiple of 1024; otherwise it is rounded down to the nearest
MB (megabyte) boundary (e.g. a size given as 2313KB is treated
as 2MB).
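A sketch of the rounding rule; integer division truncates, which is
where the 2313KB -> 2MB example above comes from:

    #include <cstdio>

    unsigned long kb_to_mb(unsigned long kilobytes)
    {
      return kilobytes / 1024;  /* truncates to the lower MB boundary */
    }

    int main() {
      printf("%lu MB\n", kb_to_mb(2313));  /* 2 MB */
      printf("%lu MB\n", kb_to_mb(2048));  /* exactly 2 MB, no loss */
      return 0;
    }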
[ Approved by Marko #rb 2387 ]
LOCAL TABLE WHEN ONLY 1 LOCAL ROW
Description: When updating a federated table with UPDATE...
JOIN, the server consistently crashes with Signal 11 when
only one row exists in the local table involved in the join
and that row can be joined with a row in the federated
table.
Analysis: Interaction between the federated engine and the
optimizer results in the crash. In our scenario, i.e. the local
table having only one row, the program follows a different path
because the table is treated as a constant table by the join
optimizer. So in this scenario "index_read()" happens in the
prepare phase, since the optimizer plan is different for constant
table joins. In this case, "index_read_idx_map()" (inside
handler.cc) calls "index_read()", and inside "index_read()"
matching rows are fetched and "stored_result" is populated by
calling "store_result()". Just after "index_read()", the
"index_end()" function is called, and "index_end()" frees
"stored_result" by calling "free_result()". So when we reach the
execution phase, the assertion "DBUG_ASSERT(stored_result);" fires
in the "position()" function. In all other scenarios (i.e. a table
with more than one row), the optimizer plan is different and
"index_read()" happens in the execution phase.
Fix: The fix is to have a separate ha_federated member function for
"index_read_idx_map()" which handles the federated engine
separately, so that position() is called before the index_end()
call in the constant table scenario.
ERRORS IN THE FK SECTION
ANALYSIS
--------
Any error during the renaming of a table was incorrectly logged in
the dict_foreign_err_file, and it showed up in the foreign key
section when we issue the query "SHOW ENGINE INNODB STATUS".
FIX
---
Prevent renaming errors from being logged in the
dict_foreign_err_file section.
[Approved by Marko #rb 2501]
DESTRUCTED THD OBJ
Prior to the fix, the function check_performance_schema() could
leave behind stale pointers in thread local storage, for the
following keys:
- THR_THD (used by _current_thd)
- THR_MALLOC (used for memory allocation)
This is an unsafe practice which can potentially cause crashes, and
which can cause other bugs when the code is modified during
maintenance.
With this fix, the thread local storage keys used temporarily within
check_performance_schema() are cleaned up after use.
possibly since it was introduced in the patch for Bug#16720368
around 2013-04-30. The fix is simply to adjust the
mtr.add_suppression() lines in the testcase and to add a missing
"\n" in the error message.
Approved by Marko in RB 3746
Regression from bug#14621190 due to the disabled optimistic
restoration of the cursor, which required a full key lookup instead
of verifying whether the previously positioned btree cursor could
be reused. Fixed by enabling the optimistic restore and adjusting
the cursor afterwards.
rb#3324 approved by Marko.
--Implemented CHECK TABLE...QUICK.
Introduce CHECK TABLE...QUICK, which skips the btr_validate_index()
and btr_search_validate() calls and counts the number of records in
each index.
Approved by Marko and Kevin. (rb#3567).