BY A CONCURRENT TRANSACTION
The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone, but InnoDB did not provide a clone operation:
there was no ha_innobase::clone(), and the base handler::clone() does
not take care of ha_innobase->prebuilt->select_lock_type. Because of
this, we did a locking read for one index while doing a non-locking
(consistent) read for the other.
The patch introduces an ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the base class
handler::clone() and then does any additional operations required,
setting ha_innobase->prebuilt->select_lock_type correctly.
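A minimal sketch of the new member function, assuming the 5.x handler
API (the real implementation may differ in details):

  /* Sketch only: assumes the usual handler::clone() signature and the
     prebuilt->select_lock_type member named above. */
  handler*
  ha_innobase::clone(const char* name, MEM_ROOT* mem_root)
  {
    /* Let the base class create and open the new handler instance. */
    ha_innobase* new_handler =
        static_cast<ha_innobase*>(handler::clone(name, mem_root));

    if (new_handler != NULL) {
      /* Propagate the lock type so that both handlers of a ROR-merged
         scan perform the same kind of read (locking vs. consistent). */
      new_handler->prebuilt->select_lock_type = prebuilt->select_lock_type;
    }
    return new_handler;
  }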
rb://1060 approved by Marko
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not emit the dump statements for the general_log and
slow_log tables; that is a consequence of the fix for Bug#26121. Hence,
after dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.
Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log
and slow_log tables, because the data dump of those tables takes
locks, which is not allowed for log tables.
Fix:
----
We came up with an approach where, instead of taking both the metadata
and the data dump for those tables, we take only the metadata dump,
which does not need locks.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables such as db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the table list.
3) Take the TL_READ lock on the tables present in the table list and
   do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database has tables such as db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the table list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and
   do 'show create table'.
5) Release the lock.
While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is
because we also skip the "DROP TABLE" for those tables: "DROP TABLE"
fails for these tables when logging is enabled. The customer applies
the dump with logging enabled, so if the dump had "DROP TABLE" it
would fail. Hence, the "DROP TABLE" statements for those tables were
removed.
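A minimal sketch of the rewrite, as a hypothetical standalone helper
(not mysqldump's actual code):

  #include <string>

  /* Make the SHOW CREATE TABLE output idempotent, so that restoring
     the dump does not fail when the log table already exists. */
  std::string make_idempotent(std::string create_stmt)
  {
    const std::string from = "CREATE TABLE ";
    const std::string to   = "CREATE TABLE IF NOT EXISTS ";
    if (create_stmt.compare(0, from.size(), from) == 0)
      create_stmt.replace(0, from.size(), to);
    return create_stmt;
  }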
After the fix we may initially observe "Table 'mysql.general_log'
doesn't exist" errors; that is because, in the customer scenario, the
mysql database is dropped while logging is enabled, so those errors
are expected. Once we apply the dump that was taken before the
"drop database mysql", the errors are gone.
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.
The original patch removed the default value of the bind-address
option, so the default value became NULL. By coincidence, NULL
resolves to both 0.0.0.0 and ::, and since the server chooses the
first IPv4 address, 0.0.0.0 is chosen. So there was no change in
behaviour.
This patch restores the default value of the bind-address option to
"0.0.0.0".
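For illustration, a small standalone program (an assumed depiction of
the resolution mechanism, not the server's code) showing how a NULL
host resolves to the wildcard addresses via getaddrinfo():

  #include <cstdio>
  #include <netdb.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  int main()
  {
    struct addrinfo hints = {}, *res = NULL;
    hints.ai_flags    = AI_PASSIVE;     /* wildcard address wanted */
    hints.ai_family   = AF_UNSPEC;      /* both IPv4 and IPv6      */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(NULL, "3306", &hints, &res) != 0)
      return 1;

    for (struct addrinfo* ai = res; ai != NULL; ai = ai->ai_next) {
      char buf[INET6_ADDRSTRLEN];
      const void* addr =
          ai->ai_family == AF_INET
              ? (const void*) &((struct sockaddr_in*) ai->ai_addr)->sin_addr
              : (const void*) &((struct sockaddr_in6*) ai->ai_addr)->sin6_addr;
      /* Typically prints 0.0.0.0 and ::; the server picks the first
         IPv4 address, i.e. 0.0.0.0. */
      printf("%s\n", inet_ntop(ai->ai_family, addr, buf, sizeof buf));
    }
    freeaddrinfo(res);
    return 0;
  }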
Problem - The failure on PB2 is possibly due to the port number still
          being in use even after the server restarts, which is not
          reflected in the server restart.
Fix - The problem is fixed by starting the servers forcefully using
      the option file; the parameters for the server restart are also
      now passed correctly.
Details:
- The test case bug12427262.test was failing on Windows because
  '/' is not recognized there as the path separator, and it was used
  in the LIKE clause of the query being run by this test case.
Fix:
- Windows needs '\\\\' as the path separator in MySQL. I was not sure
  how to keep a single query with two different syntaxes based on the
  platform, so the query was modified to make sure it runs correctly
  on both platforms.
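For reference, an illustration of the escaping involved (hypothetical
patterns, not the test's actual query): the MySQL lexer turns '\\\\'
into two characters, and LIKE then treats that pair as one escaped
literal backslash.

  /* Hypothetical patterns only (C++ raw strings hold the SQL text). */
  const char* unix_like    = R"(LIKE '%/datadir/%')";
  /* '\\\\' in the SQL literal -> '\\' after lexing -> a single
     literal '\' once LIKE applies its own escape processing. */
  const char* windows_like = R"(LIKE '%\\\\datadir\\\\%')";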
The function mysql_show_binlog_events() has a local stack variable
'LOG_INFO linfo;' which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleaned up.
The problem is solved by moving 'LOG_INFO linfo;' to function scope.
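A simplified illustration of the pattern and the fix (THD and LOG_INFO
here are stand-ins, not the server's real definitions):

  struct LOG_INFO { long pos = 0; };
  struct THD      { LOG_INFO* current_linfo = nullptr; };

  void buggy(THD* thd)
  {
    {
      LOG_INFO linfo;               /* block scope                   */
      thd->current_linfo = &linfo;  /* pointer escapes the block     */
    }                               /* linfo destroyed here, but     */
                                    /* thd->current_linfo dangles    */
  }

  void fixed(THD* thd)
  {
    LOG_INFO linfo;                 /* function scope: alive until   */
    thd->current_linfo = &linfo;    /* the cleanup below runs        */
    /* ... use the pointer ... */
    thd->current_linfo = nullptr;   /* clear before destruction      */
  }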
BUG#11761686 insert_id event is not filtered.
Two issues are covered.
An INSERT into an autoincrement field that is not the first part of a
composite primary key is unsafe by the autoincrement logging design.
The case is specific to the MyISAM engine, because InnoDB does not
allow such a table definition. However, no warning was issued and no
row-format logging was done in MIXED mode; that is now fixed.
Intvar, Rand and User-var log events were not filtered along with
their parent query, which made it possible for them to corrupt the
execution context of the following query.
Fixed by deferring their execution until the parent query, as sketched
below.
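A simplified stand-in for the deferral mechanism (the names and exact
shape of the server's implementation are assumptions):

  #include <vector>

  struct Log_event { virtual int apply() = 0; virtual ~Log_event() {} };

  /* Side-effect events (Intvar, Rand, User-var) are queued instead of
     applied immediately, and replayed only when their parent query has
     passed the replication filter. */
  struct Deferred_events {
    std::vector<Log_event*> queue;

    void add(Log_event* ev) { queue.push_back(ev); }

    int execute()   /* the parent query passed the filter: replay */
    {
      int rc = 0;
      for (Log_event* ev : queue)
        if ((rc = ev->apply()) != 0) break;
      queue.clear();
      return rc;
    }
  };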
******
Bug#11754117
Post review fixes.
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this
output - for instance MEM generates the SUM of the sizes of the logs
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an
account that holds open the connection whilst monitoring, and can lock
out administrators when the server gets overloaded and reaches
max_connections - there is already another SUPER privileged account
connected, the "monitor".
As SHOW MASTER STATUS, and all other replication-related statements,
work with either REPLICATION CLIENT or SUPER privileges, this worklog
makes SHOW MASTER LOGS and SHOW BINARY LOGS consistent with this as
well, allowing both of these commands with either SUPER or
REPLICATION CLIENT.
This allows monitoring tools to not require a SUPER privilege any more,
so is safer in overloaded situations, as well as being more secure, as
lighter privileges can be given to users of such tools or scripts.
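A minimal sketch of the relaxed check, using the server's existing
privilege machinery (the exact call site is an assumption):

  case SQLCOM_SHOW_BINLOGS:
    /* Accept either privilege, as SHOW MASTER STATUS already does,
       instead of demanding SUPER alone. */
    if (check_global_access(thd, SUPER_ACL | REPL_CLIENT_ACL))
      goto error;
    res = show_binlogs(thd);
    break;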
ORDER BY COUNT(*) LIMIT.
PROBLEM:
With respect to the problem in the bug description, we exhibit
different behaviours for the two tables presented because InnoDB
statistics (rec_per_key in this case) are updated for the first table
and not for the second one. As a result the query plan is changed in
test_if_skip_sort_order to use an 'index' scan; hence the difference
in the EXPLAIN output. (NOTE: we can reproduce the problem with the
first table by reducing the number of tuples and changing the table
structure.)
The varied output w.r.t. the query on the second table results from
this query plan change. When a query plan is changed to use an 'index'
scan after the call to test_if_skip_sort_order, we set keyread to TRUE
immediately. If for some reason we drop this index scan in favour of a
filesort later on, we fetch only the keys, not the entire tuple. As a
result we see junk values in the result set.
Following is the code flow:
Call test_if_skip_sort_order
- Choose an index that gives sorted output
- If this is a covering index, call set_keyread(TRUE)
- Set the scan to INDEX scan
Call test_if_skip_sort_order a second time
- The index is not chosen (note that we do not pass the actual limit
  value the second time; hence we do not choose the index scan the
  second time, which is itself a bug, fixed in 5.6 with WL#5558)
- goto filesort
Call filesort
- Create a quick range on a different index
- Since keyread is set to TRUE, we fetch only the columns of the index
- As a result, the required columns are not fetched
FIX:
Remove the call to set_keyread(TRUE) from test_if_skip_sort_order.
The access function, 'join_read_first' or 'join_read_last', calls
set_keyread anyway.
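A condensed sketch of why the removal is safe (the access function's
existing behaviour; details assumed from the commit text):

  static int join_read_first(JOIN_TAB* tab)
  {
    TABLE* table = tab->table;
    /* keyread is enabled here, once the plan is final; enabling it
       earlier in test_if_skip_sort_order is redundant and leaves it
       set if the plan later falls back to filesort. */
    if (table->covering_keys.is_set(tab->index) && !table->no_keyread)
      table->set_keyread(TRUE);
    /* ... position the handler at the first entry of the index ... */
    return 0;
  }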
UNHANDLED, CONFUSING ERROR
The main confusion with the error message is that it "implies that
your data dictionary may now be out of sync". This patch removes the
unwanted and misleading error message by not doing an unnecessary
operation in the error-handling code.
rb://980 approved by: Dmitry Lenev
PROBLEM:
--------
When binary log statements are replayed on the slave, BEGIN is counted
in com_counters but COMMIT is not. Similarly, in ROW-based replication
the 'INSERT', 'UPDATE' and 'DELETE' com_counters are not incremented
when the binary log statements are replayed on the slave.
ANALYSIS:
---------
In ROW-based replication, the COMMIT, INSERT, UPDATE and DELETE
operations invoke the following special events:
Xid_log_event, Write_rows_log_event, Update_rows_log_event and
Delete_rows_log_event.
These events do not go through the parser, where the 'COM_COUNTERS'
are incremented.
FIX:
-----
Increment statements are added in the appropriate events. The
respective functions are listed below:
'Xid_log_event::do_apply_event'
'Write_rows_log_event::do_before_row_operations'
'Update_rows_log_event::do_before_row_operations'
'Delete_rows_log_event::do_before_row_operations'
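A minimal sketch of the kind of increment added (one event shown; the
exact placement inside the function is condensed):

  int Write_rows_log_event::do_before_row_operations(
      const Slave_reporting_capability* const log)
  {
    /* Count the replayed row operation exactly as the parser would
       count the corresponding statement. */
    thd->status_var.com_stat[SQLCOM_INSERT]++;
    /* ... existing setup for the row operations ... */
    return 0;
  }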
TABLES IN INCORRECT ENGINE
PROBLEM:
CREATE/ALTER TABLE can currently move system tables such as mysql.db,
user, host, etc. to engines other than MyISAM. This is not completely
supported by mysqld as of now: when some of the system tables, like
plugin, servers, event, func, *_priv, time_zone*, are moved to InnoDB,
a mysqld restart crashes. Currently system tables can even be moved to
BLACKHOLE!
ANALYSIS:
The problem is that there is no check before creating or moving a
system table to some particular engine.
System tables are supposed to reside in MyISAM. We could think of
restricting system tables to exist only in MyISAM, but there could be
future needs for these system tables to be part of other engines by
design. For example, NDB Cluster expects some tables to be in the
InnoDB or NDB engine. This calls for a solution by which system tables
can be supported by any desired engine, with minimal effort.
FIX:
The solution provides a handlerton interface with which the mysqld
server can query a particular storage engine's handlerton for the
system tables it supports. This way each storage engine layer can
define its own system database and system tables.
The check_engine() function uses the new handlerton function
ha_check_if_supported_system_table() to check whether the db.tablename
provided in the DDL is supported by the SE.
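A minimal sketch of how check_engine() can use the new interface (the
error handling shown is an assumption):

  /* Inside check_engine(), before accepting the requested engine: */
  if (!ha_check_if_supported_system_table(*new_engine, db, table_name))
  {
    /* The SE does not claim this system table: reject the DDL. */
    my_error(ER_UNSUPPORTED_ENGINE, MYF(0),
             ha_resolve_storage_engine_name(*new_engine),
             db, table_name);
    return true;
  }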
Note: This fix modified a test in help.test which was moving the
mysql.help_* tables to InnoDB; moving them between engines was not the
primary intention of the test.
Problem - This failure occurred in the test added for the fix of
          Bug#13333431. The basic problem was that the value of
          report_port persisted even after the end of the test
          (i.e. rpl_end.inc), so the assertion in the test fails if
          the test is executed again.
Fix - The server is restarted with the default value passed to
      report_port after testing the two expected cases, so that the
      next run of the test does not encounter the previous value of
      report_port.
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong calculation of the key length during
creation of the reference for the sort-order index. The problem is
that keyuse->used_tables can have OUTER_REF_TABLE_BIT set while the
used_tables parameter of create_ref_for_key() does not. So key parts
which have OUTER_REF_TABLE_BIT are omitted, which can lead to an
incorrect (zero) key length calculation.
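A condensed sketch of the fix idea inside create_ref_for_key()
(simplified; the surrounding loop over key parts is the server's):

  /* Treat outer references as available when deciding whether a key
     part can be used, so such key parts are counted in the key
     length. */
  table_map available = used_tables | OUTER_REF_TABLE_BIT;

  if (!(keyuse->used_tables & ~available))
  {
    /* usable key part: include it in the key length and part count */
  }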
Description: When the table has more than one unique or primary key,
an INSERT ... ON DUPLICATE KEY UPDATE statement is sensitive to the
order in which the storage engine checks the keys. Depending on this
order, the storage engine may determine different rows to mysqld, and
hence mysqld can update different rows on the master and the slave.
Solution: We mark INSERT ... ON DUPLICATE KEY UPDATE on a table with
more than one unique key as unsafe; therefore the event will be logged
in row format if that is available (ROW/MIXED). If only the STATEMENT
format is available, a warning is issued.
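A minimal sketch of the marking (unique_key_count() is a hypothetical
helper; the unsafe-flag name follows the server's existing pattern and
is an assumption):

  /* At statement preparation time, for INSERT ... ON DUPLICATE KEY
     UPDATE on a table with more than one unique (or primary) key: */
  if (duplicate_handling == DUP_UPDATE && unique_key_count(table) > 1)
    lex->set_stmt_unsafe(LEX::BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS);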
Background:
- As described in MySQL Internals Prepared Stored
  (http://forge.mysql.com/wiki/MySQL_Internals_Prepared_Stored),
  the optimizer sometimes makes destructive changes to the parsed
  LEX object (Item tree), which makes it impossible to re-use that
  tree for PS/SP re-execution.
- In order to be able to re-use the Item tree, the destructive changes
  are remembered and rolled back after statement execution.
The problem discovered by this bug was that the objects representing
the GROUP BY clause were not restored after query execution. So, the
GROUP BY part of the statement could not be properly re-initialized
for re-execution after destructive changes.
Those objects are not part of the Item tree, so they cannot be saved
using the Item-tree approach.
The fix is as follows:
- introduce a new array in st_select_lex to store the original ORDER
  pointers representing the GROUP BY clause;
- initialize this array in fix_prepare_information();
- restore the list of GROUP BY items in reinit_stmt_before_use().
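A simplified stand-in for the save/restore mechanics (not the server's
actual types):

  #include <cstddef>
  #include <vector>

  struct ORDER { ORDER* next = nullptr; /* Item** item; ... */ };

  struct select_lex_sketch {
    ORDER* group_list = nullptr;   /* live GROUP BY chain          */
    std::vector<ORDER*> saved;     /* original pointers, in order  */

    void save_group_list()         /* fix_prepare_information()    */
    {
      for (ORDER* o = group_list; o != nullptr; o = o->next)
        saved.push_back(o);
    }

    void restore_group_list()      /* reinit_stmt_before_use()     */
    {
      group_list = saved.empty() ? nullptr : saved[0];
      for (std::size_t i = 0; i < saved.size(); i++)
        saved[i]->next = (i + 1 < saved.size()) ? saved[i + 1] : nullptr;
    }
  };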
Fix the calculation of the next autoinc value when offset > 1. Some of the
results have changed due to the changes in the allocation calculation. The
new calculation will result in slightly bigger gaps for bulk inserts.
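A hedged arithmetic sketch of the intended sequence semantics (not
InnoDB's actual routine): generated values follow
offset + N * increment, so the next value after `current` is the
smallest member of that sequence above it.

  #include <cstdint>

  uint64_t next_autoinc(uint64_t current, uint64_t increment,
                        uint64_t offset)
  {
    if (current < offset)
      return offset;                      /* first point on the grid */
    /* Round up to the next point on the offset + N*increment grid. */
    uint64_t n = (current - offset) / increment + 1;
    return offset + n * increment;
  }

  /* e.g. increment=10, offset=5:  3 -> 5,  5 -> 15,  12 -> 15 */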
rb://866 Approved by Jimmy Yang.
Backported from mysql-trunk (5.6)
HANG IN PREPARING WITH 100% CPU USAGE
An infinite loop in the subselect_indexsubquery_engine::exec()
function caused a server hang with 100% CPU usage.
The BLACKHOLE storage engine did not update the handler's
table->status variable after index operations, which caused an
infinite "while (!table->status)" execution.
The index access methods of the BLACKHOLE engine handler have been
updated to set the table->status variable to STATUS_NOT_FOUND or 0
when such a method returns HA_ERR_END_OF_FILE or 0, respectively.
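A minimal sketch of the pattern, applied to one of the index access
methods (the full fix touches each such method):

  int ha_blackhole::index_read_map(uchar* buf, const uchar* key,
                                   key_part_map keypart_map,
                                   enum ha_rkey_function find_flag)
  {
    int rc = HA_ERR_END_OF_FILE;          /* BLACKHOLE has no rows */
    /* Keep table->status consistent with the return code so that
       callers' "while (!table->status)" loops terminate. */
    table->status = rc ? STATUS_NOT_FOUND : 0;
    return rc;
  }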
Analysis:
-------------------------------
According to the Manual
(http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html):
"Column, index, stored routine, and event names are not case sensitive
on any platform, nor are column aliases."
In other words, 'lower_case_table_names' does not affect the behaviour
of those identifiers.
On the other hand, trigger names are case sensitive on some platforms,
and case insensitive on others. 'lower_case_table_names' does not
affect the behaviour of trigger names either.
The bug was that SHOW statements did a case-sensitive comparison for
stored procedure / stored function / event names.
Fix:
Modified the code so that the comparison is case insensitive for
routines and events in SHOW operations.
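A condensed sketch of the earlier code fix this commit's tests depend
on (my_strcasecmp() with the system charset is the server's usual
case-insensitive comparison; the exact call site is an assumption):

  /* Before: case-sensitive comparison, wrong for routine and event
     names:
       if (strcmp(item_name, wild)) ...
     After: case-insensitive, matching the documented semantics.    */
  if (my_strcasecmp(system_charset_info, item_name, wild))
    continue;   /* name does not match the SHOW pattern */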
As part of this commit, we are only fixing the test failures caused by
the actual code fix.