Fixed some mtr test problems
dbug/tests.c:
Fixed compiler warnings
mysql-test/r/handlersocket.result:
Fixed so that plugin_license is written
mysql-test/suite/innodb/t/innodb_bug60196.test:
Force sorted results, as the output order sometimes differed on Windows
mysql-test/suite/rpl/t/rpl_heartbeat_basic.test:
Prolonged the test, as it failed on Windows
mysql-test/t/handlersocket.test:
Fixed so that plugin_license is written
plugin/handler_socket/handlersocket/handlersocket.cpp:
Use maria_declare_plugin
plugin/handler_socket/handlersocket/mysql_incl.hpp:
Fixed compiler warning
plugin/handler_socket/libhsclient/auto_addrinfo.hpp:
Fixed compiler warning
sql/handler.h:
Fixed typo
sql/sql_plugin.cc:
Fixed a bug that caused the plugin library name to appear twice in the error message
storage/maria/ma_checkpoint.c:
Fixed compiler warning
storage/maria/ma_loghandler.c:
Fixed compiler warning
unittest/mysys/base64-t.c:
Fixed compiler warning
unittest/mysys/bitmap-t.c:
Fixed compiler warning
unittest/mysys/my_malloc-t.c:
Fixed compiler warning
The fix is done by doing an autocommit for TRUNCATE TABLE inside Aria (see the sketch below).
storage/maria/ha_maria.cc:
Force a commit for TRUNCATE TABLE inside lock tables
Check that we don't call TRUNCATE with concurrent inserts going on.
Make ha_maria::implicit_commit faster when we don't have Aria tables in the transaction.
(Most of the patch is just re-indentation because I removed an if level)
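As a hedged illustration (table name hypothetical), the pattern this
change makes safe under LOCK TABLES:

  CREATE TABLE t1 (a INT) ENGINE=Aria;
  LOCK TABLES t1 WRITE;
  INSERT INTO t1 VALUES (1),(2);
  TRUNCATE TABLE t1;         # now forces a commit inside Aria
  SELECT COUNT(*) FROM t1;   # 0
  UNLOCK TABLES;
  DROP TABLE t1;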
The optimizer chose a less efficient execution plan due to the following
defects in the code:
1. The generic handler function handler::keyread_time did not take into
account that for clustered primary keys the record data is included in
each index entry.
2. The function make_join_readinfo erroneously decided that an index-only
scan could not be used if a join cache was employed (see the sketch below).
No additional test case was added; some of the existing test results were
adjusted.
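A hedged sketch (schema hypothetical) of the plan shape affected by
defect 2; with the fix, the join-buffered inner table can be read with
an index-only scan:

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT, KEY (a)) ENGINE=InnoDB;
  # t2 is expected to show "Using index; Using join buffer"
  EXPLAIN SELECT STRAIGHT_JOIN t1.a, t2.a FROM t1, t2;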
Changed HA_EXTRA_NORMAL to HA_EXTRA_NOT_USED (cleaner)
mysql-test/suite/maria/lock.result:
More extensive tests of LOCK TABLE with FLUSH and REPAIR
mysql-test/suite/maria/lock.test:
More extensive tests of LOCK TABLE with FLUSH and REPAIR
sql/sql_admin.cc:
Fixed so that REPAIR TABLE ... USE_FRM works with LOCK TABLES (see the sketch after the file notes below)
sql/sql_base.cc:
Ensure that transactions are closed in Aria when doing flush
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
Don't call extra() multiple times for a table in close_all_tables_for_name()
Added a test of table_list->table, as it can be unset in error situations
sql/sql_partition.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_reload.cc:
Fixed comment
sql/sql_table.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_trigger.cc:
HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_truncate.cc:
HA_EXTRA_FORCE_REOPEN -> HA_EXTRA_PREPARE_FOR_DROP for truncate; this speeds up TRUNCATE TABLE by not having to flush the cache to disk.
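A hedged sketch (table name hypothetical) of the
REPAIR TABLE ... USE_FRM under LOCK TABLES case fixed above:

  CREATE TABLE t1 (a INT) ENGINE=Aria;
  LOCK TABLES t1 WRITE;
  REPAIR TABLE t1 USE_FRM;
  UNLOCK TABLES;
  DROP TABLE t1;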
mysql-test/suite/maria/maria-partitioning.result:
New test case
mysql-test/suite/maria/maria-partitioning.test:
New test case
sql/sql_base.cc:
Ignore HA_EXTRA_NORMAL for wait_while_table_is_used()
More DBUG
sql/sql_partition.cc:
Don't use HA_EXTRA_FORCE_REOPEN for wait_while_table_is_used(), as the table is opened multiple times (in prep_alter_part_table)
This fixes the assert in Aria that checks that a table is not open multiple times when HA_EXTRA_FORCE_REOPEN is issued
- 5.5 was missing calls to ha_extra(HA_EXTRA_PREPARE_FOR_DROP | HA_EXTRA_PREPARE_FOR_RENAME); lost in the 5.3 -> 5.5 merge
sql/sql_admin.cc:
Updated arguments for close_all_tables_for_name
sql/sql_base.h:
Updated arguments for close_all_tables_for_name
sql/sql_partition.cc:
Updated arguments for close_all_tables_for_name
sql/sql_table.cc:
Updated arguments for close_all_tables_for_name
Removed the test of kill, as we have already called ha_extra(HA_EXTRA_PREPARE_FOR_DROP) and the table may be inconsistent.
sql/sql_trigger.cc:
Updated arguments for close_all_tables_for_name
sql/sql_truncate.cc:
For truncate that is done with drop + recreate, signal that the table will be dropped.
This will continue the test case even if there was an error, which makes
it easier to run a test that contains many sub-tests against one engine.
(originally by Monty)
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
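A hedged illustration (schema hypothetical): the ALL predicate below is
not null-rejecting for t2 (it is TRUE for a NULL-complemented row when
the subquery result is empty), so the LEFT JOIN must not be converted
to an inner join:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE TABLE t3 (c INT);
  INSERT INTO t1 VALUES (1);
  # t2 and t3 are empty: the NULL-complemented row must survive
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a
  WHERE t2.b > ALL (SELECT c FROM t3);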
BY A CONCURRENT TRANSACTION
The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone, but InnoDB did not provide a clone operation:
there was no ha_innobase::clone(), and the generic handler::clone() does
not take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read while for the other index we
did a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the base class
handler::clone() and then does any additional work required, in this
case setting ha_innobase->prebuilt->select_lock_type correctly.
rb://1060 approved by Marko
The failures are missing entries in the slow query log. The reason for
the failures is sleep() calls with a short duration (10ms), which is less
than the default system timer resolution for the various WaitForXXXObject
functions (15.6ms) and thus can't work reliably.
The fix is to make the sleeps a tiny bit longer (20ms instead of 10ms)
in the test.
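A hedged sketch of the kind of change, assuming mysqltest sleep
commands (the exact statements are not shown here):

  # before: 10ms, below the ~15.6ms Windows timer resolution
  --sleep 0.01
  # after: 20ms, long enough to be measured reliably
  --sleep 0.02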
let x = `SELECT <something>`
The fix is to detect the "no active connection" condition, report an
error, and die.
Note that the check for no active connection was already in place for
ordinary commands, and was missing only for the assign-variable command.
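A hedged sketch in mysqltest syntax (connection name hypothetical) of
the pattern that now reports an error instead of crashing:

  connect (con1,localhost,root,,);
  disconnect con1;
  # no active connection here: this now fails cleanly instead of crashing
  let $x= `SELECT 1`;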
CAUSES RESTORE PROBLEM
Merging the fix from mysql-5.1 to mysql-5.5
mysql-test/t/mysqldump.test:
The test case added as part of this fix differs from the one in
mysql-5.1: in mysql-5.5 and mysql-5.6, dropping the mysql database
fails when logging is enabled, hence those lines were removed.
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and slow_log
tables. That is because of the fix for Bug#26121. Hence, after dropping
the mysql database and applying the dump with logging enabled,
"'general_log' table not found" errors are logged into the server log file.
Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log and
slow_log tables, because the data dump of those tables takes locks, which
is not allowed for log tables.
Fix:
----
Instead of taking both the metadata and the data dump for those tables,
take only the metadata dump, which doesn't need locks.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database contains tables such as db, event, ... general_log,
... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables in the list and do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database contains tables such as db, event, ... general_log,
... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables in the list and do 'show create table'.
5) Release the lock.
While taking the metadata dump for general_log and slow_log, "CREATE TABLE"
is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped
"DROP TABLE" for those tables: "DROP TABLE" fails for them when logging is
enabled. The customer applies the dump with logging enabled, so a dump
containing "DROP TABLE" for the log tables would fail. Hence, the
"DROP TABLE" stmts for those tables were removed.
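A hedged sketch of the resulting dump output for the log tables (column
definitions abbreviated for illustration; the real dump uses the
server's SHOW CREATE TABLE output):

  # no DROP TABLE is emitted for the log tables
  CREATE TABLE IF NOT EXISTS general_log (
    event_time TIMESTAMP NOT NULL,
    argument MEDIUMTEXT NOT NULL
  ) ENGINE=CSV COMMENT='General log';
  CREATE TABLE IF NOT EXISTS slow_log (
    start_time TIMESTAMP NOT NULL,
    sql_text MEDIUMTEXT NOT NULL
  ) ENGINE=CSV COMMENT='Slow log';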
After the fix we could initially observe "Table 'mysql.general_log'
doesn't exist" errors; that is because in the customer scenario the
mysql database is dropped with logging enabled, so those errors are
expected. Once the dump taken before the "drop database mysql" is
applied, the errors no longer appear.
client/mysqldump.c:
In get_table_structure(), added code to skip the DROP TABLE stmts for the
general_log and slow_log tables, because those stmts fail when logging is
enabled. Also replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for
those tables, to make sure the CREATE stmt doesn't fail now that the DROP
stmts are gone.
In dump_all_tables_in_db(), added code to call get_table_structure() for
the general_log and slow_log tables.
mysql-test/r/mysqldump.result:
Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
Added a test as part of fix for Bug #11754178
The optimization of aggregate functions detected a constant under MAX() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
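A hedged minimal example of the pattern (table hypothetical); with the
fix, the always-FALSE WHERE makes the aggregate see an empty set:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  # must return NULL (no qualifying rows), not the constant 1
  SELECT MAX(1) FROM t1 WHERE 1 = 0;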
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY
Original comment:
-----------------
3714 Jorgen Loland 2012-03-01
BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT
QUERY OUTPUT
For all but simple grouped queries, temporary tables are used to
resolve grouping. In these cases, the list of grouping fields is
stored in the temporary table and grouping is resolved
there (e.g. by adding a unique constraint on the involved
fields). Because of this, grouping is already done when the rows
are read from the temporary table.
In the case where a group clause may be optimized away, grouping
does not have to be resolved using a temporary table. However, if
a temporary table is explicitly requested (e.g. because the
SQL_BUFFER_RESULT hint is used, or the statement is
INSERT...SELECT), a temporary table is used anyway. In this case,
the temporary table is created with an empty group list (because
the group clause was optimized away) and it will therefore not
create groups. Since the temporary table does not take care of
grouping, JOIN::group shall not be set to false in
make_simple_join(). This was fixed in bug 12578908.
However, there is an exception where make_simple_join() should
set JOIN::group to false even if the query uses a temporary table
that was explicitly requested but is not strictly needed. That
exception is if the loose index scan access method (explain
says "Using index for group-by") is used to read into the
temporary table. With loose index scan, grouping is resolved
by the access method. This is exactly what happens in this bug.
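A hedged illustration (schema hypothetical) of the exception case: a
grouped query where SQL_BUFFER_RESULT forces a temporary table while a
loose index scan resolves the grouping:

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
  # may show "Using index for group-by" together with "Using temporary"
  EXPLAIN SELECT SQL_BUFFER_RESULT a, MIN(b) FROM t1 GROUP BY a;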