The problem is that const parts of the HAVING condition are evaluated
even though the condition refers to aggregate functions. Table t1
becomes a const table after make_join_statistics() (table1.pk = 1),
so the HAVING condition is transformed into MAX(1) < 7 and taken away
from HAVING.
The fix is to skip evaluation of const parts of the HAVING condition
if the condition refers to aggregate functions.
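For illustration, a minimal repro of the query shape against the C API
(table, column, and constant values are assumptions, not the original
test case):

  #include <mysql.h>
  #include <stdio.h>

  /* Hypothetical repro; 'conn' is an established connection. */
  static void having_const_repro(MYSQL *conn)
  {
    mysql_query(conn, "CREATE TABLE t1 (pk INT PRIMARY KEY, a INT)");
    mysql_query(conn, "INSERT INTO t1 VALUES (1, 10)");

    /* WHERE pk = 1 makes t1 a const table in make_join_statistics();
       the buggy server then folded MAX(pk) into MAX(1) and evaluated
       the HAVING clause before any aggregation took place. */
    if (mysql_query(conn,
          "SELECT MAX(pk) FROM t1 WHERE pk = 1 HAVING MAX(pk) < 7"))
      fprintf(stderr, "query failed: %s\n", mysql_error(conn));
  }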
The handler function for reading one row from a specific index
was not optimized in the partitioning handler since it
used the default implementation.
No test case since it is performance only; verified by hand.
(without the unrelated whitespace changes):
------------------------------------------------------------------------
r7009 | jyang | 2010-04-29 20:44:56 +0300 (Thu, 29 Apr 2010) | 6 lines
branches/5.0: Port fix for bug #49238 (Creating/Dropping a temporary
table while at 1023 transactions will cause assert) from 5.1 to
branches/5.1. Separate action for return value DB_TOO_MANY_CONCURRENT_TRXS
from that of DB_MUST_GET_MORE_FILE_SPACE in row_drop_table_for_mysql().
------------------------------------------------------------------------
This crash occurred after ALTER TABLE was used on a temporary
transactional table locked by LOCK TABLES. Any later attempt to
execute LOCK/UNLOCK TABLES caused the server to crash.
The reason for the crash was that the list of locked tables would
end up holding a pointer to a freed table instance. This happened
because ALTER TABLE deleted the table without also removing the
table reference from the locked tables list.
This patch fixes the problem by making sure ALTER TABLE also
removes the table from the locked tables list.
Test case added to innodb_mysql.test.
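A sketch of the crashing statement sequence via the C API (names are
assumptions; the shipped test is the one in innodb_mysql.test):

  #include <mysql.h>

  /* Hypothetical sequence; 'conn' is an established connection. */
  static void locked_temp_table_repro(MYSQL *conn)
  {
    mysql_query(conn, "CREATE TEMPORARY TABLE t1 (a INT) ENGINE=InnoDB");
    mysql_query(conn, "LOCK TABLES t1 WRITE");
    /* ALTER TABLE deleted the temporary table instance without
       removing its reference from the locked tables list... */
    mysql_query(conn, "ALTER TABLE t1 ADD COLUMN b INT");
    /* ...so before the fix the next LOCK/UNLOCK TABLES dereferenced
       a freed table instance and crashed the server. */
    mysql_query(conn, "UNLOCK TABLES");
  }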
Since the original fix for this bug lowercases the search pattern, it's
not a good idea to copy the search pattern to the output instead of the
real table name found (depending on the case mode, these two names may
differ in case).
Fixed the information_schema.test failure by making sure the actual
table name of an information schema table is passed instead of the
lookup pattern, even when the pattern doesn't contain wildcards.
The atomic operations implementation on 5.1 has a few problems,
which might cause tests to abort randomly. Since no code in 5.1
uses atomic operations, simply remove the code.
Merge up to sunny.bains@oracle.com-20100625081841-ppulnkjk1qlazh82.
There are 8 more changesets in mysql-5.1-innodb, but PB2 shows a
failure for a test added in one of them. If that is resolved quickly
then those 8 more changesets will be merged too.
Fixed an incomplete historical ALTER TABLE MODIFY that trimmed the
Trigger privilege bit from the mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
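A hedged verification sketch (user, database, and table names are made
up; Table_priv is the mysql.tables_priv column named above):

  #include <mysql.h>
  #include <stdio.h>

  /* Illustrative check that the Trigger bit is retained. */
  static void check_trigger_priv(MYSQL *conn)
  {
    MYSQL_RES *res;
    MYSQL_ROW  row;

    mysql_query(conn, "GRANT TRIGGER ON db1.t1 TO 'u1'@'localhost'");
    mysql_query(conn,
        "SELECT Table_priv FROM mysql.tables_priv "
        "WHERE User = 'u1' AND Table_name = 't1'");
    if ((res= mysql_store_result(conn)) != NULL)
    {
      while ((row= mysql_fetch_row(res)))
        printf("Table_priv = %s\n", row[0]); /* expect 'Trigger' */
      mysql_free_result(res);
    }
  }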
POSIX requires that a signal handler defined with sigaction()
is not reset on delivering a signal unless SA_NODEFER or
SA_RESETHAND is set. It is therefore unnecessary to redefine
the handler on signal delivery on platforms where sigaction()
is used without those flags.
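A minimal POSIX sketch of the point: with neither SA_RESETHAND nor
SA_NODEFER set, the handler below stays installed across deliveries,
so re-registering it from inside the handler is redundant:

  #include <signal.h>
  #include <unistd.h>

  static volatile sig_atomic_t got_signal= 0;

  static void handler(int sig)
  {
    (void) sig;
    got_signal= 1;             /* no need to call sigaction() again */
  }

  int main(void)
  {
    struct sigaction sa;

    sa.sa_handler= handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags= 0;            /* no SA_RESETHAND, no SA_NODEFER */
    sigaction(SIGUSR1, &sa, NULL);

    kill(getpid(), SIGUSR1);   /* first delivery */
    kill(getpid(), SIGUSR1);   /* handler is still installed */
    return got_signal ? 0 : 1;
  }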
The problem is that QUICK_SELECT_DESC behaviour depends on the
used_key_parts value, which can be greater than the chosen
best_key_parts value if the engine supports clustered keys.
But used_key_parts was overwritten with the best_key_parts
value, preventing correct selection of the index access method.
The fix is to preserve the used_key_parts value for further use
in QUICK_SELECT_DESC.
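A hypothetical query shape that exercises QUICK_SELECT_DESC on an
engine with clustered primary keys (names and ranges are assumptions,
not the original test case):

  #include <mysql.h>

  /* On InnoDB a secondary key implicitly ends with the clustered
     primary key, so used_key_parts can exceed best_key_parts for a
     reverse-ordered range read like this one. */
  static void desc_range_read(MYSQL *conn)
  {
    mysql_query(conn,
        "CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY k(a)) "
        "ENGINE=InnoDB");
    mysql_query(conn,
        "SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 "
        "ORDER BY a DESC, pk DESC");
  }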
(READ UNCOMMITTED access failure of off-page DYNAMIC or COMPRESSED columns).
Records that contain incompletely written externally stored columns may
be accessed by a READ UNCOMMITTED transaction even without a
crash during an INSERT or UPDATE operation. I verified this as follows.
(1) added a delay after the mini-transaction for writing the clustered
index 'stub' record was committed (patch attached)
(2) started mysqld in gdb, setting breakpoints where the
assertions about READ UNCOMMITTED were added in the bug fix
(3) invoked ibtest3 --create-options=key_block_size=2
to create BLOBs in a COMPRESSED table
(4) invoked the following:
yes 'set transaction isolation level read uncommitted;
checksum table blobt3;select sleep(1);'|mysql -uroot test
(5) noted that one of the breakpoints was triggered
(return(NULL) in btr_rec_copy_externally_stored_field())
=== modified file 'storage/innodb_plugin/row/row0ins.c'
--- storage/innodb_plugin/row/row0ins.c 2010-06-30 08:17:25 +0000
+++ storage/innodb_plugin/row/row0ins.c 2010-06-30 08:17:25 +0000
@@ -2120,6 +2120,7 @@ function_exit:
rec_t* rec;
ulint* offsets;
mtr_start(&mtr);
+ os_thread_sleep(5000000);
btr_cur_search_to_nth_level(index, 0, entry, PAGE_CUR_LE,
BTR_MODIFY_TREE, &cursor, 0,
=== modified file 'storage/innodb_plugin/row/row0upd.c'
--- storage/innodb_plugin/row/row0upd.c 2010-06-30 08:11:55 +0000
+++ storage/innodb_plugin/row/row0upd.c 2010-06-30 08:11:55 +0000
@@ -1763,6 +1763,7 @@ row_upd_clust_rec(
rec_offs_init(offsets_);
mtr_start(mtr);
+ os_thread_sleep(5000000);
ut_a(btr_pcur_restore_position(BTR_MODIFY_TREE, pcur, mtr));
rec = btr_cur_get_rec(btr_cur);
automatic reconnect
A client with automatic reconnect enabled will see the error
message "Lost connection to MySQL server during query" if the
connection is lost between mysql_stmt_prepare() and
mysql_stmt_execute(). The mysql_stmt_errno() number, however,
is 0 -- not the corresponding value 2013.
This patch checks for the case where the prepared statement
has been pruned due to a connection loss (i.e., stmt->mysql
has been set to NULL) during a call to cli_advanced_command(),
and avoids changing the last_errno to the result of the last
reconnect attempt.
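A client-side sketch of the check (connection setup omitted; the point
is the mysql_stmt_errno() value after the connection is lost):

  #include <mysql.h>
  #include <stdio.h>
  #include <string.h>

  /* 'conn' is connected with automatic reconnect enabled, e.g. via
     mysql_options(conn, MYSQL_OPT_RECONNECT, &one). */
  static void stmt_errno_after_reconnect(MYSQL *conn)
  {
    MYSQL_STMT *stmt= mysql_stmt_init(conn);
    const char *sql= "SELECT 1";

    mysql_stmt_prepare(stmt, sql, (unsigned long) strlen(sql));

    /* ... the server connection is lost and re-established here ... */

    if (mysql_stmt_execute(stmt))
    {
      /* Before the fix this could print errno 0 next to the
         "Lost connection" message; it should report 2013. */
      fprintf(stderr, "errno=%u: %s\n",
              mysql_stmt_errno(stmt), mysql_stmt_error(stmt));
    }
    mysql_stmt_close(stmt);
  }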
dict_table_get_format(index->table)
The REDUNDANT and COMPACT formats store a local 768-byte prefix of
each externally stored column. No row_ext cache is needed, but we
initialized one nevertheless. When the BLOB pointer was zero, we would
ignore the locally stored prefix as well. This triggered an assertion
failure in row_undo_mod_upd_exist_sec().
row_build(): Allow ext==NULL when a REDUNDANT or COMPACT table
contains externally stored columns.
row_undo_search_clust_to_pcur(), row_upd_store_row(): Invoke
row_build() with ext==NULL on REDUNDANT and COMPACT tables.
rb://382 approved by Jimmy Yang
columns
When the server crashes after a record stub has been inserted and
before all its off-page columns have been written, the record will
contain incomplete off-page columns after crash recovery. Such records
may only be accessed at the READ UNCOMMITTED isolation level or when
rolling back a recovered transaction in recv_recovery_rollback_active().
Skip these records at the READ UNCOMMITTED isolation level.
TODO: Add assertions to check that the above assumptions hold when an
incomplete BLOB is encountered.
btr_rec_copy_externally_stored_field(): Return NULL if the field is
incomplete.
row_prebuilt_t::templ_contains_blob: Clarify what "BLOB" means in this
context. Hint: MySQL BLOBs are not the same as InnoDB BLOBs.
row_sel_store_mysql_rec(): Return FALSE if not all columns could be
retrieved. Previously this function always returned TRUE. Assert that
the record is not delete-marked.
row_sel_push_cache_row_for_mysql(): Return FALSE if not all columns
could be retrieved.
row_search_for_mysql(): Skip records containing incomplete off-page
columns. Assert that the transaction isolation level is READ
UNCOMMITTED.
rb://380 approved by Jimmy Yang
The default value of myisam_max_extra_sort_file_size could be
higher than the maximum accepted value, leading to warnings at
server startup.
The solution is to simply set the value to the maximum value for a
32-bit build (2147483647, one less than the current default). This
should be harmless as the option is currently unused in 5.1.
The problem was that a user could supply data in chunks
via the COM_STMT_SEND_LONG_DATA command to a prepared statement
parameter of a type other than TEXT or BLOB. This posed a problem
since other parameter types aren't set up to handle long data,
which would lead to a crash when attempting to use the supplied
data.
Given that long data can be supplied at any stage of a prepared
statement, coupled with the fact that the type of a parameter
marker might change between consecutive executions, the solution
is to validate at execution time each parameter marker for which
a data stream was provided. If the parameter type is not TEXT or
BLOB (that is, if the type is not able to handle a data stream),
an error is returned.
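For reference, the client-side pattern being validated, with
hypothetical statement text; after the fix, execution fails unless
parameter 0 is bound as a stream-capable (TEXT/BLOB) type:

  #include <mysql.h>
  #include <string.h>

  static void send_long_data_example(MYSQL *conn)
  {
    MYSQL_STMT *stmt= mysql_stmt_init(conn);
    MYSQL_BIND  bind;
    const char *sql=   "INSERT INTO t1 (blob_col) VALUES (?)";
    const char *chunk= "some data";

    mysql_stmt_prepare(stmt, sql, (unsigned long) strlen(sql));

    memset(&bind, 0, sizeof(bind));
    bind.buffer_type= MYSQL_TYPE_BLOB;  /* can accept a data stream */
    mysql_stmt_bind_param(stmt, &bind);

    /* Stream the value in chunks; parameter numbering starts at 0. */
    mysql_stmt_send_long_data(stmt, 0, chunk,
                              (unsigned long) strlen(chunk));

    /* Execution now validates each marker that received long data. */
    mysql_stmt_execute(stmt);
    mysql_stmt_close(stmt);
  }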
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
When these statements are binlogged, CURRENT_USER() is binlogged
as 'CURRENT_USER()'; it is not expanded to the real user name. When the
slave executes the log event, 'CURRENT_USER()' is expanded to the user
of the slave SQL thread, but the SQL thread's user name is always NULL.
This breaks replication.
After this patch, the session's user will be written into query log
events if these statements call CURRENT_USER() or if 'ALTER EVENT' does
not assign a definer.
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
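A two-connection sketch of the scenario (database and table names are
assumptions; in reality the statements run concurrently and the DROP
DATABASE blocks):

  #include <mysql.h>

  static void handler_deadlock_scenario(MYSQL *conn1, MYSQL *conn2)
  {
    /* conn1 keeps a HANDLER open on a table in db1. */
    mysql_query(conn1, "HANDLER db1.t1 OPEN");

    /* conn2: DROP DATABASE blocks on the open HANDLER while
       holding LOCK_mysql_create_db. */
    mysql_query(conn2, "DROP DATABASE db1");

    /* conn1: any CREATE/ALTER/DROP DATABASE now also waits for
       LOCK_mysql_create_db -- deadlock before the fix. */
    mysql_query(conn1, "CREATE DATABASE db2");
  }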