statement being KILLed".
When a statement that was trying to obtain a write lock on a table, and
was blocked by an existing read lock, was killed, concurrent statements
that were trying to obtain read locks on the same table and were
blocked by the presence of this pending write lock were not woken up
and had to wait until the first read lock went away.
This problem was caused by the fact that we forgot to wake up threads
whose pending requests could have been satisfied after removing the
lock request for the killed thread.
The patch solves the problem by waking up those threads in this
situation.
A test for this bug will be added to 5.1 only, as it has much better
facilities for implementing it. In particular, by using I_S.PROCESSLIST
and the wait_condition.inc script we can wait until a thread is blocked
on a certain table lock without relying on an unconditional sleep
(whose use increases the time needed for test runs and might cause
spurious test failures on slower platforms).
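A minimal repro sketch of the scenario, assuming three client sessions
and an illustrative table name t1 (not from the original test):

    # session 1: take a read lock
    LOCK TABLE t1 READ;

    # session 2: blocks behind the read lock, leaving a pending write lock
    LOCK TABLE t1 WRITE;

    # session 3: blocks behind the pending write lock
    SELECT * FROM t1;

    # session 1: kill session 2; before the fix, session 3 was not woken
    # up and stayed blocked until session 1 released its read lock
    KILL 2;  # placeholder connection id for session 2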
When a table was explicitly locked with LOCK TABLES, no associated
tables from any related trigger on the subject table were locked.
As a result, the user could experience unexpected locking behavior
and statement failures similar to "failed: 1100: Table 'xx' was not
locked with LOCK TABLES".
This patch fixes the problem by making sure triggers are pre-loaded
for any statement if the subject table was explicitly locked with
LOCK TABLES.
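For illustration, a sketch of the failure mode; table and trigger
names are invented:

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    CREATE TRIGGER t1_ai AFTER INSERT ON t1
      FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);

    LOCK TABLES t1 WRITE;
    # Before the fix this failed with error 1100 because t2, used by
    # the trigger, was not locked together with t1.
    INSERT INTO t1 VALUES (1);
    UNLOCK TABLES;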
between perm and temp tables. Review fixes.
The original bug report complains that if we locked a temporary table
with the LOCK TABLES statement, we would not leave LOCK TABLES mode
when this temporary table was dropped.
Additionally, the bug was escalated when it was discovered that when
a temporary transactional table that was previously locked with the
LOCK TABLES statement was dropped, further actions with this table,
such as UNLOCK TABLES, would lead to a crash.
The problem originates from incomplete support of transactional temporary
tables. When we added calls to handler::store_lock()/handler::external_lock()
to operations that work with such tables, we only covered the normal
server code flow and did not cover LOCK TABLES mode.
In LOCK TABLES mode, ::external_lock(LOCK) would sometimes be called without
matching ::external_lock(UNLOCK), e.g. when a transactional temporary table
was dropped. Additionally, this table would be left in the list of LOCKed
TABLES.
The patch aims to address this inadequacy. Now, whenever an instance
of 'handler' is destroyed, we assert that it was previously
external_lock(UNLOCK)-ed. All the places that violated this assert
were fixed.
This patch introduces no changes in behavior -- the discrepancy in
behavior will be fixed when we start calling ::store_lock()/::external_lock()
for all tables, regardless whether they are transactional or not,
temporary or not.
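A sketch of the escalated scenario, assuming InnoDB as the
transactional engine (names are illustrative):

    CREATE TEMPORARY TABLE t1 (a INT) ENGINE=InnoDB;
    LOCK TABLES t1 WRITE;
    DROP TABLE t1;    # table stays in the list of locked tables
    UNLOCK TABLES;    # could crash here before the fix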
INSERT/DELETE/UPDATE followed by ALTER TABLE within LOCK TABLES
may cause table corruption on Windows.
That happens because ALTER TABLE writes outdated shared state
info into the index file.
Fixed by removing the obsolete workaround.
Affects MyISAM tables on Windows only.
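The affected pattern, roughly (MyISAM on Windows; names are
illustrative):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    LOCK TABLES t1 WRITE;
    INSERT INTO t1 VALUES (1);
    # ALTER TABLE wrote outdated shared state into the .MYI index
    # file here, which could corrupt the table.
    ALTER TABLE t1 ADD COLUMN b INT;
    UNLOCK TABLES;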
"Federated Denial of Service"
The Federated storage engine used to attempt to open connections
within its ::create() and ::open() methods, which are invoked while
the LOCK_open mutex is held by mysqld. As a result, no other client
session could open tables while Federated was attempting to open a
connection.
Long DNS lookup times would stall mysqld's operation, and a rogue
connection string pointing at a remote server that simply stalls
during the handshake could stall mysqld for a much longer period of
time.
This patch moves the opening of the connection to much later, when
Federated actually issues queries, by which time the LOCK_open mutex
is no longer held.
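For context, the remote connection is defined by the table's
CONNECTION string, e.g. (host and names are illustrative):

    CREATE TABLE t1 (a INT) ENGINE=FEDERATED
      CONNECTION='mysql://user:pass@remote-host:3306/db/t1';
    # Before the fix, merely opening t1 connected to remote-host while
    # LOCK_open was held; now the connection is deferred until a query
    # such as this one is actually issued.
    SELECT * FROM t1;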
binary SHOW CREATE TABLE or SELECT FROM I_S.
The problem is that mysqldump generates an incorrect dump for a table
with a non-ASCII column name if mysqldump's character set is ASCII.
The fix is to:
1. Switch character_set_client for mysqldump's connection to binary
before issuing the SHOW CREATE TABLE statement, in order to avoid
conversion.
2. Dump statements that switch character_set_client to UTF8 and back
around the CREATE TABLE statement.
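The resulting dump output then has roughly this shape (a sketch, not
verbatim mysqldump output; the column name is an arbitrary non-ASCII
example):

    SET @saved_cs_client = @@character_set_client;
    SET character_set_client = utf8;
    CREATE TABLE `t1` (
      -- non-ASCII column name preserved without conversion
      `колонка` INT
    );
    SET character_set_client = @saved_cs_client;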
When the SQL_BIG_RESULT flag is specified, SELECT should store items
from the select list in the filesort data and use them when sending
results to a client. The get_addon_fields function is responsible for
creating the necessary structures for that, but it was allowed to do
so only for SELECT and INSERT .. SELECT queries, which made
SQL_BIG_RESULT useless for CREATE .. SELECT queries.
Now get_addon_fields also allows storing select list items in the
filesort data for CREATE .. SELECT queries.
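For example, a statement of this shape now benefits from the fix
(schema is illustrative):

    CREATE TABLE t2
      SELECT SQL_BIG_RESULT a, COUNT(*) AS cnt
      FROM t1
      GROUP BY a;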
"getGeneratedKeys() does not work with FEDERATED table"
mysql_insert() expected the storage engine to update the row data
during the write_row() operation with the value of the new
auto-increment field. The field must be updated when only one row has
been inserted, as mysql_insert() would ignore thd->last_insert.
This patch implements HA_STATUS_AUTO support in ha_federated::info()
and ensures that ha_federated::write_row() does update the row's
auto-increment value.
The test case was written in C as the protocol's 'id' value is
accessible through libmysqlclient and not via SQL statements.
mysql-test-run.pl was extended to enable running the test binary.
If a primary key is defined over a column c of enum type, then the
EXPLAIN command for a look-up query of the form
SELECT * FROM t WHERE c=0
said that the query had an impossible WHERE condition, even though the
query correctly returned a non-empty result set when the table
contained rows with the error empty string for column c.
This misbehavior was due to a bug in the function
Field_enum::store(longlong,bool) that erroneously returned 1 if
the value to be stored was equal to 0.
Note that the method
Field_enum::store(const char *from,uint length,CHARSET_INFO *cs)
correctly returned 0 when a value of the error empty string
was stored.
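A sketch of the misbehavior; the table definition is illustrative, and
the INSERT assumes a non-strict sql_mode so that 0 is stored as the
error empty string:

    CREATE TABLE t (c ENUM('a','b') NOT NULL, PRIMARY KEY (c));
    INSERT INTO t VALUES (0);   # stores the error empty string ''
    # Before the fix this reported "Impossible WHERE" even though
    # the row inserted above matches:
    EXPLAIN SELECT * FROM t WHERE c = 0;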
This bug manifested itself in join queries with GROUP BY and HAVING
clauses whose SELECT lists contained DISTINCT. It occurred when the
optimizer could deduce that the result set would contain no more than
one row.
The bug could lead to wrong result sets for queries of this type
because HAVING conditions were erroneously ignored in some cases in
the function remove_duplicates.
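A hedged sketch of the affected query shape; schema and predicates are
invented, assuming t1.a is a primary key so the optimizer can deduce a
single-row result:

    SELECT DISTINCT t1.a
    FROM t1 JOIN t2 ON t2.a = t1.a
    WHERE t1.a = 1
    GROUP BY t1.a
    HAVING COUNT(t2.b) > 10;  # condition could be ignored before the fix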
After dumping triggers, mysqldump copied the value of the OLD_SQL_MODE
variable into the SQL_MODE variable. If the --compact option of
mysqldump was not set, the OLD_SQL_MODE variable held the value of the
uninitialized SQL_MODE variable, so the NO_AUTO_VALUE_ON_ZERO option
of the SQL_MODE variable was usually discarded.
This fix is for the non-"--compact" mode of mysqldump, because
mysqldump --compact never set SQL_MODE to the value
NO_AUTO_VALUE_ON_ZERO.
The dump_triggers_for_table function has been modified to restore the
previous value of the SQL_MODE variable after dumping triggers, using
the SAVE_SQL_MODE temporary variable.
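A sketch of the statement sequence the fixed dump_triggers_for_table
emits (shape only, not verbatim output):

    SET @SAVE_SQL_MODE = @@SQL_MODE;
    SET SQL_MODE = '';              # mode used while dumping triggers
    -- trigger definitions are dumped here
    SET SQL_MODE = @SAVE_SQL_MODE;  # previous value restored by the fix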
ORDER BY primary_key on InnoDB table
Queries that use an InnoDB secondary index to retrieve data don't need
to sort for ORDER BY primary key if the secondary index is compared to
constant(s).
They can also skip sorting if ORDER BY contains both the secondary key
parts and the primary key parts (in that order).
This is because InnoDB returns the rows in order of the
primary key for rows with the same values of the secondary
key columns.
Fixed by preventing temp table sort for the qualifying
queries.
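Both qualifying shapes, sketched (schema is illustrative):

    CREATE TABLE t (
      pk INT PRIMARY KEY,
      a INT,
      b INT,
      KEY k (a, b)
    ) ENGINE=InnoDB;

    # Secondary index compared to constants: rows already come back
    # in primary key order, so no filesort is needed.
    SELECT * FROM t WHERE a = 1 AND b = 2 ORDER BY pk;

    # Secondary key parts followed by primary key parts in ORDER BY:
    SELECT * FROM t WHERE a = 1 ORDER BY a, b, pk;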