This bug may manifest itself not only for queries for which
the index-merge access method is chosen. It may also show up
for queries with DISTINCT.
The bug was in how the Unique::get method used the merge_buffers
function. To compare elements in the queue employed by
merge_buffers() it must use the buffpek_compare function rather
than the function for binary comparison.
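For illustration, a sketch of query shapes that can take these code
paths (table and column names are hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b));
  -- index_merge access method:
  SELECT * FROM t1 WHERE a = 1 OR b = 2;
  -- DISTINCT may also go through Unique::get():
  SELECT DISTINCT a FROM t1;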
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it can't lock,
e.g. because it is already locked. In such a case it needs to undo the
locks on the already locked tables. And it does that. But it has also
notified the relevant table storage engine handlers that they should lock.
The only reliable way to ensure that the table handlers will give up
their locks is to end the transaction. This is what UNLOCK TABLES
does: it ends the transaction if there were tables locked by LOCK
TABLES.
It is possible to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES ends the transaction at its start
already.
Fixed by ending (again) the transaction when LOCK TABLES fails to
lock a table.
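A sketch of the scenario, assuming the second lock attempt fails
rather than waits (table names are hypothetical):

  -- connection 1:
  LOCK TABLES t2 WRITE;
  -- connection 2: t1 is locked successfully, but locking t2 fails;
  -- the fix ends the transaction so that the handler lock already
  -- taken on t1 is given up as well
  LOCK TABLES t1 WRITE, t2 WRITE;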
(Part of fix for Bug#25621 Error in my_thread_global_end(): 1 threads didn't exit)
Give correct error message if InnoDB table is not found
(This allows us to drop an InnoDB table that is not in the InnoDB registry)
wrong result for DML
When making key reference buffers over CHAR fields, whitespace (0x20)
must be used to fill in the remaining space in the field's buffer.
This is what Field_string::store() does.
Fixed Field_string::get_key_image() to do the same.
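A minimal sketch of a lookup that depends on correct padding of the
key image (names are hypothetical):

  CREATE TABLE t1 (c CHAR(10), KEY(c));
  INSERT INTO t1 VALUES ('ab');
  -- the ref access over KEY(c) builds a key image for 'ab'; without
  -- 0x20 padding of the remaining bytes the lookup can miss the row
  SELECT * FROM t1 WHERE c = 'ab';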
Problem: we may create a deadlock committing changes in mysql_alter_table() when
LOCK_open is set. Moreover, "in some variants of the ALTER TABLE commit
happens earlier, outside of LOCK_open, in other later - inside. It's no good, a storage
engine code that is called in between could expect a consistency - either there is a
transaction or there is not".
Fix: move the commit to happen earlier and outside of the LOCK_open.
A crash occurred for a query over an empty table right after its creation.
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have been actually read.
The added test case can reproduce the crash only with InnoDB tables and
only with 5.0.x.
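A sketch of the shape of the added test case (names are hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  -- the table is empty; JOIN::optimize may attempt to evaluate the
  -- WHERE condition although no record has been read
  SELECT * FROM t1 WHERE a = 1;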
INSERT...ON DUPLICATE KEY UPDATE may cause error 1032:
"Can't find record in ..." if we are inserting into
InnoDB table unique index of partial key with
underlying UTF-8 string field.
This error occurs because INSERT...ON DUPLICATE uses a wrong
procedure to copy string fields of multi-byte character sets
for index search.
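For illustration, a table shape matching the description (names and
the prefix length are hypothetical):

  CREATE TABLE t1 (
    s VARCHAR(255) CHARACTER SET utf8,
    UNIQUE KEY (s(10))
  ) ENGINE=InnoDB;
  INSERT INTO t1 VALUES ('string');
  -- before the fix this could fail with error 1032
  INSERT INTO t1 VALUES ('string')
    ON DUPLICATE KEY UPDATE s = 'new string';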
INSERT into InnoDB table may cause "ERROR 1062 (23000): Duplicate entry..."
errors or lost records after multi-row INSERT of the form:
"INSERT INTO t (id...) VALUES (NULL...) ON DUPLICATE KEY UPDATE id=VALUES(id)",
where "id" is an AUTO_INCREMENT column.
It happens because the InnoDB handler forgets to save the next insert id
after updating an auto_increment column with new values. As a result, the
last insert id stored inside InnoDB dictionary tables differs from its
cached thd->next_insert_id value.
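The quoted statement form, spelled out (column names are hypothetical):

  CREATE TABLE t1 (
    id INT AUTO_INCREMENT PRIMARY KEY,
    v INT
  ) ENGINE=InnoDB;
  -- before the fix the next insert id was not saved after "id" was
  -- updated via VALUES(id), so later inserts could reuse ids and
  -- report duplicate-key errors or lose rows
  INSERT INTO t1 (id, v) VALUES (NULL, 1), (NULL, 2)
    ON DUPLICATE KEY UPDATE id = VALUES(id);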
A wrong order of statements in QUICK_GROUP_MIN_MAX_SELECT::reset
caused a crash when a query with DISTINCT was executed by a loose scan
for an InnoDB table that had been emptied.
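A sketch of the scenario (names are hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY(a, b)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1);
  DELETE FROM t1;
  -- may be executed by a loose index scan
  -- (QUICK_GROUP_MIN_MAX_SELECT) over the now-empty table
  SELECT DISTINCT a FROM t1;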
It's not possible to flush the global status variables in 5.0
Update the test case so it works by recording the value of handle_rollback
before and comparing it to the value after.
If the error happens during DELETE IGNORE, nothing could be sent to the
client, thus leaving it frozen expecting a reply.
The problem was that if some error occurred, it wouldn't be reported to
the client because of IGNORE, but success wouldn't be reported either.
MySQL 4.1 would not freeze the client, but would report
ERROR 1105 (HY000): Unknown error
instead, which is also a bug.
The solution is to report success if we are in DELETE IGNORE and some
non-fatal error has happened.
The bug is present only in 4.1 and will be null-merged to 5.0.
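One hypothetical way to hit a non-fatal error during DELETE IGNORE
(the schema is illustrative, not from the original test case):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    pid INT,
    FOREIGN KEY (pid) REFERENCES parent (id)
  ) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1);
  -- the foreign key check fails; IGNORE suppresses the error, and
  -- before the fix neither an error nor success was sent back
  DELETE IGNORE FROM parent;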
For InnoDB, check value of thd->transaction.all.innodb_active_trans instead of thd->transaction.stmt.innobase_tid to see if we really need to rollback.
Currently SQL_BIG_RESULT is checked only at compile time.
However, additional optimizations may take place after
this check that change the sort method from 'filesort'
to sorting via index. As a result the actual plan
executed is not the one specified by the SQL_BIG_RESULT
hint. Similarly, there is no such test when executing
EXPLAIN, resulting in incorrect output.
The patch corrects the problem by testing for
SQL_BIG_RESULT during both the EXPLAIN and execution
phases.
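A sketch of the affected query shape (names are hypothetical); with
the fix, both EXPLAIN and execution honor the hint:

  CREATE TABLE t1 (a INT, KEY(a));
  INSERT INTO t1 VALUES (1), (2), (3);
  -- should report a filesort instead of sorting via the index
  EXPLAIN SELECT SQL_BIG_RESULT a FROM t1 GROUP BY a;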
The crash was caused by an invalid sequence of handler::** calls:
  ha_smth->index_init();
  ha_smth->index_next_same();    (2)
(2) is an invalid call as it was not preceded by any 'scan setup' call
like index_first() or index_read(). The cause was that QUICK_SELECT::reset()
didn't "fully reset" the quick select: the current QUICK_RANGE wasn't forgotten,
and the quick select might attempt to continue reading the range, which would
result in the above-mentioned invalid sequence of handler calls.
5.x versions are not affected by the bug: they already have the "range=NULL"
assignment that 4.1 lacks.
* don't use join cache when the incoming data set is already ordered
for ORDER BY
This choice must be made because the join cache will effectively
reverse the join order, and the results will be sorted by the index
of the table that uses the join cache.
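A sketch of the affected query shape (names are hypothetical): the
rows already arrive ordered by t1's index, so a join cache on t2 must
not be used:

  CREATE TABLE t1 (a INT, KEY(a));
  CREATE TABLE t2 (b INT);
  -- the index on t1.a delivers rows in ORDER BY order; buffering t2
  -- in a join cache would return the result in t2's order instead
  SELECT * FROM t1, t2 WHERE t1.a = t2.b ORDER BY t1.a;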
The bug was as follows: when merge_key_fields() encounters "t.key=X OR t.key=Y" it will
try to join them into ref_or_null access via "t.key=X OR NULL". In order to make this
inference it checks if Y<=>NULL, ignoring the fact that the value of Y may not yet be
known. The fix is to make the Y<=>NULL check only if the value of Y is known (i.e. it
is a constant).
TODO: When merging to 5.0, replace used_tables() with const_item() everywhere in merge_key_fields().
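For illustration (names are hypothetical), a case where Y is not a
constant because it comes from another table:

  CREATE TABLE t1 (a INT, KEY(a));
  CREATE TABLE t2 (b INT);
  -- here Y is t2.b, a field rather than a constant; before the fix
  -- merge_key_fields() could test t2.b<=>NULL although its value is
  -- not yet known at that point
  SELECT * FROM t1, t2 WHERE t1.a = 1 OR t1.a = t2.b;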