The index (key_part_1, key_part_2) was erroneously considered compatible
with the required ordering in the function test_if_order_by_key when
a query with an ORDER BY clause contained a condition of the form
key_part_1=const OR key_part_1 IS NULL
and the order list contained only key_part_2. This happened because the value
of the const_key_parts field in the KEYUSE structure was not formed correctly
for the keys that could be used for ref_or_null access.
This was fixed in the code of the update_ref_and_keys function.
The problem could not manifest itself for MyISAM databases because the
implementation of the keys_to_use_for_scanning() handler function always
returns an empty bitmap for the MyISAM engine.
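A query shape that exposed the problem, sketched against a hypothetical
table t (names are illustrative, not from the original report):

  CREATE TABLE t (
    key_part_1 INT,
    key_part_2 INT,
    KEY k (key_part_1, key_part_2)
  );

  -- ref_or_null access on key_part_1; before the fix the optimizer
  -- wrongly assumed the rows would already arrive ordered by key_part_2
  SELECT * FROM t
  WHERE key_part_1 = 1 OR key_part_1 IS NULL
  ORDER BY key_part_2;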
Default values of variables were not subject to the upper/lower bounds
and step size that applied when setting variables. Bounds and step are
now also applied to defaults; defaults are corrected silently, while
values given by the user are corrected with a warning thrown as needed.
Lastly, very large values could wrap around, starting from 0 again.
They are now bounded at the maximum value for the respective data type
if no lower maximum is specified in the variable's definition.
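A hypothetical session illustrating the corrected behavior (the variable,
its limits, and the exact warning text depend on the server version):

  SET GLOBAL max_connections = 1000000;  -- far above the variable's maximum
  SHOW WARNINGS;                         -- reports that the value was truncated
  SELECT @@global.max_connections;       -- clamped to the maximum, not wrapped to 0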
Problem: if a partial unique key is followed by a non-partial one, we declare
the second one as a primary key.
Fix: sort non-partial unique keys before partial ones.
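A sketch of a table shape that could hit this, under my reading of the
fix (names are illustrative):

  -- UNIQUE (a(10)) is partial (a prefix key), UNIQUE (b) is not;
  -- after the fix the non-partial UNIQUE (b) sorts first, so the key
  -- promoted to primary key is a full-column key
  CREATE TABLE t (
    a VARCHAR(100) NOT NULL,
    b INT NOT NULL,
    UNIQUE KEY (a(10)),
    UNIQUE KEY (b)
  ) ENGINE=InnoDB;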
The optimizer sets index traversal in reverse order only if there are
used key parts that are not compared to a constant.
However, using the primary key as an ORDER BY suffix rendered the check
incomplete: reverse order must still be used even if
all the parts of the secondary key are compared to a constant.
Fixed by relaxing the check and setting reverse traversal even when all
the secondary index keyparts are compared to a const.
Also account for the case when all the primary key parts are compared to a
constant.
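An illustrative query (hypothetical table) that should now use reverse
index traversal instead of a sort:

  CREATE TABLE t (
    pk INT PRIMARY KEY,
    a INT,
    b INT,
    KEY k (a, b)
  ) ENGINE=InnoDB;

  -- All secondary key parts are compared to constants and the primary
  -- key is the ORDER BY suffix; the index can be walked in reverse
  SELECT * FROM t WHERE a = 1 AND b = 2 ORDER BY pk DESC;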
The optimization that uses a unique index to remove GROUP BY did not
ensure that the index was actually used, thus violating the ORDER BY
that is implied by GROUP BY.
Fixed by replacing GROUP BY with ORDER BY if the GROUP BY clause contains
a unique index. In case GROUP BY ... ORDER BY NULL is used, GROUP BY is
simply removed.
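An illustration of the transformation (hypothetical table with a unique
non-nullable index):

  CREATE TABLE t (id INT NOT NULL, payload VARCHAR(10), UNIQUE KEY (id));

  -- GROUP BY over a unique index: rewritten to ORDER BY id, keeping
  -- the ordering implied by GROUP BY
  SELECT id, payload FROM t GROUP BY id;

  -- With ORDER BY NULL no ordering is required, so GROUP BY is removed
  SELECT id, payload FROM t GROUP BY id ORDER BY NULL;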
The range analysis module did not correctly signal to the
handler that a range represents a ref (EQ_RANGE flag). This caused
non-range queries like
SELECT ... FROM ... WHERE keypart_1=const AND ... AND keypart_n=const
ORDER BY ... FOR UPDATE
to wait for a lock unnecessarily if another running transaction used
SELECT ... FOR UPDATE on the same table.
Fixed by setting EQ_RANGE for all range accesses that represent
an equality predicate.
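A two-session sketch of the scenario (illustrative table with an index
over keypart_1, keypart_2):

  -- Session 1:
  BEGIN;
  SELECT * FROM t WHERE keypart_1 = 1 FOR UPDATE;

  -- Session 2: an equality predicate executed as a range; with
  -- EQ_RANGE set, the handler can lock only the matching records
  -- instead of waiting on session 1
  SELECT * FROM t WHERE keypart_1 = 2 ORDER BY keypart_2 FOR UPDATE;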
between perm and temp tables. Review fixes.
The original bug report complains that if we locked a temporary table
with LOCK TABLES statement, we would not leave LOCK TABLES mode
when this temporary table is dropped.
Additionally, the bug was escalated when it was discovered that
when a temporary transactional table that was previously
locked with a LOCK TABLES statement was dropped, further actions with
this table, such as UNLOCK TABLES, would lead to a crash.
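A minimal sequence that, per the description above, could trigger the
crash before the fix (engine and names are illustrative):

  CREATE TEMPORARY TABLE tmp (a INT) ENGINE=InnoDB;
  LOCK TABLES tmp WRITE;
  DROP TABLE tmp;     -- did not leave LOCK TABLES mode
  UNLOCK TABLES;      -- could crash before the fix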
The problem originates from incomplete support of transactional temporary
tables. When we added calls to handler::store_lock()/handler::external_lock()
to operations that work with such tables, we only covered the normal
server code flow and did not cover LOCK TABLES mode.
In LOCK TABLES mode, ::external_lock(LOCK) would sometimes be called without
a matching ::external_lock(UNLOCK), e.g. when a transactional temporary table
was dropped. Additionally, this table would be left in the list of locked
tables.
The patch aims to address this inadequacy. Now, whenever an instance
of 'handler' is destroyed, we assert that it was previously
external_lock(UNLOCK)-ed. All the places that violated this assert
were fixed.
This patch introduces no changes in behavior -- the discrepancy in
behavior will be fixed when we start calling ::store_lock()/::external_lock()
for all tables, regardless of whether they are transactional or not,
temporary or not.
ORDER BY primary_key on InnoDB table
Queries that use an InnoDB secondary index to retrieve
data don't need to sort in case of ORDER BY primary key
if the secondary index is compared to constant(s).
They can also skip sorting if ORDER BY contains both
the secondary key parts and the primary key parts (in
that order).
This is because InnoDB returns the rows in order of the
primary key for rows with the same values of the secondary
key columns.
Fixed by preventing temp table sort for the qualifying
queries.
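An example of a qualifying query (hypothetical InnoDB table):

  CREATE TABLE t (
    pk INT PRIMARY KEY,
    a INT,
    KEY k (a)
  ) ENGINE=InnoDB;

  -- Rows with a = 5 come back from index k already in primary key
  -- order, so the temp table sort can be skipped
  SELECT * FROM t WHERE a = 5 ORDER BY pk;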
by long running transaction
On Windows, open files can't be deleted. There was a special
upgraded lock mode (TL_WRITE instead of TL_WRITE_ALLOW_READ)
in ALTER TABLE to make sure nobody has the table opened
when deleting the old table in ALTER TABLE. This special mode
was causing ALTER TABLE to hang waiting on a lock inside InnoDB.
This special lock is no longer necessary as the server is
closing the tables it needs to delete in ALTER TABLE.
Fixed by removing the special lock.
Note that this also reverses the fix for bug 17264 that deals with
another consequence of this special lock mode being used.
Problem: we may break a multibyte char sequence when using a key
reduced to the maximum length allowed by a storage engine
(this leads to a failed assertion in the InnoDB code;
see also #17530).
Fix: align the truncated key length to a multibyte char boundary.
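An illustrative case (the exact limits depend on the engine and
version): an index over a utf8 column can exceed InnoDB's maximum key
length, forcing truncation, and the truncation point must not split a
character.

  -- 300 characters * 3 bytes (utf8) = 900 bytes, above InnoDB's
  -- historical 767-byte prefix limit; the key is truncated, and the
  -- truncated length must end on a character boundary
  CREATE TABLE t (
    c VARCHAR(300) CHARACTER SET utf8,
    KEY (c)
  ) ENGINE=InnoDB;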
This bug may manifest itself not only with queries for which
the index-merge access method is chosen. It may also show up
for queries with DISTINCT.
The bug was in how the Unique::get method used the merge_buffers
function. To compare elements in the queue employed by
merge_buffers() it must use the buffpek_compare function rather
than the function for binary comparison.
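A sketch of a query shape that can reach this code path (whether the
on-disk merge is actually used depends on data volume and buffer sizes):

  -- With enough distinct values to spill the in-memory tree to disk,
  -- Unique::get merges the sorted chunks via merge_buffers(), which
  -- must compare elements with buffpek_compare
  SELECT DISTINCT payload FROM big_table;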
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it can't lock,
e.g. because it's locked. In such a case it needs to undo the locks on
the already locked tables. And it does that. But it has also notified
the relevant table storage engine handlers that they should lock.
The only reliable way to ensure that the table handlers will give up
their locks is to end the transaction. This is what UNLOCK TABLES
does: it ends the transaction if tables were locked by LOCK
TABLES.
It is possible to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES already ends the transaction at its
start.
Fixed by ending (again) the transaction when LOCK TABLES fails to
lock a table.
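A sketch of the failure scenario (names are illustrative):

  -- t1 is locked successfully, then locking the second table fails;
  -- besides undoing the thread-level lock on t1, the server must end
  -- the transaction so the storage engine gives up its lock too
  LOCK TABLES t1 WRITE, no_such_table WRITE;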
(Part of fix for Bug#25621 Error in my_thread_global_end(): 1 threads didn't exit)
Give a correct error message if an InnoDB table is not found.
(This allows us to drop an InnoDB table that is not in the InnoDB registry.)
wrong result for DML
When making key reference buffers over CHAR fields, whitespace (0x20)
must be used to fill in the remaining space in the field's buffer.
This is what Field_string::store() does.
Fixed Field_string::get_key_image() to do the same.
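An illustrative lookup over a CHAR field (hypothetical table):

  CREATE TABLE t (c CHAR(10), KEY (c));

  -- 'abc' is stored space-padded to 10 characters; the key buffer
  -- built for the ref lookup must be padded the same way, or the
  -- comparison misses matching rows
  SELECT * FROM t WHERE c = 'abc';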
Problem: we may create a deadlock committing changes in mysql_alter_table() when
LOCK_open is set. Moreover, "in some variants of the ALTER TABLE commit
happens earlier, outside of LOCK_open, in other later - inside. It's no good, a storage
engine code that is called in between could expect a consistency - either there is a
transaction or there is not".
Fix: move the commit to happen earlier, outside of LOCK_open.
The server could crash for a query over an empty table right after its creation.
The crash was the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records had actually been read.
The added test case can reproduce the crash only with InnoDB tables and
only with 5.0.x.
INSERT...ON DUPLICATE KEY UPDATE may cause error 1032:
"Can't find record in ..." if we are inserting into an
InnoDB table with a unique index over a partial key on an
underlying UTF-8 string field.
This error occurs because INSERT...ON DUPLICATE uses a wrong
procedure to copy string fields of multi-byte character sets
for the index search.
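A sketch of the affected statement shape (names are illustrative):

  CREATE TABLE t (
    s VARCHAR(100) CHARACTER SET utf8,
    UNIQUE KEY (s(10))    -- partial key over a multi-byte string field
  ) ENGINE=InnoDB;

  INSERT INTO t (s) VALUES ('some value')
  ON DUPLICATE KEY UPDATE s = VALUES(s);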
INSERT into an InnoDB table may cause "ERROR 1062 (23000): Duplicate entry..."
errors or lost records after a multi-row INSERT of the form:
"INSERT INTO t (id...) VALUES (NULL...) ON DUPLICATE KEY UPDATE id=VALUES(id)",
where "id" is an AUTO_INCREMENT column.
It happens because the InnoDB handler forgets to save the next insert id after
updating an auto_increment column with new values. As a result, the last
insert id stored inside the InnoDB dictionary tables differs from its
cached thd->next_insert_id value.
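A sketch of the statement shape from the report (the table definition is
illustrative):

  CREATE TABLE t (
    id INT AUTO_INCREMENT PRIMARY KEY,
    k INT,
    UNIQUE KEY (k)
  ) ENGINE=InnoDB;

  -- After the duplicate-key update of the AUTO_INCREMENT column,
  -- the next insert id must be saved, or later inserts can hit
  -- duplicate-key errors or lose rows
  INSERT INTO t (id, k) VALUES (NULL, 1), (NULL, 2)
  ON DUPLICATE KEY UPDATE id = VALUES(id);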
A wrong order of statements in QUICK_GROUP_MIN_MAX_SELECT::reset
caused a crash when a query with DISTINCT was executed by a loose scan
for an InnoDB table that had been emptied.
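A sketch of the crashing scenario (illustrative):

  CREATE TABLE t (a INT, KEY (a)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1), (2);
  DELETE FROM t;

  -- May be executed by a loose index scan over the now-empty table;
  -- crashed before the statement order in reset() was corrected
  SELECT DISTINCT a FROM t;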
It's not possible to flush the global status variables in 5.0.
Updated the test case so it works by recording the value of handle_rollback
before and comparing it to the value after.
If an error happened during DELETE IGNORE, nothing could be sent to the
client, thus leaving it frozen expecting the reply.
The problem was that if some error occurred, it wouldn't be reported to
the client because of IGNORE, but success would not be reported either.
MySQL 4.1 would not freeze the client, but would report
ERROR 1105 (HY000): Unknown error
instead, which is also a bug.
The solution is to report success if we are in DELETE IGNORE and some
non-fatal error has happened.
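An illustration of the corrected behavior (hypothetical table):

  -- If a non-fatal error occurs for some rows, IGNORE suppresses it;
  -- the statement now reports success (with warnings) instead of
  -- leaving the client waiting for a reply
  DELETE IGNORE FROM t WHERE a = 1;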