When a UNION statement forced conversion of a UTF8
charset value to a binary charset value, the byte
length of the result values was truncated to the
CHAR_LENGTH of the original UTF8 value.
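For illustration, here is a standalone C++ sketch (not server code) of why a byte count and CHAR_LENGTH differ for multi-byte UTF8 data, so truncating the binary result to the character count drops bytes:

    #include <cstdio>
    #include <cstring>

    // Count UTF-8 characters by skipping continuation bytes (10xxxxxx).
    static size_t utf8_char_length(const char *s)
    {
        size_t chars = 0;
        for (; *s; s++)
            if ((*s & 0xC0) != 0x80)
                chars++;
        return chars;
    }

    int main()
    {
        const char *val = "\xC3\xA9\xC3\xA9";  // "éé": 2 characters, 4 bytes
        size_t char_len = utf8_char_length(val);
        size_t byte_len = strlen(val);
        // Truncating the binary result to char_len (2) bytes would keep
        // only the first character and drop the second one entirely.
        printf("chars=%zu bytes=%zu\n", char_len, byte_len);
        return 0;
    }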
When the my_strnncollsp_simple function compares two strings and one is a prefix
of the other, it compares the characters in the remainder of the longer key
with the space character to decide whether the longer key is greater or less.
But the sort order of the collation was not used in this comparison. This could
lead to a wrong comparison result, a wrongly built index, or a wrong order of
the result set of a query with an ORDER BY clause.
Now the my_strnncollsp_simple function uses the collation sort order to compare
the characters in the remainder of the longer key with the space character.
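A toy model of the corrected comparison (illustrative C++ only; the real my_strnncollsp_simple works on the collation's actual sort-order table): the tail of the longer key must be compared to the space character through the sort order, not byte by byte:

    #include <cstdio>

    // Toy sort-order table: identity mapping except that, as in some
    // real collations, one character sorts below the space character.
    static unsigned char sort_order[256];

    static void init_sort_order()
    {
        for (int i = 0; i < 256; i++)
            sort_order[i] = (unsigned char) i;
        sort_order['\t'] = 0;  // pretend TAB sorts before space
    }

    // Compare s (length slen) with t (length tlen), space-padding the
    // shorter one, using sort_order for every comparison.
    static int strnncollsp_toy(const unsigned char *s, size_t slen,
                               const unsigned char *t, size_t tlen)
    {
        size_t len = slen < tlen ? slen : tlen;
        for (size_t i = 0; i < len; i++)
            if (sort_order[s[i]] != sort_order[t[i]])
                return sort_order[s[i]] < sort_order[t[i]] ? -1 : 1;
        // One string is a prefix of the other: compare the tail of the
        // longer string with space THROUGH THE SORT ORDER.
        const unsigned char *rest = slen > tlen ? s + len : t + len;
        size_t rest_len = (slen > tlen ? slen : tlen) - len;
        int sign = slen > tlen ? 1 : -1;
        for (size_t i = 0; i < rest_len; i++)
            if (sort_order[rest[i]] != sort_order[(unsigned char) ' '])
                return sort_order[rest[i]] < sort_order[' '] ? -sign : sign;
        return 0;
    }

    int main()
    {
        init_sort_order();
        // "a\t" vs "a": TAB sorts before space, so "a\t" < "a".
        int r = strnncollsp_toy((const unsigned char *) "a\t", 2,
                                (const unsigned char *) "a", 1);
        printf("%d\n", r);  // prints -1
        return 0;
    }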
The optimizer counts the aggregate functions that
appear as top-level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag for
whether there are aggregate functions.
While collecting this information it must not consider
aggregates that are not aggregated in the current
context; it must treat them as normal expressions
instead. Not doing so leads to incorrect data about
the query, e.g. running a query that actually has no
aggregate functions as if it had some (and hence is
expected to return only one row).
Fixed by ignoring the aggregates that are not aggregated
in the current context.
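A simplified model of the counting rule (the struct and function names here are illustrative, not the optimizer's data structures): only aggregates whose aggregation level matches the current subquery are counted; the rest behave as ordinary expressions in this context:

    #include <cstdio>
    #include <vector>

    // Simplified model of an aggregate item: nest_level is where the
    // item appears, aggr_level is where it is actually aggregated
    // (for SUM(outer.col) inside a subquery these differ).
    struct SumItem { int nest_level; int aggr_level; };

    // Count only the aggregates that are aggregated in this context.
    static int count_local_aggregates(const std::vector<SumItem> &items,
                                      int current_level)
    {
        int n = 0;
        for (const SumItem &it : items)
            if (it.aggr_level == current_level)
                n++;
        return n;
    }

    int main()
    {
        // One aggregate local to the subquery (level 1), one belonging
        // to the outer query (level 0) that merely appears at level 1.
        std::vector<SumItem> items = { {1, 1}, {1, 0} };
        printf("%d\n", count_local_aggregates(items, 1));  // prints 1
        return 0;
    }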
One other, smaller omission was discovered and fixed in
the process: the place of aggregation was not calculated
for user-defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func(), as is done for the rest of
the aggregate functions.
Sometimes the number of really updated rows (with changed
column values) cannot be determined at the server level
alone (e.g. if the storage engine does not return enough
column values to verify that). So the only dependable way
in such cases is to let the storage engine return that
information if possible.
Fixed the bug at the server level by providing a way for
the storage engine to return information about whether it
actually updated the row or the old and new column
values are the same. It can do that by returning
HA_ERR_RECORD_IS_THE_SAME in ha_update_row().
Note that each storage engine may choose not to try to
return this status code, so this behaviour remains
storage engine specific.
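A hedged sketch of how an engine can report this (toy types; only HA_ERR_RECORD_IS_THE_SAME is the real error name, and its numeric value here is a stand-in):

    #include <cstring>

    // Stand-in for the real handler error code from my_base.h; the
    // numeric value here is illustrative.
    static const int HA_ERR_RECORD_IS_THE_SAME = 169;

    struct toy_engine {
        // Sketch of an engine's update_row(): if the old and new row
        // images are identical, report that nothing was changed so the
        // server can count the row as matched but not updated.
        int update_row(const unsigned char *old_data,
                       const unsigned char *new_data, size_t reclength)
        {
            if (memcmp(old_data, new_data, reclength) == 0)
                return HA_ERR_RECORD_IS_THE_SAME;
            // ... perform the actual update here ...
            return 0;
        }
    };

    int main()
    {
        unsigned char oldr[4] = {1, 2, 3, 4}, newr[4] = {1, 2, 3, 4};
        toy_engine engine;
        // Identical row images: the engine reports "record is the same".
        return engine.update_row(oldr, newr, 4)
               == HA_ERR_RECORD_IS_THE_SAME ? 0 : 1;
    }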
Thanks to Martin Friebe for finding and submitting a fix for this bug!
A table with the maximum number of key segments and the maximum length key
name would have a corrupted .frm file, due to an incorrect calculation of the
complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
Sometimes the special 0 ENUM value was ALTERed to a normal
empty-string ENUM value.
The special 0 ENUM value has the same string representation
as a normal ENUM value defined as '' (empty string).
The do_field_string function was used to convert
ENUM data on an ALTER TABLE request, but this
function doesn't care about the numerical "indices" of
ENUM values, i.e. do_field_string doesn't distinguish
the special 0 value from an empty string value.
A new copy function, do_field_enum, has been added to
copy special 0 ENUM values without conversion to an empty
string.
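A toy model of the distinction (illustrative C++, not the server's Field code): ENUM values are stored as numeric indices, with 0 as the special invalid value and the defined '' member at index 1, so a correct copy preserves the index instead of round-tripping through the string:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Toy ENUM type: values are stored as numeric indices into 'members';
    // index 0 is the special "invalid" value, real members start at 1.
    struct ToyEnum {
        std::vector<std::string> members;  // members[0] is index 1

        std::string to_string(unsigned idx) const
        {
            return idx == 0 ? "" : members[idx - 1];
        }
    };

    int main()
    {
        ToyEnum e{{""}};           // ENUM('') : one member, the empty string

        unsigned special_zero = 0; // special 0 value
        unsigned empty_member = 1; // the defined '' member

        // String-based copy (what do_field_string effectively did):
        // both values render as "" and the special 0 collapses into
        // the '' member.
        printf("same string: %d\n",
               e.to_string(special_zero) == e.to_string(empty_member));

        // Index-based copy (what a do_field_enum-style copy preserves):
        unsigned copied = special_zero;        // copy the index itself
        printf("copied index: %u\n", copied);  // still 0
        return 0;
    }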
In ha_partition::position() we do not calculate the number of
the partition of the record; we use the m_last_part value instead, relying on
it being set elsewhere, e.g. by previous calls of ::write_row().
In replication we make neither of these calls before ::position().
Delete_row_log_event::do_exec_row calls find_and_fetch_row(), where
we used position() & rnd_pos() calls to find the record for the
PARTITION/INNODB table, as it possesses the InnoDB table flags.
Fixed by removing the HA_PRIMARY_KEY_REQUIRED_FOR_POSITION flag from the
PARTITION engine's table flags.
The problem showed up on a lookup into a BINARY index by a key ending
with spaces. It caused an assertion abort for a debug version and wrong
results for non-debug versions.
The problem occurred because the function _mi_pack_key stripped off
the trailing spaces from binary search keys, while the function _mi_make_key
did not do so when keys were inserted into the index.
Now the function _mi_pack_key does not remove the trailing spaces from
search keys if they are of the binary type.
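A sketch of the corrected stripping rule (toy code, not the actual _mi_pack_key): trailing spaces may only be stripped from search keys of space-padded, non-binary types:

    #include <cstdio>

    // Strip trailing spaces from a search key only when the key part is
    // not binary; BINARY keys must keep trailing spaces as significant.
    static size_t pack_key_length(const unsigned char *key, size_t length,
                                  bool is_binary)
    {
        if (is_binary)
            return length;              // keep trailing spaces
        while (length > 0 && key[length - 1] == ' ')
            length--;                   // space-padded type: strip padding
        return length;
    }

    int main()
    {
        const unsigned char key[] = "ab  ";
        printf("char: %zu, binary: %zu\n",
               pack_key_length(key, 4, false),   // 2: spaces stripped
               pack_key_length(key, 4, true));   // 4: spaces kept
        return 0;
    }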
MySQL uses _beginthread()/_endthread() instead of
_beginthreadex()/_endthreadex() to create/end its threads
on Windows.
According to MSDN, _endthread() does close the thread handle,
so there's no need for the handle to be closed explicitly.
Besides, WaitForSingleObject(handle, INFINITE) returns something other
than WAIT_OBJECT_0 only in cases that, according to MSDN, cannot happen
here (the other two possible return codes), so the CloseHandle() call
guarded by that check was actually dead code.
Fixed by removing the CloseHandle() call. No test case added
because it's not possible to test for the absence of dead code.
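For reference, a minimal Windows-only sketch of the resulting pattern (assuming a thread started with _beginthread(), whose handle is closed by the implicit _endthread() on thread exit):

    // Windows-only sketch; compile with a Windows toolchain.
    #include <windows.h>
    #include <process.h>
    #include <cstdint>

    static void thread_func(void *)
    {
        // ... thread work ...
        // On return, _endthread() runs implicitly and, per MSDN,
        // closes the thread handle itself.
    }

    int main()
    {
        uintptr_t handle = _beginthread(thread_func, 0, NULL);
        if (handle == (uintptr_t) -1L)
            return 1;
        // With an INFINITE timeout this returns WAIT_OBJECT_0 in all
        // practical cases, so there is no error branch here, and no
        // explicit CloseHandle() either: the thread closes its own handle.
        WaitForSingleObject((HANDLE) handle, INFINITE);
        return 0;
    }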
If one sets the MYSQL_READ_DEFAULT_FILE and MYSQL_READ_DEFAULT_GROUP options
after mysql_real_connect() has been called with that MYSQL instance,
these options will affect the next mysql_reconnect().
As we use a copy of the original MYSQL object inside mysql_reconnect(),
and mysql_real_connect() frees the options.my_cnf_file and _group strings,
we would free these twice when we execute mysql_reconnect() with the
same MYSQL for the second time.
I don't think we should ever read defaults files while handling
mysql_reconnect(), so I just set them to 0 for the temporary MYSQL
object there.
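The shape of the bug and the fix can be shown with a generic double-free model (illustrative C++; the struct and field names only mimic the real ones): the by-value copy's option string pointers must be cleared before any code path that frees them:

    #include <cstdlib>
    #include <cstring>

    struct toy_mysql {
        char *my_cnf_file;   // malloc'ed option string
        char *my_cnf_group;  // malloc'ed option string
    };

    static void free_options(toy_mysql *m)
    {
        free(m->my_cnf_file);
        free(m->my_cnf_group);
        m->my_cnf_file = m->my_cnf_group = NULL;
    }

    // Sketch of a reconnect that works on a by-value copy of the object.
    static void toy_reconnect(toy_mysql *m)
    {
        toy_mysql tmp = *m;  // shallow copy: shares the string pointers
        // Clear the copies so the connect path below cannot free the
        // strings still owned by *m (there is no need to re-read the
        // defaults files during a reconnect anyway).
        tmp.my_cnf_file = NULL;
        tmp.my_cnf_group = NULL;
        free_options(&tmp);  // stands in for the real connect's cleanup
        // ... connect using tmp, then move its state back into *m ...
    }

    int main()
    {
        toy_mysql m = { strdup("my.cnf"), strdup("client") };
        toy_reconnect(&m);
        toy_reconnect(&m);   // without the clearing, a double free
        free_options(&m);
        return 0;
    }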
When index_init() or rnd_init() returns an error, we still set
handler->inited to INDEX or RND in ha_index_init() and ha_rnd_init().
As the caller doesn't call ha_*_end() in this case, a DBUG_ASSERT
fails.
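A minimal sketch of the corrected pattern (modelled on, not copied from, the handler wrappers): inited is set only when the engine's init call succeeds:

    #include <cassert>

    struct toy_handler {
        enum init_state { NONE, INDEX, RND };
        init_state inited = NONE;

        virtual int index_init(unsigned idx) { (void) idx; return 0; }
        virtual ~toy_handler() = default;

        int ha_index_init(unsigned idx)
        {
            assert(inited == NONE);
            int error = index_init(idx);
            if (!error)            // set inited only on success, so the
                inited = INDEX;    // caller need not call ha_index_end()
            return error;
        }
    };

    // Engine whose index_init() fails, e.g. because of a broken index.
    struct failing_handler : toy_handler {
        int index_init(unsigned) override { return 1; }
    };

    int main()
    {
        failing_handler h;
        int error = h.ha_index_init(0);
        // After a failed init the handler is still NONE, so a later
        // ha_index_init() does not trip the assertion.
        assert(error && h.inited == toy_handler::NONE);
        return 0;
    }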
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it can't lock,
e.g. because it is already locked. In such a case it needs to undo the
locks on the already locked tables, and it does that. But it has also
notified the relevant table storage engine handlers that they should lock.
The only reliable way to ensure that the table handlers will give up
their locks is to end the transaction. This is what UNLOCK TABLES
does: it ends the transaction if tables were locked by LOCK
TABLES.
It is possible to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES already ends the transaction at its
start.
Fixed by ending (again) the transaction when LOCK TABLES fails to
lock a table.
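In outline (illustrative C++, not the server's actual lock_tables code), the failure path now mirrors what UNLOCK TABLES does:

    #include <vector>

    struct Table { bool lockable; };

    static bool lock_in_engine(Table &) { return true; }
    static void undo_thr_locks(std::vector<Table *> &) {}
    static void end_active_trans() { /* releases engine-level locks too */ }

    // Sketch: LOCK TABLES already ended the transaction at its start,
    // so ending it again on failure is safe, and it forces the engines
    // that were already notified to give up their locks.
    static bool lock_tables(std::vector<Table *> &tables)
    {
        std::vector<Table *> locked;
        for (Table *t : tables) {
            if (!t->lockable || !lock_in_engine(*t)) {
                undo_thr_locks(locked);  // undo server-level locks
                end_active_trans();      // and engine-level ones (the fix)
                return false;
            }
            locked.push_back(t);
        }
        return true;
    }

    int main()
    {
        Table a{true}, b{false};
        std::vector<Table *> tables = { &a, &b };
        // The second table can't be locked: the whole statement fails
        // and the transaction is ended.
        return lock_tables(tables) ? 0 : 1;
    }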
A query to which the loose scan optimization for grouping queries was
applied returned a wrong result set when the query was used with the
SQL_BIG_RESULT option.
The SQL_BIG_RESULT option forces the use of a sorting algorithm for grouping
queries instead of employing a suitable index. The current loose scan
optimization is applied only for single-table queries when the suitable
index is covering, and it does not make sense to use the sort algorithm in
this case. However, the create_sort_index function did not take into account
the possible choice of the loose scan to implement the DISTINCT operator,
which makes sorting unnecessary. Moreover, the current implementation of
the loose scan for queries with DISTINCT assumes that sorting will
never happen. Thus in this case create_sort_index should not call
the filesort function.
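The essence of the fix as a hedged sketch (simplified control flow and names, not the actual create_sort_index):

    #include <cstdio>

    struct ToyJoin {
        bool select_distinct;         // query has DISTINCT
        bool using_loose_index_scan;  // loose scan chosen for grouping
    };

    static void filesort(const ToyJoin &) { printf("filesort\n"); }

    // Sketch of create_sort_index(): when the loose index scan already
    // implements DISTINCT, the rows arrive without duplicates and in
    // order, so calling filesort() is unnecessary (and was incorrect).
    static void create_sort_index(ToyJoin &join)
    {
        if (join.select_distinct && join.using_loose_index_scan)
            return;  // loose scan makes sorting unnecessary
        filesort(join);
    }

    int main()
    {
        ToyJoin join = { true, true };
        create_sort_index(join);  // prints nothing: sort is skipped
        return 0;
    }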