Sometimes special 0 ENUM values were converted by ALTER TABLE to normal
empty string ENUM values.
The special 0 ENUM value has the same string representation
as a normal ENUM value defined as '' (empty string).
The do_field_string function was used to convert
ENUM data during ALTER TABLE, but this
function does not care about the numerical "indices" of
ENUM values, i.e. do_field_string does not distinguish
the special 0 value from an empty string value.
A new copy function called do_field_enum has been added to
copy special 0 ENUM values without conversion to an empty
string.
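The gist of the fix can be modeled outside the server. A minimal standalone C sketch (the struct and function names below are illustrative, not the actual Field/Copy_field code): the string-based copy collapses the special 0 value into the '' member, while the index-based copy preserves it.

  #include <stdio.h>
  #include <string.h>

  /* Minimal model of an ENUM column: value 0 is the special "not a
     member" value; values 1..count map to the member strings. */
  struct enum_field {
    unsigned value;          /* numeric index; 0 = special empty value */
    const char **members;    /* members[0] is index 1, and so on */
    unsigned count;
  };

  /* Both index 0 and a member defined as '' print as "", so a
     string-based copy cannot tell them apart. */
  static const char *enum_to_string(const struct enum_field *f) {
    return f->value == 0 ? "" : f->members[f->value - 1];
  }

  /* Buggy path (what do_field_string amounts to): copy via the
     string form, losing the distinction. */
  static void copy_as_string(struct enum_field *to,
                             const struct enum_field *from) {
    const char *s = enum_to_string(from);
    unsigned i;
    to->value = 0;
    for (i = 0; i < to->count; i++)
      if (strcmp(to->members[i], s) == 0) { to->value = i + 1; break; }
  }

  /* Fixed path (the idea behind do_field_enum): copy the numeric
     index itself, so the special 0 value survives ALTER TABLE. */
  static void copy_as_enum(struct enum_field *to,
                           const struct enum_field *from) {
    to->value = from->value;
  }

  int main(void) {
    const char *members[] = { "", "a" };        /* ENUM('','a') */
    struct enum_field from = { 0, members, 2 }; /* special 0 value */
    struct enum_field to1 = { 9, members, 2 };
    struct enum_field to2 = { 9, members, 2 };

    copy_as_string(&to1, &from);
    copy_as_enum(&to2, &from);
    printf("string copy: %u (0 collapsed into the '' member)\n", to1.value);
    printf("enum copy:   %u (special 0 preserved)\n", to2.value);
    return 0;
  }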
A lookup into a BINARY index by a key ending with spaces caused
an assertion abort in debug builds and wrong results in non-debug
builds.
The problem occurred because the function _mi_pack_key stripped
trailing spaces from binary search keys, while the function _mi_make_key
did not do so when keys were inserted into the index.
Now _mi_pack_key does not remove trailing spaces from
search keys if they are of the binary type.
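A standalone sketch of the guard (the principle, not the actual _mi_pack_key code): trailing 0x20 bytes are padding in a CHAR-style key but significant data in a BINARY key, so the stripping step must be skipped for binary key segments.

  #include <stdio.h>

  /* Stand-in for the search-key packing step: strip trailing spaces
     from a space-padded key, but leave binary keys untouched, since
     0x20 is significant data there. */
  static size_t pack_key(const char *key, size_t len, int is_binary) {
    if (!is_binary)                 /* CHAR-style key: ' ' is padding */
      while (len > 0 && key[len - 1] == ' ')
        len--;
    return len;                     /* packed key length */
  }

  int main(void) {
    const char key[4] = { 'a', 'b', ' ', ' ' };  /* key ending with spaces */
    printf("char key packs to %zu bytes\n", pack_key(key, 4, 0));   /* 2 */
    printf("binary key packs to %zu bytes\n", pack_key(key, 4, 1)); /* 4 */
    return 0;
  }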
end_log_pos values within a transaction are relative to
the start of the transaction rather than absolute;
we fix those groups in situ before writing the log out.
Also added comments and handling for groups with very
large single events, as suggested by Guilhem.
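The in-situ fixup amounts to one pass over the buffered group, patching each event header. A standalone C sketch, assuming the v4 common-header layout (event length at byte offset 9, end_log_pos at byte offset 13, both little-endian); the function names are illustrative:

  #include <stdint.h>
  #include <stdio.h>

  /* Assumed v4 binlog common-header offsets. */
  #define EVENT_LEN_OFFSET 9
  #define LOG_POS_OFFSET  13

  static uint32_t get_le32(const unsigned char *p) {
    return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
           (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
  }

  static void put_le32(unsigned char *p, uint32_t v) {
    p[0] = (unsigned char)v;         p[1] = (unsigned char)(v >> 8);
    p[2] = (unsigned char)(v >> 16); p[3] = (unsigned char)(v >> 24);
  }

  /* Walk the buffered transaction group and shift every event's
     end_log_pos from transaction-relative to absolute; group_start
     is the log position at which the group will be written. */
  static void fix_group_positions(unsigned char *buf, size_t buf_len,
                                  uint32_t group_start) {
    size_t off = 0;
    while (off + LOG_POS_OFFSET + 4 <= buf_len) {
      uint32_t ev_len = get_le32(buf + off + EVENT_LEN_OFFSET);
      uint32_t rel    = get_le32(buf + off + LOG_POS_OFFSET);
      put_le32(buf + off + LOG_POS_OFFSET, rel + group_start);
      if (ev_len == 0 || off + ev_len > buf_len)
        break;                       /* malformed group: stop */
      off += ev_len;
    }
  }

  int main(void) {
    unsigned char ev[19] = {0};      /* one bare 19-byte event header */
    put_le32(ev + EVENT_LEN_OFFSET, 19);
    put_le32(ev + LOG_POS_OFFSET, 19);  /* relative end_log_pos */
    fix_group_positions(ev, sizeof ev, 4000);
    printf("end_log_pos = %u\n", get_le32(ev + LOG_POS_OFFSET)); /* 4019 */
    return 0;
  }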
If one sets the MYSQL_READ_DEFAULT_FILE and MYSQL_READ_DEFAULT_GROUP options
after mysql_real_connect() has been called on a MYSQL instance,
these options will affect the next mysql_reconnect().
Since we use a copy of the original MYSQL object inside mysql_reconnect(),
and mysql_real_connect() frees the options.my_cnf_file and _group strings,
we end up freeing these strings twice when mysql_reconnect() is executed
with the same MYSQL object a second time.
I don't think we should ever read defaults files while handling
mysql_reconnect(), so I just set them to 0 for the temporary MYSQL object there.
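The hazard and the fix can be shown with a small standalone model (the struct and function names are illustrative, not the libmysql code): a shallow copy of an object that owns malloc'ed strings will free the originals unless the copy's pointers are zeroed first.

  #include <stdlib.h>
  #include <string.h>

  /* Model of the situation: a connection object owns malloc'ed
     option strings, and reconnect works on a shallow copy of it. */
  struct options { char *cnf_file; char *cnf_group; };
  struct conn    { struct options options; };

  /* mysql_real_connect() frees the defaults-file strings after
     reading them; this stub models just that side effect. */
  static void connect_frees_options(struct conn *c) {
    free(c->options.cnf_file);  c->options.cnf_file  = NULL;
    free(c->options.cnf_group); c->options.cnf_group = NULL;
  }

  static void reconnect(struct conn *c) {
    struct conn tmp = *c;        /* shallow copy: pointers are shared */
    /* The fix: defaults files are not re-read on reconnect, so zero
       the shared pointers in the temporary object. Without this,
       connect_frees_options(&tmp) would free memory that *c still
       points to, and the next reconnect would free it again. */
    tmp.options.cnf_file  = NULL;
    tmp.options.cnf_group = NULL;
    connect_frees_options(&tmp); /* frees nothing now */
  }

  int main(void) {
    struct conn c = { { strdup("my.cnf"), strdup("client") } };
    reconnect(&c);
    reconnect(&c);               /* safe: no double free */
    free(c.options.cnf_file);
    free(c.options.cnf_group);
    return 0;
  }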
Fix binlog writing so that end_log_pos is given correctly even
within transactions for both SHOW BINLOG EVENTS and SHOW MASTER STATUS,
that is, as absolute values (from the start of the log) rather than
relative values (from the start of the transaction).
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it can't lock,
e.g. because the table is already locked. In such a case it needs to
undo the locks on the already locked tables, and it does that. But it
has also notified the relevant storage engine handlers that they
should lock.
The only reliable way to ensure that the table handlers will give up
their locks is to end the transaction. This is what UNLOCK TABLES
does: it ends the transaction if tables were locked by LOCK TABLES.
It is possible to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES already ends the transaction at its
start.
Fixed by ending the transaction (again) when LOCK TABLES fails to
lock a table.
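A control-flow sketch of the fix with stubbed server calls (the names here are illustrative, not the actual 5.0 functions):

  #include <stdio.h>

  /* Stubs standing in for server internals. */
  static int lock_all_tables(void) { return -1; }  /* one table can't be locked */
  static void end_transaction(void) {
    puts("transaction ended; engines release their locks");
  }

  /* LOCK TABLES already ends any active transaction at its start,
     so ending it again on failure is always possible; that is the
     only reliable way to make the handlers give up the locks they
     were told to take. */
  static int do_lock_tables(void) {
    end_transaction();              /* happens at statement start */
    if (lock_all_tables() != 0) {
      end_transaction();            /* the fix: end it (again) on failure */
      return -1;
    }
    return 0;
  }

  int main(void) {
    return do_lock_tables() != 0;
  }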
The maximum compressed file size was calculated incorrectly, causing
a server crash on INSERT.
With this patch we use the proper maximum file size provided by zlib.
Affects 5.0 only.
A query to which the loose scan optimization for grouping queries was
applied returned a wrong result set when it was used with the
SQL_BIG_RESULT option.
The SQL_BIG_RESULT option forces use of a sorting algorithm for grouping
queries instead of employing a suitable index. The current loose scan
optimization is applied only to single-table queries when the suitable
index is covering, and it makes no sense to use the sorting algorithm in
this case. However, the create_sort_index function did not take into
account the possible choice of the loose scan to implement the DISTINCT
operator, which makes sorting unnecessary. Moreover, the current
implementation of the loose scan for queries with DISTINCT assumes that
sorting will never happen. Thus in this case create_sort_index should
not call the function filesort.
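A sketch of the resulting guard, with stand-in types (the hypothetical loose_scan flag marks the group-min-max access method; this shows the shape of the check, not the actual create_sort_index code):

  #include <stdbool.h>
  #include <stdio.h>

  struct plan {
    bool distinct;     /* query uses DISTINCT */
    bool loose_scan;   /* access method is a loose (group-min-max) scan */
  };

  static void filesort(void) { puts("filesort"); }
  static void read_rows(void) { puts("read rows"); }

  /* When the loose scan already implements DISTINCT its output needs
     no sorting, so filesort must be skipped even though
     SQL_BIG_RESULT selected the sorting strategy. */
  static void create_sort_index(const struct plan *p) {
    if (!(p->distinct && p->loose_scan))
      filesort();
    read_rows();
  }

  int main(void) {
    struct plan p = { true, true };
    create_sort_index(&p);   /* no filesort call */
    return 0;
  }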
INSERT into a table from a SELECT on the same table
with ORDER BY and LIMIT inserted different data
than the standalone SELECT ... ORDER BY ... LIMIT returns.
One part of the patch for bug #9676 improperly pushed the
LIMIT down to the temporary table in the presence of an ORDER BY
clause.
That part has been removed.
The C optimizer may decide that data accesses through a pointer of a
different type are not related to the original data (strict aliasing).
This is what happens in fetch_long_with_conversion() when it is called
as part of mysql_stmt_fetch(): it tries to check for truncation errors
by first storing a float (and other types of data) into a char * buffer
and then accessing it through a float pointer.
This is done to prevent the effects of excess precision
when using FPU registers.
However, the doublestore() macro converts a double pointer
to a union pointer, which violates the strict aliasing rule.
Fixed by making the intermediary variables volatile (so as
not to re-introduce the excess precision bug) and using
the intermediary value instead of the char * buffer.
Note that there can be loss of precision for both signed
and unsigned 64-bit integers converted to double and back,
so the check must stay, if only for compatibility reasons.
Based on the excellent analysis in bug 28400.
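A standalone demonstration of both the broken pattern and the fix (the values and function names are illustrative): the char-buffer round trip is exactly the kind of type-punned access that strict aliasing lets the compiler miscompile, while a volatile float variable forces a genuine store and reload.

  #include <stdio.h>

  /* Broken pattern: store a float through a char buffer, read it back
     through a float *. Under strict aliasing the compiler may treat
     the store and the load as unrelated and reorder or drop them. */
  static int truncated_via_buffer(double value, char *buff) {
    *(float *)buff = (float)value;        /* type-punned store */
    return *(float *)buff != value;       /* type-punned load */
  }

  /* The fix: keep the rounded value in a variable of the right type;
     volatile forces a real store/load through memory, which also
     avoids excess precision from x87 FPU registers. */
  static int truncated_fixed(double value) {
    volatile float f = (float)value;
    return f != value;
  }

  int main(void) {
    char buff[sizeof(float)];
    double v = 1.0000001;   /* not exactly representable as float */
    printf("buffer path: %d\n", truncated_via_buffer(v, buff));
    printf("fixed path:  %d\n", truncated_fixed(v));
    return 0;
  }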
Problem: the separator was not converted to the result character set,
so the result was a mixture of two different character sets,
which was especially bad for UCS2.
Fix: convert the separator to the result character set.
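The principle, shown with POSIX iconv rather than the server's own conversion routines: convert the separator into the result character set before gluing it between the values, so the result contains a single encoding.

  #include <iconv.h>
  #include <stdio.h>

  int main(void) {
    const char sep[] = ",";                /* single-byte separator */
    char sep_ucs2[8];
    char *in = (char *)sep, *out = sep_ucs2;
    size_t in_left = 1, out_left = sizeof sep_ucs2;
    size_t n;

    iconv_t cd = iconv_open("UCS-2BE", "ISO-8859-1");
    if (cd == (iconv_t)-1 ||
        iconv(cd, &in, &in_left, &out, &out_left) == (size_t)-1) {
      perror("iconv");
      return 1;
    }
    iconv_close(cd);

    n = sizeof sep_ucs2 - out_left;        /* 2 bytes: 0x00 0x2C */
    printf("separator is now %zu bytes in UCS-2\n", n);
    return 0;
  }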
Problem: mixing long and long long types in a comparison may lead to wrong results on some platforms.
Fix: prefer [unsigned] long long, i.e. [u]longlong, as it is used unconditionally in many places.
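A standalone illustration of the platform dependence: on ILP32 platforms long is 32 bits wide, so a 64-bit value pushed through a long silently loses its upper half and comparisons on it go wrong; declaring the variable [unsigned] long long keeps the full width everywhere.

  #include <stdio.h>

  int main(void) {
    long long big = 4294967296LL;    /* 2^32 */
    long narrowed = (long)big;       /* truncates to 0 where long is 32-bit */

    /* On a 32-bit-long platform the comparison is done on the
       truncated value and succeeds wrongly. */
    if (narrowed == 0)
      printf("truncated: 2^32 compared equal to 0\n");
    else
      printf("long is 64-bit here; no truncation\n");

    printf("sizeof(long)=%zu sizeof(long long)=%zu\n",
           sizeof(long), sizeof(long long));
    return 0;
  }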
ALTER VIEW is currently not supported as a prepared statement
and should be disabled as such, as it could otherwise cause server crashes.
ALTER VIEW is also not supported when called from stored
procedures or functions, for related reasons, and should likewise be disabled.
This patch disables these DDL statements and adjusts the appropriate test
cases accordingly.
Additional tests have been added to reflect the fact that we do support
CREATE/ALTER/DROP TABLE for Prepared Statements (PS), Stored Procedures (SP),
and PS within SP.
The reason "reap;" succeeded unexpectedly is that the query was (almost always) completing, and on Windows the network buffer was (sometimes) big enough to store the query result, meaning the response was completely sent before the server thread could be killed.
Therefore we now use a much longer-running query that has no chance to complete before the reap happens, which tests the kill properly.