The SELECT 'r' INTO OUTFILE ... FIELDS ENCLOSED BY 'r' statement
encoded the string 'r' as a 4-byte string with value x'725c7272'
(the sequence of 4 characters: r\rr).
The LOAD DATA statement decoded this string as a 1-byte string with
value x'0D' (the ASCII carriage return character) instead of the
original 'r' character.
The same error also happened with the FIELDS ENCLOSED BY clause
followed by special characters: 'n', 't', 'r', 'b', '0', 'Z' and 'N'.
NOTE 1: This is a result of an undocumented feature: LOAD DATA INFILE
recognises 2-byte input sequences like \n, \t, \r and \Z in addition
to the documented 2-byte sequences \0 and \N. This feature should be
documented (here the backslash is the default ESCAPED BY character;
in a real-life example it may be any ESCAPED BY character).
NOTE 2, changed behaviour:
Now the `SELECT INTO OUTFILE' statement with a `FIELDS ENCLOSED BY'
clause followed by one of the characters 'n', 't', 'r', 'b', '0', 'Z'
or 'N' encodes this special character by doubling it ('r' --> 'rr'),
not by prepending it with an escape character.
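A minimal round-trip sketch of the fixed behaviour (the file path and
table are illustrative only):

    CREATE TABLE t1 (c CHAR(1));
    INSERT INTO t1 VALUES ('r');
    SELECT c FROM t1 INTO OUTFILE '/tmp/bug.txt' FIELDS ENCLOSED BY 'r';
    -- before the fix the file held r\rr (x'725c7272'); now it holds rrrr
    DELETE FROM t1;
    LOAD DATA INFILE '/tmp/bug.txt' INTO TABLE t1 FIELDS ENCLOSED BY 'r';
    SELECT HEX(c) FROM t1;  -- expected 72 ('r'), not 0D (carriage return)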
This bug may manifest itself not only for queries for which the
index-merge access method is chosen. It may also appear for queries
with DISTINCT.
The bug was in how the Unique::get method used the merge_buffers
function. To compare elements in the queue employed by
merge_buffers(), it must use the buffpek_compare function rather
than the function for binary comparison.
When a UNION statement forced conversion of a UTF8
charset value to a binary charset value, the byte
length of the result values was truncated to the
CHAR_LENGTH of the original UTF8 value.
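A hypothetical reproduction of the affected conversion (the values are
illustrative; _utf8 x'D18D' is one character but two bytes long):

    SELECT _utf8 x'D18D' AS c UNION SELECT _binary 'ab';
    -- the utf8 value is converted to binary; before the fix its byte
    -- length was cut to its CHAR_LENGTH (1 byte instead of 2)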
When the my_strnncollsp_simple function compares two strings and one
is a prefix of the other, the function compares the characters in the
rest of the longer key with the space character to find whether the
longer key is greater or less. But the sort order of the collation
was not used in this comparison. This could lead to a wrong comparison
result, an incorrectly built index, or a wrong order of the result set
of a query with an ORDER BY clause.
Now the my_strnncollsp_simple function uses the collation sort order
to compare the characters in the rest of the longer key with the
space character.
Subquery grouped for aggregate of outer query / no aggregate of subquery:
The optimizer counts the aggregate functions that
appear as top level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag for whether
there are any aggregate functions.
While collecting the above info, it must not consider
aggregates that are not aggregated in the current
context. It must treat them as normal expressions
instead. Not doing so leads to incorrect data about
the query, e.g. running a query that actually has no
aggregate functions as if it had some (and hence were
expected to return only one row).
Fixed by ignoring the aggregates that are not aggregated
in the current context.
One other, smaller omission was discovered and fixed in the
process: the place of aggregation was not calculated for
user-defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func(), as is done for the rest of
the aggregate functions.
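A sketch of the kind of query affected (table names are illustrative):

    SELECT (SELECT SUM(t1.a) FROM t2 LIMIT 1) FROM t1;
    -- SUM(t1.a) is aggregated in the outer query, so the inner SELECT
    -- has no aggregates of its own and must not be executed as an
    -- implicitly grouped (single-row) query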
"Federared Transactions Failure"
The bug occurs when the user performs an operation that inserts more
than one row into a federated table and the federated table references
a remote table stored within a transactional storage engine. When the
insert operation for any one row in the statement fails due to a
constraint violation, the federated engine is unable to perform a
statement rollback, so the remote table contains a partial commit.
The user would expect the statement to behave atomically, so a
statement rollback is expected.
This bug was fixed by implementing bulk-insert handling in the
federated storage engine. This relieves the bug in the most common
situations by generating a multi-row insert against the remote table,
thus permitting the remote table to perform a statement rollback when
necessary.
The multi-row insert is limited to the maximum packet size between
servers; should that size be exceeded, more than one insert statement
is sent and this bug reappears. Multi-row insert is disabled when an
"INSERT...ON DUPLICATE KEY UPDATE" is being performed.
The bulk-insert handling will offer a significant performance boost
when inserting a large number of small rows.
This patch builds on the fixes for Bug#29019 and Bug#25511.
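With bulk-insert handling, the federated engine can send the remote
server one multi-row statement, in the spirit of (table and values are
illustrative):

    INSERT INTO t1 (id, v) VALUES (1,'a'), (2,'b'), (3,'c');
    -- if any row violates a constraint, the remote transactional
    -- engine rolls back the whole statement, avoiding a partial commit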
"Federated INSERT failures"
Federated does not correctly handle "INSERT...ON DUPLICATE KEY UPDATE".
However, implementing such support is not reasonably possible without
increasing the complexity of the storage engine: it would require
checking that constraints on the remote server match those on the
local server and parsing error messages.
This patch causes 'ON DUPLICATE KEY' to fail with an ER_DUP_KEY
message if a conflict occurs, rather than failing silently.
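An illustrative statement against a federated table (the table name is
hypothetical):

    INSERT INTO fed_t (id, v) VALUES (1, 'x')
    ON DUPLICATE KEY UPDATE v = 'x';
    -- on a duplicate-key conflict this now fails with ER_DUP_KEY
    -- instead of failing silently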
The result set of SHOW CREATE TABLE or SELECT FROM I_S was returned
in the binary character set.
Actually, the bug reveals two problems:
- the original query is not preserved properly. This is the problem
of BUG#16291;
- the result set of the SHOW CREATE TABLE statement is binary.
This patch fixes the second problem for 5.0.
Both problems will be fixed in 5.1.
Problem: like_range() returned wrong ranges for contractions (like
'ch' in Czech).
Fix: added special code to handle tricky cases:
- a contraction head followed by a wild character;
- a full contraction;
- a contraction part followed by another contraction part, where the
two do not form a contraction together.
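A sketch of the tricky patterns, assuming a collation with the Czech
contraction 'ch' (utf8_czech_ci here; names are illustrative):

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8
                     COLLATE utf8_czech_ci, KEY(a));
    SELECT * FROM t1 WHERE a LIKE 'c%';   -- contraction head + wild character
    SELECT * FROM t1 WHERE a LIKE 'ch%';  -- full contraction
    SELECT * FROM t1 WHERE a LIKE 'cc%';  -- two contraction parts that are
                                          -- not a contraction together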
"REPLACE/INSERT IGNORE/UPDATE IGNORE doesn't work"
Federated does not record neccessary HA_EXTRA flags in order to
support REPLACE/INSERT IGNORE/UPDATE IGNORE.
Implement ::extra() to capture flags neccessary for functionality.
New function append_ident() to better escape identifiers consistantly.
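The statements this makes work against federated tables (the table
name is hypothetical):

    REPLACE INTO fed_t (id, v) VALUES (1, 'a');
    INSERT IGNORE INTO fed_t (id, v) VALUES (1, 'b');
    UPDATE IGNORE fed_t SET id = 2 WHERE id = 1;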
Fulltext index may get corrupted by certain gbk characters.
The problem was that when skipping leading non-true-word-characters,
we assumed that these characters are always 1 byte long. This is not
the case with gbk character set, since non-true-word-characters may
be 2 bytes long.
Affects 5.0 only.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
A table with the maximum number of key segments and a maximum-length
key name would have a corrupted .frm file, due to an incorrect
calculation of the complete key length. Now the key length is
computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
CHECK TABLE against an ARCHIVE table may falsely report table
corruption, or cause a server crash.
Fixed by using a proper buffer for CHECK TABLE.
Affects both 5.0 and 5.1.
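An illustrative check (names are hypothetical):

    CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
    INSERT INTO t1 VALUES (1);
    CHECK TABLE t1;  -- no longer falsely reports corruption or crashes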
Sometimes special 0 ENUM values were changed by ALTER TABLE to
normal empty-string ENUM values.
A special 0 ENUM value has the same string representation as a
normal ENUM value defined as '' (empty string).
The do_field_string function was used to convert ENUM data for an
ALTER TABLE request, but this function does not take the numerical
"indices" of ENUM values into account, i.e. do_field_string does not
distinguish a special 0 value from an empty string value.
A new copy function called do_field_enum has been added to
copy special 0 ENUM values without conversion to an empty
string.
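A hypothetical reproduction, assuming a non-strict sql_mode so the bad
value is stored as the special 0 value:

    SET sql_mode = '';
    CREATE TABLE t1 (e ENUM('', 'a') NOT NULL);
    INSERT INTO t1 VALUES ('x');  -- not a member: stored as special 0
    SELECT e, e+0 FROM t1;        -- shows '', 0
    ALTER TABLE t1 MODIFY e ENUM('', 'a', 'b') NOT NULL;
    SELECT e, e+0 FROM t1;        -- before the fix: '', 1 (the normal ''
                                  -- member); now the 0 value is kept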
A lookup into a BINARY index by a key ending with spaces caused an
assertion abort in debug builds and wrong results in non-debug
builds.
The problem occurred because the function _mi_pack_key stripped the
trailing spaces from binary search keys, while the function
_mi_make_key did not do so when keys were inserted into the index.
Now the function _mi_pack_key does not remove trailing spaces from
search keys if they are of the binary type.
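An illustrative lookup (names are hypothetical):

    CREATE TABLE t1 (b BINARY(4), KEY(b)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES ('ab  ');        -- stored bytes: 61 62 20 20
    SELECT HEX(b) FROM t1 WHERE b = 'ab  ';
    -- before the fix the trailing spaces were stripped from the search
    -- key but not from the stored key, so the row was not found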
Fixed binlog writing so that end_log_pos is reported correctly even
within transactions, for both SHOW BINLOG EVENTS and SHOW MASTER
STATUS; that is, as absolute values (from the start of the log)
rather than relative values (from the transaction's start).
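Where the corrected positions are visible (the log file name is
illustrative):

    SHOW MASTER STATUS;
    SHOW BINLOG EVENTS IN 'master-bin.000001' FROM 4;
    -- end_log_pos is now an absolute offset from the start of the log,
    -- even for events inside a transaction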
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it cannot lock,
e.g. because it is already locked. In such a case it needs to undo
the locks on the already locked tables, and it does that. But it has
also notified the relevant table storage engine handlers that they
should lock. The only reliable way to ensure that the table handlers
give up their locks is to end the transaction. This is what UNLOCK
TABLES does: it ends the transaction if tables were locked by LOCK
TABLES.
It is possible to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES ends the transaction at its start
already.
Fixed by ending (again) the transaction when LOCK TABLES fails to
lock a table.
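A sketch of the failing scenario (table names are illustrative):

    LOCK TABLES t1 WRITE, t2 WRITE;
    -- if t2 cannot be locked, the statement now ends the transaction,
    -- the only reliable way to make the storage engines give up the
    -- lock already taken on t1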
The maximum compressed file size was calculated incorrectly, causing
a server crash on INSERT.
With this patch we use the proper maximum file size provided by zlib.
Affects 5.0 only.
When the loose scan optimization for grouping queries was applied, a
wrong result set was returned if the query was used with the
SQL_BIG_RESULT option.
The SQL_BIG_RESULT option forces the use of a sorting algorithm for
grouping queries instead of employing a suitable index. The current
loose scan optimization is applied only to single-table queries when
the suitable index is covering. It does not make sense to use a
sorting algorithm in this case. However, the create_sort_index
function did not take into account the possible choice of the loose
scan to implement the DISTINCT operator, which makes sorting
unnecessary. Moreover, the current implementation of the loose scan
for queries with DISTINCT assumes that sorting will never happen.
Thus in this case create_sort_index should not call the filesort
function.
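A sketch of the affected query shape, assuming a covering index on the
selected column (names are illustrative):

    CREATE TABLE t1 (a INT, KEY(a));
    SELECT SQL_BIG_RESULT DISTINCT a FROM t1;
    -- when the loose index scan implements DISTINCT,
    -- create_sort_index() must not call filesort() on top of it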
An INSERT into a table from a SELECT on the same table
with ORDER BY and LIMIT inserted different data than the
standalone SELECT ... ORDER BY ... LIMIT returns.
One part of the patch for bug #9676 improperly pushed the
LIMIT down to the temporary table in the presence of an
ORDER BY clause.
That part has been removed.
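An illustrative statement of the affected shape (names are
hypothetical):

    INSERT INTO t1 (a) SELECT a FROM t1 ORDER BY a DESC LIMIT 2;
    -- the inserted rows must be exactly those the standalone
    -- SELECT a FROM t1 ORDER BY a DESC LIMIT 2 returns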
Problem: the separator was not converted to the result character set,
so the result was a mixture of two different character sets,
which was especially bad for UCS2.
Fix: convert the separator to the result character set.
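A sketch, assuming this concerns the GROUP_CONCAT separator (names are
illustrative):

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET ucs2);
    SELECT GROUP_CONCAT(a SEPARATOR ',') FROM t1;
    -- the ',' literal is now converted to the result character set
    -- instead of being mixed in as bytes of another character set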
ALTER VIEW is currently not supported as a prepared statement and
should be disabled as such; it could otherwise cause server crashes.
ALTER VIEW is also currently not supported when called from stored
procedures or functions, for related reasons, and should likewise be
disabled.
This patch disables these DDL statements and adjusts the appropriate
test cases accordingly.
Additional tests have been added to reflect the fact that we do
support CREATE/ALTER/DROP TABLE for Prepared Statements (PS), Stored
Procedures (SP) and PS within SP.
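Illustrative statements that are now rejected (the view name is
hypothetical):

    PREPARE s FROM 'ALTER VIEW v1 AS SELECT 1';
    -- fails: ALTER VIEW is not supported as a prepared statement
    CREATE PROCEDURE p1() ALTER VIEW v1 AS SELECT 1;
    -- fails: ALTER VIEW is not supported inside stored routines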
The reason the "reap;" succeeds unexpectedly is because the query was completing(almost always) and the network buffer was big enough to store the query result (sometimes) on Windows, meaning the response was completely sent before the server thread could be killed.
Therefore we use a much longer running query that doesn't have a chance to fully complete before the reap happens, testing the kill properly.
- When creating an index for the sort, the number of rows plus 1 is
used to allocate a buffer. In this test case, the number of rows,
4294967295, is the maximum value of an unsigned integer, so when 1
was added to it the value wrapped to 0, and a buffer of size 0 was
allocated, causing the crash.
- Created a new test suite for this bug's test case, as per QA.
Occasionally mysqlbinlog --hexdump failed with error:
ERROR 1064 (42000) at line ...: You have an error in your
SQL syntax; check the manual that corresponds to your MySQL
server version for the right syntax to use near
'Query thread_id=... exec_time=... error_code=...
When the length of the hexadecimal dump of the binlog header was
divisible by 16, the comment sign '#' after the header was lost.
The Log_event::print_header function has been modified to always
finish the hexadecimal binlog header with "\n# ".
The abort happened when a query contained a conjunctive predicate
of the form 'view column = constant' in the WHERE condition and
the grouping list also contained a reference to a view column, but
a different one.
Removed the failing assertion, as it is invalid in the general case.
Also fixed a bug that prevented applying some optimization for grouping
queries using views. If the WHERE condition of such a query contains
a conjunctive condition of the form 'view column = constant' and
this view column is used in the grouping list then grouping by this
column can be eliminated. The bug blocked performing this elimination.
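A sketch of a query where the elimination now applies (names are
illustrative):

    CREATE VIEW v1 AS SELECT a, b FROM t1;
    SELECT a, b FROM v1 WHERE a = 1 GROUP BY a, b;
    -- with 'a = 1' in the WHERE clause, grouping by the view column a
    -- can be eliminated, as for an ordinary table column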
(Part of fix for Bug#25621 Error in my_thread_global_end(): 1 threads didn't exit)
Give a correct error message if an InnoDB table is not found.
(This allows us to drop an InnoDB table that is not in the InnoDB
registry.)
DROP USER of a non-existing user must be correctly written to the
binary log (and replicated):
A DROP USER statement with a non-existing user was correctly written to
the binary log (there might be users that were removed, but not all),
but the error code was not set, which caused the slave to stop with an
error.
The error reporting code was moved to before the statement was logged
to ensure that the error information for the thread was correctly set
up. This works since my_error() will set the fields net.last_errno and
net.last_error for the thread that is reporting the error, and this
will then be picked up when the Query_log_event is created and written
to the binary log.
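An illustrative statement (user names are hypothetical):

    DROP USER 'u_exists'@'localhost', 'u_missing'@'localhost';
    -- the statement is still written to the binary log (the existing
    -- user was removed), now together with the error code, so the
    -- slave no longer stops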
The slave SQL thread calls thd->clear_error() to force an error to
be ignored, but this method did not clear the thd->killed state,
which caused the slave SQL thread to stop.
Fix: clear the thd->killed state if we ignore an error.
For a join query with GROUP BY and/or ORDER BY and a view reference
in the FROM list the metadata erroneously showed empty table aliases
and database names for the view columns.
Changed the code to enforce that only SQL_CACHE in the first SELECT
turns caching on (as documented), while any SQL_NO_CACHE turns
caching off (not documented, but useful behaviour, especially for
machine-generated queries). Added test cases to explicitly test the
documented caching behaviour, and test cases for the reported bug.
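The enforced behaviour in a sketch (table names are illustrative):

    SELECT SQL_CACHE a FROM t1 UNION SELECT a FROM t2;
    -- may be cached: SQL_CACHE is in the first SELECT
    SELECT a FROM t1 UNION SELECT SQL_CACHE a FROM t2;
    -- SQL_CACHE outside the first SELECT no longer turns caching on
    SELECT SQL_CACHE a FROM t1 UNION SELECT SQL_NO_CACHE a FROM t2;
    -- never cached: SQL_NO_CACHE anywhere turns caching off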
The method select_insert::send_error does two things: it rolls back
the statement being executed and outputs an error message. But when a
nonexistent column is referenced, an error message has already been
published and there is no need to publish another.
Fixed by moving all functionality beyond publishing an error message
into select_insert::abort() and calling only that function.
When an argument of the SUBSTR function was represented by an
expression of the type UNSIGNED INT and this expression evaluated
to 0, the function erroneously returned the value of the first
argument instead of an empty string.
This problem was introduced by the patch for bug 10963.
It has been resolved by a proper modification of the code of
Item_func_substr::val_str.
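A sketch, assuming the affected argument is the length (the CAST is
just one way to obtain an UNSIGNED zero):

    SELECT SUBSTR('abcde', 1, CAST(0 AS UNSIGNED));
    -- must return '' (empty string), not 'abcde'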
A DECIMAL column was used instead of BIGINT for the minimal possible
BIGINT value (-9223372036854775808).
Item_func_neg::fix_length_and_dec has been adjusted to inherit the
type of the argument in the case when it is an Item_int object whose
value is equal to LONGLONG_MIN.
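An illustrative check (CREATE ... SELECT is one way to observe the
resulting column type):

    CREATE TABLE t1 AS SELECT -9223372036854775808 AS c;
    SHOW COLUMNS FROM t1;  -- c is now BIGINT, not DECIMAL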
Problem: SHOW SLAVE STATUS may return different Slave_IO_Running
values while running some tests.
Fix: wait for a certain slave state, if needed, to make tests more
predictable.
Problem: the temporary buffer used for quoting and escaping was
initialized to the utf8 character set, and thus did not allow
storing data in other character sets.
Fix: changed the character set of the buffer so that it can store
any arbitrary sequence of bytes.
The LOCATE function correctly returned NULL when any of its
arguments evaluated to NULL, while the predicate
LOCATE(str,NULL) IS NULL erroneously evaluated to FALSE.
This happened because the Item_func_locate::fix_length_and_dec
method by mistake set the value of the maybe_null flag for the
function item to 0. In consequence, the function was considered
one that could never return NULL.
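An illustrative check (the arguments are arbitrary):

    SELECT LOCATE('a', NULL) IS NULL;
    -- now returns 1; before the fix the predicate evaluated to FALSE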
The result of ROUND(<decimal expr>, <int expr>) was erroneously
converted to double, while the result of
ROUND(<decimal expr>, <int literal>) was preserved as decimal.
As a result of such a conversion, the value of ROUND(D,A) could
differ from the value of ROUND(D,val(A)) if D was a decimal
expression.
Now the result of the ROUND function is never converted to
double if the first argument is decimal.
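A sketch of the affected shape (names and values are illustrative):

    CREATE TABLE t1 (d DECIMAL(10,3), i INT);
    INSERT INTO t1 VALUES (2.555, 2);
    SELECT ROUND(d, i), ROUND(d, 2) FROM t1;
    -- ROUND(d, i) now stays DECIMAL, matching ROUND(d, 2), instead of
    -- being converted to double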