Problem:
If a user was granted privileges on database "d1",
they were also able to act on "D1" (i.e. in upper case),
even on Unix with a case-sensitive file system.
Fix:
Initialize the grant hash to use binary comparison
if lower_case_file_system is not set (as on most Unixes),
and case-insensitive comparison otherwise (Windows, Mac OS X).
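A minimal sketch of the misbehaviour, assuming databases d1 and D1
both exist on a case-sensitive file system and u1 is an example user:
GRANT SELECT ON d1.* TO u1@localhost;
-- connected as u1:
SELECT * FROM d1.t1;  -- OK: the privilege was granted on d1
SELECT * FROM D1.t1;  -- was wrongly allowed; access denied after the fix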
mysqldump / SHOW CREATE TABLE will show the NEXT available value for
the PK, rather than the *first* one that was available (the one named
in the original CREATE TABLE ... AUTO_INCREMENT = ... statement).
This should produce correct and robust behaviour for the obvious use
cases -- when no data were inserted, we'll produce a statement
featuring the same value the original CREATE TABLE had; if we dump
with values, INSERTing the values on the target machine should set the
correct next_ID anyway (and if not, we'll still have our AUTO_INCREMENT =
... to do that). Lastly, just the CREATE statement (with no data) for
a table that saw inserts would still result in a table that new values
can safely be inserted into.
There seems to be no robust way, however, to tell whether the next_ID
field is > 1 because it was set to something else with CREATE TABLE
... AUTO_INCREMENT = ..., or because there is an AUTO_INCREMENT column
in the table (but no initial value was set with AUTO_INCREMENT = ...)
and one or more rows were then INSERTed, counting up next_ID. This
means that in both cases we'll generate an AUTO_INCREMENT = ...
clause in SHOW CREATE TABLE / mysqldump. As we also show info on,
say, charsets even if the user did not explicitly give that info in
their own CREATE TABLE, this shouldn't be an issue.
As per above, the next_ID will be affected by any INSERTs that have
taken place, though. This /should/ result in correct and robust
behaviour, but it may look non-intuitive to some users if they CREATE
TABLE ... AUTO_INCREMENT = 1000 and later (after some INSERTs) have
SHOW CREATE TABLE give them a different value (say, CREATE TABLE
... AUTO_INCREMENT = 1006), so the docs should possibly feature a
caveat to that effect.
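For instance (a sketch of the behaviour described above; exact values
depend on the inserts):
CREATE TABLE t (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY)
  AUTO_INCREMENT = 1000;
INSERT INTO t VALUES (NULL), (NULL), (NULL), (NULL), (NULL), (NULL);
SHOW CREATE TABLE t;
-- now reports ... AUTO_INCREMENT=1006 (the next available value),
-- not the original 1000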
It's not very intuitive the way it works now (with the fix), but it's
*correct*. We're not storing the original value anyway; if we wanted
that, we'd have to change the on-disk representation.
If we do dump/load cycles with empty DBs, nothing will change. This
changeset includes an additional test case that proves that tables
with rows will create the same next_ID for AUTO_INCREMENT = ... across
dump/restore cycles.
Confirmed by support as a likely solution for the client's problem.
In the code that converts IN predicates to EXISTS predicates, the
select list elements are changed to the constant 1. Example:
SELECT ... FROM ... WHERE a IN (SELECT c FROM ...)
is transformed to:
SELECT ... FROM ... WHERE EXISTS (SELECT 1 FROM ... HAVING a = c)
However, the IN subquery may have no FROM clause and may not be a
simple select:
SELECT ... FROM ... WHERE a IN (SELECT f(..) AS c UNION SELECT ...)
This query is transformed to:
SELECT ... FROM ... WHERE EXISTS (SELECT 1 FROM (SELECT f(..) AS c
UNION SELECT ...) x HAVING a = c)
In the above query, c in the HAVING clause is made an Item_null_helper
(a subclass of Item_ref) pointing to the real Item_field (which is no
longer referenced anywhere else in the query). This is done because
Item_ref_null_helper collects information on whether there are NULL
values in the result. This is OK for directly executed statements,
because the Item_field pointed to by the Item_null_helper is already
fixed when the transformation is done. But when executed as a prepared
statement, all the Item instances are "un-fixed" before the
recompilation of the prepared statement. So when the Item_null_helper
gets fixed, it discovers that the Item_field it points to is not fixed
and issues an error.
The remedy is to keep the original select list references when there
are no tables in the FROM clause. The above then becomes:
SELECT ... FROM ... WHERE EXISTS (SELECT c FROM (SELECT f(..) AS c
UNION SELECT ...) x HAVING a = c)
In this way c is referenced directly in the select list as well as by
reference in the HAVING clause, so it gets correctly fixed even with
prepared statements. And since the Item_null_helper subclass of
Item_ref_null_helper is no longer used anywhere else, it is removed.
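A hedged sketch of a FROM-less, non-simple subquery of the affected
shape, run as a prepared statement (table and column names are
illustrative):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
PREPARE s FROM
  'SELECT a FROM t1 WHERE a IN (SELECT 1 AS c UNION SELECT 2)';
EXECUTE s;
EXECUTE s;
-- before the fix, executing un-fixed all Items, and fixing the
-- Item_null_helper then found its underlying Item_field un-fixed
-- and raised an error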
Concurrent reads and updates of the privilege structures (such as
running SHOW GRANTS while a user is being added) could result in
a server crash.
Ensure that proper locking of the ACL structures is done.
No test case is provided because this bug cannot be reproduced
deterministically.
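Purely as an illustration of the kind of concurrent workload involved
(names are examples; the crash depended on timing and is not
deterministic):
-- session 1, repeatedly:
SHOW GRANTS FOR u1@localhost;
-- session 2, concurrently (updates the in-memory ACL structures):
GRANT SELECT ON d1.* TO u1@localhost;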
Backporting a changeset made for 5.0. Comments from there:
The fix refines the algorithm for generating DROP statements for the
binlog. Temp tables with a common pseudo_thread_id are clustered into
one query. Consequently, one replication event per pseudo_thread_id is
generated.
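As a sketch (assumed table names): if a connection's pseudo_thread_id
covers temp tables tmp1 and tmp2, the binlog now receives a single
clustered statement of roughly this shape instead of one per table:
DROP TEMPORARY TABLE IF EXISTS tmp1, tmp2;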
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the order lists were concatenated and the LIMIT
clause was ignored entirely.
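A minimal query of the affected shape (illustrative table and column
names):
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1,3), (2,2), (3,1);
(SELECT a, b FROM t1 ORDER BY a LIMIT 2) ORDER BY b;
-- expected: the two rows with the smallest a, sorted by b;
-- before the fix the LIMIT was dropped and the result was
-- effectively ordered by a, b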
Fixing part 2 of this problem: AND did not work well
with utf8_czech_ci and utf8_lithuanian_ci in some cases.
The problem was that when a field was replaced with a constant
during condition optimization, the constant's collation
and collation derivation were later used for the comparison instead
of the field's collation and derivation, which in some cases made the
new condition not equivalent to the original one.
This patch copies the collation and derivation from the field being
removed to the new constant, which makes the comparison use the same
collation as would have been used if no condition optimization were
done.
In other words:
where s1 < 'K' and s1 = 'Y';
was rewritten to:
where 'Y' < 'K' and s1 = 'Y';
Now it's rewritten to:
where 'Y' collate collation_of_s1 < 'K' and s1 = 'Y'
(using derivation of s1)
Note: the first problem of this bug (with latin1_german2_ci) was fixed
earlier in the 5.0 tree, in a separate changeset.
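A hedged repro sketch of the affected query shape (column and values
taken from the rewrite above; whether results actually diverged
depended on the collation-specific ordering of the constants):
CREATE TABLE t1 (s1 CHAR(5) CHARACTER SET utf8 COLLATE utf8_czech_ci);
INSERT INTO t1 VALUES ('Y');
SELECT * FROM t1 WHERE s1 < 'K' AND s1 = 'Y';
-- before the fix, 'Y' < 'K' was evaluated with the constant's
-- collation/derivation rather than s1's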
After a locking error the open table(s) were not fully
cleaned up for reuse. But they had been put into the open table
cache even before the lock was attempted. The next statement
reused the table(s) with a wrong lock type set up. This
tricked MyISAM into believing that it didn't need to update
the table statistics. Hence CHECK TABLE reported a mismatch
of record count and table size.
Fortunately nothing worse has been detected yet. The effect
of the test case was that an INSERT worked on a read-locked
table. (!)
I added a new function that clears the lock type from all
tables that were prepared for a lock. I call this function
when a lock fails.
No test case here. One test would add 50 seconds to the
test suite; another requires file mode modifications. Instead,
I added a test script to the bug report. It contains three
cases for failing locks. All could reproduce table
corruption, and all are fixed by this patch.
This bug was not specific to lock timeouts.
Conversion from int and real numbers to UCS2 did not work correctly:
CONVERT(100, CHAR(50) UNICODE)
CONVERT(103.9, CHAR(50) UNICODE)
The problem appeared because numbers have the binary charset, so a
simple charset recast binary->ucs2 was performed
instead of a real conversion.
Fixed by making numbers pretend to be non-binary.
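To check the result (the hex value follows from the UCS2 encoding of
the digits):
SELECT HEX(CONVERT(100, CHAR(50) UNICODE));
-- after the fix: 003100300030, i.e. '100' properly encoded in ucs2;
-- before, the binary bytes of the number were merely relabelled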
In simple queries the result of the GROUP_CONCAT() function was always
of varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512
chars and a temporary table is used during the select, the result is
converted to blob, due to the policy of not storing fields longer than
512 chars in a tmp table as varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT()
will now always be converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
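A sketch for observing the column type (assumes character input, so
the blob maps to a text type; group_concat_max_len is raised so the
result may exceed 512 chars):
SET group_concat_max_len = 2048;
CREATE TABLE t1 (a CHAR(255));
INSERT INTO t1 VALUES (REPEAT('x', 255)), (REPEAT('y', 255)),
                      (REPEAT('z', 255));
CREATE TABLE t2 SELECT GROUP_CONCAT(a) AS g FROM t1;
SHOW COLUMNS FROM t2;
-- g is now a text (blob-family) column in both simple and
-- tmp-table plans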