Problem 1:
column_priv_hash uses the utf8_general_ci collation
for key comparison. The key consists of the user name,
db name, and table name. Thus a user with privileges on table t1
is able to perform the same operations on T1
(a similar situation exists for the user name and db name; see acl_cache).
So the collation used for column_priv_hash and acl_cache
should be case sensitive.
The fix:
replace system_charset_info with my_charset_utf8_bin for
column_priv_hash and acl_cache
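For illustration, a minimal sketch of Problem 1 (database, table, and user names are assumptions, not from the original report):
CREATE DATABASE db1;
CREATE TABLE db1.t1 (a INT);
GRANT SELECT ON db1.t1 TO 'u1'@'localhost';
-- Before the fix, the case-insensitive hash key comparison also matched
-- db1.T1, a distinct table on case-sensitive file systems:
SELECT * FROM db1.T1;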
Problem 2:
The same situation exists for proc_priv_hash and func_priv_hash;
the only difference is that the routine name is case insensitive.
So the fix is to use my_charset_utf8_bin for
proc_priv_hash and func_priv_hash and convert the routine name to lower
case before writing the element into the hash and
before looking up the key.
Additional fix: the mysql.procs_priv Routine_name field collation
is changed to utf8_general_ci.
This is necessary for the REVOKE command
(to find the record by the routine hash element values).
Note:
It's safe for lower-case-table-names mode too because
db name & table name are converted into lower case
(see GRANT_NAME::GRANT_NAME).
This assertion would occur if UPDATE was used to update multiple
tables containing an AUTO_INCREMENT column and if the inserted
row had a user-supplied value for that column. The assertion
could then be triggered by the next statement.
The problem was only noticeable on debug builds of the server.
The cause of the problem was that the code for multi update did
not properly reset the TABLE->auto_increment_if_null flag after update.
The flag is used to indicate that a non-null value of an auto_increment field
has been provided by the user or retrieved from a current record.
The function open_tables() contains an assertion that tests this flag, and this
was triggered in this case by ALTER TABLE.
This patch fixes the problem by resetting the auto_increment_if_null
field to FALSE once a row has been updated.
This bug is similar to Bug#47274, but for multi update rather
than INSERT DELAYED.
Test case added to update.test.
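A rough sketch of the kind of statement sequence involved (table names and the final trigger statement are assumptions; the actual test case is in update.test):
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
CREATE TABLE t2 (id INT PRIMARY KEY, v INT);
INSERT INTO t1 VALUES (1, 1), (2, 2);
INSERT INTO t2 VALUES (1, 1), (2, 2);
-- A multi-table UPDATE supplying an explicit value for the AUTO_INCREMENT
-- column left the flag set:
UPDATE t1, t2 SET t1.id = 10, t2.v = 3 WHERE t1.id = 2 AND t2.id = 2;
-- On debug builds, a following statement such as ALTER TABLE could then
-- hit the assertion:
ALTER TABLE t1 COMMENT = 'x';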
Problem: using a spatial index for "non-spatial" queries
(that don't contain MBRXXX() functions) may lead to a failed assertion.
Fix: don't use spatial indexes in such cases.
Backporting BUG#43789 to mysql-5.1-bugteam
Replication was generating corrupted data and Valgrind warnings, and was
aborting in debug mode, while replicating a "null" to "not null" field.
Specifically, the unpack_row routine considered the slave's table
definition and tried to retrieve a field value where there was nothing to be
retrieved, ignoring the fact that the value was defined as "null" by the master.
To fix the problem, we proceed as follows:
1 - If it is not STRICT sql_mode, implicit default values are used, regardless
of whether it is a multi-row or single-row statement.
2 - However, if it is STRICT mode, then we do the following:
2.1 If it is a transactional engine, we roll back on the first NULL that is
to be set into a NOT NULL column and return an error.
2.2 If it is a non-transactional engine and it is the first row to be inserted
by a multi-row statement, we also return the error. Otherwise, we proceed with the
execution, use implicit default values, and print out warning messages.
Unfortunately, the current patch cannot mimic the behavior shown by the master
for multi-table updates and multi-row inserts. This happens because such
statements are unfolded into different row events. For instance, consider the
following updates under strict mode:
(master)
create table t1 (a int);
create table t2 (a int not null);
insert into t1 values (1);
insert into t2 values (2);
update t1, t2 SET t1.a=10, t2.a=NULL;
t1 would have (10) and t2 would have (0) as this would be handled as a
multi-row update. On the other hand, if we had the following updates:
(master)
create table t1 (a int);
create table t2 (a int);
(slave)
create table t1 (a int);
create table t2 (a int not null);
(master)
insert into t1 values (1);
insert into t2 values (2);
update t1, t2 SET t1.a=10, t2.a=NULL;
On the master t1 would have (10) and t2 would have (NULL). On
the slave, t1 would have (10) but the update on t2 would fail.
Backporting BUG#38173 to mysql-5.1-bugteam
The reason for the bug was behaviour incompatible with the master side:
an INSERT query on the master is allowed to insert into a table without specifying
values for DEFAULT-less fields if sql_mode is not strict.
Fixed by checking sql_mode in the SQL thread to decide how to react.
Non-strict sql_mode should allow the Write_rows event to complete.
Todo: warnings can be shown via SHOW SLAVE STATUS; still, how to show warnings
for the slave threads is a separate, rather general issue.
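For example (hypothetical table, not from the original report), the kind of INSERT involved under non-strict sql_mode:
CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL);
INSERT INTO t1 (a) VALUES (1); -- no value for the DEFAULT-less column b
-- Non-strict mode fills b with the implicit default 0; the slave SQL thread
-- must check sql_mode the same way so the Write_rows event can complete.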
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=> etc.
predicates is incorrect.
Fix: disable spatial indexes for such queries.
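A hedged sketch of both kinds of queries (schema is illustrative):
CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
-- Legitimate use of the spatial index:
SELECT COUNT(*) FROM t1
WHERE MBRContains(GeomFromText('POLYGON((0 0,0 5,5 5,5 0,0 0))'), g);
-- Forcing the spatial index for an equality predicate is incorrect and
-- could hit the assertion before the fix:
SELECT COUNT(*) FROM t1 FORCE INDEX (g)
WHERE g = GeomFromText('POINT(1 1)');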
If the first argument to the GeomFromWKB function is a geometry
field, then the function simply returns its value.
However, in doing so it does not preserve the first argument's
null_value flag, and this causes an unexpected null value to
be returned to the calling function.
Fixed by updating the null_value of the GeomFromWKB function
in such cases (and in all other cases that return NULL, e.g.
because there is not enough memory for the return buffer).
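For illustration (hypothetical table):
CREATE TABLE t1 (g GEOMETRY);
INSERT INTO t1 VALUES (NULL);
-- GeomFromWKB(g) just returns the field's value here; before the fix the
-- null_value flag was not propagated, so the caller could see a bogus
-- non-NULL result instead of NULL:
SELECT AsText(GeomFromWKB(g)) FROM t1;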
file
Issuing 'FLUSH LOGS' does not close and reopen the index file;
instead, a SEEK_SET is performed.
This patch causes the index file to be closed and reopened whenever a
rotation happens (FLUSH LOGS is issued or the binary log exceeds its
maximum configured size).
grants are reapplied.
Renaming a user and then re-applying grants results in additional
grants.
This is because we use the user name as part of the key for the GRANT_TABLE
structure. When the user is renamed, we only change the user name stored, and
the hash key still contains the old user name, which results in the extra
privileges.
Fixed by rebuilding the hash key and updating the column_priv_hash structure
when the user is renamed.
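A minimal sketch of the symptom (names are assumptions):
CREATE TABLE db1.t1 (a INT);
GRANT SELECT (a) ON db1.t1 TO 'u1'@'localhost';
RENAME USER 'u1'@'localhost' TO 'u2'@'localhost';
GRANT SELECT (a) ON db1.t1 TO 'u2'@'localhost';
-- Before the fix, the second GRANT added a new entry instead of updating
-- the old one, because the hash key still contained 'u1':
SHOW GRANTS FOR 'u2'@'localhost';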
If a thread is killed in the server, we throw "shutdown" only if one is actually in
progress; otherwise, we throw "query interrupted".
Control-C in the mysql command-line client is "incremental" now.
First Control-C sends KILL QUERY (when connected to 5.0+ server, otherwise, see next)
Next Control-C sends KILL CONNECTION
Next Control-C aborts client.
As the first two steps only pertain to an existing query,
Control-C will abort the client right away if no query is running.
The client will give more detailed/consistent feedback on Control-C now.
UPDATE + VIEW + SP + MERGE + ALTER
When cleaning up the stored procedure's internal
structures, the flag to ignore errors for
INSERT/UPDATE IGNORE was not cleaned up.
As a result, error ignoring was on during name
resolution. This is an abnormal situation: the
SELECT_LEX flag can be on only during query execution.
Fixed by correctly cleaning up the SELECT_LEX flag
when reusing the SELECT_LEX in a second execution.
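A rough sketch of the scenario named in the title (assumed shapes, not the actual test case):
CREATE TABLE t1 (a INT) ENGINE=MyISAM;
CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
CREATE VIEW v1 AS SELECT a FROM m1;
CREATE PROCEDURE p1() UPDATE IGNORE v1 SET a = 1;
CALL p1();
ALTER TABLE t1 CHANGE a b INT; -- invalidates the view/merge definitions
CALL p1(); -- re-execution: name resolution errors must not be ignored here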
SP variables
A function call may end without throwing an error or without setting
the return value. This can happen, e.g., when an error occurs while
calculating the return value.
Fixed by setting the value to NULL when an error occurs during evaluation
of an expression.
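For example (illustrative function), an error raised while the RETURN expression is evaluated:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2);
CREATE FUNCTION f1() RETURNS INT
  RETURN (SELECT a FROM t1); -- fails with "Subquery returns more than 1 row"
SELECT f1(); -- the fix sets the return value to NULL when the error occurs,
             -- instead of leaving it unset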
Adding an @@session or @@global prefix to a
declared variable in a stored procedure
would lead to a server crash.
The reason was that during the parsing of the
syntactic rule 'option_value' an uninitialized
set_var object was pushed to the parameter stack
of the SET statement. The parent rule
'option_type_value' interpreted the existence of
variables on the parameter stack as an assignment
and wrapped it in an sp_instr_set object.
When the procedure was later executed, an attempt
was made to run the method 'check()' on an
uninitialized member object (NULL value) belonging
to the previously created but uninitialized object.
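A minimal sketch of the crashing construct (procedure and variable names are illustrative):
CREATE PROCEDURE p1()
BEGIN
  DECLARE v INT;
  SET @@session.v = 1; -- @@ prefix on a declared SP variable: crashed the
                       -- server before the fix
END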
Problem: using a null microsecond part in a WHERE condition
(e.g. WHERE date_time_field <= "YYYY-MM-DD HH:MM:SS.0000")
may lead to wrong results due to improper DATETIME
comparison in some cases.
Fix: when comparing DATETIMEs as strings, we must trim trailing 0's
in such cases.
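For illustration (hypothetical table and data):
CREATE TABLE t1 (dt DATETIME);
INSERT INTO t1 VALUES ('2001-01-01 00:00:00');
-- Unless the trailing zeros of the microsecond part are trimmed, the string
-- comparison could miss this row:
SELECT * FROM t1 WHERE dt <= '2001-01-01 00:00:00.0000';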
The problem was in incorrect handling of predicates involving
NULL as a constant value by the range optimizer.
For example, when creating a SEL_ARG node from a condition of
the form "field < const" (which would normally result in the
"NULL < field < const" SEL_ARG), the special case when "const"
is NULL was not taken into account, so "NULL < field < NULL"
was produced for the "field < NULL" condition.
As a result, SEL_ARG structures of this form could not be
further optimized which in turn could lead to incorrectly
constructed SEL_ARG trees. In particular, code assuming SEL_ARG
structures to always form a sequence of ordered disjoint
intervals could enter an infinite loop under some
circumstances.
Fixed by changing get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, "empty"
SEL_ARG is returned, since such a predicate is always false.
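For example (illustrative table):
CREATE TABLE t1 (a INT, KEY (a));
-- "a < NULL" is always false; get_mm_leaf() now returns an "empty" SEL_ARG
-- for it instead of the malformed "NULL < a < NULL" interval:
SELECT * FROM t1 WHERE a < NULL OR a BETWEEN 1 AND 10;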
view that has Group By
When SELECT'ing from a view that mentions another,
materialized, view, access was being denied. The issue was
resolved by lifting a special case which avoided such access
checking in check_single_table_access. In the past, this was
necessary since if such a check were performed, the error
message would be downgraded to a warning in the case of SHOW
CREATE VIEW. The downgrading of errors was meant to handle
only that scenario, but could not distinguish the two as it
read only the error messages.
The special case was needed in the fix of bug no 36086.
Before that, views were confused with derived tables.
After bug no 35996 was fixed, the manipulation of errors
during SHOW CREATE VIEW execution is not dependent on the
actual error messages in the queue, it rather looks at the
actual cause of the error and takes appropriate
action. Hence the aforementioned special case is now
superfluous and the bug is fixed.
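A hedged sketch of the scenario (names, privileges, and the TEMPTABLE algorithm are assumptions):
CREATE TABLE t1 (a INT);
CREATE ALGORITHM=TEMPTABLE VIEW v1 AS SELECT a FROM t1 GROUP BY a;
CREATE VIEW v2 AS SELECT a FROM v1;
-- For a user granted access only to v2, this SELECT was incorrectly
-- denied before the fix:
SELECT a FROM v2;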
Implemented the server infrastructure for the fix:
1. Added a function LEX_STRING *thd_query_string(THD) that returns
a LEX_STRING structure instead of char *.
This is the function that must be called in InnoDB instead of
thd_query().
2. Did some encapsulation in THD : aggregated thd_query and
thd_query_length into a LEX_STRING and made accessor and mutator
methods for easy code updating.
3. Updated the server code to use the new methods where applicable.
Temporary tables may set join->group to 0 even though there is
grouping. We also need to test whether sum_func_count > 0 when JOIN::exec()
decides whether to present results in a grouped manner.
columns without where/group
Simple SELECT with implicit grouping used to return many rows if
the query was ordered by the aggregated column in the SELECT
list. This was incorrect because queries with implicit grouping
should only return a single record.
The problem was that when JOIN::exec() decided if execution needed
to handle grouping, it was assumed that sum_func_count==0 meant
that there were no aggregate functions in the query. This
assumption was not correct in JOIN::exec() because the aggregate
functions might have been optimized away during JOIN::optimize().
The reason why queries without ordering behaved correctly was
that sum_func_count is only recalculated if the optimizer chooses
to use temporary tables (which it does in the ordered case).
Hence, non-ordered queries were correctly treated as grouped.
The fix for this bug was to remove the assumption that
sum_func_count==0 means that there is no need for grouping. This
was done by introducing variable "bool implicit_grouping" in the
JOIN object.
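For example (illustrative):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (3);
-- Implicit grouping: exactly one row must be returned, even when ordering
-- by the aggregated column:
SELECT MAX(a) AS m FROM t1 ORDER BY m;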
The BINLOG statement was sharing too much code with the slave SQL thread, introduced with
the patch for Bug#32407. This caused statements to be logged with the wrong server_id, the
id stored inside the events of the BINLOG statement rather than the id of the running
server.
Fixed by rearranging the code a bit so that only the relevant parts are executed by
the BINLOG statement, and so the server_id of the server executing the statements will
not be overridden by the server_id stored in the 'format description BINLOG statement'.
removed if server_id changes
When MySQL crashes (or a snapshot is taken that simulates
a crash), it is possible that internal XA
transactions (used to sync the binary log and InnoDB)
are left in a PREPARED state, whereas they should be
rolled back. This happens when the server_id changes
before the restart occurs.
This patch relaxes the restriction that the server_id
must be consistent for the XID to be considered
valid. The rollback phase should then be able to
clean up all pending XA transactions.
We set up DATE and TIMESTAMP differently in field creation than we
did in field meta-data creation (for CREATE). Admirably, ALTER TABLE
detected this and didn't damage any data, but it did initiate a
full copy/conversion, which we don't really need to do.
Now we describe Field and Create_field the same for those types.
As a result, ALTER TABLE that only changes meta-data (like a
field's name) no longer forces a data-copy when there needn't
be one.
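For instance (illustrative table), a rename-only change on such a column:
CREATE TABLE t1 (d DATE, ts TIMESTAMP);
-- Meta-data-only change; with the fix it no longer forces a full data copy:
ALTER TABLE t1 CHANGE d d2 DATE;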
covering index
When two range predicates were combined under an OR
predicate, the algorithm tried to merge overlapping ranges
into one. But the case when a range overlapped several other
ranges was not handled. This led to
1) overlapping ranges, which gave repeated results, and
2) a range that overlapped several other ranges being cut off.
Fixed by
1) making sure that a range gets an upper bound equal to the
next range with a greater minimum, and
2) removing a continue statement.
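A hedged example of overlapping ranges under OR (schema and constants are illustrative):
CREATE TABLE t1 (a INT, b INT, KEY ab (a, b));
-- The three overlapping b-ranges must merge into the single interval [1, 20];
-- before the fix, rows could be returned twice or a range could be cut off:
SELECT a, b FROM t1
WHERE (a = 1 AND b BETWEEN 1 AND 10)
   OR (a = 1 AND b BETWEEN 5 AND 15)
   OR (a = 1 AND b BETWEEN 8 AND 20);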