with temporary tables
The test case from this bug was triggering two problems:
1. JOIN::rollup_init() was supposed to wrap all constant Items
into another object for queries with the WITH ROLLUP modifier
to ensure they are never considered as constants and therefore
are written into temporary tables if the optimizer chooses to
employ them for DISTINCT/GROUP BY handling.
However, JOIN::rollup_init() was called before
make_join_statistics(), so Items corresponding to fields in
const tables could not be handled as intended, which was
causing all kinds of problems later in the query execution. In
particular, create_tmp_table() assumed that all constant items
except "hidden" ones had already been removed by remove_const(),
which led to improperly initialized Field objects for the
temporary table being created. This is what was causing crashes
and valgrind errors in storage engines.
2. Even when the above problem had been fixed, the query from
the test case produced incorrect results due to some
DISTINCT/GROUP BY optimizations being performed by the
optimizer that are inapplicable in the WITH ROLLUP case.
Fixed by disabling inapplicable DISTINCT/GROUP BY optimizations
when the WITH ROLLUP modifier is present, and splitting the
const-wrapping part of JOIN::rollup_init() into a separate
method which is now invoked after make_join_statistics() when
the const tables are already known.
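A minimal sketch of the kind of query this affects (an assumed repro shape,
not the original test case; t1 is a hypothetical single-row table that the
optimizer treats as const):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  -- DISTINCT/GROUP BY handling over a const table, combined with WITH ROLLUP
  SELECT DISTINCT a, SUM(a) FROM t1 GROUP BY a WITH ROLLUP;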
subquery returning multiple rows
Error handling was missing when evaluating subqueries in the WHERE
clause and when assigning a SELECT result to a @variable.
This caused crash(es).
Fixed by adding error handling code to both the WHERE
condition evaluation and to assignment to an @variable.
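Hypothetical statement shapes that exercise both fixed code paths (assumed;
t1 is a table with more than one row, so the scalar subquery raises an error):

  SET @v = (SELECT a FROM t1);                    -- SELECT result assigned to a @variable
  SELECT 1 FROM t1 WHERE a = (SELECT a FROM t1);  -- subquery in the WHERE clause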
having clause...
The fix for bug 46184 was not complete: it did not cover
views using temporary tables or multiple tables in a FROM clause.
Fixed by reverting the fix for 46184 and adding a more general
check that runs at the right execution stage and covers all
of the unsupported cases.
Now PROCEDURE ANALYSE() on a non-top-level SELECT is also forbidden.
Updated the analyse.test and subselect.test accordingly.
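An assumed illustration of the stricter check (hypothetical tables, not the
statements from the test files):

  SELECT a FROM t1 PROCEDURE ANALYSE();      -- single table, top-level SELECT: still allowed
  SELECT * FROM t1, t2 PROCEDURE ANALYSE();  -- multiple tables in the FROM clause: now rejected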
Queries with nested outer joins may lead to crashes or
bad results because an internal data structure is not handled
correctly.
The optimizer uses bitmaps of nested JOINs to determine
whether a certain table can be placed at a certain position in the
JOIN order.
It maintains a bitmap describing in which JOINs the
last placed table is nested.
When it places a table, it makes sure the bit of every JOIN that
contains the table in question is set (because JOINs can be nested).
It does that by recursively setting the bit for the next enclosing
JOIN when this is the first table placed in that JOIN, and recursively
clearing the bit when this is the last table of that JOIN.
When it removes a table from the join order, it should do the
opposite: recursively unset the bit if it's the only remaining
table in this JOIN, and recursively set the bit if it's removing
the last table of a JOIN.
There was an error in how the bits were set for the upper levels:
when removing a table, the bit was being set for all the enclosing
nested JOINs even if there were more tables left in the current JOIN
(even though, in practice, the upper nested JOINs were not affected by the removal).
Fixed by stopping the recursion at the relevant level.
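An assumed shape of a query that exercises this bookkeeping (hypothetical
tables; the problem depended on the optimizer removing tables from a partial
plan containing nested outer joins):

  SELECT *
  FROM t1
    LEFT JOIN
      (t2 LEFT JOIN (t3 JOIN t4 ON t3.a = t4.a) ON t2.a = t3.a)
    ON t1.a = t2.a;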
XA START may cause an assertion failure/server crash when it is called
after a unilateral rollback issued by the Resource Manager (both
in a regular transaction and after an XA transaction).
The problem was that the rm_error variable wasn't set/reset properly.
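A hypothetical sequence of events (assumed, not the original test case) in
which InnoDB, acting as the Resource Manager, rolls the transaction back
unilaterally before XA START is issued:

  BEGIN;
  -- a statement fails with a deadlock or lock wait timeout, and InnoDB
  -- rolls the whole transaction back unilaterally
  ROLLBACK;
  XA START 'trx1';   -- previously could hit the assertion because rm_error was stale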
Bug#46539 Various crashes on INSERT IGNORE SELECT + SELECT FOR UPDATE.
If a transaction was rolled back inside InnoDB due to a deadlock
or lock wait timeout, and the statement had IGNORE clause,
the server could crash at the end of the statement or on shutdown.
This was caused by the error handling infrastructure's attempt to
ignore a non-ignorable error.
When a transaction rollback request is raised, switch off
current_select->no_error flag, so that the following error
won't be ignored.
Instead, we could add !thd->is_fatal_sub_stmt_error to
my_message_sql(), but since in write_record() we switch
off no_error, the same approach is used in
thd_mark_transaction_to_rollback().
@todo: call thd_mark_transaction_to_rollback() from
handler::print_error(), then we can easily make sure
that the error reported by print_error is not ignored.
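An assumed shape of an affected statement (hypothetical tables), following
the bug title:

  -- the IGNORE statement's transaction is rolled back inside InnoDB because
  -- of a deadlock or lock wait timeout against a concurrent session
  INSERT IGNORE INTO t1 SELECT * FROM t2 FOR UPDATE;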
BUG#47073 - valgrind errs, corruption, failed repair of partition,
low myisam_sort_buffer_size
Fixed race conditions discovered with the provided test case, and
stabilized the test case.
Problem 1:
column_priv_hash uses the utf8_general_ci collation
for the key comparison. The key consists of the user name,
db name and table name. Thus a user with privileges on table t1
is able to perform the same operation on T1
(a similar situation exists for the user name and db name; see acl_cache).
So the collation used for column_priv_hash and acl_cache
should be case sensitive.
The fix:
replace system_charset_info with my_charset_utf8_bin for
column_priv_hash and acl_cache
Problem 2:
The same situation exists for proc_priv_hash and func_priv_hash;
the only difference is that the routine name is case insensitive.
So the fix is to use my_charset_utf8_bin for
proc_priv_hash & func_priv_hash and to convert the routine name into lower
case before writing the element into the hash and
before looking up the key.
Additional fix: the mysql.procs_priv Routine_name field collation
is changed to utf8_general_ci.
This is necessary for the REVOKE command
(to find a field by the routine hash element values).
Note:
It's safe for lower_case_table_names mode too because
the db name & table name are converted into lower case
(see GRANT_NAME::GRANT_NAME).
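A hypothetical illustration of Problem 1 (assumed names; relevant when table
names are case sensitive, i.e. lower_case_table_names=0):

  GRANT SELECT (a) ON db1.t1 TO 'u1'@'localhost';
  -- before the fix, the case-insensitive hash key also matched db1.T1,
  -- so 'u1' could use the column privilege against that table as well
  SELECT a FROM db1.T1;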
This assertion would occur if UPDATE was used to update multiple
tables containing an AUTO_INCREMENT column and if the updated
row had a user-supplied value for that column. The assertion
could then be triggered by the next statement.
The problem was only noticeable on debug builds of the server.
The cause of the problem was that the code for multi update did
not properly reset the TABLE->auto_increment_if_null flag after update.
The flag is used to indicate that a non-null value of an auto_increment field
has been provided by the user or retrieved from a current record.
Open_tables() contains an assertion that tests this flag, and this
was triggered in this case by ALTER TABLE.
This patch fixes the problem by resetting the auto_increment_if_null
field to FALSE once a row has been updated.
This bug is similar to Bug#47274, but for multi update rather
than INSERT DELAYED.
Test case added to update.test.
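A hypothetical repro shape (assumed; t1.id is an AUTO_INCREMENT column, and
the assertion only fires on debug builds):

  UPDATE t1, t2 SET t1.id = 42, t2.b = t2.b + 1 WHERE t1.a = t2.a;
  ALTER TABLE t1 COMMENT = 'reopen';   -- the next statement tripped the assertion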
Problem: involving a spatial index in "non-spatial" queries
(those that don't contain MBRXXX() functions) may lead to a failed assert.
Fix: don't use spatial indexes in such cases.
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=> etc.
predicates is incorrect.
Fix: disable spatial indexes for such queries.
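An assumed illustration (hypothetical table t1 with a spatial index sp_idx on
geometry column g):

  -- MBR-based predicate: a spatial index may be used
  SELECT * FROM t1 WHERE MBRContains(g, GeomFromText('POINT(1 1)'));
  -- plain comparison predicate: spatial indexes are now disabled, even if forced
  SELECT * FROM t1 FORCE INDEX (sp_idx) WHERE g = GeomFromText('POINT(1 1)');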
If the first argument to the GeomFromWKB function is a geometry
field, then the function just returns its value.
However, in doing so it does not preserve the first argument's
null_value flag, and this causes an unexpected NULL value to
be returned to the calling function.
Fixed by updating the null_value of the GeomFromWKB function
in such cases (and in all other cases that return NULL, e.g.
because there is not enough memory for the return buffer).
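A hypothetical illustration (assumed table; g is a geometry column that
contains NULLs):

  -- before the fix, the NULL in g was not propagated through GeomFromWKB()
  SELECT GeomFromWKB(g) IS NULL FROM t1;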
grants are reapplied.
Renaming a user and then re-applying grants results in additional
grants.
This is because we use the user name as part of the key for the GRANT_TABLE structure.
When the user is renamed, we only change the user name stored; the hash key
still contains the old user name, and this results in the extra privileges.
Fixed by rebuilding the hash key and updating the column_priv_hash structure
when the user is renamed.
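A hypothetical sequence showing the effect (assumed names):

  GRANT SELECT (a) ON db1.t1 TO 'u1'@'localhost';
  RENAME USER 'u1'@'localhost' TO 'u2'@'localhost';
  GRANT SELECT (a) ON db1.t1 TO 'u2'@'localhost';
  SHOW GRANTS FOR 'u2'@'localhost';   -- previously could list the privilege twice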
If a thread is killed in the server, we throw "shutdown" only if a shutdown is actually in
progress; otherwise, we throw "query interrupted".
Control-C in the mysql command-line client is "incremental" now.
The first Control-C sends KILL QUERY (when connected to a 5.0+ server; otherwise, see the next step).
The next Control-C sends KILL CONNECTION.
The next Control-C aborts the client.
As the first two steps only pertain to an existing query,
Control-C will abort the client right away if no query is running.
The client now gives more detailed and consistent feedback on Control-C.
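The escalation corresponds roughly to issuing these statements from another
connection (an assumed illustration; 42 stands for the target connection id):

  KILL QUERY 42;        -- first Control-C: abort only the running statement
  KILL CONNECTION 42;   -- second Control-C: terminate the connection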
UPDATE + VIEW + SP + MERGE + ALTER
When cleaning up the stored procedure's internal
structures, the flag to ignore the errors for
INSERT/UPDATE IGNORE was not cleaned up.
As a result, error ignoring was on during name
resolution, which is an abnormal situation: the
SELECT_LEX flag can be on only during query execution.
Fixed by correctly cleaning up the SELECT_LEX flag
when reusing the SELECT_LEX in a second execution.
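An assumed repro shape (hypothetical objects, loosely following the subject
line): an IGNORE statement inside a procedure over a view, with the underlying
table altered between executions so that names are resolved again:

  CREATE VIEW v1 AS SELECT * FROM t1;
  CREATE PROCEDURE p1() UPDATE IGNORE v1 SET a = 1;
  CALL p1();
  ALTER TABLE t1 CHANGE a a INT;   -- forces re-resolution on the next execution
  CALL p1();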
SP variables
A function call may end without throwing an error and yet without setting
the return value. This can happen when, e.g., an error occurs while
calculating the return value.
Fixed by setting the value to NULL when an error occurs during evaluation
of an expression.
Problem: the "caseinfo" member of the CHARSET_INFO structure was not
initialized for user-defined Unicode collations, which made the
server crash.
Fix: initialize caseinfo properly.
Adding the @@session or @@global prefix to a
declared variable in a stored procedure
would lead to a server crash.
The reason was that during the parsing of the
syntactic rule 'option_value' an uninitialized
set_var object was pushed to the parameter stack
of the SET statement. The parent rule
'option_type_value' interpreted the existence of
variables on the parameter stack as an assignment
and wrapped it in a sp_instr_set object.
When the procedure was later executed, an attempt
was made to run the method 'check()' on an
uninitialized member object (NULL value) belonging
to the previously created but uninitialized object.
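An assumed repro shape (hypothetical procedure; the crash happened when the
procedure was executed):

  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v INT DEFAULT 0;
    SET @@session.v = 1;   -- system-variable prefix on a declared SP variable
  END//
  DELIMITER ;
  CALL p1();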
Problem: using a null (all-zeros) microsecond part in a WHERE condition
(e.g. WHERE date_time_field <= "YYYY-MM-DD HH:MM:SS.0000")
may lead to wrong results due to improper DATETIME
comparison in some cases.
Fix: when comparing DATETIMEs as strings, we must trim the trailing 0's
in such cases.
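A hypothetical condition of the affected form (assumed table and column names):

  SELECT * FROM t1 WHERE dt <= '2009-12-31 23:59:59.0000';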
The problem was in incorrect handling of predicates involving
NULL as a constant value by the range optimizer.
For example, when creating a SEL_ARG node from a condition of
the form "field < const" (which would normally result in the
"NULL < field < const" SEL_ARG), the special case when "const"
is NULL was not taken into account, so "NULL < field < NULL"
was produced for the "field < NULL" condition.
As a result, SEL_ARG structures of this form could not be
further optimized which in turn could lead to incorrectly
constructed SEL_ARG trees. In particular, code assuming SEL_ARG
structures to always form a sequence of ordered disjoint
intervals could enter an infinite loop under some
circumstances.
Fixed by changing get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, an "empty"
SEL_ARG is returned, since such a predicate is always false.
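An assumed illustration (hypothetical indexed column key_col):

  SELECT * FROM t1 WHERE key_col < NULL;    -- always false: now an "empty" interval
  SELECT * FROM t1 WHERE key_col <=> NULL;  -- the only NULL comparison that can match rows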
view that has Group By
When SELECTing from a view that mentions another,
materialized view, access was being denied. The issue was
resolved by lifting a special case which avoided such access
checking in check_single_table_access. In the past, this was
necessary since if such a check were performed, the error
message would be downgraded to a warning in the case of SHOW
CREATE VIEW. The downgrading of errors was meant to handle
only that scenario, but could not distinguish the two as it
read only the error messages.
The special case was needed in the fix of bug no 36086.
Before that, views were confused with derived tables.
After bug no 35996 was fixed, the manipulation of errors
during SHOW CREATE VIEW execution no longer depends on the
actual error messages in the queue; instead, it looks at the
actual cause of the error and takes appropriate
action. Hence the aforementioned special case is now
superfluous and the bug is fixed.
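An assumed shape of the scenario (hypothetical objects; v1 is forced to be
materialized and v2 mentions it):

  CREATE ALGORITHM=TEMPTABLE VIEW v1 AS SELECT a FROM t1 GROUP BY a;
  CREATE VIEW v2 AS SELECT a FROM v1;
  SELECT * FROM v2;   -- previously denied even with sufficient privileges on v2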