When performing an index search, MySQL was not explicitly
suppressing warnings, so if the context
happened to enable warnings (e.g. INSERT ..
SELECT), the warnings resulting from converting
the data the key is compared against were
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
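As a hypothetical illustration (table and column names are made up),
a statement of this shape could leak the conversion warnings before
the fix:

  -- t1.a is an indexed integer column; comparing it against a string
  -- forces a conversion of the constant to the key part's type, and
  -- the resulting truncation warnings were reported to the client:
  INSERT INTO t2 SELECT a FROM t1 WHERE a = '1 day';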
The problem in this bug arises when we create temporary tables. When
temporary tables are created for unions, some
inference is carried out regarding the type of the column.
Whenever this column type is inferred to be REAL (i.e. FLOAT or
DOUBLE), MySQL will always try to maintain exact precision, and
if that is not possible (there are hardware limits, since FLOAT
and DOUBLE are stored as approximate values) it will switch to
using approximate values. The problem here is that at this point
the information about the number of significant digits is not
available. Furthermore, the number of significant digits should
be increased for the AVG function, but this was not properly
handled. There are 4 parts to the problem:
#1: DOUBLE and FLOAT fields don't display their proper display
lengths in max_display_length(). This is hard-coded as 53 for
DOUBLE and 24 for FLOAT. Now changed to instead return the
field_length.
#2: Type holders for temporary tables do not preserve the
max_length of the Items from which they are created; it is
instead reverted to the 53 and 24 from above. This causes
*all* fields to get non-fixed significant digits.
#3: AVG function does not update max_length (display length)
when updating number of decimals.
#4: The function that switches to a non-fixed number of
significant digits should use DBL_DIG + 2 or FLT_DIG + 2 as
cut-off values (since fixed precision does not use the 'e'
notation).
Of these points, #1 is the controversial one, but this
change is preferred and has been cleared with Monty. The
changed function causes quite a few unit tests to blow up and they had
to be changed, but each one is annotated and motivated. We
frequently see the magical 53 and 24 give way to more relevant
numbers.
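A hypothetical sketch of the symptom (names and values are
illustrative):

  CREATE TABLE t1 (f FLOAT(6,3));
  INSERT INTO t1 VALUES (1.5);
  -- The UNION result is materialized in a temporary table; before
  -- the fix the column's display length fell back to the hard-coded
  -- 24/53 instead of the declared length of f:
  SELECT f FROM t1 UNION SELECT f FROM t1;
  -- AVG() increases the number of decimals, so it must also widen
  -- the display length (point #3 above):
  SELECT AVG(f) FROM t1;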
Fix for the cast( AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method,
as is usually done in other classes.
Should be fixed more radically in 5.0.
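A minimal sketch of the affected operation (the value is
illustrative):

  -- Casting to DATETIME and then using the result in a numeric
  -- context goes through Item_datetime_typecast::val():
  SELECT CAST('2007-04-25 18:30:22' AS DATETIME) + 0;
  -- expected numeric form: 20070425183022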
The IN predicate crashed when one of its arguments happened to be
a decimal expression returning the NULL value.
The crash was due to the fact that the function in_decimal::set did
not take into account that val_decimal() could return 0 if
the decimal expression had been evaluated to NULL.
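A hypothetical statement of the crashing shape (values are
illustrative):

  -- NULL + 1.0 is a decimal expression whose val_decimal() signals
  -- NULL by returning 0, which in_decimal::set did not expect:
  SELECT 1.5 IN (NULL + 1.0, 2.5);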
An INTO clause can be specified only for the last select of a UNION, where it
receives the result of the whole query. But it was wrongly allowed in
non-last selects of a UNION, which led to confusing query results.
Now INTO is allowed only in the last select of a UNION.
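A hypothetical pair of statements showing the difference (names are
illustrative):

  -- Rejected after the fix: INTO in a non-last select of the UNION.
  SELECT a INTO @v FROM t1 UNION SELECT a FROM t2;
  -- Still allowed: INTO in the last select; it receives the result
  -- of the whole UNION (which must fit the target, here one row).
  SELECT a FROM t1 UNION SELECT a INTO @v FROM t2;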
A subquery with a set function aggregated in an outer context
returned wrong results.
This happened only if the subquery did not contain any references
to outer fields.
As there were no references to outer fields, the subquery was
erroneously taken for a non-correlated one.
Now any set function aggregated in an outer context makes the subquery
correlated.
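A hypothetical query of the affected shape (names are illustrative):

  -- MAX(t1.b) is aggregated in the outer query; aside from it the
  -- subquery references no outer fields, so it used to be treated
  -- as non-correlated and its result was wrongly reused across rows:
  SELECT t1.a FROM t1
  GROUP BY t1.a
  HAVING t1.a IN (SELECT t2.c FROM t2 WHERE t2.c < MAX(t1.b));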
To correctly decide which predicates can be evaluated with a given table,
the optimizer must know the exact set of tables that a predicate depends
on. If that mask is too wide (refers to non-existing tables), the optimizer
can erroneously skip a predicate.
One such case of a wrong table usage mask was the aggregate functions:
they had an all-1 mask (meaning they depend on all tables, including
non-existent ones).
Fixed by making a real used_tables mask for the aggregates. The mask is
constructed in the following way :
1. OR the table dependency masks of all the arguments of the aggregate.
2. If all the arguments of the function are from the local name resolution
context, and it is evaluated in the same name resolution
context where it is referenced, all the tables from that name resolution
context are OR-ed into the dependency mask. This is to denote that an
aggregate function depends on the number of rows it processes.
3. Handle correctly the case of an aggregate function optimization (such that
the aggregate function can be pre-calculated and made a constant).
Made sure that an aggregate function is never a constant (unless it is
subject to a specific optimization and pre-calculation).
One other flaw was revealed and fixed in the process: references were
not calling the recalculation method for used_tables of their targets.
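As a hypothetical illustration of the mask construction above (names
are illustrative):

  -- AVG(t1.a) is aggregated in the outer select: its used_tables
  -- mask now ORs the masks of its arguments (here only t1, step 1),
  -- rather than the old all-1 mask that claimed a dependency on
  -- every table:
  SELECT (SELECT AVG(t1.a) FROM t2) AS outer_avg FROM t1;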
Removed the wrong fix for bug#27006.
The bug was introduced by the fix for bug#19978 and was fixed by Monty
on 2007/02/21.
trigger.test, trigger.result:
Corrected the test case for bug#27006.
AFTER UPDATE triggers were not fired by INSERT ... ON DUPLICATE KEY
UPDATE if the row wasn't actually changed.
This bug was caused by the fix for bug#19978, which made AFTER UPDATE
triggers not fire if a row wasn't actually changed by the update part
of the INSERT .. ON DUPLICATE KEY UPDATE.
Now triggers are always fired if a row is touched by the INSERT ... ON
DUPLICATE KEY UPDATE.
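A minimal sketch of the now-fixed behaviour (names are illustrative):

  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  CREATE TRIGGER t1_au AFTER UPDATE ON t1 FOR EACH ROW SET @fired = 1;
  INSERT INTO t1 VALUES (1, 10);
  SET @fired = 0;
  -- The update part writes the values the row already has; the row
  -- is touched but not changed, and the trigger now still fires:
  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE v = 10;
  SELECT @fired;  -- 1 after the fix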
INSERT uses query_id to verify which fields are
mentioned in the fields list of the INSERT command.
However, the check for that is made after the
ON DUPLICATE KEY clause is processed. This causes all
the fields mentioned in ON DUPLICATE KEY to be
considered as mentioned in the fields list of the
INSERT.
Moved the check up, right after processing the
fields list.
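A hypothetical statement of the shape that slipped through (names are
illustrative; strict mode is assumed, so the missing default is an
error):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT NOT NULL);
  -- b is absent from the fields list and has no default; before the
  -- fix, mentioning b in ON DUPLICATE KEY made it count as mentioned
  -- in the fields list, masking the error:
  INSERT INTO t1 (a) VALUES (1) ON DUPLICATE KEY UPDATE b = 2;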
LAST_INSERT_ID() was reset to 0 if a row was
touched but not actually changed.
LAST_INSERT_ID() is reset to 0 if no rows were inserted or changed.
This is the case when an INSERT ... ON DUPLICATE KEY UPDATE updates a row
with the same values as the row already contains.
Now the LAST_INSERT_ID() value is reset to 0 only if there were no rows
successfully inserted or touched.
The new 'touched' field is added to the COPY_INFO structure. It holds the
number of rows that were touched no matter whether they were actually
changed or not.
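A minimal sketch (names are illustrative):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT UNIQUE);
  INSERT INTO t1 (v) VALUES (10);  -- LAST_INSERT_ID() is now 1
  -- The duplicate-key update writes the value the row already has:
  -- the row is touched but not changed.
  INSERT INTO t1 (v) VALUES (10) ON DUPLICATE KEY UPDATE v = 10;
  SELECT LAST_INSERT_ID();  -- no longer reset to 0 after the fix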
Fix for memory and CPU hogging after "LOCK TABLE ... WRITE".
Memory and CPU hogging occurred when a connection that had to wait for a
table lock was serviced by a thread that had previously serviced a
connection that was killed (note that connections can reuse threads if the
thread cache is enabled).
One possible scenario that exposed this problem was when a thread that
provided a binlog dump to a replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through a
different thread/connection.
The problem also occurred when one killed a particular query in a
connection (using KILL QUERY) and later this connection had to wait for
some table lock.
This problem was caused by the fact that the thread-specific mysys_var::abort
variable, which indicates that waiting operations on the mysys layer should
be aborted (this includes waiting for table locks), was set by the kill
operation but was never reset back. So this value was "inherited" by the
following statements or even by other connections (which reused the same
physical thread). Such a discrepancy between this variable and the THD::killed
flag broke logic on the SQL layer and caused CPU and memory hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test-case associated with this patch since it is hard to test
for memory/CPU hogging conditions in our test-suite.
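A hypothetical two-session sequence for the KILL QUERY scenario (the
table name is illustrative):

  -- Session 1:
  LOCK TABLE t1 WRITE;
  -- Session 2 runs a long statement, which session 1 aborts with
  -- KILL QUERY <session-2-id>:
  SELECT SLEEP(1000);
  -- Session 2 then issues any statement that must wait for the lock;
  -- before the fix this wait hogged CPU (and memory in 5.*) because
  -- mysys_var::abort was still set:
  INSERT INTO t1 VALUES (1);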
Fix for CPU hogging after "LOCK TABLE ... WRITE".
CPU hogging occurred when a connection that had to wait for a table lock
was serviced by a thread that had previously serviced a connection that
was killed (note that connections can reuse threads if the thread cache
is enabled).
One possible scenario that exposed this problem was when a thread that
provided a binlog dump to a replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through a
different thread/connection.
In 5.* versions, memory hogging was added to the CPU hogging. Moreover, in
those versions the problem also occurred when one killed a particular
query in a connection (using KILL QUERY) and later this connection had to
wait for some table lock.
This problem was caused by the fact that the thread-specific mysys_var::abort
variable, which indicates that waiting operations on the mysys layer should
be aborted (this includes waiting for table locks), was set by the kill
operation but was never reset back. So this value was "inherited" by the
following statements or even by other connections (which reused the same
physical thread). Such a discrepancy between this variable and the THD::killed
flag broke logic on the SQL layer and caused CPU and memory hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test-case associated with this patch since it is hard to test
for memory/CPU hogging conditions in our test-suite.
Before this fix, the parser would accept illegal code in SQL exception
handlers, which later caused the runtime to crash when executing the code,
due to memory violations in the exception handler stack.
The root cause of the problem is instructions within an exception handler
that jump to code located outside of the handler. This is illegal according
to the SQL 2003 standard, since labels located outside the handler are not
supposed to be visible (they are "out of scope"), so any instruction that
jumps to these labels, like ITERATE or LEAVE, should not parse.
The section of the standard that is relevant for this is:
SQL:2003 SQL/PSM (ISO/IEC 9075-4:2003)
section 13.1 <compound statement>,
syntax rule 4
<quote>
The scope of the <beginning label> is CS excluding every <SQL schema
statement> contained in CS and excluding every
<local handler declaration list> contained in CS. <beginning label> shall
not be equivalent to any other <beginning label>s within that scope.
</quote>
With this fix, the C++ class sp_pcontext, which represents the "parsing
context" tree (a.k.a. symbol table) of a stored procedure, has been changed
as follows:
- constructors have been cleaned up, so that only building a root node for
the tree is public; building nodes inside a tree is not public.
- a new member, m_label_scope, indicates if a given syntactic context
belongs to a DECLARE HANDLER block,
- label resolution, in the method find_label(), has been changed to
implement the restriction of scope regarding labels used in a compound
statement.
The actions in the parser, when parsing the body of a SQL exception handler,
have been changed as follows:
- the implementation of an exception handler (DECLARE HANDLER) now creates
explicitly a new sp_pcontext, to isolate the code inside the handler from
the containing compound statement context.
- as a result, registering exception handlers now occurs in the parent
context (see the rule sp_hcond_element),
- the code in sp_hcond_list has been cleaned up to avoid code duplication.
In addition, the flags IN_SIMPLE_CASE and IN_HANDLER, declared in sp_head.h,
have been removed, since they are unused and broken by design (as seen with
Bug 19194, "Right recursion in parser for CASE causes excessive stack usage,
limitation"): representing a stack in a single flag is not possible.
Tests in sp-error have been added to show that illegal constructs are now
rejected.
Tests in sp have been added for code coverage, to show that ITERATE or LEAVE
statements are legal when jumping to a label in scope, inside the body of
an exception handler.
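A hypothetical sketch of a construct that is now rejected (delimiter
handling is omitted):

  CREATE PROCEDURE p()
  outer_block: BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
      -- outer_block is out of scope inside the handler, so this
      -- LEAVE no longer parses:
      LEAVE outer_block;
    END;
    SELECT 1;
  END

By contrast, a LEAVE or ITERATE targeting a label declared inside the
handler body itself remains legal.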