Removed the wrong fix for bug#27006.
The bug was introduced by the fix for bug#19978 and was fixed by Monty on
2007/02/21.
trigger.test, trigger.result:
Corrected the test case for bug#27006.
AFTER UPDATE triggers were not fired by INSERT ... ON DUPLICATE KEY UPDATE
if the row wasn't actually changed.
This bug was caused by the fix for bug#19978. It caused AFTER UPDATE triggers
not to fire if a row wasn't actually changed by the update part of the
INSERT ... ON DUPLICATE KEY UPDATE.
Now triggers are always fired if a row is touched by the INSERT ... ON
DUPLICATE KEY UPDATE.
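A minimal illustration of the intended behaviour (the table and trigger
names below are hypothetical, not taken from the test case):

  -- The AFTER UPDATE trigger is expected to fire even though the UPDATE
  -- part of the statement leaves the existing row unchanged.
  CREATE TABLE t1 (id INT PRIMARY KEY, val INT);
  CREATE TABLE t1_log (cnt INT);
  INSERT INTO t1_log VALUES (0);
  CREATE TRIGGER t1_au AFTER UPDATE ON t1
    FOR EACH ROW UPDATE t1_log SET cnt = cnt + 1;
  INSERT INTO t1 VALUES (1, 10);
  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE val = 10;
  SELECT cnt FROM t1_log;   -- expected to be 1 now that the trigger fires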
INSERT uses query_id to verify which fields are
mentioned in the fields list of the INSERT command.
However, the check for that was made after the
ON DUPLICATE KEY clause was processed. This caused all
the fields mentioned in ON DUPLICATE KEY to be
considered as mentioned in the fields list of the
INSERT.
Moved the check up, right after processing the
fields list.
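A hedged sketch of the kind of statement affected (the table is
hypothetical, and which exact fields-list check is impacted is an assumption
here):

  -- Column b appears only in the ON DUPLICATE KEY UPDATE clause, so any
  -- verification based on the INSERT fields list must still treat b as
  -- not mentioned in that list.
  CREATE TABLE t2 (a INT PRIMARY KEY, b INT NOT NULL);
  INSERT INTO t2 (a) VALUES (1) ON DUPLICATE KEY UPDATE b = 2;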
LAST_INSERT_ID() was wrongly reset when a row was touched but not actually
changed.
LAST_INSERT_ID() was reset to 0 if no rows were inserted or changed.
This is the case when an INSERT ... ON DUPLICATE KEY UPDATE updates a row
with the same values the row already contains.
Now the LAST_INSERT_ID() value is reset to 0 only if there were no rows
successfully inserted or touched.
A new 'touched' field has been added to the COPY_INFO structure. It holds the
number of rows that were touched, no matter whether they were actually
changed or not.
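A small illustration of the new behaviour (hypothetical table):

  CREATE TABLE t3 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
                   val INT NOT NULL UNIQUE);
  INSERT INTO t3 (val) VALUES (1);   -- inserts a row, LAST_INSERT_ID() = 1
  INSERT INTO t3 (val) VALUES (1)
    ON DUPLICATE KEY UPDATE val = 1; -- touches the row without changing it
  SELECT LAST_INSERT_ID();           -- no longer reset to 0: a row was touched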
TABLE ... WRITE".
Memory and CPU hogging occurred when a connection that had to wait for a
table lock was serviced by a thread which had previously serviced a
connection that was killed (note that connections can reuse threads if the
thread cache is enabled).
One possible scenario which exposed this problem was when the thread that
provided a binlog dump to a replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through a
different thread/connection.
The problem also occurred when one killed a particular query in a connection
(using KILL QUERY) and this connection later had to wait for some table
lock.
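An illustrative sequence for the second scenario (two hypothetical
connections and a hypothetical table t4; this is not a deterministic test
case):

  -- connection 2:
  SELECT SLEEP(1000);              -- long-running statement ...
  -- connection 1:
  KILL QUERY <id of connection 2>; -- ... killed from another connection
  LOCK TABLES t4 WRITE;            -- connection 1 now holds a write lock
  -- connection 2, next statement on the same physical thread:
  SELECT * FROM t4;                -- waits for the table lock; before the fix
                                   -- this wait could hog CPU and memory
                                   -- because mysys_var::abort was still set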
This problem was caused by the fact that the thread-specific
mysys_var::abort variable, which indicates that waiting operations on the
mysys layer should be aborted (this includes waiting for table locks), was
set by the kill operation but was never reset back. So this value was
"inherited" by the following statements or even by other connections (which
reused the same physical thread). Such a discrepancy between this variable
and the THD::killed flag broke the logic on the SQL layer and caused CPU and
memory hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test-case associated with this patch since it is hard to test
for memory/CPU hogging conditions in our test-suite.
Before this fix, the parser would accept illegal code in SQL exception
handlers, which later caused the runtime to crash when executing the code,
due to memory violations in the exception handler stack.
The root cause of the problem is instructions within an exception handler
that jump to code located outside of the handler. This is illegal according
to the SQL 2003 standard, since labels located outside the handler are not
supposed to be visible (they are "out of scope"), so any instruction that
jumps to these labels, like ITERATE or LEAVE, should not parse.
The section of the standard that is relevant for this is:
SQL:2003 SQL/PSM (ISO/IEC 9075-4:2003)
section 13.1 <compound statement>,
syntax rule 4
<quote>
The scope of the <beginning label> is CS excluding every <SQL schema
statement> contained in CS and excluding every
<local handler declaration list> contained in CS. <beginning label> shall
not be equivalent to any other <beginning label>s within that scope.
</quote>
With this fix, the C++ class sp_pcontext, which represents the "parsing
context" tree (a.k.a. symbol table) of a stored procedure, has been changed
as follows:
- constructors have been cleaned up, so that only building a root node for
the tree is public; building nodes inside a tree is not public.
- a new member, m_label_scope, indicates if a given syntactic context
belongs to a DECLARE HANDLER block,
- label resolution, in the method find_label(), has been changed to
implement the restriction of scope regarding labels used in a compound
statement.
The actions in the parser, when parsing the body of a SQL exception handler,
have been changed as follows:
- the implementation of an exception handler (DECLARE HANDLER) now creates
explicitly a new sp_pcontext, to isolate the code inside the handler from
the containing compound statement context.
- as a result, registering exception handlers occurs in the parent context
(see the rule sp_hcond_element),
- the code in sp_hcond_list has been cleaned up, to avoid code duplication
In addition, the flags IN_SIMPLE_CASE and IN_HANDLER, declared in sp_head.h,
have been removed, since they are unused and broken by design (as seen with
Bug 19194, "Right recursion in parser for CASE causes excessive stack usage,
limitation"): representing a stack in a single flag is not possible.
Tests in sp-error have been added to show that illegal constructs are now
rejected.
Tests in sp have been added for code coverage, to show that ITERATE or LEAVE
statements are legal when jumping to a label in scope, inside the body of
an exception handler.
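A hedged sketch of both cases (hypothetical procedure names):

  DELIMITER //
  -- Now rejected at parse time: the ITERATE target 'lab' is declared outside
  -- the handler, so it is out of scope inside the handler body.
  CREATE PROCEDURE p_illegal()
  BEGIN
    DECLARE i INT DEFAULT 0;
    lab: WHILE i < 10 DO
      BEGIN
        DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
          ITERATE lab;             -- illegal: 'lab' is not visible here
        SET i = i + 1;
      END;
    END WHILE lab;
  END//
  -- Still legal: the LEAVE target is a label declared inside the handler.
  CREATE PROCEDURE p_legal()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
      BEGIN
        retry: LOOP
          LEAVE retry;             -- legal: 'retry' is in scope here
        END LOOP retry;
      END;
    SELECT 1;
  END//
  DELIMITER ;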
Different sets of conditions were used to verify
the validity of index definitions over a GEOMETRY
column in ALTER TABLE and CREATE TABLE.
The difference was in how the validity of the
sub-key notion was checked.
Fixed by extending the CREATE TABLE condition to
support the cases allowed in ALTER TABLE.
Made SHOW CREATE TABLE not display spatial
indexes using the sub-key notion.
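A hedged sketch of the intent (hypothetical names; it assumes the sub-key
case is an explicit prefix length on the indexed GEOMETRY column):

  CREATE TABLE g1 (g GEOMETRY NOT NULL);
  ALTER TABLE g1 ADD SPATIAL INDEX (g(32));  -- case allowed via ALTER TABLE
  CREATE TABLE g2 (
    g GEOMETRY NOT NULL,
    SPATIAL INDEX (g(32))                    -- the same definition is now
  );                                         -- accepted by CREATE TABLE too
  SHOW CREATE TABLE g2;                      -- the index is printed without
                                             -- the sub-key, i.e. as (g)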
differences in tables
Certain MERGE tables were wrongly reported as having an incorrect definition:
- Some fields that are 1 byte long (e.g. TINYINT, CHAR(1)) might
be internally cast (in certain cases) to a different type on the
storage engine layer. (affects 4.1 and up)
- If the tables in a merge (and the MERGE table itself) had a short VARCHAR
column (less than 4 bytes) and at least one (but not all) of the tables was
ALTER'ed (even to an identical definition: ALTER TABLE xxx ENGINE=yyy), the
table definitions went out of sync. (affects 4.1 only)
This is fixed by relaxing the check for underlying table conformance and by
setting the field type to FIELD_TYPE_STRING when a VARCHAR shorter than 4
bytes is created.
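A hedged illustration of the kind of setup affected (hypothetical names;
the short-VARCHAR part applies to 4.1 only):

  CREATE TABLE m1 (c CHAR(1), v VARCHAR(2)) ENGINE=MyISAM;
  CREATE TABLE m2 (c CHAR(1), v VARCHAR(2)) ENGINE=MyISAM;
  ALTER TABLE m2 ENGINE=MyISAM;      -- ALTER to an identical definition
  CREATE TABLE m (c CHAR(1), v VARCHAR(2))
    ENGINE=MERGE UNION=(m1, m2);
  SELECT * FROM m;   -- before the fix this could be rejected because the
                     -- underlying tables were reported as differently defined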
The value could be cut short when a column was read from a derived table
column which was specified as a concatenation of string literals.
The bug happened because Item_string::append() did not adjust the
value of Item_string::max_length. As a result, the temporary
table column defined to store the concatenation of literals was
not wide enough to hold the whole value.
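A small illustration (hypothetical aliases); adjacent string literals are
concatenated into a single value:

  -- The derived table column s is defined by the concatenation 'abc' 'def';
  -- before the fix the temporary table column holding s could be created
  -- too narrow and the value read back from it was cut short.
  SELECT s FROM (SELECT 'abc' 'def' AS s) AS dt;   -- expected: 'abcdef'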
after single-row table substitution could lead to a wrong result set.
The bug happened because the function Item_field::replace_equal_field
erroneously assumed that any field included in a multiple equality
with a constant had already been substituted with this constant.
This is not true for fields that become constant after row substitution
for constant tables.
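A heavily hedged sketch of the query shape involved (hypothetical tables;
this is illustrative only and is not the query from the bug report):

  CREATE TABLE t_const (a INT);          -- single-row, read as a constant table
  CREATE TABLE t_big (a INT, b INT);
  INSERT INTO t_const VALUES (1);
  INSERT INTO t_big VALUES (1, 1), (2, 2);
  -- t_big.a is part of a multiple equality; it only becomes constant after
  -- the single row of t_const has been substituted.
  SELECT * FROM t_const, t_big
   WHERE t_const.a = t_big.a AND t_big.a = t_big.b;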
When the SUBSTRING() function was used over a LONGTEXT field, the max_length
of the SUBSTRING() result was wrongly calculated and set to 0. As the
max_length parameter is used during tmp field creation, it limits the length
of the result field and leads to an empty string being printed instead of the
correct result.
Now the Item_func_substr::fix_length_and_dec() function correctly calculates
the max_length parameter.
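A small illustration (hypothetical table); the derived table forces the
SUBSTRING() result through a temporary table column:

  CREATE TABLE lt (t LONGTEXT);
  INSERT INTO lt VALUES (REPEAT('x', 100));
  SELECT s FROM (SELECT SUBSTRING(t, 1, 10) AS s FROM lt) AS dt;
  -- expected: 'xxxxxxxxxx'; with max_length wrongly set to 0 the temporary
  -- column was created zero-width and an empty string was returned instead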
construct references invalid name.
Derived tables currently cannot use outer references,
thus there is no outer context for them.
The 4.1 code takes this fact into account, while the
Item_field::fix_outer_field code of 5.0 lost the check that blocks
any attempt to resolve names in the outer context for derived tables.
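A hedged illustration of the kind of construct that is rejected again
(hypothetical tables):

  CREATE TABLE o1 (a INT);
  CREATE TABLE o2 (b INT);
  -- o1.a inside the derived table refers to the outer query; since derived
  -- tables have no outer context, the name must not resolve and the
  -- statement is rejected with an "unknown column" error.
  SELECT * FROM o1
   WHERE a IN (SELECT x FROM (SELECT o1.a AS x FROM o2) AS dt);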