too much memory. Instead, either create the equivalent SEL_TREE
manually, or create only two ranges that strictly include the area
to scan.
(Note: just to reiterate: increasing NOT_IN_IGNORE_THRESHOLD will make
the optimization run slower for big IN-lists, but the server will not
run out of memory; the O(N^2) memory use has been eliminated.)
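For illustration, a hedged sketch (table and column names invented) of
the kind of predicate involved: a NOT IN list of N constants naively
turns into N+1 disjoint ranges, one per gap between the constants,
which is what the threshold guards against:

  CREATE TABLE t1 (a INT, KEY (a));
  -- For a NOT IN (10, 20, 30) the range optimizer conceptually builds
  --   a < 10 OR (10 < a AND a < 20) OR (20 < a AND a < 30) OR a > 30
  SELECT * FROM t1 WHERE a NOT IN (10, 20, 30);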
Backporting a changeset made for 5.0. Comments from there:
The fix refines the algorithm for generating DROP statements for the
binlog. Temporary tables sharing the same pseudo_thread_id are
clustered into one query; consequently one replication event per
pseudo_thread_id is generated.
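As a hedged illustration (the exact binlog syntax may differ across
versions, and the table names are invented), temporary tables created
under one pseudo_thread_id are now covered by a single statement
instead of one event each:

  -- a session with pseudo_thread_id=1 created tmp1 and tmp2; at
  -- cleanup the binlog gets roughly one event of the form:
  DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `tmp1`,`tmp2`;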
An error was emitted when one tried to select from a view that used
the merge algorithm and had the CONVERT_TZ() function in its select
list.
The bug was caused by the wrong assumption that the global table list
for a view handled with the merge algorithm begins with the tables
belonging to the view's main select. Today this assumption breaks
only when CONVERT_TZ() is used in the view's select list (the
function implicitly adds time zone tables to the list), but other
cases may appear in the future (for example, we may one day support
merging views with subqueries in the select list). Relying on this
false assumption led to the use of the wrong table list for field
lookups and therefore to errors.
With this fix we explicitly use a pointer to the beginning of the
main select's table list.
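A hedged repro sketch (table/view names invented; assumes the time
zone tables are loaded):

  CREATE TABLE t1 (ts TIMESTAMP);
  CREATE VIEW v1 AS
    SELECT CONVERT_TZ(ts, 'UTC', 'Europe/Moscow') AS local_ts FROM t1;
  SELECT local_ts FROM v1;  -- used to fail with a spurious error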
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the two order lists were concatenated and the LIMIT
clause was ignored entirely.
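A hedged example of the construct (data invented): the inner
ORDER BY ... LIMIT should pick the two largest values, and the outer
ORDER BY should then sort just those two rows:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  (SELECT a FROM t1 ORDER BY a DESC LIMIT 2) ORDER BY a;
  -- expected result: 2, 3
  -- with the bug the LIMIT was dropped and the order lists were
  -- concatenated, so all three rows came back in the wrong order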
After a locking error the open table(s) were not fully cleaned up
for reuse. However, they had been put into the open table cache even
before the lock was attempted. The next statement reused the
table(s) with a wrong lock type set up. This tricked MyISAM into
believing that it did not need to update the table statistics. Hence
CHECK TABLE reported a mismatch of record count and table size.
Fortunately nothing worse has been detected yet. The effect of the
test case was that an insert worked on a read-locked table. (!)
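A hedged sketch of the symptom only (names invented; the exact
sequence needs the test script attached to the bug report):

  LOCK TABLES t1 READ;
  -- ... a lock attempt fails and leaves t1 in the table cache with a
  -- stale lock type; a following statement could then write to the
  -- read-locked table:
  INSERT INTO t1 VALUES (1);  -- with the bug: succeeded anyway
  CHECK TABLE t1;             -- reported record count / size mismatch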
I added a new function that clears the lock type from all tables
that were prepared for a lock. I call this function when a lock
fails.
No test case. One test would add 50 seconds to the
test suite. Another test requires file mode modifications.
I added a test script to the bug report. It contains three cases of
failing locks. All of them could reproduce table corruption, and all
are fixed by this patch.
This bug was not specific to lock timeouts.
Removed sp-goto.test, sp-goto.result and all (disabled) GOTO code.
Also removed some related code that is no longer needed (there can be
no unresolved label references any more, so there is no need to check
for them).
NB: Keeping ER_SP_GOTO_IN_HNDLR in errmsg.txt; it might become useful
in the future, and removing it (and thus re-enumerating the error
codes) might upset anything that refers to explicit error codes.