too much memory. Instead, either create the equivalent SEL_TREE manually, or create only two ranges
that strictly include the area to scan.
(Note: just to reiterate: increasing NOT_IN_IGNORE_THRESHOLD will make optimization run slower for big
IN-lists, but the server will not run out of memory. The O(N^2) memory use has been eliminated.)
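The idea can be illustrated outside the server. The sketch below is standalone C++,
not the actual opt_range.cc/SEL_TREE code; the function name build_not_in_ranges and
the threshold constant are hypothetical stand-ins. It shows the two strategies: an
exact set of N+1 open intervals for a small NOT IN list, or just two ranges that
strictly include the area to scan once the list grows past a threshold.

  #include <algorithm>
  #include <iostream>
  #include <limits>
  #include <vector>

  // Illustrative sketch only -- not the server's range optimizer code.
  // An open interval (lo, hi); +/- infinity is modelled with numeric limits.
  struct Range { long lo, hi; };

  // Hypothetical counterpart of NOT_IN_IGNORE_THRESHOLD.
  static const size_t kNotInThreshold = 1000;

  // Build ranges equivalent to "col NOT IN (values)".
  std::vector<Range> build_not_in_ranges(std::vector<long> values)
  {
    std::sort(values.begin(), values.end());
    const long min_inf = std::numeric_limits<long>::min();
    const long max_inf = std::numeric_limits<long>::max();
    std::vector<Range> ranges;

    if (values.size() > kNotInThreshold)
    {
      // Big list: emit only two ranges that strictly include the area
      // to scan, instead of N+1 exact ranges (avoids O(N^2) tree merging).
      ranges.push_back({min_inf, values.front()});
      ranges.push_back({values.front(), max_inf});
      return ranges;
    }

    // Small list: exact N+1 open intervals between consecutive values.
    long prev = min_inf;
    for (long v : values)
    {
      ranges.push_back({prev, v});
      prev = v;
    }
    ranges.push_back({prev, max_inf});
    return ranges;
  }

  int main()
  {
    for (const Range &r : build_not_in_ranges({3, 7, 11}))
      std::cout << "(" << r.lo << ", " << r.hi << ")\n";
  }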
supported in SP but not in PS": just enable them in prepared
statements; the supporting functionality was implemented when
they were enabled in stored procedures.
trigger fails".
In cases where the CONVERT_TZ() function was used in a trigger or stored function
(or in a stored procedure called from a trigger or stored function),
an error about a non-existent '.' table was reported.
Statements that use the CONVERT_TZ() function must have the time zone related
tables in their table list. The tz_init_table_list() function, which is used
to produce the part of the table list containing those tables, did not set the
TABLE_LIST::db_length/table_name_length members properly. As a result, the time
zone tables needed for CONVERT_TZ() were handled incorrectly by the
prelocking algorithm and the "Table '.' doesn't exist" error was emitted.
This fix changes tz_init_table_list() so that it properly initializes the
TABLE_LIST::table_name_length/db_length members and thus produces a table list
which the prelocking algorithm can handle correctly.
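A minimal standalone sketch of the failure pattern (the struct below only mimics
the relevant TABLE_LIST fields and is not the server code; the init_tz_entry_*
helpers are hypothetical): when only the name pointers are filled in and the
length members are left at zero, any consumer that trusts the lengths, as the
prelocking code does, sees empty names.

  #include <cstring>
  #include <iostream>
  #include <string>

  // Mimic of the few TABLE_LIST members relevant here (illustrative only).
  struct TableListEntry
  {
    const char *db;
    size_t db_length;
    const char *table_name;
    size_t table_name_length;
  };

  // Buggy pattern: pointers set, length members forgotten (stay zero).
  void init_tz_entry_buggy(TableListEntry *t, const char *db, const char *name)
  {
    *t = TableListEntry();          // zero-initialize
    t->db = db;
    t->table_name = name;
  }

  // Fixed pattern: lengths initialized together with the pointers.
  void init_tz_entry_fixed(TableListEntry *t, const char *db, const char *name)
  {
    *t = TableListEntry();
    t->db = db;
    t->db_length = std::strlen(db);
    t->table_name = name;
    t->table_name_length = std::strlen(name);
  }

  int main()
  {
    TableListEntry bad, good;
    init_tz_entry_buggy(&bad, "mysql", "time_zone");
    init_tz_entry_fixed(&good, "mysql", "time_zone");

    // Code that trusts the length members prints "'.'" for the buggy entry,
    // which is how a "Table '.' doesn't exist" style message arises.
    std::cout << "'" << std::string(bad.db, bad.db_length) << "."
              << std::string(bad.table_name, bad.table_name_length) << "'\n";
    std::cout << "'" << std::string(good.db, good.db_length) << "."
              << std::string(good.table_name, good.table_name_length) << "'\n";
  }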
The fix refines the algorithm for generating DROP statements for the binlog.
Temporary tables sharing a pseudo_thread_id are clustered into one query.
Consequently, one replication event per pseudo_thread_id is generated.
Backporting a changeset made for 5.0. Comments from there:
The fix refines the algorithm for generating DROP statements for the binlog.
Temporary tables sharing a pseudo_thread_id are clustered into one query.
Consequently, one replication event per pseudo_thread_id is generated.
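Conceptually, the refined algorithm groups the session's temporary tables by
their pseudo_thread_id and writes one DROP query (and hence one binlog event)
per group. A standalone sketch of that grouping (illustrative only; this is not
the server's binlogging code and build_drop_queries is a hypothetical name):

  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>

  struct TempTable
  {
    unsigned long pseudo_thread_id;   // id of the session that created it
    std::string db;
    std::string name;
  };

  // Group temp tables by pseudo_thread_id and build one DROP query per group,
  // so replication gets one event per pseudo_thread_id.
  std::vector<std::string> build_drop_queries(const std::vector<TempTable> &tables)
  {
    std::map<unsigned long, std::string> per_id;
    for (const TempTable &t : tables)
    {
      std::string &q = per_id[t.pseudo_thread_id];
      if (q.empty())
        q = "DROP TEMPORARY TABLE IF EXISTS ";
      else
        q += ", ";
      q += "`" + t.db + "`.`" + t.name + "`";
    }

    std::vector<std::string> queries;
    for (const auto &kv : per_id)
      queries.push_back(kv.second);   // one query == one replication event
    return queries;
  }

  int main()
  {
    std::vector<TempTable> tables = {
      {1, "test", "t1"}, {2, "test", "t2"}, {1, "test", "t3"},
    };
    for (const std::string &q : build_drop_queries(tables))
      std::cout << q << "\n";
  }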
An error was emitted when one tried to select from a view that used the
merge algorithm and had the CONVERT_TZ() function in its select list.
This bug was caused by the wrong assumption that the global table list for a view
which is handled using the merge algorithm begins with the tables belonging to
the main select of this view. Currently this assumption fails only
when the CONVERT_TZ() function is used in the view's select list, but in the future
other cases may be added (for example, we may one day support merging of views
with subqueries in the select list). Relying on this false assumption
led to the use of the wrong table list for field lookups and therefore to errors.
With this fix we explicitly use a pointer to the beginning of the main select's
table list.
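The ordering problem can be pictured with a tiny mimic of a flattened table list
(illustrative only; not the server's structures, and the field names are made up):
when helper entries such as the time zone tables precede the merged view's own
tables, field lookup must start from the main select's first table rather than
from the global list head.

  #include <iostream>
  #include <string>
  #include <vector>

  // A flattened "global" table list for a merged view (sketch only).
  struct TableRef
  {
    std::string name;
    bool belongs_to_main_select;
  };

  int main()
  {
    std::vector<TableRef> global_list = {
      {"mysql.time_zone_name", false},   // added because of CONVERT_TZ()
      {"mysql.time_zone", false},
      {"test.base_table", true},         // first table of the view's main select
    };

    // Buggy assumption: field lookup starts at the global list head.
    std::cout << "head of global list: " << global_list.front().name << "\n";

    // Fix: keep and use an explicit pointer to the main select's first table.
    const TableRef *main_select_tables = nullptr;
    for (const TableRef &t : global_list)
      if (t.belongs_to_main_select) { main_select_tables = &t; break; }

    if (main_select_tables)
      std::cout << "main select starts at: " << main_select_tables->name << "\n";
  }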
Make the InnoDB option "loose", as the server might be
started with this option just to find out that the test
is to be skipped in this configuration (bug#17359).
Now NDB is initialized and started only when the tests being run
will make use of it. The same is also done for the
slave databases and the instance manager.
After review from Magnus: Only take a snapshot of the data directories
that are in use.