Item_sum_distinct::setup(THD*): Assertion
There was an assertion to detect a bug in the ROLLUP
implementation. However, the assertion does not hold when the
function is used in a subquery context with non-cacheable
statements.
Fixed by turning the assertion into an accepted case
(just as is done for the other aggregate functions).
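A minimal sketch of the shape of such a change (the type and members
below are illustrative stand-ins, not the actual Item_sum_distinct
code): instead of asserting that setup runs only once, a repeated
call during re-execution is treated as an accepted no-op.

  // Hypothetical stand-in for an aggregate's setup path.
  struct AggregateState {
    bool setup_done = false;

    bool setup() {
      // Before: assert(!setup_done);  // fired when a non-cacheable
      //                               // subquery re-executed setup
      if (setup_done)
        return false;  // accepted case: already set up, nothing to do
      // ... one-time initialization would go here ...
      setup_done = true;
      return false;    // false = success
    }
  };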
Our web server has been restructured several times, but references
to it in our source code have stayed the same. This patch from Paul
DuBois updates all URLs to their modern equivalents.
A rule was introduced by the 5.1 part of the fix for bug 27531 to
prefer filesort over an indexed ORDER BY when accessing all of the
rows of a table (because it is faster). This new rule did not
account for the presence of a LIMIT clause.
Fixed the condition for this rule so that filesort is preferred
over an indexed ORDER BY only when there is no LIMIT clause.
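A minimal sketch of such a decision (the function and variable names
are illustrative stand-ins, not the optimizer's actual code; MySQL
represents "no LIMIT" as HA_POS_ERROR):

  #include <cstdint>
  #include <limits>

  // Stand-in for "no LIMIT requested".
  constexpr uint64_t NO_LIMIT = std::numeric_limits<uint64_t>::max();

  // Decide whether to use filesort instead of reading rows through
  // an index that already yields ORDER BY order.
  bool prefer_filesort(uint64_t rows_in_table, uint64_t rows_to_read,
                       uint64_t limit) {
    bool scans_all_rows = (rows_to_read >= rows_in_table);
    // The fixed rule: a full scan favors filesort only when no LIMIT
    // is present; with a LIMIT the indexed read can stop early.
    return scans_all_rows && limit == NO_LIMIT;
  }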
when used in a VIEW.
The problem was that the wrong function (create_tmp_from_item())
was used to create a temporary field for Item_func_sp.
The fix is to use create_tmp_from_field() instead.
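A purely illustrative sketch of why the choice matters (every type
and function below is a hypothetical stand-in, not the server's
API): a temporary column built from an item's generic metadata can
lose details that the stored function's result field carries.

  #include <string>

  // Hypothetical column definition.
  struct FieldDef { std::string type; unsigned length, decimals; };

  struct ItemFuncSp {
    // Exact RETURNS type of the stored function.
    FieldDef result_field{"decimal", 10, 2};

    // Analogue of create_tmp_from_item(): generic, possibly lossy.
    FieldDef tmp_from_item() const { return {"varchar", 255, 0}; }
    // Analogue of create_tmp_from_field(): preserves the exact type.
    FieldDef tmp_from_field() const { return result_field; }
  };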
Problem: the "regex" library written by Henry Spencer
does not support tricky character sets such as UCS2.
Fix: convert strings in tricky character sets to UTF8 before
calling the regex functions.
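A minimal sketch of the approach, assuming a hypothetical to_utf8()
helper and a byte-oriented regex engine (std::regex here, standing
in for the bundled Henry Spencer library):

  #include <regex>
  #include <string>

  // Re-encode a UCS2 buffer (2 bytes per character, BMP only) as
  // UTF8 so a byte-oriented regex engine can process it.
  std::string to_utf8(const std::u16string &ucs2) {
    std::string out;
    for (char16_t c : ucs2) {
      if (c < 0x80) {
        out += static_cast<char>(c);
      } else if (c < 0x800) {
        out += static_cast<char>(0xC0 | (c >> 6));
        out += static_cast<char>(0x80 | (c & 0x3F));
      } else {
        out += static_cast<char>(0xE0 | (c >> 12));
        out += static_cast<char>(0x80 | ((c >> 6) & 0x3F));
        out += static_cast<char>(0x80 | (c & 0x3F));
      }
    }
    return out;
  }

  // Convert both subject and pattern before calling the engine,
  // mirroring the fix described above.
  bool regex_match_ucs2(const std::u16string &subject,
                        const std::u16string &pattern) {
    return std::regex_search(to_utf8(subject),
                             std::regex(to_utf8(pattern)));
  }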
This bug is a symptom of the way HANDLER tables are managed. The
most distinctive aspect, compared to conventional behavior, is that
HANDLER tables are long-lived, meaning that their lifetimes are not
bounded by the duration of the command that opened them. To this
end the handler code uses its own list (handler_tables instead of
open_tables) to hold open handler tables, so that the tables won't
be closed at the end of the command/statement. Besides the
handler_tables list, there is a hash (handler_tables_hash) which is
used to associate handler aliases with tables and to refresh the
tables upon demand (flush tables).
The current implementation doesn't work properly with refreshed
tables -- more precisely, when flush commands are issued by other
initiators. This happens because when a HANDLER OPEN or HANDLER
READ statement is being processed, the associated table has to be
opened or locked and, for this purpose, the open_tables and
handler_tables lists are swapped so that the new table being opened
is inserted into the handler_tables list. But when opening or
locking the table, if the refresh version differs from the thread's
refresh version, then all used tables in the open_tables list (now
handler_tables) are refreshed. In this "refreshing" process the
handler tables are flushed (closed) without being properly unlinked
from the handler hash.
The current implementation also fails to properly discard handlers of
dropped tables, but this and other problems are going to be addressed
in the fixes for bugs 31397 and 31409.
The chosen approach is to properly save and restore the table state
so that no table is flushed during the table open and lock
operations. The logic is almost the same as before, with the list
swapping, but with working glue code.
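A minimal sketch of the swap-and-restore pattern described above
(the THD fields and helpers are illustrative stand-ins for the
actual server code):

  #include <utility>
  #include <vector>

  struct Table {};

  // Stand-in for the relevant per-thread state.
  struct Thd {
    std::vector<Table *> open_tables;     // per-statement tables
    std::vector<Table *> handler_tables;  // long-lived HANDLER tables
  };

  // Stand-in for the regular open-table path: new tables always go
  // into thd->open_tables.
  Table *open_table(Thd *thd) {
    Table *t = new Table();
    thd->open_tables.push_back(t);
    return t;
  }

  // Open a table for HANDLER: swap the lists so the new table lands
  // in handler_tables, and always swap back so no half-swapped state
  // remains if a refresh happens inside open_table().
  Table *handler_open_table(Thd *thd) {
    std::swap(thd->open_tables, thd->handler_tables);  // swap in
    Table *t = open_table(thd);  // inserted into handler_tables
    std::swap(thd->open_tables, thd->handler_tables);  // restore
    return t;
  }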
The test case for this bug is going to be committed into 5.1
because it requires a test feature only available in 5.1
(wait_condition).
"Disabled plugin is provoking Valgrind error"
If there are any auto-alloced string plug-in options, memory is
allocated during the call to handle_options(). We must free this
memory if we are not installing the plug-in.
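A minimal sketch of the pattern (the option struct and helpers are
stand-ins, not the actual plugin option API):

  #include <cstdlib>
  #include <cstring>

  // Stand-in for the auto-allocation done while parsing options.
  char *dup_string(const char *s) {
    char *copy = static_cast<char *>(malloc(strlen(s) + 1));
    strcpy(copy, s);
    return copy;
  }

  struct PluginOption {
    char *value = nullptr;  // auto-alloced string option
  };

  // Stand-in for handle_options(): may allocate the string value.
  void parse_plugin_options(PluginOption *opt) {
    opt->value = dup_string("default-value");
  }

  bool try_install_plugin(bool plugin_enabled) {
    PluginOption opt;
    parse_plugin_options(&opt);
    if (!plugin_enabled) {
      // The fix: free auto-alloced option memory when the plug-in
      // is not installed, so Valgrind sees no leak.
      free(opt.value);
      return false;
    }
    // ... continue with installation; the plug-in takes ownership.
    return true;
  }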
The report claims that Seconds_behind_master behaves unexpectedly.
Code analysis shows an evident flaw: the treatment of the
Format_description event is wrong, so that after FLUSH LOGS on the
slave the Seconds_behind_master calculation slips and an incorrect
value can be reported by SHOW SLAVE STATUS.
Even worse, the gap between the correct and incorrect deltas grows
with time.
Fixed by prohibiting changes to rpl->last_master_timestamp by
artificial events (of any kind).
A suggestion is added as comments on how to cope with the lack of
info on the slave side by means of the upcoming heartbeat feature.
The test cannot easily be made fully deterministic.
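A minimal sketch of the guard (the event and relay-info types are
stand-ins; the real code runs in the slave SQL thread):

  #include <ctime>

  // Stand-in for a replication event header.
  struct LogEvent {
    time_t when;      // master-side timestamp of the event
    bool artificial;  // e.g. a generated Format_description event
    bool is_artificial_event() const { return artificial; }
  };

  // Stand-in for the state behind Seconds_behind_master.
  struct RelayLogInfo {
    time_t last_master_timestamp = 0;
  };

  void update_timestamp(RelayLogInfo *rli, const LogEvent &ev) {
    // The fix: artificial events carry no meaningful master
    // timestamp, so they must never move last_master_timestamp.
    if (ev.is_artificial_event())
      return;
    rli->last_master_timestamp = ev.when;
  }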
This is, actually, a fix for the patch for bug 27354. The problem
with that patch was that Item_func_sp::used_tables() was updated,
but Item_func_sp::const_item() was not. So, for Item_func_sp, we
had the following inconsistency:
- used_tables() returned RAND_TABLE, which means that the item
can produce "random" results;
- but const_item() returned TRUE, which means that the item is
a constant one.
The fix is to change the behaviour of Item_func_sp::const_item():
it must return TRUE (the item is a constant one) only if the stored
function is deterministic and each of its arguments (if any) is a
constant item.
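A minimal sketch of the corrected predicate (the deterministic flag
and argument list are stand-ins for the actual Item_func_sp
members):

  #include <vector>

  struct Item {
    virtual bool const_item() const = 0;
    virtual ~Item() = default;
  };

  // Stand-in for a stored-function call item.
  struct ItemFuncSp : Item {
    bool sf_is_deterministic = false;  // the DETERMINISTIC flag
    std::vector<Item *> args;          // call arguments

    // Constant only when the function is deterministic AND every
    // argument is itself a constant item.
    bool const_item() const override {
      if (!sf_is_deterministic)
        return false;
      for (const Item *arg : args)
        if (!arg->const_item())
          return false;
      return true;
    }
  };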
Two cases were missing in ha_partition::extra()
(HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH),
which are currently used only by NDB (which does not use
ha_partition).
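A minimal sketch of handling such hints in a partition handler (the
interface below is reduced to stand-ins):

  #include <vector>

  enum ExtraOperation {
    HA_EXTRA_DELETE_CANNOT_BATCH,
    HA_EXTRA_UPDATE_CANNOT_BATCH
    // ... other hints ...
  };

  // Stand-in for an underlying per-partition handler.
  struct Handler {
    int extra(ExtraOperation) { return 0; }
  };

  // Stand-in for ha_partition: the previously missing cases forward
  // the no-batching hints to every underlying partition handler.
  struct HaPartition {
    std::vector<Handler> parts;

    int extra(ExtraOperation op) {
      switch (op) {
        case HA_EXTRA_DELETE_CANNOT_BATCH:
        case HA_EXTRA_UPDATE_CANNOT_BATCH:
          for (Handler &h : parts)
            if (int err = h.extra(op))
              return err;
          return 0;
        default:
          return 0;
      }
    }
  };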