A crash could happen when selecting from a merge table whose underlying
tables have fewer indexes than the merge table itself.
If the number of keys in the merge table is not greater than the requested
key number, return an error.
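A minimal sketch of the kind of guard involved (identifier names are
hypothetical, not taken from the actual patch; HA_ERR_WRONG_INDEX is the
server's existing error code for a bad key number):

    /* Refuse a key number the table does not have, instead of
       reading past the end of its key array. */
    if (inx >= table->keys)
      return HA_ERR_WRONG_INDEX;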
The function receives an exactly-sized buffer (not a NUL-terminated C string)
and passes it to a printf-style function, which interprets it via "%s".
Instead, create an intermediate String object, copy the data into it,
and pass a pointer to the String's NUL-terminated buffer.
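A minimal sketch of that pattern, assuming buf and len name the
exactly-sized buffer and its length:

    String tmp;
    /* Copy the unterminated buffer into a String, which NUL-terminates
       the data it hands out through c_ptr(). */
    tmp.copy(buf, len, system_charset_info);
    printf("%s", tmp.c_ptr());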
There was a possible stack overrun in an edge case that handles an invalid
body of a stored procedure in mysql.proc. That should only happen when
mysql.proc has been changed manually; however, due to bug 21513, it could be
exploited without access to mysql.proc, merely by being able to create a
stored routine.
Re-execution of a parametrized prepared statement or a stored routine
with a SELECT that uses a LEFT JOIN whose second table has only one row
could yield an incorrect result.
The problem appeared only for left joins where the second table has just
one row (a const table) and the ON or WHERE clauses contain equality
conditions that depend on the argument passed. Once such a condition was
false for the second const table, a NULL row was created for it, and every
field involved got its NULL-value flag set and never reset.
The cause of the problem was that Item_field::null_value could be set
without being reset for re-execution. The solution is to reset
Item_field::null_value in Item_field::cleanup().
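A sketch of the resulting cleanup, assuming Item_ident is the immediate
base class and eliding the other members that cleanup() restores:

    void Item_field::cleanup()
    {
      Item_ident::cleanup();
      /* Reset the flag so re-execution starts from a clean state
         instead of inheriting a stale NULL marker. */
      null_value= FALSE;
    }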
STACK_MIN_SIZE is currently set to 8192, when we actually need
(empirically discovered) 9236 bytes to raise a fatal error, on Ubuntu
Dapper Drake, libc6 2.3.6-0ubuntu2, Linux kernel 2.6.15-27-686, on x86.
I'm taking that as the new lower bound, plus 100 bytes of wiggle room for
sundry word sizes and stack behaviors.
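In code the new constant is just the observed minimum plus the margin
(a sketch, not the literal patch):

    /* 9236 bytes observed empirically + 100 bytes of wiggle room. */
    #define STACK_MIN_SIZE  (9236 + 100)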
The added test verifies in a cross-platform way that there are no gaps
between the space that we think we need and what we actually need to report
an error.
DOCUMENTERS: This also adds "let" to the mysqltest commands that evaluate
an argument to expand variables therein. (Only to the right of the "=", of
course.)
The presence of a subquery in the ON expression of a join
should not block merging the view that contains this join.
Before this patch, such views were converted into
temporary-table views.
Fixed a bug triggered by an ALL/ANY quantified subquery in HAVING.
The Item::split_sum_func2 method should not create Item_ref
for objects of any class derived from Item_subselect.
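A sketch of the guard (condition simplified; every Item_subselect-derived
object reports SUBSELECT_ITEM from type(), and the surrounding splitting
logic is elided):

    /* Inside Item::split_sum_func2(): leave subselects alone. */
    if (type() == SUBSELECT_ITEM)
      return;                     /* do not wrap it in an Item_ref */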
Item_substr's results are improperly stored in a temporary table due to a
wrongly calculated max_length value for multi-byte charsets when two
arguments are specified.
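A sketch of the corrected calculation (variable names assumed): the point
is that a length counted in characters must be scaled by the charset's
maximum bytes per character before it is used as a byte length:

    /* char_count characters may need up to mbmaxlen bytes each. */
    max_length= char_count * collation.collation->mbmaxlen;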
Fix for the bug where an UPDATE that uses a key and invokes a trigger that
modifies this key does not stop (version for 5.0 only).
An UPDATE statement whose WHERE clause used a key, and which invoked a
trigger that modified a field in this key, ran indefinitely.
This problem occurred because, when the UPDATE statement was executed in
update-on-the-fly mode (in which a row is updated right during evaluation
of the select for the WHERE clause), the new version of the row became
visible to the select representing the WHERE clause and was updated again
and again.
We already solve this problem for UPDATE statements that do not invoke
triggers by detecting that we are going to update a field in the key used
for scanning, and by performing the update in two steps: during the first
step we gather information about the rows to be updated, and then we do
the actual updates. We also do this for MULTI-UPDATE, and in its case we
even detect the situation when such fields are updated in triggers
(actually, we simply assume that we always update fields used in a key if
we have a BEFORE UPDATE trigger).
The fix simply extends the check done in the check_if_key_used()/
QUICK_SELECT_I::check_if_keys_used() routine/method so that it also
detects cases when a field used in the key is updated in a trigger; see
the sketch below.
As a nice side effect, we get a more precise, and thus more optimal
performance-wise, check for MULTI-UPDATE.
Also, check_if_key_used()/QUICK_SELECT_I::check_if_keys_used() were
renamed to is_key_used()/QUICK_SELECT_I::is_keys_used() to better reflect
that they are boolean predicates.
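A sketch of the extended predicate (the helper names are hypothetical; the
5.0 code is organized differently):

    /* A key counts as "used" if the statement updates one of its
       fields, or if a BEFORE UPDATE trigger may update one of them. */
    bool is_key_used(TABLE *table, uint keynr, List<Item> &fields)
    {
      if (statement_updates_key_field(table, keynr, fields))  /* hypothetical */
        return TRUE;
      return trigger_may_update_key_field(table, keynr);      /* hypothetical */
    }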
Note that this check is implemented in a much more elegant way in 5.1.
Any default value for an enum field over a UCS2 charset was corrupted
when we put it into the frm file, as it had been overwritten by its
HEX representation.
To fix it, we now save a copy of the structure that represents the enum
type, and when putting the default values we use this copy.
Fixed a crash on queries over a column that returns the results of
aggregation by GROUP_CONCAT.
The crash was due to an overflow in the field sortorder->length.
The fix prevents this overflow by exploiting the fact that the
value of sortorder->length cannot be greater than the value of
thd->variables.max_sort_length.
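A sketch of the resulting guard, using the server's set_if_smaller macro:

    /* sortorder->length can never legitimately exceed max_sort_length,
       so clamping it here prevents the overflow. */
    set_if_smaller(sortorder->length, thd->variables.max_sort_length);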
Bug#20627 - INSERT DELAYED does not honour auto_increment_* variables
INSERT DELAYED ignored an explicitly set INSERT_ID and session-specific
auto_increment_* variables.
The problem was that the inserts are done by a system thread,
which does not have access to the session variables of the user
thread.
On a proposal of Guilhem, I fixed it so that the variables are
copied into the data structure for every delayed row. The system
thread then sets its session variables from these values.
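A sketch of that approach (structure and member names hypothetical):

    /* Each queued row carries the user session's settings. */
    struct delayed_row
    {
      /* ... row data ... */
      ulonglong forced_insert_id;      /* explicit INSERT_ID, if any   */
      ulong auto_increment_increment;  /* copied from the user session */
      ulong auto_increment_offset;
    };

    /* User thread, while queuing a row: */
    row->auto_increment_increment= thd->variables.auto_increment_increment;
    row->auto_increment_offset=    thd->variables.auto_increment_offset;

    /* System thread, before inserting the row: */
    thd->variables.auto_increment_increment= row->auto_increment_increment;
    thd->variables.auto_increment_offset=    row->auto_increment_offset;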