When concurrent inserts were disabled, statements after an INSERT
were not put into the query cache. This happened because we do not
save the current data file length at statement start when
concurrent inserts are disabled, but we checked the always-zero
local length against the real file length anyway.
Fixed by doing the check only if concurrent inserts are not disabled.
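A minimal sketch of the corrected check, with hypothetical names standing in for the real query-cache/MyISAM identifiers:

    /* Only compare the saved data file length against the current one
       when concurrent inserts are enabled, because the saved value is
       only recorded in that case (all names here are stand-ins). */
    bool statement_is_cacheable(bool concurrent_inserts_enabled,
                                unsigned long long saved_file_length,
                                unsigned long long current_file_length)
    {
      if (concurrent_inserts_enabled &&
          saved_file_length != current_file_length)
        return false;  /* file grew during the statement: don't cache */
      return true;     /* inserts disabled: the saved length is never set,
                          so the comparison would wrongly reject caching */
    }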
- Apply Eric Bergen's patch: in join_read_always_key(), move the
ha_index_init() call before the late NULLs filtering code (sketched below).
- Backport function comments from 6.0.
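Roughly, the reordering looks like this; the types and members are simplified stand-ins for the server's JOIN_TAB machinery, not the actual source:

    struct Item      { virtual bool is_null()= 0; virtual ~Item() {} };
    struct handler   { bool inited; void ha_index_init(unsigned key) { inited= true; } };
    struct TABLE     { handler *file; };
    struct TABLE_REF { unsigned key, key_parts; unsigned long null_rejecting; Item **items; };
    struct JOIN_TAB  { TABLE *table; TABLE_REF ref; };

    int join_read_always_key_sketch(JOIN_TAB *tab)
    {
      /* Initialize the index first, so that an early return from the
         NULLs filtering below no longer leaves it uninitialized. */
      if (!tab->table->file->inited)
        tab->table->file->ha_index_init(tab->ref.key);

      /* Late NULLs filtering: a NULL in a null-rejecting key part
         means no row can match, so skip the lookup entirely. */
      for (unsigned i= 0; i < tab->ref.key_parts; i++)
        if ((tab->ref.null_rejecting & (1UL << i)) && tab->ref.items[i]->is_null())
          return -1;

      /* ... the actual index lookup follows here ... */
      return 0;
    }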
Added a new function, test_if_data_home_dir(), which checks that a
path does not contain the MySQL data home directory.
Using the MySQL data home directory in
DATA DIRECTORY & INDEX DIRECTORY is now disallowed.
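A sketch of the kind of check involved; this simplified version ignores path canonicalization and case-insensitive file systems, which the real code must handle:

    #include <string.h>

    /* Returns non-zero (error) if 'path' lies inside the data home
       directory; 'data_home' stands in for the real global setting. */
    static int test_if_data_home_dir_sketch(const char *path,
                                            const char *data_home)
    {
      return strncmp(path, data_home, strlen(data_home)) == 0;
    }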
If a ROW item is part of an expression that also contains
aggregate function calls (COUNT/SUM/AVG...), a
"splitting" with the Item::split_sum_func2 function
is applied to that ROW item.
The current implementation of Item::split_sum_func2
replaces this Item_row with a newly created
Item_aggregate_ref reference to it.
The row cache then tries to work with the
Item_aggregate_ref object as with an Item_row object:
it calls row-emulation methods such as cols and
element_index. Item_aggregate_ref (like its parent
Item_ref) inherits dummy implementations of those
methods from the hierarchy root Item, and calling
them leads to failed assertions (the reported
"Assertion `0' failed") and wrong data output.
The row-emulation virtual functions (cols, element_index, addr,
check_cols, null_inside and bring_value) of Item_ref have
been overridden to forward calls to the underlying item.
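In outline the overrides just chase the reference; a sketch of their shape, assuming `ref` is Item_ref's pointer to the underlying item:

    /* Row-emulation methods forwarded to the referenced item: */
    uint   Item_ref::cols()                { return (*ref)->cols(); }
    Item*  Item_ref::element_index(uint i) { return (*ref)->element_index(i); }
    Item** Item_ref::addr(uint i)          { return (*ref)->addr(i); }
    bool   Item_ref::check_cols(uint c)    { return (*ref)->check_cols(c); }
    bool   Item_ref::null_inside()         { return (*ref)->null_inside(); }
    void   Item_ref::bring_value()         { (*ref)->bring_value(); }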
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was introduced
to avoid replication problems (e.g. replicating the statement with a
string argument would cause a parse failure on the slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and use this converted value when persisting the query to the log.
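A sketch of the conversion step; `param` and `limit_rows` are hypothetical names for the bound parameter item and the parsed LIMIT slot:

    /* Coerce whatever the client bound ('10', 10.5, ...) to an integer
       once, and reuse that integer both for execution and when the
       statement is rewritten into the binary log, so the slave always
       parses a plain integer literal. */
    longlong val= param->val_int();
    if (param->null_value || val < 0)
      return true;               /* error: LIMIT needs a non-negative integer */
    limit_rows= (ha_rows) val;   /* hypothetical member */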
NAME_CONST('whatever', -1) * MAX(whatever) bombed since -1 was
not seen as a constant, but as FUNCTION_UNARY_MINUS(constant),
while we were at the same time pretending it was a basic const
item. This confused the aggregate handlers in exciting ways.
We now make NAME_CONST() behave more consistently.
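A sketch of the consistency fix, assuming the value argument is validated along these lines (simplified from the Item hierarchy; names approximate):

    /* Accept either a plain literal or a negated literal (e.g. -1) as
       the value argument of NAME_CONST(), so both sides agree that the
       result is a basic constant. */
    static bool name_const_value_ok(Item *v)
    {
      if (v->basic_const_item())
        return true;                                  /* plain literal */
      if (v->type() == Item::FUNC_ITEM &&
          ((Item_func *) v)->functype() == Item_func::NEG_FUNC &&
          ((Item_func *) v)->arguments()[0]->basic_const_item())
        return true;                                  /* -literal */
      return false;
    }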
This was a double free of the Unique member of Item_func_group_concat.
It was not causing a crash because Unique is a descendant of
Sql_alloc.
Fixed by freeing the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
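A sketch of the ownership guard; `original` follows the convention that copies of the item point back to the item they were copied from (names approximate):

    void Item_func_group_concat::cleanup_sketch()
    {
      /* Copies share the Unique with the original item, so only the
         instance that allocated it (original == NULL) may free it. */
      if (unique_filter != NULL && original == NULL)
      {
        delete unique_filter;
        unique_filter= NULL;      /* guard against a second delete */
      }
    }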
While the manual mentions FRAC_SECOND only for the TIMESTAMPADD()
function, it was also possible to use FRAC_SECOND with DATE_ADD(),
DATE_SUB() and +/- INTERVAL.
Fixed the parser to match the manual, i.e. using FRAC_SECOND for
anything other than TIMESTAMPADD()/TIMESTAMPDIFF() now produces a
syntax error.
Additionally, the patch allows MICROSECOND to be used in TIMESTAMPADD/
TIMESTAMPDIFF and marks FRAC_SECOND as deprecated.
log-slave-updates and circular replication: the slave SQL thread
could execute one extra event when there were events skipped by the
slave I/O thread (e.g. events originated by the same server), even
though the UNTIL condition requested it not to do so.
This happened because we compared against the end position of the
previously executed event. That is fine when no events were skipped
by the slave I/O thread, as the end position of the previous event
then equals the start position of the event to be executed.
Otherwise, that position equals the start position of a skipped event.
This is fixed by:
- reading the event to be executed before checking if the until condition
  is satisfied.
- comparing against the start position of the event to be executed. Since we
  do not have the start position available, we compute it by subtracting the
  event length from the end position, which is available (see the sketch below).
- stopping immediately if, at slave SQL thread start time, there are no events
  in the queue that meet the until condition, as in this case we do not want
  to wait for the next event.
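A sketch of the start-position computation used by the until check; only end positions are stored, so the start is recovered arithmetically (names are stand-ins):

    /* True if the event about to be executed starts at or past the
       UNTIL position, i.e. it must not be executed. */
    bool until_reached(unsigned long long event_end_pos,
                       unsigned long long event_length,
                       unsigned long long until_pos)
    {
      unsigned long long event_start_pos= event_end_pos - event_length;
      return event_start_pos >= until_pos;
    }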
Under some circumstances a combination of aggregate functions and
GROUP BY in a SELECT query over a VIEW could lead to incorrect
calculation of the result type of the aggregate function. This in
turn could result in incorrect results, or assertion failures on debug
builds.
Fixed by changing the logic in Item_sum_hybrid::fix_fields() so that
the argument's item is dereferenced before calling its type() method.
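Roughly, the change inside Item_sum_hybrid::fix_fields() looks like this (sketch, not the verbatim source):

    /* Look through references (e.g. a view column wrapped in an
       Item_ref) before asking for the item's type. */
    Item *item= args[0]->real_item();
    if (item->type() == Item::FIELD_ITEM)
      hybrid_type= ((Item_field *) item)->field->result_type();  /* approximate */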
The problem is that CREATE VIEW statements inside prepared statements
weren't being expanded during the prepare phase, which leads to objects
not being allocated in the appropriate memory arenas.
The solution is to perform the validation of CREATE VIEW statements
during the prepare phase of a prepared statement. The validation
during the prepare phase assures that transformations of the parsed
tree will use the permanent arena of the prepared statement.
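A sketch of the dispatch this adds to the prepare phase; mysql_test_create_view here names a validation helper in the prepared-statement code (sketch of the shape, not verbatim):

    case SQLCOM_CREATE_VIEW:
      /* Validate (and thereby expand) the view during prepare, so the
         parsed-tree transformations allocate from the statement's
         permanent arena rather than the execution arena. */
      res= mysql_test_create_view(stmt);
      break;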
The problem was that fill_defined_view_parts() did not return
an error if the object being altered was a table. That happened if
the table was already in the table cache. In that case,
open_table() returned a non-NULL value (a valid TABLE instance from
the cache).
The fix is to ensure that an error is thrown even if the table
is in the cache.
(This is a backport of the original patch for 5.1)
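A sketch of the added guard, assuming the opened object is inspected after the cache lookup (names and error code approximate):

    /* open_table() can hand back a cached TABLE instance; an object
       that opened as a base table is still an error here. */
    if (tables->view == NULL)
    {
      my_error(ER_WRONG_OBJECT, MYF(0), tables->db, tables->table_name, "VIEW");
      return TRUE;
    }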
The test case for bug#31048 checks that there is no crash on stack
overrun. But due to different stack sizes on different platforms it failed
on some of them.
The new test case checks that a query with at least 4 levels of subquery
nesting works without a stack overrun, and that other levels of nesting
do not cause a crash.
Finding a routine should be a transparent operation as
far as the binary log is concerned.
But it was influencing the binary log because of the TIMESTAMP
column in the proc table.
Fixed by preserving and restoring the time_zone usage flag when
searching for a stored routine in the proc table.
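A sketch of the save/restore around the proc-table read; thd->time_zone_used is the flag that reading the TIMESTAMP column sets as a side effect:

    bool saved_time_zone_used= thd->time_zone_used;
    /* ... look up the routine in mysql.proc; reading its TIMESTAMP
       column flips thd->time_zone_used as a side effect ... */
    thd->time_zone_used= saved_time_zone_used;  /* make the lookup transparent */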
NAME_CONST() broke replication: it didn't replicate the constant's
character set and collation correctly.
With this fix NAME_CONST() inherits the collation from its value argument.
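A sketch of the inheritance, assuming it happens when NAME_CONST() resolves its value argument (member names approximate):

    /* Take the character set/collation of the value argument instead
       of leaving the function's default in place. */
    collation.set(value_item->collation);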
The problem is not about intervals and doesn't actually cause a 'full
table scan'.
We have an optimization for DISTINCT: with
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the joined table once we have found one conforming row.
This stopped working in 5.0 because we return NESTED_LOOP_OK when we
come upon that case in evaluate_join_record(), and that doesn't break
the record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
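Roughly, the changed return in evaluate_join_record() (sketch; member names approximate):

    /* The current row is not needed for DISTINCT evaluation and a
       conforming row was already found: stop reading this table
       instead of continuing the loop in sub_select(). */
    if (join_tab->not_used_in_distinct && found)
      return NESTED_LOOP_NO_MORE_ROWS;   /* was: NESTED_LOOP_OK */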