The problem becomes apparent only if HAVE_purify is undefined.
It relates to the part of the code in the open_table_from_share()
function where we initialize the record buffer only if HAVE_purify
is enabled. So in case of HAVE_purify=OFF the record buffer is not
initialized at the open table stage.
Next we read a key, find a NULL value and update the appropriate
null bit, but do not update the record buffer. After that the record
is stored in the join cache (store_record_in_cache). For CHAR fields
we strip trailing spaces, and in our case this procedure uses the
uninitialized record buffer.
The fix is to skip the space-stripping procedure in case of NULL
values for CHAR fields (partially based on the 6.0 JOIN_CACHE
implementation).
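A minimal sketch of the kind of scenario involved (table, data and
join shape are illustrative, not the original test case; the exact
access plan that triggers the join cache depends on the optimizer):

  CREATE TABLE t1 (a CHAR(10));
  INSERT INTO t1 VALUES (NULL), ('x');
  -- A join that buffers rows in the join cache may touch the
  -- uninitialized CHAR bytes when stripping trailing spaces
  -- from a NULL value.
  SELECT * FROM t1 AS s1, t1 AS s2 WHERE s1.a = s2.a;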
removed in MySQL 6.0
CREATE TABLE... TYPE= returns the warning "The syntax
'TYPE=storage_engine' is deprecated and will be removed in
MySQL 6.0. Please use 'ENGINE=storage_engine' instead"
This syntax was already deprecated as of version 5.4.4, so
the message has been changed.
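For example (the engine name is illustrative):

  CREATE TABLE t1 (a INT) TYPE=MyISAM;    -- deprecated, raises the warning
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;  -- the supported spelling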
In addition, the deprecation macro was changed to reflect
the ServerPT decision not to include version number in the
warning message.
A number of test result files have been changed as a
consequence of the change in the deprecation macro.
Queries optimized with GROUP_MIN_MAX didn't clean up the KEYREAD
optimization properly. As a result, subsequent queries may
return incomplete rows (fields initialized to their default
values).
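A sketch of the kind of statement sequence affected (schema is
illustrative):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT a, MIN(b) FROM t1 GROUP BY a;  -- may be optimized with GROUP_MIN_MAX
  SELECT * FROM t1;                     -- could see leftover KEYREAD before the fix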
Grouping by a subquery in a query with a distinct aggregate
function led to a wrong result (wrong and unordered
grouping values).
There are two related problems:
1) A query like this:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa
returned a wrong result, because the outer reference "t1.a"
in the subquery was substituted with an Item_ref item.
The Item_ref item obtains data from the result_field object
that refreshes once after the end of each group. This data
is not applicable to filesort since filesort() doesn't care
about groups (and doesn't update result_field objects with
copy_fields() and so on). Also that data is not applicable
to group separation algorithm: end_send_group() checks every
record with test_if_group_changed() that evaluates Item_ref
items, but it refreshes those Item_ref items only after the end
of a group; that is a vicious circle, and the grouped column
values in the output are shifted.
Fix: if
a) we are grouping by a subquery and
b) that subquery has outer references to the FROM list
of the grouping query,
then we substitute these outer references with
Item_direct_ref-like references under aggregate
functions: Item_direct_ref obtains data directly
from the current record.
2) A query with a non-trivial grouping expression like:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa+0
also returned a wrong result, since JOIN::exec() substitutes
references to top-level aliases in the SELECT list with Item_copy
caching items. Item_copy items have the same refreshing policy
as Item_ref items, so the whole grouping expression with
an Item_copy inside returns a wrong result in filesort() and
end_send_group().
Fix: include the aliased items in the GROUP BY item tree instead
of Item_ref references to them.
logging is disabled
The server would hit an assertion because of a DBUG violation.
There was a missing DBUG_RETURN and instead a plain return
was used.
This patch replaces the return with DBUG_RETURN.
Performing a fulltext prefix search (a word with the truncation
operator) may cause a dead loop. The ft_min_word_len value
doesn't actually matter.
The problem was introduced along with the "smarter index merge"
optimization.
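For example, a boolean-mode query using the truncation operator
(schema and data are illustrative):

  CREATE TABLE t1 (a VARCHAR(255), FULLTEXT KEY (a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES ('antonym'), ('antidote');
  SELECT * FROM t1 WHERE MATCH (a) AGAINST ('ant*' IN BOOLEAN MODE);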
There were two problems:
The first was the symptom, caused by bad error handling in
ha_partition. It did not handle print_error etc. when
having no partitions (when used by a dummy handler).
The second was the real problem: when dropping tables
it reused the table type (storage engine) from when the lock
was asked for, not the table type that it had when gaining
the exclusive name lock. So it tried to delete tables
from the wrong storage engines.
The solution for the first problem was to accept some handler
calls to the partitioning handler even if it was not set up
with any partitions, and also, if possible, to fall back
to the base handler's default functions.
The solution for the second problem was to remove the optimization
to reuse the definition from the cache, and instead always check
the frm-file when holding the LOCK_open mutex
(updated with a fix for a debug print crash and better
comments as required by the reviewer, and removed the optimization
to avoid reading the frm-file).
Fixed 2 problems:
1. test_if_order_by_key() was continuing on the primary key
as if it had a primary key suffix (as the secondary keys do).
This led to crashes in ORDER BY <pk>,<pk>.
Fixed by not treating the primary key as a secondary one
and not depending on it being clustered with a primary key.
2. The cost calculation was trying to read the records
per key when operating on ORDER BYs that order on all of the
secondary key + some of the primary key.
This led to crashes because of out-of-bounds array access.
Fixed by assuming we'll find 1 record per key in such cases.
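Hypothetical query shapes of the two kinds (InnoDB, where secondary
keys carry the primary key columns as a suffix):

  CREATE TABLE t1 (p1 INT, p2 INT, a INT,
                   PRIMARY KEY (p1, p2), KEY (a)) ENGINE=InnoDB;
  SELECT * FROM t1 ORDER BY p1, p1;  -- problem 1
  SELECT * FROM t1 ORDER BY a, p1;   -- problem 2: all of the secondary
                                     -- key plus part of the primary key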
column is used for ORDER BY
Problem: filesort isn't meant for zero-length sort data
(e.g. char(0)), which leads to a server crash.
Fix: disregard the sort order if the sort data record length is 0
(nothing to sort).
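For example (illustrative):

  CREATE TABLE t1 (a CHAR(0));
  INSERT INTO t1 VALUES (''), ('');
  SELECT * FROM t1 ORDER BY a;  -- zero-length sort data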
The problem was that a DROP TRIGGER statement inside a stored
procedure could cause a crash in subsequent invocations. This
was due to the addition, on the first execution, of a temporary
table reference to the stored procedure query table list. In
a subsequent invocation, there would be an attempt to reinitialize
the temporary table reference, which by then was already gone.
The solution is to back up and reset the query table list each
time a trigger needs to be dropped. This ensures that any temporary
changes to the query table list are discarded. It is safe to
do so at this time as DROP TRIGGER is restricted from more
complicated scenarios (i.e., not allowed within stored functions,
etc).
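A hypothetical repro sketch (names are illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @a = 1;
  CREATE PROCEDURE p1() DROP TRIGGER trg1;
  CALL p1();  -- first execution adds the temporary table reference
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @a = 1;
  CALL p1();  -- could crash before the fix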
Server crashes when accessing ARCHIVE table with missing
.ARZ file.
When opening a table, ARCHIVE didn't properly pass through the
error code from the lower-level azopen() to the higher-level open()
method.
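Illustrative scenario:

  CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
  -- remove t1.ARZ from the database directory, then:
  SELECT * FROM t1;  -- should fail with an error instead of crashing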
Bulk REPLACE or bulk INSERT ... ON DUPLICATE KEY UPDATE may
break dynamic record MyISAM table.
The problem is limited to bulk REPLACE and INSERT ... ON
DUPLICATE KEY UPDATE, because only these operations may
be done via UPDATE internally and may request the write cache.
When flushing the write cache, MyISAM may write the remaining
cached data at a wrong position. Fixed by requesting the write
cache to seek to the correct position.
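A sketch of the kind of statement affected (schema is illustrative;
the VARCHAR column gives the table the dynamic record format):

  CREATE TABLE t1 (a INT PRIMARY KEY, b VARCHAR(100)) ENGINE=MyISAM;
  REPLACE INTO t1 VALUES (1, 'x'), (2, 'y'), (1, 'z');  -- bulk REPLACE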
table and view...
Invalid memory reads occurred after a query referencing a MyISAM
table multiple times with a write lock. Invalid memory reads may
lead to a server crash, valgrind warnings, or incorrect values
in INFORMATION_SCHEMA.TABLES.{TABLE_ROWS, DATA_LENGTH,
INDEX_LENGTH, ...}.
This may happen when one of the table instances gets closed
after a query, e.g. when out of slots in the open tables cache.
UNION, MERGE and VIEW are irrelevant.
The problem was that MyISAM didn't restore the state info
pointer to its default value.
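A hypothetical statement of the affected shape (the same MyISAM
table opened twice, one instance with a write lock):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  INSERT INTO t1 SELECT * FROM t1 AS src;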
In case of 'CREATE VIEW' the subselect transformation does not
happen (see JOIN::prepare).
During fix_fields(), Item_row may call the is_null() method for its
arguments, which leads to item evaluation (of the wrong subselect in
our case, as the transformation did not happen before). This
is_null() call does not make sense for 'CREATE VIEW'.
Note:
Only Item_row is affected because other items don't call is_null()
during fix_fields() for their arguments.
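A hypothetical statement of the affected shape (an Item_row with a
subselect argument inside a view definition):

  CREATE VIEW v1 AS SELECT ROW(1, (SELECT 1)) = ROW(1, 1);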
SHOW CREATE TABLE on a view (v1) that contains a function whose
statement uses another view (v2) could trigger an infinite loop
if the view referenced within the function causes a warning to
be raised while opening the said view (v2).
The problem was an infinite loop over the stack of internal error
handlers. The problem would be triggered if the stack contained
two or more handlers and the first two handlers didn't handle the
raised condition. In this case, the loop variable would always
point to the second handler in the stack.
The solution is to correct the loop variable assignment so that
the loop is able to iterate over all handlers in the stack.
The problem was that a failure to open a view wasn't being
properly handled. When opening a view with unknown definer,
the open procedure would be treated as successful and would
later crash when attempting to lock the view (which wasn't
opened to begin with).
The solution is to skip further processing when opening a
table if it fails with a fatal error.
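A hypothetical repro (creating a view whose definer does not exist
produces a warning; opening it then fails):

  CREATE DEFINER='no_such_user'@'localhost' VIEW v1 AS SELECT 1;
  SELECT * FROM v1;  -- open fails; crashed at lock time before the fix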
error causes debug assertion
The IGNORE option of the multiple-table UPDATE command was
not intended to suppress errors caused by the
sql_safe_updates mode. This flag will raise an error if the
execution of UPDATE does not use a key for row retrieval,
and should continue to do so regardless of the IGNORE option.
However, the implementation of IGNORE does not support
exceptions to the rule; it always converts errors to
warnings and cannot be extended. The Internal_error_handler
interface offers the infrastructure to handle individual
errors, making sure that the error raised by
sql_safe_updates is not silenced.
Fixed by implementing an Internal_error_handler and using it
for UPDATE IGNORE commands.
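For example (illustrative):

  SET SQL_SAFE_UPDATES = 1;
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  UPDATE IGNORE t1, t2 SET t1.a = 10 WHERE t1.a = t2.a;
  -- no key is used for row retrieval; the safe-update error
  -- must still be raised despite IGNORE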
Detailed revision comments:
r6489 | sunny | 2010-01-21 02:57:50 +0200 (Thu, 21 Jan 2010) | 2 lines
branches/5.1: Factor out test for bug#44030 from innodb-autoinc.test
into a separate test/result files.
Detailed revision comments:
r6488 | sunny | 2010-01-21 02:55:08 +0200 (Thu, 21 Jan 2010) | 2 lines
branches/5.1: Factor out test for bug#44030 from innodb-autoinc.test
into a separate test/result files.
check_access() returning false for a database does not
guarantee that access to it has been granted.
This wrong condition when filling the INFORMATION_SCHEMA
tables caused extra tables to be returned to the user
even if the user has no rights to see them.
Fixed by correcting the condition.
fulltext search and row op.
The search for fulltext indexes looks for certain special
predicate layouts. While doing so it did not check the number
of columns of the expressions it tried to evaluate.
Since row expressions can't return a single scalar value,
there was a crash.
Fixed by checking that the expressions are scalar (in addition to
being constant) before calling the Item::val_xxx() methods.
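A hypothetical query of the affected shape (a constant row
expression where the fulltext code expects a scalar):

  CREATE TABLE t1 (a VARCHAR(255), FULLTEXT KEY (a)) ENGINE=MyISAM;
  SELECT * FROM t1 WHERE MATCH (a) AGAINST ('x') > ROW(1, 1);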
in multitable delete/subquery
SQL_BUFFER_RESULT should not have an effect on non-SELECT
statements according to our documentation.
Fixed by not passing it through to multi-table DELETE (similarly
to how it's done for multi-table UPDATE).
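A hypothetical statement of the affected shape:

  DELETE t1 FROM t1, t2
  WHERE t1.a = t2.a
    AND t2.a IN (SELECT SQL_BUFFER_RESULT a FROM t2);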
Several items said to be deprecated in the 4.1 manual
have never been removed. This worklog adds deprecation
warnings when these items are used, and warns the user
that the items will be removed in MySQL 5.6.
A couple of previous deprecation decisions have been
reversed (see the individual file comments).
Valgrind pointed to a buffer allocated by my_realloc() that looked
fishy. Replaced the size with what was probably intended, and added
a test case. A line was also fixed after a review comment.
A 'CREATE TABLE IF NOT EXISTS ... SELECT' statement was causing
'CREATE TEMPORARY TABLE ...' to be written to the binary log in
row-based mode (a.k.a. RBR) when there was a temporary table with
the same name. This was because the 'CREATE TABLE ... SELECT'
statement was executed as 'INSERT ... SELECT' into the temporary
table. Since in RBR mode no other statements related to temporary
tables are written to the binary log, this sometimes broke
replication.
This patch changes the behavior of
'CREATE TABLE [IF NOT EXISTS] ... SELECT ...':
it now ignores the existence of a temporary table with the
same name as the table being created, and the statement is
interpreted as an attempt to create/insert into the base table.
This makes the behavior of 'CREATE TABLE [IF NOT EXISTS] ... SELECT'
consistent with how ordinary 'CREATE TABLE' and
'CREATE TABLE ... LIKE' behave.
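For example (illustrative):

  CREATE TABLE t2 (a INT);
  CREATE TEMPORARY TABLE t1 (a INT);
  CREATE TABLE IF NOT EXISTS t1 (a INT) SELECT a FROM t2;
  -- Before: executed as 'INSERT ... SELECT' into the temporary t1
  -- and logged as 'CREATE TEMPORARY TABLE' in RBR mode.
  -- After: the temporary t1 is ignored; the base table t1 is
  -- created and filled.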
Selecting the CONCAT_WS(...<PS parameter>...) result into
a user variable could return wrong data.
Item_func_concat_ws::val_str() contains a number of memory
allocation-saving optimization tricks. After the fix
for bug 46815 the control flow changed to a
branch that is commented as "This is quite uncommon!":
one of the places where we try to concatenate
strings in place. However, that "uncommon" place
didn't take PS parameters into account; they have another
trick in Item_sp_variable::val_str(): they use the
intermediate Item_sp_variable::str_value field,
where they may store a reference to an external
argument's buffer.
The Item_func_concat_ws::val_str() function has been
modified to take into account val_str() functions
(such as Item_sp_variable::val_str()) that return a
pointer to an internal Item member variable that
may reference an externally provided buffer.
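A hypothetical statement of the affected shape:

  PREPARE s FROM "SELECT @v := CONCAT_WS(',', ?, ?)";
  SET @p1 = 'a', @p2 = 'b';
  EXECUTE s USING @p1, @p2;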