The problem becomes apparent only if HAVE_purify is undefined.
It relates to the part of the code in the open_table_from_share()
function where we initialize the record buffer only if HAVE_purify
is enabled. So when HAVE_purify is off, the record buffer is not
initialized at the open-table stage.
Next we read a key, find a NULL value and update the appropriate
null bit, but do not update the record buffer. After that the record
is stored in the join cache (store_record_in_cache). For CHAR fields
we strip trailing spaces, and in our case this procedure reads the
uninitialized record buffer.
The fix is to skip the space-stripping procedure for NULL values of
CHAR fields (partially based on the 6.0 JOIN_CACHE implementation).
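A minimal sketch of the triggering shape (hypothetical table, data
and query; the real test case may differ):

  CREATE TABLE t1 (a CHAR(10), b INT, KEY (b));
  INSERT INTO t1 VALUES (NULL, 1), ('x', 1);
  -- A row whose CHAR column is NULL has its null bit set while the
  -- record buffer bytes stay uninitialized; storing the row in the
  -- join cache used to strip trailing spaces from those bytes.
  SELECT x.a, y.a FROM t1 AS x, t1 AS y WHERE x.b = y.b;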
error in the query.
Fixes a leak after materializing a GROUP BY subquery to a
temp table when the subquery has a blob column in the SELECT
list.
Fixed by correctly destructing the temporary buffers for re-usable
queries.
The optimizer must not continue executing the current query
if e.g. the storage engine reports an error.
This is somewhat hard to implement with the Item::val_xxx()
methods because they have no means of returning an error code.
This is why we need to check the thread's error state after
a call to one of the Item::val_xxx() methods.
Fixed store_key_item::copy_inner() to return an error state
if an error happened during the call to Item::save_in_field()
because it calls Item::val_xxx().
Also added similar checks to related places.
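A hypothetical shape of a trigger (assumed tables t1, t2): an error
raised while evaluating the comparison value is visible only through
the thread's error state, and must abort the statement:

  SET sql_mode = 'STRICT_ALL_TABLES,ERROR_FOR_DIVISION_BY_ZERO';
  -- evaluating 1/0 for the lookup key raises an error that
  -- Item::val_xxx() itself cannot return
  INSERT INTO t2 SELECT * FROM t1 WHERE t1.a = 1/0;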
Bug#41756 "Strange error messages about locks from InnoDB".
In the JT_EQ_REF (join_read_key()) access method,
don't try to unlock rows in the handler unless it is certain that
a) they were locked, and
b) they are not used.
Unlocking of rows is done by the logic of the nested join loop,
and is unaware of the possible caching that the access method may
have. This could lead to double unlocking, when a row
was unlocked first after reading into the cache, and then
when taken from cache, as well as to unlocking of rows which
were actually used (but taken from cache).
Delegate part of the unlocking logic to the access method,
and in JT_EQ_REF count how many times a record was actually
used in the join. Unlock it only if its usage count is 0.
Implemented review comments.
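A hypothetical shape of an affected statement (assumed schema): a
locking read joined through a unique key, where join_read_key() may
serve the same row from its one-row cache several times:

  SELECT * FROM t1 JOIN t2 ON t2.pk = t1.a FOR UPDATE;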
with temporary tables
There were two problems the test case from this bug was
triggering:
1. JOIN::rollup_init() was supposed to wrap all constant Items
into another object for queries with the WITH ROLLUP modifier
to ensure they are never considered as constants and therefore
are written into temporary tables if the optimizer chooses to
employ them for DISTINCT/GROUP BY handling.
However, JOIN::rollup_init() was called before
make_join_statistics(), so Items corresponding to fields in
const tables could not be handled as intended, which was
causing all kinds of problems later in the query execution. In
particular, create_tmp_table() assumed all constant items
except "hidden" ones to be removed earlier by remove_const()
which led to improperly initialized Field objects for the
temporary table being created. This is what was causing crashes
and valgrind errors in storage engines.
2. Even when the above problem had been fixed, the query from
the test case produced incorrect results due to some
DISTINCT/GROUP BY optimizations being performed by the
optimizer that are inapplicable in the WITH ROLLUP case.
Fixed by disabling inapplicable DISTINCT/GROUP BY optimizations
when the WITH ROLLUP modifier is present, and splitting the
const-wrapping part of JOIN::rollup_init() into a separate
method which is now invoked after make_join_statistics() when
the const tables are already known.
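A hypothetical shape of an affected query: with t1 holding exactly
one row it becomes a const table, and its wrapped fields must still
be written to the temporary table:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);           -- single row: a const table
  CREATE TABLE t2 (b INT);
  INSERT INTO t2 VALUES (1), (2);
  SELECT t1.a, SUM(t2.b) FROM t1, t2 GROUP BY t1.a WITH ROLLUP;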
columns without where/group
Simple SELECT with implicit grouping used to return many rows if
the query was ordered by the aggregated column in the SELECT
list. This was incorrect because queries with implicit grouping
should only return a single record.
The problem was that when JOIN::exec() decided if execution needed
to handle grouping, it was assumed that sum_func_count==0 meant
that there were no aggregate functions in the query. This
assumption was not correct in JOIN::exec() because the aggregate
functions might have been optimized away during JOIN::optimize().
The reason why queries without ordering behaved correctly was
that sum_func_count is only recalculated if the optimizer chooses
to use temporary tables (which it does in the ordered case).
Hence, non-ordered queries were correctly treated as grouped.
The fix for this bug was to remove the assumption that
sum_func_count==0 means that there is no need for grouping. This
was done by introducing variable "bool implicit_grouping" in the
JOIN object.
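A minimal illustration (hypothetical table):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  -- Implicit grouping: exactly one row must be returned, with or
  -- without the ORDER BY on the aggregated column.
  SELECT MAX(a) FROM t1 ORDER BY MAX(a);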
buffering is used
FORCE INDEX FOR ORDER BY now prevents the optimizer from
using join buffering. As a result the optimizer can use
indexed access on the first table and doesn't need to
sort the complete resultset at the end of the statement.
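A hypothetical example (assumed index idx_a on t1.a):

  SELECT * FROM t1 FORCE INDEX FOR ORDER BY (idx_a)
  JOIN t2 ON t2.b = t1.b
  ORDER BY t1.a;
  -- rows can now stream out in index order instead of the complete
  -- result set being buffered and sorted at the end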
- Add support for setting it as a server command-line argument
- Add support for these switches (a usage sketch follows below):
= no_index_merge
= no_index_merge_union
= no_index_merge_sort_union
= no_index_merge_intersection
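A sketch of the resulting usage, matching the switch names above
(the exact syntax may differ between releases):

  -- as a server command-line argument:
  --   mysqld --optimizer_switch=no_index_merge_union
  SET optimizer_switch = 'no_index_merge_sort_union';
  SELECT @@optimizer_switch;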
ORDER BY could cause a server crash
Dependent subqueries like
SELECT COUNT(*) FROM t1, t2 WHERE t2.b
IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a)
caused a memory leak proportional to the
number of outer rows.
The make_simple_join() function has been converted into a
JOIN class method so that the join_tab_reexec and
table_reexec values are stored in the parent join only
(make_simple_join of tmp_join may access these values
via the 'this' pointer of the parent JOIN).
NOTE: this patch doesn't include a standard test case (this is
an "out of memory" bug). See the bug #42037 page for test cases.
A table could be marked dependent because it is
either 1) an inner table of an outer join, or 2) part of a
STRAIGHT_JOIN. In the STRAIGHT_JOIN case table->maybe_null should not
be set. The fix is to set st_table::maybe_null to 'true' only
for those tables which are used in an outer join.
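A minimal illustration of the distinction (hypothetical tables):

  -- t2 is the inner table of an outer join: its rows may be
  -- NULL-complemented, so t2's maybe_null must be true.
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a;
  -- STRAIGHT_JOIN only fixes the join order: t2 never produces
  -- NULL-complemented rows, so maybe_null must stay false.
  SELECT * FROM t1 STRAIGHT_JOIN t2 WHERE t1.a = t2.a;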
- Make send_row_on_empty_set() return FALSE when simplify_cond() has found out
that HAVING is always FALSE
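A minimal illustration (hypothetical table):

  -- Implicit grouping normally sends one row even for an empty set,
  -- but an always-false HAVING must suppress it.
  SELECT MIN(a) FROM t1;               -- one row
  SELECT MIN(a) FROM t1 HAVING 1 = 0;  -- empty result set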
re-committing to put the fix into 5.0 and 5.1
When constructing a key image stricter date checking (from sql_mode)
should not be enabled, because it will reject invalid dates that the
server would otherwise accept for searching when there's no index.
Fixed by disabling strict date checking when constructing a key image.
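A hypothetical shape of an affected search: with an index on d, the
lookup key for a date that strict checking would reject must still
be built, so indexed and non-indexed searches agree:

  SET sql_mode = 'TRADITIONAL';
  SELECT * FROM t1 WHERE d = '0000-00-00';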
Fixed the usage of spatial data (and Point in particular) with
non-spatial indexes.
Several problems:
- The length of the Point class was not updated to include the
  spatial reference system identifier. Fixed by increasing it by 4
  bytes.
- The storage length of the spatial columns was not accounting for
  the length that is prepended to it. Fixed by treating the
  spatial data columns as blobs (and thus increasing the storage
  length).
- When creating the key image for comparison in an index read, the
  wrong key image was created (the one needed for an R-tree search,
  not the one for a B-tree/other search). Fixed by treating the
  spatial data columns as blobs (and creating the correct kind of
  image based on the index type).
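A hypothetical example of the usage being fixed: a POINT column
covered by a plain (non-spatial, prefix) index and searched with a
b-tree comparison:

  CREATE TABLE t1 (p POINT NOT NULL, KEY (p(25)));
  INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)'));
  SELECT AsText(p) FROM t1 WHERE p = GeomFromText('POINT(1 1)');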
This patch adds cost estimation for the queries with ORDER BY / GROUP BY
and LIMIT.
If there was a ref/range access to the table whose rows were required
to be ordered in the result set, the optimizer always employed this
access, even though a scan over a different index compatible with the
required order could be cheaper for producing the first L rows of the
result set.
Now for such queries the optimizer makes a choice between the cheapest
ref/range accesses not compatible with the given order and index scans
compatible with it.
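A hypothetical query of the affected class: the optimizer now weighs
the cheapest ref/range access on b against a scan over an index on a
that delivers rows in the required order and can stop after 10 rows:

  SELECT * FROM t1 WHERE b = 10 ORDER BY a LIMIT 10;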
- Don't call mysql_select() several times for the select that enumerates
a temporary table with the results of the UNION. Making this call for
every subquery execution caused O(#enumerated-rows-in-the-outer-query)
memory allocations.
- Instead, call join->reinit() and join->exec(), and
= disable constant table detection for such joins,
= provide special handling for table-less constant subqueries.
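A hypothetical shape of an affected query, with the UNION result
materialized into a temporary table that the subquery enumerates for
every outer row:

  SELECT * FROM t1
  WHERE t1.a IN (SELECT b FROM t2 UNION SELECT c FROM t3);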
SELECT statement itself returns empty.
As a result of this bug 'SELECT AGGREGATE_FUNCTION(fld) ... GROUP BY'
can return one row instead of an empty result set.
When GROUP BY only has fields of constant tables
(with a single row), the optimizer deletes the group_list.
After that we lose the information about whether the query had a
GROUP BY clause. This information is important,
as SELECT min(x) from empty_table; and
SELECT min(x) from empty_table GROUP BY y; have to return
different results: the first query should return one row, the
second an empty result set.
So here we add the 'group_optimized_away' flag to remember the case
where a GROUP BY exists in the query but is removed
by the optimizer, and check this flag in end_send_group()
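The example above, spelled out as statements:

  CREATE TABLE empty_table (x INT, y INT);
  SELECT MIN(x) FROM empty_table;             -- one row: NULL
  SELECT MIN(x) FROM empty_table GROUP BY y;  -- empty result set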
query / no aggregate of subquery
The optimizer counts the aggregate functions that
appear as top level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag for whether
there are aggregate functions.
While collecting the above info it must not consider
aggregates that are not aggregated in the current
context. It must treat them as normal expressions
instead. Not doing that leads to incorrect data about
the query, e.g. running a query that actually has no
aggregate functions as if it has some (and hence is
expected to return only one row).
Fixed by ignoring the aggregates that are not aggregated
in the current context.
One other, smaller omission was discovered and fixed in the
process: the place of aggregation was not calculated for
user-defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func() as is done for the rest of
the aggregate functions.
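A hypothetical shape of an affected query: COUNT(t1.a) appears
textually inside the subquery but is aggregated in the outer query,
so the subquery must not be treated as implicitly grouped:

  SELECT (SELECT COUNT(t1.a) FROM t2 LIMIT 1) FROM t1 GROUP BY t1.b;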
a crash when the left operand of the predicate is evaluated to NULL.
It happens when the rows from the inner tables (tables from the subquery)
are accessed by index methods with key values obtained by evaluation of
the left operand of the subquery predicate. When this predicate is
evaluated to NULL an alternative access with full table scan is used
to check whether the result set returned by the subquery is empty or not.
The crash was due to the fact that the info about the access methods
used for regular key values was not properly restored after switching
back from the full-scan access method.
The patch restores this info properly.
The same problem existed for queries with IN subquery predicates if they
were used not at the top level of the queries.
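A hypothetical shape of an affected query, with NULLs in the left
operand forcing the temporary switch to a full scan of t2:

  -- rows where t1.a IS NULL trigger the full-scan check; afterwards
  -- the ref access info for regular key values must be restored
  SELECT * FROM t1
  WHERE t1.a NOT IN (SELECT b FROM t2 WHERE t2.c = t1.c);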
The following type conversions were done:
- Changed byte to uchar
- Changed gptr to uchar*
- Changed my_string to char *
- Changed my_size_t to size_t
- Changed size_s to size_t
Removed declarations of byte, gptr, my_string, my_size_t and size_s.
The following function parameter changes were done:
- All string functions in mysys/strings were changed to use size_t
  instead of uint for string lengths.
- All read()/write() functions changed to use size_t (including vio).
- All protocol functions changed to use size_t instead of uint.
- Functions that used a pointer to a string length were changed to use size_t*
- Changed malloc(), free() and related functions from using gptr to use void *
as this requires fewer casts in the code and is more in line with how the
standard functions work.
- Added extra length argument to dirname_part() to return the length of the
created string.
- Changed (at least) following functions to take uchar* as argument:
- db_dump()
- my_net_write()
- net_write_command()
- net_store_data()
- DBUG_DUMP()
- decimal2bin() & bin2decimal()
- Changed my_compress() and my_uncompress() to use size_t. Changed one
argument to my_uncompress() from a pointer to a value as we only return
one value (makes function easier to use).
- Changed type of 'pack_data' argument to packfrm() to avoid casts.
- Changed the type of the 'frmdata' argument in readfrm(), writefrm(),
  ha_discover() and handler::discover() to uchar** to avoid casts.
- Changed most Field functions to use uchar* instead of char* (reduced a lot of
casts).
- Changed field->val_xxx(xxx, new_ptr) to take const pointers.
Other changes:
- Removed a lot of not needed casts
- Added a few new cast required by other changes
- Added some casts to my_multi_malloc() arguments for safety (as string
  lengths need to be uint, not size_t).
- Fixed all calls to hash-get-key functions to use size_t*. (Needed to be
  done explicitly as this conflict was often hidden by casting the function
  to hash_get_key.)
- Changed some buffers and memory regions to uchar* to avoid casts.
- Changed some string lengths from uint to size_t.
- Changed field->ptr to be uchar* instead of char*. This allowed us to
get rid of a lot of casts.
- Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar
- Included zlib.h in some files as we needed the declaration of crc32().
- Changed MY_FILE_ERROR to be (size_t) -1.
- Changed many variables to hold the result of my_read() / my_write() to be
size_t. This was needed to properly detect errors (which are
returned as (size_t) -1).
- Removed some very old VMS code
- Changed packfrm()/unpackfrm() to not depend on uint size
  (portability fix)
- Removed Windows-specific code to restore the cursor position, as this
  causes a slowdown on Windows and we should not mix read() and pread()
  calls anyway as this is not thread safe. Updated the function comment to
  reflect this. Changed the function that depended on the original behavior
  of my_pwrite() to restore the cursor position itself (one such case).
- Added some missing checking of return value of malloc().
- Changed definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long' overflow.
- Changed type of table_def::m_size from my_size_t to ulong to reflect that
m_size is the number of elements in the array, not a string/memory
length.
- Moved THD::max_row_length() to table.cc (as it does not depend on THD).
  Inlined max_row_length_blob() into this function.
- More function comments
- Fixed some compiler warnings when compiled without partitions.
- Removed setting of LEX_STRING() arguments in declaration (portability fix).
- Some trivial indentation/variable name changes.
- Some trivial code simplifications:
- Replaced some calls to alloc_root + memcpy to use
strmake_root()/strdup_root().
- Changed some calls from memdup() to strmake() (Safety fix)
- Simpler loops in client-simple.c
Non-correlated scalar subqueries may get executed
in EXPLAIN at the optimization phase if they are
on the right-hand side of a sargable expression.
If the scalar subquery uses a temp table to
materialize its results it will replace the
subquery structure from the parser with a simple
select from the materialization table.
As a result the EXPLAIN will crash as the
temporary materialization table is not to be shown
in EXPLAIN at all.
Fixed by preserving the original query structure
right after calling optimize() for scalar subqueries
with temp tables executed during EXPLAIN.
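A hypothetical shape of an affected statement: the scalar subquery
is non-correlated, sits on the right-hand side of a sargable
comparison, and needs a temporary table (here due to DISTINCT):

  EXPLAIN SELECT * FROM t1
  WHERE t1.a = (SELECT DISTINCT b FROM t2 LIMIT 1);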
in index search MySQL was not explicitly
suppressing warnings. If the context
happens to enable warnings (e.g. INSERT ..
SELECT), the warnings resulting from converting
the data the key is compared to were
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
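A hypothetical illustration (assumed column key_col of a temporal
type with an index): the INSERT .. SELECT context enables warnings,
but converting the comparand to the key part's type must not report
them:

  INSERT INTO t2 SELECT * FROM t1 WHERE t1.key_col = '2009-01-01x';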
away.
During the optimization stage the WHERE conditions can be changed, or even
removed altogether, if they are known for sure to be true or false. Thus they
aren't shown in EXPLAIN EXTENDED, which prints conditions after optimization.
Now, if all elements of an Item_cond were removed, this Item_cond is substituted
with an Item_int holding the int value of the Item_cond.
If there were conditions that were optimized away entirely, the values of the
saved cond_value and having_value are printed instead.
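A minimal illustration (hypothetical table):

  -- 1 = 1 is known true and optimized away; if a whole Item_cond
  -- collapses, its constant value is printed instead.
  EXPLAIN EXTENDED SELECT * FROM t1 WHERE 1 = 1 AND a > 0;
  SHOW WARNINGS;  -- shows the rewritten, post-optimization query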