Actually there are two different bugs.
The first one caused a crash on queries with a WHERE condition over views
that themselves contain a WHERE condition. A wrong check for the prepared
statement phase led to items for view fields being allocated in the execution
memory and freed at the end of execution. Thus the optimized WHERE condition
referred to unallocated memory on the second execution and the server crashed.
The second one was caused by the Item_cond::compile function not saving the
changes it made to the item tree. Thus the changes weren't reverted before
the next execution and the server crashed dereferencing unallocated memory.
A new helper function, is_stmt_prepare_or_first_stmt_execute(),
is added to the Query_arena class.
The find_field_in_view function now uses
is_stmt_prepare_or_first_stmt_execute() to check whether
newly created view items should be freed at the end of the query execution.
The Item_cond::compile function now saves the changes it makes to the item tree.
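To illustrate the change-recording idea, here is a standalone sketch (not the
server code; the Node/ChangeRecord types and the rewrite rule are
hypothetical): every pointer the transform replaces is logged so the change
can be rolled back before the statement is executed again.
  #include <cstdio>
  #include <vector>
  // Hypothetical item-tree node; stands in for Item and its arguments.
  struct Node { int value; std::vector<Node*> children; };
  // One recorded change: where a pointer lives and what it pointed to before.
  struct ChangeRecord { Node **place; Node *old_value; };
  // Transform a subtree; every replacement is recorded so it can be undone
  // before the statement runs again (the idea behind saving the changes
  // Item_cond::compile makes to the item tree).
  static void transform(Node **place, std::vector<ChangeRecord> &log) {
    Node *n = *place;
    for (size_t i = 0; i < n->children.size(); i++)
      transform(&n->children[i], log);
    if (n->value < 0) {                      // some rewrite rule
      Node *replacement = new Node{0, {}};
      log.push_back(ChangeRecord{place, n}); // remember the old pointer
      *place = replacement;                  // apply the change
    }
  }
  // Undo all recorded changes, restoring the tree for re-execution.
  static void rollback(std::vector<ChangeRecord> &log) {
    for (size_t i = log.size(); i-- > 0;) {
      delete *log[i].place;
      *log[i].place = log[i].old_value;
    }
    log.clear();
  }
  int main() {
    Node *leaf = new Node{-1, {}};
    Node *root = new Node{1, {leaf}};
    std::vector<ChangeRecord> log;
    transform(&root, log);   // first execution rewrites the tree
    rollback(log);           // restore it before the next execution
    std::printf("restored child value: %d\n", root->children[0]->value);
    delete root->children[0];
    delete root;
    return 0;
  }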
mysql-test/r/ps.result:
Added a test case for bug #48508.
mysql-test/t/ps.test:
Added a test case for bug #48508.
sql/item_cmpfunc.cc:
Bug#48508: Crash on prepared statement re-execution.
The Item_cond::compile function now saves the changes it makes to the item tree.
sql/sql_base.cc:
Bug#48508: Crash on prepared statement re-execution.
The find_field_in_view function now uses
is_stmt_prepare_or_first_stmt_execute() to check whether
newly created view items should be freed at the end of the query execution.
sql/sql_class.h:
Bug#48508: Crash on prepared statement re-execution.
The Query_arena::is_stmt_prepare_or_first_sp_execute function now correctly
does its check.
WHERE conditions
check_group_min_max() checks if the loose index scan
optimization is applicable for a given WHERE condition, that is
if the MIN/MAX attribute participates only in range predicates
comparing the corresponding field with constants.
The problem was that it considered the whole predicate suitable
for the loose index scan optimization as soon as it encountered
a constant as a predicate argument. This is obviously wrong for
cases when a constant is the first argument of a predicate
which does not satisfy the above condition.
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the test,
even if a constant has already been encountered.
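As an illustration of the intended check (a standalone sketch with
hypothetical names, not the opt_range.cc code): every argument must be
examined, so a predicate is not accepted merely because its first argument
happens to be a constant.
  #include <cassert>
  #include <vector>
  // Hypothetical argument descriptor for a predicate f(arg1, arg2, ...).
  enum ArgKind { ARG_CONSTANT, ARG_MINMAX_FIELD, ARG_OTHER_FIELD };
  // Returns true only if the predicate compares the MIN/MAX field with
  // constants: every argument must be either a constant or the MIN/MAX field
  // itself. The point is to examine *all* arguments rather than declaring the
  // predicate suitable as soon as the first constant is seen.
  static bool minmax_range_predicate(const std::vector<ArgKind> &args) {
    bool saw_minmax_field = false;
    for (size_t i = 0; i < args.size(); i++) {
      if (args[i] == ARG_CONSTANT)
        continue;                        // fine, but keep checking the rest
      if (args[i] == ARG_MINMAX_FIELD) {
        saw_minmax_field = true;
        continue;
      }
      return false;                      // some other field: not applicable
    }
    return saw_minmax_field;
  }
  int main() {
    // "const < other_field": a constant comes first, but the predicate must
    // still be rejected because the other argument is not the MIN/MAX field.
    assert(!minmax_range_predicate({ARG_CONSTANT, ARG_OTHER_FIELD}));
    // "minmax_field < const": applicable.
    assert(minmax_range_predicate({ARG_MINMAX_FIELD, ARG_CONSTANT}));
    return 0;
  }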
mysql-test/r/group_min_max.result:
Added a test case for bug #48472.
mysql-test/t/group_min_max.test:
Added a test case for bug #48472.
sql/opt_range.cc:
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the test,
even if a constant has already been encountered.
memory
The server was doing an invalid class typecast which caused a wrong
value to be set for the maximum number of items in an internal
structure used in equality propagation.
Fixed by removing the wrong typecast and asserting the type
of the Item where the cast is done.
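The principle of the fix, as a minimal standalone sketch (the class names
follow the server's convention, but the code and the max_equal_items member
are hypothetical): verify the dynamic type before downcasting instead of
casting blindly.
  #include <cassert>
  // Simplified stand-ins for the server's Item hierarchy.
  struct Item {
    enum Type { FIELD_ITEM, FUNC_ITEM };
    virtual Type type() const = 0;
    virtual ~Item() {}
  };
  struct Item_field : Item {
    unsigned max_equal_items;            // hypothetical member set during setup
    Item_field() : max_equal_items(0) {}
    Type type() const { return FIELD_ITEM; }
  };
  struct Item_func : Item {
    Type type() const { return FUNC_ITEM; }
  };
  // Instead of casting an arbitrary Item to the more specific class and
  // writing members through the wrong layout, assert the type first.
  static void touch_field_item(Item *item) {
    assert(item->type() == Item::FIELD_ITEM);
    Item_field *field = static_cast<Item_field *>(item);
    field->max_equal_items = 42;
  }
  int main() {
    Item_field f;
    touch_field_item(&f);                // OK: really is an Item_field
    // Item_func g; touch_field_item(&g); would trip the assertion instead
    // of silently corrupting memory through a bogus cast.
    return 0;
  }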
values
We should re-set the access method functions when changing the access
method while switching to another index to avoid sorting.
Fixed by a little re-engineering: encapsulating all the function
assignments into a dedicated function and calling it when flipping the
indexes.
only const tables
The problem was caused by two shortcuts in the optimizer that
are inapplicable in the ROLLUP case.
Normally, when only const tables are involved in a query, the
DISTINCT clause can be safely optimized away since the join can
produce at most one row. Similarly, we don't
need to create a temporary table to resolve DISTINCT/GROUP
BY/ORDER BY. Both of these shortcuts are inapplicable when the WITH
ROLLUP modifier is present.
Fixed by disabling the said optimizations for the WITH ROLLUP
case.
mysql-test/r/olap.result:
Added a test case for bug #48475.
mysql-test/t/olap.test:
Added a test case for bug #48475.
sql/sql_select.cc:
Disabled const-only table optimizations for the WITH ROLLUP
case.
Bug#41756 "Strange error messages about locks from InnoDB".
In JT_EQ_REF (join_read_key()) access method,
don't try to unlock rows in the handler, unless certain that
a) they were locked
b) they are not used.
Unlocking of rows is done by the logic of the nested join loop,
and is unaware of the possible caching that the access method may
have. This could lead to double unlocking, when a row
was unlocked first after reading into the cache, and then
when taken from cache, as well as to unlocking of rows which
were actually used (but taken from cache).
Delegate part of the unlocking logic to the access method,
and in JT_EQ_REF count how many times a record was actually
used in the join. Unlock it only if its usage count is 0.
Implemented review comments.
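A standalone sketch of the usage-count idea (not the actual sql_select.cc
change; CachedRowRef and the functions are hypothetical): the access method
counts how often its cached row is handed out, and the unlock request reaches
the handler only when that count is zero.
  #include <cstdio>
  // Hypothetical cached-row state for an EQ_REF style access method that
  // keeps the last fetched row around between calls.
  struct CachedRowRef {
    bool has_record;   // a row is cached
    int use_count;     // how many times the cached row was handed to the join
  };
  // Called by the access method each time the cached row is reused.
  static void use_cached_row(CachedRowRef *r) {
    r->use_count++;
  }
  // Called by the nested-loop join when it no longer needs the current row.
  // The row is released to the storage engine only if it was never actually
  // used; otherwise the "unlock" request is ignored, which avoids both double
  // unlocking and unlocking rows that were consumed from the cache.
  static void unlock_row(CachedRowRef *r) {
    if (r->has_record && r->use_count == 0) {
      std::printf("handler unlock_row()\n");   // would call the engine here
      r->has_record = false;
    }
  }
  int main() {
    CachedRowRef used_row = {true, 0};
    use_cached_row(&used_row);   // row taken from the cache and used
    unlock_row(&used_row);       // no engine unlock: the row was used
    CachedRowRef unused_row = {true, 0};
    unlock_row(&unused_row);     // engine unlock: locked but never used
    return 0;
  }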
mysql-test/r/bug41756.result:
Add result file (Bug#41756)
mysql-test/t/bug41756-master.opt:
Use --innodb-locks-unsafe-for-binlog, as in 5.0 just
using read_committed isolation is not sufficient to
reproduce the bug.
mysql-test/t/bug41756.test:
Add a test file (Bug#41756)
sql/item_subselect.cc:
Complete struct READ_RECORD initialization with a new
member to unlock records.
sql/records.cc:
Extend READ_RECORD API with a method to unlock read records.
sql/sql_select.cc:
In JT_EQ_REF (join_read_key()) access method,
don't try to unlock rows in the handler, unless certain that
a) they were locked
b) they are not used.
sql/sql_select.h:
Add members to TABLE_REF to track the buffer usage count.
sql/structs.h:
Update declarations.
When a session is closed, all temporary tables of the session are automatically
dropped and the drops are binlogged. But they were binlogged with wrong database
names when the temporary tables' database names were longer than the
current database name, or when the current database was not set.
Query_log_event's db_len was not set when Query_log_event's db was set.
This patch sets db_len immediately after db is set.
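A minimal sketch of the invariant (a stand-in struct, not the real
Query_log_event): whenever db is assigned, db_len is recomputed in the same
place.
  #include <cassert>
  #include <cstring>
  // Stand-in for the relevant part of Query_log_event.
  struct QueryLogEvent {
    const char *db;
    size_t db_len;
    // Keep db and db_len in sync: set the length immediately after
    // (re)setting the database name, so later code that writes "use `db`"
    // to the binlog never sees a stale length.
    void set_db(const char *new_db) {
      db = new_db;
      db_len = new_db ? std::strlen(new_db) : 0;
    }
  };
  int main() {
    QueryLogEvent ev = {"", 0};
    ev.set_db("tmpdb");
    assert(ev.db_len == 5);
    ev.set_db(0);                 // "current database is not set"
    assert(ev.db_len == 0);
    return 0;
  }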
with temporary tables
There were two problems the test case from this bug was
triggering:
1. JOIN::rollup_init() was supposed to wrap all constant Items
into another object for queries with the WITH ROLLUP modifier
to ensure they are never considered as constants and therefore
are written into temporary tables if the optimizer chooses to
employ them for DISTINCT/GROUP BY handling.
However, JOIN::rollup_init() was called before
make_join_statistics(), so Items corresponding to fields in
const tables could not be handled as intended, which was
causing all kinds of problems later in the query execution. In
particular, create_tmp_table() assumed all constant items
except "hidden" ones to be removed earlier by remove_const()
which led to improperly initialized Field objects for the
temporary table being created. This is what was causing crashes
and valgrind errors in storage engines.
2. Even when the above problem had been fixed, the query from
the test case produced incorrect results due to some
DISTINCT/GROUP BY optimizations being performed by the
optimizer that are inapplicable in the WITH ROLLUP case.
Fixed by disabling inapplicable DISTINCT/GROUP BY optimizations
when the WITH ROLLUP modifier is present, and splitting the
const-wrapping part of JOIN::rollup_init() into a separate
method which is now invoked after make_join_statistics() when
the const tables are already known.
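To illustrate the const-wrapping part (a standalone sketch with hypothetical
Expr/RollupConstWrapper types, not the server code): constants are wrapped in
a pass-through object that reports const_item() as false, and the wrapping
runs only after the const tables are known.
  #include <cstdio>
  #include <vector>
  // Minimal stand-ins for items in a SELECT list.
  struct Expr {
    virtual int val() const = 0;
    virtual bool const_item() const = 0;
    virtual ~Expr() {}
  };
  struct ConstExpr : Expr {
    int v;
    explicit ConstExpr(int v_) : v(v_) {}
    int val() const { return v; }
    bool const_item() const { return true; }
  };
  // Pass-through wrapper: same value, but never reported as constant, so
  // temp-table code will store it instead of optimizing it away. This mirrors
  // the role of the ROLLUP const wrapper described above.
  struct RollupConstWrapper : Expr {
    Expr *inner;
    explicit RollupConstWrapper(Expr *e) : inner(e) {}
    int val() const { return inner->val(); }
    bool const_item() const { return false; }
  };
  // To be called only after the const tables are known (i.e. after the
  // statistics phase), wrapping every item that became constant.
  static void wrap_rollup_consts(std::vector<Expr *> &select_list) {
    for (size_t i = 0; i < select_list.size(); i++)
      if (select_list[i]->const_item())
        select_list[i] = new RollupConstWrapper(select_list[i]);
  }
  int main() {
    std::vector<Expr *> fields;
    fields.push_back(new ConstExpr(7));   // e.g. a field from a const table
    wrap_rollup_consts(fields);
    std::printf("const_item=%d val=%d\n", (int)fields[0]->const_item(),
                fields[0]->val());
    return 0;
  }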
mysql-test/r/olap.result:
Added a test case for bug #48131.
mysql-test/t/olap.test:
Added a test case for bug #48131.
sql/sql_select.cc:
1. Disabled inapplicable DISTINCT/GROUP BY optimizations when
the WITH ROLLUP modifier is present.
2. Split the const-wrapping part of JOIN::rollup_init() into a
separate method.
sql/sql_select.h:
Added rollup_process_const_fields() declaration.
subquery returning multiple rows
Error handling was missing when handling subqueries in WHERE
clauses and when assigning a SELECT result to a @variable.
This caused crashes.
Fixed by adding error handling code to both the WHERE
condition evaluation and the assignment to a @variable.
having clause...
The fix for bug 46184 was not very complete. It did not cover
views using temporary tables or multiple tables in a FROM clause.
Fixed by reverting the fix for bug 46184 and adding a more general
check that runs at the right execution stage and covers all
of the unsupported cases.
Now PROCEDURE ANALYSE on a non-top-level SELECT is also forbidden.
Updated analyse.test and subselect.test accordingly.
Queries with nested outer joins may lead to crashes or
bad results because an internal data structure is not handled
correctly.
The optimizer uses bitmaps of nested JOINs to determine
if certain table can be placed at a certain place in the
JOIN order.
It maintains a bitmap describing in which JOINs the
last placed table is nested.
When it puts a table it makes sure the bit of every JOIN that
contains the table in question is set (because JOINs can be nested).
It does that by recursively setting the bit for the next enclosing
JOIN when this is the first table in the JOIN and recursively
resetting the bit if it's the last table in the JOIN.
When it removes a table from the join order it should do the
opposite: recursively unset the bit if it's the only remaining
table in this JOIN and recursively set the bit if it's removing
the last table of a JOIN.
There was an error in how the bits were set for the upper levels:
when removing a table it was setting the bit for all the enclosing
nested JOINs even if there were more tables left in the current JOIN
(which practically means that the upper nested JOINs were not affected).
Fixed by stopping the recursion at the relevant level.
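A standalone sketch of the bit bookkeeping (hypothetical NestedJoin type, not
the sql_select.cc code): each nested join keeps a counter of tables currently
placed in it, and both the set and the restore path stop walking up as soon
as an enclosing join still has other tables.
  #include <cassert>
  #include <cstdint>
  // Hypothetical nested-join node: a bit in the bitmap, a counter of tables
  // currently placed inside it, and a pointer to the enclosing join.
  struct NestedJoin {
    uint64_t bit;
    int counter;
    NestedJoin *embedding;
  };
  // Placing a table: set the bit of every enclosing join that just became
  // non-empty (counter goes 0 -> 1) and stop as soon as an enclosing join
  // already had tables.
  static void add_table(NestedJoin *nj, uint64_t *cur_map) {
    for (; nj; nj = nj->embedding) {
      if (nj->counter++ > 0)
        break;                 // already represented in the map
      *cur_map |= nj->bit;
    }
  }
  // Removing a table (backtracking in the join-order search): clear the bit
  // only when the join becomes empty, and stop the recursion at the first
  // enclosing join that still has other tables placed in it.
  static void remove_table(NestedJoin *nj, uint64_t *cur_map) {
    for (; nj; nj = nj->embedding) {
      if (--nj->counter > 0)
        break;                 // more tables left here: upper joins unaffected
      *cur_map &= ~nj->bit;
    }
  }
  int main() {
    NestedJoin outer = {1u << 0, 0, 0};
    NestedJoin inner = {1u << 1, 0, &outer};
    uint64_t map = 0;
    add_table(&inner, &map);   // first table of inner (and of outer)
    add_table(&inner, &map);   // second table of inner
    remove_table(&inner, &map);
    // inner still holds one table, so both bits must remain set.
    assert(map == ((1u << 0) | (1u << 1)));
    remove_table(&inner, &map);
    assert(map == 0);
    return 0;
  }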
mysql-test/r/join.result:
Bug #42116: test case
mysql-test/t/join.test:
Bug #42116: test case
sql/sql_select.cc:
Bug #42116: don't go up and set the bits if there are more tables
at the current JOIN level
BUG#41597 - After rename of user, there are additional grants
when grants are reapplied.
Fixed build failure on Windows. Added missing cast.
sql/sql_acl.cc:
Fixed build failure on Windows. Added missing cast.
Problem 1:
column_priv_hash uses the utf8_general_ci collation
for the key comparison. The key consists of user name,
db name and table name. Thus a user with privileges on table t1
is able to perform the same operation on T1
(a similar situation exists for user name & db name, see acl_cache).
So the collation used for column_priv_hash and acl_cache
should be case sensitive.
The fix:
replace system_charset_info with my_charset_utf8_bin for
column_priv_hash and acl_cache
Problem 2:
The same situation exists for proc_priv_hash and func_priv_hash;
the only difference is that the routine name is case insensitive.
So the fix is to use my_charset_utf8_bin for
proc_priv_hash & func_priv_hash and to convert the routine name into lower
case before writing the element into the hash and
before looking up the key.
Additional fix: the mysql.procs_priv Routine_name field collation
is changed to utf8_general_ci.
This is necessary for the REVOKE command
(to find a field by routine hash element values).
Note:
It's safe for lower-case-table-names mode too because
db name & table name are converted into lower case
(see GRANT_NAME::GRANT_NAME).
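A standalone sketch of the key rules (std::map stands in for the server's
HASH; the helper names are illustrative): the user/db/table parts of the key
keep their case, only the routine name is folded to lower case before insert
and lookup.
  #include <cassert>
  #include <cctype>
  #include <map>
  #include <string>
  // Fold only the routine-name part: routine names are case insensitive,
  // while user, db and table names must be compared case sensitively
  // (i.e. with a binary collation, not utf8_general_ci).
  static std::string lower(const std::string &s) {
    std::string r(s);
    for (size_t i = 0; i < r.size(); i++)
      r[i] = (char)std::tolower((unsigned char)r[i]);
    return r;
  }
  static std::string priv_key(const std::string &user, const std::string &db,
                              const std::string &routine) {
    return user + '\0' + db + '\0' + lower(routine);
  }
  int main() {
    std::map<std::string, int> proc_priv_hash;      // value: privilege bits
    proc_priv_hash[priv_key("bob", "test", "MyProc")] = 1;
    // Routine name lookup is case insensitive...
    assert(proc_priv_hash.count(priv_key("bob", "test", "myproc")) == 1);
    // ...but a differently-cased user name must NOT match.
    assert(proc_priv_hash.count(priv_key("BOB", "test", "myproc")) == 0);
    return 0;
  }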
mysql-test/include/have_case_insensitive_fs.inc:
test case
mysql-test/r/case_insensitive_fs.require:
test case
mysql-test/r/grant_lowercase_fs.result:
test result
mysql-test/r/lowercase_fs_off.result:
test result
mysql-test/r/ps_grant.result:
test result
mysql-test/r/system_mysql_db.result:
changed Routine_name field collation to case insensitive
mysql-test/t/grant_lowercase_fs.test:
test case
mysql-test/t/lowercase_fs_off.test:
test case
scripts/mysql_system_tables.sql:
changed Routine_name field collation to case insensitive
scripts/mysql_system_tables_fix.sql:
changed Routine_name field collation to case insensitive
sql/sql_acl.cc:
Problem 1:
column_priv_hash uses the utf8_general_ci collation
for the key comparison. The key consists of user name,
db name and table name. Thus a user with privileges on table t1
is able to perform the same operation on T1
(a similar situation exists for user name & db name, see acl_cache).
So the collation used for column_priv_hash and acl_cache
should be case sensitive.
The fix:
replace system_charset_info with my_charset_utf8_bin for
column_priv_hash and acl_cache
Problem 2:
The same situation exists for proc_priv_hash and func_priv_hash;
the only difference is that the routine name is case insensitive.
So the fix is to use my_charset_utf8_bin for
proc_priv_hash & func_priv_hash and to convert the routine name into lower
case before writing the element into the hash and
before looking up the key.
Additional fix: the mysql.procs_priv Routine_name field collation
is changed to utf8_general_ci.
This is necessary for the REVOKE command
(to find a field by routine hash element values).
Note:
It's safe for lower-case-table-names mode too because
db name & table name are converted into lower case
(see GRANT_NAME::GRANT_NAME).
If the first argument to the GeomFromWKB function is a geometry
field then the function just returns its value.
However, in doing so it was not preserving the first argument's
null_value flag, and this caused an unexpected NULL value to
be returned to the calling function.
Fixed by updating the null_value of the GeomFromWKB function
in such cases (and in all other cases that return NULL, e.g.
because there is not enough memory for the return buffer).
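A minimal standalone sketch of the null_value propagation (the classes are
stand-ins, not the server's Item hierarchy): when the function passes its
argument's value through, it must also copy the argument's null_value flag.
  #include <cassert>
  #include <string>
  // Simplified Item-like interface: val_str() returns a pointer or NULL, and
  // null_value records whether the last evaluation produced SQL NULL.
  struct ArgItem {
    bool null_value;
    std::string data;
    ArgItem() : null_value(false) {}
    const std::string *val_str() {
      null_value = data.empty();       // pretend empty means SQL NULL
      return null_value ? 0 : &data;
    }
  };
  struct GeomFromWkbLikeFunc {
    bool null_value;
    ArgItem *arg;
    explicit GeomFromWkbLikeFunc(ArgItem *a) : null_value(false), arg(a) {}
    // When the argument is already a geometry value we just pass it through,
    // but we must also copy the argument's null_value flag; otherwise the
    // caller would read stale state and miss the NULL.
    const std::string *val_str() {
      const std::string *res = arg->val_str();
      null_value = arg->null_value;    // the part the fix adds
      return res;
    }
  };
  int main() {
    ArgItem null_arg;                  // evaluates to SQL NULL
    GeomFromWkbLikeFunc f(&null_arg);
    assert(f.val_str() == 0 && f.null_value);
    return 0;
  }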
Problem: involving a spatial index in "non-spatial" queries
(that don't contain MBRXXX() functions) may lead to a failed assertion.
Fix: don't use spatial indexes in such cases.
mysql-test/r/gis-rtree.result:
Fix for bug#48258: Assertion failed when using a spatial index
- test result.
mysql-test/t/gis-rtree.test:
Fix for bug#48258: Assertion failed when using a spatial index
- test case.
sql/opt_range.cc:
Fix for bug#48258: Assertion failed when using a spatial index
- allow only spatial functions (MBRXXX) for itMBR keyparts.
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=> etc.
predicates is incorrect.
Fix: disable spatial indexes for such queries.
mysql-test/r/select.result:
Fix for bug#47019: Assertion failed: 0, file .\rt_mbr.c,
line 138 when forcing a spatial index
- test result.
mysql-test/t/select.test:
Fix for bug#47019: Assertion failed: 0, file .\rt_mbr.c,
line 138 when forcing a spatial index
- test case.
sql/sql_select.cc:
Fix for bug#47019: Assertion failed: 0, file .\rt_mbr.c,
line 138 when forcing a spatial index
- disable spatial indexes for queries which use
non-spatial conditions (e.g. NATURAL JOINs).
grants are reapplied.
Renaming a user and then re-applying grants resulted in additional
grants.
This is because we use the user name as part of the key for the GRANT_TABLE
structure. When the user is renamed, we only change the user name stored, so
the hash key still contains the old user name, and this results in the extra
privileges.
Fixed by rebuilding the hash key and updating the column_priv_hash structure
when the user is renamed.
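A standalone sketch of the re-keying idea (std::unordered_map stands in for
column_priv_hash; the names are illustrative): entries for the renamed user
are removed under the old key, the stored name is updated, and the entries
are re-inserted under a key built from the new name.
  #include <cassert>
  #include <string>
  #include <unordered_map>
  #include <vector>
  // Simplified grant entry: the user name is part of both the stored data
  // and the hash key (as in GRANT_TABLE / column_priv_hash).
  struct GrantEntry {
    std::string user, db, table;
    unsigned long privs;
  };
  static std::string grant_key(const GrantEntry &g) {
    return g.user + '\0' + g.db + '\0' + g.table;
  }
  // On RENAME USER it is not enough to change the stored user name: the entry
  // must be re-inserted under a key built from the new name, otherwise lookups
  // for the new name miss it and re-applied grants accumulate extra entries.
  static void rename_user(std::unordered_map<std::string, GrantEntry> &hash,
                          const std::string &old_user,
                          const std::string &new_user) {
    std::vector<GrantEntry> moved;
    for (auto it = hash.begin(); it != hash.end();) {
      if (it->second.user == old_user) {
        GrantEntry g = it->second;
        g.user = new_user;             // update the stored name
        moved.push_back(g);
        it = hash.erase(it);           // drop the entry keyed by the old name
      } else {
        ++it;
      }
    }
    for (size_t i = 0; i < moved.size(); i++)
      hash[grant_key(moved[i])] = moved[i];   // re-insert under the new key
  }
  int main() {
    std::unordered_map<std::string, GrantEntry> column_priv_hash;
    GrantEntry g = {"old_user", "db1", "t1", 0x1};
    column_priv_hash[grant_key(g)] = g;
    rename_user(column_priv_hash, "old_user", "new_user");
    GrantEntry probe = {"new_user", "db1", "t1", 0};
    assert(column_priv_hash.count(grant_key(probe)) == 1);
    return 0;
  }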
mysql-test/r/grant3.result:
Bug #41597 - After rename of user, there are additional grants when
grants are reapplied.
Testcase for BUG#41597
mysql-test/t/grant3.test:
Bug #41597 - After rename of user, there are additional grants when
grants are reapplied.
Testcase for BUG#41597
sql/sql_acl.cc:
Bug #41597 - After rename of user, there are additional grants when
grants are reapplied.
Fixed handle_grant_struct() to update the hash key when the user is renamed.
Added the set_user_details() method to the GRANT_NAME class.
If a thread is killed in the server, we throw "shutdown" only if one is actually in
progress; otherwise, we throw "query interrupted".
Control-C in the mysql command-line client is "incremental" now.
The first Control-C sends KILL QUERY (when connected to a 5.0+ server; otherwise, see the next step).
The next Control-C sends KILL CONNECTION.
The next Control-C aborts the client.
As the first two steps only pertain to an existing query,
Control-C will abort the client right away if no query is running.
The client will give more detailed/consistent feedback on Control-C now.
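A standalone sketch of the incremental handling (not the client/mysql.cc
code; kill_query()/kill_connection() are placeholders for sending the KILL
statements over a second connection): count the interrupts and escalate.
  #include <csignal>
  #include <cstdio>
  #include <cstdlib>
  // Placeholder actions; the real client would send KILL QUERY <id> /
  // KILL CONNECTION <id> over a separate connection.
  static void kill_query()      { std::printf("sending KILL QUERY\n"); }
  static void kill_connection() { std::printf("sending KILL CONNECTION\n"); }
  static volatile sig_atomic_t interrupts = 0;
  static volatile sig_atomic_t query_running = 1;  // set while a query runs
  // Demo handler (printing from a handler is fine for a demo, though not
  // strictly async-signal-safe).
  extern "C" void handle_sigint(int) {
    // If nothing is running, the first Control-C already aborts the client.
    if (!query_running)
      std::exit(130);
    switch (++interrupts) {
    case 1: kill_query();      break;   // 1st Control-C: stop the query
    case 2: kill_connection(); break;   // 2nd: drop the whole connection
    default: std::exit(130);            // 3rd: give up and abort the client
    }
  }
  int main() {
    std::signal(SIGINT, handle_sigint);
    std::printf("press Control-C up to three times while a query runs\n");
    // Stand-in for the client's main loop while a query is executing.
    while (std::getchar() != EOF)
      ;
    return 0;
  }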
client/mysql.cc:
Extends Control-C handling; enhances feedback to the user.
On 5.0+ servers, we try to be nice and send KILL QUERY first
if Control-C is pressed in the command-line client, but if
that doesn't work, we now give the user the opportunity to
send KILL CONNECTION with another Control-C (and to kill the
client with another Control-C if that somehow doesn't work
either).
mysql-test/t/flush_read_lock_kill.test:
we're getting correct "thread killed" rather than
"in shutdown" error now
mysql-test/t/kill.test:
we're getting correct "thread killed" rather than
"in shutdown" error now
mysql-test/t/rpl000001.test:
we're getting correct "thread killed" rather than
"in shutdown" error now
mysql-test/t/rpl_error_ignored_table.test:
we're getting correct "thread killed" rather than
"in shutdown" error now
sql/records.cc:
make error messages on KILL uniform for rr_*()
by folding that handling into rr_handle_error()
sql/sql_class.h:
Only throw "shutdown" when we have one flagged as being in progress;
otherwise, throw "query interrupted" as it's likely to be "KILL CONNECTION"
or related.
can lead to bad memory access
Problem: Field_bit is the only field type which returns INT_RESULT
and doesn't have an unsigned flag. As it's not a descendant of
Field_num, using ((Field_num *) field_bit)->unsigned_flag may lead
to unpredictable results.
Fix: check the field type before casting.
mysql-test/r/type_bit.result:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- test result.
mysql-test/t/type_bit.test:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- test case.
sql/opt_range.cc:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- don't cast Field_bit to (Field_num *), as it's not a Field_num
descendant and is always unsigned by nature.
On Mac OS X or Windows, sending a SIGHUP to the server or an
asynchronous flush (triggered by flush_time) would cause the
server to crash.
The problem was that a hook used to detach client API handles
wasn't prepared to handle cases where the thread does not have
an associated session.
The solution is to verify whether the thread has an associated
session before trying to detach a handle.
mysql-test/r/federated_debug.result:
Add test case result for Bug#47525
mysql-test/t/federated_debug-master.opt:
Debug point.
mysql-test/t/federated_debug.test:
Add test case for Bug#47525
sql/slave.cc:
Check whether the thread has an associated session.
sql/sql_parse.cc:
Add debug code to simulate a reload without thread session.
'flush tables' crashes
The server crashes when 'show procedure status' and 'flush tables' are
run concurrently.
This is caused by the mysql.proc table being added twice to the list
of tables to lock, although the requirements of the current locking API
assume otherwise.
No test case is submitted because of the nature of the crash, which is
currently difficult to reproduce in a deterministic way.
This is a backport from 5.1
myisam/mi_dbug.c:
* check_table_is_closed is only used in EXTRA_DEBUG mode but since it is
iterating over myisam shared data it still needs to be protected by an
appropriate mutex.
sql/sql_yacc.yy:
* Since the I_S mechanism is already handling the open and close of
mysql.proc there is no need for the method sp_add_to_query_tables.
replication
The MySQL server uses the wrong lock type (always TL_READ instead of
TL_READ_NO_INSERT when appropriate) for tables used in
subqueries of an UPDATE statement. In some cases this leads to
broken replication, as statements are written to the binlog
in the wrong order.
sql/sql_yacc.yy:
* Set lock_option to either TL_READ_NO_INSERT or
TL_READ for any sub-SELECT following UPDATE.
* Changed line adjusted for parser indentation
rules; code begins at column 13.
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and substituted it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because it's not
set in all cases and doesn't contradict the check of depended_from).
Fixed by restoring the old condition as a complement to the
new one.
When parsing the service installation parameter in
default_service_handling(), make sure the value of the
optional parameter doesn't overwrite its name.
The problem is that the argument buffer can be used as the result buffer,
and this leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
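A minimal standalone sketch of the aliasing problem and the fix
(hypothetical function, not the item_strfunc.cc code): the caller-provided
'old buffer' is reused for the result only when the first argument is not a
constant, so a constant argument's own storage is never overwritten.
  #include <cassert>
  #include <string>
  // A string function gets a scratch buffer ("old buffer") that may alias
  // the first argument's own storage. If the argument is a constant item,
  // overwriting that storage corrupts the argument for later executions,
  // so the result must go elsewhere.
  static const std::string *pad_value(const std::string *arg_value,
                                      bool arg_is_constant,
                                      std::string *old_buffer,
                                      std::string *tmp_buffer) {
    // Choose the result buffer: reuse old_buffer only for non-constant args.
    std::string *result = arg_is_constant ? tmp_buffer : old_buffer;
    *result = *arg_value + "-padded";
    return result;
  }
  int main() {
    // Constant argument whose storage doubles as the "old buffer".
    std::string const_arg = "abc";
    std::string tmp;
    const std::string *res = pad_value(&const_arg, /*arg_is_constant=*/true,
                                       /*old_buffer=*/&const_arg, &tmp);
    assert(*res == "abc-padded");
    assert(const_arg == "abc");   // the constant argument was left intact
    return 0;
  }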
mysql-test/r/func_str.result:
test result
mysql-test/t/func_str.test:
test case
sql/item_strfunc.cc:
The problem is that the argument buffer can be used as the result buffer,
and this leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
can crash under load
Backport from 5.1.
Also includes key cache fixes from:
Bug 44068 (RESTORE can disable the MyISAM Key Cache)
Bug 40944 (Backup: crash after myisampack)
include/keycache.h:
Bug#17332 - changing key_buffer_size on a running server
can crash under load
Added KEY_CACHE components in_resize and waiting_for_resize_cnt.
myisam/mi_preload.c:
Bug#17332 - changing key_buffer_size on a running server
can crash under load
Added code to allow LOAD INDEX to load indexes of different block size.
mysys/mf_keycache.c:
Bug#17332 - changing key_buffer_size on a running server
can crash under load
.
Changed resize_key_cache() to not disable the key cache
after the flush phase. Changed queue handling to use
standard functions. Wake all threads waiting on resize_queue.
We can now have read/write threads waiting there (see below).
.
Combined add_to_queue() and the wait loops that were always
following it into the new function wait_on_queue().
Combined release_queue() and the condition that was always
preceding it into the new function release_whole_queue().
.
Added code to flag and respect the exceptional situation
BLOCK_IN_EVICTION.
.
Rewrote the resize branch of find_key_block().
.
Added code to the eviction handling in find_key_block()
to catch more exceptional cases.
.
Changed key_cache_read(), key_cache_insert() and key_cache_write()
so that they lock keycache->cache_lock whenever the key cache is
initialized. Checking for a disabled cache and incrementing and
decrementing the "resize counter" is always done within the lock.
Locking and unlocking as well as counting the "resize counter" is
now done once outside the loop. All three functions can now handle
a NULL return from find_key_block. This happens in the flush phase
of a resize and demands direct file I/O. Care is taken for
secondary requests (PAGE_WAIT_TO_BE_READ) to wait in any case.
Moved block status changes behind the copying of buffer data.
key_cache_insert() now reads the block if the caller supplied
less data than a full cache block.
key_cache_write() now takes care of parallel running flushes
(BLOCK_FOR_UPDATE, BLOCK_IN_FLUSHWRITE).
.
Changed free_block() to un-initialize block variables in the
correct order and respect an exceptional BLOCK_IN_EVICTION state.
.
Changed flushing to take care of parallel running writes.
Changed flushing to avoid freeing blocks that are in eviction.
Changed flushing to consider that parallel writes can move blocks
from the file_blocks hash to the changed_blocks hash.
Changed flushing to take care of other parallel flushes.
Changed flushing to ensure that it ends with everything flushed.
Optimized normal flush at end of statement (FLUSH_KEEP),
but let other flush types be stringent.
.
Added some comments and debugging statements.
mysys/my_static.c:
Bug#17332 - changing key_buffer_size on a running server
can crash under load
Removed an unused global variable.
sql/ha_myisam.cc:
Bug#17332 - changing key_buffer_size on a running server
can crash under load
Moved an automatic (stack) variable to the scope where it is used.
sql/sql_table.cc:
Bug#17332 - changing key_buffer_size on a running server
can crash under load
Changed TL_READ to TL_READ_NO_INSERT in mysql_preload_keys.
Memory allocated in TMP_TABLE_PARAM::copy_field was not cleaned up.
The fix is to clean up the TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
mysql-test/r/explain.result:
test result
mysql-test/t/explain.test:
test case
sql/sql_select.cc:
Memory allocated in TMP_TABLE_PARAM::copy_field was not cleaned up.
The fix is to clean up the TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
name as existing view
When trying to create a table with the same name as an existing view over a
join, the mysql server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
while verifying against the existing tables we assume that a base table object
has always been created.
In this case, since it is a view over multiple tables, we don't have the
mysql derived table object.
Fixed the logic which checks if there is an existing table so that it does
not assume that a table object has been created when the base table is a view
over multiple tables.
mysql-test/r/create.result:
BUG#46384 - mysqld segfault when trying to create table with same
name as existing view
Testcase for the bug
mysql-test/t/create.test:
BUG#46384 - mysqld segfault when trying to create table with same
name as existing view
Testcase for the bug
sql/sql_insert.cc:
BUG#46384 - mysqld segfault when trying to create table with same
name as existing view
Fixed the create_table_from_items() method to properly check if the base table
is a view over multiple tables.
function,file sql_base.cc
When uncacheable queries are written to a temp table the optimizer must
preserve the original JOIN structure, because it is re-using the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable sub-queries.
But top-level queries can also benefit from this mechanism, especially if
they're using index access and need a reset.
Fixed by not limiting the saving of JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
CREATE TABLE...LIKE...
The mysql server option 'sync_frm' is ignored when a table is created with
the syntax CREATE TABLE .. LIKE ..
Fixed by adding the MY_SYNC flag and calling my_sync() from my_copy() when
the flag is set.
In mysql_create_like_table(), when 'sync_frm' is set, the MY_SYNC flag is
passed to my_copy().
Note: a test case is not attached; this can be tested manually using a debugger.
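A simplified POSIX sketch of the flag-controlled sync (the MY_SYNC/my_copy
names are from this change, but the implementation below is a stand-in, not
mysys code): when the flag is set, the destination file is fsync()'d before
it is closed.
  #include <cstdio>
  #include <fcntl.h>
  #include <unistd.h>
  // Simplified flag, mirroring the idea of passing MY_SYNC to my_copy().
  enum { MY_SYNC_FLAG = 1 };
  // Copy 'from' to 'to'; if MY_SYNC_FLAG is set, fsync() the destination
  // before closing so the new .frm-like file is durable on disk.
  static int copy_file(const char *from, const char *to, int flags) {
    int in = open(from, O_RDONLY);
    if (in < 0) return -1;
    int out = open(to, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }
    char buf[4096];
    ssize_t n;
    int err = 0;
    while ((n = read(in, buf, sizeof(buf))) > 0)
      if (write(out, buf, (size_t)n) != n) { err = -1; break; }
    if (n < 0) err = -1;
    if (!err && (flags & MY_SYNC_FLAG) && fsync(out) != 0)
      err = -1;                     // the added step: sync when the flag is set
    close(in);
    if (close(out) != 0) err = -1;
    return err;
  }
  int main(int argc, char **argv) {
    if (argc != 3) { std::fprintf(stderr, "usage: copy FROM TO\n"); return 2; }
    return copy_file(argv[1], argv[2], MY_SYNC_FLAG) ? 1 : 0;
  }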
client/Makefile.am:
BUG#46591 - .frm file isn't sync'd with sync_frm enabled for
CREATE TABLE...LIKE...
add my_sync to sources as it is used in my_copy() method
include/my_sys.h:
BUG#46591 - .frm file isn't sync'd with sync_frm enabled for
CREATE TABLE...LIKE...
MY_SYNC flag is added to call my_sync() method
mysys/my_copy.c:
BUG#46591 - .frm file isn't sync'd with sync_frm enabled for
CREATE TABLE...LIKE...
my_sync() is called when MY_SYNC is set in my_copy()
sql/sql_table.cc:
BUG#46591 - .frm file isn't sync'd with sync_frm enabled for
CREATE TABLE...LIKE...
Fixed mysql_create_like_table() to call my_sync() when opt_sync_frm variable
is set
field references
This error requires a combination of factors :
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. It will not
call make_join_statistics() either, and make_join_statistics() fills
in various structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However if this SELECT list happens to contain a correlated
subquery this subquery is evaluated in a normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT where the outer field reference refers to is not
completely initialized due to the "impossible WHERE" in this level
we'll get a NULL pointer reference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
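A minimal sketch of the combined test (stand-in types, not the server code):
a field is treated as local only when neither OUTER_REF_TABLE_BIT nor
depended_from marks it as an outer reference.
  #include <cassert>
  typedef unsigned long long table_map;
  static const table_map OUTER_REF_TABLE_BIT = 1ULL << 63;
  // Stand-ins for SELECT_LEX and Item_field.
  struct SelectLex { int id; };
  struct FieldRef {
    table_map used_tables;      // 0 for references to const tables
    SelectLex *depended_from;   // set when resolved as an outer reference
  };
  // A field is "local" to the current SELECT only if it is not marked as an
  // outer reference in either way. Checking just one of the two is not
  // enough: used_tables() can be 0 for outer references to const tables, and
  // depended_from is not set in all cases.
  static bool is_local_field(const FieldRef *f) {
    return !(f->used_tables & OUTER_REF_TABLE_BIT) && !f->depended_from;
  }
  int main() {
    SelectLex outer_select = {1};
    FieldRef local       = {1ULL << 2, 0};
    FieldRef outer_const = {0, &outer_select};       // const-table outer ref
    FieldRef outer_bit   = {OUTER_REF_TABLE_BIT, 0};
    assert(is_local_field(&local));
    assert(!is_local_field(&outer_const));
    assert(!is_local_field(&outer_bit));
    return 0;
  }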
with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before initialized. A new macro UNINIT_VAR() is introduced for
use in the variable declaration, and LINT_INIT() usage will be
gradually deprecated. (A workaround is used for g++, pending a
patch for a g++ bug.)
GCC warnings for unused results (attribute warn_unused_result)
for a number of system calls (present at least in later
Ubuntus, where the usual void cast trick doesn't work) are
also fixed.
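For illustration only, one possible shape of such a macro (the actual
definition in the tree may differ, e.g. because of the g++ workaround
mentioned above):
  #include <cstdio>
  // Illustrative definition only: initialize the variable at its declaration
  // to silence "may be used uninitialized" warnings while documenting the
  // intent at the declaration site.
  #define UNINIT_VAR(x) x = 0
  static int lookup(int key, int *out) {
    if (key == 42) { *out = 7; return 1; }
    return 0;                       // *out intentionally left untouched
  }
  int main() {
    int UNINIT_VAR(value);          // expands to: int value = 0;
    if (lookup(42, &value))
      std::printf("found %d\n", value);
    return 0;
  }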
client/mysqlmanager-pwgen.c:
A fix for warn_unused_result, adding a fallback to
srand()/rand() if /dev/random cannot be used. Also adds
calls to rand() in the second branch so that it actually
creates a random password.
When a connection is dropped any remaining temporary table is also automatically
dropped and the SQL statement of this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary log as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if
the field count exceeds 10 (the count of values for procedure ANALYSE())
then we get a crash in the fill_record() function.
mysql-test/r/analyse.result:
test result
mysql-test/r/subselect.result:
result fix
mysql-test/t/analyse.test:
test case
mysql-test/t/subselect.test:
test fix
sql/sql_yacc.yy:
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if
the field count exceeds 10 (the count of values for procedure ANALYSE())
then we get a crash in the fill_record() function.
The code was using a special global buffer for the value of IS NULL ranges.
This was not always long enough to be copied by a regular memcpy. As a
result, read buffer overflows could occur.
Fixed by setting the null byte to 1 and setting the rest of the field disk
image to NULL with bzero() (instead of relying on the buffer and memcpy()).
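A small standalone sketch of the principle (hypothetical function, not the
server code): for an IS NULL range the null byte is set and the remainder of
the key image is zero-filled explicitly, instead of copying from a shared
buffer that may be too short.
  #include <cassert>
  #include <cstring>
  // Build the key image for an "IS NULL" range over a nullable field.
  // Instead of copying field_length bytes from a shared, possibly-too-short
  // global buffer, set the null byte to 1 and zero-fill the rest explicitly.
  static void store_null_key_image(unsigned char *key, size_t field_length) {
    key[0] = 1;                                // null indicator byte
    std::memset(key + 1, 0, field_length);     // rest of the field disk image
  }
  int main() {
    unsigned char key[1 + 256];
    std::memset(key, 0xAA, sizeof(key));       // garbage to be overwritten
    store_null_key_image(key, 256);
    assert(key[0] == 1);
    for (size_t i = 1; i < sizeof(key); i++)
      assert(key[i] == 0);
    return 0;
  }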
The check for stack overflow was independent of the size of the
structure stored in the heap.
Fixed by adding sizeof(PARAM) to the requested free heap size.