having clause...
The fix for bug 46184 was incomplete: it did not cover views using
temporary tables or multiple tables in a FROM clause.
Fixed by reverting the fix for 46184 and adding a more general check
that is performed at the right execution stage and covers all of the
unsupported cases.
Now PROCEDURE ANALYSE() on a non-top-level SELECT is also forbidden.
Updated analyse.test and subselect.test accordingly.
Queries with nested outer joins may lead to crashes or
bad results because an internal data structure is not handled
correctly.
The optimizer uses bitmaps of nested JOINs to determine whether a
certain table can be placed at a certain position in the
JOIN order.
It maintains a bitmap describing in which JOINs the last
placed table is nested.
When it places a table it makes sure the bit of every JOIN that
contains the table in question is set (because JOINs can be nested).
It does that by recursively setting the bit for the next enclosing
JOIN when this is the first table in the JOIN and recursively
resetting the bit if it's the last table in the JOIN.
When it removes a table from the join order it should do the
opposite: recursively unset the bit if it's the only remaining
table in this JOIN and recursively set the bit if it's removing
the last table of a JOIN.
There was an error in how the bits were set for the upper levels:
when removing a table it was setting the bit for all the enclosing
nested JOINs even if there were more tables left in the current JOIN
(which practically means that the upper nested JOINs were not affected).
Fixed by stopping the recursion at the relevant level.
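For illustration, a minimal standalone model of the bookkeeping described
above (the structure and function names are illustrative, not the server's
actual nested-join code); the early break in remove_table() is the point of
the fix:

  #include <cassert>
  #include <cstdint>

  struct NestedJoin {
    uint64_t    bit;        // this join's bit in the bitmap
    int         counter;    // tables of this join currently in the join order
    NestedJoin *embedding;  // enclosing nested join, or nullptr
  };

  // Placing a table: set the bit of every enclosing join that becomes non-empty.
  static void add_table(NestedJoin *nj, uint64_t &nj_map) {
    for (; nj; nj = nj->embedding) {
      if (++nj->counter > 1) break;   // outer joins were already non-empty
      nj_map |= nj->bit;
    }
  }

  // Removing a table: undo the above.  The fix is the early break -- if the
  // current join still has tables, the enclosing joins must not be touched.
  static void remove_table(NestedJoin *nj, uint64_t &nj_map) {
    for (; nj; nj = nj->embedding) {
      if (--nj->counter > 0) break;   // stop the recursion at this level
      nj_map &= ~nj->bit;
    }
  }

  int main() {
    NestedJoin outer = {1u << 0, 0, nullptr};
    NestedJoin inner = {1u << 1, 0, &outer};
    uint64_t map = 0;

    add_table(&inner, map);            // t1 inside the inner join
    add_table(&inner, map);            // t2 inside the inner join
    remove_table(&inner, map);         // backtrack over t2
    assert(map == ((1u << 0) | (1u << 1)));   // outer bits remain intact
    return 0;
  }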
Problem 1:
column_priv_hash uses the utf8_general_ci collation
for the key comparison. The key consists of user name,
db name and table name. Thus a user with privileges on table t1
is able to perform the same operation on T1
(a similar situation exists for user name & db name, see acl_cache).
So the collation used for column_priv_hash and acl_cache
should be case sensitive.
The fix:
replace system_charset_info with my_charset_utf8_bin for
column_priv_hash and acl_cache.
Problem 2:
The same situation exists for proc_priv_hash and func_priv_hash;
the only difference is that the routine name is case insensitive.
So the fix is to use my_charset_utf8_bin for
proc_priv_hash & func_priv_hash and convert the routine name into
lower case before writing the element into the hash and
before looking up the key.
Additional fix: mysql.procs_priv Routine_name field collation
is changed to utf8_general_ci.
It's necessary for REVOKE command
(to find a field by routine hash element values).
Note:
It's safe for lower-case-table-names mode too because
db name & table name are converted into lower case
(see GRANT_NAME::GRANT_NAME).
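For illustration, a minimal standalone sketch of the effect of the collation
choice on the privilege hash keys (std::unordered_map and the "user/db/table"
key layout are stand-ins for the server's HASH and key format):

  #include <algorithm>
  #include <cctype>
  #include <iostream>
  #include <string>
  #include <unordered_map>

  static std::string lower(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return (char)std::tolower(c); });
    return s;
  }

  // Case-insensitive key comparison (models utf8_general_ci).
  struct CiHash  { size_t operator()(const std::string &s) const {
                     return std::hash<std::string>()(lower(s)); } };
  struct CiEqual { bool operator()(const std::string &a,
                                   const std::string &b) const {
                     return lower(a) == lower(b); } };

  int main() {
    // Buggy behaviour: privileges keyed case-insensitively, so a grant
    // on `t1` also matches `T1`.
    std::unordered_map<std::string, std::string, CiHash, CiEqual> ci_hash;
    ci_hash["user/db/t1"] = "SELECT";
    std::cout << ci_hash.count("user/db/T1") << "\n";   // 1 -- wrong match

    // Fixed behaviour: binary comparison (models my_charset_utf8_bin).
    std::unordered_map<std::string, std::string> bin_hash;
    bin_hash["user/db/t1"] = "SELECT";
    std::cout << bin_hash.count("user/db/T1") << "\n";  // 0 -- no false match

    // Routine names are case insensitive, so they are lowercased before
    // being written to and looked up in proc_priv_hash/func_priv_hash.
    bin_hash["user/db/" + lower("MyProc")] = "EXECUTE";
    std::cout << bin_hash.count("user/db/" + lower("myproc")) << "\n";  // 1
    return 0;
  }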
If the first argument to the GeomFromWKB function is a geometry
field then the function just returns its value.
However in doing so it does not preserve the first argument's
null_value flag, and this causes an unexpected NULL value to
be returned to the calling function.
Fixed by updating the null_value of the GeomFromWKB function
in such cases (and in all other cases that return a NULL, e.g.
because of not enough memory for the return buffer).
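A hedged sketch of the null_value propagation, using simplified stand-in
classes (not the server's Item_func hierarchy):

  #include <iostream>
  #include <optional>
  #include <string>

  struct Arg {
    std::optional<std::string> wkb;   // empty == SQL NULL
    bool null_value = false;
    const std::string *val_str() {
      null_value = !wkb.has_value();
      return null_value ? nullptr : &*wkb;
    }
  };

  struct GeomFromWKB {
    Arg *arg;
    bool null_value = false;
    const std::string *val_str() {
      const std::string *res = arg->val_str();
      // The fix: whenever the argument's value is passed through (or any
      // other path returns NULL, e.g. out of memory), mirror the NULL flag.
      null_value = arg->null_value;
      return res;
    }
  };

  int main() {
    Arg geom_field{std::nullopt};          // a NULL geometry column value
    GeomFromWKB func{&geom_field};
    const std::string *v = func.val_str();
    std::cout << (v == nullptr) << " " << func.null_value << "\n";  // 1 1
    return 0;
  }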
Problem: involving a spatial index for "non-spatial" queries
(that don't contain MBRXXX() functions) may lead to a failed assert.
Fix: don't use spatial indexes in such cases.
line 138 when forcing a spatial index
Problem: "Spatial indexes can be involved in the search
for queries that use a function such as MBRContains()
or MBRWithin() in the WHERE clause".
Using spatial indexes for JOINs with =, <=> etc.
predicates is incorrect.
Fix: disable spatial indexes for such queries.
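For illustration, a sketch of the rule enforced by the two fixes above, with
assumed enum and function names (not the optimizer's real code): a SPATIAL
index may only be chosen for MBR predicates, never for ordinary comparison
predicates such as = or <=>:

  #include <cassert>

  enum IndexType { INDEX_BTREE, INDEX_SPATIAL };
  enum PredType  { PRED_EQ, PRED_NULL_SAFE_EQ, PRED_MBR_CONTAINS, PRED_MBR_WITHIN };

  static bool index_usable_for(IndexType idx, PredType pred) {
    bool mbr_pred = (pred == PRED_MBR_CONTAINS || pred == PRED_MBR_WITHIN);
    if (idx == INDEX_SPATIAL)
      return mbr_pred;        // the fix: reject =, <=>, ... for spatial keys
    return !mbr_pred;         // regular indexes serve the ordinary predicates
  }

  int main() {
    assert(index_usable_for(INDEX_SPATIAL, PRED_MBR_CONTAINS));
    assert(!index_usable_for(INDEX_SPATIAL, PRED_EQ));           // bug scenario
    assert(!index_usable_for(INDEX_SPATIAL, PRED_NULL_SAFE_EQ)); // JOIN ... <=>
    return 0;
  }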
grants are reapplied.
Renaming a user and then re-applying grants results in additional
grants.
This is because we use the username as part of the key for the GRANT_TABLE
structure.
When the user is renamed, we only change the username stored; the hash key
still contains the old user name, and this results in the extra privileges.
Fixed by rebuilding the hash key and updating the column_priv_hash structure
when the user is renamed.
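A minimal model of why the element must be re-inserted under a rebuilt key
(std::unordered_map and the key layout are illustrative stand-ins for the
server's HASH and GRANT_TABLE key):

  #include <cassert>
  #include <string>
  #include <unordered_map>

  struct GrantEntry { std::string user, db, table, privs; };
  using GrantHash = std::unordered_map<std::string, GrantEntry>;

  static std::string make_key(const GrantEntry &g) {
    return g.user + "\x01" + g.db + "\x01" + g.table;
  }

  // The fix: remove the element under the old key and re-add it under the key
  // built from the new user name, so later GRANTs find the existing entry.
  static void rename_user(GrantHash &h, const std::string &old_user,
                          const std::string &new_user) {
    GrantHash renamed;
    for (auto it = h.begin(); it != h.end(); ) {
      if (it->second.user == old_user) {
        GrantEntry e = it->second;
        e.user = new_user;
        renamed.emplace(make_key(e), e);
        it = h.erase(it);
      } else {
        ++it;
      }
    }
    h.insert(renamed.begin(), renamed.end());
  }

  int main() {
    GrantHash column_priv_hash;
    GrantEntry g{"alice", "db1", "t1", "SELECT"};
    column_priv_hash.emplace(make_key(g), g);

    rename_user(column_priv_hash, "alice", "bob");

    GrantEntry lookup{"bob", "db1", "t1", ""};
    assert(column_priv_hash.count(make_key(lookup)) == 1);  // found under new key
    assert(column_priv_hash.size() == 1);                   // no stale duplicate
    return 0;
  }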
If a thread is killed in the server, we throw "shutdown" only if one is actually in
progress; otherwise, we throw "query interrupted".
Control-C in the mysql command-line client is "incremental" now.
First Control-C sends KILL QUERY (when connected to 5.0+ server, otherwise, see next)
Next Control-C sends KILL CONNECTION
Next Control-C aborts client.
As the first two steps only pertain to an existing query,
Control-C will abort the client right away if no query is running.
The client will give more detailed/consistent feedback on Control-C now.
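A rough sketch of the escalating Control-C handling described above; the
kill_query()/kill_connection() helpers stand in for sending KILL QUERY and
KILL CONNECTION to the server and are not real client library calls:

  #include <atomic>
  #include <cstdio>
  #include <cstdlib>

  static std::atomic<int>  interrupt_count{0};
  static std::atomic<bool> query_running{false};

  static void kill_query()      { std::puts("sending KILL QUERY"); }
  static void kill_connection() { std::puts("sending KILL CONNECTION"); }

  static void handle_sigint() {
    // The first two steps only make sense while a query is executing; with
    // no query running, Control-C aborts the client right away.
    if (!query_running) std::exit(130);

    switch (++interrupt_count) {
      case 1:  kill_query();      break;   // 5.0+ server: try to stop the query
      case 2:  kill_connection(); break;   // still stuck: drop the connection
      default: std::exit(130);             // third Control-C: abort the client
    }
  }

  int main() {
    query_running = true;
    handle_sigint();   // -> KILL QUERY
    handle_sigint();   // -> KILL CONNECTION
    return 0;
  }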
can lead to bad memory access
Problem: Field_bit is the only field type which returns INT_RESULT
and doesn't have an unsigned flag. As it's not a descendant of
Field_num, using ((Field_num *) field_bit)->unsigned_flag may lead
to unpredictable results.
Fix: check the field type before casting.
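A simplified sketch of that check, with illustrative class names (not the
server's Field hierarchy):

  #include <cassert>

  enum FieldKind { FIELD_NUM, FIELD_BIT };

  struct Field {
    FieldKind kind;
    explicit Field(FieldKind k) : kind(k) {}
    virtual ~Field() = default;
  };
  struct Field_num : Field {
    bool unsigned_flag;
    Field_num(bool u) : Field(FIELD_NUM), unsigned_flag(u) {}
  };
  struct Field_bit : Field {
    Field_bit() : Field(FIELD_BIT) {}   // no unsigned_flag member at all
  };

  // BIT values are always effectively unsigned; only genuine numeric fields
  // carry an unsigned_flag that is safe to read.
  static bool field_is_unsigned(const Field *f) {
    if (f->kind == FIELD_BIT) return true;                    // the fix
    return static_cast<const Field_num *>(f)->unsigned_flag;  // safe now
  }

  int main() {
    Field_num n(false);
    Field_bit b;
    assert(!field_is_unsigned(&n));
    assert(field_is_unsigned(&b));  // previously: bogus read through a bad cast
    return 0;
  }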
On Mac OS X or Windows, sending a SIGHUP to the server or an
asynchronous flush (triggered by flush_time) would cause the
server to crash.
The problem was that a hook used to detach client API handles
wasn't prepared to handle cases where the thread does not have
an associated session.
The solution is to verify whether the thread has an associated
session before trying to detach a handle.
'flush tables' crashes
The server crashes when 'show procedure status' and 'flush tables' are
run concurrently.
This is caused by the mysql.proc table being added twice to the list
of tables to lock, although the requirements of the current locking
API assume otherwise.
No test case is submitted because of the nature of the crash which is
currently difficult to reproduce in a deterministic way.
This is a backport from 5.1
replication
The MySQL server uses the wrong lock type (always TL_READ instead of
TL_READ_NO_INSERT when appropriate) for tables used in
subqueries of an UPDATE statement. This leads in some cases to
broken replication, as statements are written to the binlog in the
wrong order.
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and replaced it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because it's not
set in all cases and doesn't contradict the check of depended_from).
Fixed by restoring the old condition as a complement to the
new one.
When parsing the service installation parameter in
default_service_handling(), make sure the value of the
optional parameter doesn't overwrite its name.
The problem is that the argument buffer can be used as the result
buffer, and this leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
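A hedged sketch of that buffer-reuse rule (the pad_right() function and the
simplified Item/String types are hypothetical, for illustration only): the
argument's own buffer may serve as the result buffer only when the first
argument is not a constant item:

  #include <cassert>
  #include <string>

  struct Item {
    std::string buffer;   // the value/buffer owned by this item
    bool is_const;
  };

  // Appends '*' up to `len` characters, reusing arg->buffer when that is safe.
  static std::string *pad_right(Item *arg, size_t len, std::string *tmp) {
    std::string *res = arg->is_const ? tmp : &arg->buffer;   // the fix
    if (res != &arg->buffer) *res = arg->buffer;             // copy for constants
    res->resize(len, '*');
    return res;
  }

  int main() {
    Item const_arg{"ab", true};
    std::string tmp;
    std::string *r = pad_right(&const_arg, 4, &tmp);
    assert(*r == "ab**");
    assert(const_arg.buffer == "ab");   // constant argument left unchanged
    return 0;
  }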
can crash under load
Backport from 5.1.
Also includes key cache fixes from:
Bug 44068 (RESTORE can disable the MyISAM Key Cache)
Bug 40944 (Backup: crash after myisampack)
name as existing view
When trying to create a table with the same name as an existing view
with a join, the mysql server crashes.
The problem is that when CREATE TABLE is issued with the same name as
a view, while verifying against the existing tables we assume that a
base table object is always created.
In this case, since it is a view over multiple tables, we don't have
the mysql derived table object.
Fixed the logic which checks if there is an existing table so that it
does not assume a table object is created when the base table is a
view over multiple tables.
function, file sql_base.cc
When uncacheable queries are written to a temp table the optimizer must
preserve the original JOIN structure, because it is re-using the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable sub-queries.
But top level queries can also benefit from this mechanism, especially
if they're using index access and need a reset.
Fixed by not limiting the saving of JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
CREATE TABLE...LIKE...
The mysql server option 'sync_frm' is ignored when a table is created
with the syntax CREATE TABLE .. LIKE..
Fixed by adding the MY_SYNC flag and calling my_sync() from my_copy()
when the flag is set.
In mysql_create_table(), when 'sync_frm' is set, the MY_SYNC flag is
passed to my_copy().
Note: a test case is not attached; the fix can be verified manually
using a debugger.
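A sketch of the behaviour the fix adds, written with plain POSIX calls (this
is not the actual my_copy()/my_sync() implementation): when the caller passes
a "sync" flag, the destination file is fsync()ed before the copy is reported
as complete:

  #include <fcntl.h>
  #include <unistd.h>
  #include <cstdio>

  static const unsigned FLAG_SYNC = 1;   // models the MY_SYNC flag

  static int copy_file(const char *src, const char *dst, unsigned flags) {
    int in = open(src, O_RDONLY);
    if (in < 0) return -1;
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }

    char buf[4096];
    ssize_t n;
    int rc = 0;
    while ((n = read(in, buf, sizeof(buf))) > 0)
      if (write(out, buf, (size_t)n) != n) { rc = -1; break; }
    if (n < 0) rc = -1;

    // The fix: flush the new .frm to disk when 'sync_frm' asked for it.
    if (rc == 0 && (flags & FLAG_SYNC) && fsync(out) != 0) rc = -1;

    close(in);
    close(out);
    return rc;
  }

  int main(int argc, char **argv) {
    if (argc != 3) { std::fprintf(stderr, "usage: %s src dst\n", argv[0]); return 1; }
    return copy_file(argv[1], argv[2], FLAG_SYNC) == 0 ? 0 : 1;
  }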
field references
This error requires a combination of factors:
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. It will not
call make_join_statistics() either, and make_join_statistics() is what
fills in various structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However if this SELECT list happens to contain a correlated
subquery, this subquery is evaluated in the normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized, due to the "impossible WHERE" at this level,
we get a NULL pointer dereference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
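A simplified sketch of the stronger "is this field local?" test (illustrative
structs, not the server's Item_field or st_select_lex): compare the SELECT
the field was resolved against with the SELECT being optimized, instead of
relying only on the OUTER_REF bit, which stays unset for outer references to
constant tables:

  #include <cassert>
  #include <cstdint>

  struct SelectLex { int level; };

  struct FieldRef {
    const SelectLex *resolved_in;   // SELECT this column belongs to
    uint64_t used_tables;           // 0 for fields of constant tables
  };

  static const uint64_t OUTER_REF_TABLE_BIT = 1ull << 63;

  static bool is_local_field(const FieldRef &f, const SelectLex *current) {
    // Old check: !(f.used_tables & OUTER_REF_TABLE_BIT) -- wrongly says
    // "local" for outer fields of constant tables (used_tables == 0).
    return f.resolved_in == current &&
           !(f.used_tables & OUTER_REF_TABLE_BIT);
  }

  int main() {
    SelectLex outer{0}, subquery{1};
    FieldRef outer_const_field{&outer, 0};    // outer ref to a constant table
    assert(!is_local_field(outer_const_field, &subquery));  // no longer "local"
    FieldRef local_field{&subquery, 1};
    assert(is_local_field(local_field, &subquery));
    return 0;
  }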
with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before being initialized. A new macro UNINIT_VAR() is introduced for
use in the variable declaration, and LINT_INIT() usage will be
gradually deprecated. (A workaround is used for g++, pending a
patch for a g++ bug.)
GCC warnings for unused results (attribute warn_unused_result)
for a number of system calls (present at least in later
Ubuntus, where the usual void cast trick doesn't work) are
also fixed.
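A plausible shape for such a macro, shown only as a sketch (not necessarily
the exact definition that was committed): self-assignment silences gcc's
"may be used uninitialized" warning without generating code, while g++,
which dislikes the self-assignment, gets a real zero-initialization as the
workaround mentioned above:

  #include <cstdio>

  #ifdef __cplusplus
  #define UNINIT_VAR(x) x = 0       /* g++ workaround */
  #else
  #define UNINIT_VAR(x) x = x       /* no code generated, warning silenced */
  #endif

  int main() {
    int UNINIT_VAR(value);          // instead of: int value; LINT_INIT(value);
    for (int i = 0; i < 3; i++)
      if (i == 0) value = 42;       // gcc cannot prove this always happens
    std::printf("%d\n", value);
    return 0;
  }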
When a connection is dropped any remaining temporary table is also automatically
dropped and the SQL statement of this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary log as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
The crash happens because a select_union object is used as the result
set for queries which have derived tables.
select_union uses a temporary table as data storage, and if the
field count exceeds 10 (the count of values for PROCEDURE ANALYSE()),
then we get a crash in the fill_record() function.
The code was using a special global buffer for the value of IS NULL ranges.
This was not always long enough to be copied by a regular memcpy. As a
result read buffer overflows may occur.
Fixed by setting the null byte to 1 and zeroing the rest of the field disk
image with bzero() (instead of relying on the buffer and memcpy()).
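A minimal sketch of the fixed way to build the key image for an IS NULL
range (plain memory operations, illustrative layout): set the null-indicator
byte and zero-fill the rest of the field image, instead of memcpy()ing from
a shared fixed-size buffer that a wide field can overrun:

  #include <cassert>
  #include <cstring>
  #include <vector>

  // Builds the image of "column IS NULL" for a field whose packed length is
  // `field_length` bytes, preceded by one null-indicator byte.
  static void store_is_null_key(unsigned char *image, size_t field_length) {
    image[0] = 1;                              // null indicator: value IS NULL
    std::memset(image + 1, 0, field_length);   // bzero() the field bytes
  }

  int main() {
    const size_t field_length = 300;   // wider than a small shared buffer
    std::vector<unsigned char> image(1 + field_length, 0xAA);
    store_is_null_key(image.data(), field_length);
    assert(image[0] == 1);
    for (size_t i = 1; i < image.size(); i++) assert(image[i] == 0);
    return 0;
  }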
The check for stack overflow was independent of the size of the
structure stored in the heap.
Fixed by adding sizeof(PARAM) to the requested free heap size.
view that has Group By
Table access rights checking function check_grant() assumed
that no view is opened when it's called.
This is not true with nested views where the inner view
needs materialization. In this case the view is already
materialized when check_grant() is called for it.
This caused check_grant() to not look for table level
grants on the materialized view table.
Fixed by checking if a view is already materialized and, if it is,
checking table level grants using the original table name
(not that of the materialized temp table).
Bug#45243: crash on win in sql thread clear_tables_to_lock() -> free()
Bug#45242: crash on win in mysql_close() -> free()
Bug#45238: rpl_slave_skip, rpl_change_master failed (lost connection) for STOP SLAVE
Bug#46030: rpl_truncate_3innodb causes server crash on windows
Bug#46014: rpl_stm_reset_slave crashes the server sporadically in pb2
When killing a user session on the server, it's necessary to
interrupt (notify) the thread associated with the session that
the connection is being killed so that the thread is woken up
if waiting for I/O. On a few platforms (Mac, Windows and HP-UX)
where the SIGNAL_WITH_VIO_CLOSE flag is defined, this interruption
procedure is to asynchronously close the underlying socket of
the connection.
In order to enable this scheme, each connection serving thread
registers its VIO (I/O interface) so that other threads can
access it and close the connection. But only the owner thread of
the VIO may delete it, so as to guarantee that other threads won't
see freed memory (the thread unregisters the VIO before deleting
it). A side note: closing the socket introduces a harmless race
that might cause a thread to attempt to read from a closed socket,
but this is deemed acceptable.
The problem is that this infrastructure was meant to be used only
by server threads, but the slave I/O thread was registering the
VIO of a mysql handle (a client API structure that represents a
connection to another server instance) as an active connection of
the thread. But under some circumstances, such as network failures,
the client API might destroy the VIO associated with a handle at
will, yet the VIO wouldn't be properly unregistered. This could
lead to accesses to freed data if a thread attempted to kill a
slave I/O thread whose connection was already broken.
There was an attempt to work around this by checking whether
the socket was being interrupted, but this hack didn't work as
intended due to the aforementioned race -- attempting to read
from the socket would yield a "bad file descriptor" error.
The solution is to add a hook to the client API that is called
from the client code before the VIO of a handle is deleted.
This hook allows the slave I/O thread to detach the active vio
so it does not point to freed memory.
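A condensed sketch of the hook (illustrative types and names; not the real
client API): before the client library frees a handle's VIO, it invokes a
callback that lets the owner thread detach the pointer it registered, so a
concurrent KILL never dereferences freed memory:

  #include <cstdio>

  struct Vio { int fd; };

  struct MysqlHandle {
    Vio  *vio;
    void (*before_vio_delete)(MysqlHandle *);   // the added hook
  };

  // What the slave I/O thread registers for the KILL path.
  static Vio *active_vio = nullptr;

  static void detach_vio_hook(MysqlHandle *h) {
    if (active_vio == h->vio)
      active_vio = nullptr;          // forget the VIO before it is freed
  }

  // Client-library side: destroy the VIO, calling the hook first.
  static void client_close_vio(MysqlHandle *h) {
    if (h->before_vio_delete) h->before_vio_delete(h);
    delete h->vio;
    h->vio = nullptr;
  }

  int main() {
    MysqlHandle handle{new Vio{3}, detach_vio_hook};
    active_vio = handle.vio;          // slave I/O thread registers its VIO

    client_close_vio(&handle);        // e.g. network failure inside the client

    // KILL path: nothing to close, instead of touching freed memory.
    std::printf("active_vio is %s\n", active_vio ? "set" : "detached");
    return 0;
  }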
The replication SQL thread does not properly set
thd->variables.collation_database to the database default charset when
executing a LOAD DATA binlog event.
This bug can be repeated by using the "LOAD DATA" command in STATEMENT mode.
This patch adds code to find the default character set of the current
database and assign it to thd->db_charset when the slave server begins to
execute a relay log.
A test for this bug is added to rpl_loaddata_charset.test.
The problem is that the lexer could inadvertently skip over the
end of a query being parsed if it encountered a malformed multibyte
character. A specially crafted query string could cause the lexer
to jump up to six bytes past the end of the query buffer. Another
problem was that the lexer could use unfiltered user input as
a signed array index for the parser maps (having lower and upper
bounds 0 and 256 respectively).
The solution is to ensure that the lexer only skips over well-formed
multibyte characters and that the index value of the parser maps
is always an unsigned value.
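A simplified sketch of the two fixes (hand-rolled UTF-8-style length check
and an illustrative 256-entry map, not the server's lexer): never skip past
the end of the buffer on a truncated or malformed multibyte character, and
index the parser map with an unsigned byte value rather than a possibly
negative signed char:

  #include <cassert>
  #include <cstddef>

  static int state_map[256];   // models the lexer's character-class map

  static int char_class(char c) {
    return state_map[(unsigned char)c];   // fix: unsigned index, 0..255
  }

  // Returns the number of bytes to consume, never reading past `end`.
  static size_t skip_char(const char *p, const char *end) {
    unsigned char b = (unsigned char)*p;
    size_t len = (b < 0x80) ? 1 : (b < 0xE0) ? 2 : (b < 0xF0) ? 3 : 4;
    if ((size_t)(end - p) < len) return 1;        // truncated: consume one byte
    for (size_t i = 1; i < len; i++)              // well-formed continuation?
      if (((unsigned char)p[i] & 0xC0) != 0x80) return 1;
    return len;
  }

  int main() {
    const char query[] = "SELECT '\xE4";          // multibyte char cut off
    const char *end = query + sizeof(query) - 1;
    size_t consumed = skip_char(query + 8, end);
    assert(consumed == 1);                        // stays inside the buffer
    assert(char_class('\xE4') == 0);              // no negative array index
    return 0;
  }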
- Define and pass compile time path variables as pre-processor definitions to
mimic the makefile build.
- Set new CMake version and policy requirements explicitly.
- Changed DATADIR to MYSQL_DATADIR to avoid a conflicting definition in the
  Platform SDK header ObjIdl.h, which also defines DATADIR.
The maximum value of the max_join_size variable is set by converting
a signed type (long int) with negative value (-1) to a wider unsigned
type (unsigned long long), which yields the largest possible value of
the wider unsigned type -- as per the language conversion rules. But,
depending on build options, the type of the max_join_size might be a
shorter type (ha_rows - unsigned long) which causes the warning to be
thrown once the large value is truncated to fit.
The solution is to ensure that the maximum value of the variable is
always set to the maximum value of integer type of max_join_size.
Furthermore, it would be interesting to always have a fixed type for
this variable, but this would incur a change of behavior which is
not acceptable for a GA version. See Bug#35346.
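A small standalone demonstration of the conversion issue and of the fix
(plain C++ types; the ha_rows typedef here merely stands in for the narrower
build configuration): derive the maximum from the variable's own type instead
of widening -1 through a larger unsigned type:

  #include <iostream>
  #include <limits>

  typedef unsigned int ha_rows;   // stands in for the narrower ha_rows build

  int main() {
    long signed_default = -1;
    // Old scheme: -1 widened to the largest unsigned long long ...
    unsigned long long wide_max = (unsigned long long)signed_default;
    // ... then truncated when stored into the narrower ha_rows variable,
    // which is what produced the startup warning.
    ha_rows truncated = (ha_rows)wide_max;

    // Fix: take the maximum of the variable's actual integer type directly.
    ha_rows proper_max = std::numeric_limits<ha_rows>::max();

    std::cout << "wide max:   " << wide_max << "\n"
              << "truncated:  " << truncated << "\n"
              << "proper max: " << proper_max << "\n";
    return 0;
  }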
compression
Since uint3korr() may read 4 bytes depending on build flags and
platform, allocate 1 extra "safety" byte in the network buffer
for cases when uint3korr() in my_real_read() is called to read
last 3 bytes in the buffer.
It is practically hard to construct a reliable and reasonably
small test case for this bug as that would require constructing
input stream such that a certain sequence of bytes in a
compressed packet happens to be the last 3 bytes of the network
buffer.
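An illustrative sketch of why the extra byte is needed (the uint3korr_fast()
helper below is an assumed little-endian model of the optimized macro, not
the server's definition): on some builds uint3korr() is implemented as a
4-byte load that masks off the top byte, so reading a 3-byte length located
at the very end of the buffer touches one byte past it unless the buffer is
allocated one byte larger:

  #include <cassert>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Optimized flavour: one unaligned 32-bit load, high byte masked away.
  static inline uint32_t uint3korr_fast(const unsigned char *p) {
    uint32_t v;
    std::memcpy(&v, p, 4);          // reads 4 bytes (little-endian assumed)
    return v & 0x00FFFFFFu;
  }

  int main() {
    const size_t net_buffer_length = 16;
    // The fix: one extra "safety" byte after the logical end of the buffer.
    std::vector<unsigned char> buff(net_buffer_length + 1, 0);

    // A 3-byte packet-length field placed in the last 3 usable bytes.
    unsigned char *last3 = buff.data() + net_buffer_length - 3;
    last3[0] = 0x01; last3[1] = 0x02; last3[2] = 0x03;

    assert(uint3korr_fast(last3) == 0x030201u);   // 4th byte read is in bounds
    return 0;
  }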