Bug#29816 Syntactically wrong query fails with misleading error message
The core problem is that an SQL-invoked function name can be a <schema
qualified routine name> that contains no <schema name>, but the MySQL
parser insists that every stored routine name (function, procedure or
trigger) must include a <schema name>, which is not true for functions.
This problem is especially visible when trying to create a function or
when a query contains a syntax error after a function call (in the same
query): both fail with a "No database selected" error if the session is
not attached to a particular schema, even though the first should
succeed and the second should fail with a "syntax error" message.
Part of the fix is to revamp the stored routine name handling so that
a schema name may be omitted for functions -- this means that the
internal function name representation may lack a dot, which indicates
that the function has no schema name. The other part is to perform the
schema checks only after the type of the routine (function, trigger or
procedure) is known.
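A minimal SQL sketch of the two symptoms, assuming a session with no
default database selected; the statements and the UDF library name are
illustrative, not the bug report's test case:

  -- 1) Creating a UDF: the function name needs no schema qualifier,
  --    so this should succeed, but it used to fail with
  --    "No database selected".
  CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';

  -- 2) A syntax error after a function call in the same query: the
  --    expected diagnostic is a syntax error, yet this also used to
  --    report "No database selected".
  SELECT LENGTH('test') FROM;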
This deadlock occurs when a client issues a HANDLER ... OPEN statement
that tries to open a table with a pending name-lock held by another
client, and that other client in turn needs a name-lock on some other
table which is already open and associated with a HANDLER instance
owned by the first client. The deadlock happens because open_table()
backs off and waits until the name-lock goes away, causing a circular
wait if another name-lock is also pending on one of the open HANDLER
tables.
Such a situation can, for example, easily be reproduced by issuing a
RENAME TABLE command while the existing table is already open as a
HANDLER table in another client, and that client then tries to open a
HANDLER on the new table name, as sketched below.
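The scenario, assuming two client connections and illustrative table
names:

  # connection 1
  HANDLER t1 OPEN;

  # connection 2: needs name-locks on t1 and t2; it blocks because t1
  # is still open as a HANDLER table in connection 1
  RENAME TABLE t1 TO t2;

  # connection 1: t2 has a pending name-lock, so open_table() backs
  # off and waits for it -- a circular wait with connection 2
  HANDLER t2 OPEN;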
The solution is to allow handler tables with older versions (marked for
flush) to be closed before waiting for the name-lock completion. This is
safe because no other name-lock can be issued between the flush and the
check for pending name-locks.
The test case for this bug is going to be committed into 5.1 because it
requires a test feature only available in 5.1 (wait_condition).
Problem: when creating a temporary table we allocate the group buffer
(if needed) followed by the table bitmaps (see create_tmp_table()).
Reserving less memory for the group buffer than is actually used for
value retrieval may lead to it overlapping with the bitmaps that follow
it in the memory pool, which in turn leads to unpredictable
consequences.
As we sometimes use Item->max_length to calculate the group buffer
size, it must be set to a proper value; in this particular case
Item_datetime_typecast::max_length is too small.
Another problem is that we use max_length to calculate the group buffer
key length for items represented as DATE/TIME fields, which is
superfluous.
Fix: set Item_datetime_typecast::max_length properly and accurately
calculate the group buffer key length for items represented as
DATE/TIME fields in the buffer.
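A hedged illustration of the kind of query affected (table and column
names are illustrative, not the bug report's test case): grouping on a
DATETIME cast makes the group buffer key length depend on
Item_datetime_typecast::max_length.

  CREATE TABLE t1 (s VARCHAR(32));
  INSERT INTO t1 VALUES ('2007-01-01 00:00:01'), ('2007-01-01 00:00:01');
  SELECT CAST(s AS DATETIME) AS dt FROM t1 GROUP BY dt;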
Our web server has been restructured several times, but references
to it in our source code have stayed the same. This patch from Paul
DuBois updates all URLs to modern semantics.
Problem: when creating an rb-tree key we store the length (2 bytes)
before the actual data for varchar key parts. This was missed for NULL
key parts, where we set the NULL byte and skip the rest.
Fix: take the length of the varchar key parts into account for NULLs
as well.
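A hedged illustration of where this matters (table and column names
are illustrative): a MEMORY table with a BTREE index over a nullable
VARCHAR column, so rb-tree keys are built from varchar key parts that
may be NULL.

  CREATE TABLE t1 (v VARCHAR(10), KEY USING BTREE (v)) ENGINE=MEMORY;
  INSERT INTO t1 VALUES (NULL), ('a'), (NULL), ('b');
  SELECT * FROM t1 WHERE v IS NULL;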
This bug is a symptom of the way HANDLER tables are managed. The main
difference from the conventional behavior is that HANDLER tables are
long lived, meaning that their lifetimes are not bounded by the
duration of the command that opened them. To this end the handler code
uses its own list (handler_tables instead of open_tables) to hold open
handler tables so that the tables won't be closed at the end of the
command/statement. Besides the handler_tables list, there is a hash
(handler_tables_hash) which is used to associate handler aliases with
tables and to refresh the tables upon demand (flush tables).
The current implementation doesn't work properly with refreshed tables
-- more precisely, when flush commands are issued by other initiators.
This happens because when a handler open or read statement is being
processed, the associated table has to be opened or locked and, for
this purpose, the open_tables and handler_tables lists are swapped so
that the new table being opened is inserted into the handler_tables
list. But when opening or locking the table, if the refresh version
differs from the thread refresh version, then all used tables in the
open_tables list (now handler_tables) are refreshed. In this
"refreshing" process the handler tables are flushed (closed) without
being properly unlinked from the handler hash.
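A hedged sketch of the problematic interaction, assuming two client
connections and an illustrative table name:

  # connection 1
  HANDLER t1 OPEN;

  # connection 2: bumps the refresh version
  FLUSH TABLES;

  # connection 1: while opening/locking for the read, the refresh
  # closes the handler table without unlinking it from
  # handler_tables_hash
  HANDLER t1 READ FIRST;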
The current implementation also fails to properly discard handlers of
dropped tables, but this and other problems are going to be addressed
in the fixes for bugs 31397 and 31409.
The chosen approach tries to properly save and restore the table state
so that no table is flushed during the table open and lock operations.
The logic is almost the same as before with the list swapping, but
with working glue code.
The test case for this bug is going to be committed into 5.1 because it
requires a test feature only available in 5.1 (wait_condition).
This is actually a fix for the patch for bug#27354. The problem with
that patch was that Item_func_sp::used_tables() was updated, but
Item_func_sp::const_item() was not. So, for Item_func_sp, we had
the following inconsistency:
- used_tables() returned RAND_TABLE, which means that the item
can produce "random" results;
- but const_item() returned TRUE, which means that the item is
a constant one.
The fix is to change Item_func_sp::const_item() behaviour: it must
return TRUE (an item is a constant one) only if a stored function
is deterministic and each of its arguments (if any) is a constant
item.
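A hedged example of the distinction (function and table names are
illustrative): f1 is NOT DETERMINISTIC, so even with constant arguments
it must not be treated as a constant item, while f2 is DETERMINISTIC
and called with a constant argument, so it may be.

  CREATE FUNCTION f1() RETURNS INT NOT DETERMINISTIC
    RETURN FLOOR(RAND() * 100);
  CREATE FUNCTION f2(x INT) RETURNS INT DETERMINISTIC
    RETURN x + 1;

  SELECT * FROM t1 WHERE a = f1();    -- must be re-evaluated per row
  SELECT * FROM t1 WHERE a = f2(10);  -- may be folded to a constant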
Bug#28878: InnoDB tables with UTF8 character set and indexes cause wrong result for DML
When making key reference buffers over CHAR fields, whitespace (0x20)
must be used to fill in the remaining space in the field's buffer. This
is what Field_string::store() does. Fixed Field_string::get_key_image()
to do the same.
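A hedged illustration of the affected pattern (names are illustrative):
DML on an indexed CHAR column in a UTF8 InnoDB table, where the key
image built for the index lookup must be space-padded the same way
Field_string::store() pads the row buffer.

  CREATE TABLE t1 (c CHAR(10), KEY (c)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
  INSERT INTO t1 VALUES ('a'), ('ab');
  UPDATE t1 SET c = 'b' WHERE c = 'a';  -- locates rows via the index on c
  SELECT * FROM t1 WHERE c = 'b';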
were accidentally removed during a previous rototill of this
code. Fixes bug#27692.
While it can be argued we should strive to provide a 'secure by
default' installation, this happens to be the setup currently
documented in the manual as the default, so defer changes that
improve security out of the box to a co-ordinated effort later
on.
For now, make a note about the test databases and the anonymous user
in mysql_install_db and recommend that users wishing to remove these
defaults run mysql_secure_installation.
[..re-commit of previously lost change..]
This is for bug #29446 "Specifying a myisam_sort_buffer > 4GB on 64
bit machines not possible". Support for myisam_sort_buffer_size > 4 GB
on 64-bit Windows will be looked at later in 5.2.