used.
Sorting by RAND() uses a temporary table in order to get correct results.
The user-defined variable was set while the temporary table was being
filled, and later the variable is substituted with its value from the
temporary table. Because of this it contained the last value stored in
the temporary table.
Now, if the result_field is set for the Item_func_set_user_var object,
the variable is updated from the result_field value when the row is
sent to the client.
Item_func_set_user_var::check() now accepts a use_result_field
parameter. Depending on its value, either the result_field or args[0]
is used to get the current value.
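For illustration, here is a minimal stand-alone C++ sketch of the idea;
the class below only mimics Item_func_set_user_var and is not the
server code, and its members are simplified stand-ins:

    #include <iostream>
    #include <optional>

    // Sketch only: stands in for Item_func_set_user_var; the real class
    // is far more involved.
    struct SetUserVarItemSketch {
      long long arg_value = 0;                // plays the role of args[0]
      std::optional<long long> result_field;  // per-row copy from the tmp table

      // check(use_result_field): pick the source for the value that will
      // update the user variable.
      long long check(bool use_result_field) const {
        if (use_result_field && result_field)
          return *result_field;               // value sent to the client
        return arg_value;                     // value at evaluation time
      }
    };

    int main() {
      SetUserVarItemSketch item;
      item.arg_value = 42;     // last value written while filling the tmp table
      item.result_field = 7;   // this row's value read back from the tmp table
      std::cout << item.check(false) << " " << item.check(true) << "\n";  // 42 7
    }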
There were two problems: RESET QUERY CACHE took a long time to complete
and other threads were blocked during this time.
The patch does three things:
1. Fixes a bug with improper use of the test-lock-test_again technique
(AKA double-checked locking), which is applicable here only in a few
places.
2. Somewhat improves performance of RESET QUERY CACHE:
my_hash_reset() is done instead of deleting elements one by one. Note,
however, that the slowdown also happens when inserting into the sorted
list of free blocks; this should be rewritten to use a balanced tree.
3. Makes RESET QUERY CACHE non-blocking.
The patch adjusts the locking protocol of the query cache in the
following way: it introduces a flag flush_in_progress, which is
set when Query_cache::flush_cache() is in progress. This call
sets the flag on enter, and then releases the lock. Every other
call is able to acquire the lock, but does nothing if
flush_in_progress is set (as if the query cache is disabled).
The only exception is concurrent calls to
Query_cache::flush_cache(), which are blocked until the flush is
over. When leaving Query_cache::flush_cache(), the lock is
acquired and the flag is reset, and one thread waiting on
Query_cache::flush_cache() (if any) is notified that it may
proceed (see the sketch below).
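The following stand-alone C++ sketch models this protocol, with
std::mutex and std::condition_variable standing in for the server's
own primitives; it illustrates the idea and is not the actual
Query_cache code:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    class QueryCacheSketch {
      std::mutex lock_;
      std::condition_variable flush_done_;
      bool flush_in_progress_ = false;

     public:
      // Ordinary cache operation: does nothing while a flush is running,
      // exactly as if the query cache were disabled.
      bool try_use() {
        std::unique_lock<std::mutex> guard(lock_);
        if (flush_in_progress_) return false;
        // ... normal cache work would happen here, under the lock ...
        return true;
      }

      // flush_cache(): concurrent flushes wait for each other; everyone
      // else is blocked only while the flag is being set or cleared.
      void flush_cache() {
        std::unique_lock<std::mutex> guard(lock_);
        flush_done_.wait(guard, [this] { return !flush_in_progress_; });
        flush_in_progress_ = true;
        guard.unlock();            // the long part runs without the lock
        // ... free all cache memory here ...
        guard.lock();
        flush_in_progress_ = false;
        flush_done_.notify_one();  // wake one waiting flusher, if any
      }
    };

    int main() {
      QueryCacheSketch cache;
      std::thread flusher([&] { cache.flush_cache(); });
      std::cout << "cache usable: " << cache.try_use() << "\n";
      flusher.join();
    }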
"real" table fails in JOINs".
This is a regression caused by the fix for Bug 18444.
That fix removed the assignment of empty_c_string to table->db
performed in add_table_to_list, as neither I nor anyone else knew what
it was there for. Now we know, and it is covered with tests: the only
case when a table's database name can be empty is when the table is a
derived table. This fix puts the assignment back, but makes it a bit
more explicit.
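As a purely hypothetical illustration of what "a bit more explicit"
means here (the names below are invented and do not match
add_table_to_list), the assignment is tied to the derived-table case
instead of being done unconditionally:

    #include <cassert>
    #include <string>

    // Illustration only: a derived table (subquery in FROM) is the one
    // case where the database name is legitimately empty.
    struct TableRefSketch {
      std::string db;
      bool is_derived = false;
    };

    void set_db_name(TableRefSketch &table, const std::string &current_db) {
      if (table.is_derived)
        table.db = "";          // explicit: derived tables have no database
      else
        table.db = current_db;  // real tables always get a database name
    }

    int main() {
      TableRefSketch derived, real;
      derived.is_derived = true;
      set_db_name(derived, "test");
      set_db_name(real, "test");
      assert(derived.db.empty() && real.db == "test");
    }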
Additionally, finally drop sp.result.orig which was checked in by mistake.
To make MySQL compatible with some ODBC applications, you can find
the AUTO_INCREMENT value for the last inserted row with the following
query: SELECT * FROM tbl_name WHERE auto_col IS NULL.
This is done with special code that replaces 'auto_col IS NULL' with
'auto_col = LAST_INSERT_ID'.
However, this also resets LAST_INSERT_ID to 0, as it is used as a flag
to ensure that only the first SELECT ... WHERE auto_col IS NULL after
an INSERT gets this special behaviour.
In order to avoid resetting LAST_INSERT_ID, a special flag is
introduced in the THD class. This flag, rather than LAST_INSERT_ID, is
now used to keep the second and subsequent SELECTs from getting the
special behaviour.
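A hedged C++ sketch of the new scheme follows; ConnectionState and its
members are invented stand-ins for THD and its fields, not the actual
server code:

    #include <iostream>

    struct ConnectionState {             // stands in for THD
      unsigned long long last_insert_id = 0;
      bool substitute_is_null = false;   // set by INSERT, cleared by the
    };                                   // first special SELECT

    void on_insert(ConnectionState &thd, unsigned long long id) {
      thd.last_insert_id = id;
      thd.substitute_is_null = true;
    }

    // Decide whether "auto_col IS NULL" should be rewritten into
    // "auto_col = <last insert id>": only the first SELECT after the
    // INSERT gets the special behaviour, and last_insert_id is untouched.
    bool rewrite_is_null(ConnectionState &thd) {
      if (!thd.substitute_is_null) return false;
      thd.substitute_is_null = false;
      return true;
    }

    int main() {
      ConnectionState thd;
      on_insert(thd, 17);
      bool first = rewrite_is_null(thd);    // true: rewrite applies
      bool second = rewrite_is_null(thd);   // false: no longer applies
      std::cout << first << second << " last_insert_id="
                << thd.last_insert_id << "\n";  // 10 last_insert_id=17
    }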
run at startup"
The server returned an error when trying to execute an init-file
containing a stored procedure that could return multiple result sets
to the client. A stored procedure can return multiple result sets if
it contains PREPARE, SELECT, SHOW and similar statements.
The fix is to set client_capabilities|=CLIENT_MULTI_RESULTS in
sql_parse.cc:handle_bootstrap(). There is no "client" really, so
nothing is ever sent. This makes the init-file feature behave
consistently: the prepared statements that can be called directly in
the init-file can be used in a stored procedure too.
Re-committed the patch originally submitted by Per-Erik after review.
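For illustration, the change amounts to something of this shape; the
flag value and the struct below are made up for the sketch and do not
come from the real protocol headers:

    #include <cassert>

    // Illustrative only: in the server the flag is defined by the
    // client/server protocol headers.
    enum : unsigned long { CLIENT_MULTI_RESULTS_BIT = 1UL << 17 };

    struct FakeThd {
      unsigned long client_capabilities = 0;  // mirrors thd->client_capabilities
    };

    // What handle_bootstrap() now does conceptually: the non-existent
    // "client" simply advertises that it can handle multiple result sets.
    void prepare_bootstrap_thd(FakeThd &thd) {
      thd.client_capabilities |= CLIENT_MULTI_RESULTS_BIT;
    }

    int main() {
      FakeThd thd;
      prepare_bootstrap_thd(thd);
      assert(thd.client_capabilities & CLIENT_MULTI_RESULTS_BIT);
      return 0;
    }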
It was possible that fetching a record by an exact key value
(including the record pointer) could return a record with a
different key value. This happened only if a concurrent insert
added a record with the searched key value after the fetching
statement locked the table for read.
The search succeeded on the key value, but the record was
rejected because it was past the file length that was remembered
at the start of the fetching statement. In other words, it was
rejected as being a concurrently inserted record.
The action to recover from this problem was to fetch the
record that is pointed at by the next key of the index.
This was repeated until a record below the file length was
found.
I now avoid this loop if an exact match was searched for.
If this match is beyond the file length, it is now treated
as "key not found". There cannot be another key with the
same record pointer.
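The changed decision can be modelled with the following stand-alone
sketch; everything here is an invented model of the read path, not the
real MyISAM code:

    #include <cassert>
    #include <cstdint>

    enum class Action { Accept, GiveUp, TryNextKey };

    // 'pos' is the record position returned by the index search;
    // 'saved_length' is the data file length remembered when the reading
    // statement started (records past it were inserted concurrently).
    Action after_index_hit(uint64_t pos, uint64_t saved_length,
                           bool exact_match_search) {
      if (pos < saved_length)
        return Action::Accept;       // ordinary, visible record
      if (exact_match_search)
        return Action::GiveUp;       // treat as "key not found": no other
                                     // key can carry the same record pointer
      return Action::TryNextKey;     // old behaviour: keep walking the index
    }

    int main() {
      assert(after_index_hit(100, 500, true) == Action::Accept);
      assert(after_index_hit(600, 500, true) == Action::GiveUp);
      assert(after_index_hit(600, 500, false) == Action::TryNextKey);
    }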
Stored procedure execution sometimes placed the addresses of auto
(stack) variables in the list of Item changes to undo in
THD::rollback_item_tree_changes().
This could cause stack corruption.
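The hazard can be illustrated with a few lines of stand-alone C++
(nothing below is server code; undo_list merely plays the role of the
list of Item changes kept in the THD):

    #include <vector>

    static std::vector<int *> undo_list;  // consulted long after the
                                          // registering function returned

    void bad_register() {
      int local = 0;                 // auto (stack) variable
      undo_list.push_back(&local);   // BUG pattern: the address outlives
    }                                // the stack frame it points into

    void good_register(int &long_lived) {
      undo_list.push_back(&long_lived);  // fix: register only storage that
    }                                    // is still alive at rollback time

    int main() {
      static int durable = 0;
      good_register(durable);
      // bad_register();             // would leave a dangling pointer behind
      for (int *p : undo_list) *p = 1;  // the later "rollback" walk
      return durable == 1 ? 0 : 1;
    }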
too many open statements". The patch adds a new global variable
@@max_prepared_stmt_count. This variable limits the total number
of prepared statements in the server. The default value of
@@max_prepared_stmt_count is 16382; that many small statements
(a SELECT against 3 tables with GROUP BY, ORDER BY and LIMIT)
consume 100MB of RAM. Once this limit has been reached, the server
refuses to prepare a new statement and returns ER_UNKNOWN_ERROR
(unfortunately, we can't add new errors to 4.1 without breaking 5.0).
The limit is changeable after startup and accepts any value from 0 to
1 million. If the new value of the limit is less than the current
statement count, no new statements can be added, while the existing
ones can still be used. Additionally, the current count of prepared
statements is now available through a global read-only variable
@@prepared_stmt_count.
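A hedged, stand-alone sketch of the admission check follows; it is a
model of the behaviour described above, not the server's code, and the
locking is simplified to a single mutex:

    #include <iostream>
    #include <mutex>

    class PreparedStmtRegistrySketch {
      std::mutex lock_;
      unsigned long count_ = 0;    // what @@prepared_stmt_count reports
      unsigned long max_ = 16382;  // @@max_prepared_stmt_count

     public:
      void set_limit(unsigned long new_max) {  // 0 .. 1 million in the server
        std::lock_guard<std::mutex> g(lock_);
        max_ = new_max;            // existing statements stay usable even if
      }                            // count_ is already above the new limit

      bool try_register() {        // called on PREPARE
        std::lock_guard<std::mutex> g(lock_);
        if (count_ >= max_) return false;  // the server returns an error here
        ++count_;
        return true;
      }

      void unregister() {          // called on DEALLOCATE PREPARE / close
        std::lock_guard<std::mutex> g(lock_);
        --count_;
      }
    };

    int main() {
      PreparedStmtRegistrySketch reg;
      reg.set_limit(1);
      bool first = reg.try_register();    // accepted
      bool second = reg.try_register();   // refused: limit reached
      std::cout << first << second << "\n";  // prints 10
    }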
according to the standard.
The idea is to use Field classes to implement stored routine
variables. Also, we should provide a facade to the Item hierarchy
via the Item_field class (this is necessary, since SRVs take part
in expressions).
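A rough, self-contained sketch of the idea (the classes below only
echo the spirit of the Field and Item_field hierarchies; the coercion
shown is deliberately trivial and not the server's real logic):

    #include <algorithm>
    #include <iostream>

    // The typed field, not the assignment site, enforces range rules.
    struct FieldTinyUnsignedSketch {
      unsigned char value = 0;
      bool store(long long v) {  // returns true if the value was clipped
        long long clipped = std::min<long long>(std::max<long long>(v, 0), 255);
        value = static_cast<unsigned char>(clipped);
        return clipped != v;     // the server would raise a warning or error
      }
    };

    // Facade in the spirit of Item_field: expressions read the variable
    // through this wrapper, so it can take part in larger expressions.
    struct ItemFieldSketch {
      FieldTinyUnsignedSketch *field;
      long long val_int() const { return field->value; }
    };

    int main() {
      FieldTinyUnsignedSketch var;       // DECLARE v TINYINT UNSIGNED
      ItemFieldSketch item{&var};
      bool truncated = var.store(-5);    // SET v = -5
      std::cout << item.val_int() << " truncated=" << truncated
                << "\n";                 // prints: 0 truncated=1
    }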
The patch fixes the following bugs:
- BUG#8702: Stored Procedures: No Error/Warning shown for inappropriate data
type matching;
- BUG#8768: Functions: For any unsigned data type, -ve values can be passed
and returned;
- BUG#8769: Functions: For Int datatypes, out of range values can be passed
and returned;
- BUG#9078: STORED PROCDURE: Decimal digits are not displayed when we use
DECIMAL datatype;
- BUG#9572: Stored procedures: variable type declarations ignored;
- BUG#12903: upper function does not work inside a function;
- BUG#13705: parameters to stored procedures are not verified;
- BUG#13808: ENUM type stored procedure parameter accepts non-enumerated
data;
- BUG#13909: Varchar Stored Procedure Parameter always BINARY string (ignores
CHARACTER SET);
- BUG#14161: Stored procedure cannot retrieve bigint unsigned;
- BUG#14188: BINARY variables have no 0x00 padding;
- BUG#15148: Stored procedure variables accept non-scalar values;
The cause of the bug was the use of end_write_group instead of
end_write in the case when ORDER BY required a temporary table; this
did not take into account the fact that a loose index scan already
computes the result of the MIN/MAX aggregate functions (and performs
grouping).
The solution is to call end_write instead of end_write_group and to add
the MIN/MAX functions to the list of regular functions so that their
values are inserted into the temporary table.
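A small hypothetical sketch of the dispatch (names invented; this is
not the optimizer's real code): when the access method already delivers
one row per group with MIN/MAX computed, the plain per-row write
function is chosen and the MIN/MAX values go into the temporary table
like ordinary columns.

    #include <cassert>

    enum class WriteFunc { end_write, end_write_group };

    // Illustration only: the choice made when setting up the temporary
    // table used for ORDER BY.
    WriteFunc choose_write_func(bool using_loose_index_scan,
                                bool need_grouping) {
      if (using_loose_index_scan)
        return WriteFunc::end_write;  // rows arrive already grouped, with
                                      // MIN/MAX computed, so write them out
                                      // as plain columns
      return need_grouping ? WriteFunc::end_write_group : WriteFunc::end_write;
    }

    int main() {
      assert(choose_write_func(true, true) == WriteFunc::end_write);
      assert(choose_write_func(false, true) == WriteFunc::end_write_group);
    }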