When checking for an error after removing the special view error handler,
the code did not take into account that open_tables() may fail because
the current statement has been killed.
Added a check for thd->killed.
Added a client program to test it.
Problem: Item_char_typecast reported a wrong max_length when
casting to BINARY, which led, in particular, to wrong
"ORDER BY BINARY(char_column)" results.
Fix: making Item_char_typecast report the correct max_length.
@ mysql-test/r/ctype_utf16.result
Fixing old incorrect test result.
@ mysql-test/r/ctype_utf32.result
Fixing old incorrect test result.
@ mysql-test/r/ctype_utf8.result
Adding new test
@ mysql-test/t/ctype_utf8.test
Adding new test
@ sql/item_timefunc.cc
Making Item_char_typecast report correct max_length
when cast is done to BINARY.
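A minimal SQL sketch of the affected pattern (table and column names are
hypothetical, not the original test case):

  CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET utf8);
  INSERT INTO t1 VALUES ('b'),('A'),('a');
  -- Before the fix, BINARY(c) could report a too-small max_length,
  -- which could truncate the sort key and yield a wrong order:
  SELECT c FROM t1 ORDER BY BINARY(c);
  DROP TABLE t1;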
Problem: SHOW CREATE FUNCTION and SELECT DTD_IDENTIFIER FROM I_S.ROUTINES
returned wrong values in the case of an ENUM return data type with the
UCS2 character set.
Cause: the string used to collect the returned data type was incorrectly
set to the "binary" character set, so UCS2 values were returned with
extra '\0' characters.
Fix: setting the string character set to creation_ctx->get_client_cs()
in sp_find_routine(), and to system_charset_info in sp_create_routine(),
fixes the problem.
Adding tests:
- the original test with Latin letters
- an extra test with non-Latin letters
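A hedged sketch of the kind of statement affected (the routine name is
hypothetical):

  CREATE FUNCTION f1() RETURNS ENUM('a','b') CHARACTER SET ucs2
    RETURN 'a';
  -- Before the fix, the returned data type could contain stray '\0' bytes:
  SHOW CREATE FUNCTION f1;
  SELECT DTD_IDENTIFIER FROM INFORMATION_SCHEMA.ROUTINES
    WHERE ROUTINE_NAME = 'f1';
  DROP FUNCTION f1;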
Actually there are two different bugs.
The first one caused a crash on queries with a WHERE condition over views
that themselves contain a WHERE condition. A wrong check for the prepared
statement phase led to items for view fields being allocated in the
execution memory and freed at the end of execution. Thus the optimized
WHERE condition referred to unallocated memory on the second execution
and the server crashed.
The second one was caused by the Item_cond::compile function not saving
the changes it made to the item tree. Thus the changes were not reverted,
and on the next execution the server crashed dereferencing unallocated
memory.
A new helper function, is_stmt_prepare_or_first_stmt_execute(),
is added to the Query_arena class.
The find_field_in_view function now uses
is_stmt_prepare_or_first_stmt_execute() to check whether
newly created view items should be freed at the end of the query execution.
The Item_cond::compile function now saves the changes it makes to the
item tree.
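A minimal sketch of the crashing pattern (object names are hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10;
  PREPARE stmt FROM 'SELECT a FROM v1 WHERE a > 1';
  EXECUTE stmt;  -- first execution worked
  EXECUTE stmt;  -- second execution could crash before the fix
  DEALLOCATE PREPARE stmt;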
Bug #38816 changed the lock that protects THD::query from
LOCK_thread_count to LOCK_thd_data, but did not update the associated
InnoDB functions.
1. The innobase_mysql_prepare_print_arbitrary_thd and
innobase_mysql_end_print_arbitrary_thd InnoDB functions have been
removed: since we now have a per-thread mutex, we no longer need to wrap
several inter-thread accesses to THD::query in a single global
LOCK_thread_count lock, so the code can be simplified.
2. The innobase_mysql_print_thd function has been modified to lock
LOCK_thd_data directly.
SET TRANSACTION ISOLATION LEVEL is used to temporarily set
the transaction isolation level for the next transaction. After that
transaction, the isolation level is reset to the value of the
session variable 'tx_isolation'.
The bug was caused by setting the thd->variables.tx_isolation
field to the value of the session variable upon each
statement commit. It should only be set at the end of the
full transaction.
The fix is to remove the setting of the variable in
ha_autocommit_or_rollback if we are in a transaction, as it
will be correctly set in either ha_rollback or
ha_commit_one_phase.
If, on the other hand, we are in autocommit mode, tx_isolation
is explicitly set here.
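A sketch of the intended semantics (the table name is hypothetical):

  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; -- next transaction only
  START TRANSACTION;
  SELECT * FROM t1; -- runs at SERIALIZABLE
  SELECT * FROM t1; -- before the fix, the level could already have been
                    -- reset to REPEATABLE READ by the previous statement
  COMMIT;           -- only here should the level revert to the session value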
The 'slave_patternload_file' is assigned the real path of the load data
file when the Relay_log_info object is initialized, but the path of the
load data file is not resolved to a real path when executing the event
from the relay log. So an error is raised if the path of the load data
file is a symbolic link.
Likewise, the global 'opt_secure_file_priv' is not resolved to a real
path when loading data from a file, so the same problem occurs there.
To fix these errors, the path of the load data file is now resolved to
its real path when executing the event from the relay log, and
'opt_secure_file_priv' is resolved to its real path when executing
LOAD DATA INFILE.
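A hedged illustration (paths are hypothetical): suppose --secure-file-priv
points at /data/files, which is a symbolic link to /mnt/real/files. Then

  LOAD DATA INFILE '/data/files/t1.txt' INTO TABLE t1;

could fail before the fix, because the resolved real path of the file no
longer matched the unresolved 'opt_secure_file_priv' value.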
<tmp_tbl> with RBL
When binlogging the statement, the server always handles the existing
object as a table, even though it is a view. However, a view is
handled differently in other parts of the code, leading the
statement to crash under RBL if the view exists.
This happens because the underlying tables of the view are not opened
when we try to call store_create_info() on the view in order to build
a CREATE TABLE statement.
This patch only addresses the crash; other binlogging
problems related to CREATE TABLE IF NOT EXISTS LIKE when the existing
object is a view will be solved by Bug#47442.
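A minimal sketch of the crashing statement (names are hypothetical):

  SET SESSION binlog_format = 'ROW';
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  CREATE TEMPORARY TABLE tmp_tbl (a INT);
  -- v1 already exists and is a view; binlogging this crashed before the fix:
  CREATE TABLE IF NOT EXISTS v1 LIKE tmp_tbl;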
for each record'
There was an error in an internal structure in the range
optimizer (SEL_ARG). A design flaw leaves parts of the data
structure uninitialized when it is in a certain state, and all
client code must check that this state is not present before
trying to access the structure's data. Fixed by
- checking the state before trying to access data (in
several places, most of which are not covered by the test case), and
- copying the keypart id when cloning SEL_ARGs.
BUG#46000 - using index called GEN_CLUST_INDEX crashes server
Detailed revision comments:
r6180 | jyang | 2009-11-17 10:54:57 +0200 (Tue, 17 Nov 2009) | 7 lines
branches/5.0: Merge/Port fix for bug #46000 from branches/5.1
-r5895 to branches/5.0. Disallow creating index with the
name of "GEN_CLUST_INDEX" which is reserved for the default
system primary index. Minor adjustments to the table name
screening format for the added tests.
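A brief illustration (the table name is hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  -- the index name is reserved for InnoDB's default clustered index,
  -- so this is now rejected with an error instead of crashing:
  CREATE INDEX GEN_CLUST_INDEX ON t1 (a);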
BUG#47777 - innodb dies with spatial pk: Failing assertion: buf <= original_buf + buf_len
Detailed revision comments:
r6178 | jyang | 2009-11-17 08:52:11 +0200 (Tue, 17 Nov 2009) | 6 lines
branches/5.0: Merge fix for bug #47777 from branches/5.1 -r6045
to branches/5.0. Treat the Geometry data the same as Binary BLOB
in ha_innobase::store_key_val_for_row(), since the Geometry
data is stored as Binary BLOB in Innodb.
In RBR, statements operating on temporary tables should not be binlogged.
Despite this, after executing 'TRUNCATE ...' on a temporary table,
the command was still logged, even in row-based mode. Consequently, this
caused problems on the slave, as the table may not exist there, resulting
in an execution failure that ultimately made the slave report
an error and abort.
After this patch, a 'TRUNCATE ...' statement on a temporary table is no
longer binlogged in RBR.
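A minimal sketch of the affected scenario (the table name is hypothetical):

  SET SESSION binlog_format = 'ROW';
  CREATE TEMPORARY TABLE tmp1 (a INT);
  -- before the patch this was written to the binary log even in RBR,
  -- and could fail on a slave where the temporary table does not exist:
  TRUNCATE TABLE tmp1;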
The problem is that the server could crash when attempting
to access a non-conformant proc system table. One such case
was a crash when invoking stored procedure related statements
on a 5.1 server with a proc system table in the 5.0 format.
The solution is to validate the proc system table format
before attempts to access it are made. If the table is not
in the format that the server expects, a message is written
to the error log and the statement that caused the table to
be accessed fails.
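A hedged illustration (the procedure name is hypothetical): on a 5.1 server
whose mysql.proc table is still in the 5.0 format (e.g. after an in-place
upgrade without running mysql_upgrade),

  CREATE PROCEDURE p1() SELECT 1;

now fails with a message written to the error log instead of crashing the
server.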
This patch introduces a limit on the time the query cache can
block SELECTs with a lock.
Other operations which cause a change in the table
data will still be blocked.
Bug #48370 Absolutely wrong calculations with GROUP BY and
decimal fields when using IF
Added the test cases from the above two bugs for regression
testing.
Added additional tests that demonstrate an incomplete fix.
Added a new factory method for Field_new_decimal to
create a field from a (decimal-returning) Item.
In the new method, made sure that all the precision and
length variables are capped in a proper way.
This is required because Items can have a larger precision
than decimal fields and thus need to be capped when
creating a field based on an Item type.
Fixed the wrong typecast to Item_decimal.
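A hedged sketch of a query in the affected area (names and values are
hypothetical, not the original bug's test case):

  CREATE TABLE t1 (d DECIMAL(30,10));
  INSERT INTO t1 VALUES (123456789012345678.1234567890);
  -- the IF() result field must be created with properly capped
  -- precision and scale for the aggregation to be correct:
  SELECT IF(d IS NULL, 0, d) AS v, SUM(d) FROM t1 GROUP BY v;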
When merging ranges during calculation of the result of OR
applied to two range sets, the current range may be obsoleted by the
resulting merged range. The first overlapping range can be obsoleted
as well.
Fixed by moving the pointer to the first overlapping range to the
pointer of the resulting union range.
Added a few comments at key places in key_or().
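A hedged example of a condition whose ranges key_or() must merge (names
are hypothetical):

  -- the overlapping ranges [1,10] and [5,20] are merged into [1,20];
  -- both the current range and the first overlapping range may be
  -- obsoleted by the resulting union:
  SELECT * FROM t1 WHERE (a BETWEEN 1 AND 10) OR (a BETWEEN 5 AND 20);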
Problem: Some system functions that could return different values on
master and slave were not marked unsafe. In particular:
GET_LOCK
IS_FREE_LOCK
IS_USED_LOCK
MASTER_POS_WAIT
RELEASE_LOCK
SLEEP
SYSDATE
VERSION
Fix: Mark these functions unsafe.
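A brief illustration (the table name is hypothetical): with
binlog_format=STATEMENT, statements such as

  INSERT INTO t1 VALUES (SYSDATE());
  INSERT INTO t1 VALUES (GET_LOCK('my_lock', 0));

now raise the unsafe-statement warning, since their results can differ
between master and slave.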
DELETE IGNORE
The ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG error was set in the
diagnostics area when it happened, but the DELETE cleanup code
never checked for a non-fatal error condition, thus trying to
set the diagnostics area to "ok". This triggered an assert checking
that the diagnostics area was empty.
The fix is to test whether a non-fatal error condition exists
(thd->is_error()) before ok'ing the operation.
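A hedged sketch of the failing pattern (names are hypothetical, not the
original test case):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  -- the trigger updates the table the DELETE is using, raising
  -- ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG at runtime:
  CREATE TRIGGER tr1 AFTER DELETE ON t1 FOR EACH ROW UPDATE t1 SET a = 2;
  DELETE IGNORE FROM t1;  -- asserted before the fix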
The problem was a "self-deadlock" if the connection issuing INSERT DELAYED
had both the global read lock (FLUSH TABLES WITH READ LOCK) and LOCK TABLES
mode active. The table being inserted into had to be different from the
table(s) locked by LOCK TABLES.
For INSERT DELAYED, the connection thread waits until the handler thread has
opened and locked its table before returning. But since the global read lock
was active, the handler thread would be unable to lock and would wait for the
global read lock to go away.
So the handler thread would be waiting for the connection thread to release
the global read lock while the connection thread was waiting for the handler
thread to lock the table. This gave a "self-deadlock" (same connection,
different threads).
The deadlock would only happen if we were also in LOCK TABLES mode, since
otherwise the INSERT tries to get protection against the global read lock
before starting the handler thread. It then notices that the global read
lock is owned by the same connection and reports ER_CANT_UPDATE_WITH_READLOCK.
This patch removes the deadlock by reporting ER_CANT_UPDATE_WITH_READLOCK
also when we are inside LOCK TABLES mode.
Test case added to delayed.test.
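A hedged sketch of the scenario, in a single connection (table names are
hypothetical, and the exact statement order may differ from the test in
delayed.test):

  FLUSH TABLES WITH READ LOCK;
  LOCK TABLES t1 READ;
  -- t2 differs from the locked table; before the fix this could
  -- self-deadlock, now it reports ER_CANT_UPDATE_WITH_READLOCK:
  INSERT DELAYED INTO t2 VALUES (1);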
WHERE conditions
check_group_min_max() checks if the loose index scan
optimization is applicable for a given WHERE condition, that is,
if the MIN/MAX attribute participates only in range predicates
comparing the corresponding field with constants.
The problem was that it considered the whole predicate suitable
for the loose index scan optimization as soon as it encountered
a constant as a predicate argument. This is obviously wrong for
cases when a constant is the first argument of a predicate
which does not satisfy the above condition.
Fixed check_group_min_max() so that all arguments of the input
predicate are considered when deciding whether it passes the test, even
after a constant has already been encountered.
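A hedged sketch of the kind of predicate that must now be rejected (names
are hypothetical, not the original test case):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  -- the constant 1 is a predicate argument, but the predicate does not
  -- compare the MIN/MAX column b directly with a constant, so loose
  -- index scan is not applicable:
  SELECT a, MIN(b) FROM t1 WHERE 1 = (b + 1) GROUP BY a;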