list using a function
When executing dependent subqueries, they are re-initialized and re-executed for
each row of the outer context.
The cause of the bug is that during subquery re-initialization/re-execution the
optimizer reallocates JOIN::join_tab and JOIN::table in make_simple_join(), and
create_sort_index() reallocates its local variable 'sortorder' via
make_unireg_sortorder().
Care must be taken not to allocate anything into the thread's memory pool
while re-initializing query plan structures between subquery re-executions.
All such items must be cached and reused, because the thread's memory pool
is freed only at the end of the whole query.
Note that these structures must be cached and reused even for queries that are
not otherwise cacheable, because otherwise the thread's memory pool grows
every time such a query is re-executed.
We provide additional members to the JOIN structure to store references
to the items that need to be cached.
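To illustrate the kind of query affected, here is a minimal hedged sketch (table and column names are hypothetical): the correlated subquery below is re-initialized and re-executed for every row of t1, and its ORDER BY forces a filesort (create_sort_index()) on each execution.

  -- Hypothetical schema: t1(a), t2(a, b).
  SELECT t1.a,
         (SELECT t2.b
            FROM t2
           WHERE t2.a = t1.a            -- dependent on the outer row
           ORDER BY t2.b DESC           -- needs a filesort per execution
           LIMIT 1) AS last_b
    FROM t1;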
mysql_alter_table() was able to rename only a base table, not a view.
The view/table renaming code is moved from the function rename_tables
to the new function called do_rename().
The mysql_alter_table() function calls it when it needs to rename a view.
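A hedged sketch of the scenario this enables, per the description above (view names are made up):

  CREATE VIEW v1 AS SELECT 1 AS c;
  -- With the fix, mysql_alter_table() delegates the rename of a view
  -- to do_rename() instead of failing on it.
  ALTER TABLE v1 RENAME TO v2;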
(race cond)
It was possible for one thread to interrupt a Data Definition Language
statement and thereby get messages to the binlog out of order. Consider:
Connection 1: Drop Foo x
Connection 2: Create or replace Foo x
Connection 2: Log "Create or replace Foo x"
Connection 1: Log "Drop Foo x"
Local end would have Foo x, but the replicated slaves would not.
The fix is to wrap each DDL operation and its binlog logging in the same mutex.
Since we already use mutexes for the various parts of altering the server,
this only entails moving the logging events down close to the action, inside
the mutex protection.
The problem was that during DROP TEMPORARY TABLE we tried to acquire
the name lock, even though temporary tables belong to a single connection, so
no race is possible.
The solution is to not use table name locking while executing
DROP TEMPORARY TABLE.
- if there are two character set definitions in a column declaration, the
first one was overwritten by the second, because both were stored in the single
LEX->charset slot. A separate slot is added to the LEX structure to store the
underscore charset.
- default values are now converted to the column character set of STRING and
VARSTRING fields when necessary.
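A hedged example of a declaration that carries two character set definitions, one via the underscore charset of the default value and one via the column CHARACTER SET clause (names are illustrative):

  -- The _latin2 introducer and the column charset now go into separate LEX
  -- slots; the default value is converted to the column charset (utf8)
  -- when necessary.
  CREATE TABLE t1 (
    c CHAR(10) CHARACTER SET utf8 DEFAULT _latin2'abc'
  );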
Fix for BUG#16676: Database CHARSET not used for stored procedures
The problem in BUG#16211 is that CHARSET-clause of the return type for
stored functions is just ignored.
The problem in BUG#16676 is that if the character set is not explicitly
specified for an SP variable, the server character set is used instead
of the database one.
The fix has two parts:
- always store CHARSET-clause of the return type along with the
type definition in mysql.proc.returns column. "Always" means that
CHARSET-clause is appended even if it has not been explicitly
specified in CREATE FUNCTION statement (this affects BUG#16211 only).
Storing CHARSET-clause if it is not specified is essential to avoid
changing character set if the database character set is altered in
the future.
NOTE: this change is not backward compatible with the previous releases.
- use database default character set if CHARSET-clause is not explicitly
specified (this affects both BUG#16211 and BUG#16676).
NOTE: this also breaks backward compatibility.
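A hedged illustration of the post-fix behaviour described above (database and routine names are made up):

  CREATE DATABASE db1 CHARACTER SET utf8;
  USE db1;

  -- BUG#16211: the CHARSET of the return type is now stored in
  -- mysql.proc.returns even though it is not written explicitly, so a
  -- later ALTER DATABASE does not silently change the return type.
  CREATE FUNCTION f1() RETURNS CHAR(10) RETURN 'abc';

  -- BUG#16676: an SP variable without an explicit CHARSET now gets the
  -- database character set (utf8 here), not the server character set.
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v CHAR(10);
    SELECT CHARSET(v);
  END//
  DELIMITER ;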
NDB table".
The SQL layer was not marking fields which were used in triggers as such. As
a result these fields were not always properly retrieved/stored by the handler
layer, so one might get wrong values or lose changes made in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
Also this patch contains the following cleanup of ha_ndbcluster code:
We no longer rely on reading the LEX::sql_command value in the handler in order
to determine whether we can enable the optimization that handles REPLACE
statements more efficiently by doing the replace directly in the write_row()
method without reporting an error to the SQL layer.
Instead we rely on the SQL layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases when it should not
be used (e.g. if the table has ON DELETE triggers) and use it in some
additional cases when it is applicable (e.g. for LOAD DATA REPLACE).
Finally this patch includes a fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem caused by improper field mark-up.
During row replacement, fields which were not explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they retained
values from the old row version. The fix is to mark all table fields as set
for REPLACE statements. Note that in 5.1 this problem is already solved by
notifying the handler that it should save values from all fields only when a
real replacement happens.
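A hedged illustration of the bug#20728 scenario (the table layout is made up and assumes an NDB setup):

  CREATE TABLE t1 (
    pk INT NOT NULL PRIMARY KEY,
    u  INT NOT NULL,
    d  INT NOT NULL DEFAULT 0,
    UNIQUE KEY (u)
  ) ENGINE=NDBCLUSTER;

  INSERT INTO t1 VALUES (1, 10, 42);

  -- Column d is not listed, so the replacing row must get d's default (0).
  -- Before the fix d could silently keep the value 42 from the old row.
  REPLACE INTO t1 (pk, u) VALUES (1, 10);
  SELECT * FROM t1;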
Table comment: issue a warning (an error in traditional mode) if the comment is longer than 60 characters.
Column comment: issue a warning (an error in traditional mode) if the comment is longer than 255 characters.
The table 'comment' member is changed from char* to LEX_STRING.
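A hedged example (the comment strings below are placeholders standing in for text that exceeds the limits):

  -- Assuming the strings exceed 60 and 255 characters respectively:
  -- a warning is issued under the default SQL mode, an error in
  -- traditional mode.
  CREATE TABLE t1 (
    a INT COMMENT '<a string longer than 255 characters>'
  ) COMMENT = '<a string longer than 60 characters>';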
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and always make a deep copy
(strmake, strdup)
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- the TABLE_LIST member table->db should always be initialized in the parser or
by the creator of the object.
For prepared statements it means that if the current database is changed
after a statement is prepared, the database that was current at prepare
remains active. This also means that you can not prepare a statement that
implicitly refers to the current database if the latter is not set.
This is not documented, and therefore needs documentation. This is NOT a
change in behavior for most SQL statements; the only exceptions are:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1 --
until this patch, t1 or t2 could be resolved against the database that was
current at the first execution of the prepared statement.
CURRENT_DATABASE() still works as before and is evaluated at every execution
of a prepared statement.
Note that in stored routines this is not an issue, as the default
database is the database of the stored routine, and the USE statement
is prohibited in stored routines.
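A hedged example of the changed prepared-statement behaviour (database and table names are made up):

  CREATE DATABASE db1;
  CREATE DATABASE db2;
  USE db1;
  CREATE TABLE t1 (a INT);

  -- The statement refers to the current database implicitly.
  PREPARE s FROM 'OPTIMIZE TABLE t1';

  USE db2;
  -- With this patch, EXECUTE operates on db1.t1 (the database current at
  -- PREPARE time). Before, the name could be resolved at first execution.
  EXECUTE s;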
This patch makes the use of check_db_used obsolete (it was not really needed
in the old code either), along with all other places that check table->db and
assign it from THD::db if it is NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place used thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
(via strcmp/my_strcasecmp, whichever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
See also comments to individual files.
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
We need to wait for the release of a global read lock if one
is set before every operation that requires a write lock.
But we must not wait if we have locked tables by LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
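A hedged illustration of the waiting rule (t1 is a made-up table; the two sessions are indicated by comments):

  -- Session 1: set the global read lock.
  FLUSH TABLES WITH READ LOCK;

  -- Session 2: an operation that needs a write lock must wait for the
  -- release of the global read lock (unless the session holds its tables
  -- via LOCK TABLES, in which case it must not wait).
  INSERT INTO t1 VALUES (1);   -- blocks

  -- Session 1: release the lock; session 2 proceeds.
  UNLOCK TABLES;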
ALTER TABLE crashes
Executing a fast alter table (one that does not need to copy data)
on tables created by mysql versions prior to 4.0.25 could result
in a subsequent server crash when accessing these tables.
There was a bug prior to mysql-4.0.25: the number of null fields was
calculated incorrectly. As a result the frm and data files get out of
sync after a fast alter table. There is no way to determine by which
mysql version (in the 4.0 and 4.1 branches) a table was created, thus we
disable fast alter table for all tables created by mysql versions
prior to the 5.0 branch.
See BUG#6236.
Bug#18282 "INFORMATION_SCHEMA.TABLES provides inconsistent info about invalid views"
This bug caused crashes or resulted in wrong data being returned
when one tried to obtain information from I_S tables about views
using stored functions.
It was caused by the fact that the LEX representing the statement
that selected from the I_S tables was used as the active LEX
while the contents of the I_S table were being built. So the state of this LEX
both affected and was affected by the open_tables() calls which happened
during this process. This resulted in wrong behavior and in
violations of some invariants, which caused crashes.
This fix solves the problem by properly saving/resetting
and restoring the part of LEX which affects and is affected by the
process of opening tables and views in the get_all_tables() routine.
To simplify things this part of LEX was separated into a new class,
and LEX was made its descendant.
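A hedged sketch of the scenario that used to crash or return wrong data (names are illustrative):

  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE VIEW v1 AS SELECT f1() AS c;

  -- Building the I_S contents for a view that uses a stored function
  -- triggered open_tables() calls that clashed with the active LEX.
  SELECT TABLE_NAME, TABLE_TYPE
    FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_SCHEMA = DATABASE();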
The frm and data files of tables created by earlier MySQL
versions become out of sync after certain ALTER TABLE statements:
- One that changes column default value;
- One that changes table comment;
- One that changes table password.
As a result one can experience either a server crash or data corruption/loss.
This fix ensures that running ALTER TABLE on tables created by earlier
MySQL versions recreates data files.
"alter table from MyISAM to MERGE lost data without errors and warnings"
A new handlerton flag is added which prevents the user from altering a table's
storage engine to a storage engine that would lose data. Both 'blackhole' and
'merge' are marked with the new flag.
Tests included.
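A hedged example of the previously silent data loss (table name is made up):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1), (2);

  -- Before the fix this succeeded without errors or warnings and the rows
  -- were lost (a MERGE table stores no data of its own). With the new
  -- handlerton flag the conversion is refused; BLACKHOLE is flagged too.
  ALTER TABLE t1 ENGINE=MERGE;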
or implicitly uses stored function gives "Table not locked" error'
A CREATE TABLE ... SELECT statement which explicitly or implicitly
(through a view) used a stored function gave a "Table not locked" error.
The actual bug resides in the current locking scheme of CREATE TABLE SELECT
code, which first opens and locks tables of the SELECT statement itself,
and then, having SELECT tables locked, creates the .FRM, opens the .FRM and
acquires lock on it. This scheme opens a possibility for a deadlock, which
was present and ignored since version 3.23 or earlier. This scheme also
conflicts with the invariant of the prelocking algorithm -- no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception for this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to the prelocked mode.
We can't supply a better fix in 5.0.
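A hedged example of the failing scenario (names are made up):

  CREATE FUNCTION f1() RETURNS INT RETURN 1;

  -- Explicit use of a stored function ...
  CREATE TABLE t1 SELECT f1() AS a;      -- gave "Table not locked"

  -- ... or implicit use through a view.
  CREATE VIEW v1 AS SELECT f1() AS a;
  CREATE TABLE t2 SELECT * FROM v1;      -- gave "Table not locked"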