Fixed compiler warnings.
In set_var.cc, the code was not properly returning an error code
if close_cached_tables() failed.
In sql_table.cc, the code was not properly returning an error code
if lock_table_names() failed.
Both cases are bugs, introduced in 5.1 only by recent changes.
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added a define, IS_LONGDATA(), to simplify code in libmysql.c
I ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes
Bug#4968 "Stored procedure crash if cursor opened on altered table"
Bug#19733 "Repeated alter, or repeated create/drop, fails"
Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
stored procedure."
Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
Test cases for bugs 4968, 19733, 6895 will be added in 5.0.
Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
statements in stored routines or as prepared statements caused
incorrect results (and crashes in versions prior to 5.0.25).
In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE
SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.
The problem behind bugs 4968, 19733, 19182 and 6895 was that the functions
mysql_prepare_table, mysql_create_table and mysql_alter_table were not
re-execution friendly: during their operation they used to modify the contents
of LEX (members create_info, alter_info, key_list, create_list),
thus making the LEX unusable for the next execution.
In particular, these functions removed processed columns and keys from
create_list, key_list and drop_list. Search the code in sql_table.cc
for drop_it.remove() and similar patterns to find evidence.
The fix is to supply to these functions a usable copy of each of the
above structures at every re-execution of an SQL statement.
To simplify memory management, LEX::key_list and LEX::create_list
were added to LEX::alter_info, a fresh copy of which is created for
every execution.
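As a rough illustration of the approach (simplified stand-in types, not
the actual server definitions), the parsed lists stay pristine in LEX and
every execution works on a throwaway copy:

  // Sketch only: simplified stand-ins for the server types.
  #include <list>
  #include <string>

  struct Key          { std::string name; };
  struct Create_field { std::string name; };

  struct Alter_info {            // owned by LEX, filled once by the parser
    std::list<Key>          key_list;
    std::list<Create_field> create_list;
    std::list<std::string>  drop_list;
  };

  void mysql_alter_table_sketch(const Alter_info *lex_alter_info)
  {
    // Work on a copy: removing processed columns/keys from the copy
    // leaves the parsed representation intact for the next execution.
    Alter_info working_copy(*lex_alter_info);
    // ... mutate working_copy.create_list / key_list / drop_list freely ...
  }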
The crash of bug 22060 stemmed from the fact that the above-
mentioned functions were not only modifying the HA_CREATE_INFO structure
in LEX, but were also changing it to point into the volatile memory
of the execution memory root.
The patch solves this problem by creating and using an on-stack
copy of HA_CREATE_INFO (note that the code in 5.1 already creates and
uses a copy of this structure in mysql_create_table()/alter_table(),
but this approach didn't work well for the CREATE TABLE SELECT statement).
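A minimal sketch of the on-stack copy idea, with a stand-in struct in
place of the real HA_CREATE_INFO:

  // Sketch only: stand-in types, not the real server definitions.
  struct HA_CREATE_INFO { unsigned long long auto_increment_value; /* ... */ };
  struct LEX            { HA_CREATE_INFO create_info; /* ... */ };

  void execute_create_sketch(LEX *lex)
  {
    // On-stack copy: whatever execution writes into it (including pointers
    // into the execution memory root) dies with the stack frame and never
    // leaks back into the reusable LEX.
    HA_CREATE_INFO create_info(lex->create_info);
    // ... pass &create_info, never &lex->create_info, to the lower layers ...
  }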
with other alterations causes lost tables
Using the RENAME clause combined with other clauses of ALTER TABLE led to
data loss (the data was there but not accessible). This could happen if the
changes did not change the table much. Adding and dropping of fields and
indexes was safe. Renaming a column with MODIFY or CHANGE was an unsafe
operation if the actual column didn't change (e.g. changing from int to int,
which is a no-op).
Depending on the storage engine (SE) the behavior is different:
1) MyISAM/MEMORY - the ALTER TABLE statement completes
without any error but the next SELECT against the new table fails.
2) InnoDB (and every other transactional engine) - the ALTER TABLE statement
fails. The following files are left in the db dir -
`new_table_name.frm` and a temporary table's frm. If the SE is file
based, then the data and index files will be present but with the old
names. What happens is that for InnoDB the table is not renamed in the
internal data dictionary (DDIC).
Fixed by adding an additional call to mysql_rename_table(), which must
not include the FRM file rename, because that has already been done
during the file name juggling.
- Removed unused variables
- Changed some ulong parameters/variables to ulonglong (possible serious bug)
- Added casts to get rid of safe assignment from longlong to long (and similar)
- Added casts to function parameters
- Fixed signed/unsigned compares
- Added some constructors to structures
- Removed some non-portable constructs
Better fix for Bug #21428 "skipped 9 bytes from file: socket (3)" on "mysqladmin shutdown"
(Added a new parameter to net_clear() to define when we want the communication buffer to be emptied)
ALTER TABLE DISABLE KEYS doesn't work when modifying the table
ENABLE|DISABLE KEYS combined with another ALTER TABLE option other
than RENAME TO did nothing. Also, if the table had disabled keys
and was ALTER-ed, the resulting table had its keys enabled.
Fixed by checking whether the table had disabled keys and enabling them
in the copied table.
Added missing DBUG_RETURN statements (in mysqldump.c)
Added missing enums
Fixed a lot of wrong DBUG_PRINT() statements, some of which could cause crashes
Removed usage of %lld and %p in printf strings as these are not portable or produce different results on different systems.
(this is the 5.0 patch, because 4.1 differs)
There was an improper order of doing chained operations.
To the documentor: ENABLE|DISABLE KEYS combined with RENAME TO, and no other
ALTER TABLE clause, leads to server crash independent of the presence of
indices and data in the table.
This is a performance issue for queries with subqueries whose
evaluation requires filesort.
Allocation of memory for the sort buffer at each evaluation of a
subquery may take a significant amount of time if the buffer is rather big.
With the fix we allocate the buffer at the first evaluation of the
subquery and reuse it at each subsequent evaluation.
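The idea in a small sketch (simplified, not the actual filesort code):
allocate once, then hand back the same buffer on later evaluations:

  // Sketch only: a simplified holder for the reusable sort buffer.
  #include <cstdlib>

  struct Sort_buffer_holder {
    unsigned char *buffer = nullptr;   // survives between subquery evaluations
    size_t         size   = 0;
  };

  unsigned char *get_sort_buffer(Sort_buffer_holder *h, size_t wanted)
  {
    // Allocate on the first evaluation (or if a bigger buffer is needed);
    // every subsequent evaluation reuses the existing allocation.
    if (h->buffer == nullptr || h->size < wanted) {
      free(h->buffer);
      h->buffer = static_cast<unsigned char *>(malloc(wanted));
      h->size   = wanted;
    }
    return h->buffer;
  }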
list using a function
When executing dependent subqueries, they are re-initialized and re-executed for
each row of the outer context.
The cause of the bug is that during subquery re-initialization/re-execution,
the optimizer reallocates JOIN::join_tab, JOIN::table in make_simple_join()
and the local variable 'sortorder' in create_sort_index(), which is
allocated by make_unireg_sortorder().
Care must be taken not to allocate anything into the thread's memory pool
while re-initializing query plan structures between subquery re-executions.
All such items must be cached and reused because the thread's memory pool
is freed only at the end of the whole query.
Note that they must be cached and reused even for queries that are not
otherwise cacheable, because otherwise the thread's memory pool would grow
every time such a query is re-executed.
We provide additional members to the JOIN structure to store references
to the items that need to be cached.
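A sketch of the caching pattern (stand-in types; the real members live in
the JOIN class):

  // Sketch only: stand-ins for the execution structures named above.
  struct JOIN_TAB   { /* per-table execution state */ };
  struct SORT_ORDER { /* one sort criterion */ };

  struct JOIN_sketch {
    // Cached across subquery re-executions; allocating these anew on every
    // re-execution would grow the thread's memory pool until end of query.
    JOIN_TAB   *cached_join_tab  = nullptr;
    SORT_ORDER *cached_sortorder = nullptr;

    JOIN_TAB *get_join_tab(unsigned tables)
    {
      if (cached_join_tab == nullptr)
        cached_join_tab = new JOIN_TAB[tables];   // first execution only
      return cached_join_tab;                     // reused afterwards
    }
  };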
The mysql_alter_table() function was able to rename only a table, not a view.
The view/table renaming code is moved from the function rename_tables()
to a new function called do_rename().
The mysql_alter_table() function calls it when it needs to rename a view.
Bug #21785 "Server crashes after rename of the log table" and
Bug #21966 "Strange warnings on create like/repair of the log
tables"
According to the patch, from now on, one should use RENAME to
perform a log table rotation (this should also be reflected in
the manual).
Here is a sample:
use mysql;
CREATE TABLE IF NOT EXISTS general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
The rules for RENAME of the log tables are as follows:
IF 1. Log tables are enabled
AND 2. Rename operates on the log table and nothing is being
renamed to the log table.
DO 3. Throw an error message.
ELSE 4. Perform rename.
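The rule reads roughly like this in code (a sketch with hypothetical
helper names, not the actual server functions; the real check also has to
consider all rename pairs of a multi-table RENAME):

  // Sketch only: helper names are illustrative assumptions.
  bool log_tables_enabled();                            // assumption
  bool is_log_table(const char *db, const char *name);  // assumption

  // Returns true if the RENAME must be refused with an error.
  bool rename_breaks_logging(const char *from_db, const char *from_name,
                             const char *to_db,   const char *to_name)
  {
    return log_tables_enabled() &&
           is_log_table(from_db, from_name) &&   // a log table is renamed away
           !is_log_table(to_db, to_name);        // ...and nothing replaces it
  }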
The RENAME query itself will go to the old (backup) table. This is
consistent with the behaviour we have with the binlog ROTATE LOGS
statement.
Other problems, which are solved by the patch are:
1) REPAIR of a log table is now an exclusive operation (as it should be); this
also eliminates lock-related warnings; and
2) CREATE TABLE ... LIKE now uses a usual read lock on the source table rather
than a name lock, which is too restrictive. This way we get rid of another
log-table-related warning, which occurred because of the above fact
(as a side-effect, the name lock resulted in a warning).
(race cond)
It was possible for one thread to interrupt a Data Definition Language
statement and thereby get messages to the binlog out of order. Consider:
Connection 1: Drop Foo x
Connection 2: Create or replace Foo x
Connection 2: Log "Create or replace Foo x"
Connection 1: Log "Drop Foo x"
Local end would have Foo x, but the replicated slaves would not.
The fix for this is to wrap all DDL and logging of a kind in the same mutex.
Since we already use mutexes for the various parts of altering the server,
this only entails moving the logging events down close to the action, inside
the mutex protection.
In practice this means that the handlerton is now created by the server and passed to the engine. Plugin startups can now also control how plugins are initialized (and can optionally pass values). This gives a bit more flexibility to those who want to write plugin interfaces to the database.
Bug #18559 "log tables cannot change engine, and
gets deadlocked when dropping w/ log on":
1) Add more generic error messages
2) Add new handlerton flag for engines, which support
log tables
3) Remove (log-tables related) mutex lock in myisam to
improve performance
The problem was that during DROP TEMPORARY TABLE we tried to acquire
the name lock, though temporary tables belong to one connection, and
no race is possible.
The solution is to not use table name locking while executing
DROP TEMPORARY TABLE.
- If there are two character set definitions in the column declaration,
we used to replace the first one with the second one, as we stored both in the
LEX->charset slot. Added a separate slot to the LEX structure to store the
underscore charset.
- Convert default values to the column charset of STRING and VARSTRING fields
if necessary as well.
We didn't check for a NULL value of the
lex_create_info->db_type pointer.
The pointer is NULL in the case when the engine name is
unknown to the server. This happens with NDB on Windows.
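A minimal sketch of the added check (stand-in types):

  // Sketch only: stand-ins for the LEX/handlerton types.
  struct handlerton;
  struct HA_CREATE_INFO_sketch { handlerton *db_type; };

  bool check_engine_known(const HA_CREATE_INFO_sketch *create_info)
  {
    // db_type is NULL when the engine name is unknown to this server
    // build (e.g. NDB on Windows), so it must be tested before use.
    if (create_info->db_type == nullptr) {
      // report ER_UNKNOWN_STORAGE_ENGINE instead of dereferencing NULL
      return false;
    }
    return true;
  }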
gets deadlocked when dropping w/ log on"
Log tables rely on concurrent insert machinery to add data.
This means that log tables are always opened and locked by
special (artificial) logger threads. Because of this, the thread
which tries to drop a log table starts to wait for the table
to be unlocked. Which will happen only if the log table is disabled.
A similar situation happens if one tries to alter a log table.
However, in addition to the problem above, ALTER TABLE calls the
check_if_locking_is_allowed() routine for the engine. The
routine does not allow ALTER for the log tables. So, ALTER
doesn't start waiting forever for logs to be disabled, but
returns with an error.
Another problem is that not all engines can be used for
the log tables, because they need concurrent insert.
In this patch we:
(1) Explicitly disallow dropping/altering a log table if it
is currently used by the logger.
(2) Update MyISAM to support log tables.
(3) Allow dropping/altering log tables if logging is
disabled.
At the same time we (4) disallow altering log tables to an
unsupported engine (after this patch only CSV and MyISAM are
allowed).
Recommit with review fixes.
Continued implementation of WL#1324 (table name to filename encoding)
The intermediate (not temporary) files of the new table
during ALTER TABLE were visible to SHOW TABLES. These
intermediate files are copies of the original table with
the changes done by ALTER TABLE. After all the data is
copied over from the original table, these files are renamed
to the original table's file names. So they are not temporary
files. They persist after ALTER TABLE, but just under another
name.
In 5.0 the intermediate files were invisible to SHOW TABLES
because all file names beginning with "#sql" were suppressed.
This failed since 5.1.6 because even temporary table names were
converted when making file names from them. The prefix became
converted to "@0023sql". Converting the prefix during SHOW TABLES
would suppress the listing of user tables that start with "#sql".
The solution of the problem is to continue the implementation of
the table name to file name conversion feature. One requirement
is to suppress the conversion for temporary table names.
This change is straightforward for real temporary tables as there
is a function that creates temporary file names.
But the generated path names are located in TMPDIR and have no
relation to the internal table name. This cannot be used for
ALTER TABLE. Its intermediate files need to be in the same
directory as the old table files. And it is necessary to be
able to deduce the same path from the same table name repeatedly.
Consequently the intermediate table files must be handled like normal
tables. Their internal names shall start with tmp_file_prefix
(#sql) and they shall not be converted like normal table names.
I added a flags parameter to all relevant functions that are
called from ALTER TABLE. It is used to suppress the conversion
for the intermediate table files.
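Sketched, the flag simply short-circuits the name encoding (the flag name
and helper below are illustrative, modeled on the 5.1 code but not
verified here):

  // Sketch only: FN_IS_TMP and the helper below are illustrative.
  #include <cstdio>

  enum { FN_IS_TMP = 1 };  // "name starts with tmp_file_prefix, don't convert"

  void build_table_filename_sketch(char *buff, size_t len,
                                   const char *table_name, unsigned flags)
  {
    if (flags & FN_IS_TMP) {
      // Intermediate/temporary names (#sql...) are used verbatim.
      snprintf(buff, len, "%s", table_name);
      return;
    }
    // Normal table names go through the encoding step, which would turn
    // e.g. '#' into "@0023" (the encoding itself is elided in this sketch).
    snprintf(buff, len, "%s", table_name);
  }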
The outcome is that the suppression of #sql in SHOW TABLES
works again. It does not suppress user tables, as these are
converted to @0023sql at the file level.
This patch does also fix ALTER TABLE ... RENAME, which could not
rename a table with non-ASCII characters in its name.
It does also fix the problem that a user could create a table like
`#sql-xxxx-yyyy`, where xxxx is mysqld's pid and yyyy is the thread
ID of some other thread, which prevented this thread from running
ALTER TABLE.
Some of the above problems are mentioned in Bug#1405, which can
be closed with this patch.
This patch does also contain some minor fixes for other forgotten
conversions. Still known problems are reported as bugs 21370,
21373, and 21387.
Fix for BUG#16676: Database CHARSET not used for stored procedures
The problem in BUG#16211 is that CHARSET-clause of the return type for
stored functions is just ignored.
The problem in BUG#16676 is that if character set is not explicitly
specified for sp-variable, the server character set is used instead
of the database one.
The fix has two parts:
- always store CHARSET-clause of the return type along with the
type definition in mysql.proc.returns column. "Always" means that
CHARSET-clause is appended even if it has not been explicitly
specified in CREATE FUNCTION statement (this affects BUG#16211 only).
Storing CHARSET-clause if it is not specified is essential to avoid
changing character set if the database character set is altered in
the future.
NOTE: this change is not backward compatible with the previous releases.
- use database default character set if CHARSET-clause is not explicitly
specified (this affects both BUG#16211 and BUG#16676).
NOTE: this also breaks backward compatibility.
This is a cleanup patch for our current auto_increment handling:
new names for auto_increment variables in THD, new methods to manipulate them
(see sql_class.h), and some moves into handler::, causing less backup/restore
work when executing substatements.
This hopefully makes the logic clearer, and less work is needed in
mysql_insert().
By cleaning up and using different variables for different purposes (instead
of one for 3 things...), we fix those bugs, which someone may want to fix
in 5.0 too:
BUG#20339 "stored procedure using LAST_INSERT_ID() does not replicate
statement-based"
BUG#20341 "stored function inserting into one auto_increment puts bad
data in slave"
BUG#19243 "wrong LAST_INSERT_ID() after ON DUPLICATE KEY UPDATE"
(now if a row is updated, LAST_INSERT_ID() will return its id)
and re-fixes:
BUG#6880 "LAST_INSERT_ID() value changes during multi-row INSERT"
(already fixed differently by Ramil in 4.1)
Test of documented behaviour of mysql_insert_id() (there was no test).
The behaviour changes introduced are:
- LAST_INSERT_ID() now returns "the first autogenerated auto_increment value
successfully inserted", instead of "the first autogenerated auto_increment
value if any row was successfully inserted", see auto_increment.test.
Same for mysql_insert_id(), see mysql_client_test.c.
- LAST_INSERT_ID() returns the id of the updated row if ON DUPLICATE KEY
UPDATE, see auto_increment.test. Same for mysql_insert_id(), see
mysql_client_test.c.
- LAST_INSERT_ID() does not change if no autogenerated value was successfully
inserted (it used to then be 0), see auto_increment.test.
- if in INSERT SELECT no autogenerated value was successfully inserted,
mysql_insert_id() now returns the id of the last inserted row (it already
did this for INSERT VALUES), see mysql_client_test.c.
- if INSERT SELECT uses LAST_INSERT_ID(X), mysql_insert_id() now returns X
(it already did this for INSERT VALUES), see mysql_client_test.c.
- NDB now behaves like other engines wrt SET INSERT_ID: with INSERT IGNORE,
the id passed in SET INSERT_ID is re-used until a row succeeds; SET INSERT_ID
now influences not only the first row.
Additionally, when unlocking a table we check that the thread is not keeping
a next_insert_id (as the table is unlocked that id is potentially out-of-date);
forgetting about this next_insert_id is done in a new
handler::ha_release_auto_increment().
Finally we prepare for engines capable of reserving finite-length intervals
of auto_increment values: we store such intervals in THD. The next step
(to be done by the replication team in 5.1) is to read those intervals from
THD and actually store them in the statement-based binary log. NDB
will be a good engine to test that.
NDB table".
The SQL-layer was not marking fields which were used in triggers as such. As
a result these fields were not always properly retrieved/stored by the handler
layer. So one might get wrong values or lose changes in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
Also this patch contains the following cleanup of ha_ndbcluster code:
We no longer rely on reading the LEX::sql_command value in the handler in order
to determine if we can enable an optimization which allows us to handle REPLACE
statements in a more efficient way by doing replaces directly in the write_row()
method without reporting an error to the SQL-layer.
Instead we rely on the SQL-layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases where it should not
be used (e.g. if we have ON DELETE triggers on the table) and use it in some
additional cases where it is applicable (e.g. for LOAD DATA REPLACE).
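Sketched from the handler's side (simplified class, enum values
illustrative):

  // Sketch only: a simplified handler honouring the extra() contract above.
  enum ha_extra_sketch {
    HA_EXTRA_WRITE_CAN_REPLACE_SK,     // SQL-layer: direct replace is safe
    HA_EXTRA_WRITE_CANNOT_REPLACE_SK
  };

  class ha_ndbcluster_sketch {
    bool m_use_write = false;          // replaces LEX::sql_command sniffing
  public:
    int extra(ha_extra_sketch op)
    {
      if (op == HA_EXTRA_WRITE_CAN_REPLACE_SK)    m_use_write = true;
      if (op == HA_EXTRA_WRITE_CANNOT_REPLACE_SK) m_use_write = false;
      return 0;
    }
    // write_row() then consults m_use_write instead of the SQL command code.
  };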
Finally this patch includes fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem caused by improper field mark-up.
During row replacement, fields which weren't explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they
retained values from the old row version. The fix is to mark all table
fields as set for the REPLACE statement. Note that in 5.1 we already solve
this problem by notifying the handler that it should save values from all
fields only in the case when a real replacement happens.
Table comment: issue a warning (an error in traditional mode) if the length of the comment exceeds 60 characters
Column comment: issue a warning (an error in traditional mode) if the length of the comment exceeds 255 characters
The table 'comment' member is changed from char* to LEX_STRING
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and should always make a deep
copy (strmake, strdup); see the sketch after this list
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- the TABLE_LIST member table->db should always be initialized in the parser or
by the creator of the object.
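A sketch of the deep-copy rule from the first invariant (simplified THD,
strmake-style copy):

  // Sketch only: a simplified THD obeying the deep-copy invariant.
  #include <cstdlib>
  #include <cstring>

  struct THD_sketch {
    char  *db        = nullptr;   // always owned by the THD
    size_t db_length = 0;

    void set_db(const char *new_db, size_t length)
    {
      // Deep copy: callers must never stash the old pointer, and anyone
      // reading THD::db must copy it rather than keep the pointer.
      free(db);
      db = static_cast<char *>(malloc(length + 1));
      memcpy(db, new_db, length);
      db[length] = '\0';
      db_length  = length;
    }
  };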
For prepared statements it means that if the current database is changed
after a statement is prepared, the database that was current at prepare time
remains active. This also means that you cannot prepare a statement that
implicitly refers to the current database if the latter is not set.
This is not documented, and therefore needs documentation. This is NOT a
change in behaviour for almost all SQL statements, except:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1 --
until this patch t1 or t2 could be evaluated at the first execution of
prepared statement.
CURRENT_DATABASE() still works OK and is evaluated at every execution
of prepared statement.
Note that in stored routines this is not an issue, as the default
database is the database of the stored procedure, and the USE statement
is prohibited in stored routines.
This patch makes obsolete the use of check_db_used (it was never needed in the
old code either) and all other places that check table->db and assign it
from THD::db if it's NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place used thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
(via strcmp/my_strcasecmp, whatever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
See also comments to individual files.
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
We need to wait for the release of a global read lock if one
is set before every operation that requires a write lock.
But we must not wait if we have locked tables by LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
Changed the test for whether functions are supported.
3 categories:
1) Fully supported
2) Supported for single character collations
3) Supported for binary collations
Changes that require code changes in other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite.)
- New optional handler function introduced: reset()
This is called after every DML statement to make it easy for a handler to do
statement-specific cleanups.
(The only case where it's not called is when we force the file to be closed)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
should be moved to handler::reset()
- table->read_set contains a bitmap over all columns that are needed
in the query. read_row() and similar functions only need to read these
columns
- table->write_set contains a bitmap over all columns that will be updated
in the query. write_row() and update_row() only need to update these
columns.
The above bitmaps should now be up to date in all contexts
(including ALTER TABLE and filesort()).
The handler is informed of any changes to the bitmaps after
fix_fields() by a call to the virtual function
handler::column_bitmaps_signal(). If the handler does caching of
these bitmaps (instead of using table->read_set, table->write_set),
it should redo the caching in this call. As the signal() may be sent
several times, it's probably best to set a variable in the signal
and redo the caching on read_row() / write_row() if the variable was
set. (A sketch of such caching follows the table_flags() lists below.)
- Removed the read_set and write_set bitmap objects from the handler class
- Removed all column bit handling functions from the handler class.
(One now uses the normal bitmap functions in my_bitmap.c instead
of handler-dedicated bitmap functions)
- field->query_id is removed. One should instead check
table->read_set and table->write_set to see if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and
handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns
that are not used in the query, one should install a temporary
all-columns-used map while doing so. For this, we provide the following
functions:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
field->val();
dbug_tmp_restore_column_map(table->read_set, old_map);
and similar for the write map:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
field->val();
dbug_tmp_restore_column_map(table->write_set, old_map);
If this is not done, you will sooner or later hit a DBUG_ASSERT
in the field store() / val() functions.
(For non-DBUG binaries, dbug_tmp_use_all_columns() and
dbug_tmp_restore_column_map() are inline dummy functions and should
be optimized away by the compiler).
- If one needs to temporarily set the column map for all binaries (and not
just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
methods) one should use the functions tmp_use_all_columns() and
tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records,
data_file_length etc.) are now stored in a 'stats' struct. This makes
it easier to know which status variables are provided by the base
handler. This requires some trivial variable name changes in the extra()
function.
- New virtual function handler::records(). This is called to optimize
COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
(stats.records is not supposed to be an exact value. It only has to
be 'reasonable enough' for the optimizer to be able to choose a good
optimization path).
- Non-virtual handler::init() function added for caching of virtual
constants from the engine.
- Removed has_transactions() virtual method. Now one should instead return
HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument
that is to be used with 'new handler_name()' to allocate the handler
in the right area. The xxxx_create_handler() function is also
responsible for any initialization of the object before returning.
For example, one should change:
static handler *myisam_create_handler(TABLE_SHARE *table)
{
return new ha_myisam(table);
}
->
static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
{
return new (mem_root) ha_myisam(table);
}
- New optional virtual function: use_hidden_primary_key().
This is called in case of an update/delete when
(table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set
but we don't have a primary key. This allows the handler to take precautions
so that it can remember any hidden primary key, to be able to update/delete
any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags()
- HA_HAS_RECORDS Set if ::records() is supported
- HA_NO_TRANSACTIONS Set if engine doesn't support transactions
- HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
Set if we should mark all primary key columns for
read when reading rows as part of a DELETE
statement. If there is no primary key,
all columns are marked for read.
- HA_PARTIAL_COLUMN_READ Set if engine will not read all columns in some
cases (based on table->read_set)
- HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
- HA_DUPP_POS Renamed to HA_DUPLICATE_POS
- HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
Set this if we should mark ALL key columns for
read when reading rows as part of a DELETE
statement. In case of an update we will mark
all keys for read for which a key part changed
value.
- HA_STATS_RECORDS_IS_EXACT
Set this if stats.records is exact.
(This saves us some extra records() calls
when optimizing COUNT(*))
- Removed table_flags()
- HA_NOT_EXACT_COUNT Now one should instead use HA_HAS_RECORDS if
handler::records() gives an exact count() and
HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
- HA_READ_RND_SAME Removed (no one supported this one)
- Removed the no longer needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()
- Renamed handler::dupp_pos to handler::dup_pos
- Removed the unused variable handler::sortkey
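The caching pattern suggested for handler::column_bitmaps_signal() above,
as a sketch (simplified class):

  // Sketch only: a handler that caches the column bitmaps lazily.
  class ha_example_sketch {
    bool m_bitmaps_dirty = true;   // set by the signal, consumed on next read
  public:
    void column_bitmaps_signal()
    {
      // May be called several times between reads; just mark and return.
      m_bitmaps_dirty = true;
    }
    int rnd_next(unsigned char *buf)
    {
      if (m_bitmaps_dirty) {
        // re-read table->read_set / table->write_set into engine format
        m_bitmaps_dirty = false;
      }
      (void) buf;
      return 0;  // ... fetch a row, reading only the needed columns ...
    }
  };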
Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
cache is set up at handler creation time and updated on open.
MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set
in the column usage bitmaps. (This found a LOT of bugs in the current
column marking code.)
- In 5.1 before, all used columns were marked in read_set and only updated
columns were marked in write_set. Now we only mark columns for which we
need a value in read_set.
- Column bitmaps are created in open_binary_frm() and open_table_from_share().
(Before this was in table.cc)
- handler::table_flags() calls are replaced with handler::ha_table_flags()
- For calling field->val() you must have the corresponding bit set in
table->read_set. For calling field->store() you must have the
corresponding bit set in table->write_set. (There are asserts in
all store()/val() functions to catch wrong usage)
- thd->set_query_id is renamed to thd->mark_used_columns and instead
of setting this to an integer value, it now has the values:
MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed
for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use
when one needs a column bitmap with all columns set.
(This is used for table->use_all_columns() and other places)
- The TABLE object has 3 column bitmaps:
- def_read_set Default bitmap for columns to be read
- def_write_set Default bitmap for columns to be written
- tmp_set Can be used as a temporary bitmap when needed.
The table object also has two pointers to bitmaps, read_set and write_set,
that the handler should use to find out which columns are used in which way.
- count() optimization now calls handler::records() instead of using
handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate if we should also
traverse subqueries.
- Added TABLE parameter to cp_buffer_from_ref()
- Don't close tables created with CREATE ... SELECT but keep them in
the table cache. (Faster usage of newly created tables).
New interfaces:
- table->clear_column_bitmaps() to initialize the bitmaps for tables
at start of new statements.
- table->column_bitmaps_set() to set up new column bitmaps and signal
the handler about this.
- table->column_bitmaps_set_no_signal() for the few cases where we need
to set up new column bitmaps but don't signal the handler (as the handler
has already been signaled about these before). Used for the moment
only in opt_range.cc when doing ROR scans.
- table->use_all_columns() to install a bitmap where all columns are marked
as used in both the read and the write set.
- table->default_column_bitmaps() to install the normal read and write
column bitmaps, without signaling the handler about this.
This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(),
table->mark_columns_needed_for_update() and
table->mark_columns_needed_for_insert() to allow us to put additional
columns in the column usage maps if the handler so requires.
(The handler indicates what it needs in handler->table_flags())
- table->prepare_for_position() to allow us to tell the handler that it
needs to read primary key parts to be able to store them in
future table->position() calls.
(This replaces the table->file->ha_retrieve_all_pk function)
- table->mark_auto_increment_column() to tell the handler that we are going to
update columns that are part of an auto_increment key.
- table->mark_columns_used_by_index() to mark all columns that are part of
an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to let
it quickly know that it only needs to read the columns that are part
of the key. (The handler can also use the column map for detecting this,
but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset() to, in addition to other
columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index() to restore to default
column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map() for marking used columns
in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys the set of all keys that are used in the query.
(Simplifies some optimization loops)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key
but where the field in the clustered key is not assumed to be part of all
indexes. (Used in opt_range.cc for faster loops)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
mark all columns as usable. The 'dbug_' versions are primarily intended
for use inside a handler when it wants to just call the Field::store() &
Field::val() functions, but doesn't need the column maps set for any other
usage. (i.e. bitmap_is_set() is never called)
- We can't use compare_records() to skip updates for handlers that return
a partial column set when the read_set doesn't cover all columns in the
write set. The reason for this is that if we have a column marked only for
write, we can't at the MySQL level know whether the value changed or not.
The reason this worked before was that MySQL marked all to-be-written
columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT
object as a thread-specific variable for the handler. Instead we
pass the to-be-used MEM_ROOT to get_new_handler().
(Simpler, faster code)
Bugs fixed:
- Column marking was not done correctly in a lot of cases
(ALTER TABLE, when using triggers, auto_increment fields etc).
(This could potentially result in wrong values being inserted by table
handlers relying on the old column maps or field->set_query_id being correct.)
Especially when it comes to triggers, there may be cases where the
old code caused lost/wrong values for NDB and/or InnoDB tables.
- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
This allowed me to remove some wrong warnings about:
"Some non-transactional changed tables couldn't be rolled back"
- Fixed handling of INSERT ... SELECT and CREATE ... SELECT that wrongly reset
(thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
some warnings about
"Some non-transactional changed tables couldn't be rolled back"
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(),
which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
(This has probably caused us to not properly remove temporary files after
a crash)
- slow_logs was not properly initialized, which could maybe cause
extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and
write all columns while retrying the operation. This is required by
the definition of REPLACE and also ensures that fields that are only
part of the UPDATE are properly handled. This fixed a bug in NDB and
REPLACE where REPLACE wrongly copied some column values from the replaced
row.
- For table handlers that don't support NULL in keys, we used to give an error
when creating a primary key with NULL fields, even after the fields had been
automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not
declared as NOT NULL.
Cleanups:
- Removed the unused condition argument to setup_tables
- Removed the no longer needed item function reset_query_id_processor()
- Field->add_index is removed. This is now instead maintained in
(field->flags & FIELD_IN_ADD_INDEX)
- Field->fieldnr is removed (use field->field_index instead)
- New argument to filesort() to indicate that it should return a set of
row pointers (not used columns). This allowed me to remove some references
to sql_command in filesort and should also enable us to return column
results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
bitmap, which allowed me to use bitmap functions instead of looping over
all fields to create some needed bitmaps. (Faster and smaller code)
- Broke up some too-long lines
- Moved some variable declarations to the start of functions for better code
readability.
- Removed some unused arguments from functions.
(setup_fields(), mysql_prepare_insert_check_table())
- setup_fields() now takes an enum instead of an int for marking columns
usage.
- For internal temporary tables, use handler::write_row(),
handler::delete_row() and handler::update_row() instead of
handler::ha_xxxx() for faster execution.
- Changed some constants to enum's and define's.
- Using separate column read and write sets allows for easier checking
of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in
ha_reset()
- Don't build table->normalized_path, as this is now identical to table->path
(after bar's fixes to convert filenames)
- Fixed some missed DBUG_PRINT(.."%lx") calls to use "0x%lx", to make it
easier to do comparisons with the 'convert-dbug-for-diff' tool.
Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
(I added several test cases for this, but in this case it's better that
someone else also tests this thoroughly.)
Lars has promised to do this.
New prototype for get_auto_increment() (though the new arguments are not yet
used), to be able to reserve a finite interval of auto_increment values from
cooperating engines.
A hint on how many values to reserve is found in handler::estimation_rows_to_insert,
filled by ha_start_bulk_insert(), a new wrapper around start_bulk_insert().
NOTE: this patch changes nothing for any engine. But it makes the API ready for
those engines which will want to do reservation.
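A sketch of the reservation-capable prototype (names modeled on the
description above; treat the exact signature as illustrative):

  // Sketch only: simplified handler base with the interval-reserving API.
  typedef unsigned long long ulonglong;

  struct handler_sketch {
    ulonglong estimation_rows_to_insert = 0; // hint from ha_start_bulk_insert()

    virtual void get_auto_increment(ulonglong offset, ulonglong increment,
                                    ulonglong nb_desired_values,
                                    ulonglong *first_value,
                                    ulonglong *nb_reserved_values)
    {
      // A cooperating engine can hand back a whole interval at once,
      // using estimation_rows_to_insert to size its reservation.
      *first_value        = 1;                  // engine-specific in reality
      *nb_reserved_values = nb_desired_values;  // how many it really reserved
    }
    virtual ~handler_sketch() = default;
  };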
More csets will come to complete WL#3146.
ALTER TABLE crashes
Executing a fast ALTER TABLE (one that doesn't need to copy data)
on tables created by MySQL versions prior to 4.0.25 could result
in a subsequent server crash when accessing these tables.
There was a bug prior to mysql-4.0.25: the number of null fields was
calculated incorrectly. As a result the frm and data files get out of
sync after a fast ALTER TABLE. There is no way to determine by which
MySQL version (in the 4.0 and 4.1 branches) a table was created, thus we
disable fast ALTER TABLE for all tables created by MySQL versions
prior to the 5.0 branch.
See BUG#6236.
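The gist of the gate, sketched (the version field and constant are
illustrative):

  // Sketch only: decide between fast (in-place) and copying ALTER TABLE.
  struct TABLE_SHARE_sketch { unsigned long mysql_version; };

  bool fast_alter_allowed(const TABLE_SHARE_sketch *share)
  {
    // Tables created before the 5.0 branch may carry a wrong null-field
    // count in the .frm; a fast ALTER would leave .frm and data files out
    // of sync, so force the full copying ALTER for them.
    return share->mysql_version >= 50000;
  }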
Bug#18282 "INFORMATION_SCHEMA.TABLES provides inconsistent info about invalid views"
This bug caused crashes or resulted in wrong data being returned
when one tried to obtain information from I_S tables about views
using stored functions.
It was caused by the fact that we were using the LEX representing the
statement which was doing a select from I_S tables as the active LEX
while the contents of the I_S table were built. So the state of this LEX both
affected and was affected by the open_tables() calls which happened
during this process. This resulted in wrong behavior and in
violations of some invariants, which caused crashes.
This fix tries to solve this problem by properly saving/resetting
and restoring part of LEX which affects and is affected by the
process of opening tables and views in get_all_tables() routine.
To simplify things we separated this part of LEX in a new class
and made LEX its descendant.
duplicate fields removed, st_mysql_storage_engine added to support
run-time handlerton initialization (no compiler warnings), handler API
is now tied to MySQL version, handlerton->plugin mapping added
(slot-based), dummy default_hton removed, plugin-type-specific
initialization generalized, built-in plugins are now initialized too,
--default-storage-engine no longer needs a list of storage engines
in handle_options().
mysql-test-run.pl bugfixes
The frm and data files for tables created by earlier MySQL
versions become out of sync after certain ALTER TABLE statements:
- one that changes a column default value;
- one that changes the table comment;
- one that changes the table password.
As a result one can experience either a server crash or data corruption/loss.
This fix ensures that running ALTER TABLE on tables created by earlier
MySQL versions recreates the data files.
"alter table from MyISAM to MERGE lost data without errors and warnings"
Added a new handlerton flag which prevents the user from altering the table's
storage engine to storage engines which would lose data. Both 'blackhole' and
'merge' are marked with the new flag.
Tests included.
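Sketched with a hypothetical flag name (the real flag name is not
reproduced here):

  // Sketch only: HTON_LOSES_DATA is a hypothetical flag name.
  struct handlerton_sketch { unsigned flags; };
  enum { HTON_LOSES_DATA = 1 };   // set for 'blackhole' and 'merge'

  bool alter_to_engine_allowed(const handlerton_sketch *target)
  {
    // Refuse ALTER TABLE ... ENGINE=x when x would silently drop the rows.
    return (target->flags & HTON_LOSES_DATA) == 0;
  }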
or implicitly uses stored function gives "Table not locked" error'
CREATE TABLE ... SELECT ... statements which were explicitly or implicitly
(through a view) using a stored function gave a "Table not locked" error.
The actual bug resides in the current locking scheme of CREATE TABLE SELECT
code, which first opens and locks tables of the SELECT statement itself,
and then, having SELECT tables locked, creates the .FRM, opens the .FRM and
acquires lock on it. This scheme opens a possibility for a deadlock, which
was present and ignored since version 3.23 or earlier. This scheme also
conflicts with the invariant of the prelocking algorithm -- no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception for this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to the prelocked mode.
We can't supply a better fix in 5.0.
Added support for key_block_size to MyISAM.
Simplified the interface to 'new Key' to make it easier to add new key options.
The mysqld option --new is used to define where key options are printed.
(In 5.3 we should move all key options to after the key part definition, to avoid problems with reserved names)
Fixed some compiler warnings and a memory leak in ssl
ALTER TABLE temporarily creates a new table with a .frm
file and optionally other files. For fast ALTER TABLE
only the .frm file is created. If the operation succeeds,
the temporary files are renamed to their final target.
In case of an error, we forgot to remove the temporary file.
Manually tested. The test requires looking at files,
which I think cannot be done portably with the test suite.
The test file is attached to the bug report.
It was impossible to create some table names on Windows
(e.g. LPT1, AUX, COM1, etc).
Fixed to pad dangerous names with three "at" signs
(e.g. LPT1@@@, AUX@@@, COM1@@@, and so on).
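A sketch of the padding (the reserved-name list below is an illustrative
subset):

  // Sketch only: pad Windows device names so they become ordinary files.
  #include <cctype>
  #include <cstdio>

  static bool eq_nocase(const char *a, const char *b)
  {
    for (; *a && *b; a++, b++)
      if (tolower((unsigned char) *a) != tolower((unsigned char) *b))
        return false;
    return *a == *b;
  }

  void pad_reserved_name(char *buff, size_t len, const char *name)
  {
    static const char *reserved[] = { "CON", "PRN", "AUX", "NUL",
                                      "LPT1", "COM1", nullptr };
    for (int i = 0; reserved[i]; i++)
      if (eq_nocase(name, reserved[i])) {
        snprintf(buff, len, "%s@@@", name);   // e.g. "LPT1" -> "LPT1@@@"
        return;
      }
    snprintf(buff, len, "%s", name);
  }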
InnoDB requires a full table rebuild for foreign key changes.
It was not possible in compare_tables() to detect such changes.
On Heikki's proposal I added a new flag, set in the syntax parser
where foreign key definition changes are done. I now test for
this flag in compare_tables().
GROUP_CONCAT uses its own temporary table. When ROLLUP is present
it creates a second copy of Item_func_group_concat. This copy receives the
same list of arguments that the original group_concat does. When the copy is
set up, the result_fields of functions from the argument list are reset to the
temporary table of this copy.
As a result of this, data from the functions flows directly to the ROLLUP copy
and the original group_concat functions show a wrong result.
Since queries with COUNT(DISTINCT ...) use temporary tables to store
the results of the COUNT function, they are also affected by this bug.
The idea of the fix is to copy content of the result_field for the function
under GROUP_CONCAT/COUNT from the first temporary table to the second one,
rather than setting result_field to point to the second temporary table.
To achieve this goal a force_copy_fields flag is added to the
Item_func_group_concat and Item_sum_count_distinct classes. This flag is
initialized to 0 and set to 1 in the make_unique() member function of both
classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as
well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, the create_tmp_field() function will set result_field
as a source field and will not reset that result_field to the newly created
field for Item_func_result_field and its descendants. Due to this, a
copy func will be created to copy data from the old result_field to the newly
created field.
Added missing DBUG_xxx_RETURN statements
Fixed some usage of uninitialized variables (as found by valgrind)
Ensure that we don't remove locked tables used as name locks from the open table cache until unlock_table_names() is called.
This was fixed by having drop_locked_name() return any table used as a name lock so that we can free it in unlock_table_names().
This will allow Tomas to continue with his work to use name locks to synchronize things.
Note: valgrind still produces a lot of warnings about using uninitialized values and shows memory loss errors when running the ndb tests
Loads of review comments fixed
inactivate => deactivate
table log => ddl log
Added comments on the Error Inject Module
Put various #defines into enums
Fixed abort_and_upgrade_lock, removed unnecessary parameter
Fixed mysqlish method intros
Fixed warning statements
5.1.7 was released still with partition states in clear text
Fixed io_size bug
Fixed bug in open that TRUNCATED before reading :)
file_entry => file_entry_buf
Don't open DDL log until first write call to DDL log
handler_type => handler_name
no => num
tables corrupt triggers".
It turned out that we have also relied at certain places on
(new_table != table_name) always being true on Windows and for transactional
tables. Since our fix for the bug breaks this assumption, we have to add a new
flag to pass this information around.
This code needs to be refactored, but I dare not do this in 5.0.
triggers".
Applying ALTER/OPTIMIZE/REPAIR TABLE statements to a transactional table, or
to a table of any type on Windows, caused the disappearance of its triggers.
The bug was introduced in 5.0.19 by my fix for bug #13525 "Rename table does
not keep info of triggers" (see the comment in sql_table.cc for more info).
Moved some code to an else branch to avoid delete, create, delete, create scenarios.
Fixed up the partition info object for some cases where we move from default
partitioning in NDB to default partitioning using partitioning.
Added new syntax ALTER TABLE t1 REMOVE PARTITIONING,
changed semantics of ALTER TABLE t1 ENGINE=X; to not remove partitioning
Fix a number of mix engine bugs in partitioning
Added a THD::work_part_info member where we now store the modified
partition_info structure.
This allows us to solve the problem of different parts of the part_info
getting into different mem_roots.
and a new binlog format called "mixed" (which is statement-based except when only row-based is correct;
in this cset that means when a UDF or UUID is used; more cases could be added in a later 5.1 release):
SET GLOBAL|SESSION BINLOG_FORMAT=row|statement|mixed|default;
the global default is statement unless cluster is enabled (then it's row) as in 5.1-alpha.
It's not possible to use SET on this variable if a session is currently in row-based mode and has open temporary tables (because CREATE
TEMPORARY TABLE was not binlogged, so the temp table is not known on the slave), or if NDB is enabled (because
NDB does not support such a change on-the-fly, though it will later), or if in a stored function (see below).
The added tests check the possibility or impossibility to SET, their effects, and the mixed mode,
including in prepared statements and in stored procedures and functions.
Caveats:
a) The mixed mode will not work for stored functions: in mixed mode, a stored function will
always be binlogged as one call and in a statement-based way (e.g. INSERT VALUES(myfunc()) or SELECT myfunc()).
b) For the same reason, changing the thread's binlog format inside a stored function is
refused with an error message.
c) The same problems apply to triggers; implementing b) for triggers will be done later (will ask
Dmitri).
Additionally, as the binlog format is now changeable by each user for his session, I removed the implication
which was done at startup, where row-based automatically set log-bin-trust-routine-creators to 1
(not possible anymore, as a user can now switch to statement-based and do nasty things again), and automatically
set --innodb-locks-unsafe-for-binlog to 1 (which was anyway theoretically incorrect, as it disabled
phantom protection).
Plus fixes for compiler warnings.
Let us transfer triggers associated with a table when we rename it (but only if
we are not changing the database to which the table belongs; in the latter case
we will emit an error).
column is increasing when table is recreated with PS/SP":
make the use of create_field::char_length more consistent in the code.
Reinitialize create_field::length from create_field::char_length
for every execution of a prepared statement (this actually fixes the
bug).
When a field that is too long is used for a key, only a prefix of the field
is used. The length is reduced to the maximum key length allowed for storage.
But if the field has a multibyte character set, it is possible to break a
multibyte character sequence. This led to a failed assertion in the InnoDB
code and a server crash when a record was inserted.
mysql_prepare_table() now aligns the truncated key length to the boundary of
a multibyte character.
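The alignment itself is a one-liner; a sketch under a fixed-width
worst-case assumption (the server uses charset-aware helpers such as
my_charpos()):

  // Sketch only: round a truncated key prefix down to a character boundary.
  unsigned align_key_length(unsigned max_key_bytes, unsigned mbmaxlen)
  {
    // Never cut inside a multibyte sequence: keep a whole number of
    // characters, assuming the worst case of mbmaxlen bytes per character.
    return (max_key_bytes / mbmaxlen) * mbmaxlen;
  }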