Commit graph

115 commits

Author SHA1 Message Date
Sergey Glukhov
a2aa73d92a 5.1-bugteam->5.5-bugteam merge 2010-12-14 13:46:00 +03:00
Sergey Glukhov
cd36a6a5d5 Fixed the following problems:
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--A recently discovered problem where a nested materialized derived table
  is used before being populated, which leads to incorrect results

We have several modes in which subquery evaluation must be disabled.
The reasons for disabling differ. The evaluation may be useless,
as in the case of 'CREATE VIEW' or 'PREPARE stmt'; we must disable
subquery evaluation if tables are not locked yet, as happens in
Bug#54475; or too-early evaluation of subqueries can lead to a wrong
result, as happened in Bug#19077.
The main problem is that if subquery items are treated as const,
they are evaluated in ::fix_fields() and ::fix_length_and_dec()
of the parental items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:

1. PREPARE stmt;
We use UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select() for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set,
the subquery becomes non-const and evaluation does not happen.

2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
   process FRM files
We use LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields() and ::fix_length_and_dec()
methods where we use Item::val_...() calls for const items.

3. Derived tables with subquery = wrong result (Bug#19077)
The reason for this bug is too-early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
Checking this field in the appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 fixes only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.

Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;

4. Derived tables with subquery where subquery
   is evaluated before table locking(Bug#54475, Bug#52157)

The suggested solution is the following:

-Introduce new field LEX::context_analysis_only with the following
 possible flags:
 #define CONTEXT_ANALYSIS_ONLY_PREPARE 1
 #define CONTEXT_ANALYSIS_ONLY_VIEW    2
 #define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform a
 context analysis operation
-Item_subselect::const_item() returns a
 result depending on LEX::context_analysis_only.
 If context_analysis_only is set, we return
 FALSE, which means that the subquery is non-const.
 As all subquery types are wrapped by Item_subselect,
 this allows us to make a subquery non-const whenever
 it is necessary.
2010-12-14 12:33:03 +03:00
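
A minimal sketch of the suggested const_item() behaviour, using simplified
stand-in types rather than the actual server classes (C++):

  #include <cstdio>

  #define CONTEXT_ANALYSIS_ONLY_PREPARE 1
  #define CONTEXT_ANALYSIS_ONLY_VIEW    2
  #define CONTEXT_ANALYSIS_ONLY_DERIVED 4

  struct LEX { unsigned context_analysis_only; };

  struct Item_subselect_sketch {
    LEX *lex;
    bool really_const;
    bool const_item() const {
      // While any context-analysis-only phase is active, report non-const
      // so that parental ::fix_fields()/::fix_length_and_dec() never call
      // Item::val_...() on the subquery prematurely.
      return lex->context_analysis_only == 0 && really_const;
    }
  };

  int main() {
    LEX lex = { CONTEXT_ANALYSIS_ONLY_VIEW };
    Item_subselect_sketch sub = { &lex, true };
    std::printf("const during view prep: %d\n", (int) sub.const_item()); // 0
    lex.context_analysis_only = 0;
    std::printf("const at execution:     %d\n", (int) sub.const_item()); // 1
  }
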
Martin Hansson
3825f36429 Merge of fix for Bug#54543. Test case only (bug is not present in this tree). 2010-09-07 10:00:52 +02:00
Martin Hansson
3d5b9792e6 Bug#54543: update ignore with incorrect subquery leads to assertion failure:
inited==INDEX

When an error occurred while sending the data in a temporary table, no
cleanup was performed. This caused a failed assertion when different access
methods were used for populating the table vs. retrieving the data from
the table, if IGNORE was specified and sql_safe_updates = 0. In this case
execution continues, but the handler expects to continue with the access
method used for row retrieval.

Fixed by doing the cleanup even if errors occur.
2010-09-07 09:58:05 +02:00
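
The shape of that fix, as a sketch with hypothetical helper names (the real
code ends the handler's scan; the point is that cleanup must run on the
error path too):

  #include <cstdio>

  enum { OK = 0, END_OF_ROWS = -1, SEND_ERROR = 1 };
  static int rows_left;
  static int  start_scan()    { rows_left = 3; return OK; }
  static int  read_next_row() { return rows_left-- > 0 ? OK : END_OF_ROWS; }
  static int  send_data_row() { return rows_left == 1 ? SEND_ERROR : OK; }
  static void end_scan()      { std::puts("cleanup ran"); }

  int send_temp_table_rows() {
    int error = start_scan();
    if (error != OK) return error;
    while ((error = read_next_row()) == OK)
      if ((error = send_data_row()) != OK)
        break;                  // an error while sending a row
    end_scan();                 // the fix: cleanup on ALL paths, not only success
    return error == END_OF_ROWS ? OK : error;
  }

  int main() { std::printf("result: %d\n", send_temp_table_rows()); }
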
Dmitry Lenev
a6c00c276e Part of fix for bug#52044 "FLUSH TABLES WITH READ LOCK and
FLUSH TABLES <list> WITH READ LOCK are incompatible" to
be pushed as a separate patch.

Replaced thread state name "Waiting for table", which was
used by threads waiting for a metadata lock or table flush, 
with a set of names which better reflect types of resources
being waited for.

Also replaced "Table lock" thread state name, which was used 
by threads waiting on a thr_lock.c table level lock, with the more
elaborate "Waiting for table level lock", to make it
more consistent with other thread state names.

Updated test cases and their results according to these 
changes.

Fixed the sys_vars.query_cache_wlock_invalidate_func test not to
wait for the timeout of the wait_condition.inc script.
2010-08-06 15:29:37 +04:00
a19c5a6608 Manual merge 2010-05-26 22:34:25 +08:00
cc05440836 Bug #49741 test files contain explicit references to bin/relay-log positions
Some of the test cases refer to binlog positions, and
these position numbers are written into the result files explicitly.
They are difficult to maintain if the log event format changes.

There are a couple of cases where explicit position numbers appear;
we handle them in different ways:
A. 'CHANGE MASTER ...' with MASTER_LOG_POS or/and RELAY_LOG_POS options
   Use --replace_result to mask them.
B. 'SHOW BINLOG EVENTS ...'
   Replaced by show_binlog_events.inc or wait_for_binlog_event.inc.
   show_binlog_events.inc is enhanced to accept the given
   $binlog_file and $binlog_limit.
C. 'SHOW SLAVE STATUS', 'show_slave_status.inc' and 'show_slave_status2.inc'
   For test cases that only care about a few items in the result of
   'SHOW SLAVE STATUS', only the items related to each test case are shown.
   'show_slave_status.inc' is rebuilt; only the items given in $status_items
   will be shown.
   'check_slave_is_running.inc', 'check_slave_no_error.inc'
   and 'check_slave_param.inc' are auxiliary files that help
   to show running status and error information easily.
2010-05-24 21:54:08 +08:00
Alexander Nozdrin
04b8cb1882 Manual merge from mysql-trunk-merge.
Conflicts:
  - client/mysql.cc
  - client/mysqldump.c
  - configure.in
  - mysql-test/r/csv.result
  - mysql-test/r/func_time.result
  - mysql-test/r/show_check.result
  - mysql-test/r/sp-error.result
  - mysql-test/r/sp.result
  - mysql-test/r/sp_trans.result
  - mysql-test/r/type_blob.result
  - mysql-test/r/type_timestamp.result
  - mysql-test/r/warnings.result
  - mysql-test/suite/rpl/r/rpl_sp.result
  - sql/mysql_priv.h
  - sql/mysqld.cc
  - sql/sp.cc
  - sql/sql_base.cc
  - sql/sql_table.cc
  - sql/sql_trigger.cc
  - sql/sql_view.cc
  - sql/table.h
  - sql/share/errmsg.txt
  - mysql-test/suite/sys_vars/r/log_bin_trust_routine_creators_basic.result
2010-02-24 16:52:27 +03:00
Dmitry Lenev
afd15c43a9 Implement new type-of-operation-aware metadata locks.
Add a wait-for graph based deadlock detector to the
MDL subsystem.

Fixes bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock" and
bug #37346 "innodb does not detect deadlock between update and
alter table".

The first bug manifested itself as an unwarranted abort of a
transaction with an ER_LOCK_DEADLOCK error by a concurrent ALTER
statement, when this transaction tried to repeat the use of a
table which it had already used in a similar fashion before
ALTER started.

The second bug showed up as a deadlock between table-level
locks and InnoDB row locks, which was "detected" only after
innodb_lock_wait_timeout timeout.

A transaction would start using the table and modify a few
rows.
Then ALTER TABLE would come in, and start copying rows
into a temporary table. Eventually it would stumble on
the modified records and get blocked on a row lock.
The first transaction would try to do more updates, and get
blocked on thr_lock.c lock.
This situation of circular wait would only get resolved
by a timeout.

Both these bugs stemmed from inadequate solutions to the
problem of deadlocks occurring between different
locking subsystems.

In the first case we tried to avoid deadlocks between metadata
locking and table-level locking subsystems when upgrading a shared
metadata lock to an exclusive one.
Transactions holding the shared lock on the table and waiting for
some table-level lock used to be aborted too aggressively.

We also allowed ALTER TABLE to start in the presence of transactions
that modify the subject table. ALTER TABLE acquires a
TL_WRITE_ALLOW_READ lock at start, and that blocks all writes
against the table (naturally, we don't want any writes to be lost
when switching the old and the new table). The TL_WRITE_ALLOW_READ
lock, in turn, would block the started transactions on the thr_lock.c
lock, should they do more updates. This, again, led to the need
to abort such transactions.

The second bug occurred simply because we didn't have any
mechanism to detect deadlocks between the table-level locks
in thr_lock.c and row-level locks in InnoDB, other than
innodb_lock_wait_timeout.

This patch solves both these problems by moving lock conflicts
which are causing these deadlocks into the metadata locking
subsystem, thus making it possible to avoid or detect such
deadlocks inside MDL.

To do this we introduce new type-of-operation-aware metadata
locks, which allow the MDL subsystem to know not only the fact that a
transaction has used or is going to use some object, but also what
kind of operation it has carried out, or is going to carry out, on the
object.

This, along with the addition of a special kind of upgradable
metadata lock, allows ALTER TABLE to wait until all
transactions which have updated the table go away.
This solves the second issue.
Another special type of upgradable metadata lock is acquired
by LOCK TABLE WRITE. This second lock type makes it possible to solve
the first issue, since aborting table-level locks in the event of
DDL under LOCK TABLES also becomes unnecessary.

Below follows the list of incompatible changes introduced by
this patch:

- From now on, ALTER TABLE and CREATE/DROP TRIGGER SQL (i.e. those
  statements that acquire the TL_WRITE_ALLOW_READ lock)
  wait for all transactions which have *updated* the table to
  complete.

- From now on, LOCK TABLES ... WRITE and REPAIR/OPTIMIZE TABLE
  (i.e. all statements which acquire the TL_WRITE table-level lock) wait
  for all transactions which *updated or read* from the table
  to complete.
  As a consequence, the innodb_table_locks=0 option no longer applies
  to LOCK TABLES ... WRITE.

- DROP DATABASE, DROP TABLE, RENAME TABLE no longer abort
  statements or transactions which use tables being dropped or
  renamed, and instead wait for these transactions to complete.

- Since LOCK TABLES WRITE now takes a special metadata lock, which is
  not compatible with reads or writes against the subject table
  and is transaction-wide, the thr_lock.c deadlock avoidance algorithm
  that used to ensure the absence of deadlocks between LOCK TABLES
  WRITE and other statements is no longer sufficient, even for
  MyISAM. The wait-for graph based deadlock detector of the MDL
  subsystem may sometimes be necessary and is therefore involved. This may
  lead to an ER_LOCK_DEADLOCK error produced for multi-statement
  transactions even if these only use MyISAM:

  session 1:         session 2:
  begin;

  update t1 ...      lock table t2 write, t1 write;
                     -- gets a lock on t2, blocks on t1

  update t2 ...
  (ER_LOCK_DEADLOCK)

- Finally, support of the LOW_PRIORITY option for LOCK TABLES ... WRITE
  was abandoned.
  LOCK TABLE ... LOW_PRIORITY WRITE from now on has the same
  priority as the usual LOCK TABLE ... WRITE.
  SELECT HIGH_PRIORITY no longer trumps LOCK TABLE ... WRITE in
  the wait queue.

- We do not take upgradable metadata locks on implicitly
  locked tables. So if one has, say, a view v1 that uses
  table t1, and issues:
  LOCK TABLE v1 WRITE;
  FLUSH TABLE t1; -- (or just 'FLUSH TABLES'),
  an error is produced.
  In order to be able to perform DDL on a table under LOCK TABLES,
  the table must be locked explicitly in the LOCK TABLES list.
2010-02-01 14:43:06 +03:00
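
The core of a wait-for graph based deadlock detector is cycle detection.
A minimal sketch, with threads reduced to integer ids (not the server's MDL
implementation):

  #include <cstdio>
  #include <map>
  #include <set>
  #include <vector>

  // An edge A -> B means "A waits for a lock owned by B".
  using Graph = std::map<int, std::vector<int>>;

  static bool dfs(const Graph &g, int node,
                  std::set<int> &on_path, std::set<int> &done) {
    if (on_path.count(node)) return true;   // back edge: a cycle == deadlock
    if (done.count(node)) return false;     // already fully explored
    on_path.insert(node);
    auto it = g.find(node);
    if (it != g.end())
      for (int next : it->second)
        if (dfs(g, next, on_path, done)) return true;
    on_path.erase(node);
    done.insert(node);
    return false;
  }

  bool has_deadlock(const Graph &g) {
    std::set<int> on_path, done;
    for (const auto &entry : g)
      if (dfs(g, entry.first, on_path, done)) return true;
    return false;
  }

  int main() {
    // session 1 waits for session 2 (t1), session 2 waits for session 1 (t2)
    Graph g = { {1, {2}}, {2, {1}} };
    std::printf("deadlock: %s\n", has_deadlock(g) ? "yes" : "no");
  }
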
Martin Hansson
2a22dc2e01 Bug#49534: multitable IGNORE update with sql_safe_updates
error causes debug assertion

The IGNORE option of the multiple-table UPDATE command was
not intended to suppress errors caused by the
sql_safe_updates mode. This flag raises an error if the
execution of UPDATE does not use a key for row retrieval,
and should continue to do so regardless of the IGNORE option.

However the implementation of IGNORE does not support
exceptions to the rule; it always converts errors to
warnings and cannot be extended. The Internal_error_handler
interface offers the infrastructure to handle individual
errors, making sure that the error raised by
sql_safe_updates is not silenced.

Fixed by implementing an Internal_error_handler and using it
for UPDATE IGNORE commands.
2010-02-10 15:37:34 +01:00
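
The idea of the fix, sketched with a simplified stand-in interface (the real
Internal_error_handler signature in the server differs and has changed
between versions; error numbers are from the MySQL error list):

  #include <cstdio>

  static const unsigned ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE = 1175;
  static const unsigned ER_DUP_ENTRY = 1062;

  struct Internal_error_handler_sketch {
    virtual ~Internal_error_handler_sketch() {}
    // Return true if the error was consumed (downgraded to a warning).
    virtual bool handle_error(unsigned sql_errno) = 0;
  };

  struct Multi_update_ignore_handler : Internal_error_handler_sketch {
    bool handle_error(unsigned sql_errno) override {
      // IGNORE downgrades errors to warnings, EXCEPT the safe-update
      // error, which must still abort the statement.
      if (sql_errno == ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE)
        return false;               // not consumed: raised as an error
      std::printf("downgraded error %u to a warning\n", sql_errno);
      return true;
    }
  };

  int main() {
    Multi_update_ignore_handler h;
    std::printf("safe-update error consumed: %d\n",
                (int) h.handle_error(ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE));
    h.handle_error(ER_DUP_ENTRY);   // e.g. duplicate key -> warning
  }
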
Konstantin Osipov
c44665aeb1 Backport of:
------------------------------------------------------------
revno: 3035.4.1
committer: Davi Arnaut <Davi.Arnaut@Sun.COM>
branch nick: 39897-6.0
timestamp: Thu 2009-01-15 12:17:57 -0200
message:
Bug#39897: lock_multi fails in pushbuild: timeout waiting for processlist

The problem is that relying on the "Table lock" thread state in
its current position to detect that a thread is waiting on a lock
is race prone. The "Table lock" state change happens before the
thread actually tries to grab a lock on a table.

The solution is to move the "Table lock" state so that it is set
only when a thread is actually going to wait for a lock. The state
change happens after the thread fails to grab the lock (because it
is owned by another thread) and proceeds to wait on a condition.

This is considered part of work related to WL#4284 "Transactional
DDL locking"
Warning: this patch contains an incompatible change.
When waiting on a lock in thr_lock.c, the server used to display the
"Locked" processlist state. After this patch, the state is "Table lock".
The new state was actually intended to be displayed since 2002,
when Monty added it. But up until the removal of the thd->locked boolean
member, this state was ignored by the SHOW PROCESSLIST code.
2009-12-03 23:08:27 +03:00
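
The ordering change, sketched (thd_proc_info() here is a stand-in for the
server's state bookkeeping, and the lock calls are hypothetical):

  #include <cstdio>

  static const char *state = "";
  static const char *thd_proc_info(const char *s) {
    const char *old = state; state = s; return old;
  }
  static bool try_acquire(bool lock_is_free) { return lock_is_free; }
  static void wait_for_release() { std::puts("waiting..."); }

  void lock_table(bool lock_is_free) {
    // After the fix, the "Table lock" state is set only once the thread
    // actually has to wait, so the state reliably means "waiting on a lock".
    if (!try_acquire(lock_is_free)) {
      const char *old = thd_proc_info("Table lock");
      wait_for_release();
      thd_proc_info(old);
    }
  }

  int main() { lock_table(false); }
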
Matthias Leich
f1a55f8fcf Merge 5.0 -> 5.1 2009-02-09 22:00:15 +01:00
Matthias Leich
a63c2e5c30 2. Slice of fix for Bug#42003 tests missing the disconnect of connections <> default
- If missing: add "disconnect <session>"
   - If the physical disconnect of non-"default" sessions is not finished
     at test end: add a routine which waits until this has happened
+ additional improvements
  - remove superfluous files created by the test
  - replace error numbers by error names
  - remove trailing spaces, replace tabs by spaces
  - unify writing of bugs within comments
  - correct comments
  - minor changes of formatting
Fixed tests:
  backup
  check
  compress
  grant
  information_schema
  multi_update
  overflow
  packet
  query_cache_not_embedded
  sp-threads
  subselect
  synchronization
  timezone_grant
2009-02-05 21:47:23 +01:00
Matthias Leich
1e3f6091b8 Merge 5.0 -> 5.1 of fix for Bug#26890 2008-11-25 13:17:50 +01:00
Matthias Leich
177c0c2a3c Fix for Bug#26890 main.multi_update times out
The test itself is not faulty. The testcase timeout
problem happens when this (IMHO mid-size) resource-consuming
test (space in vardir, virtual memory, amount of disk I/O)
meets a weak testing box (excessive disk I/O caused
by parallel applications, or paging).

The modifications:
- Move the most time and disk I/O consuming subtest
  for Bug 1820 into its own script (multi_update2)
  This will reduce the likelihood that we exceed the
  testcase timeout.
- Replace error numbers with error names
- Minor improvements of the formatting

2008-11-19 19:17:26 +01:00
aelkin/elkin@koti.dsl.inet.fi
8f7550ecad Manual merge for Bug#29136, Bug#29309. 2007-10-13 23:12:50 +03:00
aelkin/elkin@koti.dsl.inet.fi
a7b1a823a5 Merge koti.dsl.inet.fi:/home/elkin/MySQL/TEAM/FIXES/5.0/bug29136-mdelete
into  koti.dsl.inet.fi:/home/elkin/MySQL/merge-5.1
2007-10-13 16:51:16 +03:00
aelkin/elkin@koti.dsl.inet.fi
ed0ab76e28 Bug #29136 erred multi-delete on trans table does not rollback the statement
Similar to Bug#27716, but the synopsis stressed that there is another
side of the artifact, affecting behaviour in transactions.

Fixed by deploying multi_delete::send_error() - otherwise never called - and refining its logic
to perform the binlogging job if needed.

The changeset includes the following side effects:
- added tests to check Bug#23333's scenarios on the mixture of tables for multi_update;
- fixes Bug#30763 with a two-liner patch and a test coinciding with the one added for Bug#23333.
2007-10-13 15:49:42 +03:00
mats@kindahl-laptop.dnsalias.net
1447f6318f Fixes to tests and test results. 2007-06-19 11:09:22 +02:00
aelkin/elkin@dsl-hkibras1-ff5dc300-70.dhcp.inet.fi
f544286789 Bug #27716 multi-update did partially and has not binlogged
Manual merge with 5.0: the automatic merge went incorrectly; fixing tests in RBR mode.
2007-06-04 15:02:40 +03:00
aelkin/elkin@dsl-hkibras1-ff5dc300-70.dhcp.inet.fi
798b38a447 Merge dsl-hkibras1-ff5dc300-70.dhcp.inet.fi:/home/elkin/MySQL/TEAM/BARE/5.0
into  dsl-hkibras1-ff5dc300-70.dhcp.inet.fi:/tmp/merge_5.0
2007-06-02 17:56:39 +03:00
aelkin/elkin@dsl-hkibras1-ff5dc300-70.dhcp.inet.fi
1c0ac63808 Bug #27716 multi-update did partially and has not binlogged
The implementation of mysql_multi_update did not call the multi_update::send_error method in some
cases (see the test reported on the bug page and the test cases in the changeset).

Fixed by deploying the method; ::send_error() is refined to contain binlogging code which works
whenever a non-transactional table has been modified.
The thd->no_trans_update.stmt flag is set to TRUE to ease testing, though this is the beginning of
the related bug#27417 fix (it addresses a part of those issues).
Eliminated two minor issues (small bugs) in multi_update methods.
This patch for multi-update also addresses a part of the issues reported in bug#13270 and bug#23333.
2007-06-01 11:14:04 +03:00
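
A sketch of the error-path binlogging idea common to these multi-update /
multi-delete fixes (stand-in types, not the actual server code):

  #include <cstdio>

  struct THD_sketch { bool stmt_modified_non_trans_table; };

  static void write_statement_to_binlog() {
    std::puts("binlogged partially executed statement");
  }

  struct multi_update_sketch {
    THD_sketch *thd;
    void send_error(unsigned /* errcode */) {
      // Even though the statement failed, rows may already have been
      // written to a non-transactional table; those changes cannot be
      // rolled back, so the statement must still reach the binlog.
      if (thd->stmt_modified_non_trans_table)
        write_statement_to_binlog();
    }
  };

  int main() {
    THD_sketch thd = { true };
    multi_update_sketch mu = { &thd };
    mu.send_error(1);
  }
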
brian@zim.(none)
b93a5c5f04 Updated row result, and fix for Windows build. 2006-08-14 03:26:53 -07:00
evgen@moonbone.local
79c91f6214 Manually merged 2006-06-18 14:56:35 +04:00
monty@mysql.com
74cc73d461 This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes
Changes that require code changes in other storage engines
(note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite):

- New optional handler function introduced: reset()
  This is called after every DML statement to make it easy for a handler to
  do statement-specific cleanups.
  (The only case it's not called is if we force the file to be closed)

- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
  should be moved to handler::reset()

- table->read_set contains a bitmap over all columns that are needed
  in the query.  read_row() and similar functions only need to read these
  columns.
- table->write_set contains a bitmap over all columns that will be updated
  in the query. write_row() and update_row() only need to update these
  columns.
  The above bitmaps should now be up to date in all contexts
  (including ALTER TABLE, filesort()).

  The handler is informed of any changes to the bitmap after
  fix_fields() by calling the virtual function
  handler::column_bitmaps_signal(). If the handler does caching of
  these bitmaps (instead of using table->read_set, table->write_set),
  it should redo the caching in this code. As the signal() may be sent
  several times, it's probably best to set a variable in the signal
  and redo the caching on read_row() / write_row() if the variable was
  set.

- Removed the read_set and write_set bitmap objects from the handler class

- Removed all column bit handling functions from the handler class.
  (Now one uses the normal bitmap functions in my_bitmap.c instead
  of handler-dedicated bitmap functions)

- field->query_id is removed. One should instead check
  table->read_set and table->write_set if a field is used in the query.

- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and
  handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
  instead use table->read_set to check for which columns to retrieve.

- If a handler needs to call Field->val() or Field->store() on columns
  that are not used in the query, one should install a temporary
  all-columns-used map while doing so. For this, we provide the following
  functions:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
  field->val();
  dbug_tmp_restore_column_map(table->read_set, old_map);

  and similar for the write map:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
  field->store(...);
  dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT
  in the field store() / val() functions.
  (For non-DBUG binaries, dbug_tmp_use_all_columns() and
  dbug_tmp_restore_column_map() are inline dummy functions and should
  be optimized away by the compiler).

- If one needs to temporarily set the column map for all binaries (and not
  just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
  methods) one should use the functions tmp_use_all_columns() and
  tmp_restore_column_map() instead of the above dbug_ variants.

- All 'status' fields in the handler base class (like records,
  data_file_length etc) are now stored in a 'stats' struct. This makes
  it easier to know what status variables are provided by the base
  handler.  This requires some trivial changes of variable names in the
  extra() function.

- New virtual function handler::records().  This is called to optimize
  COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
  (stats.records is not supposed to be an exact value. It only has to
  be 'reasonable enough' for the optimizer to be able to choose a good
  optimization path).

- Non virtual handler::init() function added for caching of virtual
  constants from the engine.

- Removed has_transactions() virtual method. Now one should instead return
  HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
  transactions.

- The 'xxxx_create_handler()' function now has a MEM_ROOT argument
  that is to be used with 'new handler_name()' to allocate the handler
  in the right area.  The xxxx_create_handler() function is also
  responsible for any initialization of the object before returning.

  For example, one should change:

  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }

  ->

  static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }

- New optional virtual function: use_hidden_primary_key().
  This is called in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
  but we don't have a primary key. This allows the handler to take precautions
  so that it remembers any hidden primary key, to be able to update/delete any
  found row. The default handler marks all columns to be read.

- handler::table_flags() now returns a ulonglong (to allow for more flags).

- New/changed table_flags()
  - HA_HAS_RECORDS	    Set if ::records() is supported
  - HA_NO_TRANSACTIONS	    Set if engine doesn't support transactions
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                            Set if we should mark all primary key columns for
			    read when reading rows as part of a DELETE
			    statement. If there is no primary key,
			    all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ  Set if engine will not read all columns in some
			    cases (based on table->read_set)
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                            Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS             Renamed to HA_DUPLICATE_POS
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                            Set this if we should mark ALL key columns for
                            read when reading rows as part of a DELETE
                            statement. In case of an update we will mark
                            all keys for read for which a key part changed
                            value.
  - HA_STATS_RECORDS_IS_EXACT
			     Set this if stats.records is exact.
			     (This saves us some extra records() calls
			     when optimizing COUNT(*))
			    

- Removed table_flags()
  - HA_NOT_EXACT_COUNT     Now one should instead use HA_HAS_RECORDS if
			   handler::records() gives an exact count() and
			   HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME	   Removed (no one supported this one)

- Removed the unneeded functions ha_retrieve_all_cols() and ha_retrieve_all_pk()

- Renamed handler::dupp_pos to handler::dup_pos

- Removed the unused variable handler::sortkey


Upper level handler changes:

- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is updated at engine creation time and updated on open.


MySQL level changes (not obvious from the above):

- DBUG_ASSERT() added to check that column usage matches what is set
  in the column usage bit maps. (This found a LOT of bugs in the current
  column marking code).

- In 5.1 before, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns for which we
  need a value in read_set.

- Column bitmaps are created in open_binary_frm() and open_table_from_share().
  (Before, this was done in table.cc)

- handler::table_flags() calls are replaced with handler::ha_table_flags()

- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in
  all store()/val() functions to catch wrong usage)

- thd->set_query_id is renamed to thd->mark_used_columns and instead
  of setting this to an integer value, it now has the values:
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
  Also changed all variables named 'set_query_id' to mark_used_columns.

- In filesort() we now inform the handler of exactly which columns are needed
  for doing the sort and choosing the rows.

- The TABLE_SHARE object has an 'all_set' column bitmap one can use
  when one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places)

- The TABLE object has 3 column bitmaps:
  - def_read_set     Default bitmap for columns to be read
  - def_write_set    Default bitmap for columns to be written
  - tmp_set          Can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set,
  that the handler should use to find out which columns are used in which way.

- count() optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).

- Added an extra argument to Item::walk() to indicate if we should also
  traverse subqueries.

- Added TABLE parameter to cp_buffer_from_ref()

- Don't close tables created with CREATE ... SELECT but keep them in
  the table cache. (Faster usage of newly created tables).


New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables
  at start of new statements.

- table->column_bitmaps_set() to set up new column bitmaps and signal
  the handler about this.

- table->column_bitmaps_set_no_signal() for some few cases where we need
  to setup new column bitmaps but don't signal the handler (as the handler
  has already been signaled about these before). Used for the moment
  only in opt_range.cc when doing ROR scans.

- table->use_all_columns() to install a bitmap where all columns are marked
  as used in the read and the write set.

- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but not signaling the handler about this.
  This is mainly used when creating TABLE instances.

- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags())

- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in
  future table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function)

- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of any auto_increment key.

- table->mark_columns_used_by_index() to mark all columns that are part of
  an index.  It will also send extra(HA_EXTRA_KEYREAD) to the handler to let
  it quickly know that it only needs to read columns that are part
  of the key.  (The handler can also use the column map for detecting this,
  but a simpler/faster handler can just monitor the extra() call).

- table->mark_columns_used_by_index_no_reset() to, in addition to other
  columns, also mark all columns that are used by the given key.

- table->restore_column_maps_after_mark_index() to restore to default
  column maps after a call to table->mark_columns_used_by_index().

- New item function register_field_in_read_map(), for marking used columns
  in table->read_map. Used by filesort() to mark all used columns.

- Maintain in TABLE->merge_keys a set of all keys that are used in the query.
  (Simplifies some optimization loops)

- Maintain Field->part_of_key_not_clustered which is like Field->part_of_key
  but the field in the clustered key is not assumed to be part of all indexes.
  (used in opt_range.cc for faster loops)

-  dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
   tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
   mark all columns as usable.  The 'dbug_' versions are primarily intended
   for use inside a handler when it wants to just call the Field::store() &
   Field::val() functions, but doesn't need the column maps set for any other
   usage.  (i.e. bitmap_is_set() is never called)

- We can't use compare_records() to skip updates for handlers that return
  a partial column set and the read_set doesn't cover all columns in the
  write set. The reason for this is that if we have a column marked only for
  write, we can't at the MySQL level know if the value changed or not.
  The reason this worked before was that MySQL marked all to be written
  columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
  bug'.

- open_table_from_share() no longer sets up a temporary MEM_ROOT
  object as a thread specific variable for the handler. Instead we
  send the to-be-used MEM_ROOT to get_new_handler().
  (Simpler, faster code)



Bugs fixed:

- Column marking was not done correctly in a lot of cases.
  (ALTER TABLE, when using triggers, auto_increment fields etc)
  (Could potentially result in wrong values inserted into table handlers
  relying on the old column maps or field->set_query_id being correct)
  Especially when it comes to triggers, there may be cases where the
  old code would cause lost/wrong values for NDB and/or InnoDB tables.

- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
  This allowed me to remove some wrong warnings about:
  "Some non-transactional changed tables couldn't be rolled back"

- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
  (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
  some warnings about
  "Some non-transactional changed tables couldn't be rolled back")

- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
  which could cause delete_table to report random failures.

- Fixed core dumps for some tests when running with --debug

- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
  (This has probably caused us to not properly remove temporary files after
  a crash)

- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in the slow log.

- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation. This is required by
  the definition of REPLACE and also ensures that fields that are only
  part of UPDATE are properly handled.  This fixed a bug in NDB and
  REPLACE where REPLACE wrongly copied some column values from the replaced
  row.

- For table handlers that don't support NULL in keys, we would give an error
  when creating a primary key with NULL fields, even after the fields had been
  automatically converted to NOT NULL.

- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.


Cleanups:

- Removed the unused condition argument to setup_tables

- Removed the unneeded item function reset_query_id_processor().

- Field->add_index is removed. Now this is instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX)

- Field->fieldnr is removed (use field->field_index instead)

- New argument to filesort() to indicate that it should return a set of
  row pointers (rather than used columns). This allowed me to remove some references
  to sql_command in filesort and should also enable us to return column
  results in some cases where we couldn't before.

- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE
  bitmap, which allowed me to use bitmap functions instead of looping over
  all fields to create some needed bitmaps. (Faster and smaller code)

- Broke up lines that were too long

- Moved some variable declarations to the start of functions for better
  code readability.

- Removed some unused arguments from functions.
  (setup_fields(), mysql_prepare_insert_check_table())

- setup_fields() now takes an enum instead of an int for marking column
  usage.

- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.

- Changed some constants to enum's and define's.

- Using separate column read and write sets allows for easier checking
  of whether the timestamp field was set by the statement.

- Removed calls to free_io_cache() as this is now done automatically in ha_reset()

- Don't build table->normalized_path as this is now identical to table->path
  (after bar's fixes to convert filenames)

- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.


Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row based logging (as shown by the testcase binlog_row_mix_innodb_myisam.result)
  Mats has promised to look into this.

- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better that
  someone else also tests this thoroughly).
  Lars has promised to do this.
2006-06-04 18:52:22 +03:00
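
How an engine with HA_PARTIAL_COLUMN_READ can use the read_set bitmap to
skip unneeded columns, as a sketch (a plain vector<bool> stands in for the
server's MY_BITMAP / bitmap_is_set()):

  #include <cstdio>
  #include <vector>

  struct Table_sketch {
    std::vector<bool> read_set;            // one bit per column
  };

  void read_row(const Table_sketch &t, const std::vector<int> &stored_row) {
    for (size_t col = 0; col < stored_row.size(); ++col) {
      if (!t.read_set[col]) continue;      // column not needed by this query
      std::printf("col %zu = %d\n", col, stored_row[col]);
    }
  }

  int main() {
    Table_sketch t = { { true, false, true } };  // query reads columns 0 and 2
    read_row(t, { 10, 20, 30 });
  }
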
evgen@moonbone.local
41f968e1b4 Manually merged 2006-05-29 19:07:35 +04:00
evgen@moonbone.local
1f30bf5a33 Fixed bug#19225: unchecked error results in server crash
In a multi-table delete, a table being deleted from can't be used for
selecting in subselects. An appropriate error was raised but wasn't checked,
which led to a crash at the execution phase.

mysql_execute_command() now checks for errors before executing the select
for multi-delete.
2006-05-29 00:32:59 +04:00
monty@mysql.com
e42c980967 Table definition cache, part 2
The table opening process now works the following way:
- Create common TABLE_SHARE object
- Read the .frm file and unpack it into the TABLE_SHARE object
- Create a TABLE object based on the information in the TABLE_SHARE
  object and open a handler to the table object

Other noteworthy changes:
- In TABLE_SHARE the most common strings are now LEX_STRING's
- Better error message when table is not found
- Variable table_cache is now renamed 'table_open_cache'
- New variable 'table_definition_cache' that is the number of table definitions that will be cached
- strxnmov() calls are now fixed to avoid overflows
- strxnmov() will now always add one end \0 to result
- engine objects are now created with a TABLE_SHARE object instead of a TABLE object.
- After creating a field object one must call field->init(table) before using it

- For a busy system this change will give you:
 - Less memory usage per table object
 - Faster opening of tables (if the table has been in use or is in the table definition cache)
 - The ability to cache many table definition objects
 - Faster drop of tables
2005-11-23 22:45:02 +02:00
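
The two-level open flow described above, sketched with hypothetical helper
names (the real functions are get_table_share() / open_table_from_share()):

  #include <cstdio>
  #include <map>
  #include <memory>
  #include <string>

  struct TABLE_SHARE_sketch { std::string name; /* parsed .frm data */ };
  struct TABLE_sketch { const TABLE_SHARE_sketch *share; /* per-use state */ };

  static std::map<std::string, std::shared_ptr<TABLE_SHARE_sketch>> def_cache;

  std::shared_ptr<TABLE_SHARE_sketch> get_share(const std::string &name) {
    std::shared_ptr<TABLE_SHARE_sketch> &entry = def_cache[name];
    if (!entry) {                  // miss: read and unpack the .frm once
      std::printf("reading %s.frm\n", name.c_str());
      entry = std::make_shared<TABLE_SHARE_sketch>(TABLE_SHARE_sketch{name});
    }
    return entry;                  // hit: no .frm read, fast open
  }

  TABLE_sketch open_table(const std::string &name) {
    return TABLE_sketch{ get_share(name).get() };
  }

  int main() {
    open_table("t1");              // reads t1.frm
    open_table("t1");              // served from the definition cache
  }
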
monty@mysql.com
15d48525af Merge mysql.com:/home/my/mysql-4.1
into  mysql.com:/home/my/mysql-5.0
2005-07-28 17:09:54 +03:00
monty@mysql.com
3c12d0ae54 Added end marker for tests to make future merges easier 2005-07-28 03:22:47 +03:00
jimw@mysql.com
c18307e8a2 Cleanup tests and results after merge from 4.1 of embedded
server testing cleanups.
2005-04-04 12:43:58 -07:00
monty@mysql.com
3c6d5e43f9 Merge 2005-01-04 13:23:04 +02:00
serg@sergbook.mysql.com
b23a226151 mysql-test/r/multi_update.result, mysql-test/t/multi_update.test
don't fail w/o bdb (or innodb)
sql/sql_base.cc
    typo fixed.
    "mysql-test-run --ps-protocol select" fixed (item->cached_item was set to the last table if many matches)
2005-01-04 00:22:22 +02:00
monty@mysql.com
309b1a2b6c Merge with 4.1 tree to get fix for INSERT IGNORE ... ON DUPLICATE KEY 2005-01-03 23:04:52 +02:00
monty@mysql.com
d35d4eab91 Merge bk-internal.mysql.com:/home/bk/mysql-4.1
into mysql.com:/home/my/mysql-4.1
2005-01-03 21:14:20 +02:00
serg@sergbook.mysql.com
a04fc26c54 manually merged 2004-12-31 15:26:24 +01:00
serg@sergbook.mysql.com
45ce994e5d post-merge 2004-12-31 11:52:14 +01:00
monty@mysql.com
1bd22faa05 Remove DUP_IGNORE from enum_duplicates and instead use a separate ignore flag
This allows us to use INSERT IGNORE ... ON DUPLICATE ...
2004-12-31 12:04:35 +02:00
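
Why a separate flag helps, as a sketch: IGNORE and the duplicate-handling
mode are orthogonal, which a single enum value could not express (simplified
stand-in types):

  #include <cstdio>

  enum enum_duplicates_sketch { DUP_ERROR, DUP_REPLACE, DUP_UPDATE };

  struct Insert_options {
    enum_duplicates_sketch handle_duplicates;  // what to do on duplicate key
    bool ignore;                               // independently: IGNORE
  };

  int main() {
    // INSERT IGNORE ... ON DUPLICATE KEY UPDATE ... is now representable:
    Insert_options opt = { DUP_UPDATE, true };
    std::printf("dup=%d ignore=%d\n", (int) opt.handle_duplicates,
                (int) opt.ignore);
  }
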
monty@mysql.com
d71c030587 After merge fixes 2004-12-31 00:44:00 +02:00
monty@mishka.local
4f4bbfc279 Merge with 4.1 2004-12-22 13:54:39 +02:00
sergefp@mysql.com
5b0e3b132c Post-merge fixes 2004-12-11 16:36:12 +03:00
sergefp@mysql.com
aee9340493 Fix for BUG#5837 merged from 4.0 2004-12-11 15:55:50 +03:00
sergefp@mysql.com
6b55909673 Fix for BUG#5837 - attempt 3.
Call mark_as_null_row in join_read_const and join_read_system.
2004-12-11 15:51:52 +03:00
bell@sanja.is.com.ua
9d0bb7426a Post-review fixes 2004-11-08 01:54:23 +02:00
bell@sanja.is.com.ua
064b02203f new lock for multiupdate:
- open and create derived tables
- detect which tables should be locked for write
- lock and fill derived tables
some uninitialized variables fixed
2004-11-05 17:29:47 +02:00
monty@mysql.com
47bbf768d4 Fixes after merge with 4.1
FOUND is not a reserved keyword anymore
Added Item_field::set_no_const_sub() to be able to mark fields that can't be substituted
Added a 'simple_select' method to be able to quickly determine if a select_result is a normal SELECT
Note that the 5.0 tree is not yet up to date: Sanja will have to fix multi-update-locks for this merge to be complete
2004-11-03 12:39:38 +02:00
monty@mysql.com
afbe601302 merge with 4.1 2004-10-29 19:26:52 +03:00
monty@mysql.com
6239edc1d1 After merge fixes
Some bigger code changes were necessary because of the multi-table-update and the new HANDLER code
2004-10-07 10:50:13 +03:00
monty@mysql.com
62f3cd6a31 Merge with 4.0 for 4.1 release
Noteworthy:
- New HANDLER code
- New multi-update-grant-check code
- Table lock code in ha_innodb.cc was not applied
2004-10-06 19:14:33 +03:00
antony@ltantony.rdg.cyberkinetica.homeunix.net
2973a047ea Bug#4118: multi-table UPDATE takes WRITE lock on read table
Ensures that a WRITE lock is not obtained on all tables referenced.
2004-10-03 00:20:47 +01:00