Commit graph

575 commits

Author SHA1 Message Date
jani@ua141d10.elisa.omakaista.fi
335153121b Merge jamppa@bk-internal.mysql.com:/home/bk/mysql-5.0
into  ua141d10.elisa.omakaista.fi:/home/my/bk/mysql-5.0-marvel
2007-04-12 12:50:02 +03:00
jani@ua141d10.elisa.omakaista.fi
b4ba815967 Merge jamppa@bk-internal.mysql.com:/home/bk/mysql-5.1
into  ua141d10.elisa.omakaista.fi:/home/my/bk/mysql-5.1-marvel
2007-04-10 16:28:47 +03:00
mskold/marty@mysql.com/linux.site
3e8cf5958b Merge mysql.com:/windows/Linux_space/MySQL/mysql-5.0
into  mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
2007-04-05 08:39:12 +02:00
Justin.He/justin.he@dev3-240.dev.cn.tlan
10dffc5a19 Merge dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.1/mysql-5.1-new-ndb
into  dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.1/mysql-5.1-new-ndb-bj.merge
2007-04-05 12:26:01 +08:00
Justin.He/justin.he@dev3-240.dev.cn.tlan
fdf57c7ac2 Merge dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.1/mysql-5.1-new-ndb
into  dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.1/mysql-5.1-new-ndb-bj.merge
2007-04-05 10:47:40 +08:00
mskold/marty@linux.site
552d1086ec Merge mysql.com:/windows/Linux_space/MySQL/mysql-5.1
into  mysql.com:/windows/Linux_space/MySQL/mysql-5.1-new-ndb
2007-04-04 17:03:31 +02:00
mskold/marty@mysql.com/linux.site
6c8f5c5859 Merge from 5.0 2007-04-04 16:58:25 +02:00
mskold/marty@linux.site
7e33b92279 Merge mysql.com:/windows/Linux_space/MySQL/mysql-5.0
into  mysql.com:/windows/Linux_space/MySQL/mysql-5.1
2007-04-04 13:21:49 +02:00
mskold/marty@mysql.com/linux.site
625a2629f0 Bug #26242 UPDATE with subquery and triggers failing with cluster tables
In certain cases, AFTER UPDATE/DELETE triggers on NDB tables that referenced
the subject table didn't see the results of the operation that caused the
invocation of those triggers. In other words, an AFTER trigger invoked as a
result of the update (or deletion) of a particular row saw the version of
this row from before the update (or deletion).

The problem occurred because in those cases the NDB handler postponed the
actual update/delete operations so it could perform them later as one batch.

This fix solves the problem by disabling this optimization for a particular
operation if the subject table has an AFTER trigger defined for that operation.
To achieve this we introduce two new flags for the handler::extra() method:
HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH.
These are passed if AFTER DELETE/UPDATE triggers exist during a statement
that can potentially generate calls to delete_row()/update_row(). This includes
multi_delete/multi_update statements as well as insert statements
that do a delete/update as part of an ON DUPLICATE KEY UPDATE clause.
2007-04-04 12:50:39 +02:00
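
For illustration only (not part of the changeset above), a hypothetical SQL sketch of the kind of scenario this fix addresses; all table, trigger and column names are invented:

  CREATE TABLE t1 (id INT PRIMARY KEY, qty INT) ENGINE=NDBCLUSTER;
  CREATE TABLE audit (id INT, total INT);

  -- An AFTER UPDATE trigger that reads the subject table; before this fix,
  -- batched NDB updates could let it see the pre-update row versions.
  CREATE TRIGGER t1_au AFTER UPDATE ON t1 FOR EACH ROW
    INSERT INTO audit SELECT NEW.id, SUM(qty) FROM t1;

  -- A multi-row UPDATE whose row operations NDB would otherwise batch.
  UPDATE t1 SET qty = qty + 1;
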
jani@ua141d10.elisa.omakaista.fi
901896fd1c Manual merge from 5.0 2007-03-29 21:11:17 +03:00
jani@ua141d10.elisa.omakaista.fi
1c7beca65e Merge ua141d10.elisa.omakaista.fi:/home/my/bk/mysql-5.0-marvel
into  ua141d10.elisa.omakaista.fi:/home/my/bk/mysql-5.1-marvel
2007-03-29 17:27:42 +03:00
aelkin/elkin@andrepl.(none)
2afa90b5c5 Bug #27395 OPTION_STATUS_NO_TRANS_UPDATE is not preserved at the end of SF()
The OPTION_STATUS_NO_TRANS_UPDATE bit of thd->options was not restored at the end of an SF()
invocation when the SF() modified a non-transactional table.
As a result of this artifact it was not possible to detect whether there were any side effects when
the top-level query ended.
If the top-level query's table was not modified and the bit was lost, there would be no binlogging.

Fixed by preserving the bit inside the thd->no_trans_update struct. The struct aggregates two bool flags
telling whether the current statement and the current transaction modified any non-transactional table.
The flags (stmt, all) are reset at the end of the statement and the transaction, respectively.
2007-03-23 17:12:58 +02:00
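
As an illustration of the situation described above, a hypothetical stored function with a non-transactional side effect, invoked from an otherwise read-only top-level statement (all names invented):

  CREATE TABLE t_innodb (a INT) ENGINE=InnoDB;
  CREATE TABLE log_myisam (msg VARCHAR(64)) ENGINE=MyISAM;

  DELIMITER //
  CREATE FUNCTION sf_log(x INT) RETURNS INT MODIFIES SQL DATA
  BEGIN
    -- Side effect on a non-transactional table.
    INSERT INTO log_myisam VALUES (CONCAT('seen ', x));
    RETURN x;
  END//
  DELIMITER ;

  -- The SELECT itself modifies nothing, but the function writes to a
  -- non-transactional table; that fact must still be known when the
  -- top-level statement ends so the change gets binlogged.
  SELECT sf_log(a) FROM t_innodb;
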
Justin.He/justin.he@dev3-240.dev.cn.tlan
0bbae24750 Bug#27127, Incorrect behaviour of timestamp column with DEFAULT CURRENT_TIMESTAMP
Correct the bitmap_set_bit call when a field is a TIMESTAMP defined
with DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP;
this slightly reduces the cost when the field does not need
to be written.
2007-03-22 12:16:52 +08:00
gkodinov/kgeorge@magare.gmz
c03a483c51 Merge gkodinov@bk-internal.mysql.com:/home/bk/mysql-5.1-opt
into  magare.gmz:/home/kgeorge/mysql/autopush/WL3527-5.1-opt
2007-03-09 17:54:13 +02:00
holyfoot/hf@hfmain.(none)
cdcf3ec097 Merge bk@192.168.21.1:mysql-5.1
into  mysql.com:/home/hf/work/mrg/mysql-5.1-opt
2007-03-08 22:04:17 +04:00
holyfoot/hf@mysql.com/hfmain.(none)
11dd0fa326 Merge bk@192.168.21.1:mysql-5.0
into  mysql.com:/home/hf/work/mrg/mysql-5.0-opt
2007-03-08 21:42:41 +04:00
holyfoot/hf@hfmain.(none)
75be7cd1ae Merge mysql.com:/home/hf/work/mrg/mysql-5.0-opt
into  mysql.com:/home/hf/work/mrg/mysql-5.1-opt
2007-03-08 19:08:28 +04:00
malff/marcsql@weblab.(none)
9f0b0df961 Merge malff@bk-internal.mysql.com:/home/bk/mysql-5.0-runtime
into  weblab.(none):/home/marcsql/TREE/mysql-5.0-8407_b
2007-03-06 11:30:08 -07:00
malff/marcsql@weblab.(none)
8643745d3e Merge weblab.(none):/home/marcsql/TREE/mysql-5.0-8407_b
into  weblab.(none):/home/marcsql/TREE/mysql-5.1-8407-merge
2007-03-06 10:33:10 -07:00
malff/marcsql@weblab.(none)
b216d959bb Bug#8407 (Stored functions/triggers ignore exception handler)
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
  trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
  error)
Bug 25345 (Cursors from Functions)


This fix resolves a long-standing issue originally reported with bug 8407,
which affects the behavior of Stored Procedures, Stored Functions and Triggers
in many different ways, causing the symptoms reported by all the bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved with sub-statements.

Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers
- all the stored procedures
involved, and recursively inspecting these objects' definitions to find
references to more objects, until the list of every object referenced does
not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the (possibly) used tables at once is to
prevent deadlocks.

One problem with this approach is that, even if the execution path the code
actually takes at runtime does not use a given table, the server would not
execute the statement when that table is missing.
This in particular has a major impact on triggers, since a missing table
referenced by an update/delete trigger would prevent an insert trigger from running.

Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.

With this fix, the internal implementation of the pre-locking code has been
relaxed of some constraints, so that failure to open a table does not
necessarily prevent execution of a stored routine.

In particular, the pre-locking mechanism is now behaving as follows:

1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.

2) the next step, which is to open all the tables involved, only attempts
to open the tables added by the pre-locking code, but silently fails without
reporting any error or invoking any exception handler if a table is not
present. This is achieved by trapping internal errors with
Prelock_error_handler.

3) the locking step only locks tables that were successfully opened.

4) when executing sub-statements, the list of tables used by each statement
is evaluated as before. The tables needed by the sub-statement are expected
to be already opened and locked. Statements referencing tables that were not
opened in step 2) will fail to find the table in the open list, and only at
this point will execution of the user code fail.

5) when a runtime exception is raised at 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables().

6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.

With all these changes, a side effect is that view security is impacted, in
two different ways.

First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap exceptions
during execution and still return a valid result, there is no way to decide,
when the view is created, whether a missing table really causes the view to be invalid.

Secondly, testing for the existence of tables is now done later, during
execution. View security, which consists of trapping errors and returning a
generic ER_VIEW_INVALID (to prevent disclosing information), was only
implemented at very specific phases covering *opening* tables, but not
covering the runtime execution. Because of this existing limitation,
errors that were previously trapped and converted into ER_VIEW_INVALID are
not trapped, causing table names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.
2007-03-05 19:42:07 -07:00
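
For illustration only, a hypothetical stored routine of the kind this fix enables; with the relaxed pre-locking, the CONTINUE handler now gets a chance to run when the referenced table is missing (all names invented):

  DELIMITER //
  CREATE PROCEDURE p_cleanup()
  BEGIN
    -- ER_NO_SUCH_TABLE maps to SQLSTATE '42S02'.
    DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
      SELECT 'table missing, skipping' AS note;
    DELETE FROM maybe_missing_table WHERE created < NOW() - INTERVAL 30 DAY;
    SELECT 'done' AS note;
  END//
  DELIMITER ;

  -- Before this fix the CALL failed at pre-locking time if the table was
  -- absent, so the handler never ran; now the handler is given a chance.
  CALL p_cleanup();
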
gkodinov/kgeorge@macbook.gmz
b9c82eaa89 WL#3527: Extend IGNORE INDEX so places where index is ignored
can be specified
Currently MySQL allows one to specify what indexes to ignore during
join optimization. The scope of the current USE/FORCE/IGNORE INDEX 
statement is only the FROM clause, while all other clauses are not 
affected.

However, in certain cases, the optimizer
may incorrectly choose an index for sorting and/or grouping, and
produce an inefficient query plan.

This task provides the means to specify what indexes are
ignored/used for what operation in a more fine-grained manner, thus
making it possible to manually force a better plan. We do this
by extending the current IGNORE/USE/FORCE INDEX syntax to:

IGNORE/USE/FORCE INDEX [FOR {JOIN | ORDER | GROUP BY}]

so that:
- if no FOR is specified, the index hint will apply everywhere.
- if MySQL is started with the compatibility option --old_mode then
  an index hint without a FOR clause works as in 5.0 (i.e, the 
  index will only be ignored for JOINs, but can still be used to
  compute ORDER BY).

See the WL#3527 for further details.
2007-03-05 19:08:41 +02:00
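
An illustrative sketch of the extended syntax (table and index names invented):

  CREATE TABLE t1 (a INT, b INT, KEY i_a (a), KEY i_b (b));

  -- Ignore i_b only when resolving ORDER BY; it may still be considered
  -- for the join/WHERE part.
  SELECT * FROM t1 IGNORE INDEX FOR ORDER BY (i_b)
  WHERE a = 1 ORDER BY b;

  -- Without a FOR clause the hint applies everywhere
  -- (join, ORDER BY and GROUP BY).
  SELECT * FROM t1 IGNORE INDEX (i_a) WHERE a = 1;
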
evgen@moonbone.local
44f97f3f50 Merge epotemkin@bk-internal.mysql.com:/home/bk/mysql-5.0-opt
into  moonbone.local:/mnt/gentoo64/work/25122-bug-5.0-opt-mysql
2007-03-02 00:10:25 +03:00
evgen@moonbone.local
11588e5e4a Bug#25122: Views based on a self-joined table aren't insertable.
When an INSERT is done through a view, the table being inserted into is
checked to be unique among all the view's tables. But if the view contained a
self-joined table, an error was thrown even when all tables were used under
different aliases.

The unique_table() function now also checks tables' aliases when needed.
2007-03-02 00:09:22 +03:00
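
A hypothetical sketch of the scenario: a view joining a table to itself under different aliases, with an insert that targets columns of only one alias (names invented):

  CREATE TABLE t1 (id INT PRIMARY KEY, parent_id INT);

  CREATE VIEW v1 AS
    SELECT c.id, c.parent_id
    FROM t1 AS c JOIN t1 AS p ON c.parent_id = p.id;

  -- The insert touches columns of alias `c` only; with the fix,
  -- unique_table() no longer rejects it just because t1 appears twice.
  INSERT INTO v1 (id, parent_id) VALUES (10, 1);
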
gluh@mysql.com/eagle.(none)
e8635ad3cb Merge mysql.com:/home/gluh/MySQL/Merge/5.0
into  mysql.com:/home/gluh/MySQL/Merge/5.0-opt
2007-02-26 16:57:45 +04:00
gluh@eagle.(none)
975a23ed0f Merge mysql.com:/home/gluh/MySQL/Merge/5.0-opt
into  mysql.com:/home/gluh/MySQL/Merge/5.1-opt
2007-02-26 15:54:43 +04:00
evgen@moonbone.local
9a233742b8 Bug#23800: Outer fields in correlated subqueries is used in a temporary table
created for sorting.

Any outer reference in a subquery was represented by an Item_field object.
If the outer select employs a temporary table, all such fields should be
replaced with fields from that temporary table in order to point to the
actual data. This replacement wasn't done, and that resulted in a wrong
subquery evaluation and a wrong result of the whole query.

Now any outer field is represented by two objects: an Item_field placed in the
outer select and an Item_outer_ref in the subquery. The Item_field object is
processed as a normal field and the reference to it is saved in the
ref_pointer_array. Thus the Item_outer_ref always references the correct
field. The original field is substituted with a reference in the
Item_field::fix_outer_field() function.

A new function called fix_inner_refs() is added to fix fields referenced from
inner selects and to fix references (Item_ref objects) to these fields.

The new Item_outer_ref class is a descendant of the Item_direct_ref class.
It additionally stores a reference to the original field and is designed to
behave more like a field.
2007-02-21 23:00:32 +03:00
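
For illustration only, a hypothetical query of the kind described above: an outer reference inside a correlated subquery while the outer select needs a temporary table (names invented):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, c INT);

  -- The outer SELECT needs a temporary table (grouping plus ordering on an
  -- aggregate); t1.a inside the subquery is an outer reference that must
  -- still point at the right data once the temporary table is used.
  SELECT t1.a,
         (SELECT MAX(t2.c) FROM t2 WHERE t2.a = t1.a) AS m
  FROM t1
  GROUP BY t1.a
  ORDER BY m;
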
monty@mysql.com/narttu.mysql.fi
26aa385bc5 Merge bk-internal.mysql.com:/home/bk/mysql-5.0
into  mysql.com:/home/my/mysql-5.0
2007-02-21 14:07:08 +02:00
igor@olga.mysql.com
273d016aad Post-merge fix. 2007-02-13 13:15:23 -08:00
igor@olga.mysql.com
fb9e0ad3be Merge olga.mysql.com:/home/igor/mysql-5.0-opt
into  olga.mysql.com:/home/igor/mysql-5.1-opt
2007-02-13 01:34:36 -08:00
igor@olga.mysql.com
8d4027fd74 Fixed bug #25931.
View check option clauses were ignored for updates of multi-table
views when the updates could not be performed on the fly and the rows
to update had to be put into temporary tables first.
2007-02-07 14:41:57 -08:00
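
For illustration only, a hypothetical multi-table view with a check option of the kind described above (names invented):

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT);
  CREATE TABLE t2 (id INT PRIMARY KEY, b INT);

  CREATE VIEW v1 AS
    SELECT t1.id, t1.a, t2.b
    FROM t1 JOIN t2 ON t1.id = t2.id
    WHERE t1.a > 0
    WITH CHECK OPTION;

  -- If the update cannot be done on the fly and goes through a temporary
  -- table, the check option must still reject rows that leave the view.
  UPDATE v1 SET a = -1 WHERE b = 5;
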
evgen@moonbone.local
968369906e Bug#19978: INSERT .. ON DUPLICATE erroneously reports some records were
updated.

INSERT ... ON DUPLICATE KEY UPDATE reports that a record was updated when
the duplicate key occurs even if the record wasn't actually changed
because the update values are the same as those in the record.

Now the compare_record() function is used to check whether the record was
changed, and the update of a record is reported only if the record differs
from the original one.
2007-02-07 00:46:03 +03:00
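
A hypothetical sketch of the behaviour described above (names invented):

  CREATE TABLE t1 (id INT PRIMARY KEY, val INT);
  INSERT INTO t1 VALUES (1, 10);

  -- Duplicate key, but the new value equals the stored one: after the fix
  -- the row no longer counts as updated (affected rows: 1 = inserted,
  -- 2 = updated, 0 = left unchanged).
  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE val = VALUES(val);
  SELECT ROW_COUNT();
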
monty@narttu.mysql.fi
bb464613ce Merge bk-internal.mysql.com:/home/bk/mysql-5.1
into  mysql.com:/home/my/mysql-5.1
2007-01-29 01:57:07 +02:00
monty@narttu.mysql.fi
8a80e36ac3 Merge mysql.com:/home/my/mysql-5.0
into  mysql.com:/home/my/mysql-5.1
Merge of 'remove compiler warnings when using -Wshadow'
2007-01-27 03:46:45 +02:00
gluh@eagle.(none)
67db27537b Merge mysql.com:/home/gluh/MySQL/Merge/5.1-opt
into  mysql.com:/home/gluh/MySQL/Merge/5.1
2007-01-26 16:46:01 +04:00
gluh@mysql.com/eagle.(none)
45f61a06fb Merge mysql.com:/home/gluh/MySQL/Merge/5.0-opt
into  mysql.com:/home/gluh/MySQL/Merge/5.0
2007-01-26 16:36:50 +04:00
gkodinov/kgeorge@macbook.gmz
17a0207ece Merge macbook.gmz:/Users/kgeorge/mysql/work/mysql-5.0-opt
into  macbook.gmz:/Users/kgeorge/mysql/work/merge-5.1-opt
2007-01-23 12:34:50 +02:00
monty@mysql.com/narttu.mysql.fi
a04157fbb3 Merge bk-internal.mysql.com:/home/bk/mysql-5.0
into  mysql.com:/home/my/mysql-5.0
2007-01-22 14:04:40 +02:00
evgen@moonbone.local
d7d5db64ec Bug#25172: Not checked buffer size leads to a server crash.
After the fix for bug#21798, JOIN stores the pointer to the buffer for sorting
fields. It is used while sorting for grouping and for ordering. If the ORDER BY
clause has more elements than the GROUP BY clause, a memory overrun occurs.

Now the length of the ORDER BY list is always passed to the
make_unireg_sortorder() function, and it allocates a buffer big enough to be
used for the bigger list.
2007-01-19 18:34:09 +03:00
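
For illustration only, a hypothetical query shape of the kind described above (names invented):

  CREATE TABLE t1 (a INT, b INT, c INT);

  -- The ORDER BY list is longer than the GROUP BY list, so the sort buffer
  -- must be sized for the longer of the two.
  SELECT a, COUNT(*) FROM t1 GROUP BY a ORDER BY a, b, c;
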
istruewing@chilla.local
54c147a1bc Merge bk-internal.mysql.com:/home/bk/mysql-5.1
into  chilla.local:/home/mydev/mysql-5.1-axmrg
2007-01-03 10:27:51 +01:00
istruewing@chilla.local
d37ff7d7bc Merge bk-internal.mysql.com:/home/bk/mysql-5.0
into  chilla.local:/home/mydev/mysql-5.0-axmrg
2007-01-03 08:52:50 +01:00
kent@kent-amd64.(none)
58763e383e Merge mysql.com:/home/kent/bk/main/mysql-5.0
into  mysql.com:/home/kent/bk/main/mysql-5.1
2006-12-31 01:32:21 +01:00
kent@mysql.com/kent-amd64.(none)
6523aca729 my_strtoll10-x86.s:
Corrected spelling in copyright text
Makefile.am:
  Don't update the files from BitKeeper
Many files:
  Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
  Adjusted year(s) in copyright header 
Many files:
  Added GPL copyright text
Removed files:
  Docs/Support/colspec-fix.pl
  Docs/Support/docbook-fixup.pl
  Docs/Support/docbook-prefix.pl
  Docs/Support/docbook-split
  Docs/Support/make-docbook
  Docs/Support/make-makefile
  Docs/Support/test-make-manual
  Docs/Support/test-make-manual-de
  Docs/Support/xwf
2006-12-31 01:02:27 +01:00
svoj@june.mysql.com
9d106f7a10 Merge mysql.com:/home/svoj/devel/mysql/BUG21310/mysql-5.0-engines
into  mysql.com:/home/svoj/devel/mysql/BUG21310/mysql-5.1-engines
2006-12-29 17:33:34 +04:00
svoj@mysql.com/june.mysql.com
441f5069db Merge svojtovich@bk-internal.mysql.com:/home/bk/mysql-5.0-engines
into  mysql.com:/home/svoj/devel/mysql/BUG21310/mysql-5.0-engines
2006-12-29 15:33:03 +04:00
svoj@mysql.com/april.(none)
fb01230ac5 After merge fix. 2006-12-27 19:54:49 +04:00
kent@kent-amd64.(none)
be15e3bc15 Merge mysql.com:/home/kent/bk/main/mysql-5.0
into  mysql.com:/home/kent/bk/main/mysql-5.1
2006-12-23 20:20:40 +01:00
kent@mysql.com/kent-amd64.(none)
226a5c833f Many files:
Changed header to GPL version 2 only
2006-12-23 20:17:15 +01:00
svoj@mysql.com/april.(none)
32c7187952 Merge mysql.com:/home/svoj/devel/mysql/BUG21310/mysql-4.1-engines
into  mysql.com:/home/svoj/devel/mysql/BUG21310/mysql-5.0-engines
2006-12-20 20:01:31 +04:00
svoj@mysql.com/april.(none)
5424cf19de Merge mysql.com:/home/svoj/devel/bk/mysql-4.1-engines
into  mysql.com:/home/svoj/devel/mysql/BUG21310/mysql-4.1-engines
2006-12-20 19:08:28 +04:00
svoj@mysql.com/april.(none)
5ad9035605 BUG#21310 - Trees in SQL causing a "crashed" table with MyISAM storage engine
An update that used a join of a table to itself and modified the
table on one side of the join reported the table as crashed or
updated the wrong rows.

Fixed by creating a temporary table for self-joined multi-update statements.
2006-12-20 19:05:35 +04:00
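
For illustration only, a hypothetical self-joined multi-table update of the kind described above (names invented):

  CREATE TABLE t1 (id INT PRIMARY KEY, parent_id INT, cnt INT) ENGINE=MyISAM;

  -- A join of the table to itself that modifies one side of the join;
  -- the fix materializes the rows to update in a temporary table first.
  UPDATE t1 AS child JOIN t1 AS parent ON child.parent_id = parent.id
  SET parent.cnt = parent.cnt + 1;
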
monty@mysql.com/narttu.mysql.fi
88dd873de0 Fixed compiler warnings detected by option -Wshadow and -Wunused:
- Removed not used variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some not used arguments

Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c

I ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes
2006-12-15 00:51:37 +02:00
monty@mysql.com/narttu.mysql.fi
3d40956039 Fixed portability issue in my_thr_init.c (was added in my last push)
Fixed compiler warnings (detected by VC++):
- Removed not used variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added missing argument to store(longlong), which caused wrong store method to be called.
2006-11-30 18:25:05 +02:00
monty@mysql.com/narttu.mysql.fi
3a35c30027 Fixed compiler warnings (Mostly VC++):
- Removed not used variables
- Changed some ulong parameters/variables to ulonglong (possible serious bug)
- Added casts to get rid of safe assignment from longlong to long (and similar)
- Added casts to function parameters
- Fixed signed/unsigned compares
- Added some constructors to structures
- Removed some non-portable constructs

Better fix for Bug #21428 "skipped 9 bytes from file: socket (3)" on "mysqladmin shutdown"
(Added new parameter to net_clear() to define when we want the communication buffer to be emptied)
2006-11-30 03:40:42 +02:00
monty@nosik.monty.fi
89570bf966 Merge mysql.com:/home/my/mysql-5.0
into  mysql.com:/home/my/mysql-5.1
2006-11-22 14:11:36 +02:00
monty@mysql.com/nosik.monty.fi
e825879800 Remove compiler warnings
(Mostly in DBUG_PRINT() and unused arguments)
Fixed bug in query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
2006-11-20 22:42:06 +02:00
kostja@bodhi.local
6a28c436f7 Merge bk-internal.mysql.com:/home/bk/mysql-4.1
into  bodhi.local:/opt/local/work/mysql-4.1-runtime
2006-11-02 01:08:39 +03:00
gkodinov@dl145s.mysql.com
aaed398254 Merge dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.0-opt
into  dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.1-opt
2006-10-19 16:43:46 +02:00
gkodinov@dl145s.mysql.com
892495acaf Merge dl145s.mysql.com:/data/bk/team_tree_merge/mysql-5.0
into  dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.0-opt
2006-10-19 14:37:49 +02:00
gkodinov/kgeorge@macbook.gmz
a64ae1844d Merge bk-internal:/home/bk/mysql-5.0-opt
into  macbook.gmz:/Users/kgeorge/mysql/work/B21798-5.0-opt-merge
2006-10-17 16:36:44 +03:00
gkodinov/kgeorge@macbook.gmz
f7b8937661 Bug#21798: memory leak during query execution with subquery in column
list using a function
When executing dependent subqueries, they are re-initialized and re-executed for
each row of the outer context.
The cause of the bug is that during subquery reinitialization/re-execution,
the optimizer reallocates JOIN::join_tab and JOIN::table in make_simple_join()
and the local variable 'sortorder' in create_sort_index(), which is
allocated by make_unireg_sortorder().
Care must be taken not to allocate anything into the thread's memory pool
while re-initializing query plan structures between subquery re-executions.
All such items must be cached and reused because the thread's memory pool
is freed at the end of the whole query.
Note that they must be cached and reused even for queries that are not
otherwise cacheable, because otherwise the thread's memory pool would grow
with every re-execution of such a query.
We provide additional members to the JOIN structure to store references 
to the items that need to be cached.
2006-10-17 16:20:26 +03:00
svoj@may.pils.ru
0ea6959d78 Merge svojtovich@bk-internal.mysql.com:/home/bk/mysql-5.1
into  may.pils.ru:/home/svoj/devel/bk/mysql-5.1-engines
2006-10-08 15:32:00 +05:00
svoj@may.pils.ru
8cfef42f72 Merge svojtovich@bk-internal.mysql.com:/home/bk/mysql-5.0
into  may.pils.ru:/home/svoj/devel/bk/mysql-5.0-engines
2006-10-08 15:14:53 +05:00
kroki/tomash@moonlight.intranet
ee0cebf9a7 BUG#21726: Incorrect result with multiple invocations of LAST_INSERT_ID.
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures.  However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:

  - LAST_INSERT_ID(expr) didn't return value of expr (4.1 specific).

  - LAST_INSERT_ID() could return the value generated by current
    statement if the call happens after the generation, like in

      CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
      INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());

  - Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
2006-10-06 13:34:07 +04:00
svoj@april.(none)
77cb275b70 Merge mysql.com:/home/svoj/devel/mysql/BUG21381/mysql-5.0-engines
into  mysql.com:/home/svoj/devel/mysql/BUG21381/mysql-5.1-engines
2006-10-05 21:15:12 +05:00
svoj@mysql.com/april.(none)
d41340516f Merge mysql.com:/home/svoj/devel/mysql/BUG21381/mysql-4.1-engines
into  mysql.com:/home/svoj/devel/mysql/BUG21381/mysql-5.0-engines
2006-10-05 19:02:29 +05:00
svoj@mysql.com/april.(none)
b141a7c1b9 BUG#21381 - Engine not notified about multi-table UPDATE IGNORE
Though this is not a storage-engine-specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the
reason to add a test case to ndb_update.test. As a result of the bug,
different bad things could happen:

BDB removed duplicate rows, which is not expected.
NDB returned an error.

For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
2006-10-05 18:23:53 +05:00
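
For illustration only, a hypothetical multi-table UPDATE IGNORE of the kind described above (names invented):

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT UNIQUE);
  CREATE TABLE t2 (id INT PRIMARY KEY, a INT);

  -- The IGNORE modifier must reach the storage engine for the multi-table
  -- form as well, so duplicate-key rows are skipped rather than mishandled.
  UPDATE IGNORE t1, t2 SET t1.a = t2.a WHERE t1.id = t2.id;
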
dlenev@mockturtle.local
8eb4401c59 Merge bk-internal.mysql.com:/home/bk/mysql-5.0
into  mockturtle.local:/home/dlenev/src/mysql-5.0-rt-merge
2006-10-02 22:53:10 +04:00
kroki/tomash@moonlight.intranet
5ea8adfae7 BUG#21726: Incorrect result with multiple invocations of LAST_INSERT_ID
Non-upper-level INSERTs (the ones in the body of a stored procedure,
stored function, or trigger) into a table that has an AUTO_INCREMENT
column didn't affect the result of LAST_INSERT_ID() on this level.

The problem was introduced with the fix of bug 6880, which in turn was
introduced with the fix of bug 3117, where the current insert_id value was
remembered on the first call to LAST_INSERT_ID() (bug 3117) and was
returned from that function until it was reset before the next
_upper-level_ statement (bug 6880).

The fix for bug#21726 brings back the behaviour of version 4.0, and
implements the following: remember the insert_id value at the beginning
of the statement or expression (which at that point equals
the first insert_id value generated by the previous statement), and
return that remembered value from LAST_INSERT_ID() or @@LAST_INSERT_ID.

Thus, the value returned by LAST_INSERT_ID() is not affected by values
generated by current statement, nor by LAST_INSERT_ID(expr) calls in
this statement.

Version 5.1 does not have this bug (it was fixed by WL 3146).
2006-10-02 14:28:23 +04:00
dlenev@mockturtle.local
79bf51461e Merge bk-internal.mysql.com:/home/bk/mysql-5.0-runtime
into  mockturtle.local:/home/dlenev/src/mysql-5.0-bg20670-2
2006-09-30 11:35:34 +04:00
dlenev@mockturtle.local
93fed182eb Merge bk-internal.mysql.com:/home/bk/mysql-5.1-runtime
into  mockturtle.local:/home/dlenev/src/mysql-5.1-bg20670
2006-09-30 00:31:33 +04:00
dlenev@mockturtle.local
5f149dde09 Fix for bug#20670 "UPDATE using key and invoking trigger that modifies
this key does not stop" (5.1 version).

An UPDATE statement whose WHERE clause used a key and which invoked a trigger
that modified a field in this key ran indefinitely.

This problem occurred because, in cases when the UPDATE statement was
executed in update-on-the-fly mode (in which a row is updated right
during evaluation of the select for the WHERE clause), the new version of
the row became visible to the select representing the WHERE clause and was
updated again and again.
We already solve this problem for UPDATE statements that do not
invoke triggers by detecting that we are going to update a
field in the key used for scanning and performing the update in two steps:
during the first step we gather information about the rows to be
updated, and then we do the actual updates. We also do this for
MULTI-UPDATE, and in its case we even detect situations when such
fields are updated in triggers (actually we simply assume that
we always update fields used in the key if we have a before-update
trigger).

The fix simply extends this check, which is done with the help of the
check_if_key_used()/QUICK_SELECT_I::check_if_keys_used()
routine/method, in such a way that it also detects cases when a
field used in the key is updated in a trigger. We do this by
changing check_if_key_used() to take a field bitmap instead of a
field list as argument and passing TABLE::write_set
to it (we also have to add info about fields used in
triggers to this bitmap a bit earlier).
As a nice side effect we have a more precise, and thus more optimal
performance-wise, check for the MULTI-UPDATE.
Also the check_if_key_used() routine and the similar method were renamed
to is_key_used()/is_keys_used() in order to better reflect that
they are simple boolean predicates.
Finally, the partition_key_modified() routine now also takes a field
bitmap instead of a field list as argument.
2006-09-21 13:39:29 +04:00
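
For illustration only, a hypothetical sketch of the scenario described above (names invented):

  CREATE TABLE t1 (a INT, b INT, KEY k_a (a));

  -- The BEFORE UPDATE trigger changes the indexed column that the UPDATE
  -- scans on; without the two-step update the freshly written row could be
  -- seen (and updated) again by the ongoing scan.
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW
    SET NEW.a = NEW.a + 1;

  UPDATE t1 SET b = b + 1 WHERE a > 10;
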
dlenev@mockturtle.local
091ed9fb38 Fix for bug#20670 "UPDATE using key and invoking trigger that modifies
this key does not stop" (version for 5.0 only).

An UPDATE statement whose WHERE clause used a key and which invoked a trigger
that modified a field in this key ran indefinitely.

This problem occurred because, in cases when the UPDATE statement was
executed in update-on-the-fly mode (in which a row is updated right
during evaluation of the select for the WHERE clause), the new version of
the row became visible to the select representing the WHERE clause and was
updated again and again.
We already solve this problem for UPDATE statements that do not
invoke triggers by detecting that we are going to update a
field in the key used for scanning and performing the update in two steps:
during the first step we gather information about the rows to be
updated, and then we do the actual updates. We also do this for
MULTI-UPDATE, and in its case we even detect situations when such
fields are updated in triggers (actually we simply assume that
we always update fields used in the key if we have a before-update
trigger).

The fix simply extends this check, which is done in the check_if_key_used()/
QUICK_SELECT_I::check_if_keys_used() routine/method, in such a way that
it also detects cases when a field used in the key is updated in a trigger.
As a nice side effect we have a more precise, and thus more optimal
performance-wise, check for the MULTI-UPDATE.
Also check_if_key_used()/QUICK_SELECT_I::check_if_keys_used() were
renamed to is_key_used()/QUICK_SELECT_I::is_keys_used() in order to
better reflect that they are boolean predicates.

Note that this check is implemented in much more elegant way in 5.1
2006-09-21 11:35:38 +04:00
gkodinov@dl145s.mysql.com
ce8ed889d7 Merge dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.0-opt
into  dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.1
2006-09-18 12:57:20 +02:00
igor@rurik.mysql.com
d3d3cef88c Fixed bug #21493: crash on the second execution of a function
containing a select statement that uses an aggregating IN subquery.
Added a parameter to the function fix_prepare_information
to correctly restore the having clause for the second execution.
Saved the and/or structure of the having conditions at the proper moment,
before any calls of split_sum_func2 that could modify the having structure
by adding new Item_ref objects. (These additions are produced not with
the statement mem_root, but rather with the execution mem_root.)
2006-09-16 09:50:48 -07:00
evgen@moonbone.local
8cf9781717 Merge moonbone.local:/work/tmp_merge-5.0-mysql
into  moonbone.local:/work/tmp_merge-5.1-opt-mysql
2006-08-29 18:58:50 +04:00
evgen@sunlight.local
dd9a07706b Fixed bug#21261: Wrong access rights were required for an insert into a view
The SELECT right instead of the INSERT right was required for an insert into a view.
This wrong behaviour appeared after the fix for bug #20989. Its intention was
to require only the SELECT right for all tables except the very first in a complex
INSERT query. But that patch did it in a wrong way and led to requiring
a wrong access right for an insert into a view.

The setup_tables_and_check_access() function now accepts two want_access
parameters. One will be used for the first table and the second for the other
tables.
2006-08-15 21:45:24 +04:00
svoj@may.pils.ru
67db270c71 BUG#7391 - Cross-database multi-table UPDATE uses active database
privileges

This problem is 4.1-specific. It doesn't affect 4.0 and was fixed
in 5.x before.

Any mysql user who is allowed to issue a multi-table UPDATE
statement and has any column/table grants can update
any table on the server (the mysql grant tables are no exception).

check_grant() accepts the number of tables (in the table list) to be checked
in its 5th parameter. While checking grants for a multi-table UPDATE, the number
of tables must be 1. It must never be 0 (actually we have
DBUG_ASSERT(number > 0) in the grant_check() function in 5.x).
2006-08-03 14:03:08 +05:00
kostja@bodhi.local
73189969f3 Merge bodhi.local:/opt/local/work/tmp_merge
into  bodhi.local:/opt/local/work/mysql-5.1-runtime-merge
2006-07-26 23:33:25 +04:00
guilhem@gbichot3.local
bca3fc7800 Merge gbichot3.local:/home/mysql_src/mysql-5.1-interval-move-next-insert-id
into  gbichot3.local:/home/mysql_src/mysql-5.1
2006-07-09 22:50:02 +02:00
guilhem@gbichot3.local
0594e1b84b WL#3146 "less locking in auto_increment":
this is a cleanup patch for our current auto_increment handling:
new names for auto_increment variables in THD, new methods to manipulate them
(see sql_class.h), and some moved into handler::, causing less backup/restore
work when executing substatements.
This hopefully makes the logic clearer; less work is needed in
mysql_insert().
By cleaning up, using different variables for different purposes (instead
of one for 3 things...), we fix those bugs, which someone may want to fix
in 5.0 too:
BUG#20339 "stored procedure using LAST_INSERT_ID() does not replicate
statement-based"
BUG#20341 "stored function inserting into one auto_increment puts bad
data in slave"
BUG#19243 "wrong LAST_INSERT_ID() after ON DUPLICATE KEY UPDATE"
(now if a row is updated, LAST_INSERT_ID() will return its id)
and re-fixes:
BUG#6880 "LAST_INSERT_ID() value changes during multi-row INSERT"
(already fixed differently by Ramil in 4.1)
Test of documented behaviour of mysql_insert_id() (there was no test).
The behaviour changes introduced are:
- LAST_INSERT_ID() now returns "the first autogenerated auto_increment value
successfully inserted", instead of "the first autogenerated auto_increment
value if any row was successfully inserted", see auto_increment.test.
Same for mysql_insert_id(), see mysql_client_test.c.
- LAST_INSERT_ID() returns the id of the updated row if ON DUPLICATE KEY
UPDATE, see auto_increment.test. Same for mysql_insert_id(), see
mysql_client_test.c.
- LAST_INSERT_ID() does not change if no autogenerated value was successfully 
inserted (it used to then be 0), see auto_increment.test.
- if in INSERT SELECT no autogenerated value was successfully inserted,
mysql_insert_id() now returns the id of the last inserted row (it already
did this for INSERT VALUES), see mysql_client_test.c.
- if INSERT SELECT uses LAST_INSERT_ID(X), mysql_insert_id() now returns X
(it already did this for INSERT VALUES), see mysql_client_test.c.
- NDB now behaves like other engines wrt SET INSERT_ID: with INSERT IGNORE,
the id passed in SET INSERT_ID is re-used until a row succeeds; SET INSERT_ID
influences not only the first row now.

Additionally, when unlocking a table we check that the thread is not keeping
a next_insert_id (as the table is unlocked that id is potentially out-of-date);
forgetting about this next_insert_id is done in a new
handler::ha_release_auto_increment().

Finally we prepare for engines capable of reserving finite-length intervals
of auto_increment values: we store such intervals in THD. The next step
(to be done by the replication team in 5.1) is to read those intervals from
THD and actually store them in the statement-based binary log. NDB
will be a good engine to test that.
2006-07-09 17:52:19 +02:00
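
For illustration only, a hypothetical sketch of two of the documented behaviour changes listed above (names invented):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT UNIQUE, cnt INT);

  INSERT INTO t1 (a, cnt) VALUES (1, 0);      -- generates id 1
  SELECT LAST_INSERT_ID();                    -- 1

  -- Per the change above, when ON DUPLICATE KEY UPDATE updates a row,
  -- LAST_INSERT_ID() returns the id of that updated row.
  INSERT INTO t1 (a, cnt) VALUES (1, 0)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;
  SELECT LAST_INSERT_ID();                    -- id of the updated row

  -- Per the change above, if no autogenerated value was successfully
  -- inserted, LAST_INSERT_ID() keeps its previous value (it used to become 0).
  INSERT IGNORE INTO t1 (a, cnt) VALUES (1, 0);
  SELECT LAST_INSERT_ID();                    -- unchanged
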
kostja@bodhi.local
7bf73ac3e5 Merge bodhi.local:/opt/local/work/mysql-5.0-root
into  bodhi.local:/opt/local/work/mysql-5.0-runtime
2006-07-07 22:09:43 +04:00
tomas@poseidon.ndb.mysql.com
98874725e0 Bug #20784 Uninitialized memory in update on table with PK not on first column
- partial backport of code from 5.1: do not call compare_record() for engines that do not read all columns during update
2006-07-04 11:43:06 +02:00
dlenev@mysql.com
d4450e6696 Fix for bug#18437 "Wrong values inserted with a before update trigger on
NDB table".

The SQL layer was not marking fields which were used in triggers as such. As a
result these fields were not always properly retrieved/stored by the handler
layer. So one might get wrong values or lose changes in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.

Also this patch contains the following cleanup of ha_ndbcluster code:

We no longer rely on reading the LEX::sql_command value in the handler in order
to determine if we can enable the optimization which allows us to handle the REPLACE
statement in a more efficient way by doing replaces directly in the write_row()
method without reporting an error to the SQL layer.
Instead we rely on the SQL layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases when it should not
be used (e.g. if we have on-delete triggers on the table) and use it in some
additional cases when it is applicable (e.g. for LOAD DATA REPLACE).

Finally this patch includes fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
  
This was yet another problem caused by improper field mark-up.
During row replacement, fields which weren't explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they
retained values from the old row version. The fix is to mark all table
fields as set for the REPLACE statement. Note that in 5.1 we already solve
this problem by notifying the handler that it should save values from all
fields only in the case when a real replacement happens.
2006-07-02 01:51:10 +04:00
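
For illustration only, a hypothetical BEFORE UPDATE trigger on an NDB table of the kind described above (names invented):

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT, b INT) ENGINE=NDBCLUSTER;

  -- The trigger reads NEW.a and writes NEW.b; both fields have to be marked
  -- as used so the handler retrieves and stores them correctly.
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW
    SET NEW.b = NEW.a * 2;

  UPDATE t1 SET a = a + 1 WHERE id = 1;
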
mikael@dator5.(none)
4e390d711a BUG#17138: Crashes in stored procedure
Last round of review fixes
2006-07-01 00:01:37 -04:00
gkodinov@mysql.com
96ceb6c234 gcc 4.1 linux warning fixes backported from 5.0. 2006-06-28 16:28:29 +03:00
mikael@dator5.(none)
a8e6967c02 BUG#17138: Error in stored procedure
Review comments
2006-06-21 20:55:33 -04:00
mikael@dator5.(none)
cd93441a0a Merge dator5.(none):/home/pappa/clean-mysql-5.1
into  dator5.(none):/home/pappa/bug17138
2006-06-20 16:59:51 -04:00
svoj@may.pils.ru
1c42c9730d Merge may.pils.ru:/home/svoj/devel/mysql/BUG18036/mysql-5.0
into  may.pils.ru:/home/svoj/devel/mysql/BUG18036/mysql-5.1
2006-06-20 16:45:51 +05:00
mikael@dator5.(none)
8bd81e1fee BUG#17138: Error in stored procedure due to fatal error when not really
a fatal error. New handling of ignore error in place.
2006-06-19 20:56:50 -04:00
svoj@may.pils.ru
2c28b3cbf1 Addition to fix for
BUG#18036 - update of table joined to self reports table as crashed

Set exclude_from_table_unique_test value back to FALSE. It is needed for
further check in multi_update::prepare whether to use record cache.
2006-06-19 17:50:52 +05:00
svoj@may.pils.ru
6a90aede13 Merge may.pils.ru:/home/svoj/devel/mysql/BUG18036/mysql-4.1
into  may.pils.ru:/home/svoj/devel/mysql/BUG18036/mysql-5.0
2006-06-19 16:06:29 +05:00
svoj@may.pils.ru
37cdb0fbf3 BUG#18036 - update of table joined to self reports table as crashed
Certain updates of a table joined to itself resulted in unexpected
behavior.

The problem was that the record cache was mistakenly enabled for
self-joined table updates. Normally the record cache must be disabled
for such updates.

Fixed a wrong condition in the code that determines whether to use
the record cache for self-joined table updates.

Only MyISAM tables were affected.
2006-06-19 14:05:14 +05:00
monty@mysql.com
c46fb742b8 Merge bk-internal.mysql.com:/home/bk/mysql-5.1-new
into  mysql.com:/home/my/mysql-5.1
2006-06-04 21:05:22 +03:00
monty@mysql.com
74cc73d461 This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.
Changes that require code changes in the code of other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite.)

- New optional handler function introduced: reset()
  This is called after every DML statement to make it easy for a handler to do
  statement-specific cleanups.
  (The only case it's not called is if we force the file to be closed)

- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
  should be moved to handler::reset()

- table->read_set contains a bitmap over all columns that are needed
  in the query.  read_row() and similar functions only need to read these
  columns
- table->write_set contains a bitmap over all columns that will be updated
  in the query. write_row() and update_row() only need to update these
  columns.
  The above bitmaps should now be up to date in all contexts
  (including ALTER TABLE, filesort()).

  The handler is informed of any changes to the bitmap after
  fix_fields() by calling the virtual function
  handler::column_bitmaps_signal(). If the handler does caching of
  these bitmaps (instead of using table->read_set, table->write_set),
  it should redo the caching in this code. As the signal() may be sent
  several times, it's probably best to set a variable in the signal
  and redo the caching on read_row() / write_row() if the variable was
  set.

- Removed the read_set and write_set bitmap objects from the handler class

- Removed all column bit handling functions from the handler class.
  (Now one instead uses the normal bitmap functions in my_bitmap.c instead
  of handler dedicated bitmap functions)

- field->query_id is removed. One should instead check
  table->read_set and table->write_set if a field is used in the query.

- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and
  handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
  instead use table->read_set to check for which columns to retrieve.

- If a handler needs to call Field->val() or Field->store() on columns
  that are not used in the query, one should install a temporary
  all-columns-used map while doing so. For this, we provide the following
  functions:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
  field->val();
  dbug_tmp_restore_column_map(table->read_set, old_map);

  and similar for the write map:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
  field->val();
  dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT
  in the field store() / val() functions.
  (For non-DBUG binaries, dbug_tmp_use_all_columns() and
  dbug_tmp_restore_column_map() are inline dummy functions and should
  be optimized away by the compiler).

- If one needs to temporarily set the column map for all binaries (and not
  just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
  methods) one should use the functions tmp_use_all_columns() and
  tmp_restore_column_map() instead of the above dbug_ variants.

- All 'status' fields in the handler base class (like records,
  data_file_length etc) are now stored in a 'stats' struct. This makes
  it easier to know what status variables are provided by the base
  handler.  This required some trivial variable name changes in the extra()
  function.

- New virtual function handler::records().  This is called to optimize
  COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
  (stats.records is not supposed to be an exact value. It only has to
  be 'reasonable enough' for the optimizer to be able to choose a good
  optimization path).

- Non virtual handler::init() function added for caching of virtual
  constants from engine.

- Removed has_transactions() virtual method. Now one should instead return
  HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
  transactions.

- The 'xxxx_create_handler()' function now has a MEM_ROOT argument
  that is to be used with 'new handler_name()' to allocate the handler
  in the right area.  The xxxx_create_handler() function is also
  responsible for any initialization of the object before returning.

  For example, one should change:

  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }

  ->

  static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }

- New optional virtual function: use_hidden_primary_key().
  This is called in case of an update/delete when
  (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
  but we don't have a primary key. This allows the handler to take precautions
  in remembering any hidden primary key, to be able to update/delete any
  found row. The default handler marks all columns to be read.

- handler::table_flags() now returns a ulonglong (to allow for more flags).

- New/changed table_flags()
  - HA_HAS_RECORDS	    Set if ::records() is supported
  - HA_NO_TRANSACTIONS	    Set if engine doesn't support transactions
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                            Set if we should mark all primary key columns for
			    read when reading rows as part of a DELETE
			    statement. If there is no primary key,
			    all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ  Set if engine will not read all columns in some
			    cases (based on table->read_set)
 - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
   			    Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
 - HA_DUPP_POS              Renamed to HA_DUPLICATE_POS
 - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
			    Set this if we should mark ALL key columns for
			    read when reading rows as part of a DELETE
			    statement. In case of an update we will mark
			    all keys for read for which key part changed
			    value.
  - HA_STATS_RECORDS_IS_EXACT
			     Set this if stats.records is exact.
			     (This saves us some extra records() calls
			     when optimizing COUNT(*))
			    

- Removed table_flags()
  - HA_NOT_EXACT_COUNT     Now one should instead use HA_HAS_RECORDS if
			   handler::records() gives an exact count() and
			   HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME	   Removed (no one supported this one)

- Removed not needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()

- Renamed handler::dupp_pos to handler::dup_pos

- Removed not used variable handler::sortkey


Upper level handler changes:

- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is updated on engine creation time and updated on open.


MySQL level changes (not obvious from the above):

- DBUG_ASSERT() added to check that column usage matches what is set
  in the column usage bit maps. (This found a LOT of bugs in current
  column marking code).

- In 5.1 before, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns for which we
  need a value in read_set.

- Column bitmaps are created in open_binary_frm() and open_table_from_share().
  (Before this was in table.cc)

- handler::table_flags() calls are replaced with handler::ha_table_flags()

- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in
  all store()/val() functions to catch wrong usage)

- thd->set_query_id is renamed to thd->mark_used_columns and instead
  of setting this to an integer value, this has now the values:
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
  Changed also all variables named 'set_query_id' to mark_used_columns.

- In filesort() we now inform the handler of exactly which columns are needed
  for doing the sort and choosing the rows.

- The TABLE_SHARE object has a 'all_set' column bitmap one can use
  when one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places)

- The TABLE object has 3 column bitmaps:
  - def_read_set     Default bitmap for columns to be read
  - def_write_set    Default bitmap for columns to be written
  - tmp_set          Can be used as a temporary bitmap when needed.
  The table object has also two pointer to bitmaps read_set and write_set
  that the handler should use to find out which columns are used in which way.

- count() optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).

- Added extra argument to Item::walk() to indicate if we should also
  traverse sub queries.

- Added TABLE parameter to cp_buffer_from_ref()

- Don't close tables created with CREATE ... SELECT but keep them in
  the table cache. (Faster usage of newly created tables).


New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables
  at start of new statements.

- table->column_bitmaps_set() to set up new column bitmaps and signal
  the handler about this.

- table->column_bitmaps_set_no_signal() for a few cases where we need
  to set up new column bitmaps but don't signal the handler (as the handler
  has already been signaled about these before). Used for the moment
  only in opt_range.cc when doing ROR scans.

- table->use_all_columns() to install a bitmap where all columns are marked
  as used in the read and the write set.

- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but not signaling the handler about this.
  This is mainly used when creating TABLE instances.

- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags())

- table->prepare_for_position() to allow us to tell handler that it
  needs to read primary key parts to be able to store them in
  future table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function)

- table->mark_auto_increment_column() to tell the handler that we are going to update
  columns that are part of an auto_increment key.

- table->mark_columns_used_by_index() to mark all columns that are part of
  an index.  It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
  it to quickly know that it only needs to read columns that are part
  of the key.  (The handler can also use the column map for detecting this,
  but a simpler/faster handler can just monitor the extra() call).

- table->mark_columns_used_by_index_no_reset() to, in addition to other columns,
  also mark all columns that are used by the given key.

- table->restore_column_maps_after_mark_index() to restore to default
  column maps after a call to table->mark_columns_used_by_index().

- New item function register_field_in_read_map(), for marking used columns
  in table->read_map. Used by filesort() to mark all used columns

- Maintain in TABLE->merge_keys the set of all keys that are used in the query.
  (Simplifies some optimization loops)

- Maintain Field->part_of_key_not_clustered which is like Field->part_of_key
  but a field in the clustered key is not assumed to be part of all indexes.
  (used in opt_range.cc for faster loops)

-  dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
   tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
   mark all columns as usable.  The 'dbug_' versions are primarily intended
   inside a handler when it wants to just call the Field::store() & Field::val()
   functions, but doesn't need the column maps set for any other usage.
   (i.e. bitmap_is_set() is never called)

- We can't use compare_records() to skip updates for handlers that return
  a partial column set and the read_set doesn't cover all columns in the
  write set. The reason for this is that if we have a column marked only for
  write we can't in the MySQL level know if the value changed or not.
  The reason this worked before was that MySQL marked all to be written
  columns as also to be read. The new 'optimal' bitmaps exposed this 'hidden
  bug'.

- open_table_from_share() no longer sets up a temporary MEM_ROOT
  object as a thread-specific variable for the handler. Instead we
  send the to-be-used MEM_ROOT to get_new_handler().
  (Simpler, faster code)



Bugs fixed:

- Column marking was not done correctly in a lot of cases.
  (ALTER TABLE, when using triggers, auto_increment fields etc)
  (Could potentially result in wrong values inserted in table handlers
  relying on that the old column maps or field->set_query_id was correct)
  Especially when it comes to triggers, there may be cases where the
  old code would cause lost/wrong values for NDB and/or InnoDB tables.

- Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
  This allowed me to remove some wrong warnings about:
  "Some non-transactional changed tables couldn't be rolled back"

- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
  (thd->options & OPTION_STATUS_NO_TRANS_UPDATE) which caused us to lose
  some warnings about
  "Some non-transactional changed tables couldn't be rolled back")

- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
  which could cause delete_table to report random failures.

- Fixed core dumps for some tests when running with --debug

- Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
  (This has probably caused us to not properly remove temporary files after
  crash)

- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in slow log.

- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation. This is required by
  the definition of REPLACE and also ensures that fields that are only
  part of UPDATE are properly handled.  This fixed a bug in NDB and
  REPLACE where REPLACE wrongly copied some column values from the replaced
  row.

- For table handlers that don't support NULL in keys, we would give an error
  when creating a primary key with NULL fields, even after the fields had been
  automatically converted to NOT NULL.

- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.


Cleanups:

- Removed not used condition argument to setup_tables

- Removed not needed item function reset_query_id_processor().

- Field->add_index is removed. Now this is instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX)

- Field->fieldnr is removed (use field->field_index instead)

- New argument to filesort() to indicate that it should return a set of
  row pointers (not used columns). This allowed me to remove some references
  to sql_command in filesort and should also enable us to return column
  results in some cases where we couldn't before.

- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
  bitmap, which allowed me to use bitmap functions instead of looping over
  all fields to create some needed bitmaps. (Faster and smaller code)

- Broke up lines that were found to be too long

- Moved some variable declarations to the start of functions for better code
  readability.

- Removed some not used arguments from functions.
  (setup_fields(), mysql_prepare_insert_check_table())

- setup_fields() now takes an enum instead of an int for marking columns
   usage.

- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.

- Changed some constants to enum's and define's.

- Using separate column read and write sets allows for easier checking
  of whether the timestamp field was set by the statement.

- Remove calls to free_io_cache() as this is now done automatically in ha_reset()

- Don't build table->normalized_path as this is now identical to table->path
  (after bar's fixes to convert filenames)

- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.


Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row-based logging (as shown by the testcase binlog_row_mix_innodb_myisam.result).
  Mats has promised to look into this.

- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better that
  someone else also tests this thoroughly.)
  Lars has promised to do this.
2006-06-04 18:52:22 +03:00
jani@a193-229-222-105.elisa-laajakaista.fi
c81b4c01bf Merge a193-229-222-105.elisa-laajakaista.fi:/home/my/bk/mysql-5.0
into  a193-229-222-105.elisa-laajakaista.fi:/home/my/bk/mysql-5.1-new
2006-05-30 16:07:49 +03:00
gkodinov@mysql.com
a21a2b5bcd BUG#18681: View privileges are broken
The check for view security was lacking on several points:
1. Checking with the right set of permissions: for each table ref that
participates in a view, the right credentials to use were in its
security_ctx member, but these weren't used for checking the credentials.
This made it hard to enforce the SQL SECURITY DEFINER|INVOKER property
consistently.
2. Because of the above, the security checking for views was just ruled out
in explicit ways in several places.
3. The security was checked only for the columns of the tables that are
brought into the query from a view. So if there was no column reference
outside of the view definition, the lack of access to the tables of the view
in SQL SECURITY INVOKER mode was not detected.

The fix below addresses the above 3 points.
2006-05-26 11:47:53 +03:00
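
For illustration only, a hypothetical SQL SECURITY INVOKER view of the kind the fix covers (names invented):

  -- The view definer may differ from the invoker; with SQL SECURITY INVOKER
  -- the access to db1.t1 must be checked against the invoking user, even if
  -- the query references no column of the view explicitly.
  CREATE SQL SECURITY INVOKER VIEW v1 AS SELECT a FROM db1.t1;

  SELECT COUNT(*) FROM v1;
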
evgen@sunlight.local
368b92390c Manually merged 2006-04-05 20:12:26 +04:00
evgen@sunlight.local
b9949ed9c7 Fixed bug #16281: Multi-table update broken in 5.0 on tables imported from 4.1
Multi-table UPDATE uses a temporary table to store new values for fields. Along with the
new values, the rowid of the record to be updated is stored in a Field_string
field. The table to be updated is set as the source table of the rowid field.
But when the temporary table creates the tmp field for the rowid field, it
converts it to a varstring field because the table to be updated was created by
v4.1. Due to this the stored rowids were broken and no records to
update were found.

The flag can_alter_field_type is added to the Field_string class. When it is set to
0 the field won't be converted to varstring. The Field_string::type() function
now always returns MYSQL_TYPE_STRING if can_alter_field_type is set to 0.
The multi_update::initialize_tables() function now sets the can_alter_field_type
flag to 0 for the rowid fields, preventing conversion of the field to a varstring
field.
2006-04-05 13:29:04 +04:00
monty@mysql.com
54274976e7 Fixed compiler warnings from gcc 4.0.2:
- Added empty constructors and virtual destructors to many classes and structs
- Removed some usage of the offsetof() macro to instead use C++ class pointers
2006-02-25 17:46:30 +02:00
konstantin@mysql.com
a27e32b565 Merge mysql.com:/home/kostja/mysql/mysql-5.0-root
into  mysql.com:/home/kostja/mysql/mysql-5.1-merge
2006-02-22 14:04:24 +03:00