Commit graph

829 commits

Author SHA1 Message Date
davi@endora.local
cc007acb78 Bug#30882 Dropping a temporary table inside a stored function may cause a server crash
If a stored function that contains a DROP TEMPORARY TABLE statement
is invoked by a CREATE TEMPORARY TABLE statement for a table of the
same name, the server may crash. The problem is that when dropping a
table no check is done to ensure that the table is not being used by
some outer query (or outer statement), potentially leaving the outer
query with a reference to a stale (freed) table.

The solution is, when dropping a temporary table, to always check
whether the table is being used by some outer statement, since a
temporary table can be dropped inside stored procedures.

The check is performed by looking at the TABLE::query_id value for
temporary tables. To simplify this check and to solve a bug related
to handling of temporary tables in prelocked mode, this patch changes
the way in which this member is used to track whether the table is
in use. We now ensure that TABLE::query_id is zero for unused
temporary tables, which means that all temporary tables used by a
statement are marked as free for reuse once its execution has
completed.
2007-11-01 18:52:56 -02:00
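
A minimal stand-alone C++ sketch of the check described in the Bug#30882 entry above; TABLE and the helpers are simplified stand-ins for the real server structures, with TABLE::query_id == 0 meaning "not in use":

  #include <cstdio>

  struct TABLE {
    unsigned long query_id;   // 0 means the temporary table is not in use
    const char *alias;
  };

  // Refuse to drop a temporary table that is still used by an outer statement.
  static bool drop_temporary_table(TABLE *table)
  {
    if (table->query_id != 0)
      return false;                        // in use: would leave a stale reference
    std::printf("dropping %s\n", table->alias);
    return true;
  }

  // After a statement finishes, every temporary table it used is marked free.
  static void mark_temporary_tables_free(TABLE **tables, int count)
  {
    for (int i = 0; i < count; i++)
      tables[i]->query_id = 0;
  }

  int main()
  {
    TABLE t = {42, "tmp_t1"};
    bool first = drop_temporary_table(&t);    // refused: owned by query 42
    TABLE *used[] = {&t};
    mark_temporary_tables_free(used, 1);
    bool second = drop_temporary_table(&t);   // now allowed
    return (!first && second) ? 0 : 1;
  }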
kostja@bodhi.(none)
e4b353c40c Use an inline getter method (thd->is_error()) to query if there is an error
in THD.
In future the error may be stored elsewhere (not in net.report_error) and 
it's important to start using an opaque getter to simplify merges.
2007-10-30 20:08:16 +03:00
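
A tiny sketch of the opaque-getter idea from the commit above, with stand-in types: callers ask thd->is_error() instead of reading net.report_error directly, so the backing storage can move later without touching call sites.

  struct NET { bool report_error; };

  class THD {
    NET net;
  public:
    THD() : net{false} {}
    void raise_error() { net.report_error = true; }
    // The only accessor callers should use; the backing field may change.
    bool is_error() const { return net.report_error; }
  };

  int main()
  {
    THD thd;
    thd.raise_error();
    return thd.is_error() ? 0 : 1;
  }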
gluh@eagle.(none)
17acda6ca8 Merge mysql.com:/home/gluh/MySQL/Merge/5.1
into  mysql.com:/home/gluh/MySQL/Merge/5.1-opt
2007-10-23 19:08:21 +05:00
davi@moksha.local
f3b6d56fb2 The fix for BUG 21136 (ChangeSet@1.2611.1.1) introduced a regression that
caused a few tests to fail because the thd->extra_lock wasn't being set to
NULL after the table was unlocked. This poses a serious problem because later
attempts to access thd->extra_lock (now a dangling pointer) will probably
result in a crash (undefined behavior) -- and that's what actually happens
in some test cases.

The solution is to set the select_create::m_plock pointee to NULL, which
means that thd->extra_lock is set to NULL when the lock data is not for a
temporary table.
2007-09-29 16:04:31 -03:00
davi@moksha.local
ae079ec427 Bug#21136 CREATE TABLE SELECT within CREATE TABLE SELECT causes server crash
When CREATE TEMPORARY TABLE .. SELECT is invoked from a stored function
which in turn is called from CREATE TABLE SELECT, a memory leak occurs
because the inner create temporary table overrides the outer extra_lock
reference when locking the table.

The solution is to simply not override the extra_lock, by only using the
extra_lock for a non-temporary table lock.
2007-09-28 18:25:47 -03:00
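
A stand-alone sketch of the extra_lock handling described in the two entries above (simplified stand-in types, not the real select_create/select_insert classes): the shared thd->extra_lock slot is used only for non-temporary tables, and whichever slot held the lock is reset to NULL on unlock, so no dangling pointer remains.

  struct MYSQL_LOCK { int dummy; };

  struct THD { MYSQL_LOCK *extra_lock = nullptr; };

  struct select_create_sketch {
    MYSQL_LOCK *m_lock = nullptr;    // private slot, used for temporary tables
    MYSQL_LOCK **m_plock = nullptr;  // points at whichever slot holds the lock

    void store_lock(THD *thd, MYSQL_LOCK *lock, bool is_temporary)
    {
      // Only a non-temporary table lock may go into the shared extra_lock slot.
      m_plock = is_temporary ? &m_lock : &thd->extra_lock;
      *m_plock = lock;
    }

    void unlock()
    {
      if (m_plock)
      {
        delete *m_plock;
        *m_plock = nullptr;   // clears thd->extra_lock too when it was used
        m_plock = nullptr;
      }
    }
  };

  int main()
  {
    THD thd;
    select_create_sketch sc;
    sc.store_lock(&thd, new MYSQL_LOCK, /*is_temporary=*/false);
    sc.unlock();
    return thd.extra_lock == nullptr ? 0 : 1;   // no dangling pointer left
  }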
gkodinov/kgeorge@magare.gmz
f42a392b2a merged the fix for bug 30468 to 5.1-opt 2007-09-27 12:32:59 +03:00
gkodinov/kgeorge@magare.gmz
bbe1d37089 Merge gkodinov@bk-internal.mysql.com:/home/bk/mysql-5.0-opt
into  magare.gmz:/home/kgeorge/mysql/autopush/B30468-5.0-opt
2007-09-27 12:17:16 +03:00
gkodinov/kgeorge@magare.gmz
fb3b12176d Bug #30468: column level privileges not respected when joining tables
When expanding a * in a USING/NATURAL join the check for table access
for both tables in the join was done using the grant information of the
first one.
Fixed by getting the grant information for the current table while 
iterating through the columns of the join.
2007-09-27 12:15:19 +03:00
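
A minimal sketch of the fix described above, with hypothetical grant structures: while expanding '*' over a USING/NATURAL join, each column is checked against the grants of the table it actually belongs to, not the grants of the first table in the join.

  #include <string>
  #include <vector>

  struct GrantInfo { std::vector<std::string> allowed_columns; };

  struct JoinColumn { std::string name; const GrantInfo *table_grants; };

  static bool column_is_granted(const GrantInfo &grants, const std::string &col)
  {
    for (const std::string &c : grants.allowed_columns)
      if (c == col)
        return true;
    return false;
  }

  static bool expand_star(const std::vector<JoinColumn> &columns)
  {
    for (const JoinColumn &col : columns)
      // Use the grant information of the current column's table.
      if (!column_is_granted(*col.table_grants, col.name))
        return false;   // column-level privilege missing: access denied
    return true;
  }

  int main()
  {
    GrantInfo t1{{"a"}}, t2{{"b"}};
    std::vector<JoinColumn> cols = {{"a", &t1}, {"b", &t2}};
    return expand_star(cols) ? 0 : 1;
  }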
gkodinov/kgeorge@magare.gmz
d0b796bf14 merge of the fix for bug 27802 & 27216 to 5.1-opt 2007-09-26 19:06:54 +03:00
evgen@sunlight.local
4fd6de8b1a Merge sunlight.local:/local_work/27216-bug-5.0-opt-mysql
into  sunlight.local:/local_work/merge-5.1-opt-mysql
2007-09-24 17:23:40 +04:00
evgen@sunlight.local
7ed30b971c Merge epotemkin@bk-internal.mysql.com:/home/bk/mysql-5.0-opt
into  sunlight.local:/local_work/27216-bug-5.0-opt-mysql
2007-09-22 13:10:44 +04:00
evgen@sunlight.local
36bf417b40 Bug#27216: functions with parameters of different date types may return wrong
type of the result.

There are several functions that accept parameters of different types.
The result field type of such functions was determined based on
the aggregated result type of their arguments. As the DATE and DATETIME
types are represented by the STRING type, the result field type
of the affected functions was always STRING for DATE/DATETIME arguments.
The affected functions are COALESCE, IF, IFNULL, CASE, and LEAST/GREATEST.

Now the affected functions aggregate the field types of their arguments rather
than their result types and return the result of aggregation as their result
field type.
A cached_field_type member variable is added to a number of classes to
hold the aggregated result field type.
The str_to_date() function's result field type now defaults to
MYSQL_TYPE_DATETIME.
The agg_field_type() function is added. It aggregates field types with help
of the Field::field_type_merge() function.
The create_table_from_items() function now uses the 
item->tmp_table_field_from_field_type() function to get the proper field
when the item is a function with a STRING result type.
2007-09-22 11:49:27 +04:00
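
A stand-alone sketch of the aggregation idea above, with a toy field_type_merge(): the result field type of functions like COALESCE/IFNULL is obtained by merging the field types of the arguments, so DATE/DATETIME arguments keep a temporal result type instead of degrading to a string type. The merge rules here are illustrative, not the server's full matrix.

  enum field_type { TYPE_DATE, TYPE_DATETIME, TYPE_VARCHAR };

  // Toy merge rule: equal types stay, DATE + DATETIME widens to DATETIME,
  // anything else falls back to VARCHAR.
  static field_type field_type_merge(field_type a, field_type b)
  {
    if (a == b)
      return a;
    if ((a == TYPE_DATE && b == TYPE_DATETIME) ||
        (a == TYPE_DATETIME && b == TYPE_DATE))
      return TYPE_DATETIME;
    return TYPE_VARCHAR;
  }

  // Aggregate the field types of all arguments; the result would be cached
  // as the function's result field type (cached_field_type in the patch).
  static field_type agg_field_type(const field_type *args, int count)
  {
    field_type result = args[0];
    for (int i = 1; i < count; i++)
      result = field_type_merge(result, args[i]);
    return result;
  }

  int main()
  {
    field_type args[] = {TYPE_DATE, TYPE_DATETIME};
    return agg_field_type(args, 2) == TYPE_DATETIME ? 0 : 1;
  }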
evgen@sunlight.local
bb5cdfb87e Bug#30384: Having SQL_BUFFER_RESULT option in the CREATE .. KEY(..) .. SELECT
led to creating corrupted index.

During execution of a CREATE .. SELECT SQL_BUFFER_RESULT statement the
engine->start_bulk_insert function was called twice. On the first call
MyISAM disabled all non-unique indexes, and on the second call it decided
not to re-enable them because all indexes were already disabled.
Due to this no indexes were actually created during CREATE TABLE, thus
producing a corrupted table.

Now the select_insert class has an is_bulk_insert_mode flag which prevents
calling the start_bulk_insert function twice.
The flag is set in the select_create::prepare, select_insert::prepare2
functions and the select_insert class constructor.
The flag is reset in the select_insert::send_eof function.
2007-09-21 12:09:00 +04:00
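
A minimal sketch of the guard described above, with stand-in types rather than the real select_insert class: a flag ensures the engine's bulk-insert mode is started at most once even if both prepare paths run.

  #include <cstdio>

  struct handler_sketch {
    int start_calls = 0;
    void start_bulk_insert(unsigned long rows)
    {
      ++start_calls;   // MyISAM would disable non-unique indexes here
      std::printf("start_bulk_insert(%lu), call #%d\n", rows, start_calls);
    }
  };

  struct select_insert_sketch {
    handler_sketch *file = nullptr;
    bool is_bulk_insert_mode = false;   // set in prepare/prepare2, reset in send_eof

    void begin_bulk_insert(unsigned long rows)
    {
      if (!is_bulk_insert_mode)
      {
        file->start_bulk_insert(rows);
        is_bulk_insert_mode = true;
      }
    }

    void send_eof() { is_bulk_insert_mode = false; }
  };

  int main()
  {
    handler_sketch h;
    select_insert_sketch si;
    si.file = &h;
    si.begin_bulk_insert(0);   // e.g. from select_create::prepare
    si.begin_bulk_insert(0);   // e.g. from select_insert::prepare2: no second call
    si.send_eof();
    return h.start_calls == 1 ? 0 : 1;
  }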
thek@adventure.(none)
ab60d8ae92 Merge adventure.(none):/home/thek/Development/cpp/bug27358/my51-bug27358
into  adventure.(none):/home/thek/Development/cpp/mysql-5.1-runtime
2007-09-10 11:42:16 +02:00
thek@adventure.(none)
1f5f2b7649 Merge adventure.(none):/home/thek/Development/cpp/bug27358/my50-bug27358
into  adventure.(none):/home/thek/Development/cpp/bug27358/my51-bug27358
2007-09-10 11:38:23 +02:00
malff/marcsql@weblab.(none)
4cab11097e Merge malff@bk-internal.mysql.com:/home/bk/mysql-5.1-runtime
into  weblab.(none):/home/marcsql/TREE/mysql-5.1-rt50-merge
2007-08-30 17:05:49 -06:00
davi@moksha.local
11830073bc Merge moksha.local:/Users/davi/mysql/push/bugs/28587-5.0
into  moksha.local:/Users/davi/mysql/push/bugs/28587-5.1
2007-08-30 16:13:07 -03:00
davi@moksha.local
1180c22aef Bug#28587 SELECT is blocked by INSERT waiting on read lock, even with low_priority_updates
The problem is that a SELECT on one thread is blocked by INSERT ... ON
DUPLICATE KEY UPDATE on another thread even when low_priority_updates is
activated.

The solution is to possibly downgrade the lock type to the setting of
low_priority_updates if the INSERT cannot be concurrent.
2007-08-30 16:11:53 -03:00
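
A minimal sketch of the downgrade decision above; the thr_lock type names exist in the server, but the logic here is a simplified illustration, not the actual code path.

  enum thr_lock_type {
    TL_WRITE_LOW_PRIORITY,
    TL_WRITE_CONCURRENT_INSERT,
    TL_WRITE
  };

  // If the INSERT ... ON DUPLICATE KEY UPDATE cannot run as a concurrent
  // insert and the session has low_priority_updates set, fall back to a
  // low-priority write lock so a waiting SELECT is not blocked.
  static thr_lock_type choose_write_lock(bool can_be_concurrent,
                                         bool low_priority_updates)
  {
    if (can_be_concurrent)
      return TL_WRITE_CONCURRENT_INSERT;
    return low_priority_updates ? TL_WRITE_LOW_PRIORITY : TL_WRITE;
  }

  int main()
  {
    return choose_write_lock(false, true) == TL_WRITE_LOW_PRIORITY ? 0 : 1;
  }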
thek@adventure.(none)
b1b2108275 Bug#27358 INSERT DELAYED does not honour SQL_MODE of the client
SQL_MODE was ignored when a client issued INSERT DELAYED.

Some system settings weren't copied as intended when a record was saved for
a delayed insert.
2007-08-23 10:22:20 +02:00
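
A minimal sketch (hypothetical structures) of the fix above: the session settings that affect how the row is evaluated, such as sql_mode, are copied into the saved record so the delayed-insert thread replays the INSERT with the client's settings instead of its own.

  #include <cstdint>
  #include <utility>
  #include <vector>

  struct session_sketch {
    uint64_t sql_mode;
    bool auto_increment_field_not_null;
  };

  struct delayed_row_sketch {
    std::vector<unsigned char> record;   // packed row image
    uint64_t sql_mode;                   // copied from the issuing client
    bool auto_increment_field_not_null;  // another per-statement setting
  };

  static delayed_row_sketch save_row(const session_sketch &client,
                                     std::vector<unsigned char> record)
  {
    delayed_row_sketch row;
    row.record = std::move(record);
    row.sql_mode = client.sql_mode;      // previously not carried over
    row.auto_increment_field_not_null = client.auto_increment_field_not_null;
    return row;
  }

  int main()
  {
    session_sketch client{1, true};
    delayed_row_sketch row = save_row(client, {1, 2, 3});
    return row.sql_mode == client.sql_mode ? 0 : 1;
  }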
monty@narttu.mysql.fi
9d609a59fd Merge bk-internal.mysql.com:/home/bk/mysql-5.1
into  mysql.com:/home/my/mysql-5.1
2007-08-14 00:22:34 +03:00
monty@mysql.com/nosik.monty.fi
e53a73e26c Fixed a lot of compiler warnings and errors detected by Forte C++ on Solaris
Faster thr_alarm()
Added 'Opened_files' status variable to track calls to my_open()
Don't give warnings when running mysql_install_db
Added option --source-install to mysql_install_db

I had to do the following renames, as the polymorphism used didn't work with the Forte compiler on 64-bit systems:
index_read()      -> index_read_map()
index_read_idx()  -> index_read_idx_map()
index_read_last() -> index_read_last_map()
2007-08-13 16:11:25 +03:00
df@pippilotta.erinye.com
493634e4c7 Merge bk-internal:/home/bk/mysql-5.1-marvel
into  pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.1-build-marvel-engines
2007-08-03 17:15:23 +02:00
df@pippilotta.erinye.com
3d986fde8d Merge bk-internal:/home/bk/mysql-5.1-engines
into  pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.1-build-marvel-engines
2007-08-03 16:53:06 +02:00
istruewing@chilla.local
91e9aeb88d Merge chilla.local:/home/mydev/mysql-5.0-amain
into  chilla.local:/home/mydev/mysql-5.0-axmrg
2007-08-03 06:56:04 +02:00
svoj@april.(none)
0f9cae4843 Merge mysql.com:/home/svoj/devel/mysql/BUG29152/mysql-5.0-engines
into  mysql.com:/home/svoj/devel/mysql/BUG29152/mysql-5.1-engines
2007-08-02 19:08:39 +05:00
monty@nosik.monty.fi
93f0771fca Merge bk-internal.mysql.com:/home/bk/mysql-5.1
into  mysql.com:/home/my/mysql-5.1
2007-08-02 07:55:33 +03:00
tsmith@ramayana.hindu.god
a52a078f75 Merge tsmith@bk-internal.mysql.com:/home/bk/mysql-5.1-opt
into  ramayana.hindu.god:/home/tsmith/m/bk/maint/51
2007-08-01 18:40:02 -06:00
svoj@mysql.com/april.(none)
2c539642c0 BUG#29152 - INSERT DELAYED does not use concurrent_insert on slave
INSERT DELAYED on a replication slave was converted to regular INSERT,
whereas it should try concurrent INSERT first.

With this patch we try to convert delayed insert to concurrent insert on
a replication slave. If it is impossible for some reason, we fall back to
regular insert.

No test case for this fix. I do not see anything indicating this is a
regression - we have behaved this way since Nov 2000.
2007-07-31 19:40:36 +05:00
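
A tiny sketch of the decision described above (toy types only): on a replication slave an INSERT DELAYED is first tried as a concurrent insert and falls back to a regular insert only when that is impossible.

  enum insert_kind { INSERT_CONCURRENT, INSERT_REGULAR };

  static insert_kind convert_delayed_on_slave(bool concurrent_insert_possible)
  {
    if (concurrent_insert_possible)
      return INSERT_CONCURRENT;   // preferred conversion with this patch
    return INSERT_REGULAR;        // fallback: the pre-patch behaviour
  }

  int main()
  {
    return convert_delayed_on_slave(true) == INSERT_CONCURRENT ? 0 : 1;
  }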
gkodinov/kgeorge@magare.gmz
992698ce5f merge of the fix for bug 17417 5.0-opt->5.1-opt 2007-07-31 14:58:04 +03:00
gkodinov/kgeorge@magare.gmz
cfbfb8bae8 (Pushing for Andrei)
Merge magare.gmz:/home/kgeorge/mysql/work/B27417-5.0-opt
into  magare.gmz:/home/kgeorge/mysql/work/B27417-5.1-opt
2007-07-30 19:02:21 +03:00
gkodinov/kgeorge@magare.gmz
9a0e6ec6d2 (pushing for Andrei)
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
  
Once it had been set, the flag might later get reset inside a stored routine
execution stack.
The reason was that there was no check whether a new statement had started
at the time of resetting.
The problem affects most binloggable DML queries. Note that multi-update is
covered by the bug@27716 fix, and multi-delete by bug@29136.
  
Fixed by saving the parent statement's flag of whether it modified a
non-transactional table, and merging (OR-ing) that value with the one
computed in mysql_execute_command.
  
The thd->no_trans_update members are moved into thd->transaction.`member`,
and assertions are added.
Effectively, the following properties hold.
  
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
   reflects whether such a table was modified by the substatement.
   That also respects the THD::really_abort_on_warning() requirements.
2. Eventually, thd->transaction.stmt.modified_non_trans_table is computed as
   the union of the values of all invoked sub-statements.
   That fixes this bug#27417.

Computation of thd->transaction.all.modified_non_trans_table is refined to be
based on the stmt's value in all cases, including the INSERT .. SELECT
statement, which before the patch had an extra issue (bug@28960).
Minor issues are covered in mysql_load, mysql_delete, and binlogging of
INSERT INTO temp_table SELECT.
  
The supplied test verifies this only to a limited extent, mostly via asserts.
The ultimate testing is deferred to bug@13270 and bug@23333.
2007-07-30 18:27:36 +03:00
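
A stand-alone sketch of the flag handling described above (simplified stand-ins, not the real THD/transaction layout): the parent statement's modified_non_trans_table value is saved before a substatement runs and merged back in afterwards, so a nested reset cannot lose it.

  struct statement_state { bool modified_non_trans_table = false; };

  struct transaction_state {
    statement_state stmt;   // per (sub)statement
    statement_state all;    // whole transaction
  };

  // Run one substatement; `modifies_non_trans` stands in for whatever the
  // substatement actually did to non-transactional tables.
  static void execute_substatement(transaction_state &trx, bool modifies_non_trans)
  {
    const bool parent_value = trx.stmt.modified_non_trans_table;  // save parent's flag
    trx.stmt.modified_non_trans_table = false;                    // fresh substatement

    trx.stmt.modified_non_trans_table = modifies_non_trans;       // substatement body

    // Union (merge) with the parent's value instead of overwriting it.
    trx.stmt.modified_non_trans_table |= parent_value;
    trx.all.modified_non_trans_table |= trx.stmt.modified_non_trans_table;
  }

  int main()
  {
    transaction_state trx;
    trx.stmt.modified_non_trans_table = true;   // parent already touched MyISAM
    execute_substatement(trx, false);           // substatement touched nothing
    return (trx.stmt.modified_non_trans_table &&
            trx.all.modified_non_trans_table) ? 0 : 1;
  }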
monty@mysql.com/nosik.monty.fi
b16289a5e0 Slow query log to file now displays queries with microsecond precision
--long-query-time is now given in seconds with microseconds as decimals
--min_examined_row_limit added for slow query log
long_query_time user variable is now double with 6 decimals
Added functions to get time in microseconds
Added faster time() functions for systems that have gethrtime()  (Solaris)
We now do fewer time() calls.
Added field->in_read_set() and field->in_write_set() for easier field manipulation by handlers
set_var.cc and my_getopt() can now handle DOUBLE variables.
All time() calls changed to my_time()
my_time() now retries if the time() call fails.
Added debug function for stopping in mysql_admin_table() when tables are locked
Some trivial function and struct variable renames to avoid merge errors.
Fixed compiler warnings
Initialization of some time variables on windows moved to my_init()
2007-07-30 11:33:50 +03:00
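
A minimal sketch of the slow-query check implied above: query duration measured in microseconds, a long_query_time given in seconds with a fractional part, and the new min_examined_row_limit filter. Names and thresholds are illustrative.

  #include <cstdint>

  static bool should_log_slow_query(uint64_t query_time_usec,
                                    double long_query_time_sec,
                                    uint64_t examined_rows,
                                    uint64_t min_examined_row_limit)
  {
    const uint64_t threshold_usec =
        static_cast<uint64_t>(long_query_time_sec * 1000000.0);
    return query_time_usec >= threshold_usec &&
           examined_rows >= min_examined_row_limit;
  }

  int main()
  {
    // A 0.25 s query with long_query_time = 0.1 and no row limit is logged.
    return should_log_slow_query(250000, 0.1, 10, 0) ? 0 : 1;
  }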
gshchepa/uchum@gleb.loc
e6c7e7d630 Merge gleb.loc:/home/uchum/work/bk/5.0-opt
into  gleb.loc:/home/uchum/work/bk/5.1-opt
2007-07-27 13:38:11 +05:00
malff/marcsql@weblab.(none)
c7bbd8917c WL#3984 (Revise locking of mysql.general_log and mysql.slow_log)
Bug#25422 (Hang with log tables)
Bug 17876 (Truncating mysql.slow_log in a SP after using cursor locks the
          thread)
Bug 23044 (Warnings on flush of a log table)
Bug 29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
           a deadlock)

Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.

The root cause traces to the following code:
in sql_base.cc, open_table()
  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so waiting for these non-existent threads to release table locks
causes the deadlock.

In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always open does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY

With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- remove totally the usage of fake THD objects
- clarify how locking of log tables is implemented in general.

See WL#3984 for details related to the new locking design.

Additional changes (misc bugs exposed and fixed):

1)

mysqldump would ignore some tables in dump_all_tables_in_db(),
but forget to ignore the same ones in dump_all_views_in_db().

2)

mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.

3)

Internal errors handlers could intercept errors but not warnings
(see sql_error.cc).

4)

Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
  in_use->some_tables_deleted=1;
against another thread, without any consideration about thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
2007-07-27 00:31:06 -06:00
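
A stand-alone sketch of the new LOGGER discipline described above (hypothetical helper names): every log write opens the log table, writes one row, and closes it again, instead of keeping the table permanently open under a fake THD.

  #include <cstdio>
  #include <string>

  struct log_table_sketch { std::string name; bool open; };

  static bool open_log_table(log_table_sketch &t)  { t.open = true;  return true; }
  static void close_log_table(log_table_sketch &t) { t.open = false; }

  static bool write_log_row(log_table_sketch &table, const std::string &row)
  {
    if (!open_log_table(table))    // open just for this write
      return false;
    std::printf("%s <- %s\n", table.name.c_str(), row.c_str());
    close_log_table(table);        // never left open across statements
    return true;
  }

  int main()
  {
    log_table_sketch general{"mysql.general_log", false};
    return write_log_row(general, "SELECT 1") ? 0 : 1;
  }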
gkodinov/kgeorge@magare.gmz
9bc5e92614 Merge gkodinov@bk-internal.mysql.com:/home/bk/mysql-5.0-opt
into  magare.gmz:/home/kgeorge/mysql/autopush/B29571-5.0-opt
2007-07-26 11:32:27 +03:00
gkodinov/kgeorge@magare.gmz
02fe5e27d2 Bug #29571: INSERT DELAYED IGNORE written to binary log on
the master but not on the slave

MySQL can decide to "downgrade" an INSERT DELAYED statement
to a normal insert in certain situations.
One such situation is when the slave is replaying a
replication feed.
However, INSERT DELAYED is logged even if there are no updates,
whereas the normal INSERT is not logged in such cases.

Fixed by always logging a "downgraded" INSERT DELAYED, even
if there were no updates.
2007-07-26 11:31:10 +03:00
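
A tiny sketch of the logging rule described above (toy logic, not the real binlog code): a "downgraded" INSERT DELAYED is written to the binary log even when it changed no rows, so master and slave stay consistent.

  static bool should_binlog_insert(bool was_downgraded_delayed,
                                   unsigned long changed_rows)
  {
    if (was_downgraded_delayed)
      return true;               // always log the downgraded statement
    return changed_rows > 0;     // otherwise log only if something changed
  }

  int main()
  {
    return should_binlog_insert(true, 0) ? 0 : 1;
  }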
kostja@bodhi.(none)
0e728d1463 Merge bk-internal.mysql.com:/home/bk/mysql-5.1
into  bodhi.(none):/opt/local/work/mysql-5.1-runtime
2007-07-21 17:54:23 +04:00
gshchepa/uchum@gleb.loc
95d43074cc Merge gleb.loc:/home/uchum/work/bk/5.1
into  gleb.loc:/home/uchum/work/bk/5.1-opt
2007-07-20 04:21:46 +05:00
kostja@bodhi.(none)
4c7d2df910 A follow-up fix for Bug#29431 "killing an insert delayed thread causes
crash" in 5.1
2007-07-19 21:10:19 +04:00
kostja@bodhi.(none)
5e72901c60 Merge bodhi.(none):/opt/local/work/mysql-5.0-runtime
into  bodhi.(none):/opt/local/work/mysql-5.1-runtime
2007-07-19 20:27:35 +04:00
kostja@bodhi.(none)
c4bcce64a4 Rename all references to 'Delayed_insert' instances from 'tmp' to 'di'
for consistency.
2007-07-19 19:36:52 +04:00
kostja@bodhi.(none)
1aae30060e A fix for Bug#29431 killing an insert delayed thread causes crash
No test case, since the bug requires a stress case with 30 INSERT DELAYED
threads and 1 killer thread to repeat. The patch is verified
manually.
Review fixes.

A server running DELAYED inserts would deadlock itself
or crash under high load if some of the delayed threads were KILLed
in the meantime.

The fix is to change the internal lock acquisition order of the delayed
insert subsystem and to ensure that
Delayed_insert::table_list::db does not point to volatile memory in some
cases.
For details, please see a comment for sql_insert.cc.
2007-07-19 19:28:00 +04:00
dkatz@damien-katzs-computer.local
639fd4c555 Bug #29692 Single row inserts can incorrectly report a huge number of row insertions
This bug was caused by an uninitialized value that was the result of a bad 5.0 merge.
2007-07-16 14:53:05 -04:00
evgen@moonbone.local
c502557183 Merge epotemkin@bk-internal.mysql.com:/home/bk/mysql-5.1-opt
into  moonbone.local:/mnt/gentoo64/work/29310-bug-5.1-opt-mysql
2007-07-09 00:04:03 +04:00
evgen@moonbone.local
42d1e3c457 Bug#29310: An InnoDB table was updated when the data wasn't actually changed.
When a table is being updated it has two sets of fields: fields required for
condition checks and fields to be updated. A storage engine is allowed
not to retrieve the columns marked for update. Due to this, records can't
be compared to see whether the data has actually changed. This makes the
server always update records regardless of any data change.

Now, when an auto-updatable timestamp field is present and the server sees that
a table handler isn't going to retrieve write-only fields, all such
fields are marked to be read in order to force the handler to retrieve them.
2007-07-08 18:13:04 +04:00
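
A minimal sketch of the fix above using toy bitsets in place of the server's read/write column sets: when an auto-updatable TIMESTAMP column is present, every column that appears only in the write set is added to the read set, so the engine retrieves it and old/new values can be compared.

  #include <bitset>

  static const int MAX_FIELDS = 16;

  static void force_read_of_write_only_fields(std::bitset<MAX_FIELDS> &read_set,
                                              const std::bitset<MAX_FIELDS> &write_set,
                                              bool has_auto_timestamp)
  {
    if (!has_auto_timestamp)
      return;
    for (int i = 0; i < MAX_FIELDS; i++)
      if (write_set[i] && !read_set[i])
        read_set.set(i);   // write-only column must now also be retrieved
  }

  int main()
  {
    std::bitset<MAX_FIELDS> read_set, write_set;
    write_set.set(3);   // column 3 is update-only
    force_read_of_write_only_fields(read_set, write_set, true);
    return read_set[3] ? 0 : 1;
  }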
antony@ppcg5.local
2b52524ea5 Merge anubis.xiphis.org:/usr/home/antony/work/mysql-5.0-engines
into  anubis.xiphis.org:/usr/home/antony/work/mysql-5.0-engines.merge
2007-07-06 09:00:40 -07:00
antony@anubis.xiphis.org
673a8708d1 Merge anubis.xiphis.org:/usr/home/antony/work/mysql-5.1-engines
into  anubis.xiphis.org:/usr/home/antony/work/mysql-5.1-merge
2007-07-01 20:56:47 -07:00
istruewing@synthia.local
bc3e18cd39 Merge synthia.local:/home/mydev/mysql-5.0-axmrg
into  synthia.local:/home/mydev/mysql-5.1-axmrg
2007-06-30 13:17:49 +02:00
antony@ppcg5.local
b0b0b0fbc4 Bug#25511
"Federated INSERT failures"
  Federated does not correctly handle "INSERT ... ON DUPLICATE KEY UPDATE".
  However, implementing such support is not reasonably possible without
  increasing the complexity of the storage engine: it would require checking
  that constraints on the remote server match the local server and parsing
  error messages.
  This patch causes 'ON DUPLICATE KEY' to fail with an ER_DUP_KEY message
  if a conflict occurs, instead of failing silently.
2007-06-28 13:36:26 -07:00
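
A minimal sketch of the behaviour described above (toy code, not the Federated handler): when the statement uses ON DUPLICATE KEY UPDATE, which the engine cannot honour against the remote server, a duplicate-key conflict is reported as an explicit duplicate-key error rather than being silently swallowed.

  static const int OK_SKETCH = 0;
  static const int DUP_KEY_ERROR_SKETCH = 1;   // stands in for ER_DUP_KEY

  // insert_dup_update stands in for the engine having been told that the
  // statement contains an ON DUPLICATE KEY UPDATE clause.
  static int write_row_sketch(bool insert_dup_update, bool remote_reported_dup_key)
  {
    if (remote_reported_dup_key && insert_dup_update)
      return DUP_KEY_ERROR_SKETCH;   // fail loudly instead of silently
    return OK_SKETCH;
  }

  int main()
  {
    return write_row_sketch(true, true) == DUP_KEY_ERROR_SKETCH ? 0 : 1;
  }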
gkodinov/kgeorge@magare.gmz
71aaf52a2f Bug #29157: UPDATE, changed rows incorrect
Sometimes the number of really updated rows (with changed
column values) cannot be determined at the server level
alone (e.g. if the storage engine does not return enough
column values to verify that). So the only dependable way
in such cases is to let the storage engine return that
information if possible.
Fixed the bug at the server level by providing a way for the
storage engine to return information about whether it
actually updated the row or whether the old and the new column
values are the same. It can do that by returning
HA_ERR_RECORD_IS_THE_SAME from ha_update_row().
Note that each storage engine may choose not to try to
return this status code, so this behaviour remains 
storage engine specific.
2007-06-28 16:07:55 +03:00
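
A stand-alone sketch of the contract described above (toy handler; HA_ERR_RECORD_IS_THE_SAME is the real error name, its numeric value here is only illustrative): an engine that can compare the old and new row images may report that nothing changed, and the server then does not count the row as updated.

  #include <cstring>

  static const int HA_ERR_RECORD_IS_THE_SAME = 169;   // value illustrative here

  struct toy_handler {
    // Returns 0 on a real update, HA_ERR_RECORD_IS_THE_SAME when old == new.
    int ha_update_row(const unsigned char *old_rec,
                      const unsigned char *new_rec, size_t length)
    {
      if (std::memcmp(old_rec, new_rec, length) == 0)
        return HA_ERR_RECORD_IS_THE_SAME;
      return 0;   // the engine would write the new record here
    }
  };

  int main()
  {
    toy_handler h;
    unsigned char rec[4] = {1, 2, 3, 4};
    unsigned long updated_rows = 0;

    int err = h.ha_update_row(rec, rec, sizeof(rec));
    if (err == 0)
      updated_rows++;                           // count only real changes
    else if (err != HA_ERR_RECORD_IS_THE_SAME)
      return 1;                                 // a genuine error

    return updated_rows == 0 ? 0 : 1;
  }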