Commit graph

18231 commits

Author SHA1 Message Date
Sven Sandberg
b70973d326 post-push fixes for BUG#39934: updating test cases 2009-07-15 18:41:02 +02:00
Sven Sandberg
45f724ec54 Merged fix for BUG#39934 up a few revisions.
NOTE: This undoes changes by BUG#42829 in sql_class.cc:binlog_query().
I will revert the change in a post-push fix (the binlog filter should
be checked in sql_base.cc:decide_logging_format()).
2009-07-14 22:12:27 +02:00
Sven Sandberg
f3985c649d BUG#39934: Slave stops for engine that only support row-based logging
General overview:
The logic for switching to row format when binlog_format=MIXED had
numerous flaws. The underlying problem was the lack of a consistent
architecture.
General purpose of this changeset:
This changeset introduces an architecture for switching to row format
when binlog_format=MIXED. It enforces the architecture where it has
to. It leaves some bugs to be fixed later. It adds extensive tests to
verify that unsafe statements work as expected and that appropriate
errors are produced by problems with the selection of binlog format.
It was not practical to split this into smaller pieces of work.

Problem 1:
To determine the logging mode, the code has to take several parameters
into account (namely: (1) the value of binlog_format; (2) the
capabilities of the engines; (3) the type of the current statement:
normal, unsafe, or row injection). These parameters may conflict in
several ways, namely:
 - binlog_format=STATEMENT for a row injection
 - binlog_format=STATEMENT for an unsafe statement
 - binlog_format=STATEMENT for an engine only supporting row logging
 - binlog_format=ROW for an engine only supporting statement logging
 - statement is unsafe and engine does not support row logging
 - row injection in a table that does not support statement logging
 - statement modifies one table that does not support row logging and
   one that does not support statement logging
Several of these conflicts were not detected, or were detected with
an inappropriate error message. The problem of BUG#39934 was that no
appropriate error message was written for the case when an engine
only supporting row logging executed a row injection with
binlog_format=ROW. However, all above cases must be handled.
Fix 1:
Introduce new error codes (sql/share/errmsg.txt). Ensure that all
conditions are detected and handled in decide_logging_format().

Problem 2:
The binlog format shall be determined once per statement, in
decide_logging_format(). It shall not be changed before or after that.
Before decide_logging_format() is called, all information necessary to
determine the logging format must be available. This principle ensures
that all unsafe statements are handled in a consistent way.
However, this principle is not followed:
thd->set_current_stmt_binlog_row_based_if_mixed() is called in several
places, including from code executing UPDATE..LIMIT,
INSERT..SELECT..LIMIT, DELETE..LIMIT, INSERT DELAYED, and
SET @@binlog_format. After Problem 1 was fixed, that caused
inconsistencies where these unsafe statements would not print the
appropriate warnings or errors for some of the conflicts.
Fix 2:
Remove calls to THD::set_current_stmt_binlog_row_based_if_mixed() from
code executed after decide_logging_format(). Compensate by calling
set_current_stmt_unsafe() at parse time. This way, all unsafe statements
are detected by decide_logging_format().

Problem 3:
INSERT DELAYED is not unsafe: it is logged in statement format even if
binlog_format=MIXED, and no warning is printed even if
binlog_format=STATEMENT. This is BUG#45825.
Fix 3:
Made INSERT DELAYED set itself to unsafe at parse time. This allows
decide_logging_format() to detect that a warning should be printed or
the binlog_format changed.

Problem 4:
The LIMIT clause was not marked as unsafe when executed inside stored
functions/triggers/views/prepared statements. This is
BUG#45785.
Fix 4:
Mark statements containing a LIMIT clause as unsafe at
parse time, instead of at execution time. This allows propagating
unsafe-ness to the view.
2009-07-14 21:31:19 +02:00
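
The conflict handling that Fix 1 describes can be pictured with a small, self-contained sketch; the enum names, the EngineCaps structure and the return convention below are illustrative stand-ins, not the actual decide_logging_format() in the server sources.

  #include <cstdio>

  enum BinlogFormat { FMT_STATEMENT, FMT_MIXED, FMT_ROW };
  enum StmtType { STMT_NORMAL, STMT_UNSAFE, STMT_ROW_INJECTION };

  struct EngineCaps {
    bool row_capable;    // engine can write row events
    bool stmt_capable;   // engine can write statement events
  };

  // Returns the chosen format, or -1 when the combination must raise an error.
  int decide_format(BinlogFormat fmt, StmtType stmt, EngineCaps caps) {
    if (!caps.row_capable && !caps.stmt_capable)
      return -1;                                  // table cannot be logged at all
    if (stmt == STMT_ROW_INJECTION)               // row injections need row logging
      return (fmt == FMT_STATEMENT || !caps.row_capable) ? -1 : FMT_ROW;
    if (stmt == STMT_UNSAFE && fmt != FMT_STATEMENT)
      return caps.row_capable ? FMT_ROW : -1;     // MIXED/ROW switch to row format
    // For STATEMENT (unsafe statements only warn here) and for normal MIXED
    // statements, honor binlog_format within the engine's capabilities.
    if (fmt == FMT_ROW)
      return caps.row_capable ? FMT_ROW : -1;
    if (fmt == FMT_STATEMENT)
      return caps.stmt_capable ? FMT_STATEMENT : -1;
    return caps.stmt_capable ? FMT_STATEMENT : FMT_ROW;
  }

  int main() {
    EngineCaps row_only = {true, false};
    // binlog_format=STATEMENT with a row-only engine: one of the conflicts listed above.
    printf("%d\n", decide_format(FMT_STATEMENT, STMT_NORMAL, row_only));   // -1
  }
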
Georgi Kodinov
392259bba5 automerge 2009-07-14 11:59:17 +03:00
Jim Winstead
f9025ea331 Merge bug fixes 2009-07-13 12:11:16 -07:00
Georgi Kodinov
8f64c16c25 Merge of the fix for bug to 5.1. 2009-07-13 20:36:54 +03:00
Georgi Kodinov
80dd3a593a Bug : Embedded SELECT inside UPDATE or DELETE can timeout
without error

When using quick access methods for searching rows in UPDATE or 
DELETE there was no check whether a fatal error had already been sent 
to the client while evaluating the quick condition.
As a result a false OK (following the error) was sent to the 
client and the error was thus transformed into a warning.

Fixed by checking for errors sent to the client during 
SQL_SELECT::check_quick() and treating them as real errors.

Fixed a wrong test case in group_min_max.test.
Fixed a wrong return code in mysql_update() and mysql_delete().
2009-07-13 18:11:16 +03:00
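
One way to read the fix above: after SQL_SELECT::check_quick() the caller must ask whether an error has already been sent to the client and bail out instead of reporting success. The following standalone sketch mimics that control flow with stand-in THD/SQL_SELECT types; it is not the server source.

  #include <cstdio>

  struct THD {
    bool error_sent = false;                  // stand-in for the diagnostics area
    bool is_error() const { return error_sent; }
  };

  struct SQL_SELECT {
    // Returns true if the range is impossible; may also report an error to the client.
    bool check_quick(THD *thd) {
      thd->error_sent = true;                 // pretend evaluation hit a fatal error
      return false;
    }
  };

  // Stand-in for the relevant part of mysql_update()/mysql_delete().
  int do_update(THD *thd, SQL_SELECT *select) {
    bool impossible = select->check_quick(thd);
    if (thd->is_error())
      return 1;        // the fix: an error sent during check_quick() is a real error
    if (impossible)
      return 0;        // empty range: zero rows affected, but no error
    return 0;          // otherwise proceed with the update
  }

  int main() {
    THD thd;
    SQL_SELECT select;
    printf("rc=%d\n", do_update(&thd, &select));   // rc=1, no bogus OK after the error
  }
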
Georgi Kodinov
adff023c71 merge 5.0-bugteam -> 5.1-bugteam 2009-07-13 14:34:23 +03:00
Georgi Kodinov
dccac4ff11 Addendum to the fix for bug : fixed the error handling 2009-07-13 14:17:14 +03:00
Alexey Kopytov
606e5bf699 Automerge. 2009-07-12 20:56:43 +06:00
Gleb Shchepa
8724706aa8 Bug : List of derived tables acts like a chain of
mutually-nested subqueries

Queries of the form

  SELECT * FROM (SELECT 1) AS t1,
                (SELECT 2) AS t2,...
                (SELECT 32) AS t32

caused the "Too high level of nesting for select" error
as if the query has a form

  SELECT * FROM (SELECT 1 FROM (SELECT 2 FROM (SELECT 3 FROM...


The table_factor parser rule has been modified to adjust
the LEX::nest_level variable value after every derived table.
2009-07-11 23:44:29 +05:00
Georgi Kodinov
55a426f130 automerge 2009-07-10 17:07:36 +03:00
Georgi Kodinov
598c8e9346 automerge 2009-07-10 17:04:58 +03:00
Georgi Kodinov
6a7240b8d7 Addendum to the fix for bug : fixed the test case 2009-07-10 17:03:09 +03:00
Georgi Kodinov
5a6809590c Bug : group_concat(... order by) crashes server when
sort_buffer_size cannot allocate

The NULL return from tree_insert() (on low memory) was not
checked for in Item_func_group_concat::add(). As a result,
a crash happened under low-memory conditions.

Fixed by properly checking the return code.
2009-07-10 15:00:34 +03:00
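
As a sketch of the nature of this fix (with throwaway stand-ins for TREE and the item class, not the real GROUP_CONCAT code): the return value of the inserting call has to be tested before it is relied upon.

  #include <cstdio>
  #include <cstdlib>

  struct TREE_ELEMENT { int count; };

  // Stand-in for tree_insert(): returns NULL when memory cannot be allocated.
  TREE_ELEMENT *tree_insert(void *tree, const void *key, unsigned key_size) {
    (void)tree; (void)key; (void)key_size;
    return static_cast<TREE_ELEMENT *>(calloc(1, sizeof(TREE_ELEMENT)));
  }

  // Stand-in for Item_func_group_concat::add(); returns true on error.
  bool group_concat_add(void *tree, const char *row, unsigned len) {
    TREE_ELEMENT *el = tree_insert(tree, row, len);
    if (el == NULL)
      return true;      // the fix: report the allocation failure instead of crashing
    el->count++;        // safe to use the element only after the NULL check
    free(el);
    return false;
  }

  int main() {
    printf("error=%d\n", group_concat_add(NULL, "abc", 3));
  }
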
Satya B
f5bec50697 Applying InnoDB snapshot 5.1-ss5488, part 4. Fixes BUG#21704
1. BUG#21704 - Renaming column does not update FK definition

2. Changes in mysql-test/include/mtr_warnings.sql so that the testcase
   for BUG#21704 doesn't fail because of the warnings generated.

Detailed revision comments:

r5488 | vasil | 2009-07-09 19:16:44 +0300 (Thu, 09 Jul 2009) | 13 lines
branches/5.1:

Fix Bug#21704 Renaming column does not update FK definition

by checking whether a column that participates in a FK definition is being
renamed and denying the ALTER in this case.

The patch was originally developed by Davi Arnaut <Davi.Arnaut@Sun.COM>:
http://lists.mysql.com/commits/77714
and was later adjusted to conform to InnoDB coding style by me (Vasil),
I also added some more comments and moved the bug specific mysql-test to
a separate file to make it more manageable and flexible.
2009-07-10 17:05:53 +05:30
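
A rough illustration of the check described in the snapshot notes: before accepting an ALTER that renames columns, scan the table's foreign key definitions for the old column name and deny the operation on a match. The structures and function names below are invented for the sketch and are not the InnoDB source.

  #include <cstdio>
  #include <cstring>

  struct ForeignKeyDef { const char *columns[4]; int n_columns; };

  bool column_is_in_foreign_key(const ForeignKeyDef *fks, int n_fks, const char *old_name) {
    for (int i = 0; i < n_fks; i++)
      for (int j = 0; j < fks[i].n_columns; j++)
        if (strcmp(fks[i].columns[j], old_name) == 0)
          return true;
    return false;
  }

  // Returns nonzero (deny the ALTER) if any renamed column appears in a FK definition.
  int check_rename_against_fks(const ForeignKeyDef *fks, int n_fks,
                               const char *renamed_columns[], int n_renamed) {
    for (int i = 0; i < n_renamed; i++)
      if (column_is_in_foreign_key(fks, n_fks, renamed_columns[i]))
        return 1;
    return 0;
  }

  int main() {
    ForeignKeyDef fk = {{"parent_id"}, 1};
    const char *renamed[] = {"parent_id"};
    printf("deny=%d\n", check_rename_against_fks(&fk, 1, renamed, 1));   // deny=1
  }
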
Alexey Kopytov
9cb84f8466 Bug : invalid memory reads and writes when altering merge
and base tables 

myrg_attach_children() could reuse a buffer that was allocated 
previously based on a definition of a child table. The problem 
was that the child's definition might have been changed, so 
reusing the buffer could lead to crashes or valgrind errors 
under some circumstances. 
 
Fixed by changing myrg_attach_children() so that the 
rec_per_key_part buffer is reused only when the child table
has not changed, and reallocated otherwise (the old buffer is 
deallocated if necessary).
2009-07-10 17:34:03 +06:00
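
A small sketch of the buffer policy the fix describes, using invented structures rather than the MyISAM MERGE sources: the cached rec_per_key_part buffer is kept only while it still matches the child definition, and is reallocated (with the old buffer freed) otherwise.

  #include <cstdlib>

  struct MergeParent {
    unsigned long *rec_per_key_part = nullptr;
    size_t rec_per_key_size = 0;       // size the buffer was allocated for
  };

  bool attach_child(MergeParent *parent, size_t child_key_parts) {
    size_t needed = child_key_parts * sizeof(unsigned long);
    if (parent->rec_per_key_part == nullptr || parent->rec_per_key_size != needed) {
      free(parent->rec_per_key_part);                 // drop a stale buffer, if any
      parent->rec_per_key_part =
          static_cast<unsigned long *>(calloc(child_key_parts, sizeof(unsigned long)));
      if (parent->rec_per_key_part == nullptr) return false;
      parent->rec_per_key_size = needed;
    }
    // ... safe to use parent->rec_per_key_part for this child now
    return true;
  }

  int main() { MergeParent p; return attach_child(&p, 8) ? 0 : 1; }
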
Satya B
1a40d4cd5e Applying InnoDB snapshot 5.1-ss5488, part 2. Fixes BUG#45749
BUG#45749 - Race condition in SET GLOBAL innodb_commit_concurrency=DEFAULT

Detailed revision comments:

r5419 | marko | 2009-06-25 16:11:57 +0300 (Thu, 25 Jun 2009) | 18 lines
branches/5.1: Merge r5418 from branches/zip:

  ------------------------------------------------------------------------
  r5418 | marko | 2009-06-25 15:55:52 +0300 (Thu, 25 Jun 2009) | 5 lines
  Changed paths:
     M /branches/zip/ChangeLog
     M /branches/zip/handler/ha_innodb.cc
     M /branches/zip/mysql-test/innodb_bug42101-nonzero.result
     M /branches/zip/mysql-test/innodb_bug42101-nonzero.test
     M /branches/zip/mysql-test/innodb_bug42101.result
     M /branches/zip/mysql-test/innodb_bug42101.test
  
  branches/zip: Fix a race condition caused by
  SET GLOBAL innodb_commit_concurrency=DEFAULT. (Bug )
  When innodb_commit_concurrency is initially set nonzero,
  DEFAULT would change it back to 0, triggering Bug .
  rb://139 approved by Heikki Tuuri.
  ------------------------------------------------------------------------
2009-07-10 16:06:07 +05:30
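
The race described in r5419/r5418 comes from DEFAULT mapping back to 0. The guard it relies on can be pictured like this (illustrative code, not the InnoDB handler): a change is rejected whenever it would cross the zero/nonzero boundary, so SET ... = DEFAULT cannot silently flip a nonzero setting back to 0.

  #include <cstdio>

  static unsigned long innodb_commit_concurrency = 16;   // assume started nonzero

  // Returns true (reject) when the update would cross the zero/nonzero boundary.
  bool commit_concurrency_validate(unsigned long new_value) {
    return (innodb_commit_concurrency == 0) != (new_value == 0);
  }

  int main() {
    unsigned long default_value = 0;                      // DEFAULT maps to 0
    if (commit_concurrency_validate(default_value))
      printf("SET GLOBAL innodb_commit_concurrency=DEFAULT rejected\n");
    else
      innodb_commit_concurrency = default_value;
  }
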
Georgi Kodinov
d9e82ba86c Bug (Optimizing with ORDER BY) and bug#45828 (Optimizer won't
use partial primary key if another index can prevent filesort)

The fix for bug  causes the covering ordering indexes to be 
preferred unconditionally over non-covering and ref indexes.

Fixed by comparing the cost of using a covering index to the cost of
using a ref index even for covering ordering indexes.
Added an assertion to clarify the condition the local variables should
be in.
2009-07-07 15:52:34 +03:00
Georgi Kodinov
bb90779c41 Reverted the hiding of the error exposed by this suite. 2009-07-07 17:18:44 +03:00
Georgi Kodinov
e52358fa66 fixed a failing test ctype_gbk_binlog : Table 't2' already exists 2009-07-07 16:09:06 +03:00
Ramil Kalimullin
381da0c9d8 Fix for bug#42364 reverted. 2009-07-06 11:55:53 +05:00
Alexey Kopytov
0c5207f52f Automerge. 2009-07-03 14:43:54 +04:00
Alexey Kopytov
c936b6444a Manual merge. 2009-07-03 14:36:04 +04:00
Sergey Glukhov
c2ed9d36eb 5.0-bugteam->5.1-bugteam merge 2009-07-03 13:39:22 +05:00
Sergey Glukhov
1a539f170d Bug#45806 crash when replacing into a view with a join!
The crash happened because for views that are joins
table_list->table == 0, so any
table_list->table-> method call leads to a crash.
The fix is to call the file->extra()
method for all tables belonging to the view.
2009-07-03 13:35:00 +05:00
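
The shape of that fix can be sketched with simplified stand-ins for TABLE_LIST (not the real definitions): when table_list->table is NULL because the target is a join view, walk the view's underlying tables and invoke file->extra() on each of them.

  #include <cstdio>

  struct Handler { void extra(int op) { printf("extra(%d)\n", op); } };
  struct Table { Handler file; };
  struct TableList {
    Table *table;            // NULL for a view that is a join
    TableList *next_leaf;    // underlying base tables of the view
  };

  void prepare_for_replace(TableList *tl) {
    if (tl->table != nullptr) {          // plain table or single-table view
      tl->table->file.extra(1);
      return;
    }
    for (TableList *leaf = tl->next_leaf; leaf; leaf = leaf->next_leaf)
      if (leaf->table)
        leaf->table->file.extra(1);      // the fix: cover every table behind the view
  }

  int main() {
    Table t1, t2;
    TableList leaf2 = {&t2, nullptr}, leaf1 = {&t1, &leaf2}, view = {nullptr, &leaf1};
    prepare_for_replace(&view);
  }
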
Bernt M. Johnsen
7e4cb19a8c Prepare for push 2009-07-03 10:36:10 +02:00
Sergey Glukhov
5072f4da36 Bug#42364 SHOW ERRORS returns empty resultset after dropping non-existent table
Enabled storing messages in the error message list
for the 'drop table' command.
2009-07-03 13:22:06 +05:00
Bernt M. Johnsen
5f1d2aaee7 automerge 2009-07-03 10:33:34 +02:00
Bernt M. Johnsen
ff002a8ca3 Bug#15866 Prepared for push on 5.1 2009-07-03 10:23:16 +02:00
Bernt M. Johnsen
8b5986d77f Bug#15866 Prepared for push on 5.0 2009-07-03 10:19:32 +02:00
Alexey Kopytov
4692566f9e Bug : Bad effects with CREATE TABLE and DECIMAL
Using DECIMAL constants with more than 65 digits in CREATE 
TABLE ... SELECT led to bogus errors in release builds or 
assertion failures in debug builds. 
 
The problem was an inconsistency in how DECIMAL constants and 
fields are handled internally. We allow arbitrarily long 
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to 
M<=65 and D<=30. my_decimal_precision_to_length() was used in 
both Item and Field code and truncated precision to 
DECIMAL_MAX_PRECISION when calculating value length without 
adjusting precision and decimals. As a result, a DECIMAL 
constant with more than 65 digits ended up having a length less 
than its precision or decimals, which led to assertion failures. 
 
Fixed by modifying my_decimal_precision_to_length() so that 
precision is truncated to DECIMAL_MAX_PRECISION only for Field 
objects, as indicated by the new 'truncate' parameter. 
 
Another inconsistency fixed by this patch is how DECIMAL 
constants and expressions are handled for CREATE ... SELECT. 
create_tmp_field_from_item() (which is used for constants) was 
changed as a part of the bugfix for bug  to handle long 
DECIMAL constants gracefully. Item_func::tmp_table_field() 
(which is used for expressions) on the other hand was still 
using a simplistic approach when creating a Field_new_decimal 
from a DECIMAL expression.
2009-07-03 11:41:19 +04:00
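
The effect of the new 'truncate' parameter can be shown with a toy version of my_decimal_precision_to_length(); the length arithmetic here is simplified, and only the 65-digit maximum mirrors the real DECIMAL_MAX_PRECISION constant.

  #include <cstdio>

  const unsigned DECIMAL_MAX_PRECISION = 65;

  unsigned my_decimal_precision_to_length(unsigned precision, unsigned scale,
                                          bool unsigned_flag, bool truncate) {
    if (truncate && precision > DECIMAL_MAX_PRECISION)
      precision = DECIMAL_MAX_PRECISION;        // only Field objects get clamped
    unsigned length = precision;
    if (scale > 0) length += 1;                 // room for the decimal point
    if (!unsigned_flag) length += 1;            // room for the sign
    return length;
  }

  int main() {
    // An 80-digit constant keeps its full length; a column definition is clamped.
    printf("constant: %u\n", my_decimal_precision_to_length(80, 30, false, false));
    printf("field:    %u\n", my_decimal_precision_to_length(80, 30, false, true));
  }
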
Georgi Kodinov
5572f8e7de Bug : crash accessing partitioned table and sql_mode
contains ONLY_FULL_GROUP_BY

The partitioning code needs to issue an Item::fix_fields()
on the partitioning expression in order to prepare 
it for being evaluated.
It does this by creating a special table and a table list 
for the scope of the partitioning expression.
But when checking ONLY_FULL_GROUP_BY, Item_field::fix_fields()
relied on cached_table always being set and tried to use it
to get the select_lex of the SELECT the field's table belongs to.
But cached_table was not set by the partitioning code
that creates the artificial TABLE_LIST used to resolve the
partitioning expression, and this resulted in a crash.
 
Fixed by rectifying the following errors:
1. Item_field::fix_fields(): the code that checks for 
ONLY_FULL_GROUP_BY relies on tables having 
cacheable_table set. This is mostly true; the only 
two exceptions are the partitioning context table
and the trigger context table.
Fixed by taking the current parsing context if no pointer
to the TABLE_LIST instance is present in cached_table.

2. fix_fields_part_func(): 

2a. The code that adds the table being created to the 
scope for the partitioning expression is mostly a copy 
of add_table_to_list() and friends, with one exception:
it was not marking the table as cacheable (something that
the normal add_table_to_list() does). This caused the 
problem in the check for ONLY_FULL_GROUP_BY in 
Item_field::fix_fields() to appear.
Fixed by setting the correct members to make the table
cacheable.
The ideal structural fix for this is to use a unified 
interface for adding a table to a table list 
(add_table_to_list?); noted in a TODO comment.

2b. Item::fix_fields() was called with a NULL destination
pointer. This causes uninitialized memory reads in the 
overloaded ::fix_fields() function (namely 
Item_field::fix_fields()) as it expects a non-zero pointer 
there. Fixed by passing the source pointer, similarly to how 
it's done in JOIN::prepare().
2009-07-02 17:42:00 +03:00
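
Fix 1 above boils down to a fallback when no cached TABLE_LIST is available; a compact sketch with stand-in types (not the server classes) looks like this.

  #include <cstdio>

  struct SelectLex { int nest_level; };
  struct TableList { SelectLex *select_lex; };
  struct ParseContext { SelectLex *current_select; };

  SelectLex *select_lex_for_only_full_group_by(TableList *cached_table,
                                               ParseContext *ctx) {
    if (cached_table && cached_table->select_lex)
      return cached_table->select_lex;     // normal case: table list is cached
    return ctx->current_select;            // partitioning/trigger context: fall back
  }

  int main() {
    SelectLex current = {0};
    ParseContext ctx = {&current};
    printf("%p\n", (void *)select_lex_for_only_full_group_by(nullptr, &ctx));
  }
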
Matthias Leich
90c97d7ccc Fix for Bug#45902 rpl_sp fails sporadically in check testcases
Details:
- Add "sync_slave_with_master" at test end
- Restore the initial value of log_bin_trust_function_creators
2009-07-02 13:22:12 +02:00
Satya B
3766e560f0 merge to mysql-5.1-bugteam 2009-06-29 18:33:11 +05:30
Evgeny Potemkin
05a2329fe9 Merged bug#45266. 2009-06-26 19:59:41 +00:00
Evgeny Potemkin
0e64988a5e Bug#45266: Uninitialized variable lead to an empty result.
The TABLE::reginfo.impossible_range flag is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening, and this could lead to an empty result on complex queries:
a query might set the impossible_range flag on a table, and when the query finishes,
all tables are returned to the table cache. The next query that uses the table
with the impossible_range flag set, together with an index over the table, would see
the flag and thus return an empty result.

The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
2009-06-26 19:57:42 +00:00
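
The corresponding change is essentially one assignment during table opening; a minimal stand-alone illustration (with trivial stand-in structs) follows.

  #include <cstdio>

  struct RegInfo { bool impossible_range; };
  struct Table { RegInfo reginfo; };

  void open_table(Table *table) {
    table->reginfo.impossible_range = false;   // reset leftover state from a prior query
    // ... rest of the open logic
  }

  int main() {
    Table t;
    t.reginfo.impossible_range = true;         // left over from a previous statement
    open_table(&t);
    printf("%d\n", t.reginfo.impossible_range);   // 0
  }
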
Davi Arnaut
4c1333e6ed Bug#45548: XA transaction without access to InnoDB tables crashes the server
The problem is that the one-phase commit function failed to
properly end an empty transaction. The solution is to ensure
that the transaction cleanup procedure is invoked even for
empty transactions.
2009-06-25 12:25:23 -03:00
Sergey Glukhov
3393fdf80a Bug#45412 SHOW CREATE TRIGGER does not require privileges to disclose trigger data
Added privilege checking to SHOW CREATE TRIGGER code.
2009-06-25 15:52:50 +05:00
Satya B
c417035866 Applying InnoDB snapshot 5.0-ss5406, part 2. Fixes BUG#40565
BUG#40565 - Update Query Results in "1 Row Affected" But Should Be "Zero Rows"

Detailed revision comments:

r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug 
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").

Also, add a test case for Bug .

rb://128 approved by Heikki Tuuri
  ------------------------------------------------------------------------
  r3590 | marko | 2008-12-18 15:33:36 +0200 (Thu, 18 Dec 2008) | 11 lines

  branches/5.1: When converting a record to MySQL format, copy the default
  column values for columns that are SQL NULL.  This addresses failures in
  row-based replication (Bug ).

  row_prebuilt_t: Add default_rec, for the default values of the columns in
  MySQL format.

  row_sel_store_mysql_rec(): Use prebuilt->default_rec instead of
  padding columns.

  rb://64 approved by Heikki Tuuri
  ------------------------------------------------------------------------
2009-06-25 15:20:26 +05:30
Sergey Glukhov
cd8151ed25 test case fix 2009-06-25 13:44:50 +05:00
Sergey Glukhov
2c53a70e15 Bug#45485 replication different between master/slaver using procedure with gbk
In Item_param::set_from_user_var(),
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
2009-06-25 11:22:39 +05:00
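
A tiny sketch of the corrected assignment, using stand-in types instead of the real THD/Item_param classes: the client character set is taken from the session variables rather than from the user variable's own charset.

  #include <cstdio>

  struct CharsetInfo { const char *name; };
  struct SessionVariables { const CharsetInfo *character_set_client; };
  struct THD { SessionVariables variables; };
  struct CsInfo { const CharsetInfo *character_set_client; };

  void set_from_user_var(CsInfo *value, const THD *thd, const CharsetInfo *fromcs) {
    (void)fromcs;                                                        // the buggy choice
    value->character_set_client = thd->variables.character_set_client;  // the fix
  }

  int main() {
    CharsetInfo latin1 = {"latin1"}, gbk = {"gbk"};
    THD thd = {{&latin1}};
    CsInfo value = {nullptr};
    set_from_user_var(&value, &thd, &gbk);
    printf("%s\n", value.character_set_client->name);                   // latin1
  }
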
Martin Hansson
ecd470d190 Merge 2009-06-22 16:01:42 +02:00
Martin Hansson
2cc1134c2c Bug#44653: Server crash noticed when executing random queries with partitions.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table already exists) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus, if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see * in
the pseudocode below). A crash happened if the table was opened again. Fixed by
resetting the flag after error checking.

nested-loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1; *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE
  }
<-- table returned to table cache and later retrieved by open_table: 
open_table():
  assert(table->auto_increment_field_not_null)
2009-06-22 14:51:33 +02:00
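
The hazard in the pseudocode can be reproduced and the repair sketched in a few lines (stand-in structs; the placement of the reset is one possible reading of the fix rather than a quote of the patch): the flag must end up cleared even on the path where send_data() returns early with an error.

  #include <cstdio>

  struct TABLE { bool auto_increment_field_not_null = false; };

  // Stand-in for select_insert::send_data(); returns 1 on error.
  int send_data(TABLE *table, bool value_for_autoinc_supplied, bool error) {
    table->auto_increment_field_not_null = value_for_autoinc_supplied;
    // ... store the row ...
    table->auto_increment_field_not_null = false;   // clear before any early return
    if (error)
      return 1;                                     // early return is now harmless
    return 0;
  }

  int main() {
    TABLE t;
    send_data(&t, /*value_for_autoinc_supplied=*/true, /*error=*/true);
    // The table can go back to the table cache without tripping the open_table assert.
    printf("%d\n", t.auto_increment_field_not_null);   // 0
  }
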
Satya B
2c6d342149 Applying InnoDB snapshot 5.1-ss5343. Fixes BUG#45357
1. BUG#45357 - 5.1.35 crashes with Failing assertion: index->type & DICT_CLUSTERED

2. Also fixes the compilation problem when the flag -DUNIV_MUST_NOT_INLINE is used

Detailed revision comments:

r5340 | marko | 2009-06-17 12:11:49 +0300 (Wed, 17 Jun 2009) | 4 lines
branches/5.1: row_unlock_for_mysql(): When the clustered index is unknown,
refuse to unlock the record.
(Bug , caused by the fix of Bug ).
rb://132 approved by Sunny Bains.
r5339 | marko | 2009-06-17 11:01:37 +0300 (Wed, 17 Jun 2009) | 2 lines
branches/5.1: Add missing #include "mtr0log.h" so that the code compiles
with -DUNIV_MUST_NOT_INLINE.
2009-06-22 16:58:00 +05:30
Alfranio Correia
5289e1a9e2 Post-fix for BUG#43929. 2009-06-19 12:27:24 +01:00
Staale Smedseng
2b48caa42d Bug SETting max_allowed_packet variable
Inconsistent behavior of session variable max_allowed_packet 
(and net_buffer_length); only assignment to the global variable 
has any effect, without this being obvious to the user.
      
The patch for Bug#22891 is backported to 5.0, making the two
session variables read-only. As this is a backport to GA 
software, the error used when trying to assign to the read-
only variable is ER_UNKNOWN_ERROR. The error message is the 
same as in 5.1+.
2009-06-19 11:27:19 +02:00
Alfranio Correia
16ead29710 auto-merge mysql-5.1-bugteam (local) --> mysql-5.1-bugteam 2009-06-18 15:16:14 +01:00
Alfranio Correia
ac1b464a33 BUG#43929 binlog corruption when max_binlog_cache_size is exceeded
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not enough to store the
changes.

In a nutshell, to fix the bug, we save the position of the next character in the
cache before starting to process a statement. If there is a problem, we simply
restore the position, thus removing any effect of the statement from the cache.
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.

Precisely, for every non-transactional change that does not fit into the cache,
we do the following:
  a) the statement is *not* logged
  b) an incident event is logged after committing/rolling back the transaction,
  if any. Note that if a failure happens before writing the incident event to
  the binary log, the slave will not stop and the master will not have reported
  any error.
  c) its respective statement gives an error

For transactional changes that do not fit into the cache, we do the following:
  a) the statement is *not* logged
  b) its respective statement gives an error

To work properly, this patch requires two additional things. Firstly, callers of
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions, such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc and sql_update.cc
modules, but the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be classified as either DDL or DML, because
DDLs that do not get into the cache must generate an incident event since they
cannot be rolled back.
2009-06-18 14:52:46 +01:00
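
The save/restore idea in the first paragraph can be sketched with a toy cache (a std::string standing in for the real cache, and a boolean standing in for the incident event); this illustrates the bookkeeping only and is not the binlog code itself.

  #include <cstdio>
  #include <string>
  #include <vector>

  struct BinlogCache {
    std::string buf;       // stand-in for the statement/transaction cache
    size_t max_size;       // stand-in for max_binlog_cache_size

    bool write(const std::string &event) {
      if (buf.size() + event.size() > max_size) return false;   // cache would overflow
      buf += event;
      return true;
    }
  };

  // Write all events of one statement, or none of them. Returns false on overflow,
  // in which case *incident is set for non-transactional changes.
  bool log_statement(BinlogCache *cache, const std::vector<std::string> &events,
                     bool non_transactional, bool *incident) {
    size_t saved_pos = cache->buf.size();          // position before the statement
    for (const std::string &ev : events) {
      if (!cache->write(ev)) {
        cache->buf.resize(saved_pos);              // restore: drop the partial statement
        if (non_transactional)
          *incident = true;                        // slave must be told changes are missing
        return false;                              // the statement itself gets an error
      }
    }
    return true;
  }

  int main() {
    BinlogCache cache{std::string(), 32};
    bool incident = false;
    bool ok = log_statement(&cache, {std::string(16, 'a'), std::string(24, 'b')},
                            /*non_transactional=*/true, &incident);
    printf("ok=%d incident=%d cache_bytes=%zu\n", ok, incident, cache.buf.size());
  }
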
Martin Hansson
0a214a42d1 Merge 2009-06-18 09:25:46 +02:00