Commit graph

26294 commits

Author SHA1 Message Date
Dmitry Shulga
0c91b53d10 Fixed bug#42503 - "Lost connection" errors when using
compression protocol.

The loss of connection was caused by a malformed packet
sent by the server when the query cache was in use.
When storing data in the query cache, the query cache
memory allocation algorithm had a tendency to reduce
the number of memory blocks needed to store a result
set, eventually storing the entire result set in a single
block. With a significant result set, this memory block
could turn out to be quite large: 30-40 MB or more.
When such a result set was sent to the client, the entire
memory block was compressed and written to the network as a
single network packet. However, the length of a
network packet is limited to 0xFFFFFF (16 MB), since
the packet format only allows 3 bytes for the packet length.
As a result, a malformed, overly large packet
with truncated length would be sent to the client
and break the client/server protocol.

The solution is, when sending result sets from the query
cache, to ensure that the data is chopped into
network packets of size <= 16MB, so that there
is no corruption of packet length. This solution,
however, has a shortcoming: since the result set
is still stored in the query cache as a single block,
at the time of sending, we've lost boundaries of individual
logical packets (one logical packet = one row of the result
set) and thus can end up sending a truncated logical
packet in a compressed network packet.

As a result, the client may require more memory than
max_allowed_packet to hold both the truncated
last logical packet and the next compressed packet.
This never (or in practice never) happens without compression,
since without compression it's very unlikely that
a) a truncated logical packet would remain on the client
when it's time to read the next packet
b) a subsequent logical packet that is being read would be
so large that size-of-new-packet + size-of-old-packet-tail >
max_allowed_packet.
To remedy this issue, we send data in 1 MB packets,
which is below the current client default of 16 MB for
max_allowed_packet, but large enough to avoid unnecessary
overhead from too many syscalls per result set.
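
A hypothetical sketch of the conditions described (query cache enabled, a result set larger than 16 MB, client connected with --compress); the table name and sizes are illustrative only:

  SET GLOBAL query_cache_type = ON;
  SET GLOBAL query_cache_size = 64 * 1024 * 1024;
  -- t1 is assumed to hold well over 16 MB of data; run the query twice
  -- so that the second execution is served from the query cache:
  SELECT * FROM t1;
  SELECT * FROM t1;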


sql/net_serv.cc:
  net_realloc() modified: take already used memory into account
  when comparing the packet buffer length
sql/sql_cache.cc:
  modified Query_cache::send_result_to_client: send the result to the client
  in chunks limited to 1 megabyte.
2010-09-16 17:24:27 +07:00
Mattias Jonsson
9d1ed095f5 merge 2010-09-13 16:07:50 +02:00
Mattias Jonsson
b76f391262 merge 2010-09-13 15:14:17 +02:00
Martin Hansson
3beeb5d045 Bug #50394: Regression in EXPLAIN with index scan, LIMIT, GROUP BY and
ORDER BY computed col
      
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.

There may, however, be a separate ORDER BY clause, which mandates reading the
whole 'sort table' anyway. But the previous estimate was left untouched.

Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
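
A hypothetical query of the shape described (not the original regression test), where an index on t1.a satisfies the GROUP BY while the ORDER BY uses a computed column:

  CREATE TABLE t1 (a INT, b INT, KEY (a));
  EXPLAIN
  SELECT a, MAX(b) AS m FROM t1
  GROUP BY a
  ORDER BY m
  LIMIT 10;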
2010-09-13 13:33:19 +02:00
Gleb Shchepa
daa6d1f4f3 Bug #55779: select does not work properly in mysql server
Version "5.1.42 SUSE MySQL RPM"

When a query used a DATE or DATETIME value formatted
differently from "yyyy-mm-dd HH:MM:SS", a query with a
greater-or-equal '>=' condition matched only strictly
greater values in an indexed TIMESTAMP column.

The problem was introduced by the fix for bug 46362
and partially solved (for DATE and DATETIME columns only)
by the fix for bug 47925.

The stored_field_cmp_to_item function has been modified
to handle TIMESTAMP columns the same way as
DATE and DATETIME columns.
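
A hypothetical illustration of the affected condition, using a non-canonical datetime format against an indexed TIMESTAMP column:

  CREATE TABLE t1 (ts TIMESTAMP, KEY (ts));
  INSERT INTO t1 VALUES ('2010-09-01 00:00:00');
  -- with the bug, a '>=' comparison written in a different format
  -- matched only strictly greater values:
  SELECT * FROM t1 WHERE ts >= '2010/09/01 00:00:00';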


mysql-test/r/type_timestamp.result:
  Test case for bug #55779.
mysql-test/t/type_timestamp.test:
  Test case for bug #55779.
sql/item.cc:
  Bug #55779: select does not work properly in mysql server
              Version "5.1.42 SUSE MySQL RPM"
  
  The stored_field_cmp_to_item function has been modified
  to handle TIMESTAMP columns the same way as
  DATE and DATETIME.
2010-09-13 11:18:35 +04:00
Mattias Jonsson
a41cef53da merge 2010-09-10 11:52:35 +02:00
Mattias Jonsson
2ac69d648a merge 2010-09-10 11:50:38 +02:00
Alexey Kopytov
da7646b64c Addendum patch for bug #54190.
The patch caused some test failures when merged to 5.5 because,
unlike 5.1, it utilizes Item_cache_row to actually cache row
values. The problem was that Item_cache_row::bring_value()
essentially did nothing. In particular, it did not update its
null_value, so all Item_cache_row objects always had
their null_value set to TRUE. This went unnoticed previously,
but now that Arg_comparator::compare_row() actually depends on
the row's null_value to evaluate the comparison, the problem
has surfaced.

Fixed by calling the underlying item's bring_value() and
updating null_value in Item_cache_row::bring_value().

Since the problem also exists in 5.1 code (albeit hidden, since
the relevant code is not used anywhere), the addendum patch is
against 5.1.
2010-09-09 18:44:53 +04:00
Alexey Kopytov
3ce925bfe7 Automerge. 2010-09-09 16:48:06 +04:00
Alexey Kopytov
453107bc56 Bug #54190: Comparison to row subquery produces incorrect
result

Row subqueries producing no rows were not handled as UNKNOWN
values in row comparison expressions.

That was a result of the following two problems:

1. Item_singlerow_subselect did not mark the resulting row
value as NULL/UNKNOWN when no rows were produced.

2. Arg_comparator::compare_row() did not take into account that
a whole argument may be NULL rather than just individual scalar
values.

Before bug#34384 was fixed, the above problems were hidden
because an uninitialized (i.e. without any stored value) cached
object would appear as NULL for scalar values in a row subquery
returning an empty result. After the fix
Arg_comparator::compare_row() would try to evaluate
uninitialized cached objects.

Fixed by addressing both of the problems described above.
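
A hypothetical comparison of the affected kind (not the original test case), where the row subquery returns no rows and the result should be UNKNOWN:

  CREATE TABLE t1 (a INT, b INT);
  -- t1 is empty: the comparison should evaluate to NULL, not TRUE or FALSE
  SELECT (1, 2) = (SELECT a, b FROM t1);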


mysql-test/r/row.result:
  Added a test case for bug #54190.
mysql-test/r/subselect.result:
  Updated the result for a test relying on wrong behavior.
mysql-test/t/row.test:
  Added a test case for bug #54190.
sql/item_cmpfunc.cc:
  If either of the argument rows is NULL, return NULL as the
  result of comparison.
sql/item_subselect.cc:
  Adjust null_value for Item_singlerow_subselect depending on
  whether a row has been produced by the row subquery.
2010-09-09 16:46:13 +04:00
Mattias Jonsson
af951a6c36 Bug#55458: Partitioned MyISAM table gets crashed by multi-table update
Updated according to reviewers' comments.
2010-09-07 17:56:43 +02:00
Martin Hansson
4f4d03a416 Bug#51070: Query with a NOT IN subquery predicate returns a wrong result set
The EXISTS transformation has additional switches to catch the known corner
cases that appear when transforming an IN predicate into EXISTS. Guarded
conditions are used, which are deactivated when a NULL value is seen in the
outer expression's row. When the inner query block supplies NULL values,
however, they are filtered out, because no distinction is made between the
guarded conditions: the guarded NOT x IS NULL conditions in the HAVING clause
that filter out NULL values cannot be deactivated in isolation from those that
match values from the outer expression or NULLs.

The above problem is handled by making the guarded conditions remember whether
they have rejected a NULL value or not, and the index access methods take
this into account as well.

The bug consisted of 

1) Not resetting the property for every nested loop iteration on the inner
   query's result.

2) Not propagating the NULL result properly from inner query to IN optimizer.

3) A hack that may or may not have been needed at some point. According to a
   comment, it was aimed at fixing #2 by returning NULL when FALSE was actually
   the result. This caused failures once #2 was properly fixed. The hack is
   now removed.

The fix resolves all three points.
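
A hypothetical NOT IN predicate of the affected shape, where the inner query block supplies NULL values:

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT, d INT);
  INSERT INTO t1 VALUES (1, 2);
  INSERT INTO t2 VALUES (1, NULL);
  SELECT * FROM t1 WHERE (a, b) NOT IN (SELECT c, d FROM t2);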
2010-09-07 11:21:09 +02:00
Dmitry Shulga
d6f6db6f4c Fixed bug #55421 - Protocol::end_statement(): Assertion `0' on
multi-table UPDATE IGNORE.
The problem was that if there was an active SELECT statement
during trigger execution, an error raised during that execution
could cause a crash. The fix is to temporarily reset LEX::current_select
before trigger execution and restore it afterwards. This way,
errors raised during trigger execution are processed as
if there were no active SELECT.

mysql-test/r/trigger_notembedded.result:
  added test case result for bug #55421.
mysql-test/t/trigger_notembedded.test:
  added test case for bug #55421.
sql/sql_trigger.cc:
  Reset thd->lex->current_select before starting trigger execution
  and restore its original value after execution has finished.
  This is necessary in order to set the error status in the
  diagnostics area in case of trigger execution failure.
2010-09-07 15:53:46 +07:00
Martin Hansson
446cc653c0 Bug#54543: update ignore with incorrect subquery leads to assertion failure:
inited==INDEX

When an error occurred while sending the data in a temporary table, no
cleanup was performed. This caused a failed assertion in the case when
different access methods were used for populating the table vs. retrieving
the data from the table, if IGNORE was specified and sql_safe_updates = 0.
In this case execution continues, but the handler expects to continue with
the access method used for row retrieval.

Fixed by doing the cleanup even if errors occur.
2010-09-07 09:58:05 +02:00
Magne Mahre
64b639260c Bug#39932 "create table fails if column for FK is in different
case than in corr index".
      
The server was unable to find an existing or explicitly created supporting
index for a foreign key if the corresponding statement clause used field
names in a case different from the one used in the key specification, and so
it created yet another supporting index.
In cases when the name of the constraint (and thus the name of the generated
index) was the same as the name of the existing/explicitly created index,
this led to a duplicate key name error.
      
The problem was that, unlike all other code, Key_part_spec::operator==()
compared field names in a case-sensitive fashion. As a result, the routines
responsible for getting rid of redundant generated supporting indexes
for foreign keys did not work properly when field names were written
in different cases.
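
A hypothetical definition of the kind described, where the FOREIGN KEY clause spells the column in a different case than the explicitly created supporting index, and the constraint name matches the index name:

  CREATE TABLE parent (pid INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    Pid INT,
    KEY Pid (Pid),
    CONSTRAINT Pid FOREIGN KEY (pid) REFERENCES parent (pid)
  ) ENGINE=InnoDB;
  -- before the fix, a redundant supporting index named after the
  -- constraint could be generated, causing a duplicate key name error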

(backported from mysql-trunk)


sql/sql_class.cc:
  Make field name comparison case-insensitive, like it is
  in the rest of the server.
2010-09-01 19:38:34 +02:00
Mattias Jonsson
e5bab33a2a Bug#53806: Wrong estimates for range query in partitioned MyISAM table
Bug#46754: 'rows' field doesn't reflect partition pruning

The 'rows' field in the EXPLAIN output was evaluated to the
number of rows at the time the table was opened
(not taken from the table cache), and only the partitions left
after pruning were updated with their correct number
of rows.

The evaluation of the 'rows' field was using handler::records(),
which is a potentially expensive call that also ignores
partition pruning.

The fix was to use the handler's stats.records after updating it
with ::info(HA_STATUS_VARIABLE) instead.
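
A hypothetical partitioned table where pruning limits the scan to one partition, so the 'rows' estimate in EXPLAIN should reflect only that partition:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM
  PARTITION BY RANGE (a) (
    PARTITION p0 VALUES LESS THAN (100),
    PARTITION p1 VALUES LESS THAN (200)
  );
  EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 50;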

mysql-test/r/partition_pruning.result:
  updated result
mysql-test/t/partition_pruning.test:
  Added test.
sql/sql_select.cc:
  Use ::info + stats.records instead of ::records().
2010-08-26 17:14:18 +02:00
Gleb Shchepa
cfcc7e265e automerge local --> 5.1-bugteam (bug 53034) 2010-08-31 02:32:03 +04:00
Gleb Shchepa
ccab4d8771 Bug #53034: Multiple-table DELETE statements not accepting
"Access compatibility" syntax

The "wild" "DELETE FROM table_name.* ... USING ..." syntax
for multi-table DELETE statements is documented but it was
lost in the fix for the bug 30234.

The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
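
A hypothetical statement in the restored "Access compatibility" form:

  CREATE TABLE t1 (id INT);
  CREATE TABLE t2 (id INT);
  DELETE FROM t1.* USING t1, t2 WHERE t1.id = t2.id;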


mysql-test/r/delete.result:
  Test case for bug #53034.
mysql-test/t/delete.test:
  Test case for bug #53034.
sql/sql_yacc.yy:
  Bug #53034: Multiple-table DELETE statements not accepting
              "Access compatibility" syntax
  
  The table_ident_opt_wild parser rule has been added
  to restore the lost syntax.
  Note: simply extending table_ident with opt_wild in
  the table_alias_ref rule is not acceptable, because
  a) it adds one more conflict and b) that conflict resolves
  in an inappropriate way.
2010-08-31 02:16:38 +04:00
Ramil Kalimullin
ed8aa284ba Automerge. 2010-08-30 12:08:28 +04:00
Ramil Kalimullin
6a113b215a Fix for bug #51875: crash when loading data into geometry function polyfromwkb
Check the number of line strings in the incoming polygon data (WKB) and
the number of points in the incoming linestring WKB.



mysql-test/r/gis.result:
  Fix for bug #51875: crash when loading data into geometry function polyfromwkb
    - test result.
mysql-test/t/gis.test:
  Fix for bug #51875: crash when loading data into geometry function polyfromwkb
    - test case.
sql/spatial.cc:
  Fix for bug #51875: crash when loading data into geometry function polyfromwkb
    - when creating a polygon from WKB, check the number of line strings,
    - when creating a linestring from WKB, check the number of points.
2010-08-30 11:51:46 +04:00
Alexey Kopytov
c1bd124c68 Automerge. 2010-08-30 11:19:09 +04:00
Alexey Kopytov
d7d0f6390b Bug #54465: assert: field_types == 0 || field_types[field_pos]
== MYSQL_TYPE_LONGLONG

A MIN/MAX() function with a subquery as its argument could lead
to a debug assertion on debug builds or wrong data on release
ones.

The problem was a combination of the following factors:

- Item_sum_hybrid::fix_fields() might use the argument
(args[0]) to calculate 'hybrid_field_type' which was later used
to decide how the data should be sent to the client.

- Item_sum::make_field() might use the argument again to
calculate the field's type when sending result set metadata to
the client.

- The argument could be changed in between these two calls via
  Item::set_arg() leading to inconsistent metadata being
  reported.

Here is what was happening for the bug's test case:

1. Item_sum_hybrid::fix_fields() calculates hybrid_field_type
as MYSQL_TYPE_LONGLONG based on args[0] which is an
Item::SUBSELECT_ITEM at that time.

2. A temporary table is created to execute the
query. create_tmp_field_from_item() creates a Field_long object
according to the subselect's max_length.

3. The subselect item in Item_sum_hybrid is replaced by the
Item_field object referencing the newly created Field_long.

4. Item_sum::make_field() rightfully returns the
MYSQL_TYPE_LONG type when calculating the result set metadata.

5. When sending the actual data, Item::send() relies on the
virtual field_type() function which in our case returns
previously calculated hybrid_field_type == MYSQL_TYPE_LONGLONG.

It looks like the only solution is to never refer to the
argument's metadata after the result metadata has been
calculated in fix_fields(), since the argument itself may be
different by then. In this sense, Item_sum::make_field() should
never be used, because it may rely on the argument's metadata
and is only called after fix_fields(). The "default"
implementation in Item::make_field() should be used instead as
it relies only on field_type(), but not on the argument's type.

Fixed by removing Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
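
A hypothetical aggregate-over-subquery query of the shape described (not the original regression test):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  -- MIN/MAX with a scalar subquery as its argument:
  SELECT MAX((SELECT a FROM t1 LIMIT 1)) FROM t1;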

mysql-test/r/func_group.result:
  Added a test case for bug #54465.
mysql-test/t/func_group.test:
  Added a test case for bug #54465.
sql/item_sum.cc:
  Removed Item_sum::make_field() so that the superclass
  implementation Item::make_field() is always used.
sql/item_sum.h:
  Removed Item_sum::make_field() so that the superclass
  implementation Item::make_field() is always used.
2010-08-27 13:44:35 +04:00
Ramil Kalimullin
7ebd2cd797 Fix for bug #54253: memory leak when using I_S plugins w/o deinit method
Free memory allocated by the server for all plugins,
with or without a deinit() method.
2010-08-27 11:44:06 +04:00
Alexey Kopytov
27d5b6e57f Automerge. 2010-08-26 16:36:48 +04:00
Evgeny Potemkin
b4dc600af9 Bug #55656: mysqldump can be slower after bug 39653 fix.
After the fix for bug 39653, the shortest available secondary index was used
for a full table scan. The primary clustered key was used only if no secondary
index could be used. However, when the chosen secondary index includes all
fields of the table being scanned, it is better to use the primary index,
since the amount of data to scan is the same but the primary index is
clustered. Now the find_shortest_key function takes this into account.
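
A hypothetical InnoDB table where a secondary index covers all columns, so a full scan should now prefer the clustered primary key:

  CREATE TABLE t1 (
    id INT PRIMARY KEY,
    a INT,
    KEY covering (a, id)
  ) ENGINE=InnoDB;
  EXPLAIN SELECT id, a FROM t1;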


mysql-test/suite/innodb/r/innodb_mysql.result:
  Added a test case for the bug#55656.
mysql-test/suite/innodb/t/innodb_mysql.test:
  Added a test case for the bug#55656.
sql/sql_select.cc:
  Bug #55656: mysqldump can be slower after bug #39653 fix.
  The find_shortest_key function now prefers the clustered primary key
  if the chosen secondary key includes all fields of the table.
2010-08-26 13:31:04 +04:00
Dmitry Shulga
800feb16cb Fixed bug #29751 - do not rename the error log at FLUSH LOGS.
On Windows, the error log file is now opened with the FILE_SHARE_DELETE flag.

sql/log.cc:
  added reopen_fstreams();
  modified redirect_std_streams(): the sequence of freopen() calls
  is replaced with reopen_fstreams();
  modified flush_error_log(): removed the rename of the flushed
  error log file.
sql/mysqld.cc:
  modified main() and init_server_components(): open the error log file
  via a call to reopen_fstreams().
2010-08-25 15:47:45 +07:00
Alexey Kopytov
e7b2688271 Bug #54802: 'NOT BETWEEN' evaluation is incorrect
Queries involving predicates of the form "const NOT BETWEEN
not_indexed_column AND indexed_column" could return wrong data
due to incorrect handling by the range optimizer.

For "c NOT BETWEEN f1 AND f2" predicates, get_mm_tree()
produces a disjunction of the SEL_ARG trees for "f1 > c" and
"f2 < c". If one of the trees is empty (i.e. one of the
arguments is not sargable) the resulting tree should be empty
as well, since the whole expression in this case is not
sargable.

The above logic is implemented in get_mm_tree() as follows. The
initial state of the resulting tree is NULL (aka empty). We
then iterate through arguments and compute the corresponding
SEL_ARG tree (either "f1 > c" or "f2 < c"). If the resulting
tree is NULL, it is simply replaced by the generated
tree. Otherwise it is replaced by a disjunction of itself and
the generated tree. The obvious flaw in this implementation is
that if the first argument is not sargable and thus produces a
NULL tree, the resulting tree will simply be replaced by the
tree for the second argument. As a result, "c NOT BETWEEN f1
AND f2" will end up as just "f2 < c".

Fixed by adding a check so that when the first argument
produces an empty tree for the NOT BETWEEN case, the loop is
aborted with an empty tree as a result. The whole idea of using
a loop for 2 arguments does not make much sense, but it was
probably used to avoid code duplication for several BETWEEN
variants.
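
A hypothetical predicate of the affected shape, where the first bound is not indexed and the second is:

  CREATE TABLE t1 (a INT, b INT, KEY (b));
  INSERT INTO t1 VALUES (10, 1), (1, 10);
  -- "5 NOT BETWEEN a AND b" is not sargable as a whole; with the bug the
  -- range tree degenerated to just "b < 5":
  SELECT * FROM t1 WHERE 5 NOT BETWEEN a AND b;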
2010-08-24 19:51:32 +04:00
Georgi Kodinov
7d3a9b4cf6 merge 2010-08-20 12:09:17 +03:00
Mattias Jonsson
dd9329761a merge 2010-08-19 09:20:17 +02:00
unknown
9d6811502e WL#5370 Keep forward-compatibility when changing
'CREATE TABLE IF NOT EXISTS ... SELECT' behaviour
BUG#55474, BUG#55499, BUG#55598, BUG#55616 and BUG#55777 are fixed
in this patch too.

This is the 5.1 part.
It implements:
- if the table exists, binlog two events: CREATE TABLE IF NOT EXISTS
  and INSERT ... SELECT

- Insert nothing and binlog nothing on the master if the existing object
  is a view. Only a warning that the table already exists is generated.
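
A hypothetical sequence of the kind described, where the target table already exists on the master:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- t2 already exists: the statement is binlogged as two events,
  -- CREATE TABLE IF NOT EXISTS t2 ... and INSERT ... SELECT
  CREATE TABLE IF NOT EXISTS t2 SELECT * FROM t1;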


mysql-test/r/trigger.result:
  After this patch, 'CREATE TABLE IF NOT EXISTS ... SELECT' will not
  insert anything if the table being created already exists and is a view.
sql/sql_class.h:
  Declare virtual function write_to_binlog() for select_insert.
  It's used to binlog 'create select'
sql/sql_insert.cc:
  Implement write_to_binlog();
  Use write_to_binlog() instead of binlog_query() to binlog the statement.
  if the table exists, binlog two events: CREATE TABLE IF NOT EXISTS
  and INSERT ... SELECT
sql/sql_lex.h:
  Declare create_select_start_with_brace and create_select_pos.
  They are helpful for binlogging 'create select'
sql/sql_parse.cc:
  Do nothing on master if the existing object is a view.
sql/sql_yacc.yy:
  Record the relative position of 'SELECT' in the 'CREATE ... SELECT' statement.
  Record whether there is a '(' before the 'SELECT' clause.
2010-08-18 12:56:06 +08:00
Georgi Kodinov
790852c0c9 Bug #55580 : segfault in read_view_sees_trx_id
The server was not checking for errors generated during
the execution of Item::val_xxx() methods when copying
data to the group, order, or distinct temp table's row.
Fixed by extending copy_funcs() to return an error
code and by checking for that error code in the places
where copy_funcs() is called.
Test case added.
2010-08-13 11:07:39 +03:00
Georgi Kodinov
8b25c0e4dc Bug #55565: debug assertion when ordering by expressions with user
variable assignments

The assert() that is firing is checking if expressions that can't be
null return a NULL when evaluated.
MAKEDATE() function can return NULL if the second argument is 
less then or equal to 0. Thus its nullability depends not only on 
the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting MAKEDATE() to be nullable 
despite the nullability of its arguments.
Test added.
Had to update one test result to reflect the metadata change.
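
A hypothetical case where MAKEDATE() returns NULL even though its arguments are not nullable:

  -- a day-of-year argument <= 0 yields NULL regardless of argument nullability
  SELECT MAKEDATE(2010, 0);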
2010-08-13 16:05:46 +03:00
Mattias Jonsson
b409fad8cc Bug#55458: Partitioned MyISAM table gets crashed by multi-table update
The problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached,
but the ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) call was not.

The solution was to also cache the latter call and forward it when moving
to a new partition to scan.
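
A hypothetical multi-table UPDATE of the kind that triggered the crash, touching a partitioned MyISAM table:

  CREATE TABLE t1 (a INT, b INT) ENGINE=MyISAM
  PARTITION BY HASH (a) PARTITIONS 2;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1, 0), (2, 0);
  INSERT INTO t2 VALUES (1), (2);
  UPDATE t1, t2 SET t1.b = t1.b + 10 WHERE t1.a = t2.a;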

mysql-test/r/partition.result:
  test result
mysql-test/t/partition.test:
  New test from bug report.
sql/ha_partition.cc:
  cache the HA_EXTRA_PREPARE_FOR_UPDATE just like HA_EXTRA_CACHE.
sql/ha_partition.h:
  Added cache flag for HA_EXTRA_PREPARE_FOR_UPDATE
2010-08-10 10:43:12 +02:00
Jon Olav Hauglid
d62bfebc7e Bug #54106 assert in Protocol::end_statement,
INSERT IGNORE ... SELECT ... UNION SELECT ...

This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).

The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE. This causes all errors to be
ignored. However, not all errors can be ignored. Some, such as
ER_FIELD_SPECIFIED_TWICE, will cause the INSERT to fail no matter what.
But since lex->no_error was set, the critical errors were ignored, the
INSERT failed, and neither OK nor the error message was sent to the client.

This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.

Test case added to insert.test.
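
A hypothetical statement of the failing shape, where a column is specified twice:

  CREATE TABLE t1 (a INT);
  -- ER_FIELD_SPECIFIED_TWICE cannot be ignored, even with IGNORE:
  INSERT IGNORE INTO t1 (a, a) SELECT 1 UNION SELECT 2;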
2010-08-09 13:39:59 +02:00
Georgi Kodinov
7b3b8ae154 Addendum to bug #42144: fixed a wrong type conversion causing plugin test
failures on sparc64.
2010-08-05 15:10:24 +03:00
Martin Hansson
0c81dcf332 Bug#54568: create view cause Assertion failed: 0,
file .\item_subselect.cc, line 836

IN quantified predicates are never executed directly. Rather, they are wrapped
inside nodes called IN Optimizers (Item_in_optimizer) which take care of the
execution. However, this wrapping is not done during query preparation.
Unfortunately, the LIKE predicate pre-evaluates constant right-hand side
arguments even during name resolution. Likely this is meant as an optimization.

Fixed by not pre-evaluating LIKE arguments in view prepare mode.
2010-08-05 12:42:14 +02:00
Georgi Kodinov
b1a8b3aa6e Bug #42144: plugin_load fails
Reverted the ulong->uint diff
Re-applied the first diff.
The original commit message follows:

Enum plugin system variables are ulong internally, not int.
On systems where long is not the same size as int, this causes
problems.
Fixed by correct typecasting. Removed the test from the
experimental list.
2010-08-04 15:58:09 +03:00
Georgi Kodinov
5eeb6488cf Bug #42144: plugin_load fails
The enum system variables were handled inconsistently
as int, unsigned int, and unsigned long in various places.
This caused problems on platforms on which
sizeof(int) != sizeof(long).
Fixed by homogenizing the type of the enum variables
to unsigned int, since it is size-compatible with the C enum
type.
Removed the test from the experimental list.
2010-08-03 19:01:30 +03:00
Alfranio Correia
7909541953 auto-merge mysql-5.1-security (local) --> mysql-5.1-security 2010-08-03 12:52:02 +01:00
unknown
bcb3170c97 Bug #34283 mysqlbinlog leaves tmpfile after termination if binlog contains load data infile
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.

To fix the problem, in mixed mode we switch to row-based logging. For
statement-based logging, we document the behavior to remind users that the
temporary file is left in a temporary directory.




mysql-test/extra/rpl_tests/rpl_loaddata.test:
  Updated for Bug#34283
mysql-test/suite/binlog/r/binlog_mixed_load_data.result:
  Test result for BUG#34283.
mysql-test/suite/binlog/t/binlog_killed_simulate.test:
  Updated for Bug#34283
mysql-test/suite/binlog/t/binlog_mixed_load_data.test:
  Added the test file to verify that 'load data infile...' statement
  will go to row-based in mixed mode.
mysql-test/suite/binlog/t/binlog_stm_blackhole.test:
  Updated for Bug#34283
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result:
  Updated for Bug#34283
mysql-test/suite/rpl/t/rpl_loaddata_fatal.test:
  Updated for Bug#34283
mysql-test/suite/rpl/t/rpl_loaddata_map.test:
  Updated for Bug#34283
mysql-test/suite/rpl/t/rpl_slave_load_remove_tmpfile.test:
  Updated for Bug#34283
mysql-test/suite/rpl/t/rpl_stm_log.test:
  Updated for Bug#34283
sql/sql_load.cc:
  Added code to go to row-based in mixed mode for
  'load data infile ...' statement
2010-08-03 10:22:19 +08:00
Alfranio Correia
f62e89fade BUG#55625 RBR breaks on failing 'CREATE TABLE'
A CREATE...SELECT that fails is written to the binary log if a non-transactional
statement is updated. If the logging format is ROW, the CREATE statement and the
changes are written to the binary log as distinct events and by consequence the
created table is not rolled back in the slave.

In this patch, we opted to let the slave goes out of sync by not writting to the
binary log the CREATE statement. We do this by simply reseting the binary log's
cache.

mysql-test/suite/rpl/r/rpl_drop.result:
  Added a test case.
mysql-test/suite/rpl/t/rpl_drop.test:
  Added a test case.
sql/log.cc:
  Introduced a function to clean up the cache.
sql/log.h:
  Introduced a function to clean up the cache.
sql/sql_insert.cc:
  Cleaned up the binary log cache if a CREATE...SELECT fails.
2010-08-02 20:48:56 +01:00
Georgi Kodinov
4f738e9b7c merge mysql-5.1-bugteam into mysql-5.1-security 2010-08-02 10:50:15 +03:00
Gleb Shchepa
80aa882497 Bug #54461: crash with longblob and union or update with subquery
Queries may crash, if
  1) the GREATEST or the LEAST function has a mixed list of
     numeric and LONGBLOB arguments and
  2) the result of such a function goes through an intermediate
     temporary table.

An Item that references a LONGBLOB field has max_length of
UINT_MAX32 == (2^32 - 1).

The current implementation of GREATEST/LEAST returns a REAL
result for a mixed list of numeric and string arguments (this
contradicts the current documentation; the contradiction was
discussed and it was decided to update the documentation).

The max_length of such a function call was calculated as the
maximum of the arguments' max_length values (i.e. UINT_MAX32).

That max_length value of UINT_MAX32 was used as the length of
the intermediate temporary table's Field_double that holds the
GREATEST/LEAST function result.

The Field_double::val_str() method call on that field
allocates a String value.

Since an allocation of String reserves an additional byte
for zero-termination, the size of the String buffer was
set to (UINT_MAX32 + 1), which caused an integer overflow:
effectively, an empty buffer of size 0 was allocated.

Initializing the "first" byte of that zero-size
buffer with '\0' caused a crash.

Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result the same
way as for arithmetic operators.
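
A hypothetical query of the crashing shape, mixing a numeric and a LONGBLOB argument and materializing the result in a temporary table:

  CREATE TABLE t1 (a LONGBLOB);
  INSERT INTO t1 VALUES ('1');
  -- the REAL-typed function result passes through an intermediate
  -- temporary table because of the UNION:
  SELECT GREATEST(1, a) FROM t1 UNION SELECT 2.0;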


mysql-test/r/func_misc.result:
  Test case for bug #54461.
mysql-test/t/func_misc.test:
  Test case for bug #54461.
sql/item_func.cc:
  Bug #54461: crash with longblob and union or update with subquery
  
  The Item_func_min_max::fix_length_and_dec() has been
  modified to calculate max_length for the REAL result like
  we do it for arithmetical operators.
2010-08-01 22:12:36 +04:00
Davi Arnaut
9899e690f0 Bug#45288: pb2 returns a lot of compilation warnings on linux
Fix compiler warnings.

mysys/stacktrace.c:
  Tag unused parameters.
sql/sql_lex.cc:
  Variable becomes unused in non-debug builds. Also, no need to
  assert the obvious.
2010-07-30 17:33:10 -03:00
Luis Soares
655d913bfc Automerge mysql-5.1-bugteam into mysql-5.1-bugteam latest. 2010-07-30 15:32:28 +01:00
Luis Soares
55e60e14fa Revert patch for BUG#34283. Causing lots of test failures in PB2,
mostly because existing test result files were not updated.
2010-07-30 14:44:39 +01:00
Georgi Kodinov
de5029a458 Bug #55188: GROUP BY, GROUP_CONCAT and TEXT - inconsistent results
In order to be able to check if the set of the grouping fields in a 
GROUP BY has changed (and thus to start a new group) the optimizer
caches the current values of these fields in a set of Cached_item 
derived objects.
The Cached_item_str, used for caching varchar and TEXT columns,
is limited in length by the max_sort_length variable.
A String buffer to store the value is allocated in Cached_item_str's
constructor, with an alloced length of either the maximum length of the
string or the value of max_sort_length (whichever is smaller).
Then, at compare time, the value of the string to compare to was
truncated to the alloced length of the string buffer inside
Cached_item_str.
This is all fine and valid, but only as long as the assigned values are
not near or equal to the alloced length of this buffer. When such values
are assigned, the alloced length is rounded up, and as a result the next
set of data will not match the group buffer, thus leading to wrong results
because of the changed alloced_length.
Fixed by preserving the original maximum length in Cached_item_str's
constructor and using this instead of the alloced_length to limit the
string to compare to.
Test case added.
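
A hypothetical grouping query at the max_sort_length boundary, of the kind described (values sized at exactly the allocated buffer length):

  CREATE TABLE t1 (t TEXT);
  SET SESSION max_sort_length = 1024;
  INSERT INTO t1 VALUES (REPEAT('a', 1024)), (REPEAT('a', 1024));
  -- both rows hold identical values and should fall into a single group:
  SELECT LEFT(t, 10), COUNT(*) FROM t1 GROUP BY t;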
2010-07-30 16:35:06 +03:00
Davi Arnaut
a9538cacda Bug#54041: MySQL 5.0.92 fails when tests from Connector/C suite run
Fix a regression (due to a typo) which caused spurious incorrect
argument errors for long data stream parameters if all forms of
logging were disabled (binary, general and slow logs).

mysql-test/t/mysql_client_test.test:
  Save the status of the slow_log.
sql/sql_prepare.cc:
  Add a missing logical NOT operator.
tests/mysql_client_test.c:
  Disable all query logs when running C tests. Fixes an omission:
  the slow log should have been disabled too.
  
  Run test case for Bug#54041 with query logs enabled and disabled.
2010-07-30 09:17:10 -03:00
unknown
5e13086bf8 Bug #34283 mysqlbinlog leaves tmpfile after termination if binlog contains load data infile
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.

To fix the problem, in mixed mode we switch to row-based logging. For
statement-based logging, we document the behavior to remind users that the
temporary file is left in a temporary directory.


mysql-test/suite/binlog/r/binlog_mixed_load_data.result:
  Test result for BUG#34283.
mysql-test/suite/binlog/t/binlog_mixed_load_data.test:
  Added the test file to verify that 'load data infile...' statement
  will go to row-based in mixed mode.
sql/sql_load.cc:
  Added code to go to row-based in mixed mode for
  'load data infile ...' statement
2010-07-30 11:59:34 +08:00
unknown
2124538d9c BUG#49124 Security issue with /*!-versioned */ SQL statements on Slave
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the mysql server version (e.g. /*!50200 ... */),
is a special comment: the query inside it is executed only on servers
whose version is not lower than the version appearing in the comment.
This leads to a security issue when the slave's version is higher than
the master's. A malicious user can elevate his privileges on slaves,
because the slave SQL thread runs with SUPER privileges and can
execute queries for which the user has no privileges on the master.

This bug is fixed with the following logic:
- Replace '!' with ' ' in the magic comments which are not applied on
  the master, so that they become ordinary comments and will not be
  applied on the slave.

- Example:
  'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/
  will be binlogged as
  'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/

mysql-test/suite/rpl/t/rpl_conditional_comments.test:
  Test the patch for this bug.
sql/mysql_priv.h:
  Rename inBuf as rawBuf and remove the const limitation.
sql/sql_lex.cc:
  To replace '!' with ' ' in the magic comments which are not applied on
  master.
sql/sql_lex.h:
  Remove the const limitation on parameter buff, as it can be modified in the function since
  this patch.
  Add the member function yyUnput to Lex_input_stream. It puts a character back into the query buffer.
sql/sql_parse.cc:
  Rename inBuf as rawBuf and remove the const limitation.
sql/sql_partition.cc:
  Remove the const limitation on parameter part_buff, as it can be modified in the function since
  this patch.
sql/sql_partition.h:
  Remove the const limitation on parameter part_buff, as it can be modified in the function since
  this patch.
sql/table.h:
  Remove the const limitation on variable partition_info, as it can be modified since
  this patch.
2010-07-29 11:00:57 +08:00