Commit graph

27428 commits

Author SHA1 Message Date
unknown
4304dbc464 MDEV-616 fix (MySQL fix accepted) 2012-10-09 17:36:02 +03:00
unknown
72ab07c1cb MDEV-746: Merged mysql fix of the bug LP:1002546 & MySQL Bug#13651009.
Empty result after reading const tables now works for subqueries.
2012-10-14 19:29:31 +03:00
unknown
82eb2c6de0 fixed MDEV-568: Wrong result for a hash index look-up if the index is unique and the key is NULL
Check whether the index can hold NULLs, as is done in MyISAM. A UNIQUE index with NULLs can have several NULL entries, so we have to continue even if we have found a row.
2012-10-02 12:53:20 +03:00
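A minimal SQL sketch of the affected look-up pattern, assuming a MEMORY table with a unique HASH index (table and index names are hypothetical, not from the patch):

```sql
-- A unique index may hold several NULL entries, so a NULL look-up
-- has to keep scanning after the first match.
CREATE TABLE t_hash (a INT, UNIQUE KEY uk_a (a) USING HASH) ENGINE=MEMORY;
INSERT INTO t_hash VALUES (NULL), (NULL), (1);
SELECT COUNT(*) FROM t_hash WHERE a IS NULL;  -- expected: 2
```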
Sergei Golubchik
22c5ffde30 a simple pam user mapper module 2012-09-25 20:23:01 +02:00
unknown
b917fb63a6 Fix bug lp:1009187, mdev-373, mysql bug#58628
Analysis:
The queries in question use the [unique | index]_subquery execution methods.
These methods reuse the ref keys constructed by create_ref_for_key().
create_ref_for_key() does not store in ref.key_copy[] the store_key elements
that represent constants; in particular, it does not store the store_key
for NULL constants.

The execution of [unique | index]_subquery calls
subselect_uniquesubquery_engine::copy_ref_key, which, in addition to copying
the left IN argument into an index lookup key, is supposed to detect whether
the left IN argument contains NULLs. Since the store_key for the NULL
constant is not copied into the key array, the NULL is not detected, and
execution erroneously proceeds as if it should look for a complete match.

Solution:
The solution (unlike MySQL's) is to reuse the already computed information about
NULL presence. Item_in_optimizer::val_int already finds out whether the left IN
operand contains NULLs. The fix propagates this to the execution methods
subselect_[unique | index]subquery_engine::exec so they know whether there were
NULL values, independently of the presence of keys.

In addition, the patch simplifies copy_ref_key() and the logic that handles
the case of NULLs in the left IN operand.
2012-09-14 11:26:01 +03:00
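A hedged illustration of the semantics being fixed (the table is hypothetical, not the original test case): a NULL in the left IN operand must produce NULL/UNKNOWN, not a plain "no match", when the [unique | index]_subquery engines are used.

```sql
CREATE TABLE t1 (a INT PRIMARY KEY);
INSERT INTO t1 VALUES (1), (2);
SELECT 3    IN (SELECT a FROM t1);  -- expected: 0
SELECT NULL IN (SELECT a FROM t1);  -- expected: NULL, not 0
```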
unknown
89e4d23f3b Merge into latest 5.2. 2012-08-24 12:57:19 +02:00
unknown
96703a63da Merge from 5.1. 2012-08-24 12:32:46 +02:00
unknown
cdeabcfd43 MDEV-382: Incorrect quoting
Various places in the server replication code were incorrectly quoting
strings, which could lead to incorrect SQL on the slave/mysqlbinlog.
2012-08-24 10:06:16 +02:00
Sergei Golubchik
1fd8150a5b 5.1 merge
increase XtraDB version from 13.0 to 13.01
2012-08-22 16:13:54 +02:00
Sergei Golubchik
115a296756 merge with XtraDB as of Percona-Server-5.1.63-rel13.4 2012-08-22 16:10:31 +02:00
Sergei Golubchik
cefc30b166 merge with MySQL 5.1.65 2012-08-22 11:40:39 +02:00
Rohit Kalhans
91c8e79fcd BUG#11762667:MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
This is a followup patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
It was disabled in MySQL trunk and MySQL 5.5 because in the release
build mysqlbinlog was not debug compiled whereas mysqld was.
Since the have_debug.inc script only checks that mysqld is debug
compiled, the test was not being skipped on release builds.

We resolve this problem by creating a new include file,
mysqlbinlog_have_debug.inc, which checks exclusively whether mysqlbinlog
is debug compiled; if not, it skips the test.
 

mysql-test/include/mysqlbinlog_have_debug.inc:
  new inc file to check if mysqlbinlog is debug compiled.
2012-07-03 18:00:21 +05:30
Gleb Shchepa
767501fb54 Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
Print the following warning (note):

 YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead

on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
2012-06-29 12:55:45 +04:00
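For reference, a minimal pair of statements of the kind named above (the table name is hypothetical); per the message, any x other than 4 triggers the note:

```sql
CREATE TABLE t_year (y YEAR(4));       -- no warning
ALTER TABLE t_year MODIFY y YEAR(2);   -- emits the deprecation note
```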
unknown
60561ae613 Fix for LP bug#1001505 and LP bug#1001510
We set the correct cmp_context during preparation to avoid it being changed later by Item_field::equal_fields_propagator.
(see mysql bugs #57135 #57692 during merging)
2012-06-21 18:47:13 +03:00
Igor Babaev
0c69f22032 Fixed bug mdev-354.
Virtual columns of ENUM and SET data types were not supported properly
in the original patch that introduced virtual columns into MariaDB 5.2.
The problem was that, for any virtual column, the patch used the
interval_id field of the column's definition in the frm file as
a reference to the virtual column expression.
The fix stores the optional interval_id of the virtual column in the
extended header of the virtual column expression.
2012-06-18 22:32:17 -07:00
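A minimal sketch of a virtual ENUM column in MariaDB 5.2+ syntax (names and expression are hypothetical); this is the kind of definition whose interval_id now travels in the extended header of the virtual column expression:

```sql
CREATE TABLE t_vcol (
  a INT,
  e ENUM('small','large') AS (IF(a < 10, 'small', 'large')) VIRTUAL
);
INSERT INTO t_vcol (a) VALUES (3), (42);
SELECT a, e FROM t_vcol;   -- expected: (3,'small'), (42,'large')
```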
unknown
775293b898 BUG #13946716: FEDERATED_PLUGIN TEST CASE FAIL ON 64BIT ARCHITECTURES 2012-06-14 17:07:49 +05:30
Igor Babaev
64aa1fcb13 Adjusted results in pbxt.negation_elimination after the fix for lp bug 992380. 2012-06-12 10:06:26 -07:00
Manish Kumar
4a2d65cc31 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.
              
This failure occurs because the event length, once the
max_event_header length is added, exceeds the max_allowed_packet
of the Dump thread. This causes the Dump thread to break replication
and throw an error.
                      
This can happen, e.g., with row-based replication in an Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1) The Dump thread's limit for reading events is raised to the upper limit,
    i.e. the Dump thread reads whatever gets logged in the binary log.

2) On the slave side, the max_allowed_packet for the slave's threads
    (IO/SQL) is increased to 1GB.

    This is done using a new server option, slave_max_allowed_packet,
    which lets the DBA regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This allows the large packets to be received and applied
    successfully by the slave.

sql/log_event.cc:
  After the fix, the new option slave_max_allowed_packet, rather than
  max_allowed_packet, governs the event-size check here.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increasing the session max_allowed_packet to a large value,
  i.e. not taking global(max_allowed) into consideration, for the slave's threads.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit
  which makes it independent and it now reads whatever gets 
  logged in the binary log.
2012-06-12 12:59:13 +05:30
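A sketch of how the new option might be used, assuming the spelling from the message above; the values are illustrative, not prescribed by the patch:

```sql
-- In my.cnf on the slave (assumed placement):
--   [mysqld]
--   slave_max_allowed_packet = 1073741824
-- Or at runtime, if the variable is dynamic in your build:
SET GLOBAL slave_max_allowed_packet = 1073741824;   -- 1GB upper limit
SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet';
```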
Sergey Petrunya
ae51b5b698 Merge 2012-06-10 14:04:21 +04:00
Sergey Petrunya
a7229e8c20 BUG#1010351: New "via" keyword in 5.2+ can't be used as identifier anymore
- Add the VIA_SYM token to the keyword_sp list, which allows it to be
  used as an identifier and SP label.
2012-06-10 13:50:21 +04:00
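After the change, 'via' behaves as a non-reserved word again; an illustrative check (not from the patch):

```sql
CREATE TABLE via (via INT);
INSERT INTO via (via) VALUES (1);
SELECT via FROM via;
DROP TABLE via;
```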
Sergei Golubchik
41d860ef53 5.1 merge 2012-06-01 23:45:54 +02:00
Sergei Golubchik
34f2f8ea41 MDEV-256 lp:995501 - mysqltest attempts to parse Perl code inside a block
with a false condition, gets confused and throws wrong errors
2012-06-01 17:53:59 +02:00
unknown
f45784c850 Fix of LP bug#992380 + revise fix_fields about missing with_subselect collection
The problem is that some fix_fields implementations do not call Item_func::fix_fields and therefore do not collect the with_subselect information.
2012-05-25 10:29:53 +03:00
unknown
d56f5dae1e Fix bug lp:1001506
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.

Analysis:

When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.

The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes Item_func_set_user_var::check
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately an access
to the subquery's copy_field.

However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.

Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().
2012-05-23 18:18:08 +03:00
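A hedged sketch of the query class described above (hypothetical tables; not the original test case): GROUP BY plus DISTINCT forces a second temporary table, and the user-variable assignments re-evaluate a correlated subquery from copy_funcs.

```sql
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (a INT, c INT);
INSERT INTO t1 VALUES (1,1),(1,2),(2,3);
INSERT INTO t2 VALUES (1,10),(2,20);
SELECT DISTINCT
       @m := (SELECT MIN(c) FROM t2 WHERE t2.a = t1.a),  -- user-var assignment with a subquery argument
       @n := @m + 1
FROM t1
GROUP BY t1.a;
```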
unknown
950abd5268 Fix of LP bug#992380 + revise fix_fields about missing with_subselect collection
The problem is that some fix_fields implementations do not call Item_func::fix_fields and therefore do not collect the with_subselect information.
2012-05-22 08:48:10 +03:00
Manish Kumar
1605b7f68f BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
              
This failure occurs because the event length is
more than the total of max_allowed_packet plus the max_event_header
length. Since the event length exceeds this size, the master Dump
thread is unable to send the packet on to the slave.
                      
This can happen, e.g., with row-based replication in an Update_rows event.
            
Fix
====
          
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option that regulates the
max_allowed_packet of the slave threads (IO/SQL).
This allows the large packets to be received and applied
successfully by the slave.

sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increase the session max_allowed_packet to a large value for the slave's
  threads, i.e. without taking the global max_allowed_packet into consideration.
2012-05-21 12:57:39 +05:30
Sergei Golubchik
280fcf0808 5.1 merge 2012-05-18 14:23:05 +02:00
Sergei Golubchik
57f824b099 post-merge fixes
sql/slave.cc:
  add mutex protection, like in sql_parse.cc
2012-05-18 12:42:06 +02:00
Sergei Golubchik
0a8c9b98f6 merge with mysql-5.1.63 2012-05-17 12:12:33 +02:00
unknown
5a47413934 fix of LP bug#998321
The problem is that we can't check the null_value field of a non-basic constant without executing the item.
2012-05-17 10:13:25 +03:00
Nuno Carvalho
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Bjorn Munch
e72278fd42 Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
Sergey Petrunya
e1b6e1b899 Merge 5.2->5.3 2012-05-12 12:12:35 +04:00
Sergey Petrunya
97ae1682f1 BUG#997747: Assertion `join->best_read < ((double)1.79..5e+308L)' failed
in greedy_search with LEFT JOINs and unique keys
- Backport the fix for BUG#806524 from MariaDB 5.3
2012-05-12 11:53:14 +04:00
unknown
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
2012-05-11 09:35:46 +03:00
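A hedged illustration (hypothetical tables, not the original test case): the IN predicate over the inner side of the LEFT JOIN is not null-rejecting, so the NULL-complemented row must survive and the outer join must not become an inner join.

```sql
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT, b INT);
CREATE TABLE t3 (c INT);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (1, 1);
SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a
WHERE t2.b NOT IN (SELECT c FROM t3);   -- expected: both t1 rows, since t3 is empty
```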
Annamalai Gurusami
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() did not exist, and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read, while
for the other index we did a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function.
It is implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then performs any additional operations
required, setting ha_innobase->prebuilt->select_lock_type
correctly.

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
Vladislav Vaintroub
54534a6984 MDEV-262 : log_state occasionally fails in buildbot.
The failures are missing entries in the slow query log. The reason for the failure is sleep() calls with a short duration of 10ms, which is less than the default system timer resolution of the various WaitForXXXObject functions (15.6 ms) and thus can't work reliably.
The fix is to make the sleeps a bit longer (20ms instead of 10ms) in the test.
2012-05-08 12:38:22 +02:00
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Venkata Sidagam
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and slow_log
tables, because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
takes LOCKS, which is not allowed for log tables.

Fix:
----
The approach is: instead of taking both the metadata
and the data dump for those tables, take only the metadata
dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables that are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables that are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
statements for those tables were removed.

After the fix we may initially observe "Table 'mysql.general_log'
doesn't exist" errors; that is because in the customer
scenario the mysql database is dropped while
logging is enabled, so those errors are expected. Once we apply the
dump taken before the "drop database mysql", the errors
are gone.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for the general_log
  and slow_log tables, because those statements fail when logging is enabled. Also
  replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make
  sure the CREATE statement for those tables doesn't fail now that the DROP statements
  have been removed.
  In dump_all_tables_in_db() added code to call get_table_structure() for 
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
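An illustrative check of the constraint described above, under the assumption that logging is enabled (not part of the patch): the log tables can still be inspected, but DROP TABLE on them fails, which is why the dump omits DROP and uses CREATE TABLE IF NOT EXISTS.

```sql
SET GLOBAL general_log = 'ON';
SHOW CREATE TABLE mysql.general_log;   -- metadata only, no data lock needed
DROP TABLE mysql.general_log;          -- fails while logging is enabled
SET GLOBAL general_log = 'OFF';
```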
unknown
8065143637 Fix for LP bug#993726
Optimization of aggregate functions detected a constant under MAX() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
2012-05-07 13:26:34 +03:00
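A minimal sketch of the case described above (the table is hypothetical): a constant under MAX() combined with an always-false WHERE clause.

```sql
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2);
SELECT MAX(7) FROM t1 WHERE 1 = 2;   -- expected: NULL (no rows match), not 7
```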
unknown
213476ef3e Fix for bug lp:992405
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY

Original comment:
-----------------
3714 Jorgen Loland	2012-03-01
      BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT 
                     QUERY OUTPUT
      
      For all but simple grouped queries, temporary tables are used to
      resolve grouping. In these cases, the list of grouping fields is
      stored in the temporary table and grouping is resolved
      there (e.g. by adding a unique constraint on the involved
      fields). Because of this, grouping is already done when the rows
      are read from the temporary table.
      
      In the case where a group clause may be optimized away, grouping
      does not have to be resolved using a temporary table. However, if
      a temporary table is explicitly requested (e.g. because the
      SQL_BUFFER_RESULT hint is used, or the statement is
      INSERT...SELECT), a temporary table is used anyway. In this case,
      the temporary table is created with an empty group list (because
      the group clause was optimized away) and it will therefore not
      create groups. Since the temporary table does not take care of
      grouping, JOIN::group shall not be set to false in 
      make_simple_join(). This was fixed in bug 12578908. 
      
      However, there is an exception where make_simple_join() should
      set JOIN::group to false even if the query uses a temporary table
      that was explicitly requested but is not strictly needed. That
      exception is if the loose index scan access method (explain
      says "Using index for group-by") is used to read into the 
      temporary table. With loose index scan, grouping is resolved 
      by the access method. This is exactly what happens in this bug.
2012-05-07 11:02:58 +03:00
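A hedged sketch of the interaction described (hypothetical table, not the original test case): grouping on a constant can be optimized away or resolved by loose index scan, while SQL_BUFFER_RESULT still forces a temporary table.

```sql
CREATE TABLE t1 (a INT, b INT, KEY ab (a, b));
INSERT INTO t1 VALUES (1,1),(1,2),(2,3),(2,4);
SELECT SQL_BUFFER_RESULT MIN(b) FROM t1 WHERE a = 1 GROUP BY a;  -- expected: one row, 1
SELECT SQL_BUFFER_RESULT a, MIN(b) FROM t1 GROUP BY a;           -- expected: (1,1),(2,3)
```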
unknown
c9a73aa204 Fix bug lp:993745
This is a backport of the fix for MySQL bug #13723054 in 5.6.

Original comment:
      The crash is caused by arbitrary memory area overwriting in the case of
      BLOB fields during an attempt to copy the BLOB field key image into the
      record buffer (the record buffer is too small to hold the BLOB key part image).
      Note:
      QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
      because it uses the record buffer as a temporary buffer for key values;
      however, this case is filtered out by the covering_keys() check
      in get_best_group_min_max(), as BLOBs always require a key length
      modifier in the key declaration, and if the key has a BLOB
      then it cannot be a covering key.
      The fix is to use the 'max_used_key_length' key length instead of 0.

Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field which is not used
for index access. Copying the BLOB field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
2012-05-03 14:49:52 +03:00
Sergei Golubchik
167ad4c4a5 update the result file 2012-05-02 22:00:31 +02:00
Oleksandr Byelkin
8fe40c50db MDEV-214 lp:967242 Wrong result with JOIN, AND in ON condition, multi-part key, GROUP BY, subquery and OR in WHERE
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.

The solution is to mark as constant only top-level equalities connected with AND.
2012-05-02 18:11:02 +02:00
Vladislav Vaintroub
26fcc55017 LP993103: Wrong result with LAST_DAY('0000-00-00 00:00:00') IS NULL in WHERE condition
The fix is to set the maybe_null flag for Item_func_last_day.
2012-05-02 16:53:02 +02:00
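A short illustration of the reported pattern (not the original test case): LAST_DAY() of a zero date evaluates to NULL, so with maybe_null set the IS NULL test in WHERE must hold rather than be optimized away.

```sql
SELECT LAST_DAY('0000-00-00 00:00:00');                            -- NULL (with a warning)
SELECT 1 FROM DUAL WHERE LAST_DAY('0000-00-00 00:00:00') IS NULL;  -- expected: 1
```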
Andrei Elkin
f189ccf530 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
Andrei Elkin
bd0c38064f BUG#11754117
rpl_auto_increment_bug45679.test is refined because Bug#11749859 (39934) is not fixed in 5.1.
2012-04-23 11:51:19 +03:00
Nuno Carvalho
cdaae1692b BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleared.

The problem is solved by moving 'LOG_INFO linfo;' to function scope.
2012-04-20 22:25:59 +01:00
Andrei Elkin
49e484c8cd BUG#11754117 incorrect logging of INSERT into auto-increment
BUG#11761686 insert_id event is not filtered.
  
Two issues are covered.
  
INSERT into an auto-increment field which is not the first part of the composite primary key
is unsafe by auto-increment logging design. The case is specific to the MyISAM engine
because InnoDB does not allow such a table definition.
  
However, neither warnings nor row-format logging in MIXED mode were produced, and
that is fixed.
  
Int-, Rand- and User-var log events were not filtered along with their parent
query, which made it possible for them to corrupt the execution context of the
following query.
  
Fixed by deferring their execution until the parent query is executed.

******
Bug#11754117 

Post review fixes.

mysql-test/suite/rpl/r/rpl_auto_increment_bug45679.result:
  a new result file is added.
mysql-test/suite/rpl/r/rpl_filter_tables_not_exist.result:
  results updated.
mysql-test/suite/rpl/t/rpl_auto_increment_bug45679.test:
  regression test for BUG#11754117-45670 is added.
mysql-test/suite/rpl/t/rpl_filter_tables_not_exist.test:
  regression test for filtering issue of BUG#11754117 - 45670 is added.
sql/log_event.cc:
  Logic is added for deferring and executing events associated
  with the Query event.
sql/log_event.h:
  Interface to deferred events batch execution is added.
sql/rpl_rli.cc:
  initialization for new RLI members is added.
sql/rpl_rli.h:
  New members to RLI are added to facilitate deferred events gathering
  and execution control;
  two general character RLI cleanup methods are constructed.
sql/rpl_utility.cc:
  Deferred_log_events methods are defined.
sql/rpl_utility.h:
  A new class Deferred_log_events is defined to implement
  IRU events gathering, execution and cleanup.
sql/slave.cc:
  Necessary changes to initialize `rli->deferred_events' and prevent
  deferred event deletion in the main read-exec branch.
sql/sql_base.cc:
  A new safety-check function for multi-part PKs with auto-increment is defined
  and deployed in lock_tables().
sql/sql_class.cc:
  Initialization for a new member and replication cleanups are added
  to THD class.
sql/sql_class.h:
  THD class receives a new member to hold a specific execution
  context for slave applier.
sql/sql_parse.cc:
  Execution of the deferred event is started prior to its parent query.
2012-04-20 19:41:20 +03:00
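A sketch of the unsafe pattern described above (illustrative names): a MyISAM auto-increment column that is not the first column of the composite primary key; per the message, InnoDB rejects this layout, while MyISAM gives per-group counters.

```sql
CREATE TABLE t_ai (
  grp INT NOT NULL,
  id  INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (grp, id)
) ENGINE=MyISAM;
INSERT INTO t_ai (grp) VALUES (1), (1), (2);  -- id restarts per grp; unsafe for statement logging
SELECT * FROM t_ai ORDER BY grp, id;
```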
Tor Didriksen
11b2cf4f03 Backport 5.5=>5.1 Patch for Bug#13805127:
Stored program cache produces wrong result in same THD.
2012-04-18 13:14:05 +02:00