Commit graph

33708 commits

Sergei Golubchik
2510f9c606 cleanup: remove special case from store_key::store_key(), add Field_blob::new_key_field
(prep for MDEV-6065)
2014-06-09 20:18:53 +02:00
Sergei Golubchik
dc9b2a95bf MDEV-6249 mark P_S STABLE and disable it by default 2014-06-09 20:00:23 +02:00
Igor Babaev
2436d58e19 Merge 2014-06-10 15:32:56 -07:00
Sergey Petrunya
02720fd7ac Merge 2014-06-10 21:51:02 +02:00
Sergey Petrunya
b80a02cbc4 Merge 2014-06-10 21:46:27 +02:00
Igor Babaev
1f7e68044c Merge. 2014-06-10 12:45:20 -07:00
Igor Babaev
d42e6d3a99 Fixed bug mdev-6071.
The method JOIN_CACHE::init may fail (return 1) if some conditions on the
used join buffer are not satisfied. For example, it fails if join_buffer_size
is greater than join_buffer_space_limit. The conditions should be checked
when running the EXPLAIN command for the query. That's why the method
JOIN_CACHE::init has to be called for EXPLAIN commands as well.
2014-06-10 10:34:58 -07:00
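
A minimal SQL sketch of the failure condition described above (table and column names are illustrative, not from the commit):

    SET SESSION join_buffer_size = 8*1024*1024;        -- larger than the limit below
    SET SESSION join_buffer_space_limit = 1024*1024;
    -- With the fix, the join-buffer conditions (including this size check) are
    -- evaluated for EXPLAIN as well, not only when the query is executed.
    EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;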
Alexey Botchkov
6b84ecdc37 MDEV-4440 IF NOT EXISTS in multi-action ALTER does not work when the problem is created by a previous part of the ALTER.
Loops were added to handle_if_exists_option() to check the
CREATE/DROP lists for duplicates.
2014-06-10 17:02:46 +05:00
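
A minimal SQL sketch of the scenario (table and column names are illustrative): the second action duplicates something created by the first action of the same ALTER, which IF NOT EXISTS previously did not notice:

    CREATE TABLE t1 (a INT);
    ALTER TABLE t1
      ADD COLUMN IF NOT EXISTS b INT,   -- creates b
      ADD COLUMN IF NOT EXISTS b INT;   -- with the fix, skipped as a duplicate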
Sergey Petrunya
aeb62282a2 MDEV-5985: EITS: selectivity estimates look illogical for join and non-key equalities
Part#1. 

table_cond_selectivity() should discount the selectivity of the table's
conditions only when it counts that selectivity to begin with.

For non-ref-based access methods (ALL/range/index_merge/etc),
we start with sel=1.0 and hence do not need to discount any
selectivities.
2014-06-10 12:25:16 +02:00
unknown
b1886e2bff merge of MDEV-6047 2014-06-09 13:47:20 +03:00
unknown
4cd676cbd9 MDEV-6047: Make exists_to_in optimization ON by default 2014-06-09 13:42:21 +03:00
Sergey Petrunya
349e31d5a7 Merge 2014-06-07 23:45:05 +02:00
Sergey Petrunya
71df035551 MDEV-6308: Server crashes in table_multi_eq_cond_selectivity with ...
- In table_cond_selectivity(), reset keyuse variable between the loops.
2014-06-07 19:00:26 +02:00
Sergey Petrunya
ee6f400fe1 MDEV-5976: TokuDB: Wrong query result using mrr=on
- Key_value_records_iterator::get_next() should pass a pointer to the key
  to handler->ha_index_next_same().  Because of a typo bug, pointer-to-pointer
  was passed instead in certain cases.
2014-06-06 21:28:42 +04:00
Tor Didriksen
f88e362fbc Bug#18786138 SHA/MD5 HASHING FUNCTIONS DIE WITH "FILENAME" CHARACTER SET
For charsets with no binary collation: use my_charset_bin.
2014-06-06 16:49:25 +02:00
Alexander Barkov
216fbe2af3 MDEV-6102 Comparison between TIME and DATETIME does not use CURRENT_DATE
MDEV-6101 Hybrid functions do not add CURRENT_DATE when converting TIME to DATETIME
2014-06-06 10:29:52 +04:00
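
A hedged SQL sketch of the behaviour these two fixes concern; exact results depend on CURRENT_DATE:

    -- MDEV-6102: the TIME operand should be extended with CURRENT_DATE before
    -- the comparison, so this is expected to be true (1).
    SELECT TIME'10:20:30' = TIMESTAMP(CURRENT_DATE, '10:20:30');
    -- MDEV-6101: a hybrid function mixing TIME and DATETIME should likewise
    -- add CURRENT_DATE when converting the TIME argument.
    SELECT GREATEST(TIME'10:20:30', NOW());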
Sergei Golubchik
e27c338634 5.5.38 merge 2014-06-06 00:07:27 +02:00
Sergey Petrunya
c7e5a1f70d MDEV-6105: Emoji unicode character string search query makes mariadb performance down
- When the range optimizer cannot convert the lookup value into the
  [VAR]CHAR(n) column, it should produce:
  = "Impossible range" for equality
  = "no range" for non-equalities.
2014-06-05 19:18:35 +04:00
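
A minimal SQL sketch of the scenario (illustrative table, assuming a utf8mb4 connection): the 4-byte emoji cannot be converted into the 3-byte utf8 column, so the equality should now be treated as an impossible range rather than degrading performance:

    CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET utf8, KEY(c));
    EXPLAIN SELECT * FROM t1 WHERE c = '😀';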
Sergei Golubchik
2d2697ea8d MDEV-6221 SQL_CALC_FOUND_ROWS yields wrong result again
replace another old "was there a filesort?" test with
a correct "did filesort calculate found_rows?" test.
2014-06-05 15:59:46 +02:00
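
A minimal SQL sketch of the FOUND_ROWS() pattern the fix concerns (illustrative data):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1),(2),(3),(4),(5);
    SELECT SQL_CALC_FOUND_ROWS a FROM t1 ORDER BY a LIMIT 2;
    SELECT FOUND_ROWS();   -- expected 5: the row count without the LIMIT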
Sergei Golubchik
fde6ee61bb revert the fix for MDEV-5898, restore the fix for MDEV-5549.
simplify test case for MDEV-5898
2014-06-05 15:59:41 +02:00
Sergei Golubchik
37d353770f MDEV-5998 MySQL Bug#11756966 - 48958: STORED PROCEDURES CAN BE LEVERAGED TO BYPASS DATABASE SECURITY
Merge from mysql-5.6:
revno: 3257
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug11756966
timestamp: Thu 2011-07-14 09:32:01 +0200
message:
  Bug#11756966 - 48958: STORED PROCEDURES CAN BE LEVERAGED TO BYPASS
                 DATABASE SECURITY

  The problem was that CREATE PROCEDURE/FUNCTION could be used to
  check the existence of databases for which the user had no
  privileges and which the user therefore should not be allowed to see.

  The reason was that existence of a given database was checked
  before privileges. So trying to create a stored routine in
  a non-existent database would give a different error than trying
  to create a stored routine in a restricted database.

  This patch fixes the problem by changing the order of the checks
  for CREATE PROCEDURE/FUNCTION so that privileges are checked first.
  This means that trying to create a stored routine in a
  non-existent database and in a restricted database both will
  give ER_DBACCESS_DENIED_ERROR error.

  Test case added to grant.test.
2014-06-05 15:59:35 +02:00
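
A minimal SQL sketch of the behaviour change (database names are illustrative); run as a user with no privileges on either database:

    CREATE PROCEDURE restricted_db.p1() SELECT 1;   -- database exists, no privileges
    CREATE PROCEDURE no_such_db.p1()    SELECT 1;   -- database does not exist
    -- After the fix both statements fail with the same ER_DBACCESS_DENIED_ERROR,
    -- so the error no longer reveals whether the database exists.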
Alexander Barkov
7d3a67a976 Merge 5.3->5.5 2014-06-05 09:15:25 +04:00
Alexander Barkov
30b007a1ab Fixing a valgrind warning introduced in the previous changeset:
"have_warnings" was set to an uninialized value when converting
a negative number to datetime.
2014-06-05 09:01:05 +04:00
Alexander Barkov
284479c085 Merge 5.3->5.5 2014-06-04 21:53:15 +04:00
Alexander Barkov
661daf16f1 MDEV-4858 Wrong results for a huge unsigned value inserted into a TIME column
MDEV-6099 Bad results for DATE_ADD(.., INTERVAL 2000000000000000000.0 SECOND)
MDEV-6097 Inconsistent results for CAST(int,decimal,double AS DATETIME)
MDEV-6100 No warning on CAST(9000000 AS TIME)
2014-06-04 20:32:57 +04:00
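
Hedged SQL sketches of the cases named in the four bug titles (illustrative table):

    CREATE TABLE t1 (t TIME);
    INSERT INTO t1 VALUES (18446744073709551615);                 -- MDEV-4858
    SELECT DATE_ADD('2001-01-01',
                    INTERVAL 2000000000000000000.0 SECOND);       -- MDEV-6099
    SELECT CAST(100 AS DATETIME), CAST(100.0 AS DATETIME),
           CAST(100e0 AS DATETIME);                               -- MDEV-6097: should agree
    SELECT CAST(9000000 AS TIME);                                 -- MDEV-6100: should now warn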
Michael Widenius
414e8388bf Fixed compiler warnings
mysys/psi_noop.c:
  Fixed wrong prototype
sql/rpl_gtid.cc:
  Added #ifndef to hide not used variable
storage/connect/connect.cc:
  Added volatile to avoid compiler warning in gcc 4.8.1
storage/connect/filamvct.cpp:
  Added volatile to avoid compiler warning in gcc 4.8.1
storage/maria/ma_checkpoint.c:
  Removed cast to avoid compiler warning
storage/myisam/mi_delete_table.c:
  Added attribute to avoid compiler warning
storage/tokudb/ha_tokudb.cc:
  Use LINT_INIT_STRUCT to avoid compiler warnings
storage/tokudb/hatoku_hton.cc:
  Use LINT_INIT_STRUCT to avoid compiler warnings
storage/tokudb/tokudb_card.h:
  Use LINT_INIT_STRUCT to avoid compiler warnings
storage/tokudb/tokudb_status.h:
  Use LINT_INIT_STRUCT to avoid compiler warnings
2014-06-04 13:23:00 +03:00
unknown
113333d447 MDEV-6046: MySQL Bug#11766684 59851: UNINITIALISED VALUE IN ITEM_FUNC_LIKE::SELECT_OPTIMIZE WITH SUBQUERY AND 2014-06-04 13:03:55 +03:00
unknown
55bfabf971 MDEV-6163: Error while executing an update query that has the same table in a sub-query
We have to run the derived table prepare before the unique table check, to mark the derived table (in this case the unique table check can turn that table into a materialized one).
2014-06-04 10:10:19 +03:00
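
One possible shape of the problem, as a hedged SQL sketch (table and view names are illustrative, not from the bug report): the update target also appears inside the subquery through a mergeable view, and the unique table check should turn that instance into a materialized one rather than fail:

    CREATE TABLE t1 (a INT, b INT);
    CREATE VIEW v1 AS SELECT a, b FROM t1;
    UPDATE t1 SET b = 1 WHERE a IN (SELECT a FROM v1 WHERE b = 2);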
Sergey Petrunya
b6b5b748e7 MDEV-5884: EXPLAIN UPDATE ... ORDER BY LIMIT shows wrong #rows
- Make get_index_for_order() return the correct #rows.
  Changed EXPLAIN outputs are checked - only #rows is different.
2014-06-04 00:26:27 +04:00
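
A minimal SQL sketch (illustrative table): the index on b is chosen to satisfy ORDER BY ... LIMIT, and the reported #rows should now reflect the LIMIT rather than the whole table:

    CREATE TABLE t1 (a INT, b INT, KEY(b));
    EXPLAIN UPDATE t1 SET a = a + 1 ORDER BY b LIMIT 10;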
unknown
629b822913 MDEV-5262, MDEV-5914, MDEV-5941, MDEV-6020: Deadlocks during parallel
replication causing replication to fail.

In parallel replication, we run transactions from the master in parallel, but
force them to commit in the same order they did on the master. If we force T1
to commit before T2, but T2 holds eg. a row lock that is needed by T1, we get
a deadlock when T2 waits until T1 has committed.

Usually, we do not run T1 and T2 in parallel if there is a chance that they
can have conflicting locks like this, but there are certain edge cases where
it can occasionally happen (eg. MDEV-5914, MDEV-5941, MDEV-6020). The bug was
that this would cause replication to hang, eventually getting a lock timeout
and causing the slave to stop with error.

With this patch, InnoDB will report back to the upper layer whenever a
transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel
replication transactions, and T2 needs to commit later than T1, we can thus
detect the deadlock; we then kill T2, setting a flag that causes it to catch
the kill and convert it to a deadlock error; this error will then cause T2 to
roll back and release its locks (so that T1 can commit), and later T2 will be
re-tried and eventually also committed.

The kill happens asynchronously in a slave background thread; this is
necessary, as the reporting from InnoDB about lock waits happens deep inside
the locking code, at a point where it is not possible to directly call
THD::awake() due to mutexes held.

Deadlock is assumed to be (very) rarely occurring, so this patch tries to
minimise the performance impact on the normal case where no deadlocks occur,
rather than optimise the handling of the occasional deadlock.

Also fix transaction retry due to deadlock when it happens after a transaction
already signalled to later transactions that it started to commit. In this
case we need to undo this signalling (and later redo it when we commit again
during retry), so following transactions will not start too early.

Also add a missing thd->send_kill_message() that got triggered during testing
(this corrects an incorrect fix for MySQL Bug#58933).
2014-06-03 10:31:11 +02:00
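
A hedged sketch of the slave-side setup in which this detection applies; the kill/retry itself is internal and needs no configuration:

    STOP SLAVE;
    SET GLOBAL slave_parallel_threads = 4;   -- enable parallel apply; commits stay in master binlog order
    START SLAVE;
    SHOW GLOBAL STATUS LIKE 'Slave_retried_transactions';  -- counts transactions rolled back and retried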
Sergei Golubchik
5d16592d44 mysql-5.5.38 merge 2014-06-03 09:55:08 +02:00
Sergei Golubchik
e5daa0946f 5.3 merge 2014-06-02 19:08:59 +02:00
unknown
0fbe91b45b MDEV-6251: SIGSEGV in query optimizer (in set_check_materialized with MERGE view)
mysql_derived_merge() was made to work correctly with views.
2014-06-02 15:36:06 +03:00
Alexander Barkov
4211b1cd48 MDEV-4051 INET6_ATON() and INET6_NTOA()
Backporting functions from MySQL-5.6:

- INET6_ATON()
- INET6_NTOA()
- IS_IPV4()
- IS_IPV4_COMPAT()
- IS_IPV4_MAPPED()
- IS_IPV6()
2014-05-30 16:19:00 +04:00
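
A short SQL sketch exercising the backported functions (addresses are illustrative):

    SELECT HEX(INET6_ATON('2001:db8::1'));          -- 16-byte binary form
    SELECT INET6_NTOA(INET6_ATON('2001:db8::1'));   -- round-trip back to text
    SELECT IS_IPV6('2001:db8::1'),
           IS_IPV4('192.0.2.1'),
           IS_IPV4_MAPPED(INET6_ATON('::ffff:192.0.2.1')),
           IS_IPV4_COMPAT(INET6_ATON('::192.0.2.1'));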
Alexander Barkov
1449d1d54f Moving implementation of INET_ATON() INET_NTOA() into
separate files item_inetfunc.h and item_inetfunc.cc.
2014-05-30 15:24:25 +04:00
Sergey Petrunya
d533a64bf3 MDEV-6239: Partition pruning is not working as expected in an inner query
- Make partition pruning work for tables inside semi-join nests
  (the new condition is the same one that the range optimizer uses,
   so it should be ok)
2014-05-29 02:25:37 +04:00
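
A hedged SQL sketch (illustrative tables): t1 sits inside the semi-join nest produced for the IN subquery, and with the fix EXPLAIN PARTITIONS should list only the partition(s) matching t1.a = 3 instead of all eight:

    CREATE TABLE t1 (a INT) PARTITION BY HASH(a) PARTITIONS 8;
    CREATE TABLE t2 (a INT);
    EXPLAIN PARTITIONS
    SELECT * FROM t2 WHERE t2.a IN (SELECT t1.a FROM t1 WHERE t1.a = 3);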
Sergey Petrunya
dedc76b7d9 MDEV-6263: Wrong result when using IN subquery with order by
- When the optimizer chose LooseScan, make_join_readinfo() should
  use the index that was chosen for LooseScan, and should not try 
  to find a better (shortest) index.
2014-05-28 17:32:43 +04:00
Sergei Golubchik
63a0e5640d typo fixed (compilation failure with libwrap) 2014-05-26 13:31:11 +02:00
Tor Didriksen
ab8bd02b9b Bug#18315770 BUG#12368495 FIX IS INCOMPLETE
Item_func_ltrim::val_str did not handle multibyte charsets.
Fix: factor out common code for Item_func_trim and Item_func_ltrim.
2014-05-16 10:18:43 +02:00
unknown
787c470cef MDEV-5262: Missing retry after temp error in parallel replication
Handle retry of event groups that span multiple relay log files.

 - If retry reaches the end of one relay log file, move on to the next.

 - Handle refcounting of relay log files, and avoid purging relay log
   files until all event groups that might have needed them for
   transaction retry have completed.
2014-05-15 15:52:08 +02:00
Neeraj Bisht
10978e0aa9 Bug#18207212 : FILE NAME IS NOT ESCAPED IN BINLOG FOR LOAD DATA INFILE STATEMENT
Problem:
The Load_log_event::print_query() function did not escape the file name
of a "LOAD DATA INFILE" statement.

Analysis:
When the file name of a "LOAD DATA INFILE" statement contains "'",
Load_log_event::print_query() does not add an escape character to it.

As a result, when the binary log is printed, the file name appears
without the escape character.

Solution:
To escape "'" in the file name, write the file name with
pretty_print_str() instead of a simple memcpy().
2014-05-15 15:50:52 +05:30
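
A minimal SQL sketch (file name and table are illustrative): the quote inside the file name is what must be escaped when the statement is written to the binary log:

    LOAD DATA INFILE '/tmp/it''s data.csv' INTO TABLE t1;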
mithun
f220233512 Bug#17217128 : BAD INTERACTION BETWEEN MIN/MAX AND
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both
AGG(DISTINCT) and MIN()/MAX() functions, then the result
values of MIN()/MAX() are set improperly.
When a query has AGG(DISTINCT), end_select is set to
end_send_group. "end_send_group" keeps doing aggregation
until it sees a record from the next group, and then it
sends out the result row of that group.
Since the query also has MIN()/MAX() and loose index scan is
used, the values of MIN()/MAX() are set as part of the loose
index scan itself. Setting the MIN()/MAX() values as part of
loose index scan overwrites the values computed in
end_send_group. This caused invalid results.
For such queries to work, loose index scan should stop
performing MIN()/MAX() aggregation and let end_send_group
do it. But according to the current design, loose index
scan can produce only one row per group key. If we have both
MIN() and MAX(), it would have to give two records out. This
is not possible, as the interface has to use the common
buffer record[0] for both records at a time.

SOLUTIONS:
----------
For such queries to work we would need a new interface for loose
index scan. Hence, do not choose loose_index_scan for such
cases. A new rule SA7 is introduced to take care of this.

SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
      MIN/MAX() functions then loose index scan access
      method is not used."

mysql-test/r/group_min_max.result:
  Expected result.
mysql-test/t/group_min_max.test:
  1. Test with various combination of AGG(DISTINCT) and
  MIN(), MAX() functions.
  2. Corrected the plan for old queries.
sql/opt_range.cc:
  A new rule SA7 is introduced.
2014-05-15 11:46:57 +05:30
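
One illustrative query shape covered by the new rule SA7 (table and columns are not from the original test case): it mixes an aggregate with DISTINCT and MIN()/MAX(), so loose index scan ("Using index for group-by") should no longer be chosen for it:

    CREATE TABLE t1 (a INT, b INT, KEY(a, b));
    EXPLAIN SELECT COUNT(DISTINCT a), MIN(b), MAX(b) FROM t1;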
unknown
d60915692c MDEV-5262: Missing retry after temp error in parallel replication
Implement that if the first retry fails, we can do another attempt.

Add test cases for a multi-retry that succeeds on the second attempt, and a
multi-retry that eventually fails due to exceeding slave_trans_retries.
2014-05-13 13:42:06 +02:00
Sergei Golubchik
edf1fbd25b MDEV-6153 Trivial Lintian errors in MariaDB sources: spelling errors and wrong executable bits 2014-05-13 11:53:30 +02:00
Sergei Golubchik
1170a54060 fix a bad merge, causing a crash of fulltext.test in --ps-protocol 2014-05-10 23:42:01 +02:00
Sergei Golubchik
fca2b1a773 for windows 2014-05-09 12:36:15 +02:00
Sergei Golubchik
d3e2e1243b 5.5 merge 2014-05-09 12:35:11 +02:00
unknown
45a91d8cbb MDEV-6193: Problems with multi-table updates that JOIN against read-only table
All underlying tables should share the same lock type.
2014-05-08 22:56:36 +03:00
Venkatesh Duggirala
2870bd7423 Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Problem:  A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way
Thread 1: Dump thread (the slave is reconnecting, so on the
              master a new dump thread is trying to kill
              zombie dump threads). It acquired a thread's
              LOCK_thd_data and is about to acquire
              mysys_var->current_mutex (which is LOCK_log).
Thread 2: Application thread is executing show binlogs and
               acquired LOCK_log and it is about to acquire
               LOCK_index.
Thread 3: Application thread is executing Purge binary logs
               and acquired LOCK_index and it is about to
               acquire LOCK_thread_count.
Thread 4: Application thread is executing show processlist
               and acquired LOCK_thread_count and it is
               about to acquire zombie dump thread's
               LOCK_thd_data.
Deadlock Cycle:
     Thread 1 -> Thread 2 -> Thread 3-> Thread 4 ->Thread 1

The same deadlock was observed even when thread 4 is executing the
'SELECT * FROM information_schema.processlist' command, has
acquired LOCK_thread_count, and is about to acquire the zombie
dump thread's LOCK_thd_data.

Analysis:
There are four locks involved in the deadlock.  LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count, LOCK_index are global mutexes
whereas LOCK_thd_data is local to a thread.
We can divide these four locks in two groups.
Group 1 consists of LOCK_log and LOCK_index and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of other two mutexes
LOCK_thread_count, LOCK_thd_data and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order to follow in the
MySQL system when it comes to locks across these two groups.
In the above problematic example, there is no problem in the way
each thread individually acquires its locks, but if you combine
all 4 threads, they end up in a deadlock.

Fix: 
Since the way the threads take locks seems to be fine,
this patch changes the duration of the locks in Thread 4
to break the deadlock. I.e., before the patch, Thread 4
('show processlist' command) in the mysqld_list_processes()
function acquired LOCK_thread_count for the complete duration
of the function and also acquired/released
each thread's LOCK_thd_data.

LOCK_thread_count is used to protect addition and
deletion of threads in the global threads list. While show
process list is looping through all the existing threads,
a thread exiting is a problem, but a new thread being added
to the system is not. Hence a new mutex,
"LOCK_thd_remove", is introduced to protect deletion
of a thread from the global threads list. All threads which are
exiting should acquire LOCK_thd_remove
followed by LOCK_thread_count. (They should take LOCK_thread_count
as well because other places in the code still assume that thread
exit is protected by LOCK_thread_count; this fix changes
only the 'show process list' query logic.)
(E.g. the unlink_thd logic will be protected with
LOCK_thd_remove.)

The logic of mysqld_list_processes() (or fill_schema_processlist())
will now be protected with 'LOCK_thd_remove' instead of
'LOCK_thread_count'.

Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
2014-05-08 18:13:01 +05:30
unknown
b0b60f2498 MDEV-5262: Missing retry after temp error in parallel replication
Start implementing that an event group can be re-tried in parallel replication
if it fails with a temporary error (like deadlock).

Patch is very incomplete, just some very basic retry works.

Stuff still missing (not complete list):

 - Handle moving to the next relay log file, if event group to be retried
   spans multiple relay log files.

 - Handle refcounting of relay log files, to ensure that we do not purge a
   relay log file and then later attempt to re-execute events out of it.

 - Handle description_event_for_exec - we need to save this somehow for the
   possible retry - and use the correct one in case it differs between relay
   logs.

 - Do another retry attempt in case the first retry also fails.

 - Limit the max number of retries.

 - Lots of testing will be needed for the various edge cases.
2014-05-08 14:20:18 +02:00