Commit graph

44001 commits

Alexander Barkov
07cb53c58b Merge 5.3->5.5 2014-07-23 14:59:23 +04:00
Alexander Barkov
80708da138 MDEV-5750 Assertion `ltime->year == 0' fails on a query with EXTRACT DAY_MINUTE and TIME column
Item_func_min_max::get_date() did not clear ltime->year when returning a TIME value.
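For illustration, a minimal sketch of the idea (using a stand-in
struct, not the server's MYSQL_TIME definition): when a value is
returned as TIME, the date fields must be zeroed, otherwise later
code that asserts year == 0 for TIME values fails.

    // Stand-in for the server's time structure (illustrative only).
    struct TimeValue {
      unsigned year, month, day;
      unsigned hour, minute, second;
      bool time_only;            // true when the value represents a TIME
    };

    // Turn a datetime-style value into a TIME-only result.
    // Clearing year/month/day is the step the bug was missing.
    void make_time_only(TimeValue *ltime) {
      ltime->year = ltime->month = ltime->day = 0;
      ltime->time_only = true;
    }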
2014-07-23 13:38:48 +04:00
Praveenkumar Hulakund
97744101f4 Bug#14757009: WHEN THE GENERAL_LOG IS A SOCKET AND THE READER
GOES AWAY, MYSQL QUITS WORKING.

Analysis:
-----------------
The issue in this bug and in bug 11907705 is that a socket
file or FIFO file is set as the general log file on the
command line while starting the server, although only a
regular file can be used for the general log. Instead of
reporting an error, the provided file is opened for writing
and the server continues, which leads to the issues mentioned
in the bug reports.

As mentioned, these issues are seen only when a non-regular
file is set for the general log on the command line while
starting the server. If the general log file is set to a
non-regular file at runtime through the general_log_file
system variable, an error is reported.

The same issues can also occur with the slow query log file
if it is set to a non-regular file.

Fix:
-----------------
Currently, if the server fails to open the log file at
startup, it reports an error, disables logging to the file,
and continues. To fix the reported issue, the code is
modified to check whether the file is a regular file before
opening it. If it is not a regular file, an error is written
to the error log and logging to the file is disabled.
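For illustration, a minimal sketch of the kind of check described
above (not the actual server patch), assuming a POSIX
stat()/S_ISREG() test; is_regular_file() and open_log_file() are
hypothetical helpers:

    #include <sys/stat.h>
    #include <cstdio>

    // True if 'path' does not exist yet (it will be created as a
    // regular file) or exists and is a regular file, i.e. not a
    // socket, FIFO, directory or device.
    static bool is_regular_file(const char *path) {
      struct stat st;
      if (stat(path, &st) != 0)
        return true;
      return S_ISREG(st.st_mode);
    }

    // Called before opening the general or slow query log at startup.
    static FILE *open_log_file(const char *path) {
      if (!is_regular_file(path)) {
        fprintf(stderr, "Error: '%s' is not a regular file; "
                        "logging to this file is disabled.\n", path);
        return nullptr;   // caller disables file logging and continues
      }
      return fopen(path, "a");
    }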
2014-07-17 11:21:18 +05:30
Praveenkumar Hulakund
cd4fb2aeae Bug#14757009: WHEN THE GENERAL_LOG IS A SOCKET AND THE READER
GOES AWAY, MYSQL QUITS WORKING.

Analysis:
-----------------
The issue in this bug and in bug 11907705 is that a socket
file or FIFO file is set as the general log file on the
command line while starting the server, although only a
regular file can be used for the general log. Instead of
reporting an error, the provided file is opened for writing
and the server continues, which leads to the issues mentioned
in the bug reports.

As mentioned, these issues are seen only when a non-regular
file is set for the general log on the command line while
starting the server. If the general log file is set to a
non-regular file at runtime through the general_log_file
system variable, an error is reported.

The same issues can also occur with the slow query log file
if it is set to a non-regular file.

Fix:
-----------------
Currently, if the server fails to open the log file at
startup, it reports an error, disables logging to the file,
and continues. To fix the reported issue, the code is
modified to check whether the file is a regular file before
opening it. If it is not a regular file, an error is written
to the error log and logging to the file is disabled.
2014-07-17 11:21:18 +05:30
Venkata Sidagam
9406108356 Bug #17357528 BACKPORT BUG#16513435 TO 5.5 AND 5.6
Description: Backporting BUG#16513435 to 5.5 and 5.6
This is a fix for the REMOTE PREAUTH USER ENUMERATION FLAW bug
2014-06-30 19:24:25 +05:30
Venkata Sidagam
3bba29a397 Bug #17357528 BACKPORT BUG#16513435 TO 5.5 AND 5.6
Description: Backporting BUG#16513435 to 5.5 and 5.6
This is a fix for the REMOTE PREAUTH USER ENUMERATION FLAW bug
2014-06-30 19:24:25 +05:30
Praveenkumar Hulakund
14aa44bb8f Bug#18903155: BACKPORT BUG-18008907 TO 5.5+ VERSIONS.
Backporting patch committed for bug 18008907 to 5.5
and 5.6.
2014-06-27 17:04:08 +05:30
Praveenkumar Hulakund
b2c2656b62 Bug#18903155: BACKPORT BUG-18008907 TO 5.5+ VERSIONS.
Backporting patch committed for bug 18008907 to 5.5
and 5.6.
2014-06-27 17:04:08 +05:30
Nisha Gopalakrishnan
d63645c890 BUG#18405221: SHOW CREATE VIEW OUTPUT INCORRECT
Fix:
---
The issue reported is the same as BUG#14117018, hence the
patch is backported from mysql-trunk
to mysql-5.5 and mysql-5.6
2014-06-25 16:33:04 +05:30
Nisha Gopalakrishnan
b278384f64 BUG#18405221: SHOW CREATE VIEW OUTPUT INCORRECT
Fix:
---
The issue reported is the same as BUG#14117018, hence the
patch is backported from mysql-trunk
to mysql-5.5 and mysql-5.6
2014-06-25 16:33:04 +05:30
Terje Rosten
410b1dd86d Bug#16395459 TEST AND RESULT FILES WITH EXECUTE BIT
Bug#16415173 CRLF INSTEAD OF LF IN SQL-BENCH SCRIPTS
      
Correct permissions and convert line endings from Windows style to UNIX style on some files.
Fix permissions on installed ini files.

(MySQL 5.5 version)
2014-06-25 12:35:50 +02:00
Terje Rosten
5c4937c101 Bug#16395459 TEST AND RESULT FILES WITH EXECUTE BIT
Bug#16415173 CRLF INSTEAD OF LF IN SQL-BENCH SCRIPTS
      
Correct permissions and convert line endings from Windows style to UNIX style on some files.
Fix permissions on installed ini files.

(MySQL 5.5 version)
2014-06-25 12:35:50 +02:00
Nisha Gopalakrishnan
24756e8e3f BUG#18618561: FAILED ALTER TABLE ENGINE CHANGE WITH PARTITIONS
CORRUPTS FRM

Analysis:
---------
ALTER TABLE on a partitioned table resulted in the wrong
engine being written into the table's FRM file and displayed
in SHOW CREATE TABLE.

prep_alter_part_table() modifies the partition_info object
of the TABLE instance representing the old version of the table.
If the ALTER TABLE ENGINE statement fails, the partition_info
object for that TABLE contains the altered storage engine name.
SHOW CREATE TABLE uses the TABLE object to display the table
information, and hence displays an incorrect storage engine for
the table. A subsequent successful ALTER TABLE operation will
also write the incorrect engine information into the FRM file.

Fix:
---
A copy of the partition_info object is created before modification
so that the original partition_info object is not modified if the
ALTER TABLE fails. (Backported part of the code provided as the fix
for bug#14156617 in mysql-5.6.6.)
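For illustration, a minimal sketch of the copy-before-modify pattern
described above (stand-in types, not the server's partition_info
class): the original object is replaced only after the operation has
succeeded.

    #include <memory>
    #include <string>

    struct PartitionInfo {                  // stand-in for partition_info
      std::string default_engine;
    };

    // Apply an engine change on a copy; publish it only on success, so a
    // failed ALTER leaves the original object untouched.
    bool alter_engine(std::unique_ptr<PartitionInfo> &current,
                      const std::string &new_engine,
                      bool (*do_alter)(const PartitionInfo &)) {
      auto working = std::make_unique<PartitionInfo>(*current);  // copy
      working->default_engine = new_engine;
      if (!do_alter(*working))
        return false;                       // failure: 'current' unchanged
      current = std::move(working);         // success: install the new info
      return true;
    }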
2014-06-24 10:15:53 +05:30
Nisha Gopalakrishnan
0e947e88b1 BUG#18618561: FAILED ALTER TABLE ENGINE CHANGE WITH PARTITIONS
CORRUPTS FRM

Analysis:
---------
ALTER TABLE on a partitioned table resulted in the wrong
engine being written into the table's FRM file and displayed
in SHOW CREATE TABLE.

prep_alter_part_table() modifies the partition_info object
of the TABLE instance representing the old version of the table.
If the ALTER TABLE ENGINE statement fails, the partition_info
object for that TABLE contains the altered storage engine name.
SHOW CREATE TABLE uses the TABLE object to display the table
information, and hence displays an incorrect storage engine for
the table. A subsequent successful ALTER TABLE operation will
also write the incorrect engine information into the FRM file.

Fix:
---
A copy of the partition_info object is created before modification
so that the original partition_info object is not modified if the
ALTER TABLE fails. (Backported part of the code provided as the fix
for bug#14156617 in mysql-5.6.6.)
2014-06-24 10:15:53 +05:30
Jon Olav Hauglid
879fec69fc WL#7436: Deprecate and remove timed_mutexes system variable
This is the 5.5/5.6 version of the patch.

Add deprecation warning for timed_mutexes.
2014-06-19 16:47:41 +02:00
Jon Olav Hauglid
1f1c0faffd WL#7436: Deprecate and remove timed_mutexes system variable
This is the 5.5/5.6 version of the patch.

Add deprecation warning for timed_mutexes.
2014-06-19 16:47:41 +02:00
Sergey Vojtovich
3375e137f8 MDEV-6351 - --plugin=force has no effect for built-in plugins
mysqld didn't fail to start if a compiled-in plugin failed to initialize
(--xxx=FORCE behaving as --xxx=ON)
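For illustration, a minimal sketch of the intended FORCE semantics
(hypothetical names, not MariaDB's plugin API): with FORCE, an
initialization failure must abort server startup rather than silently
behave like ON.

    enum class PluginLoadMode { OFF, ON, FORCE };

    struct Plugin {
      const char *name;
      PluginLoadMode mode;
      bool (*init)();          // returns false if initialization failed
    };

    // Returns false when server startup must be aborted.
    bool init_plugin(const Plugin &p) {
      if (p.mode == PluginLoadMode::OFF)
        return true;           // not loaded at all
      if (p.init())
        return true;           // initialized fine
      // ON: log and continue without the plugin; FORCE: refuse to start.
      return p.mode != PluginLoadMode::FORCE;
    }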
2014-06-17 13:03:26 +04:00
Sergey Petrunya
07c0b1d8d0 MDEV-6434: Wrong result (extra rows) with ORDER BY, multiple-column index, InnoDB
- Filesort has an optimization where it reads only columns that are
  needed before the sorting is done.
- When ref(_or_null) is picked by the join optimizer, it may remove parts
  of the WHERE clause that are guaranteed to be true.
- However, if we use quick select, we must put all of the range columns into the
  read set. Not doing so may cause us to fail to detect the end of the range.
2014-07-22 15:52:49 +04:00
Sergei Golubchik
6d75570e99 fix range.test 2014-06-05 19:25:51 +02:00
Sergey Petrunya
c7e5a1f70d MDEV-6105: Emoji unicode character string search query makes mariadb performance down
- When the range optimizer cannot convert the lookup value for a
  [VAR]CHAR(n) column, it should produce:
  = "Impossible range" for equality
  = "no range" for non-equalities.
2014-06-05 19:18:35 +04:00
Alexander Barkov
284479c085 Merge 5.3->5.5 2014-06-04 21:53:15 +04:00
Alexander Barkov
661daf16f1 MDEV-4858 Wrong results for a huge unsigned value inserted into a TIME column
MDEV-6099 Bad results for DATE_ADD(.., INTERVAL 2000000000000000000.0 SECOND)
MDEV-6097 Inconsistent results for CAST(int,decimal,double AS DATETIME)
MDEV-6100 No warning on CAST(9000000 AS TIME)
2014-06-04 20:32:57 +04:00
unknown
55bfabf971 MDEV-6163: Error while executing an update query that has the same table in a sub-query
We have to run the derived table prepare before the unique table check to mark the derived table (in this case the unique table check can turn that table into a materialized one).
2014-06-04 10:10:19 +03:00
Sergei Golubchik
57d15d62f1 Add a test case for MySQL's:
Bug #18167356: EXPLAIN W/ EXISTS(SELECT* UNION SELECT*)
                 WHERE ONE OF SELECT* IS DISTINCT FAILS.

the bugfix itself was not merged - MariaDB doesn't have this bug.
2014-06-03 10:52:36 +02:00
Sergei Golubchik
5d16592d44 mysql-5.5.38 merge 2014-06-03 09:55:08 +02:00
Sergei Golubchik
e5daa0946f 5.3 merge 2014-06-02 19:08:59 +02:00
unknown
0fbe91b45b MDEV-6251: SIGSEGV in query optimizer (in set_check_materialized with MERGE view)
mysql_derived_merge() was made to work correctly with views.
2014-06-02 15:36:06 +03:00
Sergey Petrunya
d533a64bf3 MDEV-6239: Partition pruning is not working as expected in an inner query
- Make partition pruning work for tables inside semi-join nests
  (the new condition is the same as the one the range optimizer uses,
   so it should be ok)
2014-05-29 02:25:37 +04:00
Sergey Petrunya
dedc76b7d9 MDEV-6263: Wrong result when using IN subquery with order by
- When the optimizer chose LooseScan, make_join_readinfo() should
  use the index that was chosen for LooseScan, and should not try 
  to find a better (shortest) index.
2014-05-28 17:32:43 +04:00
mithun
f220233512 Bug#17217128 : BAD INTERACTION BETWEEN MIN/MAX AND
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both
AGG(DISTINCT) and MIN()/MAX() functions, then the result
values of MIN()/MAX() are set improperly.
When the query has AGG(DISTINCT), end_select is set to
end_send_group. "end_send_group" keeps doing aggregation
until it sees a record from the next group, and then it
sends out the result row of that group.
Since the query also has MIN()/MAX() and loose index scan is
used, the values of MIN()/MAX() are set as part of the loose
index scan itself. Setting the MIN()/MAX() values as part of
the loose index scan overwrites the values computed in
end_send_group. This causes invalid results.
For such queries to work, loose index scan should stop
performing MIN()/MAX() aggregation and let end_send_group do
it instead. But according to the current design, loose index
scan can produce only one row per group key. If we have both
MIN() and MAX(), then it has to give two records out. This is
not possible, as the interface has to use the common buffer
record[0] for both records at a time.

SOLUTIONS:
----------
For such queries to work, we would need a new interface for
loose index scan. Hence, do not choose loose index scan for
such cases; a new rule SA7 is introduced to take care of
this.

SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
      MIN/MAX() functions then loose index scan access
      method is not used."
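For illustration, a hypothetical sketch of how such a rule can be
expressed when deciding whether the loose index scan access method
applies (illustrative names, not the actual opt_range.cc code):

    struct QueryAggInfo {
      bool has_agg_distinct;   // e.g. SUM(DISTINCT ...), COUNT(DISTINCT ...)
      bool has_min_or_max;     // MIN()/MAX() on the min/max keypart
    };

    // Rule SA7: loose index scan cannot serve both kinds of aggregation
    // at once, because it can produce only one row per group key.
    bool loose_index_scan_allowed_sa7(const QueryAggInfo &q) {
      return !(q.has_agg_distinct && q.has_min_or_max);
    }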

mysql-test/r/group_min_max.result:
  Expected result.
mysql-test/t/group_min_max.test:
  1. Test with various combination of AGG(DISTINCT) and
  MIN(), MAX() functions.
  2. Corrected the plan for old queries.
sql/opt_range.cc:
  A new rule SA7 is introduced.
2014-05-15 11:46:57 +05:30
mithun
4c4def9043 Bug#17217128 : BAD INTERACTION BETWEEN MIN/MAX AND
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both
AGG(DISTINCT) and MIN()/MAX() functions, then the result
values of MIN()/MAX() are set improperly.
When the query has AGG(DISTINCT), end_select is set to
end_send_group. "end_send_group" keeps doing aggregation
until it sees a record from the next group, and then it
sends out the result row of that group.
Since the query also has MIN()/MAX() and loose index scan is
used, the values of MIN()/MAX() are set as part of the loose
index scan itself. Setting the MIN()/MAX() values as part of
the loose index scan overwrites the values computed in
end_send_group. This causes invalid results.
For such queries to work, loose index scan should stop
performing MIN()/MAX() aggregation and let end_send_group do
it instead. But according to the current design, loose index
scan can produce only one row per group key. If we have both
MIN() and MAX(), then it has to give two records out. This is
not possible, as the interface has to use the common buffer
record[0] for both records at a time.

SOLUTIONS:
----------
For such queries to work, we would need a new interface for
loose index scan. Hence, do not choose loose index scan for
such cases; a new rule SA7 is introduced to take care of
this.

SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
      MIN/MAX() functions then loose index scan access
      method is not used."
2014-05-15 11:46:57 +05:30
Venkatesh Duggirala
aa992742db Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Fixing a post-push test failure (MTR does not like being given
127.0.0.1 for localhost in case of an --embedded run; it thinks
it is an external IP address).
2014-05-09 09:52:15 +05:30
Venkatesh Duggirala
2b8a41a6c4 Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Fixing a post-push test failure (MTR does not like being given
127.0.0.1 for localhost in case of an --embedded run; it thinks
it is an external IP address).
2014-05-09 09:52:15 +05:30
Venkatesh Duggirala
2870bd7423 Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Problem:  A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way
Thread 1: Dump thread (the slave is reconnecting, so on the
              master a new dump thread is trying to kill
              zombie dump threads. It acquired the thread's
              LOCK_thd_data and is about to acquire
              mysys_var->current_mutex, which is LOCK_log).
Thread 2: Application thread is executing show binlogs and
               acquired LOCK_log and it is about to acquire
               LOCK_index.
Thread 3: Application thread is executing Purge binary logs
               and acquired LOCK_index and it is about to
               acquire LOCK_thread_count.
Thread 4: Application thread is executing show processlist
               and acquired LOCK_thread_count and it is
               about to acquire zombie dump thread's
               LOCK_thd_data.
Deadlock Cycle:
     Thread 1 -> Thread 2 -> Thread 3-> Thread 4 ->Thread 1

The same deadlock was observed even when Thread 4 is
executing the 'SELECT * FROM information_schema.processlist' command,
has acquired LOCK_thread_count, and is about to acquire the zombie
dump thread's LOCK_thd_data.

Analysis:
There are four locks involved in the deadlock: LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count and LOCK_index are global mutexes,
whereas LOCK_thd_data is local to a thread.
We can divide these four locks into two groups.
Group 1 consists of LOCK_log and LOCK_index, and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of the other two mutexes,
LOCK_thread_count and LOCK_thd_data, and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order to follow in
the MySQL server for locks across these two groups.
In the problematic example above, there is no problem in the
way the locks are acquired if you look at each thread
individually, but if you combine all 4 threads, they end up
in a deadlock.

Fix: 
Since the way the threads take locks seems fine when looked
at individually, this patch changes the duration of the locks
in Thread 4 to break the deadlock. That is, before the patch,
Thread 4's mysqld_list_processes() function (the 'show
processlist' command) acquires LOCK_thread_count for the
complete duration of the function and also acquires/releases
each thread's LOCK_thd_data.

LOCK_thread_count is used to protect addition and deletion
of threads in the global threads list. While show processlist
is looping through all the existing threads, it is a problem
if a thread exits, but there is no problem if a new thread is
added to the system. Hence a new mutex, "LOCK_thd_remove", is
introduced to protect deletion of a thread from the global
threads list. All exiting threads should acquire
LOCK_thd_remove followed by LOCK_thread_count. (They must
still take LOCK_thread_count because other places in the code
still assume that thread exit is protected by
LOCK_thread_count; in this fix we are changing only the
'show processlist' query logic.)
(E.g. the unlink_thd logic will be protected with
LOCK_thd_remove.)

The logic of mysqld_list_processes() (or fill_schema_processlist)
will now be protected with 'LOCK_thd_remove' instead of
'LOCK_thread_count'.

Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
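For illustration, a minimal sketch of the principle behind the fix:
deadlock across independent mutexes is avoided by agreeing on a
single global acquisition order and taking the locks only in that
order (std::mutex stand-ins, not the server's mutexes).

    #include <mutex>

    // Stand-ins, declared in the agreed global order:
    // LOCK_thd_remove -> LOCK_thd_data -> LOCK_log -> LOCK_index -> LOCK_thread_count
    std::mutex lock_thd_remove, lock_thd_data, lock_log, lock_index,
               lock_thread_count;

    // Any code path needing several of these must acquire them in the
    // order above; e.g. a "show processlist"-style walk over the threads:
    void list_threads() {
      std::lock_guard<std::mutex> keep_threads(lock_thd_remove); // no thread can be removed
      std::lock_guard<std::mutex> thread_data(lock_thd_data);    // per-thread data is stable
      // ... read and report each thread's state ...
    }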
2014-05-08 18:13:01 +05:30
Venkatesh Duggirala
33f15dc7ac Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Problem:  A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way
Thread 1: Dump thread (the slave is reconnecting, so on the
              master a new dump thread is trying to kill
              zombie dump threads. It acquired the thread's
              LOCK_thd_data and is about to acquire
              mysys_var->current_mutex, which is LOCK_log).
Thread 2: Application thread is executing show binlogs and
               acquired LOCK_log and it is about to acquire
               LOCK_index.
Thread 3: Application thread is executing Purge binary logs
               and acquired LOCK_index and it is about to
               acquire LOCK_thread_count.
Thread 4: Application thread is executing show processlist
               and acquired LOCK_thread_count and it is
               about to acquire zombie dump thread's
               LOCK_thd_data.
Deadlock Cycle:
     Thread 1 -> Thread 2 -> Thread 3-> Thread 4 ->Thread 1

The same deadlock was observed even when Thread 4 is
executing the 'SELECT * FROM information_schema.processlist' command,
has acquired LOCK_thread_count, and is about to acquire the zombie
dump thread's LOCK_thd_data.

Analysis:
There are four locks involved in the deadlock: LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count and LOCK_index are global mutexes,
whereas LOCK_thd_data is local to a thread.
We can divide these four locks into two groups.
Group 1 consists of LOCK_log and LOCK_index, and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of the other two mutexes,
LOCK_thread_count and LOCK_thd_data, and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order to follow in
the MySQL server for locks across these two groups.
In the problematic example above, there is no problem in the
way the locks are acquired if you look at each thread
individually, but if you combine all 4 threads, they end up
in a deadlock.

Fix: 
Since the way the threads take locks seems fine when looked
at individually, this patch changes the duration of the locks
in Thread 4 to break the deadlock. That is, before the patch,
Thread 4's mysqld_list_processes() function (the 'show
processlist' command) acquires LOCK_thread_count for the
complete duration of the function and also acquires/releases
each thread's LOCK_thd_data.

LOCK_thread_count is used to protect addition and deletion
of threads in the global threads list. While show processlist
is looping through all the existing threads, it is a problem
if a thread exits, but there is no problem if a new thread is
added to the system. Hence a new mutex, "LOCK_thd_remove", is
introduced to protect deletion of a thread from the global
threads list. All exiting threads should acquire
LOCK_thd_remove followed by LOCK_thread_count. (They must
still take LOCK_thread_count because other places in the code
still assume that thread exit is protected by
LOCK_thread_count; in this fix we are changing only the
'show processlist' query logic.)
(E.g. the unlink_thd logic will be protected with
LOCK_thd_remove.)

The logic of mysqld_list_processes() (or fill_schema_processlist)
will now be protected with 'LOCK_thd_remove' instead of
'LOCK_thread_count'.

Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
2014-05-08 18:13:01 +05:30
mithun
ee3c555ad9 Bug #17059925: UNIONS COMPUTES ROWS_EXAMINED INCORRECTLY
ISSUE:
------
For a UNION of selects, the rows examined by the query is
the sum of the rows examined by the individual select
operations plus the rows examined for the union operation.
The session-level counter used to count the rows examined by
a select statement should be accumulated and reset before it
is used for the next select statement, but we missed
resetting it. Because of this, the examined-row count of a
select query is accounted more than once.

SOLUTION:
---------
In UNION, reset the session-level counter used to accumulate
the count of examined rows after its value has been saved.
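For illustration, a minimal sketch of the accumulate-then-reset
pattern (hypothetical Session stand-in, not the server's THD):

    struct Session {                          // stand-in for THD
      unsigned long long examined_rows = 0;   // per-statement counter
    };

    // After each SELECT feeding the UNION, fold its counter into the
    // running total and reset it, so the next SELECT starts from zero.
    void accumulate_examined_rows(Session &s, unsigned long long &total) {
      total += s.examined_rows;
      s.examined_rows = 0;   // the missing reset that caused double counting
    }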

mysql-test/r/union.result:
  Expected output of testcase added.
mysql-test/t/union.test:
  Test to verify examined row count of Union operations.
sql/sql_union.cc:
  Reset the value of thd->examined_row_count after
  accumulating the value.
2014-05-08 14:49:53 +05:30
mithun
263d47d3a1 Bug #17059925: UNIONS COMPUTES ROWS_EXAMINED INCORRECTLY
ISSUE:
------
For a UNION of selects, the rows examined by the query is
the sum of the rows examined by the individual select
operations plus the rows examined for the union operation.
The session-level counter used to count the rows examined by
a select statement should be accumulated and reset before it
is used for the next select statement, but we missed
resetting it. Because of this, the examined-row count of a
select query is accounted more than once.

SOLUTION:
---------
In UNION, reset the session-level counter used to accumulate
the count of examined rows after its value has been saved.
2014-05-08 14:49:53 +05:30
Chaithra Gopalareddy
8ade414b28 Bug#17909656 - WRONG RESULTS FOR A SIMPLE QUERY WITH GROUP BY
Problem:
If there is a predicate on a column referenced by MIN/MAX and
that predicate is not present in all the disjunctions on
keyparts earlier in the compound index, Loose Index Scan will
not return the correct result.

Analysis:
When loose index scan is chosen, the range optimizer
currently groups the predicates that contain group keyparts
and those that contain min/max keyparts separately. It
therefore applies all the conditions on the group keyparts
first to the fetched row. Then, in the call to next_max, it
processes the conditions that have the min/max keypart.

For example, in the following query:
Select f1, max(f2) from t1 where (f1 = 10 and f2 = 13) or
(f1 = 3) group by f1;
the condition (f2 = 13) would be applied even to rows that
satisfy (f1 = 3), thereby giving wrong results.

Solution:
Do not choose loose index scan for such cases. A new rule
WA2 is introduced to take care of this.

WA2: "If there are predicates on C, these predicates must
be in conjunction with all predicates on all earlier keyparts
in I."

To do this, the fix reuses the function get_constant_key_infix().
Since this function would fail for all multi-range conditions,
it is rewritten to recognize when the sub-conditions are
equivalent across the disjuncts; in that case it now succeeds.
To achieve this, a new helper function called all_same() is
introduced.

The fix also moves the test of NGA3 up to its formerly only
caller, get_constant_key_infix().
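For illustration, a hypothetical sketch of what an all_same()-style
helper can look like: it checks that every element of a sequence
compares equal to the first one (the real helper works on the
optimizer's range structures in opt_range.cc).

    #include <vector>

    // True if every element equals the first one (trivially true when empty).
    template <typename T>
    bool all_same(const std::vector<T> &values) {
      if (values.empty())
        return true;
      for (const T &v : values)
        if (!(v == values.front()))
          return false;
      return true;
    }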


mysql-test/r/group_min_max_innodb.result:
  Added test result change for Bug#17909656
mysql-test/t/group_min_max_innodb.test:
  Added test cases for Bug#17909656
sql/opt_range.cc:
  Introduced Rule WA2 because of Bug#17909656
2014-05-07 14:59:23 +05:30
Chaithra Gopalareddy
5fa8e768ca Bug#17909656 - WRONG RESULTS FOR A SIMPLE QUERY WITH GROUP BY
Problem:
If there is a predicate on a column referenced by MIN/MAX and
that predicate is not present in all the disjunctions on
keyparts earlier in the compound index, Loose Index Scan will
not return the correct result.

Analysis:
When loose index scan is chosen, the range optimizer
currently groups the predicates that contain group keyparts
and those that contain min/max keyparts separately. It
therefore applies all the conditions on the group keyparts
first to the fetched row. Then, in the call to next_max, it
processes the conditions that have the min/max keypart.

For example, in the following query:
Select f1, max(f2) from t1 where (f1 = 10 and f2 = 13) or
(f1 = 3) group by f1;
the condition (f2 = 13) would be applied even to rows that
satisfy (f1 = 3), thereby giving wrong results.

Solution:
Do not choose loose index scan for such cases. A new rule
WA2 is introduced to take care of this.

WA2: "If there are predicates on C, these predicates must
be in conjunction with all predicates on all earlier keyparts
in I."

To do this, the fix reuses the function get_constant_key_infix().
Since this function would fail for all multi-range conditions,
it is rewritten to recognize when the sub-conditions are
equivalent across the disjuncts; in that case it now succeeds.
To achieve this, a new helper function called all_same() is
introduced.

The fix also moves the test of NGA3 up to its formerly only
caller, get_constant_key_infix().
2014-05-07 14:59:23 +05:30
Mattias Jonsson
548db49210 Bug#17909699: WRONG RESULTS WITH PARTITION BY LIST COLUMNS()
A typo led to the last list values (partition) not being included.

Also improved pruning to skip the last partition if it is not used.

rb#4762 approved by Aditya and Marko.
2014-05-06 11:05:37 +02:00
Mattias Jonsson
b822ebf60c Bug#17909699: WRONG RESULTS WITH PARTITION BY LIST COLUMNS()
A typo led to the last list values (partition) not being included.

Also improved pruning to skip the last partition if it is not used.

rb#4762 approved by Aditya and Marko.
2014-05-06 11:05:37 +02:00
Michael Widenius
a55c159424 MDEV-6245 Certain compressed tables with myisampack are corrupted by "CHECK TABLE"
- Fixed a bug where we were using the wrong checksum algorithm for VARCHAR columns with fixed-length rows.
- Ensure in myisampack that HA_OPTION_NULL_FIELDS is set for tables with null fields.

mysql-test/r/myisampack.result:
  Updated results
mysql-test/t/myisampack.test:
  Added more tests
storage/myisam/mi_open.c:
  Use correct checksum algorithm when we have VARCHAR fields with fixed length records
storage/myisam/myisampack.c:
  Ensure HA_OPTION_NULL_FIELDS is set for tables with null fields.
  (This was not set by default for uncompressed tables without checksums, to keep MyISAM tables compatible with MySQL.)
2014-05-17 10:42:59 +03:00
unknown
45a91d8cbb MDEV-6193: Problems with multi-table updates that JOIN against read-only table
All underlying tables should share the same lock type.
2014-05-08 22:56:36 +03:00
unknown
3f80740aa8 merge 5.5->5.3 2014-05-07 09:28:12 +03:00
Sergei Golubchik
a313864814 MDEV-6056 [PATCH] mysqldump writes usage to stdout even when not explicitly requested 2014-05-05 14:24:25 +02:00
unknown
285160dee2 MDEV-5981: name resolution issues with views and multi-update in ps-protocol
It is a triple bug with one test suite:
1. Incorrect outer table detection.
2. Incorrect leaf table processing for multi-update (it should be full, as for usual updates and inserts).
3. ON condition fix_fields() should be called for all tables of the query.
2014-05-01 17:19:17 +03:00
Alexander Nozdrin
d14f191e6b Patch for Bug#18511348 (DDL_I18N_UTF8 AND DDL_I18N_KOI8R
ARE PERMANENTLY SKIPPED IN 5.5/5.6).

The problem was that some result files were not updated,
so the tests were skipped.

The fix is to record updated result files.
2014-04-30 20:48:29 +04:00
Alexander Nozdrin
b5b5758d9a Patch for Bug#18511348 (DDL_I18N_UTF8 AND DDL_I18N_KOI8R
ARE PERMANENTLY SKIPPED IN 5.5/5.6).

The problem was that some result files were not updated,
so the tests were skipped.

The fix is to record updated result files.
2014-04-30 20:48:29 +04:00
Tor Didriksen
c76e29c884 Backport from trunk:
Bug#18396916 MAIN.OUTFILE_LOADDATA TEST FAILS ON ARM, AARCH64, PPC/PPC64
  
  The recorded results for the failing tests were wrong.
  They were introduced by the patch for
  Bug#30946 mysqldump silently ignores --default-character-set when used with --tab
  
  Correct results were returned for platforms where 'char' is implemented as unsigned.
  This was reported as 
  Bug#46895 Test "outfile_loaddata" fails (reproducible)
  Bug#11755168 46895: TEST "OUTFILE_LOADDATA" FAILS (REPRODUCIBLE)
  The patch for that bug fixed only parts of the problem,
  leaving the incorrect results in the .result file.
  
  Solution: use 'uchar' for field_terminator and line_terminator on all platforms.
  Also: remove some unnecessary casts, leaving the ones we actually need.
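For illustration, a small self-contained example of why this matters
(not the server code): whether plain 'char' is signed is
implementation-defined, so a byte such as 0xFE compares differently
on x86 (signed char) and on ARM/AARCH64/PPC (unsigned char), whereas
'unsigned char' behaves the same everywhere.

    #include <cstdio>

    int main() {
      char          c = '\xFE';   // sign of plain char is implementation-defined
      unsigned char u = '\xFE';   // always 254

      // Prints -2 where char is signed (x86), 254 where it is unsigned (ARM/PPC).
      printf("plain char as int:    %d\n", (int) c);
      // Prints 254 everywhere.
      printf("unsigned char as int: %d\n", (int) u);
      return 0;
    }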
2014-04-23 17:01:35 +02:00
Tor Didriksen
c006e3f27a Backport from trunk:
Bug#18396916 MAIN.OUTFILE_LOADDATA TEST FAILS ON ARM, AARCH64, PPC/PPC64
  
  The recorded results for the failing tests were wrong.
  They were introduced by the patch for
  Bug#30946 mysqldump silently ignores --default-character-set when used with --tab
  
  Correct results were returned for platforms where 'char' is implemented as unsigned.
  This was reported as 
  Bug#46895 Test "outfile_loaddata" fails (reproducible)
  Bug#11755168 46895: TEST "OUTFILE_LOADDATA" FAILS (REPRODUCIBLE)
  The patch for that bug fixed only parts of the problem,
  leaving the incorrect results in the .result file.
  
  Solution: use 'uchar' for field_terminator and line_terminator on all platforms.
  Also: remove some unnecessary casts, leaving the ones we actually need.
2014-04-23 17:01:35 +02:00