SELECT ... WHERE (col=col AND col=col) OR ... (false expression)
Problem: the optimizer did not account for the special case in which all
predicates at the AND level of a WHERE clause are eliminated.
That could lead to wrong results.
Fix: replace (a=a AND a=a ...) with TRUE if all the predicates have been
eliminated, as in the sketch below.
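A minimal SQL sketch of the affected pattern (table and column names are
hypothetical):

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  -- With a declared NOT NULL, each a=a predicate is trivially true and is
  -- eliminated; the emptied AND group must collapse to TRUE so that the
  -- OR branch is still evaluated correctly.
  SELECT * FROM t1
  WHERE (t1.a = t1.a AND t1.a = t1.a)
     OR 1 = 2;
  -- Expected: both rows are returned.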
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished the same actions via SQL / use of
the server. The test runs as before on *nix systems and now also runs on
Windows without Cygwin.
Automake generates some recursive targets that reference DIST_SUBDIRS.
It is critical, then, for such subdirs to exist even if they won't be
built as part of SUBDIRS.
During a VPATH build, it is the configure script that creates the subdirs
(when it processes the AC_CONFIG_FILES() entry for each subdir's Makefile).
If configure doesn't create a subdir's Makefile, the recursive make will
fail when it is unable to cd into that subdir.
This isn't a problem in non-VPATH builds, because the subdirs are all present
in the source tarball. So the problem only shows up during 'make distcheck',
which does a VPATH build.
The fix is to look for any plugin subdirectories when configure is being
created by autoconf. These are the dynamic subdirectories that need to be
handled specially. It's enough to tell autoconf to generate a Makefile for
any Makefile.am found in the plugin directory: all plugin subdirectories
using automake (i.e., listed in the plugin's DIST_SUBDIRS) will have a
Makefile.am.
This is done by calling 'find'. This means that 'find' must be in the PATH on
the host that is running autoconf. 'find' is NOT needed when calling
configure, so it is not an additional dependency for the user.
Finally, ha_ndbcluster.m4 had called AC_CONFIG_FILES() on all those subdir
Makefiles, but only when the plugin was actually being built, so it didn't
work when NDB was not being built. All of those Makefiles have to be
removed from this static list, since the plugin machinery now adds them
automatically; autoconf fails if a file is duplicated in
AC_CONFIG_FILES().
Bug #43203 Overflow from auto incrementing causes server segv
Detailed revision comments:
r4325 | sunny | 2009-03-02 02:28:52 +0200 (Mon, 02 Mar 2009) | 10 lines
branches/5.1: Bug#43203: Overflow from auto incrementing causes server segv
It was not a SIGSEGV but an assertion failure. The assertion checked the
invariant that the *first_value passed in by MySQL doesn't contain a value
greater than the maximum value for that type. The assertion has been
changed to a runtime check, and if the value is greater than the maximum
we report a generic AUTOINC failure.
rb://93
Approved by Heikki
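A sketch of the failure scenario in SQL (table name and values
illustrative):

  CREATE TABLE t (c TINYINT NOT NULL AUTO_INCREMENT PRIMARY KEY)
    ENGINE=InnoDB;
  INSERT INTO t VALUES (127);   -- maximum value for a signed TINYINT
  -- The next generated value would exceed the column's maximum; with the
  -- fix this reports a generic autoinc failure instead of tripping the
  -- assertion on *first_value.
  INSERT INTO t VALUES (NULL);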
Bug #42714 AUTO_INCREMENT errors in 5.1.31
Detailed revision comments:
r4287 | sunny | 2009-02-25 05:32:01 +0200 (Wed, 25 Feb 2009) | 10 lines
branches/5.1: Fix Bug#42714 AUTO_INCREMENT errors in 5.1.31. There are two
changes to the autoinc handling.
1. To fix the immediate problem from the bug report, we must ensure that the
value written to the table is always less than the max value stored in
dict_table_t.
2. The second, related change: according to the MySQL documentation, when
the offset is greater than the increment, we should ignore the offset (see
the sketch below).
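A sketch of the second change (session variable values illustrative):

  SET SESSION auto_increment_increment = 5;
  SET SESSION auto_increment_offset = 7;  -- greater than the increment
  CREATE TABLE t (c INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
  -- Per the documented behavior, the offset is ignored here and values
  -- are generated from the increment alone.
  INSERT INTO t VALUES (NULL), (NULL), (NULL);
  SELECT c FROM t;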
Bug #42400 InnoDB autoinc code can't handle floating-point columns
Detailed revision comments:
r4065 | sunny | 2009-01-29 16:01:36 +0200 (Thu, 29 Jan 2009) | 8 lines
branches/5.1: In the last round of AUTOINC cleanup we assumed that AUTOINC
is only defined for integer columns. This caused an assertion failure when
we checked for the maximum value of a column type. We now calculate the
max value for floating-point autoinc columns too.
Fix Bug#42400 - InnoDB autoinc code can't handle floating-point columns
rb://84 and Mantis issue://162
r4111 | sunny | 2009-02-03 22:06:52 +0200 (Tue, 03 Feb 2009) | 2 lines
branches/5.1: Add the ULL suffix; otherwise there is an overflow.
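A sketch of the previously asserting case (table name hypothetical);
AUTO_INCREMENT on floating-point columns is accepted by MySQL:

  CREATE TABLE t (c FLOAT NOT NULL AUTO_INCREMENT PRIMARY KEY)
    ENGINE=InnoDB;
  -- Generating a value requires knowing the column type's maximum; the
  -- maximum is now calculated for FLOAT/DOUBLE columns too instead of
  -- asserting.
  INSERT INTO t VALUES (NULL), (NULL);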
Bug #42279 Race condition in btr_search_drop_page_hash_when_freed()
Detailed revision comments:
r4032 | marko | 2009-01-23 15:43:51 +0200 (Fri, 23 Jan 2009) | 10 lines
branches/5.1: Merge r4031 from branches/5.0:
btr_search_drop_page_hash_when_freed(): Check if buf_page_get_gen()
returns NULL. The page may have been evicted from the buffer pool
between buf_page_peek_if_search_hashed() and buf_page_get_gen(),
because the buffer pool mutex will be released between these two calls.
(Bug #42279)
rb://82 approved by Heikki Tuuri
Since there is more than one duplicate value in the table, it is not
deterministic which value will be reported as causing a problem when the
unique index is added. Replace the reported value with '' so that it
doesn't affect the results, as in the sketch below.
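A sketch in mysqltest syntax of how the reported value can be masked
(table and column names hypothetical):

  --replace_regex /Duplicate entry '[^']+'/Duplicate entry ''/
  --error ER_DUP_ENTRY
  ALTER TABLE t1 ADD UNIQUE INDEX (c1);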
The problem is that creating an event could fail if the value of the
server_id variable didn't fit in the originator column of the event system
table. The cause is two-fold: it was possible to set server_id to a value
outside the documented range (from 0 to 2^32-1), and the originator column
of the event table didn't have enough room for values in this range.
The log tables (general_log and slow_log) also lack a proper column type
to store server_id, and a large server_id value could prevent queries from
being logged.
The solution is to ensure that all system tables that store the server_id
value have a proper column type (int unsigned) and that the variable can't
be set to a value outside that range.
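A sketch of the intended behavior after the fix (the DDL is illustrative
of the column change, not the exact upgrade script):

  SET GLOBAL server_id = 4294967295;  -- accepted: 2^32-1, top of the range
  SET GLOBAL server_id = 4294967296;  -- out of range: adjusted with a
                                      -- warning rather than stored as-is
  -- System tables recording server_id get a wide-enough column, e.g.:
  ALTER TABLE mysql.event MODIFY originator INT UNSIGNED NOT NULL;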
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there is an error processing the statement)
print() can be called with no corresponding fix_fields() call.
Fixed by adding a check that the Item is fixed before using the arguments
copy.
The method used to purge binary log files produces different results on
some platforms. The reason is that the purge time is calculated from the
table's last-modified time, and that cannot guarantee that
master-bin.000002 is purged on all platforms (e.g. Windows).
Use a new approach that sets the purge time to 1 second after the last
modification time of master-bin.000002; this ensures the file is always
deleted on any platform.
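An illustrative use of the time-based purge (the datetime stands in for
"last modification time of master-bin.000002 plus 1 second"):

  FLUSH LOGS;  -- rotate so master-bin.000002 is no longer the active log
  PURGE BINARY LOGS BEFORE '2009-03-01 12:00:01';
  SHOW BINARY LOGS;  -- master-bin.000002 should no longer be listed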
--ignore-table option
mysqldump would correctly omit the temporary placeholder tables for
ignored views, but would incorrectly still emit all CREATE VIEW
statements.
Backport a fix from 5.1, where we capture the names of the views we want
to emit in one pass (the placeholder tables) and, in the pass where we
actually emit the views, skip any view that wasn't in that list.
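Schematically, what the dump emits for a view in the two passes described
above (object names hypothetical):

  -- Pass 1: a placeholder table stands in for the view; for an ignored
  -- view the placeholder is omitted and its name is not captured.
  CREATE TABLE v1 (c1 INT);
  -- Pass 2: the placeholder is replaced by the real view; with the fix, a
  -- view whose name was not captured in pass 1 is skipped here as well.
  DROP TABLE IF EXISTS v1;
  CREATE VIEW v1 AS SELECT c1 FROM t1;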
Took the Xfree implementation (based on the same rewrite as the NDB one)
and used it in place of the current implementation.
Added a macro to make the calls to MD5 more streamlined.