Change the default for innodb_use_fallocate to FALSE, due to bugs in older Linux kernels: posix_fallocate() does not always guarantee that the resulting file size matches the one specified.
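A minimal standalone sketch of the failure mode described above (not InnoDB code; the file name and size are made up): extend a file with posix_fallocate(), then check with fstat() whether the file really has the requested size. On the affected kernels the two can disagree.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
      int fd= open("probe.dat", O_CREAT | O_RDWR, 0644);
      off_t want= 4 * 1024 * 1024;                      /* ask for 4MB */
      int err= posix_fallocate(fd, 0, want);            /* returns an errno value */
      if (err != 0)
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
      struct stat st;
      fstat(fd, &st);
      printf("requested %lld, actual %lld\n",           /* should match; on buggy */
             (long long) want, (long long) st.st_size); /* kernels it may not     */
      close(fd);
      return 0;
    }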
The official Debian Wheezy MySQL packages have versions like 5.5.30+dfsg-xxx.
Such a version compares greater than 5.5.30-yyy, so apt prefers it.
Use 5.5.30+maria-yyy instead, which compares greater still and can be
pulled in automatically by apt.
Also included are a couple of fixes for test failures in buildbot.
This ensures that, by default, LOAD DATA INFILE will no longer generate the error:
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage..."
mysql-test/suite/sys_vars/r/max_binlog_cache_size_basic.result:
Updated test case
mysql-test/suite/sys_vars/r/max_binlog_stmt_cache_size_basic.result:
Updated test case
sql/sys_vars.cc:
Increase default value of max_binlog_cache_size and max_binlog_stmt_cache_size to ulonglong_max.
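The changed definitions live in sql/sys_vars.cc. As a rough sketch of the pattern such definitions follow (the help text and exact macro arguments here are approximations, not the verbatim source):

    static Sys_var_ulonglong Sys_max_binlog_cache_size(
           "max_binlog_cache_size",
           "Sets the total size of the transactional cache",  /* text approximate */
           GLOBAL_VAR(max_binlog_cache_size), CMD_LINE(REQUIRED_ARG),
           VALID_RANGE(IO_SIZE, ULONGLONG_MAX),
           DEFAULT(ULONGLONG_MAX),          /* raised from the old, smaller default */
           BLOCK_SIZE(IO_SIZE));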
MySQL bug http://bugs.mysql.com/bug.php?id=61713 was fixed in 5.5.
The fix is to remove the check for multiple entries returned by getaddrinfo(), and to use the first entry that works, i.e. for which a socket can be created.
Unlike Oracle/MySQL's fix, this one is kept minimal:
- we do not prioritize IPv4 over IPv6 or the other way around; we just rely on the operating system to sort the getaddrinfo() entries in a sensible order. There is an RFC (RFC 3484) that defines what a sensible order for getaddrinfo() entries is, and OS-specific tweaks are also possible, like /etc/gai.conf on Linux.
- also, we do not force the "0.0.0.0" address if bind-address is not given; this would be a change in the behavior of 5.5, at least on Windows, where passing NULL to getaddrinfo() returns the IPv6 wildcard address.
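A standalone sketch of that approach (not the actual server code; the port number is made up): walk the getaddrinfo() entries in the order the OS returned them, and take the first one for which a socket can be created.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>

    int main() {
      struct addrinfo hints, *ai, *res= NULL;
      memset(&hints, 0, sizeof(hints));
      hints.ai_family= AF_UNSPEC;             /* no IPv4/IPv6 preference */
      hints.ai_socktype= SOCK_STREAM;
      hints.ai_flags= AI_PASSIVE;             /* NULL node: wildcard address */
      if (getaddrinfo(NULL, "3306", &hints, &res) != 0)
        return 1;
      int fd= -1;
      for (ai= res; ai; ai= ai->ai_next) {
        fd= socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd >= 0)
          break;                              /* first entry that works wins */
      }
      printf("usable entry: %s\n", fd >= 0 ? "found" : "none");
      if (fd >= 0)
        close(fd);
      freeaddrinfo(res);
      return 0;
    }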
Fulltext search used to be initialized for all MATCH ... AGAINST items
at the end of JOIN::optimize(). But since 5.3, derived tables are
initialized lazily on first use, very late, in sub_select().
Skip Item_func_match::init_search initialization if the corresponding
table isn't open yet; repeat fulltext initialization for all
not-yet-initialized MATCH ... AGAINST items after creating derived tables.
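A standalone illustration of this two-phase initialization pattern, with made-up names rather than the actual Item_func_match/TABLE classes:

    #include <stdio.h>
    #include <vector>

    struct Table { bool open; };

    struct MatchItem {
      Table *table;
      bool initialized;
      void init_search() {
        if (!table->open)       /* corresponding table not open yet: skip */
          return;
        initialized= true;      /* real code would set up the fulltext scan */
      }
    };

    static void init_ft_funcs(std::vector<MatchItem> &items) {
      for (size_t i= 0; i < items.size(); i++)
        if (!items[i].initialized)    /* repeat only for the skipped items */
          items[i].init_search();
    }

    int main() {
      Table derived= { false };             /* derived table, created lazily */
      MatchItem item= { &derived, false };
      std::vector<MatchItem> items(1, item);

      init_ft_funcs(items);   /* end of JOIN::optimize(): item is skipped */
      derived.open= true;     /* derived table materialized in sub_select() */
      init_ft_funcs(items);   /* repeated initialization picks it up */
      printf("initialized: %d\n", (int) items[0].initialized);  /* prints 1 */
      return 0;
    }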
Previously, get_mm_tree() skipped the evaluation of this constant
and incorrectly proceeded. The correct behavior is to return a
NULL subtree, in line with the IF branch being fixed: when it
evaluates the constant, it returns a value and does not continue
further.
Update 5.1 to replicate from 10.0 and
to show the server version (as of 10.0) correctly.
sql-common/client.c:
MDEV-4088
sql/slave.cc:
use the version number, not just the first character of the version string
(we want 10 > 4 not "10" < "4").
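A standalone demonstration of that comparison problem (not slave.cc itself; the version string is made up):

    #include <stdio.h>
    #include <stdlib.h>

    int main() {
      const char *master_version= "10.0.1-MariaDB";   /* illustrative string */

      /* Broken: comparing only the first character, '1' < '4',
         so "10.x" looks older than "4.x". */
      printf("char compare:    10.x %s 4.x\n",
             master_version[0] >= '4' ? ">=" : "<");

      /* Fixed: parse the numeric major version, 10 > 4. */
      unsigned long major= strtoul(master_version, NULL, 10);
      printf("numeric compare: 10.x %s 4.x\n", major >= 4 ? ">=" : "<");
      return 0;
    }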
Initialize join->top_join_tab_count to be in sync with the join->join_tab= stat assignment;
otherwise a query can be killed in between, and the join_tabs won't be deleted
(JOIN::cleanup() won't call JOIN_TAB::cleanup()).
- Let index_merge allocate table handlers on the quick select's MEM_ROOT,
not on the statement's MEM_ROOT.
This is crucial for big "range checked for each record" queries, where
an index_merge can be created and deleted many times during query execution.
We should not make O(#rows) allocations on the statement's MEM_ROOT.
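A standalone sketch of the lifetime issue, with MEM_ROOT reduced to a grow-only arena and all names illustrative:

    #include <stdlib.h>
    #include <vector>

    struct Arena {            /* grows until destroyed, like a MEM_ROOT */
      std::vector<void*> blocks;
      void *alloc(size_t n) {
        void *p= malloc(n);
        blocks.push_back(p);
        return p;
      }
      ~Arena() {
        for (size_t i= 0; i < blocks.size(); i++)
          free(blocks[i]);
      }
    };

    int main() {
      Arena stmt_root;                   /* lives for the whole statement */
      for (long row= 0; row < 1000000; row++) {
        Arena quick_root;                /* lives for one quick select only */
        quick_root.alloc(64);            /* freed at end of iteration: fine */
        /* stmt_root.alloc(64) here would accumulate O(#rows) blocks that
           are released only when the whole statement finishes */
      }
      return 0;
    }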
Analysis:
The reason for the inefficient plan was that Item_subselect::is_expensive()
didn't detect the special case when a subquery was optimized but had no
join plan, because it either has no tables, or its tables have been optimized
away, or the optimizer detected that the result set is empty.
Solution:
Identify the special cases above in Item_subselect::is_expensive(),
and consider such degenerate subqueries inexpensive.
This bug was introduced by the patch for WL#3220.
If the memory allocated for the tree to store unique elements
to be counted is not big enough to include all of them then
an external file is used to store the elements.
The unique elements are guaranteed not to be nulls. So, when
reading them from the file, we don't have to care about the null
flags of the read values. However, we should remove the flag
at the very beginning of the process. If we don't do it, and
the last value written into the record buffer for the field
whose distinct values need to be counted happens to be null,
then all values read from the file are considered to be nulls
and are not counted.
The fix does not remove a possible null flag for the read values.
Rather, it just counts the values in the same way it was done
before WL#3220.
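A standalone illustration of the effect described above, with the record layout simplified to a single flag byte (the real code reads field values back from the temporary file):

    #include <stdio.h>

    struct Field {
      unsigned char null_flag;   /* record-buffer flag: non-zero means NULL */
      int value;
    };

    int main() {
      struct Field f= { 1, 0 };       /* last written value happened to be null */
      int from_file[]= { 3, 7, 9 };   /* unique values read back: never null */
      int counted= 0;
      for (int i= 0; i < 3; i++) {
        f.value= from_file[i];   /* value is copied, but the flag stays stale, */
        if (!f.null_flag)        /* so every value still "looks" null          */
          counted++;
      }
      printf("counted %d, expected 3\n", counted);   /* prints 0 */
      return 0;
    }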
In some cases, when using views, the optimizer incorrectly determined
the possible join orders for queries with nested outer and inner joins.
This could lead to invalid execution plans for such queries.