The problem was that ha_partition::records was not implemented, so the
default handler::records was used, which is not correct if the engine
does not support HA_STATS_RECORDS_IS_EXACT.
The solution is to implement ha_partition::records as a wrapper around
the underlying partitions' records.
The rows column in EXPLAIN PARTITIONS will now include the total
number of records in the partitioned table.
(recommit after removing out-commented code)
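As a rough illustration of the approach (the types below are toy
stand-ins, not the server's; the real code is ha_partition::records()
in sql/ha_partition.cc), the wrapper simply sums the exact counts
reported by each underlying partition:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using ha_rows = uint64_t;

    // Toy stand-in for a per-partition storage engine handler.
    struct PartitionHandler
    {
      ha_rows rows;
      ha_rows records() const { return rows; }  // exact per-partition count
    };

    // Sum the exact counts instead of trusting the default
    // handler::records(), which is only valid with
    // HA_STATS_RECORDS_IS_EXACT.
    ha_rows partition_records(const std::vector<PartitionHandler> &parts)
    {
      ha_rows total = 0;
      for (const PartitionHandler &p : parts)
        total += p.records();
      return total;
    }

    int main()
    {
      std::vector<PartitionHandler> parts{{10}, {20}, {12}};
      std::printf("%llu\n", (unsigned long long) partition_records(parts));
      return 0;
    }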
Before this fix, the lexer and parser would treat the ';' character as a
different token (either ';' or END_OF_INPUT), based on convoluted logic,
which failed in simple cases where a stored procedure is implemented as a
single statement and used in a multi-query.
With this fix:
- the character ';' is always parsed as a ';' token in the lexer,
- parsing multi queries is implemented in the parser, in the 'query:' rules,
- the value of thd->client_capabilities, which is the capabilities
negotiated between the client and the server during bootstrap,
is immutable and not arbitrarily modified during parsing (which was the
root cause of the bug).
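A simplified sketch of the idea (not the server's grammar; just an
illustration of always treating ';' as an ordinary separator token and
letting the layer above decide whether another query follows):

    #include <iostream>
    #include <string>
    #include <vector>

    // Naive splitter: ';' is always the same token; deciding that a
    // statement ended is the caller's job, not the lexer's (quoting
    // and comments are ignored here for brevity).
    std::vector<std::string> split_multi_query(const std::string &input)
    {
      std::vector<std::string> stmts;
      std::string current;
      for (char c : input)
      {
        current += c;
        if (c == ';')
        {
          stmts.push_back(current);
          current.clear();
        }
      }
      if (!current.empty())
        stmts.push_back(current);   // last statement may lack a ';'
      return stmts;
    }

    int main()
    {
      for (const std::string &s : split_multi_query("SELECT 1; SELECT 2"))
        std::cout << s << '\n';
      return 0;
    }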
Problem: the test syncs the slave using a 'wait_condition' that waits
until table t1 has 5000 rows. However, there is no guarantee that t1
has reached the slave before the wait_condition runs.
Fix: call sync_slave_with_master just after t1 is created.
On certain kinds of errors (e.g., out of stack), a call to
Item_func_set_user_var::fix_fields() might fail. Since the return value
of this call was not checked inside User_var_log_event::exec_event(),
continuing execution after this failure would cause a crash inside
Item_func_set_user_var::update_hash().
The bug is fixed by aborting execution of the event with an error if
fix_fields() fails, since it is not possible to continue execution anyway.
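The shape of the fix, as a self-contained sketch (the class below is a
hypothetical stand-in for the real one in sql/log_event.cc):

    #include <cstdio>

    struct Item_func_set_user_var_stub
    {
      bool fix_fields() { return true; }   // simulate failure (out of stack)
      void update_hash() {}                // unsafe if fix_fields() failed
    };

    // Abort the event with an error instead of falling through to
    // update_hash() when fix_fields() fails.
    int exec_event(Item_func_set_user_var_stub &e)
    {
      if (e.fix_fields())
      {
        std::fprintf(stderr, "error: fix_fields() failed, aborting event\n");
        return 1;
      }
      e.update_hash();
      return 0;
    }

    int main()
    {
      Item_func_set_user_var_stub e;
      return exec_event(e);
    }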
Problem: rpl_ndb_transaction fails because it assumes nothing
is written to the binlog at a certain point. However, ndb may
binlog updates to ndb system tables at a nondeterministic point
in time after an ndb table update has been committed.
Fix: break the test into two. rpl_ndb_transaction still does
the ndb updates needed by the first half of the test. The new
test case rpl_bug26395 includes the part that assumes nothing
more will be written to the binlog.
When there is an error executing an EXISTS predicate, it returns NULL
as its string or decimal value but does not set the NULL value flag.
Fixed by returning 0 (as a decimal or a string) on error executing the
subquery. Note that we can't return NULL, as EXISTS is not supposed to
return NULL.
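A toy model of the fixed behavior (illustrative names only; the real
change is in the EXISTS item's string/decimal value methods):

    #include <cstdio>
    #include <string>

    struct ExistsStub
    {
      bool null_value = false;   // EXISTS must never become NULL

      std::string val_str(bool subquery_error, bool row_found)
      {
        if (subquery_error)
          return "0";            // on error: a real 0, null flag untouched
        return row_found ? "1" : "0";
      }
    };

    int main()
    {
      ExistsStub e;
      std::printf("%s\n", e.val_str(true, false).c_str());   // prints 0
      return 0;
    }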
Problem 1: main.loaddata tried to trigger an error caused by
reading files outside the vardir, by reading itself. However,
if loaddata.test is not world-readable (e.g., umask=0077),
then another error is triggered.
Fix 1: allow the other error too.
Problem 2: rpl_slave_skip and rpl_innodb_mixed_dml tried to
copy a file from mysql-test/suite/rpl/data to mysql-test/var
and then read it. That failed too if umask=0077, since the
file would not become world-readable.
Fix 2: move the files from mysql-test/suite/rpl/data to
mysql-test/std_data and update tests accordingly. Remove
the directory mysql-test/suite/rpl/data.
Bug#12093 "SP not found on second PS execution if another thread
drops other SP in between" and
Bug#21294 "executing a prepared statement that executes a stored
function which was recreat"
Stored functions are resolved only at prepared statement prepare time.
If the stored functions cache is flushed between prepare and
execute, execution fails.
The fix is to detect that the cache has been flushed and automatically
re-prepare the prepared statement afterwards.
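Schematically, the detection works like this (all names below are
hypothetical stand-ins; the real fix tracks the stored-routine cache
version inside the server):

    #include <cstdio>

    static unsigned long sp_cache_version = 0;  // bumped on cache flush

    struct PreparedStatement
    {
      unsigned long prepared_version = 0;

      void prepare()
      {
        prepared_version = sp_cache_version;    // remember version at prepare
        std::puts("prepared");
      }

      void execute()
      {
        if (prepared_version != sp_cache_version)
        {
          std::puts("cache flushed since prepare: re-preparing");
          prepare();                            // automatic re-prepare
        }
        std::puts("executed");
      }
    };

    int main()
    {
      PreparedStatement stmt;
      stmt.prepare();
      stmt.execute();        // normal execution
      ++sp_cache_version;    // someone flushed the stored functions cache
      stmt.execute();        // detected and re-prepared instead of failing
      return 0;
    }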
This bug has been fixed in two slightly different ways in
6.0-rpl and {5.1,6.0}-bugteam. To avoid future merge
problems, I'm now copying the 6.0-rpl fix to 5.1-bugteam.
The previous fix for the bug was incomplete. The test failed
because t2 did not exist on the slave (since the slave was
lagging) when the wait_condition was executed. Fixed by
inserting sync_slave_with_master just after t2 was created.
Test was failing due to the addition of a '\x05' character in result sets.
Latest builds of the server have shown this problem to have disappeared.
Removing code within the test that disables the test on Mac OS X.
Recommit due to tree error on earlier, approved patch.
Bug#36787 Test funcs_1.charset_collation_1 failing
Details:
1. Skip charset_collation_1 if the charset "ucs2_bin" is
   missing (a property which distinguishes "vanilla" builds
   from the others).
2. Let builds with version_comment LIKE "%Advanced%"
   (such builds were found for 5.1) execute charset_collation_3.
3. Update the comments in charset_collation.inc so that they
   reflect the current findings.
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used, which
denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (and have
been extended to four bytes per character in 6.0), an extra
two bits have been encoded in the field metadata word for
fields of type Field_string (i.e., CHAR fields).
Since the metadata word is already full, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise xor with the extra two bits. Since the upper 4 bits
of the real type are always 1111 for Field_string, this
means that for fields of length <256 the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
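The encoding can be illustrated with a small self-contained example
(MYSQL_TYPE_STRING is 254, i.e. its upper nibble is 1111; the constant
and helper names below are for illustration, not the server's code):

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    const uint8_t TYPE_STRING = 254;   // 0xFE: upper four bits are 1111

    // Pack a CHAR field's byte length (up to 10 bits) into the two-byte
    // metadata word: the low 8 bits go in the second byte, the two extra
    // bits are xor-ed into the upper nibble of the type byte.
    void encode(uint16_t field_length, uint8_t metadata[2])
    {
      assert(field_length < 1024);
      metadata[0] = TYPE_STRING ^ ((field_length & 0x300) >> 4);
      metadata[1] = field_length & 0xFF;
    }

    uint16_t decode_length(const uint8_t metadata[2])
    {
      return (((metadata[0] ^ TYPE_STRING) & 0x30) << 4) | metadata[1];
    }

    int main()
    {
      uint8_t m[2];
      encode(255, m);   // m[0] == 0xFE: identical to pre-5.1.26 encoding
      encode(756, m);   // m[0] == 0xDE: an old slave sees an unknown type
      std::printf("decoded length: %u\n", (unsigned) decode_length(m));
      return 0;
    }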
Problem: rpl_switch_stm_row_mixed did not wait until the row events
generated by INSERT DELAYED were written to the master binlog before it
synchronized the slave with the master. This caused sporadic errors
where these rows were missing on the slave.
Fix: wait until all rows appear on the slave.
This is a backport, applying the same change to 5.1-bugteam as was
previously applied to 6.0-rpl.
A query on an InnoDB table, where all selected columns
belong to the same unique index key, returns
incorrect results.
The server executes some queries via QUICK_GROUP_MIN_MAX_SELECT
(the MIN/MAX optimization for queries with a GROUP BY or DISTINCT
clause). That optimization implies a loose index scan, so all
grouping is done by the QUICK_GROUP_MIN_MAX_SELECT::get_next
method.
The server does not set the precomputed_group_by flag for some
QUICK_GROUP_MIN_MAX_SELECT queries, so grouping is duplicated by a
call to the end_send_group function.
Fix: when the test_if_skip_sort_order function selects a loose
index scan as the best way to satisfy an ORDER BY/GROUP BY type
of query, the precomputed_group_by flag is now set so that the
end_send/end_write functions are used instead of the
end_send_group/end_write_group functions.