TRUNCATE TABLE did not reset the auto_increment value.
This is an alternative patch that, instead of allowing RECREATE TABLE
on TRUNCATE TABLE, implements reset_auto_increment(), which is called
after delete_all_rows().
Note: this bug was fixed by Mattias Jonsson:
Pushing this patch: http://lists.mysql.com/commits/70370
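A minimal sketch of the alternative approach, using a stand-in type rather
than the real handler class:

  #include <cstdint>

  struct ExampleHandler {             // stand-in for a storage engine handler
    std::uint64_t auto_increment_value;

    int delete_all_rows()
    {
      // engine-specific bulk delete of all rows would go here
      return 0;
    }

    // Called after delete_all_rows() when servicing TRUNCATE TABLE, so the
    // counter restarts instead of continuing from its old value.
    int reset_auto_increment(std::uint64_t value)
    {
      auto_increment_value = value;
      return 0;
    }
  };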
Had attempted to disable this test on Windows only, but the nature of this bug
does not allow for this: the master.opt file is processed before anything in
the actual test. As a result, we must use disabled.def files to ensure
these tests are skipped on the problematic platforms.
Removed the Windows-only code and updated the proper disabled.def files
accordingly.
Some collations were causing IBMDB2I to report
inaccurate key range estimations to the optimizer
for LIKE clauses that select substrings. This can
be seen by running EXPLAIN. This problem primarily
affects multi-byte and Unicode character sets.
This patch involves substantial changes to several
modules. There are a number of problems with the
character set and collation handling. These problems
have been or are being fixed, and a comprehensive
test has been included which should provide much
better coverage than there was before. This test
is enabled only for IBM i 6.1, because that version
has support for the greatest number of collations.
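For background, the optimizer turns the constant prefix of a LIKE pattern
into a key range and asks the storage engine, via records_in_range(), how
many rows fall inside it. The sketch below is illustrative only and is not
IBMDB2I code: it shows the naive single-byte way of building such a range,
the kind of shortcut that breaks under multi-byte and Unicode collations,
where incrementing the last byte can split a character or disagree with the
collation's ordering.

  #include <string>
  #include <utility>

  // 'abc%' => the half-open range ['abc', 'abd'). Incrementing the last
  // byte is only safe for single-byte charsets with byte-ordered collation
  // (and even then overflows when the last byte is 0xFF).
  std::pair<std::string, std::string>
  naive_like_range(const std::string &prefix)
  {
    if (prefix.empty())
      return std::make_pair(prefix, prefix);  // no constant prefix: no range
    std::string upper = prefix;
    upper[upper.size() - 1] = static_cast<char>(upper[upper.size() - 1] + 1);
    return std::make_pair(prefix, upper);
  }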
Slave stops on statements that fail with a deadlock or lock wait timeout.
In STMT and MIXED modes, a statement that changes both non-transactional and
transactional tables must be written to the binary log whenever there are
changes to non-transactional tables. This means that the statement gets into
the binary log even when the changes to the transactional tables fail. In
particular, in the presence of a failure, such a statement is annotated with
the error number and wrapped in a BEGIN/ROLLBACK. On the slave, while applying
the statement, the same failure is expected, and the rollback prevents the
transactional changes from being persisted.
Unfortunately, statements that fail due to concurrency issues (e.g. deadlocks,
timeouts) are logged in the same way, causing the slave to stop, since the
statements are applied sequentially by the SQL thread. To fix this bug, we
automatically ignore concurrency failures on the slave. Specifically, the
following failures are ignored: ER_LOCK_WAIT_TIMEOUT, ER_LOCK_DEADLOCK and
ER_XA_RBDEADLOCK.
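A minimal sketch of the resulting filter; the error numbers are copied from
the server's mysqld_error.h of that era (an assumption, declared locally so
the snippet stands alone):

  // Error codes as defined in mysqld_error.h (assumed values).
  enum {
    ER_LOCK_WAIT_TIMEOUT = 1205,
    ER_LOCK_DEADLOCK     = 1213,
    ER_XA_RBDEADLOCK     = 1614
  };

  // True when a failure merely reflects a concurrency conflict and should
  // not stop the slave's SQL thread.
  static bool is_concurrency_error(int error_code)
  {
    switch (error_code) {
      case ER_LOCK_WAIT_TIMEOUT:
      case ER_LOCK_DEADLOCK:
      case ER_XA_RBDEADLOCK:
        return true;
      default:
        return false;
    }
  }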
The client fails to detect a lost connection to the server.
If the server connection was lost during repeated status commands,
the client would fail to detect this and the client output would be inconsistent.
This patch fixes this issue by making sure that the server is online
before the client attempts to execute the status command.
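One plausible shape for such a check, assuming the standard libmysqlclient C
API (this is a sketch, not the actual client patch): mysql_ping() returns
zero when the connection is alive, and may transparently reconnect when
reconnection is enabled.

  #include <mysql.h>

  // Returns true when the server connection is still usable.
  static bool server_online(MYSQL *conn)
  {
    return mysql_ping(conn) == 0;
  }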
The crash happened because for views which are joins
we have table_list->table == 0, and any method call
through table_list->table leads to a crash.
The fix is to perform the table_list->table->file->extra()
call for all tables belonging to the view.
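A stand-in sketch of the shape of the fix; the real TABLE_LIST, TABLE and
handler structures are far richer, and the next_local chaining here is an
assumption about how the view's underlying tables are linked:

  struct handler {
    virtual int extra(int operation) { (void) operation; return 0; }
    virtual ~handler() {}
  };
  struct TABLE { handler *file; };
  struct TABLE_LIST {
    TABLE *table;            // NULL for the view entry itself
    TABLE_LIST *next_local;  // assumed chain of underlying tables
  };

  // Invoke extra() on every base table, skipping entries with no TABLE.
  static void extra_for_view_tables(TABLE_LIST *tables, int operation)
  {
    for (TABLE_LIST *tbl = tables; tbl; tbl = tbl->next_local)
      if (tbl->table)
        tbl->table->file->extra(operation);
  }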
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was an inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up having a length less
than its precision or decimals, which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
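A minimal sketch of the modified function as described above; treat it as an
outline under assumptions rather than the exact server code:

  #include <algorithm>
  #include <cstdint>

  static const unsigned DECIMAL_MAX_PRECISION = 65;

  // Length of the value's string form: the digits, plus one byte for the
  // decimal point when there is a fractional part, plus one for the sign
  // on signed values. Only Field objects ask for truncation.
  std::uint32_t my_decimal_precision_to_length(unsigned precision,
                                               unsigned scale,
                                               bool unsigned_flag,
                                               bool truncate)
  {
    if (truncate)
      precision = std::min(precision, DECIMAL_MAX_PRECISION);
    return precision + (scale > 0 ? 1 : 0) + (unsigned_flag ? 0 : 1);
  }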
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
Crash when sql_mode contains ONLY_FULL_GROUP_BY and a partitioning
expression is resolved.
The partitioning code needs to issue an Item::fix_fields()
on the partitioning expression in order to prepare
it for being evaluated.
It does this by creating a special table and a table list
for the scope of the partitioning expression.
But when checking ONLY_FULL_GROUP_BY,
Item_field::fix_fields() was relying on cached_table always
being set and was trying to use it to get the
select_lex of the SELECT in which the field's table appears.
But cached_table was not set by the partitioning code
that creates the artificial TABLE_LIST used to resolve the
partitioning expression, and this resulted in a crash.
Fixed by rectifying the following errors:
1. Item_field::fix_fields() : the code that checks for
ONLY_FULL_GROUP_BY relies on having tables with
cacheable_table set. This is mostly true, the only
two exceptions being the partitioning context table
and the trigger context table.
Fixed by taking the current parsing context if no pointer
to the TABLE_LIST instance is present in cached_table.
2. fix_fields_part_func() :
2a. The code that adds the table being created to the
scope for the partitioning expression is mostly a copy
of add_table_to_list() and friends, with one exception:
it was not marking the table as cacheable (something the
normal add_table_to_list() does). This caused the
problem in the ONLY_FULL_GROUP_BY check in
Item_field::fix_fields() to appear.
Fixed by setting the correct members to make the table
cacheable.
The ideal structural fix for this is to use a unified
interface for adding a table to a table list
(add_table_to_list?); noted in a TODO comment.
2b. Item::fix_fields() was called with a NULL destination
pointer. This causes uninitialized memory reads in the
overloaded fix_fields() function (namely
Item_field::fix_fields()), as it expects a non-zero pointer
there. Fixed by passing the source pointer, similarly to how
it's done in JOIN::prepare(); see the sketch after this list.
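A stand-in sketch of 2b (the sketch referenced above); the real THD and Item
types are far richer, and the point is only the calling convention: pass the
address of the expression pointer instead of NULL, because overloads such as
Item_field::fix_fields() dereference it and may even rewrite it.

  struct THD {};

  struct Item {
    virtual bool fix_fields(THD *thd, Item **ref)
    {
      (void) thd;
      // Overloads may replace *ref with another Item, so 'ref' must point
      // at valid storage; a NULL 'ref' is exactly the bug described above.
      return ref == 0;
    }
    virtual ~Item() {}
  };

  static bool fix_fields_part_func(THD *thd, Item *&func_expr)
  {
    // Before the fix: func_expr->fix_fields(thd, (Item **) 0);
    // After the fix, mirror JOIN::prepare() and pass the source pointer:
    return func_expr->fix_fields(thd, &func_expr);
  }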