With ibmdb2i_create_index_option set to 1, creating an IBMDB2I table
with a primary key should produce an additional index that uses EBCDIC
hexadecimal sorting. However, this does not work. Adding indexes that
are not primary keys does work. The ibmdb2i_create_index_option should
be honoured when creating a table with a primary key.
This patch adds code to the create() function to check for the value
of the ibmdb2i_create_index_option variable and, when appropriate, to
generate a *HEX-based shadow index in DB2 for the primary key. Previously
this behavior was limited to secondary indexes.
Additionally, this patch restricts the creation of shadow indexes to
cases in which a non-*HEX sort sequence is used, as the documentation
for ibmdb2i_create_index_option describes. Previously, the shadow index
would in some cases be created even when the MySQL-specific index used
*HEX sorting, leading to redundant indexes.
Finally, the code used to generate the list of fields for indexes
and the code used to generate the SQL statement for the shadow
indexes has been refactored into individual functions.
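As an illustration of the intended behaviour (table and column names are hypothetical, and the option is assumed to be settable at the session level):

  SET ibmdb2i_create_index_option= 1;
  CREATE TABLE t1 (c1 CHAR(10), PRIMARY KEY (c1)) ENGINE=IBMDB2I;
  -- With this patch, DB2 also gets a *HEX-sorted shadow index for the
  -- primary key (provided the primary key itself uses a non-*HEX sort
  -- sequence); previously only secondary indexes got one.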
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_45983.result:
Bug#45983 ibmdb2i_create_index_option=1 not working for primary key
Result file for the test case.
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_45983.test:
Bug#45983 ibmdb2i_create_index_option=1 not working for primary key
Add tests to verify that the ibmdb2i_create_index_option is being honoured
when creating a table with a primary key.
storage/ibmdb2i/ha_ibmdb2i.cc:
Bug#45983 ibmdb2i_create_index_option=1 not working for primary key
- Add code to the create() function to check for the value of the
ibmdb2i_create_index_option variable and, when appropriate, to
generate a *HEX-based shadow index in DB2 for the primary key.
- Restrict the creation of shadow indexes to cases in which a
non-*HEX sort sequence is used.
- Refactor code used to generate the list of fields for indexes
and the code used to generate the SQL statement for the shadow
indexes into individual functions.
storage/ibmdb2i/ha_ibmdb2i.h:
Bug#45983 ibmdb2i_create_index_option=1 not working for primary key
Add function prototypes for the functions that:
- Generate the list of fields for indexes
- Generate the SQL statement for the shadow indexes
Had attempted to disable this test on Windows only, but the nature of this bug
does not allow for this. The master.opt file is processed before anything in
the actual test. As a result, we must use disabled.def files to ensure
these tests are skipped on the problematic platforms.
Removed Windows-only code and updated the proper disabled.def files accordingly.
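For reference, a disabled.def file lists one test per line; the entry below is only a sketch (test name and comment are illustrative):

  ibmdb2i_bug_45983 : Bug#45983 Test cannot run on this platform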
Some collations were causing IBMDB2I to report
inaccurate key range estimations to the optimizer
for LIKE clauses that select substrings. This can
be seen by running EXPLAIN. This problem primarily
affects multi-byte and Unicode character sets.
This patch involves substantial changes to several
modules. There are a number of problems with the
character set and collation handling. These problems
have been or are being fixed, and a comprehensive
test has been included which should provide much
better coverage than there was before. This test
is enabled only for IBM i 6.1, because that version
has support for the greatest number of collations.
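The symptom is visible in the rows column of EXPLAIN; a hypothetical example:

  CREATE TABLE t1 (c1 VARCHAR(10) COLLATE utf8_czech_ci, KEY (c1)) ENGINE=IBMDB2I;
  EXPLAIN SELECT * FROM t1 WHERE c1 LIKE 'ab%';
  -- Before the fix, the "rows" estimate for the range scan could be far
  -- off for multi-byte and Unicode collations.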
mysql-test/suite/ibmdb2i/r/ibmdb2i_collations.result:
Bug#45803 Inaccurate estimates for partial key values with IBMDB2I
Result file for the test case.
mysql-test/suite/ibmdb2i/t/ibmdb2i_collations.test:
Bug#45803 Inaccurate estimates for partial key values with IBMDB2I
Tests for character sets and collations. This test
is enabled only for IBM i 6.1, because that version
has support for the greatest number of collations.
storage/ibmdb2i/db2i_conversion.cc:
Bug#45803 Inaccurate estimates for partial key values with IBMDB2I
- Added support in convertFieldChars to enable records_in_range
to determine how many substitute characters were inserted and
to suppress conversion warnings.
- Fixed bug which was causing all multi-byte and Unicode fields
to be created as UTF16 (CCSID 1200) fields in DB2. The corrected
code will now create UCS2 fields as UCS2 (CCSID 13488), UTF8
fields (except for utf8_general_ci) as UTF8 (CCSID 1208), and
all other multi-byte or Unicode fields as UTF16. This will only
affect tables that are newly created through the IBMDB2I storage
engine. Existing IBMDB2I tables will retain the original CCSID
until recreated. The existing behavior is believed to be
functionally correct, but it may negatively impact performance
by causing unnecessary character conversion. Additionally, users
accessing IBMDB2I tables through DB2 should be aware that mixing
tables created before and after this change may require extra type
casts or other workarounds. For this reason, users who have
existing IBMDB2I tables using a Unicode collation other than
utf8_general_ci are encouraged to recreate their tables (e.g.
ALTER TABLE t1 ENGINE=IBMDB2I) in order to get the updated CCSIDs
associated with their DB2 tables.
- Improved error reporting for unsupported character sets by forcing
a check for the iconv conversion table at table creation time,
rather than at data access time.
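A sketch of the corrected CCSID mapping (table and column names hypothetical):

  CREATE TABLE t1 (c1 CHAR(10) CHARACTER SET ucs2,
                   c2 CHAR(10) CHARACTER SET utf8 COLLATE utf8_bin,
                   c3 CHAR(10) CHARACTER SET utf8   -- utf8_general_ci
                  ) ENGINE=IBMDB2I;
  -- With the fix, c1 is created in DB2 as UCS2 (CCSID 13488), c2 as UTF8
  -- (CCSID 1208), and c3 (utf8_general_ci) is still created as UTF16
  -- (CCSID 1200).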
storage/ibmdb2i/db2i_myconv.h:
Bug#45803 Inaccurate estimates for partial key values with IBMDB2I
Fix to set errno when iconv fails.
storage/ibmdb2i/db2i_rir.cc:
Bug#45803 Inaccurate estimates for partial key values with IBMDB2I
Significant improvements were made to the records_in_range code
that handles partial length string data in keys for optimizer plan
estimation. Previously, to obtain an estimate for a partial key
value, the implementation would perform any necessary character
conversion and then attempt to determine the unpadded length of
the partial key by searching for the minimum or maximum sort
character. While this algorithm was sufficient for most single-byte
character sets, it did not treat Unicode and multi-byte strings
correctly. Furthermore, due to an operating system limitation,
partial keys having UTF8 collations (ICU sort sequences in DB2)
could not be estimated with this method.
With this patch, the code no longer attempts to explicitly determine
the unpadded length of the key. Instead, the entire key is converted
(if necessary), including padding, and then passed to the operating
system for estimation. Depending on the source and target character
sets and collations, additional logic is required to correctly
handle cases in which MySQL uses unconvertible or differently
weighted values to pad the key. The bulk of the patch exists
to implement this additional logic.
storage/ibmdb2i/ha_ibmdb2i.h:
Bug#45803 Inaccurate estimates for partial key values with IBMDB2I
The convertFieldChars declaration was updated to support additional
optional behaviors.
Slave stops when a statement that changes non-transactional
tables fails with a deadlock or lock wait timeout.
In STMT and MIXED modes, a statement that changes both non-transactional and
transactional tables must be written to the binary log whenever there are
changes to non-transactional tables. This means that the statement gets into
the binary log even when the changes to the transactional tables fail. In
particular, in the presence of a failure, such a statement is annotated with
the error number and wrapped in a BEGIN/ROLLBACK. On the slave, while applying
the statement, the same failure is expected, and the rollback prevents the
transactional changes from being persisted.
Unfortunately, statements that fail due to concurrency issues (e.g. deadlocks,
timeouts) are logged in the same way, causing the slave to stop as the statements
are applied sequentially by the SQL thread. To fix this bug, we automatically
ignore concurrency failures on the slave. Specifically, the following failures
are ignored: ER_LOCK_WAIT_TIMEOUT, ER_LOCK_DEADLOCK and ER_XA_RBDEADLOCK.
The crash happened because for views that are joins
we have table_list->table == 0, and any
table_list->table->'method' call leads to a crash.
The fix is to perform the table_list->table->file->extra()
call for all tables belonging to the view.
mysql-test/r/view.result:
test result
mysql-test/t/view.test:
test case
sql/sql_insert.cc:
Added the prepare_for_positional_update() function,
which updates extra info about the primary key for
tables belonging to a view.
Enabled message storing into the error message list
for the 'drop table' command.
mysql-test/r/warnings.result:
test result
mysql-test/t/warnings.test:
test case
sql/sql_table.cc:
We should skip error sending then we should return
warnings to client as some functions may send its
own errors, so we should set no_warnings_for_error= 0
only in case of warning.
The fix is to enable message storing into error message
list for 'drop table' command(only for error case).
tests/mysql_client_test.c:
test fix
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was in inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up with a length less
than its precision or decimals, which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
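An illustrative statement of the failing kind (the literal is arbitrary, chosen to exceed 65 digits):

  CREATE TABLE t1 SELECT
  1234567890123456789012345678901234567890123456789012345678901234567890.0 AS c1;
  -- A 71-digit DECIMAL constant: previously a bogus error in release
  -- builds or an assertion failure in debug builds.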
mysql-test/r/type_newdecimal.result:
Added a test case for bug #45262.
mysql-test/t/type_newdecimal.test:
Added a test case for bug #45262.
sql/item.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_cmpfunc.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_func.cc:
1. Use the new 'truncate' parameter in
my_decimal_precision_to_length().
2. Do not truncate decimal precision to DECIMAL_MAX_PRECISION
for additive expressions involving long DECIMAL constants.
3. Fixed an inconsistency in how DECIMAL constants and
expressions are handled for CREATE ... SELECT.
sql/item_func.h:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_sum.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/my_decimal.h:
Do not truncate precision to DECIMAL_MAX_PRECISION
when calculating length in
my_decimal_precision_to_length() if 'truncate' parameter
is FALSE.
sql/sql_select.cc:
1. Use the new 'truncate' parameter in
my_decimal_precision_to_length().
2. Use more correct logic when adjusting the value's length.
Bug #45807: crash accessing partitioned table and sql_mode
contains ONLY_FULL_GROUP_BY
The partitioning code needs to issue an Item::fix_fields()
on the partitioning expression in order to prepare
it for evaluation.
It does this by creating a special table and a table list
for the scope of the partitioning expression.
But when checking ONLY_FULL_GROUP_BY,
Item_field::fix_fields() relied on cached_table always being
set and tried to use it to get the
select_lex of the SELECT the field's table is in.
However, cached_table was not set by the partitioning code
that creates the artificial TABLE_LIST used to resolve the
partitioning expression, and this resulted in a crash.
Fixed by rectifying the following errors :
1. Item_field::fix_fields() : the code that checks for
ONLY_FULL_GROUP_BY relies on having tables with
cacheable_table set. This is mostly true, the only
two exceptions being the partitioning context table
and the trigger context table.
Fixed by taking the current parsing context if no pointer
to the TABLE_LIST instance is present in the cached_table.
2. fix_fields_part_func() :
2a. The code that adds the table being created to the
scope for the partitioning expression is mostly a copy
of add_table_to_list() and friends, with one exception:
it was not marking the table as cacheable (something that
the normal add_table_to_list() does). This caused the
problem in the check for ONLY_FULL_GROUP_BY in
Item_field::fix_fields() to appear.
Fixed by setting the correct members to make the table
cacheable.
The ideal structural fix for this is to use a unified
interface for adding a table to a table list
(add_table_to_list?) : noted in a TODO comment
2b. The Item::fix_fields() was called with a NULL destination
pointer. This causes uninitialized memory reads in the
overloaded ::fix_fields() function (namely
Item_field::fix_fields()) as it expects a non-zero pointer
there. Fixed by passing the source pointer similarly to how
it's done in JOIN::prepare().
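A hypothetical reproduction of the kind described above (the original test case may differ):

  SET sql_mode= 'ONLY_FULL_GROUP_BY';
  CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 2;
  -- Previously this could crash while fixing the fields of the
  -- partitioning expression; with the patch it completes normally.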
mysql-test/r/partition.result:
Bug #45807: test case
mysql-test/t/partition.test:
Bug #45807: test case
sql/item.cc:
Bug #45807: fix the ONLY_FULL_GROUP_BY check code to
handle correctly non-cacheable tables.
sql/sql_partition.cc:
Bug #45807: fix the Item::fix_fields() context
initialization for the partitioning expression in
CREATE TABLE.
Creating an IBMDB2I table with the macce character set
succeeded, but any attempt to insert data into the
table failed.
This happened because the character set name "macce"
is not a valid iconv descriptor for IBM i PASE. This patch
adds an override to convertTextDesc to use the equivalent
valid iconv descriptor "IBM-1282" instead.
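A minimal illustration (table and data hypothetical):

  CREATE TABLE t1 (c1 CHAR(10)) CHARACTER SET macce ENGINE=IBMDB2I;
  INSERT INTO t1 VALUES ('test');
  -- The CREATE succeeded even before this patch; the INSERT failed until
  -- the IBM-1282 override was added.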
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_45793.result:
Bug#45793 macce charset causes error with IBMDB2I
Result file for the test case.
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_45793.test:
Bug#45793 macce charset causes error with IBMDB2I
Add a test case for the macce character set.
storage/ibmdb2i/db2i_charsetSupport.cc:
Bug#45793 macce charset causes error with IBMDB2I
The character set name "macce" is not a valid iconv
descriptor for IBM i PASE. Add an override to convertTextDesc
to use the equivalent valid iconv descriptor "IBM-1282"
instead.
Some collations--including cp1250_czech_cs, latin2_czech_cs,
ucs2/utf8_czech_ci, and ucs2/utf8_danish_ci--were not being
sorted correctly by the IBMDB2I storage engine. This
was because the sort order used by DB2 is
incompatible with the order expected by MySQL.
This patch removes support for the cp1250_czech_cs and
latin2_czech_cs collations because it has been determined
that the sort order used by DB2 is incompatible with the
order expected by MySQL. Users needing a Czech collation
with IBMDB2I are encouraged to use a Unicode-based collation
instead of these single-byte collations. This patch also
modifies the DB2 sort sequence used for ucs2/utf8_czech_ci
and ucs2/utf8_danish_ci collations to better match the
sorting expected by MySQL. This will only affect indexes
or tables that are newly created through the IBMDB2I storage
engine. Existing IBMDB2I tables will retain the old sort
sequence until recreated.
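Illustrative statements (names hypothetical):

  CREATE TABLE t1 (c1 VARCHAR(10) COLLATE latin2_czech_cs) ENGINE=IBMDB2I;
  -- Now fails: this collation is no longer supported by IBMDB2I.
  CREATE TABLE t2 (c1 VARCHAR(10) COLLATE utf8_czech_ci, KEY (c1)) ENGINE=IBMDB2I;
  -- Succeeds; the index is built with the adjusted DB2 sort sequence.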
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_45196.result:
Bug#45196 Some collations do not sort correctly with IBMDB2I
Result file for the test case.
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_45196.test:
Bug#45196 Some collations do not sort correctly with IBMDB2I
Adding tests for testing the sort order with the modified collations.
storage/ibmdb2i/db2i_collationSupport.cc:
Bug#45196 Some collations do not sort correctly with IBMDB2I
Remove the support for the cp1250_czech_cs and latin2_czech_cs
collations because it has been determined that the sort order
used by DB2 is incompatible with the order expected by MySQL.
Users needing a Czech collation with IBMDB2I are encouraged to
use a Unicode-based collation instead of these single-byte
collations. This patch also modifies the DB2 sort sequence
used for ucs2/utf8_czech_ci and ucs2/utf8_danish_ci collations
to better match the sorting expected by MySQL. This will only
affect indexes or tables that are newly created through the
IBMDB2I storage engine. Existing IBMDB2I tables will retain
the old sort sequence until recreated.
format." warnings
Despite the fact that a statement would be filtered out of the binlog, a
warning would still be thrown if it was issued with a LIMIT clause.
This patch addresses the issue by checking the filtering rules before
printing the warning.
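For illustration, assuming the filtering option is --binlog-ignore-db (the database name comes from the test's -master.opt file; the table and statement are hypothetical):

  -- server started with --binlog-ignore-db=b42851
  USE b42851;
  UPDATE t1 SET a= 1 LIMIT 1;
  -- Previously this still raised the unsafe-statement warning although
  -- the statement was filtered out of the binlog; now it does not.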
mysql-test/suite/binlog/t/binlog_stm_unsafe_warning-master.opt:
Parameter to filter out database: "b42851".
mysql-test/suite/binlog/t/binlog_stm_unsafe_warning.test:
Added a new test case.
sql/sql_class.cc:
Added a filtering rules check to the condition used to decide whether
to print the warning or not.
The TABLE::reginfo.impossible_range is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening, and this might lead to an empty result on complex queries:
a query might set the impossible_range flag on a table, and when the query finishes,
all tables are returned to the table cache. The next query that uses the table
with the impossible_range flag set and an index over the table will see the flag
and thus return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
mysql-test/r/select.result:
A test case for the bug#45266: Uninitialized variable lead to an empty result.
mysql-test/t/select.test:
A test case for the bug#45266: Uninitialized variable lead to an empty result.
sql/sql_base.cc:
Bug#45266: Uninitialized variable lead to an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
sql/sql_select.cc:
Bug#45266: Uninitialized variable lead to an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
sql/structs.h:
Bug#45266: Uninitialized variable lead to an empty result.
A comment is added.
The problem is that the one-phase commit function failed to
properly end an empty transaction. The solution is to ensure
that the transaction cleanup procedure is invoked even for
empty transactions.
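A sketch of the failing scenario (the xid is arbitrary):

  XA START 'trx1';
  -- no statements are executed, so the transaction remains empty
  XA END 'trx1';
  XA COMMIT 'trx1' ONE PHASE;
  -- Previously the empty transaction was not properly ended; the cleanup
  -- procedure is now invoked unconditionally.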
mysql-test/r/xa.result:
Add test case result for Bug#45548
mysql-test/t/xa.test:
Add test case for Bug#45548
sql/handler.cc:
Invoke transaction cleanup function whenever a transaction is ended.
The test case added failed sporadically on PB. This is due to the
fact that the user thread in some cases waits for the slave IO thread
to stop and then checks the error number. Hence, the user thread
would sometimes race with the IO thread for the error number.
This post-push fix addresses this by replacing the wait for slave
IO to stop with a wait for a slave IO error (as it seems was also
done in 6.0 after the patch on which this one is based was
pushed). This implied backporting wait_for_slave_io_error.inc
from 6.0 as well.
Added privilege checking to SHOW CREATE TRIGGER code.
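For illustration (trigger name hypothetical; the exact privilege required is defined by the patch):

  SHOW CREATE TRIGGER t1_bi;
  -- With this patch the statement is subject to a privilege check and is
  -- rejected for users lacking the required privilege.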
mysql-test/r/trigger_notembedded.result:
test result
mysql-test/t/trigger_notembedded.test:
test case
sql/sql_show.cc:
Added privilege checking to SHOW CREATE TRIGGER code.
BUG#40565 - Update Query Results in "1 Row Affected" But Should Be "Zero Rows"
Detailed revision comments:
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
r3590 | marko | 2008-12-18 15:33:36 +0200 (Thu, 18 Dec 2008) | 11 lines
branches/5.1: When converting a record to MySQL format, copy the default
column values for columns that are SQL NULL. This addresses failures in
row-based replication (Bug #39648).
row_prebuilt_t: Add default_rec, for the default values of the columns in
MySQL format.
row_sel_store_mysql_rec(): Use prebuilt->default_rec instead of
padding columns.
rb://64 approved by Heikki Tuuri
------------------------------------------------------------------------
In Item_param::set_from_user_var,
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
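An illustrative sequence of the kind affected (table and statement hypothetical):

  SET NAMES latin1;                  -- session character_set_client = latin1
  SET @v= CONVERT('abc' USING gbk);  -- user variable with gbk charset
  PREPARE s FROM 'INSERT INTO t1 VALUES (?)';
  EXECUTE s USING @v;
  -- The parameter's character_set_client should come from the session
  -- (latin1 here), not from the user variable's character set (gbk).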
mysql-test/r/ctype_gbk_binlog.result:
test result
mysql-test/t/ctype_gbk_binlog.test:
test case
sql/item.cc:
In Item_param::set_from_user_var,
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table already exists) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus, if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see *
in the pseudocode below). A crash happened if the table was opened again. Fixed
by resetting the flag after error checking.
nested_loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1; *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
  assert(table->auto_increment_field_not_null == FALSE)
mysql-test/r/trigger.result:
Bug#44653: Test result
mysql-test/t/trigger.test:
Bug#44653: Test case
sql/sql_insert.cc:
Bug#44653: Fix: Make sure to unset this field before returning in case of error
1. BUG#45357 - 5.1.35 crashes with Failing assertion: index->type & DICT_CLUSTERED
2. Also fixes the compilation problem when the flag -DUNIV_MUST_NOT_INLINE is specified
Detailed revision comments:
r5340 | marko | 2009-06-17 12:11:49 +0300 (Wed, 17 Jun 2009) | 4 lines
branches/5.1: row_unlock_for_mysql(): When the clustered index is unknown,
refuse to unlock the record.
(Bug #45357, caused by the fix of Bug #39320).
rb://132 approved by Sunny Bains.
r5339 | marko | 2009-06-17 11:01:37 +0300 (Wed, 17 Jun 2009) | 2 lines
branches/5.1: Add missing #include "mtr0log.h" so that the code compiles
with -DUNIV_MUST_NOT_INLINE.