Detailed revision comments:
r6536 | sunny | 2010-01-30 00:13:42 +0200 (Sat, 30 Jan 2010) | 6 lines
branches/5.1: Check *first_value every time against the column max
value and set *first_value to the next autoinc value if it is greater
than the column max value, i.e. do not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
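A minimal SQL sketch of the scenario (table and values are illustrative,
not taken from the bug report):

  CREATE TABLE t1 (
    c1 BIGINT AUTO_INCREMENT PRIMARY KEY
  ) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (-1);    -- explicit negative value
  INSERT INTO t1 VALUES (NULL);  -- before the fix this could fail with
                                 -- ERROR 1467 (ER_AUTOINC_READ_FAILED)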
Detailed revision comments:
r6535 | sunny | 2010-01-30 00:08:40 +0200 (Sat, 30 Jan 2010) | 11 lines
branches/5.1: Undo the change from r6424. We need to return DB_SUCCESS even
if we were unable to initialize the table autoinc value. This is required for
the open to succeed. The only condition we currently treat as a hard error
is if the autoinc field instance passed in by MySQL is NULL.
Previously, if the table autoinc value was 0 and the next value was requested,
we had an assertion that would fail. Change that assertion and treat a value
of 0 to mean that the autoinc system is unavailable. Generation of next
value will now return failure.
rb://237
Propagation of a large unsigned numeric constant
in the WHERE expression led to wrong result.
For example,
"WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS USIGNED) AND FOO(a)",
where a is an UNSIGNED BIGINT, and FOO() accepts strings,
was transformed to "... AND FOO('-1')".
That has been fixed.
Also EXPLAIN EXTENDED printed incorrect numeric constants in
transformed WHERE expressions like above. That has been
fixed too.
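An illustrative script (LENGTH() stands in here for the string-accepting
FOO(); this is not the original test case):

  CREATE TABLE t1 (a BIGINT UNSIGNED);
  INSERT INTO t1 VALUES (0xFFFFFFFFFFFFFFFF);
  EXPLAIN EXTENDED
    SELECT * FROM t1
    WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND LENGTH(a) > 0;
  SHOW WARNINGS;  -- before the fix the rewritten WHERE clause shown here
                  -- contained the constant as -1 instead of
                  -- 18446744073709551615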
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table, which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
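One hypothetical query shape consistent with the description (the actual
test cases may differ); it only matters on debug builds, where the old
assertion fired:

  CREATE TABLE t1 (a INT, PRIMARY KEY (a));
  CREATE TABLE t2 (a INT);
  SELECT t2.a FROM t2, t1
  WHERE t2.a = t1.a AND t2.a = t1.a   -- identical expressions on t1
  ORDER BY t1.a;                      -- t1.a also in the ORDER BY list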
"TYPE=storage_engine" is deprecated, and will be removed
in the Celosia release of MySQL. Since the option is
present in the Betony release and the version number of
Celosia is still not decided, we need to bump the
deprecation version number back up to "6.0".
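For reference, the deprecated and the preferred syntax:

  CREATE TABLE t1 (i INT) TYPE=MyISAM;    -- deprecated, warns until removal
  CREATE TABLE t2 (i INT) ENGINE=MyISAM;  -- preferred replacement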
When EXPLAIN EXTENDED tries to print column names, it checks whether the
referenced table is CONST (in which case, the column's value rather than
its name will be printed). If no proper table is referenced (i.e. because
a derived table was used that has since gone out of scope), this will fail
spectacularly.
This ports an equivalent of the fix for Bug 43354.
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now distinguishes between IGNORE and ERROR_FOR_NULL and saves/restores
the check-field options.
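A sketch of the kind of statement affected, assuming non-strict sql_mode
(table, trigger and values are illustrative):

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW SET @dummy = 1;
  UPDATE t1 SET a = NULL;  -- should now behave the same (warning, value
                           -- converted) whether or not the trigger exists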
Table corruption happens while reading the table in the
ha_tina::find_current_row() function.
The Field::store() method returns an error (true) if the stored value is 0.
The fix:
added a special case for the enum type which correctly processes a 0 value.
Additional fix:
INSERT...(default) and INSERT...() now have the same behaviour for the enum type.
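A hypothetical reproduction shape, assuming non-strict sql_mode (the
original test case may differ): an invalid ENUM value is stored as the
error value 0 (''), and before the fix reading that row back through the
CSV engine was treated as corruption.

  CREATE TABLE t1 (e ENUM('a','b') NOT NULL) ENGINE=CSV;
  INSERT IGNORE INTO t1 VALUES ('c');  -- stored as '' (numeric value 0)
  SELECT * FROM t1;                    -- before the fix this read could
                                       -- flag the table as corrupt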
Some logic would always group tests by suite.
Disable this when using --noreorder.
Also fix retrieving the array from collect_one_suite() in this case.
Amended according to the previous comment.
A "%define" is no shell command, so it must not be the
only line in the "then" or "else" branch of an "if".
Add a ':' line to make the branch non-empty.
The problem is that during temporary table creation uneven bits
are not taken into account for hidden fields. This leads to an incorrect
calculation and allocation of the null-byte size for the table record, and
if the grouped value is NULL we set the wrong bit for this value (see end_update()).
Fixed by adding a separate calculation of uneven bits for hidden fields.
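A hypothetical query shape (not the original test case): the BIT column is
grouped but not selected, so it becomes a hidden field in the grouping
temporary table, and one of its values is NULL.

  CREATE TABLE t1 (a BIT(7), b INT);
  INSERT INTO t1 VALUES (NULL, 1), (b'1010101', 2);
  SELECT MIN(b) FROM t1 GROUP BY a;  -- before the fix the NULL group
                                     -- could be handled incorrectly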
This bug is just one facet of stored routines not being able to
detect changes in meta-data (WL#4179). This particular problem
can be triggered within a single session due to the improper
management of the pre-locking list if the view is expanded after
the pre-locking list is calculated.
Since the overall solution for the meta-data detection issue is
planned for a later release, for now a workaround is used to
fix this particular aspect that only involves a single session.
The workaround is to flush the thread-local stored routine cache
every time a view is created or modified, causing locally cached
routines to be re-evaluated upon invocation.
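A sketch of the single-session scenario (object names are illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  CREATE PROCEDURE p1() SELECT * FROM v1;
  CALL p1();                                  -- caches p1 and its
                                              -- pre-locking list
  CREATE OR REPLACE VIEW v1 AS SELECT a FROM t2;
  CALL p1();                                  -- previously could misbehave;
                                              -- the view change now flushes
                                              -- the local routine cache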
The problem becomes apparent only if HAVE_purify is undefined.
It is related to the part of the code in the open_table_from_share() function
where we initialize the record buffer only if HAVE_purify is enabled.
So with HAVE_purify=OFF the record buffer is not initialized
at the table-open stage.
Next we read a key, find a NULL value and update the appropriate null bit,
but do not update the record buffer. After that the record is stored
in the join cache (store_record_in_cache). For CHAR fields we
strip trailing spaces, and in our case this procedure uses the
uninitialized record buffer.
The fix is to skip the space-stripping procedure for NULL values
in CHAR fields (partially based on the 6.0 JOIN_CACHE implementation).
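A hypothetical query shape (mainly visible as warnings from memory
checkers): a join with no usable index, so the join cache is used, and a
CHAR column that is NULL.

  CREATE TABLE t1 (c CHAR(10), i INT);
  CREATE TABLE t2 (i INT);
  INSERT INTO t1 VALUES (NULL, 1);
  INSERT INTO t2 VALUES (1), (2);
  SELECT * FROM t1, t2 WHERE t1.i = t2.i;  -- before the fix, caching t1's
                                           -- row stripped spaces from the
                                           -- NULL CHAR value and read the
                                           -- uninitialized record buffer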