doesn't find the column"
When a 4.1 table with a VARCHAR column was used with a 5.0 server
and a query needed a temporary table to resolve itself, the
table metadata for the VARCHAR column sent to the client was incorrect:
the MYSQL_FIELD::table member was empty.
The bug was caused by the implicit "upgrade" from old VARCHAR to new
VARCHAR hard-coded in Field::new_field, which did not preserve
the information about the original table. As a result, the metadata
of the "upgraded" field pointed to an auxiliary temporary table
created for query execution.
The fix is to copy the pointer to the original table into the new field.
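A rough sketch of the shape of the fix (the class layout and member names
here are simplified assumptions, not the actual server code):

  struct TABLE;                          /* opaque in this sketch */

  struct Field {
    TABLE *table;                        /* table the field is attached to  */
    TABLE *orig_table;                   /* table the field originated from */
    virtual ~Field() {}
  };

  struct Field_varstring : Field {};

  /* "Upgrade" an old 4.1 VARCHAR field for use in a tmp table. */
  Field *upgrade_old_varchar(const Field &old_field, TABLE *tmp_table)
  {
    Field_varstring *f= new Field_varstring();
    f->table= tmp_table;                 /* auxiliary table for execution */
    /* The bug: the new field kept pointing at the tmp table, so the client
       saw an empty MYSQL_FIELD::table. The fix: carry the original table
       over from the source field. */
    f->orig_table= old_field.orig_table ? old_field.orig_table
                                        : old_field.table;
    return f;
  }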
The Item::tmp_table_field_from_field_type() function creates a Field_datetime
object instead of a Field_timestamp object for a timestamp field, thus always
changing the data type if a tmp table is used.
The Field_blob constructor which is used in
Item::tmp_table_field_from_field_type() always sets the packlength field of
the newly created blob to 4. This changes the field's data type, for example
from BLOB to LONGBLOB, if a temporary table is used.
The Item::make_string_field() function always converts Field_string objects
to Field_varstring objects. This changes the data type from
char/binary to varchar/varbinary.
Added an appropriate Field_timestamp constructor for use in the
Item::tmp_table_field_from_field_type() function.
Added a Field_blob constructor which sets the pack length according to the
max_length argument.
The Item::tmp_table_field_from_field_type() function now creates a
Field_timestamp object for a timestamp field.
Item_type_holder::display_length() now returns the correct length
for NULL.
The Item::make_string_field() function no longer changes Field_string to
Field_varstring in the case of Item_type_holder.
The Item::tmp_table_field_from_field_type() function now uses the Field_blob
constructor which sets the packlength according to max_length.
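To illustrate the pack-length part of the change, the mapping from a blob's
maximum length to the number of length bytes looks roughly like this (a
standalone sketch, not the actual Field_blob constructor):

  #include <cstdint>
  #include <cstdio>

  /* Pack length = bytes used to store the value length:
     1 -> TINYBLOB, 2 -> BLOB, 3 -> MEDIUMBLOB, 4 -> LONGBLOB. */
  unsigned blob_pack_length_from_max(uint32_t max_length)
  {
    if (max_length < (1UL << 8))  return 1;
    if (max_length < (1UL << 16)) return 2;
    if (max_length < (1UL << 24)) return 3;
    return 4;
  }

  int main()
  {
    /* A plain BLOB holds at most 65535 bytes, so its tmp-table copy should
       keep a 2-byte length and stay a BLOB instead of becoming a LONGBLOB. */
    std::printf("%u\n", blob_pack_length_from_max(65535));
    return 0;
  }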
When a default of '' was specified for TEXT/BLOB columns, the specification
was silently ignored. This is presumably to be nice to applications (or
people) who generate their column definitions in a not-very-clever fashion.
For clarity, doing this now results in a warning, or an error in strict
mode.
The Federated storage engine used Field methods that had arbitrary limits on
the amount of data they could process, which caused problems with data
over that limit (4K). By removing those Field methods and just using
features of the String class, we can avoid this problem.
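The idea, sketched with std::string standing in for the server's String
class (the real Federated code differs in the details):

  #include <cstddef>
  #include <cstdio>
  #include <string>

  /* Append a quoted value to the statement being built for the remote
     server (escaping omitted in this sketch). A growable buffer has no
     fixed 4K ceiling, unlike the old Field-based helpers. */
  void append_quoted(std::string &query, const char *data, std::size_t len)
  {
    query += '\'';
    query.append(data, len);
    query += '\'';
  }

  int main()
  {
    std::string stmt= "INSERT INTO t1 VALUES (";
    std::string big_value(8192, 'x');      /* larger than the old limit */
    append_quoted(stmt, big_value.data(), big_value.size());
    stmt += ")";
    std::printf("statement length: %zu\n", stmt.size());
    return 0;
  }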
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
INSERT DELAYED crashed in 5.0 on a table with a varchar that
could be NULL and was created pre-5.0 (Bugs 16218 and 13707).
INSERT DELAYED corrupted data in 5.0 on a table with varchar
fields that was created pre-5.0 (Bugs 17294 and 16611).
In the case of INSERT DELAYED, the open table is copied from the
delayed insert thread so that a record can be created for the
queue. When copying the fields, a method was used that
converted old varchar fields to new varchar fields and did not set up
some pointers into the record buffer of the table.
The field conversion was responsible for the misinterpretation of
the record contents by the delayed insert thread. The wrong
pointer setup was responsible for the crashes.
For Bug 13707 (Server crash with INSERT DELAYED on MyISAM table),
I fixed the above-mentioned method to set up one of the pointers.
For Bug 16218 I set up the other pointers too.
But when looking at the corruptions, I became aware that converting
the field type was totally wrong for INSERT DELAYED. The copied
table is used to create a record that is to be sent to the
delayed insert thread. Of course it can interpret the record
correctly only if all field types are the same in both table
objects.
So I revoked the fix for Bug 13707 and changed the new_field()
method so that it can suppress conversions.
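The shape of that change can be sketched as follows (hypothetical,
simplified signatures; the real new_field() takes different arguments):

  struct MEM_ROOT;                  /* opaque server types in this sketch */
  struct TABLE;

  struct Field {
    virtual ~Field() {}
    virtual Field *clone_same_type(MEM_ROOT *root, TABLE *new_table)= 0;
    virtual Field *clone_maybe_upgraded(MEM_ROOT *root, TABLE *new_table)= 0;

    /* keep_type == true suppresses the old-VARCHAR to new-VARCHAR
       conversion, so the table copy used by the delayed insert queue
       interprets the record exactly like the original table object. */
    Field *new_field(MEM_ROOT *root, TABLE *new_table, bool keep_type)
    {
      return keep_type ? clone_same_type(root, new_table)
                       : clone_maybe_upgraded(root, new_table);
    }
  };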
No test case as this is a migration problem. One needs to
create a table with 4.x and use it with 5.x. I added two
test scripts to the bug report.
This bug in Field_string::cmp resulted in wrong comparisons
with keys in partial indexes over multi-byte character fields.
For example, given a field a declared as varchar(16) collate utf8_unicode_ci,
INDEX(a(4)) is such an index.
Wrong key comparisons could lead to wrong result sets if
the chosen query execution plan used a range scan over
a partial index on a utf8 character field.
This also caused wrong results in many other cases.
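The general pitfall can be illustrated with a hypothetical helper (this is
not the Field_string::cmp fix itself): for a partial index over N characters
of a multi-byte column, the number of bytes to compare must be derived from
the data, not taken as a fixed byte count.

  #include <cstddef>

  /* Byte length of the UTF-8 sequence starting with byte b. */
  std::size_t utf8_char_len(unsigned char b)
  {
    if (b < 0x80)           return 1;
    if ((b & 0xE0) == 0xC0) return 2;
    if ((b & 0xF0) == 0xE0) return 3;
    return 4;
  }

  /* Byte offset covering the first nchars characters of s (at most len
     bytes). A comparison for INDEX(a(4)) must look at this many bytes of
     each value, which varies per value, instead of a constant prefix. */
  std::size_t utf8_charpos(const unsigned char *s, std::size_t len,
                           std::size_t nchars)
  {
    std::size_t i= 0;
    while (nchars-- > 0 && i < len)
      i+= utf8_char_len(s[i]);
    return i < len ? i : len;
  }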
The problem appeared because the same values produced different hashes
during INSERT and SELECT for the VARCHAR data type.
Fix:
VARCHAR requires special treatment to avoid hashing the length bytes
(the leftmost one or two bytes) as well as trailing bytes beyond the real
length, which can contain garbage. The fix introduces hash(), a new method
in the Field class.
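A minimal sketch of the idea (illustrative only, not the server's actual
hash function or the new Field method): hash only the real data bytes,
skipping the length prefix and the unused tail of the record buffer.

  #include <cstddef>
  #include <cstdint>

  /* Simple FNV-1a, used here only for illustration. */
  uint32_t hash_bytes(const unsigned char *data, std::size_t len)
  {
    uint32_t h= 2166136261u;
    for (std::size_t i= 0; i < len; i++)
    {
      h ^= data[i];
      h *= 16777619u;
    }
    return h;
  }

  /* ptr points at the start of the VARCHAR in the record buffer:
     <1 or 2 length bytes><data><unused tail, possibly garbage>. */
  uint32_t varchar_field_hash(const unsigned char *ptr,
                              unsigned length_bytes /* 1 or 2 */)
  {
    std::size_t real_length= (length_bytes == 1)
        ? ptr[0]
        : (std::size_t) (ptr[0] | (ptr[1] << 8));  /* low byte first */
    return hash_bytes(ptr + length_bytes, real_length);
  }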