Corrected spelling in copyright text
Makefile.am:
  Don't update the files from BitKeeper
Many files:
  Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
  Adjusted year(s) in copyright header
Many files:
  Added GPL copyright text
Removed files:
  Docs/Support/colspec-fix.pl
  Docs/Support/docbook-fixup.pl
  Docs/Support/docbook-prefix.pl
  Docs/Support/docbook-split
  Docs/Support/make-docbook
  Docs/Support/make-makefile
  Docs/Support/test-make-manual
  Docs/Support/test-make-manual-de
  Docs/Support/xwf
The problem is that a GEOMETRY NOT NULL column cannot automatically be
given a default value. We always tried to complete a LOAD DATA
command even if there was not enough data in the file. That doesn't work
for GEOMETRY NOT NULL. Now Field_*::reset() returns an error indication,
which is checked in mysql_load()
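A minimal sketch of the scenario; the table and file names are
illustrative:

    CREATE TABLE t1 (g GEOMETRY NOT NULL, i INT);
    -- If a line in data.txt supplies too few columns, g is left without
    -- a value; since GEOMETRY NOT NULL has no usable default, the row
    -- is now rejected instead of being silently completed.
    LOAD DATA INFILE 'data.txt' INTO TABLE t1;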
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added a missing argument to store(longlong), whose absence caused the wrong store method to be called.
When implicitly converting string fields to numbers, the
string-to-number conversion error was not sent to the client.
Added code to send the conversion error as a warning.
We also need to prevent generation of warnings from the places
where the val_xxx() methods are called for the sole purpose of updating
the Item::null_value flag.
To achieve that, a special function, update_null_value(), is added
(and called). This function sets the no_errors flag and calls
val_xxx(). The warning generation in Field_string::val_xxx()
uses the flag when generating the conversion warnings.
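A hedged sketch of the kind of statement that now raises the warning;
the names are illustrative:

    CREATE TABLE t1 (a VARCHAR(10));
    INSERT INTO t1 VALUES ('abc');
    -- Comparing the string column with a number forces an implicit
    -- string-to-number conversion; its failure is now reported to the
    -- client as a warning instead of being lost.
    SELECT * FROM t1 WHERE a = 0;
    SHOW WARNINGS;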
The problem was that all VIEW columns always had implicit derivation.
Fix: the derivation is now copied from the original expression
given in the VIEW definition.
For example:
- a VIEW column which comes from a string constant
in the CREATE VIEW definition now has coercible derivation.
- a VIEW column having a COLLATE clause
in the CREATE VIEW definition now has explicit derivation.
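Illustrative definitions for the two cases above; all names are
assumptions:

    CREATE TABLE t1 (a VARCHAR(10)) CHARACTER SET latin1;
    -- c1 comes from a string constant: coercible derivation.
    CREATE VIEW v1 AS SELECT 'const' AS c1 FROM t1;
    -- c2 carries a COLLATE clause: explicit derivation.
    CREATE VIEW v2 AS SELECT a COLLATE latin1_bin AS c2 FROM t1;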
Problem: a confusing error message was produced when a value could not
be converted between the string and column character sets on INSERT
and UPDATE.
Fix: produce a better error message, instead of "Data too long",
in such cases.
Additional changes: added "DROP TABLE IF EXISTS" to several
tests to be safe against failures in previous tests.
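A sketch of a statement that previously drew the misleading message;
the names and character sets are illustrative:

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
    -- A character with no latin1 equivalent now produces a
    -- conversion-specific message instead of "Data too long".
    INSERT INTO t1 VALUES (_ucs2 0x043F);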
- bug #11655 "Wrong time is returning from nested selects - maximum time exists"
- input and output TIME values were not validated properly in several conversion functions
- bug #20927 "sec_to_time treats big unsigned as signed"
- integer overflows were not checked in several functions. As a result, input values like 2^32 or 3600*2^32 were treated as 0
- BIGINT UNSIGNED values were treated as SIGNED in several functions
- in cases where both input string truncation and out-of-range TIME value occur, only 'truncated incorrect time value' warning was produced
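Hedged examples of inputs affected by the overflow and signedness
issues:

    -- 2^32 seconds overflowed the internal calculation and was
    -- treated as 0 instead of being capped at the maximum TIME value.
    SELECT SEC_TO_TIME(4294967296);
    -- The largest BIGINT UNSIGNED value was treated as SIGNED
    -- (bug #20927) and produced a negative time.
    SELECT SEC_TO_TIME(18446744073709551615);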
Problem: for character sets having mbmaxlen==2,
any ALTER TABLE changed a TEXT column's type to MEDIUMTEXT,
due to a wrong "internal length to create length" formula.
Fix: removed the rounding code introduced early in the 4.1 days,
which is not correct anymore.
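A sketch of the symptom, assuming a two-byte character set such as
ucs2:

    CREATE TABLE t1 (a TEXT CHARACTER SET ucs2);
    ALTER TABLE t1 ADD COLUMN b INT;
    -- Before the fix, column a came back as MEDIUMTEXT here;
    -- it now stays TEXT.
    SHOW CREATE TABLE t1;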
The function receives an exactly-sized buffer (not a NUL-terminated C
string) and passes it into a printf function to be interpreted with "%s".
Instead, create an intermediate String object, copy the data into it,
and pass in a pointer to the String's NUL-terminated buffer.
doesn't find the column"
When a user was using 4.1 tables with a VARCHAR column on a 5.0 server,
and a query used a temporary table to resolve itself, the
table metadata for the VARCHAR column sent to the client was incorrect:
the MYSQL_FIELD::table member was empty.
The bug was caused by the implicit "upgrade" from old VARCHAR to new
VARCHAR hard-coded in Field::new_field, which did not preserve
the information about the original table. Thus, the field metadata
of the "upgraded" field pointed to an auxiliary temporary table
created for query execution.
The fix is to copy the pointer to the original table to the new field.
The Item::tmp_table_field_from_field_type() function creates a
Field_datetime object instead of a Field_timestamp object for a
timestamp field, thus always changing the data type if a tmp table
is used.
The Field_blob constructor that is used in
Item::tmp_table_field_from_field_type() always sets the packlength field
of the newly created blob to 4. This leads to changing the field's data
type, for example from BLOB to LONGBLOB, if a temporary table is used.
The Item::make_string_field() function always converts Field_string
objects to Field_varstring objects. This leads to changing the data type
from CHAR/BINARY to VARCHAR/VARBINARY.
Added an appropriate Field_timestamp constructor for use in the
Item::tmp_table_field_from_field_type() function.
Added a Field_blob constructor which sets the pack length according to
the max_length argument.
The Item::tmp_table_field_from_field_type() function now creates a
Field_timestamp object for a timestamp field.
Item_type_holder::display_length() now returns the correct length
for NULL.
The Item::make_string_field() function now doesn't change Field_string to
Field_varstring in the case of Item_type_holder.
The Item::tmp_table_field_from_field_type() function now uses the
Field_blob constructor which sets the packlength according to max_length.
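An illustrative statement where the intermediate temporary table used
to change the column types; the names are assumptions:

    CREATE TABLE t1 (ts TIMESTAMP, b BLOB, c CHAR(10));
    -- The UNION is resolved through a temporary table; before the fix
    -- the result columns came back as DATETIME, LONGBLOB and VARCHAR
    -- instead of TIMESTAMP, BLOB and CHAR.
    CREATE TABLE t2 AS SELECT * FROM t1 UNION SELECT * FROM t1;
    SHOW CREATE TABLE t2;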
The problem was that when converting a string to an exact number,
rounding didn't work, because the conversion didn't understand
approximate-number notation.
Fix: a new function for string-to-number conversion was implemented,
which is aware of approximate-number notation (with a decimal point
and exponent, e.g. -19.55e-1)
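A hedged example of the notation the new conversion understands:

    -- -19.55e-1 is -1.955; rounding to two decimals now yields -1.96
    -- (previously the exponent broke the rounding).
    SELECT CAST('-19.55e-1' AS DECIMAL(10,2));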
When a default of '' was specified for TEXT/BLOB columns, the specification
was silently ignored. This is presumably to be nice to applications (or
people) who generate their column definitions in a not-very-clever fashion.
For clarity, doing this now results in a warning, or an error in strict
mode.
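A sketch of a definition that now draws the warning (or an error in
strict mode); the table name is illustrative:

    CREATE TABLE t1 (b BLOB DEFAULT '');
    -- Reports that BLOB/TEXT column 'b' can't have a default value.
    SHOW WARNINGS;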
The Federated storage engine used Field methods that had arbitrary limits on
the amount of data they could process, which caused problems with data
over that limit (4K). By removing those Field methods and just using
features of the String class, we can avoid this problem.
Bug#17294 - INSERT DELAYED puting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
INSERT DELAYED crashed in 5.0 on a table with a varchar that
could be NULL and was created pre-5.0 (Bugs 16218 and 13707).
INSERT DELAYED corrupted data in 5.0 on a table with varchar
fields that was created pre-5.0 (Bugs 17294 and 16611).
In the case of INSERT DELAYED the open table is copied from the
delayed insert thread to be able to create a record for the
queue. When copying the fields, a method was used that
converted old varchar to new varchar fields and did not set up
some pointers into the record buffer of the table.
The field conversion was responsible for the misinterpretation of
the record contents by the delayed insert thread. The wrong
pointer setup was responsible for the crashes.
For Bug 13707 (Server crash with INSERT DELAYED on MyISAM table)
I fixed the above-mentioned method to set up one of the pointers.
For Bug 16218 I set up the other pointers too.
But when looking at the corruptions I became aware that converting
the field type was totally wrong for INSERT DELAYED. The copied
table is used to create a record that is to be sent to the
delayed insert thread. Of course it can interpret the record
correctly only if all field types are the same in both table
objects.
So I revoked the fix for Bug 13707 and changed the new_field()
method so that it can suppress conversions.
No test case as this is a migration problem. One needs to
create a table with 4.x and use it with 5.x. I added two
test scripts to the bug report.
This bug in Field_string::cmp resulted in a wrong comparison
with keys in partial indexes over multi-byte character fields.
For example, given a field a declared as varchar(16) collate
utf8_unicode_ci, INDEX(a(4)) is such an index.
Wrong key comparisons could lead to wrong result sets if
the selected query execution plan used a range scan by
a partial index over a utf8 character field.
This also caused wrong results in many other cases.
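The index from the description, written out; the table name is an
assumption:

    CREATE TABLE t1 (
      a VARCHAR(16) COLLATE utf8_unicode_ci,
      INDEX (a(4))  -- partial index over a multi-byte character field
    ) CHARACTER SET utf8;
    -- A range scan through this partial index could return wrong rows
    -- before the Field_string::cmp fix.
    SELECT a FROM t1 WHERE a BETWEEN 'aaaa' AND 'zzzz';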