Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls is
more than it makes sense to do in typical cases)
- Don't call sel_arg->test_use_count() if we've already allocated
more than MAX_SEL_ARGS elements. The test will succeed but will take
too much time for the test suite (and not provide much value).
- GRANT and REVOKE statements didn't have the "updating" flag set and
thus statements with a table specified would not replicate if
slave filtering rules were turned on.
For example "GRANT ... ON test.t1 TO ..." would not replicate.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
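For illustration, a hedged sketch of the kind of statement that was affected (the filter option, table, and account names are hypothetical):

  -- With a slave filtering rule such as replicate-wild-do-table=test.% in
  -- effect, a table-level GRANT like this previously did not replicate:
  GRANT SELECT, INSERT ON test.t1 TO 'app_user'@'%';
  REVOKE INSERT ON test.t1 FROM 'app_user'@'%';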
A table with maximum number of key segments and maximum length key name
would have a corrupted .frm file, due to an incorrect calculation of the
complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
- Added PARAM::alloced_sel_args where we count the # of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and not do any more cloning
if we've already created MAX_SEL_ARGS SEL_ARG objects in cloning.
- Added comments about space complexity of SEL_ARG-graph
representation.
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes
of this type unusable because it rejected many different
values as duplicates.
The problem was that multibyte character detection was tried
on the internal numeric column value. Many values were not
identified as characters. Their key value became blank filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
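A minimal sketch of the affected setup (table and value names are hypothetical):

  CREATE TABLE t1 (
    e ENUM('a','b','c') CHARACTER SET utf8,
    UNIQUE KEY USING BTREE (e)
  ) ENGINE=MEMORY;
  INSERT INTO t1 VALUES ('a');
  INSERT INTO t1 VALUES ('b');  -- previously could be rejected as a duplicate of 'a'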
to 0 causes wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset()
that is called when storing a NULL value into
the column.
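A minimal sketch of a statement that triggered the problem (table and column names are hypothetical):

  CREATE TABLE t1 (id INT, g POINT) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1, NULL);  -- storing NULL into the POINT column read a bogus blob length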
fix for cast( AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method, as is
usually done in other classes.
Should be fixed more radically in 5.0
Shift the ID values up into a range where they will not collide with those
which we use for real data, when we fill the system tables.
Will be merged up to 5.0 where it is needed for 5.0.38.
Using a MEMORY table BTREE index for scanning for updatable rows
could lead to an infinite loop.
Every time a key was inserted into a BTREE index, the position
in the index scan was cleared. The search started from the
beginning and found the same key again.
Now we do not clear the position on key insert any more.
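A hedged sketch of the kind of statement that could loop (table, column, and values are hypothetical):

  CREATE TABLE t1 (a INT, KEY USING BTREE (a)) ENGINE=MEMORY;
  INSERT INTO t1 VALUES (1), (2), (3);
  -- Scanning for updatable rows through the BTREE index could previously
  -- re-find the freshly inserted key and never terminate:
  UPDATE t1 SET a = a + 10 WHERE a > 0;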
when index is used
When the table contained TEXT columns with empty contents
('', zero length, but not NULL) _and_ strings starting with
control characters like tabulator or newline, the empty values
were not found in a "records in range" estimate. Hence count(*)
missed these records.
The reason was a different set of search flags used for key
insert and key range estimation.
I decided to fix the set of flags used in range estimation.
Otherwise millions of databases around the world would require
a repair after an upgrade.
The consequence is that the manual must be fixed, which claims
that TEXT columns are compared with "end space padding". This
is true for CHAR/VARCHAR but wrong for TEXT. See also bug 21335.
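A hedged illustration of the affected pattern (table, column, and values are hypothetical):

  CREATE TABLE t1 (c TEXT, KEY (c(10))) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (''), ('\tindented'), ('plain');
  -- With the index used for the range estimate, the empty value could be
  -- missed, so this count came out too small:
  SELECT COUNT(*) FROM t1 WHERE c <= 'plain';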
differences in tables
Certain merge tables were wrongly reported as having incorrect definition:
- Some fields that are 1 byte long (e.g. TINYINT, CHAR(1)) might
be internally cast (in certain cases) to a different type on the
storage engine layer. (affects 4.1 and up)
- If tables in a merge (and the MERGE table itself) had a short VARCHAR column (less
than 4 bytes) and at least one (but not all) tables were ALTER'ed (even to an
identical table: ALTER TABLE xxx ENGINE=yyy), the table definitions went out of
sync. (affects 4.1 only)
This is fixed by relaxing the check for underlying conformance and setting the
field type to FIELD_TYPE_STRING in case the VARCHAR is shorter than 4 bytes
when a table is created.
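A hedged sketch of the second case (table names are hypothetical):

  CREATE TABLE t1 (v VARCHAR(2)) ENGINE=MyISAM;
  CREATE TABLE t2 (v VARCHAR(2)) ENGINE=MyISAM;
  CREATE TABLE m1 (v VARCHAR(2)) ENGINE=MERGE UNION=(t1, t2);
  ALTER TABLE t1 ENGINE=MyISAM;  -- ALTER to an identical definition
  SELECT * FROM m1;              -- previously reported the tables as differently defined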
incorrect key file for table
In certain cases it could happen that deleting a row could
corrupt an RTREE index.
According to Guttman's algorithm, page underflow is handled
by storing the page in a list for later re-insertion. The
keys from the stored pages have to be inserted into the
remaining pages of the same level of the tree. Hence the
level number is stored in the re-insertion list together
with the page.
In the MySQL RTree implementation the level is counted from zero
at the root page, with increasing numbers for levels down the tree.
If during re-insertion of the keys the tree height grows, all
level numbers become invalid. The remaining keys will be
inserted at the wrong level.
The fix is to increment the level numbers stored in the
reinsert list after a split of the root block during reinsertion.
Using INSERT DELAYED on MERGE tables could lead to table
corruptions.
The manual lists a couple of storage engines that can be
used with INSERT DELAYED. MERGE is not in this list.
However, an attempt to use it anyway was not rejected.
This bug was not detected earlier as it can work under
special circumstances. Most notable is low concurrency.
To be safe, this patch rejects any attempt to use INSERT
DELAYED on MERGE tables.
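A minimal sketch of what is now rejected (table names are hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1) INSERT_METHOD=FIRST;
  INSERT DELAYED INTO m1 VALUES (1);  -- now refused instead of risking corruption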
When the ORDER BY clause is fixed, it is allowed to search the current
item_list in order to find aliased fields and expressions. This is OK for a
SELECT but wrong for an UPDATE statement. If the ORDER BY clause contains a
non-existing field which is also mentioned in the UPDATE SET list,
the server crashes because it uses a non-existing (0x0) field.
Now, when an Item_field is fixed, searching the item list for
aliased expressions and fields is allowed only for SELECTs.
Having the maybe_null flag unset for geometry/spatial functions leads to
wrong results from Item_func_isnull::val_int().
Fix: set the maybe_null flag and add is_null() methods.
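A hedged illustration, assuming a spatial function that yields NULL for unparsable input:

  -- GeomFromText() returns NULL for invalid WKT; with maybe_null unset,
  -- ISNULL() could previously report 0 here instead of 1:
  SELECT ISNULL(GeomFromText('not-a-geometry'));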
- The starting time of a query sent by file bootstrapping wasn't initialized,
so the starting time defaulted to 0. This value was later used by the NOW
item and translated to 1970-01-01 11:00:00.
- Marking the time with thd->set_time() before the call to
mysql_parse() resolves this issue.
ENUMs weren't allowed to have character 0xff, a perfectly good character in some locales.
This was circumvented by mapping 0xff in ENUMs to ',', thereby preventing actual commas from
being used. Now if 0xff makes an appearance, we find a character not used in the enum and
use that as a separator. If no such character exists, we throw an error.
Any solution would have broken some sort of existing behaviour. This solution should
serve both factions (those with 0xff and those with ',' in their enums), but
WILL REQUIRE A DUMP/RESTORE CYCLE FROM THOSE WITH 0xff IN THEIR ENUMS. :-/
That is, mysqldump with their current server, and restore when upgrading to one with
this patch.
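A hedged sketch of an affected definition (the latin1 value containing 0xff is shown as 'ÿ'; table and value names are hypothetical):

  CREATE TABLE t1 (
    e ENUM('a', 'ÿ', 'b,c') CHARACTER SET latin1
  );
  -- Previously 0xff was mapped to ',' internally, so 'ÿ' collided with the
  -- comma used as the value separator; now an unused separator is chosen.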
"update existingtable set anycolumn=nonexisting order by nonexisting" would crash
the server.
Though we would find the reference to a field, that doesn't mean we can then use
it to set some values. It could be a reference to another field. If the reference is NULL,
don't try to use it to set values in the Item_field and instead return an error.
Compared to the previous patch, this signals an error at the location of the error, rather
than letting the subsequent dereference signal it.
When a merge table is opened compare column and key definition of
underlying tables against column and key definition of merge table.
If any of underlying tables have different column/key definition
refuse to open merge table.
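A minimal sketch of what the check now catches (table names are hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a BIGINT) ENGINE=MyISAM;             -- differing column definition
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t2);
  SELECT * FROM m1;  -- opening the merge table is now refused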
Depending on the queries we use different data processing methods
and can lose some data in case of double (and decimal in 4.1) fields.
The fix consists of two parts:
1. Double comparison changed: now double a is equal to double b
if (a-b) is less than 5*0.1^(1 + max(a->decimals, b->decimals)).
For example, if a->decimals==1 and b->decimals==2, then a==b if (a-b)<0.005.
2. if we use a temporary table, store double values there as is
to avoid any data conversion (rounding).
The bug report has demonstrated the following two problems.
1. If an ORDER/GROUP BY list includes a constant expression being
optimized away and, at the same time, containing single-row
subselects that return more than one row, no error is reported.
Strictly speaking, the standard allows ignoring the error in this case.
Yet now a corresponding fatal error is reported in this case.
2. If a query requires sorting by expressions containing single-row
subselects that, however, return more than one row, then the execution
of the query may cause a server crash.
To fix this some code has been added that blocks execution of a subselect
item in case of a fatal error in the method Item_subselect::exec.
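A hedged illustration of the first case (table names are hypothetical; t2 is assumed to contain more than one row):

  -- The GROUP BY expression is constant with respect to t1 and gets optimized
  -- away; the single-row subselect nevertheless returns several rows, which is
  -- now reported as an error instead of being silently ignored.
  SELECT 1 FROM t1 GROUP BY 1 + (SELECT a FROM t2);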
WL#3681 (ALTER TABLE ORDER BY)
Before this fix, the ALTER TABLE statement implemented an ORDER BY option
with the following characteristics:
1) The ORDER BY clause accepts a list of criteria, with optional ASC or
DESC keywords.
2) Each criterion can be a general expression, involving operators,
native functions, stored functions, user defined functions, subselects ...
With this fix:
1) has been left unchanged, since it's a de-facto existing feature
that was already present in the code base and partially covered in the test
suite. Code coverage for ASC and DESC was missing and has been improved.
2) has been changed to limit the kind of criteria that are permissible:
now only a column name is valid.
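For example (table and column names are hypothetical):

  ALTER TABLE t1 ORDER BY col1 ASC, col2 DESC;  -- still accepted: plain column names
  ALTER TABLE t1 ORDER BY col1 + 1;             -- now rejected: general expressions are no longer permitted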
for queries using 'range checked for each record'.
The problem was fixed in 5.0 by the patch for bug 12291.
This patch down-ported the corresponding code from 5.0 into
QUICK_SELECT::init() and added a new test case.
correctly.
The Item_func::print method was used to print the Item_func_encode and the
Item_func_decode objects. The last argument to ENCODE and DECODE functions
is a plain C string and thus Item_func::print wasn't able to print it.
The print() method is added to the Item_func_encode class. It correctly
prints the Item_func_encode and the Item_func_decode objects.
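A hedged sketch of where the printing matters (the string literals are hypothetical):

  -- EXPLAIN EXTENDED re-prints the query via the item print methods; the
  -- password string argument of ENCODE/DECODE was previously printed incorrectly.
  EXPLAIN EXTENDED SELECT DECODE(ENCODE('secret', 'pass_str'), 'pass_str');
  SHOW WARNINGS;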
WHERE is present.
If a DELETE statement with ORDER BY and LIMIT contains a WHERE clause
with conditions that for sure cannot be used for index access (like in
WHERE @var:= field) the execution always follows the filesort path.
This happens even when there is an index that can be used to
speed up sorting by the ORDER BY list.
Now if a DELETE statement with ORDER BY and LIMIT contains such WHERE
clause conditions that cannot be used to build any quick select, then
mysql_delete() tries to use an index as if there were no WHERE clause at all.
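A hedged sketch of the pattern that now uses the index (table and column names are hypothetical; an index on id is assumed):

  CREATE TABLE t1 (id INT, v INT, KEY (id)) ENGINE=MyISAM;
  -- The WHERE condition cannot be used for index access, but the index on id
  -- can still be used to satisfy ORDER BY ... LIMIT without a filesort:
  DELETE FROM t1 WHERE @saved_id := id ORDER BY id LIMIT 1;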
In the method Item_field::fix_fields we try to resolve the name of
the field against the names of the aliases that occur in the select
list. This is done by a call of the function find_item_in_list.
When this function finds several occurrences of the field name
it sends an error message to the error queue and returns 0.
Yet the code did not take into account that find_item_in_list
could return 0 and tried to dereference the returned value.
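A hedged illustration of a query that makes the alias lookup ambiguous (table, column, and alias names are hypothetical):

  -- Two select-list items share the alias 'x'; find_item_in_list() reports the
  -- ambiguity and returns 0, which previously was dereferenced and crashed.
  SELECT a AS x, b AS x FROM t1 HAVING x = 1;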
The function mi_get_pointer_length() computed too small
pointer size for very large tables.
Inserted missing 'else' between the branches for very
large tables.
An update that used a join of a table to itself and modified the
table on one side of the join reported the table as crashed or
updated wrong rows.
Fixed by creating a temporary table for the self-joined multi-update statement.
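A minimal sketch of such an update (table and column names are hypothetical):

  -- A multi-table UPDATE joining t1 to itself and modifying one side:
  UPDATE t1 AS a, t1 AS b
  SET a.v = b.v + 1
  WHERE a.parent_id = b.id;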
- When this bug was corrected, it changed the behavior
for the data/index directory in the myisam test case.
- This patch moves the OS-dependent tests to a non-Windows
test file.