The message is grammatically wrong and factually wrong.
Change it to refer to the myisam_sort_buffer_size variable and change
"to" to "too".
myisam/sort.c:
Change error messages to be grammatically correct and to refer to the
correct variable.
mysql-test/r/repair.result:
Refer to the correct variable. Message changed.
Fixed bug #24856: the result set of a ROLLUP query with DISTINCT could lack
some rollup rows (rows with NULLs for grouping attributes) if the GROUP BY
list contained constant expressions.
This happened because the results of constant expressions were not put
in the temporary table used for duplicate elimination. In fact a constant
item from the GROUP BY list of a ROLLUP query can be replaced with an
Item_null_result object when a rollup row is produced.
Now the JOIN::rollup_init function wraps any constant item referenced in
the GROUP BY list of a ROLLUP query into an Item_func object of a special
class that is never detected as a constant item. This ensures creation of
fields for such constant items in temporary tables and guarantees correct
results when the result of the rollup operation first has to be written
into a temporary table, e.g. in cases when duplicate elimination is
required.
mysql-test/r/olap.result:
Added a test case for bug #24856.
mysql-test/t/olap.test:
Added a test case for bug #24856.
sql/item_func.h:
Fixed bug #24856: the result set of a ROLLUP query with DISTINCT could lack
some rollup rows (rows with NULLs for grouping attributes) if the GROUP BY
list contained constant expressions.
Introduced the class Item_func_rollup_const derived from Item_func. Objects of
this class are never detected as constant items.
We use them for wrapping constant items from the GROUP BY list of any ROLLUP
query. This wrapping allows us to ensure writing constant items into temporary
tables whenever the result of the ROLLUP operation has to be written into a
temporary table, e.g. when ROLLUP is used together with DISTINCT in the SELECT
list.
sql/sql_select.cc:
Fixed bug #24856: the result set of a ROLLUP query with DISTINCT could lack
some rollup rows (rows with NULLs for grouping attributes) if the GROUP BY
list contained constant expressions.
Now the JOIN::rollup_init function wraps any constant item referenced in
the GROUP BY list of a ROLLUP query into an Item_func object of a special
class that is never detected as a constant item. This ensures creation of
fields for such constant items in temporary tables and guarantees correct
results when the result of the rollup operation first has to be written
into a temporary table, e.g. in cases when duplicate elimination is
required.
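As an illustration of the wrapping idea (a standalone sketch only; the types below are invented for the example and are not MySQL's Item hierarchy), hiding a constant behind a wrapper that refuses to report itself as constant is what forces the temporary-table layout code to create a field for it:

    #include <cstdio>
    #include <memory>
    #include <string>
    #include <vector>

    // Illustrative expression interface: anything reported as constant
    // gets no column in the temporary table.
    struct Expr {
      virtual ~Expr() = default;
      virtual bool is_const() const = 0;
      virtual std::string name() const = 0;
    };

    struct ConstExpr : Expr {
      std::string value;
      explicit ConstExpr(std::string v) : value(std::move(v)) {}
      bool is_const() const override { return true; }
      std::string name() const override { return value; }
    };

    // Wrapper in the spirit of the Item_func_rollup_const idea: it
    // delegates to the wrapped expression but never claims to be constant.
    struct RollupConstWrapper : Expr {
      std::shared_ptr<Expr> inner;
      explicit RollupConstWrapper(std::shared_ptr<Expr> e) : inner(std::move(e)) {}
      bool is_const() const override { return false; }
      std::string name() const override { return "rollup_const(" + inner->name() + ")"; }
    };

    // Temporary-table layout: only non-constant expressions get a field.
    std::vector<std::string> tmp_table_fields(const std::vector<std::shared_ptr<Expr>>& group_by) {
      std::vector<std::string> fields;
      for (const auto& e : group_by)
        if (!e->is_const()) fields.push_back(e->name());
      return fields;
    }

    int main() {
      auto c = std::make_shared<ConstExpr>("'2007'");
      // Without the wrapper the constant gets no field, so it cannot be
      // replaced by NULL in the rollup row written to the temporary table.
      std::printf("%zu\n", tmp_table_fields({c}).size());                                    // 0
      std::printf("%zu\n", tmp_table_fields({std::make_shared<RollupConstWrapper>(c)}).size()); // 1
    }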
INSERT...ON DUPLICATE KEY UPDATE may cause error 1032:
"Can't find record in ..." when inserting into an InnoDB
table with a unique index over a partial key on an
underlying UTF-8 string field.
This error occurs because INSERT...ON DUPLICATE uses a wrong
procedure to copy string fields of multi-byte character sets
for index search.
mysql-test/t/innodb_mysql.test:
Added test case for bug #13191.
mysql-test/r/innodb_mysql.result:
Added test case for bug #13191.
sql/field.h:
Fixed bug #13191.
The Field_string::get_key_image() virtual function was overridden
to implement copying of variable-length character (UTF-8) fields.
The Field::get_key_image() function prototype has been changed to
return the byte size of the copied data.
sql/field.cc:
Fixed bug #13191.
The Field_string::get_key_image() virtual function was overridden
to implement copying of variable-length character (UTF-8) fields.
The Field::get_key_image() function prototype has been changed to
return the byte size of the copied data.
sql/key.cc:
Fixed bug #13191.
INSERT...ON DUPLICATE KEY UPDATE may cause error 1032:
"Can't find record in ...".
This error occurs because INSERT...ON DUPLICATE uses
a wrong procedure to copy field parts for index search.
key_copy() function has been fixed.
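For background, the reason the byte size matters: with a multi-byte character set the prefix length of a partial key is given in characters, but the number of bytes that must be copied for the index search depends on the actual data. A minimal sketch (plain standalone code, not the server's charset API) of computing the byte size of a character-limited UTF-8 prefix:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // Byte length of the first 'max_chars' characters of a well-formed
    // UTF-8 string. For a key prefix over a multi-byte column, this byte
    // size (not the character count) is what must be copied and reported.
    static std::size_t utf8_prefix_bytes(const std::string& s, std::size_t max_chars) {
      std::size_t bytes = 0, chars = 0;
      while (bytes < s.size() && chars < max_chars) {
        unsigned char c = static_cast<unsigned char>(s[bytes]);
        std::size_t len = (c < 0x80) ? 1 : (c < 0xE0) ? 2 : (c < 0xF0) ? 3 : 4;
        bytes += len;
        ++chars;
      }
      return bytes < s.size() ? bytes : s.size();
    }

    int main() {
      std::string s = "gro\xC3\x9F";                  // "groß": 4 characters, 5 bytes
      std::printf("%zu\n", utf8_prefix_bytes(s, 4));  // prints 5, not 4
    }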
This bug occurs when the error message length exceeds the allowed limit: the
my_error() function outputs "%s" sequences instead of long string arguments.
Formats like %-.64s are very common in errmsg.txt files; however, the
my_error() function simply ignores the precision of those formats.
mysys/my_error.c:
Fixed bug #20710.
This bug occurs when the error message length exceeds the allowed limit: the
my_error() function outputs "%s" sequences instead of long string arguments.
The my_error() function has been fixed to accept formats like %-.64s.
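For reference, this is standard printf-style precision behavior that the fixed my_error() now honors (a standalone illustration, not the my_error() implementation itself):

    #include <cstdio>
    #include <string>

    int main() {
      std::string long_name(300, 'x');   // e.g. an over-long identifier
      char msg[128];
      // "%-.64s" prints at most 64 characters of the argument, left-justified;
      // a formatter that drops the precision would try to copy all 300 bytes
      // into the fixed-size message buffer instead.
      std::snprintf(msg, sizeof(msg), "Incorrect table name '%-.64s'", long_name.c_str());
      std::printf("%s\n", msg);
    }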
mysql-test/t/alter_table.test:
Added test case for bug #20710.
mysql-test/r/alter_table.result:
Added test case for bug #20710.
Support for NULL components in row comparison was incomplete; this is now
fixed. Added support for abort_on_null in compare_row(), as in 5.x.
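A standalone sketch of the intended semantics (illustrative code, not the server's Item classes): row equality is three-valued, and abort_on_null only says that in contexts where UNKNOWN filters rows exactly like FALSE, a NULL component may short-circuit the comparison:

    #include <cstddef>
    #include <cstdio>
    #include <optional>
    #include <vector>

    enum class Tri { False, True, Unknown };

    // SQL-style equality of two rows of nullable integers (rows are assumed
    // to have equal cardinality):
    //   TRUE    if every pair of components is equal,
    //   FALSE   if some pair is definitely unequal,
    //   UNKNOWN otherwise (a NULL component and no definite inequality).
    // With abort_on_null (predicate sits in a plain WHERE, where UNKNOWN
    // filters rows just like FALSE), a NULL component may short-circuit.
    Tri row_eq(const std::vector<std::optional<int>>& a,
               const std::vector<std::optional<int>>& b,
               bool abort_on_null) {
      bool saw_null = false;
      for (std::size_t i = 0; i < a.size(); ++i) {
        if (!a[i] || !b[i]) {
          if (abort_on_null) return Tri::False;  // UNKNOWN acts like FALSE here
          saw_null = true;
        } else if (*a[i] != *b[i]) {
          return Tri::False;                     // definite inequality wins
        }
      }
      return saw_null ? Tri::Unknown : Tri::True;
    }

    int main() {
      // (1, NULL) = (1, 2) is UNKNOWN, not FALSE: the NULL component must not
      // be silently treated as "equal" or row comparisons give wrong results.
      Tri r = row_eq({1, std::nullopt}, {1, 2}, /*abort_on_null=*/false);
      std::printf("%d\n", static_cast<int>(r));  // prints 2 (Unknown)
    }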
sql/item_cmpfunc.h:
Bug#27704: incorrect comparison of rows with NULL components
Added support for abort_on_null in Item_bool_func2,
as in 5.x.
sql/item_cmpfunc.cc:
Bug#27704: incorrect comparison of rows with NULL components
Support for NULL components in row comparison was incomplete; this is now
fixed. Added support for abort_on_null in compare_row(), as in 5.x.
mysql-test/t/row.test:
Test case updated for Bug#27704 (incorrect comparison
of rows with NULL components)
mysql-test/r/row.result:
Test case updated for Bug#27704 (incorrect comparison
of rows with NULL components)
mysql-test/r/subselect.result:
Test case updated for Bug#27704 (incorrect comparison
of rows with NULL components)
Added the missing DROP privilege check on the original table for the RENAME TABLE command.
mysql-test/r/grant.result:
Fix for bug #27515: DROP privilege is not required anymore for RENAME TABLE
- test result.
mysql-test/t/grant.test:
Fix for bug #27515: DROP privilege is not required anymore for RENAME TABLE
- test case.
sql/sql_parse.cc:
Fix for bug #27515: DROP privilege is not required anymore for RENAME TABLE
- added DROP privilege check on the original table for RENAME TABLE command.
IGNORE/USE/FORCE INDEX hints were honored when choosing a FULLTEXT
index.
With this fix these hints are ignored. For regular indexes we may
perform a table scan instead of an index lookup when IGNORE INDEX is
specified. We cannot do this for FULLTEXT in NLQ mode.
mysql-test/r/fulltext.result:
A test case for bug#25951.
mysql-test/t/fulltext.test:
A test case for bug#25951.
sql/item_func.cc:
IGNORE/USE/FORCE INDEX hints should not be honored when choosing a FULLTEXT
index.
Use the proper bitmap, which is not modified by IGNORE/USE/FORCE INDEX hints.
are used as arguments of the IN predicate.
Added a function to check the compatibility of row expressions and made sure
that this function is called for Item_func_in objects by fix_length_and_dec().
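A rough sketch of what such a compatibility check amounts to (the Node type and names are invented for the example): every row argument must have the same cardinality as the left-hand row, recursively for nested rows:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Illustrative shape of an expression: a scalar, or a row of sub-expressions.
    struct Node {
      std::vector<Node> row;                 // empty => scalar
      bool is_row() const { return !row.empty(); }
    };

    // Check that 'b' has the same row structure (cardinality, recursively)
    // as 'a'; an IN predicate such as (a, b) IN ((1, 2), (3, 4, 5)) should be
    // rejected because the second list element has a different cardinality.
    bool rows_compatible(const Node& a, const Node& b) {
      if (a.is_row() != b.is_row()) return false;
      if (!a.is_row()) return true;
      if (a.row.size() != b.row.size()) return false;
      for (std::size_t i = 0; i < a.row.size(); ++i)
        if (!rows_compatible(a.row[i], b.row[i])) return false;
      return true;
    }

    int main() {
      Node scalar{};                           // a plain scalar
      Node pair{{scalar, scalar}};             // a 2-column row
      Node triple{{scalar, scalar, scalar}};   // a 3-column row
      std::printf("%d %d\n", rows_compatible(pair, pair), rows_compatible(pair, triple)); // 1 0
    }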
mysql-test/r/row.result:
Added a test case for bug #27484.
mysql-test/t/row.test:
Added a test case for bug #27484.
The MERGE engine may return incorrect values when several representations
of equal keys are present in the index. For example "groß" and "gross"
or "gross" and "gross " (trailing space), which are considered equal,
but have different lengths.
The problem was that the key length was not recalculated after a key lookup.
Only the MERGE engine is affected.
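A tiny standalone illustration of why the recalculation is needed, assuming a space-padding comparison (not the actual MyISAM key compare code): two keys can compare equal yet have different lengths, so follow-up "next same key" reads must use the length of the key that was actually found:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // Space-padded comparison: the shorter key is treated as padded with
    // blanks, so "gross" and "gross " compare equal yet differ in length.
    static bool pad_space_equal(const std::string& a, const std::string& b) {
      std::size_t n = a.size() > b.size() ? a.size() : b.size();
      for (std::size_t i = 0; i < n; ++i) {
        char ca = i < a.size() ? a[i] : ' ';
        char cb = i < b.size() ? b[i] : ' ';
        if (ca != cb) return false;
      }
      return true;
    }

    int main() {
      std::string searched = "gross";
      std::string found    = "gross ";   // what the index actually stores
      if (pad_space_equal(searched, found))
        // A follow-up "next same key" call must be given found.size() (6),
        // not searched.size() (5), or it compares against the wrong prefix.
        std::printf("equal, searched=%zu found=%zu\n", searched.size(), found.size());
    }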
myisam/mi_rkey.c:
info->lastkey gets rewritten by mi_search. Later we recalculate the found
lastkey length. This is done to make sure that mi_rnext_same gets the true,
found (not searched) lastkey length. Searched and found key lengths may be
different, for example when the searched key is "groß" and the found one is
"gross", or when a key has trailing spaces.
Unfortunately we recalculate the found lastkey length only for the first
underlying table. To recalculate the found key length for a non-first
underlying table we need to know how many key segments were used to create
this key.
When mi_rkey is called for the first underlying table of a merge table, store
the offset to the last used key segment.
Restore the last_used_keyseg variable when mi_rkey is called for a non-first
underlying table.
myisam/myisamdef.h:
Added the last_used_keyseg variable to MI_INFO. It is used by the MERGE engine
to calculate the key length.
myisammrg/myrg_rkey.c:
Pass last used key segment returned by first table key read to other
table key reads.
mysql-test/r/merge.result:
A test case for bug#24342.
mysql-test/t/merge.test:
A test case for bug#24342.
into pilot.blaudden:/home/msvensson/mysql/mysql-4.1-maint
client/mysqltest.c:
Auto merged
mysql-test/r/mysqltest.result:
Auto merged
mysql-test/t/mysqltest.test:
Auto merged
IN/BETWEEN predicates in sorting expressions.
Wrong results may occur when the select list contains an expression
with an IN/BETWEEN predicate that differs from a sorting expression
only by an additional NOT.
Added the method Item_func_opt_neg::eq to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
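In sketch form (illustrative types, not the actual Item classes), the essence of the fix is that equality of such predicates must also compare the negation flag:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Illustrative predicate node: a function name, its arguments, and a
    // negation flag ('a IN (1,2)' vs 'a NOT IN (1,2)').
    struct Pred {
      std::string func;
      std::vector<std::string> args;
      bool negated;
    };

    // An eq() that only looks at the function name and arguments would call
    // the two predicates below equal; comparing the negation flag as well is
    // the essence of the fix.
    bool pred_eq(const Pred& a, const Pred& b) {
      return a.func == b.func && a.args == b.args && a.negated == b.negated;
    }

    int main() {
      Pred in_pred    {"in", {"a", "1", "2"}, false};
      Pred not_in_pred{"in", {"a", "1", "2"}, true};
      std::printf("%d\n", pred_eq(in_pred, not_in_pred));  // 0: not the same predicate
    }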
mysql-test/r/order_by.result:
Added a test case for bug #27532.
mysql-test/t/order_by.test:
Added a test case for bug #27532.
sql/item_cmpfunc.cc:
Fixed bug #27532.
Added the method Item_func_opt_neg::eq to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
sql/item_cmpfunc.h:
Added the method Item_func_opt_neg::eq to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls are
more than it makes sense to do in typical cases).
- Don't call sel_arg->test_use_count() if we've already allocated
more than MAX_SEL_ARGS elements. The test would succeed but would take
too much time for the test suite (and not provide much value).
mysql-test/r/range.result:
BUG#26624: high mem usage (crash) in range optimizer
Pushbuild fixes: make the test go faster
mysql-test/t/range.test:
BUG#26624: high mem usage (crash) in range optimizer
Pushbuild fixes: make the test go faster
- GRANT and REVOKE statements didn't have the "updating" flag set and
thus statements with a table specified would not replicate if
slave filtering rules were turned on.
For example "GRANT ... ON test.t1 TO ..." would not replicate.
mysql-test/r/rpl_ignore_table.result:
Add test results
mysql-test/t/rpl_ignore_table.test:
Add tests
sql/sql_yacc.yy:
Pass option TL_OPTION_UPDATING to 'add_table_to_list' when parsing a
GRANT or REVOKE and a table specifier is found. This will set the
property "updating" on the table and thus the slave filtering rules will
be applied.
Without setting "updating" the statement would not be
replicated, since "it's not updating anything"; this is an optimization
to quickly skip SELECTs and similar statements.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
A table with the maximum number of key segments and a maximum-length key name
would have a corrupted .frm file, due to an incorrect calculation of the
complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
myisam/mi_open.c:
change >= to > in a comparison (i.e., error only if key_parts_in_table
really is greater than MAX_KEY * MAX_KEY_SEG)
mysql-test/r/create.result:
Add test results for bug #26642 (create index corrupts table definition in .frm)
mysql-test/t/create.test:
Add test case for bug #26642 (create index corrupts table definition in .frm)
sql/table.cc:
In create_frm(), fix formula for key_length; it was too small by (keys * 2) bytes
- Added PARAM::alloced_sel_args where we count the # of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and stop cloning
if we've already created MAX_SEL_ARGS SEL_ARG objects (see the sketch
after this list).
- Added comments about the space complexity of the SEL_ARG-graph
representation.
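A standalone sketch of the counting-and-bail idea (the node type and the cap value are illustrative, not the server's SEL_ARG structures):

    #include <cstdio>

    // Illustrative cap, in the spirit of MAX_SEL_ARGS: once tree cloning has
    // produced this many nodes, further cloning is abandoned and the caller
    // falls back to a less precise (but bounded) analysis.
    static const int kMaxClonedNodes = 16000;

    struct Node {
      int key;
      Node* left;
      Node* right;
    };

    // Clone a binary tree, charging every created node against *alloced.
    // Returns nullptr as soon as the budget is exhausted.
    Node* clone_capped(const Node* src, int* alloced) {
      if (!src) return nullptr;
      if (*alloced >= kMaxClonedNodes) return nullptr;   // shortcut: give up cloning
      ++*alloced;
      Node* n = new Node{src->key, nullptr, nullptr};
      n->left  = clone_capped(src->left, alloced);
      n->right = clone_capped(src->right, alloced);
      return n;
    }

    int main() {
      Node leaf{2, nullptr, nullptr};
      Node root{1, &leaf, nullptr};
      int alloced = 0;
      Node* copy = clone_capped(&root, &alloced);
      std::printf("cloned %d nodes, got %p\n", alloced, static_cast<void*>(copy));
      delete copy->left; delete copy;   // tiny example, trivial cleanup
    }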
mysql-test/r/range.result:
BUG#26624: Testcase
mysql-test/t/range.test:
BUG#26624: Testcase
Bug#24985 - UTF8 ENUM primary key on MEMORY using BTREE
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes
of this type unusable because it rejected many different
values as duplicates.
The problem was that multibyte character detection was tried
on the internal numeric column value. Many values were not
identified as characters. Their key value became blank filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
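A heavily simplified illustration of the failure mode described above (assumed key builder, not the actual HEAP or charset code): if the raw ordinal byte is treated as UTF-8 text, bytes that are not complete characters get blank-filled and distinct ordinals collapse to one key; treating the value as binary keeps them distinct:

    #include <cstdio>
    #include <string>

    // Simplified key builder: the stored value is a raw ordinal byte.
    // If it is (wrongly) interpreted as UTF-8 text, a byte >= 0x80 is not a
    // complete character on its own and gets blank-filled; many distinct
    // ordinals then collapse to the same key. Treating the value as binary
    // keeps every ordinal distinct.
    static std::string make_key(unsigned char ordinal, bool treat_as_utf8) {
      if (treat_as_utf8 && ordinal >= 0x80) return " ";   // blank-filled
      return std::string(1, static_cast<char>(ordinal));
    }

    int main() {
      bool as_text = true;
      std::printf("%d\n", make_key(0x81, as_text) == make_key(0x82, as_text)); // 1: duplicate!
      std::printf("%d\n", make_key(0x81, false)   == make_key(0x82, false));   // 0: distinct
    }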
mysql-test/r/heap_btree.result:
Bug#24985 - UTF8 ENUM primary key on MEMORY using BTREE
causes incorrect duplicate entries
Added test result.
mysql-test/t/heap_btree.test:
Bug#24985 - UTF8 ENUM primary key on MEMORY using BTREE
causes incorrect duplicate entries
Added test.
sql/ha_heap.cc:
Bug#24985 - UTF8 ENUM primary key on MEMORY using BTREE
causes incorrect duplicate entries
Set key segment charset to my_charset_bin for ENUM and SET
columns.
Not resetting the data pointer to 0 causes a wrong (large) length
to be read from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset(),
which is called when storing a NULL value into
the column.
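A minimal sketch of why the reset matters (simplified layout, not the actual Field_blob record format): a blob-style slot stores a length followed by a data pointer, and storing NULL without clearing the slot leaves a stale length for later readers to misinterpret:

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    // A blob-style column as stored in a record buffer: 4 length bytes
    // followed by a pointer to the data (layout simplified for illustration).
    struct BlobSlot {
      unsigned char bytes[4 + sizeof(char*)];

      void store(const char* data, std::uint32_t len) {
        std::memcpy(bytes, &len, 4);
        std::memcpy(bytes + 4, &data, sizeof(char*));
      }
      // The fix in spirit: storing NULL must also clear length and pointer,
      // otherwise the slot still carries whatever the previous row left there.
      void reset() { std::memset(bytes, 0, sizeof(bytes)); }

      std::uint32_t length() const {
        std::uint32_t len;
        std::memcpy(&len, bytes, 4);
        return len;
      }
    };

    int main() {
      BlobSlot slot;
      const char* old_data = "some previous geometry value";
      slot.store(old_data, 0x7fffffff);   // pretend a previous row left a huge length
      // Without reset(), code that packs the NULL row would read 0x7fffffff
      // here and try to allocate that much memory.
      slot.reset();
      std::printf("%u\n", slot.length()); // 0 after a proper reset
    }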
mysql-test/r/gis.result:
Bug #27164: test case
mysql-test/t/gis.test:
Bug #27164: test case
sql/field.h:
Bug #27164: not resetting the data pointer
to 0 causes a wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fix for the cast( AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method,
as is usually done in other classes.
Should be fixed more radically in 5.0.
mysql-test/r/type_datetime.result:
result added
mysql-test/t/type_datetime.test:
testcase
sql/item_timefunc.h:
added double conversion to Item_datetime_typecast
Shift the ID values up into a range where they will not collide with those
which we use for real data when we fill the system tables.
Will be merged up to 5.0, where it is needed for 5.0.38.
mysql-test/r/help.result:
Fix the result file according to the changed ID values in "help.test".
mysql-test/t/help.test:
Now that (at least in 5.0) the system tables are filled with real data,
inserting rows with ID values 1 .. 5 will fail in release build tests (it did in 5.0.38),
as it should already have done in customer installations.
Shift the ID values up into a high area where they will not conflict,
and also make them distinct for the different kinds of values (i.e., unique throughout the test).
No change to the logic.
Using a MEMORY table BTREE index for scanning for updatable rows
could lead to an infinite loop.
Every time a key was inserted into a BTREE index, the position
in the index scan was cleared. The search started from the
beginning and found the same key again.
Now we do not clear the position on key insert any more.
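As an analogy for why keeping the scan position is safe (standard C++ container semantics, not the heap engine itself): inserting into a node-based ordered container does not invalidate existing iterators, so a scan can carry its cursor across inserts instead of restarting the search:

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
      // Analogy for an index scan cursor: iterators into a std::multimap stay
      // valid across inserts, so the scan can keep its position instead of
      // restarting from the first matching key (which is what made the
      // original scan revisit rows it had already handled).
      std::multimap<int, std::string> index = {{1, "a"}, {2, "b"}, {3, "c"}};

      auto pos = index.find(2);            // saved scan position
      index.insert({2, "b-updated"});      // key insert during the scan
      // 'pos' is still valid and still points at the row we were on;
      // no need to reset and search for key 2 again from the start.
      std::printf("%d %s\n", pos->first, pos->second.c_str());
    }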
heap/hp_write.c:
Bug#26996 - Update of a Field in a Memory Table ends with wrong result
Removed the index-scan-breaking nulling of last_pos.
The comment behind this line ("For heap_rnext/heap_rprev")
was misleading. It should have been "Breaks heap_rnext/heap_rprev".
mysql-test/r/heap_btree.result:
Bug#26996 - Update of a Field in a Memory Table ends with wrong result
Added test result.
mysql-test/t/heap_btree.test:
Bug#26996 - Update of a Field in a Memory Table ends with wrong result
Added test.
Bug#26231 - select count(*) on myisam table returns wrong value
when index is used
When the table contained TEXT columns with empty contents
('', zero length, but not NULL) _and_ strings starting with
control characters like tabulator or newline, the empty values
were not found in a "records in range" estimate. Hence count(*)
missed these records.
The reason was a different set of search flags used for key
insert and key range estimation.
I decided to fix the set of flags used in range estimation.
Otherwise millions of databases around the world would require
a repair after an upgrade.
The consequence is that the manual, which claims that TEXT columns
are compared with "end space padding", must be fixed. This is true
for CHAR/VARCHAR but wrong for TEXT. See also bug 21335.
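A small standalone illustration of why insert and range estimation must compare the same way (simplified comparators, not the MyISAM key compare): with end-space padding an empty string sorts after a string that starts with a tab, while without padding it sorts before it, so a key placed under one rule is looked for in the wrong region under the other:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // Compare with end-space padding: the shorter string behaves as if padded
    // with blanks up to the length of the longer one.
    static int cmp_pad_space(const std::string& a, const std::string& b) {
      std::size_t n = a.size() > b.size() ? a.size() : b.size();
      for (std::size_t i = 0; i < n; ++i) {
        unsigned char ca = i < a.size() ? static_cast<unsigned char>(a[i]) : ' ';
        unsigned char cb = i < b.size() ? static_cast<unsigned char>(b[i]) : ' ';
        if (ca != cb) return ca < cb ? -1 : 1;
      }
      return 0;
    }

    // Compare without padding: a proper prefix sorts first.
    static int cmp_no_pad(const std::string& a, const std::string& b) {
      return a.compare(b) < 0 ? -1 : (a == b ? 0 : 1);
    }

    int main() {
      std::string empty = "";
      std::string tabbed = "\tx";
      // Padded: '' is treated as spaces, and ' ' (0x20) > '\t' (0x09).
      std::printf("%d\n", cmp_pad_space(empty, tabbed));  //  1: '' sorts after "\tx"
      // Unpadded: '' is a prefix of everything and sorts first.
      std::printf("%d\n", cmp_no_pad(empty, tabbed));     // -1: '' sorts before "\tx"
    }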
myisam/mi_range.c:
Bug#26231 - select count(*) on myisam table returns wrong value
when index is used
Added SEARCH_UPDATE to the search flags so that it compares
like write/update/delete operations do. Only then does it expect
the keys at the place where they have been inserted.
myisam/mi_search.c:
Bug#26231 - select count(*) on myisam table returns wrong value
when index is used
Added some comments to explain how _mi_get_binary_pack_key()
works.
mysql-test/r/myisam.result:
Bug#26231 - select count(*) on myisam table returns wrong value
when index is used
Added test result.
mysql-test/t/myisam.test:
Bug#26231 - select count(*) on myisam table returns wrong value
when index is used
Added a test.
differences in tables
Certain merge tables were wrongly reported as having incorrect definition:
- Some fields that are 1 byte long (e.g. TINYINT, CHAR(1)) might
be internally cast (in certain cases) to a different type at the
storage engine layer. (affects 4.1 and up)
- If tables in a merge (and the MERGE table itself) had a short VARCHAR column (less
than 4 bytes) and at least one (but not all) tables were ALTER'ed (even to an
identical table: ALTER TABLE xxx ENGINE=yyy), table definitions went out of
sync. (affects 4.1 only)
This is fixed by relaxing the check for underlying table conformance and setting the
field type to FIELD_TYPE_STRING in case a VARCHAR is shorter than 4 bytes
when a table is created.
myisam/mi_create.c:
Added a comment.
mysql-test/r/merge.result:
A test case for bug#26881.
mysql-test/t/merge.test:
A test case for bug#26881.
sql/ha_myisam.cc:
Relaxed some checks performed by check_definition():
As comparison of fulltext keys (and key segments) is not yet implemented,
only return an error in case one of the keys is fulltext and the other is not.
Otherwise, if both keys are fulltext, accept them as is.
As comparison of spatial keys (and key segments) is not yet implemented,
only return an error in case one of the keys is spatial and the other is not.
Otherwise, if both keys are spatial, accept them as is.
Added a workaround to handle the situation when a field is cast from FIELD_SKIP_ZERO
to FIELD_NORMAL. This can happen only in case the field length is 1 and the row
format is fixed.
sql/sql_parse.cc:
When a table that has a VARCHAR field shorter than 4 bytes is created, the field
type is set to FIELD_TYPE_VAR_STRING. Later, when the table is modified using
ALTER TABLE, the field type is changed to FIELD_TYPE_STRING (see Field_string::type).
That means the HA_OPTION_PACK_RECORD flag might be lost and thus null_bit might
be shifted by ALTER TABLE; in other words, ALTER TABLE doesn't create a 100%
identical table definition.
This is usually not a problem, since when a table is created/altered, the
definition on the storage engine layer is based on the one that is passed from
the SQL layer. But it is a problem for the MERGE engine - null_bit is shifted when
a table (merge or underlying) is altered.
Set the field type to FIELD_TYPE_STRING in case a FIELD_TYPE_VAR_STRING is shorter
than 4 bytes when a table is created, as is done in Field::type.
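A minimal sketch of the normalization idea (illustrative enum and check, not the server's field classes): map a too-short VAR_STRING to STRING when the definition is built, so the CREATE-time definition and the one reported after ALTER agree and the conformance check compares like with like:

    #include <cstdio>

    enum class FieldType { kString, kVarString };

    struct FieldDef {
      FieldType type;
      unsigned  length;
    };

    // Normalization applied when the definition is built: a VAR_STRING shorter
    // than 4 bytes is stored as STRING, mirroring what the field object itself
    // reports later (so CREATE-time and post-ALTER definitions stay identical).
    FieldDef normalize(FieldDef f) {
      if (f.type == FieldType::kVarString && f.length < 4) f.type = FieldType::kString;
      return f;
    }

    bool conforms(const FieldDef& merge_table, const FieldDef& underlying) {
      FieldDef a = normalize(merge_table), b = normalize(underlying);
      return a.type == b.type && a.length == b.length;
    }

    int main() {
      FieldDef created {FieldType::kVarString, 3};   // as first created
      FieldDef altered {FieldType::kString,    3};   // as reported after ALTER TABLE
      std::printf("%d\n", conforms(created, altered)); // 1: definitions now match
    }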