IGNORE/USE/FORCE INDEX hints were honored when choosing FULLTEXT
index.
With this fix these hints are ignored. For regular indexes we may
perform a table scan instead of an index lookup when IGNORE INDEX is
specified. We cannot do this for FULLTEXT in NLQ mode.
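For illustration (hypothetical table, not from the original report):

  CREATE TABLE articles (id INT PRIMARY KEY, body TEXT,
                         FULLTEXT KEY ft_body (body)) ENGINE=MyISAM;
  SELECT id FROM articles IGNORE INDEX (ft_body)
  WHERE MATCH (body) AGAINST ('database');

Before the fix the hint was honored, which could leave no usable
FULLTEXT index for MATCH ... AGAINST; the hint is now ignored here.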
are used as arguments of the IN predicate.
Added a function to check the compatibility of row expressions and made
sure that it is called for Item_func_in objects by fix_length_and_dec().
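A sketch of the statement shape this guards against (hypothetical
values; the original test case may differ):

  SELECT ROW(1, 2) IN (ROW(1, 2, 3), ROW(4, 5, 6));

The compatibility check now reports the structural mismatch between the
row operands as an error instead of letting it cause trouble later.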
IN/BETWEEN predicates in sorting expressions.
Wrong results could occur when the select list contains an expression
with an IN/BETWEEN predicate that differs from a sorting expression
only by an additional NOT.
Added the method Item_func_opt_neg::eq to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
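For illustration (hypothetical table), a query where the two
expressions differ only by NOT and must not be treated as equal:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  SELECT a, a IN (1,2) FROM t1 ORDER BY a NOT IN (1,2);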
Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls are
  more than it makes sense to do in typical cases)
- Don't call sel_arg->test_use_count() if we've already allocated
  more than MAX_SEL_ARGS elements. The test will succeed but will take
  too much time for the test suite (and not provide much value).
- GRANT and REVOKE statements didn't have the "updating" flag set and
  thus statements with a table specified would not replicate if
  slave filtering rules were turned on.
For example "GRANT ... ON test.t1 TO ..." would not replicate.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
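For instance, with a slave filtering option such as
--replicate-wild-do-table=test.% in effect (hypothetical account):

  GRANT SELECT ON test.t1 TO 'user1'@'localhost';

This statement previously did not replicate when such filtering rules
were on, because it lacked the "updating" flag.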
A table with the maximum number of key segments and a maximum-length
key name would have a corrupted .frm file, due to an incorrect
calculation of the complete key length. Now the key length is computed
correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
- Added PARAM::alloced_sel_args where we count the # of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and not do any more cloning
  if we've already created MAX_SEL_ARGS SEL_ARG objects during cloning
  (see the sketch after this list).
- Added comments about space complexity of SEL_ARG-graph
representation.
- Define Sql_alloc::operator new() as throw() so that the C++ compiler
  handles NULL return values
(there is no test case as there is no portable way to set a limit on
the amount of memory that a process can allocate)
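A hedged sketch of the query shape that drives SEL_ARG cloning
(hypothetical table; real cases need far more OR branches):

  CREATE TABLE t1 (a INT, b INT, c INT, KEY k (a, b, c));
  SELECT * FROM t1
  WHERE (a = 1 OR a = 2 OR a = 3)
    AND (b = 1 OR b = 2 OR b = 3)
    AND (c = 1 OR c = 2 OR c = 3);

Each AND of OR-lists over a composite key multiplies the number of
ranges the optimizer must represent; scaled up, such conditions
formerly made tree cloning grow without bound.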
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes of
this type unusable because many different values were rejected as
duplicates.
The problem was that multibyte character detection was tried on the
internal numeric column value. Many values were not identified as
characters, so their key values became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
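A minimal sketch of the affected setup (hypothetical table):

  CREATE TABLE t1 (
    e ENUM('a','b') CHARACTER SET utf8,
    UNIQUE KEY (e) USING BTREE
  ) ENGINE=MEMORY;
  INSERT INTO t1 VALUES ('a');
  INSERT INTO t1 VALUES ('b');  -- formerly could fail with a spurious
                                -- duplicate-key error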
to 0 causes a wrong (large) length to be read from the row in
_mi_calc_blob_length() when storing NULL values in (e.g.) POINT
columns. This large length is then used to allocate a block of memory
that (on some OSes) causes trouble.
Fixed by calling the base class's Field_blob::reset() from
Field_geom::reset(), which is called when storing a NULL value into
the column.
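The affected operation, for illustration (hypothetical table):

  CREATE TABLE t1 (p POINT);
  INSERT INTO t1 VALUES (NULL);  -- formerly left a stale BLOB length in
                                 -- the row, later misread by
                                 -- _mi_calc_blob_length()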
Fix for the cast( AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method as is
usually done in other classes.
Should be fixed more radically in 5.0.
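The operation in question, for illustration:

  SELECT CAST('2007-04-15 10:20:30' AS DATETIME) + 0;
  -- expected numeric form: 20070415102030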
- Problem: data separators were copied to a fixed-size buffer on the
  stack; memcpy was used without bounds checking, so a server crash
  could result if a long FIELDS ENCLOSED BY value, etc., was given
  (see the sketch below)
- Fix: write the separators directly, instead of copying them to a
  buffer first (in select_export::send_data())
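A hedged sketch of the trigger; the separator just has to be longer
than the old stack buffer (exact threshold not stated here; TERMINATED
BY is used since ENCLOSED BY is normally restricted to one character):

  SELECT 1 INTO OUTFILE '/tmp/out.txt'
    FIELDS TERMINATED BY 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';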
TABLE ... WRITE".
CPU hogging occurred when a connection that had to wait for a table
lock was serviced by a thread that had previously serviced a connection
that was killed (note that connections can reuse threads if the thread
cache is enabled).
One possible scenario which exposed this problem was when a thread that
provided a binlog dump to a replication slave was implicitly/automatically
killed when the same slave reconnected and started pulling data through
a different thread/connection.
In 5.* versions memory hogging was added to the CPU hogging. Moreover,
in those versions the problem also occurred when one killed a particular
query in a connection (using KILL QUERY) and later this connection had
to wait for some table lock.
This problem was caused by the fact that the thread-specific
mysys_var::abort variable, which indicates that waiting operations on
the mysys layer should be aborted (this includes waiting for table
locks), was set by the kill operation but was never reset back. So this
value was "inherited" by the following statements or even by other
connections (which reused the same physical thread). Such a discrepancy
between this variable and the THD::killed flag broke logic on the SQL
layer and caused the CPU and memory hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test case associated with this patch since it is hard to
test for memory/CPU hogging conditions in our test suite.
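A hedged walk-through of the KILL QUERY variant described above
(hypothetical table):

  -- connection 1:
  LOCK TABLE t1 WRITE;
  -- connection 2, killed with KILL QUERY <id> while it waits:
  SELECT * FROM t1;
  -- before this patch, later statements in connection 2 (or in another
  -- connection reusing the same physical thread) that had to wait for
  -- a table lock could hog the CPU, since mysys_var::abort stayed set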
my_seek: Assertion `fd != -1' failed"
In difficult optimize/repair situations the server could crash.
Under some circumstances the server retries an optimize/repair
with more elaborate options. But it did not check if the first
attempt failed so badly that a second one must not be tried.
This could happen when a new data file had been created but it was not
possible to open it. In this case the repair leaves behind a table with
a closed data file, which must not be used for another repair attempt.
We now detect the closed data file and do not try another repair
attempt in this situation.
No test case. The required table corruption cannot be repeated easily.
There is a test program attached to bug 25433.
differences in tables
Certain MERGE tables were wrongly reported as having an incorrect
definition:
- Some fields that are 1 byte long (e.g. TINYINT, CHAR(1)) might be
  internally cast (in certain cases) to a different type on the
  storage engine layer. (affects 4.1 and up)
- If the tables in a merge (and the MERGE table itself) had a short
  VARCHAR column (less than 4 bytes) and at least one (but not all) of
  the tables was ALTER'ed (even to an identical table: ALTER TABLE xxx
  ENGINE=yyy), the table definitions went out of sync. (affects 4.1
  only)
This is fixed by relaxing the check for underlying table conformance
and by setting the field type to FIELD_TYPE_STRING in case the VARCHAR
is shorter than 4 bytes when a table is created.
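A hedged sketch of the 4.1 VARCHAR case (hypothetical tables):

  CREATE TABLE t1 (v VARCHAR(3)) ENGINE=MyISAM;
  CREATE TABLE t2 (v VARCHAR(3)) ENGINE=MyISAM;
  ALTER TABLE t2 ENGINE=MyISAM;  -- no-op ALTER could desynchronize
                                 -- the definitions
  CREATE TABLE m1 (v VARCHAR(3)) ENGINE=MERGE UNION=(t1, t2);
  SELECT * FROM m1;  -- formerly reported an incorrect definition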
Using INSERT DELAYED on MERGE tables could lead to table corruption.
The manual lists a couple of storage engines that can be used with
INSERT DELAYED. MERGE is not in this list, but the attempt to use it
anyway was not rejected.
This bug was not detected earlier as it can work under special
circumstances, most notably low concurrency.
To be safe, this patch rejects any attempt to use INSERT DELAYED on
MERGE tables.
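The now-rejected pattern, for illustration (hypothetical tables):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1) INSERT_METHOD=FIRST;
  INSERT DELAYED INTO m1 VALUES (1);  -- rejected instead of risking
                                      -- corruption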
When the ORDER BY clause gets fixed, it is allowed to search the
current item_list in order to find aliased fields and expressions. This
is OK for a SELECT but wrong for an UPDATE statement. If the ORDER BY
clause contains a non-existing field which is mentioned in the UPDATE
set list, the server will crash due to the use of a non-existing (0x0)
field.
Now, when an Item_field is being fixed, searching the item list for
aliased expressions and fields is allowed only for SELECTs.
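A hedged sketch of the crashing shape (hypothetical table; the
original test case may differ):

  CREATE TABLE t1 (a INT);
  UPDATE t1 SET b = 1 ORDER BY b;  -- 'b' exists only in the SET list;
                                   -- resolving the ORDER BY against the
                                   -- item list formerly yielded a 0x0
                                   -- field and a crash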
to the client only after the binlog write and engine commit.
No test case for this bug: to reproduce it, we need to "kill -9"
mysqld, which we cannot do in the test suite. But I tested by hand.
Having the maybe_null flag unset for geometry/spatial functions leads
to wrong results from Item_func_isnull::val_int().
Fix: set the maybe_null flag and add is_null() methods.
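A hedged example of the symptom, assuming GeomFromText() yields NULL
for unparsable input:

  SELECT ISNULL(GeomFromText('not well-known text'));
  -- with maybe_null unset, ISNULL() could wrongly report 0 here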