The server sends the number of columns to the client. It uses a
limited "fast" function for that instead of the general one. This
fast function cannot send numbers larger than 2 bytes.
This causes the client to expect a smaller number of columns than
the server actually sends.
The client writes outside of the allocated memory buffer
as a result.
Fixed the server to use the general function to send column
count.
Fixed the client to check the column count before writing
column data.
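For illustration only, a minimal sketch of the client-side check, with
hypothetical names and types (this is not the actual client code):

  /* Reject a column count larger than what the result buffer was
     allocated for, instead of writing past it. */
  #include <cstdint>

  struct ColumnBuffer {
    std::uint32_t allocated_columns;  /* columns the buffer was sized for */
    /* ... column metadata storage ... */
  };

  bool store_column_count(ColumnBuffer *buf, std::uint32_t column_count) {
    if (column_count > buf->allocated_columns)
      return false;                   /* treat as a malformed packet */
    return true;                      /* safe to write column data */
  }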
- When returning metadata for scalar subqueries the actual type of the
column was calculated based on the value type, which limits the actual
type of a scalar subselect to the set of (currently) three basic types:
integer, double precision or string. As a result, columns of types
other than the basic ones (e.g. date/time) are reported as being of
the corresponding basic type.
Fixed by storing/returning information for the column type in addition
to the result type.
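The idea, sketched with simplified, hypothetical types (the server's
item and metadata classes are more involved):

  /* Keep the real column (field) type alongside the coarse result type
     instead of deriving the former from the latter. */
  enum ResultType { INT_RESULT, REAL_RESULT, STRING_RESULT };
  enum FieldType  { TYPE_LONG, TYPE_DOUBLE, TYPE_VARCHAR, TYPE_DATETIME };

  struct SubselectColumn {
    ResultType result_type;  /* one of the three basic result types */
    FieldType  field_type;   /* the real column type, e.g. TYPE_DATETIME */
  };

  /* Before: metadata was derived from result_type only, so a DATETIME
     column was reported as a string type.  After: field_type is stored
     when the subquery is set up and is returned in the metadata. */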
The parser allocates an Item_field for references by name in ORDER BY
expressions. Such references, however, may point not only to an
Item_field in the select list (or to a table column) but also to an
arbitrary Item.
This causes Item_field::fix_fields to report an error about a missing
column.
The fix substitutes an Item_ref for the Item_field when the reference
does not point to an Item_field.
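A rough sketch of the substitution, using a simplified class hierarchy
rather than the server's:

  struct Item       { virtual ~Item() {} };
  struct Item_field : Item {};
  struct Item_ref   : Item { Item *ref; explicit Item_ref(Item *r) : ref(r) {} };

  /* Resolve an ORDER BY name: keep the item if the target really is a
     field, otherwise wrap the arbitrary expression in a reference. */
  Item *resolve_order_by_name(Item *found) {
    if (dynamic_cast<Item_field *>(found))
      return found;
    return new Item_ref(found);
  }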
If the elements of a non-top-level IN subquery were accessed by an
index and the subquery result set included a NULL value, then the
quantified predicate that contained the subquery was evaluated to
NULL when it should have returned a non-NULL value.
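For reference, the intended three-valued semantics as an
assumption-level sketch (this is not the server's evaluator):

  /* "x IN (subquery)" must distinguish FALSE from UNKNOWN: a NULL in
     the subquery result only makes the answer UNKNOWN when no match
     exists. */
  enum TriValue { TRI_FALSE, TRI_TRUE, TRI_NULL };

  TriValue eval_in(bool found_match, bool subquery_has_null) {
    if (found_match)       return TRI_TRUE;
    if (subquery_has_null) return TRI_NULL;  /* unknown, not plain FALSE */
    return TRI_FALSE;
  }
  /* The bug made the indexed lookup path report TRI_NULL even in cases
     where a definite (non-NULL) answer was available. */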
REPAIR TABLE could crash the server if there is not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but also all statements that create an index by sort:
repair by sort, parallel repair, bulk insert.
Now an error is returned if there is not sufficient memory to store
at least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
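The shape of the added guard, sketched with hypothetical names (the
real check lives in the MyISAM sort code):

  #include <cstddef>

  /* The sort buffer must have room for at least one key per BUFFPEK;
     otherwise fail with an error instead of working past the buffer.
     Buffers already allocated must be freed on this error path. */
  bool sort_buffer_big_enough(std::size_t sort_buffer_size,
                              std::size_t key_length,
                              std::size_t buffpek_count) {
    return buffpek_count != 0 &&
           sort_buffer_size / buffpek_count >= key_length;
  }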
When resolving unqualified name references MySQL did not check the
item type of the referenced item. Thus e.g. a string literal item,
which by convention has a name equal to its string value, could also
work as a reference to a SELECT list item or a table field.
Fixed by allowing only Item_ref or Item_field items to be referenced
by (unqualified) name.
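A simplified sketch of the resolution rule (hypothetical item model,
not the server's name resolution code):

  #include <string>
  #include <vector>

  struct SelectItem {
    std::string name;
    enum Kind { FIELD, REF, LITERAL, OTHER } kind;
  };

  /* Only field and reference items may be matched by unqualified name;
     e.g. the literal 'foo', whose name is "foo", must not match. */
  SelectItem *find_by_name(std::vector<SelectItem> &select_list,
                           const std::string &name) {
    for (auto &item : select_list) {
      if (item.name != name) continue;
      if (item.kind == SelectItem::FIELD || item.kind == SelectItem::REF)
        return &item;
    }
    return nullptr;
  }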
hangs on Linux
If REPAIR TABLE ... USE_FRM is issued for a table that is located in a
database other than the default one, a server crash could happen.
In reopen_name_locked_table, take the database name from table_list
(user-specified or default database) instead of from thd (default
database).
Affects 4.1 only.
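The gist of the change, with a hypothetical, simplified layout of the
structures involved:

  #include <string>

  struct TableList { std::string db; std::string table_name; };
  struct Thd       { std::string current_db; };

  /* Build the lookup key for the reopened table from the name carried
     by the request: table_list.db already holds the user-specified or
     default database, so the connection's current database is not
     consulted any more. */
  std::string table_cache_key(const Thd & /* unused now */,
                              const TableList &tl) {
    return tl.db + "/" + tl.table_name;
  }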
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes
but also the data file.
Non-quick parallel repair works with one thread per index. The first
of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. The other
threads, however, did not know the new record positions and put the
positions from the old data file into the index.
In the new design there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
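To illustrate the new data flow, a sketch using standard C++ threads
rather than the MyISAM io cache API (all names are hypothetical):

  #include <condition_variable>
  #include <cstddef>
  #include <cstdint>
  #include <mutex>
  #include <string>
  #include <vector>

  struct RepairedRecord {
    std::uint64_t new_pos;  /* offset of the record in the new data file */
    std::string data;       /* record image; each index thread unpacks it
                               into its own thread-specific buffer */
  };

  class SharedRecordStream {
    std::mutex m;
    std::condition_variable cv;
    std::vector<RepairedRecord> records;  /* appended by the writer */
    bool done = false;

  public:
    /* called by the first thread, which also writes the new data file */
    void append(RepairedRecord r) {
      { std::lock_guard<std::mutex> lk(m); records.push_back(std::move(r)); }
      cv.notify_all();
    }
    void finish() {
      { std::lock_guard<std::mutex> lk(m); done = true; }
      cv.notify_all();
    }
    /* called by each index thread; every thread keeps its own cursor
       and therefore sees every record with its NEW position */
    bool read(std::size_t &cursor, RepairedRecord &out) {
      std::unique_lock<std::mutex> lk(m);
      cv.wait(lk, [&] { return cursor < records.size() || done; });
      if (cursor >= records.size())
        return false;       /* writer finished and stream is drained */
      out = records[cursor++];
      return true;
    }
  };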
Though this is not a storage-engine-specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the reason to
add a test case to ndb_update.test. As a result of the bug, different
bad things could happen:
BDB removed duplicate rows, which is not expected.
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
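A sketch of the notification, with a hypothetical handler interface
(the real engine API differs):

  #include <vector>

  struct StorageHandler {
    bool ignore_dup_key = false;
    void notify_ignore(bool on) { ignore_dup_key = on; }
  };

  void multi_table_update(std::vector<StorageHandler *> &handlers,
                          bool ignore) {
    if (ignore)                            /* this step was missing */
      for (auto *h : handlers) h->notify_ignore(true);
    /* ... perform the updates ... */
    if (ignore)
      for (auto *h : handlers) h->notify_ignore(false);
  }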
The fix for bug 7894 replaces field(s) in a non-aggregate function with
an item reference if such a field was specified in the GROUP BY clause,
in order to get a correct result.
When ROLLUP is involved this led to a wrong result, because the value
of such a field is obtained through a copy function and the copying
happens after the function is evaluated.
Such a replacement isn't needed if grouping is also done by that
function. The change_group_ref() function is now not called for a
function present in the group list.
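The condition, sketched with a simplified item model (comparing by
printed form is an assumption made for the sketch):

  #include <algorithm>
  #include <string>
  #include <vector>

  struct FuncItem { std::string printed; /* textual form of the function */ };

  bool in_group_list(const std::vector<std::string> &group_list,
                     const FuncItem &func) {
    return std::find(group_list.begin(), group_list.end(), func.printed)
           != group_list.end();
  }

  void maybe_change_group_ref(const std::vector<std::string> &group_list,
                              FuncItem &func) {
    if (in_group_list(group_list, func))
      return;  /* grouping is done by this very function: leave it alone */
    /* ... replace GROUP BY fields inside func with item references ... */
  }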
RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer in order to work.
Sometimes the optimizer doesn't set those properly.
Here we decided just to add code to check that the parameters are
correct. We hope to fix the optimizer at some point.
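The kind of defensive check meant here, sketched with hypothetical
flag names:

  enum SearchFlag { FLAG_WITHIN, FLAG_CONTAINS, FLAG_INTERSECT,
                    FLAG_EXACT, FLAG_OTHER };

  /* RTree search supports only a few modes; reject anything else
     rather than proceeding with parameters that only make sense for
     BTree. */
  bool rtree_params_ok(SearchFlag flag) {
    switch (flag) {
      case FLAG_WITHIN: case FLAG_CONTAINS:
      case FLAG_INTERSECT: case FLAG_EXACT:
        return true;
      default:
        return false;
    }
  }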
A crash may happen when selecting from a merge table that has
underlying tables with fewer indexes than the merge table itself.
If the number of keys in the merge table is not bigger than the
requested key number, return an error.
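The guard amounts to a bounds check on the key number; a sketch with
hypothetical names:

  #include <cstdint>

  /* Refuse an index operation when the requested key number is out of
     range for the table as opened, instead of crashing on a
     nonexistent key definition. */
  int check_key_number(std::uint32_t keys_in_table,
                       std::uint32_t requested_key) {
    if (keys_in_table <= requested_key)
      return -1;  /* error: no such key at this level */
    return 0;
  }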
an ALL/ANY quantified subquery in HAVING.
The Item::split_sum_func2 method should not create an Item_ref
for objects of any class derived from Item_subselect.
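A sketch of the rule, again with a simplified class hierarchy rather
than the server's:

  struct Item           { virtual ~Item() {} };
  struct Item_subselect : Item {};
  struct Item_ref       : Item { Item *ref; explicit Item_ref(Item *r) : ref(r) {} };

  /* split_sum_func2-style rewriting must leave subselect items alone
     rather than wrapping them in a reference. */
  Item *maybe_wrap_in_ref(Item *item) {
    if (dynamic_cast<Item_subselect *>(item))
      return item;
    return new Item_ref(item);
  }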