In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. in the case when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an 'ok'
nor an 'error' packet could be sent back to the client. This
condition led to a hanging client when using a 5.0 server, or
to an assertion failure in 5.1.
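A hedged sketch of the kind of statement that could trigger this
(table and column names are illustrative):

  INSERT IGNORE INTO t1 (a)
    SELECT a FROM t2 GROUP BY a;
  -- If the internal HEAP temporary table for the GROUP BY is converted to
  -- MyISAM and that table then hits its MAX_ROWS limit, the error used to
  -- be downgraded to a warning by IGNORE, so the client received neither
  -- an 'ok' nor an 'error' packet.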
'INSERT ... SELECT' statements
The code that produces result rows expected that a duplicate-row
error could not occur in INSERT ... SELECT statements with
unfulfilled WHERE conditions. It may happen, however, if the
SELECT list contains only aggregate functions.
Fixed by checking whether an error occurred before trying to send EOF
to the client.
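For illustration (t1.a is a NOT NULL unique column; names are made up):

  INSERT INTO t1 (a) SELECT COUNT(*) FROM t2 WHERE 1 = 0;
  INSERT INTO t1 (a) SELECT COUNT(*) FROM t2 WHERE 1 = 0;
  -- An aggregate-only SELECT list produces exactly one row (COUNT(*) = 0)
  -- even though the WHERE condition matches nothing, so the second INSERT
  -- hits a duplicate-key error that the row-producing code did not expect.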
led to creating a corrupted index.
During execution of a CREATE ... SELECT SQL_BUFFER_RESULT statement the
engine->start_bulk_insert function was called twice. On the first call
MyISAM disabled all non-unique indexes, and on the second call it decided
not to re-enable them because all indexes were already disabled.
Because of this no indexes were actually created during CREATE TABLE, thus
producing a crashed table.
Now the select_insert class has an is_bulk_insert_mode flag which prevents
calling the start_bulk_insert function twice.
The flag is set in the select_create::prepare and select_insert::prepare2
functions and in the select_insert class constructor.
The flag is reset in the select_insert::send_eof function.
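A hypothetical statement of the affected form:

  CREATE TABLE t2 (a INT, KEY (a))
    SELECT SQL_BUFFER_RESULT a FROM t1;
  -- start_bulk_insert used to be called once for buffering the result and
  -- once more for the insert itself; the non-unique index on t2.a was
  -- disabled on the first call and never re-enabled, so t2 ended up crashed.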
SELECT statement itself returns empty.
As a result of this bug 'SELECT AGGREGATE_FUNCTION(fld) ... GROUP BY'
could return one row instead of an empty result set.
When the GROUP BY clause only has fields of constant tables
(with a single row), the optimizer deletes the group_list.
After that we lose the information about whether we had a
GROUP BY clause. This information matters, because
SELECT min(x) FROM empty_table; and
SELECT min(x) FROM empty_table GROUP BY y; have to return
different results: the first query should return one row,
the second an empty result set.
So here we add the 'group_optimized_away' flag to remember the case
when a GROUP BY exists in the query and is removed
by the optimizer, and check this flag in end_send_group().
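Concretely, assuming empty_table contains no rows:

  SELECT min(x) FROM empty_table;             -- one row containing NULL
  SELECT min(x) FROM empty_table GROUP BY y;  -- empty result set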
INSERT into a table from a SELECT on the same table
with ORDER BY and LIMIT was inserting different data
than the standalone SELECT ... ORDER BY ... LIMIT returns.
One part of the patch for bug #9676 improperly pushed the
LIMIT down to the temporary table in the presence of an ORDER BY
clause.
That part has been removed.
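A sketch of the affected shape (names are illustrative):

  INSERT INTO t1 (a) SELECT a FROM t1 ORDER BY a DESC LIMIT 5;
  -- The SELECT result is buffered in a temporary table because t1 is both
  -- target and source; applying the LIMIT while filling that table, before
  -- the ORDER BY took effect, could insert rows other than the ones the
  -- standalone SELECT returns.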
The patch for WL 1563 added a new duplicate-key error message so that the
key name could be provided instead of the key number. But a new error code
was used for the new message even though the code did not need to change.
This could cause unnecessary problems for applications that used the old
ER_DUP_ENTRY error code to detect duplicate-key errors.
In index searches MySQL was not explicitly
suppressing warnings. If the context
happened to enable warnings (e.g. INSERT ...
SELECT), the warnings resulting from converting
the data that the key is compared to were
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
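For example, assuming t2.d is an indexed DATE column (the schema is made up):

  INSERT INTO t1 SELECT * FROM t2 WHERE d = '2007-20-01';
  -- Converting the malformed constant to the key part's type raises a
  -- conversion warning; in an INSERT ... SELECT context that warning used
  -- to be reported to the client.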
UPDATE contains wrong data if the SELECT employs a temporary table.
If the UPDATE values of an INSERT ... SELECT ... ON DUPLICATE KEY UPDATE
statement contain fields from the SELECT part and the SELECT employs a
temporary table, then those fields will contain wrong values because they
aren't adjusted to get their data from the temporary table.
The solution is to add these fields to the SELECT's all_fields list,
to store pointers to those fields in the SELECT's ref_pointer_array, and
to access them via Item_ref objects.
The substitution with Item_ref objects is done in a new function,
Item_field::update_value_transformer(). It is called through the
item->transform() mechanism at the end of the select_insert::prepare()
function.
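A sketch of the affected pattern (schema is illustrative):

  -- t1.id has a unique key; the GROUP BY makes the SELECT use a temporary table.
  INSERT INTO t1 (id, total)
    SELECT t2.id, SUM(t2.v) FROM t2 GROUP BY t2.id
    ON DUPLICATE KEY UPDATE total = t2.id + 1;
  -- Before the fix, t2.id in the UPDATE part read from the base table rather
  -- than from the temporary table holding the SELECT result.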
updated.
INSERT ... ON DUPLICATE KEY UPDATE reports that a record was updated when
a duplicate key occurs, even if the record wasn't actually changed
because the update values are the same as those already in the record.
Now the compare_record() function is used to check whether the record was
changed, and the update of a record is reported only if the record differs
from the original one.
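For example:

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  INSERT INTO t1 VALUES (1, 10);
  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE b = 10;
  -- The second INSERT hits the duplicate key but leaves the row unchanged;
  -- it is no longer reported as an update.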
The cause of the bug was an incomplete fix for bug 18080.
The problem was that setup_tables() unconditionally reset the
name resolution context to its 'tables' argument, which pointed
to the first table of an SQL statement.
The bug fix limits resetting of the name resolution context in
setup_tables() to the cases when the context was not set
by earlier parser/optimizer phases.
There was an incomplete reset of the name resolution context that caused
INSERT ... SELECT ... JOIN statements to resolve columns against something
other than the joint row type calculated for the join.
Removed the redundant re-initialization of the context, because
mysql_insert_select_prepare() now correctly saves/restores the context.
tables
Currently, for INSERT ... SELECT ... LIMIT ... the server uses a
temporary table to store the results of the SELECT ... LIMIT and then
uses that table as a source for the INSERT. The problem is that in some cases
it actually skips the LIMIT clause when doing so and materializes the
whole SELECT result set regardless of the LIMIT.
This fix limits the filling of the temporary table to only as many rows
as will actually be used, by propagating the LIMIT value.
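A sketch of the affected shape:

  INSERT INTO t1 (a) SELECT a FROM t1 LIMIT 10;
  -- t1 is both target and source, so the SELECT result is buffered in a
  -- temporary table; with the LIMIT propagated, only 10 rows are
  -- materialized instead of the whole result set.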
VALUES() can only refer to the table the INSERT is going into.
But Item_insert_value::fix_fields() was passing the full table list to its
argument. This resulted in finding a second column that shouldn't be found,
and in failing with an error about an ambiguous field.
Item_insert_value::fix_fields() now passes only the first table of the full
table list.
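An illustrative case where both tables have a column named a:

  INSERT INTO t1 (a)
    SELECT a FROM t2
    ON DUPLICATE KEY UPDATE a = VALUES(a);
  -- VALUES(a) must resolve against t1 only; with the full table list it
  -- also matched t2.a and failed with a spurious 'ambiguous field' error.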
"Process NATURAL and USING joins according to SQL:2003".
* Some of the main problems fixed by the patch:
- in "select *" queries the * expanded correctly according to
ANSI for arbitrary natural/using joins
- natural/using joins are correctly transformed into JOIN ... ON
for any number/nesting of the joins.
- column references are correctly resolved against natural joins
of any nesting and combined with arbitrary other joins.
* This patch also contains a fix for name resolution of items
inside the ON condition of JOIN ... ON - in this case items must
be resolved only against the JOIN operands. To support such
'local' name resolution, the patch introduces a stack of
name resolution contexts used at parse time.
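For instance, after this patch a query such as the following (tables are
illustrative) expands and resolves according to SQL:2003:

  SELECT * FROM t1 NATURAL JOIN t2 JOIN t3 USING (b);
  -- The natural/using joins are transformed into equivalent JOIN ... ON
  -- form, each coalesced column appears only once in the expanded *, and
  -- column references in every ON condition resolve only against that
  -- join's operands.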
NOTICE:
- This patch is not complete in the sense that
  - there are 2 test cases that still do not pass -
    one in join.test, one in select.test. Both are marked
    with a comment "TODO: WL#2486".
  - it does not include a new test specific for the task
#9728 'Decreased functionality in "on duplicate key update
#8147 'a column proclaimed ambigous in INSERT ... SELECT .. ON DUPLICATE'
This ensures fields are uniquely qualified and also that one can't update
other tables in the ON DUPLICATE KEY UPDATE part.
Removed the changes made by the fix for bug #8147, which stripped the
insert_table_list down to only the insert table and resulted in the error
reported in bug #9728.
Added a flag to Item to resolve the ambiguous fields reported in bug #8147.
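A sketch of the intended behaviour (names are illustrative):

  INSERT INTO t1 (a)
    SELECT a FROM t2
    ON DUPLICATE KEY UPDATE t1.a = t1.a + 1;
  -- Only columns of the insert table t1 may be updated here; an unqualified
  -- column name that also exists in t2 is reported as ambiguous.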
Now one can use user variables as targets for data loaded from a file
(besides the table's columns). Also, LOAD DATA got a new SET clause in
which one can specify values for table columns as expressions.
For example, the following is possible:
LOAD DATA INFILE 'words.dat' INTO TABLE t1 (a, @b) SET c = @b + 1;
This patch also implements a new way of replicating LOAD DATA.
Now we do it similarly to other queries.
We store the LOAD DATA query in a new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we are executing this event we simply rewrite the part of the query
which holds the name of the file (we use the name of a temporary file)
and then execute it as a usual query. At the beginning of this sequence we
use a Begin_load_query event which is almost identical to the Append_file
event.
Version for 5.0. Committed for merge.
If the result table is one of the select tables in INSERT SELECT,
we must not disable the result table's indexes before selecting.
Now the preparation is split into two prepare methods.
The first detects the situation and defers some preparations
until the second phase.
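The problematic shape, for illustration:

  INSERT INTO t1 (a) SELECT a + 1 FROM t1;
  -- t1 is both the result table and a select table; its indexes must stay
  -- enabled while the SELECT runs, so the insert-side preparation is
  -- deferred to the second phase.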
Version for 4.1. Committed for merge.
If the result table is one of the select tables in INSERT SELECT,
we must not disable the result table's indexes before selecting.
mysql_execute_command() detects the match for other reasons and
adds the flag OPTION_BUFFER_RESULT to the 'select_options'.
In this case the result is put into a temporary table first.
Hence, we can defer the preparation of the insert
table until the result is to be used.