derived table cause crash
When a multi-UPDATE command fails to lock some table and subsequently
succeeds on retry, the tables need to be reopened if they were altered.
But the reopening procedure failed for derived tables.
Extra cleanup has been added.
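A statement of the affected shape might look like this (table and column
names are illustrative, not taken from the bug report):

    UPDATE t1, (SELECT id, new_val FROM t2) AS dt
    SET t1.val = dt.new_val
    WHERE t1.id = dt.id;
    -- If the initial attempt to lock t1 fails and the statement is retried,
    -- the reopen/cleanup path must also handle the derived table dt.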
``FLUSH TABLES WITH READ LOCK''
Concurrent execution of 1) a multi-table UPDATE with a NATURAL/USING join
and 2) a query such as "FLUSH TABLES WITH READ LOCK" or "ALTER TABLE" on a
table being updated led to a server crash.
The mysql_multi_update_prepare() function is optimized to lock only the
tables being updated, so it postpones locking until the last moment, and if
locking fails it cleans up the modified syntax structures and repeats the
query analysis. However, that cleanup procedure was incomplete for
NATURAL/USING join syntax data: 1) some Field_item items pointed into freed
table structures, and 2) the TABLE_LIST::join_columns fields were not reset.
Major change:
the short-lived Field *Natural_join_column::table_field has been replaced
with a long-lived Item*.
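A sketch of the concurrent scenario (illustrative schema, not the actual
test case):

    -- connection 1
    UPDATE t1 NATURAL JOIN t2 SET t1.a = t2.b WHERE t2.b > 0;

    -- connection 2, racing with the statement above
    FLUSH TABLES WITH READ LOCK;
    -- or: ALTER TABLE t1 ADD COLUMN c INT;

    -- Before the fix, the retry of the multi-table UPDATE could crash the
    -- server because the NATURAL/USING join data was not fully cleaned up.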
The problem is that when statement-based replication was enabled,
statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE
.. SELECT FROM need to grab a read lock on the source table that
does not permit concurrent inserts, which would in turn be denied
if the source table is a log table because log tables can't be
locked exclusively.
The solution is to not take such a lock when the source table is a log
table, as it is unsafe to replicate log tables under statement-based
replication. Furthermore, the read lock that does not permit concurrent
inserts is now only taken if statement-based replication is enabled and the
source table is not a log table.
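For illustration, a statement of the affected kind might be (the archive
table name is hypothetical):

    SET GLOBAL log_output = 'TABLE';
    SET SESSION binlog_format = 'STATEMENT';
    -- Reading from a log table: the lock that blocks concurrent inserts is
    -- no longer requested, because log tables cannot be locked exclusively
    -- and replicating them statement-based is unsafe anyway.
    CREATE TABLE my_archive SELECT * FROM mysql.general_log;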
- In QUICK_INDEX_MERGE_SELECT::read_keys_and_merge: when we get table->sort from Unique,
tell init_read_record() not to use rr_from_cache(), because a) the rowids are already sorted
and b) the data may later be used by filesort(), which will need the record rowids
(which rr_from_cache() cannot provide).
- Fully de-initialize the table->sort read in QUICK_INDEX_MERGE_SELECT::get_next(). This fixes BUG#35477.
(bk trigger: file as fix for BUG#35478).
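A query shape that could exercise this path (assumed schema, not from the
bug report): an index_merge union over two keys whose result is then sorted
on a non-indexed column, so filesort() consumes the rowids produced by the
merge.

    SELECT * FROM t1
    WHERE key1_col = 1 OR key2_col = 2   -- index_merge union over two indexes
    ORDER BY non_indexed_col;            -- filesort needs the record rowids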
first row or fails with an error:
ERROR 1022 (23000): Can't write; duplicate key in table ''
The server uses an intermediate temporary table to store updated row data.
The first column of this table contains the rowid.
The current server implementation doesn't reset the NULL flag of that
column even when it fills the column with a rowid.
To keep each rowid unique, there is a unique index.
An insertion into a unique index takes the NULL flag of the key value into
account and ignores the real data if the NULL flag is set.
So insertion of actually different rowids may lead to two kinds of
problems. The visible effect of each of these problems depends on the
initial engine type of the temporary table:
1. If multi-update initially creates the temporary table as
a MyISAM table (the table contains blob columns, so the
create_tmp_table function assumes that this table is
large), it inserts only a single row and updates
only the rows with the one corresponding rowid. Other rows are
silently ignored.
2. If multi-update initially creates a MEMORY temporary
table, fills it with data and reaches the size limit for
MEMORY tables (max_heap_table_size), it converts the
MEMORY table into a MyISAM table and fails
with an error:
ERROR 1022 (23000): Can't write; duplicate key in table ''
Multi-update has been fixed to update the NULL flag of the
temporary table's rowid columns.
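A hypothetical sketch of case 1, where a blob column makes create_tmp_table
choose MyISAM from the start (schema is illustrative):

    CREATE TABLE t1 (id INT PRIMARY KEY, b BLOB);
    CREATE TABLE t2 (id INT PRIMARY KEY, v INT);
    -- ... fill both tables with several matching rows ...
    UPDATE t1, t2
    SET t1.b = 'x', t2.v = t2.v + 1
    WHERE t1.id = t2.id;
    -- Symptom of case 1 before the fix: only the rows belonging to a single
    -- rowid were updated; the others were silently ignored.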
The bool data type was redefined to BOOL (4 bytes on Windows).
Removed the #define and fixed some of the warnings that this uncovered.
Note that the fix also disables two warnings:
4800: 'type' : forcing value to bool 'true' or 'false' (performance warning)
4805: 'operation' : unsafe mix of type 'type' and type 'type' in operation
These warnings will be handled in a separate bug, as they are
performance-related or bogus.
The return type of functions that return more than two distinct values has
been changed to int.
binlog_format=mixed
Statement-based replication of DELETE ... LIMIT, UPDATE ... LIMIT and
INSERT ... SELECT ... LIMIT is not safe, as the order of rows is not
defined.
With this fix, we issue a warning that such a statement is not safe to
replicate in statement mode, or switch to row-based logging in mixed mode.
Note that we could consider a statement safe if ORDER BY primary_key is
present; however, it may confuse users to see very similar statements
replicated differently.
Note 2: a regular UPDATE statement (w/o LIMIT) is unsafe as well, but this
patch doesn't address that issue. See the comment from Kristian posted
18 Mar 10:55.
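For example, under statement-based logging (table name is illustrative):

    SET SESSION binlog_format = 'STATEMENT';
    DELETE FROM t1 LIMIT 5;
    -- The statement is executed and logged, but the client now receives a
    -- warning that it is unsafe for statement-based replication; under
    -- binlog_format = MIXED the server switches this statement to row-based
    -- logging instead.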
a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the current
statement at the end of dispatch_command(). In order not to issue redundant
disk syncs, an optimization of the two-phase commit protocol is implemented
to bypass the two-phase commit if the transaction is read-only.
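The kind of statement behind the bug title is a SELECT that calls a stored
function with side effects; a sketch with illustrative names:

    CREATE TABLE t1 (a INT) ENGINE=InnoDB;

    DELIMITER //
    CREATE FUNCTION f1() RETURNS INT
    MODIFIES SQL DATA
    BEGIN
      INSERT INTO t1 VALUES (1);
      RETURN 1;
    END //
    DELIMITER ;

    SELECT f1();
    -- The INSERT performed inside f1() is part of the SELECT statement, so
    -- the whole statement must be committed (or rolled back on error) at
    -- the end of dispatch_command().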
The problem is that AFTER UPDATE triggers will fire only if the
new data is different from the old data on the row. The trigger
should fire regardless of whether there are changes to the data.
The solution is to fire the trigger on UPDATE even when there are no
changes to the data (the new value is the same as the old one).
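A minimal sketch of the expected behaviour (illustrative schema):

    CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
    CREATE TABLE tlog (msg VARCHAR(32));
    CREATE TRIGGER t1_au AFTER UPDATE ON t1
      FOR EACH ROW INSERT INTO tlog VALUES ('fired');
    INSERT INTO t1 VALUES (1, 10);
    -- The new value equals the old one; with the fix the trigger still
    -- fires and a row is inserted into tlog.
    UPDATE t1 SET v = 10 WHERE id = 1;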
does not use trans tables
There were two issues.
A ROLLBACK statement was recorded in the binlog even though the
multi-update had not modified any non-transactional table.
The reason for this artifact was a wrong initial value of
multi_update::transactional_tables.
The other artifact, explained on the bug page, is that
`ha_autocommit_or_rollback' works differently depending on whether a
transactional engine has been compiled in.
Fixed by setting multi_update::transactional_tables to zero at
initialization time. A multi-update on a non-transactional table won't
cause a ROLLBACK in the binlog with either compilation option.
The second artifact constitutes a self-standing issue (to be reported
separately).
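Roughly, the affected scenario looks like this (sketch; InnoDB assumed as
the transactional engine):

    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    CREATE TABLE t2 (a INT) ENGINE=InnoDB;
    -- A multi-table UPDATE touching only transactional tables:
    UPDATE t1, t2 SET t1.a = 1, t2.a = 2 WHERE t1.a = t2.a;
    -- If this statement has to be rolled back (e.g. it hits an error), no
    -- ROLLBACK event should appear in the binary log, since no
    -- non-transactional table was modified; before the fix one was written.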
columns (default datatype value is assigned).
The mysql_update function has been modified to generate an error, rather
than a warning, when an UPDATE tries to set a NOT NULL field to NULL in the
set_field_to_null_with_conversions function.
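For example (the exact error code and text may vary by server version):

    CREATE TABLE t1 (a INT NOT NULL);
    INSERT INTO t1 VALUES (1);
    -- Previously this produced a warning and assigned the column's default
    -- value; with the fix it is rejected with an error such as:
    --   ERROR 1048 (23000): Column 'a' cannot be null
    UPDATE t1 SET a = NULL;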
called from a SELECT doesn't cause ROLLBACK of state"
Make all class handler methods (PSEA API) that may modify data private.
Introduce and deploy public ha_* wrappers for these methods throughout
sql/.
This is necessary to keep track of all data modifications in sql/, which is
in turn necessary to be able to optimize the two-phase commit for
transactions that do not modify data.
cause ROLLBACK of statement", part 1. Review fixes.
Do not send OK/EOF packets to the client until we have reached the end of
the current statement.
This is a consolidation, to keep the functionality that is shared by all
SQL statements in one place in the server.
Currently this functionality includes:
- close_thread_tables()
- log_slow_statement().
After this patch and the subsequent patch for Bug#12713, it shall also include:
- ha_autocommit_or_rollback()
- net_end_statement()
- query_cache_end_of_result().
In future it may also include:
- mysql_reset_thd_for_next_command().
Query_log_event::error_code
A query can complete with the local variable `error' of mysql_$query equal
to zero, where $query is one of insert, update, delete, load, and yet be
binlogged with an error_code such as KILLED_QUERY although there is no
reason to do so.
That can happen because Query_log_event consults the thd->killed flag to
evaluate error_code.
Fixed by implementing a scheme suggested and partly implemented during the
work on bug@22725: error_status is cached immediately after control leaves
the main rows loop, and that cached instance always corresponds to the
local `error' of the mysql_$query functions. The cached value is passed to
the Query_log_event constructor instead of the default derived from
thd->killed, which can change between the caching and the constructing.
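A sketch of the race (statements and the connection id are illustrative):

    -- connection 1: the statement completes its rows loop with error == 0
    UPDATE t1 SET a = a + 1;

    -- connection 2: the KILL arrives after the rows loop has finished but
    -- before the event is written to the binary log
    KILL QUERY <connection 1 id>;

    -- Before the fix, the binlogged Query_log_event could carry a
    -- killed-query error_code even though the UPDATE itself reported no
    -- error.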