INSERT DELAYED on a replication slave was converted to regular INSERT,
whereas it should try concurrent INSERT first.
With this patch we try to convert delayed insert to concurrent insert on
a replication slave. If it is impossible for some reason, we fall back to
regular insert.
No test case for this fix. I do not see anything indicating this is a
regression - we have behaved this way since Nov 2000.
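A minimal standalone sketch of the intended conversion order (the choose_insert_lock
helper and its parameters are hypothetical; only TL_WRITE_DELAYED,
TL_WRITE_CONCURRENT_INSERT and TL_WRITE come from thr_lock.h):

  #include <cstdio>

  /* Simplified stand-ins for the server's write lock types; everything in
     this sketch other than the three names above is illustrative. */
  enum lock_type { TL_WRITE_DELAYED, TL_WRITE_CONCURRENT_INSERT, TL_WRITE };

  /* On a slave, a DELAYED insert first tries the concurrent-insert lock and
     only falls back to a regular write lock if concurrent insert is impossible. */
  static lock_type choose_insert_lock(bool is_slave_thread,
                                      bool concurrent_insert_possible)
  {
    if (!is_slave_thread)
      return TL_WRITE_DELAYED;            /* keep the delayed path */
    if (concurrent_insert_possible)
      return TL_WRITE_CONCURRENT_INSERT;  /* preferred conversion */
    return TL_WRITE;                      /* fall back to regular insert */
  }

  int main()
  {
    std::printf("%d %d\n",
                choose_insert_lock(true, true),
                choose_insert_lock(true, false));
    return 0;
  }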
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
Once set, the flag might later get reset inside a stored routine
execution stack.
The reason was that there was no check whether a new statement had started
at the time of resetting.
The issue affects most binloggable DML queries. Note that multi-update
is covered by the fix for bug@27716, and multi-delete by bug@29136.
Fixed by saving the parent statement's flag of whether the statement modified a
non-transactional table, and merging (OR-ing) that value with the one obtained
in mysql_execute_command.
The thd->no_trans_update members are relocated into thd->transaction.`member`;
assertions are added.
Effectively, the following properties hold:
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
reflects whether such a table was modified by the substatement.
That also respects THD::really_abort_on_warning() requirements.
2. Eventually, thd->transaction.stmt.modified_non_trans_table is computed as
the union of the values of all invoked sub-statements.
That fixes this bug#27417.
Computation of thd->transaction.all.modified_non_trans_table is refined to be based on
the stmt value in all cases, including the INSERT .. SELECT statement, which
before the patch had an extra issue, bug@28960.
Minor issues are also covered in mysql_load, mysql_delete, and binlogging of INSERT
INTO temp_table SELECT.
The supplied test verifies this only to a limited extent, mostly via asserts. The
ultimate testing is deferred to bug@13270 and bug@23333.
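A small self-contained sketch of the save/merge idea described above (the structs
and run_substatement below are illustrative, not the server's code):

  #include <cassert>

  /* Minimal stand-in for the relevant parts of THD::transaction; only the
     modified_non_trans_table flags are modelled here. */
  struct Stmt_state { bool modified_non_trans_table; };
  struct Trans_state { Stmt_state stmt; Stmt_state all; };

  /* The parent's statement-level flag is saved, the substatement runs with a
     clean flag, and the result is merged (OR-ed) back so the parent's
     information cannot be lost. */
  template <class Body>
  static void run_substatement(Trans_state &trans, Body body)
  {
    bool parent_flag= trans.stmt.modified_non_trans_table;  /* save  */
    trans.stmt.modified_non_trans_table= false;             /* reset */
    body(trans);                                            /* exec  */
    trans.stmt.modified_non_trans_table|= parent_flag;      /* union */
    trans.all.modified_non_trans_table|=
      trans.stmt.modified_non_trans_table;                  /* raise */
  }

  int main()
  {
    Trans_state trans= { { true }, { false } };     /* parent touched MyISAM  */
    run_substatement(trans, [](Trans_state &) {});  /* child touched nothing  */
    assert(trans.stmt.modified_non_trans_table);    /* parent's fact survives */
    assert(trans.all.modified_non_trans_table);
    return 0;
  }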
--long-query-time is now given in seconds with microseconds as decimals
--min_examined_row_limit added for slow query log
long_query_time user variable is now double with 6 decimals
Added functions to get time in microseconds
Added faster time() functions for systems that have gethrtime() (Solaris)
We now do fewer time() calls.
Added field->in_read_set() and field->in_write_set() for easier field manipulation by handlers
set_var.cc and my_getopt() can now handle DOUBLE variables.
All time() calls changed to my_time()
my_time() now retries if the time() call fails (see the sketch below).
Added debug function for stopping in mysql_admin_table() when tables are locked
Some trivial function and struct variable renames to avoid merge errors.
Fixed compiler warnings
Initialization of some time variables on Windows moved to my_init()
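As a sketch of the retry idea mentioned above (my_time_sketch is a hypothetical
stand-in, not the mysys function itself):

  #include <ctime>
  #include <cstdio>

  /* If time() fails (returns (time_t) -1), try again instead of storing a
     bogus timestamp.  The real function lives in mysys; this is a sketch. */
  static time_t my_time_sketch()
  {
    time_t t;
    while ((t= time(nullptr)) == (time_t) -1)
    {
      /* failure here is extremely rare and usually transient; simply retry */
    }
    return t;
  }

  int main()
  {
    std::printf("%ld\n", (long) my_time_sketch());
    return 0;
  }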
Bug#25422 (Hang with log tables)
Bug#17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug#23044 (Warnings on flush of a log table)
Bug#29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
The root cause traces to the following code in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so that waiting for these non-existent threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so that the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- remove totally the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
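A minimal sketch of the new write path under the design above (the types and
functions below are illustrative, not the actual LOGGER classes):

  #include <cstdio>

  /* Instead of keeping the log tables permanently open under fake THD
     objects, each log write opens the table in the current thread's context
     and closes it again, so regular table locking, FLUSH TABLES and DDL keep
     working. */
  struct Log_table_sketch
  {
    const char *name;
    bool open()  { std::printf("open  %s\n", name); return true; }
    void write(const char *msg) { std::printf("write %s: %s\n", name, msg); }
    void close() { std::printf("close %s\n", name); }
  };

  static void write_log_entry(Log_table_sketch &t, const char *msg)
  {
    if (!t.open())   /* opened with the current thread, no fake THD */
      return;        /* logging failure must never block the query  */
    t.write(msg);
    t.close();       /* locks are released right after the write    */
  }

  int main()
  {
    Log_table_sketch general= { "mysql.general_log" };
    write_log_entry(general, "SELECT 1");
    return 0;
  }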
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db()
but forget to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal errors handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
  in_use->some_tables_deleted=1;
against another thread, without any consideration for thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
the master but on the slave
MySQL can decide to "downgrade" an INSERT DELAYED statement
to a normal INSERT in certain situations.
One such situation is when the slave is replaying a
replication feed.
However, INSERT DELAYED is logged even if there are no updates,
whereas a normal INSERT is not logged in such cases.
Fixed by always logging a "downgraded" INSERT DELAYED, even
if there were no updates.
No test case, since the bug requires a stress case with 30 INSERT DELAYED
threads and 1 killer thread to repeat. The patch is verified
manually.
Review fixes.
A server running DELAYED inserts could deadlock itself
or crash under high load if some of the delayed insert threads were KILLed
in the meantime.
The fix is to change the internal lock acquisition order of the delayed inserts
subsystem and to ensure that
Delayed_insert::table_list::db does not point to volatile memory in some
cases.
For details, please see the comment in sql_insert.cc.
When a table is being updated it has two sets of fields - fields required for
checking conditions and fields to be updated. A storage engine is allowed
not to retrieve columns marked for update. Because of this, records can't
be compared to see whether the data has been changed or not. This makes the
server always update records regardless of whether the data changed.
Now, when an auto-updatable TIMESTAMP field is present and the server sees that
a table handler isn't going to retrieve write-only fields, all such
fields are marked to be read to force the handler to retrieve them.
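A toy model of that marking step (the server uses its MY_BITMAP read/write sets;
the Column struct and helper here are only illustrative):

  #include <vector>

  struct Column { bool is_timestamp; bool in_read_set; bool in_write_set; };

  /* If a TIMESTAMP column is in the write set but not in the read set, force
     it into the read set so the handler fetches its old value and the server
     can later tell whether the row really changed. */
  static void mark_timestamps_for_read(std::vector<Column> &columns)
  {
    for (Column &col : columns)
      if (col.is_timestamp && col.in_write_set && !col.in_read_set)
        col.in_read_set= true;
  }

  int main()
  {
    std::vector<Column> cols;
    cols.push_back(Column{ true, false, true });   /* write-only TIMESTAMP */
    cols.push_back(Column{ false, true, true });
    mark_timestamps_for_read(cols);
    return cols[0].in_read_set ? 0 : 1;
  }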
"Federated INSERT failures"
Federated does not correctly handle "INSERT ... ON DUPLICATE KEY UPDATE".
However, implementing such support is not reasonably possible without
increasing the complexity of the storage engine: it would require checking
that constraints on the remote server match the local server and parsing
error messages.
This patch causes 'ON DUPLICATE KEY' to fail with an ER_DUP_KEY message
if a conflict occurs, rather than failing silently.
Sometimes the number of really updated rows (with changed
column values) cannot be determined at the server level
alone (e.g. if the storage engine does not return enough
column values to verify that). So the only dependable way
in such cases is to let the storage engine return that
information if possible.
Fixed the bug at the server level by providing a way for the
storage engine to return information about whether it
actually updated the row or the old and the new column
values are the same. It can do that by returning
HA_ERR_RECORD_IS_THE_SAME from ha_update_row().
Note that each storage engine may choose not to try to
return this status code, so this behaviour remains
storage engine specific.
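An engine-side sketch of that idea (the types, the helper and the error value are
illustrative; only the HA_ERR_RECORD_IS_THE_SAME name comes from the handler
interface):

  #include <cstring>

  /* Illustrative stand-in for the handler error code. */
  static const int SKETCH_HA_ERR_RECORD_IS_THE_SAME= 1;

  struct Row_image { const unsigned char *data; size_t length; };

  /* If the old and new row images are byte-identical, report "record is the
     same" instead of pretending an update happened; otherwise do the update. */
  static int update_row_sketch(const Row_image &old_row, const Row_image &new_row)
  {
    if (old_row.length == new_row.length &&
        std::memcmp(old_row.data, new_row.data, old_row.length) == 0)
      return SKETCH_HA_ERR_RECORD_IS_THE_SAME;
    /* ... perform the real update here ... */
    return 0;
  }

  int main()
  {
    unsigned char a[4]= { 1, 2, 3, 4 }, b[4]= { 1, 2, 3, 4 };
    Row_image old_row= { a, sizeof(a) }, new_row= { b, sizeof(b) };
    return update_row_sketch(old_row, new_row) ==
           SKETCH_HA_ERR_RECORD_IS_THE_SAME ? 0 : 1;
  }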
The method select_insert::send_error does two things: it rolls back the statement
being executed and outputs an error message. But when a
nonexistent column is referenced, an error message has already been published and
there is no need to publish another.
Fixed by moving all functionality beyond publishing an error message into
select_insert::abort() and calling only that function.
When INSERT .. ON DUPLICATE KEY UPDATE has to update a matched row but
the new data is the same as that already in the record, it returns as if
no rows were inserted or updated. Nevertheless, the row is silently
updated. This leads to a situation where zero updated rows are reported
although data has actually been changed.
Now the write_record function updates a row only if new data differs from
that in the record.
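A compact sketch of the adjusted decision (the counters and names below are
illustrative, not the server's COPY_INFO):

  #include <cstring>

  struct Insert_counters { unsigned long updated; unsigned long touched; };

  /* ON DUPLICATE KEY UPDATE, sketched: a matched row is rewritten only when
     the new image really differs from the stored one; a row that matched but
     did not change is counted as "touched" but not as "updated". */
  static void update_matched_row(const unsigned char *stored,
                                 const unsigned char *fresh,
                                 size_t length, Insert_counters &info)
  {
    info.touched++;                              /* a duplicate key matched   */
    if (std::memcmp(stored, fresh, length) == 0)
      return;                                    /* identical: leave it alone */
    /* ... call the handler's update here ... */
    info.updated++;                              /* data actually changed     */
  }

  int main()
  {
    Insert_counters info= { 0, 0 };
    unsigned char stored[2]= { 1, 2 }, fresh[2]= { 1, 2 };
    update_matched_row(stored, fresh, sizeof(stored), info);
    return (info.touched == 1 && info.updated == 0) ? 0 : 1;
  }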
Coding style: classes start with a capital letter.
Rename some classes related to parsing:
create_field -> Create_field
foreign_key -> Foreign_key
key_part_spec -> Key_part_spec
flag is set.
When the CLIENT_FOUND_ROWS flag is set, the server should return the
number of found rows regardless of whether they were updated or not.
But this wasn't the case for the INSERT statement, which always returned the
number of rows that were actually changed, thus providing wrong info to
the user.
Now the select_insert::send_eof method and the mysql_insert function
send the number of touched rows if the CLIENT_FOUND_ROWS flag is set.
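A tiny sketch of the reporting choice (the function is illustrative; only the
CLIENT_FOUND_ROWS name comes from the client/server protocol):

  /* With CLIENT_FOUND_ROWS the client is told how many rows were found
     (touched), otherwise only how many rows had their data changed. */
  static unsigned long rows_to_report(unsigned long updated_rows,
                                      unsigned long touched_rows,
                                      bool client_found_rows)
  {
    return client_found_rows ? touched_rows : updated_rows;
  }

  int main()
  {
    return (rows_to_report(1, 3, true) == 3 &&
            rows_to_report(1, 3, false) == 1) ? 0 : 1;
  }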