When a table is being updated it has two sets of fields: fields required
for condition checks and fields to be updated. A storage engine is
allowed not to retrieve the columns that are only marked for update.
Because of this, records cannot be compared to see whether the data has
changed, so the server always updates records regardless of whether the
data changed.
Now, when an auto-updatable timestamp field is present and the server
sees that a table handler is not going to retrieve write-only fields,
all such fields are marked to be read, forcing the handler to retrieve
them.
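A minimal sketch of the idea; the helper itself is hypothetical, while
ON_UPDATE_NOW_FLAG and bitmap_set_bit() do exist in the server sources:

  /*
    Hypothetical helper: force auto-updated timestamp columns into the
    read set so the handler retrieves them even though the statement
    only writes them.
  */
  static void mark_auto_timestamps_for_read(TABLE *table)
  {
    for (Field **fld= table->field; *fld; fld++)
    {
      if ((*fld)->flags & ON_UPDATE_NOW_FLAG)
        bitmap_set_bit(table->read_set, (*fld)->field_index);
    }
  }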
"Federated INSERT failures"
Federated does not correctly handle "INSERT...ON DUPLICATE KEY UPDATE"
However, implementing such support is not reasonably possible without
increasing complexity of the storage engine: checking that constraints
on remote server match local server and parsing error messages.
This patch causes 'ON DUPLICATE KEY' to fail with ER_DUP_KEY message
if a conflict occurs and not to fail silently.
Sometimes the number of really updated rows (those with changed column
values) cannot be determined at the server level alone (e.g. if the
storage engine does not return enough column values to verify it). So
the only dependable way in such cases is to let the storage engine
return that information, if it can.
Fixed the bug at the server level by providing a way for the storage
engine to report whether it actually updated the row or the old and new
column values are the same. It can do that by returning
HA_ERR_RECORD_IS_THE_SAME from ha_update_row().
Note that a storage engine may choose not to return this status code, so
this behaviour remains storage engine specific.
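A minimal sketch of how an engine's update_row() might report this; the
memcmp-based comparison is illustrative only, since a real engine must
also honour NULL bits and variable-length fields:

  int ha_example::update_row(const uchar *old_data, uchar *new_data)
  {
    /*
      If the new row image is identical to the old one, tell the server
      that nothing changed instead of silently rewriting the row.
    */
    if (!memcmp(old_data, new_data, table->s->reclength))
      return HA_ERR_RECORD_IS_THE_SAME;
    /* ... perform the actual update here ... */
    return 0;
  }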
When INSERT ... ON DUPLICATE KEY UPDATE has to update a matched row but
the new data is the same as in the record, it reports that no rows were
inserted or updated; nevertheless, the row is silently rewritten. This
leads to a situation where zero updated rows are reported even though
the row has actually been touched.
Now the write_record function updates a row only if the new data differs
from that already in the record.
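A sketch of the server-side check; compare_record() exists in the server
sources, but its use here is a simplified illustration of the
write_record() logic:

  /*
    In the ON DUPLICATE KEY UPDATE branch of write_record(): only touch
    the row when the new values differ from the old ones.
  */
  if (compare_record(table))                      /* something changed */
  {
    if ((error= table->file->ha_update_row(table->record[1],
                                           table->record[0])) &&
        error != HA_ERR_RECORD_IS_THE_SAME)
      goto err;
  }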
Coding style: classes start with a capital letter.
Rename some classes related to parsing:
create_field -> Create_field
foreign_key -> Foreign_key
key_part_spec -> Key_part_spec
When the CLIENT_FOUND_ROWS flag is set, the server should return the
number of found rows, regardless of whether they were updated or not.
But this wasn't the case for the INSERT statement, which always returned
the number of rows actually changed, thus providing wrong information to
the user.
Now the select_insert::send_eof method and the mysql_insert function
send the number of touched rows if the CLIENT_FOUND_ROWS flag is set.
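A minimal sketch of the count selection; the info.touched/info.updated
names are illustrative, while CLIENT_FOUND_ROWS and
thd->client_capabilities exist in the server sources:

  /*
    Report 'touched' rows to clients that asked for found rows,
    'updated' rows otherwise.
  */
  ha_rows row_count= (thd->client_capabilities & CLIENT_FOUND_ROWS) ?
                     info.touched : info.updated;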
Moved the code that checks storage engine capabilities to after the
tables are locked. Moved the code that caches table flags, so that the
table flags are read from the storage engine at the beginning of each
statement, in addition to when the table is opened.
To handle CREATE ... SELECT, the decision function is called after the
table is created, with all tables that are in the select part of the
statement as well as the newly created table.
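A sketch of the caching idea; ha_table_flags() and table_flags() follow
the server sources, while rebind_table_flags() is a hypothetical name
for the refresh step:

  class handler
  {
    Table_flags cached_table_flags;     /* re-read at statement start */
  public:
    /* Cheap accessor used throughout a statement. */
    Table_flags ha_table_flags() const { return cached_table_flags; }
    /* Hypothetical refresh hook, called when the statement begins. */
    void rebind_table_flags() { cached_table_flags= table_flags(); }
    virtual Table_flags table_flags() const= 0;
  };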
Refined the tests, since pb revealed the older version's fragility: the
error from SF() due to being killed may differ between environments.
DBUG_ASSERT instead of assert.
The reason for the bug was that replaying a query on the slave was
impossible, because its event was recorded with the killed error. Due to
the specifics of INSERT handling, whose per-row loop cannot be
interrupted by killing, a query on a transactional table should not have
appeared in the binlog unless a stored routine it called was interrupted
by killing (and then an error must be returned out of the loop).
The offered solution adds the following rule for binlogging INSERT,
which accounts for the above specifics:
For INSERT on a transactional table, if no error was set, the raised
KILLED flag alone is harmless and is masked out when the binlog event is
created.
For both table types, the combination of a raised error and the KILLED
flag indicates potentially partial execution on the master, so
consistency is in question. In that case the code continues to binlog
the event with the appropriate killed error.
The fix relies on the specified behaviour of stored routines, which must
propagate the error to the top-level query handling if the thd->killed
flag was raised during routine execution.
The patch adds an argument with a default killed-status-unset value to
Query_log_event::Query_log_event.
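A sketch of the changed constructor; the parameter name and the
THD::KILLED_NO_VALUE default are modelled on the server sources but may
differ in detail:

  class Query_log_event: public Log_event
  {
  public:
    /*
      killed_status_arg tells the event which error, if any, to record;
      the default means "take the status from the THD at logging time".
    */
    Query_log_event(THD *thd_arg, const char *query_arg,
                    ulong query_length, bool using_trans,
                    bool suppress_use,
                    THD::killed_state killed_status_arg=
                      THD::KILLED_NO_VALUE);
  };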
- A race condition caused brief unavailability when trying to access a
  table.
- The variable 'grant_option' was removed to resolve the race condition
  and to simplify the design. This flag was originally intended to
  optimize grant checks.
Bug#4968 ""Stored procedure crash if cursor opened on altered table"
Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
stored procedure."
Bug#19733 "Repeated alter, or repeated create/drop, fails"
Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
Bug#24879 "Prepared Statements: CREATE TABLE (UTF8 KEY) produces a
growing key length" (this bug is not fixed in 5.0)
Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
statements in stored routines or as prepared statements caused
incorrect results (and crashes in versions prior to 5.0.25).
In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE
SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.
The problem behind bugs 4968, 19733, 19182 and 6895 was that the
functions mysql_prepare_table, mysql_create_table and mysql_alter_table
are not re-execution friendly: during their operation they modify the
contents of LEX (members create_info, alter_info, key_list,
create_list), thus making the LEX unusable for the next execution.
In particular, these functions removed processed columns and keys from
create_list, key_list and drop_list. Search the code in sql_table.cc
for drop_it.remove() and similar patterns to find evidence.
The fix is to supply these functions with a usable copy of each of the
above structures at every re-execution of an SQL statement.
To simplify memory management, LEX::key_list and LEX::create_list
were moved into LEX::alter_info, a fresh copy of which is created for
every execution.
The problem behind the crashing bug 22060 stemmed from the fact that the
above-mentioned functions were not only modifying the HA_CREATE_INFO
structure in LEX, but also changing it to point into volatile memory
of the execution memory root.
The patch solves this problem by creating and using an on-stack
copy of HA_CREATE_INFO in mysql_execute_command.
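A sketch of the per-execution copies, modelled on the code in
mysql_execute_command (exact signatures may differ by version):

  /*
    Work on copies so that LEX stays intact and reusable for the next
    execution of the prepared statement or stored routine.
  */
  HA_CREATE_INFO create_info(lex->create_info);          /* on stack */
  Alter_info alter_info(lex->alter_info, thd->mem_root); /* deep copy */
  if (thd->is_fatal_error)   /* out of memory during the copy */
    goto error;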
Additionally, this patch splits the part of mysql_alter_table that
analyzes and rewrites information from the parser into a separate
function, mysql_prepare_alter_table, by analogy with mysql_prepare_table,
which is renamed to mysql_prepare_create_table.
The problem reported is a compile bug, reported by the GCC development
team with GCC 4.2.
The original issue can no longer be reproduced in MySQL 5.1, since the
configure script no longer defines HAVE_ATOMIC_ADD, which caused the
Linux atomic functions to be used (and caused a problem with an invalid
cast).
This patch implements some code cleanup for 5.1 only, which was
identified during the investigation of this issue.
With this patch, statistics maintained in THD::status_var are by
definition owned by the running thread and do not need to be protected
against race conditions. These statistics are maintained by the
status_var_* helpers, which do not require any lock.
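A sketch of the lock-free, per-thread pattern; status_var_increment
exists in the server sources as a macro, and the definition below is a
simplified illustration:

  /*
    Each THD owns its own counters: no mutex is needed because only the
    running thread writes to them. Aggregation across threads happens
    separately, under a lock, when SHOW STATUS runs.
  */
  #define status_var_increment(V) (V)++

  status_var_increment(thd->status_var.ha_read_key_count);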
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
implicit insert"
Also fixes and adds test cases for bugs:
20497 "Trigger with INSERT DELAYED causes Error 1165"
21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
table with a trigger".
Post-review fixes.
Problem:
In MySQL, INSERT DELAYED is a way to pipe all inserts into a given table
through a dedicated thread. This is necessary for simplistic storage
engines like MyISAM, which have no internal concurrency control or
threading and thus cannot achieve efficient INSERT throughput without
support from the SQL layer.
DELAYED INSERT works as follows:
For every distinct table which can accept DELAYED inserts and has
pending data to insert, a dedicated thread is created to write the data
to disk. All user connection threads that attempt to delayed-insert into
this table interact with the dedicated thread in producer/consumer
fashion: all to-be-inserted records are pushed onto the queue of the
dedicated thread, which fetches the records and writes them.
In this design, client connection threads never open or lock the delayed
insert table.
This functionality was introduced in version 3.23 and does not take into
account the existence of triggers, views, or pre-locking.
E.g. if INSERT DELAYED is called from a stored function which, in turn,
is called from another stored function that uses the delayed table, a
deadlock can occur, because delayed locking bypasses pre-locking.
Besides:
* the delayed thread works directly with the subject table through the
  storage engine API and does not invoke triggers;
* even if it were patched to invoke triggers, and those triggers, in
  turn, used other tables, the delayed thread would have to open and
  lock the involved tables (use pre-locking);
* even if it were patched to use pre-locking, without deadlock detection
  the delayed thread could easily lock out user connection threads when
  the same table is used both in a trigger and on the right side of the
  insert query: the delayed thread would not release its locks until all
  inserts were complete, while a user connection cannot complete its
  inserts without holding locks on the tables used on the right side of
  the query.
Solution:
These considerations suggest two general alternatives for the future of
INSERT DELAYED:
* it is considered a full-fledged alternative to normal INSERT, or
* it is regarded as an optimisation that is only relevant for simplistic
  engines.
Since we missed our chance to provide complete support for new features
when 5.0 was in development, the first alternative is currently
infeasible.
However, even the second alternative, which is to detect new features
and convert a DELAYED insert into a normal insert, is not easy to
implement.
The catch-22 is that we don't know whether the subject table has
triggers or is a view before we open it, and we only open it in the
delayed thread. We don't know whether the query involves pre-locking
until we have opened all tables, and we always first create the delayed
thread, and only then open the remaining tables.
This patch detects the problematic scenarios and converts DELAYED INSERT
to a normal INSERT, using the following approach (see the sketch after
this list):
* if the statement is executed under pre-locking (e.g. from within a
  stored function or trigger), or the right side may require
  pre-locking, we detect the situation before creating a delayed insert
  thread and convert the statement to a conventional INSERT;
* if the subject table is a view or has triggers, we shut down the
  delayed thread and convert the statement to a conventional INSERT.
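A sketch of the fallback check; the condition is illustrative, while
thd->prelocked_mode, requires_prelocking() and the TL_* lock types exist
in the server sources:

  /*
    Before spawning a delayed-insert thread: if pre-locking is in effect
    or may be required, downgrade DELAYED to a conventional INSERT by
    replacing the requested lock type.
  */
  if (thd->prelocked_mode || thd->lex->requires_prelocking())
    table_list->lock_type= TL_WRITE;   /* handle as a normal INSERT */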
Replaced binlog_row_based_if_mixed with the variable binlog_stmt_flags,
which holds several flags, and added member functions to manipulate
those flags.
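A sketch of the flag holder; the bit and accessor names follow the style
of the server sources but are illustrative:

  class st_lex
  {
    uint32 binlog_stmt_flags;   /* replaces binlog_row_based_if_mixed */
  public:
    enum enum_binlog_stmt_flag {
      BINLOG_STMT_FLAG_ROW_BASED_IF_MIXED= 0,
      BINLOG_STMT_FLAG_UNSAFE
    };
    void set_stmt_unsafe()
    { binlog_stmt_flags|= 1U << BINLOG_STMT_FLAG_UNSAFE; }
    bool is_stmt_unsafe() const
    { return binlog_stmt_flags & (1U << BINLOG_STMT_FLAG_UNSAFE); }
  };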
Added code to generate a warning when an attempt is made to log an
unsafe statement to the binary log. The warning is both pushed to SHOW
WARNINGS and written to the error log. To prevent flooding the error
log, the warning is written there only once per open session.