NDB table".
The SQL layer was not marking fields which were used in triggers as such. As
a result these fields were not always properly retrieved/stored by the
handler layer, so one might get wrong values or lose changes made in
triggers for NDB, Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
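A minimal, self-contained sketch of the idea with stand-in types (TriggerDef
and TableRef are invented for the example; the server's actual field-marking
mechanism differs): before the row operation, every field referenced by the
table's triggers is marked as used, so the handler layer reads and writes
those fields as well.

  #include <bitset>
  #include <vector>

  struct TriggerDef {
    std::vector<int> used_field_indexes;   // fields the trigger body touches
  };

  struct TableRef {
    std::bitset<64> used_fields;           // stand-in for per-field "used" markers
    std::vector<TriggerDef> triggers;

    // Called before the row operation so the handler fetches/stores
    // everything the triggers need, not only the statement's own columns.
    void mark_fields_used_by_triggers() {
      for (const TriggerDef &trg : triggers)
        for (int idx : trg.used_field_indexes)
          used_fields.set(idx);
    }
  };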
This patch also contains the following cleanup of ha_ndbcluster code:
We no longer rely on reading the LEX::sql_command value in the handler in
order to determine whether we can enable the optimization which handles
REPLACE statements more efficiently by doing replaces directly in the
write_row() method without reporting an error to the SQL layer.
Instead we rely on the SQL layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases when it should not
be used (e.g. if the table has delete triggers) and we use it in some
additional cases where it is applicable (e.g. for LOAD DATA REPLACE).
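A simplified illustration of the handler side of this protocol
(sketch_handler and its helpers are invented; only handler::extra(),
write_row() and HA_EXTRA_WRITE_CAN_REPLACE come from the change described
above, and the "cannot replace" counterpart is assumed for symmetry): the
SQL layer announces up front whether duplicates may be overwritten, and
write_row() acts on the remembered flag instead of inspecting
LEX::sql_command.

  enum ha_extra_function {
    HA_EXTRA_WRITE_CAN_REPLACE,      // SQL layer: duplicates may be overwritten
    HA_EXTRA_WRITE_CANNOT_REPLACE    // assumed counterpart, for illustration
  };

  class sketch_handler {
    bool write_can_replace = false;
  public:
    int extra(ha_extra_function op) {
      if (op == HA_EXTRA_WRITE_CAN_REPLACE)    write_can_replace = true;
      if (op == HA_EXTRA_WRITE_CANNOT_REPLACE) write_can_replace = false;
      return 0;
    }
    int write_row(const unsigned char *record) {
      if (write_can_replace)
        return write_or_overwrite(record);  // no duplicate-key error reported
      return plain_insert(record);          // duplicates reported to SQL layer
    }
  private:
    // Stand-in bodies; the real handler talks to the storage engine here.
    int write_or_overwrite(const unsigned char *) { return 0; }
    int plain_insert(const unsigned char *)       { return 0; }
  };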
Finally, this patch includes a fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem caused by improper field mark-up.
During row replacement, fields which weren't explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they retained
values from the old row version. The fix is to mark all table fields as set
for REPLACE statements. Note that in 5.1 this problem is already solved by
notifying the handler that it should save values from all fields only when a
real replacement happens.
BUG#18036 - update of table joined to self reports table as crashed
Set the exclude_from_table_unique_test value back to FALSE. It is needed for
a later check in multi_update::prepare() of whether to use the record cache.
Certain updates of a table joined to itself result in unexpected
behavior.
The problem was that the record cache was mistakenly enabled for
self-joined table updates; normally the record cache must be disabled
for such updates.
Fixed the wrong condition in the code that determines whether to use the
record cache for self-joined table updates.
Only MyISAM tables were affected.
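A rough sketch of the corrected decision, with invented types (JoinTable is
not the server's structure): the record cache is allowed only when the
updated table does not also appear as another member of the same join.

  #include <string>
  #include <vector>

  struct JoinTable {
    std::string base_table_name;
    bool is_update_target;
  };

  bool can_use_record_cache(const std::vector<JoinTable> &join) {
    for (const JoinTable &t : join) {
      if (!t.is_update_target) continue;
      // If the same base table also appears elsewhere in the join, this is
      // a self-join update: the record cache must stay disabled.
      for (const JoinTable &other : join)
        if (&other != &t && other.base_table_name == t.base_table_name)
          return false;
    }
    return true;
  }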
The check for view security was lacking several points:
1. Checking with the right set of permissions: each table ref that
participates in a view carried the right credentials to use in its
security_ctx member, but these weren't used when checking the credentials.
This made it hard to enforce the SQL SECURITY DEFINER|INVOKER property
consistently.
2. Because of the above, security checking for views was simply bypassed in
explicit ways in several places.
3. Security was checked only for the columns of the tables that are brought
into the query from a view. So if there was no column reference outside of
the view definition, the lack of access to the tables in the view in
SQL SECURITY INVOKER mode was not detected.
The fix addresses the above 3 points; a simplified illustration of point 1
follows.
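A minimal, self-contained sketch of point 1 with invented helper types
(SecurityCtx, TableRefSketch and has_table_access are stand-ins, not the
server's classes): the access check uses the credentials attached to the
table reference instead of always using the invoker's context.

  #include <string>

  struct SecurityCtx { std::string user; /* effective privileges, etc. */ };

  struct TableRefSketch {
    std::string table_name;
    const SecurityCtx *security_ctx;   // set per SQL SECURITY DEFINER|INVOKER
  };

  // Stand-in for the real privilege lookup.
  bool has_table_access(const SecurityCtx &, const std::string &) {
    return false;
  }

  bool check_table_ref_access(const SecurityCtx &invoker,
                              const TableRefSketch &ref) {
    // Use the credentials attached to the table reference when present,
    // falling back to the invoker's context otherwise.
    const SecurityCtx &effective =
        ref.security_ctx ? *ref.security_ctx : invoker;
    return has_table_access(effective, ref.table_name);
  }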
Multi-table UPDATE uses a temporary table to store new values for fields.
Along with the new values, the rowid of the record to be updated is stored
in a Field_string field, with the table to be updated set as the source
table of the rowid field.
But when the temporary table creates the tmp field for the rowid field, it
converts it to a varstring field because the table to be updated was created
by v4.1. Due to this the stored rowids were broken and no records to update
were found.
The can_alter_field_type flag is added to the Field_string class. When it is
set to 0 the field won't be converted to a varstring; the
Field_string::type() function now always returns MYSQL_TYPE_STRING if
can_alter_field_type is set to 0.
The multi_update::initialize_tables() function now sets the
can_alter_field_type flag to 0 for the rowid fields, preventing conversion
of the field to a varstring field.
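A minimal stand-in for the Field_string change (the class below is
simplified; only the names can_alter_field_type, Field_string::type() and
MYSQL_TYPE_STRING come from the description above): with
can_alter_field_type set to 0, type() keeps reporting MYSQL_TYPE_STRING, so
the temporary table copies the rowid column unchanged instead of turning it
into a varstring.

  enum enum_field_types { MYSQL_TYPE_STRING, MYSQL_TYPE_VAR_STRING };

  class Field_string_sketch {
  public:
    bool can_alter_field_type = true;   // default: old conversion behaviour
    bool created_by_4_1 = false;        // source table comes from a 4.1 server

    enum_field_types type() const {
      if (!can_alter_field_type)
        return MYSQL_TYPE_STRING;       // rowid field: never converted
      return created_by_4_1 ? MYSQL_TYPE_VAR_STRING : MYSQL_TYPE_STRING;
    }
  };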
- Added empty constructors and virtual destructors to many classes and structs
- Removed some usage of the offsetof() macro in favour of C++ class pointers
  (see the small example below)
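A generic example of the offsetof() cleanup, not a specific server call
site (Settings and both accessors are invented for illustration): replacing
a byte-offset computation with a C++ pointer-to-member keeps the compiler's
type checking and removes the casts.

  #include <cstddef>

  struct Settings { long max_connections; long sort_buffer; };

  // Old style: byte offset into the struct, cast back to the member type.
  long get_by_offset(Settings *s, size_t off) {
    return *reinterpret_cast<long *>(reinterpret_cast<char *>(s) + off);
  }

  // New style: C++ pointer-to-member, no casts needed.
  long get_by_member(Settings *s, long Settings::*member) {
    return s->*member;
  }

  long example(Settings *s) {
    long a = get_by_offset(s, offsetof(Settings, sort_buffer));
    long b = get_by_member(s, &Settings::sort_buffer);
    return a + b;
  }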
trigger starts trigger".
In short, the deadlock/crash happened when execution of a statement which
used stored functions or activated triggers coincided with alteration of the
tables used by these functions or triggers (in a highly concurrent
environment).
The bug was caused by incorrect handling of tables from the prelocked set in
the open_tables() function in situations when a refresh happened. This fix
replaces the old smart but not very robust way of handling tables after a
refresh (which closed only old tables) with a new one which simply closes
all tables opened so far and restarts open_tables(); see the sketch below.
Also fixed handling of temporary tables in close_tables_for_reopen().
No test case is present since the bug manifests itself only in a concurrent
environment.
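In heavily simplified form, the new approach looks like this (OpenContext,
open_one_table() and close_all_opened_tables() are invented stand-ins, not
the server functions): when a refresh is noticed while opening tables,
everything opened so far is closed and the whole open sequence restarts.

  struct OpenContext {
    bool refresh_happened = false;
    int  tables_open      = 0;
  };

  // Stand-in bodies; the real work happens in the server's table cache code.
  bool open_one_table(OpenContext &ctx)          { ++ctx.tables_open; return true; }
  void close_all_opened_tables(OpenContext &ctx) { ctx.tables_open = 0; }

  bool open_tables_sketch(OpenContext &ctx, int table_count) {
    for (;;) {
      ctx.refresh_happened = false;
      bool all_opened = true;
      for (int i = 0; i < table_count; ++i) {
        if (!open_one_table(ctx)) return false;           // hard error
        if (ctx.refresh_happened) { all_opened = false; break; }
      }
      if (all_opened) return true;       // every table is open, done
      close_all_opened_tables(ctx);      // close everything opened so far
    }                                    // ... and restart from scratch
  }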
Problem #1: INSERT...SELECT, version for 5.0.
Extended the unique-table check with a check of the lock data, since
MERGE sub-tables cannot be detected by doing name checks only.
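A rough illustration of the extended uniqueness test under simplified
assumptions (OpenTable and its lock_data layout are invented for the
example): two table references are treated as the same table whenever their
opened tables registered a common lock object, which catches a MERGE table
and one of its sub-tables even though their names differ.

  #include <algorithm>
  #include <vector>

  struct OpenTable {
    std::vector<const void *> lock_data;   // opaque per-storage lock objects
  };

  // True if the two opened tables lock any common underlying object.
  bool share_lock_data(const OpenTable &a, const OpenTable &b) {
    for (const void *la : a.lock_data)
      if (std::find(b.lock_data.begin(), b.lock_data.end(), la) !=
          b.lock_data.end())
        return true;
    return false;
  }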
Problem #1: INSERT...SELECT, version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden below a MERGE
table) now works by buffering the select result.
Duplicate detection is now done after open_and_lock_tables(), based on the
locks.
I did not find a test case that failed without the change in sql_update.cc.
I made the change anyway, as it should in theory fix a possible MERGE table
problem with multi-table update.
depending on table order
multi_update::send_data() was counting updates, not updated rows. Thus if
one record had several updates it was counted several times in
'rows matched' even though it was updated only once.
multi_update::send_data() now counts only unique rows.
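A small sketch of the counting change, with stand-in types (UpdateCounter is
invented for the example): a row id contributes to 'rows matched' only the
first time it is seen, so several updates of the same record count once.

  #include <set>
  #include <string>

  struct UpdateCounter {
    std::set<std::string> seen_rowids;
    unsigned long found = 0;            // reported as 'rows matched'

    void count_row(const std::string &rowid) {
      if (seen_rowids.insert(rowid).second)
        ++found;                        // first time we touch this row
    }
  };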
The date field was declared as NOT NULL, so the expression
'datefield is null' was always false. For SELECT there is special handling
of such cases: 'datefield is null' is converted to
'datefield eq "0000-00-00"'.
In mysql_update(), a remove_eq_conds() call is added before creation of the
select. It performs some optimization of the conds and in particular the
conversion from 'is null' to 'eq'.
remove_eq_conds() also does some evaluation of the conds, and if it finds
that the conds are always false the update statement is not processed
further.
All this allows some UPDATE statements to be processed faster thanks to the
optimized conds, and avoids wasting resources when the conds are known to be
false.
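A toy model of the conversion described above, with invented types (DateCond
is not the server's Item tree): an 'IS NULL' test on a NOT NULL date column
is rewritten into an equality with the zero date; the real remove_eq_conds()
additionally folds conditions that are always false so the UPDATE can stop
early.

  #include <string>

  struct DateCond {
    std::string column;
    bool        is_null_test    = false;  // "column IS NULL"
    bool        column_not_null = false;  // column declared NOT NULL
    std::string eq_value;                 // non-empty: "column = eq_value"
  };

  void remove_eq_conds_sketch(DateCond &cond) {
    if (cond.is_null_test && cond.column_not_null) {
      cond.is_null_test = false;
      cond.eq_value = "0000-00-00";       // zero date stands in for NULL
    }
  }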
Not fixed in 4.1 as it is not critical. I am also correcting the error
checking of multi-UPDATE/DELETE when it comes to binlogging, to make it
consistent with when we roll back the statement.
This bug occurs when some trigger for a table used by a DML statement is
created or changed while the statement is waiting in lock_tables(). In this
situation the prelocking set which we have calculated becomes invalid, which
can easily lead to errors and in some cases even to crashes.
With the proposed patch we no longer silently reopen tables in
lock_tables(); instead the caller of lock_tables() becomes responsible for
reopening the tables and recalculating the prelocking set.