columns
Fixed confusing warning.
Quoting the INSERT section of the manual:
----
Inserting NULL into a column that has been declared NOT NULL. For
multiple-row INSERT statements or INSERT INTO ... SELECT statements, the
column is set to the implicit default value for the column data type. This
is 0 for numeric types, the empty string ('') for string types, and the
"zero" value for date and time types. INSERT INTO ... SELECT statements are
handled the same way as multiple-row inserts because the server does not
examine the result set from the SELECT to see whether it returns a single
row. (For a single-row INSERT, no warning occurs when NULL is inserted into
a NOT NULL column. Instead, the statement fails with an error.)
----
This is also true for LOAD DATA INFILE. For INSERT, the user can specify the
DEFAULT keyword as a value to set a column to its default. There is no similar
feature available for LOAD DATA INFILE.
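A minimal sketch of the behaviour described above, assuming non-strict SQL mode
(table and column names are purely illustrative):
----
# Illustrative table: 'a' has no explicit default, 'b' does.
CREATE TABLE t1 (a INT NOT NULL, b VARCHAR(10) NOT NULL DEFAULT 'none');

# Single-row INSERT: NULL into a NOT NULL column fails with an error.
INSERT INTO t1 VALUES (NULL, 'x');

# Multiple-row INSERT: the NULL is converted to the implicit default
# (0 for numeric types) and a warning is issued instead of an error.
INSERT INTO t1 VALUES (NULL, 'y'), (2, 'z');

# For INSERT, the DEFAULT keyword requests the column default explicitly;
# LOAD DATA INFILE has no equivalent.
INSERT INTO t1 (a, b) VALUES (3, DEFAULT);
----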
Added HA_NULL_IN_KEY to table flags to allow for nullable unique indexes
and added a test to verify it.
ha_federated.h:
BUG #15133 "unique index with nullable value not accepted in federated table"
added HA_NULL_IN_KEY to table flags to allow for nullable unique indexes
federated.test:
BUG #15133 "unique index with nullable value not accepted in federated table"
New test to show that nullable unique indexes work
federated.result:
BUG #15133 "unique index with nullable value not accepted in federated table"
New results for new test
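A rough sketch of what the new test exercises (server names and the connection
string are illustrative):
----
# On the remote server: the backing table, with a nullable column in a
# unique index.
CREATE TABLE t1 (id INT, name VARCHAR(32), UNIQUE KEY (name)) ENGINE=MyISAM;

# On the local server: the FEDERATED table. Before the fix, a unique index
# over a nullable column was rejected because the engine did not advertise
# HA_NULL_IN_KEY.
CREATE TABLE t1 (id INT, name VARCHAR(32), UNIQUE KEY (name))
  ENGINE=FEDERATED
  CONNECTION='mysql://user@remote_host:3306/test/t1';

# Multiple NULLs in a unique index do not conflict with each other.
INSERT INTO t1 VALUES (1, NULL), (2, NULL);
----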
The Federated storage engine used Field methods that had arbitrary limits on
the amount of data they could process, which caused problems with data
over that limit (4K). By removing those Field methods and just using
features of the String class, we can avoid this problem.
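A rough way to see the symptom, assuming a FEDERATED table pointing at a remote
copy (the connection string and names are hypothetical):
----
# Hypothetical FEDERATED table with a large text column; before the fix,
# values longer than 4K could be mishandled when building the remote query.
CREATE TABLE big_t (id INT, doc TEXT)
  ENGINE=FEDERATED
  CONNECTION='mysql://user@remote_host:3306/test/big_t';

INSERT INTO big_t VALUES (1, REPEAT('a', 5000));
SELECT LENGTH(doc) FROM big_t WHERE id = 1;   # expect 5000 after the fix
----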
Changed the error reporting (and fixed a crash) when inserting data into a
MERGE table that has no underlying tables or no INSERT_METHOD specified:
such a table is now reported as read-only.
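For illustration (names are arbitrary):
----
# A MERGE table created without underlying tables and without INSERT_METHOD.
CREATE TABLE m1 (a INT) ENGINE=MERGE;

# This INSERT now fails with a "table is read only" error instead of the
# previous confusing error (or crash).
INSERT INTO m1 VALUES (1);
----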
Fix random failures in test 'wait_timeout' that depend on exact timing.
1. Force a reconnect initially if necessary, as otherwise slow startup
might have caused a connection timeout before the test can even start.
2. Explicitly disconnect the first connection to remove confusion about
which connection aborts from timeout, causing test failure.
a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on auto_increment_* variables, and this value
was bigger than the column's max possible value, then that max possible
value was inserted (after issuing a warning). But this didn't honour
auto_increment_* variables (and so could cause conflicts in a master-master
replication where one master is supposed to generate only even numbers,
and the other only odd numbers), so now we "round down" this max possible
value to honour auto_increment_* variables, before inserting it.
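A sketch of the master-master style setup the fix protects, assuming one server
is configured to generate only even values (the table is illustrative):
----
# This server generates only even auto_increment values.
SET SESSION auto_increment_increment = 2;
SET SESSION auto_increment_offset = 2;

CREATE TABLE t1 (id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);

# If the generated value would exceed the column maximum (255 for TINYINT
# UNSIGNED), the value actually inserted is now rounded down so that it still
# honours auto_increment_increment/offset (i.e. it stays even here), rather
# than being the raw column maximum 255 (odd), which could collide with the
# other master's values.
----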
"temporary table with data directory option fails"
MyISAM should not use the user-specified table name when creating
temporary tables; it should use the generated, connection-specific real name.
Test included.
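The kind of statement that used to fail (the directory path is just an example):
----
# A temporary MyISAM table with an explicit DATA DIRECTORY. The data and
# index files are now created under the generated, connection-specific name
# instead of the user-visible table name.
CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM
  DATA DIRECTORY='/tmp/mysql-tmp-data';
----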
auto_increment breaks binlog":
if the slave's table had a higher auto_increment counter than the master's (even
though all rows of the two tables were identical), then in some cases
REPLACE and INSERT ON DUPLICATE KEY UPDATE failed to replicate correctly
under statement-based replication (they inserted different values on the slave
than on the master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assignment that introduced non-determinism of the statement on the slave, and
thus the bug. For ON DUPLICATE, we replace that assignment by a call to
handler::adjust_next_insert_id_after_explicit_value() which is deterministic
(does not depend on slave table's autoinc counter). For REPLACE, this
assignment can simply be removed (as REPLACE can't insert a number larger
than thd->next_insert_id).
We also move a too-early restore_auto_increment() call down to the point where
we really know that we can restore the value.
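A sketch of the scenario (the table definition is illustrative): with
statement-based replication, the statements below must generate the same
auto_increment id on master and slave even if the slave's internal counter is
higher.
----
CREATE TABLE t1 (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  k  INT NOT NULL,
  v  INT,
  UNIQUE KEY (k)
);

# If k=10 does not exist yet, a new row is inserted and an id is generated.
# Before the fix, resetting thd->next_insert_id made the generated id depend
# on the slave table's own auto_increment counter, so the slave could insert
# a different id than the master.
INSERT INTO t1 (k, v) VALUES (10, 1)
  ON DUPLICATE KEY UPDATE v = VALUES(v);

REPLACE INTO t1 (k, v) VALUES (20, 2);
----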
run at startup"
The server returned an error when trying to execute an init-file containing a
stored procedure that could return multiple result sets to the client.
A stored procedure can return multiple result sets if it contains
PREPARE, SELECT, SHOW and similar statements.
The fix is to set client_capabilities|=CLIENT_MULTI_RESULTS in
sql_parse.cc:handle_bootstrap(). There is no real "client" in this case, so
nothing is ever sent. This makes the init-file feature behave consistently:
prepared statements that can be called directly in the init-file
can be used in a stored procedure too.
Re-committed the patch originally submitted by Per-Erik after review.
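For example, an init-file along these lines (contents purely illustrative) now
runs cleanly, because the bootstrap "client" accepts multiple result sets:
----
# contents of the file passed via --init-file
CREATE DATABASE IF NOT EXISTS init_db;
CREATE PROCEDURE init_db.p1() SELECT 'init-file ran' AS msg;

# Calling the procedure returns a result set; before the fix this caused an
# error during startup because CLIENT_MULTI_RESULTS was not set.
CALL init_db.p1();
----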
invoker name
The bug was fixed similarly to how the context switch is handled in
Item_func_sp::execute_impl(): we store a pointer to the current
Name_resolution_context in the Item_func_current_user class and use
its Security_context in Item_func_current_user::fix_fields().
NDB table".
The SQL layer was not marking fields which were used in triggers as such. As a
result, these fields were not always properly retrieved/stored by the handler
layer, so one might get wrong values or lose changes in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
Also this patch contains the following cleanup of ha_ndbcluster code:
We no longer rely on reading the LEX::sql_command value in the handler in order
to determine whether we can enable the optimization that allows us to handle
REPLACE statements more efficiently by doing replaces directly in the
write_row() method without reporting an error to the SQL layer.
Instead we rely on the SQL layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases when it should not
be used (e.g. if we have ON DELETE triggers on the table) and use it in some
additional cases when it is applicable (e.g. for LOAD DATA REPLACE).
Finally, this patch includes a fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem caused by improper field mark-up.
During row replacement, fields which weren't explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they
retained values from the old row version. The fix is to mark all table
fields as set for REPLACE statements. Note that in 5.1 this problem is
already solved by notifying the handler that it should save values from all
fields only in the case when a real replacement happens.
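A sketch of the bug#20728 scenario (the table definition is illustrative):
----
CREATE TABLE t1 (
  pk    INT NOT NULL PRIMARY KEY,
  u     INT NOT NULL,
  other INT NOT NULL DEFAULT 0,
  UNIQUE KEY (u)
) ENGINE=NDBCLUSTER;

INSERT INTO t1 VALUES (1, 10, 42);

# The REPLACE below does not mention 'other'. Before the fix the replaced row
# could keep other = 42 from the old row version; with all fields marked as
# set, the new row gets the column default (0) as expected.
REPLACE INTO t1 (pk, u) VALUES (1, 10);
SELECT other FROM t1 WHERE pk = 1;   # expect 0
----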