Follow-up for bug #46947 "Embedded SELECT without FOR UPDATE is causing a lock".
This patch tries to address problems which were exposed
during the backporting of the original patch to the 5.1 tree.
- It ensures that we don't change the locking behavior of simple
SELECT statements on InnoDB tables when they are executed
under LOCK TABLES ... READ and with @@innodb_table_locks=0
(see the sketch after this list). We also no longer pass the
TL_READ_DEFAULT/TL_WRITE_DEFAULT lock types, which are
supposed to be parser-only, to the handler::start_stmt() method.
- It makes the check_/no_concurrent_insert.inc auxiliary scripts
more robust against changes in the test cases that use them,
and also ensures that they don't unnecessarily change the
environment of the caller.
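A minimal sketch of the first scenario (the table name t1 is
hypothetical):

  SET @@innodb_table_locks = 0;
  LOCK TABLES t1 READ;
  SELECT * FROM t1;  -- must keep its pre-patch InnoDB locking behavior
  UNLOCK TABLES;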
The problem was that OPTIMIZE TABLE was allowed to run on a table
in use by a transaction in a different connection. This caused
repeatable read to break.
This bug was fixed by the introduction of metadata locking, WL#4284.
OPTIMIZE TABLE will now be blocked until the transaction using the
table has ended.
This patch contains a regression test added to innodb_mysql_lock.test
and no code changes.
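A sketch of the scenario covered by the regression test (connection
labels and the table name t1 are illustrative):

  -- Connection 1:
  START TRANSACTION;
  SELECT * FROM t1;   -- t1 is now in use by this transaction

  -- Connection 2:
  OPTIMIZE TABLE t1;  -- now blocks until connection 1 commits or
                      -- rolls back, instead of breaking repeatable read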
for ALTER TABLE, LOAD DATA).
ROW_COUNT is now assigned according to the following rules:
- In my_ok():
- for DML statements: to the number of affected rows;
- for DDL statements: to 0.
- In my_eof(): to -1 to indicate that there was a result set.
We derive these semantics from the JDBC specification, where int
java.sql.Statement.getUpdateCount() is defined to (sic) "return the
current result as an update count; if the result is a ResultSet
object or there are no more results, -1 is returned".
- In my_error(): to -1 to be compatible with the MySQL C API and
MySQL ODBC driver.
- For SIGNAL statements: to 0 per WL#2110 specification. Zero is used
since that's the "default" value of ROW_COUNT in the diagnostics area.
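For example, assuming a table t1, the rules above give (a sketch,
not output from the actual test suite):

  UPDATE t1 SET a = a + 1;  -- DML: ROW_COUNT() is the number of
  SELECT ROW_COUNT();       --      affected rows
  TRUNCATE TABLE t1;        -- DDL: ROW_COUNT() is 0
  SELECT ROW_COUNT();
  SELECT * FROM t1;         -- result set: ROW_COUNT() is -1
  SELECT ROW_COUNT();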
data is selected or not
Temporary and permanent tables should live in different
namespaces. In this case, resolving a permanent table
name gave the temporary table, resulting in a name
collision.
The bug happened under the following conditions:
- there was a user variable of type REAL, containing a NULL value;
- there was a table with a NOT NULL column of any type but REAL,
having a default value (or auto-increment);
- a row was inserted into the table with the user variable as the value.
A warning was emitted in this case.
The problem was that the handling of NULL values of REAL type was not
properly implemented: the code didn't expect that a REAL NULL value
can be assigned to another data type.
Basically, the problem was that set_field_to_null() was used instead of
set_field_to_null_with_conversions().
The fix is to use the right function, or more generally, to allow conversion of
REAL NULL values to other data types.
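A sketch of the failing scenario (names are hypothetical, and the
expression used to give @v a REAL type holding NULL is an assumption):

  CREATE TABLE t1 (a INT NOT NULL DEFAULT 1);
  SET @v = 0e0 + NULL;         -- @v: REAL-typed user variable, NULL value
  INSERT INTO t1 VALUES (@v);  -- previously mishandled; with
                               -- set_field_to_null_with_conversions()
                               -- the normal NULL-to-NOT-NULL conversion
                               -- (and warning) applies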
Problem:
item->name was NULL for Item_user_var_as_out_param
which made strcmp(something, item->name) crash in the LOAD XML code.
Fix:
- item_func.h: Adding set_name() in the constructor for Item_user_var_as_out_param
- sql_load.cc: Changing the condition in write_execute_load_query_log_event() which
distinguished between Item_user_var_as_out_param and Item_field
from
if (item->name == NULL)
to
if (item->type() == Item::FIELD_ITEM)
- loadxml.result, loadxml.test: adding tests
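A sketch of the kind of statement that crashed (the file name, table,
and column list are hypothetical):

  LOAD XML INFILE 'rows.xml' INTO TABLE t1
    ROWS IDENTIFIED BY '<row>'
    (a, @b)        -- @b is an Item_user_var_as_out_param with no name
    SET b = @b;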
table
If a temporary table A exists, and a (permanent) table
with the same name is attempted created with
"CREATE TABLE ... AS SELECT", the create would fail with
an error.
1050: Table 'A' already exists
The error occurred in MySQL 5.1 releases, but is not
present in MySQL 5.5. This patch adds a regression
test to ensure that the problem does not reoccur.
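A sketch of the failing scenario:

  CREATE TEMPORARY TABLE A (i INT);
  CREATE TABLE A AS SELECT 1 AS i;
  -- In the affected 5.1 releases the second statement failed with
  -- ERROR 1050: Table 'A' already exists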
Problem: after the introduction of "WL#2649 Number-to-string conversions",
this query:
SET NAMES cp850; -- Or any other non-latin1 ASCII-based character set
SELECT * FROM t1
WHERE datetime_column='2010-01-01 00:00:00'
started to add an extra character set conversion:
SELECT * FROM t1
WHERE CONVERT(datetime_column USING cp850)='2010-01-01 00:00:00';
so the index on the DATETIME column was no longer used.
Fix:
avoid conversion of NUMERIC/DATETIME items
(i.e. those with derivation DERIVATION_NUMERIC).
Clarified error messages related to unsafe statements:
- avoid the internal technical term "row injection"
- use 'binary log' instead of 'binlog'
- avoid the word 'unsafeness'
Fix for bug #46947 "Embedded SELECT without FOR UPDATE is
causing a lock", with after-review fixes.
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression introduced by the fix for
bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. With statement or
mixed mode binary logging enabled, this lock type was converted
to a TL_READ_NO_INSERT lock at open_tables() time and caused
the InnoDB engine to acquire shared locks on reads from these
tables. Although in some cases such behavior was correct (e.g.
for subqueries in DELETE), in the case of SELECT it caused
unnecessary locking.
This patch tries to solve this problem by rethinking our
approach to how we handle locking for SELECT and subqueries.
Now we always set the TL_READ_DEFAULT lock type for all cases
when we read data. At open_tables() time this lock is then
interpreted as TL_READ_NO_INSERT or TL_READ, depending on
whether the statement as a whole, or the call to the function
which uses the particular table, should be written to the
binary log or not (if yes, the statement should be properly
serialized with concurrent statements and the stronger lock
should be acquired).
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in the locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT ... LOCK IN SHARE MODE.
In 4.1 the server would use a snapshot InnoDB read for
subqueries in SELECT FOR UPDATE and SELECT ... LOCK IN SHARE MODE
statements, regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking read),
he/she could request it explicitly by providing a FOR UPDATE/
LOCK IN SHARE MODE clause for each individual subquery.
One of the patches for 5.0 broke this behaviour (which was not
documented or tested), and started to use locking reads for all
subqueries in SELECT ... FOR UPDATE/LOCK IN SHARE MODE. This patch
restores the 4.1 behaviour.
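For example, under the restored behaviour the subquery read below is
a snapshot read unless a locking clause is given for the subquery
itself (a sketch; table and column names are illustrative):

  SELECT * FROM t1
  WHERE c1 = (SELECT c1 FROM t2)             -- subquery: snapshot read
  FOR UPDATE;                                -- outer query: locking read

  SELECT * FROM t1
  WHERE c1 = (SELECT c1 FROM t2 FOR UPDATE)  -- subquery: locking read
  FOR UPDATE;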
transaction
BUG#52616 Temp table prevents switch binlog format from STATEMENT to ROW
Before WL#2687 and BUG#46364, every non-transactional change that happened
after a transactional change was written to the trx-cache and flushed upon
committing the transaction. WL#2687 and BUG#46364 changed this behavior:
non-transactional changes are now written to the binary log upon committing
the statement.
A binary log event is identified as transactional or non-transactional through
a flag in the Log_event, which is set based on the underlying storage
engine the change stems from. In the current bug, this flag was not being
set properly when DROP TEMPORARY TABLE was executed.
However, while fixing this bug we figured out that changes to temporary tables
should always be written to the trx-cache if there is an ongoing transaction.
Otherwise, binlog events would be produced in the reverse order.
Regarding concurrency, keeping changes to temporary tables in the trx-cache is
also safe as temporary tables are only visible to the owner connection.
In this patch, we classify the following statements as unsafe:
1 - INSERT INTO t_myisam SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_myisam
3 - CREATE TEMPORARY TABLE t_myisam_temp SELECT * FROM t_myisam
On the other hand, the following statements are classified as safe:
1 - INSERT INTO t_innodb SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_innodb
The patch also guarantees that transactions that contain a DROP TEMPORARY
TABLE are always written to the binary log regardless of the mode and the
outcome: commit or rollback. In particular, the DROP TEMPORARY TABLE is
extended with the IF EXISTS clause when the current statement logging
format is set to row.
Finally, the patch allows switching from STATEMENT to MIXED/ROW when there
are temporary tables, but the reverse is not possible, as sketched below.
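A sketch of the allowed and rejected switches while a temporary table
exists:

  SET SESSION binlog_format = STATEMENT;
  CREATE TEMPORARY TABLE tmp1 (a INT);
  SET SESSION binlog_format = ROW;        -- allowed: STATEMENT -> MIXED/ROW
  SET SESSION binlog_format = STATEMENT;  -- rejected while tmp1 exists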
This assertion could be triggered during execution of OPTIMIZE TABLE for
InnoDB tables. As part of OPTIMIZE, an InnoDB table is recreated
and then opened again. If the reopen failed for any reason, the assertion
would be triggered. This could for example be caused by a concurrent DROP
TABLE executed by a different connection. The reason for the assertion was
that any failures during reopening were ignored.
This patch fixes the problem by making sure that the result of reopening the
table is checked and that any error messages are sent to the client.
Test case added to innodb_mysql_sync.test.
This was a deadlock between CREATE/ALTER/DROP EVENT and a query
accessing both the mysql.event table and I_S.GLOBAL_VARIABLES.
The root of the problem was that the LOCK_event_metadata mutex was
used to both protect the "event_scheduler" global system variable
and the internal event data structures used by CREATE/ALTER/DROP EVENT.
The deadlock would occur if CREATE/ALTER/DROP EVENT held
LOCK_event_metadata while trying to open the mysql.event table,
at the same time as the query had mysql.event open, trying to
lock LOCK_event_metadata to access "event_scheduler".
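A query of the kind involved might look like this (a hypothetical
sketch, not the exact test-case query):

  SELECT e.name, v.variable_value
  FROM mysql.event e,
       information_schema.global_variables v
  WHERE v.variable_name = 'event_scheduler';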
This bug was fixed in the scope of Bug#51160 by using only
LOCK_global_system_variables to protect "event_scheduler".
This makes it so that the query above won't lock LOCK_event_metadata,
thereby preventing this deadlock from occurring.
This patch contains no code changes.
Test case added to lock_sync.test.
even if myisam-recover is OFF
The problem was that a corrupted MyISAM table was auto repaired
even if the myisam_recover_options server variable (or the
myisam_recover option) was set to OFF.
The reason was that the auto_repair() function, which is supposed
to say if auto repair is to be used, did not use the server variable
setting correctly. This bug was a regression introduced by WL#4738.
This patch fixes the problem by making sure auto_repair() returns
FALSE if myisam_recover_options is set to OFF.
Test case added to myisam.test.
for write by another connection
The problem was that if a table was locked in one connection by
LOCK TABLES ... WRITE, REPAIR TABLE or OPTIMIZE TABLE, SHOW CREATE
TABLE from another connection would be blocked. As SHOW CREATE TABLE
only reads metadata about the table, such blocking is not needed.
The problem was that when SHOW CREATE TABLE tried to get a metadata
lock on the table in order to open it, it used the wrong type of
metadata lock request. It used MDL_SHARED_READ which is used when
the intent is to read both table metadata and table data. Instead
it should have used MDL_SHARED_HIGH_PRIO which signifies an intent
to only read metadata.
This patch fixes the problem by making sure SHOW CREATE TABLE uses
the MDL_SHARED_HIGH_PRIO metadata lock request type when trying to
open the table. The patch also fixes a similar problem with the
mysql_list_fields API call.
Test case added to show_check.test.
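A sketch of the scenario that no longer blocks:

  -- Connection 1:
  LOCK TABLES t1 WRITE;

  -- Connection 2:
  SHOW CREATE TABLE t1;  -- previously blocked; now proceeds, since it
                         -- only reads metadata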
Allow stored procedure variables in LIMIT clause.
Only allow variables of INTEGER types.
Handle negative values by means of an implicit cast to UNSIGNED
(similarly to prepared statement placeholders).
Add tests.
Make sure replication works by not doing NAME_CONST substitution
for variables in LIMIT clause.
Add replication tests.
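A sketch of what is now allowed (the procedure and table names are
illustrative):

  CREATE PROCEDURE p1(p_limit INT)  -- only INTEGER types are allowed
  BEGIN
    SELECT * FROM t1 LIMIT p_limit;
  END;
  -- A negative value is implicitly cast to UNSIGNED, as with
  -- prepared statement placeholders.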
ChangeSet@1.2703, 2007-12-07 09:35:28-05:00, cmiller@zippy.cornsilk.net +40 -0
Bug#13174: SHA2 function
Patch contributed by Bill Karwin, paper unnumbered CLA in Seattle
Implement SHA2 functions.
Chad added code to make it work with YaSSL. Also, he removed the
(probable) bug of embedded server never using SSL-dependent
functions. (libmysqld/Makefile.am didn't read ANY autoconf defs.)
Function specification:
SHA2( string cleartext, integer hash_length )
-> string hash, or NULL
where hash_length is one of 224, 256, 384, or 512. If either
argument is NULL or the length is unsupported, then the result is
NULL. Otherwise the result is the hash of the requested bit length,
returned as a string of hexadecimal digits.
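For example:

  SELECT SHA2('abc', 256);
  -- -> 'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad'
  SELECT SHA2('abc', 100);  -- unsupported length -> NULL
  SELECT SHA2(NULL, 256);   -- NULL input -> NULL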
Include the canonical hash examples from the NIST in the test
results.
---
Polish and address concerns of reviewers.
Problem: Segmentation fault in add_group_and_distinct_keys() when accessing
a field of what is assumed to be an Item_field object.
Cause: In the case of views, the item added to the list by
is_indexed_agg_distinct() was not of type Item_field, but Item_ref.
Resolution: Add the real Item_field object, the one referred to by the
Item_ref object, to the list instead.
The failing assertion was written with the assumption that a NULL
string can never be passed to my_strtod(). However, an empty string
may be passed under some circumstances by passing str == NULL and
*end == NULL.
Fixed the assertion to take the above case into account.
Conflicts:
Text conflict in client/mysqlbinlog.cc
Text conflict in mysql-test/Makefile.am
Text conflict in mysql-test/collections/default.daily
Text conflict in mysql-test/r/mysqlbinlog_row_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_typeconv_innodb.result
Text conflict in mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test
Text conflict in mysql-test/suite/rpl/t/rpl_row_create_table.test
Text conflict in mysql-test/suite/rpl/t/rpl_slave_skip.test
Text conflict in mysql-test/suite/rpl/t/rpl_typeconv_innodb.test
Text conflict in mysys/charset.c
Text conflict in sql/field.cc
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_func.cc
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/rpl_utility.cc
Text conflict in sql/rpl_utility.h
Text conflict in sql/set_var.cc
Text conflict in sql/share/Makefile.am
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_plugin.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_table.cc
Text conflict in storage/example/ha_example.h
Text conflict in storage/federated/ha_federated.cc
Text conflict in storage/myisammrg/ha_myisammrg.cc
Text conflict in storage/myisammrg/myrg_open.c
Problem: caseup_multiply and casedn_multiply members
were not initialized for a dynamic collation, so
UPPER() and LOWER() functions returned empty strings.
Fix: initializing the members properly.
Adding tests:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
Applying the fix:
mysys/charset.c
(Original patch by Sinisa Milivojevic)
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified to no longer adjust the
bad YEAR(4) value to 2000.
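A sketch of the kind of comparison affected (the table name is
hypothetical):

  CREATE TABLE t1 (y YEAR(4));
  INSERT INTO t1 VALUES (2000), (0);  -- numeric 0 is stored as 0000
  SELECT * FROM t1 WHERE y = 2000;
  -- Before the fix, the bad value 0000 was adjusted to 2000 during
  -- comparison, so both rows matched; now only the 2000 row does.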
Conflicts:
Text conflict in mysql-test/r/partition_innodb.result
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/item_sum.h
Text conflict in sql/log_event_old.cc
Text conflict in sql/protocol.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_yacc.yy
The problem is that when we build the condition for a
grouped result, the const part of the condition is cut off.
This happens because some parts of the HAVING condition
which refer to an outer join become const after
make_join_statistics(). These parts may be lost during
further HAVING condition transformations in JOIN::exec().
The fix is to add a check of the HAVING condition for const
tables after make_join_statistics() is performed.
This has been back-ported from 6.0 as the problems proved to afflict
5.1 as well.
The fix exposed two new bugs. They were reported as follows.
Bug no 52174: Sometimes wrong plan when reading a MAX value
from non-NULL index
Bug no 52173: Reading NULL value from non-NULL index gives wrong
result in embedded server
Both bugs taken together affect a much smaller class of queries than #47762,
so the fix stays for now.
The optimizer erroneously translated a LEFT JOIN into an INNER JOIN,
which led to rows with a NULL right side being cut off. This happened
because Item_row used the not_null_tables() method from the
base (Item) class and did not calculate the 'null tables'
properly. The fix is to add a proper calculation of 'not null tables'
to Item_row.
The crash happened because of a discrepancy between the values of
const_tables and join->const_table_map (make_join_statistics).
The calculation of const_tables used a condition with a check of
the HA_STATS_RECORDS_IS_EXACT flag; the calculation of
join->const_table_map did not use this flag check.
In the case of a MERGE table without a UNION list but with an
index, the table does not become a const table and thus
join_read_const_table() is not called for it.
join->const_table_map, however, assumes this table is const, and
later in make_join_select the table is used for building and
evaluating the const condition. As the table record buffer is not
populated, this leads to a crash.
The fix is to add a check of whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
All numeric operators and functions on integer, floating point
and DECIMAL values now throw an 'out of range' error rather
than returning an incorrect value or NULL, when the result is
out of supported range for the corresponding data type.
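For example (a sketch of the new behaviour):

  SELECT 9223372036854775807 + 1;
  -- previously could return an incorrect value or NULL;
  -- now raises a BIGINT 'out of range' error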
Some test cases in the test suite had to be updated
accordingly either because the test case itself relied on a
value returned in case of a numeric overflow, or because a
numeric overflow was the root cause of the corresponding bugs.
The latter tests are no longer relevant, since the expressions
used to trigger the corresponding bugs are not valid anymore.
However, such test cases have been adjusted and kept "for the
record".
reverse DNS lookup of "localhost" returns "broadcasthost" on Snow Leopard, and NULL on most others.
Simply ignore the output, as this is not an essential part of UDF testing.
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification) we need to disable
in-place alter table because we can't ask the storage engine to have
two copies of the index with the same name even temporarily (if we
first do the ADD INDEX and then DROP INDEX) and we can't modify
indexes that are needed by e.g. foreign keys if we first do
DROP INDEX and then ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
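A sketch of an index modification that now disables in-place
ALTER TABLE (names are illustrative):

  ALTER TABLE t1 DROP INDEX idx_a, ADD INDEX idx_a (b);
  -- The same index name is dropped and re-added in one statement,
  -- so the copy algorithm is used instead of in-place alter.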