bug #51263 "Deadlock between transactional SELECT and ALTER
TABLE ... REBUILD PARTITION", which had been causing a
compilation error when the server was built with NDB support.
"ALTER TABLE RENAME is allowed on views (not documented, broken)".
Remove support for ALTER TABLE RENAME on views because:
a) this feature was not documented,
b) it does not add any compatibility with other databases,
c) its implementation doesn't follow the metadata locking
protocol, as it accesses the .FRM file without holding any
metadata lock,
d) its implementation complicates ALTER TABLE's code
by introducing yet another separate branch to it.
After this patch one can rename a view by using the
documented way: the RENAME TABLE statement.
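For illustration, a hypothetical session (view names are made up):

  CREATE VIEW v1 AS SELECT 1;
  RENAME TABLE v1 TO v2;        -- the documented way, still supported
  ALTER TABLE v2 RENAME TO v3;  -- rejected for views after this patch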
Post-merge fix: Pass the right parameter type to open_and_lock_tables.
Passing FALSE ensures that derived table handling is disabled,
so TRUNCATE only operates on base tables.
The problem was that mdl_sync.test was failing sporadically
because part of the test didn't take into account the
effects of MyISAM's concurrent insert.
This patch solves the problem by making the test case robust
against concurrent insert.
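A common way to make such a test deterministic (not necessarily
the exact approach taken by this patch) is to disable MyISAM's
concurrent insert around the lock-sensitive part:

  SET @old_ci = @@global.concurrent_insert;
  SET GLOBAL concurrent_insert = 0;  -- INSERTs now wait for table locks
  -- ... lock-sensitive part of the test ...
  SET GLOBAL concurrent_insert = @old_ci;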
Bug #51263 "Deadlock between transactional SELECT and ALTER
TABLE ... REBUILD PARTITION".
ALTER TABLE on an InnoDB table (including partitioned tables)
acquired exclusive locks on the rows of the table being
altered. When there was a concurrent transaction doing locking
reads from this table, this sometimes led to a deadlock which
was detected neither by the MDL subsystem nor by the InnoDB
engine (and was reported only after innodb_lock_wait_timeout
was exceeded).
This problem stemmed from the fact that ALTER TABLE acquired
a TL_WRITE_ALLOW_READ lock on the table being altered. This
lock was interpreted as a write lock, so the
handler::external_lock() method was called with F_WRLCK as an
argument for the table being altered. As a result the InnoDB
engine treated ALTER TABLE as an operation which was going to
change data, and acquired LOCK_X locks on the rows being read
from the old version of the table.
When there was a transaction which had already acquired an SR
metadata lock on the table and some LOCK_S locks on its rows
(e.g. by using it in a subquery of a DML statement), a
concurrent ALTER TABLE was blocked at the moment it tried to
acquire a LOCK_X lock before reading one of these rows.
The transaction's attempt to acquire an SW metadata lock on
the table being altered then led to a deadlock, since it had
to wait for ALTER TABLE to release its SNW lock. This deadlock
was not detected, and got resolved only after the timeout
expired, because the two waits were happening in two different
subsystems. Similar deadlocks could have occurred in other
situations (a two-connection sketch is given below).
This patch solves the problem by changing the ALTER TABLE
implementation to use a TL_READ_NO_INSERT lock instead of
TL_WRITE_ALLOW_READ. After this change handler::external_lock()
is called with F_RDLCK as an argument, and the InnoDB engine
correctly interprets ALTER TABLE as an operation which only
reads data from the original version of the table. Thanks to
this, ALTER TABLE acquires only LOCK_S locks on the rows it
reads. This, in turn, makes the inter-subsystem deadlocks go
away, as all potential lock conflicts, and thus deadlocks, are
limited to the metadata locking subsystem:
- When ALTER TABLE reads rows from the table being altered,
it cannot encounter any locks which conflict with its LOCK_S
row locks: there should be no concurrent transactions holding
LOCK_X row locks, since such a transaction would have had to
acquire an SW metadata lock on the table first, which would
have conflicted with ALTER's SNW lock.
- Vice versa, when a DML statement running concurrently with
ALTER TABLE tries to lock a row, it should request only a
LOCK_S lock, which is compatible with the locks acquired by
ALTER, as otherwise such DML would have to own an SW metadata
lock on the table, which would be incompatible with ALTER's
SNW lock.
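A minimal two-connection sketch of the original deadlock
(table, partition and column names are illustrative):

  -- connection 1
  BEGIN;
  SELECT * FROM t1 WHERE pk = 1 LOCK IN SHARE MODE;
  -- holds SR metadata lock + LOCK_S row lock

  -- connection 2
  ALTER TABLE t1 REBUILD PARTITION p0;
  -- holds SNW metadata lock; with the old TL_WRITE_ALLOW_READ
  -- it blocked trying to get LOCK_X on the row locked above

  -- connection 1
  UPDATE t1 SET val = val + 1 WHERE pk = 1;
  -- needs SW metadata lock, waits for ALTER's SNW lock =>
  -- undetected cross-subsystem deadlock before this patch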
The problem was that TRUNCATE TABLE didn't take an exclusive
lock on a table if it resorted to truncating via deletion of
all rows in the table. Specifically for InnoDB tables, this
could break proper isolation, as InnoDB ends up aborting some
granted locks when truncating a table.
The solution is to take an exclusive metadata lock before
TRUNCATE TABLE can proceed. This guarantees that no other
transaction is using the table.
Incompatible change: truncation via delete no longer fails
if sql_safe_updates is activated (this was an undocumented
side effect).
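A sketch of the isolation problem this addresses (names are
illustrative):

  -- connection 1
  BEGIN;
  SELECT * FROM t1 LOCK IN SHARE MODE;  -- holds locks on t1's rows

  -- connection 2
  TRUNCATE TABLE t1;
  -- previously could proceed via row-by-row delete, aborting
  -- connection 1's granted InnoDB locks; now it waits for an
  -- exclusive metadata lock until connection 1 ends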
Bug #51263 "Deadlock between transactional SELECT and ALTER
TABLE ... REBUILD PARTITION".
The goal of this patch is to decouple the type of metadata
lock acquired for a table by open_tables() from the type of
table-level lock to be acquired on it.
To achieve this we change the approach to determining what
type of metadata lock should be acquired on a table being
opened. Instead of inferring it at open_tables() time from
flags and the type of table-level lock, we now rely on the
type of metadata lock being properly set at parse time and
not changed afterwards.
FOR UPDATE is causing a lock".
This patch addresses problems which were exposed during the
backporting of the original patch to the 5.1 tree.
- It ensures that we don't change the locking behavior of
simple SELECT statements on InnoDB tables when they are
executed under LOCK TABLES ... READ and with
@@innodb_table_locks = 0.
Also, we no longer pass the TL_READ_DEFAULT/TL_WRITE_DEFAULT
lock types, which are supposed to be parser-only, to the
handler::start_stmt() method.
- It makes the check_/no_concurrent_insert.inc auxiliary
scripts more robust against changes in the test cases that
use them, and also ensures that they don't unnecessarily
change the environment of the caller.
The problem was that OPTIMIZE TABLE was allowed to run on a
table in use by a transaction in a different connection. This
caused repeatable read to break.
This bug was fixed by the introduction of metadata locking
(WL#4284): OPTIMIZE TABLE is now blocked until the transaction
using the table has ended.
This patch contains a regression test added to innodb_mysql_lock.test
and no code changes.
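The behavior covered by the regression test, roughly (names
are illustrative):

  -- connection 1
  BEGIN;
  SELECT * FROM t1;   -- transaction now uses t1

  -- connection 2
  OPTIMIZE TABLE t1;  -- blocks until connection 1 commits or
                      -- rolls back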
This was a pure test issue -- the filter implementation in
Perl did not work on some platforms (the bug occurred on
Windows Server 2008 with Cygwin Perl 5.10.0).
multiquery packet).
Background:
- a query can contain multiple SQL statements;
- the server frees resources allocated to process a query when the
whole query is handled. In other words, resources allocated to process
one SQL statement from a multi-statement query are freed when all SQL
statements are handled.
The problem was that the parser allocated a buffer the size of
the whole query for each SQL statement in a multi-statement
query. Thus, if a query had many SQL statements (so the query
was long), but each SQL statement was short, the parser tried
to allocate a huge amount of memory (number of small SQL
statements * length of the whole query).
The memory was allocated for the so-called "cpp buffer", which
is intended to store the pre-processed SQL statement -- the
SQL text without version-specific comments.
The fix is to allocate memory for the "cpp buffer" once for
all SQL statements (i.e. once per query).
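To make the scale of the problem concrete (numbers are
hypothetical): a packet carrying 10,000 statements of roughly
10 bytes each is about 100 KB long, so the old behavior
allocated about 10,000 * 100 KB, i.e. close to 1 GB, for cpp
buffers, while the fixed behavior allocates a single ~100 KB
buffer once per query.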
for ALTER TABLE, LOAD DATA).
ROW_COUNT is now assigned according to the following rules:
- In my_ok():
- for DML statements: to the number of affected rows;
- for DDL statements: to 0.
- In my_eof(): to -1 to indicate that there was a result set.
We derive this semantics from the JDBC specification, where int
java.sql.Statement.getUpdateCount() is defined to (sic) "return the
current result as an update count; if the result is a ResultSet
object or there are no more results, -1 is returned".
- In my_error(): to -1 to be compatible with the MySQL C API and
MySQL ODBC driver.
- For SIGNAL statements: to 0 per WL#2110 specification. Zero is used
since that's the "default" value of ROW_COUNT in the diagnostics area.
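A quick sketch of the resulting behavior (table name is
illustrative; ROW_COUNT() reports on the previous statement):

  CREATE TABLE t1 (a INT);
  SELECT ROW_COUNT();            -- 0: DDL went through my_ok()
  INSERT INTO t1 VALUES (1),(2);
  SELECT ROW_COUNT();            -- 2: number of affected rows
  SELECT * FROM t1;
  SELECT ROW_COUNT();            -- -1: previous statement
                                 -- returned a result set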
- Update/fix file layouts for each package type, add new types for
native package formats including deb, rpm and svr4.
- Build all plugins, including debug versions
- Update compiler flags to match current release
- Add missing @VAR@ expansions
- Install correct mysqlclient library symlinks
- Fix icc/ia64 builds
- Fix install of libmysqld-debug
- Don't include mysql_embedded
- Remove unpackaged manual pages to avoid missing files warnings
- Don't install mtr's test suite
with other merges from the old distribution-specific spec file.
- update copyright notices
- remove __os_install_post override, it was only necessary as a
hack to build debuginfo packages - now that we no longer make
them we can revert to the distribution macro which likely has
other useful bits we might want
- remove _unpackaged_files_terminate_build override, we want to
know of any orphaned files
- include native distribution support
- no longer build separate debuginfo RPMs, instead just include
debug/symbols in all binaries, which is more useful for support
- include support for building commercial RPMs, requires a
commercial source tree
- remove cluster RPM support, we don't build them from this
source tree
- use CMake for building, and update package lists to match the
new install layout/files. Remove any options which were only
useful for automake builds (e.g. yassl/zlib).
- other minor cleanups
via mysqld_safe
The plugin dir was set to a hard-coded path instead of one
relative to the base dir.
This patch fixes this by using a path relative to the basedir
instead of the plugin directory indicated by the configuration.
data is selected or not
Temporary and permanent tables should live in different
namespaces. In this case, resolving a permanent table
name gave the temporary table, resulting in a name
collision.
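A sketch of the kind of setup involved (names are
illustrative):

  CREATE TABLE t1 (a INT);            -- permanent table
  CREATE TEMPORARY TABLE t1 (a INT);  -- same name, different
                                      -- namespace
  -- the bug caused a reference that should resolve to the
  -- permanent t1 to be resolved to the temporary one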
The bug happened under the following conditions:
- there was a user variable of type REAL, containing a NULL value;
- there was a table with a NOT NULL column of any type but REAL,
having a default value (or auto increment);
- a row was inserted into the table with the user variable as
the value. A warning was emitted here.
The problem was that handling of NULL values of REAL type was
not properly implemented: it didn't expect that a REAL NULL
value can be assigned to another data type.
Basically, the problem was that set_field_to_null() was used instead of
set_field_to_null_with_conversions().
The fix is to use the right function, or more generally, to allow conversion of
REAL NULL values to other data types.
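A sketch of the scenario (names are illustrative):

  CREATE TABLE t1 (a INT NOT NULL DEFAULT 0);
  SET @v = 1e0 * NULL;        -- REAL-typed NULL value
  INSERT INTO t1 VALUES (@v); -- the NULL must be converted for
                              -- the NOT NULL INT column, which
                              -- requires
                              -- set_field_to_null_with_conversions()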
Problem:
item->name was NULL for Item_user_var_as_out_param
which made strcmp(something, item->name) crash in the LOAD XML code.
Fix:
- item_func.h: Adding set_name() in the constructor for Item_user_var_as_out_param
- sql_load.cc: Changing the condition in write_execute_load_query_log_event() which
distinguished between Item_user_var_as_out_param and Item_field
from
if (item->name == NULL)
to
if (item->type() == Item::FIELD_ITEM)
- loadxml.result, loadxml.test: adding tests
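The kind of statement affected, roughly (file, table and
column names are illustrative):

  LOAD XML LOCAL INFILE 'rows.xml' INTO TABLE t1 (a, @skip)
  SET b = @skip + 1;
  -- @skip is parsed into an Item_user_var_as_out_param, whose
  -- name member was NULL before this fix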
table
If a temporary table A exists and an attempt is made to create
a (permanent) table with the same name using
"CREATE TABLE ... AS SELECT", the create would fail with
an error:
1050: Table 'A' already exists
The error occurred in MySQL 5.1 releases, but is not
present in MySQL 5.5. This patch adds a regression
test to ensure that the problem does not reoccur.
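The regression case, roughly:

  CREATE TEMPORARY TABLE A (x INT);
  CREATE TABLE A AS SELECT 1 AS x;  -- failed with error 1050 in
                                    -- affected 5.1 releases;
                                    -- succeeds in 5.5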
Problem: after the introduction of "WL#2649 Number-to-string
conversions", this query:
SET NAMES cp850; -- Or any other non-latin1 ASCII-based character set
SELECT * FROM t1
WHERE datetime_column='2010-01-01 00:00:00'
started to add extra character set conversion:
SELECT * FROM t1
WHERE CONVERT(datetime_column USING cp850)='2010-01-01 00:00:00';
so index on DATETIME column was not used anymore.
Fix:
avoid conversion of NUMERIC/DATETIME items
(i.e. those with derivation DERIVATION_NUMERIC).
There were two problems here:
1. misleading error message
2. abusing KILL QUERY in the test case
1. The server reported "'DELETE FROM t1' failed: 1689: Wait on a lock was
aborted due to a pending exclusive lock", while the proper error message
should be "'DELETE FROM t1' failed: 1317: Query execution was interrupted".
The problem is that the server has two different flags for
signalling that a query is being killed: THD::killed and
mysys_var::abort. The test case triggers a race: sometimes
mysys_var::abort is set earlier than THD::killed. That leads
to the following situation:
- thr_lock() checks mysys_var::abort and returns error status,
since mysys_var::abort is set;
- the caller (mysql_lock_tables()) gets an error from thr_lock(),
but THD::killed is not set, so it decides that thr_lock() couldn't
get a lock due to a pending exclusive lock.
This is a known issue with the server and it's not going to be fixed soon.
5.5 differs from 5.1 here as follows: when thr_lock() returns
an error,
- 5.1 continues trying thr_lock() until success;
- 5.5 propagates the error.
2. The test case uses KILL QUERY in a highly concurrent
environment.
The fix is to wait for the dying statement to rest in peace before
executing another DELETE FROM t1.
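One way to implement such a wait in the test (a sketch; the
actual condition used by the patch may differ) is to poll the
processlist until the killed statement has disappeared, e.g.
via include/wait_condition.inc:

  SELECT COUNT(*) = 0 FROM information_schema.processlist
  WHERE info = 'DELETE FROM t1';
  -- repeat until true before issuing the next DELETE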
Clarified error messages related to unsafe statements:
- avoid the internal technical term "row injection"
- use 'binary log' instead of 'binlog'
- avoid the word 'unsafeness'