Diagnostics_area::set_ok_status on DROP FUNCTION
This assert tests that the server is not trying to send "ok" to
the client if an error has occurred during statement processing.
In this case, the assert was triggered by lock timeout errors when
accessing system tables to do an implicit REVOKE after executing
DROP FUNCTION/PROCEDURE. In practice, this was only likely to
happen with very low values for "lock_wait_timeout" (in the bug report
1 second was used). These errors were ignored and the server tried
to send "ok" to the client, triggering the assert.
The patch for Bug#45225 introduced lock timeouts for metadata locks.
This made it possible to get timeouts when accessing system tables.
Note that a follow-up patch for Bug#45225, pushed after this
bug was reported, changed the way system tables are accessed so
that the user-supplied timeout value is ignored and the maximum
timeout value is used instead. This exact bug was therefore
only noticeable in the period between the initial Bug#45225 patch
and the follow-up patch.
However, the same problem could occur for any errors during revoking
of privileges - not just timeouts. This patch fixes the problem by
making sure that any errors during revoking of privileges are
reported to the client.
Test case added to sp-destruct.test. Since the original bug is not
reproducible now that system tables are accessed using a long
timeout value, this test instead calls DROP FUNCTION with a grant
system table missing.
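As an illustration only, the kind of scenario the new test exercises looks
roughly like this (the function name and the exact statements are assumed;
sp-destruct.test has the real test):

    CREATE FUNCTION f1() RETURNS INT RETURN 1;
    -- simulate a missing grant system table
    RENAME TABLE mysql.procs_priv TO mysql.procs_priv_backup;
    -- before the fix the server could still send "ok" here even though
    -- the implicit REVOKE failed; now an error is reported to the client
    DROP FUNCTION f1;
    RENAME TABLE mysql.procs_priv_backup TO mysql.procs_priv;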
Add deprecation warning when variable optimizer_search_depth is given
the value 63.
mysql-test/r/greedy_optimizer.result
  Updated with warning text.
mysql-test/r/mysqld--help-notwin.result
  Updated with warning from mysqld --help --verbose.
mysql-test/r/mysqld--help-win.result
  Updated with warning from mysqld --help --verbose.
sql/sys_vars.cc
  Added an update check function to the constructor invocation for
  the optimizer_search_depth variable. The function emits a
  warning message for the value 63.
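For illustration, the new warning can be provoked roughly as follows (the
exact warning text is the one recorded in greedy_optimizer.result):

    SET SESSION optimizer_search_depth = 63;
    SHOW WARNINGS;   -- a deprecation warning for the value 63 is expected here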
The slave was auto-reconnecting earlier than prescribed by the
slave_net_timeout value.
The issue happened on 64-bit Solaris, which exposed an incorrect cast of
the ulong slave_net_timeout into the uint mysql.options.read_timeout.
Note that there is no reason for slave_net_timeout to be of type ulong.
Since it is primarily passed as an argument to mysql_options(), its type
can be changed to uint to avoid all conversion hassles.
That is what this fix does.
A side effect of the patch is that the maximum value of slave_net_timeout
is now the maximum of the unsigned int type (and therefore varies across
platforms).
Note that a regression test cannot be made to run reliably without making
it last over some 20 seconds. That is why it is placed in suite/large_tests.
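A rough sketch of what the large_tests case has to verify, with assumed
values (it needs a real master/slave setup and a silent period of at least
slave_net_timeout seconds):

    STOP SLAVE;
    SET GLOBAL slave_net_timeout = 20;
    START SLAVE;
    -- with no traffic from the master, the slave I/O thread must not
    -- attempt to reconnect before roughly 20 seconds have passed;
    -- before the fix it could reconnect much earlier on affected platforms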
Extend and implement the grammar that allows FLUSH TABLES ... WITH READ LOCK
to operate on a list of tables, rather than all of them.
Incompatible grammar change:
Previously one could perform FLUSH TABLES, HOSTS, PRIVILEGES in a single
statement.
After this change, FLUSH TABLES must always be alone on the list.
Judging by the test suite, however, the old extended syntax
was never or very rarely used.
The new statement requires RELOAD ACL global privilege and
LOCK_TABLES_ACL | SELECT_ACL on individual tables.
In other words, it's an atomic combination of LOCK TABLES <list> READ
and FLUSH TABLES <list>, and requires the respective privileges.
For additional information about the semantics, please
see WL#5000 and the comment for flush_tables_with_read_lock()
function in sql_parse.cc.
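A minimal usage sketch of the new statement, with assumed table names:

    FLUSH TABLES t1, t2 WITH READ LOCK;
    -- requires RELOAD globally plus LOCK TABLES and SELECT on t1 and t2;
    -- t1 and t2 are flushed and remain locked for read until:
    UNLOCK TABLES;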
Before this fix, the performance schema file instrumentation would treat:
- a relative path to a file
- an absolute path to the same file
as two different files.
This would lead to:
- separate aggregation counters
- file leaks when a file is removed.
With this fix, a relative and absolute path are resolved to the same file instrument.
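As a hypothetical way to observe the effect, assuming a file t1.ibd opened
once via a relative and once via an absolute path (table and column names
taken from the performance schema of that time):

    SELECT FILE_NAME, COUNT_READ, COUNT_WRITE
      FROM performance_schema.file_summary_by_instance
     WHERE FILE_NAME LIKE '%t1.ibd';
    -- before the fix this could return two rows for the same physical file;
    -- with the fix both paths resolve to a single file instrument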
The problem was that ALTER TABLE on a merge table which was locked
using LOCK TABLE ... WRITE, by mistake gave
ER_TABLE_NOT_LOCKED_FOR_WRITE.
During opening of the table to be ALTERed, open_table() tried to
get an upgradable metadata lock. In LOCK TABLEs mode, this lock
must already exist (i.e. taken by LOCK TABLE) as new locks of this
type cannot be acquired for fear of deadlock. So in LOCK TABLEs
mode, open_table() tried to find an existing upgradable lock for
the table to be altered.
The problem was that open_table() also tried to find upgradable
metadata locks for children of merge tables even if no such
locks are needed to execute ALTER TABLE on merge tables.
This patch fixes the problem by making sure that open tables code
only searches for upgradable metadata locks for the merge table
and not for the merge children tables.
The patch also fixes a related bug where an upgradable metadata
lock was acquired outside of LOCK TABLEs mode even if the table in
question was temporary. This bug meant that LOCK TABLES or DDL on
temporary tables could mistakenly be blocked/aborted by locks held
on base tables with the same table name by other connections.
Test cases added to merge.test and lock_multi.test.
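An assumed minimal reproduction of the original problem (merge.test and
lock_multi.test contain the real test cases):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
    LOCK TABLE m1 WRITE;
    ALTER TABLE m1 UNION=(t1);   -- wrongly failed with ER_TABLE_NOT_LOCKED_FOR_WRITE
    UNLOCK TABLES;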
Attempts to execute RESET statements within a transaction that
had acquired metadata locks, led to an assertion failure on
debug servers. This bug didn't cause any problems on release
builds.
The triggered assert is designed to check that caches are not
flushed or reset while having active transactions. It is triggered
if acquired metadata locks exist that are not from LOCK TABLE or
HANDLER statements.
In this case it was triggered by RESET QUERY CACHE while having
an active transaction that had acquired locks. The reason the
assertion was triggered was that RESET statements, unlike the
similar FLUSH statements, did not cause an implicit commit.
This patch fixes the problem by making sure RESET statements
commit the current transaction before executing. The commit
causes acquired metadata locks to be released, preventing the
assertion from being triggered.
Incompatible change: This patch changes RESET statements so
that they cause an implicit commit.
Test case added to query_cache.test.
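A sketch of the new behaviour, with an assumed table t1 (query_cache.test
has the real test case):

    BEGIN;
    SELECT * FROM t1;      -- acquires a metadata lock on t1
    RESET QUERY CACHE;     -- now implicitly commits, releasing the metadata lock
    -- the transaction started above is no longer active at this point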
bool MDL_context::try_acquire_lock(MDL_request*)
This assert was triggered in the following way:
1) HANDLER OPEN t1 from connection 1
2) DROP TABLE t1 from connection 2. This will block due to the metadata lock
held by the open handler in connection 1.
3) DML statement (e.g. INSERT) from connection 1. This will close the table
opened by the HANDLER in 1) and release its metadata lock. This is done due
to the pending exclusive metadata lock from 2).
4) DROP TABLE t1 from connection 2 now completes and removes table t1.
5) HANDLER READ from connection 1. Since the handler table was closed in 3),
the handler code will try to reopen the table. First a new metadata lock on
t1 will be granted before the command fails since the table was removed in 4).
6) HANDLER READ from connection 1. This caused the assert.
The reason for the assert was that the MDL_request's pointer to the lock
ticket was not reset when the statement failed. HANDLER READ then tried to
acquire a lock using the same MDL_request object, triggering the assert.
This bug was only noticeable on debug builds and did not cause any problems
on release builds.
This patch fixes the problem by assuring that the pointer to the metadata
lock ticket is reset when reopening of handler tables fails.
Test case added to handler.inc
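The scenario above, written out as an assumed two-connection script (t2 is
an assumed unrelated table; handler.inc has the real test):

    -- connection 1
    HANDLER t1 OPEN;
    -- connection 2: blocks on the metadata lock held by the open handler
    DROP TABLE t1;
    -- connection 1: closes the handler table, letting the DROP complete
    INSERT INTO t2 VALUES (1);
    -- connection 1: fails since t1 is gone; the lock ticket pointer must be reset
    HANDLER t1 READ FIRST;
    -- connection 1: before the fix this second read triggered the assert
    HANDLER t1 READ FIRST;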
- Remove INSTALL-BINARY from installed docs directory, we provide a copy
in the root directory (but perhaps this should be revisited later).
- Disable audit_null and daemon_example plugins.
- Fix the docs directory.
- Remove mysql-test/Makefile.in
- Build and install mysql_tzinfo_to_sql
- Remove share/charsets/languages.html
The task is to
(a) add support for comments on indexes and
(b) increase the maximum length of column, table, and the new index comments.
The patch committed on behalf of Yoshinori Matsunobu (Yoshinori.Matsunobu@Sun.COM).
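Assumed examples of the index comment syntax this work adds (names and
comment texts are illustrative only):

    CREATE TABLE t1 (
      a INT COMMENT 'column comment',
      KEY idx_a (a) COMMENT 'index comment'
    ) COMMENT = 'table comment';
    ALTER TABLE t1 ADD INDEX idx_a2 (a) COMMENT 'another index comment';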
WL#5154 - Remove deprecated 4.1 features
The fix is the removal of the sql_log_update_basic
test, as this option is deprecated and removed,
and a minor change to the result file of
lc_time_names_basic as the error message has
changed.
an INFORMATION_SCHEMA table
When a prepared statement using a merged view containing an information
schema table was executed, a metadata lock of the view was not taken.
This meant that it was possible for concurrent view DDL to execute,
thereby breaking the binary log. For example, it was possible
for DROP VIEW to appear in the binary log before a query using the view.
This also happened when a statement in a stored routine was executed a
second time.
For such views, the information schema table is merged into the view
during the prepare phase (or first execution of a statement in a routine).
The problem was that we took a short cut and were not executing full-blown
view opening during subsequent executions of the statement. As a result,
a metadata lock on the view was not taken to protect the view definition.
This patch resolves the problem by making sure a metadata lock is taken
for views even after information schema tables are merged into them.
Test case added to view.test.
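An assumed sketch of the problematic pattern (view.test has the real
test case):

    CREATE VIEW v1 AS SELECT table_name FROM information_schema.tables;
    PREPARE stmt FROM 'SELECT * FROM v1';
    EXECUTE stmt;   -- first execution merges the I_S table into the view
    EXECUTE stmt;   -- before the fix no metadata lock on v1 was taken here,
                    -- so a concurrent DROP VIEW v1 could be written to the
                    -- binary log ahead of this statement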
Some logic would always group tests by suite.
Disable this when using --noreorder.
Also fix getting the array from collect_one_suite() in this case.
Amended according to previous comment.
The problem was that a shared InnoDB row lock was taken when executing
SELECT statements inside a stored function as a part of a transaction
using REPEATABLE READ. This prevented other transactions from updating
the row.
InnoDB uses multi-versioning and consistent nonlocking reads. SELECTs
should therefore not acquire locks and block other transactions
wishing to do updates.
This bug is no longer repeatable with the changes introduced in the scope
of metadata locking.
Test case added to innodb_mysql.test.
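An assumed sketch of the scenario (innodb_mysql.test has the real test case;
creating the function may require log_bin_trust_function_creators depending
on configuration):

    CREATE TABLE t1 (id INT PRIMARY KEY, val INT) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1, 1);
    CREATE FUNCTION f1() RETURNS INT READS SQL DATA
      RETURN (SELECT val FROM t1 WHERE id = 1);
    SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN;
    SELECT f1();   -- must be a consistent non-locking read
    -- from another connection, UPDATE t1 SET val = 2 WHERE id = 1
    -- must not block on a shared row lock taken by the SELECT above
    COMMIT;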