'flush tables' crashes
The server crashes when 'show procedure status' and 'flush tables' are
run concurrently.
This is caused by the mysql.proc table being added twice to the list
of tables to lock, although the current locking API assumes that each
table is added only once.
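A sketch of the racing statements (each issued repeatedly from its own
connection; the crash is timing-dependent):
  -- connection 1:
  SHOW PROCEDURE STATUS;
  -- connection 2:
  FLUSH TABLES;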
No test case is submitted because of the nature of the crash, which is
currently difficult to reproduce deterministically.
This is a backport from 5.1.
> ------------------------------------------------------------
> revno: 2796
> revision-id: sergey.glukhov@sun.com-20090827102219-sgjz0v5t1rfccs14
> parent: joro@sun.com-20090824122803-1d5jlaysjc7a7j6q
> committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Thu 2009-08-27 15:22:19 +0500
> message:
> Bug#46184 Crash, SELECT ... FROM derived table procedure analyze
> The crash happens because a select_union object is used as the result set
> for queries which have derived tables.
> select_union uses a temporary table as data storage, and if the
> field count exceeds 10 (the count of values for PROCEDURE ANALYSE())
> then we get a crash in the fill_record() function.
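A minimal repro of the quoted scenario might look like this (hypothetical
table; the derived table must expose more than 10 fields):
  CREATE TABLE t1 (c1 INT, c2 INT, c3 INT, c4 INT, c5 INT, c6 INT,
                   c7 INT, c8 INT, c9 INT, c10 INT, c11 INT);
  INSERT INTO t1 VALUES (1,2,3,4,5,6,7,8,9,10,11);
  SELECT * FROM (SELECT * FROM t1) AS d PROCEDURE ANALYSE();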
> ------------------------------------------------------------
> revno: 2791.2.3
> revision-id: joro@sun.com-20090827114042-h55n7qp9990bl6ge
> parent: anurag.shekhar@sun.com-20090831073231-e55y1hsck6n08ux8
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B46749-5.0-bugteam
> timestamp: Thu 2009-08-27 14:40:42 +0300
> message:
> Bug #46749: Segfault in add_key_fields() with outer subquery level
> field references
>
> This error requires a combination of factors :
> 1. An "impossible where" in the outermost SELECT
> 2. An aggregate in the outermost SELECT
> 3. A correlated subquery with a WHERE clause that includes an outer
> field reference as a top level WHERE sargable predicate
>
> When JOIN::optimize detects an "impossible WHERE" it will bail out
> without doing the rest of the work and initializations. In particular,
> it will not call make_join_statistics(), which fills in various
> structures for each table referenced.
> When processing the result of the "impossible WHERE" the query must
> send a single row of data if there are aggregate functions in it.
> In this case the server marks all the aggregates as having received
> no rows and calls the relevant Item::val_xxx() method on the SELECT
> list. However, if this SELECT list happens to contain a correlated
> subquery, this subquery is evaluated in the normal evaluation mode.
> And if this correlated subquery has a reference to a field from the
> outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
> consider the outer field reference as a "local" field reference when
> looking for sargable predicates.
> But since the SELECT that the outer field reference refers to is not
> completely initialized, due to the "impossible WHERE" at that level,
> we get a NULL pointer dereference.
> Fixed by making a better condition for discovering if a field is "local"
> to the SELECT level being processed.
> It's not enough to look for OUTER_REF_TABLE_BIT in this case since
> for outer references to constant tables the Item_field::used_tables()
> will return 0 regardless of whether the field reference is from the
> local SELECT or not.
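A hypothetical query shape combining the three factors above (the exact
tables and columns are illustrative only):
  CREATE TABLE t1 (a INT, KEY (a));
  CREATE TABLE t2 (b INT);
  SELECT MAX(t2.b),                            -- aggregate
         (SELECT 1 FROM t1 WHERE t1.a = t2.b)  -- correlated subquery;
  FROM t2                                      -- t2.b is the outer ref
  WHERE 1 = 0;                                 -- "impossible WHERE"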
The 'BEGIN/COMMIT/ROLLBACK' log events could be filtered out if the
database is not selected by the --database option of the mysqlbinlog
command. This can cause problems if some statements in the
transaction are not filtered out.
To fix the problem, mysqlbinlog now outputs 'BEGIN/ROLLBACK/COMMIT'
regardless of the database filtering rules.
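For example (hypothetical databases and tables), given a binlogged
transaction such as:
  BEGIN;
  INSERT INTO db1.t1 VALUES (1);
  INSERT INTO db2.t2 VALUES (1);
  COMMIT;
running mysqlbinlog with --database=db1 could previously strip the BEGIN
and COMMIT events while keeping the db1 statement; the transaction
boundaries are now always printed.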
The problem is that safe_kill_win fails to detect a dead process. OpenProcess() will
succeed even after the process has died; it only starts failing after the last handle
to the process is closed.
To fix the problem, check the process status with GetExitCodeProcess() and consider
the process to be dead if the exit code returned by this routine is not STILL_ACTIVE.
Backport from 6.0 to 5.1.
Only the sync points used in debug_sync.test are included.
The Debug Sync Facility allows placing synchronization points
in the code:
open_tables(...)
DEBUG_SYNC(thd, "after_open_tables");
lock_tables(...)
When activated, a sync point can
- Send a signal and/or
- Wait for a signal
Nomenclature:
- signal: A value of a global variable that persists
until overwritten by a new signal. The global
variable can also be seen as a "signal post"
or "flag mast". Then the signal is what is
attached to the "signal post" or "flag mast".
- send a signal: Assign the value (the signal) to the global
variable ("set a flag") and broadcast a
global condition to wake those waiting for
a signal.
- wait for a signal: Loop over waiting for the global condition until
the global value matches the wait-for signal.
Please find more information in the top comment in debug_sync.cc
or in the worklog entry.
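Sync points are activated from SQL through the DEBUG_SYNC system variable
(debug builds only). A sketch, using the sync point named above:
  -- connection 1: arm the sync point, then run a statement that passes
  -- through open_tables(); it signals 'opened' and blocks on 'go':
  SET DEBUG_SYNC = 'after_open_tables SIGNAL opened WAIT_FOR go';
  -- connection 2: wait until connection 1 reaches the sync point, do
  -- the concurrent work, then release connection 1:
  SET DEBUG_SYNC = 'now WAIT_FOR opened';
  SET DEBUG_SYNC = 'now SIGNAL go';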
replication
MySQL server uses the wrong lock type (always TL_READ instead of
TL_READ_NO_INSERT when appropriate) for tables used in
subqueries of an UPDATE statement. In some cases this leads to
broken replication, as statements are written to the binlog in the
wrong order.
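A hypothetical statement of the affected shape; under statement-based
replication the table read by the subquery needs TL_READ_NO_INSERT so
that concurrent inserts into it are ordered correctly in the binlog:
  UPDATE t1 SET a = (SELECT MAX(b) FROM t2) WHERE c > 0;
  -- t2 was locked with TL_READ, so a concurrent INSERT INTO t2 could
  -- be written to the binlog in the wrong order relative to the UPDATE.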
The problem was that appending values to the end of an existing
ENUM or SET column was being treated as table data modification,
preventing an immediate (fast) table alteration that occurs when
only table metadata is being modified.
The cause was twofold: adding enumeration or set members to the
end of the list of valid member values was not being considered
a "compatible" table alteration, and for SET columns, the check
was being done on the max display length and not the underlying
(pack) length of the field.
The solution is to augment the function that checks whether two ENUM
or SET fields are compatible -- by comparing the pack lengths and
performing a limited comparison of the member values.
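With the fix, appending members is a metadata-only change (hypothetical
table):
  CREATE TABLE t1 (c ENUM('a','b'));
  ALTER TABLE t1 MODIFY c ENUM('a','b','c');  -- fast: member appended
  ALTER TABLE t1 MODIFY c ENUM('c','a','b');  -- copy: members reordered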
The bug is not related to MERGE table or TRIGGER. More correct description
would be 'assertion on multi-table UPDATE + NATURAL JOIN + MERGEABLE VIEW'.
At the PREPARE stage (see test case) we call mark_common_columns(), which
creates the ON condition for the NATURAL JOIN and sets the appropriate
table read_set bitmaps for fields which are used in the ON condition.
At the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field
has already been processed, and the related item was already fixed
before setup_conds() as an updated field, so setup_conds() cannot set
its read_set bitmap.
The fix is to set the read_set bitmap for the appropriate table field even
if the Item_direct_view_ref item which represents a reference to this field
is already fixed.
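A hypothetical sketch of the failing shape, with the updated field
referenced through a mergeable view in a NATURAL JOIN:
  CREATE TABLE t1 (f1 INT, f2 INT);
  CREATE TABLE t2 (f1 INT, f3 INT);
  CREATE VIEW v1 AS SELECT f1, f3 FROM t2;  -- mergeable view
  PREPARE s FROM 'UPDATE t1 NATURAL JOIN v1 SET v1.f1 = 1';
  EXECUTE s;  -- asserted at the EXECUTE stage before the fix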
"load data" statements were written to the binlog as a mix of the original statement
and bits recreated from parse-info. This relied on implementation details and broke
with the IGNORE_SPACE SQL mode and versioned comments.
We now completely resynthesize the query for LOAD DATA for the binlog (which among
other things normalizes it somewhat with regard to case, spaces, etc.).
We have already parsed the query properly, so we make use of that rather
than mix-and-match string literals and parsed items.
This should make us safe with regard to versioned comments, even those
spanning multiple tokens. It is also no longer affected by IGNORE_SPACE.
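For instance, a LOAD DATA statement carrying a versioned comment
(hypothetical file and table) is now rewritten from the parsed items
before being binlogged:
  LOAD DATA /*!10000 CONCURRENT */ INFILE '/tmp/data.txt' INTO TABLE t1;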
view definition
During SHOW CREATE VIEW there is no reason to 'anonymize'
errors that name objects that a user does not have access
to. Moreover, it was inconsistently implemented: for example,
base tables referenced from a view were handled correctly,
but referenced views were not. The manual, on the other hand, is clear: if a
user has the privileges SELECT and SHOW VIEW, the view
definition is available to that user, period. The fix
changes the behavior to support the manual.
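In other words (hypothetical account and view):
  GRANT SELECT, SHOW VIEW ON db1.v1 TO 'user1'@'localhost';
  -- as user1: must succeed even if v1 references objects that
  -- user1 has no access to:
  SHOW CREATE VIEW db1.v1;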
trigger, merge table
The problem with break statements is that they have very
local effects. Hence a break statement within the inner loop
of a nested-loops join caused execution to proceed to the
next table even though a serious error occurred. The problem
was fixed by breaking out the inner loop into its own
method. The change allows any error to terminate the
execution.
The errors that will now halt multi-DELETE execution
altogether are
- triggers returning errors
- handler errors
- server being killed
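For example (hypothetical tables; the trigger fails at runtime because
its subquery returns more than one row), the whole statement now halts:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TRIGGER t2_bd BEFORE DELETE ON t2 FOR EACH ROW
    SET @x = (SELECT 1 UNION SELECT 2);  -- raises an error per row
  DELETE t1, t2 FROM t1 JOIN t2 ON t1.a = t2.a;  -- halts on the error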
All statements executed by mysql_upgrade are binlogged and then replicated to the slave.
This can result in errors; the report of this bug demonstrates some examples.
Master and slave should be upgraded separately, so the statements executed by
mysql_upgrade are no longer binlogged.
The --write-binlog and --skip-write-binlog options are added to mysql_upgrade.
These options control whether its SQL statements are binlogged or not.
In RBR, a 'DROP TEMPORARY TABLE IF EXISTS...' statement is binlogged when the table
does not exist.
In fact, a 'DROP TEMPORARY TABLE ...' statement should never be binlogged in RBR,
whether the table exists or not.
This patch addresses this by checking whether we are dropping a
temporary table when building the custom drop statement.
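A sketch (hypothetical tables):
  SET SESSION binlog_format = 'ROW';
  CREATE TEMPORARY TABLE tmp1 (a INT);
  DROP TEMPORARY TABLE IF EXISTS tmp1;         -- not binlogged
  DROP TEMPORARY TABLE IF EXISTS no_such_tmp;  -- also not binlogged now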
failure to clean up a table
The test case was not dropping a table before exiting (i.e., it was
not cleaning up after itself). In this case, the warning
message stating that the test did not do a proper cleanup was
deterministic (which can be annoying).
I have found other test cases for which mtr sporadically reports
that they have not cleaned up after execution:
- rpl_ndb_circular
- rpl_failed_optimize
In this case, the master was dropping a table but there was no
synchronization between the slave and the master.
This patch addresses the rpl_ndb_circular_simplex case by adding
the missing DROP TABLE statement. The other cases are fixed by adding
the missing sync_slave_with_master instruction.
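The deployed pattern, in mysqltest terms (sketch):
  # on the master, drop the objects the test created ...
  connection master;
  DROP TABLE t1;
  # ... and make sure the slave has caught up before the test ends:
  sync_slave_with_master;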
HA_ERR_WRONG_INDEX
In RBR, disabling keys on a slave table will break replication when
updating or deleting a record. When the slave thread tries to
find the row by searching in the storage engine, it checks
whether the table has a key or not. If it has one, the slave
thread uses it to search for the record.
Nonetheless, the slave only checks whether the key exists;
it does not verify that it is active. Should the key be
disabled (e.g., the DBA has issued an ALTER TABLE ... DISABLE KEYS),
the result is the error HA_ERR_WRONG_INDEX.
This patch addresses this issue by making the slave thread also
check whether the key is active or not before actually using it.
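A sketch of the scenario (hypothetical table):
  -- on the slave:
  ALTER TABLE t1 DISABLE KEYS;
  -- on the master:
  UPDATE t1 SET b = 2 WHERE a = 1;
  -- before the fix the slave picked the disabled key and failed with
  -- HA_ERR_WRONG_INDEX; it now skips keys that are not active.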
A network error happened here, but it can be caused by CR_CONNECTION_ERROR,
CR_CONN_HOST_ERROR, CR_SERVER_GONE_ERROR, CR_SERVER_LOST, ER_CON_COUNT_ERROR,
or ER_SERVER_SHUTDOWN. We only checked for CR_SERVER_LOST here, so the test failed.
To fix the problem, check all errors that can be caused by the master shutdown.
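In mysqltest terms the fix amounts to accepting all of these errors
where only CR_SERVER_LOST was accepted before (sketch; 2002, 2003,
2006 and 2013 are the client error numbers for CR_CONNECTION_ERROR,
CR_CONN_HOST_ERROR, CR_SERVER_GONE_ERROR and CR_SERVER_LOST):
  --error 2002,2003,2006,2013,ER_CON_COUNT_ERROR,ER_SERVER_SHUTDOWN
  --reap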