Add code to wait for a set of errors.
Add code to wait for an error instead of waiting for the IO thread to
stop: right after 'START SLAVE' the IO thread status is still "not
running", but that does not mean the slave IO thread has encountered
an error.
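A hedged mysqltest sketch of the intended usage (the include file
parameter name and the error number are illustrative assumptions):

  # wait until the slave IO thread reports one of the expected errors
  --let $slave_io_errno= 1236
  --source include/wait_for_slave_io_error.inc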
without FOR UPDATE is causing a lock".
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression which was introduced by the fix for
bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. When statement or
mixed binary logging was enabled, this type of lock was converted
to a TL_READ_NO_INSERT lock at open_tables() time and caused the
InnoDB engine to acquire shared locks on reads from these tables.
Although in some cases such behavior was correct (e.g. for
subqueries in DELETE), for SELECT it caused unnecessary locking.
This patch implements a minimal version of the fix for the
specific problem described in the bug report, which is supposed
to be not too risky to push into the 5.1 tree.
The 5.5 tree already contains a more appropriate solution
which also addresses other related issues like bug 53921
"Wrong locks for SELECTs used stored functions may lead
to broken SBR".
This patch tries to solve the problem by ensuring that the
TL_READ_DEFAULT lock, which is set in the parser for tables
participating in subqueries, is interpreted at open_tables()
time as either TL_READ_NO_INSERT or TL_READ.
TL_READ is used only if we know that this is a SELECT
and that this particular table is not used by a stored
function.
Test coverage is added for both InnoDB and MyISAM.
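A minimal SQL sketch of the scenario this fixes (tables and isolation
level setup are illustrative):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT) ENGINE=InnoDB;
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  -- With statement/mixed binlogging enabled, this SELECT used to acquire
  -- shared row locks on t2 via the subquery; with the fix it uses a plain
  -- consistent (non-locking) read:
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2);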
This patch introduces an "incompatible" change in the locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT ... LOCK IN SHARE MODE.
In 4.1 (as well as in 5.0 and 5.1 before the fix for bug 39843)
the server would use a snapshot InnoDB read for subqueries in
SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE statements,
regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking
read), it could be requested explicitly by adding a FOR
UPDATE/LOCK IN SHARE MODE clause to each individual subquery.
The patch for bug 39843 broke this behaviour (which was not
documented or tested) and started using locking reads for
all subqueries in SELECT ... FOR UPDATE/LOCK IN SHARE MODE.
This patch restores the 4.1 behaviour.
This patch should be mostly null-merged into the 5.5 tree.
Some of the test cases refer to binlog positions, and these position
numbers are written into the result files explicitly. This is difficult
to maintain if the log event format changes. There are a few places
where explicit position numbers appear; we handle them in different
ways (a short sketch follows after the list):
A. 'CHANGE MASTER ...' with the MASTER_LOG_POS and/or RELAY_LOG_POS options
Use --replace_result to mask them.
B. 'SHOW BINLOG EVENTS ...'
Replaced by show_binlog_events.inc or wait_for_binlog_event.inc.
show_binlog_events.inc is enhanced to accept the $binlog_file and
$binlog_limit parameters.
C. 'SHOW SLAVE STATUS', 'show_slave_status.inc' and 'show_slave_status2.inc'
Test cases usually care only about a few items in the result of
'SHOW SLAVE STATUS', so only the items relevant to each test case are
displayed. 'show_slave_status.inc' is rebuilt so that only the items
given in $status_items are displayed.
'check_slave_is_running.inc', 'check_slave_no_error.inc' and
'check_slave_param.inc' are auxiliary files that help to check the
slave's running status and error information easily.
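A hedged mysqltest sketch of the three approaches above (the log file
name, position variable and status items are illustrative):

  # A. mask an explicit position
  --replace_result $master_log_pos MASTER_LOG_POS
  eval CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=$master_log_pos;

  # B. list binlog events without hard-coding positions in the result file
  --let $binlog_file= master-bin.000001
  --let $binlog_limit= 3
  --source include/show_binlog_events.inc

  # C. print only the status items the test cares about
  --let $status_items= Slave_IO_Running, Slave_SQL_Running
  --source include/show_slave_status.inc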
They are moved to a new test suite "innodb_plugin".
Remove a hack in mtr that was deployed to run the builtin InnoDB tests against
the InnoDB Plugin. Also detect if a test is an 'innodb plugin test' and if so
then transparently replace the builtin InnoDB with the InnoDB Plugin.
This has been back-ported from 6.0 as the problems proved to afflict
5.1 as well.
The fix exposed two new bugs. They were reported as follows.
Bug no 52174: Sometimes wrong plan when reading a MAX value
from non-NULL index
Bug no 52173: Reading NULL value from non-NULL index gives wrong
result in embedded server
Both bugs taken together affect a much smaller class of queries than #47762,
so the fix stays for now.
NULL column for NULL
The optimization to read MIN() and MAX() values from an
index did not properly handle comparisons with NULL
values. Fixed by giving up the particular optimization step
if there are non-NULL safe comparisons with NULL values, as
the result is NULL anyway.
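A minimal SQL sketch of the behavior (table and data are illustrative):

  CREATE TABLE t1 (a INT NOT NULL, KEY(a));
  INSERT INTO t1 VALUES (1), (2);
  -- The comparison below is not NULL-safe, so it can only evaluate to NULL;
  -- the MIN()/MAX() index-read shortcut is skipped and the query correctly
  -- returns NULL:
  SELECT MIN(a) FROM t1 WHERE a = NULL;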
Also, the Oracle copyright notice was added to all files.
The problem is that not all column names retrieved from a SELECT
statement can be used as view column names due to length and format
restrictions. The server failed to properly check the conformity
of those automatically generated column names before storing the
final view definition on disk.
Since columns retrieved from a SELECT statement can be anything,
ranging from functions to constant values of any format and length,
the solution is to rewrite to a pre-defined format any names that
are not acceptable as a view column name.
The name is rewritten to "Name_exp_%u", where %u translates to the
position of the column. To avoid this conversion scheme, define
explicit names for the view columns via the column_list clause.
Also, aliases are now only generated for top level statements.
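A hedged SQL sketch of the naming scheme (the view names and the
over-long literal are illustrative):

  -- The auto-generated column name (the literal text) exceeds the
  -- 64-character identifier limit, so it is stored as Name_exp_1:
  CREATE VIEW v1 AS
    SELECT 'a string literal that is far too long to be stored as a view column name because it exceeds sixty-four characters';
  -- An explicit name given via the column_list clause avoids the rewrite:
  CREATE VIEW v2 (s) AS
    SELECT 'a string literal that is far too long to be stored as a view column name because it exceeds sixty-four characters';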
write()/read()
Sometimes stopping/restarting the master or the slave can cause a
network error, which can produce the 'invalid file descriptor -1 in
syscall write()/read()' warnings. All involved test cases except
rpl_slave_load_remove_tmpfile belong to this kind of network error,
so the warnings are expected there.
The 'rpl_slave_load_remove_tmpfile' case involves a file error, but it
is deliberately testing that file error with the following code:
DBUG_EXECUTE_IF("remove_slave_load_file_before_write",
                my_close(fd, MYF(0)); fd= -1; my_delete(fname, MYF(0)););
So it's expected too.
To fix the problem, the valgrind warnings are added to the global
suppression list.
The test case rpl_binlog_corruption fails on Windows because when a
line is added to the binary log index file it gets terminated with
CR+LF (which is the normal line ending on Windows, but not on Unixes,
which use LF). This causes a mismatch between the relay log names,
making mysqld report that it cannot find the log file.
We fix this by creating the instrumented index file through mysql,
i.e. using SELECT ... INTO DUMPFILE ..., as opposed to relying on OS
commands such as: echo "..." > index. These changes make the procedure
platform independent and go into the file:
include/setup_fake_relay_log.inc
Side note: when using SELECT ... INTO DUMPFILE ..., one needs to
check whether mysqld is running with secure_file_priv. If it is, we do
it in two steps: 1. create the file in the allowed location;
2. move it to the datadir. If it is not, then we just create the
file directly in the datadir (so the previous step 2 is not needed).
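A hedged SQL sketch of the DUMPFILE approach (the path and the relay
log name are illustrative):

  -- Write the fake index entry with an explicit LF terminator,
  -- independently of the platform's native line ending:
  SELECT 'slave-relay-bin.000001\n'
    INTO DUMPFILE '/path/to/datadir/fake-relay-bin.index';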
Manually deleting one or more entries from 'master-bin.index' will
cause the master to loop infinitely, resending one binlog file.
When starting a dump session, the master opens the index file and
searches for the binlog file being requested by the slave. The position
of that binlog file within the index file is recorded; it is used to
find the next binlog file once the current one has been dumped
completely. As only the position is used, the wrong file may be found
if some entries have been removed manually from the index file. If the
master cannot get the next binlog file's name from the index file, it
reopens the current binlog file, which has already been dumped
completely, and dumps it again. This is obviously a logical error.
Even though it is allowed to change the index file manually, it is not
recommended. After this patch, the master sends a fatal error to the
slave and closes the dump session if a new binlog file has been
generated but the master cannot find it in the index file.
BUG#47983: rpl_extraColmaster_myisam failed in PB2 with "Found
warnings!!"
BUG 45214 fixed the case in which the get_master_version_and_clock
function, used by the slave, would not report errors. The slave now
detects them: if they are related to transient network failures, it
prints some warnings and retries the connection; if they are not
network related, it just gives up and fails.
As such, in PB2 the slave sometimes comes across transient
communication issues with the master while calling
get_master_version_and_clock, causing warnings to be printed to the
error log. In such cases the slave retries the connection, succeeds,
and the test case continues as it normally would. But at the end of an
otherwise successful test run, MTR checks the error log, finds the
unexpected warnings and considers them harmful. This causes MTR to
report an error and, consequently, PB2 to report a failing test.
We fix this by adding to the global warning suppression list only the
warnings related to transient network failures that are reported from
get_master_version_and_clock.
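A hedged sketch of the suppression mechanism; the pattern is
illustrative, and the actual patch adds its patterns to MTR's global
suppression list rather than suppressing them per test:

  -- per-test equivalent using the helper shipped with the test framework:
  call mtr.add_suppression("Slave I/O: Get master clock failed with error: .*");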
Problem 1:
column_priv_hash uses the utf8_general_ci collation
for the key comparison. The key consists of the user name,
db name and table name. Thus a user with privileges on table t1
is able to perform the same operations on T1
(a similar situation exists for the user name and db name, see acl_cache).
So the collation used for column_priv_hash and acl_cache
should be case sensitive.
The fix:
replace system_charset_info with my_charset_utf8_bin for
column_priv_hash and acl_cache
Problem 2:
The same situation exists for proc_priv_hash and func_priv_hash;
the only difference is that the routine name is case insensitive.
So the fix is to use my_charset_utf8_bin for
proc_priv_hash & func_priv_hash and to convert the routine name into
lower case before writing the element into the hash and
before looking up the key.
Additional fix: the collation of the mysql.procs_priv Routine_name
field is changed to utf8_general_ci.
This is necessary for the REVOKE command
(to find the record using the values stored in the routine hash element).
Note:
It's safe for lower-case-table-names mode too because
db name & table name are converted into lower case
(see GRANT_NAME::GRANT_NAME).
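A hedged SQL sketch of the case-sensitivity issue from Problem 1
(database, table and user names are illustrative):

  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT);
  CREATE USER 'u1'@'localhost';
  GRANT SELECT (a) ON db1.t1 TO 'u1'@'localhost';
  -- Before the fix, the case-insensitive hash lookup also matched a
  -- differently-cased table name, so (as u1) this succeeded:
  --   SELECT a FROM db1.T1;
  -- With my_charset_utf8_bin the privilege applies to db1.t1 only.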
- Remove the "hack" from mtr.pl that skipped searching for the .dll files
when embedded and windows. Now the variables will be preoperly initialized.
- Make the tests detect that they can't run on windows+embedded
Backport from 6.0 to 5.1.
Only those sync points that are used in debug_sync.test are included.
The Debug Sync Facility allows placing synchronization points
in the code:
  open_tables(...)
  DEBUG_SYNC(thd, "after_open_tables");
  lock_tables(...)
When activated, a sync point can
- Send a signal and/or
- Wait for a signal
Nomenclature:
- signal: A value of a global variable that persists
until overwritten by a new signal. The global
variable can also be seen as a "signal post"
or "flag mast". Then the signal is what is
attached to the "signal post" or "flag mast".
- send a signal: Assign the value (the signal) to the global
variable ("set a flag") and broadcast a
global condition to wake those waiting for
a signal.
- wait for a signal: Loop over waiting for the global condition until
the global value matches the wait-for signal.
Please find more information in the top comment in debug_sync.cc
or in the worklog entry.
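A hedged sketch of driving such a sync point from a test (the
connection roles and signal names are illustrative):

  # connection con1: pause at the sync point and wait for permission to go on
  SET DEBUG_SYNC= 'after_open_tables SIGNAL opened WAIT_FOR go';
  # ... issue the statement that reaches the sync point ...

  # default connection: wait until con1 has reached the point, then release it
  SET DEBUG_SYNC= 'now WAIT_FOR opened';
  SET DEBUG_SYNC= 'now SIGNAL go';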
All statements executed by mysql_upgrade are binlogged and then
replicated to the slave. This can result in errors; the bug report
demonstrates some examples. Master and slave should be upgraded
separately, so the statements executed by mysql_upgrade will no longer
be binlogged.
The --write-binlog and --skip-write-binlog options are added to
mysql_upgrade. These options control whether its SQL statements are
binlogged or not.
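A hedged usage sketch (mysqltest style; $MYSQL_UPGRADE as provided by
the test framework is an assumption):

  # upgrade the slave locally without writing the statements to its binary log
  --exec $MYSQL_UPGRADE --skip-write-binlog 2>&1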
Invalid (old?) table or database name in logs
The problem was still not completely fixed, due to
quoting.
This is the server-side-only fix (in explain_filename);
the change from filename_to_tablename to explain_filename
in the InnoDB code must be done before the bug is
fully fixed.
binlog
Mixing transactional (T) and non-transactional (N) tables on behalf of a
transaction may lead to inconsistencies among masters and slaves in STATEMENT
mode. The problem stems from the fact that although modifications done to
non-transactional tables on behalf of a transaction become immediately visible
to other connections, they do not immediately get into the binary log, and
therefore consistency is broken. Although there may be issues in mixing T and N
tables in STATEMENT mode, there are safe combinations that clients find useful.
In this bug, we fix the following issue. Mixing N and T tables in multi-level
statements (e.g. a statement that fires a trigger) or multi-table statements
(e.g. UPDATE t1, t2 ...) was not handled correctly. In such cases it was not
possible to tell that a T table had been updated if the sequence of changes
was N and then T. In a nutshell, the flag "modified_non_trans_table" alone was
not enough to reflect that both an N and a T table were changed. To circumvent
this issue, we check whether a transactional engine is registered in the
handler's list and has changed something, which means that a T table was
modified.
Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or
ROW modes completely safe.
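A hedged SQL sketch of the N-then-T mix this patch handles (table names
and values are illustrative):

  CREATE TABLE tt (a INT) ENGINE=InnoDB;   -- transactional (T)
  CREATE TABLE nt (a INT) ENGINE=MyISAM;   -- non-transactional (N)
  INSERT INTO tt VALUES (0);
  INSERT INTO nt VALUES (0);
  BEGIN;
  -- A multi-table statement that changes the N table and the T table; the
  -- binary log must still record the T changes as part of the transaction:
  UPDATE nt, tt SET nt.a = 1, tt.a = 1;
  COMMIT;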
The problem was that the partition containing NULL values
was pruned away, since '2001-01-01' < '2001-02-00' but
TO_DAYS('2001-02-00') is NULL.
The NULL partition is now also scanned for RANGE/LIST partitioning on
the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
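A hedged SQL sketch of the scenario (table definition and data are
illustrative):

  CREATE TABLE t (d DATE)
  PARTITION BY RANGE (TO_DAYS(d))
  (PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),
   PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t VALUES ('2001-01-01');
  -- TO_DAYS('2001-02-00') is NULL, so pruning must not skip the first
  -- partition (which also holds NULL values); after the fix this query
  -- finds the row:
  SELECT * FROM t WHERE d < '2001-02-00';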