Before this patch, semisync assumed that the number of transactions running
in parallel could not exceed max_connections. This is not true when the
event scheduler is executing events, and could cause semisync to run out
of preallocated transaction nodes.
Fix the problem by allocating transaction nodes dynamically.
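For illustration, a minimal scenario (event, table, and column are assumed)
in which scheduled events commit transactions in addition to the ordinary
client connections, so more than max_connections transactions can be waiting
for a semisync acknowledgement at the same time:
  SET GLOBAL event_scheduler = ON;
  -- each firing of the event commits a transaction in an internal connection,
  -- independent of the max_connections client slots (t1 is an assumed table)
  CREATE EVENT e1 ON SCHEDULE EVERY 1 SECOND DO INSERT INTO t1 VALUES (NOW());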
This patch also fixes a possible deadlock when UNINSTALL PLUGIN
rpl_semi_sync_master runs in parallel with updates. Fixed by
releasing the internal Delegate lock before unlocking the plugins.
Text conflict in mysql-test/collections/default.experimental
Text conflict in mysql-test/r/show_check.result
Text conflict in mysql-test/r/sp-code.result
Text conflict in mysql-test/suite/binlog/r/binlog_tmp_table.result
Text conflict in mysql-test/suite/rpl/t/disabled.def
Text conflict in mysql-test/t/show_check.test
Text conflict in mysys/my_delete.c
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/log.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/repl_failsafe.cc
Text conflict in sql/slave.cc
Text conflict in sql/sql_parse.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_yacc.yy
Text conflict in storage/myisam/ha_myisam.cc
Corrected results for stm_auto_increment_bug33029.reject (2009-12-01 20:01:49.000000000 +0300):
@@ -42,9 +42,6 @@
RETURN i;
END//
CALL p1();
-Warnings:
-Note 1592 Statement may not be safe to log in statement format.
-Note 1592 Statement may not be safe to log in statement format.
There should indeed be no Note present because there is in fact an
auto-increment top-level query in sp() that triggers inserting into yet
another auto-inc table (todo: alert DaoGang to improve the test).
Fixes several test cases.
applied revisions: r6181, r6182, r6183, r6184
r6183 - changes are made only to tests in the innodb suite, which here
amounts only to innodb-consistent-master.opt
Detailed revision comments:
r6181 | vasil | 2009-11-17 12:21:41 +0200 (Tue, 17 Nov 2009) | 33 lines
branches/zip:
At the end of innodb-index.test: restore the environment as it was before
the test was started to silence this warning:
MTR's internal check of the test case 'main.innodb-index' failed.
This means that the test case does not preserve the state that existed
before the test case was executed. Most likely the test case did not
do a proper clean-up.
This is the diff of the states of the servers before and after the
test case was executed:
mysqltest: Logging to '/tmp/autotest.sh-20091117_033000-zip.btyZwu/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.log'.
mysqltest: Results saved in '/tmp/autotest.sh-20091117_033000-zip.btyZwu/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.result'.
mysqltest: Connecting to server localhost:13000 (socket /tmp/autotest.sh-20091117_033000-zip.btyZwu/mysql-5.1/mysql-test/var/tmp/mysqld.1.sock) as 'root', connection 'default', attempt 0 ...
mysqltest: ... Connected.
mysqltest: Start processing test commands from './include/check-testcase.test' ...
mysqltest: ... Done processing test commands.
--- /tmp/autotest.sh-20091117_033000-zip.btyZwu/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.result 2009-11-17 13:10:40.000000000 +0300
+++ /tmp/autotest.sh-20091117_033000-zip.btyZwu/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.reject 2009-11-17 13:10:54.000000000 +0300
@@ -84,7 +84,7 @@
INNODB_DOUBLEWRITE ON
INNODB_FAST_SHUTDOWN 1
INNODB_FILE_FORMAT Antelope
-INNODB_FILE_FORMAT_CHECK Antelope
+INNODB_FILE_FORMAT_CHECK Barracuda
INNODB_FILE_PER_TABLE OFF
INNODB_FLUSH_LOG_AT_TRX_COMMIT 1
INNODB_FLUSH_METHOD
mysqltest: Result content mismatch
not ok
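A minimal sketch of the kind of clean-up added at the end of innodb-index.test
to restore the state checked above (the exact restore statement is assumed;
see r6184 below):
  -- put the variable back to the value the server started with
  SET GLOBAL innodb_file_format_check = 'Antelope';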
r6182 | marko | 2009-11-17 13:49:15 +0200 (Tue, 17 Nov 2009) | 1 line
branches/zip: Set svn:eol-style on mysql-test files.
r6183 | marko | 2009-11-17 13:51:16 +0200 (Tue, 17 Nov 2009) | 1 line
branches/zip: Prepend loose_ to plugin-only mysql-test options.
r6184 | marko | 2009-11-17 13:52:01 +0200 (Tue, 17 Nov 2009) | 1 line
branches/zip: innodb-index.test: Restore innodb_file_format_check.
1. add testcase for BUG#46676
2. Allow CREATE INDEX to be interrupted
3. ha_innobase::change_active_index(): When the history is
missing, report it to the client, not to the error log
4. ChangeLog entries
applied revisions: r6169, r6170, r6175, r6177, r6179
Detailed revision comments:
r6169 | calvin | 2009-11-12 14:40:43 +0200 (Thu, 12 Nov 2009) | 6 lines
branches/zip: add test case for bug#46676
This crash is reproducible with InnoDB plugin 1.0.4 + MySQL 5.1.37.
But no longer reproducible after MySQL 5.1.38 (with plugin 1.0.5).
Add test case to catch future regression.
r6170 | marko | 2009-11-12 15:49:08 +0200 (Thu, 12 Nov 2009) | 4 lines
branches/zip: Allow CREATE INDEX to be interrupted. (Issue #354)
rb://183 approved by Heikki Tuuri
r6175 | vasil | 2009-11-16 20:07:39 +0200 (Mon, 16 Nov 2009) | 4 lines
branches/zip:
Wrap line at 78th char in the ChangeLog
r6177 | calvin | 2009-11-16 20:20:38 +0200 (Mon, 16 Nov 2009) | 2 lines
branches/zip: add an entry to ChangeLog for r6065
r6179 | marko | 2009-11-17 10:19:34 +0200 (Tue, 17 Nov 2009) | 2 lines
branches/zip: ha_innobase::change_active_index(): When the history is
missing, report it to the client, not to the error log.
applied revisions: r6157
Detailed revision comments:
r6157 | jyang | 2009-11-11 14:27:09 +0200 (Wed, 11 Nov 2009) | 10 lines
branches/zip: Fix an issue where a local variable defined
in innodb_file_format_check_validate() was being referenced
across functions in innodb_file_format_check_update().
In addition, fix "set global innodb_file_format_check =
DEFAULT" call.
Bug #47167: "set global innodb_file_format_check" cannot
set value by User-Defined Variable.
rb://169 approved by Sunny Bains and Marko.
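For example, a short sketch of what the fix is meant to allow (value assumed):
  SET @fmt := 'Barracuda';
  SET GLOBAL innodb_file_format_check = @fmt;      -- now accepts a user-defined variable
  SET GLOBAL innodb_file_format_check = DEFAULT;   -- the DEFAULT form now works as well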
The 'slave_patternload_file' variable is assigned the real path of the load
data file when the Relay_log_info object is initialized. However, the path of
the load data file is not resolved to a real path when executing an event from
the relay log, so an error is encountered if that path is a symbolic link.
Likewise, the global 'opt_secure_file_priv' is not resolved to a real path when
loading data from a file, so the same problem occurs there.
To fix these errors, the path of the load data file is now resolved to its real
path when executing an event from the relay log, and 'opt_secure_file_priv' is
resolved to its real path when executing LOAD DATA INFILE.
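An illustrative case (path and table name are assumed): with the fix, a
statement like the one below no longer fails merely because the directory in
the file path, or the one named by --secure-file-priv, is reached through a
symbolic link:
  LOAD DATA INFILE '/data/t1.txt' INTO TABLE t1;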
<tmp_tbl> with RBL
When binlogging the statement, the server always handles the existing
object as a table, even though it is a view. However, a view is
handled differently in other parts of the code, which leads the
statement to crash under RBL if the view exists.
This happens because the underlying tables of the view are not opened
when we try to call store_create_info() on the view in order to build
a CREATE TABLE statement.
This patch only addresses the crash problem; other binlogging
problems related to CREATE TABLE IF NOT EXISTS LIKE when the existing
object is a view will be solved by BUG 47442.
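An illustrative scenario (object names assumed) of the statement that used to
crash under row-based logging:
  CREATE TEMPORARY TABLE tmp1 (a INT);
  CREATE VIEW v1 AS SELECT 1 AS a;
  SET SESSION binlog_format = ROW;
  -- the existing object is a view; this crashed before the patch
  CREATE TABLE IF NOT EXISTS v1 LIKE tmp1;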
In RBR, statements operating on temporary tables should not be binlogged.
Despite this, after executing 'TRUNCATE ...' on a temporary table, the
command was still logged, even in row-based mode. Consequently, this raises
problems on the slave, where the table may not exist, resulting in an
execution failure that ultimately causes the slave to report an error and
abort.
After this patch, a 'TRUNCATE ...' statement on a temporary table is no
longer binlogged in RBR.
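For illustration (table name assumed):
  SET SESSION binlog_format = ROW;
  CREATE TEMPORARY TABLE tmp1 (a INT);
  -- after the patch this statement is no longer written to the binary log in RBR
  TRUNCATE TABLE tmp1;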
beyond unsigned long.
BUG#44779: binlog.binlog_max_extension may be causing failure on
next test in PB
NOTE1: this is the backport to next-mr.
NOTE2: already includes patch for BUG#44779.
Binlog file extensions would turn into negative numbers once the
variable used to hold the value reached the maximum for a signed
long. Consequently, incrementing the value to the next (negative) number
would lead to a .000000 extension, causing the server to fail.
This patch addresses the issue by not allowing negative extensions
and by returning an error from find_uniq_filename when the limit is
reached. Additionally, warnings are printed to the error log when the
limit is approaching. FLUSH LOGS will also report warnings to the
user if the extension number has reached the limit. The limit has been
set to 0x7FFFFFFF as the maximum.
This is the non-ndb part of the patch.
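A hedged illustration of the user-visible side of the change (the exact
warning text is assumed): once the extension number has reached the
0x7FFFFFFF limit,
  FLUSH LOGS;
  SHOW WARNINGS;   -- reports the limit condition instead of silently wrapping to .000000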
The return value of mysql_bin_log.write was ignored by most callers,
which may lead to inconsistency between master and slave if the
transaction was committed while the binlog was not correctly written. If
my_error() is called in mysql_bin_log.write, this could also lead to an
assertion failure if my_ok() or my_error() is called afterwards.
This is fixed by letting the callers check and handle the
return value of mysql_bin_log.write. This patch only addresses the
simple cases.
The set of bug fixes mentioned in the bug report had not been pushed to the main trees.
Fixed by extracting the commits done to the 6.0-rpl tree and applying them to the main 5.1.
Notes.
1. Part of the changes - the mtr specifics - were backported to the main 5.0 tree for mtr v1
as http://lists.mysql.com/commits/46562.
However, that fix is no longer present in mtr v2 (this fact was mailed to the people
maintaining mtr).
2. Bug#36929 "crash in kill_zombie_dump_threads -> THD::awake() with replication tests"
is not backported because the base code of the patch is libevent, which was removed
from the main trees due to its instability.
Problem: Some system functions that could return different values on
master and slave were not marked unsafe. In particular:
GET_LOCK
IS_FREE_LOCK
IS_USED_LOCK
MASTER_POS_WAIT
RELEASE_LOCK
SLEEP
SYSDATE
VERSION
Fix: Mark these functions unsafe.
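For example (table name assumed), with binlog_format=STATEMENT each of these
statements now raises the unsafe-statement note (1592) quoted earlier:
  INSERT INTO t1 VALUES (SLEEP(0));
  INSERT INTO t1 VALUES (GET_LOCK('lk', 0));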
A statement that updates more than one table with auto-increment
columns was only marked as unsafe in mixed mode, so the unsafe
warning could not be produced in statement mode.
To fix the problem, such a statement is now marked as unsafe in statement
mode too.
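An illustrative case (names assumed) where a single statement inserts into two
auto-increment tables and is now also flagged in statement mode:
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY);
  CREATE TABLE t2 (b INT AUTO_INCREMENT PRIMARY KEY);
  CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t2 VALUES (NULL);
  SET SESSION binlog_format = STATEMENT;
  INSERT INTO t1 VALUES (NULL);   -- now produces the unsafe-statement warning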
------------------------------------------------------------
revno: 2572.2.1
revision-id: sp1r-davi@mysql.com/endora.local-20080227225948-16317
parent: sp1r-anozdrin/alik@quad.-20080226165712-10409
committer: davi@mysql.com/endora.local
timestamp: Wed 2008-02-27 19:59:48 -0300
message:
Bug#27525 table not found when using multi-table-deletes with aliases over several databases
Bug#30234 Unexpected behavior using DELETE with AS and USING
The multi-delete statement has a documented limitation that
cross-database multiple-table deletes using aliases are not
supported, because it fails to find the tables by alias if they
belong to a different database. The problem is that when
building the list of tables to delete from, if a database
name is not specified (maybe an alias) it defaults to the
name of the currently selected database, making it impossible
to properly resolve tables by alias later. Another problem
is an inconsistency of the multiple-table delete syntax that
permits ambiguities in a delete statement (aliases that refer
to multiple different tables or vice-versa).
The first step for a solution and a proper implementation of
the cross-database multiple-table delete is to get rid of any
ambiguities in a multiple-table statement. Currently, the parser
accepts multiple-table delete statements that have no obvious
meaning, such as:
DELETE a1 FROM db1.t1 AS a1, db2.t2 AS a1;
DELETE a1 AS a1 FROM db1.t1 AS a1, db2.t2 AS a1;
The solution is to resolve the left part of a delete statement
using the right part: if a table on the right has an alias,
it must be referenced on the left using the given alias. Also,
each table on the left side must match unambiguously only one
table on the right side.
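With the ambiguities removed, an unambiguous cross-database form such as the
following (table and column names assumed) can be resolved by alias:
  DELETE a1, a2 FROM db1.t1 AS a1, db2.t2 AS a2 WHERE a1.id = a2.id;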
------------------------------------------------------------
revno: 2630.2.13
revision-id: davi@mysql.com-20080612190452-cx6h7rm557bcq7sa
parent: davi@mysql.com-20080611124915-csejwrxfdga9upho
committer: Davi Arnaut <davi@mysql.com>
branch nick: 36785-6.0
timestamp: Thu 2008-06-12 16:04:52 -0300
message:
Bug#36785: Wrong error message when group_concat() exceeds max length
The problem is that when ER_CUT_VALUE_GROUP_CONCAT is elevated
to an error, the message does not get updated with the number of
cut lines when group_concat() exceeds the maximum length.
The solution is to make the warning message more meaningful
by giving the number of the line that was cut and issuing the warning
for each line that is cut. This approach is in line with how other
per-row truncated-data warnings are issued and avoids violating the
internal warning interface.
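A hedged illustration of the per-row warning (table, columns, and exact
message wording are assumed):
  SET SESSION group_concat_max_len = 4;
  SELECT GROUP_CONCAT(a) FROM t1 GROUP BY b;
  SHOW WARNINGS;   -- e.g. Warning 1260: Row 2 was cut by GROUP_CONCAT()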