The problem was in a test case for Bug33507:
- when the number of active connections reaches the limit,
the server accepts only root connections. That's achieved by
accepting a connection, negotiating with the client and
checking the user credentials. If the user does not have the
SUPER privilege, the connection is dropped.
- when the server accepts a connection, it increases the counter;
- when the server drops a connection, it decreases the counter;
- the race was between decreasing the counter and accepting a
new connection:
- max_user_connections = 2;
- 2 ordinary user connections are accepted;
- an extra user connection is being established;
- the server checks the user credentials and sends a
'Too many connections' error;
- the client receives the error and establishes an extra
SUPER-user connection;
- the server, however, has not yet decreased the counter (the
extra user connection is still "alive" in the server), so the
new SUPER-user connection is dropped because it exceeds
(max_user_connections + 1).
The fix is to implement a "safe connect" routine, which makes
several attempts to connect, and to use it in the test script.
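A minimal sketch of the retry idea behind such a "safe connect",
written against the MySQL C API; the helper name, attempt count
and one-second pause below are illustrative assumptions, not the
actual test-harness code:

    /* Hypothetical "safe connect" helper: retry a few times so that
     * a transient failure caused by the race above does not fail the
     * caller.  The attempt count and delay are arbitrary. */
    #include <stdio.h>
    #include <unistd.h>
    #include <mysql.h>

    static MYSQL *safe_connect(const char *host, const char *user,
                               const char *pass, unsigned int port)
    {
      int attempt;
      for (attempt = 0; attempt < 10; attempt++)
      {
        MYSQL *conn = mysql_init(NULL);
        if (conn &&
            mysql_real_connect(conn, host, user, pass, NULL, port,
                               NULL, 0))
          return conn;                  /* connected successfully */
        fprintf(stderr, "connect attempt %d failed: %s\n", attempt + 1,
                conn ? mysql_error(conn) : "out of memory");
        if (conn)
          mysql_close(conn);
        sleep(1);                       /* let the server free the slot */
      }
      return NULL;                      /* give up after several attempts */
    }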
using a trig in SP
For all 5.0 versions, and 5.1 versions up to 5.1.12 exclusive,
when a stored routine or trigger caused an INSERT into an
AUTO_INCREMENT column, the generated AUTO_INCREMENT value was not
written into the binary log. This means that if a statement did
not generate an AUTO_INCREMENT value itself, there was no Intvar
event (SET INSERT_ID) associated with it, even if a stored
routine or trigger called by it generated such a value.
Meanwhile, when executing a stored routine or trigger, any
INSERT_ID value set by a SET INSERT_ID statement was ignored.
Starting from MySQL 5.1.12, the generated AUTO_INCREMENT value is
written into the binary log, and the value is used, if available,
when executing the stored routine or trigger.
Prior to the fix of this bug, in MySQL 5.0 and in MySQL 5.1
before 5.1.12 (referred to as the buggy versions below), when a
statement that generated an AUTO_INCREMENT value in the top-level
statement was executed in the body of an SP, all statements in
the SP after it were treated as if they, too, had generated an
AUTO_INCREMENT value in the top-level statement. When a statement
generated an AUTO_INCREMENT value not in the top-level statement
but in a function/trigger called by it, an erroneous Intvar event
was associated with the statement. This erroneous INSERT_ID value
did not cause problems when replicating between masters and
slaves running 5.0.x or 5.1 before 5.1.12, because the erroneous
INSERT_ID value was not used when executing functions/triggers.
But when replicating from a buggy version to 5.1.12 or newer,
which does use the INSERT_ID value in functions/triggers, the
erroneous value is used, causing a duplicate-entry error that
stops the slave.
The patch for 5.1 fixes this by ignoring the SET INSERT_ID value
when executing functions/triggers if the server is replicating
from a master running one of the buggy versions; another patch
for 5.0 fixes the server so that it does not generate the
erroneous Intvar event.
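As a rough illustration of the 5.1 side of the fix (this is not
the actual replication code; the struct and function names are
invented for the example), the decision amounts to a version
check on the master:

    #include <stdbool.h>

    /* Hypothetical view of the master's version, as known to the slave. */
    struct master_version {
      int major, minor, patch;
    };

    /* Masters in 5.0.x, or in 5.1.x before 5.1.12, may write erroneous
     * Intvar (SET INSERT_ID) events for statements whose AUTO_INCREMENT
     * value was generated only inside a called function or trigger. */
    static bool master_is_buggy(const struct master_version *v)
    {
      if (v->major == 5 && v->minor == 0)
        return true;
      if (v->major == 5 && v->minor == 1 && v->patch < 12)
        return true;
      return false;
    }

    /* Honour an explicit INSERT_ID inside a function/trigger only when
     * the master is known not to emit the erroneous value. */
    static bool use_insert_id_in_routines(const struct master_version *v)
    {
      return !master_is_buggy(v);
    }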
Problem: in mixed and statement mode, a query that refers to a
system variable will use the slave's value when replayed on the
slave. So if the value of a system variable is inserted into a
table, the slave will differ from the master.
Fix: mark statements that refer to a system variable as "unsafe",
meaning they will be replicated by row in mixed mode and produce a warning
in statement mode. There are some exceptions: some variables are actually
replicated. Those should *not* be marked as unsafe.
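A sketch of what such marking could look like, assuming an
invented interface (the real server uses its parser/LEX
structures; the names and the variable list below are purely
illustrative):

    #include <stdbool.h>
    #include <string.h>

    /* Variables whose values are replicated in their own right;
     * reading them is safe for statement-based replication.  The
     * list here is illustrative, not exhaustive. */
    static const char *replicated_vars[] = {
      "auto_increment_increment", "auto_increment_offset", NULL
    };

    static bool var_is_replicated(const char *name)
    {
      const char **v;
      for (v = replicated_vars; *v; v++)
        if (strcmp(*v, name) == 0)
          return true;
      return false;
    }

    /* Hypothetically called while resolving an @@variable reference:
     * mark the statement unsafe unless the variable is one of the
     * replicated exceptions. */
    static void check_system_var_reference(const char *name,
                                           bool *stmt_is_unsafe)
    {
      if (!var_is_replicated(name))
        *stmt_is_unsafe = true;  /* mixed mode: switch to row format;
                                    statement mode: raise a warning */
    }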
BUG#34732: mysqlbinlog does not print default values for auto_increment variables
Problem: mysqlbinlog does not print default values for some variables,
including auto_increment_increment and others. So if a client executing
the output of mysqlbinlog has different default values, replication will
be wrong.
Fix: always print values for all replicated variables, including
when they are at their default values.
I need to fix the two bugs at the same time, because the test cases would
fail if I only fixed one of them.
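As a sketch of the mysqlbinlog side (the struct, the function
name and the exact output format are assumptions made for this
illustration, not the tool's real internals), the point is to
emit the session settings unconditionally:

    #include <stdio.h>

    /* Hypothetical record of a replicated session variable carried in
     * a binlog event. */
    struct replicated_var {
      const char *name;
      unsigned long value;
    };

    /* Print SET statements for every replicated variable, defaults
     * included, so that a client with different defaults replays the
     * statements correctly. */
    static void print_session_vars(FILE *out,
                                   const struct replicated_var *vars,
                                   unsigned n)
    {
      unsigned i;
      for (i = 0; i < n; i++)
        fprintf(out, "SET @@session.%s=%lu;\n", vars[i].name,
                vars[i].value);
    }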
between 5.0 and 5.1.
The problem was that in the patch for Bug#11986 it was decided
to store the original query in UTF8 encoding for
INFORMATION_SCHEMA. This approach, however, turned out to be
quite difficult to implement properly. The main problem is
preserving the same IS output after dump/restore.
So, the fix is to roll back to the previous functionality, but
also to fix it to support multi-character-set queries properly.
The idea is to generate the INFORMATION_SCHEMA query from the
item tree after parsing the view declaration. The IS query should:
- be completely in UTF8;
- not contain character set introducers.
For more information, see WL4052.
a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the
current statement at the end of dispatch_command(). In order not
to issue redundant disc syncs, an optimization of the two-phase
commit protocol is implemented to bypass the two-phase commit
when the transaction is read-only.
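In generic terms (this is not the server's handlerton interface;
the types and names below are invented for the sketch), the
read-only shortcut looks roughly like this:

    #include <stdbool.h>
    #include <stddef.h>

    /* A participating storage engine, reduced to what the sketch needs. */
    struct participant {
      bool modified_data;                    /* wrote anything in this trx? */
      bool (*prepare)(struct participant *); /* phase 1: forces a disc sync */
      bool (*commit)(struct participant *);  /* phase 2 */
    };

    static bool commit_transaction(struct participant **p, size_t n)
    {
      size_t i;
      bool read_only = true;
      for (i = 0; i < n; i++)
        if (p[i]->modified_data)
          read_only = false;

      /* Phase 1 (prepare, with its durable sync) is only needed when
       * something was actually written; a read-only transaction skips
       * straight to commit. */
      if (!read_only)
        for (i = 0; i < n; i++)
          if (!p[i]->prepare(p[i]))
            return false;               /* a real coordinator would roll back */

      /* Phase 2: commit in every participant. */
      for (i = 0; i < n; i++)
        if (!p[i]->commit(p[i]))
          return false;
      return true;
    }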
Problem: some collation handlers called an incorrect version
of my_like_range_xxx(), which led to wrong min_str and max_str,
so the LIKE range optimizer threw away good records.
Fix: change the wrong handlers to call the proper version of
my_like_range_xxx().
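For context, a simplified single-byte illustration of what a
my_like_range_xxx()-style function computes; it assumes plain
byte ordering and ignores escape handling, so it is not the real
per-collation code:

    #include <stddef.h>

    /* For a pattern such as 'abc%', compute the smallest and largest
     * keys that could match, so the optimizer can restrict the scan
     * to that key range.  min_str and max_str are filled to res_len
     * bytes. */
    static void like_range_ascii(const char *pattern, size_t res_len,
                                 char *min_str, char *max_str)
    {
      size_t i = 0;
      for (; i < res_len && pattern[i] != '\0'; i++)
      {
        if (pattern[i] == '%' || pattern[i] == '_')
          break;                        /* wildcard: the fixed prefix ends */
        min_str[i] = max_str[i] = pattern[i];
      }
      for (; i < res_len; i++)
      {
        min_str[i] = 0x00;              /* pad with the smallest byte */
        max_str[i] = (char)0xFF;        /* pad with the largest byte */
      }
    }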
exists t1,t2,t3'
Bug #34245 Test ndb_binlog_multi fails for 'CREATE TABLE'
Bug #34246 Test rpl_ndb_transaction fails with 'Failed to create
'mysql/ndb_apply_status'
The test cases didn't wait for the cluster to come up, due to a
typo in have_multi_ndb.inc.