The problem was in a test case for Bug#33507:
- when the number of active connections reaches the limit,
the server accepts only root connections. That's achieved by
accepting a connection, negotiating with the client and
checking the user credentials. If the user does not have SUPER,
the connection is dropped.
- when the server accepts a connection, it increases the counter;
- when the server drops a connection, it decreases the counter;
- the race was between decreasing the counter and accepting a
new connection:
- max_user_connections = 2;
- 2 ordinary user connections are accepted;
- an extra user connection is being established;
- the server checks the user credentials and sends a
'Too many connections' error;
- the client receives the error and establishes an extra SUPER-user
connection;
- the server, however, has not decreased the counter yet (the extra
user connection is still "alive" in the server) -- so the new
SUPER-user connection is dropped, because it exceeds
(max_user_connections + 1).
The fix is to implement a "safe connect", which makes several attempts
to connect, and to use it in the test script.
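A minimal sketch of such a retry loop against the libmysqlclient C API
(the helper name safe_connect() and the retry/sleep parameters are
illustrative, not the actual test-framework code):

  #include <mysql.h>
  #include <mysqld_error.h>   /* for ER_CON_COUNT_ERROR */
  #include <unistd.h>

  /* Hypothetical helper: retry the connect so that a transient
     "Too many connections" caused by the race above does not
     fail the test. */
  static MYSQL *safe_connect(const char *host, const char *user,
                             const char *passwd, unsigned int port)
  {
    for (int attempt= 0; attempt < 10; attempt++)
    {
      MYSQL *mysql= mysql_init(NULL);
      if (mysql == NULL)
        return NULL;
      if (mysql_real_connect(mysql, host, user, passwd,
                             NULL, port, NULL, 0))
        return mysql;                     /* connected */
      unsigned int err= mysql_errno(mysql);
      mysql_close(mysql);
      if (err != ER_CON_COUNT_ERROR)      /* retry only this error */
        return NULL;
      sleep(1);  /* give the server time to decrease the counter */
    }
    return NULL;
  }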
'Too many connections' -- wrong SQL state in some cases.
ER_CON_COUNT_ERROR is defined with SQL state 08004. However, this
SQL state is not always returned.
This error can be thrown in two cases:
1. when an ordinary user (a user without the SUPER privilege) is
connecting, and the number of active user connections is equal to or
greater than max_connections;
2. when a user is connecting and the number of active user connections
is already (max_connections + 1) -- that means that no more connections
will be accepted regardless of the user's credentials.
In the first case, the SQL state is correct.
The bug happens in the second case: on UNIX the client gets the 00000
SQL state, which is absolutely wrong (00000 means "not an error" SQL
state); on Windows the client accidentally gets HY000 (which means
"unknown SQL state").
The cause of the problem is that the server rejects the extra connection
before reading the packet with the client capabilities. Thus, the server
does not know whether the client supports SQL states (i.e. whether the
client speaks the 4.1 protocol). So the server assumes the worst and
does not send an SQL state at all.
The difference in behavior between UNIX and Windows occurs because on
Windows CLI_MYSQL_REAL_CONNECT() invokes create_shared_memory(), which
returns an error (in the default configuration, where shared memory is
not configured). The client then does not reset this error, so when the
connection is rejected, the SQL state is HY000 (left over from the
create_shared_memory() error).
The bug appeared after the test case for Bug#33507 -- before that, this
behavior simply had not been tested.
The fix is to 1) reset the error after the create_shared_memory() call;
2) set the SQL state to 'unknown error' (HY000) if it was not received
from the server.
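As an illustration of 2), a simplified sketch of the client-side
defaulting; it assumes a bare-bones error-packet reader and uses the
fact that the 4.1 protocol marks the SQL state with a '#' right after
the 2-byte error code:

  #include <cstring>

  /* Simplified sketch: pos points just past the 2-byte error code of a
     server error packet; sqlstate must have room for 6 bytes. */
  static void read_error_sqlstate(const unsigned char *pos, char *sqlstate)
  {
    if (*pos == '#')                 /* 4.1 protocol: SQL state follows */
    {
      memcpy(sqlstate, pos + 1, 5);
      sqlstate[5]= '\0';
    }
    else
    {
      /* No SQL state was sent (e.g. the connection was rejected before
         the capabilities packet was read): default to "unknown error". */
      strcpy(sqlstate, "HY000");
    }
  }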
A separate test case is not required, since the behavior is already
tested in connect.test.
Note for the doc team: the manual should be updated to say that under
some circumstances 'Too many connections' has the HY000 SQL state.
The problem is that since MyISAM's concurrent_insert is on by
default, some concurrent SELECT statements might not see changes
made by INSERT statements in other connections, even after the
INSERT statement has returned.
The solution is to disable concurrent_insert so that an INSERT
statement returns only after the data is actually visible to other
statements.
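The change itself is in the test script; as a sketch, an equivalent
toggle through the C API (the variable name is the documented server
variable, the rest is illustrative):

  #include <mysql.h>
  #include <cstdio>

  /* Turn off concurrent inserts so that an INSERT returning implies
     the new rows are visible to SELECTs in other connections. */
  static bool disable_concurrent_insert(MYSQL *mysql)
  {
    if (mysql_query(mysql, "SET GLOBAL concurrent_insert= 0"))
    {
      fprintf(stderr, "SET failed: %s\n", mysql_error(mysql));
      return false;
    }
    return true;
  }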
The event scheduler created more threads than max_connections -- which
results in user lockout.
The problem was that the variable thread_count, which contains the
number of active threads, was interpreted as the number of active
connections.
The fix is to introduce a new counter for active connections.
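A sketch of the new counter with simplified locking; the names and the
exact limit policy are illustrative:

  #include <pthread.h>

  /* Hypothetical separate counter: counts connections only, not all
     threads, so the event scheduler's threads no longer affect the
     connection limit. */
  static unsigned int connection_count= 0;
  static pthread_mutex_t LOCK_connection_count= PTHREAD_MUTEX_INITIALIZER;

  static bool register_connection(unsigned int max_connections,
                                  bool is_super)
  {
    pthread_mutex_lock(&LOCK_connection_count);
    /* Ordinary users may use up to max_connections slots; one extra
       slot is reserved for a SUPER user. */
    bool ok= connection_count < max_connections + (is_super ? 1 : 0);
    if (ok)
      connection_count++;
    pthread_mutex_unlock(&LOCK_connection_count);
    return ok;
  }

  static void unregister_connection()
  {
    pthread_mutex_lock(&LOCK_connection_count);
    connection_count--;
    pthread_mutex_unlock(&LOCK_connection_count);
  }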
In cases when TRUNCATE was executed by invoking mysql_delete() rather
than by table recreation (for example, when TRUNCATE was issued on an
InnoDB table which is referenced by a foreign key), triggers were
invoked. In debug builds this also led to a crash because of an
assertion which assumes that some preliminary actions take place before
trigger invocation -- which doesn't happen in the case of TRUNCATE.
The fix is not to execute triggers in mysql_delete() when this
function is used by TRUNCATE.
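A self-contained sketch of the gating idea; the types and the flag are
illustrative stand-ins for the server internals:

  #include <functional>
  #include <vector>

  /* Illustrative only: a delete loop that fires per-row triggers
     unless the deletion was requested by TRUNCATE. */
  struct Table
  {
    std::vector<int> rows;
    std::function<void(int)> before_delete_trigger;  /* may be empty */
  };

  static void delete_all_rows(Table &t, bool called_from_truncate)
  {
    for (int row : t.rows)
    {
      /* TRUNCATE semantics: no row triggers (and none of the
         preliminary trigger setup the assertion checks for). */
      if (!called_from_truncate && t.before_delete_trigger)
        t.before_delete_trigger(row);
    }
    t.rows.clear();
  }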
The initial number of free memory blocks is 0. When the query cache is
enabled, a new memory block gets allocated, and the counter is set to 1.
This free memory block is later split each time query cache memory is
allocated for new blocks. This means that the free memory block counter
won't be reduced to zero when the number of allocated blocks is zero,
but rather to one. To avoid confusion, this patch changes this behavior
so that the free memory block counter is reset to zero when the query
cache is disabled.
Note that when the query cache is enabled and resized, the free memory
block counter was still calculated correctly.
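A toy model of the counter behavior described above (the real counter
comes from the query cache's block management; this only shows the
reset):

  /* Toy model: before the cache exists the free-block count is 0;
     enabling the cache allocates one big free block (count becomes 1),
     and while the cache exists splitting keeps it from returning to 0.
     The fix resets the counter when the cache is disabled. */
  struct Query_cache_stats
  {
    unsigned int free_blocks= 0;

    void enable()  { free_blocks= 1; }  /* one initial free block */
    void disable() { free_blocks= 0; }  /* the fix: report 0 when off */
  };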
- Apply Eric Bergen's patch: in join_read_always_key(), move ha_index_init() call
to before the late NULLs filtering code.
- Backport function comments from 6.0.
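A simplified sketch of the reordering; the structure below is
illustrative, not the actual join_read_always_key() body:

  struct Join_tab
  {
    bool index_initialized= false;
    void init_index() {}                /* stands in for ha_index_init() */
    bool has_null_rejecting_null_ref() { return false; } /* NULLs test */
    int  read_first_matching_row() { return 0; }
  };

  /* After the patch (sketch): the index is initialized first, so
     bailing out in the late NULLs filtering no longer leaves the
     handler without an initialized index. */
  static int join_read_always_key_sketch(Join_tab *tab)
  {
    if (!tab->index_initialized)
    {
      tab->init_index();
      tab->index_initialized= true;
    }
    if (tab->has_null_rejecting_null_ref())  /* late NULLs filtering */
      return -1;                             /* no match possible */
    return tab->read_first_matching_row();
  }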
Added a new function, test_if_data_home_dir(), which checks that a
path does not contain the mysql data home directory.
Use of the mysql data home directory in
DATA DIRECTORY & INDEX DIRECTORY is disallowed.
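A sketch of such a check: canonicalize both paths with realpath() and
do a prefix comparison (the real server check additionally handles
lower-casing and platform path conventions):

  #include <climits>
  #include <cstdlib>
  #include <cstring>

  /* Hypothetical check: does `path` lie inside the data home dir? */
  static bool path_is_inside_data_home(const char *path,
                                       const char *data_home)
  {
    char real_path[PATH_MAX], real_home[PATH_MAX];
    if (!realpath(path, real_path) || !realpath(data_home, real_home))
      return true;  /* be conservative: unresolvable paths are rejected */
    size_t home_len= strlen(real_home);
    return strncmp(real_path, real_home, home_len) == 0 &&
           (real_path[home_len] == '/' || real_path[home_len] == '\0');
  }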
Assertion `0' failed:
if a ROW item is part of an expression that also has
aggregate function calls (COUNT/SUM/AVG...), a
"splitting" with the Item::split_sum_func2 function
is applied to that ROW item.
The current implementation of Item::split_sum_func2
replaces this Item_row with a newly created
Item_aggregate_ref reference to it.
Then the row cache tries to work with the
Item_aggregate_ref object as with an Item_row object:
the row cache calls row-emulation methods such as cols and
element_index. Item_aggregate_ref (like its parent
Item_ref) inherits dummy implementations of those
methods from the hierarchy root Item, and calling
them leads to failed assertions and wrong data
output.
The row-emulation virtual functions (cols, element_index, addr,
check_cols, null_inside and bring_value) of Item_ref have
been overridden to forward calls to the underlying item
reference.
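The shape of the fix, sketched with stripped-down classes; the
forwarding through the ref pointer mirrors the description above:

  #include <cassert>
  #include <vector>

  /* Stripped-down hierarchy: Item carries dummy row-emulation methods,
     Item_row implements them, and the fixed Item_ref forwards to *ref. */
  struct Item
  {
    virtual ~Item() {}
    virtual unsigned int cols() { return 1; }
    virtual Item *element_index(unsigned int) { assert(0); return nullptr; }
  };

  struct Item_row : Item
  {
    std::vector<Item*> items;
    unsigned int cols() override { return (unsigned int) items.size(); }
    Item *element_index(unsigned int i) override { return items[i]; }
  };

  struct Item_ref : Item
  {
    Item **ref;  /* the underlying item, e.g. an Item_row */
    /* The fix: forward row-emulation calls instead of inheriting the
       dummy implementations from Item. */
    unsigned int cols() override { return (*ref)->cols(); }
    Item *element_index(unsigned int i) override
    { return (*ref)->element_index(i); }
  };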
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was
introduced to avoid replication problems (e.g. replicating the
statement with a string argument would cause a parse failure on the
slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and to use this converted value when persisting the query to the
log.
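A sketch of the conversion with illustrative names: coerce the bound
value to an unsigned integer and write that integer into the statement
text that goes to the log:

  #include <cstdio>
  #include <cstdlib>
  #include <string>

  /* Hypothetical coercion of a bound parameter to a LIMIT count. */
  static unsigned long long limit_param_to_int(const char *bound_value)
  {
    /* A real implementation would validate the value and reject
       negative or malformed input. */
    return strtoull(bound_value, NULL, 10);
  }

  /* When persisting the statement, substitute the converted integer so
     the slave always parses a numeric LIMIT. */
  static std::string query_for_log(const std::string &prefix,
                                   const char *bound_value)
  {
    char buf[32];
    snprintf(buf, sizeof(buf), "%llu", limit_param_to_int(bound_value));
    return prefix + " LIMIT " + buf;
  }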
NAME_CONST('whatever', -1) * MAX(whatever) bombed since -1 was
not seen as a constant, but as FUNCTION_UNARY_MINUS(constant),
while we were at the same time pretending it was a basic constant
item. This confused the aggregate handlers in exciting ways.
We now make NAME_CONST() behave more consistently.
Added a new function, test_if_data_home_dir(), which checks that a
path does not contain the mysql data home directory.
Use of 'mysql data home'/'any db name' in
DATA DIRECTORY & INDEX DIRECTORY is disallowed.
This was a double-free of the Unique member of Item_func_group_concat.
It was not causing a crash because Unique is a descendant of
Sql_alloc.
Fixed to free the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
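The ownership rule, sketched with a generic class (the real code
distinguishes the original Item_func_group_concat from its copies;
names here are illustrative):

  /* Illustrative ownership pattern: only the instance that allocated
     the resource frees it; copies share the pointer but never delete
     it. */
  struct Aggregator
  {
    int  *unique_state;   /* stands in for the Unique member */
    bool  owns_unique;    /* true only for the allocating instance */

    Aggregator() : unique_state(new int(0)), owns_unique(true) {}

    /* A shallow copy references the same state but does not own it. */
    Aggregator(const Aggregator &other)
      : unique_state(other.unique_state), owns_unique(false) {}

    ~Aggregator()
    {
      if (owns_unique)  /* the fix: free only from the owning instance */
        delete unique_state;
    }
  };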