Bug #18453 Warning/error message if there is a mismatch between ...
There were three problems:
1. the reported lack of warnings for the BEFORE syntax of PURGE;
2. the similar lack of warnings for the TO syntax;
3. inconsistent behaviour between the two, in that the latter (TO) blanked out
   index entries regardless of whether the actual file corresponding to an
   index record existed, while the former gave up at the first mismatch.
Fixed by adding generation of the warnings and by synchronizing the logic of
purge_logs() and purge_logs_before_date().
my_stat() is now called in both branches of purge_logs() (which handles the
TO syntax of PURGE), the same way it was already done for the BEFORE syntax.
If the actual binlog file does not exist, my_stat() returns NULL and
my_delete() is not invoked.
A critical error is reported to the user if information about a file listed
in the index could not be retrieved, or the file could not be deleted, and
the system error code is something other than ENOENT.
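For illustration, a minimal standalone C sketch of that error-handling
pattern, with POSIX stat()/unlink() standing in for my_stat()/my_delete();
this is not the server code, only the behaviour described above:

  #include <errno.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Purge one file listed in the binlog index.
     Returns 0 to keep purging, -1 on a critical error. */
  static int purge_one_log(const char *log_name)
  {
    struct stat st;
    if (stat(log_name, &st) != 0)
    {
      if (errno == ENOENT)
      {
        /* No actual file behind the index record: warn and continue. */
        fprintf(stderr, "Warning: '%s' is listed in the index "
                        "but was not found\n", log_name);
        return 0;                       /* deletion is skipped */
      }
      /* Could not retrieve info about the file: critical error. */
      fprintf(stderr, "Error: cannot stat '%s': errno=%d\n", log_name, errno);
      return -1;
    }
    if (unlink(log_name) != 0 && errno != ENOENT)
    {
      /* Could not delete the file: critical error. */
      fprintf(stderr, "Error: cannot delete '%s': errno=%d\n", log_name, errno);
      return -1;
    }
    return 0;
  }

  int main(int argc, char **argv)
  {
    for (int i = 1; i < argc; i++)
      if (purge_one_log(argv[i]))
        return 1;
    return 0;
  }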
The problem was in a test case for Bug33507:
- when the number of active connections reaches the limit,
  the server accepts only root connections. That is achieved by
  accepting a connection, negotiating with the client and
  checking the user credentials. If the user does not have the
  SUPER privilege, the connection is dropped.
- when the server accepts a connection, it increases the counter;
- when the server drops a connection, it decreases the counter;
- the race was between decreasing the counter and accepting a
  new connection:
  - max_user_connections = 2;
  - 2 ordinary user connections were accepted;
  - an extra user connection was being established;
  - the server checked the user credentials and sent a
    'Too many connections' error;
  - the client received the error and established an extra SUPER-user
    connection;
  - the server, however, had not yet decreased the counter (the extra
    user connection was still "alive" in the server), so the new
    SUPER-user connection was dropped because it exceeded
    (max_user_connections + 1).
The fix is to implement a "safe connect" that makes several attempts
to connect, and to use it in the test script (see the sketch below).
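The actual "safe connect" lives in the test script; the following standalone
C client sketch only illustrates the retry idea (the attempt count and the
delay are arbitrary):

  #include <mysql.h>
  #include <stdio.h>
  #include <unistd.h>

  #define CON_RETRY_ATTEMPTS 5            /* arbitrary for the sketch */

  static MYSQL *safe_connect(const char *host, const char *user,
                             const char *passwd, unsigned int port)
  {
    for (int i = 0; i < CON_RETRY_ATTEMPTS; i++)
    {
      MYSQL *mysql = mysql_init(NULL);
      if (mysql_real_connect(mysql, host, user, passwd, NULL, port, NULL, 0))
        return mysql;                     /* connected */
      fprintf(stderr, "attempt %d failed: %s\n", i + 1, mysql_error(mysql));
      mysql_close(mysql);
      sleep(1);    /* give the server time to decrease its counter */
    }
    return NULL;
  }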
Queries like:
SELECT ROW(1, 2) IN (SELECT t1.a, 2)
FROM t1 GROUP BY t1.a
or
SELECT ROW(1, 2) IN (SELECT t1.a, 2 FROM t2)
FROM t1 GROUP BY t1.a
lead to an assertion failure in the
Item_in_subselect::row_value_transformer method in debug
builds, or to an unexpected error message in release builds:
ERROR 1247 (42S22): Reference '<list ref>' not supported (forward
reference in item list)
The unexpected error message and the assertion failure have been
eliminated.
When there are no underlying tables specified for a merge table,
SHOW CREATE TABLE outputs a statement that cannot be executed. The
same is true for mysqldump (it generates dumps that cannot be
executed).
This happens because the SQL parser does not accept an empty UNION() clause.
This patch changes the following:
- it is now possible to execute a CREATE/ALTER statement with an
  empty UNION() clause (see the sketch after this list);
- the same as above, but still worth noting: it is now possible to
  remove the underlying-table mapping using ALTER TABLE ... UNION=();
- SHOW CREATE TABLE does not output a UNION() clause if there are
  no underlying tables specified for a merge table. This makes the
  mysqldump output slightly smaller in some cases.
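For illustration, a C client sketch issuing the now-accepted statements
(the table name m1 is made up; error handling is reduced to return codes):

  #include <mysql.h>

  /* Returns 0 if both statements were accepted. */
  static int demo_empty_union(MYSQL *mysql)
  {
    /* CREATE with an empty UNION() clause is now accepted. */
    if (mysql_query(mysql,
          "CREATE TABLE m1 (a INT) ENGINE=MRG_MYISAM UNION=()"))
      return 1;
    /* Removing the underlying-table mapping is now possible, too. */
    if (mysql_query(mysql, "ALTER TABLE m1 UNION=()"))
      return 1;
    return 0;
  }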
ER_CON_COUNT_ERROR is defined with SQL state 08004. However, this SQL state is not always
returned.
This error can be thrown in two cases:
1. when an ordinary user (a user w/o the SUPER privilege) is connecting,
and the number of active user connections is equal to or greater than
max_connections;
2. when a user is connecting and the number of active user connections is
already (max_connections + 1) -- that means that no more connections will
be accepted regardless of the user credentials.
In the first case, the SQL state is correct.
The bug happens in the second case: on UNIX the client gets SQL state 00000,
which is plainly wrong (00000 means "not an error"); on Windows the client
accidentally gets HY000 (which means "unknown SQL state").
The cause of the problem is that the server rejects the extra connection
before reading the packet with the client capabilities. Thus, the server
does not know whether the client supports SQL states (i.e. whether the
client speaks the 4.1 protocol). So, the server assumes the worst and
does not send an SQL state at all.
The difference in behavior between UNIX and Windows occurs because on Windows
CLI_MYSQL_REAL_CONNECT() invokes create_shared_memory(), which returns
an error (in the default configuration, where shared memory is not configured).
The client then does not reset this error, so when the connection is
rejected, the SQL state is HY000 (taken from the create_shared_memory() error).
The bug appeared with the test case for Bug#33507 -- before that, this
behavior simply had not been tested.
The fix is 1) to reset the error after create_shared_memory(); and
2) to set the SQL state to 'unknown error' if it was not received from
the server (a sketch of this part follows below).
A separate test case is not required, since the behavior is already
tested in connect.test.
Note for doc-team: the manual should be updated to say that under
some circumstances, 'Too many connections' has HY000 SQL state.
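To illustrate part 2) of the fix, a standalone sketch of defaulting the SQL
state when the server sent none (the names below are local to the sketch,
not the libmysql identifiers):

  #include <string.h>

  #define SQLSTATE_LENGTH 5
  static const char unknown_sqlstate[] = "HY000";

  /* received is NULL when the server (or a rejected connection)
     sent no SQL state along with the error. */
  static void set_client_sqlstate(char sqlstate[SQLSTATE_LENGTH + 1],
                                  const char *received)
  {
    strncpy(sqlstate, received ? received : unknown_sqlstate, SQLSTATE_LENGTH);
    sqlstate[SQLSTATE_LENGTH] = '\0';
  }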
using a trig in SP
In all 5.0 versions, and in 5.1 up to but not including 5.1.12, when a
stored routine or trigger caused an INSERT into an AUTO_INCREMENT
column, the generated AUTO_INCREMENT value was not written into the
binary log. This means that if a statement did not generate an
AUTO_INCREMENT value itself, no Intvar event (SET INSERT_ID) was
associated with it, even if a stored routine or trigger it called
generated such a value. Likewise, when executing a stored routine or
trigger, the INSERT_ID value was ignored even if one had been set by
a SET INSERT_ID statement.
Starting from MySQL 5.1.12, the generated AUTO_INCREMENT value is
written into the binary log, and the value is used, if available,
when executing the stored routine or trigger.
Before the fix of this bug in MySQL 5.0 and in 5.1 prior to 5.1.12
(referred to as the buggy versions in the text below), when a
statement that generated an AUTO_INCREMENT value in the top
statement was executed in the body of an SP, all statements in the
SP after this statement were treated as if they had generated an
AUTO_INCREMENT value in the top statement. When a statement
generated an AUTO_INCREMENT value not in the top statement but in a
function/trigger called by it, an erroneous Intvar event was
associated with the statement. This erroneous INSERT_ID value did
not cause problems when replicating between masters and slaves
running 5.0.x or 5.1 prior to 5.1.12, because the erroneous
INSERT_ID value was not used when executing functions/triggers. But
when replicating from a buggy version to 5.1.12 or newer, which does
use the INSERT_ID value in functions/triggers, the erroneous value
is used, causing a duplicate entry error and stopping the slave.
The patch for 5.0 fixes it by not generating the erroneous Intvar
event; another patch for 5.1 fixes it by ignoring the SET INSERT_ID
value when executing functions/triggers if replicating from a master
running one of the buggy versions.
The problem is that, since MyISAM's concurrent_insert is on by
default, some concurrent SELECT statements might not see changes
made by INSERT statements in other connections, even after the
INSERT statement has returned.
The solution is to disable concurrent_insert so that an INSERT
statement returns only after the data is actually visible to other
statements.
When concurrent inserts were disabled, statements after an INSERT
were not put into the query cache. This happened because we do not
save the current data file length at statement start when
concurrent inserts are disabled, but we checked the always-zero
local length against the real file length anyway.
Fixed by doing the check only if concurrent inserts are not disabled.
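A standalone sketch of the corrected condition (identifier names are
illustrative, not the actual MyISAM/query-cache ones):

  #include <stdbool.h>
  #include <stdint.h>

  static bool result_may_be_cached(bool concurrent_inserts_enabled,
                                   uint64_t saved_data_file_length,
                                   uint64_t current_data_file_length)
  {
    /* The saved length is only maintained while concurrent inserts are
       enabled, so it is compared only in that case; comparing the
       always-zero value made every such statement uncacheable. */
    if (concurrent_inserts_enabled &&
        saved_data_file_length != current_data_file_length)
      return false;
    return true;
  }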
table with backticks
(Thanks to Lu Jingdong, though I did not take his patch directly, as
it contained a significant flaw.)
It wasn't a backtick/parsing problem. We merely didn't anticipate
and allocate enough space to handle the optional "#mysql50#" table-
name prefix.
Now we allocate that extra space in case we need it when we look up
a legacy table to get its file's name.
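A tiny standalone sketch of the sizing change (NAME_LEN here is an arbitrary
stand-in for the server's table-name limit, and the table name is made up):

  #include <stdio.h>

  #define NAME_LEN 192                    /* stand-in value */
  #define MYSQL50_PREFIX "#mysql50#"

  int main(void)
  {
    /* Room for the optional prefix, the longest name and a terminator. */
    char buf[sizeof(MYSQL50_PREFIX) - 1 + NAME_LEN + 1];
    snprintf(buf, sizeof(buf), "%s%s", MYSQL50_PREFIX, "legacy_table_name");
    printf("%s\n", buf);
    return 0;
  }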
than max_connections -- which results in user lockout.
The problem was that the variable thread_count, which contains
the number of active threads, was interpreted as the number of
active connections.
The fix is to introduce a separate counter for active connections.
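A minimal sketch of such a counter, kept separate from the thread count
(names, the default limit and the locking discipline are illustrative,
not the server's):

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t LOCK_connection_count = PTHREAD_MUTEX_INITIALIZER;
  static unsigned int connection_count = 0;     /* user connections only */
  static unsigned int max_connections  = 151;   /* illustrative default  */

  static bool try_accept_connection(void)
  {
    pthread_mutex_lock(&LOCK_connection_count);
    bool ok = connection_count < max_connections;
    if (ok)
      connection_count++;               /* cached/utility threads do not
                                           touch this counter            */
    pthread_mutex_unlock(&LOCK_connection_count);
    return ok;
  }

  static void connection_closed(void)
  {
    pthread_mutex_lock(&LOCK_connection_count);
    connection_count--;
    pthread_mutex_unlock(&LOCK_connection_count);
  }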
In cases when TRUNCATE was executed by invoking mysql_delete() rather
than by table recreation (for example, when TRUNCATE was issued on an
InnoDB table that is referenced by a foreign key), triggers were invoked.
In debug builds this also led to a crash because of an assertion which
assumes that certain preliminary actions take place before trigger
invocation, which does not happen in the case of TRUNCATE.
The fix is not to execute triggers in mysql_delete() when this
function is used by TRUNCATE.
WHERE f1 < n ignored a row if f1 was an indexed integer column,
f1 = TYPE_MAX, and n = TYPE_MAX + 1. The latter value, when treated
as TYPE, (obviously) overflowed. This was not handled; it is now.
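A standalone worked example of the overflow and one way to handle it
(the server's actual handling lives in the range optimizer; the saturation
below is only illustrative):

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
    long long n = (long long) INT_MAX + 1;  /* constant from the WHERE clause */
    int as_type = (int) n;                  /* wraps (typically to INT_MIN),
                                               so "f1 < n" was compared
                                               against the wrong value      */
    /* Saturate the out-of-range constant and adjust the comparison
       (f1 < n becomes f1 <= INT_MAX), so a row with f1 = INT_MAX
       still matches. */
    int clamped = (n > (long long) INT_MAX) ? INT_MAX : (int) n;
    printf("n=%lld as_type=%d clamped=%d\n", n, as_type, clamped);
    return 0;
  }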