When CREATE SERVER is issued, it allocates memory on a memory root
to store the cached server structure. When DROP SERVER is issued,
that memory is not released, as freeing individual allocations is
impossible with a memory root.
We use the same allocation strategy for plugins and ACLs. The problem
here is that there was no way (except for a server restart) to force
the 'servers' code to release this memory.
With this fix it is possible to release unused server cache memory
with FLUSH PRIVILEGES.
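For illustration, a minimal sketch of the release path, using the real
MEM_ROOT calls free_root() and init_alloc_root() but with an
illustrative function name and block size; a memory root can only be
dropped wholesale, so the cache has to be rebuilt from mysql.servers
afterwards:

    static MEM_ROOT mem;  /* root backing the cached server structures */

    static void servers_free_unused(void)
    {
      /* individual frees are impossible: drop the whole root ... */
      free_root(&mem, MYF(0));
      /* ... re-initialize it (block size is illustrative) ... */
      init_alloc_root(&mem, 1024, 0);
      /* ... and let the caller re-read mysql.servers into the cache */
    }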
No test case for this fix.
well enough
CREATE SERVER may cause a server crash if there is not enough memory
to execute this operation.
Fixed create_server() and prepare_server_struct_for_insert(), which
didn't check the return values of functions that allocate memory.
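The pattern applied is simply to test every allocation before use; a
sketch with an illustrative body (strdup_root() is the real MEM_ROOT
string duplicator, which returns NULL on out-of-memory):

    static bool copy_server_host(MEM_ROOT *root, FOREIGN_SERVER *server,
                                 const char *host)
    {
      if (!(server->host= strdup_root(root, host)))
        return true;    /* propagate OOM instead of dereferencing NULL */
      return false;
    }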
As this fixes an out-of-memory issue, no test case is available.
isn't running
Pass the process id of the manager as a parameter to "wait_for_pid",
and if the manager isn't running, do not continue to wait.
Also, capture the error output of our process-existence test,
"kill -0", as we expect errors and shouldn't pass them on to the user.
Additionally, be a bit more descriptive about what the problem is.
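For reference, what "kill -0 <pid>" does under the hood (a sketch, not
part of the shell patch itself): signal 0 performs only the existence
and permission checks, so it can probe a process without signalling it.

    #include <signal.h>
    #include <errno.h>
    #include <sys/types.h>

    static int process_is_running(pid_t pid)
    {
      if (kill(pid, 0) == 0)
        return 1;                 /* process exists */
      return errno != ESRCH;      /* ESRCH: no such process */
    }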
The problem is that the unimplemented WIN32 version of pthread_kill
returns ESRCH no matter the arguments, causing mysqld_list_processes
to set the procinfo to dead whenever pthread_kill returns non-zero.
The dead procinfo would then show up on a second invocation of
SHOW PROCESSLIST.
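An illustration of the failure mode (hypothetical wrapper; the real
check lives inside mysqld_list_processes): with a pthread_kill stub
that always returns ESRCH, every thread is reported dead.

    #include <pthread.h>

    static int thread_looks_dead(pthread_t tid)
    {
      /* signal 0: liveness probe, no signal is delivered */
      return pthread_kill(tid, 0) != 0;  /* always true with the stub */
    }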
Binlogging of an INSERT into an autoincrement blackhole table ignored
an explicitly set insert_id.
Fixed by refining the blackhole engine's insert method to call
update_auto_increment(), which prepares binlogging of the insert query
with the preceding SET INSERT_ID.
Note that, as the engine does not store any actual data, one has to
explicitly provide the server with the value of the autoincrement
column via SET INSERT_ID. Otherwise binlogging will happen with the
default SET INSERT_ID=1.
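A sketch of the refined insert method, modelled on the handler API
(the exact body in the patch may differ): nothing is stored, but the
autoincrement bookkeeping must still run so the binlogged SET INSERT_ID
matches the explicit one.

    int ha_blackhole::write_row(uchar *buf)
    {
      /* assign/record the autoincrement value for binlogging,
         even though the row itself is discarded */
      return table->next_number_field && buf == table->record[0]
                 ? update_auto_increment()
                 : 0;
    }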
Heap I_S tables are swapped out to disk after plan refinement.
Thus, READ_RECORD::file will still point to the (deleted) heap handler
at the start of execution. This causes a segmentation fault if join
buffering is used and the query is a star query where the result is
found to be empty before some table is accessed. In that case the
table has not been initialized (i.e. had its READ_RECORD
re-initialized) before the cleanup routine tries to close the handler.
Fixed by updating READ_RECORD::file when changing handlers.
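A sketch of the fix (illustrative helper name; the real change sits at
the point where the heap handler is replaced): whoever swaps the
handler must also repoint the cached READ_RECORD::file.

    static void switch_to_disk_handler(READ_RECORD *info,
                                       handler *new_file)
    {
      /* info->file would otherwise keep pointing at the
         deleted heap handler until re-initialization */
      info->file= new_file;
    }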
binlog_format=mixed
Statement-based replication of DELETE ... LIMIT, UPDATE ... LIMIT and
INSERT ... SELECT ... LIMIT is not safe, as the order of rows is not
defined.
With this fix, we issue a warning that such a statement is not safe to
replicate in statement mode, or switch to row-based mode in mixed mode.
Note that we might consider a statement safe if ORDER BY primary_key
is present; however, it could confuse users to see very similar
statements replicated differently.
Note 2: a regular UPDATE statement (w/o LIMIT) is unsafe as well, but
this patch doesn't address that issue. See comment from Kristian
posted 18 Mar 10:55.
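A sketch of the decision point (the call site is illustrative; the
names follow the 5.1 LEX API used for this class of statements): once
the statement is marked unsafe, statement mode warns and mixed mode
switches to row format.

    /* somewhere while resolving DELETE/UPDATE/INSERT ... SELECT */
    if (lex->current_select->select_limit)   /* ... LIMIT present */
      lex->set_stmt_unsafe();                /* warn, or go row-based */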
Each time the server reloads privileges containing table grants, it
allocates more memory than needed because of a badly chosen growth
prediction in the underlying dynamic arrays.
This patch introduces a new signature for the hash container
initializer which enables a much more pessimistic growth strategy in
favour of more efficient memory usage.
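A sketch of the idea behind the new signature (the parameter list here
is hypothetical, not the patch's exact one): the caller supplies a
growth increment so the backing dynamic array grows in small fixed
steps instead of over-allocating.

    my_bool hash_init_with_growth(HASH *hash, CHARSET_INFO *charset,
                                  ulong initial_elements,
                                  uint growth_size, /* per-step growth */
                                  size_t key_offset, size_t key_length,
                                  hash_get_key get_key,
                                  void (*free_element)(void *),
                                  uint flags);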
This patch was supplied by Google Inc.
Bug #18453 Warning/error message if there is a mismatch between ...
There were three problems:
1. the reported lack of warnings for the BEFORE syntax of PURGE;
2. a similar lack of warnings for the TO syntax;
3. incompatible behaviour between the two, in that the latter blanked
out index entries regardless of the presence or absence of the actual
file corresponding to an index record, while the former gave up at the
first mismatch.
Fixed by adding warning generation and synchronizing the logic of
purge_logs() and purge_logs_before_date().
my_stat() is now called in either of the two branches of purge_logs()
(responsible for the TO syntax of PURGE), similarly to how it has
behaved for the BEFORE syntax. If there is no actual binlog file,
my_stat() returns NULL and my_delete() is not invoked.
A critical error is reported to the user if a file from the index
could not be stat'ed or deleted and the system error code is
different from ENOENT.
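A sketch of the shared check (my_stat()/my_delete() are the real
my_sys calls; the surrounding control flow is condensed):

    MY_STAT stat_area;
    if (!my_stat(log_file_name, &stat_area, MYF(0)))
    {
      if (my_errno == ENOENT)
        ;  /* file already gone: push a warning and keep purging */
      else
        ;  /* any other error is critical: report it and stop */
    }
    else if (my_delete(log_file_name, MYF(0)) && my_errno != ENOENT)
      ;    /* deletion failed for a reason other than "already gone" */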
The problem was in a test case for Bug33507:
- when the number of active connections reaches the limit,
the server accepts only root connections. That's achieved by
accepting a connection, negotiating with the client and
checking the user credentials. If the user is not SUPER, the
connection is dropped.
- when the server accepts a connection, it increases the counter;
- when the server drops a connection, it decreases the counter;
- the race was between decreasing the counter and accepting a
new connection:
- max_user_connections = 2;
- 2 ordinary user connections are accepted;
- an extra user connection is being established;
- the server checks the user credentials and sends the 'Too many
connections' error;
- the client receives the error and establishes an extra SUPER
user connection;
- the server, however, has not yet decreased the counter (the extra
user connection is still "alive" in the server) -- so the new
SUPER-user connection is dropped, because it exceeds
(max_user_connections + 1).
The fix is to implement a "safe connect" routine, which makes several
attempts to connect (see the sketch below), and to use it in the test
script.
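A sketch of such a routine against the C API (the real helper lives in
the test framework; the retry count and delay are illustrative):

    #include <mysql.h>
    #include <unistd.h>

    static MYSQL *safe_connect(const char *host, const char *user,
                               const char *passwd, unsigned int port)
    {
      for (int attempt= 0; attempt < 10; attempt++)
      {
        MYSQL *mysql= mysql_init(NULL);
        if (mysql_real_connect(mysql, host, user, passwd,
                               NULL, port, NULL, 0))
          return mysql;              /* connected */
        mysql_close(mysql);
        sleep(1);                    /* let the server free a slot */
      }
      return NULL;                   /* give up: report to the caller */
    }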