A memory leak in mysql_options() was caused by a missing call
to my_free() in the MYSQL_SET_CLIENT_IP branch. Fixed by adding
my_free() to clean up the old mysql->options.client_ip value before
assigning the new one.
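Illustrative sketch of the corrected pattern (plain C++ with the standard
allocator standing in for the server's my_free()/my_strdup(); the struct
below is a simplified stand-in for st_mysql_options, not the real one):

    #include <cstdlib>
    #include <cstring>

    // Simplified stand-in for the client_ip slot in the options struct.
    struct mysql_options_st {
      char *client_ip;
    };

    // Corrected pattern: release the previously stored value before
    // overwriting the pointer, so repeated calls do not leak.
    static void set_client_ip(mysql_options_st *options, const char *ip) {
      free(options->client_ip);         // no-op when client_ip is NULL
      options->client_ip = strdup(ip);  // store a fresh copy of the new value
    }

    int main() {
      mysql_options_st opts = {nullptr};
      set_client_ip(&opts, "10.0.0.1");
      set_client_ip(&opts, "10.0.0.2");  // old copy is freed, not leaked
      free(opts.client_ip);
      return 0;
    }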
ERROR HANDLING CODE
BACKGROUND:
There is a potential crash due to a buffer overrun in the
SSL error handling code, caused by a missing comma in the
ssl_error_string[] array in viosslfactories.c.
ANALYSIS:
Found by code inspection.
FIX:
Added the missing comma to the ssl_error_string[] array
in the SSL error handling code in viosslfactories.c.
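A contrived demonstration of this bug class (the strings below are
invented; only the mechanism matches): in C/C++, a missing comma between
adjacent string literals silently concatenates them, so the array ends up
one element short and indexing it by error code can read past its end:

    #include <cstdio>

    // Missing comma: "bad certificate" and "cannot open file" merge
    // into a single element, shrinking the array from 4 entries to 3.
    static const char *error_string_buggy[] = {
      "no error",
      "bad certificate"   // <-- missing comma concatenates the literals
      "cannot open file",
      "handshake failed",
    };

    static const char *error_string_fixed[] = {
      "no error",
      "bad certificate",  // <-- comma restored; all 4 entries present
      "cannot open file",
      "handshake failed",
    };

    int main() {
      std::printf("buggy: %zu entries\n",
                  sizeof(error_string_buggy) / sizeof(char *));  // prints 3
      std::printf("fixed: %zu entries\n",
                  sizeof(error_string_fixed) / sizeof(char *));  // prints 4
      // Indexing error_string_buggy[3] would read past the array's end.
      return 0;
    }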
Problem:-
The second execution of a prepared statement for a query with a
parameter in the LIMIT clause causes an assert when using
connectors (e.g., Connector/C).
Analysis:-
In a prepared statement, LIMIT parameters can be
specified using '?' markers, and the values for these parameters
can be supplied when executing the prepared statement.
Passing string, float or double values for the LIMIT clause
works fine from the command-line client. That is because, when
the LIMIT parameter value is set from a user variable,
the value is converted to an integer.
However, when a prepared statement is executed from other
interfaces such as Connector/J or C applications,
the values for the parameters are sent to the server
with the execute command. Each item in the command has a value and
a data TYPE. So, while setting parameter values from this
command, each parameter is set to its value
with the same data type as passed.
Here, we have logic that converts the value, changing the
state and item_type, if it is part of a LIMIT parameter and
its item_type is not INT.
But when we reset this parameter we keep the item_type while changing
the state. So on the second execution we have the old item_type but our
state has been changed, which makes us use a string-type variable
in Item_param::query_str_val(). This causes the assert.
Fix:
Instead of checking the item_type of the parameter, check the
state of the parameter, since state values are reset every time
we execute the statement.
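Minimal sketch of the idea behind the fix (the enums and the decision
function are simplified stand-ins for Item_param's internals, not the
server's definitions): key the decision off the per-execution state,
which is reset on every execution, instead of item_type, which survives
the reset:

    #include <cassert>

    // Simplified stand-ins for Item_param's type/state bookkeeping.
    enum ItemType   { INT_ITEM, STRING_ITEM };
    enum ParamState { NO_VALUE, INT_VALUE, STRING_VALUE };

    struct Param {
      ItemType   item_type;  // survives reset between executions
      ParamState state;      // re-derived on every execution
    };

    // Buggy check: on the second execution item_type still says
    // STRING_ITEM even though the state was reset, so the wrong
    // branch is taken and the assert fires.
    bool use_string_path_buggy(const Param &p) {
      return p.item_type == STRING_ITEM;
    }

    // Fixed check: rely on state, which is reset every execution.
    bool use_string_path_fixed(const Param &p) {
      return p.state == STRING_VALUE;
    }

    int main() {
      Param p = {STRING_ITEM, INT_VALUE};  // stale type, fresh state
      assert(!use_string_path_fixed(p));   // fixed check takes the int path
      (void)use_string_path_buggy;
      return 0;
    }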
ER_LOCK_WAIT_TIMEOUT".
The problem was that after the changes from the fixes for bug 14188793
"DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR STATUS OF ROLLBACKED
TRANSACTION" and bug 17054007 "TRANSACTION IS NOT FULLY ROLLED BACK IN
CASE OF INNODB DEADLOCK", the implicit rollback of a transaction which
occurred on ER_LOCK_DEADLOCK (and on ER_LOCK_WAIT_TIMEOUT, if the
innodb_rollback_on_timeout option was set) did not start a new
transaction in @@autocommit=1 mode.
Such behavior, although consistent with the behavior of an explicit
ROLLBACK, broke users' expectations and backward compatibility assumptions.
This patch fixes the problem by reverting to starting a new transaction
in 5.5/5.6.
The plan is to keep the new behavior in trunk, so the code change from this
patch is to be null-merged there.
Problem:-
In a procedure, when we compare the value of a select query
with an IN clause and the two have different collations, this causes
an error on first execution and an assert on the second.
The procedure has a query like:
set @x = ((select a from t1) in (select d from t2)); <--- proc1
           sel1                  sel2
Analysis:-
When we execute proc1 (the first time), while resolving the fields
of the user variable, we call Item_in_subselect::fix_fields, which
resolves sel2. There, in Item_in_subselect::select_transformer, we
evaluate the left expression (sel1) and store it in an Item_cache_*
object (to avoid re-evaluating it many times during subquery execution)
by creating an Item_in_optimizer object.
While evaluating the left expression, we prepare sel1.
After that, we put a new condition into sel2
in Item_in_subselect::select_transformer(), which compares
t2.d and sel1 (which is cached in Item_in_optimizer).
Later, while checking the collation in agg_item_collations(),
we get an error and clean up the item. While cleaning up, we also
clean the cached value in the Item_in_optimizer object.
When we execute the procedure a second time, we have the condition for
sel2, but in setup_cond() we cannot find the reference item,
as it was destroyed during the item cleanup. So it asserts.
Solution:-
We should not clean up the cached value in the Item_in_optimizer object
if we have put the condition into the subselect.
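Minimal sketch of the guarded cleanup (the flag and classes below are
simplified stand-ins, not the server's Item hierarchy): once the
transformed condition in the subselect references the cache, the cache
must survive cleanup so re-execution can still resolve it:

    // Simplified stand-ins for the items involved.
    struct Cache { int value; };

    struct InOptimizer {
      Cache *cached_left_expr;  // sel1's cached value
      bool   condition_pushed;  // condition injected into the subselect
    };

    // Guarded cleanup: if the condition was pushed into the subselect,
    // it still references the cache, so keep the cache alive for the
    // next execution instead of freeing it during error cleanup.
    void cleanup(InOptimizer *opt) {
      if (!opt->condition_pushed) {
        delete opt->cached_left_expr;
        opt->cached_left_expr = nullptr;
      }
    }

    int main() {
      InOptimizer opt = {new Cache{42}, true};
      cleanup(&opt);                 // cache survives: still referenced
      delete opt.cached_left_expr;   // freed only at final destruction
      return 0;
    }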
THAN LOCALHOST
This is a test bug, and the explanation for the behaviour can be found
on the bug page. The fix modifies the select to select user where
user != root on the line where the failure is encountered, for machines
with no hostname other than localhost.
compressed pages
After loading a compressed-only page in buf_page_get_gen(), we allocate a new
block for decompression. The problem is that the compressed page is neither
buffer-fixed nor I/O-fixed by the time we call buf_LRU_get_free_block(),
so it may end up being evicted and returned to us as the new block.
buf_page_get_gen(): Temporarily buffer-fix the compressed-only block
while allocating memory for an uncompressed page frame.
This should prevent this form of the infinite loop, which is more likely
with a small innodb_buffer_pool_size.
rb#2511 approved by Jimmy Yang, Sunny Bains
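Minimal sketch of the fix's shape (the buffer-pool type and allocator
below are placeholders, not InnoDB's real buf_page_t or
buf_LRU_get_free_block()): pin the compressed-only block across the
allocation that could otherwise evict it:

    #include <atomic>

    // Placeholder page descriptor; the real buf_page_t carries far more.
    struct buf_page_t {
      std::atomic<int> buf_fix_count{0};  // > 0 blocks LRU eviction
    };

    void buf_page_fix(buf_page_t *b)   { b->buf_fix_count.fetch_add(1); }
    void buf_page_unfix(buf_page_t *b) { b->buf_fix_count.fetch_sub(1); }

    // Placeholder allocator: may evict unfixed pages to free a block.
    buf_page_t *buf_LRU_get_free_block() { return new buf_page_t; }

    // Shape of the fix: keep the compressed-only page pinned while
    // allocating, so it cannot be evicted and handed back as the
    // "new" block, which is what caused the infinite loop.
    buf_page_t *alloc_uncompressed_frame(buf_page_t *zip_page) {
      buf_page_fix(zip_page);
      buf_page_t *block = buf_LRU_get_free_block();
      buf_page_unfix(zip_page);  // safe once we hold the new block
      return block;
    }

    int main() {
      buf_page_t zip;
      delete alloc_uncompressed_frame(&zip);
      return 0;
    }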
"SHOW PROCESSLIST"
Analysis:
----------
The problem here is: if one connection changes its
default db while another connection executes
"SHOW PROCESSLIST", then when the latter reads the db of the
first connection there is a chance of accessing invalid
memory.
The db name stored in THD is guarded neither while changing the user
DB nor while reading it in "SHOW PROCESSLIST".
So, if THD.db is freed by the thd "owner" thread while another
thread executing a "SHOW PROCESSLIST" statement tries to read
and copy THD.db at the same time, we may end up with the issue
reported here.
Fix:
----------
Used the mutex "LOCK_thd_data" to guard THD.db while freeing it
and while copying it to the processlist.
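Minimal sketch of the locking discipline (std::mutex stands in for the
server's mysql_mutex_t, and the THD below is reduced to the two members
relevant to this race):

    #include <cstdlib>
    #include <cstring>
    #include <mutex>
    #include <string>

    // Reduced THD: only the fields relevant to this race.
    struct THD {
      std::mutex LOCK_thd_data;  // guards db
      char      *db = nullptr;
    };

    // Owner thread: free/replace THD.db only under LOCK_thd_data.
    void thd_set_db(THD *thd, const char *new_db) {
      std::lock_guard<std::mutex> guard(thd->LOCK_thd_data);
      free(thd->db);
      thd->db = new_db ? strdup(new_db) : nullptr;
    }

    // SHOW PROCESSLIST thread: copy THD.db under the same mutex, so it
    // can never observe a pointer the owner is concurrently freeing.
    std::string processlist_read_db(THD *thd) {
      std::lock_guard<std::mutex> guard(thd->LOCK_thd_data);
      return thd->db ? std::string(thd->db) : std::string();
    }

    int main() {
      THD thd;
      thd_set_db(&thd, "test");
      std::string copy = processlist_read_db(&thd);  // safe snapshot
      thd_set_db(&thd, nullptr);
      return copy.empty() ? 1 : 0;
    }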
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".
The problem in the first bug report was that although a deadlock involving
metadata locks was reported using the same error code and message as an
InnoDB deadlock, it did not roll back the transaction like the latter does.
This confused users, as in some cases the transaction could be restarted
immediately after ER_LOCK_DEADLOCK, while in other cases a rollback was
required.
The problem in the second bug report was that although an InnoDB deadlock
caused a transaction rollback in all storage engines, it did not cause the
release of metadata locks. So concurrent DDL on the tables used in the
transaction was blocked until an implicit or explicit COMMIT or ROLLBACK
was issued in the connection which got the InnoDB deadlock.
The former issue stemmed from the fact that, when support for detecting
and reporting metadata lock deadlocks was added, we erroneously assumed
that InnoDB rolls back only the last statement on deadlock rather than the
whole transaction (while that is actually what happens on an InnoDB lock
timeout), and so we did not implement rollback of transactions on MDL
deadlocks.
The latter issue was caused by the fact that rollback of a transaction due
to deadlock is carried out by setting the THD::transaction_rollback_request
flag at the point where the deadlock is detected, and performing the
rollback inside the trans_rollback_stmt() call when this flag is set. And
trans_rollback_stmt() is not aware of MDL locks, so no MDL locks are
released.
This patch solves these two problems in the following way:
- When an MDL deadlock is detected, transaction rollback is requested
by setting the THD::transaction_rollback_request flag.
- The code performing rollback of the transaction when
THD::transaction_rollback_request is set is moved out of
trans_rollback_stmt(). Now we handle the rollback request at the same
level where we call trans_rollback_stmt() and release statement/
transaction MDL locks.
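Minimal sketch of the control-flow change (apart from
trans_rollback_stmt() and the THD flag, the names below are simplified
placeholders for the statement-execution layer): the full-transaction
rollback is honored at the level that also releases MDL locks:

    // Reduced session object: just the rollback-request flag.
    struct THD {
      bool transaction_rollback_request = false;
    };

    void trans_rollback_stmt(THD *) { /* statement rollback only */ }
    void trans_rollback_implicit(THD *thd) {
      /* roll back the whole transaction */
      thd->transaction_rollback_request = false;
    }
    void release_transactional_mdl(THD *) { /* release stmt/trx MDL */ }

    // After the fix: the caller that finishes the statement honors the
    // rollback request itself and then releases MDL, instead of burying
    // the transaction rollback inside trans_rollback_stmt().
    void end_statement(THD *thd) {
      trans_rollback_stmt(thd);
      if (thd->transaction_rollback_request)
        trans_rollback_implicit(thd);  // requested on deadlock
      release_transactional_mdl(thd);  // MDL is now released as well
    }

    int main() {
      THD thd;
      thd.transaction_rollback_request = true;  // as set on MDL deadlock
      end_statement(&thd);
      return thd.transaction_rollback_request ? 1 : 0;
    }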
DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1
The function row_sel_sec_rec_is_for_clust_rec() was incorrectly
preparing to compare a NULL column prefix in a secondary index with a
non-NULL column in a clustered index.
This can trigger an assertion failure in the 5.1 plugin and later. In the
built-in InnoDB of MySQL 5.1 and earlier, we would apparently only do
some extra work, trimming the clustered index field for the
comparison.
The code might actually have worked properly apart from this debug
assertion failure. It is merely doing some extra work in fetching a
BLOB column and then comparing it to NULL (which would return the
same result, no matter what the BLOB contents are).
While the test case involves CHECK TABLE, this could theoretically
occur during any read that uses a secondary index on a column prefix
of a column that can be NULL.
rb#3101 approved by Mattias Jonsson
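Minimal sketch of the guard (the Field struct and comparison are
placeholders for InnoDB's dfield/rec interfaces): when the
secondary-index value is NULL, answer from NULL-ness alone instead of
preparing a prefix comparison (and possibly fetching a BLOB):

    #include <cstddef>
    #include <cstring>

    // Placeholder field: data == nullptr represents SQL NULL.
    struct Field {
      const void *data;
      size_t      len;
    };

    // Shape of the corrected check: never set up a column-prefix
    // comparison when either side is NULL.
    bool sec_matches_clust(const Field &sec, const Field &clust,
                           size_t prefix_len) {
      if (sec.data == nullptr || clust.data == nullptr) {
        // NULL matches only NULL; no BLOB fetch or prefix trim needed.
        return sec.data == nullptr && clust.data == nullptr;
      }
      size_t cmp_len = clust.len < prefix_len ? clust.len : prefix_len;
      return sec.len == cmp_len &&
             std::memcmp(sec.data, clust.data, cmp_len) == 0;
    }

    int main() {
      Field sec = {nullptr, 0};  // NULL column prefix in secondary index
      Field clust = {"abc", 3};  // non-NULL clustered column
      return sec_matches_clust(sec, clust, 2) ? 1 : 0;  // false, no fetch
    }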
There was a race condition in the rollback of TRX_UNDO_UPD_DEL_REC.
Once row_undo_mod_clust() has rolled back the changes by the rolling-back
transaction, it attempts to purge the delete-marked record, if possible, in a
separate mini-transaction.
However, row_undo_mod_remove_clust_low() fails to check whether the DB_TRX_ID
of the record that it found after repositioning the cursor is still the same.
If it is not, the record was purged in the meantime and another record was
inserted in its place.
In that case, the rollback would perform an incorrect purge, breaking the
locking rules and causing corruption.
The problem was found by creating a table that contains a unique
secondary index and a primary key, and two threads running REPLACE
with only one value for the unique column, so that the uniqueness
constraint would be violated all the time, leading to statement
rollback.
This bug exists in all InnoDB versions (I checked MySQL 3.23.53).
It has become easier to repeat in 5.5 and 5.6 thanks to scalability
improvements and a dedicated purge thread.
rb#3085 approved by Jimmy Yang
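Minimal sketch of the added check (the record struct is a placeholder
for InnoDB's rec/cursor machinery): after repositioning the cursor in
the separate mini-transaction, remove the record only if it still
carries the rolling-back transaction's DB_TRX_ID:

    #include <cstdint>

    // Placeholder clustered-index record with its DB_TRX_ID column.
    struct Rec {
      uint64_t db_trx_id;
      bool     delete_marked;
    };

    // Shape of the fix in row_undo_mod_remove_clust_low(): if DB_TRX_ID
    // changed, purge already removed the old record and another record
    // took its place, so removing it here would corrupt the index.
    bool safe_to_remove(const Rec &rec, uint64_t rolling_back_trx_id) {
      return rec.delete_marked && rec.db_trx_id == rolling_back_trx_id;
    }

    int main() {
      Rec stale = {99, true};                    // reinserted by another trx
      return safe_to_remove(stale, 42) ? 1 : 0;  // 0: skip the purge
    }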
FAILED BLOB WRITE
btr_store_big_rec_extern_fields(): Relax a debug assertion so that
some BLOB pointers may remain zero if an error occurs.
btr_free_externally_stored_field(), row_undo_ins(): Allow the BLOB
pointer to be zero on any rollback.
rb#3059 approved by Jimmy Yang, Kevin Lewis
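Minimal sketch of the relaxed invariant (the structure is a placeholder;
the real assertion sits in btr_store_big_rec_extern_fields()): a zero
BLOB pointer is tolerated only when the write failed partway:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Placeholder for one record's externally stored field pointers;
    // 0 means "not written yet".
    struct BigRec {
      std::vector<uint64_t> blob_ptrs;
    };

    // Relaxed debug assertion: insist on fully valid pointers only
    // when the multi-field BLOB write completed without error.
    void check_blob_ptrs(const BigRec &rec, bool write_failed) {
      for (uint64_t ptr : rec.blob_ptrs)
        assert(write_failed || ptr != 0);  // zeros legal after a failure
    }

    int main() {
      BigRec rec{{0x90001, 0, 0}};                  // failed after field 1
      check_blob_ptrs(rec, /*write_failed=*/true);  // passes
      return 0;
    }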
Problem Description:
A mysqld_safe instance is started, and InnoDB crash recovery begins, which
takes a few seconds to complete. While this crash recovery is in progress,
another mysqld_safe instance is started with the same server startup
parameters. Since mysqld's pid file is absent during the crash recovery,
the second instance assumes there is no other process and tries to acquire
a lock on the ibdata files in the datadir. This step fails, and the second
instance keeps retrying 100 times, each with a delay of 1 second. After the
100 attempts the server goes down, but while going down it hits the
mysqld_safe script's cleanup section and, without any check, blindly
deletes the socket and pid files. Since no lock is placed on the socket
file, it gets deleted.
Solution:
We create a mysqld_safe.pid file in the datadir, which protects the present
server instance's resources by storing the mysqld_safe process id in it. We
check whether the mysqld_safe.pid file exists in the datadir. If it does,
we check whether the pid it contains belongs to an active process. If it
does, the script logs an error saying "A mysqld_safe instance is already
running". Otherwise it logs the present mysqld_safe's pid into the
mysqld_safe.pid file.
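Minimal sketch in C++ of the liveness check the script performs
(mysqld_safe itself is a shell script; the pid-file path in main() is a
hypothetical example): read the stored pid and probe it with
kill(pid, 0), which checks for existence without delivering a signal:

    #include <csignal>
    #include <cstdio>
    #include <sys/types.h>
    #include <unistd.h>

    // True if the pid file exists and names a live process, mirroring
    // the check done before a second mysqld_safe is allowed to start.
    bool other_instance_running(const char *pidfile_path) {
      std::FILE *f = std::fopen(pidfile_path, "r");
      if (f == nullptr)
        return false;                 // no pid file: nothing running
      long pid = 0;
      int fields = std::fscanf(f, "%ld", &pid);
      std::fclose(f);
      if (fields != 1 || pid <= 0)
        return false;                 // stale or unreadable pid file
      // Signal 0 performs existence/permission checks only.
      return kill(static_cast<pid_t>(pid), 0) == 0;
    }

    int main() {
      // Hypothetical datadir path, for illustration only.
      return other_instance_running("/var/lib/mysql/mysqld_safe.pid") ? 1 : 0;
    }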
AND PARTITION VALUES IN (NULL)
The code assumed there was at least one list element
in a LIST partitioned table.
Fixed by checking the number of list elements.
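Minimal sketch of the defensive check (the struct is a stand-in for the
partitioning metadata): a LIST partition defined as VALUES IN (NULL) has
zero non-NULL list elements, so the element count must be checked before
use:

    #include <cstdint>
    #include <vector>

    // Stand-in for per-partition LIST metadata.
    struct ListPartInfo {
      std::vector<int64_t> values;  // may be empty for VALUES IN (NULL)
      bool has_null_value;
    };

    // Fixed pattern: iterate over however many elements exist instead
    // of assuming at least one.
    bool matches(const ListPartInfo &part, const int64_t *val) {
      if (val == nullptr)
        return part.has_null_value;  // NULL rows go to the NULL partition
      for (int64_t v : part.values)  // safe even when values is empty
        if (v == *val)
          return true;
      return false;
    }

    int main() {
      ListPartInfo p = {{}, true};    // VALUES IN (NULL): empty value list
      int64_t v = 7;
      return matches(p, &v) ? 1 : 0;  // returns false safely, no crash
    }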