When a DML statement is issued and the index merge access method is chosen, many rows are locked in the storage engine because of the way the algorithm works: rows get locked even though they will not be part of the final result set.
To reduce this excessive locking, this patch releases the locks of unmatched rows. The patch affects only transactions with an isolation level equal to or less strict than READ COMMITTED, because of the behaviour of ha_innobase::unlock_row().
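As a rough illustration of the idea (a standalone C++ sketch, not the server code; RowLockTable and the match predicate are hypothetical stand-ins for the engine's lock table and the WHERE clause):

  // Locks every fetched row, then releases the lock on rows that do not
  // match, but only when the isolation level is READ COMMITTED or lower.
  #include <cstdio>
  #include <set>

  enum Isolation { READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ };

  struct RowLockTable {
    std::set<int> locked;                              // row ids currently locked
    void lock(int row_id) { locked.insert(row_id); }
    void unlock(int row_id) { locked.erase(row_id); }  // plays the role of ha_innobase::unlock_row()
  };

  int main() {
    const Isolation iso = READ_COMMITTED;
    RowLockTable locks;
    for (int row = 0; row < 10; ++row) {   // index merge touches many candidate rows
      locks.lock(row);                     // engine locks every fetched row
      bool matches = (row % 2 == 0);       // stand-in for the WHERE clause
      if (!matches && iso <= READ_COMMITTED)
        locks.unlock(row);                 // the patch: release unmatched rows' locks
    }
    std::printf("rows still locked: %zu\n", locks.locked.size());
    return 0;
  }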
rb://1296 approved by jorgen and olav.
Problem:
Executing a query that has a subquery with GROUP BY, ORDER BY, and a BLOB column results in a memory leak.
Analysis:
In the case of a subquery that has GROUP BY on the BLOB and ORDER BY on another field, where the BLOB is not a key, we allocate a tmp buffer in copy_field to hold the BLOB value. This copy_field can end up referenced by two JOIN objects, so while freeing it we have to take care that it is not deleted twice.
The double deletion of tmp_table_param.copy_field was handled by two patches.
One by Kostja:
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.
and the other by Oleksandr:
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).
Both patches were committed in different branches, and after merging both ended up in place; the Kostja patch is no longer needed, as the Oleksandr patch handles this.
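As a minimal standalone sketch of the hazard and the guard (CopyFieldParam and the raw pointer are illustrative, not the server's actual classes):

  // Two objects can end up sharing one copy_field array; cleanup must make
  // sure the array is deleted exactly once.
  struct CopyFieldParam {
    int *copy_field = nullptr;
    void cleanup() {
      delete[] copy_field;    // safe no-op on nullptr
      copy_field = nullptr;   // a second cleanup() call now deletes nothing
    }
  };

  int main() {
    CopyFieldParam join_param, tmp_join_param;
    join_param.copy_field = new int[4];
    tmp_join_param = join_param;          // both objects now point at one buffer
    join_param.cleanup();                 // frees the buffer once
    tmp_join_param.copy_field = nullptr;  // the other owner must forget the
    tmp_join_param.cleanup();             // stale pointer, or this would be a
    return 0;                             // double delete
  }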
sql/sql_select.cc:
Bug#13726751: tmp_join cleanup is not necessary, as later in the code we take care of cleaning up tmp_join's copy_field.
LEN <= SIZEOF(ULONGLONG)
This bug was caught in WL#6255 ALTER TABLE...ADD COLUMN in MySQL 5.6, but the bug is present in all InnoDB versions that support auto-increment columns.
row_search_autoinc_read_column(): When reading the maximum value of the auto-increment column, if the column contains only NULL values, return 0. This corresponds to the case when the table is empty in row_search_max_autoinc().
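A standalone sketch of that logic (read_max_autoinc() and the optional-based column model are illustrative):

  #include <cstdint>
  #include <cstdio>
  #include <optional>
  #include <vector>

  // Each element models one row's auto-increment column; nullopt models SQL NULL.
  static uint64_t read_max_autoinc(const std::vector<std::optional<uint64_t>> &col) {
    uint64_t max_val = 0;            // 0 also covers the all-NULL and empty cases
    for (const auto &v : col)
      if (v && *v > max_val) max_val = *v;
    return max_val;
  }

  int main() {
    std::printf("%llu\n", (unsigned long long) read_max_autoinc({std::nullopt, std::nullopt}));  // 0
    std::printf("%llu\n", (unsigned long long) read_max_autoinc({1, std::nullopt, 7}));          // 7
    return 0;
  }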
rb:1415 approved by Sunny Bains
FAILED IN DEACTIVATE_DDL_LOG_ENTRY
deallocate_ddl_log_entry() can be called without having locked LOCK_gdl. It uses a global buffer for reading and writing entries in the ddl_log, and since it is not protected by any mutex, two concurrent threads can overwrite the content in the global buffer, so what is written back can differ from what was read.
Thread A reads from entry 1 into the global buffer, thread B reads from entry 2 into the global buffer, thread A writes from the global buffer into entry 1
-> entry 1 now contains the content of entry 2.
This is especially bad for replace entries, which use two phases and do not deactivate the whole entry after the first phase, but increase the phase instead.
Fixed by using thread-local storage (the stack) instead of global storage (the global buffer).
Also added buffer and size arguments to read/write_ddl_log_file_entry.
Also only read/write the first bytes of entries in deactivate_ddl_log_entry.
Also fixed the scenario where recovery is attempted from a server compiled with a different value of IO_SIZE (very uncommon!).
Updated the patch with set_ddl_log_entry_from_buf and removed read_ddl_log_entry.
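The shape of the fix, as a standalone sketch (the in-memory log array, sizes, and exact signatures are illustrative; the helper names mirror the ones above):

  #include <array>
  #include <cstddef>
  #include <cstring>

  constexpr std::size_t IO_SIZE = 128;  // entry size; the real value is build-dependent
  static std::array<std::array<char, IO_SIZE>, 4> ddl_log;  // stands in for the file

  static void read_ddl_log_file_entry(std::size_t entry_no, char *buf, std::size_t size) {
    std::memcpy(buf, ddl_log[entry_no].data(), size);
  }
  static void write_ddl_log_file_entry(std::size_t entry_no, const char *buf, std::size_t size) {
    std::memcpy(ddl_log[entry_no].data(), buf, size);
  }

  // Safe under concurrency: the buffer lives on this thread's stack frame, so
  // another thread's read/write cannot overwrite it between read and write.
  static void deactivate_entry(std::size_t entry_no) {
    char file_entry_buf[IO_SIZE];
    read_ddl_log_file_entry(entry_no, file_entry_buf, sizeof(file_entry_buf));
    file_entry_buf[0] = 0;              // per the patch, only the first bytes change
    write_ddl_log_file_entry(entry_no, file_entry_buf, sizeof(file_entry_buf));
  }

  int main() { deactivate_entry(1); return 0; }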
Manually tested, no test case included.
The mysqld_safe script did not heed the MySQL-specific environment variable $UMASK, leading to divergent behavior between mysqld and mysqld_safe.
The patch adds an approximation of mysqld's behavior to mysqld_safe, within the bounds dictated by the attempt to have mysqld_safe run on even the most basic of shells (proper '70s sh, not just bash with a fancy symlink).
The patch also adds an approximation of said behavior to mysqld_multi (in perl).
(backport)
Problem:
Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does not work for values greater than the maximum signed BIGINT.
Analysis:
last_insert_id() returns the first auto-incremented value for a column, and an auto-incremented value can only be positive. In our code, when initializing a last_insert_id object, we treat it as a signed BIGINT, so when the auto-incremented value grows beyond the maximum signed BIGINT, last_insert_id() gives a negative result.
Solution:
When fetching the value from last_insert_id(), set the unsigned_flag so that only an unsigned BIGINT value is taken.
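The failure mode is plain two's-complement reinterpretation; a standalone illustration:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t auto_inc = 9223372036854775808ULL;   // 2^63, one past max signed BIGINT
    long long as_signed = (long long) auto_inc;   // old behavior: signed view
    unsigned long long as_unsigned = auto_inc;    // fixed behavior: unsigned_flag honored
    std::printf("signed view:   %lld\n", as_signed);    // prints a negative number
    std::printf("unsigned view: %llu\n", as_unsigned);  // prints 9223372036854775808
    return 0;
  }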
sql/item_func.cc:
Here the unsigned value was being converted to a signed value.
sql/item_func.h:
last_insert_id() gives an auto-incremented value, which can only be positive, so define it as an unsigned longlong and set the unsigned_flag to 1.
REAL DUPLICATE VALUE FOR PREFIX KEYS
innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
instead of dict_index_get_nth_col_pos() to find the column.
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is being used for SELECT query evaluation and the query is operating with a consistent read snapshot, it can take a long time for the secondary index to return control to MySQL, as MVCC kicks in. If the user issues "kill query <id>" while the query is actively accessing the secondary index, the kill is not honored, as there is no hook to check for this condition. Added a hook for this check.
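The placement of the hook, as a standalone sketch (trx_is_interrupted() is the real InnoDB predicate name; the Trx struct, fetch_next(), and the loop are illustrative):

  #include <atomic>
  #include <cstdio>

  struct Trx { std::atomic<bool> killed{false}; };

  static bool trx_is_interrupted(const Trx &trx) { return trx.killed.load(); }
  static bool fetch_next(int &row) { return ++row < 1000000; }  // stands in for the scan

  static int scan_secondary_index(Trx &trx) {
    int row = -1;
    while (fetch_next(row)) {
      if (trx_is_interrupted(trx))   // the added hook: honor KILL QUERY promptly
        return -1;                   // mapped to an interrupted-query error upstream
      // ... MVCC visibility checks / undo page lookups happen here ...
    }
    return 0;
  }

  int main() {
    Trx trx;
    trx.killed = true;                                // simulate "kill query <id>"
    std::printf("%d\n", scan_secondary_index(trx));   // -1: scan stopped early
    return 0;
  }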
-----
In parallel, the case of the secondary index taking too long to evaluate under a consistent read snapshot is being examined for performance improvement in WL#6540.
This bug had two problems:
P1) Reads out of bounds;
P2) Writes out of bounds.
PROBLEM P1
----------
User_var_log_event unmarshalling from the binlog was not performing range checks when using the name_len and val_len variables to walk the event buffer.
Added range checks to User_var_log_event unmarshalling to prevent unmarshalling errors.
PROBLEM P2
----------
The User_var_log_event value was allocated on the thread stack, which caused stack frame errors when the value was bigger than the thread stack size.
The value is now allocated on the heap.
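Both fixes in one standalone sketch (the event layout, field widths, and parse_user_var_event() are illustrative, not the actual binlog format):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <string>
  #include <vector>

  static bool parse_user_var_event(const unsigned char *buf, std::size_t buf_len,
                                   std::string &name,
                                   std::vector<unsigned char> &value) {
    const unsigned char *p = buf, *end = buf + buf_len;

    uint32_t name_len;
    if ((std::size_t)(end - p) < sizeof(name_len)) return false;  // P1: range check
    std::memcpy(&name_len, p, sizeof(name_len));
    p += sizeof(name_len);
    if ((std::size_t)(end - p) < name_len) return false;          // P1: range check
    name.assign((const char *) p, name_len);
    p += name_len;

    uint32_t val_len;
    if ((std::size_t)(end - p) < sizeof(val_len)) return false;   // P1: range check
    std::memcpy(&val_len, p, sizeof(val_len));
    p += sizeof(val_len);
    if ((std::size_t)(end - p) < val_len) return false;           // P1: range check
    value.assign(p, p + val_len);   // P2: vector keeps the value on the heap
    return true;
  }

  int main() {
    const unsigned char bogus[3] = {0xff, 0xff, 0xff};  // truncated/corrupt event
    std::string name;
    std::vector<unsigned char> value;
    return parse_user_var_event(bogus, sizeof(bogus), name, value) ? 1 : 0;  // rejected
  }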
The problem is in the error handling in row_create_table_for_mysql(): in the 'disk full' case we may forget to call dict_mem_table_free() on the table object.
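The shape of the fix, as a standalone sketch (dict_mem_table_free mirrors the function named above; the table struct, create routine, and error codes are simplified stand-ins):

  struct dict_table_t { /* column and index metadata */ };
  static dict_table_t *dict_mem_table_create() { return new dict_table_t; }
  static void dict_mem_table_free(dict_table_t *t) { delete t; }

  enum db_err { DB_SUCCESS, DB_OUT_OF_FILE_SPACE };

  static db_err row_create_table(bool disk_full) {
    dict_table_t *table = dict_mem_table_create();
    if (disk_full) {
      dict_mem_table_free(table);  // the missing call: free on the error path too
      return DB_OUT_OF_FILE_SPACE;
    }
    // ... the normal path hands `table` over to the data dictionary cache;
    // this sketch just frees it so the example stays leak-free ...
    dict_mem_table_free(table);
    return DB_SUCCESS;
  }

  int main() { return row_create_table(true) == DB_OUT_OF_FILE_SPACE ? 0 : 1; }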
Approved by: Marko (rb:1377 and rb:1386)
PRIVILEGES
Description: The (user,host) pair from the security context is used for privilege checking at the time of granting or revoking proxy privileges. This creates a problem when the server is started with the --skip-name-resolve option, because host will not contain any value. Checks should depend on consistent values regardless of how the server is started. Further, the privilege check should use the (priv_user,priv_host) pair rather than values obtained from the inbound connection, because that pair represents the correct account context obtained from the mysql.user table.
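The intended check, as a standalone sketch (the Security_context struct here is a simplified stand-in for the server's; field names follow the description above):

  #include <string>

  struct Security_context {
    std::string user, host;            // from the inbound connection
    std::string priv_user, priv_host;  // account matched in mysql.user
  };

  static bool proxy_priv_match(const Security_context &ctx,
                               const std::string &acl_user,
                               const std::string &acl_host) {
    // Compare against the authenticated account context: consistent whether
    // or not the server was started with --skip-name-resolve.
    return ctx.priv_user == acl_user && ctx.priv_host == acl_host;
  }

  int main() {
    // With --skip-name-resolve, ctx.host is empty, but priv_* still identify
    // the mysql.user account, so the check stays correct.
    Security_context ctx{"app", "", "app", "%"};
    return proxy_priv_match(ctx, "app", "%") ? 0 : 1;
  }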
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After the transaction is started, new indexes are added to the table. Now when we issue an update statement, the optimizer chooses an index. When the index scan is being initialized via ha_innobase::change_active_index(), InnoDB reports the error code HA_ERR_TABLE_DEF_CHANGED, with a message stating "insufficient history for index".
This error message is propagated up to the SQL layer, but the my_error() API is never called. The statement-level diagnostics area is not updated with the correct error status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement() fails:
516 case Diagnostics_area::DA_EMPTY:
517 default:
518 DBUG_ASSERT(0);
519 error= send_ok(thd->server_status, 0, 0, 0, NULL);
520 break;
The fix is to backport the fixes for bugs 14365043, 11761652, and 11746399:
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
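Why the assertion fired, as a standalone sketch (the diagnostics states and function bodies are simplified stand-ins; only the control flow matters):

  #include <cassert>
  #include <cstdio>

  enum da_state { DA_EMPTY, DA_ERROR, DA_OK };
  static da_state diag = DA_EMPTY;

  static void my_error(const char *msg) {   // stands in for the my_error() API
    std::printf("ERROR: %s\n", msg);
    diag = DA_ERROR;                        // diagnostics area leaves DA_EMPTY
  }

  static int change_active_index() { return 1; /* HA_ERR_TABLE_DEF_CHANGED */ }

  int main() {
    if (change_active_index() != 0)
      my_error("insufficient history for index");  // the fix: report the error
    assert(diag != DA_EMPTY);   // Protocol::end_statement()'s expectation holds
    return 0;
  }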
rb://1227 approved by guilhem and mattiasj.
We did not allocate enough bits for index->trx_id_offset, causing an
UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
to corrupt the PRIMARY KEY.
dict_index_t: Allocate enough bits.
dict_index_build_internal_clust(): Check for overflow of
index->trx_id_offset. Trip a debug assertion when overflow occurs.
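The bug class in miniature (the bit widths below are illustrative, not InnoDB's actual values):

  #include <cassert>
  #include <cstdio>

  struct dict_index_narrow { unsigned trx_id_offset:10; };  // max 1023: truncates
  struct dict_index_wide   { unsigned trx_id_offset:12; };  // enough bits

  int main() {
    unsigned offset = 1200;              // PRIMARY KEY longer than 1024 bytes

    dict_index_narrow bad;
    bad.trx_id_offset = offset & 0x3ff;  // a narrow field silently wraps the value
    std::printf("narrow field stores: %u\n", (unsigned) bad.trx_id_offset);   // 176

    dict_index_wide good;
    assert(offset < (1u << 12));         // the added debug assertion on overflow
    good.trx_id_offset = offset;
    std::printf("wide field stores:   %u\n", (unsigned) good.trx_id_offset);  // 1200
    return 0;
  }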
rb:1380 approved by Jimmy Yang
When a SP handler is activated, memory is allocated to hold the
MESSAGE_TEXT for the condition that caused the activation.
The problem was that this memory was allocated on the MEM_ROOT belonging
to the stored program. Since this MEM_ROOT is not freed until the
stored program ends, a stored program that causes lots of handler
activations can start using lots of memory. In 5.1 and earlier the
problem did not exist as no MESSAGE_TEXT was allocated if a condition
was raised with a handler present. However, this behavior led to a number of other issues, such as Bug#23032.
This patch fixes the problem by allocating enough memory for the
necessary MESSAGE_TEXTs in the SP MEM_ROOT when the SP starts and
then re-using this memory each time a handler is activated.
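A standalone sketch of the allocation strategy (buffer size, names, and the vector standing in for the SP MEM_ROOT are illustrative):

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  constexpr std::size_t MESSAGE_TEXT_SIZE = 512;

  struct StoredProgram {
    std::vector<char> handler_msgs;   // reserved once, when the SP starts
    explicit StoredProgram(std::size_t n_handlers)
        : handler_msgs(n_handlers * MESSAGE_TEXT_SIZE) {}

    // Each activation reuses the handler's slot instead of allocating anew.
    void activate_handler(std::size_t idx, const char *message_text) {
      char *slot = handler_msgs.data() + idx * MESSAGE_TEXT_SIZE;
      std::snprintf(slot, MESSAGE_TEXT_SIZE, "%s", message_text);
    }
  };

  int main() {
    StoredProgram sp(2);
    for (int i = 0; i < 100000; ++i)    // many activations, no memory growth
      sp.activate_handler(0, "Duplicate entry for key 'PRIMARY'");
    std::puts(sp.handler_msgs.data());
    return 0;
  }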
This is the 5.5 version of the patch.
This bug depends on the cmake version.
For cmake 2.6 (which is still in use for some pushbuild trees) the main build would succeed even if create_initial_db failed.
The problem was the chaining of commands in the CUSTOM_COMMAND that produces 'initdb.dep': it first invokes cmake to run mysqld, then invokes 'touch' to create the file. Moving the 'touch' command makes the error propagate properly for both cmake 2.6 and 2.8.