Buffer over-run on all platforms (a crash on Windows, a wrong result
elsewhere) when rounding numbers that start with 999999999 and have
precision = 9, 18, 27, 36, ...
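For context: MySQL's decimal format packs nine decimal digits into each
32-bit word, which is why exactly the precisions that are multiples of
nine are affected. A minimal standalone sketch of the failure mode
(illustrative code, not the actual strings/decimal.c):

    #include <cstdio>

    // Each word stores nine decimal digits, as in MySQL's packed
    // decimal format.
    static const int DIG_BASE = 1000000000;

    // Propagate a round-up carry through an array of base-10^9 words.
    // A non-zero return means the result needs one more word than the
    // input had; writing that extra word into a buffer sized exactly
    // for the input's precision is the over-run described above.
    static int round_up(int *words, int n_words)
    {
        int carry = 1;
        for (int i = n_words - 1; i >= 0 && carry; i--) {
            words[i] += carry;
            carry = words[i] / DIG_BASE;
            words[i] %= DIG_BASE;
        }
        return carry;
    }

    int main()
    {
        int buf[2] = {0, 999999999};   // one spare word for the carry
        if (round_up(buf + 1, 1))      // rounding 999999999 up...
            buf[0] = 1;                // ...spills into the spare word
        std::printf("%d%09d\n", buf[0], buf[1]);  // prints 1000000000
        return 0;
    }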
When a temporary table is used for result sorting, the result field
for the GROUP_CONCAT() function is created using the
group_concat_max_len size (in bytes). This leads to result truncation
when character_set_results is a multi-byte character set, because the
tmp table field is too small.

The fix is to increase the temporary table field size for
GROUP_CONCAT(): make_string_field() is overloaded for the
Item_func_group_concat class and uses
max_characters * collation.collation->mbmaxlen as the result field
size, where max_characters is the maximum number of characters that
can fit into max_length bytes.
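The sizing rule, as a hypothetical sketch. The division by the
collation's minimum character width (mbminlen) is my assumption here;
the commit only specifies the mbmaxlen multiplication:

    // Hypothetical helper illustrating the sizing rule; not the actual
    // Item_func_group_concat::make_string_field() code.
    unsigned long result_field_size(unsigned long max_length, // bytes
                                    unsigned mbminlen, unsigned mbmaxlen)
    {
        // max_characters: maximum number of characters that can fit
        // into max_length bytes.
        unsigned long max_characters = max_length / mbminlen;
        // Reserve room for max_characters characters even when every
        // one of them needs mbmaxlen bytes in character_set_results.
        return max_characters * mbmaxlen;
    }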
Drop the adaptive hash index at shutdown in one pass.
btr_search_disable(): Just drop the entire adaptive hash index,
without dropping every record separately.
buf_pool_clear_hash_index(): Renamed and simplified from
buf_pool_drop_hash_index(). Set block->index = NULL for every block in
the buffer pool. Do not release the btr_search_latch. The caller will
have to adjust other data structures.
Remove block->is_hashed. It is redundant; it should always be equal to
block->index != NULL.
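A minimal sketch of the one-pass clearing, with stand-in types (not the
actual buf0buf code):

    #include <cstddef>

    struct dict_index_t;                       // opaque stand-in
    struct buf_block_t { dict_index_t *index; };

    // With btr_search_latch held in exclusive mode, detach every block
    // from the adaptive hash index in a single pass. Afterwards,
    // block->index != NULL answers the question the removed
    // block->is_hashed flag used to answer.
    void clear_hash_index(buf_block_t *blocks, std::size_t n_blocks)
    {
        for (std::size_t i = 0; i < n_blocks; i++)
            blocks[i].index = NULL;
    }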
Remove btr_search_fully_disabled, btr_search_enabled_mutex, and
SYNC_SEARCH_SYS_CONF. We drop the AHI in one pass, without releasing
the btr_search_latch in between.
Replace void* with const rec_t* and add assertions on btr_search_latch
and btr_search_enabled to ha0ha.h, ha0ha.ic, ha0ha.c.
page_set_max_trx_id(): Ignore the adaptive hash index. I forgot to
push this in rb:750.
btr0sea.c: After acquiring btr_search_latch, always check for
block->index == NULL or !btr_search_enabled. We can now set
block->index = NULL while holding only btr_search_latch in exclusive
mode. Always acquire btr_search_latch before reading block->index,
except in shortcuts that merely test for block->index == NULL.
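The resulting pattern, as a minimal self-contained sketch (pthread
primitives stand in for the InnoDB latch; names are illustrative):

    #include <pthread.h>
    #include <cstddef>

    struct dict_index_t;                       // opaque stand-in
    struct buf_block_t { dict_index_t *index; };

    pthread_rwlock_t btr_search_latch = PTHREAD_RWLOCK_INITIALIZER;
    bool btr_search_enabled = true;

    // block->index can become NULL whenever we do not hold
    // btr_search_latch, so it must be re-checked under the latch.
    bool try_hash_search(buf_block_t *block)
    {
        if (block->index == NULL)
            return false;            // shortcut: bail out without latching
        pthread_rwlock_rdlock(&btr_search_latch);
        bool ok = btr_search_enabled && block->index != NULL;
        if (ok) {
            /* ... use the adaptive hash index under the latch ... */
        }
        pthread_rwlock_unlock(&btr_search_latch);
        return ok;                   // false: the AHI was dropped meanwhile
    }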
ha_clear(), ha_search(): Unused functions; remove them.
buf_page_peek_if_search_hashed(): Remove. This function may avoid
latching a page at the cost of doing a duplicate buf_pool->page_hash
lookup.
rb:775 approved by Inaam Rana
A buffer large enough to hold the query _plus_ some additional
data is allocated before parsing is started. The additional data
is used by the query cache, and consists of the name of the current
database and a set of flags.
When a packet containing multiple SQL statements is sent to the
server and one of the statements changes the current database
(a "USE <db>" statement), and the name of the new current database
is longer than that of the previous one, there is not enough space
in the buffer for the new name, and the write runs past the buffer
boundary.
The fix adds an extra field to store the number of bytes
allocated to the database name in the buffer. If the current
database name changes, and the new name is longer than the
previous one, we refuse to cache the query.
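A sketch of the check, with hypothetical field and function names (not
the actual query cache code):

    #include <cstring>

    // The buffer laid out as [query][db name][flags] reserves a fixed
    // number of bytes for the database name when it is allocated.
    struct CachedQueryBuf {
        char        *db;          // where the db name lives in the buffer
        std::size_t  db_buf_len;  // new field: bytes reserved for it
    };

    // Returns false when the new current database no longer fits; the
    // caller then refuses to cache the query instead of writing past
    // the buffer boundary.
    static bool set_cached_db(CachedQueryBuf *buf,
                              const char *new_db, std::size_t new_len)
    {
        if (new_len > buf->db_buf_len)
            return false;         // name grew: do not cache this query
        std::memcpy(buf->db, new_db, new_len);
        return true;
    }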
Sun Studio 12 has a bug when calculating the compile-time
length of a constant character string. The bug only shows up
when building an optimized 32-bit version using the
-xbuiltin=(%all) compiler flag.
During compilation, the compiler recognizes a strlen() call
on a constant string, optimizes the call away and replaces it
with the actual length of the string. This optimization seems
to calculate the length incorrectly in this particular case.
Replacing the "const char *" with a "const char []"
solves the problem.
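The shape of the workaround (the string here is illustrative):

    #include <cstring>

    // Before (miscompiled by Sun Studio 12 in an optimized 32-bit
    // build with -xbuiltin=(%all)):
    //     static const char *message = "constant string";
    // After: declaring the constant as an array avoids the bad
    // compile-time strlen() folding.
    static const char message[] = "constant string";

    std::size_t message_length() { return std::strlen(message); }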
This is a redo for 5.5.
Added 'innodb_file_format_max' as a variable whose changes are to be
ignored. Tests that had to restore this variable were amended
accordingly. Two tests assumed it to be Antelope; make sure these run
on a freshly started server.
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed from another connection, it can lead to a
crash.
The problem is the following: an attempt to obtain table statistics
for the subselect table in a killed query fails with an error. So
JOIN::optimize() for the subquery fails, but this does not prevent
further subquery evaluation.
At the first subquery execution, JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE, which prevents
further subquery evaluation. At the second call,
JOIN::optimize() does not happen because 'JOIN::optimized' is TRUE,
and in the case of an uncacheable subquery the 'executed' flag is
reset to FALSE before subquery evaluation. So we lose the
'optimize stage' error indication
(see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: to
indicate an error at the JOIN::optimize() stage and to indicate that
the subquery has been executed. This seems wrong, as the flag can be
reset.
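A condensed, hypothetical sketch of that control flow (not the actual
subselect_single_select_engine::exec() code):

    struct SubqueryEngine {
        bool optimized   = false; // JOIN::optimize() has been attempted
        bool executed    = false; // "ran already" AND "saw an error"
        bool uncacheable = false;
        int (*do_optimize)() = nullptr;  // stand-ins for the real steps
        int (*do_exec)()     = nullptr;

        int exec()
        {
            if (!optimized) {
                optimized = true;
                if (do_optimize() != 0) {
                    executed = true;  // error is recorded only here
                    return 1;
                }
            }
            if (uncacheable)
                executed = false;  // reset for re-evaluation: this also
                                   // erases the optimize-stage error, so
                                   // the broken join gets executed anyway
            if (executed)
                return 0;
            executed = true;
            return do_exec();
        }
    };

A fix along these lines would record the optimize-stage error in its
own flag instead of overloading 'executed'.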
rw_lock_x_lock_func(): Assert that the thread is not already holding
the lock in a conflicting mode (RW_LOCK_SHARED).
rw_lock_s_lock_func(): Assert that the thread is not already holding
the lock in a conflicting mode (RW_LOCK_EX).
Bug 12980094 - ASSERTION IN INNODB DETECTED IN RQG_PARTITION_DDL
Bug 13034534 - RQG TESTS FAIL ON WINDOWS WITH CRASH NEAR RW_LOCK_DEBUG_PRINT
All access to struct rw_lock_debug_struct must be protected by rw_lock_debug_mutex_enter().
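For illustration, a heavily simplified sketch of the self-deadlocks the
new assertions catch (hypothetical one-owner-slot bookkeeping; the real
code tracks owners in the rw_lock_debug_struct list, accessed only
under rw_lock_debug_mutex_enter()):

    #include <cassert>
    #include <pthread.h>

    static pthread_rwlock_t latch = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_t s_holder, x_holder;  // last holder per mode
    static bool s_held = false, x_held = false;

    static void s_lock()   // cf. rw_lock_s_lock_func()
    {
        // The thread must not already hold the latch in the
        // conflicting mode (RW_LOCK_EX): that would self-deadlock.
        assert(!(x_held && pthread_equal(x_holder, pthread_self())));
        pthread_rwlock_rdlock(&latch);
        s_holder = pthread_self();
        s_held = true;
    }

    static void x_lock()   // cf. rw_lock_x_lock_func()
    {
        // Here the conflicting mode is RW_LOCK_SHARED.
        assert(!(s_held && pthread_equal(s_holder, pthread_self())));
        pthread_rwlock_wrlock(&latch);
        x_holder = pthread_self();
        x_held = true;
    }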