INFO (HP_INFO)
Description: Server crashes due to a memory overflow.
Analysis: The number of bytes used to store the key length is
set incorrectly for HEAP tables.
Fix: The number of bytes used to store the key length is now set
correctly inside heap_create().
instrument table->record[0], table->record[1] and share->default_values.
One should not access the record image beyond share->reclength, even
if table->record[0] has some unused space after it (functions that
work with records might get a copy of the record as an argument,
and that copy - not being record[0] - might not have this buffer space
at the end). See b80fa4000d and 444587d8a3.
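A minimal sketch of the kind of instrumentation this refers to, using the plain
valgrind client-request API rather than the server's own wrapper macros; the
names record, reclength and rec_buff_len are illustrative. The idea is to
poison the tail of the record buffer beyond share->reclength so that any access
past the logical record length is reported:

  #include <stddef.h>
  #include <valgrind/memcheck.h>

  /* Mark the unused tail of a record buffer as inaccessible, so reads or
     writes beyond the logical record length are caught under valgrind.
     Illustrative helper, not the server's actual instrumentation code. */
  static void poison_record_tail(unsigned char *record,
                                 size_t reclength, size_t rec_buff_len)
  {
    if (rec_buff_len > reclength)
      VALGRIND_MAKE_MEM_NOACCESS(record + reclength,
                                 rec_buff_len - reclength);
  }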
MEMORY engine needs the record length to be at least sizeof(void*),
because it stores a pointer there (linking deleted records into a list).
So when the reclength is less than sizeof(void*), it's set to sizeof(void*).
That is done inside heap_create(), and the upper layer doesn't know
that the engine writes beyond share->reclength.
While it's usually safe (in-memory record size is rounded up to
sizeof(double), so even if share->reclength is too small,
share->rec_buff_len is not), it could cause problems in the code that
copies records and expects them to fit within share->reclength,
e.g. in partitioning.
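A minimal sketch of the behaviour described above (not the actual heap_create()
code; types and names are simplified): the engine links deleted records into a
free list by storing a pointer in the record buffer itself, which is why the
per-record length is silently rounded up to at least the size of a pointer:

  #include <stddef.h>

  typedef unsigned char uchar;

  /* The upper layer's share->reclength may be smaller than a pointer;
     the engine rounds it up, which is what the note above warns about. */
  static size_t adjust_reclength(size_t reclength)
  {
    if (reclength < sizeof(uchar *))
      reclength= sizeof(uchar *);
    return reclength;
  }

  /* Linking a deleted record into the free list: the first bytes of the
     now-unused record image are overwritten with the next-free pointer. */
  static void link_deleted(uchar *record, uchar **free_list)
  {
    *(uchar **) record= *free_list;   /* writes sizeof(uchar *) bytes */
    *free_list= record;
  }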
~40% bugfixed(*) applied
~40% bugfixed reverted (incorrect or we're not buggy)
~20% bugfixed applied, despite us being not buggy
(*) only changes in the server code, e.g. not cmakefiles
WITH CERTAIN MAX_HEAP_TABLE_SIZE VALUES
Description:
When the system variable 'max_heap_table_size'
is set to 20GB, the server crashes on creation of
temporary tables or tables using the MEMORY storage engine.
Analysis:
The variable 'max_records' determines the amount of heap
allocated for the records of the table. This value
is derived from the 'max_heap_table_size' variable.
'records_in_block' in turn uses 'max_records' to
determine the number of records per block.
When 'max_heap_table_size' is set to 20GB,
'records_in_block' is calculated to be 2^28.
The size of the block, determined by multiplying
'records_in_block' and 'recbuffer', overflows, and
hence the value becomes zero. As a result, zero bytes
of heap are allocated for the table, which results
in a server crash when the table is accessed.
Fix:
The variables 'records_in_block' and 'recbuffer' are
cast to 'unsigned long' while calculating the
size of the block.
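A small self-contained illustration of the overflow described above (not the
actual hp_create.c code; the recbuffer value is made up). With the 2^28 figure
from the analysis, the 32-bit multiplication wraps to zero, while the same
product computed in 64-bit arithmetic does not:

  #include <stdio.h>

  int main(void)
  {
    unsigned int  records_in_block= 1U << 28;  /* ~2^28, as in the analysis */
    unsigned int  recbuffer= 64;               /* illustrative per-record size */

    /* 32-bit product: 2^28 * 64 = 2^34, which wraps to 0 in a uint */
    unsigned int  wrapped= records_in_block * recbuffer;

    /* The fix: cast to a 64-bit type (unsigned long on LP64 platforms)
       before multiplying, so the block size is computed correctly. */
    unsigned long fixed= (unsigned long) records_in_block *
                         (unsigned long) recbuffer;

    printf("32-bit result: %u\n",  wrapped);   /* prints 0 */
    printf("64-bit result: %lu\n", fixed);     /* prints 17179869184 */
    return 0;
  }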
The reason was that a couple of variables holding the number of rows, used to calculate buffer sizes, were uint and caused an overflow.
Fixed by changing variables that can hold a number of rows from uint to ulong, and also added a cast for this test.
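A hedged sketch of the type change, with illustrative names rather than the
real include/heap.h layout: counters that can hold a number of rows are widened
from uint to ulong, so buffer-size arithmetic done with them no longer wraps at
32 bits on 64-bit builds:

  typedef unsigned int  uint;
  typedef unsigned long ulong;

  typedef struct st_heap_counters_sketch
  {
    ulong records_in_block;    /* was: uint */
    ulong max_records;         /* was: uint */
    uint  recbuffer;           /* bytes reserved per record */
  } HEAP_COUNTERS_SKETCH;

  static ulong block_size(const HEAP_COUNTERS_SKETCH *c)
  {
    /* both operands are (promoted to) ulong, so on LP64 platforms the
       multiplication is done in 64-bit arithmetic and does not wrap */
    return c->records_in_block * (ulong) c->recbuffer;
  }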
include/heap.h:
Reorder to get better alignment. Changed variables that could hold number of rows from uint to ulong
mysql-test/suite/heap/heap.result:
Added test case
mysql-test/suite/heap/heap.test:
Added test case
mysql-test/suite/plugins/t/server_audit.test:
Added sleep as we want to have disconnect logged before we try a new connect
storage/heap/ha_heap.cc:
Changed variables that could hold number of rows from uint to ulong
Limit the number of rows to 4G (as most of the variables that hold row counts are ulong anyway)
Reset records_changed when key_stat_version is changed, so that we do not cause increments for every changed row
storage/heap/ha_heap.h:
changed records_changed to ulong as this can get big
storage/heap/hp_create.c:
Changed variables that could hold number of rows from uint to ulong
Added cast (fixed the original bug)
storage/heap/hp_delete.c:
Changed variables that could hold number of rows from uint to ulong
storage/heap/hp_open.c:
Removed not needed cast
storage/heap/hp_write.c:
Changed variables that could hold number of rows from uint to ulong
support-files/compiler_warnings.supp:
Removed extra ':' from suppression
mysys/errors.c:
revert upstream's fix. use a much simpler one
mysys/my_write.c:
revert upstream's fix. use a simpler one
sql/item_xmlfunc.cc:
useless, but ok
sql/mysqld.cc:
simplify upstream's fix
storage/heap/hp_delete.c:
remove upstream's fix.
we'll use a much less expensive approach.
Brief description: After inserting some rows into a MEMORY table with a HASH
key, some rows cannot be deleted in one step.
Problem analysis/solution: info->current_ptr holds the current hash pointer,
from which we can traverse the list to get all the remaining tuples.
In hp_delete_key() we update info->current_ptr with last_pos based on the
flag parameter (which indicates that the keydef and the last index are the
same). As part of the fix we set it to zero only when the code flow reaches
the end of hp_delete_key(). This means that the next record to be deleted
will be at the start of the list, so the next call to read a record
(heap_rnext()) will take the line 100 path instead of the line 102 path;
see the code below from file hp_rnext.c, function heap_rnext().
99 else if (!info->current_ptr) /* Deleted or first call */
100 pos= hp_search(info, keyinfo, info->lastkey, 0);
101 else
102 pos= hp_search(info, keyinfo, info->lastkey, 1);
With that change, hp_search() will update info->current_ptr with the
record that needs to be deleted.
storage/heap/hp_delete.c:
In hp_delete_key(), set info->current_ptr to 0 if the flag is enabled.
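A hedged sketch of that change, with simplified types and control flow rather
than the real storage/heap/hp_delete.c code: current_ptr is cleared only at the
end of the key-delete path, when the flag says this was the last index for the
key, so the following heap_rnext() call takes the hp_search(..., 0)
"deleted or first call" branch and restarts from the head of the hash chain:

  typedef struct st_hp_info_sketch
  {
    unsigned char *current_ptr;     /* position used by heap_rnext() */
  } HP_INFO_SKETCH;

  static int hp_delete_key_sketch(HP_INFO_SKETCH *info, int flag)
  {
    /* ... unlink the record from the hash chain, fix up link pointers ... */

    if (flag)
      info->current_ptr= 0;         /* force the hp_search(..., 0) path next */
    return 0;
  }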
Check the ability of an index to be NULL, as is done in MyISAM. A UNIQUE index with NULL can have several NULL entries, so we have to continue even if we have found a row.
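A hedged illustration of that rule (illustrative structures, not the real
storage/heap/hp_hash.c code): under a UNIQUE hash index, NULL keys never count
as duplicates, so the walk over the hash chain continues past NULL entries
instead of stopping at the first matching position:

  #include <stdbool.h>
  #include <string.h>

  struct chain_entry
  {
    const char         *key;        /* NULL means the indexed value is NULL */
    struct chain_entry *next;
  };

  static bool violates_unique(const struct chain_entry *chain,
                              const char *new_key)
  {
    for (; chain; chain= chain->next)
    {
      if (chain->key == NULL || new_key == NULL)
        continue;                              /* several NULLs may coexist */
      if (strcmp(chain->key, new_key) == 0)
        return true;                           /* a real duplicate */
    }
    return false;
  }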
mysql-test/suite/heap/heap_hash.result:
Added test case
mysql-test/suite/heap/heap_hash.test:
Added test case
storage/heap/hp_hash.c:
Limit key data length to max key length
mysql-test/suite/heap/heap.result:
Added test case for MDEV-436
mysql-test/suite/heap/heap.test:
Added test case for MDEV-436
storage/heap/hp_block.c:
Don't allocate a set of HP_PTRS when not needed. This saves us about 1024 bytes for most allocations.
storage/heap/hp_create.c:
Made the initial allocation of block sizes depending on min_records and max_records.
An RB-tree index in a MEMORY table fails if it grows over 4G.
That happened because the old_allocated variable in hp_rb_write_key()
had the uint type. Changed it to 'size_t' to match
'rb_tree.allocated'.
per-file comments:
storage/heap/hp_write.c
MDEV-80 Memory engine table full at much less than max_heap_table_size with btree index.
uint->size_t for the 'old_allocated'.
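A small self-contained illustration of the truncation described above (made-up
growth pattern, not the real hp_rb_write_key() code): tracking the allocated
bytes in a 32-bit uint wraps once the index grows past 4GB, while size_t does
not wrap on a 64-bit build:

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
    unsigned int       old_allocated_u32= 0;
    size_t             old_allocated_sz= 0;
    unsigned long long chunk= 1ULL << 30;        /* grow by 1GB at a time */

    for (int i= 0; i < 5; i++)                   /* 5GB total */
    {
      old_allocated_u32+= (unsigned int) chunk;  /* wraps after 4GB */
      old_allocated_sz+=  (size_t) chunk;
    }
    printf("uint total:   %u\n",  old_allocated_u32);  /* 1073741824 (wrapped) */
    printf("size_t total: %zu\n", old_allocated_sz);   /* 5368709120 */
    return 0;
  }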
mysql-test/suite/innodb/t/group_commit_crash.test:
remove autoincrement to avoid rbr being used for insert ... select
mysql-test/suite/innodb/t/group_commit_crash_no_optimize_thread.test:
remove autoincrement to avoid rbr being used for insert ... select
mysys/my_addr_resolve.c:
A pointer to a buffer is returned to the caller, so the buffer cannot be on the stack (see the sketch after this list).
mysys/stacktrace.c:
my_vsnprintf() is ok here, in 5.5
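A minimal sketch of the my_addr_resolve.c point above (illustrative function,
not the real resolver code): because the formatted string's address is handed
back to the caller, the buffer needs static (or otherwise long-lived) storage;
an automatic stack buffer would be gone by the time the caller looks at it:

  #include <stdio.h>

  const char *format_frame(unsigned long offset)
  {
    static char buf[64];     /* static storage: still valid after return */
    /* a plain 'char buf[64];' here would leave the caller with a
       dangling pointer once this function returns */
    snprintf(buf, sizeof(buf), "func+0x%lx", offset);
    return buf;
  }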
- Optimize away calls to hp_rec_hashnr() by caching the hash value
- Try to get more rows per block (to minimize the overhead of HP_PTRS) in HEAP tables.
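A hedged sketch of the caching idea (illustrative layout, not the real
storage/heap structures): the hash of a row is computed once when the row is
written and stored next to the row, so delete/update and check code can reuse
it instead of calling hp_rec_hashnr() again:

  #include <stdint.h>

  typedef struct st_row_with_hash_sketch
  {
    uint32_t      hash;               /* cached at write time */
    unsigned char data[64];           /* row image */
  } ROW_WITH_HASH_SKETCH;

  static void write_row_sketch(ROW_WITH_HASH_SKETCH *row,
                               uint32_t (*rec_hashnr)(const unsigned char *))
  {
    row->hash= rec_hashnr(row->data); /* hash computed exactly once, on write */
  }

  static uint32_t row_hash_sketch(const ROW_WITH_HASH_SKETCH *row)
  {
    return row->hash;                 /* later operations reuse the cached value */
  }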
storage/heap/_check.c:
Optimize away calls to hp_rec_hashnr() by caching the hash value.
Print cleanups
storage/heap/heapdef.h:
Added place to hold calculated hash value for row
storage/heap/hp_create.c:
Try to get more rows per block (to minimize the overhead of HP_PTRS)
storage/heap/hp_delete.c:
Optimize away calls to hp_rec_hashnr() by caching the hash value.
storage/heap/hp_hash.c:
Optimize away calls to hp_rec_hashnr() by caching the hash value.
Remove some not needed DBUG_PRINT
storage/heap/hp_test2.c:
Increased the max table size, as heap tables now take a bit more space (a few %)
storage/heap/hp_write.c:
Optimize away calls to hp_rec_hashnr() by caching the hash value.
Remove duplicated code
More DBUG_PRINT
storage/maria/ma_create.c:
More DBUG_PRINT
mysql-test-run auto-disables all optional plugins.
mysql-test/include/default_client.cnf:
no @OPT.plugindir anymore
mysql-test/include/default_mysqld.cnf:
don't disable plugins manually - mtr can do it better
mysql-test/suite/innodb/t/innodb_bug47167.test:
mtr now uses suite-dir as an include path
mysql-test/suite/innodb/t/innodb_file_format.test:
mtr now uses suite-dir as an include path
mysql-test/t/partition_binlog.test:
this test uses partitions
storage/example/mysql-test/mtr/t/source.result:
Update results, as mysqltest includes the correct overlaid include
storage/innobase/handler/ha_innodb.cc:
the assert is wrong