Move part of the code from lock_rec_print() into a separate function
buf_page_try_get(), because the same functionality is needed in the
INFORMATION_SCHEMA code.
Approved by: Heikki
Fix a bug where the condition (prtype & DATA_ROW_ID) is unexpectedly
always false because DATA_ROW_ID is 0.
Use a switch instead of if-else in order to avoid repeating
(prtype & DATA_SYS_PRTYPE_MASK).
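A minimal sketch of the switch (using the InnoDB constants named above;
since DATA_ROW_ID is 0, only a case label, never a bitwise test, can
match it):

	/* Evaluate (prtype & DATA_SYS_PRTYPE_MASK) once and branch on
	the result. */
	switch (prtype & DATA_SYS_PRTYPE_MASK) {
	case DATA_ROW_ID:
		/* hidden row id column; DATA_ROW_ID == 0 */
		break;
	case DATA_TRX_ID:
		/* transaction id column */
		break;
	case DATA_ROLL_PTR:
		/* rollback pointer column */
		break;
	}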
Approved by: Heikki
innobase_col_to_mysql(): New function, adapted from
row_sel_field_store_in_mysql_format().
innobase_rec_to_mysql(): Correct the function comment, which was still
saying "clustered index record", although we can convert any record.
Make use of innobase_col_to_mysql(). Always call field->reset(),
so that innobase_col_to_mysql() will not have to pad anything.
Since r1905, innobase_rec_to_mysql() does not require a clustered index record.
row_merge_dup_t: Remove old_table.
row_merge_dup_report(): Do not fetch the clustered index record. Simply
convert the tuple by innobase_rec_to_mysql().
row_merge_blocks(), row_merge(), row_merge_sort(): Add a TABLE* parameter
for reporting duplicate key values during file sort.
row_merge_read_clustered_index(): Replace UNIV_PAGE_SIZE with the more
appropriate sizeof(mrec_buf_t).
dtuple_create_for_mysql(), dtuple_free_for_mysql(): Remove.
ha_innobase::records_in_range(): Use mem_heap_create(), mem_heap_free(),
and dtuple_create() instead of the removed functions above. Since r1587,
InnoDB C++ functions can invoke inlined C functions.
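A sketch of the replacement pattern in ha_innobase::records_in_range()
(n_fields is a placeholder for the actual field count):

	/* Build the search tuple on a private memory heap instead of
	using the removed dtuple_create_for_mysql() and
	dtuple_free_for_mysql() wrappers. */
	mem_heap_t*	heap	= mem_heap_create(1024);
	dtuple_t*	range	= dtuple_create(heap, n_fields);

	/* ... fill in the fields and estimate the range ... */

	mem_heap_free(heap);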
innobase_rec_to_mysql(): New function, for converting an InnoDB clustered
index record to MySQL table->record[0]. TODO: convert integer fields.
Currently, integer fields are in big-endian byte order instead of
host byte order, and signed integer fields are offset by 0x80000000.
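A sketch of the conversion that the TODO calls for (innodb_read_int32()
is a hypothetical helper, not part of this patch):

	#include <stdint.h>

	/* Convert a 4-byte signed integer from InnoDB storage format
	(big-endian, sign bit flipped) to host byte order; the XOR
	undoes the 0x80000000 offset mentioned above. */
	static int32_t
	innodb_read_int32(const unsigned char* b)
	{
		uint32_t	v = ((uint32_t) b[0] << 24)
			| ((uint32_t) b[1] << 16)
			| ((uint32_t) b[2] << 8)
			| (uint32_t) b[3];

		return((int32_t) (v ^ 0x80000000U));
	}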
innobase_rec_reset(): New function, for resetting table->record[0].
row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL table
handle) for reporting duplicate key values.
dtuple_from_fields(): New function, to convert an array of dfield_t* to
dtuple_t.
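A plausible sketch of the new function (the dtuple_t member names are
assumed):

	/* Wrap an existing array of dfield_t in a dtuple_t without
	copying the fields. */
	dtuple_t*
	dtuple_from_fields(
		dtuple_t*	tuple,		/* in/out: tuple */
		const dfield_t*	fields,		/* in: fields */
		ulint		n_fields)	/* in: number of fields */
	{
		tuple->info_bits = 0;
		tuple->n_fields = tuple->n_fields_cmp = n_fields;
		tuple->fields = (dfield_t*) fields;

		return(tuple);
	}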
dtuple_get_n_ext(): New function, to compute the number of externally stored
fields.
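A sketch of such a counting loop (assuming the accessors
dtuple_get_n_fields(), dtuple_get_nth_field() and dfield_is_ext()):

	/* Count the externally stored fields in a tuple. */
	ulint
	dtuple_get_n_ext(
		const dtuple_t*	tuple)	/* in: tuple */
	{
		ulint	n_ext	= 0;
		ulint	i;

		for (i = 0; i < dtuple_get_n_fields(tuple); i++) {
			if (dfield_is_ext(dtuple_get_nth_field(tuple, i))) {
				n_ext++;
			}
		}

		return(n_ext);
	}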
row_merge_dup_t: Structure for counting and reporting duplicate records.
row_merge_dup_report(): Function for counting and reporting duplicate records.
row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup
parameter with row_merge_dup_t* dup.
row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is
NULL when sorting a non-unique index.
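A plausible shape for the structure (the field set is assumed; per the
entry further above, an old_table pointer was later removed from it):

	/* State for counting and reporting duplicate key values during
	the file sort.  A NULL pointer is passed for non-unique indexes,
	so no duplicate checking is done for them. */
	typedef struct row_merge_dup_struct {
		const dict_index_t*	index;	/* index being created */
		TABLE*			table;	/* MySQL table object */
		ulint			n_dup;	/* number of duplicates */
	} row_merge_dup_t;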
row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(),
row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(),
row_merge(), row_merge_sort(): Add const qualifiers.
row_merge_read_clustered_index(): Use a common error handling branch err_exit.
Invoke row_merge_buf_sort() differently on unique indexes.
row_merge_blocks(): Note a TODO item: we could invoke
innobase_rec_to_mysql() to report duplicate key values when creating
a clustered index.
dict_find_index_by_max_id(): Rename this static function to
dict_table_get_index_by_max_id(), its only caller.
dict_table_get_index_by_max_id(): Copy the function comment from
dict_find_index_by_max_id().
rec_get_converted_size_comp(), rec_convert_dtuple_to_rec_comp(),
rec_convert_dtuple_to_rec_new(), rec_convert_dtuple_to_rec(): Add a
const qualifier to dict_index_t*.
row_search_on_row_ref(): Add const qualifiers to the dict_table_t*
and dtuple_t* parameters. Note that pcur is an "out" parameter
and mtr is "in/out".
dtuple_create(): Simplify a pointer expression. In the debug version,
flag the fields uninitialized after initializing them.
dtuple_t: Only declare magic_n if UNIV_DEBUG is defined. The field is
not assigned to nor tested unless UNIV_DEBUG is defined.
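A sketch of the conditional declaration (surrounding members elided):

	struct dtuple_struct {
		/* ... other members ... */
	#ifdef UNIV_DEBUG
		ulint	magic_n;	/* magic number, checked only
					in debug assertions */
	#endif /* UNIV_DEBUG */
	};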
Copy any data (currently the table name and table index) that may be
destroyed after releasing the kernel mutex into the internal cache's
storage.
This is done efficiently using the ha_storage type: a given string is
copied only once into the cache's storage, and later additions of the
same string reuse the already stored copy, so memory is allocated only
once per unique string.
Approved by: Marko
Add a type that stores chunks of data in its own storage and avoids
duplicates. Supported methods:
ha_storage_create()
Allocates a new storage object.
ha_storage_put()
Copies a given data chunk into the storage and returns a pointer to the
copy. If the data chunk is already present, a pointer to the existing
object is returned and the given data chunk is not copied.
ha_storage_empty()
Empties the storage, removing all data chunks stored in it.
ha_storage_free()
Destroys a storage object; the opposite of ha_storage_create().
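A hypothetical usage sketch of the methods listed above (the exact
signatures are assumed):

	/* The second put of equal bytes returns the pointer created by
	the first put, so each unique chunk is stored only once. */
	ha_storage_t*	storage = ha_storage_create(0, 0);

	const void*	p1 = ha_storage_put(storage, "t1", 3);
	const void*	p2 = ha_storage_put(storage, "t1", 3);

	ut_a(p1 == p2);			/* same copy, no new allocation */

	ha_storage_empty(&storage);	/* discard all stored chunks */
	ha_storage_free(storage);	/* destroy the storage object */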
Approved by: Marko
row_merge(): Add the assertion ut_ad(half > 0).
row_merge_sort(): Compute the half of the merge file correctly. The
previous implementation used truncating division, which may result in
loss of records when the file size in blocks is not a power of 2 (for
example, with a 3-block file, truncating 3 / 2 = 1 would leave the
third block outside two 1-block input streams; rounding up covers all
blocks).
Use the newly introduced mem_alloc2() to make use of the memory that
has been allocated in addition to the requested amount, in order to
avoid wasting it.
Do not calculate the sizes and offsets of the chunks in advance in
table_cache_init(), because it is unknown how many bytes will actually
be allocated by mem_alloc2(). Instead, calculate these on the fly:
after each chunk is allocated, set its size and the offset of the next
chunk.
Similar patch approved by: Marko
Some bug still remains: innodb-index.test will lose some records from
the clustered index after ADD PRIMARY KEY (a,b(255),c(255)) when
row_merge_block_t is reduced to 8192 bytes.
row_merge(): Add the parameter "half". Add some Valgrind instrumentation.
Note that either stream can end before the other one.
row_merge_sort(): Calculate "half" for row_merge().
mem_alloc2(): New macro. This is a variant of mem_alloc() that
returns the allocated size, which is equal to or greater than
the requested size.
mem_alloc_func(): Add the output parameter *size for the allocated size.
When it is set, adjust the parameter passed to mem_heap_alloc().
rec_copy_prefix_to_buf_old(), rec_copy_prefix_to_buf(): Use mem_alloc2()
instead of mem_alloc().
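A usage sketch of the new macro (the exact macro signature is assumed):

	/* Request a buffer and learn how much was actually allocated;
	the tail beyond the requested size can then be used instead of
	being wasted. */
	ulint	size = 1000;
	byte*	buf = mem_alloc2(size, &size);

	/* size now holds the allocated length, >= the 1000 bytes
	requested. */

	mem_free(buf);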
When allocating a memory heap block, use the size that was actually
obtained from the buddy allocator. This should avoid some internal
memory fragmentation in mem_heap_create() and mem_heap_alloc().
mem_area_alloc(): Change the in parameter size to an in/out parameter.
Adjust the size based on what was obtained from pool->free_list[].
mem_heap_create_block(): Adjust block->len to what was obtained from
mem_area_alloc().
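A sketch of the in/out pattern (signature assumed):

	/* The caller passes the requested size and reads back the size
	of the area actually handed out by the buddy allocator, so that
	mem_heap_create_block() can set block->len to the real length. */
	ulint	size = 1000;
	void*	area = mem_area_alloc(&size, pool);

	/* size is now the (possibly larger) length of the area. */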
rec_print_comp(): New function, sliced from rec_print_new().
rec_print_old(), rec_print_comp(): Print the untruncated length of the column.
row_merge_print_read, row_merge_print_write, row_merge_print_cmp:
New flags, to enable debug printout in UNIV_DEBUG builds.
row_merge_tuple_print(): New function for UNIV_DEBUG builds.
row_merge_read_rec(): Obey row_merge_print_read.
row_merge_buf_write(), row_merge_write_rec_low(),
row_merge_write_eof(): Obey row_merge_print_write.
row_merge_cmp(): Obey row_merge_print_cmp.
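A sketch of the flag pattern (declarations assumed):

	#ifdef UNIV_DEBUG
	/* Plain globals that can be toggled in a debugger; each read,
	write and comparison site checks its flag before printing. */
	static ibool	row_merge_print_read	= FALSE;
	static ibool	row_merge_print_write	= FALSE;
	static ibool	row_merge_print_cmp	= FALSE;
	#endif /* UNIV_DEBUG */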
Improve Valgrind instrumentation in fast index creation.
row_merge_write_eof(), row_merge_buf_write(): When UNIV_DEBUG_VALGRIND
is defined, fill the rest of the block (after the end-of-block marker)
with 0xff.
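A sketch of the fill (the pointer names are hypothetical):

	#ifdef UNIV_DEBUG_VALGRIND
		/* b points just past the end-of-block marker and
		block_end just past the last byte of the block.  Filling
		the unused tail makes Valgrind treat it as initialized,
		and 0xff is easy to recognize in dumps. */
		memset(b, 0xff, block_end - b);
	#endif /* UNIV_DEBUG_VALGRIND */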
Add the INFORMATION_SCHEMA table innodb_lock_waits. See
https://svn.innodb.com/innobase/InformationSchema/TransactionsAndLocks
for design notes.
Things that need to be resolved before this goes live:
* MySQL must add thd_get_thread_id() function to their code
http://bugs.mysql.com/30930
* Allocate memory from mem_heap instead of using mem_alloc()
* Copy the table name and index name into the cache, because they may
be freed later, which would result in referencing freed memory
Approved by: Marko
row_merge_read_rec(): Correct a typo in a comment. Fix an arithmetic
error when the record spans two blocks.
row_merge_write_rec_low(): Add a "size" parameter. Add debug assertions
about extra_size and size.
row_merge_write_rec(): After writing a record, properly advance the
buffer pointer.
Ensure that row_merge_blocks() will have some work to do when
row_merge_block_t is shrunk to 8192 bytes.
Currently, this will cause a debug assertion failure, because
row_merge_cmp() considers all columns, not just the unique ones.