This patch was created by running the following commands:
for i in */*[ch]; do doxygenify.pl $i; done
perl -i -pe 's#\*{3} \*/$#****/#' */*[ch]
where doxygenify.pl is
https://svn.innodb.com/svn/misc/trunk/tools/doxygenify.pl r510
Verified the consistency as follows:
(0) not too many /* in: */ or /* out: */ comments left in the code:
grep -l '/\*\s*\(in\|out\)[,:/]' */*[ch]
(1) no difference when ignoring blank lines, after stripping all
C90-style /* comments */, including multi-line ones, before and after
applying this patch:
perl -i -e 'undef $/;while(<ARGV>){s#/\*(.*?)\*/##gs;print}' */*[ch]
diff -I'^\s*$' --exclude .svn -ru TREE1 TREE2
(2) after stripping @return comments and !<, generated a diff and omitted
the hunks where /* out: */ function return comments were removed:
perl -i -e'undef $/;while(<ARGV>){s#!<##g;s#\n\@return\t.*?\*/# \*/#gs;print}'\
*/*[ch]
svn diff|
perl -e 'undef $/;$_=<>;s#\n-\s*/\* out[:,]([^\n]*?)(\n-[^\n]*?)*\*/##gs;print'
Some unintended changes were left. These will be removed in a
subsequent patch.
history. This addresses Mantis issue #116.
dict_index_t: Enable the storage of trx_id.
row_prebuilt_t: Make many fields bit-fields to reduce the memory
footprint. Add index_usable.
ha_innobase::change_active_index(): Check if the index is usable and
set prebuilt->index_usable accordingly. Unfortunately, the return
status of this function is ignored by MySQL, and the actual refusal to
use the index must be made in row_search_for_mysql().
row_search_for_mysql(): Return DB_MISSING_HISTORY if
!prebuilt->index_usable.
convert_error_code_to_mysql(): Map DB_MISSING_HISTORY to
HA_ERR_TABLE_DEF_CHANGED.
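A minimal sketch of the error path described above; index_is_usable() is a
placeholder for the actual usability check, and the rest follows the
description:

        /* In ha_innobase::change_active_index(): remember whether the
        index has enough history for this transaction, because MySQL
        ignores the return status of this function. */
        prebuilt->index_usable = index_is_usable(prebuilt->trx,
                                                 prebuilt->index);

        /* In row_search_for_mysql(): the actual refusal happens here. */
        if (UNIV_UNLIKELY(!prebuilt->index_usable)) {
                return(DB_MISSING_HISTORY);
        }

        /* In convert_error_code_to_mysql(): */
        case DB_MISSING_HISTORY:
                return(HA_ERR_TABLE_DEF_CHANGED);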
innodb-index.test: Add a test case where access to a newly created
secondary index must be blocked for old transactions.
rb://100 approved by Heikki Tuuri
* Remove old Innobase copyright lines from C source files
* Add a reference to the GPLv2 license as recommended by the lawyers
at Oracle Legal
SYS_INDEXES when looking for partially created indexes. Use the
transaction isolation level READ UNCOMMITTED to avoid interfering with
locks held by incomplete transactions that will be rolled back in a
subsequent step in the recovery. (Issue #152)
Approved by Heikki Tuuri
if any of the fields are NULL. While the tuples are equal in the
sorting order, SQL NULL is defined to be logically unequal to
anything else. (Bug #41904)
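A minimal sketch of the comparison rule, assuming "entry" is the dtuple_t
holding the ordering fields and "cmp" the result of the tuple comparison:

        if (cmp == 0 && !dtuple_contains_null(entry)) {
                /* The tuples are equal and none of the ordering
                fields is SQL NULL: report a duplicate key value. */
        }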
rb://70 approved by Heikki Tuuri
WHILE 1=1 in the SQL procedure, so that the loop will actually be
entered and temporary indexes be dropped during crash recovery.
Thanks to Sunny Bains for pointing this out.
Tested as follows:
Set a breakpoint in row_merge_rename_indexes.
CREATE TABLE t(a INT)ENGINE=InnoDB;
CREATE INDEX a ON t(a);
-- The breakpoint will be reached. Kill and restart mysqld.
SHOW CREATE TABLE t;
-- This shows the MySQL .frm file, without an index.
CREATE TABLE innodb_table_monitor(a INT)ENGINE=InnoDB;
-- This will dump the InnoDB dictionary to the error log, without the index.
This should hopefully address Issue #85.
ha_innobase::add_index(): Lock the data dictionary before invoking
row_merge_rename_indexes() or row_merge_drop_indexes(), because neither
function will commit the transaction.
ha_innobase::final_drop_index(): Commit the transactions before
unlocking the data dictionary.
row_merge_drop_index(), row_merge_drop_indexes(), row_merge_rename_tables(),
row_merge_rename_indexes(): Note and assert that the data dictionary must
have been exclusively locked by the caller, because the transaction will
not be committed.
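A rough sketch of the locking protocol, as seen from ha_innobase::add_index()
(simplified; error handling omitted):

        row_mysql_lock_data_dictionary(trx);

        error = row_merge_rename_indexes(trx, table);

        /* Neither row_merge_rename_indexes() nor row_merge_drop_indexes()
        commits the transaction; commit explicitly before releasing the
        data dictionary. */
        trx_commit_for_mysql(trx);

        row_mysql_unlock_data_dictionary(trx);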
row_drop_database_for_mysql(): Commit the transaction immediately after
dropping each table. When MySQL is holding open handles to some tables,
it can otherwise happen that the data dictionary is unlocked while the
transaction has not been committed. This bug was introduced in r2739,
which changed the semantics of row_drop_table_for_mysql().
row_drop_database_for_mysql(): Postpone mem_free(table_name), so that
an error printout will not dereference freed memory.
to the data dictionary records. This should fix Issue #83.
row_drop_table_for_mysql_no_commit(): Rename back to
row_drop_table_for_mysql(). Commit the transaction if the data
dictionary was not locked when the function was called. Otherwise,
neither commit the transaction nor unlock the data dictionary.
row_merge_drop_table(): Let row_drop_table_for_mysql() take care of
locking the data dictionary.
dict_create_or_check_foreign_constraint_tables(),
trx_rollback_active(), row_create_table_for_mysql(),
row_create_index_for_mysql(), row_table_add_foreign_constraints():
Explicitly commit the transaction, because row_drop_table_for_mysql()
would no longer commit it, given that the data dictionary will be
locked during the calls.
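A sketch of the resulting commit rule in row_drop_table_for_mysql();
"locked_dictionary" is an assumed local flag that is set when the function
acquired the data dictionary lock itself because the caller had not:

        if (locked_dictionary) {
                /* We locked the data dictionary: commit and release it. */
                trx_commit_for_mysql(trx);
                row_mysql_unlock_data_dictionary(trx);
        }
        /* Otherwise, the caller holds the data dictionary lock and is
        responsible for committing the transaction. */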
Approved by Sunny (over IM). rb://23
in fast index creation. In r1399, we wrote undo log records about
creating indexes. The special undo log records were deemed
unnecessary later, but this special handling was not removed then.
row_merge_create_index(): Do not assign index->id.
dict_build_index_def_step(): Unconditionally assign index->id.
as type-independent macros instead of functions. Because ut_2pow_round()
and ut_2pow_remainder() no longer assert ut_is_2pow(m), add the assertions
to callers when needed. Also add parentheses to assist the compiler in
common subexpression elimination.
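For illustration, the macros are of roughly the following form (the actual
definitions in ut0ut.h may differ in detail):

        #define ut_is_2pow(n)           (!((n) & ((n) - 1)))
        #define ut_2pow_round(n, m)     ((n) & ~((m) - 1))
        #define ut_2pow_remainder(n, m) ((n) & ((m) - 1))

A caller that relies on the precondition now asserts it explicitly:

        ut_ad(ut_is_2pow(m));
        offset = ut_2pow_round(offset, m);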
lock_get_table(), locks_row_eq_lock(), buf_page_get_mutex(): Add return
after ut_error. On Windows, ut_error is not declared as "noreturn".
Add explicit type casts when assigning ulint to byte to get rid of
"possible loss of precision" warnings.
struct i_s_table_cache_struct: Declare rows_used, rows_allocd as ulint
instead of ullint. 32 bits should be enough.
fill_innodb_trx_from_cache(), i_s_zip_fill_low(): Cast 64-bit unsigned
integers to longlong when calling Field::store(longlong, bool is_unsigned).
Otherwise, the compiler would implicitly convert them to double and
invoke Field::store(double) instead.
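A sketch of the cast (the variable names are made up for illustration):

        /* Without the cast, the 64-bit unsigned value would be
        implicitly converted to double, invoking Field::store(double).
        The cast selects Field::store(longlong, bool is_unsigned). */
        field->store((longlong) n_bytes, TRUE);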
recv_truncate_group(), recv_copy_group(), recv_calc_lsn_on_data_add():
Cast ib_uint64_t expressions to ulint to get rid of "possible loss of
precision" warnings. (There should not be any loss of precision in
these cases.)
log_close(), log_checkpoint_margin(): Declare some variables as ib_uint64_t
instead of ulint, so that there won't be any potential loss of precision.
mach_write_ull(): Cast the second argument of mach_write_to_4() to ulint.
OS_FILE_FROM_FD(): Cast the return value of _get_osfhandle() to HANDLE.
row_merge_dict_table_get_index(): Cast the parameter of mem_free() to (void*)
in order to get rid of the bogus MSVC warning C4090, which has been reported
as MSVC bug 101661:
<http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=101661>
row_mysql_read_blob_ref(): To get rid of a bogus MSVC warning C4090,
drop a const qualifier.
symbols. Use it for all definitions of non-static variables and functions.
lexyy.c, make_flex.sh: Declare yylex as UNIV_INTERN, not static. It is
referenced from pars0grm.c.
Actually, according to
nm .libs/ha_innodb.so|grep -w '[ABCE-TVXYZ]'
the following symbols are still global:
* The vtable for class ha_innodb
* pars0grm.c: The function yyparse() and the variables yychar, yylval, yynerrs
The required changes to the Bison-generated file pars0grm.c will be addressed
in a separate commit, which will add a script similar to make_flex.sh.
The class ha_innodb is renamed from class ha_innobase by a #define. Thus,
there will be no clash with the builtin InnoDB. However, there will be some
overhead for invoking virtual methods of class ha_innodb. Ideas for making
the vtable hidden are welcome. -fvisibility=hidden is not available in GCC 3.
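For reference, an illustrative (not verbatim) definition of UNIV_INTERN,
assuming a GCC 4 or later build of the plugin:

        #if defined(__GNUC__) && (__GNUC__ >= 4)
        # define UNIV_INTERN    __attribute__((visibility("hidden")))
        #else
        # define UNIV_INTERN
        #endif

        /* Example use for a non-static function such as yylex: */
        UNIV_INTERN
        int
        yylex(void);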
btr_rec_copy_externally_stored_field() are safe.
row_merge_copy_blobs(): Note that the table is locked during index creation.
Therefore, none of its BLOBs can be freed.
row_sel_fetch_columns(): Note that rec must be protected by a page latch.
Add const qualifier to rec.
row_sel_get_clust_rec(): Note that the clustered index record is protected
by a page latch that was acquired when the persistent cursor was positioned
and that the latch will be freed by mini-transaction commit.
row_sel_try_search_shortcut(): Check the delete-mark flag before fetching
the columns. Note that the clustered index record is protected
by a page latch that was acquired when the persistent cursor was positioned
and that the latch will be freed by mini-transaction commit.
row_sel(), row_search_for_mysql(): Note that the clustered index record
is protected by a page latch that was acquired when the persistent cursor
was positioned and that the latch will be freed by mini-transaction commit.
row_sel_field_store_in_mysql_format(): Add const qualifier to data.
row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql():
Add const qualifier to rec. Note that rec must be protected by a page latch.
row_create_index_graph_for_mysql(): Move from row0mysql.c to row0merge.c
and rename to row_merge_create_index_graph(). Also change the function
comment to say that the function will create and execute the query graph
for creating the index.
row_merge_create_index(): Remove redundant assignment to trx->error_state.
Active transactions must not switch table or index definitions on the fly,
for several reasons, including the following:
* copied indexes do not carry any history or locking information;
that is, rollbacks, read views, and record locking would be broken
* huge potential for race conditions, inconsistent reads and writes,
loss of data, and corruption
Instead of trying to track down if the table was changed during a transaction,
acquire appropriate locks that protect the creation and dropping of indexes.
innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX. Test
that consistent reads work across dropped indexes.
lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion.
When inserting a record into an index, the table must be at least IX-locked.
However, when an index is being created, an IS-lock on the table is
sufficient.
row_merge_lock_table(): Add the parameter enum lock_mode mode, which must
be LOCK_X or LOCK_S.
row_merge_drop_table(): Assert that n_mysql_handles_opened == 0.
Unconditionally drop the table.
ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate.
After acquiring an X lock, assert that n_mysql_handles_opened == 1.
Remove the comments about dropping tables in the background.
ha_innobase::final_drop_index(): Acquire an X lock on the table.
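A sketch of the lock acquisition in ha_innobase::add_index(); "new_primary"
is an assumed flag meaning the table is being rebuilt with a new PRIMARY KEY:

        /* Rebuilding the table requires an exclusive lock; adding a
        secondary index only needs to keep writers out. */
        error = row_merge_lock_table(trx, innodb_table,
                                     new_primary ? LOCK_X : LOCK_S);

        if (new_primary) {
                ut_a(innodb_table->n_mysql_handles_opened == 1);
        }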
dict_table_t: Remove version_number, to_be_dropped, and prebuilts.
ins_node_t: Remove table_version_number.
enum lock_mode: Move the definition from lock0lock.h to lock0types.h.
ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete():
Remove.
row_prebuilt_t: Remove the declaration from row0types.h.
row_drop_table_for_mysql_no_commit(): Always print a warning if a table
was added to the background drop queue.
when row_build() was changed to prefetch all externally stored column
prefixes that occur in ordering fields of an index.
row_build(): Add the parameter col_table for determining which
externally stored columns need to be fetched.
row_merge_read_clustered_index(): Pass new_table as the said parameter,
so that newly added indexes containing column prefix indexes of externally
stored columns will work.
Only add indexed BLOBs to row_ext.
trx_undo_rec_get_partial_row(): Move the BLOB fetching to row_ext_create().
row_build(): Pass only those BLOBs to row_ext_create() that are referenced by
ordering columns of some indexes, similar to trx_undo_rec_get_partial_row().
row_ext_create(): Add the parameter "tuple". Move the implementation
from row0ext.ic to row0ext.c.
row_ext_lookup_ith(), row_ext_lookup(): Return a const pointer. Remove
the parameters "field" and "f_len". Make the row_ext_t* parameter const.
row_ext_t: Remove the field zip_size.
field_ref_zero[]: Declare in btr0types.h instead of btr0cur.h.
row_ext_lookup_low(): Rename to row_ext_cache_fill() and change the
signature.
inserted, uncommitted clustered index records when determining if a
secondary index record that contains a column prefix of an externally
stored column is referencing the clustered index record.
field_ref_zero[]: A BLOB pointer full of zero, for use in comparisons.
btr_copy_externally_stored_field_prefix(): Assert that the BLOB pointer is set.
row_ext_lookup_ith(), row_ext_lookup(), row_ext_lookup_low(): Document
that field_ref_zero is returned when the BLOB cannot be fetched.
row_ext_lookup_low(): Return field_ref_zero and *len = 0 when the
BLOB pointer is unset.
row_build_index_entry(): Return NULL when a needed BLOB pointer cannot
be dereferenced (row_ext_lookup returns field_ref_zero). Check the
return value for NULL in callers.
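A minimal sketch of the check performed by callers such as
row_build_index_entry(), with the signatures as described above (variable
names are illustrative):

        ulint           len;
        const byte*     field = row_ext_lookup(ext, col_no, &len);

        if (UNIV_UNLIKELY(field == field_ref_zero)) {
                /* The BLOB pointer is unset: the externally stored
                column prefix cannot be fetched, so the index entry
                cannot be built. */
                return(NULL);
        }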
row_vers_impl_x_locked_off_kernel(): Avoid comparisons when
row_build_index_entry() returns NULL.
row_vers_old_has_index_entry(): Ignore records for which
row_build_index_entry() returns NULL. The entry should never be NULL
in rollback, but it may be NULL in purge.
row_merge_buf_add(): Assert that row_ext_lookup() does not return
field_ref_zero. The table will be locked during index creation.
ha_innobase::write_row(): The printf format %p expects const void*.
STRUCT_FLD: Do not use the GCC extension when __STRICT_ANSI__ is defined.
row_merge_read_clustered_index(): Compound initializers must not contain
variables. Assign to struct fields instead.
enum trx_dict_op: dictionary operation modes
trx_get_dict_operation(), trx_set_dict_operation(): Accessors for
trx->dict_operation.
lock_table_enqueue_waiting(), lock_rec_enqueue_waiting(): Do not complain
about lock waits if the dictionary mode is TRX_DICT_OP_INDEX_MAY_WAIT.
row_merge_lock_table(): Remove the work-around for avoiding the warning
in lock_table_enqueue_waiting().
trx_undo_mark_as_dict_operation(): Do not write trx->table_id to the
undo log unless the dict_operation is TRX_DICT_OP_TABLE.
ha_innobase::add_index(): Set the dict_operation mode initially to
TRX_DICT_OP_INDEX_MAY_WAIT, then lock the table exclusively, and set the
mode to TRX_DICT_OP_INDEX, and optionally to TRX_DICT_OP_TABLE when
creating a temporary table.
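A sketch of that sequence, assuming the table is locked via
row_merge_lock_table() as elsewhere in this log and "new_primary" is an
assumed flag for the temporary-table case:

        trx_set_dict_operation(trx, TRX_DICT_OP_INDEX_MAY_WAIT);

        error = row_merge_lock_table(trx, innodb_table, LOCK_X);

        trx_set_dict_operation(trx, TRX_DICT_OP_INDEX);

        if (new_primary) {
                /* The clustered index is rebuilt into a temporary table. */
                trx_set_dict_operation(trx, TRX_DICT_OP_TABLE);
        }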
fix the bugs introduced in r1591.
row_rec_to_index_entry_low(): Clear "n_ext". Do not allow it to be NULL.
Add const qualifier to dict_index_t*.
row_rec_to_index_entry(): Add the parameters "offsets" and "n_ext".
btr_cur_optimistic_update(): Add an assertion that there are no externally
stored columns. Remove the unreachable call to btr_cur_unmark_extern_fields()
and the preceding unnecessary call to rec_get_offsets().
btr_push_update_extern_fields(): Remove the parameters index, offsets.
Only report the additional externally stored columns of the update vector.
row_build(), trx_undo_rec_get_partial_row(): Flag externally stored columns
also with dfield_set_ext().
rec_copy_prefix_to_dtuple(): Assert that there are no externally stored
columns in the prefix.
row_build_row_ref(): Note and assert that the index is a secondary index,
and assert that there are no externally stored columns.
row_build_row_ref_fast(): Assert that there are no externally stored columns.
rec_offs_get_n_alloc(): Expose the function.
row_build_row_ref_in_tuple(): Assert that there are no externally stored
columns in a record of a secondary index.
row_build_row_ref_from_row(): Assert that there are no externally stored
columns.
row_upd_check_references_constraints(): Add the parameter offsets, to
avoid a redundant call to rec_get_offsets().
row_upd_del_mark_clust_rec(): Add the parameter offsets. Remove
duplicated code.
row_ins_index_entry_set_vals(): Copy the external storage flag.
sel_pop_prefetched_row(): Assert that there are no externally stored
columns.
row_scan_and_check_index(): Copy offsets to a temporary heap across
the invocation of row_rec_to_index_entry().
offsets_[] arrays, as suggested by Vasil.
rec_offs_set_n_alloc(): Declare as a public function. Assert that
n_alloc > REC_OFFS_HEADER_SIZE.
rec_offs_get_n_alloc(): Assert that n_alloc > REC_OFFS_HEADER_SIZE.
Since r1905, innobase_rec_to_mysql() does not require a clustered index record.
row_merge_dup_t: Remove old_table.
row_merge_dup_report(): Do not fetch the clustered index record. Simply
convert the tuple by innobase_rec_to_mysql().
row_merge_blocks(), row_merge(), row_merge_sort(): Add a TABLE* parameter
for reporting duplicate key values during file sort.
row_merge_read_clustered_index(): Replace UNIV_PAGE_SIZE with the more
appropriate sizeof(mrec_buf_t).
innobase_rec_to_mysql(): New function, for converting an InnoDB clustered
index record to MySQL table->record[0]. TODO: convert integer fields.
Currently, integer fields are in big-endian byte order instead of
host byte order, and signed integer fields are offset by 0x80000000.
innobase_rec_reset(): New function, for resetting table->record[0].
row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL table
handle) for reporting duplicate key values.
dtuple_from_fields(): New function, to convert an array of dfield_t* to
dtuple_t.
dtuple_get_n_ext(): New function, to compute the number of externally stored
fields.
row_merge_dup_t: Structure for counting and reporting duplicate records.
row_merge_dup_report(): Function for counting and reporting duplicate records.
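The structure is roughly of the following shape (field names assumed for
illustration):

        struct row_merge_dup_struct {
                const dict_index_t*     index;  /* index being created */
                TABLE*                  table;  /* MySQL table, for
                                                reporting duplicates */
                ulint                   n_dup;  /* number of duplicates */
        };

        typedef struct row_merge_dup_struct row_merge_dup_t;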
row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup
parameter with row_merge_dup_t* dup.
row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is
NULL when sorting a non-unique index.
row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(),
row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(),
row_merge(), row_merge_sort(): Add const qualifiers.
row_merge_read_clustered_index(): Use a common error handling branch err_exit.
Invoke row_merge_buf_sort() differently on unique indexes.
row_merge_blocks(): Add a TODO note: we could invoke innobase_rec_to_mysql()
to report duplicate key values when creating a clustered index.
row_merge(): Add the assertion ut_ad(half > 0).
row_merge_sort(): Compute the half of the merge file correctly. The
previous implementation used truncating division, which may result in
loss of records when the file size in blocks is not a power of 2.
A bug still remains: innodb-index.test loses some records from the
clustered index after ADD PRIMARY KEY (a,b(255),c(255)) when
row_merge_block_t is reduced to 8192 bytes.
row_merge(): Add the parameter "half". Add some Valgrind instrumentation.
Note that either stream can end before the other one.
row_merge_sort(): Calculate "half" for row_merge().
rec_print_comp(): New function, sliced from rec_print_new().
rec_print_old(), rec_print_comp(): Print the untruncated length of the column.
row_merge_print_read, row_merge_print_write, row_merge_print_cmp:
New flags, to enable debug printout in UNIV_DEBUG builds.
row_merge_tuple_print(): New function for UNIV_DEBUG builds.
row_merge_read_rec(): Obey row_merge_print_read.
row_merge_buf_write(), row_merge_write_rec_low(),
row_merge_write_eof(): Obey row_merge_print_write.
row_merge_cmp(): Obey row_merge_print_cmp.
in fast index creation.
row_merge_write_eof(), row_merge_buf_write(): When UNIV_DEBUG_VALGRIND
is defined, fill the rest of the block (after the end-of-block marker)
with 0xff.
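A sketch of the debug fill ("buf_end" is an assumed pointer to the end of
the fixed-size block, and "b" points just past the end-of-block marker):

        #ifdef UNIV_DEBUG_VALGRIND
                /* The rest of the block would otherwise be written to
                disk uninitialized; fill it with a recognizable pattern
                so that Valgrind does not complain. */
                memset(b, 0xff, buf_end - b);
        #endif /* UNIV_DEBUG_VALGRIND */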
row_merge_read_rec(): Correct a typo in a comment. Fix an error in the
arithmetic when the record spans two blocks.
row_merge_write_rec_low(): Add a "size" parameter. Add debug assertions
about extra_size and size.
row_merge_write_rec(): After writing a record, properly advance the
buffer pointer.
trx_t: Remove dict_undo_list and dict_redo_list.
innobase_create_temporary_tablename(): Replace TEMP_TABLE_PREFIX with
a table name suffix "#1" or "#2". In this way, the user can restore
precious data, should anything go wrong. It is possible to reach an
inconsistent state, because the creation, deletion and renaming of
single-table tablespaces are not transactional.
ut_print_namel(), fil_make_ibd_name(), innobase_rename_table(): Remove
the special treatment of TEMP_TABLE_PREFIX.
Introduce TEMP_INDEX_PREFIX == 0xff for temporary indexes. This byte
cannot occur in index names since MySQL 4.1. However, it might have
been possible to use this byte in MySQL 4.0.
recv_recovery_from_checkpoint_finish(): Call the new function
row_merge_drop_temp_indexes(), to drop all indexes whose name starts
with the byte 0xff.
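For illustration, the marker and the recovery-time check could look as
follows ("index_name" is an assumed variable; the 0xff value is from the
description above):

        #define TEMP_INDEX_PREFIX       0xff

        /* In row_merge_drop_temp_indexes(): */
        if ((byte) index_name[0] == TEMP_INDEX_PREFIX) {
                /* drop this half-created index */
        }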
row_merge_rename_indexes(): Renamed from row_merge_rename_index().
Remove the parameter "index".
row_drop_table_for_mysql(): Unconditionally call trx_commit_for_mysql().
row_drop_table_for_mysql_no_commit(): Correct the function comment,
based on the corrected comment of row_drop_table_for_mysql(). Rely on
table->to_be_dropped instead of TEMP_TABLE_PREFIX.
ha_innobase::add_index(): Simplify the control flow.
row_drop_table_for_mysql() call with a call to
row_drop_table_for_mysql_no_commit(). The last parameter of
the function is ibool drop_db, not ibool do_commit. Also,
since r1790 of trunk it is not necessary to copy table->name.