This patch was created by running the following commands:
for i in */*[ch]; do doxygenify.pl $i; done
perl -i -pe 's#\*{3} \*/$#****/#' */*[ch]
where doxygenify.pl is
https://svn.innodb.com/svn/misc/trunk/tools/doxygenify.pl r510
Verified the consistency as follows:
(0) not too many /* in: */ or /* out: */ comments left in the code:
grep -l '/\*\s*\(in\|out\)[,:/]' */*[ch]
(1) no difference when ignoring blank lines, after stripping all
C90-style /* comments */, including multi-line ones, before and after
applying this patch:
perl -i -e 'undef $/;while(<ARGV>){s#/\*(.*?)\*/##gs;print}' */*[ch]
diff -I'^\s*$' --exclude .svn -ru TREE1 TREE2
(2) after stripping @return comments and !<, generated a diff and omitted
the hunks where /* out: */ function return comments were removed:
perl -i -e'undef $/;while(<ARGV>){s#!<##g;s#\n\@return\t.*?\*/# \*/#gs;print}'\
*/*[ch]
svn diff|
perl -e 'undef $/;$_=<>;s#\n-\s*/\* out[:,]([^\n]*?)(\n-[^\n]*?)*\*/##gs;print'
Some unintended changes were left. These will be removed in a
subsequent patch.
* Remove old Innobase copyright lines from C source files
* Add a reference to the GPLv2 license as recommended by the lawyers
at Oracle Legal
buf_block_dbg_add_level(block, level): Define as an empty macro when
UNIV_SYNC_DEBUG is not defined. Remove #ifdef UNIV_SYNC_DEBUG around
all invocations.
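A minimal sketch of the idea, assuming the debug variant forwards to the
latching-order check sync_thread_add_level(); the actual definition in
buf0buf.h may differ in detail:

#ifdef UNIV_SYNC_DEBUG
# define buf_block_dbg_add_level(block, level)		\
	sync_thread_add_level(&(block)->lock, level)
#else /* UNIV_SYNC_DEBUG */
# define buf_block_dbg_add_level(block, level)	/* nothing */
#endif /* UNIV_SYNC_DEBUG */

With the empty expansion, callers no longer need to wrap each invocation
in #ifdef UNIV_SYNC_DEBUG.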
Ensure that the maximum record size will never exceed the B-tree page size limit.
For uncompressed tables, there should always be enough space for two
records in an empty B-tree page. For compressed tables, there should
be enough space for storing two node pointer records or one data
record in an empty page in uncompressed format.
dict_build_table_def_step(): Remove the inaccurate check for table row
size.
dict_index_too_big_for_tree(): New function: check if the index
records would be too big for a B-tree page.
dict_index_add_to_cache(): Add the parameter "strict". Invoke
dict_index_too_big_for_tree() if it is set.
trx_is_strict(), thd_is_strict(): New functions, for determining if
innodb_strict_mode is enabled for the current transaction.
dict_create_index_step(): Pass the new parameter strict of
dict_index_add_to_cache() as trx_is_strict(trx). All other callers
pass it as FALSE.
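A hedged sketch of how the strict flag could be determined; it assumes
that thd_is_strict() reads the session's innodb_strict_mode setting, and
the real implementations in trx0trx.c and ha_innodb.cc may differ:

/* Sketch only. */
ibool
trx_is_strict(
	trx_t*	trx)	/* in: transaction, or NULL */
{
	return(trx && trx->mysql_thd && thd_is_strict(trx->mysql_thd));
}

dict_create_index_step() would then pass trx_is_strict(trx) as the
"strict" argument of dict_index_add_to_cache().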
innodb.test: Enable innodb_strict_mode before attempting to create a
table whose record size is too big.
innodb-zip.test: Remove the test of inserting random data. Add tests
for checking that the maximum record lengths are enforced at table
creation time.
Introduce the variable innodb_file_format. Implement file format version stamping of
*.ibd files and SYS_TABLES.TYPE.
This introduces an incompatible change for compressed tables. We can do
this, as we have not released yet.
innodb-zip.test: Add tests for stricter KEY_BLOCK_SIZE and ROW_FORMAT
checks.
DICT_TF_COMPRESSED_MASK, DICT_TF_COMPRESSED_SHIFT: Replace with
DICT_TF_ZSSIZE_MASK, DICT_TF_ZSSIZE_SHIFT.
DICT_TF_FORMAT_MASK, DICT_TF_FORMAT_SHIFT, DICT_TF_FORMAT_51,
DICT_TF_FORMAT_ZIP: File format version, stored in table->flags,
in the .ibd file header, and in SYS_TABLES.TYPE.
dict_create_sys_tables_tuple(): Write the table flags to SYS_TABLES.TYPE
if the format is at least DICT_TF_FORMAT_ZIP. For old formats
(DICT_TF_FORMAT_51), write DICT_TABLE_ORDINARY as the table type.
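A hedged sketch of the choice of the TYPE value; the helper name below is
hypothetical, and the real code in dict0crea.c builds the whole
SYS_TABLES tuple:

/* Sketch: value written to SYS_TABLES.TYPE for a table. */
ulint
sys_tables_type_sketch(
	const dict_table_t*	table)
{
	if (dict_table_get_format(table) >= DICT_TF_FORMAT_ZIP) {
		/* New formats: store the full table->flags,
		including the compressed page size bits. */
		return(table->flags);
	}

	/* Old format: stay compatible with MySQL/InnoDB 5.1. */
	return(DICT_TABLE_ORDINARY);
}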
DB_TABLE_ZIP_NO_IBD: Remove the error code. The error handling is done
in ha_innodb.cc; as a failsafe measure, dict_build_table_def_step() will
silently clear the compression and format flags instead of returning this
error.
dict_mem_table_create(): Assert that no extra bits are set in the flags.
dict_sys_tables_get_zip_size(): Rename to dict_sys_tables_get_flags().
Check all flag bits, and return ULINT_UNDEFINED if the combination is
unsupported.
dict_boot(): Document the SYS_TABLES columns N_COLS and TYPE.
dict_table_get_format(), dict_table_set_format(),
dict_table_flags_to_zip_size(): New accessors to table->flags.
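A hedged sketch of how dict_table_flags_to_zip_size() can decode the
compressed page size from the ZSSIZE bits (PAGE_ZIP_MIN_SIZE and its
base-2 logarithm are introduced further below); the exact arithmetic in
the real accessor may differ:

ulint
dict_table_flags_to_zip_size(
	ulint	flags)	/* in: table->flags */
{
	ulint	zssize = (flags & DICT_TF_ZSSIZE_MASK)
		>> DICT_TF_ZSSIZE_SHIFT;

	if (!zssize) {
		return(0);	/* not compressed */
	}

	/* zssize == 1 denotes PAGE_ZIP_MIN_SIZE;
	each increment doubles the compressed page size. */
	return((PAGE_ZIP_MIN_SIZE >> 1) << zssize);
}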
dtuple_convert_big_rec(): Introduce the auxiliary variables
local_len, local_prefix_len. Store a 768-byte prefix locally
if the file format is less than DICT_TF_FORMAT_ZIP.
dtuple_convert_back_big_rec(): Restore the columns.
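A hedged sketch of the prefix rule only; it assumes the usual InnoDB
constants DICT_MAX_INDEX_COL_LEN (768-byte prefix) and
BTR_EXTERN_FIELD_REF_SIZE (20-byte BLOB pointer), and the helper name is
illustrative, not the real code in data0data.c:

/* Sketch: how much of an externally stored column stays in the record. */
ulint
local_len_sketch(
	const dict_table_t*	table)
{
	ulint	local_prefix_len = 0;

	if (dict_table_get_format(table) < DICT_TF_FORMAT_ZIP) {
		/* Old formats keep a 768-byte column prefix locally. */
		local_prefix_len = DICT_MAX_INDEX_COL_LEN;
	}

	/* The external field reference always follows the prefix. */
	return(local_prefix_len + BTR_EXTERN_FIELD_REF_SIZE);
}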
srv_file_format: New variable: innodb_file_format.
fil_create_new_single_table_tablespace(): Replace the parameter zip_size
with table->flags.
fil_open_single_table_tablespace(): Replace the parameter zip_size_in_k
with table->flags. Check the flags.
fil_space_struct, fil_space_create(), fil_op_write_log():
Replace zip_size with flags.
fil_node_open_file(): Note a TODO item for InnoDB Hot Backup.
Check that the tablespace flags match.
fil_space_get_zip_size(): Rename to fil_space_get_flags(). Add a
wrapper for fil_space_get_zip_size().
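A hedged sketch of the wrapper; 0 means an uncompressed tablespace and
ULINT_UNDEFINED an unknown one, and the real function may differ in
detail:

ulint
fil_space_get_zip_size(
	ulint	id)	/* in: tablespace id */
{
	ulint	flags = fil_space_get_flags(id);

	if (flags && flags != ULINT_UNDEFINED) {
		return(dict_table_flags_to_zip_size(flags));
	}

	/* 0 (uncompressed) or ULINT_UNDEFINED (unknown space). */
	return(flags);
}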
fsp_header_get_flags(): New function.
fsp_header_init_fields(): Replace zip_size with flags.
FSP_SPACE_FLAGS: New name for the tablespace flags. This field used
to be called FSP_PAGE_ZIP_SIZE, or FSP_LOWEST_NO_WRITE. It has always
been written as 0 in MySQL/InnoDB versions 4.1 to 5.1.
MLOG_ZIP_FILE_CREATE: Rename to MLOG_FILE_CREATE2. Add a 32-bit
parameter for the tablespace flags.
ha_innobase::create(): Check the table attributes ROW_FORMAT and
KEY_BLOCK_SIZE. Issue errors if they are inappropriate, or warnings
if the inherited attributes (in ALTER TABLE) will be ignored.
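A hedged sketch of one of the validations; the helper name is
hypothetical, and the real checks in ha_innobase::create() also consider
ROW_FORMAT, innodb_file_per_table and innodb_file_format, distinguishing
errors in strict mode from warnings otherwise:

/* Sketch: KEY_BLOCK_SIZE is given in KiB and is expected to be a
power of two between 1 and 16. */
static ibool
key_block_size_is_valid_sketch(
	ulint	kbs)
{
	return(kbs == 1 || kbs == 2 || kbs == 4 || kbs == 8 || kbs == 16);
}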
PAGE_ZIP_MIN_SIZE_SHIFT: New constant: the base-2 logarithm of PAGE_ZIP_MIN_SIZE.
Introduce UNIV_INTERN, a linkage specifier for InnoDB-internal global
symbols. Use it for all definitions of non-static variables and functions;
a sketch of a possible definition appears below.
lexyy.c, make_flex.sh: Declare yylex as UNIV_INTERN, not static. It is
referenced from pars0grm.c.
Actually, according to
nm .libs/ha_innodb.so|grep -w '[ABCE-TVXYZ]'
the following symbols are still global:
* The vtable for class ha_innodb
* pars0grm.c: The function yyparse() and the variables yychar, yylval, yynerrs
The required changes to the Bison-generated file pars0grm.c will be addressed
in a separate commit, which will add a script similar to make_flex.sh.
The class ha_innodb is renamed from class ha_innobase by a #define. Thus,
there will be no clash with the builtin InnoDB. However, there will be some
overhead for invoking virtual methods of class ha_innodb. Ideas for making
the vtable hidden are welcome. -fvisibility=hidden is not available in GCC 3.
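As a hedged sketch (the actual definition in univ.i may cover more
compilers), UNIV_INTERN could be defined like this:

#if defined(__GNUC__) && (__GNUC__ >= 4)
/* GCC 4 can hide symbols that need not be visible outside the
shared object, even without -fvisibility=hidden. */
# define UNIV_INTERN __attribute__((visibility("hidden")))
#else
/* Other compilers: no symbol hiding. */
# define UNIV_INTERN
#endif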
innodb-index.test: Add a test with a large number of externally stored
columns. Check that creating prefix indexes on too many columns is rejected.
dict_index_too_big_for_undo(): New function: Check if the undo log may
overflow.
dict_index_add_to_cache(): Return DB_SUCCESS or DB_TOO_BIG_RECORD.
Postpone the creation and linking of some data structures, so that
when dict_index_too_big_for_undo() holds, it will be easier to clean up.
Check the return status in all callers.
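A hedged sketch of the reasoning behind dict_index_too_big_for_undo();
the function name, the fixed overheads and the free-space bound below are
illustrative only, while DICT_MAX_INDEX_COL_LEN and
BTR_EXTERN_FIELD_REF_SIZE are assumed to be the usual 768-byte prefix and
20-byte reference constants:

/* Sketch: an undo log record storing the old values of the indexed
columns must fit in a single undo log page.  Externally stored columns
contribute at most a local prefix plus a field reference. */
static ibool
index_too_big_for_undo_sketch(
	ulint		n_fields,
	const ulint*	max_field_lens)	/* in: maximum length of each field */
{
	ulint	undo_rec_size = 100;	/* assumed fixed per-record overhead */
	ulint	i;

	for (i = 0; i < n_fields; i++) {
		ulint	len = max_field_lens[i];

		if (len > DICT_MAX_INDEX_COL_LEN
		    + BTR_EXTERN_FIELD_REF_SIZE) {
			len = DICT_MAX_INDEX_COL_LEN
				+ BTR_EXTERN_FIELD_REF_SIZE;
		}

		undo_rec_size += len;
	}

	return(undo_rec_size > UNIV_PAGE_SIZE - 200); /* assumed free space */
}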
to tables.
dict_mem_table_add_col(): Add the parameter "heap" for temporary memory
allocation. Allow it and "name" to be NULL. These parameters are NULL
when creating dummy indexes.
dict_add_col_name(): Remove calls to ut_malloc() and ut_free().
dict_table_get_col_name(): Allow table->col_names to be NULL.
dict_table_add_system_columns(), dict_table_add_to_cache():
Add the parameter "heap".
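A hedged sketch of the NULL handling in dict_table_get_col_name(); column
names are stored as one NUL-separated string, and a dummy table created
with name == NULL simply has no names. The real accessor may differ in
detail:

const char*
dict_table_get_col_name(
	const dict_table_t*	table,
	ulint			col_nr)
{
	const char*	s = table->col_names;

	if (s) {
		ulint	i;

		for (i = 0; i < col_nr; i++) {
			s += strlen(s) + 1;	/* skip to the next name */
		}
	}

	return(s);	/* NULL for dummy tables without column names */
}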
Pass the compressed page size (zip_size) as a parameter to the functions
that need it, instead of looking it up via the fil system inside
buf_page_get_gen(). This saves one mutex operation per block request.
buf_page_get_gen(), various macros and functions: Add parameter zip_size.
btr_node_ptr_get_child(): Add parameter index.
fil_space_get_latch(): Add optional output parameter zip_size.
fil_space_get_zip_size(): Return 0 for space id==0, because the
system tablespace is never compressed.
fsp_header_init(): Remove the parameter zip_size.
ibuf_free_excess_pages(): Remove the parameter zip_size.
trx_rseg_t, trx_undo_t: Add field zip_size.
xdes_lst_get_next(): Remove, unused.
Read the page number and space id of a page
with page_get_page_no() and page_get_space_id(). We want to avoid
buf_block_align() calls, and the page_no and space_id are now stamped
on the pages early on.
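A hedged sketch; it assumes the standard FIL header fields
FIL_PAGE_OFFSET and FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID hold the stamped
page number and space id, read with mach_read_from_4():

ulint
page_get_page_no(
	const page_t*	page)	/* in: page frame */
{
	return(mach_read_from_4(page + FIL_PAGE_OFFSET));
}

ulint
page_get_space_id(
	const page_t*	page)	/* in: page frame */
{
	return(mach_read_from_4(page + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID));
}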
Store certain fields on compressed pages in uncompressed form:
BLOB pointers, trx_id, and roll_ptr.
btr_empty(), btr_create(), page_create(): Add parameter "index", as some
index information will be encoded on the compressed page.
Define REC_NODE_PTR_SIZE as 4.
Allow btr_page_reorganize() and btr_page_reorganize_low() to fail.
Define the error code DB_ZIP_OVERFLOW.
Make row_ins_index_entry_low() static.
page0zip: Encode the index, log reorganized records, and store uncompressed
fields separately from the compressed data stream.