Problem
========
Replication breaks if an event's length exceeds the size of the
master Dump thread's max_allowed_packet. The failure occurs because
the event length plus the max_event_header length exceeds the
max_allowed_packet of the Dump thread. This causes the Dump thread to
break replication and throw an error.
That can happen, for example, with row-based replication in an
Update_rows event.
Fix
====
The problem is fixed in 2 steps:
1.) The Dump thread's limit on reading an event is raised to the upper
limit, i.e. the Dump thread reads whatever gets logged in the binary
log.
2.) On the slave side, the max_allowed_packet for the slave's threads
(IO/SQL) is raised to 1GB.
This is done through the new server option slave_max_allowed_packet,
which lets the DBA regulate the max_allowed_packet of the slave
threads (IO/SQL) and facilitates the sending of large packets from the
master to the slave. The slave can thus receive the large packets and
apply them successfully.
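A hedged usage sketch (the option name is taken from this entry; the
exact default and whether it is settable at runtime may vary by
version):
  -- Raise the packet limit used by the slave IO/SQL threads,
  -- independently of the client-facing max_allowed_packet.
  SET GLOBAL slave_max_allowed_packet = 1073741824;  -- 1GB
  SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet';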
Problem
========
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
The failure occurs because the event length exceeds the total of
max_allowed_packet + the max_event_header length. Since the event
length exceeds this size, the master Dump thread is unable to send the
packet on to the slave.
That can happen, for example, with row-based replication in an
Update_rows event.
Fix
====
The problem was fixed by raising the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done through the new server option that regulates the
max_allowed_packet of the slave threads (IO/SQL). The slave can thus
receive the large packets and apply them successfully.
BY A CONCURRENT TRANSACTION
The member function QUICK_RANGE_SELECT::init_ror_merged_scan()
performs a table handler clone. InnoDB did not provide a clone
operation: ha_innobase::clone() did not exist, and handler::clone()
does not take care of ha_innobase->prebuilt->select_lock_type. Because
of this, a locking read would be done on one index while a non-locking
(consistent) read was done on the other index.
The patch introduces an ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(): it calls the base class
handler::clone() and then performs any additional operation required,
setting ha_innobase->prebuilt->select_lock_type correctly.
rb://1060 approved by Marko
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and
slow_log tables, as a consequence of the fix for Bug#26121. Hence,
after dropping the mysql database and applying the dump with logging
enabled, "'general_log' table not found" errors are logged into the
server log file.
Analysis:
---------
As part of the fix for Bug#26121, dumping of the general_log and
slow_log tables was skipped, because a data dump of those tables takes
LOCKS, which is not allowed on log tables.
Fix:
----
Instead of taking both the metadata and the data dump for those
tables, take only the metadata dump, which does not need LOCKS. The
fix uses the algorithm below.
Design before fix:
1) The mysql database contains tables such as db, event, ...
general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and
do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database contains tables such as db, event, ...
general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and
do 'show create table'.
5) Release the lock.
While taking the metadata dump for general_log and slow_log, "CREATE
TABLE" is replaced with "CREATE TABLE IF NOT EXISTS" and the "DROP
TABLE" statements are omitted: "DROP TABLE" fails for these tables if
logging is enabled, and the customer applies the dump with logging
enabled, so a dump containing "DROP TABLE" would fail.
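For illustration only (the real column list is whatever 'show create
table' reports; this is an abbreviated sketch), the dump now carries
statements of this shape for the log tables:
  CREATE TABLE IF NOT EXISTS `general_log` (
    `event_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `user_host` MEDIUMTEXT NOT NULL,
    `argument` MEDIUMTEXT NOT NULL
  ) ENGINE=CSV COMMENT='General log';
  -- no DROP TABLE precedes it, and IF NOT EXISTS keeps the statement
  -- from failing when the table already exists.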
After the fix, "Table 'mysql.general_log' doesn't exist" errors can
still be observed initially; that is expected, because in the customer
scenario the mysql database is dropped while logging is enabled. Once
the dump taken before the "drop database mysql" is applied, the errors
no longer occur.
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleaned up.
The problem is solved by moving 'LOG_INFO linfo;' to function scope.
BUG#11761686 insert_id event is not filtered.
Two issues are covered.
An INSERT into an autoincrement field which is not the first part of a
composite primary key is unsafe by autoincrement logging design; a
sketch of the layout follows below. The case is specific to the MyISAM
engine, because InnoDB does not allow such a table definition.
However, in MIXED mode no warning was issued and no row-format logging
was done; that is fixed.
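A minimal sketch of the affected layout (hypothetical table; MyISAM
accepts an AUTO_INCREMENT column as a non-first key part, InnoDB does
not):
  CREATE TABLE t1 (
    a INT NOT NULL,
    b INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (a, b)
  ) ENGINE=MyISAM;
  -- Under binlog_format=MIXED this INSERT is now logged in row format
  -- and raises an unsafe-statement warning:
  INSERT INTO t1 (a) VALUES (1);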
Int-, Rand-, and User-var log events were not filtered along with
their parent query, which made it possible for them to corrupt the
execution context of the following query.
Fixed by deferring their execution until the parent query executes.
******
Bug#11754117
Post review fixes.
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this
output - for instance MEM generates the SUM of the sizes of the logs
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an
account that holds its connection open whilst monitoring, and this can
lock out administrators when the server gets overloaded and reaches
max_connections: the extra connection reserved for SUPER users is
already taken by the "monitor" account.
As SHOW MASTER STATUS and all other replication-related statements
work with either REPLICATION CLIENT or SUPER privileges, this worklog
makes SHOW MASTER LOGS and SHOW BINARY LOGS consistent with them,
allowing both commands with either SUPER or REPLICATION CLIENT.
This allows monitoring tools to not require a SUPER privilege any more,
so is safer in overloaded situations, as well as being more secure, as
lighter privileges can be given to users of such tools or scripts.
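A sketch of the intended monitoring setup (hypothetical account name):
  GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'%';
  -- Both statements now succeed without SUPER:
  SHOW BINARY LOGS;
  SHOW MASTER LOGS;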
ORDER BY COUNT(*) LIMIT.
PROBLEM:
With respect to the problem in the bug description, the two tables
presented exhibit different behaviors because InnoDB statistics
(rec_per_key in this case) are updated for the first table but not for
the second. As a result, the query plan is changed in
test_if_skip_sort_order to use an 'index' scan; hence the difference
in the explain output. (NOTE: the problem can be reproduced with the
first table by reducing the number of tuples and changing the table
structure.)
The varied output for the query on the second table results from this
query plan change. When the query plan is changed to use an 'index'
scan after the call to test_if_skip_sort_order, keyread is set to TRUE
immediately. If for some reason this index scan is later dropped in
favour of a filesort, only the keys are fetched, not the entire tuple.
As a result, junk values appear in the result set.
Following is the code flow:
Call test_if_skip_sort_order
-Choose an index to give sorted output
-If this is a covering index, set_keyread to TRUE
-Set the scan to INDEX scan
Call test_if_skip_sort_order second time
-Index is not chosen (note that we do not pass the actual limit value
the second time; hence we do not choose the index scan the second
time, which is itself a bug, fixed in 5.6 with WL#5558)
-goto filesort
Call filesort
-Create quick range on a different index
-Since keyread is set to TRUE, we fetch only the columns of the index
-As a result, the required columns are not fetched
FIX:
Remove the call to set_keyread(TRUE) from test_if_skip_sort_order.
The access functions 'join_read_first' and 'join_read_last' call
set_keyread anyway.
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong calculation of the key length during
creation of a reference for the sort order index. The problem is that
keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled while the
used_tables parameter (of the create_ref_for_key() function) does not.
Key parts which have OUTER_REF_TABLE_BIT are therefore omitted, which
could lead to an incorrect key length calculation (zero key length).
Analysis:
-------------------------------
According to the Manual
(http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html):
"Column, index, stored routine, and event names are not case sensitive on any
platform, nor are column aliases."
In other words, 'lower_case_table_names' does not affect the behaviour of
those identifiers.
On the other hand, trigger names are case sensitive on some platforms,
and case insensitive on others. 'lower_case_table_names' does not affect
the behaviour of trigger names either.
The bug was that SHOW statements did a case sensitive comparison for
stored procedure / stored function / event names.
Fix:
Modified the code so that the comparison is case insensitive for
routines and events in "SHOW" operations.
As part of this commit, only fixing the test failures due to the actual code fix.
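A hedged sketch of the change in behavior (hypothetical routine name):
  CREATE PROCEDURE testProc() SELECT 1;
  -- The name comparison is now case insensitive, so both of these
  -- list testProc:
  SHOW PROCEDURE STATUS LIKE 'testProc';
  SHOW PROCEDURE STATUS LIKE 'TESTPROC';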
The test case must insert all the records using a single transaction;
otherwise the test case takes more than 15 minutes and will time out
in pb2 and mtr.
BUG#64503: mysql frequently ignores --relay-log-space-limit
When the SQL thread goes to sleep, waiting for more events, it sets
the flag ignore_log_space_limit to true. This gives the IO thread a
chance to queue some more events and ultimately the SQL thread will be
able to purge the log once it is rotated. At that point, the SQL
thread resets ignore_log_space_limit to false. However, between the
time the SQL thread sets the ignore flag and the time it resets it,
the IO thread may be queuing events in the relay log, possibly going
way over the limit.
This patch makes the IO and SQL threads synchronize when they reach
the space limit, asking for only one event at a time. Thus, the SQL
thread sets the ignore_log_space_limit flag and the IO thread resets
it to false every time it processes one more event. In addition, every
time the SQL thread processes the next event and the limit has been
reached, it checks whether the IO thread should rotate. If it should,
it instructs the IO thread to rotate, giving the SQL thread a chance
to purge the logs (freeing space). Finally, this patch removes the
resetting of the ignore_log_space_limit flag from purge_first_log,
because it is now reset by the IO thread every time it processes the
next event once the limit has been reached.
If the SQL thread is in a transaction, it cannot purge, so there is no
point in asking the IO thread to rotate. The only thing it can do is
ask for more events until the transaction is over (then it can ask the
IO thread to rotate and purge the log right away). Otherwise, there
would be a deadlock (the SQL thread would not be able to purge, and
the IO thread would not be able to queue events so that the SQL thread
could finish the transaction).
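For reference, the limit involved is the relay_log_space_limit option
(a startup option, not changeable at runtime); its current value can
be checked with:
  SHOW GLOBAL VARIABLES LIKE 'relay_log_space_limit';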
Problem: Grouping results by VALUES(alias for string literal) causes
the server to crash.
Item_insert_values is not constructed to handle argument types other
than a field or a reference to a field. In this case, the argument is
an Item_string, which causes Item_insert_values::fix_fields() to
crash.
Fix: Issue an error message when the argument to Item_insert_values is
not a field or a reference to a field.
This is slightly at odds with the documentation, which states that
VALUES should return NULL, but the error message is only issued in
cases where the server would otherwise crash, so there is no change in
behavior for queries that already work. Future versions will restrict
the syntax so that using VALUES in this way is illegal.
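A hypothetical repro of the shape described (assuming some table t1;
the alias names a string literal, so the argument of
Item_insert_values is an Item_string rather than a field):
  -- Crashed before the fix; now rejected with an error message:
  SELECT 'x' AS a FROM t1 GROUP BY VALUES(a);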
truncating, inserting the same set of rows. When a table is
re-created with the same set of rows, the data file size must
not grow.
rb:968
Approved by Marko.
A defect in the subquery substitution code may lead to a server crash:
setting a substitution's name must be followed by setting its length
(to keep the two in sync).
Problem:
Lack of validation of incoming geometry data may lead to a server
crash when the ISCLOSED() function is called.
Solution:
The necessary check on incoming data was added.
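A hypothetical illustration of the class of input involved (a byte
string that is not a valid geometry value):
  -- Previously such unvalidated bytes could reach ISCLOSED() and
  -- crash the server; with the check in place the call fails cleanly:
  SELECT ISCLOSED(CONVERT('not a geometry' USING binary));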
There are two threads. In one thread, a DML operation involving a
cascaded update is going on. In another thread, an ALTER TABLE ... ADD
FOREIGN KEY constraint is executing. Under these circumstances, it is
possible for the DML thread to access a dict_foreign_t object that has
been freed by the DDL thread.
The debug sync test case provides the sequence of operations. Without
the fix, the test case crashes the server (because of a newly added
assert). With the fix, the ALTER TABLE statement returns an error
message.
Backporting the fix from MySQL 5.5 to 5.1.
rb:961
rb:947
Analysis:
========================
sql_mode "NO_BACKSLASH_ESCAPES": When user want to use backslash as character input,
instead of escape character in a string literal then sql_mode can be set to
"NO_BACKSLASH_ESCAPES". With this mode enabled, backslash becomes an ordinary
character like any other.
SQL_MODE set applies to the current client session. And while creating the stored
procedure, MySQL stores the current sql_mode and always executes the stored
procedure in sql_mode stored with the Procedure, regardless of the server SQL
mode in effect when the routine is invoked.
In the scenario (for which bug is reported), the routine is created with
sql_mode=NO_BACKSLASH_ESCAPES. And routine is executed with the invoker sql_mode
is "" (NOT SET) by executing statement "call testp('Axel\'s')".
Since invoker sql_mode is "" (NOT_SET), the '\' in 'Axel\'s'(argument to function)
is considered as escape character and column "a" (of table "t1") values are
updated with "Axel's". The binary log generated for above update operation is as below,
set sql_mode=XXXXXX (for no_backslash_escapes)
update test.t1 set a= NAME_CONST('var',_latin1'Axel\'s' COLLATE 'latin1_swedish_ci');
While logging stored procedure statements, the local variables
(parameters) used in the statements are replaced with
NAME_CONST(var_name, var_value) (an internal function; see
http://dev.mysql.com/doc/refman/5.6/en/miscellaneous-functions.html#function_name-const).
On the slave, these logs are applied; NAME_CONST is parsed to get the
variable and its value. Since the stored procedure was created with
sql_mode="NO_BACKSLASH_ESCAPES", that sql_mode is also logged, so that
the slave sets it before executing the routine's statements. Thus, on
the slave, sql_mode is set to "NO_BACKSLASH_ESCAPES", and while
parsing the NAME_CONST of the string variable, '\' is treated as a
non-escape character, and parsing reports an error for "'" (as there
is only one "'" and no backslash).
The slave's parsing under sql_mode "NO_BACKSLASH_ESCAPES" was correct.
The error arises because, while writing the binary log, the "'" (of
Axel's) was escaped with a "\" character. In fact, all special
characters (n, r, ', ", \, 0, ...) are escaped while writing
NAME_CONST for a string variable (parameter, local variable) to the
binary log, irrespective of the "NO_BACKSLASH_ESCAPES" sql_mode. So,
basically, the problem is that logging a string parameter does not
take the sql_mode value into account.
Fix:
========================
When sql_mode is set to "NO_BACKSLASH_ESCAPES", escaping characters
such as (n, r, ', ", \, 0, ...) must be avoided. To do so, a check was
added so that such characters are not escaped when writing NAME_CONST
for string variables to the binary log.
When sql_mode is set to NO_BACKSLASH_ESCAPES, the quote character "'"
is instead represented as ''.
See http://dev.mysql.com/doc/refman/5.6/en/string-literals.html
("There are several ways to include quote characters within a
string").
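A short sketch of the two quoting styles involved:
  SET sql_mode = 'NO_BACKSLASH_ESCAPES';
  -- Backslash is now an ordinary character, so a quote can only be
  -- embedded by doubling it:
  SELECT 'Axel''s';      -- returns: Axel's
  -- SELECT 'Axel\'s';   -- would be a syntax error under this mode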
CHECK_SIMPLE_EQUALITY
PROBLEM:
Crash in "check_simple_equality" when using a subquery with "IN" and
"ALL" in prepare.
ANALYSIS:
Crash can be reproduced using a simplified query like this one:
prepare s from "select 1 from g1 where 1 < all (
select @:=(1 in (select 1 from g1)) from g1)";
This bug is currently present only in 5.5 and 5.1; it is fixed as part
of worklog #1110 in 5.6. We are taking one change from that worklog to
fix this in 5.5 and 5.1.
The problem occurs because we try to evaluate "is_null" on an argument
which is part of a subquery (in
Item_is_not_null_test::update_used_tables()). But that evaluation is
meant to happen only when no subquery is present, i.e. when
"with_subselect" is not set.
For the above query, we create an object of type "Item_in_optimizer",
which by definition is always associated with a subquery. While 5.6
sets "with_subselect" to true for "Item_in_optimizer" objects, 5.5
does not, so the evaluation of "is_null" results in a core dump.
So we now set "with_subselect" to true for "Item_in_optimizer" in 5.1
and 5.5.
Suppress innodb_bug34300 from failing if InnoDB prints:
120221 11:05:03 InnoDB: ERROR: the age of the last checkpoint is 9439048,
InnoDB: which exceeds the log group capacity 9433498.
By default the log capacity is 2 log files, 5 MB each.
RESULT FROM PREVIOUS TRANSACTION
The current Query Cache API is not fully compatible with the
partitioning engine. There is no good way to implement query cache
support for it:
1) a static callback for ha_partition would need access to all
partition names and would have to call the underlying callback for
each [sub]partition with the correct name.
2) pruning would be impossible: even if one used the ulonglong
engine_data, any change to engine_data invalidates the table in the
query cache.
So the only viable solution to avoid incorrect data is to disallow
caching of queries that use partitioned tables.
(There are some extra changes, due to removal of \r as line break)
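A hedged way to observe the new behavior (assuming the query cache is
enabled and tbl is a partitioned table):
  -- A SELECT on a partitioned table is no longer cached; the
  -- Qcache_not_cached counter grows instead of Qcache_inserts:
  SELECT COUNT(*) FROM tbl;
  SHOW GLOBAL STATUS LIKE 'Qcache_not_cached';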
This bug was originally filed and fixed as Bug#12612184. The original
fix was buggy, and it was patched by Bug#12704861. Also that patch was
buggy (potentially breaking crash recovery), and both fixes were
reverted.
This fix was not ported to the built-in InnoDB of MySQL 5.1, because
the function signatures of many core functions differ from the InnoDB
Plugin and later versions. The block allocation routines and their
callers would have to be changed so that they handle block descriptors
instead of page frames.
When a record is updated so that its size grows, non-updated columns
can be selected for external (off-page) storage. The bug is that the
initially inserted updated record contains an all-zero BLOB pointer
for the field that was not updated. Only after the BLOB pages have
been allocated and written can the valid pointer be written to the
record.
Between the release of the page latch in mtr_commit(mtr) after
btr_cur_pessimistic_update() and the re-latching of the page in
btr_pcur_restore_position(), other threads can see the invalid BLOB
pointer consisting of 20 zero bytes. Moreover, if the system crashes
at this point, the situation could persist after crash recovery, and
the contents of the non-updated column would be permanently lost.
The problem is amplified by the ROW_FORMAT=DYNAMIC and
ROW_FORMAT=COMPRESSED formats that were introduced with
innodb_file_format=barracuda in the InnoDB Plugin, but the bug exists
in all InnoDB versions.
The fix is as follows. After a pessimistic B-tree operation that needs
to write out off-page columns, allocate the pages for these columns in
the mini-transaction that performed the B-tree operation (btr_mtr),
but write the pages in a separate mini-transaction (blob_mtr). Do
mtr_commit(blob_mtr) before mtr_commit(btr_mtr). A quirk: Do not reuse
pages that were previously freed in btr_mtr. Only write the off-page
columns to 'fresh' pages.
In this way, crash recovery will see redo log entries for blob_mtr
before any redo log entry for btr_mtr. It will apply the BLOB page
writes to pages that were marked free at that point. If crash recovery
fails to see all of the btr_mtr redo log, there will be some
unreachable BLOB data in free pages, but the B-tree will be in a
consistent state.
btr_page_alloc_low(): Renamed from btr_page_alloc(). Add the parameter
init_mtr. Return an allocated block, or NULL. If init_mtr!=mtr but
the page was already X-latched in mtr, do not initialize the page.
btr_page_alloc(): Wrapper for btr_page_alloc_for_ibuf() and
btr_page_alloc_low().
btr_page_free(): Add a debug assertion that the page was a B-tree page.
btr_lift_page_up(): Return the father block.
btr_compress(), btr_cur_compress_if_useful(): Add the parameter ibool
adjust, for adjusting the cursor position.
btr_cur_pessimistic_update(): Preserve the cursor position when
big_rec will be written and the new flag BTR_KEEP_POS_FLAG is defined.
Remove a duplicate rec_get_offsets() call. Keep the X-latch on
index->lock when big_rec is needed.
btr_store_big_rec_extern_fields(): Replace update_inplace with
an operation code, and local_mtr with btr_mtr. When not doing a
fresh insert and btr_mtr has freed pages, put aside any pages that
were previously X-latched in btr_mtr, and free the pages after
writing out all data. The data must be written to 'fresh' pages,
because btr_mtr will be committed and written to the redo log after
the BLOB writes have been written to the redo log.
btr_blob_op_is_update(): Check if an operation passed to
btr_store_big_rec_extern_fields() is an update or insert-by-update.
fseg_alloc_free_page_low(), fsp_alloc_free_page(),
fseg_alloc_free_extent(), fseg_alloc_free_page_general(): Add the
parameter init_mtr. Return an allocated block, or NULL. If
init_mtr!=mtr but the page was already X-latched in mtr, do not
initialize the page.
xdes_get_descriptor_with_space_hdr(): Assert that the file space
header is being X-latched.
fsp_alloc_from_free_frag(): Refactored from fsp_alloc_free_page().
fsp_page_create(): New function, for allocating, X-latching and
potentially initializing a page. If init_mtr!=mtr but the page was
already X-latched in mtr, do not initialize the page.
fsp_free_page(): Add ut_ad(0) to the error outcomes.
fsp_free_page(), fseg_free_page_low(): Increment mtr->n_freed_pages.
fsp_alloc_seg_inode_page(), fseg_create_general(): Assert that the
page was not previously X-latched in the mini-transaction. A file
segment or inode page should never be allocated in the middle of a
mini-transaction that frees pages, such as btr_cur_pessimistic_delete().
fseg_alloc_free_page_low(): If the hinted page was allocated, skip the
check if the tablespace should be extended. Return NULL instead of
FIL_NULL on failure. Remove the flag frag_page_allocated. Instead,
return directly, because the page would already have been initialized.
fseg_find_free_frag_page_slot() would return ULINT_UNDEFINED on error,
not FIL_NULL. Correct a bogus assertion.
fseg_alloc_free_page(): Redefine as a wrapper macro around
fseg_alloc_free_page_general().
buf_block_buf_fix_inc(): Move the definition from buf0buf.ic to
buf0buf.h, so that it can be called from other modules.
mtr_t: Add n_freed_pages (number of pages that have been freed).
page_rec_get_nth_const(), page_rec_get_nth(): The inverse function of
page_rec_get_n_recs_before(), get the nth record of the record
list. This is faster than iterating the linked list. Refactored from
page_get_middle_rec().
trx_undo_rec_copy(): Add a debug assertion for the length.
trx_undo_add_page(): Return a block descriptor or NULL instead of a
page number or FIL_NULL.
trx_undo_report_row_operation(): Add debug assertions.
trx_sys_create_doublewrite_buf(): Assert that each page was not
previously X-latched.
page_cur_insert_rec_zip_reorg(): Make use of page_rec_get_nth().
row_ins_clust_index_entry_by_modify(): Pass BTR_KEEP_POS_FLAG, so that
the repositioning of the cursor can be avoided.
row_ins_index_entry_low(): Add DEBUG_SYNC points before and after
writing off-page columns. If inserting by updating a delete-marked
record, do not reposition the cursor or commit the mini-transaction
before writing the off-page columns.
row_build(): Tighten a debug assertion about null BLOB pointers.
row_upd_clust_rec(): Add DEBUG_SYNC points before and after writing
off-page columns. Do not reposition the cursor or commit the
mini-transaction before writing the off-page columns.
rb:939 approved by Jimmy Yang