It turns out that the DBUG_ASSERT added by the fix for Bug#11792200 was
overly pessimistic: 'stop0' is used in the main loop of do_div_mod, but
we only dereference 'buf0' for div operations, not for mod.
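A minimal, hypothetical sketch of the pattern (not the actual decimal.c
code): assert on the output pointer only in the branch that dereferences
it, rather than unconditionally before the loop.

  #include <assert.h>
  #include <stddef.h>

  /* Illustrative stand-in for the do_div_mod loop: 'buf0' receives
     quotient digits for DIV only, so the assertion belongs next to that
     dereference. */
  static void div_mod_loop(int *buf0, const int *start, const int *stop0,
                           int is_div /* nonzero for DIV, zero for MOD */)
  {
    const int *p;
    for (p = start; p < stop0; p++) {
      if (is_div) {
        assert(buf0 != NULL);   /* only the DIV path writes through buf0 */
        *buf0++ = *p;           /* placeholder for the real quotient digit */
      }
      /* The MOD path only updates the running remainder and never
         touches buf0, so no assertion is needed here. */
    }
  }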
mysql-test/r/func_math.result:
New test case.
mysql-test/t/func_math.test:
New test case.
strings/decimal.c:
Move DBUG_ASSERT down to where we actually dereference the loop pointer.
HA_ERR was returning 0 (a null string) when no error had happened
(error=0). Since HA_ERR is used in DBUG_PRINT regardless of whether
an error occurred, the server could crash in Solaris debug builds.
We fix this by:
- deploying an assertion that ensures that the function
is not called when no error has happened;
- making sure that HA_ERR is only called when an error
happened;
- making HA_ERR return "No Error", instead of 0, for
non-debug builds if it is called when no error happened.
This makes HA_ERR return values work with DBUG_PRINT on Solaris
debug builds.
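A hedged sketch of the behaviour described above (a hypothetical
helper, not the server's exact HA_ERR implementation): never hand
DBUG_PRINT's "%s" a NULL pointer, and catch misuse early in debug
builds.

  #include <assert.h>

  enum { EXAMPLE_HA_ERR = 120 };   /* stand-in for a real handler error code */

  static const char *ha_error_string(int error)
  {
    assert(error != 0);            /* debug builds: must not be called when
                                      no error has happened */
    switch (error) {
      case EXAMPLE_HA_ERR:
        return "example handler error";
      case 0:
        return "No Error";         /* non-debug builds: keeps DBUG_PRINT from
                                      formatting a NULL string */
      default:
        return "Unknown handler error";
    }
  }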
The server crashes if it processes table map events that are
corrupted, especially if they map different tables to the same
identifier. This could happen, for instance, due to BUG 56226.
We fix this by checking whether the table id in the table map event
has already been mapped before actually applying the event. If it has
been mapped with different settings, an error is raised and the slave
SQL thread stops. If it has been mapped with the same settings, the
event is skipped. If the table is set to be ignored by the filtering
rules, there is no change in behavior: the event is skipped and
ids are not checked.
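A simplified, hypothetical sketch of that check (the names and types
below are illustrative stand-ins, not the server's actual replication
classes):

  #include <string.h>

  enum apply_result { APPLY_EVENT, SKIP_EVENT, STOP_SQL_THREAD };

  struct table_map { unsigned long table_id; const char *db; const char *name; };

  /* existing: what this table id is currently mapped to (NULL if unmapped). */
  static enum apply_result check_table_map(const struct table_map *existing,
                                           const struct table_map *incoming,
                                           int filtered_out)
  {
    if (filtered_out)
      return SKIP_EVENT;          /* ignored tables: ids are not checked */

    if (existing == NULL)
      return APPLY_EVENT;         /* first mapping for this table id */

    if (strcmp(existing->db, incoming->db) == 0 &&
        strcmp(existing->name, incoming->name) == 0)
      return SKIP_EVENT;          /* same id, same settings: skip the event */

    return STOP_SQL_THREAD;       /* same id, different table: corrupted
                                     table map, raise an error and stop */
  }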
mysql-test/suite/rpl/t/rpl_row_corruption.test:
Added a simple test case that checks both cases:
- multiple table maps with the same identifier
- multiple table maps with the same identifier, but only one
is processed (the others are filtered out)
CLIENT TOOLS
The fix is to backport part of revision:
- alexander.nozdrin@oracle.com-20101006150613-ls60rb2tq5dpyb5c
from mysql-5.5. In detail, we add the Oracle welcome notice
header file proposed in the original patch and include/use it
in client/mysqlbinlog.cc, replacing the existing and obsolete
notice.
Bug#12637786 was fixed with rb:692 by Marko, but that fix has a remaining
bug. It added this assert:
ut_ad(ind_field->prefix_len);
before a section of code that assumes there is a prefix_len.
The patch replaced code that explicitly avoided this with a check for
prefix_len. It turns out that the purge thread can get to that assert
without a prefix_len because it does not use a row_ext_t* .
When UNIV_DEBUG is not defined, the effect of this is that the purge thread
sets the dfield->len to zero and then cannot find the entry in the index to
purge. So secondary index entries remain unpurged.
This patch removes the assert. Instead, it uses
'if (ind_field->prefix_len) {...}'
around the section of code that assumes a prefix_len. This is how the
patch I originally provided to Marko did it.
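A hypothetical, self-contained sketch of the control-flow change (the
struct and helper below are illustrative stand-ins for the InnoDB
types, not the committed row0row.c code):

  struct index_field { unsigned long prefix_len; };
  struct data_field  { unsigned long len; };

  /* Stand-in for the multi-byte-aware cropping done by the real code. */
  static unsigned long crop_to_prefix(unsigned long len, unsigned long prefix_len)
  {
    return len < prefix_len ? len : prefix_len;
  }

  static void build_entry_field(const struct index_field *ind_field,
                                struct data_field *dfield)
  {
    if (ind_field->prefix_len) {
      /* Only a prefix index needs cropping; previously this block was
         guarded by an unconditional ut_ad(ind_field->prefix_len). */
      dfield->len = crop_to_prefix(dfield->len, ind_field->prefix_len);
    }
    /* Purge-thread path (no row_ext_t*): prefix_len == 0, so dfield->len
       is left intact instead of being set to zero. */
  }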
The test case is simply modified to do a sleep(10) in order to give the
purge thread a chance to run. Without the code change to row0row.c, this
modified test case will assert if InnoDB was compiled with UNIV_DEBUG.
I tried sleep(5), but it did not always assert.
GCC 4.6 has a new -Wunused-but-set-variable flag, enabled by -Wall,
which causes GCC to emit a warning whenever a local variable is
assigned to but otherwise unused (aside from its declaration).
Since the maintainer mode uses -Wall and -Werror, source code which
triggers these warnings will be rejected. That is, these warnings
become hard errors.
The solution is to fix the code which triggers these specific warnings.
In most of the cases, this is a welcome cleanup as code which triggers
this warning is probably dead anyway.
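An illustrative example (not code from the server) of the pattern GCC
4.6 rejects under -Wall -Werror, together with the void-expression fix
mentioned below for sql/mysqld.cc and storage/perfschema/pfs.cc:

  extern int do_something(void);

  int example_before(void)
  {
    int rc;
    rc = do_something();    /* warning: variable 'rc' set but not used */
    return 0;
  }

  int example_after(void)
  {
    (void) do_something();  /* void expression: result intentionally ignored */
    return 0;
  }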
dbug/dbug.c:
Unused but set.
libmysqld/lib_sql.cc:
Length is not necessary as the converted error message is always
null-terminated.
sql/item_func.cc:
Make get_var_with_binlog private to this compilation unit.
If an error was raised, do not attempt to evaluate the user
variable as the statement execution will be interrupted
anyway.
sql/mysqld.cc:
Use a void expression to silence the warning. Avoids the use of
macros that would make the code more unreadable than it already is.
sql/protocol.cc:
Length is not necessary as the converted error message is always
null-terminated. Remove unnecessary casts and assignment.
sql/sql_class.h:
Function is only used in a single compilation unit.
sql/sql_load.cc:
Only use the variable outside of EMBEDDED_LIBRARY.
storage/innobase/btr/btr0cur.c:
Do not retrieve field, only the record length is being used.
storage/perfschema/pfs.cc:
Use a void expression to silence the warning.
tests/mysql_client_test.c:
Unused but set.
unittest/mysys/lf-t.c:
Unused but set.
OLD VALUE OF INPUT PARAMETER.
The user-visible problem was that the CASE control-flow function
(not the CASE statement) misbehaved in stored routines under some
circumstances. The problem resulted in a crash or wrong data being
returned. The error happened when the expressions in the CASE
function were not of the same character set.
A CASE function should return values of the same character set
for all branches. Internally, that means a new Item-instance
for the CONVERT(... USING <some charset>)-function is added
to the item tree when needed. The problem was that such changes
were not properly recorded using THD::change_item_tree(),
thus dangling pointers remained in the item tree after
THD::rollback_item_tree_changes(), which led to undefined
behavior (i.e. crash / wrong data) for subsequent executions of
the stored routine.
This bug was introduced by a patch for Bug 11753363
(44793 - CHARACTER SETS: CASE CLAUSE, UCS2 OR UTF32, FAILURE).
The fixed function is Item_func_case::fix_length_and_dec().
New CONVERT-items are added in agg_item_set_converter(),
which calls THD::change_item_tree().
The problem was that an intermediate array was passed
to agg_item_set_converter(). Thus, THD::change_item_tree() there
was called on intermediate objects.
Note: those intermediate objects are allocated on THD's
memory root, so it's OK to put them into "changed item lists".
The fix is to track changes on the correct objects.
This is the 5.1 version of the fix.
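A hypothetical sketch of the pattern behind the fix (stand-in C types;
THD::change_item_tree() and rollback_item_tree_changes() are the real
server calls referenced above): record the original pointer of the slot
being changed so it can be restored before the next execution, and make
sure the recorded slot is the one in the real item tree, not a copy in
an intermediate array.

  #include <stddef.h>

  struct item;                              /* stand-in for Item */
  struct change_record {
    struct item **place;                    /* slot in the real item tree */
    struct item  *old_value;                /* what to restore on rollback */
    struct change_record *next;
  };
  struct session { struct change_record *changes; };   /* stand-in for THD */

  /* Stand-in for THD::change_item_tree(): remember the old pointer before
     splicing in the new item (e.g. a CONVERT(... USING ...) wrapper). */
  static void change_item_tree(struct session *thd, struct change_record *rec,
                               struct item **place, struct item *new_value)
  {
    rec->place = place;
    rec->old_value = *place;
    rec->next = thd->changes;
    thd->changes = rec;
    *place = new_value;
  }

  /* The bug: calling this on slots of a temporary array records the copies,
     so the real tree is never rolled back and keeps dangling pointers. */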
Need to free the memory allocated by the option parsing code for empty
strings when resetting the pointer to NULL.
No test case needed, as the existing ones already cover this path.
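A tiny illustrative sketch of the fix (free() stands in here for the
server's own deallocator; the pointer name is hypothetical):

  #include <stdlib.h>

  static void reset_option_string(char **opt)
  {
    free(*opt);     /* previously leaked: empty strings are also allocated
                       by the option parsing code */
    *opt = NULL;
  }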
row_build_index_entry(): In innodb_file_format=Barracuda
(ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED), a secondary index on a
full column can refer to a field that is stored off-page in the
clustered index record. Take that into account.
rb:692 approved by Jimmy Yang
Refactor the !rec_offs_any_extern relaxation in row_build().
trx_assert_active(trx_id): Assert that the given transaction is active.
(In the 5.1 built-in InnoDB, there is no trx->is_recovered field.)
trx_assert_recovered(trx_id): Assert that the given transaction is
active and has been recovered after a crash.
row_build(): Replace a bunch of code with an assertion that invokes
trx_assert_active() or trx_assert_recovered() and row_get_rec_trx_id().
row_get_trx_id_offset(): Make the function inlined. Remove the unused
parameter rec, and make all parameters const.
row_get_rec_trx_id(), row_get_rec_roll_ptr(): Make all parameters const.
rb:691 approved by Jimmy Yang
page_zip_dir_elems(): New function, refactored from page_zip_dir_size().
page_zip_dir_size(): Use page_zip_dir_elems().
page_zip_dir_start_offs(): New function: gets an offset to the
compressed page trailer (the dense page directory), including deleted
records (the free list).
page_zip_dir_start_low(page_zip, n_dense): Constness-preserving
wrapper macro for page_zip_dir_start_offs().
page_zip_dir_start(page_zip): Constness-preserving
wrapper macro for page_zip_dir_start_offs().
page_zip_decompress_node_ptrs(), page_zip_decompress_clust(): Replace
a formula with a fully equivalent page_zip_dir_start_low() call.
page_zip_write_rec(), page_zip_parse_write_node_ptr(),
page_zip_write_node_ptr(), page_zip_write_trx_id_and_roll_ptr(),
page_zip_clear_rec(): Replace a formula with an almost equivalent
page_zip_dir_start() call.
It is OK to replace page_dir_get_n_heap(page) with
page_dir_get_n_heap(page_zip->data), because
ut_ad(page_zip_header_cmp(page_zip, page)) or
page_zip_validate(page_zip, page) asserts that the
page headers are identical.
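A hedged, self-contained sketch of the refactoring (the types, slot
size, and page_zip_get_size() below are simplified stand-ins for the
real page0zip definitions; only the helper-plus-macro pattern is being
illustrated):

  typedef unsigned long ulint;
  typedef unsigned char byte;

  #define PAGE_ZIP_DIR_SLOT_SIZE  2       /* bytes per dense directory slot */

  typedef struct {
    byte* data;                           /* compressed page frame */
    ulint ssize;                          /* compressed page size code */
  } page_zip_des_t;

  /* Stand-in for page_zip_get_size(): total compressed page size in bytes. */
  #define page_zip_get_size(page_zip)     ((ulint) 512 << (page_zip)->ssize)

  /* page_zip_dir_start_offs(): offset of the dense page directory (which
     includes the free list) from the start of the compressed page. */
  static ulint
  page_zip_dir_start_offs(const page_zip_des_t* page_zip, ulint n_dense)
  {
    return(page_zip_get_size(page_zip) - n_dense * PAGE_ZIP_DIR_SLOT_SIZE);
  }

  /* page_zip_dir_start_low(): wrapper macro; the helper returns only an
     offset, so forming the pointer from page_zip->data needs no const
     casts. */
  #define page_zip_dir_start_low(page_zip, n_dense) \
          ((page_zip)->data + page_zip_dir_start_offs(page_zip, n_dense))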
rb:687 approved by Jimmy Yang
BOGUS "THE TABLE MYSQL.PROC IS MISSING,..."
There was a race condition between loading a stored routine
(function/procedure/trigger) specified by fully qualified name
SCHEMA_NAME.PROC_NAME and dropping the stored routine database.
The problem was that there is a window for a race condition when one
server thread tries to load a stored routine being executed and another
thread tries to drop the stored routine's schema.
This race condition window exists in the implementation of
mysql_change_db(), which is called by db_load_routine() while loading a
stored routine into the cache. mysql_change_db() calls
check_db_dir_existence(), which might fail because the specified
database was dropped by a concurrently executing DROP SCHEMA statement.
db_load_routine() calls mysql_change_db() with the flag 'force_switch'
set to 'true', so when the referenced db is not found, my_error() is not
called and mysql_change_db() returns OK.
This hides the schema opening error from db_load_routine().
db_load_routine() then attempts to parse the stored routine, which fails.
This returns an error to sp_cache_routines_and_add_tables_aux(), but
since my_error() was never called during error generation and hence
THD::main_da was never set, the generic "mysql.proc table corrupt" error
is reported by sp_cache_routines_and_add_tables_aux().
The fix is to install an error handler inside db_load_routine() around
the mysql_change_db() call, and to check later whether ER_BAD_DB_ERROR
was caught.
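A hypothetical, simplified sketch of that pattern (stand-in C types;
the server actually uses an Internal_error_handler installed around the
database-switch call inside db_load_routine()):

  #include <stdbool.h>

  enum { ER_BAD_DB_ERROR = 1049 };      /* "Unknown database" */

  struct no_such_db_handler {
    bool error_caught;                  /* set when ER_BAD_DB_ERROR is raised */
  };

  /* Called for every error raised while the handler is installed; returns
     true when the error has been consumed. */
  static bool handle_condition(struct no_such_db_handler *h, int sql_errno)
  {
    if (sql_errno == ER_BAD_DB_ERROR) {
      h->error_caught = true;
      return true;                      /* swallow it; the caller reports a
                                           precise "no such database" error */
    }
    return false;                       /* let other errors through */
  }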
sql/sql_db.cc:
Added synchronization point "before_db_dir_check" to emulate a race condition during
processing of CALL/DROP SCHEMA.
approved by: Marko
rb://681
Coalescing of free buf_page_t descriptors can prove to be a severe
bottleneck in compression performance. One such workload where it
hurts badly is DROP TABLE. This patch removes buf_page_t allocations
from buf_buddy and uses ut_malloc instead.
In order to further reduce the overhead of coalescing, we no longer
attempt to coalesce a block if the corresponding free list has fewer
than 16 entries.
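An illustrative sketch of that threshold check (stand-in types and a
hypothetical constant name; the real logic lives in buf0buddy.c):

  typedef unsigned long ulint;

  #define BUDDY_MIN_COALESCE_LEN  16    /* threshold taken from the text above */

  struct zip_free_list { ulint len; };  /* blocks currently free in this
                                           size class */

  /* Only search for the freed block's buddy when the free list for this
     size class is long enough to make the scan worth the mutex hold time. */
  static int buf_buddy_should_coalesce(const struct zip_free_list *list)
  {
    return list->len >= BUDDY_MIN_COALESCE_LEN;
  }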
TO POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
ALTER TABLE MODIFY/CHANGE ... FIRST did nothing except renaming
columns if the new version of the table had exactly the same
structure as the old one (i.e. as a result of such a statement, the
column names changed their order as specified but the data in the
columns did not). The same thing happened for ALTER TABLE DROP
COLUMN/ADD COLUMN statements which were supposed to produce a new
version of the table with exactly the same structure as the old one.
I.e. in the latter case the result was the same as if the old column
had been renamed instead of being dropped and a new column with the
default value had been created.
Both these problems were caused by the fact that the ALTER TABLE
implementation incorrectly interpreted both these situations as
simple renaming of columns and assumed that the in-place ALTER TABLE
algorithm could be used for them.
This patch fixes this problem by ensuring that in cases when some
column is moved to the first position or some column is dropped
the default ALTER TABLE algorithm involving table copying is
always used. This is achieved by detecting such situations in
mysql_prepare_alter_table() and setting Alter_info::change_level
to ALTER_TABLE_DATA_CHANGED for them.
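A simplified, hypothetical sketch of the detection (stand-in types;
the actual change is inside mysql_prepare_alter_table()):

  enum alter_change_level {
    ALTER_TABLE_METADATA_ONLY,          /* in-place: only metadata changes */
    ALTER_TABLE_DATA_CHANGED            /* copy the table */
  };

  struct alter_ctx {
    int column_dropped;                 /* some old column has no counterpart */
    int column_moved_first;             /* CHANGE/MODIFY ... FIRST was used */
    enum alter_change_level change_level;
  };

  static void pick_alter_algorithm(struct alter_ctx *ctx)
  {
    if (ctx->column_dropped || ctx->column_moved_first) {
      /* Force the copying algorithm: the in-place path would only rename
         columns and leave the data in its old order. */
      ctx->change_level = ALTER_TABLE_DATA_CHANGED;
    }
  }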
mysql-test/r/alter_table.result:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
mysql-test/t/alter_table.test:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
sql/sql_table.cc:
Changed mysql_prepare_alter_table() to detect situations in which
some column is moved to the first position or some column is
dropped, and to ensure that such ALTER TABLE statements won't be
carried out using the in-place algorithm. The latter could happen
before this patch if the new version of the table had the same
structure as the old one (except for the column names).