Fix assorted warnings that are generated in optimized builds.
Most of the changes silence warnings about variables that are set but never used.
This patch also introduces the MY_ASSERT_UNREACHABLE macro
which helps the compiler to deduce that a certain piece of
code is unreachable.
include/my_compiler.h:
Use GCC's __builtin_unreachable if available. It allows
GCC to deduce that certain code paths are unreachable,
thus avoiding warnings which, for example, claim that a
variable could be used without being initialized (due to
unreachable code paths).
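As an illustration only (not the actual definition in include/my_compiler.h; the GCC version check and the abort() fallback are assumptions), such a macro could look roughly like this:

  #include <cstdlib>

  /* Tell the optimizer that a code path cannot be reached (GCC 4.5+);
     otherwise fall back to aborting at runtime. */
  #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
  #  define MY_ASSERT_UNREACHABLE()  __builtin_unreachable()
  #else
  #  define MY_ASSERT_UNREACHABLE()  abort()
  #endif

  /* Example: without the macro, optimized builds may warn that 'len'
     could be used uninitialized even though all cases are covered. */
  static int encoded_length(int type)
  {
    int len;
    switch (type)
    {
    case 0: len= 1; break;
    case 1: len= 4; break;
    default: MY_ASSERT_UNREACHABLE();
    }
    return len;
  }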
Fix assorted compiler warnings.
include/my_pthread.h:
Like for pthread_cond_timedwait, the abstime is constant.
mysys/my_gethwaddr.c:
Instead of using a manual copy that introduces warnings due to
a type mismatch, copy the buffer using memcpy and use memcmp to
check whether all bytes of the buffer are zeroed (a small sketch
follows this file list).
mysys/thr_mutex.c:
Like for pthread_cond_timedwait, the abstime is constant.
unittest/mytap/tap.h:
Introduce an ok() variant that does not take a format argument.
Since ok() is tagged with a printf attribute, GCC complains if
the fmt argument is NULL.
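A minimal sketch of the my_gethwaddr.c pattern described above; copy_hwaddr() and the fixed six-byte length are illustrative stand-ins, not the actual function:

  #include <cstring>

  /* Copy the hardware address with memcpy (no per-byte loop, no type
     mismatch) and use memcmp to reject an all-zero address. */
  static bool copy_hwaddr(unsigned char *to, const void *hwaddr)
  {
    static const unsigned char zero_mac[6]= {0, 0, 0, 0, 0, 0};
    memcpy(to, hwaddr, sizeof(zero_mac));
    return memcmp(to, zero_mac, sizeof(zero_mac)) != 0;
  }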
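For the tap.h change, a sketch of the idea; ok1() is a hypothetical name for the no-format variant:

  /* The formatted variant carries a printf attribute, so GCC warns if
     the fmt argument is NULL. */
  void ok(int pass, const char *fmt, ...)
    __attribute__((format(printf, 2, 3)));

  /* Hypothetical variant without a format argument: callers that have
     no message use ok1(pass) instead of ok(pass, NULL). */
  void ok1(int pass);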
Ensure that fdatasync is properly declared, as on Mac OS X the
function is available but there is no prototype. Also, port a
fix for a warning from the InnoDB plugin over to the builtin.
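Assuming the configure check mentioned below defines HAVE_DECL_FDATASYNC in the usual autoconf style, the resulting guard could look roughly like this (a simplification of my_sync()):

  #include <unistd.h>

  /* Prefer fdatasync() only when it is both available and declared;
     otherwise fall back to fsync(). */
  static int sync_fd_sketch(int fd)
  {
  #if defined(HAVE_FDATASYNC) && defined(HAVE_DECL_FDATASYNC) && HAVE_DECL_FDATASYNC
    return fdatasync(fd);
  #else
    return fsync(fd);
  #endif
  }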
configure.in:
Check that fdatasync is declared.
mysys/my_sync.c:
Use fdatasync only if it is declared.
storage/innobase/include/ut0dbg.h:
Port over from the plugin a fix for a warning.
Fix assorted compiler warnings on Mac OS X.
BUILD/SETUP.sh:
Remove the -Wctor-dtor-privacy flag to work around a GCC bug that
causes it to not properly detect that implicitly generated
constructors are always public.
cmd-line-utils/readline/terminal.c:
tgetnum and tgetflag might not take a const string argument.
mysys/my_gethostbyname.c:
Tag unused arguments.
mysys/my_sync.c:
Tag unused arguments.
data dictionary confusion
On file systems with case insensitive file names, and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on MacOSX, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e. a lower case version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
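The essence of the fix as a standalone sketch; the key layout and the helper below are simplifications, not the server's actual code:

  #include <cctype>
  #include <string>

  /* Build the table definition cache key from the normalized (lower-cased)
     name so that insertion, lookup and removal all agree, regardless of the
     case used by the client. */
  static std::string table_def_key(std::string db, std::string table,
                                   int lower_case_table_names)
  {
    if (lower_case_table_names)          /* 1 or 2: normalize for the key */
    {
      for (std::string::size_type i= 0; i < db.size(); i++)
        db[i]= (char) tolower((unsigned char) db[i]);
      for (std::string::size_type i= 0; i < table.size(); i++)
        table[i]= (char) tolower((unsigned char) table[i]);
    }
    std::string key= db;                 /* conventional "db\0table\0" layout */
    key+= '\0';
    key+= table;
    key+= '\0';
    return key;
  }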
sql/sql_db.cc:
Normalize the table name (i.e. lower case it)
sql/sql_table.cc:
table_name contains the normalized name
alias contains the real table name
(variables_debug fails)
The problem was that "SET GLOBAL debug" could cause a crash on Solaris.
The crash happened if the server failed to open the trace file given in
the "SET GLOBAL debug" statement. This caused an error message to be
printed to stderr containing the process name. However, printing to
stderr crashed the server since the pointer to the process name had
not been initialized.
This patch fixes the problem by initializing the process name
properly when doing "SET GLOBAL debug".
No test case added as this bug was repeatable with existing test
coverage in variables_debug.test.
Enable the MySQL maintainer-specific development environment
(which adds various warning-related options to the compiler
flags) if debugging support is enabled.
config/ac-macros/maintainer.m4:
Enable the maintainer mode if debug support is enabled.
configure.in:
Move debug argument to before the maintainer mode check.
For crash testing: kill the server without generating a core file.
include/my_dbug.h:
Use kill(getpid(), SIGKILL), which cannot be caught by signal handlers.
All DBUG_XXX macros should be no-ops in optimized mode; do that for
DBUG_ABORT as well (see the sketch after this file list).
sql/handler.cc:
Kill the server without generating a core file.
sql/log.cc:
Kill the server without generating a core file.
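A hedged sketch of the my_dbug.h idea; the macro name and exact expansion here are illustrative, not the actual ones:

  #include <signal.h>
  #include <unistd.h>

  /* Debug builds: terminate with SIGKILL, which cannot be caught by the
     server's signal handlers and does not produce a core file.
     Optimized builds: a no-op, like every other DBUG_XXX macro. */
  #ifndef DBUG_OFF
  #  define DBUG_CRASH_NO_CORE() kill(getpid(), SIGKILL)
  #else
  #  define DBUG_CRASH_NO_CORE() do {} while (0)
  #endif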
replication aborts
When receiving a 'SLAVE STOP' command, the slave SQL thread would roll back the
transaction and stop immediately if only transactional tables had been updated,
even though 'CREATE|DROP TEMPORARY TABLE' statements were part of it. These
statements can never be rolled back. Because the mapping of temporary tables to
the user session remains until 'RESET SLAVE', the SQL thread would abort with an
error that the table already exists or does not exist when it restarts and
executes the whole transaction again.
After this patch, the SQL thread always waits until the transaction ends before
stopping if 'CREATE|DROP TEMPORARY TABLE' statements are part of it.
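A standalone model of the check; the THD struct and the bit values of OPTION_BEGIN/OPTION_KEEP_LOG below are stand-ins for the real server definitions:

  typedef unsigned long long option_bits_t;

  static const option_bits_t OPTION_BEGIN=    1ULL << 20; /* open transaction */
  static const option_bits_t OPTION_KEEP_LOG= 1ULL << 21; /* temp table created/dropped */

  struct THD { option_bits_t options; };

  /* The SQL thread may honour a stop request only if no non-rollbackable
     work (CREATE/DROP TEMPORARY TABLE) is pending inside an open
     transaction; otherwise it keeps executing until the transaction ends. */
  static bool sql_thread_may_stop_now(const THD *thd)
  {
    return !((thd->options & OPTION_BEGIN) &&
             (thd->options & OPTION_KEEP_LOG));
  }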
mysql-test/extra/rpl_tests/rpl_stop_slave.test:
Auxiliary file which is used to test this bug.
mysql-test/suite/rpl/t/rpl_stop_slave.test:
Test case for this bug.
sql/slave.cc:
Check whether OPTION_KEEP_LOG is set. If it is, the SQL thread should wait
until the transaction ends.
sql/sql_parse.cc:
Add a debug point for testing this bug.
mysql-test/r/grant.result:
Added result for the test case for bug#36742.
mysql-test/t/grant.test:
Added test case for bug#36742.
sql/sql_yacc.yy:
Added conversion of the host name part of the user name to lowercase.
After an ALTER TABLE which changed only the table's metadata, the row-based
binlog sometimes got corrupted since the table map was unexpectedly
set to 0 for subsequent updates to the same table.
ALTER TABLE which changed only the table's metadata always reset
table_map_id for the table share to 0. Despite the fact that
0 is a valid value for table_map_id, this step caused problems
as it could create a situation in which we had more than
one table share with table_map_id equal to 0. If more than one
table with table_map_id equal to 0 was updated in the same statement,
updates to these different tables were written into the same
rows event. This caused the slave server to crash.
This bug happens only on 5.1. It doesn't affect 5.5+.
This patch solves the problem by ensuring that ALTER TABLE
statements which change metadata only never reset table_map_id
to 0. To do this it changes reopen_table() to correctly use the
refreshed table_map_id value instead of using the old one or
resetting it.
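A rough model of the reopen_table() change; the structs and field names below are simplified stand-ins, not the 5.1 definitions:

  typedef unsigned long long table_map_id_t;

  struct TABLE_SHARE { table_map_id_t table_map_id; };
  struct TABLE       { TABLE_SHARE *s; table_map_id_t used_table_map_id; };

  /* After refreshing the share, adopt its (re)assigned table_map_id.
     Never reset it to 0: several shares with id 0 would make row events
     for different tables end up under the same table map. */
  static void reopen_table_sketch(TABLE *table, TABLE_SHARE *refreshed_share)
  {
    table->s= refreshed_share;
    table->used_table_map_id= refreshed_share->table_map_id;
  }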
mysql-test/suite/rpl/r/rpl_alter.result:
Add test for BUG#56226
mysql-test/suite/rpl/t/rpl_alter.test:
Add test for BUG#56226
When a slave executes a transaction bigger than the slave's max_binlog_cache_size,
the slave will crash. It is caused by the assert that the server should only roll
back the statement, but not the whole transaction, if the error ER_TRANS_CACHE_FULL
happens. However, the slave SQL thread always rolls back the whole transaction when
an error happens.
After this patch, any error set in the SQL thread (which is different from the
error shown in 'SHOW SLAVE STATUS') is always cleared before rolling back
the transaction.
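The cleanup order, sketched with stand-in types; the real code is in exec_relay_log_event()/cleanup_context() in sql/slave.cc (see the file notes below):

  /* Stand-ins for the server's THD and end_trans(); illustrative only. */
  struct THD {
    int stmt_error;                       /* e.g. ER_TRANS_CACHE_FULL */
    void clear_error() { stmt_error= 0; }
  };

  enum rollback_t { ROLLBACK };
  static void end_trans(THD *, rollback_t) { /* rolls the transaction back */ }

  /* Clear the statement error first, then roll back the whole transaction,
     so the "only the statement should be rolled back" assertion cannot
     fire in the slave SQL thread. */
  static void cleanup_context_sketch(THD *thd)
  {
    thd->clear_error();
    end_trans(thd, ROLLBACK);
  }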
mysql-test/suite/rpl/r/rpl_binlog_max_cache_size.result:
SET binlog_cache_size and max_binlog_cache_size for all test cases.
Add test case for bug#55375.
mysql-test/suite/rpl/t/rpl_binlog_max_cache_size-master.opt:
binlog_cache_size and max_binlog_cache_size can be set in the client connection,
so this option file is removed.
mysql-test/suite/rpl/t/rpl_binlog_max_cache_size.test:
SET binlog_cache_size and max_binlog_cache_size for all test cases.
Add test case for bug#55375.
sql/log_event.cc:
Some functions do not return the error code, so a wrong error code could
be reported. The error is always set in thd->main_da, so we use
slave_rows_error_report to report the right error.
sql/slave.cc:
exec_relay_log_event() needs to call cleanup_context() to clear the context.
cleanup_context() will call end_trans().
Clear the thd's error before cleanup_context(); this avoids triggering the
assert that causes this bug.
This is a regression from the fix for bug no 38999. A storage engine capable
of reading only a subset of a table's columns updates corresponding bits in
the read buffer to signal that it has read NULL values for the corresponding
columns. It cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements compared the NULL bits
using memcmp, inadvertently comparing bits that were never requested from the
storage engine. The regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even those that it had no
knowledge of. This has devastating effects for the index merge algorithm,
which relies on all NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and InnoDB plugin and
changes the server's method of comparing records. For engines that always read
entire rows, we proceed as usual. For engines capable of reading only select
columns, the record buffers are now compared on a column by column basis. An
assertion was also added so that non-comparable buffers are never read. Some
relevant copy-pasted code was also consolidated in a new function.
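A simplified illustration of the column-by-column comparison; Field here is a stand-in, not the server's Field class:

  #include <vector>

  struct Field
  {
    bool in_read_set;             /* was this column requested from the engine? */
    bool is_null_old, is_null_new;
    long value_old, value_new;    /* simplified column value */

    bool changed() const
    {
      if (is_null_old != is_null_new)
        return true;                               /* NULL bit flipped */
      return !is_null_old && value_old != value_new;
    }
  };

  /* Compare only the columns that were actually read; NULL bits of columns
     the engine never touched (which index merge relies on) are not
     inspected at all. */
  static bool record_changed(const std::vector<Field> &fields)
  {
    for (std::vector<Field>::const_iterator it= fields.begin();
         it != fields.end(); ++it)
      if (it->in_read_set && it->changed())
        return true;
    return false;
  }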
Surprisingly, a Slave_log_event would show up in the binary
log. This event is never used and should not appear in the
logs. As such, when the slave (or the mysqlbinlog tool) reads the
event, it will hit an invalid pointer (the reference to the
descriptor event used when deserializing the Slave_log_event was
purposely set to NULL).
The presence of the Slave_log_event denotes a corrupted log, but
we cannot tell how the log got corrupted in the first
place. However, we can make the server cope with such events when
it reads them - in case of log corruption - and fail gracefully.
This patch makes the server/mysqlbinlog report that it has
found an invalid log event when a Slave_log_event is read.
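A standalone model of the defensive check; the reporting below is simplified and the enum is trimmed to the two values needed:

  #include <cstdio>

  enum Log_event_type { QUERY_EVENT= 2, SLAVE_EVENT= 7 /* ... */ };

  /* SLAVE_EVENT is never generated by supported servers; treat it as a sign
     of a corrupted log and fail gracefully instead of following the NULL
     description pointer the deserializer would need. */
  static bool event_type_is_valid(Log_event_type type)
  {
    if (type == SLAVE_EVENT)
    {
      fprintf(stderr,
              "Found invalid event (Slave_log_event) in the binary log; "
              "the log is probably corrupted.\n");
      return false;
    }
    return true;
  }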
LOAD DATA into partitioned MyISAM table
The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto increment and repair).
Solved by adding a specific mutex for the partitioning
auto_increment.
Also added destruction of the ha_data structure in
free_table_share (which is to be propagated
into 5.5).
This is a 5.1 ONLY patch, already fixed in 5.5+.
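A rough sketch of the dedicated mutex; the struct and field names are illustrative, not the actual shared ha_data used by ha_partition:

  #include <pthread.h>

  /* Per-table shared data for the partitioning engine: the auto_increment
     state gets its own lock instead of reusing table_share->mutex, which
     MyISAM also takes for repair. */
  struct HA_DATA_PARTITION_SKETCH
  {
    unsigned long long next_auto_inc_val;
    pthread_mutex_t    LOCK_auto_inc;
  };

  static unsigned long long reserve_auto_inc(HA_DATA_PARTITION_SKETCH *data)
  {
    pthread_mutex_lock(&data->LOCK_auto_inc);
    unsigned long long value= data->next_auto_inc_val++;
    pthread_mutex_unlock(&data->LOCK_auto_inc);
    return value;
  }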
Bug#57113: ha_partition::extra(ha_extra_function):
Assertion `m_extra_cache' failed
The fix for bug#55458 included DBUG_ASSERTs causing
debug builds of the server to crash on
another multi-table update.
Removed the asserts since they were wrong
(updated after testing the patch in 5.5).
mysql-test/r/partition.result:
updated result
mysql-test/t/partition.test:
Added test for bug#57113
sql/ha_partition.cc:
Removed the assert for m_extra_cache when
::extra(HA_PREPARE_FOR_UPDATE) was called.
This is a simple optimization issue. All stats are related to only indexed
columns, index size or number of rows in the whole table. UPDATEs that touch
only non-indexed columns cannot affect stats and we can avoid calling the
function row_update_statistics_if_needed() which may result in unnecessary I/O.
Approved by: Marko (rb://466)
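The idea in miniature; these are not InnoDB's actual structures, and row_update_statistics_if_needed() is stubbed out here:

  /* Stub standing in for the real function, which may trigger disk I/O. */
  static void row_update_statistics_if_needed() {}

  struct update_info_sketch
  {
    bool touches_indexed_column;  /* any changed column is part of an index */
  };

  /* Statistics depend only on indexed columns, index size and the row count,
     and an UPDATE cannot change the row count, so an UPDATE that changes only
     non-indexed columns can skip the statistics step entirely. */
  static void maybe_update_statistics(const update_info_sketch &upd)
  {
    if (upd.touches_indexed_column)
      row_update_statistics_if_needed();
  }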
Failing to run perl is now handled just like perl being started but then failing:
trap the case where perl was not found or could not be started, and skip the test.
Also force a restart of the servers since the test may already have done something.
mtr now also appends the path of the current perl to PATH to aid mysqltest.
For TYPE __sync_lock_test_and_set(TYPE *ptr, TYPE value, ...),
it is not documented what happens if the two arguments are of different
types, as they were before: the first one was lock_word_t (a byte) and the
second one was 1 or 0 (an int).
Approved by: Marko (via IRC)
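Illustrative usage with matching argument types, with lock_word_t as the byte-sized word mentioned above:

  typedef unsigned char lock_word_t;

  /* Both the pointed-to word and the stored value are lock_word_t, so the
     GCC builtin sees a single, well-defined type. */
  static lock_word_t trylock(volatile lock_word_t *lw)
  {
    return __sync_lock_test_and_set(lw, (lock_word_t) 1);
  }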