memory reference
There are two issues present here.
1) There is a possibility that we test a byte beyond the
allocated buffer.
2) We compare a byte that might never have been
initialized to see if it's 0.
The first issue is not triggered by existing code, but an
ASSERT has been added to safeguard against introducing
new code that triggers it.
The second issue is what triggers the Valgrind warnings
reported in the bug report. A buffer is allocated in
class String to hold the value. This buffer is populated
by the character data constituting the string, but is not
zero-terminated in most cases. Testing if it is indeed
zero-terminated means that we check a byte that has never
been explicitly set, thus causing Valgrind to trigger.
Note that issue 2 is not a serious problem. The variable
is read, and if it's not zero, we will set it to zero.
There are no further consequences.
Note that this patch does not fix the underlying problems
with issue 1, as it is deemed too risky to fix at this
point (as noted in the bug report). As discussed in
the report, the c_ptr() method should probably be
replaced, but this requires a thorough analysis of the
~200 calls to the method.
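For illustration, a minimal sketch of the pattern in question (generic
code under assumed names, not the literal class String / c_ptr() code):

  #include <cassert>
  #include <cstddef>

  /* buf holds 'len' characters out of an allocation of 'alloced' bytes
     and is not necessarily 0-terminated. */
  static char *c_ptr_like(char *buf, size_t len, size_t alloced)
  {
    assert(len < alloced);   /* issue 1: never test a byte past the buffer */
    if (buf[len] != '\0')    /* issue 2: this byte may be uninitialized -  */
      buf[len]= '\0';        /* harmless, since we only ever write a 0     */
    return buf;
  }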
Assertion `bitmap_is_set_all(&table->s->all_set)' failed in
handler::ha_reset
This followup fixes the compilation warning
'test_bit' may be used uninitialized in this function
introduced by the previous patch.
Assertion `bitmap_is_set_all(&table->s->all_set)' failed in
handler::ha_reset
This assertion could be triggered if two connections simultaneously
executed two bitmap test functions on the same bitmap. For example,
the assertion could be triggered if one connection executed UPDATE
while a second connection executed SELECT on the same table.
Even though bitmap test functions have read-only semantics and
take const bitmaps as parameters, several of them modified the
internal state of the bitmap. With interleaved execution of two
such functions it was possible for one function to modify the
state of the same bitmap that the other function had just
modified. This led to an inconsistent state and could trigger
the assert.
Internally the bitmap uses 32 bit words for storage. Since bitmaps
can contain any number of bits, the last word in the bitmap may
not be fully used. A 32 bit mask is maintained where a bit is set
if the corresponding bit in the last bitmap word is unused.
The problem was that several test functions applied this mask to
the last word. Sometimes the mask was negated and used to zero out
the remainder of the last word and sometimes the mask was used as-is
to fill the remainder of the last word with 1's. This meant that if
a function first used the negated mask and another function then
used the mask as-is (or vice-versa), the first function would then
get the wrong result.
This patch fixes the problem by changing the implementation of
9 bitmap functions that modified the bitmap state even if the
bitmap was declared const. These functions now preserve the
internal state of the bitmap. This makes it possible for
two connections to concurrently execute two of these functions
on the same bitmap without issues.
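A minimal sketch of the state-preserving approach for one such function
(simplified stand-in types and assumed member names, not the literal
my_bitmap.c code):

  typedef unsigned int uint32;

  typedef struct {
    uint32 *bitmap;          /* first storage word                   */
    uint32 *last_word_ptr;   /* points at the last storage word      */
    uint32  last_word_mask;  /* 1-bits mark unused bits of that word */
  } BITMAP_SKETCH;

  /* "Are all bits set?" without touching the shared last word: the
     unused bits are masked in a local value only. */
  static int bitmap_is_set_all_sketch(const BITMAP_SKETCH *map)
  {
    const uint32 *data= map->bitmap;
    const uint32 *last= map->last_word_ptr;
    while (data < last)
      if (*data++ != 0xFFFFFFFF)
        return 0;
    return (*last | map->last_word_mask) == 0xFFFFFFFF;
  }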
The patch also removes dead testing code from my_bitmap.c.
These tests have already been moved to unittest/mysys/bitmap-t.c.
Existing test coverage of my_bitmap has been extended.
No MTR test case added as this would require adding several sync
points to the bitmap functions. The patch has been tested with
a non-deterministic test case posted on the bug report.
attempt to create spatial index on char > 31 bytes".
Attempt to create a spatial index on a char field with length
greater than 31 bytes led to an assertion failure on a server
compiled with safemutex support.
The problem occurred in the mi_create() function, which was
called to create a new version of the table being altered. This
function failed since it detected an attempt to create a spatial
key on a non-binary column and tried to return an error.
On its error path it tried to unlock the THR_LOCK_myisam mutex,
which had not been locked at this point. This incorrect behavior
was caught by the safemutex wrapper and caused the assertion
failure.
This patch fixes the problem by ensuring that mi_create()
doesn't release the THR_LOCK_myisam mutex on the error path if
it was not acquired.
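A minimal sketch of the pattern used by the fix (simplified,
hypothetical names, not the actual mi_create() code):

  #include <pthread.h>

  static pthread_mutex_t THR_LOCK_demo= PTHREAD_MUTEX_INITIALIZER;

  static int create_like_mi_create(int spatial_on_nonbinary_column)
  {
    int have_lock= 0;
    int error= 0;

    if (spatial_on_nonbinary_column)
    {
      error= 1;              /* fail before the mutex is taken */
      goto err;
    }

    pthread_mutex_lock(&THR_LOCK_demo);
    have_lock= 1;
    /* ... work that may also fail and jump to err ... */

  err:
    if (have_lock)           /* only unlock what was actually locked */
      pthread_mutex_unlock(&THR_LOCK_demo);
    return error;
  }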
batch_readline_init() was modified - the type of the input
stream file is now checked, unless the target platform is
Windows, since S_IFBLK is undefined on that platform.
Reverse DNS lookup of "localhost" returns "broadcasthost" on Snow Leopard (Mac), and NULL on most others.
Simply ignore the output, as this is not an essential part of UDF testing.
batch_readline_init() was modified - return an error
if the input source is a directory or a block device.
This follow-up is necessary because on some platforms,
such as Solaris, a call to read() on a directory may
succeed.
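A minimal sketch of the kind of check described (hypothetical helper,
not the literal batch_readline_init() code); the block-device part is
compiled out on Windows, where S_IFBLK is undefined:

  #include <sys/stat.h>

  /* Returns non-zero if fd must not be used as an input stream. */
  static int reject_input_fd(int fd)
  {
    struct stat st;
    if (fstat(fd, &st))
      return 1;
    if ((st.st_mode & S_IFMT) == S_IFDIR)     /* directory    */
      return 1;
  #ifndef _WIN32
    if ((st.st_mode & S_IFMT) == S_IFBLK)     /* block device */
      return 1;
  #endif
    return 0;
  }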
The test failed on a certain Linux platform in the automated environment. It turns out that this platform has old versions of the Perl modules DBI and DBD::mysql installed, and the OS itself is also relatively old.
Allowing error code 11 to be returned from mysqlhotcopy on the expected error seems harmless and will make the test pass with older libraries as well.
Added --debug-server and use $opt_debug_server where appropriate
Let --debug imply --debug-server
When merging to 5.5, must adapt fix for 59148
Oops, set debug => debug-server too late, fixed
Also fix bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.
The root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had been updated to reflect a 'skip' of the sort order.
This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.
The original 'save_quick' was then restored after the QEP had been modified,
which caused:
- An incorrect 'precomputed_group_by= TRUE' may have been set,
and not reverted, as part of the already modified QEP (Bug#59308)
- A 'select->quick' might have been created which we failed to delete (Bug#59110).
This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by the argument 'bool no_changes')
is moved to the end of test_if_skip_sort_order(), and done after *all*
'test_if_skip' checks have been performed - including the
'check_reverse_order:' checks.
The refactoring above contains no intentional changes to the logic which
has been moved to the end of the function.
Furthermore, a smaller part of the fix addresses the handling of the
select->quick objects which may already exist when we call
'test_if_skip_sort_order()' (save_quick), and of new select->quick's
created during test_if_skip_sort_order() (see the sketch after this list):
- Before a new select->quick may be created by calling ::test_quick_select(), we
set 'select->quick= 0' to prevent ::test_quick_select() from prematurely
deleting the save_quick. (After this call we may have both a 'save_quick'
and a 'select->quick'.)
- All returns from ::test_if_skip_sort_order() where we may have both a
'save_quick' and a 'select->quick' have been changed to goto's to the
exit points 'skiped_sort_order:' or 'need_filesort:', where we
decide which of the QUICK_SELECTs to keep, and delete the other.
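A hedged, simplified illustration of the exit-point handling (generic
code, not the actual sql_select.cc logic): at the end, exactly one of
the two objects is kept and the other deleted, which is what closes the
Bug#59110 leak.

  struct Quick { };   /* stand-in for a QUICK_SELECT object */

  static Quick *choose_quick(Quick *saved, Quick *created, bool use_created)
  {
    if (use_created)          /* the 'skiped_sort_order:' style exit     */
    {
      delete saved;           /* pre-existing quick is no longer needed  */
      return created;
    }
    delete created;           /* 'need_filesort:' style exit: drop the   */
    return saved;             /* new quick and keep the saved one        */
  }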
if the standard input is a directory.
The problem is that the mysql monitor tried to read from stdin
without checking the type of the input source.
The solution is to stop reading data from standard input if a call
to read(2) fails.
A new test case was added into mysql.test.
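A hedged, generic illustration of the described behavior (not the
actual client code):

  #include <stdio.h>
  #include <unistd.h>

  static void consume_stdin(void)
  {
    char buf[4096];
    for (;;)
    {
      ssize_t got= read(fileno(stdin), buf, sizeof(buf));
      if (got <= 0)     /* error (e.g. EISDIR) or end of input: stop */
        break;
      /* ... process 'got' bytes ... */
    }
  }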
handling.
The problem was that parsing of nested regular expressions involved
recursive calls. Such recursion did not take into account the amount of
available stack space, which ended up leading to stack overflow crashes.
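A hedged, generic illustration of the idea (hypothetical limit and toy
grammar, not the actual regex library code): bound the nesting depth
explicitly instead of letting recursion consume the stack unchecked.

  #define MAX_NESTING 1000          /* hypothetical limit */

  /* Toy recursive-descent parser for nested '(...)' groups. */
  static int parse_group(const char **p, int depth)
  {
    if (depth > MAX_NESTING)
      return -1;                    /* "expression too complex" instead
                                       of a stack overflow crash       */
    while (**p)
    {
      if (**p == '(')
      {
        ++*p;
        if (parse_group(p, depth + 1) < 0)
          return -1;
      }
      else if (**p == ')')
      {
        ++*p;
        return 0;
      }
      else
        ++*p;
    }
    return 0;
  }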
When fixing the 27072 bug, the shell snippets running before/after
an RPM upgrade were expanded to look at files in the data directory
and at the PID file.
In this expansion, the standard locations were used.
There are users who configure their installations to use non-standard
locations for the data directory, the PID file, and other objects.
For these users, the fix of 27072 did not work.
As a result, the fact that a server was running at upgrade start was
not noticed, and the new server was not started after the upgrade.
With this patch, the shell snippets now try to get these locations
from "my_print_defaults" before falling back to the defaults.
Now, the fact that the old server is running is again noticed (even
with non-standard locations), and the new server is started.
Also, the upgrade log is written to the correct data directory.
There is one part of the test case that needs to break
and re-establish the circular topology. For this the test
stops the slave threads on a couple of servers and restarts
them with START SLAVE. However, no check is done on the
status of the IO or SQL threads before proceeding with
the subsequent commands.
Because rpl_only_running_threads is set to 1 this can lead
to silently not syncing all slave threads as expected,
ultimately resulting in unexpected results (and consequently
in a failing test run).
We fix this by replacing the START SLAVE instructions with
calls to --source include/start_slave.inc, which will wait
for the slave threads to be running (show 'Yes' in
Slave_IO|SQL_Running fields of SHOW SLAVE STATUS) before
proceeding. Additionally, we change rpl_sync.inc to make the
IO thread report that it is running when its running status
is any other than 'No'.
Bug #55755 : Date STD variable signness breaks server on FreeBSD and OpenBSD
* Added a check to configure on the size of time_t
* Created a macro to check for a valid time_t that is safe to use with datetime
functions and store in TIMESTAMP columns (see the sketch below).
* Used the macro consistently instead of the ad-hoc checks introduced by 52315
* Fixed compilation warnings on platforms where the size of time_t is smaller than
the size of a long (e.g. OpenBSD 4.8 64 amd64).
Bug #52315: utc_date() crashes when system time > year 2037
* Added a correct check for the timestamp range instead of just a variable size
check for SET TIMESTAMP.
* Added overflow checking before converting to time_t.
* Using a correct localized error message in this case instead of the generic error.
* Added a test suite.
* fixed the checks so that they check for unsigned time_t as well. Used the checks
consistently across the source code.
* fixed the original test case to expect the new error code.
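A hedged sketch of the kind of validity check referred to above
(hypothetical macro name and bounds; the real macro and constants may
differ). Valid TIMESTAMP values lie roughly in [1, 2^31 - 1] seconds
since the epoch, and the comparisons are done in the time_t domain so
the check behaves the same whether time_t is signed or unsigned:

  #include <time.h>

  #define MAX_TIMESTAMP_SECONDS 2147483647   /* 2038-01-19 03:14:07 UTC */
  #define IS_VALID_FOR_TIMESTAMP(t) \
    ((t) >= (time_t) 1 && (t) <= (time_t) MAX_TIMESTAMP_SECONDS)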
btr_rec_get_field_ref_offs(), btr_rec_get_field_ref(): New functions.
Get the pointer to an externally stored field.
btr_cur_set_ownership_of_extern_field(): Assert that the BLOB has not
already been disowned.
btr_store_big_rec_extern_fields(): Rename to
btr_store_big_rec_extern_fields_func() and add the debug parameter
update_in_place. All pointers to externally stored columns in the
record must either be zero or they must be pointers to inherited
columns, owned by this record or an earlier record version. For any
BLOB that is stored, the BLOB pointer must previously have been
zero. When the function completes, all BLOB pointers must be nonzero
and owned by the record.
rb://549 approved by Jimmy Yang
primary_key_no == 0".
An attempt to create an InnoDB table with a non-nullable column of
geometry type, having a unique key with length 12 on it and
some other candidate key, led to a server crash due to an
assertion failure in both non-debug and debug builds.
The problem was that such a non-candidate key could have
been sorted as the first key in the table/.FRM, before any
legitimate candidate keys. This resulted in an assertion failure in
the InnoDB engine, which assumes that the primary key should either
be the first key in the table/.FRM or not exist at all.
The reason behind this incorrect sorting was a wrong
value of the Create_field::key_length member for the geometry field
(it was set to its pack_length == 12), which confused the code
in mysql_prepare_create_table() so that it skipped marking
such a key as a key with partial segments.
This patch fixes the problem by ensuring that this member
gets the same value of Create_field::key_length as for
other blob fields (from which the geometry field class is
inherited); as a result, unique keys on geometry fields
are correctly marked as having partial segments.
row_purge(): Change the return type to void. (The return value always
was DB_SUCCESS.) Remove some local variables.
row_undo_mod_remove_clust_low(): Remove some local variables.
rb://547 approved by Jimmy Yang
Antelope files in btr_check_blob_fil_page_type(). Unfortunately, we
must keep the check in production builds, because InnoDB wrote
uninitialized garbage to FIL_PAGE_TYPE until fairly recently (5.1.x).
rb://546 approved by Jimmy Yang
It was the enabling of UNIV_DEBUG_FILE_ACCESSES that caught Bug #55284
in the first place. This is a very light piece of debug code, and
there really is no reason why it is not enabled in all debug builds.
rb://551 approved by Jimmy Yang
The root cause of this bug is that the optimizer tries to detect and
optimize the special case:
'<field> BETWEEN c1 AND c1' and handle it as the condition '<field> = c1'.
This was implemented inside add_key_field(.. *field, *value[]...),
which assumed 'field' to refer to a key Field, and value[] to refer to a
[low...high] constant pair. value[0] and value[1] were then compared for equality.
In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.
However, if the BETWEEN predicate specified:
1) '<const1> BETWEEN <const2> AND <field>'
the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to trick add_key_field() into handling it like:
2) '<const1> GE <const2> AND <const1> LE <field>'
As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:
3) '<field> EQ <const1>'
This fix moves this optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields() which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
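A hedged, self-contained illustration of why the old collapse was wrong
(generic code, not server internals): for '10 BETWEEN 10 AND f' the
correct condition on f is 'f >= 10', but the incorrect rewrite 3)
effectively demanded 'f = 10'.

  #include <cassert>

  static bool between(long v, long lo, long hi) { return v >= lo && v <= hi; }

  int main()
  {
    long f= 42;
    bool correct  = between(10, 10, f);   /* '10 BETWEEN 10 AND f' holds */
    bool collapsed= (f == 10);            /* but 'f = 10' does not       */
    assert(correct && !collapsed);        /* the rewrite changed meaning  */
    return 0;
  }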
trx rollback or purge
This patch does not relax the failing debug assertion during purge.
That will be revisited once we have managed to repeat the assertion failure.
row_upd_changes_ord_field_binary_func(): Renamed from
row_upd_changes_ord_field_binary(). Add the parameter que_thr_t* in
UNIV_DEBUG builds. When the off-page column cannot be retrieved,
assert that the current transaction is a recovered one and that it is
the one that is currently being rolled back.
row_upd_changes_ord_field_binary(): A wrapper macro for
row_upd_changes_ord_field_binary_func() that discards the que_thr_t*
parameter unless UNIV_DEBUG is defined.
row_purge_upd_exist_or_extern_func(): Renamed from
row_purge_upd_exist_or_extern(). Add the parameter que_thr_t* in
UNIV_DEBUG builds.
row_purge_upd_exist_or_extern(): A wrapper macro for
row_purge_upd_exist_or_extern_func() that discards the que_thr_t*
parameter unless UNIV_DEBUG is defined.
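A hedged, generic sketch of this wrapper pattern (hypothetical names,
not the real InnoDB signatures): callers always pass the thread
argument, but it only reaches the function in UNIV_DEBUG builds, where
it is used for the assertions described above.

  #ifdef UNIV_DEBUG
  int changes_ord_field_func(int update, const void *thr);
  # define changes_ord_field(update, thr) changes_ord_field_func(update, thr)
  #else
  int changes_ord_field_func(int update);
  # define changes_ord_field(update, thr) changes_ord_field_func(update)
  #endif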
Make trx_roll_crash_recv_trx const. If there were a 'do not
dereference' attribute, it would be appropriate as well.
rb://588 approved by Jimmy Yang