Surprisingly, a Slave_log_event would show up in the binary
log. This event is never used and should not appear in the
logs. As such, when the slave (or the mysqlbinlog tool) reads the
event, it hits an invalid pointer (the reference to the
descriptor event used when deserializing the Slave_log_event was
purposely set to NULL).
The presence of the Slave_log_event denotes a corrupted log, but
we cannot tell how the log got corrupted in the first
place. However, we can make the server cope with such events when
it reads them - in case of log corruption - and fail gracefully.
This patch makes the server/mysqlbinlog report that an
invalid log event was found when a Slave_log_event is read.
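A minimal sketch of the intended behaviour, with simplified stand-ins
(the real event codes and error paths live in log_event.h/log_event.cc):

  #include <cstdio>

  /* Hypothetical, simplified event codes for illustration only. */
  enum Log_event_type { QUERY_EVENT= 2, SLAVE_EVENT= 7 /* never written */ };

  /*
    Sketch: reject SLAVE_EVENT up front instead of later dereferencing
    the NULL descriptor-event reference its deserializer would use.
  */
  bool check_event_type(Log_event_type type, const char *log_name)
  {
    if (type == SLAVE_EVENT)
    {
      fprintf(stderr, "Error: invalid event (Slave_log_event) found in %s; "
                      "the binary log is corrupted.\n", log_name);
      return false;             /* caller fails gracefully, no crash */
    }
    return true;
  }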
LOAD DATA into partitioned MyISAM table
The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto increment and repair).
Solved by adding a dedicated mutex for the partitioning
auto_increment handling (sketched below).
Also added destruction of the ha_data structure in
free_table_share (a change which is to be propagated
into 5.5).
This is a 5.1 ONLY patch, already fixed in 5.5+.
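A rough sketch of the idea behind the dedicated mutex (names here are
hypothetical stand-ins, not the actual 5.1 patch):

  #include <pthread.h>

  /* Hypothetical reduced share: the point is that auto-increment now
     has its own lock instead of piggy-backing on table_share->mutex,
     which MyISAM also takes for repair. */
  struct Partition_share_sketch
  {
    pthread_mutex_t auto_inc_mutex;        /* new, dedicated mutex */
    unsigned long long next_auto_inc_val;
  };

  void lock_auto_increment(Partition_share_sketch *share)
  {
    pthread_mutex_lock(&share->auto_inc_mutex);  /* not table_share->mutex */
  }

  void unlock_auto_increment(Partition_share_sketch *share)
  {
    pthread_mutex_unlock(&share->auto_inc_mutex);
  }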
Bug#57113: ha_partition::extra(ha_extra_function):
Assertion `m_extra_cache' failed
The fix for bug#55458 included DBUG_ASSERTs that caused
debug builds of the server to crash on
another multi-table update.
Removed the asserts since they were wrong.
(Updated after testing the patch in 5.5.)
This is a simple optimization. All stats are related only to indexed
columns, index size, or the number of rows in the whole table. UPDATEs that
touch only non-indexed columns cannot affect stats, so we can avoid calling
row_update_statistics_if_needed(), which may otherwise result in unnecessary
I/O (see the sketch below).
Approved by: Marko (rb://466)
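The shape of the optimization, sketched with hypothetical stand-ins for
the InnoDB internals involved:

  /* Hypothetical reduced stand-in for the InnoDB table object. */
  struct dict_table_t { unsigned long stat_modified_counter; };

  static bool update_changes_indexed_column(bool touched_indexed_col)
  { return touched_indexed_col; } /* stand-in for checking the update vector */

  static void row_update_statistics_if_needed(dict_table_t *table)
  { table->stat_modified_counter++; }    /* the real call may trigger I/O */

  /* Sketch: skip the statistics bookkeeping entirely when the UPDATE
     touched only non-indexed columns; such an update cannot change any
     statistic (they depend on indexed columns, index size or row count). */
  void after_row_update(dict_table_t *table, bool touched_indexed_col)
  {
    if (!update_changes_indexed_column(touched_indexed_col))
      return;                                /* nothing can have changed */
    row_update_statistics_if_needed(table);  /* potentially does I/O */
  }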
Trying to run perl may fail outright, just as it may when perl is started
but then fails.
Trap the case where perl was not found or could not be started, and skip
the test.
Also force a restart of the servers, since the test may already have done
something.
mtr now also appends the path of the current perl to PATH to aid mysqltest.
For TYPE __sync_lock_test_and_set(TYPE *ptr, TYPE value, ...),
it is not documented what happens if the two arguments are of different
types, as they were before: the first one was lock_word_t (byte) and the
second one was 1 or 0 (int). Both now use one type (illustrated below).
Approved by: Marko (via IRC)
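To illustrate, a small example where both arguments of the builtin share
one type (lock_word_t here is a stand-in for the InnoDB definition):

  typedef unsigned char lock_word_t;    /* byte-sized lock word */

  static lock_word_t lock_word= 0;

  /* Pass the value as lock_word_t too, not as a plain int literal, so
     both arguments of the builtin have the same TYPE, matching its
     documented signature. */
  bool try_lock(void)
  {
    /* Returns the previous contents of lock_word; 0 means we got it. */
    return __sync_lock_test_and_set(&lock_word, (lock_word_t) 1) == 0;
  }

  void unlock(void)
  {
    __sync_lock_release(&lock_word);   /* writes 0 with release semantics */
  }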
Use the UNINIT_VAR workaround instead of LINT_INIT. The former can
also be used to silence false positives in non-debug builds, as
it does not actually cause new code to be generated (see the sketch
below).
Temporarily disable strict aliasing warnings in order to get
wider coverage for optimized builds. Once the violations are
fixed and false-positives silenced, this flag should be removed.
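For reference, the two macros differ roughly as sketched below; the
exact definitions live in the server headers and may vary by platform:

  /* LINT_INIT assigned a real value, i.e. a dead store in every build: */
  #define LINT_INIT(var)  var= 0

  /* UNINIT_VAR self-assigns, which silences GCC's "may be used
     uninitialized" warning without generating any new code: */
  #define UNINIT_VAR(x)   x= x

  void example(void)
  {
    int UNINIT_VAR(result);   /* expands to: int result= result; */
    /* ... every real path assigns result before it is read ... */
    result= 1;
    (void) result;
  }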
Added --enable-connect-log, somewhat similar to --enable-query-log.
If the query log is disabled, the connect log is disabled too.
Also some related cleanup in mysqltest.test: removed a duplicate test loop.
A subselect executes twice, at the JOIN::optimize stage
and at the JOIN::execute stage. At the optimize stage,
the InnoDB prebuilt struct, which is used for the
retrieval of column values, is initialized in
ha_innobase::index_read(), and prebuilt->sql_stat_start is true.
After QUICK_ROR_INTERSECT_SELECT has finished its job, it
restores the read_set/write_set bitmaps to their initial values
and deactivates one of the handlers used by
QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup
(this is the case when we reuse the original handler as one of
the handlers required by the QUICK_ROR_INTERSECT_SELECT object).
On the second subselect execution, the inactive handler is activated
in QUICK_RANGE_SELECT::reset via file->ha_index_init().
In ha_index_init, the InnoDB prebuilt struct is reinitialized
with inappropriate read_set/write_set bitmaps. Further
reinitialization in ha_innobase::index_read() does not
happen, as prebuilt->sql_stat_start is false.
This leads to partial retrieval of the required field values,
and we get a mix of field values from different records
in the record buffer.
The fix is to reset the read_set/write_set bitmaps, as these
values are required for proper initialization of the
internal InnoDB struct used for the retrieval of column values
(see build_template() in ha_innodb.cc).
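A heavily reduced sketch of that shape of fix (stand-in types; the real
change operates on TABLE::read_set/write_set via the column bitmap
setters):

  /* Reduced stand-ins; the real TABLE and MY_BITMAP live in sql/. */
  struct MY_BITMAP_sketch { /* column bitmap contents elided */ };

  struct TABLE_sketch
  {
    MY_BITMAP_sketch *read_set;
    MY_BITMAP_sketch *write_set;
    void column_bitmaps_set(MY_BITMAP_sketch *rs, MY_BITMAP_sketch *ws)
    { read_set= rs; write_set= ws; }
  };

  /* Sketch: before re-activating the handler on the second execution,
     point the table at the bitmaps the ROR-intersect scan needs, so
     that build_template() (reached from ha_index_init()) sees correct
     read_set/write_set values. */
  void reset_for_ror_merged_scan(TABLE_sketch *table,
                                 MY_BITMAP_sketch *scan_bitmap)
  {
    table->column_bitmaps_set(scan_bitmap, scan_bitmap);
    /* ... then file->ha_index_init(...) may safely reinitialize the
       InnoDB prebuilt struct from these bitmaps ... */
  }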
adding new indexes
A fast alter table requires that the existing (old) table
and indices are unchanged (i.e. only new indices can be
added). To verify this, the layout and flags of the old
table/indices are compared for equality with the new ones.
The PACK_KEYS option is a no-op in InnoDB, but the flag
exists and is used in the table compare. We need to
check this (table) option flag before deciding whether an
index should be packed or not. If the table has
explicitly set PACK_KEYS to 0, the created indices should
not be marked as packed/packable.
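A hedged sketch of the check (flag names follow my_base.h; the function
and the values shown are illustrative stand-ins):

  /* Flag names as in my_base.h; values shown only for illustration. */
  static const unsigned HA_OPTION_PACK_KEYS_SKETCH=    2;   /* PACK_KEYS=1 */
  static const unsigned HA_OPTION_NO_PACK_KEYS_SKETCH= 128; /* PACK_KEYS=0 */

  /* Sketch: consult the table's explicit PACK_KEYS option before
     marking an index as packed/packable. */
  bool key_should_be_packed(unsigned db_create_options, bool key_is_packable)
  {
    if (db_create_options & HA_OPTION_NO_PACK_KEYS_SKETCH)
      return false;            /* user explicitly set PACK_KEYS=0 */
    return key_is_packable;    /* otherwise keep the usual decision */
  }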
compression protocol.
The loss of connection was caused by a malformed packet
sent by the server when the query cache was in use.
When storing data in the query cache, the query cache
memory allocation algorithm had a tendency to reduce
the number of memory blocks necessary to store a result
set, up to finally storing the entire result set in a single
block. With a significant result set, this memory block
could turn out to be quite large - 30, 40 MB and beyond.
When such a result set was sent to the client, the entire
memory block was compressed and written to the network as a
single network packet. However, the length of a
network packet is limited to 0xFFFFFF (16MB), since
the packet format only allows 3 bytes for the packet length.
As a result, a malformed, overly large packet
with a truncated length would be sent to the client
and break the client/server protocol.
The solution is, when sending result sets from the query
cache, to ensure that the data is chopped into
network packets of size <= 16MB, so that there
is no corruption of packet length. This solution,
however, has a shortcoming: since the result set
is still stored in the query cache as a single block,
at the time of sending, we've lost boundaries of individual
logical packets (one logical packet = one row of the result
set) and thus can end up sending a truncated logical
packet in a compressed network packet.
As a result, on the client we may require more memory than
max_allowed_packet to keep both the truncated
last logical packet and the next compressed packet.
This never (or in practice never) happens without compression,
since without compression it's very unlikely that
a) a truncated logical packet would remain on the client
when it's time to read the next packet
b) a subsequent logical packet that is being read would be
so large that size-of-new-packet + size-of-old-packet-tail >
max_allowed_packet.
To remedy this issue, we send data in 1MB-sized packets:
that is well below the current client default of 16MB for
max_allowed_packet, but large enough to avoid
unnecessary overhead from too many syscalls per result set.
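A simplified sketch of the chunking (net_write() is a stand-in for the
real net layer call):

  #include <stddef.h>

  #define MAX_CHUNK (1024UL * 1024UL)   /* 1MB per network packet */

  /* Sketch: split one big query-cache block into network writes of at
     most MAX_CHUNK bytes each, so no single packet can ever exceed the
     3-byte length limit of 0xFFFFFF. */
  void send_in_chunks(const unsigned char *data, size_t len,
                      void (*net_write)(const unsigned char *, size_t))
  {
    size_t off;
    for (off= 0; off < len; off+= MAX_CHUNK)
    {
      size_t n= (len - off < MAX_CHUNK) ? len - off : MAX_CHUNK;
      net_write(data + off, n);  /* each call becomes one network packet */
    }
  }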
Fix compiler warning:
trx/trx0undo.c: In function 'trx_undo_truncate_end':
trx/trx0undo.c:1069:14: error: variable 'rseg' set but not used [-Werror=unused-but-set-variable]
Fix compiler warning:
trx/trx0undo.c: In function 'trx_undo_set_state_at_prepare':
trx/trx0undo.c:1871:16: error: variable 'page_hdr' set but not used [-Werror=unused-but-set-variable]
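For illustration, the generic shape of such warnings and their usual fix
(not the actual trx0undo.c code):

  static int produce_value(void) { return 42; }

  /* Before: 'tmp' is set but never read, so GCC 4.6's
     -Wunused-but-set-variable (an error under -Werror) fires:

       int tmp;
       tmp= produce_value();    <-- variable 'tmp' set but not used

     After: keep the call for its result/side effects, drop the
     variable. */
  void fixed_version(void)
  {
    (void) produce_value();
  }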