- _ma_apply_redo_index: Assertion `page_offset != 0 && page_offset + length <= page_length' failed
Fixes one bug and one log assert when inserting rows:
- _ma_log_split: Assertion `org_length <= info->s->max_index_block_size' failed
- write_block_record: Assertion '(data_length < MAX_TAIL_SIZE(block_size)' failed
Mark in the recovery log where _ma_log_add() calls were done (for better debugging).
storage/maria/ma_bitmap.c:
Don't write a head part on a tail page. (Caused an assert in write_block_record())
storage/maria/ma_delete.c:
Mark in the recovery log where _ma_log_add() calls were done.
storage/maria/ma_key_recover.c:
Mark in the recovery log where _ma_log_add() calls were done.
Fixed an unhandled logging case for overfull index pages.
storage/maria/ma_key_recover.h:
Mark in the recovery log where _ma_log_add() calls were done.
storage/maria/ma_loghandler.h:
Mark in the recovery log where _ma_log_add() calls were done.
storage/maria/ma_rt_key.c:
Mark in the recovery log where _ma_log_add() calls were done.
storage/maria/ma_write.c:
Mark in the recovery log where _ma_log_add() calls were done.
Fixed a wrong call to _ma_split_page() for overfull pages.
This is a regression from the fix for bug no 38999. A storage engine capable
of reading only a subset of a table's columns updates corresponding bits in
the read buffer to signal that it has read NULL values for the corresponding
columns. It cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements compared the NULL bits
using memcmp, inadvertently comparing bits that were never requested from the
storage engine. The regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even those that it had no
knowledge of. This has devastating effects for the index merge algorithm,
which relies on all NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and InnoDB plugin and
changes the server's method of comparing records. For engines that always read
entire rows, we proceed as usual. For engines capable of reading only select
columns, the record buffers are now compared on a column-by-column basis. An
assertion was also added so that non-comparable buffers are never read. Some
relevant copy-pasted code was also consolidated into a new function.
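A minimal, self-contained sketch of that comparison idea, assuming a simplified
record layout (the Column struct and read_set below are hypothetical, not the
server's TABLE/Field structures): instead of a memcmp over whole record buffers,
only the columns in the read set are compared, so NULL bits the engine never
touched cannot produce spurious differences.

    #include <cstring>
    #include <vector>

    // A column's placement in the record buffer (hypothetical layout).
    struct Column {
      size_t offset;    // byte offset of the value in the record buffer
      size_t length;    // value length in bytes
      int    null_bit;  // index of the column's NULL bit, or -1 if NOT NULL
    };

    static bool null_bit_set(const unsigned char *rec, int bit) {
      return bit >= 0 && ((rec[bit / 8] >> (bit % 8)) & 1) != 0;
    }

    // Compare two record buffers, but only for the columns the engine was
    // actually asked to read; bits and bytes outside the read set may hold
    // garbage and must be ignored.
    bool records_differ(const std::vector<Column> &cols,
                        const std::vector<bool> &read_set,
                        const unsigned char *old_rec,
                        const unsigned char *new_rec) {
      for (size_t i = 0; i < cols.size(); i++) {
        if (!read_set[i])
          continue;                               // column was never read
        bool old_null = null_bit_set(old_rec, cols[i].null_bit);
        bool new_null = null_bit_set(new_rec, cols[i].null_bit);
        if (old_null != new_null)
          return true;
        if (!old_null && std::memcmp(old_rec + cols[i].offset,
                                     new_rec + cols[i].offset,
                                     cols[i].length) != 0)
          return true;
      }
      return false;  // all requested columns compare equal
    }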
Employed the same kind of optimization as in the fix for the cases
when a join buffer is used.
The optimization performs early evaluation of the conditions from
an ON expression whose table references are only to outer tables of
an outer join.
Surprisingly, a Slave_log_event would show up in the binary
log. This event is never used and should not appear in the
logs. As such, when the slave (or the mysqlbinlog tool) reads the
event, it will hit an invalid pointer (the reference to the
descriptor event is purposely set to NULL when deserializing the
Slave_log_event).
The presence of the Slave_log_event denotes a corrupted log, but
we cannot tell how the log got corrupted in the first
place. However, we can make the server cope with such events when
it reads them - in case of log corruption - and fail gracefully.
This patch makes the server/mysqlbinlog report that it has
found an invalid log event when a Slave_log_event is read.
The reason for this was that some bitmap test functions changed the bitmap,
which caused problems when the same bitmap was used by multiple threads.
include/my_bitmap.h:
Changed order of elements to get better alignment.
mysys/my_bitmap.c:
Changed bitmap test functions to not modify the bitmap (see the sketch below).
Fixed compiler errors in test_bitmap.
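A minimal sketch of the approach, assuming a simplified bitmap layout (not the
actual MY_BITMAP structure): the test masks the unused tail bits in a local
variable instead of temporarily writing to the map, so concurrent readers of a
shared bitmap stay safe.

    #include <cstdint>
    #include <cstddef>

    struct Bitmap {
      const uint32_t *words;  // bit storage, 32 bits per word
      size_t n_bits;          // number of valid bits
    };

    // "Are all bits set?" written without ever writing to the bitmap.
    bool bitmap_is_set_all(const Bitmap &map) {
      size_t full_words = map.n_bits / 32;
      for (size_t i = 0; i < full_words; i++)
        if (map.words[i] != 0xFFFFFFFFu)
          return false;
      size_t tail_bits = map.n_bits % 32;
      if (tail_bits == 0)
        return true;
      // Mask the unused high bits in a local copy instead of modifying the map.
      uint32_t mask = (1u << tail_bits) - 1;
      return (map.words[full_words] & mask) == mask;
    }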
The fix aligns join_null_complements() with join_matching_records(),
making both call generate_full_extensions().
There should not be any difference between how the WHERE clause
is applied to NULL-complemented records from a partial join and how
it is applied to other partially joined records: the latter happens in
join_matching_records(), precisely in generate_full_extensions().
Added the possibility of not factoring out the condition pushed to
the accessed index from the condition pushed to a joined table.
This is useful for the condition pushed to the index when a hashed
join buffer for BKA is employed. In this case the index condition
may be false for some, but not for all, records with the same key.
So the condition must be checked not only after the index lookup,
but after fetching the row data as well, and it makes sense not to
factor out the condition from the condition checked after reading
the row data.
The bug happened because the condition pushed to an index was always
factored out from the condition pushed to the accessed table.
******
Fixed bug #54539.
LOAD DATA into partitioned MyISAM table
The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto_increment and repair).
Solved by adding a specific mutex for the partitioning
auto_increment (see the sketch below).
Also added destruction of the ha_data structure in
free_table_share (which is to be propagated
into 5.5).
This is a 5.1 ONLY patch, already fixed in 5.5+.
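A rough illustration of the locking change, using standard C++ primitives
rather than the server's mysys mutexes and ha_data layout (all names here are
hypothetical): the auto_increment state gets its own mutex, so it no longer
contends with repair, which keeps using the table share's general mutex.

    #include <mutex>
    #include <cstdint>

    struct PartitionShareData {           // stands in for the partition ha_data
      std::mutex auto_inc_mutex;          // dedicated lock: auto_increment only
      uint64_t   next_auto_inc_val = 1;
      bool       auto_inc_initialized = false;
    };

    uint64_t reserve_auto_inc(PartitionShareData &share) {
      std::lock_guard<std::mutex> guard(share.auto_inc_mutex);
      return share.next_auto_inc_val++;   // no longer serialized with repair,
                                          // which still takes the share mutex
    }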
The crash happens because the original join table is replaced with a temporary
table at the execution stage and later we attempt to use this temporary table
in select_describe. It might happen that the
Item_subselect::update_used_tables() method, which sets the const_item flag,
is not called for some reason (no WHERE/HAVING condition in the subquery, for
example). This prevents JOIN::join_tmp creation and breaks the original join.
The fix is to call ::update_used_tables() before the ::const_item() check
(see the sketch after the file notes below).
mysql-test/r/ps.result:
test case
mysql-test/t/ps.test:
test case
sql/item_subselect.cc:
call ::update_used_tables() before ::const_item() check.
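A stubbed sketch of the call ordering (ItemStub is an illustration only, not
the real Item_subselect class from sql/item_subselect.cc): the const flag is
only valid after update_used_tables() has refreshed it, so that call must
happen before const_item() is consulted.

    // Stub standing in for Item_subselect.
    struct ItemStub {
      bool const_flag = false;        // what const_item() reports
      bool uses_outer_tables = false;
      void update_used_tables() { const_flag = !uses_outer_tables; }  // sets the flag
      bool const_item() const { return const_flag; }
    };

    bool can_treat_as_constant(ItemStub &item) {
      item.update_used_tables();      // must run first: it is what sets the flag
      return item.const_item();       // otherwise this may report a stale value
    }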
Bug#57113: ha_partition::extra(ha_extra_function):
Assertion `m_extra_cache' failed
Fix for bug#55458 included DBUG_ASSERTS causing
debug builds of the server to crash on
another multi-table update.
Removed the asserts since they were wrong.
(updated after testing the patch in 5.5).
mysql-test/r/partition.result:
updated result
mysql-test/t/partition.test:
Added test for bug#57113
sql/ha_partition.cc:
Removed the assert for m_extra_cache when
::extra(HA_PREPARE_FOR_UPDATE) was called.
This is a simple optimization issue. All stats relate only to indexed
columns, index size, or the number of rows in the whole table. UPDATEs that
touch only non-indexed columns cannot affect the stats, so we can avoid calling
the function row_update_statistics_if_needed(), which may result in unnecessary I/O.
Approved by: Marko (rb://466)
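An illustrative-only sketch of the check described above (the data layout and
helper name are hypothetical; InnoDB's real code inspects the update vector,
not these containers): statistics maintenance is triggered only when at least
one updated column belongs to an index.

    #include <set>
    #include <vector>
    #include <cstddef>

    // Return true only when the UPDATE touched an indexed column,
    // i.e. only then can index statistics possibly change.
    bool update_may_affect_stats(const std::set<size_t> &indexed_columns,
                                 const std::vector<size_t> &updated_columns) {
      for (size_t col : updated_columns)
        if (indexed_columns.count(col))
          return true;   // an indexed column changed: refresh stats if needed
      return false;      // only non-indexed columns changed: skip the stats
                         // update and its potential I/O
    }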