The bug could cause a crash when several connections needed
persistent statistics for the same table.
Also added a missing call of set_statistics_for_table() in the code
of the function mysql_update.
Made the allocation of memory for statistical data in a table share thread-safe.
This memory is now allocated in a special MEM_ROOT that is created for each
table share.
If triggers are used for an insert/update/delete statement, then the values of
all virtual columns must be computed, as any of them may be used by the triggers.
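For illustration, a minimal sketch of such a case, using hypothetical table and trigger names: the INSERT itself never mentions the virtual column, but the trigger reads it, so it must be computed anyway.

  CREATE TABLE t1 (a INT, b INT AS (a*2) VIRTUAL, c INT);
  CREATE TABLE log1 (v INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
    INSERT INTO log1 VALUES (NEW.b);
  -- b is not referenced by the statement, but the trigger reads NEW.b,
  -- so the virtual column's value has to be computed for every row.
  INSERT INTO t1 (a, c) VALUES (1, 10);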
LP bug #1035225 / MySQL bug #66301: INSERT ... ON DUPLICATE KEY UPDATE +
innodb_autoinc_lock_mode=1 is broken
The problem was that when certain INSERT ... ON DUPLICATE KEY UPDATE
statements were executed concurrently on a table containing an AUTO_INCREMENT
column as a primary key, InnoDB would correctly reserve non-overlapping
AUTO_INCREMENT intervals for each statement, but when the server
encountered the first duplicate key error on the secondary key in one of
the statements and performed an UPDATE, it also updated the internal
AUTO_INCREMENT value to the one from the existing row that caused a
duplicate key error, even though the AUTO_INCREMENT value was not
specified explicitly in the UPDATE clause. It would then proceed to use
AUTO_INCREMENT values from the range reserved previously by another
statement, causing duplicate key errors on the AUTO_INCREMENT column.
Fixed by changing write_record() to ensure that in case of a duplicate
key error the internal AUTO_INCREMENT counter is only updated when the
AUTO_INCREMENT value was explicitly updated by the UPDATE
clause. Otherwise it is restored to what it was before the duplicate key
error, as that value is unused and can be reused for subsequent
successfully inserted rows.
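A hedged sketch of the kind of workload that could hit this, with a hypothetical table; the two statements would be run concurrently from separate connections with innodb_autoinc_lock_mode=1:

  CREATE TABLE t1 (
    id  INT AUTO_INCREMENT PRIMARY KEY,
    k   INT,
    cnt INT,
    UNIQUE KEY (k)
  ) ENGINE=InnoDB;
  -- connection 1
  INSERT INTO t1 (k, cnt) VALUES (1,1),(2,1),(3,1)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;
  -- connection 2, concurrently
  INSERT INTO t1 (k, cnt) VALUES (2,1),(4,1),(5,1)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;
  -- Before the fix, the duplicate on k=2 could pull the internal
  -- AUTO_INCREMENT counter back into the interval reserved by the other
  -- statement, leading to duplicate key errors on id.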
sql/sql_insert.cc:
Don't update next_insert_id to the value of a row found during ON DUPLICATE KEY UPDATE.
sql/sql_parse.cc:
Added DBUG_SYNC
sql/table.h:
Added next_number_field_updated flag to detect changing of auto increment fields.
Moved fields a bit to get bool fields after each other (better alignment)
Link view/derived table fields to a real table to be able to check whether the table record has been turned into a null row.
The Item_direct_view_ref wrapper now checks if the table has been turned into a null row.
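As an illustration with hypothetical objects, the wrapper matters when a merged view ends up on the NULL-complemented side of an outer join:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v2 AS SELECT b FROM t2;
  INSERT INTO t1 VALUES (1);
  -- If no row of v2 matches, references to v2.b must behave as part of a
  -- NULL-complemented row, which is what the wrapper now checks for.
  SELECT t1.a, v2.b FROM t1 LEFT JOIN v2 ON t1.a = v2.b;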
- Moved the definitions of the classes to store data from persistent
statistical tables into statistics.h, leaving in other internal
data structures only references to the corresponding objects.
- Defined class Column_statistics_collected derived from the class
Column_statistics. This is a helper class to collect statistics
on columns.
- Moved references to read statistics to TABLE_SHARE, leaving the
reference to the collected statistics in TABLE.
- Added a new clone method for the class Field that allows cloning
fields attached to table shares. It is used to create
fields for min/max values in the memory of the table share.
Also:
- Added procedures to allocate memory for statistical data in
the table share memory and in table memory.
Also:
- Added a test case demonstrating how ANALYZE could work in parallel
to collect statistics on different indexes of the same table.
- Added a test to demonstrate how two connections working
simultaneously could allocate memory for statistical data in the
table share memory (see the sketch below).
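A hedged sketch of the scenario these tests exercise, assuming hypothetical table/index names and the engine-independent statistics syntax:

  -- connection 1
  ANALYZE TABLE t1 PERSISTENT FOR COLUMNS () INDEXES (idx1);
  -- connection 2, at the same time
  ANALYZE TABLE t1 PERSISTENT FOR COLUMNS () INDEXES (idx2);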
The bug prevented UNION queries from being accepted as valid when their
non-first select clauses contained join expressions with degenerate
single-table nests.
The bug was introduced into mysql-5.5 code line by the patch for
bug 33204.
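A hedged illustration with hypothetical tables of the query shape that was rejected: a non-first SELECT whose join expression contains a parenthesized single-table nest.

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  SELECT a FROM t1
  UNION
  SELECT t2.a FROM (t2) LEFT JOIN t3 ON t2.a = t3.a;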
As part of the derived tables redesign, the values of VIEW_ALGORITHM_MERGE and
VIEW_ALGORITHM_TMPTABLE changed (former values 1 and 2 respectively, new values 5 and 9).
This led to the problem that views created with version 5.2 or earlier would not
work in all situations (e.g. "SHOW CREATE VIEW"), or with mysqldump.
The fix restores backward compatibility for the .frm file: algorithm={1,2} in the
.frm is converted to {5,9} when reading the .frm from disk, and backward compatible
values are stored when writing the .frm to disk.
Also allow correct processing of "invalid" .frms created with MariaDB 5.3/5.5 GA
releases (where the algorithm stored in memory matched the one stored in the .frm).
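A short sketch with a hypothetical view of what must keep working after the fix; the comments show the mapping between the on-disk and in-memory algorithm codes described above.

  CREATE ALGORITHM=MERGE VIEW v1 AS SELECT a FROM t1;
  SHOW CREATE VIEW v1;   -- must still report ALGORITHM=MERGE after upgrade
  -- on-disk (.frm) codes vs. in-memory values:
  --   MERGE:    1  <->  5
  --   TMPTABLE: 2  <->  9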
When a view/derived table is converted from merged to materialized, the
items from the used_item lists are replaced with items referring to
the fields of the result of the materialization. The problem appeared
with queries employing natural joins. Since the resolution of a natural
join was performed only once, the used_item list formed at the second
execution of the query lacked the references to the fields that were
used only in the equality predicates generated for the natural join.
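A hedged sketch with hypothetical tables of the ingredients involved (not an exact reproduction): a derived table using a natural join inside a prepared statement that is executed more than once.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, c INT);
  PREPARE stmt FROM
    'SELECT * FROM (SELECT * FROM t1 NATURAL JOIN t2) AS dt';
  EXECUTE stmt;   -- natural join columns resolved on the first execution
  EXECUTE stmt;   -- the second execution formed the incomplete used_item list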
- When doing join optimization, pre-sort the tables so that they mimic the execution
order we've had with 'semijoin=off'.
- That way, we will not get regressions when there are two query plans (the old and the
new) that have identical costs but different execution times (because of factors that
the optimizer was not able to take into account).
Added 'from_end' as an extra parameter to Field::unpack() to detect wrong 'from' data.
Change ha_archive::unpack_row() to detect wrong field lengths.
Replication code changed to detect wrong field information in events.
mysql-test/r/archive.result:
Added test case for lp:917689
sql/field.cc:
Added 'from_end' as an extra parameter to Field::unpack() to detect wrong 'from' data.
Removed unused 'unpack_key' functions.
sql/field.h:
Added 'from_end' as an extra parameter to Field::unpack() to detect wrong 'from' data.
Removed unused 'unpack_key' functions.
Removed some unneeded unpack() functions.
sql/filesort.cc:
Added buffer end parameter to unpack_addon_fields()
sql/log_event.h:
Added end of buffer argument to unpack_row()
sql/log_event_old.cc:
Added end of buffer argument to unpack_row()
sql/log_event_old.h:
Added end of buffer argument to unpack_row()
sql/records.cc:
Added buffer end parameter to unpack_addon_fields()
sql/rpl_record.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record.h:
Added end of buffer argument to unpack_row()
sql/rpl_record_old.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record_old.h:
Added end of buffer argument to unpack_row()
sql/table.h:
Added buffer end parameter to unpack()
storage/archive/ha_archive.cc:
Change ha_archive::unpack_row() to detect wrong field lengths.
This fixes lp:917689
The result of materialization of the right part of an IN subquery predicate
is placed into a temporary table. Each row of the materialized table is
distinct. A unique key over all fields of the temporary table is defined and
created. It allows performing key lookups into the table.
The table created for a materialized subquery can be accessed by key like
any other table. The function best_access_path() searches for the best access
method to join a table to a given partial join. With some WHERE conditions this
function considers the possibility of a ref_or_null access. If such access
employs the unique key on the temporary table then, when estimating
the cost of this access, the function tries to use the rec_per_key array. Yet
such an array is not built for this unique key, which causes a server crash.
Rows returned by the subquery that contain nulls don't have to be placed
into the temporary table, as they cannot match any row produced by the
left part of the subquery predicate. So all fields of the temporary table
can be defined as non-nullable. In this case any ref_or_null access
to the temporary table does not make any sense and it does not make sense
to estimate such an access.
The fix makes sure that the temporary table for a materialized IN subquery
is defined with columns that are all non-nullable. The fix also ensures that
any row with nulls returned by the subquery is not placed into the
temporary table.
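For orientation, a minimal hedged sketch with hypothetical tables of a materialized IN subquery; with the fix, the temporary table built for the subquery has only non-nullable columns and subquery rows containing NULLs are never stored in it.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT, d INT);
  INSERT INTO t2 VALUES (1, NULL), (2, 2);
  SET optimizer_switch = 'materialization=on,in_to_exists=off';
  -- The row (1, NULL) cannot match any (a, b) pair, so it is not placed
  -- into the materialized temporary table.
  SELECT * FROM t1 WHERE (a, b) IN (SELECT c, d FROM t2);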
The problem was that now we can merge derived tables (subqueries in the FROM clause).
Fix: in case of a detected conflict and the presence of a derived table "over" the table which caused the conflict, try the materialization strategy.
For single-table UPDATE/INSERT, added a deep check of single tables (single_table_updatable()).
For multi-table view INSERT, added an additional check of the target table (check_view_single_update()).
Multi-update was already correct.
Test suite for all cases added.
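A hedged sketch with hypothetical objects of the checks on view inserts: inserting through a join view is only acceptable when the affected columns map to a single underlying table.

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v1 AS SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.b;
  INSERT INTO v1 (a) VALUES (10);        -- allowed: only t1 is modified
  INSERT INTO v1 (a, b) VALUES (10, 20); -- rejected: spans two base tables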
Bug#13011410 CRASH IN FILESORT CODE WITH GROUP BY/ROLLUP
The assert in 13580775 is visible in 5.6 only,
but shows that all versions are vulnerable.
13011410 crashes in all versions.
filesort tries to re-use the sort buffer between invocations in order to save
malloc/free overhead.
The fix for Bug 11748783 (37359: FILESORT CAN BE MORE EFFICIENT)
added an assert that buffer properties (num_records, record_length) are
consistent between invocations. Indeed, they are not necessarily consistent.
Fix: re-allocate the sort buffer if properties change.
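A hedged sketch, with a hypothetical table, of the ingredients the tests combine (a partitioned table plus GROUP BY ... WITH ROLLUP); not the exact reproduction:

  CREATE TABLE t1 (a INT, b INT)
    PARTITION BY HASH (a) PARTITIONS 2;
  INSERT INTO t1 VALUES (1,1),(2,2),(3,3),(4,4);
  -- If filesort is invoked more than once for such a statement and the
  -- record properties differ between the runs, the sort buffer is now
  -- re-allocated instead of being blindly reused.
  SELECT a, SUM(b) FROM t1 GROUP BY a WITH ROLLUP;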
mysql-test/r/partition.result:
New tests.
mysql-test/t/partition.test:
New tests.
sql/filesort.cc:
If we already have allocated a sort buffer in a previous execution,
then verify that it is big enough for the current one.
sql/table.h:
Add sort_keys_size; Number of bytes allocated for the sort_keys buffer.
- Part 1 of the fix: for semi-join merged subqueries, delay calling child_join->optimize() until we're done with all
PS-lifetime optimizations in the parent.
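A hedged sketch with hypothetical tables of the construct in question: a semi-join merged subquery inside a prepared statement, where the child join must not be fully optimized before the parent's PS-lifetime optimizations are finished.

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  SET optimizer_switch = 'semijoin=on';
  PREPARE stmt FROM 'SELECT * FROM t1 WHERE a IN (SELECT a FROM t2)';
  EXECUTE stmt;
  EXECUTE stmt;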