Improve the diagnostics of buffer pool accesses for B-trees,
so that the file names and line numbers of the real calls are shown
instead of the line of the buf_page_get() call in btr_block_get().
btr_page_get(): Replaced with a macro.
btr_block_get_func(): Renamed from btr_block_get(). Added file and line parameters.
btr_block_get(): A macro that passes __FILE__ and __LINE__ to
btr_block_get_func().
dict_truncate_index_tree(): Replace a btr_page_get() call
with btr_block_get(), since we are only latching the page, not accessing it.
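A minimal, self-contained sketch of the forwarding pattern used here
(hypothetical names, with a plain printf standing in for the real buffer
pool access):

  #include <cstdio>

  /* The _func variant receives the caller's position explicitly. */
  static void block_get_func(int page_no, const char* file, unsigned line)
  {
          std::printf("page %d requested from %s:%u\n", page_no, file, line);
  }

  /* The macro fills in __FILE__/__LINE__, so diagnostics show the real
  call site instead of a line inside the wrapper. */
  #define block_get(page_no) block_get_func((page_no), __FILE__, __LINE__)

  int main()
  {
          block_get(42);  /* reports this file and line, not the wrapper's */
  }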
This patch doesn't get rid of the need to acquire the dict_sys->mutex, but
reduces the need to keep the mutex locked for the duration of the call to
fsp_get_available_space_in_free_extents() from ha_innobase::info().
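A self-contained illustration of the locking change (hypothetical names;
not the actual ha_innobase::info() code): the mutex is held only for the
part that genuinely needs it, and the potentially slow free-extent
computation runs after it has been released.

  #include <mutex>

  std::mutex dict_mutex;              /* stands in for dict_sys->mutex */

  long long scan_free_extents()       /* stands in for the slow tablespace scan */
  {
          return 0;                   /* placeholder result */
  }

  long long available_space()
  {
          {
                  std::lock_guard<std::mutex> guard(dict_mutex);
                  /* ... read only the dictionary fields that need the mutex ... */
          }
          return scan_free_extents(); /* no longer under dict_mutex */
  }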
This is a port of revno:3548 from the builtin to the plugin.
rb://501 Approved by Jimmy Yang and Marko Makela.
removed and replaced by the comprehensive innodb-create-options.test.
It uses the rules listed in the comments at the top of that test.
This patch introduces these differences from previous behavior (sketched in code after the list):
1) KEY_BLOCK_SIZE=0 is allowed by InnoDB in both strict and non-strict mode
with no errors or warnings. It was previously used by the server to set
KEY_BLOCK_SIZE to undefined. (Bug#56628)
2) An explicit valid non-DEFAULT ROW_FORMAT always takes priority over a
valid KEY_BLOCK_SIZE. (Bug#56632)
3) Automatic use of COMPRESSED row format is only done if the ROW_FORMAT
is DEFAULT or unspecified.
4) ROW_FORMAT=FIXED is prevented in strict mode.
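A hedged, self-contained sketch of the precedence rules above
(illustrative names; not the actual InnoDB option-validation code):

  enum class RowFormat { DEFAULT_FMT, FIXED, COMPACT, DYNAMIC, COMPRESSED, REJECTED };

  RowFormat resolve(RowFormat requested, unsigned key_block_size, bool strict_mode)
  {
          if (strict_mode && requested == RowFormat::FIXED) {
                  return RowFormat::REJECTED;     /* rule 4 */
          }
          if (requested != RowFormat::DEFAULT_FMT) {
                  return requested;               /* rule 2: explicit ROW_FORMAT wins */
          }
          if (key_block_size > 0) {
                  return RowFormat::COMPRESSED;   /* rule 3 */
          }
          return RowFormat::DEFAULT_FMT;          /* rule 1: 0 leaves the format unset */
  }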
This patch also includes various formatting changes for consistency with
InnoDB coding standards.
Related Bugs
Bug#54679: ALTER TABLE causes compressed row_format to revert to compact
Bug#56628: ALTER TABLE .. KEY_BLOCK_SIZE=0 produces untrue warning or unnecessary error
Bug#56632: ALTER TABLE implicitly changes ROW_FORMAT to COMPRESSED
Replace the array of mutexes that used to protect
dict_index_t::stat_n_diff_key_vals[] with an array of rw-locks that protects
all the statistics-related members in dict_table_t and all of its indexes.
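A self-contained illustration of the technique (not the InnoDB code): a
reader-writer lock lets many readers look at the cached statistics
concurrently, while a statistics recalculation takes the lock exclusively.

  #include <shared_mutex>
  #include <vector>

  struct TableStats {
          mutable std::shared_mutex latch;        /* plays the role of the new rw-lock */
          std::vector<long>         n_diff_key_vals;

          std::vector<long> snapshot() const {
                  std::shared_lock<std::shared_mutex> s(latch);  /* readers share */
                  return n_diff_key_vals;
          }
          void recalc(std::vector<long> fresh) {
                  std::unique_lock<std::shared_mutex> x(latch);  /* writer excludes */
                  n_diff_key_vals = std::move(fresh);
          }
  };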
Approved by: Jimmy (rb://503)
columns again
This is a follow-up to Bug #54358. Not all occurrences of the bug were fixed.
We need to check all calls to btr_copy_externally_stored_field_prefix_low()
and do the right thing when the pointer to the off-page column is null
(full of zero bytes).
It turns out that only the call to btr_copy_externally_stored_field_prefix()
in row_sel_sec_rec_is_for_blob() needs to be changed.
For fetching complete off-page columns rather than prefixes, the function
btr_rec_copy_externally_stored_field() already checks if the pointer
is null (all-zero). Two of its callers (row_merge_copy_blobs() and
row_sel_fetch_columns()) are never executed as READ COMMITTED and can
rightfully assert that the fetch succeeded. The third caller,
row_sel_store_mysql_rec(), already does the right thing.
The calls in row_upd_ext_fetch() and trx_undo_page_fetch_ext() must
expect that the off-page column exists. Update and rollback are
locking operations, never READ UNCOMMITTED.
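A hedged, self-contained sketch of the kind of check the fix relies on
(illustrative; the real code compares the reference against a zero-filled
pattern): a BLOB reference that is still all zero means the off-page column
has not been written yet, so the caller must not try to fetch it.

  #include <cstddef>

  static bool field_ref_is_null(const unsigned char* field_ref, size_t len)
  {
          for (size_t i = 0; i < len; i++) {
                  if (field_ref[i] != 0) {
                          return false;   /* a real pointer to the off-page data */
                  }
          }
          return true;                    /* column not yet stored; skip the fetch */
  }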
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC function result caused
a server crash or wrong results, and
2) grouping by a function returning a blob caused
an unexpected "Duplicate entry" error and a wrong
result.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by unconditionally marking TIME_TO_SEC() as nullable,
regardless of the nullability of its arguments.
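A self-contained illustration of why the result must be treated as nullable
(hypothetical parser; not the server's TIME handling): even an argument
coming from a NOT NULL column may fail to parse as a TIME, in which case
the result is NULL.

  #include <cstdio>
  #include <optional>
  #include <string>

  std::optional<long> time_to_sec(const std::string& s)
  {
          int h = 0, m = 0, sec = 0;
          if (std::sscanf(s.c_str(), "%d:%d:%d", &h, &m, &sec) != 3) {
                  return std::nullopt;    /* invalid TIME value -> NULL result */
          }
          return h * 3600L + m * 60L + sec;
  }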
Details for the 2nd bug:
The server is unable to create indexes on blobs without an
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, just as for regular blob fields.
The lines below, which were added in the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the appropriate tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. Because of the GROUP BY clause we calculate the
group buffer length, see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real table and has type LONG.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In this case we would get the wrong behaviour of
list_contains_unique_index() back. To fix it we
use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
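A self-contained sketch of that adjusted nullability check (hypothetical
stand-ins for Field and TABLE_LIST; not the server code):

  struct FieldDef { bool real_maybe_null; };  /* nullability from the column definition */
  struct TableRef { bool outer_join; };       /* inner side of an outer join? */

  bool can_be_null(const FieldDef& f, const TableRef& t)
  {
          /* Field::maybe_null() also consults TABLE::maybe_null, which the fix
          no longer forces to FALSE, so test the definition plus the join. */
          return f.real_maybe_null || t.outer_join;
  }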
The problem is caused by the fix for Bug#49487 and became visible
after the fix for Bug#56679.
Items are cleaned up and set to the unfixed state after filling the derived table.
So we cannot rely on the item::fixed state in Item_func_group_concat::print,
and we cannot use the 'args' array, as the items there may already be cleaned up.
The fix is to always use the orig_args array of items, as it
should always contain the correct data.
On Windows, the parameter for number of bytes passed into WriteFile()
and ReadFile() is DWORD. Casting is needed to silence the warning on
64-bit Windows.
Also, add several asserts to ensure that the number of bytes
fits in 32 bits, even on 64-bit Windows.
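A hedged, self-contained sketch of the pattern (illustrative; not the
actual os0file.c code): assert that the 64-bit byte count fits in a DWORD,
then cast it for the Win32 call.

  #include <windows.h>
  #include <cassert>
  #include <climits>

  bool write_all(HANDLE file, const void* buf, unsigned long long n_bytes)
  {
          assert(n_bytes <= UINT_MAX);            /* must fit in a 32-bit DWORD */
          DWORD n = static_cast<DWORD>(n_bytes);
          DWORD written = 0;
          return WriteFile(file, buf, n, &written, NULL) && written == n;
  }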
This is for InnoDB Plugin.
rb://415
Approved by: Inaam
On Windows, the parameter for number of bytes passed into WriteFile()
and ReadFile() is DWORD. Casting is needed to silence the warning on
64-bit Windows.
Also, add several asserts to ensure that the number of bytes
fits in 32 bits, even on 64-bit Windows.
This is for built-in InnoDB.
rb://415
Approved by: Inaam
The problem is division by a constant value when
the result is out of the supported range.
The fix (sketched below):
- return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
- return 0 if the divisor is -1 for the MOD operator.
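A hedged, self-contained sketch of the two guards (illustrative; not the
actual Item_func_int_div / Item_func_mod code; LLONG_MIN plays the role of
LONGLONG_MIN):

  #include <climits>

  long long int_div(long long a, long long b)
  {
          if (a == LLONG_MIN && b == -1) {        /* the one signed case that overflows */
                  return LLONG_MIN;               /* result out of the supported range */
          }
          return a / b;
  }

  long long int_mod(long long a, long long b)
  {
          if (b == -1) {
                  return 0;                       /* avoid the overflowing LLONG_MIN % -1 */
          }
          return a % b;
  }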
rb://498
Fix handling of update undo logs at transaction commit. Previously, when
rseg->update_undo_list grew beyond 500, the update undo logs were
marked with state TRX_UNDO_TO_FREE, which should have been
TRX_UNDO_TO_PURGE.
Approved by: Sunny Bains
"Grantor" columns' data is lost when replicating mysql.tables_priv.
The slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executed on it.
With this patch, the current user is put into the query log event for all GRANT and
REVOKE statements, and the SQL thread uses the user from the query log event as the grantor.
Row events were wrongly applied to a temporary table with the same name.
But row events are generated only for base tables, as a temporary
table's data is never binlogged in row mode. Normally, a base table of the
same name cannot be updated if a temporary table has the same name.
But there are two cases which can generate row events on
the base table of the same name.
Case 1: 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate row events if it is unsafe.
Case 2: Dropping a transactional temporary table in a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the row events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a row event.
Fix assorted warnings that are generated in optimized builds.
Most of this is silencing warnings about variables that are set but unused.
This patch also introduces the MY_ASSERT_UNREACHABLE macro
which helps the compiler to deduce that a certain piece of
code is unreachable.
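A hedged, self-contained sketch of what such a macro can expand to
(illustrative; the actual definition in the MySQL headers may differ):
assert in debug builds and mark the point unreachable so optimized builds
stop warning about paths that cannot be taken.

  #include <cassert>
  #include <cstdlib>

  #if defined(__GNUC__) || defined(__clang__)
  #  define MY_ASSERT_UNREACHABLE() do { assert(0); __builtin_unreachable(); } while (0)
  #else
  #  define MY_ASSERT_UNREACHABLE() do { assert(0); abort(); } while (0)
  #endif

  int sign(int x)
  {
          if (x > 0) return 1;
          if (x < 0) return -1;
          if (x == 0) return 0;
          MY_ASSERT_UNREACHABLE();  /* no "control reaches end of function" warning */
  }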
Ensure that fdatasync is properly declared, since on Mac OS X the
function is available but there is no prototype. Also, port a
fix for a warning from the InnoDB plugin over to the builtin.
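A hedged sketch of the kind of declaration involved (illustrative; the
actual guard and configure checks may differ): supply the prototype
explicitly where the system headers do not.

  #if defined(__APPLE__)
  extern "C" int fdatasync(int fildes);  /* present in libc, missing from older headers */
  #endif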