While querying INFORMATION_SCHEMA, the check for a table's engine
used only the table name, not the schema name; so if different rows
had the same table name, the wrong one could be retrieved.
The result of the check affected the decision whether the contents
of the table should be dumped, and whether the DELAYED option can be used.
Fixed by adding a table_schema clause to the query.
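A minimal sketch of the kind of change described, in the style of the
mysqldump client code (variable names here are illustrative, not the
actual patch):

    /* Before, WHERE used only TABLE_NAME and could match a same-named
       table in another schema; restricting on TABLE_SCHEMA fixes that. */
    my_snprintf(query, sizeof(query),
                "SELECT ENGINE FROM INFORMATION_SCHEMA.TABLES"
                " WHERE TABLE_NAME = '%s' AND TABLE_SCHEMA = '%s'",
                table_name, database_name);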
Fixed "Not printing the value" with binlog-row-image=minimal.
Merged Rows_log_event::print_verbose_one_row() and log_event_print_value()
with MySQL 5.7
Added a flush after writing Table_map_log_event() to fix the wrong order
of lines in the output. This causes a lot of changes in some test results.
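A hedged sketch of the ordering fix (print_table_map() is a hypothetical
stand-in for the table-map printing step; PRINT_EVENT_INFO::head_cache is
the output cache used when printing events):

    /* Flush the table-map output before the row events are printed,
       so the lines appear in the order they were generated */
    print_table_map(print_event_info);
    my_b_flush_io_cache(&print_event_info->head_cache, 1);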
The bitmap implementation defines two template Bitmap classes: one is
optimized for 64-bit-wide bitmaps (the default), while the other is used
for all other widths.
To optimize the computations, the Bitmap<64> class defines its own member
functions for bitmap operations; the other one relies on the mysys bitmap
implementation (mysys/my_bitmap.c).
Issue 1:
In the non-64-bit Bitmap class, intersect() wrongly reset the received
bitmap while initialising a new local bitmap structure (bitmap_init()
clears the bitmap buffer); thus the received bitmap was getting cleared.
Fixed by initializing the local bitmap structure with a temporary buffer
and copying the received bitmap into the initialised structure afterwards.
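A minimal sketch of the fix, assuming the shape of Bitmap<N>::intersect()
in sql_bitmap.h (details may differ from the actual patch):

    void intersect(ulonglong map2buff)
    {
      ulonglong buff2;                  /* temporary buffer */
      MY_BITMAP map2;
      /* bitmap_init() clears the buffer it is given, so initialise
         with buff2 and only then copy the received bits into it */
      bitmap_init(&map2, (my_bitmap_map*) &buff2, sizeof(ulonglong) * 8, 0);
      buff2= map2buff;
      bitmap_intersect(&map, &map2);
    }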
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when supplied
on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger a build failure if the MAX_INDEXES value is
greater than 128.
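The guard against too-large values could look like the following
compile-time check (a sketch only; the actual check may equally live on
the cmake side):

    /* Refuse to build if MAX_INDEXES exceeds the supported maximum */
    #if MAX_INDEXES > 128
    #error MAX_INDEXES values greater than 128 are not supported
    #endif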
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
  skipping of tests whose output differs between MAX_INDEXES values.
* Introduced include/max_indexes.inc, which gets modified by cmake
  to reflect the MAX_INDEXES value used to build the server. This file
  simply sets an mtr variable '$max_indexes' to the MAX_INDEXES value,
  which is then consumed by the include files introduced above.
* Some tests (or portions of them) that depend on the MAX_INDEXES value
  have been moved to separate test files.
Problem:
A procedure that uses a stack of views was first executed without the
deepest view. It failed, but one view was cached (as well as the whole
procedure). Then the missing view was created concurrently and the
procedure was executed again.
At the beginning of the procedure's execution the view did not yet exist,
so the procedure was used as it was cached (the cache was not invalidated).
But by the time the deepest view was needed, it had already been created.
The problem is that thd->select_number was not advanced (the first view
was not parsed), so the second view gets the same number.
The fix is to keep thd->select_number correct even when cached views are
used.
In the proposed solution (kept simple on purpose) the counter can become
bigger than strictly necessary, but that is not a problem: the numbers
are still unique, and the situation is very rare.
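A hedged illustration of the approach (selects_in_cached_view is a
hypothetical variable for the number of SELECTs in the reused view body;
the exact placement in the code differs):

    /* Advance the per-statement SELECT counter even when a cached view
       body is reused, so anything parsed later gets a fresh, unique
       number.  Overshooting is harmless: numbers only need to be unique. */
    thd->select_number+= selects_in_cached_view;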
- Added --start option to mysqld which doesn't print notes to the log on
  startup. This helps to find errors in configure options more easily.
- Don't write [Note] entries to the log after we have aborted the server.
  This makes it easier to find what went wrong.
- Don't write out 'Changed limits' for max_open_files by default, as this
  didn't really change anything from what the user gave us.
- Don't write warnings about not using --explicit_defaults_for_timestamp
  (we don't have plans to deprecate the old behaviour).
- Moving Item_xxx_field declarations after Item_sum_xxx declarations,
  so Item_xxx_field constructors can be defined directly in item_sum.h
  rather than in item_sum.cc. This removes some duplicate code, e.g.
  initialization of the following members at constructor time:
  name, decimals, max_length, unsigned_flag, field, maybe_null.
- Adding Item_sum_field as a common parent for Item_avg_field and
Item_variance_field
- Deriving Item_sum_field directly from Item rather than Item_result_field,
  as Item_sum_field descendants do not need anything from Item_result_field.
- Removing the hybrid infrastructure from Item_avg_field and
  adding Item_avg_field_decimal and Item_avg_field_double instead,
  as the desired result type is already known at constructor time
  (not only at fix_fields time). This simplifies the code.
- Changing Item_avg_field_decimal::val_int() to call val_int_from_decimal()
instead of doing { return (longlong) rint(val_real()); }
This is the fix itself.
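The fix itself, sketched from the description above (surrounding class
details omitted):

    longlong Item_avg_field_decimal::val_int()
    {
      /* Convert through the decimal value instead of rounding a double,
         so no precision is lost on the way */
      return val_int_from_decimal();
    }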
The problem was that the GROUP BY code created Item_field objects that
referred to fields in the temporary tables used for GROUP BY.
Item_ref objects and the set_items_ref_array() call caused pointers to
temporary table fields to occur in many places.
This patch introduces Item_temptable_field, which can handle
item->print() calls made after the underlying table is freed.
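A minimal sketch of the idea, assuming an Item_field-like shape (the real
class and its print() behaviour may differ in detail):

    class Item_temptable_field: public Item_field
    {
    public:
      Item_temptable_field(THD *thd, Field *field)
       :Item_field(thd, field) {}
      void print(String *str, enum_query_type query_type)
      {
        /* Print only the saved column name; the underlying Field/TABLE
           may already have been freed at this point */
        str->append(name);
      }
    };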
EXPLAIN INSERT ... SELECT tried to use SELECT's execution path. This
caused a collection of problems:
- SELECT_DESCRIBE flag was not put into select_lex->options, which
means it was not in JOIN::select_options either (except for the first
member of the UNION).
- This caused UNION members to be executed. They would attempt to write
join output rows to the output.
- (Actual cause of the crash) the second join sibling would call
  result->send_eof() when it finished execution. Then,
  Explain_query::print_explain would attempt to write to the query output
  again, causing an assertion failure due to non-empty query output.
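A hedged sketch of the kind of change implied by the first point: set the
flag on every UNION member, not only the first (the loop follows the
SELECT_LEX list API; the actual patch may differ):

    /* Mark all members of the UNION as DESCRIBE-only so none of them
       executes and tries to write join output rows */
    for (SELECT_LEX *sl= unit->first_select(); sl; sl= sl->next_select())
      sl->options|= SELECT_DESCRIBE;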
Problem was in rewriting the left expression, which had two references to
it. Solved by making the subselect reference the main expression.
Item_in_optimizer can have something other than an Item_in_subselect
reference in its left part, so type casting without a check is dangerous.
Item::cols() should be checked after Item::fix_fields().
[EXPLAIN] INSERT .. SELECT creates a select_insert object.
select_insert calls handler->start_bulk_insert() during
initialization.
For MyISAM/Aria this requires that a matching
handler->end_bulk_insert() call is made.
Regular INSERT .. SELECT accomplishes this by calling either
select_result->send_eof() or select_result->abort_result_set().
EXPLAIN INSERT ... SELECT didn't call either, which resulted in
improper de-initialization of the handler object. Make it call
abort_result_set(), which invokes handler->end_bulk_insert().
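Sketched at the end of the EXPLAIN code path (the exact call site is
hypothetical; the call itself is the one named above):

    /* Ensure handler->end_bulk_insert() runs for [EXPLAIN] INSERT..SELECT
       even though no rows were written */
    if (sel_result)
      sel_result->abort_result_set();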
Item_func_hybrid_field_type did not return the correct field_type(),
cmp_type() and result_type() in some cases, because cached_result_type and
cached_field_type were set in independent pieces of the code and
did not properly match each other.
Fix:
- Removing Item_func_hybrid_result_type
- Deriving Item_func_hybrid_field_type directly from Item_func
- Introducing a new class Type_handler which guarantees that
field_type(), cmp_type() and result_type() are always properly
synchronized, and using the new class in Item_func_hybrid_field_type.
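A minimal sketch of the Type_handler idea (simplified; the real class has
more responsibilities): one object answers all three questions, so they
can never drift apart.

    class Type_handler
    {
    public:
      virtual enum_field_types field_type() const= 0;
      virtual Item_result result_type() const= 0;
      virtual Item_result cmp_type() const= 0;
      virtual ~Type_handler() {}
    };

    class Type_handler_int: public Type_handler   /* illustrative name */
    {
    public:
      enum_field_types field_type() const { return MYSQL_TYPE_LONGLONG; }
      Item_result result_type() const     { return INT_RESULT; }
      Item_result cmp_type() const        { return INT_RESULT; }
    };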
Item_string::clone_item() creates a new Item_string that
points to exactly the same buffer as the original one.
Later, Item_string::print() uses c_ptr() on the original Item_string,
which reallocs the original buffer, and the clone is left with
the old, freed buffer.
Refactored the code not to use c_ptr() in Item_string::print().
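A simplified sketch of the refactoring (quoting/escaping details of the
real print() are omitted): read the string through ptr()/length() instead
of c_ptr(), which may realloc the buffer that clones still point to.

    void Item_string::print(String *str, enum_query_type query_type)
    {
      str->append('\'');
      str->append(str_value.ptr(), str_value.length());
      str->append('\'');
    }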
The crash was caused by the range optimizer using RANGE_OPT_PARAM::min_key
(and max_key) to store keys. The buffer size was a good upper bound for
range analysis and partition pruning, but not for EITS selectivity
calculations.
Fixed by making these buffers variable-size. The sizes are calculated
from the [pseudo]indexes used for range analysis.
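A hedged sketch of the allocation change (longest_key_length() is a
hypothetical helper standing in for the size computation described above):

    /* Size the key buffers from the [pseudo]indexes actually used for
       range analysis instead of a fixed upper bound */
    size_t buf_size= longest_key_length(param);
    param->min_key= (uchar*) alloc_root(param->mem_root, buf_size);
    param->max_key= (uchar*) alloc_root(param->mem_root, buf_size);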
1. Removing the legacy code that disabled equal field propagation in cases
when the comparison is done as VARBINARY. This is now correctly handled by
the new propagation code in Item_xxx::propagate_equal_fields() and
Field_str::can_be_substituted_to_equal_item() (the bug fix).
2. Also removing the legacy (pre-MySQL-4.1) Arg_comparator methods
compare_binary_string() and compare_e_binary_string(), as VARBINARY
comparison is correctly handled in compare_string() and compare_e_string()
by the corresponding VARBINARY collation handler implemented in
my_charset_bin (not really a part of the bug fix).