Assertion failure `is_last_prefix <= 0', file .\opt_range.cc:
SELECT ... GROUP BY on a BIT field failed with an assertion if the
bit length of that field was not divisible by 8.
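A minimal statement of the kind that could hit this assertion (table and
column names are illustrative, not from the original report):

    CREATE TABLE t1 (b BIT(3), KEY (b));  -- bit length 3 is not divisible by 8
    INSERT INTO t1 VALUES (b'001'), (b'010');
    SELECT b FROM t1 GROUP BY b;          -- previously asserted in debug builds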
An index lookup does not always guarantee that we can
simply remove the relevant conditions from the WHERE
clause; possible reasons include conversion errors,
partial indexes, etc.
The optimizer was removing these parts of the WHERE
condition without any further checking, which led to
"false positives" when using indexes.
Fixed by re-checking the index reference conditions
(using WHERE) when indexes are used with subqueries.
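One hypothetical shape of the problem, using a prefix (partial) index
(schema invented for illustration):

    CREATE TABLE t1 (a VARCHAR(10), KEY k (a(3)));  -- prefix index on 3 chars
    INSERT INTO t1 VALUES ('foobaz');
    -- The index lookup can only compare the first 3 characters, so it
    -- accepts 'foobaz' as a match for 'foobar'; the original condition
    -- must still be re-checked to reject the row.
    SELECT 'foobar' IN (SELECT a FROM t1);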
8-bit escape, termination, and enclosure characters were silently
ignored by the SELECT ... INTO OUTFILE query, but the LOAD DATA INFILE
algorithm is 8-bit clean, so data was corrupted during encoding.
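For example (file name and separator byte are illustrative), a roundtrip
with an 8-bit field terminator would corrupt the data, because the export
dropped the byte while the import honored it:

    SELECT * FROM t1 INTO OUTFILE '/tmp/t1.txt' FIELDS TERMINATED BY x'BF';
    LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE t2 FIELDS TERMINATED BY x'BF';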
The problem: ha_partition::read_range_first() could return a record that is
outside of the scanned range. If that record happened to fall in the next
range, it would satisfy the WHERE clause and appear in the output twice
(we would get it the second time when scanning that next range).
Fix:
Made ha_partition::read_range_first() check whether the returned record is
within the scanned range, as other read_range_first() implementations do.
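A sketch of a query shape that could expose this (schema invented); an IN
list is scanned as two single-point ranges, so a record leaking past the
end of the first range could be returned again by the second:

    CREATE TABLE t1 (a INT, KEY (a))
      PARTITION BY HASH (a) PARTITIONS 2;
    INSERT INTO t1 VALUES (1), (2), (3);
    SELECT a FROM t1 WHERE a IN (1, 3);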
This bug is actually two bugs. The first one manifests itself on an EXPLAIN
SELECT query with nested subqueries that employs the filesort algorithm.
The whole SELECT under EXPLAIN is marked as UNCACHEABLE_EXPLAIN to preserve
some temporary structures for EXPLAIN. As a side effect, the values of
nested subqueries weren't cached and the subqueries were re-evaluated many
times. Each time a buffer for filesort was allocated but wasn't freed,
because freeing occurs only at the end of the topmost SELECT. Thus all
available memory was eaten up step by step and an OOM event occurred.
The second bug manifests itself on SELECT queries with conditions where
a subquery result is compared with a key field and the subquery itself also
has such a condition. When a long chain of such nested subqueries is present,
a stack overrun occurs. This happens because at some point the range
optimizer temporarily puts the PARAM structure on the stack. Its size is
about 8K, so the stack is exhausted very quickly.
Now the subselect_single_select_engine::exec function allows subquery result
caching when the UNCACHEABLE_EXPLAIN flag is set.
Now the SQL_SELECT::test_quick_select function calls the check_stack_overrun
function for stack checking purposes to prevent a server crash.
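As an illustration of the second bug, a long chain of nested subqueries of
the following shape (names invented, depth greatly shortened here) made the
range optimizer recurse, placing an ~8K PARAM structure on the stack at
each level:

    CREATE TABLE t1 (a INT, KEY (a));
    SELECT * FROM t1 WHERE a = (
      SELECT a FROM t1 WHERE a = (
        SELECT a FROM t1 WHERE a = (
          SELECT MIN(a) FROM t1)));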
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Fixed by making convert_constant_item() check whether the constant
expression is NULL before attempting to store it in a field, since
attempts to store NULL in a NOT NULL field caused the query errors.
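For example (table and column names are illustrative):

    CREATE TABLE t1 (id BIGINT NOT NULL);
    INSERT INTO t1 VALUES (1);
    -- 1 + NULL evaluates to NULL; this used to fail with error 1048
    -- instead of simply matching no rows.
    SELECT * FROM t1 WHERE id = 1 + NULL;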
EXPLAIN's 'Range checked for each record':
The problem was an incorrectly calculated length of the buffer used to
store a hexadecimal representation of an index map in
select_describe(). This could result in a buffer overrun and stack
corruption under some circumstances.
Fixed by correcting the calculation.
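For reference, the map in question is the bitmap printed in EXPLAIN's Extra
column; a join like the following (schema invented) may produce it:

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT, KEY (b));
    -- Extra for t2 may read: Range checked for each record (index map: 0x1)
    EXPLAIN SELECT * FROM t1, t2 WHERE t2.b > t1.a;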
The value of storage engine system variables was not validated, and an
unexpected value could be assigned.
The check_func_enum function used subtraction from a uint value, with a
possibly negative result stored back as uint. That result was compared
with 0 after casting to the signed long type. On architectures where the
long type is wider than the int type, the result of the comparison was
unexpected.
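An illustrative assignment of the affected class (the engine name is a
placeholder): before the fix, on such architectures an invalid value could
pass validation instead of being rejected.

    SET SESSION storage_engine = 'no_such_engine';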
The fact that a TIMESTAMP field will be set to NOW on UPDATE wasn't shown
by the SHOW command or reported to a client through connectors. This led to
problems in the ODBC connector and might lead to user confusion.
A new field flag called ON_UPDATE_NOW_FLAG is added.
Constructors of Field_timestamp set it when a field should be set to NOW
on UPDATE.
The get_schema_column_record function now reports whether a timestamp field
will be set to NOW on UPDATE.
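For example (table name is illustrative):

    CREATE TABLE t1 (
      ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
    );
    -- The Extra column now reports "on update CURRENT_TIMESTAMP" for ts.
    SHOW COLUMNS FROM t1;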
Columns in the HAVING clause can reference the GROUP BY and
SELECT columns. There can be "table" prefixes when
referencing these columns, and these "table" prefixes
in HAVING use the table alias if one is available.
This means that table aliases are subject to the same
storage rules as table names and depend on
lower_case_table_names in the same way that table
names do.
Fixed by:
1. Treating table aliases as table names
and making them lowercase when printing out the SQL
statement for view persistence.
2. Using case-insensitive comparison for table
aliases when requested by lower_case_table_names.
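A sketch of the affected pattern, assuming the server runs with
lower_case_table_names=1 (identifiers invented):

    CREATE VIEW v1 AS
      SELECT MyAlias.b, COUNT(*) AS cnt
      FROM t1 AS MyAlias
      GROUP BY MyAlias.b
      HAVING MyAlias.b > 0;
    -- The stored view text now lowercases the alias like a table name,
    -- so the HAVING prefix still resolves when the view is used.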