change the size of core files.
Suppress the 'setrlimit could not change the size of the core files'
warning in mysql-test-run. We do not want core files on some of the
PushBuild hosts, and PushBuild itself does not set --core-files, so
that warning is expected.
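For reference, a minimal standalone sketch (POSIX, not the actual
mysql-test-run or server code) of the kind of setrlimit() call that
produces such a warning when raising the core-file limit is not permitted:

    #include <sys/resource.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main()
    {
      struct rlimit rl;
      if (getrlimit(RLIMIT_CORE, &rl) != 0)
        return 1;
      rl.rlim_cur = rl.rlim_max = RLIM_INFINITY;   // try to allow unlimited cores
      if (setrlimit(RLIMIT_CORE, &rl) != 0)        // fails e.g. with EPERM
        fprintf(stderr,
                "setrlimit could not change the size of core files: %s\n",
                strerror(errno));
      return 0;
    }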
is_last_prefix <= 0, file .\opt_range.cc.
SELECT ... GROUP BY on a bit field failed with an assertion if the
bit length of that field was not divisible by 8.
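The trap here is that a length in bits does not map onto whole bytes when
it is not divisible by 8. As an illustration only (a made-up helper, not
the server code), rounding a bit length up to the byte count it needs:

    #include <cstdio>

    // Hypothetical helper: number of bytes needed to hold 'bits' bits.
    // A BIT(15) value needs 2 bytes, not the 1 byte that 15 / 8 suggests.
    static unsigned bit_length_to_bytes(unsigned bits)
    {
      return (bits + 7) / 8;          // round up instead of truncating
    }

    int main()
    {
      const unsigned tests[] = {8, 12, 15, 16};
      for (unsigned bits : tests)
        printf("BIT(%u) -> %u byte(s)\n", bits, bit_length_to_bytes(bits));
      return 0;
    }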
Index lookup does not always guarantee that we can
simply remove the relevant conditions from the WHERE
clause. Possible reasons include conversion errors,
partial indexes, etc.
The optimizer was removing these parts of the WHERE
condition without any further checking, which led to
"false positives" when using indexes.
Fixed by checking the index reference conditions
(using WHERE) when using indexes with subqueries.
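A minimal sketch of the idea behind the fix, using toy types that merely
stand in for the server's internals: rows fetched through an index lookup
are still re-checked against the full condition before being accepted.

    #include <functional>
    #include <vector>

    // Toy stand-ins (hypothetical), not the server's Item/WHERE machinery.
    struct Row { int key; int payload; };
    using Condition = std::function<bool(const Row&)>;

    // Fetch rows whose key equals 'key' (the "index lookup"), but keep
    // applying the full WHERE condition: the lookup alone may be too
    // permissive, e.g. after a lossy constant conversion or with a
    // partial index.
    static std::vector<Row>
    index_lookup_checked(const std::vector<Row>& table, int key,
                         const Condition& where)
    {
      std::vector<Row> out;
      for (const Row& r : table)      // pretend this scan is an index access
        if (r.key == key && where(r))
          out.push_back(r);
      return out;
    }

    int main()
    {
      std::vector<Row> t = {{1, 10}, {1, 11}, {2, 20}};
      // WHERE key = 1 AND payload > 10: the index narrows to key = 1, but
      // the payload predicate must still be evaluated per fetched row.
      auto rows = index_lookup_checked(
          t, 1, [](const Row& r) { return r.payload > 10; });
      return static_cast<int>(rows.size());   // 1, not 2
    }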
8-bit escape, termination, and enclosure characters were silently
ignored by the SELECT INTO query, but the LOAD DATA INFILE algorithm
is 8-bit clean, so data was corrupted during encoding.
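A hedged sketch of what "8-bit clean" escaping means in practice (made-up
helper, not the server code): the escape, enclosure and terminator values
are treated as plain bytes in the 0..255 range, so an 8-bit escape byte
such as 0xBF must be honoured rather than silently dropped.

    #include <cstdio>
    #include <string>

    static std::string escape_field(const std::string& field,
                                    unsigned char escape_char,
                                    unsigned char enclosed_char,
                                    unsigned char terminated_char)
    {
      std::string out;
      for (unsigned char c : field)
      {
        // Any special byte gets the escape byte prepended, whatever its value.
        if (c == escape_char || c == enclosed_char || c == terminated_char)
          out.push_back(static_cast<char>(escape_char));
        out.push_back(static_cast<char>(c));
      }
      return out;
    }

    int main()
    {
      // ESCAPED BY an 8-bit byte (0xBF): the terminator inside the field
      // value comes out escaped instead of being passed through untouched.
      std::string s = escape_field("a\tb", 0xBF, '"', '\t');
      printf("%zu bytes after escaping\n", s.size());   // 4: a, 0xBF, tab, b
      return 0;
    }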
The problem: ha_partition::read_range_first() could return a record that is
outside of the scanned range. If that record happened to be in the next
range, it would satisfy the WHERE clause and appear in the output twice
(we would get it a second time when scanning that next range).
Fix:
Made ha_partition::read_range_first() check whether the returned record is
within the scanned range, like other read_range_first() implementations do.
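A minimal model of the fix with made-up types (not the real handler API):
after the first record at or past the range start is read, its key is
compared against the range end before it is returned.

    #include <optional>
    #include <vector>

    struct Record   { int key; };                  // toy record, key column only
    struct KeyRange { int min_key; int max_key; };

    // 'partition' plays the role of one partition scanned in key order.
    static std::optional<Record>
    read_range_first(const std::vector<Record>& partition, const KeyRange& range)
    {
      for (const Record& rec : partition)
      {
        if (rec.key < range.min_key)
          continue;                    // not yet inside the range
        if (rec.key > range.max_key)
          return std::nullopt;         // past the end: report "no match" instead
                                       // of leaking a record from the next range
        return rec;                    // first record that really is in range
      }
      return std::nullopt;
    }

    int main()
    {
      std::vector<Record> part = {{12}};   // only record lies beyond the range end
      KeyRange r = {1, 10};
      return read_range_first(part, r) ? 1 : 0;   // 0: record 12 is not returned
    }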
This bug report actually covers two bugs. The first one manifests itself in
an EXPLAIN SELECT query with nested subqueries that employs the filesort
algorithm. The whole SELECT under EXPLAIN is marked as UNCACHEABLE_EXPLAIN
to preserve some temporary structures for EXPLAIN. As a side effect, the
values of nested subqueries weren't cached and the subqueries were
re-evaluated many times. Each time a buffer for filesort was allocated but
never freed, because freeing occurs at the end of the topmost SELECT. Thus
all available memory was eaten up step by step and an OOM event occurred.
The second bug manifests itself in SELECT queries with conditions where
a subquery result is compared with a key field and the subquery itself also
has such a condition. When a long chain of such nested subqueries is
present, a stack overrun occurs. This happens because at some point the
range optimizer temporarily puts the PARAM structure on the stack. Its size
is about 8K, so the stack is exhausted very fast.
Now the subselect_single_select_engine::exec function allows subquery result
caching when the UNCACHEABLE_EXPLAIN flag is set.
Now the SQL_SELECT::test_quick_select function calls the check_stack_overrun
function for stack checking purposes to prevent a server crash.
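For the stack part, a hedged sketch of the technique (in the spirit of
check_stack_overrun(), with made-up limits, not the server's actual code):
record a stack address near thread entry, then estimate the stack already
used before allowing a deep routine to descend further.

    #include <cstddef>

    static const char*  stack_bottom = nullptr;     // set once near thread entry
    static const size_t STACK_LIMIT  = 512 * 1024;  // example limit only

    static bool stack_overrun(size_t margin_needed)
    {
      char here;
      size_t used = (stack_bottom > &here)
          ? static_cast<size_t>(stack_bottom - &here)   // stack grows downwards
          : static_cast<size_t>(&here - stack_bottom);  // stack grows upwards
      return used + margin_needed > STACK_LIMIT;  // true => return an error,
                                                  // do not recurse any deeper
    }

    int main()
    {
      char anchor;
      stack_bottom = &anchor;          // pretend this is the thread entry point
      // A routine with an ~8K local (like the PARAM structure) would check
      // the margin first and fail gracefully instead of crashing the server.
      return stack_overrun(8 * 1024) ? 1 : 0;
    }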
The 'INSTALL PLUGIN' statement doesn't work in the embedded server, as we
disable library loading there.
Fixed by enabling library loading (#define HAVE_DLOPEN), which also makes
UDFs work in the embedded server.
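For context, the functionality that HAVE_DLOPEN guards boils down to the
POSIX dlopen()/dlsym() calls. A minimal sketch with a hypothetical library
and symbol name (link with -ldl on most systems):

    #include <dlfcn.h>
    #include <cstdio>

    int main()
    {
      // "./my_plugin.so" and "plugin_init" are placeholders for illustration.
      void* handle = dlopen("./my_plugin.so", RTLD_NOW);
      if (handle == nullptr)
      {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
      }
      void* sym = dlsym(handle, "plugin_init");
      printf("plugin_init %s\n", sym != nullptr ? "found" : "not found");
      dlclose(handle);
      return 0;
    }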
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Made convert_constant_item() check if the constant expression is NULL
before attempting to store it in a field. Attempts to store NULL in a
NOT NULL field caused query errors.
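A minimal sketch of the idea with toy types (not the server's Item/Field
classes): test the constant for NULL first and skip the conversion, so the
NOT NULL field never sees a NULL store.

    #include <optional>

    struct ConstItem                   // toy constant that may evaluate to NULL
    {
      std::optional<long long> value;
      bool is_null() const { return !value.has_value(); }
    };

    struct NotNullField                // toy NOT NULL column: cannot accept NULL
    {
      long long stored = 0;
      bool store(long long v) { stored = v; return true; }
    };

    // Convert the constant only when it is not NULL; otherwise leave the
    // comparison alone instead of triggering a "cannot be null" error.
    static bool convert_constant(const ConstItem& item, NotNullField& field)
    {
      if (item.is_null())
        return false;                  // nothing to convert, and no error either
      return field.store(*item.value);
    }

    int main()
    {
      NotNullField f;
      ConstItem null_const{std::nullopt};              // e.g. 1 + NULL
      return convert_constant(null_const, f) ? 1 : 0;  // 0: no store attempted
    }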
checked for each record'
The problem was an incorrectly calculated length of the buffer used to
store a hexadecimal representation of an index map in
select_describe(). This could result in a buffer overrun and stack
corruption under some circumstances.
Fixed by correcting the calculation.
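A hedged sketch of the sizing rule involved (illustration only, with a
64-bit map, not the server's key_map code): an N-bit map printed in hex
needs at most (N + 3) / 4 digits plus a terminating NUL.

    #include <cinttypes>
    #include <cstdio>

    int main()
    {
      const unsigned MAP_BITS = 64;             // e.g. a 64-bit index map
      char buf[(MAP_BITS + 3) / 4 + 1];         // 16 hex digits + '\0'
      uint64_t map = UINT64_MAX;                // worst case: all bits set
      snprintf(buf, sizeof(buf), "%" PRIx64, map);
      printf("%s (buffer of %zu bytes)\n", buf, sizeof(buf));
      return 0;
    }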
storage engine system variables was not validated and an unexpected value
was assigned.
The check_func_enum function subtracted from an unsigned int value, and the
result could be negative. That result of type uint was compared with 0
after being cast to the signed long type. On architectures where the long
type is wider than the int type, the result of the comparison was
unexpected.
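A small standalone demonstration of the pitfall (not the server code):
subtracting past zero from an unsigned int wraps around, and casting the
wrapped value to long only yields a negative number where long has the
same width as int.

    #include <cstdio>

    int main()
    {
      unsigned int value = 0;
      long as_long = static_cast<long>(value - 1);  // wraps to UINT_MAX first

      if (as_long < 0)
        printf("negative as expected (32-bit long): %ld\n", as_long);
      else
        printf("unexpectedly positive (64-bit long): %ld\n", as_long);
      return 0;
    }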