some rollup rows (rows with NULLs for grouping attributes) if the GROUP BY
list contained constant expressions.
This happened because the results of constant expressions were not put
in the temporary table used for duplicate elimination. In fact, a constant
item from the GROUP BY list of a ROLLUP query can be replaced by an
Item_null_result object when a rollup row is produced.
Now the JOIN::rollup_init function wraps any constant item referenced in
the GROUP BY list of a ROLLUP query into an Item_func object of a special
class that is never detected as a constant item. This ensures the creation of
fields for such constant items in temporary tables and guarantees correct
results when the result of the rollup operation first has to be written
into a temporary table, e.g. when duplicate elimination is required.
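The effect can be sketched in standalone C++ (simplified, hypothetical
classes; the server's actual wrapper is a special Item_func subclass):

  #include <memory>

  // Hypothetical, simplified Item hierarchy -- not the server's classes.
  struct Item {
    virtual ~Item() = default;
    virtual long long val_int() const = 0;
    virtual bool const_item() const { return false; }
  };

  struct Item_int : Item {
    long long v;
    explicit Item_int(long long v_) : v(v_) {}
    long long val_int() const override { return v; }
    bool const_item() const override { return true; }  // a real constant
  };

  // Analogue of the wrapper installed by rollup_init(): it evaluates
  // exactly like the wrapped constant but never reports itself as
  // constant, so a temporary-table field is created for it.
  struct Item_const_wrapper : Item {
    std::unique_ptr<Item> arg;
    explicit Item_const_wrapper(std::unique_ptr<Item> a) : arg(std::move(a)) {}
    long long val_int() const override { return arg->val_int(); }
    bool const_item() const override { return false; }  // hide constness
  };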
INSERT...ON DUPLICATE KEY UPDATE may cause error 1032:
"Can't find record in ..." when inserting into an InnoDB table
with a unique index over a partial key on an underlying
UTF-8 string field.
This error occurs because INSERT...ON DUPLICATE used a wrong
procedure to copy string fields of multi-byte character sets
for the index search.
- unsigned flag was not handled correctly for a number of mathematical functions, which led to incorrect results
- passing large values as the number of decimals to ROUND() resulted in incorrect results and even server crashes in some cases
- reverted the fix and the testcase for bug #10083 as it violates the manual
- fixed some testcases which relied on broken ROUND() behavior
Before this fix, the parser would sometimes change where a token starts by
altering Lex_input_stream::tok_start, which later confused the code in
sql_yacc.yy that needs to capture the source code of a SQL statement,
for example to represent the body of a stored procedure.
This line of code in sql_lex.cc:
case MY_LEX_USER_VARIABLE_DELIMITER:
lip->tok_start= lip->ptr; // Skip first `
would skip the first backquote and cause the reported bug.
In general, the responsibility of sql_lex.cc is to *find* where tokens are
in the SQL text, but *not* to make up fake or incomplete tokens.
With a quoted label like `my_label`, the token starts on the first quote.
Extracting the token value should not change that (it did).
With this fix, the lexical analysis has been cleaned up to not change
lip->tok_start (in the case found for this bug).
The functions get_token() and get_quoted_token() now have an extra
parameter, used when some characters from the beginning of the token need
to be skipped when extracting the token value, as when extracting 'AB' from
'0xAB' for a HEX_NUM token.
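A simplified standalone illustration of the idea, with a hypothetical
signature rather than the server's exact one:

  #include <cstddef>
  #include <string>

  // Simplified lexer state: the full input and the token start offset.
  struct Lexer {
    std::string text;
    std::size_t tok_start = 0;   // where the token begins in the source
  };

  // Extract a token value of 'len' characters, skipping 'skip' leading
  // characters (e.g. skip = 2 to get "AB" out of "0xAB").  Crucially,
  // lex.tok_start is NOT moved: the source location stays intact.
  std::string get_token_value(const Lexer& lex, std::size_t skip,
                              std::size_t len) {
    return lex.text.substr(lex.tok_start + skip, len - skip);
  }

  // Usage: for the HEX_NUM token "0xAB", tok_start points at '0',
  // and get_token_value(lex, 2, 4) yields "AB".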
This exposed a bad assumption in Item_hex_string and Item_bin_string,
which has been fixed:
The assumption was that the string given, 'AB', was in fact preceded in
memory by '0x', which might be false (it can be preceded by "x'" and
followed by "'" -- or not be preceded by valid memory at all).
If a name is needed for Item_hex_string or Item_bin_string, the name is
taken from the original and true source code ('0xAB'), and assigned in
the select_item rule, instead of relying on assumptions related to how
memory is used.
The BETWEEN function was comparing DATE/DATETIME values either as ints or as
strings. Both methods have their disadvantages and may lead to a wrong
result.
Now the BETWEEN function checks whether all of its arguments have the STRING
result type and at least one of them is a DATE/DATETIME item. If so it sets up
two Arg_comparator objects to compare with the compare_datetime() comparator
and uses them to compare such items.
Added two Arg_comparator object members and one flag to the
Item_func_between class for the correct DATE/DATETIME comparison.
The Item_func_between::fix_length_and_dec() function now detects whether
it's used for DATE/DATETIME comparison and sets up newly added Arg_comparator
objects to do this.
The Item_func_between::val_int() now uses Arg_comparator objects to perform
correct DATE/DATETIME comparison.
The owner variable of the Arg_comparator class can now be set to NULL if the
caller wants to handle NULL values by itself.
Now the Item_date_add_interval::get_date() function adjusts cached_field_type
according to the detected type.
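Schematically, evaluation becomes a conjunction of two prepared
comparisons; a standalone sketch with a stand-in comparator type (not the
real Arg_comparator):

  #include <functional>

  // Stand-in for Arg_comparator: like strcmp, returns <0, 0 or >0.
  using Comparator = std::function<int()>;

  // BETWEEN as two prepared comparisons: expr >= low AND expr <= high.
  // Each comparator already holds its two operands and knows how to
  // compare them (e.g. with a DATE/DATETIME-aware compare_datetime()).
  bool between(const Comparator& cmp_expr_low,
               const Comparator& cmp_expr_high) {
    return cmp_expr_low() >= 0 && cmp_expr_high() <= 0;
  }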
DATE and DATETIME can be compared either as strings or as ints. Both
methods have their disadvantages. A string can contain a valid DATETIME value
but have insignificant zeros omitted, which makes it non-comparable with
other DATETIME strings. Comparison as int usually requires conversion from
the string representation, and in most cases the automatic conversion is
carried out in a wrong way, producing a wrong comparison result. Another
problem occurs when one tries to compare a DATE field with a DATETIME constant:
the constant is converted to DATE, losing its precision, i.e. the time part.
This fix addresses the problems described above by adding a special
DATE/DATETIME comparator. The comparator correctly converts DATE/DATETIME
string values to int when necessary and adds a zero time part (00:00:00)
to DATE values to compare them correctly to DATETIME values. Thanks to the
correct conversion, malformed DATETIME string values are correctly compared
to other DATE/DATETIME values.
As of this patch a DATE value equals a DATETIME value with a zero time part.
For example, '2001-01-01' equals '2001-01-01 00:00:00'.
The compare_datetime() function is added to the Arg_comparator class.
It implements the correct comparator for DATE/DATETIME values.
Two supplementary functions called get_date_from_str() and get_datetime_value()
are added. The first one extracts DATE/DATETIME value from a string and the
second one retrieves the correct DATE/DATETIME value from an item.
The new Arg_comparator::can_compare_as_dates() function is added and used
to check whether two given items can be compared by the compare_datetime()
comparator.
Two caching variables were added to the Arg_comparator class to speed up the
DATE/DATETIME comparison.
One more store() method was added to the Item_cache_int class to cache int
values.
The new is_datetime() function was added to the Item class. It indicates
whether the item returns a DATE/DATETIME value.
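The heart of the conversion can be sketched as packing both values into one
comparable integer, padding a bare DATE with a zero time part (hypothetical
helper, not the server's get_datetime_value()):

  #include <cstdint>

  // Pack Y-M-D h:m:s into one comparable integer: YYYYMMDDhhmmss.
  std::uint64_t pack_datetime(int y, int mo, int d,
                              int h = 0, int mi = 0, int s = 0) {
    std::uint64_t date = (std::uint64_t)y * 10000 + mo * 100 + d;
    std::uint64_t time = (std::uint64_t)h * 10000 + mi * 100 + s;
    return date * 1000000 + time;   // DATE gets time part 00:00:00
  }

  // With this normalization '2001-01-01' and '2001-01-01 00:00:00'
  // compare equal:
  //   pack_datetime(2001, 1, 1) == pack_datetime(2001, 1, 1, 0, 0, 0)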
Validity checks for nested set functions
were not taking into account that the enclosed
set function may be on a nest level that is
lower than the nest level of the enclosing set
function.
Fixed by:
- propagating max_sum_func_level
up the chain of enclosing set functions.
- updating the max_sum_func_level of the
enclosing set function when the enclosed set
function is aggregated above or on the same
nest level as that of the enclosing
set function.
- updating the max_arg_level of the enclosing
set function on a reference that refers to
an item above or on the same nest level
as that of the enclosing set function.
- Treating both Item_field and Item_ref as possibly
referencing items from outer nest levels.
INSERT into an InnoDB table may cause "ERROR 1062 (23000): Duplicate entry..."
errors or lost records after multi-row INSERT of the form:
"INSERT INTO t (id...) VALUES (NULL...) ON DUPLICATE KEY UPDATE id=VALUES(id)",
where "id" is an AUTO_INCREMENT column.
It happens because the InnoDB handler forgot to save the next insert id after
updating an auto_increment column with new values. As a result, the
last insert id stored inside the InnoDB dictionary tables differed from its
cached thd->next_insert_id value.
When fields were inserted in place of * in the select list, they were not
marked for checking under the ONLY_FULL_GROUP_BY mode.
The Field_iterator_table::create_item() function now marks newly created
items for check when in the ONLY_FULL_GROUP_BY mode.
The setup_wild() and the insert_fields() functions now maintain the
cur_pos_in_select_list counter for the ONLY_FULL_GROUP_BY mode.
The issue found with bug 25411 is due to the function skip_rear_comments(),
which damages the source code while implementing a workaround.
The root cause of the problem is in the lexical analyser, which does not
process special comments properly.
For special comments like:
[1] aaa /*!50000 bbb */ ccc
since 5.0 is a version older than the current code, the parser is in-lining
the content of the special comment, so that the query to process is
[2] aaa bbb ccc
However, the text of the query captured when processing a stored procedure,
stored function or trigger (or event in 5.1) can, after rebuilding it, be:
[3] aaa bbb */ ccc
which is wrong.
To fix bug 25411 properly, the lexical analyser needs to return [2] when
in-lining special comments.
In order to implement this, some preliminary cleanup is required in the code,
which is implemented by this patch.
Before this change, the structure named LEX (or st_lex) contained attributes
that belong to lexical analysis as well as attributes that represent the
abstract syntax tree (AST) of a statement.
Creating a new LEX structure for each statement (which makes sense for the
AST part) also re-initialized the lexical analysis phase each time, which
is conceptually wrong.
With this patch, the previous st_lex structure has been split in two:
- st_lex represents the Abstract Syntax Tree for a statement. The name "lex"
has not been changed to avoid a bigger impact in the code base.
- class lex_input_stream represents the internal state of the lexical
analyser, which by definition should *not* be reinitialized when parsing
multiple statements from the same input stream.
This change is a pre-requisite for bug 25411, since the implementation of
lex_input_stream will later improve to deal properly with special comments,
and this processing can not be done with the current implementation of
sp_head::reset_lex and sp_head::restore_lex, which interfere with the lexer.
This change set alone does not fix bug 25411.
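Conceptually the split looks like this (schematic types only, not the real
declarations):

  #include <cstddef>
  #include <string>

  // Lexer state: owned by the input stream, survives across statements.
  struct Lex_input_stream_sketch {
    std::string buf;           // the whole SQL input
    std::size_t ptr = 0;       // current read position
    std::size_t tok_start = 0; // start of the current token
  };

  // AST root: a fresh one is built for every statement.
  struct st_lex_sketch {
    // ...parsed representation of a single statement...
  };

  // Parsing loop: one stream object, many statement objects.
  //   Lex_input_stream_sketch stream{input};
  //   while (stream.ptr < stream.buf.size()) {
  //     st_lex_sketch ast;
  //     parse_one_statement(stream, ast);   // hypothetical
  //   }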
- Added script to generate application specific manifest.
- Added new CMake MACRO to add custom build events which will first
generate a manifest and then embed that manifest into an executable.
In multi_update::send_data(), the counter of matched rows was not correctly
incremented when, during insertion of a new row into a temporary table, the
table had to be converted from HEAP to MyISAM.
This fix changes the logic to increment the counter of matched rows in the following cases:
1. If the error returned from write_row() is zero.
2. If the error returned from write_row() is non-zero, is neither HA_ERR_FOUND_DUPP_KEY nor HA_ERR_FOUND_DUPP_UNIQUE, and a call to create_myisam_from_heap() succeeds.
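A compact restatement of those two cases (illustrative error-code values;
the real ones live in include/my_base.h):

  // Illustrative error codes only.
  enum { HA_ERR_FOUND_DUPP_KEY = 121, HA_ERR_FOUND_DUPP_UNIQUE = 123 };

  // Returns true when the matched-rows counter should be incremented.
  bool row_counts_as_matched(int write_err, bool myisam_conversion_ok) {
    if (write_err == 0)
      return true;                              // case 1: clean write
    return write_err != HA_ERR_FOUND_DUPP_KEY &&
           write_err != HA_ERR_FOUND_DUPP_UNIQUE &&
           myisam_conversion_ok;                // case 2: HEAP->MyISAM ok
  }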
This bug was introduced by the fix for bug#17212 (in 4.1). It is not
ok to call test_if_skip_sort_order since this function will
alter the execution plan. By contract it is not ok to call
test_if_skip_sort_order in this context.
This bug appears only in the case when the optimizer has chosen
an index for accessing a particular table but finds a covering
index that enables it to skip ORDER BY. This happens in
test_if_skip_sort_order.
When merging views into the enclosing statement
the ORDER BY clause of the view is merged into the
parent's ORDER BY clause.
However, when the VIEW is merged into a UNION
branch the ORDER BY should be ignored.
Use of ORDER BY for individual SELECT statements
implies nothing about the order in which the rows
appear in the final result because UNION by default
produces an unordered set of rows.
Fixed by ignoring the ORDER BY clause of the merged
view when it is expanded in a UNION branch.
Other threads can drop a table after the table names list is created and
before the table is opened. We then open a non-existing table and get an
ER_NO_SUCH_TABLE error. In this case we do not store the record into the
I_S table and clear the error.
NULL MERGE: this ChangeSet will be null merged into mysql-5.1
Fixes:
- Bug #26662: mysqld assertion when creating temporary (InnoDB) table on a tmpfs filesystem
Fix by not open(2)ing with O_DIRECT but rather calling fcntl(2) to set
this flag immediately after open(2)ing (see the sketch after this list).
This way an error caused by O_DIRECT not being supported can easily be
ignored.
- Bug #23313: AUTO_INCREMENT=# not reported back for InnoDB tables
- Bug #21404: AUTO_INCREMENT value reset when Adding FKEY (or ALTER?)
Report the current value of the AUTO_INCREMENT counter to MySQL.
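A minimal sketch of the fcntl(2) approach on Linux (error handling
trimmed):

  #include <fcntl.h>
  #include <stdio.h>

  /* Open the file normally, then try to turn on O_DIRECT afterwards,
     so a filesystem that rejects it (e.g. tmpfs) only costs us the
     flag, not the open itself. */
  int open_maybe_direct(const char *path)
  {
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
      return -1;
    int flags = fcntl(fd, F_GETFL);
    if (flags < 0 || fcntl(fd, F_SETFL, flags | O_DIRECT) < 0)
      perror("O_DIRECT not supported; continuing without it");
    return fd;
  }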
- Improve mysql_upgrade and add comments describing its logic
- Don't look for mysql and mysqlcheck randomly, use dir where mysql_upgrade
was started from
- Don't look for mysql_fix_privilege_tables.sql randomly, compile
in the mysql_fix_privilege_tables.sql file and use that to upgrade
the system tables of MySQL
- Check for any unexpected error returned from running the mysql_fix_privilege_tables SQL
- Fix bug#26639, bug#24248 and bug#25405
conditions when executing an equijoin query with a WHERE condition
containing a subquery predicate of the form join_attr NOT IN (SELECT ...).
To resolve the problem of correctly evaluating the expression
attr NOT IN (SELECT ...)
an array of guards is created to make it possible to filter out some
predicates of the EXISTS subquery into which the original subquery
predicate is transformed, in the cases when attr takes the NULL value.
If attr is defined as a field that cannot be NULL then such an array
is not needed and is not created.
However, if the field attr also occurs in an equijoin predicate t2.a=t1.b
and table t1 is accessed before table t2, then it may happen that the
EXISTS subquery is pushed down to the condition evaluated just after
table t1 has been accessed. In this case any occurrence of t2.a is
substituted with t1.b. When t1.b takes the value NULL, an attempt is
made to turn on the corresponding guard. This action caused a crash as
no guard array had been created.
Now the code of Item_in_subselect::set_cond_guard_var checks that the guard
array has been created before setting a guard variable on. Otherwise the
method does nothing. This cannot result in returning a row that should be
rejected, as the condition t2.a=t1.b will be checked later anyway.
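In sketch form (simplified standalone C++, not the actual class):

  #include <cstddef>

  // Simplified stand-in for Item_in_subselect: the guard array may
  // legitimately be absent (the column is NOT NULL), so setting a
  // guard must be a no-op in that case.
  struct In_subselect_sketch {
    bool* pushed_cond_guards = nullptr;   // created only when needed

    void set_cond_guard_var(std::size_t i, bool value) {
      if (pushed_cond_guards)             // the fix: check before writing
        pushed_cond_guards[i] = value;
      // otherwise do nothing; the equijoin condition t2.a=t1.b will
      // still reject the row later.
    }
  };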
The Item_outer_ref class, based on the Item_direct_ref class, was always used
to represent an outer field. But if the outer select is a grouping one and the
outer field isn't under an aggregate function which is aggregated in that
outer select, an Item_ref object should be used to represent such a field.
If the outer select in which the outer field is resolved isn't grouping, then
the Item_field class should be used to represent such a field.
This logic should also be used for an outer field resolved through its alias
name.
Now Item_field::fix_outer_field() uses Item_outer_ref objects to
represent aliased and non-aliased outer fields for grouping outer selects
only.
Now the fix_inner_refs() function chooses which class to use to access the
outer field: Item_ref or Item_direct_ref. An object of the chosen class
substitutes the original field in the Item_outer_ref object.
The direct_ref and the found_in_select_list fields were added to the
Item_outer_ref class.
Problem: the single-byte do_varstring1() function was called, which didn't
check the limit on the number of characters and checked only the number of
bytes.
Fix: added a multi-byte aware function, do_varstring1_mb(),
to limit by the number of characters.
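The idea behind the multi-byte variant, sketched for UTF-8 with a
hypothetical standalone helper:

  #include <cstddef>
  #include <string>

  // Copy at most max_chars UTF-8 characters (not bytes) from src.
  // A lead byte is any byte that is not of the form 10xxxxxx, so
  // counting lead bytes counts characters.
  std::string copy_max_chars_utf8(const std::string& src,
                                  std::size_t max_chars) {
    std::size_t chars = 0, i = 0;
    while (i < src.size()) {
      unsigned char b = static_cast<unsigned char>(src[i]);
      if ((b & 0xC0) != 0x80) {       // lead byte: a new character starts
        if (chars == max_chars)
          break;
        ++chars;
      }
      ++i;
    }
    return src.substr(0, i);          // cut on a character boundary
  }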
IGNORE/USE/FORCE INDEX hints were honored when choosing FULLTEXT
index.
With this fix these hints are ignored. For regular indexes we may
perform a table scan instead of an index lookup when IGNORE INDEX is
specified. We cannot do this for FULLTEXT in NLQ mode.
Support of views wasn't implemented for the TRUNCATE statement.
Now TRUNCATE on views has the same semantics as DELETE FROM view:
mysql_truncate() checks whether the table is a view and falls back
to delete if so.
In order to properly initialize LEX::updatable for a view,
st_lex::can_use_merged() now allows usage of merged views for the
TRUNCATE statement.
are used as arguments of the IN predicate.
Added a function to check the compatibility of row expressions. Made sure
that this function is called for Item_func_in objects by
fix_length_and_dec().
The function CRC32() returns an unsigned integer.
But the metadata (the unsigned flag) for the
function was set incorrectly.
As a result, type arithmetic based on the
function's metadata (like finding the concise
type of a temporary table column to hold the result)
returned incorrect results.
Fixed by returning correct type information.
This fix is based on code contributed by Martin Friebe
(martin@hybyte.com) on 2007-03-30.
The optimizer transforms DISTINCT into a GROUP BY
when possible.
It does that by constructing the same structure
(a list of ORDER instances) the parser makes when
parsing GROUP BY.
While doing that it also eliminates duplicates.
But if a duplicate is found it doesn't advance the
pointer into the ref_pointer_array, so the next
(and subsequent) ORDER structures point to the wrong
element in the SELECT list.
Fixed by advancing the pointer into ref_pointer_array
even in the case of a duplicate.
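A simplified standalone model of the loop and the fix (hypothetical names;
plain ints stand in for select-list items):

  #include <cstddef>
  #include <vector>

  // Build a GROUP BY list from the select list, skipping duplicates.
  // The cursor into ref_array must advance for EVERY select-list
  // element, even a duplicate that adds no entry; otherwise later
  // entries point at the wrong slot.
  std::vector<int**> build_group_list(std::vector<int*>& ref_array) {
    std::vector<int**> order;              // one entry per distinct item
    int** ref = ref_array.data();          // cursor into ref_array
    for (std::size_t i = 0; i < ref_array.size(); ++i, ++ref) {
      bool dup = false;
      for (int** o : order)
        if (**o == *ref_array[i]) { dup = true; break; }
      if (!dup)
        order.push_back(ref);              // record the current slot
      // before the fix, ++ref happened only in the !dup branch
    }
    return order;
  }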
Problem: setting/displaying @@LC_TIME_NAMES didn't distinguish between
GLOBAL and SESSION variable types - the SESSION variable
was always set/shown.
Fix: set either the global or the session value.
Also, "mysqld --lc-time-names" was added to set the "global default" value.
NO_AUTO_VALUE_ON_ZERO mode.
The table->auto_increment_field_not_null variable wasn't reset after
reading a row, which could lead to inserting a wrong value into the
auto-increment field of the following row.
The table->auto_increment_field_not_null variable is now reset right after a
row is written in the read_fixed_length() and the read_sep_field()
functions.
Removed a wrong setting of the table->auto_increment_field_not_null variable
in the read_sep_field() function.
In certain cases AFTER UPDATE/DELETE triggers on NDB tables that referenced
the subject table didn't see the results of the operation which caused the
invocation of those triggers. In other words, an AFTER trigger invoked as a
result of an update (or deletion) of a particular row saw the version of this
row from before the update (or deletion).
The problem occurred because the NDB handler in those cases postponed actual
update/delete operations to be able to perform them later as one batch.
This fix solves the problem by disabling this optimization for a particular
operation if the subject table has an AFTER trigger for this operation
defined.
To achieve this we introduce two new flags for the handler::extra() method:
HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH.
These are called if there exist AFTER DELETE/UPDATE triggers during a
statement that can potentially generate calls to delete_row()/update_row().
This includes multi_delete/multi_update statements as well as insert
statements that do a delete/update as part of ON DUPLICATE KEY processing.
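Schematic of the resulting batching decision (simplified, not the NDB
handler code):

  // The server announces via extra() that AFTER triggers need every
  // change applied immediately; the handler then bypasses its batch.
  enum Extra_flag { HA_EXTRA_DELETE_CANNOT_BATCH,
                    HA_EXTRA_UPDATE_CANNOT_BATCH };

  struct Batching_handler_sketch {
    bool no_delete_batching = false;
    bool no_update_batching = false;

    void extra(Extra_flag flag) {
      if (flag == HA_EXTRA_DELETE_CANNOT_BATCH) no_delete_batching = true;
      if (flag == HA_EXTRA_UPDATE_CANNOT_BATCH) no_update_batching = true;
    }

    // True when a delete must execute at once so an AFTER DELETE
    // trigger sees the row already gone.
    bool must_execute_delete_now() const { return no_delete_batching; }
    bool must_execute_update_now() const { return no_update_batching; }
  };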
IN/BETWEEN predicates in sorting expressions.
Wrong results may occur when the select list contains an expression
with an IN/BETWEEN predicate that differs from a sorting expression by
an additional NOT only.
Added the method Item_func_opt_neg::eq to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
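In sketch form (simplified standalone types, not the real Item hierarchy):

  #include <string>

  // Equality of [NOT] IN/BETWEEN items must compare the negation flag
  // too, not just the function name and arguments.
  struct Func_opt_neg_sketch {
    std::string name;   // "in", "between", ...
    bool negated;       // the NOT
    // args omitted for brevity

    bool eq(const Func_opt_neg_sketch& other) const {
      // base Item_func-style check plus the extra negation check
      return name == other.name && negated == other.negated;
    }
  };
  // With this, 'a IN (1,2)' no longer compares equal to
  // 'a NOT IN (1,2)'.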
The problem was that THD::db_access variable was not restored after
database switch in stored-routine-execution code.
The fix is to restore THD::db_access in this case.
Unfortunately, this fix requires additional changes,
because in prepare_schema_table(), called at the parsing stage, we checked
privileges. That was wrong according to our design, but this flaw hadn't
struck so far because it was masked. All privilege checks must be
done at the execution stage in order to be compatible with prepared
statements and stored routines. So, this patch also contains a patch for
prepare_schema_table(), which moves the checks to the execution phase.
The query-cache watch thread was continually allocating new thread entries
on the THD MEM_ROOT; these were not freed until server exit.
Fixed by using a simple array, auto-expanded as necessary.
LEFT JOIN
Fixed that in certain situations MATCH ... AGAINST returns false hits
for NULLs produced by LEFT JOIN when there is no fulltext index
available.