Cleanup: remove TIME_FUZZY_DATE.
Introduce TIME_FUZZY_DATES, which means "very fuzzy, the resulting
value is only used for comparison. It can be an invalid date, that's
fine, as long as it can be compared".
Updated many test results (they're better now).
Item_func_min_max::get_date() did not check the
returned value against the fuzzy_date flags, so
it could return a bad value to a caller that
expects a good date (e.g. CONVERT_TZ).
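A hedged sketch of the failure shape (hypothetical query, not from
the patch): LEAST()/GREATEST() go through Item_func_min_max::get_date(),
and CONVERT_TZ() expects a valid datetime from its first argument:
  SELECT CONVERT_TZ(LEAST(CAST('2001-01-01' AS DATE), '0000-00-00'),
                    '+00:00', '+10:00');
  -- before the fix, the invalid '0000-00-00' could be passed on to
  -- CONVERT_TZ() even when the caller's fuzzy_date flags forbid it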
modified:
mysql-test/r/type_date.result
mysql-test/r/type_datetime.result
mysql-test/r/type_time.result
mysql-test/t/type_date.test
mysql-test/t/type_datetime.test
mysql-test/t/type_time.test
sql/item_func.cc
sql/item_timefunc.cc
sql/mysql_priv.h
sql/time.cc
mysql_derived_merge_for_insert() should not be called for views or derived tables that are not placed (directly or via other views) into the main SELECT_LEX "join list".
This could happen when using Aria for internal temporary files (the default) and using DISTINCT.
_ma_scan_restore_block_record() didn't work correctly if rows were inserted, updated or deleted on the handler
between the calls to _ma_scan_remember_block_record() and _ma_scan_restore_block_record().
The effect was that some DISTINCT queries that used remove_dup_with_compare() could fail.
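A hedged repro shape (assumed schema; DISTINCT over a BLOB column is
one way to force remove_dup_with_compare(), since BLOBs cannot be
deduplicated through an index on the temporary table):
  CREATE TABLE t1 (a INT, b BLOB);
  INSERT INTO t1 VALUES (1, 'x'), (1, 'x'), (2, 'y');
  SELECT DISTINCT a, b FROM t1;  -- could return wrong rows before the fix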
.bzrignore:
Ignore sql_yacc.hh
mysql-test/suite/maria/r/distinct.result:
Test case for MDEV-4280
mysql-test/suite/maria/t/distinct.test:
Test case for MDEV-4280
mysql-test/t/mysql.test:
Fixed test suite (we could get error -1 in some cases)
sql/sql_select.cc:
Break loop if restart_rnd_next() gives an error
storage/maria/ha_maria.cc:
scan_restore_pos() can return disk fault error.
storage/maria/ma_blockrec.c:
_ma_scan_remember_block_record() incorrectly updated scan.dir instead of scan_save.dir.
_ma_scan_restore_block_record() didn't work correctly if rows were inserted, updated or deleted on the handler
between the calls to _ma_scan_remember_block_record() and _ma_scan_restore_block_record().
Fixed by adding counters for row changes and re-reading the current scan page if changes had been made.
storage/maria/ma_blockrec.h:
scan_restore_pos() can return disk fault error.
storage/maria/ma_delete.c:
Increment row_changes
storage/maria/ma_scan.c:
scan_restore_pos() can return disk fault error.
storage/maria/ma_update.c:
Increment row_changes
storage/maria/ma_write.c:
Increment row_changes
storage/maria/maria_def.h:
scan_restore_pos() can return disk fault error.
When iterating over a list of conditions with List_iterator,
the function remove_eq_conds should skip all predicates that
replace a condition from the list. Otherwise it can fall into
infinite recursion.
Unified the treatment of the predicate
<non-nullable datetime field> IS NULL in outer joins with
that in inner joins.
Previously such a condition was transformed into the condition
<non-nullable datetime field> = 0 unless the field belonged
to an inner table of an outer join. In that case the predicate
was interpreted as for any other field.
Now if the field in the predicate <non-nullable datetime field> IS NULL
belongs to an inner table of an outer join, the predicate is
transformed into the disjunction
<non-nullable datetime field> = 0 OR <non-nullable datetime field> IS NULL.
This is fully compatible with the semantics of such predicates in 5.5.
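A hedged illustration (assumed schema: t2.d is a NOT NULL DATETIME
column, t1/t2 are hypothetical tables):
  SELECT t1.* FROM t1 LEFT JOIN t2 ON t1.a = t2.a
  WHERE t2.d IS NULL;
  -- t2.d IS NULL is now rewritten as (t2.d = 0 OR t2.d IS NULL),
  -- so it matches both zero dates and NULL-complemented rows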
This bug was the result of incompleteness of the patch for bug mdev-4177.
When an OR condition is simplified to a single conjunct it is merged
into the embedding AND condition. Multiple equalities are also merged,
and any field item involved in those equalities should acquire a pointer
to the multiple equality formed by this merge.
Removed "optimization" which caused preoblems on second execution of PS with string parameter in LIMIT clause.
Fixed test_bug43560 to be able to skipp it if connection is UNIX socket.
Fulltext search was initialized for all MATCH ... AGAINST items
at the end of JOIN::optimize(). But since 5.3 derived tables
are initialized lazily on first use, very late, in sub_select().
Skip Item_func_match::init_search() if the corresponding
table isn't open yet; repeat fulltext initialization for all
not-yet-initialized MATCH ... AGAINST items after creating derived tables.
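A hedged sketch of the affected shape (assumed: t1 has a FULLTEXT
index on f; the derived table is materialized lazily since 5.3):
  SELECT * FROM
    (SELECT * FROM t1 WHERE MATCH(f) AGAINST ('word')) AS dt;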
Previously, get_mm_tree() skipped the evaluation of this constant
and incorrectly proceeded. The correct behavior is to return a
NULL subtree, in line with the IF branch being fixed: when it
evaluates the constant it returns a value and doesn't continue
further.
Update 5.1 to replicate from 10.0 and
to show the server version (as of 10.0) correctly.
sql-common/client.c:
mdev:4088
sql/slave.cc:
use the version number, not just the first character of the version string
(we want 10 > 4, not "10" < "4").
- Let index_merge allocate table handlers on the quick select's MEM_ROOT,
not on the statement's MEM_ROOT.
This is crucial for big "range checked for each record" queries, where
an index_merge can be created and deleted many times during query execution.
We should not make O(#rows) allocations on the statement's MEM_ROOT.
In some cases, when using views, the optimizer incorrectly determined
possible join orders for queries with nested outer and inner joins.
This could lead to invalid execution plans for such queries.
Additional fixes for possible overflows in length-related
calculations in the 'spatial' implementations.
Checks added to the ::get_data_size() methods.
max_n_points decreased so that an object occupies less than 2GB; an
object of that size is practically inoperable anyway.
Item_func_make_set wasn't taking the first argument into account when
calculating maybe_null.
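A hedged illustration: the first argument of MAKE_SET() alone can make
the result NULL, so it must count towards maybe_null:
  SELECT MAKE_SET(NULL, 'a', 'b');   -- NULL
  SELECT MAKE_SET(1 | 2, 'a', 'b');  -- 'a,b'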
sql/item_strfunc.cc:
rewrite Item_func_make_set, removing separate storage of the first argument
sql/item_strfunc.h:
rewrite Item_func_make_set, removing separate storage of the first argument
With decimals=NOT_FIXED_DEC it is possible to have 'decimals' larger
than 'max_length'; this is not an error for temporal functions.
But when Item_func_numhybrid converts the value to DECIMAL_RESULT,
it must limit 'decimals' to a valid scale of a decimal number.
The bug was found by Alyssa Milburn.
If the number of points of a geometry feature read from the
binary representation is greater than 0x10000000, then
the (uint32) (num_points * 16) calculation overflows and drops the
high bits, which leads to various errors.
Fixed by an additional check: if (num_points > max_n_points).
This is a bug in the legacy code. It did not manifest itself because
it was masked by other bugs that were fixed by the patches for
mdev-4172 and mdev-4177.
Do not include BLOB fields in the key used to access the temporary
table created for a materialized view/derived table.
BLOB components are not allowed in keys.
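A hedged sketch (assumed schema: t2.b is a BLOB; the GROUP BY forces
materialization of the derived table):
  SELECT * FROM t1,
    (SELECT a, b FROM t2 GROUP BY a, b) AS dt
  WHERE t1.a = dt.a AND t1.b = dt.b;
  -- dt.b is a BLOB, so it must be left out of the key used to look up
  -- rows in dt's temporary table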
MySQL Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME USER VARIABLE = CRASH
and
MySQL Bug#14664077 SEVERE PERFORMANCE DEGRADATION IN SOME CASES WHEN USER VARIABLES ARE USED
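A hedged repro shape for the first bug (hypothetical, same user
variable used as both input and output of GROUP_CONCAT):
  SET @a = '';
  SELECT @a := GROUP_CONCAT(a ORDER BY @a) FROM t1;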
sql/item_func.cc:
don't use anything from Item_func_set_user_var::fix_fields()
in Item_func_set_user_var::save_item_result()
sql/sql_class.cc:
Call suv->save_item_result(item) *before* doing suv->fix_fields(), because
the former evaluates the item (and caches its value), while the latter marks
the user variable as non-const. The problem is that the item was fix_field'ed
when the user variable was const, and it doesn't expect it to change to non-const
in the middle of the execution.
mysys/errors.c:
revert upstream's fix. use a much simpler one
mysys/my_write.c:
revert upstream's fix. use a simpler one
sql/item_xmlfunc.cc:
useless, but ok
sql/mysqld.cc:
simplify upstream's fix
storage/heap/hp_delete.c:
remove upstream's fix.
we'll use a much less expensive approach.
The function remove_eq_cond removes the parts of a disjunction
that have been proved to be always false. As a result of this
removal the disjunction may be converted into a formula without
OR that must be merged into the AND formula that contains the
disjunction.
The merging of two AND conditions must take into account the
multiple equalities that may be part of each of them.
These multiple equalities must be merged and become part of the
AND object built as the result of the merge of the AND conditions.
Erroneously, the function remove_eq_cond lacked the code that
would merge multiple equalities of the merged AND conditions.
This could lead to confusing situations where, at the same AND
level, there were two multiple equalities with common members
while the list of equal items contained only some of these
multiple equalities.
This, in turn, could lead to incorrect operation of the
function substitute_for_best_equal_field when it tried to optimize
ref accesses. This resulted in forming invalid TABLE_REF objects
that were used to build look-up keys when materialized subqueries
were exploited.
When a subquery is materialized all conditions that are imposed
only on the columns belonging to the tables from the subquery
are taken into account.The code responsible for subquery optimizations
that employes subquery materialization makes sure to remove these
conditions from the WHERE conditions of the query obtained after
it has transformed the original query into a query with a semi-join.
If the condition to be removed is an equality condition it could
be added to ON expressions and/or conditions from disjunctive branches
(parts of OR conditions) in an attempt to generate better access keys
to the tables of the query. Such equalities are supposed to be removed
later from all the formulas where they have been added to.
However, erroneously, this was not done in some cases when an ON
expression and/or a disjunctive part of the OR condition could
be converted into one multiple equality. As a result some equality
predicates over columns belonging to the tables of the materialized
subquery remained in the ON condition and/or the a disjunctive
part of the OR condition, and the excuter later, when trying to
evaluate them, returned wrong answers as the values of the fields
from these equalities were not valid.
This happened because any standalone multiple equality (a multiple
equality that is not ANDed with any other predicates) lacked
the information about equality predicates inherited from the upper
levels (in particular, inherited from the WHERE condition).
The fix adds a reference to such information to every standalone
multiple equality.
The wrong result set returned by the left join query from
the bug test case happened due to several inconsistencies
and bugs in the legacy mysql code.
The bug test case uses an execution plan that employs a scan
of a materialized IN subquery from the WHERE condition.
When materializing such an IN subquery the optimizer injects
additional equalities into the WHERE clause. These equalities
express the constraints imposed by the subquery predicate.
The injected equality of the query in the test case happens
to belong to the same equality class, and a new equality
imposing a condition on the rows of the materialized subquery
is inferred from this class. Simultaneously the multiple
equality is added to the ON expression of the LEFT JOIN
used in the main query.
The inferred equality of the form f1=f2 is taken into account
when optimizing the scan of the rows of the temporary table
that is the result of the subquery materialization: only the
values of the field f1 are read from the table into the record
buffer. Meanwhile the inferred equality is removed from the
WHERE conditions altogether, as a constraint on the fields of
the temporary table it has already been used when filling this table.
This equality is supposed to be removed from the ON expression
when the multiple equalities of the ON expression are converted
into an optimal set of equality predicates. It is supposed to be
removed from the ON expression as an equality inferred only from
equalities of the WHERE condition. Yet, this did not happen,
due to the following bug in the code.
Erroneously, the code tried to build a multiple equality for the ON
expression twice: the first time when it called optimize_cond()
for the WHERE condition, and the second time when it called
this function for the HAVING condition. When executing
optimize_cond() for the WHERE condition, a reference
to the multiple equality of the WHERE condition is set
in the multiple equality of the ON expression. This reference
would later allow converting multiple equalities of the
ON expression into equality predicates. However, the
second call of build_equal_items() for the ON expression,
which happened when optimize_cond() was called for the
HAVING condition, reset this reference to NULL.
This bug fix blocks calling build_equal_items() for ON
expressions a second time. In general, this will be
beneficial for many queries, as it removes from ON
expressions any equalities that are to be checked with the
WHERE condition.
The patch also fixes two bugs in the list manipulation
operations and a bug in the function
substitute_for_best_equal_field() that resulted
in passing a wrong reference to the multiple equalities
of WHERE conditions when processing multiple
equalities of ON expressions.
The code of substitute_for_best_equal_field() and
the code of the helper function eliminate_item_equal()
were also streamlined and cleaned up.
Now the conversion of the multiple equalities into
an optimal set of equality predicates first produces
the sequence of all the equalities, processing multiple
equalities one by one, and only after this inserts
the equalities at the beginning of the other conditions.
The multiple changes in the output of EXPLAIN
EXTENDED are mainly the result of this streamlining,
but in some cases they are the result of the removal of
unneeded equalities from ON expressions. In
some test cases this removal was reflected in the
output of EXPLAIN as the disappearance of
"Using where" in some rows of the execution plans.
Analysis:
Range analysis detects that the subquery is expensive and doesn't
build a range access method. Later, the applicability test for loose
scan doesn't take that into account, and builds a loose scan method
without a range scan on the min/max column. As a result loose scan
fetches the first key in each group, rather than the first key that
satisfies the condition on the min/max column.
Solution:
Since there is no SEL_ARG tree to be used for the min/max column,
it is not possible to use loose scan if the min/max column is compared
with an expensive scalar subquery. Make the test for loose scan
applicability be in sync with the range analysis code by testing whether
the min/max argument is compared with an expensive predicate.
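A hedged sketch of the pattern (assumed schema: t1 has an index on
(a,b); the scalar subquery is deemed expensive by the optimizer):
  SELECT a, MIN(b) FROM t1
  WHERE b > (SELECT MAX(x) FROM big_table)  -- expensive, no range tree
  GROUP BY a;
  -- loose scan may not just fetch the first key of each group here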
This bug happened because the executor tried to use a wrong
TABLE_REF object when building access keys. It constructed
keys from the fields of a materialized table using a ref object
created to construct keys from the fields of the underlying
base table. This could happen only when the materialized table
was created for a non-correlated IN subquery and only
when the materialized table was used for lookups.
In this case we are guaranteed to be able to construct the
keys from the fields of tables that would be outer tables
to the tables of the IN subquery.
The patch makes sure that no ref objects constructed from
fields of materialized lookup tables are used.
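A hedged sketch of the affected shape (assumed schema; a
non-correlated IN subquery, materialized and used for lookups):
  SELECT * FROM t1
  WHERE (t1.a, t1.b) IN (SELECT x, MAX(y) FROM t2 GROUP BY x);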
Analysis:
The cause of the wrong result was that the optimizer
incorrectly chose min/max loose scan when it is not
applicable. The applicability test missed the case when
a condition on the MIN/MAX argument was OR-ed with a
condition on some other field. In this case, the MIN/MAX
condition cannot be used for loose scan.
Solution:
Extend the test in check_group_min_max_predicates() to check
that the WHERE clause is of the form "cond1 AND cond2",
where
cond1 - does not use min_max_column at all, and
cond2 - is an AND/OR tree with leaves of the form "min_max_column $CMP$ const",
where $CMP$ is a comparison operator or one of the functions BETWEEN, IS [NOT] NULL.
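A hedged illustration (assumed schema: t1 has an index on (a,b)):
  SELECT a, MIN(b) FROM t1
  WHERE b = 2 OR a = 'x'  -- MIN/MAX condition OR-ed with another field
  GROUP BY a;             -- loose scan is not applicable here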
Analysis:
Range analysis discovers that the query can be executed via loose index scan for GROUP BY.
Later, GROUP BY analysis fails to confirm that the GROUP operation can be computed via an
index because there is no logic to handle duplicate field references in the GROUP BY clause.
As a result the optimizer produces an inconsistent plan: it constructs a temporary table,
but on the other hand the group fields are not set to point there.
Solution:
Make loose scan analysis work in sync with the ORDER BY analysis. In the case of duplicate
columns loose scan will not be applicable. This limitation will be lifted in 10.0 by
removing duplicate columns.
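A hedged sketch of the problematic case (duplicate column in the
GROUP BY clause):
  SELECT a, MIN(b) FROM t1 GROUP BY a, a;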
An item could be reached by fix_fields() (via a reference) before the row
it belongs to (on the second execution), and fix_fields() for the row did
not follow the usual protocol for Items with arguments
(first check that the item is fixed, then call fix_fields()).
Item_row::fix_fields() fixed.
Allow only three failed change_user attempts per connection.
A successful change_user does NOT reset the counter.
tests/mysql_client_test.c:
make --error work for --change_user errors
This bug could result in returning 0 for the expressions of the form
<aggregate_function>(distinct field) when the system variable
max_heap_table_size was set to a small enough number.
It happened because the method Unique::walk() did not support
the case when more than one pass was needed to merge the trees
of distinct values saved in an external file.
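A hedged repro sketch (assumed: enough distinct values in t1.a that
the tree of distinct values spills to disk and needs a multi-pass
merge):
  SET max_heap_table_size = 16384;
  SELECT SUM(DISTINCT a) FROM t1;  -- could wrongly return 0 before the fix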
Backported a fix in grant_lowercase.test from mariadb 5.5.
Early evaluation of subqueries in the WHERE conditions on I_S.*_STATUS tables;
otherwise a subquery on the same table will try to acquire LOCK_status twice.
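A hedged sketch of the double-lock shape (hypothetical query):
  SELECT * FROM information_schema.global_status
  WHERE variable_value IN
    (SELECT variable_value FROM information_schema.global_status);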
The problem was that maybe_null of Item_row and of its components were out of
sync after update_used_tables() (so pushed_cond_guards was not initialized but
then requested).
The fix updates Item_row::maybe_null in update_used_tables().
Analysis
The less efficient plan was the result of a prior design decision:
to limit the evaluation of constant expressions during optimization to only
non-expensive ones. With this approach all stored procedures were considered
expensive, and were not evaluated during optimization. As a result, SPs didn't
participate in range optimization, which resulted in a plan with a table scan
rather than an index range scan.
Solution
Instead of considering all SPs expensive, consider expensive only those SPs
that are non-deterministic. If an SP is deterministic, the optimizer will
check whether it is constant, and may eventually evaluate it during optimization.
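A hedged illustration (assumed schema: t1 has an index on a):
  CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 5;
  SELECT * FROM t1 WHERE a < f1();
  -- f1() is deterministic, so it may now be evaluated during
  -- optimization and the condition can use a range scan on the index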
The original patch with the implementation of virtual columns
did not support INSERT DELAYED into tables with virtual columns.
This patch fixes the problem.
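A hedged sketch of the previously unsupported statement (virtual
column syntax as in MariaDB 5.2+; INSERT DELAYED needs an engine that
supports it, e.g. MyISAM):
  CREATE TABLE t1 (a INT, v INT AS (a + 1) VIRTUAL) ENGINE=MyISAM;
  INSERT DELAYED INTO t1 (a) VALUES (1);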
The bug could lead to a wrong estimate of the number of expected rows
in the output of the EXPLAIN commands for queries with GROUP BY.
This could be observed in the test case for LP bug 934348.
Problem: If the disk becomes full while writing to the binlog,
the server instance hangs until someone frees up space.
After the user frees up the disk space, the mysql server crashes
with an assert (m_status != DA_EMPTY).
Analysis: wait_for_free_space() is called in an
infinite loop, i.e., the server instance will hang until
someone frees up space, so there is no need to
set the status bit in the diagnostics area.
Fix: Replace my_error/my_printf_error with
sql_print_warning(), which prints the warning to the error log.
include/my_sys.h:
Provision to call sql_print_warning from mysys files
mysys/errors.c:
Replace my_error/my_printf_error with
sql_print_warning(), which prints the warning to the error log.
mysys/my_error.c:
implementation of my_printf_warning
mysys/my_write.c:
Adding logic to break infinite loop in the simulation
sql/mysqld.cc:
Provision to call sql_print_warning from mysys files
Fixed a wrong result for a query selecting from a MERGE view.
The problem was the lost ability to be NULL for the table of a left join if it
is a view/derived table.
It happened because setup_table_map() was called before we merged
the view or derived table.
Fixed by propagating the new maybe_null flag during Item::update_used_tables().
The changes in join_outer.test and join_outer_jcl6.test appeared because
IS NULL reported no used tables (i.e. constant) for an argument which could not be
NULL, and the new maybe_null flag was propagated for the IS NULL argument (Item_field)
because the table the Item_field belonged to changed its maybe_null status.
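A hedged sketch of the shape involved (assumed schema; a mergeable
view on the inner side of a LEFT JOIN must keep its ability to
produce NULL-complemented rows):
  CREATE VIEW v1 AS SELECT a, b FROM t2;
  SELECT t1.*, v1.b FROM t1 LEFT JOIN v1 ON t1.a = v1.a
  WHERE v1.b IS NULL;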
Details of BUG#11746142: CALLING MYSQLD WHILE ANOTHER
INSTANCE IS RUNNING, REMOVES PID FILE
Fix: Before removing the pid file, ensure it was created
by the same process; leave it intact otherwise.
sql/mysqld.cc:
delete_pid_file() introduced, which checks that the pid file
belongs to the process before removing it
DOS ATTACKS
Problem:
For a detailed description, see Bug#42502. This bug is a duplicate
of Bug#42502. The complete fix for Bug#42502 was not made as
proposed, hence the bug still persisted.
Fix:
Make the changes as originally proposed for the fix of Bug#42502,
which is to remove the memory allocation done before we actually
check for any errors.
sql/tztime.cc:
Remove the double allocation for tz_info
TO SIGNED
Problem:
When we are joining types (of fields) in a union, we usually
upgrade the data types to the largest present in the query.
In the case of MEDIUMINT, this was not happening.
Analysis:
When joined with the types LONG and LONGLONG, MEDIUMINT should get
upgraded to LONG and LONGLONG respectively.
W.r.t. the given query, the constant '1' is created as a LONGLONG
internally, with the SIGNED flag enabled. As a result, while combining
types for the field, LONGLONG along with MEDIUMINT gets converted
to LONG first. LONG with MEDIUMINT (of the third select) gets converted
to MEDIUMINT. The SIGNED flag would be that of the first field.
As a result, the final result would be SIGNED MEDIUMINT.
Fix:
While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with LONG
are now converted to LONGLONG and LONG respectively. Also made some
changes for FLOAT and DOUBLE.
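A hedged sketch of the merge chain described above (assumed schema:
t1.m and t2.m are MEDIUMINT columns):
  SELECT 1                 -- constant: LONGLONG, SIGNED
  UNION SELECT m FROM t1   -- LONGLONG + MEDIUMINT used to give LONG
  UNION SELECT m FROM t2;  -- LONG + MEDIUMINT used to give MEDIUMINT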
sql/field.cc:
Changed merge types for MEDIUMINT.
Analysis:
When the thread cache is enabled, thd->start_utime is not properly
initialized when a thread is picked from the thread cache.
This breaks the quota management mechanism:
THD::time_out_user_resource_limits() resets
m_user_connect->conn_per_hour to 0 based on thd->start_utime.
Fix:
Initialize start_utime when cached thread is reused.
Notes:
Enabled back tests which were disabled because of this issue.
Analysis:
The following call stack shows that it is possible to set Item_cache::value_cached
and the cached value without setting Item_cache::example.
#0 Item_cache_temporal::store_packed at item.cc:8395
#1 get_datetime_value at item_cmpfunc.cc:915
#2 resolve_const_item at item.cc:7987
#3 propagate_cond_constants at sql_select.cc:12264
#4 propagate_cond_constants at sql_select.cc:12227
#5 optimize_cond at sql_select.cc:13026
#6 JOIN::optimize at sql_select.cc:1016
#7 st_select_lex::optimize_unflattened_subqueries at sql_lex.cc:3161
#8 JOIN::optimize_unflattened_subqueries at opt_subselect.cc:4880
#9 JOIN::optimize at sql_select.cc:1554
The fix is to set Item_cache_temporal::example even when the value is
set directly by Item_cache_temporal::store_packed. This makes the
Item_cache_temporal object consistent.