functions in queries
Using MAX()/MIN() on a table with disabled indexes (disabled by ALTER TABLE)
resulted in error 124 (wrong index) from the storage engine.
The problem was that the optimizer used a disabled index to optimize
MAX()/MIN(). Normally it must skip disabled indexes and perform a
table scan.
This patch makes the MIN/MAX optimization skip disabled indexes.
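A minimal sketch of the failure (table and column names are hypothetical):

    CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1), (2), (3);
    ALTER TABLE t1 DISABLE KEYS;
    SELECT MAX(a) FROM t1;  -- used to fail with error 124;
                            -- now answered by a table scan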
Backport Valgrind suppression from mysql-5.1:
D 1.4 05/11/23 22:44:54+02:00 monty@mysql.com 5 4 12/0/154
P mysql-test/valgrind.supp
C Remove warning that may happen because threads die in different order
schemas
The function check_one_table_access(), called to check access to tables in
SELECT/INSERT/UPDATE, was doing additional checks/modifications that don't hold
in the context of setup_tables_and_check_access().
That's why check_one_table_access() was split in two: the functionality needed by
setup_tables_and_check_access() went into the new check_single_table_access(),
while the rest of the functionality stays in check_one_table_access(), which is
made to call the new check_single_table_access() function.
The length of the prefix of the pattern string in the LIKE predicate that
determined the index range to be scanned was calculated incorrectly for
multi-byte character sets.
As a result, in 4.1 the scanned range was wider than necessary
if the prefix contained any multi-byte characters.
In 5.0 it additionally caused some rows to be missing from the result set.
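A hedged illustration (schema and pattern are made up):

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8, KEY (a));
    -- The index range to scan is derived from the prefix 'пре';
    -- its length was miscalculated because each of these Cyrillic
    -- characters occupies two bytes in utf8.
    SELECT a FROM t1 WHERE a LIKE 'пре%';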
function crashes server".
Attempts to execute a prepared multi-delete statement which involved a trigger or
stored function caused server crashes (the same happened for such statements
included in stored procedures when one tried to execute them more than once).
The problem was caused by yet another incorrect usage of the check_table_access()
routine (the latter assumes that the table list which it gets as an argument
corresponds to the value of LEX::query_tables_own_last). We solve this problem by
temporarily adjusting the LEX::query_tables_own_last value when we call
check_table_access() for LEX::auxilliary_table_list (a better solution is too
intrusive and should be done in 5.1).
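A sketch of a statement shape that used to crash (tables, trigger, and user
variable are illustrative only):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    CREATE TRIGGER t1_ad AFTER DELETE ON t1
      FOR EACH ROW SET @deleted = IFNULL(@deleted, 0) + 1;
    PREPARE stmt FROM 'DELETE t1, t2 FROM t1, t2 WHERE t1.a = t2.a';
    EXECUTE stmt;  -- crashed here before the fix
    EXECUTE stmt;  -- ... likewise on repeated execution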
Added test case for bug#18759 Incorrect string to numeric conversion.
select.test:
Added test case for bug#18759 Incorrect string to numeric conversion.
item_cmpfunc.cc:
Cleanup after removal of the fix for bug#18360.
Fixes bug#17264. For ALTER TABLE on win32 to complete successfully, a
TL_WRITE (=10) lock is used instead of TL_WRITE_ALLOW_READ (=6); however, in the
InnoDB handler TL_WRITE was lifted to TL_WRITE_ALLOW_WRITE, which caused a
race condition when several clients ran ALTER TABLE simultaneously.
There was an incomplete reset of the name resolution context, which caused
INSERT ... SELECT ... JOIN statements not to resolve against the joint row type
calculated for the join.
Removed the redundant re-initialization of the context, because
mysql_insert_select_prepare() now correctly saves/restores the context.
tables
Currently in INSERT ... SELECT ... LIMIT ... the server uses a
temporary table to store the results of SELECT ... LIMIT ... and then
uses that table as the source for the INSERT. The problem is that in some
cases it actually skips the LIMIT clause in doing so and materializes the
whole SELECT result set regardless of the LIMIT.
This fix limits the filling of the temporary table to only as many rows
as will actually be used, by propagating the LIMIT value.
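For illustration (hypothetical table), the temporary table comes into play
when the target table also appears in the SELECT part:

    CREATE TABLE t1 (a INT);
    -- The SELECT result must be buffered in a tmp table because t1 is
    -- both source and target; with the fix at most 10 rows are buffered.
    INSERT INTO t1 SELECT * FROM t1 LIMIT 10;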
Certain updates of a table joined to itself resulted in unexpected
behavior.
The problem was that the record cache was mistakenly enabled for
self-joined table updates. Normally the record cache must be disabled
for such updates.
Fixed the wrong condition in the code that determines whether to use the
record cache for self-joined table updates.
Only MyISAM tables were affected.
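A minimal self-join update of the affected kind (schema is made up):

    CREATE TABLE t1 (id INT, parent INT) ENGINE=MyISAM;
    -- Reading t1 through the record cache while writing t1 in the same
    -- statement is unsafe; the cache is now left disabled here.
    UPDATE t1 AS a, t1 AS b SET a.parent = b.id WHERE a.id = b.parent;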
INSERT triggers".
In cases when REPLACE was internally executed via update and the table had
on update (on delete) triggers defined, we exposed the fact that such an
optimization was used by calling the on update (and not calling the on delete)
triggers.
Such behavior contradicts our documentation, which describes REPLACE as
INSERT with an optional DELETE.
This fix simply disables the optimization for tables with on delete triggers.
The optimization is still applied for tables which have on update but
no on delete triggers; we just don't invoke the on update triggers in this case
and thus don't expose information about the optimization to the user.
Also added test coverage for values returned by ROW_COUNT() function (and
thus for values returned by mysql_affected_rows()) for various forms of
INSERT.
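A sketch of the visible behavior (names invented); with the on delete trigger
present the update shortcut is no longer taken, and ROW_COUNT() reports the
documented delete-plus-insert:

    CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
    CREATE TRIGGER t1_bd BEFORE DELETE ON t1 FOR EACH ROW SET @d = 1;
    INSERT INTO t1 VALUES (1, 1);
    REPLACE INTO t1 VALUES (1, 2);
    SELECT ROW_COUNT();  -- 2 (one row deleted + one inserted)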
resulted in a wrong error message.
The nest_level counter indicates the depth of nesting for a subselect. It is
needed to properly resolve aggregate functions in nested subselects. Obviously
it shouldn't be incremented for UNION parts because they have the same level of
nesting. This counter was incremented by 1 in the mysql_new_select() function
for any new select and wasn't decremented for UNION parts. This resulted in
wrongly reported error messages.
Now the nest_level counter is decremented by 1 for every UNION part.
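A hedged sketch of the affected query shape (table name invented); the
aggregate inside the subquery has to be attributed to the outer select, which
requires consistent nest_level values across both UNION parts:

    CREATE TABLE t1 (a INT);
    -- Before the fix the UNION part below carried an inflated
    -- nest_level, and resolving MAX(t1.a) against the outer select
    -- could produce a misleading error message.
    SELECT a FROM t1
    WHERE a IN (SELECT MAX(t1.a) FROM t1 AS x UNION SELECT 1);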
Field::eq() considered instances of Field_bit that differ only in
bit_ptr/bit_ofs to be equal. This caused the equality conditions optimization
(build_equal_items_for_cond()) to make bad field substitutions that resulted
in wrong predicates.
Field_bit requires an overloaded eq() function that checks bit_ptr/bit_ofs
in addition to what Field::eq() checks.
This bug in Field_string::cmp resulted in a wrong comparison
with keys in partial indexes over multi-byte character fields.
For example, given a field declared as
  a varchar(16) collate utf8_unicode_ci
INDEX(a(4)) is such a partial index.
Wrong key comparisons could lead to wrong result sets if
the selected query execution plan used a range scan by
a partial index over a utf8 character field.
This also caused wrong results in many other cases.
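A hedged illustration of such an index and query (schema invented):

    CREATE TABLE t1 (
      a VARCHAR(16) CHARACTER SET utf8 COLLATE utf8_unicode_ci,
      INDEX (a(4))
    );
    INSERT INTO t1 VALUES ('aaaa'), ('zzzz');
    -- A range scan driven by the 4-character prefix key compared keys
    -- incorrectly, so rows could be missed from the result.
    SELECT a FROM t1 WHERE a BETWEEN 'a' AND 'z';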
INSERT DELAYED should not maintain its own private auto-increment
counter, because doing so assumes that other threads cannot insert
into the table while the INSERT DELAYED thread is inserting, which is
a wrong assumption.
So the start of processing of a batch of INSERT rows in the
INSERT DELAYED thread must be treated as a start of a new statement
and cached next_insert_id must be cleared.
can lead to a wrong result.
All date/time functions have the STRING result type, thus their results are
compared as strings. The string date representation allows a user to skip
some leading zeros. This can lead to a wrong comparison result if a date/time
function result is compared to such a string constant.
The idea behind this bug fix is to compare the results of date/time functions
and date/time constants as ints, because that date/time representation is
more exact. To achieve this, agg_cmp_type() is changed to take into
account that a date/time field or a date/time item should be compared
as an int.
This bug fix is partially backported from 5.0.
The agg_cmp_type() function now accepts THD as one of its parameters.
In addition, it now checks whether a date/time field/function is present in the
list. If so, it tries to coerce all constants to INT to make the date/time
comparison return a correct result. The field for the constant coercion is
taken from the Item_field or constructed from the Item_func; in the latter case
the constructed field is freed after the conversion of all constant items.
Otherwise the result is the same as before: aggregated with the help of the
item_cmp_type() function.
The part of Item_func_between::fix_length_and_dec() that converted
date/time constants to int when possible has been removed; this is now
done by the agg_cmp_type() function.
The new function result_as_longlong() is added to the Item class.
It indicates that the item is a date/time item whose result can be
compared as an int. Such items are date/time fields/functions.
Correct val_int() methods are implemented for the classes Item_date_typecast,
Item_func_makedate, Item_time_typecast and Item_datetime_typecast. All these
classes are derived from Item_str_func, and Item_str_func::val_int() converts
its string value to int without regard to the date/time type of these items.
The Arg_comparator::set_compare_func() and Arg_comparator::set_cmp_func()
functions are changed to substitute the result type of an item with INT_RESULT
if the item is a date/time item and the other item is a constant. This is done
to get a correct result for comparisons like date_time_function() = string_constant.
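A small example of the comparison in question (table invented): as strings,
'2006-6-1' and '2006-06-01' differ, but as ints both are 20060601.

    CREATE TABLE t1 (d DATE);
    INSERT INTO t1 VALUES ('2006-06-01');
    -- STRING comparison used to miss the row; INT comparison finds it.
    SELECT * FROM t1 WHERE DATE(d) = '2006-6-1';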
There was a wrong determination of the DB name (which is
not always the one in TABLE_LIST, because derived tables
may be calculated using temp tables that have their db name
set to "").
The fix determines the database name according to the type
of table reference, and calls the function check_access()
with the correct db name so the correct set of grants is found.
The is_null value was initialized once and thereafter only set to indicate
NULL, and never unset to indicate not-NULL.
Now is_null is set to false for non-NULL values, in addition to being set to
true when the value in question is null.
query
Problem:
There was a wrong context assigned to the columns that are added in insert_fields()
when expanding a '*'. When this is done in a prepared statement, it causes
fix_fields() to fail to find the table that these columns reference.
Actually, the right context is set in setup_natural_join_row_types(), called at the
end of setup_tables(). However, when executed in the context of a prepared statement,
setup_tables() resets the context, but setup_natural_join_row_types() was not
setting it to the correct value, assuming it had already done so.
Solution:
The top-most, left-most NATURAL/USING join must be set as the
first_name_resolution_table in the context even when operating on prepared statements.
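A sketch of the failing pattern (tables are illustrative):

    CREATE TABLE t1 (a INT, b INT);
    CREATE TABLE t2 (a INT, c INT);
    PREPARE stmt FROM 'SELECT * FROM t1 NATURAL JOIN t2';
    -- Expanding '*' during (re)execution could not resolve the columns
    -- because the name resolution context was stale.
    EXECUTE stmt;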
The st_lex::which_check_option_applicable() function controls for which
statements the WITH CHECK OPTION clause should be taken into account. REPLACE and
REPLACE_SELECT weren't in the list, which allowed REPLACE to insert
wrong rows into such a view.
The st_lex::which_check_option_applicable() now includes REPLACE and
REPLACE_SELECT in the list of statements for which WITH CHECK OPTION clause is
applicable.
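For illustration (names invented), REPLACE through the view is now checked:

    CREATE TABLE t1 (a INT);
    CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10 WITH CHECK OPTION;
    INSERT INTO v1 VALUES (1);     -- was already checked
    REPLACE INTO v1 VALUES (100);  -- now rejected by WITH CHECK OPTION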
To calculate its max_length, the CONCAT() function simply sums the max_lengths
of its arguments, but when the collation of an argument differs from the
collation of the CONCAT() this max_length will be wrong. This may lead to data
truncation when a tmp table is used, in UNIONs for example.
The Item_func_concat::fix_length_and_dec() function now recalculates the
max_length of an argument when the mbmaxlen of the argument differs from the
mbmaxlen of the CONCAT().
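A hedged sketch of the truncation scenario (schema invented): the latin1
argument has mbmaxlen 1 while the utf8 CONCAT() result has mbmaxlen 3, so the
summed max_length came out too small.

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1,
                     b VARCHAR(10) CHARACTER SET utf8);
    -- CONCAT() resolves to utf8; the latin1 argument's max_length was
    -- counted in latin1 bytes, undersizing the result and truncating
    -- it in the UNION's tmp table.
    SELECT CONCAT(a, b) FROM t1
    UNION
    SELECT '';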
Fix: return the db name for I_S.TABLES (and others) in its original letter case.
If mysqld starts with lower_case_table_names=1 or 2, the original db name is
converted to lower case (for I_S tables). This happens when we perform
add_table_to_list. To avoid this we make a copy of the original db name and use
the copy thereafter.
The bug report revealed two problems related to min/max optimization:
1. If the length of a constant key used in a SARGable condition for
the MIN/MAX fields is greater than the length of the field, an
unwanted warning on key truncation is issued;
2. If MIN/MAX optimization is applied to a partial index, like INDEX(b(4)),
it can lead to returning a wrong result set (see the sketch below).
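A sketch combining both points (schema invented):

    CREATE TABLE t1 (b VARCHAR(16), INDEX (b(4)));
    INSERT INTO t1 VALUES ('aaaaa'), ('accc');
    -- The 26-character constant exceeds the 16-character field (point 1),
    -- and the index covers only a 4-character prefix (point 2).
    SELECT MAX(b) FROM t1 WHERE b < 'abcdefghijklmnopqrstuvwxyz';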
3.23 regression test failure
The member SEL_ARG::min_flag was not initialized, due to which
the condition on the absence of GEOM_FLAG in the function key_or()
did not choose "Range checked for each record" as the correct
access method.
Only check for FN_DEVCHAR in filenames if FN_DEVCHAR is defined.
This allows the use of table names containing ":" on non-Windows platforms.
On Windows platforms you get an error if you use a table name that contains FN_DEVCHAR.
(The above problem only occurs with -T -- create a separate file for
each table / view.) This ChangeSet results in correct output of view-
information while omitting the information for the view's stand-in
table. The rationale is that with -T, the user is likely interested
in transferring part of a database, not the db in its entirety (that
would be difficult as replay order is obscure, the files being named
for the table/view they contain rather than getting a sequence number).
'show create' works even on views that are short of a base table (this
throws a warning though, as you would expect). Unfortunately, this is
not what mysqldump uses; it creates stand-in tables and hence requests
'show fields' on the view which fails with missing base-tables. The
--force option prevents the dump from stopping at this point; furthermore
this patch dumps a comment showing create for the offending view for
better diagnostics. This solution was confirmed by submitter as solving
their/clients' problem. Problem might become non-issue once mysqldump no
longer creates stand-in tables.
Bug#18282 "INFORMATION_SCHEMA.TABLES provides inconsistent info about invalid views"
This bug caused crashes or resulted in wrong data being returned
when one tried to obtain information from I_S tables about views
using stored functions.
It was caused by the fact that we were using the LEX representing the
statement which was selecting from I_S tables as the active LEX
while the contents of the I_S table were built. So the state of this LEX
both affected and was affected by the open_tables() calls which happened
during this process. This resulted in wrong behavior and in
violations of some invariants, which caused crashes.
This fix tries to solve the problem by properly saving/resetting
and restoring the part of LEX which affects and is affected by the
process of opening tables and views in the get_all_tables() routine.
To simplify things we separated this part of LEX into a new class
and made LEX its descendant.
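A hedged sketch of a query of the kind that triggered the problem (function
and view names invented):

    CREATE FUNCTION f1() RETURNS INT RETURN 1;
    CREATE VIEW v1 AS SELECT f1() AS c;
    -- Building the I_S row for v1 opens the view and its stored
    -- function; doing that under the SELECT's own LEX caused crashes
    -- or wrong data before the fix.
    SELECT TABLE_NAME, TABLE_TYPE
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_NAME = 'v1';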
The IN() function uses agg_cmp_type() to aggregate the types of all its arguments
in order to find a common type for comparisons. In this particular case a
char() and an int were aggregated to double, because a char() can contain values
like '1.5'. But any string which does not start with a digit is converted to
0, thus 'a' and 'z' become equal.
This behaviour is reasonable when all function arguments are constants. But
when there is a field or an expression, this can lead to false comparisons. In
this case it makes more sense to coerce the constants to the type of the field
argument.
The agg_cmp_type() function now aggregates the types of constant and non-constant
items separately. If any non-constant items are found, their
aggregated type is returned. Thus after the aggregation the constants will be
coerced to the aggregated type.
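A small example of the false match (table invented):

    CREATE TABLE t1 (a CHAR(1));
    INSERT INTO t1 VALUES ('a'), ('z'), ('1');
    -- Before: all arguments aggregated to DOUBLE, so 'a' and 'z'
    -- became 0 and matched 0. Now the constants 0 and 1 are coerced
    -- to the type of the field a, and only the row '1' matches.
    SELECT a FROM t1 WHERE a IN (0, 1);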
The order of acquiring LOCK_mysql_create_db
and wait_if_global_read_lock() was wrong. It could happen
that a thread held LOCK_mysql_create_db while waiting for
the global read lock to be released. The thread with the
global read lock could try to administrate a database too.
It would first try to lock LOCK_mysql_create_db and hang...
The check if the current thread has the global read lock
is done in wait_if_global_read_lock(), which could not be
reached because of the hang in LOCK_mysql_create_db.
Now I exchanged the order of acquiring LOCK_mysql_create_db
and wait_if_global_read_lock(). This makes
wait_if_global_read_lock() fail with an error message for
the thread with the global read lock. No deadlock happens.
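A two-session sketch of the deadlock that was possible (database names
invented):

    -- session 1:
    FLUSH TABLES WITH READ LOCK;  -- takes the global read lock
    -- session 2:
    DROP DATABASE db1;            -- holds LOCK_mysql_create_db while
                                  -- waiting for the read lock to go away
    -- session 1:
    CREATE DATABASE db2;          -- used to hang on LOCK_mysql_create_db;
                                  -- now fails with an error instead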
In a multi-table delete, a table targeted for deletion can't be used for selecting
in subselects. The appropriate error was raised, but it wasn't checked, which led
to a crash at the execution phase.
The mysql_execute_command() now checks for errors before executing select
for multi-delete.
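The problematic statement shape (tables invented):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    -- t1 is a delete target and therefore may not be read in the
    -- subquery; the error is now reported instead of crashing later.
    DELETE t1 FROM t1, t2 WHERE t1.a = (SELECT MAX(a) FROM t1);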
argument can lead to a wrong result.
The md5() and sha() functions treat their arguments as case-sensitive strings,
but when two such function calls were compared, their arguments were compared as
case-insensitive strings. This led to two functions with different arguments,
and thus different results, being treated as identical. This could lead to a wrong
decision made in the range optimizer and thus to a wrong result set.
Item_func_md5::fix_length_and_dec() and Item_func_sha::fix_length_and_dec()
functions now set binary collation on their arguments.
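An illustration of the confusion (schema invented): MD5('a') and MD5('A') are
different values, yet the two conditions below were considered identical.

    CREATE TABLE t1 (a CHAR(32), KEY (a));
    -- With case-insensitive argument comparison the optimizer could
    -- treat these two ranges as one and drop rows from the result.
    SELECT * FROM t1 WHERE a = MD5('a') OR a = MD5('A');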