There was a wrong determination of the DB name (which is
not always the one in TABLE_LIST, because derived tables
may be calculated using temp tables that have their db name
set to "").
The fix determines the database name according to the type
of table reference, and calls the function check_access()
with the correct db name so the correct set of grants is found.
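A hedged sketch of the shape involved (names invented): the derived table below
may be materialized in a temp table whose db name is set to "", so the grants
must be looked up for db1 instead:
  SELECT * FROM (SELECT a FROM db1.t1) AS d;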
The is_null value was initialized once and thereafter only set to indicate
NULL, and never unset to indicate not-NULL.
Now is_null is set to false on each fetch, and set to true only when the value
in question is null.
query
Problem:
A wrong context was assigned to the columns added in insert_fields()
when expanding a '*'. When this is done in a prepared statement it causes
fix_fields() to fail to find the table that these columns reference.
Actually the right context is set in setup_natural_join_row_types(), called at the
end of setup_tables(). However, when executed in the context of a prepared statement,
setup_tables() resets the context, but setup_natural_join_row_types() was not
setting it back to the correct value, assuming it had already done so.
Solution:
The top-most, left-most NATURAL/USING join must be set as a
first_name_resolution_table in context even when operating on prepared statements.
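A minimal reproduction sketch (table names invented): a prepared statement over
a NATURAL join, executed repeatedly:
  PREPARE stmt FROM 'SELECT * FROM t1 NATURAL JOIN t2';
  EXECUTE stmt;
  EXECUTE stmt;  -- re-execution hit the stale name resolution context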
The st_lex::which_check_option_applicable() function controls for which
statements WITH CHECK OPTION clause should be taken into account. REPLACE and
REPLACE_SELECT weren't in the list, which resulted in allowing REPLACE to insert
wrong rows into such a view.
The st_lex::which_check_option_applicable() now includes REPLACE and
REPLACE_SELECT in the list of statements for which WITH CHECK OPTION clause is
applicable.
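For instance (hypothetical schema), a REPLACE through such a view was accepted
even when the row violated the view's condition:
  CREATE VIEW v1 AS SELECT * FROM t1 WHERE a < 10 WITH CHECK OPTION;
  REPLACE INTO v1 VALUES (100);  -- was accepted; now fails the CHECK OPTION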
To calculate its max_length the CONCAT() function simply sums the max_lengths
of its arguments, but when the collation of an argument differs from the
collation of the CONCAT(), the max_length will be wrong. This may lead to data
truncation when a tmp table is used, in UNIONs for example.
The Item_func_concat::fix_length_and_dec() function now recalculates the
max_length of an argument when the mbmaxlen of the argument differs from the
mbmaxlen of the CONCAT().
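A hypothetical shape of the problem (schema invented): a latin1 argument
combined with a multi-byte connection collation and a UNION that materializes
a tmp table:
  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
  SET NAMES utf8;
  SELECT CONCAT(a, '!') FROM t1 UNION SELECT 'other';
  -- the tmp table column could be sized too small, truncating data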
Fix: return the db name for I_S.TABLES (and others) in its original letter case.
If mysqld starts with lower_case_table_names=1 | 2, the original db name is converted
to lower case (for I_S tables). This happens when we perform add_table_to_list.
To avoid this we make a copy of the original db name and use the copy hereafter.
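A hedged illustration (database name invented), with lower_case_table_names=2:
  CREATE DATABASE MyDB;
  SELECT table_schema FROM information_schema.tables
  WHERE table_schema = 'MyDB';  -- now reported as 'MyDB', not 'mydb'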
The bug report revealed two problems related to min/max optimization:
1. If the length of a constant key used in a SARGable condition for
the MIN/MAX fields is greater than the length of the field, an
unwanted warning on key truncation is issued;
2. If MIN/MAX optimization is applied to a partial index, like INDEX(b(4)),
it can lead to returning a wrong result set.
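A hypothetical shape of problem 2 (schema invented):
  CREATE TABLE t1 (b VARCHAR(16), INDEX(b(4)));
  SELECT MIN(b) FROM t1 WHERE b > 'abcd';
  -- reading only the 4-char prefix could give a wrong MIN/MAX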
3.23 regression test failure
The member SEL_ARG::min_flag was not initialized,
due to which the condition checking for the absence of GEOM_FLAG in function
key_or did not choose "Range checked for each record" as
the correct access method.
Only check for FN_DEVCHAR in filenames if FN_DEVCHAR is defined.
This allows using table names containing ":" on non-Windows platforms.
On Windows you get an error if you use a table name that contains FN_DEVCHAR.
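For example (hypothetical name), this is now accepted on non-Windows platforms:
  CREATE TABLE `a:b` (x INT);  -- still rejected on Windows, where ':' is FN_DEVCHAR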
(The above problem only occurs with -T -- create a separate file for
each table / view.) This ChangeSet results in correct output of view
information while omitting the information for the view's stand-in
table. The rationale is that with -T, the user is likely interested
in transferring part of a database, not the db in its entirety (that
would be difficult as replay order is obscure, the files being named
for the table/view they contain rather than getting a sequence number).
'show create' works even on views that are short of a base-table (this
throws a warning though, like you would expect). Unfortunately, this is
not what mysqldump uses; it creates stand-in tables and hence requests
'show fields' on the view which fails with missing base-tables. The
--force option prevents the dump from stopping at this point; furthermore
this patch dumps a comment showing create for the offending view for
better diagnostics. This solution was confirmed by submitter as solving
their/clients' problem. Problem might become non-issue once mysqldump no
longer creates stand-in tables.
Bug#18282 "INFORMATION_SCHEMA.TABLES provides inconsistent info about invalid views"
This bug caused crashes or resulted in wrong data being returned
when one tried to obtain information from I_S tables about views
using stored functions.
It was caused by the fact that we were using the LEX representing
the statement that was selecting from I_S tables as the active LEX
while the contents of the I_S table were built. So the state of this LEX
both affected and was affected by the open_tables() calls which happened
during this process. This resulted in wrong behavior and in
violations of some invariants, which caused crashes.
This fix tries to solve the problem by properly saving/resetting
and restoring the part of LEX which affects and is affected by the
process of opening tables and views in the get_all_tables() routine.
To simplify things we separated this part of LEX into a new class
and made LEX its descendant.
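A hedged reproduction sketch (names invented): querying I_S about a view that
uses a stored function:
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE VIEW v1 AS SELECT f1() AS a;
  SELECT table_name, table_comment FROM information_schema.tables
  WHERE table_schema = 'test';  -- could crash or return wrong data before the fix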
The IN() function uses agg_cmp_type() to aggregate all types of its arguments
to find out some common type for comparisons. In this particular case the
char() and the int were aggregated to double because char() can contain values
like '1.5'. But all strings which do not start with a digit are converted to
0, thus 'a' and 'z' become equal.
This behaviour is reasonable when all function arguments are constants. But
when there is a field or an expression this can lead to false comparisons. In
this case it makes more sense to coerce constants to the type of the field
argument.
The agg_cmp_type() function now aggregates the types of constant and non-constant
items separately. If any non-constant items are found, their
aggregated type is returned. Thus, after the aggregation, constants will be
coerced to the aggregated type.
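For example (schema invented):
  CREATE TABLE t1 (c CHAR(1));
  INSERT INTO t1 VALUES ('a');
  SELECT c IN ('z', 1) FROM t1;
  -- before the fix 'a' and 'z' were both coerced to the DOUBLE 0 and compared
  -- equal, so this returned 1; now the constants are coerced to c's type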
The order of acquiring LOCK_mysql_create_db
and wait_if_global_read_lock() was wrong. It could happen
that a thread held LOCK_mysql_create_db while waiting for
the global read lock to be released. The thread with the
global read lock could try to administer a database too.
It would first try to lock LOCK_mysql_create_db and hang...
The check if the current thread has the global read lock
is done in wait_if_global_read_lock(), which could not be
reached because of the hang in LOCK_mysql_create_db.
Now I exchanged the order of acquiring LOCK_mysql_create_db
and wait_if_global_read_lock(). This makes
wait_if_global_read_lock() fail with an error message for
the thread with the global read lock. No deadlock happens.
In a multi-table DELETE, a table being deleted from can't be used for selecting in
subselects. The appropriate error was raised, but wasn't checked, which led to a
crash at the execution phase.
The mysql_execute_command() now checks for errors before executing select
for multi-delete.
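A hypothetical shape of the failing statement (schema invented):
  DELETE t1 FROM t1, t2 WHERE t2.a IN (SELECT b FROM t1);
  -- the raised error went unchecked, then the execution phase crashed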
argument can lead to a wrong result.
The md5() and sha() functions treat their arguments as case-sensitive strings.
But when two such calls were compared, their arguments were compared as
case-insensitive strings, which led to two calls with different arguments,
and thus different results, being treated as identical. This can lead to a wrong
decision made in the range optimizer and thus to a wrong result set.
Item_func_md5::fix_length_and_dec() and Item_func_sha::fix_length_and_dec()
functions now set binary collation on their arguments.
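For example (hypothetical table), these two conditions are not equivalent, but
were treated as identical by the range optimizer:
  SELECT * FROM t1 WHERE h = md5('A') OR h = md5('a');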
When reading a view definition from a .frm file it was
throwing a SQL error if the DEFINER user is not defined.
Changed it to a warning to match the (documented) case
when a view with undefined DEFINER user is created.
The check for view security was lacking several points :
1. Checking with the right set of permissions: each table ref that
participates in a view carried the right credentials to use in its
security_ctx member, but these weren't used when checking the credentials.
This made it hard to enforce the SQL SECURITY DEFINER|INVOKER property
consistently.
2. Because of the above, security checking for views was simply ruled out
in explicit ways in several places.
3. Security was checked only for the columns of the tables that are
brought into the query from a view. So if there was no column reference
outside of the view definition, the lack of access to the tables of the
view in SQL SECURITY INVOKER mode was not detected.
The fix below tries to fix the above 3 points.
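A hedged illustration of point 3 (schema invented): with no column references
outside the view definition, the invoker's lack of access to t1 went undetected:
  CREATE SQL SECURITY INVOKER VIEW v1 AS SELECT COUNT(*) AS c FROM t1;
  -- executed as a user who has access to v1 but not to t1:
  SELECT * FROM v1;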
The Item_func_concat::val_str() function tries to make as few re-allocations
as possible. This results in appending the strings returned by the 2nd and following
arguments to the string returned by the 1st argument, if the buffer for the first
argument has enough free space. A constant subselect is evaluated only once
and its result is stored in an Item_cache_str. In the case when the first
argument of the concat() function is such a subselect, Item_cache_str returns
the stored value and Item_func_concat::val_str() appends the values of the other
arguments to it. But for the next row the value in the Item_cache_str isn't
restored, because the subselect is a constant one and isn't evaluated a second
time. This results in appending the string values of the 2nd and following
arguments to the result of the previous Item_func_concat::val_str() call.
The Item_func_concat::val_str() function now checks whether the first argument
is a constant one, and if so, it doesn't append the values of the other arguments
to the string value returned by it.
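A hypothetical reproduction (schema invented), showing how the cached first
argument grew with every row:
  CREATE TABLE t1 (b CHAR(1));
  INSERT INTO t1 VALUES ('x'), ('y');
  SELECT CONCAT((SELECT 'a'), b) FROM t1;
  -- before the fix: 'ax', then 'axy' instead of 'ay'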
Replaced COND_refresh with COND_global_read_lock because of a bug in NPTL threads when using different mutexes as arguments to pthread_cond_wait().
The original code caused a hang in FLUSH TABLES WITH READ LOCK in some circumstances because pthread_cond_broadcast() was not delivered to other threads.
This fixes:
Bug#16986: Deadlock condition with MyISAM tables
Bug#20048: FLUSH TABLES WITH READ LOCK causes a deadlock
- In function 'handle_grant_struct', when searching the memory structures for an
entry to modify, convert all entries where host.hostname is NULL to "" and compare that
with the host passed in argument "user_from".
- A user created with hostname "" is stored in the "mysql.user" table as host="" but when loaded into
memory it'll be stored as host.hostname NULL. Specifying "" as hostname means
that "any host" can connect. Thus it's correct to turn on allow_all_hosts
when such a user is found.
- Review and fix other places where host.hostname may be NULL.
When a CREATE TABLE command created a table from a materialized
view, it did not inherit default values from the underlying table.
Moreover, the temporary table used for the view materialization
did not inherit those default values.
In the case when the underlying table contained ENUM fields it caused
misleading error messages. In other cases the created table contained
wrong default values.
The code was modified to ensure inheritance of default values for
materialized views.
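A hedged illustration (schema invented):
  CREATE TABLE t1 (a INT DEFAULT 42);
  CREATE VIEW v1 AS SELECT * FROM t1;
  CREATE TABLE t2 SELECT * FROM v1;  -- t2.a lost its DEFAULT 42 before the fix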
This bug was introduced when the patch resolving the
performance problem 17164 was applied. As a result
of that modification the not_null_tables attributes
were calculated incorrectly for constant OR conditions.
This triggered invalid conversion of outer joins into
inner joins.
The convert_constant_item() function converts constant items to ints at the
prepare phase to optimize execution speed. In this case it tries to evaluate a
subselect which contains a derived table and is itself contained in a derived table.
Derived tables are filled only after all of them are prepared,
so evaluating a subselect with a derived table at the prepare phase will
return a wrong result.
A new flag with_subselect is added to the Item class. It indicates that
expression which this item represents is a subselect or contains a subselect.
It is set to 0 by default. It is set to 1 in the Item_subselect constructor
for subselects.
For the Item_func and Item_cond derived classes it is set after fixing any argument
in Item_func::fix_fields() and Item_cond::fix_fields(), respectively.
The convert_constant_item() function now doesn't convert a constant item
if its with_subselect flag is set.
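A hedged sketch of the affected query shape (schema invented; not the original
test case):
  SELECT * FROM (SELECT dt FROM t1) d
  WHERE d.dt > (SELECT MAX(dt) FROM (SELECT dt FROM t1) d2);
  -- evaluating the constant subselect at prepare time read an unfilled derived table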
The select statement that specified a view could be
slightly changed when the view was saved in a frm file.
In particular, references to an alias name in the HAVING
clause could be replaced by the expression named by
this alias.
This could result in an error message for a query of
the form SELECT * FROM <view>. Yet no such message
appeared when executing the query specifying the view.
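A hedged sketch of the kind of view involved (schema invented):
  CREATE VIEW v1 AS
    SELECT a, SUM(b) AS s FROM t1 GROUP BY a HAVING s > 10;
  SELECT * FROM v1;  -- could fail with an error before the fix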
- When manually constructing a SEL_TREE for "t.key NOT IN(...)", take into account that
get_mm_parts may return a tree with type SEL_TREE::IMPOSSIBLE
- Added missing OOM checks
- Added comments
over two views when using syntax with curly braces.
Each outer join operation must be placed in a separate
nest. This was not done when the syntax with curly braces
was used. In some cases, in particular for queries with an outer
join operation over views, it could cause a crash.
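The curly-brace form referred to is the ODBC escape syntax, e.g. (view names
invented):
  SELECT * FROM { OJ v1 LEFT OUTER JOIN v2 ON v1.a = v2.a };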
itself when executing queries referring to a view with GROUP BY
an expression containing non-constant interval.
It happened because Item_date_add_interval::eq neglected the
fact that the method can be applied to an expression of the form
date(col) + interval time_to_sec(col) second
at the time when col could not be evaluated yet.
An attempt to evaluate time_to_sec(col) in this method resulted
in a crash.
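A hedged sketch of the failing shape (schema invented):
  CREATE VIEW v1 AS
    SELECT a FROM t1
    GROUP BY date(d) + INTERVAL time_to_sec(d) SECOND;
  SELECT * FROM v1;  -- crashed before the fix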
The pattern used to generate the binlog statement for a DROPped temp table in close_temporary_tables
was buggy: it could not deal with a table whose name contains a grave accent.
The fix uses `append_identifier()' for quoting, which doubles the accents.
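For example (hypothetical name), a temporary table like this broke the generated
DROP statement:
  CREATE TEMPORARY TABLE `t``1` (a INT);
  -- the backtick in the name must be doubled when quoted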
from within triggers
Add support for passing NEW.x as INOUT and OUT parameters to stored
procedures. Passing NEW.x as INOUT parameter requires SELECT and
UPDATE privileges on that column, and passing it as OUT parameter
requires only UPDATE privilege.
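A hedged sketch (names invented):
  CREATE PROCEDURE p1(OUT x INT) SET x = 1;
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
    FOR EACH ROW CALL p1(NEW.a);  -- now allowed; needs UPDATE privilege on column a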
a worse execution plan than in 4.1 for some queries.
It happened due to the fact that under some conditions the
optimizer always preferred range or full index scan access
methods to lookup access methods even when the latter were much
cheaper.
The problem was not observed in 4.1 for the reported query
because the WHERE condition was not of a form that could
cause the problem.
Equality propagation introduced in 5.0 added an extra
predicate and changed the WHERE condition. The new condition
provoked the optimizer to make a bad choice.
The problem was fixed by the patch for bug 17379.
When a view statement is compiled on CREATE VIEW time, most of the
optimizations should not be done. Finding the right optimization
for a subquery is one of them.
Unfortunately the optimizer resolves the column references of
the left expression of IN subqueries in the process of deciding
which optimization to use (if needed). So there should be a
special case in Item_in_subselect::fix_fields() : check the
validity of the left expression of IN subqueries in CREATE VIEW
mode and then proceed as normal.
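A hedged sketch of the statement shape involved (schema invented):
  CREATE VIEW v1 AS SELECT * FROM t1 WHERE a IN (SELECT b FROM t2);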
garbles data if longer than 766 chars.
The problem is that when a stored routine returns BLOBs to the previous
caller, the BLOBs are shallow-copied (i.e. only pointers to the data are
copied). The fix is to also copy the data of the BLOBs.
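A hedged sketch (function invented) of a value long enough to hit the problem:
  CREATE FUNCTION f1() RETURNS TEXT RETURN REPEAT('a', 767);
  SELECT f1();  -- results longer than 766 chars came back garbled before the fix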
SELECT WHERE a= AND b=
Selecting data from a MEMORY table with a varchar column and a hash index over it
returned only the first matched row.
The problem was that the key length calculation for varchar columns didn't include
the number of bytes used to store the length.
Fixed the key length for varchar fields to include the number of bytes used to store the length.
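A hedged reproduction sketch (schema invented):
  CREATE TABLE t1 (a VARCHAR(10), b VARCHAR(10),
                   KEY USING HASH (a, b)) ENGINE=MEMORY;
  INSERT INTO t1 VALUES ('x', 'y'), ('x', 'y');
  SELECT * FROM t1 WHERE a = 'x' AND b = 'y';  -- returned only one row before the fix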
Re-work best_access_path() and find_best() to reuse E(#rows(range access)) as
E(#rows(ref[_or_null](const) access) only when it is appropriate.
[This is the final cumulative patch]
Correct a bug (that I introduced, after using Oracle's database software for
too many years) where the length of the database-sent data is incorrectly
used to infer NULLness.
"alter table from MyISAM to MERGE lost data without errors and warnings"
Add a new handlerton flag which prevents the user from altering a table's storage
engine to storage engines which would lose data. Both 'blackhole' and
'merge' are marked with the new flag.
Tests included.
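For example (table invented), this conversion is now rejected instead of
silently discarding rows:
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1);
  ALTER TABLE t1 ENGINE=MERGE;  -- now raises an error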
Binlog lacks encoding info about a DROPped temporary table.
The idea of the fix is to switch temporarily to system_charset_info when a temporary table
is DROPped for the binlog. Since it is the server (automatically), and not the client,
that generates this query, the binlog should be written using the server's encoding for the coming DROP.
`write_binlog_with_system_charset()' is introduced to replace similar problematic places in the code.
When converting DISTINCT to GROUP BY, where the columns are from the covering
index and are referenced twice in the SELECT list, the optimizer created an
improper processing sequence. This is because the columns
of the covering index were not recognized as such and were treated as non-index
columns.
Generally speaking duplicate columns can safely be removed from the GROUP
BY/DISTINCT list because this will not add or remove new rows in the
resulting set. Duplicates can be removed even if they are not consecutive
(as is the case for ORDER BY, where the duplicate columns can be removed
only if they are consecutive).
So we can safely transform "SELECT DISTINCT a,a FROM ... ORDER BY a" to
"SELECT a,a FROM ... GROUP BY a ORDER BY a" instead of
"SELECT a,a FROM .. GROUP BY a,a ORDER BY a". We can even transform
"SELECT DISTINCT a,b,a FROM ... ORDER BY a,b" to
"SELECT a,b,a FROM ... GROUP BY a,b ORDER BY a,b".
The fix to this bug consists of checking for duplicate columns in the SELECT
list when constructing the GROUP BY list in transforming DISTINCT to GROUP
BY and skipping the ones that are already in.
or implicitly uses stored function gives "Table not locked" error'
CREATE TABLE ... SELECT ... statement which was explicitly or implicitly
(through view) using stored function gave "Table not locked" error.
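A hedged sketch of a failing statement (function invented):
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE TABLE t2 SELECT f1() AS a;  -- failed with 'Table not locked'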
The actual bug resides in the current locking scheme of CREATE TABLE SELECT
code, which first opens and locks tables of the SELECT statement itself,
and then, having SELECT tables locked, creates the .FRM, opens the .FRM and
acquires lock on it. This scheme opens a possibility for a deadlock, which
was present and ignored since version 3.23 or earlier. This scheme also
conflicts with the invariant of the prelocking algorithm -- no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception for this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to the prelocked mode.
We can't supply a better fix in 5.0.
Bug #19606: ssl variables are not displayed in show variables
Bug #19616: log_queries_not_using_indexes is not listed in show variables
Make basedir, datadir, tmpdir, log_queries_not_using_indexes, ssl_ca,
ssl_capath, ssl_cert, ssl_cipher, and ssl_key all available both from
SHOW VARIABLES and as @@variables.
As a side-effect of this change, log_queries_not_using_indexes can
be changed at runtime (but only globally, not per-connection).
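For example, both access paths now work:
  SHOW VARIABLES LIKE 'ssl%';
  SELECT @@global.log_queries_not_using_indexes;
  SET GLOBAL log_queries_not_using_indexes = 1;  -- runtime change, global only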
A query with a group by and having clauses could return a wrong
result set if the having condition contained a constant conjunct
evaluated to FALSE.
It happened because the pushdown condition for the table with
grouping columns lost its constant conjuncts.
Pushdown conditions are always built by the function make_cond_for_table
that ignores constant conjuncts. This is apparently not correct when
constant false conjuncts are present.
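A hedged sketch of the query shape (schema invented):
  SELECT a, MIN(b) FROM t1 GROUP BY a HAVING 1 = 0 AND a > 0;
  -- must return an empty result; before the fix the constant FALSE conjunct was lost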
tests fail on FreeBSD.
The patch consists of the following:
- make Instance Manager, running in the daemon mode, dump
the pid of angel-process in the special file;
- default value of angel-pid-file-name is 'mysqlmanager.angel.pid';
- if an ordinary (IM) pid-file-name is specified in the configuration,
angel-pid-file-name is updated according to the following
rule: the extension of the basename of pid-file-name is replaced by
'.angel.pid'.
For example:
- pid-file-name: /tmp/im.pid
=> angel-pid-file-name: /tmp/im.angel.pid
- pid-file-name: /tmp/im.txt
=> angel-pid-file-name: /tmp/im.angel.pid
- pid-file-name: /tmp/5.0/im
=> angel-pid-file-name: /tmp/5.0/im.angel.pid
- add support for configuration option to customize angel
pid file name;
- fix test suite to use angel pid to kill Instance Manager
by all means if something went wrong.
Background
----------
The problem is that on some OSes (FreeBSD for one) Instance
Manager does not get SIGTERM, so it can not shut down gracefully.
The test suite wasn't able to cope with this, which led to a
mess in the test results.
The problem should be split into two:
- fix signal handling;
- fix test suite.
This patch fixes test suite so that it will be able to kill
uncooperative Instance Manager. In order to achieve this,
test suite needs to know PID of IM Angel process.
The bug was as follows: When merge_key_fields() encounters "t.key=X OR t.key=Y" it will
try to join them into a ref_or_null access via "t.key=X OR NULL". In order to make this
inference it checks if Y<=>NULL, ignoring the fact that the value of Y may not yet be known.
The fix is that the check if Y<=>NULL is made only if value of Y is known (i.e. it is a
constant).
TODO: When merging to 5.0, replace used_tables() with const_item() everywhere in merge_key_fields().
The reason for the bug is that `get_var_with_binlog' performs the missed
assignment of the variables as a side effect. In doing that it eventually calls
`free_underlaid_joins', passing as an argument the `thd->lex->select_lex' of the lex
which belongs to the user query, not
of the one which is emulated, i.e. SET @var1:=NULL.
`get_var_with_binlog' is refined to supply a temporary lex to sql_set_variables's stack.
mysqldump / SHOW CREATE TABLE will show the NEXT available value for
the PK, rather than the *first* one that was available (that named in
the original CREATE TABLE ... AUTO_INCREMENT = ... statement).
This should produce correct and robust behaviour for the obvious use
cases -- when no data were inserted, then we'll produce a statement
featuring the same value the original CREATE TABLE had; if we dump
with values, INSERTing the values on the target machine should set the
correct next_ID anyway (and if not, we'll still have our AUTO_INCREMENT =
... to do that). Lastly, just the CREATE statement (with no data) for
a table that saw inserts would still result in a table that new values
could safely be inserted into.
There seems to be no robust way however to see whether the next_ID
field is > 1 because it was set to something else with CREATE TABLE
... AUTO_INCREMENT = ..., or because there is an AUTO_INCREMENT column
in the table (but no initial value was set with AUTO_INCREMENT = ...)
and then one or more rows were INSERTed, counting up next_ID. This
means that in both cases, we'll generate an AUTO_INCREMENT =
... clause in SHOW CREATE TABLE / mysqldump. As we also show info on,
say, charsets even if the user did not explicitly give that info in
their own CREATE TABLE, this shouldn't be an issue.
As per above, the next_ID will be affected by any INSERTs that have
taken place, though. This /should/ result in correct and robust
behaviour, but it may look non-intuitive to some users if they CREATE
TABLE ... AUTO_INCREMENT = 1000 and later (after some INSERTs) have
SHOW CREATE TABLE give them a different value (say, CREATE TABLE
... AUTO_INCREMENT = 1006), so the docs should possibly feature a
caveat to that effect.
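An illustration of the caveat, following the 1000-to-1006 example above (table
invented):
  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) AUTO_INCREMENT = 1000;
  INSERT INTO t1 VALUES (NULL), (NULL), (NULL), (NULL), (NULL), (NULL);
  SHOW CREATE TABLE t1;  -- now reports AUTO_INCREMENT=1006, the next free value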
It's not very intuitive the way it works now (with the fix), but it's
*correct*. We're not storing the original value anyway, if we wanted
that, we'd have to change on-disk representation?
If we do dump/load cycles with empty DBs, nothing will change. This
changeset includes an additional test case that proves that tables
with rows will create the same next_ID for AUTO_INCREMENT = ... across
dump/restore cycles.
Confirmed by support as likely solution for client's problem.