The cause of the bug was an incomplete fix for bug 18080.
The problem was that setup_tables() unconditionally reset the
name resolution context to its 'tables' argument, which pointed
to the first table of an SQL statement.
The fix limits the resetting of the name resolution context in
setup_tables() to only those cases where the context was not set
by earlier parser/optimizer phases.
Only MyISAM tables locked with LOCK TABLES ... WRITE were affected.
A query that is optimized with index_merge doesn't reflect rows
inserted within LOCK TABLES.
MyISAM doesn't flush its state within LOCK TABLES. The index_merge
optimization creates a copy of the handler, which thus gets an
outdated MyISAM state.
A new handler->clone() method is introduced to fix this problem.
For non-MyISAM storage engines it allocates a handler and opens
it with ha_open(). For MyISAM it additionally copies the MyISAM
state pointer to the cloned handler.
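A minimal SQL sketch of the affected scenario (hypothetical table and data):
CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MyISAM;
LOCK TABLES t1 WRITE;
INSERT INTO t1 VALUES (1, 2);
-- an index_merge plan over KEY(a) and KEY(b) could miss the row
-- inserted above, because the cloned handler saw stale MyISAM state
SELECT * FROM t1 WHERE a = 1 OR b = 2;
UNLOCK TABLES;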
The problem was that if after FLUSH TABLES WITH READ LOCK the user
issued DROP/ALTER PROCEDURE/FUNCTION the operation would fail (as
expected), but after UNLOCK TABLE any attempt to execute the same
operation would lead to the error 1305 "PROCEDURE/FUNCTION does not
exist", and an attempt to execute any stored function will also fail.
This happened because under FLUSH TABLES WITH READ LOCK we couldn't open
and lock mysql.proc table for update, and this fact was erroneously
remembered by setting mysql_proc_table_exists to false, so subsequent
statements believed that mysql.proc doesn't exist, and thus that there
are no functions and procedures in the database.
As a solution, we remove the mysql_proc_table_exists flag completely. The
reason is that this optimization didn't work most of the time anyway.
Even if open of mysql.proc failed for some reason when we were trying to
call a function or a procedure, we were setting mysql_proc_table_exists
back to true to force table reopen for the sake of producing the same
error message (the open can fail for a number of reasons). The solution
could have been to remember the reason why open failed, but that's a lot
of code for optimization of a rare case. Hence we simply remove this
optimization.
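A sketch of the reported sequence (assuming a procedure p1 exists):
FLUSH TABLES WITH READ LOCK;
DROP PROCEDURE p1;   -- fails under the global read lock, as expected
UNLOCK TABLES;
DROP PROCEDURE p1;   -- used to fail with error 1305, although p1 exists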
0xFF is the internal separator for SET|ENUM names.
If this symbol is present in SET|ENUM names, we replace it with
',' (a deprecated symbol for SET|ENUM names) during frm creation
and restore it to 0xFF during frm opening.
wrong results
Mark the containing Item(s) (usually an Item_subselect descendant) of
a subselect as containing aggregate functions if it has references
to aggregate functions that are calculated outside its context.
This tells end_send_group() not to make a copy of an Item_subselect
descendant in the select list, and causes the correct value to be
returned.
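A hedged illustration (hypothetical table; the subquery references an
aggregate computed in the outer context):
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
-- MAX(t1.b) is calculated outside the subquery's context
SELECT a, (SELECT MAX(t1.b)) FROM t1 GROUP BY a;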
- Honor unsigned_flag in the corresponding functions
- Use compare_int_signed_unsigned()/compare_int_unsigned_signed() instead of explicit comparison in GREATEST() and LEAST()
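A hedged example of the mixed signed/unsigned case these changes address:
-- 18446744073709551615 is the maximal BIGINT UNSIGNED value
SELECT GREATEST(-1, 18446744073709551615);
SELECT LEAST(-1, 18446744073709551615);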
VALUES() was considered a constant. This caused it to be replaced
(or pre-calculated) using uninitialized values before the actual
execution took place.
Mark it as non-constant (though still not dependent on tables) to
prevent the pre-calculation.
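A sketch of the affected construct (hypothetical table):
CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
INSERT INTO t1 VALUES (1, 1);
-- VALUES(b) must be evaluated per inserted row, not pre-calculated
INSERT INTO t1 VALUES (1, 2)
  ON DUPLICATE KEY UPDATE b = VALUES(b) + 10;   -- expect b = 12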
Corrected test case after removal of fix for bug#16377
type_date.test:
Corrected test case after removal of fix for bug#16377
item_cmpfunc.cc:
Removed changes to agg_cmp_type() made in the fix for bug#16377
equal constant under any circumstances.
In fact this substitution can be allowed if the field is
not of a string type, or if the field reference serves as
an argument of a comparison predicate.
The select_type in the EXPLAIN output for the query SELECT * FROM t1
was 'SIMPLE', while for the query SELECT * FROM v1, where the view v1
was defined as SELECT * FROM t1, the EXPLAIN output contained 'PRIMARY'
in the select_type column.
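For illustration (hypothetical table):
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT * FROM t1;
EXPLAIN SELECT * FROM t1;   -- select_type: SIMPLE
EXPLAIN SELECT * FROM v1;   -- showed PRIMARY for the same underlying query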
In 5.0 we made LOAD DATA INFILE autocommit in all engines, while
only NDB wanted that. Users and trainers complained that it affected
InnoDB and was a change compared to 4.1 where only NDB autocommitted.
To revert to the behaviour of 4.1, we move the autocommit logic out of mysql_load() into
ha_ndbcluster::external_lock().
The result is that LOAD DATA INFILE commits all uncommitted NDB
changes, including its own, if this is an NDB table, but does not
affect other engines.
Note: even though there is no "commit the full transaction at end"
anymore, LOAD DATA INFILE stays disabled in routines (re-entrancy
problems, per a comment of Pem).
Note: ha_ndbcluster::has_transactions() does not give reliable results
because it says "yes" even if transactions are disabled in this engine...
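A hedged sketch of the restored behaviour for a non-NDB transactional
table (hypothetical file and InnoDB table t1):
SET autocommit = 0;
INSERT INTO t1 VALUES (1);
LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;
ROLLBACK;   -- as in 4.1, both changes roll back for InnoDB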
1003: Incorrect table name
in multi-table DELETE the set of tables to delete from actually
references the tables in the other list, e.g.:
DELETE alias_of_t1 FROM t1 alias_of_t1 WHERE ....
is a valid statement.
So we must turn off the syntactical validity check of the table name
for alias_of_t1, because it is not a table name (even if it looks like
one). In order to do that we add a special flag (TL_OPTION_ALIAS) to
disable the name checking for the aliases in multi-table DELETE.
21913: DATE_FORMAT() Crashes mysql server if I use it through mysql-connector-j driver.
Variable character_set_results can legally be NULL (for "no conversion").
This could result in a NULL dereference that crashed the server. Fixed.
(Also ran some additional precursory tests to see whether I could break
anything else; no breakage so far.)
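A minimal reproduction sketch:
SET character_set_results = NULL;   -- legal: "no conversion"
SELECT DATE_FORMAT(NOW(), '%Y-%m-%d');   -- used to crash the server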
start instance; kill mysqlmanager; show ...
The problem was that Instance Manager didn't close client
sockets (sockets for client connections) when exec'ing a mysqld
instance, so the mysqld instance inherited these descriptors.
The fix is to set the close-on-exec flag for each client socket.
The problem was due to a prior fix for BUG 9676, which limited
the number of rows stored in a temporary table to the LIMIT clause.
This optimization is not applicable to non-group queries with
aggregate functions. The fix disables the optimization in this case.
Row equalities were not taken into account by the optimizer.
Now all row equalities are converted into conjunctions of
equalities between row elements. They are taken into account
by the optimizer together with the original regular equality
predicates.
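For example (hypothetical tables):
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (a INT, b INT);
-- the row equality below is now converted to
-- t1.a = t2.a AND t1.b = t2.b and taken into account by the optimizer
SELECT * FROM t1, t2 WHERE (t1.a, t1.b) = (t2.a, t2.b);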
Made the parser support parentheses around UNION branches.
This is done by amending the rules of the parser so it generates the
correct structure.
Currently it supports arbitrary subquery/join/parenthesis operations
in the EXISTS clause.
In the IN/scalar subquery case it will allow nested parentheses only
if there is a UNION clause after the parentheses. Otherwise it will
just treat the multiple nested parentheses as a scalar expression.
It adds an extra lex level for ((SELECT ...) UNION ...) to accommodate
the UNION clause.
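Hedged examples of the now-supported syntax (hypothetical tables):
(SELECT a FROM t1) UNION (SELECT a FROM t2);
SELECT * FROM t3
WHERE t3.a IN ((SELECT a FROM t1) UNION (SELECT a FROM t2));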
when a range condition uses an invalid DATETIME constant.
Now we do not use invalid DATETIME constants to form end keys for
range intervals: range analysis just ignores predicates with such
constants.
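A sketch (hypothetical table; '2006-13-31' is not a valid DATETIME):
CREATE TABLE t1 (d DATETIME, KEY(d));
-- range analysis now ignores this predicate instead of building
-- a range interval from the invalid constant
SELECT * FROM t1 WHERE d >= '2006-13-31';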
doesn't find the column"
When a user was using 4.1 tables with a VARCHAR column and a 5.0
server, and a query that used a temporary table to resolve itself, the
table metadata for the VARCHAR column sent to the client was incorrect:
the MYSQL_FIELD::table member was empty.
The bug was caused by implicit "upgrade" from old VARCHAR to new
VARCHAR hard-coded in Field::new_field, which did not preserve
the information about the original table. Thus, the field metadata
of the "upgraded" field pointed to an auxiliary temporary table
created for query execution.
The fix is to copy the pointer to the original table to the new field.
The problem was that during DROP TEMPORARY TABLE we tried to acquire
the name lock, even though temporary tables belong to a single
connection, and no race is possible.
The solution is to not use table name locking while executing
DROP TEMPORARY TABLE.
- BUG#15934: Instance manager fails to work;
- BUG#18020: IM connect problem;
- BUG#18027: IM: Server_ID differs;
- BUG#18033: IM: Server_ID not reported;
- BUG#21331: Instance Manager: Connect problems in tests;
Only the test suite has been changed
(the server codebase has not been modified).
When a view was used inside a trigger or a function, the lock type for
tables used in the view was always set to READ (thus making the view
non-updatable), even if we were trying to update the view.
The solution is to set the lock type properly.
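A hedged sketch (hypothetical tables and trigger):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE VIEW v1 AS SELECT a FROM t1;
CREATE TRIGGER t2_bi BEFORE INSERT ON t2 FOR EACH ROW
  UPDATE v1 SET a = NEW.a;   -- used to fail: v1's tables were locked for READ
INSERT INTO t2 VALUES (1);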
init_dumping now accepts a function pointer to the table or view specific init_dumping function. This allows both tables and views to use the init_dumping function.
erroneous check
Problem: Actually there were two problems in the server code. The check
for SQLCOM_FLUSH in SF/Triggers was not in line with the existing
architecture, which uses sp_get_flags_for_command() from sp_head.cc.
That function was also missing a check for SQLCOM_FLUSH, which is a
problem in combination with prelocking. This changeset fixes both of
these deficiencies, as well as the erroneous check in
sp_head::is_not_allowed_in_function(), which was a copy&paste error.
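A sketch of the now consistently rejected construct (hypothetical
function; client delimiter handling omitted):
CREATE FUNCTION f1() RETURNS INT
BEGIN
  FLUSH TABLES;   -- not allowed in stored functions or triggers
  RETURN 1;
END;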
const tables. This resulted in choosing extremely inefficient
execution plans in some cases when the distribution of data in the
joined tables was skewed (see the customer test case for the bug).
The following procedure was not possible if max_sp_recursion_depth was 0:
create procedure show_proc() show create procedure show_proc;
Actually there is no recursive call, but the limit was checked anyway.
Solved by temporarily increasing the thread's limit just before the
fetch from the cache and decreasing it after that.
The mysql client uses the default character set on reconnect. The
default character set is now controlled by the client's charset command
while the client is running. The charset command now also issues a SET
NAMES command to the server to make sure that the client's charset
settings are in sync with the server's.
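A hedged illustration of the client command (it now also sends the
corresponding SET NAMES to the server):
mysql> charset utf8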
- Fix typo in Item_func_export_set::fix_length_and_dec() which caused character set aggregation to fail
- Remove default argument from last arg of agg_arg_charsets() function, to reduce potential errors
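For illustration:
SELECT EXPORT_SET(5, 'Y', 'N', ',', 4);   -- 'Y,N,Y,N'
SELECT MAKE_SET(1 | 4, 'a', 'b', 'c');    -- 'a,c'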
User names (and host names) have a length limit. The server code relies
on these limits when storing the names. The problem was that sometimes
these limits were not checked properly, which could lead to a buffer
overflow.
The fix is to check the length of the user/host name in the parser and,
if the string is too long, throw an error.
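A hedged sketch (hypothetical name chosen to exceed the limit):
-- now rejected in the parser with an error instead of overflowing a buffer
CREATE USER 'user_name_deliberately_longer_than_the_allowed_limit'@'localhost';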
GROUP BY/DISTINCT pruning optimization must be done before ORDER BY
optimization, because ORDER BY may be removed when GROUP BY/DISTINCT
sorts as a side effect. E.g. in
SELECT DISTINCT <non-key-col>,<pk> FROM t1 ORDER BY <non-key-col>
DISTINCT must be removed before ORDER BY, as if done the other way
around it will remove both.
Converting BIT to a string (an intermediate step in conversion) does
not yield an ASCII numeric string, so we skip that step for BIT and
get the integer value directly from the item.
This site in sql/item_strfunc.cc may be ripe for refactoring for
other types as well, where converting to a string is a waste of time.
used.
Sorting by RAND() uses a temporary table in order to get a correct
result. A user-defined variable was set while the temporary table was
being filled, and later it was substituted with its value from the
temporary table. Due to this it contained the last value stored in the
temporary table.
Now, if the result_field is set for the Item_func_set_user_var object,
it updates the variable from the result_field value when being sent to
a client.
Item_func_set_user_var::check() now accepts a use_result_field
parameter. Depending on its value, either the result_field or args[0]
is used to get the current value.
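For illustration (hypothetical table):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (3);
-- @v is now updated from the value actually sent to the client,
-- not from the last value stored in the temporary table
SELECT a, @v := a FROM t1 ORDER BY RAND();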
when an X.509 subject was required for a connect, we tested whether it
was the right one, but did not refuse the connection if not. Fixed.
(the corrected CS now uses --replace_results on the socket path)
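A hedged sketch of the affected setup (hypothetical account and subject):
GRANT USAGE ON *.* TO 'ssl_user'@'localhost'
  REQUIRE SUBJECT '/C=SE/O=Example/CN=client';
-- a connection presenting a certificate with a different subject
-- is now refused instead of being let through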
server to crash".
Crash caused by assertion failure happened when one ran SHOW OPEN TABLES
while concurrently doing DROP TABLE (or RENAME TABLE, CREATE TABLE LIKE
or any other command that takes name-lock) in other connection.
In the non-debug version of the server the problem exposed itself as
wrong output of the SHOW OPEN TABLES statement (it was missing
name-locked tables). Finally, in 5.1 both debug and non-debug versions
simply crashed in this situation due to a NULL-pointer dereference.
This problem was caused by the fact that table placeholders which were
added to the table cache in order to obtain a name-lock had
TABLE_SHARE::table_name set to 0. Therefore they broke the assumption
that this member is non-0 for all tables in the table cache, which was
checked by an assert in list_open_tables() (in 5.1 this function simply
relies on it).
The fix simply sets this member to an appropriate value for such
placeholders, making this assumption true again.
This patch also includes a test for the similar bug 12212 "Crash that
happens during removing of database name from cache", which reappeared
in 5.1 as bug 19403.
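A sketch of the race (two concurrent connections, hypothetical table):
-- connection 1:
DROP TABLE t1;        -- takes a name-lock on t1
-- connection 2, concurrently:
SHOW OPEN TABLES;     -- asserted/crashed or missed name-locked tables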
A date can be represented as an int (like 20060101) and as a string (like
"2006.01.01"). When a DATE/TIME field is compared in one SELECT against both
representations the constant propagation mechanism leads to comparison
of the DATE as a string and the DATE as an int. In this example it
compares the integers 2006 and 20060101. Obviously the comparison fails,
although they represent the same date.
Now the Item_bool_func2::fix_length_and_dec() function sets the
comparison context for the items being compared, i.e. if items are
compared as strings, the comparison context is STRING.
The constant propagation mechanism now doesn't mix items used in different
comparison contexts. The context check is done in the
Item_field::equal_fields_propagator() and in the change_cond_ref_to_const()
functions.
Also, a better fix for bug 21159 is introduced.
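A sketch of the affected comparison (hypothetical table):
CREATE TABLE t1 (d DATE);
INSERT INTO t1 VALUES ('2006-01-01');
-- the two predicates now keep their own comparison contexts, so constant
-- propagation no longer compares 2006 (the string as an int) with 20060101
SELECT * FROM t1 WHERE d = '2006.01.01' AND d = 20060101;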
The problem was that the error handling was using a too-small buffer to
print the generated error message. We fix this by not using a buffer at
all, but by using fprintf() directly. There were also some problems with
the error handling in table dumping that were exposed by this fix; these
were corrected as well.
The SELECT privilege instead of the INSERT privilege was required for an
insert into a view.
This wrong behaviour appeared after the fix for bug #20989, whose
intention was to require only the SELECT privilege for all tables except
the very first one of a complex INSERT query. But that patch did it in a
wrong way, which led to requiring the wrong access right for an insert
into a view.
The setup_tables_and_check_access() function now accepts two want_access
parameters. One is used for the first table and the other for the rest
of the tables.
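A hedged sketch (hypothetical schema and account; t1 has one INT column):
CREATE VIEW db1.v1 AS SELECT * FROM db1.t1;
GRANT INSERT ON db1.v1 TO 'u1'@'localhost';
-- executed as u1: the INSERT privilege alone is now sufficient
INSERT INTO db1.v1 VALUES (1);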
In fix for BUG#15872, a condition of type "t.key NOT IN (c1, .... cN)"
where N>1000, was incorrectly converted to
(-inf < X < c_min) OR (c_max < X)
Now this conversion is removed; we don't produce any range lists for
such conditions.
This bug is a side-effect of the fix for bug #16377. NOW() is optimized
in BETWEEN to integer constants to speed up query execution. When a view
is being created, it saves the already modified query and thus becomes
wrong.
The agg_cmp_type() function now substitutes constant-result DATE/TIME
functions with their results only if the current query isn't CREATE VIEW
or SHOW CREATE VIEW.
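For illustration (hypothetical table):
CREATE TABLE t1 (d DATETIME);
CREATE VIEW v1 AS
  SELECT * FROM t1 WHERE d BETWEEN NOW() AND '2010-01-01 00:00:00';
-- the stored definition now keeps NOW() instead of a pre-computed constant
SHOW CREATE VIEW v1;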
Zero-length variables caused failures when using the length to look
up the name in a hash. Instead, signal that no zero-length name can
ever be found and that to encounter one is a syntax error.
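A hypothetical reproduction sketch (the exact statement that hit the
hash lookup may differ):
SELECT @``;   -- a zero-length user variable name is now a syntax error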
The crash was caused by an invalid sequence of handler::** calls:
ha_smth->index_init();
ha_smth->index_next_same(); (2)
(2) is an invalid call, as it was not preceded by any 'scan setup' call
like index_first() or index_read(). The cause was that
QUICK_SELECT::reset() didn't fully reset the quick select: the current
QUICK_RANGE wasn't forgotten, and the quick select might attempt to
continue reading the range, which would result in the above-mentioned
invalid sequence of handler calls.
5.x versions are not affected by the bug - they already have the missing
"range=NULL" clause.
bug #18184 SELECT ... FOR UPDATE does not work..: New test case
ha_ndbcluster.h, ha_ndbcluster.cc, NdbConnection.hpp:
Fix for bug #21059 Server crashes on join query with large dataset with NDB tables: Releasing operation for each intermediate batch, before next call to trans->execute(NoCommit);
- if there are two character set definitions in the column declaration,
we replace the first one with the second one, as we store both in the
LEX->charset slot. Add a separate slot to the LEX structure to store the
underscore charset.
- convert default values to the column charset of STRING and VARSTRING
fields if necessary as well.
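A hedged sketch of a declaration with two character set definitions
(an underscore charset in the default plus an explicit column charset):
CREATE TABLE t1 (c CHAR(1) CHARACTER SET utf8 DEFAULT _latin1 'a');
-- the default value is converted to the column character set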
Disable const propagation for Item_hex_string.
This must be done because Item_hex_string->val_int() is not
the same as (Item_hex_string->val_str() in BINARY column)->val_int().
We cannot simply disable the replacement in a particular context (
e.g. <bin_col> = <int_col> AND <bin_col> = <hex_string>) since
Items don't know the context they are in and there are functions like
IF (<hex_string>, 'yes', 'no').
Note that this will disable some valid cases as well
(e.g. : <bin_col> = <hex_string> AND <bin_col2> = <bin_col>) but
there's no way to distinguish the valid cases without having the
Item's parent say something like : Item->set_context(Item::STRING_RESULT)
and have all the Items that contain other Items do that consistently.
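A concrete sketch of the inline example above (hypothetical table):
CREATE TABLE t1 (a VARBINARY(1), b INT);
INSERT INTO t1 VALUES (0x61, 97);
-- 0x61 is the string 'a' in the binary column but the integer 97 when
-- used as a number; propagating it between the predicates would be wrong
SELECT * FROM t1 WHERE a = b AND a = 0x61;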
optimizer does not honor IGNORE INDEX
- Allow an index to be used for sorting the table
instead of filesort only if it is not disabled by
IGNORE INDEX.
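For illustration (hypothetical table):
CREATE TABLE t1 (a INT, KEY idx(a));
-- idx is no longer used to produce the ordering; filesort is used instead
SELECT * FROM t1 IGNORE INDEX (idx) ORDER BY a;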
table in a join
The optimizer removes redundant columns in ORDER BY. It considers
redundant every reference to a const table column, e.g. b in:
create table t1 (a int, b int, primary key(a));
select 1 from t1 where a = 1 order by b;
But it must not remove references to const table columns if the
const table is an outer table, because there can still be 2 values:
the const value and NULL. E.g.:
create table t1 (a int, b int, primary key(a));
select t2.b c from t1 left join t1 t2 on (t1.a = t2.a and t2.a = 5)
order by c;
Make the encryption functions MD5(), SHA1() and ENCRYPT() return binary results.
Make MAKE_SET() and EXPORT_SET() use the correct character set for their default separator strings.
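For illustration:
SELECT CHARSET(MD5('test')), CHARSET(SHA1('test'));   -- now 'binary'
SELECT MAKE_SET(1 | 4, 'a', 'b', 'c');   -- separator in the proper charset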
didn't work as expected: collation_server was set not to xxx,
but to the default collation of character set "yyy".
With different argument order it worked as expected:
mysqld --character-set-server=yyy --collation-server=yyy
Fix:
initialize default_collation_name to 0 when processing
--character-set-server, but only if --collation-server has not been
specified on the command line.
Treat queries with no FROM clause and aggregate functions as normal
queries, so the aggregate functions get correctly calculated as if
there is 1 row.
This means that they will be considered to have one row, so COUNT(*)
will return 1 instead of 0. Other aggregates will behave in a
compatible manner.
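For illustration:
SELECT COUNT(*);          -- now returns 1 instead of 0
SELECT MAX(5), MIN(5);    -- calculated as if there is 1 row: 5, 5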
Bug#19844: time_format in Union truncates values
time_format() claimed %H and %k would return at most two digits
(hours 0-23), but this coincided neither with actual behaviour
nor with docs. this is not visible in simple queries; forcing
a temp-table is probably the easiest way to see this. adjusted
the return-length appropriately; the alternative would be to
adjust the docs to say that behaviour for > 99 hours is undefined.
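A hedged reproduction sketch (the UNION forces a temporary table):
SELECT TIME_FORMAT('100:00:00', '%H') AS h
UNION
SELECT TIME_FORMAT('110:00:00', '%k');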
"A SELECT privilege on a view is required for SHOW CREATE VIEW and it will stay
that way because of compatibility reasons." (see #20136)
Added a test case to illustrate how the ACLs work in this case (and to
ensure they will continue to do so in the future).
privileges
This problem is 4.1-specific. It doesn't affect 4.0 and was fixed
in 5.x before.
Having any mysql user who is allowed to issue a multi-table update
statement and has any column/table grants allows this user to update
any table on the server (the mysql grant tables are no exception).
check_grant() accepts the number of tables (in the table list) to be
checked in its 5th parameter. While checking grants for a multi-table
update, the number of tables must be 1. It must never be 0 (we actually
have DBUG_ASSERT(number > 0) in the 5.x check_grant() function).
Before this fix,
- a runtime error in a statement in a stored procedure with no error handlers
was properly detected (as expected)
- a runtime error in a statement with an error handler inherited from a non
local runtime context (i.e., proc a with a handler, calling proc b) was
properly detected (as expected)
- a runtime error in a statement with a *local* error handler was
executed as follows:
a) the statement would succeed, regardless of the error condition (bug),
b) the error handler would be called (as expected).
The root cause is that functions like my_message_sql would "forget" to
set the thread flag thd->net.report_error to 1, because of the check
involving sp_rcontext::found_handler_here().
Failure to set this flag would cause, later in the call stack,
in Item_func::fix_fields() at line 190, the code to return FALSE and consider
that executing the statement was successful.
With this fix :
- error handling code, that was duplicated in different places in the code,
is now implemented in sp_rcontext::handle_error(),
- handle_error() correctly sets thd->net.report_error when a handler is
present, regardless of the handler location (local, or in the call stack).
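A minimal sketch of the local-handler case (hypothetical names):
CREATE PROCEDURE p1()
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
    SELECT 'handler fired';
  -- the error is now reported, so the local handler above is invoked
  -- instead of the statement silently "succeeding"
  SELECT no_such_function();
END;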
A test case, bug8153_subselect, has been written to demonstrate the change
of behavior before and after the fix.
Another test case, bug8153_function_a, has also been written.
This test has the same behavior before and after the fix.
It has been written to demonstrate that the previous expected result of
procedure bug18787 was incorrect, since select no_such_function()
should fail and therefore not produce a result.
The incorrect result for bug18787 has the same root cause as Bug#8153,
and the expected result has been adjusted.