The cause of the bug was an incomplete fix for bug 18080.
The problem was that setup_tables() unconditionally reset the
name resolution context to its 'tables' argument, which pointed
to the first table of an SQL statement.
The fix restricts resetting of the name resolution context in
setup_tables() to the cases where the context was not already set
by earlier parser/optimizer phases.
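For illustration only, a minimal C++ sketch of the guard the fix introduces, using simplified stand-ins for the server's structures (Name_resolution_context and the table list are reduced to bare pointers here):

  // Simplified stand-ins; the real structures live in the server sources.
  struct Table_list { /* first table of the statement */ };

  struct Name_resolution_context {
    Table_list *first_name_resolution_table = nullptr;
  };

  // Before the fix, setup_tables() reset the context to 'tables'
  // unconditionally; now the reset only happens when no earlier
  // parser/optimizer phase has filled the context in.
  void setup_tables_sketch(Name_resolution_context *context, Table_list *tables) {
    if (context->first_name_resolution_table == nullptr)
      context->first_name_resolution_table = tables;
    // otherwise keep the context established by earlier phases
  }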
In 5.0 we made LOAD DATA INFILE autocommit in all engines, while
only NDB wanted that. Users and trainers complained that it affected
InnoDB and was a change from 4.1, where only NDB autocommitted.
To revert to the behaviour of 4.1, we move the autocommit logic out of mysql_load() into
ha_ndbcluster::external_lock().
The result is that LOAD DATA INFILE on an NDB table commits all uncommitted
NDB changes, including its own, but no longer affects other engines.
Note: even though there is no "commit the full transaction at end"
anymore, LOAD DATA INFILE stays disabled in routines (re-entrancy
problems, per a comment of Pem).
Note: ha_ndbcluster::has_transactions() does not give reliable results
because it says "yes" even if transactions are disabled in this engine...
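To illustrate where the commit decision now lives, a hedged C++ sketch with simplified stand-in types (the real code works against the server's THD and handler interfaces):

  enum sql_command_sketch { SQLCOM_LOAD, SQLCOM_OTHER };

  struct Thd_sketch {
    sql_command_sketch command = SQLCOM_OTHER;
  };

  struct Ndb_handler_sketch {
    void commit_pending_ndb_changes(Thd_sketch *) { /* placeholder */ }

    int external_lock(Thd_sketch *thd, int lock_type) {
      (void) lock_type;
      // Commit logic moved here from mysql_load(): it only runs for NDB
      // tables, so LOAD DATA INFILE no longer autocommits in other engines.
      if (thd->command == SQLCOM_LOAD)
        commit_pending_ndb_changes(thd);  // NDB's pending changes, including LOAD DATA's own
      return 0;
    }
  };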
BUG#21913: DATE_FORMAT() crashes the mysql server when used through the mysql-connector-j driver.
The variable character_set_results can legally be NULL (meaning "no conversion").
This could result in a NULL dereference that crashed the server. Fixed.
(I also ran some additional cursory tests to see whether I could break
anything else, but found no further breakage.)
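The crash pattern and the fix can be shown with a small C++ sketch (Charset_sketch is a simplified stand-in for CHARSET_INFO, not the server's real code path):

  #include <cstring>

  struct Charset_sketch {
    const char *name;
  };

  // character_set_results == nullptr legally means "no conversion";
  // dereferencing it without this check is what crashed the server.
  std::size_t result_charset_name_length(const Charset_sketch *character_set_results) {
    if (character_set_results == nullptr)
      return 0;                                 // no conversion requested
    return std::strlen(character_set_results->name);
  }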
The problem was due to a prior fix for BUG 9676, which limited the number
of rows stored in a temporary table to the value of the LIMIT clause. This
optimization is not applicable to non-group queries with aggregate
functions. The fix disables the optimization in this case.
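A hedged sketch of the decision (names and parameters are illustrative, not the optimizer's actual interface): the LIMIT can only be pushed into the temporary table when no aggregate functions need to see every row of a non-grouped result.

  // Illustrative decision helper only.
  bool can_limit_temp_table_rows(bool has_aggregates, bool has_group_by,
                                 long limit_value) {
    if (limit_value < 0)                  // no LIMIT clause present
      return false;
    if (has_aggregates && !has_group_by)
      return false;                       // aggregates over the whole result need all rows
    return true;
  }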
when a range condition uses an invalid DATETIME constant.
Now we do not use invalid DATETIME constants to form end keys for
range intervals: range analysis just ignores predicates with such
constants.
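A small illustrative sketch of the idea (simplified types, not the range optimizer's real SEL_ARG machinery): an end key is built only from a valid DATETIME constant, otherwise the predicate is skipped.

  #include <optional>

  struct Datetime_const_sketch {
    bool is_valid;
    long long packed_value;
  };

  // Build an end key for a range interval only from a valid constant;
  // predicates with invalid constants are simply ignored by range analysis.
  std::optional<long long> range_end_key(const Datetime_const_sketch &c) {
    if (!c.is_valid)
      return std::nullopt;   // ignore the predicate instead of building a bad key
    return c.packed_value;
  }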
doesn't find the column"
When a user was using 4.1 tables with a VARCHAR column and a 5.0 server,
and a query that used a temporary table to resolve itself, the
table metadata for the VARCHAR column sent to the client was incorrect:
the MYSQL_FIELD::table member was empty.
The bug was caused by implicit "upgrade" from old VARCHAR to new
VARCHAR hard-coded in Field::new_field, which did not preserve
the information about the original table. Thus, the field metadata
of the "upgraded" field pointed to an auxiliary temporary table
created for query execution.
The fix is to copy the pointer to the original table to the new field.
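For illustration, a simplified C++ sketch of the fix (Field_sketch and Table_sketch are stand-ins, not the server's Field/TABLE classes): the cloned field keeps a pointer to the original table even though it now lives in a temporary table.

  struct Table_sketch { const char *alias; };

  struct Field_sketch {
    Table_sketch *table;        // table the field currently belongs to
    Table_sketch *orig_table;   // table the field originally came from

    Field_sketch *new_field(Table_sketch *tmp_table) const {
      Field_sketch *copy = new Field_sketch(*this);
      copy->table = tmp_table;         // now lives in the temporary table
      copy->orig_table = orig_table;   // the fix: keep pointing at the original table
      return copy;
    }
  };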
- BUG#15934: Instance manager fails to work;
- BUG#18020: IM connect problem;
- BUG#18027: IM: Server_ID differs;
- BUG#18033: IM: Server_ID not reported;
- BUG#21331: Instance Manager: Connect problems in tests;
Only the test suite has been changed
(the server codebase has not been modified).
When a view was used inside a trigger or a function, lock type for
tables used in a view was always set to READ (thus making the view
non-updatable), even if we were trying to update the view.
The solution is to set the lock type properly.
init_dumping() now accepts a function pointer to a table- or view-specific initialization function, so both tables and views can share the common init_dumping() code.
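A hedged sketch of the refactoring (function names and signatures here are hypothetical, not mysqldump's actual ones): the shared init_dumping code receives a pointer to the table- or view-specific routine.

  #include <cstdio>

  typedef int (*init_dump_func)(const char *name);

  static int init_dumping_table_sketch(const char *table) {
    std::printf("-- preparing to dump table %s\n", table);
    return 0;
  }

  static int init_dumping_view_sketch(const char *view) {
    std::printf("-- preparing to dump view %s\n", view);
    return 0;
  }

  static int init_dumping_sketch(const char *name, init_dump_func specific_init) {
    // setup common to tables and views would go here
    return specific_init(name);
  }

  // usage: init_dumping_sketch("t1", init_dumping_table_sketch);
  //        init_dumping_sketch("v1", init_dumping_view_sketch);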
erroneous check
Problem: there were actually two problems in the server code. The check
for SQLCOM_FLUSH in stored functions/triggers did not follow the existing
architecture, which uses sp_get_flags_for_command() from sp_head.cc.
That function was also missing an entry for SQLCOM_FLUSH, which is
problematic in combination with prelocking. This changeset fixes both of
these deficiencies as well as the erroneous check in
sp_head::is_not_allowed_in_function(), which was a copy-and-paste error.
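For illustration only, a C++ sketch of the centralized check (command names and the flag bit are simplified stand-ins, not sp_head.cc's real flag values):

  enum sp_command_sketch { SQLCOM_SELECT_SKETCH, SQLCOM_FLUSH_SKETCH /* , ... */ };

  static const unsigned DISALLOWED_IN_ROUTINE = 0x01;   // hypothetical flag bit

  static unsigned sp_get_flags_for_command_sketch(sp_command_sketch cmd) {
    switch (cmd) {
      case SQLCOM_FLUSH_SKETCH:
        return DISALLOWED_IN_ROUTINE;   // FLUSH conflicts with prelocking
      default:
        return 0;
    }
  }

  static bool is_not_allowed_in_function_sketch(sp_command_sketch cmd) {
    return (sp_get_flags_for_command_sketch(cmd) & DISALLOWED_IN_ROUTINE) != 0;
  }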
const tables. This resulted in choosing extremely inefficient
execution plans in some cases when the distribution of data in the
joined tables was skewed (see the customer test case for the bug).
The following procedure was not possible if max_sp_recursion_depth was 0:
create procedure show_proc() show create procedure show_proc;
There is no actual recursive call, but the limit was still checked.
Solved by temporarily increasing the thread's limit just before the fetch
from the cache and decreasing it again afterwards.
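A minimal sketch of the workaround in C++ (Thread_state_sketch stands in for THD; not the server's actual code):

  struct Thread_state_sketch {
    unsigned long max_sp_recursion_depth = 0;
  };

  template <typename FetchFn>
  auto fetch_sp_with_raised_limit(Thread_state_sketch *thd, FetchFn fetch) {
    ++thd->max_sp_recursion_depth;   // allow the self-referencing SHOW CREATE
    auto result = fetch();           // fetch the routine from the SP cache
    --thd->max_sp_recursion_depth;   // restore the user-visible limit
    return result;
  }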
The mysql client uses the default character set on reconnect. The default character set is now controlled by the client's charset command while the client is running. The charset command now also issues a SET NAMES statement to the server to make sure that the client's charset settings are in sync with the server's.
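A client-side sketch of the behaviour, with hypothetical helper names (the real client goes through the C API rather than these functions):

  #include <string>

  struct Client_session_sketch {
    std::string current_charset = "latin1";   // assumed compiled-in default

    void send_query(const std::string &) { }  // stand-in for sending SQL

    void charset_command(const std::string &cs) {
      current_charset = cs;                   // remembered for reconnects
      send_query("SET NAMES " + cs);          // keep the server in sync
    }

    void reconnect() {
      // re-apply the charset last selected with the charset command instead
      // of silently reverting to the compiled-in default
      send_query("SET NAMES " + current_charset);
    }
  };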