"ROW(...) IN (SELECT ... FROM DUAL)" was messed up: it always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery selecting from the DUAL pseudo-table the resulting
HAVING condition is an expression on constant values, so the further
transformation with optimize_cond() eliminates this HAVING condition
and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING clause,
and that caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
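A minimal illustration (hypothetical literal values) of a query affected
before the fix:

    SELECT ROW(1, 2) IN (SELECT 3, 4 FROM DUAL);
    -- Before the fix this returned 1 (TRUE) although the rows differ;
    -- it now correctly returns 0 (FALSE).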
When using CREATE TEMPORARY TABLE LIKE to create a temporary table,
or using TRUNCATE to delete all rows of a temporary table, the
tmp_table_used flag was not set. This caused the omission of
"SET @@session.pseudo_thread_id" when dumping the binlog with
mysqlbinlog, and caused errors when replaying the statements.
This patch fixed the problem by setting tmp_table_used in these two
cases. (Done by He Zhenxing 2009-01-12)
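For example (hypothetical table names), both of these statements now mark
the session as using temporary tables:

    CREATE TEMPORARY TABLE tmp LIKE t1;  -- now sets tmp_table_used
    TRUNCATE TABLE tmp;                  -- now sets tmp_table_used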
When a MEMORY table was full, the error was returned to the client but
not written to the error log.
Fixed the handler API to write the error message to the error log when
the table is full.
Note: no test case is included, as testing the error log is non-trivial.
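A sketch of how to hit the condition (hypothetical sizes and table name):

    SET max_heap_table_size = 16384;
    CREATE TABLE t (a CHAR(255)) ENGINE=MEMORY;
    -- Repeated INSERTs eventually fail with "The table 't' is full";
    -- that message is now also written to the server error log.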
Setting a user variable in a trigger could crash the server:
alternating calls to the mysql_change_user client function
and invocations of a trigger changing some user variable caused
memory corruption and a crash.
The mysql_change_user API call forces THD::cleanup() on the server,
which frees user variable entries.
However, it didn't reset Item_func_set_user_var::entry to NULL
because Item_func_set_user_var::cleanup() was not overloaded.
So Item_func_set_user_var::entry held a pointer to freed memory,
which caused the crash.
The Item_func_set_user_var::cleanup method has been overloaded
to clean up the Item_func_set_user_var::entry field.
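A sketch of the crash scenario (hypothetical table and trigger names):

    CREATE TABLE t (a INT);
    CREATE TRIGGER trg BEFORE INSERT ON t FOR EACH ROW SET @v = NEW.a;
    INSERT INTO t VALUES (1);  -- caches the user variable entry
    -- The client then calls mysql_change_user(), which frees @v's entry;
    INSERT INTO t VALUES (2);  -- previously dereferenced freed memory.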
conflicts:
Text conflict in client/mysqltest.cc
Text conflict in mysql-test/include/wait_until_connected_again.inc
Text conflict in mysql-test/lib/mtr_report.pm
Text conflict in mysql-test/mysql-test-run.pl
Text conflict in mysql-test/r/events_bugs.result
Text conflict in mysql-test/r/log_state.result
Text conflict in mysql-test/r/myisam_data_pointer_size_func.result
Text conflict in mysql-test/r/mysqlcheck.result
Text conflict in mysql-test/r/query_cache.result
Text conflict in mysql-test/r/status.result
Text conflict in mysql-test/suite/binlog/r/binlog_index.result
Text conflict in mysql-test/suite/binlog/r/binlog_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_packet.result
Text conflict in mysql-test/suite/rpl/t/rpl_packet.test
Text conflict in mysql-test/t/disabled.def
Text conflict in mysql-test/t/events_bugs.test
Text conflict in mysql-test/t/log_state.test
Text conflict in mysql-test/t/myisam_data_pointer_size_func.test
Text conflict in mysql-test/t/mysqlcheck.test
Text conflict in mysql-test/t/query_cache.test
Text conflict in mysql-test/t/rpl_init_slave_func.test
Text conflict in mysql-test/t/status.test
It's a regression issue.
The reason for the bug appeared to be an error introduced into the 5.1
source code: a piece of code in Create_file_log_event::do_apply_event()
had no test coverage, which left make test and pushbuild (pb) unaware
of the problem.
Fixed by inverting the old return value of
Create_file_log_event::do_apply_event().
The rpl test suite is extended with the file `rpl_cross_version' to
hold regression cases similar to the current one.
The problem is that the query cache was storing partial results
if the statement failed when sending the results to the client.
This could cause clients to hang when trying to read the results
from the cache as they would, for example, wait indefinitely for
an EOF packet that was never saved.
The solution is to always discard the cache entry of a query that
failed to send its results to the associated client.
Views weren't sync()d the same way other structures were.
In creating the FRM for views, obey the same rules for variable
"sync_frm" as for everything else.
bug#33094: Error in upgrading from 5.0 to 5.1 when table contains
triggers
and
#41385: Crash when attempting to repair a #mysql50# upgraded table
with triggers.
Problem:
1. The trigger code didn't expect that a table name may have
a "#mysql50#" prefix, which could lead to a failing ASSERT().
2. "ALTER DATABASE ... UPGRADE DATA DIRECTORY NAME" failed
for databases with the "#mysql50#" prefix if there was any trigger.
3. mysqlcheck --fix-table-name didn't use UTF8 as the default
character set, which resulted in (parsing) errors for tables with
non-latin symbols in their names and in the definitions of their
triggers.
Fix:
1. Properly handle table/database names with the "#mysql50#" prefix.
2. Handle the --default-character-set mysqlcheck option:
if mysqlcheck is launched with --fix-table-name or --fix-db-name,
set the default character set to UTF8 unless a
--default-character-set option is given.
Note: with --fix-table-name or --fix-db-name and no explicit
--default-character-set option, the default character set is UTF8.
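For example (hypothetical database name), upgrading an old-format
database directory now works even if its tables have triggers:

    ALTER DATABASE `#mysql50#old-db` UPGRADE DATA DIRECTORY NAME;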
The next number (AUTO_INCREMENT) field of the table was not
initialized for write rows events, which caused some engines (InnoDB)
to not correctly update the table's auto_increment value.
This patch fixed the problem by honoring the next number field when
present.
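A sketch of the affected setup (hypothetical table, assuming row-based
replication):

    CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
    -- With binlog_format=ROW, an INSERT replicated via a write rows
    -- event now also advances the slave table's auto_increment counter.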
The problem is that the query cache stores packets containing
the server status from the time when the cached statement was run.
This might lead to a wrong transaction status on the client side
if a statement is cached during a transaction and is later served
outside a transaction context (and vice versa).
The solution is to take the transaction status into account when
storing in and serving from the query cache.
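A sketch of the mismatch (hypothetical table name):

    BEGIN;
    SELECT * FROM t;  -- result cached with "in transaction" status
    COMMIT;
    SELECT * FROM t;  -- previously served from the cache with the
                      -- stale "in transaction" flag still set.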
The greedy optimizer tracks the current level of nested joins and the
position inside these by setting and maintaining a state that's global
for the whole FROM clause.
This state was correctly maintained inside the selection of the next
partial plan table (in best_extension_by_limited_search()).
greedy_search() also moves the current position by adding the last
partial match table when there are not enough tables in the partial
plan found by best_extension_by_limited_search().
This may require an update of the global state variables that describe
the current position in the plan if the last table placed by
greedy_search() is not a top-level join table.
Fixed by updating the state after placing the partial plan table in
greedy_search(), in the same way this is done on entering
best_extension_by_limited_search().
Also fixed the signature of the function called to update the state:
check_interleaving_with_nj().
Bounds checks and blocksize corrections were applied to user input,
but constants in the server were trusted implicitly. If these values
did not actually meet the requirements, the user could not change
a variable and then set it back to the (wonky) factory default or
maximum by explicitly specifying that value (SET <var>=<value> vs.
SET <var>=DEFAULT).
Now the checks also apply to the server's presets. Wonky values and
maxima get corrected at startup. Consequently, all non-offsetted
values the user sees are valid, and users can set a variable to that
exact value if they so desire.
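A sketch of the previously inconsistent behavior (hypothetical variable
name and numbers):

    -- Suppose the factory default of some_var is 5000, while user input
    -- is rounded down to a multiple of 1024:
    SET GLOBAL some_var = 5000;     -- used to be corrected to 4096, so the
                                    -- exact factory value was unreachable,
    SET GLOBAL some_var = DEFAULT;  -- yet DEFAULT restored 5000 unchecked.
    -- Now the preset itself is corrected at startup, so both paths agree.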
When substituting system constant functions with a constant result,
the server was not expecting that the function may return NULL.
Fixed by checking for NULL and returning Item_null (in the relevant
collation) if the result of the system constant function was NULL.
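For example, DATABASE() is a system constant function that can
return NULL:

    SELECT DATABASE();  -- NULL when no default database is selected;
    -- constant substitution now yields a NULL item in the relevant
    -- collation instead of mishandling the value.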
The special TRUNCATE TABLE (DDL) transaction wasn't being properly
rolled back if an error occurred during row-by-row deletion. The
error can be caused by a foreign key restriction imposed by the
InnoDB SE and would cause the server to erroneously issue an implicit
commit.
The solution is to rollback the transaction if a truncation via
row-by-row deletion fails, and otherwise commit. All effects of a
TRUNCATE TABLE operation are rolled back if a row-by-row deletion
fails.
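A sketch of the failing case (hypothetical tables with a foreign key):

    CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE child (pid INT,
      FOREIGN KEY (pid) REFERENCES parent(id)) ENGINE=InnoDB;
    INSERT INTO parent VALUES (1);
    INSERT INTO child VALUES (1);
    TRUNCATE TABLE parent;  -- fails on the FK restriction; the partial
                            -- deletion is now rolled back instead of
                            -- being implicitly committed.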
Problem: when the server reads a log_event from file, it should read
the post-header lengths from the format_description_log_event. Some
event types which currently have post-header length 0 did not do this,
and instead had a hard-coded zero length for the post-header. That
means the current server version will not be able to read future
versions of these events.
Fix: make the reader functions read the post-header.
The binlog_innodb test was sensitive to what tests ran before it. Now run
FLUSH STATUS before performing operations that need to be checked.
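A sketch of the pattern (status variable name for illustration only):

    FLUSH STATUS;                         -- reset session status counters
    -- ... perform the operations under test ...
    SHOW STATUS LIKE 'Binlog_cache_use';  -- result no longer depends on
                                          -- whatever tests ran before.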
sys_var_thd_ulong::update() was improperly casting an option value from
ulonglong to ulong before comparing it to the max allowed value. On systems
where ulong and ulonglong are of different size, this caused values greater
than ULONG_MAX to wrap around (not be truncated to ULONG_MAX, which appears to
have been the intention of the original coder), and caused some checks to work
incorrectly. This wasn't generally visible to the user, because later checks
would prevent the wrapped-around value from being used. But it caused warning
messages to differ between 32- and 64-bit platforms. Fix is to just remove the
cast. Also added a DBUG_ASSERT to ensure that the value really is capped
properly before finally stuffing it into the ulong.