Problem:
RelativeLocationPath can appear only after a node-set expression
in the third and the fourth branches of this rule:
PathExpr ::= LocationPath
| FilterExpr
| FilterExpr '/' RelativeLocationPath
| FilterExpr '//' RelativeLocationPath
The XPath code didn't check the type of FilterExpr and crashed.
Fix:
If FilterExpr is a scalar expression
(variable reference, literal, number, or scalar function call),
return an error.
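A minimal standalone C++ sketch of the added check (Expr, ExprType and
check_path_step are hypothetical stand-ins, not the real XPath item types):

    #include <cstdio>

    enum ExprType { NODESET, SCALAR };   // stand-in for the XPath result types

    struct Expr { ExprType type; };

    // Returns 0 on success, -1 on the new "scalar before '/'" error.
    int check_path_step(const Expr *filter_expr)
    {
      if (filter_expr->type != NODESET)
      {
        // Previously this case was not detected and the code crashed.
        fprintf(stderr, "XPATH error: location path after a scalar expression\n");
        return -1;
      }
      return 0;   // node-set: a RelativeLocationPath may follow
    }

    int main()
    {
      Expr scalar= { SCALAR };
      return check_path_step(&scalar) == -1 ? 0 : 1;  // error branch taken
    }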
The bug happened because a STMT_END_F-flagged event was filtered out, so that
the transaction COMMIT found traces of an incomplete statement commit.
Such a situation is only possible with ndb circular replication. The filtered-out
rows event is the one that immediately precedes the COMMIT query event.
Fixed by performing the rows-log-event statement commit when executing
the transaction COMMIT event.
Resources that were allocated by events of the last statement other than
the STMT_END_F-flagged one are cleaned up prior to execution of the commit logic.
upgrading lock, even with low_priority_updates
The problem is that there is no mechanism to control whether a
delayed insert takes a high or low priority lock on a table.
The solution is to modify the delayed insert thread ("handler")
to take into account the global value of low_priority_updates
when taking table locks. The value of low_priority_updates is
retrieved when the delayed insert thread is created and remains
the same for the duration of the thread.
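A standalone C++ sketch of the snapshot behaviour (Delayed_insert here is a
simplified stand-in, not the real sql_insert.cc class; the TL_* names are
modeled after the server's lock types):

    #include <atomic>
    #include <cstdio>

    std::atomic<bool> global_low_priority_updates{false};  // stand-in global

    enum lock_type { TL_WRITE_DELAYED, TL_WRITE_LOW_PRIORITY };

    struct Delayed_insert               // the delayed insert "handler" thread
    {
      bool low_priority;                // frozen for the thread's lifetime

      Delayed_insert()
        : low_priority(global_low_priority_updates.load()) {}  // read once

      lock_type table_lock_type() const
      {
        // Later changes to the global do not affect a running thread.
        return low_priority ? TL_WRITE_LOW_PRIORITY : TL_WRITE_DELAYED;
      }
    };

    int main()
    {
      global_low_priority_updates= true;
      Delayed_insert di;                      // snapshots the value
      global_low_priority_updates= false;     // no effect on `di`
      printf("%d\n", (int) di.table_lock_type());  // 1: low-priority lock
    }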
When storing a NULL to a TIMESTAMP NOT NULL DEFAULT ...,
NULL returned from some functions threw a 'cannot be NULL' error.
NULL-returns now correctly result in the timestamp-field being
assigned its default value.
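A standalone C++ sketch of the fallback (TimestampField is a stand-in for the
real Field_timestamp handling, and C++17 std::optional models SQL NULL):

    #include <cstdio>
    #include <ctime>
    #include <optional>

    // Models a TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP column.
    struct TimestampField
    {
      time_t value;
      void set_default() { value= time(nullptr); }   // default = current time

      // Storing NULL now falls back to the default instead of erroring.
      void store(std::optional<time_t> v)
      {
        if (!v) { set_default(); return; }   // old code: "cannot be NULL" error
        value= *v;
      }
    };

    int main()
    {
      TimestampField f{};
      f.store(std::nullopt);     // e.g. a function returning NULL
      printf("%ld\n", (long) f.value);
    }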
If the system time is adjusted back during a query execution
(resulting in the end time being earlier than the start time)
the code that prints to the slow query log gets confused and
prints the negative execution time as a huge unsigned number.
Fixed by not logging the statements that would have negative
execution time due to time shifts.
No test case since this would involve changing the system time.
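A standalone C++ sketch of the guard (should_log_slow is a hypothetical
helper, not the server's actual function):

    #include <cstdio>

    typedef unsigned long long ulonglong;

    // If the clock was set back, end_utime < start_utime and the unsigned
    // subtraction would wrap around to a huge value, so skip logging instead.
    bool should_log_slow(ulonglong start_utime, ulonglong end_utime,
                         ulonglong long_query_time)
    {
      if (end_utime < start_utime)     // system time was adjusted backwards
        return false;
      return (end_utime - start_utime) >= long_query_time;
    }

    int main()
    {
      printf("%d\n", (int) should_log_slow(100, 50, 10));  // 0: not logged
    }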
Item_in_optimizer::is_null() evaluated "NULL IN (SELECT ...)" to NULL regardless of
whether the subquery produced any records; this was a documented limitation.
The limitation has been removed (see bugs 8804, 24085, 24127): now
Item_in_optimizer::val_int() correctly handles all cases with NULLs. Make
Item_in_optimizer::is_null() invoke val_int() to return correct values for
"NULL IN (SELECT ...)".
messed up
"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN (SELECT ...)" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery over the DUAL pseudo-table, the resulting HAVING
condition is an expression on constant values, so a further
transformation with optimize_cond() eliminates this HAVING
condition and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING,
which caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
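A standalone C++ sketch of the corrected check (Join_sketch stands in for the
real JOIN class):

    #include <cstdio>

    enum cond_value { COND_UNDEF, COND_TRUE, COND_FALSE };

    struct Join_sketch
    {
      void      *having;        // NULL after optimize_cond() removed the clause
      cond_value having_value;  // what the removed constant clause evaluated to

      // Before the fix, a NULL `having` was treated as an always-true HAVING.
      bool rows_pass_having() const
      {
        if (having == nullptr)
          return having_value != COND_FALSE;   // honour an optimized-out FALSE
        return true;  // non-constant HAVING: evaluated per row (omitted here)
      }
    };

    int main()
    {
      Join_sketch j= { nullptr, COND_FALSE };      // the "... FROM DUAL" case
      printf("%d\n", (int) j.rows_pass_having());  // 0: IN is FALSE, not TRUE
    }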
When using CREATE TEMPORARY TABLE LIKE to create a temporary table,
or using TRUNCATE to delete all rows of a temporary table, the
tmp_table_used flag was not set. This caused the omission of
"SET @@session.pseudo_thread_id" when dumping the binlog with mysqlbinlog,
and caused errors when replaying the statements.
This patch fixes the problem by setting tmp_table_used in these two
cases. (Done by He Zhenxing 2009-01-12)
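A standalone C++ sketch of the two added assignments (THD_sketch and both
functions are stand-ins for the real code paths):

    #include <cstdio>

    struct THD_sketch { bool tmp_table_used; };   // stand-in for the session

    // CREATE TEMPORARY TABLE ... LIKE must mark the session so the binlog
    // dump carries "SET @@session.pseudo_thread_id" for replay.
    void create_table_like(THD_sketch *thd, bool is_temporary)
    {
      if (is_temporary)
        thd->tmp_table_used= true;   // previously missing
      // ... actual table creation ...
    }

    // Likewise for TRUNCATE of a temporary table.
    void truncate_table(THD_sketch *thd, bool table_is_temporary)
    {
      if (table_is_temporary)
        thd->tmp_table_used= true;   // previously missing
      // ... actual truncate ...
    }

    int main()
    {
      THD_sketch thd= { false };
      create_table_like(&thd, true);
      printf("%d\n", (int) thd.tmp_table_used);  // 1
    }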
When a MEMORY table is full, the error is returned to the client but not written
to the error log.
Fixed the handler API to write the error message to the error log when the table is
full.
Note: no test case included, as testing the error log is non-trivial.
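A standalone C++ sketch of the idea (report_table_full and log_to_error_log
are hypothetical stand-ins for the handler-layer reporting and
sql_print_error()):

    #include <cstdio>

    enum { HA_ERR_RECORD_FILE_FULL= 135 };   // stand-in handler error code

    void log_to_error_log(const char *msg)   // stand-in for sql_print_error()
    {
      fprintf(stderr, "[ERROR] %s\n", msg);
    }

    int report_table_full(const char *table_name)
    {
      char buf[256];
      snprintf(buf, sizeof(buf), "The table '%s' is full", table_name);
      log_to_error_log(buf);                 // new: also hits the error log
      return HA_ERR_RECORD_FILE_FULL;        // still returned to the client
    }

    int main()
    {
      return report_table_full("t1") == HA_ERR_RECORD_FILE_FULL ? 0 : 1;
    }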
in trigger
Alternating calls to the mysql_change_user client function
and invocations of a trigger changing some user variable caused
memory corruption and a crash.
The mysql_change_user API call forces THD::cleanup() on the server,
which frees the user variable entries.
However, it didn't reset Item_func_set_user_var::entry to NULL
because Item_func_set_user_var::cleanup() was not overloaded.
So Item_func_set_user_var::entry held a pointer to freed memory,
which caused a crash.
The Item_func_set_user_var::cleanup method has been overloaded
to cleanup the Item_func_set_user_var::entry field.
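A standalone C++ sketch of the overloaded cleanup (the classes are simplified
stand-ins for the real Item hierarchy):

    #include <cstdio>

    struct user_var_entry_sketch { int dummy; };  // freed by THD::cleanup()

    struct Item_sketch
    {
      virtual void cleanup() { /* base cleanup */ }
      virtual ~Item_sketch() {}
    };

    struct Item_func_set_user_var_sketch : Item_sketch
    {
      user_var_entry_sketch *entry= nullptr;

      // The fix: overload cleanup() so the cached pointer cannot dangle
      // after mysql_change_user() freed the user-variable hash.
      void cleanup() override
      {
        Item_sketch::cleanup();
        entry= nullptr;          // re-resolved on the next execution
      }
    };

    int main()
    {
      user_var_entry_sketch e= { 0 };
      Item_func_set_user_var_sketch item;
      item.entry= &e;
      item.cleanup();                        // as after mysql_change_user()
      printf("%p\n", (void*) item.entry);    // null: no dangling pointer
    }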
conflicts:
Text conflict in client/mysqltest.cc
Text conflict in mysql-test/include/wait_until_connected_again.inc
Text conflict in mysql-test/lib/mtr_report.pm
Text conflict in mysql-test/mysql-test-run.pl
Text conflict in mysql-test/r/events_bugs.result
Text conflict in mysql-test/r/log_state.result
Text conflict in mysql-test/r/myisam_data_pointer_size_func.result
Text conflict in mysql-test/r/mysqlcheck.result
Text conflict in mysql-test/r/query_cache.result
Text conflict in mysql-test/r/status.result
Text conflict in mysql-test/suite/binlog/r/binlog_index.result
Text conflict in mysql-test/suite/binlog/r/binlog_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_packet.result
Text conflict in mysql-test/suite/rpl/t/rpl_packet.test
Text conflict in mysql-test/t/disabled.def
Text conflict in mysql-test/t/events_bugs.test
Text conflict in mysql-test/t/log_state.test
Text conflict in mysql-test/t/myisam_data_pointer_size_func.test
Text conflict in mysql-test/t/mysqlcheck.test
Text conflict in mysql-test/t/query_cache.test
Text conflict in mysql-test/t/rpl_init_slave_func.test
Text conflict in mysql-test/t/status.test
It's a regression issue.
The cause of the bug turned out to be an error introduced into the 5.1 source code.
A piece of code in Create_file_log_event::do_apply_event() had no test
coverage, so make test and pushbuild did not catch it.
Fixed by inverting the (incorrect) return value of
Create_file_log_event::do_apply_event().
The rpl test suite is extended with the file `rpl_cross_version' to hold
regression cases similar to the current one.
The problem is that the query cache was storing partial results
if the statement failed when sending the results to the client.
This could cause clients to hang when trying to read the results
from the cache as they would, for example, wait indefinitely for
an EOF packet that wasn't saved.
The solution is to always discard the cached result of a query that
failed to send its results to the associated client.
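A standalone C++ sketch of the discard-on-failure behaviour
(Query_cache_sketch and send_result are hypothetical stand-ins):

    #include <cstdio>

    struct Query_cache_sketch
    {
      bool storing;                          // a result is being accumulated
      void abort_store() { storing= false; } // drop the partial blocks
    };

    // Result-sending path: if the network write fails, invalidate whatever
    // was already buffered so no client can later read a truncated result
    // (e.g. one missing its EOF packet).
    bool send_result(Query_cache_sketch *qc, bool net_write_ok)
    {
      if (!net_write_ok)
      {
        qc->abort_store();                   // new: discard the partial result
        return false;
      }
      return true;
    }

    int main()
    {
      Query_cache_sketch qc= { true };
      send_result(&qc, false);
      printf("%d\n", (int) qc.storing);      // 0: nothing partial cached
    }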
Views weren't sync()d the same way other structures were.
When creating the FRM for a view, obey the same rules for the
"sync_frm" variable as for everything else.
Bug#33094: Error in upgrading from 5.0 to 5.1 when table contains
triggers
and
Bug#41385: Crash when attempting to repair a #mysql50# upgraded table
with triggers.
Problem:
1. the trigger code didn't expect that a table name may have
a "#mysql50#" prefix, which could lead to a failing ASSERT().
2. "ALTER DATABASE ... UPGRADE DATA DIRECTORY NAME" failed
for databases with a "#mysql50#" prefix if they contained any triggers.
3. mysqlcheck --fix-table-name didn't use UTF8 as the default
character set, which resulted in (parsing) errors for tables with
non-latin symbols in their names and in their trigger definitions.
Fix:
1. properly handle table/database names with the "#mysql50#" prefix.
2. handle the --default-character-set mysqlcheck option:
if mysqlcheck is launched with --fix-table-name or --fix-db-name,
set the default character set to UTF8 unless a --default-character-set
option is given.
Note: if the --fix-table-name or --fix-db-name option is given
without a --default-character-set option,
the default character set is UTF8.
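A standalone C++ sketch of the option post-processing (pick_charset is a
hypothetical helper, not mysqlcheck's actual code):

    #include <cstdio>

    // When repairing 5.0 names the client must talk to the server in utf8
    // unless the user explicitly chose another character set.
    const char *pick_charset(bool fix_table_name, bool fix_db_name,
                             const char *user_charset /* NULL if not given */)
    {
      if ((fix_table_name || fix_db_name) && user_charset == nullptr)
        return "utf8";       // new default for --fix-table-name/--fix-db-name
      return user_charset;   // otherwise honour the user's choice
    }

    int main()
    {
      printf("%s\n", pick_charset(true, false, nullptr));    // utf8
      printf("%s\n", pick_charset(true, false, "latin1"));   // latin1
    }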
The next number (AUTO_INCREMENT) field of the table for write
rows events was not initialized, causing some engines (InnoDB)
to not correctly update the table's auto_increment value.
This patch fixes the problem by honoring next number fields if present.
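A standalone C++ sketch of honoring the field (the structs are simplified
stand-ins for the real TABLE/Field and engine state):

    #include <cstdio>

    struct Field_sketch { long long stored; };

    struct Table_sketch
    {
      Field_sketch *next_number_field;    // the AUTO_INCREMENT column, if any
      long long     engine_next_autoinc;  // what the engine hands out next
    };

    // Applying one write-rows event: if the table has an AUTO_INCREMENT
    // column, let the engine see the inserted value so its counter advances,
    // as it would for a regular INSERT.
    void apply_write_row(Table_sketch *t, long long row_autoinc_value)
    {
      if (t->next_number_field)           // previously left uninitialized
      {
        t->next_number_field->stored= row_autoinc_value;
        if (row_autoinc_value >= t->engine_next_autoinc)
          t->engine_next_autoinc= row_autoinc_value + 1;
      }
      // ... write the row through the handler ...
    }

    int main()
    {
      Field_sketch f= { 0 };
      Table_sketch t= { &f, 1 };
      apply_write_row(&t, 41);
      printf("%lld\n", t.engine_next_autoinc);  // 42
    }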
The problem is that the query cache stores packets containing
the server status of the time when the cached statement was run.
This might lead to a wrong transaction status on the client side
if a statement is cached during a transaction and is later served
outside of a transaction context (and vice versa).
The solution is to take into account the transaction status when
storing in and serving from the query cache.
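A standalone C++ sketch of the status match (the struct and function are
hypothetical stand-ins for the real query-cache flags):

    #include <cstdio>

    // A result cached under one transaction status must not be served to a
    // connection whose status differs, or the client's view of
    // SERVER_STATUS_IN_TRANS goes stale.
    struct Cached_result_sketch { bool recorded_in_trans; };

    bool can_serve_from_cache(const Cached_result_sketch *r,
                              bool client_in_trans)
    {
      return r->recorded_in_trans == client_in_trans;  // statuses must match
    }

    int main()
    {
      Cached_result_sketch r= { true };                // cached inside a trans
      printf("%d\n", (int) can_serve_from_cache(&r, false));  // 0: re-execute
    }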
The greedy optimizer tracks the current level of nested joins and the position
inside these by setting and maintaining a state that's global for the whole FROM
clause.
This state was correctly maintained inside the selection of the next partial plan
table (in best_extension_by_limited_search()).
greedy_search() also moves the current position by adding the last partial-plan
table when there are not enough tables in the partial plan found by
best_extension_by_limited_search().
This may require update of the global state variables that describe the current
position in the plan if the last table placed by greedy_search is not a top-level
join table.
Fixed by updating the state after placing the partial plan table in greedy_search()
in the same way this is done on entering the best_extension_by_limited_search().
Also fixed the signature of the function called to update the state:
check_interleaving_with_nj().
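A standalone C++ sketch of routing both callers through the same update
(Nested_join_state is a stand-in for the optimizer's global bookkeeping; only
the name check_interleaving_with_nj comes from the real code):

    #include <cstdio>

    struct Nested_join_state { int cur_depth; };

    bool check_interleaving_with_nj(Nested_join_state *state, int table_depth)
    {
      // Updates the global position and rejects placements that would
      // interleave tables of different nested joins (details omitted).
      state->cur_depth= table_depth;
      return true;
    }

    // best_extension_by_limited_search() performs this update on entry;
    // after the fix, greedy_search() performs the same update when it
    // places the last partial-plan table itself.
    void greedy_search_place_last(Nested_join_state *state, int table_depth)
    {
      check_interleaving_with_nj(state, table_depth);
    }

    int main()
    {
      Nested_join_state s= { 0 };
      greedy_search_place_last(&s, 2);
      printf("%d\n", s.cur_depth);   // 2: state reflects the placed table
    }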
Bounds checks and block-size corrections were applied to user input,
but constants in the server were trusted implicitly. If these values
did not actually meet the requirements, the user could change
a variable but could not set it back to the (wonky) factory default or maximum
by explicitly specifying that value (SET <var>=<value> vs. SET <var>=DEFAULT).
Now the checks also apply to the server's presets. Wonky values and maxima
get corrected at startup. Consequently, all non-offsetted values the user
sees are valid, and users can set a variable to that exact value if
they so desire.
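A standalone C++ sketch of applying the same fix-up to presets
(Sys_var_sketch and fix_value are hypothetical stand-ins for the sys_var
machinery):

    #include <cstdio>

    typedef unsigned long long ulonglong;

    struct Sys_var_sketch
    {
      ulonglong def_value, min_value, max_value, block_size;
    };

    // The same clamp-and-align already applied to SET statements.
    ulonglong fix_value(const Sys_var_sketch *v, ulonglong val)
    {
      if (val < v->min_value) val= v->min_value;
      if (val > v->max_value) val= v->max_value;
      val-= val % v->block_size;        // round down to the block size
      return val;
    }

    // New: run at startup over the compiled-in defaults and maxima.
    void fix_presets(Sys_var_sketch *v)
    {
      v->max_value= fix_value(v, v->max_value);  // correct a wonky maximum
      v->def_value= fix_value(v, v->def_value);  // ... then a wonky default
    }

    int main()
    {
      Sys_var_sketch v= { 1025, 0, 4100, 1024 };   // unaligned default/max
      fix_presets(&v);
      printf("%llu %llu\n", v.def_value, v.max_value);  // 1024 4096
    }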