The problem was that partitioning cached the table flags.
These flags could change due to TRANSACTION LEVEL changes.
The solution was to remove the cache and always return the table
flags from the first partition (if the handler was initialized).
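As a rough illustration of the kind of scenario involved (hypothetical
table name, and assuming "TRANSACTION LEVEL" refers to the transaction
isolation level; this is a sketch, not the regression test from the
patch), an isolation level change between statements on a partitioned
table is a case where the flags from the first partition can differ
from what was cached at open time:

  CREATE TABLE t1 (a INT) ENGINE=InnoDB
    PARTITION BY HASH (a) PARTITIONS 2;
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  # with the cache removed, the table flags are re-read from the
  # first partition instead of a stale cached copy
  SELECT * FROM t1;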
Removed the flag that disables innodb on slave in the default
configuration of replication tests. That made the explicit
--innodb flag in -slave.opt files redundant, so lots of -slave.opt
files could be removed. Also, -master.opt files containing the redundant
--innodb flag were removed (those were redundant even without
changing the default). Removing .opt files is good because .opt
files cause server restarts and make tests less readable.
Also fixed a bug where rpl_innodb_mixed_ddl unintentionally
used MyISAM on the slave.
#ifdef HAVE_purify removed
per-file comments:
mysql-test/t/partition_not_windows.test
Bug#39102 valgrind build does not compile in realpath, which make DATA/INDEX DIR fail
test reenabled
mysys/my_symlink.c
Bug#39102 valgrind build does not compile in realpath, which make DATA/INDEX DIR fail
superfluous ifdef removed, comments fixed
String buffers that were included in the 'view' data structure
were allocated on the stack, causing an invalid pointer when used
after the function returned.
The fix: use copies of the values for view->md5 & view->queries.
The convert_constant_item function converts a constant to an integer
using the field from a condition like 'field = a_constant'. When
convert_constant_item is called for a subquery, the outer select is
already being executed, so convert_constant_item saves the field's
value to prevent its corruption.
For EXPLAIN the field's value isn't initialized, so when
convert_constant_item tries to restore the saved value it fails an
assertion.
Now convert_constant_item doesn't save/restore the field's value
for EXPLAIN.
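A minimal sketch of the query shape affected (hypothetical tables and
columns): a 'field = constant' comparison inside a subquery, examined
under EXPLAIN rather than executed:

  CREATE TABLE t1 (d DATETIME);
  CREATE TABLE t2 (i INT);
  EXPLAIN SELECT * FROM t2
    WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.d = '2008-01-01 00:00:00');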
Problem: mysqld doesn't detect that enum data must be reinserted when
performing 'ALTER TABLE' in some cases.
Fix: reinsert the data when altering an enum field if the enum values
are changed.
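For example (hypothetical table), reordering the enum values changes
the internal numbers of the existing values, so the stored data has
to be reinserted:

  CREATE TABLE t1 (e ENUM('a','b'));
  INSERT INTO t1 VALUES ('a'), ('b');
  ALTER TABLE t1 MODIFY e ENUM('b','a','c');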
- Make send_row_on_empty_set() return FALSE when simplify_cond() has found out
that HAVING is always FALSE
re-committing to put the fix into 5.0 and 5.1
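A sketch of the behaviour this affects (hypothetical, empty table):
an implicitly grouped query over an empty set whose HAVING is always
FALSE should return no row rather than a single aggregate row:

  CREATE TABLE t1 (a INT);
  SELECT MAX(a) FROM t1 HAVING 1 = 0;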
breaks auto increment
The auto_increment value was not initialized if
the first statement after opening a table was
an 'UPDATE'.
The solution was to check whether the value was initialized and,
if not, initialize it before trying to increase it in the UPDATE.
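A sketch of the scenario (hypothetical table; FLUSH TABLES is only
used here to force the table to be reopened):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
  INSERT INTO t1 (a) VALUES (1);
  FLUSH TABLES;
  # the first statement after reopening the table is an UPDATE that
  # touches the auto_increment column
  UPDATE t1 SET id = 10 WHERE a = 1;
  # the next auto_increment value should now be above 10
  INSERT INTO t1 (a) VALUES (2);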
Problem: In the mysqltest language, it was not possible to set the current
connection from a variable, and it was not possible to read the current
connection.
Fix: Allow setting the connection from a variable, like:
connection $variable;
and introduce the mysqltest language variable $CURRENT_CONNECTION, which
holds the name of the current connection.
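A minimal mysqltest sketch of both additions (hypothetical connection
name):

  connect (con1,localhost,root,,test);
  let $con= con1;
  connection $con;
  # $CURRENT_CONNECTION now holds the name of the current connection
  echo $CURRENT_CONNECTION;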
The problem was that the server did not robustly handle a
unilateral rollback issued by the Resource Manager (RM)
due to a resource deadlock within the transaction branch.
By not acknowledging the rollback, the server (TM) would
eventually corrupt the XA transaction state and crash.
The solution is to mark the transaction as rollback-only
if the RM indicates that it rolled back its branch of the
transaction.
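A rough sketch of the command pattern involved (hypothetical xid and
table; the concurrent activity that causes the deadlock is not shown).
If the statement inside the branch is rolled back unilaterally by the
RM, the transaction should from then on be treated as rollback-only:

  XA START 'trx1';
  # suppose this statement hits a deadlock and the storage engine
  # (the RM) rolls back its branch of the transaction
  UPDATE t1 SET a = a + 1 WHERE id = 1;
  XA END 'trx1';
  XA ROLLBACK 'trx1';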
In certain situations, a scan of the table will return the error
code HA_ERR_RECORD_DELETED, and this error code is not
correctly caught in the Rows_log_event::find_row() function, which
causes an error to be returned for this case.
This patch fixes the problem by adding code to either ignore the
record and continue with the next one, in the event of a table
scan, or change the error code to HA_ERR_KEY_NOT_FOUND, in the event
that a key lookup is attempted.
Problem 1: not_embedded_server runs SELECT FROM I_S.PROCESSLIST near the beginning.
check_testcase executes a query to the server before that. There is a race here,
because there is no guarantee that the thread executing check_testcase's query is
finished.
Problem 2: The SELECT FROM I_S.PROCESSLIST doesn't seem very useful in the test.
It's at least misplaced.
Fix to both problems: Comment out SELECT FROM I_S.PROCESSLIST.
fails after the first time
Two separate problems :
1. When flattening joins the linked list used for name resolution
(next_name_resolution_table) was not updated.
Fixed by updating the pointers when extending the table list.
2. The items created by expanding a * (star) as a column reference
were marked as fixed, but no cached table was assigned to them
(unlike what Item_field::fix_fields does).
Fixed by assigning a cached table (so the re-preparation is done
faster).
Note that the fix for #2 hides the fix for #1 in most cases
(except when a table reference cannot be cached).
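A sketch of the general query shape where both issues can appear
(hypothetical tables): a nested join that is eligible for flattening,
combined with '*' expansion, in a statement that gets re-prepared:

  SELECT * FROM t1
    LEFT JOIN (t2 JOIN t3 ON t2.b = t3.b) ON t1.a = t2.a
  WHERE t2.a > 0;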
Server crashed during a sort order optimization
of a dependent subquery:
SELECT
(SELECT t1.a FROM t1, t2
WHERE t1.a = t2.b AND t2.a = t3.c
ORDER BY t1.a)
FROM t3;
The bitmap of tables used by a reference to an outer table
column has, in addition to the regular table bit,
the OUTER_REF_TABLE_BIT bit set.
The only_eq_ref_tables function traverses this map
bit by bit simultaneously with join->map2table list.
Obviously join->map2table never contains an entry
for the OUTER_REF_TABLE_BIT pseudo-table, so the
server crashed there.
The only_eq_ref_tables function has been modified to traverse
regular table bits only, like the update_depend_map function does
(resetting the OUTER_REF_TABLE_BIT alone would be enough, but the
whole set of PSEUDO_TABLE_BITS is reset there to be safe).
The problem is that the offset argument of the LIMIT clause
might be truncated on a 32-bit server built without big
tables support. The truncation was happening because the
original 64-bit argument was being cast to a 32-bit
(ha_rows) offset counter.
The solution is to check whether the conversion resulted in value
truncation and, if so, set the offset to the maximum possible
value that can fit in the type.
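For example (hypothetical table), an offset that does not fit in
32 bits:

  SELECT * FROM t1 LIMIT 4294967296, 1;
  # on a 32-bit build without big tables support the 64-bit offset
  # used to be silently truncated when cast to ha_rows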
If a delayed insert failed to upgrade the lock, it did not
free the temporary memory storage used to keep
newly constructed blob values in memory.
Fixed by iterating over the remaining rows in the delayed
insert rowset and freeing the blob storage for each row.
No test suite because it involves concurrent delayed inserts
on a table and cannot easily be made deterministic.
Added a correct valgrind suppression for Fedora 9.