- use $bindir instead of $basedir when looking for binaries
- NOTE! the $ENV{NDB_EXAMPLES_*} will be reworked so that
mtr.pl does not need to set them up.
When the slave executes a transaction bigger than the slave's max_binlog_cache_size,
the slave crashes. The crash is caused by an assertion that the server should only
roll back the statement, not the whole transaction, when the error ER_TRANS_CACHE_FULL
happens. However, the slave SQL thread always rolls back the whole transaction when
an error happens.
After this patch, any error set in the SQL thread (which is different from the error
shown in 'SHOW SLAVE STATUS') is always cleared before rolling back
the transaction.
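A minimal sketch of the scenario (illustrative only; the table name t1 and the
sizes are assumptions, not the actual regression test):

  -- on the slave: make the transaction cache deliberately small
  STOP SLAVE;
  SET GLOBAL max_binlog_cache_size = 4096;
  SET GLOBAL binlog_cache_size = 4096;
  START SLAVE;
  -- on the master: run a transaction larger than the slave's cache
  CREATE TABLE t1 (c LONGTEXT) ENGINE=InnoDB;
  BEGIN;
  INSERT INTO t1 VALUES (REPEAT('a', 65536));
  COMMIT;
  -- before this patch, applying the transaction hit ER_TRANS_CACHE_FULL on the
  -- slave and tripped the assertion when the SQL thread rolled back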
mysql-test/suite/rpl/r/rpl_binlog_max_cache_size.result:
SET binlog_cache_size and max_binlog_cache_size for all test cases.
Add test case for bug#55375.
mysql-test/suite/rpl/t/rpl_binlog_max_cache_size-master.opt:
binlog_cache_size and max_binlog_cache_size can be set from the client connection,
so this option file is removed.
mysql-test/suite/rpl/t/rpl_binlog_max_cache_size.test:
SET binlog_cache_size and max_binlog_cache_size for all test cases.
Add test case for bug#55375.
sql/log_event.cc:
Some functions do not return the error code, so a wrong error code could be
reported. The error is always set in thd->main_da, so slave_rows_error_report
is used to report the correct error.
sql/slave.cc:
exec_relay_log_event() needs to call cleanup_context() to clear the context,
and cleanup_context() calls end_trans().
The thd's error is now cleared before cleanup_context() to avoid triggering
the assertion that caused this bug.
Only wait for a single debug signal at a time as the signal state
is global. Also, do not activate the query cache debug sync points
if the thread has no associated THD session.
mysql-test/t/query_cache_debug.test:
Only wait for a single debug signal at a time as the signal state
is global.
sql/sql_cache.cc:
Do not execute the debug sync point if the thread has no associated
THD session. This scenario happens for federated threads.
Buffer overrun when trying to format DBL_MAX
mysql-test/r/func_math.result:
Add test case for Bug#57209
mysql-test/t/func_math.test:
Add test case for Bug#57209
sql/item_strfunc.cc:
Allocate a larger buffer for the result.
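One way to exercise such a conversion (illustrative; not necessarily the exact
statement from the bug report):

  -- formatting DBL_MAX produces a very long decimal string
  SELECT FORMAT(1.7976931348623157E+308, 2);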
Issue an error if the user specifies multiple commands to run.
There was also an unnoticed bug where DO_CHECK was actually 0, which led
to wrong actions in some cases.
For the same reason, mysqlcheck.test contained commands with a dubious
meaning; those extra commands were removed.
per-file commands:
client/mysqlcheck.c
Bug#35269 mysqlcheck behaves different depending on order of parameters
Abort with an error if multiple commands are specified.
mysql-test/r/mysqlcheck.result
Bug#35269 mysqlcheck behaves different depending on order of parameters
Result file updated.
mysql-test/t/mysqlcheck.test
Bug#35269 mysqlcheck behaves different depending on order of parameters
Test case added.
Non-working commands removed from some mysqlcheck calls.
The problem was that threads waiting on the query cache lock
were not easily seen due to the lack of a state indicating that
the thread is waiting on said lock. This made it difficult
for users to quickly spot (for example, via SHOW PROCESSLIST)
a query cache contention problem.
The solution is to update the thread state when the query cache
lock needs to be acquired. Whenever the lock is to be acquired,
the thread state is updated to "Waiting for query cache lock"
and is reset once the lock is granted or the wait is interrupted.
The intention is to make query cache related hangs more evident.
To further investigate query cache related locking problems, one
may use PERFORMANCE_SCHEMA to track the overhead associated with
the locking bits and determine which particular lock is the
contention point.
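With this change, contention on the query cache lock can be spotted directly,
for example (illustrative sketch):

  SHOW PROCESSLIST;
  -- or, filtering on the new thread state:
  SELECT id, state, info
    FROM INFORMATION_SCHEMA.PROCESSLIST
   WHERE state = 'Waiting for query cache lock';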
sql/sql_cache.cc:
Set and reset the thread state whenever an attempt to lock the
query cache is made.
Use DEBUG_SYNC instead of the now unnecessary wait_for_kill hack.
The COALESCE function returned the DATETIME type due to a DATETIME argument, but
since it is not a date/time function it cannot return a correct integer value
for it. Nevertheless, Item_datetime_cache was chosen to cache COALESCE's result,
and that led to a wrong result.
Now Item_datetime_cache is used only for those functions that can return a
correct integer representation of DATETIME values.
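An illustrative example of the kind of expression affected (the actual test case
is added to type_datetime.test; this sketch only shows the shape of the problem):

  -- COALESCE over a DATETIME argument used in a numeric comparison; the
  -- cached result must produce a correct integer representation
  SELECT COALESCE(CAST('2010-01-01 00:00:00' AS DATETIME)) = 20100101000000;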
mysql-test/r/type_datetime.result:
Added a test case for bug#57095.
mysql-test/t/type_datetime.test:
Added a test case for bug#57095.
sql/item.cc:
Bug#57095: Wrongly chosen expression cache type led to a wrong result.
Now Item_datetime_cache is used only for those functions that can return a
correct integer representation of DATETIME values.
The error message for ER_SLAVE_HEARTBEAT_VALUE_OUT_OF_RANGE was
hard-coded. Additionally, the same error was used for three
separate error symptoms: 1. when the heartbeat period exceeds the
value of slave_net_timeout, 2. when it is smaller than 1
millisecond, and 3. when it is not in range, i.e., either negative
or greater than the maximum allowed.
We fix this by splitting into three distinct errors and by
removing the message from the source code and moving it to the
errmsg-utf8.txt file.
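A sketch of the three symptoms, each of which now maps to its own error
(values are illustrative, not taken from the test suite):

  -- 1. heartbeat period exceeds slave_net_timeout
  --    (assuming slave_net_timeout is set lower than this value)
  CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 100;
  -- 2. heartbeat period smaller than 1 millisecond
  CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 0.0001;
  -- 3. heartbeat period out of range (negative or above the maximum)
  CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = -1;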
set to 128k.
mysql-test/collections/default.experimental:
Re-enabled test rpl.rpl_row_sp011*.
sql/sp_head.cc:
sp_head::execute() modified: pass the constant value 2 * STACK_MIN_SIZE
instead of 8 * STACK_MIN_SIZE as the second argument
in the call to check_stack_overrun().
This is the 5.5 version of the fix. The 5.1 version was too complicated to
merge and was null merged.
This is a regression from the fix for bug no 38999. A storage engine capable
of reading only a subset of a table's columns updates the corresponding bits in
the read buffer to signal that it has read NULL values for the corresponding
columns. It cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements compares the NULL bits
using memcmp, inadvertently comparing bits that were never requested from the
storage engine. The regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even those it had no
knowledge of. This has devastating effects for the index merge algorithm,
which relies on all NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and the InnoDB plugin and
changes the server's method of comparing records. For engines that always read
entire rows, we proceed as usual. For engines capable of reading only select
columns, the record buffers are now compared on a column-by-column basis. An
assertion was also added so that non-comparable buffers are never read. Some
relevant copy-pasted code was also consolidated into a new function.
The SUBTIME function was not able to produce a correct integer representation of
its result. For constant expressions, Item_datetime_cache is used to
speed up evaluation, and Item_datetime_cache expects the underlying item to return
a correct integer representation of the DATETIME value. These two factors combined
led to a wrong query result.
Now Item_func_add_time has a val_datetime function that performs the
calculation, saves the result into the given MYSQL_TIME struct, and sets
null_value to the appropriate value. The val_int and val_str member functions
convert the result obtained from val_datetime to an integer or a string,
respectively, and return it.
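An illustrative example of the affected case (not the exact regression test,
which is added to func_time.test):

  -- a constant SUBTIME expression evaluated in an integer context; before the
  -- fix, the cached DATETIME value could yield a wrong integer result
  SELECT SUBTIME('2010-01-01 10:00:00', '01:00:00') + 0;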
mysql-test/r/func_time.result:
Added a test case for bug#57039.
mysql-test/t/func_time.test:
Added a test case for bug#57039.
sql/item_timefunc.cc:
Bug#57039: constant subtime expression returns incorrect result.
Now Item_func_add_time has a val_datetime function that performs the
calculation, saves the result into the given MYSQL_TIME struct, and sets
null_value to the appropriate value. The val_int and val_str member functions
convert the result obtained from val_datetime to an integer or a string,
respectively, and return it.
sql/item_timefunc.h:
Bug#57039: constant subtime expression returns incorrect result.
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key checks
with SET foreign_key_checks=0 before the truncate, as shown in the
sketch below. Alternatively, if foreign key checks are necessary,
please use a DELETE statement without a WHERE condition.
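A sketch of the workaround mentioned above, assuming hypothetical tables
parent and child linked by a foreign key:

  SET foreign_key_checks = 0;
  TRUNCATE TABLE parent;
  SET foreign_key_checks = 1;
  -- or, if the integrity checks are required, fall back to:
  DELETE FROM parent;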
Problem description:
The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated with the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.
Solution:
The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.
Also, the method is not invoked and an error is thrown if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency, as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
mysql-test/suite/innodb/t/innodb-truncate.test:
Add test cases for truncate and foreign key checks.
Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test:
FK is not necessary, test is related to auto-increment.
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
Use delete instead of truncate, test is used to check
the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc:
Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
mysql-test/t/mdl_sync.test:
Modify test case to reflect and ensure that truncate takes
an exclusive metadata lock.
mysql-test/t/trigger-trans.test:
Update error number, truncate is no longer invoked if
table is parent in a FK relationship.
sql/ha_partition.cc:
Reorganize the various truncate methods. delete_all_rows is now
passed directly to the underlying engines, as is truncate. The
code responsible for truncating individual partitions is moved
to ha_partition::truncate_partition, which is invoked when an
ALTER TABLE t1 TRUNCATE PARTITION p statement is executed.
Since the partition truncate can no longer be invoked via
delete, the bitmap operations are not necessary anymore. The
explicit reset of the auto-increment value is also removed,
as the underlying engines are now responsible for resetting
the value.
sql/handler.cc:
Wire up the handler truncate method.
sql/handler.h:
Introduce and document the truncate handler method. It assumes
certain use cases of delete_all_rows.
Add method to retrieve the list of foreign keys referencing a
table. Method is used to avoid truncating tables that are
parent in a foreign key relationship.
sql/share/errmsg-utf8.txt:
Add error message for truncate and FK.
sql/sql_lex.h:
Introduce a flag so that the partition engine can detect when
a partition is being truncated. Used to give a special error.
sql/sql_parse.cc:
Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc:
Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc:
Change the truncate table implementation to use the new truncate
handler method and to not rely on row-by-row delete anymore.
The truncate handler method is always invoked with an exclusive
metadata lock. Also, it is no longer possible to truncate a
table that is parent in some non-self-referencing foreign key.
storage/archive/ha_archive.cc:
Rename the method, as its description indicates that in the future
this could be a truncate operation.
storage/blackhole/ha_blackhole.cc:
Implement truncate as no operation for the blackhole engine in
order to remain compatible with older releases.
storage/federated/ha_federated.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/heap/ha_heap.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/ibmdb2i/ha_ibmdb2i.cc:
Introduce truncate method that invokes delete_all_rows.
This is required to support partition truncate as this
form of truncate does not implement the drop and recreate
protocol.
storage/innobase/handler/ha_innodb.cc:
Rename delete_all_rows to truncate. InnoDB now does truncate
under an exclusive metadata lock.
Introduce and reorganize the methods used to retrieve the list
of foreign keys referenced by, or referencing, a table.
storage/myisammrg/ha_myisammrg.cc:
Introduce truncate method that invokes delete_all_rows.
This is required in order to remain compatible with earlier
releases where truncate would resort to a row-by-row delete.
Problem: CASE didn't work with a mixture of different character
sets in THEN/ELSE in some cases.
This happened because, after character set aggregation, the
newly created Item_func_conv_charset items corresponding
to the THEN/ELSE arguments were not put back into the args[] array.
Fix:
put all Item_func_conv_charset items back into args[].
@ mysql-test/include/ctype_numconv.inc
@ mysql-test/r/ctype_ucs.result
Adding tests
@ sql/item_cmpfunc.cc
Put "agg" back to args[] after character set aggregation.
thd->in_sub_stmt
In a precursor patch for Bug#52044
(revid:bzr/kostja@stripped), a
number of code reorganizations were made. In addition, some
assertions were added to ensure the correct transactional state.
The reorganization had a small glitch, so statements that were
active in the query cache were not followed by a
statement commit/rollback (this code was removed). A section
of the trans_commit_stmt/trans_rollback_stmt code
clears the thd->transaction.stmt list of affected storage
engines. When a new statement is initiated, an assert
introduced by the Bug#52044 patch checks whether this list is cleared.
When the query cache is accessed, this list may be populated,
and since it is not committed it will not be cleared.
This fix adds an explicit statement commit or rollback for
statements that are contained in the query cache.
for ALTER TABLE + MERGE tables
The patch for Bug#56292 changed how metadata locks are taken for MERGE
tables. After the patch, locking the MERGE table also locks the
child tables with the same metadata lock type. This means that
LOCK TABLES on a MERGE table also implicitly does LOCK TABLES on
the child tables.
A consequence of this change is that it is possible to do LOCK TABLES
on a child table both explicitly and implicitly with the same statement,
and that these two locks can be of different strength. For example,
LOCK TABLES child READ, merge WRITE.
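A sketch of the double-lock situation (hypothetical table names):

  CREATE TABLE child (a INT) ENGINE=MyISAM;
  CREATE TABLE merge (a INT) ENGINE=MERGE UNION=(child);
  -- child is locked explicitly (READ) and implicitly via merge (WRITE)
  LOCK TABLES child READ, merge WRITE;
  -- a subsequent statement against child could then pick the TABLE instance
  -- locked for WRITE at the table level while holding only a shared metadata
  -- lock, hitting the assertion described below
  UNLOCK TABLES;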
In LOCK TABLES mode, we are not allowed to take new locks, and each
statement must therefore try to find an existing TABLE instance with
a suitable lock. The code that searched for a suitable TABLE instance
only considered table-level locks. If a child table was locked twice,
it was therefore possible for this code to find a TABLE instance with
suitable table-level locks but without a suitable metadata lock.
This problem caused the assert in upgrade_shared_lock_to_exclusive()
to be triggered as it tried to upgrade an MDL_SHARED lock to
EXCLUSIVE. The problem was a regression caused by the patch for
Bug#56292.
This patch fixes the problem by partially reverting the changes
done by the patch for Bug#56292. Now, the child tables will only use
the same metadata lock as the MERGE table for MDL_SHARED_NO_WRITE
when not in locked tables mode. This means that LOCK TABLES
on a MERGE table will not implicitly lock the child tables.
This still fixes the original problem in Bug#56292 without
causing a regression.
Test case added to merge.test.
parameters don't match
Revert the changes to the default values of innodb_file_per_table
and innobase_file_format in 5.5 until WL#5135 is implemented.