Problem: sha2() reported its result as BINARY
Fix:
- Inheriting Item_func_sha2 from Item_str_ascii_func.
- Setting max_length via fix_length_and_charset() instead of direct assignment.
- Adding tests
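A minimal sketch of the expected behaviour, assuming a server with the fix
applied (the exact result charset depends on character_set_connection):

  SELECT CHARSET(SHA2('abc', 256));
  -- before the fix this reported 'binary'; with the fix the digest is
  -- reported as a non-binary, ASCII-compatible string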
Problem: Item_copy did not set "fixed", which resulted in a DBUG_ASSERT failure in some cases.
Fix: adding initialization of the "fixed" member.
Adding tests:
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
Adding initialization of the "fixed" member:
sql/item.h
The server was not cleaning up DBUG-allocated memory before
exiting. This is not a real problem, as this memory would be
deallocated anyway. Nonetheless, we improve the mysqlbinlog exit
procedure, with respect to memory book-keeping, when no parameter
is given.
To fix this, we deploy a call to my_thread_end() before the
thread exits, which also frees the pending DBUG-related allocated
blocks.
value and NO_ZERO_DATE
The problem was that an older version of the error path for a
failed admin statement relied upon a few error conditions being
met in order to access a table handler, the first one being that
the table object pointer was not NULL. Probably by chance, in
all cases where a table object was closed but the reference
wasn't reset, the other conditions didn't evaluate to true. With
the addition of a new check on the error path, the handler
started being dereferenced whenever it was not reset to NULL,
causing problems for code paths which closed the table but
didn't reset the reference.
The solution is to reset the reference whenever an admin
statement fails and the tables are closed.
large-pages option is broken) from next-mr to trunk-bugfixing.
Original revision:
------------------------------------------------------------
revision-id: vvaintroub@mysql.com-20100416134524-y4v27j90p5xvblmy
parent: luis.soares@sun.com-20100416000700-n267ynu77visx31t
committer: Vladislav Vaintroub <vvaintroub@mysql.com>
branch nick: mysql-next-mr-bugfixing
timestamp: Fri 2010-04-16 15:45:24 +0200
message:
Bug #52716 Large files support is disabled, large-pages option is broken.
Correct typo: the large pages option was tied to the wrong variable,
opt_large_files, instead of opt_large_pages.
------------------------------------------------------------
DROP TEMP TABLE
Cset: alfranio.correia@sun.com-20100420091043-4i6ouzozb34hvzhb
introduced a change that made DROP TEMPORARY TABLE always be
logged if the current statement log format was set to ROW. This
is fine. However, the logging operations for a "DROP TABLE"
statement in mysql_rm_table_part2 do not first check whether the
mysql_bin_log is open before proceeding to the actual logging;
they only check the dont_log_query variable. This was actually
uncovered by the aforementioned cset and not introduced by it.
We fix this by extending the condition used in the "if" that
wraps logging operations in mysql_rm_table_part2.
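A minimal sketch of the scenario being guarded, assuming a server started
without --log-bin (so mysql_bin_log is not open); the table name tmp1 is
hypothetical:

  SET SESSION binlog_format = ROW;
  CREATE TEMPORARY TABLE tmp1 (a INT);
  DROP TEMPORARY TABLE tmp1;
  -- with the extended condition, the DROP is only written to the binary
  -- log when the binary log is actually open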
BUG#54872 MBR: replication failure caused by using tmp table inside transaction
Changed the criteria used to classify a statement as unsafe in order to
reduce the number of spurious warnings. A statement is now classified as
unsafe when there is an ongoing transaction at any point of the execution
if:
1. The mixed statement is about to update a transactional table and
a non-transactional table.
2. The mixed statement is about to update a temporary transactional
table and a non-transactional table.
3. The mixed statement is about to update a transactional table and
read from a non-transactional table.
4. The mixed statement is about to update a temporary transactional
table and read from a non-transactional table.
5. The mixed statement is about to update a non-transactional table
and read from a transactional table when the isolation level is
lower than repeatable read.
or, after updating a transactional table, if:
6. The mixed statement is about to update a non-transactional table
and read from a temporary transactional table.
7. The mixed statement is about to update a non-transactional table
and read from a temporary transactional table.
8. The mixed statement is about to update a non-transactional table
and read from a temporary non-transactional table.
9. The mixed statement is about to update a temporary non-transactional
table and update a non-transactional table.
10. The mixed statement is about to update a temporary non-transactional
table and read from a non-transactional table.
11. A statement is about to update a non-transactional table and the
option variables.binlog_direct_non_trans_update is OFF.
The reason for this is that the locks acquired may not prevent a concurrent
transaction from interfering with the current execution and, consequently,
with the result. So the patch reduces the number of spurious unsafe warnings.
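A minimal sketch of case 11 above, assuming STATEMENT format, the default
binlog_direct_non_trans_update = OFF, and hypothetical tables t_trans
(InnoDB) and t_non_trans (MyISAM):

  CREATE TABLE t_trans (a INT) ENGINE=InnoDB;
  CREATE TABLE t_non_trans (a INT) ENGINE=MyISAM;
  SET SESSION binlog_format = STATEMENT;
  BEGIN;
  INSERT INTO t_trans VALUES (1);      -- transaction is now on-going
  INSERT INTO t_non_trans VALUES (1);  -- non-transactional update inside the
                                       -- transaction; expected to be flagged unsafe
  COMMIT;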
Besides this, we fixed a regression caused by BUG#51894, which makes updates
to temporary tables go into the trx-cache if there is an ongoing transaction.
In MIXED mode, the patch for BUG#51894 ignored that the trx-cache may contain
updates to temporary non-transactional tables that must be written to the
binary log when rolling back the transaction.
So we fix this problem by writing the content of the trx-cache to the
binary log while rolling back a transaction if a non-transactional
temporary table was updated and the binary logging format is MIXED.
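A minimal sketch of the regression scenario, assuming MIXED format and
hypothetical tables t_innodb (regular, InnoDB) and tmp_nt (temporary, MyISAM):

  SET SESSION binlog_format = MIXED;
  CREATE TABLE t_innodb (a INT) ENGINE=InnoDB;
  CREATE TEMPORARY TABLE tmp_nt (a INT) ENGINE=MyISAM;
  BEGIN;
  INSERT INTO t_innodb VALUES (1);
  INSERT INTO tmp_nt VALUES (1);   -- goes into the trx-cache because of the
                                   -- on-going transaction
  ROLLBACK;
  -- the change to tmp_nt cannot be rolled back, so the trx-cache content
  -- must still be written to the binary log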
mysqld-debug.exe in 5.5.3 on windows
Fix:
- Do not rename the PDB; install a mysqld.pdb matching
mysqld-debug.exe into the bin\debug subdirectory.
- The stack tracing code will now additionally look in the
debug subdirectory of the application directory
for debug symbols.
- Small cleanup in the stack tracing code: link with
dbghelp rather than loading functions dynamically
at runtime, since dbghelp.dll is always present.
- Install debug binaries with WiX
switching binlog format to ROW
BUG 52616 fixed the case in which the user would switch from STMT
to ROW binlog format but the server would silently ignore it.
After that fix, thd->is_current_stmt_binlog_format_row() reports
the correct value at logging time and events are logged in ROW
format (as expected) instead of being wrongly logged in STMT
format as before.
However, the fix was only partially complete, because on
disconnect, at THD cleanup, the implicit logging of temporary
tables is performed conditionally: if binlog_format==ROW and
thd->is_current_stmt_binlog_format_row() is true, then the DROPs
are not logged. Given that the user can switch from STMT to ROW,
this is wrong because the server cannot tell, just by relying on
the ROW binlog format, whether the tables have been dropped
before. This is effectively similar to the MIXED scenario when a
switch from STMT to ROW is triggered.
We fix this by removing this condition from
close_temporary_tables.
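A minimal sketch of the scenario, assuming binary logging is enabled, a
5.5-style server that allows switching from STMT to ROW while temporary
tables are open, and a hypothetical table name tmp1:

  SET SESSION binlog_format = STATEMENT;
  CREATE TEMPORARY TABLE tmp1 (a INT);
  SET SESSION binlog_format = ROW;
  -- on disconnect, the implicit DROP TEMPORARY TABLE for tmp1 should still
  -- be written to the binary log, even though the session now uses ROW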
This bug is a consequence of WL#5349, as the
default storage engine was changed.
The fix was to explicitly add an ENGINE
clause to the CREATE TABLE statement, to
ensure that we test case preservation on
MyISAM.
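A minimal sketch of the kind of change involved, using a hypothetical test
table t1:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  -- the explicit ENGINE clause keeps the test on MyISAM even though WL#5349
  -- changed the default storage engine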
Bug#47633 - assert in ha_myisammrg::info during OPTIMIZE
The server crashed on an attempt to optimize a MERGE table with
a non-existent child table.
mysql_admin_table() relied on the table being successfully opened
if a table object had been allocated.
Changed the code to check the return value of the open function
before calling a handler:: function on it.
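A minimal sketch of the crashing scenario, using hypothetical tables t1 and m1:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  DROP TABLE t1;        -- m1 now has a non-existent child
  OPTIMIZE TABLE m1;    -- previously hit the assertion in ha_myisammrg::info;
                        -- with the fix an error is reported instead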
DML flow and SAVEPOINT
The problem was that replication could break if a transaction involving
both transactional and non-transactional tables was rolled back to a
savepoint. It broke if a concurrent connection tried to drop a
transactional table which was locked after the savepoint was set.
This DROP TABLE completed when ROLLBACK TO SAVEPOINT was executed as the
lock on the table was dropped by the transaction. When the slave later
tried to apply the binlog, it would fail as the table would already
have been dropped.
The reason for the problem is that transactions involving both
transactional and non-transactional tables are written fully to the
binlog during ROLLBACK TO SAVEPOINT. At the same time, metadata
locks acquired after a savepoint were released during ROLLBACK TO
SAVEPOINT. This allowed a second connection to drop a table only
used between SAVEPOINT and ROLLBACK TO SAVEPOINT, which caused the
binlogged transaction to refer to a non-existent table when it was
written during ROLLBACK TO SAVEPOINT.
This patch fixes the problem by not releasing metadata locks when
ROLLBACK TO SAVEPOINT is executed if binlogging is enabled.
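A minimal sketch of the scenario, assuming binary logging is enabled and
hypothetical tables t_myisam (MyISAM) and t_innodb (InnoDB); the two
connections are indicated in comments:

  -- connection 1
  BEGIN;
  INSERT INTO t_myisam VALUES (1);  -- transaction now touches a non-transactional table
  SAVEPOINT sp1;
  INSERT INTO t_innodb VALUES (1);  -- metadata lock on t_innodb acquired after the savepoint
  ROLLBACK TO SAVEPOINT sp1;        -- with the fix, the metadata lock on t_innodb is kept
  -- connection 2
  DROP TABLE t_innodb;              -- now blocks until connection 1 ends its transaction
  -- connection 1
  COMMIT;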
Problem: the SQL and IO threads were racing for the IO_CACHE: the former
to flush it, the latter to close it. In some cases this would cause the
SQL thread to lock an invalid IO_CACHE mutex (it had already been
destroyed by the IO thread). This would happen while the SQL thread was
initializing the master.info.
Solution: we solve this by locking the log and checking whether it is
hot. If it is, we keep the log locked while seeking. Otherwise we
release it right away, because a log can go from hot to cold, but not
from cold to hot.
Accidental change in compile-time definitions for FreeBSD
Revert the accidental setting of "HAVE_BROKEN_REALPATH"
on current versions of FreeBSD; do this for both
autotools ("configure.in") and cmake ("cmake/os/FreeBSD.cmake").
use limit efficiently
Bug #36569: UPDATE ... WHERE ... ORDER BY... always does a
filesort even if not required
Also fixed: two bugs reported after the QA review (before the commit
of the bugs above to public trees; no documentation needed):
Bug #53737: Performance regressions after applying patch
for bug 36569
Bug #53742: UPDATEs have no effect after applying patch
for bug 36569
Execution of single-table UPDATE and DELETE statements did not use the
same optimizer as is used in the compilation of SELECT statements.
Instead, it had an optimizer of its own that did not take into account
that sorting can be omitted by retrieving rows through an index.
An extra optimization has been added: when applicable, single-table
UPDATE/DELETE statements now use an existing index instead of a
filesort, just as a corresponding SELECT query would.
Handling of DESC ordering expressions has also been added for cases
where a reverse index scan is applicable.
From now on, most single-table UPDATE and DELETE statements show the
same disk access patterns as the corresponding SELECT query. We verify
this by comparing the results of SHOW STATUS LIKE 'Sort%'.
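A minimal sketch of such a comparison, using a hypothetical table t1 with an
index on column b:

  CREATE TABLE t1 (a INT, b INT, KEY (b)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1,3),(2,2),(3,1),(4,0);
  FLUSH STATUS;
  UPDATE t1 SET a = a + 1 ORDER BY b LIMIT 2;
  SHOW STATUS LIKE 'Sort%';
  -- with the fix, the UPDATE can read rows through the index on b, so the
  -- Sort_* counters are expected to stay at zero, as they would for the
  -- corresponding SELECT ... ORDER BY b LIMIT 2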
Currently the get_index_for_order function
a) checks the quick select index (if any) for compatibility with the
ORDER expression list, or
b) chooses the cheapest available compatible index, but only if the
index scan is cheaper than a filesort.
The second way is implemented by the new test_if_cheaper_ordering
function (extracted from part of test_if_skip_sort_order()).