WITH MYSQL_REFRESH()
reset_slave_info.all was not initialized.
We fix this by setting lex->reset_slave_info.all= false in
the lex_start routine, which is called before every statement.
It was not possible to connect a slave to its master using a replication
account that authenticates with an external plugin.
Fixed by making sure that the CLIENT_PLUGIN_AUTH capability is set when the
client connects using mysql_real_connect(). Also, the plugin-dir path used
by the client library to locate authentication plugins is set based on the
analogous server setting. This is done in the connect_to_master() function
before the call to mysql_real_connect().
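For illustration, a minimal client-side sketch of the same setup, pointing
the client library at a plugin directory and a default auth plugin before
mysql_real_connect(). Host, account and plugin names are made up; the real
change lives in connect_to_master().

  #include <mysql.h>
  #include <stdio.h>

  int main()
  {
    MYSQL *mysql= mysql_init(NULL);
    if (!mysql)
      return 1;
    /* Client-side analogue of the server's plugin_dir setting. */
    mysql_options(mysql, MYSQL_PLUGIN_DIR, "/usr/lib/mysql/plugin");
    /* Optional: client authentication plugin to try first. */
    mysql_options(mysql, MYSQL_DEFAULT_AUTH, "auth_test_plugin");
    /* With the fix, CLIENT_PLUGIN_AUTH is negotiated automatically here. */
    if (!mysql_real_connect(mysql, "master-host", "repl_user", "repl_pass",
                            NULL, 3306, NULL, 0))
    {
      fprintf(stderr, "connect failed: %s\n", mysql_error(mysql));
      mysql_close(mysql);
      return 1;
    }
    mysql_close(mysql);
    return 0;
  }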
FROM OK PACKET
There's no reliable way (without knowing the protocol variants that each
plugin pair implements) to find out when the authentication exchange
ends.
The server is changed to send all the extra authentication packets that
server plugins need to send prefixed with the \x1 command.
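A hypothetical sketch of the framing idea; this is not the actual net-layer
code and the buffer handling is simplified.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Prefix plugin-supplied auth data with 0x01 so the client can tell
  // "more auth data follows" apart from an OK (0x00) or ERR (0xff) packet
  // while the exchange is still in progress.
  std::vector<uint8_t> frame_extra_auth_data(const uint8_t *data, size_t len)
  {
    std::vector<uint8_t> packet;
    packet.reserve(len + 1);
    packet.push_back(0x01);                       // the \x1 prefix
    packet.insert(packet.end(), data, data + len);
    return packet;                                // handed to the net layer
  }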
This fix was accidentally pushed to mysql-5.1 after the 5.1.59 clone-off in
bzr revision id marko.makela@oracle.com-20110829081642-z0w992a0mrc62s6w
with the fix of Bug#12704861 Corruption after a crash during BLOB update
but not merged to mysql-5.5 and upwards.
In the Barracuda formats, the clustered index record no longer
contains a prefix of off-page columns. Because of this, the undo log
must contain these prefixes, so that purge and multi-versioning will
continue to work. However, this also means that an undo log record can
become too big to fit in an undo log page. (It is a limitation of the
undo log that undo records cannot span across multiple pages.)
In case the checks for undo log size fail when CREATE TABLE or CREATE
INDEX is executed, we need a fallback that blocks a modification
operation when the undo log record would exceed the maximum size.
trx_undo_free_last_page_func(): Renamed from trx_undo_free_page_in_rollback().
Define the trx_t parameter only in debug builds.
trx_undo_free_last_page(): Wrapper for trx_undo_free_last_page_func().
Pass the trx_t parameter only in debug builds.
trx_undo_truncate_end_func(): Renamed from trx_undo_truncate_end().
Define the trx_t parameter only in debug builds. Rewrite a for(;;) loop
as a while loop for clarity.
trx_undo_truncate_end(): Wrapper for trx_undo_truncate_end_func().
Pass the trx_t parameter only in debug builds.
trx_undo_erase_page_end(): Return TRUE if the page was non-empty
to begin with. Refuse to erase empty pages.
trx_undo_report_row_operation(): If the page for which the undo log record
was too big was empty, free the undo page and return DB_TOO_BIG_RECORD.
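A simplified sketch of this fallback, with invented types standing in for
the real undo-page and mini-transaction machinery.

  #include <cstddef>

  enum db_err { DB_SUCCESS, DB_TOO_BIG_RECORD };
  static const size_t UNDO_PAGE_CAPACITY= 16384;  // records cannot span pages

  struct undo_page {
    size_t used= 0;                               // bytes used by undo records
    bool empty() const { return used == 0; }
  };

  static undo_page *add_undo_page() { return new undo_page(); }  // stand-in
  static void free_last_page(undo_page *p) { delete p; }         // stand-in

  db_err report_row_operation(undo_page *&page, size_t rec_size)
  {
    if (page->used + rec_size <= UNDO_PAGE_CAPACITY)
    {
      page->used+= rec_size;                      // record fits: normal path
      return DB_SUCCESS;
    }
    if (!page->empty())
    {
      // The old page stays at the end of the undo log (its ownership is not
      // modelled here); retry on a fresh, empty page.
      page= add_undo_page();
      return report_row_operation(page, rec_size);
    }
    // Does not fit even in an empty page: free the useless page and refuse
    // the modification instead of writing a truncated undo record.
    free_last_page(page);
    page= NULL;
    return DB_TOO_BIG_RECORD;
  }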
rb:749 approved by Inaam Rana
GROUPING BY FUNCTIONS.... (PART
The bug was introduced in a patch for bug 49897.
Problem: The assertion inserted by the original patch to guard against
zero-length sort keys during the merge phase also triggers when the whole
set fits in memory.
Fix: Move assert so that it does not trigger if the whole set is in
memory.
mysql-test/r/group_by.result:
Add test for bug#11765254
mysql-test/t/group_by.test:
Add test for bug#11765254
sql/filesort.cc:
Move assertion
Converting the number zero to binary and back yielded the number zero,
but with no digits, i.e. zero precision.
This made the multiply algorithm go haywire in various ways.
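A toy model of the fix; the real code operates on st_decimal_t in
strings/decimal.c, the struct below is invented.

  #include <cassert>

  struct toy_decimal {
    int intg;         // digits before the decimal point
    int frac;         // digits after the decimal point
    bool sign;
    int digits[9];    // simplified digit storage
  };

  // A "proper" zero keeps one integer digit, so intg + frac is never 0 and
  // the length arithmetic in multiplication stays sane.
  void make_proper_zero(toy_decimal *d)
  {
    d->intg= 1;
    d->frac= 0;
    d->sign= false;
    d->digits[0]= 0;
  }

  int main()
  {
    toy_decimal z;
    make_proper_zero(&z);
    assert(z.intg + z.frac > 0);  // what the multiply code implicitly assumes
    return 0;
  }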
include/decimal.h:
Document struct st_decimal_t
mysql-test/r/type_newdecimal.result:
New test case (valgrind warnings)
mysql-test/t/type_newdecimal.test:
New test case (valgrind warnings)
sql/my_decimal.h:
Remove the HAVE_purify enabled/disabled code.
strings/decimal.c:
Make a proper zero, with non-zero precision.
Suppress the known warnings generated by filesort().
The real fix belongs to worklog 1509:
Pack values of non-sorted fields in the sort buffer
(which is basically the same issue, but in an optimization context:
We are writing the entire sort buffer to disk,
including unused space for varchar columns.)
mysql-test/valgrind.supp:
Add new Memcheck suppressions for filesort.
sql/filesort.cc:
Remove the ifdef HAVE_purify/bzero code, use valgrind suppressions instead.
Background: Backporting the fix for BUG 11752963 to the MySQL 5.1 branch.
Problem: The fix for bug 11752963 was only available in trunk and the 5.5
branch. A partial fix had already been pushed to the 5.1 branch.
Fix: Backport the fixes of bug 11752963 to the 5.1 branch.
1. Made all major changes to bring the 5.1 branch in line with 5.5 and trunk.
2. Skipped the partial patch that was already applied to the 5.1 branch.
sql/rpl_rli.h:
Made inited volatile (see inline comments).
sql/slave.cc:
backported all changes from the fix of BUG#11752963.
Blind attempt to fix BUG 12881278 - MAIN.MYISAM TEST FAILS ON LINUX
The printed text is truncated at character 63:
"MySQL thread id 1236, OS thread handle 0x7ff187b96700, query id"
I still do not understand how this truncation could have caused the
main.myisam failure, but in any case the buffer needs to be increased.
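A standalone demonstration of the cutoff, assuming the text is built with an
snprintf-style call into a 64-byte buffer.

  #include <cstdio>

  int main()
  {
    char small[64];
    char large[128];
    const char *fmt=
      "MySQL thread id %llu, OS thread handle 0x%llx, query id %llu";

    // 63 characters plus the terminating NUL fill the 64-byte buffer exactly,
    // which is where the quoted text above gets cut off.
    snprintf(small, sizeof(small), fmt, 1236ULL, 0x7ff187b96700ULL, 42ULL);
    snprintf(large, sizeof(large), fmt, 1236ULL, 0x7ff187b96700ULL, 42ULL);

    printf("64-byte buffer : %s\n", small);   // truncated
    printf("128-byte buffer: %s\n", large);   // full text
    return 0;
  }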
CRASHES SERVER
Flushing of a MERGE table or one of its child tables, which was
locked by the flushing thread using LOCK TABLES, might have caused
crashes or assertion failures if the thread failed to reopen a
child or parent table.
In particular, this might have happened when another connection
killed this FLUSH TABLE statement/connection.
This problem might also have occurred when we failed to reopen a
MERGE table or one of its children while executing a DDL statement
under LOCK TABLES.
The problem was caused by the fact that reopen_tables() might
have failed to reopen a child table but still tried to reopen its
parent, reattach children to it and re-lock it. Vice versa, it
might have failed to reopen the parent but kept references from
children to the parent around. Since reopen_tables() closes a
table it has failed to reopen and thereby frees all associated
memory, such dangling references led to crashes when followed.
This patch solves the problem by ensuring that we always close
the parent table and all its children if we fail to reopen the
parent or one of its children. The same happens if we fail to
reattach children to the parent.
Affects 5.1 only.
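A simplified sketch of the invariant the patch enforces; types and helpers
are invented, the real logic is in reopen_tables() in sql/sql_base.cc.

  #include <vector>

  struct TableRef { bool open= false; };

  static bool reopen(TableRef &t) { t.open= true; return true; }  // stand-in
  static void close_ref(TableRef &t) { t.open= false; }           // stand-in

  // Reopen a MERGE parent and its children; on any failure close the whole
  // family so no child keeps a dangling reference to a freed parent and
  // no parent keeps references to freed children.
  bool reopen_merge_family(TableRef &parent, std::vector<TableRef> &children)
  {
    bool ok= reopen(parent);
    for (TableRef &child : children)
      if (ok)
        ok= reopen(child);

    if (!ok)
    {
      close_ref(parent);                 // never keep a half-reopened family
      for (TableRef &child : children)
        close_ref(child);
    }
    return ok;
  }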
mysql-test/r/merge.result:
A test case for BUG#11763712.
mysql-test/t/merge.test:
A test case for BUG#11763712.
sql/sql_base.cc:
When flushing tables under LOCK TABLES, all locked
and flushed tables are released and then reopened.
It may happen that we fail to reopen some tables; in this
case we reopen as many tables as possible.
If it was not possible to reopen a MERGE child, the MERGE
parent is unusable and must be removed from the thread's
open tables list.
If it was not possible to reopen the MERGE parent, all
MERGE child table objects are unusable as well, not least
because their locks are handled by the MERGE parent.
They must also be removed from the thread's open tables
list.
In other words, if it was impossible to reopen any
object of a MERGE table or to reattach its child tables,
all objects of this MERGE table must be considered
unusable and closed.
Also addressed issues in bug #11745133, where we can mark a table
as corrupted instead of crashing the server when a corrupted
buffer/page is found, if the table was created with
innodb_file_per_table on.
to mysql-5.5.16-release.
Original revision:
# revision-id: dmitry.lenev@oracle.com-20110811155849-feyt3h7tj48padiu
# parent: tatjana.nuernberg@oracle.com-20110811120945-c6x9a5d2du8s9oj2
# committer: Dmitry Lenev <Dmitry.Lenev@oracle.com>
# branch nick: mysql-5.5-12828477
# timestamp: Thu 2011-08-11 19:58:49 +0400
# message:
# Fix for bug #12828477 - "MDL SUBSYSTEM CREATES BIG OVERHEAD
# FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
#
# The problem was that the metadata locking subsystem introduced
# too much overhead for queries to I_S which were processed by
# opening only .FRM or .TRG files and had to scan a lot of
# tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
# The same effect was not observed for similar queries which
# performed full-blown table open in order to fill I_S table.
#
# The problem stemmed from the fact that when the I_S
# implementation opened only the .FRM or .TRG file for each table
# processed, it didn't release the metadata lock it had acquired
# on the table after finishing its processing. As a result, the
# list of acquired metadata locks kept growing until the end of
# the statement. Since acquisition of each new lock required a
# search in the list of already acquired locks, performance
# degraded.
#
# The same effect is not observed when the I_S implementation
# performs a full-blown table open for each table being
# processed, as in that case the metadata lock on the table is
# released right after table processing.
#
# This fix addresses the problem by ensuring that the I_S
# implementation releases the metadata lock after processing
# the table, both in the case of a full-blown table open and in
# the case when only the .FRM or .TRG file is read.
FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
The problem was that the metadata locking subsystem introduced
too much overhead for queries to I_S which were processed by
opening only .FRM or .TRG files and had to scan a lot of
tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
The same effect was not observed for similar queries which
performed full-blown table open in order to fill I_S table.
The problem stemmed from the fact that when the I_S
implementation opened only the .FRM or .TRG file for each table
processed, it didn't release the metadata lock it had acquired
on the table after finishing its processing. As a result, the
list of acquired metadata locks kept growing until the end of
the statement. Since acquisition of each new lock required a
search in the list of already acquired locks, performance
degraded.
The same effect is not observed when the I_S implementation
performs a full-blown table open for each table being
processed, as in that case the metadata lock on the table is
released right after table processing.
This fix addresses the problem by ensuring that the I_S
implementation releases the metadata lock after processing
the table, both in the case of a full-blown table open and in
the case when only the .FRM or .TRG file is read.
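A toy model of why the growing lock list hurts and why per-table release
fixes it; this is not the real MDL API.

  #include <list>
  #include <string>

  struct LockList {
    std::list<std::string> held;

    void acquire(const std::string &name)
    {
      for (const std::string &l : held)  // linear scan over locks already held
        if (l == name)
          return;
      held.push_back(name);
    }
    void release(const std::string &name) { held.remove(name); }
  };

  // Keeping every lock until the end of the statement makes an N-table
  // I_S scan do O(N^2) work in acquire(); releasing right after each table
  // keeps it O(N).
  void scan_information_schema(int n_tables, bool release_per_table)
  {
    LockList mdl;
    for (int i= 0; i < n_tables; i++)
    {
      std::string tbl= "t" + std::to_string(i);
      mdl.acquire(tbl);
      // ... read the .FRM/.TRG file and fill the I_S row ...
      if (release_per_table)
        mdl.release(tbl);                // behaviour after the fix
    }                                    // before the fix: all freed only here
  }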
mysql-test/r/information_schema.result:
Added coverage for bug #12828477 - "MDL SUBSYSTEM CREATES BIG
OVERHEAD FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
mysql-test/t/information_schema.test:
Added coverage for bug #12828477 - "MDL SUBSYSTEM CREATES BIG
OVERHEAD FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
sql/sql_show.cc:
Changed fill_schema_table_from_frm() to release the metadata lock
it has acquired after processing the .FRM or .TRG file for the
table.
Without this step the metadata locks acquired for each processed
table accumulate. When a lot of tables are processed by an I_S
query this results in a transaction with too many metadata locks
and, as a result, the performance of acquiring each new lock
degrades.
BUG #11754979 - 46675: ON DUPLICATE KEY UPDATE AND UPDATECOUNT() POSSIBLY WRONG
The mysql_affected_rows() client call returns 3 instead of 2 on
INSERT ... ON DUPLICATE KEY UPDATE query with a duplicated key value.
The fix for the old bug #29692 was incomplete: unnecessary double
increment of "touched" rows still happened.
This bugfix removes:
1) unneeded increment of "touched" rows and
2) useless double resetting of auto-increment value.
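A client-side illustration of the expected counts; connection parameters and
the table are made up, and error handling is omitted.

  #include <mysql.h>
  #include <stdio.h>

  int main()
  {
    MYSQL *c= mysql_init(NULL);
    if (!mysql_real_connect(c, "localhost", "user", "pass", "test",
                            0, NULL, 0))
      return 1;

    mysql_query(c, "CREATE TABLE t1 (id INT PRIMARY KEY, val INT)");
    mysql_query(c, "INSERT INTO t1 VALUES (1, 10) "
                   "ON DUPLICATE KEY UPDATE val = VALUES(val) + 1");
    printf("fresh insert:         %llu\n",
           (unsigned long long) mysql_affected_rows(c));  // expected: 1

    mysql_query(c, "INSERT INTO t1 VALUES (1, 10) "
                   "ON DUPLICATE KEY UPDATE val = VALUES(val) + 1");
    printf("duplicate key update: %llu\n",
           (unsigned long long) mysql_affected_rows(c));  // expected: 2, bug gave 3

    mysql_query(c, "DROP TABLE t1");
    mysql_close(c);
    return 0;
  }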
sql/sql_insert.cc:
write_record() function:
The unneeded increment of "touched" rows and the useless double resetting
of the auto-increment value have been removed.
tests/mysql_client_test.c:
New test case.
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), which
leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
mysql-test/r/sp.result:
test case
mysql-test/t/sp.test:
test case
sql/sql_base.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_class.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_class.h:
THD::used_tables is replaced with LEX::used_tables
sql/sql_insert.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_lex.h:
THD::used_tables is replaced with LEX::used_tables
sql/sql_prepare.cc:
THD::used_tables is replaced with LEX::used_tables
sql/sql_select.cc:
THD::used_tables is replaced with LEX::used_tables
The problem is that TIME_FUZZY_DATE is explicitly passed to the
get_arg0_date() call in the Item_date_typecast::get_date() method.
The fix is to use the real fuzzy_date value.
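A minimal sketch of the change with simplified, invented names; the real code
is Item_date_typecast::get_date() calling get_arg0_date().

  typedef unsigned int date_flags_t;
  static const date_flags_t TIME_FUZZY_DATE_FLAG= 1;   // stand-in flag

  struct Date { int y, m, d; };

  // Stand-in for fetching the date from the first argument.
  static bool get_arg0_date_stub(Date *d, date_flags_t flags)
  { (void) flags; d->y= 2011; d->m= 8; d->d= 11; return false; }

  // Before: always passed the fuzzy flag, ignoring stricter modes the
  // caller may have requested.
  bool get_date_before(Date *d, date_flags_t /*fuzzy_date*/)
  { return get_arg0_date_stub(d, TIME_FUZZY_DATE_FLAG); }

  // After: forward whatever the caller asked for.
  bool get_date_after(Date *d, date_flags_t fuzzy_date)
  { return get_arg0_date_stub(d, fuzzy_date); }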
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
use real fuzzy_date value
In 5.5, REFRESH SLAVE is used as an alias for RESET SLAVE; it has
been removed in 5.6. Resetting a slave through REFRESH SLAVE was
causing errors on the valgrind platform since reset_slave_info
was undefined.
To fix the problem, we now set reset_slave_info when handling
REFRESH SLAVE.
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
The problem was that CHECK/REPAIR TABLE for a MERGE table which
had several children missing or in the wrong engine reported only
the issue with the first such table in its result-set, while in
5.0 this statement returned the whole list of problematic tables.
The ability to report problems for all children was lost during
significant refactorings of the MERGE code which were done as
part of work on the 5.1 and 5.5 releases.
This patch restores the status quo ante by changing the code in
such a way that:
1) Failure to open a child table due to its absence during CHECK/
REPAIR TABLE for a MERGE table is not reported immediately
when its absence is discovered in open_tables(). Instead,
handling/error reporting in such a situation is postponed
until the moment when children are attached.
2) Code performing the attaching of children no longer stops when
it encounters the first problem with one of the children during
CHECK/REPAIR TABLE. Instead, it continues iterating through
the child list until all problems caused by child absence/
wrong engine are reported (see the sketch below).
Note that even after this change a problem with a mismatch of
child/parent definitions won't be reported if there is also
another child missing, but this is how it was in 5.0 as well.
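A sketch of the changed attach loop with invented types; the real code is in
myrg_attach_children() and ha_myisammrg.cc.

  #include <string>
  #include <vector>

  enum child_state { CHILD_OK, CHILD_MISSING, CHILD_WRONG_ENGINE };
  struct Child { std::string name; child_state state; };

  // Returns the list of problems; an empty list means all children attached.
  std::vector<std::string> attach_children(const std::vector<Child> &children)
  {
    std::vector<std::string> problems;
    for (const Child &c : children)
    {
      if (c.state == CHILD_MISSING)
        problems.push_back(c.name + ": table doesn't exist");
      else if (c.state == CHILD_WRONG_ENGINE)
        problems.push_back(c.name + ": not a MyISAM table");
      // Keep iterating: do not stop at the first bad child, so CHECK/REPAIR
      // TABLE can list every problematic underlying table, as in 5.0.
    }
    return problems;
  }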
mysql-test/r/merge.result:
Added test case for bug #11754210 - "45777: CHECK TABLE DOESN'T
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
Adjusted results of existing tests to the fact that CHECK/REPAIR
TABLE statements now try to report problems about missing table/
wrong engine for all underlying tables, and to the fact that
mismatch of parent/child definitions is always reported as an
error and not a warning.
mysql-test/t/merge.test:
Added test case for bug #11754210 - "45777: CHECK TABLE DOESN'T
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
sql/sql_base.cc:
Changed code responsible for opening tables to ignore the fact
that underlying tables of a MERGE table are missing, if this
table is opened for CHECK/REPAIR TABLE.
The absence of underlying tables in this case is now detected and
an appropriate error is reported at the point when child tables
are attached. At this point we can produce the full list of
problematic child tables/errors to be returned as part of the
CHECK/REPAIR TABLE result-set.
storage/myisammrg/ha_myisammrg.cc:
Changed myisammrg_attach_children_callback() to handle the new
situation in which, during CHECK/REPAIR TABLE, we do not report
an error about a missing child immediately when this fact is
discovered during open_tables() but postpone error reporting
until the children are attached.
Also, this callback is now responsible for pushing an error
mentioning the problematic child table to the list of errors to
be reported by CHECK/REPAIR TABLE statements.
Finally, since myrg_attach_children() no longer relies on the
return value from the callback to determine the end of the
children list, the callback no longer needs to set the my_errno
value and can be simplified.
Changed myrg_print_wrong_table() to always report a problem
with a child table as an error and not as a warning. This makes
reporting for different types of issues with child tables
more consistent and compatible with the 5.0 behavior.
storage/myisammrg/myrg_open.c:
Changed code in myrg_attach_children() not to abort on the
first problem with a child table when attaching children to
parent MERGE table during CHECK/REPAIR TABLE statement
execution. This allows CHECK/REPAIR TABLE to report problems
about absence/wrong engine for all underlying tables as
part of their result-set.
TOOLS
Backport a fix for Bug 57094 from 5.5.
The following revision was backported:
# revision-id: alexander.nozdrin@oracle.com-20101006150613-ls60rb2tq5dpyb5c
# parent: bar@mysql.com-20101006121559-am1e05ykeicwnx48
# committer: Alexander Nozdrin <alexander.nozdrin@oracle.com>
# branch nick: mysql-5.5-bugteam-bug57094
# timestamp: Wed 2010-10-06 19:06:13 +0400
# message:
# Fix for Bug 57094 (Copyright notice incorrect?).
#
# The fix is to:
# - introduce ORACLE_WELCOME_COPYRIGHT_NOTICE define to have a single place
# to specify copyright notice;
# - replace custom copyright notices with ORACLE_WELCOME_COPYRIGHT_NOTICE
# in programs.
mysql-test/t/implicit_commit.test:
Test fails if server is compiled with -DENABLED_PROFILING=0
sql/sql_class.cc:
Let class PROFILING do its own handling of the input file name.
sql/sql_profile.cc:
Store only basename of file argument.
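A small sketch of the basename handling; the helper name is invented.

  #include <cstring>

  // Keep only the part after the last path separator, so profiling output
  // does not depend on the directory the server was built or started from.
  const char *base_name_of(const char *path)
  {
    const char *sep= strrchr(path, '/');
  #ifdef _WIN32
    const char *bsep= strrchr(path, '\\');
    if (bsep && (!sep || bsep > sep))
      sep= bsep;
  #endif
    return sep ? sep + 1 : path;
  }
  // base_name_of("/home/user/mysql/sql/sql_parse.cc") -> "sql_parse.cc"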
Before BUG#28796, an empty host was used to identify that an instance was no
longer a slave. However, BUG#28796 changed this behavior and one cannot set
an empty host. Besides, a RESET SLAVE only cleans up information on the
next event to retrieve from the master, disables SSL and resets the
heartbeat period. So a call to SHOW SLAVE STATUS after issuing a RESET
SLAVE still returns some valid information, such as host, port, user and
password.
To fix this problem, we have introduced the command RESET SLAVE ALL, which
does what a regular RESET SLAVE does and also clears host, port, user and
password information, thus allowing users to identify when an instance is
no longer a slave.
Truncate result of decimal division before converting to integer.
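A simplified analogue of the val_int() change, using doubles to stand in for
the decimal quotient.

  #include <cmath>
  #include <cstdio>

  long long int_div(double a, double b)
  {
    double quotient= a / b;                   // stand-in for decimal division
    return (long long) std::trunc(quotient);  // truncate toward zero, never round
  }

  int main()
  {
    printf("5 DIV 3  -> %lld\n", int_div(5, 3));   // 1; rounding would give 2
    printf("-5 DIV 3 -> %lld\n", int_div(-5, 3));  // -1, truncation toward zero
    return 0;
  }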
mysql-test/r/func_math.result:
New test case.
mysql-test/t/func_math.test:
New test case.
sql/item_func.cc:
Item_func_int_div::val_int():
Truncate result of decimal division before converting to integer.
mysql-test/r/type_float.result:
New test case.
mysql-test/t/type_float.test:
New test case.
sql/item_strfunc.cc:
There was a buffer over/under-run when inserting decimal point into an empty string.
HA_ERR was returning 0 (a null string) when no error had happened
(error=0). Since HA_ERR is used in DBUG_PRINT regardless of whether
there was an error or not, the server could crash in Solaris debug
builds.
We fix this by:
- deploying an assertion that ensures that the function
is not called when no error has happened;
- making sure that HA_ERR is only called when an error
happened;
- making HA_ERR return "No Error", instead of 0, for
non-debug builds if it is called when no error happened.
This makes HA_ERR return values that work with DBUG_PRINT on
Solaris debug builds.
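A sketch of this behaviour with an invented lookup table; the real HA_ERR
works on the handler error codes.

  #include <cassert>
  #include <cstdio>

  static const char *handler_error_names[]= {
    "HA_ERR_KEY_NOT_FOUND", "HA_ERR_FOUND_DUPP_KEY", "HA_ERR_RECORD_CHANGED"
  };

  const char *ha_err_name(int error)
  {
    assert(error != 0);              // debug builds: catch calls with no error
    if (error <= 0 || error > 3)
      return "No Error";             // non-debug builds: always printable
    return handler_error_names[error - 1];
  }

  int main()
  {
    printf("error 2 -> %s\n", ha_err_name(2));
    printf("error 0 -> %s\n", ha_err_name(0));  // "No Error" in release builds
    return 0;
  }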
non-latin1 server error message
The problem was a one-byte buffer overflow in the conversion
of an error message between character sets. Before explaining
the problem further, some background information. Before an
error message is sent to the user, the message is converted
to the character set specified in the character_set_results
variable. For various reasons, this conversion might cause
the message to increase in length -- for example, if certain
characters can't be represented in the result character set.
If the final message length is greater than the maximum allowed
length of an error message (MYSQL_ERRMSG_SIZE), the message
is truncated. The message is also always null-terminated
regardless of the character set. The problem arises from this
null-termination. If a message length reached the maximum,
the terminating null character would be placed one byte past
the end of the message buffer.
The solution is to reserve the end of the message buffer for
the null character.
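A sketch of the buffer arithmetic, with an invented helper in place of the
real charset conversion in sql/sql_error.cc.

  #include <cstddef>
  #include <cstring>

  // Conversion may only fill to[0 .. to_size-2]; the last byte is reserved
  // for the terminating NUL, so it is never written past the buffer.
  size_t copy_error_message(char *to, size_t to_size,
                            const char *from, size_t from_len)
  {
    if (to_size == 0)
      return 0;
    const char *to_end= to + (to_size - 1);  // last byte kept for '\0'
    size_t n= from_len;
    if (n > (size_t) (to_end - to))
      n= to_end - to;                        // truncate instead of overflowing
    memcpy(to, from, n);
    to[n]= '\0';
    return n;
  }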
mysql-test/t/ctype_errors.test:
Add test case for Bug#12736295.
sql/sql_error.cc:
The to_end pointer was actually pointing past the end of
the buffer. Since the message is always null terminated,
point to_end to the last position of the buffer.
The server crashes if it processes table map events that are
corrupted, especially if they map different tables to the same
identifier. This could happen, for instance, due to BUG 56226.
We fix this by checking whether the table id has already been
mapped before actually applying the event. If it has been mapped
with different settings an error is raised and the slave SQL
thread stops. If it has been mapped with same settings the event
is skipped. If the table is set to be ignored by the filtering
rules, there is no change in behavior: the event is skipped and
ids are not checked.
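A sketch of the duplicate-id check with invented types; the real code
inspects the slave's list of mapped tables in the row-based applier.

  #include <string>
  #include <unordered_map>

  enum map_result { MAP_OK, MAP_SKIP_DUPLICATE, MAP_ERROR_CONFLICT };

  // table_id -> "db.table" definition already seen for the current group.
  map_result check_table_map(
      std::unordered_map<unsigned long, std::string> &seen,
      unsigned long table_id, const std::string &def)
  {
    std::unordered_map<unsigned long, std::string>::iterator it=
        seen.find(table_id);
    if (it == seen.end())
    {
      seen[table_id]= def;
      return MAP_OK;                  // first mapping of this id: apply it
    }
    if (it->second == def)
      return MAP_SKIP_DUPLICATE;      // same settings: skip the event
    return MAP_ERROR_CONFLICT;        // same id, different table: stop SQL thread
  }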
mysql-test/suite/rpl/t/rpl_row_corruption.test:
Added a simple test case that checks both cases:
- multiple table maps with the same identifier
- multiple table maps with the same identifier, but only one
is processed (the others are filtered out)