When a .CSV file for a table in the CSV engine contains
\X characters as part of unquoted fields, e.g.
2,naraya\nan
the \n is not interpreted as a newline (it is, however, interpreted as
a newline in a quoted field).
The old algorithm copied the entire value of an unquoted field without
parsing the \X characters.
The new algorithm adds the capability to handle \X characters in the
unquoted fields of a .CSV file.
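A minimal standalone sketch of the new behavior (simplified, not the actual
ha_tina parsing code): backslash escapes are now interpreted while scanning
an unquoted field, the same way they already were for quoted fields.

    #include <string>

    // Interpret \X escape sequences in an unquoted CSV field.
    static std::string unescape_unquoted_field(const std::string &raw)
    {
      std::string out;
      for (size_t i = 0; i < raw.size(); i++)
      {
        if (raw[i] == '\\' && i + 1 < raw.size())
        {
          switch (raw[++i])
          {
          case 'n':  out += '\n'; break;  // newline
          case 't':  out += '\t'; break;  // tab
          case 'r':  out += '\r'; break;  // carriage return
          case '\\': out += '\\'; break;  // literal backslash
          default:   out += raw[i];       // unknown escape: keep the char
          }
        }
        else
          out += raw[i];
      }
      return out;
    }

For the example above, unescape_unquoted_field("naraya\\nan") yields
"naraya", a real newline, then "an", matching the existing quoted-field
behavior.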
Text conflict in mysql-test/collections/default.experimental
Text conflict in mysql-test/r/show_check.result
Text conflict in mysql-test/r/sp-code.result
Text conflict in mysql-test/suite/binlog/r/binlog_tmp_table.result
Text conflict in mysql-test/suite/rpl/t/disabled.def
Text conflict in mysql-test/t/show_check.test
Text conflict in mysys/my_delete.c
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/log.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/repl_failsafe.cc
Text conflict in sql/slave.cc
Text conflict in sql/sql_parse.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_yacc.yy
Text conflict in storage/myisam/ha_myisam.cc
Corrected results for stm_auto_increment_bug33029.reject
(2009-12-01 20:01:49.000000000 +0300):
@@ -42,9 +42,6 @@
 RETURN i;
 END//
 CALL p1();
-Warnings:
-Note 1592 Statement may not be safe to log in statement format.
-Note 1592 Statement may not be safe to log in statement format.
There should indeed be no Note present, because there is in fact an
auto-increment top-level query in sp() that triggers inserting into yet
another auto-increment table.
(TODO: alert DaoGang to improve the test.)
myisampack --join does not create the destination table .frm file
Problem:
========
Myisampack --join did not create the destination table .frm file.
The user had to copy a source table .frm file as the destination .frm
file for the mysql server to recognize the table. This is just a
'user-friendliness' issue.
How it was solved
=================
After a successful join and compression, we copy the .frm file from the
first source table.
Functionality added
===================
myisampack --join=/path/t3 /path/t1 /path/t2 creates
/path/t3.frm (which is basically a copy of the first source table's .frm
file, /path/t1.frm)
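In outline, the added step is a single file copy after a successful pack; a
rough sketch using the mysys helpers fn_format() and my_copy() (the actual
change in myisampack.c may differ in details):

    #include <my_global.h>
    #include <my_sys.h>

    /* Copy "<first_source>.frm" to "<destination>.frm" after the join. */
    static int copy_frm_from_first_source(const char *first_source,
                                          const char *destination)
    {
      char from[FN_REFLEN], to[FN_REFLEN];
      fn_format(from, first_source, "", ".frm", MY_UNPACK_FILENAME);
      fn_format(to, destination, "", ".frm", MY_UNPACK_FILENAME);
      /* MY_WME makes my_copy() report an error message on failure. */
      return my_copy(from, to, MYF(MY_WME));
    }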
Tests
=====
Modified myisampack.test to cover four scenarios:
1. Positive myisampack --join test
In this case, after the join operation is done, we test whether the
destination table is accessible from the server.
2. Positive myisampack --join test with an existing .frm file
We test the above case with an existing .frm file for the destination
table. It should succeed even in this case.
3. Positive myisampack --join test with no .frm files for the source tables
We test the join operation with no .frm files for the source tables. It
should complete the join operation without any warnings or error messages.
4. Negative myisampack --join test
We test myisampack --join with existing .MYI, .MYD, and .frm files for the
destination table. It should fail with exit status 2 in this case.
SELECT queries on archive tables joined on their primary keys
return no results (empty set).
The archive storage engine doesn't inform the handler about the status
of a fetched record when it is found. Fixed the archive storage engine
to update the record status when it fetches a record successfully.
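A simplified sketch of the idea (illustrative, not the exact patch): the
handler convention is that table->status must be cleared on a successful
fetch, otherwise the join code treats the row as not found.

    int ha_archive::rnd_next(uchar *buf)
    {
      int rc= get_row(&archive, buf);             // fetch the next record
      // Previously the status flag was left stale here, so joins on the
      // primary key saw STATUS_NOT_FOUND and returned an empty set.
      table->status= rc ? STATUS_NOT_FOUND : 0;
      return rc;
    }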
WL#3951 - MyISAM: Additional Error Logs for Data Corruption
When table corruption is detected, in addition to the current error
message provide the following information:
- list of threads (and queries) accessing a table;
- thread_id of a thread that detected corruption;
- source file name and line number where this corruption was detected;
- optional extra information (string).
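One way a detection site could capture the source location, as a standalone
sketch (illustrative names, not the actual MyISAM code):

    #include <cstdio>

    static void report_corruption_impl(unsigned long thd_id,
                                       const char *file, int line,
                                       const char *extra)
    {
      std::fprintf(stderr,
                   "Table corruption detected by thread %lu at %s:%d",
                   thd_id, file, line);
      if (extra)
        std::fprintf(stderr, " (%s)", extra);
      std::fprintf(stderr, "\n");
      // The full worklog also lists the threads/queries using the table.
    }

    // Expands __FILE__/__LINE__ at the place the corruption is detected.
    #define REPORT_CORRUPTION(thd_id, extra) \
      report_corruption_impl((thd_id), __FILE__, __LINE__, (extra))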
beyond unsigned long.
BUG#44779: binlog.binlog_max_extension may be causing failure on
next test in PB
NOTE1: this is the backport to next-mr.
NOTE2: already includes patch for BUG#44779.
Binlog file extensions would turn into negative numbers once the
variable used to hold the value reached maximum for signed
long. Consequently, incrementing value to the next (negative) number
would lead to .000000 extension, causing the server to fail.
This patch addresses the issue by not allowing negative extensions
and by returning an error from find_uniq_filename when the limit is
reached. Additionally, warnings are printed to the error log as the
limit is approached. FLUSH LOGS will also report warnings to the
user if the extension number has reached the limit. The limit has been
set to a maximum of 0x7FFFFFFF.
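A standalone sketch of the guard described above; the constant and function
names here are assumptions (the real change is in find_uniq_filename):

    #include <cstdio>

    typedef unsigned long ulong;

    static const ulong MAX_LOG_UNIQUE_FN_EXT = 0x7FFFFFFF; // highest .NNNNNN
    static const ulong LOG_WARN_FN_EXT_LEFT  = 1000;       // warn this close

    // Returns 0 and the next extension, or 1 when no extensions are left.
    static int next_binlog_extension(ulong current_ext, ulong *next_ext)
    {
      if (current_ext >= MAX_LOG_UNIQUE_FN_EXT)
        return 1;                    // error instead of wrapping negative
      *next_ext = current_ext + 1;
      if (MAX_LOG_UNIQUE_FN_EXT - *next_ext <= LOG_WARN_FN_EXT_LEFT)
        std::fprintf(stderr,
                     "Warning: only %lu binlog filename extensions left.\n",
                     MAX_LOG_UNIQUE_FN_EXT - *next_ext);
      return 0;
    }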
------------------------------------------------------------
revno: 2572.23.1
committer: davi@mysql.com/endora.local
timestamp: Wed 2008-03-19 09:03:08 -0300
message:
Bug#17954 Threads_connected > Threads_created
The problem is that insert delayed threads are counted as connected
but not as created, leading to a Threads_connected value greater than
the Threads_created value.
The solution is to enforce the documented behavior that the
Threads_connected value shall be the number of currently
open connections and that Threads_created shall be the
number of threads created to handle connections.
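A standalone sketch of the enforced accounting (illustrative, not the
server's code):

    struct ThreadCounters {
      unsigned long connected; // currently open connections
      unsigned long created;   // threads created to handle connections
    };

    // A client connection was opened or closed; the handling thread may
    // be reused from the thread cache, so 'created' is not touched here.
    static void on_connection_opened(ThreadCounters &c) { c.connected++; }
    static void on_connection_closed(ThreadCounters &c) { c.connected--; }

    // A thread was actually created to handle a connection.
    static void on_connection_thread_created(ThreadCounters &c)
    { c.created++; }

    // Insert delayed helper threads are neither open connections nor
    // connection handlers, so they change no counter.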
------------------------------------------------------------
revno: 2476.1116.1
committer: davi@mysql.com/endora.local
timestamp: Fri 2007-12-14 10:10:19 -0200
message:
DROP TABLE under LOCK TABLES simultaneous to a FLUSH TABLES
WITH READ LOCK (global read lock) can lead to a deadlock.
The solution is to not wait for the global read lock if the
thread is holding any locked tables.
Related to bugs 23713 and 32395. This issue is being fixed
only in 6.0 because it depends on the fix for bug 25858 --
which was fixed only in 6.0.
------------------------------------------------------------
revno: 2476.784.3
committer: davi@moksha.local
timestamp: Tue 2007-10-02 21:27:31 -0300
message:
Bug#25858 Some DROP TABLE under LOCK TABLES can cause deadlocks
When a client (connection) holding a lock on a table attempts to
drop (obtain an exclusive lock on) a second table that is already
held by a second client, and the second client then attempts to
drop the table that is held by the first client, the result is a
circular-wait deadlock. This scenario is very similar to trying to
drop (or rename) a table while holding read locks, which is
correctly forbidden.
The solution is to allow a drop table operation to continue only
if the table being dropped is write (exclusively) locked, or if
the table is temporary, or if the client is not holding any
locks. Using this scheme prevents the creation of a circular
chain in which each client is waiting for one table that the
next client in the chain is holding.
This is an incompatible change, as can be seen by the number of
test cases that needed to be fixed, but it is consistent with
respect to the behavior of the different scenarios in which the
circular wait might happen.
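The rule can be stated compactly; a standalone sketch with illustrative
names (not the server's actual data structures):

    struct ClientState {
      bool holds_any_locks;        // client is inside LOCK TABLES
      bool target_is_temporary;    // temporary tables are private
      bool target_is_write_locked; // exclusive lock already held
    };

    // A DROP TABLE may proceed only when it cannot join a wait chain.
    static bool drop_table_allowed(const ClientState &c)
    {
      if (!c.holds_any_locks)
        return true;  // no locks held: cannot be part of a circular wait
      // No new lock is needed for a temporary or already write-locked
      // table, so no wait (and hence no deadlock) can occur.
      return c.target_is_temporary || c.target_is_write_locked;
    }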
The set of bug fixes mentioned in the bug report has not been pushed to the main trees.
Fixed by extracting the commits done to the 6.0-rpl tree and applying them to the main 5.1 tree.
Notes.
1. Part of the changes - the mtr specifics - was backported to the main 5.0 tree for MTR v1
as http://lists.mysql.com/commits/46562
However, that fix is no longer present in MTR v2. (This fact was mailed to the people
maintaining MTR.)
2. Bug#36929 "crash in kill_zombie_dump_threads -> THD::awake() with replication tests"
is not backported because the patch is based on libevent, which was removed
from the main trees due to its instability.
Post-push fix: Removed MTRv1 arguments according to the
original patch. Although there is a version check, the patch
was pushed to a 5.1 GA staging tree, while the version check
considers version 5.2. This causes the deprecated parameters
to be used, despite the fact that they are no longer valid.
Part of MTRv1 is currently used in the RQG semisync test, and this
was causing the test to fail on slave startup.
It should be safe to uncomment when merging up to celosia.
A statement that updates more than one table with auto-increment
columns was marked as unsafe only in mixed mode, so the unsafe
warning could not be produced in statement mode.
To fix the problem, mark the statement as unsafe in statement mode too.
The additional patch. That 'loadxml.test' failure was actually about our testing system,
not the code.
Firstly, we need a new mysqltest command, which I called 'send_eval', so that an
expression can be evaluated and then started in a parallel thread. We only have
separate 'send' and 'eval' commands at the moment.
Then we need to add waiting code after the 'KILL' in our test, so the thread will be
killed before the test goes further. The present 'reap' command doesn't handle killed
threads well.
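A standalone sketch of what the new command combines (illustrative, not the
actual mysqltest.cc change): the $variable expansion of 'eval' followed by
the asynchronous dispatch of 'send', so the statement to be killed can be
built from test variables.

    #include <cstdio>
    #include <string>
    #include <thread>

    // Expand every occurrence of "$<var>" in the command text ('eval').
    static std::string expand(std::string text, const std::string &var,
                              const std::string &value)
    {
      const std::string token = "$" + var;
      for (size_t p = 0; (p = text.find(token, p)) != std::string::npos; )
      {
        text.replace(p, token.size(), value);
        p += value.size();
      }
      return text;
    }

    int main()
    {
      std::string query = expand("LOAD DATA INFILE '$file' INTO TABLE t1",
                                 "file", "/tmp/data.xml");
      std::thread worker([&query] {  // 'send': run without reaping yet
        std::printf("executing: %s\n", query.c_str());
      });
      // ... here the test would issue KILL and then wait ...
      worker.join();                 // 'reap'
      return 0;
    }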
per-file comments:
client/mysqltest.cc
Bug#42520 killing load .. infile Assertion failed: ! is_set(), file .\sql_error.cc, line 8
The 'send_eval' command implemented.
mysql-test/r/loadxml.result
Bug#42520 killing load .. infile Assertion failed: ! is_set(), file .\sql_error.cc, line 8
test result updated.
mysql-test/t/loadxml.test
Bug#42520 killing load .. infile Assertion failed: ! is_set(), file .\sql_error.cc, line 8
test case added.
------------------------------------------------------------
revno: 2476.784.4
revision-id: sp1r-davi@moksha.local-20071008114751-46069
parent: sp1r-davi@moksha.local-20071003002731-48537
committer: davi@moksha.local
timestamp: Mon 2007-10-08 08:47:51 -0300
message:
Bug#27249 table_wild with alias: select t1.* as something
Aliases to table wildcards are silently ignored, but they should
not be allowed, as the syntax is non-standard and currently useless.
There is no point in having an alias to a wildcard of column names.
The solution is to rewrite the select_item rule so that aliases
for table wildcards are not accepted.
Contribution by Martin Friebe
------------------------------------------------------------
revno: 2597.4.17
revision-id: sp1r-davi@mysql.com/endora.local-20080328174753-24337
parent: sp1r-anozdrin/alik@quad.opbmk-20080328140038-16479
committer: davi@mysql.com/endora.local
timestamp: Fri 2008-03-28 14:47:53 -0300
message:
Bug#15192 "fatal errors" are caught by handlers in stored procedures
The problem is that fatal errors (e.g.: out of memory) were being
caught by stored procedure exception handlers, which could cause
execution not to stop due to a continue handler.
The solution is to not call any exception handler if the error is
fatal, and to send the fatal error to the client.
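A standalone sketch of the rule (illustrative names, not the actual
sp_rcontext code): the handler search bails out for fatal errors so that
execution stops and the error reaches the client.

    struct ErrorContext {
      bool is_fatal;  // e.g. out of memory
      int  sql_errno; // the raised error code
    };

    // A stored procedure handler may catch the error only if it is not
    // fatal; fatal errors bypass CONTINUE/EXIT handlers entirely.
    static bool sp_handler_catches(const ErrorContext &err, int handled_errno)
    {
      if (err.is_fatal)
        return false;
      return err.sql_errno == handled_errno;
    }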