When a .CSV file for a table in the CSV engine contains
\X characters as part of unquoted fields, e.g.
2,naraya\nan
\n is not interpreted as a newline (it is, however, interpreted as a
newline in a quoted field).
The old algorithm copied the entire value of an unquoted field without
parsing the \X characters.
The new algorithm adds the capability to handle \X characters in the
unquoted fields of a .CSV file.
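A standalone sketch of this kind of escape handling (illustrative only,
not the actual ha_tina parser; the exact escape set handled is an
assumption):

  #include <string>

  /* Translate backslash escapes in an unquoted CSV field. */
  static std::string parse_unquoted_field(const std::string &raw)
  {
    std::string out;
    for (std::string::size_type i= 0; i < raw.size(); i++)
    {
      if (raw[i] == '\\' && i + 1 < raw.size())
      {
        switch (raw[++i])
        {
        case 'n': out+= '\n'; break;                /* real newline */
        case 'r': out+= '\r'; break;
        case 't': out+= '\t'; break;
        default:  out+= '\\'; out+= raw[i]; break;  /* keep unknown \X */
        }
      }
      else
        out+= raw[i];
    }
    return out;
  }

With this, the unquoted field naraya\nan yields a value containing a real
newline, matching the behaviour already in place for quoted fields.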
----------------------------------------------------------------------
ChangeSet@1.2571, 2008-04-08 12:30:06+02:00, vvaintroub@wva. +122 -0
Bug#32082 : definition of VOID in my_global.h conflicts with Windows
SDK headers
The VOID macro is now removed. Its usage is replaced with a void cast.
In some cases, where the cast does not make much sense (pthread_*, printf,
hash_delete, my_seek), the cast is omitted.
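A minimal before/after illustration (the VOID definition shown is a
simplification of the historical my_global.h macro):

  #include <pthread.h>
  #include <sys/stat.h>

  static pthread_mutex_t mutex= PTHREAD_MUTEX_INITIALIZER;

  int main(void)
  {
    /* Before: VOID(chmod("/tmp/f", 0644));
       where my_global.h defined something like #define VOID(X) (X),
       a name that collides with the VOID type in the Windows SDK. */
    (void) chmod("/tmp/f", 0644);   /* explicit void cast instead */

    /* For calls like pthread_* the cast adds nothing, so it is omitted. */
    pthread_mutex_lock(&mutex);
    pthread_mutex_unlock(&mutex);
    return 0;
  }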
----------------------------------------------------------
revno: 2630.7.1
committer: Konstantin Osipov <konstantin@mysql.com>
branch nick: mysql-6.0-lock-tables-new
timestamp: Mon 2008-06-02 15:14:18 +0400
message:
Fix a test suite timeout in partition.test and partition_csv.test
2677 Vladislav Vaintroub 2008-11-04
CMakeLists.txt files cleanup
- remove the SAFEMALLOC and SAFE_MUTEX definitions that were
present in *each* CMakeLists.txt. Instead, put them into the top level
CMakeLists.txt, but disable them on Windows, because
a) SAFEMALLOC does not add any functionality that is not already
present in the Debug C runtime (and two safe mallocs, one on top of the
other, only unnecessarily slow down the server)
b) SAFE_MUTEX does not work on Windows and had previously been
explicitly disabled on Windows with #undef. Fortunately,
ntdll does a pretty good job identifying problems with
CRITICAL_SECTIONs (it DebugBreak()s on using an uninitialized critical
section, or unlocking an unowned critical section)
- Also, remove the occasionally used -D_DEBUG (added by the compiler
anyway)
bzr branch mysql-5.1-performance-version mysql-trunk # Summit
cd mysql-trunk
bzr merge mysql-5.1-innodb_plugin # which is 5.1 + Innodb plugin
bzr rm innobase # remove the builtin
Next step: build, test fixes.
The probe definition file is now a separate file that is copied during
the build to the correct location; this enforces dependency requirements
correctly and allows builds to work when using either the source directory
or a separate build directory.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be at most a uint in length
Replaced abs_top_srcdir with top_srcdir; not sure it's an
improvement, but it is known that abs_top_srcdir
is a problem in some cases, and top_srcdir is a more common variable
to use for the same purpose.
The problem: the data file cannot be deleted on Windows because
there is another opened instance of this file.
The data file might be opened twice: at table-opening time and
during write_row execution. We need to close both instances
to satisfy Windows.
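A sketch of the shape of the fix (the descriptor names are assumptions,
not the actual ha_tina members):

  #include <unistd.h>

  /* One descriptor opened when the table was opened, one opened by
     write_row(). Windows refuses to delete the file while either of
     them is still open. */
  static int table_fd= -1;
  static int write_fd= -1;

  static void close_all_instances(void)
  {
    if (write_fd >= 0) { close(write_fd); write_fd= -1; }
    if (table_fd >= 0) { close(table_fd); table_fd= -1; }
    /* only now can unlink()/DeleteFile() succeed */
  }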
The problem:
the CSV storage engine open function returns success even
though it failed to open the data file.
The fix:
return an error.
Additional fixes:
added MY_WME to my_open to avoid a mysterious error message
free the share struct if opening the file was unsuccessful
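A simplified standalone model of the corrected control flow (the real
code lives in the engine's open path and uses my_open and the server's
share management):

  #include <fcntl.h>
  #include <cstdio>

  struct Share { int data_fd; };   /* stand-in for the per-table share */

  static Share *csv_open(const char *path)
  {
    Share *share= new Share();
    share->data_fd= open(path, O_RDWR);
    if (share->data_fd < 0)
    {
      perror(path);   /* the role MY_WME plays for my_open */
      delete share;   /* free the share struct on failure */
      return nullptr; /* report the error instead of success */
    }
    return share;
  }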
"Update of CSV row incorrect for some BLOBs"
when reading in rows, move blob columns into temporary storage not
allocated by the Field_blob class, or else the row update operation will
alter the original row and make MySQL think that nothing has changed.
fix the incrementing of wrong statistics values.
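A sketch of the blob handling described above (the names are
illustrative, not the actual ha_tina members):

  #include <vector>
  #include <cstddef>

  struct BlobRef
  {
    const char *ptr;        /* points into the row buffer after a read */
    std::size_t length;
    std::vector<char> own;  /* engine-owned copy, one per blob column */
  };

  /* Give the blob its own copy of the data so that a later update
     cannot silently overwrite the original row image. */
  static void detach_blob(BlobRef *blob)
  {
    blob->own.assign(blob->ptr, blob->ptr + blob->length);
    blob->ptr= blob->own.data();
  }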
INSERT/UPDATE against a CSV table created by MySQL earlier than 5.1.23
with a NULL-able column results in a server crash in debug builds.
Starting with 5.1.23, CSV doesn't permit creation of NULL-able columns,
but it is still possible to get such a table from older MySQL versions.
Fixed by removing an excessive DBUG_ASSERT().
"CSV does not work with NULL value in datetime fields"
Attempting to insert a row with a NULL value for a DATETIME field
results in a CSV file which the storage engine cannot read.
Don't blindly assume that "0" is acceptable for all field types;
since CSV does not support NULL, we find out from the field its
default non-NULL value.
Do not permit the creation of a table with nullable columns.
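A simplified model of the idea (the class and method names here are
illustrative; in the server the per-type behaviour lives in the Field
hierarchy):

  #include <string>

  /* Each field type knows its own non-NULL default, so the CSV layer
     never has to guess that "0" fits every type. */
  struct Field
  {
    virtual std::string default_non_null() const = 0;
    virtual ~Field() {}
  };
  struct Field_long : Field
  {
    std::string default_non_null() const { return "0"; }
  };
  struct Field_datetime : Field
  {
    std::string default_non_null() const { return "0000-00-00 00:00:00"; }
  };

  static std::string csv_value_for_null(const Field &f)
  {
    return f.default_non_null();   /* type-appropriate, always re-readable */
  }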
Faster thr_alarm()
Added 'Opened_files' status variable to track calls to my_open()
Don't give warnings when running mysql_install_db
Added option --source-install to mysql_install_db
I had to do the following renames, as the polymorphism used didn't work
with the Forte compiler on 64-bit systems:
index_read() -> index_read_map()
index_read_idx() -> index_read_idx_map()
index_read_last() -> index_read_last_map()
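The underlying C++ issue is presumably the classic one where a set of
overloaded virtual functions trips up overload resolution across a class
hierarchy (or a buggy compiler); distinct names sidestep it entirely. A
generic illustration, not the actual handler code:

  struct handler
  {
    virtual int index_read(int key)            { return key; }
    virtual int index_read(int key, int flags) { return key + flags; }
    virtual ~handler() {}
  };

  struct ha_example : handler
  {
    /* Overriding only one overload hides the other in this scope. */
    int index_read(int key) { return key * 2; }
  };

  int main()
  {
    ha_example h;
    /* h.index_read(1, 2);  would not compile: hidden by the override */
    return h.index_read(1);
  }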
Additional changes for bug#29903
- Changed to do the embedded build part as a normal build, when
WITH_EMBEDDED_SERVER is set.
- Allow both normal and debug builds with embedded.
- Build the static embedded library by pointing out all the sources and
compiling them all, i.e. not building libraries from libraries, which is
not portable.
- Let embedded use generated files from the "sql" directory; added
dependencies to make sure they are built before embedded.
- Mark the library "dbug" in TARGET_LINK_LIBRARIES() with "debug", so it
is only linked in when the debug target is used.
- Removed the change of target name to "mysqld${MYSQLD_EXE_SUFFIX}", as
others can't depend on it, since it is not defined at configure time.
Instead, set the output file name.
- Created a workaround for a bug in CMake 2.4.6 and output names, to
set the "mysqld<suffix>.pdb" name to the same base name.
- Set the correct manifest "name" (patch by iggy)
Bug#25422 (Hang with log tables)
Bug#17876 (Truncating mysql.slow_log in a SP after using cursor locks the
thread)
Bug#23044 (Warnings on flush of a log table)
Bug#29129 (Resetting general_log while the GLOBAL READ LOCK is set causes
a deadlock)
Prior to this fix, the server would hang when performing concurrent
ALTER TABLE or TRUNCATE TABLE statements against the LOG TABLES,
which are mysql.general_log and mysql.slow_log.
The root cause traces to the following code:
in sql_base.cc, open_table():

  if (table->in_use != thd)
  {
    /* wait_for_condition will unlock LOCK_open for us */
    wait_for_condition(thd, &LOCK_open, &COND_refresh);
  }
The problem with this code is that the current implementation of the
LOGGER creates 'fake' THD objects, like
- Log_to_csv_event_handler::general_log_thd
- Log_to_csv_event_handler::slow_log_thd
which are not associated with a real thread running in the server,
so waiting for these non-existent threads to release table locks
causes the deadlock.
In general, the design of Log_to_csv_event_handler does not fit into the
general architecture of the server, so the concept of general_log_thd
and slow_log_thd has to be abandoned:
- this implementation does not work with table locking
- it will not work with commands like SHOW PROCESSLIST
- having the log tables always opened does not integrate well with DDL
operations / FLUSH TABLES / SET GLOBAL READ_ONLY
With this patch, the fundamental design of the LOGGER has been changed to:
- always open and close a log table when writing a log
- completely remove the usage of fake THD objects
- clarify how locking of log tables is implemented in general.
See WL#3984 for details related to the new locking design.
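A schematic of the new per-write life cycle (greatly simplified; the
real code goes through the table cache and the locking rules described
in WL#3984, and the function name is made up for illustration):

  #include <cstdio>

  /* Each log write opens the log table, writes one row and closes it
     again, instead of keeping a permanently open table bound to a
     fake THD. */
  static void log_one_event(const char *msg)
  {
    FILE *log= fopen("general_log.CSV", "a");  /* open ... */
    if (log == nullptr)
      return;                                  /* logging must not block */
    fprintf(log, "%s\n", msg);                 /* ... write one row ... */
    fclose(log);                               /* ... and close again */
  }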
Additional changes (misc bugs exposed and fixed):
1)
mysqldump would ignore some tables in dump_all_tables_in_db(),
but forgot to ignore the same tables in dump_all_views_in_db().
2)
mysqldump would also issue an empty "LOCK TABLE" command when all the tables
to lock are to be ignored (numrows == 0), instead of not issuing the query.
3)
Internal error handlers could intercept errors but not warnings
(see sql_error.cc).
4)
Implementing a nested call to open tables, for the performance schema tables,
exposed an existing bug in remove_table_from_cache(), which would perform:
  in_use->some_tables_deleted=1;
against another thread, without any consideration of thread locking.
This call inside remove_table_from_cache() was not required anyway,
since calling mysql_lock_abort() takes care of aborting -- cleanly -- threads
that might hold a lock on a table.
This line (in_use->some_tables_deleted=1) has been removed.
Problem: the data file changes made during delete/update are not visible
to other threads, as the file is reopened, so reading data
with old descriptors might miss the changes.
Fix: reopen the data file before reading if it was reopened during
delete/update, to ensure no changes are missed.
Note: there's no simple test case.
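The usual shape of such a fix is a shared version counter that is bumped
whenever the file is swapped, letting readers detect a stale descriptor;
a sketch (the member names are assumptions):

  #include <fcntl.h>
  #include <unistd.h>

  struct Share { const char *path; int data_file_version; };

  /* Per-handler state: our descriptor and the version it belongs to. */
  static int local_fd= -1;
  static int local_data_file_version= 0;

  static int ensure_current_data_file(Share *share)
  {
    if (local_data_file_version != share->data_file_version)
    {
      /* The file was replaced by a delete/update in another handler:
         reopen it so we see the new contents. */
      if (local_fd >= 0)
        close(local_fd);
      local_fd= open(share->path, O_RDONLY);
      local_data_file_version= share->data_file_version;
    }
    return local_fd < 0;   /* non-zero on failure */
  }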
Problem: we don't take into account the length of the data written
to the temporary data file during an update on a CSV table.
Fix: properly calculate the data file length during the update.
Problem: we don't adjust share->rows_recorded and local_saved_data_file_length
when deleting rows from a CSV table, so a following table check may fail.
Fix: properly adjust those values.
leads to table corruption
A new Field::store() method is implemented to explicitly set
thd->count_cuted_fields before storing a value, instead of (incorrectly)
setting it in the CSV storage engine.
The thread row counter is now properly incremented during check and repair
in the CSV engine.
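A sketch of the save/set/restore pattern such a store() wrapper follows
(the types are simplified stand-ins and the exact signature is an
assumption):

  enum enum_check_fields { CHECK_FIELD_IGNORE, CHECK_FIELD_WARN };
  struct THD { enum_check_fields count_cuted_fields; };

  struct Field
  {
    THD *thd;
    virtual int store(const char *to, unsigned length) = 0;
    virtual ~Field() {}

    /* New entry point: set the checking mode for the duration of one
       store, then restore it, so callers like the CSV engine no longer
       poke thd->count_cuted_fields themselves. */
    int store(const char *to, unsigned length,
              enum_check_fields check_level)
    {
      enum_check_fields old= thd->count_cuted_fields;
      thd->count_cuted_fields= check_level;
      int res= store(to, length);
      thd->count_cuted_fields= old;
      return res;
    }
  };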