Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
I ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes.
Problem:
When creating a temporary field for a temporary table in create_tmp_field_from_field(), the resulting field is created as an exact copy of the original one (in Field::new_field()). However, Field_enum and Field_set contain a pointer (typelib) to memory allocated in the parent table's MEM_ROOT, which under some circumstances may be deallocated by the time the temporary table is used.
Solution:
Override the new_field() method for Field_enum and Field_set and create a separate copy of the typelib structure in there.
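A minimal, self-contained sketch of the idea follows (the real fix overrides Field_enum::new_field()/Field_set::new_field(); the struct and helper names below are illustrative stand-ins, not the server's):

    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for TYPELIB: a list of value names owned by some table. */
    struct typelib_like
    {
      unsigned count;
      char **names;
    };

    /* Stand-in for Field_enum: it holds a pointer into its parent's memory. */
    struct enum_field_like
    {
      struct typelib_like *typelib;
    };

    /* Deep-copy the typelib so the clone no longer depends on the parent
       table's allocation lifetime. */
    static struct typelib_like *deep_copy_typelib(const struct typelib_like *src)
    {
      unsigned i;
      struct typelib_like *dst = (struct typelib_like *) malloc(sizeof(*dst));
      if (dst == NULL)
        return NULL;
      dst->count = src->count;
      dst->names = (char **) malloc(src->count * sizeof(char *));
      if (dst->names == NULL)
      {
        free(dst);
        return NULL;
      }
      for (i = 0; i < src->count; i++)
        dst->names[i] = strdup(src->names[i]);
      return dst;
    }

    /* Old behaviour: copy->typelib = orig->typelib (a shared pointer).
       Fixed behaviour: give the copy its own typelib. */
    static struct enum_field_like *clone_enum_field(const struct enum_field_like *orig)
    {
      struct enum_field_like *copy =
        (struct enum_field_like *) malloc(sizeof(*copy));
      if (copy != NULL)
        copy->typelib = deep_copy_typelib(orig->typelib);
      return copy;
    }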
- Use the same precision (milliseconds) for all time functions
used when calculating the time for pthread_cond_timedwait
- Use 'GetSystemTimeAsFileTime' for both the start and current time
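A rough sketch of the approach in the two items above, assuming the Windows pthread_cond_timedwait emulation keeps its deadline as a 64-bit FILETIME value (helper names are illustrative):

    #include <windows.h>

    typedef union { FILETIME ft; unsigned __int64 i64; } ft64;

    /* Deadline and "now" both come from GetSystemTimeAsFileTime(), so they
       share the same clock, epoch (1601-01-01) and millisecond precision. */
    static void set_deadline(ft64 *abstime, DWORD timeout_ms)
    {
      GetSystemTimeAsFileTime(&abstime->ft);
      abstime->i64 += (unsigned __int64) timeout_ms * 10000;  /* ms -> 100 ns */
    }

    static DWORD remaining_ms(const ft64 *abstime)
    {
      ft64 now;
      GetSystemTimeAsFileTime(&now.ft);
      if (now.i64 >= abstime->i64)
        return 0;                                 /* deadline already passed */
      return (DWORD) ((abstime->i64 - now.i64) / 10000);
    }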
The problem was traced to the current NPTL pthread_exit()
implementation. Race conditions in this code can lead to a segmentation
fault. However, this can happen only in a race between the first thread
calling pthread_exit() and other threads.
The workaround implemented in this patch spawns a dummy thread, which
exits immediately, during thread library initialization. This excludes
segmentation violations when further threads exit.
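A sketch of the workaround under the usual pthreads API (the real change lives in the thread library initialization code; names here are illustrative):

    #include <pthread.h>

    /* The dummy thread does nothing but terminate through pthread_exit(),
       forcing NPTL's pthread_exit() one-time setup to run early, before any
       racing threads exist. */
    static void *nptl_warmup_thread(void *arg)
    {
      pthread_exit(arg);
      return arg;                     /* not reached */
    }

    static void warm_up_pthread_exit(void)
    {
      pthread_t dummy;
      if (pthread_create(&dummy, NULL, nptl_warmup_thread, NULL) == 0)
        pthread_join(dummy, NULL);    /* called once, during library init */
    }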
MYSQL_VERSION_ID is tested before it has been defined. This leads to
a warning when compiling with -Wundef, and it will also break the
internal logic of mysql_com.h as soon as MYSQL_VERSION_ID exceeds
50000.
The fix entailed a simple re-ordering of the included files in mysql.h.
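Roughly, the situation looks like this (mysql_version.h is the header that defines MYSQL_VERSION_ID; the exact include order in mysql.h is what the fix adjusts):

    /* Inside mysql_com.h, before the fix: if MYSQL_VERSION_ID has not been
       defined yet, -Wundef warns here and the preprocessor treats the macro
       as 0, so the pre-5.0 branch is always taken. */
    #if MYSQL_VERSION_ID >= 50000
    /* 5.0-and-later declarations */
    #endif

    /* The fix: mysql.h includes the version header first, so the test above
       sees a real value. */
    #include <mysql_version.h>   /* defines MYSQL_VERSION_ID */
    #include <mysql_com.h>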
This problem could happen when SHOW TABLE STATUS gets an outdated copy
of the TABLE object from the table cache.
MyISAM updates its state info when the external_lock() method is called.
However, I_S does not lock a table, to avoid deadlocks. If I_S opens a table
that is in the table cache, it will likely get an outdated copy of the state info.
In this case the shared state copy is more recent than the local copy. This problem
is fixed by correctly restoring the MyISAM state info pointer back to its original
value, that is, to the shared state.
Affects MyISAM only. No good deterministic test case for this fix.
If --srcdir and --windows are given, check whether the error message file
is in the source or build tree (bug#24557)
Makefile.am:
Cleaned up "ali_check" target, to satisfy "distcleancheck" (bug#24557)
mysql_install_db.sh:
Added --srcdir=DIR option, used from top Makefile.am in dist-hook
target, to find "fill_help_tables.sql" in VPATH build (bug#24557)
Makefile.am:
Work around a problem with "distcleancheck": "sql_yacc.cc" might be in both
the source and build tree.
Call "mysql_install_db" with new option --srcdir, to enable the script
to find all that is needed, if source and build directory is not the same
(bug#24557)
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added a missing argument to store(longlong), which caused the wrong store method to be called.
Don't return from my_thread_global_end() until all threads have called my_thread_end()
Bug#24387: Valgrind: my_thread_init (handle_sl sql, handle_one_conn, handle_slave_io)
- Put 'my_getpagesize' in its own .c file
- Map the call 'my_getpagesize' directly to 'getpagesize' if it exists
- Add a default implementation of 'my_getpagesize' to be used if no platform-specific
function exists
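A condensed sketch of that layering; HAVE_GETPAGESIZE stands for the usual autoconf-style guard, and the fallback value is an assumption:

    #ifdef HAVE_GETPAGESIZE
    #include <unistd.h>
    /* Platform call exists: map my_getpagesize() straight onto it. */
    #define my_getpagesize() getpagesize()
    #else
    /* Default implementation, used only where no platform-specific
       function exists. */
    static int my_getpagesize(void)
    {
      return 8192;      /* conservative assumed default */
    }
    #endif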
(4.1 version, with post-review fixes)
The fix for another bug (6439) limited FROM_UNIXTIME() to
TIMESTAMP_MAX_VALUE, which is 2145916799 or 2037-12-31 23:59:59 GMT;
however, the unix timestamp in general is not considered to be limited
by this value. All dates up to power(2,31)-1 are valid.
This patch extends the allowed TIMESTAMP range so that the max
TIMESTAMP value is power(2,31)-1. It also corrects the
FROM_UNIXTIME() and UNIX_TIMESTAMP() functions, so that the
max allowed UNIX_TIMESTAMP() value is power(2,31)-1. FROM_UNIXTIME()
is fixed accordingly to allow conversion of dates up to
2038-01-19 03:14:07 UTC. The patch also fixes the CONVERT_TZ()
function to allow the extended range of dates.
The main problem solved in the patch is possible overflow
of the variables used in the broken-down time to time_t
conversion (required for UNIX_TIMESTAMP).
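For illustration, the kind of overflow being guarded against and the new upper bound (power(2,31)-1) look roughly like this; names are illustrative:

    #define TIMESTAMP_MAX_SECONDS 2147483647LL      /* power(2,31)-1 */

    /* Doing the arithmetic in 64 bits avoids the overflow that a 32-bit
       "days * 86400" suffers for dates near the end of the range. */
    static long long broken_down_to_seconds(long days_since_epoch,
                                            int hour, int min, int sec)
    {
      return (long long) days_since_epoch * 86400LL
             + hour * 3600L + min * 60L + sec;
    }

    static int timestamp_in_range(long long seconds)
    {
      return seconds >= 0 && seconds <= TIMESTAMP_MAX_SECONDS;
    }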
Use lazy initialization for the Query_tables_list::sroutines hash.
This should significantly decrease the amount of memory consumed
by stored routines, as we will no longer allocate the chunk of memory
required for this HASH for each statement in a routine.
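A self-contained illustration of the lazy-initialization pattern (the real code uses the mysys HASH; the array-backed stand-in below is only for illustration):

    #include <stdlib.h>
    #include <string.h>

    struct routine_set
    {
      char **names;        /* created on first use only */
      size_t count;
      size_t capacity;
    };

    /* Nothing is allocated until a routine is actually recorded, so
       statements that use no stored routines pay no memory cost. */
    static int routine_set_add(struct routine_set *s, const char *name)
    {
      if (s->names == NULL)
      {
        s->capacity = 16;
        s->count = 0;
        s->names = (char **) malloc(s->capacity * sizeof(char *));
        if (s->names == NULL)
          return 1;                       /* out of memory */
      }
      if (s->count == s->capacity)
      {
        char **bigger =
          (char **) realloc(s->names, 2 * s->capacity * sizeof(char *));
        if (bigger == NULL)
          return 1;
        s->names = bigger;
        s->capacity *= 2;
      }
      s->names[s->count] = strdup(name);
      return s->names[s->count++] == NULL;
    }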
Problem: SHOW CREATE TABLE printed garbage in the table
name for tables having TURKISH I
(i.e. LATIN CAPITAL LETTER I WITH DOT ABOVE)
when lower_case_table_names=1.
Reason: In some cases during lower/upper conversion in utf8,
the result string can be shorter than the original string
(including the above letter). The old implementation of caseup_str()
and casedn_str() didn't handle the result length properly,
assuming that the length cannot change.
This fix changes the result type of cs->cset->casedn_str()
and cs->cset->caseup_str() from void to uint, so they return
the result length and put the '\0' terminator in the
proper place.
Also, my_caseup_str_utf8() and my_casedn_str_utf8() were
rewritten not to use strlen(), for performance reasons.
This was done with the help of two new functions - my_utf8_uni_no_range()
and my_uni_utf8_no_range() - for null-terminated strings.
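The shape of the interface change, sketched with an ASCII-only stand-in (the real functions are charset-specific and take a CHARSET_INFO argument):

    #include <stddef.h>

    /* Returns the length of the converted string and writes the '\0' at the
       new, possibly shorter, end instead of assuming the length is unchanged. */
    static size_t casedn_str_sketch(char *str)
    {
      char *src = str, *dst = str;
      for (; *src; src++)
      {
        /* In the utf8 code a multi-byte character may shrink at this point;
           this stand-in just lower-cases ASCII in place. */
        *dst++ = (*src >= 'A' && *src <= 'Z') ? (char)(*src + ('a' - 'A')) : *src;
      }
      *dst = '\0';
      return (size_t) (dst - str);
    }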
MySQL was setting the flag HA_END_SPACE_KEYS for all keys that reference
text or varchar columns with a collation different from binary.
This was done to handle correctly the situation where a lookup on such a key
may return more than one row, because of the presence of many rows that differ
only by the amount of trailing space in the table's string column.
Inserting such values, however, appears to violate the unique checks on
INSERT/UPDATE. Thus that flag must not be set, as it will prevent the optimizer
from choosing a faster access method.
This fix removes the setting of the HA_END_SPACE_KEYS flag.
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair uses one thread per
index. The first of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. The other
threads, however, did not know the new record positions and put the
positions from the old data file into the index.
In the new design there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that the parallel repair of compressed
tables used a common bit_buff and rec_buff. I changed it so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
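A minimal sketch of the buffer change for compressed-table repair: each repair thread owns its own bit/record buffers instead of sharing one pair (names and sizes are illustrative):

    #include <stdlib.h>

    struct repair_thread_ctx
    {
      unsigned char *bit_buff;   /* per-thread bit buffer    */
      unsigned char *rec_buff;   /* per-thread record buffer */
    };

    static int init_repair_thread_ctx(struct repair_thread_ctx *ctx,
                                      size_t buff_size)
    {
      ctx->bit_buff = (unsigned char *) malloc(buff_size);
      ctx->rec_buff = (unsigned char *) malloc(buff_size);
      if (ctx->bit_buff == NULL || ctx->rec_buff == NULL)
      {
        free(ctx->bit_buff);
        free(ctx->rec_buff);
        return 1;                /* out of memory */
      }
      return 0;
    }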
- bug #11655 "Wrong time is returning from nested selects - maximum time exists"
- input and output TIME values were not validated properly in several conversion functions
- bug #20927 "sec_to_time treats big unsigned as signed"
- integer overflows were not checked in several functions. As a result, input values like 2^32 or 3600*2^32 were treated as 0
- BIGINT UNSIGNED values were treated as SIGNED in several functions
- in cases where both input string truncation and an out-of-range TIME value occur, only the 'truncated incorrect time value' warning was produced
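For the overflow and sign issues, the checks amount to something like the following sketch; the TIME limit 838:59:59 equals 3020399 seconds, and names are illustrative:

    #define MAX_TIME_SECONDS 3020399ULL    /* 838*3600 + 59*60 + 59 */

    /* Treat the input as unsigned 64-bit and clamp it, instead of letting a
       value such as 2^32 or 3600*2^32 wrap around to 0. */
    static unsigned long long clamp_time_seconds(unsigned long long seconds,
                                                 int *out_of_range)
    {
      *out_of_range = 0;
      if (seconds > MAX_TIME_SECONDS)
      {
        *out_of_range = 1;       /* caller should emit the range warning */
        return MAX_TIME_SECONDS;
      }
      return seconds;
    }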
(back-port to 4.0)
Socket timeouts in the client library were used only on Windows.
Additionally, in 4.0 write operations erroneously set the read timeout.
The solution is to use socket timeouts in the client library on all
systems where they are supported, and to differentiate between read
and write timeouts.
No test case is provided because it is impossible to simulate network
failure in current test suite.
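A rough POSIX sketch of separate read and write timeouts (the client library goes through its vio layer; this just shows the socket options involved):

    #include <sys/socket.h>
    #include <sys/time.h>

    static int set_socket_timeouts(int fd, unsigned read_sec, unsigned write_sec)
    {
      struct timeval rt, wt;
      rt.tv_sec = (time_t) read_sec;   rt.tv_usec = 0;   /* read timeout  */
      wt.tv_sec = (time_t) write_sec;  wt.tv_usec = 0;   /* write timeout */
      if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &rt, sizeof(rt)) != 0)
        return -1;
      return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &wt, sizeof(wt));
    }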
InterfaceError on connect
Removed the bool flag from the st_mysql_options struct, since it adds
another word to the struct's size and shifts member memory locations
down, both of which break binary-interface compatibility.
Instead, use a flag, 2**30, in the client_options bit-field to indicate
that the client should check the SSL certificate of the server.
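Sketch of the mechanism: the request travels as a bit in the existing client flag word rather than as a new struct member, so the ABI is untouched. The bit value comes from the description above; the macro name shows how such a flag would typically be spelled, not a guaranteed identifier:

    #define CLIENT_SSL_VERIFY_SERVER_CERT (1UL << 30)   /* 2**30, as above */

    /* Turn server-certificate verification on or off without growing
       st_mysql_options. */
    static void set_verify_server_cert(unsigned long *client_flag, int on)
    {
      if (on)
        *client_flag |= CLIENT_SSL_VERIFY_SERVER_CERT;
      else
        *client_flag &= ~CLIENT_SSL_VERIFY_SERVER_CERT;
    }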
These variables must be defined for Netware, or it fails
to recognize hard paths starting from the device.
(Backport of: 2006/08/16 19:30:46+03:00 jani@ua141d10.elisa.omakaista.fi )
There were two problems: RESET QUERY CACHE took a long time to complete
and other threads were blocked during this time.
The patch does three things:
1. Fixes a bug with improper use of the test-lock-test-again technique
(AKA Double-Checked Locking), which is applicable here only in a few places.
2. Somewhat improves the performance of RESET QUERY CACHE:
do my_hash_reset() instead of deleting elements one by one. Note
however that a slowdown also happens when inserting into the sorted
list of free blocks; this should be rewritten to use a balanced tree.
3. Makes RESET QUERY CACHE non-blocking.
The patch adjusts the locking protocol of the query cache in the
following way: it introduces a flag flush_in_progress, which is
set when Query_cache::flush_cache() is in progress. This call
sets the flag on enter, and then releases the lock. Every other
call is able to acquire the lock, but does nothing if
flush_in_progress is set (as if the query cache is disabled).
The only exception is concurrent calls to
Query_cache::flush_cache(), which are blocked until the flush is
over. When leaving Query_cache::flush_cache(), the lock is
acquired and the flag is reset, and one thread waiting on
Query_cache::flush_cache() (if any) is notified that it may
proceed.
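In pthreads terms, the protocol described above looks roughly like this (the real code uses the query cache's own structure lock; names are illustrative):

    #include <pthread.h>

    static pthread_mutex_t qc_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qc_flush_done = PTHREAD_COND_INITIALIZER;
    static int flush_in_progress = 0;

    void flush_cache(void)
    {
      pthread_mutex_lock(&qc_lock);
      while (flush_in_progress)                    /* another flush is running */
        pthread_cond_wait(&qc_flush_done, &qc_lock);
      flush_in_progress = 1;
      pthread_mutex_unlock(&qc_lock);              /* others may run meanwhile */

      /* ... free all cache blocks here, without holding the lock ... */

      pthread_mutex_lock(&qc_lock);
      flush_in_progress = 0;
      pthread_cond_broadcast(&qc_flush_done);      /* wake queued flushes */
      pthread_mutex_unlock(&qc_lock);
    }

    int try_use_cache(void)
    {
      pthread_mutex_lock(&qc_lock);
      if (flush_in_progress)                       /* behave as if disabled */
      {
        pthread_mutex_unlock(&qc_lock);
        return 0;
      }
      /* ... normal query cache work under the lock ... */
      pthread_mutex_unlock(&qc_lock);
      return 1;
    }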
Fix when __attribute__() is stubbed out, add ATTRIBUTE_FORMAT() for specifying
__attribute__((format(...))) safely, make more use of the format attribute,
and fix some of the warnings that this turns up (plus a bonus unrelated one).
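A plausible shape for such a macro (the exact compiler test used is an assumption):

    #if defined(__GNUC__)
    #define ATTRIBUTE_FORMAT(style, fmt_idx, first_arg_idx) \
      __attribute__((format(style, fmt_idx, first_arg_idx)))
    #else
    #define ATTRIBUTE_FORMAT(style, fmt_idx, first_arg_idx)   /* no-op */
    #endif

    /* Example: the compiler can now check printf-style calls to this function. */
    void log_message(const char *format, ...) ATTRIBUTE_FORMAT(printf, 1, 2);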