Problem: if a partial unique key is followed by a non-partial one, we declare
the second one as a primary key.
Fix: sort non-partial unique keys before partial ones.
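A minimal sketch of the sorting rule, using a hypothetical key struct
(has_partial_part is an illustrative flag, not the real field name):

    #include <stdbool.h>

    struct key_info
    {
      bool has_partial_part;  /* some key part is only a column prefix */
      /* ... other key metadata ... */
    };

    /* Applied among unique keys: non-partial ones sort first, so the
       key picked from the front of the list as the primary-key
       candidate covers full column values rather than prefixes. */
    static int key_sort_cmp(const struct key_info *a,
                            const struct key_info *b)
    {
      if (a->has_partial_part != b->has_partial_part)
        return a->has_partial_part ? 1 : -1;
      return 0;  /* otherwise keep the original relative order */
    }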
ucs2 doesn't provide the ctype array required by fulltext. The crash
happens because fulltext attempts to use the uninitialized ctype
array.
Fixed by converting ucs2 fields to a compatible utf8 analogue.
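A conceptual sketch of the substitution, not the actual parser code; it
assumes MySQL's CHARSET_INFO, where ucs2 collations leave the ctype array
NULL, and uses utf8_general_ci purely as an example target:

    #include "m_ctype.h"   /* CHARSET_INFO */
    #include "my_sys.h"    /* get_charset_by_name(), MYF() */

    /* Fulltext parsing needs cs->ctype to classify characters; ucs2
       collations have none, so fall back to a utf8 analogue. */
    static CHARSET_INFO *ft_usable_charset(CHARSET_INFO *cs)
    {
      if (cs->ctype == NULL)                 /* e.g. ucs2_general_ci */
        return get_charset_by_name("utf8_general_ci", MYF(0));
      return cs;
    }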
When replicating an update pair (before image, after image) under row-based
replication, and the before image is not found on the slave, the after image
was not discarded, and was hence read as a before image for the next row.
Eventually, this led to an after image being read outside the block of rows
in the event, causing an assertion to fire.
This patch fixes this by reading the after image even when the row
was not found on the slave, adds some extra debug assertions to catch future
errors earlier, and also adds a few non-debug checks to prevent reading
outside the block of the event.
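A simplified sketch of the corrected apply loop; the helpers and types are
stand-ins for the real replication code, not its actual API:

    #include <assert.h>

    typedef unsigned char uchar;
    struct TABLE;                       /* opaque, illustrative */

    /* Hypothetical helpers standing in for the event-parsing code. */
    const uchar *read_row(const uchar **ptr, const uchar *end);
    int  find_and_delete_row(struct TABLE *t, const uchar *before);
    void write_row(struct TABLE *t, const uchar *after);

    void apply_update_rows(struct TABLE *table,
                           const uchar *row_ptr, const uchar *event_end)
    {
      while (row_ptr < event_end)
      {
        const uchar *before= read_row(&row_ptr, event_end);
        int found= find_and_delete_row(table, before);
        /* The fix: consume the after image unconditionally, so a missed
           lookup can no longer shift the parse of the next row. */
        const uchar *after= read_row(&row_ptr, event_end);
        if (found == 0)
          write_row(table, after);
        assert(row_ptr <= event_end);   /* never read past the row block */
      }
    }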
IA64 CPUs / Intel's ICC compiler
The bug is a combination of two problems:
1. IA64/ICC MySQL binaries use glibc's qsort(), not the one in mysys.
2. The order relation implemented by join_tab_cmp() is not transitive,
i.e. it is possible to choose a, b and c such that (a < b) && (b < c)
but (c < a). This implies that the result of a sort using the relation
implemented by join_tab_cmp() depends on the order in which
elements are compared, i.e. the result is implementation-specific. Since
choose_plan() uses qsort() to pre-sort the
join tables using join_tab_cmp() as a compare function, the results of
the sorting may vary depending on the qsort() implementation.
It is neither possible nor important to implement a better ordering
algorithm in join_tab_cmp(). Therefore the only way to fix this is to
force our own qsort() to be used by renaming it to my_qsort(), so that we
do not depend on the linker to decide which implementation gets linked.
This patch also "fixes" bug #20530: qsort redefinition violates the
standard.
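A self-contained illustration of why a non-transitive comparator makes the
result implementation-specific (a cyclic rock-paper-scissors "order", not
join_tab_cmp() itself); passing such a comparator violates qsort()'s
contract, which is exactly the bug:

    #include <stdio.h>
    #include <stdlib.h>

    /* Cyclic relation: 1 < 0, 2 < 1, 0 < 2, so (a < b) && (b < c)
       does not imply (a < c). */
    static int cyclic_cmp(const void *pa, const void *pb)
    {
      int a= *(const int *) pa, b= *(const int *) pb;
      if (a == b)
        return 0;
      return ((a - b + 3) % 3 == 1) ? -1 : 1;
    }

    int main(void)
    {
      int v[]= { 0, 1, 2, 0, 1, 2 };
      qsort(v, 6, sizeof(int), cyclic_cmp);
      for (int i= 0; i < 6; i++)  /* order depends on the qsort() used */
        printf("%d ", v[i]);
      putchar('\n');
      return 0;
    }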
Problem:
Improper compile-time flags on AIX prevented use of files > 2 GB. This
resulted in Max_data_length being truncated to 2 GB by MyISAM code.
Solution:
Reverted the large-file changes from the fix for bug 10776. We need to define
_LARGE_FILES on AIX to have support for files > 2 GB.
Since _LARGE_FILE_API is incompatible with _LARGE_FILES and may be
automatically defined by including standards.h, we also need a
workaround to avoid this conflict.
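A minimal sketch of the kind of preprocessor workaround meant here, assuming
an AIX build where standards.h may already have defined _LARGE_FILE_API (the
guard actually used in the fix may differ):

    /* On AIX, _LARGE_FILES (64-bit off_t everywhere) and _LARGE_FILE_API
       (the explicit *64() interfaces) must not both be defined. */
    #if defined(_AIX) && defined(_LARGE_FILE_API)
    #undef _LARGE_FILE_API
    #endif
    #ifndef _LARGE_FILES
    #define _LARGE_FILES 1
    #endif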
- Reserved namespace and a place in the frm for the TABLE_CHECKSUM and PAGE_CHECKSUM create options
- Added syncing of directory when creating .frm files
- Portability fixes
- Added a missing cast that could cause bugs
- Code cleanups
- Made some bit functions inline
- Moved things out of myisam.h to my_handler.h to make them more accessible
- Renamed some myisam variables and defines to make them more globally usable (as they are used outside of MyISAM)
- Fixed bugs in error conditions
- Use compile-time asserts instead of run-time checks (see the sketch below)
- Fixed indentation
HA_EXTRA_PREPARE_FOR_DELETE -> HA_EXTRA_PREPARE_FOR_DROP, as the old name was wrong
(added a define for the old value to ensure we don't break any old code)
Added HA_EXTRA_PREPARE_FOR_RENAME as a signal for rename (before, we used the DROP signal, which was wrong)
- Initialize error messages early to get better errors when mysqld or an engine fails to start
- Fixed a Windows bug where query_performance_frequency was not initialized if the registry code failed
- thread_stack -> my_thread_stack_size
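The compile-time assert mentioned above is typically built from a macro like
the following sketch (the spelling of MySQL's own macro may differ slightly):

    /* Fails the build (negative array size) when the condition is false,
       instead of failing an assert() at run time. */
    #define compile_time_assert(X)                          \
      do {                                                  \
        typedef char cta_failed[(X) ? 1 : -1];              \
      } while (0)

    /* Example: catch a wrong type width at compile time. */
    static void size_sanity_checks(void)
    {
      compile_time_assert(sizeof(long long) == 8);
    }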
"Disabled plugin is provoking Valgrind error"
If there are any auto-allocated string plug-in options, memory is
allocated during the call to handle_options(). We must free this
memory if we are not installing the plug-in.
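A minimal sketch of the corresponding cleanup, assuming opts is the plug-in's
my_option array that was passed to handle_options(); my_cleanup_options() is
the my_getopt routine that frees auto-allocated (GET_STR_ALLOC) option values:

    #include <my_getopt.h>

    /* When the plug-in is not installed after option parsing, release
       the strings handle_options() allocated for its options. */
    static void abandon_plugin_options(const struct my_option *opts)
    {
      my_cleanup_options(opts);
    }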
It's not possible to use WaitForSingleObject to wait
on a CRITICAL_SECTION; instead, use the TryEnterCriticalSection function:
- if the "mutex" was already taken => return EBUSY
- if the "mutex" was acquired => return 0
The parser uses ulonglong to store the LIMIT number. This number
is then stored into a variable of type ha_rows. ha_rows is either
4 or 8 bytes, depending on the BIG_TABLES define from config.h.
So an overflow may occur (and LIMIT becomes zero) while storing an
ulonglong value in ha_rows.
Fixed by:
1. Using the maximum possible value for ha_rows on overflow
2. Defining BIG_TABLES for the Windows builds (to match the others)
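A minimal sketch of the clamping in point 1, modeling ha_rows as a 4-byte
type (as without BIG_TABLES) so the overflow is visible; HA_POS_ERROR mirrors
MySQL's maximum ha_rows value, and the function name is illustrative:

    typedef unsigned long long ulonglong;
    typedef unsigned int ha_rows;          /* 4 bytes without BIG_TABLES */
    #define HA_POS_ERROR (~ (ha_rows) 0)   /* maximum ha_rows value */

    /* Saturate instead of truncating: otherwise LIMIT 4294967296 would
       silently wrap to LIMIT 0 on a 4-byte ha_rows build. */
    static ha_rows limit_from_ulonglong(ulonglong n)
    {
      return (n > (ulonglong) HA_POS_ERROR) ? HA_POS_ERROR : (ha_rows) n;
    }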
Make sure that if the builder configured a non-standard (!= 3306)
default TCP port, that value actually gets used throughout. If they
didn't configure a value, assume "use a sensible default", which
will be read from /etc/services or, failing that, from the factory
default. That makes the order of preference:
- command-line option
- my.cnf, where applicable
- $MYSQL_TCP_PORT environment variable
- /etc/services (unless configured --with-tcp-port)
- default port (--with-tcp-port=... or factory default)
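A minimal sketch of the tail of that chain (the command-line and my.cnf
layers are resolved by the option parser before this runs; the names here
are illustrative):

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <stdlib.h>

    #define FACTORY_DEFAULT_PORT 3306   /* or the --with-tcp-port value */

    static unsigned int default_tcp_port(int port_was_configured)
    {
      const char *env= getenv("MYSQL_TCP_PORT");
      if (env)
        return (unsigned int) atoi(env);
      if (!port_was_configured)         /* skip if --with-tcp-port given */
      {
        struct servent *se= getservbyname("mysql", "tcp");
        if (se)
          return ntohs((unsigned short) se->s_port);
      }
      return FACTORY_DEFAULT_PORT;
    }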
The patch limits read_buffer_size and read_rnd_buffer_size to 2 GB on all platforms, for the following reasons:
- the I/O code in mysys, the code in mf_iocache.c, and some storage engines do not currently work with sizes > 2 GB for those buffers
- even if the above were fixed, the POSIX read() and write() calls on Windows are not 2GB-safe, so setting those buffers to sizes > 2 GB would not work correctly on 64-bit Windows.
c++config.h now has the following code:
// For example, <windows.h> is known to #define min and max as macros...
#undef min
#undef max
So, our defines in my_global.h are undefined when the <new> header
is included.
Fix: move the definitions of min()/max() to the end of my_global.h.
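A self-contained illustration of the include-order problem, with the macro
standing in for the my_global.h definitions:

    #include <cstdio>

    #define min(a, b) ((a) < (b) ? (a) : (b))  // defined "too early"
    #include <new>  // may pull in c++config.h, which does: #undef min

    int main()
    {
    #ifdef min
      std::printf("min survived\n");
    #else
      std::printf("min was #undef'd by a standard header\n");
    #endif
      return 0;
    }

Defining the macros after all system includes, i.e. at the very end of
my_global.h, sidesteps the conflict.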