without PK
Bug#31609 Not all RBR slave errors reported as errors
Bug#32468 delete rows event on a table with foreign key constraint fails
The first two bugs are idempotency issues.
First, no error code was reported under the conditions of the bug
description, even though the slave SQL thread halted.
Second, execution differed depending on whether or not the table had a
primary key.
Third, there was no way to instruct the slave whether to ignore an error
and skip to the following event or to halt.
Fourth, there are handler errors which can happen due to idempotent
applying of the binlog, but those were not listed among the "idempotent"
errors.
All the named issues are addressed.
For the third issue, there is a new global system variable, changeable at run
time, which controls the slave SQL thread's behaviour.
The new variable is designed to allow further extensions, mimicking the
sql_mode session/global variable; a minimal sketch of the idea follows below.
To address the fourth issue, the new Bug#32468 had to be fixed as well,
since it was standing in the way.
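A minimal sketch of how such a mode switch could gate the decision to skip or
halt; the names and error codes below are illustrative assumptions, not the
server's actual identifiers:

  // Illustrative sketch only: names and error codes are assumptions.
  enum enum_slave_exec_mode { SLAVE_EXEC_MODE_STRICT, SLAVE_EXEC_MODE_IDEMPOTENT };

  static enum_slave_exec_mode slave_exec_mode= SLAVE_EXEC_MODE_STRICT;

  // Handler errors that idempotent applying of the binlog may legitimately hit
  // (e.g. key not found, duplicate key); the numeric codes are placeholders.
  static bool is_idempotent_error(int handler_error)
  {
    return handler_error == 120 || handler_error == 121;
  }

  // The slave SQL thread either skips to the following event or halts.
  static bool ignore_row_event_error(int handler_error)
  {
    return slave_exec_mode == SLAVE_EXEC_MODE_IDEMPOTENT &&
           is_idempotent_error(handler_error);
  }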
command and reported to a client.
The fact that a timestamp field will be set to NOW on UPDATE wasn't shown
by the SHOW command or reported to a client through connectors. This led to
problems in the ODBC connector and might lead to user confusion.
A new field flag called ON_UPDATE_NOW_FLAG is added.
The constructors of Field_timestamp set it when a field should be set to NOW
on UPDATE.
The get_schema_column_record function now reports whether a timestamp field
will be set to NOW on UPDATE.
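A small sketch of how such a flag can be carried and reported; only the flag
name comes from the patch, the value and surrounding struct are assumptions:

  #include <cstdio>

  // ON_UPDATE_NOW_FLAG is the new field flag; its value here is assumed.
  static const unsigned int ON_UPDATE_NOW_FLAG= 8192;

  struct column_info
  {
    const char  *name;
    unsigned int flags;      // field flags as exposed to I_S and connectors
  };

  // What get_schema_column_record-style code can now report in EXTRA.
  static const char *extra_for(const column_info &col)
  {
    return (col.flags & ON_UPDATE_NOW_FLAG) ? "on update CURRENT_TIMESTAMP" : "";
  }

  int main()
  {
    column_info ts= { "updated_at", ON_UPDATE_NOW_FLAG };
    std::printf("%s: %s\n", ts.name, extra_for(ts));
    return 0;
  }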
A user could not override system-wide settings in their ~/.my.cnf,
because the DEFAULT_SYSCONFDIR was being searched last. Also, in
some configurations (especially when the --sysconfdir compile-time
option is set to /etc or /etc/mysql), the system-wide my.cnf file
was read multiple times, causing confusion and potential problems.
Rearrange default directories to conform to the manual and logic.
Move --sysconfdir=<path> (DEFAULT_SYSCONFDIR) from the last default
directory to the middle of the list. $HOME/.my.cnf should be last,
so the user is able to override the system-wide settings.
Change init_default_directories() to remove duplicates from the
list.
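A sketch of the intended order with duplicate removal; the directory set and
helper names are illustrative, not the actual init_default_directories():

  #include <algorithm>
  #include <string>
  #include <vector>

  // Append a directory unless it is already in the list, e.g. when
  // --sysconfdir points at /etc and would otherwise be searched twice.
  static void add_directory(std::vector<std::string> &dirs, const std::string &dir)
  {
    if (std::find(dirs.begin(), dirs.end(), dir) == dirs.end())
      dirs.push_back(dir);
  }

  static std::vector<std::string> default_directories(const char *sysconfdir,
                                                      const char *home)
  {
    std::vector<std::string> dirs;
    add_directory(dirs, "/etc/");
    if (sysconfdir && *sysconfdir)
      add_directory(dirs, std::string(sysconfdir) + "/");  // DEFAULT_SYSCONFDIR, now in the middle
    if (home && *home)
      add_directory(dirs, std::string(home) + "/");        // $HOME/.my.cnf searched last
    return dirs;
  }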
No functionality added or changed.
This is a pre-requisite for the fix for Bug#12713 Error in a stored
function called from a SELECT doesn't cause ROLLBACK of statement.
Address post-review comments.
When replicating an update pair (before image, after image) under row-based
replication, and the before image is not found on the slave, the after image
was not discarded and was hence read as the before image of the next row.
Eventually, this led to an after image being read outside the block of rows
in the event, causing an assertion to fire.
This patch fixes this by reading the after image also when the row
was not found on the slave, adds some extra debug assertions to catch future
errors earlier, and also adds a few non-debug checks to prevent reading
outside the block of the event.
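An illustrative sketch of the control flow; the cursor type and unpack_row()
below are stand-ins for the rows-event machinery, not the actual server code:

  // Stand-in for the read position inside a rows event.
  struct rows_event_cursor
  {
    const unsigned char *curr_row;   // current position in the rows block
    const unsigned char *rows_end;   // end of the rows block of the event
  };

  // Stand-in unpacker: a real one advances by the length of the row image.
  static void unpack_row(rows_event_cursor *cur)
  {
    cur->curr_row++;
  }

  static int apply_update_pair(rows_event_cursor *cur, bool before_image_found)
  {
    if (!before_image_found)
    {
      // Non-debug check: never read outside the block of the event.
      if (cur->curr_row >= cur->rows_end)
        return 1;
      // The after image must still be consumed; otherwise it would be
      // mistaken for the before image of the next row.
      unpack_row(cur);
      return 0;
    }
    // Normal path: unpack the after image and apply the update.
    unpack_row(cur);
    return 0;
  }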
(compiler issue ?)
Problem:
Improper compile-time flags on AIX prevented use of files > 2 GB. This
resulted in Max_data_length being truncated to 2 GB by MyISAM code.
Solution:
Reverted large-file changes from the fix for bug10776. We need to define
_LARGE_FILES on AIX to have support for files > 2 GB.
Since _LARGE_FILE_API is incompatible with _LARGE_FILES and may be
automatically defined by including standards.h, we also need a
workaround to avoid this conflict.
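A sketch of the kind of preprocessor workaround described; the exact placement
in the build headers is an assumption:

  /* Illustrative only: enable files > 2 GB on AIX and defuse the conflict. */
  #if defined(_AIX)
  # ifndef _LARGE_FILES
  #  define _LARGE_FILES 1          /* large-file support, files > 2 GB         */
  # endif
  # include <standards.h>           /* may automatically define _LARGE_FILE_API */
  # if defined(_LARGE_FILES) && defined(_LARGE_FILE_API)
  #  undef _LARGE_FILE_API          /* incompatible with _LARGE_FILES           */
  # endif
  #endif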
"Disabled plugin is provoking Valgrind error"
If there are any auto-alloced string plug-in options, memory is
allocated during the call to handle_options(). We must free this
memory if we are not installing the plug-in.
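A generic sketch of that cleanup; the struct and helper are hypothetical, and
real code would free through the same allocator that handle_options() used:

  #include <cstddef>
  #include <cstdlib>

  // Hypothetical view of an auto-alloced string plug-in option: option
  // parsing duplicates the argument into *value.
  struct plugin_string_option
  {
    char **value;
  };

  // Called when the plug-in is not going to be installed, so the strings
  // allocated during handle_options() are not leaked (the Valgrind report).
  static void free_plugin_string_options(plugin_string_option *opts,
                                         std::size_t count)
  {
    for (std::size_t i= 0; i < count; i++)
    {
      if (opts[i].value && *opts[i].value)
      {
        std::free(*opts[i].value);
        *opts[i].value= 0;
      }
    }
  }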
It's not possible to use WaitForSingleObject() to wait
on a CRITICAL_SECTION; instead, use the TryEnterCriticalSection() function.
- if the "mutex" was already taken => return EBUSY
- if the "mutex" was acquired => return 0
The parser uses ulonglong to store the LIMIT number. This number
is then stored into a variable of type ha_rows, which is either
4 or 8 bytes depending on the BIG_TABLES define from config.h.
So an overflow may occur (and LIMIT becomes zero) while storing an
ulonglong value in ha_rows.
Fixed by:
1. Using the maximum possible value for ha_rows on overflow
2. Defining BIG_TABLES for the Windows builds (to match the other platforms)
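A sketch of the overflow guard; ha_rows and HA_POS_ERROR are redefined here
only for illustration and stand in for the server's own type and constant:

  typedef unsigned long ha_rows;               // 4 bytes unless BIG_TABLES is set
  static const ha_rows HA_POS_ERROR= ~(ha_rows) 0;

  static ha_rows limit_from_parser(unsigned long long parsed)
  {
    ha_rows limit= (ha_rows) parsed;
    // If the narrowing lost bits, use the maximum instead of wrapping to 0.
    if ((unsigned long long) limit != parsed)
      limit= HA_POS_ERROR;
    return limit;
  }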
Make sure that if the builder configured a non-standard (!= 3306)
default TCP port, that value actually gets used throughout. If they
didn't configure a value, assume "use a sensible default", which
will be read from /etc/services or, failing that, from the factory
default. That makes the order of preference:
- command-line option
- my.cnf, where applicable
- $MYSQL_TCP_PORT environment variable
- /etc/services (unless configured --with-tcp-port)
- default port (--with-tcp-port=... or factory default)
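A sketch of the lookup below the command-line/my.cnf level (those are handled
by ordinary option parsing); the parameters stand in for what configure
produced and are assumptions:

  #include <cstdlib>
  #include <arpa/inet.h>
  #include <netdb.h>

  static unsigned int default_tcp_port(bool port_was_configured,
                                       unsigned int configured_or_factory_port)
  {
    if (const char *env= std::getenv("MYSQL_TCP_PORT"))       // environment wins
      return (unsigned int) std::atoi(env);

    if (!port_was_configured)                                  // no --with-tcp-port
    {
      if (struct servent *se= getservbyname("mysql", "tcp"))   // /etc/services
        return (unsigned int) ntohs((unsigned short) se->s_port);
    }
    return configured_or_factory_port;   // --with-tcp-port=... or factory default 3306
  }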
The patch limits read_buffer_size and read_rnd_buffer_size to 2 GB on all platforms, for the following reasons:
- the I/O code in mysys, the code in mf_iocache.c, and some storage engines do not currently work with sizes > 2 GB for those buffers
- even if the above were fixed, the POSIX read() and write() calls on Windows are not 2 GB-safe, so setting those buffers to sizes > 2 GB would not work correctly on 64-bit Windows.
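An illustrative clamp; the constant and function names are assumptions, the
real limit is enforced where the system variables are defined:

  #include <cstdio>

  static const unsigned long long IO_BUFFER_MAX= 2147483647ULL;  /* 2 GB - 1 */

  static unsigned long long check_io_buffer_size(unsigned long long requested)
  {
    if (requested > IO_BUFFER_MAX)
    {
      std::fprintf(stderr, "buffer size %llu truncated to %llu\n",
                   requested, IO_BUFFER_MAX);
      return IO_BUFFER_MAX;
    }
    return requested;
  }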
c++config.h now has the following code:
// For example, <windows.h> is known to #define min and max as macros...
#undef min
#undef max
So our defines in my_global.h get undefined when the <new> header
is included.
Move the definitions of min()/max() to the end of my_global.h.
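A sketch of why the placement matters; the macro bodies are the conventional
definitions and only illustrate the include-order issue:

  #include <new>      // pulls in c++config.h, which may #undef min and max

  // Defining the macros only after such includes keeps them available; this
  // mirrors moving the definitions to the end of my_global.h.
  #ifndef min
  #define min(a, b) ((a) < (b) ? (a) : (b))
  #endif
  #ifndef max
  #define max(a, b) ((a) > (b) ? (a) : (b))
  #endif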