Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE
corrupts a MERGE table
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Bug 25038 - Waiting TRUNCATE
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Bug 19627 - temporary merge table locking
Bug 27660 - Falcon: merge table possible
Bug 30273 - merge tables: Can't lock file (errno: 155)
The problems were:
Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE
corrupts a MERGE table
1. A thread trying to lock a MERGE table performs busy waiting while
REPAIR TABLE or a similar table administration task is ongoing on
one or more of its MyISAM tables.
2. A thread trying to lock a MERGE table performs busy waiting until all
threads that did REPAIR TABLE or similar table administration tasks
on one or more of its MyISAM tables in LOCK TABLES segments do UNLOCK
TABLES. The difference from problem #1 is that the busy waiting
takes place *after* the administration task. It is terminated by
UNLOCK TABLES only.
3. Two FLUSH TABLES within a LOCK TABLES segment can invalidate the
lock. This does *not* require a MERGE table. The first FLUSH TABLES
can be replaced by any statement that requires other threads to
reopen the table. In 5.0 and 5.1 a single FLUSH TABLES can provoke
the problem.
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Trying DML on a MERGE table, which has a child locked and
repaired by another thread, sent the server into an infinite loop.
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Locking a MERGE table and its children in parent-child order
and flushing the child deadlocked the server.
Bug 25038 - Waiting TRUNCATE
Truncating a MERGE child, while the MERGE table was in use,
made the truncate fail instead of waiting for the table to
become free.
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Repairing a child of an open MERGE table corrupted the child.
It was necessary to FLUSH the child first.
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Flushing and optimizing locked MERGE children crashed the server.
Bug 19627 - temporary merge table locking
Use of a temporary MERGE table with non-temporary children
could corrupt the children.
Temporary tables are never locked, so we now prohibit
non-temporary children of a temporary MERGE table.
Bug 27660 - Falcon: merge table possible
It was possible to create a MERGE table with non-MyISAM children.
Bug 30273 - merge tables: Can't lock file (errno: 155)
This was a Windows-only bug. Table administration statements
sometimes failed with "Can't lock file (errno: 155)".
These bugs are fixed by a new implementation of MERGE table open.
When opening a MERGE table in open_tables() we now add the
child tables to the list of tables to be opened by open_tables()
(the "query_list"). The children are not opened in the handler at
this stage.
After opening the parent, open_tables() opens each child from the
now extended query_list. When the last child is opened, we remove
the children from the query_list again and attach the children to
the parent. This behaves similarly to the old open. However, it does
not open the MyISAM tables directly, but grabs them from the already
open children.
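
As an illustration, a self-contained C++ model of this open/attach
flow (the types, names and helpers are simplified stand-ins for the
purpose of this comment, not the actual open_tables() code):

  #include <map>
  #include <string>
  #include <vector>

  struct Table
  {
    std::string name;
    std::vector<std::string> union_list; // child names (MERGE parents only)
    std::vector<Table*> children;        // attached after all are open
  };

  // Stand-in for reading the table definition; in the server the
  // union list comes from the .frm file of the MERGE parent.
  static std::map<std::string, std::vector<std::string> > defs;

  static Table *open_single_table(const std::string &name)
  {
    Table *t= new Table;                 // leaks are ignored in this toy
    t->name= name;
    t->union_list= defs[name];
    return t;
  }

  static void open_tables(std::vector<std::string> query_list)
  {
    std::map<std::string, Table*> open;  // models thd->open_tables
    for (size_t i= 0; i < query_list.size(); i++)
    {
      Table *t= open_single_table(query_list[i]);
      open[t->name]= t;
      // A MERGE parent extends the query list with its children, so
      // this same loop opens them; the handler does not.
      query_list.insert(query_list.end(),
                        t->union_list.begin(), t->union_list.end());
    }
    // When the last child is open, attach the children to the parent.
    // The MyISAM tables are grabbed from the already open children
    // instead of being opened directly.
    for (std::map<std::string, Table*>::iterator it= open.begin();
         it != open.end(); ++it)
      for (size_t c= 0; c < it->second->union_list.size(); c++)
        it->second->children.push_back(open[it->second->union_list[c]]);
  }

  int main()
  {
    defs["m1"].push_back("t1");          // MERGE table m1 with child t1
    open_tables(std::vector<std::string>(1, "m1"));
    return 0;
  }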
When closing a MERGE table in close_thread_table() we detach the
children only. Closing of the children is done implicitly because
they are in thd->open_tables.
For more detail see the comment at the top of ha_myisammrg.cc.
Changed from open_ltable() to open_and_lock_tables() in all places
that can be relevant for MERGE tables. The latter can handle tables
added to the list on the fly. When open_ltable() was used in a loop
over a list of tables, the list must be temporarily terminated
after every table for open_and_lock_tables().
table_list->required_type is set to FRMTYPE_TABLE to avoid opening
special tables. Handling of derived tables is suppressed.
These details are handled by the new function
open_n_lock_single_table(), which has nearly the same signature as
open_ltable() and can replace it in most cases.
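
A sketch of what open_n_lock_single_table() has to do for the loop
case (names as in the 5.1-era source, but simplified, with error
handling omitted; treat the exact signatures as assumptions):

  // Open each table of a singly linked list one at a time.
  // open_and_lock_tables() would otherwise walk the whole list, so
  // the list is temporarily terminated after the current table.
  for (TABLE_LIST *table= list; table; table= table->next_global)
  {
    TABLE_LIST *saved_next= table->next_global;
    table->next_global= NULL;           // terminate the list here
    open_and_lock_tables(thd, table);   // may add and remove children
    table->next_global= saved_next;     // restore for the next table
  }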
In reopen_tables() some of the tables opened by a thread can be
closed and reopened. When a MERGE child is affected, the parent
must be closed and reopened too. Closing of the parent is forced
before the first child is closed. Reopen happens in the order of
thd->open_tables. MERGE parents do not attach their children
automatically at open. This is done after all tables are reopened.
So all children are open when attaching them.
Special lock handling like mysql_lock_abort() or mysql_lock_remove()
needs to be suppressed for MERGE children or forwarded to the parent.
This depends on the situation: in loops over all open tables,
child lock handling is suppressed; when a single table is touched,
the handling is forwarded to the parent (see the sketch below).
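
Both cases, illustrated (the parent pointer and the exact
mysql_lock_abort() signature are assumptions of this sketch, not
the actual server code):

  // Single-table case: operate on the parent's lock, not the child's.
  void abort_lock(THD *thd, TABLE *table)
  {
    if (table->parent)                  // MERGE child
      table= table->parent;             // forward to the parent
    mysql_lock_abort(thd, table);
  }

  // Whole-list case: children are covered via their parent, so
  // child lock handling is suppressed.
  for (TABLE *t= thd->open_tables; t; t= t->next)
  {
    if (t->parent)
      continue;
    mysql_lock_abort(thd, t);
  }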
Behavioral changes:
===================
This patch changes the behavior of temporary MERGE tables.
Temporary MERGE must have temporary children.
The old behavior was wrong. A temporary table is not locked. Hence
even non-temporary children were not locked. See
Bug 19627 - temporary merge table locking.
You cannot change the union list of a non-temporary MERGE table
when LOCK TABLES is in effect. The following does *not* work:
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ...;
LOCK TABLES t1 WRITE, t2 WRITE, m1 WRITE;
ALTER TABLE m1 ... UNION=(t1,t2) ...;
However, you can do this with a temporary MERGE table.
You cannot create a MERGE table with CREATE ... SELECT, neither
as a temporary nor as a non-temporary MERGE table.
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ... SELECT ...;
gives the error message: table is not BASE TABLE.
The fact that a timestamp field will be set to NOW on UPDATE wasn't shown
by the SHOW command or reported to a client through connectors. This led to
problems in the ODBC connector and might lead to user confusion.
A new field flag called ON_UPDATE_NOW_FLAG is added. The Field_timestamp
constructors set it when a field should be set to NOW on UPDATE.
The get_schema_column_record function now reports whether a timestamp field
will be set to NOW on UPDATE.
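
A self-contained model of the change (the flag name and the EXTRA
value are from the patch; the surrounding class is a simplified
stand-in and the bit value is an assumption):

  #include <stdio.h>

  enum { ON_UPDATE_NOW_FLAG= 1 << 14 };    // bit value assumed here

  struct Field_timestamp                   // stand-in for the real class
  {
    unsigned int flags;
    Field_timestamp(bool set_to_now_on_update) : flags(0)
    {
      if (set_to_now_on_update)            // TIMESTAMP ... ON UPDATE NOW()
        flags|= ON_UPDATE_NOW_FLAG;
    }
  };

  // Models get_schema_column_record(): report the ON UPDATE property
  // (e.g. in the EXTRA column) instead of dropping it.
  static const char *extra_for(const Field_timestamp &f)
  {
    return (f.flags & ON_UPDATE_NOW_FLAG) ?
           "on update CURRENT_TIMESTAMP" : "";
  }

  int main()
  {
    Field_timestamp ts(true);
    printf("EXTRA: %s\n", extra_for(ts));
    return 0;
  }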
A user could not override system-wide settings in their ~/.my.cnf,
because the DEFAULT_SYSCONFDIR was being searched last. Also, in
some configurations (especially when the --sysconfdir compile-time
option is set to /etc or /etc/mysql), the system-wide my.cnf file
was read multiple times, causing confusion and potential problems.
Rearrange default directories to conform to the manual and logic.
Move --sysconfdir=<path> (DEFAULT_SYSCONFDIR) from the last default
directory to the middle of the list. $HOME/.my.cnf should be last,
so the user is able to override the system-wide settings.
Change init_default_directories() to remove duplicates from the
list.
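
A self-contained sketch of the intended order and the duplicate
removal (paths and names are illustrative; the real list also
contains platform-specific entries):

  #include <algorithm>
  #include <stdlib.h>
  #include <string>
  #include <vector>

  #ifndef DEFAULT_SYSCONFDIR
  #define DEFAULT_SYSCONFDIR "/etc/mysql/"   // --sysconfdir, assumed here
  #endif

  static std::vector<std::string> init_default_directories()
  {
    std::vector<std::string> dirs;
    dirs.push_back("/etc/");
    dirs.push_back(DEFAULT_SYSCONFDIR);      // now in the middle ...
    if (const char *env= getenv("MYSQL_HOME"))
      dirs.push_back(std::string(env) + "/");
    if (const char *home= getenv("HOME"))
      dirs.push_back(std::string(home) + "/"); // ... and $HOME last
    // Remove duplicates so that a sysconfdir of /etc does not make
    // the same my.cnf be read twice (which copy survives is a detail
    // glossed over in this sketch).
    std::vector<std::string> unique;
    for (size_t i= 0; i < dirs.size(); i++)
      if (std::find(unique.begin(), unique.end(), dirs[i]) == unique.end())
        unique.push_back(dirs[i]);
    return unique;
  }

  int main()
  {
    return (int) init_default_directories().size();
  }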
No functionality added or changed.
This is a pre-requisite for the fix for Bug#12713 Error in a stored
function called from a SELECT doesn't cause ROLLBACK of statement.
Address post-review comments.
When replicating an update pair (before image, after image) under row-based
replication, and the before image is not found on the slave, the after image
was not discarded, and was hence read as the before image for the next row.
Eventually, this led to an after image being read outside the block of rows
in the event, causing an assertion to fire.
This patch fixes this by reading the after image even when the row
was not found on the slave, adds some extra debug assertions to catch future
errors earlier, and also adds a few non-debug checks to prevent reading
outside the block of the event.
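
A self-contained toy model of the fix (integers stand in for row
images; all names here are illustrative, not the replication code):

  #include <assert.h>
  #include <stdio.h>

  int main()
  {
    // The event block: (before image, after image) pairs.
    int block[]= {1, 10, 2, 20, 3, 30};
    const size_t block_size= sizeof(block) / sizeof(block[0]);
    int slave_rows[]= {1, 3};            // row "2" is missing on the slave

    for (size_t i= 0; i < block_size; )
    {
      assert(i + 1 < block_size);        // never read past the block
      int before= block[i++];
      bool found= false;
      for (size_t r= 0; r < 2; r++)
        if (slave_rows[r] == before)
          found= true;
      int after= block[i++];             // consume the after image even
                                         // when the row was not found
      if (found)
        printf("update %d -> %d\n", before, after);
      else
        printf("row %d not found; after image %d skipped\n", before, after);
    }
    return 0;
  }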
Problem:
Improper compile-time flags on AIX prevented use of files > 2 GB. This
resulted in Max_data_length being truncated to 2 GB by MyISAM code.
Solution:
Reverted large-file changes from the fix for bug10776. We need to define
_LARGE_FILES on AIX to have support for files > 2 GB.
Since _LARGE_FILE_API is incompatible with _LARGE_FILES and may be
automatically defined by including standards.h, we also need a
workaround to avoid this conflict.
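
The workaround might look like this in a central header or in the
compile flags (a sketch; the exact guards in the real build files
may differ):

  /* AIX: we want files > 2 GB, which needs _LARGE_FILES.          */
  /* standards.h may have defined _LARGE_FILE_API behind our back; */
  /* it is incompatible with _LARGE_FILES, so drop it again.       */
  #if defined(_AIX) && defined(_LARGE_FILE_API) && !defined(_LARGE_FILES)
  #undef _LARGE_FILE_API
  #endif
  #if defined(_AIX) && !defined(_LARGE_FILES)
  #define _LARGE_FILES 1
  #endif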
"Disabled plugin is provoking Valgrind error"
If there are any auto-alloced string plug-in options, memory is
allocated during the call to handle_options(). We must free this
memory if we are not installing the plug-in.
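
A self-contained sketch of the cleanup (stand-in types; the server's
my_option handling differs in detail):

  #include <stdlib.h>
  #include <string.h>

  struct plugin_option                  // stand-in for my_option
  {
    const char *name;                   // NULL-terminated array
    bool auto_alloc_string;             // models GET_STR_ALLOC
    char **value;                       // points at the allocated string
  };

  // handle_options() may have allocated strings for auto-alloc string
  // options; free them when the plug-in is not going to be installed.
  static void free_plugin_option_strings(plugin_option *opts)
  {
    for (; opts->name; opts++)
      if (opts->auto_alloc_string && opts->value && *opts->value)
      {
        free(*opts->value);
        *opts->value= NULL;
      }
  }

  int main()
  {
    char *str= strdup("value");         // as handle_options() would do
    plugin_option opts[]= {
      { "my_string_opt", true, &str },
      { NULL, false, NULL }
    };
    free_plugin_option_strings(opts);   // plug-in not installed: clean up
    return str == NULL ? 0 : 1;
  }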
It's not possible to use WaitForSingleObject() to wait on a
CRITICAL_SECTION; instead, use the TryEnterCriticalSection() function
(see the sketch below):
- if the "mutex" was already taken => return EBUSY
- if the "mutex" was acquired => return 0
The parser uses ulonglong to store the LIMIT number. This number
is then stored into a variable of type ha_rows. ha_rows is either
4 or 8 bytes, depending on the BIG_TABLES define from config.h.
So an overflow may occur (and LIMIT becomes zero) while storing a
ulonglong value in ha_rows.
Fixed by:
1. Using the maximum possible value for ha_rows on overflow
   (see the sketch below)
2. Defining BIG_TABLES for the Windows builds (to match the others)
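
A self-contained sketch of the overflow clamp (the typedef mirrors
the BIG_TABLES effect; HA_POS_ERROR is the usual "all bits set"
maximum):

  #include <stdint.h>
  #include <stdio.h>

  #ifdef BIG_TABLES
  typedef uint64_t ha_rows;
  #else
  typedef uint32_t ha_rows;   /* 4 bytes: LIMIT 4294967296 would wrap to 0 */
  #endif
  #define HA_POS_ERROR (~(ha_rows) 0)

  static ha_rows limit_from_parser(unsigned long long val)
  {
    /* On overflow, use the maximum possible value for ha_rows
       instead of silently wrapping to a small number (or zero). */
    if (val >= (unsigned long long) HA_POS_ERROR)
      return HA_POS_ERROR;
    return (ha_rows) val;
  }

  int main()
  {
    printf("%llu\n",
           (unsigned long long) limit_from_parser(4294967296ULL));
    return 0;
  }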
Make sure that if the builder configured a non-standard (!= 3306)
default TCP port, that value actually gets used throughout. If they
didn't configure a value, assume "use a sensible default", which
will be read from /etc/services or, failing that, from the factory
default. That makes the order of preference (sketched below):
- command-line option
- my.cnf, where applicable
- $MYSQL_TCP_PORT environment variable
- /etc/services (unless configured --with-tcp-port)
- default port (--with-tcp-port=... or factory default)
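
A sketch of how the lower-priority sources could be resolved
(simplified from the server's port setup; MYSQL_PORT and
MYSQL_PORT_DEFAULT model the factory default and the --with-tcp-port
configure value, and should be treated as assumptions):

  #include <stdlib.h>
  #include <netdb.h>
  #include <netinet/in.h>

  #ifndef MYSQL_PORT
  #define MYSQL_PORT 3306               /* factory default */
  #endif

  static unsigned int resolve_tcp_port(void)
  {
    unsigned int port= MYSQL_PORT;
  #if MYSQL_PORT_DEFAULT == 0           /* no --with-tcp-port given */
    struct servent *serv= getservbyname("mysql", "tcp");
    if (serv)
      port= ntohs((unsigned short) serv->s_port);
  #endif
    const char *env= getenv("MYSQL_TCP_PORT");
    if (env)
      port= (unsigned int) atoi(env);
    /* my.cnf and the command line are handled by the normal option
       machinery afterwards and therefore take precedence. */
    return port;
  }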
The patch limits read_buffer_size and read_rnd_buffer_size to 2 GB
on all platforms, for the following reasons:
- I/O code in mysys, code in mf_iocache.c and some storage engines
  do not currently work with sizes > 2 GB for those buffers.
- Even if the above were fixed, the POSIX read() and write() calls
  on Windows are not 2 GB safe, so setting those buffers to sizes
  > 2 GB would not work correctly on 64-bit Windows.
c++config.h now has the following code:
// For example, <windows.h> is known to #define min and max as macros...
#undef min
#undef max
So, our defines in my_global.h are undefined when the <new> header
is included.
Move definitions of min()/max() to the end of my_global.h.
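
A minimal stand-alone illustration of the include-order problem
(move the early #define below the #include to see the difference):

  /* min() is defined early, the way my_global.h used to do it: */
  #define min(a,b) ((a) < (b) ? (a) : (b))

  #include <new>   /* may pull in c++config.h, which does #undef min/max */

  int main()
  {
  #ifdef min
    return min(1, 2);   /* only reachable if the macro survived */
  #else
    return -1;          /* the macro was silently removed */
  #endif
  }

Defining min()/max() only after all system headers, at the end of
my_global.h, sidesteps the #undef.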