Replaced COND_refresh with COND_global_read_lock because of a bug in NPTL threads when using different mutexes as arguments to pthread_cond_wait()
The original code caused a hang in FLUSH TABLES WITH READ LOCK in some circumstances because pthread_cond_broadcast() was not delivered to other threads.
This fixes:
Bug#16986: Deadlock condition with MyISAM tables
Bug#20048: FLUSH TABLES WITH READ LOCK causes a deadlock
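A minimal sketch of the pthreads rule this change relies on (the variable and function names below are illustrative, not the actual server code): a condition variable must always be paired with the same mutex, and a broadcast then reliably reaches every waiter.

  #include <pthread.h>

  /* One condition variable, always used with the same mutex.  Waiting on
     a single condition variable with two different mutexes (as the old
     COND_refresh usage effectively allowed) is undefined behaviour and
     can make pthread_cond_broadcast() miss waiters. */
  static pthread_mutex_t LOCK_global_read= PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  COND_global_read= PTHREAD_COND_INITIALIZER;
  static bool global_read_lock_active= false;

  void wait_until_read_lock_released()
  {
    pthread_mutex_lock(&LOCK_global_read);
    while (global_read_lock_active)                 /* re-check predicate */
      pthread_cond_wait(&COND_global_read, &LOCK_global_read);
    pthread_mutex_unlock(&LOCK_global_read);
  }

  void release_global_read_lock()
  {
    pthread_mutex_lock(&LOCK_global_read);
    global_read_lock_active= false;
    pthread_cond_broadcast(&COND_global_read);      /* reaches all waiters */
    pthread_mutex_unlock(&LOCK_global_read);
  }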
After a locking error the open table(s) were not fully
cleaned up for reuse. But they were put into the open table
cache even before the lock was tried. The next statement
reused the table(s) with a wrong lock type set up. This
tricked MyISAM into believing that it did not need to update
the table statistics. Hence CHECK TABLE reported a mismatch
of record count and table size.
Fortunately nothing worse has been detected yet. The effect
of the test case was that the insert worked on a read locked
table. (!)
I added a new function that clears the lock type from all
tables that were prepared for a lock. I call this function
when a lock fails.
No test case. One test would add 50 seconds to the
test suite. Another test requires file mode modifications.
I added a test script to the bug report. It contains three
cases for failing locks. All could reproduce a table
corruption. All are fixed by this patch.
This bug was not lock timeout specific.
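A small self-contained sketch of the cleanup the new function performs; the type and field names are hypothetical stand-ins for the real TABLE structures.

  #include <stddef.h>

  enum lock_type { TL_UNLOCK, TL_READ, TL_WRITE };

  struct Table
  {
    lock_type requested_lock;   /* set while the lock was being prepared */
  };

  /* Called on the error path of the lock routine: forget the intended
     lock type on every table that was prepared for the failed lock, so
     a cached table object cannot be reused with a stale lock type. */
  void reset_lock_data(Table **tables, size_t count)
  {
    for (size_t i= 0; i < count; i++)
      tables[i]->requested_lock= TL_UNLOCK;
  }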
Additional fix for INSERT DELAYED with subselect.
Originally detected in 5.1, but 5.0 could also be affected.
The user thread creates a dummy table object,
which is not included in the lock. The 'real' table is
opened and locked by the 'delayed' system thread.
The dummy object is now marked as not locked and this is
tested in mysql_lock_have_duplicate().
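A rough sketch of that test with hypothetical structures in place of the real lock and table objects: the duplicate check simply skips entries that are flagged as not locked, which is what the dummy object now is.

  #include <string.h>
  #include <stddef.h>

  struct TableRef
  {
    const char *db;
    const char *name;
    bool        locked;      /* false for the delayed-insert dummy object */
  };

  /* Report whether the same table appears twice among the *locked*
     entries; the user thread's dummy object is ignored. */
  bool have_duplicate_table(const TableRef *t, size_t count)
  {
    for (size_t i= 0; i < count; i++)
    {
      if (!t[i].locked)
        continue;
      for (size_t j= i + 1; j < count; j++)
        if (t[j].locked &&
            strcmp(t[i].db, t[j].db) == 0 &&
            strcmp(t[i].name, t[j].name) == 0)
          return true;
    }
    return false;
  }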
Added missing DBUG_xxx_RETURN statements
Fixed some usage of uninitialized variables (as found by valgrind)
Ensure that we don't remove locked tables used as name locks from the open table cache until unlock_table_names() is called.
This was fixed by having drop_locked_name() return any table used as a name lock so that we can free it in unlock_table_names().
This will allow Tomas to continue with his work to use name locks to synchronize things.
Note: valgrind still produces a lot of warnings about use of uninitialized values and shows memory loss errors when running the ndb tests
1. dbug state is now local to a thread
2. new macros: DBUG_EXPLAIN, DBUG_EXPLAIN_INITIAL,
DBUG_SET, DBUG_SET_INITIAL, DBUG_EVALUATE, DBUG_EVALUATE_IF
3. macros are do{}while(0) wrapped (see the sketch after this list)
4. incremental modifications to the dbug state (e.g. "+d,info:-t")
5. dbug code cleanup, style fixes
6. _db_on_ and DEBUGGER_ON/OFF removed
7. rest of MySQL code fixed because of 3 (missing ;) and 6
8. dbug manual updated
9. server variable @@debug (global and local) to control dbug from SQL!
a. -#T to print timestamps in the log
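A minimal sketch of why item 3 matters; MY_DEBUG_SET and the helper functions below are made up, not the real DBUG macros. Wrapping the body in do { } while (0) makes the macro behave like a single statement, which is also why item 7 had to add missing semicolons.

  static void save_debug_state() {}
  static void apply_debug_state(const char *) {}
  static void do_other_work() {}

  #define MY_DEBUG_SET(spec)        \
    do {                            \
      save_debug_state();           \
      apply_debug_state(spec);      \
    } while (0)

  void example(bool tracing_wanted)
  {
    if (tracing_wanted)
      MY_DEBUG_SET("+d,info:-t");   /* incremental state as in item 4 */
    else
      do_other_work();              /* the else still binds correctly  */
  }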
Optimised version of ADD/DROP/REORGANIZE partitions for
non-NDB storage engines.
New syntax to handle REBUILD/OPTIMIZE/ANALYZE/CHECK/REPAIR partitions
Quite a few bug fixes
when high concurrency": remove HASH::current_record and make it
an external search parameter, so that it can not be the cause of a
race condition under high concurrent load.
The bug was in a race condition in table_hash_search,
when column_priv_hash.current_record was overwritten simultaneously
by multiple threads, causing the search for a suitable grant record
to fail.
No test case as the bug is repeatable only under concurrent load.
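A self-contained sketch of the pattern (the types below are stand-ins for HASH and table_hash_search, not the actual mysys code): the search position lives in a caller-owned state object, so concurrent lookups can no longer overwrite each other's position.

  #include <string.h>
  #include <stddef.h>

  struct GrantEntry { const char *key; const char *priv; };

  struct GrantHash                /* shared, read-mostly structure */
  {
    const GrantEntry *entries;
    size_t            count;
  };

  struct SearchState { size_t next; };  /* one per search, on the stack */

  const GrantEntry *search_next(const GrantHash *h, const char *key,
                                SearchState *st)
  {
    while (st->next < h->count)
    {
      const GrantEntry *e= &h->entries[st->next++];
      if (strcmp(e->key, key) == 0)
        return e;
    }
    return 0;
  }

  const GrantEntry *search_first(const GrantHash *h, const char *key,
                                 SearchState *st)
  {
    st->next= 0;                        /* state is private to this caller */
    return search_next(h, key, st);
  }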
Problem #1: INSERT...SELECT, Version for 5.1.
Extended the unique table check by a check of lock data.
Merge sub-tables cannot be detected by doing name checks only.
Problem #1: INSERT...SELECT, Version for 5.0.
Extended the unique table check by a check of lock data.
Merge sub-tables cannot be detected by doing name checks only.
Problem #1: INSERT...SELECT, Version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
The duplicate detection is now done on the locks after
open_and_lock_tables().
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway as it should in theory
fix a possible MERGE table problem with multi-table update.
Problem #1: INSERT...SELECT
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
The duplicate detection is now done on the locks after
open_and_lock_tables().
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway as it should in theory
fix a possible MERGE table problem with multi-table update.
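A toy illustration of "check the lock data, not just the names" (made-up types standing in for the real lock structures): a MERGE table and one of its sub-tables have different names but end up referring to the same underlying lock object, so comparing lock objects catches the overlap that a name comparison misses.

  #include <stddef.h>

  struct LockData { const void *lock_object; };  /* shared per real table */

  /* Two lock sets overlap if any entries refer to the same underlying
     lock object, regardless of the table names used to reach them. */
  bool lock_sets_overlap(const LockData *a, size_t a_count,
                         const LockData *b, size_t b_count)
  {
    for (size_t i= 0; i < a_count; i++)
      for (size_t j= 0; j < b_count; j++)
        if (a[i].lock_object == b[j].lock_object)
          return true;
    return false;
  }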
The table opening process now works the following way:
- Create common TABLE_SHARE object
- Read the .frm file and unpack it into the TABLE_SHARE object
- Create a TABLE object based on the information in the TABLE_SHARE
object and open a handler to the table object
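A compact, runnable sketch of that sequence; TableShare, TableInstance and the helper functions are made-up names, not the server API.

  #include <stdio.h>

  struct TableShare        /* one per .frm, kept in the definition cache */
  {
    const char *db;
    const char *table_name;
    /* unpacked .frm metadata (fields, keys, ...) would live here */
  };

  struct TableInstance     /* one per concurrent use of the table */
  {
    const TableShare *share;    /* shared definition, never owned here */
    void             *handler;  /* engine handler opened for this use  */
  };

  static TableShare *definition_cache_lookup(const char *, const char *)
  {
    return 0;                   /* always a cache miss in this toy code */
  }

  static TableShare *read_frm_into_share(const char *db, const char *name)
  {
    static TableShare share;    /* real code allocates and unpacks .frm  */
    share.db= db;
    share.table_name= name;
    return &share;
  }

  static TableInstance open_table(const char *db, const char *name)
  {
    TableShare *share= definition_cache_lookup(db, name);
    if (!share)
      share= read_frm_into_share(db, name); /* steps 1 and 2            */
    TableInstance t;
    t.share= share;                         /* step 3: per-use object   */
    t.handler= 0;                           /* engine handler goes here */
    return t;
  }

  int main()
  {
    TableInstance t= open_table("test", "t1");
    printf("opened %s.%s\n", t.share->db, t.share->table_name);
    return 0;
  }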
Other noteworthy changes:
- In TABLE_SHARE the most common strings are now LEX_STRING's
- Better error message when table is not found
- Variable table_cache is now renamed 'table_open_cache'
- New variable 'table_definition_cache' that is the number of table definitions that will be cached
- strxnmov() calls are now fixed to avoid overflows
- strxnmov() will now always add one end \0 to result
- engine objects are now created with a TABLE_SHARE object instead of a TABLE object.
- After creating a field object one must call field->init(table) before using it
- For a busy system this change will give you:
- Less memory usage for table object
- Faster opening of tables (if it has been in use or is in the table definition cache)
- Allow you to cache many table definitions objects
- Faster drop of table
into mysql.com:/home/mysql_src/mysql-5.1-merge-of-5.0
(2nd try; Pekka kindly agreed to fix storage/ndb/src/kernel/blocks/dbtup/DbtupRoutines.cpp
and storage/ndb/src/kernel/vm/SimulatedBlock.cpp after I push).
Version for 5.0.
It fixes three problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with a locked LOCK_open.
Though the function comment clearly said it must.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
3. In 5.0 (not in 4.1 or 4.0) DROP TABLE had a possible deadlock flaw when
run concurrently with FLUSH TABLES WITH READ LOCK. I call the fix for this
problem "the 5.0 addendum fix".
This bug occurs when some trigger for a table used by a DML statement is created
or changed while the statement was waiting in lock_tables(). In this situation
the prelocking set which we have calculated becomes invalid, which can easily lead
to errors and even in some cases to crashes.
With the proposed patch we no longer silently reopen tables in lock_tables();
instead the caller of lock_tables() becomes responsible for reopening tables and
recalculating the prelocking set.
The bug was introduced by cset 1.1659.14.1. Before it, the server silently
ignored the fact that the lock could not be acquired because it was already acquired.
This patch makes make_global_read_lock_block_commit() return without error
if the lock is already acquired.
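A tiny sketch of the new behaviour; the structure and function below are simplified stand-ins (not the real signature of make_global_read_lock_block_commit()), following the convention that a false return means success.

  struct Session
  {
    bool has_global_read_lock;
    bool read_lock_blocks_commit;
  };

  /* Returns true on error, false on success. */
  bool block_commit_upgrade(Session *s)
  {
    if (!s->has_global_read_lock)     /* nothing to upgrade             */
      return false;
    if (s->read_lock_blocks_commit)   /* already acquired: not an error */
      return false;
    /* ... otherwise upgrade the lock, waiting for running commits ...  */
    s->read_lock_blocks_commit= true;
    return false;
  }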
Fixed portability problem with bool in C programs
Moved close_thread_tables out from LOCK_thread_count mutex (safety fix)
my_sleep() -> pthread_cond_timedwait()
The idea of the patch
is that every cursor gets its own lock id for table level locking.
Thus cursors are protected from updates performed within the same
connection. Additionally a list of transient (must be closed at
commit) cursors is maintained and all transient cursors are closed
when necessary. Lastly, this patch adds support for deadlock
timeouts to TLL locking when using cursors.
+ post-review fixes.
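A small illustration of the lock-id idea using made-up types (not the real locking structures): conflicts are decided per lock owner, and because each cursor allocates its own owner id, a later update in the same connection is seen as a different owner and so cannot slip past the cursor's read lock.

  struct LockOwner { unsigned long id; };

  enum LockMode { LOCK_MODE_READ, LOCK_MODE_WRITE };

  static bool modes_conflict(LockMode a, LockMode b)
  {
    return a == LOCK_MODE_WRITE || b == LOCK_MODE_WRITE;
  }

  /* The same owner never conflicts with itself; different owners
     conflict whenever one of them wants to write. */
  static bool requests_conflict(LockOwner a, LockMode a_mode,
                                LockOwner b, LockMode b_mode)
  {
    return a.id != b.id && modes_conflict(a_mode, b_mode);
  }

  /* A connection hands out a fresh owner id to every cursor it opens,
     keeping cursor locks and statement locks apart. */
  struct Connection
  {
    unsigned long next_owner_id;
    LockOwner new_lock_owner()
    {
      LockOwner owner;
      owner.id= next_owner_id++;
      return owner;
    }
  };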
of stored routines definitions even if we already have some tables open and
locked. To avoid deadlocks in this case we have to put certain restrictions
on locking of mysql.proc table.
This makes it possible to use stored routines safely under LOCK TABLES without explicitly
mentioning mysql.proc in the list of locked tables. It also fixes bug #11554
"Server crashes on statement indirectly using non-cached function".
1.) Added a new option to mysql_lock_tables() for ignoring FLUSH TABLES.
Used the new option in create_table_from_items().
It is necessary to prevent the SELECT table from being reopened.
It would get new storage assigned for its fields, while the
SELECT part of the command would still use the old (freed) storage.
2.) Protected the CREATE TABLE and CREATE TABLE ... SELECT commands
against a global read lock. This prevents a deadlock in
CREATE TABLE ... SELECT in conjunction with FLUSH TABLES WITH READ LOCK
and avoids the creation of new tables during a global read lock.
3.) Replaced set_protect_against_global_read_lock() and
unset_protect_against_global_read_lock() by
wait_if_global_read_lock() and start_waiting_global_read_lock()
in the INSERT DELAYED handling.
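A simplified sketch of the protection pattern behind 2.) and 3.); the real wait_if_global_read_lock()/start_waiting_global_read_lock() operate on THD and handle more cases, so the variables and bodies below are illustrative only.

  #include <pthread.h>

  static pthread_mutex_t LOCK_grlock= PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  COND_grlock= PTHREAD_COND_INITIALIZER;
  static bool     global_read_lock_set= false;
  static unsigned protected_writers= 0;   /* writes currently protected */

  /* Before a write (e.g. before creating the table or starting the
     delayed-insert handler): wait until no global read lock is active,
     then register as a protected writer. */
  void begin_write_protection()
  {
    pthread_mutex_lock(&LOCK_grlock);
    while (global_read_lock_set)
      pthread_cond_wait(&COND_grlock, &LOCK_grlock);
    protected_writers++;
    pthread_mutex_unlock(&LOCK_grlock);
  }

  /* When the write is done; a pending FLUSH TABLES WITH READ LOCK
     waits for protected_writers to drop to zero. */
  void end_write_protection()
  {
    pthread_mutex_lock(&LOCK_grlock);
    if (--protected_writers == 0)
      pthread_cond_broadcast(&COND_grlock);
    pthread_mutex_unlock(&LOCK_grlock);
  }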
to read and write
Changed Server code, added new interface to handler and changed the
NDB handler, InnoDB handler and Federated handler that previously used
query_id
Bug#10202 fix (one-liner fix for memory leak)
Added protection against global read lock while creating and
initializing a delayed insert handler.
Allowed the global read lock to be ignored when locking the table
inside the delayed insert handler.
Added some minor improvements.
(otherwise a deadlock: when ALTER writes to the
binlog while holding LOCK_open, it can cause a binlog rotation,
the binlog then waits for prepared transactions to commit, and commit
needs LOCK_open to check for the global read lock)
start_waiting_global_read_lock() should wake up all those who are waiting
for protect_against_global_read_lock to go down to 0: those registered in waiting_for_read_lock
AND those registered in global_read_lock_blocks_commit.
Split TABLE to TABLE and TABLE_SHARE (TABLE_SHARE is still allocated as part of table, will be fixed soon)
Created Field::make_field() and made Field_num::make_field() call this
Added 'TABLE_SHARE->db' that points to database name; Changed all usage of table_cache_key as database name to use this instead
Changed field->table_name to point to pointer to alias. This allows us to change alias for a table by just updating one pointer.
Renamed TABLE_SHARE->real_name to table_name
Renamed TABLE->table_name to alias
Renamed TABLE_LIST->real_name to table_name
transactional table locks to tables mentioned in the query. These locks
are released at the end of the transaction automatically.
This is fix for bugs #5655, #5998 and issue #3762.
(originally reported as "second run of innobackup hangs forever and can even hang server").
Plus testcase for the bugfix and comments about global read locks.
when one connection had done FLUSH TABLES WITH READ LOCK, some updates, and then COMMIT,
it was accepted, but my_error() was called, so while the client got no error, an error was logged in the binlog.
We now don't call my_error() in this case; we assume the connection knows what it is doing.
This problem was specific to 4.0.21. The change is needed to make replication work with existing versions of innobackup.
Use 'mysqltest' as test database instead of test_$1 or test1,test2 to not accidentally delete an important database
Safety fix for malformed MERGE files
in a deadlock-free manner. This splits taking the global read lock into two steps.
This fixes a consequence of this bug, known as:
BUG#4953 'mysqldump --master-data may report incorrect binlog position if using InnoDB'
And a test.
Bug #4810 "deadlock with KILL when the victim was in a wait state"
(I included mutex unlock into exit_cond() for future safety)
and BUG#4827 "KILL while START SLAVE may lead to replication slave crash"
after Monty's review.
- Item_param was rewritten.
- it turns out that we can't convert string data to the character set of the
connection on the fly, because it first has to be written to the binary
log.
To support efficient conversion we need to rewrite prepared statements
binlogging code first.
Add support for LIMIT # OFFSET #
Changed lock handling: Now all locks should be stored in TABLE_LIST instead of being passed to functions.
Don't call query_cache_invalidate() twice in some cases
mysql_change_user() now clears state to be equivalent to close + connect.
Fixed a bug with multi-table-update and multi-table-delete when used with LOCK TABLES
Fixed a bug with replicate-do and UPDATE
Fixed bugs in my last changeset that made MySQL hard to compile.
Added mutex around some data that could cause table cache corruptions when using OPTIMIZE TABLE / REPAIR TABLE or automatic repair of MyISAM tables.
Added mutex around some data in the slave start/stop code that could cause THD linked list corruptions
Extended my_chsize() to allow one to specify a filler character.
Extended vio_blocking() to return the old state (this made some usage of this function much simpler)
Added checks in some functions that the caller has acquired the required mutexes before calling the function.
Use setrlimit() to ensure that we can write a core file if one specifies --core-file (see the sketch below).
Added --slave-compressed-protocol
Made 2 the minimum length for ft_min_word_len
Added variables foreign_key_checks & unique_checks.
Less logging from replication code (if not started with --log-warnings)
Changed SHOW INNODB STATUS to require the SUPER privilege
More DBUG statements and a lot of new code comments
(All commit emails since 4.0.1 checked)
This had to be done now, before the 4.1 tree changes too much, to make it easy to propagate bug fixes to the 4.1 tree.
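For the --core-file item flagged in the list above, a small standalone example of the setrlimit() call involved; the surrounding option handling is made up.

  #include <sys/resource.h>
  #include <stdio.h>

  /* Raise the core-file size limit so a crash can actually produce a
     core dump.  A real server would only do this when --core-file is
     given and would also respect the hard limit. */
  static bool enable_core_files()
  {
    struct rlimit rl;
    rl.rlim_cur= RLIM_INFINITY;
    rl.rlim_max= RLIM_INFINITY;
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
    {
      perror("setrlimit(RLIMIT_CORE)");
      return false;
    }
    return true;
  }

  int main()
  {
    return enable_core_files() ? 0 : 1;
  }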
Test of unsigned BIGINT values
Fixes for queries-per-hour
Cleanup of replication code (comments and portability fixes)
Make most of the binary log code 4G clean
Changed syntax for GRANT ... QUERIES PER HOUR