Introduce a flag that enables the REPLACE command
to work correctly with an underlying storage engine
that does not report unique key conflicts in
ascending order.
----------------------------------------------------------------------
ChangeSet@1.2571, 2008-04-08 12:30:06+02:00, vvaintroub@wva. +122 -0
Bug#32082 : definition of VOID in my_global.h conflicts with Windows
SDK headers
The VOID macro is now removed. Its usage is replaced with a void cast.
In some cases, where the cast does not make much sense (pthread_*, printf,
hash_delete, my_seek), the cast is omitted.
This is the non-ndb part of the patch.
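For illustration, a minimal self-contained before/after of the mechanical rewrite
described above; the VOID definition shown is only an approximation of the old
my_global.h macro, and puts() stands in for an arbitrary call site.

  #include <stdio.h>

  #define VOID(X) (X)              /* approximation of the old my_global.h macro,
                                      which clashed with the Windows SDK's VOID type */

  int main(void)
  {
    VOID(puts("old style"));       /* before: call wrapped in the VOID macro */
    (void) puts("new style");      /* after:  plain void cast                */
    puts("also fine");             /* after:  no cast at all where it adds
                                      nothing (printf, pthread_*, ...)       */
    return 0;
  }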
The return value of mysql_bin_log.write was ignored by most callers,
which may lead to inconsistencies between master and slave if the transaction
was committed while the binlog was not correctly written. If
my_error() is called in mysql_bin_log.write, this could also lead to an
assertion failure if my_ok() or my_error() is called afterwards.
This is fixed by letting the caller check and handle the
return value of mysql_bin_log.write. This patch only addresses the
simple cases.
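A minimal sketch of the calling pattern this implies; write_to_binlog() and
commit_statement() are stand-ins, not the real MYSQL_BIN_LOG::write signature.

  #include <stdio.h>

  static int write_to_binlog(const char *stmt)   /* returns non-zero on failure */
  {
    return stmt == NULL;                         /* placeholder for the real write */
  }

  static int commit_statement(const char *stmt)
  {
    if (write_to_binlog(stmt))
    {
      /* Do not report success after this point: raise the error exactly once
         and undo the statement's effects so master and slave stay consistent. */
      fprintf(stderr, "binlog write failed, rolling back\n");
      return 1;
    }
    return 0;                                    /* only now report OK to the client */
  }

  int main(void)
  {
    return commit_statement("INSERT INTO t1 VALUES (1)");
  }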
port from mysql-next (5.4?) to mysql-next-mr-bugfixes (5.5/5.6?)
3477 Mikael Ronstrom 2009-07-29
Bug#32115, made use of a local lex object to avoid side effects of opening partitioned
tables
3478 Mikael Ronstrom 2009-07-29
Bug#32115, added an extra test in debug builds to ensure no dangling pointers to the
old lex object are still around
3479 Mikael Ronstrom 2009-07-29
Bug#32115, Removed an assert that was no longer needed
3480 Mikael Ronstrom 2009-08-05
Bug#32115, fixed review comments
3481 Mikael Ronstrom 2009-08-07
Bug#32115, remove now obsolete lex_start calls
The problem was a "self-deadlock" if the connection issuing INSERT DELAYED
had both the global read lock (FLUSH TABLES WITH READ LOCK) and LOCK TABLES
mode active. The table being inserted into had to be different from the
table(s) locked by LOCK TABLES.
For INSERT DELAYED, the connection thread waits until the handler thread has
opened and locked its table before returning. But since the global read lock
was active, the handler thread would be unable to lock and would wait for the
global read lock to go away.
So the handler thread would be waiting for the connection thread to release
the global read lock while the connection thread was waiting for the handler
thread to lock the table. This gave a "self-deadlock" (same connection,
different threads).
The deadlock would only happen if we also had LOCK TABLES mode since the
INSERT otherwise will try to get protection against global read lock before
starting the handler thread. It will then notice that the global read lock
is owned by the same connection and report ER_CANT_UPDATE_WITH_READLOCK.
This patch removes the deadlock by reporting ER_CANT_UPDATE_WITH_READLOCK
also if we are inside LOCK TABLES mode.
Test case added to delayed.test.
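A minimal sketch of the extra check this adds, with placeholder parameters rather
than the server's actual THD fields:

  /* Refuse INSERT DELAYED up front instead of self-deadlocking later. */
  static bool delayed_insert_forbidden(bool owns_global_read_lock,
                                       bool in_lock_tables_mode)
  {
    /* Under LOCK TABLES the old code skipped the read-lock protection step and
       spawned the handler thread, which then blocked on our own FLUSH TABLES
       WITH READ LOCK. Report ER_CANT_UPDATE_WITH_READLOCK in this case too. */
    return owns_global_read_lock && in_lock_tables_mode;
  }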
------------------------------------------------------------
revno: 2597.4.17
revision-id: sp1r-davi@mysql.com/endora.local-20080328174753-24337
parent: sp1r-anozdrin/alik@quad.opbmk-20080328140038-16479
committer: davi@mysql.com/endora.local
timestamp: Fri 2008-03-28 14:47:53 -0300
message:
Bug#15192 "fatal errors" are caught by handlers in stored procedures
The problem is that fatal errors (e.g. out of memory) were being
caught by stored procedure exception handlers, which meant that a
CONTINUE handler could prevent execution from being stopped.
The solution is to not invoke any exception handler if the error is
fatal and to send the fatal error to the client.
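A simplified sketch of the dispatch rule, with placeholder types (the real
condition-handler code in the server is considerably more involved):

  struct error_state
  {
    int  sql_errno;
    bool is_fatal;                     // e.g. out of memory
  };

  static bool should_invoke_sp_handler(const error_state &err,
                                       bool handler_matches_condition)
  {
    if (err.is_fatal)
      return false;                    // never swallow a fatal error: stop
                                       // execution and report it to the client
    return handler_matches_condition;  // otherwise normal handler resolution
  }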
During insert, we are not reading the rows in a referring table but
instead using the last read row that happens to be in table->record[0].
Now INSERT into such a view is denied.
Implemented the server infrastructure for the fix:
1. Added a function LEX_STRING *thd_query_string(THD) to return
a LEX_STRING structure instead of char *.
This is the function that must be called in innodb instead of
thd_query() (see the sketch after this list).
2. Did some encapsulation in THD : aggregated thd_query and
thd_query_length into a LEX_STRING and made accessor and mutator
methods for easy code updating.
3. Updated the server code to use the new methods where applicable.
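A hedged sketch of the accessor pattern described in points 1 and 2; the real THD
class carries far more state and the member names here are illustrative only.

  #include <cstddef>

  struct LEX_STRING { char *str; size_t length; };

  class THD
  {
    LEX_STRING query_string;                      // aggregates query + length
  public:
    const LEX_STRING &query() const { return query_string; }
    void set_query(char *str, size_t len)
    {
      query_string.str= str;                      // single mutator keeps the two
      query_string.length= len;                   // fields consistent
    }
    friend LEX_STRING *thd_query_string(THD *thd);
  };

  /* What a storage engine such as InnoDB should call instead of thd_query(). */
  LEX_STRING *thd_query_string(THD *thd)
  {
    return &thd->query_string;
  }

  int main()
  {
    THD thd;
    char text[]= "SELECT 1";
    thd.set_query(text, sizeof(text) - 1);
    return (int) thd_query_string(&thd)->length;  // 8
  }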
redefining trigger
The 'table->auto_increment_field_not_null' flag is only valid within
processing of a single row, and should be set to FALSE before
navigating to the next row, or exiting the operation.
This bug was caused by an SQL error occurring while executing a trigger
after the flag had been set, so the normal resetting was bypassed.
The table object was then returned to the table share's cache in
a dirty condition. When the table object was reused, an assert
caught that the flag was set.
This patch explicitly clears the flag on error/abort.
Backported from mysql-6.0-codebase revid: 2617.52.1
Bug #47274 assert in open_table on CREATE TABLE <already existing>
The problem was an assertion during execution of CREATE TABLES.
This assertion would occur if INSERT DELAYED or REPLACE DELAYED
were used to update a table containing an AUTO_INCREMENT column
and if the inserted row had a user-supplied value for that column.
Any CREATE TABLE statement (including CREATE TABLE SELECT and
CREATE TABLE LIKE) trying to create the same table and
which followed the INSERT/REPLACE DELAYED would cause the assertion.
The problem was only noticeable on debug builds of the server
and not present in the mysql-5.1 tree.
The cause of the problem was that the code for delayed insert did
not properly reset the TABLE::auto_increment_field_not_null flag after
the table had been updated.
The flag is used to indicate that a non-null value of an auto_increment field
has been provided by the user or retrieved from a current record.
Open_tables() contains an assertion that tests this flag, and this
was triggered by CREATE TABLE.
This patch fixes the problem by resetting the auto_increment_field_not_null
flag to FALSE once INSERT/REPLACE DELAYED has updated the table,
similar to what is done already for regular INSERT statements.
Test case added to delayed.test.
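A minimal sketch of the reset, with a placeholder TABLE type standing in for the
server's:

  struct TABLE { bool auto_increment_field_not_null; };

  /* Called once the delayed row has been written, mirroring the regular
     INSERT path: the flag is only meaningful while a single row is being
     processed, and a TABLE returned to the cache with it still set trips
     the open_tables() assertion. */
  static void end_delayed_row(TABLE *table)
  {
    table->auto_increment_field_not_null= false;
  }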
http://lists.mysql.com/commits/59686
Clean up the pthread_self(), pthread_create(), pthread_join() implementation on Windows.
The prior implementation was unnecessarily complicated and even differed between the
embedded and non-embedded cases.
Improvements in this patch (see the sketch after this list):
* pthread_t is now the unique thread ID, instead of the HANDLE returned by beginthread.
This simplifies pthread_self() to a straight GetCurrentThreadId().
Previously, considerable effort was involved in passing the beginthread() handle from
the caller to the TLS structure in the child thread (which did not work for the main
thread, of course).
* remove the MySQL-specific my_thread_init()/my_thread_end() calls from pthread_create.
No such automagic is done on Unix in pthread_create(). Having the same behavior on
Windows improves portability and avoids extra #ifdef's.
* remove the redefinition of getpid() - it was defined as GetCurrentThreadId().
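A hedged, Windows-only sketch of the scheme the first bullet describes (simplified:
no attribute handling, no return-value propagation, and real code must also guard
against thread-id reuse before join):

  #include <windows.h>
  #include <process.h>
  #include <stdint.h>

  typedef DWORD my_pthread_t;                      /* the thread id is the identity */

  static my_pthread_t my_pthread_self(void)
  {
    return GetCurrentThreadId();                   /* no TLS bookkeeping needed */
  }

  static int my_pthread_create(my_pthread_t *tid,
                               unsigned (__stdcall *fn)(void *), void *arg)
  {
    unsigned id;
    uintptr_t h= _beginthreadex(NULL, 0, fn, arg, 0, &id);
    if (h == 0)
      return -1;
    CloseHandle((HANDLE) h);                       /* only the id is kept; no
                                                      my_thread_init() magic here */
    *tid= (my_pthread_t) id;
    return 0;
  }

  static int my_pthread_join(my_pthread_t tid)
  {
    HANDLE h= OpenThread(SYNCHRONIZE, FALSE, tid); /* re-acquire a handle from the id */
    if (h == NULL)
      return -1;
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
  }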
name as existing view
When trying to create a table with the same name as an existing view defined
with a join, the MySQL server crashes.
The problem is that when CREATE TABLE is issued with a name that matches a view,
the check against existing tables assumes that a base table object has always
been created.
In this case, since it is a view over multiple tables, there is no
derived table object for it.
Fixed the logic that checks whether a table with the given name already exists
so that it no longer assumes a table object has been created when the base
table is a view over multiple tables.
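A rough sketch of the corrected existence check, using placeholder types; the point
is simply that the check must not dereference a base-table pointer that is NULL for
a multi-table view:

  struct TABLE;                          // opaque base-table object
  struct TABLE_LIST
  {
    bool   is_view;                      // the name resolved to a view definition
    TABLE *table;                        // NULL for a view over multiple tables
  };

  static bool name_already_taken(const TABLE_LIST *tl)
  {
    return tl->is_view || tl->table != 0;  // an existing view is a clash even
  }                                         // without an underlying TABLE object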
bzr branch mysql-5.1-performance-version mysql-trunk # Summit
cd mysql-trunk
bzr merge mysql-5.1-innodb_plugin # which is 5.1 + Innodb plugin
bzr rm innobase # remove the builtin
Next step: build, test fixes.
procedures causes crashes!
The problem in that bug report was mostly fixed by the
patch for bug 38691.
However, the attached test case exposed another crash /
valgrind warning: a SHOW PROCESSLIST query accesses
freed memory of an SP instruction that runs in a parallel
connection.
Changes of thd->query/thd->query_length in dangerous
places are now guarded with the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
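A simplified sketch of the locking pattern, using std::mutex and std::string in
place of the server's mysys mutex and raw query buffer:

  #include <mutex>
  #include <string>

  struct THD
  {
    std::mutex  LOCK_thd_data;           // guards query below (formerly LOCK_delete)
    std::string query;                   // what SHOW PROCESSLIST reads

    void set_query(const std::string &q) // writer: the executing SP instruction
    {
      std::lock_guard<std::mutex> guard(LOCK_thd_data);
      query= q;
    }

    std::string copy_query()             // reader: e.g. the SHOW PROCESSLIST thread
    {
      std::lock_guard<std::mutex> guard(LOCK_thd_data);
      return query;                      // copy while holding the mutex, so the
    }                                    // buffer cannot be freed underneath us
  };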
There is an inconsistency with DROP DATABASE|TABLE|EVENT IF EXISTS and
CREATE DATABASE|TABLE|EVENT IF NOT EXISTS. DROP IF EXISTS statements are
binlogged even if either the DB, TABLE or EVENT does not exist. In
contrast, only CREATE EVENT IF NOT EXISTS is binlogged when the EVENT
exists.
This patch fixes the following cases for all the replication formats:
CREATE DATABASE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS ... LIKE,
CREATE TABLE IF NOT EXISTS ... SELECT.
The crash happened because for views which are joins
we have table_list->table == 0, so any
table_list->table->'any method' call leads to a crash.
The fix is to call the table_list->table->file->extra()
method for all tables belonging to the view.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table exists already) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see * in
the pseudocode below). A crash happened if the table was opened again. Fixed by
resetting the flag after error checking.
nested-loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1;                                   *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE   <-- normal reset, skipped on error
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
  assert(table->auto_increment_field_not_null == FALSE)
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not enough to store the
changes.
In a nutshell, to fix the bug, we save the position of the next character in the
cache before starting to process a statement. If there is a problem, we simply
restore the position, thus removing any effect of the statement from the cache.
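A hedged sketch of that save/restore idea, using a plain byte buffer in place of
the real binlog IO_CACHE:

  #include <string>

  static bool log_statement(std::string &cache, const std::string &events,
                            std::string::size_type max_cache_size)
  {
    std::string::size_type saved_pos= cache.size();  // position before the statement
    cache.append(events);                            // emulate writing the events
    if (cache.size() > max_cache_size)
    {
      cache.resize(saved_pos);                       // drop the partial statement so
      return false;                                  // the log is never corrupted; the
    }                                                // caller raises the error and, for
    return true;                                     // non-transactional changes, logs
  }                                                  // an incident event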
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for every non-transactional change that does not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers to
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc, sql_update.cc and sql_insert.cc
modules but the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be either classified as DDL or DML because
DDLs that do not get into the cache must generate an incident event since they
cannot be rolled back.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
Holding on to the temporary InnoDB hash index latch is an optimization in
many cases, but a pessimization in some others.
Release the temporary latches for those corner cases we (or rather, our customers,
thanks!) have identified, that is, when we are about to do something that
might take a really long time, like REPAIR or filesort.