This is a regression from the fix for bug no. 38999. A storage engine capable
of reading only a subset of a table's columns updates the corresponding bits in
the read buffer to signal that it has read NULL values for those columns. It
cannot, and should not, update any other bits. Bug no. 38999 occurred because
the implementation of UPDATE statements compares the NULL bits using memcmp,
inadvertently comparing bits that were never requested from the storage engine.
The regression was caused by the storage engine trying to alleviate the
situation by writing to all NULL bits, even those that it had no knowledge of.
This has devastating effects on the index merge algorithm, which relies on all
NULL bits, except those explicitly requested, being left unchanged.
The fix reverts the fix for bug no. 38999 in both InnoDB and the InnoDB plugin
and changes the server's method of comparing records. For engines that always
read entire rows, we proceed as usual. For engines capable of reading only
select columns, the record buffers are now compared on a column-by-column
basis. An assertion was also added so that non-comparable buffers are never
read. Some relevant copy-pasted code was also consolidated into a new function.
'CREATE TABLE IF NOT EXISTS ... SELECT' behaviour
BUG#55474, BUG#55499, BUG#55598, BUG#55616 and BUG#55777 are fixed
in this patch too.
This is the 5.1 part.
It implements:
- if the table exists, binlog two events: CREATE TABLE IF NOT EXISTS
and INSERT ... SELECT
- Insert nothing and binlog nothing on the master if the existing object
is a view. Only a warning that the table already exists is generated.
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE. This caused all errors to be
ignored. However, not all errors can be ignored. Some, such as
ER_FIELD_SPECIFIED_TWICE, will cause the INSERT to fail no matter what. But
since lex->no_error was set, the critical errors were ignored, the INSERT
failed, and neither OK nor an error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
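A minimal sketch of the save/restore pattern described above, with LEX reduced
to the single flag named in the text and the non-ignorable check passed in as
a callable (both simplifications are mine, not the server's real structure):

  #include <functional>

  struct LEX { bool no_error; };  // stand-in: only the flag matters here

  // Run a check whose errors must always reach the client, even under
  // INSERT IGNORE: turn lex->no_error off for the duration of the check
  // and restore the previous value afterwards.
  bool run_non_ignorable_check(LEX *lex,
                               const std::function<bool()> &check) {
    const bool saved = lex->no_error;
    lex->no_error = false;   // critical errors must not be suppressed
    const bool failed = check();
    lex->no_error = saved;   // back to IGNORE semantics
    return failed;
  }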
A CREATE...SELECT that fails is written to the binary log if a
non-transactional table is updated. If the logging format is ROW, the CREATE
statement and the changes are written to the binary log as distinct events,
and as a consequence the created table is not rolled back on the slave.
In this patch, we opted to let the slave go out of sync by not writing the
CREATE statement to the binary log. We do this by simply resetting the binary
log's cache.
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now the code distinguishes between IGNORE and ERROR_FOR_NULL and
saves/restores the check-field options.
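A minimal sketch of the distinction plus save/restore, assuming a pared-down
session type (the enum values mirror the names above; everything else,
including run_trigger, is illustrative):

  enum enum_check_fields {
    CHECK_FIELD_IGNORE,          // silently ignore field check problems
    CHECK_FIELD_WARN,            // convert problems to warnings
    CHECK_FIELD_ERROR_FOR_NULL   // error on NULL into a NOT NULL field
  };

  struct Session { enum_check_fields count_cuted_fields; };

  void run_trigger(Session * /*session*/) { /* trigger body runs here */ }

  // Execute a trigger without letting it leak its check-field mode into
  // the statement that fired it: save, set, run, restore.
  void fire_trigger_isolated(Session *s, enum_check_fields trigger_mode) {
    const enum_check_fields saved = s->count_cuted_fields;
    s->count_cuted_fields = trigger_mode;
    run_trigger(s);
    s->count_cuted_fields = saved;  // UPDATE...SET...NULL now behaves the
                                    // same before and after the trigger
  }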
Problem: when inserting a record we don't set the unused null bits in the
record buffer if no default field values are used.
That may lead to a wrong live checksum calculation.
Fix: set the unused null bits in the record buffer in such cases.
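A minimal sketch of the fix's idea, with hypothetical names and the record
layout simplified to a NULL bitmap at the start of the buffer: force the
unused trailing bits of the bitmap to a known value so that a byte-wise
checksum over the record is deterministic.

  #include <cstddef>
  #include <cstdint>

  // Set the unused (padding) bits of the last NULL-bitmap byte to 1 so
  // that two records with identical data always checksum identically.
  void set_unused_null_bits(uint8_t *record, size_t null_bytes,
                            size_t null_bits_used) {
    const size_t used_in_last_byte = null_bits_used % 8;
    if (null_bytes == 0 || used_in_last_byte == 0)
      return;  // no partial byte, nothing unused to normalize
    const uint8_t unused_mask =
        static_cast<uint8_t>(0xFFu << used_in_last_byte);
    record[null_bytes - 1] |= unused_mask;
  }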
The problem was a "self-deadlock" if the connection issuing INSERT DELAYED
had both the global read lock (FLUSH TABLES WITH READ LOCK) and LOCK TABLES
mode active. The table being inserted into had to be different from the
table(s) locked by LOCK TABLES.
For INSERT DELAYED, the connection thread waits until the handler thread has
opened and locked its table before returning. But since the global read lock
was active, the handler thread would be unable to lock and would wait for the
global read lock to go away.
So the handler thread would be waiting for the connection thread to release
the global read lock while the connection thread was waiting for the handler
thread to lock the table. This gave a "self-deadlock" (same connection,
different threads).
The deadlock would only happen if we also had LOCK TABLES mode, since the
INSERT would otherwise try to get protection against the global read lock
before starting the handler thread. It would then notice that the global read
lock is owned by the same connection and report ER_CANT_UPDATE_WITH_READLOCK.
This patch removes the deadlock by reporting ER_CANT_UPDATE_WITH_READLOCK
also if we are inside LOCK TABLES mode.
Test case added to delayed.test.
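A minimal sketch of the decision the patch makes, with stand-in types (the
real check lives in the INSERT DELAYED path and uses the server's lock
structures):

  // Stand-in for the relevant connection state.
  struct Connection {
    bool owns_global_read_lock;  // FLUSH TABLES WITH READ LOCK held
    bool in_lock_tables_mode;    // LOCK TABLES active
  };

  // Before the fix, the LOCK TABLES path skipped this early check and
  // started a handler thread that then deadlocked against the global
  // read lock held by its own connection. Now both paths refuse with
  // ER_CANT_UPDATE_WITH_READLOCK.
  bool must_refuse_delayed_insert(const Connection &conn) {
    return conn.owns_global_read_lock;  // regardless of LOCK TABLES mode
  }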
Implemented the server infrastructure for the fix:
1. Added a function LEX_STRING *thd_query_string(THD *thd) to return
a LEX_STRING structure instead of a char *.
This is the function that must be called in InnoDB instead of
thd_query().
2. Did some encapsulation in THD: aggregated thd_query and
thd_query_length into a LEX_STRING and made accessor and mutator
methods for easy code updating.
3. Updated the server code to use the new methods where applicable.
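A minimal sketch of the accessor's shape, assuming a pared-down THD
(LEX_STRING as a char pointer plus length, matching the description above):

  #include <cstddef>

  struct LEX_STRING {
    char *str;
    size_t length;
  };

  // Stand-in THD: query text and length live together in one struct,
  // so readers can never pair a stale pointer with a stale length.
  struct THD {
    LEX_STRING query_string;

    const LEX_STRING &query() const { return query_string; }
    void set_query(char *text, size_t len) {
      query_string.str = text;
      query_string.length = len;
    }
  };

  // C-linkage accessor for storage engines such as InnoDB, replacing
  // the old char *thd_query().
  extern "C" LEX_STRING *thd_query_string(THD *thd) {
    return &thd->query_string;
  }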
name as existing view
When trying to create a table with the same name as an existing view
defined over a join, the MySQL server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
while verifying against the existing tables we assume that a base table
object has always been created.
In this case, since it is a view over multiple tables, we don't have the
derived table object.
Fixed the logic which checks whether a table with that name exists so that it
does not assume a table object has been created when the base object is a
view over multiple tables.
procedures causes crashes!
The problem in that bug report was mostly fixed by the
patch for bug 38691.
However, the attached test case exposed another crash /
valgrind warning problem: a SHOW PROCESSLIST query could
access freed memory of an SP instruction running in a
parallel connection.
Changes of thd->query/thd->query_length in dangerous
places have been guarded with the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
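A minimal sketch of the guarded update, using std::mutex as a stand-in for
the server's LOCK_thd_data and simplified THD members:

  #include <cstddef>
  #include <cstring>
  #include <mutex>

  struct THD {
    std::mutex LOCK_thd_data;       // guards query/query_length
    const char *query = nullptr;
    size_t query_length = 0;

    // Writers (e.g. an SP switching to its next instruction's text)
    // swap the pointer and length atomically with respect to readers.
    void set_query(const char *text, size_t len) {
      std::lock_guard<std::mutex> guard(LOCK_thd_data);
      query = text;
      query_length = len;
    }
  };

  // A reader such as SHOW PROCESSLIST takes the same mutex before
  // copying another thread's query text, so it never reads freed
  // memory. Assumes buf_size > 0.
  size_t copy_query(THD *other, char *buf, size_t buf_size) {
    std::lock_guard<std::mutex> guard(other->LOCK_thd_data);
    size_t n = other->query_length;
    if (n > buf_size - 1) n = buf_size - 1;
    if (other->query) std::memcpy(buf, other->query, n);
    buf[n] = '\0';
    return n;
  }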
There is an inconsistency between DROP DATABASE|TABLE|EVENT IF EXISTS and
CREATE DATABASE|TABLE|EVENT IF NOT EXISTS. DROP IF EXISTS statements are
binlogged even if the DB, TABLE or EVENT does not exist. In
contrast, only CREATE EVENT IF NOT EXISTS is binlogged when the event
already exists.
This patch fixes the following cases for all the replication formats:
CREATE DATABASE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS ... LIKE,
CREATE TABLE IF NOT EXISTS ... SELECT.
The crash happened because for views which are joins
we have table_list->table == 0, and calling any method through
table_list->table leads to a crash.
The fix is to call the table_list->table->file->extra()
method for all tables underlying the view.
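A minimal sketch of the fix's shape, with illustrative stand-in types (the
real server walks TABLE_LIST structures; the 'underlying' list name here is
hypothetical):

  // Stand-ins for the server structures involved.
  struct handler { void extra(int /*operation*/) { /* engine hint */ } };
  struct TABLE   { handler *file; };

  struct TABLE_LIST {
    TABLE *table;             // == 0 for a view over multiple tables
    TABLE_LIST *underlying;   // hypothetical: base tables behind a view
    TABLE_LIST *next;
  };

  // Instead of dereferencing table_list->table (a crash for views that
  // are joins), apply extra() to every base table behind the view.
  void apply_extra(TABLE_LIST *tl, int operation) {
    if (tl->table) {                       // plain base table
      tl->table->file->extra(operation);
      return;
    }
    for (TABLE_LIST *t = tl->underlying; t; t = t->next)
      if (t->table)
        t->table->file->extra(operation);
  }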
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table exists already) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus, if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset
(see * in the pseudocode below). A crash happened if the table was opened
again. Fixed by resetting the flag after error checking.
nested_loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1; *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
  assert(table->auto_increment_field_not_null == FALSE)
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not enough to store the
changes.
In a nutshell, to fix the bug, we save the position of the next character in
the cache before starting to process a statement. If there is a problem, we
simply restore the position, thus removing any effect of the statement from
the cache.
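A minimal sketch of the save/restore idea, with a plain byte buffer standing
in for the binlog cache (the server's real cache is an IO_CACHE):

  #include <cstddef>
  #include <string>

  struct BinlogCache {
    std::string bytes;       // stand-in for the real IO_CACHE
    size_t saved_pos = 0;

    void save_position()    { saved_pos = bytes.size(); }
    void append(const std::string &event) { bytes += event; }
    void restore_position() { bytes.resize(saved_pos); }  // drop statement
  };

  // Returns false if the statement's events would overflow the cache;
  // in that case the cache is rolled back to its pre-statement state
  // and the caller must raise an error (and, for non-transactional
  // changes, log an incident event instead).
  bool cache_statement(BinlogCache &cache, const std::string &events,
                       size_t max_binlog_cache_size) {
    cache.save_position();
    cache.append(events);
    if (cache.bytes.size() > max_binlog_cache_size) {
      cache.restore_position();
      return false;
    }
    return true;
  }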
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for non-transactional changes that do not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers of
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions, such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc and sql_update.cc modules, but
the remaining calls spread all over the code should be handled in BUG#37148.
Secondly, statements must be classified as either DDL or DML, because DDLs that
do not fit into the cache must generate an incident event since they cannot be
rolled back.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
Holding on to the temporary InnoDB hash index latch is an optimization in
many cases, but a pessimization in some others.
Release the temporary latches for those corner cases we (or rather, our
customers, thanks!) have identified, that is, when we are about to do
something that might take a really long time, like REPAIR or filesort.
Make the callers of the Query_log_event and Execute_load_log_event
constructors and of THD::binlog_query provide the error code
instead of having the constructors figure out the error code.
When the thread executing a DDL was killed after it had finished its
execution but before writing the binlog event, the error code in
the binlog event could wrongly be set to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixed the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also includes the following changes in order to
provide the test case:
1) modified mysqltest to support variables in the connect command
2) modified mysql-test-run.pl, adding a new variable MYSQL_SLAVE to
run the mysql client against the slave mysqld.
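A minimal sketch of the policy the callers now implement (the names below are
illustrative stand-ins, not the server's real API):

  // Error state visible when the binlog event is constructed.
  struct ExecState {
    int stmt_error;  // error raised by the statement itself, 0 if none
    int kill_error;  // e.g. ER_QUERY_INTERRUPTED if the thread was killed
  };

  // The caller, not the event constructor, picks the code. For a DDL
  // that already completed its work, the kill status is deliberately
  // ignored so ER_SERVER_SHUTDOWN / ER_QUERY_INTERRUPTED cannot end up
  // in the logged event.
  int error_code_for_event(const ExecState &s, bool is_ddl) {
    if (is_ddl)
      return s.stmt_error;
    return s.stmt_error ? s.stmt_error : s.kill_error;
  }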
When an 'INSERT DELAYED' operation is performed, the time_zone info is not kept
in the row info. So when we do the actual insert some time later, the time_zone
is not written into the binlog. This causes wrong results for TIMESTAMP columns
on the slave.
Our solution is to add the time_zone info to the delayed row and to restore the
time_zone from the row info when that row is executed later by another thread.
This way we write the correct time_zone info into the binlog and get the
correct result on the slave.
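A minimal sketch of carrying the time zone with each delayed row, using
stand-in types (the server stores a Time_zone object; a name string is used
here for brevity):

  #include <string>

  struct DelayedRow {
    std::string record;          // packed row image (stand-in)
    std::string time_zone_name;  // captured from the client session
  };

  struct HandlerThreadSession { std::string time_zone_name; };

  // The handler thread switches to the row's time zone before doing
  // the insert (and binlogging it), then switches back.
  void execute_delayed_row(HandlerThreadSession &session,
                           const DelayedRow &row) {
    const std::string saved = session.time_zone_name;
    session.time_zone_name = row.time_zone_name;
    /* ... perform the insert and write the binlog event here ... */
    session.time_zone_name = saved;
  }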
The problem is that select queries executed concurrently with
a concurrent insert on a MyISAM table could be cached if the
select started after the query cache invalidation but before
the unlock of tables performed by the concurrent insert. This
race could happen because the concurrent insert was failing
to prevent caching of select queries running at the same time.
The solution is to add an 'uncacheable' status flag to signal
that a concurrent insert is being performed on the table and
that queries executing at the same time shouldn't cache the
results.
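A minimal sketch of the flag-based handshake, with stand-in types (the real
flag lives in MyISAM's shared per-table state):

  #include <atomic>

  // Shared per-table state, visible to both the inserting thread and
  // concurrent SELECTs.
  struct ShareState {
    std::atomic<int> active_concurrent_inserts{0};
  };

  // Held by the concurrent insert from before it writes until after it
  // unlocks the table; while active, SELECT results must not be cached.
  struct ConcurrentInsertGuard {
    ShareState &share;
    explicit ConcurrentInsertGuard(ShareState &s) : share(s) {
      share.active_concurrent_inserts.fetch_add(1);
    }
    ~ConcurrentInsertGuard() {
      share.active_concurrent_inserts.fetch_sub(1);
    }
  };

  // A SELECT consults the flag before storing its result in the cache.
  bool may_cache_result(const ShareState &share) {
    return share.active_concurrent_inserts.load() == 0;
  }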
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
upgrading lock, even with low_priority_updates
The problem is that there is no mechanism to control whether a
delayed insert takes a high or low priority lock on a table.
The solution is to modify the delayed insert thread ("handler")
to take into account the global value of low_priority_updates
when taking table locks. The value of low_priority_updates is
retrieved when the insert delayed thread is created and will
remain the same for the duration of the thread.
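A minimal sketch of sampling the setting once at thread creation (the lock
names are illustrative, not the server's real thr_lock types):

  // Illustrative lock priorities.
  enum LockType { WRITE_LOW_PRIORITY, WRITE_HIGH_PRIORITY };

  struct DelayedHandlerThread {
    const LockType lock_type;  // fixed for the thread's whole lifetime

    // The global low_priority_updates value is read exactly once, when
    // the handler thread is created.
    explicit DelayedHandlerThread(bool global_low_priority_updates)
        : lock_type(global_low_priority_updates ? WRITE_LOW_PRIORITY
                                                : WRITE_HIGH_PRIORITY) {}
  };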
If a delayed insert failed to upgrade the lock, it was not
freeing the temporary memory storage used to keep
newly constructed blob values in memory.
Fixed by iterating over the remaining rows in the delayed
insert rowset and freeing the blob storage for each row.
No test suite because it involves concurrent delayed inserts
on a table and cannot easily be made deterministic.
Added a correct valgrind suppression for Fedora 9.
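A minimal sketch of the cleanup loop described for the failed lock upgrade,
with stand-in types for the queued delayed-insert rows:

  #include <list>
  #include <string>

  // Stand-in for a queued delayed-insert row owning its blob storage.
  struct QueuedRow { std::string *blob_storage = nullptr; };

  // On a failed lock upgrade, walk the rows that will never be written
  // and release each row's blob storage so nothing leaks.
  void free_remaining_rows(std::list<QueuedRow> &rows) {
    for (QueuedRow &row : rows) {
      delete row.blob_storage;
      row.blob_storage = nullptr;
    }
    rows.clear();
  }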