The crash happened because for views which are joins
we have table_list->table == 0, so any method call
through table_list->table leads to a crash.
The fix is to perform the table->file->extra() call for
all tables belonging to the view.
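A minimal sketch of the recursive walk the new function performs; the
TABLE_LIST member used to reach the view's underlying tables and the exact
extra() flag are assumptions here, not the exact server code:

  static void prepare_for_positional_update(TABLE *table, TABLE_LIST *tables)
  {
    if (table)                                        /* base table: has a handler */
    {
      table->file->extra(HA_EXTRA_PREPARE_FOR_UPDATE);  /* flag name assumed */
      return;
    }
    /* view: table_list->table == 0, so descend into the underlying tables */
    for (TABLE_LIST *tbl= tables->merge_underlying_list; tbl;
         tbl= tbl->next_local)
      prepare_for_positional_update(tbl->table, tbl);
  }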
mysql-test/r/view.result:
test result
mysql-test/t/view.test:
test case
sql/sql_insert.cc:
added prepare_for_positional_update() function
which updates extra info about primary key for
tables belonging to view.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table already exists) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see * in
the pseudocode below). A crash happened if the table was opened again. Fixed by
resetting the flag after error checking.
nested_loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1;                                    *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
  assert(table->auto_increment_field_not_null == FALSE)
mysql-test/r/trigger.result:
Bug#44653: Test result
mysql-test/t/trigger.test:
Bug#44653: Test case
sql/sql_insert.cc:
Bug#44653: Fix: Make sure to unset this field before returning in case of error
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not large enough to store
the changes.
In a nutshell, to fix the bug, we save the position of the next character in the
cache before starting to process a statement. If there is a problem, we simply
restore the position, thus removing any effect of the statement from the cache.
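A minimal sketch of the save/restore idea, using an illustrative in-memory
buffer rather than the server's actual binlog cache:

  #include <string>

  struct StmtCache
  {
    std::string buf;                      /* stand-in for the binlog statement cache */
    size_t saved_pos;

    StmtCache() : saved_pos(0) {}
    void save_position()    { saved_pos= buf.size(); }     /* before each statement */
    void append(const std::string &event) { buf+= event; } /* write the statement's events */
    void restore_position() { buf.resize(saved_pos); }     /* drop the failed statement */
  };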
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for non-transactional changes that do not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers to
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions, such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc and sql_update.cc modules,
but the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be classified as either DDL or DML because
DDLs that do not fit into the cache must generate an incident event since they
cannot be rolled back.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
Holding on to the temporary inno hash index latch is an optimization in
many cases, but a pessimization in some others.
Release temporary latches for those corner cases we (or rather, our customers,
thanks!) have identified, that is, when we are about to do something that
might take a really long time, like REPAIR or filesort.
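A minimal sketch of the pattern applied in the files below; the helper name
follows the server convention, but treat the exact call as an assumption:

  static void do_long_running_step(THD *thd, TABLE *table)
  {
    /* Drop engine-internal temporary latches (the InnoDB adaptive hash
       index latch, for now) before work that may block other sessions
       for a long time. */
    ha_release_temporary_latches(thd);    /* call assumed */
    repair_or_filesort(table);            /* hypothetical long-running work */
  }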
sql/ha_myisam.cc:
Let go of (inno, for now) latch when doing MyISAM-repair.
(optimize passes through repair.) ("Stuck" in "Repair with
keycache".)
sql/sql_insert.cc:
Let go of (inno, for now) latch when doing CREATE...SELECT
in select_insert::send_data() -- it might take a while.
("stuck" in "Sending data")
sql/sql_select.cc:
Release temporary (inno, for now) latch on
- free_tmp_table() (this can take surprisingly long, "removing tmp table")
- create_myisam_from_heap() (HEAP table overflowing onto disk as MyISAM,
"converting HEAP to MyISAM")
Make the callers of the Query_log_event and Execute_load_log_event
constructors and of THD::binlog_query provide the error code
instead of having the constructors figure out the error code.
sql/log_event.cc:
Changed the constructors of Query_log_event and Execute_load_log_event to accept an error code argument instead of figuring it out by themselves.
sql/log_event.h:
Changed constructors of Query_log_event and Execute_load_log_event to accept the error code argument
When the thread executing a DDL was killed after it had finished its
execution but before writing the binlog event, the error code in
the binlog event could wrongly be set to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixes the problem by ignoring the kill status when
constructing the event for DDL statements.
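A minimal sketch of the calling pattern; the constructor arguments are
assumptions, not the exact signature:

  int errcode= 0;   /* DDL already completed: do not derive the code from thd->killed */
  Query_log_event qinfo(thd, query, query_length,
                        FALSE /* using_trans */, FALSE /* suppress_use */,
                        errcode);
  mysql_bin_log.write(&qinfo);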
This patch also includes the following changes in order to
provide the test case:
1) modified mysqltest to support variables in the connection command
2) modified mysql-test-run.pl to add a new variable, MYSQL_SLAVE, used to
run the mysql client against the slave mysqld.
When an INSERT DELAYED operation is performed, the time_zone info is not kept
with the row info. So when the row is actually inserted later, the time_zone
is not written into the binlog. This causes wrong results for TIMESTAMP
columns on the slave.
Our solution is to store the time_zone info with the delayed row and to
restore the time_zone from the row info when the row is later executed by
another thread. This way we write the correct time_zone info into the binlog
and get correct results on the slave.
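A minimal sketch of that hand-over, with illustrative member names rather
than the exact server code:

  class delayed_row
  {
  public:
    Time_zone *time_zone;            /* captured from the client session */
    /* ... row data ... */
  };

  /* client thread, while queueing the row: */
  row->time_zone= thd->variables.time_zone;

  /* delayed-insert handler thread, before writing and binlogging the row: */
  thd->variables.time_zone= row->time_zone;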
mysql-test/r/rpl_timezone.result:
Test result
mysql-test/t/rpl_timezone.test:
Add test for bug#41719
sql/sql_insert.cc:
Add time_zone info to the delayed row and restore the time_zone when the row is later executed by another thread.
The problem is that select queries executed concurrently with
a concurrent insert on a MyISAM table could be cached if the
select started after the query cache invalidation but before
the unlock of tables performed by the concurrent insert. This
race could happen because the concurrent insert was failing
to prevent cache of select queries happening at the same time.
The solution is to add an 'uncacheable' status flag to signal
that a concurrent insert is being performed on the table and
that queries executing at the same time shouldn't cache the
results.
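A minimal sketch of the flag protocol, with illustrative names (the real
flag lives in the MyISAM structures listed below):

  /* MyISAM side: mark the share while a concurrent insert is in flight. */
  share->state.uncacheable= TRUE;      /* signal the start of the concurrent insert */
  append_rows(info);                   /* hypothetical insert work */
  update_status(info);                 /* state is updated back ... */
  share->state.uncacheable= FALSE;     /* ... and only then is the flag zeroed */

  /* Query cache side: refuse to cache a SELECT that saw the flag set. */
  if (share->state.uncacheable)
    thd->lex->safe_to_cache_query= FALSE;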
mysql-test/r/query_cache_debug.result:
Add test case result for Bug#41098
mysql-test/t/disabled.def:
Re-enable test case.
mysql-test/t/query_cache_debug.test:
Add test case for Bug#41098
sql/sql_cache.cc:
Debug sync point for regression testing purposes.
sql/sql_insert.cc:
Remove meaningless query cache invalidate. There is already
a preceding invalidate for queries that started before the
concurrent insert.
storage/myisam/ha_myisam.cc:
Check for an active concurrent insert.
storage/myisam/mi_locking.c:
Signal the start of a concurrent insert. Flag is zeroed once
the state is updated back.
storage/myisam/myisamdef.h:
Add flag to signal an active concurrent insert.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
upgrading lock, even with low_priority_updates
The problem is that there is no mechanism to control whether a
delayed insert takes a high or low priority lock on a table.
The solution is to modify the delayed insert thread ("handler")
to take into account the global value of low_priority_updates
when taking table locks. The value of low_priority_updates is
retrieved when the insert delayed thread is created and will
remain the same for the duration of the thread.
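A minimal sketch of how the handler thread picks its write lock type once
at creation; the exact lock-type constants used by the fix are an assumption:

  /* evaluated once when the delayed-insert handler thread is created */
  thr_lock_type delayed_lock=
      global_system_variables.low_priority_updates ? TL_WRITE_LOW_PRIORITY
                                                   : TL_WRITE;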
include/thr_lock.h:
Update prototype.
mysql-test/r/delayed.result:
Add test case result for Bug#40536
mysql-test/t/delayed.test:
Add test case for Bug#40536
mysys/thr_lock.c:
Add function parameter which specifies the write lock type.
sql/sql_insert.cc:
Take a low priority write lock if the global value of low_priority_updates
was ON when the thread was created.
If a delayed insert failed to upgrade the lock, it was not
freeing the temporary memory storage used to keep
newly constructed blob values in memory.
Fixed by iterating over the remaining rows in the delayed
insert rowset and freeing the blob storage for each row.
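A minimal sketch of the cleanup loop, with illustrative names for the
rowset and the blob-freeing helper:

  /* lock upgrade failed: drain the queued rows, releasing blob storage */
  delayed_row *row;
  while ((row= di->rows.get()))
  {
    free_delayed_insert_blobs(table);  /* release per-row temporary blob buffers */
    delete row;
  }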
No test suite because it involves concurrent delayed inserts
on a table and cannot easily be made deterministic.
Added a correct valgrind suppression for Fedora 9.
mysql-test/valgrind.supp:
Added a valgrind suppression for Fedora 9.
sql/sql_insert.cc:
Bug #38693: free the blobs temp storage on error.
The failure was caused by executing a CREATE-SELECT statement that creates a
table in another database than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database, hence
creating the table in the wrong database, causing the following inserts to
fail since the table didn't exist in the given database.
Fixed the bug by adding a parameter to store_create_info() that will make
the function print the database name before the table name and used that
in the calls that write the CREATE statement to the binary log. The database
name is only printed if it is different from the currently selected database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
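A minimal sketch of the binlogging call after the change; the parameter name
show_database comes from the description below, the rest of the call is an
assumption:

  /* build the CREATE statement that will be written to the binary log */
  String query;
  store_create_info(thd, create_table, &query, create_info,
                    TRUE /* show_database: qualify the table with its database */);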
mysql-test/suite/rpl/t/rpl_row_create_table.test:
Added test to check that CREATE-SELECT into another database than the
current one replicates.
sql/sql_insert.cc:
Adding parameter to calls to store_create_info().
sql/sql_show.cc:
Adding parameter to calls to store_create_info().
Extending store_create_info() with parameter 'show_database' that will cause
the database to be written before the table name.
sql/sql_show.h:
Adding parameter to call to store_create_info() to tell if the database should be shown or not.
sql/sql_table.cc:
Adding parameter to calls to store_create_info().
Concurrent inserts produce valgrind error messages.
The reason is that the query cache is invalidated after the target table object
is closed.
Since the delayed insert thread already takes care of invalidating the query
cache, there is no need to try to synchronize an extra cache invalidation call.
The fix is to remove the query_cache_invalidate3 call altogether.
sql/sql_insert.cc:
When end_delayed_insert is called, the table_list items will be invalidated
by the concurrent insert thread. Furthermore, there is no need to call
query_cache_invalidate here since the delayed insert thread takes care of
this already.
Fix the write_record function to record auto increment
values in a consistent way.
mysql-test/r/auto_increment.result:
Updated the test result file with the output of the
new test case added to verify this bug.
mysql-test/t/auto_increment.test:
Added a new test case to verify this bug.
sql/sql_insert.cc:
The algorithm for the write_record function
in sql_insert.cc is (more emphasis given to
the parts that deal with the autogenerated values)
1) If a write fails
1.1) save the autogenerated value to avoid
thd->insert_id_for_cur_row to become 0.
1.2) <logic to handle INSERT ON DUPLICATE KEY
UPDATE and REPLACE>
2) record the first successful insert id.
explanation of the failure
--------------------------
As long as 1.1) was executed, 2) worked fine.
1.1) was always executed when REPLACE worked
with the last-row-update optimization, but
in cases where 1.1) was not executed, 2)
would fail, resulting in the autogenerated
value not being saved.
solution
--------
repeat a check for thd->insert_id_for_cur_row
being zero similar to 1.1) before 2) and ensure
that the correct value is saved.
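A minimal sketch of the added check, repeating the 1.1)-style test right
before step 2); names follow the description above, not the literal
server code:

  /* ensure a correct value is available before recording the insert id */
  if (thd->insert_id_for_cur_row == 0)
    thd->insert_id_for_cur_row= table->file->insert_id_for_cur_row;
  /* 2) record the first successful insert id */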
If a delayed insert thread was aborted by a concurrent 'truncate table'
statement, the diagnostics area would fail with an assert in a debug build
because no actual error message was pushed onto the stack despite the thread
being killed.
This patch adds an error message to the stack.
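A minimal sketch of the change; the exact error code pushed by the patch is
an assumption:

  /* report the abort through the diagnostics area, not only the error log */
  my_error(ER_QUERY_INTERRUPTED, MYF(0));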
sql/sql_insert.cc:
* Changed sql_print_error() to my_error() to avoid assertion in the DA
* Added assertion in "should never happen" branch.
Bug#38821: Assert table->auto_increment_field_not_null failed in open_table()
Problem: repeating "CREATE ... (... AUTO_INCREMENT ...) ... SELECT" may lead to
an assertion failure.
Fix: reset table->auto_increment_field_not_null after writing each
record.
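A minimal sketch of the change in the {CREATE, INSERT} .. SELECT write path;
the surrounding call is illustrative:

  error= write_record(thd, table, &info);
  table->auto_increment_field_not_null= FALSE;  /* reset even when the write failed */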
mysql-test/r/create.result:
Fix for bug#38821: Assert table->auto_increment_field_not_null failed
in open_table()
- test result.
mysql-test/t/create.test:
Fix for bug#38821: Assert table->auto_increment_field_not_null failed
in open_table()
- test case.
sql/sql_insert.cc:
Fix for bug#38821: Assert table->auto_increment_field_not_null failed
in open_table()
- reset table->auto_increment_field_not_null after writing a record
for "{CREATE, INSERT}..SELECT".
The assert claims that binlogging must have been activated, but according
to the reported how-to-repeat instructions it actually was not.
Analysis revealed that binlog_start_trans_and_stmt() was called
without first testing whether binlogging is ON.
Fixed by avoiding entering binlog_start_trans_and_stmt() if the binlog is
not activated.
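A minimal sketch of the guard; the exact condition tested by the patch is an
assumption:

  if (mysql_bin_log.is_open() && (thd->options & OPTION_BIN_LOG))
    binlog_start_trans_and_stmt();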
mysql-test/r/skip_log_bin.result:
new results.
mysql-test/t/skip_log_bin-master.opt:
the option to deactivate binlogging.
mysql-test/t/skip_log_bin.test:
regression test for the bug.
sql/sql_insert.cc:
Avoid entering binlog_start_trans_and_stmt() if the binlog is not activated.
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used, which
denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (and
have been extended to four bytes per character in 6.0),
an extra two bits have been encoded in the field metadata
word for fields of type Field_string (i.e., CHAR fields).
Since the metadata word is filled, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise xor of the extra two bits. Since the upper 4 bits
of the real type are always 1111 for Field_string, this
means that for fields of length <256, the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more, an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
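A minimal sketch of the encoding described above; the constants follow from
the text, but the helper itself is illustrative:

  void encode_char_metadata(unsigned char *metadata,
                            unsigned int real_type, unsigned int field_length)
  {
    /* most significant byte: the real type with the two extra length bits
       xor-ed into its upper nibble (always 1111 for Field_string) */
    metadata[0]= real_type ^ ((field_length & 0x300) >> 4);
    /* least significant byte: the low 8 bits of the byte length */
    metadata[1]= field_length & 0xff;
  }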
mysql-test/extra/rpl_tests/rpl_row_basic.test:
Adding test cases for replicating UTF-8 fields of lengths
of 256 or more (bytes).
mysql-test/suite/binlog/r/binlog_base64_flag.result:
Result file change.
mysql-test/suite/binlog/t/binlog_base64_flag.test:
Adding tests to trigger check that an error is generated when replicating from a
5.1.25 server for tables with a CHAR(128) but not when replicating a table with a
CHAR(63). Although the bug indicates that the limit is 83, we elected to use CHAR(63)
since 6.0 uses 4-byte UTF-8, and anything exceeding 63 would then cause the test to fail
when the patch is merged to 6.0.
mysql-test/suite/bugs/combinations:
Adding combinations file to run all bug reports in all binlog modes (where
applicable).
mysql-test/suite/bugs/r/rpl_bug37426.result:
Result file change.
mysql-test/suite/bugs/t/rpl_bug37426.test:
Added test for reported bug.
mysql-test/suite/rpl/r/rpl_row_basic_2myisam.result:
Result file change.
mysql-test/suite/rpl/r/rpl_row_basic_3innodb.result:
Result file change.
sql/field.cc:
Encoding an extra two bits in the most significant nibble (4 bits)
of the metadata word. Adding assertions to ensure that no attempt
is made to use lengths longer than supported.
Extending the compatible_field_size() function with an extra parameter
holding a Relay_log_info instance for error reporting.
Field_string::compatible_field_size() now reports an error if field
size for a CHAR is >255.
sql/field.h:
Field length is now computed from most significant 4 bits
of metadata word, or is equal to the row pack length if
there is no metadata.
Extending the compatible_field_size() function with an extra parameter
holding a Relay_log_info instance for error reporting.
sql/rpl_utility.cc:
Adding relay log parameter to compatible_field_size().
Minor refactoring to eliminate duplicate code.
sql/slave.cc:
Extending rpl_master_has_bug() with a single-argument predicate function and
a parameter to the predicate function. The predicate function can be used to
test for extra conditions for the bug before writing an error message.
sql/slave.h:
Extending rpl_master_has_bug() with a single-argument predicate function and
a parameter to the predicate function. The predicate function can be used to
test for extra conditions for the bug before writing an error message.
Also removing gratuitous default argument.
sql/sql_insert.cc:
Changing calls to rpl_master_has_bug() to adapt to changed signature.
The problem was an unclear error message, since it could suggest that
MyISAM did not support INSERT DELAYED.
Changed the error message to say that DELAYED is not supported by the
table, instead of the table's storage engine.
The confusion is that a partitioned table is in some sense using
the partitioning storage engine, which in turn uses the ordinary
storage engine. By saying that the table does not support DELAYED we
do not give any extra information about the storage engine or whether it
is partitioned.
mysql-test/r/innodb-replace.result:
Bug#31210: INSERT DELAYED crashes server when used on partitioned tables
changed error message
mysql-test/t/innodb-replace.test:
Bug#31210: INSERT DELAYED crashes server when used on partitioned tables
changed error message
mysql-test/t/merge.test:
Bug#31210: INSERT DELAYED crashes server when used on partitioned tables
changed error message
mysql-test/t/partition_hash.test:
Bug#31210: INSERT DELAYED crashes server when used on partitioned tables
changed error message
sql/share/errmsg.txt:
Bug#31210: INSERT DELAYED crashes server when used on partitioned tables
added error message for tables not supporting DELAYED
sql/sql_insert.cc:
Bug#31210: INSERT DELAYED crashes server when used on partitioned tables
changed error message