transaction
BUG#52616 Temp table prevents switch binlog format from STATEMENT to ROW
Before WL#2687 and BUG#46364, every non-transactional change that happened
after a transactional change was written to the trx-cache and flushed upon
committing the transaction. WL#2687 and BUG#46364 changed this behavior:
non-transactional changes are now written to the binary log upon committing
the statement.
A binary log event is identified as transactional or non-transactional through
a flag in the Log_event, which is set according to the underlying storage
engine the event stems from. In the current bug, this flag was not being
set properly when DROP TEMPORARY TABLE was executed.
However, while fixing this bug we figured out that changes to temporary tables
should always be written to the trx-cache if there is an ongoing transaction.
Otherwise, binlog events would be produced in the reversed order.
Regarding concurrency, keeping changes to temporary tables in the trx-cache is
also safe as temporary tables are only visible to the owner connection.
In this patch, we classify the following statements as unsafe:
1 - INSERT INTO t_myisam SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_myisam
3 - CREATE TEMPORARY TABLE t_myisam_temp SELECT * FROM t_myisam
On the other hand, the following statements are classified as safe:
1 - INSERT INTO t_innodb SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_innodb
The patch also guarantees that transactions that contain a DROP TEMPORARY
TABLE are always written to the binary log regardless of the mode and the
outcome: commit or rollback. In particular, the DROP TEMPORARY TABLE is
extended with the IF EXISTS clause when the current statement logging format
is set to row.
Finally, the patch allows switching from STATEMENT to MIXED/ROW when there
are temporary tables, but the reverse is not possible.
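A minimal sketch of the new switching rule, reusing the temporary table
name from the classification above:

  SET SESSION binlog_format = 'STATEMENT';
  CREATE TEMPORARY TABLE t_myisam_temp (a INT) ENGINE = MyISAM;
  SET SESSION binlog_format = 'ROW';        -- allowed: STATEMENT to MIXED/ROW
  SET SESSION binlog_format = 'STATEMENT';  -- rejected while the temporary
                                            -- table still exists
  DROP TEMPORARY TABLE t_myisam_temp;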
This assertion could be triggered during execution of OPTIMIZE TABLE for
InnoDB tables. As part of optimize for InnoDB tables, the table is recreated
and then opened again. If the reopen failed for any reason, the assertion
would be triggered. This could for example be caused by a concurrent DROP
TABLE executed by a different connection. The reason for the assertion was
that any failures during reopening were ignored.
This patch fixes the problem by making sure that the result of reopening the
table is checked and that any error messages are sent to the client.
Test case added to innodb_mysql_sync.test.
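An illustrative two-connection timeline for the race described above,
assuming a hypothetical table t1:

  -- connection 1:
  CREATE TABLE t1 (a INT) ENGINE = InnoDB;
  OPTIMIZE TABLE t1;  -- recreates the table, then reopens it
  -- connection 2, racing with the reopen step above:
  DROP TABLE t1;      -- can make the reopen fail; the failure is now
                      -- reported to the client instead of being ignored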
e.g. MYSQL_AUDIT_GENERAL_ERROR
General audit API (MYSQL_AUDIT_GENERAL_CLASS) didn't expose event
subclass to plugins.
This patch exposes event subclass to plugins via
struct mysql_event_general::event_subclass.
This change is not compatible with existing general audit plugins.
Audit interface major version has been incremented.
for write by another connection
The problem was that if a table was locked in one connection by
LOCK TABLES ... WRITE, REPAIR TABLE or OPTIMIZE TABLE, SHOW CREATE
TABLE from another connection would be blocked. As SHOW CREATE TABLE
only reads metadata about the table, such blocking is not needed.
The problem was that when SHOW CREATE TABLE tried to get a metadata
lock on the table in order to open it, it used the wrong type of
metadata lock request. It used MDL_SHARED_READ which is used when
the intent is to read both table metadata and table data. Instead
it should have used MDL_SHARED_HIGH_PRIO which signifies an intent
to only read metadata.
This patch fixes the problem by making sure SHOW CREATE TABLE uses
the MDL_SHARED_HIGH_PRIO metadata lock request type when trying to
open the table. The patch also fixes a similar problem with the
mysql_list_fields API call.
Test case added to show_check.test.
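A minimal sketch of the scenario that used to block, assuming a
hypothetical table t1:

  -- connection 1:
  LOCK TABLES t1 WRITE;
  -- connection 2: reads only metadata, so it no longer blocks
  SHOW CREATE TABLE t1;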
during rqg_mdl_deadlock test
The problem was that if two connection threads simultaneously try
to execute "SET GLOBAL EVENT_SCHEDULER = OFF", one of them could
hang waiting for the scheduler to stop.
The first connection thread would kill the event scheduler thread
and then start waiting for it to exit. The second connection thread
would then find the event scheduler thread in the process of exiting
and also wait for it to exit. However, since the event scheduler
thread used signal to wake only one waiting thread, the other connection
thread would be left waiting.
This bug was a regression introduced by the fix for Bug#51160.
Before Bug#51160 it was not possible for two connection threads to
try to stop the event scheduler thread simultaneously.
This patch fixes the problem by making sure the event scheduler
thread uses broadcast to notify all waiters that it is exiting.
No test case added as this would require adding debug sync points
to parts of the code where sync points are currently not used.
The patch has been tested with the non-deterministic test case
from the bug description as well as using the RQG.
Allow stored procedure variables in LIMIT clause.
Only allow variables of INTEGER types.
Handle negative values by means of an implicit cast to UNSIGNED
(similarly to prepared statement placeholders).
Add tests.
Make sure replication works by not doing NAME_CONST substitution
for variables in LIMIT clause.
Add replication tests.
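A minimal sketch of the new capability, assuming a table t1 exists:

  CREATE PROCEDURE p1(IN lim INT)
    SELECT * FROM t1 LIMIT lim;  -- an INTEGER routine variable in LIMIT
  CALL p1(5);                    -- returns at most 5 rows
  CALL p1(-1);                   -- negative values are cast to UNSIGNED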
ChangeSet@1.2703, 2007-12-07 09:35:28-05:00, cmiller@zippy.cornsilk.net +40 -0
Bug#13174: SHA2 function
Patch contributed by Bill Karwin, paper unnumbered CLA in Seattle
Implement SHA2 functions.
Chad added code to make it work with YaSSL. Also, he removed the
(probable) bug of embedded server never using SSL-dependent
functions. (libmysqld/Makefile.am didn't read ANY autoconf defs.)
Function specification:
SHA2( string cleartext, integer hash_length )
-> string hash, or NULL
where hash_length is one of 224, 256, 384, or 512. If either
argument is NULL or the length is unsupported, then the result is
NULL. Otherwise, the result is the hex encoding of a digest of
hash_length bits (i.e. hash_length / 4 characters), or NULL.
Include the canonical hash examples from the NIST in the test
results.
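For example, under the specification above:

  SELECT SHA2('abc', 256);  -- 64-hex-digit SHA-256 digest
  SELECT SHA2('abc', 123);  -- unsupported hash_length: NULL
  SELECT SHA2(NULL, 512);   -- NULL argument: NULL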
---
Polish and address concerns of reviewers.
Problem: Segmentation fault in add_group_and_distinct_keys() when accessing
a field of what is assumed to be an Item_field object.
Cause: In case of views, the item added to the list by
is_indexed_agg_distinct() was not of type Item_field, but Item_ref.
Resolution: Instead, add the real Item_field object, the one referred to by
the Item_ref object, to the list.
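A hypothetical query shape for the crash (table, view, and column names
invented for illustration):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  CREATE VIEW v1 AS SELECT a, b FROM t1;
  SELECT COUNT(DISTINCT a) FROM v1 GROUP BY b;  -- the select item reaches
                                                -- the optimizer as an Item_ref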
when trying to build innodb as plugin.
The reason for the error is a mismatch between the mysql_tmpdir_list
declaration in mysqld.h and its usage in ha_innodb.cc.
Add the missing MYSQL_PLUGIN_IMPORT to mysql_tmpdir_list
(variables exported by the server and used by a plugin need it).
Tree cleanup after the last major merges in mysql-trunk:
The files sql/lex_hash.h and sql/sql_yacc.h are automatically
generated, and should not be checked into the configuration management
system. These files are now removed.
No changes are required for .bzrignore, which already listed these files
(and similar files in libmysqld/).
The file storage/perfschema/unittest/pfs_timer-t.cc did not build
after the header files refactoring affecting mysql_priv.h
The file now builds properly using sql_priv.h
Adding my_global.h first in all files using
NO_EMBEDDED_ACCESS_CHECKS.
Correcting a merge problem resulting from a changed definition
of check_some_access compared to the original patches.
This patch:
- Moves all definitions from the mysql_priv.h file into
header files for the component where the variable is
defined
- Creates header files if the component lacks one
- Eliminates all include directives from mysql_priv.h
- Eliminates all circular include cycles
- Rename time.cc to sql_time.cc
- Rename mysql_priv.h to sql_priv.h
BUG#46364 introduced the flag binlog_direct_non_transactional_updates which,
when "ON", makes non-transactional changes (N-changes, as opposed to
transactional T-changes) be written to the binary log upon committing the
statement. When "OFF", the option was supposed to mimic the behavior in 5.1.
However, the implementation did not mimic that behavior correctly and the
following bugs popped up:
Case #1: N-changes executed within a transaction would go into
the S-cache. When later in the same transaction a
T-change occurs, N-changes following it were written
to the T-cache instead of the S-cache. In some cases,
this raises problems. For example, a
Table_map_log_event being written initially into the
S-cache, together with the initial N-changes, would be
absent from the T-cache. This would log N-changes
orphaned from a Table_map_log_event (thence discarded
at the slave). (MIXED and ROW)
Case #2: When rolling back a transaction, the N-changes that
might be in the T-cache were disregarded and
truncated along with the T-changes. (MIXED and ROW)
Case #3: When a MIXED statement (TN, i.e. one changing both
transactional and non-transactional tables) is ahead of any
other T-changes in the transaction and it fails, it is kept
in the T-cache until the transaction ends. This is not the
case in 5.1 or Betony (5.5.2), where the failed TN statement
would be written to the binlog at the same instant it failed
and not deferred until transaction end. (SBR)
To fix these problems, we do the following:
For Cases #1 and #2, we circumvent them:
1. by not letting binlog_direct_non_transactional_updates
affect MIXED and RBR. These modes keep the behavior
provided by WL#2687. Although this makes Celosia behave
differently from 5.1, an execution will always be
safe under such modes in the sense that slaves will never
go out of sync. In 5.1, using either MIXED or ROW while
mixing N-statements and T-statements was not safe.
For Case #3, we don't actually fix it. We:
1. keep the behavior and make all MIXED statements, whether
they end up failing or not and whether they are up front in
the transaction or after some transactional change, always
be stored in the T-cache. This means such a statement is
written to the binary log on transaction commit/rollback only.
2. make the warning message even more specific about the
MIXED statement and SBR.
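An illustrative transaction for Case #1 (hypothetical table names;
t_innodb uses a transactional engine, t_myisam a non-transactional one):

  BEGIN;
  INSERT INTO t_myisam VALUES (1);  -- N-change: goes to the S-cache
  INSERT INTO t_innodb VALUES (1);  -- T-change: goes to the T-cache
  INSERT INTO t_myisam VALUES (2);  -- N-change after a T-change: was being
                                    -- written to the T-cache, separated from
                                    -- its Table_map_log_event in the S-cache
  COMMIT;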
debug_max
There was a buffer overrun when unpacking the date
field. Incidentally, this seems to affect only Solaris x86_64
debug builds, but other platforms may be vulnerable as well.
In particular, the buffer size used did not take into
consideration that the '\0' character would be written into
it.
We fix this by increasing the size of the buffer to
accommodate one extra byte (the one for the '\0').
which was detected during the build of 5.5.3-m3.
Fixing this requires version 9 of IBM C++.
More fixes will still be needed; it is very strict
(or rather: a bit picky?) about templates.
Conflicts:
Text conflict in client/mysqlbinlog.cc
Text conflict in mysql-test/Makefile.am
Text conflict in mysql-test/collections/default.daily
Text conflict in mysql-test/r/mysqlbinlog_row_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_typeconv_innodb.result
Text conflict in mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test
Text conflict in mysql-test/suite/rpl/t/rpl_row_create_table.test
Text conflict in mysql-test/suite/rpl/t/rpl_slave_skip.test
Text conflict in mysql-test/suite/rpl/t/rpl_typeconv_innodb.test
Text conflict in mysys/charset.c
Text conflict in sql/field.cc
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_func.cc
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/rpl_utility.cc
Text conflict in sql/rpl_utility.h
Text conflict in sql/set_var.cc
Text conflict in sql/share/Makefile.am
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_plugin.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_table.cc
Text conflict in storage/example/ha_example.h
Text conflict in storage/federated/ha_federated.cc
Text conflict in storage/myisammrg/ha_myisammrg.cc
Text conflict in storage/myisammrg/myrg_open.c
(Original patch by Sinisa Milivojevic)
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified to not adjust the bad
YEAR(4) value of 0000 to 2000.
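A minimal sketch of the comparison that changes, assuming a hypothetical
table t1:

  CREATE TABLE t1 (y YEAR(4));
  INSERT INTO t1 VALUES (2000), (0);  -- numeric 0 is stored as the bad 0000
  SELECT * FROM t1 WHERE y = 2000;    -- must no longer match the 0000 row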
Conflicts:
Text conflict in mysql-test/r/partition_innodb.result
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/item_sum.h
Text conflict in sql/log_event_old.cc
Text conflict in sql/protocol.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_yacc.yy
The problem is that when we build the condition for a
grouped result, the const part of the condition is cut off.
It happens because some parts of the 'having' condition
which refer to an outer join become const after
make_join_statistics. These parts may be lost
during further 'having' condition transformation
in JOIN::exec. The fix is adding a 'having'
condition check for const tables after
make_join_statistics is performed.
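A hypothetical query shape (invented names) where a HAVING predicate that
refers to an outer-joined table can become const once make_join_statistics
marks that table as const:

  SELECT t2.a, COUNT(*)
  FROM t1 LEFT JOIN t2 ON t1.a = t2.a
  GROUP BY t2.a
  HAVING t2.a IS NULL OR t2.a > 0;  -- parts of this predicate may turn const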
DBUG_SYNC_POINT has at least one strong limitation: it is not defined
on all platforms. It also has issues cooperating with @@debug.
All in all, its functionality is superseded by the DEBUG_SYNC facility and
there is no reason to maintain the old, less flexible one.
Fixed by adding the debug_sync_set_action() function as a facility to set up
a sync action in the server source code, and by rewriting the existing
simulations (3 were found) to use it.
A couple of tests have been reworked as well.
The patch offers a pattern for setting sync-points in replication threads
where the standard DEBUG_SYNC does not suffice to reach goals.
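For contrast, typical use of the DEBUG_SYNC facility from a debug-build
test looks like this (the sync point name here is illustrative):

  -- connection 1: park at a sync point and wait for permission to continue
  SET DEBUG_SYNC = 'after_open_tables SIGNAL opened WAIT_FOR go';
  -- connection 2: wait for connection 1 to arrive, then release it
  SET DEBUG_SYNC = 'now WAIT_FOR opened';
  SET DEBUG_SYNC = 'now SIGNAL go';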
The optimizer erroneously translated a LEFT JOIN into an INNER JOIN,
which cuts off rows with a NULL right side. It happens
because Item_row uses the not_null_tables() method from the
base (Item) class and does not calculate 'not null tables'
properly. The fix is adding the calculation of 'not null tables'
to Item_row.
The crash happens because of a discrepancy between the values of
const_tables and join->const_table_map (make_join_statistics).
Calculation of const_tables used a condition with the
HA_STATS_RECORDS_IS_EXACT flag check. Calculation of
join->const_table_map does not use this flag check.
In case of a MERGE table without a union, with an index,
the table does not become a const table and
thus join_read_const_table() is not called
for the table. join->const_table_map supposes
this table is const and later in make_join_select
this table is used for building and calculating the const
condition. As the table record buffer is not populated,
this leads to a crash.
The fix is adding a check of whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
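A hypothetical shape of the scenario (invented names): a MERGE table
declared without a UNION clause but with an index, ending up on the const
side of a join:

  CREATE TABLE m1 (a INT, KEY (a)) ENGINE = MRG_MYISAM;
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  SELECT * FROM t1, m1 WHERE t1.a = m1.a;  -- m1 looks const (0 rows) but its
                                           -- record buffer is never populated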
All numeric operators and functions on integer, floating point
and DECIMAL values now throw an 'out of range' error rather
than returning an incorrect value or NULL, when the result is
out of supported range for the corresponding data type.
Some test cases in the test suite had to be updated
accordingly either because the test case itself relied on a
value returned in case of a numeric overflow, or because a
numeric overflow was the root cause of the corresponding bugs.
The latter tests are no longer relevant, since the expressions
used to trigger the corresponding bugs are not valid anymore.
However, such test cases have been adjusted and kept "for the
record".
for InnoDB
The class Field_bit_as_char stores the metadata for the
field incorrectly because bytes_in_rec and bit_len are set
to (field_length + 7) / 8 and 0 respectively, while
Field_bit has the correct values field_length / 8 and
field_length % 8. For example, for BIT(10), Field_bit stores
bytes_in_rec = 1 and bit_len = 2, while Field_bit_as_char
stores bytes_in_rec = 2 and bit_len = 0.
Solved the problem by re-computing the values for the
metadata based on the field_length instead of using the
bytes_in_rec and bit_len variables.
To handle compatibility with old servers, a table map
flag was added to indicate that the bit computation is
exact. If the flag is clear, the slave computes the
number of bytes required to store the bit field and
compares that instead, effectively allowing replication
*without conversion* from any field length that requires
the same number of bytes to store.
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification) we need to disable
in-place alter table because we can't ask the storage engine to have
two copies of the index with the same name even temporarily (if we
first do the ADD INDEX and then DROP INDEX) and we can't modify
indexes that are needed by e.g. foreign keys if we first do
DROP INDEX and then ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
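A minimal sketch of such an index modification, assuming a hypothetical
table t1:

  CREATE TABLE t1 (a INT, b INT, KEY k (a));
  ALTER TABLE t1 DROP INDEX k, ADD INDEX k (b);  -- same index name: in-place
                                                 -- ALTER TABLE is disabled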
In BUG#49562 we fixed the case where numeric user var events
would not serialize the flag stating whether the value was signed
or unsigned (unsigned_flag). This fixed the case that the slave
would get an overflow while treating the unsigned values as
signed.
In this bug, we find that the unsigned_flag can sometimes change
between the moment that the user variable's value is recorded for
binlogging purposes and the actual binlogging time. Since we take
the unsigned_flag from the runtime variable data at binlogging
time, while the variable value comes from the copy taken earlier
in the execution, the User_var_log_event may end up with an
inconsistency between the variable value and its unsigned_flag.
We fix this by also copying the unsigned_flag of the
user_var_entry when its value is copied, for binlogging
purposes. Later, at binlogging time, we use the copied
unsigned_flag and not the one in the runtime user_var_entry
instance.
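A hypothetical statement shape for the inconsistency (invented table
name): the value of @v is copied when the statement starts, but @v is
reassigned with a different signedness before the statement is binlogged:

  SET @v = CAST(1 AS UNSIGNED);        -- unsigned_flag is set
  INSERT INTO t1 SELECT @v, @v := -1;  -- the copied value is unsigned, but
                                       -- the runtime entry is signed by the
                                       -- time the event is written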
DDL no longer aborts mysql_lock_tables(), and hence
we no longer need to support the need_reopen flag of this
call.
Remove the flag, and all the code in the server
that was responsible for handling the case when
it was set. This allowed us to simplify
open_and_lock_tables_derived(), the delayed thread, and
multi-update.
Rename MYSQL_LOCK_IGNORE_FLUSH to MYSQL_OPEN_IGNORE_FLUSH,
since we now only support this flag in open_table().
Rename MYSQL_LOCK_PERF_SCHEMA to MYSQL_LOCK_LOG_TABLE,
to avoid confusion.
Move the wait for the global read lock for cases
when we do updates in SELECT f1() or DO (UPDATE) to
open_table() from mysql_lock_tables(). When waiting
for the read lock, we could raise need_reopen flag,
which is no longer present in mysql_lock_tables().
Since the block responsible for waiting for GRL
was moved, MYSQL_LOCK_IGNORE_GLOBAL_READ_LOCK
was renamed to MYSQL_OPEN_IGNORE_GLOBAL_READ_LOCK.
This deadlock could occur between one connection executing
SET GLOBAL EVENT_SCHEDULER = ON and another executing SET GLOBAL
EVENT_SCHEDULER = OFF. The bug was introduced by WL#4738.
The first connection would hold LOCK_event_metadata (protecting
the global variable) while trying to lock LOCK_global_system_variables
when starting the event scheduler thread (in THD::init()).
The second connection would hold LOCK_global_system_variables
while trying to get LOCK_event_scheduler after stopping the event
scheduler inside event_scheduler_update().
This patch fixes the problem by not using LOCK_event_metadata to
protect the event_scheduler variable. It is still protected using
LOCK_global_system_variables. This fixes the deadlock as it removes
one of the two mutexes used to produce it.
However, this patch opens up the possibility that the event_scheduler
variable and the real event scheduler state can become out of sync
(e.g. variable = OFF, but scheduler running). But this can only
happen under very unlikely conditions - two concurrent SET GLOBAL
statements, with one thread interrupted at the exact wrong moment.
This is preferable to having the possibility of a deadlock.
This patch also fixes a bug where it was possible to exit create_event()
without releasing LOCK_event_metadata if running out of memory during
its execution.
No test case added since a repeatable test case would have required
excessive use of new sync points. Instead we rely on the fact that
this bug was easily reproducible using RQG tests.