mariadb/sql/log.h

/* Copyright (c) 2005, 2012, Oracle and/or its affiliates. All rights reserved.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */
#ifndef LOG_H
#define LOG_H
#include "unireg.h" // REQUIRED: for other includes
#include "handler.h" /* my_xid */
class Relay_log_info;
class Format_description_log_event;
bool trans_has_updated_trans_table(const THD* thd);
bool stmt_has_updated_trans_table(const THD *thd);
bool use_trans_cache(const THD* thd, bool is_transactional);
bool ending_trans(THD* thd, const bool all);
bool ending_single_stmt_trans(THD* thd, const bool all);
bool trans_has_updated_non_trans_table(const THD* thd);
bool stmt_has_updated_non_trans_table(const THD* thd);
/*
Transaction Coordinator log - a base abstract class
for two different implementations
*/
class TC_LOG
{
public:
int using_heuristic_recover();
TC_LOG() {}
virtual ~TC_LOG() {}
virtual int open(const char *opt_name)=0;
virtual void close()=0;
virtual int log_xid(THD *thd, my_xid xid)=0;
virtual int unlog(ulong cookie, my_xid xid)=0;
};
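/*
  Illustrative sketch (not part of the original header): how a two-phase
  commit path is expected to use a TC_LOG implementation. The function name
  below is hypothetical; only tc_log, log_xid() and unlog() come from this
  header.

    int commit_with_tc_log(THD *thd, my_xid xid)
    {
      int cookie= tc_log->log_xid(thd, xid);      // 0 means failure
      if (!cookie)
        return 1;                                 // do not commit
      // ... ask every participating storage engine to commit xid ...
      return tc_log->unlog((ulong) cookie, xid);  // forget the prepared xid
    }
*/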
class TC_LOG_DUMMY: public TC_LOG // use it to disable the logging
{
public:
TC_LOG_DUMMY() {}
int open(const char *opt_name) { return 0; }
void close() { }
int log_xid(THD *thd, my_xid xid) { return 1; }
int unlog(ulong cookie, my_xid xid) { return 0; }
};
#ifdef HAVE_MMAP
class TC_LOG_MMAP: public TC_LOG
{
public: // only to keep Sun Forte on sol9x86 happy
typedef enum {
PS_POOL, // page is in pool
PS_ERROR, // last sync failed
PS_DIRTY // new xids added since last sync
} PAGE_STATE;
private:
typedef struct st_page {
struct st_page *next; // pages are linked in a fifo queue
my_xid *start, *end; // usable area of a page
my_xid *ptr; // next xid will be written here
int size, free; // max and current number of free xid slots on the page
int waiters; // number of waiters on condition
PAGE_STATE state; // see above
mysql_mutex_t lock; // to access page data or control structure
mysql_cond_t cond; // to wait for a sync
} PAGE;
char logname[FN_REFLEN];
File fd;
my_off_t file_length;
uint npages, inited;
uchar *data;
struct st_page *pages, *syncing, *active, *pool, *pool_last;
/*
Note that, e.g., LOCK_active only protects the 'active' pointer;
to protect the content of the active page one has to use active->lock.
The same applies to LOCK_pool and LOCK_sync.
*/
mysql_mutex_t LOCK_active, LOCK_pool, LOCK_sync;
mysql_cond_t COND_pool, COND_active;
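/*
  Illustrative sketch (an assumption, not copied from the implementation):
  the locking pattern described above when appending an xid to the active
  page. 'some_xid' is a hypothetical value.

    mysql_mutex_lock(&LOCK_active);     // protects only the 'active' pointer
    PAGE *p= active;
    mysql_mutex_lock(&p->lock);         // protects the page content itself
    mysql_mutex_unlock(&LOCK_active);
    *p->ptr++= some_xid;                // write into the page's usable area
    p->free--;
    p->state= PS_DIRTY;                 // new xids added since last sync
    mysql_mutex_unlock(&p->lock);
*/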
public:
TC_LOG_MMAP(): inited(0) {}
int open(const char *opt_name);
void close();
int log_xid(THD *thd, my_xid xid);
int unlog(ulong cookie, my_xid xid);
int recover();
private:
void get_active_from_pool();
int sync();
int overflow();
};
#else
#define TC_LOG_MMAP TC_LOG_DUMMY
#endif
extern TC_LOG *tc_log;
extern TC_LOG_MMAP tc_log_mmap;
extern TC_LOG_DUMMY tc_log_dummy;
/* log info errors */
#define LOG_INFO_EOF -1
#define LOG_INFO_IO -2
#define LOG_INFO_INVALID -3
#define LOG_INFO_SEEK -4
#define LOG_INFO_MEM -6
#define LOG_INFO_FATAL -7
#define LOG_INFO_IN_USE -8
#define LOG_INFO_EMFILE -9
/* bitmap to MYSQL_LOG::close() */
#define LOG_CLOSE_INDEX 1
#define LOG_CLOSE_TO_BE_OPENED 2
#define LOG_CLOSE_STOP_EVENT 4
/*
Maximum unique log filename extension.
Note: limited to 0x7FFFFFFF because atol() on Windows would otherwise
overflow/truncate the value.
*/
#define MAX_LOG_UNIQUE_FN_EXT 0x7FFFFFFF
/*
Number of warnings that will be printed to error log
before extension number is exhausted.
*/
#define LOG_WARN_UNIQUE_FN_EXT_LEFT 1000
class Relay_log_info;
#ifdef HAVE_PSI_INTERFACE
extern PSI_mutex_key key_LOG_INFO_lock;
#endif
/*
Note that we destroy the lock mutex in the destructor here.
This means that object instances cannot be destroyed/go out of scope
until we have reset thd->current_linfo to NULL.
*/
typedef struct st_log_info
{
char log_file_name[FN_REFLEN];
my_off_t index_file_offset, index_file_start_offset;
my_off_t pos;
bool fatal; // if the purge happens to give us a negative offset
mysql_mutex_t lock;
st_log_info()
: index_file_offset(0), index_file_start_offset(0),
pos(0), fatal(0)
{
log_file_name[0] = '\0';
mysql_mutex_init(key_LOG_INFO_lock, &lock, MY_MUTEX_INIT_FAST);
}
~st_log_info() { mysql_mutex_destroy(&lock);}
} LOG_INFO;
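/*
  Illustrative sketch (hypothetical usage, based on the note above): a
  LOG_INFO registered in thd->current_linfo must be unregistered before it
  goes out of scope, since its mutex is destroyed in the destructor.

    {
      LOG_INFO linfo;
      thd->current_linfo= &linfo;
      // ... iterate the index file with find_log_pos()/find_next_log() ...
      thd->current_linfo= NULL;         // must happen before linfo dies
    }
*/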
/*
Currently we have only 3 kinds of logging functions: old-fashioned
logs, stdout and csv logging routines.
*/
#define MAX_LOG_HANDLERS_NUM 3
/* log event handler flags */
#define LOG_NONE 1
#define LOG_FILE 2
#define LOG_TABLE 4
class Log_event;
class Rows_log_event;
enum enum_log_type { LOG_UNKNOWN, LOG_NORMAL, LOG_BIN };
enum enum_log_state { LOG_OPENED, LOG_CLOSED, LOG_TO_BE_OPENED };
/*
TODO use mmap instead of IO_CACHE for binlog
(mmap+fsync is two times faster than write+fsync)
*/
class MYSQL_LOG
{
public:
MYSQL_LOG();
void init_pthread_objects();
void cleanup();
bool open(
#ifdef HAVE_PSI_INTERFACE
PSI_file_key log_file_key,
#endif
const char *log_name,
enum_log_type log_type,
const char *new_name,
enum cache_type io_cache_type_arg);
bool init_and_set_log_file_name(const char *log_name,
const char *new_name,
enum_log_type log_type_arg,
enum cache_type io_cache_type_arg);
void init(enum_log_type log_type_arg,
enum cache_type io_cache_type_arg);
void close(uint exiting);
inline bool is_open() { return log_state != LOG_CLOSED; }
const char *generate_name(const char *log_name, const char *suffix,
bool strip_ext, char *buff);
int generate_new_name(char *new_name, const char *log_name);
protected:
/* LOCK_log is inited by init_pthread_objects() */
mysql_mutex_t LOCK_log;
char *name;
char log_file_name[FN_REFLEN];
char time_buff[20], db[NAME_LEN + 1];
bool write_error, inited;
IO_CACHE log_file;
enum_log_type log_type;
volatile enum_log_state log_state;
enum cache_type io_cache_type;
friend class Log_event;
#ifdef HAVE_PSI_INTERFACE
/** Instrumentation key to use for file io in @c log_file */
PSI_file_key m_log_file_key;
#endif
};
class MYSQL_QUERY_LOG: public MYSQL_LOG
{
public:
MYSQL_QUERY_LOG() : last_time(0) {}
void reopen_file();
bool write(time_t event_time, const char *user_host,
uint user_host_len, int thread_id,
const char *command_type, uint command_type_len,
const char *sql_text, uint sql_text_len);
bool write(THD *thd, time_t current_time, time_t query_start_arg,
const char *user_host, uint user_host_len,
ulonglong query_utime, ulonglong lock_utime, bool is_command,
const char *sql_text, uint sql_text_len);
bool open_slow_log(const char *log_name)
{
char buf[FN_REFLEN];
return open(
#ifdef HAVE_PSI_INTERFACE
key_file_slow_log,
#endif
generate_name(log_name, "-slow.log", 0, buf),
LOG_NORMAL, 0, WRITE_CACHE);
}
bool open_query_log(const char *log_name)
{
char buf[FN_REFLEN];
return open(
#ifdef HAVE_PSI_INTERFACE
key_file_query_log,
#endif
generate_name(log_name, ".log", 0, buf),
LOG_NORMAL, 0, WRITE_CACHE);
}
private:
time_t last_time;
};
class MYSQL_BIN_LOG: public TC_LOG, private MYSQL_LOG
{
private:
#ifdef HAVE_PSI_INTERFACE
/** The instrumentation key to use for @c LOCK_index. */
PSI_mutex_key m_key_LOCK_index;
/** The instrumentation key to use for @c update_cond. */
PSI_cond_key m_key_update_cond;
/** The instrumentation key to use for opening the log file. */
PSI_file_key m_key_file_log;
/** The instrumentation key to use for opening the log index file. */
PSI_file_key m_key_file_log_index;
#endif
/* LOCK_log and LOCK_index are inited by init_pthread_objects() */
mysql_mutex_t LOCK_index;
mysql_mutex_t LOCK_prep_xids;
mysql_cond_t COND_prep_xids;
mysql_cond_t update_cond;
ulonglong bytes_written;
IO_CACHE index_file;
char index_file_name[FN_REFLEN];
/*
purge_index_file is a temp file used in purge_logs so that the index file
can be updated before deleting files from disk, yielding better crash
recovery. It is created on demand the first time purge_logs is called
and then reused for subsequent calls. It is cleaned up in cleanup().
*/
IO_CACHE purge_index_file;
char purge_index_file_name[FN_REFLEN];
/*
The max size before rotation (usable only if log_type == LOG_BIN: binary
logs and relay logs).
For a binlog, max_size should be max_binlog_size.
For a relay log, it should be max_relay_log_size if this is non-zero,
max_binlog_size otherwise.
max_size is set in init(), and dynamically changed (when one does SET
GLOBAL MAX_BINLOG_SIZE|MAX_RELAY_LOG_SIZE) by fix_max_binlog_size and
fix_max_relay_log_size.
*/
ulong max_size;
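/*
  Illustrative sketch (an assumption, restating the rule above): how the
  effective rotation threshold is expected to be chosen.

    ulong effective_max_size(bool is_relay, ulong max_binlog_size,
                             ulong max_relay_log_size)
    {
      if (is_relay && max_relay_log_size != 0)
        return max_relay_log_size;      // relay log with its own limit
      return max_binlog_size;           // binlog, or relay log without one
    }
*/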
long prepared_xids; /* for tc log - number of xids to remember */
// current file sequence number for load data infile binary logging
uint file_id;
uint open_count; // For replication
int readers_count;
bool need_start_event;
/*
no_auto_events means we don't want any of these automatic events:
Start/Rotate/Stop. For example, in 4.x, when we rotate or start a relay log
we don't want a Rotate_log event to be written to it. So in 4.x this is 1
for relay logs and 0 for binlogs.
In 5.0 it is 0 for relay logs too!
*/
bool no_auto_events;
/*
Pointer to the sync period variable: for the binlog this is
sync_binlog_period, for a relay log it is sync_relay_log_period.
*/
uint *sync_period_ptr;
uint sync_counter;
inline uint get_sync_period()
{
return *sync_period_ptr;
}
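/*
  Illustrative sketch (an assumption): how sync_counter is expected to
  interact with the sync period, i.e. sync only every get_sync_period()
  writes, where a period of 0 disables periodic syncing.

    bool time_to_sync()
    {
      uint period= get_sync_period();
      if (period == 0)
        return false;                   // periodic syncing disabled
      if (++sync_counter >= period)
      {
        sync_counter= 0;
        return true;
      }
      return false;
    }
*/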
int write_to_file(IO_CACHE *cache);
/*
This is used to start writing to a new log file. The difference from
new_file() is locking. new_file_without_locking() does not acquire
LOCK_log.
*/
int new_file_without_locking();
int new_file_impl(bool need_lock);
public:
using MYSQL_LOG::generate_name;
using MYSQL_LOG::is_open;
/* Set if this is a relay log */
bool is_relay_log;
ulong signal_cnt; // update of the counter is checked by heartbeat
/*
These describe the log's format. This is used only for relay logs.
_for_exec is used by the SQL thread, _for_queue by the I/O thread. It's
necessary to have 2 distinct objects, because the I/O thread may be reading
events in a different format from what the SQL thread is reading (consider
the case of a master which has been upgraded from 5.0 to 5.1 without doing
RESET MASTER, or from 4.x to 5.0).
*/
Format_description_log_event *description_event_for_exec,
*description_event_for_queue;
MYSQL_BIN_LOG(uint *sync_period);
/*
Note that there is no destructor ~MYSQL_BIN_LOG()!
The reason is that we don't want it to be called automatically
on exit(), but only during the correct shutdown process.
*/
#ifdef HAVE_PSI_INTERFACE
void set_psi_keys(PSI_mutex_key key_LOCK_index,
PSI_cond_key key_update_cond,
PSI_file_key key_file_log,
PSI_file_key key_file_log_index)
{
m_key_LOCK_index= key_LOCK_index;
m_key_update_cond= key_update_cond;
m_key_file_log= key_file_log;
m_key_file_log_index= key_file_log_index;
}
#endif
int open(const char *opt_name);
void close();
int log_xid(THD *thd, my_xid xid);
int unlog(ulong cookie, my_xid xid);
int recover(IO_CACHE *log, Format_description_log_event *fdle);
#if !defined(MYSQL_CLIENT)
int flush_and_set_pending_rows_event(THD *thd, Rows_log_event* event,
bool is_transactional);
int remove_pending_rows_event(THD *thd, bool is_transactional);
#endif /* !defined(MYSQL_CLIENT) */
void reset_bytes_written()
{
bytes_written = 0;
}
void harvest_bytes_written(ulonglong* counter)
{
#ifndef DBUG_OFF
char buf1[22],buf2[22];
#endif
DBUG_ENTER("harvest_bytes_written");
(*counter)+=bytes_written;
DBUG_PRINT("info",("counter: %s bytes_written: %s", llstr(*counter,buf1),
llstr(bytes_written,buf2)));
bytes_written=0;
DBUG_VOID_RETURN;
}
void set_max_size(ulong max_size_arg);
void signal_update();
void wait_for_update_relay_log(THD* thd);
int wait_for_update_bin_log(THD* thd, const struct timespec * timeout);
void set_need_start_event() { need_start_event = 1; }
void init(bool no_auto_events_arg, ulong max_size);
void init_pthread_objects();
void cleanup();
bool open(const char *log_name,
enum_log_type log_type,
const char *new_name,
enum cache_type io_cache_type_arg,
bool no_auto_events_arg, ulong max_size,
bool null_created,
bool need_mutex);
bool open_index_file(const char *index_file_name_arg,
const char *log_name, bool need_mutex);
/* Use this to start writing a new log file */
int new_file();
bool write(Log_event* event_info); // binary log write
bool write(THD *thd, IO_CACHE *cache, Log_event *commit_event, bool incident);
bool write_incident(THD *thd, bool lock);
int write_cache(IO_CACHE *cache, bool lock_log, bool flush_and_sync);
void set_write_error(THD *thd, bool is_transactional);
bool check_write_error(THD *thd);
void start_union_events(THD *thd, query_id_t query_id_param);
void stop_union_events(THD *thd);
bool is_query_in_union(THD *thd, query_id_t query_id_param);
/*
v stands for vector
invoked as appendv(buf1,len1,buf2,len2,...,bufn,lenn,0)
*/
bool appendv(const char* buf,uint len,...);
bool append(Log_event* ev);
void make_log_name(char* buf, const char* log_ident);
bool is_active(const char* log_file_name);
int update_log_index(LOG_INFO* linfo, bool need_update_threads);
int rotate(bool force_rotate, bool* check_purge);
void purge();
int rotate_and_purge(bool force_rotate);
/**
Flush binlog cache and synchronize to disk.
This function flushes the events in the binlog cache to the binary log
file and synchronizes it to disk according to the setting of the system
variable 'sync_binlog'. If the file is synchronized, @c synced will be
set to 1, otherwise to 0.
@param[out] synced if not NULL, set to 1 if file is synchronized, otherwise 0
@retval 0 Success
@retval other Failure
*/
bool flush_and_sync(bool *synced);
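/*
  Illustrative sketch (hypothetical caller; the 'mysql_bin_log' instance
  name is assumed): per the description above, flush_and_sync() returns 0
  on success and reports through 'synced' whether an fsync happened.

    bool synced;
    if (mysql_bin_log.flush_and_sync(&synced))
      sql_print_error("Failed to flush the binary log");
    else if (!synced)
    {
      // flushed, but not fsynced this time (depends on sync_binlog)
    }
*/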
int purge_logs(const char *to_log, bool included,
bool need_mutex, bool need_update_threads,
ulonglong *decrease_log_space);
int purge_logs_before_date(time_t purge_time);
int purge_first_log(Relay_log_info* rli, bool included);
int set_purge_index_file_name(const char *base_file_name);
int open_purge_index_file(bool destroy);
bool is_inited_purge_index_file();
int close_purge_index_file();
int clean_purge_index_file();
int sync_purge_index_file();
int register_purge_index_entry(const char* entry);
int register_create_index_entry(const char* entry);
int purge_index_entry(THD *thd, ulonglong *decrease_log_space,
bool need_mutex);
bool reset_logs(THD* thd);
void close(uint exiting);
// iterating through the log index file
int find_log_pos(LOG_INFO* linfo, const char* log_name,
bool need_mutex);
int find_next_log(LOG_INFO* linfo, bool need_mutex);
int get_current_log(LOG_INFO* linfo);
int raw_get_current_log(LOG_INFO* linfo);
uint next_file_id();
inline char* get_index_fname() { return index_file_name;}
inline char* get_log_fname() { return log_file_name; }
inline char* get_name() { return name; }
inline mysql_mutex_t* get_log_lock() { return &LOCK_log; }
inline mysql_cond_t* get_log_cond() { return &update_cond; }
inline IO_CACHE* get_log_file() { return &log_file; }
inline void lock_index() { mysql_mutex_lock(&LOCK_index);}
inline void unlock_index() { mysql_mutex_unlock(&LOCK_index);}
inline IO_CACHE *get_index_file() { return &index_file;}
inline uint32 get_open_count() { return open_count; }
};
class Log_event_handler
{
public:
Log_event_handler() {}
virtual bool init()= 0;
virtual void cleanup()= 0;
virtual bool log_slow(THD *thd, time_t current_time,
time_t query_start_arg, const char *user_host,
uint user_host_len, ulonglong query_utime,
ulonglong lock_utime, bool is_command,
const char *sql_text, uint sql_text_len)= 0;
virtual bool log_error(enum loglevel level, const char *format,
va_list args)= 0;
virtual bool log_general(THD *thd, time_t event_time, const char *user_host,
uint user_host_len, int thread_id,
const char *command_type, uint command_type_len,
const char *sql_text, uint sql_text_len,
CHARSET_INFO *client_cs)= 0;
virtual ~Log_event_handler() {}
};
int check_if_log_table(size_t db_len, const char *db, size_t table_name_len,
const char *table_name, bool check_if_opened);
class Log_to_csv_event_handler: public Log_event_handler
{
friend class LOGGER;
public:
Log_to_csv_event_handler();
~Log_to_csv_event_handler();
virtual bool init();
virtual void cleanup();
virtual bool log_slow(THD *thd, time_t current_time,
time_t query_start_arg, const char *user_host,
uint user_host_len, ulonglong query_utime,
ulonglong lock_utime, bool is_command,
const char *sql_text, uint sql_text_len);
virtual bool log_error(enum loglevel level, const char *format,
va_list args);
virtual bool log_general(THD *thd, time_t event_time, const char *user_host,
uint user_host_len, int thread_id,
const char *command_type, uint command_type_len,
const char *sql_text, uint sql_text_len,
CHARSET_INFO *client_cs);
int activate_log(THD *thd, uint log_type);
};
/* type of the log table */
#define QUERY_LOG_SLOW 1
#define QUERY_LOG_GENERAL 2
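/*
  Example (illustrative): these constants select which query log a call
  operates on, e.g.

    logger.activate_log_handler(thd, QUERY_LOG_SLOW);      // enable slow log
    logger.deactivate_log_handler(thd, QUERY_LOG_GENERAL); // disable general log
*/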
class Log_to_file_event_handler: public Log_event_handler
{
MYSQL_QUERY_LOG mysql_log;
MYSQL_QUERY_LOG mysql_slow_log;
bool is_initialized;
public:
Log_to_file_event_handler(): is_initialized(FALSE)
{}
virtual bool init();
virtual void cleanup();
virtual bool log_slow(THD *thd, time_t current_time,
time_t query_start_arg, const char *user_host,
uint user_host_len, ulonglong query_utime,
ulonglong lock_utime, bool is_command,
const char *sql_text, uint sql_text_len);
virtual bool log_error(enum loglevel level, const char *format,
va_list args);
virtual bool log_general(THD *thd, time_t event_time, const char *user_host,
uint user_host_len, int thread_id,
const char *command_type, uint command_type_len,
const char *sql_text, uint sql_text_len,
CHARSET_INFO *client_cs);
void flush();
void init_pthread_objects();
MYSQL_QUERY_LOG *get_mysql_slow_log() { return &mysql_slow_log; }
MYSQL_QUERY_LOG *get_mysql_log() { return &mysql_log; }
};
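/*
  Usage sketch (illustrative): the LOGGER below owns a single instance of
  this handler and reaches the underlying query logs through the accessors
  above, e.g.

    MYSQL_QUERY_LOG *slow= file_log_handler->get_mysql_slow_log();
*/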
/* Class which manages slow, general and error log event handlers */
class LOGGER
{
mysql_rwlock_t LOCK_logger;
/* flag to check whether the logger lock (LOCK_logger) is initialized */
uint inited;
/* available log handlers */
Log_to_csv_event_handler *table_log_handler;
Log_to_file_event_handler *file_log_handler;
/* NULL-terminated arrays of log handlers */
Log_event_handler *error_log_handler_list[MAX_LOG_HANDLERS_NUM + 1];
Log_event_handler *slow_log_handler_list[MAX_LOG_HANDLERS_NUM + 1];
Log_event_handler *general_log_handler_list[MAX_LOG_HANDLERS_NUM + 1];
public:
bool is_log_tables_initialized;
LOGGER() : inited(0), table_log_handler(NULL),
file_log_handler(NULL), is_log_tables_initialized(FALSE)
{}
void lock_shared() { mysql_rwlock_rdlock(&LOCK_logger); }
void lock_exclusive() { mysql_rwlock_wrlock(&LOCK_logger); }
void unlock() { mysql_rwlock_unlock(&LOCK_logger); }
bool is_log_table_enabled(uint log_table_type);
bool log_command(THD *thd, enum enum_server_command command);
/*
We want to initialize all log mutexes as soon as possible,
but we cannot do it in the constructor, as safe_mutex relies on
initialization performed by MY_INIT(). This is why it is done in
this function.
*/
void init_base();
void init_log_tables();
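/*
  Illustrative startup ordering (a sketch, not taken from this header;
  init_log_tables() is assumed to follow once the mysql.* log tables can
  be opened):

    MY_INIT(argv[0]);       // my_sys / safe_mutex bootstrap
    logger.init_base();     // mutexes can now be created safely
    ...
    logger.init_log_tables();
*/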
bool flush_logs(THD *thd);
bool flush_slow_log();
bool flush_general_log();
/* Perform basic logger cleanup. This leaves e.g. the error log open. */
void cleanup_base();
/* Free memory. Nothing can be logged after this function is called. */
void cleanup_end();
bool error_log_print(enum loglevel level, const char *format,
va_list args);
bool slow_log_print(THD *thd, const char *query, uint query_length,
ulonglong current_utime);
bool general_log_print(THD *thd,enum enum_server_command command,
const char *format, va_list args);
bool general_log_write(THD *thd, enum enum_server_command command,
const char *query, uint query_length);
/* We use this function to set up all enabled log event handlers. */
int set_handlers(uint error_log_printer,
uint slow_log_printer,
uint general_log_printer);
void init_error_log(uint error_log_printer);
void init_slow_log(uint slow_log_printer);
void init_general_log(uint general_log_printer);
void deactivate_log_handler(THD* thd, uint log_type);
bool activate_log_handler(THD* thd, uint log_type);
MYSQL_QUERY_LOG *get_slow_log_file_handler() const
{
if (file_log_handler)
return file_log_handler->get_mysql_slow_log();
return NULL;
}
MYSQL_QUERY_LOG *get_log_file_handler() const
{
if (file_log_handler)
return file_log_handler->get_mysql_log();
return NULL;
}
};
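/*
  A minimal lifecycle sketch (illustrative; the LOG_FILE / LOG_TABLE
  handler flags are assumed to be the ones defined earlier in this header,
  and the real call sites live in the server code):

    logger.init_base();                   // at startup
    logger.set_handlers(LOG_FILE,         // error log   -> file
                        LOG_TABLE,        // slow log    -> mysql.slow_log
                        LOG_FILE);        // general log -> file
    ...
    logger.cleanup_base();                // at shutdown
    logger.cleanup_end();
*/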
enum enum_binlog_format {
BINLOG_FORMAT_MIXED= 0, ///< statement if safe, otherwise row - autodetected
BINLOG_FORMAT_STMT= 1, ///< statement-based
BINLOG_FORMAT_ROW= 2, ///< row-based
BINLOG_FORMAT_UNSPEC= 3 ///< thd_binlog_format() returns it when binlog is closed
};
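/*
  A minimal sketch of how a storage engine might consult the active format
  (assuming the thd_binlog_format() service mentioned above, which returns
  one of these values):

    if (thd_binlog_format(thd) == BINLOG_FORMAT_ROW)
      ... prepare row images for the binary log ...
*/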
int query_error_code(THD *thd, bool not_killed);
uint purge_log_get_error_code(int res);
int vprint_msg_to_log(enum loglevel level, const char *format, va_list args);
void sql_print_error(const char *format, ...) ATTRIBUTE_FORMAT(printf, 1, 2);
void sql_print_warning(const char *format, ...) ATTRIBUTE_FORMAT(printf, 1, 2);
void sql_print_information(const char *format, ...)
ATTRIBUTE_FORMAT(printf, 1, 2);
typedef void (*sql_print_message_func)(const char *format, ...)
ATTRIBUTE_FORMAT_FPTR(printf, 1, 2);
extern sql_print_message_func sql_print_message_handlers[];
int error_log_print(enum loglevel level, const char *format,
va_list args);
bool slow_log_print(THD *thd, const char *query, uint query_length,
ulonglong current_utime);
bool general_log_print(THD *thd, enum enum_server_command command,
const char *format,...);
bool general_log_write(THD *thd, enum enum_server_command command,
const char *query, uint query_length);
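/*
  Usage sketch (illustrative): these free functions are the convenience
  entry points used throughout the server, e.g.

    general_log_print(thd, COM_QUERY, "%s", query_text);
    sql_print_warning("disk usage at %d%%", pct);
*/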
void sql_perror(const char *message);
bool flush_error_log();
File open_binlog(IO_CACHE *log, const char *log_file_name,
const char **errmsg);
char *make_log_name(char *buff, const char *name, const char* log_ext);
extern MYSQL_PLUGIN_IMPORT MYSQL_BIN_LOG mysql_bin_log;
extern LOGGER logger;
/**
Turns a relative binary log path into a full path, based on
opt_bin_logname or opt_relay_logname.
@param from The log name we want to make into an absolute path.
@param to The buffer where the result of the normalization is
stored.
@param is_relay_log Switch used to choose which option (opt_bin_logname
or opt_relay_logname) supplies the base path.
@returns true if a problem occurs, false otherwise.
*/
inline bool normalize_binlog_name(char *to, const char *from, bool is_relay_log)
{
DBUG_ENTER("normalize_binlog_name");
bool error= false;
char buff[FN_REFLEN];
char *ptr= (char*) from;
char *opt_name= is_relay_log ? opt_relay_logname : opt_bin_logname;
DBUG_ASSERT(from);
/* opt_name is not null and not empty and from is a relative path */
if (opt_name && opt_name[0] && from && !test_if_hard_path(from))
{
// take the path from opt_name
// take the filename from from
char log_dirpart[FN_REFLEN], log_dirname[FN_REFLEN];
size_t log_dirpart_len, log_dirname_len;
dirname_part(log_dirpart, opt_name, &log_dirpart_len);
dirname_part(log_dirname, from, &log_dirname_len);
/* the directory part may be empty: --log-bin / --relay-log may hold
just a filename pattern and no path */
if (log_dirpart_len > 0)
{
/* create the new path name */
if (fn_format(buff, from+log_dirname_len, log_dirpart, "",
MYF(MY_UNPACK_FILENAME | MY_SAFE_PATH)) == NULL)
{
error= true;
goto end;
}
ptr= buff;
}
}
DBUG_ASSERT(ptr);
if (ptr)
strmake(to, ptr, strlen(ptr));
end:
DBUG_RETURN(error);
}
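/*
  Worked example (illustrative, assuming opt_bin_logname is
  "/data/binlog/master-bin"): normalize_binlog_name(buf, "master-bin.000002",
  false) takes the directory part from opt_bin_logname and the file name
  from 'from', yielding "/data/binlog/master-bin.000002" in buf; an absolute
  'from' such as "/tmp/foo-bin.000001" is copied through unchanged.
*/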
#endif /* LOG_H */