LOAD_FILE
LOAD_FILE is not safe to replicate in STATEMENT mode, because it
depends on a file (which is loaded on master and may not exist in
slave(s)). This leads to scenarios in which the slave replicates the
statement containing 'load_file' and tries to load the file from its
local file system. Given that the file may not exist on the slave,
the operation will not succeed (probably returning NULL), causing
master and slave(s) to diverge. However, when using MIXED mode
replication, this can be made to work, if the statement including
LOAD_FILE is marked as unsafe, triggering a switch to ROW mode,
meaning that the contents of the file are written to binlog as row
events. Consequently, the contents from the file in the master will
reach the slave via the binlog.
This patch addresses this bug by marking the load_file function as
unsafe. When in mixed mode and LOAD_FILE is issued, there will be
a switch to row mode. Furthermore, when in statement mode, LOAD_FILE
raises a warning that the statement is unsafe in that mode.
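A minimal, self-contained sketch of the intended behaviour (the enum, the function and the returned strings are illustrative, not the actual server API):

    #include <iostream>
    #include <string>

    enum class BinlogFormat { STATEMENT, MIXED, ROW };

    // Models how a statement flagged as unsafe (e.g. one calling LOAD_FILE)
    // should be logged under each binlog_format.
    std::string decide_logging(BinlogFormat fmt, bool stmt_is_unsafe)
    {
        if (!stmt_is_unsafe)
            return fmt == BinlogFormat::ROW ? "log as row events" : "log as statement";
        switch (fmt) {
        case BinlogFormat::STATEMENT:
            // Unsafe statement kept in statement format: warn the user.
            return "log as statement + warning: unsafe in this mode";
        case BinlogFormat::MIXED:
        case BinlogFormat::ROW:
            // MIXED switches to row format, so the file contents reach the slave.
            return "log as row events";
        }
        return "";
    }

    int main()
    {
        std::cout << decide_logging(BinlogFormat::MIXED, /*unsafe=*/true) << "\n";
        std::cout << decide_logging(BinlogFormat::STATEMENT, /*unsafe=*/true) << "\n";
    }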
When an 'INSERT DELAYED' operation is performed, the time_zone info is not kept
with the row info. So when the insert is actually executed some time later, the
time_zone is not written into the binlog, which causes wrong results for
TIMESTAMP columns on the slave.
Our solution is to store the time_zone info with the delayed row and restore it
from the row info when that row is later executed by the delayed-insert handler
thread. This way the correct time_zone info is written into the binlog and the
slave produces correct results.
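A rough sketch of the idea, with hypothetical DelayedRow/SessionState types standing in for the server's internal structures:

    #include <iostream>
    #include <queue>
    #include <string>

    struct DelayedRow {
        std::string values;      // column data of the queued row
        std::string time_zone;   // time zone captured from the client session
    };

    struct SessionState {
        std::string time_zone;
    };

    // Client thread: queue the row together with its session time zone.
    void enqueue_delayed(std::queue<DelayedRow>& q, const SessionState& client,
                         const std::string& values)
    {
        q.push(DelayedRow{values, client.time_zone});
    }

    // Delayed-insert handler thread: restore the saved time zone before the
    // row is executed and written to the binlog, so the slave computes the
    // same TIMESTAMP values as the master.
    void apply_delayed(std::queue<DelayedRow>& q, SessionState& handler)
    {
        while (!q.empty()) {
            DelayedRow row = q.front();
            q.pop();
            handler.time_zone = row.time_zone;   // restore before binlogging
            std::cout << "insert " << row.values
                      << " with time_zone " << handler.time_zone << "\n";
        }
    }

    int main()
    {
        std::queue<DelayedRow> q;
        SessionState client{"Europe/Moscow"};
        SessionState handler{"SYSTEM"};
        enqueue_delayed(q, client, "(1, NOW())");
        apply_delayed(q, handler);
    }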
functions
Unknown timezone specifications are properly rejected
by the server, but are copied into tz_storage before
rejection, and hence are retained until the end of the
server's life. With sufficiently large bogus timezone specs,
it is easy to exhaust system memory.
Allocation of memory for a copy of the timezone
name is delayed until after verification of validity,
at the cost of a memcpy of the timezone info. This
only happens once; future lookups will hit the cached
structure.
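A simplified sketch of the allocate-after-verify idea; tz_cache, tz_spec_is_valid and load_time_zone are illustrative names, not the server's actual functions:

    #include <cstring>
    #include <map>
    #include <string>

    static std::map<std::string, std::string> tz_cache;

    // Stub standing in for the real parse/verification step; it allocates nothing.
    bool tz_spec_is_valid(const char* spec, size_t len)
    {
        return len > 0 && std::memchr(spec, '\0', len) == nullptr;
    }

    // Copy the (possibly huge) specification into long-lived storage only after
    // it has been verified, so rejected names no longer consume memory for the
    // rest of the server's life.
    bool load_time_zone(const std::string& name, const char* spec, size_t spec_len)
    {
        if (tz_cache.count(name))
            return true;                                      // future lookups hit the cache
        if (!tz_spec_is_valid(spec, spec_len))
            return false;                                     // rejected before any copy is made
        tz_cache.emplace(name, std::string(spec, spec_len));  // the one-time memcpy
        return true;
    }

    int main()
    {
        return load_time_zone("Europe/Lisbon", "WET0WEST,M3.5.0/1,M10.5.0", 25) ? 0 : 1;
    }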
Don't throw an error after checking only the first and second arguments.
Continue checking the third and higher arguments, and if any of them is
stronger according to the coercibility rules, that argument's collation
is used as the result collation.
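A simplified sketch of the aggregation loop; CollationInfo and the integer coercibility ranks are illustrative, and the real rules also detect genuinely incompatible collations:

    #include <cstddef>
    #include <string>
    #include <vector>

    // Lower rank = stronger position in the coercibility hierarchy (an explicit
    // collation binds harder than an implicit or coercible one).
    struct CollationInfo {
        std::string collation;
        int coercibility;
    };

    // Walk every argument, not only the first two: whenever a later argument is
    // stronger, its collation becomes the result collation.
    CollationInfo aggregate_collations(const std::vector<CollationInfo>& args)
    {
        CollationInfo result = args.front();
        for (std::size_t i = 1; i < args.size(); ++i)
            if (args[i].coercibility < result.coercibility)
                result = args[i];
        return result;
    }

    int main()
    {
        CollationInfo r = aggregate_collations({{"utf8_general_ci", 2},
                                                {"latin1_swedish_ci", 0},
                                                {"binary", 1}});
        return r.collation == "latin1_swedish_ci" ? 0 : 1;
    }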
The mysql.procs_priv table itself does not get replicated.
Inserting a routine privilege record into mysql.procs_priv is triggered by
CREATE FUNCTION/PROCEDURE statements, based on the current user's privileges.
The current user of the SQL thread has GLOBAL_ACL, which does not require any
mysql.procs_priv privilege check when creating, altering or executing routines,
so no routine privilege record is inserted into mysql.procs_priv when a routine
is created.
Fixed by switching the current user of the SQL thread to the definer user if
the definer user exists on the slave; that populates procs_priv. Otherwise the
SQL thread user is kept and procs_priv remains unchanged.
Compiling with debug and assigning an invalid directory to --slave-load-tmpdir
was crashing the slave due to the following assertion DBUG_ASSERT(! is_set() ||
can_overwrite_status). This assertion assumes that a thread can change its
state once (i.e. ok, error, etc.) before aborting, cleaning up/resuming or
completing its execution, unless the overwrite flag (i.e. can_overwrite_status)
is true.
Append_block_log_event::do_apply_event, which is responsible for creating
temporary file(s), was not cleaning the thread state. Thus a failure while
trying to create a file in an invalid temporary directory was causing the crash.
To fix the problem we check if the temporary directory is valid before starting
the SQL Thread and reset the thread state before creating a file in
Append_block_log_event::do_apply_event.
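A minimal sketch of the up-front directory check, assuming a POSIX stat(); the function name is illustrative:

    #include <sys/stat.h>
    #include <string>

    // Run before the SQL thread is started: refuse to start replication if
    // --slave-load-tmpdir does not point at a usable directory, instead of
    // failing (and, in debug builds, asserting) later inside
    // Append_block_log_event::do_apply_event.
    bool slave_load_tmpdir_is_valid(const std::string& path)
    {
        struct stat st;
        return ::stat(path.c_str(), &st) == 0 && S_ISDIR(st.st_mode);
    }

    int main()
    {
        return slave_load_tmpdir_is_valid("/tmp") ? 0 : 1;
    }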
become negative
- merged the fix to 5.1
- extended to cover I_S.PROCESSLIST.TIME
- Changed the column type of I_S.PROCESSLIST.TIME from LONGLONG UNSIGNED
  to LONG (to match the SHOW PROCESSLIST type)
- Added a test case
- Fix valgrind warning on attempt to run a "SET optimizer_switch=number" statement.
Need to call c_ptr_safe() as strings returned by non-string items are not
necessarily null-terminated.
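A small illustration of why the terminating '\0' matters; the helper below is hypothetical and simply models what a null-terminated copy (as returned by c_ptr_safe()) guarantees:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // A buffer filled without a trailing '\0', as a non-string item may produce:
    // the value "123" occupies raw[0..2] and nothing terminates it.
    char raw[4] = {'1', '2', '3', 'X'};

    // printf("%s", raw) would read past the value (valgrind complains); making a
    // null-terminated copy first is safe.
    std::string as_c_string(const char* data, std::size_t length)
    {
        return std::string(data, length);   // std::string::c_str() is always terminated
    }

    int main()
    {
        std::printf("%s\n", as_c_string(raw, 3).c_str());
    }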
The problem occurs because we store wrong start and stop positions of the query string in the binlog.
Those two values are stored as part of the query string's header info.
When we parse the binlog we first read the position values, then read the query string according to them.
But the two values are not calculated correctly after the Yacc parse.
We don't want to touch too much of the yacc grammar because it may influence other code,
so we just add one space after the 'INTO' keyword when parsing.
This easily resolves the problem.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account the special case in which
all the predicates at an AND level of the WHERE clause have been eliminated.
That may lead to wrong results.
Fix: replace (a=a AND a=a ...) with TRUE if all the predicates have been
eliminated.
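A toy sketch of the simplification; the string-based condition list is purely illustrative:

    #include <iostream>
    #include <string>
    #include <vector>

    // Prune always-true equalities such as col=col from an AND list; if
    // everything was pruned, hand back an explicit TRUE condition rather than
    // an empty list, so an enclosing OR keeps its correct value.
    std::vector<std::string> simplify_and(const std::vector<std::string>& conds)
    {
        std::vector<std::string> kept;
        for (const auto& c : conds)
            if (c != "col=col")          // stands in for "predicate known to be true"
                kept.push_back(c);
        if (kept.empty())
            kept.push_back("TRUE");
        return kept;
    }

    int main()
    {
        for (const auto& c : simplify_and({"col=col", "col=col"}))
            std::cout << c << "\n";      // prints TRUE
    }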
seems to become negative
THD::start_time has a dual meaning: it is either the time since the process
entered a given state or the transaction time returned by e.g. NOW().
This causes problems, as sometimes THD::start_time may be set to a value
that is correct and needed when used as a base for NOW(), but these times
may be arbitrary (SET @@timestamp) or non-local (coming from the master
through the replication feed).
If one such non-local time is set there's no way to return a correct value
for e.g. SHOW PROCESSLIST or SELECT ... FROM INFORMATION_SCHEMA.PROCESSLIST.
Fixed by making the Time column in SHOW PROCESSLIST SIGNED LONG instead of
UNSIGNED LONG and doing the correct conversions.
Note that no reliable test case can be constructed, since it would require
knowing the local time, and that can't be achieved by means of the current
test suite.
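A small sketch of the signed arithmetic involved; the function and parameter names are illustrative:

    #include <cstdint>
    #include <iostream>

    // With an unsigned column, now < start_time (e.g. a timestamp set via
    // SET TIMESTAMP or taken from the master) wraps around to a huge positive
    // value; keeping the value signed lets PROCESSLIST report a small negative
    // time instead.
    int64_t processlist_time(uint64_t now, uint64_t start_time)
    {
        return static_cast<int64_t>(now) - static_cast<int64_t>(start_time);
    }

    int main()
    {
        std::cout << processlist_time(100, 130) << "\n";   // -30, not 2^64 - 30
    }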
The problem is that creating an event could fail if the value of
the variable server_id didn't fit in the originator column of
the event system table. The cause is two-fold: it was possible
to set server_id to a value outside the documented range (from
0 to 2^32-1) and the originator column of the event table didn't
have enough room for values in this range.
The log tables (general_log and slow_log) also don't have a proper
column type to store the server_id and having a large server_id
value could prevent queries from being logged.
The solution is to ensure that all system tables that store the
server_id value have a proper column type (int unsigned) and that
the variable can't be set to a value that is not within the range.
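A minimal sketch of bounding the value, assuming out-of-range settings are clamped (the server might instead warn or reject); names are illustrative:

    #include <cstdint>
    #include <iostream>

    // Keep server_id inside the documented range 0 .. 2^32-1, so every system
    // table column that stores it can be declared INT UNSIGNED and always hold
    // the value.
    uint32_t bounded_server_id(uint64_t requested, bool* truncated)
    {
        *truncated = requested > UINT32_MAX;
        return *truncated ? UINT32_MAX : static_cast<uint32_t>(requested);
    }

    int main()
    {
        bool truncated;
        std::cout << bounded_server_id(1ULL << 40, &truncated)
                  << (truncated ? " (truncated)" : "") << "\n";
    }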
mysqld is optimized for the default case (up to 64 indices); for a greater
number of indices it goes through a different code path. As that code path is a
compile-time option and cannot easily be covered in standard tests, bitrot
occurred. Key fields need an explicit initialization in the non-optimized case;
this setup was presumably not added when a new key vector was added.
The changeset adds the necessary initialisations.
No test case added due to dependence on the compile-time option.
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there's an error processing the statement)
print() can be called with no corresponding fix_fields() call.
Fixed by adding a check whether the Item is fixed before using the arguments
copy.
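A toy model of the fix, with SumItem standing in for the real Item class:

    #include <cstdio>
    #include <vector>

    // orig_args is always valid, while args_copy is only filled in by
    // fix_fields().
    struct SumItem {
        std::vector<int> orig_args;
        std::vector<int> args_copy;
        bool fixed;

        void fix_fields() { args_copy = orig_args; fixed = true; }

        // print() may run on error paths before fix_fields(); fall back to the
        // original arguments instead of reading the uninitialized copy.
        void print() const {
            const std::vector<int>& a = fixed ? args_copy : orig_args;
            for (int v : a)
                std::printf("%d ", v);
            std::printf("\n");
        }
    };

    int main()
    {
        SumItem item{{1, 2, 3}, {}, false};
        item.print();   // safe even though fix_fields() never ran
    }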