"insert into.. select * from"
When inserting into a partitioned table using 'insert into
<target> select * from <src>', read_buffer_size bytes of memory
were allocated for each partition in the target table.
This resulted in large memory consumption when the number of
partitions is high.
This patch introduces a new method that tries to estimate the
buffer size required for each partition and limits the maximum
buffer size used to 10 * read_buffer_size
(11 * read_buffer_size in the case of monotonic partition functions).
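For illustration only (table definitions are made up, not from the original report), the kind of statement affected:
CREATE TABLE src (a INT, b VARCHAR(32));
CREATE TABLE dst (a INT, b VARCHAR(32))
  PARTITION BY HASH (a) PARTITIONS 1024;
-- Previously this could allocate 1024 * read_buffer_size bytes at once;
-- with the patch the total buffer used is capped as described above.
INSERT INTO dst SELECT * FROM src;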
When parsing the service installation parameter in
default_service_handling(), make sure the value of the
optional parameter doesn't overwrite its name.
Problem: LOGGER::general_log_write() relied on a valid "thd" parameter
being passed, but contained an inconsistent "if (thd)" check.
Fix: since we always pass a valid "thd" parameter to the method,
the redundant check has been removed.
The problem is that the argument buffer can be used as the result buffer,
which leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
In RBR, there is an inconsistency between slaves and the master.
When an INSERT statement that includes an auto_increment field is executed,
the storage engine on the master checks the value of the auto_increment field.
If its value is NULL or empty, it generates a sequence number and replaces the value.
If the field's value is 0, the storage engine treats it the same as NULL,
unless NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE.
In contrast, if the field's value is 0, the storage engine on the slave always
generates a new sequence number, whether or not NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE.
The SQL_MODE of the slave SQL thread is always consistent with the master's.
Another variable is related to this bug: whether a sequence number is generated
is decided by the values of table->auto_increment_field_not_null and
SQL_MODE (whether it includes MODE_NO_AUTO_VALUE_ON_ZERO).
The table->auto_increment_is_not_null being FALSE is what causes this bug to appear.
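For illustration only (table and values are made up, not from the original report), the kind of statement affected:
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) ENGINE=InnoDB;
SET SQL_MODE = 'NO_AUTO_VALUE_ON_ZERO';
-- With NO_AUTO_VALUE_ON_ZERO set, the master stores the explicit 0 as-is;
-- before this fix the slave could still generate a new sequence number
-- for the same row, so master and slave ended up with different data.
INSERT INTO t1 VALUES (0, 1);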
Create temporary InnoDB table fails on case-insensitive
filesystems when lower_case_table_names is 2 (e.g. OS X)
and the temporary directory path contains upper-case letters.
The problem was that the tmpdir prefix was converted to lower
case when the table was created, but was passed as-is when
the table was opened.
Fixed by leaving the tmpdir prefix intact.
can crash under load
Backport from 5.1.
Also includes key cache fixes from:
Bug 44068 (RESTORE can disable the MyISAM Key Cache)
Bug 40944 (Backup: crash after myisampack)
The parser rule for expressions in a udf parameter list contains
two hacks:
First, the parser input stream is read verbatim, bypassing
the lexer.
Second, the Item::name field is overwritten. If the argument to a
udf was a field, the field's name as seen by name resolution was
overwritten this way.
If the field name was quoted or escaped, it would appear as e.g. "`field`".
Fixed by not overwriting field names.
The external 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by just starting over without advancing
to the next record, which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
(Backport)
The problem is that when inserting (ha_start_bulk_insert) into a partitioned table,
ha_start_bulk_insert is called for every partition, whether used or not.
The solution is to delay the call to each partition's ha_start_bulk_insert until
the first row is to be inserted into that partition.
on subquery inside a SP
Problem: a repeated call of a SP containing an incorrect query with a
subselect may lead to a failed ASSERT().
Fix: set the proper subselect state in case an error occurs during
subquery transformation.
name as existing view
When trying to create a table with the same name as an existing view with
a join, the mysql server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
the check against existing tables assumes that a base table object has
always been created.
In this case, since it is a view over multiple tables, we don't have the
mysql derived table object.
Fixed the logic that checks whether a table already exists so that it does not
assume a table object has been created when the base table is a view over
multiple tables.
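An illustrative repro sketch (object names are made up):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE VIEW v1 AS SELECT t1.a, t2.b FROM t1 JOIN t2;
-- Before the fix the next statement could crash the server, because the
-- existing-object check assumed a base table object is always available;
-- with the fix it fails with an "already exists" error instead.
CREATE TABLE v1 (c INT);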
Inserting a negative value into the autoincrement column of a
partitioned InnoDB table caused the value of the auto
increment counter to wrap around into a very large positive
value. The consequences are the same as if a very large positive
value had been inserted into the column, e.g. a reduced autoincrement
range and failure to read the autoincrement counter.
The current patch ensures that, before the next auto increment value
is calculated, the current value is within the positive
maximum allowed limit.
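A sketch of the scenario (definitions are illustrative):
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY)
  ENGINE=InnoDB
  PARTITION BY HASH (a) PARTITIONS 4;
-- Before the fix, inserting a negative value could wrap the auto-increment
-- counter to a huge positive value, so the next auto-generated value
-- ended up near the top of the range.
INSERT INTO t1 VALUES (-5);
INSERT INTO t1 VALUES (NULL);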
function,file sql_base.cc
When uncacheable queries are written to a temporary table, the optimizer must
preserve the original JOIN structure, because it re-uses the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable subqueries.
But top-level queries can also benefit from this mechanism, especially if
they're using index access and need a reset.
Fixed by not limiting the saving of the JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
CREATE TABLE...LIKE...
The mysql server option 'sync_frm' is ignored when a table is created with
the syntax CREATE TABLE .. LIKE ..
Fixed by adding the MY_SYNC flag and calling my_sync() from my_copy() when
the flag is set.
In mysql_create_table(), when 'sync_frm' is set, the MY_SYNC flag is passed
to my_copy().
Note: a test case is not attached; this can be tested manually using a debugger.
During start-up, some plugins are disabled by default. This caused an additional
warning-level message to be emitted as a result of a previous bug patch. Since
there is a risk of unnecessary confusion regarding the operational state of the
server, the redundant information is removed.
extraneous file
Online/fast ALTER TABLE of a partitioned table may leave a
temporary file in the database directory.
Fixed by removing the unnecessary call to
handler::ha_create_handler_files(), which was creating a
partitioning definition file.
results in server crash
check_group_min_max_predicates() assumed the input condition
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM.
Since a condition of the form "field" is also a valid condition
equivalent to "field <> 0", using such a condition in a query
where the loose index scan was chosen resulted in a debug
assertion failure.
Fixed by handling conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
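A rough sketch of the kind of query involved (schema is made up and the exact reproduction may differ): the WHERE condition is a bare column, i.e. a FIELD_ITEM equivalent to "b <> 0", in a GROUP BY query where loose index scan may be chosen.
CREATE TABLE t1 (a INT, b INT, KEY (a, b));
SELECT a, MIN(b) FROM t1 WHERE b GROUP BY a;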
If an EVENT is created without the DEFINER clause set explicitly or with it set
to CURRENT_USER, the master and slaves become inconsistent. This issue stems from
the fact that in both cases, the DEFINER is set to the CURRENT_USER of the current
thread. On the master, the CURRENT_USER is the mysqld's user, while on the slave,
the CURRENT_USER is empty for the SQL Thread which is responsible for executing
the statement.
To fix the problem, we do the following. If the definer is not set explicitly,
a DEFINER clause is added when the query is written to the binlog; if CURRENT_USER is
used as the DEFINER, it is replaced with the value of the current user before
writing to the binlog.
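For illustration (event name and body are made up), a statement such as:
CREATE EVENT purge_old_rows
  ON SCHEDULE EVERY 1 DAY
  DO DELETE FROM t1 WHERE created_at < NOW() - INTERVAL 7 DAY;
-- is now written to the binary log with an explicit DEFINER clause
-- (the user of the thread that created the event), so the slave's SQL
-- thread no longer substitutes its own, empty CURRENT_USER.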
with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before being initialized. A new macro UNINIT_VAR() is introduced for
use in variable declarations, and LINT_INIT() usage will be
gradually deprecated. (A workaround is used for g++, pending a
patch for a g++ bug.)
GCC warnings for unused results (attribute warn_unused_result)
for a number of system calls (present at least in later
Ubuntus, where the usual void cast trick doesn't work) are
also fixed.
When a connection is dropped any remaining temporary table is also automatically
dropped and the SQL statement of this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
The slave does not correctly handle "expected errors", leading to inconsistencies
between the master and slave. Specifically, when a statement changes both
transactional and non-transactional tables, the transactional changes are
automatically rolled back on the master, but the slave ignores the error and
does not roll them back, thus leading to inconsistencies.
To fix the problem, we automatically roll back a statement that fails on
the slave but note that the transaction is not rolled back unless a "rollback"
command is in the relay log file.
field references
This error requires a combination of factors:
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. It will not
call make_join_statistics() either, and make_join_statistics() fills
in various structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However if this SELECT list happens to contain a correlated
subquery this subquery is evaluated in a normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible where" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT where the outer field reference refers to is not
completely initialized due to the "impossible WHERE" in this level
we'll get a NULL pointer reference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if the
field count exceeds 10 (the count of values for PROCEDURE ANALYSE()),
we get a crash in the fill_record() function.
binlog
Mixing transactional (T) and non-transactional (N) tables on behalf of a
transaction may lead to inconsistencies among masters and slaves in STATEMENT
mode. The problem stems from the fact that although modifications done to
non-transactional tables on behalf of a transaction become immediately visible
to other connections they do not immediately get to the binary log and therefore
consistency is broken. Although there may be issues in mixing T and N tables in
STATEMENT mode, there are safe combinations that clients find useful.
In this bug, we fix the following issue. Mixing N and T tables in multi-level
statements (e.g. a statement that fires a trigger) or multi-table statements (e.g.
update t1, t2...) was not handled correctly. In such cases, it was not possible
to distinguish when a T table was updated if the sequence of changes was N and T.
In a nutshell, the flag "modified_non_trans_table" alone was not enough to reflect
that both N and T tables were changed. To circumvent this issue, we check whether an
engine is registered in the handler's list and has changed something, which means that
a T table was modified.
Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or
ROW modes completely safe.
The problem was that the partition containing NULL values
was pruned away, since '2001-01-01' < '2001-02-00' but
TO_DAYS('2001-02-00') is NULL.
The NULL partition is now also scanned for RANGE/LIST partitioning
on the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
There was a problem because pruning uses the field
for comparison (while evaluate_join_record uses longlong),
resulting in pruning failures when comparing DATE to DATETIME.
The fix was to always compare DATE vs DATETIME as DATETIME,
by adding ' 00:00:00' to the DATE string.
An optimization was also added for comparisons with 23:59:59, so that
DATETIME_col > '2001-02-03 23:59:59' ->
TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead
of '>='.
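A sketch of the kind of table involved (partition layout is illustrative):
CREATE TABLE t1 (d DATE)
  PARTITION BY RANGE (TO_DAYS(d)) (
    PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-01-01')),
    PARTITION p1 VALUES LESS THAN (TO_DAYS('2001-03-01')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );
-- TO_DAYS('2001-02-00') is NULL, so the partition holding NULL values of
-- the partitioning function must be scanned too; pruning no longer drops
-- it (and sql_mode is no longer modified as a side effect).
SELECT * FROM t1 WHERE d < '2001-02-00';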
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
The code was using a special global buffer for the value of IS NULL ranges.
This was not always long enough to be copied by a regular memcpy. As a
result, read buffer overflows could occur.
Fixed by setting the null byte to 1 and setting the rest of the field disk image
to NULL with a bzero (instead of relying on the buffer and memcpy()).
INSERT ... SELECT ...
The problem was that when bulk insert is used on an empty
table/partition, it disables the indexes for better
performance, but in this specific case it also tries
to read from that partition using an index, which is
not possible since the index has been disabled.
The solution was to allow index reads on disabled indexes
if there are no records.
Also reverted the patch for bug#38005, since that was a workaround
in the partitioning engine instead of a fix in myisam.
(temporary) TABLE, crash
Problem: if one has an open "HANDLER t1", a subsequent "TRUNCATE t1"
doesn't close the handler and leaves the handler table hash in an
inconsistent state, which may lead to a server crash.
Fix: TRUNCATE should implicitly close all open handlers.
Doc. request: the fact should be described in the manual accordingly.
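An illustrative sequence (table name is made up):
CREATE TABLE t1 (a INT);
HANDLER t1 OPEN;
-- Before the fix, TRUNCATE left the open handler in an inconsistent
-- handler table hash, which could crash the server later; with the
-- fix, TRUNCATE implicitly closes the open handler.
TRUNCATE TABLE t1;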
If the SQL Thread fails to execute an event due to a temporary error (e.g.
ER_LOCK_DEADLOCK) and the option "--slave_transaction_retries" is set the SQL
Thread should not be aborted and the transaction should be restarted from the
beginning and re-executed.
Unfortunately, a wrong interpretation of THD::is_fatal_error was preventing
this behavior. In a nutshell, "this variable is set to TRUE if an execution of a
compound statement cannot continue. In particular, it is used to disable access
to the CONTINUE or EXIT handlers of stored routines." So even temporary errors
may have this variable set.
To fix the bug, we have done the following:
DBUG_ENTER("has_temporary_error");
- if (thd->is_fatal_error)
- DBUG_RETURN(0);
-
DBUG_EXECUTE_IF("all_errors_are_temporary_errors",
if (thd->main_da.is_error())
{
The check for stack overflow was independent of the size of the
structure stored in the heap.
Fixed by adding sizeof(PARAM) to the requested free heap size.
view that has Group By
Table access rights checking function check_grant() assumed
that no view is opened when it's called.
This is not true with nested views where the inner view
needs materialization. In this case the view is already
materialized when check_grant() is called for it.
This caused check_grant() to not look for table level
grants on the materialized view table.
Fixed by checking whether a view is already materialized and, if
it is, checking table-level grants using the original table name
(not that of the materialized temp table).
Bug#45243: crash on win in sql thread clear_tables_to_lock() -> free()
Bug#45242: crash on win in mysql_close() -> free()
Bug#45238: rpl_slave_skip, rpl_change_master failed (lost connection) for STOP SLAVE
Bug#46030: rpl_truncate_3innodb causes server crash on windows
Bug#46014: rpl_stm_reset_slave crashes the server sporadically in pb2
When killing a user session on the server, it's necessary to
interrupt (notify) the thread associated with the session that
the connection is being killed so that the thread is woken up
if waiting for I/O. On a few platforms (Mac, Windows and HP-UX)
where the SIGNAL_WITH_VIO_CLOSE flag is defined, this interruption
procedure is to asynchronously close the underlying socket of
the connection.
In order to enable this scheme, each connection serving thread
registers its VIO (I/O interface) so that other threads can
access it and close the connection. But only the owner thread of
the VIO might delete it as to guarantee that other threads won't
see freed memory (the thread unregisters the VIO before deleting
it). A side note: closing the socket introduces a harmless race
that might cause a thread to attempt to read from a closed socket,
but this is deemed acceptable.
The problem is that this infrastructure was meant to only be used
by server threads, but the slave I/O thread was registering the
VIO of a mysql handle (a client API structure that represents a
connection to another server instance) as an active connection of
the thread. But under some circumstances such as network failures,
the client API might destroy the VIO associated with a handle at
will, yet the VIO wouldn't be properly unregistered. This could
lead to accesses to freed data if a thread attempted to kill a
slave I/O thread whose connection was already broken.
There was an attempt to work around this by checking whether
the socket was being interrupted, but this hack didn't work as
intended due to the aforementioned race -- attempting to read
from the socket would yield a "bad file descriptor" error.
The solution is to add a hook to the client API that is called
from the client code before the VIO of a handle is deleted.
This hook allows the slave I/O thread to detach the active vio
so it does not point to freed memory.
on SHOW CREATE TRIGGER + MERGE table
Problem: SHOW CREATE TRIGGER erroneously relies on the fact
that a trigger has only one underlying table
(which is wrong for MERGE tables).
Fix: remove the erroneous assert().
In STATEMENT based replication, a statement that failed on the master but that
updated non-transactional tables is written to binary log with the error code
appended to it. On the slave, the statement is executed and the same error is
expected. However, when an "expected error" did not happen on the slave and was
either ignored or was related to a concurrency issue on the master, the slave
did not roll back the effects of the statement and as such inconsistencies might
happen.
To fix the problem, we automatically roll back a statement that should have
failed on a slave but succeeded and whose expected failure is either ignored or
stems from a concurrency issue on the master.
There is an inconsistency with DROP DATABASE|TABLE|EVENT IF EXISTS and
CREATE DATABASE|TABLE|EVENT IF NOT EXISTS. DROP IF EXISTS statements are
binlogged even if either the DB, TABLE or EVENT does not exist. In
contrast, only CREATE EVENT IF NOT EXISTS is binlogged when the EVENT
exists.
This patch fixes the following cases for all the replication formats:
CREATE DATABASE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS ... LIKE,
CREATE TABLE IF NOT EXISTS ... SELECT.
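For instance (table definition is illustrative), the second statement below is now written to the binary log even though t1 already exists, matching the behaviour of DROP ... IF EXISTS:
CREATE TABLE t1 (a INT);
CREATE TABLE IF NOT EXISTS t1 (a INT);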
"CREATE TABLE TRANSACTIONAL PAGE_CHECKSUM ROW_FORMAT=PAGE accepted,
does nothing".
Put back stubs for members of structures that are shared between
sql/ and pluggable storage engines, so as not to break the ABI unnecessarily.
To be NULL-merged into 5.4, where we do break the ABI already.
The replication SQL thread does not set the database default charset to
thd->variables.collation_database properly when executing a LOAD DATA binlog event.
This bug can be repeated by using the "LOAD DATA" command in STATEMENT mode.
This patch adds code to find the default character set of the current database
and then assign it to thd->db_charset when the slave server begins to execute a relay log event.
A test for this bug is added to rpl_loaddata_charset.test.
The problem is that the lexer could inadvertently skip over the
end of a query being parsed if it encountered a malformed multibyte
character. A specially crafted query string could cause the lexer
to jump up to six bytes past the end of the query buffer. Another
problem was that the lexer could use unfiltered user input as
a signed array index for the parser maps (having lower and upper
bounds of 0 and 256 respectively).
The solution is to ensure that the lexer only skips over well-formed
multibyte characters and that the index value of the parser maps
is always an unsigned value.
Invalid (old?) table or database name in logs
Post push patch.
The bug was that a non-partitioned table file name was not
converted to system_charset (because table_name_len was not set).
Also added a missing DBUG_RETURN.
InnoDB adds quotes after calling the function,
so I added one more mode where explain_filename does not
add quotes, but it still appends the [sub]partition name
as a comment.
Also caught a minor quoting bug: the character '`' was
not escaped within the identifier (so 'a`b' was quoted as `a`b`
and not `a``b`); this is multibyte-character aware.
Problem 1:
When the 'Using index' optimization is used, the optimizer may still - after
cost-based optimization - decide to use another index in order to avoid using
a temporary table. But when this happens, the flag to the storage engine to
read index only (not table) was still set. Fixed by resetting the flag in the
storage engine and TABLE structure in the above scenario, unless the new index
allows for the same optimization.
Problem 2:
When a 'ref' access method was employed by the cost-based optimizer (when the column
is non-NULLable), it was assumed that no initialization of 'quick' access
methods was needed (since they are based on range scan). When the ORDER BY optimization
overrides the decision, however, it expects this to have been initialized and hence crashes.
Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when
'ref' access is used.
when a partition is reorganized.
The problem was that table->timestamp_field_type was not changed
before copying rows between partitions.
Fixed by setting it to TIMESTAMP_NO_AUTO_SET as the first thing
in fast_alter_partition_table, so that all if-branches are covered.
column on partitioned table
An assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if a query
is executed using an index containing a double column on a partitioned table.
The problem is that the assertion expects all the fields which are read
to be in the read_set.
In this query only the field 'a' is in the read_set, since the tables in
the query are joined by the field 'a', and so the assertion fails
expecting the other field 'b'.
Since the function cmp() just compares the two parameters passed,
the assertion is not required.
Fixed by removing the assertion in the double fields comparison
function and also fixed the index initialization to do an ordered
index scan with RW LOCK, which ensures all the fields from a key are in
the read_set.
Note: this bug is not reproducible with other datatypes because the
assertion doesn't exist in the comparison functions for other
datatypes.
"Patch to fix bug 38551": it was a manual backport (2008-10-15) of
http://lists.mysql.com/commits/56418.
But that was an early, non-final patch from the fixer of this bug (TheK):
after that backport was made by Mikael, TheK decided to do a different fix,
which was finally pushed into 6.0.
Then 5.1's code was changed for some other reasons, so now we have a
conflict between the old never-approved TheK patch backported to Summit and
the latest 5.1. The backport cannot stay, it has to be removed due to
the conflict, and then rewritten if desired.
bzr branch mysql-5.1-performance-version mysql-trunk # Summit
cd mysql-trunk
bzr merge mysql-5.1-innodb_plugin # which is 5.1 + Innodb plugin
bzr rm innobase # remove the builtin
Next step: build, test fixes.
- Define and pass compile time path variables as pre-processor definitions to
mimic the makefile build.
- Set new CMake version and policy requirements explicitly.
- Changed DATADIR to MYSQL_DATADIR to avoid conflicting definition in
Platform SDK header ObjIdl.h which also defines DATADIR.
when used with --tab
1) New syntax: added CHARACTER SET clause to the
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE).
mysqldump is updated to use this in --tab mode.
2) ESCAPED BY/ENCLOSED BY field parameters are documented as
accepting a CHAR argument, however SELECT .. INTO OUTFILE
silently ignored the rest of multi-symbol arguments.
For the symmetrical behavior with LOAD DATA INFILE the
server has been modified to fail with the same error:
ERROR 42000: Field separator argument is not what is
expected; check the manual
3) Currently LOAD DATA INFILE recognizes field/line separators
"as is", without converting them from the client charset to the
data file charset. So the input file of
LOAD DATA INFILE is assumed to consist of data in one charset and
separators in another charset. For compatibility with
that [buggy] behaviour the SELECT INTO OUTFILE implementation
has been kept "as is" too, but a new warning message
has been added:
Non-ASCII separator arguments are not fully supported
This message warns on field/line separators that contain
non-ASCII symbols.
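For example (file path and charset are illustrative), mysqldump --tab now effectively issues something like:
SELECT * FROM t1
  INTO OUTFILE '/tmp/t1.txt'
  CHARACTER SET utf8
  FIELDS TERMINATED BY '\t' ESCAPED BY '\\'
  LINES TERMINATED BY '\n';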
If using statement based replication (SBR), repeatedly calling
statements which are unsafe for SBR will cause a warning message
to be written to the error log for each statement. This might lead
to filling up the error log and there is no way to disable this
behavior.
The solution is to only log these messages (about statements unsafe
for statement-based replication) if the log_warnings option is set.
For example:
SET GLOBAL LOG_WARNINGS = 0;
INSERT INTO t1 VALUES(UUID());
SET GLOBAL LOG_WARNINGS = 1;
INSERT INTO t1 VALUES(UUID());
In this case the message will be printed only once:
[Warning] Statement may not be safe to log in statement format.
Statement: INSERT INTO t1 VALUES(UUID())
We disallow the partitioning of a log table. You could, however,
partition a table first and then point logging to it. This is
not only against the docs, it also crashes the server.
We catch this case now.
Initialize LOCK_open as an adaptive mutex on platforms where the
PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP macro is available. The flag
indicates that a thread should spin (busy wait) for some time on a
locked adaptive mutex before blocking (sleeping). It's intended
to alleviate performance problems due to LOCK_open being a highly
contended mutex.
an assertion in a debug build.
The reason is that the C API doesn't support multiple result sets for prepared
statements, and attempting to execute a stored routine which returns multiple result
sets sometimes leads to a network error. The network error sets the diagnostic area
prematurely, which later leads to the assert when an attempt is made to set a second
server state.
This patch fixes the issue by changing the scope of the error code returned by
sp_instr_stmt::execute() to include any error which happened during the execution.
To assure that Diagnostic_area::is_sent really means that the message was sent, all
network-related functions are checked for return status.
Those keywords do nothing in 5.1 (they are meant for future versions, for example featuring the Maria engine),
so they are removed from the syntax here. Adding those keywords to future versions when needed is covered by:
- WL#5034 "Add TRANSACTIONAL=0|1 and PAGE_CHECKSUM=0|1 clauses to CREATE TABLE"
- WL#5037 "New ROW_FORMAT value for CREATE TABLE: PAGE"
compression
Since uint3korr() may read 4 bytes depending on build flags and
platform, allocate 1 extra "safety" byte in the network buffer
for cases when uint3korr() in my_real_read() is called to read
the last 3 bytes in the buffer.
It is practically hard to construct a reliable and reasonably
small test case for this bug, as that would require constructing
an input stream such that a certain sequence of bytes in a
compressed packet happens to be the last 3 bytes of the network
buffer.
If the log_bin_trust_function_creators option is not defined, creating a stored
function requires either one of the modifiers DETERMINISTIC, NO SQL, or READS
SQL DATA. Executing a stored function should also follow the same rules when in
STATEMENT mode. However, this was not happening and a wrong error was being
printed out: ER_BINLOG_ROW_RBR_TO_SBR.
The patch makes the creation and execution compatible and prints out the correct
error ER_BINLOG_UNSAFE_ROUTINE when a stored function without one of the modifiers
above is executed in STATEMENT mode.
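A sketch of the now-consistent behaviour (function and values are made up; the exact session setup may vary):
SET GLOBAL log_bin_trust_function_creators = 1;
CREATE FUNCTION f1() RETURNS INT RETURN 1;   -- no DETERMINISTIC/NO SQL/READS SQL DATA
SET GLOBAL log_bin_trust_function_creators = 0;
SET SESSION binlog_format = 'STATEMENT';
-- Executing the function now fails with ER_BINLOG_UNSAFE_ROUTINE,
-- the same error reported at creation time (previously the misleading
-- ER_BINLOG_ROW_RBR_TO_SBR was reported here).
SELECT f1();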
The maximum value of the max_join_size variable is set by converting
a signed type (long int) with negative value (-1) to a wider unsigned
type (unsigned long long), which yields the largest possible value of
the wider unsigned type -- as per the language conversion rules. But,
depending on build options, the type of the max_join_size might be a
shorter type (ha_rows - unsigned long) which causes the warning to be
thrown once the large value is truncated to fit.
The solution is to ensure that the maximum value of the variable is
always set to the maximum value of the integer type of max_join_size.
Furthermore, it would be interesting to always have a fixed type for
this variable, but this would incur a change of behavior which is
not acceptable for a GA version. See Bug#35346.
to wrong result
When using MIXED mode and issuing 'CREATE TEMPORARY TABLE t_tmp',
the statement is logged if the current binlogging mode is
STATEMENT. This causes the slave to replay the instruction and
create the temporary table as well. If there is no switch to ROW
mode, and later on a 'DROP TEMPORARY TABLE t_tmp' is issued, then
this statement will also be logged and the slave will
remove/close the temporary table.
However, if there is a switch to ROW mode between the CREATE and
DROP TEMPORARY table, the DROP statement will not be logged,
leaving the slave with a dangling temporary table.
This patch addresses this by always logging a DROP TEMPORARY
TABLE IF EXISTS when in mixed mode and a drop statement is issued
for temporary table(s).
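An illustrative sequence of the problem (details are simplified; the switch to row format can be triggered, for example, by an unsafe statement):
SET SESSION binlog_format = 'MIXED';
CREATE TEMPORARY TABLE t_tmp (a INT);  -- logged as a statement; the slave creates it
-- ... the current binlogging mode later switches to ROW, for example
-- because an unsafe statement such as INSERT INTO t1 VALUES (UUID()) runs ...
DROP TEMPORARY TABLE t_tmp;            -- previously not logged: dangling table on the slave
-- With the fix, a DROP TEMPORARY TABLE IF EXISTS is always written
-- to the binary log for the dropped temporary table(s).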