The problem is that an unfiltered user query was being passed as
the format string parameter of sql_print_warning, which later
performs printf-like formatting, leading to crashes if the user
query contains formatting directives (e.g. %s). Also, it was
using THD::query as the source of the user query, but this
variable is not meaningful in some situations -- in a delayed
insert, it points to the table name.
The solution is to pass the user query as a parameter for the
format string and use the function parameter query_arg as the
source of the user query.
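A minimal standalone sketch of the hazard and the fix; the logger below is a
stand-in for a printf-style function such as sql_print_warning, and the
warning text is an example, not the server's actual message:

  #include <cstdarg>
  #include <cstdio>

  /* Stand-in for a printf-style logger such as sql_print_warning(). */
  static void print_warning(const char *format, ...)
  {
    va_list args;
    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);
    fputc('\n', stderr);
  }

  int main()
  {
    const char *user_query= "SELECT '%s%n' FROM t1";  /* user-controlled text */

    /* Unsafe (the bug): the query itself becomes the format string, so any
       '%' directives inside it are interpreted and can crash the process. */
    /* print_warning(user_query); */

    /* Safe (the fix): pass the query as an argument to a fixed format. */
    print_warning("query '%s' produced a warning", user_query);
    return 0;
  }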
In 37553 we declared longlong results for
class Item_str_timefunc as per the comments/docs,
but didn't add a method for that, and the
default just wasn't good enough for some
cases.
This changeset adds a dedicated val_int() to the class.
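A hedged sketch of the kind of method added; get_time() and MYSQL_TIME are
the server's usual time interfaces, but the exact body of the real patch may
differ:

  longlong Item_str_timefunc::val_int()
  {
    MYSQL_TIME ltime;
    if (get_time(&ltime))                 /* NULL or error: fall back to 0 */
      return 0;
    /* Return the time in the numeric HHMMSS form expected in int context. */
    longlong value= ltime.hour * 10000LL + ltime.minute * 100LL + ltime.second;
    return ltime.neg ? -value : value;
  }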
TRUNCATE TABLE fails to replicate when stmt-based binlogging is not supported.
There were two separate problems with the code, both of which are fixed with
this patch:
1. An error was printed by InnoDB for TRUNCATE TABLE in statement mode when
using the isolation levels READ COMMITTED and READ UNCOMMITTED, since InnoDB
does not permit statement-based replication for DML statements in those
isolation levels. However, TRUNCATE TABLE is not transactional, but is DDL,
and should therefore be allowed to be replicated as a statement.
2. The statement was not logged in mixed mode because of the error above, but
the error was not reported to the client.
This patch fixes the problem by treating TRUNCATE TABLE as DDL, that is, it is
always logged as a statement, and by not reporting an error from InnoDB for
TRUNCATE TABLE.
Documented behaviour was broken by the patch for bug 33699,
which actually is not a bug.
This fix reverts the patch for bug 33699 and restores the old
behaviour of UPDATE statements that set a NOT NULL field to
NULL.
ORDER BY could cause a server crash
Dependent subqueries like
SELECT COUNT(*) FROM t1, t2 WHERE t2.b
IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a)
caused a memory leak proportional to the
number of outer rows.
The make_simple_join() function has been converted into a
JOIN class method so that the join_tab_reexec and
table_reexec values are stored in the parent join only
(make_simple_join of tmp_join may access these values
via the 'this' pointer of the parent JOIN).
NOTE: this patch doesn't include a standard test case (this is
an "out of memory" bug). See the bug #42037 page for test cases.
Problem: some queries using NAME_CONST(.. COLLATE ...)
lead to a server crash due to a failed type cast.
Fix: return the underlying item's type in the case of
NAME_CONST(.. COLLATE ...) to avoid the wrong cast.
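A hedged sketch of the approach, assuming the COLLATE wrapper is an Item_func
whose functype() is COLLATE_FUNC and whose key_item() is the wrapped value;
this is a sketch of the idea, not the literal patch:

  Item::Type Item_name_const::type() const
  {
    /* NAME_CONST('name', 'value' COLLATE ...): report the type of the item
       underneath the collation wrapper so callers do not mis-cast it. */
    if (value_item->type() == FUNC_ITEM &&
        ((Item_func*) value_item)->functype() == Item_func::COLLATE_FUNC)
      return ((Item_func*) value_item)->key_item()->type();
    return value_item->type();
  }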
code backported from 6.0
per-file messages:
include/my_global.h
Remove SC_MAXWIDTH. This is unused and irrelevant nowadays.
include/my_sys.h
Remove errbuf declaration and unused definitions.
mysys/my_error.c
Remove errbuf definition and move and adjust ERRMSGSIZE.
mysys/my_init.c
Declare buffer on the stack and use my_snprintf.
mysys/safemalloc.c
Use size explicitly. It's more than enough for the message at hand.
sql/sql_error.cc
Use size explicitly. It's more than enough for the message at hand.
sql/sql_parse.cc
Declare buffer on the stack. Use my_snprintf as it will result in
less stack space being used than by a system-provided sprintf --
this allows us to put the buffer on the stack without causing much
trouble. Also, the use of errbuff here was not thread-safe, as the
function can be entered concurrently from multiple threads (see the
sketch after this file list).
sql/sql_table.cc
Use MYSQL_ERRMSG_SIZE. Extra space is not needed as my_snprintf will
NUL-terminate strings.
storage/myisam/ha_myisam.cc
Use MYSQL_ERRMSG_SIZE.
sql/share/errmsg.txt
Error message truncation in test "innodb" in embedded mode: the filename
in the error message can safely take up to 210 symbols.
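A minimal sketch of the pattern applied in sql_parse.cc and the other files
above. The function name, message text, and the local MYSQL_ERRMSG_SIZE value
below are stand-ins for the server's own; my_snprintf() in the server behaves
like snprintf() here.

  #include <cstdio>

  enum { MYSQL_ERRMSG_SIZE= 512 };            /* stand-in for the constant */

  void report_table_error(const char *db, const char *table)
  {
    /* A per-call stack buffer replaces the shared static errbuf, which was
       not thread-safe; the formatting call bounds the output and always
       NUL-terminates it. */
    char buff[MYSQL_ERRMSG_SIZE];
    std::snprintf(buff, sizeof(buff),
                  "Incorrect information in file: '%s/%s'", db, table);
    std::fputs(buff, stderr);
  }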
Problem:
RelativeLocationPath can appear only after a node-set expression
in the third and the fourth branches of this rule:
PathExpr ::= LocationPath
           | FilterExpr
           | FilterExpr '/' RelativeLocationPath
           | FilterExpr '//' RelativeLocationPath
The XPath code didn't check the type of FilterExpr and crashed.
Fix:
If FilterExpr is a scalar expression
(variable reference, literal, number, scalar function call),
return an error.
The bug happened because filtering out a STMT_END_F-flagged event left the
transaction COMMIT to find traces of an incomplete statement commit.
Such a situation is only possible with NDB circular replication. The
filtered-out rows event is the one that immediately precedes the COMMIT
query event.
Fixed by performing the rows-log-event statement commit when executing
the transaction COMMIT event.
Resources that were allocated by events of the last statement other than
the STMT_END_F-flagged one are cleaned up prior to execution of the commit
logic.
functions
String::realloc() did not check whether the existing string data fit in the
newly allocated buffer when reallocating a String object with an external
buffer (i.e. alloced == FALSE). This could lead to memory overruns in some
cases.
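A standalone illustration of the missing bounds check, using simplified types
and a hypothetical helper name; the real fix lives inside String::realloc():

  #include <algorithm>
  #include <cstddef>
  #include <cstring>

  /* When switching from an external buffer to a freshly allocated one, the
     copy of the old contents must be bounded by the new buffer's size; the
     bug was to assume the old data always fits. */
  char *grow_buffer(const char *old_buf, std::size_t old_len,
                    std::size_t new_size)
  {
    char *new_buf= new char[new_size + 1];
    std::size_t copy_len= std::min(old_len, new_size);  /* the missing check */
    std::memcpy(new_buf, old_buf, copy_len);
    new_buf[copy_len]= '\0';
    return new_buf;
  }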
upgrading lock, even with low_priority_updates
The problem is that there is no mechanism to control whether a
delayed insert takes a high or low priority lock on a table.
The solution is to modify the delayed insert thread ("handler")
to take into account the global value of low_priority_updates
when taking table locks. The value of low_priority_updates is
retrieved when the insert delayed thread is created and will
remain the same for the duration of the thread.
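A hedged sketch of the idea; global_system_variables.low_priority_updates and
the TL_* thr_lock types are existing server names, but the exact variable the
patch introduces and its surrounding code are assumed here:

  /* Snapshot the global setting once, when the delayed insert handler
     thread is created; the chosen lock type is then used for the lifetime
     of the thread. */
  thr_lock_type delayed_lock=
    global_system_variables.low_priority_updates ? TL_WRITE_LOW_PRIORITY
                                                 : TL_WRITE;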
Replaced abs_top_srcdir with top_srcdir. Not sure it's an
improvement, but it is known that abs_top_srcdir
is a problem in some cases, and top_srcdir is a more common
variable to use for the same purpose.
When storing NULL to a TIMESTAMP NOT NULL DEFAULT ... column,
a NULL returned from some functions threw a 'cannot be NULL' error.
NULL returns now correctly result in the TIMESTAMP field being
assigned its default value.
If the system time is adjusted back during a query execution
(resulting in the end time being earlier than the start time)
the code that prints to the slow query log gets confused and
prints the negative execution time as huge unsigned numbers.
Fixed by not logging the statements that would have negative
execution time due to time shifts.
No test case since this would involve changing the system time.
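A sketch of the guard, with illustrative variable names; start_utime and
end_utime stand for the statement's recorded start and end times in
microseconds, and the surrounding slow-log code is assumed:

  /* If the system clock was set back mid-statement, the unsigned
     subtraction would wrap around, so such statements are not logged. */
  if (end_utime < start_utime)
    return;                                   /* skip the slow-log entry */
  ulonglong query_time= end_utime - start_utime;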
Item_in_optimizer::is_null() evaluated "NULL IN (SELECT ...)" to NULL regardless
of whether the subquery produced any records; this was a documented limitation.
The limitation has been removed (see bugs 8804, 24085, 24127), and
Item_in_optimizer::val_int() now correctly handles all cases with NULLs. Make
Item_in_optimizer::is_null() invoke val_int() so that it returns correct values
for "NULL IN (SELECT ...)".
messed up
"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery from the DUAL pseudo-table, the resulting HAVING
condition is an expression over constant values, so further
transformation with optimize_cond() eliminates this HAVING
condition and resets JOIN::having to NULL.
JOIN::exec then treated that NULL as an always-true HAVING,
and that caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from
"HAVING FALSE" we already have the JOIN::having_value flag.
However, JOIN::exec() ignored JOIN::having_value as described
above, as if it were always set to COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
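A hedged sketch of the kind of check JOIN::exec now performs; COND_FALSE is
from the existing Item::cond_result enum, while the surrounding code and the
local names are assumed:

  /* A NULL HAVING pointer no longer means "always true": consult the
     having_value computed when the clause was optimized away. */
  bool having_passes=
    curr_join->having ? curr_join->having->val_int() != 0
                      : curr_join->having_value != Item::COND_FALSE;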
When using CREATE TEMPORARY TABLE LIKE to create a temporary table,
or using TRUNCATE to delete all rows of a temporary table, the
tmp_table_used flag was not set. This caused the omission of
"SET @@session.pseudo_thread_id" when dumping the binlog with mysqlbinlog,
and caused errors when replaying the statements.
This patch fixes the problem by setting tmp_table_used in these two
cases. (Done by He Zhenxing 2009-01-12)
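A sketch, not the literal patch; the flag name is taken from the message
above, and the surrounding functions are assumed:

  /* Mark the session as having used a temporary table so that mysqlbinlog
     emits SET @@session.pseudo_thread_id before replaying the statement;
     this is now done in both the CREATE TEMPORARY TABLE ... LIKE and the
     TRUNCATE code paths. */
  thd->tmp_table_used= TRUE;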
When a MEMORY table is full, the error is returned to the client but not
written to the error log.
Fixed the handler API to write the error message to the error log when the
table is full.
Note: no test case is included, as testing the error log is non-trivial.
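A hedged sketch of the handler-level change; HA_ERR_RECORD_FILE_FULL and
sql_print_error() are existing server names, while the exact placement, the
message text, and the table_name variable are assumptions:

  /* In addition to returning the error to the client, write it to the
     server error log as well. */
  if (error == HA_ERR_RECORD_FILE_FULL)
    sql_print_error("The table '%s' is full", table_name);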