bug#44766: valgrind error when using convert() in a subquery
Problem: the input and output buffers may be the same when
converting a string to some charset.
That may lead to wrong results/valgrind warnings.
Fix: use different buffers.
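For illustration, a query of roughly this shape (hypothetical table and
column names) exercises the affected path, with CONVERT() evaluated
inside a subquery:
  CREATE TABLE t1 (c1 VARCHAR(10) CHARACTER SET latin1);
  INSERT INTO t1 VALUES ('abc');
  -- CONVERT() inside a subquery; before the fix the conversion could
  -- end up reading from and writing to the same buffer
  SELECT (SELECT CONVERT(c1 USING utf8) FROM t1 LIMIT 1) FROM t1;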
warnings after uncompressed_length
UNCOMPRESSED_LENGTH() did not validate its argument. In
particular, if the argument length was less than 4 bytes,
an uninitialized memory value was returned as a result.
Since the result of COMPRESS() is either an empty string or
a 4-byte length prefix followed by compressed data, the bug was
fixed by ensuring that the argument of UNCOMPRESSED_LENGTH() is
either an empty string or contains at least 5 bytes (as done in
UNCOMPRESS()). This is the best we can do to validate input
without decompressing.
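A minimal illustration (hypothetical literal): any argument shorter than
the 4-byte length prefix produced by COMPRESS() used to trigger the read
of uninitialized memory:
  -- 'ab' is only 2 bytes, shorter than the 4-byte length prefix;
  -- before the fix the result came from uninitialized memory,
  -- after the fix such an argument is treated as invalid (as in UNCOMPRESS())
  SELECT UNCOMPRESSED_LENGTH('ab');
  -- an empty string remains a valid argument
  SELECT UNCOMPRESSED_LENGTH('');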
Change the warning message to 'Statement may not be safe to log in
statement format' to indicate that the decision on whether a
statement is safe or not is heuristic, and we are conservative.
The RAND(N) function, where N is a field of a "constant" table
(a table with a single row), failed with a SIGFPE.
Evaluation of RAND(N) relies on the constant status of its argument.
The server "seeded" the random value for each constant argument
only once, in the Item_func_rand::fix_fields method,
and then skipped the call to seed_random() in the
Item_func_rand::val_real method for such constant arguments.
However, the constant status of an argument may change
after the call to fix_fields, if the argument is a field of
a "constant" table. Thus, pre-initialization of the random value
in the fix_fields method is too early.
Initialization of the random value by seed_random() has been
removed from the Item_func_rand::fix_fields method.
The Item_func_rand::val_real method has been modified to
call seed_random() on the first evaluation of this method
if an argument is a function.
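A sketch of the failing case (hypothetical names): the argument becomes
constant only once the single-row table has been read, i.e. after
fix_fields:
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);   -- single-row "constant" table
  -- N is a field of the constant table; before the fix this could hit a SIGFPE
  SELECT RAND(a) FROM t1;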
Field_time::get_time() did not initialize some members of
MYSQL_TIME which led to valgrind warnings when those members
were accessed in Protocol_simple::store_time().
It is unlikely that this bug could result in wrong data
being returned, since Field_time::get_time() initializes the
'day' member of MYSQL_TIME to 0, so the value of 'day'
in Protocol_simple::store_time() would be 0 regardless
of the values for 'year' and 'month'.
In a UNION, if the last SELECT is written without parentheses and
has an ORDER BY clause, that clause belongs to the
UNION as a whole. It is parsed as part of the last SELECT
and is then used as the 'unit->global_parameters->order_list' value.
During DESCRIBE EXTENDED we call select_lex->print_order() for the
last SELECT, where the order fields refer to a tmp table
that has already been freed. This leads to a crash.
The fix is to clean up global_parameters->order_list
instead of fake_select_lex->order_list.
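A sketch of the affected statement shape (hypothetical tables): the last
SELECT has no parentheses, so the ORDER BY is global to the UNION, and
EXPLAIN (DESCRIBE) EXTENDED then prints the freed order fields:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- before the fix, explaining such a statement could crash when
  -- printing the global ORDER BY of the UNION
  EXPLAIN EXTENDED SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY a;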
It is not possible to prevent the server from starting if a mandatory
built-in plugin fails to start. This can in some cases lead to data
corruption when the old table namespace is suddenly used by a different
storage engine.
A boolean command line option in the form of --foobar is automatically
created for every existing plugin "foobar". By changing this command line
option from a boolean to a tristate { OFF, ON, FORCE } it is possible to
specify the plugin loading policy for each plugin.
The behavior is specified as follows:
OFF   = Disable the plugin and start the server.
ON    = Enable the plugin and start the server even if an error occurs
        during plugin initialization.
FORCE = Enable the plugin but do not start the server if an error occurs
        during plugin initialization.
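Using the "foobar" placeholder from above, the resulting command line
usage would look roughly like this (a sketch; the actual option name is
derived from the real plugin name):
  mysqld --foobar=OFF     # disable the plugin, start the server
  mysqld --foobar=ON      # enable the plugin; start even if its initialization fails
  mysqld --foobar=FORCE   # enable the plugin; refuse to start if its initialization fails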
SQL_SELECT::test_quick_select
The crash was caused by an incomplete cleanup of JOIN_TAB::select
during the filesort of rows for GROUP BY clause inside a subquery.
Queries where quick index access is replaced with filesort
were affected. For example:
SELECT 1 FROM
(SELECT COUNT(DISTINCT c1) FROM t1
WHERE c2 IN (1, 1) AND c3 = 2 GROUP BY c2) x
Quick index access related data in the SQL_SELECT::test_quick_select
function was inconsistent after an incomplete cleanup.
This cleanup has been completed to prevent crashes in the
SQL_SELECT::test_quick_select function.
and HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN but before the run-time type calculations,
it causes the type calculations in the IN function done at run time
to get unexpected results different from what was prepared at compile time.
In the CASE ... WHEN ... THEN ... statement there was a similar problem
and it was solved by artificially adding a STRING argument to the matrix
at compile time, so if any of the arguments of the CASE function changes
its type to a string it will still be covered by the information prepared
at compile time.
Extended the CASE fix to cover the IN case.
An alternative way of fixing this problem is by caching the result type of
the arguments at compile time and using the cached information at run time
instead of re-calculating the result types.
The CASE approach was preferred for uniformity and to keep the fix localized.
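A hypothetical query of the affected shape: a non-string expression is
cached for GROUP BY (in a string slot) and the same expression is then
compared with IN in the HAVING clause:
  CREATE TABLE t1 (d DECIMAL(10,2));
  INSERT INTO t1 VALUES (1.00), (2.00);
  -- d is cached as a string for GROUP BY, while IN prepared its
  -- comparison types for a DECIMAL argument at compile time
  SELECT d FROM t1 GROUP BY d HAVING d IN (1.00, 3.00);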
Problem: when executing queries like "ALTER TABLE view1;" we do not
check the new view's name (which is not specified),
which leads to a server crash.
Fix: do nothing in such cases (to be consistent with the behaviour
for tables).
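The case from the description, spelled out (hypothetical view body):
  CREATE VIEW view1 AS SELECT 1;
  -- no new name / no alter specification is given; previously this
  -- crashed the server, now it does nothing (as for tables)
  ALTER TABLE view1;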
with a "HAVING" clause though query works
SELECT from views defined like:
CREATE VIEW v1 (view_column)
AS SELECT c AS alias FROM t1 HAVING alias
fails with error 1356:
View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights
to use them
CREATE VIEW form with a (column list) substitutes
SELECT column names/aliases with names from a
view column list.
However, alias references in the HAVING clause were
not substituted.
The Item_ref::print function has been modified
to write the correct aliased names of the underlying
items into the generated VIEW definition / .frm file.
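The scenario from the description, as a runnable sketch (t1 and its
column c are hypothetical):
  CREATE TABLE t1 (c INT);
  CREATE VIEW v1 (view_column)
    AS SELECT c AS alias FROM t1 HAVING alias;
  -- before the fix this failed with error 1356, because the stored view
  -- definition still referred to the alias instead of view_column
  SELECT * FROM v1;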
"freeing items"
The length of the table map log event calculated in the event constructor
was one byte shorter than what would actually be written. This would
lead to a mismatch between the number of bytes written and the event
end_log_pos, causing bad event alignment in the binlog (a corrupted
binlog) or in the transaction cache while fixing positions
(MYSQL_BIN_LOG::write_cache). This could make the binlog impossible
to read or even cause infinite loops in MYSQL_BIN_LOG::write_cache.
This patch addresses this issue by correcting the expected event
length in the Table_map_log_event constructor, when the field metadata
size exceeds 255.
Problem: in some cases LOAD_FILE() passes a file name string
without a trailing '\0' to fn_format(), which relies on it.
That may lead to valgrind warnings.
Fix: add a trailing '\0' to the file name passed to fn_format().
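A sketch of the call path (hypothetical file name); any LOAD_FILE()
invocation passes its argument on to fn_format():
  -- the file name string handed to fn_format() was not always '\0'-terminated
  SELECT LOAD_FILE('/tmp/bug_example.txt');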
The problem is that the internal variable used to specify a
transaction with consistent read was being used outside the
processing context of a START TRANSACTION WITH CONSISTENT
SNAPSHOT statement. The practical consequence was that a
consistent snapshot specification could leak to unrelated
transactions on the same session.
The solution is to ensure a consistent snapshot clause is
only relied upon for the START TRANSACTION statement.
This is already fixed in a similar way on 6.0.
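A sketch of the leak (hypothetical table): the snapshot specification
from the first transaction could carry over to a later, unrelated
transaction in the same session:
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  COMMIT;
  -- before the fix, this plain transaction could still behave as if
  -- WITH CONSISTENT SNAPSHOT had been specified
  START TRANSACTION;
  SELECT * FROM t1;
  COMMIT;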
In the output from mysqlbinlog, incident log events were
represented as just a comment. Since the incident log event
represents an incident that could cause the contents of the
database to change without being logged to the binary log,
it means that if the SQL is applied to a server, it could
potentially lead to the databases being out of sync.
In order to handle that, this patch adds the statement "RELOAD
DATABASE" to the SQL output for the incident log event. This will
require a DBA to edit the file and handle the case as appropriate
before applying the output to a server.
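A sketch of the idea: RELOAD DATABASE is not a valid statement, so
feeding the unedited mysqlbinlog output back to a server stops with a
parse error at the incident, forcing the DBA to intervene:
  -- emitted into the SQL output at the point of the incident log event;
  -- deliberately fails to parse until the DBA edits or removes it
  RELOAD DATABASE;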
Problem: when storing "SELECT ... INTO @var ..." results in variables, we used the val_xxx()
methods, which return results for the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate the clause) instead of
data from the last row of the previous group.
Fix: use the val_xxx_result() counterparts to get proper results.
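A hypothetical illustration: with GROUP BY and HAVING the result set has
a single row, but the value stored into the variable could come from the
wrong row before the fix:
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 10), (2, 20);
  -- only the a = 2 group satisfies HAVING, so exactly one row is returned;
  -- before the fix @va could receive a's value from the first row of a
  -- new group instead of the last row of the qualifying group
  SELECT a, SUM(b) INTO @va, @vs FROM t1 GROUP BY a HAVING SUM(b) > 10;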
165 changesets with 23 conflicts:
Text conflict in mysql-test/r/lock_multi.result
Text conflict in mysql-test/t/lock_multi.test
Text conflict in mysql-test/t/mysqldump.test
Text conflict in sql/item_strfunc.cc
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/parse_file.cc
Text conflict in sql/slave.cc
Text conflict in sql/sp.cc
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_base.cc
Text conflict in sql/sql_class.cc
Text conflict in sql/sql_crypt.cc
Text conflict in sql/sql_db.cc
Text conflict in sql/sql_lex.cc
Text conflict in sql/sql_parse.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_view.cc
Text conflict in storage/innobase/handler/ha_innodb.cc
Text conflict in storage/myisam/mi_packrec.c
Text conflict in tests/mysql_client_test.c
Updates to Innobase, taken from main 5.1:
bzr: ERROR: Some change isn't sane:
File mysql-test/r/innodb-semi-consistent.result is owned by Innobase and should not be updated.
File mysql-test/t/innodb-semi-consistent.test is owned by Innobase and should not be updated.
File storage/innobase/handler/ha_innodb.cc is owned by Innobase and should not be updated.
File storage/innobase/ibuf/ibuf0ibuf.c is owned by Innobase and should not be updated.
File storage/innobase/include/row0mysql.h is owned by Innobase and should not be updated.
File storage/innobase/include/srv0srv.h is owned by Innobase and should not be updated.
File storage/innobase/include/trx0trx.h is owned by Innobase and should not be updated.
File storage/innobase/include/trx0trx.ic is owned by Innobase and should not be updated.
File storage/innobase/lock/lock0lock.c is owned by Innobase and should not be updated.
File storage/innobase/page/page0cur.c is owned by Innobase and should not be updated.
File storage/innobase/row/row0mysql.c is owned by Innobase and should not be updated.
File storage/innobase/row/row0sel.c is owned by Innobase and should not be updated.
File storage/innobase/srv/srv0srv.c is owned by Innobase and should not be updated.
File storage/innobase/trx/trx0trx.c is owned by Innobase and should not be updated.
(Set env var 'ALLOW_UPDATE_INNOBASE_OWNED' to override.)
with seg fault
Multiple-table DELETE from a table joined to itself may cause
a server crash. This was originally discovered with the MEMORY engine,
but may affect other engines with different symptoms.
The problem was that the server violated the SE API by performing
a parallel table scan in one handler and removing records through
another (the delete-on-the-fly optimization).
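A sketch of the affected statement shape (hypothetical table, MEMORY
engine as originally reported): the table is scanned through one handler
while rows are deleted through another:
  CREATE TABLE t1 (a INT) ENGINE=MEMORY;
  INSERT INTO t1 VALUES (1), (2), (3);
  -- multiple-table DELETE from a table joined to itself
  DELETE t1_alias1 FROM t1 AS t1_alias1, t1 AS t1_alias2
    WHERE t1_alias1.a = t1_alias2.a + 1;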