Problem: When GET_FORMAT() is called twice from the upper
level function (e.g. LEAST in the bug report), on the second
call "res= args[0]->val_str(...)" and str point to the same
String object.
1. Fix: changing the order from
- get val_str into tmp_value then convert to str
to
- get val_str into str then convert to tmp_value
The new order is more correct: the purpose of the "str" parameter
is exactly to call val_str() for the arguments.
The purpose of String class members (like tmp_value) is to perform
further actions on the result.
Doing it the other way around gives unexpected surprises.
2. Using str_value instead of str to do the padding, for the same reason.
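As a hedged illustration (not the original bug report's test case), the call pattern looks like this: an upper-level function such as LEAST() calls val_str() on each GET_FORMAT() argument:
SELECT LEAST(GET_FORMAT(DATE, 'USA'), GET_FORMAT(DATE, 'ISO'));
# Before the fix, the second GET_FORMAT() evaluation could end up with
# "res" and "str" pointing to the same String object, producing a wrong
# or corrupted result.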
Bug#57820 extractvalue crashes
Problem: ExtractValue and Replace crashed in some cases
due to invalid handling of empty and NULL arguments.
Per file comments:
@mysql-test/r/ctype_ujis.result
@mysql-test/r/xml.result
@mysql-test/t/ctype_ujis.test
@mysql-test/t/xml.test
Adding tests
@sql/item_strfunc.cc
Make sure Item_func_replace::val_str safely handles empty strings.
@sql/item_xmlfunc.cc
set null_value if nodeset_func returned NULL,
which is possible when the second argument is an
unset user variable.
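A hedged sketch of the nodeset_func case described above (the variable name is made up):
SELECT ExtractValue('<a>test</a>', @undefined_xpath);
# @undefined_xpath was never set, so nodeset_func returns NULL; with the
# fix the function sets null_value and returns SQL NULL instead of crashing.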
The lines below, which were added in the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the corresponding tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. As we have a GROUP BY clause, we calculate the
group buffer length; see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real table and has LONG type.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In this case we get the wrong behaviour of
list_contains_unique_index() back. To fix it we
use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
The problem is caused by the fix for bug49487 and became visible
after the fix for bug56679.
Items are cleaned up and set to the unfixed state after filling a derived table.
So we cannot rely on the item::fixed state in Item_func_group_concat::print,
and we cannot use the 'args' array, as the items there may be cleaned up.
The fix is to always use the orig_args array of items, as it
should always contain the correct data.
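A hedged sketch of the kind of statement where Item_func_group_concat::print runs after the derived table has been filled (table and column names are made up):
CREATE TABLE t1 (f1 INT);
INSERT INTO t1 VALUES (1), (2);
EXPLAIN EXTENDED
SELECT * FROM (SELECT GROUP_CONCAT(f1 ORDER BY f1) FROM t1) AS d;
# EXPLAIN EXTENDED prints the query back via Item::print after the
# derived table's items have already been cleaned up, so print must use
# orig_args rather than args.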
The problem is division by a const value when
the result is out of the supported range.
The fix:
- return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
- return 0 if the divisor is -1 for the MOD operator.
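A hedged illustration of the boundary case, following the rules above (not a verified test run):
SELECT (-9223372036854775807 - 1) DIV -1;   # result out of BIGINT range: LONGLONG_MIN
SELECT (-9223372036854775807 - 1) MOD -1;   # divisor is -1: returns 0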
"Grantor" columns' data is lost when replicating mysql.tables_priv.
The slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executed on it.
In this patch, the current user is put into the query log event for all GRANT and REVOKE
statements, and the SQL thread uses the user from the query log event as the grantor.
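A hedged illustration of the symptom (user, database and table names are made up):
# On the master, executed as 'dba'@'localhost':
GRANT SELECT ON db1.t1 TO 'app'@'%';
# On the slave, before the fix:
SELECT Grantor FROM mysql.tables_priv WHERE Db = 'db1' AND Table_name = 't1';
# the Grantor column contains the SQL thread's default ''@'' instead of
# 'dba'@'localhost'.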
Row events were wrongly applied to a temporary table with the same name.
But row events are generated only for base tables, as a temporary
table's data is never binlogged in row mode. Normally, a base table
cannot be updated if a temporary table has the same name.
But there are two cases which can generate row events on
the base table of the same name.
Case 1: a 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate row events if it is unsafe.
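A hedged illustration of Case 1 (table name and the unsafe expression are made up):
CREATE TEMPORARY TABLE t1 (a VARCHAR(36));
CREATE TABLE t1 SELECT UUID() AS a;   # unsafe, so mixed format switches
                                      # to row events for this statement
# The row events target the base table t1, but before the fix the slave
# could apply them to the temporary table of the same name.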
Case 2: Dropping a transactional temporary table in a transaction
(happens only in 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put into the transaction cache and
binlogged after the row events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a row event.
main.mysqltest was skipped on Windows because a perl script intentionally does exit(1).
Use exit(2), as exit(1) on Windows is indistinguishable from failing to
execute perl.
data dictionary confusion
On file systems with case-insensitive file names and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on Mac OS X, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e. a lower-case version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
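A hedged sketch of the two affected paths, assuming a case-insensitive file system and lower_case_table_names=2 (names are made up):
CREATE DATABASE MixedCase;
CREATE TABLE MixedCase.TabOne (a INT);
DROP DATABASE MixedCase;                 # removal keyed on the case-preserved name
CREATE DATABASE MixedCase;
CREATE TABLE MixedCase.TabOne (a INT);   # lookup keyed on the case-preserved name
# Before the fix, these case-preserved keys could miss, or leave behind,
# normalized-key entries in the table definition cache.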
row_search_for_mysql(): When a secondary index record might not be
visible in the current transaction's read view and we consult the
clustered index and optionally some undo log records, return the
relevant columns of the clustered index record to MySQL instead of the
secondary index record.
ibuf_insert_to_index_page_low(): New function, refactored from
ibuf_insert_to_index_page().
ibuf_insert_to_index_page(): When we are inserting a record in place
of a delete-marked record and some fields of the record differ, update
that record just like row_ins_sec_index_entry_by_modify() would do.
btr_cur_update_alloc_zip(): Make the function public.
mysql_row_templ_t: Add clust_rec_field_no.
row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql(): Add the
flag rec_clust, for returning data at clust_rec_field_no instead of
rec_field_no. Resurrect the debug assertion that the record not be
marked for deletion. (Bug #55626)
[UNIV_DEBUG || UNIV_IBUF_DEBUG] ibuf_debug, buf_page_get_gen(),
buf_flush_page_try():
Implement innodb_change_buffering_debug=1 for evicting pages from the
buffer pool, so that change buffering will be attempted more
frequently.
row_search_for_mysql(): When a secondary index record might not be
visible in the current transaction's read view and we consult the
clustered index and optionally some undo log records, return the
relevant columns of the clustered index record to MySQL instead of the
secondary index record.
REC_INFO_DELETED_FLAG: Move the definition from rem0rec.ic to rem0rec.h.
ibuf_insert_to_index_page_low(): New function, refactored from
ibuf_insert_to_index_page().
ibuf_insert_to_index_page(): When we are inserting a record in place
of a delete-marked record and some fields of the record differ, update
that record just like row_ins_sec_index_entry_by_modify() would do.
mysql_row_templ_t: Add clust_rec_field_no.
row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql(): Add the
flag rec_clust, for returning data at clust_rec_field_no instead of
rec_field_no. Resurrect the debug assertion that the record not be
marked for deletion. (Bug #55626)
buf_LRU_free_block(): Refactored from
buf_LRU_search_and_free_block(). This is needed for the
innodb_change_buffering_debug diagnostics.
[UNIV_DEBUG || UNIV_IBUF_DEBUG] ibuf_debug, buf_page_get_gen(),
buf_flush_page_try():
Implement innodb_change_buffering_debug=1 for evicting pages from the
buffer pool, so that change buffering will be attempted more
frequently.
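A hedged usage sketch of the new debug knob (available only in UNIV_DEBUG / UNIV_IBUF_DEBUG builds):
SET GLOBAL innodb_change_buffering_debug = 1;   # aggressively evict pages so
                                                # change buffering is exercised
SET GLOBAL innodb_change_buffering_debug = 0;   # restore normal behaviour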
The create_sort_index() function overwrites the original JOIN_TAB::type field.
At re-execution of the subquery, the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order(), and
the function tries to find a suitable key for the ordering, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on re-execution
for EXPLAIN.
Additional fix:
Update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value maybe_null==TRUE based on the
assumption that this join is an outer join
(see the setup_table_map() function).
This is a merge from 5.1/builtin to 5.1/plugin of:
--------------
revision-id: vasil.dimov@oracle.com-20101018104811-nwqhg9vav17kl5s1
committer: Vasil Dimov <vasil.dimov@oracle.com>
timestamp: Mon 2010-10-18 13:48:11 +0300
message:
Fix Bug#57252 disabling innobase_stats_on_metadata disables ANALYZE
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze() instead of
::info() call ::info_low(true => called from analyze).
Approved by: Jimmy (rb://487)
--------------
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze() instead of
::info() call ::info_low(true => called from analyze).
Approved by: Jimmy (rb://487)
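A hedged illustration of the behaviour being fixed (the table name is made up):
SET GLOBAL innodb_stats_on_metadata = OFF;
ANALYZE TABLE t1;
# Before the fix, turning innodb_stats_on_metadata off also made ANALYZE
# TABLE skip the InnoDB statistics update; with ::info_low() knowing it
# was called from ::analyze(), the statistics are updated anyway.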
The subquery executes twice, at the top-level JOIN::optimize and ::execute stages.
At the first execution the create_sort_index() function is called, and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, and at the second execution the FT_SELECT::get_next() method
returns an error.
The fix is to reinit the HANDLER::ft_handler field before re-execution of the subquery.
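A hedged sketch of the shape of query affected (not the original test case; schema and data are made up):
CREATE TABLE t1 (f1 INT, f2 TEXT, FULLTEXT KEY (f2)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (1, 'searchable text'), (2, 'other text');
SELECT (SELECT f1 FROM t1
        WHERE MATCH(f2) AGAINST('searchable')
        ORDER BY f1 LIMIT 1) AS hit;
# The fulltext subquery with ORDER BY goes through create_sort_index();
# executed once during optimization and once during execution, the second
# run failed before the fix because ft_handler had been cleaned up.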
replication aborts
When receiving a 'STOP SLAVE' command, the slave SQL thread will roll back the
transaction and stop immediately if only transactional tables are updated,
even though 'CREATE|DROP TEMPORARY TABLE' statements are in it. But these
statements can never be rolled back. Because the temporary-table-to-user-session
mapping remains until 'RESET SLAVE', the SQL thread will abort
with an error that the table already exists or doesn't exist when it restarts
and executes the whole transaction again.
After this patch, the SQL thread always waits until the transaction ends and then stops
if 'CREATE|DROP TEMPORARY TABLE' statements are in it.
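A hedged sketch of a master transaction that triggers the problem (engine choices and names are made up):
BEGIN;
INSERT INTO t1 VALUES (1);              # t1 is an InnoDB table
CREATE TEMPORARY TABLE tmp1 (a INT);
INSERT INTO t1 VALUES (2);
COMMIT;
# If STOP SLAVE arrives while the slave is inside this transaction, the
# InnoDB changes are rolled back but CREATE TEMPORARY TABLE is not; on
# restart the whole transaction is re-executed and fails because tmp1
# already exists.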
This is a port of the following changeset from
5.1/storage/innobase to 5.1/storage/innodb_plugin:
------------------------------------------------------------
revno: 3626
revision-id: vasil.dimov@oracle.com-20101013171859-gi9n558yj89x9v3w
parent: klewis@mysql.com-20101012175933-ce9kkgl0z8oeqffa
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.1-innodb
timestamp: Wed 2010-10-13 20:18:59 +0300
message:
Fix Bug#56143 too many foreign keys causes output of show create table to become invalid
Just remove the check whether the file is "too big".
A similar code exists in ha_innobase::update_table_comment() but that
method does not seem to be used.
Also use a CREATE statement with all the FKs instead of ALTERing the
table 550 times because it is faster.
Just remove the check whether the file is "too big".
A similar code exists in ha_innobase::update_table_comment() but that
method does not seem to be used.
Problem: if multibyte and binary string arguments are passed to the
RPAD(), LPAD() or INSERT() functions, they might return
wrong results or even lead to a server crash due to a missing
character set conversion.
Fix: perform the conversion if necessary.
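A hedged illustration of the argument mix described above (table, charset and byte values are made up):
CREATE TABLE t1 (c1 VARCHAR(10) CHARACTER SET ujis);
INSERT INTO t1 VALUES ('abc');
SELECT HEX(RPAD(c1, 8, _binary X'A1A2')) FROM t1;      # multibyte string, binary pad
SELECT HEX(INSERT(c1, 2, 1, _binary X'A1A2')) FROM t1; # multibyte string, binary replacement
DROP TABLE t1;
# Before the fix, such mixes could skip the required character set
# conversion, giving wrong results or crashing the server.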
After an ALTER TABLE which changed only the table's metadata, the row-based
binlog sometimes got corrupted since the table map was unexpectedly
set to 0 for subsequent updates to the same table.
An ALTER TABLE which changed only the table's metadata always reset
table_map_id for the table share to 0. Despite the fact that
0 is a valid value for table_map_id, this step caused problems
as it could create a situation in which we had more than
one table share with table_map_id equal to 0. If more than one
table with table_map_id equal to 0 was updated in the same statement,
updates to these different tables were written into the same
rows event. This caused the slave server to crash.
This bug happens only in 5.1. It doesn't affect 5.5+.
This patch solves the problem by ensuring that ALTER TABLE
statements which change only metadata never reset table_map_id
to 0. To do this it changes reopen_table() to correctly use the
refreshed table_map_id value instead of using the old one /
resetting it.
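A hedged sketch of the scenario, assuming row-based binlogging and that both ALTERs are handled as metadata-only changes (names are made up):
ALTER TABLE t1 ALTER COLUMN a SET DEFAULT 42;
ALTER TABLE t2 ALTER COLUMN b SET DEFAULT 42;
UPDATE t1, t2 SET t1.a = t1.a + 1, t2.b = t2.b + 1;   # one statement, two tables
# Before the fix, both shares could end up with table_map_id = 0, so the
# row changes of both tables were written into the same rows event and
# crashed the slave.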
When a slave executes a transaction bigger than the slave's max_binlog_cache_size,
the slave will crash. It is caused by the assert that the server should roll back
only the statement, but not the whole transaction, if the error ER_TRANS_CACHE_FULL
happens. But the slave SQL thread always rolls back the whole transaction when
an error happens.
After this patch, we always clear any error set in the SQL thread (it is different
from the error in 'SHOW SLAVE STATUS'), and it is cleared before rolling back
the transaction.
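A hedged sketch of the triggering conditions (sizes, schema and data are made up):
# On the slave:
SET GLOBAL max_binlog_cache_size = 32768;
# On the master, a transaction whose events exceed that cache:
BEGIN;
INSERT INTO t1 VALUES (REPEAT('a', 60000));
COMMIT;
# Applying this transaction raises ER_TRANS_CACHE_FULL on the slave and,
# before the fix, hit the assert described above.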
This is a regression from the fix for bug no 38999. A storage engine capable
of reading only a subset of a table's columns updates corresponding bits in
the read buffer to signal that it has read NULL values for the corresponding
columns. It cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements compares the NULL bits
using memcmp, inadvertently comparing bits that were never requested from the
storage engine. The regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even those that it had no
knowledge of. This has devastating effects for the index merge algorithm,
which relies on all NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and InnoDB plugin and
changes the server's method of comparing records. For engines that always read
entire rows, we proceed as usual. For engines capable of reading only select
columns, the record buffers are now compared on a column by column basis. An
assertion was also added so that non-comparable buffers are never read. Some
relevant copy-pasted code was also consolidated in a new function.
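A hedged sketch of the kind of statement affected (schema, data and the chosen access method are made up):
CREATE TABLE t1 (
  a INT, b INT, c INT,
  KEY (a), KEY (b)
) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1, NULL, 0), (NULL, 2, 0), (3, 3, 0);
UPDATE t1 SET c = 1 WHERE a = 1 OR b = 2;   # may use index_merge and read
                                            # only a subset of the columns
# With partial column reads, only the requested NULL bits are valid, so
# the before/after images must be compared column by column rather than
# with a single memcmp.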
In case of failure in ALTER ... PARTITION under LOCK TABLES,
the server could crash because it had modified the locked
table object, which was not reverted in case of failure,
resulting in a bad table definition being used after the failed
command.
Solved by always closing the locked table, even in case
of error.
Note: this is a 5.1-only fix; bug#56172 fixed it in 5.5+.
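A hedged illustration of the failing pattern (partition layout and the failing ALTER are made up):
CREATE TABLE t1 (a INT)
  PARTITION BY RANGE (a) (PARTITION p0 VALUES LESS THAN (10));
LOCK TABLES t1 WRITE;
# This ALTER fails because the new partition boundary is not increasing:
ALTER TABLE t1 ADD PARTITION (PARTITION p1 VALUES LESS THAN (5));
SELECT * FROM t1;   # before the fix, the modified-but-not-reverted table
                    # object could lead to a crash here
UNLOCK TABLES;
DROP TABLE t1;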