When analyzing the possible index use cases, the server was re-using an internal structure.
This is wrong, as this internal structure gets updated during the analysis.
Fixed by making a copy of the internal structure for every place it needs to be used.
Also stopped the generation of empty SEL_TREE structures that unnecessarily
complicate the analysis.
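A minimal sketch of the kind of query whose range analysis exercises the OR-merging of SEL_TREEs (table and index names are assumptions, not taken from the bug report):
  CREATE TABLE t1 (a INT, b INT, c INT, KEY(a), KEY(b), KEY(c));
  -- OR-ing conditions on different indexes makes the optimizer build one
  -- SEL_TREE per index and merge them; re-using the shared internal
  -- structure instead of copying it corrupted this analysis.
  EXPLAIN SELECT * FROM t1 WHERE a = 1 OR b = 2 OR c = 3;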
mysql-test/r/index_merge.result:
Bug#37943: test case
mysql-test/t/index_merge.test:
Bug#37943: test case
sql/opt_range.cc:
Bug#37943:
- Make copy constructors for SEL_TREE and sub-structures and use them when OR-ing trees.
- Don't generate empty SEL_TREEs; return NULL instead.
Re-enabling failing test case because server logs were lost in pushbuild, so we need to run it again.
mysql-test/suite/rpl/t/disabled.def:
Re-enabling failing test case because server logs were lost in pushbuild, so we need to run it again.
The JOIN for the subselect wasn't cleaned up if we came upon an error
during sub_select() execution. That led to the assertion failure
in close_thread_tables().
Part of the 6.0 code was backported.
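A minimal sketch of the failing scenario named in the bug title (table, procedure, and data are assumptions):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  CREATE PROCEDURE p1(x INT) SELECT x;
  -- The subquery argument returns more than one row, so the call fails;
  -- before the fix the subselect's JOIN was not cleaned up on this error
  -- path and the server hit the assertion in close_thread_tables().
  CALL p1((SELECT a FROM t1));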
per-file comments:
mysql-test/r/sp-error.result
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test result
mysql-test/t/sp-error.test
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test case
sql/sp_head.cc
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
A lex->unit.cleanup() call was added when not in a substatement.
The problem is that when statement-based replication is enabled,
statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE
.. SELECT FROM need to grab a read lock on the source table that
does not permit concurrent inserts, which would in turn be denied
if the source table is a log table, because log tables can't be
locked exclusively.
The solution is to not take such a lock when the source table is
a log table, as it is unsafe to replicate log tables under statement-
based replication. Furthermore, the read lock that does not permit
concurrent inserts is now only taken if statement-based replication
is enabled and the source table is not a log table.
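A sketch of the affected pattern, assuming the general query log is written to a table; the statements are illustrative, not the actual test case:
  SET GLOBAL log_output = 'TABLE';
  SET GLOBAL general_log = 'ON';
  -- Under statement-based binlogging this used to request a read lock
  -- that forbids concurrent inserts on the source table; such a lock
  -- cannot be granted on a log table, so the statement was denied.
  CREATE TABLE t1 SELECT * FROM mysql.general_log;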
include/thr_lock.h:
Introduce yet another lock type that may get upgraded depending
on the binary log format. This is not an optimal solution but
can be easily improved later.
mysql-test/r/log_tables.result:
Add test case result for Bug#34306
mysql-test/suite/binlog/r/binlog_stm_row.result:
Add test case result for Bug#34306
mysql-test/suite/binlog/t/binlog_stm_row.test:
Add test case for Bug#34306
mysql-test/t/log_tables.test:
Add test case for Bug#34306
sql/lock.cc:
Assert that TL_READ_DEFAULT is not a real lock type.
sql/mysql_priv.h:
Export new function.
sql/mysqld.cc:
Remove using_update_log.
sql/sql_base.cc:
Introduce a function that returns the appropriate read lock type
depending on how the statement is going to be replicated. It will
only take a TL_READ_NO_INSERT lock if the binary log is enabled,
the binary log format is statement-based, and the table is not a log table.
sql/sql_parse.cc:
Remove using_update_log.
sql/sql_update.cc:
Use new function to choose read lock type.
sql/sql_yacc.yy:
The lock type is now decided at open_tables time. The old behavior was
actually misleading, as the binary log format can be switched dynamically,
and the lock type would not change for statements that had already been
parsed when the binary log format was changed (i.e. prepared statements).
mysql-test/lib/mtr_cases.pl:
Forward port the algorithm that checks whether a binlog format is supported.
mysql-test/mysql-test-run.pl:
Don't use dynamic setting of the binlog format; it does not work.
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address range
overlapped that of the new field value. This caused problems when the
cached value storage was reallocated to provide space for a new
character set representation. The patch checks the address ranges, and if
they overlap, the new field value is copied to new storage before it is
converted to the new character set.
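A rough sketch of the kind of statement involved (table, procedure, and character sets are assumptions; the real test case is in mysql-test/t/sp.test):
  CREATE TABLE t1 (c TEXT CHARACTER SET utf8);
  INSERT INTO t1 VALUES (REPEAT('a', 1000));
  -- Writing a converted substring of the field back into the same column
  -- made the cached value and the new field value overlap in memory,
  -- which went wrong when the cached value was reallocated for the
  -- character set conversion:
  CREATE PROCEDURE p1()
    UPDATE t1 SET c = CONVERT(SUBSTR(c, 1, 100) USING latin1);
  CALL p1();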
mysql-test/r/sp.result:
Added result set
mysql-test/t/sp.test:
Added test case
sql/field.cc:
The source and destination address ranges of a character conversion must
not overlap, or the 'from' address will be invalidated as the temporary
value object is re-allocated to fit the new character set.
sql/field.h:
Added comments
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
and
Bug#33555: Group By Query does not correctly aggregate partitions
Backport of bug-33257, which is the same bug.
read_range_*() calls were not passed to the partition handlers,
but were translated into index_read/next family calls,
resulting in duplicate rows and wrong aggregations.
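A sketch of the query shape that was affected (table layout and data are assumptions):
  CREATE TABLE t1 (a INT, b INT, KEY (a))
  PARTITION BY RANGE (a)
    (PARTITION p0 VALUES LESS THAN (10),
     PARTITION p1 VALUES LESS THAN (20));
  INSERT INTO t1 VALUES (5, 1), (15, 1);
  -- An ordered range scan that spans several partitions could return
  -- rows twice, so aggregates computed over such a scan came out wrong:
  SELECT a, COUNT(*) FROM t1 WHERE a BETWEEN 5 AND 15 GROUP BY a;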
mysql-test/r/partition_range.result:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Updated result file
mysql-test/t/partition_range.test:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Re-enabled the test
sql/ha_partition.cc:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
backport of bug-33257, correct handling of read_range_* calls,
without converting them to index_read/next calls
sql/ha_partition.h:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
backport of bug-33257, correct handling of read_range_* calls,
without converting them to index_read/next calls
The fix for bug 31887 was incomplete: it assumes that all the
field types returned by the IS_NUM macro are descendants of
Field_num and tries to zero-fill the values before doing constant
substitution with such fields when they are compared to constant string
values.
The only exception to this is Field_timestamp: it's in the IS_NUM
macro, but is not a descendant of Field_num.
Fixed by not zero-filling timestamp fields (Field_timestamp) when
converting the constant they are compared with to a string.
Note that this does not exclude timestamp columns from const
propagation.
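A sketch of the comparison that was mishandled (table and values are assumptions; the actual test is in mysql-test/t/compare.test):
  CREATE TABLE t1 (ts TIMESTAMP, s VARCHAR(20));
  INSERT INTO t1 VALUES ('2008-01-01 00:00:00', '2008-01-01 00:00:00');
  -- When the constant is propagated for the comparison it must be
  -- rendered as a date/time-formatted string; zero-filling it as if the
  -- TIMESTAMP column were a plain numeric field produced a wrong constant:
  SELECT * FROM t1 WHERE ts = '2008-01-01 00:00:00' AND s = ts;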
mysql-test/r/compare.result:
Bug #39353: test case
mysql-test/t/compare.test:
Bug #39353: test case
sql/item.cc:
Bug #39353: don't zero-fill timestamp fields when const-propagating
to a string: they'll be converted to a string in date/time format,
not as an integer.
It is a very big test and as such it takes a lot of time.
The solution is to divide the test into two parts, one testing increasing
column sizes and one testing decreasing sizes.
The InnoDB branch does extended tests (which the MyISAM branch does not)
due to the $do_pk_tests variable; that is why the InnoDB branch takes
longer.
No increase of memory usage in InnoDB was found when analyzing (tested
by looping create/drop and alter commands some millions of times).
The memory exhaustion discovered in the test is due to mysqltest, which
stores the result in memory (result file); this was the biggest
result file in the test framework, so dividing the test into two
parts also cuts the memory usage of mysqltest.
mysql-test/suite/parts/inc/partition_alter2_1.inc:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/inc/partition_alter2_2.inc:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_1_innodb.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_1_myisam.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_2_innodb.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_2_myisam.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/disabled.def:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Removed the test completely (since it has never been supported)
mysql-test/suite/parts/t/partition_alter2_1_innodb.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_1_myisam.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_2_innodb.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_2_myisam.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_ndb.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Removing the test since NDB has never supported these tests.
Bug#26020: User-Defined Variables are not consistent with columns data types
The "SELECT @lastId, @lastId := Id FROM t" query returns
different result sets depending on the type of the Id column
(INT or BIGINT).
Note: this fix doesn't cover the case when a select query
references a user variable and a stored function that
updates the value of that variable; in that case the result
is indeterminate.
The server used an incorrect assumption about the constness of
a user variable value used as a select list item:
the server caches the last query number where that variable
was changed and compares this number with the current query
number. If these numbers differ, the server assumes
that the variable is not updated in the current query, so
the respective select list item is a constant. However, in some
common cases the server updates the cached query number too late.
The server has been modified to memorize user variable
assignments during the parse phase and to take them into account
in the next (query preparation) phase, independently of the
order of user variable references/assignments in the select
item list.
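The query from the report, in a minimal reproducible form (the statement is from the description above; table contents and the initial SET are assumptions):
  CREATE TABLE t (Id BIGINT);
  INSERT INTO t VALUES (1), (2);
  SET @lastId = 0;
  -- Before the fix, whether @lastId on the select list was treated as a
  -- constant depended on when the cached query number was updated, so
  -- an INT Id column and a BIGINT Id column produced different result sets:
  SELECT @lastId, @lastId := Id FROM t;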
mysql-test/r/user_var.result:
Added test case for bug #26020.
mysql-test/t/user_var.test:
Added test case for bug #26020.
sql/item_func.cc:
The update of the entry and update_query_id variables has been
moved from Item_func_set_user_var::fix_fields() to a separate
method, Item_func_set_user_var::set_entry().
sql/item_func.h:
1. The Item_func_set_user_var::set_entry() method has been
added to update Item_func_set_user_var::entry.
2. The Item_func_set_user_var::entry_thd field has been
added to update Item_func_set_user_var::entry only when
needed.
sql/sql_base.cc:
Fix: setup_fields() calls Item_func_set_user_var::set_entry()
for all items from thd->lex->set_var_list before the first
call of ::fix_fields().
sql/sql_lex.cc:
The lex_start function has been modified to reset
the st_lex::set_var_list list.
sql/sql_lex.h:
New st_lex::set_var_list field has been added to
memorize all user variable assignments in the current
select query.
sql/sql_yacc.yy:
The variable_aux rule has been modified to memorize
in-query user variable assignments in the
st_lex::set_var_list list.
NO_BACKSLASH_ESCAPES was not heeded in LOAD DATA INFILE
and SELECT INTO OUTFILE. It is now.
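A sketch of the round trip that is now consistent (table, data, and file name are assumptions):
  SET sql_mode = 'NO_BACKSLASH_ESCAPES';
  CREATE TABLE t1 (c VARCHAR(50));
  INSERT INTO t1 VALUES ('a\tb');  -- a literal backslash-t under this mode
  -- The escaping used when writing the file and the escaping expected
  -- when reading it back now both honour NO_BACKSLASH_ESCAPES:
  SELECT c FROM t1 INTO OUTFILE 'bug_outfile.txt';
  LOAD DATA INFILE 'bug_outfile.txt' INTO TABLE t1;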
mysql-test/r/loaddata.result:
Show that SQL-mode NO_BACKSLASH_ESCAPES is heeded in
INFILE/OUTFILE, and that dump/restore cycles work!
mysql-test/t/loaddata.test:
Show that SQL-mode NO_BACKSLASH_ESCAPES is heeded in
INFILE/OUTFILE, and that dump/restore cycles work!
sql/sql_class.cc:
Add function to enquire whether ESCAPED BY was given.
When doing SELECT ... INTO OUTFILE, use ESCAPED BY if specifically
given; otherwise use a sensible default value depending on whether
the SQL mode includes NO_BACKSLASH_ESCAPES.
sql/sql_class.h:
Add function to enquire whether ESCAPED BY was given.
sql/sql_load.cc:
When doing LOAD DATA INFILE, use ESCAPED BY if specifically
given; otherwise use a sensible default value depending on whether
the SQL mode includes NO_BACKSLASH_ESCAPES.
Details:
- Backport from 5.1 to 5.0 of some improvements which prevent
sporadic failures
- @@GLOBAL.CONCURRENT_INSERT= 0 also for the slave server
- --sorted_result before all selects whose result
sets have more than one row
- Replace error numbers with error names
Fix the write_record function to record auto increment
values in a consistent way.
mysql-test/r/auto_increment.result:
Updated the test result file with the output of the
new test case added to verify this bug.
mysql-test/t/auto_increment.test:
Added a new test case to verify this bug.
sql/sql_insert.cc:
The algorithm of the write_record function
in sql_insert.cc is as follows (with more emphasis given to
the parts that deal with the autogenerated values):
1) If a write fails
1.1) save the autogenerated value to keep
thd->insert_id_for_cur_row from becoming 0.
1.2) <logic to handle INSERT ON DUPLICATE KEY
UPDATE and REPLACE>
2) record the first successful insert id.
explanation of the failure
--------------------------
As long as 1.1) was executed, 2) worked fine.
1.1) was always executed when REPLACE worked
with the last-row-update optimization, but
in cases where 1.1) was not executed, 2)
would fail, resulting in the autogenerated
value not being saved.
solution
--------
Repeat a check for thd->insert_id_for_cur_row
being zero, similar to 1.1), before 2) and ensure
that the correct value is saved.
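A sketch of a statement where the autogenerated value could previously be lost (table layout and data are assumptions):
  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT UNIQUE);
  INSERT INTO t1 (a) VALUES (1);
  -- The write hits a duplicate key on 'a'; the autogenerated id must
  -- still be recorded so that LAST_INSERT_ID() and the binary log see
  -- a consistent value:
  REPLACE INTO t1 (a) VALUES (1);
  SELECT LAST_INSERT_ID();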
Merge of fixes from 5.0 -> 5.1
Moved restoration of concurrent_insert's original value to the end of the 5.1 tests
Re-recorded .result file to account for changes to test file.
Moved the fix for this bug to 5.0, as other mysqldump bugs seem tied to concurrent_insert being on.
Set concurrent_insert off during this test, as INSERTs weren't being
completely processed before the calls to mysqldump, resulting in failing tests.
Altered the .test file to turn concurrent_insert off during the test and to restore it
to whatever the value was at the start of the test when complete (see the sketch below).
Re-recorded .result file to account for changes to variables in the test.
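A sketch of the guard now wrapped around the test (the variable handling is an assumption about the shape of the change, not the literal test code):
  SET @old_concurrent_insert = @@global.concurrent_insert;
  SET @@global.concurrent_insert = 0;
  -- ... INSERTs and the mysqldump-dependent checks run here ...
  SET @@global.concurrent_insert = @old_concurrent_insert;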