mysql-test/lib/mtr_cases.pl:
forward port the algorithm to check if a binlog format is supported
mysql-test/mysql-test-run.pl:
Don't use dynamic setting of binlog format - does not work
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address range
overlapped that of the new field value. This caused problems when the
cached value's storage was reallocated to accommodate a new character
set representation. The patch checks the address ranges and, if they
overlap, copies the new field value to new storage before it is
converted to the new character set.
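An illustrative sketch (hypothetical table and procedure names, not the actual
regression test) of the sort of statement involved: a stored procedure storing
a substring of a text argument into a blob column, forcing a character set
conversion of the cached value:

    CREATE TABLE t1 (b BLOB);
    INSERT INTO t1 VALUES ('');
    CREATE PROCEDURE p1(s TEXT)
      UPDATE t1 SET b = SUBSTR(s, 1, 100);
    CALL p1(REPEAT(_utf8 'a', 200));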
mysql-test/r/sp.result:
Added result set
mysql-test/t/sp.test:
Added test case
sql/field.cc:
The source and destination address ranges of a character set conversion must
not overlap; otherwise the 'from' address is invalidated when the temporary
value object is reallocated to fit the new character set.
sql/field.h:
Added comments
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
and
Bug#33555: Group By Query does not correctly aggregate partitions
Backport of the fix for Bug#33257, which is the same bug.
read_range_*() calls were not passed on to the partition handlers,
but were translated to index_read/next family calls,
resulting in duplicate rows and wrong aggregations.
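A minimal sketch (hypothetical table, not the actual regression test) of the
kind of query affected, where an ordered range scan / GROUP BY over a
partitioned table must not return any row twice:

    CREATE TABLE t1 (a INT, b INT)
      PARTITION BY RANGE (a)
        (PARTITION p0 VALUES LESS THAN (10),
         PARTITION p1 VALUES LESS THAN (20));
    INSERT INTO t1 VALUES (5, 1), (15, 1);
    SELECT b, COUNT(*) FROM t1 WHERE a BETWEEN 1 AND 18 GROUP BY b;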
mysql-test/r/partition_range.result:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Updated result file
mysql-test/t/partition_range.test:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Re-enabled the test
sql/ha_partition.cc:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Backport of the fix for Bug#33257: correct handling of read_range_* calls,
without converting them to index_read/next calls.
sql/ha_partition.h:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Backport of the fix for Bug#33257: correct handling of read_range_* calls,
without converting them to index_read/next calls.
The fix for bug 31887 was incomplete: it assumed that all the
field types covered by the IS_NUM macro are descendants of
Field_num and tried to zero-fill the values before doing constant
substitution with such fields when they are compared to constant string
values.
The only exception to this is Field_timestamp: it is covered by the IS_NUM
macro, but it is not a descendant of Field_num.
Fixed by excluding timestamp fields (Field_timestamp) from zero-filling
when converting the constant they are compared with to a string.
Note that this will not exclude the timestamp columns from const
propagation.
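A minimal sketch (hypothetical table, not the actual regression test) of the
kind of comparison affected, where the propagated constant must be treated as
a date/time string rather than a zero-filled number:

    CREATE TABLE t1 (a TIMESTAMP, b VARCHAR(20));
    INSERT INTO t1 VALUES ('2009-01-01 12:00:00', '2009-01-01 12:00:00');
    SELECT * FROM t1 WHERE a = b AND b = '2009-01-01 12:00:00';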
mysql-test/r/compare.result:
Bug #39353: test case
mysql-test/t/compare.test:
Bug #39353: test case
sql/item.cc:
Bug #39353: don't zero-fill timestamp fields when const propagating
to a string: they'll be converted to a string in a date/time format
rather than as an integer.
It is a very big test and as such it takes a lot of time.
The solution is to divide the test into two parts, one for testing increasing
column size and one for decreasing size.
The innodb branch does extended tests (which the myisam branch does not) due
to the $do_pk_tests variable; that is why the innodb branch takes
longer.
No increase of memory usage in innodb was found during analysis (tested
by looping some millions of create/drop and alter commands).
The memory exhaustion observed in the test is due to mysqltest, which
stores the result in memory (result file), and this was the biggest
result file in the test framework, so dividing the test into two
parts also cuts the memory usage of mysqltest.
mysql-test/suite/parts/inc/partition_alter2_1.inc:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/inc/partition_alter2_2.inc:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_1_innodb.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_1_myisam.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_2_innodb.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/r/partition_alter2_2_myisam.result:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/disabled.def:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Removed the test completely (since it has never been supported)
mysql-test/suite/parts/t/partition_alter2_1_innodb.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_1_myisam.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_2_innodb.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_2_myisam.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Split the test into two parts (one for increasing column size
and one for decreasing)
This leads to lower test case time (to avoid test case timeout)
and less memory consumption of mysqltest (due to smaller result file)
mysql-test/suite/parts/t/partition_alter2_ndb.test:
Bug#37803: Test "partition_alter2_innodb" exhausts resources (time and/or memory)
Removing test since ndb has never supported these tests
columns data types
The "SELECT @lastId, @lastId := Id FROM t" query returns
different result sets depending on the type of the Id column
(INT or BIGINT).
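A minimal sketch (hypothetical table, not the actual regression test) of the
statement involved; the result must not depend on whether Id is INT or BIGINT:

    CREATE TABLE t (Id BIGINT);
    INSERT INTO t VALUES (1), (2), (3);
    SET @lastId = 0;
    SELECT @lastId, @lastId := Id FROM t;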
Note: this fix doesn't cover the case when a select query
references a user variable and a stored function that
updates the value of that variable; in that case the result
is indeterminate.
The server used an incorrect assumption about the constantness of
a user variable value used as a select list item:
The server cached the number of the last query in which that variable
was changed and compared this number with the current query
number. If these numbers differed, the server assumed
that the variable is not updated in the current query, so
the corresponding select list item is a constant. However, in some
common cases the server updated the cached query number too late.
The server has been modified to memorize user variable
assignments during the parse phase so that they are taken into
account in the next (query preparation) phase, independently of
the order of user variable references/assignments in the select
item list.
mysql-test/r/user_var.result:
Added test case for bug #26020.
mysql-test/t/user_var.test:
Added test case for bug #26020.
sql/item_func.cc:
The update of the entry and update_query_id variables has been
moved from Item_func_set_user_var::fix_fields() to a separate
method, Item_func_set_user_var::set_entry().
sql/item_func.h:
1. The Item_func_set_user_var::set_entry() method has been
added to update Item_func_set_user_var::entry.
2. The Item_func_set_user_var::entry_thd field has been
added to update Item_func_set_user_var::entry only when
needed.
sql/sql_base.cc:
Fix: setup_fields() calls Item_func_set_user_var::set_entry()
for all items from the thd->lex->set_var_list before the first
call of ::fix_fields().
sql/sql_lex.cc:
The lex_start function has been modified to reset
the st_lex::set_var_list list.
sql/sql_lex.h:
A new st_lex::set_var_list field has been added to
memorize all user variable assignments in the current
select query.
sql/sql_yacc.yy:
The variable_aux rule has been modified to memorize
in-query user variable assignments in the
st_lex::set_var_list list.
NO_BACKSLASH_ESCAPES was not heeded in LOAD DATA INFILE
and SELECT INTO OUTFILE. It is now.
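A minimal sketch (hypothetical table and file names) of the round trip that
has to work under the mode:

    SET sql_mode = 'NO_BACKSLASH_ESCAPES';
    CREATE TABLE t1 (a VARCHAR(20));
    INSERT INTO t1 VALUES ('a\b');     -- '\' is an ordinary character in this mode
    SELECT * FROM t1 INTO OUTFILE '/tmp/t1.txt';
    CREATE TABLE t2 LIKE t1;
    LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE t2;
    SELECT * FROM t2;                  -- must match the contents of t1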
mysql-test/r/loaddata.result:
Show that SQL-mode NO_BACKSLASH_ESCAPES is heeded in
INFILE/OUTFILE, and that dump/restore cycles work!
mysql-test/t/loaddata.test:
Show that SQL-mode NO_BACKSLASH_ESCAPES is heeded in
INFILE/OUTFILE, and that dump/restore cycles work!
sql/sql_class.cc:
Add function to enquire whether ESCAPED BY was given.
When doing SELECT...OUTFILE, use ESCAPED BY if explicitly
given; otherwise use a sensible default value depending on
whether the SQL mode includes NO_BACKSLASH_ESCAPES.
sql/sql_class.h:
Add function to enquire whether ESCAPED BY was given.
sql/sql_load.cc:
When doing LOAD DATA INFILE, use ESCAPED BY if explicitly
given; otherwise use a sensible default value depending on
whether the SQL mode includes NO_BACKSLASH_ESCAPES.
Details:
- backport from 5.1 to 5.0 of some improvements which prevent
  sporadic failures
- @@GLOBAL.CONCURRENT_INSERT= 0 also for the slave server
- --sorted_result before all selects which have result
  sets with more than one row (see the sketch after this list)
- Replace error numbers with error names
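A sketch (hypothetical table) of the mysqltest pattern referred to above;
--sorted_result makes the output order of the next statement deterministic:

    SET @@GLOBAL.CONCURRENT_INSERT= 0;
    --sorted_result
    SELECT * FROM t1;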
Fix the write_record function to record auto increment
values in a consistent way.
mysql-test/r/auto_increment.result:
Updated the test result file with the output of the
new test case added to verify this bug.
mysql-test/t/auto_increment.test:
Added a new test case to verify this bug.
sql/sql_insert.cc:
The algorithm for the write_record function
in sql_insert.cc is (more emphasis given to
the parts that deal with the autogenerated values)
1) If a write fails
1.1) save the autogenerated value to avoid
thd->insert_id_for_cur_row to become 0.
1.2) <logic to handle INSERT ON DUPLICATE KEY
UPDATE and REPLACE>
2) record the first successful insert id.
explanation of the failure
--------------------------
As long as 1.1) was executed, 2) worked fine.
1.1) was always executed when REPLACE worked
with the last-row-update optimization, but
in cases where 1.1) was not executed, 2)
would fail, resulting in the autogenerated
value not being saved.
solution
--------
Repeat a check similar to 1.1) for thd->insert_id_for_cur_row
being zero before 2), and ensure that the correct value is
saved.
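A minimal sketch (hypothetical table, not the actual regression test) of the
kind of statement sequence where the autogenerated value must be recorded
consistently:

    CREATE TABLE t1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
                     k INT, v INT, UNIQUE KEY (k));
    REPLACE INTO t1 (k, v) VALUES (1, 10);  -- plain insert path
    REPLACE INTO t1 (k, v) VALUES (1, 20);  -- duplicate key: existing row is replaced
    SELECT LAST_INSERT_ID();                -- must reflect the generated id in both cases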
Merge of fixes from 5.0 -> 5.1
Moved restoration of concurrent_insert's original value to the end of the 5.1 tests
Re-recorded .result file to account for changes to test file.
Moved fix for this bug to 5.0 as other mysqldump bugs seem tied to concurrent_insert being on
Setting concurrent_insert off during this test as INSERTs weren't being
completely processed before the calls to mysqldump, resulting in failing tests.
Altered .test file to turn concurrent_insert off during the test and to restore it
to whatever the value was at the start of the test when complete.
Re-recorded .result file to account for changes to variables in the test.
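A sketch of the save/disable/restore pattern the test uses (the variable name
@old_concurrent_insert is illustrative):

    SET @old_concurrent_insert = @@GLOBAL.CONCURRENT_INSERT;
    SET @@GLOBAL.CONCURRENT_INSERT = 0;
    -- ... populate tables and run the mysqldump calls ...
    SET @@GLOBAL.CONCURRENT_INSERT = @old_concurrent_insert;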
The problem here is that symbols cannot be loaded, because the symbol
path is not set and the default path does not include the directory
where the PDB is located.
The problem is _not_ reproducible on the same machine where
mysqld.exe is built: if the PDB is not found in the symbol path,
dbghelp falls back to the fully qualified PDB path as given in the
executable header, and on the build host this succeeds.
The solution is to calculate the symbol path and pass it to the
SymInitialize() call.
Problem: with @@sql_mode=pad_char_to_full_length
a CHAR column returned additional garbage
after the trailing space characters due to
an incorrect my_charpos() call.
Fix: call my_charpos() with correct arguments.
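A minimal sketch (hypothetical table, not the actual regression test) of the
behavior; the padded value must contain only trailing spaces, never garbage
bytes:

    SET sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
    CREATE TABLE t1 (c CHAR(10) CHARACTER SET utf8);
    INSERT INTO t1 VALUES ('abc');
    SELECT c, LENGTH(c), CHAR_LENGTH(c) FROM t1;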
bug#31233 mysql_alter_table() fails to drop UNIQUE KEY
mysql-test/suite/ndb/r/ndb_alter_table.result:
bug#31233 mysql_alter_table() fails to drop UNIQUE KEY: added test cases
mysql-test/suite/ndb/t/ndb_alter_table.test:
bug#31233 mysql_alter_table() fails to drop UNIQUE KEY: added test cases
sql/ha_ndbcluster.cc:
bug#31233 mysql_alter_table() fails to drop UNIQUE KEY: Removed check for non-pk
tables, not needed when mysql_alter_table checks the appropriate flags
sql/mysql_priv.h:
bug #31231 mysql_alter_table() tries to drop a non-existing table: added FRM_ONLY
flag
sql/sql_table.cc:
bug #31231 mysql_alter_table() tries to drop a non-existing table
Don't invoke handler for tables defined with FRM_ONLY flag.
bug#31233 mysql_alter_table() fails to drop UNIQUE KEY
When a table is defined without an explicit primary key,
MySQL will choose the first unique index found that is defined over
non-nullable fields (if such an index exists). This means
that if such an index is added (the first) or dropped (the last)
through an ALTER TABLE, this equals adding or dropping a primary key.
The implementation of on-line add/drop index did not take these
semantics into account. This patch ensures that only handlers with the
correctly defined flags (see handler.h for an explanation of the flags):
HA_ONLINE_ADD_PK_INDEX
HA_ONLINE_ADD_PK_INDEX_NO_WRITES
HA_ONLINE_DROP_PK_INDEX
HA_ONLINE_DROP_PK_INDEX_NO_WRITES
are invoked for such on-line operations. All other handlers must
perform a full (offline) ALTER TABLE.
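A minimal sketch (hypothetical table, engine clause omitted, not the actual
regression test) of the situation described above; the unique index over
NOT NULL columns acts as the implicit primary key:

    CREATE TABLE t1 (a INT NOT NULL, b INT);
    ALTER TABLE t1 ADD UNIQUE KEY uk (a);  -- first unique index over NOT NULL columns: implicit PK
    ALTER TABLE t1 DROP KEY uk;            -- dropping the last such index drops the implicit PK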
If [NOT] PRESERVE was not given, the parser always defaulted to NOT
PRESERVE, making it impossible for the "not given = no change"
rule to work in ALTER EVENT. Leaving out the PRESERVE-clause
now defaults to NOT PRESERVE on CREATE, and to "no change" in
ALTER.
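A minimal sketch (hypothetical event, not the actual regression test);
omitting ON COMPLETION in ALTER EVENT must keep the existing setting instead
of resetting it to NOT PRESERVE:

    CREATE EVENT e1 ON SCHEDULE EVERY 1 DAY
      ON COMPLETION PRESERVE DO SELECT 1;
    ALTER EVENT e1 COMMENT 'touched';  -- no ON COMPLETION clause: setting stays PRESERVE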
mysql-test/r/events_2.result:
Show that giving no PRESERVE-clause to ALTER EVENT
results in no change. Show that giving no PRESERVE-clause
to CREATE EVENT defaults to NOT PRESERVE as per the docs.
Show specifically that this is also handled correctly when
trying to ALTER EVENTs into the past.
mysql-test/t/events_2.test:
Show that giving no PRESERVE-clause to ALTER EVENT
results in no change. Show that giving no PRESERVE-clause
to CREATE EVENT defaults to NOT PRESERVE as per the docs.
Show specifically that this is also handled correctly when
trying to ALTER EVENTs into the past.
sql/event_db_repository.cc:
If ALTER EVENT was given no PRESERVE-clause (meaning "no change"),
we don't know the previous PRESERVE-setting by the time we check
the parse-data. If ALTER EVENT was given dates that are in the past,
we don't know how to react, lacking the PRESERVE-setting. Heal this
by running the check later when we have actually read the previous
EVENT-data.
sql/event_parse_data.cc:
Change the default for ON COMPLETION to indicate "not specified."
Also defer throwing errors when ALTER EVENT is given dates in
the past but no PRESERVE-clause until we know the previous
PRESERVE-value.
sql/event_parse_data.h:
Add third state for ON COMPLETION [NOT] PRESERVE (preserve,
don't, not specified).
Make check_dates() public so we can defer this check until
deeper in the callstack where we have all the required data.
sql/sql_yacc.yy:
If CREATE EVENT is not given ON COMPLETION [NOT] PRESERVE,
we default to NOT, as per the docs.
mysqldump creates stand-in tables before dumping the actual view.
Those tables were of the default storage engine type; if the view had more
columns than that engine supports (a pathological case, arguably), loading
the dump would fail. We now make the temporary stand-ins MyISAM tables to
prevent this.
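A sketch (illustrative column list, not actual mysqldump output) of the kind
of stand-in definition now written for a view, with the engine stated
explicitly instead of relying on the default:

    CREATE TABLE `v1` (
      `a` int,
      `b` int
    ) ENGINE=MyISAM;
    -- later in the dump the stand-in is dropped and replaced by the real
    -- CREATE VIEW `v1` AS ... statement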
client/mysqldump.c:
When creating a stand-in table, specify its type to
avoid defaulting to a type with a column-number limit
(like Inno). The type is always MyISAM as we know that
to be available.
mysql-test/r/mysqldump-max.result:
add test results for 31434
mysql-test/r/mysqldump.result:
mysqldump sets engine-type (MyISAM) for stand-in tables
for views now. Update test results.
mysql-test/t/mysqldump-max.test:
Show that mysqldump's stand-in tables for views explicitly
set engine-type to MyISAM to avoid falling back on an engine
that might support fewer columns than the final view requires
(here's lookin' at you, inno). Also show that this actually
has the desired effect by dumping and reloading a view that
has more columns than inno supports.