It is a very big test and as such it takes a lot of time.
The solution is to divide the test into two parts: one for testing increasing
column sizes and one for decreasing sizes.
The innodb branch runs extended tests (which myisam does not) because of the
$do_pk_tests variable; that is why the innodb branch takes
longer.
No increase in innodb memory usage was found when analyzing (tested
by looping create/drop and alter commands a few million times).
The memory exhaustion discovered in the test is caused by mysqltest, which
stores the result in memory (result-file), and this was the biggest
result file in the test framework; dividing the test into two
parts therefore also cuts the memory usage of mysqltest.
column data types
The "SELECT @lastId, @lastId := Id FROM t" query returns
different result sets depending on the type of the Id column
(INT or BIGINT).
Note: this fix doesn't cover the case when a select query
references an user variable and stored function that
updates a value of that variable, in this case a result
is indeterminate.
The server made an incorrect assumption about the constness of
a user variable value used as a select list item:
The server caches the number of the last query in which that variable
was changed and compares it with the current query
number. If these numbers differ, the server assumes
that the variable is not updated in the current query, so
the corresponding select list item is a constant. However, in some
common cases the server updates the cached query number too late.
The server has been modified to memorize user variable
assignments during the parse phase and to take them into account
in the next (query preparation) phase, independently of the
order of user variable references/assignments in the select
item list.
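A minimal repro sketch of the affected pattern (the table, its contents, and the initial value of @lastId are illustrative, not taken from the bug report):
  -- Illustrative only: behaviour differed when Id was INT vs. BIGINT.
  CREATE TABLE t (Id BIGINT);
  INSERT INTO t VALUES (1), (2), (3);
  SET @lastId = 0;
  -- @lastId is both read and assigned in the same select list; before
  -- the fix the first item could wrongly be treated as a constant.
  SELECT @lastId, @lastId := Id FROM t;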
Details:
- Backport (from 5.1 to 5.0) of some improvements that prevent sporadic
  failures
- @@GLOBAL.CONCURRENT_INSERT= 0 also for the slave server
- --sorted_result before all SELECTs whose result sets have more
  than one row (see the sketch after this list)
- Replace error numbers with error names
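A minimal mysqltest sketch of the --sorted_result change; the query shown is illustrative, not one of the affected statements:
  # Force a deterministic row order so the test does not fail
  # sporadically when the server returns the rows in another order.
  --sorted_result
  SELECT host, user FROM mysql.user;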
Merge of fixes from 5.0 -> 5.1
Moved restoration of concurrent_insert's original value to the end of the 5.1 tests
Re-recorded .result file to account for changes to test file.
Moved fix for this bug to 5.0 as other mysqldump bugs seem tied to concurrent_insert being on
Setting concurrent_insert off during this test as INSERTs weren't being
completely processed before the calls to mysqldump, resulting in failing tests.
Altered the .test file to turn concurrent_insert off during the test and to restore it
to its original value once the test completes.
Re-recorded .result file to account for changes to variables in the test.
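A minimal sketch of the save/disable/restore pattern described above (the user variable name is illustrative):
  # Remember the current setting, disable concurrent inserts for the
  # duration of the test, and restore the original value at the end.
  SET @old_concurrent_insert = @@GLOBAL.CONCURRENT_INSERT;
  SET @@GLOBAL.CONCURRENT_INSERT = 0;
  # ... INSERTs and the calls to mysqldump run here ...
  SET @@GLOBAL.CONCURRENT_INSERT = @old_concurrent_insert;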
Problem: with @@sql_mode=pad_char_to_full_length
a CHAR column returned additional garbage
after trailing space characters due to
incorrect my_charpos() call.
Fix: call my_charpos() with correct arguments.
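An illustrative way to observe the symptom (table, character set, and data are hypothetical, not from the bug report):
  CREATE TABLE t1 (c CHAR(10) CHARACTER SET utf8);
  INSERT INTO t1 VALUES ('abc');
  SET @@sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
  -- Before the fix the padded value could be followed by garbage
  -- bytes after the trailing spaces.
  SELECT HEX(c) FROM t1;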
If [NOT] PRESERVE was not given, the parser always defaulted to NOT
PRESERVE, making it impossible for the "not given = no change"
rule to work in ALTER EVENT. Leaving out the PRESERVE clause now
defaults to NOT PRESERVE on CREATE, and to "no change" in
ALTER.
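A sketch of the semantics after the fix (event name, schedule, and body are illustrative):
  -- No ON COMPLETION clause: CREATE still defaults to NOT PRESERVE.
  CREATE EVENT e1
    ON SCHEDULE EVERY 1 DAY
    DO DELETE FROM log WHERE created < NOW() - INTERVAL 7 DAY;
  -- No ON COMPLETION clause here either: ALTER now leaves the
  -- PRESERVE setting unchanged instead of forcing NOT PRESERVE.
  ALTER EVENT e1
    ON SCHEDULE EVERY 12 HOUR;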
mysqldump creates stand-in tables before dumping the actual view.
Those tables used the default storage engine; if the view had more columns
than that engine supports (a pathological case, arguably), loading the dump
would fail. We now make the temporary stand-ins MyISAM tables to prevent
this.
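A rough sketch of the resulting dump section (column list shortened; the exact comments and layout of real mysqldump output differ):
  -- Stand-in table, forced to MyISAM by the fix so the default
  -- engine's column limit cannot make the load fail.
  CREATE TABLE `v1` (
    `c1` int,
    `c2` int
    /* ... one column per view column ... */
  ) ENGINE=MyISAM;
  -- Later in the dump the stand-in is replaced by the real view.
  DROP TABLE IF EXISTS `v1`;
  CREATE VIEW `v1` AS SELECT `c1`, `c2` FROM `t1`;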
statement/stored procedure
View privileges are properly checked after the fix for bug no
36086, so the method TABLE_LIST::get_db_name() must be used
instead of the field TABLE_LIST::db, as the latter only works for tables.
The bug appears when accessing views in prepared statements.
SUPER is not required to change binlog format for session
A user without SUPER privileges can change the value of the
session variable BINLOG_FORMAT, causing problems for a DBA.
With this changeset, a user must have the SUPER privilege to
change the value of the session variable BINLOG_FORMAT, not
only the global variable BINLOG_FORMAT.
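A short illustration of the new behaviour:
  -- As a user without SUPER, changing the session scope now fails:
  SET SESSION BINLOG_FORMAT = ROW;   -- rejected without SUPER after the fix
  -- As a user with SUPER, both scopes can still be changed:
  SET SESSION BINLOG_FORMAT = ROW;
  SET GLOBAL  BINLOG_FORMAT = MIXED;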
The value of Innodb_buffer_pool_pages differs by one byte between the row and statement
logs, so neuter the last position of the stringified decimal representation. Innobase
says the size isn't very important in any case.
Also, split out the "mixed" format to its own file, as mtr seems to dislike having only
stm and row but not mix.
The problem was a mutex added in bug#27405 to solve a problem
with auto_increment in partitioned innodb tables
(in ha_partition::write_row over partitions' file->ha_write_row).
The solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of bug#33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, not handling
multi-row INSERTs properly, etc.)
Changed the auto_increment handling for partitioning:
Added a ha_data variable in TABLE_SHARE for storage-engine-specific data,
such as auto_increment value handling in partitioning (also see WL#4305),
and used ha_data->mutex to lock around read + update.
The idea is this:
Store the table's reserved auto_increment value in
the TABLE_SHARE and use a mutex so that locking, reading, updating,
and unlocking happen in one block. All partitions are accessed only
when the value is not yet initialized.
Also allow reservations of ranges, and if no one has made a reservation
afterwards, lower the reservation to what was actually used after
the statement is done (via release_auto_increment from WL#3146).
The lock is kept from the first reservation if statement-based
replication is used and the statement is a multi-row INSERT where the number of
candidate rows to insert is not known in advance (like INSERT ... SELECT or
LOAD DATA, unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need for mutex
protection around write_row in all cases)
and work with any local storage engine.
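An illustrative example of a statement that exercises this handling, assuming a partitioned InnoDB table (all names, and the source table t2, are hypothetical):
  CREATE TABLE t1 (
    id INT NOT NULL AUTO_INCREMENT,
    grp INT NOT NULL,
    PRIMARY KEY (id, grp)
  ) ENGINE=InnoDB
    PARTITION BY HASH (grp) PARTITIONS 4;
  -- Multi-row INSERT where the number of candidate rows is unknown
  -- in advance: under statement-based replication the reservation
  -- lock is kept from the first reserved value until the statement
  -- finishes.
  INSERT INTO t1 (grp) SELECT grp FROM t2;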
SET col
When reporting a duplicate key error, the server made incorrect assumptions
about the state of the value string to include in the error message.
Fixed by accessing the data in this string in a "safe" way (without relying on it
having a terminating 0).
Code analysis detected a similar problem in reporting foreign key
duplicate errors, which has been fixed as well.
The check_table_access function initializes per-table grant info and performs
the access rights check. It wasn't called for the SHOW STATUS statement and thus left
the grant info uninitialized. In some cases this led to a server crash. In other
cases it allowed a user to check for the presence/absence of arbitrary values in
any table.
Now the check_table_access function is called prior to statement
processing.
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or rollback correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by having a failing UPDATE or INSERT,
on a transactional table, causing an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
This patch also fixes bugs 36963 and 35600.
- In many places a view was confused with an anonymous derived
table, i.e. access checking was skipped. Fixed by introducing a
predicate to tell the difference between named and anonymous
derived tables.
- When inserting fields for "SELECT * ", there was no
distinction between base tables and views, where one should be
made. View privileges are checked elsewhere.
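A brief illustration of the distinction, using hypothetical objects:
  -- Named view: the view is a database object with its own
  -- privileges, which must be checked when '*' is expanded.
  CREATE VIEW v1 AS SELECT a, b FROM t1;
  SELECT * FROM v1;
  -- Anonymous derived table: there is no separate object to grant
  -- privileges on; only access to t1 itself matters.
  SELECT * FROM (SELECT a, b FROM t1) AS dt;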
in open_table()
Problem: repeating "CREATE ... (AUTO_INCREMENT) ... SELECT" may lead to
an assertion failure.
Fix: reset table->auto_increment_field_not_null after writing each
record.
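A hedged repro sketch of the pattern described; the table names and the exact sequence that triggered the assertion are illustrative:
  CREATE TABLE t2 (b INT);
  INSERT INTO t2 VALUES (1), (2);
  -- Repeated CREATE ... SELECT into a table with an AUTO_INCREMENT
  -- column; before the fix auto_increment_field_not_null was not
  -- reset after each written record.
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) SELECT b FROM t2;
  DROP TABLE t1;
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) SELECT b FROM t2;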