To follow the naming scheme for tests related to functions, rename analyse.test to
func_analyse.test (the test for the ANALYSE() procedure). This avoids confusion
with the ANALYZE statement (tested in analyze.test).
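For reference, the two are easy to confuse but unrelated; a minimal, purely illustrative pair of statements (table name t1 is hypothetical):

  # PROCEDURE ANALYSE() examines a result set (now covered by func_analyse.test)
  SELECT * FROM t1 PROCEDURE ANALYSE();
  # ANALYZE is a table maintenance statement (covered by analyze.test)
  ANALYZE TABLE t1;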
Problem: Item_str_ascii_func::val_str() did not set
charset of the returned value properly.
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
- Adding tests
sql/item_strfunc.cc
- Adding initialization of charset
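A hedged sketch of the kind of check these tests add; HEX() is assumed here to be one of the affected ASCII-result string functions, and the exact functions and expected results may differ from the real tests:

  SET NAMES latin1;
  # The character set of the result should follow the connection character set,
  # not be reported as binary
  SELECT CHARSET(HEX(255)), CHARSET(SPACE(2));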
Backport from mysql-next-mr-opt-backporting.
Bug#54515: Crash in opt_range.cc::get_best_group_min_max on
SELECT from VIEW with GROUP BY
When handling the grouping items in get_best_group_min_max, the
items need to be of type Item_field. In this bug, an ASSERT
triggered because the item used for grouping was an
Item_direct_view_ref (i.e., the group column is from a view).
The fix is to get the real_item, since an Item_ref* pointing to an
Item_field is OK.
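An illustrative query shape (table and view names are hypothetical): grouping on a view column makes the group item an Item_direct_view_ref rather than an Item_field, and the composite index makes get_best_group_min_max consider a loose index scan:

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  CREATE VIEW v1 AS SELECT * FROM t1;
  SELECT a, MIN(b) FROM v1 GROUP BY a;
  DROP VIEW v1;
  DROP TABLE t1;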
Problem: sha2() reported its result as BINARY
Fix:
- Inheriting Item_func_sha2 from Item_str_ascii_func
- Setting max_length via fix_length_and_charset()
instead of direct assignment.
- Adding tests
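A minimal check of the reported result character set (a sketch only; the exact expected value depends on the connection character set):

  SET NAMES utf8;
  # Before the fix the result was reported as binary
  SELECT CHARSET(SHA2('ab', 256));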
Problem: Item_copy did not set "fixed", which resulted in a DBUG_ASSERT failure in some cases.
Fix: adding initialization of the "fixed" member
Adding tests:
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
Adding initialization of the "fixed" member:
sql/item.h
value and NO_ZERO_DATE
The problem was that an older version of the error path for a
failed admin statement relied upon a few error conditions being
met in order to access a table handler, the first one being that
the table object pointer was not NULL. Probably due to chance, in
all cases where a table object was closed but the reference wasn't
reset, the other conditions didn't evaluate to true. With the
addition of a new check on the error path, the handler started
being dereferenced whenever the reference was not reset to NULL,
causing problems for code paths which closed the table but didn't
reset the reference.
The solution is to reset the reference whenever an admin statement
fails and the tables are closed.
This bug is a consequence of WL#5349, as the
default storage engine was changed.
The fix was to explicitly add an ENGINE
clause to a CREATE TABLE statement, to
ensure that we test case preservation on
MyISAM.
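A sketch of the kind of change made to the test (table and column names are hypothetical):

  # Pin the engine explicitly so the test keeps exercising MyISAM
  # now that the default engine is InnoDB (WL#5349)
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;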
Bug#47633 - assert in ha_myisammrg::info during OPTIMIZE
The server crashed on an attempt to optimize a MERGE table with
a non-existent child table.
mysql_admin_table() relied on the table being successfully opened
whenever a table object had been allocated.
Changed the code to check the return value of the open function
before calling any handler:: function on it.
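An illustrative reproduction along the lines described (names are hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  DROP TABLE t1;        -- m1 now refers to a non-existent child
  OPTIMIZE TABLE m1;    -- must report an error instead of asserting
  DROP TABLE m1;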
use limit efficiently
Bug #36569: UPDATE ... WHERE ... ORDER BY... always does a
filesort even if not required
Also two bugs reported after QA review (before the commit
of bugs above to public trees, no documentation needed):
Bug #53737: Performance regressions after applying patch
for bug 36569
Bug #53742: UPDATEs have no effect after applying patch
for bug 36569
Execution of single-table UPDATE and DELETE statements did not use the
same optimizer as is used in the compilation of SELECT statements.
Instead, they had an optimizer of their own that did not take into account
that sorting can be omitted by retrieving rows through an index.
An extra optimization has been added: when applicable, single-table
UPDATE/DELETE statements now use an existing index instead of filesort,
just as a corresponding SELECT query would.
Handling of DESC ordering expressions has also been added for cases where
a reverse index scan is applicable.
From now on most single-table UPDATE and DELETE statements show the
same disk access patterns as the corresponding SELECT query. We verify
this by comparing the result of SHOW STATUS LIKE 'Sort%'.
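A sketch of such a verification (table, data and exact counter values are illustrative only); with an index on the ORDER BY column, the Sort_* counters should not increase:

  CREATE TABLE t1 (a INT, b INT, KEY (a));
  FLUSH STATUS;
  UPDATE t1 SET b = b + 1 ORDER BY a LIMIT 10;
  SHOW SESSION STATUS LIKE 'Sort%';
  DROP TABLE t1;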
Currently the get_index_for_order function
a) checks the quick select index (if any) for compatibility with the
ORDER expression list, or
b) chooses the cheapest available compatible index, but only if
the index scan is cheaper than filesort.
The second way is implemented by the new test_if_cheaper_ordering
function (extracted from test_if_skip_sort_order()).
The default storage engine is changed from MyISAM to
InnoDB, in all builds except for the embedded server.
In addition, the following system variables are
changed:
* innodb_file_per_table is enabled
* innodb_strict_mode is enabled
* innodb_file_format_name_update is changed
to 'Barracuda'
The test suite is changed so that tests that do not
explicitly include have_innodb.inc are run with
--default-storage-engine=MyISAM. This is to ease the
transition, so that most regression tests are run
with the same engine as before.
Some tests are disabled for the embedded server
regression test, as the output of certain statements
will be different than for the regular server
(e.g. SELECT @@default_storage_engine). This is to
ease the transition.
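A quick way to observe the new defaults from a client (assuming the variables listed above are available under these names):

  SELECT @@default_storage_engine, @@innodb_file_per_table, @@innodb_strict_mode;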
sporadically".
Races in truncate_coverage.test caused its sporadic
failures.
In the test case we tried to kill a TRUNCATE statement
being executed in the first connection while it was waiting
for an X metadata lock on a table locked by the second
connection. Since we released the metadata lock held by
the second connection right after issuing the KILL statement,
sometimes TRUNCATE TABLE managed to acquire the X lock before
it noticed that it had been killed. In this case TRUNCATE
TABLE was successfully executed to its end, and this caused
a test failure since the statement didn't return the
expected error in such a case.
This patch addresses the problem by not releasing metadata
locks in the second connection prematurely.
Bug #22909 Using CREATE ... LIKE is possible to create
field with invalid default value
Bug #35935 CREATE TABLE under LOCK TABLES ignores FLUSH
TABLES WITH READ LOCK
Bug #37371 CREATE TABLE LIKE merge loses UNION parameter
These bugs were originally fixed in the 6.1-fk tree and the fixes
were backported as part of the fix for Bug #42546 "Backup: RESTORE
fails, thinking it finds an existing table". This patch backports
test coverage missing in the original backport. The patch contains
no code changes.
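One illustrative piece of such coverage, for the MERGE case from Bug #37371 (names are hypothetical; the expected output is what the test verifies):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  CREATE TABLE m2 LIKE m1;
  SHOW CREATE TABLE m2;   -- inspect whether the UNION clause is retained
  DROP TABLE m2, m1, t1;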
Item*) at opt_sum.cc:305
Queries applying MIN/MAX functions to indexed columns are
optimized to read directly from the index if all key parts
of the index preceding the aggregated key part are bound to
constants by the WHERE clause. A prefix length is also
produced, equal to the total length of the bound key
parts. If the aggregated column itself is bound to a
constant, however, it is also included in the prefix.
Such full search keys are read as closed intervals for
reasons beyond the scope of this bug. However, the procedure
missed one case where a key part meant for use as a range
endpoint was being overwritten with a NULL value destined
for equality checking. In this case the key part was
overwritten but the range flag remained, causing open
interval reading to be performed.
The bug was fixed by adding more stringent checking to the
search key building procedure (matching_cond), which now never
allows overwrites of range predicates with non-range
predicates.
An assertion was added to make sure open intervals are never
used with full search keys.
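Illustrative query shapes for the optimization described above (names are hypothetical), with a composite index on (a, b):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  -- a is bound to a constant, so MIN(b) can be read directly from the index
  SELECT MIN(b) FROM t1 WHERE a = 1;
  -- binding b as well produces a full search key, which must be read
  -- as a closed interval
  SELECT MIN(b) FROM t1 WHERE a = 1 AND b = 2;
  DROP TABLE t1;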
Problem: the server missed the fact that one can read from
two indexes alternately using the HANDLER interface.
Fix: check that the same (initialized) index is used when
reading next/prev values from the index.
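An illustration of alternating reads through two indexes via the HANDLER interface (names are hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY k1 (a), KEY k2 (b));
  HANDLER t1 OPEN;
  HANDLER t1 READ k1 FIRST;
  HANDLER t1 READ k2 FIRST;   -- switch to the second index
  HANDLER t1 READ k1 NEXT;    -- the server must track which index the scan was initialized on
  HANDLER t1 CLOSE;
  DROP TABLE t1;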
Bug#46527 COMMIT AND CHAIN RELEASE does not make sense
Bug#53343 completion_type=1, COMMIT/ROLLBACK AND CHAIN don't
preserve the isolation level
Bug#53346 completion_type has strange effect in a stored
procedure/prepared statement
Added test cases to verify the expected behaviour of:
SET SESSION TRANSACTION ISOLATION LEVEL,
SET TRANSACTION ISOLATION LEVEL,
@@completion_type,
COMMIT AND CHAIN,
ROLLBACK AND CHAIN
...and some combinations of the above.
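A sketch of the combinations exercised (values and ordering are illustrative only):

  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- applies to the next transaction only
  SET @@completion_type = 1;                     -- implicit AND CHAIN on commit/rollback
  START TRANSACTION;
  COMMIT AND CHAIN;   -- per Bug#53343, the chained transaction should keep the isolation level
  ROLLBACK;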
The problem is in the Item_func_isnull::update_used_tables() function:
a bracket is in the wrong place. Because of that, an isnull item is
erroneously treated as a const item. The fix is to put the brackets in the
right place.
This crash happened if a table was listed twice in a DROP TABLE statement,
and the statement was executed while in LOCK TABLES mode. Since the two
elements of table list were identical, they were assigned the same TABLE object.
During processing of the first table element, the TABLE instance was destroyed
and the second table list element was left with a dangling reference.
When this reference was later accessed, the server crashed.
Listing the same table twice in DROP TABLES should give an ER_NONUNIQ_TABLE
error. However, this did not happen as the check for unique table names was
skipped due to the lock type for table list elements being set to TL_IGNORE.
Previously TL_UNLOCK was used and the unique check was performed.
This bug was a regression introduced by a pre-requisite patch for
Bug#51263 "Deadlock between transactional SELECT and ALTER TABLE ...
REBUILD PARTITION". The regression only existed in an internal team
tree and never in any released code.
This patch reverts DROP TABLE (and DROP VIEW) to the old behavior of
using TL_UNLOCK locks. Test case added to drop.test.
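The scenario covered by the test case added to drop.test looks roughly like this (table name is hypothetical):

  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE;
  DROP TABLE t1, t1;   -- must fail with ER_NONUNIQ_TABLE instead of crashing
  UNLOCK TABLES;
  DROP TABLE t1;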
locks for DML statements and changes the way MDL locks
are acquired/granted in the contended case.
Instead of backing off when a lock conflict is encountered
and waiting for it to go away before restarting the open_tables()
process, we now wait for the lock to be released without releasing
any previously acquired locks. If the conflicting lock goes away,
we resume opening tables. If waiting leads to a deadlock, we
try to resolve it by backing off and restarting open_tables()
immediately.
As a result, both waiting for the possibility to acquire a
metadata lock and the acquisition itself now always happen within
the same MDL API call. This has made it possible to turn the
release of a lock and the granting of it to the most appropriate
pending request into an atomic operation.
Thanks to this, it became possible to wake up, during release
of a lock, only those waiters whose requests can be satisfied
at the moment, as well as to wake up only one waiter in the case
when granting its request would prevent all other requests
from being satisfied. This solves the thundering herd problem
which occurred in cases when we released some lock and
woke up many waiters for SNRW or X locks (this was the issue
in bug#52289 "performance regression for MyISAM in sysbench
OLTP_RW test").
This also made it possible to implement more fair (FIFO) scheduling
among waiters with the same priority.
It also opens the door for introducing new types of requests
for metadata locks, such as a low-priority SNRW lock which is
necessary in order to support LOCK TABLES LOW_PRIORITY WRITE.
Notice that after this change the server can sometimes report an
ER_LOCK_DEADLOCK error in cases in which it did not before.
In particular, we will always report this error if waiting for a
conflicting lock happened in the middle of a transaction
and resulted in a deadlock. Before this patch the error was
not reported if the deadlock could have been resolved by backing
off all metadata locks acquired by the current statement.
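An illustration of the last point, in mysql-test style with two connections (schema and statements are hypothetical, and the exact outcome depends on the lock types involved):

  # connection con1
  BEGIN;
  SELECT * FROM t1;                    -- acquires a shared metadata lock on t1
  # connection con2
  ALTER TABLE t1 ADD COLUMN c INT;     -- blocks, waiting for an exclusive lock on t1
  # connection con1
  SELECT * FROM t1 FOR UPDATE;         -- may now fail with ER_LOCK_DEADLOCK instead of waiting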
Conflicts:
Text conflict in mysql-test/r/archive.result
Contents conflict in mysql-test/r/innodb_bug38231.result
Text conflict in mysql-test/r/mdl_sync.result
Text conflict in mysql-test/suite/binlog/t/disabled.def
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_binlog_format_errors.result
Text conflict in mysql-test/t/archive.test
Contents conflict in mysql-test/t/innodb_bug38231.test
Text conflict in mysql-test/t/mdl_sync.test
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_show.cc
Text conflict in sql/table.cc
Text conflict in sql/table.h
Some of the server implementations don't support dates later
than 2038 due to the internal time type being 32 bit.
Added checks so that the server will refuse dates that cannot
be handled: either by throwing an error when such a date is set at
runtime, or by refusing to start or by shutting down the server if
the system date cannot be stored in my_time_t.
Problems:
- regression (compared to version 5.1) in metadata for BLOB types
- inconsistency between length metadata in server and embedded for BLOB types
- wrong max_length calculation in items derived from BLOB columns
@ libmysqld/lib_sql.cc
Calculating length metadata in embedded similarly to the server version,
using new function char_to_byte_length_safe().
@ mysql-test/r/ctype_utf16.result
Adding tests
@ mysql-test/r/ctype_utf32.result
Adding tests
@ mysql-test/r/ctype_utf8.result
Adding tests
@ mysql-test/r/ctype_utf8mb4.result
Adding tests
@ mysql-test/t/ctype_utf16.test
Adding tests
@ mysql-test/t/ctype_utf32.test
Adding tests
@ mysql-test/t/ctype_utf8.test
Adding tests
@ mysql-test/t/ctype_utf8mb4.test
Adding tests
@ sql/field.cc
Overriding char_length() for Field_blob:
unlike in the generic Item::char_length(), we don't
divide by mbmaxlen for BLOBs.
@ sql/field.h
- Making Field::char_length() virtual
- Adding prototype for Field_blob::char_length()
@ sql/item.h
- Adding new helper function char_to_byte_length_safe()
- Using new function
@ sql/protocol.cc
Using new function char_to_byte_length_safe().
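A sketch of the kind of test added (mysqltest's metadata mode prints column metadata, including max_length; names are hypothetical):

  SET NAMES utf8mb4;
  CREATE TABLE t1 (a TEXT CHARACTER SET utf8mb4);
  --enable_metadata
  SELECT a, HEX(a) FROM t1;   -- check metadata of items derived from the BLOB column
  --disable_metadata
  DROP TABLE t1;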
modified:
libmysqld/lib_sql.cc
mysql-test/r/ctype_utf16.result
mysql-test/r/ctype_utf32.result
mysql-test/r/ctype_utf8.result
mysql-test/r/ctype_utf8mb4.result
mysql-test/t/ctype_utf16.test
mysql-test/t/ctype_utf32.test
mysql-test/t/ctype_utf8.test
mysql-test/t/ctype_utf8mb4.test
sql/field.cc
sql/field.h
sql/item.h
sql/protocol.cc
Due to a BZR bug, that merge was done by the following command:
bzr merge -r 'revid:tor.didriksen@sun.com-20100527074248-6qtv0p1ugy6o1hjo..' <mysql-trunk-bugfixing path>