and reverse() function
Three problems fixed:
1. The reported problem: caused by incorrect parsing of the file
as UCS2 data, resulting in a wrong length of the parsed string.
Fixed by truncating the invalid trailing bytes (incomplete
multibyte characters) when reading from the file.
2. LOAD DATA, when reading from a proper UCS2 file, wasn't
recognizing the newline characters. Fixed by first checking
whether a byte is a newline (or any other special) character
before reading it as part of a multibyte character.
3. When using user variables to hold the column data in LOAD
DATA, the character set of the user variable was incorrectly set
to the database charset. Fixed by setting it to the charset
specified by LOAD DATA (if any).
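A minimal sketch of the statement shape these fixes target
(table, file and variable names are illustrative only):

  CREATE TABLE t1 (a VARCHAR(32));
  LOAD DATA INFILE 'data_ucs2.txt' INTO TABLE t1
    CHARACTER SET ucs2
    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
    (@v) SET a = @v;
  -- Problem 3: @v used to get the database charset instead of
  -- the ucs2 charset specified by LOAD DATA.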
To follow the naming scheme for tests related to functions,
rename analyse.test to func_analyse.test (the test for the
ANALYSE() procedure). This avoids confusion with the ANALYZE
statement (tested in analyze.test).
The problem there is that the HAVING condition evaluates const
parts of the condition even though the condition refers to
aggregate functions. Table t1 became a const table after
make_join_statistics() and, with table1.pk = 1, HAVING is
transformed into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the const parts of HAVING if
the HAVING condition refers to aggregate functions.
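A sketch of the affected query shape (schema illustrative):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT);
  INSERT INTO t1 VALUES (1, 1);
  -- pk = 1 makes t1 a const table, so HAVING MAX(pk) < 7 is
  -- rewritten to MAX(1) < 7 and was evaluated prematurely.
  SELECT a FROM t1 WHERE pk = 1 HAVING MAX(pk) < 7;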
Problem: Item_str_ascii_func::val_str() did not set the
character set of the returned value properly.
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
- Adding tests
sql/item_strfunc.cc
- Adding initialization of charset
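One way to observe the symptom, assuming HEX() is one of the
affected ASCII string functions (an assumption, not stated in
this patch):

  SET NAMES latin1;
  -- The returned value should carry a properly initialized,
  -- ASCII-compatible character set.
  SELECT CHARSET(HEX('a'));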
Backported from mysql-next-mr-opt-backporting.
Bug#54515: Crash in opt_range.cc::get_best_group_min_max on
SELECT from VIEW with GROUP BY
When handling the grouping items in get_best_group_min_max(),
the items need to be of type Item_field. In this bug, an
assertion was triggered because the item used for grouping was
an Item_direct_view_ref (i.e., the group column comes from a
view). The fix is to use real_item(), since an Item_ref*
pointing to an Item_field is OK.
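A sketch of the crashing scenario (names illustrative):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  CREATE VIEW v1 AS SELECT a, b FROM t1;
  -- The GROUP BY column arrives as an Item_direct_view_ref;
  -- real_item() unwraps it to the underlying Item_field.
  SELECT a, MIN(b) FROM v1 GROUP BY a;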
Problem: sha2() reported its result as BINARY
Fix:
- Inheriting Item_func_sha2 from Item_str_ascii_func
- Setting max_length via fix_length_and_charset()
instead of direct assignment.
- Adding tests
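The symptom can be checked like this (sketch; SHA2() requires an
SSL-enabled build):

  -- Should report an ASCII-compatible charset, not binary.
  SELECT CHARSET(SHA2('a', 256));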
Problem: Item_copy did not set "fixed", which resulted in a
DBUG_ASSERT failure in some cases.
Fix: adding initialization of the "fixed" member.
Adding tests:
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
Adding initialization of the "fixed" member:
sql/item.h
value and NO_ZERO_DATE
The problem was that an older version of the error path for a
failed admin statement relied upon a few error conditions being
met in order to access a table handler, the first one being that
the table object pointer was not NULL. Probably due to chance,
in all cases where a table object was closed but the reference
wasn't reset, the other conditions didn't evaluate to true. With
the addition of a new check on the error path, the handler
started being dereferenced whenever it was not reset to NULL,
causing problems for code paths which closed the table but
didn't reset the reference.
The solution is to reset the reference whenever an admin
statement fails and the tables are closed.
Fixed an incomplete historical ALTER TABLE MODIFY that trimmed
the Trigger privilege bit from the mysql.tables_priv.Table_priv
column. Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
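One way to verify the restored column definition (sketch):

  -- The SET definition must still include 'Trigger'.
  SELECT COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS
   WHERE TABLE_SCHEMA = 'mysql'
     AND TABLE_NAME = 'tables_priv'
     AND COLUMN_NAME = 'Table_priv';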
This bug is a consequence of WL#5349, as the
default storage engine was changed.
The fix was to explicitly add an ENGINE
clause to the CREATE TABLE statement, to
ensure that we test case preservation on
MyISAM.
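I.e. the test now pins the engine explicitly, along these lines
(illustrative; the actual table names differ):

  CREATE TABLE Case_Sensitive (a INT) ENGINE=MyISAM;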
Bug#47633 - assert in ha_myisammrg::info during OPTIMIZE
The server crashed on an attempt to optimize a MERGE table with
non-existent child table.
mysql_admin_table() relied on the table being successfully
opened if a table object had been allocated.
Changed the code to check the return value of the open function
before calling a handler:: function on it.
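A sketch of the scenario (names illustrative):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  DROP TABLE t1;      -- the child is now non-existent
  OPTIMIZE TABLE m1;  -- must report an error, not assert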
returns nothing
When looking for table or database names inside
INFORMATION_SCHEMA we must convert the table and database names
to lowercase (just as it's done in the rest of the server) when
lower_case_table_names is non-zero. This allows us to find the
same tables that we would find if there were no condition.
Fixed by converting to lowercase when extracting the database
and table name conditions.
Test case added.
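A sketch of the scenario with lower_case_table_names=1 (names
illustrative):

  CREATE DATABASE MixedCase;   -- stored as 'mixedcase'
  -- Before the fix this found nothing; now the condition is
  -- lowercased and matches the stored name.
  SELECT table_schema, table_name
    FROM INFORMATION_SCHEMA.TABLES
   WHERE table_schema = 'MixedCase';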
The problem is that QUICK_SELECT_DESC behaviour depends on the
used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports clustered keys. But
used_key_parts is overwritten with the best_key_parts value,
which prevents correct selection of the index access method.
The fix is to preserve the used_key_parts value for further use
in QUICK_SELECT_DESC.
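A query shape where this matters (sketch, assuming an
InnoDB-style clustered primary key):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY k1 (a))
    ENGINE=InnoDB;
  -- On a clustered-key engine the secondary index k1 implicitly
  -- ends with pk, so used_key_parts can exceed best_key_parts.
  SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 ORDER BY a DESC;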
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
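The deadlock scenario, sketched as two connections (names
illustrative):

  -- Connection 1:
  HANDLER db1.t1 OPEN;
  -- Connection 2 (blocks on the open HANDLER, while holding
  -- LOCK_mysql_create_db):
  DROP DATABASE db1;
  -- Connection 1 (deadlocked before the fix, waiting on
  -- LOCK_mysql_create_db):
  CREATE DATABASE db2;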
During creation of the table list of processed tables, the
hidden I_S table 'VARIABLES' is erroneously added into the table
list. This leads to an ER_UNKNOWN_TABLE error in the
TABLE_LIST::add_table_to_list() function.
The fix is to skip adding hidden I_S tables to the table list.
require O(#scans) memory
When an index merge operation was restarted, it would
re-allocate the Unique object controlling the duplicate row
ID elimination. Fixed by making the Unique object a member
of QUICK_INDEX_MERGE_SELECT and thus reusing it throughout
the lifetime of this object.
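An illustrative index merge plan; before the fix, every restart
of such a quick select allocated a fresh Unique object:

  CREATE TABLE t1 (a INT, b INT, KEY (a), KEY (b));
  -- EXPLAIN shows type=index_merge, Using union(a,b); row IDs
  -- from both scans are deduplicated through the Unique object.
  EXPLAIN SELECT * FROM t1 WHERE a = 1 OR b = 2;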
file .\filesort.cc, line 149 (part II)
Problem: the server didn't disregard sort order
for some zero-length tuples.
Fix: skip the sort order in such a case
(zero-length NOT NULL string functions).
use limit efficiently
Bug #36569: UPDATE ... WHERE ... ORDER BY... always does a
filesort even if not required
Also fixes two bugs reported after QA review (before the commit
of the bugs above to public trees; no documentation needed):
Bug #53737: Performance regressions after applying patch
for bug 36569
Bug #53742: UPDATEs have no effect after applying patch
for bug 36569
Execution of single-table UPDATE and DELETE statements did not use the
same optimizer as was used in the compilation of SELECT statements.
Instead, it had an optimizer of its own that did not take into account
that you can omit sorting by retrieving rows using an index.
Extra optimization has been added: when applicable, single-table
UPDATE/DELETE statements use an existing index instead of
filesort, just as a corresponding SELECT query would.
Handling of the DESC ordering expression has also been added for
cases where a reverse index scan is applicable.
From now on most single-table UPDATE and DELETE statements show
the same disk access patterns as the corresponding SELECT query.
We verify this by comparing the result of
SHOW STATUS LIKE 'Sort%'.
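For example (sketch):

  CREATE TABLE t1 (a INT, b INT, KEY (a));
  -- ... populate t1 ...
  FLUSH STATUS;
  UPDATE t1 SET b = b + 1 ORDER BY a LIMIT 10;
  -- With the fix the UPDATE walks index 'a' and the Sort_*
  -- counters stay at zero, as for the corresponding SELECT.
  SHOW STATUS LIKE 'Sort%';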
Currently the get_index_for_order() function
a) checks the quick select index (if any) for compatibility with
the ORDER expression list, or
b) chooses the cheapest available compatible index, but only if
the index scan is cheaper than filesort.
The second way is implemented by the new
test_if_cheaper_ordering() function (a part extracted from
test_if_skip_sort_order()).
Incorrect handling of NULL arguments could lead to a crash on
the IN or CASE operations when either NULL arguments were
passed explicitly as arguments (IN) or implicitly generated by
the WITH ROLLUP modifier (both IN and CASE).
Item_func_case::find_item() assumed all necessary comparators
to be instantiated in fix_length_and_dec(). However, in the
presence of WITH ROLLUP modifier, arguments could be
substituted with an Item_null leading to an "unexpected"
STRING_RESULT comparator being invoked.
In addition to the problem identical to the above,
Item_func_in::val_int() could crash even with explicitly passed
NULL arguments, due to an optimization in fix_length_and_dec()
that leads to NULL arguments being ignored during comparator
creation.
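Sketches of the two crashing shapes (illustrative):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  -- NULL passed explicitly to IN:
  SELECT a IN (NULL, 1) FROM t1;
  -- NULLs generated implicitly by WITH ROLLUP (IN and CASE over
  -- the grouped column):
  SELECT a IN (a, a), CASE a WHEN a THEN 1 END
    FROM t1 GROUP BY a WITH ROLLUP;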
During the record search it is not taken into account that the
initial quick->file->ref value could be inapplicable to the
range interval. After the proper row is found, this stale value
is stored into the record buffer, and later the record is
filtered out at the condition evaluation stage.
The fix is to store the reference of the found row into the
handler's ref field.
Problem: a flaw (dereferencing a NULL pointer) in the LIKE
optimization code could lead to a server crash in some rare
cases.
Fix: check the pointer before dereferencing it.
The default storage engine is changed from MyISAM to
InnoDB, in all builds except for the embedded server.
In addition, the following system variables are
changed:
* innodb_file_per_table is enabled
* innodb_strict_mode is enabled
* innodb_file_format_name_update is changed
to 'Barracuda'
The test suite is changed so that tests that do not
explicitly include have_innodb.inc are run with
--default-storage-engine=MyISAM. This is to ease the
transition, so that most regression tests are run
with the same engine as before.
Some tests are disabled for the embedded server
regression test, as the output of certain statements
will be different from that of the regular server
(e.g. SELECT @@default_storage_engine). This is to
ease the transition.
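A quick sanity check of the new defaults (sketch):

  SELECT @@default_storage_engine;  -- InnoDB (MyISAM on embedded)
  SELECT @@innodb_file_per_table;   -- ON
  SELECT @@innodb_strict_mode;      -- ON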