This bug is a design flaw in the fix for Bug#33546. That fix assumed that an
item can be used in only one comparison context, but that is not actually
the case. Item_cache_datetime is used to store the result of MIN/MAX
aggregate functions. Because Arg_comparator compares datetime values as INTs
whenever possible, Item_cache_datetime most of the time caches only the INT
value. But since all datetime values have the STRING result type, MIN/MAX
functions are asked for a STRING value when the result is sent to the
client. Item_cache_datetime was designed to avoid conversions and to get the
INT/STRING values from an underlying item, but by the time the value is
requested, the underlying item no longer holds it, so a wrong result is
returned.
Besides that, the MIN/MAX aggregate functions were wrongly initializing the
cached result, which also led to wrong results.
The Item::has_compatible_context helper function is added. It checks whether
this item and the given item have the same comparison context or can be
compared as DATETIME values by Arg_comparator. The equality propagation
optimization is adjusted to take into account that items compared as
DATETIME can have different comparison contexts.
Item_cache_datetime now converts the cached INT value to a correct DATETIME
STRING value by means of the number_to_datetime and my_TIME_to_str functions
(see the sketch at the end of this entry).
The Arg_comparator::set_cmp_context_for_datetime helper function is added.
It sets the comparison context of items compared as DATETIMEs to INT if the
items will be compared as longlong.
The Item_sum_hybrid::setup function now correctly initializes its result
value.
In order to avoid unnecessary conversions, Item_sum_hybrid now states that
it can provide a correct longlong value if the item being aggregated can do
so too.
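For illustration, a minimal self-contained sketch of that INT-to-string
step, assuming the common packed YYYYMMDDHHMMSS longlong form; the helper
below is a simplified stand-in for number_to_datetime plus my_TIME_to_str,
not the server code:

  #include <cstddef>
  #include <cstdio>

  // Simplified stand-in: expand a datetime cached as a longlong in packed
  // YYYYMMDDHHMMSS form back into the string the client expects.
  static int cached_int_to_datetime_str(long long v, char *buf,
                                        std::size_t len)
  {
    long long date = v / 1000000, time = v % 1000000;
    return std::snprintf(buf, len,
                         "%04lld-%02lld-%02lld %02lld:%02lld:%02lld",
                         date / 10000, (date / 100) % 100, date % 100,
                         time / 10000, (time / 100) % 100, time % 100);
  }

  int main()
  {
    char buf[20];
    cached_int_to_datetime_str(20100101123456LL, buf, sizeof(buf));
    std::puts(buf);   // prints 2010-01-01 12:34:56
  }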
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
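Schematically, the fix amounts to checking the session error flag after
evaluating the condition instead of proceeding to send OK; the types and
names below are a hypothetical simplification, not the server's:

  #include <cstdio>

  // Hypothetical, simplified model of the fix: stop as soon as condition
  // evaluation has raised an error, so OK is never sent on an error path.
  struct Session { bool error_raised = false; };

  static bool eval_where(Session &s)
  {
    s.error_raised = true;   // e.g. a view raised an error underneath
    return false;
  }

  static bool do_update(Session &s)
  {
    bool cond = eval_where(s);
    (void)cond;
    if (s.error_raised)      // the missing check: propagate the error
      return true;           // caller reports the error instead of OK
    /* ... perform the update, then send OK ... */
    return false;
  }

  int main()
  {
    Session s;
    std::printf("update failed: %d\n", static_cast<int>(do_update(s)));
  }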
and reverse() function
Three problems fixed:
1. The reported problem: caused by incorrectly parsing the
file as ucs2 data, resulting in a wrong length of the parsed
string. Fixed by truncating the invalid trailing bytes
(incomplete multibyte characters) when reading from the file.
2. LOAD DATA, when reading from a proper UCS2 file, wasn't
recognizing the new-line characters. Fixed by first checking
whether a byte is a new-line (or any other special) character
before reading it as part of a multibyte character; see the
sketch after this list.
3. When using user variables to hold the column data in LOAD
DATA, the character set of the user variable was incorrectly
set to the database charset. Fixed by setting it to the charset
specified by LOAD DATA (if any).
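A minimal sketch of the reading order from point 2 and the truncation from
point 1, for the fixed-width two-byte case; this is illustrative only, not
the server's parser:

  #include <cstddef>
  #include <cstdio>
  #include <string>

  // Scan a ucs2 byte buffer, checking each byte for a terminator *before*
  // consuming it as half of a two-byte character (point 2), and drop a
  // dangling half character at the end of the buffer (point 1).
  static std::string read_field(const unsigned char *buf, std::size_t len)
  {
    if (len % 2)                  // incomplete trailing multibyte character:
      --len;                      // truncate it instead of misparsing
    std::string field;
    for (std::size_t i = 0; i < len; i += 2) {
      if (buf[i] == '\n')         // the terminator check now comes first
        break;
      field.append(reinterpret_cast<const char *>(buf) + i, 2);
    }
    return field;
  }

  int main()
  {
    const unsigned char line[] = {0x04, 0x10, 0x04, 0x11, '\n', 0x04};
    std::printf("%zu bytes in field\n",
                read_field(line, sizeof(line)).size());
  }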
naming scheme for tests related to functions, rename analyse.test to
func_analyse.test (test for the ANALYSE() procedure). Avoids confusion
with the ANALYZE statement (tested in analyze.test).
The problem here is that the HAVING condition evaluates the const
parts of the condition even though the condition refers to
aggregate functions. Table t1 becomes a const table
after make_join_statistics, and with table1.pk = 1 the HAVING
condition is transformed into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the const parts of HAVING if the
HAVING condition refers to aggregate functions.
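Schematically, the guard looks as follows; the names are hypothetical, the
real check operates on the server's internal condition representation:

  #include <cstdio>

  // Simplified model: a HAVING condition knows whether it refers to
  // aggregate functions, and const parts are only folded early when it
  // does not.
  struct Cond {
    bool is_const;
    bool refers_to_aggregates;
  };

  static bool may_fold_const_parts(const Cond &having)
  {
    // Even a "const" part such as MAX(1) < 7 must not be evaluated early
    // when aggregates are referenced: aggregates are only computed after
    // grouping.
    return having.is_const && !having.refers_to_aggregates;
  }

  int main()
  {
    Cond having = {true, true};   // MAX(1) < 7 after const-table folding
    std::printf("fold early: %d\n", may_fold_const_parts(having));
  }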
Essentially, the problem is that safemalloc is excruciatingly
slow, as it checks all allocated blocks for overrun at each
memory management primitive, yielding an almost quadratic
slowdown for the memory management functions (malloc, realloc,
free). The overrun check basically consists of verifying some
bytes of a block for certain magic keys, which catches only some
simple forms of overrun. Other minor problems are that safemalloc
violates aliasing rules and that its own internal list of blocks
is prone to corruption.
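For reference, the kind of check described is the classic canary scheme; a
minimal sketch follows, whose layout and magic bytes are illustrative, not
safemalloc's actual scheme:

  #include <cassert>
  #include <cstddef>
  #include <cstdlib>
  #include <cstring>

  // Canary-style overrun check in the spirit of safemalloc.
  static const unsigned char MAGIC[4] = {0xDE, 0xAD, 0xBE, 0xEF};

  static void *guarded_malloc(std::size_t n)
  {
    unsigned char *p =
        static_cast<unsigned char *>(std::malloc(n + sizeof(MAGIC)));
    std::memcpy(p + n, MAGIC, sizeof(MAGIC));  // plant a canary past the block
    return p;
  }

  static void guarded_free(void *ptr, std::size_t n)
  {
    unsigned char *p = static_cast<unsigned char *>(ptr);
    // Cheap check that catches only simple overruns: the canary must still
    // be intact when the block is released.
    assert(std::memcmp(p + n, MAGIC, sizeof(MAGIC)) == 0);
    std::free(p);
  }

  int main()
  {
    void *p = guarded_malloc(16);
    guarded_free(p, 16);   // writing past byte 15 would trip the assert
  }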
Another issue with safemalloc is its maintenance cost,
as the tool has a significant impact on the server code.
Given the multitude of memory debuggers available nowadays,
especially those provided with the platform malloc
implementation, maintaining an in-house and largely obsolete
memory debugger is a burden that is not worth the effort,
due to its slowness and its lack of support for detecting the
more common forms of heap corruption.
Since there are third-party tools that can provide the same
functionality at a lower or comparable performance cost, the
solution is to simply remove safemalloc.
The removal of safemalloc also allows a simplification of the
malloc wrappers, removing quite a bit of kludge: the redefinition
of my_malloc and my_free, and the removal of the unused second
argument of my_free. Since free() always checks whether the
supplied pointer is null, redundant caller-side checks are also
removed (see the sketch below).
Also, this patch adds unit testing for my_malloc and moves
my_realloc implementation into the same file as the other
memory allocation primitives.
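For instance, a wrapper along these lines needs no null guard before
calling free(); the wrapper shown is a hypothetical simplification, not
my_free itself:

  #include <cstdlib>

  // free(NULL) is defined as a no-op, so the wrapper needs no null guard.
  static void wrapped_free(void *ptr)
  {
    std::free(ptr);         // safe even when ptr is null
  }

  int main()
  {
    wrapped_free(nullptr);  // no redundant caller-side null check needed
  }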
from mysql-next-mr-opt-backporting.
Bug#54515: Crash in opt_range.cc::get_best_group_min_max on
SELECT from VIEW with GROUP BY
When handling the grouping items in get_best_group_min_max, the
items need to be of type Item_field. In this bug, an ASSERT
triggered because the item used for grouping was an
Item_direct_view_ref (i.e., the group column was from a view).
The fix is to use the real_item, since an Item_ref* pointing to an
Item_field is OK.
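In a simplified model of the item hierarchy (illustrative, not the server's
classes), the fix looks like this:

  #include <cassert>

  // A ref forwards real_item() to the item it points at, so grouping code
  // can accept an Item_ref that wraps an Item_field.
  struct Item {
    virtual ~Item() {}
    virtual bool is_field() const { return false; }
    virtual Item *real_item() { return this; }
  };

  struct Item_field : Item {
    bool is_field() const override { return true; }
  };

  struct Item_ref : Item {              // stands in for Item_direct_view_ref
    Item *ref;
    explicit Item_ref(Item *r) : ref(r) {}
    Item *real_item() override { return ref->real_item(); }
  };

  int main()
  {
    Item_field f;
    Item_ref view_ref(&f);
    // The fix: look through the ref before insisting on an Item_field.
    assert(view_ref.real_item()->is_field());
  }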
Problem: sha2() reported its result as BINARY
Fix:
- Inheriting Item_func_sha2 from Item_str_ascii_func
- Setting max_length via fix_length_and_charset()
instead of direct assignment.
- Adding tests
value and NO_ZERO_DATE
The problem was that an older version of the error path for a
failed admin statement relied upon a few error conditions being
met in order to access a table handler, the first one being that
the table object pointer was not NULL. Probably by chance, in all
the cases where a table object was closed but the reference wasn't
reset, the other conditions didn't evaluate to true. With the
addition of a new check on the error path, the handler started
being dereferenced whenever the reference was not reset to NULL,
causing problems for code paths which closed the table but didn't
reset the reference.
The solution is to reset the reference whenever an admin statement
fails and the tables are closed.
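Schematically, with hypothetical types rather than the server's:

  // Once the admin statement fails and its tables are closed, the
  // reference is reset so that later error handling never dereferences a
  // closed table.
  struct Table { int dummy; };
  struct Table_list { Table *table; };

  static void close_tables_after_failure(Table_list *t)
  {
    /* ... close t->table ... */
    t->table = nullptr;      // the fix: reset the reference after closing
  }

  int main()
  {
    Table tab = {0};
    Table_list tl = {&tab};
    close_tables_after_failure(&tl);
    // Checks of the form "if (tl.table) use its handler" are now reliable:
    // the pointer is null once the table has been closed.
    return tl.table != nullptr;
  }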
Fixed an incomplete historical ALTER TABLE MODIFY that trimmed the trigger
privilege bit from the mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
BUG#54872 MBR: replication failure caused by using tmp table inside transaction
Changed the criteria for classifying a statement as unsafe in order to
reduce the number of spurious warnings. So a statement is classified as
unsafe, when there is an on-going transaction at any point of the
execution, if:
1. The mixed statement is about to update a transactional table and
a non-transactional table.
2. The mixed statement is about to update a temporary transactional
table and a non-transactional table.
3. The mixed statement is about to update a transactional table and
read from a non-transactional table.
4. The mixed statement is about to update a temporary transactional
table and read from a non-transactional table.
5. The mixed statement is about to update a non-transactional table
and read from a transactional table when the isolation level is
lower than repeatable read.
After updating a transactional table, if:
6. The mixed statement is about to update a non-transactional table
and read from a temporary transactional table.
7. The mixed statement is about to update a non-transactional table
and read from a temporary transactional table.
8. The mixed statement is about to update a non-transactional table
and read from a temporary non-transactional table.
9. The mixed statement is about to update a temporary non-transactional
table and update a non-transactional table.
10. The mixed statement is about to update a temporary non-transactional
table and read from a non-transactional table.
11. A statement is about to update a non-transactional table and the
option variables.binlog_direct_non_trans_update is OFF.
The reason for this is that the locks acquired may not prevent a concurrent
transaction from interfering with the current execution and, by consequence,
with the result. So the patch reduced the number of spurious unsafe
warnings.
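A condensed sketch of the decision, with the first group of criteria above
mapped to boolean flags; the names are made up, and the server's real check
lives in the binary logging code:

  #include <cstdio>

  // Condensed illustration of criteria 1-5 above (criteria 6-11, the
  // "after updating a transactional table" cases, are analogous).
  struct Stmt {
    bool updates_trans, updates_temp_trans;
    bool updates_non_trans, reads_trans, reads_non_trans;
    bool isolation_below_repeatable_read;
  };

  static bool unsafe_inside_transaction(const Stmt &s)
  {
    if ((s.updates_trans || s.updates_temp_trans) &&
        (s.updates_non_trans || s.reads_non_trans))  // criteria 1-4
      return true;
    if (s.updates_non_trans && s.reads_trans &&
        s.isolation_below_repeatable_read)           // criterion 5
      return true;
    return false;
  }

  int main()
  {
    Stmt s = {true, false, true, false, false, false};  // criterion 1
    std::printf("unsafe: %d\n", unsafe_inside_transaction(s));
  }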
Besides this, we fixed a regression caused by BUG#51894, which made
temporary tables go into the trx-cache if there is an on-going transaction.
In MIXED mode, the patch for BUG#51894 ignored that the trx-cache may have
updates to temporary non-transactional tables that must be written to the
binary log while rolling back the transaction.
We fix this problem by writing the content of the trx-cache to the
binary log while rolling back a transaction, if a non-transactional
temporary table was updated and the binary logging format is MIXED.
This bug is a consequence of WL#5349, as the
default storage engine was changed.
The fix was to explicitly add an ENGINE
clause to a CREATE TABLE statement, to
ensure that we test case preservation on
MyISAM.
Bug#47633 - assert in ha_myisammrg::info during OPTIMIZE
The server crashed on an attempt to optimize a MERGE table with a
non-existent child table.
mysql_admin_table() relied on the table being successfully opened
if a table object had been allocated.
Changed the code to check the return value of the open function
before calling a handler:: function on it.
returns nothing
When looking for table or database names inside INFORMATION_SCHEMA
we must convert the table and database names to lowercase (just as is
done in the rest of the server) when lower_case_table_names is non-zero.
This allows us to find the same tables that we would find if there
were no condition.
Fixed by converting to lowercase when extracting the database and
table name conditions (see the sketch below).
Test case added.
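Schematically, the normalization before the lookup; plain ASCII tolower is
used here for brevity, whereas the server lowercases charset-aware:

  #include <cctype>
  #include <cstdio>
  #include <string>

  // Sketch of the fix: lowercase the extracted database/table name before
  // the lookup whenever lower_case_table_names is non-zero.
  static std::string normalize_name(std::string name,
                                    int lower_case_table_names)
  {
    if (lower_case_table_names != 0)
      for (char &c : name)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return name;
  }

  int main()
  {
    // "MyTable" and "mytable" now hit the same INFORMATION_SCHEMA rows.
    std::puts(normalize_name("MyTable", 1).c_str());
  }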
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports clustered keys.
But used_key_parts is overwritten with the best_key_parts
value, which prevents correct selection of the index
access method. The fix is to preserve the used_key_parts
value for further use in QUICK_SELECT_DESC.
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
During creation of the table list of
processed tables, the hidden I_S table 'VARIABLES'
is erroneously added into the table list.
This leads to an ER_UNKNOWN_TABLE error in
the TABLE_LIST::add_table_to_list() function.
The fix is to skip the addition of hidden I_S
tables to the table list.
require O(#scans) memory
When an index merge operation was restarted, it would
re-allocate the Unique object controlling the duplicate row
ID elimination. Fixed by making the Unique object a member
of QUICK_INDEX_MERGE_SELECT and thus reusing it throughout
the lifetime of this object.
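The pattern, in a stripped-down form; the class below is illustrative, with
std::set standing in for the Unique object:

  #include <cstdio>
  #include <set>

  // Keep one duplicate eliminator alive as a member and clear it between
  // scans, instead of re-allocating it every time the operation restarts.
  struct IndexMergeSelect {
    std::set<long> unique;                        // member, allocated once

    void reset_for_rescan() { unique.clear(); }   // reuse, don't re-allocate
    bool put_row_id(long rowid) { return unique.insert(rowid).second; }
  };

  int main()
  {
    IndexMergeSelect s;
    for (int scan = 0; scan < 2; ++scan) {        // a restarted operation
      s.reset_for_rescan();
      s.put_row_id(42);
      std::printf("scan %d: duplicate detected: %d\n",
                  scan, !s.put_row_id(42));       // second insert fails
    }
  }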
file .\filesort.cc, line 149 (part II)
Problem: the server didn't disregard sort order
for some zero length tuples.
Fix: skip sort order in such a case
(zero length NOT NULL string functions).