Item_func_spatial_collection::fix_length_and_dec()
changed to use the argument's print() method to print
the ER_ILLEGAL_VALUE_FOR_TYPE error.
mysql-test/r/gis.result:
Fix for bug#56679: gis.test: valgrind error
- test result adjusted.
sql/item_geofunc.h:
Fix for bug#56679: gis.test: valgrind error
- use the argument's print() method instead of an improper val_str() call
in Item_func_spatial_collection::fix_length_and_dec(); val_str() is
applicable only to constant items at that point.
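For illustration only (table and column names are made up, not taken
from the gis.test case), the kind of statement affected is a
GeometryCollection() call whose argument is a non-constant expression:

  CREATE TABLE t1 (x INT, y INT);
  INSERT INTO t1 VALUES (1, 1);
  SELECT AsText(GeometryCollection(Point(x, y))) FROM t1;
  DROP TABLE t1;

Since the argument is not a constant item, calling val_str() on it from
fix_length_and_dec() could read uninitialized data (the valgrind error);
print() is safe for both constant and non-constant arguments.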
ALTER TABLE on a MERGE table could cause a deadlock with two
other connections if we reached a situation where:
1) A connection doing ALTER TABLE can't upgrade to MDL_EXCLUSIVE on the
parent table, but holds TL_READ_NO_INSERT on the child tables.
2) A connection doing DELETE on a child table can't get TL_WRITE on it
since ALTER TABLE holds TL_READ_NO_INSERT.
3) A connection doing SELECT on the parent table can't get TL_READ on
the child tables since TL_WRITE is ahead in the lock queue, but holds
MDL_SHARED_READ on the parent table preventing ALTER TABLE from upgrading.
For regular tables, this deadlock is avoided by having ALTER TABLE
take a MDL_SHARED_NO_WRITE metadata lock on the table. This prevents
DELETE from acquiring MDL_SHARED_WRITE on the table before ALTER TABLE
tries to upgrade to MDL_EXCLUSIVE. In the example above, SELECT would
therefore not be blocked by the pending DELETE, as DELETE would not be
able to place a TL_WRITE request in the table lock queue.
This patch fixes the problem for merge tables by using the same metadata
lock type for child tables as for the parent table. The child tables will
therefore be locked with MDL_SHARED_NO_WRITE in this case, preventing
DELETE from acquiring a metadata lock and entering the table lock queue.
Change in behavior: By taking the same metadata lock for child tables
as for the parent table, LOCK TABLE on the parent table will now also
implicitly lock the child tables. Since LOCK TABLE on the parent table
now takes more than one metadata lock, it is possible for LOCK TABLE
... WRITE on the parent table or child tables to give an ER_LOCK_DEADLOCK
error.
Test case added to mdl_sync.test.
Merge.test/.result has been updated to reflect the change to LOCK TABLE.
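A minimal sketch of the behavior change (table names are made up, not
taken from merge.test):

  CREATE TABLE c1 (a INT) ENGINE=MyISAM;
  CREATE TABLE c2 (a INT) ENGINE=MyISAM;
  CREATE TABLE m (a INT) ENGINE=MERGE UNION=(c1, c2);
  LOCK TABLES m WRITE;
  -- Other connections now also wait on the children's metadata locks,
  -- since they are taken with the same lock type as the parent's.
  UNLOCK TABLES;
  DROP TABLE m, c1, c2;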
Conversion from a floating point number to a string caused a
crash.
Under rare circumstances a String object could cause a crash when
it was requested to allocate new memory.
A crash could occur in Field_double::val_str() because of
a pointer referencing memory inside a String object whose size
was unknown.
And finally, the geometry collection functions should not accept
non-geometric arguments.
mysql-test/r/gis.result:
* Test results change because we now intercept the error behind the
previous crashes much earlier.
sql/field.cc:
* It makes no sense to impose a lower limit on the length, and not
setting an upper limit will cause crashes later.
sql/item_geofunc.h:
* Disallow binding to field and item types other than
MYSQL_TYPE_GEOMETRY.
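As a hypothetical illustration of the last point (not the actual
gis.test case), a non-geometric argument is now rejected with an error
instead of crashing:

  SELECT GeometryCollection(1);                    -- non-geometric: error
  SELECT AsText(GeometryCollection(Point(1, 1)));  -- geometric: OK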
The EXISTS transformation has additional switches to catch the known corner
cases that appear when transforming an IN predicate into EXISTS. Guarded
conditions are used which are deactivated when a NULL value is seen in the
outer expression's row. When the inner query block supplies NULL values,
however, they are filtered out, because no distinction is made between the
guarded conditions: the guarded NOT x IS NULL conditions in the HAVING
clause that filter out NULL values cannot be deactivated in isolation from
those that match values or NULLs from the outer expression.
The above problem is handled by making the guarded conditions remember
whether they have rejected a NULL value, and the index access methods take
this into account as well.
The bug consisted of
1) Not resetting the property for every nested loop iteration on the inner
query's result.
2) Not propagating the NULL result properly from the inner query to the
IN optimizer.
3) A hack that may or may not have been needed at some point. According to
a comment it was aimed at fixing #2 by returning NULL when FALSE was
actually the result. This caused failures once #2 was properly fixed. The
hack is now removed.
The fix resolves all three points.
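A hypothetical sketch of the semantics involved (table names are made
up): with a NULL on either side of IN, the predicate must evaluate to
NULL rather than FALSE whenever a match cannot be ruled out:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (NULL);
  INSERT INTO t2 VALUES (1), (NULL);
  -- NULL outer value, non-empty inner result: the IN predicate is NULL;
  -- the same applies when only the inner side supplies NULLs.
  SELECT a, a IN (SELECT a FROM t2) FROM t1;
  DROP TABLE t1, t2;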
multi-table UPDATE IGNORE.
The problem was that if there was an active SELECT statement
during trigger execution, an error raised during the execution
could cause a crash. The fix is to temporarily reset
LEX::current_select before trigger execution and restore it
afterwards. This way errors raised during trigger execution are
processed as if there were no active SELECT.
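A hypothetical sketch of the scenario (the names and the failing trigger
body are made up, not taken from trigger_notembedded.test):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW
    SET NEW.a = (SELECT 1 UNION SELECT 2);  -- fails at run time
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (1);
  -- The trigger error is raised while the multi-table update's SELECT
  -- is still active.
  UPDATE IGNORE t1, t2 SET t1.a = 2 WHERE t1.a = t2.a;
  DROP TABLE t1, t2;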
mysql-test/r/trigger_notembedded.result:
added test case result for bug #55421.
mysql-test/t/trigger_notembedded.test:
added test case for bug #55421.
sql/sql_trigger.cc:
Reset thd->lex->current_select before starting trigger execution
and restore its original value after execution is finished.
This is necessary in order to set the error status in the
diagnostics area in case of trigger execution failure.
inited==INDEX
When an error occurred while sending the data in a temporary table, no
cleanup was performed. This caused a failed assertion when different
access methods were used for populating the table vs. retrieving the data
from it, if IGNORE was specified and sql_safe_updates = 0. In that case
execution continues, but the handler expects to proceed with the access
method used for row retrieval.
Fixed by doing the cleanup even if errors occur.
The Item_func_str_to_date class wasn't providing the correct integer
DATETIME representation as expected. This led to wrong comparison results
and didn't allow the STR_TO_DATE function to be used with indexes.
Also, the STR_TO_DATE function was inconsistent in throwing
warnings/errors. Fixed now.
val_int and result_as_longlong methods were added to the Item_func_str_to_date
class.
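For illustration (a made-up table, not the test case added to
type_datetime.test), the kind of comparison that benefits from the new
val_int()/result_as_longlong() methods:

  CREATE TABLE t1 (d DATETIME, KEY (d));
  INSERT INTO t1 VALUES ('2010-01-01 00:00:00');
  -- STR_TO_DATE now yields a proper integer DATETIME representation, so
  -- the comparison is correct and the index on d can be used.
  SELECT * FROM t1 WHERE d = STR_TO_DATE('2010-01-01', '%Y-%m-%d');
  DROP TABLE t1;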
mysql-test/r/func_time.result:
Test case result adjusted after fixing bug#56271.
mysql-test/r/parser.result:
Test case result adjusted after fixing bug#56271.
mysql-test/r/select.result:
A test case result adjusted after fixing bug#56271.
mysql-test/r/strict.result:
Test case result adjusted after fixing bug#56271.
mysql-test/r/type_datetime.result:
Added a test case for bug#56271.
mysql-test/t/strict.test:
Test case adjusted after fixing bug#56271.
mysql-test/t/type_datetime.test:
Added a test case for bug#56271.
sql/item_timefunc.cc:
Bug#56271: Wrong comparison result with STR_TO_DATE function
val_int and result_as_longlong methods were added to the Item_func_str_to_date
class.
Item_func_str_to_date::get_date now throws the ER_WRONG_VALUE_FOR_TYPE warning
on incorrect value.
sql/item_timefunc.h:
Bug#56271: Wrong comparison result with STR_TO_DATE function
val_int and result_as_longlong methods were added to the Item_func_str_to_date
class.
Original changeset:
------------------------------------------------------------
revno: 3197
revision-id: alik@sun.com-20100831135426-h5a4s2w6ih1d8q2x
parent: magnus.blaudd@sun.com-20100830120632-u3xzy002mdwueli8
committer: Alexander Nozdrin <alik@sun.com>
branch nick: mysql-5.5-bugfixing
timestamp: Tue 2010-08-31 17:54:26 +0400
message:
Bug#55980 Character sets: supplementary character _bin ordering is wrong
Problem:
- ORDER BY for utf8mb4_bin, utf16_bin and utf32_bin returned
results in the wrong order, because old functions
(supporting only the BMP range) were used to handle these collations.
- Additionally, utf16_bin did not sort supplementary characters
between U+D700 and U+E000, as specified by WL#1213.
------------------------------------------------------------
include/m_ctype.h:
Adding prototypes.
mysql-test/include/ctype_filesort2.inc:
Adding a new shared test file.
mysql-test/t/ctype_utf8mb4.test:
Adding tests.
strings/ctype-ucs2.c:
- Fixing my_strncoll[sp]_utf16_bin to compare
the binary representation instead of code points,
so that indexed columns sort correctly.
- Fixing my_collation_handler_utf32_bin and
my_collation_handler_utf16_bin to use new
functions.
strings/ctype-utf8.c:
- Adding my_strnxfrm[len]_unicode_fill_bin()
to handle utf8mb4_bin, utf16_bin and utf32_bin,
using 3 bytes per weight.
This function also performs special reordering in case of utf16_bin.
- Fixing my_collation_utf8mb4_bin handler to use the
new function.
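A hypothetical illustration of the ordering issue (the real coverage is
in ctype_filesort2.inc and ctype_utf8mb4.test):

  CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin);
  INSERT INTO t1 VALUES (0xEA8080), (0xF0908080);  -- U+A000 and U+10000
  -- The supplementary character (U+10000) must sort after the BMP
  -- character under the *_bin collations.
  SELECT HEX(c) FROM t1 ORDER BY c;
  DROP TABLE t1;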
------------------------------------------------------------
revno: 3124
revision-id: dlenev@mysql.com-20100831090419-rzr5ktekby2gspm1
parent: alik@sun.com-20100827083901-x4wvtc10u9p7gcs9
committer: Dmitry Lenev <dlenev@mysql.com>
branch nick: mysql-5.5-rt-56137
timestamp: Tue 2010-08-31 13:04:19 +0400
message:
Bug #56137 "Assertion `thd->lock == 0' failed on upgrading
from 5.1.50 to 5.5.6".
Debug builds of the server aborted due to an assertion
failure when DROP DATABASE statement was run on an
installation which had outdated or corrupt mysql.proc table.
Particularly this affected the mysql_upgrade tool which is
run as part of 5.1 to 5.5 upgrade.
The problem was that sp_drop_db_routines(), which was invoked
during dropping of the database, could have returned without
closing and unlocking mysql.proc table in cases when this
table was not up-to-date with the current server. As a result
further attempt to open and lock the mysql.event table, which
was necessary to complete dropping of the database, ended up
with an assert.
This patch solves this problem by ensuring that
sp_drop_db_routines() always closes mysql.proc table and
releases metadata locks on it. This is achieved by changing
open_proc_table_for_update() function to close tables and
release metadata locks acquired by it in case of failure.
This step also makes behavior of the latter function
consistent with behavior of open_proc_table_for_read()/
open_and_lock_tables().
Test case for this bug was added to sp-destruct.test.
------------------------------------------------------------
"Access compatibility" syntax
The "wild" "DELETE FROM table_name.* ... USING ..." syntax
for multi-table DELETE statements is documented but it was
lost in the fix for the bug 30234.
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
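A minimal sketch of the restored syntax (table names are made up; the
real coverage is in delete.test):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- "Access compatibility" form: .* after the target table names.
  DELETE FROM t1.*, t2.* USING t1, t2 WHERE t1.a = t2.a;
  DELETE t1.* FROM t1, t2 WHERE t1.a = t2.a;
  DROP TABLE t1, t2;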
mysql-test/r/delete.result:
Test case for bug #53034.
mysql-test/t/delete.test:
Test case for bug #53034.
sql/sql_yacc.yy:
Bug #53034: Multiple-table DELETE statements not accepting
"Access compatibility" syntax
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
Note: simply extending table_ident with opt_wild in
the table_alias_ref rule is not acceptable, because
a) it adds one more conflict and b) that conflict resolves
in an inappropriate way.
It was hard to understand what the error really meant.
Error checking in partitioning is done in several different
places during the execution of a query, which can make it
hard to return useful errors.
Added a new error for a bad VALUES part in a per-PARTITION clause.
Use the more verbose error that a column is not allowed in
the partitioning function instead of just saying that the function is
not allowed.
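For example, the statement preserved in the bundled .frm file (listed
below) is now rejected, and the error names the offending column rather
than just the partitioning function:

  CREATE TABLE t1 (a TIMESTAMP)
  PARTITION BY HASH (TO_DAYS(a));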
mysql-test/r/partition.result:
changed error to be more specific
mysql-test/r/partition_error.result:
updated result
mysql-test/std_data/parts/t1TIMESTAMP.frm:
.frm file of CREATE TABLE t1 (a TIMESTAMP) PARTITION BY HASH(TO_DAYS(a));
mysql-test/t/partition.test:
changed error to be more specific
mysql-test/t/partition_error.test:
Added a test (also verifying the behaviour of previously
created tables, which is no longer allowed).
Updated expected errors in other places.
sql/partition_info.cc:
Added function report_part_expr_error to
be able to return a more specific error.
Renamed fix_func_partition to fix_partition_values
since the function really fixes/checks the VALUES clause.
sql/partition_info.h:
Removed part_result_type, since it was unused.
Renamed fix_func_partition -> fix_partition_values.
Added report_part_expr_error.
sql/share/errmsg-utf8.txt:
Added a more specific error.
sql/sql_partition.cc:
made use of report_part_expr_error to get a more specific error.
sql/sql_yacc.yy:
Changed the error message to be more specific, and return another error code.
Check the number of line strings in the incoming polygon data (wkb) and
the number of points in the incoming linestring wkb.
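For context, a well-formed round trip looks like the following (a
made-up unit square); the fix rejects WKB whose declared line-string or
point counts do not match the data, instead of crashing:

  SELECT AsText(PolyFromWKB(AsWKB(GeomFromText(
    'POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))'))));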
mysql-test/r/gis.result:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- test result.
mysql-test/t/gis.test:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- test case.
sql/spatial.cc:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- when creating a polygon from wkb, check the number of line strings;
- when creating a linestring from wkb, check the number of points.
== MYSQL_TYPE_LONGLONG
A MIN/MAX() function with a subquery as its argument could lead
to a debug assertion on debug builds or wrong data on release
ones.
The problem was a combination of the following factors:
- Item_sum_hybrid::fix_fields() might use the argument
(args[0]) to calculate 'hybrid_field_type' which was later used
to decide how the data should be sent to the client.
- Item_sum::make_field() might use the argument again to
calculate the field's type when sending result set metadata to
the client.
- The argument could be changed in between these two calls via
Item::set_arg() leading to inconsistent metadata being
reported.
Here is what was happening for the bug's test case:
1. Item_sum_hybrid::fix_fields() calculates hybrid_field_type
as MYSQL_TYPE_LONGLONG based on args[0] which is an
Item::SUBSELECT_ITEM at that time.
2. A temporary table is created to execute the
query. create_tmp_field_from_item() creates a Field_long object
according to the subselect's max_length.
3. The subselect item in Item_sum_hybrid is replaced by the
Item_field object referencing the newly created Field_long.
4. Item_sum::make_field() rightfully returns the
MYSQL_TYPE_LONG type when calculating the result set metadata.
5. When sending the actual data, Item::send() relies on the
virtual field_type() function which in our case returns
previously calculated hybrid_field_type == MYSQL_TYPE_LONGLONG.
It looks like the only solution is to never refer to the
argument's metadata after the result metadata has been
calculated in fix_fields(), since the argument itself may be
different by then. In this sense, Item_sum::make_field() should
never be used, because it may rely on the argument's metadata
and is only called after fix_fields(). The "default"
implementation in Item::make_field() should be used instead as
it relies only on field_type(), but not on the argument's type.
Fixed by removing Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
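A hypothetical sketch of the query shape involved (not the test case
added to func_group.test): MIN/MAX over a scalar subquery, where the
subquery item is later replaced by a temporary-table field:

  CREATE TABLE t1 (a BIGINT);
  INSERT INTO t1 VALUES (1), (2);
  -- Metadata for the MAX() column must stay consistent between
  -- fix_fields() and the moment the result set is sent.
  SELECT MAX((SELECT a FROM t1 LIMIT 1)) FROM t1;
  DROP TABLE t1;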
mysql-test/r/func_group.result:
Added a test case for bug #54465.
mysql-test/t/func_group.test:
Added a test case for bug #54465.
sql/item_sum.cc:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
sql/item_sum.h:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
Bug#46754: 'rows' field doesn't reflect partition pruning
Update of test results after fixing the above bugs.
(fix in separate commit).
mysql-test/r/partition.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/r/partition_hash.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/r/partition_innodb.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/r/partition_range.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/suite/parts/r/partition_alter3_innodb.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/suite/parts/r/partition_alter3_myisam.result:
Updated test result after fixing bugs 46754 and 53806
Bug#46754: 'rows' field doesn't reflect partition pruning
The value shown in EXPLAIN's 'rows' field was evaluated from the number
of rows at the time the table was opened (not from the table cache), and
only the partitions remaining after pruning were updated with their
correct number of rows.
The evaluation of the 'rows' field used handler::records(), which is a
potentially expensive call and ignores partition pruning.
The fix is to use the handler's stats.records, after updating it
with ::info(HA_STATUS_VARIABLE), instead.
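A minimal sketch (table layout made up; the real coverage is in
partition_pruning.test):

  CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 4;
  INSERT INTO t1 VALUES (0), (1), (2), (3);
  -- 'rows' should now reflect only the partition left after pruning.
  EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 1;
  DROP TABLE t1;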
mysql-test/r/partition_pruning.result:
updated result
mysql-test/t/partition_pruning.test:
Added test.
sql/sql_select.cc:
Use ::info + stats.records instead of ::records().
Problem: trailing spaces were stripped using 8-bit code,
so the truncation result length was incorrect, which led
to an assertion failure.
Fix: use multi-byte safe code.
called twice in a row
Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.
When flattening nested joins, simplify_joins() tracks if the
name resolution list needs to be updated by setting
fix_name_res to TRUE if the current loop iteration has done any
transformations to the join table list. The problem was that
the flag was not reset before the next loop iteration leading
to unnecessary "fixing" of the name resolution list which in
turn could lead to a loop (i.e. circularly-linked part) in that
list. This was causing problems on subsequent execution when
used together with stored procedures or prepared statements.
Fixed by making sure fix_name_res is reset on every loop
iteration.
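A hypothetical sketch of the affected pattern (names made up; the real
test is in join.test): a query with nested joins re-executed as a
prepared statement:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  PREPARE s FROM
    'SELECT * FROM t1 LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a';
  EXECUTE s;
  EXECUTE s;  -- re-execution could hit the infinite loop described above
  DEALLOCATE PREPARE s;
  DROP TABLE t1, t2, t3;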
mysql-test/r/join.result:
Added a test case for bug #53544.
mysql-test/t/join.test:
Added a test case for bug #53544.
sql/sql_select.cc:
Make sure fix_name_res is reset on every loop iteration.
Before this fix, the ha_read_last_count counter was defined and
updated internally but never exposed as a status variable.
This fix exposes it as the status variable "Handler_read_last",
for completeness of the Handler_read_* status variable interface.
Adjusted test results accordingly.
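After the change the counter is visible like the other Handler_read_*
counters, e.g.:

  SHOW GLOBAL STATUS LIKE 'Handler_read_last';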
file .\dtoa.c
The assertion failure was correct because the 'width' argument
of my_gcvt() has the signed integer type, whereas the unsigned
value UINT_MAX32 was being passed by the caller
(Field_double::val_str()) leading to a negative width in
my_gcvt().
The following chain of problems was found by further analysis:
1. The display width for a floating point number is calculated
in Field_double::val_str() as either field_length or the
maximum possible length of string representation of a floating
point number, whichever is greater. Since in the bug's test
case field_length is UINT_MAX32, we get the same value as the
display width. This does not make any sense because for numeric
values field_length only matters for ZEROFILL columns,
otherwise it does not make sense to allocate that much memory
just to print a number. Field_float::val_str() has a similar
problem.
2. Even if the above wasn't the case, we would still get a
crash on a slightly different test case when trying to allocate
UINT_MAX32 bytes with String::alloc() because the latter does
not handle such large input values correctly due to alignment
overflows.
3. Even when String::alloc() is fixed to return an error when
an alignment overflow occurs, there is still a problem because
almost no callers check its return value, and
Field_double::val_str() is not an exception (same for
Field_float::val_str()).
4. Even if all of the above wasn't the case, creating a
Field_double object with UINT_MAX32 as its field_length does
not make much sense either, since the .frm code limits it to
MAX_FIELD_CHARLENGTH (255) bytes. Such a beast can only be
created by create_tmp_field_from_item() from an Item with
REAL_RESULT as its result_type() and UINT_MAX32 as its
max_length.
5. For the bug's test case, the above condition (REAL_RESULT
Item with max_length = UINT_MAX32) was a result of
Item_func_if::fix_length_and_dec() "shortcutting" aggregation
of argument types when one of the arguments was a constant
NULL. In this case, the attributes of the aggregated type were
simply copied from the other, non-NULL argument, but max_length
was still calculated as per the general, non-shortcut case, by
choosing the greatest of the arguments' max_length values, which is
obviously not correct.
The patch addresses all of the above problems, even though
fixing the assertion failure for the particular test case would
require only a subset of the above problems to be solved.
client/sql_string.cc:
Return an error in case of uint32 overflow in alignment.
Also assert there was no overflow to help find such conditions
in debug builds, since almost no callers check the return value
of String::alloc().
mysql-test/r/func_if.result:
Add a test case for bug #55077.
mysql-test/t/func_if.test:
Add a test case for bug #55077.
sql/field.cc:
- Assert we don't operate with fields wider than 255
(MAX_FIELD_CHARLENGTH) bytes in both Field_float and
Field_double.
- Don't take field_length into account when calculating the
output buffer length.
- Check the return value of String::alloc()
sql/item_cmpfunc.cc:
When shortcutting type aggregation, don't take the NULL
argument's max_length into account.
sql/sql_string.cc:
Return an error in case of uint32 overflow in alignment.
Also assert there was no overflow to help find such conditions
in debug builds, since almost no callers check the return value
of String::alloc().