MySQL 5.1 server
The server used to clip overly long user names. This was presumably lost
when the code was made UTF8-clean.
Now we emulate the old behaviour for backward compatibility, but in a
UTF8-correct way.
mysql-test/r/connect.result:
Show that user-names that are too long get clipped now.
mysql-test/t/connect.test:
Show that user-names that are too long get clipped now.
sql/sql_connect.cc:
Clip user-name to 16 characters (not bytes).
strings/CHARSET_INFO.txt:
Clarify in docs.
ORDER BY computed col
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.
However, there may be a separate ORDER BY clause that mandates reading the
whole 'sort table' anyway, yet the previous estimate was left untouched.
Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
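For illustration only, a hedged sketch (made-up table and column names, not
from the original test case) of the kind of query where the estimate is now
omitted:
  CREATE TABLE t1 (a INT, b INT, KEY (a));
  # GROUP BY can be satisfied by the index on 'a', but the ORDER BY on the
  # computed column forces a temporary table, so no meaningful row
  # estimate can be shown:
  EXPLAIN SELECT a, COUNT(*) AS cnt FROM t1 GROUP BY a ORDER BY cnt;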
Version "5.1.42 SUSE MySQL RPM"
When a query used a DATE or DATETIME value formatted differently from
"yyyy-mm-dd HH:MM:SS", a greater-or-equal '>=' condition matched only
strictly greater values in an indexed TIMESTAMP column.
The problem was introduced by the fix for bug #46362 and partially
solved (for DATE and DATETIME columns only) by the fix for bug #47925.
The stored_field_cmp_to_item function has been modified
to take into account TIMESTAMP columns like we do for
DATE and DATETIME columns.
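A minimal sketch of the affected pattern (hypothetical table, not the
actual test case for bug #55779):
  CREATE TABLE t1 (ts TIMESTAMP, KEY (ts));
  INSERT INTO t1 VALUES ('2010-10-01 00:00:00');
  # The literal uses a format other than 'yyyy-mm-dd HH:MM:SS'; the row
  # equal to the boundary used to be skipped on the indexed column:
  SELECT * FROM t1 WHERE ts >= '2010-10-01';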
mysql-test/r/type_timestamp.result:
Test case for bug #55779.
mysql-test/t/type_timestamp.test:
Test case for bug #55779.
sql/item.cc:
Bug #55779: select does not work properly in mysql server
Version "5.1.42 SUSE MySQL RPM"
The stored_field_cmp_to_item function has been modified
to take into account TIMESTAMP columns like we do for
DATE and DATETIME.
result
Row subqueries producing no rows were not handled as UNKNOWN
values in row comparison expressions.
That was a result of the following two problems:
1. Item_singlerow_subselect did not mark the resulting row
value as NULL/UNKNOWN when no rows were produced.
2. Arg_comparator::compare_row() did not take into account that
a whole argument may be NULL rather than just individual scalar
values.
Before bug#34384 was fixed, the above problems were hidden
because an uninitialized (i.e. without any stored value) cached
object would appear as NULL for scalar values in a row subquery
returning an empty result. After the fix
Arg_comparator::compare_row() would try to evaluate
uninitialized cached objects.
Fixed by addressing both of the aforementioned problems.
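A minimal illustration (hypothetical table) of the expected semantics:
  CREATE TABLE t1 (a INT, b INT);
  # With an empty subquery result the comparison must evaluate to
  # NULL (UNKNOWN), not to FALSE:
  SELECT ROW(1, 2) = (SELECT a, b FROM t1 WHERE 1 = 0);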
mysql-test/r/row.result:
Added a test case for bug #54190.
mysql-test/r/subselect.result:
Updated the result for a test relying on wrong behavior.
mysql-test/t/row.test:
Added a test case for bug #54190.
sql/item_cmpfunc.cc:
If either of the argument rows is NULL, return NULL as the
result of comparison.
sql/item_subselect.cc:
Adjust null_value for Item_singlerow_subselect depending on
whether a row has been produced by the row subquery.
The EXISTS transformation has additional switches to catch the known corner
cases that appear when transforming an IN predicate into EXISTS. Guarded
conditions are used which are deactivated when a NULL value is seen in the
outer expression's row. When the inner query block supplies NULL values,
however, they are filtered out because no distinction is made between the
guarded conditions: the guarded NOT x IS NULL conditions in the HAVING
clause that filter out NULL values cannot be deactivated in isolation from
the conditions that match values from the outer expression, or NULLs.
The above problem is handled by making the guarded conditions remember
whether they have rejected a NULL value or not, and index access methods
take this into account as well.
The bug consisted of
1) Not resetting the property for every nested-loop iteration over the inner
query's result.
2) Not propagating the NULL result properly from the inner query to the IN
optimizer.
3) A hack that may or may not have been needed at some point. According to a
comment, it was aimed at fixing #2 by returning NULL when FALSE was actually
the result. This caused failures when #2 was properly fixed. The hack is
now removed.
The fix resolves all three points.
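For illustration, a hedged sketch (made-up tables) of the NULL semantics
the guarded conditions must preserve:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (1), (NULL);
  INSERT INTO t2 VALUES (2), (NULL);
  # 1 IN (2, NULL) and NULL IN (...) must both evaluate to NULL (UNKNOWN),
  # even after the IN predicate has been rewritten to EXISTS:
  SELECT a, a IN (SELECT a FROM t2) FROM t1;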
multi-table UPDATE IGNORE.
The problem was that if there was an active SELECT statement during
trigger execution, an error raised during the execution could cause a
crash. The fix is to temporarily reset LEX::current_select before trigger
execution and restore it afterwards. This way, errors raised during the
trigger execution are processed as if there were no active SELECT.
mysql-test/r/trigger_notembedded.result:
added test case result for bug #55421.
mysql-test/t/trigger_notembedded.test:
added test case for bug #55421.
sql/sql_trigger.cc:
Reset thd->lex->current_select before starting trigger execution
and restore its original value after execution is finished.
This is necessary in order to set the error status in the
diagnostics area in case of trigger execution failure.
inited==INDEX
When an error occurred while sending the data in a temporary table, no
cleanup was performed. This caused a failed assertion in the case when
different access methods were used for populating the table vs. retrieving
the data from the table, if IGNORE was specified and sql_safe_updates = 0.
In this case execution continues, but the handler expects to continue with
the access method used for row retrieval.
Fixed by doing the cleanup even if errors occur.
Bug#46754: 'rows' field doesn't reflect partition pruning
The 'rows' field in the EXPLAIN result was evaluated to the number of
rows at the time the table was opened (not taken from the table cache),
and only the partitions left after pruning were updated with their
correct number of rows.
The evaluation of the 'rows' field used handler::records(), which is a
potentially expensive call and ignores partition pruning.
The fix is to use the handler's stats.records, after updating it
with ::info(HA_STATUS_VARIABLE), instead.
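A minimal sketch (hypothetical table, not the bug's test case) of the
behaviour being checked:
  CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 4;
  INSERT INTO t1 VALUES (0), (1), (2), (3);
  # The 'rows' column should count only the partitions left after pruning:
  EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 1;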
mysql-test/r/partition_pruning.result:
updated result
mysql-test/t/partition_pruning.test:
Added test.
sql/sql_select.cc:
Use ::info + stats.records instead of ::records().
"Access compatibility" syntax
The "wild" "DELETE FROM table_name.* ... USING ..." syntax
for multi-table DELETE statements is documented, but it was
lost in the fix for bug #30234.
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
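A short example of the restored syntax (table names are made up):
  CREATE TABLE t1 (id INT);
  CREATE TABLE t2 (id INT);
  # the "wild" .* form after FROM, combined with USING:
  DELETE FROM t1.*, t2.* USING t1 JOIN t2 ON t1.id = t2.id;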
mysql-test/r/delete.result:
Test case for bug #53034.
mysql-test/t/delete.test:
Test case for bug #53034.
sql/sql_yacc.yy:
Bug #53034: Multiple-table DELETE statements not accepting
"Access compatibility" syntax
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
Note: simply extending table_ident with opt_wild in
the table_alias_ref rule is not acceptable, because
a) it adds one more conflict and b) that conflict is
resolved in an inappropriate way.
Check the number of linestrings in the incoming polygon data (WKB) and
the number of points in the incoming linestring WKB.
mysql-test/r/gis.result:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- test result.
mysql-test/t/gis.test:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- test case.
sql/spatial.cc:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- when creating a polygon from WKB, check the number of linestrings,
- when creating a linestring from WKB, check the number of points.
== MYSQL_TYPE_LONGLONG
A MIN/MAX() function with a subquery as its argument could lead
to a debug assertion on debug builds or wrong data on release
ones.
The problem was a combination of the following factors:
- Item_sum_hybrid::fix_fields() might use the argument
(args[0]) to calculate 'hybrid_field_type' which was later used
to decide how the data should be sent to the client.
- Item_sum::make_field() might use the argument again to
calculate the field's type when sending result set metadata to
the client.
- The argument could be changed in between these two calls via
Item::set_arg() leading to inconsistent metadata being
reported.
Here is what was happening for the bug's test case:
1. Item_sum_hybrid::fix_fields() calculates hybrid_field_type
as MYSQL_TYPE_LONGLONG based on args[0] which is an
Item::SUBSELECT_ITEM at that time.
2. A temporary table is created to execute the
query. create_tmp_field_from_item() creates a Field_long object
according to the subselect's max_length.
3. The subselect item in Item_sum_hybrid is replaced by the
Item_field object referencing the newly created Field_long.
4. Item_sum::make_field() rightfully returns the
MYSQL_TYPE_LONG type when calculating the result set metadata.
5. When sending the actual data, Item::send() relies on the
virtual field_type() function which in our case returns
previously calculated hybrid_field_type == MYSQL_TYPE_LONGLONG.
It looks like the only solution is to never refer to the
argument's metadata after the result metadata has been
calculated in fix_fields(), since the argument itself may be
different by then. In this sense, Item_sum::make_field() should
never be used, because it may rely on the argument's metadata
and is only called after fix_fields(). The "default"
implementation in Item::make_field() should be used instead as
it relies only on field_type(), but not on the argument's type.
Fixed by removing Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
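For illustration only, a hedged sketch of the query shape involved (a
MIN/MAX function with a subquery argument; table and column names are
made up):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  # The metadata reported for the result column and the data actually
  # sent for it must stay consistent:
  SELECT MAX((SELECT a FROM t1 ORDER BY a DESC LIMIT 1)) FROM t1;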
mysql-test/r/func_group.result:
Added a test case for bug #54465.
mysql-test/t/func_group.test:
Added a test case for bug #54465.
sql/item_sum.cc:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
sql/item_sum.h:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
Queries involving predicates of the form "const NOT BETWEEN
not_indexed_column AND indexed_column" could return wrong data
due to incorrect handling by the range optimizer.
For "c NOT BETWEEN f1 AND f2" predicates, get_mm_tree()
produces a disjunction of the SEL_ARG trees for "f1 > c" and
"f2 < c". If one of the trees is empty (i.e. one of the
arguments is not sargable) the resulting tree should be empty
as well, since the whole expression in this case is not
sargable.
The above logic is implemented in get_mm_tree() as follows. The
initial state of the resulting tree is NULL (aka empty). We
then iterate through arguments and compute the corresponding
SEL_ARG tree (either "f1 > c" or "f2 < c"). If the resulting
tree is NULL, it is simply replaced by the generated
tree. Otherwise it is replaced by a disjunction of itself and
the generated tree. The obvious flaw in this implementation is
that if the first argument is not sargable and thus produces a
NULL tree, the resulting tree will simply be replaced by the
tree for the second argument. As a result, "c NOT BETWEEN f1
AND f2" will end up as just "f2 < c".
Fixed by adding a check so that when the first argument
produces an empty tree for the NOT BETWEEN case, the loop is
aborted with an empty tree as a result. The whole idea of using
a loop for 2 arguments does not make much sense, but it was
probably used to avoid code duplication for several BETWEEN
variants.
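A hedged sketch (made-up table) of the affected predicate form:
  CREATE TABLE t1 (f1 INT, f2 INT, KEY (f2));
  INSERT INTO t1 VALUES (10, 20), (1, 10);
  # f1 is not indexed, so the predicate as a whole is not sargable; if the
  # range tree were reduced to just "f2 < 5", the (10, 20) row, which does
  # satisfy the condition, could be missed:
  SELECT * FROM t1 WHERE 5 NOT BETWEEN f1 AND f2;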
variable assignments
The assert() that fires checks whether expressions that cannot be
NULL return NULL when evaluated.
The MAKEDATE() function can return NULL if the second argument is
less than or equal to 0, so its nullability depends not only on
the nullability of its arguments but also on their values.
Fixed by (over-optimistically) marking MAKEDATE() as nullable
regardless of the nullability of its arguments.
Test added.
Had to update one test result to reflect the metadata change.
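Two simple examples of the value-dependent nullability (behaviour of
MAKEDATE() as documented, not taken from the added test):
  SELECT MAKEDATE(2010, 0);   # NULL although neither argument is NULL
  SELECT MAKEDATE(2010, 59);  # '2010-02-28'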
feature
The test for bug #50939 was put in range.test, which is not a good fit
since it requires partitioning. Fixed by moving the test case to
partitioning_range.test.
The problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached
but ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) was not.
The solution is to also cache the latter call and forward it when moving
to a new partition to scan.
mysql-test/r/partition.result:
test result
mysql-test/t/partition.test:
New test from bug report.
sql/ha_partition.cc:
cache the HA_EXTRA_PREPARE_FOR_UPDATE just like HA_EXTRA_CACHE.
sql/ha_partition.h:
Added cache flag for HA_EXTRA_PREPARE_FOR_UPDATE
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE, which causes all errors to be
ignored. However, not all errors can be ignored. Some, such as
ER_FIELD_SPECIFIED_TWICE, will cause the INSERT to fail no matter what.
But since lex->no_error was set, the critical errors were ignored, the
INSERT failed, and neither OK nor the error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
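A hedged sketch (made-up table) of a statement hitting a non-ignorable
error under IGNORE:
  CREATE TABLE t1 (a INT, b INT);
  # ER_FIELD_SPECIFIED_TWICE cannot be downgraded by IGNORE; the statement
  # must still report the error rather than a silent OK:
  INSERT IGNORE INTO t1 (a, a) SELECT 1, 2 UNION SELECT 3, 4;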
file .\item_subselect.cc, line 836
IN quantified predicates are never executed directly. Instead, they are
wrapped inside nodes called IN Optimizers (Item_in_optimizer) which take
care of the execution. However, this is not done during query preparation.
Unfortunately, the LIKE predicate pre-evaluates constant right-hand side
arguments even during name resolution. Likely this is meant as an
optimization.
Fixed by not pre-evaluating LIKE arguments in view prepare mode.
if() treated any non-numeric string as false
Fixed to treat those as true instead
Added some test cases
Fixed missing $ in variable name in include/mix2.inc
Queries may crash, if
1) the GREATEST or the LEAST function has a mixed list of
numeric and LONGBLOB arguments and
2) the result of such a function goes through an intermediate
temporary table.
An Item that references a LONGBLOB field has max_length of
UINT_MAX32 == (2^32 - 1).
The current implementation of GREATEST/LEAST returns a REAL
result for a mixed list of numeric and string arguments (this
contradicts the current documentation; the contradiction was
discussed and it was decided to update the documentation).
The max_length of such a function call was calculated as a
maximum of argument max_length values (i.e. UINT_MAX32).
That max_length value of UINT_MAX32 was used as a length for
the intermediate temporary table Field_double to hold
GREATEST/LEAST function result.
The Field_double::val_str() method call on that field
allocates a String value.
Since a String allocation reserves an additional byte
for zero-termination, the size of the String buffer was
set to (UINT_MAX32 + 1), which caused an integer overflow:
in effect, an empty buffer of size 0 was allocated.
Initializing the "first" byte of that zero-size
buffer with '\0' caused a crash.
Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result like
we do for arithmetic operators.
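A hedged sketch (made-up table) of the kind of statement affected:
  CREATE TABLE t1 (b LONGBLOB);
  INSERT INTO t1 VALUES ('1');
  # The UNION pushes the REAL-typed GREATEST() result through an
  # intermediate temporary table:
  SELECT GREATEST(1, b) FROM t1 UNION SELECT 1.5;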
mysql-test/r/func_misc.result:
Test case for bug #54461.
mysql-test/t/func_misc.test:
Test case for bug #54461.
sql/item_func.cc:
Bug #54461: crash with longblob and union or update with subquery
The Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result like
we do it for arithmetical operators.
In order to be able to check if the set of the grouping fields in a
GROUP BY has changed (and thus to start a new group) the optimizer
caches the current values of these fields in a set of Cached_item
derived objects.
The Cached_item_str, used for caching varchar and TEXT columns,
is limited in length by the max_sort_length variable.
A String buffer to store the value is allocated in Cached_item_str's
constructor, with an alloced length of either the maximum length of the
string or the value of max_sort_length (whichever is smaller).
Then, at compare time the value of the string to compare to was
truncated to the alloced length of the string buffer inside
Cached_item_str.
This is all fine and valid, but only as long as no values near or equal
to the alloced length of this buffer are assigned. When such values are
assigned, the alloced length is rounded up, and as a result the next set
of data does not match the group buffer, leading to wrong results because
of the changed alloced_length.
Fixed by preserving the original maximum length in the
Cached_item_str's constructor and using this instead of the
alloced_length to limit the string to compare to.
Test case added.
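For illustration only, a sketch under assumed settings (made-up table;
not the added test case):
  CREATE TABLE t1 (a TEXT);
  SET SESSION max_sort_length = 1024;
  INSERT INTO t1 VALUES (REPEAT('x', 1024)), (REPEAT('x', 1024));
  # Both rows belong to a single group; values whose length is close to
  # max_sort_length exercised the rounded-up alloced_length:
  SELECT COUNT(*) FROM t1 GROUP BY a;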
Fix a regression (due to a typo) which caused spurious incorrect
argument errors for long data stream parameters if all forms of
logging were disabled (binary, general and slow logs).
mysql-test/t/mysql_client_test.test:
Save the status of the slow_log.
sql/sql_prepare.cc:
Add a missing logical NOT operator.
tests/mysql_client_test.c:
Disable all query logs when running the C tests. Fixes an omission:
the slow log should have been disabled too.
Run test case for Bug#54041 with query logs enabled and disabled.
The problem is that the fix for Bug#29784 was mistakenly
reverted when updating YaSSL to a newer version.
The solution is to re-apply the fix and this time
actually add a meaningful test case so that possible
regressions are caught.
extra/yassl/taocrypt/src/coding.cpp:
Fixed buffer allocation to compute the proper maximum
decoded size: (EncodedLength * 3/4) + 3
mysql-test/std_data/server8k-cert.pem:
Update certificate.
mysql-test/std_data/server8k-key.pem:
Update key.
mysql-test/t/ssl_8k_key-master.opt:
Start the server using the certificate and key that
triggers the problem.
prepared statements
Using GROUP_CONCAT() together with the WITH ROLLUP modifier
could crash the server.
The reason was a combination of several facts:
1. The Item_func_group_concat class stores pointers to ORDER
objects representing the columns in the ORDER BY clause of
GROUP_CONCAT().
2. find_order_in_list() called from
Item_func_group_concat::setup() modifies the ORDER objects so
that their 'item' member points to the arguments list
allocated in the Item_func_group_concat constructor.
3. In some cases (e.g. in JOIN::rollup_make_fields) a copy of
the original Item_func_group_concat object could be created by
using the Item_func_group_concat::Item_func_group_concat(THD
*thd, Item_func_group_concat *item) copy constructor. The
latter essentially creates a shallow copy of the source
object. Memory for the arguments array is allocated on
thd->mem_root, but the pointers for arguments and ORDER are
copied verbatim.
What happens in the test case is that when executing the query
for the first time, after a copy of the original
Item_func_group_concat object has been created by
JOIN::rollup_make_fields(), find_order_in_list() is called for
this new object. It then resolves ORDER BY by modifying the
ORDER objects so that they point to elements of the arguments
array which is local to the cloned object. When thd->mem_root
is freed upon completing the execution, pointers in the ORDER
objects become invalid. Those ORDER objects, however, are also
shared with the original Item_func_group_concat object which is
preserved between executions of a prepared statement. So the
first call to find_order_in_list() for the original object on
the second execution tries to dereference an invalid pointer.
The solution is to create copies of the ORDER objects when
copying Item_func_group_concat to not leave any stale pointers
in other instances with different lifecycles.
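A hedged sketch (made-up table) of the statement pattern involved:
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 3);
  PREPARE stmt FROM
    'SELECT a, GROUP_CONCAT(b ORDER BY b) FROM t1 GROUP BY a WITH ROLLUP';
  EXECUTE stmt;
  # The second execution re-resolves the ORDER BY of GROUP_CONCAT and
  # used to follow pointers freed after the first execution:
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;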
mysql-test/r/func_gconcat.result:
Test case for bug #54476.
mysql-test/t/func_gconcat.test:
Test case for bug #54476.
sql/item_sum.cc:
Copy the ORDER objects pointed to by the elements of the
'order' array in the copy constructor of
Item_func_group_concat.
sql/table.h:
Removed the unused 'item_copy' member of the ORDER class.
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
and the original engine is disabled
Missing check that engine is available.
mysql-test/include/not_blackhole.inc:
new include file
mysql-test/r/partition_not_blackhole.result:
new result file
mysql-test/std_data/parts/t1_blackhole.frm:
blackhole partitioned table .frm file:
create table `t1` (`id` int primary key) engine=blackhole
partition by key () partitions 1;
mysql-test/std_data/parts/t1_blackhole.par:
.par file matching blackhole partitioned .frm
mysql-test/t/partition_not_blackhole-master.opt:
new master-opt to disable blackhole if compiled in.
mysql-test/t/partition_not_blackhole.test:
New test
sql/ha_partition.cc:
Added check that engine is available.
Merge up to sunny.bains@oracle.com-20100625081841-ppulnkjk1qlazh82.
There are 8 more changesets in mysql-5.1-innodb, but PB2 shows a
failure for a test added in one of them. If that is resolved quickly
then those 8 more changesets will be merged too.
Fixed an incomplete historical ALTER TABLE MODIFY that trimmed the trigger
privilege bit from the mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports a clustered key.
However, used_key_parts was overwritten with the best_key_parts
value, which prevented correct selection of the index
access method. The fix is to preserve the used_key_parts
value for further use in QUICK_SELECT_DESC.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
preserve used_key_parts value for further use in QUICK_SELECT_DESC
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
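A hedged sketch of the deadlock scenario (made-up database and table
names, two client connections):
  # connection 1:
  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT);
  HANDLER db1.t1 OPEN;
  # connection 2, blocks on the open HANDLER while holding
  # LOCK_mysql_create_db:
  DROP DATABASE db1;
  # connection 1, used to deadlock since it also needs
  # LOCK_mysql_create_db:
  CREATE DATABASE db2;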