represented by an expression of type UNSIGNED INT and this
expression evaluated to 0, then the function erroneously returned
the value of the first argument instead of an empty string.
This problem was introduced by the patch for bug 10963.
It has been resolved by a proper modification of the code of
Item_func_substr::val_str.
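A minimal illustration of the fixed behavior (the exact queries are an
assumption, not the original test case):

  -- Before the fix the second query returned 'foobar' instead of ''.
  SELECT SUBSTRING('foobar', 1, 0);                    -- ''
  SELECT SUBSTRING('foobar', 1, CAST(0 AS UNSIGNED));  -- '' after the fix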
A DECIMAL column was used instead of BIGINT for the minimal possible
BIGINT value (-9223372036854775808).
Item_func_neg::fix_length_and_dec has been adjusted to
inherit the type of the argument in the case when it is an
Item_int object whose value is equal to LONGLONG_MIN.
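For illustration (the table name is hypothetical), the resulting column
type can be checked like this:

  CREATE TABLE t1 SELECT -9223372036854775808 AS c;
  -- Before the fix the column c was created as DECIMAL;
  -- now it inherits the argument's type and is created as BIGINT.
  SHOW COLUMNS FROM t1;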
of its arguments was evaluated to NULL, while the predicate
LOCATE(str,NULL) IS NULL erroneously evaluated to FALSE.
This happened because the Item_func_locate::fix_length_and_dec
method mistakenly set the value of the maybe_null flag for
the function item to 0. As a consequence, the function
was treated as one that could never return NULL.
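A sketch of the corrected behavior (argument values are arbitrary):

  SELECT LOCATE('needle', NULL);          -- NULL, as before
  SELECT LOCATE('needle', NULL) IS NULL;  -- now 1 (TRUE); was erroneously 0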
was erroneously converted to double, while the result of
ROUND(<decimal expr>, <int literal>) was preserved as decimal.
As a result of such a conversion the value of ROUND(D,A) could
differ from the value of ROUND(D,val(A)) if D was a decimal expression.
Now the result of the ROUND function is never converted to
double if the first argument is decimal.
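A minimal sketch of the difference (table and column names are assumptions):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (2);
  -- Before the fix the first query returned a double while the second
  -- returned a decimal, so the two could print different values.
  SELECT ROUND(1.555, a) FROM t1;  -- second argument is an int column
  SELECT ROUND(1.555, 2);          -- second argument is an int literal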
- fixed wrong test case for bug 20903
- closed the dangling connections in trigger.test
- GET_LOCK() and RELEASE_LOCK() now produce a more detailed log
- fixed an omission in GET_LOCK(): assign the thread_id when
acquiring the lock.
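For reference, the locking functions mentioned above are used as follows
(the lock name and timeout are arbitrary):

  SELECT GET_LOCK('my_lock', 10);   -- 1 if the lock was acquired within 10 seconds
  SELECT RELEASE_LOCK('my_lock');   -- 1 if the lock was held and is now released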
Sometimes a parameter slot may not get a value because the protocol
data is plainly wrong.
Such cases should be detected and handled by returning an error.
Fixed by checking data stream constraints where possible (like maximum
length) and reacting to the case where a value cannot be constructed.
When an INSERT .. ON DUPLICATE KEY UPDATE statement has to update a
matched row but the new data is the same as that already in the record,
it returns as if no rows were inserted or updated. Nevertheless the row
is silently updated anyway. This leads to a situation where zero updated
rows are reported even though data has actually been changed.
Now the write_record function updates a row only if the new data differs
from that in the record.
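A minimal sketch of the fixed behavior (the schema is an assumption):

  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  INSERT INTO t1 VALUES (1, 10);
  -- The new data equals what is already stored, so after the fix the row
  -- is left untouched and zero affected rows are reported consistently.
  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE v = 10;
  SELECT ROW_COUNT();  -- 0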
On PPC/Debian Linux the default stack size for a thread is too big.
As we use default thread settings in mysqltest, thread creation
fails if we create lots of threads (as happens in flush.test).
So now the stack size is explicitly specified for mysqltest.
When constructing the path to the original .frm file, ALTER .. RENAME
was unnecessarily (and incorrectly) lowercasing the whole path when not
on a case-insensitive filesystem.
This path should not be modified because of lower_case_table_names
as it is already in the correct case according to that setting.
Fixed by removing the lowercasing.
Unfortunately, testing this would require running the tests on a
case-sensitive filesystem in a directory that has uppercase letters.
This cannot be guaranteed in all setups, so no test case was added.
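For reference, a statement of the affected kind (table names hypothetical):

  -- Before the fix, on a case-sensitive filesystem with uppercase letters
  -- in the data directory path, the path to the original .frm file could
  -- be wrongly lowercased while executing:
  ALTER TABLE t1 RENAME TO t2;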
In the case of a database-level grant the database name may be a
pattern; in the case of a table- or column-level grant it cannot be
a pattern.
We use 'dont_check_global_grants' as a flag to determine
whether it is a database-level grant command
(see the SQLCOM_GRANT case in the mysql_execute_command() function) and
set db_is_pattern according to the 'dont_check_global_grants' value.
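An illustration of the two grant levels (user and object names are
assumptions):

  -- Database-level grant: the name may contain the wildcards '%' and '_',
  -- so db_is_pattern is set.
  GRANT SELECT ON `db%`.* TO 'u'@'localhost';
  -- Table-level grant: the database name is taken literally.
  GRANT SELECT ON db1.t1 TO 'u'@'localhost';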
ORDER BY and LIMIT 1.
The bug was introduced by the patch for bug 21727. The patch
erroneously skipped initialization of the array of headers
for sorted records on all but the first evaluation of the subquery.
To fix the problem a new parameter has been added to the
make_char_array function, which performs the initialization.
Now this function is called for any invocation of the
filesort procedure, yet it allocates the buffer for sorted
records only if this parameter is NULL.
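A sketch of a query shape that exercised the bug (tables are
hypothetical): the subquery is re-evaluated for every outer row, so the
filesort procedure is invoked more than once.

  SELECT t1.a,
         (SELECT t2.b FROM t2
           WHERE t2.a = t1.a
           ORDER BY t2.b LIMIT 1) AS first_b
  FROM t1;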
This bug was introduced by the fix for bug#27300. In that fix a section
of code was added to the Item::tmp_table_field_from_field_type method.
This section was intended to create Field_geom fields for the
Item_geometry_func class and its descendants. In order to get the
geometry type of the current item it cast "this" to the
Item_geometry_func* type. But the Item::tmp_table_field_from_field_type
method is also used for the creation of fields for UNION, and in that
case the method is called for an object of the Item_type_holder class;
the cast to the Item_geometry_func* type then causes a server crash.
Now the Item::tmp_table_field_from_field_type method works correctly
when it's called for both the Item_type_holder and the
Item_geometry_func classes.
A new geometry_type variable is added to the Item_type_holder class.
A new method, get_geometry_type, is added to the Item_field
and Field classes. It returns the geometry type from the field for the
Item_field and Field_geom classes and fails an assertion for other
Field descendants.
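A minimal query shape that triggered the crash, since UNION fields are
created from Item_type_holder objects (the values are arbitrary):

  SELECT POINT(1, 1) UNION SELECT POINT(2, 2);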
a temporary table has grown out of heap memory reserved for it and
the remaining disk space is not big enough to store the table as
a MyISAM table.
The crash happens because the create_myisam_from_heap function
does not safely handle the mem_root structure associated
with the converted table when an error has occurred.
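A sketch of how the conversion path can be reached (the limits are
deliberately tiny and the table is hypothetical; real values are
workload-dependent):

  -- Force implicit in-memory temporary tables to overflow to disk.
  SET SESSION tmp_table_size = 16384;
  SET SESSION max_heap_table_size = 16384;
  -- A grouping query may create an internal HEAP temporary table; once it
  -- outgrows the limits, create_myisam_from_heap converts it to MyISAM.
  SELECT a, COUNT(*) FROM t1 GROUP BY a;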
wrong result for DML
When making key reference buffers over CHAR fields, whitespace (0x20)
must be used to fill the remaining space in the field's buffer.
This is what Field_string::store() does.
Fixed Field_string::get_key_image() to do the same.
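A sketch of a statement shape that depends on correctly padded key
buffers (the schema is an assumption):

  CREATE TABLE t1 (c CHAR(10), KEY (c));
  INSERT INTO t1 VALUES ('abc');
  -- The ref-access key buffer built for this lookup must be space-padded
  -- to the full CHAR(10) length, just as Field_string::store() pads values.
  UPDATE t1 SET c = 'abcd' WHERE c = 'abc';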
flag is set.
When the CLIENT_FOUND_ROWS flag is set the server should return the
number of found rows regardless of whether they were updated or not.
But this wasn't the case for the INSERT statement, which always returned
the number of rows that were actually changed, thus providing wrong
information to the user.
Now the select_insert::send_eof method and the mysql_insert function
send the number of touched rows if the CLIENT_FOUND_ROWS flag is set.
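For illustration, the difference shows up in the affected-rows count
reported to a client that connected with CLIENT_FOUND_ROWS (reusing the
hypothetical t1 table from the earlier sketch):

  INSERT INTO t1 VALUES (1, 10) ON DUPLICATE KEY UPDATE v = 10;
  -- Without CLIENT_FOUND_ROWS: 0 affected rows (nothing was changed).
  -- With CLIENT_FOUND_ROWS:    1 affected row  (a row was found/matched).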