The end_update() function uses Item::save_org_in_field() to
save the original values of items into the group buffer. But for
Item_func_set_user_var this method was mapped to the save_in_field() method,
and the latter wrongly decides to use the result_field. This leads to
an incorrect value being saved in the grouping buffer and a wrong result for
the whole query.
A can_use_result_field argument of type bool is added to the
Item_func_set_user_var::save_in_field() function. If it is set to FALSE,
the item's result field won't be used; otherwise, whether the result field
will be used is detected as before (old behaviour).
Two wrapper functions for the function above are added to the
Item_func_set_user_var class:
save_in_field(Field *field, bool no_conversions) - calls the above
function with can_use_result_field set to TRUE;
save_org_in_field(Field *field) - same, but with can_use_result_field
set to FALSE.
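A minimal sketch of the dispatch, assuming a simplified signature; Field and
the class body below are stand-ins, not the server's definitions:

  #include <cassert>

  struct Field { int value; };            // stand-in for the server's Field

  // Simplified stand-in for Item_func_set_user_var; only the choice between
  // the item's own value and its result_field is modelled here.
  struct Item_func_set_user_var_sketch
  {
    int item_value = 42;                  // the value the item holds
    Field result_field{7};                // may contain a stale value

    int save_in_field(Field *field, bool no_conversions,
                      bool can_use_result_field)
    {
      (void) no_conversions;
      // With can_use_result_field == false the item's value is always used;
      // otherwise the old detection logic may pick result_field.
      field->value = can_use_result_field ? result_field.value : item_value;
      return 0;
    }

    // Wrapper for ordinary saving: keeps the old behaviour (TRUE).
    int save_in_field(Field *field, bool no_conversions)
    { return save_in_field(field, no_conversions, true); }

    // Wrapper for saving the original value: bypasses result_field (FALSE).
    void save_org_in_field(Field *field)
    { (void) save_in_field(field, true, false); }
  };

  int main()
  {
    Item_func_set_user_var_sketch item;
    Field buf{0};
    item.save_org_in_field(&buf);  // the group buffer gets the item's value
    assert(buf.value == 42);
    return 0;
  }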
ON conditions from the JOIN expression were ignored at the CHECK OPTION
check when updating a multi-table view with CHECK OPTION.
The st_table_list::prep_check_option function has been
modified to take ON conditions into account at the CHECK OPTION check.
It was also changed to build the check option condition only once
for any update used in PS/SP.
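A self-contained sketch of the idea, with hypothetical types in place of the
server's Item tree: the check option condition is built once as the
conjunction of the view's own condition and the JOIN's ON conditions, then
reused for every row:

  #include <cassert>
  #include <functional>
  #include <vector>

  using Cond = std::function<bool(int, int)>;

  static Cond build_check_option(Cond view_cond, std::vector<Cond> on_conds)
  {
    return [view_cond, on_conds](int a, int b)
    {
      if (!view_cond(a, b))
        return false;
      for (const Cond &on : on_conds)     // previously these were ignored
        if (!on(a, b))
          return false;
      return true;
    };
  }

  int main()
  {
    // Built once (as for a PS/SP) and reused for every updated row.
    Cond check = build_check_option(
        [](int a, int) { return a > 0; },              // the view's condition
        {Cond([](int a, int b) { return a == b; })});  // JOIN ... ON condition

    assert(check(1, 1));     // passes both conditions
    assert(!check(1, 2));    // fails the ON condition, now enforced
    return 0;
  }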
When the same VIEW was created on the master side twice,
a malformed query string (truncated after the word 'AS')
was forwarded to the client side, so the error messages on the
master and the client were different, and replication was
broken.
The mysql_register_view function call failed
too early: the fields of the `view' output argument of this
function were not yet filled with the correct data required
for query replication.
The mysql_register_view function also copied pointers to
local buffers into memory allocated by the caller.
problem#1: udf_example.so does not get built on AIX
solution#1: build it yourself using
cd sql; gcc -g -I ../include/ -I /usr/include/ -lpthread \
-shared -o udf_example.so udf_example.c; mv udf_example.so \
.libs/
problem#2 (the bug): udf_example fails because it does not
recognize the variable LD_LIBRARY_PATH when doing dlopen();
it looks at LIBPATH instead
solution#2: add the library path to LIBPATH
problem#3: udf_example returns the wrong result length since
it relies on strmov to return a pointer to the end of the
string it copies. On AIX builds, where m_string.h is not
included (m_string.h defines a macro expanding strmov to stpcpy),
there is a macro expanding strmov to strcpy, which returns a
pointer to the first character.
solution#3: define strmov as stpcpy (see the sketch after this
list)
problem#4: #2 applies on hp-ux as well, but this platform
looks at SHLIB_PATH
solution#4: add the library path to SHLIB_PATH
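A minimal, self-contained demonstration of problem#3: with a strcpy-based
strmov the pointer arithmetic used to compute the result length yields 0
instead of the string length (the stpcpy-style helper is written out here
rather than taken from m_string.h):

  #include <cstdio>
  #include <cstring>

  // stpcpy-style strmov written out so the example is self-contained: copy
  // src into dst and return a pointer to the copy's terminating NUL.
  static char *strmov_as_stpcpy(char *dst, const char *src)
  {
    while ((*dst = *src++) != '\0')
      dst++;
    return dst;                        // points at the trailing '\0'
  }

  int main()
  {
    char buf[32];
    const char *src = "hello";

    // Correct: end - buf == 5, the length of the copied string.
    char *end = strmov_as_stpcpy(buf, src);
    printf("stpcpy-style length: %ld\n", (long) (end - buf));     // 5

    // Broken variant from problem#3: strcpy() returns dst itself, so the
    // same pointer arithmetic yields 0 -- the wrong result length.
    char *start = strcpy(buf, src);
    printf("strcpy-style length:  %ld\n", (long) (start - buf));  // 0
    return 0;
  }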
Problem:
HASH indexes on VARCHAR columns with binary collations did not ignore trailing spaces in strings before comparison. This could result in duplicate records being successfully inserted into a MEMORY table with unique key constraints.
As a direct consequence of the above, the internal MEMORY tables used for GROUP BY calculation in the testcases for bug #27643 contained duplicate rows, which resulted in duplicate key errors when converting those temporary tables to MyISAM. Additionally, that error was incorrectly converted to the 'table is full' error.
Solution:
- ignore trailing spaces in VARCHAR fields with binary collations when calculating hashes.
- return a proper error from create_myisam_from_heap() when conversion fails.
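A hedged sketch of the first part of the fix, using a placeholder hash
function (djb2) rather than the server's own: trailing spaces are excluded
from the hashed length so that space-padded duplicates collide and the
unique check can reject them:

  #include <cassert>
  #include <cstddef>

  static unsigned long hash_varchar_binary(const unsigned char *key,
                                           size_t len)
  {
    while (len > 0 && key[len - 1] == ' ')  // ignore trailing spaces
      len--;
    unsigned long h = 5381;                 // djb2, for illustration only
    for (size_t i = 0; i < len; i++)
      h = h * 33 + key[i];
    return h;
  }

  int main()
  {
    const unsigned char a[]        = "a";
    const unsigned char a_padded[] = "a   ";
    // 'a' and 'a   ' now hash identically, so the duplicate is detectable.
    assert(hash_varchar_binary(a, 1) == hash_varchar_binary(a_padded, 4));
    return 0;
  }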
mysqld crashed when a long-running EXPLAIN query was killed from
another connection.
When the current thread caught a kill signal while executing the function
best_extension_by_limited_search, it just silently returned to
the calling function greedy_search without initializing the elements of
the join->best_positions array.
However, the greedy_search function ignored the thd->killed status
after calls to the best_extension_by_limited_search function, and
after several calls greedy_search used uninitialized
data from join->best_positions[idx] to search for a position in the
join->best_ref array.
That search failed, and greedy_search tried to call the swap_variables
function with a NULL argument, which caused the crash.
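A self-contained illustration of the bug pattern and the fix idea, with
simplified stand-ins for the server's internals: an interruptible search
step must report that it was killed, and the caller must check that status
before consuming the partially filled result:

  #include <atomic>
  #include <cassert>
  #include <vector>

  static std::atomic<bool> killed{false};  // set by KILL from another thread

  // The search step: on interruption it returns true and the caller must
  // not touch the partially filled best_positions.
  static bool limited_search(std::vector<int> &best_positions)
  {
    for (int i = 0; i < 10; i++)
    {
      if (killed.load())
        return true;                 // interrupted: best_positions incomplete
      best_positions.push_back(i);   // otherwise record the next position
    }
    return false;
  }

  // The fixed caller: propagate the interruption instead of silently
  // reading uninitialized positions (the behaviour that crashed).
  static bool greedy(std::vector<int> &best_positions)
  {
    if (limited_search(best_positions))
      return true;
    assert(best_positions.size() == 10); // safe to use only on success
    return false;
  }

  int main()
  {
    std::vector<int> positions;
    killed = true;                   // simulate the KILL signal
    assert(greedy(positions));       // the caller aborts instead of crashing
    return 0;
  }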
ENUM fields internally store their values as integers and may use integer
values as indexes to their values. Invalid values are mapped to the zero value.
When storing an empty string, the ENUM field fails to find a matching value
and tries to convert the provided string to an integer. The conversion also
fails, and an error is returned even if thd->count_cuted_fields is set to
CHECK_FIELD_IGNORE. This makes the range optimizer wrongly decide that an
impossible range is present.
Now Field_enum::store() returns an error when storing an empty string only
if thd->count_cuted_fields isn't set to CHECK_FIELD_IGNORE.
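A hedged sketch of the fixed behaviour with a hypothetical helper (not the
server's code): the empty string still maps to the invalid index 0, but it
is reported as an error only when cut fields are not ignored:

  #include <cassert>
  #include <cstring>

  // A hypothetical ENUM with members {'a','b','c'} numbered from 1;
  // index 0 means "invalid value".
  static const char *enum_values[] = {"a", "b", "c"};

  static int store_enum(const char *from, bool ignore_cut_fields,
                        unsigned *index)
  {
    *index = 0;                              // invalid values map to 0
    for (unsigned i = 0; i < 3; i++)
      if (strcmp(from, enum_values[i]) == 0)
      {
        *index = i + 1;                      // members are numbered from 1
        return 0;
      }
    // No member matched (e.g. the empty string): this is an error only
    // when cut fields are not ignored (CHECK_FIELD_IGNORE unset).
    return ignore_cut_fields ? 0 : 1;
  }

  int main()
  {
    unsigned idx;
    assert(store_enum("b", false, &idx) == 0 && idx == 2);
    assert(store_enum("", true, &idx) == 0 && idx == 0);  // optimizer OK
    assert(store_enum("", false, &idx) == 1);             // strict mode errors
    return 0;
  }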
The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of the tables used in the CHECK OPTION expression were
added to the temporary table rows. The multi_update::do_updates()
method was modified to restore the necessary record buffers
before evaluating the CHECK OPTION condition.
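A self-contained illustration with hypothetical names: the temporary table
row carries the merged table's rowid, and the record buffer is re-read from
that rowid (as a handler's rnd_pos() would do) before the CHECK OPTION
predicate runs:

  #include <cassert>
  #include <map>

  struct RecordBuffer { int value; };

  // Stand-in for a merged table addressed by rowid.
  static std::map<long, int> merged_table = {{10, 1}, {11, 5}};

  static void restore_by_rowid(RecordBuffer &buf, long rowid)
  {
    buf.value = merged_table.at(rowid);    // re-read the current row
  }

  static bool check_option(const RecordBuffer &buf)
  {
    return buf.value < 3;                  // the WITH CHECK OPTION condition
  }

  int main()
  {
    RecordBuffer buf = {-1};               // expired, arbitrary content
    long saved_rowid = 10;                 // stored in the temp table row
    restore_by_rowid(buf, saved_rowid);    // the step added by the fix
    assert(check_option(buf));             // the result is now deterministic
    return 0;
  }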
Integer values with 10 digits may or may not fit into an int column
(e.g. 2147483647 vs. 6147483647).
Thus, when creating a temporary table column for such an int, we must
use bigint instead.
Fixed to use bigint.
Also substituted a "magic number" with a named constant.
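The boundary is easy to verify:

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    // Both literals have 10 digits, but only the first fits in a signed
    // 32-bit int column: INT32_MAX is 2147483647.
    long long fits     = 2147483647LL;   // == INT32_MAX, fits in int
    long long overflow = 6147483647LL;   // > INT32_MAX, needs bigint
    printf("%d %d\n", fits <= INT32_MAX, overflow <= INT32_MAX);   // 1 0
    return 0;
  }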
An Item_date_add_interval item in the select list could fail the result
field type assertion.
The bug was introduced by the patch for bug #16377.
The "+ INTERVAL" (Item_date_add_interval) function detects its result type
by the type of its first argument. But in some cases it returns STRING
as the result type. This happens when, for example, the first argument is a
DATE represented as string. All this makes the get_datetime_value()
function misinterpret such result and return wrong DATE/DATETIME value.
To avoid such cases in the fix for #16377 the code that detects correct result
field type on the first execution was added to the
Item_date_add_interval::get_date() function. Due to this the result
field type of the Item_date_add_interval item stored by the send_fields()
function differs from item's result field type at the moment when
the item is actually sent. It causes an assertion failure.
Now the get_datetime_value() detects that the DATE value is returned by
some item not only by checking the result field type but also by comparing
the returned value with the 100000000L constant - any DATE value should be
less than this value.
Removed result field type adjusting code from the
Item_date_add_interval::get_date() function.
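The constant works because temporal values are packed into numbers, a DATE
as YYYYMMDD and a DATETIME as YYYYMMDDHHMMSS, so every packed DATE (at most
99991231) stays below 100000000. A minimal check of the rule:

  #include <cassert>

  static bool is_date_value(long long packed)
  {
    return packed < 100000000LL;
  }

  int main()
  {
    assert(is_date_value(20070612LL));          // DATE 2007-06-12
    assert(!is_date_value(20070612123000LL));   // DATETIME 2007-06-12 12:30:00
    return 0;
  }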
Refining the tests since pb revealed the older version's fragility: the error from SF() due to being killed
may differ between environments.
Using DBUG_ASSERT instead of assert.
No longer showing SP names.
SHOW CREATE VIEW uses Item::print() methods to reconstruct the
statement text from the parse tree.
The print() method for stored procedure calls needs to allocate
space to print the function's quoted name.
It was calculating the length of the needed buffer incorrectly
(the buffer was too short).
Fixed to reflect the actual space needed.
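A hedged sketch of a worst-case computation for such a buffer (the formula
below is illustrative, not the server's exact one): every character of the
name may be a quote that gets doubled when printed, plus the enclosing
quotes, the dot separator, and the terminating NUL:

  #include <cassert>
  #include <cstddef>
  #include <string>

  static size_t quoted_name_space(const std::string &db,
                                  const std::string &name)
  {
    return 2 * db.size() + 2 * name.size()  // doubled quote characters
           + 4                              // two pairs of enclosing quotes
           + 1                              // the '.' between db and name
           + 1;                             // the trailing '\0'
  }

  int main()
  {
    // "`test`.`f`" plus NUL needs 11 bytes; the estimate must cover it.
    assert(quoted_name_space("test", "f") >= sizeof("`test`.`f`"));
    return 0;
  }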
The reason for the bug was that a query could not be replayed on the slave because its event
was recorded with the killed error. Due to the specifics of handling INSERT, whose per-row loop
cannot be broken by a kill, the query on a transactional table should not have appeared in the binlog unless
there was a call to a stored routine that got interrupted by the kill (and then an error must be
returned out of the loop).
The offered solution adds the following rule for binlogging of INSERT, which accounts for the above
specifics:
For INSERT on a transactional table, if no error was set, the lone raised KILLED flag
is harmless and is ignored by masking it out at binlog event creation time.
For both table types, the combination of a raised error and the KILLED flag indicates that there
was potentially partial execution on the master and consistency is in question.
In that case the code continues to binlog the event with the appropriate killed error.
The fix relies on the specified behaviour of stored routines, which must propagate the error
to the top-level query handling if the thd->killed flag was raised during the routine's execution.
The patch adds an argument with a default killed-status-unset value to Query_log_event::Query_log_event.
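A self-contained sketch of the rule; the helper name, parameters, and the
error value are hypothetical:

  #include <cassert>

  // Decide which error code goes into the Query_log_event for an INSERT.
  static int binlog_event_error(bool transactional_table, int query_error,
                                bool killed_flag)
  {
    // A lone KILLED flag with no error set is harmless for an INSERT on a
    // transactional table and is masked out when the event is created.
    if (killed_flag && query_error == 0 && transactional_table)
      return 0;
    // Error combined with KILLED: potentially partial execution on the
    // master, so the event keeps the error and the slave will stop on it.
    return query_error;
  }

  int main()
  {
    int const ER_QUERY_INTERRUPTED = 1317;           // illustrative value
    assert(binlog_event_error(true, 0, true) == 0);  // mask lone KILLED flag
    assert(binlog_event_error(true, ER_QUERY_INTERRUPTED, true)
           == ER_QUERY_INTERRUPTED);                 // keep the real error
    return 0;
  }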
When storing a large number into a FLOAT or DOUBLE field with a fixed length, it could be incorrectly truncated if the field's length was greater than 31.
This patch also does some code cleanup so that the code common to Field_float::store() and Field_double::store() can be reused.