Problem: SHOW SLAVE STATUS may return different Slave_IO_Running values when running some tests.
Fix: wait for a certain slave state when needed to make the tests more predictable.
Problem: the temporary buffer used for quoting and escaping
was initialized to character set utf8, and thus did not allow
storing data in other character sets.
Fix: change the character set of the buffer so that it can
store any arbitrary sequence of bytes.
of its arguments was evaluated to NULL, while the predicate
LOCATE(str,NULL) IS NULL was erroneously evaluated to FALSE.
This happened because the Item_func_locate::fix_length_and_dec
method by mistake set the value of the maybe_null flag for
the function item to 0. As a consequence, the function
was considered one that could never return NULL.
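For illustration, a minimal check of the kind this fix affects:

  SELECT LOCATE('needle', NULL) IS NULL;
  -- LOCATE() with a NULL argument is NULL, so this must return 1;
  -- with maybe_null wrongly set to 0 the predicate evaluated to 0.
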
was erroneously converted to double, while the result of
ROUND(<decimal expr>, <int literal>) was preserved as decimal.
As a result of such a conversion the value of ROUND(D,A) could
differ from the value of ROUND(D,val(A)) if D was a decimal expression.
Now the result of the ROUND function is never converted to
double if the first argument is decimal.
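A sketch of the difference, assuming a hypothetical table t1 with a
DECIMAL column d and an INT column a:

  CREATE TABLE t1 (d DECIMAL(10,2), a INT);
  INSERT INTO t1 VALUES (1.25, 1);
  SELECT ROUND(d, 1), ROUND(d, a) FROM t1;
  -- ROUND(d, a) used to be computed as a double and could differ
  -- from ROUND(d, 1); now both results stay decimal.
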
- fixed wrong test case for bug 20903
- closed the dangling connections in trigger.test
- GET_LOCK() and RELEASE_LOCK() now produce a more detailed log
- fixed an omission in GET_LOCK(): assign the thread_id when
acquiring the lock.
When INSERT .. ON DUPLICATE KEY UPDATE has to update a matched row but
the new data is the same as that already in the record, it returns as if
no rows were inserted or updated. Nevertheless the row is silently
rewritten. This leads to a situation where zero updated rows are reported
even though the row has actually been written.
Now the write_record function updates a row only if the new data differs from
that in the record.
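A minimal sketch of the reported situation (hypothetical table):

  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  INSERT INTO t1 VALUES (1, 10);
  INSERT INTO t1 VALUES (1, 10)
    ON DUPLICATE KEY UPDATE v = 10;
  -- reports 0 affected rows; with the fix the matched row is
  -- really left untouched instead of being silently rewritten.
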
Problem: we may get unexpected results comparing [u]longlong values as doubles.
Fix: adjust the test to use integer comparators.
Note: it's not a real fix; we have to implement some new comparators
to completely solve the original problem (see my comment in the bug report).
In case of a database-level grant the database name may be a pattern;
in case of a table- or column-level grant the database name cannot be a pattern.
We use 'dont_check_global_grants' as a flag to determine
whether it is a database-level grant command
(see the SQLCOM_GRANT case in the mysql_execute_command() function) and
set db_is_pattern according to the 'dont_check_global_grants' value.
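For illustration (hypothetical user and objects):

  GRANT SELECT ON `db%`.* TO 'u'@'localhost';  -- database level: 'db%' is a pattern
  GRANT SELECT ON db1.t1 TO 'u'@'localhost';   -- table level: 'db1' is literal
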
ORDER BY and LIMIT 1.
The bug was introduced by the patch for bug 21727. The patch
erroneously skipped initialization of the array of headers
for sorted records for non-first evaluations of the subquery.
To fix the problem, a new parameter has been added to the
make_char_array function, which performs the initialization.
Now this function is called for every invocation of the
filesort procedure, yet it allocates the buffer for sorted
records only if this parameter is NULL.
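A query shape of the affected kind, assuming hypothetical tables t1 and t2;
the correlated subquery runs filesort once per outer row:

  SELECT a,
         (SELECT b FROM t2 WHERE t2.a = t1.a ORDER BY t2.b LIMIT 1)
  FROM t1;
  -- the second and later evaluations used an uninitialized array of
  -- record headers before the fix.
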
This bug was introduced by the fix for bug#27300. In this fix a section
of code was added to the Item::tmp_table_field_from_field_type method.
This section was intended to create Field_geom fields for the Item_geometry_func
class and its descendants. In order to get the geometry type of the current
item it cast "this" to the Item_geometry_func* type. But the
Item::tmp_table_field_from_field_type method is also used for the creation of
fields for UNION, and in this case the method is called for an object of the
Item_type_holder class, so the cast to the Item_geometry_func* type causes
a server crash.
Now the Item::tmp_table_field_from_field_type method correctly works when it's
called for both the Item_type_holder and the Item_geometry_func classes.
The new geometry_type variable is added to the Item_type_holder class.
The new method get_geometry_type is added to the Item_field
and Field classes. It returns the geometry type from the field for the
Item_field and Field_geom classes and triggers an assertion failure for other
Field descendants.
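A minimal statement of the kind that exercises this path, since the UNION
result column is represented by an Item_type_holder:

  SELECT POINT(1, 1) UNION SELECT POINT(2, 2);
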
and is not described in the manual
- Adding missing initialization for utf8 collations
- Minor code clean-ups: renaming variables,
moving code into a new separate function.
- Adding a test to check that both ucs2 and utf8 user-defined
collations work (ucs2_test_ci and utf8_test_ci)
- Adding Vietnamese collation as a complex user defined
collation example.
Problem: "SELECT INTO OUTFILE" created incorrect dumps for BLOBs,
so "LOAD DATA" later incorrectly interpreted 0x5C as the second
byte of a multi-byte sequence, instead of escape character.
Fix: adding escaping of multi-byte heads.
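A round-trip sketch, with hypothetical table names and file path:

  SELECT b INTO OUTFILE '/tmp/t1.txt' FROM t1;   -- t1.b is a BLOB
  LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE t2;
  -- without escaping of multi-byte heads, a head byte followed by 0x5C
  -- made LOAD DATA read the 0x5C as character data, not as an escape.
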
a temporary table has grown out of heap memory reserved for it and
the remaining disk space is not big enough to store the table as
a MyISAM table.
The crash happens because the function create_myisam_from_heap
does not safely handle the mem_root structure associated
with the converted table when an error has occurred.
wrong result for DML
When making key reference buffers over CHAR fields, whitespace (0x20)
must be used to fill in the remaining space in the field's buffer.
This is what Field_string::store() does.
Fixed Field_string::get_key_image() to do the same.
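As a hedged illustration, assuming a hypothetical table t1, a ref lookup
over the CHAR key must build a space-padded search key:

  CREATE TABLE t1 (c CHAR(10), KEY (c));
  INSERT INTO t1 VALUES ('abc');
  SELECT * FROM t1 WHERE c = 'abc';
  -- the key image for 'abc' is now padded with 0x20 up to the field
  -- length, matching what Field_string::store() writes.
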
SHOW CREATE TABLE fails
Underlying table names that the merge engine fails to open were not
reported.
With this fix CHECK TABLE issued against a merge table reports all
underlying table names that it fails to open. Other statements
are unaffected; that is, underlying table names are not included
in the error message.
This fix doesn't solve SHOW CREATE TABLE issue.
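A sketch, with a hypothetical missing underlying table t_missing:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t_missing);
  CHECK TABLE m1;
  -- with the fix, the CHECK TABLE result names every underlying
  -- table that cannot be opened (here t_missing).
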
integer constants.
This bug was introduced by the fix for bug#16377. Before that fix the
Item_func_between::fix_length_and_dec method converted the second and third
arguments to the type of the first argument if they were constant and the first
argument was of the DATE/DATETIME type. That approach worked well for integer
constants but sometimes produced bad results for string constants. The fix for
bug#16377 wrongly removed that code altogether, and as a result the
comparison of a DATETIME field and an integer constant was carried out in a
wrong way and sometimes led to wrong result sets.
Now the Item_func_between::fix_length_and_dec method converts the second and
third arguments to the type of the first argument if they are constant, the
first argument is of the DATE/DATETIME type and the DATETIME comparator isn't
applicable.
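For illustration, assuming a hypothetical table t1 with an indexed
DATETIME column:

  CREATE TABLE t1 (dt DATETIME, KEY (dt));
  SELECT * FROM t1 WHERE dt BETWEEN 20070101 AND 20070131;
  -- the integer bounds are again converted to the type of dt when the
  -- DATETIME comparator is not applicable, restoring correct results.
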
In acl_getroot_no_password(), use a separate variable for traversing the acl_users list so that the last entry is not used when no matching entries are found.
The value of "low-priority-updates" option and the LOW PRIORITY
prefix was taken into account at parse time.
This caused triggers (among others) to ignore this flag (if
supplied for the DML statement).
Moved reading of the LOW_PRIORITY flag at run time.
Fixed an incosistency when handling
SET GLOBAL LOW_PRIORITY_UPDATES : now it is in effect for
delayed INSERTs.
Tested by checking the effect of LOW_PRIORITY flag via a
trigger.
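For illustration (hypothetical table t1):

  SET GLOBAL low_priority_updates = 1;
  INSERT INTO t1 VALUES (1);
  -- the flag is now read at run time, so it also takes effect for
  -- delayed INSERTs and for DML fired from trigger bodies.
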
This is an additional fix.
Item::val_xxx methods are supposed to use the original data source and
Item::val_xxx_result methods to use the item's result field. But for the
Item_func_set_user_var class the val_xxx_result methods were mapped to the
val_xxx methods. This led, in particular, to producing bad sort keys and thus
a wrong order of the result set of queries with GROUP BY/ORDER BY clauses.
The set of val_xxx_result methods is added to the Item_func_set_user_var
class. It is the same as the val_xxx set of methods but uses the result_field
to return a value.
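A query shape of the affected kind, assuming a hypothetical table t1:

  SELECT @s := a FROM t1 ORDER BY @s := a;
  -- the sort keys are now produced by the new val_xxx_result methods,
  -- which read result_field, so the rows come back in the right order.
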
using a derived table over a grouping subselect.
This crash happens only when materialization of the derived tables
requires creation of an auxiliary temporary table, for example when
a grouping operation is carried out with the use of a temporary table.
The crash happened because EXPLAIN EXTENDED, when printing the query
expression, made an attempt to use objects created in the mem_root
of the temporary table, which had already been freed by the time
printing was called.
This bug appeared after the method Item_field::print() had been
introduced.
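A statement shape of the crashing kind, assuming a hypothetical table t1:

  EXPLAIN EXTENDED
  SELECT * FROM (SELECT a, MIN(b) AS mb FROM t1 GROUP BY a) AS d;
  -- the grouped derived table is materialized via an auxiliary
  -- temporary table whose mem_root used to be freed before printing.
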
Problem: we may create a deadlock committing changes in mysql_alter_table() when
LOCK_open is set. Moreover, "in some variants of the ALTER TABLE commit
happens earlier, outside of LOCK_open, in other later - inside. It's no good, a storage
engine code that is called in between could expect a consistency - either there is a
transaction or there is not".
Fix: move the commit to happen earlier and outside of the LOCK_open.
The implementation of mysql_multi_update did not call the multi_update::send_error method in some cases
(see the test reported on the bug page and the test cases in the changeset).
Fixed by deploying the method; ::send_error() is refined to contain binlogging code that works whenever
a non-transactional table has been modified.
The thd->no_trans_update.stmt flag is set to TRUE to ease testing; this is also the beginning of the
related bug#27417 fix (it addresses a part of those issues).
Eliminated two minor issues (small bugs) in the multi_update methods.
This patch for multi-update also addresses a part of the issues reported in bug#13270 and bug#23333.
The end_update() function uses the Item::save_org_in_field() function to
save original values of items into the group buffer. But for the
Item_func_set_user_var class this method was mapped to the save_in_field method.
The latter function wrongly decides to use the result_field. This leads to
saving an incorrect value in the grouping buffer and a wrong result for the
whole query.
The can_use_result_field argument of bool type is added to the
Item_func_set_user_var::save_in_field() function. If it is set to FALSE
then the item's result field won't be used. Otherwise it will be detected
whether the result field will be used (old behaviour).
Two wrapper functions for the function above are added to the
Item_func_set_user_var class:
save_in_field(Field *field, bool no_conversions) calls the above
function with can_use_result_field set to TRUE;
save_org_in_field(Field *field) does the same, but with can_use_result_field
set to FALSE.
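A sketch of the affected query shape (hypothetical table t1):

  SELECT @g := a, COUNT(*) FROM t1 GROUP BY @g := a;
  -- end_update() now calls save_org_in_field(), which passes
  -- can_use_result_field = FALSE, so the original value is grouped on.
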
ON conditions from JOIN expression were ignored at CHECK OPTION
check when updating a multi-table view with CHECK OPTION.
The st_table_list::prep_check_option function has been
modified to take ON conditions into account at the CHECK OPTION check.
It was also changed to build the check option condition only once
for any update used in a PS/SP.
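A sketch with hypothetical tables; the ON condition now takes part in the
CHECK OPTION test:

  CREATE VIEW v1 AS
    SELECT t1.a FROM t1 JOIN t2 ON t1.a = t2.a AND t2.ok = 1
    WITH CHECK OPTION;
  UPDATE v1 SET a = 5;
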
Setting a key_cache_block_size which is not a power of 2
could corrupt MyISAM tables.
A couple of computations in the key cache code use bit
operations which only work if key_cache_block_size
is a power of 2.
Replaced the bit operations with arithmetic operations
to make the key cache able to handle block sizes that are
not a power of 2.
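For illustration:

  SET GLOBAL key_cache_block_size = 1536;  -- 3 * 512, not a power of 2
  -- such a size could corrupt MyISAM tables while the key cache still
  -- computed offsets with power-of-2 bit operations.
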
When the same VIEW was created at the master side twice,
a malformed (truncated after the word 'AS') query string
was forwarded to the client side, so the error messages on the
master and the client were different, and replication was
broken.
The mysql_register_view function call failed
too early: the fields of the `view' output argument of this
function were not yet filled with the correct data required
for query replication.
The mysql_register_view function also copied pointers to
local buffers into memory allocated by the caller.
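A minimal sketch of the failing scenario on the master:

  CREATE VIEW v1 AS SELECT 1;
  CREATE VIEW v1 AS SELECT 1;  -- fails: view already exists
  -- the second statement used to forward a query string truncated after
  -- 'AS', so the two sides produced different errors and replication broke.
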
problem #1: udf_example.so does not get built on AIX
solution #1: build it yourself using
cd sql; gcc -g -I ../include/ -I /usr/include/ -lpthread \
-shared -o udf_example.so udf_example.c; mv udf_example.so \
.libs/
problem #2 (the bug): udf_example fails because it does not
recognize the variable LD_LIBRARY_PATH when doing dlopen();
it looks at LIBPATH
solution #2: add the library path to LIBPATH
problem #3: udf_example returns the wrong result length since
it relies on strmov to return a pointer to the end of the
string that it copies. On AIX builds, where m_string.h is not
included (m_string.h defines a macro expanding strmov to stpcpy),
there is a macro expanding strmov to strcpy, which returns a
pointer to the first character.
solution #3: define strmov as stpcpy.
problem #4: #2 applies on HP-UX as well, but this platform
looks at SHLIB_PATH
solution #4: add the library path to SHLIB_PATH
Problem:
HASH indexes on VARCHAR columns with binary collations did not ignore trailing spaces from strings before comparisons. This could result in duplicate records being successfully inserted into a MEMORY table with unique key constraints.
As a direct consequence of the above, internal MEMORY tables used for GROUP BY calculation in testcases for bug #27643 contained duplicate rows which resulted in duplicate key errors when converting those temporary tables to MyISAM. Additionally, that error was incorrectly converted to the 'table is full' error.
Solution:
- ignore trailing spaces in VARCHAR fields with binary collations when calculating hashes.
- return a proper error from create_myisam_from_heap() when conversion fails.
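A minimal sketch of the first problem (hypothetical table):

  CREATE TABLE t1 (v VARCHAR(10) CHARACTER SET latin1 COLLATE latin1_bin,
                   UNIQUE KEY (v)) ENGINE=MEMORY;
  INSERT INTO t1 VALUES ('a');
  INSERT INTO t1 VALUES ('a ');  -- must now fail with a duplicate key
  -- before the fix the trailing space changed the hash value, so both
  -- rows were inserted despite the unique constraint.
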
mysqld crashed when a long-running explain query was killed from
another connection.
When the current thread caught a kill signal while executing the function
best_extension_by_limited_search, it just silently returned to
the calling function, greedy_search, without initializing elements of
the join->best_positions array.
However, the greedy_search function ignored the thd->killed status
after calls to the best_extension_by_limited_search function, and
after several calls the greedy_search function used uninitialized
data from join->best_positions[idx] to search for a position in the
join->best_ref array.
That search failed, and greedy_search tried to call the swap_variables
function with a NULL argument, which caused the crash.
ENUM fields internally store their values as integers and may use integer
values as indexes to their values. Invalid values are mapped to zero.
When storing an empty string, an ENUM field fails to find an appropriate value
and tries to convert the provided string to an integer. The conversion also
fails, and an error is returned even if thd->count_cuted_fields is set to
CHECK_FIELD_IGNORE. This makes the range optimizer wrongly decide that an
impossible range is present.
Now Field_enum::store() returns an error while storing an empty string only
if thd->count_cuted_fields isn't set to CHECK_FIELD_IGNORE.
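For illustration (hypothetical table):

  CREATE TABLE t1 (e ENUM('a','b'), KEY (e));
  SELECT * FROM t1 WHERE e = '';
  -- with CHECK_FIELD_IGNORE in effect the empty string now maps to the
  -- error value 0 without an error, so the range optimizer no longer
  -- sees an impossible range here.
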
The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of tables used in the CHECK OPTION expression were
added to temporary table rows. The multi_update::do_updates()
method was modified to restore necessary record buffers
before evaluation of the CHECK OPTION condition.
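A sketch of the affected statement shape, with hypothetical names
(m1 being a merge table):

  CREATE VIEW v1 AS SELECT a FROM m1 WHERE a < 10 WITH CHECK OPTION;
  UPDATE v1, t2 SET v1.a = t2.a WHERE v1.a = t2.id;
  -- do_updates() now restores the merge table's record buffer from the
  -- stored rowid before the CHECK OPTION condition is evaluated.
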
Integer values with 10 digits may or may not fit into an int column
(e.g. 2147483647 vs 6147483647).
Thus when creating a temp table column for such an int we must
use bigint instead.
Fixed to use bigint.
Also replaced a "magic number" with a named constant.
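For illustration:

  CREATE TABLE t1 SELECT 6147483647 AS n;
  DESCRIBE t1;
  -- n must be a BIGINT column: a 10-digit integer such as 6147483647
  -- does not fit into a 32-bit INT, while 2147483647 still would.
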
type assertion.
The bug was introduced by the patch for bug #16377.
The "+ INTERVAL" (Item_date_add_interval) function detects its result type
by the type of its first argument. But in some cases it returns STRING
as the result type. This happens when, for example, the first argument is a
DATE represented as string. All this makes the get_datetime_value()
function misinterpret such result and return wrong DATE/DATETIME value.
To avoid such cases in the fix for #16377 the code that detects correct result
field type on the first execution was added to the
Item_date_add_interval::get_date() function. Due to this the result
field type of the Item_date_add_interval item stored by the send_fields()
function differs from item's result field type at the moment when
the item is actually sent. It causes an assertion failure.
Now the get_datetime_value() detects that the DATE value is returned by
some item not only by checking the result field type but also by comparing
the returned value with the 100000000L constant - any DATE value should be
less than this value.
Removed result field type adjusting code from the
Item_date_add_interval::get_date() function.
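For illustration:

  SELECT '2007-01-01' + INTERVAL 1 DAY;
  -- the first argument is a DATE given as a string, so the item reports
  -- STRING as its result type; the returned value is nevertheless
  -- recognized as a DATE because it is below 100000000.
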
Refining the tests since pb revealed the older version's fragility: the error from SF() due to being killed
may differ between environments.
DBUG_ASSERT instead of assert.
longer showing SP names.
SHOW CREATE VIEW uses Item::print() methods to reconstruct the
statement text from the parse tree.
The print() method for stored procedure calls needs to allocate
space to print the function's quoted name.
It was incorrectly calculating the length of the buffer needed
(it was too short).
Fixed to reflect the actual space needed.
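A sketch of the affected statement, with a hypothetical function name:

  CREATE FUNCTION f_with_a_rather_long_name() RETURNS INT RETURN 1;
  CREATE VIEW v1 AS SELECT f_with_a_rather_long_name() AS x;
  SHOW CREATE VIEW v1;
  -- the print buffer is now sized for the function's quoted name.
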
The reason for the bug was that replaying a query on the slave could be impossible since its event
was recorded with the killed error. Due to the specifics of handling INSERT, whose per-row while-loop
cannot be interrupted by killing, the query on a transactional table should not have appeared in the
binlog unless there was a call to a stored routine that got interrupted by the kill (and then an error
must be returned out of the loop).
The offered solution adds the following rule for binlogging of INSERT that accounts for the above
specifics:
For INSERT on a transactional table, if the error was not set, the raised KILLED flag alone
is harmless and is ignored by masking it out at the time the binlog event is created.
For both table types the combination of a raised error and the KILLED flag indicates that there
was potentially partial execution on the master and consistency is in question.
In that case the code continues to binlog an event with the appropriate killed error.
The fix relies on the specified behaviour of stored routines, which must propagate the error
to the top-level query handling if the thd->killed flag was raised during routine execution.
The patch adds an argument with a default killed-status-unset value to Query_log_event::Query_log_event.
When storing a large number to a FLOAT or DOUBLE field with fixed length, it could be incorrectly truncated if the field's length was greater than 31.
This patch also does some code cleanups to be able to reuse code which is common between Field_float::store() and Field_double::store().
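A sketch of the affected case, assuming a hypothetical table:

  CREATE TABLE t1 (d DOUBLE(40,5));  -- display width greater than 31
  INSERT INTO t1 VALUES (1e30);
  SELECT d FROM t1;  -- no longer incorrectly truncated
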