Corruption errors: 126, 134, 145
When one thread attempts to lock two (or more) tables and another
thread executes a statement that aborts these locks (e.g. REPAIR
TABLE), we may get a table object with the wrong lock type in the
table cache.
For example, if a SELECT FROM t1,t2 was aborted, a subsequent INSERT
INTO t1 may be executed under a read lock.
As a result we may get various table corruptions and even a server
crash.
This is fixed by resetting the lock type in case the lock was
aborted by another thread.
I failed to create a reasonable test case for this bug.
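For illustration only, here is a toy C++ model of that fix (the
CachedTable/lock_table names and the aborted_by_other flag are
invented, not server code): when the lock request is aborted by
another thread, the cached lock type is reset instead of being left
behind for the next statement.

  #include <iostream>

  // Toy model only: a cached table object remembers the lock type of
  // the last (possibly aborted) lock request. If the request was
  // aborted, the cached lock type must be reset before the object is
  // reused, otherwise a later INSERT could run under a stale read lock.
  enum LockType { LOCK_NONE, LOCK_READ, LOCK_WRITE };

  struct CachedTable {
    LockType lock_type = LOCK_NONE;
  };

  // Hypothetical locking helper: returns false when the lock was
  // aborted by another thread (e.g. by REPAIR TABLE).
  static bool lock_table(CachedTable &t, LockType requested,
                         bool aborted_by_other) {
    t.lock_type = requested;
    if (aborted_by_other) {
      t.lock_type = LOCK_NONE;   // the fix: drop the stale lock type
      return false;
    }
    return true;
  }

  int main() {
    CachedTable t1;
    if (!lock_table(t1, LOCK_READ, /*aborted_by_other=*/true))
      std::cout << "lock aborted, lock_type reset\n";
    // A later write request now has to acquire its own LOCK_WRITE
    // instead of silently reusing the aborted read lock.
    return 0;
  }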
The end_update() function uses the Item::save_org_in_field() function
to save the original values of items into the group buffer. But for
Item_func_set_user_var this method was mapped to the save_in_field
method. The latter function wrongly decides to use the result_field.
This leads to saving an incorrect value in the group buffer and a
wrong result for the whole query.
A can_use_result_field argument of type bool is added to the
Item_func_set_user_var::save_in_field() function. If it is set to
FALSE then the item's result field won't be used. Otherwise it is
detected whether the result field will be used (old behaviour).
Two wrapping functions for the function above are added to the
Item_func_set_user_var class (see the sketch below):
save_in_field(Field *field, bool no_conversions) - calls the above
function with can_use_result_field set to TRUE;
save_org_in_field(Field *field) - same, but with can_use_result_field
set to FALSE.
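A minimal C++ sketch of that shape follows; Field, the class body and
the return values are mocked up, so none of this is the server's
actual code.

  #include <iostream>

  struct Field { const char *name; };

  class ItemSetUserVar {               // stand-in for Item_func_set_user_var
  public:
    // Core routine: when can_use_result_field is FALSE the value is
    // written straight into `field` and result_field is never used.
    int save_in_field(Field *field, bool no_conversions,
                      bool can_use_result_field) {
      if (can_use_result_field && result_field != nullptr)
        field = result_field;          // old behaviour
      (void)no_conversions;
      std::cout << "writing into " << field->name << "\n";
      return 0;
    }

    // Wrapper for ordinary stores: keeps the old behaviour.
    int save_in_field(Field *field, bool no_conversions) {
      return save_in_field(field, no_conversions,
                           /*can_use_result_field=*/true);
    }

    // Wrapper used by end_update() to save the original value into the
    // group buffer: must bypass result_field.
    int save_org_in_field(Field *field) {
      return save_in_field(field, /*no_conversions=*/true,
                           /*can_use_result_field=*/false);
    }

    Field *result_field = nullptr;
  };

  int main() {
    Field group_buf = { "group buffer" };
    Field res       = { "result_field" };
    ItemSetUserVar item;
    item.result_field = &res;
    item.save_in_field(&group_buf, false);   // prints "result_field"
    item.save_org_in_field(&group_buf);      // prints "group buffer"
    return 0;
  }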
ON conditions from the JOIN expression were ignored in the CHECK
OPTION check when updating a multi-table view with CHECK OPTION.
The st_table_list::prep_check_option function has been modified to
take ON conditions into account in the CHECK OPTION check.
It was also changed to build the check option condition only once
for any update used in PS/SP.
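A toy C++ sketch of that idea, using plain strings instead of the
server's Item trees (CheckOption and prepare are invented names): the
check option condition is the conjunction of the view's WHERE clause
and the ON conditions, and it is built only on the first call so a
PS/SP can reuse it.

  #include <iostream>
  #include <string>
  #include <vector>

  struct CheckOption {
    std::string cached;              // empty until built

    const std::string &prepare(const std::string &where_cond,
                               const std::vector<std::string> &on_conds) {
      if (cached.empty()) {          // build only once (PS/SP re-execution)
        cached = "(" + where_cond + ")";
        for (const std::string &on : on_conds)
          cached += " AND (" + on + ")";  // ON conditions are checked too
      }
      return cached;
    }
  };

  int main() {
    CheckOption opt;
    std::cout << opt.prepare("t1.a > 0", {"t1.id = t2.id"}) << "\n";
    // prints: (t1.a > 0) AND (t1.id = t2.id)
    return 0;
  }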
Setting a key_cache_block_size which is not a power of 2
could corrupt MyISAM tables.
A couple of computations in the key cache code use bit
operations which only work if key_cache_block_size is a
power of 2.
Replaced the bit operations by arithmetic operations so
that the key cache can handle block sizes that are not
a power of 2.
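A standalone C++ illustration of why the bit operations break (the
function names here are made up, not the actual key cache
identifiers): masking with block_size - 1 equals offset % block_size
only when block_size is a power of 2.

  #include <cassert>
  #include <cstddef>
  #include <cstdio>

  // In-block offset computed with a bit mask: valid only for a
  // power-of-2 block size.
  static size_t in_block_bitops(size_t offset, size_t block_size) {
    return offset & (block_size - 1);
  }

  // In-block offset computed arithmetically: valid for any block size.
  static size_t in_block_arith(size_t offset, size_t block_size) {
    return offset % block_size;
  }

  int main() {
    // Power of 2: both versions agree.
    assert(in_block_bitops(3000, 1024) == in_block_arith(3000, 1024));

    // Not a power of 2: the masked value is garbage.
    std::printf("%zu vs %zu\n",
                in_block_bitops(3000, 1500),   // 3000 & 1499 = 408
                in_block_arith(3000, 1500));   // 3000 % 1500 = 0
    return 0;
  }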
When the same VIEW was created on the master side twice, a
malformed (truncated after the word 'AS') query string was
forwarded to the client side, so the error messages on the
master and the client were different, and replication was
broken.
The mysql_register_view function call failed too early: the
fields of the `view' output argument of this function were
not yet filled with the correct data required for query
replication.
The mysql_register_view function also copied pointers to
local buffers into memory allocated by the caller.
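The last point is the classic pattern of keeping pointers into
storage that is about to go away; a minimal C++ illustration, with
invented names, of the difference between copying a pointer and
copying the bytes:

  #include <cstring>
  #include <iostream>
  #include <string>

  struct ViewInfo {
    const char *query_broken;   // may dangle after register_view() returns
    std::string query_fixed;    // owns a copy of the bytes
  };

  static void register_view(ViewInfo &out) {
    char local_buf[64];         // local buffer, gone after return
    std::strcpy(local_buf, "CREATE VIEW v1 AS SELECT 1");

    out.query_broken = local_buf;  // BUG: pointer into local storage
    out.query_fixed  = local_buf;  // OK: characters copied into caller memory
  }

  int main() {
    ViewInfo info;
    register_view(info);
    std::cout << info.query_fixed << "\n";   // safe
    // Dereferencing info.query_broken here would be undefined behaviour.
    return 0;
  }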
problem #1: udf_example.so does not get built on AIX
solution #1: build it yourself using
cd sql; gcc -g -I ../include/ -I /usr/include/ -lpthread \
-shared -o udf_example.so udf_example.c; mv udf_example.so \
.libs/
problem #2 (the bug): udf_example fails because AIX does not
consult LD_LIBRARY_PATH when doing dlopen(); it looks at
LIBPATH
solution #2: add the library path to LIBPATH
problem #3: udf_example returns the wrong result length since
it relies on strmov to return a pointer to the end of the
string that it copies. On AIX builds, where m_string.h is not
included (m_string.h defines a macro expanding strmov to
stpcpy), there is a macro expanding strmov to strcpy, which
returns a pointer to the first character (see the sketch
after this list).
solution #3: define strmov as stpcpy.
problem #4: #2 applies on HP-UX as well, but that platform
looks at SHLIB_PATH
solution #4: add the library path to SHLIB_PATH
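A small C++ demonstration of problem #3, assuming a POSIX system
where stpcpy() is declared in <string.h>: the length computed from
the returned pointer is correct with stpcpy but collapses to 0 with
strcpy.

  #include <stdio.h>
  #include <string.h>

  int main(void) {
    char buf[32];

    // stpcpy returns a pointer to the terminating NUL of the copy,
    // so end - start is the length of the result.
    char *end = stpcpy(buf, "hello");
    printf("stpcpy length: %ld\n", (long)(end - buf));   // 5

    // strcpy returns its first argument, so the same computation
    // yields 0 and the UDF reports a wrong result length.
    char *start = strcpy(buf, "hello");
    printf("strcpy length: %ld\n", (long)(start - buf)); // 0
    return 0;
  }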
Problem:
HASH indexes on VARCHAR columns with binary collations did not
ignore trailing spaces from strings before comparisons. This could
result in duplicate records being successfully inserted into a
MEMORY table with unique key constraints.
As a direct consequence of the above, internal MEMORY tables used
for GROUP BY calculation in the test cases for bug #27643 contained
duplicate rows, which resulted in duplicate key errors when
converting those temporary tables to MyISAM. Additionally, that
error was incorrectly converted to the 'table is full' error.
Solution:
- ignore trailing spaces in VARCHAR fields with binary collations when calculating hashes.
- return a proper error from create_myisam_from_heap() when conversion fails.
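For illustration, here is a toy C++ hash that strips trailing spaces
before hashing (FNV-1a stands in for the server's own hash function,
and the names are invented), so that 'abc' and 'abc   ' land in the
same bucket and the unique-key check can see them as duplicates.

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  static uint64_t hash_ignoring_trailing_spaces(const char *s, size_t len) {
    while (len > 0 && s[len - 1] == ' ')
      --len;                               // strip trailing spaces first
    uint64_t h = 1469598103934665603ULL;   // FNV-1a offset basis
    for (size_t i = 0; i < len; ++i) {
      h ^= (unsigned char)s[i];
      h *= 1099511628211ULL;               // FNV-1a prime
    }
    return h;
  }

  int main() {
    const char *a = "abc";
    const char *b = "abc   ";
    assert(hash_ignoring_trailing_spaces(a, std::strlen(a)) ==
           hash_ignoring_trailing_spaces(b, std::strlen(b)));
    return 0;
  }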
mysqld crashed when a long-running EXPLAIN query was killed from
another connection.
When the current thread caught a kill signal while executing the
function best_extension_by_limited_search, it silently returned to
the calling function greedy_search without initializing the elements
of the join->best_positions array.
However, the greedy_search function ignored the thd->killed status
after calls to the best_extension_by_limited_search function, and
after several calls greedy_search used uninitialized data from
join->best_positions[idx] to search for a position in the
join->best_ref array.
That search failed, and greedy_search tried to call the
swap_variables function with a NULL argument, which caused a crash.
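A toy C++ model of the corrected control flow (search_step and the
kill flag are hypothetical stand-ins for
best_extension_by_limited_search and thd->killed): the caller
re-checks the kill status after every step and bails out before
touching results the callee never initialized.

  #include <atomic>
  #include <cstdio>
  #include <vector>

  static std::atomic<bool> killed{false};  // set by KILL from another thread

  // Hypothetical search step: fills best_positions only when not killed,
  // otherwise returns without touching its output (like the real bug).
  static bool search_step(std::vector<int> &best_positions) {
    if (killed.load())
      return false;
    best_positions.assign(4, 1);
    return true;
  }

  int main() {
    std::vector<int> best_positions;       // not yet initialized
    killed = true;                         // simulate KILL mid-optimization
    for (int i = 0; i < 3; ++i) {
      if (!search_step(best_positions) || killed.load()) {
        std::puts("query killed, aborting plan search");
        return 0;                          // never read best_positions
      }
    }
    std::printf("best plan has %zu positions\n", best_positions.size());
    return 0;
  }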