Fix for bug #57985 "ONLINE/FAST ALTER PARTITION can fail and
leave the table unusable".
A failing ALTER statement on a partitioned table could leave
the table in an unusable state. This happened when the ALTER
was executed using the "fast" algorithm, which does not copy
data between the old and new versions of the table, and the
resulting new table was in some way incompatible with the
partitioning function.
The problem stems from the fact that discrepancies between the
new table definition and the partitioning function are only
discovered when the table is opened. With the "fast" algorithm
this happened too late in ALTER's execution, at a point when
all changes had already been made and could no longer be
reverted. With the "slow" algorithm, which copies data, such
discrepancies are detected when the new table definition is
opened implicitly as the new version of the table is created
in the storage engine. As a result, ALTER is aborted before
any changes are made to the table.
This fix addresses the issue by making the "fast" algorithm
behave like the "slow" one: it checks compatibility between
the new definition and the partitioning function by trying to
open the new definition after its .FRM file has been created.
In the long term we should probably implement a way to check
compatibility between the partitioning function and the new
table definition that does not involve opening it, as this
would allow a much cleaner fix for this problem.
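As an illustration only (the names below are hypothetical, not the actual server code), a minimal C++ sketch of the validate-after-create idea behind the fix: write the new definition first, then immediately try to open it, and roll the metadata change back if that open fails.

  #include <cstdio>
  #include <string>

  // Hypothetical stand-ins for the real server routines.
  static bool write_new_frm(const std::string &path) {
      std::printf("writing new definition to %s\n", path.c_str());
      return true;
  }
  static bool open_table_definition(const std::string &path) {
      // Opening the definition is where an incompatibility with the
      // partitioning function would surface.
      std::printf("opening %s for validation\n", path.c_str());
      return false;                            // pretend the definition is incompatible
  }
  static void remove_frm(const std::string &path) {
      std::printf("rolling back: removing %s\n", path.c_str());
  }

  // "Fast" ALTER path: only the new .FRM is created, but the definition is
  // still opened afterwards so errors are caught before the old table is
  // replaced, mirroring what the "slow" (copying) path already did.
  static bool fast_alter_partition(const std::string &frm_path) {
      if (!write_new_frm(frm_path))
          return false;
      if (!open_table_definition(frm_path)) {  // the check added by the fix
          remove_frm(frm_path);                // abort before any irreversible change
          return false;
      }
      return true;
  }

  int main() {
      return fast_alter_partition("/tmp/t1_new.frm") ? 0 : 1;
  }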
mysql-test/r/partition_innodb.result:
Added test for bug #57985 "ONLINE/FAST ALTER PARTITION can
fail and leave the table unusable".
mysql-test/t/partition_innodb.test:
Added test for bug #57985 "ONLINE/FAST ALTER PARTITION can
fail and leave the table unusable".
sql/sql_table.cc:
Ensure that when the .FRM for a partitioned table is
created without creating the table in the storage engine
(e.g. during "fast" ALTER TABLE) we still open the table
definition. This allows checking that the definition of the
created table/.FRM is compatible with its partitioning function.
Problem: Extended characters outside of the ASCII range were not
displayed properly in SHOW PROCESSLIST, because thd_info->query was
always sent as system_character_set (utf8). This was wrong, because
the query buffer is never converted to utf8 - it always has the
client character set.
Fix: send the query buffer using the query character set.
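A minimal standalone sketch of the idea, using simplified stand-ins (the charset_info struct and cset_string class below are hypothetical, loosely modelled on the CSET_STRING introduced by this patch): keep the query text paired with the character set it is actually encoded in, so a SHOW PROCESSLIST-style reader converts from that set instead of assuming utf8.

  #include <string>
  #include <utility>

  // Hypothetical simplified stand-in for a character-set descriptor.
  struct charset_info { const char *name; };
  static const charset_info my_charset_latin1 = { "latin1" };
  static const charset_info my_charset_utf8   = { "utf8" };

  // A string that remembers which character set it is encoded in,
  // in the spirit of the CSET_STRING class this patch introduces.
  class cset_string {
  public:
      cset_string() : cs_(&my_charset_latin1) {}   // default charset, as noted for the real class
      cset_string(std::string s, const charset_info *cs) : str_(std::move(s)), cs_(cs) {}
      const std::string &str() const { return str_; }
      const charset_info *charset() const { return cs_; }
  private:
      std::string str_;
      const charset_info *cs_;
  };

  // SHOW PROCESSLIST-style reader: convert from the query's own character
  // set rather than assuming the buffer is already utf8.
  static std::string to_display_charset(const cset_string &query,
                                        const charset_info *target) {
      (void)target;   // a real implementation would convert query.charset() -> target
      return query.str();
  }

  int main() {
      cset_string query("SELECT 'résumé'", &my_charset_latin1);  // bytes in the client charset
      return to_display_charset(query, &my_charset_utf8).empty() ? 1 : 0;
  }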
@ sql/sql_class.cc
@ sql/sql_class.h
Introducing a new class CSET_STRING, a LEX_STRING with a character set.
Adding set_query(&CSET_STRING)
Adding reset_query(), to use instead of set_query(0, NULL).
@ sql/event_data_objects.cc
Using reset_query()
@ sql/log_event.cc
Using reset_query()
Adding charset argument to set_query_and_id().
@ sql/slave.cc
Using reset_query().
@ sql/sp_head.cc
Changing backing up and restore code to use CSET_STRING.
@ sql/sql_audit.h
Using CSET_STRING.
In the "else" branch it's OK not to use
global_system_variables.character_set_client.
&my_charset_latin1, which is set in the constructor, is fine
(verified with Sergey Vojtovich).
@ sql/sql_insert.cc
Using set_query() with proper character set: table_name is utf8.
@ sql/sql_parse.cc
Adding character set argument to set_query_and_id().
(This is the main point where thd->charset() is stored
into thd->query_string.cs, for use in "SHOW PROCESSLIST".)
Using reset_query().
@ sql/sql_prepare.cc
Storing client character set into thd->query_string.cs.
@ sql/sql_show.cc
Using CSET_STRING to fetch and send charset-aware query information
from threads.
@ storage/myisam/ha_myisam.cc
Using set_query() with proper character set: table_name is utf8.
@ mysql-test/r/show_check.result
@ mysql-test/t/show_check.test
Adding tests
Problem: crash in the Item_float constructor on a DBUG_ASSERT due
to a string parameter that is not null-terminated.
Fix: make Item_float::Item_float safe for non-null-terminated parameters:
- Use a temporary buffer when generating the error message
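A minimal sketch of the approach, with hypothetical names: when an error message has to be built from a length-counted string that may not be null-terminated, copy a bounded prefix into a temporary buffer first so the formatting code never reads past the end of the caller's data.

  #include <algorithm>
  #include <cstddef>
  #include <cstdio>
  #include <cstring>

  // Report an error about a numeric literal that may NOT be null-terminated.
  // A bounded prefix is copied into a temporary buffer so the "%s" below
  // never reads beyond the length supplied by the caller.
  static void report_bad_double(const char *str, size_t length) {
      char buf[64];
      size_t n = std::min(length, sizeof(buf) - 1);
      std::memcpy(buf, str, n);
      buf[n] = '\0';                               // guaranteed terminator
      std::fprintf(stderr, "Illegal double value: '%s'\n", buf);
  }

  int main() {
      const char raw[] = { '1', '.', '2', 'x' };   // four bytes, no terminator
      report_bad_double(raw, sizeof(raw));
      return 0;
  }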
modified:
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
@ sql/item.cc
NAME_CONST(..) was used wrongly in a HAVING clause, and
should have caused a user error. Instead, it caused a
segmentation fault.
During parsing, the value parameter to NAME_CONST was
specified to be an uninitialized Item_ref object (it
would be resolved later). During semantic analysis the
object was tested, and since it had not been initialized,
the server crashed with a segmentation fault.
The fix is to check whether the object is initialized
before testing it. The same pattern has already been
applied to most other methods in the Item_ref class.
The bug was introduced by the optimization done as part of
Bug#33546.
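A minimal standalone sketch of the defensive pattern (hypothetical types, not the actual Item_ref code): a reference that is resolved in a later phase must be checked before it is dereferenced, so an unresolved reference produces an error instead of a crash.

  #include <cstdio>

  // Hypothetical stand-in for an item whose reference is resolved ("fixed")
  // in a later phase of query processing.
  struct ref_item {
      const int *target = nullptr;      // set during resolution; may still be unset
      bool fixed() const { return target != nullptr; }

      // Guarded accessor: code that inspects the value during semantic
      // analysis must first confirm the reference has been resolved.
      bool const_value(int *out) const {
          if (!fixed())                 // the missing check that caused the crash
              return false;             // report "not resolved" instead of dereferencing
          *out = *target;
          return true;
      }
  };

  int main() {
      ref_item unresolved;              // e.g. NAME_CONST's value argument in a HAVING clause
      int v;
      if (!unresolved.const_value(&v))
          std::puts("error: reference not yet resolved; raise a user error, do not crash");
      return 0;
  }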
The reason for the bug is that:
- we use system checks in cmake/os/mysql_release.cmake
- we include cmake/os/mysql_release.cmake using CMAKE_USER_MAKE_RULES_OVERRIDE
- this (having system checks based on TRY_COMPILE inside a file pointed to by
CMAKE_USER_MAKE_RULES_OVERRIDE) does not work with cmake 2.8.3,
and according to Kitware was never meant to work; it just happened to work by
accident until the 2.8.2 release (though it seems not to work with 2.6.0 either).
Related CMake bug discussing the situation:
http://public.kitware.com/Bug/view.php?id=11469
The fix is to use INCLUDE instead of CMAKE_USER_MAKE_RULES_OVERRIDE, as suggested
by Kitware. The downside is that the compile flags are not in the cache, but this
is purely cosmetic. The functionality is the same: the flags used for compiling
are correct when using INCLUDE.
mysql-test/r/func_math.result:
Add test for Bug #58137
mysql-test/t/func_math.test:
Add test for Bug #58137
sql/field.cc:
Skip calling my_gcvt() if we are trying to insert a double into a char(0) column.
Use __builtin_stpcpy only if the system supports stpcpy.
This is necessary because in some cases a call to stpcpy will
be emitted if the built-in cannot be optimized.
include/m_string.h:
The expansion of stpcpy (in glibc) causes warnings if the
return value of strmov is not used. Since stpcpy is
a GNU extension and the expansion ends up using a built-in
provided by GCC, use the compiler-provided built-in directly
when possible. Nonetheless, the C library must have stpcpy,
as a call to it may be emitted if the built-in cannot be optimized.
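A minimal sketch of the pattern; HAVE_STPCPY is a hypothetical feature-test macro here, standing in for whatever the build system defines when the C library provides stpcpy().

  #include <cstring>

  // strmov() copies src into dst and returns a pointer to the terminating
  // NUL in dst (unlike strcpy(), which returns dst itself).
  #if defined(__GNUC__) && defined(HAVE_STPCPY)
  // Use the compiler built-in directly so unused-result warnings from the
  // glibc stpcpy() macro expansion are avoided; the C library must still
  // provide stpcpy() in case the compiler emits a real call.
  #  define strmov(dst, src) __builtin_stpcpy((dst), (src))
  #else
  // Portable fallback: hand-rolled copy that returns the end pointer.
  static inline char *strmov(char *dst, const char *src) {
      while ((*dst++ = *src++) != '\0') {}
      return dst - 1;
  }
  #endif

  int main() {
      char buf[32];
      char *end = strmov(buf, "hello");
      return (end - buf == 5 && std::strlen(buf) == 5) ? 0 : 1;
  }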
Finalize the server flags after any kind of command is executed.
To avoid updating the flag multiple times, reorganize the code so
that it is invoked only once for each command.
sql/log_event.cc:
Explicit update after the query is executed in the slave.
sql/sql_parse.cc:
Reorganize so that the status flag is updated for any command
and is not updated twice for a query command.
breaks SBR
The problem was that DROP DATABASE ignored any metadata locks on stored
functions and procedures held by other connections. This made it
possible for DROP DATABASE to drop functions/procedures that were in use
by other connections and therefore break statement based replication.
(DROP DATABASE could appear in the binlog before a statement using a
dropped function/procedure.)
This problem was an issue left unresolved by the patch for Bug#30977
where metadata locks for stored functions/procedures were introduced.
This patch fixes the problem by making sure DROP DATABASE takes
exclusive metadata locks on all stored functions/procedures to be
dropped.
Test case added to sp-lock.test.
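A minimal sketch of the lock-before-change idea under simplified assumptions (an in-process mutex registry instead of the server's MDL subsystem, and hypothetical names throughout): take exclusive locks on every routine to be dropped before dropping any of them, so no other session can still be executing one of them part-way through.

  #include <map>
  #include <mutex>
  #include <string>
  #include <vector>

  // Hypothetical in-process stand-in for the metadata-lock subsystem:
  // one exclusive lock per stored routine name.
  static std::map<std::string, std::mutex> routine_locks;

  static void drop_routines(const std::vector<std::string> &names) {
      // Phase 1: acquire exclusive locks on ALL routines to be dropped
      // before removing any of them (the essence of this fix).
      std::vector<std::unique_lock<std::mutex>> held;
      for (const auto &name : names)
          held.emplace_back(routine_locks[name]);

      // Phase 2: only now perform the drops; a concurrent statement either
      // finished before we got the lock or sees the routine already gone,
      // so the binary log order stays consistent.
      for (const auto &name : names) {
          (void)name;                   // drop_routine(name) would go here
      }
  }                                     // all locks released together here

  int main() {
      drop_routines({ "db1.f1", "db1.p1" });
      return 0;
  }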
Memory was allocated for storing path names inside
fn_expand() and was never freed.
This patch fixes the problem by storing the path
names in statically allocated buffers instead,
which are automatically released when the server
exits.
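A minimal sketch of the change with hypothetical names and a fixed buffer size chosen only to keep the example self-contained: expand the path into a statically allocated buffer instead of heap memory that nothing frees.

  #include <cstddef>
  #include <cstdio>

  // Fixed path-buffer length; 512 is an assumption used only for this
  // sketch (the server has its own path-length constant).
  enum { PATH_BUF_LEN = 512 };

  // Statically allocated buffer: its storage lives until the process exits,
  // so unlike a heap allocation there is nothing to free.
  static char pidfile_path[PATH_BUF_LEN];

  static void expand_path(const char *name, char *out, std::size_t out_len) {
      std::snprintf(out, out_len, "/var/lib/mysql/%s", name);
  }

  int main() {
      expand_path("mysqld.pid", pidfile_path, sizeof(pidfile_path));
      std::puts(pidfile_path);
      return 0;
  }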
breaks SBR
This pre-requisite patch refactors the code for dropping tables, used
by DROP TABLE and DROP DATABASE. The patch moves the code for acquiring
metadata locks out of mysql_rm_table_part2() and makes it the
responsibility of the caller. This is in preparation for changing
the DROP DATABASE implementation to acquire all metadata locks
before any changes are made. mysql_rm_table_part2() is renamed to
mysql_rm_table_no_locks() to reflect the change.
Including adding a test in 5.5 that requires the --big-test
flag from mysql-test-run.pl, and also disabling
tests that fail.
mysql-test/collections/default.weekly:
Added all tests requiring --big-test in alphabetical order
mysql-test/r/information_schema-big.result:
Updated result
mysql-test/r/variables-big.result:
Updated results
mysql-test/t/disabled.def:
Added tests that fail (they have not been run
regularly since they need --big-test)