Cache size was truncated via a 32-bit ulong in ha_init_key_cache() and
ha_resize_key_cache().
This change fixes the cast to use size_t instead of ulong. The cast is safe
because the key_buffer_size parameter is limited to SIZE_T_MAX.
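As a minimal illustration of the truncation (a hypothetical, self-contained sketch, not the server code):

    // Sketch only: on platforms where ulong is 32 bits (e.g. LLP64 / 64-bit
    // Windows) a key buffer size above 4 GB silently wraps around.
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
      std::uint64_t key_buffer_size= 5ULL * 1024 * 1024 * 1024;      // 5 GB

      std::uint32_t as_32bit_ulong= (std::uint32_t) key_buffer_size; // lossy
      std::size_t   as_size_t=      (std::size_t)   key_buffer_size; // full width

      std::printf("32-bit ulong: %u bytes\n", as_32bit_ulong);       // 1 GB, truncated
      std::printf("size_t      : %zu bytes\n", as_size_t);           // 5 GB on 64-bit hosts
      return 0;
    }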
The problem was that partitioning cached the table flags.
These flags can change due to TRANSACTION LEVEL changes.
The solution was to remove the cache and always return the table flags
from the first partition (if the handler was initialized).
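Conceptually the fix amounts to something like the following (a simplified, hypothetical sketch, not the real ha_partition code):

    // No cached copy of the flags: delegate to the first underlying partition
    // whenever the handler is initialized, otherwise fall back to a default.
    #include <cstdint>

    struct partition_element_handler
    {
      virtual std::uint64_t ha_table_flags() const= 0;
      virtual ~partition_element_handler() {}
    };

    struct partition_handler_sketch
    {
      partition_element_handler **m_file;   // per-partition handlers
      bool m_initialized;
      std::uint64_t m_default_table_flags;

      std::uint64_t table_flags() const
      {
        if (m_initialized)
          return m_file[0]->ha_table_flags();  // may vary with transaction level
        return m_default_table_flags;
      }
    };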
The problem was that the server did not robustly handle a
unilateral rollback issued by the Resource Manager (RM)
due to a resource deadlock within the transaction branch.
By not acknowledging the rollback, the server (TM) would
eventually corrupt the XA transaction state and crash.
The solution is to mark the transaction as rollback-only
if the RM indicates that it rolled back its branch of the
transaction.
collation change made in 5.1.24-rc
Problem: 'CHECK TABLE ... FOR UPGRADE' did not check for
incompatible collation changes made in MySQL 5.1.24-rc.
Fix: add the check.
upgrade from <=5.0.46 to >=5.0.48
Problem: 'check table .. for upgrade' doesn't detect
incompatible collation changes made in 5.0.48.
Fix: check for incompatible collation changes.
The failure was caused by executing a CREATE-SELECT statement that creates a
table in a database other than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database name, hence
creating the table in the wrong database and causing the following inserts to
fail since the table didn't exist in the given database.
Fixed the bug by adding a parameter to store_create_info() that makes
the function print the database name before the table name, and used that
in the calls that write the CREATE statement to the binary log. The database
name is only printed if it differs from the currently selected database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
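A rough sketch of the qualification logic (hypothetical helper; the real change is inside store_create_info()):

    // Prefix the table name with its database only when requested and only
    // when that database differs from the currently selected one.
    #include <string>

    static std::string create_table_header(const std::string &table_db,
                                           const std::string &current_db,
                                           const std::string &table_name,
                                           bool show_database)
    {
      std::string res= "CREATE TABLE ";
      if (show_database && table_db != current_db)
        res+= "`" + table_db + "`.";
      res+= "`" + table_name + "`";
      return res;
    }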
"Trigger fired multiple times leads to gaps in auto_increment sequence".
The bug was that if a trigger fired multiple times inside a top
statement (for example top-statement is a multi-row INSERT,
and trigger is ON INSERT), and that trigger inserted into an auto_increment
column, then gaps could be observed in the auto_increment sequence,
even if there were no other users of the database (no concurrency).
It was wrong usage of THD::auto_inc_intervals_in_cur_stmt_for_binlog.
Note that the fix changes "class handler", I'll tell the Storage Engine API team.
This patch contains fixes for two problems:
1. As originally reported, the server crashed on Mac OS X when trying to access
an EXAMPLE table after the EXAMPLE plugin was installed.
It turned out that the dynamically loaded EXAMPLE plugin called the
function hash_search() from a Mac OS X system library, instead of
hash_search() from MySQL's mysys library. Makefile.am in storage/example
does not include libmysys, so the Mac OS X linker resolved the hash_search()
call against the system library when the shared object was
loaded.
One possible solution would be to include libmysys in the linkage of
dynamic plugins. But then we would need a libmysys.so, which would have
to be used by the server too. This could have a minimal performance impact,
but above all the change seems to be too risky at the current state of
MySQL 5.1.
The selected solution is to rename MySQL's hash_search() to my_hash_search(),
as has been done before with hash_insert() and hash_reset().
Since this is the third time we have needed to rename a hash_*() function,
I renamed all hash_*() functions to my_hash_*().
To avoid changing a zillion calls to these functions, and announcing
this to hundreds of developers, I added defines that map the old names
to the new names.
This change is in hash.h and hash.c.
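Conceptually the mapping is a set of renaming defines along these lines (illustrative only; the exact list and signatures are in hash.h):

    /* Old names keep compiling, but now resolve to the prefixed functions. */
    #define hash_search   my_hash_search
    #define hash_delete   my_hash_delete
    #define hash_free     my_hash_free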
2. The other problem was improper implementation of the handlerton-to-plugin
mapping. We use a fixed-size array to hold a plugin reference for each
handlerton. On every install of a handler plugin, we allocated a new slot
in the array. On uninstall we did not free it. After some uninstall/install
cycles the array overflowed. We did not check for overflow.
One fix is to check for overflow to stop the crashes.
Another fix is to free the array slot at uninstall and search for a free slot
at plugin install.
This change is in handler.cc.
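A simplified model of the slot handling (hypothetical names and sizes, not the actual handler.cc code):

    #include <cstddef>

    enum { MAX_SLOTS= 64 };                        // fixed-size handlerton-to-plugin map
    static void *hton2plugin_sketch[MAX_SLOTS];    // NULL means the slot is free

    static int install_slot(void *plugin)
    {
      for (std::size_t i= 0; i < MAX_SLOTS; i++)
      {
        if (hton2plugin_sketch[i] == NULL)         // reuse slots freed by uninstall
        {
          hton2plugin_sketch[i]= plugin;
          return (int) i;
        }
      }
      return -1;                                   // array full: report an error, don't crash
    }

    static void uninstall_slot(int slot)
    {
      if (slot >= 0 && slot < MAX_SLOTS)
        hton2plugin_sketch[slot]= NULL;            // free the slot for later installs
    }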
SET col
When reporting a duplicate key error the server was making incorrect assumptions
about the state of the value string that is included in the error message.
Fixed by accessing the data in this string in a "safe" way (without relying on it
having a terminating 0).
A similar problem in the reporting of foreign key duplicate errors was detected
by code analysis and fixed as well.
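The "safe" access boils down to using the known length instead of assuming a terminator, roughly like this (hypothetical code, not the server's error-reporting path):

    #include <cstddef>
    #include <cstdio>

    static void report_dup_key(const char *key_value, std::size_t key_value_len)
    {
      char msg[256];
      /* %.*s prints at most key_value_len bytes even if the buffer is not
         zero-terminated. */
      std::snprintf(msg, sizeof(msg), "Duplicate entry '%.*s' for key",
                    (int) key_value_len, key_value);
      std::fputs(msg, stderr);
    }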
partition is corrupt
The main problem was that ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR
PARTITION took another code path (through mysql_alter_table instead of
mysql_admin_table) which differs in two ways:
1) alter table opens the tables in a different way than the admin commands do,
resulting in returning with an error before the command was even tried
2) alter table does not start sending diagnostic rows to the client, while
the lower-level admin functions assume this has been done and continue to
send them -> resulting in an assertion crash
The fix:
Remapped ALTER TABLE t ANALYZE/CHECK/OPTIMIZE/REPAIR PARTITION to use
the same code path as ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE t.
Added a check in mysql_admin_table to set up the list of partitions
that should be used.
Partitioned tables will still not work with
REPAIR TABLE/PARTITION USE_FRM, since that would require moving each partition
to a table, running REPAIR TABLE t USE_FRM on it, checking that the data still
fulfills the partitioning function, and then moving the table back to
being a partition.
NOTE: I have removed the following functions from the handler
interface:
analyze_partitions, check_partitions, optimize_partitions,
repair_partitions
since they are no longer needed.
THIS ALTERS THE STORAGE ENGINE API
The problem was that ha_partition had the HA_FILE_BASED flag set
(since it uses a .par file), but after open it uses the first partition's
flags, which results in different case handling for create and for
open.
The solution was to change the underlying partition name so that it is consistent.
(Only happens when lower_case_table_names = 2, i.e. Mac OS X and storage
engines without HA_FILE_BASED, like InnoDB and Memory.)
(Recommit after renaming check_lowercase_names to get_canonical_filename
and moving it from handler.h to mysql_priv.h.)
NOTE: if a mixed-case name for a partitioned table was created when
lower_case_table_names = 2, it should be renamed or dropped before using
the updated version (see bug#37402 for more info).
The REPAIR TABLE ... USE_FRM query silently corrupts data of tables
with an old .FRM file version.
The mysql_upgrade client program or the REPAIR TABLE query (without
the USE_FRM clause) can't prevent this problem, because in the
common case they don't upgrade the .FRM file to a compatible structure.
1. Evaluation of the REPAIR TABLE ... USE_FRM query has been
modified to reject such tables with the message:
"Failed repairing incompatible .FRM file".
2. REPAIR TABLE query (without USE_FRM clause) evaluation has been
modified to upgrade .FRM files to current version.
3. CHECK TABLE ... FOR UPGRADE query evaluation has been modified
to return error status when .FRM file has incompatible version.
4. The mysql_upgrade and mysqlcheck client programs call the CHECK TABLE
   FOR UPGRADE and REPAIR TABLE queries, so their behavior has also
   been changed to upgrade .FRM files with incompatible
   version numbers.
a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the current
statement at the end of dispatch_command(). In order not to issue
redundant disk syncs, an optimization of the two-phase commit
protocol is implemented to bypass the two-phase commit if
the transaction is read-only.
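Sketched in isolation, the optimization looks roughly like this (hypothetical structures; the real bookkeeping lives in the transaction and handler layers):

    // A transaction that registered no data modifications can be finished
    // without the prepare phase and without syncing the log to disk.
    struct Transaction_state
    {
      bool modified_data;   // set whenever a ha_* wrapper changes rows
    };

    static void end_statement(Transaction_state *trx)
    {
      if (!trx->modified_data)
        return;             // read-only: skip prepare and the log sync entirely

      // Otherwise run the full two-phase commit:
      //   1) prepare every participating engine,
      //   2) sync the log,
      //   3) commit every participating engine.
    }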
Replacing a template function with a normal static function.
The template parameter, which previously was the class to
find a binlogging function in, is now passed as a pointer to
the actual binlogging function instead.
The patch requires a change of indentation, but that is submitted
as a separate patch.
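The shape of the change, in a simplified hypothetical form:

    // Before: the binlogging member was located through a template parameter.
    //
    //   template <class RowsEventT>
    //   int binlog_log_row(...);
    //
    // After: the binlogging function is passed as an ordinary pointer, so the
    // caller selects write/update/delete behaviour by choosing which function
    // to hand in.
    typedef int Log_func(const void *thd, const void *table,
                         const unsigned char *before_record,
                         const unsigned char *after_record);

    static int binlog_log_row(const void *thd, const void *table,
                              const unsigned char *before_record,
                              const unsigned char *after_record,
                              Log_func *log_func)
    {
      return log_func(thd, table, before_record, after_record);
    }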
The bug was that the handler::clone()/handler::ha_open() call caused allocation of
cloned_copy->ref on the handler->table->mem_root. The allocated memory could not
be reclaimed until the table was flushed, so it was possible to exhaust memory by
repeatedly running index_merge queries without doing table flushes.
The fix:
- make handler::clone() allocate new_handler->ref on the passed mem_root
- make handler::ha_open() not allocate this->ref if it has already been allocated
There is no testcase as it is not possible to check for small leaks from the test suite.
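A minimal model of the two fixes (hypothetical types and names, heavily simplified, not the real handler class):

    #include <cstddef>
    #include <cstdlib>

    struct arena { /* stand-in for a MEM_ROOT: freed when its owner goes away */ };
    static unsigned char *alloc_on(arena *, std::size_t n)
    { return (unsigned char *) std::malloc(n); }   // sketch only

    struct handler_sketch
    {
      unsigned char *ref= NULL;   // row-position buffer, NULL until allocated

      // Fix 1: the clone's ref is allocated on the arena supplied by the
      // caller, not on the long-lived table arena.
      void clone_into(handler_sketch *copy, arena *caller_root, std::size_t ref_length)
      {
        copy->ref= alloc_on(caller_root, 2 * ref_length);
      }

      // Fix 2: open() allocates ref only if a clone has not already set it up.
      void open(arena *table_root, std::size_t ref_length)
      {
        if (ref == NULL)
          ref= alloc_on(table_root, 2 * ref_length);
      }
    };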
called from a SELECT doesn't cause ROLLBACK of state"
Make private all class handler methods (PSEA API) that may modify
data. Introduce and deploy public ha_* wrappers for these methods
throughout sql/.
This is necessary to keep track of all data modifications in sql/,
which is in turn necessary to be able to optimize the two-phase
commit of those transactions that do not modify data.
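The wrapper pattern itself, reduced to a sketch (hypothetical class, not the actual handler definition):

    // The public ha_*() entry point is the only way to reach the engine's
    // virtual method, so it is the single place where a data modification
    // can be recorded for the read-only two-phase-commit optimization.
    class handler_sketch
    {
    public:
      int ha_write_row(unsigned char *buf)
      {
        mark_trx_read_write();    // every modification is tracked here
        return write_row(buf);    // dispatch to the storage engine
      }
      virtual ~handler_sketch() {}

    private:
      void mark_trx_read_write();                      // flags the transaction as writing
      virtual int write_row(unsigned char *buf)= 0;    // engine-specific implementation
    };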