This patch contains fixes for two problems:
1. As originally reported, the server crashed on Mac OS X when trying to access
an EXAMPLE table after the EXAMPLE plugin was installed.
It turned out that the dynamically loaded EXAMPLE plugin called the
function hash_search() from a Mac OS X system library instead of
hash_search() from MySQL's mysys library. Makefile.am in storage/example
does not include libmysys, so the Mac OS X linker arranged for hash_search()
to be resolved against the system library when the shared object is
loaded.
One possible solution would be to include libmysys into the linkage of
dynamic plugins. But then we must have a libmysys.so, which must be
used by the server too. This could have a minimal performance impact,
but above all the change seems to be too risky at the current state of
MySQL 5.1.
The selected solution is to rename MySQL's hash_search() to my_hash_search()
as has been done before with hash_insert() and hash_reset().
Since this is the third time we have needed to rename a hash_*() function,
I renamed all hash_*() functions to my_hash_*().
To avoid changing a zillion calls to these functions, and announcing
this to hundreds of developers, I added defines that map the old names
to the new names.
This change is in hash.h and hash.c.
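For illustration, such a mapping can look like the sketch below. This is an
illustrative subset only; the exact macro list and argument handling in
hash.h may differ.

  #define hash_search(hash, key, length)  my_hash_search((hash), (key), (length))
  #define hash_delete(hash, record)       my_hash_delete((hash), (record))
  #define hash_free(hash)                 my_hash_free((hash))

Old call sites keep compiling unchanged, while the exported symbols no
longer collide with the system library.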
2. The other problem was improper implementation of the handlerton-to-plugin
mapping. We use a fixed-size array to hold a plugin reference for each
handlerton. On every install of a handler plugin, we allocated a new slot
of the array. On uninstall we did not free it. After some uninstall/install
cycles the array overflowed. We did not check for overflow.
One fix is to check for overflow to stop the crashes.
Another fix is to free the array slot at uninstall and search for a free slot
at plugin install.
This change is in handler.cc.
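A minimal sketch of the slot handling idea (illustrative only; handler.cc's
real data structures and names differ):

  #include <cstddef>

  static const std::size_t MAX_HA= 64;       /* illustrative limit */
  static void *hton2plugin_sketch[MAX_HA];   /* plugin reference per handlerton */

  /* On install: reuse a freed slot; refuse to install when the array is full. */
  static int install_handlerton_sketch(void *plugin)
  {
    for (std::size_t i= 0; i < MAX_HA; i++)
    {
      if (hton2plugin_sketch[i] == NULL)     /* slot freed by an earlier uninstall */
      {
        hton2plugin_sketch[i]= plugin;
        return (int) i;
      }
    }
    return -1;                               /* overflow: report an error, don't crash */
  }

  /* On uninstall: clear the slot so it can be reused. */
  static void uninstall_handlerton_sketch(std::size_t slot)
  {
    hton2plugin_sketch[slot]= NULL;
  }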
from stored procedure.
Problem: we replace all references to local variables in stored procedures
with NAME_CONST(name, value) when writing statements to the binary log.
However, if the value's collation differs we might get an 'illegal mix of
collations' error, as we don't pass the collation to the function.
Fix: pass the value's collation to NAME_CONST().
Note: ideally we should pass the value's derivation to NAME_CONST() as well,
but that is impossible without modifying the parser. For now we always set the
derivation to DERIVATION_IMPLICIT, the same derivation local variables have.
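For illustration only, one plausible textual rendering of the substituted
expression (the real patch builds Items rather than strings, and the exact
form written to the binary log may differ; the helper name is made up):

  #include <string>

  /* Hypothetical helper: show how the collation can travel with the value.
     Without collation:  NAME_CONST('v', 'value')
     With collation:     NAME_CONST('v', _charset'value' COLLATE coll) */
  std::string name_const_text(const std::string &var_name,
                              const std::string &quoted_value,
                              const std::string &charset,
                              const std::string &collation)
  {
    return "NAME_CONST('" + var_name + "', _" + charset + quoted_value +
           " COLLATE " + collation + ")";
  }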
Server created "arc" directories inside database directories and
maintained there useless copies of .frm files.
Creation and renaming procedures of those copies as well as
creation of "arc" directories has been discontinued.
Removal procedure has been kept untouched to be able to
cleanup existent database directories by the DROP DATABASE
query. Also view renaming procedure has been updated to remove
these directories.
The JOIN for the subselect wasn't cleaned up if we hit an error during
sub_select() execution. That led to an assertion failure in
close_thread_tables().
Part of the 6.0 code has been backported.
per-file comments:
mysql-test/r/sp-error.result
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test result
mysql-test/t/sp-error.test
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test case
sql/sp_head.cc
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
lex->unit.cleanup() call added if not substatement
The problem is that when statement-based replication is enabled,
statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE
.. SELECT FROM .. need to grab a read lock on the source table that
does not permit concurrent inserts, and this lock would in turn be denied
if the source table is a log table, because log tables can't be
locked exclusively.
The solution is to not take such a lock when the source table is
a log table, as it is unsafe to replicate log tables under statement-based
replication. Furthermore, the read lock that does not permit
concurrent inserts is now only taken if statement-based replication
is enabled and the source table is not a log table.
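A minimal sketch of the resulting decision (simplified; the real code lives
in the table locking path and uses MySQL's own lock types):

  enum read_lock_kind { PLAIN_READ, READ_NO_INSERT };

  static read_lock_kind
  source_table_lock(bool stmt_based_binlog, bool source_is_log_table)
  {
    /* Only block concurrent inserts when statement-based binlogging needs
       the stronger lock and the source table is not a log table. */
    if (stmt_based_binlog && !source_is_log_table)
      return READ_NO_INSERT;
    return PLAIN_READ;
  }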
To improve performance when replicating to partitioned MyISAM tables
in row-based format, the number of rows in the current rows log event
is estimated and used to set up the storage engine for bulk inserts.
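A minimal sketch of the estimate (names are illustrative; the real code
works on the rows event's own buffers and passes the result to the
handler's bulk insert setup):

  #include <cstddef>

  /* Rough row count for one rows log event: total row data divided by the
     width of one packed row; used only as a hint for bulk insert sizing. */
  static std::size_t estimated_event_rows(std::size_t rows_data_bytes,
                                          std::size_t one_row_bytes)
  {
    return one_row_bytes ? rows_data_bytes / one_row_bytes : 0;
  }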
Machines with hostname set to "localhost" cause uniqueness errors in
the SQL bootstrap data.
Now, no rows are inserted in cases where the (lowercased) hostname is
the same as an already-inserted 'localhost' name. Also, fix a few tests
that expect certain local accounts to have a certain host name.
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address range
overlapped that of the new field value. This caused problems when the
cached value's storage was reallocated to hold a new character set
representation. The patch checks the address ranges, and if they overlap,
the new field value is copied to new storage before it is converted to the
new character set.
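A minimal sketch of the overlap test (illustrative names, not the actual
Field_blob code):

  #include <cstddef>

  /* Do the half-open byte ranges [a, a+a_len) and [b, b+b_len) intersect?
     If they do, the new value is first copied to separate storage, and the
     character set conversion then works on that copy. */
  static bool ranges_overlap(const unsigned char *a, std::size_t a_len,
                             const unsigned char *b, std::size_t b_len)
  {
    return a < b + b_len && b < a + a_len;
  }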
and
Bug#33555: Group By Query does not correctly aggregate partitions
Backport of the fix for Bug#33257, which is the same bug.
read_range_*() calls were not passed to the partition handlers,
but were translated to index_read/next family calls,
resulting in duplicate rows and wrong aggregations.
The fix for bug 31887 was incomplete: it assumes that all the
field types returned by the IS_NUM macro are descendants of
Field_num and tries to zero-fill the values before doing constant
substitution with such fields when they are compared to constant string
values.
The only exception to this is Field_timestamp: it's in the IS_NUM
macro, but is not a descendant of Field_num.
Fixed by excluding timestamp fields (Field_timestamp) from zero-filling
when converting the constant they are compared with to a string.
Note that this will not exclude the timestamp columns from const
propagation.
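A minimal sketch of the added exclusion (illustrative; the real code uses
MySQL's field classes and sits in the constant conversion path):

  /* Zero-fill the constant only for true numeric fields; TIMESTAMP is
     classified as numeric by IS_NUM() but must not be zero-filled. */
  static bool should_zero_fill(bool is_num_field, bool is_timestamp_field)
  {
    return is_num_field && !is_timestamp_field;
  }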