The abi_check target introduced as part of WL#4380 verifies
changes to mysql.h. mysql.h in turn includes mysql_version.h,
a file that is generated during the configure phase. We must
ensure that mysql_version.h is cleaned only during distclean
and not during clean.
The problem is that the function used by the server to increase
the thread's priority (pthread_setschedparam) has the unintended
side-effect of changing the calling thread's scheduling policy,
possibly overwriting a scheduling policy set by a sysadmin.
The solution is to rely on the pthread_setschedprio function, if
available, as it only changes the scheduling priority and does not
change the scheduling policy. This function is usually available on
Solaris and Linux, but its use won't work by default on Linux as the
default scheduling policy only accepts a static priority of 0 -- this
is acceptable for now as priority changing on Linux is broken anyway.
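A rough illustration of the approach (a sketch only; the
HAVE_PTHREAD_SETSCHEDPRIO guard and the helper name are assumptions,
not the server's actual code):

  #include <pthread.h>
  #include <sched.h>

  /* Prefer pthread_setschedprio() so the scheduling policy is untouched. */
  static int set_thread_priority(pthread_t thr, int prio)
  {
  #ifdef HAVE_PTHREAD_SETSCHEDPRIO
    return pthread_setschedprio(thr, prio);  /* changes only the priority */
  #else
    struct sched_param param;
    int policy;
    /* Fallback: reuse the current policy so a sysadmin's choice survives. */
    if (pthread_getschedparam(thr, &policy, &param))
      return -1;
    param.sched_priority= prio;
    return pthread_setschedparam(thr, policy, &param);
  #endif
  }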
The problem is that MySQL's 'fast' mutex implementation uses the
random() routine to determine the spin delay. Unfortunately, the
routine interface is not thread-safe and some implementations (e.g.
glibc) might use an internal lock to protect the RNG state, causing
excessive locking contention if lots of threads are spinning on
one of MySQL's 'fast' mutexes. The code was also misusing the
RAND_MAX macro, which represents the largest value that can be
returned from the rand() function, not from random().
The solution is to use the quite simple Park-Miller random number
generator. The initial seed is set to 1 because the previously used
generator wasn't being seeded -- the initial seed is 1 if srandom()
is not called.
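For reference, a minimal Park-Miller ("minimal standard") generator
looks roughly like this (a sketch; the state and function names are
illustrative, not the symbols used in the patch):

  #include <stdint.h>

  static uint32_t rnd_state= 1;  /* seed of 1, matching the unseeded case */

  static uint32_t rnd_next(void)
  {
    /* x(n+1) = 16807 * x(n) mod (2^31 - 1), computed in 64 bits */
    rnd_state= (uint32_t) (((uint64_t) rnd_state * 16807U) % 2147483647U);
    return rnd_state;
  }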
Furthermore, the 'fast' mutex implementation has several shortcomings
and provides no measurable performance benefit. Therefore, its use is
not recommended unless it provides directly measurable results.
This patch contains fixes for two problems:
1. As originally reported, the server crashed on Mac OS X when trying to access
an EXAMPLE table after the EXAMPLE plugin was installed.
It turned out that the dynamically loaded EXAMPLE plugin called the
function hash_search() from a Mac OS X system library, instead of
hash_search() from MySQL's mysys library. Makefile.am in storage/example
does not include libmysys. So the Mac OS X linker arranged for the
hash_search() function to be linked to the system library when the
shared object is loaded.
One possible solution would be to include libmysys in the linkage of
dynamic plugins. But then we must have a libmysys.so, which must be
used by the server too. This could have a minimal performance impact,
but foremost the change seems to be too risky at the current state of
MySQL 5.1.
The selected solution is to rename MySQL's hash_search() to my_hash_search(),
as has been done before with hash_insert() and hash_reset().
Since this is the third time we need to rename a hash_*() function,
I renamed all hash_*() functions to my_hash_*().
To avoid changing a zillion calls to these functions, and announcing
this to hundreds of developers, I added defines that map the old names
to the new names.
This change is in hash.h and hash.c.
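The compatibility mapping looks roughly like this (a sketch only; the
argument names are illustrative, not the exact prototypes in hash.h):

  #define hash_search(info, key, length)  my_hash_search(info, key, length)
  #define hash_insert(info, record)       my_hash_insert(info, record)
  #define hash_reset(info)                my_hash_reset(info)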
2. The other problem was improper implementation of the handlerton-to-plugin
mapping. We use a fixed-size array to hold a plugin reference for each
handlerton. On every install of a handler plugin, we allocated a new slot
in the array. On uninstall we did not free it. After some uninstall/install
cycles the array overflowed. We did not check for overflow.
One fix is to check for overflow to stop the crashes.
Another fix is to free the array slot at uninstall and search for a free slot
at plugin install.
This change is in handler.cc.
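A sketch of the intended slot handling (the array and function names
are illustrative, not the actual handler.cc code):

  #define MAX_SLOTS 64
  static void *plugin_slot[MAX_SLOTS];

  static int find_free_slot(void)
  {
    for (int i= 0; i < MAX_SLOTS; i++)
      if (plugin_slot[i] == NULL)
        return i;             /* reuse a slot freed by a prior uninstall */
    return -1;                /* array full: report an error, don't crash */
  }

  static void free_slot(int slot)
  {
    plugin_slot[slot]= NULL;  /* release the slot on plugin uninstall */
  }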
The problem is that when statement-based replication was enabled,
statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE
.. SELECT FROM need to grab a read lock on the source table that
does not permit concurrent inserts, which would in turn be denied
if the source table is a log table because log tables can't be
locked exclusively.
The solution is to not take such a lock when the source table is
a log table, as it is unsafe to replicate log tables under statement-
based replication. Furthermore, the read lock that does not permit
concurrent inserts is now only taken if statement-based replication
is enabled and if the source table is not a log table.
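The resulting decision can be sketched as follows (the helper and
enumerator names are illustrative; the server expresses this through
thr_lock types such as TL_READ_NO_INSERT):

  enum lock_choice { USE_TL_READ, USE_TL_READ_NO_INSERT };

  static enum lock_choice source_table_lock(int sbr_enabled, int is_log_table)
  {
    /* Block concurrent inserts only when the statement is replicated
       statement-based and the source is not a log table. */
    if (sbr_enabled && !is_log_table)
      return USE_TL_READ_NO_INSERT;
    return USE_TL_READ;
  }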
test_if_data_home_dir fixed to look at the real path.
Checks added to mi_open for symlinks into data home directory.
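For reference, the symlink test itself is simple; what my_is_symlink()
amounts to on POSIX systems can be sketched as follows (the real
implementation in mysys/my_symlink.c may differ in details):

  #include <sys/stat.h>

  int my_is_symlink(const char *filename)
  {
    struct stat statbuf;
    if (lstat(filename, &statbuf) == 0)
      return S_ISLNK(statbuf.st_mode);
    return 0;                  /* not a symlink, or lstat() failed */
  }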
per-file messages:
include/my_sys.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
my_is_symlink interface added
include/myisam.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink interface added
myisam/mi_check.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile calls modified
myisam/mi_open.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
code added to mi_open to check for symlinks into data home directory.
mi_open_datafile now accepts the 'original' file path to check if it's
an allowed symlink.
myisam/mi_static.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink defined
myisam/myisamchk.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile call modified
myisam/myisamdef.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile interface modified - 'real_path' parameter added
mysql-test/t/symlink.test
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error codes corrected as some paths pointing inside the data home directory are now rejected
mysql-test/r/symlink.result
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error messages corrected in the result
mysys/my_symlink.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
my_is_symlink() implemented
my_realpath() now returns the 'realpath' even if a file isn't a symlink
sql/mysql_priv.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
test_if_data_home_dir interface
sql/mysqld.cc
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink set to 'test_if_data_home_dir'
sql/sql_parse.cc
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error messages corrected
test_if_data_home_dir code fixed
1) Remove Solaris/SPARC-specific output produced by the
preprocessor in the .out files.
2) Ensure compatibility of preprocessor options for the
Solaris/SPARC platform.
1) Modified the abi_check rule to not write into the
distribution directory.
2) Added the .pp files to EXTRA_DIST so that they will
be included in the distribution.
Pull out some of unpack_dirname() into normalize_dirname(); this
new function does not expand "~" to the home directory. Use this
function in unpack_dirname(), and use it during init_default_directories()
to remove duplicate entries without losing track of which directory
is a user's home dir.
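A sketch of the resulting split (the prototypes are assumptions; the
real declarations in include/my_sys.h may differ):

  #include <stddef.h>

  /* Cleans up a directory name but leaves a leading '~' alone, so
     init_default_directories() can compare normalized names without
     losing track of which entry is the home directory. */
  size_t normalize_dirname(char *to, const char *from);

  size_t unpack_dirname(char *to, const char *from)
  {
    size_t length= normalize_dirname(to, from);  /* shared normalization */
    /* ... only unpack_dirname() goes on to expand '~' to the home dir ... */
    return length;
  }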
Another problem is that the backtrace facility wasn't being
enabled for non-Linux targets even if the target OS has the
backtrace functions. Also, the stacktrace functions inside
mysqltest were being used without proper checks for their
presence in the build.
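Where the backtrace functions exist, a dump can be produced along these
lines (a sketch; HAVE_BACKTRACE is shown as an assumed configure-time
guard):

  #include <execinfo.h>
  #include <unistd.h>

  static void print_stacktrace(void)
  {
  #ifdef HAVE_BACKTRACE
    void *addrs[128];
    int depth= backtrace(addrs, 128);
    /* Write symbolic frames straight to stderr; unlike backtrace_symbols(),
       this avoids calling malloc() inside a crash handler. */
    backtrace_symbols_fd(addrs, depth, STDERR_FILENO);
  #endif
  }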
The problem was that when an embedded-linked version of mysqltest
crashed, there was no way to obtain a stack trace if no core file
was available. Another problem is that the embedded version of
libmysql was not behaving (crashing) the same as the non-embedded one
with respect to sending commands to an explicitly closed connection.
The solution is to generate a mysqltest's stack trace on crash
and to enable "reconnect" if the connection handle was explicitly
closed so the behavior matches the non-embedded one.
Added a rule that uses gcc to generate preprocessor output (gcc -E)
that can then be compared to an already generated output using
the diff utility.
Ran make test on the repository to verify changes.
PREPARE", review fixes:
- make the patch follow the specification of WL#4166 and remove
the new error that was originally introduced.
Now the client never gets an error from reprepare, unless it failed.
I.e. even if the statement at hand returns a completely different
result set, this is not considered a server error.
The C API library, which cannot handle this situation, was modified to
return a client error.
Added additional test coverage.
for ENGINE=MRG_MYISAM (should be optimized out).
Before WL#3281 the MERGE engine had the HA_NOT_EXACT_COUNT flag
unset, and it worked with the COUNT optimization as desired.
After the removal of the HA_NOT_EXACT_COUNT flag, neither
HA_STATS_RECORDS_IS_EXACT (the opposite of the former HA_NOT_EXACT_COUNT
flag) nor the modern HA_HAS_RECORDS flag was added to the MERGE
table flag mask.
1. The HA_HAS_RECORDS table flag has been set.
2. The ha_myisammrg::records method has been overridden to
calculate the total number of records in the underlying tables.
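Conceptually the override just sums the child tables' counts (a sketch
in plain C; ha_myisammrg::records() itself is a C++ handler method and
its member names differ):

  /* total row count over all underlying MyISAM tables */
  unsigned long long merge_records(const unsigned long long *child_rows,
                                   int n_children)
  {
    unsigned long long total= 0;
    for (int i= 0; i < n_children; i++)
      total+= child_rows[i];
    return total;
  }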
WL#4165 Prepared statements: validation
WL#4166 Prepared statements: automatic re-prepare
Fixes
Bug#27430 Crash in subquery code when in PS and table DDL changed after PREPARE
Bug#27690 Re-execution of prepared statement after table was replaced with a view crashes
Bug#27420 A combination of PS and view operations cause error + assertion on shutdown
The basic idea of the patch is to keep track of table metadata between
prepared statement prepare and execute. If some table used in the statement
has changed, the prepared statement is re-prepared before execution.
See WL#4165 and WL#4166 contents and comments in the code for details
of the implementation.
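The check itself can be pictured like this (a sketch; the structure and
function names are illustrative, not the server's symbols):

  /* version recorded for each table at PREPARE time */
  struct prepared_table { unsigned long long version_at_prepare; };

  /* Non-zero means the statement must be silently re-prepared. */
  int needs_reprepare(const struct prepared_table *tabs, int n,
                      const unsigned long long *current_versions)
  {
    for (int i= 0; i < n; i++)
      if (tabs[i].version_at_prepare != current_versions[i])
        return 1;
    return 0;
  }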
When linking with some external programs, a "multiple definition
of `init_time'" error occurred.
Rename init_time() to my_init_time() to avoid collision with other
libraries (particularly libmng).