test_if_data_home_dir fixed to look at the real path.
Checks added to mi_open for symlinks pointing into the data home directory.
per-file messages:
include/my_sys.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
my_is_symlink interface added
include/myisam.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink interface added
myisam/mi_check.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile calls modified
myisam/mi_open.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
code added to mi_open to check for symlinks pointing into the data home
directory (see the sketch below the per-file list).
mi_open_datafile now accepts the 'original' file path so it can check
whether it is an allowed symlink.
myisam/mi_static.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink defined
myisam/myisamchk.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile call modified
myisam/myisamdef.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile interface modified - 'real_path' parameter added
mysql-test/t/symlink.test
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error codes corrected, as the patch now rejects symlinks pointing
inside the data home directory
mysql-test/r/symlink.result
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error messages corrected in the result
mysys/my_symlink.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
my_is_symlink() implemented
my_realpath() now returns the 'realpath' even if a file isn't a symlink
sql/mysql_priv.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
test_if_data_home_dir interface
sql/mysqld.cc
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink set to 'test_if_data_home_dir'
sql/sql_parse.cc
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error messages corrected
test_if_data_home_dir code fixed
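For illustration only, a minimal sketch of the kind of check described
above, with hypothetical helper names (the real patch wires my_is_symlink(),
my_realpath() and the myisam_test_invalid_symlink hook that the server
points at test_if_data_home_dir()):

  /* Sketch only: refuse to open a file whose symlink resolves back into
     the server's data home directory. */
  #include <limits.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/stat.h>

  static bool is_symlink(const char *path)            /* cf. my_is_symlink() */
  {
    struct stat st;
    return lstat(path, &st) == 0 && S_ISLNK(st.st_mode);
  }

  /* cf. myisam_test_invalid_symlink, which the server sets to
     test_if_data_home_dir() at startup */
  static bool points_into_data_home(const char *resolved, const char *data_home)
  {
    return strncmp(resolved, data_home, strlen(data_home)) == 0;
  }

  /* cf. the check added to mi_open()/mi_open_datafile(): the 'original'
     path is kept so the resolved target can be validated before use */
  static bool open_is_allowed(const char *org_path, const char *data_home)
  {
    char resolved[PATH_MAX];
    if (realpath(org_path, resolved) == NULL)          /* cf. my_realpath() */
      return false;                               /* cannot resolve: refuse */
    if (is_symlink(org_path) && points_into_data_home(resolved, data_home))
      return false;      /* DATA/INDEX DIRECTORY symlinked back into datadir */
    return true;
  }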
We could allocate chunks larger than 4GB, but did our
size-accounting in 32-bit values. This could lead to
spurious warnings, inaccurate accounting, and, in
theory, data loss.
Affected: 64-bit platforms. Debug-build (with safemalloc).
At least one buffer larger than 4GB. For potential data
loss, a re-alloc on such a buffer would be necessary.
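For illustration, a small stand-alone example of the truncation (not the
safemalloc code itself): a request larger than 4GB stored in a 32-bit
counter silently loses its high bits.

  #include <stdint.h>
  #include <stdio.h>

  int main()
  {
    size_t   requested = 5ULL * 1024 * 1024 * 1024;  /* a 5GB chunk on 64-bit */
    uint32_t accounted = (uint32_t) requested;       /* old 32-bit bookkeeping */
    /* 5GB modulo 4GB = 1GB: the accounting is off by 4GB for this one
       buffer, so totals, warnings and re-alloc bookkeeping can go wrong. */
    printf("requested=%llu accounted=%u\n",
           (unsigned long long) requested, (unsigned) accounted);
    return 0;
  }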
Tilde expansion could fail when it had to expand to an empty string (such
as when HOME is set to an empty string), especially on systems where
size_t is unsigned.
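A rough illustration of the unsigned-arithmetic pitfall (simplified, not
the actual unpack_dirname() code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main()
  {
    const char *home = getenv("HOME") ? getenv("HOME") : "";
    size_t h_length = strlen(home);            /* 0 when HOME is set to "" */
    /* Buggy pattern: 'h_length - 1' on the unsigned size_t wraps around to
       a huge value instead of -1, so a later length check or copy goes wild. */
    size_t last = h_length - 1;
    printf("h_length=%zu  h_length-1=%zu\n", h_length, last);
    /* Safe pattern: handle the empty expansion explicitly. */
    if (h_length == 0)
      puts("tilde expands to an empty string: leave the path as it is");
    return 0;
  }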
This fix is for 5.0 only: backporting the 6.0 patch manually.
The parser code in sql/sql_yacc.yy needs to be more robust to out of
memory conditions, so that when parsing a query fails due to OOM,
the thread gracefully returns an error.
Before this fix, a new/alloc returning NULL could:
- cause a crash, if dereferencing the NULL pointer,
- produce a corrupted parsed tree, containing NULL nodes,
- alter the semantics of a query, by silently dropping token values or nodes
With this fix:
- C++ constructors are *not* executed with a NULL "this" pointer
when operator new fails.
This is achieved by declaring "operator new" with a "throw ()" clause,
so that a failed new gracefully returns NULL on OOM conditions.
- calls to new/alloc are tested for a NULL result,
- The thread diagnostic area is set to an error status when OOM occurs.
This ensures that a request failing in the server properly returns an
ER_OUT_OF_RESOURCES error to the client.
- OOM conditions cause the parser to stop immediately (MYSQL_YYABORT).
This prevents further crashes caused by using a partially built parsed
tree in later rules of the parser.
No test scripts are provided, since automating OOM failures is not
instrumented in the server.
Tested under the debugger, to verify that an error in alloc_root causes
the thread to return gracefully all the way to the client application,
with an ER_OUT_OF_RESOURCES error.
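A simplified sketch of the pattern described above (placeholder names, not
sql_yacc.yy code; the my_error()/MYSQL_YYABORT calls are shown as comments
because the surrounding parser context is omitted):

  #include <cstddef>
  #include <cstdlib>

  /* cf. the fix: declare operator new with an empty throw() specification
     so a failed allocation returns NULL instead of throwing, and the
     constructor is then never executed with a NULL 'this'. */
  struct Item
  {
    static void *operator new(std::size_t size) throw()
    { return std::malloc(size); }
    static void operator delete(void *ptr) { std::free(ptr); }
    int value;
    Item() : value(42) {}
  };

  /* Illustrative parser action: every new/alloc result is tested, the
     diagnostic area gets the error, and the parse is aborted at once. */
  bool add_item_to_tree()
  {
    Item *node= new Item();             /* returns NULL on OOM, no exception */
    if (node == NULL)
    {
      /* my_error(ER_OUT_OF_RESOURCES, MYF(0));  -- set the diagnostic area */
      /* MYSQL_YYABORT;                          -- stop the parser here    */
      return true;                               /* error                   */
    }
    /* ... attach node to the parsed tree ... */
    delete node;
    return false;                                /* success                 */
  }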
Pull out some of unpack_dirname() into normalize_dirname(); this
new function does not expand "~" to the home directory. Use this
function in unpack_dirname(), and use it during init_default_directories()
to remove duplicate entries without losing track of which directory
is a user's home dir.
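A rough sketch of the split, with simplified logic (not the actual mysys
implementation):

  #include <cstdlib>
  #include <string>

  /* Clean up a directory name (collapse "//", ensure a trailing slash) but
     do NOT expand '~': this is the part default-directory handling can use
     to detect duplicates without losing track of the home dir entry. */
  static std::string normalize_dirname(std::string dir)
  {
    std::string out;
    for (std::string::size_type i= 0; i < dir.size(); i++)
    {
      if (dir[i] == '/' && !out.empty() && out[out.size() - 1] == '/')
        continue;                               /* collapse duplicate slashes */
      out+= dir[i];
    }
    if (out.empty() || out[out.size() - 1] != '/')
      out+= '/';
    return out;
  }

  /* unpack_dirname() keeps its old behaviour by expanding '~' first and
     then normalizing the result. */
  static std::string unpack_dirname(std::string dir)
  {
    if (!dir.empty() && dir[0] == '~')
    {
      const char *home= getenv("HOME");
      if (home != NULL)
        dir= std::string(home) + dir.substr(1);
    }
    return normalize_dirname(dir);
  }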
The problem was that when an embedded-linked version of mysqltest
crashed, there was no way to obtain a stack trace if no core file
was available. A further problem was that the embedded version of
libmysql did not behave the same (crash) as the non-embedded one
with respect to sending commands to an explicitly closed connection.
Another problem was that the backtrace facility wasn't being
enabled for non-Linux targets even if the target OS has the
backtrace functions. Also, the stacktrace functions inside
mysqltest were being used without proper checks for their
presence in the build.
The solution is to generate a stack trace from mysqltest on crash
and to enable "reconnect" if the connection handle was explicitly
closed, so that the behavior matches the non-embedded one.
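A hedged sketch of such a crash handler, assuming the usual
HAVE_EXECINFO_H build check and the glibc backtrace() functions:

  #include <csignal>
  #include <cstdlib>
  #include <unistd.h>

  #ifdef HAVE_EXECINFO_H              /* assumed build flag: only use the     */
  #include <execinfo.h>               /* facility when the target provides it */
  #endif

  static void crash_handler(int sig)
  {
  #ifdef HAVE_EXECINFO_H
    void *frames[64];
    int n= backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);   /* async-signal-safe */
  #endif
    _exit(128 + sig);
  }

  int main()
  {
    /* Install for the usual crash signals before running the test driver. */
    signal(SIGSEGV, crash_handler);
    signal(SIGABRT, crash_handler);
    /* ... run the tests ... */
    return 0;
  }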
When trying to get the requested amount of memory for the key buffer,
an out-of-memory error could be signaled if one of the tentative
allocations failed. Later the server would crash (debug assert) when
trying to send an OK packet with an error set.
The solution is to signal the error only if all tentative allocations
for the key buffer fail.
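A simplified sketch of the retry logic (hypothetical names, not the actual
key cache code): shrink the request on failure and report OOM only when
every attempt has failed.

  #include <cstddef>
  #include <cstdlib>

  static char *allocate_key_buffer(std::size_t requested, std::size_t min_size,
                                   std::size_t *allocated)
  {
    if (min_size == 0)
      min_size= 1;                     /* avoid looping forever on size 0 */
    for (std::size_t size= requested; size >= min_size; size/= 2)
    {
      char *buf= (char *) std::malloc(size);
      if (buf != NULL)
      {
        *allocated= size;      /* success, possibly with less than requested */
        return buf;
      }
    }
    *allocated= 0;
    return NULL;          /* all tentative allocations failed: now signal OOM */
  }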
Each time the server reloads privileges containing table grants, it
allocates more memory than needed because of a badly chosen growth
prediction in the underlying dynamic arrays.
This patch introduces a new signature for the hash container
initializer which enables a much more pessimistic growth approach in
favour of more efficient memory usage.
This patch was supplied by Google Inc.
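A generic illustration of the idea (not the actual mysys HASH/DYNAMIC_ARRAY
interface): the caller supplies the growth step so the backing array grows
conservatively instead of over-reserving.

  #include <cstddef>
  #include <vector>

  struct GrantTable { /* grant data */ };

  class GrantHash
  {
  public:
    GrantHash(std::size_t initial_size, std::size_t growth_step)
      : step(growth_step ? growth_step : 1)
    {
      entries.reserve(initial_size);
    }

    void insert(const GrantTable &grant)
    {
      if (entries.size() == entries.capacity())
        entries.reserve(entries.capacity() + step);    /* grow in small steps */
      entries.push_back(grant);
    }

  private:
    std::size_t step;
    std::vector<GrantTable> entries;
  };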
with errno 17
my_create() did not perform any checks for the case when a file is
successfully created by a call to open(), but the call to
my_register_filename() later fails because the number of open files
has exceeded the my_open_files limit. This can happen on platforms
which do not have getrlimit(), and hence we do not know the real limit
for open files. In such a case an error was returned to the caller
even though the file had actually been created. Since callers assume
my_create() to return an error only when it failed to create a file,
they did not perform any cleanups, leaving an 'orphaned' file on the
file system.
Fixed by adding a check for the above case to my_create() and ensuring
the newly created file is deleted before returning an error.
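A simplified sketch of the fix, with stand-in names (the real code lives in
mysys/my_create.c and uses my_register_filename()):

  #include <fcntl.h>
  #include <unistd.h>

  /* Stand-in for the mysys bookkeeping that can fail once too many
     descriptors are registered (hypothetical, always succeeds here). */
  static bool register_filename(int, const char *) { return true; }

  static int create_file(const char *name, int flags, mode_t mode)
  {
    int fd= open(name, flags | O_CREAT, mode);
    if (fd < 0)
      return -1;                            /* creation itself really failed */
    if (!register_filename(fd, name))
    {
      /* Registration failed (e.g. the my_open_files limit was exceeded)
         although the file now exists on disk: clean up before returning
         the error so the caller is not left with an 'orphaned' file. */
      close(fd);
      unlink(name);
      return -1;
    }
    return fd;
  }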
Creating a deterministic test case in the test suite is impossible,
because the exact steps required to reproduce the above situation
depend on the platform and/or environment (OS per-user limits, queries
executed by previous tests, startup parameters). The patch was
manually tested on Windows using examples posted in the bug report.
Bug#34678 @@debug variable's incremental mode
The problem is that the per-thread debugging settings stack wasn't
being deallocated before thread termination, leaking the stack
memory. The chosen solution is to push a new state if the current
one is still set to the initial settings, and to pop (free) it once
the thread finishes.
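A generic sketch of the shape of that fix (illustrative names, not the
actual DBUG code): push a thread-owned copy instead of touching the shared
initial settings, and pop/free everything at thread exit.

  #include <stack>
  #include <string>

  struct DebugSettings
  {
    std::string spec;
    bool is_initial;                 /* true for the shared initial settings */
    DebugSettings(const std::string &s, bool initial)
      : spec(s), is_initial(initial) {}
  };

  class ThreadDebugState
  {
  public:
    ThreadDebugState() { states.push(DebugSettings("", true)); }

    void set(const std::string &spec)
    {
      /* Never modify the shared initial settings in place: push a copy
         owned by this thread first, then change that copy. */
      if (states.top().is_initial)
        states.push(DebugSettings(states.top().spec, false));
      states.top().spec= spec;
    }

    /* Runs at thread termination: pop (free) everything the thread pushed
       so the settings stack no longer leaks. */
    ~ThreadDebugState()
    {
      while (!states.empty() && !states.top().is_initial)
        states.pop();
    }

  private:
    std::stack<DebugSettings> states;
  };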