Problem: when deciding whether to log queries not using indexes, we
checked a special flag that is set only at server startup and does not
change together with the corresponding server variable.
Fix: check the variable value instead of the flag.
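A hedged sketch of the before/after, with hypothetical flag and variable
names; the real check sits in the slow-log code path:

  /* before: decision frozen at startup, ignores later SET GLOBAL changes */
  if (specialflag & SPECIAL_LOG_QUERIES_NOT_USING_INDEXES)
    log_slow_statement(thd);

  /* after: consult the variable's current value */
  if (opt_log_queries_not_using_indexes)
    log_slow_statement(thd);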
Long shared-memory-base-names could overflow a static internal buffer
and thus crash mysqld and various clients. Change both to dynamic
buffers, so that even base names that used to overflow the static
buffers now work.
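A minimal sketch of the kind of change, with a hypothetical event-name
suffix; the real fix touches both the server and the client library:

  /* before: a long base name silently overruns the static array */
  char event_name[64];
  strxmov(event_name, base_name, "_CONNECT_REQUEST", NullS);

  /* after: size the buffer from the actual name length */
  size_t len= strlen(base_name) + sizeof("_CONNECT_REQUEST");
  char *event_name= (char*) my_malloc(len, MYF(MY_WME));
  if (event_name)
    strxmov(event_name, base_name, "_CONNECT_REQUEST", NullS);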
The test case for this would pretty much amount to
mysqld --shared-memory-base-name=HeyMrBaseNameXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX --shared-memory=1 &
mysqladmin --no-defaults --shared-memory-base-name=HeyMrBaseNameXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX shutdown
Unfortunately, we can't just use an .opt file for the
server. The .opt file is used at start-up, before any
include in the actual test can tell mysqltest to skip
this one on non-Windows. As a result, such a test would
break on unices.
Fixing mysql-test-run.pl to export the full paths for master
and slave would enable us to start a server from within
the test, which is ugly and, what's more, doesn't work: the
server blocks (mysqltest offers no fire-and-forget
fork-and-exec), and mysqladmin never gets run.
Naming the test rpl_windows_shm or some such so we can
special-case it by name is beyond ugly. As is introducing
another file-name based special case (run "win*.test" only
when on Windows). As is (yuck) coding half the test into mtr
(as in, having it hand out a customized environment conducive
to the shm-thing on Win only).
The situation is exacerbated by the fact that .sh files do
not necessarily run as expected on Win.
In short, it's just not worth it. No test-case until we
have a new-and-improved test framework.
In many cases, binaries can no longer dump core after calling setuid().
Where the PR_SET_DUMPABLE macro is defined, use the prctl() system call
to tell the kernel that it is still allowed to dump the core of the
server process.
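A minimal standalone sketch of the call, assuming Linux; in the server
the prctl() call goes right after privileges are dropped:

  #include <stdio.h>
  #include <sys/prctl.h>
  #include <sys/types.h>
  #include <unistd.h>

  static void drop_privileges(uid_t uid)
  {
    if (setuid(uid) == -1)       /* dropping privileges clears the */
      perror("setuid");          /* process's dumpable flag        */
  #if defined(PR_SET_DUMPABLE)
    if (prctl(PR_SET_DUMPABLE, 1) == -1)   /* re-enable core dumps */
      perror("prctl");
  #endif
  }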
When a Windows console application that has an open console (e.g. mysqld-nt
started with the --console option) encounters certain types of errors
(like no floppy disk in a floppy drive), the OS will pop up an
"abort/retry/ignore" dialog and block the application (depending on a
registry setting: see http://msdn2.microsoft.com/en-us/embedded/aa731206.aspx
for details).
Fixed by disabling the dialog popups for every error except GPFs and
alignment errors. This is safe to do, as the actual error still gets
reported to (and handled by) mysqld.
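A sketch of the Win32 call involved, assuming it is issued once early in
server startup; the flags that would also suppress GPF and alignment-fault
dialogs are deliberately left out:

  #include <windows.h>

  /* Suppress "abort/retry/ignore" popups for critical errors and failed
     file opens. SEM_NOGPFAULTERRORBOX and SEM_NOALIGNMENTFAULTEXCEPT are
     not set, so GPFs and alignment errors surface as before. */
  static void disable_error_popups(void)
  {
    SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX);
  }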
- A race condition caused brief unavailability when trying to access
a table.
- The unprotected variable 'grant_option' wasn't intended to change
during normal execution. Its initialization was moved to grant_init(),
and the lines responsible for changing it were removed.
When storing a large number to a FLOAT or DOUBLE field with fixed length, it could be incorrectly truncated if the field's length was greater than 31.
This patch also does some code cleanups to be able to reuse code which is common between Field_float::store() and Field_double::store().
On some Linux distributions with both LinuxThreads and NPTL glibc versions
available, statically built binaries can crash, because the linker defaults
to LinuxThreads when linking statically, but calls to external libraries
(like libnss) are resolved to the NPTL versions.
Since there is nothing we can do in the code to work around that, just give
the user advice on how to fix it if a crash happens on such a binary/OS
combination.
The value of the command line and configuration file option
'key_cache_block_size' was reduced by MALLOC_OVERHEAD (8 in a production
server, 36 in a debug server) and then restricted to the greatest
multiple of 512 less than or equal to the reduced value. For example, in
a production server a user value of 1536 became 1536 - 8 = 1528, which
was then rounded down to 1024.
This patch changes option 'key_cache_block_size' to not deduct
MALLOC_OVERHEAD from the input value. However, the restriction
to a multiple of 512 is still applied.
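A minimal sketch of the old versus new computation; the function names are
hypothetical, only the arithmetic is what the entry describes:

  #define MALLOC_OVERHEAD 8   /* production build; 36 in a debug build */

  /* old: deduct the overhead, then round down to a multiple of 512 */
  unsigned long old_block_size(unsigned long user_value)
  {
    return ((user_value - MALLOC_OVERHEAD) / 512) * 512;
  }

  /* new: round the user-supplied value itself down to a multiple of 512 */
  unsigned long new_block_size(unsigned long user_value)
  {
    return (user_value / 512) * 512;
  }

With this, old_block_size(1536) yields 1024, while new_block_size(1536)
keeps the full 1536.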
Problem: setting/displaying @@LC_TIME_NAMES didn't distinguish between
the GLOBAL and SESSION variable types - the SESSION variable was
always set/shown.
Fix: set either the global or the session value, as requested.
Also, "mysqld --lc-time-names" was added to set the global default value.
what it actually means (Monty approved the renaming)
- correcting the description of the transaction_alloc command-line options
(our manual is correct)
- fix for a failure of the rpl_trigger test.
TABLE ... WRITE".
Memory and CPU hogging occurred when a connection that had to wait for a
table lock was serviced by a thread that had previously serviced a
connection that was killed (note that connections can reuse threads if the
thread cache is enabled).
One possible scenario which exposed this problem: the thread that provided
a binlog dump to a replication slave was implicitly/automatically killed
when the same slave reconnected and started pulling data through a
different thread/connection.
The problem also occurred when one killed a particular query in a
connection (using KILL QUERY) and this connection later had to wait for
some table lock.
This problem was caused by the fact that the thread-specific
mysys_var::abort variable, which indicates that waiting operations on the
mysys layer should be aborted (this includes waiting for table locks), was
set by the kill operation but never reset afterwards. So this value was
"inherited" by the following statements or even by other connections (which
reused the same physical thread). Such a discrepancy between this variable
and the THD::killed flag broke logic on the SQL layer and caused CPU and
memory hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test case associated with this patch, since it is hard to test
for memory/CPU hogging conditions in our test suite.
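A hedged sketch of the reset, with a hypothetical helper name and
simplified locking; the real patch resets the flag in the appropriate
kill/cleanup paths in sql/:

  /* don't let a past KILL abort waits of future statements/connections */
  void reset_mysys_abort_flag(THD *thd)
  {
    st_my_thread_var *mysys_var= thd->mysys_var;
    if (mysys_var)
    {
      pthread_mutex_lock(&mysys_var->mutex);
      mysys_var->abort= 0;
      pthread_mutex_unlock(&mysys_var->mutex);
    }
  }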
TABLE ... WRITE".
CPU hogging occurred when a connection that had to wait for a table lock
was serviced by a thread that had previously serviced a connection that
was killed (note that connections can reuse threads if the thread cache
is enabled).
One possible scenario which exposed this problem: the thread that provided
a binlog dump to a replication slave was implicitly/automatically killed
when the same slave reconnected and started pulling data through a
different thread/connection.
In 5.* versions, memory hogging was added to the CPU hogging. Moreover, in
those versions the problem also occurred when one killed a particular
query in a connection (using KILL QUERY) and this connection later had to
wait for some table lock.
This problem was caused by the fact that the thread-specific
mysys_var::abort variable, which indicates that waiting operations on the
mysys layer should be aborted (this includes waiting for table locks), was
set by the kill operation but never reset afterwards. So this value was
"inherited" by the following statements or even by other connections (which
reused the same physical thread). Such a discrepancy between this variable
and the THD::killed flag broke logic on the SQL layer and caused CPU and
memory hogging.
This patch tries to fix this problem by properly resetting this member.
There is no test case associated with this patch, since it is hard to test
for memory/CPU hogging conditions in our test suite.
fixes).
The background: on a replication slave, when a trigger creation
was filtered out by application of a replicate-do-table/
replicate-ignore-table rule, the parsed definition of the trigger was not
cleaned up properly. The LEX::sphead member was left around and leaked
memory. Until the actual implementation of support of
replicate-ignore-table rules for triggers by the patch for Bug 24478, it
was never the case that "case SQLCOM_CREATE_TRIGGER"
was not executed once a trigger was parsed,
so the deletion of lex->sphead there worked and the memory did not leak.
The fix:
The real cause of the bug is that there is no single place (or even a
couple of places) where we can clean up the main LEX after parsing. And
the reason we cannot have just one or two such places is the
asymmetric behaviour of MYSQLparse in case of success or error.
One of the root causes of this behaviour is the code in the Item::Item()
constructor. There, a newly created item adds itself to THD::free_list
- a singly-linked list of Items used in a statement. Yuck. This code
is unaware that we may have more than one statement active at a time,
and always assumes that the free_list of the current statement is
located in THD::free_list. One day we need to be able to explicitly
allocate an item in a given Query_arena.
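A simplified sketch of the constructor behaviour described above, assuming
5.0-era member names:

  Item::Item()
  {
    THD *thd= current_thd;
    /* unconditionally link every new item into *the current THD's* list,
       whichever statement is actually being parsed or executed */
    next= thd->free_list;
    thd->free_list= this;
  }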
Thus, when parsing a definition of a stored procedure, like
CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END;
we actually need to reset THD::mem_root, THD::free_list and THD::lex
to parse the nested procedure statements (the SELECTs).
The actual reset and restore is implemented in the semantic actions
attached to the sp_proc_stmt grammar rule.
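A hedged sketch of the reset/restore pair around a nested statement,
assuming the 5.0-era Query_arena helpers; the real semantic actions do
more bookkeeping:

  Query_arena *arena= thd->stmt_arena, backup;
  /* switch THD's mem_root and free_list to the statement being parsed */
  thd->set_n_backup_active_arena(arena, &backup);
  /* ... parse the nested statement ... */
  /* put the outer statement's mem_root and free_list back */
  thd->restore_active_arena(arena, &backup);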
The problem is that in case of a parsing error inside a nested statement
the Bison-generated parser would abort immediately, without executing the
restore part of the semantic action. This would leave THD in an
in-the-middle-of-parsing state.
This is why we couldn't have a single place where we clean up the LEX
after MYSQLparse - in case of an error we needed to do a clean up
immediately, while in case of success the clean up could be delayed.
This left the door open for a memory leak.
The following possibilities were considered when working on a fix:
- patch the replication logic to do the clean up. Rejected,
as it breaks module borders: replication code should not need to know the
gory details of the clean up procedure after CREATE TRIGGER.
- wrap MYSQLparse with a function that would do the clean up.
Rejected, as ideally we should fix the problem where it happens, not
adjust for it outside of the problematic code.
- make sure MYSQLparse cleans up after itself by invoking the clean up
functionality in the appropriate places before return. Implemented in
this patch (see the sketch after this list).
- use a %destructor rule for sp_proc_stmt to restore THD - cleaner
than the previous approach, but rejected
because it needs a careful analysis of the side effects, this patch is
for 5.0, and long term we need to use the next alternative anyway.
- make sure that sp_proc_stmt doesn't juggle with THD - this is a
large work that will affect many modules.
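A hedged sketch of the chosen direction, with a hypothetical helper name;
the real patch invokes the existing cleanup code from the parser's error
and success paths before MYSQLparse returns:

  /* invoked on every return path out of MYSQLparse() */
  static void cleanup_after_parse_error(THD *thd)
  {
    if (thd->lex->sphead)
    {
      delete thd->lex->sphead;   /* free a partially parsed routine */
      thd->lex->sphead= NULL;
    }
  }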
Cleanup: move main_lex and main_mem_root from Statement to its
only two descendants, Prepared_statement and THD. This ensures that
when a Statement instance is created for purposes of statement backup,
we do not involve the LEX constructor/destructor, which is fairly
expensive.
To verify that the transformation preserves equivalent functionality,
please check the respective constructors and destructors of Statement,
Prepared_statement and THD - these members were used only there.
This cleanup is unrelated to the patch.
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
error)
Bug 25345 (Cursors from Functions)
This fix resolves a long-standing issue originally reported with bug 8407,
which affected the behavior of Stored Procedures, Stored Functions and
Triggers in many different ways, causing the symptoms reported by all the
bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved with sub-statements.
Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers
- all the stored procedures
involved, and recursively inspect these objects definition to find more
references to more objects, until the list of every object referenced does
not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the tables (possibly) used at once is to
prevent dead locks.
One problem with this approach is that, if the execution path the code
actually takes at runtime does not use a given table, and that table is
missing, the server still refuses to execute the statement.
This in particular has a major impact on triggers, since a missing table
referenced by an update/delete trigger would prevent an insert trigger
from running.
Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.
With this fix, some constraints in the internal implementation of the
pre-locking code have been relaxed, so that failure to open a table does
not necessarily prevent execution of a stored routine.
In particular, the pre-locking mechanism now behaves as follows:
1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.
2) the next step, which is to open all the tables involved, only attempts
to open the tables added by the pre-locking code, but silently fails
without reporting any error or invoking any exception handler if the table
is not present. This is achieved by trapping internal errors with
Prelock_error_handler (see the sketch after this list).
3) the locking step only locks tables that were successfully opened.
4) when executing sub-statements, the list of tables used by each
statement is evaluated as before. The tables needed by the sub-statement
are expected to be already opened and locked. Statements referencing
tables that were not opened in step 2) will fail to find the table in the
open list, and only at this point will execution of the user code fail.
5) when a runtime exception is raised at 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables()
6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.
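A hedged sketch of the Prelock_error_handler idea from step 2), assuming
the 5.0-era Internal_error_handler interface; member and method details
are illustrative:

  class Prelock_error_handler : public Internal_error_handler
  {
  public:
    Prelock_error_handler() : m_handled_errors(0) {}

    /* swallow "table doesn't exist" while opening pre-locked tables */
    virtual bool handle_error(uint sql_errno, const char *message,
                              MYSQL_ERROR::enum_warning_level level,
                              THD *thd)
    {
      if (sql_errno == ER_NO_SUCH_TABLE)
      {
        m_handled_errors++;
        return TRUE;             /* handled: don't report, don't raise */
      }
      return FALSE;              /* everything else propagates as usual */
    }

    bool safely_trapped_errors() const { return m_handled_errors > 0; }

  private:
    int m_handled_errors;
  };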
With all these changes, a side effect is that view security is impacted, in
two different ways.
First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap exceptions
during execution and still return a valid result, there is no way to
decide at view creation time whether a missing table really causes the
view to be invalid.
Secondly, testing for the existence of tables is now done later, during
execution. View security, which consists of trapping errors and returning
a generic ER_VIEW_INVALID (to prevent disclosing information), was
implemented only at very specific phases covering *opening* tables, but
not covering the runtime execution. Because of this existing limitation,
errors that were previously trapped and converted into ER_VIEW_INVALID are
no longer trapped, causing table names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.