The problem was that the scheduler function used to handle a
new user connection could use the ER() macro without having a
THD object bound to the current thread. The crash would happen
whenever the function failed to create a new thread to handle a
user connection. Thread creation can fail due to a lack of
available resources or because a resource limit was reached.
The solution is to use the ER_THD() macro instead and pass to it
the THD object that will be bound to the connection.
The fix was tested manually; it is too cumbersome to inject an
error in this context from a test case.
sql/mysqld.cc:
Use ER_THD and pass the object.
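A minimal, self-contained sketch of the difference (the er_stub()/er_thd_stub() helpers,
the THD field and report_thread_creation_failure() are illustrative stand-ins, not the
real server macros): the thread-local variant dereferences a per-thread THD pointer
that is not yet bound when thread creation fails, while the explicit variant only
touches the THD object passed to it.

#include <cstdio>

struct THD { const char *last_error_fmt; };

// Stand-in for the per-thread THD binding; it is still NULL at the point
// where creating the handler thread for a new connection fails.
thread_local THD *current_thd_stub= nullptr;

static const char *er_stub(int)               // ER()-like: uses the thread-local THD
{ return current_thd_stub->last_error_fmt; }  // NULL dereference if no THD is bound

static const char *er_thd_stub(THD *thd, int) // ER_THD()-like: uses the passed THD
{ return thd->last_error_fmt; }

static void report_thread_creation_failure(THD *connection_thd)
{
  // Old shape: fprintf(stderr, "%s\n", er_stub(0));        // crashes here
  fprintf(stderr, "%s\n", er_thd_stub(connection_thd, 0));  // fixed shape
}

int main()
{
  THD thd= { "Can't create a new thread; check resource limits" };
  report_thread_creation_failure(&thd);
  return 0;
}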
Quoting from the bug report:
The pstack library has been included in MySQL since version
4.0.0. It's useless and should be removed.
Details: According to its own documentation, pstack only works
on Linux on x86 in 32 bit mode and requires LinuxThreads and a
statically linked binary. It doesn't really support any Linux
from 2003 or later and doesn't work on any other OS.
The --enable-pstack option is thus deprecated and has no effect.
sporadically.
The cause of the sporadic timeout was a leaked protection
against the global read lock, taken by the RENAME statement
and not released if an error occurred during RENAME.
The leaked protection counter would keep the value of
protect_against_global_read from ever dropping back to 0.
Consequently FLUSH TABLES in all connections, including the
one that leaked the protection, could not proceed.
The fix is to ensure that all branches in the RENAME code properly
release the GRL protection.
mysql-test/r/log_tables.result:
Added results for test for bug#47924.
mysql-test/t/log_tables.test:
Added test for bug#47924.
sql/sql_rename.cc:
mysql_rename_tables() modified: replaced early returns from the function
with gotos to the cleanup code block in case of error.
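A hedged sketch of the error-handling shape described above (the counter and helper
functions are illustrative stand-ins, not the actual sql_rename.cc code): every failure
path jumps to a single cleanup label, so the global read lock protection taken at the
start is released on every branch and the counter can drop back to 0.

// Stand-ins for the protection counter and the individual RENAME steps.
static int protect_against_global_read= 0;
static void take_protection()     { ++protect_against_global_read; }
static void release_protection()  { --protect_against_global_read; }
static bool check_preconditions() { return false; }  // true = error
static bool do_rename()           { return false; }  // true = error

static bool rename_tables_sketch()
{
  bool error= true;
  take_protection();
  if (check_preconditions())
    goto cleanup;             // was: a plain "return true;", which leaked the counter
  if (do_rename())
    goto cleanup;
  error= false;
cleanup:
  release_protection();       // reached on every branch
  return error;
}

int main() { return rename_tables_sketch() ? 1 : 0; }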
Add a boolean command-line option --autocommit.
mysql-test/mysql-test-run.pl:
make --gdb behave like --ddd: to let the developer debug the startup
phase (such as command-line option parsing), don't issue "run".
This is the third time this change has been made; it was previously lost
in merges and in the port of 6.0 to next-mr...
mysql-test/r/mysqld--help-notwin.result:
new command-line option
mysql-test/r/mysqld--help-win.result:
a Linux user's best guess at what the Windows result should be
mysql-test/suite/sys_vars/inc/autocommit_func2.inc:
new test
mysql-test/suite/sys_vars/t/autocommit_func2-master.opt:
test new option
mysql-test/suite/sys_vars/t/autocommit_func3-master.opt:
test new option
sql/mysqld.cc:
new --autocommit
sql/mysqld.h:
new --autocommit
sql/sql_partition.cc:
What matters for this partitioning quoting is to have
the OPTION_QUOTE_SHOW_CREATE flag down; it is the only flag
that append_identifier() looks at, so we make that explicit
(a generic sketch follows below).
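A hedged, generic sketch of what "making it explicit" can look like (the flag constant,
option word and both functions are illustrative stand-ins, not the real THD option bits
or append_identifier()): the quoting flag is cleared explicitly for the duration of the
call and the caller's options are restored afterwards.

#include <cstdint>

static const uint64_t QUOTE_SHOW_CREATE_STUB= 1ULL << 0;  // stand-in for OPTION_QUOTE_SHOW_CREATE

// Stand-in for append_identifier(): the quoting decision depends only on this one bit.
static bool append_identifier_stub(uint64_t options)
{
  return (options & QUOTE_SHOW_CREATE_STUB) != 0;
}

static void quote_partition_identifier(uint64_t &thd_options)
{
  const uint64_t saved= thd_options;
  thd_options&= ~QUOTE_SHOW_CREATE_STUB;       // explicitly keep the flag down here
  (void) append_identifier_stub(thd_options);
  thd_options= saved;                          // restore the caller's option bits
}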
It was possible to issue an ALTER TABLE ADD PRIMARY KEY on
a partitioned InnoDB table that failed and crashed the server.
The problem was that the PK was successfully created on at least
one partition and then failed on a subsequent partition due to a
duplicate key violation. Since the partitions that had already added
the PK were not reverted, not all partitions were consistent with the
table definition, which caused the crash.
The solution was to add a revert step to ha_partition::add_index()
that, on failure, drops the index from the partitions that had already succeeded.
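A hedged sketch of the revert step (PartitionStub and add_index_all_partitions() are
illustrative stand-ins, not the real ha_partition::add_index() code): if creating the
index fails on partition i, the index is dropped again from partitions 0..i-1 so that
all partitions stay consistent with the table definition.

#include <vector>

struct PartitionStub                 // stand-in for one partition's handler
{
  bool index_added= false;
  bool add_index()  { index_added= true; return false; }  // false = success
  void drop_index() { index_added= false; }
};

// Returns true on error, after reverting the partitions that already succeeded.
static bool add_index_all_partitions(std::vector<PartitionStub> &parts)
{
  for (size_t i= 0; i < parts.size(); i++)
  {
    if (parts[i].add_index())        // e.g. duplicate key violation in this partition
    {
      for (size_t j= 0; j < i; j++)  // revert step: drop the index again
        parts[j].drop_index();
      return true;
    }
  }
  return false;
}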
mysql-test/r/partition.result:
updated result
mysql-test/t/partition.test:
Added test
sql/ha_partition.cc:
Only allow ADD/DROP flags in pairs, so that they can be reverted on failures.
If add_index() fails for a partition, revert (drop the index) for the previous
partitions.
sql/handler.h:
Added some extra info in a comment.
If a relative path is supplied to option --defaults-file or
--defaults-extra-file, the server will crash when executing
an INSTALL PLUGIN command. The reason is that the defaults
file is initially read relative to the current working directory
when the server is started, but when INSTALL PLUGIN is executed,
the server has changed working directory to the data directory.
Since there is no check of the return value of the call to
my_load_defaults() inside mysql_install_plugin(), the subsequent call to
free_defaults() will crash the server.
This patch fixes the problem by:
- Prepending the current working directory to the file name when
a relative path is given to the --defaults-file or --defaults-
extra-file option the first time my_load_defaults() is called,
which is just after the server has started in main() (a sketch
of this step follows the list).
- Adding a check of the return value of my_load_defaults() inside
mysql_install_plugin() and aborting the command (with an error) if
an error is returned.
- Adding a check of the return value of load_defaults() in
lib_sql.cc for the embedded server, since that was missing.
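A hedged sketch of the first item in this list, assuming POSIX getcwd();
absolutize_defaults_file() is a hypothetical helper, not the actual server code. A
relative --defaults-file path is made absolute once at startup, so the later chdir()
into the data directory cannot change which file the name refers to.

#include <limits.h>
#include <unistd.h>
#include <string>

// Hypothetical helper: prepend the startup working directory to a relative path.
static std::string absolutize_defaults_file(const std::string &path)
{
  if (path.empty() || path[0] == '/')       // already absolute (or empty): leave as-is
    return path;
  char cwd[PATH_MAX];
  if (getcwd(cwd, sizeof(cwd)) == NULL)     // on failure, fall back to the original name
    return path;
  return std::string(cwd) + "/" + path;
}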
To test that relative paths supplied to the --defaults-file and
--defaults-extra-file options are handled properly, mysql-test-run.pl is also
changed not to add a --defaults-file option if one is provided in the
test's *.opt file.
Assertion `fixed == 1' failed
(also fixes duplicate bug 57515)
agg_item_set_converter() (item.cc) handles conversion of
character sets by creating a new Item. fix_fields() is then
called on this newly created item. Prior to this patch, it was
not checked whether fix_fields() was successful or not. Thus,
agg_item_set_converter() would return success even when an
error occurred. This patch makes it return error (TRUE) if
fix_fields() fails.
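A hedged, generic sketch of the pattern (ItemStub and set_converter_stub() are
stand-ins, not the real Item or agg_item_set_converter() code): a failure from
fix_fields() on the newly created item is propagated to the caller instead of being
ignored.

// Stand-in for an Item: fix_fields() returns true on error, as in the server.
struct ItemStub
{
  bool fixed= false;
  bool fix_fields() { fixed= true; return false; }
};

// Returns true on error. Before the fix, the fix_fields() result was dropped.
static bool set_converter_stub(ItemStub *&slot)
{
  ItemStub *conv= new ItemStub();
  if (conv->fix_fields())
  {
    delete conv;
    return true;             // propagate the error instead of reporting success
  }
  slot= conv;
  return false;
}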
mysql-test/r/errors.result:
Add test for BUG#57882
mysql-test/t/errors.test:
Add test for BUG#57882
sql/item.cc:
Make agg_item_set_converter() return with error if fix_fields()
on the newly created converted item fails.
Function delegates_init() did not report proper error messages
when it failed, which made it hard to know where the problem
occurred.
Fixed the problem by adding a specific error message for every possible
place that can fail. Since these failures are never supposed to
happen, the messages ask the user to report a bug if they do.
In MySQL 5.5 the new reserved words include:
SLOW as in FLUSH SLOW LOGS
GENERAL as in FLUSH GENERAL LOGS
IGNORE_SERVER_IDS as in CHANGE MASTER ... IGNORE_SERVER_IDS
MASTER_HEARTBEAT_PERIOD as in CHANGE MASTER ... MASTER_HEARTBEAT_PERIOD
These are not reserved words in standard SQL, or in Oracle 11g,
and as such, may affect existing applications.
We fix this by adding the new words to the list of
keywords that are allowed for labels in SPs.
mysql-test/t/keywords.test:
Test case that checks that the target words can be used
for naming fields in a table or as local routine variable
names.
1. Fixed the name of the table to proxies_priv
2. Fixed the column names to be of the form Capitalized_lowercase instead of
Capitalized_Capitalized
3. Added Timestamp and Grantor columns
4. Added tests to plugin_auth to check the table structure
5. Updated the existing tests
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC function result caused
a server crash or wrong results and
2) grouping by the function returning a blob caused
an unexpected "Duplicate entry" error and wrong
result.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
Details for the 2nd bug:
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, as is done for regular blob fields.
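A hedged, generic sketch of the first fix (TimeToSecStub is an illustrative stand-in,
not the real Item_func_time_to_sec code): the function result is marked nullable
unconditionally, because an invalid argument value produces NULL even when the argument
column itself is NOT NULL.

struct TimeToSecStub
{
  bool args_maybe_null= false;  // nullability derived from the arguments
  bool maybe_null= false;       // nullability of the function result

  void fix_length_and_dec()
  {
    // Before: maybe_null= args_maybe_null;  // missed NULLs caused by invalid values
    maybe_null= true;           // always nullable, regardless of the arguments
  }
};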
mysql-test/r/func_time.result:
Test case for bug #52160.
mysql-test/r/type_blob.result:
Test case for bug #52160.
mysql-test/t/func_time.test:
Test case for bug #52160.
mysql-test/t/type_blob.test:
Test case for bug #52160.
sql/item_timefunc.h:
Bug #52160: crash and inconsistent results when grouping
by a function and column
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
sql/sql_select.cc:
Bug #52160: crash and inconsistent results when grouping
by a function and column
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, as is done for regular blob fields.
Post-Push fix, DBUG build broken on freebsd7
sql/field.cc:8456: warning: control reaches end of non-void function
sql/field.cc:
Return NULL to keep compiler happy.
COM_CHANGE_USER was always handled like an implicit request to change the
client plugin, so that the client can re-use the same code path for both normal
login and COM_CHANGE_USER. However, this does not work well with old
clients because they don't understand the request to change a client plugin.
Fixed by implementing a special state in the code for an old client issuing
COM_CHANGE_USER. In this state the server parses the COM_CHANGE_USER
packet and pushes the password hash, the user name and the database back
onto the input stream, in the same order that the native password server-side
plugin expects. As a result it replies with OK/FAIL just like the old server did,
thus making the new server compatible with older clients.
No test case added, since it would require an old client binary. Tested using
accounts with and without passwords. Tested with a correct and an incorrect
password.
The problem was that the warnings raised by a trigger were not cleared upon
successful completion. The warnings should be cleared if the trigger completes
successfully.
The fix is to skip merging warnings into the caller's Warning Info for triggers.
"Grantor" columns' data is lost when replicating mysql.tables_priv.
The slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executed on it.
In this patch, the current user is put into the query log event for all GRANT and
REVOKE statements, and the SQL thread uses the user in the query log event as the grantor.
mysql-test/suite/rpl/r/rpl_do_grant.result:
Add test for this bug.
mysql-test/suite/rpl/t/rpl_do_grant.test:
Add test for this bug.
sql/log_event.cc:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better named m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.cc:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better named m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.h:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better named m_binlog_invoker.
The related functions are renamed too.
sql/sql_parse.cc:
Call binlog_invoker() for GRANT and REVOKE statements.
- A prerequisite cleanup patch for making KILL reliable.
The test case main.kill did not work reliably.
The following problems have been identified:
1. A kill signal could get lost if it arrived shortly before a
thread started reading on the client connection.
2. A kill signal could get lost if it arrived shortly before a
thread started waiting on a condition variable.
These problems have been solved as follows. Please see also
added code comments for more details.
1. There is no safe way to detect when a thread enters the
blocking state of a read(2) or recv(2) system call, where it
can be interrupted by a signal. Hence it is not possible to
wait for the right moment to send a kill signal. It has been
decided not to fix this in the code. Instead, the test case
repeats the KILL statement until the connection terminates.
2. Before waiting on a condition variable, we register it
together with a synchronizing mutex in THD::mysys_var. After
this, we need to test THD::killed again. In some places we
only tested it in a loop condition before the registration. When
THD::killed had been set between this test and the registration,
we entered the wait without noticing the killed flag. Additional
checks have been introduced where required.
In addition to the above, a re-write of the main.kill test
case has been done. All sleeps have been replaced by Debug
Sync Facility synchronization. A couple of sync points have
been added to the server code.
To avoid further problems, if the test case fails in spite of
the fixes, the test case has been added to the "experimental"
list for now.
- Most of the work on this patch is authored by Ingo Struewing
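A hedged, generic sketch of point 2 using standard C++ primitives (std::mutex and
std::condition_variable stand in for the THD::mysys_var registration and enter_cond()
machinery): the killed flag is re-tested while holding the mutex, i.e. after the waiter
has registered, so a kill that arrives between an unlocked check and the wait cannot be
lost.

#include <condition_variable>
#include <mutex>

static std::mutex mtx;
static std::condition_variable cond;
static bool killed= false;
static bool work_ready= false;

static void wait_for_work()
{
  std::unique_lock<std::mutex> lock(mtx);  // "registration": the waiter is now visible
  while (!work_ready && !killed)           // re-test killed under the mutex
    cond.wait(lock);
}

static void kill_connection()
{
  std::lock_guard<std::mutex> lock(mtx);
  killed= true;
  cond.notify_all();                       // wake the waiter so it sees the flag
}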
mysql-test/t/kill.test:
Re-wrote test case to use Debug Sync points instead of sleeps
sql/event_queue.cc:
Fixed kill detection in Event_queue::cond_wait() by adding a check
after enter_cond().
sql/lock.cc:
Moved Debug Sync points behind enter_cond().
Fixed comments.
sql/slave.cc:
Fixed kill detection in start_slave_thread() by adding a check
after enter_cond().
sql/sql_class.cc:
Swapped order of kill and close in THD::awake().
Added comments.
sql/sql_class.h:
Added a comment to THD::killed.
sql/sql_parse.cc:
Added a sync point in do_command().
sql/sql_select.cc:
Added a sync point in JOIN::optimize().
set to 128k.
sql/sp.cc:
Added checks for stack overrun in the functions
db_load_routine/sp_find_routine.
sql/sp_head.cc:
sp_head::execute() modified: pass the constant STACK_MIN_SIZE
instead of 8 * STACK_MIN_SIZE as the second argument
in the call to check_stack_overrun(). Added checks for stack overrun
in sp_lex_keeper::reset_lex_and_exec_core/sp_instr_stmt::execute.
sql/sql_parse.cc:
check_stack_overrun() modified: allocate the buffer for the error message
on the heap instead of on the stack.
parse_sql() modified: added a call to check_stack_overrun() before
parsing the SQL statement (a generic guard is sketched below).
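A hedged, generic sketch of a stack-overrun guard (stack_bottom, THREAD_STACK,
STACK_MARGIN and execute_nested() are illustrative stand-ins, not the real
check_stack_overrun() code): a recursive routine estimates how much stack it has
consumed and returns an error before the limit is reached, instead of crashing.

#include <stddef.h>

static char *stack_bottom= NULL;                 // recorded at thread/program start
static const ptrdiff_t THREAD_STACK= 256 * 1024; // assumed stack size for this sketch
static const ptrdiff_t STACK_MARGIN= 32 * 1024;  // stop this far before the limit

static bool stack_overrun_near()
{
  char probe;
  ptrdiff_t used= stack_bottom - &probe;         // the stack usually grows downwards
  if (used < 0)
    used= -used;
  return used + STACK_MARGIN >= THREAD_STACK;
}

static bool execute_nested(int depth)            // recursion that consumes stack
{
  if (stack_overrun_near())
    return true;                                 // report "stack overrun" to the caller
  if (depth <= 0)
    return false;
  return execute_nested(depth - 1);
}

int main()
{
  char bottom;
  stack_bottom= &bottom;
  return execute_nested(100000) ? 1 : 0;
}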
Rows events were wrongly applied to a temporary table that had the same
name as a base table. Rows events are generated only for base tables,
since a temporary table's data is never binlogged in row mode. Normally,
a base table cannot be updated if a temporary table with the same
name exists, but there are two cases that can generate rows events for
the base table of the same name.
Case 1: a 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate rows events if it is unsafe.
Case 2: dropping a transactional temporary table in a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the rows events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a rows event.
Fix assorted warnings that are generated in optimized builds.
Most of it is silencing warnings about variables that are set but unused.
This patch also introduces the MY_ASSERT_UNREACHABLE macro
which helps the compiler to deduce that a certain piece of
code is unreachable.
include/my_compiler.h:
Use GCC's __builtin_unreachable if available. It allows
GCC to deduce the unreachability of certain code paths,
thus avoiding warnings that, for example, claimed that a
variable could be used without being initialized (due to
unreachable code paths).
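A hedged sketch of how such a macro can be defined (MY_ASSERT_UNREACHABLE_SKETCH is an
illustration, not the exact my_compiler.h definition): use __builtin_unreachable() on a
sufficiently recent GCC and fall back to an assertion otherwise.

#include <assert.h>

#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
/* Tell GCC that the path cannot be taken, silencing "may be used
   uninitialized" and "control reaches end of non-void function" warnings. */
#  define MY_ASSERT_UNREACHABLE_SKETCH()  __builtin_unreachable()
#else
#  define MY_ASSERT_UNREACHABLE_SKETCH()  do { assert(0); } while (0)
#endif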
data dictionary confusion
On file systems with case insensitive file names, and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on MacOSX, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e. a lower-case version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
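A hedged, generic sketch of the principle (cache_key() and the map below are
illustrative, not the real table definition cache code): both insertion and removal
build the hash key from a lower-cased copy of the table name, so a lookup during DROP
DATABASE or CREATE TABLE finds the same entry regardless of the case used in the
statement.

#include <algorithm>
#include <cctype>
#include <string>
#include <unordered_map>

// Build the cache key from the normalized (lower-cased) table name only.
static std::string cache_key(const std::string &db, std::string table)
{
  std::transform(table.begin(), table.end(), table.begin(),
                 [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
  return db + '\0' + table;
}

int main()
{
  std::unordered_map<std::string, int> table_def_cache;
  table_def_cache[cache_key("test", "T1")]= 1;                        // added with preserved case
  return table_def_cache.count(cache_key("test", "t1")) == 1 ? 0 : 1; // found via normalized key
}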
sql/sql_db.cc:
Normalize the table name (i.e. lower-case it)
sql/sql_table.cc:
table_name contains the normalized name
alias contains the real table name
After the fix for
Bug #55077 (Assertion failed: width > 0 && to != ((void *)0), file .\dtoa.c)
we no longer try to allocate a string of length 'field_length',
so the asserts are relevant only for ZEROFILL columns.
mysql-test/r/select.result:
Add test case for Bug#57203
mysql-test/t/select.test:
Add test case for Bug#57203
sql/field.cc:
Rewrite the DBUG_ASSERTS on field_length.
For crash testing: kill the server without generating a core file.
include/my_dbug.h:
Use kill(getpid(), SIGKILL), which cannot be caught by signal handlers
(see the sketch at the end of this entry).
All DBUG_XXX macros should be no-ops in optimized mode; do that for DBUG_ABORT as well.
sql/handler.cc:
Kill server without generating core.
sql/log.cc:
Kill server without generating core.
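A hedged sketch of the technique mentioned for include/my_dbug.h (die_without_core() is
an illustrative stand-in, not the actual DBUG macro): the process delivers SIGKILL to
itself, so the server dies immediately without running signal handlers or dumping a
core file.

#include <signal.h>
#include <unistd.h>

static void die_without_core()
{
  kill(getpid(), SIGKILL);   // SIGKILL cannot be caught, blocked, or ignored
}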