Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not large enough to store
the changes.
In a nutshell, to fix the bug, we save the position of the next character in the
cache before processing a statement. If there is a problem, we simply
restore the position, thus removing any effect of the statement from the cache.
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for non-transactional changes that do not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers to
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc and sql_update.cc modules, but
the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be classified as either DDL or DML because
DDLs that do not get into the cache must generate an incident event since they
cannot be rolled back.
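For illustration, a minimal sketch of the failure mode described above (the table t1,
the engine choice and the deliberately tiny cache value are made up for the example,
and the binary log is assumed to be enabled):
  SET GLOBAL max_binlog_cache_size = 4096;
  -- reconnect so the new limit applies to this session's transaction cache
  CREATE TABLE t1 (b LONGBLOB) ENGINE=InnoDB;
  BEGIN;
  -- a change far larger than the cache; expected to fail with
  -- ERROR 1197 (ER_TRANS_CACHE_FULL) instead of corrupting the binary log
  INSERT INTO t1 VALUES (REPEAT('a', 65536));
  ROLLBACK;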
Item_func_spatial_collection::val_str
When the concatenation function for geometry data collections
read the binary data, it was not rigorous in checking that
data was available, leading to invalid reads and crashes.
Fixed by making the checks stricter.
This patch corrects a mistake in the test case for the patch for bug 43658.
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
The affected platforms appear to be limited to Linux.
The assertion in String::copy was added in order to avoid
valgrind errors when the destination was the same as the source.
Eased restriction to allow for the case when str == NULL.
Early patch submitted for discussion.
It is possible for more than one thread to enter the condition
in query_cache_insert(), but the condition predicate is to
signal one thread each time the cache status changes between
the following states: {NO_FLUSH_IN_PROGRESS,FLUSH_IN_PROGRESS,
TABLE_FLUSH_IN_PROGRESS}
Consider three threads THD1, THD2, THD3
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status= FLUSH_IN_PROGRESS;
new writers are blocked.
THD2: select ... => Still got a writer and enters cond in
query_cache_insert
THD3: select ... => Still got a writer and enters cond in
query_cache_insert
THD1: flush tables => finished and signal status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?
This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are
queued and put on a conditional wait. On each unlock the queue is signalled. This
resolves the issues with leftover threads. To ensure that no threads spend unnecessary
time waiting, a signal broadcast is issued every time a lock is taken before a full
cache flush.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (thus closing the cursor, etc.),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure in an automatic
variable and restore it after it's done.
As a result at exiting filesort() we have a sort.io_cache filled in and
nothing else (as a result of close of the cursors at end of reading data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it's been used already by filesort() reading the data in). While doing that
a special case in the index merge destructor will clean up the sort.io_cache,
assuming it's an output of the index merge method and is not needed anymore.
As a result the code that tries to read the data back from the filesort output
will find no data in either memory or on disk and will crash.
Fixed similarly to how filesort() does it : by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is a safe thing to do because all the structures are already cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next())
and the cleanup code being written in a way that tolerates repeating cleanups.
The SQL mode PAD_CHAR_TO_FULL_LENGTH could prevent a DROP USER
statement from removing the privileges associated with the user being dropped.
What occurred was that reading from the User and Host fields of
the tables_priv or columns_priv tables would yield values padded
with spaces, causing a failure to match a specified user or host
('user' != 'user ');
The solution is to disregard the PAD_CHAR_TO_FULL_LENGTH mode
when iterating over and matching values in the privileges tables
for a DROP USER statement.
statements missed from general log
A refinement of the test in the previous patch to avoid
using sleep as a means to ensure that timestamps are
added to the log entries.
WHERE and GROUP BY clause
Loose index scan may use range conditions on the argument of
the MIN/MAX aggregate functions to find the beginning/end of
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum
values. When the comparison returns 0 (equal) the code should
check the type of the min/max values of the current interval
and accept or reject the row based on whether the limit is
open or not.
The composite condition checking this was wrong and did not
work in all cases.
Fixed by simplifying the conditions and reversing the logic.
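For illustration, a hypothetical query shape that exercises this path (t1 and its
index on (a, b) are assumptions for the example): the MIN() argument carries one
closed and one open range endpoint, which is exactly where the open/closed check
on the interval bounds applies:
  SELECT a, MIN(b) FROM t1 WHERE b >= 5 AND b < 10 GROUP BY a;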
Merge the test case from 5.0 to 5.1 for BUG#40565
Detailed revision comments:
r5233 | marko | 2009-06-03 15:12:44 +0300 (Wed, 03 Jun 2009) | 11 lines
branches/5.1: Merge the test case from r5232 from branches/5.0:
------------------------------------------------------------------------
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
the mysql test suite.
Tests removed:
1. innodb_trx_weight.test
2. innodb_bug35220.test
Include files removed:
1. have_innodb.inc
2. ctype_innodb_like.inc
3. innodb_trx_weight.inc
Also add the missing opt file for the test innodb-use-sys-malloc.test
Created a test suite 'innodb' under mysql-test/suite/innodb for the innodb plugin tests.
The 'innodb' suite contains only tests that are not part of any other mysql-test suite.
In total, 14 test cases are added to the suite.
Note: the patches in storage/innodb_plugin/mysql-test/patches are not applied yet
Range analysis did not request sorted output from the storage engine,
which caused partitioned handlers to process one partition at a time
while reading key prefixes in ascending order, causing values to be
missed. Fixed by always requesting sorted order during range analysis.
This fix is introduced in 6.0 by the fix for bug no 41136.
This test uses SHOW STATUS and the like, which may be unstable in the face
of logging to table, since the CSV handler is actively executing operations
and thus incrementing the counters.
Fixed by disabling logging to table for the duration of the test and restoring
it afterwards. This causes various counters to properly start counting from zero
and never advance due to CSV operations.
memory issue ?
The mysql command line client could misinterpret some character
sequences as commands under some circumstances.
The upper limit for the internal readline buffer was raised to 1 GB
(the same as for server's max_allowed_packet) so that any input
line is processed by add_line() as a whole rather than in
chunks.
The mysqlbinlog --database parameter was being ignored when processing
row events. As such, no event filtering would take place.
This patch addresses this by deploying a call to shall_skip_database
when table_map_events are handled (as these also contain the name of
the database). All other row events referencing the table id of the
filtered map event will also be skipped.
uninitialized variable used as subscript
Grouping select from a "constant" InnoDB table (a table
of a single row) joined with other tables caused a crash.
The problem is that when an optimization of read-only transactions
(bypassing 2-phase commit) was implemented, it removed the code that
reset the XID once a transaction wasn't active anymore:
sql/sql_parse.cc:
- bzero(&thd->transaction.stmt, sizeof(thd->transaction.stmt));
- if (!thd->active_transaction())
- thd->transaction.xid_state.xid.null();
+ thd->transaction.stmt.reset();
This mostly worked fine as the transaction commit and rollback
functions (in handler.cc) reset the XID once the transaction is
ended. But those functions wouldn't reset the XID in case of
an empty transaction, leading to an assertion when starting
a new XA transaction.
The solution is to ensure that the XID state is reset when empty
transactions are ended (by either commit or rollback). This is
achieved by reorganizing the code so that the transaction cleanup
routine is invoked whenever a transaction is ended.
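A minimal sketch of the scenario described above, assuming an otherwise idle session:
  BEGIN;                -- transaction that touches no tables
  COMMIT;               -- ends as an "empty" transaction; the XID used to stay set
  XA START 'trx1';      -- previously could hit the assertion; now the XID state is reset
  XA END 'trx1';
  XA ROLLBACK 'trx1';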
Problem:
Crash happened with a user-defined utf8 collation,
on attempt to insert a value longer than the column
to store.
Reason:
The "ctype" member was not initialized (NULL) when
allocating a user-defined utf8 collation, so an attempt
to call my_ctype(cs, *str) to check if we lose any important
data when truncating the value made the server crash.
Fix:
Initializing the "ctype" member to a proper value.
mysql-test/r/ctype_ldml.result
Adding tests
mysql-test/t/ctype_ldml.test
Adding tests
strings/ctype-uca.c
Adding initialization of "ctype" member.
modified:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
strings/ctype-uca.c
The crash happens because of uninitialized
lex->ssl_cipher, lex->x509_subject, lex->x509_issuer variables.
The fix is to add initialization of these variables for
stored procedures and functions.
The server was not cleaning the last IO error and error number when
resetting slave.
This patch addresses this issue by backporting into 5.1 part of the
patch in BUG 34654. A fix for this issue had already been pushed into
6.0 as part of the aforementioned bug, however the patch also included
some refactoring. The fix for 5.1 does not take into account the
refactoring part.
The crash happens due to wrong max_length value which is set on
Item_func_round::fix_length_and_dec() stage. The value is set to
args[0]->max_length which is too big in case of LONGTEXT(LONGBLOB) fields.
The fix is to set max_length using the float_length() function.
BEGIN/COMMIT/ROLLBACK was subject to replication db rules, which
caused the boundary of a transaction not to be recognized correctly
when these queries were ignored by the rules.
Fixed the problem by skipping replication db rules for these
statements.
MySQL crashes if a user without proper privileges attempts to create a procedure.
The crash happens because more than one error state is pushed onto the Diagnostics
area. In this particular case the user is not allowed to implicitly create a new user
account with the implicitly granted privileges ALTER ROUTINE and EXECUTE.
The new account is needed if the original user account contained a host mask.
A user account with a host mask is a distinct user account in this context.
An alternative would be to first get the most permissive user account which
include the current user connection and then assign privileges to that
account. This behavior change is considered out of scope for this bug patch.
The implicit assignment of privileges when a user creates a stored routine is
considered to be a feature for user convenience and as such it is not
a critical operation. Any failure to complete this operation is thus considered
non-fatal (an error becomes a warning).
The patch backports a stack implementation of the internal error handler interface.
This enables the use of multiple error handlers so that it is possible to intercept
and cancel errors thrown by lower layers. This is needed as an error handler is
already used in the call stack emitting the errors that need to be converted.
mysqldump --tab still dumped triggers to stdout rather than to the
individual per-table files.
We now append triggers to the .sql file for the corresponding
table.
--events and --routines correspond to a database rather than a
table and will still go to stdout with --tab unless redirected
with --result-file (-r).
old_password() functions
The PASSWORD() and OLD_PASSWORD() functions could lead to
memory reads outside of an internal buffer when used with BLOB
arguments.
String::c_ptr() assumes there is at least one extra byte
in the internally allocated buffer when adding the trailing
'\0'. This, however, may not be the case when a String object
was initialized with an externally allocated buffer.
The bug was fixed by adding an additional "length" argument to
make_scrambled_password_323() and make_scrambled_password() in
order to avoid String::c_ptr() calls for
PASSWORD()/OLD_PASSWORD().
However, since the make_scrambled_password[_323] functions are
a part of the client library ABI, the functions with the new
interfaces were implemented with the 'my_' prefix in their
names, with the old functions changed to be wrappers around
the new ones to maintain interface compatibility.
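A minimal sketch of the kind of call that used to read past the buffer (table t1
is hypothetical):
  CREATE TABLE t1 (b BLOB);
  INSERT INTO t1 VALUES (REPEAT('x', 64));
  SELECT PASSWORD(b), OLD_PASSWORD(b) FROM t1;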
The problem is that the server failed to follow the rule that
every X509 object retrieved using SSL_get_peer_certificate()
must be explicitly freed by X509_free(). This caused a memory
leak for builds linked against OpenSSL where the X509 object
is reference counted -- improper counting will prevent the
object from being destroyed once the session containing the
peer certificate is freed.
The solution is to explicitly free every X509 object used.
HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN but before the run-time type calculations,
it causes the type calculations in the IN function done at run time
to get unexpected results different from what was prepared at compile time.
In the CASE ... WHEN ... THEN ... statement there was a similar problem
and it was solved by artificially adding a STRING argument to the set of
types of the IN/CASE arguments at compile time, so if any of the
arguments of the CASE function changes its type to a string it will
still be covered by the information prepared at compile time.
stop/start slave
When stopping and restarting the slave while it is replicating
temporary tables, the server would crash or raise an assertion
failure. This was due to the fact that although temporary tables are
saved between slave thread restarts, the reference to the thread in
use (table->in_use) was not being properly updated when the restart
happened (it would still reference the old/invalid thread instead of
the new one).
This patch addresses this issue by resetting the reference to the new
slave thread on slave thread restart.
Created a new .test file, mysqldump_restore, that tests restoring from mysqldump
output for a limited number of basic cases.
Created a new .inc file, mysqldump.inc, which renames the original table and uses mysqldump
output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
New patch incorporating review feedback prior to push.
mysqldump.test - removed redundant call to include/have_log_bin.inc (was used twice in the test!)
assertion .\filesort.cc, line 797
A query with the "ORDER BY @@some_system_variable" clause,
where @@some_system_variable is NULL, causes assertion
failure in the filesort procedures.
The reason for the failure is the value of
Item_func_get_system_var::maybe_null: it was unconditionally
set to false even if the value of a variable was NULL.
Created a new .test file, mysqldump_restore, that does this for a limited number
of basic cases.
Created a new .inc file, mysqldump.inc, which renames the original table and uses mysqldump
output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
bug#44766: valgrind error when using convert() in a subquery
Problem: input and output buffers may be the same when
converting a string to some charset.
That may lead to wrong results/valgrind warnings.
Fix: use different buffers.
warnings after uncompressed_length
UNCOMPRESSED_LENGTH() did not validate its argument. In
particular, if the argument length was less than 4 bytes,
an uninitialized memory value was returned as a result.
Since the result of COMPRESS() is either an empty string or
a 4-byte length prefix followed by compressed data, the bug was
fixed by ensuring that the argument of UNCOMPRESSED_LENGTH() is
either an empty string or contains at least 5 bytes (as done in
UNCOMPRESS()). This is the best we can do to validate input
without decompressing.
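For illustration (a sketch only; the exact return values are not shown, the point
is that short inputs are now refused with a warning instead of reading uninitialized
memory):
  SELECT UNCOMPRESSED_LENGTH(COMPRESS('some text'));  -- well-formed: 4-byte length prefix + data
  SELECT UNCOMPRESSED_LENGTH('abc');                   -- only 3 bytes, shorter than the prefix: now rejected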
Detail:
The results for the "normal" server testcase variants
were already adjusted to the modified information_schema
content. Therefore just do the same for the embedded
server variants.
The internal InnoDB FK parser does not recognize '\'' as a quotation symbol.
The suggested fix is to add a check for the '\'' symbol to the quotation
condition (in the dict_strip_comments() function).
with a "HAVING" clause though query works
SELECT from views defined like:
CREATE VIEW v1 (view_column)
AS SELECT c AS alias FROM t1 HAVING alias
fails with an error 1356:
View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights
to use them
CREATE VIEW form with a (column list) substitutes
SELECT column names/aliases with names from a
view column list.
However, alias references in the HAVING clause were
not substituted.
The Item_ref::print function has been modified
to write correct aliased names of underlying
items into VIEW definition generation/.frm file.
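A minimal sketch of such a view, following the definition quoted above (table t1 is
hypothetical):
  CREATE TABLE t1 (c INT);
  CREATE VIEW v1 (view_column) AS SELECT c AS alias FROM t1 HAVING alias;
  SELECT view_column FROM v1;   -- previously failed with error 1356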
The RAND(N) function where the N is a field of "constant" table
(table of single row) failed with a SIGFPE.
Evaluation of RAND(N) relies on the constant status of its argument.
The server "seeded" the random value for each constant argument
only once, in the Item_func_rand::fix_fields method.
Then the server skipped a call to seed_random() in the
Item_func_rand::val_real method for such constant arguments.
However, the constant status of an argument may change
after the call to fix_fields, if the argument is a field of a
"constant" table. Thus, pre-initialization of the random value
in the fix_fields method is too early.
Initialization of random value by seed_random() has been
removed from Item_func_rand::fix_fields method.
The Item_func_rand::val_real method has been modified to
call seed_random() on the first evaluation of this method
if an argument is a function.
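A minimal sketch of the failing case (table t1 is hypothetical):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);    -- single-row table, treated as "constant" by the optimizer
  SELECT RAND(a) FROM t1;       -- previously raised SIGFPE; seeding now happens at first evaluation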
In order to better support the usage of
IBMDB2I tables from within RPG programs,
the storage engine should ensure that the
RCDFMT name is consistent and predictable
for DB2 tables.
This patch appends a "RCDFMT <name>"
clause to the CREATE TABLE statement
that is passed to DB2. <name> is
generated from the original name of
the table itself. This ensures a
consistent and deterministic mapping
from the original table.
For the sake of simplicity only
the alpha-numeric characters are
preserved when generating the new
name, and these are upper-cased;
other characters are replaced with
an underscore (_). Following DB2
system identifier rules, the name
always begins with an alpha-character
and has a maximum of ten characters.
If no usable characters are found in
the table name, the name X is used.
always rollsback.
The global variable max_binlog_cache_size cannot be set to more than 4GB on
32-bit systems, limiting transactions of all storage engines to 4GB of changes.
The problem is max_binlog_cache_size is declared as ulong which is 4 bytes
on 32 bit and 8 bytes on 64 bit machines.
Fixed by using ulonglong for max_binlog_cache_size, which is 8 bytes on both 32
and 64 bit machines. The range for max_binlog_cache_size on 32 bit and 64 bit
systems is 4096-18446744073709547520 bytes.
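For illustration, a value above 4GB can now be set on 32-bit builds as well
(sketch only):
  SET GLOBAL max_binlog_cache_size = 8589934592;   -- 8GB
  SELECT @@global.max_binlog_cache_size;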
Details:
Most tests mentioned within the bug report were already fixed.
The test modified here failed in stability (high parallel load) tests.
Details:
1. Take care that disconnects are finished before the test terminates.
2. Correct wrong handling of send/reap in events_stress which caused
random garbled output
3. Minor beautifying of script code
It turns out that this test case no longer fails with the discrepancy
in numbers that was the original cause for disabling this test (and showed
potential genuine issues with the query cache). Therefore
this test is being enabled after some minor adjustment of error codes and
messages.
Details:
1. Add missing "disconnect <session>"
2. Take care that the disconnects are finished when the test terminates
3. Replace error names by error numbers
4. Minor beautifying of script code
Field_time::get_time() did not initialize some members of
MYSQL_TIME which led to valgrind warnings when those members
were accessed in Protocol_simple::store_time().
It is unlikely that this bug could result in wrong data
being returned, since Field_time::get_time() initializes the
'day' member of MYSQL_TIME to 0, so the value of 'day'
in Protocol_simple::store_time() would be 0 regardless
of the values for 'year' and 'month'.
In a UNION, if the last SELECT is used without braces and has an
ORDER BY clause, that clause belongs to the
global UNION. It is parsed as part of the last SELECT
and is later used as the 'unit->global_parameters->order_list' value.
During DESCRIBE EXTENDED we call select_lex->print_order() for the
last SELECT, where the order fields refer to a tmp table
which has already been freed. This leads to a crash.
The fix is to clean up global_parameters->order_list
instead of fake_select_lex->order_list.
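A minimal sketch of the statement shape that used to crash (tables t1 and t2 are
hypothetical):
  EXPLAIN EXTENDED
  SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY a;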
Suspected reason for the failure is that safe_process.exe already runs in a job that does not allow breakaways.
The fix is to use a fallback - make the newly created process the root of a new process group. This allows killing the process together with its descendants via GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pid).
It is not possible to prevent the server from starting if a mandatory
built-in plugin fails to start. This can in some cases lead to data
corruption when the old table name space is suddenly used by a different
storage engine.
A boolean command line option in the form of --foobar is automatically
created for every existing plugin "foobar". By changing this command line
option from a boolean to a tristate { OFF, ON, FORCE } it is possible to
specify the plugin loading policy for each plugin.
The behavior is specified as follows:
OFF = Disable the plugin and start the server
ON = Enable the plugin and start the server even if an error occurs
during plugin initialization.
FORCE = Enable the plugin but don't start the server if an error occurs
during plugin initialization.
UNIX sockets need to be on a path shorter than 70 characters on some older platforms.
MTRv1 tries to fix this by moving the socket to $TMPDIR; however, this causes
issues with certain tests on Windows.
Fixed by not applying any hacks on Windows - Windows does not need them.
SQL_SELECT::test_quick_select
The crash was caused by an incomplete cleanup of JOIN_TAB::select
during the filesort of rows for GROUP BY clause inside a subquery.
Queries where a quick index access is replaced with filesort were
affected. For example:
SELECT 1 FROM
(SELECT COUNT(DISTINCT c1) FROM t1
WHERE c2 IN (1, 1) AND c3 = 2 GROUP BY c2) x
Quick index access related data in the SQL_SELECT::test_quick_select
function was inconsistent after the incomplete cleanup.
The cleanup has been completed to prevent crashes in the
SQL_SELECT::test_quick_select function.
and HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN but before the run-time type calculations,
it causes the type calculations in the IN function done at run time
to get unexpected results different from what was prepared at compile time.
In the CASE ... WHEN ... THEN ... statement there was a similar problem
and it was solved by artificially adding a STRING argument to the matrix
at compile time, so if any of the arguments of the CASE function changes
its type to a string it will still be covered by the information prepared
at compile time.
Extended the CASE fix to cover the IN case.
An alternative way of fixing this problem is by caching the result type of
the arguments at compile time and using the cached information at run time
instead of re-calculating the result types.
Preferred the CASE approach for uniformity and to keep the fix localized.
BUG#42101 - Race condition in innodb_commit_concurrency
Detailed revision comments:
r4994 | marko | 2009-05-14 15:04:55 +0300 (Thu, 14 May 2009) | 18 lines
branches/5.1: Prevent a race condition in innobase_commit() by ensuring
that innodb_commit_concurrency>0 remains constant at run time. (Bug #42101)
srv_commit_concurrency: Make this a static variable in ha_innodb.cc.
innobase_commit_concurrency_validate(): Check that innodb_commit_concurrency
is not changed from or to 0 at run time. This is needed, because
innobase_commit() assumes that innodb_commit_concurrency>0 remains constant.
Without this limitation, the checks for innodb_commit_concurrency>0
in innobase_commit() should be removed and that function would have to
acquire and release commit_cond_m at least twice per invocation.
Normally, innodb_commit_concurrency=0, and introducing the mutex operations
would mean significant overhead.
innodb_bug42101.test, innodb_bug42101-nonzero.test: Test cases.
rb://123 approved by Heikki Tuuri
Problem: when executing queries like "ALTER TABLE view1;" we don't
check the new view's name (which is not specified),
which leads to a server crash.
Fix: do nothing (to be consistent with the behaviour for tables)
in such cases.
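A minimal sketch, mirroring the quoted statement (t1 and v1 are hypothetical):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  ALTER TABLE v1;   -- no new name specified; previously crashed, now does nothing, as for tables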
Disabling these two tests as they are affected by this bug / causing PB2 failures
on Windows platforms. Can always disable via include/not_windows.inc if
the bug fix looks like it will take some time.
"freeing items"
The calculation of the table map log event in the event constructor
was one byte shorter than what would be actually written. This would
lead to a mismatch between the number of bytes written and the event
end_log_pos, causing bad event alignment in the binlog (corrupted
binlog) or in the transaction cache while fixing positions
(MYSQL_BIN_LOG::write_cache). This could lead to an unreadable
binlog or even infinite loops in MYSQL_BIN_LOG::write_cache.
This patch addresses this issue by correcting the expected event
length in the Table_map_log_event constructor, when the field metadata
size exceeds 255.
Problem: when using LOAD_FILE(), in some cases we pass a file name string
without a trailing '\0' to fn_format(), which relies on it.
That may lead to valgrind warnings.
Fix: add a trailing '\0' to the file name passed to fn_format().
The problem is that the internal variable used to specify a
transaction with consistent read was being used outside the
processing context of a START TRANSACTION WITH CONSISTENT
SNAPSHOT statement. The practical consequence was that a
consistent snapshot specification could leak to unrelated
transactions on the same session.
The solution is to ensure a consistent snapshot clause is
only relied upon for the START TRANSACTION statement.
This is already fixed in a similar way on 6.0.
In the output from mysqlbinlog, incident log events were
represented as just a comment. Since the incident log event
represents an incident that could cause the contents of the
database to change without being logged to the binary log,
it means that if the SQL is applied to a server, it could
potentially lead to the databases being out of sync.
In order to handle that, this patch adds the statement "RELOAD
DATABASE" to the SQL output for the incident log event. This will
require a DBA to edit the file and handle the case as appropriate
before applying the output to a server.
Problem: storing "SELECT ... INTO @var ..." results in variables we used val_xxx()
methods which returned results of the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use val_xxx_result() counterparts to get proper results.
The --hexdump option crashed mysqlbinlog when used together
with the --read-from-remote-server option due to use of
uninitialized memory.
Since Log_event::print_header() relies on temp_buf to be
initialized when the --hexdump option is present,
dump_remote_log_entries() was fixed to setup temp_buf to point
to the start of a binlog event as done in
dump_local_log_entries().
The root cause of this bug is identical to the one for
bug #17654. The latter was fixed in 5.1 and up, so this
patch is backport of the patches for bug #17654 to 5.0.
Only 5.0 needs a changelog entry.
Details:
innodb-autoinc-optimize
Add DROP TABLE which is missing (Backport of fix from 5.1)
innodb_notembedded
Take care that the disconnects of additional sessions
are completed.
Note:
The merge 5.0 -> 5.1 for innodb-autoinc-optimize
should be a "null" merge = no changes in 5.1.
with seg fault
Multiple-table DELETE from a table joined to itself may cause
server crash. This was originally discovered with MEMORY engine,
but may affect other engines with different symptoms.
The problem was that the server violated SE API by performing
parallel table scan in one handler and removing records in
another (delete on the fly optimization).
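A minimal sketch of the statement shape involved (table t1 is hypothetical):
  DELETE t1 FROM t1, t1 AS t2 WHERE t1.a < t2.a;   -- multiple-table DELETE from a table joined to itself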
Certain multi-updates gave different results on InnoDB than
on MyISAM, because on-the-fly updates were used on the former and
the update order matters.
Fixed by turning off on-the-fly updates when update order
dependencies are present.
on cp932 and sjis environment.
Problem: case conversion erroneously changes the second bytes
of multi-byte sequences because single-byte functions were
called by mistake.
Fix: call multi-byte aware functions instead.
'INSERT ... SELECT' statements
The code that produces result rows expected that a duplicate row
error could not occur in INSERT ... SELECT statements with
unfulfilled WHERE conditions. This may happen, however, if the
SELECT list contains only aggregate functions.
Fixed by checking if an error occurred before trying to send EOF
to the client.
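A minimal sketch of the scenario (tables are hypothetical): the WHERE clause matches
no rows, but the aggregate still produces one row, which can collide with an existing
key:
  CREATE TABLE t1 (a INT PRIMARY KEY);
  INSERT INTO t1 VALUES (0);
  INSERT INTO t1 SELECT COUNT(*) FROM t1 WHERE a > 100;   -- COUNT(*) is 0: duplicate-key error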
1. Replace waiting for the SQL thread to stop with waiting for an SQL error on the slave
and a stopped SQL thread.
2. Remove debug code because it is already implemented in MTR2.
EXPLAIN EXTENDED of a nested query containing the error:
1054 Unknown column '...' in 'field list'
may cause a server crash.
A parse error like the one described above forces a call to
JOIN::destroy() on the malformed subquery.
That JOIN::destroy function closes and frees temporary
tables. However, temporary fields of these tables
may be listed in the st_select_lex::group_list of the outer
query, and that st_select_lex may not clean them up
properly. So, after the JOIN::destroy call, that
st_select_lex::group_list may have Item_field
objects with dangling pointers to freed temporary
table Field objects. That caused a crash.
This patch adds corrections to the original patch
submitted 2009-04-08 (http://lists.mysql.com/commits/71607):
- fixed that the original patch didn't work because of an
incorrect condition;
- added a test case.
The error happens because sp_head::MULTI_RESULTS is not set for an SP
which contains a 'SHOW TABLE STATUS' command.
The fix is to add a SQLCOM_SHOW_TABLE_STATUS case to the
sp_get_flags_for_command() function.
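A minimal sketch of the affected statement (the procedure name is hypothetical):
  CREATE PROCEDURE p1() SHOW TABLE STATUS;
  CALL p1();   -- previously failed because the SP was not flagged as returning a result set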
Killing the insert-select statement corrupts the MyISAM table only
when the destination table is empty and when it has indexes. When
we bulk insert huge data and the destination table is empty, we
disable the indexes for fast inserts; data is then inserted and
indexes are re-enabled after the bulk_insert operation.
Killing the query aborts the repair table operation during the enable
indexes phase, leading to table corruption.
We now truncate the table when we detect that enable indexes was
killed for a bulk insert query. As we have an empty table before the
operation, we can fix the problem by truncating the table.
A bug in the initialization of key segment information made it point
to the wrong bit, since a bit index was used when its int value
was needed. This led to misinterpretation of bit columns
read from the MyISAM record format when a NULL bit pushed them over
a byte boundary.
Fixed by using the int value of the bit instead.
MTR is stuck for about 20 seconds checking for free ports.
The reason is that perl's connect() takes 1 second on Windows
if the port is not open.
This patch fixes the mtr_ping_port implementation on Windows
to use Net::Ping for the port checking with small (0.1sec) timeout.
This patch also removes a pointless second call to check_ports_free()
in the case of the auto build thread.
(moved from Bug 42308)
Details:
- insert_update
Add DROP TABLE which was missing, error numbers -> names
- varbinary
Add DROP TABLE which was missing
- sp_trans_log
Add missing DROP function, improved formatting
The issue in the current bug is unguarded access to mi->slave_running
by the shutdown thread calling end_slave(); that is bug#29968
(which, alas, happened not to be cross-linked with the current bug).
Fixed:
by removing the unguarded read of the running status
and performing the read in terminate_slave_thread()
at the time run_lock is taken (mostly bug#29968 backporting, still with some
improvements over that patch - see the error reporting from
terminate_slave_thread()).
The issue of bug#38716 is fixed here for the 5.0 branch as well.
Note:
There has been a separate artifact identified -
a race condition between init_slave() and end_slave() -
reported as Bug#44467.
The Point() and Linestring() functions created a WKB representation of an
object instead of a real geometry object.
That produced bugs when these values were inserted into tables.
GIS tests fixed accordingly.
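A minimal sketch of the expected behaviour after the fix (the table t1 with a
geometry column g is hypothetical):
  SELECT AsText(Point(1, 1));                 -- a real geometry value, printed as POINT(1 1)
  INSERT INTO t1 (g) VALUES (Point(1, 1));    -- previously a WKB blob was stored instead of a geometry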
per-file messages:
mysql-test/r/gis-rtree.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/r/gis.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/t/gis-rtree.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - GeomFromWKB invocations removed
mysql-test/t/gis.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - AsWKB invocations added
sql/item_geofunc.cc
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
Point() and similar functions to create a proper object
Bug #40925: Equality propagation takes non indexed attribute
Query execution plans and execution time of queries like
select a, b, c from t1
where a > '2008-11-21' and b = a limit 10
depended on the order of equality operator parameters:
"b = a" and "a = b" are not same.
An equality propagation algorithm has been fixed:
the substitute_for_best_equal_field function should not
substitute a field for an equal field if both fields belong
to the same table.
Accordingly, replaced "--exec diff" with "--diff_files", which is a mysqltest command to run a
non-operating-system-specific diff. Removed the file rpl_000015-slave.sh as it is not
necessary in the new MTR.
Turned off autocommit at the start of this test per Innobase recommendation.
Noted significant reduction in run time for this test w/ a minor increase in other tests' run-times.
1) BUG#43309 - Test main.innodb can't be run twice
Detailed revision comments:
r4701 | vasil | 2009-04-13 17:03:46 +0300 (Mon, 13 Apr 2009) | 6 lines
branches/5.0:
Fix Bug#43309 Test main.innodb can't be run twice
by making the innodb.test reentrant.
perl
The problem here was the method how MTR gets its unique thread ids.
Prior to this patch, the method to do it was to maintain a global
table of (pid, mtr_unique_id) pairs. The table was backed by a text
file. The table was cleaned up once in a while and dead processes leaking
unique_ids were determined with kill(0) or by scripting tasklist
on Windows.
This method is flawed specifically on native Windows Perl. fork() is
implemented by starting a new thread and giving it a synthetic negative PID
(threadID*(-1)) until this thread creates a new process with exec().
However, neither tasklist nor any other native Windows tool can cope with
negative perl PIDs. This led to incorrect determination of dead processes
and reuse of already used mtr_unique_ids.
The patch introduces an alternative, portable method of solving the unique-id
problem. When a process needs a unique id in the range [min...max], it just
starts to open files named min, min+1, ..., max in a loop. After a file is
opened, we do a non-blocking flock(). When flock() succeeds, the process has
allocated the ID. When the process dies, the file is unlocked. Checks for zombies
are not necessary.
Since the change would create co-existence problems with older versions
of MTR, because of the different way to calculate IDs, the default ID range
is changed from 250-299 to 300-349.
Another fix that was necessary to enable the --parallel option was to serialize
spawn() calls on Windows. Specifically, IO redirects needed to be protected.
This patch also fixes hanging CTRL-C (as described in Bug #38629) for the
"new" MTR. The fix was already in 6.0 and is now downported.
single quote fails in 5.1.x
Performing fulltext prefix search (a word with truncation
operator) may cause a dead-loop.
The problem was in smarter index merge algorithm - it was writing
record reference to an incorrect memory area.
The warning happens because the string argument is not zero-terminated.
The fix is to add a new parameter 'length' to SQL_CRYPT() and
use ptr() instead of c_ptr().
mysqldump.test is designed to run with concurrent inserts
disabled. It is disabling concurrent inserts at the very
beginning of the test case, and re-enables them at the
bottom of the test. But for some reason (likely incorrect
merge) we enable concurrent inserts in the middle of the test.
The problem is fixed by enabling concurrent inserts only
at the bottom of the test case.