The first part is the functional change;
the second is a compile fix needed on Windows
(header file inclusion order).
| committer: Marc Alff <marc.alff@oracle.com>
| branch nick: mysql-5.5-bugfixing-56521
| timestamp: Thu 2010-09-09 14:28:47 -0600
| message:
| Bug#56521 Assertion failed: (m_state == 2), function allocated_to_free, pfs_lock.h (138)
|
| Before this fix, it was possible to build the server:
| - with the performance schema
| - with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
|
| In this case, the resulting binary will just crash,
| as this configuration is not supported.
|
| This fix enforces that the build will fail with a compilation error in this
| configuration, instead of resulting in a broken binary.
| committer: Tor Didriksen <tor.didriksen@oracle.com>
| branch nick: 5.5-bugfixing-56521
| timestamp: Fri 2010-09-10 11:10:38 +0200
| message:
| Header files should be self-contained
Version "5.1.42 SUSE MySQL RPM"
When a query used a DATE or DATETIME value formatted
differently from "yyyy-mm-dd HH:MM:SS", a greater-or-equal
('>=') condition on an indexed TIMESTAMP column matched only
strictly greater values.
The problem was introduced by the fix for bug 46362
and partially solved (for DATE and DATETIME columns only)
by the fix for bug 47925.
The stored_field_cmp_to_item function has been modified
to handle TIMESTAMP columns the same way as
DATE and DATETIME columns.
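As a stand-alone illustration of the kind of mismatch involved (this is
not the server's stored_field_cmp_to_item code; the normalize() helper
below is a hypothetical stand-in for the temporal conversion): comparing
a differently formatted literal as text misorders it against a stored
value, while comparing normalized temporal values does not.

  // Illustrative sketch only (not server code).
  #include <cstdio>
  #include <string>

  // Parse "y-m-d[ H:M:S]" with or without zero padding into a sortable
  // numeric key (YYYYMMDDHHMMSS). Missing time parts default to 0.
  static long long normalize(const std::string &s)
  {
    int y = 0, mo = 0, d = 0, h = 0, mi = 0, se = 0;
    std::sscanf(s.c_str(), "%d-%d-%d %d:%d:%d", &y, &mo, &d, &h, &mi, &se);
    return ((((y * 100LL + mo) * 100 + d) * 100 + h) * 100 + mi) * 100 + se;
  }

  int main()
  {
    const std::string stored  = "2010-09-01 00:00:00"; // value in the TIMESTAMP column
    const std::string literal = "2010-9-1";            // same moment, different format

    // Text comparison: the literal sorts *after* the stored value, so a
    // ">= literal" range would wrongly exclude the equal boundary row.
    std::printf("as text    : stored %s literal\n",
                stored < literal ? "<" : ">=");

    // Temporal comparison: both normalize to the same key, so ">=" matches.
    std::printf("as temporal: stored %s literal\n",
                normalize(stored) < normalize(literal) ? "<" : ">=");
    return 0;
  }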
mysql-test/r/type_timestamp.result:
Test case for bug #55779.
mysql-test/t/type_timestamp.test:
Test case for bug #55779.
sql/item.cc:
Bug #55779: select does not work properly in mysql server
Version "5.1.42 SUSE MySQL RPM"
The stored_field_cmp_to_item function has been modified
to take into account TIMESTAMP columns like we do for
DATE and DATETIME.
Before this fix, the server could crash during shutdown
due to race conditions that occurred when killing the server.
In particular, the performance schema instrumentation handle,
PSI_server, and the performance schema itself would be cleaned up
too soon, causing race conditions with a running kill server thread.
The specifics of the race condition found are as follows:
the main thread executing "PSI_server= NULL" can cause crashes in
other threads that are still running and executing
"if (PSI_server != NULL) PSI_server->xxx()"
as part of the performance schema instrumentation.
While the bug was reported for the kill server thread,
in theory the same crash could happen with the signal thread,
as found by code analysis.
The correct fix would be to shut down the performance schema
and set PSI_server to NULL only after every other thread is guaranteed
to have completed, including the kill_server_thread.
However, due to the existing mysqld server design, this is not the case.
See in particular bug number 56666.
The workaround used to fix this race condition is simply not to
perform the call to shutdown_performance_schema() when the server exits,
and to keep the PSI_server pointer unchanged.
This will cause memory leaks to be reported by tools like valgrind,
but no memory is actually leaked, because the process is about to exit().
As a result, the file mysql-test/valgrind.supp has been updated
to filter out these false positive messages.
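A minimal stand-alone model of the race and of the workaround (this is
not the server code; "psi" plays the role of PSI_server and the
Instrumentation struct is a stand-in for the instrumentation interface):

  #include <atomic>
  #include <cstdio>
  #include <thread>

  struct Instrumentation {
    std::atomic<long> events{0};
    void record_event() { ++events; }
  };

  static Instrumentation psi_impl;
  static Instrumentation *psi = &psi_impl;   // plays the role of PSI_server

  static void instrumented_worker()
  {
    // Check-then-use: another thread setting 'psi' to NULL between the
    // test and the call is exactly the crash window described above.
    for (int i = 0; i < 100000; i++)
      if (psi != nullptr)
        psi->record_event();
  }

  int main()
  {
    std::thread worker(instrumented_worker);

    // Pre-fix shutdown path (racy, shown only as a comment):
    //   psi = nullptr;   // a worker caught between its check and the call crashes
    // Workaround used by the fix: keep the pointer unchanged and skip the
    // shutdown call; the apparent leak is reclaimed when the process exits,
    // and the corresponding valgrind report is suppressed in valgrind.supp.

    worker.join();
    std::printf("events recorded: %ld\n", psi_impl.events.load());
    return 0;
  }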
This code has been tested by running the following tests, which
have been known to fail with race conditions in the past,
in parallel in a loop:
- rpl_change_master
- binlog_max_extension
- events_restart
- rpl_heartbeat_basic
and no crash or test failure has been seen with the changed code.
Before this fix, it was possible to build the server:
- with the performance schema
- with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
In this case, the resulting binary would simply crash,
as this configuration is not supported.
This fix ensures that the build fails with a compilation error in this
configuration, instead of producing a broken binary.
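A hedged sketch of the kind of compile-time guard this amounts to
(MY_ATOMIC_MODE_DUMMY is the real macro; the placement and wording of
the guard are assumptions, not the exact server change):

  /* Sketch: placed in a performance schema source file that is only
     compiled when the performance schema is enabled, so the combination
     "performance schema + dummy my_atomic" cannot produce a binary. */
  #if defined(MY_ATOMIC_MODE_DUMMY)
  #error "MY_ATOMIC_MODE_DUMMY is not supported with the performance schema; build with a real my_atomic implementation."
  #endif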
to 5.5 (removed one test case as it is no longer valid).
mysql-test/r/select.result:
Removed a part of the test case for bug#48291 since it is not
valid anymore. The comments for the removed part were actually
describing a side-effect from the problem addressed by the
addendum patch for bug #54190.
mysql-test/t/select.test:
Removed a part of the test case for bug#48291 since it is not
valid anymore. The comments for the removed part were actually
describing a side-effect from the problem addressed by the
addendum patch for bug #54190.
The patch caused some test failures when merged to 5.5 because,
unlike 5.1, it utilizes Item_cache_row to actually cache row
values. The problem was that Item_cache_row::bring_value()
essentially did nothing. In particular, it did not update its
null_value, so all Item_cache_row objects always had their
null_value set to TRUE. This went unnoticed previously,
but now that Arg_comparator::compare_row() actually depends on
the row's null_value to evaluate the comparison, the problem
has surfaced.
Fixed by calling the underlying item's bring_value() and
updating null_value in Item_cache_row::bring_value().
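A self-contained model of the shape of the fix (simplified stand-ins,
not the actual Item_cache_row class hierarchy): a row cache must pull
the value from the item it wraps and propagate that item's null flag,
otherwise the cache always looks NULL.

  #include <cstdio>

  struct Item {
    bool null_value = true;
    virtual void bring_value() = 0;   // materialize the value
    virtual ~Item() = default;
  };

  struct RowSubquery : Item {
    void bring_value() override { null_value = false; }  // pretend a row was produced
  };

  struct CacheRow : Item {
    Item *example;                    // the wrapped item, as in Item_cache_row
    explicit CacheRow(Item *e) : example(e) {}

    // Before the fix: did essentially nothing, so null_value stayed TRUE.
    // After the fix: delegate to the wrapped item and copy its null flag.
    void bring_value() override
    {
      example->bring_value();
      null_value = example->null_value;
    }
  };

  int main()
  {
    RowSubquery sub;
    CacheRow cache(&sub);
    cache.bring_value();
    std::printf("cache is %s\n", cache.null_value ? "NULL" : "NOT NULL");
    return 0;
  }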
Since the problem also exists in 5.1 code (albeit hidden, since
the relevant code is not used anywhere), the addendum patch is
against 5.1.
table causes assert failure".
Attempting to use a FLUSH TABLES table_list WITH READ LOCK
statement for a MERGE table led to an assertion failure if
one of its children was not present in the list of tables
to be flushed. The problem was not visible in non-debug builds.
The assertion failure was caused by the fact that in such
situations the FLUSH TABLES table_list WITH READ LOCK implementation
tried to use (e.g. lock) such child tables without acquiring
metadata locks on them. This happened because when opening tables
we assumed that metadata locks on all tables had already been
acquired earlier during statement execution, and such an assumption
was false for MERGE children.
This patch fixes the problem by ensuring at open_tables() time
that we try to acquire metadata locks on all tables to be opened.
For normal tables such requests are satisfied instantly since
locks are already acquired for them. For MERGE children metadata
locks are acquired in normal fashion.
Note that FLUSH TABLES merge_table WITH READ LOCK will lock for
read both the MERGE table and its children but will flush only
the MERGE table. To flush the children one has to mention them
explicitly in the table list. This is expected behavior and is
consistent with usage patterns for this statement (e.g. in the
mysqlhotcopy script).
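A usage-level sketch of that behavior through the documented C API
(the table names t_merge, t_child1 and t_child2 are hypothetical):

  #include <mysql.h>
  #include <cstdio>

  static void run(MYSQL *con, const char *stmt)
  {
    if (mysql_query(con, stmt))
      std::fprintf(stderr, "'%s' failed: %s\n", stmt, mysql_error(con));
  }

  void flush_merge_example(MYSQL *con)
  {
    // Locks t_merge and its children for read, but flushes only t_merge.
    run(con, "FLUSH TABLES t_merge WITH READ LOCK");
    run(con, "UNLOCK TABLES");

    // To flush the children as well, list them explicitly.
    run(con, "FLUSH TABLES t_merge, t_child1, t_child2 WITH READ LOCK");
    run(con, "UNLOCK TABLES");
  }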
mysql-test/r/flush.result:
Added test case for bug #55273 "FLUSH TABLE tm WITH READ LOCK
for Merge table causes assert failure".
mysql-test/t/flush.test:
Added test case for bug #55273 "FLUSH TABLE tm WITH READ LOCK
for Merge table causes assert failure".
sql/sql_base.cc:
Changed lock_table_names() to support newly introduced
MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag.
sql/sql_base.h:
Introduced MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag for
open_tables() and lock_table_names(), which allows skipping the
acquisition of global and schema-scope locks when SNW, SNRW or
X metadata locks are acquired.
sql/sql_reload.cc:
Changed "FLUSH TABLES table_list WITH READ LOCK" code not to
cause assert about missing metadata locks when MERGE table is
flushed without one of its underlying tables.
To achieve this we no longer call open_and_lock_tables() with
MYSQL_OPEN_HAS_MDL_LOCK flag so this function automatically
acquires metadata locks on MERGE children if such lock has
not been already acquired at earlier stage. Instead we call
this function with MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag to
suppress acquiring of global IX lock in order to keep FLUSH
TABLES table_list WITH READ LOCK compatible with FLUSH TABLE
WITH READ LOCK.
Also changed implementation to use lock_table_names() function
for pre-acquiring of metadata locks instead of custom code.
To implement this change moved setting of open_type member for
table list elements to parser.
sql/sql_yacc.yy:
Now we set the acceptable type of table for FLUSH TABLES table_list
WITH READ LOCK at parsing time instead of execution time.
result
Row subqueries producing no rows were not handled as UNKNOWN
values in row comparison expressions.
That was a result of the following two problems:
1. Item_singlerow_subselect did not mark the resulting row
value as NULL/UNKNOWN when no rows were produced.
2. Arg_comparator::compare_row() did not take into account that
a whole argument may be NULL rather than just individual scalar
values.
Before bug#34384 was fixed, the above problems were hidden
because an uninitialized (i.e. without any stored value) cached
object would appear as NULL for scalar values in a row subquery
returning an empty result. After that fix,
Arg_comparator::compare_row() would try to evaluate
uninitialized cached objects.
Fixed by eliminating the two problems described above.
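A stand-alone model of the intended semantics (not the server's
Arg_comparator code): a whole row argument that is NULL, such as a row
subquery producing no rows, makes the comparison UNKNOWN, while a NULL
in a scalar position yields UNKNOWN unless some other position already
proves inequality.

  #include <cstdio>
  #include <optional>
  #include <vector>

  // A row may itself be NULL; so may each of its scalar values.
  using Row = std::optional<std::vector<std::optional<int>>>;

  enum class Cmp { True, False, Unknown };

  static Cmp rows_equal(const Row &a, const Row &b)
  {
    if (!a || !b)                     // problem 2 above:
      return Cmp::Unknown;            // a whole argument may be NULL
    Cmp result = Cmp::True;           // rows are assumed to have equal degree
    for (size_t i = 0; i < a->size(); i++) {
      if (!(*a)[i] || !(*b)[i])       // NULL in a scalar position
        result = Cmp::Unknown;
      else if (*(*a)[i] != *(*b)[i])
        return Cmp::False;
    }
    return result;
  }

  int main()
  {
    Row literal = std::vector<std::optional<int>>{1, 2};
    Row empty_subquery;               // row subquery that produced no rows
    std::printf("(1,2) = (empty subquery) -> %s\n",
                rows_equal(literal, empty_subquery) == Cmp::Unknown
                    ? "UNKNOWN" : "known");
    return 0;
  }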
mysql-test/r/row.result:
Added a test case for bug #54190.
mysql-test/r/subselect.result:
Updated the result for a test relying on wrong behavior.
mysql-test/t/row.test:
Added a test case for bug #54190.
sql/item_cmpfunc.cc:
If either of the argument rows is NULL, return NULL as the
result of comparison.
sql/item_subselect.cc:
Adjust null_value for Item_singlerow_subselect depending on
whether a row has been produced by the row subquery.
The problem was that mysql_stmt_next_result() (new to 5.5)
was not properly updated.
libmysql/libmysql.c:
mysql_stmt_next_result() modified: set mysql->status= MYSQL_STATUS_STATEMENT_GET_RESULT
before returning if there is a result set.
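As a usage-level sketch of the client loop that relies on this state
handling (only documented C API calls are used; the statement is assumed
to be already prepared and executed, e.g. a CALL producing several
result sets):

  #include <mysql.h>
  #include <cstdio>

  void drain_results(MYSQL_STMT *stmt)
  {
    int status = 0;
    do {
      MYSQL_RES *meta = mysql_stmt_result_metadata(stmt);
      if (meta) {
        // This iteration produced a result set; in real code you would
        // bind output buffers and call mysql_stmt_fetch() here.
        mysql_stmt_store_result(stmt);
        mysql_stmt_free_result(stmt);
        mysql_free_result(meta);
      }
      status = mysql_stmt_next_result(stmt); // 0: more results, -1: done, >0: error
    } while (status == 0);

    if (status > 0)
      std::fprintf(stderr, "mysql_stmt_next_result failed: %s\n",
                   mysql_stmt_error(stmt));
  }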
In early development of delete buffering, we did allow B-tree pages
to become empty as a result of buffered deletes. That caused fundamental
problems. The fix was to refuse buffering purge operations unless
the page can be guaranteed to be nonempty. Remove an attempt to
cope with empty pages when merging inserts.
Item_func_spatial_collection::fix_length_and_dec()
changed to use the argument's print() method to print
the ER_ILLEGAL_VALUE_FOR_TYPE error.
mysql-test/r/gis.result:
Fix for bug#56679: gis.test: valgrind error
- test result adjusted.
sql/item_geofunc.h:
Fix for bug#56679: gis.test: valgrind error
- use the argument's print() method instead of an improper val_str()
call in Item_func_spatial_collection::fix_length_and_dec(), since
val_str() is applicable only to constant items.
With recent changes in the performance schema default sizing parameters,
the memory used by a mysqld binary increased accordingly.
This negatively affects the MTR test suite,
because running several tests in parallel now consumes more resources.
The fix is to leave the default production values unchanged,
and to configure the MTR environment to limit memory
used when running tests in the test suite, which is ok
because only a few objects are typically used within a test script.
This fix:
- changed the default configuration in MTR to use less memory
- adjusted the performance schema tests accordingly
Note that 1,000 mutex instances proved too low and caused test failures
in team trees in the past, so the default used in MTR is now 10,000.
The amount of memory used by the performance schema itself
can be observed with the statement SHOW ENGINE PERFORMANCE_SCHEMA STATUS.
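A small client-side sketch of reading that output programmatically (the
interactive mysql client works just as well; only documented C API calls
are used, and the output is printed generically rather than assuming a
particular column layout):

  #include <mysql.h>
  #include <cstdio>

  void show_pfs_memory(MYSQL *con)
  {
    if (mysql_query(con, "SHOW ENGINE PERFORMANCE_SCHEMA STATUS"))
      return;
    MYSQL_RES *res = mysql_store_result(con);
    if (!res)
      return;
    unsigned int cols = mysql_num_fields(res);
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)))
      for (unsigned int i = 0; i < cols; i++)
        std::printf("%s%s", row[i] ? row[i] : "NULL",
                    i + 1 < cols ? " | " : "\n");
    mysql_free_result(res);
  }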
ALTER TABLE on a MERGE table could cause a deadlock with two
other connections if we reached a situation where:
1) A connection doing ALTER TABLE can't upgrade to MDL_EXCLUSIVE on the
parent table, but holds TL_READ_NO_INSERT on the child tables.
2) A connection doing DELETE on a child table can't get TL_WRITE on it
since ALTER TABLE holds TL_READ_NO_INSERT.
3) A connection doing SELECT on the parent table can't get TL_READ on
the child tables since TL_WRITE is ahead in the lock queue, but holds
MDL_SHARED_READ on the parent table preventing ALTER TABLE from upgrading.
For regular tables, this deadlock is avoided by having ALTER TABLE
take a MDL_SHARED_NO_WRITE metadata lock on the table. This prevents
DELETE from acquiring MDL_SHARED_WRITE on the table before ALTER TABLE
tries to upgrade to MDL_EXCLUSIVE. In the example above, SELECT would
therefore not be blocked by the pending DELETE as DELETE would not be
able to enter TL_WRITE in the table lock queue.
This patch fixes the problem for merge tables by using the same metadata
lock type for child tables as for the parent table. The child tables will
in this case therefore be locked with MDL_SHARED_NO_WRITE, preventing
DELETE from acquiring a metadata lock and entering the table lock queue.
Change in behavior: By taking the same metadata lock for child tables
as for the parent table, LOCK TABLE on the parent table will now also
implicitly lock the child tables. Since LOCK TABLE on the parent table
now takes more than one metadata lock, it is possible for LOCK TABLE
... WRITE on the parent table or child tables to give ER_LOCK_DEADLOCK
error.
Test case added to mdl_sync.test.
Merge.test/.result has been updated to reflect the change to LOCK TABLE.
Remove non-applicable licensing files:
storage/innobase/COPYING is already present in the MySQL top-level
directory, and storage/innobase/COPYING.Sun_Microsystems is not
applicable anymore now that Oracle and Sun are one company.
After installation from RPM, server is run under root, not mysql user
The problem was that in the CMake-based build
the variable "MYSQLD_USER" was not set and propagated.
In the script "mysqld_safe" its value is used as the
name of the user who should run the server process.
The fix is to explicitly set this variable to "mysql"
and propagate it through the build process.
The fix was analyzed and proposed by Jonathan Perkin.