adding new indexes
A fast alter table requires that the existing (old) table
and indices are unchanged (i.e. only new indices can be
added). To verify this, the layout and flags of the old
table/indices are compared for equality with the new ones.
The PACK_KEYS option is a no-op in InnoDB, but the flag
exists, and is used in the table compare. We need to
check this (table) option flag before deciding whether an
index should be packed or not. If the table has
explicitly set PACK_KEYS to 0, the created indices should
not be marked as packed/packable.
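A minimal, self-contained sketch of the decision (hypothetical flag names,
not the actual InnoDB code), assuming the explicit PACK_KEYS=0 setting is
visible as a table option flag:

  #include <cstdint>

  // Hypothetical table option flags; the real server keeps equivalent bits
  // in the table's create options.
  static const uint32_t OPT_PACK_KEYS    = 1u << 0;  // PACK_KEYS=1 given
  static const uint32_t OPT_NO_PACK_KEYS = 1u << 1;  // PACK_KEYS=0 given

  // A new index may only be marked packed/packable if the table does not
  // explicitly disable packed keys.
  static bool index_may_be_packed(uint32_t table_options) {
    return (table_options & OPT_NO_PACK_KEYS) == 0;
  }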
compression protocol.
The loss of connection was caused by a malformed packet
sent by the server in case when query cache was in use.
When storing data in the query cache, the query cache
memory allocation algorithm had a tendency to reduce
the number of memory blocks necessary to store a result
set, up to finally storing the entire result set in a single
block. With a significant result set, this memory block
could turn out to be quite large (30-40 MB or more).
When such a result set was sent to the client, the entire
memory block was compressed and written to network as a
single network packet. However, the length of the
network packet is limited by 0xFFFFFF (16MB), since
the packet format only allows 3 bytes for packet length.
As a result, a malformed, overly large packet
with truncated length would be sent to the client
and break the client/server protocol.
The solution is, when sending result sets from the query
cache, to ensure that the data is chopped into
network packets of size <= 16MB, so that there
is no corruption of packet length. This solution,
however, has a shortcoming: since the result set
is still stored in the query cache as a single block,
at the time of sending, we've lost boundaries of individual
logical packets (one logical packet = one row of the result
set) and thus can end up sending a truncated logical
packet in a compressed network packet.
As a result, on the client we may require more memory than
max_allowed_packet to keep both the truncated
last logical packet and the compressed next packet.
This never (or in practice never) happens without compression,
since without compression it's very unlikely that
a) a truncated logical packet would remain on the client
when it's time to read the next packet
b) a subsequent logical packet that is being read would be
so large that size-of-new-packet + size-of-old-packet-tail >
max_allowed_packet.
To remedy this issue, we send data in 1MB-sized packets,
which is below the current client default of 16MB for
max_allowed_packet, but large enough to ensure there is no
unnecessary overhead from too many syscalls per result set.
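A self-contained sketch of the chunking idea (hypothetical helpers, not the
actual net_serv.cc/sql_cache.cc code), showing the 3-byte length header that
imposes the 0xFFFFFF limit and the splitting of one large cached block into
smaller network packets:

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  static const size_t CHUNK_SIZE = 1024 * 1024;   // 1MB, well below 0xFFFFFF

  // Stand-in for writing one network packet: 3-byte little-endian length plus
  // a 1-byte sequence number, followed by the payload.
  static void write_net_packet(std::vector<uint8_t> &wire, const uint8_t *data,
                               size_t len, uint8_t &seq) {
    wire.push_back(static_cast<uint8_t>(len & 0xFF));
    wire.push_back(static_cast<uint8_t>((len >> 8) & 0xFF));
    wire.push_back(static_cast<uint8_t>((len >> 16) & 0xFF));
    wire.push_back(seq++);
    wire.insert(wire.end(), data, data + len);
  }

  // Chop one large cached result block into chunks of at most CHUNK_SIZE so
  // that no single packet length can overflow the 3-byte header.
  static void send_cached_block(std::vector<uint8_t> &wire, const uint8_t *block,
                                size_t block_len) {
    uint8_t seq = 0;
    for (size_t off = 0; off < block_len; off += CHUNK_SIZE) {
      size_t len = std::min(CHUNK_SIZE, block_len - off);
      write_net_packet(wire, block + off, len, seq);
    }
  }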
sql/net_serv.cc:
net_realloc() modified: consider already used memory
when comparing the packet buffer length.
sql/sql_cache.cc:
modified Query_cache::send_result_to_client: send result to client
in chunks limited to 1 megabyte.
The problem was that RENAME TABLE caused an assert if the system variable
lower_case_table_names was 2 (default on Mac OS X) and the old table name
was given in upper case. This caused lowercase_table2.test to fail.
The assert checks that an exclusive metadata lock is held by the connection
trying to do RENAME TABLE - specifically during updates of table triggers.
The assert was triggered since the check is case sensitive and the lock
was held on the normalized (lower case) version of the table name.
This patch fixes the problem by making sure a normalized version of the
table name is used for the metadata lock check, while using a non-normalized
version of the table name for the rename of trigger files. The same is done
for ALTER TABLE ... RENAME.
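A small self-contained sketch of the idea (hypothetical helpers, not the
actual server code): derive the normalized name only for the lock check, and
keep the user-supplied name for the trigger file operations:

  #include <algorithm>
  #include <cctype>
  #include <string>
  #include <utility>

  // Lower-case the table name, as done when lower_case_table_names is set.
  static std::string normalize_table_name(const std::string &name) {
    std::string lowered(name);
    std::transform(lowered.begin(), lowered.end(), lowered.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return lowered;
  }

  // first  -> key used for the metadata lock check (normalized),
  // second -> name used when renaming trigger files (as given by the user).
  static std::pair<std::string, std::string>
  names_for_trigger_rename(const std::string &old_name) {
    return {normalize_table_name(old_name), old_name};
  }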
Regression testing for the bug itself is already covered by
lowercase_table2.test. Additional coverage added to lowercase_fs_off.test.
Before this fix, the server could crash inside a memcpy when reading data
from the EVENTS_WAITS_CURRENT / HISTORY / HISTORY_LONG tables.
The root cause is that the length used in a memcpy could be corrupted,
when another thread writes data in the wait record being read.
Reading unsafe data is ok, per design choice, and the code does sanitize
the data in general, but did not sanitize the length given to memcpy.
The fix is to also sanitize the schema name / object name / file name
length when extracting the data to produce a row.
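A minimal sketch of the sanitation (hypothetical record layout, not the
actual performance schema structures): the length read from a record that
may be concurrently overwritten is clamped before the memcpy:

  #include <algorithm>
  #include <cstddef>
  #include <cstring>

  // Stand-in for a wait record that another thread may rewrite at any time.
  struct wait_record {
    char   object_name[64];
    size_t object_name_length;   // may be transiently inconsistent
  };

  // Copy the name into the row buffer, never trusting the stored length.
  static void copy_object_name(char (&dst)[64], const wait_record &rec) {
    size_t safe_len = std::min(rec.object_name_length, sizeof(dst) - 1);
    memcpy(dst, rec.object_name, safe_len);
    dst[safe_len] = '\0';
  }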
tables".
Attempting to issue an INSERT DELAYED statement for a MERGE
table might have caused a deadlock if it happened as part of
a transaction or under LOCK TABLES, and there was a concurrent
DDL or LOCK TABLES ... WRITE statement which tried to lock one
of its underlying tables.
The problem occurred when a delayed insert handler thread tried
to open a MERGE table and discovered that to do this it had also
to open all underlying tables and hence acquire metadata
locks on them. Since metadata locks on the underlying tables were
not pre-acquired by the connection thread executing INSERT DELAYED,
attempts to do so might lead to waiting. In this case the
connection thread had to wait for the delayed insert thread.
If the thread which was preventing the lock on the underlying table
from being acquired had to wait for the connection thread (due to
this or other metadata locks), a deadlock occurred.
This deadlock was not detected by the MDL deadlock detector since
waiting for the handler thread by the connection thread is not
represented in the wait-for graph.
This patch solves the problem by ensuring that the delayed
insert handler thread never tries to open underlying tables
of a MERGE table. Instead open_tables() is aborted right after
the parent table is opened and an ER_DELAYED_NOT_SUPPORTED
error is emitted (which is passed to the connection thread and
ultimately to the user).
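A sketch of the approach with hypothetical types (the real classes live in
sql_base.h/sql_insert.cc): the strategy object inspects each opened table and
aborts the open process for engines that do not support INSERT DELAYED:

  // Hypothetical, simplified stand-ins for the server's prelocking machinery.
  struct OpenedTable {
    bool engine_supports_insert_delayed;
  };

  struct PrelockingStrategy {
    virtual ~PrelockingStrategy() = default;
    // Returning false aborts open_tables() with an error.
    virtual bool handle_table(const OpenedTable &table) = 0;
  };

  struct DelayedInsertPrelocking : PrelockingStrategy {
    bool handle_table(const OpenedTable &table) override {
      if (!table.engine_supports_insert_delayed) {
        // Here the real code emits ER_DELAYED_NOT_SUPPORTED, which is passed
        // back to the connection thread and ultimately to the user.
        return false;   // stop before any underlying table is opened
      }
      return true;
    }
  };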
mysql-test/r/merge.result:
Added test for bug #56251 "Deadlock with INSERT DELAYED and
MERGE tables".
mysql-test/t/merge.test:
Added test for bug #56251 "Deadlock with INSERT DELAYED and
MERGE tables".
sql/sql_base.cc:
Changed open_n_lock_single_table() to take prelocking strategy
as an argument instead of always using DML_prelocking_strategy.
sql/sql_base.h:
Changed open_n_lock_single_table() to take prelocking strategy
as an argument instead of always using DML_prelocking_strategy.
Added a version of this function which is compatible with old
signature.
sql/sql_insert.cc:
When opening a MERGE table in the delayed insert thread, stop and
emit ER_DELAYED_NOT_SUPPORTED right after opening the parent table
and before opening the underlying tables. This ensures that we
won't try to acquire metadata locks on the underlying tables,
which might lead to a deadlock.
This is achieved by using a special prelocking strategy which
aborts the open_tables() process as soon as we discover that we
have opened a table whose engine doesn't support delayed
inserts.
The crash during boot was caused by a DBUG_PRINT statement in fill_schema_schemata() (in
sql_show.cc). This DBUG_PRINT statement contained several instances of %s in the format
string and for one of these we gave a NULL pointer as the argument. This caused the
call to vsnprintf() to crash when running on Solaris.
The fix for this problem is to replace the call to vsnprintf() with my_vsnprintf(),
which handles a NULL pointer being passed as the argument for %s.
This patch also extends my_vsnprintf() to support %i in the format string.
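The two behaviours involved (NULL-safe %s, and %i handled like %d) can be
illustrated with a small self-contained formatter; this is only an
illustration, not the my_vsnprintf() implementation:

  #include <cstdarg>
  #include <cstddef>
  #include <cstdio>

  // Toy formatter supporting only %s, %d and %i: a NULL %s argument prints
  // "(null)" instead of being dereferenced, and %i is treated exactly like %d.
  static size_t tiny_vsnprintf(char *to, size_t n, const char *fmt, va_list ap) {
    if (n == 0) return 0;
    char *out = to, *end = to + n - 1;
    for (const char *p = fmt; *p && out < end; p++) {
      if (*p != '%' || p[1] == '\0') { *out++ = *p; continue; }
      switch (*++p) {
      case 's': {
        const char *s = va_arg(ap, const char *);
        if (s == nullptr) s = "(null)";        // never dereference NULL
        while (*s && out < end) *out++ = *s++;
        break;
      }
      case 'i':                                // %i accepted as a synonym of %d
      case 'd': {
        char buf[32];
        int len = snprintf(buf, sizeof(buf), "%d", va_arg(ap, int));
        for (int i = 0; i < len && out < end; i++) *out++ = buf[i];
        break;
      }
      default:                                 // includes "%%"
        *out++ = *p;
      }
    }
    *out = '\0';
    return static_cast<size_t>(out - to);
  }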
dbug/dbug.c:
Replace the use of vsnprintf() with my_vsnprintf(). On some platforms
vsnprintf() did not handle a NULL pointer given as the argument
for a %s in the format string.
include/mysql/service_my_snprintf.h:
Add support for %i in format string to my_vsnprintf().
strings/my_vsnprintf.c:
Add support for %i in format string to my_vsnprintf().
unittest/mysys/my_vsnprintf-t.c:
Add unit tests for %i in format string to my_vsnprintf().
Implemented post review comments.
Added --force to the mysql_upgrade command in the test scripts,
so that the test output does not depend on whether other tests involving an
upgrade have been executed or not in the same test suite execution.
ORDER BY computed col
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.
There may be a separate ORDER BY clause however, which mandates reading the
whole 'sort table' anyway. But the previous estimate was left untouched.
Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
The problem was that issuing XA END when the XA transaction was
already ended, caused an assertion. This assertion tests that
the server does not try to send OK to the client if there has
already been an error reported. The bug was only noticeable on
debug versions of the server.
The reason for the problem was that the trans_xa_end() function
reported success if the transaction was at XA_IDLE state at the
end regardless of any errors that occurred during processing of
trans_xa_end(). So if the transaction state was XA_IDLE already,
reported errors would be ignored.
This patch fixes the problem by having trans_xa_end() take into
consideration any reported errors. The patch also fixes a similar
bug with XA PREPARE.
Test case added to xa.test.
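A self-contained sketch of the corrected logic (hypothetical names, not the
actual trans_xa_end() code): report failure when an error has already been
raised, even if the transaction ends up in XA_IDLE:

  // Hypothetical session state; error_reported stands in for a check such
  // as thd->is_error().
  enum xa_states { XA_NOTR, XA_ACTIVE, XA_IDLE, XA_PREPARED };

  struct SessionState {
    xa_states xa_state;
    bool error_reported;
  };

  // Returns true on failure, mirroring the server convention.
  static bool xa_end(SessionState &s) {
    if (s.xa_state == XA_ACTIVE)
      s.xa_state = XA_IDLE;
    // The buggy version returned (s.xa_state != XA_IDLE), which ignored any
    // error already reported when the state was (or became) XA_IDLE.
    return s.error_reported || s.xa_state != XA_IDLE;
  }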
The first part is the functional change,
the second is needed as a compile fix on Windows
(header file order).
| committer: Marc Alff <marc.alff@oracle.com>
| branch nick: mysql-5.5-bugfixing-56521
| timestamp: Thu 2010-09-09 14:28:47 -0600
| message:
| Bug#56521 Assertion failed: (m_state == 2), function allocated_to_free, pfs_lock.h (138)
|
| Before this fix, it was possible to build the server:
| - with the performance schema
| - with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
|
| In this case, the resulting binary will just crash,
| as this configuration is not supported.
|
| This fix enforces that the build will fail with a compilation error in this
| configuration, instead of resulting in a broken binary.
| committer: Tor Didriksen <tor.didriksen@oracle.com>
| branch nick: 5.5-bugfixing-56521
| timestamp: Fri 2010-09-10 11:10:38 +0200
| message:
| Header files should be self-contained
Version "5.1.42 SUSE MySQL RPM"
When a query used a DATE or DATETIME value formatted
differently than "yyyy-mm-dd HH:MM:SS", a
greater-or-equal '>=' condition on an indexed TIMESTAMP column
matched only strictly greater values.
The problem was introduced by the fix for bug 46362
and partially solved (for DATE and DATETIME columns only)
by the fix for bug 47925.
The stored_field_cmp_to_item function has been modified
to take into account TIMESTAMP columns like we do for
DATE and DATETIME columns.
mysql-test/r/type_timestamp.result:
Test case for bug #55779.
mysql-test/t/type_timestamp.test:
Test case for bug #55779.
sql/item.cc:
Bug #55779: select does not work properly in mysql server
Version "5.1.42 SUSE MySQL RPM"
The stored_field_cmp_to_item function has been modified
to take into account TIMESTAMP columns like we do for
DATE and DATETIME.
Before this fix, the server could crash during shutdown,
due to race conditions that occurred when killing the server.
In particular, the performance schema instrumentation handle,
PSI_server, and the performance schema itself would be cleaned up
too soon, causing race conditions with a running kill server thread.
The specifics of the race condition found are that:
the main thread executing "PSI_server= NULL" can cause crashes in
other threads still running, which are executing
"if (PSI_server != NULL) PSI_server->xxx()"
as part of the performance schema instrumentation.
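A simplified illustration of the race (hypothetical types): the
check-then-call pattern on a shared pointer is not safe against a concurrent
write that resets it to NULL:

  // Instrumented threads do a check-then-call on the shared service pointer.
  struct PSI_interface {
    void (*end_wait)(void *locker);
  };

  static PSI_interface *psi_service;   // stand-in for PSI_server

  static void instrumented_thread(void *locker) {
    if (psi_service != nullptr)        // check ...
      psi_service->end_wait(locker);   // ... use: pointer may be NULL by now
  }

  static void shutdown_thread() {
    psi_service = nullptr;             // concurrent write from shutdown path
  }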
While the bug was reported for the kill server thread,
in theory the same crash could happen with the signal thread,
as found by code analysis.
The correct fix would be to only shut down the performance schema
and set PSI_server to NULL after every other thread is guaranteed
to be completed, including the kill_server_thread.
However, due to the existing mysqld server design, this is not the case.
See in particular bug number 56666.
The workaround used to fix this race condition is to simply not
perform the call to shutdown_performance_schema() when the server exits,
and to keep the PSI_server pointer unchanged.
This will cause memory leaks to be reported by tools like valgrind,
but no memory leak actually happens because the process is about to exit().
As a result, the file mysql-test/valgrind.supp has been updated
to filter out these false positive messages.
This code has been tested by running the following tests in
parallel in a loop; these tests have been known to fail with
race conditions in the past:
- rpl_change_master
- binlog_max_extension
- events_restart
- rpl_heartbeat_basic
and no crash or test failure has been seen with the changed code.
Before this fix, it was possible to build the server:
- with the performance schema
- with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
In this case, the resulting binary will just crash,
as this configuration is not supported.
This fix enforces that the build will fail with a compilation error in this
configuration, instead of resulting in a broken binary.
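A sketch of the enforcement (the macro guarding the performance schema build
is an assumption here; MY_ATOMIC_MODE_DUMMY is the dummy-atomics marker named
above):

  #if defined(HAVE_PERFORMANCE_SCHEMA) && defined(MY_ATOMIC_MODE_DUMMY)
  #error "The performance schema cannot be built with MY_ATOMIC_MODE_DUMMY; a real my_atomic implementation is required."
  #endif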
to 5.5 (removed one test case as it is no longer valid).
mysql-test/r/select.result:
Removed a part of the test case for bug#48291 since it is not
valid anymore. The comments for the removed part were actually
describing a side-effect from the problem addressed by the
addendum patch for bug #54190.
mysql-test/t/select.test:
Removed a part of the test case for bug#48291 since it is not
valid anymore. The comments for the removed part were actually
describing a side-effect from the problem addressed by the
addendum patch for bug #54190.
The patch caused some test failures when merged to 5.5 because,
unlike 5.1, it utilizes Item_cache_row to actually cache row
values. The problem was that Item_cache_row::bring_value()
essentially did nothing. In particular, it did not update its
null_value, so all Item_cache_row objects always had
their null_value set to TRUE. This went unnoticed previously,
but now when Arg_comparator::compare_row() actually depends on
the row's null_value to evaluate the comparison, the problem
has surfaced.
Fixed by calling the underlying item's bring_value() and
updating null_value in Item_cache_row::bring_value().
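A simplified, self-contained sketch of the fix (hypothetical class names, not
the actual Item_cache_row code): bring_value() must propagate to the wrapped
item and refresh null_value, otherwise the cached row always appears as NULL
to the comparator:

  #include <vector>

  struct Item {
    bool null_value = true;
    virtual ~Item() = default;
    virtual void bring_value() {}
  };

  struct ItemCacheRow : Item {
    Item *example = nullptr;             // the underlying row-valued item
    std::vector<Item *> values;          // per-column caches

    void bring_value() override {
      if (!example)
        return;
      example->bring_value();            // was effectively missing before
      null_value = example->null_value;  // refresh the row's NULL state
      for (Item *v : values)
        v->bring_value();
    }
  };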
Since the problem also exists in 5.1 code (albeit hidden, since
the relevant code is not used anywhere), the addendum patch is
against 5.1.
table causes assert failure".
Attempting to use FLUSH TABLE table_list WITH READ LOCK
statement for a MERGE table led to an assertion failure if
one of its children was not present in the list of tables
to be flushed. The problem was not visible in non-debug builds.
The assertion failure was caused by the fact that in such
situations the FLUSH TABLES table_list WITH READ LOCK implementation
tried to use (e.g. lock) such child tables without acquiring
metadata locks on them. This happened because when opening tables
we assumed metadata locks on all tables were already acquired
earlier during statement execution, and this assumption was
false for MERGE children.
This patch fixes the problem by ensuring at open_tables() time
that we try to acquire metadata locks on all tables to be opened.
For normal tables such requests are satisfied instantly since
locks are already acquired for them. For MERGE children metadata
locks are acquired in normal fashion.
Note that FLUSH TABLES merge_table WITH READ LOCK will lock for
read both the MERGE table and its children but will flush only
the MERGE table. To flush children one has to mention them in the table
list explicitly. This is expected behavior and is consistent with
usage patterns for this statement (e.g. in the mysqlhotcopy script).
mysql-test/r/flush.result:
Added test case for bug #55273 "FLUSH TABLE tm WITH READ LOCK
for Merge table causes assert failure".
mysql-test/t/flush.test:
Added test case for bug #55273 "FLUSH TABLE tm WITH READ LOCK
for Merge table causes assert failure".
sql/sql_base.cc:
Changed lock_table_names() to support newly introduced
MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag.
sql/sql_base.h:
Introduced MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag for
open_tables() and lock_table_names() which allows skipping the
acquisition of global and schema-scope locks when SNW, SNRW or
X metadata locks are acquired.
sql/sql_reload.cc:
Changed "FLUSH TABLES table_list WITH READ LOCK" code not to
cause assert about missing metadata locks when MERGE table is
flushed without one of its underlying tables.
To achieve this we no longer call open_and_lock_tables() with the
MYSQL_OPEN_HAS_MDL_LOCK flag, so this function automatically
acquires metadata locks on MERGE children if such locks have
not already been acquired at an earlier stage. Instead we call
this function with the MYSQL_OPEN_SKIP_SCOPED_MDL_LOCK flag to
suppress acquisition of the global IX lock in order to keep FLUSH
TABLES table_list WITH READ LOCK compatible with FLUSH TABLES
WITH READ LOCK.
Also changed the implementation to use the lock_table_names()
function for pre-acquiring metadata locks instead of custom code.
To implement this change, the setting of the open_type member for
table list elements was moved to the parser.
sql/sql_yacc.yy:
Now we set the acceptable table type for FLUSH TABLES table_list
WITH READ LOCK at parsing time instead of execution time.
result
Row subqueries producing no rows were not handled as UNKNOWN
values in row comparison expressions.
That was a result of the following two problems:
1. Item_singlerow_subselect did not mark the resulting row
value as NULL/UNKNOWN when no rows were produced.
2. Arg_comparator::compare_row() did not take into account that
a whole argument may be NULL rather than just individual scalar
values.
Before bug#34384 was fixed, the above problems were hidden
because an uninitialized (i.e. without any stored value) cached
object would appear as NULL for scalar values in a row subquery
returning an empty result. After the fix
Arg_comparator::compare_row() would try to evaluate
uninitialized cached objects.
Fixed by addressing the two problems described above.
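A minimal sketch of the comparator part of the fix (hypothetical types, not
the actual Arg_comparator code): a whole-row NULL on either side makes the
comparison UNKNOWN:

  // Stand-in for a row argument; null_value covers the whole-row NULL case
  // (e.g. a row subquery that produced no rows).
  struct RowArg {
    bool null_value;
  };

  // Returns -1 and sets *result_is_null when the comparison result is UNKNOWN.
  static int compare_row(const RowArg &a, const RowArg &b, bool *result_is_null) {
    *result_is_null = false;
    if (a.null_value || b.null_value) {
      *result_is_null = true;            // UNKNOWN, not a definite ordering
      return -1;
    }
    // A real implementation would compare the rows column by column here.
    return 0;
  }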
mysql-test/r/row.result:
Added a test case for bug #54190.
mysql-test/r/subselect.result:
Updated the result for a test relying on wrong behavior.
mysql-test/t/row.test:
Added a test case for bug #54190.
sql/item_cmpfunc.cc:
If either of the argument rows is NULL, return NULL as the
result of comparison.
sql/item_subselect.cc:
Adjust null_value for Item_singlerow_subselect depending on
whether a row has been produced by the row subquery.
The problem was that mysql_stmt_next_result() (new to 5.5)
was not properly updated.
libmysql/libmysql.c:
mysql_stmt_next_result() modified: set mysql->status= MYSQL_STATUS_STATEMENT_GET_RESULT before return
if there is a result set.
Item_func_spatial_collection::fix_length_and_dec()
changed to use argument's print() method to print
the ER_ILLEGAL_VALUE_FOR_TYPE error.
mysql-test/r/gis.result:
Fix for bug#56679: gis.test: valgrind error
- test result adjusted.
sql/item_geofunc.h:
Fix for bug#56679: gis.test: valgrind error
- use the argument's print() method instead of an improper val_str()
call in Item_func_spatial_collection::fix_length_and_dec(), since
val_str() is applicable only to constant items.
With recent changes in the performance schema default sizing parameters,
the memory used by a mysqld binary increased accordingly.
This negatively affects the MTR test suite,
because running several tests in parallel now consumes more resources.
The fix is to leave the default production values unchanged,
and to configure the MTR environment to limit memory
used when running tests in the test suite, which is ok
because only a few objects are typically used within a test script.
This fix:
- changed the default configuration in MTR to use less memory
- adjusted the performance schema tests accordingly
Note that 1,000 mutex instances was too low and caused test failures
in team trees in the past, so the default used in MTR is now 10,000.
The amount of memory used by the performance schema itself
can be observed with the statement SHOW ENGINE PERFORMANCE_SCHEMA STATUS.