Use the UNINIT_VAR workaround instead of LINT_INIT. The former can
also be used to silence false positives in non-debug builds, as
it does not cause any new code to be generated.
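For illustration, a hedged sketch of the difference between the two macros
(the real definitions live in my_global.h; the helper function below is
purely hypothetical):

  #include <stddef.h>

  /* Assumed definition: in normal builds LINT_INIT expands to nothing,
     so it cannot silence "may be used uninitialized" warnings there. */
  #define LINT_INIT(var)
  /* Assumed definition: UNINIT_VAR expands to a self-assignment, so the
     compiler treats the variable as initialized while no code is emitted. */
  #define UNINIT_VAR(x) x= x

  static int first_positive(const int *arr, size_t len)
  {
    size_t i;
    int UNINIT_VAR(found);          /* expands to: int found= found; */
    for (i= 0; i < len; i++)
    {
      if (arr[i] > 0)
      {
        found= arr[i];
        break;
      }
    }
    return i < len ? found : -1;    /* no spurious warning, no extra code */
  }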
argument of inline_mysql_mutex_init in sql_base.cc.
When initializing the LOCK_dd_owns_lock_open mutex, pass
the correct PSI key instead of a NULL value.
mysql-test/suite/perfschema/r/dml_setup_instruments.result:
Updated test results after adding P_S instrumentation
for LOCK_dd_owns_lock_open.
sql/sql_base.cc:
When initializing the LOCK_dd_owns_lock_open mutex, pass
the correct PSI key instead of a NULL value.
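For illustration, a hedged sketch of the intended initialization pattern
(the key variable and registration array shown here are assumptions, not
the exact identifiers used in sql_base.cc/mysqld.cc):

  #ifdef HAVE_PSI_INTERFACE
  static PSI_mutex_key key_LOCK_dd_owns_lock_open;
  static PSI_mutex_info all_lock_open_mutexes[]=
  {
    { &key_LOCK_dd_owns_lock_open, "LOCK_dd_owns_lock_open", PSI_FLAG_GLOBAL }
  };
  #endif

  /* Pass the instrumentation key instead of NULL so that the mutex shows
     up in performance_schema.setup_instruments. */
  mysql_mutex_init(key_LOCK_dd_owns_lock_open,
                   &LOCK_dd_owns_lock_open, MY_MUTEX_INIT_FAST);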
Temporarily disable strict aliasing warnings in order to get
wider coverage for optimized builds. Once the violations are
fixed and the false positives are silenced, this flag should
be removed.
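For context, a minimal (hypothetical) example of the kind of type punning
that triggers these warnings under -Wstrict-aliasing:

  #include <stdint.h>

  /* Dereferencing a float through an incompatible pointer type violates
     the strict aliasing rules; GCC reports "dereferencing type-punned
     pointer will break strict-aliasing rules" for code like this. */
  static uint32_t float_bits(float f)
  {
    return *(uint32_t*) &f;
  }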
The problem was that the x86 assembly-based atomic CAS
(compare-and-swap) implementation could copy the wrong
value to the ebx register, where cmpxchg8b expects
to see part of the "comparand" value. Since the original
value of the ebx register is saved on the stack (that is,
the push instruction changes the stack pointer),
a wrong offset could be used if the compiler decides to
place the source of the comparand value on the stack.
The solution is to copy the comparand value directly from
memory. Since the comparand value is 64 bits wide, it is
copied to the ebx and ecx registers in two steps.
include/atomic/x86-gcc.h:
For reference, an excerpt from a faulty binary follows.
It is a disassembly of my_atomic-t, compiled at -O3 with
ICC 11.0. Most of the code deals with preparations for
an atomic cmpxchg8b operation. This instruction compares
the value in edx:eax with the destination operand. If the
values are equal, the value in ecx:ebx is stored in the
destination, otherwise the value in the destination operand
is copied into edx:eax.
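In C-like terms, the semantics can be sketched as follows (illustration
only; the real instruction performs the whole operation atomically):

  #include <stdint.h>

  static int cmpxchg8b_sketch(volatile int64_t *dest,
                              int64_t *edx_eax,  /* value compared against */
                              int64_t ecx_ebx)   /* value stored on a match */
  {
    if (*dest == *edx_eax)
    {
      *dest= ecx_ebx;       /* equal: ecx:ebx is stored in the destination */
      return 1;             /* ZF is set */
    }
    *edx_eax= *dest;        /* not equal: destination copied into edx:eax */
    return 0;
  }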
In this case, my_atomic_add64 is implemented as a compare
and exchange: the addition is done in temporary storage,
and the result is stored into the destination only if the
original value is still unchanged.
volatile int64 a64;
int64 b=0x1000200030004000LL;
a64=0;
mov 0xfffffda8(%ebx),%eax
xor %ebp,%ebp
mov %ebp,(%eax)
mov %ebp,0x4(%eax)
my_atomic_add64(&a64, b);
mov 0xfffffda8(%ebx),%ebp # Load address of a64
mov 0x0(%ebp),%edx # Copy value
mov 0x4(%ebp),%ecx
mov %edx,0xc(%esp) # Assign to tmp var in the stack
mov %ecx,0x10(%esp)
add $0x30004000,%edx # Sum values
adc $0x10002000,%ecx
mov %edx,0x8(%esp) # Save part of result for later
mov 0x0(%ebp),%esi # Copy value of a64 again
mov 0x4(%ebp),%edi
mov 0xc(%esp),%eax # Load the value of a64 used
mov 0x10(%esp),%edx # for comparison
mov %esi,(%esp)
mov %edi,0x4(%esp)
push %ebx # Push %ebx into stack. Changes esp.
mov 0x8(%esp),%ebx # Wrong restore of the result.
lock cmpxchg8b 0x0(%ebp)
sete %cl
pop %ebx
A subselect executes twice, at the JOIN::optimize stage
and at the JOIN::execute stage. At the optimize stage the
InnoDB prebuilt struct, which is used for the retrieval
of column values, is initialized in
ha_innobase::index_read(), since prebuilt->sql_stat_start
is true at that point.
After QUICK_ROR_INTERSECT_SELECT has finished its job, it
restores the read_set/write_set bitmaps to their initial
values and deactivates one of the handlers used by
QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup
(this is the case when the original handler is reused as
one of the handlers required by the
QUICK_ROR_INTERSECT_SELECT object).
On the second subselect execution the inactive handler is
activated in QUICK_RANGE_SELECT::reset() via
file->ha_index_init().
In ha_index_init() the InnoDB prebuilt struct is
reinitialized with inappropriate read_set/write_set
bitmaps. Further reinitialization in
ha_innobase::index_read() does not happen, as
prebuilt->sql_stat_start is false.
This leads to partial retrieval of the required field
values, and we get a mix of field values from different
records in the record buffer.
The fix is to reset the read_set/write_set bitmaps, as
these values are required for proper initialization of
the internal InnoDB struct which is used for the
retrieval of column values
(see build_template() in ha_innodb.cc).
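For illustration, a hedged sketch of the idea (member and variable names
are simplified assumptions; the actual change lives in sql/opt_range.cc):

  int error;
  MY_BITMAP *save_read_set= head->read_set;
  MY_BITMAP *save_write_set= head->write_set;

  /* Point the table at the quick select's own bitmap before the handler
     (re)initializes the index scan, so that ha_index_init() ->
     build_template() in InnoDB sees suitable read_set/write_set values. */
  head->column_bitmaps_set(&column_bitmap, &column_bitmap);
  error= file->ha_index_init(index, true);
  head->column_bitmaps_set(save_read_set, save_write_set);   /* restore */
  if (error)
    return error;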
mysql-test/include/index_merge_ror_cpk.inc:
test case
mysql-test/r/index_merge_innodb.result:
test case
mysql-test/r/index_merge_myisam.result:
test case
sql/opt_range.cc:
If a ROR merge scan is used, we need to reset the
read_set/write_set bitmaps, as these values are required
for proper initialization of the internal InnoDB struct
which is used for the retrieval of column values
(see build_template() in ha_innodb.cc)
adding new indexes
A fast alter table requires that the existing (old) table
and indices are unchanged (i.e., only new indices can be
added). To verify this, the layout and flags of the old
table/indices are compared for equality with the new ones.
The PACK_KEYS option is a no-op in InnoDB, but the flag
exists, and is used in the table compare. We need to
check this (table) option flag before deciding whether an
index should be packed or not. If the table has
explicitly set PACK_KEYS to 0, the created indices should
not be marked as packed/packable.
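For illustration, a hedged sketch of the kind of check this implies
(HA_OPTION_PACK_KEYS/HA_OPTION_NO_PACK_KEYS are the table option flags
from include/my_base.h; the default threshold shown is hypothetical):

  static bool key_may_be_packed(ulong table_options, uint key_length)
  {
    if (table_options & HA_OPTION_NO_PACK_KEYS)   /* PACK_KEYS=0 */
      return false;
    if (table_options & HA_OPTION_PACK_KEYS)      /* PACK_KEYS=1 */
      return true;
    return key_length >= 8;                       /* hypothetical default */
  }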
compression protocol.
The loss of connection was caused by a malformed packet
sent by the server in case when query cache was in use.
When storing data in the query cache, the query cache
memory allocation algorithm had a tendency to reduce
the number of memory blocks necessary to store a result
set, up to finally storing the entire result set in a
single block. With a significant result set, this memory
block could turn out to be quite large (30 or 40 MB and
more).
When such a result set was sent to the client, the entire
memory block was compressed and written to the network as
a single network packet. However, the length of a network
packet is limited to 0xFFFFFF (16 MB), since the packet
format only allows 3 bytes for the packet length.
As a result, a malformed, overly large packet
with truncated length would be sent to the client
and break the client/server protocol.
The solution is, when sending result sets from the query
cache, to ensure that the data is chopped into
network packets of size <= 16 MB, so that there
is no corruption of the packet length. This solution,
however, has a shortcoming: since the result set
is still stored in the query cache as a single block,
at the time of sending we have lost the boundaries of
individual logical packets (one logical packet = one row
of the result set) and thus can end up sending a truncated
logical packet in a compressed network packet.
As a result, on the client we may require more memory than
max_allowed_packet to keep both the truncated last logical
packet and the next compressed packet.
This never (or in practice never) happens without compression,
since without compression it's very unlikely that
a) a truncated logical packet would remain on the client
when it's time to read the next packet, and
b) a subsequent logical packet being read would be
so large that size-of-new-packet + size-of-old-packet-tail >
max_allowed_packet.
To remedy this issue, we send data in 1 MB packets,
which is below the current client default of 16 MB for
max_allowed_packet, but large enough to ensure there is no
unnecessary overhead from too many syscalls per result set.
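For illustration, a hedged sketch of the chunking loop (the helper
net_write_chunk() is hypothetical and stands in for the actual low-level
write call used in Query_cache::send_result_to_client):

  static const size_t MAX_CHUNK_SIZE= 1024 * 1024;   /* 1 MB */

  static bool send_cached_block(NET *net, const uchar *data, size_t len)
  {
    while (len > 0)
    {
      size_t chunk= len > MAX_CHUNK_SIZE ? MAX_CHUNK_SIZE : len;
      if (net_write_chunk(net, data, chunk))   /* hypothetical helper */
        return true;                           /* propagate the error */
      data+= chunk;
      len-= chunk;
    }
    return false;
  }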
sql/net_serv.cc:
net_realloc() modified: consider already used memory
when comparing against the packet buffer length
sql/sql_cache.cc:
Query_cache::send_result_to_client() modified: send the result
to the client in chunks limited to 1 megabyte.
ORDER BY computed col
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.
There may, however, be a separate ORDER BY clause that mandates reading
the whole 'sort table' anyway. But the previous estimate was left untouched.
Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
The first part is the functional change,
the second is needed as a compile fix on Windows
(header file order).
| committer: Marc Alff <marc.alff@oracle.com>
| branch nick: mysql-5.5-bugfixing-56521
| timestamp: Thu 2010-09-09 14:28:47 -0600
| message:
| Bug#56521 Assertion failed: (m_state == 2), function allocated_to_free, pfs_lock.h (138)
|
| Before this fix, it was possible to build the server:
| - with the performance schema
| - with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
|
| In this case, the resulting binary will just crash,
| as this configuration is not supported.
|
| This fix enforces that the build will fail with a compilation error in this
| configuration, instead of resulting in a broken binary.
| committer: Tor Didriksen <tor.didriksen@oracle.com>
| branch nick: 5.5-bugfixing-56521
| timestamp: Fri 2010-09-10 11:10:38 +0200
| message:
| Header files should be self-contained
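For reference, a hedged sketch of the compile-time guard described in the
first commit above (the exact placement and wording in the performance
schema sources may differ):

  #include "my_atomic.h"

  /* Fail the build instead of producing a binary that would crash. */
  #ifdef MY_ATOMIC_MODE_DUMMY
  #error "MY_ATOMIC_MODE_DUMMY is not supported with the performance schema"
  #endif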
Version "5.1.42 SUSE MySQL RPM"
When a query used a DATE or DATETIME value formatted
differently from "yyyy-mm-dd HH:MM:SS", a
greater-or-equal '>=' condition matched only strictly
greater values in an indexed TIMESTAMP column.
The problem was introduced by the fix for bug 46362
and partially solved (for DATE and DATETIME columns only)
by the fix for bug 47925.
The stored_field_cmp_to_item function has been modified
to take TIMESTAMP columns into account in the same way
as DATE and DATETIME columns.
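A hedged sketch of the intended change (the comparison helpers shown are
hypothetical stand-ins for the existing code paths in sql/item.cc):

  static int stored_field_cmp_to_item_sketch(Field *field, Item *item)
  {
    enum_field_types field_type= field->type();

    if (field_type == MYSQL_TYPE_DATE ||
        field_type == MYSQL_TYPE_DATETIME ||
        field_type == MYSQL_TYPE_TIMESTAMP)      /* added by this fix */
      return compare_as_temporal(field, item);   /* get_date()-based path */

    return compare_as_string(field, item);
  }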
mysql-test/r/type_timestamp.result:
Test case for bug #55779.
mysql-test/t/type_timestamp.test:
Test case for bug #55779.
sql/item.cc:
Bug #55779: select does not work properly in mysql server
Version "5.1.42 SUSE MySQL RPM"
The stored_field_cmp_to_item function has been modified
to take TIMESTAMP columns into account in the same way
as DATE and DATETIME columns.
to 5.5 (removed one test case as it is no longer valid).
mysql-test/r/select.result:
Removed a part of the test case for bug#48291 since it is not
valid anymore. The comments for the removed part were actually
describing a side-effect from the problem addressed by the
addendum patch for bug #54190.
mysql-test/t/select.test:
Removed a part of the test case for bug#48291 since it is not
valid anymore. The comments for the removed part were actually
describing a side-effect from the problem addressed by the
addendum patch for bug #54190.
The patch caused some test failures when merged to 5.5 because,
unlike 5.1, it utilizes Item_cache_row to actually cache row
values. The problem was that Item_cache_row::bring_value()
essentially did nothing. In particular, it did not update its
null_value, so all Item_cache_row objects always had their
null_value set to TRUE. This went unnoticed previously,
but now that Arg_comparator::compare_row() actually depends
on the row's null_value to evaluate the comparison, the
problem has surfaced.
Fixed by calling the underlying item's bring_value() and
updating null_value in Item_cache_row::bring_value().
Since the problem also exists in 5.1 code (albeit hidden, since
the relevant code is not used anywhere), the addendum patch is
against 5.1.
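A hedged sketch of the described fix (member names follow Item_cache_row,
but the exact body in sql/item.cc may differ):

  void Item_cache_row::bring_value()
  {
    if (!example)
      return;
    example->bring_value();           /* evaluate the underlying row item */
    null_value= example->null_value;  /* propagate its null_value */
    for (uint i= 0; i < item_count; i++)
      values[i]->cache_value();       /* cache the individual column values */
  }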
result
Row subqueries producing no rows were not handled as UNKNOWN
values in row comparison expressions.
That was a result of the following two problems:
1. Item_singlerow_subselect did not mark the resulting row
value as NULL/UNKNOWN when no rows were produced.
2. Arg_comparator::compare_row() did not take into account that
a whole argument may be NULL rather than just individual scalar
values.
Before bug#34384 was fixed, the above problems were hidden
because an uninitialized (i.e. without any stored value) cached
object would appear as NULL for scalar values in a row subquery
returning an empty result. After the fix
Arg_comparator::compare_row() would try to evaluate
uninitialized cached objects.
Fixed by removing the aforementioned problems.
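A hedged sketch of the adjustment for problem 2 (simplified; the
per-column comparison is abbreviated into a hypothetical helper):

  int Arg_comparator::compare_row()
  {
    (*a)->bring_value();
    (*b)->bring_value();
    /* If either argument row as a whole is NULL, the result is UNKNOWN. */
    if ((*a)->null_value || (*b)->null_value)
    {
      owner->null_value= 1;
      return -1;
    }
    return compare_row_columns();   /* hypothetical: existing per-column loop */
  }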
mysql-test/r/row.result:
Added a test case for bug #54190.
mysql-test/r/subselect.result:
Updated the result for a test relying on wrong behavior.
mysql-test/t/row.test:
Added a test case for bug #54190.
sql/item_cmpfunc.cc:
If either of the argument rows is NULL, return NULL as the
result of comparison.
sql/item_subselect.cc:
Adjust null_value for Item_singlerow_subselect depending on
whether a row has been produced by the row subquery.
The problem was that mysql_stmt_next_result() (new to 5.5)
was not properly updated.
libmysql/libmysql.c:
mysql_stmt_next_result() modified: set mysql->status=
MYSQL_STATUS_STATEMENT_GET_RESULT before returning if there
is a result set.
After installation from RPM, the server is run as root, not as the mysql user
The problem was that in the CMake-based build
the variable "MYSQLD_USER" was not set and propagated.
In the script "mysqld_safe" its value is used as the
name of the user who should run the server process.
The fix is to explicitly set this variable to "mysql"
and propagate it in the build process.
The fix was analyzed and proposed by Jonathan Perkin.
The EXISTS transformation has additional switches to catch the known corner
cases that appear when transforming an IN predicate into EXISTS. Guarded
conditions are used which are deactivated when a NULL value is seen in the
outer expression's row. When the inner query block supplies NULL values,
however, they are filtered out because no distinction is made between the
guarded conditions: the guarded 'NOT x IS NULL' conditions in the HAVING
clause that filter out NULL values cannot be deactivated in isolation from
those that match values from the outer expression, or NULLs.
The above problem is handled by making the guarded conditions remember
whether they have rejected a NULL value or not; index access methods take
this into account as well.
The bug consisted of
1. Not resetting the property for every nested-loop iteration over the inner
query's result.
2. Not propagating the NULL result properly from the inner query to the IN
optimizer.
3. A hack that may or may not have been needed at some point. According to a
comment, it was aimed at fixing #2 by returning NULL when FALSE was actually
the result. This caused failures when #2 was properly fixed. The hack is
now removed.
The fix resolves all three points.