- The bug was caused by the COUNT(DISTINCT ...) code using the Unique
  object in a way that assumed that a BIT(N) column occupies contiguous
  space in the temp_table->record[0] buffer.
- The fix is to make the COUNT(DISTINCT ...) code instruct
  create_tmp_table to create the temporary table with a column of type
  BIGINT rather than BIT(N).
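A minimal sketch of the affected query shape (the table and column names
here are illustrative, not taken from the original report):

  CREATE TABLE t1 (b BIT(7));
  INSERT INTO t1 VALUES (b'0000001'), (b'0000001'), (b'0000010');
  -- Expected to return 2; before the fix the distinct count could be
  -- wrong because of how the BIT value was read from the row buffer.
  SELECT COUNT(DISTINCT b) FROM t1;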
DELETE FROM ... USING ... statements with the following type of
ambiguous aliasing gave unexpected results:
DELETE FROM t1 AS alias USING t1, t2 AS alias WHERE t1.a = alias.a;
This query would leave table t1 intact but delete rows from t2.
Fixed by changing the DELETE FROM ... USING syntax so that only alias
references (as opposed to alias declarations) may be used in FROM.
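For illustration, after the fix the intent has to be spelled out by
declaring the alias in USING and referencing it in FROM; this sketch
assumes the goal was to delete the matching rows from t2:

  DELETE FROM alias USING t1, t2 AS alias WHERE t1.a = alias.a;

Declaring an alias directly in the FROM part, as in the original
statement shown earlier, is no longer accepted.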
Invalidating a subset of a sufficiently large query cache can take a
long time. During this time the server is effectively frozen and no
other operation can be executed. This patch addresses the problem by
setting a time limit on how long a dictionary access request may take
before the attempt is given up. This patch does not apply to query cache
invalidations issued by DROP, ALTER or RENAME TABLE operations.
When dumping a database from a 4.x server, the mysqldump client
inserted a delimiter character inside special comments of the form:
/*!... CREATE DATABASE IF NOT EXISTS ... ;*/
During restoration, the dump file was split on delimiter characters on
the client side, and the remainder of such comment strings was prepended
to the following statements.
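As an illustration (the version number and database name here are
placeholders), a dump line of roughly this shape:

  /*!40000 CREATE DATABASE IF NOT EXISTS db1 ;*/

was split at the ';' by the client, so the text up to the ';' was sent
as one statement and the trailing '*/' was prepended to whatever
statement followed it in the dump.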
The 4x_server_emul test case option has been added for use with the
DBUG_EXECUTE_IF debugging macro. The option affects debug server builds
only, to emulate the particular behavior of a 4.x server for testing of
the mysqldump client. Non-debug builds are not affected.
The problem is that a SELECT on one thread is blocked by an
INSERT ... ON DUPLICATE KEY UPDATE on another thread even when
low_priority_updates is enabled.
The solution is to downgrade the lock type to the one implied by the
low_priority_updates setting if the INSERT cannot be performed
concurrently.
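A sketch of the scenario with a table-locking engine such as MyISAM
(the table, its data and the exact timing are illustrative):

  -- connection 1
  SET SESSION low_priority_updates = 1;
  INSERT INTO t1 (id, cnt) VALUES (1, 1)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;

  -- connection 2, queued while the INSERT holds its table lock:
  -- before the fix this SELECT could be blocked even though
  -- low_priority_updates was enabled; with the fix the INSERT takes a
  -- low-priority write lock when it cannot be done concurrently.
  SELECT * FROM t1;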
Problem:
In cases where a client-side macro appears inside a server-side comment,
the add_line() function in mysql.cc discarded all characters until the
next delimiter to remove macro arguments from the query string. This
resulted in broken queries being sent to the server when the next
delimiter character appeared past the comment's boundaries, because the
comment closing sequence ('*/') was discarded.
Fix:
If a client-side macro appears inside a server-side comment, discard all
characters in the comment after the macro (that is, until the end of the
comment rather than the next delimiter).
This is a minimal fix to allow only the simple cases used by the
mysqlbinlog utility. Limitations that are worth documenting:
- Nested server-side and/or client-side comments are not supported by
  mysql.cc
- Using client-side macros in multi-line server-side comments is not
  supported
- All characters after a client-side macro in a server-side comment will
  be omitted from the query string (and thus will not be sent to the
  server).
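For example, mysqlbinlog output contains lines of roughly this shape,
where \C (a client-side macro) appears inside a server-side comment (the
character set name is illustrative):

  /*!\C utf8 *//*!*/;

Before the fix, everything after the macro up to the next delimiter was
dropped, including the closing '*/', so the server could receive a
statement with an unterminated comment. With the fix, only the remainder
of the comment itself is dropped.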
Before this fix, the server would accept queries that contained comments,
even when the comments were not properly closed with a '*' '/' marker.
For example,
select 1 /* + 2 <EOF>
would be accepted as
select 1 /* + 2 */ <EOF>
and executed as
select 1
With this fix, the server now rejects queries with unclosed comments
as syntax errors.
Both regular comments ('/' '*') and special comments ('/' '*' '!') must be
closed with '*' '/' to be parsed correctly.
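For instance, both of the following (illustrative) fragments are now
rejected as syntax errors rather than silently completed:

  select 1 /* + 2
  select 1 /*! + 2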
This is a performance bug, affecting in particular the Bison-generated
code for the parser.
Prior to this fix, the grammar used a long chain of reduces to parse an
expression, like:
bit_expr -> bit_term
bit_term -> bit_factor
bit_factor -> value_expr
value_expr -> term
term -> factor
etc
This chain of reduces caused the internal state automaton in the
generated parser to execute more state transitions and more reduces, so
that the generated MySQLParse() function would spend a lot of time
looping to execute all the grammar reductions.
With this patch, the grammar has been reorganized so that rules are more
"flat", limiting the depth of reduces needed to parse <expr>.
Tests have been written to enforce that the relative priorities and
properties of operators have not changed as a result of the grammar
rework.
See the bug report for performance data.
Currently the Last_query_cost session status variable shows only the
cost of a single flat subselect. For complex queries (with subselects,
unions, etc.) Last_query_cost is not valid, as it shows only the cost of
the last optimized subselect.
Fixed by resetting Last_query_cost to zero when the complete cost of the
query cannot be determined.
Last_query_cost will be non-zero only for single flat queries.
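A sketch of the resulting behavior (table names are illustrative):

  SELECT * FROM t1 WHERE a > 10;
  SHOW SESSION STATUS LIKE 'Last_query_cost';  -- non-zero: single flat query

  SELECT * FROM t1 UNION SELECT * FROM t2;
  SHOW SESSION STATUS LIKE 'Last_query_cost';  -- 0: complete cost unknown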