"ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE.
Item_in_subselect::row_value_transformer rewrites "ROW(...)
IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)"
form.
For a subquery selecting from the DUAL pseudo-table, the resulting HAVING
condition is an expression on constant values, so further transformation
with optimize_cond() eliminates this HAVING condition and resets
JOIN::having to NULL.
JOIN::exec() then treated that NULL as an always-true HAVING clause, and
that caused the bug.
To distinguish an optimized-out "HAVING TRUE" clause from "HAVING FALSE"
we already have the JOIN::having_value flag. However, JOIN::exec() ignored
JOIN::having_value as described above, as if it were always set to
COND_TRUE.
The JOIN::exec method has been modified to take into account
the value of the JOIN::having_value field.
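A minimal sketch of the affected construct (the literal values are illustrative):

    -- The subquery yields the single row (3, 4), so the predicate should be
    -- FALSE, but before the fix the optimized-away HAVING made it TRUE.
    SELECT ROW(1, 2) IN (SELECT 3, 4 FROM DUAL) AS should_be_false;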
The problem is that the query cache was storing partial results
if the statement failed when sending the results to the client.
This could cause clients to hang when trying to read the results from the
cache, as they would, for example, wait indefinitely for an EOF packet that
was never saved.
The solution is to always discard the caching of a query that
failed to send its results to the associated client.
The problem is that the query cache stores packets containing
the server status of the time when the cached statement was run.
This might lead to a wrong transaction status on the client side if a
statement is cached during a transaction and is later served outside a
transaction context (and vice versa).
The solution is to take into account the transaction status when
storing in and serving from the query cache.
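A hypothetical sequence showing the scenario (t1 is an illustrative cacheable table):

    START TRANSACTION;
    SELECT * FROM t1;   -- result stored with the in-transaction server status
    COMMIT;

    -- Later, outside any transaction, a cache hit used to replay the stored
    -- status, so the client could wrongly see itself as inside a transaction.
    SELECT * FROM t1;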
The greedy optimizer tracks the current level of nested joins and the position
inside these by setting and maintaining a state that's global for the whole FROM
clause.
This state was correctly maintained inside the selection of the next partial plan
table (in best_extension_by_limited_search()).
greedy_search() also advances the current position by adding the last table
of the partial plan when the partial plan found by
best_extension_by_limited_search() does not contain enough tables.
This may require updating the global state variables that describe the
current position in the plan if the last table placed by greedy_search()
is not a top-level join table.
Fixed by updating the state after placing the partial-plan table in
greedy_search(), in the same way this is done when entering
best_extension_by_limited_search().
Also fixed the signature of the function called to update the state:
check_interleaving_with_nj()
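A hedged sketch of a query shape where the greedy step (rather than the exhaustive best_extension_by_limited_search() pass) places join tables, assuming optimizer_search_depth is set below the number of joined tables; table names and the depth value are illustrative:

    SET optimizer_search_depth = 2;
    SELECT *
    FROM t1
      LEFT JOIN (t2 LEFT JOIN t3 ON t2.b = t3.b) ON t1.a = t2.a
      LEFT JOIN t4 ON t1.a = t4.a;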
When substituting system constant functions with a constant result,
the server was not expecting that the function might return NULL.
Fixed by checking for NULL and returning an Item_null (with the relevant
collation) if the result of the system constant function was NULL.
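A hedged example, assuming DATABASE() is among the affected system constant functions (it returns NULL when no default database is selected):

    -- With no default schema selected, DATABASE() is NULL rather than a
    -- non-NULL constant string.
    SELECT DATABASE(), DATABASE() IS NULL;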
Passing a dubious "year zero" in a non-zero date (i.e. not "0000-00-00")
could lead to a negative year value internally, while the variable holding
it was unsigned. This led to Really Bad Things further down the line.
Calculations on the year are now done internally with a signed type.
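A hedged illustration of the kind of input involved; the specific functions below are illustrative, the point is a date with year 0000 that is not the all-zero date:

    SELECT DAYOFYEAR('0000-01-01'), WEEK('0000-01-01');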
Added a global status variable 'Queries', which represents the total
number of queries executed by the server, including statements executed
by stored programs.
Note: this is the old behaviour of the 'Questions' variable.
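Both counters can be inspected as status variables:

    -- 'Queries' counts all statements, including those executed inside
    -- stored programs (the old behaviour of 'Questions').
    SHOW GLOBAL STATUS LIKE 'Queries';
    SHOW GLOBAL STATUS LIKE 'Questions';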
A table could be marked dependent because it is either 1) an inner table
of an outer join, or 2) part of a STRAIGHT_JOIN. In the STRAIGHT_JOIN case
table->maybe_null should not be assigned. The fix is to set
st_table::maybe_null to 'true' only for those tables which are used in an
outer join.
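A hedged pair of queries showing the distinction (names are illustrative):

    -- t2 is the inner table of an outer join: it can produce
    -- NULL-complemented rows, so maybe_null applies.
    SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a;

    -- t2 is only part of a STRAIGHT_JOIN: no NULL-complemented rows, so
    -- maybe_null must not be set.
    SELECT * FROM t1 STRAIGHT_JOIN t2 ON t1.a = t2.a;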
The MONTHNAME/DAYNAME functions return a binary string, so the
LOWER/UPPER functions have no effect on the result of a
MONTHNAME/DAYNAME call.
The character set of the MONTHNAME/DAYNAME function result has been
changed to the connection character set.
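For example:

    -- Before the fix the result had the binary character set, so
    -- LOWER()/UPPER() left it unchanged; with the connection character set
    -- they work as expected.
    SELECT LOWER(MONTHNAME('2009-05-01')), UPPER(DAYNAME('2009-05-01'));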
- QUICK_INDEX_MERGE_SELECT deinitializes its rnd_pos() scan when it reaches EOF, but the
deinitialization also needs to be done in the QUICK_INDEX_MERGE_SELECT destructor. This is
because certain execution strategies can stop scanning without reaching EOF and then try to do
a full table scan on this table. Failure to deinitialize caused the full scan to use the
(already empty) table->sort and produce zero records.
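A hedged example of a query typically resolved with an index-merge scan, assuming key1 and key2 are indexed columns of an illustrative table t:

    EXPLAIN SELECT * FROM t WHERE key1 = 1 OR key2 = 2;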
IF(..., CAST(longtext AS UNSIGNED), signed_val)
(was: LEFT JOIN on inline view crashes server)
Selecting from a LONGTEXT column wrapped in an expression
like "IF(..., CAST(longtext_column AS UNSIGNED), smth_signed)"
failed an assertion or crashed the server. The IFNULL function was
affected too.
A LONGTEXT column item has a maximum length of 2^32-1 bytes, which
is at the same time the maximum possible length of any MySQL item.
CAST(longtext_column AS UNSIGNED) returns an unsigned numeric result
of length 2^32-1, so the result of an IF/IFNULL function over this
number and some other signed number would have a text length of
(2^32-1)+1 = 2^32 (one extra byte for the minus sign). This overflows
the 32-bit length, which wraps to zero. That caused the assert/crash.
The CAST AS UNSIGNED function has been modified to limit the maximal
length of the resulting number to 67 (the maximal length of a DECIMAL
plus two characters for the minus sign and the dot).
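A hedged reproduction sketch (table and column names are illustrative; the original report involved a LEFT JOIN on an inline view):

    CREATE TABLE t1 (c LONGTEXT);
    INSERT INTO t1 VALUES ('123'), ('456');
    -- Before the fix the computed result length overflowed to zero and the
    -- statements below hit an assertion or crashed the server.
    SELECT IF(c IS NOT NULL, CAST(c AS UNSIGNED), -1) FROM t1;
    SELECT IFNULL(CAST(c AS UNSIGNED), -1) FROM t1;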
IF(..., CAST(longtext AS UNSIGNED), signed_val)
(was: LEFT JOIN on inline view crashes server)
Selecting from a LONGTEXT column wrapped in an expression
like "IF(..., CAST(longtext_column AS UNSIGNED), smth_signed)"
failed an assertion or crashed the server. The IFNULL function was
affected too.
A LONGTEXT column item has a maximum length of 2^32-1 bytes, which
is at the same time the maximum possible length of any MySQL item.
CAST(longtext_column AS UNSIGNED) returns an unsigned numeric result
of length 2^32-1, so the result of an IF/IFNULL function over this
number and some other signed number would have a text length of
(2^32-1)+1 = 2^32 (one extra byte for the minus sign). This overflows
the 32-bit length, which wraps to zero. That caused the assert/crash.
The bug has been fixed by the same solution as in the CASE
function implementation.
Fix parsing of mysql client commands, especially in relation to
single-line comments when --comments was specified.
This is a little tricky, because we need to allow single-line
comments in the middle of statements, but we don't want to allow
client commands in the middle of statements. So in
comment-preservation mode, we go ahead and send single-line
comments to the server immediately when we encounter them on their
own.
This is still slightly flawed, in that it does not handle a
single-line comment with leading spaces, followed by a client-side
command, when --comments has been enabled. But this isn't a new
problem, and it is quite an edge case. Fixing it would require
a more extensive overhaul of how the mysql client parses commands.
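A hedged illustration of the intended behaviour when the client is started with --comments (the statement and command shown are illustrative):

    SELECT 1,  -- a single-line comment in the middle of a statement is kept
           2;
    -- a single-line comment on its own is sent to the server immediately
    status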
Removed values with more than 15 significant digits from the test case. Results of
reading/printing such values using system library functions depend on implementation
and thus are not portable.
Bug #41173 rpl_packet fails sporadically on pushbuild: query 'DROP TABLE t1' failed
Both issues appeared to be a race between the SQL thread executing CREATE TABLE t1
and the IO thread that is expected to stop at the subsequent big-size event.
The two events needed to be serialized, which this patch implements.
The earlier bug required back-porting a part of the fixes for bug#38350 exclusively to the 5.0 version.
BUG#39325 Server crash inside MYSQL_LOG::purge_first_log halts replication
The patch reverses the order of purging the logs and updating the relay-log.info/index files.
This solves the problem of holes caused by crashes happening between updating the info/index files and purging the logs.
NOTE: This is a combined patch for BUG#38826 and BUG#39325. The patch is based on the bugteam tree and takes the reviewers' suggestions into account.
bigint' fails on windows.
Visual Studio does not take into account some x86 hardware limitations
which leads to incorrect results when converting large DOUBLE values
to BIGINT UNSIGNED ones.
Fixed by adding a workaround for double->ulonglong conversion on
Windows.
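A hedged example of the kind of conversion involved; 1e19 exceeds the signed BIGINT range, so the double-to-unsigned path is taken:

    -- On affected Windows builds this conversion produced an incorrect
    -- result before the workaround.
    SELECT CAST(1e19 AS UNSIGNED);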