an assertion in a debug build.
The reason is that the C API doesn't support multiple result sets for prepared
statements, and attempting to execute a stored routine which returns multiple
result sets sometimes leads to a network error. The network error sets the
diagnostic area prematurely, which later leads to the assertion when an
attempt is made to set a second server state.
This patch fixes the issue by changing the scope of the error code returned by
sp_instr_stmt::execute() to include any error which happened during execution.
To ensure that Diagnostic_area::is_sent really means that the message was
sent, all network-related functions are checked for their return status.
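A minimal sketch of the enforced invariant, with illustrative names (this is
not the actual server code): the "sent" flag is set only after the network
write reports success.

  #include <cstdio>

  struct Diagnostics {
    bool is_sent = false;   // true only once a message really went out
  };

  // Stand-in for the network layer; returns false on a write error.
  static bool net_write(const char *msg) {
    return std::fputs(msg, stdout) >= 0;
  }

  // Check the network function's return status before flagging "sent",
  // so is_sent can never claim success for a message that never left.
  static bool send_ok(Diagnostics &da) {
    if (!net_write("OK\n"))
      return false;
    da.is_sent = true;
    return true;
  }

  int main() {
    Diagnostics da;
    return send_ok(da) ? 0 : 1;
  }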
Those keywords do nothing in 5.1 (they are meant for future versions, for
example featuring the Maria engine), so they are removed from the syntax here.
Adding those keywords to future versions when needed is covered by:
- WL#5034 "Add TRANSACTIONAL=0|1 and PAGE_CHECKSUM=0|1 clauses to CREATE TABLE"
- WL#5037 "New ROW_FORMAT value for CREATE TABLE: PAGE"
compression
Since uint3korr() may read 4 bytes depending on build flags and
platform, allocate 1 extra "safety" byte in the network buffer
for cases when uint3korr() in my_real_read() is called to read
the last 3 bytes in the buffer.
It is hard in practice to construct a reliable and reasonably
small test case for this bug, as that would require constructing
an input stream such that a certain sequence of bytes in a
compressed packet happens to be the last 3 bytes of the network
buffer.
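A sketch of why the extra byte matters, assuming a little-endian build where
the 3-byte read is implemented as a 4-byte load plus mask (one common
definition of uint3korr):

  #include <cstdint>
  #include <cstring>
  #include <cstdlib>

  // Fast 3-byte little-endian read: loads 4 bytes, masks off the top
  // one. It touches one byte PAST the 3 bytes it logically reads.
  static uint32_t uint3korr_fast(const unsigned char *p) {
    uint32_t v;
    std::memcpy(&v, p, 4);
    return v & 0x00FFFFFFu;
  }

  int main() {
    const std::size_t net_buf_len = 16;
    // One extra "safety" byte keeps the 4-byte load in bounds even
    // when the 3 bytes being read are the last 3 of the buffer.
    unsigned char *buf =
        static_cast<unsigned char *>(std::calloc(net_buf_len + 1, 1));
    uint32_t last3 = uint3korr_fast(buf + net_buf_len - 3);
    std::free(buf);
    return last3 == 0 ? 0 : 1;
  }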
If the log_bin_trust_function_creators option is not defined, creating a stored
function requires one of the modifiers DETERMINISTIC, NO SQL, or READS SQL
DATA. Executing a stored function should also follow the same rules when in
STATEMENT mode. However, this was not happening, and a wrong error was being
printed out: ER_BINLOG_ROW_RBR_TO_SBR.
The patch makes creation and execution compatible and prints out the correct
error, ER_BINLOG_UNSAFE_ROUTINE, when a stored function without one of the
modifiers above is executed in STATEMENT mode.
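A condensed sketch of the rule now applied at both creation and execution
time (names are illustrative, not the server's actual check):

  enum FunctionModifier { MODIFIER_NONE, DETERMINISTIC, NO_SQL,
                          READS_SQL_DATA };

  // With log_bin_trust_function_creators disabled and statement-based
  // logging in effect, a stored function is allowed only if it carries
  // one of the three modifiers; otherwise ER_BINLOG_UNSAFE_ROUTINE.
  static bool function_is_binlog_safe(FunctionModifier mod,
                                      bool trust_function_creators) {
    return trust_function_creators || mod != MODIFIER_NONE;
  }

  int main() { return function_is_binlog_safe(DETERMINISTIC, false) ? 0 : 1; }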
The maximum value of the max_join_size variable is set by converting
a signed type (long int) with a negative value (-1) to a wider unsigned
type (unsigned long long), which yields the largest possible value of
the wider unsigned type, as per the language conversion rules. But,
depending on build options, the type of max_join_size might be a
narrower type (ha_rows, i.e. unsigned long), which causes the warning
to be thrown once the large value is truncated to fit.
The solution is to ensure that the maximum value of the variable is
always set to the maximum value of integer type of max_join_size.
Furthermore, it would be desirable to always have a fixed type for
this variable, but this would incur a change of behavior which is
not acceptable for a GA version. See Bug#35346.
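The conversion rules in question can be seen in a standalone snippet
(the truncation case assumes a build where unsigned long is 32 bits):

  #include <cstdio>

  int main() {
    long initial = -1;

    // Converting a negative signed value to a wider unsigned type
    // yields the maximum value of the wider type.
    unsigned long long wide = initial;   // ULLONG_MAX

    // If the variable's actual type is narrower (ha_rows may be
    // unsigned long, i.e. 32 bits on some builds), the large value
    // is truncated to fit -- which is what triggered the warning.
    unsigned long narrow = static_cast<unsigned long>(wide);

    std::printf("wide=%llu narrow=%lu\n", wide, narrow);
    return 0;
  }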
to wrong result
When using MIXED mode and issuing 'CREATE TEMPORARY TABLE t_tmp',
the statement is logged if the current binlogging mode is
STATEMENT. This causes the slave to replay the instruction and
create the temporary table as well. If there is no switch to ROW
mode, and later on a 'DROP TEMPORARY TABLE t_tmp' is issued, then
this statement will also be logged and the slave will
remove/close the temporary table.
However, if there is a switch to ROW mode between the CREATE and
DROP TEMPORARY table, the DROP statement will not be logged,
leaving the slave with a dangling temporary table.
This patch addresses this by always logging a DROP TEMPORARY
TABLE IF EXISTS when in MIXED mode and a drop statement is issued
for temporary table(s).
mysqld
The problem was that enabling the event scheduler inside an init
file caused the server to crash upon start-up. The crash occurred
because the event scheduler wasn't being initialized before the
commands in the init file were processed.
The solution is to initialize the event scheduler before the init
file is read. The patch also disables the event scheduler during
bootstrap and makes the bootstrap operation robust in the
presence of background threads.
procedures causes crashes!
The problem of that bug report was mostly fixed by the
patch for bug 38691.
However, the attached test case focused on another crash /
valgrind warning problem: a SHOW PROCESSLIST query accesses
the freed memory of an SP instruction that ran in a parallel
connection.
Changes of thd->query/thd->query_length in dangerous
places have been guarded with the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
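A sketch of the locking pattern, using standard C++ primitives in place of
the server's own mutex types:

  #include <cstddef>
  #include <mutex>
  #include <string>

  struct ThreadState {               // plays the role of THD
    std::mutex lock_thd_data;        // plays the role of LOCK_thd_data
    const char *query = nullptr;
    std::size_t query_length = 0;

    // Writer side: the owning connection changes query/query_length
    // only while holding the per-thread mutex.
    void set_query(const char *q, std::size_t len) {
      std::lock_guard<std::mutex> guard(lock_thd_data);
      query = q;
      query_length = len;
    }

    // Reader side (SHOW PROCESSLIST): copy under the same mutex and
    // never keep the raw pointer after the lock is released.
    std::string snapshot_query() {
      std::lock_guard<std::mutex> guard(lock_thd_data);
      return query ? std::string(query, query_length) : std::string();
    }
  };

  int main() {
    ThreadState thd;
    thd.set_query("SELECT 1", 8);
    return thd.snapshot_query().empty() ? 1 : 0;
  }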
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. in a case when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an
'ok' nor an 'error' packet could be sent back to the client. This
condition led to a hanging client when using a 5.0 server, or an
assertion failure in 5.1.
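A sketch of the classification idea (the constant's value here is
illustrative, not the real handler-layer definition):

  // During the HEAP -> MyISAM conversion, "record file full" on the
  // HEAP table is the one expected, recoverable error; anything else
  // must be marked fatal so an IGNORE clause cannot downgrade it to a
  // warning and leave the client without an 'ok' or 'error' packet.
  enum { HA_ERR_RECORD_FILE_FULL = 135 };   // value illustrative

  static bool error_is_fatal(int error) {
    return error != HA_ERR_RECORD_FILE_FULL;
  }

  int main() {
    // The one recoverable case: HEAP ran out of room, convert it.
    return error_is_fatal(HA_ERR_RECORD_FILE_FULL) ? 1 : 0;
  }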
The problem was that a failing rename just left the partitions in the state
they were in at the time of the failure.
The solution is to try to revert the started rename if a failure occurred.
not logged
Errors encountered during initialization of the SSL subsystem
are printed to stderr, rather than to the error log.
This patch adds a parameter to several SSL init functions to
report the error (if any) out to the caller. The function
init_ssl() in mysqld.cc is moved after the initialization of
the log subsystem, so that any error messages can be logged to
the error log. Printing of messages to stderr has been
retained to get diagnostic output in a client context.
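A sketch of the reporting pattern (function names are illustrative):

  #include <cstdio>

  // The init function reports failure through an out parameter instead
  // of printing to stderr itself; the caller decides where it goes.
  static bool init_ssl_sketch(const char **error_out) {
    *error_out = "SSL context could not be created";  // illustrative
    return false;
  }

  int main() {
    const char *error = nullptr;
    if (!init_ssl_sketch(&error)) {
      // Server context: the log subsystem is already up, so log it.
      // Client context: fall back to stderr, as before.
      std::fprintf(stderr, "SSL init failed: %s\n", error);
    }
    return 0;
  }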
binlog
The fix for BUG 43929 introduced a regression. In a nutshell, when a
statement that changes a non-transactional table fails, it is written to the
binary log with the error code appended. Unfortunately, after BUG 43929, this
failure was flushing the transactional cache, causing a mismatch between the
execution and logging histories. To fix this issue, we avoid flushing the
transactional cache when a commit or rollback has not been issued.
When, during optimization, an item is moved to the upper select,
the item's context is left unchanged. This caused wrong results in
PS/SP mode.
Item_ident::remove_dependence_processor now sets the item's context
to that of the select to which the item is moved.
it returns misleading 'table is full'
InnoDB returns a misleading error message "table is full"
when the number of active concurrent transactions is greater
than 1024.
Fixed by adding the error code ER_TOO_MANY_CONCURRENT_TRXS to the
error codes. InnoDB should return HA_TOO_MANY_CONCURRENT_TRXS
to MySQL, which is then mapped to ER_TOO_MANY_CONCURRENT_TRXS.
Note: a test case is not written, as this was reproducible only by
changing InnoDB code.
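The mapping step, sketched with placeholder error values:

  // Values are illustrative placeholders, not the real error numbers.
  enum { HA_TOO_MANY_CONCURRENT_TRXS = 1,
         ER_TOO_MANY_CONCURRENT_TRXS = 2,
         ER_UNKNOWN_ERROR = 0 };

  // The engine reports a handler-level code; the server translates it
  // to a specific SQL error instead of the misleading "table is full".
  static int map_handler_error(int ha_error) {
    return ha_error == HA_TOO_MANY_CONCURRENT_TRXS
               ? ER_TOO_MANY_CONCURRENT_TRXS
               : ER_UNKNOWN_ERROR;
  }

  int main() { return map_handler_error(HA_TOO_MANY_CONCURRENT_TRXS); }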
In a subselect, all fields from outer selects are marked as dependent on
the selects they belong to. In some cases the optimizer substitutes such a
field with an equivalent expression. For example, "a_field IN (SELECT
outer_field)" is substituted with "a_field = outer_field". As the
outer_field is moved to the upper select, it is not really outer anymore,
but it was left marked as outer. If an index exists over a_field, the
optimizer chooses a wrong execution plan and thus returns a wrong result.
Now the Item_in_subselect::single_value_transformer function removes the
dependent marking from fields when a subselect is optimized away.
The "get_master_version_and_clock(...)" function in sql/slave.cc ignores
error and passes directly when queries fail, or queries succeed
but the result retrieved is empty.
The "get_master_version_and_clock(...)" function should try to reconnect master
if queries fail because of transient network problems, and fail otherwise.
The I/O thread should print a warning if the some system variables do not
exist on master (very old master)
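A sketch of the intended control flow, with hypothetical helpers standing
in for the slave's query machinery:

  enum QueryOutcome { QUERY_OK, QUERY_EMPTY_RESULT, QUERY_NET_ERROR,
                      QUERY_HARD_ERROR };

  // Stubs standing in for the real machinery (hypothetical).
  static QueryOutcome run_master_query(const char *) { return QUERY_OK; }
  static bool try_reconnect_to_master() { return true; }
  static void print_warning(const char *) {}

  static bool probe_master_variable(const char *query) {
    switch (run_master_query(query)) {
      case QUERY_OK:
        return true;
      case QUERY_NET_ERROR:
        // Transient network problem: retry over a fresh connection.
        return try_reconnect_to_master();
      case QUERY_EMPTY_RESULT:
        // Very old master: the variable simply doesn't exist there.
        print_warning("system variable does not exist on master");
        return true;
      case QUERY_HARD_ERROR:
        return false;
    }
    return false;
  }

  int main() { return probe_master_variable("SELECT @@dummy") ? 0 : 1; }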
table
The MERGE table storage engine does not support the HA_CAN_SQL_HANDLER
feature, and any attempt to open the MERGE table will fail with
ER_ILLEGAL_HA.
After an error occurs, the tables that were opened must be closed again,
or they will be left in an inconsistent state. However, the assumption
made in the code for closing and registering handler tables was that only
one table would be opened, and this is not true for MERGE tables, which
cause multiple tables to be opened.
The next time a SELECT operation was issued on the MERGE table, it
caused the system to freeze.
This patch fixes this issue by making sure that all tables which
are opened are also closed in the event of an error.
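A sketch of the corrected cleanup, with hypothetical stand-ins for the
table-opening machinery:

  #include <vector>

  struct Table {};  // stand-in for an open table handle

  // Hypothetical stubs; for a MERGE table, n is greater than one.
  static Table *open_one_table(int) { return new Table(); }
  static void close_table(Table *t) { delete t; }

  // On error, unwind every table opened so far -- not just one, as
  // the old code assumed.
  static bool open_all_or_none(int n, std::vector<Table *> &opened) {
    for (int i = 0; i < n; ++i) {
      Table *t = open_one_table(i);
      if (t == nullptr) {
        for (Table *done : opened) close_table(done);
        opened.clear();
        return false;
      }
      opened.push_back(t);
    }
    return true;
  }

  int main() {
    std::vector<Table *> opened;
    bool ok = open_all_or_none(3, opened);
    for (Table *t : opened) close_table(t);
    return ok ? 0 : 1;
  }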