The bool data type was redefined to BOOL (4 bytes on Windows).
Removed the #define and fixed some of the warnings that were uncovered
by this.
Note that the fix also disables two warnings:
4800: 'type' : forcing value to bool 'true' or 'false' (performance warning)
4805: 'operation' : unsafe mix of type 'type' and type 'type' in operation
These warnings will be handled in a separate bug, as they are
performance related or bogus.
Changed the return type of functions that return more than two
distinct values to int.
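As an illustration only (not code from the MySQL tree), the sketch below
shows the kind of pattern that triggers C4800 and the return-type change
described above; all names in it are made up:

  // Illustrative sketch, assuming MSVC at warning level /W3 or higher.
  #include <cstdio>

  // Writing the comparison explicitly avoids C4800 ("forcing value to
  // bool"), which is emitted when an integral expression is converted
  // to bool implicitly.
  static bool has_flag(unsigned mask, unsigned bit)
  {
    return (mask & bit) != 0;
  }

  // A function that reports more than two distinct outcomes should
  // return int (here -1/0/1), not bool.
  static int compare_ints(int a, int b)
  {
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
  }

  int main()
  {
    std::printf("%d %d\n", has_flag(0x5, 0x4), compare_ints(3, 7));
    return 0;
  }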
correctly - crashes server!
Creating a federated table with a connect string that contains an empty
(zero-length) host name, and whose port evaluates to 0 (the port is
incorrect, omitted or 0), crashes the server.
This happens because the federated engine calls strcmp() with a NULL pointer.
Fixed by skipping the strcmp() call when the host name is NULL.
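A minimal sketch of the guard (hosts_equal is a hypothetical helper, not
the actual federated engine code):

  // Minimal sketch, assuming the crash comes from strcmp(NULL, ...).
  #include <cstring>

  // Compare two host names without ever passing NULL to strcmp().
  static bool hosts_equal(const char *host_a, const char *host_b)
  {
    if (host_a == nullptr || host_b == nullptr)
      return host_a == host_b;          // equal only if both are unset
    return std::strcmp(host_a, host_b) == 0;
  }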
isn't running
Pass the process id of the manager as a parameter to "wait_for_pid"
and if the manager isn't running, then do not continue to wait.
Also, capture the error message of our process-existence test,
"kill -0", as we expect errors and shouldn't pass them to the user.
Additionally, be a bit more descriptive about what the problem is.
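For reference, the "kill -0" existence test amounts to the following
check, shown here as a C++ sketch rather than the shell code actually
used (process_exists is a made-up name):

  // Sketch of the process-existence test, assuming a POSIX system.
  #include <cerrno>
  #include <signal.h>
  #include <sys/types.h>

  // Signal 0 performs error checking only and delivers no signal.
  static bool process_exists(pid_t pid)
  {
    if (kill(pid, 0) == 0)
      return true;
    return errno != ESRCH;   // e.g. EPERM: process exists but is not ours
  }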
The problem is that the unimplemented WIN32 version of pthread_kill()
returns ESRCH no matter what arguments it is given, causing
mysqld_list_processes() to set the procinfo to dead because
pthread_kill() returns non-zero. The dead procinfo would then show
up on a second invocation of SHOW PROCESSLIST.
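The liveness check boils down to a pattern like the sketch below
(thread_alive is illustrative, not the actual server code); with a stub
pthread_kill() that always returns ESRCH, it reports every thread as dead:

  // Illustrative sketch of the check, not mysqld_list_processes() itself.
  #include <pthread.h>
  #include <signal.h>

  static bool thread_alive(pthread_t tid)
  {
    // Signal 0: existence check only, no signal is delivered.
    // A non-zero return (ESRCH from the WIN32 stub) marks the thread dead.
    return pthread_kill(tid, 0) == 0;
  }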
Binlogging of an INSERT into an auto-increment blackhole table ignored
an explicit SET INSERT_ID.
Fixed by refining the blackhole engine's insert method to call
update_auto_increment(), which prepares binlogging of the INSERT query
with the preceding SET INSERT_ID.
Note that, as the engine does not store any actual data, one has to
explicitly provide the server with the value of the auto-increment
column via SET INSERT_ID; otherwise binlogging will happen with the
default SET INSERT_ID=1.
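Conceptually the change has the effect sketched below; the names
ProvidedInsertId and binlogged_insert_id are made up for illustration
and do not appear in the server:

  // Self-contained sketch of the behaviour, not the ha_blackhole code.
  #include <cstdio>

  struct ProvidedInsertId
  {
    bool set;                  // true when SET INSERT_ID was issued
    unsigned long long value;  // the explicit value to binlog
  };

  // Mimics the effect of calling update_auto_increment() in the insert
  // path: an explicit insert_id is picked up for binlogging instead of
  // the default value 1.
  static unsigned long long binlogged_insert_id(const ProvidedInsertId &id)
  {
    return id.set ? id.value : 1ULL;
  }

  int main()
  {
    ProvidedInsertId explicit_id = {true, 100};
    std::printf("binlogged insert_id: %llu\n",
                binlogged_insert_id(explicit_id));
    return 0;
  }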
Heap I_S tables are swapped out to disk after plan refinement.
Thus, READ_RECORD::file will still point to the (deleted) heap handler at
the start of execution. This causes a segmentation fault if join buffering
is used and the query is a star query where the result is found to be
empty before accessing some table. In this case that table has not been
initialized (i.e. had its READ_RECORD re-initialized) before the cleanup
routine tries to close the handler.
Fixed by updating READ_RECORD::file when changing the handler.
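A minimal sketch of the fix, with illustrative types standing in for the
real READ_RECORD and handler:

  // Sketch only: keep the cached handler pointer in sync when the table
  // is swapped from the heap engine to disk.
  struct handler;                 // opaque storage engine handler

  struct READ_RECORD_sketch
  {
    handler *file;                // handler used while reading rows
  };

  static void change_handler(READ_RECORD_sketch *info, handler *new_handler)
  {
    // Without this update, cleanup would close the deleted heap handler.
    info->file = new_handler;
  }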
Bug #18453 Warning/error message if there is a mismatch between ...
There were three problems:
1. the reported lack of warnings for the BEFORE syntax of PURGE;
2. the similar lack of warnings for the TO syntax;
3. incompatible behaviour between the two, in that the latter purged
entries regardless of the presence or absence of the actual file
corresponding to an index record, while the former gave up at the first
mismatch.
Fixed by adding generation of the warnings and by synchronizing the logic
of purge_logs() and purge_logs_before_date().
my_stat() is now called in both branches of purge_logs() (responsible
for the TO syntax of PURGE), similarly to how it already behaves for the
BEFORE syntax. If there is no actual binlog file, my_stat() returns NULL
and my_delete() is not invoked.
A critical error is reported to the user if information about a file
listed in the index could not be retrieved, or the file could not be
deleted, with a system error code other than ENOENT.
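The purge loop then follows a pattern like this hedged sketch (stat() and
remove() are used as stand-ins for the mysys my_stat()/my_delete() calls;
names and error handling are illustrative):

  // Returns 0 on success, non-zero for a critical error.
  #include <cerrno>
  #include <cstdio>
  #include <sys/stat.h>

  static int purge_one_log_sketch(const char *log_name)
  {
    struct stat st;
    if (stat(log_name, &st) != 0)        // stand-in for my_stat()
    {
      if (errno == ENOENT)
        return 0;                        // file already gone: warn, keep purging
      return errno;                      // any other error is critical
    }
    if (std::remove(log_name) != 0)      // stand-in for my_delete()
      return errno == ENOENT ? 0 : errno;
    return 0;
  }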
The problem was that COM_STMT_SEND_LONG_DATA was sending a response
packet if the prepared statement wasn't found in the server (due to
reconnection). The commands COM_STMT_SEND_LONG_DATA and COM_STMT_CLOSE
should not send any packets; even error packets should not be sent, since
they are not expected by the client API.
The solution is to clear any errors generated during the execution of the
aforementioned commands and to skip resending of prepared statement
commands. Another fix is that if the connection breaks while sending a
prepared statement command, the command is not sent again, since the
prepared statement is no longer present in the server.
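The rule can be summarized by the sketch below; the enum is an
illustrative subset, not the server's real command enum, and
command_expects_reply() is a made-up helper:

  // Sketch of the "no response packet" rule for these two commands.
  enum illustrative_command
  {
    COM_STMT_EXECUTE,
    COM_STMT_SEND_LONG_DATA,
    COM_STMT_CLOSE
  };

  static bool command_expects_reply(illustrative_command cmd)
  {
    // Errors raised while handling these commands are cleared rather
    // than sent, because the client API never reads a reply for them.
    return cmd != COM_STMT_SEND_LONG_DATA && cmd != COM_STMT_CLOSE;
  }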
Queries like:
SELECT ROW(1, 2) IN (SELECT t1.a, 2)
FROM t1 GROUP BY t1.a
or
SELECT ROW(1, 2) IN (SELECT t1.a, 2 FROM t2)
FROM t1 GROUP BY t1.a
lead to an assertion failure in the
Item_in_subselect::row_value_transformer method in debug
builds, or to an unexpected error message in release builds:
ERROR 1247 (42S22): Reference '<list ref>' not supported (forward
reference in item list)
The unexpected error message and the assertion failure have been
eliminated.
When there are no underlying tables specified for a merge table,
SHOW CREATE TABLE outputs a statement that cannot be executed. The
same is true for mysqldump (it generates dumps that cannot be
executed).
This happens because the SQL parser does not accept an empty UNION() clause.
This patch changes the following:
- it is now possible to execute CREATE/ALTER statement with
empty UNION() clause.
- the same as above, but still worth noting: it is now possible to
remove underlying tables mapping using ALTER TABLE ... UNION=().
- SHOW CREATE TABLE does not output UNION() clause if there are
no underlying tables specified for a merge table. This makes
the mysqldump output slightly smaller.
using a trig in SP
In all 5.0 versions, and in 5.1 up to but not including 5.1.12, when a
stored routine or trigger caused an INSERT into an AUTO_INCREMENT
column, the generated AUTO_INCREMENT value was not written into the
binary log. This means that if a statement does not generate an
AUTO_INCREMENT value itself, there will be no Intvar event (SET
INSERT_ID) associated with it, even if a stored routine or trigger it
invokes causes generation of such a value. Meanwhile, when executing a
stored routine or trigger, the INSERT_ID value would be ignored even if
an INSERT_ID value was available, set by a SET INSERT_ID statement.
Starting from MySQL 5.1.12, the generated AUTO_INCREMENT value is
written into the binary log, and the value will be used, if available,
when executing the stored routine or trigger.
Before the fix of this bug, in MySQL 5.0 and in MySQL 5.1 prior to
5.1.12 (referred to as the buggy versions in the text below), when a
statement that generates an AUTO_INCREMENT value in the top statement
was executed in the body of an SP, all statements in the SP after this
statement would be treated as if they had generated an AUTO_INCREMENT
value in the top statement. When a statement generated an AUTO_INCREMENT
value not in the top statement but in a function/trigger called by it,
an erroneous Intvar event would be associated with the statement. This
erroneous INSERT_ID value wouldn't cause a problem when replicating
between masters and slaves running 5.0.x or 5.1 prior to 5.1.12, because
the erroneous INSERT_ID value was not used when executing
functions/triggers. But when replicating from the buggy versions to
5.1.12 or newer, which do use the INSERT_ID value in functions/triggers,
the erroneous value will be used, causing a duplicate entry error and
stopping the slave.
The patch for 5.0 fixes it so that the erroneous Intvar event is no
longer generated; another patch for 5.1 makes it ignore the SET
INSERT_ID value when executing functions/triggers if the slave is
replicating from a master running one of the buggy versions.