with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (thus closing the cursor etc.),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure into an automatic
variable and restore it after it's done.
As a result, on exiting filesort() we have sort.io_cache filled in and
nothing else (because the cursors were closed at the end of reading the data
through the index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it has already been used by filesort() to read the data in). While doing that,
a special case in the index merge destructor will clean up the sort.io_cache,
assuming it's an output of the index merge method and is not needed anymore.
As a result the code that tries to read the data back from the filesort output
will find no data either in memory or on disk and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is a safe thing to do because all the structures are already cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()),
and because the cleanup code is written in a way that tolerates repeated cleanups.
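In pseudo-form the pattern is roughly the following (member and class names come
from the description above; the surrounding code is a simplified sketch, not the
actual patch):

  /* Keep filesort()'s result file and detach it, so that destroying the
     quick select does not free it. */
  IO_CACHE *save_io_cache= table->sort.io_cache;
  table->sort.io_cache= NULL;

  delete select->quick;        /* would otherwise clean up sort.io_cache */
  select->quick= NULL;

  /* Restore the valid pointer so the sorted result can still be read. */
  table->sort.io_cache= save_io_cache;
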
WHERE and GROUP BY clause
Loose index scan may use range conditions on the argument of
the MIN/MAX aggregate functions to find the beginning/end of
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum
values. When the comparison returns 0 (equal) the code should
check the type of the min/max values of the current interval
and accept or reject the row based on whether the limit is
open or not.
The composite condition performing this check was wrong and did not work
in all cases.
Fixed by simplifying the conditions and reversing the logic.
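The accept/reject decision can be illustrated with a small standalone sketch
(hypothetical names, not the server's actual range flags):

  /* A row whose key equals the interval edge qualifies only if that edge is
     closed ('>=' / '<='); an open edge ('>' / '<') excludes it. */
  struct Range_edge
  {
    long value;
    bool open;                      /* true for '<' or '>', false otherwise */
  };

  static bool within_min_edge(long key, const Range_edge &min_edge)
  {
    if (key != min_edge.value)
      return key > min_edge.value;  /* strictly above or below the edge */
    return !min_edge.open;          /* equal: only a closed edge accepts it */
  }
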
While reading a binary log that is in use by a master or was not properly
closed, most likely due to a crash, the following warning message is printed
out: "Warning: this binlog was not closed properly. Most probably mysqld
crashed writing it.". This was scaring our users, as the message did not take
into account the possibility that the file is simply in use by the master.
To avoid unnecessarily scaring our users, we replace the original message with
the following one: "Warning: this binlog is either in use or was not closed
properly."
Backport to MySQL 5.0/5.1 of a fix by Vladislav Vaintroub:
On Vista and later, and also when using terminal services, when the
server is started from the command line, the client cannot connect to it
via the shared memory protocol.
This is a regression introduced when Bug#24731 was fixed. The
reason is that the client tries to attach to the shared memory using the
global kernel object namespace (all kernel objects are prefixed
with Global\). However, a server started from the command line on
Vista and later creates its shared memory and events in the current
session namespace. Thus, the client is unable to find the server and the
connection fails.
The fix for the client is to first try to find the server using "local"
names (omitting the Global\ prefix), and only if the server is not found,
try the global namespace.
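A sketch of the client-side lookup order, using a hypothetical helper around
the Win32 OpenFileMapping call (the real client also opens the accompanying
event objects, and the base name comes from the connection options):

  #include <windows.h>
  #include <string>

  /* Try the session-local name first (a server started from the command line
     on Vista+ creates its objects there), then fall back to the global
     namespace (a server running as a service). */
  static HANDLE open_server_shared_memory(const std::string &base_name)
  {
    HANDLE h= OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, base_name.c_str());
    if (h != NULL)
      return h;
    std::string global_name= "Global\\" + base_name;
    return OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, global_name.c_str());
  }
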
memory issue?
The mysql command line client could misinterpret some character
sequences as commands under some circumstances.
The upper limit for the internal readline buffer was raised to 1 GB
(the same as the server's max_allowed_packet) so that any input
line is processed by add_line() as a whole rather than in
chunks.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the first patch, fixing a number
of the warnings, predominantly "suggest using parentheses
around && in ||", and empty for and while bodies.
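An illustrative example of the dominant warning class and the kind of change
involved (not taken from the actual diff):

  /* gcc 4.3+: "warning: suggest parentheses around && within ||" */
  bool before_fix(bool a, bool b, bool c) { return a && b || c; }    /* warns */
  bool after_fix(bool a, bool b, bool c)  { return (a && b) || c; }  /* same result */

  /* Empty loop bodies are made explicit with {} instead of a bare ';'. */
  void wait_ready(volatile bool &ready) { while (!ready) {} }
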
Holding on to the temporary inno hash index latch is an optimization in
many cases, but a pessimization in some others.
Release temporary latches for those corner cases we (or rather, our customers,
thanks!) have identified, that is, when we are about to do something that
might take a really long time, like REPAIR or filesort.
The crash happens because of uninitialized
lex->ssl_cipher, lex->x509_subject, lex->x509_issuer variables.
The fix is to initialize these variables for stored procedures and functions.
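A minimal sketch of the idea, using a hypothetical, simplified stand-in for the
server's LEX that has only the three members named above:

  /* Hypothetical, simplified stand-in for LEX; illustration only. */
  struct Lex_grant_fields
  {
    const char *ssl_cipher;
    const char *x509_subject;
    const char *x509_issuer;
  };

  /* Initialize the fields before a stored procedure/function is executed,
     so later GRANT processing never reads uninitialized pointers. */
  static void init_grant_fields(Lex_grant_fields *lex)
  {
    lex->ssl_cipher=   0;
    lex->x509_subject= 0;
    lex->x509_issuer=  0;
  }
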
The crash happens due to a wrong max_length value which is set at the
Item_func_round::fix_length_and_dec() stage. The value is set to
args[0]->max_length, which is too big in the case of LONGTEXT (LONGBLOB) fields.
The fix is to set max_length using the float_length() function.
BEGIN/COMMIT/ROLLBACK were subject to the replication db rules, which
caused transaction boundaries not to be recognized correctly
when these queries were ignored by the rules.
Fixed the problem by skipping replication db rules for these
statements.
Bug#34309: '_PC' macro redefinition
For reasons that are now a mystery, we had defined a CPP symbol to
help ancient compilers work better (in some way that's lost to history).
This interferes with at least one modern compiler.
Now, don't define the _PC symbol. Those other underscore-leading
symbols are suspect also, but at least the names aren't inscrutable.
Let's leave them for now.
On 64-bit Windows: querying a MERGE table with keys may cause a
server crash. The problem is generic and may affect any statement
accessing MERGE table cardinality values.
When the MERGE engine was copying cardinality statistics, it was
using an incorrect element size for the cardinality statistics array
(sizeof(ptr)==8 instead of sizeof(ulong)==4), causing access
to memory beyond the allocated bounds.
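A self-contained illustration of this class of bug (not the actual myisammrg
code); on LLP64 Windows sizeof(void*) is 8 while sizeof(unsigned long) is 4:

  #include <cstddef>
  #include <cstring>

  /* Copy an array of cardinality values.  Using the size of the pointer as
     the element size would copy twice as many bytes as were allocated. */
  void copy_cardinality(unsigned long *dst, const unsigned long *src,
                        std::size_t key_parts)
  {
    /* buggy: std::memcpy(dst, src, key_parts * sizeof(src));  -- pointer size */
    std::memcpy(dst, src, key_parts * sizeof(*src));           /* element size */
  }
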
old_password() functions
The PASSWORD() and OLD_PASSWORD() functions could lead to
memory reads outside of an internal buffer when used with BLOB
arguments.
String::c_ptr() assumes there is at least one extra byte
in the internally allocated buffer when adding the trailing
'\0'. This, however, may not be the case when a String object
was initialized with an externally allocated buffer.
The bug was fixed by adding an additional "length" argument to
make_scrambled_password_323() and make_scrambled_password() in
order to avoid String::c_ptr() calls for
PASSWORD()/OLD_PASSWORD().
However, since the make_scrambled_password[_323] functions are
a part of the client library ABI, the functions with the new
interfaces were implemented with the 'my_' prefix in their
names, with the old functions changed to be wrappers around
the new ones to maintain interface compatibility.
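A sketch of the compatibility pattern described above (declarations simplified,
not the library's exact headers):

  #include <cstddef>
  #include <cstring>

  /* New interface: the caller passes the argument length explicitly, so no
     NUL-terminated copy (and no String::c_ptr() call) is required. */
  void my_make_scrambled_password(char *to, const char *password,
                                  std::size_t pass_len);

  /* Old ABI entry point kept as a thin wrapper around the new function. */
  void make_scrambled_password(char *to, const char *password)
  {
    my_make_scrambled_password(to, password, std::strlen(password));
  }
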
stop/start slave
When stopping and restarting the slave while it is replicating
temporary tables, the server would crash or raise an assertion
failure. This was due to the fact that although temporary tables are
saved across slave thread restarts, the reference to the thread in
use (table->in_use) was not being properly updated when the restart
happened (it would still reference the old/invalid thread instead of
the new one).
This patch addresses this issue by resetting the reference to the new
slave thread on slave thread restart.
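A sketch of the reset, assuming a hypothetical rli->save_temporary_tables list
holding the tables preserved across the restart (not the exact server code):

  /* After the slave SQL thread restarts, re-attach every saved temporary
     table to the new thread descriptor instead of the destroyed one. */
  for (TABLE *table= rli->save_temporary_tables; table; table= table->next)
    table->in_use= thd;               /* thd: the newly started SQL thread */
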
Created new .test file - mysqldump_restore - that tests restoring from mysqldump
output for a limited number of basic cases.
Created new .inc file - mysqldump.inc - renames the original table and uses mysqldump
output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
New patch incorporating review feedback prior to push.
mysqldump.test - removed redundant call to include/have_log_bin.inc (was used twice in the test!)
Created new .test file - mysqldump_restore - that tests restoring from mysqldump
output for a limited number of basic cases.
Created new .inc file - mysqldump.inc - renames the original table and uses mysqldump
output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.