which does not work. Removing these attempted privileges makes
this identical to option 5, so remove it completely. The spirit
of the program appears to be aimed at database privileges, so do
not add another option for granting global privileges as it may
be unexpected. Fixes bug#14618 (same as previous patch, this
time applied to -maint tree).
When using concurrent insert with parallel index reads, it could
happen that reading sessions found keys that pointed to records
yet to be written to the data file. The result was a report of
a corrupted table, but this was a false alarm.
When inserting a record into a table with indexes, the keys are
inserted into the indexes before the record is written to the data
file. When such an insert happens concurrently with selects, an
index read can find a key that references a record that has not yet
been written to the data file. To avoid any access to such a record,
the select saves the current end-of-file position of the data file
when it starts. Since concurrent inserts are always appended at the
end of the data file, the select can easily ignore any concurrently
inserted record.
The problem was that this ignore was applied only to non-exact key
searches (partial key, or searches using >, >=, < or <=).
The fix is to ignore concurrently inserted records for exact key
searches as well.
No test case. Concurrent inserts cannot be tested with the test
suite. Test cases are attached to the bug report.
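A minimal sketch of the ignore logic described above (hypothetical names, not the actual MyISAM code): the reading session remembers the data-file length when the SELECT starts and skips any key whose record offset lies at or beyond that point, since concurrent inserts only append.

    typedef unsigned long long file_off_t;   /* hypothetical offset type */

    struct read_state {
        file_off_t saved_eof;   /* data-file length captured at SELECT start */
    };

    /* Return 1 if the record a key points to was appended after the SELECT
       started and therefore must be ignored by this reading session. */
    static int is_concurrently_inserted(const struct read_state *st,
                                        file_off_t record_pos)
    {
        return record_pos >= st->saved_eof;
    }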
SELECT statement itself returns empty.
As a result of this bug, 'SELECT AGGREGATE_FUNCTION(fld) ... GROUP BY'
can return one row instead of an empty result set.
When GROUP BY only has fields of constant tables
(tables with a single row), the optimizer deletes the group_list.
After that we lose the information about whether the query had a
GROUP BY clause. This matters because
SELECT min(x) from empty_table; and
SELECT min(x) from empty_table GROUP BY y; have to return
different results: the first query should return one row, the
second an empty result set.
So we add the 'group_optimized_away' flag to remember the case
when a GROUP BY exists in the query but is removed
by the optimizer, and check this flag in end_send_group().
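A small illustrative sketch of the decision this flag enables (hypothetical names, not the server's actual end_send_group() code):

    #include <stdbool.h>

    struct join_state {
        bool group_optimized_away;  /* GROUP BY was present but removed       */
        bool found_any_row;         /* at least one input row reached the end */
    };

    /* Decide whether a single summary row must be sent for empty input:
       without any GROUP BY a lone aggregate still yields one row, but if a
       GROUP BY existed and was optimized away, empty input means an empty
       result set. */
    static bool send_summary_row(const struct join_state *j)
    {
        return j->found_any_row || !j->group_optimized_away;
    }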
Backport of a correction for a Mac OS X build problem: a global variable
that is not initialized becomes a "common" symbol and can't be used in
shared libraries unless special flags are used (bug#26218).
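For illustration (generic C, not the actual variable in question): an uninitialized file-scope definition becomes a "common" symbol, which the Mac OS X toolchain rejects in shared libraries unless special flags are passed, while an explicitly initialized one goes into the data section.

    int status_flag;        /* tentative definition: becomes a "common" symbol */
    int status_flag2 = 0;   /* initialized: placed in the data section, links
                               into a shared library without special flags     */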
to 150 or 107 characters for those messages which are generated
by the embedded server during release builds.
This fixes bug#16635:
Error messages wrong: absolute path names, "%s" format code
See the bug report or the changelog for "sql/share/english/errmsg.txt"
for instructions on how to do that with other languages,
even at a customer site, and for the restrictions to observe.
This bug manifested itself for join queries with GROUP BY and HAVING clauses
whose SELECT lists contained DISTINCT. It occurred when the optimizer could
deduce that the result set would contain at most one row.
The bug could lead to wrong result sets for queries of this type because
HAVING conditions were erroneously ignored in some cases in the function
remove_duplicates.
unpack_fields() didn't expect NULL_LENGTH in the field descriptions.
In this case we get NULL in the resulting string, so we cannot use
strdup_root to make a copy of it.
strdup_root is replaced with strmake_root, as the latter is NULL-safe.
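The difference can be sketched with generic helpers (hypothetical names, not the real MEM_ROOT API): a strdup-style copy dereferences its argument and crashes on NULL, while a strmake-style copy that also receives the length can handle NULL safely.

    #include <stdlib.h>
    #include <string.h>

    static char *dup_str(const char *s)             /* crashes when s == NULL */
    {
        return strdup(s);
    }

    static char *make_str(const char *s, size_t n)  /* NULL-safe variant */
    {
        char *p = malloc(n + 1);
        if (!p)
            return NULL;
        if (s)
            memcpy(p, s, n);
        p[s ? n : 0] = '\0';
        return p;
    }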
gettimeofday() can fail, and presumably so can time().
Keep an eye on it.
Since we have no data on this at all so far, we just
retry on failure (and log the event), assuming that
this is just an intermittent failure. This might of
course hang the thread until we succeed. Once we know
more about these failures, a more appropriate, cleverer
scheme can be picked (only try so many times per thread,
etc.; if that fails, return the last "good" time() we got, or
some such). Using sql_print_information() to log, as this
probably only occurs in high-load scenarios where the debug
trace is likely disabled (or might interfere with observing
the effect). No test case, as this is a non-deterministic
issue.
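A minimal retry sketch under the assumptions above (hypothetical wrapper; logging to stderr instead of sql_print_information() to keep it self-contained):

    #include <stdio.h>
    #include <time.h>

    static time_t retry_time(void)
    {
        time_t t;
        /* Retry on failure and log the event; assumes the failure is
           intermittent, so this loop may spin until a call succeeds. */
        while ((t = time(NULL)) == (time_t) -1)
            fprintf(stderr, "time() failed, retrying\n");
        return t;
    }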
Dropping a user-defined function could cause a server crash if
the function was still in use by another thread.
The problem was that our hash implementation didn't update the
hash link list properly when hash_update() was called.
The `SELECT 'r' INTO OUTFILE ... FIELDS ENCLOSED BY 'r'' statement
encoded the 'r' string as a 4-byte string with the value x'725c7272'
(a sequence of 4 characters: r\rr).
The LOAD DATA statement decoded this string to a 1-byte string of
value x'0d' (the ASCII Carriage Return character) instead of the
original 'r' character.
The same error also happened with the FIELDS ENCLOSED BY clause
followed by special characters: 'n', 't', 'r', 'b', '0', 'Z' and 'N'.
NOTE 1: This is a result of an undocumented feature: LOAD DATA INFILE
recognises 2-byte input sequences like \n, \t, \r and \Z in addition
to the documented 2-byte sequences \0 and \N. This feature should be
documented (here backslash is the default ESCAPED BY character;
in a real-life example it may be any ESCAPED BY character).
NOTE 2, changed behaviour:
Now the `SELECT INTO OUTFILE' statement with a `FIELDS ENCLOSED BY'
clause followed by one of the characters 'n', 't', 'r', 'b', '0', 'Z' or 'N'
encodes that special character by doubling it ('r' --> 'rr'),
not by prepending it with an escape character.
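The new encoding rule can be sketched as follows (hypothetical helper, assuming the default backslash ESCAPED BY character): if the ENCLOSED BY character is one of the letters LOAD DATA treats specially after an escape, an occurrence of it inside field data is doubled instead of escape-prefixed, so the reader never sees a sequence like \r.

    #include <string.h>

    /* Return 1 if the ENCLOSED BY character must be doubled rather than
       escaped when it appears inside the field data. */
    static int must_double(char enclosed_by)
    {
        return strchr("ntrb0ZN", enclosed_by) != NULL;
    }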
Sometimes the special 0 ENUM value was ALTERed to a normal
empty-string ENUM value.
The special 0 ENUM value has the same string representation
as a normal ENUM value defined as '' (empty string).
The do_field_string function was used to convert
ENUM data during an ALTER TABLE request, but this
function doesn't care about the numerical "indices" of
ENUM values, i.e. do_field_string doesn't distinguish
the special 0 value from an empty-string value.
A new copy function called do_field_enum has been added to
copy special 0 ENUM values without converting them to an empty
string.
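A simplified illustration (not the actual Field classes): with ENUM('', 'a'), index 0 is the special "no value" entry and index 1 is the defined empty-string member. Copying through the string image collapses the two, while copying the numeric index, as do_field_enum does, preserves the distinction.

    /* String images of a simplified ENUM('','a'):
       index 0 = special 0 value, 1 = '' member, 2 = 'a'. */
    static const char *enum_image[] = { "", "", "a" };

    static unsigned copy_via_string(unsigned src)
    {
        /* Both index 0 and index 1 print as "", and "" converts back to the
           defined '' member, so the special 0 value is lost. */
        return (enum_image[src][0] == '\0') ? 1u : src;
    }

    static unsigned copy_via_index(unsigned src)
    {
        return src;             /* the special 0 value stays 0 */
    }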
Problem: mixing the long and long long types in a comparison may lead to wrong results on some platforms.
Fix: prefer [unsigned] long long as the type behind [u]longlong, as it is used unconditionally in many places.
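An illustration of the kind of platform dependence meant here (assumption: a platform where long is 32 bits and long long is 64 bits, e.g. 64-bit Windows):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long wide = 0x100000000ULL;      /* 2^32 */
        unsigned long narrow = (unsigned long) wide;   /* 0 where long is 32-bit */

        /* On such a platform the test below succeeds, which is exactly the
           kind of wrong result a long/long long mix can produce; keeping
           everything [unsigned] long long avoids it. */
        if (narrow == 0)
            printf("2^32 was truncated to 0 by unsigned long\n");
        return 0;
    }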