A DECIMAL column was used instead of BIGINT for the minimal possible
BIGINT value (-9223372036854775808).
The Item_func_neg::fix_length_and_dec method has been adjusted
to inherit the type of the argument when it is an
Item_int object whose value is equal to LONGLONG_MIN.
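A minimal sketch of the affected case (the table name t1 is
illustrative):

    CREATE TABLE t1 SELECT -9223372036854775808 AS c;
    SHOW CREATE TABLE t1;  -- c should now be BIGINT, not DECIMAL
    DROP TABLE t1;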
Problem: SHOW SLAVE STATUS may return different Slave_IO_Running values
when running some tests.
Fix: wait for a certain slave state where needed to make the tests more
predictable.
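For example, in mysqltest syntax (a sketch of the approach; the exact
include file used by a given test may differ):

    connection slave;
    # Wait until the slave threads are running before checking status
    --source include/wait_for_slave_to_start.inc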
An attempt to open a file with the name '????????.frm'
can produce different errors:
- ER_NO_SUCH_TABLE on Unix
- ER_FILE_NOT_FOUND on Windows
because the question mark has a special meaning on Windows.
Make sure that either of these two errors is accepted.
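In mysqltest syntax a statement can be allowed to fail with either
error, e.g. (the table name is illustrative):

    --error ER_NO_SUCH_TABLE,ER_FILE_NOT_FOUND
    SELECT * FROM `????????`;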
Problem: the temporary buffer used for quoting and escaping
was initialized to the utf8 character set, and thus could not
store data in other character sets.
Fix: change the character set of the buffer so that it can
store any arbitrary sequence of bytes.
The LOCATE function returned NULL if any
of its arguments was evaluated to NULL, while the predicate
LOCATE(str,NULL) IS NULL was erroneously evaluated to FALSE.
This happened because the Item_func_locate::fix_length_and_dec
method mistakenly set the maybe_null flag for
the function item to 0. As a consequence, the function
was considered one that could never return NULL.
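After the fix, the function result and the predicate behave
consistently:

    SELECT LOCATE('a', NULL), LOCATE('a', NULL) IS NULL;
    -- expected: NULL, 1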
UPDATE against a CSV table may cause a server crash or update the table
with wrong values.
The CSV engine can write only a whole row at once. That means it must
read all columns that it is not going to update and write them along
with the updated columns. But only a limited set of columns was read:
those needed for the UPDATE query.
With this fix, all columns are read when performing an UPDATE.
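A minimal sketch of the affected scenario (names are illustrative):

    CREATE TABLE t1 (a INT NOT NULL, b VARCHAR(32) NOT NULL) ENGINE=CSV;
    INSERT INTO t1 VALUES (1, 'foo');
    UPDATE t1 SET a = 2;   -- b must be read and rewritten unchanged
    SELECT * FROM t1;      -- expected: 2, 'foo'
    DROP TABLE t1;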
Problem: crash on attempt to open a table
having the "#mysql50#" prefix in its db or table name.
Fix: this prefix is reserved for "mysql_upgrade"
to access 5.0 tables whose file names are not encoded
according to the 5.1 tablename-to-filename encoding.
Do not try to open tables whose db name or table name
has this prefix.
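A sketch of the previously crashing access (the table name is
illustrative; the exact error returned is not the point, only that the
server must not crash):

    SELECT * FROM `#mysql50#t1`;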
Before this patch, the code in the class Log_to_csv_event_handler, which is
used by the global LOGGER object to write to the tables mysql.slow_log and
mysql.general_log, supported only records of the format defined for
these tables in the database creation scripts.
Also before this patch, the server would allow, with certain limitations,
ALTER TABLE to be performed on the log tables.
As implemented, the behavior of the server with regard to log tables
was inconsistent:
- either ALTER TABLE on log tables should be prohibited,
and the code writing to these tables can make assumptions about the record
format,
- or ALTER TABLE on log tables is permitted, in which case the code
writing a record to these tables should be more flexible and honor
new fields.
In particular, adding an AUTO_INCREMENT column to the logs
did not work as expected (per the bug report).
Given that ALTER TABLE on log tables has been explicitly
implemented to check that logging must be off to perform the operation,
and that current test cases already cover this, the user expectation is
already set that this is a "feature" and should be supported.
With this patch, when writing a record to the log tables, the server will:
- populate AUTO_INCREMENT columns if present,
- populate any additional column with its default value.
Tests are provided that detail the precise sequence of statements
a SUPER user might want to perform to add more columns to the log tables.
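A sketch of such a sequence, assuming the default 5.1 log table setup
(the column name seq is illustrative; log tables support only the CSV
and MyISAM engines, and an AUTO_INCREMENT column requires MyISAM and a
key):

    SET GLOBAL general_log = 'OFF';
    ALTER TABLE mysql.general_log ENGINE = MyISAM;
    ALTER TABLE mysql.general_log
      ADD COLUMN seq BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      ADD PRIMARY KEY (seq);
    SET GLOBAL general_log = 'ON';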
The result of ROUND(<decimal expr>, <non-literal int expr>)
was erroneously converted to double, while the result of
ROUND(<decimal expr>, <int literal>) was preserved as decimal.
As a result of such a conversion the value of ROUND(D,A) could
differ from the value of ROUND(D,val(A)) if D was a decimal expression.
Now the result of the ROUND function is never converted to
double if the first argument is decimal.
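The change is visible in the result type (names are illustrative):

    CREATE TABLE t1 (d DECIMAL(10,4) NOT NULL, a INT NOT NULL);
    INSERT INTO t1 VALUES (1.5000, 1);
    CREATE TABLE t2 SELECT ROUND(d, a) AS r FROM t1;
    SHOW CREATE TABLE t2;  -- r is now a DECIMAL column, not DOUBLE
    DROP TABLE t1, t2;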