Problem: ALTER TABLE ADD INDEX may lead to table copying if there are
numeric fields with a non-default display width modifier specified.
Fix: compare numeric fields' storage lengths when deciding whether
they can be considered 'equal' for table alteration purposes.
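A minimal sketch of the scenario (table and index names are
hypothetical):

    CREATE TABLE t1 (a INT(5) NOT NULL);
    -- The non-default display width (5) used to make the old and
    -- new field definitions compare as 'not equal', forcing a
    -- table copy; with the fix the index is added without copying:
    ALTER TABLE t1 ADD INDEX idx_a (a);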
for InnoDB
The class Field_bit_as_char stores the metadata for the
field incorrectly because bytes_in_rec and bit_len are set
to (field_length + 7) / 8 and 0 respectively, while
Field_bit has the correct values field_length / 8 and
field_length % 8.
Solved the problem by re-computing the values for the
metadata based on the field_length instead of using the
bytes_in_rec and bit_len variables.
To handle compatibility with old servers, a table map
flag was added to indicate that the bit computation is
exact. If the flag is clear, the slave computes the
number of bytes required to store the bit field and
compares that instead, effectively allowing replication
*without conversion* from any field length that requires
the same number of bytes to store.
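A worked example of the arithmetic above (the column definition is
hypothetical):

    -- For a BIT(9) column (field_length = 9), the broken metadata
    -- was
    --   bytes_in_rec = (9 + 7) / 8 = 2   and   bit_len = 0,
    -- while the values recomputed from field_length are
    --   9 / 8 = 1 byte   and   9 % 8 = 1 bit.
    CREATE TABLE t1 (b BIT(9));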
SunStudio
SunStudio compilers of late warn about methods that might hide
methods in base classes due to the use of overloading combined
with overriding. SunStudio also warns about variables defined
in local scope or method arguments that have the same name as
a member attribute of the class.
This patch renames methods that might hide base class methods,
to make it easier both for humans and compilers to see what is
actually called. It also renames variables in local scope.
The problem becomes apparent only if HAVE_purify is undefined.
It is related to code in the open_table_from_share() function
where we initialize the record buffer only if HAVE_purify is
enabled. So with HAVE_purify=OFF the record buffer is not
initialized at the open-table stage.
Next, we read a key, find a NULL value and update the appropriate
null bit, but do not update the record buffer. After that the
record is stored in the join cache (store_record_in_cache). For
CHAR fields we strip trailing spaces, and in this case that
procedure reads the uninitialized record buffer.
The fix is to skip the space-stripping procedure for NULL values
of CHAR fields (partially based on the 6.0 JOIN_CACHE
implementation).
freezes (win) the server
The check for equality was assuming the field object is always
created. If it is not, the check dereferenced a NULL pointer.
Fixed to use the data in the create object instead.
Bug #48370 Absolutely wrong calculations with GROUP BY and
decimal fields when using IF
Added the test cases for the above two bugs for regression
testing.
Added additional tests that demonstrate an incomplete fix.
Added a new factory method for Field_new_decimal to
create a field from a (decimal-returning) Item.
The new method makes sure that all the precision and
length variables are capped in a proper way.
This is required because Items can have larger precision
than decimal fields and thus need to be capped when
creating a field based on an Item type.
Fixed the wrong typecast to Item_decimal.
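A hypothetical query shape in the spirit of the report (names and
types are made up); the aggregated IF() returns a decimal whose
precision exceeds what a field can hold, so it must be capped
when the result field is created:

    CREATE TABLE t1 (a INT, d DECIMAL(63,30));
    SELECT a, SUM(IF(a > 0, d, 0)) FROM t1 GROUP BY a;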
We set up DATE and TIMESTAMP differently in field creation than we
did in field metadata creation (for CREATE). Admirably, ALTER TABLE
detected this and didn't damage any data, but it did initiate a
full copy/conversion, which we don't really need to do.
Now we describe Field and Create_field the same for those types.
As a result, ALTER TABLE that only changes meta-data (like a
field's name) no longer forces a data-copy when there needn't
be one.
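A minimal illustration (table and column names are hypothetical):

    CREATE TABLE t1 (created TIMESTAMP, d DATE);
    -- A pure metadata change such as a column rename no longer
    -- triggers a full copy/conversion:
    ALTER TABLE t1 CHANGE created created_at TIMESTAMP;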
The problem was that appending values to the end of an existing
ENUM or SET column was being treated as table data modification,
preventing the immediate (fast) table alteration that occurs when
only table metadata is being modified.
The cause was twofold: adding enumeration or set members to the
end of the list of valid member values was not being considered
a "compatible" table alteration, and for SET columns, the check
was being done on the max display length and not the underlying
(pack) length of the field.
The solution is to augment the function that checks whether two
ENUM or SET fields are compatible -- by comparing the pack lengths
and performing a limited comparison of the member values.
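A minimal illustration (names are hypothetical):

    CREATE TABLE t1 (e ENUM('a','b'), s SET('x','y'));
    -- Appending members at the end of the value list is now
    -- treated as a compatible, metadata-only (fast) alteration:
    ALTER TABLE t1 MODIFY e ENUM('a','b','c');
    ALTER TABLE t1 MODIFY s SET('x','y','z');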
with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before being initialized. A new macro UNINIT_VAR() is introduced for
use in the variable declaration, and LINT_INIT() usage will be
gradually deprecated. (A workaround is used for g++, pending a
patch for a g++ bug.)
GCC warnings for unused results (attribute warn_unused_result)
for a number of system calls (present at least in later
Ubuntus, where the usual void cast trick doesn't work) are
also fixed.
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
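A sketch of the kind of statement affected (the literal is
arbitrary):

    -- The literal below has a scale of 31, more than the 30
    -- digits a DECIMAL column can hold; the deduced column type
    -- is now truncated to fit instead of failing an assertion:
    CREATE TABLE t1 SELECT 0.0000000000000000000000000000001 AS d;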
column on partitioned table
An assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if a query
is executed using an index containing a DOUBLE column on a
partitioned table.
The problem is that the assertion expects all the fields which
are read to be in the read_set.
In this query only the field 'a' is in the read_set, as the
tables in the query are joined by the field 'a', and so the
assertion fails expecting the other field 'b'.
Since the function cmp() is just a comparison of the two
parameters passed, the assertion is not required.
Fixed by removing the assertion in the double fields comparison
function and also fixed the index initialization to do an ordered
index scan with RW LOCK, which ensures all the fields from a key
are in the read_set.
Note: this bug is not reproducible with other data types because
the assertion doesn't exist in the comparison functions for
other data types.
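One hypothetical shape of such a query (a debug build is assumed,
since the assertion exists only there):

    CREATE TABLE t1 (a INT, b DOUBLE, KEY (b))
      PARTITION BY HASH (a) PARTITIONS 2;
    CREATE TABLE t2 (a INT);
    -- The tables are joined by 'a' while an index containing the
    -- DOUBLE column 'b' is used on the partitioned table:
    SELECT t1.a FROM t1 JOIN t2 ON t1.a = t2.a WHERE t1.b > 0;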
when used with --tab
1) New syntax: added a CHARACTER SET clause to
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE); see the example after this list.
mysqldump is updated to use this in --tab mode.
2) The ESCAPED BY/ENCLOSED BY field parameters are documented as
accepting a CHAR argument; however, SELECT ... INTO OUTFILE
silently ignored the rest of multi-character arguments.
For symmetry with LOAD DATA INFILE, the server has been
modified to fail with the same error:
ERROR 42000: Field separator argument is not what is
expected; check the manual
3) Currently, LOAD DATA INFILE recognizes field/line separators
"as is", without converting from the client character set to
the data file character set. So it is assumed that the input
file of LOAD DATA INFILE consists of data in one character set
and separators in another. For compatibility with that [buggy]
behaviour, the SELECT INTO OUTFILE implementation has been kept
"as is" too, but a new warning message has been added:
Non-ASCII separator arguments are not fully supported
This message warns about field/line separators that contain
non-ASCII symbols.
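An example of the new clause from item 1, combined with the usual
export options (the file path is hypothetical):

    SELECT * FROM t1
      INTO OUTFILE '/tmp/t1.txt'
      CHARACTER SET utf8
      FIELDS TERMINATED BY ',' ENCLOSED BY '"'
      LINES TERMINATED BY '\n';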
Altering a table to update a column with types DATE or TIMESTAMP
would incorrectly be seen as a significant change that necessitates
a slow copy+rename operation instead of a fast update.
There were two problems:
The character set is magically set for TIMESTAMP to be "binary",
but that was done too deep in field use code for ALTER TABLE to
know of it. Now, put that in the constructor for Field_timestamp.
Also, when we set the character set for the new replacement/
comparison field, we now raise the "binary" field flag that tells
us to compare it exactly. That is necessary to match the old
stored definition.
Next is the problem that the default length for TIMESTAMP and DATE
fields differs from the length read from the .frm file. The
compressed size is written to the file, but the human-readable,
part-delimited length is used as the default length. IIRC, for
TIMESTAMP it was 19 != 14, and for DATE it was 8 != 10. A length
mismatch causes a table copy.
Also, clean up a place where a comparison function alters one of its
parameters and replace it with an assertion of the condition it
mutates.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the first patch, fixing a number
of the warnings, predominantly "suggest using parentheses
around && in ||", and empty for and while bodies.
Field_time::get_time() did not initialize some members of
MYSQL_TIME which led to valgrind warnings when those members
were accessed in Protocol_simple::store_time().
It is unlikely that this bug could result in wrong data
being returned, since Field_time::get_time() initializes the
'day' member of MYSQL_TIME to 0, so the value of 'day'
in Protocol_simple::store_time() would be 0 regardless
of the values for 'year' and 'month'.
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a union have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
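A minimal illustration (names are hypothetical):

    CREATE TABLE t1 (f FLOAT(10,2));
    CREATE TABLE t2 (f FLOAT(10,2));
    -- Both SELECTs list the same FLOAT(M,D) specification in the
    -- same position, so the result column stays FLOAT(10,2)
    -- rather than being converted to plain FLOAT:
    CREATE TABLE t3 SELECT f FROM t1 UNION SELECT f FROM t2;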
Problem: mysqld doesn't detect that enum data must be reinserted when
performing ALTER TABLE in some cases.
Fix: reinsert the data when altering an enum field if the enum values
are changed.
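A minimal illustration (names and member values are hypothetical):

    CREATE TABLE t1 (e ENUM('a','b','c'));
    -- Changing an existing member value ('b' -> 'x') now causes
    -- the data to be reinserted instead of being treated as a
    -- metadata-only change:
    ALTER TABLE t1 MODIFY e ENUM('a','x','c');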
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address
range overlapped that of the new field value. This caused problems
when the cached value storage was reallocated to provide access for
a new character set representation. The patch checks the address
ranges, and if they overlap, the new field value is copied to new
storage before it is converted to the new character set.
Problem: with @@sql_mode=pad_char_to_full_length
a CHAR column returned additional garbage
after trailing space characters due to
incorrect my_charpos() call.
Fix: call my_charpos() with correct arguments.
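A hypothetical repro shape, assuming a multi-byte character set
such as utf8:

    CREATE TABLE t1 (c CHAR(10) CHARACTER SET utf8);
    INSERT INTO t1 VALUES ('abc');
    SET SESSION sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
    -- The miscomputed character position could make this return
    -- garbage bytes after the padding spaces:
    SELECT c FROM t1;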
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used,
which denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (and
this has been extended to four bytes per character in 6.0),
an extra two bits have been encoded in the field metadata
word for fields of type Field_string (i.e., CHAR fields).
Since the metadata word is filled, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise xor of the extra two bits. Since the upper 4 bits
of the real type are always 1111 for Field_string, this
means that for fields of length <256, the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more, an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
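A worked example of the encoding described above (the column
definition is hypothetical):

    -- A utf8 CHAR(255) column has field_length = 255 * 3 = 765
    -- bytes, which does not fit in 8 bits. The two extra bits are
    -- 765 >> 8 = 2 (binary 10), and 1111 xor 0010 = 1101, an
    -- unrecognized real type for a pre-5.1.26 slave. For byte
    -- lengths below 256 the extra bits are 00, so the encoding
    -- is unchanged.
    CREATE TABLE t1 (c CHAR(255) CHARACTER SET utf8);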
or incorrect.
For better conformance with the standard, the truncation procedure
for CHAR columns has been changed to ignore truncation of trailing
whitespace characters (the Note has been removed).
Finally, for columns with non-binary character sets:
1. CHAR(N) columns silently ignore trailing-whitespace truncation;
2. VARCHAR and TEXT columns issue a Note about truncation.
BLOBs and other columns with the binary character set are
unaffected.
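A minimal illustration (names are hypothetical; note the two
trailing spaces in a four-character value inserted into
three-character columns):

    CREATE TABLE t1 (c CHAR(3), v VARCHAR(3));
    -- Truncating only trailing whitespace is now silent for the
    -- CHAR column, while the VARCHAR column still issues a Note:
    INSERT INTO t1 VALUES ('ab  ', 'ab  ');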
Make the maximum blob size 2**32-1, regardless of word size.
Fix a failure for a timestamp with a size of 2**31-1: the method of
rounding up to the nearest even number would overflow.