Field_blob::store() has special code for the GROUP_CONCAT temporary table
(to store blob values in Blob_mem_storage - this prevents them
from being freed/overwritten when the next row is read).
Field_geom and Field_blob_compressed inherit from Field_blob, but they
have their own ::store() methods without this special Blob_mem_storage
support.
Considering that non-grouping CONCAT() of such fields converts
them to plain BLOB, let's do the same for GROUP_CONCAT. To do it,
Item_func_group_concat::setup will signal that it's creating
a temporary table for GROUP_CONCAT, and the Field_blob::make_new_field()
override will create a base Field_blob when under GROUP_CONCAT.
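For illustration, a hypothetical query of the affected kind (table and
columns are made up):

  CREATE TABLE t1 (id INT, g GEOMETRY, b BLOB COMPRESSED);
  -- Both columns are stored as plain BLOB in the GROUP_CONCAT
  -- temporary table, matching non-grouping CONCAT() behaviour:
  SELECT GROUP_CONCAT(g), GROUP_CONCAT(b) FROM t1 GROUP BY id;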
The problem is that s390x is not using the default bzip library we use
on other platforms, which causes compressed string lengths to be different
from what the mtr tests expect.
Fixed by:
- Added have_normal_bzip.inc, which checks if compress() returns the
  expected length.
- Adjusted the results to match the expected ones:
  - main.func_compress.test & archive.archive
- Don't print lengths that depend on the compression library:
  - mysqlbinlog compress tests & connect.zip
- Don't print DATA_LENGTH for SET column_compression_zlib_level=1:
  - main.column_compression
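For illustration, the length such tests depend on can be observed with a
query like this (the exact value differs between compression library builds,
which is what have_normal_bzip.inc tests for):

  -- Returns a different value on s390x than on the reference platforms:
  SELECT LENGTH(COMPRESS(REPEAT('a', 1000)));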
Field_varstring::get_copy_func() did not take into account
that the functions do_varstring1[_mb] and do_varstring2[_mb] do not support
compressed data.
Changed the return value of Field_varstring::get_copy_func()
to `do_field_string` if there is compression and truncation
at the same time. This fixes the problem, so now it works as follows:
- val_str() uncompresses the data
- The prefix is then calculated on the uncompressed data
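A sketch of a statement that takes this code path (names and sizes are
illustrative):

  CREATE TABLE t1 (a VARCHAR(100) COMPRESSED CHARACTER SET utf8mb4);
  INSERT INTO t1 VALUES (REPEAT('x', 100));
  -- Copying into a shorter column requires truncation, so
  -- do_field_string is used: the value is uncompressed first and the
  -- prefix is then taken on the plain data:
  ALTER TABLE t1 MODIFY a VARCHAR(10) COMPRESSED CHARACTER SET utf8mb4;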
Additionally, introducing two new copying functions:
- do_varstring1_no_truncation()
- do_varstring2_no_truncation()
Using the new copying functions in cases when:
- a Field_varstring with length_bytes==1 is changing to a longer
Field_varstring with length_bytes==1
- a Field_varstring with length_bytes==2 is changing to a longer
Field_varstring with length_bytes==2
In these cases neither compression nor multi-byte prefixes matter:
the entire data gets fully copied
from the source column to the target column as-is.
This is a new optimization of sorts, but it was also needed
to preserve existing MTR test results.
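For example, a widening conversion like the following (illustrative) uses
the new functions:

  CREATE TABLE t2 (a VARCHAR(10) COMPRESSED);
  -- Both the old and the new column have length_bytes==1 and the new
  -- one is longer, so do_varstring1_no_truncation() copies the bytes
  -- verbatim, compressed or not:
  ALTER TABLE t2 MODIFY a VARCHAR(50) COMPRESSED;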
This patch changes the main name of the 3-byte character set from utf8 to
utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set by default,
so that utf8 means utf8mb3. If the flag is not set, utf8 means utf8mb4.
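A sketch of the intended behaviour (the tables are made up):

  SET old_mode = 'UTF8_IS_UTF8MB3';                -- the default
  CREATE TABLE t1 (a CHAR(1) CHARACTER SET utf8);  -- utf8 = utf8mb3
  SET old_mode = '';
  CREATE TABLE t2 (a CHAR(1) CHARACTER SET utf8);  -- utf8 = utf8mb4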
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
The Storage-Engine Independent Column Compression does not call
deflateEnd() when deflate() does not return Z_STREAM_END.
This happens, for instance, when the data is already (externally)
compressed and deflate() needs more space than the original data.
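For illustration, a hypothetical way to reach this path (RANDOM_BYTES() is
available in recent MariaDB versions; any incompressible payload above
column_compression_threshold works):

  CREATE TABLE t1 (a BLOB COMPRESSED);
  -- deflate() needs more space than the incompressible original, the
  -- value is stored uncompressed, and before this fix the zlib stream
  -- was not released on that path:
  INSERT INTO t1 VALUES (RANDOM_BYTES(1000));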
This patch is based on a contribution by Martijn Broenland.
In collaboration with Sergey Vojtovich <svoj@mariadb.org>
The COMPRESSED clause is now a part of the data type and goes immediately
after the data type and length, but before the CHARACTER SET clause,
and before column attributes such as DEFAULT, COLLATE, ON UPDATE,
SYSTEM VERSIONING, engine specific column attributes.
In the old grammar reduction, the COMPRESSED clause was a column attribute.
New syntax:
<varchar or text data type> <length> <compression> <character set> <column attributes>
<varbinary or blob data type> <length> <compression> <column attributes>
New syntax examples:
VARCHAR(1000) COMPRESSED CHARACTER SET latin1 DEFAULT ''
BLOB COMPRESSED DEFAULT ''
Deprecated syntax examples:
VARCHAR(1000) CHARACTER SET latin1 COMPRESSED DEFAULT ''
TEXT CHARACTER SET latin1 DEFAULT '' COMPRESSED
VARBINARY(1000) DEFAULT '' COMPRESSED
As a side effect:
- COMPRESSED is not valid as an SP label name in SQL/PSM routines any more
(but it's still valid as an SP label name in sql_mode=ORACLE)
- COMPRESSED is now allowed in combination with GENERATED ALWAYS AS:
TEXT COMPRESSED GENERATED ALWAYS AS REPEAT('a',1000)
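Putting the pieces together, a table using the new syntax might look like
this (column names and the expression are made up):

  CREATE TABLE t1
  (
    a VARCHAR(1000) COMPRESSED CHARACTER SET latin1 DEFAULT '',
    b TEXT COMPRESSED GENERATED ALWAYS AS (REPEAT('a',1000))
  );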
Compressed blob columns didn't accept data at their full capacity. E.g.
storing 255 bytes into a TINYBLOB resulted in a "Data too long" error.
Now it is allowed, assuming the compression method was able to produce a
shorter string (so that both the metadata and the compressed data fit into
the blob) and column_compression_threshold is lower than the blob capacity.
If no compression was performed, we still have to reserve an additional byte
for the metadata, and thus we perform normal data truncation and return its
status.
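A sketch of the case that now works (assuming the default
column_compression_threshold of 100 bytes and a well-compressible value):

  CREATE TABLE t1 (a TINYBLOB COMPRESSED);
  -- 255 bytes is TINYBLOB's exact capacity; this used to fail with
  -- "Data too long" and now succeeds because metadata plus compressed
  -- data fit into 255 bytes:
  INSERT INTO t1 VALUES (REPEAT('a', 255));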
Unexpected data truncation could occur when storing data into a compressed
blob column with a multi-byte variable-length character set.
The reason was that an incorrect limit on the number of characters was
enforced for blobs.
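A hypothetical repro of the affected kind (the exact boundary depends on the
character set's mbmaxlen):

  CREATE TABLE t1 (a TINYTEXT COMPRESSED CHARACTER SET utf8mb4);
  -- 100 single-byte characters fit into TINYTEXT's 255 bytes, but a
  -- limit counted in characters (255/4 = 63) truncated the value:
  INSERT INTO t1 VALUES (REPEAT('a', 100));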
Character-set-safe truncation is done when storing a non-empty string into a
VARCHAR(0) COMPRESSED column, so that the string becomes empty. The code
didn't expect an empty string after truncation.
Fixed by moving the empty-string check to after the truncation.
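An illustrative statement that hits this path:

  CREATE TABLE t1 (a VARCHAR(0) COMPRESSED);
  -- The value is truncated (character-set-safely) down to an empty
  -- string; this empty-after-truncation case was mishandled before:
  INSERT IGNORE INTO t1 VALUES ('x');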