GROUP_CONCAT uses its own temporary table. When ROLLUP is present, a second
copy of Item_func_group_concat is created. This copy receives the same list of
arguments as the original group_concat. When the copy is set up, the
result_fields of the functions from the argument list are reset to point to the
temporary table of this copy.
As a result, data from those functions flows directly into the ROLLUP copy,
and the original group_concat function produces a wrong result.
Since queries with COUNT(DISTINCT ...) use temporary tables to store the
results of the COUNT function, they are affected by this bug as well.
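A minimal sketch of the kind of query affected (hypothetical table and data,
for illustration only):

  CREATE TABLE t1 (a INT, b CHAR(10));
  INSERT INTO t1 VALUES (1,'x'), (1,'y'), (2,'z');
  -- Before the fix, the per-group values could come out wrong once the
  -- ROLLUP copy started receiving the data meant for the original function.
  SELECT a, GROUP_CONCAT(b) FROM t1 GROUP BY a WITH ROLLUP;
  SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a WITH ROLLUP;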
The idea of the fix is to copy the content of the result_field for the function
under GROUP_CONCAT/COUNT from the first temporary table to the second one,
rather than setting result_field to point to the second temporary table.
To achieve this, a force_copy_fields flag is added to the
Item_func_group_concat and Item_sum_count_distinct classes. The flag is
initialized to 0 and set to 1 in the make_unique() member function of both
classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, create_tmp_field() uses result_field as the source field
and does not reset that result_field to the newly created field for
Item_func_result_field and its descendants. Because of this, a copy func is
created that copies the data from the old result_field to the newly created
field.
table.cc:
Fix to use system_charset_info instead of default_charset_info.
The crash happened because the "ctype" array is empty in UCS2,
and thus cannot be used with my_isspace().
UCS2 appeared in this context because the default_charset_info
variable was incorrectly passed to my_isspace().
As the functions check_db_name(), check_table_name() and check_column_name()
always get values in utf8, system_charset_info must be used instead.
ctype_ucs2_def.test, ctype_ucs2_def-master.opt, ctype_ucs2_def.result:
new file
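A sketch of the scenario the new test covers, assuming the server is started
with a ucs2 default character set (as the -master.opt name suggests); the
option line and object names below are illustrative only:

  -- mysqld started with something like --default-character-set=ucs2
  CREATE DATABASE mysqltest1;
  CREATE TABLE mysqltest1.t1 (a INT);
  DROP DATABASE mysqltest1;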
For "count(*) while index_column = value" an index read
is done. It consists of an index scan and retrieval of
each key.
For efficiency reasons the index scan stores the key in
the special buffer 'lastkey2' once only. At the first
iteration it notes this fact with the flag
HA_STATE_RNEXT_SAME in 'info->update'.
For efficiency reasons, the key retrieval for blobs
does not allocate a new buffer, but uses 'lastkey2'...
Now I clear the HA_STATE_RNEXT_SAME flag whenever the
buffer has been polluted. In this case, the index scan
copies the key value again (and sets the flag again).
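A sketch of the query shape this concerns, assuming a MyISAM table (which is
where 'lastkey2' and HA_STATE_RNEXT_SAME live); table and data are
hypothetical, the blob column is what makes the key retrieval reuse the buffer:

  CREATE TABLE t1 (a INT, b BLOB, KEY(a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1, 'one'), (1, 'two'), (2, 'three');
  SELECT COUNT(*) FROM t1 WHERE a = 1;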
union.result, union.test:
Adding test case.
item.cc:
Allow safe character set conversion in UNION
- string constant to column's charset
- to unicode
Thus, UNION now works the same way as CONCAT (and other string functions)
with respect to aggregating arguments with different character sets.
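A minimal sketch of the kind of aggregation now allowed (hypothetical table,
for illustration only):

  CREATE TABLE t1 (a CHAR(10) CHARACTER SET utf8);
  INSERT INTO t1 VALUES ('abc');
  -- The string constant is safely converted to the column's character set
  -- instead of the UNION being rejected for mixing character sets.
  SELECT a FROM t1 UNION SELECT 'xyz';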
column is increasing when table is recreated with PS/SP":
Make the use of create_field::char_length more consistent in the code.
Reinitialize create_field::length from create_field::char_length for every
execution of a prepared statement (this is what actually fixes the bug).
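A sketch of how the problem showed up, assuming a CREATE TABLE of roughly this
shape re-executed as a prepared statement (column type and length are
illustrative only):

  PREPARE stmt FROM 'CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8)';
  EXECUTE stmt;
  DROP TABLE t1;
  -- Before the fix, re-execution could recreate the column with a larger
  -- length, since create_field::length was not reset from char_length.
  EXECUTE stmt;
  SHOW CREATE TABLE t1;
  DEALLOCATE PREPARE stmt;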
Bug #17257 ndb, update fails for inner joins if tables do not have Primary Key
change: the area allocated by setValue may not be around later; store the hidden key in a special member variable instead
Bug #17158 load data infile of char values into table of char with no (PK) fails to load
Bug #17081 Doing "LOAD DATA INFILE" directly after delete can cause missing data
When the setup_fields() function finds a field named '*', it expands it to the
list of all table fields. It does so by checking that the first char of
field_name is '*', but it does not check that '*' is the only char.
Because of this, when updating a table with a field whose name merely starts
with '*' (such as '*name'), the field is wrongly treated as '*' and expanded.
This makes the list of fields to update longer than the list of new values.
Later, the fill_record() function crashes on a null dereference when there are
still fields left to update but no more values.
A check is added to the setup_fields() function which ensures that the field
expansion is done only when '*' is the only char in the field name.
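A minimal sketch of a statement hitting the old behaviour (hypothetical table
and column names, for illustration only):

  CREATE TABLE t1 (`*a` INT, b INT);
  INSERT INTO t1 VALUES (1, 1);
  -- Before the fix, '*a' was expanded as if it were '*', leaving more fields
  -- to update than values and crashing in fill_record().
  UPDATE t1 SET `*a` = 2, b = 3;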