In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the GROUP_CONCAT() result is longer than 512 chars and a temporary
table is used during the select, the result is converted to blob, due to
the policy of not storing fields longer than 512 chars in a tmp table as
varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is
now always converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
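The change is visible from a client through result-set metadata. Below is a
minimal sketch using the C API (the table t1 and columns grp/txt are
hypothetical names used only for illustration); it just prints which type the
server reports for the GROUP_CONCAT() column.

  #include <stdio.h>
  #include <mysql.h>

  /* Print the column type the server reports for a GROUP_CONCAT() result.
     With the change above, a result longer than 512 chars is expected to be
     reported as a blob type whether or not a tmp table was used. */
  static void report_concat_type(MYSQL *conn)
  {
    if (mysql_query(conn, "SELECT GROUP_CONCAT(txt) FROM t1 GROUP BY grp"))
    {
      fprintf(stderr, "query failed: %s\n", mysql_error(conn));
      return;
    }
    MYSQL_RES *res= mysql_store_result(conn);
    if (!res)
      return;
    MYSQL_FIELD *field= mysql_fetch_field(res);     /* metadata of column 1 */
    printf("GROUP_CONCAT column reported as %s\n",
           field->type == MYSQL_TYPE_BLOB       ? "a blob type" :
           field->type == MYSQL_TYPE_VAR_STRING ? "varchar"     : "other");
    mysql_free_result(res);
  }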
GROUP_CONCAT uses its own temporary table. When ROLLUP is present it
creates a second copy of Item_func_group_concat. This copy receives the
same list of arguments as the original group_concat. When the copy is
set up, the result_fields of the functions from the argument list are
re-pointed to the temporary table of this copy.
As a result, data from those functions flows directly into the ROLLUP copy
and the original group_concat function shows a wrong result.
Since queries with COUNT(DISTINCT ...) also use temporary tables to store
the results of the COUNT function, they are affected by this bug as well.
The idea of the fix is to copy the content of the result_field of a function
under GROUP_CONCAT/COUNT from the first temporary table to the second one,
rather than setting result_field to point to the second temporary table.
To achieve this, a force_copy_fields flag is added to the
Item_func_group_concat and Item_sum_count_distinct classes. This flag is
initialized to 0 and set to 1 in the make_unique() member function of both
classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, the create_tmp_field() function uses result_field
as a source field and does not re-point that result_field to the newly
created field for Item_func_result_field and its descendants. Due to this,
a copy function is created to copy data from the old result_field to the
newly created field.
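For illustration, these are the query shapes affected by the re-pointed
result_fields, run here through the plain C API (connection setup, error
reporting and result processing are omitted; the table and columns are
hypothetical):

  #include <mysql.h>

  static int run_rollup_queries(MYSQL *conn)
  {
    const char *queries[]= {
      /* the non-ROLLUP rows of these results could come out wrong before
         the fix */
      "SELECT a, GROUP_CONCAT(DISTINCT b) FROM t1 GROUP BY a WITH ROLLUP",
      "SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a WITH ROLLUP",
    };
    for (unsigned i= 0; i < 2; i++)
    {
      if (mysql_query(conn, queries[i]))
        return 1;                     /* error text is in mysql_error(conn) */
      MYSQL_RES *res= mysql_store_result(conn);
      if (res)
        mysql_free_result(res);       /* rows are discarded in this sketch */
    }
    return 0;
  }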
Added test cases for bug #12863.
item_sum.cc, item_sum.h:
Fixed bug #12863.
Added a flag to Item_func_group_concat that is set to FALSE after the
first element of a group has been concatenated.
(aka "deinit is not called when calling udf from trigger").
We should call the udf_deinit() function during the cleanup phase after
prepared (or ordinary) statement execution instead of calling it from the
Item's destructor.
No test case is provided since it is hard to test UDFs from our test
suite.
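For context, a loadable UDF exposes an init/deinit pair around the value
function, and the deinit hook is where per-statement resources are released;
that hook is what now gets invoked during statement cleanup. A minimal sketch
with a hypothetical function name my_sum_int:

  #include <stdlib.h>
  #include <string.h>
  #include <mysql.h>

  my_bool my_sum_int_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
  {
    if (args->arg_count != 1 || args->arg_type[0] != INT_RESULT)
    {
      strcpy(message, "my_sum_int() requires one integer argument");
      return 1;
    }
    initid->ptr= malloc(sizeof(long long));   /* per-statement state */
    memset(initid->ptr, 0, sizeof(long long));
    return 0;
  }

  long long my_sum_int(UDF_INIT *initid, UDF_ARGS *args,
                       char *is_null, char *error)
  {
    long long *acc= (long long*) initid->ptr;
    if (args->args[0])
      *acc+= *((long long*) args->args[0]);
    return *acc;
  }

  void my_sum_int_deinit(UDF_INIT *initid)
  {
    free(initid->ptr);                /* released during statement cleanup */
  }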
Change string->float conversion to delay division as long as possible.
This gives us a more exact integer->float conversion for numbers of the
form '123.45E+02' (Bug #7740).
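A simplified sketch of the idea (this is not the server's actual conversion
routine): accumulate every digit into an integer-valued mantissa, keep the
decimal exponent separately, and apply a single power-of-ten scaling at the
end, so '123.45E+02' becomes 12345 * 10^0 = 12345 exactly instead of the
inexact 123.45 multiplied by 100.

  #include <ctype.h>
  #include <math.h>
  #include <stdio.h>
  #include <stdlib.h>

  static double str_to_double(const char *s)
  {
    double mantissa= 0.0;   /* digits accumulated as an integer value */
    int    exponent= 0;     /* decimal exponent applied once at the end */

    for (; isdigit((unsigned char) *s); s++)
      mantissa= mantissa * 10.0 + (*s - '0');
    if (*s == '.')
    {
      for (s++; isdigit((unsigned char) *s); s++)
      {
        mantissa= mantissa * 10.0 + (*s - '0');
        exponent--;                   /* each fractional digit shifts down */
      }
    }
    if (*s == 'e' || *s == 'E')
      exponent+= (int) strtol(s + 1, NULL, 10);

    return mantissa * pow(10.0, exponent);   /* single, delayed scaling */
  }

  int main(void)
  {
    printf("%.17g\n", str_to_double("123.45E+02"));   /* prints 12345 */
    return 0;
  }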
Added a test case for bug #7769.
item_sum.h:
Fixed bug #7769: a crash for queries with group_concat and
having when the query table was empty.
The bug was due to an unsafe dereference.
New mysqltest that can run tests with PS (prepared statements)
Added support for ZEROFILL in PS
Fixed crash when one called mysql_stmt_store_result() without a preceding mysql_stmt_bind_result() (see the sketch after this list)
Updated test cases to support --ps-protocol
(Some tests are still run using old protocol)
Fixed crash in PS when using SELECT * FROM t1 NATURAL JOIN t2...
Fixed crash in PS when using subqueries
Create table didn't signal when table was created. This could cause a "DROP TABLE created_table" in another thread to wait "forever"
Fixed wrong permissions check in PS and multi-table updates (one could get permission denied for legal queries)
Fix for PS and SELECT ... PROCEDURE
Reset all warnings when executing a new PS query
group_concat(...ORDER BY) didn't work with PS
Fixed problem with test suite when not using innodb
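Regarding the mysql_stmt_store_result() crash above, here is a minimal sketch
of the intended C API call order: bind the output buffers first, then buffer
the result set. Connection setup and error handling are trimmed, and the
table and column are hypothetical.

  #include <stdio.h>
  #include <string.h>
  #include <mysql.h>

  static void fetch_ids(MYSQL *conn)
  {
    MYSQL_STMT *stmt= mysql_stmt_init(conn);
    const char *query= "SELECT id FROM t1";
    long long id;
    MYSQL_BIND bind;

    if (mysql_stmt_prepare(stmt, query, strlen(query)) ||
        mysql_stmt_execute(stmt))
      goto done;

    memset(&bind, 0, sizeof(bind));
    bind.buffer_type= MYSQL_TYPE_LONGLONG;
    bind.buffer= &id;

    mysql_stmt_bind_result(stmt, &bind);   /* bind first ...                 */
    mysql_stmt_store_result(stmt);         /* ... then buffer the result set */

    while (mysql_stmt_fetch(stmt) == 0)
      printf("id=%lld\n", id);

  done:
    mysql_stmt_close(stmt);
  }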
a second time". The bug was caused by an incompatibility between the
negation elimination algorithm and PS: during the first statement
execution a subtree with negations was replaced with an equivalent
subtree without NOTs.
The problem was that although this transformation was permanent,
the items of the new subtree were created in execute-local memory.
The patch adds means to check whether this is the first execute of a
prepared statement, and if it is, to allocate items in the memory of
the prepared statement.
The implementation:
- backports Item_arena from 5.0
- adds Item_arena::is_stmt_prepare(),
Item_arena::is_first_stmt_execute().
- deletes THD::allocate_temporary_pool_for_ps_preparing(),
THD::free_temporary_pool_for_ps_preparing(); they
were redundant.
and adds a few invariants:
- thd->free_list never contains junk (= freed items)
- thd->current_arena is never null. If there is no
prepared statement, it points at the thd.
The rest of the patch contains mainly mechanical changes and
cleanups.
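To illustrate the allocation rule with hypothetical types (this is not the
server's Item_arena/THD interface): items created by a permanent
transformation during the first execute must come from the statement's
long-lived pool, while later executions may use the per-execute pool that is
freed after every run.

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct pool { void *blocks[64]; int used; } Pool;

  static void *pool_alloc(Pool *p, size_t size)
  {
    return p->blocks[p->used++]= malloc(size);
  }

  static void pool_free_all(Pool *p)
  {
    for (int i= 0; i < p->used; i++)
      free(p->blocks[i]);
    p->used= 0;
  }

  typedef struct statement {
    Pool stmt_pool;      /* lives as long as the prepared statement */
    Pool execute_pool;   /* reset after every execution */
    int  executions;
  } Statement;

  /* A permanent transformation (e.g. eliminating NOTs) must put its new
     nodes into stmt_pool on the first execute; later executes only create
     transient data, which may safely go to execute_pool. */
  static void *alloc_item(Statement *stmt, size_t size, int permanent_change)
  {
    Pool *p= (permanent_change && stmt->executions == 0)
             ? &stmt->stmt_pool : &stmt->execute_pool;
    return pool_alloc(p, size);
  }

  int main(void)
  {
    Statement stmt= {0};
    for (int run= 0; run < 3; run++)
    {
      alloc_item(&stmt, 32, /* permanent_change= */ run == 0);
      pool_free_all(&stmt.execute_pool);   /* end of one execution */
      stmt.executions++;
    }
    pool_free_all(&stmt.stmt_pool);        /* statement deallocated */
    printf("done\n");
    return 0;
  }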
Fixed bugs in group_concat with ORDER BY and DISTINCT (Bugs #2695, #3381 and #3319)
Fixed crash when doing rollback in slave and the io thread caught up with the sql thread
Set locked_in_memory properly