GROUP_CONCAT uses its own temporary table. When ROLLUP is present,
a second copy of Item_func_group_concat is created. This copy receives the
same list of arguments as the original group_concat. When the copy is
set up, the result_fields of the functions from the argument list are
re-pointed to the temporary table of this copy.
As a result, data from those functions flows directly into the ROLLUP copy,
and the original group_concat function shows a wrong result.
Since queries with COUNT(DISTINCT ...) also use temporary tables to store
the results of the COUNT function, they are affected by this bug as well.
The idea of the fix is to copy the content of the result_field for the
function under GROUP_CONCAT/COUNT from the first temporary table to the
second one, rather than setting result_field to point to the second
temporary table.
To achieve this, a force_copy_fields flag is added to the
Item_func_group_concat and Item_sum_count_distinct classes. This flag is
initialized to 0 and set to 1 in the make_unique() member function of both
classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as
well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, create_tmp_field() uses the item's result_field
as a source field and does not re-point that result_field to the newly
created field for Item_result_field and its descendants. As a consequence,
a copy function is created to copy data from the old result_field to the
newly created field.
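
A minimal sketch of that mechanism, with hypothetical, heavily shortened
types and signatures (the real create_tmp_field() takes many more
parameters):

  // Sketch only: stand-ins for the server's Item/Field/TABLE classes.
  struct Field { };
  struct TABLE { };

  struct Item {
    Field *result_field= nullptr;   // where the item writes its value
  };

  struct TMP_TABLE_PARAM {
    bool force_copy_fields= false;  // the new flag described above
  };

  // When force_copy_fields is set, keep the item's result_field pointing
  // at the first temporary table and report it as the *source* field, so
  // that a copy function is generated; otherwise re-point the item at the
  // newly created field (the old behaviour that broke GROUP_CONCAT).
  Field *create_tmp_field(Item *item, TABLE *, Field **from_field,
                          bool force_copy_fields)
  {
    Field *new_field= new Field();  // the field in the second tmp table
    if (force_copy_fields)
      *from_field= item->result_field;   // source for the copy function
    else
      item->result_field= new_field;     // old behaviour
    return new_field;
  }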
tables corrupt triggers".
It turned out that in certain places we also relied on the assumption that
(new_table != table_name) is always true on Windows and for transactional
tables. Since our fix for the bug breaks this assumption, we have to add a
new flag to pass this information around.
This code needs to be refactored, but I dare not do this in 5.0.
triggers".
Applying ALTER/OPTIMIZE/REPAIR TABLE statements to a transactional table,
or to a table of any type on Windows, caused the disappearance of its
triggers. The bug was introduced in 5.0.19 by my fix for bug #13525
"Rename table does not keep info of triggers" (see the comment for
sql_table.cc for more info).
Let us transfer the triggers associated with a table when we rename it
(but only if we are not changing the database to which the table belongs;
in the latter case we will emit an error).
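
A rough sketch of that rule (hypothetical helper names; the real
implementation lives in the server's rename and trigger code):

  #include <stdexcept>
  #include <string>

  // Hypothetical stubs standing in for the server's real routines.
  static bool table_has_triggers(const std::string&, const std::string&)
  { return true; }
  static void rename_table_files(const std::string&, const std::string&,
                                 const std::string&, const std::string&) {}
  static void rename_trigger_files(const std::string&, const std::string&,
                                   const std::string&, const std::string&) {}

  void rename_table_with_triggers(const std::string &old_db,
                                  const std::string &old_name,
                                  const std::string &new_db,
                                  const std::string &new_name)
  {
    // Moving a table with triggers to another database is refused.
    if (old_db != new_db && table_has_triggers(old_db, old_name))
      throw std::runtime_error("cannot move a table with triggers "
                               "to another database");
    rename_table_files(old_db, old_name, new_db, new_name);
    // Triggers follow the table under its new name in the same schema.
    rename_trigger_files(old_db, old_name, new_db, new_name);
  }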
column is increasing when table is recreated with PS/SP":
make the use of create_field::char_length more consistent in the code.
Reinitialize create_field::length from create_field::char_length
for every execution of a prepared statement (this is what actually
fixes the bug).
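
A simplified sketch of why the reinitialization matters, assuming a
create_field-like structure (member names shortened, types hypothetical):

  struct CHARSET_INFO_SKETCH { unsigned mbmaxlen; };

  struct create_field_sketch {
    unsigned length;        // length in bytes, derived per execution
    unsigned char_length;   // length in characters, as written in the DDL
    const CHARSET_INFO_SKETCH *charset;
  };

  // Without the reinitialization, 'length' kept its byte value from the
  // previous execution and was multiplied by mbmaxlen again, so the
  // column grew on every execution of the prepared statement.
  void prepare_field_for_execution(create_field_sketch &f)
  {
    f.length= f.char_length;             // reinit from the DDL value
    f.length*= f.charset->mbmaxlen;      // characters -> bytes, once
  }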
When a too long field is used for a key, only a prefix of the field is
used. The length is reduced to the maximum key length allowed for storage.
But if the field has a multibyte character set, it is possible to break a
multibyte character sequence. This leads to a failed assertion in the
InnoDB code and a server crash when a record is inserted.
The mysql_prepare_table() function now aligns the truncated key length to
a multibyte character boundary.
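
The alignment amounts to rounding the truncated byte length down to a
whole number of characters; a minimal sketch (the server does this through
its charset API rather than a bare division):

  // Align a truncated key prefix length (in bytes) down to a character
  // boundary for a charset whose characters occupy up to mbmaxlen bytes
  // in the key buffer, so no multibyte sequence is cut in half.
  unsigned align_key_length(unsigned max_bytes, unsigned mbmaxlen)
  {
    return max_bytes / mbmaxlen * mbmaxlen;  // drop any partial character
  }

For example, with a 1000-byte key limit and a charset with mbmaxlen = 3,
the prefix is reduced to 999 bytes, i.e. 333 whole characters.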
There are (at least) two implementations of the checksum
computation. One is in MyISAM for the quick checksum; it
is executed on every row change. The other is in the
SQL layer for the extended checksum; it retrieves all rows
of a table via the respective storage engine.
In former MySQL versions varchars were stored with their
maximum length, but now they are stored with their real length,
similar to blobs.
The extended checksum calculation had not been adjusted for this
change, so too much data was checksummed. MyISAM had already been
adjusted: only the real data is included in its checksum.
I changed mysql_checksum_table() so that it uses the
length information of true varchar fields instead
of the full field length as in the former varchar
implementation.
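
Conceptually, the change means checksumming only the used part of a true
VARCHAR; a simplified sketch (hypothetical helper, toy checksum formula):

  #include <cstdint>

  // Checksum a true VARCHAR: read the 1- or 2-byte length prefix and
  // include only that many data bytes, not the whole max-length buffer.
  uint32_t checksum_true_varchar(const unsigned char *field_ptr,
                                 unsigned length_bytes,  // 1 or 2
                                 uint32_t crc)
  {
    unsigned data_len= field_ptr[0];
    if (length_bytes == 2)
      data_len|= (unsigned) field_ptr[1] << 8;   // low-byte-first prefix
    const unsigned char *data= field_ptr + length_bytes;
    for (unsigned i= 0; i < data_len; i++)       // real length only
      crc= (crc << 8) + data[i] + (crc >> 24);   // toy rolling checksum
    return crc;
  }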
CREATE TABLE and PS/SP": make sure that the 'typelib' object for
ENUM values and the 'Item_string' object for the DEFAULT clause are
created in the statement memory root.
Bad examples of using a string together with a separately tracked length
were fixed.
The incorrect length in the trigger file configuration descriptor was
fixed (BUG#14090).
A hook for unknown keys was added to the parser to support old .TRG files.
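
The hook could look roughly like this (hypothetical interface; the
server's file parser defines its own hook class):

  // A parser hook consulted for keys the parser does not recognize.
  // Returning true means "consumed, keep parsing", so old .TRG files
  // can carry legacy keys without failing the whole parse.
  struct Unknown_key_hook_sketch {
    virtual ~Unknown_key_hook_sketch() = default;
    virtual bool process_unknown_key(const char *key,
                                     const char *value) = 0;
  };

  struct Old_trg_file_hook : Unknown_key_hook_sketch {
    bool process_unknown_key(const char *, const char *) override
    {
      return true;  // accept and ignore keys written by older servers
    }
  };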
Version for 5.0.
It fixes three problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked, though
the function comment clearly says it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
3. In 5.0 (not in 4.1 or 4.0) DROP TABLE had a possible deadlock flaw
when run concurrently with FLUSH TABLES WITH READ LOCK. I call the fix
for this problem "the 5.0 addendum fix".
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix"; a sketch of the version check
follows after this list.
2. mysql_ha_flush() was not always called with LOCK_open locked, though
the function comment clearly says it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
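
A sketch of the version check behind the primary bug fix (simplified,
hypothetical member names; the real server compares a per-table version
against a global refresh counter while holding LOCK_open):

  // Global counter bumped whenever tables are flushed or replaced.
  static unsigned long refresh_version= 1;

  struct HandlerTable {
    unsigned long version;  // version recorded when the table was opened
  };

  // Before serving HANDLER ... READ, check that the opened table has not
  // been replaced (e.g. by ALTER/REPAIR/OPTIMIZE TABLE); if it has, the
  // table must be reopened instead of reading through the stale handle.
  static bool handler_table_is_current(const HandlerTable *t)
  {
    return t->version == refresh_version;
  }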
large table gives server crash": make sure that when a MyISAM temporary
table is created for a cursor, it is created in the cursor's memory root,
not in the memory root of the current query.
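
A sketch of the allocation idea (stand-in arena type; the server uses its
MEM_ROOT allocator, and the point is only whose arena the table lives on):

  #include <cstddef>
  #include <memory>
  #include <new>
  #include <vector>

  struct Arena {                      // stand-in for the server's MEM_ROOT
    std::vector<std::unique_ptr<char[]>> blocks;
    void *alloc(std::size_t n)
    {
      blocks.push_back(std::make_unique<char[]>(n));
      return blocks.back().get();
    }
  };

  struct TmpTable { /* ... */ };

  // Allocate the cursor's temporary table from the *cursor's* arena, so
  // it survives the end of the query that opened the cursor and lives
  // until the cursor itself is destroyed.
  TmpTable *create_cursor_tmp_table(Arena &cursor_arena)
  {
    void *mem= cursor_arena.alloc(sizeof(TmpTable));
    return new (mem) TmpTable();
  }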
sql_table.cc, handler.h:
Fixed bug #14540.
Added the error mnemonic code HA_ADMIN_NOT_BASE_TABLE
to report that an operation cannot be applied to views.
view.test, view.result:
Added a test case for bug #14540.
errmsg.txt:
Fixed bug #14540.
Added error ER_CHECK_NOT_BASE_TABLE.