GROUP_CONCAT uses its own temporary table. When ROLLUP is present,
a second copy of Item_func_group_concat is created. This copy receives
the same argument list as the original group_concat. When the copy is
set up, the result_fields of the functions in the argument list are
redirected to the temporary table of this copy.
As a result, data from those functions flows directly into the ROLLUP copy,
and the original group_concat function shows a wrong result.
Since queries with COUNT(DISTINCT ...) also use temporary tables to store
their results, the COUNT function is affected by this bug as well.
The idea of the fix is to copy the content of the result_field for the
function under GROUP_CONCAT/COUNT from the first temporary table to the
second one, rather than setting result_field to point to the second
temporary table.
To achieve this, a force_copy_fields flag is added to the
Item_func_group_concat and Item_sum_count_distinct classes. This flag is
initialized to 0 and set to 1 in the make_unique() member function of
both classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as
well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, create_tmp_field() will use result_field as a
source field and will not reset that result field to the newly created
field for Item_func_result_field and its descendants. Because of this, a
copy function is created to copy data from the old result_field to the
newly created field.
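A minimal query of the affected shape, for illustration (table and data
are hypothetical, not taken from the patch):

    CREATE TABLE t1 (a INT, b VARCHAR(10));
    INSERT INTO t1 VALUES (1,'x'), (1,'y'), (2,'z');
    -- Before the fix, the non-ROLLUP rows could show wrong results,
    -- because the argument's result_field was redirected to the
    -- ROLLUP copy's temporary table:
    SELECT a, GROUP_CONCAT(b) FROM t1 GROUP BY a WITH ROLLUP;
    -- COUNT(DISTINCT ...) was affected in the same way:
    SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a WITH ROLLUP;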
tables corrupt triggers".
It turned out that in certain places we also relied on
(new_table != table_name) always being true on Windows and for
transactional tables. Since our fix for the bug breaks this assumption,
we have to add a new flag to pass this information around.
This code needs to be refactored, but I dare not do that in 5.0.
triggers".
Applying ALTER/OPTIMIZE/REPAIR TABLE statements to a transactional table,
or to a table of any type on Windows, caused its triggers to disappear.
The bug was introduced in 5.0.19 by my fix for bug #13525 "Rename table
does not keep info of triggers" (see the comment in sql_table.cc for more
info).
Let us transfer the triggers associated with a table when we rename it
(but only if we are not changing the database to which the table belongs;
in the latter case we emit an error).
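A sketch of the intended behaviour (table and trigger names are
hypothetical):

    CREATE TABLE t1 (a INT);
    CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @n = @n + 1;
    RENAME TABLE t1 TO t2;   -- trg1 must now fire for inserts into t2
    -- Renaming into another database is rejected instead:
    -- RENAME TABLE t2 TO other_db.t2;   -- error: table has triggers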
column is increasing when table is recreated with PS/SP":
make the use of create_field::char_length more consistent in the code.
Reinitialize create_field::length from create_field::char_length
for every execution of a prepared statement (this actually fixes the
bug).
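The shape of the bug, sketched as a hypothetical test:

    -- Before the fix, re-executing a prepared CREATE TABLE could grow
    -- the column length on each run, because create_field::length was
    -- not reinitialized from create_field::char_length.
    PREPARE stmt FROM 'CREATE TABLE t1 (a CHAR(10))';
    EXECUTE stmt;
    DROP TABLE t1;
    EXECUTE stmt;
    SHOW CREATE TABLE t1;   -- must still report char(10)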
When a field that is too long is used for a key, only a prefix of the
field is used, and the length is reduced to the maximum key length allowed
for storage. But if the field has a multibyte character set, it is
possible to break a multibyte character sequence. This leads to a failed
assertion in the InnoDB code and a server crash when a record is inserted.
make_prepare_table() now aligns the truncated key length to a multibyte
character boundary.
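A statement of the failing shape, for illustration (names and the exact
length limit are hypothetical and depend on engine and charset):

    -- The key on the long utf8 column is silently truncated to the
    -- maximum key length; if the cut did not land on a character
    -- boundary, inserting a row hit the InnoDB assertion.
    CREATE TABLE t1 (
      a VARCHAR(255) CHARACTER SET utf8,
      KEY (a)
    ) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (REPEAT(_utf8 0xD0B0, 255));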
There are (at least) two implementations of the checksum computation.
One is in MyISAM for the quick checksum; it is executed on every row
change. The other is in the SQL layer for the extended checksum; it
retrieves all rows of a table via the respective storage engine.
In former MySQL versions varchars were stored with their maximum length,
but now they are stored with their real length, similar to blobs.
The extended checksum calculation had not been updated for this change,
so too much data was checksummed. In MyISAM this change had already been
taken care of: only the real data is included in the checksum.
I changed mysql_checksum_table() so that it uses the length information
of true varchar fields instead of the field length, as in the former
varchar implementation.
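The affected statement, illustrated (table and data are hypothetical):

    CREATE TABLE t1 (v VARCHAR(100)) ENGINE=MyISAM CHECKSUM=1;
    INSERT INTO t1 VALUES ('abc');
    CHECKSUM TABLE t1 QUICK;      -- MyISAM checksum, maintained per row change
    CHECKSUM TABLE t1 EXTENDED;   -- SQL-layer checksum changed by this fix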
CREATE TABLE and PS/SP": make sure that the 'typelib' object for
ENUM values and the 'Item_string' object for the DEFAULT clause are
created in the statement memory root.
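An illustrative shape of the re-execution problem (hypothetical test):

    -- Both objects must survive re-execution of the statement, so they
    -- have to live in the statement memory root:
    PREPARE stmt FROM
      'CREATE TABLE t1 (e ENUM(''a'',''b'') DEFAULT ''b'')';
    EXECUTE stmt;
    DROP TABLE t1;
    EXECUTE stmt;   -- must also work on re-execution
    DROP TABLE t1;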
Fixed bad examples of usage of a string with a fixed length.
Fixed the incorrect length in the trigger file configuration descriptor
(BUG#14090).
Added a hook for unknown keys to the parser to support old .TRG files.
Version for 5.0.
It fixes three problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked, though
the function comment clearly said it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
3. In 5.0 (not in 4.1 or 4.0) DROP TABLE had a possible deadlock flaw when
run concurrently with FLUSH TABLES WITH READ LOCK. I call the fix for
this problem "the 5.0 addendum fix".
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked, though
the function comment clearly said it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
large table gives server crash": make sure that when a MyISAM temporary
table is created for a cursor, it is created in the cursor's memory root,
not the memory root of the current query.
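The affected construct, sketched (procedure, cursor, and table names are
hypothetical):

    CREATE TABLE big_table (a INT);
    INSERT INTO big_table VALUES (1);
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE x INT;
      DECLARE c CURSOR FOR SELECT a FROM big_table;
      OPEN c;          -- may materialize the result in a MyISAM temporary
      FETCH c INTO x;  -- table, which must live in the cursor's root
      CLOSE c;
    END //
    DELIMITER ;
    CALL p1();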
sql_table.cc, handler.h:
Fixed bug #14540.
Added error mnemonic code HA_ADMIN_NOT_BASE_TABLE
to report that an operation cannot be applied to views.
view.test, view.result:
Added a test case for bug #14540.
errmsg.txt:
Fixed bug #14540.
Added error ER_CHECK_NOT_BASE_TABLE.
droping trigger on InnoDB table".
A deadlock occurred when we were trying to create two triggers for
the same InnoDB table concurrently and both threads were able to reach
close_cached_table() simultaneously. The fix implements a new approach to
table locking and table cache invalidation during creation/dropping
of triggers.
No test case is supplied since the bug was repeatable only under high
concurrency.
- CHAR() now returns a binary string by default
- CHAR(X*65536+Y*256+Z) is now equal to CHAR(X,Y,Z), independent of the character set of CHAR() (see the example after this list)
- Test for both ETIMEDOUT and ETIME from pthread_cond_timedwait()
(some old systems return ETIME, and it is safer to test for both values
than to try to write a wrapper for each old system)
- Fixed newly introduced bug in NOT BETWEEN X AND X
- Ensure we call commit_by_xid or rollback_by_xid for all engines, even if one engine has failed
- Use octet2hex() for all conversions of strings to hex
- Simplify and optimize code
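The new CHAR() behaviour, illustrated:

    SELECT CHARSET(CHAR(97));              -- 'binary' by default now
    SELECT CHAR(97*65536 + 98*256 + 99);   -- 'abc'
    SELECT CHAR(97, 98, 99);               -- 'abc', the same result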
This bug occurs when some trigger for a table used by a DML statement is
created or changed while the statement was waiting in lock_tables(). In
this situation the prelocking set which we have calculated becomes
invalid, which can easily lead to errors and even, in some cases, to
crashes.
With the proposed patch we no longer silently reopen tables in
lock_tables(); instead, the caller of lock_tables() becomes responsible
for reopening tables and recalculating the prelocking set.
Added a flag to Field::store(longlong) to specify whether the value is unsigned.
This fixes bug #12750 "Incorrect storage of 9999999999999999999 in DECIMAL(19, 0)"
(see the example after this list).
Fixed a warning from valgrind in CREATE ... SELECT.
Fixed a double free of mysql.options if reconnect failed.
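An illustration of bug #12750: the value fits into an unsigned longlong
but not into a signed one, so Field::store(longlong) needs to know the
signedness:

    CREATE TABLE t1 (d DECIMAL(19,0));
    INSERT INTO t1 VALUES (9999999999999999999);
    SELECT * FROM t1;   -- must return 9999999999999999999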
Solution for 5.0.
Changed calls to open_table() to request ignoring the
flush at places where the command had already locked tables.
This could happen in CREATE ... SELECT and ALTER TABLE.
No test case: the bug can only be triggered by true concurrency.
The stress test suite provides a test case for this.
Produce a warning for 'create database if not exists' if the database
exists, and do not update the database options in this case.
Produce a warning for 'create table if not exists' if the table exists.
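The new warnings, illustrated:

    CREATE DATABASE mydb;
    CREATE DATABASE IF NOT EXISTS mydb;           -- now warns; options untouched
    CREATE TABLE mydb.t1 (a INT);
    CREATE TABLE IF NOT EXISTS mydb.t1 (a INT);   -- now warns
    SHOW WARNINGS;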
We binlog the DROP TABLE for each table that was actually dropped. Per
Sergei's suggestion, a fixed buffer for the DROP TABLE query is
pre-allocated from the THD pool, and logging is now done in batches: a new
batch is started when the buffer becomes full.
Reduced memory usage by reusing the table list instead of accumulating a
list of dropped table names. Also fixed the problem where the table was
not actually dropped, e.g. due to permissions. Extended the test case to
make sure batched query logging does work.
"Triggers have the wrong namespace"
"Triggers: duplicate names allowed"
"Triggers: CREATE TRIGGER does not accept fully qualified names"
"SHOW TRIGGERS"
of stored routine definitions even if we already have some tables open
and locked. To avoid deadlocks in this case we have to put certain
restrictions on the locking of the mysql.proc table.
This allows stored routines to be used safely under LOCK TABLES without
explicitly mentioning mysql.proc in the list of locked tables. It also
fixes bug #11554 "Server crashes on statement indirectly using non-cached
function".
adding test case
sql_table.cc:
- do not create a new item when charsets are the same
- return ER_INVALID_DEFAULT if default value cannot
be converted into the column character set.
item.cc:
- Allow conversion not only to Unicode,
but also to and from "binary".
- Adding safe_charset_converter() for Item_num
and Item_varbinary, returning a fixed const Item.
Bug#11657 Creation of secondary index fails:
If the TINYBLOB key part length is 255, it is equal
to the field length. For BLOB, unlike for CHAR/VARCHAR,
we should keep a non-zero key part length, otherwise the
"BLOB/TEXT column used in key specification without a key length"
error is produced afterwards.
type_blob.result, type_blob.test:
fixing tests accordingly
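The statement of the shape fixed by this change:

    -- TINYBLOB holds at most 255 bytes, so a 255-byte key part covers
    -- the whole field; the key part length must still be kept non-zero
    -- for BLOB types:
    CREATE TABLE t1 (b TINYBLOB, KEY (b(255)));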
bug #10617: Insert from same table to same table give incorrect result for bit(4) column.
bug #11091: union involving BIT: assertion failure in Item::tmp_table_field_from_field_type
bug #11572: MYSQL_TYPE_BIT not taken care of in temp. table creation for VIEWs
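Statements of the affected shapes, for illustration (table and data are
hypothetical):

    CREATE TABLE t1 (b BIT(4));
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t1 SELECT b FROM t1;   -- bug #10617: same-table insert-select
    SELECT b+0 FROM t1;                -- must show 1, 2, 1, 2
    SELECT b FROM t1 UNION SELECT b FROM t1;   -- bug #11091: UNION over BIT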
- Added better error messages when trying to open a table that can't be discovered or unpacked. The most likely cause of this is that it does not have any frm data, probably since it has been created from NdbApi or is an NDB system table.
- Separated the functionality that was in ha_create_table_from_engine into two functions: one that checks whether the table exists, and another that tries to create the table from the engine.
Ensure that 'null_value' is not accessed before val() is called in FIELD() functions.
Fixed initialization of key maps. This fixes some problems with keys when you have more than 64 keys.
Fixed ROLLUP so that it doesn't always create a temporary table. This fix ensures that func_gconcat.test results are now predictable.
1.) Added a new option to mysql_lock_tables() for ignoring FLUSH TABLES.
Used the new option in create_table_from_items().
It is necessary to prevent the SELECT table from being reopened.
It would get new storage assigned for its fields, while the
SELECT part of the command would still use the old (freed) storage.
2.) Protected the CREATE TABLE and CREATE TABLE ... SELECT commands
against a global read lock. This prevents a deadlock in
CREATE TABLE ... SELECT in conjunction with FLUSH TABLES WITH READ LOCK
and avoids the creation of new tables during a global read lock
(see the sketch after this list).
3.) Replaced set_protect_against_global_read_lock() and
unset_protect_against_global_read_lock() with
wait_if_global_read_lock() and start_waiting_global_read_lock()
in the INSERT DELAYED handling.
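The scenario protected by item 2, sketched (two connections, hypothetical
table names):

    -- connection 1:
    FLUSH TABLES WITH READ LOCK;
    -- connection 2 now blocks instead of deadlocking or creating a new
    -- table under the global read lock:
    CREATE TABLE t2 SELECT * FROM t1;
    -- connection 1:
    UNLOCK TABLES;   -- connection 2 proceeds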
We will, however, give a warning when opening such a table, telling users
to use ALTER TABLE ... FORCE to fix the table. In a future release we will
fix REPAIR TABLE so that it is able to handle this case.
Count null bits separately from field offsets and adjust them in the case of primary key parts.
(Previously a CREATE TABLE with a lot of null fields that were part of a primary key caused MySQL to wrongly count the number of bytes needed to store null bits.)
This is a more complete bug fix for #6236.
If we fall back to mysql_alter_table() (for InnoDB), don't do binlogging in mysql_alter_table(), as mysql_admin_table()
is not supposed to do any binlogging (it is done by the caller).
CAST() now produces warnings when casting wrong INTEGER or CHAR values. This also applies to implicit string-to-number casts. (Bug #5912)
ALTER TABLE now fails in STRICT mode if it generates warnings.
Inserting a zero date into a DATE, DATETIME or TIMESTAMP column during TRADITIONAL mode now produces an error. (Bug #5933)
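The new behaviour, illustrated:

    SELECT CAST('abc' AS SIGNED);           -- now warns (Bug #5912)
    SELECT 1 + 'abc';                       -- implicit cast also warns
    SET sql_mode = 'TRADITIONAL';
    CREATE TABLE t1 (d DATE);
    INSERT INTO t1 VALUES ('0000-00-00');   -- now an error (Bug #5933)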
#6559 "DROP DATABASE forgets to drop triggers".
If we drop a table we should also drop all triggers associated with it.
To do this we have to check for the existence of a .TRG file when we are
dropping the table and delete it too.
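The fixed behaviour, sketched (database, table, and trigger names are
hypothetical):

    CREATE DATABASE db1;
    CREATE TABLE db1.t1 (a INT);
    CREATE TRIGGER db1.trg1 BEFORE INSERT ON db1.t1
      FOR EACH ROW SET @x = 1;
    DROP DATABASE db1;   -- must also remove t1.TRG, hence the trigger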
(For 4.1 tables, old 'VARCHAR' fields are converted to true VARCHAR in the next ALTER TABLE.)
This ensures that one can use MySQL 5.0 privilege tables with MySQL 4.1.
Windows to call CreateFileMapping() with correct arguments, and
propagating the introduction of query_id_t to everywhere query ids are
passed around. (Bug #8826)
Enabled VARCHAR testing for innodb
NOTE: innodb.test currently fails because of a bug in InnoDB.
I have informed Heikki about this and expect him to fix it ASAP.
(Otherwise there is a deadlock: ALTER writes to the
binlog holding LOCK_open, this causes a binlog rotation, the
binlog waits for prepared transactions to commit, and commit
needs LOCK_open to check for the global read lock.)