Although we now have an exclusive name lock on the table name in
mysql_rm_table_part2, we should still keep LOCK_open - some storage
engines are not ready for the locking scope change and assume that
LOCK_open is held. Still, the binary logging and query cache
invalidation calls were moved out of the LOCK_open scope.
Fixes some of the broken 5.1-runtime tests (tests break on asserts).
SHOW CREATE TABLE or SELECT FROM I_S.
This is the last patch for this bug, which depends on the big
CS patch and was pending.
The problem was that SHOW CREATE statements returned original
queries in the binary character set. That could cause the query
to be unreadable.
The fix is to use the original character_set_client when sending
the original query to the client. In order to preserve the query
in mysqldump, character_set_results should be set to 'binary' when
issuing the SHOW CREATE statement. If either the source or the
destination character set is 'binary', no conversion is performed.
The idea is that since the source character set is no longer
'binary', we fix the destination character set so that valid dumps
are still produced.
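A minimal sketch of that idea, assuming a hypothetical table t1 (the exact
statements mysqldump issues may differ):

-- With character_set_results set to 'binary', the server skips conversion
-- and SHOW CREATE returns the stored definition query byte-for-byte.
SET character_set_results = 'binary';
SHOW CREATE TABLE t1;
-- Restore the usual conversion behaviour afterwards.
SET character_set_results = DEFAULT;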
Invalidating a subset of a sufficiently large query cache can take a long time.
During this time the server is effectively frozen and no other operation can
be executed. This patch addresses this problem by moving the locks which cause
the freezing and also by temporarily disabling the query cache while the
invalidation takes place.
When a UNION statement forced conversion of a UTF8
charset value to a binary charset value, the byte
length of the result values was truncated to the
CHAR_LENGTH of the original UTF8 value.
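A hypothetical query showing the effect (the literal is illustrative): the
utf8 string 'абв' is 3 characters but 6 bytes, and the binary result column
of the UNION must be wide enough for all 6 bytes rather than CHAR_LENGTH (3)
of them.

SELECT HEX(c) FROM (
  SELECT _utf8 0xD0B0D0B1D0B2 AS c   -- utf8 'абв': 3 characters, 6 bytes
  UNION ALL
  SELECT _binary 'abc'               -- forces the result column to binary
) AS t;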
In Automake,
mysqld_LDADD = libndb.la
only adds libndb to the link command for mysqld,
but does not declare a dependency.
Added libndb.la to mysqld_DEPENDENCIES.
This fixes a build race condition that currently
breaks parallel (make -j) builds, and also enforces a re-link
of mysqld when libndb.la changes.
1. Fix ddl_i18n_koi8r, ddl_i18n_utf8: explicitly specify character-sets
directory for mysqldump;
2. Fix crash in mysqldump if collation is not found;
3. Use the proper way to compare character set names.
query / no aggregate of subquery
The optimizer counts the aggregate functions that
appear as top level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag of whether
there are aggregate functions.
While collecting the above info it must not consider
aggregates that are not aggregated in the current
context. It must treat them as normal expressions
instead. Not doing that leads to incorrect data about
the query, e.g. running a query that actually has no
aggregate functions as if it has some (and hence is
expected to return only one row).
Fixed by ignoring the aggregates that are not aggregated
in the current context.
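A hypothetical example of such a query (tables are illustrative): MAX(t1.a)
refers only to columns of the outer query, so it is aggregated in the outer
SELECT, and the subquery must treat it as an ordinary expression rather than
as its own aggregate.

SELECT (SELECT MAX(t1.a) FROM t2 LIMIT 1) AS m FROM t1;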
One other, smaller omission was discovered and fixed in the
process: the place of aggregation was not calculated for
user-defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func() as it's done for the rest of
the aggregate functions.
- BUG#11986: Stored routines and triggers can fail if the code
has a non-ascii symbol
- BUG#16291: mysqldump corrupts string-constants with non-ascii-chars
- BUG#19443: INFORMATION_SCHEMA does not support charsets properly
- BUG#21249: Character set of SP-var can be ignored
- BUG#25212: Character set of string constant is ignored (stored routines)
- BUG#25221: Character set of string constant is ignored (triggers)
There were a few general problems that caused these bugs:
1. Character set information of the original (definition) query for views,
triggers, stored routines and events was lost.
2. mysqldump wrote the query in the client character set, which can be
inappropriate for encoding the definition query.
3. INFORMATION_SCHEMA used strings with mixed encodings to display object
definitions.
1. No character set stored for the definition query.
In order to compile a query into execution code, some extra data (such as
environment variables or the database character set) is used. The problem
here was that this context was not preserved. So, on the next load it could
differ from the original one, and thus the result would be different.
The context contains the following data:
- client character set;
- connection collation (character set and collation);
- collation of the owner database;
The fix is to store this context and use it each time we parse (compile)
and execute the object (stored routine, trigger, ...).
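As a hedged illustration (p1 is a hypothetical routine, and the exact
result-set columns are an assumption), the stored context can later be seen
alongside the definition:

-- The creation-time context stored with the routine is reported back, e.g.
-- in columns such as character_set_client, collation_connection and
-- Database Collation.
SHOW CREATE PROCEDURE p1;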
2. Wrong mysqldump output.
The original query can contain several encodings (by means of character set
introducers). The problem here was that we tried to convert the original
query to the mysqldump client character set.
Moreover, we stored queries in different character sets for different
objects (views, for one, used UTF8, while triggers used the original character set).
The solution is
- to store definition queries in the original character set;
- to change the SHOW CREATE statement to output the definition query in the
binary character set (i.e. without any conversion);
- to introduce the SHOW CREATE TRIGGER statement;
- to dump special statements (sketched after this list) that switch the
context to the original one before dumping and restore it afterwards.
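A rough sketch of what such a context switch around one dumped object could
look like (object, character set and the exact statements are assumptions;
mysqldump may wrap them in versioned comments):

-- Save the current context and switch to the object's original one ...
SET @saved_cs_client      = @@character_set_client;
SET @saved_col_connection = @@collation_connection;
SET character_set_client  = koi8r;
SET collation_connection  = koi8r_general_ci;
-- ... dump the definition in its original character set here ...
-- ... then restore the previous context.
SET character_set_client  = @saved_cs_client;
SET collation_connection  = @saved_col_connection;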
Note: in order to preserve the database collation at creation time, an
additional ALTER DATABASE might be used (to temporarily switch the database
collation back to the original value). In this case, the ALTER DATABASE
privilege will be required. This is a backward-incompatible change.
3. INFORMATION_SCHEMA showed non-UTF8 strings
The fix is to generate a UTF8 version of the query during parsing, store it
in the object and show it in INFORMATION_SCHEMA.
Basically, the idea is to create a copy of the original query and convert it
to UTF8. Character set introducers are removed and all text literals are
converted to UTF8.
This UTF8 query is intended to provide user-readable output. It must not be
used to recreate the object. Specialized SHOW CREATE statements should be
used for this.
The reason for this limitation is the following: the original query can
contain symbols from several character sets (by means of character set
introducers).
Example:
- original query:
CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1;
- UTF8 query (for INFORMATION_SCHEMA):
CREATE VIEW v1 AS SELECT 'Hello' AS c1;
Sometimes the number of really updated rows (with changed
column values) cannot be determined at the server level
alone (e.g. if the storage engine does not return enough
column values to verify that). So the only dependable way
in such cases is to let the storage engine return that
information if possible.
Fixed the bug at the server level by providing a way for the
storage engine to return information about whether it
actually updated the row or the old and the new column
values are the same. It can do that by returning
HA_ERR_RECORD_IS_THE_SAME in ha_update_row().
Note that each storage engine may choose not to try to
return this status code, so this behaviour remains
storage engine specific.
SHOW CREATE TABLE or SELECT FROM I_S.
Actually, the bug reveals two problems:
- the original query is not preserved properly. This is the problem
of BUG#16291;
- the resultset of SHOW CREATE TABLE statement is binary.
This patch fixes the second problem for 5.0.
Both problems will be fixed in 5.1.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
A table with the maximum number of key segments and a key name of maximum
length would have a corrupted .frm file, due to an incorrect calculation of
the complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
Sometimes special 0 ENUM values were ALTERed to normal
empty string ENUM values.
The special 0 ENUM value has the same string representation
as a normal ENUM value defined as '' (empty string).
The do_field_string function was used to convert
ENUM data for an ALTER TABLE request, but this
function doesn't care about the numerical "indices" of
ENUM values, i.e. do_field_string doesn't distinguish
the special 0 value from an empty string value.
A new copy function called do_field_enum has been added to
copy special 0 ENUM values without conversion to an empty
string.
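A hypothetical reproduction of the scenario (the table is illustrative, and a
non-strict sql_mode is assumed so that the bad value is stored as the special
0 value):

CREATE TABLE t1 (e ENUM('', 'a', 'b') NOT NULL);
SET sql_mode = '';
INSERT INTO t1 VALUES ('x');       -- invalid value, stored as the special 0
SELECT e, e + 0 FROM t1;           -- shows '' with numeric value 0
ALTER TABLE t1 ADD COLUMN i INT;   -- copies the rows; the 0 value must survive
SELECT e, e + 0 FROM t1;           -- before the fix: '' with numeric value 1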
In ha_partition::position() we don't calculate the number of
the partition of the record. We use m_last_part_value instead, relying on
it being set elsewhere, e.g. by previous calls to ::write_row().
In replication we do neither of these calls before ::position().
Delete_row_log_event::do_exec_row calls find_and_fetch_row(), where
we used position() & rnd_pos() calls to find the record for the
PARTITION/INNODB table as it possesses the InnoDB table flags.
Fixed by removing the HA_PRIMARY_KEY_REQUIRED_FOR_POSITION flag from PARTITION.
When index_init() or rnd_init() returns an error, we still set
handler->inited to INDEX or RND in ha_index_init() and ha_rnd_init().
As the caller doesn't call ha_*_end() in this case, we get a DBUG_ASSERT
failure.
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it can't lock,
e.g. because it's already locked. In such a case it needs to undo the locks on
the already locked tables. And it does that. But it has also notified
the relevant table storage engine handlers that they should lock.
The only reliable way to ensure that the table handlers will give up
their locks is to end the transaction. This is what UNLOCK TABLES
does: it ends the transaction if tables were locked by LOCK
TABLES.
It is possible to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES ends the transaction at its start
already.
Fixed by ending (again) the transaction when LOCK TABLES fails to
lock a table.
Queries for which the loose scan optimization for grouping queries was applied
returned a wrong result set when the query was used with the SQL_BIG_RESULT
option.
The SQL_BIG_RESULT option forces the use of a sorting algorithm for grouping
queries instead of employing a suitable index. The current loose scan
optimization is applied only to single-table queries when a suitable
covering index exists. It does not make sense to use the sort algorithm in
this case. However, the create_sort_index function does not take into account
the possible choice of the loose scan to implement the DISTINCT operator,
which makes sorting unnecessary. Moreover, the current implementation of
the loose scan for queries with DISTINCT assumes that sorting will
never happen. Thus in this case create_sort_index should not call
the filesort function.
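A hypothetical query of the affected shape (table and index are
illustrative): the covering index on (a) lets the loose index scan return
each distinct group directly, so the filesort that SQL_BIG_RESULT would
normally force is redundant and must be skipped.

CREATE TABLE t1 (a INT, b INT, KEY (a));
SELECT SQL_BIG_RESULT DISTINCT a FROM t1;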
INSERT into a table from a SELECT on the same table
with ORDER BY and LIMIT was inserting different data
than the standalone SELECT ... ORDER BY ... LIMIT returns.
One part of the patch for bug #9676 improperly pushed the
LIMIT down to the temporary table in the presence of the ORDER BY
clause.
That part has been removed.
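A hypothetical statement of the affected form (t1 is illustrative): because
the SELECT reads from the target table, it is materialized in a temporary
table, and the LIMIT must only be applied after the ORDER BY, exactly as in
the standalone SELECT.

INSERT INTO t1 (a) SELECT a FROM t1 ORDER BY a DESC LIMIT 3;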
parameter don't use query cache"
Thanks to the fix of BUG#26842, statements prepared with SQL PREPARE
and having parameters can now use the query cache.
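A small illustration of the now-supported case (table and parameter are
illustrative):

PREPARE s FROM 'SELECT * FROM t1 WHERE a = ?';
SET @a = 1;
EXECUTE s USING @a;   -- with the fix, this result can be cached and reused
DEALLOCATE PREPARE s;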