spaces.
When the my_strnncollsp_simple function compares two strings and one is a prefix
of the other, it compares the remaining characters of the longer key with the
space character to decide whether the longer key is greater or less.
However, the sort order of the collation wasn't used in this comparison. This
could produce a wrong comparison result, an incorrectly built index, or a wrong
row order in the result set of a query with an ORDER BY clause.
Now the my_strnncollsp_simple function uses the collation sort order when
comparing the remaining characters of the longer key with the space character.
mysql-test/t/ctype_collate.test:
Added a test case for the bug#29261: Sort order of the collation wasn't used
when comparing trailing spaces.
mysql-test/r/ctype_collate.result:
Added a test case for the bug#29261: Sort order of the collation wasn't used
when comparing trailing spaces.
strings/ctype-simple.c:
Bug#29261: Sort order of the collation wasn't used when comparing trailing
spaces.
Now the my_strnncollsp_simple function uses collation sort order to compare
the characters in the rest of longer key with the space character.
- Use SQL for diffing master and slave
mysql-test/r/rpl_misc_functions.result:
Dump t1 on the slave and load it back into a temporary table on the master
to allow comparison with SQL
mysql-test/t/rpl_misc_functions.test:
Dump t1 on the slave and load it back into a temporary table on the master
to allow comparison with SQL
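For illustration, a minimal sketch of the SQL-based comparison this enables
(the table t1_slave, the key column id, and the dump file name are hypothetical,
not necessarily what the test uses):

  CREATE TEMPORARY TABLE t1_slave LIKE t1;
  -- load the dump of t1 that was produced on the slave
  LOAD DATA INFILE 'rpl_misc_functions.outfile' INTO TABLE t1_slave;
  -- rows whose key exists on the master but not in the slave's dump
  SELECT t1.* FROM t1
  LEFT JOIN t1_slave USING (id)
  WHERE t1_slave.id IS NULL;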
query / no aggregate of subquery
The optimizer counts the aggregate functions that
appear as top level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag indicating
whether there are aggregate functions.
While collecting the above info it must not consider
aggregates that are not aggregated in the current
context. It must treat them as normal expressions
instead. Not doing that leads to incorrect data about
the query, e.g. running a query that actually has no
aggregate functions as if it has some (and hence is
expected to return only one row).
Fixed by ignoring the aggregates that are not aggregated
in the current context.
One other, smaller omission was discovered and fixed in the
process: the place of aggregation was not calculated for
user-defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func(), as is done for the rest of
the aggregate functions.
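A sketch of the kind of query affected (t1 and t2 are hypothetical tables):
MAX(t1.a) aggregates in the outer SELECT's context, so the subquery itself
contains no aggregates and must not be executed as if it did:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1),(2);
  INSERT INTO t2 VALUES (1),(2);
  -- MAX(t1.a) belongs to the outer query, not to the subquery
  SELECT (SELECT t2.b FROM t2 WHERE t2.b = MAX(t1.a)) FROM t1;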
mysql-test/r/subselect.result:
Bug #27333: test case
mysql-test/t/subselect.test:
Bug #27333: test case
sql/item_subselect.cc:
Bug#27333: need select_lex to filter out
aggregates that are not aggregated in
the current select.
sql/item_sum.cc:
Bug#27333: need select_lex to filter out
aggregates that are not aggregated in
the current select.
sql/item_sum.h:
Bug#27333: calculate the place of
aggregation for user defined functions.
sql/sql_select.cc:
Bug#27333: When counting the aggregate functions
and collecting a list of them, we must not consider
aggregates that are not aggregated in the local
context as "local": i.e. we must treat them as
normal functions and not add them to the aggregate
functions list.
sql/sql_select.h:
Bug#27333: need select_lex to filter out
aggregates that are not aggregated in
the current select.
"Federared Transactions Failure"
Bug occurs when the user performs an operation which inserts more than
one row into the federated table and the federated table references a
remote table stored within a transactional storage engine. When the
insert operation for any one row in the statement fails due to
constraint violation, the federated engine is unable to perform
statement rollback and so the remote table contains a partial commit.
The user would expect the statement to behave the same way regardless of the
storage engine, so a statement rollback is expected.
This bug was fixed by implementing bulk-insert handling into the
federated storage engine. This will relieve the bug for most common
situations by enabling the generation of a multi-row insert into the
remote table and thus permitting the remote table to perform
statement rollback when necessary.
The multi-row insert is limited to the maximum packet size between
servers and should the size overflow, more than one insert statement
will be sent and this bug will reappear. Multi-row insert is disabled
when an "INSERT...ON DUPLICATE KEY UPDATE" is being performed.
The bulk-insert handling will offer a significant performance boost
when inserting a large number of small rows.
This patch builds on Bug29019 and Bug25511
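As an illustration (the FEDERATED table t_fed and the remote table remote_t1
are hypothetical), the rows of one multi-row statement are now batched into a
single remote INSERT, so the remote transactional engine can roll back the
whole statement if any row fails:

  -- executed on the local server against the FEDERATED table
  INSERT INTO t_fed (id, val) VALUES (1,'a'), (2,'b'), (3,'c');
  -- sketch of the statement now sent to the remote server as a whole,
  -- instead of one single-row INSERT per row
  INSERT INTO `remote_t1` (`id`, `val`) VALUES (1,'a'), (2,'b'), (3,'c');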
sql/ha_federated.cc:
bug25513
new member methods:
start_bulk_insert() - initializes memory for bulk insert
end_bulk_insert() - sends any remaining bulk insert and frees memory
append_stmt_insert() - create the INSERT statement
sql/ha_federated.h:
bug25513
new member value:
bulk_insert
new member methods:
start_bulk_insert(), end_bulk_insert(), append_stmt_insert()
make member methods private:
read_next(), index_read_idx_with_result_set()
mysql-test/r/federated_innodb.result:
New BitKeeper file ``mysql-test/r/federated_innodb.result''
mysql-test/t/federated_innodb-slave.opt:
New BitKeeper file ``mysql-test/t/federated_innodb-slave.opt''
mysql-test/t/federated_innodb.test:
New BitKeeper file ``mysql-test/t/federated_innodb.test''
"Federated INSERT failures"
Federated does not correctly handle "INSERT...ON DUPLICATE KEY UPDATE"
However, implementing such support is not reasonably possible without
increasing the complexity of the storage engine: it would require checking
that constraints on the remote server match the local server and parsing
error messages.
This patch causes 'ON DUPLICATE KEY' to fail with an ER_DUP_KEY message
if a conflict occurs, rather than failing silently.
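For example (t_fed is a hypothetical FEDERATED table whose remote table has a
primary key on id):

  INSERT INTO t_fed (id, val) VALUES (1, 'x')
    ON DUPLICATE KEY UPDATE val = 'x';
  -- if id = 1 already exists on the remote server, this statement now
  -- fails with ER_DUP_KEY instead of appearing to succeed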
include/my_base.h:
bug25511
new storage engine hint: HA_EXTRA_INSERT_WITH_UPDATE
mysql-test/r/federated.result:
test for bug25511
mysql-test/t/federated.test:
test for bug25511
sql/ha_federated.cc:
bug25511
implement support for handling HA_EXTRA_INSERT_WITH_UPDATE hint
sql/ha_federated.h:
bug25511
new property: insert_dup_update
sql/sql_insert.cc:
bug25511
implement support for HA_EXTRA_INSERT_WITH_UPDATE
When checking duplicates flag, if it is DUP_UPDATE, send hint
to the storage engine.
SHOW CREATE TABLE or SELECT FROM I_S.
Actually, the bug uncovers two problems:
- the original query is not preserved properly. This is the problem
of BUG#16291;
- the resultset of SHOW CREATE TABLE statement is binary.
This patch fixes the second problem for the 5.0.
Both problems will be fixed in 5.1.
mysql-test/r/show_check.result:
Update result file.
mysql-test/t/show_check.test:
Provide test case for BUG#10491.
sql/item.h:
Use utf8_general_ci instead of binary collation by default,
because for views and base tables utf8 is the character set
in which their definition is stored. For system constants
it's the default character set, and for other objects
(routines, triggers), no character set is stored, and
therefore no character set is known, so returning utf8
is just as good as any non-binary character set.
This latter problem is fixed in 5.1 by the patch for BUG#16291. In 5.1
we will return the "real" character set.
Problem: like_range() returned wrong ranges for contractions (like 'ch' in Czech).
Fix: adding a special code to handle tricky cases:
- contraction head followed by a wild character
- full contraction
- contraction part followed by another contraction part,
but they are not a contraction together.
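A sketch of the affected cases, assuming a Czech UCA collation such as
utf8_czech_ci, where 'ch' is a contraction that sorts as a single letter
between 'h' and 'i':

  CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_czech_ci, KEY (c));
  INSERT INTO t1 VALUES ('cheese'), ('cold');
  -- full contraction: like_range() must produce an index range that
  -- still covers 'cheese'
  SELECT c FROM t1 WHERE c LIKE 'ch%';
  -- contraction head followed by a wild character: 'c' may combine with
  -- a following 'h' into the contraction 'ch', which sorts elsewhere
  SELECT c FROM t1 WHERE c LIKE 'c%';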
mysql-test/r/ctype_uca.result:
Adding test case
mysql-test/t/ctype_uca.test:
Adding test case
strings/ctype-mb.c:
Adding test case
strings/ctype-uca.c:
Allocate additional 256 bytes for flags "is contraction part".
strings/ctype-ucs2.c:
Adding test case
"REPLACE/INSERT IGNORE/UPDATE IGNORE doesn't work"
Federated does not record the necessary HA_EXTRA flags in order to
support REPLACE/INSERT IGNORE/UPDATE IGNORE.
Implement ::extra() to capture the flags necessary for this functionality.
New function append_ident() to escape identifiers more consistently.
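For example (t_fed is a hypothetical FEDERATED table with a primary key on id),
each of these statements depends on the hints that ::extra() now records:

  REPLACE INTO t_fed (id, val) VALUES (1, 'a');
  INSERT IGNORE INTO t_fed (id, val) VALUES (1, 'b');
  UPDATE IGNORE t_fed SET id = 2 WHERE id = 1;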
mysql-test/r/federated.result:
test for bug29019
mysql-test/t/federated.test:
test for bug29019
sql/ha_federated.cc:
Bug29019
Federated does not record the necessary HA_EXTRA flags in order to
support REPLACE/INSERT IGNORE/UPDATE IGNORE.
Implement ::extra() to capture the flags necessary for this functionality.
New function append_ident() to escape identifiers more consistently.
sql/ha_federated.h:
bug29019
add 2 member values to ha_federated class
ignore_duplicates and replace_duplicates.
add 1 member method to ha_federated class
extra()
Fulltext index may get corrupt by certain gbk characters.
The problem was that when skipping leading non-true-word-characters,
we assumed that these characters are always 1 byte long. This is not
the case with gbk character set, since non-true-word-characters may
be 2 bytes long.
Affects 5.0 only.
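A sketch of the affected case (the exact byte values are only illustrative): a
gbk document that begins with a two-byte non-word character, which must be
skipped as one character, not one byte:

  CREATE TABLE t1 (a VARCHAR(255) CHARACTER SET gbk, FULLTEXT KEY (a)) ENGINE=MyISAM;
  -- 0xA3AC is intended as a two-byte gbk punctuation character, followed by 'abcd'
  INSERT INTO t1 VALUES (0xA3AC61626364);
  SELECT * FROM t1 WHERE MATCH (a) AGAINST ('abcd');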
myisam/ft_parser.c:
Leading non-true-word-characters may also be multi-byte (e.g. in
gbk character set).
mysql-test/r/fulltext2.result:
A test case for BUG#29299.
mysql-test/t/fulltext2.test:
A test case for BUG#29299.
show that shm communication still works on windows,
and that we can't overflow the base-name.
mysql-test/t/windows_shm-master.opt:
Bug#24924: shared-memory-base-name that is too long causes buffer overflow
start a server with shm communication if we're on
windows.
mysql-test/t/windows_shm.test:
Bug#24924: shared-memory-base-name that is too long causes buffer overflow
.opt has started a server with shm communication
if we're on windows. now start a client with shm
and connect to that server.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
A table with the maximum number of key segments and maximum-length key names
would have a corrupted .frm file, due to an incorrect calculation of the
complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
myisam/mi_open.c:
Bug #26642: change >= to > in a comparison (i.e., error
only if key_parts_in_table really is greater than
MAX_KEY * MAX_KEY_SEG)
mysql-test/r/create.result:
Bug #26642: test case
mysql-test/t/create.test:
Bug #26642: test case
sql/table.cc:
Bug #26642: In create_frm(), fix formula for key_length;
it was too small by (keys * 2) bytes
CHECK TABLE against ARCHIVE table may falsely report table corruption,
or cause server crash.
Fixed by using proper buffer for CHECK TABLE.
Affects both 5.0 and 5.1.
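A minimal sketch of the affected scenario (table and data are illustrative):

  CREATE TABLE t1 (a INT, b BLOB) ENGINE=ARCHIVE;
  INSERT INTO t1 VALUES (1, REPEAT('a', 100));
  -- could falsely report corruption (or crash) before the fix
  CHECK TABLE t1;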
mysql-test/r/archive.result:
A test case for BUG#28916.
mysql-test/t/archive.test:
A test case for BUG#28916.
sql/ha_archive.cc:
We call Field::get_length() from get_row(). Field::get_length() assumes
that the row was read into table->record[0] buffer, which is not the
case when we check a table. As a result we get wrongly initialized
blob length.
Use table->record[0] as record buffer for check table instead.
Added more sleep time before reap to allow the query to be executed.
mysql-test/t/status.test:
Added more sleep time before reap to allow the query to be executed.
Sometimes special 0 ENUM values were ALTERed to normal
empty-string ENUM values.
The special 0 ENUM value has the same string representation
as a normal ENUM value defined as '' (empty string).
The do_field_string function was used to convert
ENUM data at an ALTER TABLE request, but this
function doesn't care about numerical "indices" of
ENUM values, i.e. do_field_string doesn't distinguish
a special 0 value from an empty string value.
A new copy function called do_field_enum has been added to
copy special 0 ENUM values without conversion to an empty
string.
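A sketch of the scenario (assuming non-strict sql_mode, so an invalid value is
stored as the special 0 value):

  SET sql_mode = '';
  CREATE TABLE t1 (e ENUM('a','b','') NOT NULL);
  INSERT INTO t1 VALUES ('not in the list');   -- stored as the special 0 value
  ALTER TABLE t1 MODIFY e ENUM('a','b','','c') NOT NULL;
  -- must still be 0, not the index of the '' member
  SELECT e + 0 FROM t1;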
sql/field_conv.cc:
Fixed bug #29251.
The Copy_field::get_copy_func method has been modified to
return a pointer to the do_field_enum function if a conversion
between two columns of incompatible enum types is required.
The do_field_enum function has been added for the correct
conversion of special 0 enum values.
mysql-test/t/type_enum.test:
Updated test case for bug #29251.
mysql-test/r/type_enum.result:
Updated test case for bug #29251.
a lookup into a BINARY index by a key ending with spaces. It caused
an assertion failure in debug builds and wrong results in non-debug
builds.
The problem occurred because the function _mi_pack_key stripped off
the trailing spaces from binary search keys while the function _mi_make_key
did not do it when keys were inserted into the index.
Now the function _mi_pack_key does not remove the trailing spaces from
search keys if they are of the binary type.
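A sketch of the affected lookup (table and values are illustrative):

  CREATE TABLE t1 (c BINARY(2), KEY (c)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES ('a '), ('ab');
  -- the search key ends with a space (0x20); it must not be stripped
  -- when the index is probed
  SELECT HEX(c) FROM t1 WHERE c = 'a ';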
mysql-test/r/binary.result:
Added a test case for bug #29087.
mysql-test/t/binary.test:
Added a test case for bug #29087.
fix binlog writing so that end_log_pos is reported correctly even
within transactions for both SHOW BINLOG EVENTS and SHOW MASTER STATUS,
that is, as absolute values (from the start of the log) rather than
relative values (from the start of the transaction).
---
Merge tnurnberg@bk-internal.mysql.com:/home/bk/mysql-5.0-maint
into sin.intern.azundris.com:/home/tnurnberg/22540/50-22540
mysql-test/r/binlog.result:
Bug #22540: Incorrect value in column End_log_pos of SHOW BINLOG EVENTS using InnoDB
show that end_log_pos in SHOW BINLOG EVENTS is correct even in transactions.
show that SHOW MASTER STATUS returns correct values while in transactions
(so that mysqldump --master-data will work correctly).
also remove bdb dependency.
---
manual merge
mysql-test/t/binlog.test:
Bug #22540: Incorrect value in column End_log_pos of SHOW BINLOG EVENTS using InnoDB
show that end_log_pos in SHOW BINLOG EVENTS is correct even in transactions.
show that SHOW MASTER STATUS returns correct values while in transactions
(so that mysqldump --master-data will work correctly).
also remove bdb dependency.
sql/log.cc:
Bug #22540: Incorrect value in column End_log_pos of SHOW BINLOG EVENTS using InnoDB
fix output for SHOW BINLOG EVENTS so that end_log_pos is given correctly
even within transactions. do this by rewriting the commit-buffer in place.
LOCK TABLES takes a list of tables to lock. It may lock several
tables successfully and then encounter a table that it can't lock,
e.g. because it is already locked. In such a case it needs to undo the
locks on the already locked tables, and it does that. But it has also
notified the relevant storage engine handlers that they should lock.
The only reliable way to ensure that the table handlers will give up
their locks is to end the transaction. This is what UNLOCK TABLES
does: it ends the transaction if tables were locked by LOCK TABLES.
It is safe to end the transaction when the lock fails in
LOCK TABLES because LOCK TABLES already ends the transaction at its
start.
Fixed by ending the transaction (again) when LOCK TABLES fails to
lock a table.
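A sketch of the failure mode (t1 and t2 are hypothetical tables in a
transactional engine; assume locking t2 fails for whatever reason):

  -- t1 is locked first and its handler is notified;
  -- locking t2 then fails, so the whole statement fails
  LOCK TABLES t1 WRITE, t2 WRITE;
  -- the failing LOCK TABLES must end the transaction again so that
  -- the handler releases the locks it already took for t1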
mysql-test/r/innodb_mysql.result:
Bug #29154: test case
mysql-test/t/innodb_mysql.test:
Bug #29154: test case
sql/sql_parse.cc:
Bug #29154: end the transaction at a failing
LOCK TABLES so the handler can free its locks.
The maximum compressed file size was calculated incorrectly, causing a server
crash on INSERT.
With this patch we use proper max file size provided by zlib.
Affects 5.0 only.
sql/ha_archive.cc:
When calculating max compressed file size, use the real offset size
that is provided by zlib, instead of sizeof(z_off_t), which may be
different from actual offset size.
When we're about to write and the data file is almost full, flush the gzio
buffer to get an accurate real file size.
mysql-test/r/archive-big.result:
New BitKeeper file ``mysql-test/r/archive-big.result''
mysql-test/t/archive-big.test:
New BitKeeper file ``mysql-test/t/archive-big.test''
the loose scan optimization for grouping queries was applied returned
a wrong result set when the query was used with the SQL_BIG_RESULT
option.
The SQL_BIG_RESULT option forces the use of a sorting algorithm for grouping
queries instead of employing a suitable index. The loose scan
optimization is currently applied only to single-table queries when a suitable
covering index exists. It does not make sense to use a sorting algorithm in
this case. However, the create_sort_index function did not take into account
that the loose scan may have been chosen to implement the DISTINCT operator,
which makes sorting unnecessary. Moreover, the current implementation of
the loose scan for queries with DISTINCT assumes that sorting will
never happen. Thus in this case create_sort_index should not call
the filesort function.
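A sketch of the kind of query affected (the table and index are hypothetical):
the covering index on (a) lets the loose index scan produce distinct values
directly, so create_sort_index() must not invoke filesort here:

  CREATE TABLE t1 (a INT, b INT, KEY (a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1,1), (1,2), (2,1), (2,2);
  SELECT DISTINCT SQL_BIG_RESULT a FROM t1;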
mysql-test/r/group_min_max.result:
Added a test case for bug #25602.
mysql-test/t/group_min_max.test:
Added a test case for bug #25602.
INSERT into a table from a SELECT on the same table
with ORDER BY and LIMIT was inserting different data
than the standalone SELECT ... ORDER BY ... LIMIT returns.
One part of the patch for bug #9676 improperly pushed
LIMIT to temporary table in the presence of the ORDER BY
clause.
That part has been removed.
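A sketch of the affected statement (table and data are illustrative):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (3),(1),(2),(5),(4);
  -- must insert exactly the rows that the standalone
  -- SELECT a FROM t1 ORDER BY a DESC LIMIT 2 returns (5 and 4)
  INSERT INTO t1 SELECT a FROM t1 ORDER BY a DESC LIMIT 2;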
sql/sql_select.cc:
Fixed bug #29095.
One part of the patch for bug #9676 improperly pushed
LIMIT to temporary table in the presence of the ORDER BY
clause.
That part has been removed.
mysql-test/t/insert_select.test:
Expanded the test case for bug #9676.
Created a test case for bug #29095.
mysql-test/r/insert_select.result:
Expanded the test case for bug #9676.
Created a test case for bug #29095.
into adventure.(none):/home/thek/Development/cpp/mysql-5.0-runtime
sql/sql_class.h:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/sql_prepare.cc:
Auto merged
sql/sql_view.cc:
Auto merged
sql/sql_yacc.yy:
Auto merged
into mysql.com:/home/bar/mysql-work/mysql-5.0.b28925
sql/sql_yacc.yy:
Auto merged
mysql-test/r/ctype_ucs2_def.result:
After merge fix
mysql-test/t/ctype_ucs2_def.test:
After merge fix
Problem: the separator was not converted to the result character set,
so the result was a mixture of two different character sets,
which was especially bad for UCS2.
Fix: convert the separator to the result character set.
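For example (c is a hypothetical ucs2 column): the ',' separator must be
converted to ucs2 before concatenation, otherwise the result mixes character
sets:

  CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET ucs2);
  INSERT INTO t1 VALUES ('a'), ('b');
  SELECT HEX(GROUP_CONCAT(c SEPARATOR ',')) FROM t1;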
mysql-test/r/ctype_ucs.result:
Adding test case
mysql-test/r/ctype_ucs2_def.result:
Adding test case
mysql-test/t/ctype_ucs.test:
Adding test case
mysql-test/t/ctype_ucs2_def.test:
Adding test case
sql/item_sum.cc:
Adding conversion of separator to the result character set
sql/sql_yacc.yy:
Fixing GROUP_CONCAT problems when "mysqld --default-character-set=ucs2".
ALTER VIEW is currently not supported as a prepared statement
and should be disabled as such, as it could otherwise cause server crashes.
ALTER VIEW is currently not supported when called from stored
procedures or functions for related reasons and should also be disabled.
This patch disables these DDL statements and adjusts the appropriate test
cases accordingly.
Additional tests have been added to reflect the fact that we do support
CREATE/ALTER/DROP TABLE for Prepared Statements (PS), Stored Procedures (SP)
and PS within SP.
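For example (v1 is a hypothetical view), both of the following are now rejected
instead of risking a crash:

  -- as a prepared statement: now rejected as unsupported
  PREPARE stmt FROM 'ALTER VIEW v1 AS SELECT 1';
  -- inside a stored procedure: now rejected at parse time
  CREATE PROCEDURE p1() ALTER VIEW v1 AS SELECT 1;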
mysql-test/r/ps_1general.result:
- Updated test to reflect the new policy of disallowing ALTER VIEW within SP.
mysql-test/r/sp-dynamic.result:
- Added a PS ALTER TABLE test from within SP context to demonstrate that CREATE/ALTER/DROP
TABLE statements are working.
- Added PS CREATE/ALTER/DROP VIEW tests from within SP context to show that
ALTER VIEW is not supported while CREATE VIEW/DROP VIEW are supported.
mysql-test/r/sp-error.result:
- Updated test to reflect the new policy of disallowing VIEW DDL within SP.
mysql-test/t/ps_1general.test:
- Updated test to reflect the new policy of disallowing VIEW DDL within SP.
mysql-test/t/sp-dynamic.test:
- Add PS ALTER TABLE test from within SP to demonstrate that CREATE/ALTER/DROP
TABLE statements are supported.
mysql-test/t/sp-error.test:
- Updated test to reflect the new policy of disallowing ALTER VIEW
within SP context.
- Changed error code 1314 to the more abstract ER_SP_BADSTATEMENT.
sql/sql_class.h:
- Added comment for clarity
sql/sql_parse.cc:
- Added comment for clarity
sql/sql_prepare.cc:
- Disallow ALTER VIEW as prepared statements until they are
properly supported. Note that SQLCOM_CREATE_VIEW also handles ALTER VIEW
statements.
sql/sql_view.cc:
- converted to doxygen comments
- Added comment for clarity
sql/sql_yacc.yy:
- Disallow ALTER VIEW statements within an SP.
Whether the parser is operating within SP context is indicated by the
sp->sphead pointer. If it is set for view DDL operations,
we stop parsing with the error ER_SP_BADSTATEMENT.
The reason the "reap;" succeeds unexpectedly is because the query was completing(almost always) and the network buffer was big enough to store the query result (sometimes) on Windows, meaning the response was completely sent before the server thread could be killed.
Therefore we use a much longer running query that doesn't have a chance to fully complete before the reap happens, testing the kill properly.
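Schematically (the connection names and the exact query are illustrative, not
necessarily what kill.test uses):

  connection con1;
  # a large self-join that cannot complete before the kill
  send SELECT COUNT(*) FROM t1 AS a, t1 AS b, t1 AS c, t1 AS d;
  connection default;
  # kill con1's query/connection here, then switch back:
  connection con1;
  # reap must now report the kill instead of a completed result set
  reap;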
mysql-test/r/kill.result:
We use a much longer running query that doesn't have a chance to fully complete before the reap happens.
mysql-test/t/kill.test:
We use a much longer running query that doesn't have a chance to fully complete before the reap happens.