crashes server
When creating a temporary table that contains aggregate functions,
a non-reversible source transformation was performed to redirect
the aggregate function arguments to the temporary table columns.
This caused EXPLAIN EXTENDED to fail because it was trying to resolve
references to the (freed) temporary table.
Fixed by preserving the original aggregate function arguments and
using them (instead of the transformed ones) for EXPLAIN EXTENDED.
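A minimal sketch of the "preserve the originals" idea, assuming
illustrative Expr/AggregateFunc stand-ins rather than the server's
actual Item classes:

  #include <string>
  #include <vector>

  struct Expr { std::string text; };

  struct AggregateFunc {
    std::vector<Expr*> args;       // may be redirected to tmp-table columns
    std::vector<Expr*> orig_args;  // untouched copy of the original arguments

    // The non-reversible transformation: remember the originals first, then
    // point the working arguments at the temporary table's columns.
    void redirect_to_tmp_table(const std::vector<Expr*> &tmp_columns) {
      if (orig_args.empty()) orig_args = args;
      args = tmp_columns;
    }

    // EXPLAIN EXTENDED prints from orig_args, which stay valid even after
    // the temporary table (and its columns) have been freed.
    std::string print_for_explain() const {
      std::string out = "aggregate(";
      for (size_t i = 0; i < orig_args.size(); ++i) {
        if (i) out += ", ";
        out += orig_args[i]->text;
      }
      return out + ")";
    }
  };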
if cached query uses many tables
The problem was that the query cache would not properly cache
queries which used 256 or more tables, yet would leave
behind query cache blocks pointing to freed (destroyed)
data. Later, when invalidating (due to a truncate), the query
cache would attempt to grab a lock which resided in the freed
data, leading to hangs or undefined behavior.
This was happening due to an improper return value from the
function responsible for registering the tables used in the
query (so the cache can be invalidated later if one of the
tables is modified). The function was expected to return a
boolean value (char, 8 bits) indicating success (1) or failure
(0), but the number of tables registered (unsigned int, 32 bits)
was being returned instead. This caused the function to return
failure for cases where it had actually succeeded, because when
a type (unsigned int) is converted to a narrower type (char),
the excess high-order bits are discarded. Thus, if the 8
low-order bits are all zero, the return value becomes 0 (failure).
The solution is to simply return true (1) only if the number of
registered tables is greater than zero and false (0) otherwise.
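A self-contained illustration of that truncation (the function names
below are made up for the example; they are not the actual query-cache
routines):

  #include <cstdio>

  typedef char my_bool;   // 8-bit success flag: 1 = success, 0 = failure

  // Pretend registration routine: returns how many tables it registered.
  static unsigned int register_tables(unsigned int table_count) {
    return table_count;
  }

  // Buggy pattern: the 32-bit count is silently truncated to 8 bits, so
  // any multiple of 256 ends up reported as failure (0).
  static my_bool register_query_buggy(unsigned int table_count) {
    return (my_bool) register_tables(table_count);
  }

  // Fixed pattern: convert the count into an explicit boolean result.
  static my_bool register_query_fixed(unsigned int table_count) {
    return register_tables(table_count) > 0;
  }

  int main() {
    printf("buggy(256) = %d\n", register_query_buggy(256));  // prints 0
    printf("fixed(256) = %d\n", register_query_fixed(256));  // prints 1
    return 0;
  }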
The problem was an unclear error message, since it could suggest that
MyISAM did not support INSERT DELAYED.
Changed the error message to say that DELAYED is not supported by the
table, instead of by the table's storage engine.
The confusion is that a partitioned table is, in some sense, using
the partitioning storage engine, which in turn uses the ordinary
storage engine. By saying that the table does not support DELAYED we
do not give any extra information about the storage engine or whether
the table is partitioned.
Fix for this bug and additional improvements/fixes
In detail:
- Remove the unicode attribute from several columns
  of the table tb3
  (unicode properties were nowhere needed/tested)
-> These tests no longer depend on the availability
   of some optional collations.
- Use a table tb3 with the same layout for all
engines to be tested and unify the engine name
within the protocols.
-> The <engine>_trig_<abc>.result files have the same content.
- Do not load data into tb3 if these rows have no
impact on result sets
- Add tests for NDB (they exist already in 5.1)
- "--replace_result" at various places because
NDB variants of tests failed with "random" row
order in results
This fixes a till now unknown weakness within the
funcs_1 NDB tests existing in 5.1 and 6.0
- Fix the expected result of ndb_trig_1011ext
which suffered from Bug 32656
+ disable this test
- funcs_1 could be executed with the mysql-test-run.pl
option "--reorder", which saves some runtime by
optimizing server restarts.
Runtimes on tmpfs (one attempt only):
with reorder 132 seconds
without reorder 183 seconds
- Adjust two "check" statements within func_misc.test
  which were incorrect (we had one run with a result set
  difference even though the server worked correctly).
- minor fixes in comments
Upmerge of fix for this bug and a second similar problem
found during experimenting.
This replaces the first fix (already pushed to 5.1
and merged to 6.0) which
- failed in runs with the embedded server
- cannot be ported back to 5.0
Fix for this bug and a second similar problem
found during experimenting.
This replaces the first fix (already pushed to 5.1
and merged to 6.0) which
- failed in runs with the embedded server
- cannot be ported back to 5.0
The failing test case depends on unnecessary status variable output
which changes based on the build configuration. By reducing the output
the test becomes more stable.
the local tree contains a fix for
Bug#32748 "Inconsistent handling of assignments to
general_log_file/slow_query_log_file",
which changes the output of a number of tests.
first row or fails with an error:
ERROR 1022 (23000): Can't write; duplicate key in table ''
The server uses an intermediate temporary table to store updated
row data. The first column of this table contains the rowid.
The current server implementation doesn't reset the NULL flag of
that column even when the server fills the column with a rowid.
To keep each rowid unique, there is a unique index.
An insertion into a unique index takes the NULL flag of the key
value into account and ignores the real data if the NULL flag is
set. So, insertion of actually different rowids may lead to two
kinds of problems. The visible effect of each of these problems
depends on the initial engine type of the temporary table:
1. If multiupdate initially creates the temporary table as
   a MyISAM table (the table contains blob columns, and the
   create_tmp_table function assumes that this table is
   large), it inserts only one single row and updates
   only rows with the one corresponding rowid. Other rows are
   silently ignored.
2. If multiupdate initially creates a MEMORY temporary
   table, fills it with data, and reaches the size limit for
   MEMORY tables (max_heap_table_size), multiupdate
   converts the MEMORY table into a MyISAM table and fails
   with an error:
   ERROR 1022 (23000): Can't write; duplicate key in table ''
Multiupdate has been fixed to update the NULL flag of the
temporary table rowid columns.
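A simplified, hedged illustration of why a stale NULL flag collapses
distinct rowids on a unique index (the key layout and comparator below
are illustrative, not the server's internal temp-table format):

  #include <cstdio>
  #include <set>

  struct RowidKey {
    bool is_null;        // NULL flag stored with the key
    long long rowid;     // the actual rowid bytes
  };

  // The unique index compares the NULL flag first and ignores the real
  // data when the flag is set -- the behaviour described above.
  struct KeyLess {
    bool operator()(const RowidKey &a, const RowidKey &b) const {
      if (a.is_null != b.is_null) return a.is_null < b.is_null;
      if (a.is_null) return false;          // both "NULL": treated as equal
      return a.rowid < b.rowid;
    }
  };

  int main() {
    std::set<RowidKey, KeyLess> unique_index;

    // Buggy path: the NULL flag is never reset, so every distinct rowid
    // collapses into one "duplicate" key.
    for (long long rowid = 1; rowid <= 3; ++rowid)
      unique_index.insert(RowidKey{true, rowid});
    printf("rows kept with stale NULL flag: %zu\n", unique_index.size()); // 1

    // Fixed path: the NULL flag is cleared when the rowid is stored, so
    // all rowids stay distinct.
    unique_index.clear();
    for (long long rowid = 1; rowid <= 3; ++rowid)
      unique_index.insert(RowidKey{false, rowid});
    printf("rows kept with NULL flag reset:  %zu\n", unique_index.size()); // 3
    return 0;
  }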
with dependent subqueries
An IN subquery is executed during EXPLAIN when it's not correlated.
If the subquery required a temporary table for its execution,
not all the internal structures were restored from pointing to
the items of the temporary table back to the items of the
subquery.
Fixed by restoring the ref array when temporary tables were used in
executing the IN subquery during EXPLAIN EXTENDED.
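A minimal sketch of the save/restore pattern, assuming simplified
stand-ins for the server's Item tree and ref pointer array:

  #include <vector>

  struct Item { const char *name; };

  struct SubqueryExec {
    std::vector<Item*> ref_array;        // pointers used while executing
    std::vector<Item*> saved_ref_array;  // copy of the original pointers

    // Before executing via a temporary table: remember the original items,
    // then let the optimizer repoint the entries at the tmp table's items.
    void switch_to_tmp_table(const std::vector<Item*> &tmp_items) {
      saved_ref_array = ref_array;
      ref_array = tmp_items;
    }

    // After the subquery has been executed for EXPLAIN EXTENDED: restore
    // the original pointers so the extended output is printed from the
    // subquery's own items, not from the (freed) temporary table's items.
    void restore_after_execution() {
      ref_array = saved_ref_array;
    }
  };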
slave
The stored-routine code took the contents of the (lowest) parser
and copied them directly to the binlog, which caused problems whenever
there was a special case of interpretation at the parser level -- and
there is one, in the "/*!VER */" comments. The trailing "*/" caused
errors on the slave, naturally.
Now, since by that point we have a /properly/ created parse tree (as
the rest of the server should do!) for the stored-routine CREATE, we
can construct a correct statement from that information, instead of
writing uncertain information from an unknown parser state.
Fortunately, there's already a function nearby that does exactly
that.
---
Update for Bug#36570. Qualify routine names with db name when
writing to the binlog ONLY if the source text is qualified.
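A hedged sketch of the difference between the two approaches described
above (the names below are illustrative, not the server's sp_head or
binlog API):

  #include <string>

  struct RoutineDef {           // information available from the parse tree
    std::string name;
    std::string params;
    std::string body;
  };

  // Old pattern: copy whatever text span the lowest parser happened to see.
  // If the CREATE was wrapped in a "/*!VER ... */" version comment, the
  // copied span can end with a stray "*/" that breaks the slave's parser.
  std::string binlog_text_raw(const std::string &parser_buffer) {
    return parser_buffer;   // e.g. "CREATE PROCEDURE p() SELECT 1 */"
  }

  // New pattern: rebuild the statement from the parse tree, so the emitted
  // text is well formed no matter how the source text was commented.
  std::string binlog_text_from_tree(const RoutineDef &def) {
    return "CREATE PROCEDURE " + def.name + "(" + def.params + ") " + def.body;
  }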
"Apply InnoDB snapshot innodb-5.1-ss2438.
Addresses the following bugs:
Change the fix for Bug#32440 to show bytes instead of kilobytes in
INFORMATION_SCHEMA.TABLES.DATA_FREE.
branches/5.1: Fix bug#29507 TRUNCATE shows to many rows effected
In InnoDB, the row count is only a rough estimate used by SQL
optimization. InnoDB now returns a row count of 0 for the TRUNCATE operation.
branches/5.1: Fix bug#35537 - Innodb doesn't increment handler_update
and handler_delete
Add the calls to ha_statistic_increment() in ha_innobase::delete_row()
and ha_innobase::update_row().
Fix Bug#36169 create innodb compressed table with too large row size crash
Sometimes it is possible that
row_drop_table_for_mysql(index->table_name, trx, FALSE); is invoked in
row_create_index_for_mysql() when the index object has already been
freed, so copy the table name to a safe place beforehand and use the
copy (see the sketch below).
Fix Bug#36434 ha_innodb.so is installed in the wrong directory
Replace pkglib_LTLIBRARIES with pkgplugin_LTLIBRARIES, which was
forgotten in this commit: http://lists.mysql.com/commits/40206"
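The use-after-free fix above follows a common copy-before-free pattern;
a generic, self-contained sketch (index_t, free_index and drop_table are
simplified stand-ins, not the actual InnoDB functions):

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  struct index_t {
    char *table_name;   // owned by the index object
  };

  static void free_index(index_t *index) {
    free(index->table_name);
    free(index);
  }

  static void drop_table(const char *table_name) {
    printf("dropping table %s\n", table_name);
  }

  // Creating the index failed: the index object must be freed, but the
  // table still has to be dropped by name afterwards.
  static void cleanup_failed_create(index_t *index) {
    // Unsafe pattern: free_index(index); drop_table(index->table_name);
    // would read freed memory.
    // Safe pattern: copy the name first, free the index, use the copy.
    char table_name_copy[256];
    snprintf(table_name_copy, sizeof(table_name_copy), "%s",
             index->table_name);
    free_index(index);
    drop_table(table_name_copy);
  }

  int main() {
    const char *name = "test/t1";
    index_t *index = (index_t *) malloc(sizeof(index_t));
    index->table_name = (char *) malloc(strlen(name) + 1);
    memcpy(index->table_name, name, strlen(name) + 1);
    cleanup_failed_create(index);
    return 0;
  }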
with previous rows.
A WHERE clause containing the expression:
CONCAT(empty_field1, empty_field2, ..., 'literal constant', ...)
REGEXP 'regular expression'
may return wrong matches.
The optimization of the CONCAT function has been fixed.
subselects into account
It is forbidden to use the SELECT INTO construction inside UNION statements
unless on the last SELECT of the union. The parser records whether it
has seen INTO or not when parsing a UNION statement. But if the INTO was
legally used in an outer query, an error was thrown when a UNION was seen in
a subquery. Fixed in 5.0 by remembering the nesting level of INTO tokens
and raising the error only when the INTO actually collides with the UNION.
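A hedged sketch of the nesting-level bookkeeping (simplified parser
state, not the actual sql_yacc.yy rules):

  struct ParserState {
    static constexpr int kNoInto = -1;
    int nest_level = 0;            // current subquery nesting depth
    int into_nest_level = kNoInto; // depth at which INTO was seen, if any

    void enter_subquery() { ++nest_level; }
    void leave_subquery() { --nest_level; }
    void saw_into() { into_nest_level = nest_level; }

    // Called when a UNION is parsed at the current nesting level: the INTO
    // only conflicts with a UNION of the same query expression, so an INTO
    // that belongs to an outer query no longer triggers the error.
    bool into_conflicts_with_union() const {
      return into_nest_level == nest_level;
    }
  };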
FLUSH STATUS doesn't clear the values of the global status variables.
The test case is reduced to testing session local Com-variables until
FLUSH GLOBAL STATUS is implemented.
The REPAIR TABLE ... USE_FRM query silently corrupts data of tables
with an old .FRM file version.
The mysql_upgrade client program or the REPAIR TABLE query (without
the USE_FRM clause) can't prevent this trouble, because in the
common case they don't upgrade the .FRM file to a compatible structure.
1. Evaluation of the REPAIR TABLE ... USE_FRM query has been
modified to reject such tables with the message:
"Failed repairing incompatible .FRM file".
2. REPAIR TABLE query (without USE_FRM clause) evaluation has been
modified to upgrade .FRM files to current version.
3. CHECK TABLE ... FOR UPGRADE query evaluation has been modified
to return error status when .FRM file has incompatible version.
4. The mysql_upgrade and mysqlcheck client programs call the CHECK
   TABLE ... FOR UPGRADE and REPAIR TABLE queries, so their behavior
   has been changed too: they now upgrade .FRM files with incompatible
   version numbers.
Addresses the following bugs:
Change the fix for Bug#32440 to show bytes instead of kilobytes in
INFORMATION_SCHEMA.TABLES.DATA_FREE.
branches/5.1: Fix bug#29507 TRUNCATE shows to many rows effected
In InnoDB, the row count is only a rough estimate used by SQL
optimization. InnoDB now returns a row count of 0 for the TRUNCATE operation.
branches/5.1: Fix bug#35537 - Innodb doesn't increment handler_update
and handler_delete
Add the calls to ha_statistic_increment() in ha_innobase::delete_row()
and ha_innobase::update_row().
Fix Bug#36169 create innodb compressed table with too large row size crashed
Sometimes it is possible that
row_drop_table_for_mysql(index->table_name, trx, FALSE); is invoked in
row_create_index_for_mysql() when the index object has already been
freed, so copy the table name to a safe place beforehand and use the copy.
Fix Bug#36434 ha_innodb.so is installed in the wrong directory
Replace pkglib_LTLIBRARIES with pkgplugin_LTLIBRARIES, which was
forgotten in this commit: http://lists.mysql.com/commits/40206
Enable previously disabled test cases which were tested against
the embedded build. The test cases are modified so that they require
a non-embedded build.