checksum)"
The problem was that the checksum of the GEOMETRY type used memory
addresses in the computation, making it non-repeatable and thus useless.
(This patch is a backport from 6.0 branch)
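A minimal self-contained sketch of the defect and the repair (illustrative
names, not the actual server code): for BLOB-like fields such as GEOMETRY the
row buffer stores a pointer to the value, so feeding the raw row bytes into
the checksum hashes the pointer itself; the fix is to hash the bytes the
pointer refers to.

    #include <cstddef>
    #include <cstdint>

    // Toy CRC accumulator standing in for the real checksum routine.
    static uint32_t crc_accumulate(uint32_t crc, const unsigned char *p,
                                   std::size_t len) {
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; ++i)
                crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0u);
        }
        return crc;
    }

    struct GeometryField {
        const unsigned char *data;  // heap pointer stored inside the row
        std::size_t length;
    };

    // Broken: hashes the in-row pointer, which differs between runs.
    uint32_t checksum_broken(uint32_t crc, const GeometryField &f) {
        return crc_accumulate(
            crc, reinterpret_cast<const unsigned char *>(&f.data),
            sizeof f.data);
    }

    // Fixed: hashes the referenced bytes, which is repeatable.
    uint32_t checksum_fixed(uint32_t crc, const GeometryField &f) {
        return crc_accumulate(crc, f.data, f.length);
    }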
Even though we copy the value of the old table's row type,
we don't always have to mark the row type as being explicitly specified.
InnoDB uses this flag to check whether it can do a fast ALTER TABLE
or not.
Fixed by flagging the presence of row_type
only when it has actually changed.
Added a test case for bug 39200.
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and replaced it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because it's not
set in all cases and doesn't contradict the depended_from check).
Fixed by restoring the old condition as a complement to the
new one.
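A self-contained sketch of the combined locality test described above
(simplified stand-ins for the server's Item and table_map types; not the
actual code):

    #include <cstdint>

    typedef uint64_t table_map;
    static const table_map OUTER_REF_TABLE_BIT = 1ULL << 63;  // stand-in

    struct SelectLex;              // enclosing SELECT, simplified
    struct ItemFieldModel {
        table_map used_tables;     // models Item::used_tables()
        SelectLex *depended_from;  // set by name resolution for some
                                   // outer references only
    };

    // A field is "local" only if neither signal marks it as an outer
    // reference: each check alone misses cases the other catches.
    bool is_local_field(const ItemFieldModel &f) {
        return f.depended_from == nullptr &&            // newer check
               !(f.used_tables & OUTER_REF_TABLE_BIT);  // restored check
    }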
1. Fixes BUG#44030 - Error: (1500) Couldn't read the MAX(ID) autoinc value
from the index (PRIMARY)
2. Disables the innodb-autoinc test for innodb plugin temporarily.
The test case for this bug has a different result file for the InnoDB plugin.
It should be added to the InnoDB suite with a different result file.
Detailed revision comments:
r5243 | sunny | 2009-06-04 03:17:14 +0300 (Thu, 04 Jun 2009) | 14 lines
branches/5.1: When the InnoDB and MySQL data dictionaries go out of sync, before
the bug fix we would assert on missing autoinc columns. With this fix we allow
MySQL to open the table but set the next autoinc value for the column to the
MAX value. This effectively disables the next value generation. INSERTs will
fail with a generic AUTOINC failure. However, the user should be able to
read/dump the table, set the column values explicitly, use ALTER TABLE to
set the next autoinc value and/or sync the two data dictionaries to resume
normal operations.
Fix Bug#44030 Error: (1500) Couldn't read the MAX(ID) autoinc value from the
index (PRIMARY)
rb://118
r5252 | sunny | 2009-06-04 10:16:24 +0300 (Thu, 04 Jun 2009) | 2 lines
branches/5.1: The version of the result file checked in was broken in r5243.
r5259 | vasil | 2009-06-05 10:29:16 +0300 (Fri, 05 Jun 2009) | 7 lines
branches/5.1:
Remove the word "Error" from the printout because the mysqltest suite
interprets it as an error and thus the innodb-autoinc test fails.
Approved by: Sunny (via IM)
r5466 | vasil | 2009-07-02 10:46:45 +0300 (Thu, 02 Jul 2009) | 6 lines
branches/5.1:
Adjust the failing innodb-autoinc test to conform to the latest behavior
of the MySQL code. The idea and the comment in innodb-autoinc.test come
from Sunny.
The problem is that the argument buffer can be used as the result buffer,
which leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
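A simplified, self-contained model of the aliasing problem and the fix
(std::string stands in for the server's String class; names are
illustrative):

    #include <string>

    struct ArgModel {
        bool is_const;        // constant items keep their value cached
        std::string *buffer;  // where this argument's value lives
    };

    // The caller's buffer "str" may be the very buffer holding a constant
    // first argument; writing the result there would clobber the argument
    // for the next evaluation. Use it only when no aliasing is possible.
    std::string *eval_string_func(const ArgModel &first,
                                  const std::string &second,
                                  std::string *str,   // caller's buffer
                                  std::string *tmp) { // function-owned
        std::string *result = first.is_const ? tmp : str;
        *result = *first.buffer + second;
        return result;
    }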
The Archive engine returns wrong values for the average record length
and the max data length.
With this fix they are calculated as follows (see the sketch below):
- max data length is 2^63 where large files are supported
  and INT_MAX32 where they are not;
- average record length is data length / records in the data file.
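A self-contained sketch of the corrected statistics (the real code lives in
the Archive handler; names here are illustrative):

    #include <cstdint>
    #include <limits>

    struct ArchiveStatsModel {
        uint64_t data_file_length;  // bytes in the archive data file
        uint64_t records;           // rows in the data file
    };

    uint64_t max_data_file_length(bool large_files_supported) {
        // 2^63 with large file support, INT_MAX32 without.
        return large_files_supported
                   ? (uint64_t{1} << 63)
                   : static_cast<uint64_t>(
                         std::numeric_limits<int32_t>::max());
    }

    uint64_t mean_record_length(const ArchiveStatsModel &s) {
        return s.records ? s.data_file_length / s.records : 0;
    }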
Creating a temporary InnoDB table fails on case-insensitive
filesystems when lower_case_table_names is 2 (e.g. OS X)
and the temporary directory path contains upper-case letters.
The problem was that the tmpdir prefix was converted to lower
case when the table was created, but was passed as is when the
table was opened.
Fixed by leaving the tmpdir prefix part intact.
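A minimal model of the corrected path handling (illustrative, not the actual
server code): only the table-name part is lowercased; the tmpdir prefix is
preserved byte-for-byte so the create and open paths agree.

    #include <algorithm>
    #include <cctype>
    #include <string>

    std::string build_tmp_table_path(const std::string &tmpdir_prefix,
                                     std::string table_part) {
        // Lowercase only the table-name component...
        std::transform(table_part.begin(), table_part.end(),
                       table_part.begin(), [](unsigned char c) {
                           return static_cast<char>(std::tolower(c));
                       });
        // ...and keep the tmpdir prefix exactly as configured.
        return tmpdir_prefix + "/" + table_part;
    }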
The parser rule for expressions in a UDF parameter list contains
two hacks:
First, the parser input stream is read verbatim, bypassing
the lexer.
Second, the Item::name field is overwritten. If the argument to a
UDF was a field, the field's name as seen by name resolution was
overwritten this way.
If the field name was quoted or escaped, it would appear as e.g. "`field`".
Fixed by not overwriting field names.
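A simplified model of the fix (illustrative types; not the actual parser
action): the verbatim source text becomes the item's name only for non-field
items, so a field's resolution name is never replaced by its quoted or
escaped source form.

    #include <string>

    struct ItemModel {
        enum Type { FIELD_ITEM, OTHER_ITEM } type;
        std::string name;  // the name seen by name resolution
    };

    void set_udf_arg_name(ItemModel &item, const std::string &verbatim) {
        if (item.type != ItemModel::FIELD_ITEM)
            item.name = verbatim;  // previously also hit fields, turning
                                   // "field" into e.g. "`field`"
    }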
This test case uses mysqlbinlog to dump the content of master-bin.000001,
but the content of master-bin.000001 is not what this test needs.
MTR runs many test cases on one server, so when this test starts, the
current binlog file might not be master-bin.000001, or other events may
already have been written by earlier tests.
A 'RESET MASTER' command must be called at the beginning; it ensures that
the binlog of this test is written to master-bin.000001 correctly.
Three other tests have the same problem; they were fixed together:
mysqlbinlog-cp932
binlog_incident
binlog_tmp_table
The outer 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by just starting over without advancing
to the next record, which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
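A self-contained model of the loop shape (illustrative; the real code is in
sql_select.cc): the cursor now advances on every iteration, so a deleted
record can no longer pin the scan in place.

    #include <cstdio>
    #include <vector>

    static const int HA_ERR_RECORD_DELETED = 1;
    static const int HA_ERR_END_OF_FILE = 2;

    struct RowModel { bool deleted; int value; };

    int read_row(const std::vector<RowModel> &rows, std::size_t pos,
                 int *out) {
        if (pos >= rows.size()) return HA_ERR_END_OF_FILE;
        if (rows[pos].deleted)  return HA_ERR_RECORD_DELETED;
        *out = rows[pos].value;
        return 0;
    }

    void dedup_scan(const std::vector<RowModel> &rows) {
        std::size_t pos = 0;
        for (;;) {
            int value;
            int error = read_row(rows, pos, &value);
            if (error == HA_ERR_END_OF_FILE) break;
            ++pos;  // the fix: advance even on HA_ERR_RECORD_DELETED;
                    // the old code continued without advancing and spun
            if (error == HA_ERR_RECORD_DELETED) continue;
            std::printf("process row %d\n", value);
        }
    }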
on subquery inside a SP
Problem: a repeated call of a stored procedure containing an incorrect
query with a subselect may lead to a failed ASSERT().
Fix: set the proper subselect state in case an error occurs during
subquery transformation.
A SELECT with a join (not only a self-join) from an archive table may
return an incomplete result set when the result set size exceeds the
join buffer size.
The problem was that the archive row counter was initialized too
early, when the ha_archive::info() method was called. Later,
when the optimizer exceeds the join buffer, it attempts to reuse the
handler without calling ha_archive::info() again (which is
correct).
Fixed by moving row counter initialization from
ha_archive::info() to ha_archive::rnd_init().
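A minimal model of the corrected lifecycle (illustrative names): the scan
row counter is snapshotted in rnd_init(), which runs before every scan,
rather than in info(), which the optimizer may legitimately skip.

    #include <cstdint>

    struct ArchiveHandlerModel {
        uint64_t committed_rows = 0;  // rows durable in the archive file
        uint64_t scan_rows = 0;       // rows the current scan may return

        void info() {
            // statistics refresh only; no longer initializes scan_rows
        }
        void rnd_init() {
            scan_rows = committed_rows;  // snapshot taken per scan
        }
    };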
name as existing view
When trying to create a table with the same name as an existing view
with a join, the mysql server crashes.
The problem is that when CREATE TABLE is issued with the same name as a
view, while verifying against the existing tables, we assume that a base
table object is always created.
In this case, since it is a view over multiple tables, we don't have the
mysql derived table object.
Fixed the logic which checks if there is an existing table so that it
does not assume that a table object has been created when the base table
is a view over multiple tables.
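A simplified model of the corrected check (illustrative; not the actual
sql_base.cc code): an entry found under the requested name may be a
multi-table view for which no base-table object exists, so the pointer is
tested before use.

    struct TableObjModel { int dummy; };  // engine-level table, simplified

    struct TableListEntryModel {
        bool is_view;
        TableObjModel *table;  // NULL for a view over multiple tables
    };

    bool name_already_taken(const TableListEntryModel &found) {
        // Before the fix, found.table was dereferenced unconditionally
        // and crashed for multi-table views.
        if (found.is_view && found.table == nullptr)
            return true;  // name is taken by a view; report, don't crash
        return found.table != nullptr;
    }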
function,file sql_base.cc
When uncacheable queries are written to a temp table, the optimizer must
preserve the original JOIN structure, because it reuses the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable subqueries.
But top-level queries can also benefit from this mechanism, especially if
they use index access and need a reset.
Fixed by not limiting the saving of the JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
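A compact model of the widened condition (illustrative): keeping the
original JOIN for re-reading the temp table no longer requires the query
block to be a subquery.

    struct QueryBlockModel {
        bool uncacheable;  // e.g. uses non-deterministic elements
        bool is_subquery;
    };

    bool must_save_join_structure(const QueryBlockModel &qb) {
        // before the fix: qb.uncacheable && qb.is_subquery
        return qb.uncacheable;  // top-level queries now benefit too
    }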
extraneous file
An online/fast ALTER TABLE of a partitioned table may leave a
temporary file in the database directory.
Fixed by removing an unnecessary call to
handler::ha_create_handler_files(), which was creating the
partitioning definition file.
a "if"
Bug #41913 mysqltest cannot source files from if inside while
Some commands require additional processing which only works the first time.
Keep the content for write_file or append_file with the st_command struct.
Add tests for those cases to mysqltest.test.
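A minimal model of the change (illustrative; the field names here are
assumptions, not the actual st_command layout): the inline payload of
write_file/append_file is read from the input stream only once and cached
on the command object so re-execution inside a while loop sees it again.

    #include <string>

    struct st_command_model {
        std::string query;    // the command line itself
        std::string content;  // payload captured on first execution
    };

    const std::string &command_payload(st_command_model &cmd,
                                       const std::string &fresh_input) {
        if (cmd.content.empty())
            cmd.content = fresh_input;  // first run: remember the payload
        return cmd.content;             // later runs reuse it
    }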
results in server crash
check_group_min_max_predicates() assumed the input condition
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM.
Since a condition of the form "field" is also a valid condition
equivalent to "field <> 0", using such a condition in a query
where the loose index scan was chosen resulted in a debug
assertion failure.
Fixed by handling conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
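A self-contained sketch of the widened dispatch (simplified enum; not the
actual optimizer code): a bare column reference is a legal predicate meaning
"field <> 0", so FIELD_ITEM is now accepted alongside the three kinds
previously assumed.

    #include <cassert>

    enum ItemTypeModel { COND_ITEM, SUBSELECT_ITEM, FUNC_ITEM, FIELD_ITEM };

    bool check_predicate(ItemTypeModel t) {
        switch (t) {
        case COND_ITEM:       // AND/OR tree: recurse into arguments
        case SUBSELECT_ITEM:  // subquery predicate
        case FUNC_ITEM:       // comparison or other function
            return true;
        case FIELD_ITEM:      // bare "field", same as "field <> 0"
            return true;      // added by the fix; formerly asserted
        }
        assert(false);
        return false;
    }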
field references
This error requires a combination of factors:
1. An "impossible WHERE" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top-level sargable WHERE predicate
When JOIN::optimize() detects an "impossible WHERE" it bails out
without doing the rest of the work and initializations. In particular,
it does not call make_join_statistics(), which fills in various
structures for each referenced table.
When processing the result of the "impossible WHERE", the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However, if this SELECT list happens to contain a correlated
subquery, this subquery is evaluated in normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized due to the "impossible WHERE" at this level,
we get a NULL pointer dereference.
Fixed by making a better condition for discovering whether a field is
"local" to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case, since
for outer references to constant tables Item_field::used_tables()
returns 0 regardless of whether the field reference is from the
local SELECT or not.