The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have actually been read.
The fix is to remove the erroneous 'outer_join' variable check.
The crash happens because of an incorrect max_length calculation
in the QUOTE function (due to overflow). max_length is set
to 0, which leads to an assertion failure.
The fix is to cast the expression result to a
ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
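The widen-and-clamp fix can be sketched as follows. This is a minimal illustration, not the server's actual code: the value of MAX_BLOB_WIDTH and the function name are assumptions.

```c
#include <stdint.h>

#define MAX_BLOB_WIDTH 16777216ULL  /* assumed value, for illustration only */

/* QUOTE's result is at most twice the argument length plus two quote
   characters. Doing this arithmetic in 32 bits can overflow and yield a
   max_length of 0; widening to 64 bits first and clamping avoids that. */
static uint32_t quote_max_length(uint32_t arg_max_length)
{
    uint64_t max_result = 2ULL * (uint64_t)arg_max_length + 2ULL;
    if (max_result > MAX_BLOB_WIDTH)
        max_result = MAX_BLOB_WIDTH;
    return (uint32_t)max_result;
}
```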
There was no way to repair a corrupt ARCHIVE data file
when unrecoverable data loss is inevitable.
With this fix, REPAIR ... EXTENDED attempts to restore
as many rows as possible, ignoring unrecoverable data.
Normal REPAIR is still able to repair the meta-data file
only.
Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size may crash the server.
The number of index entries was estimated incorrectly,
causing a subsequent assertion failure or server crash.
Docs note: the minimum value for myisam_sort_buffer_size
has been changed from 4 to 4096.
Invalid memory read if HANDLER ... READ NEXT is executed
after a failed (e.g. on an empty table) HANDLER ... READ FIRST.
The problem was that we attempted to perform READ NEXT
although the failed READ FIRST left no pivot position.
With this fix, READ NEXT after a failed READ FIRST is
equivalent to READ FIRST.
This bug affects MyISAM tables only.
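The fixed behaviour can be modelled with a tiny cursor sketch. The types and names below are hypothetical, not MyISAM's actual handler code:

```c
/* A positioned read cursor: READ NEXT needs a pivot from a previous
   successful read; after a failed READ FIRST there is none. */
typedef struct {
    const int *rows;
    int n;          /* number of rows             */
    int pos;        /* current position           */
    int positioned; /* do we have a valid pivot?  */
} cursor_t;

static int read_first(cursor_t *c, int *out)
{
    c->pos = 0;
    c->positioned = 0;
    if (c->n == 0)
        return -1;            /* empty table: READ FIRST fails */
    c->positioned = 1;
    *out = c->rows[0];
    return 0;
}

static int read_next(cursor_t *c, int *out)
{
    if (!c->positioned)       /* the fix: no pivot -> act as READ FIRST */
        return read_first(c, out);
    if (c->pos + 1 >= c->n)
        return -1;            /* end of table */
    *out = c->rows[++c->pos];
    return 0;
}
```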
When MyISAM writes a newly created index page, the page may
be only partially initialized. In other words, some bytes of
meaningful data plus the uninitialized tail of the page may
go into the index file.
Under certain rare circumstances these chunks of memory
may contain data that would otherwise be inaccessible
to the user, such as passwords or data from other tables.
Fixed by initializing the memory for the temporary MyISAM key
buffer to '\0'.
No test case for this fix, as it is heavily covered by
existing tests.
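The fix amounts to zero-filling the page buffer before key data is packed into it. A minimal sketch with illustrative names follows; MyISAM's real buffer management is more involved:

```c
#include <stdlib.h>
#include <string.h>

/* Allocate a key page buffer whose unwritten tail is guaranteed to be
   '\0' rather than leftover heap bytes, so flushing the whole page to
   the index file cannot leak stale memory contents. */
static unsigned char *alloc_key_page(size_t page_size)
{
    return calloc(1, page_size);  /* same effect as malloc + memset(p, 0, n) */
}
```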
Detailed revision comments:
r6822 | vasil | 2010-03-15 10:17:31 +0200 (Mon, 15 Mar 2010) | 12 lines
branches/5.1:
Typecast to silence a compiler warning:
row/row0sel.c: 4548
C4244: '=' : conversion from 'float' to 'ib_ulonglong', possible loss of data
row/row0sel.c: 4553
C4244: '=' : conversion from 'double' to 'ib_ulonglong', possible loss of data
Reported by: Jonas Oreland <Jonas.Oreland@Sun.COM>
Discussed with: Sunny Bains <sunny.bains@oracle.com>
Detailed revision comments:
r6785 | vasil | 2010-03-10 09:04:38 +0200 (Wed, 10 Mar 2010) | 11 lines
branches/5.1:
Add the missing --reap statements in innodb_bug38231.test. MySQL probably
started enforcing the presence of those recently, and the test started failing like:
main.innodb_bug38231 [ fail ]
Test ended at 2010-03-10 08:48:32
CURRENT_TEST: main.innodb_bug38231
mysqltest: At line 49: Cannot run query on connection between send and reap
r6788 | vasil | 2010-03-10 10:53:21 +0200 (Wed, 10 Mar 2010) | 8 lines
branches/5.1:
In innodb_bug38231.test: replace the fragile sleep 0.2 that depends on timing
with a more robust condition which waits for the TRUNCATE and LOCK commands
to appear in information_schema.processlist. This could also break if there
are other sessions executing the same SQL commands, but there are none during
the execution of the mysql test.
Detailed revision comments:
r6783 | jyang | 2010-03-09 17:54:14 +0200 (Tue, 09 Mar 2010) | 9 lines
branches/5.1: Fix bug #47621 "MySQL and InnoDB data dictionaries
will become out of sync when renaming columns". MySQL does not
provide the new column name information to the storage engine to
update the system table. To avoid a column name mismatch, we shall
just request a table copy for now.
rb://246 approved by Marko.
> ------------------------------------------------------------
> revno: 3345.2.1
> revision-id: joro@sun.com-20100218084815-53nb9oonzd7r4gmj
> parent: sergey.glukhov@sun.com-20100217121457-jqx19u6x387rgk7e
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: fix-5.1-bugteam
> timestamp: Thu 2010-02-18 10:48:15 +0200
> message:
> Bug #51049: main.bug39022 fails in mysql-trunk-merge
>
> Fixed the test to behave correctly with ps-protocol
> and binlog format row.
> ------------------------------------------------------------
> revno: 3333.1.6
> revision-id: joro@sun.com-20100129093628-sze9cv0neu0xbabm
> parent: davi.arnaut@sun.com-20100128215140-x0w6fe2de0b28opp
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B49552-5.1-bugteam
> timestamp: Fri 2010-01-29 11:36:28 +0200
> message:
> Bug #49552 : sql_buffer_result cause crash + not found records
> in multitable delete/subquery
>
> SQL_BUFFER_RESULT should not have an effect on non-SELECT
> statements according to our documentation.
> Fixed by not passing it through to multi-table DELETE (similarly
> to how it's done for multi-table UPDATE).
> ------------------------------------------------------------
> revno: 3333.1.31
> revision-id: joro@sun.com-20091223104518-o29t0i3thgs7wgm1
> parent: sergey.glukhov@sun.com-20100205093946-bx1hsljxlm12h7uf
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B39022-5.1-bugteam
> timestamp: Wed 2009-12-23 12:45:18 +0200
> message:
> Bug #39022: Mysql randomly crashing in lock_sec_rec_cons_read_sees
>
> flush_cached_records() was not correctly checking for errors after calling
> Item::val_xxx() methods. The expressions may contain subqueries
> or stored procedures that cause errors that should stop the statement.
> Fixed by correctly checking for errors and propagating them up the call stack.
> ------------------------------------------------------------
> revno: 3358
> revision-id: sergey.glukhov@sun.com-20100226113925-mxwn1hfxe3l8khc4
> parent: gshchepa@mysql.com-20100225191311-1x71dkk0h5e1alvx
> committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
> branch nick: mysql-5.1-bugteam
> timestamp: Fri 2010-02-26 15:39:25 +0400
> message:
> Bug#50995 Having clause on subquery result produces incorrect results.
> The problem is that cond->fix_fields(thd, 0) breaks the
> condition (cuts off 'having'). The reason is that a
> NULL-valued Item pointer is present in the middle of the
> Item list, and it breaks the Item processing loop.
If the columns listed in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement did not match, a debug assertion would
trigger in the cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
> ------------------------------------------------------------
> revno: 3329.2.3
> revision-id: svoj@sun.com-20100122095702-e18xzhmyll1e5s25
> parent: svoj@sun.com-20100122095632-j8ssd5csnlzp1zpf
> committer: Sergey Vojtovich <svoj@sun.com>
> branch nick: mysql-5.1-bugteam
> timestamp: Fri 2010-01-22 13:57:02 +0400
> message:
> Applying InnoDB snapshot, fixes BUG#46193.
>
> Detailed revision comments:
>
> r6424 | marko | 2010-01-12 12:22:19 +0200 (Tue, 12 Jan 2010) | 16 lines
> branches/5.1: In innobase_initialize_autoinc(), do not attempt to read
> the maximum auto-increment value from the table if
> innodb_force_recovery is set to at least 4, so that writes are
> disabled. (Bug #46193)
>
> innobase_get_int_col_max_value(): Move the function definition before
> ha_innobase::innobase_initialize_autoinc(), because that function now
> calls this function.
>
> ha_innobase::innobase_initialize_autoinc(): Change the return type to
> void. Do not attempt to read the maximum auto-increment value from
> the table if innodb_force_recovery is set to at least 4. Issue
> ER_AUTOINC_READ_FAILED to the client when the auto-increment value
> cannot be read.
>
> rb://144 by Sunny, revised by Marko
> ------------------------------------------------------------
> revno: 3324
> revision-id: joro@sun.com-20091223151122-ada73up1yydh0emt
> parent: joro@sun.com-20100119124841-38vva51cuq3if7dc
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B49512-5.1-bugteam
> timestamp: Wed 2009-12-23 17:11:22 +0200
> message:
> Bug #49512 : subquery with aggregate function crash
> subselect_single_select_engine::exec()
>
> When a subquery didn't need to be evaluated because
> it returns only aggregate functions, and these aggregates
> can be calculated from the metadata about the table, the
> optimizer was not updating all the relevant members of the
> JOIN structure to reflect that this is a constant query.
> This caused problems for the enclosing subquery
> ('<> SOME' in the test case above) when it tried to read
> some data about the tables.
>
> Fixed by setting const_tables to the number of tables
> when the SELECT is optimized away.
> ------------------------------------------------------------
> revno: 3315.1.1
> revision-id: mattias.jonsson@sun.com-20100118164918-afjah8vmey4ya4ox
> parent: joro@sun.com-20100115090646-0g4tjrmqf20axlpv
> committer: Mattias Jonsson <mattias.jonsson@sun.com>
> branch nick: b47343-51-bt
> timestamp: Mon 2010-01-18 17:49:18 +0100
> message:
> Bug#47343: InnoDB fails to clean-up after lock wait timeout on
> REORGANIZE PARTITION
>
> There were several problems which led to this, all
> related to bad error handling.
>
> 1) There were several bugs preventing the ddl-log from
> being used to clean up created files on error.
>
> 2) The error handling after copying the partition rows did
> not close and unlock the tables, resulting in deletion of
> partitions which were in use, which led InnoDB to put the
> partitions to drop into a background queue.
> ------------------------------------------------------------
> revno: 3325
> revision-id: mattias.jonsson@sun.com-20100119160251-0xvcgzw0y08xwk6r
> parent: joro@sun.com-20091223151122-ada73up1yydh0emt
> committer: Mattias Jonsson <mattias.jonsson@sun.com>
> branch nick: topush-51-bugteam
> timestamp: Tue 2010-01-19 17:02:51 +0100
> message:
> post-push patch for bug#47343.
>
> Missing ha_rnd_end in copy_partitions, found due to a
> DBUG_ASSERT in mysql-pe
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting
which is performed before greedy_search starts.
In our case the table which really has no
dependencies should be put at the top of the list,
but this does not happen because the inner tables
have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from
the condition that checks whether table dependencies
should be updated.
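The mask exclusion can be illustrated like this. It is only a sketch: the bit position and names are assumptions, not the server's actual definitions:

```c
#include <stdint.h>

#define RAND_TABLE_BIT (1ULL << 63)  /* assumed pseudo-table bit */

/* A table should only be treated as dependent (and sorted after other
   tables) if it depends on real tables; the RAND_TABLE_BIT
   pseudo-dependency alone must not count. */
static int has_real_dependencies(uint64_t dep_map)
{
    return (dep_map & ~RAND_TABLE_BIT) != 0;
}
```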
> ------------------------------------------------------------
> revno: 3302.1.1
> revision-id: kristofer.pettersson@sun.com-20100113113900-o3m4jcm4l6qzum57
> parent: dao-gang.qu@sun.com-20091231040419-i5dnn06ahs256qcy
> committer: Kristofer Pettersson <kristofer.pettersson@sun.com>
> branch nick: mysql-5.1-bugteam
> timestamp: Wed 2010-01-13 12:39:00 +0100
> message:
> Bug#33982 debug assertion and crash reloading grant tables after sighup or kill
>
> In certain rare cases when a process was interrupted
> during a FLUSH PRIVILEGES operation the diagnostic
> area would be set to an error state but the function
> responsible for the operation would still signal
> success. This would lead to a debug assertion error
> later on when the server would attempt to reset the
> DA before sending the error message.
>
> This patch fixes the issue by assuring that
> reload_acl_and_cache() always fails if an error
> condition is raised.
>
> The second issue was that a KILL could cause
> a console error message which referred to a DA
> state without first making sure that such a
> state existed.
>
> This patch fixes this issue in two different
> places by first checking the DA state before
> fetching the error message.
>
>
col equal to itself!
There's no need to copy the value of a field into itself.
While generally harmless (except for some performance penalty),
it may be dangerous when the copy code doesn't expect this.
Fixed by checking whether the source field is the same as the
destination field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
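The guard can be sketched as below. This is illustrative only; MySQL's Field copy code also handles null flags and type conversion:

```c
#include <string.h>

/* Copy a field value, skipping the copy entirely when source and
   destination are the same buffer: copying a column onto itself is at
   best wasted work and at worst undefined behaviour for memcpy(). */
static void copy_field_value(char *dst, const char *src, size_t len)
{
    if (dst == src)
        return;               /* same field: nothing to do */
    memcpy(dst, src, len);
}
```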
The failure was caused by an apparent flaw: a pointer to an
uninitialized buffer was passed to DBUG_PRINT in
Protocol_text::store().
Fixed by splitting the print-out into two branches:
one for the zero-length case of the problematic argument,
and one for the rest.
function on windows
When making sure that the directory path ends with a
slash/backslash, we need to check the buffer length and
trim at the appropriate location so that we don't write
past the end of the buffer.
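A bounds-checked version of that normalization might look like the sketch below, with assumed names rather than the server's actual helper:

```c
#include <stddef.h>
#include <string.h>

/* Append a path separator only if it is missing AND there is room for
   both the separator and the terminating NUL; returns 0 on success,
   -1 when appending would overflow the buffer. */
static int ensure_trailing_slash(char *buf, size_t bufsize)
{
    size_t len = strlen(buf);
    if (len > 0 && (buf[len - 1] == '/' || buf[len - 1] == '\\'))
        return 0;                  /* already terminated */
    if (len + 2 > bufsize)
        return -1;                 /* would write past the end */
    buf[len] = '/';
    buf[len + 1] = '\0';
    return 0;
}
```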
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL,
so UPDATE ... SET ... NULL on NOT NULL fields behaved
differently after a trigger.
Now the code distinguishes between IGNORE and ERROR_FOR_NULL
and saves/restores the check-field options.
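The save/restore pattern can be sketched as follows. The structure and names are hypothetical; in the real server this mode lives on the THD:

```c
enum check_field_mode {
    CHECK_FIELD_IGNORE,
    CHECK_FIELD_WARN,
    CHECK_FIELD_ERROR_FOR_NULL
};

struct session { enum check_field_mode check_fields; };

static enum check_field_mode observed;  /* records the mode a trigger saw */
static void sample_trigger(struct session *s) { observed = s->check_fields; }

/* Run a trigger under its own check-field mode, then restore the
   caller's mode so the trigger cannot change how the outer statement
   treats NULLs on NOT NULL columns. */
static void run_trigger_with_mode(struct session *s,
                                  enum check_field_mode trigger_mode,
                                  void (*trigger_body)(struct session *))
{
    enum check_field_mode saved = s->check_fields;
    s->check_fields = trigger_mode;
    trigger_body(s);
    s->check_fields = saved;   /* the fix: restore, don't leak */
}
```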
myisam tables
Queries following TRUNCATE of a partitioned MyISAM table
may crash the server if myisam_use_mmap is true.
Internally this is a MyISAM bug, but it is limited to
partitioned tables, because MyISAM doesn't use the
::delete_all_rows() method for TRUNCATE and goes via table
recreation instead.
MyISAM didn't properly fall back to non-mmapped I/O after
an mmap() failure. This was not repeatable on Linux before,
likely because (quote from man mmap):
SUSv3 specifies that mmap() should fail if length is 0.
However, in kernels before 2.6.12, mmap() succeeded in
this case: no mapping was created and the call returned
addr. Since kernel 2.6.12, mmap() fails with the error
EINVAL for this case.
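The fallback logic can be sketched as below. This is a POSIX-only sketch with assumed names; MyISAM's real code tracks the mmap state per table:

```c
#include <sys/mman.h>
#include <stddef.h>

/* Try to memory-map a file region; on any failure (including len == 0,
   which modern kernels reject with EINVAL) return NULL so the caller
   falls back to plain read()/write() I/O instead of crashing later. */
static void *try_mmap(int fd, size_t len)
{
    void *p;
    if (len == 0)
        return NULL;                       /* nothing to map */
    p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    return (p == MAP_FAILED) ? NULL : p;
}
```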
Problem: the caseup_multiply and casedn_multiply members
were not initialized for a dynamic collation, so the
UPPER() and LOWER() functions returned empty strings.
Fix: initialize the members properly.
Adding tests:
mysql-test/r/ctype_ldml.result
mysql-test/t/ctype_ldml.test
Applying the fix:
mysys/charset.c
A failed REVOKE statement is logged with error=0, thus causing
the slave to stop. The slave should not stop, as this was an
expected error. Given that the execution failed on the master as
well, the error code should be logged so that the slave can
replay the statement, get an error, and compare it with the
master's execution outcome. If the errors match, the slave can
proceed with replication, as the error it got when replaying
the statement was expected.
In this particular case, the bug surfaces because the error code
is pushed to the THD diagnostics area after the event is written
to the binary log. Therefore, the event would be logged with the
THD diagnostics area clean, and hence would not contain the
correct error code.
We fix this by moving the error reporting ahead of the call to
the routine that writes the event to the binary log.
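The ordering fix can be modelled minimally as below. The structures are hypothetical stand-ins for the THD diagnostics area and the binlog event:

```c
/* The binlog event snapshots whatever error code is currently in the
   session's diagnostics area at write time. */
struct session   { int da_error_code; };
struct log_event { int expected_error; };

static void write_event(struct session *s, struct log_event *ev)
{
    ev->expected_error = s->da_error_code;
}

/* Fixed order: report the error first, then write the event, so the
   logged event carries the real error code the slave must match. */
static void log_failed_statement(struct session *s, struct log_event *ev,
                                 int err)
{
    s->da_error_code = err;   /* report first (my_error() equivalent) */
    write_event(s, ev);       /* event now carries the error, not 0 */
}
```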