There could be memory leaks if an ALTER ... PARTITION command fails.
The problem was that the list of items to free was not set in
the partition info structure when the fix_partition_func call failed
during ALTER ... PARTITION.
Solved by always setting the list in the partition info struct.
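A minimal sketch of the ownership pattern, with simplified stand-in types (the real structures are partition_info and the item list handled around fix_partition_func); the point is that the list is handed to the owning struct unconditionally, so the failure path frees it too:

  #include <memory>
  #include <string>
  #include <utility>
  #include <vector>

  // Stand-ins for the server structures; all names here are illustrative.
  struct PartitionInfo {
    std::vector<std::unique_ptr<std::string>> item_free_list;  // owned items
  };

  static bool fix_partition_func_stub(PartitionInfo &) {
    return true;  // pretend the call failed
  }

  bool prep_alter_partition(PartitionInfo &part_info) {
    std::vector<std::unique_ptr<std::string>> items;
    items.emplace_back(new std::string("item allocated while parsing"));

    bool error = fix_partition_func_stub(part_info);

    // Hand the list to the partition info unconditionally, so the items
    // are freed together with it even when the call above failed.
    part_info.item_free_list = std::move(items);
    return error;
  }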
When generating a new binary log file name, if find_uniq_filename
returns an error, this error is not propagated upwards, and execution
does not report the error to the user (although an entry in the error
log is generated).
Additionally, some more errors were ignored in new_file_impl:
- when writing the rotate event
- when reopening the index and binary log file
This patch addresses this by propagating the error up the
execution stack. Furthermore, when rotation of the binary log
fails, an incident event is written, because some changes for a
given statement may not have been properly logged. For example,
in SBR, a LOAD DATA INFILE statement requires more than one event
to be logged; should rotation fail while logging part of the
LOAD DATA events, the logged data would become inconsistent with
the data in the storage engine.
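A rough sketch of the control flow described above, with placeholder functions standing in for the real new_file_impl steps (rotate-event write, index/log reopen, incident event); this is not the server API, just the error-propagation pattern:

  enum log_result { LOG_OK = 0, LOG_WRITE_FAILED, LOG_REOPEN_FAILED };

  // Placeholder steps; in the server these are the rotate-event write,
  // the reopen of the index and binary log files, and the incident event.
  static log_result write_rotate_event()      { return LOG_OK; }
  static log_result reopen_index_and_binlog() { return LOG_OK; }
  static void       write_incident_event()    {}

  log_result new_file_sketch() {
    log_result err = write_rotate_event();
    if (err != LOG_OK) {
      // Rotation failed: part of the current statement (e.g. one of the
      // several events LOAD DATA INFILE produces under SBR) may be missing
      // from the log, so mark the log with an incident event.
      write_incident_event();
      return err;                      // propagate instead of ignoring it
    }
    err = reopen_index_and_binlog();
    if (err != LOG_OK)
      return err;                      // propagate this failure as well
    return LOG_OK;
  }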
Code cleanup after changes for Bug 56628. The general approach for
InnoDB is to make a reference to each enum value whenever it is used in a
switch statement. In addition, no default case should be used for switch
statements on enum types. This ensures that if there is ever any change
in the enum values, the switch will need to change to reflect it, since a
compiler warning will occur. In this case, the enum row_type is declared
in handler.h and could be changed for another storage engine. If so, a
warning will occur in the InnoDB build.
Other changes:
* This patch uses two macros to help consolidate warning messages that
need to occur twice in the single switch on row_format.
* row_format is used as the variable name to distinguish it from the enum
type.
* Corrected the function declaration format.
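For illustration, the convention looks roughly like this (a trimmed enum for the example, not the full row_type from handler.h):

  // Every value is referenced and there is no default case, so adding a new
  // value to the enum later makes the compiler warn about this switch.
  enum row_type { ROW_TYPE_DEFAULT, ROW_TYPE_COMPACT, ROW_TYPE_REDUNDANT };

  const char *row_format_name(row_type row_format) {
    switch (row_format) {
      case ROW_TYPE_DEFAULT:   return "DEFAULT";
      case ROW_TYPE_COMPACT:   return "COMPACT";
      case ROW_TYPE_REDUNDANT: return "REDUNDANT";
      // no default: an unhandled value triggers -Wswitch in the InnoDB build
    }
    return "UNKNOWN";  // unreachable while all enum values are handled
  }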
InnoDB AUTOINC code expects the locks to be released in strict reverse order
at the end of the statement. However, nested stored procedures and
partitioned tables break this rule. We now allow the locks to be deleted
from the trx->autoinc_locks vector in any order, but optimise for the
common (old) case.
rb://441 Approved by Marko Makela
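Conceptually, the deletion now looks like the following sketch (std::vector standing in for the InnoDB vector in trx->autoinc_locks; names are illustrative):

  #include <algorithm>
  #include <vector>

  struct AutoincLock { int table_id; };

  // Release 'lock' from the transaction's vector; O(1) when locks come back
  // in strict reverse order, linear search otherwise.
  void release_autoinc_lock(std::vector<AutoincLock *> &autoinc_locks,
                            AutoincLock *lock) {
    if (!autoinc_locks.empty() && autoinc_locks.back() == lock) {
      autoinc_locks.pop_back();  // common (old) case: strict LIFO release
      return;
    }
    // Nested stored procedures and partitioned tables can release out of
    // order, so fall back to finding and erasing the lock wherever it is.
    std::vector<AutoincLock *>::iterator it =
        std::find(autoinc_locks.begin(), autoinc_locks.end(), lock);
    if (it != autoinc_locks.end())
      autoinc_locks.erase(it);
  }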
When using the BINLOG statement to execute rows log events, the session
variables foreign_key_checks and unique_checks are changed temporarily.
Each rows log event carries its own session environment, so its
foreign_key_checks and unique_checks settings can differ from those of the
session executing the BINLOG statement. However, these variables were not
restored correctly after the BINLOG statement, which could cause the
following statements to fail or to generate unexpected data.
In this patch, code is added to back up and restore these two variables,
so the BINLOG statement no longer affects the current session's variables.
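The backup/restore follows the usual save-and-restore pattern; a simplified sketch (plain bools standing in for the real session variables, which the server keeps elsewhere) might look like:

  struct SessionVars {
    bool foreign_key_checks;
    bool unique_checks;
  };

  // RAII guard: remembers the caller's settings and puts them back when the
  // BINLOG statement finishes, whatever the rows log event changed them to.
  class BinlogVarGuard {
    SessionVars &vars_;
    bool saved_fk_;
    bool saved_unique_;

   public:
    explicit BinlogVarGuard(SessionVars &vars)
        : vars_(vars),
          saved_fk_(vars.foreign_key_checks),
          saved_unique_(vars.unique_checks) {}

    ~BinlogVarGuard() {
      vars_.foreign_key_checks = saved_fk_;
      vars_.unique_checks = saved_unique_;
    }
  };

  void execute_binlog_statement(SessionVars &session) {
    BinlogVarGuard guard(session);  // back up the current values
    // ... apply the rows log events; they may flip both variables ...
  }  // guard restores the session's own settings here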
win x86 debug_max
The Windows MTR run exhibited a different test execution
ordering (because on these platforms MTR is invoked with
--parallel > 1). This uncovered a bug in the aforementioned
test case, which is triggered by the following conditions:
1. server is not restarted between two different tests;
2. the test before binlog.binlog_row_failure_mixing_engines
issues flush logs;
3. binlog.binlog_row_failure_mixing_engines uses binlog
positions to limit the output of show_binlog_events;
4. binlog.binlog_row_failure_mixing_engines does not state which
   binlog file to use, hence it uses a wrong binlog file with
   the correct position.
There are two possible fixes: 1. make sure that the test starts
from a clean slate, binlog-wise; 2. in addition to the position,
also state the binary log file before sourcing
show_binlog_events.inc.
We go for fix #1, i.e., deploy a RESET MASTER before the test is
actually started.
The problem is that the logic which checks if a pointer is
valid relies on a poor heuristic based on the start and end
addresses of the data segment and heap.
Apart from miscalculating the heap bounds, this approach also
suffers from the fact that memory can come from places other
than the heap. See Bug#58528 for a more detailed explanation.
On Linux, the solution is to access the process's memory
through /proc/self/task/<tid>/mem, which allows for retrieving
the contents of pages within the virtual address space of
the calling process. If an address range is not mapped, an
input/output error is returned.
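A minimal sketch of the probing technique, under the assumption that a short pread() from /proc/self/task/<tid>/mem at the pointer's address succeeds only for mapped pages (the helper name is illustrative, not the server's):

  #include <fcntl.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Returns true if 'len' bytes starting at 'addr' are readable in this
  // process; pread() on the task's mem file fails for unmapped ranges.
  static bool pointer_is_readable(const void *addr, size_t len) {
    char path[64];
    long tid = static_cast<long>(syscall(SYS_gettid));
    snprintf(path, sizeof(path), "/proc/self/task/%ld/mem", tid);

    int fd = open(path, O_RDONLY);
    if (fd < 0)
      return false;                         // cannot tell; treat as invalid

    char buf[64];
    if (len > sizeof(buf))
      len = sizeof(buf);
    ssize_t n = pread(fd, buf, len,
                      static_cast<off_t>(reinterpret_cast<uintptr_t>(addr)));
    close(fd);
    return n == static_cast<ssize_t>(len);  // input/output error => not mapped
  }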
the DROP statement ..."
Problem: When using temporary tables and closing a session, an
implicit DROP TEMPORARY TABLE IF EXISTS is written to the binary
log (while cleaning up the context of the session THD - see:
sql_class.cc:THD::cleanup which calls close_temporary_tables).
close_temporary_tables first checks if the binary log is open
and then proceeds to create the DROP statements. Such statements
are then written to the binary log through
MYSQL_BIN_LOG::write(Log_event *). Inside, there is another check
of whether the binary log is open and, if not, an error is
returned. This is where the faulty behavior is triggered. Given
that the test case replays a binary log with temp table statements
and, right after it, issues RESET MASTER, there is a chance that
is_open will report false (when the mysql session is closed and
the temporary tables are written).
is_open may return false because MYSQL_BIN_LOG::reset_logs is
not setting the correct flag (LOG_CLOSE_TO_BE_OPENED) on
MYSQL_BIN_LOG::log_state (instead it sets just the
LOG_CLOSE_INDEX flag, leaving log_state at LOG_CLOSED).
Hence, when writing the DROP statement as part of
THD::cleanup, the thread could get a return value of false
for is_open inside MYSQL_BIN_LOG::write, ultimately reporting
that it cannot write the event to the binary log.
Fix: add the correct flag, which was missing in the second close.
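A simplified model of the state handling (the enum names come from the message above; the class is a toy, not MYSQL_BIN_LOG):

  enum log_state_t { LOG_OPENED, LOG_CLOSED, LOG_TO_BE_OPENED };
  enum { LOG_CLOSE_INDEX = 1, LOG_CLOSE_TO_BE_OPENED = 2 };

  struct BinaryLogModel {
    log_state_t log_state = LOG_OPENED;

    bool is_open() const { return log_state != LOG_CLOSED; }

    void close(unsigned flags) {
      log_state = (flags & LOG_CLOSE_TO_BE_OPENED) ? LOG_TO_BE_OPENED
                                                   : LOG_CLOSED;
    }

    void reset_logs() {
      // Before the fix the close here passed only LOG_CLOSE_INDEX, leaving
      // log_state at LOG_CLOSED, so a session running THD::cleanup at that
      // moment saw is_open() == false and could not log its implicit
      // DROP TEMPORARY TABLE statement.
      close(LOG_CLOSE_INDEX | LOG_CLOSE_TO_BE_OPENED);
      // ... purge and recreate the log files here ...
    }
  };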
Problem: MySQL cp1251 did not support 'U+20AC EURO SIGN',
which was assigned to 0x88 a few years ago.
Fix: adding mapping: 0x88 <-> U+20AC
@ mysql-test/include/ctype_8bit.inc
New shared file to test 8bit character sets.
@ mysql-test/r/ctype_cp1251.result
@ mysql-test/t/ctype_cp1251.test
Adding tests
@ sql/share/charsets/cp1251.xml
Adding mapping
@ strings/ctype-extra.c
Regenerating ctype-extra.c using strings/conf_to_src
according to new cp1251.xml
After dropping and recreating the database specified with the --one-database
option at the command line, the mysql client keeps filtering the statements even
after the execution of a 'USE' command on the same database.
The --one-database option enables the filtering of statements when the current
database is not the one specified at the command line. However, when that same
database is dropped and recreated, the variable (current_db) that holds the
initial database name gets altered. The bug comes from the fact that current_db
gets set to a null value (0) when a 'use db_name' follows the recreation
of the same database db_name (specified at the command line), and hence skip_updates
gets set to 1, which in turn triggers the further filtering of statements.
Fixed by making get_current_db() a no-op function when one_database is set,
and hence, under that condition, current_db will not get altered.
Note, however, that the value of current_db can change when we execute a 'connect'
command with a different database to reconnect to the server, in which case
the behavior of --one-database will be based on this new database.
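A sketch of the client-side change, with simplified globals standing in for the mysql client's one_database flag and current_db buffer:

  #include <string>

  static bool        one_database = false;  // set when --one-database is given
  static std::string current_db;            // database named on the command line

  static void get_current_db() {
    // With --one-database, keep the name given on the command line; asking
    // the server (where the database may just have been dropped and
    // recreated) would clear current_db and wrongly enable skip_updates.
    if (one_database)
      return;
    // ... otherwise refresh current_db from the server (SELECT DATABASE()) ...
  }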
It does work in general; the problem here was that the test name
'alter_table' matches 'main.alter_table-big', which has already been found.
Fixed by matching more explicitly (with/without suite name).
Add a _commented_ workaround for Bug#47350. The full solution is tricky
to get right, as explained in the bug report. Extending the deprecated
autotools framework to support conflicting plugins is not worth the effort
and would be too risky for MySQL 5.1 (GA).
> revision-id: gshchepa@mysql.com-20100801181236-uyuq6ewaq43rw780
> parent: alexey.kopytov@sun.com-20100723115254-jjwmhq97b9wl932l
> committer: Gleb Shchepa <gshchepa@mysql.com>
> branch nick: mysql-5.1-security
> timestamp: Sun 2010-08-01 22:12:36 +0400
> Bug #54461: crash with longblob and union or update with subquery
>
> Queries may crash, if
> 1) the GREATEST or the LEAST function has a mixed list of
> numeric and LONGBLOB arguments and
> 2) the result of such a function goes through an intermediate
> temporary table.
>
> An Item that references a LONGBLOB field has max_length of
> UINT_MAX32 == (2^32 - 1).
>
> The current implementation of GREATEST/LEAST returns a REAL
> result for a mixed list of numeric and string arguments (this
> contradicts the current documentation; the contradiction
> was discussed and it was decided to update the documentation).
>
> The max_length of such a function call was calculated as a
> maximum of argument max_length values (i.e. UINT_MAX32).
>
> That max_length value of UINT_MAX32 was used as a length for
> the intermediate temporary table Field_double to hold
> GREATEST/LEAST function result.
>
> The Field_double::val_str() method call on that field
> allocates a String value.
>
> Since an allocation of String reserves an additional byte
> for a zero-termination, the size of String buffer was
> set to (UINT_MAX32 + 1), that caused an integer overflow:
> actually, an empty buffer of size 0 was allocated.
>
> An initialization of the "first" byte of that zero-size
> buffer with '\0' caused a crash.
>
> The Item_func_min_max::fix_length_and_dec() has been
> modified to calculate max_length for the REAL result like
> we do it for arithmetical operators.
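The quoted overflow is easy to reproduce in isolation; with 32-bit unsigned arithmetic, UINT_MAX32 + 1 wraps to 0, so the String buffer ends up with no room for the terminating byte:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint32_t max_length = UINT32_MAX;      // UINT_MAX32 == 2^32 - 1 (LONGBLOB)
    uint32_t alloc_size = max_length + 1;  // reserve room for '\0': wraps to 0
    printf("buffer size actually requested: %u\n", alloc_size);  // prints 0
    // Writing the terminating '\0' into a zero-byte buffer is the
    // out-of-bounds access that caused the crash.
    return 0;
  }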
If mysqltest dies, mtr waits to see if mysqld dies too within 100 ms.
But in that case, it should not care about an expected crash.
Fix: jump past the code that checks the expect file.
OPTIMIZE TABLE recreates the whole table. That is why the counter gets reset.
Making the next autoinc column persistent is a separate issue from resetting
the value after an OPTIMIZE TABLE. We already have a check for ALTER TABLE
and CREATE INDEX to preserve the value on table recreate. We should be able to
add an additional check for OPTIMIZE TABLE to preserve the next value.
rb://519 Approved by Jimmy Yang.
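Conceptually, the extra check amounts to treating OPTIMIZE TABLE like the existing ALTER TABLE / CREATE INDEX cases when deciding the counter of the rebuilt table; an illustrative sketch (names are placeholders, not the InnoDB code):

  #include <cstdint>

  enum rebuild_reason {
    REBUILD_ALTER_TABLE,
    REBUILD_CREATE_INDEX,
    REBUILD_OPTIMIZE
  };

  // Next AUTO_INCREMENT value to install in the recreated table.
  uint64_t next_autoinc_after_rebuild(uint64_t old_next_autoinc,
                                      rebuild_reason reason) {
    switch (reason) {
      case REBUILD_ALTER_TABLE:
      case REBUILD_CREATE_INDEX:
      case REBUILD_OPTIMIZE:          // the additional case proposed above
        return old_next_autoinc;      // preserve the counter across the rebuild
    }
    return 1;                         // fresh table: counter starts over
  }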