(temporary) TABLE, crash
Problem: if one has an open "HANDLER t1", further "TRUNCATE t1"
doesn't close the handler and leaves handler table hash in an
inconsistent state, that may lead to a server crash.
Fix: TRUNCATE should implicitly close all open handlers.
Doc. request: the fact should be described in the manual accordingly.
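A minimal sketch of the fixed behavior (table name is illustrative):
CREATE TABLE t1 (a INT);
HANDLER t1 OPEN;
TRUNCATE TABLE t1;     -- after the fix: implicitly closes the open handler
HANDLER t1 READ FIRST; -- should now fail cleanly with an unknown-table
                       -- error instead of crashing the server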
If the SQL Thread fails to execute an event due to a temporary error (e.g.
ER_LOCK_DEADLOCK) and the option "--slave_transaction_retries" is set the SQL
Thread should not be aborted and the transaction should be restarted from the
beginning and re-executed.
Unfortunately, a wrong interpretation of THD::is_fatal_error was preventing
this behavior. In a nutshell, "this variable is set to TRUE if an execution of a
compound statement cannot continue. In particular, it is used to disable access
to the CONTINUE or EXIT handlers of stored routines." So even temporary errors
may have this variable set.
To fix the bug, we removed the early return on THD::is_fatal_error from
has_temporary_error():
DBUG_ENTER("has_temporary_error");
- if (thd->is_fatal_error)
- DBUG_RETURN(0);
-
DBUG_EXECUTE_IF("all_errors_are_temporary_errors",
if (thd->main_da.is_error())
{
The check for stack overflow was independent of the size of the
structure stored in the heap.
Fixed by adding sizeof(PARAM) to the requested free heap size.
view that has Group By
Table access rights checking function check_grant() assumed
that no view is opened when it's called.
This is not true with nested views where the inner view
needs materialization. In this case the view is already
materialized when check_grant() is called for it.
This caused check_grant() to not look for table level
grants on the materialized view table.
Fixed by checking if a view is already materialized and, if
it is, checking table-level grants using the original table name
(not the name of the materialized temp table).
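A sketch of the failing shape (object names are illustrative); the inner
view has GROUP BY and therefore needs materialization:
CREATE VIEW v_inner AS SELECT a, COUNT(*) AS c FROM t1 GROUP BY a;
CREATE VIEW v_outer AS SELECT * FROM v_inner;
GRANT SELECT ON db1.v_outer TO 'u1'@'localhost';
-- Selecting from db1.v_outer as u1 used to fail the table-level grant check.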
Bug#45243: crash on win in sql thread clear_tables_to_lock() -> free()
Bug#45242: crash on win in mysql_close() -> free()
Bug#45238: rpl_slave_skip, rpl_change_master failed (lost connection) for STOP SLAVE
Bug#46030: rpl_truncate_3innodb causes server crash on windows
Bug#46014: rpl_stm_reset_slave crashes the server sporadically in pb2
When killing a user session on the server, it's necessary to
interrupt (notify) the thread associated with the session that
the connection is being killed so that the thread is woken up
if waiting for I/O. On a few platforms (Mac, Windows and HP-UX)
where the SIGNAL_WITH_VIO_CLOSE flag is defined, this interruption
procedure is to asynchronously close the underlying socket of
the connection.
In order to enable this scheme, each connection serving thread
registers its VIO (I/O interface) so that other threads can
access it and close the connection. But only the owner thread of
the VIO may delete it, so as to guarantee that other threads won't
see freed memory (the thread unregisters the VIO before deleting
it). A side note: closing the socket introduces a harmless race
that might cause a thread to attempt to read from a closed socket,
but this is deemed acceptable.
The problem is that this infrastructure was meant to only be used
by server threads, but the slave I/O thread was registering the
VIO of a mysql handle (a client API structure that represents a
connection to another server instance) as an active connection of
the thread. But under some circumstances such as network failures,
the client API might destroy the VIO associated with a handle at
will, yet the VIO wouldn't be properly unregistered. This could
lead to accesses to freed data if a thread attempted to kill a
slave I/O thread whose connection was already broken.
There was an attempt to work around this by checking whether
the socket was being interrupted, but this hack didn't work as
intended due to the aforementioned race -- attempting to read
from the socket would yield a "bad file descriptor" error.
The solution is to add a hook to the client API that is called
from the client code before the VIO of a handle is deleted.
This hook allows the slave I/O thread to detach the active VIO
so it does not point to freed memory.
on SHOW CREATE TRIGGER + MERGE table
Problem: SHOW CREATE TRIGGER erroneously relies on the fact
that a trigger has only one underlying table
(which is wrong for MERGE tables).
Fix: remove erroneous assert().
In STATEMENT based replication, a statement that failed on the master but that
updated non-transactional tables is written to the binary log with the error code
appended to it. On the slave, the statement is executed and the same error is
expected. However, when an "expected error" did not happen on the slave and was
either ignored or was related to a concurrency issue on the master, the slave
did not roll back the effects of the statement, and as such inconsistencies might
happen.
To fix the problem, we automatically roll back a statement that should have
failed on a slave but succeeded, and whose expected failure is either ignored or
stems from a concurrency issue on the master.
There is an inconsistency with DROP DATABASE|TABLE|EVENT IF EXISTS and
CREATE DATABASE|TABLE|EVENT IF NOT EXISTS. DROP IF EXISTS statements are
binlogged even if either the DB, TABLE or EVENT does not exist. In
contrast, only CREATE EVENT IF NOT EXISTS is binlogged when the EVENT
already exists.
This patch fixes the following cases for all the replication formats:
CREATE DATABASE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS,
CREATE TABLE IF NOT EXISTS ... LIKE,
CREATE TABLE IF NOT EXISTS ... SELECT.
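For example (a sketch, assuming db1 and t1 already exist):
CREATE DATABASE IF NOT EXISTS db1;      -- now always written to the binlog
CREATE TABLE IF NOT EXISTS t1 (a INT);  -- now always written to the binlog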
"CREATE TABLE TRANSACTIONAL PAGE_CHECKSUM ROW_FORMAT=PAGE accepted,
does nothing".
Put back stubs for members of structures that are shared between
sql/ and pluggable storage engines, so as not to break the ABI unnecessarily.
To be NULL-merged into 5.4, where we do break the ABI already.
Replication SQL thread does not properly set the database default charset in
thd->variables.collation_database when executing a LOAD DATA event from the binlog.
This bug can be repeated by using the "LOAD DATA" command in STATEMENT mode.
This patch adds code to find the default character set of the current database
and assign it to thd->db_charset when the slave server begins to execute a relay log.
A test for this bug is added to rpl_loaddata_charset.test.
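A sketch of the scenario the test covers (file and object names are
illustrative):
CREATE DATABASE d1 CHARACTER SET gbk;
CREATE TABLE d1.t1 (a VARCHAR(10));
LOAD DATA INFILE '/tmp/data.txt' INTO TABLE d1.t1;
-- The slave SQL thread now applies the event using d1's default charset.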
The problem is that the lexer could inadvertently skip over the
end of a query being parsed if it encountered a malformed multibyte
character. A specially crafted query string could cause the lexer
to jump up to six bytes past the end of the query buffer. Another
problem was that the lexer could use unfiltered user input as
a signed array index for the parser maps (having lower and upper
bounds of 0 and 256, respectively).
The solution is to ensure that the lexer only skips over well-formed
multibyte characters and that the index value of the parser maps
is always an unsigned value.
Invalid (old?) table or database name in logs
Post push patch.
The bug was that a non-partitioned table file name was not
converted to system_charset (because table_name_len was not set).
Also added a missing DBUG_RETURN.
And since InnoDB adds quotes after calling the function,
I added one more mode where explain_filename does not
add quotes. But it still appends the [sub]partition name
as a comment.
Also caught a minor quoting bug: the character '`' was
not quoted in the identifier (so 'a`b' was quoted as `a`b`
and not `a``b`; the quoting is multibyte-character aware).
Problem 1:
When the 'Using index' optimization is used, the optimizer may still - after
cost-based optimization - decide to use another index in order to avoid using
a temporary table. But when this happens, the flag telling the storage engine to
read the index only (not the table) was still set. Fixed by resetting the flag in the
storage engine and TABLE structure in the above scenario, unless the new index
allows for the same optimization.
Problem 2:
When a 'ref' access method was employed by the cost-based optimizer (when the column
is non-NULLable), it was assumed that the 'quick' access methods needed no
initialization (since they are based on range scans). When the ORDER BY optimization
overrides the decision, however, it expects 'quick' to be initialized and hence
crashes. Fixed in 5.1 (was fixed in 6.0 already) by initializing 'quick' even when
there is 'ref' access.
when partition is reorganized.
Problem was that table->timestamp_field_type was not changed
before copying rows between partitions.
Fixed by setting it to TIMESTAMP_NO_AUTO_SET as the first thing
in fast_alter_partition_table, so that all if-branches are covered.
column on partitioned table
An assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if the query
is executed using an index containing a DOUBLE column on a partitioned table.
The problem is that the assertion expects all the fields which are read
to be in the read_set.
In this query only the field 'a' is in the read_set, as the tables in
the query are joined by the field 'a', and so the assertion fails
expecting the other field 'b'.
Since the function cmp() is just a comparison of the two parameters passed,
the assertion is not required.
Fixed by removing the assertion in the double-field comparison
function and also fixed the index initialization to do an ordered
index scan with an RW LOCK, which ensures all the fields from a key are in
the read_set.
Note: this bug is not reproducible with other datatypes because the
assertion doesn't exist in the comparison function for other
datatypes.
- Define and pass compile time path variables as pre-processor definitions to
mimic the makefile build.
- Set new CMake version and policy requirements explicitly.
- Changed DATADIR to MYSQL_DATADIR to avoid conflicting definition in
Platform SDK header ObjIdl.h which also defines DATADIR.
when used with --tab
1) New syntax: added CHARACTER SET clause to the
SELECT ... INTO OUTFILE (to complement the same clause in
LOAD DATA INFILE).
mysqldump is updated to use this in --tab mode.
2) ESCAPED BY/ENCLOSED BY field parameters are documented as
accepting a CHAR argument; however, SELECT .. INTO OUTFILE
silently ignored the rest of multi-character arguments.
For the symmetrical behavior with LOAD DATA INFILE the
server has been modified to fail with the same error:
ERROR 42000: Field separator argument is not what is
expected; check the manual
3) Currently LOAD DATA INFILE recognizes field/line separators
"as is" without converting from the client charset to the data
file charset. So it is assumed that the input file of
LOAD DATA INFILE consists of data in one charset and
separators in another charset. For compatibility with
that [buggy] behaviour the SELECT INTO OUTFILE implementation
has been kept "as is" too, but a new warning message
has been added:
Non-ASCII separator arguments are not fully supported
This message warns on field/line separators that contain
non-ASCII symbols.
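An example of the new clause (file name is illustrative):
SELECT a, b INTO OUTFILE '/tmp/t1.txt'
CHARACTER SET utf8
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
FROM t1;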
If using statement based replication (SBR), repeatedly calling
statements which are unsafe for SBR will cause a warning message
to be written to the error log for each statement. This might fill
up the error log and there is no way to disable this
behavior.
The solution is to only log these messages (about statements unsafe
for statement based replication) if the log_warnings option is set.
For example:
SET GLOBAL LOG_WARNINGS = 0;
INSERT INTO t1 VALUES(UUID());
SET GLOBAL LOG_WARNINGS = 1;
INSERT INTO t1 VALUES(UUID());
In this case the message will be printed only once:
[Warning] Statement may not be safe to log in statement format.
Statement: INSERT INTO t1 VALUES(UUID())
We disallow the partitioning of a log table. You could however
partition a table first, and then point logging to it. This is
not only against the docs, it also crashes the server.
We catch this case now.
Initialize LOCK_open as an adaptive mutex on platforms where the
PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP macro is available. The flag
indicates that a thread should spin (busy wait) for some time on a
locked adaptive mutex before blocking (sleeping). It's intended
to alleviate performance problems due to LOCK_open being a highly
contended mutex.
an assertion in a debug build.
The reason is that the C API doesn't support multiple result sets for prepared
statements, and attempting to execute a stored routine which returns multiple result
sets sometimes leads to a network error. The network error sets the diagnostic area
prematurely, which later leads to the assert when an attempt is made to set a second
server state.
This patch fixes the issue by changing the scope of the error code returned by
sp_instr_stmt::execute() to include any error which happened during the execution.
To assure that Diagnostic_area::is_sent really means that the message was sent,
all network-related functions are checked for return status.
Those keywords do nothing in 5.1 (they are meant for future versions, for example
featuring the Maria engine), so they are removed from the syntax here. Adding those
keywords back in future versions when needed is covered by:
- WL#5034 "Add TRANSACTIONAL=0|1 and PAGE_CHECKSUM=0|1 clauses to CREATE TABLE"
- WL#5037 "New ROW_FORMAT value for CREATE TABLE: PAGE"
compression
Since uint3korr() may read 4 bytes depending on build flags and
platform, allocate 1 extra "safety" byte in the network buffer
for cases when uint3korr() in my_real_read() is called to read
last 3 bytes in the buffer.
It is hard in practice to construct a reliable and reasonably
small test case for this bug, as that would require constructing an
input stream such that a certain sequence of bytes in a
compressed packet happens to be the last 3 bytes of the network
buffer.
If the log_bin_trust_function_creators option is not defined, creating a stored
function requires either one of the modifiers DETERMINISTIC, NO SQL, or READS
SQL DATA. Executing a stored function should also follow the same rules when in
STATEMENT mode. However, this was not happening and a wrong error was being
printed out: ER_BINLOG_ROW_RBR_TO_SBR.
The patch makes the creation and execution compatible and prints out the correct
error ER_BINLOG_UNSAFE_ROUTINE when a stored function without one of the modifiers
above is executed in STATEMENT mode.
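For example, with log_bin_trust_function_creators=0 and
binlog_format=STATEMENT (a sketch):
CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;  -- accepted
-- An equivalent function declared without DETERMINISTIC, NO SQL or
-- READS SQL DATA now consistently fails with ER_BINLOG_UNSAFE_ROUTINE,
-- both at creation and at execution time.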
The maximum value of the max_join_size variable is set by converting
a signed type (long int) with negative value (-1) to a wider unsigned
type (unsigned long long), which yields the largest possible value of
the wider unsigned type -- as per the language conversion rules. But,
depending on build options, the type of the max_join_size might be a
shorter type (ha_rows - unsigned long) which causes the warning to be
thrown once the large value is truncated to fit.
The solution is to ensure that the maximum value of the variable is
always set to the maximum value of integer type of max_join_size.
Furthermore, it would be interesting to always have a fixed type for
this variable, but this would incur a change of behavior which is
not acceptable for a GA version. See Bug#35346.
to wrong result
When using MIXED mode and issuing 'CREATE TEMPORARY TABLE t_tmp',
the statement is logged if the current binlogging mode is
STATEMENT. This causes the slave to replay the instruction and
create the temporary table as well. If there is no switch to ROW
mode, and later on a 'DROP TEMPORARY TABLE t_tmp' is issued, then
this statement will also be logged and the slave will
remove/close the temporary table.
However, if there is a switch to ROW mode between the CREATE and
DROP TEMPORARY table, the DROP statement will not be logged,
leaving the slave with a dangling temporary table.
This patch addresses this by always logging a DROP TEMPORARY
TABLE IF EXISTS when in mixed mode and a drop statement is issued
for temporary table(s).
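A sketch of the failing sequence (names are illustrative), under
binlog_format=MIXED with statement logging initially in effect:
CREATE TEMPORARY TABLE t_tmp (a INT);  -- logged as a statement
-- ... the session switches to row-based logging ...
DROP TEMPORARY TABLE t_tmp;  -- previously not logged; now logged as
                             -- DROP TEMPORARY TABLE IF EXISTS t_tmp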
mysqld
The problem was that enabling the event scheduler inside an init
file caused the server to crash upon start-up. The crash occurred
because the event scheduler wasn't being initialized before the
commands in the init file are processed.
The solution is to initialize the event scheduler before the init
file is read. The patch also disables the event scheduler during
bootstrap and makes the bootstrap operation robust in the
presence of background threads.
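For example, a hypothetical init file passed via --init-file can now
safely contain:
SET GLOBAL event_scheduler = ON;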
procedures causes crashes!
The problem of that bug report was mostly fixed by the
patch for bug 38691.
However, the attached test case focused on another crash or
valgrind warning problem: a SHOW PROCESSLIST query accesses
freed memory of an SP instruction that runs in a parallel
connection.
Changes of thd->query/thd->query_length in dangerous
places have been guarded with the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. in a case when a
temporary MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an 'ok'
nor an 'error' packet could be sent back to the client. This
condition led to a hanging client when using a 5.0 server, or an
assertion failure in 5.1.
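A sketch of one shape that exercises this path (names and sizes are
illustrative; selecting from the insert target forces buffering through
an internal HEAP temporary table):
SET SESSION max_heap_table_size = 16384;
INSERT IGNORE INTO t1 SELECT * FROM t1;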
Problem was that a failing rename just left the partitions in the state
they were in at the time of the failure.
Solution was to try to revert the started rename if a failure occurred.
not logged
Errors encountered during initialization of the SSL subsystem
are printed to stderr, rather than to the error log.
This patch adds a parameter to several SSL init functions to
report the error (if any) out to the caller. The function
init_ssl() in mysqld.cc is moved after the initialization of
the log subsystem, so that any error messages can be logged to
the error log. Printing of messages to stderr has been
retained to get diagnostic output in a client context.
binlog
The fix for BUG 43929 introduced a regression issue. In a nutshell, when a
statement that changes a non-transactional table fails, it is written to the
binary log with the error code appended. Unfortunately, after BUG 43929, this
failure was flushing the transactional cache, causing a mismatch between execution
and logging histories. To fix this issue, we avoid flushing the transactional
cache when a commit or rollback is not issued.
When, during optimization, an item is moved to the upper select
the item's context was left unchanged. This caused wrong results in
PS/SP mode.
The Item_ident::remove_dependence_processor now sets the item's context
to that of the select to which the item is moved.
it returns misleading 'table is full'
InnoDB returns a misleading error message "table is full"
when the number of active concurrent transactions is greater
than 1024.
Fixed by adding the error code ER_TOO_MANY_CONCURRENT_TRXS to the
error codes. InnoDB should return HA_TOO_MANY_CONCURRENT_TRXS
to MySQL, which is then mapped to ER_TOO_MANY_CONCURRENT_TRXS.
Note: a testcase is not written as this was reproducible only by
changing InnoDB code.
In a subselect all fields from outer selects are marked as dependent on the
selects they belong to. In some cases the optimizer substitutes them with an
equivalent expression. For example "a_field IN (SELECT outer_field)" is
substituted with "a_field = outer_field". As we move the outer_field to the
upper select it's not really outer anymore, but it was left marked as outer.
If an index over a_field exists, the optimizer chooses a wrong execution plan
and thus returns a wrong result.
Now the Item_in_subselect::single_value_transformer function removes dependent
marking from fields when a subselect is optimized away.
The "get_master_version_and_clock(...)" function in sql/slave.cc ignores
error and passes directly when queries fail, or queries succeed
but the result retrieved is empty.
The "get_master_version_and_clock(...)" function should try to reconnect master
if queries fail because of transient network problems, and fail otherwise.
The I/O thread should print a warning if the some system variables do not
exist on master (very old master)
table
The MERGE table storage engine does not support the HA_CAN_SQL_HANDLER feature,
and any attempt to open the MERGE table will fail with ER_ILLEGAL_HA.
After an error occurred, the tables that were opened must be closed again
or they will be left in an inconsistent state. However, the assumption
made in the code for closing and registering handler tables was that only
one table would be opened; this is not true for MERGE tables, which
cause multiple tables to be opened.
The next time a SELECT operation was issued on the merge table it
caused the system to freeze.
This patch fixes this issue by making sure that all tables which
are opened also are closed in the event of an error.
failed"
Do not assume that SQL prepared statements always run in text protocol.
When invoked from a stored procedure, which is itself invoked
by means of a prepared CALL statement, the protocol may be binary.
Juggle with the protocol only when we want to change it
to binary in COM_STMT_EXECUTE, COM_STMT_PREPARE.
This is a backport from 5.4/6.0, where the bug was fixed
as part of WL#4264 "Backup: Stabilize Service Interface"
Fixed the following problems:
1. cmake 2.6 warning because of a changed default on
how the dependencies to libraries with a specified
path are resolved.
Fixed by requiring cmake 2.6.
2. Removed an obsolete pre-NT4 hack including defining
Windows system defines to alter the behavior of windows.h.
3. Disabled warning C4065 on compiling sql_yacc.cc because
of a known incompatibility in some of the newer bison binaries.
match against.
The server crashes when executing a prepared statement with duplicated
MATCH() function calls in the SELECT and ORDER BY expressions, e.g.:
SELECT MATCH(a) AGAINST('test') FROM t1 ORDER BY MATCH(a) AGAINST('test')
This query gets optimized by the server, so the value returned
by MATCH() from the SELECT list is reused for ORDER BY purposes.
To make this optimization, the server compares items from the
SELECT and ORDER BY lists. We were getting a server crash because the
comparison function for the MATCH() item is not intended to be called
at this point of execution.
In 5.0 and 5.1 this problem is worked around by resetting the MATCH()
item to the state it was in during PREPARE.
In 6.0 a correct comparison function will be implemented and
duplicated MATCH() items from the ORDER BY list will be
optimized away.
"create as select" (innodb table)
Problem: the code constructing the "CREATE TABLE ..." statement
doesn't take into account that the current database is not set
in some cases. That may lead to a server crash.
Fix: check if the current database is set.
without error
When using quick access methods for searching rows in UPDATE or
DELETE there was no check whether a fatal error had already been sent
to the client while evaluating the quick condition.
As a result a false OK (following the error) was sent to the
client, and the error was thus transformed into a warning.
Fixed by checking for errors sent to the client during
SQL_SELECT::check_quick() and treating them as real errors.
Fixed a wrong test case in group_min_max.test
Fixed a wrong return code in mysql_update() and mysql_delete()
mutually-nested subqueries
Queries of the form
SELECT * FROM (SELECT 1) AS t1,
(SELECT 2) AS t2,...
(SELECT 32) AS t32
caused the "Too high level of nesting for select" error
as if the query has a form
SELECT * FROM (SELECT 1 FROM (SELECT 2 FROM (SELECT 3 FROM...
The table_factor parser rule has been modified to adjust
the LEX::nest_level variable value after every derived table.
sort_buffer_size cannot allocate
The NULL return from tree_insert() (on low memory) was not
checked for in Item_func_group_concat::add(). As a result,
a crash happens under low-memory conditions.
Fixed by properly checking the return code.
When the function exits with an error it was not
freeing the local Unique class instance.
Fixed by making sure the Unique instance is freed at all
places where the function returns.
use partial primary key if another index can prevent filesort
The fix for bug #28404 causes the covering ordering indexes to be
preferred unconditionally over non-covering and ref indexes.
Fixed by comparing the cost of using a covering index to the cost of
using a ref index even for covering ordering indexes.
Added an assertion to clarify the condition the local variables should
be in.
purge_relay_logs() did not propagate an error that happened in count_relay_log_space().
Fixed with the suggested setting of error= true.
Note: propagation out of purge_relay_logs() in general was fixed for Bug #44179,
and the issue does not exist in 6.0 thanks to a patch for WL#2775.
the auto_increment value
This is an alternative patch that, instead of allowing RECREATE TABLE
on TRUNCATE TABLE, implements reset_auto_increment, which is called
after delete_all_rows.
Note: this bug was fixed by Mattias Jonsson:
Pushing this patch: http://lists.mysql.com/commits/70370
timeout
In STMT and MIXED modes, a statement that changes both non-transactional and
transactional tables must be written to the binary log whenever there are
changes to non-transactional tables. This means that the statement gets into the
binary log even when the changes to the transactional tables fail. In particular,
in the presence of a failure such a statement is annotated with the error number
and wrapped in a begin/rollback. On the slave, while applying the statement, the
same failure is expected, and the rollback prevents the transactional changes
from being persisted.
Unfortunately, statements that fail due to concurrency issues (e.g. deadlocks,
timeouts) are logged in the same way causing the slave to stop as the statements
are applied sequentially by the SQL Thread. To fix this bug, we automatically
ignore concurrency failures on the slave. Specifically, the following failures
are ignored: ER_LOCK_WAIT_TIMEOUT, ER_LOCK_DEADLOCK and ER_XA_RBDEADLOCK.
The crash happened because for views which are joins
we have table_list->table == 0 and
any call through table_list->table leads to a crash.
The fix is to invoke the table_list->table->file->extra()
method for all tables belonging to the view.
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was in inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up having length less
than precision or decimals which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
object which is indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
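For example (a sketch), a 70-digit DECIMAL literal now has its precision
capped at DECIMAL_MAX_PRECISION (65) when the column is created, instead
of triggering bogus errors or assertions:
CREATE TABLE t1 SELECT
1111111111111111111111111111111111111111111111111111111111111111111111 AS c;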
contains ONLY_FULL_GROUP_BY
The partitioning code needs to issue a Item::fix_fields()
on the partitioning expression in order to prepare
it for being evaluated.
It does this by creating a special table and a table list
for the scope of the partitioning expression.
But when checking ONLY_FULL_GROUP_BY the
Item_field::fix_fields() was relying on cached_table always being
set and was trying to use it to get the
select_lex of the SELECT the field's table is in.
But the cached_table was not set by the partitioning code
that creates the artificial TABLE_LIST used to resolve the
partitioning expression and this resulted in a crash.
Fixed by rectifying the following errors :
1. Item_field::fix_fields() : the code that checks for
ONLY_FULL_GROUP_BY relies on having tables with
cacheable_table set. This is mostly true, the only
two exceptions being the partitioning context table
and the trigger context table.
Fixed by taking the current parsing context if no pointer
to the TABLE_LIST instance is present in the cached_table.
2. fix_fields_part_func() :
2a. The code that adds the table being created to the
scope for the partitioning expression is mostly a copy
of the add_table_to_list and friends with one exception :
it was not marking the table as cacheable (something that
normal add_table_to_list is doing). This caused the
problem in the check for ONLY_FULL_GROUP_BY in
Item_field::fix_fields() to appear.
Fixed by setting the correct members to make the table
cacheable.
The ideal structural fix for this is to use a unified
interface for adding a table to a table list
(add_table_to_list?) : noted in a TODO comment
2b. The Item::fix_fields() was called with a NULL destination
pointer. This causes uninitialized memory reads in the
overloaded ::fix_fields() function (namely
Item_field::fix_fields()) as it expects a non-zero pointer
there. Fixed by passing the source pointer similarly to how
it's done in JOIN::prepare().
without proper formatting
The problem is that a suitably crafted database identifier
supplied to COM_CREATE_DB or COM_DROP_DB can cause a SIGSEGV,
and thereby a denial of service. The database name is printed
to the log without using a format string, so potential
attackers can control the behavior of my_b_vprintf() by
supplying their own format string. A CREATE or DROP privilege
would be required.
This patch supplies a format string to the printing of the
database name. A test case is added to mysql_client_test.
format." warnings
Despite the fact that a statement would be filtered out from the binlog, a
warning would still be thrown if it was issued with a LIMIT clause.
This patch addresses this issue by checking the filtering rules before
printing out the warning.
The TABLE::reginfo.impossible_range is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening and this might lead to an empty result on complex queries:
a query might set the impossible_range flag on a table and when the query finishes,
all tables are returned back to the table cache. The next query that uses the table
with the impossible_range flag set and an index over the table will see the flag
and thus return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
such as quit and shutdown
Logging to slow log can produce an undetermined value for
Rows_examined in special cases. In debug mode this manifests
itself as any of the various marker values used to mark
uninitialized memory on various platforms.
If logging happens on a THD object that hasn't performed any
row reads (on this or any previous connections), the
THD::examined_row_count may be uninitialized. This patch adds
initialization for this attribute.
No automated test cases are added, as for this to be
meaningful, we need to ensure that we're using a THD
fulfilling the above conditions. This is hard to do in the
mysql-test-run framework. The patch has been verified
manually, however, by restarting mysqld and running the test
included with the bug report.
The problem is that the one phase commit function failed to
properly end an empty transaction. The solution is to ensure
that the transaction cleanup procedure is invoked even for
empty transactions.
BUG#40565 - Update Query Results in "1 Row Affected" But Should Be "Zero Rows"
Detailed revision comments:
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
r3590 | marko | 2008-12-18 15:33:36 +0200 (Thu, 18 Dec 2008) | 11 lines
branches/5.1: When converting a record to MySQL format, copy the default
column values for columns that are SQL NULL. This addresses failures in
row-based replication (Bug #39648).
row_prebuilt_t: Add default_rec, for the default values of the columns in
MySQL format.
row_sel_store_mysql_rec(): Use prebuilt->default_rec instead of
padding columns.
rb://64 approved by Heikki Tuuri
------------------------------------------------------------------------
In Item_param::set_from_user_var,
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
The reason for the crash was that rotate_relay_log (mi=0x0) did not verify
the passed value of active_mi. There are more cases where active_mi
is supposed to be non-zero, e.g. change_master(), stop_slave(), and it's
reasonable to protect all of them from a similar crash with a common
fix.
Fixed by splitting end_slave() into a slave-threads release part and a slave
data clean-up part (a new close_active_mi()). The new function is
invoked at the very end of close_connections() so that all users of
active_mi are proven to have left.
queries if query was killed
Since we rely on thd->is_error() to decide whether we should
COMMIT or ROLLBACK after a query execution, check the query
'killed' state and throw an error before calling
ha_autocommit_or_rollback(), not after.
The patch was tested manually. For reliable results, the test
case would have to KILL QUERY while a DELETE/UPDATE query in
another thread is still running. I don't see a way to achieve
this kind of synchronization in our test suite (no debug_sync
in 5.1).
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table exists already) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset
(see * in the pseudocode below). A crash happened if the table was opened
again. Fixed by resetting the flag before the error check.
nested-loops_join():
for each row in SELECT table {
select_insert::send_data():
if a value is supplied for the AUTO_INCREMENT column
table->auto_increment_field_not_null= TRUE
else
table->auto_increment_field_not_null= FALSE
if (error)
return 1; *
if (table->auto_increment_field_not_null == FALSE)
...
table->auto_increment_field_not_null= FALSE
}
<-- table returned to table cache and later retrieved by open_table:
open_table():
assert(table->auto_increment_field_not_null)
Inconsistent behavior of session variable max_allowed_packet
(and net_buffer_length); only assignment to the global variable
has any effect, without this being obvious to the user.
The patch for Bug#22891 is backported to 5.0, making the two
session variables read-only. As this is a backport to GA
software, the error used when trying to assign to the read-
only variable is ER_UNKNOWN_ERROR. The error message is the
same as in 5.1+.
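For example (a sketch of the backported 5.0 behavior):
SET GLOBAL max_allowed_packet = 16777216;   -- still allowed
SET SESSION max_allowed_packet = 16777216;  -- now rejected: the session
                                            -- variable is read-only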
The problem: described in the bug report.
The fix:
- increase buffers where necessary
  (buffers which are used in strxnmov)
- decrease buffer lengths which are used
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not enough to store the
changes.
In a nutshell, to fix the bug, we save the position of the next character in the
cache before starting to process a statement. If there is a problem, we simply
restore the position, thus removing any effect of the statement from the cache.
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for non-transactional changes that do not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers to
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc and sql_update.cc modules,
but the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be either classified as DDL or DML because
DDLs that do not get into the cache must generate an incident event since they
cannot be rolled back.
Item_func_spatial_collection::val_str
When the concatenation function for geometry data collections
reads the binary data it was not rigorous in checking that
data is available, leading to invalid reads and crashes.
Fixed by making the checks stricter.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
The assertion in String::copy was added in order to avoid
valgrind errors when the destination was the same as the source.
Eased restriction to allow for the case when str == NULL.
Early patch submitted for discussion.
It is possible for more than one thread to enter the condition
in query_cache_insert(), but the condition predicate is to
signal one thread each time the cache status changes between
the following states: {NO_FLUSH_IN_PROGRESS,FLUSH_IN_PROGRESS,
TABLE_FLUSH_IN_PROGRESS}
Consider three threads THD1, THD2, THD3
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status= FLUSH_IN_PROGRESS;
new writers are blocked.
THD2: select ... => Still got a writer and enters cond in
query_cache_insert
THD3: select ... => Still got a writer and enters cond in
query_cache_insert
THD1: flush tables => finished and signal status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?
This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are
queued and put on a conditional wait. On each unlock the queue is signalled. This
resolves the issues with left-over threads. To assure that no threads spend
unnecessary time waiting, a signal broadcast is issued every time a lock is taken
before a full cache flush.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (and thus closing the cursor etc),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure in an automatic
variable and restore it after it's done.
As a result, at exit from filesort() we have sort.io_cache filled in and
nothing else (the cursors were closed at the end of reading data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it's been used already by filesort() reading the data in). While doing that
a special case in the index merge destructor will clean up the sort.io_cache,
assuming it's an output of the index merge method and is not needed anymore.
As a result the code that tries to read the data back from the filesort output
will get no data in both memory and disk and will crash.
Fixed similarly to how filesort() does it : by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is a safe thing to do because all the structures are already cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next())
and the cleanup code being written in a way that tolerates repeating cleanups.
The SQL-mode PAD_CHAR_TO_FULL_LENGTH could prevent a DROP USER
statement from removing the privileges associated with the user being dropped.
What occurred was that reading from the User and Host fields of
the tables tables_priv or columns_priv would yield values padded
with spaces, causing a failure to match a specified user or host
('user' != 'user ').
The solution is to disregard the PAD_CHAR_TO_FULL_LENGTH mode
when iterating over and matching values in the privileges tables
for a DROP USER statement.
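A sketch of the failing sequence (names are illustrative):
SET GLOBAL sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
GRANT SELECT ON db1.t1 TO 'u1'@'localhost';  -- adds a tables_priv row
DROP USER 'u1'@'localhost';
-- Before the fix, the space-padded User/Host values failed to match
-- and the tables_priv row survived the DROP USER.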
statements missed from general log
A refinement of the test in the previous patch to avoid
using sleep as a means to ensure that timestamps are
added to the log entries.
WHERE and GROUP BY clause
Loose index scan may use range conditions on the argument of
the MIN/MAX aggregate functions to find the beginning/end of
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum
values. When the comparison returns 0 (equal) the code should
check the type of the min/max values of the current interval
and accept or reject the row based on whether the limit is
open or not.
There was a wrong composite condition checking this, and it did
not work in all cases.
Fixed by simplifying the conditions and reversing the logic.
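A sketch of a query shape that takes this path (names are illustrative):
a loose index scan over the composite key with open (strict) bounds on
the MIN/MAX argument:
CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b));
SELECT a, MIN(b), MAX(b) FROM t1 WHERE b > 1 AND b < 9 GROUP BY a;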
While reading a binary log that is being used by a master or was not properly
closed, most likely due to a crash, the following warning message is being
printed out: "Warning: this binlog was not closed properly. Most probably mysqld
crashed writing it.". This was scaring our users as the message was not taking
into account the possibility of the file is being just used by the master.
To avoid unnecessarily scaring our users, we replace the original message by the
following one: Warning: "this binlog is either is use or was not closed properly.".
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the second patch, fixing more
of the warnings.
Range analysis did not request sorted output from the storage engine,
which caused partitioned handlers to process one partition at a time
while reading key prefixes in ascending order, causing values to be
missed. Fixed by always requesting sorted order during range analysis.
This fix is introduced in 6.0 by the fix for bug no 41136.
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the first patch, fixing a number
of the warnings, predominantly "suggest using parentheses
around && in ||", and empty for and while bodies.
variable. The problem was that THD::connect_utime could be
used without being initialized when the main thread is used
to handle connections (--thread-handling=no-threads).
mysqlbinlog --database parameter was being ignored when processing
row events. As such no event filtering would take place.
This patch addresses this by deploying a call to shall_skip_database
when table_map_events are handled (as these contain also the name of
the database). All other row events referencing the table id of the
filtered map event will also be skipped.
uninitialized variable used as subscript
Grouping select from a "constant" InnoDB table (a table
of a single row) joined with other tables caused a crash.
The problem is that when an optimization of read-only transactions
(bypass 2-phase commit) was implemented, it removed the code that
reset the XID once a transaction wasn't active anymore:
sql/sql_parse.cc:
- bzero(&thd->transaction.stmt, sizeof(thd->transaction.stmt));
- if (!thd->active_transaction())
- thd->transaction.xid_state.xid.null();
+ thd->transaction.stmt.reset();
This mostly worked fine as the transaction commit and rollback
functions (in handler.cc) reset the XID once the transaction is
ended. But those functions wouldn't reset the XID in the case of
an empty transaction, leading to an assertion when starting
a new XA transaction.
The solution is to ensure that the XID state is reset when empty
transactions are ended (by either commit or rollback). This is
achieved by reorganizing the code so that the transaction cleanup
routine is invoked whenever a transaction is ended.
Holding on to the temporary InnoDB hash index latch is an optimization in
many cases, but a pessimization in some others.
Release temporary latches for those corner cases we (or rather, our customers,
thanks!) have identified, that is, when we are about to do something that
might take a really long time, like REPAIR or filesort.
the thread->mysys_var parameter should be empty for the idle
embedded-server threads so that working threads can safely free
this memory.
per-file comments:
libmysqld/lib_sql.cc
Bug#43733 Select on processlist let the embedded server crash (concurrent_innodb_safelog)
set thread->mysys_var= 0 after the query is handled
mysql-test/include/concurrent.inc
Bug#43733 Select on processlist let the embedded server crash (concurrent_innodb_safelog)
enable these for the embedded-server mode
sql/sql_show.cc
Bug#43733 Select on processlist let the embedded server crash (concurrent_innodb_safelog)
show thread lock status in the query result
The crash happens because of uninitialized
lex->ssl_cipher, lex->x509_subject, lex->x509_issuer variables.
The fix is to add initialization of these variables for
stored procedures&functions.
The server was not cleaning the last IO error and error number when
resetting slave.
This patch addresses this issue by backporting into 5.1 part of the
patch in BUG 34654. A fix for this issue had already been pushed into
6.0 as part of the aforementioned bug, however the patch also included
some refactoring. The fix for 5.1 does not take into account the
refactoring part.
always rolls back.
There are failures on pushbuild machines using old compilers, which complain
about the ULLONG_MAX declaration. Changing this to ULONGLONG_MAX solves the problem.