The crash happened because, for views that are joins,
table_list->table == 0, so any method call through
table_list->table leads to a crash.
The fix is to invoke the table_list->table->file->extra()
method for all tables belonging to the view.
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was an inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating the value length, without
adjusting precision and decimals accordingly. As a result, a DECIMAL
constant with more than 65 digits ended up having a length less
than its precision or decimals, which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
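A minimal repro sketch (illustrative: any DECIMAL constant with more
than 65 digits; whether it produced a bogus error or an assertion
failure depended on the build type):
  CREATE TABLE t1 SELECT
    1234567890123456789012345678901234567890123456789012345678901234567890 AS c;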
contains ONLY_FULL_GROUP_BY
The partitioning code needs to issue an Item::fix_fields() call
on the partitioning expression in order to prepare
it for evaluation.
It does this by creating a special table and a table list
for the scope of the partitioning expression.
But when checking ONLY_FULL_GROUP_BY, Item_field::fix_fields()
relied on cached_table always being set and tried to use it to
get the select_lex of the SELECT that the field's table belongs to.
However, cached_table was not set by the partitioning code
that creates the artificial TABLE_LIST used to resolve the
partitioning expression, and this resulted in a crash.
Fixed by rectifying the following errors:
1. Item_field::fix_fields(): the code that checks for
ONLY_FULL_GROUP_BY relies on having tables with
cacheable_table set. This is mostly true, the only
two exceptions being the partitioning context table
and the trigger context table.
Fixed by falling back to the current parsing context
when cached_table holds no pointer to a TABLE_LIST instance.
2. fix_fields_part_func():
2a. The code that adds the table being created to the
scope for the partitioning expression is mostly a copy
of add_table_to_list() and friends, with one exception:
it was not marking the table as cacheable (something the
normal add_table_to_list() does). This is what triggered
the ONLY_FULL_GROUP_BY check problem in
Item_field::fix_fields().
Fixed by setting the correct members to make the table
cacheable.
The ideal structural fix for this is to use a unified
interface for adding a table to a table list
(add_table_to_list()?): noted in a TODO comment.
2b. Item::fix_fields() was called with a NULL destination
pointer. This causes uninitialized memory reads in the
overloaded ::fix_fields() function (namely
Item_field::fix_fields()), as it expects a non-zero pointer
there. Fixed by passing the source pointer, similarly to how
it is done in JOIN::prepare().
The TABLE::reginfo.impossible_range flag is used by the optimizer to indicate
that the condition applied to the table is impossible. It wasn't initialized
at table opening, and this might lead to an empty result on complex queries:
a query might set the impossible_range flag on a table, and when the query finishes,
all tables are returned to the table cache. The next query that uses the table
with the impossible_range flag still set, and an index over the table, will see the flag
and thus return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
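A hedged sketch of the failure pattern (assuming an index on column a;
the queries that actually triggered the bug were more complex):
  SELECT * FROM t1 WHERE a = 1 AND a = 2;  -- range analysis marks the range impossible
  SELECT * FROM t1 WHERE a = 1;  -- reusing the cached TABLE could wrongly return an empty result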
The problem is that the one-phase commit function failed to
properly end an empty transaction. The solution is to ensure
that the transaction cleanup procedure is invoked even for
empty transactions.
BUG#40565 - Update Query Results in "1 Row Affected" But Should Be "Zero Rows"
Detailed revision comments:
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
r3590 | marko | 2008-12-18 15:33:36 +0200 (Thu, 18 Dec 2008) | 11 lines
branches/5.1: When converting a record to MySQL format, copy the default
column values for columns that are SQL NULL. This addresses failures in
row-based replication (Bug #39648).
row_prebuilt_t: Add default_rec, for the default values of the columns in
MySQL format.
row_sel_store_mysql_rec(): Use prebuilt->default_rec instead of
padding columns.
rb://64 approved by Heikki Tuuri
------------------------------------------------------------------------
In Item_param::set_from_user_var,
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table already exists) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus, if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see * in
the pseudocode below). A crash happened if the table was opened again. Fixed by
resetting the flag after error checking.
nested_loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1;    /* * early return skips the reset below */
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE    /* reset */
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
assert(table->auto_increment_field_not_null)
1. BUG#45357 - 5.1.35 crashes with Failing assertion: index->type & DICT_CLUSTERED
2. Also fixes the compilation problem when the flag -DUNIV_MUST_NOT_INLINE is used.
Detailed revision comments:
r5340 | marko | 2009-06-17 12:11:49 +0300 (Wed, 17 Jun 2009) | 4 lines
branches/5.1: row_unlock_for_mysql(): When the clustered index is unknown,
refuse to unlock the record.
(Bug #45357, caused by the fix of Bug #39320).
rb://132 approved by Sunny Bains.
r5339 | marko | 2009-06-17 11:01:37 +0300 (Wed, 17 Jun 2009) | 2 lines
branches/5.1: Add missing #include "mtr0log.h" so that the code compiles
with -DUNIV_MUST_NOT_INLINE.
Inconsistent behavior of session variable max_allowed_packet
(and net_buffer_length); only assignment to the global variable
has any effect, without this being obvious to the user.
The patch for Bug#22891 is backported to 5.0, making the two
session variables read-only. As this is a backport to GA
software, the error issued when trying to assign to a
read-only variable is ER_UNKNOWN_ERROR. The error message is the
same as in 5.1+.
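After this change, session-level assignment is rejected while the global
assignment keeps working; an illustrative session (the exact error text
comes from ER_UNKNOWN_ERROR as noted above):
  SET SESSION max_allowed_packet = 2097152;  -- now fails: variable is read-only
  SET GLOBAL  max_allowed_packet = 2097152;  -- still allowed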
Item_func_spatial_collection::val_str
When the concatenation function for geometry data collections
read the binary data, it was not rigorous in checking that
data was available, leading to invalid reads and crashes.
Fixed by making the checking stricter.
This patch corrects a mistake in the test case for the patch for bug 43658.
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
The affected platforms appear to be limited to Linux.
The assertion in String::copy was added in order to avoid
valgrind errors when the destination was the same as the source.
Eased the restriction to allow for the case when str == NULL.
Early patch submitted for discussion.
It is possible for more than one thread to enter the condition
in query_cache_insert(), but the condition predicate is meant to
signal one thread each time the cache status changes between
the following states: {NO_FLUSH_IN_PROGRESS, FLUSH_IN_PROGRESS,
TABLE_FLUSH_IN_PROGRESS}.
Consider three threads THD1, THD2, THD3
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status= FLUSH_IN_PROGRESS;
new writers are blocked.
THD2: select ... => Still got a writer and enters cond in
query_cache_insert
THD3: select ... => Still got a writer and enters cond in
query_cache_insert
THD1: flush tables => finished and signal status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?
This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are
queued and put on a conditional wait. On each unlock the queue is signalled. This resolves
the issue of leftover threads. To ensure that no thread spends unnecessary
time waiting, a signal broadcast is issued every time a lock is taken before a full
cache flush.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (thus closing the cursor, etc.),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure into an automatic
variable and restore it after it's done.
As a result, on exiting filesort() we have sort.io_cache filled in and
nothing else (the cursors having been closed at the end of reading data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it has already been used by filesort() to read the data in). While doing that,
a special case in the index merge destructor will clean up the sort.io_cache,
assuming it is an output of the index merge method and is no longer needed.
As a result, the code that tries to read the data back from the filesort output
will find no data in either memory or disk and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it is not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is safe to do because all the structures have already been cleaned up upon
reaching the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()),
and the cleanup code is written in a way that tolerates repeated cleanups.
The SQL mode PAD_CHAR_TO_FULL_LENGTH could prevent a DROP USER
statement from removing the privileges associated with the user being dropped.
What occurred was that reading from the User and Host fields of
the tables tables_priv or columns_priv would yield values padded
with spaces, causing a failure to match a specified user or host
('user' != 'user     ').
The solution is to disregard the PAD_CHAR_TO_FULL_LENGTH mode
when iterating over and matching values in the privilege tables
for a DROP USER statement.
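An illustrative sequence with hypothetical names (before the fix, the
padded privilege row could be left behind):
  SET sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
  GRANT SELECT (c1) ON db1.t1 TO 'u1'@'localhost';
  DROP USER 'u1'@'localhost';  -- failed to match the space-padded 'u1' row in columns_priv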
statements missed from general log
A refinement of the test in the previous patch to avoid
using sleep as a means to ensure that timestamps are
added to the log entries.
WHERE and GROUP BY clause
Loose index scan may use range conditions on the argument of
the MIN/MAX aggregate functions to find the beginning/end of
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum
values. When the comparison returns 0 (equal), the code should
check the type of the min/max values of the current interval
and accept or reject the row based on whether the limit is
open or not.
There was an incorrect composite condition for this check, and it
did not work in all cases.
Fixed by simplifying the conditions and reversing the logic.
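A hedged illustration of the affected query shape (hypothetical table;
loose index scan requires a suitable composite index):
  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT a, MIN(b) FROM t1 WHERE b > 17 GROUP BY a;   -- open lower bound on b
  SELECT a, MAX(b) FROM t1 WHERE b <= 42 GROUP BY a;  -- closed upper bound on b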
Merge the test case from 5.0 to 5.1 for BUG#40565
Detailed revision comments:
r5233 | marko | 2009-06-03 15:12:44 +0300 (Wed, 03 Jun 2009) | 11 lines
branches/5.1: Merge the test case from r5232 from branches/5.0:
------------------------------------------------------------------------
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
Range analysis did not request sorted output from the storage engine,
which caused partitioned handlers to process one partition at a time
while reading key prefixes in ascending order, causing values to be
missed. Fixed by always requesting sorted order during range analysis.
This fix was introduced in 6.0 by the fix for bug no 41136.
This test uses SHOW STATUS and the like, which may be unstable in the face
of logging to table, since the CSV handler is actively executing operations
and thus incrementing the counters.
Fixed by disabling logging to table for the duration of the test and restoring
it afterwards. This causes various counters to properly start counting from zero
and never advance due to CSV operations.
memory issue ?
The mysql command line client could misinterpret some character
sequences as commands under some circumstances.
The upper limit of the internal readline buffer was raised to 1 GB
(the same as the server's max_allowed_packet) so that any input
line is processed by add_line() as a whole rather than in
chunks.
uninitialized variable used as subscript
A grouping select from a "constant" InnoDB table (a table
with a single row) joined with other tables caused a crash.
The problem is that when an optimization of read-only transactions
(bypassing the two-phase commit) was implemented, it removed the code that
reset the XID once a transaction was no longer active:
sql/sql_parse.cc:
- bzero(&thd->transaction.stmt, sizeof(thd->transaction.stmt));
- if (!thd->active_transaction())
- thd->transaction.xid_state.xid.null();
+ thd->transaction.stmt.reset();
This mostly worked fine, as the transaction commit and rollback
functions (in handler.cc) reset the XID once the transaction is
ended. But those functions wouldn't reset the XID in case of
an empty transaction, leading to an assertion failure when
starting a new XA transaction.
The solution is to ensure that the XID state is reset when empty
transactions are ended (by either commit or rollback). This is
achieved by reorganizing the code so that the transaction cleanup
routine is invoked whenever a transaction is ended.
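A hedged sketch of the failing pattern (the exact statement sequence may
differ from the original bug report):
  BEGIN;
  COMMIT;           -- empty transaction: before the fix, the XID state was not reset here
  XA START 'trx1';  -- tripped an assertion on the stale XID in debug builds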
Problem:
Crash happened with a user-defined utf8 collation,
on an attempt to insert a value longer than the column
can store.
Reason:
The "ctype" member was not initialized (NULL) when
allocating a user-defined utf8 collation, so an attempt
to call my_ctype(cs, *str) to check if we loose any important
data when truncating the value made the server crash.
Fix:
Initializing tge "ctype" member to a proper value.
mysql-test/r/ctype_ldml.result
Adding tests
mysql-test/t/ctype_ldml.test
Adding tests
strings/ctype-uca.c
Adding initialization of "ctype" member.
The crash happens because of uninitialized
lex->ssl_cipher, lex->x509_subject, lex->x509_issuer variables.
The fix is to add initialization of these variables for
stored procedures and functions.
The crash happens due to a wrong max_length value which is set at the
Item_func_round::fix_length_and_dec() stage. The value is set to
args[0]->max_length, which is too big in the case of LONGTEXT (LONGBLOB) fields.
The fix is to set max_length using the float_length() function.
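An illustrative case (hypothetical table):
  CREATE TABLE t1 (c LONGTEXT);
  INSERT INTO t1 VALUES ('1.5');
  SELECT ROUND(c, 1) FROM t1;  -- max_length was inherited from the huge LONGTEXT length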
BEGIN/COMMIT/ROLLBACK were subject to replication db rules, which
caused the boundary of a transaction not to be recognized correctly
when these queries were ignored by the rules.
Fixed the problem by skipping replication db rules for these
statements.
MySQL crashes if a user without the proper privileges attempts to create a procedure.
The crash happens because more than one error state is pushed onto the Diagnostics
area. In this particular case the user is denied the implicit creation of a new user
account with the implicitly granted ALTER ROUTINE and EXECUTE privileges.
The new account is needed if the original user account contained a host mask.
A user account with a host mask is a distinct user account in this context.
An alternative would be to first get the most permissive user account which
includes the current user connection and then assign privileges to that
account. This behavior change is considered out of scope for this bug patch.
The implicit assignment of privileges when a user creates a stored routine is
considered to be a feature for user convenience, and as such it is not
a critical operation. Any failure to complete this operation is thus considered
non-fatal (an error becomes a warning).
The patch backports a stack implementation of the internal error handler interface.
This enables the use of multiple error handlers, making it possible to intercept
and cancel errors thrown by lower layers. This is needed because an error handler is
already in use on the call stack emitting the errors which need to be converted.
mysqldump --tab still dumped triggers to stdout rather than to the
individual per-table files.
We now append triggers to the .sql file for the corresponding
table.
--events and --routines correspond to a database rather than a
table and will still go to stdout with --tab unless redirected
with --result-file (-r).
old_password() functions
The PASSWORD() and OLD_PASSWORD() functions could lead to
memory reads outside of an internal buffer when used with BLOB
arguments.
String::c_ptr() assumes there is at least one extra byte
in the internally allocated buffer when adding the trailing
'\0'. This, however, may not be the case when a String object
was initialized with an externally allocated buffer.
The bug was fixed by adding an additional "length" argument to
make_scrambled_password_323() and make_scrambled_password() in
order to avoid String::c_ptr() calls for
PASSWORD()/OLD_PASSWORD().
However, since the make_scrambled_password[_323] functions are
a part of the client library ABI, the functions with the new
interfaces were implemented with the 'my_' prefix in their
names, with the old functions changed to be wrappers around
the new ones to maintain interface compatibility.
The problem is that the server failed to follow the rule that
every X509 object retrieved using SSL_get_peer_certificate()
must be explicitly freed by X509_free(). This caused a memory
leak for builds linked against OpenSSL where the X509 object
is reference counted -- improper counting will prevent the
object from being destroyed once the session containing the
peer certificate is freed.
The solution is to explicitly free every X509 object used.
HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result-type
calculation for IN but before the run-time type calculations,
it causes the type calculations done in the IN function at run time
to produce unexpected results, different from what was prepared at compile time.
In the CASE ... WHEN ... THEN ... statement there was a similar problem
and it was solved by artificially adding a STRING argument to the set of
types of the IN/CASE arguments at compile time, so if any of the
arguments of the CASE function changes its type to a string it will
still be covered by the information prepared at compile time.
stop/start slave
When stopping and restarting the slave while it is replicating
temporary tables, the server would crash or raise an assertion
failure. This was due to the fact that, although temporary tables are
saved across slave thread restarts, the reference to the thread in
use (table->in_use) was not being properly updated when the restart
happened (it would still reference the old/invalid thread instead of
the new one).
This patch addresses this issue by resetting the reference to the new
slave thread on slave thread restart.
Created a new test file, mysqldump_restore.test, that tests restoring from
mysqldump output for a limited number of basic cases.
Created a new include file, mysqldump.inc, which renames the original table and uses
mysqldump output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
New patch incorporating review feedback prior to push.
mysqldump.test - removed a redundant call to include/have_log_bin.inc (it was used twice in the test!)
assertion .\filesort.cc, line 797
A query with an "ORDER BY @@some_system_variable" clause,
where @@some_system_variable is NULL, caused an assertion
failure in the filesort procedures.
The reason for the failure is the value of
Item_func_get_system_var::maybe_null: it was unconditionally
set to false even if the value of the variable was NULL.
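A hedged illustration (character_set_results is one system variable that
may legitimately be NULL; the variable in the original report may differ):
  SET @@session.character_set_results = NULL;
  SELECT 1 ORDER BY @@session.character_set_results;  -- asserted in debug builds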
Created a new test file, mysqldump_restore.test, that does this for a limited
number of basic cases.
Created a new include file, mysqldump.inc, which renames the original table and uses
mysqldump output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
bug#44766: valgrind error when using convert() in a subquery
Problem: the input and output buffers may be the same
when converting a string to some character set.
That may lead to wrong results/valgrind warnings.
Fix: use different buffers.
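An illustrative query of the affected shape (hypothetical; the original
report applied CONVERT() inside a subquery):
  SELECT * FROM (SELECT CONVERT('abc' USING ucs2) AS c) AS sub;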
warnings after uncompressed_length
UNCOMPRESSED_LENGTH() did not validate its argument. In
particular, if the argument length was less than 4 bytes,
an uninitialized memory value was returned as a result.
Since the result of COMPRESS() is either an empty string or
a 4-byte length prefix followed by compressed data, the bug was
fixed by ensuring that the argument of UNCOMPRESSED_LENGTH() is
either an empty string or contains at least 5 bytes (as done in
UNCOMPRESS()). This is the best we can do to validate input
without decompressing.
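Illustrative calls (before the fix, the short-argument case read
uninitialized memory; after it, such input is treated as invalid, as in
UNCOMPRESS()):
  SELECT UNCOMPRESSED_LENGTH('abc');            -- argument shorter than 5 bytes: invalid
  SELECT UNCOMPRESSED_LENGTH(COMPRESS('abc'));  -- valid: returns 3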
The internal InnoDB FK parser does not recognize '\'' as a quotation symbol.
The suggested fix is to add a check for the '\'' symbol to the quotation
condition (in the dict_strip_comments() function).
with a "HAVING" clause though query works
SELECT from views defined like:
CREATE VIEW v1 (view_column)
AS SELECT c AS alias FROM t1 HAVING alias
fails with an error 1356:
View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights
to use them
The CREATE VIEW form with a (column list) substitutes
SELECT column names/aliases with names from the
view column list.
However, alias references in the HAVING clause were
not substituted.
The Item_ref::print function has been modified
to write the correct aliased names of the underlying
items into the generated VIEW definition/.frm file.
The RAND(N) function, where N is a field of a "constant" table
(a table with a single row), failed with a SIGFPE.
Evaluation of RAND(N) relies on the constant status of its argument.
The current server "seeded" the random value for each constant argument
only once, in the Item_func_rand::fix_fields method.
The server then skipped the call to seed_random() in the
Item_func_rand::val_real method for such constant arguments.
However, the constant status of an argument may change
after the call to fix_fields, if the argument is a field of a
"constant" table. Thus, pre-initialization of the random value
in the fix_fields method is too early.
The initialization of the random value by seed_random() has been
removed from the Item_func_rand::fix_fields method.
The Item_func_rand::val_real method has been modified to
call seed_random() on the first evaluation of this method
if the argument is a function.
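A minimal repro sketch (t1 becomes a "constant" table because it holds
a single row):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  SELECT RAND(a) FROM t1;  -- crashed with SIGFPE before the fix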
always rollsback.
The global variable max_binlog_cache_size could not be set to more than 4GB on
32-bit systems, limiting transactions of all storage engines to 4GB of changes.
The problem is that max_binlog_cache_size was declared as ulong, which is 4 bytes
on 32-bit and 8 bytes on 64-bit machines.
Fixed by using ulonglong for max_binlog_cache_size, which is 8 bytes on both
32-bit and 64-bit machines. The range for max_binlog_cache_size on 32-bit and
64-bit systems is 4096-18446744073709547520 bytes.
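With the fix, a value above 4GB is accepted on 32-bit builds as well; an
illustrative assignment:
  SET GLOBAL max_binlog_cache_size = 8589934592;  -- 8 GB, within the new range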
Details:
Most tests mentioned within the bug report were already fixed.
The test modified here failed in stability (high parallel load) tests.
Details:
1. Take care that disconnects are finished before the test terminates.
2. Correct the wrong handling of send/reap in events_stress, which caused
random garbled output.
3. Minor beautification of the script code.
It turns out that this test case no longer fails with the discrepancy
in numbers that was the original cause for disabling this test (and showed
potential genuine issues with the query cache). Therefore
this test is being enabled after some minor adjustment of error codes and
messages.
Details:
1. Add missing "disconnect <session>" statements.
2. Take care that the disconnects are finished when the test terminates.
3. Replace error names with error numbers.
4. Minor beautification of the script code.
Field_time::get_time() did not initialize some members of
MYSQL_TIME which led to valgrind warnings when those members
were accessed in Protocol_simple::store_time().
It is unlikely that this bug could result in wrong data
being returned, since Field_time::get_time() initializes the
'day' member of MYSQL_TIME to 0, so the value of 'day'
in Protocol_simple::store_time() would be 0 regardless
of the values for 'year' and 'month'.
In a UNION, if the last SELECT is written without braces and this
SELECT has an ORDER BY clause, that clause belongs to the
global UNION. It is parsed as part of the last SELECT
and is used further as the 'unit->global_parameters->order_list' value.
During DESCRIBE EXTENDED we call select_lex->print_order() for the
last SELECT, where the order fields refer to a tmp table
which has already been freed. This leads to a crash.
The fix is to clean up global_parameters->order_list
instead of fake_select_lex->order_list.
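A hedged repro sketch (hypothetical table):
  CREATE TABLE t1 (a INT);
  EXPLAIN EXTENDED SELECT a FROM t1 UNION SELECT a FROM t1 ORDER BY a;
  -- the trailing ORDER BY belongs to the whole UNION, not the last SELECT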