Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was an inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M<=65 and D<=30. my_decimal_precision_to_length() was used in
both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating value length without
adjusting precision and decimals. As a result, a DECIMAL
constant with more than 65 digits ended up with a length smaller
than its precision or decimals, which led to assertion failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as a part of the bugfix for bug #24907 to handle long
DECIMAL constants gracefully. Item_func::tmp_table_field()
(which is used for expressions) on the other hand was still
using a simplistic approach when creating a Field_new_decimal
from a DECIMAL expression.
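As an illustration of the intended semantics, here is a minimal
standalone model of the length calculation (names modeled on the
description above; the actual my_decimal_precision_to_length() in
sql/my_decimal.h differs in details):

  #include <algorithm>
  #include <cstdint>

  // Model only: length of the string form of a DECIMAL(precision, scale).
  // Field objects pass truncate=true, capping precision at 65; Item
  // constants pass truncate=false, so length never drops below precision.
  static const unsigned MAX_PRECISION= 65;  // DECIMAL_MAX_PRECISION

  uint32_t precision_to_length(unsigned precision, unsigned scale,
                               bool unsigned_flag, bool truncate)
  {
    if (truncate)
      precision= std::min(precision, MAX_PRECISION);
    // one char for the decimal point if scale > 0, one for a sign if signed
    return precision + (scale > 0 ? 1 : 0) + (unsigned_flag ? 0 : 1);
  }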
mysql-test/r/type_newdecimal.result:
Added a test case for bug #45262.
mysql-test/t/type_newdecimal.test:
Added a test case for bug #45262.
sql/item.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_cmpfunc.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_func.cc:
1. Use the new 'truncate' parameter in
my_decimal_precision_to_length().
2. Do not truncate decimal precision to DECIMAL_MAX_PRECISION
for additive expressions involving long DECIMAL constants.
3. Fixed an inconsistency in how DECIMAL constants and
expressions are handled for CREATE ... SELECT.
sql/item_func.h:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/item_sum.cc:
Use the new 'truncate' parameter in
my_decimal_precision_to_length().
sql/my_decimal.h:
Do not truncate precision to DECIMAL_MAX_PRECISION
when calculating length in
my_decimal_precision_to_length() if 'truncate' parameter
is FALSE.
sql/sql_select.cc:
1. Use the new 'truncate' parameter in
my_decimal_precision_to_length().
2. Use a more correct logic when adjusting value's length.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table already exists) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was only reset after error checking. Thus if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see
* in the pseudocode below). A crash happened if the table was opened again.
Fixed by resetting the flag before returning on error.
nested-loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1; *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
  assert(table->auto_increment_field_not_null == FALSE)
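A standalone model of the fix (simplified; the real change is in
select_insert::send_data() in sql/sql_insert.cc): the flag is reset on
every exit path, including the early error return marked with *.

  struct Table { bool auto_increment_field_not_null; };

  int send_data_model(Table *table, bool value_supplied, bool error)
  {
    table->auto_increment_field_not_null= value_supplied;
    if (error)
    {
      // Fix: unset the flag before returning, so the TABLE object goes
      // back to the table cache in the state open_table() asserts on.
      table->auto_increment_field_not_null= false;
      return 1;
    }
    // ... store the row ...
    table->auto_increment_field_not_null= false;
    return 0;
  }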
mysql-test/r/trigger.result:
Bug#44653: Test result
mysql-test/t/trigger.test:
Bug#44653: Test case
sql/sql_insert.cc:
Bug#44653: Fix: Make sure to unset this field before returning in case of error
1. BUG#45357 - 5.1.35 crashes with Failing assertion: index->type & DICT_CLUSTERED
2. Also fixes the compilation problem when the flag -DUNIV_MUST_NOT_INLINE is used
Detailed revision comments:
r5340 | marko | 2009-06-17 12:11:49 +0300 (Wed, 17 Jun 2009) | 4 lines
branches/5.1: row_unlock_for_mysql(): When the clustered index is unknown,
refuse to unlock the record.
(Bug #45357, caused by the fix of Bug #39320).
rb://132 approved by Sunny Bains.
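The shape of the r5340 change, sketched on a standalone model
(illustrative only; the real row_unlock_for_mysql() works on InnoDB's
dict_index_t and record locks):

  struct Index { bool clustered; };

  // Model only: if the clustered index the row lock belongs to cannot be
  // determined, refuse to unlock rather than risk touching a wrong record.
  void unlock_row_model(Index *clust_index)
  {
    if (clust_index == 0 || !clust_index->clustered)
      return;  // unknown clustered index: keep the lock
    // ... locate the clustered index record and release its lock ...
  }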
r5339 | marko | 2009-06-17 11:01:37 +0300 (Wed, 17 Jun 2009) | 2 lines
branches/5.1: Add missing #include "mtr0log.h" so that the code compiles
with -DUNIV_MUST_NOT_INLINE.
Details:
- Limit the queries to character sets and collations
which are most probably available in all build types.
But try to preserve the intention of the tests.
- Remove the variants adjusted to some build types.
Note:
1. The results of the review by Bar are included.
2. I was not able to check the correctness of this patch
on every existing build type and MySQL version,
so it could happen that the new test fails somewhere.
Inconsistent behavior of session variable max_allowed_packet
(and net_buffer_length); only assignment to the global variable
has any effect, without this being obvious to the user.
The patch for Bug#22891 is backported to 5.0, making the two
session variables read-only. As this is a backport to GA
software, the error used when trying to assign to the read-
only variable is ER_UNKNOWN_ERROR. The error message is the
same as in 5.1+.
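A standalone model of the read-only session variable behavior
(hypothetical names; the real class is sys_var_thd_ulong_session_readonly
in sql/set_var.h):

  enum VarScope { SCOPE_GLOBAL, SCOPE_SESSION };

  // Model only: SET SESSION is rejected, SET GLOBAL still works.
  struct SessionReadonlyUlong
  {
    unsigned long global_value;

    // Returns true on error, mirroring the server's check()/update()
    // style; the 5.0 backport reports ER_UNKNOWN_ERROR in this case.
    bool set(VarScope scope, unsigned long value)
    {
      if (scope == SCOPE_SESSION)
        return true;
      global_value= value;
      return false;
    }
  };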
mysql-test/t/variables.test:
Tests are changed to account for the new semantics, and assignment to the read-only variables is added to test
the emission of the correct error message.
sql/set_var.cc:
Both max_allowed_packet and net_buffer_length are changed
to be of type sys_var_thd_ulong_session_readonly. ER_UNKNOWN_ERROR is used to indicate an attempt to assign
to an instance of a read-only variable.
sql/set_var.h:
Class sys_var_thd_ulong_session_readonly is added.
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not large enough to store the
changes.
In a nutshell, to fix the bug, we save the position of the next character in the
cache before starting processing a statement. If there is a problem, we simply
restore the position thus removing any effect of the statement from the cache.
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for any non-transactional changes that do not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers to
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc, sql_update.cc and sql_insert.cc
modules but the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be either classified as DDL or DML because
DDLs that do not get into the cache must generate an incident event since they
cannot be rolled back.
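A standalone sketch of the save/restore idea (hypothetical types; the
real code saves and truncates positions in the binlog cache's IO_CACHE):

  #include <string>

  struct Cache { std::string buf; };  // stands in for the binlog cache

  // Model only: remember the write position before a statement and
  // truncate back to it on failure, so no partial events remain.
  bool log_statement(Cache *cache, const std::string &events, bool failed)
  {
    const std::string::size_type saved_pos= cache->buf.size();
    cache->buf.append(events);   // tentatively log the statement's events
    if (failed)
    {
      cache->buf.resize(saved_pos);  // wipe the statement from the cache
      return true;                   // caller must also undo the statement
    }
    return false;
  }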
Item_func_spatial_collection::val_str
When the concatenation function for geometry data collections
read the binary data, it was not rigorous in checking that
data was available, leading to invalid reads and crashes.
Fixed by making the checks stricter.
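The essence of the stricter checking, as a standalone helper
(illustrative; the real checks live in
Item_func_spatial_collection::val_str in sql/item_geofunc.cc):

  #include <cstdint>
  #include <cstring>

  // Model only: never read a 4-byte type code (or any field) without
  // first verifying that enough bytes remain in the input buffer.
  bool read_uint32(const char *&pos, const char *end, uint32_t *out)
  {
    if (end - pos < 4)   // check before reading, not after
      return false;
    memcpy(out, pos, 4);
    pos+= 4;
    return true;
  }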
mysql-test/r/gis.result:
Bug#44684: Test result
mysql-test/t/gis.test:
Bug#44684: Test case
sql/item_geofunc.cc:
Bug#44684: fix(es)
- Check that there are 4 bytes available for type code.
- Check that there is at least one point available for linestring.
- Check that there are at least 2 points in a polygon and
data for all the points.
This patch corrects a mistake in the test case for the patch for bug 43658.
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
The affected platforms appear to be limited to Linux.
mysql-test/r/query_cache_debug.result:
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
mysql-test/t/query_cache_debug.test:
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
The assertion in String::copy was added in order to avoid
valgrind errors when the destination was the same as the source.
Eased the restriction to allow for the case when str == NULL.
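Sketched on a standalone model, the eased assertion looks roughly like
this (the exact expression in sql/sql_string.cc may differ):

  #include <cassert>
  #include <cstring>

  // Model only: the self-copy assertion must tolerate str == NULL,
  // which requests an empty copy rather than aliasing the destination.
  void copy_model(char *dst, const char *str, size_t len)
  {
    assert(!str || str != dst);  // eased: a NULL source is now allowed
    if (str)
      memcpy(dst, str, len);
  }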
mysql-test/r/func_set.result:
Bug#45168: Test result
mysql-test/t/func_set.test:
Bug#45168: Test case
sql/item_strfunc.cc:
Bug#45168: Code cleanup and grammar correction in comment
sql/sql_string.cc:
Bug#45168: Fix
Early patch submitted for discussion.
It is possible for more than one thread to enter the condition
in query_cache_insert(), but the condition predicate signals
only one thread each time the cache status changes between
the following states: {NO_FLUSH_IN_PROGRESS, FLUSH_IN_PROGRESS,
TABLE_FLUSH_IN_PROGRESS}.
Consider three threads THD1, THD2, THD3
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status= FLUSH_IN_PROGRESS;
new writers are blocked.
THD2: select ... => Still got a writer and enters cond in
query_cache_insert
THD3: select ... => Still got a writer and enters cond in
query_cache_insert
THD1: flush tables => finished and signal status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?
This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are
queued and put on a condition wait. On each unlock the queue is signalled. This resolves
the issues with leftover threads. To ensure that no thread spends unnecessary
time waiting, a signal broadcast is issued every time a lock is taken before a full
cache flush.
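A standalone sketch of the locking scheme using the same pthread
primitives (illustrative; the real methods also handle the
LOCKED_NO_WAIT state used during a full cache flush):

  #include <pthread.h>

  // Model only: one mutex plus one condition variable; unlock broadcasts
  // so that no waiter is left sleeping (the THD3 scenario above).
  // Members must be set up with pthread_mutex_init/pthread_cond_init.
  struct QC_lock
  {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    bool            locked;

    void lock()
    {
      pthread_mutex_lock(&mtx);
      while (locked)                  // all waiters queue here
        pthread_cond_wait(&cond, &mtx);
      locked= true;
      pthread_mutex_unlock(&mtx);
    }

    bool try_lock()                   // fail instead of waiting
    {
      pthread_mutex_lock(&mtx);
      const bool acquired= !locked;
      if (acquired)
        locked= true;
      pthread_mutex_unlock(&mtx);
      return acquired;
    }

    void unlock()
    {
      pthread_mutex_lock(&mtx);
      locked= false;
      pthread_cond_broadcast(&cond);  // wake every waiter, not just one
      pthread_mutex_unlock(&mtx);
    }
  };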
mysql-test/r/query_cache_debug.result:
* Added test case for bug43758
mysql-test/t/query_cache_debug.test:
* Added test case for bug43758
sql/sql_cache.cc:
* Replaced calls to wait_while_table_flush_is_in_progress() with
calls to try_lock(), lock_and_suspend() and unlock().
* Renamed enumeration Cache_status to Cache_lock_status.
* Renamed enumeration items to UNLOCKED, LOCKED_NO_WAIT and LOCKED.
If the LOCKED_NO_WAIT lock type is used to lock the query cache, other
threads using try_lock() will fail to acquire the lock.
This is useful if the query cache is temporarily disabled due to
a full table flush.
sql/sql_cache.h:
* Replaced calls to wait_while_table_flush_is_in_progress() with
calls to try_lock(), lock_and_suspend() and unlock().
* Renamed enumeration Cache_status to Cache_lock_status.
* Renamed enumeration items to UNLOCKED, LOCKED_NO_WAIT and LOCKED.
If the LOCKED_NO_WAIT lock type is used to lock the query cache, other
threads using try_lock() will fail to acquire the lock.
This is useful if the query cache is temporarily disabled due to
a full table flush.
times (i.e. 2:16:20).
mysql-test/r/log_tables_debug.result:
Update test case result.
mysql-test/t/log_tables_debug.test:
Skip spaces and handle case when a leading zero is not printed.
statements missed from general log
A FLUSH LOGS is added to ensure that the log info hits
the file before attempting to process it.
mysql-test/t/log_tables_debug.test:
A FLUSH LOGS is added, and in the event that a match is
not found, <FILE> is reset and the contents of the log
file are dumped for debugging purposes.
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (and thus closing the cursor etc),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure in an automatic
variable and restore it after it's done.
As a result, at exit from filesort() we have sort.io_cache filled in and
nothing else (because the cursors were closed at the end of reading the
data through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it has already been used by filesort() to read the data in). While doing
that, a special case in the index merge destructor will clean up the
sort.io_cache, assuming it is an output of the index merge method and is no
longer needed.
As a result the code that tries to read the data back from the filesort output
will find no data, either in memory or on disk, and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is safe to do because all the structures have already been cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()),
and the cleanup code is written in a way that tolerates repeated cleanups.
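The save/detach/restore pattern, as a standalone model (hypothetical
types; the real code copies sort.io_cache around the
QUICK_INDEX_MERGE_SELECT destructor in sql/sql_select.cc):

  struct IO_cache;                     // opaque here
  struct Sort { IO_cache *io_cache; };

  // Model only: hide the filesort output from cleanup code that would
  // otherwise free it, then put the valid pointer back afterwards.
  void cleanup_preserving_result(Sort *sort, void (*cleanup_quick)())
  {
    IO_cache *saved= sort->io_cache;   // keep the filesort output
    sort->io_cache= 0;                 // the destructor now frees nothing
    cleanup_quick();                   // ~QUICK_INDEX_MERGE_SELECT runs
    sort->io_cache= saved;             // restore the valid pointer
  }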
mysql-test/r/index_merge.result:
Bug #44810: test case
mysql-test/t/index_merge.test:
Bug #44810: test case
sql/sql_select.cc:
Bug #44810: preserve the io_cache produced by filesort while cleaning up
the index merge quick access method (QUICK_INDEX_MERGE_SELECT).
The SQL mode PAD_CHAR_TO_FULL_LENGTH could prevent a DROP USER
statement from removing privileges associated with the user being dropped.
What occurred was that reading from the User and Host fields of
the tables_priv or columns_priv tables would yield values padded
with spaces, causing a failure to match a specified user or host
('user' != 'user ').
The solution is to disregard the PAD_CHAR_TO_FULL_LENGTH mode
when iterating over and matching values in the privilege tables
for a DROP USER statement.
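A standalone sketch of the mode masking (MODE_PAD_CHAR_TO_FULL_LENGTH is
the real flag name, its bit value here is hypothetical; the save/restore
frame models the change in sql/sql_acl.cc):

  typedef unsigned long long sql_mode_t;
  static const sql_mode_t MODE_PAD_CHAR_TO_FULL_LENGTH= 1ULL << 31;  // hypothetical bit

  // Model only: drop the padding mode while scanning the privilege
  // tables so 'user' compares equal to the stored, unpadded value.
  void drop_user_privileges_model(sql_mode_t *sql_mode)
  {
    const sql_mode_t saved= *sql_mode;
    *sql_mode&= ~MODE_PAD_CHAR_TO_FULL_LENGTH;
    // ... iterate tables_priv / columns_priv and match User/Host ...
    *sql_mode= saved;
  }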
mysql-test/r/sql_mode.result:
Add test case result for Bug#45100.
mysql-test/t/sql_mode.test:
Add test case for Bug#45100.
sql/sql_acl.cc:
Clear MODE_PAD_CHAR_TO_FULL_LENGTH before dropping privileges.
statements missed from general log
A refinement of the test in the previous patch to avoid
using sleep as a means to ensure that timestamps are
added to the log entries.
mysql-test/t/log_tables_debug.test:
New test file. A debug feature is used to ensure that
log entries are prefixed with a timestamp.
sql/log.cc:
A debug feature is implemented to ensure that
log entries are prefixed with a timestamp.
WHERE and GROUP BY clause
Loose index scan may use range conditions on the argument of
the MIN/MAX aggregate functions to find the beginning/end of
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum
values. When the comparison returns 0 (equal) the code should
check the type of the min/max values of the current interval
and accept or reject the row based on whether the limit is
open or not.
The composite condition checking this was wrong and did not
work in all cases.
Fixed by simplifying the conditions and reversing the logic.
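The decision at an interval boundary, as a standalone model (simplified
to one edge; the real logic in sql/opt_range.cc handles composite keys
and both edges):

  // Model only: when the stored key compares equal to the range's
  // minimum, the row is usable only if that edge is closed (inclusive).
  bool usable_at_min_edge(int cmp, bool min_edge_open)
  {
    if (cmp > 0)  return true;   // strictly above the minimum: inside
    if (cmp < 0)  return false;  // below the minimum: outside
    return !min_edge_open;       // equal: accept only a closed boundary
  }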
mysql-test/r/group_min_max.result:
Bug #45386: test case
mysql-test/t/group_min_max.test:
Bug #45386: test case
sql/opt_range.cc:
Bug #45386: fix the check of whether to use the value when it is
on the interval boundary
Range analysis did not request sorted output from the storage engine,
which caused partitioned handlers to process one partition at a time
while reading key prefixes in ascending order, causing values to be
missed. Fixed by always requesting sorted order during range analysis.
This fix was introduced in 6.0 by the fix for bug no 41136.
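The contract, modeled on a hypothetical handler interface (the real
handler::ha_index_init() takes a comparable 'sorted' flag):

  // Model only: 'sorted' asks the engine to merge partitions so that
  // keys arrive in global ascending order instead of per partition.
  struct handler_model
  {
    virtual int index_init(unsigned key_nr, bool sorted)= 0;
    virtual ~handler_model() {}
  };

  int start_range_analysis_scan(handler_model *file, unsigned key_nr)
  {
    // Fix: always request sorted output during range analysis.
    return file->index_init(key_nr, /* sorted= */ true);
  }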
mysql-test/r/group_min_max.result:
Bug#44821: Test result.
mysql-test/t/group_min_max.test:
Bug#44821: Test case
sql/opt_range.cc:
Bug#44821: Fix.
This test uses SHOW STATUS and the like, which may be unstable in the face
of logging to table, since the CSV handler is actively executing operations
and thus incrementing the counters.
Fixed by disabling logging to table for the duration of the test and restoring
it afterwards. This causes various counters to properly start counting from zero
and never advance due to CSV operations.
memory issue?
The mysql command line client could misinterpret some character
sequences as commands under some circumstances.
The upper limit for the internal readline buffer was raised to 1 GB
(the same as the server's max_allowed_packet) so that any input
line is processed by add_line() as a whole rather than in
chunks.
client/mysql.cc:
The upper limit for the internal readline buffer was raised to 1 GB
(the same as the server's max_allowed_packet) so that any input
line is processed by add_line() as a whole rather than in
chunks.
mysql-test/r/mysql-bug45236.result:
Added a test case for bug #45236.
mysql-test/t/mysql-bug45236.test:
Added a test case for bug #45236.
Needed for substitution in some tests.
mysql-test/suite/funcs_1/t/ndb_storedproc_06.tes:
Remove unused file.
mysql-test/suite/funcs_1/t/ndb_storedproc_08.tes:
Remove unused file.
mysql-test/suite/ndb/my.cnf:
Export the socket path.
the --big-test flag is supplied. The test is too resource intensive
under normal Valgrind runs (it takes more than 30 minutes on powerful
hardware).
mysql-test/include/no_valgrind_without_big.inc:
Add MTR prerequisite file by Matthias Leich.
mysql-test/suite/funcs_1/t/myisam_views.test:
Test is too resource intensive under "Valgrind".