with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before being initialized. A new macro UNINIT_VAR() is introduced
for use in variable declarations; usage of LINT_INIT() will be
gradually deprecated. (A workaround is used for g++, pending a
patch for a g++ bug.)
GCC warnings about unused results (the warn_unused_result
attribute) from a number of system calls are also fixed; these
appear at least on later Ubuntu releases, where the usual
void-cast trick does not work.
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion, as decimal values can
have a higher precision than that supported by table columns.
The assertion could be triggered by creating a table from a
decimal value with a large (> 30) scale. There was also a
problem in calculating the number of digits in the integral and
fractional parts when both exceeded the maximum number of digits
permitted by the new decimal type.
The solution is to ensure that the truncation procedure is
executed when deducing a DECIMAL column from a decimal value of
higher precision. If the integer part is equal to or bigger than
the maximum precision for the DECIMAL type (65), it is truncated
to fit and the fractional part becomes zero. Otherwise, the
fractional part is truncated to fit into the space left after
the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
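A sketch of the statement shape that used to hit the assertion
(the table name and literal are illustrative; any decimal value
with scale > 30 qualifies):
  CREATE TABLE t1 SELECT 0.1234567890123456789012345678901 AS d;
  -- the literal has scale 31; the deduced column is now
  -- truncated to fit the DECIMAL limits instead of asserting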
If the log_bin_trust_function_creators option is not enabled,
creating a stored function requires one of the modifiers
DETERMINISTIC, NO SQL, or READS SQL DATA. Executing a stored
function should follow the same rules when binlogging is in
STATEMENT mode. However, this was not happening, and the wrong
error was reported: ER_BINLOG_ROW_RBR_TO_SBR.
The patch makes creation and execution consistent and reports
the correct error, ER_BINLOG_UNSAFE_ROUTINE, when a stored
function without one of the modifiers above is executed in
STATEMENT mode.
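For illustration, assuming binary logging is enabled in
STATEMENT mode and log_bin_trust_function_creators is OFF:
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  -- rejected with ER_BINLOG_UNSAFE_ROUTINE: no modifier given
  CREATE FUNCTION f2() RETURNS INT DETERMINISTIC RETURN 1;
  -- accepted; executing f2() in STATEMENT mode is also allowed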
Using DECIMAL constants with more than 65 digits in CREATE
TABLE ... SELECT led to bogus errors in release builds or
assertion failures in debug builds.
The problem was an inconsistency in how DECIMAL constants and
fields are handled internally. We allow arbitrarily long
DECIMAL constants, whereas DECIMAL(M,D) columns are limited to
M <= 65 and D <= 30. my_decimal_precision_to_length() was used
in both Item and Field code and truncated precision to
DECIMAL_MAX_PRECISION when calculating the value length, without
adjusting precision and decimals accordingly. As a result, a
DECIMAL constant with more than 65 digits ended up with a length
less than its precision or decimals, which led to assertion
failures.
Fixed by modifying my_decimal_precision_to_length() so that
precision is truncated to DECIMAL_MAX_PRECISION only for Field
objects, as indicated by the new 'truncate' parameter.
Another inconsistency fixed by this patch is how DECIMAL
constants and expressions are handled for CREATE ... SELECT.
create_tmp_field_from_item() (which is used for constants) was
changed as part of the fix for bug #24907 to handle long DECIMAL
constants gracefully. Item_func::tmp_table_field() (which is
used for expressions), on the other hand, was still using a
simplistic approach when creating a Field_new_decimal from a
DECIMAL expression.
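A sketch of the affected statement class (the literal is
illustrative; any DECIMAL constant wider than 65 digits
qualifies):
  CREATE TABLE t1 SELECT
    12345678901234567890123456789012345678901234567890.123456789012345678 AS c;
  -- 50 integer digits + 18 fractional digits = 68 > 65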
with gcc 4.3.2
Compiling MySQL with gcc 4.3.2 and later produces a number of
warnings, many of which are new with the recent compiler
versions.
This bug will be resolved in more than one patch to limit the
size of changesets. This is the first patch, fixing a number
of the warnings, predominantly "suggest using parentheses
around && in ||", and empty for and while bodies.
The crash happened due to a wrong max_length value set at the
Item_func_round::fix_length_and_dec() stage. The value was set
to args[0]->max_length, which is too big in the case of LONGTEXT
(LONGBLOB) fields.
The fix is to set max_length using the float_length() function.
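A hypothetical repro sketch (the table and value are
illustrative):
  CREATE TABLE t1 (a LONGTEXT);
  INSERT INTO t1 VALUES ('1.5');
  CREATE TABLE t2 SELECT ROUND(a) FROM t1;
  -- ROUND(a) no longer inherits the huge LONGTEXT max_length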
assertion .\filesort.cc, line 797
A query with an "ORDER BY @@some_system_variable" clause, where
@@some_system_variable is NULL, caused an assertion failure in
the filesort procedures.
The reason for the failure was the value of
Item_func_get_system_var::maybe_null: it was unconditionally
set to false even when the value of the variable was NULL.
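The failing pattern, kept abstract because the triggering
variable depends on the server build and configuration:
  -- not runnable as-is: @@some_system_variable stands for any
  -- system variable whose value is NULL on the running server
  SELECT * FROM t1 ORDER BY @@some_system_variable;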
The RAND(N) function, where N is a field of a "constant" table
(a table with a single row), failed with a SIGFPE.
Evaluation of RAND(N) relies on the constant status of its
argument. The server "seeded" the random value for each constant
argument only once, in the Item_func_rand::fix_fields method,
and then skipped the call to seed_random() in the
Item_func_rand::val_real method for such constant arguments.
However, the constant status of an argument may change after the
call to fix_fields if the argument is a field of a "constant"
table. Thus, pre-initializing the random value in the fix_fields
method is too early.
Initialization of the random value by seed_random() has been
removed from the Item_func_rand::fix_fields method. The
Item_func_rand::val_real method has been modified to call
seed_random() on the first evaluation of this method if an
argument is a function.
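A minimal repro sketch (the table definition is illustrative):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);   -- a single row makes t1 "constant"
  SELECT RAND(a) FROM t1;      -- previously failed with SIGFPE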
Problem: when storing the result of "SELECT ... INTO @var ..."
in variables, we used the val_xxx() methods, which return
results for the current row. So, in some cases (e.g. SELECT
DISTINCT, GROUP BY or HAVING) we got data from the first row of
a new group (where we evaluate the clause) instead of data from
the last row of the previous group.
Fix: use the val_xxx_result() counterparts to get proper
results.
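A hypothetical shape of the affected pattern, with an assumed
table t1(a, b):
  -- with the old code, @v could receive a value from the first
  -- row of a group instead of the group's final result
  SELECT MAX(a) INTO @v FROM t1 GROUP BY b HAVING b = 0;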
When the thread executing a DDL statement was killed after it
had finished execution but before writing the binlog event, the
error code in the binlog event could wrongly be set to
ER_SERVER_SHUTDOWN or ER_QUERY_INTERRUPTED.
This patch fixes the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also includes the following changes in order to
provide the test case:
1) modified mysqltest to support variables in the connection
command
2) modified mysql-test-run.pl to add a new variable,
MYSQL_SLAVE, for running the mysql client against the slave
mysqld.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- String lengths are guaranteed to fit in a uint.
The MATCH() function accepts a column list as an argument. It
was possible to override this requirement with an aliased
non-column select expression, which resulted in a server crash.
With this fix, aliased non-column select expressions are not
accepted by the MATCH() function, and an error is returned
instead.
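One hypothetical shape of the problem (the table and alias are
illustrative):
  CREATE TABLE t1 (a VARCHAR(64), FULLTEXT KEY (a));
  SELECT CONCAT(a) AS b, MATCH (b) AGAINST ('word') FROM t1;
  -- 'b' aliases an expression, not a column; this now returns
  -- an error instead of crashing the server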
in trigger
Interleaved calls to the mysql_change_user client function and
invocations of a trigger changing some user variable caused
memory corruption and a crash.
The mysql_change_user API call forces THD::cleanup() on the
server, which frees user variable entries. However, it didn't
reset Item_func_set_user_var::entry to NULL because
Item_func_set_user_var::cleanup() was not overridden. So
Item_func_set_user_var::entry held a pointer to freed memory,
which caused a crash.
The Item_func_set_user_var::cleanup method has been overridden
to clean up the Item_func_set_user_var::entry field.
We pretended that TIMEDIFF() would always return positive
results; this gave strange results in comparisons of the form
TIMEDIFF(low, hi) < TIME(0), where the function rendered a
negative result but the comparison still evaluated to false. We
also inadvertently dropped the sign when converting times to
decimal.
CAST(time AS DECIMAL) now handles the sign of times correctly.
TIMEDIFF() is marked up as signed. Time/date comparison code has
been switched to signed for clarity.
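Illustrative behaviour after the fix (expected results sketched
in the comments):
  SELECT TIMEDIFF('2008-01-01 00:00:00', '2008-01-02 00:00:00');
  -- -24:00:00
  SELECT TIMEDIFF('2008-01-01 00:00:00', '2008-01-02 00:00:00') < TIME(0);
  -- 1: the comparison now sees the sign
  SELECT CAST(TIMEDIFF('2008-01-01 00:00:00', '2008-01-02 00:00:00') AS DECIMAL);
  -- -240000: the sign survives the decimal conversion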
Several system variables did not behave as system variables
should. When trying to SET them or use them in a SELECT, they
were reported as "unknown system variable", yet they appeared in
SHOW VARIABLES.
This has been fixed by removing the "fixed_vars" array of
variables and integrating the variables into the normal system
variables chain. All of these variables now behave as read-only,
global-only variables: trying to SET them reports that they are
read-only, and trying to SELECT the session value reports that
they are global only. Selecting the global value works and
delivers the same value as SHOW VARIABLES.
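Sketch of the intended behaviour, using system_time_zone as an
assumed member of the former fixed_vars set:
  SET GLOBAL system_time_zone = 'UTC';  -- fails: read-only
  SELECT @@session.system_time_zone;    -- fails: global only
  SELECT @@global.system_time_zone;     -- works, matches SHOW VARIABLES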
variable settings (rpl_sys)
Problem: under certain conditions (e.g. user variable usage in
triggers), accessing a user-defined variable may use a variables
hash table that belongs to an already deleted thread. It happens
if
thd= new THD;
returns the same address as a just-deleted thd, because we use
if (stored_thd == thd)
as the check. That may lead to unpredictable results, a server
crash, etc.
Fix: use thread_id instead of the thd address to distinguish
threads.
Note: no simple and repeatable test case.
Item_func_div didn't calculate the precision of the result
properly. The result of 5/0.0001 is 50000, so we have to add the
decimals of the divisor to the planned precision.
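A sketch of the corrected arithmetic (the number of trailing
decimals depends on div_precision_increment):
  SELECT 5 / 0.0001;
  -- the result precision now has room for all five integer
  -- digits: 50000.0000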
per-file comments:
mysql-test/r/type_newdecimal.result
Bug#31616 div_precision_increment description looks wrong
test result fixed
mysql-test/t/type_newdecimal.test
Bug#31616 div_precision_increment description looks wrong
test case
sql/item_func.cc
Bug#31616 div_precision_increment description looks wrong
precision must be increased with args[1]->decimals parameter