timestamp primary key
Since TIMESTAMP values are adjusted by the current time zone
settings in both numeric and string contexts, using any
expressions involving TIMESTAMP values as a
(sub)partitioning function leads to non-deterministic behavior of
partitioned tables. The effect may vary depending on the storage
engine: either incorrect data is retrieved or stored, or an
assertion fails. The root cause is that the calculated partition ID
may differ from a previously calculated ID for the same data, due to
time zone adjustments of the partitioning expression value.
Fixed by disallowing expressions involving TIMESTAMP values in
partitioning functions, with the following two exceptions:
1. Creating a partitioned table that violates the above rule, or
altering a table into one, is not allowed, but opening such existing
tables results in a warning rather than an error, so that they can
be fixed.
2. UNIX_TIMESTAMP() is the only way to get a
timezone-independent value from a TIMESTAMP column, because it
returns the internal representation (a time_t value) of a
TIMESTAMP argument verbatim. So UNIX_TIMESTAMP(timestamp_column)
is allowed and should be used to fix existing tables if one
wants to use TIMESTAMP columns with partitioning.
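For illustration, a minimal sketch of a rejected and an allowed
definition (table and column names are hypothetical):

  -- Rejected: YEAR(ts) depends on the session time zone.
  CREATE TABLE t1 (ts TIMESTAMP)
  PARTITION BY RANGE (YEAR(ts)) (
    PARTITION p0 VALUES LESS THAN (2010),
    PARTITION p1 VALUES LESS THAN MAXVALUE
  );

  -- Allowed: UNIX_TIMESTAMP() returns the time-zone-independent
  -- internal representation (a time_t value) of the TIMESTAMP column.
  CREATE TABLE t2 (ts TIMESTAMP)
  PARTITION BY RANGE (UNIX_TIMESTAMP(ts)) (
    PARTITION p0 VALUES LESS THAN (UNIX_TIMESTAMP('2010-01-01 00:00:00')),
    PARTITION p1 VALUES LESS THAN MAXVALUE
  );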
Constant expressions in WHERE/HAVING/ON clauses aren't cached and are
evaluated for each row. This slows down query execution, especially if
constant UDF/SP functions are used.
Now WHERE/HAVING/ON expressions are analyzed top-down with the help of
the compile function. When the analyzer meets a constant item, it sets
a flag for the tree transformer to cache the item and doesn't allow the
tree walker to go deeper. Thus, the topmost item of a constant
expression is cached. This is done after all other optimizations have
been applied to the WHERE/HAVING/ON expressions.
A helper function called cache_const_exprs is added to the JOIN class.
It calls the compile method, with the caching analyzer and transformer,
on the WHERE, HAVING and ON expressions if they're present.
The cache_const_expr_analyzer and cache_const_expr_transformer functions
are added to the Item class. The first one checks whether an item can be
cached and the second caches it if so.
A new Item_cache_datetime class is derived from the Item_cache class.
It caches both the int and string values of the underlying item
independently, to avoid a DATETIME-aware int-to-string conversion.
Thus it relies completely on the ability of the underlying item to
correctly convert a DATETIME value from int to string and vice versa.
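A sketch of the kind of query that benefits (all names are
hypothetical):

  CREATE FUNCTION f_const() RETURNS INT DETERMINISTIC RETURN 42;
  -- Before: f_const() + 1 was re-evaluated for every row of t.
  -- Now: the topmost constant item (f_const() + 1) is cached and
  -- evaluated only once per query execution.
  SELECT * FROM t WHERE col > f_const() + 1;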
Text conflict in mysql-test/collections/default.experimental
Text conflict in mysql-test/r/show_check.result
Text conflict in mysql-test/r/sp-code.result
Text conflict in mysql-test/suite/binlog/r/binlog_tmp_table.result
Text conflict in mysql-test/suite/rpl/t/disabled.def
Text conflict in mysql-test/t/show_check.test
Text conflict in mysys/my_delete.c
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/log.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/repl_failsafe.cc
Text conflict in sql/slave.cc
Text conflict in sql/sql_parse.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_yacc.yy
Text conflict in storage/myisam/ha_myisam.cc
Corrected results for
stm_auto_increment_bug33029.reject 2009-12-01 20:01:49.000000000 +0300
@@ -42,9 +42,6 @@
 RETURN i;
 END//
 CALL p1();
-Warnings:
-Note 1592 Statement may not be safe to log in statement format.
-Note 1592 Statement may not be safe to log in statement format.
There should indeed be no Note present, because there is in fact an
auto-increment top-level query in sp() that triggers inserting into yet
another auto-increment table.
(TODO: alert DaoGang to improve the test.)
The MySQL manual describes values of the YEAR(2) field type as follows:
values 00 - 69 mean the years 2000 - 2069, and values 70 - 99 mean the
years 1970 - 1999. MIN/MAX and comparison functions were comparing them
as int values, thus producing wrong results.
Now the Arg_comparator class is extended with a compare_year function
which performs correct comparison of the YEAR type.
The Item_sum_hybrid class now uses Item_cache and Arg_comparator objects
to correctly calculate its value.
To allow Arg_comparator to use the func_name() function for Item_func
and Item_sum objects, the func_name declaration is moved to the
Item_result_field class.
A helper function is_owner_equal_func is added to the Arg_comparator
class. It checks whether the Arg_comparator object's owner is the <=>
function or not.
A helper function setup is added to the Item_sum_hybrid class. It sets
up the cache item and comparator.
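A sketch of the wrong result this fixes (hypothetical table). With
YEAR(2), '00' means 2000 and '99' means 1999, so MAX must prefer '00':

  CREATE TABLE t (y YEAR(2));
  INSERT INTO t VALUES ('99'), ('00');  -- i.e. 1999 and 2000
  -- As plain ints 99 > 00, so MAX(y) wrongly returned 99 (1999);
  -- with the YEAR-aware comparator it correctly returns 00 (2000).
  SELECT MAX(y) FROM t;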
When values of different types are compared, they're converted to a
type that allows correct comparison. This conversion is done for each
comparison and takes some time. When a constant is being compared, it's
possible to cache the value after conversion to speed up comparisons.
In some cases (a large dataset, a complex WHERE condition with many
type conversions) a query might execute about 7% faster.
A test case isn't provided because all changes are internal and aren't
visible outside.
The behavior of the Item_cache is changed to cache values on the first
request for the cached value, rather than at the moment the item to be
cached is stored.
A flag named value_cached is added to the Item_cache class. It's set to
TRUE when the cache holds the value of the last stored item.
A function named cache_value() is added to the Item_cache class and
derived classes. This function actually caches the value of the saved
item.
The Item_cache_xxx::store functions now only store the item to be
cached and set the value_cached flag to FALSE.
The Item_cache_xxx::val_xxx functions are changed to call the
cache_value function prior to returning the cached value if
value_cached is FALSE.
The Arg_comparator::set_cmp_func function now calls cache_converted_constant
to cache constants if they need a type conversion.
The Item_cache::get_cache function is overloaded to allow setting of the
cache type.
The cache_converted_constant function is added to the Arg_comparator class.
It checks whether a value can and should be cached and if so caches it.
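A sketch of a comparison that benefits (table and column names are
hypothetical):

  -- The string literal must be converted to a DATETIME value for the
  -- comparison; the converted constant is now cached on first use and
  -- reused for every row instead of being recomputed per comparison.
  SELECT * FROM orders WHERE created_at > '2009-12-01 00:00:00';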
When a query used a DATE or DATETIME value formatted with any separator
characters other than hyphen '-', and contained a greater-or-equal '>='
condition matching only the greatest value in an indexed column, the
result was empty if an index range scan was employed.
The range optimizer got a new feature between 5.1.38 and
5.1.39 that changes a greater-or-equal condition to a
greater-than if the value matching that in the query was not
present in the table. But the value comparison function
compared the dates as strings instead of dates.
The bug was fixed by splitting the function
get_date_from_str in two: One part that parses and does
error checking. This function is now visible outside the
module. The old get_date_from_str now calls the new
function.
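A sketch of the failing shape (hypothetical table; note the '.'
separators, which MySQL accepts in date literals):

  CREATE TABLE t (d DATE, KEY (d));
  INSERT INTO t VALUES ('2009-12-01');
  -- '2009.12.01' denotes the greatest value in the index, but the
  -- string-wise comparison failed to see that, the '>=' was wrongly
  -- tightened to '>', and the range scan returned an empty result.
  SELECT * FROM t WHERE d >= '2009.12.01';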
2630.39.1, 2630.28.29, 2630.34.3, 2630.34.2, 2630.34.1, 2630.29.29,
2630.29.28, 2630.31.1, 2630.28.13, 2630.28.10, 2617.23.14 and
some other minor revisions.
This patch implements:
WL#4264 "Backup: Stabilize Service Interface" -- all the
server prerequisites except si_objects.{h,cc} themselves (they can
be just copied over, when needed).
WL#4435: Support OUT-parameters in prepared statements.
(and all issues in the initial patches for these two tasks that were
discovered in pushbuild and during testing).
Bug#39519: mysql_stmt_close() should flush all data
associated with the statement.
After execution of a prepared statement, send OUT parameters of the invoked
stored procedure, if any, to the client.
When using the binary protocol, send the parameters in an additional result
set over the wire. When using the text protocol, assign out parameters to
the user variables from the CALL(@var1, @var2, ...) specification.
The following refactoring has been made:
- Protocol::send_fields() was renamed to Protocol::send_result_set_metadata();
- A new Protocol::send_result_set_row() was introduced to encapsulate
common functionality for sending row data.
- The signature of Protocol::prepare_for_send() was changed: this operation
does not need a list of items; the number of items is fully sufficient.
The following backward incompatible changes have been made:
- CLIENT_MULTI_RESULTS is now enabled by default in the client;
- CLIENT_PS_MULTI_RESULTS is now enabled by default in the client.
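A sketch of the text-protocol behavior (procedure and variable names
are hypothetical):

  CREATE PROCEDURE p (OUT v INT) SET v = 42;
  PREPARE stmt FROM 'CALL p(@out)';
  EXECUTE stmt;
  SELECT @out;  -- 42: the OUT parameter was assigned to the user variable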
The problem was that the partition containing NULL values was pruned
away, since '2001-01-01' < '2001-02-00' but TO_DAYS('2001-02-00') is
NULL.
The NULL partition is now scanned too for RANGE/LIST partitioning on
the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
There was a problem since pruning uses the field for comparison (while
evaluate_join_record uses longlong), resulting in pruning failures when
comparing DATE to DATETIME.
The fix is to always compare DATE vs DATETIME as DATETIME, by adding
' 00:00:00' to the DATE string.
Also added an optimization for comparison with 23:59:59, so that
DATETIME_col > '2001-02-03 23:59:59' becomes
TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead of '>='.
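A sketch of the pruning scenario (hypothetical table):

  CREATE TABLE t (d DATE)
  PARTITION BY RANGE (TO_DAYS(d)) (
    PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),
    PARTITION p1 VALUES LESS THAN MAXVALUE
  );
  -- '2001-02-00' has a zero day-of-month, so TO_DAYS('2001-02-00') is
  -- NULL and the partition holding NULL values was pruned away, even
  -- though '2001-01-01' < '2001-02-00'; that partition is now scanned
  -- too.
  SELECT * FROM t WHERE d < '2001-02-00';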
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision of the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
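A sketch of a statement that could trip the assertion before the fix
(the literal's scale of 36 exceeds the 30 a DECIMAL column can hold):

  -- The column type is now deduced with truncation of the fractional
  -- part instead of failing an assertion.
  CREATE TABLE t SELECT 1.000000000000000000000000000000000001 AS c;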
HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN, but before the run-time type calculation,
it causes the type calculations done in the IN function at run time
to get unexpected results, different from what was prepared at compile
time.
In the CASE ... WHEN ... THEN ... statement there was a similar
problem, which was solved by artificially adding a STRING argument to
the set of types of the IN/CASE arguments at compile time, so that if
any of the arguments of the CASE function changes its type to a string
it will still be covered by the information prepared at compile time.
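A hypothetical query shape illustrating the effect: at compile time IN
sees a numeric argument, but by the time the HAVING clause runs, the
cached copy of the grouped expression may be held in a string slot:

  SELECT a, MAX(b) FROM t GROUP BY a HAVING MAX(b) IN (1, 2);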
In the case of a ROW item, each compared pair did not check whether the
argument collations could be aggregated, and thus the appropriate item
conversion did not happen.
The fix is to add the check and conversion for ROW pairs.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
IS NULL was not checking the correct row in a HAVING context.
At the first row of a new group (where the HAVING clause is evaluated)
the column and SELECT list references in the HAVING clause should
refer to the last row of the previous group and not to the current one.
This was not done for IS NULL, because Item::is_null() has no
is_null_result() counterpart to access the data from the last row of
the previous group. Note that all the Item::val_xxx() functions (e.g.
Item::val_int()) do have such _result counterparts (e.g.
Item::val_int_result()).
Fixed by implementing Item::is_null_result() (similarly to
val_int_result()) and calling it instead of is_null() for column and
SELECT list references inside the HAVING clause.
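A sketch of a query with a SELECT list reference under IS NULL in
HAVING (hypothetical table):

  -- When the first row of a new group arrives, the HAVING clause is
  -- evaluated for the group that just ended, so IS NULL must read the
  -- cached value of the previous group's row, not the current row.
  SELECT a AS a_alias, COUNT(b) FROM t GROUP BY a HAVING a_alias IS NULL;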
The code that reads the value of a system variable was extracting its
value at PREPARE stage and substituting the value (as a constant) into
the parse tree.
Note that this must be a reversible transformation, i.e. it must be
reversed before each re-execution.
Unfortunately this cannot be done reliably with the current code,
because there are other, non-reversible parse tree transformations that
can interfere with this reversible transformation.
Fixed by resolving the value not at PREPARE, but at EXECUTE (as the
rest of such functions operate). Added a cache of the value (so that
it's constant throughout the execution of the query). Note that the
cache also caches NULL values.
Updated an obsolete related test suite (variables-big) and the code to
test the result type of system variables (as per bug 74).
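A sketch of the behavior after the fix (any session-settable system
variable would do):

  PREPARE stmt FROM 'SELECT @@max_join_size';
  SET @@max_join_size = 1000;
  EXECUTE stmt;  -- resolved at EXECUTE time, so it sees 1000; the value
                 -- is cached only for the duration of this one execution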
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause a similar discrepancy
don't go undetected.
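A hypothetical query shape with such a pulled-up aggregate:
COUNT(t1.a) references only outer columns, so it is aggregated in the
outer select even though it appears inside the subquery:

  SELECT (SELECT COUNT(t1.a) FROM t2 LIMIT 1) FROM t1 GROUP BY t1.b;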
Length value is the length of the field,
Max_length is the length of the field value.
So Max_length cannot be greater than Length.
The fix: corrected the calculation of the Item_empty_string item length.
(Patch applied and queued on demand of Trudy/Davi.)
This fix is for 5.0 only: back-porting the 6.0 patch manually.
The parser code in sql/sql_yacc.yy needs to be more robust to out of
memory conditions, so that when parsing a query fails due to OOM,
the thread gracefully returns an error.
Before this fix, a new/alloc returning NULL could:
- cause a crash, if dereferencing the NULL pointer,
- produce a corrupted parsed tree, containing NULL nodes,
- alter the semantics of a query, by silently dropping token values or
nodes.
With this fix:
- C++ constructors are *not* executed with a NULL "this" pointer
when operator new fails.
This is achieved by declaring "operator new" with a "throw ()" clause,
so that a failed new gracefully returns NULL on OOM conditions.
- calls to new/alloc are tested for a NULL result,
- The thread diagnostic area is set to an error status when OOM occurs.
This ensures that a request failing in the server properly returns an
ER_OUT_OF_RESOURCES error to the client.
- OOM conditions cause the parser to stop immediately (MYSQL_YYABORT).
This prevents causing further crashes when using a partially built parsed
tree in further rules in the parser.
No test scripts are provided, since automating OOM failures is not
instrumented in the server.
Tested under the debugger, to verify that an error in alloc_root causes
the thread to return gracefully all the way to the client application,
with an ER_OUT_OF_RESOURCES error.
WL#4165 Prepared statements: validation
WL#4166 Prepared statements: automatic re-prepare
Fixes
Bug#27430 Crash in subquery code when in PS and table DDL changed after PREPARE
Bug#27690 Re-execution of prepared statement after table was replaced with a view crashes
Bug#27420 A combination of PS and view operations cause error + assertion on shutdown
The basic idea of the patch is to keep track of table metadata between
prepared statement prepare and execute. If some table used in the statement
has changed, the prepared statement is re-prepared before execution.
See WL#4165 and WL#4166 contents and comments in the code for details
of the implementation.
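A sketch of the scenario these WLs address (hypothetical table):

  PREPARE stmt FROM 'SELECT * FROM t1';
  ALTER TABLE t1 ADD COLUMN c INT;  -- table metadata changes
  EXECUTE stmt;  -- the statement is transparently re-prepared
                 -- instead of crashing or returning stale metadata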