This is similar to MySQL Worklog 3253, but with
a different implementation. The disk format and
SQL syntax are identical to MySQL 5.7.
Features supported:
- "Any" number of triggers per action and time
- Supports FOLLOWS and PRECEDES to be
able to put triggers in a certain execution order (see the example below).
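A minimal hedged sketch of the FOLLOWS/PRECEDES syntax (the table and trigger names here are made up for illustration):
CREATE TABLE t1 (a INT, msg VARCHAR(64));
CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.msg = 'tr1';
-- tr2 is placed after tr1 in the execution order (PRECEDES works the same way in reverse)
CREATE TRIGGER tr2 BEFORE INSERT ON t1 FOR EACH ROW FOLLOWS tr1
SET NEW.msg = CONCAT(NEW.msg, ',tr2');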
Implementation details:
- Class Trigger added to hold information about a trigger.
Before this, trigger information was stored in a set of lists in
Table_triggers_list and in Table_triggers_list::bodies.
- Each Trigger has a next field that points to the next Trigger with the
same action and time.
- When accessing a trigger, we now always access all linked triggers
- The lists are now only used to load and save trigger files.
- The MySQL trigger test case (trigger_wl3253) was added and we execute it
identically.
- Even more graceful handling of wrong trigger files than before. This
is useful if a trigger file uses functions or syntax not provided by
the server.
- Each trigger now has a "Created" field that shows when the trigger was
created, with 2 decimals.
Other comments:
- Many of the changes in test files were made because of the new "Created"
field in the trigger file. This shows up in SHOW ... TRIGGER and when
using information_schema.triggers.
- Don't check if all memory is released if one uses --gdb; this is needed
to be able to get a list from safemalloc of not-freed memory while
debugging.
- Added an option to trim_whitespace() to know how many prefix characters
were skipped.
- Changed a few ulonglong sql_mode to sql_mode_t, to find some wrong usage
of sql_mode.
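For illustration only, a hedged example of where the new "Created" field is visible (assuming a trigger on a table t1; exact output columns not reproduced here):
SHOW TRIGGERS LIKE 't1';
SELECT trigger_name, created FROM information_schema.triggers
WHERE event_object_table = 't1';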
Fix window function expressions such as win_func() <operator> expr.
The problem was found in 2 places.
First, when we have complex expressions containing window functions, we
can only compute their final value _after_ we have computed the window
function's values. These values must be stored within the temporary
table that we are using, before sending them off.
This is done by performing an extra copy_funcs call before the final
end_send() call.
Second, such expressions need to have their inner arguments changed so
that the references within those arguments point to fields within
the temporary table.
Ex: sum(t.a) over (order by t.b) + sum(t.a) over (order by t.b)
Before this fix, t.a pointed to the original table's a field. In order
to compute the sum function's value correctly, it needs to point to the
copy of this field inside the temp table.
This is done by calling split_sum_func for each argument in the
expression in turn.
The win.test results have also been updated as they contained wrong
values for such a use case.
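A hedged example of the kind of query affected, using the expression above (table t is hypothetical):
CREATE TABLE t (a INT, b INT);
INSERT INTO t VALUES (1,1),(2,2);
-- the addition can only be computed after both window function values
-- have been written to the temporary table
SELECT sum(t.a) OVER (ORDER BY t.b) + sum(t.a) OVER (ORDER BY t.b) FROM t;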
This bug in the code of Item_ref::build_clone could
cause corruption of items in where conditions.
Also made sure that equality predicates extracted
from multiple equality items to be pushed into
materialized views were cloned.
For materialized views and derived tables: there was no
push-down if the view was defined as a union of selects
without aggregation. Added test cases with such unions.
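A sketch of the newly covered case (names are made up); the outer condition can now be pushed into each select of the union:
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (a INT, b INT);
CREATE VIEW v1 AS SELECT a, b FROM t1 UNION SELECT a, b FROM t2;
SELECT * FROM v1 WHERE a > 10;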
Adjusted result files after the merge of the code for mdev-9197.
Adding Converter_double_to_longlong and reusing it in:
1. Field_longlong::store(double nr)
2. Field_double::val_int()
3. Item::val_int_from_real()
4. Item_dyncol_get::val_int()
As a good side effect, overflow during conversion in the mentioned
val_xxx() methods now produces exactly the same warning.
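A hedged illustration of two of the affected paths; assuming a table like the one below, both the INSERT into the BIGINT column and the CAST of the DOUBLE column are expected to produce the same out-of-range warning:
CREATE TABLE t1 (d DOUBLE, i BIGINT);
INSERT INTO t1 (i) VALUES (1e30);      -- Field_longlong::store(double)
INSERT INTO t1 (d) VALUES (1e30);
SELECT CAST(d AS SIGNED) FROM t1;      -- presumably via Field_double::val_int()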
* remove a confusing method name - Field::set_default_expression()
* remove handler::register_columns_for_write()
* rename stuff
* add asserts
* remove unlikely unlikely
* remove redundant if() conditions
* fix mark_unsupported_function() to report the most important violation
* don't scan vfield list for default values (vfields don't have defaults)
* move handling for DROP CONSTRAINT IF EXISTS to where it belongs
* don't protect engines from Alter_inplace_info::ALTER_ADD_CONSTRAINT
* comments
MDEV-10134 Add full support for DEFAULT
- Added support for using tables with MySQL 5.7 virtual fields,
including MySQL 5.7 syntax
- Better error messages also for old cases
- CREATE ... SELECT now also updates timestamp columns
- Blob can now have default values
- Added a new system variable, "check_constraint_checks", to turn off
CHECK constraint checking if needed (see the examples after this list).
- Removed some engine-independent tests in suite vcol to only test MyISAM
- Moved some tests from 'include' to 't'. Should some day be done for all tests.
- FRM version increased to 11 if one uses virtual fields or constraints
- Changed to use a bitmap to check if a field has got a value, instead of
setting HAS_EXPLICIT_VALUE bit in field flags
- Expressions can now be up to 65K in total
- Ensure we are not referring to uninitialized fields when handling virtual fields or defaults
- Changed check_vcol_func_processor() to return a bitmap of used types
- Had to change some functions that calculated a cached value in fix_fields() to do
this in val() or get_date() instead.
- store_now_in_TIME() now takes a THD argument
- fill_record() now updates default values
- Add a lookahead for NOT NULL, to be able to handle DEFAULT 1+1 NOT NULL
- Automatically generate a name for constraints that don't have a name
- Added support for ALTER TABLE DROP CONSTRAINT
- Ensure that partition functions register virtual fields used. This fixes
some bugs when using virtual fields in a partitioning function
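A few hedged examples of the user-visible pieces listed above (table and constraint names are invented):
CREATE TABLE t1 (
  a INT DEFAULT 1+1 NOT NULL,            -- expression default followed by NOT NULL
  b BLOB DEFAULT 'initial value',        -- blobs can now have default values
  c INT,
  CONSTRAINT c_positive CHECK (c > 0)
);
ALTER TABLE t1 DROP CONSTRAINT c_positive;
SET @@check_constraint_checks = 0;       -- turn CHECK constraint checking off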
Use Item->neg to generate negative Item_num's
instead of Item_func_neg(Item_num).
Based on the following commit:
Author: Monty <monty@mariadb.org>
Date: Mon May 30 22:44:00 2016 +0300
Make negative numbers their own token
The negation (-) operator will call Item->neg() on underlying numeric constants
and remove itself (like the NOT() function does today for other NOT functions).
This simplifies things:
- -1 is no longer an expression but a basic_const_item
- improves the optimizer
- DEFAULT -1 doesn't need special handling anymore (see the example below)
- When we add DEFAULT expressions, -1 will be treated exactly like 1
- printing of items no longer puts braces around all negative numbers
Other things fixed:
- Fixed that a longlong converted to a decimal has a more appropriate size
- Fixed that "-0.0" read into a decimal is interpreted as 0.0
This is a preparatory patch to make fixing of a number
of bugs easier (MDEV-8919, MDEV-10304, MDEV-10305, MDEV-10307):
- Adding Item::push_note_converted_to_negative_complement() and
Item::push_note_converted_to_positive_complement()
- Adding virtual methods Item::val_int_signed_typecast() and
Item::val_int_unsigned_typecast()
- Moving COLUMN_GET() related code from
Item_func_signed::val_int() and Item_func_unsigned::val_int() to
Item_dyncol_get::val_int_signed_typecast() and
Item_dyncol_get::val_int_unsigned_typecast()
- Moving Item_func_signed::val_int_from_str() to Item::val_int_from_str()
and changing it to get the value from "this" instead of args[0].
The patch does not change behaviour. It's only to simplify fixing of the
mentioned bugs. It will also make it easier to switch the CAST-related code
to the type handler infrastructure (soon).
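A hedged sketch of the statements whose code paths are reorganized here; the mapping to the new methods is an assumption about the call chain:
SELECT COLUMN_GET(COLUMN_CREATE('a', -1), 'a' AS SIGNED);    -- Item_dyncol_get::val_int_signed_typecast()
SELECT COLUMN_GET(COLUMN_CREATE('a', 1), 'a' AS UNSIGNED);   -- Item_dyncol_get::val_int_unsigned_typecast()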
Window functions need to have their own column in the work (temp) table,
like aggregate functions do.
They don't need val_int() -> val_int_result() conversion though, so they
should be wrapped with Item_direct_ref, not Item_aggregate_ref.
Made the class Item_window_func a descendant of Item_func_or_sum.
Implemented method update_used_tables for class Item_window_func.
Added the flag Item::with_window_func.
Made sure that window functions could be used only in SELECT list
and ORDER BY clause.
Added test cases that checked different illegal placements of
window functions.
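A hedged sketch of an allowed and a rejected placement (table name is made up; exact error message omitted):
CREATE TABLE t1 (a INT, b INT);
SELECT a, row_number() OVER (ORDER BY b) FROM t1;            -- allowed: SELECT list
SELECT a FROM t1 WHERE row_number() OVER (ORDER BY b) = 1;   -- rejected: WHERE clause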
1. Fixing Field_time::get_equal_const_item() to pass TIME_FUZZY_DATES
and TIME_INVALID_DATES to get_time_with_conversion().
This is needed to make the recursively called Item::get_date() return
non-NULL values on garbage input. This makes Field_time::get_equal_const_item()
work consistently with how Item::val_time_packed() works.
2. Fixing Item::get_date() to return TIME'00:00:00' rather than
DATE'0000-00-00' on empty or garbage input when:
- TIME_FUZZY_DATES is enabled
- The caller requested a TIME value (by passing TIME_TIME_ONLY).
This is needed to avoid conversion of DATE'0000-00-00' to TIME
in get_time_with_conversion(), which would erroneously try to subtract
CURRENT_DATE from DATE'0000-00-00' and return TIME'-838:59:59' rather than
the desired zero value TIME'00:00:00'.
#1 and #2 fix this type of script to return one row with both
MyISAM and InnoDB, with and without an index on t1.b:
CREATE TABLE t1 (a ENUM('a'), b TIME, c INT, KEY(b));
INSERT INTO t1 VALUES ('','00:00:00',0);
SELECT * FROM t1 WHERE b='';
SELECT * FROM t1 WHERE a=b;
SELECT * FROM t1 IGNORE INDEX(b) WHERE b='';
SELECT * FROM t1 IGNORE INDEX(b) WHERE a=b;
Additionally, #1 and #2 fix the crash originally reported in MDEV-9604
in Item::save_in_field(), because now execution goes through a different
path, so save_in_field() is called for an Item_time_literal instance
(which is non-NULL) rather than an Item_cache_str instance (which could
return NULL without setting null_value).
3. Fixing Field_temporal::get_equal_const_item_datetime() to enable
equal field propagation for DATETIME and TIMESTAMP in case of
comparison (e.g. when ANY_SUBST), for symmetry with
Field_newdate::get_equal_const_item(). This fixes a number of problems
with empty set returned on comparison to empty/garbage input.
Now all SELECT queries in this script return one row for MyISAM and InnoDB,
with and without an index on t1.b:
CREATE TABLE t1 (a ENUM('a'), b DATETIME, c INT, KEY(b));
INSERT INTO t1 VALUES ('','0000-00-00 00:00:00',0);
SELECT * FROM t1 WHERE b='';
SELECT * FROM t1 WHERE a=b;
SELECT * FROM t1 IGNORE INDEX(b) WHERE b='';
SELECT * FROM t1 IGNORE INDEX(b) WHERE a=b;
INTERVALS
ISSUE:
------
Some string functions return one or a combination of the
parameters as their result. Here the resultant string's
charset could be incorrectly set to that of the chosen
parameter.
This results in incorrect behavior when an ascii string is
expected.
SOLUTION:
---------
Since an ascii string is expected, val_str_ascii should
explicitly convert the string.
Part of the fix is a backport of Bug#22340858 for mysql-5.5
and mysql-5.6.
When doing set_field_to_new_field() (from switch_to_nullable_trigger_fields()),
make sure that the field we're about to change actually belongs
to the right table (otherwise we cannot dereference the new_field[]
array, as the wrong table might have more fields than
new_field[] has elements).
Case: table with a NOT NULL field, BEFORE UPDATE trigger,
and UPDATE with a subquery that uses GROUP BY on that
NOT NULL field, and needs a temporary table for it.
Because of the BEFORE trigger, the field becomes nullable
temporarily. But its Item_field (used in GROUP BY) doesn't.
When working with the temp table, some code looked at
item->maybe_null, and some at field->null_ptr.
The fix: make Item_field nullable when its field is.
This triggers an assert. The group key size is calculated
before the item is made nullable, so the group key doesn't
have a null byte. The fix: make fields/items nullable
before the group key size is calculated.
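A rough, hedged sketch of the shape of the failing case described above (not the actual regression test; all names are made up):
CREATE TABLE t1 (a INT NOT NULL, b INT);
CREATE TABLE t2 (c INT);
-- the BEFORE UPDATE trigger makes t1's fields temporarily nullable
CREATE TRIGGER tr1 BEFORE UPDATE ON t1 FOR EACH ROW SET NEW.b = OLD.b;
-- the subquery groups by the NOT NULL field t1.a and may need a temporary table
UPDATE t1 SET b = (SELECT MAX(t2.c) FROM t2 GROUP BY t1.a);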