Based on the current logic, objects of classes Item_func_charset and
Item_func_coercibility (responsible for CHARSET() and COERCIBILITY()
functions) are always considered constant.
However, SQL syntax allows their use in a non-constant manner, such as
CHARSET(t1.a), COERCIBILITY(t1.a).
In these cases, `used_tables()` reports the tables referenced by the function
arguments, creating an inconsistency: the item is marked as constant but still
depends on tables. This leads to crashes when conditions with
CHARSET()/COERCIBILITY() are pushed into derived tables.
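A rough reproduction sketch of the kind of query that crashed (table and
column names are hypothetical; it assumes the condition with CHARSET() gets
pushed into the materialized derived table):
CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8mb4);
INSERT INTO t1 VALUES ('x');
SELECT * FROM (SELECT a FROM t1 GROUP BY a) AS dt
WHERE CHARSET(dt.a) = 'utf8mb4' AND COERCIBILITY(dt.a) > 0;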
This commit addresses the issue by setting `used_tables()` to 0 for
`Item_func_charset` and `Item_func_coercibility`. Additionally, the items
now store the return values during the preparation phase and return
them during the execution phase. This ensures that the items do not call
their arguments' methods during execution and are truly constant.
Reviewer: Alexander Barkov <bar@mariadb.com>
The `Item` class methods `get_copy()`, `build_clone()`, and `clone_item()`
face an issue where they may be defined in a descendant class
(e.g., `Item_func`) but not in a further descendant (e.g., `Item_func_child`).
This can lead to scenarios where `build_clone()`, when operating on an
instance of `Item_func_child` with a pointer to the base class (`Item`),
returns an instance of `Item_func` instead of `Item_func_child`.
Since this limitation cannot be resolved at compile time, this commit
introduces runtime type checks for the copy/clone operations.
A debug assertion will now trigger in case of a type mismatch.
`get_copy()`, `build_clone()`, and `clone_item()` are no longer virtual,
but virtual `do_get_copy()`, `do_build_clone()`, and `do_clone_item()`
are added to the protected section of the class `Item`.
Additionally, const qualifiers have been added to certain methods
to enhance code reliability.
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
Starting with clang-16, MemorySanitizer appears to check that
uninitialized values are neither passed by value nor returned.
Previously, it was allowed to copy uninitialized data in such cases.
get_foreign_key_info(): Remove a local variable that was passed
uninitialized to a function.
DsMrr_impl: Initialize key_buffer, because DsMrr_impl::dsmrr_init()
is reading it.
test_bind_result_ext1(): MYSQL_TYPE_LONG is 32 bits, hence we must
use a 32-bit type, such as int. sizeof(long) differs between
LP64 and LLP64 targets.
Functions extracting non-negative datetime components:
- YEAR(dt), EXTRACT(YEAR FROM dt)
- QUARTER(dt), EXTRACT(QUARTER FROM dt)
- MONTH(dt), EXTRACT(MONTH FROM dt)
- WEEK(dt), EXTRACT(WEEK FROM dt)
- HOUR(dt)
- MINUTE(dt)
- SECOND(dt)
- MICROSECOND(dt)
- DAYOFYEAR(dt)
- EXTRACT(YEAR_MONTH FROM dt)
did not set their max_length properly, so in the DECIMAL
context they created a too small DECIMAL column, which
led to the 'Out of range value' error.
The problem is that most of these functions historically
returned the signed INT data type.
There were two simple ways to fix these functions:
1. Add +1 to max_length.
But this would also change their size in the string context
and create VARCHAR columns that are one character too long.
2. Preserve max_length, but change the data type from INT to INT UNSIGNED.
But this would break backward compatibility.
Also, using UNSIGNED is generally not desirable,
it's better to stay with signed when possible.
This fix implements another solution, which makes all these functions
work well in all contexts: int, decimal, string.
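An illustrative sketch of the three contexts (a hypothetical example; the
column types in the comments describe the intent of the fix, not verified
output):
CREATE OR REPLACE TABLE t1 AS
SELECT YEAR(DATE'2023-06-01') AS c_int,                  -- int context: still a signed INT
       COALESCE(YEAR(DATE'2023-06-01'), 0.5) AS c_dec,   -- decimal context: enough integer digits for 2023
       CONCAT(YEAR(DATE'2023-06-01')) AS c_str;          -- string context: no extra character for a sign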
Fix details:
- Adding a new special class Type_handler_long_ge0 - the data type
handler for expressions which:
* should look like normal signed INT
* but which are known not to return negative values
Expressions handled by Type_handler_long_ge0 store in Item::max_length
only the number of digits, without adding +1 for the sign.
- Fixing Item_extract to use Type_handler_long_ge0
for non-negative datetime components:
YEAR, YEAR_MONTH, QUARTER, MONTH, WEEK
- Adding a new abstract class Item_long_ge0_func, for functions
returning non-negative datetime components.
Item_long_ge0_func uses Type_handler_long_ge0 as the type handler.
The class hierarchy now looks as follows:
Item_long_ge0_func
  Item_long_func_date_field
    Item_func_to_days
    Item_func_dayofmonth
    Item_func_dayofyear
    Item_func_quarter
    Item_func_year
  Item_long_func_time_field
    Item_func_hour
    Item_func_minute
    Item_func_second
    Item_func_microsecond
- Cleanup: EXTRACT(QUARTER FROM dt) created an excessive VARCHAR column
in string context. Changing its length from 2 to 1.
During the 10.5->10.6 merge please use the 10.6 code on conflicts.
This is the 10.5 version of the patch (a backport of the 10.6 version).
Unlike the 10.6 version, it makes changes in plugin/type_inet/sql_type_inet.*
rather than in sql/sql_type_fixedbin.h
Item_bool_rowready_func2, Item_func_between, Item_func_in
did not check if a not-NULL argument of an arbitrary data type
can produce a NULL value on conversion to INET6.
This caused a crash on DBUG_ASSERT() in conversion failures,
because the function returned SQL NULL for something that
has Item::maybe_null() equal to false.
Adding code to set NULL-ability in such cases.
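A hypothetical reproduction sketch (the table and the string literal are made
up; 'garbage' stands for any value that cannot be converted to INET6):
CREATE TABLE t1 (a INET6 NOT NULL);
INSERT INTO t1 VALUES ('::1');
SELECT * FROM t1 WHERE a = 'garbage';
SELECT * FROM t1 WHERE a BETWEEN 'garbage' AND '::2';
SELECT * FROM t1 WHERE a IN ('garbage', '::2');
-- before the fix, debug builds could hit the DBUG_ASSERT(), because the items
-- returned SQL NULL while maybe_null() was false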
Details:
- Removing the code in Item_func::setup_args_and_comparator()
performing character set aggregation with optional narrowing.
This aggregation is done inside Arg_comparator::set_cmp_func_string().
So this code was redundant.
- Removing Item_func::setup_args_and_comparator(), as it got simplified to
just two lines:
convert_const_compared_to_int_field(thd);
return cmp->set_cmp_func(thd, this, &args[0], &args[1], true);
Using these lines directly in:
- Item_bool_rowready_func2::fix_length_and_dec()
- Item_func_nullif::fix_length_and_dec()
- Adding a new virtual method:
- Type_handler::Item_bool_rowready_func2_fix_length_and_dec().
- Adding checks detecting whether the data type conversion can return SQL NULL
to the following methods of Type_handler_inet6:
- Item_bool_rowready_func2_fix_length_and_dec
- Item_func_between_fix_length_and_dec
- Item_func_in_fix_comparator_compatible_types
The crash happened with an indexed virtual column whose
value is evaluated using a function that has a different meaning
in sql_mode='' vs sql_mode=ORACLE:
- DECODE()
- LTRIM()
- RTRIM()
- LPAD()
- RPAD()
- REPLACE()
- SUBSTR()
For example:
CREATE TABLE t1 (
b VARCHAR(1),
g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL,
KEY g(g)
);
So far we had replacement XXX_ORACLE() functions for all mentioned functions,
e.g. SUBSTR_ORACLE() for SUBSTR(). So it was possible to correctly re-parse
SUBSTR_ORACLE() even in sql_mode=''.
But it was not possible to re-parse the MariaDB version of SUBSTR()
after switching to sql_mode=ORACLE: it was mis-interpreted
as SUBSTR_ORACLE().
As a result, this combination worked fine:
SET sql_mode=ORACLE;
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode='';
INSERT ...
But the other way around it crashed:
SET sql_mode='';
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode=ORACLE;
INSERT ...
At CREATE time, SUBSTR was instantiated as Item_func_substr and printed
in the FRM file as substr(). At re-open time with sql_mode=ORACLE, "substr()"
was erroneously instantiated as Item_func_substr_oracle.
Fix:
The fix proposes a symmetric solution. It provides a way to re-parse reliably
all sql_mode dependent functions to their original CREATE TABLE time meaning,
no matter what the open-time sql_mode is.
We take advantage of the same idea we previously used to resolve sql_mode
dependent data types.
Now all sql_mode dependent functions are printed by SHOW using a schema
qualifier when the current sql_mode differs from the function sql_mode:
SET sql_mode='';
CREATE TABLE t1 ... SUBSTR(a,b,c) ..;
SET sql_mode=ORACLE;
SHOW CREATE TABLE t1; -> mariadb_schema.substr(a,b,c)
SET sql_mode=ORACLE;
CREATE TABLE t2 ... SUBSTR(a,b,c) ..;
SET sql_mode='';
SHOW CREATE TABLE t2; -> oracle_schema.substr(a,b,c)
Old replacement names like substr_oracle() are still understood for
backward compatibility and used in FRM files (for downgrade compatibility),
but they are not printed by SHOW any more.
Change history in the affected code:
- Since 10.4.8 (MDEV-20397 and MDEV-23311), functions ROUND(), CEILING(),
FLOOR() return a TIME value for a TIME input.
- Since 10.4.14 (MDEV-23525), MIN() and MAX() calculate a result for a TIME
input using val_native() rather than val_str().
Problem:
The patch for MDEV-23525 did not take into account combinations like
MIN(ROUND(time)), MAX(FLOOR(time)), etc.
MIN() and MAX() with ROUND(time), CEILING(time), FLOOR(time) as an argument
call the method val_native() of the underlying classes Item_func_round and
Item_func_int_val. However, these classes implemented the method val_native()
only as DBUG_ASSERT(0).
Fix:
This patch adds a TIME-specific code inside:
- Item_func_round::val_native()
- Item_func_int_val::val_native()
still with DBUG_ASSERT(0) for all other data types,
as other data types do not call val_native() of these classes.
We'll need a more generic solution eventually, e.g.
turn Item_func_round and Item_func_int_val into Item_handled_func.
However, this change would be too risky for 10.4 at this point.
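A hypothetical reproduction sketch of such combinations (made-up data):
CREATE TABLE t1 (id INT, t TIME(3));
INSERT INTO t1 VALUES (1,'10:20:30.123'),(1,'100:20:30.999');
SELECT MIN(ROUND(t)), MAX(FLOOR(t)), MAX(CEILING(t)) FROM t1 GROUP BY id;
-- before the fix, debug builds hit DBUG_ASSERT(0) in the val_native()
-- implementations of Item_func_round / Item_func_int_val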
'long long int'; cast to an unsigned type to negate this value ..
to itself in Item_func_mul::int_op and Item_func_round::int_op
Problems:
The code in multiple places in the following methods:
- Item_func_mul::int_op()
- longlong Item_func_int_div::val_int()
- Item_func_mod::int_op()
- Item_func_round::int_op()
did not properly check for corner values LONGLONG_MIN
and (LONGLONG_MAX+1) before doing negation.
This caused UBSAN to complain about undefined behaviour.
Fix summary:
- Adding helper classes ULonglong, ULonglong_null, ULonglong_hybrid
(in addition to their signed counterparts in sql/sql_type_int.h).
- Moving the code performing multiplication of ulonglong numbers
from Item_func_mul::int_op() to ULonglong_hybrid::ullmul().
- Moving the code responsible for extracting absolute values
from negative numbers to Longlong::abs().
It makes sure to perform negation without undefined behavior:
LONGLONG_MIN is handled in a special way.
- Moving negation related code to ULonglong::operator-().
It makes sure to perform negation without undefined behavior:
(LONGLONG_MAX + 1) is handled in a special way.
- Moving signed<=>unsigned conversion code to
Longlong_hybrid::val_int() and ULonglong_hybrid::val_int().
- Reusing old and new sql_type_int.h classes in multiple
places in Item_func_xxx::int_op().
Fix details (explain how sql_type_int.h classes are reused):
- Instead of straight negation of negative "longlong" arguments
*before* performing unsigned multiplication,
Item_func_mul::int_op() now calls ULonglong_null::ullmul()
using Longlong_hybrid_null::abs() to pass arguments.
This fixes undefined behavior N1.
- Instead of straight negation of "ulonglong" result
*after* performing unsigned multiplication,
Item_func_mul::int_op() now calls ULonglong_hybrid::val_int(),
which recursively calls ULonglong::operator-().
This fixes undefined behavior N2.
- Removing duplicate negating code from Item_func_mod::int_op().
Using ULonglong_hybrid::val_int() instead.
This fixes undefined behavior N3.
- Removing literal "longlong" negation from Item_func_round::int_op().
Using Longlong::abs() instead, which correctly handles LONGLONG_MIN.
This fixes undefined behavior N4.
- Removing the duplicate (negation related) code from
Item_func_int_div::val_int(). Reusing class ULonglong_hybrid.
There was no undefined behavior here.
However, this change revealed a bug in
"-9223372036854775808 DIV 1".
The removed negation code appeared to be incorrect when
negating +9223372036854775808. It returned the "out of range" error.
ULonglong_hybrid::operator-() now handles all values correctly
and returns +9223372036854775808 as a negation for -9223372036854775808.
Re-recording the previously wrong test results for
SELECT -9223372036854775808 DIV 1;
Now instead of "out of range", it returns -9223372036854775808,
which is the smallest possible value for the expression data type
(signed) BIGINT.
- Removing "no UBSAN" branch from Item_func_splus::int_opt()
and Item_func_minus::int_opt(), as it made UBSAN happy but
in RelWithDebInfo some MTR tests started to fail.
When a query does implicit grouping and join operation produces an empty
result set, a NULL-complemented row combination is generated.
However, constant table fields still show non-NULL values.
What happens is that end_send_group() is called with a
const row but without any rows matching the WHERE clause.
This last part is shown by 'join->first_record' not being set.
This causes item->no_rows_in_result() to be called for all items to reset
all sum functions to their initial state. However, fields are not set
to NULL.
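A hypothetical illustration (made-up tables), assuming t1 is optimized away as
a constant single-row table and the WHERE clause matches no rows:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
CREATE TABLE t2 (b INT);
INSERT INTO t2 VALUES (2);
SELECT t1.a, MAX(t2.b) FROM t1, t2 WHERE t2.b > 10;
-- expected: NULL, NULL; before the fix the constant table field t1.a could
-- still show 1 in the NULL-complemented row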
The used fix is to produce NULL-complemented records for constant tables
as well. Also, reset the constant table's records back in case we're
in a subquery which may get re-executed.
An alternative fix would have item->no_rows_in_result() also work
with Item_field objects.
There are some other issues with the code:
- join->no_rows_in_result_called is used but never set.
- Tables that are used with group functions are not properly marked as
maybe_null, which is required if the table rows should be regarded as
null-complemented (not existing).
- The code that tries to detect if mixed_implicit_grouping should be set
didn't take into account all usage of fields and sum functions.
- Item_func::restore_to_before_no_rows_in_result() called the wrong
function.
- join->clear() does not use a table_map argument to clear_tables(),
which caused it to ignore constant tables.
- unclear_tables() does not correctly restore the status to what it
was before clear_tables().
Main bug fix was to always use a table_map argument to clear_tables() and
always use join->clear() and clear_tables() together with unclear_tables().
Other fixes:
- Fixed Item_func::restore_to_before_no_rows_in_result()
- Set 'join->no_rows_in_result_called' when no_rows_in_result_set()
is called.
- Removed an unused argument from setup_end_select_func().
- More code comments
- Ensure that end_send_group() modifies the same fields as are in the
result set.
- Changed return_zero_rows() to use pointers instead of references,
similar to the rest of the code.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which leads
to more optimization possibilities. Additionally it enables detecting
unused variables that had an empty default constructor but were not
marked so explicitly.
A particular change was required following this patch: in sql/opt_range.cc,
result_keys, an unused variable of the template class Bitmap, now correctly
issues unused variable warnings.
Setting Bitmap template class constructor to default allows the compiler
to identify that there are no side-effects when instantiating the class.
Previously the compiler could not issue the warning, as it assumed that the
Bitmap class (being a template) might not perform a no-op in its default
constructor. This prevented the unused variable warning.
Rename to stress that this is a specific hack for Item_func_nextval
and should not be used for other items.
If a vcol uses Item_func_nextval, a corresponding table for the sequence
should be added to the prelocking list (in that sense NEXTVAL is not
simply a function, but more like a subquery), see add_internal_tables()
in DML_prelocking_strategy::handle_table(). At the moment it is only
implemented for DEFAULT, not for GENERATED ALWAYS AS, thus the
VCOL_NEXTVAL hack.
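A sketch of the case that is handled (hypothetical names; only the DEFAULT
form currently gets the sequence added to the prelocking list):
CREATE SEQUENCE s1;
CREATE TABLE t1 (a INT DEFAULT NEXTVAL(s1), b INT);
INSERT INTO t1 (b) VALUES (1);   -- s1 must be opened and locked together with t1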
it's not "non deterministic", it's completely defined
by @@rand_seed1 and @@rand_seed2. And as a session func it needs
to be re-fixed at the beginning of every statement.
Problem:
When calculating MIN() and MAX() in a query with GROUP BY, like this:
SELECT MIN(time_expr), MAX(time_expr) FROM t1 GROUP BY i;
the code in Item_sum_min_max::update_field() erroneously used
string format comparison, therefore '100:20:30' was considered as
smaller than '10:20:30'.
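A reproduction sketch based on the comparison above (hypothetical data):
CREATE TABLE t1 (i INT, t TIME);
INSERT INTO t1 VALUES (1,'10:20:30'),(1,'100:20:30');
SELECT MIN(t), MAX(t) FROM t1 GROUP BY i;
-- before the fix MIN() returned '100:20:30', because '100:20:30' sorts before
-- '10:20:30' when compared as strings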
Fix:
1. Implementing low level "native" related methods in class Time:
Time::Time(const Native &native) - convert native to Time
Time::to_native(Native *to, uint decimals) - convert Time to native
The "native" binary representation for TIME is equal to
the binary data format of Field_timef, which is used to
store TIME when mysql56_temporal_format is ON (default).
2. Implementing Type_handler_time_common "native" related methods:
Type_handler_time_common::cmp_native()
Type_handler_time_common::Item_val_native_with_conversion()
Type_handler_time_common::Item_val_native_with_conversion_result()
Type_handler_time_common::Item_param_val_native()
3. Implementing missing "native representation" related methods
in Field_time and Field_timef:
Field_time::store_native()
Field_time::val_native()
Field_timef::store_native()
Field_timef::val_native()
4. Implementing missing "native" related methods in all Items
that can have the TIME data type:
Item_timefunc::val_native()
Item_name_const::val_native()
Item_time_literal::val_native()
Item_cache_time::val_native()
Item_handled_func::val_native()
5. Marking Type_handler_time_common as "native ready".
So now Item_sum_min_max::update_field() calculates
values using min_max_update_native_field(),
which uses native binary representation rather than string representation.
Before this change, only the TIMESTAMP data type used native
representation to calculate MIN() and MAX().
Benchmarks (see more details in MDEV):
This change not only fixes the wrong result, but also
makes a "SELECT .. MAX.. GROUP BY .." query faster:
# TIME(0)
CREATE TABLE t1 (id INT, time_col TIME) ENGINE=HEAP;
INSERT INTO t1 VALUES (1,'10:10:10'); -- repeat this 1m times
SELECT id, MAX(time_col) FROM t1 GROUP BY id;
MySQL80: 0.159 sec
10.3: 0.108 sec
10.4: 0.094 sec (fixed)
# TIME(6):
CREATE TABLE t1 (id INT, time_col TIME(6)) ENGINE=HEAP;
INSERT INTO t1 VALUES (1,'10:10:10.999999'); -- repeat this 1m times
SELECT id, MAX(time_col) FROM t1 GROUP BY id;
MySQL80: 0.154 sec
10.3: 0.135 sec
10.4: 0.093 sec (fixed)
The code in Item_func_int_val::fix_length_and_dec_int_or_decimal()
incorrectly calculated the result data type for FLOOR()/CEIL(), so for example
the decimal(38,10) input created a decimal(28,0) result.
That was not correct, because one extra integer digit is needed.
floor(-9.9) -> -10
ceil(9.9) -> 10
Rewriting the code in a more straightforward way.
Additional changes:
- FLOOR() now takes into account the presence of the UNSIGNED
flag of the argument: FLOOR(unsigned decimal) does not need an extra digit.
- FLOOR()/CEILING() now preserve the UNSIGNED flag when the result
data type is decimal.
These changes give nicer data types.
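An illustrative sketch (hypothetical columns; the comments describe the intent
of the changes above, not verified output):
CREATE TABLE t1 (d DECIMAL(38,10), u DECIMAL(38,10) UNSIGNED);
CREATE OR REPLACE TABLE t2 AS
SELECT FLOOR(d) AS fd, CEILING(d) AS cd, FLOOR(u) AS fu FROM t1;
-- fd/cd now get one extra integer digit (room for FLOOR(-9.9) -> -10);
-- fu does not need the extra digit and keeps the UNSIGNED flag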
Changing the behaviour in the case of *INT and hex hybrid input:
- ROUND(x,NULL) creates a column with the same type as x.
The old code created a DOUBLE column, which was not relevant at all.
This change simplifies the code a lot.
- ROUND(x,non_constant) creates a column of the INT, BIGINT or DECIMAL
data type (depending on the exact type of x).
The old code created a column of the DOUBLE data type,
which led to precision loss. Hence MDEV-23366.
- ROUND(bigint_30,negative_constant) creates a column of the DECIMAL(30,0)
data type. The old code created DECIMAL(29,0), which looked strange:
the data type promoted to a higher one, but max length reduced.
Now the length attribute is preserved.
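A hedged sketch of the three cases above (made-up columns):
CREATE TABLE t1 (x BIGINT, p INT, b BIGINT(30));
CREATE OR REPLACE TABLE t2 AS
SELECT ROUND(x, NULL) AS c1,  -- same type as x (previously DOUBLE)
       ROUND(x, p)    AS c2,  -- BIGINT rather than DOUBLE
       ROUND(b, -2)   AS c3   -- DECIMAL(30,0), length attribute preserved
FROM t1;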
Item_func_round::fix_arg_int() did not take into account cases
when the result of ROUND(bigint_subject,negative_precision)
could go outside of the BIGINT range. The old code only incremented
max_length, but did not change the data type.
Fixing to extend the data type (together with max_length increment).
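An illustrative corner case (assuming rounding pushes the value above the
BIGINT range, so the result type has to be extended):
SELECT ROUND(9223372036854775807, -3);
-- the mathematical result 9223372036854776000 does not fit into BIGINT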
Fixing ROUND(date,0), TRUNCATE(date,x), FLOOR(date), CEILING(date)
to return the `int(8) unsigned` data type.
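An illustrative sketch (the comment describes the intended result type):
CREATE OR REPLACE TABLE t1 AS
SELECT ROUND(DATE'2023-06-01', 0) AS c1, FLOOR(DATE'2023-06-01') AS c2;
-- both columns are expected to be `int(8) unsigned` holding 20230601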
Details:
1. Cleanup: moving virtual implementations
- Type_handler_temporal_result::Item_func_int_val_fix_length_and_dec()
- Type_handler_temporal_result::Item_func_round_fix_length_and_dec()
to Type_handler_date_common. Other temporal data type handlers
override these methods anyway. So they were only DATE specific.
This change makes the code clearer.
2. Backporting DTCollation_numeric from 10.5, to reuse the code easier.
3. Adding the `preferred_attrs` argument to Item_func_round::fix_arg_int(). Now
Type_handler_xxx::Item_func_round_val_fix_length_and_dec() work as follows:
- The INT-alike and YEAR handlers copy preferred_attrs from args[0].
- The DATE handler passes explicit attributes, to get `int(8) unsigned`.
- The hex hybrid handler passes NULL, so fix_arg_int() calculates attributes.
4. Type_handler_date_common::Item_func_int_val_fix_length_and_dec()
now sets the type handler and attributes to get `int(8) unsigned`.
1. Fixing ROUND(x) and TRUNCATE(x,0) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
input to preserve the exact data type of the argument when it's possible.
2. Fixing FLOOR(x) and CEILING(x) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
to preserve the exact data type of the argument.
3. Adding dedicated Type_handler_year::Item_func_round_fix_length_and_dec()
to easier handle ROUND(x) and TRUNCATE(x,y) for the YEAR(2) and YEAR(4)
input. They still return INT(2) UNSIGNED and INT(4) UNSIGNED correspondingly,
as before.
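An illustrative sketch (hypothetical columns; the comment describes the
intended result types):
CREATE TABLE t1 (t TINYINT, s SMALLINT, y YEAR);
CREATE OR REPLACE TABLE t2 AS
SELECT ROUND(t) AS rt, TRUNCATE(s,0) AS ts, FLOOR(t) AS ft, ROUND(y) AS ry
FROM t1;
-- rt/ft keep TINYINT, ts keeps SMALLINT, ry stays INT(4) UNSIGNED as before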
Implementing dedicated fixing methods:
- Type_handler_bit::Item_func_round_fix_length_and_dec()
- Type_handler_bit::Item_func_int_val_fix_length_and_dec()
- Type_handler_typelib::Item_func_round_fix_length_and_dec()
because the inherited methods did not work well.
Fixing:
- Type_handler_typelib::Item_func_int_val_fix_length_and_dec
It did not work well, because it used args[0]->max_length to
calculate the result data type. In case of ENUM and SET it was
not correct, because in FLOOR() and CEILING() context
ENUM and SET return not more than 5 digits (65535 is the biggest
possible value).
Misc:
- Changing the API of
Type_handler_bit::Bit_decimal_notation_int_digits(const Item *item)
to a more generic form:
Type_handler_bit::Bit_decimal_notation_int_digits_by_nbits(uint nbits)
- Fixing Type_handler_bit::Bit_decimal_notation_int_digits_by_nbits() to
return the exact number of decimal digits for all nbits 1..64.
The old implementation was approximate.
This change gives better (more precise) data types.
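An illustrative sketch (hypothetical columns; the comment describes the
intended result types):
CREATE TABLE t1 (e ENUM('a','b'), s SET('x','y'), b BIT(64));
CREATE OR REPLACE TABLE t2 AS
SELECT FLOOR(e) AS fe, CEILING(s) AS cs, ROUND(b) AS rb FROM t1;
-- fe/cs are now sized for at most 5 digits (65535 is the biggest value);
-- rb is sized for the exact number of decimal digits of a 64-bit value (20)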
Item_func_div::fix_length_and_dec_temporal() set the return data type to
integer in case of @div_precision_increment==0 for temporal input with FSP=0.
This caused Item_func_div to call int_op(), which is not implemented,
so a crash on DBUG_ASSERT(0) happened.
Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
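A hypothetical reproduction sketch:
SET @@div_precision_increment = 0;
SELECT TIME'10:20:30' / 3;
-- previously an integer result type was chosen and the unimplemented int_op()
-- hit DBUG_ASSERT(0); now the result type is DECIMAL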
Bit operators (~ ^ | & << >>) and the function BIT_COUNT()
always called val_int() for their arguments.
It worked correctly only for INT type arguments.
In case of DECIMAL and DOUBLE arguments it did not work well:
the argument values were truncated to the maximum SIGNED BIGINT value
of 9223372036854775807.
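An illustrative example of the truncation (using a DECIMAL literal above the
signed BIGINT range):
SELECT 18446744073709551615.0 | 0, BIT_COUNT(18446744073709551615.0);
-- before the fix the DECIMAL argument was read via val_int() and truncated to
-- 9223372036854775807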
Fixing the code as follows:
- If the argument is of an integer data type,
it works using val_int() as before.
- If the argument is of some other data type, it gets the argument value
using val_decimal(), to avoid truncation, and then converts the result
to ulonglong.
Using Item_handled_func makes it easier to switch between the two approaches.
As an additional advantage, with Item_handled_func it will be easier
to implement overloading in the future, so data type plugins will be able
to define their own behaviour of bit operators and BIT_COUNT().
Moving the code from the former val_int() implementations
as methods to Longlong_null, to avoid code duplication in the
INT and DECIMAL branches.
The patch for `MDEV-20795 CAST(inet6 AS BINARY) returns wrong result`
unintentionally changed what Item_char_typecast::type_handler()
returns. This broke UNIONs with the BINARY() function, as the Aria
engine started to get columns of unexpected data types.
Restoring previous behaviour, to return
Type_handler::string_type_handler(max_length).
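A hypothetical illustration of the kind of statement affected:
CREATE TABLE t1 ENGINE=Aria AS
SELECT BINARY 'a' AS c UNION SELECT BINARY 'bb';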
The prototype for Item_handled_func::return_type_handler() has changed
from:
const Type_handler *return_type_handler() const
to:
const Type_handler *return_type_handler(const Item_handled_func *) const