Unlike GCC, clang could optimize away alloca() and thus the
ALLOCATE_MEM_ON_STACK() instrumentation. To make it harder, let us
invoke a non-inline function on the entire allocated buffer.
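A minimal sketch of the idea, assuming a hypothetical non-inline helper
(the real macro body and helper name in the source differ):

  #include <alloca.h>
  #include <cstddef>

  /* Defined in another translation unit so the compiler cannot inline it
     and prove the buffer unused; the helper name here is illustrative. */
  extern "C" void touch_stack_buffer(void *buf, size_t len);

  /* Passing the whole buffer to the non-inline helper keeps the alloca()
     allocation observable, so clang cannot elide it. */
  #define ALLOCATE_MEM_ON_STACK(size)            \
    do                                           \
    {                                            \
      void *stack_buf= alloca(size);             \
      touch_stack_buffer(stack_buf, (size));     \
    } while (0)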
This commit is a fixup for MDEV-28762
Analysis: Some recursive JSON functions don't check for stack overrun.
Fix: Add check_stack_overrun(). The last argument is NULL because it is not
used
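A hedged sketch of the guard pattern; the function shown and the
STACK_MIN_SIZE margin are illustrative, not the actual patched JSON
evaluators:

  /* Bail out early if the remaining stack falls below the safety
     margin; check_stack_overrun() reports the error itself. */
  static bool walk_json_recursively(THD *thd, json_engine_t *je)
  {
    if (check_stack_overrun(thd, STACK_MIN_SIZE, NULL))
      return true;                       /* error already raised */

    /* ... descend into nested arrays/objects, calling
       walk_json_recursively() for each child value ... */
    return false;
  }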
Analysis: There were two kinds of failing tests on buildbot with UBSAN.
1) runtime error: signed integer overflow and
2) runtime error: load of value is not valid value for type
The signed integer overflow occurred because the addition of two integers
(the size of the JSON array plus the item number in the array) overflowed
in json_path_parts_compare(). This overflow happened because a->n_item_end
wasn't set.
The second error occurred because c_path->p.types_used was not initialized,
yet the value was later used to check for a negative path index.
Fix: For the signed integer overflow, use a->n_item_end only in the range
case, where it is guaranteed to be set.
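A hedged sketch of the resulting guard (the field and flag names are the
ones mentioned in this message; the surrounding code is illustrative):

  /* n_item_end is read only for steps parsed as a range, so an
     uninitialized end index never enters the index arithmetic. */
  int end_idx= (a->type & JSON_PATH_ARRAY_RANGE) ?
               a->n_item_end :      /* set by the parser for 'a to b' */
               a->n_item;           /* single index: start == end     */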
A range can be thought of like the wildcard (*): more than one element is
processed. To implement the range notation, the JSON path parser is extended
to parse the 'to' keyword, and JSON_PATH_ARRAY_RANGE is added as a path type.
If the 'to' keyword is present, JSON_PATH_ARRAY_RANGE is used for the path
type along with the existing type.
The new integer that stores the end index of the range is n_item_end.
When the 'to' keyword is present, the end index is stored in n_item_end;
otherwise the index is stored in n_item.
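A minimal sketch of how a range step could be stored (n_item, n_item_end
and JSON_PATH_ARRAY_RANGE come from this message; the struct layout is
illustrative, the real json_path_step_t has more members):

  typedef struct st_json_path_step
  {
    int type;         /* JSON_PATH_ARRAY, JSON_PATH_ARRAY_RANGE, ...   */
    int n_item;       /* array index, or the start of a range          */
    int n_item_end;   /* end of the range, set only when 'to' was seen */
  } json_path_step_t;

  /* '$[2 to 4]' yields type |= JSON_PATH_ARRAY_RANGE, n_item= 2,
     n_item_end= 4, while '$[2]' sets only n_item= 2. */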
This patch can be viewed as a combination of two parts:
1) Enabling '-' in the path so that the parser does not give out a warning.
2) Setting the negative index to a correct value and returning the
appropriate value.
1) To enable using a negative index in the path:
To keep the parser from returning a warning when a negative index is used
in the path, '-' needs to be allowed among the JSON path characters. P_NEG
is added to enable this and is recognized by setting index 45 of
json_path_chr_map[] to P_NEG (instead of the previous P_ETC),
because 45 is the code point of '-'.
When the path is being parsed and '-' is encountered, the parser should
recognize it as a minus sign, so a new JSON parser state PS_NEG is required.
When the state is PS_NEG, a negative integer is about to be parsed, so
is_negative_index of the current step is set to 1, and n_item is set
accordingly when the integer after '-' is encountered.
Parsing then proceeds with the rest of the path to get the correct path.
The next step is parsing the JSON document and returning the correct value.
2) Setting the negative index to the correct value and returning the value:
While parsing the JSON, if we encounter an array and the path step for that
array has a negative index (n_item < 0), we count the number of elements in
the array and set n_item to the corresponding non-negative value. This is
done in json_skip_array_and_count().
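A hedged sketch of that normalization step; json_skip_array_and_count() is
the helper named above, but its exact signature, the surrounding code and
the return conventions are assumptions:

  if (cur_step->n_item < 0)                  /* negative index in the path */
  {
    int n_elements;
    if (json_skip_array_and_count(je, &n_elements))
      return 1;                              /* malformed JSON */
    /* e.g. n_item == -1 with 3 elements becomes index 2, the last one */
    cur_step->n_item+= n_elements;
    if (cur_step->n_item < 0)
      return 0;                              /* before the first element: no match */
  }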
Analysis: In case of an error while processing the JSON document, we go to
the error label, which eventually returns 1 instead of 0.
Fix: Return 0 instead of 1 in case of error.
1) When at least one of the two JSON documents is of scalar type:
1.a) If the value and the JSON document are both scalar, return true
if they have the same type and value.
1.b) If the JSON document is scalar and the other is an array (or vice
versa), return true if the array has at least one element of the same
type and value as the scalar.
1.c) If one is scalar and the other is an object, return false because
they can't be compared.
2) When both arguments are of non-scalar type, return true when the
conditions below are satisfied:
2.a) When both arguments are arrays:
Iterate over the value and the JSON document; return true if at least
one element of one array has the same type and value as an element of
the other array.
2.b) If both arguments are objects:
Iterate over the value and the JSON document; return true if at least
one key-value pair is common to both objects.
2.c) If one of the JSON document and the value is an array and the other
is an object:
Iterate over the array; if an element of type object is found, compare
it with the object (which is the other argument). Return true if the
entire object matches, i.e. all the key-value pairs match.
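A hedged sketch of the dispatch described above; all type tests and helper
names here are illustrative, not the actual server functions:

  static bool json_overlaps(const Json &a, const Json &b)
  {
    if (is_scalar(a) && is_scalar(b))                     /* 1.a */
      return same_type_and_value(a, b);
    if (is_scalar(a) || is_scalar(b))                     /* 1.b and 1.c */
    {
      const Json &scalar= is_scalar(a) ? a : b;
      const Json &other=  is_scalar(a) ? b : a;
      return is_array(other) && array_contains(other, scalar);
    }
    if (is_array(a) && is_array(b))                       /* 2.a */
      return arrays_share_an_element(a, b);
    if (is_object(a) && is_object(b))                     /* 2.b */
      return objects_share_a_pair(a, b);
    /* 2.c: one argument is an array, the other an object */
    const Json &arr= is_array(a) ? a : b;
    const Json &obj= is_array(a) ? b : a;
    return array_has_fully_matching_object(arr, obj);
  }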
Hybrid functions (IF, COALESCE, etc) did not preserve the JSON property
from their arguments. The same problem was repeatable for single row subselects.
The problem happened because the method Item::is_json_type() was inconsistently
implemented across the Item hierarchy. For example, Item_hybrid_func
and Item_singlerow_subselect did not override is_json_type().
Solution:
- Removing Item::is_json_type()
- Implementing specific JSON type handlers:
Type_handler_string_json
Type_handler_varchar_json
Type_handler_tiny_blob_json
Type_handler_blob_json
Type_handler_medium_blob_json
Type_handler_long_blob_json
- Reusing the existing data type infrastructure to pass JSON
type handlers across all item types, including classes Item_hybrid_func
and Item_singlerow_subselect. Note, these two classes themselves do not
need any changes!
- Extending the data type infrastructure so data types can inherit
their properties (e.g. aggregation rules) from their base data types.
E.g. VARCHAR/JSON acts as VARCHAR, LONGTEXT/JSON acts as LONGTEXT
when mixed to a non-JSON data type. This is done by:
- adding virtual method Type_handler::type_handler_base()
- adding a helper class Type_handler_pair
- refactoring Type_handler_hybrid_field_type methods
aggregate_for_result(), aggregate_for_min_max(),
aggregate_for_num_op() to use Type_handler_pair.
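A hedged sketch of the fallback idea (the class and method names are the
ones listed above; the inheritance and body shown here are illustrative):

  class Type_handler_long_blob_json: public Type_handler_long_blob
  {
  public:
    /* When LONGTEXT/JSON is mixed with a non-JSON type, aggregation
       can fall back to the base handler, i.e. act as plain LONGTEXT. */
    const Type_handler *type_handler_base() const override
    {
      return &type_handler_long_blob;
    }
  };

Roughly, Type_handler_pair can then retry aggregation with the base
handlers when no rule exists for the exact pair, which is how
LONGTEXT/JSON mixed with VARCHAR aggregates as plain LONGTEXT.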
This change also fixes:
MDEV-27361 Hybrid functions with JSON arguments do not send format metadata
Also, adding mtr tests for JSON replication, which was not covered yet.
The current patch also changes the replication code slightly.
This patch implements JSON_EQUALS SQL function. The function takes
advantage of the json_normalize functionality and does the following:
norm_a = json_normalize(a)
norm_b = json_normalize(b)
return strcmp(norm_a, norm_b)
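A hedged conceptual sketch of that flow; json_normalize_to_string() is a
hypothetical wrapper around the json_normalize functionality, not the real
API:

  #include <string>

  /* Hypothetical helper: writes the normalized form of 'in' into '*out'
     and returns true on success (i.e. the input was valid JSON). */
  bool json_normalize_to_string(const std::string &in, std::string *out);

  bool json_equals(const std::string &a, const std::string &b)
  {
    std::string norm_a, norm_b;
    if (!json_normalize_to_string(a, &norm_a) ||
        !json_normalize_to_string(b, &norm_b))
      return false;             /* invalid JSON is handled separately by the SQL function */
    return norm_a == norm_b;    /* the strcmp() == 0 case above */
  }

As a result, two documents that differ only in key order and whitespace
compare as equal, since both normalize to the same canonical text.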
Co-authored-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
Revert "fix JSON_ARRAYAGG not to over-quote json in joins"
This removes 8711adb786 but keeps the test case.
A different fix is coming up.
This is because args can contain Item_field objects that are later
replaced by an Item_direct_view_ref to the actual field, while the
Item_field preserved in orig_args stays unfixed, with item->field == NULL
and no metadata.
Use the metadata (in particular the is_json() property) of the original
argument item, even if the actual argument was later replaced with an
Item_temptable_field.
Changes:
- To detect automatic strlen() I removed the methods in String that
use 'const char *' without a length:
- String::append(const char*)
- Binary_string(const char *str)
- String(const char *str, CHARSET_INFO *cs)
- append_for_single_quote(const char *)
All usage of append(const char*) is changed to either use
String::append(char), String::append(const char*, size_t length) or
String::append(LEX_CSTRING)
- Added STRING_WITH_LEN() around constant string arguments to
String::append() (see the sketch after this list).
- Added overflow argument to escape_string_for_mysql() and
escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow.
This was needed as most usage of the above functions never tested the
result for -1 and would have given wrong results or crashes in case
of overflows.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING.
Changed all Item_func::func_name()'s to func_name_cstring()'s.
The old Item_func_or_sum::func_name() is now an inline function that
returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return
LEX_CSTRING.
- Changed the name argument for some functions from const char * to
const LEX_CSTRING &:
- Item::Item_func_fix_attributes()
- Item::check_type_...()
- Type_std_attributes::agg_item_collations()
- Type_std_attributes::agg_item_set_converter()
- Type_std_attributes::agg_arg_charsets...()
- Type_handler_hybrid_field_type::aggregate_for_result()
- Type_handler_geometry::check_type_geom_or_binary()
- Type_handler::Item_func_or_sum_illegal_param()
- Predicant_to_list_comparator::add_value_skip_null()
- Predicant_to_list_comparator::add_value()
- cmp_item_row::prepare_comparators()
- cmp_item_row::aggregate_row_elements_for_comparison()
- Cursor_ref::print_func()
- Removed String_space() as it was only used in one place, which could be
simplified to not use String_space(), thanks to the fixed
my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
- NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to
clarify what the function really does.
- Renamed the protocol function
bool store(const char *from, CHARSET_INFO *cs) to
bool store_string_or_null(const char *from, CHARSET_INFO *cs).
This was done both to clarify what this 'store' variant does and to make
it easier to find suboptimal usage of store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to instead be of type LEX_CSTRING.
- class Item_func_units now uses LEX_CSTRING for its name.
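The before/after sketch referenced above for the append() and
STRING_WITH_LEN() changes; the lines are illustrative, not actual call
sites from the patch:

  const char *val= "abc";
  size_t val_len= 3;
  String str;
  /* str.append("constant text");  -- no longer compiles:
     append(const char*) without a length has been removed            */
  str.append(STRING_WITH_LEN("constant text"));  /* length known at compile time */
  str.append(',');                               /* single characters use append(char) */
  str.append(val, val_len);                      /* a known length is passed explicitly */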
Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character
in the prompt would cause some part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to
append() is known or easily obtained but was not used.
- Removed some unneeded 'virtual' qualifiers for functions that were
inherited from the parent. I added 'override' to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. The old code
could cause memory overruns.
- Simplified some loops when adding char * to a String with delimiters.
This was done to simplify copying of with_* flags
Other things:
- Changed the flags to C++ enums, which enables gdb to print
out the bit values for the flags. This also enables compiler
errors if one tries to manipulate a non-existent bit in
a variable (see the sketch after this list).
- Added set_maybe_null() as a shortcut, as setting the
MAYBE_NULL flag is done in a LOT of places.
- Renamed PARAM flag to SP_VAR to ensure it's not confused with persistent
statement parameters.
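The sketch referenced above, assuming illustrative enumerator names (only
SP_VAR is named in this message) and a 16-bit underlying type:

  #include <cstdint>

  enum class item_with_t : uint16_t
  {
    NONE=        0,
    SP_VAR=      1 << 0,     /* previously the PARAM flag         */
    WINDOW_FUNC= 1 << 1,     /* illustrative names from here down */
    FIELD=       1 << 2,
    SUM_FUNC=    1 << 3
  };

  /* gdb now prints the symbolic bit names, and the compiler rejects
     arbitrary integers; combining and testing bits needs explicit
     operators like these: */
  static inline item_with_t operator|(item_with_t a, item_with_t b)
  {
    return (item_with_t) ((uint16_t) a | (uint16_t) b);
  }
  static inline bool operator&(item_with_t a, item_with_t b)
  {
    return ((uint16_t) a & (uint16_t) b) != 0;
  }

The set_maybe_null() shortcut mentioned above then reduces to a one-line
OR of the MAYBE_NULL bit into the item's flag word.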