The code in the "sp_tail" rule in sql_yacc.yy always
used YYLIP->get_cpp_tok_start() as the start of the body,
and did not check for a possible lookahead, which happens
for the keywords "FOR", "VALUES" and "WITH" during LALR(2)
resolution in Lex_input_stream::lex_token().
When a lookahead token was present, get_tok_start_prev()
should have been used instead of get_cpp_tok_start()
as the beginning of the SP body.
Change summary:
This patch hides the implementation of the lookahead
token completely inside Lex_input_stream.
Users of Lex_input_stream now simply consume tokens one by one
and no longer need to care about the lookahead:
external users of Lex_input_stream
are not aware of the lookahead token at all.
Change details:
- Moving Lex_input_stream::has_lookahead() into the "private" section.
- Removing Lex_input_stream::get_tok_start_prev() and
Lex_input_stream::get_cpp_start_prev().
- Fixing the external code to call get_tok_start() and get_cpp_tok_start()
in all places where get_tok_start_prev() and get_cpp_start_prev()
were used.
- Adding a check for has_lookahead() directly inside
get_tok_start() and get_cpp_tok_start().
If there is a lookahead token, these methods now
return the position of the previous token automatically:
  const char *get_tok_start()
  {
    return has_lookahead() ? m_tok_start_prev : m_tok_start;
  }

  const char *get_cpp_tok_start()
  {
    return has_lookahead() ? m_cpp_tok_start_prev : m_cpp_tok_start;
  }
- Fixing the internal code inside Lex_input_stream methods
to use m_tok_start and m_cpp_tok_start directly,
instead of calling get_tok_start() and get_cpp_tok_start(),
to make sure the *current* token position is accessed
(regardless of the presence of a lookahead token).
upon select with view and subqueries
This bug occurred when a splittable materialized derived table/view
was used inside another splittable materialized derived table/view.
The bug happened because the function JOIN::fix_all_splittings_in_plan()
was called at the very beginning of optimization phase 2, at
a moment when the plan structure of the embedding derived table/view
was not yet valid. The proper position for this call is the very
end of optimization phase 1.
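A hypothetical shape of a query hitting this bug (table and column
names are made up); each level is a materialized derived table with
GROUP BY, which makes it a candidate for splitting:

  SELECT *
  FROM (SELECT a, MAX(cnt) AS mx
        FROM (SELECT a, b, COUNT(*) AS cnt
              FROM t1 GROUP BY a, b) AS dt1  -- inner splittable derived
        GROUP BY a) AS dt2                   -- embedding splittable derived
  WHERE dt2.a = 1;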
Introduced new alter algorithm types called NOCOPY and INSTANT for
the in-place alter operation.
NOCOPY - The algorithm refuses any alter operation that would
rebuild the clustered index. It is a subset of the INPLACE algorithm.
INSTANT - The algorithm allows any alter operation that would
modify only metadata. It is a subset of the NOCOPY algorithm.
Introduced a new variable called alter_algorithm. The values are
DEFAULT(0), COPY(1), INPLACE(2), NOCOPY(3), INSTANT(4).
Added a message to deprecate the old_alter_table variable and made it
an alias for the alter_algorithm variable.
The alter_algorithm variable for the slave is always set to DEFAULT.
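A sketch of the intended usage (assuming a pre-existing InnoDB table t1;
whether a given operation qualifies depends on the storage engine):

  SET SESSION alter_algorithm='INSTANT';
  ALTER TABLE t1 ADD COLUMN c2 INT;    -- allowed only if metadata-only
  SET SESSION alter_algorithm='NOCOPY';
  ALTER TABLE t1 ADD PRIMARY KEY(c2);  -- refused: rebuilds the clustered index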
t1.pk IS NOT NULL where pk is a PRIMARY KEY
For equalities in the WHERE clause we create a keyuse array that
contains the set of all equalities.
Each KEYUSE element in the keyuse array has a field "null_rejecting",
which tells us that the equality will not hold if either the left or
right hand side of the equality is NULL.
If the equality is NULL-rejecting, then we accordingly add a NOT NULL
condition for the field present in the item val (present in the KEYUSE
struct) when we are doing ref access.
For the optimization of splitting with GROUP BY we always set
null_rejecting to TRUE while doing ref access on the GROUP BY field.
This creates a problem when the equality is in fact not NULL-rejecting,
as happens here: the right hand side of the equality is t1.pk, where
pk is a PRIMARY KEY and hence NOT NULLABLE. So null_rejecting should
be set to FALSE for such a case.
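A hypothetical shape of an affected query (names are made up): the
split is done on the GROUP BY field of the derived table, and the outer
side of the equality is a primary key:

  CREATE TABLE t1 (pk INT PRIMARY KEY);
  CREATE TABLE t2 (a INT, b INT);
  SELECT *
  FROM t1, (SELECT a, COUNT(*) AS cnt FROM t2 GROUP BY a) AS dt
  WHERE dt.a = t1.pk;
  -- t1.pk can never be NULL, so no "t1.pk IS NOT NULL" condition
  -- may be added for this equality (null_rejecting must be FALSE)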
Unexpected data truncation could occur when storing data into a
compressed blob column with a multi-byte variable-length character set.
The reason was that an incorrect limit on the number of characters was
enforced for blobs.
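A sketch of the affected scenario (hypothetical table; the exact
truncation boundary depends on the blob variant and the character set):

  CREATE TABLE t1 (c TINYTEXT COMPRESSED CHARACTER SET utf8mb4);
  -- 255 single-byte characters fit into TINYTEXT's 255 bytes, but the
  -- buggy check enforced a limit expressed in characters and truncated
  INSERT INTO t1 VALUES (REPEAT('a', 255));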
Added the --skip-test-db option to mysql_install_db. If specified, no test
database is created and no relevant grants are issued.
Removed --skip-auth-anonymous-user option of mysql_install_db. Now it is
covered by --skip-test-db.
Dropped some Debian patches that did the same.
Removed unused make_win_bin_dist.1, make_win_bin_dist and
mysql_install_db.pl.in.
Differences:
MariaDB doesn't support a JSON type, therefore the CRC32 values computed
on those values are different.
The JSON extract syntax is different.
loaddata_utf8 has 3 duplicate lines removed compared to the MySQL version.
From mysql-server:
09fdfad50764ff6809e7dd5300e9ce1ab727b62a
e90ae1707e0ca46abc775d1680d1856c4be38b66
described in http://github.com/mysql/mysql-server/pull/157
Apart from the external contribution, I have added a few more test cases
for the CRC32() function, which are given below.
New test cases added:
->Verify the crc value of various numeric and string data types (int,
double, blob, text, json, enum, set)
->Verify the crc value of expressions having comparison operators
and logical operators
->Verify the crc value of expressions having string functions,
arithmetic functions, and json functions
->Verify the crc value of expressions having geometry functions
like POINT, LINESTRING, MULTILINESTRING, POLYGON, MULTIPOLYGON
->Verify the crc value generated from stored procedures, functions,
triggers, prepared statements and views.
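For reference, the simplest flavors of the calls these tests revolve
around:

  SELECT CRC32('MariaDB');                       -- string argument
  SELECT CRC32(42), CRC32(3.14), CRC32(NULL);    -- numeric and NULL arguments
  SELECT CRC32(CONCAT('a', 'b')) = CRC32('ab');  -- expression argument, expect 1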
Fix:
Patch based on contribution by Daniel Black (Github user: grooverdan)
Reviewed-by: Anitha Gopi anitha.gopi@oracle.com
Reviewed-by: Srikanth B R srikanth.b.r@oracle.com
RB: 17294
Problems:
1. Unlike Item_field::fix_fields(),
Item_sum_sp::fix_length_and_dec() and Item_func_sp::fix_length_and_dec()
did not run the code which resided in adjust_max_effective_column_length(),
therefore they did not extend max_length for the integer return data types
from the user-specified length to the maximum length according to
the data type capacity.
2. The code in adjust_max_effective_column_length() was not correct
for TEXT data, because Field_blob::max_display_length()
multiplies by mbmaxlen. So TEXT variants were unintentionally
promoted to the next longer data type for multi-byte character
sets: TINYTEXT->TEXT, TEXT->MEDIUMTEXT, MEDIUMTEXT->LONGTEXT.
3. Item_sum_sp::create_table_field_from_handler() and
Item_func_sp::create_table_field_from_handler()
erroneously called tmp_table_field_from_field_type(),
which converted VARCHAR(>512) to TEXT variants.
So "CREATE..SELECT spfunc()" erroneously converted
VARCHAR to TEXT. This was wrong, because stored
functions have explicitly declared data types,
which should be preserved.
Solution:
- Removing Type_std_attributes(const Field *)
and using instead Type_std_attributes::set() in combination
with field->type_std_attributes() all around the code, e.g.:
Type_std_attributes::set(field->type_std_attributes())
These two ways of copying attributes from a Field
to an Item duplicated each other, and were slightly
different in how they mixed max_length and mbmaxlen.
- Removing adjust_max_effective_column_length() and
fixing Field::type_std_attributes() to do all necessary
type-specific calculations, so no further adjustment
is needed.
Field::type_std_attributes() is now called from all affected methods:
Item_field::fix_fields()
Item_sum_sp::fix_length_and_dec()
Item_func_sp::fix_length_and_dec()
This fixes problem N1.
- Making Field::type_std_attributes() virtual, to make
sure that type-specific adjustments are properly done
by individual Field_xxx classes. Implementing
Field_blob::type_std_attributes() in such a way that
no TEXT promotion is done.
This fixes problem N2.
- Fixing Item_sum_sp::create_table_field_from_handler() and
Item_func_sp::create_table_field_from_handler() to
call create_table_field_from_handler() instead of
tmp_table_field_from_field_type(), to avoid the
VARCHAR->TEXT conversion on "CREATE..SELECT spfunc()".
- Recording mysql-test/suite/compat/oracle/r/sp-param.result,
as "CREATE..SELECT spfunc()" now correctly
preserves the data type as specified in the RETURNS clause.
- Adding new tests
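A minimal sketch of the behavior fixed by the last items (hypothetical
function name):

  CREATE FUNCTION spfunc() RETURNS VARCHAR(1000) RETURN 'a';
  CREATE TABLE t1 AS SELECT spfunc();
  -- the column of t1 must now be VARCHAR(1000),
  -- not a TEXT variant as before the fix
  SHOW CREATE TABLE t1;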
Problem:
The logic in store_column_type() with a switch on field type was
hard to follow. The part for MEDIUMINT (MYSQL_TYPE_INT24) was not correct.
It erroneously calculated the precision of MEDIUMINT UNSIGNED
as 7 instead of 8.
A similar hard-to-follow switch doing some type specific calculations
resided in adjust_max_effective_column_length(). It was also wrong for
MEDIUMINT (reported as a separate issue in MDEV-15946).
Solution:
1. Introducing a new class Information_schema_numeric_attributes
2. Adding a new virtual method Field::information_schema_numeric_attributes()
3. Splitting the logic in store_column_type() into virtual
implementations of information_schema_numeric_attributes().
4. In order to avoid adding duplicate code for the integer data types,
adding a new virtual method Field_int::numeric_precision(),
which returns the number of digits.
Additional changes:
1. Adding the "const" qualifier to Field::max_display_length()
2. Moving the code from adjust_max_effective_column_length()
directly to Field::max_display_length().
There was no sense in having two implementations:
- a set of wrong virtual implementations of Field_xxx::max_display_length()
- additional code in adjust_max_effective_column_length() fixing
the bad results of Field_xxx::max_display_length()
This change is safe:
- The code using Field::max_display_length()
in field.cc, sql_show.cc, sql_type.cc is not affected.
- The code in rpl_utility.cc is also not affected.
See the new DBUG_ASSERT and the new comments explaining why.
After this change, Field_xxx::max_display_length() returns
correct results for all integer types (except MEDIUMINT, see below).
Putting implementations of numeric_precision() and max_display_length()
near each other in field.h made the logic much clearer and thus
helped to reveal bad results for Field_medium::max_display_length(),
which returns 9 instead of 8 for signed MEDIUMINT fields.
This problem will be addressed separately (MDEV-15946).
Note, this change is also useful for pluggable data types (see MDEV-4912),
as now a user defined Field_xxx has a way to control what's returned
in INFORMATION_SCHEMA.COLUMNS.NUMERIC_PRECISION and
INFORMATION_SCHEMA.COLUMNS.NUMERIC_SCALE by implementing
a desired behavior in Field_xxx::information_schema_numeric_attributes().
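A sketch of the INFORMATION_SCHEMA effect (hypothetical table):
MEDIUMINT UNSIGNED ranges up to 16777215, i.e. 8 decimal digits:

  CREATE TABLE t1 (a MEDIUMINT UNSIGNED);
  SELECT NUMERIC_PRECISION FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_NAME = 't1' AND COLUMN_NAME = 'a';
  -- expected 8; the old switch in store_column_type() returned 7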
Character-set-safe truncation is done when storing a non-empty string
into a VARCHAR(0) COMPRESSED column, so that the string becomes empty.
The code didn't expect an empty string after truncation.
Fixed by moving the empty string check to after the truncation.
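A hypothetical repro shape:

  CREATE TABLE t1 (a VARCHAR(0) COMPRESSED);
  -- the non-empty value is truncated to an empty string;
  -- the code did not expect an empty string at this point
  INSERT IGNORE INTO t1 VALUES ('a');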
Element_type& Bounds_checked_array<Element_type>::operator[](size_t) [with Element_type = Item*; size_t = long unsigned int]
In sql_yacc.yy the semantic actions for the MEDIAN window function
lacked a call of st_select_lex::prepare_add_window_spec().
This function saves the head of thd->lex->order_list into
lex->save_order_list so that this head can be restored in
st_select_lex::add_window_spec() after the specification of the
window function has been parsed.
Without a call of prepare_add_window_spec(), when add_window_spec()
was called, the head of an empty list was copied into
thd->lex->order_list (instead of the saved head of this list).
This made the list thd->lex->order_list invalid and potentially
could cause many different problems.
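For context, a minimal example of the affected construct (hypothetical
table):

  SELECT a, MEDIAN(b) OVER (PARTITION BY a) FROM t1;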
Corrected the result set in the test case for MDEV-15899 that
used the MEDIAN window function and could not be correct
without this fix.
after rebuilding under test_pseudo_invisible
If we are doing an alter related to partitioning, then a simple alter
statement like adding a column (or any other alter statement) can't be
combined with the partition alter; this would generate a syntax error.
But if we add
SET debug_dbug="+d,test_pseudo_invisible";
or test_completely_invisible,
this will add a column to a table that already has a partitioning-related
alter. The execution of this wrong statement will crash the server at
later stages (like on repair partition).
So we simply return 1 (and ER_INTERNAL_ERROR) if any of these
debug_dbug flags is turned on.
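A sketch of the now-rejected scenario (hypothetical table and
partitioning clause):

  SET debug_dbug="+d,test_pseudo_invisible";
  -- the injected invisible column gets combined with the partitioning
  -- alter; this now fails with ER_INTERNAL_ERROR instead of leaving a
  -- table that crashes the server later
  ALTER TABLE t1 PARTITION BY HASH(a) PARTITIONS 2;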
insert into table with TIMESTAMP INVISIBLE
Problem:- The segfault occurs because value is null, but since the
timestamp field is VISIBLE it expects a value and tries to call
value->save_in_field(...). The timestamp field should not be visible;
this is the problem.
Solution:- When we clone fields for record0_field we don't honor the
field visibility; this patch changes that.
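A minimal sketch of the crashing scenario (hypothetical table):

  CREATE TABLE t1 (ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP INVISIBLE, a INT);
  -- ts is invisible, so no value is supplied for it; the cloned field
  -- must stay invisible instead of demanding a value
  INSERT INTO t1 VALUES (1);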
The crash happened because JOIN::check_for_splittable_materialized()
mistakenly called the function JOIN_TAB::is_inner_table_of_outer_join()
instead of the function TABLE_LIST::is_inner_table_of_outer_join().
The former cannot be called before the call of make_outerjoin_info().
The problem was that Item_field::Item_field(THD*, Field*) had old code
that put a null pointer in orig_field_names. Now that we have
proper re-prepare if the table definition changes, this is not needed
anymore.
- Adding "return true" into LEX::set_system_variable()
and LEX::set_default_system_variable() after my_error().
This makes the parser exit on error immediately.
Previously, the error was caught only in mysql_parser(),
a few lines after the parse_sql() call.
- Fixing "--error 1272" to "--error ER_VARIABLE_IS_NOT_STRUCT" in tests
This is a 10.3 version of 27d94b7e03.
It disables caching of the first argument of IN
if it is of a temporal type, because other types are not
cached in this context.
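Schematically, the shape of the affected predicate (hypothetical table):

  -- the first argument of IN is temporal and is no longer cached,
  -- matching how the other types are treated in this context
  SELECT * FROM t1 WHERE t1.d IN ('2001-01-01', '2001-01-02');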
Reorder items in the args[] array. Instead of
when1,then1,when2,then2,...[,case][,else]
sort them as
[case,]when1,when2,...,then1,then2,...[,else]
In this layout all items used for comparison take a contiguous part
of the array and can be aggregated directly, and all items that
can be returned take a contiguous part of the array and can be
aggregated directly. The old code had to copy them to a temporary
array before aggregation, and then copy back (via thd->change_item_tree)
everything that was changed.
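A hypothetical mapping of a concrete CASE to the new layout:

  SELECT CASE a WHEN 1 THEN 'x' WHEN 2 THEN 'y' ELSE 'z' END FROM t1;
  -- old args[] order: 1, 'x', 2, 'y', a, 'z'
  -- new args[] order: a, 1, 2, 'x', 'y', 'z'
  -- comparison items (a, 1, 2) are now contiguous, as are the
  -- returnable items ('x', 'y', 'z')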
This is a 10.3 version of bf1ca14ff3.