query with VALUES()
A table value constructor can be used in all contexts where a SELECT
can be used. In particular, an ORDER BY clause, a LIMIT clause, or both
can be attached to a table value constructor to produce a new
query. Unfortunately execution of such queries was not supported.
This patch fixes the problem.
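For example, a query of this shape (a minimal sketch; no tables needed)
now executes:

  VALUES (3), (1), (2) ORDER BY 1 LIMIT 2;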
If a derived table has SELECT DISTINCT, provide index statistics for it so that the join optimizer in the
upper select knows that ref access to the table will produce one row.
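A hypothetical query shape that benefits (table and column names are
illustrative):

  SELECT t1.*
  FROM t1 JOIN (SELECT DISTINCT a FROM t2) dt ON dt.a = t1.a;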
A sequence of <digits>e<mbhead><mbtail>, e.g.:
SELECT 123eXYzzz FROM t1;
was not scanned correctly (where XY is a multi-byte character).
The multi-byte head byte X was appended to 123e separately from
the multi-byte tail byte Y, so a pointer to "Yzzz" was passed
into scan_ident_start(), which failed on a bad multi-byte sequence.
After this change, scan_ident_start() gets a pointer to "XYzzz",
so it correctly sees the whole multi-byte character.
This also fixes:
MDEV-17299 Assertion `maybe_null' failed in make_sortkey
Note, during merge of the 10.1 version of MDEV-17299,
please use the 10.3 version of the code (i.e. null merge the 10.1 version).
When compiling with CMAKE_BUILD_TYPE=Debug and WITH_ASAN using clang-7 at -O2,
the following tests could fail due to insufficient stack size:
main.signal_demo3 sys_vars.max_sp_recursion_depth_func
If a splittable materialized derived table / view T is used in an inner nest
of an outer join with an impossible ON condition, then T is marked as a
constant table. Yet the execution plan to build T was still searched for
even though it is not needed; now the plan is simply set without that search.
with UNION ALL after INTERSECT
EXPLAIN EXTENDED erroneously showed UNION instead of UNION ALL in
the warning if UNION ALL followed INTERSECT or EXCEPT operations.
The bug was in the function st_select_lex_unit::print() that printed
the text of the query used in the warning.
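An illustrative query (table names assumed) whose warning previously showed
UNION instead of UNION ALL:

  EXPLAIN EXTENDED
  SELECT a FROM t1 INTERSECT SELECT a FROM t2 UNION ALL SELECT a FROM t3;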
- CREATE TABLE ... SELECT drops constraints for columns that
  are both in the create and select part.
- Fixed by copying the constraint in
  Column_definition::redefine_stage1_common()
- If one has both a default expression and a check constraint for a
  column, one can get the error "Expression for field `a` is referring
  to uninitialized field `a`".
- Fixed by ignoring default expressions for the current column when checking
  the CHECK constraint
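A hedged sketch of the first issue (identifiers are illustrative):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  CREATE TABLE t2 (a INT CHECK (a > 0)) SELECT a FROM t1;
  SHOW CREATE TABLE t2;  -- the CHECK constraint on a used to be dropped here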
InnoDB does not allow creating multiple FULLTEXT INDEX
in ALGORITHM=INPLACE. This constraint was not being properly
enforced after MariaDB started to support ALGORITHM=INSTANT
and instant ADD COLUMN.
As a side effect of this fix, we again allow ALGORITHM=INPLACE
to rebuild a table when one FULLTEXT INDEX survives.
Also, we are returning a more accurate reason for refusing LOCK=NONE.
innobase_fulltext_exist(): Return the number of fulltext indexes.
ha_innobase::check_if_supported_inplace_alter(): If the table
needs to be rebuilt, refuse the operation if multiple fulltext
indexes would remain.
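An illustrative statement (identifiers assumed) that must be refused again:

  ALTER TABLE t ADD FULLTEXT INDEX f1(a), ADD FULLTEXT INDEX f2(b),
  ALGORITHM=INPLACE;  -- multiple new FULLTEXT indexes cannot be created INPLACE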
An ALTER statement changed the THD structure by setting the value to
FIELD_CHECK_WARN and then not resetting it back. This led ANALYZE to throw
a warning which it previously didn't.
sp_instr_cursor_copy_struct::exec_core() created TYPELIBs on a wrong mem_root,
the one which is initialized in sp_head::execute() by this code:
/* init per-instruction memroot */
init_sql_alloc(&execute_mem_root, "per_instruction_memroot",
MEM_ROOT_BLOCK_SIZE, 0, MYF(0));
This memory root cleans up after every sp_instr_xxx executed, so later
sp_instr_cfetch::execute() tried to use already freed and trashed memory.
Changing sp_instr_cursor_copy_struct::exec_core() to call tmp.export_structure()
inside this block (not outside of it):
thd->set_n_backup_active_arena(thd->spcont->callers_arena, &current_arena);
...
thd->restore_active_arena(thd->spcont->callers_arena, &current_arena);
So now TYPELIBs created by sp_instr_cursor_copy_struct::exec_core() are
still available and valid when sp_instr_cfetch::execute() is called.
They are freed at the end of dispatch_command() corresponding to
the "CALL p1" statement.
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
The syntax error happened because we had not implemented a different print
for percentile functions. The syntax is a bit different when we use
percentile functions as window functions, in comparison to normal window
functions. Implemented a separate print function for percentile functions.
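For reference, a sketch of the percentile-as-window-function syntax (table
and columns assumed):

  SELECT name,
         PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY score)
           OVER (PARTITION BY name) AS median_score
  FROM t1;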
in Field_iterator_table::create_item
When IN predicate is converted to IN subquery we have to ensure that
any item from the select list of the subquery has some name and this name
is unique across the select list.
This was not guaranteed by the code before the patch for MDEV-17222.
If the name of an item of the select list was not set, and this happened
for binary constants, then the server crashed. If the first row in the IN
list contained the same constant in two different positions then the server
returned an error message.
This was fixed by providing all constants in the first row of the IN list
with generated names.
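Hedged sketches of the two failing shapes (identifiers assumed):

  SELECT * FROM t1 WHERE (a, b) IN ((0x7F, 0x80));   -- unnamed binary constants: crash
  SELECT * FROM t1 WHERE (a, b) IN ((1, 1), (2, 3)); -- repeated constant in first row: spurious error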
derived table / view by equality
Now rows of a materialized derived table are always put into a
temporary table before join operation. If BNLH is used to join this
table with the result of a partial join then both operands of the
join are actually put into main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However this kind of data flow is not supported
yet.
Fixed by not allowing usage of the hash join algorithm to join a
materialized derived table if it is joined by an equality predicate of the
form f=e, where f is a field of the derived table.
Change for the test case in 10.3: splitting must be turned off to preserve
the explain.
truncating a temporary table
TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
itself) to be open. However this requirement wasn't enforced after
"MDEV-5535: Cannot reopen temporary table".
Fixed by closing unused table instances before performing TRUNCATE.
When using buffered sort in `UPDATE`, keyread is used. In this case,
`TABLE::update_virtual_field` should be aborted, but it actually isn't,
because it is called not with a top-level handler, but with the one that
is actually going to access the disk. Here the problem shows up with
partitioning, so the solution is to recursively mark for keyread all the
underlying partition handlers.
* ha_partition: update keyread state for child partitions
Closes #800
The function JOIN_TAB::choose_best_splitting() did not take into account
that for some tables whose fields were used in the GROUP BY list of
the specification of a splittable materialized derived table there might
exist no elements in the array ext_keyuses_for_splitting.
The optimizer erroneously allowed join cache to be used when joining a
splittable materialized table while also employing the splitting
optimization. As a consequence, in some rare cases the server returned
wrong result sets for queries with materialized derived tables.
This patch allows either using join cache without the splitting technique
when materializing a splittable derived table, or splitting without join
cache when joining such a table. The costs of these alternatives are
compared and the best variant is chosen.
Field_iterator_table_ref::set_field_iterator
Several functions that processed different prepare statements missed
the DT_INIT flag in the last parameter of the open_normal_and_derived_tables()
calls. It made context analysis of derived tables dependent on the order in
which the derived tables were processed by mysql_handle_derived(). This
order was induced by the order of SELECTs in all_select_list.
In 10.4 the order of SELECTs in all_select_list became different and lack
of the DT_INIT flags in some open_normal_and_derived_tables() call became
critical as some derived tables were not identified as such.
a table value constructor shows wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of the subselect of an ALL/ANY subquery. In this case
the result of the derived table used a sink of the class select_subselect
rather than of the class select_unit. Thus the previous fix could cause
memory overwrites when running EXPLAIN for queries with table value
constructors in ALL/ANY subselects.
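An illustrative shape (tables assumed) of such queries:

  EXPLAIN SELECT * FROM t1 WHERE a = ANY (VALUES (1), (2));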
Implemented according to the SQL:2008 standard specification.
The check_constraints table is used for fetching metadata about
the constraints defined for tables in all databases.
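For instance, the metadata can be queried as follows (a minimal sketch):

  SELECT constraint_schema, table_name, check_clause
  FROM information_schema.check_constraints;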
This patch always provides columns of the temporary table used for
materialization of a table value constructor with some names.
Before this patch these names were always borrowed from the items
of the first row of the table value constructor. When this row
contained expressions that were not named, this could cause
different kinds of problems. In particular, if the TVC is used as the
specification of a derived table, this could cause a crash.
The names given to the expressions used in a TVC are the same as those
given to the columns of the result set from the corresponding SELECT.
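A sketch of a previously crashing shape (a derived table over a TVC with
unnamed expressions):

  SELECT * FROM (VALUES (1 + 1, CONCAT('a', 'b'))) AS dt;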
value constructor shows wrong number of rows
If the specification of a derived table contained a table value constructor
then the optimizer incorrectly estimated the number of rows in the derived
table. This happened because the optimizer did not take into account the
number of rows in the constructor. The wrong estimate could lead to choosing
inefficient execution plans.
Problem:-
If we try to run this query on a server compiled with -DWITH_ASAN=ON:
CREATE TABLE t1 (i INT);
SET debug_dbug="+d,test_completely_invisible,test_invisible_index";
CREATE TABLE t2 LIKE t1;
This will generate a stack buffer overflow error.
==8922==ERROR: AddressSanitizer: stack-buffer-overflow on address #ADDR
Analyze:-
The error is generated on this line:
if (((*last)=new list_node(info, &end_of_list)))
So info is our Key*, &end_of_list is a global variable, and last == #ADDR,
so last is the suspicious variable. last is the variable present in
alter_info->key_list. Now the question is how this key_list->last gets a
wrong/different stack variable. In the backtrace we can see that key_list is
generated in mysql_create_like_table by calling the
mysql_prepare_alter_table function, and a dummy key_list is created by
mysql_create_like_table. At the end of mysql_prepare_alter_table we call
alter_info->key_list.swap(new_key_list);
So there are two options: either key_list is empty or it is not. If it is
not empty then there is no issue, the last ptr is replaced by a
thd->mem_root allocated ptr. The problem arises when key_list is empty:
we swap the dummy last ptr with a ptr declared in mysql_prepare_alter_table,
which is wrong.
Solution:-
We won't swap the variables if the list does not have any elements.
The bug was in the code of JOIN::check_for_splittable_materialized()
where the structures describing the fields of a materialized derived
table that potentially could be used in split optimization were built.
As a result of this bug some fields that were not usable for splitting
were detected as usable. This could trigger crashes further in
st_join_table::choose_best_splitting().
In InnoDB, an INSERT will not create an explicit lock object. Instead,
the inserted record is initially implicitly locked by the transaction
that wrote its trx_t::id to the hidden system column DB_TRX_ID.
(Other transactions would check if DB_TRX_ID is referring to a
transaction that has not been committed.)
If a record was inserted in the current transaction, it would be
implicitly locked by that transaction. Only if some other transaction
is requesting access to the record, the implicit lock should be
converted to an explicit one, so that the waits-for graph can be
constructed for detecting deadlocks and lock wait timeouts.
Before this fix, InnoDB would convert implicit locks to
explicit ones, even if no conflict exists.
lock_rec_convert_impl_to_expl(): Return whether caller_trx
already holds an explicit lock that covers the record.
row_vers_impl_x_locked_low(): Avoid a lookup if the record matches
caller_trx->id.
lock_trx_has_expl_x_lock(): Renamed from lock_trx_has_rec_x_lock().
row_upd_clust_step(): In a debug assertion, check for implicit lock
before invoking lock_trx_has_expl_x_lock().
rw_trx_hash_t::find(): Make do_ref_count a mandatory parameter.
Assert that trx_id is not 0 (the caller should check it).
trx_sys_t::is_registered(): Only invoke find() if id != 0.
trx_sys_t::find(): Add the optional parameter do_ref_count.
lock_rec_queue_validate(): Avoid lookup for trx_id == 0.
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO:
MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
Problem:
push_handler() created sp_handler_entry instances on THD::main_mem_root,
which is freed only after the SP instructions execution.
So in case of a CONTINUE HANDLER inside a loop (e.g. WHILE) this approach
leaked thread memory on every loop iteration.
Changes:
- Removing sp_handler_entry declaration, it's not really needed.
- Fixing the data type of sp_rcontext::m_handlers from
Dynamic_array<sp_handler_entry*> to Dynamic_array<sp_instr_hpush_jump*>
- Fixing sp_rcontext::push_handler() to push the pointer to
an sp_instr_hpush_jump instance to the handler stack.
This instance contains everything we need.
There is no need to allocate anything else.
Problem:
push_cursor() created sp_cursor instances on THD::main_mem_root,
which is freed only after the SP instructions loop.
Changes:
- Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
- Deriving sp_instr_cpush from sp_cursor. So now sp_cursor is created
only once (at the SP parse time) and then reused on all loop iterations
- Adding a new method reset() into sp_cursor (and its parent classes)
to reset an sp_cursor instance before reuse.
- Moving former sp_cursor members m_fetch_count, m_row_count, m_found
into a separate class sp_cursor_statistics. This helps to reuse
the code in sp_cursor constructors, and in sp_cursor::reset()
- Adding a helper method sp_rcontext::pop_cursor().
- Adding "THD*" parameter to so_rcontext::pop_cursors() and pop_all_cursors()
- Removing "new" and "delete" from sp_rcontext::push_cursor() and
sp_rconext::pop_cursor().
- Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
only as a part of sp_instr_cpush (and not allocated separately).
- Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
to sp_instr_cpush::execute().
- Adding tests
NULL values when there is no DEFAULT
The copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql mode - should give an error.
(2) non-strict sql mode - should give warnings alone.
(3) ALTER IGNORE TABLE command - should give warnings alone.
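A hedged sketch of the three behaviors:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (NULL);
  -- each ALTER below is run against the NULL-containing table:
  SET sql_mode = 'STRICT_ALL_TABLES';
  ALTER TABLE t1 MODIFY a INT NOT NULL;         -- (1) error
  SET sql_mode = '';
  ALTER TABLE t1 MODIFY a INT NOT NULL;         -- (2) warning, NULL becomes 0
  ALTER IGNORE TABLE t1 MODIFY a INT NOT NULL;  -- (3) warning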
This bug happened for queries that used a materialized view that
renamed columns of its specifying query in an inner table of
an outer join. For such a query, name resolution for a column
belonging to the view could fail if the underlying column was
non-nullable.
When creating the definition of the temporary table for
the materialized view used in the inner part of an outer join,
the definitions of the non-nullable columns are created by the
function create_tmp_field_from_item() that names the columns
according to the names of the underlying columns. So these names
should be changed to the view column names.
This bug cannot be reproduced in 10.2 because there setup_fields()
called when preparing joins in the view specification effectively
renames the underlying columns in the function find_field_in_view().
In 10.3 this renaming was removed as improper
(see Monty's commit b478276b04).
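An illustrative shape (identifiers assumed) of the failing queries:

  CREATE VIEW v1 (x) AS SELECT a FROM t2;       -- a is non-nullable, renamed to x
  SELECT * FROM t1 LEFT JOIN v1 ON t1.a = v1.x; -- name resolution of v1.x could fail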
This problem was earlier fixed by the patch cb16d753b2
for MDEV-11337 Split Item::save_in_field() into virtual methods in Type_handler.
Adding tests only.
The problem described in the bug report happened because the code
did not test check_cols(1) after fix_fields() in a few places.
Additionally, fix_fields() could be called multiple times for SP variables,
because they are all fixed at an early stage in append_for_log().
Solution:
1. Adding a few helper methods
- fix_fields_if_needed()
- fix_fields_if_needed_for_scalar()
- fix_fields_if_needed_for_bool()
- fix_fields_if_needed_for_order_by()
and using them in many cases instead of fix_fields() where
the "fixed" status is not definitely known to be "false".
2. Adding DBUG_ASSERT(!fixed) into Item_splocal*::fix_fields()
to catch double execution.
3. Adding tests.
As a good side effect, the patch removes a lot of duplicate code (~60 lines):
if (!item->fixed &&
    item->fix_fields(..) &&
    item->check_cols(1))
  return true;
When executed with slow_log ON, the test revealed a "side effect"
of the MDEV-8305 implementation which inadvertently made trigger and
stored function statements reset the top-level query's
THD::start_time et al. (Details of the test failure analysis are footnoted.)
Unlike the SP case, the SF and trigger's internal statements should not
do that.
Fixed by revising the MDEV-8305 decision to backup/reset/restore
the session timestamp inside sp_instr_stmt::execute(). In the SP case
the timestamp is still reset by its caller, on a per-statement basis,
by the pre-existing logic.
Timestamp-related tests are extended to cover the trigger and stored
function cases.
Note, commit 3395ab7324 is reverted as its struct QUERY_START_TIME_INFO
declaration is not in use anymore after this patch.
Footnote:
--------
Specifically to the failing test, a query on the master was logged
okay, with the timestamp of the query's top-level statement, but its
post-update trigger managed to compute one more (later) timestamp which got
inserted into another table. In the latter table a master-vs-slave
timestamp discrepancy (the slave value lacks the fractional part) became
evident thanks to the different execution time of the trigger combined
with the fact that the master timestamp, logged with a microsecond
fractional part, was truncated on the slave. On the master, when the
fractional part was close to 1, the trigger execution added its own
latency and overflowed to the next second value. That's how the master
timestamp surprisingly turned out to be bigger than the slave's one.
The print() function was missing from the FETCH GROUP NEXT ROW instruction
class, so there was no output for this particular instruction when using
SHOW FUNCTION CODE function_name.
Queries involving ROLLUP need all aggregate functions to have a
copy_or_same() function, where we create a copy of the item_sum items
for each sum level.
Implemented the copy_or_same() function for the custom aggregate function
class (Item_sum_sp).
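A sketch of the affected shape, assuming a stored aggregate my_agg()
created with CREATE AGGREGATE FUNCTION:

  SELECT a, my_agg(b) FROM t1 GROUP BY a WITH ROLLUP;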
SYSTEM_INVISIBLE or COMPLETELY_INVISIBLE
This commit does multiple things to solve this MDEV:
1st: add a field parameter to check_column_grant_in_table_ref, so that
we can find out the field's invisibility.
2nd: if field->invisible >= INVISIBLE_SYSTEM, skip the access check and
simply grant access.
Partition engine FT keys are implemented in such a way that
the FT function's cleanup() methods use the table's internals.
So calling them after close_thread_tables() is unsafe.
This problem occurred because the reorganization of the list of values
when the number of elements exceeds 32 was not handled correctly. I have
fixed the problem by fixing the way that the list values are reorganized
when the number of list values exceeds 32.
Author:
Jacob Mathew.
Reviewer:
Alexey Botchkov.
Merged From:
Branch bb-10.3-MDEV-16101
Forced columns of recursive CTEs to be nullable. The SQL standard
requires this only for recursive columns, but in our code
so far we do not differentiate between recursive and non-recursive
columns when aggregating types of the union that specifies a
recursive CTE.
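A minimal sketch: every column of such a CTE is now nullable, recursive
or not:

  WITH RECURSIVE cte(a) AS (
    SELECT 1
    UNION ALL
    SELECT a + 1 FROM cte WHERE a < 3
  )
  SELECT * FROM cte;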
Compressed blob columns didn't accept data at their capacity. E.g. storing
255 bytes into TINYBLOB resulted in a "Data too long" error.
Now it is allowed, assuming the compression method was able to produce a
shorter string (so that both metadata and compressed data fit into the blob)
and column_compression_threshold is lower than the blob size.
If no compression was performed, we still have to reserve an additional
byte for metadata, and thus we perform normal data truncation and return
its status.
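A hedged sketch of the previously failing case:

  CREATE TABLE t1 (b TINYBLOB COMPRESSED);
  INSERT INTO t1 VALUES (REPEAT('a', 255));  -- used to fail with "Data too long"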
The code in the "sp_tail" rule in sql_yacc.yy always
used YYLIP->get_cpp_tok_start() as the start of the body,
and did not check for possible lookahead which happens
for keywords "FOR", "VALUES" and "WITH" for LALR(2)
resolution in Lex_input_stream::lex_token().
When a lookahead token was present,
get_tok_start_prev() should have been used instead
of get_cpp_tok_start() as the beginning of the SP body.
Change summary:
This patch hides the implementation of the lookahead
token completely inside Lex_input_stream.
The users of Lex_input_stream now just get token-by-token
transparently and should not care about lookahead any more.
Now external users of Lex_input_stream
are not aware of the lookahead token at all.
Change details:
- Moving Lex_input_stream::has_lookahead() into the "private" section.
- Removing Lex_input_stream::get_tok_start_prev() and
Lex_input_stream::get_cpp_start_prev().
- Fixing the external code to call get_tok_start() and get_cpp_tok_start()
in all places where get_tok_start_prev() and get_cpp_start_prev()
were used.
- Adding a test for has_lookahead() right inside
get_tok_start() and get_cpp_tok_start().
If there is a lookahead token, these methods now
return the position of the previous token automatically:
const char *get_tok_start()
{
  return has_lookahead() ? m_tok_start_prev : m_tok_start;
}

const char *get_cpp_tok_start()
{
  return has_lookahead() ? m_cpp_tok_start_prev : m_cpp_tok_start;
}
- Fixing the internal code inside Lex_input_stream methods
to use m_tok_start and m_cpp_tok_start directly,
instead of calling get_tok_start() and get_cpp_tok_start(),
to make sure to access the *current* token position
(independently of a lookahead token presence).
upon select with view and subqueries
This bug occurred when a splittable materialized derived table / view
was used inside another splittable materialized derived table / view.
The bug happened because the function JOIN::fix_all_splittings_in_plan()
was called at the very beginning of optimization phase 2, at
the moment when the plan structure of the embedding derived table / view
was not valid yet. The proper position for this call is the very
end of optimization phase 1.
Introduced new alter algorithm types called NOCOPY & INSTANT for
inplace alter operations.
NOCOPY - refuses any alter operation that would rebuild the clustered
index. It is a subset of the INPLACE algorithm.
INSTANT - allows only alter operations that modify metadata. It is a
subset of the NOCOPY algorithm.
Introduced a new variable called alter_algorithm. The values are
DEFAULT(0), COPY(1), INPLACE(2), NOCOPY(3), INSTANT(4).
Added a message deprecating the old_alter_table variable and making it
an alias for the alter_algorithm variable.
The alter_algorithm variable for slaves is always set to the default.
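A hedged usage sketch:

  SET SESSION alter_algorithm = 'NOCOPY';
  ALTER TABLE t1 ADD COLUMN c INT;     -- allowed: no clustered index rebuild
  ALTER TABLE t1 ADD PRIMARY KEY (a);  -- refused: would rebuild the clustered index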
t1.pk IS NOT NULL where pk is a PRIMARY KEY
For equalities in the WHERE clause we create a keyuse array that contains
the set of all equalities. For each KEYUSE inside the keyuse array we have
a field "null_rejecting" which tells that the equality will not hold if
either the left or right hand side of the equality is NULL.
If the equality is null-rejecting then we accordingly add a NOT NULL
condition for the field present in the item val (present in the KEYUSE
struct) when we are doing ref access.
For the optimization of splitting with GROUP BY we always set
null_rejecting to TRUE while doing ref access on the GROUP BY field.
This creates a problem when the equality is not null-rejecting. That
happens here because the right hand side of the equality is t1.pk, where
pk is a PRIMARY KEY and hence NOT NULLABLE, so we should have
null_rejecting set to FALSE for such a case.
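An illustrative query shape (identifiers assumed) where the spurious
IS NOT NULL appeared:

  SELECT *
  FROM t1, (SELECT a, MAX(b) FROM t2 GROUP BY a) dt
  WHERE dt.a = t1.pk;  -- t1.pk is a PRIMARY KEY, so it can never be NULL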
Unexpected data truncation could occur when storing data into a compressed
blob column having a multi-byte variable length character set.
The reason was that an incorrect limit on the number of characters was
enforced for blobs.
Added a --skip-test-db option to mysql_install_db. If specified, no test
database is created and no relevant grants are issued.
Removed the --skip-auth-anonymous-user option of mysql_install_db. It is
now covered by --skip-test-db.
Dropped some Debian patches that did the same.
Removed unused make_win_bin_dist.1, make_win_bin_dist and
mysql_install_db.pl.in.
Differences:
MariaDB doesn't support a JSON type, therefore the crc32 of those values
is different.
JSON extract syntax is different.
loaddata_utf8 has 3 duplicate lines removed compared to the MySQL version.
From mysql-server:
09fdfad50764ff6809e7dd5300e9ce1ab727b62a
e90ae1707e0ca46abc775d1680d1856c4be38b66
described in http://github.com/mysql/mysql-server/pull/157
Apart from the external contribution, I have added a few more testcases
for the CRC32() function, which are given below.
New Testcases added:
->Verify the crc value of various numeric and string data types(int,
double, blob, text, json, enum, set)
->Verify the crc value when expressions having comparison_operators
and logical_operators
->Verify the crc value for the expression having string_functions,
arithmetic_functions, json_functions
->Verify the crc value for the expression having Geometry functions
like POINT, LINESTRING, MULTILINESTRING, POLYGON, MULTIPOLYGON
->Verify the crc value generated from stored procedures, functions,
triggers, prepare statement, views.
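A few illustrative probes of the kind these tests perform (values are
examples only):

  SELECT CRC32(123), CRC32('MariaDB'), CRC32(CONCAT('a', 'b'));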
Fix:
Patch based on contribution by Daniel Black (Github user: grooverdan)
Reviewed-by: Anitha Gopi anitha.gopi@oracle.com
Reviewed-by: Srikanth B R srikanth.b.r@oracle.com
RB: 17294
Problems:
1. Unlike Item_field::fix_fields(),
Item_sum_sp::fix_length_and_dec() and Item_func_sp::fix_length_and_dec()
did not run the code which resided in adjust_max_effective_column_length(),
therefore they did not extend max_length for the integer return data types
from the user-specified length to the maximum length according to
the data type capacity.
2. The code in adjust_max_effective_column_length() was not correct
for TEXT data, because Field_blob::max_display_length()
multiplies by mbmaxlen. So TEXT variants were unintentionally
promoted to the next longer data type for multi-byte character
sets: TINYTEXT->TEXT, TEXT->MEDIUMTEXT, MEDIUMTEXT->LONGTEXT.
3. Item_sum_sp::create_table_field_from_handler()
Item_func_sp::create_table_field_from_handler()
erroneously called tmp_table_field_from_field_type(),
which converted VARCHAR(>512) to TEXT variants.
So "CREATE..SELECT spfunc()" erroneously converted
VARCHAR to TEXT. This was wrong, because stored
functions have explicitly declared data types,
which should be preserved.
Solution:
- Removing Type_std_attributes(const Field *)
and using instead Type_std_attributes::set() in combination
with field->type_std_attributes() all around the code, e.g.:
Type_std_attributes::set(field->type_std_attributes())
These two ways of copying attributes from a Field
to an Item duplicated each other, and were slightly
different in how to mix max_length and mbmaxlen.
- Removing adjust_max_effective_column_length() and
fixing Field::type_std_attributes() to do all necessary
type-specific calculations, so no further adjustment
is needed.
Field::type_std_attributes() is now called from all affected methods:
Item_field::fix_fields()
Item_sum_sp::fix_length_and_dec()
Item_func_sp::fix_length_and_dec()
This fixes the problem N1.
- Making Field::type_std_attributes() virtual, to make
sure that type-specific adjustments are properly done
by individual Field_xxx classes. Implementing
Field_blob::type_std_attributes() in the way that
no TEXT promotion is done.
This fixes the problem N2.
- Fixing Item_sum_sp::create_table_field_from_handler()
Item_func_sp::create_table_field_from_handler() to
call create_table_field_from_handler() instead of
tmp_table_field_from_field_type() to avoid
VARCHAR->TEXT conversion on "CREATE..SELECT spfunc()".
- Recording mysql-test/suite/compat/oracle/r/sp-param.result
as "CREATE..SELECT spfunc()" now correctly
preserve the data type as specified in the RETURNS clause.
- Adding new tests
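A hedged sketch of problem N3 and the fixed behavior:

  CREATE FUNCTION f1() RETURNS VARCHAR(1024) DETERMINISTIC RETURN 'x';
  CREATE TABLE t1 AS SELECT f1() AS c;
  SHOW CREATE TABLE t1;  -- c now stays VARCHAR(1024) instead of becoming TEXT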
Problem:
The logic in store_column_type() with a switch on field type was
hard to follow. The part for MEDIUMINT (MYSQL_TYPE_INT24) was not correct.
It erroneously calculated the precision of MEDIUMINT UNSIGNED
as 7 instead of 8.
A similar hard-to-follow switch doing some type specific calculations
resided in adjust_max_effective_column_length(). It was also wrong for
MEDIUMINT (reported as a separate issue in MDEV-15946).
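For instance (a sketch), the reported precision can be checked via:

  CREATE TABLE t1 (c MEDIUMINT UNSIGNED);
  SELECT numeric_precision FROM information_schema.columns
  WHERE table_name = 't1' AND column_name = 'c';  -- was 7, should be 8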
Solution:
1. Introducing a new class Information_schema_numeric_attributes
2. Adding a new virtual method Field::information_schema_numeric_attributes()
3. Splitting the logic in store_column_type() into virtual
implementations of information_schema_numeric_attributes().
4. In order to avoid adding duplicate code for the integer data types,
adding a new virtual method Field_int::numeric_precision(),
which returns the number of digits.
Additional changes:
1. Adding the "const" qualifier to Field::max_display_length()
2. Moving the code from adjust_max_effective_column_length()
directly to Field::max_display_length().
There was no sense in having two implementations:
- a set of wrong virtual implementations for Field_xxx::max_display_length()
- additional code in adjust_max_effective_column_length() fixing
bad results of Field_xxx::max_display_length()
This change is safe:
- The code using Field::max_display_length()
in field.cc, sql_show.cc, sql_type.cc is not affected.
- The code in rpl_utility.cc is also not affected.
See the new DBUG_ASSERT and new comments explaining why.
In the new implementation, Field_xxx::max_display_length() returns
correct results for all integer types (except MEDIUMINT, see below).
Putting implementations of numeric_precision() and max_display_length()
near each other in field.h made the logic much clearer and thus
helped to reveal bad results for Field_medium::max_display_length(),
which returns 9 instead of 8 for signed MEDIUMINT fields.
This problem will be addressed separately (MDEV-15946).
Note, this change is also useful for pluggable data types (see MDEV-4912),
as now a user defined Field_xxx has a way to control what's returned
in INFORMATION_SCHEMA.COLUMNS.NUMERIC_PRECISION and
INFORMATION_SCHEMA.COLUMNS.NUMERIC_SCALE by implementing
a desired behavior in Field_xxx::information_schema_numeric_attributes().
Character-set-safe truncation is done when storing a non-empty string
in a VARCHAR(0) COMPRESSED column, so that the string becomes empty.
The code didn't expect an empty string after truncation.
Fixed by moving the empty string check to after the truncation.
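A minimal sketch of the affected case:

  CREATE TABLE t1 (c VARCHAR(0) COMPRESSED);
  INSERT INTO t1 VALUES ('a');  -- safely truncated to '', previously mishandled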
Element_type& Bounds_checked_array<Element_type>::operator[]
(size_t) [with Element_type = Item*; size_t = long unsigned int]
In sql_yacc.yy the semantic actions for the MEDIAN window function
lacked a call of st_select_lex::prepare_add_window_spec().
This function saves the head of thd->lex->order_list into
lex->save_order_list so that this head can be restored in
st_select_lex::add_window_spec after the specification of the
window function has been parsed.
Without a call of prepare_add_window_spec(), when add_window_spec()
was called the head of an empty list was copied into
thd->lex->order_list (instead of the assumed saved head of this list).
This made the list thd->lex->order_list invalid and potentially
could cause many different problems.
Corrected the result set in the test case for MDEV-15899 that
used the MEDIAN window function and could not be correct
without this fix.
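For reference, an affected shape (identifiers assumed); MEDIAN(x) is
equivalent to PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY x):

  SELECT name, MEDIAN(score) OVER (PARTITION BY name) FROM t1;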
after rebuilding under test_pseudo_invisible
If we are doing an alter related to partitioning, then a simple alter
statement (such as adding a column, or any other alter) can't be combined
with the partition alter; this would generate a syntax error.
But if we add
SET debug_dbug="+d,test_pseudo_invisible";
or test_completely_invisible,
this will add a column to a table which already has a partitioning-related
alter. Executing this wrong statement will crash the server at later
stages (like on repair partition).
So we simply return 1 (and ER_INTERNAL_ERROR) if any of these
debug_dbug flags is turned on.
insert into table with TIMESTAMP INVISIBLE
Problem:- The segfault occurs because value is null, but since the
timestamp field is VISIBLE it expects a value and tries to call
value->save_in_field(... The timestamp field should not be visible;
that is the problem.
Solution:- While we clone fields for record0_field we don't honor the
field's visibility; this patch changes that.
The crash happened because JOIN::check_for_splittable_materialized()
mistakenly called the function JOIN_TAB::is_inner_table_of_outer_join()
instead of the function TABLE_LIST::is_inner_table_of_outer_join().
The former cannot be called before the call of make_outerjoin_info().
The problem was that Item_field::Item_field(THD*, Field*) had old code
that put a null pointer in orig_field_names. Now that we have proper
re-prepare when the table definition changes, this is no longer needed.