In the code that existed just before this patch, binding of a table reference to
the specification of the corresponding CTE happened in the function
open_and_process_table(). If the table reference is not the first one in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but it does not work for stored procedures / functions,
for the following reason.
When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. When a query of SP is parsed, the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
table references to derived tables and information schema tables, is put
into a hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and to link them into the list of such
objects passed to the pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference occurring in
the specification, a new TABLE_LIST object is created and included in
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are simply skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Then the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query, the tables used in the
query are opened, and open_and_process_table() is now called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
Each table reference is first checked against the CTE definitions in whose
scope it occurs. If such a definition is found, the reference is considered
resolved, and if this is not the first reference to the found CTE, the
specification of the CTE is re-parsed and the result of the parsing is
added to the parse tree of the query as a sub-tree. If this sub-tree
contains references to other tables, they are added to the list of
TABLE_LIST objects associated with the query so that the referenced tables
get opened. When the procedure that opens the tables comes to the
TABLE_LIST object created for a non-first reference to a CTE, it discovers
that the referenced table instance is not locked and reports an error.
Thus, processing non-first table references to a CTE in the same way as
references to views does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, yet the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.
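As an illustration, a minimal sketch of the failing scenario (table and
procedure names are hypothetical):

  CREATE TABLE t1 (a INT);

  DELIMITER $$
  CREATE PROCEDURE sp1()
  BEGIN
    WITH cte AS (SELECT a FROM t1)
    SELECT * FROM cte AS r1, cte AS r2 WHERE r1.a = r2.a;
  END$$
  DELIMITER ;

  -- The second reference (r2) needs a clone of the CTE specification.
  -- Before this patch, the tables pulled in by that clone missed
  -- pre-locking, so the call could fail with a "table was not locked" error.
  CALL sp1();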
This patch solves the above problem by moving the resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table, so it is excluded from the hash table created for pre-locking of the
base tables and views used when the first call of a stored procedure /
function is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
For SELECT, the ROWNUM() function is mapped to JOIN->accepted_rows, which is
incremented for each accepted row.
For filesort, UPDATE, INSERT, DELETE and LOAD DATA, ROWNUM() is mapped to
internal variables incremented when the table is changed.
The connection between the row counter and Item_func_rownum is done
in sql_select.cc::fix_items_after_optimize() and
sql_insert.cc::fix_rownum_pointers().
When ROWNUM() is used anywhere in a query, the optimization that ignores
ORDER BY in subqueries is disabled. This was done to make the following
common Oracle query work:
select * from (select * from t1 order by a desc) as t where rownum() <= 2;
MDEV-3926 "Wrong result with GROUP BY ... WITH ROLLUP" contains a discussion
about this topic.
LIMIT optimization is enabled when a top-level WHERE clause compares
ROWNUM() with a numerical constant using any of the following expressions:
- ROWNUM() < #
- ROWNUM() <= #
- ROWNUM() = 1
ROWNUM() can also be the right argument of the comparison.
LIMIT optimization is done in two cases:
- For the current subquery, when the ROWNUM comparison is done at the top
  level:
  SELECT * from t1 WHERE rownum() <= 2 AND t1.a > 0
- For an inner subquery, when the upper level has only a ROWNUM comparison
  in the WHERE clause:
  SELECT * from (select * from t1) as t WHERE rownum() <= 2
In Oracle mode, one can also use ROWNUM without parentheses.
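For example (t1 is a hypothetical table):

  SELECT * FROM t1 WHERE ROWNUM() <= 2;   -- function syntax
  SET sql_mode=ORACLE;
  SELECT * FROM t1 WHERE ROWNUM <= 2;     -- Oracle-style, no parentheses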
Other things:
- Fixed a bug where the optimizer tried to optimize away subqueries
  with RAND_TABLE_BIT set (non-deterministic queries). Now these
  subqueries will not be converted to joins. This bug fix was also
  needed to get rownum() working inside subqueries.
- In remove_const(), removed the setting of simple_order to FALSE if ROLLUP
  is used. This code was disabled a long time ago because of a wrong
  assignment in the following code. Instead, we set simple_order to false if
  RAND_TABLE_BIT was used in the SELECT list. This ensures that
  we don't delete ORDER BY if the result set is not deterministic, as
  in 'SELECT RAND() AS 'r' FROM t1 ORDER BY r'.
- Updated the parameters of Sort_param::init_for_filesort() to be able
  to provide filesort with information on where the number of accepted
  rows should be stored.
- Reordered fields in class Filesort to optimize storage layout.
- Added a new error message to tell that a function can't be used in HAVING.
- Added the field 'with_rownum' to THD to mark that ROWNUM() is used in the
  query.
Co-author: Oleksandr Byelkin <sanja@mariadb.com>
LIMIT optimization for subqueries
This change removed 68 explicit strlen() calls from the code.
The following renames were done to ensure we don't use the old names
when merging code from earlier releases, as using the new variables
in print functions could result in crashes:
- charset->csname renamed to charset->cs_name
- charset->name renamed to charset->coll_name
Almost all changes were mechanical, except:
- Changed to use the new Protocol::store(LEX_CSTRING..) when possible
- Changed to use field->store(LEX_CSTRING*, CHARSET_INFO*) when possible
- Changed to use String->append(LEX_CSTRING&) when possible
Other things:
- There were compiler issues with ensuring that all character set names
  point to the same string: gcc doesn't allow one to use integer constants
  when defining global structures (constant char * pointers work fine).
  To get around this, I declared defines for each character set name
  length.
Changes:
- To detect automatic strlen() I removed the methods in String that
  use 'const char *' without a length:
  - String::append(const char*)
  - Binary_string(const char *str)
  - String(const char *str, CHARSET_INFO *cs)
  - append_for_single_quote(const char *)
  All usage of append(const char*) was changed to use either
  String::append(char), String::append(const char*, size_t length) or
  String::append(LEX_CSTRING).
- Added STRING_WITH_LEN() around constant string arguments to
  String::append().
- Added an overflow argument to escape_string_for_mysql() and
  escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow.
  This was needed as most usages of the above functions never tested the
  result for -1 and would have given wrong results or crashes in case
  of overflow.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING.
Changed all Item_func::func_name()'s to func_name_cstring()'s.
The old Item_func_or_sum::func_name() is now an inline function that
returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return
LEX_CSTRING.
- Changed the name argument of some functions from const char * to
  const LEX_CSTRING &:
- Item::Item_func_fix_attributes()
- Item::check_type_...()
- Type_std_attributes::agg_item_collations()
- Type_std_attributes::agg_item_set_converter()
- Type_std_attributes::agg_arg_charsets...()
- Type_handler_hybrid_field_type::aggregate_for_result()
- Type_handler_geometry::check_type_geom_or_binary()
- Type_handler::Item_func_or_sum_illegal_param()
- Predicant_to_list_comparator::add_value_skip_null()
- Predicant_to_list_comparator::add_value()
- cmp_item_row::prepare_comparators()
- cmp_item_row::aggregate_row_elements_for_comparison()
- Cursor_ref::print_func()
- Removed String_space() as it was only used in one case, and that
  could be simplified to not use String_space(), thanks to the fixed
  my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
- NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to
clarify what the function really does.
- Renamed the protocol function
  bool store(const char *from, CHARSET_INFO *cs) to
  bool store_string_or_null(const char *from, CHARSET_INFO *cs).
  This was done both to clarify the difference between this and other
  'store' functions and to make it easier to find unoptimal usage of
  store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to be of type LEX_CSTRING.
- class Item_func_units now uses LEX_CSTRING for its name.
Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character
  in the prompt would cause some part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to
  append() was known or easily obtained but was not used.
- Removed some unneeded 'virtual' qualifiers from functions that were
  inherited from the parent. I added override to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. The old code
  could cause memory overruns.
- Simplified some loops that add char * to a String with delimiters.
The name change was made to make the intention of the flag clearer and
also because most usage of the old flag was to test for
NOT IS_AUTOGENERATED_NAME.
Note that the new flag is the inverse of the old one!
This was done to simplify copying of with_* flags.
Other things:
- Changed flags to C++ enums, which enables gdb to print
  out bit values for the flags. This also enables compiler
  errors if one tries to manipulate a non-existing bit in
  a variable.
- Added set_maybe_null() as a shortcut, as setting the
  MAYBE_NULL flag was done in a LOT of places.
- Renamed the PARAM flag to SP_VAR to ensure it's not confused with
  persistent statement parameters.
The reason for the change is that neither clang nor gcc can generate
efficient code when several bit fields are changed at the same time or when
copying one or more bits between identical bit fields.
Updating bits explicitly with & and | is MUCH more efficient than what
current compilers can do.
The problem was that when one used String::alloc() to allocate a string,
String ensured that there was space for an extra NULL byte in the
buffer and, if not, reallocated the string. This was a problem with
String::set_int(), which calls alloc(21), forcing extra
malloc/free calls to happen.
- We no longer re-allocate the String if alloc() is called with the
  Alloced_length. This reduces the number of malloc() allocations,
  especially one big re-allocation in Protocol::send_result_set_metadata()
  for almost every query that produced a result for the connected client.
- Avoid extra mallocs when using LONGLONG_BUFFER_SIZE.
  This can now be done as alloc() doesn't increase the buffer if the new
  length is not bigger than the old one.
- c_ptr() is redesigned to be safer (but a bit longer) than before.
- Removed wrong usage of c_ptr_quick().
  c_ptr_quick() was used in many cases to get the pointer to the used
  buffer, even when it didn't need to be \0 terminated. In these cases
  ptr() is a better substitute.
  Another problem with c_ptr_quick() is that it did not guarantee that
  the string would be \0 terminated.
- item_val_str(), an API function not currently used by the server,
  now always returns a null-terminated string (before, it didn't always
  do that).
- Ensured that all String allocations use STRING_PSI_MEMORY_KEY. The old
  mixed usage of performance keys caused asserts when String buffers
  were shrunk.
- Binary_string::shrink() is simplified.
- Fixed a bug in String(const char *str, size_t len, CHARSET_INFO *cs) that
  used Binary_string((char *) str, len) instead of Binary_string(str, len).
- Changed the argument of String() constructors and String::set() functions
  to use 'const char*' instead of 'char*'. This ensures that Alloced_length
  is not set, which gives safety against someone trying to change the
  original string. This would also allow us to use !Alloced_length in
  c_ptr() if needed.
- Changed string_ptr_cmp() to use memcmp() instead of c_ptr() to avoid
  a possible malloc during string comparison.
This patch changes the main name of the 3-byte character set from utf8 to
utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set to TRUE by
default, so that utf8 means utf8mb3. If not set, utf8 means utf8mb4.
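A sketch of the effect (table names are hypothetical):

  CREATE TABLE t1 (a VARCHAR(10)) CHARACTER SET utf8;
  SHOW CREATE TABLE t1;  -- with UTF8_IS_UTF8MB3 (default): CHARSET=utf8mb3

  SET @@old_mode='';     -- without the flag, utf8 now means utf8mb4
  CREATE TABLE t2 (a VARCHAR(10)) CHARACTER SET utf8;
  SHOW CREATE TABLE t2;  -- CHARSET=utf8mb4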
Mark the JSON_TABLE function as SBR-unsafe.
It is not unsafe in the current implementation, but we still mark it as such
in order to be future-proof and keep it possible to change the JSON data
representation in the future.
This commit implements the standard SQL extension
OFFSET start { ROW | ROWS }
[FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } { ONLY | WITH TIES }]
To achieve this, a reserved keyword OFFSET is introduced.
The general logic for WITH TIES implies:
1. The number of rows a query returns is no longer known during the optimize
   phase. Adjust optimizations to no longer consider this.
2. During end_send, make use of an "order Cached_item" to compare whether
   the ORDER BY columns changed. Keep returning rows until there is a
   change. This happens only after we have reached the row limit.
3. Within end_send_group, the ORDER BY clause was eliminated. It is
   still possible to keep the optimization of using end_send_group for
   producing the final result set.
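A sketch of the new syntax (t1 is a hypothetical table):

  SELECT * FROM t1
  ORDER BY a
  OFFSET 2 ROWS
  FETCH NEXT 3 ROWS WITH TIES;
  -- skips 2 rows, then returns 3 rows plus any further rows whose
  -- ORDER BY columns are equal to those of the last returned row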
Replace
* select_lex::offset_limit
* select_lex::select_limit
* select_lex::explicit_limit
with select_lex::Lex_select_limit.
The Lex_select_limit already existed with the same elements and was used
by the yacc parser.
This commit is in preparation for the FETCH FIRST implementation, as it
simplifies a lot of the code.
Additionally, the parser is simplified by making use of the stack to
return Lex_select_limit objects.
Cleanup of init_query() too: removes explicit_limit= 0, as it's done a bit
later in init_select() with limit_params.empty().
(Also fixes MDEV-25254.)
Re-work Name Resolution for the argument of the JSON_TABLE(json_doc, ...)
function. The json_doc argument can refer to other tables, but it can
only refer to tables that precede[*] the JSON_TABLE(...) call.
[*] For queries with RIGHT JOINs, the "preceding" is determined after
the query is normalized by converting the RIGHT JOIN into a left one.
The implementation is as follows:
- Table function arguments use their own Name_resolution_context.
- The Name_resolution_context now has a bitmap of tables that should be
  ignored when searching for a field.
- get_disallowed_table_deps() walks the TABLE_LIST::nested_join tree
  and computes a bitmap of tables that do not "precede" the given
  JSON_TABLE(...) invocation (according to the above definition of
  "preceding").
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED /
SELECT ... LOCK IN SHARE MODE SKIP LOCKED.
This is implemented only in InnoDB at the moment, not in RocksDB yet.
This adds a new handler flag, HA_CAN_SKIP_LOCKED, that
a storage engine advertises when it supports the feature.
When a storage engine indicates this flag, it will get
TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED lock types.
The Lex structure has been updated to store both the FOR UPDATE / LOCK IN
SHARE MODE as well as the SKIP LOCKED, so the SHOW CREATE VIEW
implementation is simpler.
"SELECT ... FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or
INSERT .. SELECT on the result set is not safe for STATEMENT-based
replication. MIXED replication will replicate this as row-based events.
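A sketch of the intended use (t1 is a hypothetical table):

  -- Session 1 locks a row:
  START TRANSACTION;
  SELECT * FROM t1 WHERE a = 1 FOR UPDATE;

  -- Session 2 returns the unlocked rows instead of waiting:
  SELECT * FROM t1 FOR UPDATE SKIP LOCKED;
  SELECT * FROM t1 LOCK IN SHARE MODE SKIP LOCKED;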
Thanks to guidance from Facebook commit
193896c466
This helped verify the basic test case and the components that needed
implementing (even though every part was implemented differently).
Thanks to Marko for guidance on a simpler InnoDB implementation.
Reviewers: Marko, Monty
Use < TL_FIRST_WRITE for determining a READ transaction.
Use TL_FIRST_WRITE in the comparison, replacing TL_WRITE_ALLOW_WRITE
as the minimum WRITE lock type.
This feature adds the functionality of ignorability for indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition in
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED that
stores whether an index is to be ignored or not.
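A sketch, assuming the IGNORED / NOT IGNORED keyword spelling that matches
the new INFORMATION_SCHEMA column (table and index names are hypothetical):

  CREATE TABLE t1 (a INT, b INT, INDEX idx_b (b) IGNORED);

  -- Make the index visible to the optimizer again:
  ALTER TABLE t1 ALTER INDEX idx_b NOT IGNORED;

  -- Inspect the new column:
  SELECT INDEX_NAME, IGNORED
  FROM INFORMATION_SCHEMA.STATISTICS
  WHERE TABLE_NAME = 't1';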
The SQL standard (2016) allows a <collate clause> in two places in the
<column definition>: as a part of the <data type> or at the very end.
Let's do that too.
Side effect: in a column/SP declaration, `COLLATE cs_coll` automatically
implies `CHARACTER SET cs` (unless a charset was specified explicitly).
See the changes in sp-ucs2.result.
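A sketch of both placements and of the implied character set (tables are
hypothetical):

  CREATE TABLE t1 (
    a VARCHAR(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin,          -- in the type
    b VARCHAR(10) CHARACTER SET utf8mb4 NOT NULL COLLATE utf8mb4_bin  -- at the end
  );

  CREATE TABLE t2 (
    a VARCHAR(10) COLLATE latin1_bin   -- implies CHARACTER SET latin1
  );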