Backporting from bb-10.2-compatibility to bb-10.2-ext
Version: 2018-01-26
- CREATE PACKAGE [BODY] statements are now
entirely written to mysql.proc with type='PACKAGE' and type='PACKAGE BODY'.
- CREATE PACKAGE BODY now supports IF NOT EXISTS
- DROP PACKAGE BODY now supports IF EXISTS
- CREATE OR REPLACE PACKAGE [BODY] is now supported
- CREATE PACKAGE [BODY] now supports the DEFINER clause:
CREATE DEFINER user@host PACKAGE pkg ... END;
CREATE DEFINER user@host PACKAGE BODY pkg ... END;
- CREATE PACKAGE [BODY] now supports SQL SECURITY and COMMENT clauses, e.g.:
CREATE PACKAGE p1 SQL SECURITY INVOKER COMMENT "comment" AS ... END;
- Package routines are now created from the package CREATE PACKAGE BODY
statement and don't produce individual records in mysql.proc.
- CREATE PACKAGE BODY now supports package-wide variables.
Package variables can be read and set inside package routines.
Package variables are stored in a separate sp_rcontext,
which is cached in THD on the first package routine call.
- CREATE PACKAGE BODY now supports the initialization section.
- All public routines (i.e. declared in CREATE PACKAGE)
must have implementations in CREATE PACKAGE BODY
- Only public package routines are available outside of the package
- {CREATE|DROP} PACKAGE [BODY] now respects CREATE ROUTINE and ALTER ROUTINE
privileges
- "GRANT EXECUTE ON PACKAGE BODY pkg" is now supported
- SHOW CREATE PACKAGE [BODY] is now supported
- SHOW PACKAGE [BODY] STATUS is now supported
- CREATE and DROP for PACKAGE [BODY] now work for non-current databases
- mysqldump now supports packages
- "SHOW {PROCEDURE|FUNCTION) CODE pkg.routine" now works for package routines
- "SHOW PACKAGE BODY CODE pkg" now works (the package initialization section)
- A new package body level MDL was added
- Recursive calls for package procedures are now possible
- Routine forward declarations in CREATE PACKAGE BODY are now supported.
- Package body variables now work as SP OUT parameters
- Package body variables now work as SELECT INTO targets
- Package body variables now support ROW, %ROWTYPE, %TYPE
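A minimal usage sketch of the package features listed above (package, routine,
variable and user names are invented; assumes sql_mode=ORACLE):

  SET sql_mode=ORACLE;
  DELIMITER $$
  CREATE OR REPLACE PACKAGE pkg AS
    FUNCTION f1 RETURN INT;            -- public routine
    PROCEDURE p1;
  END;
  $$
  CREATE OR REPLACE PACKAGE BODY pkg AS
    cnt INT := 0;                      -- package-wide variable
    FUNCTION f1 RETURN INT AS
    BEGIN
      RETURN cnt;
    END;
    PROCEDURE p1 AS
    BEGIN
      cnt := cnt + 1;                  -- package routines can read and set it
    END;
  BEGIN
    cnt := 100;                        -- initialization section
  END;
  $$
  DELIMITER ;
  GRANT EXECUTE ON PACKAGE BODY pkg TO user1@localhost;
  SHOW CREATE PACKAGE BODY pkg;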
Handle string lengths as size_t consistently (almost always :))
Change function prototypes to accept size_t where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
The counter for select numbering is now stored with the statement (before it was
global), so it always has an accurate value that does not depend on
statement preparation being interrupted by errors such as a missing table in
a view definition.
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because process list_db wasn't always
correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length after my_casedn_str() of table/db names
- Removed not used version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
TRASH was mapped to TRASH_FREE and was supposed to be used for memory
that should not be accessed anymore, while TRASH_ALLOC() is to be
used for uninitialized but to-be-used memory.
But sometimes TRASH() was used in the latter sense.
Remove TRASH() macro, always use explicit TRASH_ALLOC() or TRASH_FREE().
Other changes done to get this to work:
- Added 'internal_tables' to TABLE object to list which sequence tables
are needed to use the table.
- Mark any expression using DEFAULT() with LEX->default_used.
This is needed when deciding if we should open internal sequence
tables when a table is opened (we don't need to open sequence tables
if the main table is only used with SELECT).
- Create_and_open_temporary_table() can now also open all internal
sequence tables.
- Added option MYSQL_LOCK_USE_MALLOC to mysql_lock_tables()
to force memory allocation to be used with malloc instead of
memroot.
- Added flag to MYSQL_LOCK to remember if allocation was done with
malloc or memroot (makes code simpler and safer).
- init_one_table_for_prelocking() now takes an argument for what lock to
use instead of a flag for whether it's a routine or something else.
- Renamed prelocking placeholders to make them more understandable as
they are now used in more code.
- Changed the test in check_lock_and_start_stmt() that checks whether the found
table has correct locks. The old test didn't work for tables that have lock
TL_WRITE_ALLOW_WRITE, which is what sequence tables are using.
- Added VCOL_NOT_VIRTUAL option to ensure that sequence functions can't
be used with virtual columns
- More sequence tests
This commit implements aggregate stored functions. The basic idea behind
the feature is:
* Implement a special instruction FETCH GROUP NEXT ROW that will pause
the execution of the stored function. When the instruction is reached,
execution of the initial query resumes "as if" the function returned.
This gives the server the opportunity to advance to the next row in the
result set.
* Stored aggregates behave like regular aggregate functions. The
implementation thus resides in the class Item_sum_sp. Because it is
an aggregate function, for each new row in the group, the
Item_sum_sp::add() method will be called. This is when execution resumes
and the function does another iteration to "add" one extra element to
the final result.
* When the end of group is reached, val_xxx() method will be called for
the item. This case is handled by another execute step for the stored
function, only with a special flag to force a call to the return
handler. See Item_sum_sp::execute() for details.
To allow this pause and resume semantic, we must preserve the function
context across executions. This is stored in Item_sp::sp_query_arena only for
aggregate stored functions, but has no impact for regular functions.
We also enforce aggregate functions to include the "FETCH GROUP NEXT ROW"
instruction.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
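A hedged SQL-level sketch of an aggregate stored function (table, column and
function names are invented): the loop resumes at FETCH GROUP NEXT ROW for every
row of the group, and the NOT FOUND handler acts as the return handler at the
end of the group.

  DELIMITER $$
  CREATE AGGREGATE FUNCTION agg_sum(x INT) RETURNS INT
  BEGIN
    DECLARE total INT DEFAULT 0;
    DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;  -- end of group
    LOOP
      FETCH GROUP NEXT ROW;   -- pauses here until the next row of the group
      SET total = total + x;
    END LOOP;
  END$$
  DELIMITER ;
  SELECT a, agg_sum(b) FROM t1 GROUP BY a;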
Most "new" failures fixed in the following files:
- sql_select.cc
- item.cc
- item_func.cc
- opt_subselect.cc
Other things:
- Allocate udf_handler strings in mem_root
- Required changes in sql_string.h
- Add mem_root as argument to some new [] calls
- Mark udf_handler strings as thread specific
- Removed some comment blocks with code
Issue:
------
VALUES doesn't have a type() function and is considered an
Item_field.
Solution for 5.7:
-----------------
Add a new type() function for Item_values_insert.
On 8.0 and trunk it was fixed by Mithun's Bug#19601973.
Solution for 5.6:
-----------------
Additionally Bug#17458914 is backported.
This will address the problem of using VALUES() in
INSERT ... ON DUPLICATE KEY UPDATE. Create a field object
only if it is in the UPDATE clause, else return a NULL
item.
This will also address the problems mentioned in
Bug#14789787 and Bug#16756402.
Solution for 5.5:
-----------------
As mentioned above Bug#17458914 is backported.
Additionally Bug#14786324 is also backported.
When VALUES() is detected outside its meaningful place,
it should be treated as NULL and is thus replaced with a
Field_null object, with the same name as the original
field.
Fields with type NULL are generally not handled well inside
the server (e.g Innodb will not accept them and it is
impossible to create them in regular tables). So create a
new const NULL item instead.
As reported in MDEV-11969, "there's no way to ditch knowledge" about some
domain that is no longer updated on a server. Besides cluttering the output
in the DBA console, stale domains can prevent the slave
from connecting to the master, as MDEV-12012 witnesses.
Which domain is obsolete must be evaluated by the user (DBA) according
to whether the domain info is still relevant and whether the domain will ever
receive any update.
This patch introduces a method to discard obsolete gtid domains from
the server binlog state. The removal requires that no event group from such a
domain is present in existing binlog files. If there are any, the
containing logs must first be PURGEd in order for
FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
to succeed. Otherwise the command returns an error.
The list of obsolete domains can be computed through
intersecting two sets - the earliest (first) binlog's Gtid_list
and the current value of @@global.gtid_binlog_state - and extracting
the domain id components from the intersection list items.
The new DELETE_DOMAIN_ID variant of FLUSH still rotates the binlog,
omitting the deleted domains from the active binlog file's Gtid_list.
Notice though that when the command is ineffective - none of the domains
requested for deletion exist in the binlog state - rotation does not occur.
Obsolete domain deletion is not harmful for connected slaves as long
as the master-side *purge* of binlog files is synchronized with FLUSH-DELETE_DOMAIN_ID.
The slaves must have processed the last event from the purged files as usual,
in order not to bump later into requesting a gtid from a file that
is already gone.
While the command is not replicated (as ordinary FLUSH BINARY LOGS is),
slaves, even though they have extra domains, won't suffer from reconnection errors,
thanks to the master-slave gtid connection protocol allowing the master
to be ignorant about a gtid domain.
Should such a slave be promoted to the master role at failover, it may run
the ex-master's
FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
to clean its own binlog state.
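For illustration only (the binlog file name and domain ids 4 and 5 are
arbitrary), a DBA session could look like this; the earliest binlog's Gtid_list
and @@global.gtid_binlog_state are the two sets mentioned above:

  SHOW BINLOG EVENTS IN 'master-bin.000001' LIMIT 0, 3;  -- inspect the earliest Gtid_list
  SELECT @@global.gtid_binlog_state;
  -- after PURGE BINARY LOGS has removed the files still containing those domains:
  FLUSH BINARY LOGS DELETE_DOMAIN_ID = (4, 5);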
NOTES.
suite/perfschema/r/start_server_low_digest.result
is re-recorded as a consequence of internal parser code changes.
As a result of this merge the code for the following tasks appears in 10.3:
- MDEV-12172 Implement tables specified by table value constructors
- MDEV-12176 Transform [NOT] IN predicate with long list of values INTO
[NOT] IN subquery.
A reference to a CTE may occur outside the master of the CTE
specification. In this case, if the reference to the CTE is
the first one, the specification should be detached from its
master and attached to the referencing select.
Also fixed the TYPE column in the lines of the EXPLAIN output
created for CTE tables.
- Added sql/mariadb.h file that should be included first by files in sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be done before my_global.h is included)
- Removed a lot of include my_global.h from include files
- Removed include's of some files that my_global.h automatically includes
- Removed duplicated include's of my_sys.h
- Replaced include my_config.h with my_global.h
This allows pushing conditions into derived tables with window functions
(mdev-10855) not only in the cases when the window specifications of these
window functions use the same partition, but also in the cases when the
window functions use partitions that share only some fields. In these
cases only the conditions over the common fields are pushed.
This patch just modifies the function pushdown_cond_for_derived()
to support this feature.
Some test cases demonstrating this optimization were added to
derived_cond_pushdown.test.
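A hypothetical example (table and column names are made up): the two window
functions below partition by (a,b) and by (a) respectively, so only the
condition on the shared field a can be pushed into the derived table:

  SELECT *
  FROM (SELECT a, b,
               SUM(c) OVER (PARTITION BY a, b) AS sum_c,
               AVG(c) OVER (PARTITION BY a)    AS avg_c
        FROM t1) AS dt
  WHERE dt.a > 10 AND dt.b < 3;
  -- dt.a > 10 can be pushed into the derived table; dt.b < 3 cannot.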
"Optimization for equi-joins of derived tables with GROUP BY"
should be considered rather as a 'proof of concept'.
The task itself is targeted at an optimization that employs re-writing
equi-joins with grouping derived tables / views into lateral
derived tables. Here's an example of such transformation:
select t1.a,t.max,t.min
from t1 [left] join
(select a, max(t2.b) max, min(t2.b) min from t2
group by t2.a) as t
on t1.a=t.a;
=>
select t1.a,tl.max,tl.min
from t1 [left] join
lateral (select a, max(t2.b) max, min(t2.b) min from t2
where t1.a=t2.a) as t
on 1=1;
The transformation pushes the equi-join condition t1.a=t.a into the
derived table making it dependent on table t1. It means that for
every row from t1 a new derived table must be filled out. However
the size of any of these derived tables is just a fraction of the
original derived table t. One could say that transformation 'splits'
the rows used for the GROUP BY operation into separate groups
performing aggregation for a group only in the case when there is
a match for the current row of t1.
Apparently the transformation may produce a query with a better
performance only in the case when
- the GROUP BY list refers only to fields returned by the derived table
- there is an index I on one of the tables T used in FROM list of
the specification of the derived table whose prefix covers
the fields from the proper beginning of the GROUP BY list or
fields that are equal to those fields.
Whether the result of the re-writing can be executed faster depends
on many factors:
- the size of the original derived table
- the size of the table T
- whether the index I is clustering for table T
- whether the index I fully covers the GROUP BY list.
This patch only tries to improve the chosen execution plan using
this transformation. It tries to do it only when the chosen
plan reaches the derived table by a key whose prefix covers
all the fields of the derived table produced by the fields of
the table T from the GROUP BY list.
The code of the patch does not evaluate the cost of the improved
plan. If certain conditions are met the transformation is applied.
A TVC can be used in a UNION statement, in a view and in a subquery.
Added the files where the TVC class and its methods are defined.
Added the exec and prepare methods for TVC.
Added tests for TVC.
This patch fixes a serious flaw in the
code that supports condition pushdown into
materialized views / derived tables.
If a predicate happened to contain a reference
to a mergeable view / derived table and it did
not depend directly on the target materialized
view / derived table, then the predicate was not
considered as a subject for pushdown to this view
/ derived table.
IB: Fixes in the logic deciding when to do versioned or usual row updates. Now it is
able to do unversioned updates for versioned tables just by disabling the
`TABLE_SHARE::versioned` flag.
SQL: DDL tracking for:
* RENAME TABLE, ALTER TABLE .. RENAME TO;
* DROP TABLE;
* data-modifying operations (f.ex. ALTER TABLE .. ADD/DROP COLUMN).
This is another attempt to fix the bug mdev-12992.
This patch introduces st_select_lex::context_analysis_place for
the place in SELECT where context analysis is currently performed.
It's similar to st_select_lex::parsing_place, but it is used at
the preparation stage.
Significantly reduce the amount of InnoDB, XtraDB and Mariabackup
code changes by defining pfs_os_file_t as something that is
transparently compatible with os_file_t.
Introducing a new class Type_holder (used internally in sql_union.cc),
to reuse exactly the same data type attribute aggregation Type_handler API
for hybrid functions and UNION.
This fixes a number of bugs in UNION:
- MDEV-9495 Wrong field type for a UNION of a signed and an unsigned INT expression
- MDEV-9497 UNION and COALESCE produce different field types for DECIMAL+INT
- MDEV-12594 UNION between fixed length double columns does not always preserve scale
- MDEV-12595 UNION converts INT to BIGINT
- MDEV-12599 UNION is not symmetric when mixing INT and CHAR
Details:
- sql_union.cc: Reusing attribute aggregation for UNION.
Adding new methods:
* st_select_lex_unit::join_union_type_handlers()
* st_select_lex_unit::join_union_type_attributes()
* st_select_lex_unit::join_union_item_types()
Removing the old join_types()-based code.
- Changing Type_handler::Item_hybrid_func_fix_attributes()
to accept "name", Type_handler_hybrid_field_type, Type_all_attributes
as three separate parameters instead of a single Item_hybrid_func parameter,
to make it possible to pass both Item_hybrid_func and Type_holder.
- Moving the former special GEOMETRY and ENUM/SET attribute aggregation code
from Item_type_holder::join_types() to
* Type_handler_typelib::Item_hybrid_func_fix_attributes().
* Type_handler_geometry::Item_hybrid_func_fix_attributes().
This makes GEOMETRY/ENUM/SET symmetric with all other data types
(from the UNION point of view).
Removing Item_type_holder::join_types() and Item_type_holder::get_full_info().
- Adding new methods into Type_all_attributes:
* Type_all_attributes::set_geometry_type() and
Item_hybrid_func::set_geometry_type().
* Adding Type_all_attributes::get_typelib().
* Adding Type_all_attributes::set_typelib().
- Adding Type_handler_typelib as a common parent for
Type_handler_enum and Type_handler_set, to avoid code duplication: they have
already had two common methods, and we're adding one more shared method.
- Adding Type_all_attributes::set_maybe_null(), as some type handlers
may want to set maybe_null (e.g. Type_handler_geometry) during data type
attribute aggregation.
- Changing Type_geometry_attributes() to accept Type_handler
and Type_all_attributes as two separate parameters, instead
of a single Item parameter, to make it possible to pass Type_holder.
- Adding Item_args::add_argument().
- Moving Item_args::alloc_arguments() from "protected" to "public".
- Moving Item_type_holder::Item_type_holder() from item.cc to item.h, as
now it's very simple.
Btw, this constructor should probably be eventually removed.
It's now used only in sql_show.cc, which could be modified to use
Item_return_decimal (for symmetry with Item_return_xxx created for all
other data types). Or, another option: remove all Item_return_xxx and
use Item_type_holder for all data types instead.
- storage/tokudb/mysql-test/tokudb/r/type_float.result
Recording new results (MDEV-12594).
- mysql-test/r/cte_recursive.result
Recording new results (MDEV-9497)
- mysql-test/r/subselect*.result
Recording new results (MDEV-12595)
- mysql-test/r/metadata.result
Recording new results (MDEV-9495)
- mysql-test/r/temp_table.result
Recording new results (MDEV-12594)
- mysql-test/r/type_float.result
Recording new results (MDEV-12594)
Under some conditions the function opt_sum_query() can apply MIN/MAX
optimizations to Item_sum objects of a select. These optimizations
become invalid if this select is the subquery of an IN subquery
predicate that is converted to an EXISTS subquery. Thus in this case
the MIN/MAX optimizations that have been applied in opt_sum_query()
must be rolled back.
This bug appeared in 5.3 when the code for the cost-based choice between
materialization and in-to-exists transformation of non-correlated
IN subqueries was introduced. Before this code, in-to-exists
transformations were always performed before the call of opt_sum_query().
- SETVAL(sequence_name, next_value, is_used, round)
- ALTER SEQUENCE, including RESTART WITH
Other things:
- Added handler::extra() option HA_EXTRA_PREPARE_FOR_ALTER_TABLE to signal
ha_sequence() that it should allow write_row statements.
- ALTER ONLINE TABLE now works with SEQUENCEs
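A small usage sketch under the syntax above (the sequence name and values are
invented; the commented results reflect the intended is_used semantics and are
not authoritative):

  CREATE SEQUENCE s1 START WITH 1 INCREMENT BY 1;
  SELECT SETVAL(s1, 100);            -- is_used defaults to 1: next NEXTVAL(s1) should be 101
  SELECT SETVAL(s1, 200, 0);         -- 200 not yet used: next NEXTVAL(s1) should be 200
  ALTER SEQUENCE s1 RESTART WITH 50;
  SELECT NEXTVAL(s1);                -- expected to return 50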
Syntax extension: TIMESTAMP/TRANSACTION keyword can be used before FROM ... TO, BETWEEN ... AND.
Example:
SELECT * FROM t1 FOR SYSTEM_TIME TIMESTAMP FROM '1-1-1' TO NOW();
Closes #27
* BEGIN_TS(), COMMIT_TS() SQL functions;
* VTQ instead of packed stores secs + usecs like my_timestamp_to_binary() does;
* versioned SELECT to IB is translated with COMMIT_TS();
* SQL fixes:
- FOR_SYSTEM_TIME_UNSPECIFIED condition compares to TIMESTAMP_MAX_VALUE;
- segfault fix #36: multiple executions of a prepared stmt;
- different tables to same stored procedure fix (#39)
* Fixes of previous parts: ON DUPLICATE KEY, other misc fixes.
Benefits of this patch:
- Removed a lot of calls to strlen(), especially for field_string
- Strings generated by parser are now const strings, less chance of
accidentally changing a string
- Removed a lot of calls with LEX_STRING as parameter (changed to pointer)
- More uniform code
- Item::name_length was not kept up to date. Now fixed
- Several bugs found and fixed (Access to null pointers,
access of freed memory, wrong arguments to printf like functions)
- Removed a lot of casts from (const char*) to (char*)
Changes:
- This caused some ABI changes
- lex_string_set now uses LEX_CSTRING
- Some functions are now taking const char* instead of char*
- Create_field::change and after changed to LEX_CSTRING
- handler::connect_string, comment and engine_name() changed to LEX_CSTRING
- Checked printf() related calls to find bugs. Found and fixed several
errors in old code.
- A lot of changes from LEX_STRING to LEX_CSTRING, especially related to
parsing and events.
- Some changes from LEX_STRING and LEX_STRING & to LEX_CSTRING*
- Some changes for char* to const char*
- Added printf argument checking for my_snprintf()
- Introduced null_clex_str, star_clex_string, temp_lex_str to simplify
code
- Added item_empty_name and item_used_name to be able to distinguish between
items that were given an empty name and items that were not given a name.
This is used in sql_yacc.yy to know when to give an item a name.
- select table_name."*" is no longer the same as table_name.*
- removed not used function Item::rename()
- Added comparison of item->name_length before some calls to
my_strcasecmp() to speed up comparison
- Moved Item_sp_variable::make_field() from item.h to item.cc
- Some minimal code changes to avoid copying to const char *
- Fixed wrong error message in wsrep_mysql_parse()
- Fixed wrong code in find_field_in_natural_join() where real_item() was
set when it shouldn't have been
- ER_ERROR_ON_RENAME was used with extra arguments.
- Removed some (wrong) ER_OUTOFMEMORY, as alloc_root will already
give the error.
TODO:
- Check possible unsafe casts in plugin/auth_examples/qa_auth_interface.c
- Change code to not modify LEX_CSTRING for database name
(as part of lower_case_table_names)
Working features:
CREATE OR REPLACE [TEMPORARY] SEQUENCE [IF NOT EXISTS] name
[ INCREMENT [ BY | = ] increment ]
[ MINVALUE [=] minvalue | NO MINVALUE ]
[ MAXVALUE [=] maxvalue | NO MAXVALUE ]
[ START [ WITH | = ] start ] [ CACHE [=] cache ] [ [ NO ] CYCLE ]
ENGINE=xxx COMMENT=".."
SELECT NEXT VALUE FOR sequence_name;
SELECT NEXTVAL(sequence_name);
SELECT PREVIOUS VALUE FOR sequence_name;
SELECT LASTVAL(sequence_name);
SHOW CREATE SEQUENCE sequence_name;
SHOW CREATE TABLE sequence_name;
CREATE TABLE sequence-structure ... SEQUENCE=1
ALTER TABLE sequence RENAME TO sequence2;
RENAME TABLE sequence TO sequence2;
DROP [TEMPORARY] SEQUENCE [IF EXISTS] sequence_names
Missing features
- SETVAL(value,sequence_name), to be used with replication.
- Check replication, including checking that sequence tables are marked
not transactional.
- Check that a commit happens for NEXT VALUE that changes table data (may
already work)
- ALTER SEQUENCE. ANSI SQL version of setval.
- Share identical sequence entries to not add things twice to table list.
- testing insert/delete/update/truncate/load data
- Run and fix Alibaba sequence tests (part of mysql-test/suite/sql_sequence)
- Write documentation for NEXT VALUE / PREVIOUS_VALUE
- NEXTVAL in DEFAULT
- Ensure that NEXTVAL in DEFAULT uses database from base table
- Two NEXTVAL for same row should give same answer.
- Oracle syntax sequence_table.nextval, without any FOR or FROM.
- Sequence tables are treated as 'not read constant tables' by SELECT; it would
be better if we had a separate list for sequence tables so that
select doesn't know about them, except if referred to with FROM.
Other things done:
- Improved output for safemalloc backtrack
- frm_type_enum changed to Table_type
- Removed lex->is_view and replaced it with lex->table_type. This allows
us to more easily check if an item is a view, sequence or table.
- Added table flag HA_CAN_TABLES_WITHOUT_ROLLBACK, needed for handlers
that want to support sequences
- Added handler calls:
- engine_name(), to simplify getting engine name for partition and sequences
- update_first_row(), to be able to do efficient sequence implementations.
- Made binlog_log_row() global to be able to call it from ha_sequence.cc
- Added handler variable: row_already_logged, to be able to flag that the
changed row has already been logged to the replication log.
- Added CF_DB_CHANGE and CF_SCHEMA_CHANGE flags to simplify
deny_updates_if_read_only_option()
- Added sp_add_cfetch() to avoid new conflicts in sql_yacc.yy
- Moved code for add_table_options() out from sql_show.cc::show_create_table()
- Added String::append_longlong() and used it in sql_show.cc to simplify code.
- Added extra option to dd_frm_type() and ha_table_exists to indicate if
the table is a sequence. Needed by DROP SEQUENCE to not drop a table.
Implementing cursor%ROWTYPE variables, according to the task description.
This patch includes a refactoring in how sp_instr_cpush and sp_instr_copen
work. This is needed to implement MDEV-10598 later easier, to allow variable
declarations go after cursor declarations (which is currently not allowed).
Before this patch, sp_instr_cpush worked as a Query_arena associated with
the cursor. sp_instr_copen::execute() switched to the sp_instr_cpush's
Query_arena when executing the cursor SELECT statement.
Now the Query_arena associated with the cursor is stored inside an instance
of a new class sp_lex_cursor (a LEX descendant) that contains the cursor SELECT
statement.
This simplifies the implementation, because:
- It's easier to follow the code when everything related to execution
of the cursor SELECT statement is stored inside the same sp_lex_cursor
object (rather than distributed between LEX and sp_instr_cpush).
- It's easier to link an sp_instr_cursor_copy_struct to
sp_lex_cursor rather than to sp_instr_cpush.
- Also, it allows performing sp_instr_cursor_copy_struct::exec_core()
without having a pointer to sp_instr_cpush, using a pointer to sp_lex_cursor
instead. This will be important for MDEV-10598, because sp_instr_cpush will
happen *after* sp_instr_cursor_copy_struct.
After MDEV-10598 is done, this declaration:
DECLARE
CURSOR cur IS SELECT * FROM t1;
rec cur%ROWTYPE;
BEGIN
OPEN cur;
FETCH cur INTO rec;
CLOSE cur;
END;
will generate approximately this code:
+-----+--------------------------+
| Pos | Instruction |
+-----+--------------------------+
| 0 | cursor_copy_struct rec@0 | Points to sp_lex_cursor through m_lex_keeper
| 1 | set rec@0 NULL |
| 2 | cpush cur@0 | Points to sp_lex_cursor through m_lex_keeper
| 3 | copen cur@0 | Points to sp_lex_cursor through m_cursor
| 4 | cfetch cur@0 rec@0 |
| 5 | cclose cur@0 |
| 6 | cpop 1 |
+-----+--------------------------+
Notice, "cursor_copy_struct" and "set" will go before "cpush".
Instructions at positions 0, 2, 3 point to the same sp_lex_cursor instance.
Adding methods:
- LEX::sp_while_loop_expression()
- LEX::sp_while_loop_finalize()
to reuse code between sql_yacc.yy and sql_yacc_ora.yy.
FOR loop will also reuse these methods.
Part 5: EXIT statement
Adding unconditional EXIT statement:
EXIT [ label ]
Conditional EXIT statements with a WHEN clause
will be added in a separate patch.
Moving similar code from sql_yacc.yy and sql_yacc_ora.yy to methods:
LEX::maybe_start_compound_statement()
LEX::sp_push_loop_label()
LEX::sp_push_loop_empty_label()
LEX::sp_pop_loop_label()
LEX::sp_pop_loop_empty_label()
The EXIT statement will also reuse this code.
Moving the code from *.yy to methods:
LEX::sp_change_context()
LEX::sp_leave_statement()
LEX::sp_iterate_statement()
to reuse the same code between LEAVE and ITERATE statements.
EXIT statement will also reuse the same code.
When processing an SP body:
CREATE PROCEDURE p1 (parameters)
AS [ declarations ]
BEGIN statements
[ EXCEPTION exceptions ]
END;
the parser generates two "jump" instructions:
- from the end of "declarations" to the beginning of EXCEPTION
- from the end of EXCEPTION to "statements"
These jumps are useless if EXCEPTION does not exist.
This patch makes sure that these two "jump" instructions are
generated only if EXCEPTION really exists.
- Part 9: EXCEPTION handlers
The top-most stored routine block now supports the EXCEPTION clause
in its correct place:
AS [ declarations ]
BEGIN statements
[ EXCEPTION exceptions ]
END
Inner block will be done in a separate commit.
- Part 14: IN OUT instead of INOUT (in SP parameter declarations)
1. Adding const qualifiers into a few method parameters.
2. Adding methods:
- sp_label::block_label_declare()
- LEX::sp_block_init()
- LEX::sp_block_finalize()
to share more code between the files sql_yacc.yy and sql_yacc_ora.yy,
as well as between the rules sp_labeled_block, sp_unlabeled_block,
sp_unlabeled_block_not_atomic.
3. sql_yacc.yy, sql_yacc_ora.yy changes:
- Removing sp_block_content
- Reorganizing the grammar so the rules sp_labeled_block,
sp_unlabeled_block, sp_unlabeled_block_not_atomic now
contain both BEGIN_SYM and END keywords. Previously,
BEGIN_SYM and END resided in different rules.
This change makes the grammar easier to read,
as well as simplifies adding Oracle-style DECLARE section (coming soon):
DECLARE
..
BEGIN
..
END;
Good side effects:
- SP block related grammar does not use Lex->name any more.
- The "splabel" member was removed from %union
- Adding a new grammar file sql_yacc_ora.yy, which is currently
almost a full copy of sql_yacc.yy.
Note, it's now assumed that sql_yacc.yy and sql_yacc_ora.yy
use the same set of %token directives and exactly the same
%union directive.
These declarations should eventually be moved into a shared
included file, to make sure that sql_yacc.h and sql_yacc_ora.h
are compatible.
- Removing the "-p MYSQL" flag from cmake/bison.cmake, using
the %name-prefix directive inside sql_yacc.yy and sql_yacc_ora.yy instead
- Adding other CMake related changes to build sql_yacc_ora.o
form sql_yacc_ora.yy
- Adding NUMBER(M,N) as a synonym to DECIMAL(M,N) as the first
Oracle compatibility syntax understood in sql_mode=ORACLE.
- Adding prototypes to functions add_virtual_expression()
and handle_sql2003_note184_exception(), so they can be used
in both sql_yacc.yy and sql_yacc_ora.yy.
- Adding a new test suite compat/oracle, with the first test "type_number".
Use this:
./mtr compat/oracle.type_number # to run a single test
./mtr --suite=compat/oracle # to run the entire new suite
- Adding compat/oracle into the list of default suites,
so BuildBot can run it automatically on pushes.
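For example (table and column names are invented), NUMBER(M,N) should map to
DECIMAL(M,N) in Oracle mode:

  SET sql_mode=ORACLE;
  CREATE TABLE t1 (a NUMBER(10,2));
  SHOW CREATE TABLE t1;   -- the column is expected to be reported as decimal(10,2)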
Extended syntax so that it is now possible to set lock_wait_timeout for the
following statements:
SELECT ... FOR UPDATE [WAIT n|NOWAIT]
SELECT ... LOCK IN SHARE MODE [WAIT n|NOWAIT]
LOCK TABLE ... [WAIT n|NOWAIT]
CREATE ... INDEX ON tbl_name (index_col_name, ...) [WAIT n|NOWAIT] ...
ALTER TABLE tbl_name [WAIT n|NOWAIT] ...
OPTIMIZE TABLE tbl_name [WAIT n|NOWAIT]
DROP INDEX ... [WAIT n|NOWAIT]
TRUNCATE TABLE tbl_name [WAIT n|NOWAIT]
RENAME TABLE tbl_name [WAIT n|NOWAIT] ...
DROP TABLE tbl_name [WAIT n|NOWAIT] ...
The valid range of lock_wait_timeout and innodb_lock_wait_timeout was extended so
that 0 is an acceptable value (meaning no wait).
This is an amended AliSQL patch. We prefer the Oracle syntax [WAIT n|NOWAIT]
instead of the original [WAIT [n]|NO_WAIT].
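A few hedged usage examples (table and column names are invented; the clause
placement follows the statement list above):

  SELECT * FROM t1 WHERE a = 1 FOR UPDATE WAIT 5;  -- give up after 5 seconds
  LOCK TABLE t1 WRITE NOWAIT;                      -- fail immediately if the lock cannot be taken
  ALTER TABLE t1 NOWAIT ADD COLUMN b INT;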
Define my_thread_id as an unsigned type, to avoid mismatch with
ulonglong. Change some parameters to this type.
Use size_t in a few more places.
Declare many flag constants as unsigned to avoid sign mismatch
when shifting bits or applying the unary ~ operator.
When applying the unary ~ operator to enum constants, explicitly
cast the result to an unsigned type, because enum constants can
be treated as signed.
In InnoDB, change the source code line number parameters from
ulint to unsigned type. Also, make some InnoDB functions return
a narrower type (unsigned or uint32_t instead of ulint;
bool instead of ibool).
This is an extraction from the patch for MDEV-10577, which is not
directly related to %TYPE implementation. This patch does the following:
- Moves LEX::set_last_field_type() to Column_definition::set_attributes()
- Adds initialization of Column_definition members length, decimals,
charset, on_update into the constructor.
- Column_definition::set_attributes() now does not set length and decimal
to 0 any more, as they are not initialized in the constructor.
- Move Column_definition::prepare_interval_field() from field.h to field.cc,
as it's too huge.
Moving another batch of functions implemented in sql_yacc.yy into methods of LEX,
to make it easier to reuse them between sql_yacc.yy and sql_yacc_ora.yy.
The list of functions:
- add_create_index_prepare()
- add_key_to_list()
- set_trigger_new_row()
- set_local_variable()
- set_system_variable()
- create_item_for_sp_var()
The full list of functions moved:
int case_stmt_action_expr(LEX *, Item* expr);
int case_stmt_action_when(LEX *, Item *when, bool simple);
int case_stmt_action_then(LEX *);
bool add_select_to_union_list(LEX *,bool is_union_distinct, bool is_top_level);
This is a preparatory change for "MDEV-10142 PL/SQL parser",
to reuse the code easier between sql_yacc.yy and coming soon sql_yacc_ora.yy.
The idea of this fix was taken from the patch by Roy Lyseng
for mysql-5.6 Bug#14740889: "Wrong result for aggregate
functions when executing query through cursor".
Here's Roy's comment for his patch:
"
The problem was that a grouped query did not behave properly when
executed using a cursor. On further inspection, the query used one
intermediate temporary table for the grouping.
Then, Select_materialize::send_result_set_metadata created a temporary
table for storing the query result. Notice that get_unit_column_types()
is used to retrieve column meta-data for the query. The items contained
in this list are later modified so that their result_field points to
the row buffer of the materialized temporary table for the cursor.
But prior to this, these result_field objects have been prepared for
use in the grouping operation, by JOIN::make_tmp_tables_info(), hence
the grouping operation operates on wrong column buffers.
The problem is solved by using the list JOIN::fields when copying data
to the materialized table. This list is set by JOIN::make_tmp_tables_info()
and points to the columns of the last intermediate temporary table of
the executed query. For a UNION, it points to the temporary table
that is the result of the UNION query.
Notice that we have to assign a value to ::fields early in JOIN::optimize()
in case the optimization shortcuts due to a const plan detection.
A more optimal solution might be to avoid creating the final temporary
table when the query result is already stored in a temporary table.
"
The patch does not contain a test case, but the description of the
problem corresponds exactly what could be observed in the test
case for mdev-11081.
Merge feature into 10.2 from feature branch.
Delayed replication adds an option
CHANGE MASTER TO master_delay=<seconds>
Replication will then delay applying events with that many
seconds. This creates a replication slave that reflects the state of
the master some time in the past.
Feature is ported from MySQL source tree.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
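A minimal sketch of using the option on a slave (the delay value is arbitrary;
the delay is expected to be reported in the SQL_Delay / SQL_Remaining_Delay
fields of SHOW SLAVE STATUS):

  STOP SLAVE;
  CHANGE MASTER TO master_delay = 3600;  -- apply events one hour behind the master
  START SLAVE;
  SHOW SLAVE STATUS\G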
Initial merge of delayed replication from MySQL git.
The code from the initial push into MySQL is merged, and the
associated test case passes. A number of tasks are still pending:
1. Check full test suite run for any regressions or .result file updates.
2. Extend the feature to also work for parallel replication.
3. There are some todo-comments about future refactoring left from
MySQL, these should be located and merged on top.
4. There are some later related MySQL commits, these should be checked
and merged. These include:
e134b9362ba0b750d6ac1b444780019622d14aa5
b38f0f7857c073edfcc0a64675b7f7ede04be00f
fd2b210383358fe7697f201e19ac9779879ba72a
afc397376ec50e96b2918ee64e48baf4dda0d37d
5. The testcase from MySQL relies heavily on sleep and timing for
testing, and seems likely to sporadically fail on heavily loaded test
servers in buildbot or distro build farms.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This is similar to MySQL Worklog 3253, but with
a different implementation. The disk format and
SQL syntax are identical to MySQL 5.7.
Features supported:
- "Any" amount of any trigger
- Supports FOLLOWS and PRECEDES to be
able to put triggers in a certain execution order.
Implementation details:
- Class Trigger added to hold information about a trigger.
Before this trigger information was stored in a set of lists in
Table_triggers_list and in Table_triggers_list::bodies
- Each Trigger has a next field that points to the next Trigger with the
same action and time.
- When accessing a trigger, we now always access all linked triggers
- The lists are now only used to load and save trigger files.
- MySQL trigger test case (trigger_wl3253) added and we execute these
identically.
- Even more graceful handling of wrong trigger files than before. This
is useful if a trigger file uses functions or syntax not provided by
the server.
- Each trigger now has a "Created" field that shows when the trigger was
created, with 2 decimals.
Other comments:
- Many of the changes in test files were done because of the new "Created"
field in the trigger file. This shows up in SHOW ... TRIGGER and when
using information_schema.trigger.
- Don't check if all memory is released if one uses --gdb; This is needed
to be able to get a list from safemalloc of not freed memory while
debugging.
- Added option to trim_whitespace() to know how many prefix characters
were skipped.
- Changed a few ulonglong sql_mode to sql_mode_t, to find some wrong usage
of sql_mode.
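As an illustration (table, column and trigger names are invented), two triggers
with the same action and time can now be ordered explicitly:

  CREATE TRIGGER tr_audit BEFORE INSERT ON t1 FOR EACH ROW
    SET NEW.b = NEW.a;
  CREATE TRIGGER tr_fixup BEFORE INSERT ON t1 FOR EACH ROW PRECEDES tr_audit
    SET NEW.a = NEW.a + 1;
  SHOW TRIGGERS LIKE 't1';   -- both triggers are listed, with their Created timestamps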
If a materialized derived table / view is specified by a unit
with SELECTs containing ORDER BY ... LIMIT then condition pushdown
cannot be done for these SELECTs.
If a materialized derived table / view is specified by a unit
containing global ORDER BY ... LIMIT then condition pushdown
cannot be done for this unit.
The condition pushed into WHERE/HAVING of a materialized
view/derived table may differ for different executions of
the same prepared statement. That's why it should be
ANDed with the existing WHERE/HAVING conditions only after all
permanent transformations of these conditions have been
performed.
mysql_prepare_create_table() was fixed so it doesn't allow duplicate
constraint names. Syntax for CONSTRAINT IF NOT EXISTS was added
and is handled in mysql_alter_table().
This bug in st_select_lex_node::move_node could result
in invalid select lists in recursive units that could
cause falling into infinite loops when iterating over
selects in such units.
for materialized views and derived tables: there was no
push-down if the view was defined as a union of selects
without aggregation. Added test cases with such unions.
Adjusted result files after the merge of the code for mdev-9197.
Moved checking whether the limit set for the number of iterations
when executing a recursive query has been reached from
st_select_lex_unit::exec_recursive to TABLE_LIST::fill_recursive.
Changed the name of the system variable max_recursion_level to
max_recursive_iterations.
Adjusted test cases.
Temporary tables created for recursive CTEs
were instantiated at the prepare phase. As
a result these temporary tables lacked
indexes for look-ups and the optimizer could not
use them.
it's not enough to look for NOT NULL IS, this also fails queries like
SELECT NOT NULL <=> NULL;
and adds no value anymore, as the grammar now requires parentheses
* remove a confusing method name - Field::set_default_expression()
* remove handler::register_columns_for_write()
* rename stuff
* add asserts
* remove unlikely unlikely
* remove redundant if() conditions
* fix mark_unsupported_function() to report the most important violation
* don't scan vfield list for default values (vfields don't have defaults)
* move handling for DROP CONSTRAINT IF EXIST where it belongs
* don't protect engines from Alter_inplace_info::ALTER_ADD_CONSTRAINT
* comments
- Force usage of () around complex DEFAULT expressions
- Give error if DEFAULT expression contains invalid characters
- Don't use const_charset_conversion for stored Item_func_sysconf expressions
as the result is not constant over different executions
- Fixed Item_func_user() to not store calculated value in str_value
MDEV-10134 Add full support for DEFAULT
- Added support for using tables with MySQL 5.7 virtual fields,
including MySQL 5.7 syntax
- Better error messages also for old cases
- CREATE ... SELECT now also updates timestamp columns
- Blob can now have default values
- Added new system variable "check_constraint_checks", to turn of
CHECK constraint checking if needed.
- Removed some engine independent tests in suite vcol to only test myisam
- Moved some tests from 'include' to 't'. Should some day be done for all tests.
- FRM version increased to 11 if one uses virtual fields or constraints
- Changed to use a bitmap to check if a field has got a value, instead of
setting HAS_EXPLICIT_VALUE bit in field flags
- Expressions can now be up to 65K in total
- Ensure we are not referring to uninitialized fields when handling virtual fields or defaults
- Changed check_vcol_func_processor() to return a bitmap of used types
- Had to change some functions that calculated cached value in fix_fields to do
this in val() or getdate() instead.
- store_now_in_TIME() now takes a THD argument
- fill_record() now updates default values
- Add a lookahead for NOT NULL, to be able to handle DEFAULT 1+1 NOT NULL
- Automatically generate a name for constraints that don't have a name
- Added support for ALTER TABLE DROP CONSTRAINT
- Ensure that partition functions register virtual fields used. This fixes
some bugs when using virtual fields in a partitioning function
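A hedged example pulling a few of the items above together (table, column and
constraint names are invented):

  CREATE TABLE t1 (
    a INT DEFAULT 1,
    b INT DEFAULT (a + 1),           -- complex DEFAULT expressions need ()
    c BLOB DEFAULT 'initial',        -- BLOB columns can now have default values
    CONSTRAINT a_positive CHECK (a > 0)
  );
  SET check_constraint_checks = 0;   -- temporarily disable CHECK constraint checking
  ALTER TABLE t1 DROP CONSTRAINT a_positive;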
Added test cases to check the fix.
Fixed the problem of wrong types of recursive tables when the type of anchor part does not coincide with the
type of recursive part.
Prevented usage of materialization and subquery cache for subqueries with recursive references.
Introduced the system variable 'max_recursion_level'.
Added a test case to test usage of this variable.
Added the check whether there are set functions in the specifications of recursive CTE.
Added the check whether there are recursive references in subqueries.
Introduced boolean system variable 'standards_compliant_cte'. By default it's set to 'on'.
When it's set to 'off', non-standard-compliant CTEs can be executed.
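For illustration (table, column names and the limit value are invented;
max_recursion_level was later renamed to max_recursive_iterations, as noted
earlier in this log):

  SET max_recursion_level = 1000;
  WITH RECURSIVE ancestors AS (
    SELECT id, parent_id FROM folks WHERE id = 1
    UNION ALL
    SELECT f.id, f.parent_id
    FROM folks f JOIN ancestors a ON f.id = a.parent_id
  )
  SELECT * FROM ancestors;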
When the specification of a WITH table referred to a view
that used a base table with the same name as the WITH table,
the server went into an infinite loop because it erroneously
resolved the reference to the base table as a reference to
the WITH table.
WITH tables used in a view cannot be searched for beyond the
scope of the view.
Item_func_or_sum.
Implemented method update_used_tables for class Item_window_func.
Added the flag Item::with_window_func.
Made sure that window functions could be used only in SELECT list
and ORDER BY clause.
Added test cases that checked different illegal placements of
window functions.
of mdev-8789.
Fixed a bug in TABLE_LIST::print.
Fixed another bug for the case when the definition of a
WITH table contained a column list while the join in the main
query used two instances of this table.
"Re-factor the code for post-join operations".
The patch mainly contains the code ported from mysql-5.6 and
created for two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm
The first task was implemented for mysql-5.6 by Ole John Aske.
It allows making all decisions on the ORDER BY operation at the optimization
stage.
The second task implemented for mysql-5.6 by Evgeny Potemkin adds JOIN_TAB
nodes for post-join operations that require temporary tables. It allows
to execute these operations within the nested loops algorithm that used to
be used before this task only for join queries. Besides, this task moves
all planning of the execution of these operations from the execution phase
to the optimization phase.
Some other re-factoring changes of mysql-5.6 were pulled in, mainly because
it was easier to pull them in than roll them back. In particular all
changes concerning Ref_ptr_array were incorporated.
The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.
The following left in semi-improved state to keep patch size reasonable:
- Field operator new: left thd_alloc(current_thd)
- Sql_alloc operator new: left thd_alloc(thd_get_current_thd())
- Item_args constructors: left thd_alloc(thd)
- Item_func_interval::fix_length_and_dec(): no THD arg, have to call current_thd
- Item_func_dyncol_exists::val_int(): same
- Item_dyncol_get::val_str(): same
- Item_dyncol_get::val_int(): same
- Item_dyncol_get::val_real(): same
- Item_dyncol_get::val_decimal(): same
- Item_singlerow_subselect::fix_length_and_dec(): same
Problem & Analysis: If a DML statement invokes a trigger or a
stored function that inserts into an AUTO_INCREMENT column,
that DML has to be marked as an 'unsafe' statement. If the
tables are locked in the transaction prior to the DML statement
(using LOCK TABLES), then the same statement is not marked as
an 'unsafe' statement. The unsafeness check
is guarded by if (!thd->locked_tables_mode). Hence if
we lock the tables prior to the DML statement, the check is *not*
entered, and the statement is not marked
as an unsafe statement.
Fix: Irrespective of the locked_tables_mode value, the unsafeness
check should be done. With this patch, the code is moved
out to the 'decide_logging_format()' function where all these checks
happen, and without the 'if (!thd->locked_tables_mode)' guard.
Along with the specified test case in the bug scenario
(BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS), we also identified that the
other cases BINLOG_STMT_UNSAFE_AUTOINC_NOT_FIRST,
BINLOG_STMT_UNSAFE_WRITE_AUTOINC_SELECT and BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS
were also protected with thd->locked_tables_mode, which is not right. All
of those checks were also moved to the 'decide_logging_format()' function.
Problem:
A procedure which uses a stack of views is first executed without the deepest view.
It fails, but one view is cached (as well as the whole procedure).
Then we simultaneously create the missing second view and execute the procedure.
At the beginning of procedure execution the view is not yet created, so
the procedure is used as it was cached (the cache was not invalidated).
But by the time we try to use the deepest view, it has already been created.
The problem is that thd->select_number was not advanced (the first view was not parsed), so the second view gets the same number.
The fix is to keep thd->select_number correct even if we use cached views.
In the proposed solution (to keep it simple) the counter can be bigger than it should be, but this should not create a problem because the numbers are still unique and the situation is very rare.
When CHANGE MASTER was executed as a PS, its attributes were wrongly
getting reset toward the end of PREPARE. As a result, the subsequent
executions had no effect. Fixed by making sure that the CHANGE MASTER
attributes are preserved during the lifetime of the PS.
- Part 3: Adding mem_root to push_back() and push_front()
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tags in read_xml()
Other things:
- Avoid calling init_and_set_log_file_name() when opening binary log.
- Remove newlines early when reading from index file.
- Ensure that reset_logs() will work even if thd is 0 (Can happen on startup)
- Added thd to start_slave_threads() for better error handling.
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known or if there were many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some places where current_thd is not necessarily defined.
- Removing calls to current_thd when we have access to thd
Part of this is optimization (not calling current_thd when not needed),
but part is bug fixing for error conditions when current_thd is not defined
(for example at startup and shutdown of mysqld)
Notable renames done as otherwise a lot of functions would have to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed old 'tab' prefix in JOIN_TAB::save_explain_data() and use members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555, MDEV-7590, MDEV-7581, MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset for re-execution, and this caused an overwrite of a random memory position.
The fix was to move non_agg_fields from select_lex to JOIN, which is properly reset.
Handle the case where the optimizer decides to use
handler->delete_all_rows(), but then this call returns
HA_ERR_UNSUPPORTED and execution switches to regular
row-by-row deletion.
To make st_select_lex::master_unit() inlinable:
- moved its definition to sql_lex.h
- removed base class virtual master_unit() declaration since this method is
specific to st_select_lex
Overhead change:
st_select_lex::master_unit() 0.17% -> out of radar
execute_sqlcom_select() 0.13% -> 0.12%
JOIN::save_explain_data_intern() 0.27% -> 0.23%
JOIN::optimize_inner() 0.76% -> 0.72%
JOIN::exec_inner() 0.30% -> 0.24%
JOIN::prepare() 0.30% -> 0.29%
JOIN::optimize() 0.05% -> 0.05%
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called dozens of times implicitly at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root allocation,
passing through THD pointer whenever it is needed.
Number of sql_alloc() calls reduced 345 -> 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59
sql_alloc() overhead dropped 0.25 -> 0.06
delete_dynamic() was called 9-11x per OLTP RO query + 3x per BEGIN/COMMIT.
3 calls were performed by LEX_MASTER_INFO. Added condition to call those only
for CHANGE MASTER.
1 call was performed by lock_table_names()/Hash_set/my_hash_free(). Hash_set was
supposed to be used for DDL and LOCK TABLES to gather database names, while it
was initialized/freed for DML too. In fact Hash_set didn't do any useful job
here. Hash_set was removed and MDL requests are now added directly to the list.
The rest 5-7 calls are done by optimizer, mostly by Explain_query and friends.
Since dynamic arrays are used in most cases, they can hardly be optimized.
my_hash_free() overhead dropped 0.02 -> out of radar.
delete_dynamic() overhead dropped 0.12 -> 0.04.
including the big commit
commit 305130361bf72726de220f3d2b2787395e10be61
Author: Marc Alff <marc.alff@oracle.com>
Date: Tue Feb 10 11:31:32 2015 +0100
WL#8354 BACKPORT DIGEST IMPROVEMENTS TO MYSQL 5.6
(with the following commits) and related changes in sql/
- Remove ANALYZE's timing code off the execution path of regular
SELECTs.
- Improve the tracker that tracks counts/execution times of SELECTs or
DML statements:
= regular execution just increments counters
= ANALYZE will also collect timings.