- Added the sql/mariadb.h file, which should be included first by files in the sql
directory if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be set up before my_global.h is included)
- Removed many includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicated includes of my_sys.h
- Replaced include my_config.h with my_global.h
- Use microsecond_interval_timer() to calculate time for applying row events.
(Faster execution)
- Removed return value for set_row_stmt_start_timestamp()
as it was never used.
Don't use the server's version, which expects a valid THD.
Modify net_serv.cc to not use any THD if MYSQL_SERVER isn't defined.
This reverts commit aaddac5cd7.
Accept the proxy protocol header from client connections.
The new server variable 'proxy_protocol_networks' contains the list
of networks from which the proxy header is accepted.
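As a rough, purely illustrative sketch (the network values below are invented;
in practice the variable would be set in the server configuration), the accepted
networks can be inspected from SQL:
show global variables like 'proxy_protocol_networks';
-- example value: 192.168.0.0/16,::1,localhost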
It allows pushing conditions into derived tables with window functions not
only in the cases when the window specifications of these window
functions use the same partition, but also in the cases when the window
functions use partitions that share only some fields. In these
cases only the conditions over the common fields are pushed.
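As a purely illustrative sketch (the table t1 and its columns a, b, c are
invented), in a query like the one below the two window functions share only
the field a, so only the condition dt.a > 10 can be pushed into the derived
table, while dt.b < 3 cannot:
select *
from (select a, b,
             sum(c) over (partition by a)    as s1,
             sum(c) over (partition by a, b) as s2
      from t1) as dt
where dt.a > 10 and dt.b < 3;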
With MDEV-12288 and MDEV-13536, the InnoDB purge threads will access
pages more often, causing all sorts of debug assertion failures in
the B-tree code.
Work around this problem by amending the corruption tests with
--innodb-purge-rseg-truncate-frequency=1 --skip-innodb-fast-shutdown
so that everything will be purged before the server
is restarted to deal with the corruption.
This should have been part of MDEV-12288.
trx_undo_t::del_marks: Remove.
Purge needs to process all undo log records in order to
reset the DB_TRX_ID. Before MDEV-12288, it sufficed to only delete
the purgeable delete-marked records and to ignore other undo log records.
trx_rseg_t::needs_purge: Renamed from trx_rseg_t::last_del_marks.
Indicates whether a rollback segment needs to be processed by purge.
TRX_UNDO_NEEDS_PURGE: Renamed from TRX_UNDO_DEL_MARKS.
Indicates whether an undo log needs to be processed by purge.
This will be 1 until trx_purge_free_segment() has been invoked.
row_purge_record_func(): Set the is_insert flag for TRX_UNDO_INSERT_REC,
so that the DB_ROLL_PTR will match in row_purge_reset_trx_id().
trx_purge_fetch_next_rec(): Add a comment about row_purge_record_func()
going to set the is_insert flag.
trx_purge_read_undo_rec(): Always attempt to read the undo log record.
trx_purge_get_next_rec(): Do not skip any undo log records.
Even when no clustered index record is going to be removed,
we may want to reset some DB_TRX_ID,DB_ROLL_PTR.
trx_undo_rec_get_cmpl_info(), trx_undo_rec_get_extern_storage(): Remove.
trx_purge_add_undo_to_history(): Set the TRX_UNDO_NEEDS_PURGE flag
so that the resetting will work on undo logs that were originally
created before MDEV-12288 (MariaDB 10.3.1).
trx_undo_roll_ptr_is_insert(), trx_purge_free_segment(): Cleanup
(should be no functional change).
with window functions (mdev-10855).
This patch just modified the function pushdown_cond_for_derived()
to support this feature.
Some test cases demonstrating this optimization were added to
derived_cond_pushdown.test.
If the server is upgraded from a database that was created
before MDEV-12288, and if the undo logs in the database contain
an incomplete transaction that performed an INSERT operation,
the server would crash when rolling back that transaction.
trx_commit_low(): Relax an overly strict assertion. This function
will also be called after completing the rollback of a recovered
transaction.
trx_purge_add_undo_to_history(): Merged from the functions
trx_purge_add_update_undo_to_history() and trx_undo_update_cleanup(),
which are removed. Remove the parameter undo_page, and instead call
trx_undo_set_state_at_finish() to obtain it.
trx_write_serialisation_history(): Treat undo and old_insert equally.
That is, after the rollback (or XA COMMIT) of a recovered transaction
before upgrade, move all logs (both insert_undo and update_undo) to
the purge queue.
JSON_EXTRACT has specific comparison semantics,
so we have to implement a dedicated comparison method for it in
Arg_comparator.
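For illustration only (the table orders and the column attrs are invented),
these are the kinds of comparisons that now go through the JSON-aware path:
select json_extract('{"a": 10}', '$.a') = 10;
select * from orders where json_extract(attrs, '$.price') > 100;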
Conflicts:
sql/item_cmpfunc.cc
"Optimization for equi-joins of derived tables with GROUP BY"
should rather be considered a 'proof of concept'.
The task itself is targeted at an optimization that employs re-writing
equi-joins with grouping derived tables / views into lateral
derived tables. Here's an example of such transformation:
select t1.a,t.max,t.min
from t1 [left] join
(select a, max(t2.b) max, min(t2.b) min from t2
group by t2.a) as t
on t1.a=t.a;
=>
select t1.a,t.max,t.min
from t1 [left] join
lateral (select a, max(t2.b) max, min(t2.b) min from t2
where t1.a=t2.a) as t
on 1=1;
The transformation pushes the equi-join condition t1.a=t.a into the
derived table making it dependent on table t1. It means that for
every row from t1 a new derived table must be filled out. However
the size of any of these derived tables is just a fraction of the
original derived table t. One could say that transformation 'splits'
the rows used for the GROUP BY operation into separate groups
performing aggregation for a group only in the case when there is
a match for the current row of t1.
Apparently the transformation may produce a query with better
performance only in the case when
- the GROUP BY list refers only to fields returned by the derived table
- there is an index I on one of the tables T used in FROM list of
the specification of the derived table whose prefix covers
the fields from the proper beginning of the GROUP BY list or
fields that are equal to those fields.
Whether the result of the re-writing can be executed faster depends
on many factors:
- the size of the original derived table
- the size of the table T
- whether the index I is clustering for table T
- whether the index I fully covers the GROUP BY list.
This patch only tries to improve the chosen execution plan using
this transformation. It tries to do it only when the chosen
plan reaches the derived table by a key whose prefix covers
all the fields of the derived table produced by the fields of
the table T from the GROUP BY list.
The code of the patch does not evaluate the cost of the improved
plan. If certain conditions are met, the transformation is applied.
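For the example above, an index like the following (purely illustrative)
would satisfy these conditions: its prefix (a) covers the GROUP BY list of
the derived table, and including b also covers the aggregated column:
create index i_t2_a_b on t2 (a, b);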