Commit graph

15 commits

Sergei Golubchik
8d99166c69 MDEV-11640 gcol.gcol_select_myisam fails in buildbot on Power
JOIN_CACHEs were initialized in check_join_cache_usage(), called from
make_join_readinfo(). After that, make_join_readinfo() checked whether
keyread could be used. Later, after make_join_readinfo(), the optimizer
decided whether to use filesort. And even later, at execution time,
keyread was actually enabled from join_read_first().

The problem is that if a query uses a vcol, the base columns it depends
on are automatically added to the read_set, because they are needed to
calculate the vcol. But if we're doing keyread, the vcol is taken from
the index, not calculated, so the base columns do not need to be in the
read_set (in fact they should not be, as they never get values).

The bug was that JOIN_CACHE used a read_set that included the base
columns; because of keyread they were never read, so the cache was
filled with garbage.

So the final read_set is only known after keyread has been decided, and
after filesort has been decided (as filesort doesn't use keyread). But
check_join_cache_usage() has to be done in make_join_readinfo(), as the
code below it depends on these checks.

Fix: keep the JOIN_CACHE checks where they were, but move the
initialization down to the very end of JOIN::optimize_inner. If keyread
was enabled, update the read_set to include only the columns that are
part of the index, copying the keyread logic from join_read_first() so
that it runs at optimize time.
2017-02-13 18:12:14 +01:00
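A minimal sketch of the kind of query the commit above is about, assuming a MyISAM table with an indexed virtual column (table and column names are hypothetical):

  CREATE TABLE t1 (
    a INT,
    v INT AS (a + 1) VIRTUAL,   -- vcol computed from base column a
    KEY (v)                     -- index on the vcol makes keyread possible
  ) ENGINE=MyISAM;
  INSERT INTO t1 (a) VALUES (1),(2),(3);

  -- Only v is selected, so it can be read from the index (keyread) instead
  -- of being computed from a; the join cache must then cache index columns
  -- only, not the never-read base column a.
  SELECT t1.v FROM t1 JOIN t1 AS t2 ON t1.v = t2.v;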
Nirbhay Choubey
8b2e642aa2 MDEV-7635: Update tests to adapt to the new default sql_mode 2017-02-10 06:30:42 -05:00
Nirbhay Choubey
3435e8a515 MDEV-7635: Part 1
innodb_autoinc_lock_mode            = 2
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_dump_pct         = 25
innodb_buffer_pool_load_at_startup  = ON
innodb_checksum_algorithm           = CRC32
innodb_file_format                  = Barracuda
innodb_large_prefix                 = ON
innodb_log_compressed_pages         = ON
innodb_purge_threads                = 4
innodb_strict_mode                  = ON
binlog_annotate_row_events          = ON
binlog_format                       = MIXED
binlog-row-event-max-size           = 8192
group_concat_max_len                = 1M
lock_wait_timeout                   = 86400
log_slow_admin_statements           = ON
log_slow_slave_statements           = ON
log_warnings                        = 2
max_allowed_packet                  = 16M
replicate_annotate_row_events       = ON
slave_net_timeout                   = 60
sync_binlog                         = 1
aria_recover                        = BACKUP,QUICK
myisam_recover_options              = BACKUP,QUICK
2017-02-10 06:30:42 -05:00
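These are new defaults, so they can still be overridden in a config file or at runtime. A quick, illustrative way to check what a running server actually uses:

  SHOW GLOBAL VARIABLES LIKE 'innodb_strict_mode';
  SHOW GLOBAL VARIABLES LIKE 'binlog_format';
  SELECT @@global.group_concat_max_len, @@global.max_allowed_packet;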
Monty
af7490f95d Remove the trailing '.' from error messages to make them consistent
Fixed a few failing tests
2016-10-05 01:11:08 +03:00
Sergei Golubchik
06b7fce9f2 Merge branch '10.1' into 10.2 2016-09-09 08:33:08 +02:00
Monty
6f31dd093a Added new status variables to make it easier to debug certain problems:
- Handler_read_retry
- Update_scan
- Delete_scan
2016-08-21 20:18:39 +03:00
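A rough illustration of reading the new counters; the table and statement are hypothetical:

  FLUSH STATUS;
  UPDATE t1 SET b = b + 1 WHERE a > 0;            -- hypothetical statement
  SHOW SESSION STATUS LIKE 'Update_scan';
  SHOW SESSION STATUS LIKE 'Handler_read_retry';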
Alexander Barkov
612267301f MDEV-10036 sql_yacc.yy: Split select_part2 to disallow syntactically bad constructs with INTO, PROCEDURE, UNION
MDEV-10037 UNION with LIMIT ROWS EXAMINED does not require parentheses
2016-05-10 11:48:01 +04:00
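As a sketch of the second item, LIMIT ROWS EXAMINED applied to a whole UNION without extra parentheses (tables hypothetical):

  SELECT a FROM t1
  UNION
  SELECT a FROM t2
  LIMIT ROWS EXAMINED 10000;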
Igor Babaev
2cfc450bf7 This is the consolidated patch for MDEV-8646:
"Re-factor the code for post-join operations".

The patch mainly contains the code ported from mysql-5.6 and
created for two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm

The first task was implemented for mysql-5.6 by Ole John Aske.
It allows all decisions on the ORDER BY operation to be made at the
optimization stage.

The second task, implemented for mysql-5.6 by Evgeny Potemkin, adds JOIN_TAB
nodes for post-join operations that require temporary tables. It allows
these operations to be executed within the nested-loops algorithm, which
before this task was used only for join queries. Besides that, this task
moves all planning of the execution of these operations from the execution
phase to the optimization phase.

Some other re-factoring changes of mysql-5.6 were pulled in, mainly because
it was easier to pull them in than roll them back. In particular all
changes concerning Ref_ptr_array were incorporated.

The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.
2016-02-09 12:35:59 -08:00
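For orientation, the kind of query whose post-join steps (grouping, then ordering on an aggregate) need a temporary table and a filesort; after this patch those steps are planned at optimization time rather than at execution time (tables hypothetical):

  EXPLAIN
  SELECT t1.b, COUNT(*) AS cnt
  FROM t1 JOIN t2 ON t1.a = t2.a
  GROUP BY t1.b
  ORDER BY cnt DESC;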
timour@askmonty.org
afed809297 MDEV-5123 Remove duplicated conditions pushed both to join_tab->select_cond and join_tab->cache_select->cond for blocked joins.
BNL and BNLH joins pre-filter the records from a joined table via JOIN_TAB::cache_select->cond.
There is no need to re-evaluate the same conditions via JOIN_TAB::select_cond. This patch removes
the duplicated conditions from the top-level conjuncts of each pushed condition.

The added "Using where" in a few EXPLAINs is due to taking tab->cache_select->cond
into account, in addition to tab->select_cond, in JOIN::save_explain_data_intern.
2013-10-18 11:45:25 +03:00
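A sketch of a join that goes through a block nested-loop cache, where the condition pushed to t2 is now checked only in the cache's pre-filter (tables hypothetical; join_cache_level=4 is assumed to select the incremental hashed BNLH variant):

  SET SESSION join_cache_level = 4;   -- hashed (BNLH) join buffers
  EXPLAIN
  SELECT * FROM t1 JOIN t2 ON t1.a = t2.a WHERE t2.b < 10;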
Sergey Petrunya
1c6fc3f6b9 [SHOW] EXPLAIN UPDATE/DELETE, code re-structuring
Part 2 of:
- Pass more tests
- a select with subselects is now shown with type=PRIMARY where it was
  previously (incorrectly) shown as 'SIMPLE'
2013-06-18 19:21:00 +04:00
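For reference, the statement forms this work covers (tables hypothetical):

  EXPLAIN UPDATE t1 SET b = b + 1 WHERE a < 10;
  EXPLAIN DELETE FROM t1 WHERE a IN (SELECT a FROM t2);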
unknown
97a1c53c81 MDEV-3812: Remove unneeded extra call to engine->exec() in Item_subselect::exec, remove enum store_key_result
This task fixes an inefficiency left over from MySQL 5.0/5.1. There, subqueries
were optimized lazily, when they were executed for the first time. During this lazy
optimization the server could find a more efficient subquery engine and substitute
it for the current engine of the query being executed. This required re-execution
of the engine.

MariaDB 5.3 pre-optimizes subqueries in almost all cases, and the engine is chosen
in most cases, except when subquery materialization finds that it must use partial
matching. In that case the current code performed one extra re-execution even though
it was not needed at all. The patch performs the re-execution only if the engine was
changed during execution.

In addition, the patch performs a small cleanup by removing "enum store_key_result",
because it is essentially a boolean and the code that uses it already maps it to one.
2012-10-25 15:50:10 +03:00
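The partial-matching case mentioned above arises with materialized IN subqueries where NULLs prevent a full index match; a hedged sketch using the existing materialization and partial-match optimizer switches (tables hypothetical):

  SET SESSION optimizer_switch =
    'materialization=on,partial_match_rowid_merge=on,partial_match_table_scan=on';
  SELECT * FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2);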
Sergey Petrunya
b6eccf51c0 Update test results (checked) 2012-08-28 16:03:22 +04:00
unknown
da5214831d Fix for bug lp:944706, task MDEV-193
The patch re-enables constant subquery execution during query
optimization, which had been disabled during the development
of MWL#89 (cost-based choice of IN-TO-EXISTS vs MATERIALIZATION).

The main idea is that constant subqueries are allowed to be executed
during optimization if their execution is not expensive.

The approach is as follows:
- Constant subqueries are recursively optimized at the beginning of
  JOIN::optimize of the outer query. This is done by the new method
  JOIN::optimize_constant_subqueries(), so that the cost of executing
  these subqueries can be estimated.
- Optimization of the outer query proceeds normally. During this phase
  the optimizer may request execution of non-expensive constant subqueries.
  Each place where the optimizer may potentially execute an expensive
  expression is guarded with the predicate Item::is_expensive().
- The implementation of Item_subselect::is_expensive has been extended
  to use the number of examined rows (estimated by the optimizer) as a
  way to determine whether the subquery is expensive or not.
- The new system variable "expensive_subquery_limit" controls how many
  examined rows are considered not expensive. The default is 100.

In addition, multiple changes were needed to make this solution work
in light of the changes made by MWL#89. These changes were needed
to fix various crashes, wrong results, and legacy bugs discovered
during development.
2012-05-17 13:46:05 +03:00
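A quick illustration of the new variable; the constant (uncorrelated) subquery below is the kind the optimizer may now execute during optimization if its estimated number of examined rows stays under the limit (tables hypothetical):

  SELECT @@expensive_subquery_limit;            -- default 100
  SET SESSION expensive_subquery_limit = 1000;
  EXPLAIN SELECT * FROM t1 WHERE a = (SELECT MAX(a) FROM t2);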
unknown
ecbd868f58 Merged the implementation of MDEV-28 LIMIT ROWS EXAMINED into MariaDB 5.5. 2012-03-12 00:45:18 +02:00
unknown
8aebd44e0e Implementation of MDEV-28 LIMIT ROWS EXAMINED
https://mariadb.atlassian.net/browse/MDEV-28
  
This task implements a new clause, LIMIT ROWS EXAMINED <num>,
as an extension of the ANSI LIMIT clause. The extension makes it
possible to limit the number of rows and/or keys a query would
access (read and/or write) during query execution.
2012-03-11 14:39:20 +02:00
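A sketch of the clause in use, with a hypothetical table; the second form combines it with an ordinary row limit:

  SELECT * FROM t1 WHERE b > 5 LIMIT ROWS EXAMINED 10000;
  SELECT * FROM t1 WHERE b > 5 LIMIT 10 ROWS EXAMINED 10000;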