Commit graph

99 commits

Author SHA1 Message Date
Alexander Barkov
36eba98817 MDEV-19123 Change default charset from latin1 to utf8mb4
Changing the default server character set from latin1 to utf8mb4.
2024-07-11 10:21:07 +04:00
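
A minimal sketch (not part of the commit message) of what the new default looks like; the exact default collation may vary by version and build:

SELECT @@character_set_server, @@collation_server;  -- utf8mb4 (collation varies by build)
-- the old default can still be chosen explicitly where needed:
CREATE DATABASE legacy CHARACTER SET latin1 COLLATE latin1_swedish_ci;
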
Oleksandr Byelkin
f5fae75652 Merge branch '11.0' into 11.1 2023-08-09 08:25:14 +02:00
Oleksandr Byelkin
51f9d62005 Merge branch '10.11' into 11.0 2023-08-09 07:53:48 +02:00
Oleksandr Byelkin
ced243a099 Merge branch '10.9' into 10.10 2023-08-05 20:34:09 +02:00
Oleksandr Byelkin
6bf8483cac Merge branch '10.5' into 10.6 2023-08-01 15:08:52 +02:00
Oleksandr Byelkin
f52954ef42 Merge commit '10.4' into 10.5 2023-07-20 11:54:52 +02:00
Thirunarayanan Balathandayuthapani
5f09b53bdb MDEV-31086 MODIFY COLUMN can break FK constraints, and lead to unrestorable dumps
- When foreign_key_checks is disabled, allowing modification of a
column that is part of a foreign key constraint can lead to
later refusal of TRUNCATE TABLE or OPTIMIZE TABLE. So it makes
sense to block the column modify operation when a foreign key
is involved, irrespective of the foreign_key_checks variable.

The correct way to modify the charset of a column involved in a foreign key:

SET foreign_key_checks=OFF;
ALTER TABLE child DROP FOREIGN KEY fk, MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE parent MODIFY m VARCHAR(200) CHARSET utf8mb4;
ALTER TABLE child ADD CONSTRAINT FOREIGN KEY (m) REFERENCES PARENT(m);
SET foreign_key_checks=ON;

fk_check_column_changes(): Ignore FOREIGN_KEY_CHECKS while
checking column changes for foreign key constraints. This
is a partial revert of commit 5f1f2fc0e4
and changes the behaviour of the copy ALTER algorithm.

ha_innobase::prepare_inplace_alter_table(): Find the modified
column and check whether it is part of an existing or newly
added foreign key constraint.
2023-06-27 16:58:22 +05:30
Junqi Xie
d20a96f9c1 MDEV-21921 Make transaction_isolation and transaction_read_only into system variables
In MariaDB, we have a confusing problem where:
* The transaction_isolation option can be set in a configuration file, but it cannot be set dynamically.
* The tx_isolation system variable can be set dynamically, but it cannot be set in a configuration file.

Therefore, we have two different names for the same thing in different contexts. This is needlessly confusing, and it complicates the documentation. The same thing applies to transaction_read_only.

MySQL 5.7 solved this problem by making them into system variables. https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-20.html

This commit takes a similar approach by adding new system variables and marking the original ones as deprecated. This commit also resolves some legacy problems related to SET STATEMENT and transaction_isolation.
2023-04-12 11:04:29 +10:00
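
A short sketch (not from the commit) of the unified behaviour described above; the same name now works in both contexts:

-- valid in a config file ([mysqld] transaction_isolation=READ-COMMITTED) and dynamically:
SET SESSION transaction_isolation = 'READ-COMMITTED';
SELECT @@transaction_isolation;               -- READ-COMMITTED
SET SESSION tx_isolation = 'REPEATABLE-READ'; -- still accepted, but now deprecated
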
Monty
b6215b9b20 Update row and key fetch cost models to take into account data copy costs
Before this patch, when calculating the cost of fetching and using a
row/key from the engine, we took into account the cost of finding a
row or key from the engine, but did not consistently take into account
index-only accesses, clustered keys, or covered keys for all access
paths.

The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently
considered in best_access_path().  TIME_FOR_COMPARE was used in
calculations in other places, like greedy_search(), but was in some
cases (like scans) applied to a different number of rows than was
actually accessed.

The cost calculation of row and index scans didn't take into account
the number of rows that were accessed, only the number of accepted
rows.

When using a filter, the cost of index_only_reads and the cost of
accessing and disregarding 'filtered rows' were not taken into
account, which made filters cost less than they actually did.

To remedy the above, the following key & row fetch related costs
have been added:

- The cost of fetching and using a row is now split into different costs:
  - Key + row fetch cost (as before), but multiplied by the variable
  'optimizer_cache_cost' (default 0.5). This allows the user to
  tell the optimizer the likelihood of finding the key and row in the
  engine cache.
- ROW_COPY_COST, the cost of copying a row from the engine to the
  SQL layer or creating a row from the join_cache to the record
  buffer. Mostly affects table scan costs.
- ROW_LOOKUP_COST, the cost of fetching a row by rowid.
- KEY_COPY_COST, the cost of finding the next key and copying it from
  the engine to the SQL layer. This is used when we calculate the cost
  of index-only reads. It makes index scans more expensive than before
  if they cover a lot of rows. (main.index_merge_myisam)
- KEY_LOOKUP_COST, the cost of finding the first key in a range.
  This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
- KEY_NEXT_FIND_COST, the cost of finding the next key (and rowid)
  when doing an index scan and comparing the rowid to the filter.
  Before, this cost was assumed to be 0.

All of the above constants/variables are now tuned to be somewhat in
proportion to each other's execution complexity.  They will need
further tuning in the future, but that can wait until they are made
user variables, as that will make tuning much easier.

To make the usage of the above easy, there are new (non-virtual)
cost calculation functions in handler:
- ha_read_time(), like read_time(), but take optimizer_cache_cost into
  account.
- ha_read_and_copy_time(), like ha_read_time() but take into account
  ROW_COPY_COST.
- ha_read_and_compare_time(), like ha_read_and_copy_time() but take
  TIME_FOR_COMPARE into account.
- ha_rnd_pos_time(). Read row with row id, taking ROW_COPY_COST
  into account.  This is used with filesort where we don't need
  to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time() but take
  optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but add
  KEY_COPY_COST.
- ha_key_scan_time(), like key_scan_time() but take
  optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but add
  KEY_COPY_COST & TIME_FOR_COMPARE.

I also added some setup costs for doing different types of scans and
creating temporary tables (on disk and in memory). This encourages
the optimizer to not use these for simple 'a few row' lookups if
there are adequate key lookup strategies.
- TABLE_SCAN_SETUP_COST, cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, cost of creating an in-memory
  temporary table.
- DISK_TEMPTABLE_CREATE_COST, cost of creating an on-disk temporary
  table.

When calculating the cost of fetching ranges, we had a cost of
IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is
now replaced with 'io_cost * KEY_LOOKUP_COST (1.0) *
optimizer_cache_cost', which matches the cost we use for 'ref' and
other key lookups. The effect is that the cost is now a bit higher
when we have many ranges for a key.

Almost all calculations with TIME_FOR_COMPARE are now done in
best_access_path(). 'JOIN::read_time' now includes the full
cost of finding the rows in the table.

In the result files, many of the changes are now again close to what
they were before the "Update cost for hash and cached joins" commit,
as that commit didn't fix the filter cost (too complex to do
everything in one commit).

The above changes revealed a lot of inconsistencies in
optimizer cost calculation. The main objective of the other changes
was to make the calculations as similar (and accurate) as possible and
to make different plans more comparable.

Detailed list of changes:

- Calculate index_only_cost consistently and correctly for all scan
  and ref accesses. The row fetch_cost and index_only_cost now
  take into account clustered keys, covered keys and index-only
  accesses.
- cost_for_index_read now returns both full cost and index_only_cost
- Fixed cost calculation of get_sweep_read_cost() to match other
  similar costs. This is based on the assumption that data is more
  often stored on an SSD than on a hard disk.
- Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take into account
  TIME_FOR_COMPARE. Now all scan costs take this into
  account. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By
  adjusting this one can reduce or increase the cost of index or direct
  record lookups. The effect of the default is that key lookups are now
  a bit cheaper than before. See usage of 'optimizer_cache_cost' in
  handler.h.
- JOIN_TAB::scan_time() did not take into account index-only scans,
  which produced a wrong cost when an index scan was used. Changed
  JOIN_TAB::scan_time() to take into consideration clustered and
  covered keys. The values are now cached and we only have to call
  this function once. Other calls are changed to use the cached
  values.  Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and
  closer to 'range' calculations. The cost is now lower than
  before for small data sets and higher for large data sets, as we take
  into account how many keys are read (main.opt_trace_selectivity,
  main.limit_rows_examined).
- Ensured that index_scan_cost() ==
  range(scan_of_all_rows_in_table_using_one_range) +
  MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there
  is choice of doing a full index scan and a range-index scan over
  almost the whole table then index scan will be preferred (no
  range-read setup cost).  (innodb.innodb, main.show_explain,
  main.range)
  - Fixed so that EQ_REF and REF take into account clustered and
    covered keys.  This changes some plans to use covered or clustered
    indexes as these are much cheaper.  (main.subselect_mat_cost,
    main.state_tables_innodb, main.limit_rows_examined)
  - Rowid filter setup cost and filter compare cost now take into
    account fetching and checking the rowid (KEY_NEXT_FIND_COST).
    (main.partition_pruning heap.heap_btree main.log_state)
  - Added KEY_NEXT_FIND_COST to
    Range_rowid_filter_cost_info::lookup_cost to account for the time
    to find and check the next key value against the container.
  - Introduced ha_keyread_time(rows) that takes into account finding
    the next row and copying the key value to 'record'
    (KEY_COPY_COST).
  - Introduced ha_key_scan_time() for calculating an index scan over
    all rows.
  - Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
  - Added index_only_fetch_cost() as a convenience function to
    OPT_RANGE.
  - keyread_time() cost is slightly reduced to prefer shorter keys.
    (main.index_merge_myisam)
  - All of the above caused some index_merge combinations to be
    rejected because of cost (main.index_intersect). In some cases
    'ref' was replaced with index_merge because of the low
    cost calculation of get_sweep_read_cost().
  - Some index usage moved from PRIMARY to a covering index.
    (main.subselect_innodb)
- Changed cost calculation of filter to take KEY_LOOKUP_COST and
  TIME_FOR_COMPARE into account.  See sql_select.cc::apply_filter().
  Filter parameters and costs are now written to optimizer_trace.
- Don't use matchings_records_in_range() to try to estimate the number
  of filtered rows for ranges. The reason is that we want to ensure
  that 'range' is calculated similarly to 'ref'. There is also more
  work needed to calculate the selectivity when using ranges and
  filtering.  This causes the filtered column in EXPLAIN EXTENDED to be
  100.00 in some cases where range cannot use filtering.
  (main.rowid_filter)
- Introduced ha_scan_time() that takes into account the CPU cost of
  finding the next row and copying the row from the engine to
  'record'. This causes the cost of table scans to slightly increase,
  and some tests changed their plans from ALL to RANGE or from ALL
  to ref.  (innodb.innodb_mysql, main.select_pkeycache)
  In a few cases where the scan time of very small tables has a lower
  cost than a ref or range, things changed from ref/range to ALL.
  (main.myisam, main.func_group, main.limit_rows_examined,
  main.subselect2)
- Introduced ha_scan_and_compare_time() which is like ha_scan_time()
  but also adds the cost of the where clause (TIME_FOR_COMPARE).
- Added a small cost for creating a temporary table for
  materialization. This causes some very small tables to use scan
  instead of materialization.
- Added checking of the WHERE clause (TIME_FOR_COMPARE) of the
  accepted rows to ROR costs in get_best_ror_intersect()
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
  to ensure that the 'Last_query_cost' status variable contains the
  same value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and
  handler::read_time(). This should have no effect as it's 1.0 by
  default, except for heap that overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost
  adjustments (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is
  just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of
  preceding tables (as we have to restore all fields for all previous
  table combinations when checking the WHERE clause).
  The new cost added is:
  (row_combinations * ROW_COPY_COST * number_of_cached_tables).
  This increases the cost of join buffering in proportion to the
  number of tables in the join buffer. One effect is that full scans
  are now done earlier, as the cost is then smaller.
  (main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read as it
  caused wrong plans to be created; it preferred JT_EQ_REF even if it
  would be much more expensive than a full table scan. A related
  issue was that worst_seeks only applied to full lookups, not to
  clustered or index-only lookups, which was inconsistent. This
  caused some plans to use index scan instead of eq_ref (main.union).
- Changed federated block size from 4096 to 1500, which is the
  typical size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no
  caching of rows in the federated engine.
- Added ha_innobase::rnd_pos_time() cost function.
- A lot of extra things added to optimizer trace:
  - More costs, especially for materialization and index_merge.
  - Made labels more uniform.
  - Fixed a lot of minor bugs.
  - Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index,
  the cost did not take into account the number of row retrievals
  that have to be done or the cost of comparing the rows with the
  WHERE clause. The cost calculated would be just a fraction of
  the real cost. Now we calculate the cost as we do for ranges
  and 'ref'.
- 'Using index for group-by' is used a bit more than before, as we
  now take into account the WHERE clause cost when comparing
  with 'ref' and prefer the method with fewer row combinations.
  (main.group_min_max)

Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
  like in optimize_straight_join() and greedy_search()
- Fixed bug in save_explain_data where we could test for the wrong
  index when displaying 'Using index'. This caused some old plans to
  show 'Using index'.  (main.subselect_innodb, main.subselect2)
- Fixed bug in get_best_ror_intersect() where 'min_cost' was not
  updated, and the cost we compared with was not the one that was
  used.
- Fixed very wrong cost calculation for priority queues in
  check_if_pq_applicable(). (main.order_by now correctly uses priority
  queue)
- When calculating the cost of EQ_REF or REF, we added the cost of
  comparing the WHERE clause with the found rows, not with all row
  combinations. This made ref and eq_ref look way too cheap
  compared to other access methods.
- FORCE INDEX cost calculation didn't take into account clustered or
  covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
  cost of a JT_REF key. This may be true for an InnoDB primary key, but
  not for other unique keys or other engines. Now we use a handler
  function to calculate the cost, which allows us to handle
  clustered, covered, and non-covered keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already
  enabled but still changed the 'keyread' variable (which is wrong).
  Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when
  calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong
  record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH
  and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide the correct best_cost to the upper level,
  which meant that hash joins were more expensive than calculated
  in best_access_path (a difference of 10x * TIME_FOR_COMPARE).
  This is fixed in the new code because we now include the
  TIME_FOR_COMPARE cost in 'read_time'.

Other things:
- Added some 'if (thd->trace_started())' to speed up code
- Removed unused function Cost_estimate::is_zero()
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
  (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const()
  to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h

Notes:
- In main.rowid_filter the join order of order and lineitem is swapped.
  This is because the cost of doing a range fetch of lineitem (98 rows)
  is almost as big as the whole join of order, lineitem. The filtering will
  also ensure that we only have to do very small key fetches of the rows
  in lineitem.
- main.index_merge_myisam had a few changes where we are now using
  fewer keys for index_merge. This is because index scans are now more
  expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock().
  This ensures that it is up to date per statement.
  Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take cost of
  filesort into consideration when table scan is chosen.
  (main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update
  on a table with 1 row will now use table scan instead of key lookup.

TODO in upcoming commits:
- Fix selectivity calculation for ranges with and without filtering and
  when there is a ref access but scan is chosen.
  For this we have to store the lowest known value for
  'accepted_records' in the OPT_RANGE structure.
- Change that records_read does not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate
  costs. This will fix tests like main.order_by_innodb,
  main.single_delete_update
- Extend get_range_limit_read_cost() to take into consideration
  cost_for_index_read() if there were no quick keys. This will reduce
  the computed cost for ORDER BY with LIMIT in some cases.
  (main.innodb_ext_key)
- Fix that we take into account selectivity when counting the number
  of rows we have to read when considering using an index table scan to
  resolve ORDER BY.
- Add new calculation for rnd_pos_time() where we take into account the
  benefit of reading multiple rows from the same page.
2023-02-02 21:43:30 +03:00
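
A hedged sketch (not from the commit) of exercising the optimizer_cache_hit_ratio variable described above; table t1 and key k are hypothetical:

SET SESSION optimizer_cache_hit_ratio = 80;  -- assume hotter engine caches: key/row fetches get cheaper
EXPLAIN SELECT * FROM t1 WHERE k = 10;
SHOW STATUS LIKE 'Last_query_cost';  -- with the '- 0.001' removal, this now equals the optimizer's cost
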
Marko Mäkelä
bebe193979 Merge 10.9 into 10.10 2022-11-21 10:32:08 +02:00
Marko Mäkelä
e572c745dc MDEV-29504/MDEV-29849 TRUNCATE breaks FOREIGN KEY locking
ha_innobase::referenced_by_foreign_key(): Protect the check with
dict_sys.freeze(), to prevent races with TRUNCATE TABLE.
The test innodb.instant_alter_crash has been adjusted for this
additional locking.

dict_table_is_referenced_by_foreign_key(): Removed (merged to
the only caller).

create_table_info_t::create_table(): Ignore missing indexes for
FOREIGN KEY constraints if foreign_key_checks=0.

create_table_info_t::create_table_update_dict(): Rewritten as
a static function. Do not return any error.

ha_innobase::create(): When trx!=nullptr and we are operating
on a persistent table, do not rollback, commit, or release the
data dictionary latch.

ha_innobase::truncate(): Protect the entire critical section
with an exclusive dict_sys.latch, so that
ha_innobase::referenced_by_foreign_key() on referenced tables
will return a consistent result. In case of a failure,
invoke dict_load_foreigns() to restore also any FOREIGN KEY
constraints.

ha_innobase::free_foreign_key_create_info(): Define inline.

lock_release(): Disregard innodb_evict_tables_on_commit_debug=ON
when dict_sys.locked() holds. It would hold when fts_load_stopword()
is invoked by create_table_info_t::create_table_update_dict().

dict_sys_t::locked(): Return whether the current thread is holding
the exclusive dict_sys.latch.

dict_sys_t::frozen_not_locked(): Return whether any thread is
holding a shared dict_sys.latch.

In the test main.mysql_upgrade, the InnoDB persistent statistics
will no longer be recalculated in ha_innobase::open() as part of
CHECK TABLE ... FOR UPGRADE. They were deleted earlier in the test.

Tested by: Matthias Leich
2022-11-08 17:34:34 +02:00
Marko Mäkelä
0dab74ff3f MDEV-28539 Some InnoDB counters are duplicating generic SHOW STATUS
The InnoDB srv_stats counters
n_rows_updated, n_rows_deleted, n_rows_inserted, and n_rows_read
are duplicating the
Handler_update, Handler_delete, Handler_write, and Handler_read_* counters.

Updating those counters is not free, especially because some counters
are furthermore split to distinguish a rare case of modifying tables
in the system schema.
2022-06-03 12:20:20 +03:00
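
The generic handler counters that remain after this change can be observed directly; a quick sketch (not from the commit):

SHOW GLOBAL STATUS LIKE 'Handler_write';
SHOW GLOBAL STATUS LIKE 'Handler_update';
SHOW GLOBAL STATUS LIKE 'Handler_delete';
SHOW GLOBAL STATUS LIKE 'Handler_read%';
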
Marko Mäkelä
9608773f75 MDEV-4750 follow-up: Reduce disabling innodb_stats_persistent
This essentially reverts commit 4e89ec6692
and only disables InnoDB persistent statistics for tests where it is
desirable. By design, InnoDB persistent statistics will not be updated
except by ANALYZE TABLE or by STATS_AUTO_RECALC.

The internal transactions that update persistent InnoDB statistics
in background tasks (with innodb_stats_auto_recalc=ON) may cause
nondeterministic query plans or interfere with some tests that deal
with other InnoDB internals, such as the purge of transaction history.
2021-08-31 13:55:02 +03:00
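
A sketch (not from the commit) of how a test can opt out of persistent statistics per table under the design described above; the table definition is hypothetical:

CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB STATS_PERSISTENT=0;  -- transient stats only
-- for tables that do keep persistent statistics, updates happen only via:
ANALYZE TABLE t1;  -- or automatically when STATS_AUTO_RECALC is enabled
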
Rucha Deodhar
2fdb556e04 MDEV-8334: Rename utf8 to utf8mb3
This patch changes the main name of the 3-byte character set from utf8
to utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set by
default, so that utf8 means utf8mb3. If it is not set, utf8 means
utf8mb4.
2021-05-19 06:48:36 +02:00
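
A sketch (not from the commit) of the default behaviour after this patch, assuming old_mode contains UTF8_IS_UTF8MB3:

CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8);
SHOW CREATE TABLE t1;      -- the column is reported as utf8mb3
SET GLOBAL old_mode = '';  -- after this, utf8 resolves to utf8mb4 for new connections
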
Marko Mäkelä
a5d3c1c819 Merge 10.4 into 10.5 2021-03-08 10:16:20 +02:00
Marko Mäkelä
a26e7a3726 Merge 10.3 into 10.4 2021-03-08 09:39:54 +02:00
Vicențiu Ciorbaru
e9b8b76f47 Merge branch '10.2' into 10.3 2021-03-04 16:04:30 +02:00
Thirunarayanan Balathandayuthapani
b044898b97 MDEV-24748 extern column check missing in btr_index_rec_validate()
In btr_index_rec_validate(), the externally stored column
check was missing when matching the length of the field
with the length of the field data stored in the record.
Fetch the length of the externally stored part and compare it
with the fixed field length.
2021-03-03 17:20:43 +05:30
Marko Mäkelä
d5d8756de3 Merge 10.4 into 10.5 2020-08-20 12:52:44 +03:00
Marko Mäkelä
2fa9f8c53a Merge 10.3 into 10.4 2020-08-20 11:01:47 +03:00
Marko Mäkelä
de0e7cd72a Merge 10.2 into 10.3 2020-08-20 09:12:16 +03:00
Thirunarayanan Balathandayuthapani
8268f26605 MDEV-22934 Table disappear after two alter table command
Problem:
=======
InnoDB drops a column that has foreign key relations on it. It then
tries to load the foreign key during the rename step of the copy
algorithm, even though foreign_key_checks is disabled.

Solution:
========
During the ALTER copy algorithm, InnoDB now ignores the error while
loading the foreign key constraint if foreign key checks are
disabled, and instead throws a warning about the failure of the
foreign key constraint.
2020-08-18 15:05:23 +05:30
Marko Mäkelä
bbd70fcc43 MDEV-23379 Deprecate&ignore InnoDB concurrency throttling parameters
The parameters innodb_thread_concurrency and innodb_commit_concurrency
were useful years ago when both computing resources and the implementation
of some shared data structures were limited. MySQL 5.0 or 5.1 had trouble
scaling beyond 8 concurrent connections. Most of the scalability bottlenecks
have been removed since then, and the transactions per second delivered
by MariaDB Server 10.5 should not dramatically drop upon exceeding the
'optimal' number of connections.

Hence, enabling any concurrency throttling for InnoDB actually makes
things worse. We have seen many customers mistakenly setting this to a
small value like 16 or 64 and then complaining that the server was slow.

Ignoring the parameters allows us to remove some normally unused code
and data structures, which could slightly improve performance.

innodb_thread_concurrency, innodb_commit_concurrency,
innodb_replication_delay, innodb_concurrency_tickets,
innodb_thread_sleep_delay, innodb_adaptive_max_sleep_delay:
Deprecate and ignore; hard-wire to 0.

The column INFORMATION_SCHEMA.INNODB_TRX.trx_concurrency_tickets
will always report 0.
2020-08-04 06:59:29 +03:00
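
A sketch (not from the commit) of the deprecate-and-ignore behaviour described above; the values shown are the hard-wired ones:

SET GLOBAL innodb_thread_concurrency = 64;  -- accepted with a deprecation warning, value ignored
SELECT @@innodb_thread_concurrency;         -- 0
SELECT trx_concurrency_tickets FROM information_schema.INNODB_TRX;  -- always 0
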
Vicențiu Ciorbaru
45bc7574fb MDEV-18650: Options deprecated in previous versions - storage_engine
Remove usage of deprecated variable storage_engine. It was deprecated in 5.5 but
it never issued a deprecation warning. Make it issue a warning in 10.5.1.

Replaced with default_storage_engine.
2020-02-13 13:42:01 +02:00
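
The replacement in practice, as a short sketch (not from the commit):

SET SESSION default_storage_engine = InnoDB;  -- the supported name
SET SESSION storage_engine = InnoDB;          -- still accepted in 10.5.1, but now issues a deprecation warning
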
Igor Babaev
fd386e39cd MDEV-18689 Simple query with extra brackets stopped working
Parentheses around table names and derived tables should be allowed
in FROM clauses and some other contexts, as they were in earlier
versions.

Restored test queries in 10.3 that used such parentheses to their
original form. Adjusted test results accordingly.
2019-05-06 11:14:39 -07:00
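
Examples (not from the commit) of the parenthesized forms that are accepted again; t1 and t2 are hypothetical tables:

SELECT * FROM (t1);
SELECT * FROM (t1 LEFT JOIN t2 ON t1.a = t2.a);
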
Marko Mäkelä
5c3ff5cb93 Merge 10.3 into 10.4 2019-04-02 11:04:54 +03:00
Marko Mäkelä
349560d5d5 Merge 10.2 into 10.3 2019-03-27 13:27:04 +02:00
Marko Mäkelä
1e9c2b2305 Merge 10.1 into 10.2 2019-03-27 12:26:11 +02:00
Marko Mäkelä
a6585d5ce9 Merge 10.0 into 10.1 2019-03-27 11:56:08 +02:00
Marko Mäkelä
1933cf98e8 Merge 5.5 into 10.0 2019-03-26 14:13:46 +02:00
Chris Calender
d8b7e76c37 Fix for MDEV-18276, typo in error message + all other occurrences of refering 2019-03-23 00:00:47 +04:00
Daniel Black
de51acd037 MDEV-18726: innodb buffer pool size not consistent with large pages
Rather than adding a small extra amount to the size of chunks, keep
them at the specified size. The rest of the chunk initialization code
adapts to this small size reduction. This has been done in the general
case, not just for large pages, to keep it simple.

The chunk size is controlled by innodb-buffer-pool-chunk-size. In the
code, increasing it by the length of a descriptor table makes things
difficult with large pages. With innodb-buffer-pool-chunk-size set to
2M, the code before this commit would have added a small extra amount
when it tried to allocate this. While not normally a problem, it is
with large pages: it then requires additional space, a whole extra
large page. With a number of pools, or with 1G or 16G large pages,
this is quite significant.

By removing this additional amount, DBAs can set
innodb-buffer-pool-chunk-size to the large page size, or a multiple of
it, and actually get that amount allocated. Previously they had to
fudge in a smaller value.

The innodb.test results show how this was fudged over a number of
tests. With this change the values are just between 488 and 500,
depending on architecture and build options.

Tested with --large-pages --innodb-buffer-pool-size=256M
--innodb-buffer-pool-chunk-size=2M on x86_64 with a 2M default large
page size. Breaking before buf_pool init, one large page was allocated
for MyISAM; by the end of the function, 128 huge pages were allocated
as expected. A further 16 pages were allocated for a 32M log buffer,
and during startup 1 page was briefly allocated to the redo log.
2019-03-18 21:49:53 +02:00
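
A sketch (not from the commit) of the intended configuration with 2M large pages; the my.cnf lines are shown as comments:

-- [mysqld]
-- large-pages
-- innodb-buffer-pool-size=256M
-- innodb-buffer-pool-chunk-size=2M
SELECT @@innodb_buffer_pool_chunk_size;  -- 2097152: no longer rounded up past the large page size
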
Oleksandr Byelkin
de745ecf29 MDEV-11953: support of brackets in UNION/EXCEPT/INTERSECT operations 2018-07-04 19:13:55 +02:00
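
A sketch (not from the commit) of the bracketed set operations this enables; the tables are hypothetical:

(SELECT a FROM t1)
UNION
((SELECT a FROM t2) EXCEPT (SELECT a FROM t3))
ORDER BY a;
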
Marko Mäkelä
b006d2ead4 Merge bb-10.2-ext into 10.3 2018-02-15 10:22:03 +02:00
Marko Mäkelä
10590dd39c MDEV-15199 Referential integrity broken in ON DELETE CASCADE
MDEV-14222 Unnecessary 'cascade' memory allocation for every updated row
when there is no FOREIGN KEY

This reverts the MySQL 5.7.2 change
377774689b
which introduced these problems. MariaDB 10.2.2 inherited these problems
in commit 2e814d4702.

The FOREIGN KEY CASCADE and SET NULL operations, implemented as
procedural recursion, consume more than 8 kilobytes of stack
(9 stack frames) per iteration in a non-debug GNU/Linux AMD64 build.
This is why we need to limit the maximum recursion depth to 15 steps
instead of the 255 that it used to be in MySQL 5.7 and MariaDB 10.2.

A corresponding change was made in MySQL 5.7.21 in
7b26dc98a6
2018-02-07 10:39:12 +02:00
Sergei Golubchik
4771ae4b22 Merge branch 'github/10.1' into 10.2 2018-02-06 14:50:50 +01:00
Sergei Golubchik
d4df7bc9b1 Merge branch 'github/10.0' into 10.1 2018-02-02 10:09:44 +01:00
Vicențiu Ciorbaru
d69d488b8c Remove innodb.test "keep away" comment
This was a leftover from the 5.5->10.0 merge. It should've been
deleted there.
2018-01-24 17:55:26 +02:00
Vicențiu Ciorbaru
d833bb65d5 Merge remote-tracking branch '5.5' into 10.0 2018-01-24 12:29:31 +02:00
Marko Mäkelä
906ce0962d MDEV-7049 MySQL#74585 - InnoDB: Failing assertion: *mbmaxlen < 5 in file ha_innodb.cc line 1904
InnoDB limited the maximum number of bytes per character to 4.
But, the filename character set that was introduced in MySQL 5.1
uses up to 5 bytes per character.

To allow InnoDB tables to be created with wider characters, let
us split the mbminmaxlen fields into mbminlen, mbmaxlen, and increase
the limit to 7 bytes per character. This will increase the payload size
of dtype_t and dict_col_t by one bit. The storage size will be unchanged
(54 bits and 77 bits will use the same number of bytes as the
previous sizes 53 and 76 bits).
2018-01-22 11:18:10 +02:00
Marko Mäkelä
145ae15a33 Merge bb-10.2-ext into 10.3 2018-01-04 09:22:59 +02:00
Vicențiu Ciorbaru
9aeb5d01d6 Merge remote-tracking branch 'origin/10.1' into bb-10.2-vicentiu 2017-12-28 19:27:00 +02:00
Vicențiu Ciorbaru
d1c2cd30b7 Merge remote-tracking branch '10.0' into 10.1 2017-12-27 17:50:39 +02:00
Sergey Vojtovich
4b8cd4536a MDEV-13626 Merge InnoDB test cases from MySQL 5.7
Coverage for temporary tables modifications in read-only transactions.
Introduced in 5.7 by 325cdf426
2017-12-22 14:03:25 +04:00
Marko Mäkelä
0c92794db3 Remove deprecated InnoDB file format parameters
The following options will be removed:

innodb_file_format
innodb_file_format_check
innodb_file_format_max
innodb_large_prefix

They have been deprecated in MySQL 5.7.7 (and MariaDB 10.2.2) in WL#7703.

The file_format column in two INFORMATION_SCHEMA tables will be removed:

innodb_sys_tablespaces
innodb_sys_tables

Code to update the file format tag at the end of page 0:5
(TRX_SYS_PAGE in the InnoDB system tablespace) will be removed.
When initializing a new database, the bytes will remain 0.

All references to the Barracuda file format will be removed.
Some references to the Antelope file format (meaning
ROW_FORMAT=REDUNDANT or ROW_FORMAT=COMPACT) will remain.

This basically ports WL#7704 from MySQL 8.0.0 to MariaDB 10.3.1:

commit 4a69dc2a95995501ed92d59a1de74414a38540c6
Author: Marko Mäkelä <marko.makela@oracle.com>
Date:   Wed Mar 11 22:19:49 2015 +0200
2017-06-02 09:36:14 +03:00
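
A sketch (not from the commit) of the visible effects after the removal; the table is hypothetical:

CREATE TABLE t1 (a INT) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;  -- no innodb_file_format=Barracuda needed
SELECT @@innodb_file_format;  -- now fails: unknown system variable
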
Nirbhay Choubey
8b2e642aa2 MDEV-7635: Update tests to adapt to the new default sql_mode 2017-02-10 06:30:42 -05:00
Marko Mäkelä
8777458a6e MDEV-6076 Persistent AUTO_INCREMENT for InnoDB
This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with
the notable difference that the file format changes are limited to
repurposing a previously unused data field in B-tree pages.

For persistent InnoDB tables, write the last used AUTO_INCREMENT
value to the root page of the clustered index, in the previously
unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC.
Unlike some other previously unused InnoDB data fields, this one was
actually always zero-initialized, at least since MySQL 3.23.49.

The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the
root page. The SX latch will allow concurrent read access to the root
page. (The field PAGE_ROOT_AUTO_INC will only be read on the
first-time call to ha_innobase::open() from the SQL layer. The
PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so
read/write races are not possible.)

During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level
function btr_cur_search_to_nth_level(), adding no extra page
access. [Adaptive hash index lookup will be disabled during INSERT.]

If some rare UPDATE modifies an AUTO_INCREMENT column, the
PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in
ha_innobase::update_row().

When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC
field.

During ALTER TABLE, the initial AUTO_INCREMENT value will be copied
from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will
update PAGE_ROOT_AUTO_INC in real time.

innodb_col_no(): Determine the dict_table_t::cols[] element index
corresponding to a Field of a non-virtual column.
(The MySQL 5.7 implementation of virtual columns breaks the 1:1
relationship between Field::field_index and dict_table_t::cols[].
Virtual columns are omitted from dict_table_t::cols[]. Therefore,
we must translate the field_index of AUTO_INCREMENT columns into
an index of dict_table_t::cols[].)

Upgrade from old data files:

By default, the AUTO_INCREMENT sequence in old data files would appear
to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain
the value 0 in each clustered index page. In new data files,
PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain
any AUTO_INCREMENT column.

For backward compatibility, we use the old method of
SELECT MAX(auto_increment_column) for initializing the sequence.

btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format
data file.

btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc()
that will resort to reading MAX(auto_increment_column) for data files
that did not use AUTO_INCREMENT yet. It was manually tested that during
the execution of innodb.autoinc_persist the compatibility logic is
not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty
clustered index root pages).

initialize_auto_increment(): Replaces
ha_innobase::innobase_initialize_autoinc(). This initializes
the AUTO_INCREMENT metadata. Only called from ha_innobase::open().

ha_innobase::info_low(): Do not try to lazily initialize
dict_table_t::autoinc. It must already have been initialized by
ha_innobase::open() or ha_innobase::create().

Note: The adjustments to class ha_innopart were not tested, because
the source code (native InnoDB partitioning) is not being compiled.
2016-12-16 09:19:19 +02:00
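
A sketch (not from the commit) of the user-visible effect of persisting the sequence; the table is hypothetical:

CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 VALUES (NULL), (NULL), (NULL);
DELETE FROM t1 WHERE id = 3;
-- restart the server here
INSERT INTO t1 VALUES (NULL);
SELECT MAX(id) FROM t1;  -- 4: the sequence is read back from PAGE_ROOT_AUTO_INC, not recomputed
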
Marko Mäkelä
65b4d7457e Merge the test innodb.innodb_misc1 into innodb.innodb. 2016-12-13 11:52:23 +02:00
Sergei Golubchik
d019af402c misc after-merge changes:
* remove new InnoDB-specific ER_ and HA_ERR_ codes
* renamed a few old ER_ and HA_ERR_ error messages to be less MyISAM-specific
* remove duplicate enum definitions (durability_properties, icp_result)
* move new mysql-test include files to their owner suite
* rename xtradb.rdiff files to *-disabled
* remove mistakenly committed helper perl module
* remove long obsolete handler::ha_statistic_increment() method
* restore the standard C xid_t structure to not have setters and getters
* remove xid_t::reset that was cleaning too much
* move MySQL-5.7 ER_ codes where they belong
* fix InnoDB to include service_wsrep.h, not internal wsrep headers
* update tests and results
2016-09-10 16:04:44 +02:00
Jan Lindström
fec844aca8 Merge InnoDB 5.7 from mysql-5.7.14.
Contains also:
       MDEV-10549 mysqld: sql/handler.cc:2692: int handler::ha_index_first(uchar*): Assertion `table_share->tmp_table != NO_TMP_TABLE || m_lock_type != 2' failed. (branch bb-10.2-jan)
       Unlike MySQL, InnoDB still uses THR_LOCK in MariaDB

       MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
       enable tests that were fixed in MDEV-10549

       MDEV-10548 Some of the debug sync waits do not work with InnoDB 5.7 (branch bb-10.2-jan)
       fix main.innodb_mysql_sync - re-enable online alter for partitioned innodb tables
2016-09-08 15:49:03 +03:00