Commit graph

376 commits

Author SHA1 Message Date
Marko Mäkelä
28c89b7151 Merge 10.4 into 10.5 2019-12-16 07:47:17 +02:00
Oleksandr Byelkin
a15234bf4b Merge branch '10.3' into 10.4 2019-12-09 15:09:41 +01:00
Faustin Lammler
2df2238cb8 Lintian complains about a spelling error
The lintian check complains about a spelling error:
https://salsa.debian.org/mariadb-team/mariadb-10.3/-/jobs/95739
2019-12-02 12:41:13 +02:00
Monty
82b62924d7 Fixed wrong result files in test suite 2019-10-30 14:59:04 +02:00
Marko Mäkelä
8336371441 Merge 10.4 into 10.5 2019-10-12 22:06:47 +03:00
Marko Mäkelä
55c75b6bb3 Merge 10.3 into 10.4 2019-10-12 06:50:12 +03:00
Marko Mäkelä
8e3d85e112 Merge 10.2 into 10.3 2019-10-12 06:34:09 +03:00
Marko Mäkelä
966d97b5f9 Merge 10.1 into 10.2 2019-10-11 18:38:18 +03:00
Marko Mäkelä
cbfd6882f4 Merge 5.5 into 10.1 2019-10-11 15:19:55 +03:00
Marko Mäkelä
c016ea660e Merge 10.2 into 10.3 2019-09-23 10:25:34 +03:00
Marko Mäkelä
44c5144943 Merge 10.1 into 10.2 2019-09-23 08:26:08 +03:00
Varun Gupta
896974fc3d MDEV-18094: Query with order by limit picking index scan over filesort
In the function test_if_cheaper_ordering we decide whether using an index is better
than using filesort for ordering. If we choose to do range access, then in
test_quick_select we should make sure that the cost of a table scan is set to
DBL_MAX so that it is not picked.
2019-09-21 12:14:05 +05:30
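A minimal SQL sketch of the kind of query affected, assuming an illustrative table and index (not taken from the patch):

  CREATE TABLE t1 (a INT, b INT, KEY idx_b (b));
  -- With ORDER BY ... LIMIT the optimizer compares an ordered index
  -- scan on idx_b against filesort; once range access is chosen,
  -- the table scan cost is set to DBL_MAX so it cannot be picked.
  SELECT a FROM t1 WHERE b > 10 ORDER BY b LIMIT 5;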
Marko Mäkelä
bb4214272a Merge 10.1 into 10.2 2019-09-18 16:24:48 +03:00
Igor Babaev
0954bcb663 Post fix after the patch for MDEV-20576.
Adjusted test results.
2019-09-13 15:09:28 -07:00
Marko Mäkelä
4081b7b27a Merge 10.4 into 10.5 2019-09-06 17:16:40 +03:00
Marko Mäkelä
780d2bb8a7 Merge 10.4 into 10.5 2019-09-06 14:25:20 +03:00
Sergei Golubchik
244f0e6dd8 Merge branch '10.3' into 10.4 2019-09-06 11:53:10 +02:00
Monty
a071e0e029 Merge branch '10.2' into 10.3 2019-09-03 13:17:32 +03:00
Monty
9cba6c5aa3 Updated mtr files to support different compiled-in options
This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- some InnoDB options
etc.

Changes:
- Don't print out the values of system variables, as one can't depend on
  them being constant.
- Don't set global variables to 'default', as the default may not be the
  same as what the test was started with if there was an additional
  option file. Instead, save the original value and restore it at the end
  of the test (see the sketch below).
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
  make it easier to set and reset system variables.
- Added .opt files for tests that require specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
2019-09-01 19:17:35 +03:00
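A minimal sketch of the save-and-restore pattern described above; the variable and flag chosen here are illustrative:

  -- Save the original value instead of relying on 'default', which
  -- may differ from the value the test was started with.
  SET @save_optimizer_switch= @@optimizer_switch;
  SET optimizer_switch='index_merge=off';
  -- ... test queries ...
  -- Restore the original value at the end of the test.
  SET optimizer_switch= @save_optimizer_switch;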
Monty
d558981e2c Remove tests that were only applicable to MySQL
These were part of an incomplete old merge from MySQL 5.6
2019-09-01 19:17:34 +03:00
Sergei Golubchik
df61c58499 MDEV-14383 tokudb_bugs. tests failed in buildbot, lost connection to server
don't run tokudb tests under tcmalloc;
this is not a supported combination.
2019-09-01 15:32:06 +02:00
Marko Mäkelä
db4a27ab73 Merge 10.3 into 10.4 2019-08-31 06:53:45 +03:00
Igor Babaev
9380850d87 MDEV-15777 Use inferred IS NOT NULL predicates in the range optimizer
This patch introduces an optimization that allows the range optimizer to
consider index range scans built using NOT NULL predicates inferred
from WHERE conditions and ON expressions.
The patch adds a new optimizer switch, not_null_range_scan.
2019-08-30 18:47:14 -07:00
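A small sketch of the new switch in use; the tables are hypothetical. With the switch on, t2.a = t1.a implies t2.a IS NOT NULL, which the range optimizer can use to build an index range scan on t2:

  SET optimizer_switch='not_null_range_scan=on';
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, KEY (a));
  -- The inferred NOT NULL predicate lets the range optimizer skip
  -- NULL entries in the index on t2.a.
  EXPLAIN SELECT * FROM t1, t2 WHERE t2.a = t1.a;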
Marko Mäkelä
e41eb044f1 Merge 10.2 into 10.3 2019-08-28 10:18:41 +03:00
Sujatha
e7b71e0daa MDEV-19925: Column ... cannot be converted from type 'varchar(20)' to type 'varchar(20)'
Cherry picking:
Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95

Description:
============
In row based replication, when replicating from a table with a field whose
character set is utf8mb3 to the same table with the same field's character
set set to utf8mb4, a confusing error message is produced:

For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4'
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to
type 'varchar(1)'"

Similar issue with CHAR type as well.

Issue with respect to BLOB types:

For BLOB: LONGBLOB to TINYBLOB - Error message displays incorrect blob type.
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type
'tinyblob'"

For BINARY to BINARY - Error message displays incorrect type for master side
field.
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type
'binary(10)'"
Similar issue exists for VARBINARY type. It is displayed as 'VARCHAR'.

Analysis:
=========
In row based replication, charset information is not sent as part of the
metadata from master to slave.

For a VARCHAR field, its character length is converted into the equivalent
number of octets/bytes and stored internally. At the time of displaying the
data to the user it is converted back to the original character length.

For example:
VARCHAR(2) with utf8mb3 is stored as 2*3 = VARCHAR(6).
At the time of displaying it to the user:
VARCHAR(6) with charset utf8mb3: 6/3 = VARCHAR(2).

At present the internally converted octet length is sent from master to slave
without providing the charset information. On the slave side, if the type
conversion fails, the 'show_sql_type' function is used to get the type-specific
information from the metadata. Since no charset information is available,
the field type is displayed as VARCHAR(6).

This results in a confusing error message.

For CHAR fields
CHAR(1)- utf8mb3 - CHAR(3)
CHAR(1)- utf8mb4 - CHAR(4)

The 'show_sql_type' function, which retrieves type information from metadata,
uses (bytes / local charset length) to get the actual character length. If the
slave's charset is 'utf8mb4', then

CHAR(3/4) --> CHAR(0)
CHAR(4/4) --> CHAR(1).

This results in a confusing error message.

Analysis for BLOB type issue:

BLOB's length is represented in two forms.
1. Actual length
i.e.
  (length < 256) type= MYSQL_TYPE_TINY_BLOB;
  (length < 65536) type= MYSQL_TYPE_BLOB; ...

2. packlength - The number of bytes used to represent the length of the blob
  1- tinyblob
  2- blob ...

In row based replication only the packlength is written to the binary log. On
the slave side this packlength is interpreted as the actual length of the
blob. Hence the length is always < 256 and the type is displayed as tinyblob.

Analysis for BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char or
binary. Since the master side character set information is not available on
the slave side, both binary and char fields are displayed as char.

Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both source
and target fields. For the target field, display the charset information if
it is relevant.

For blob types, the code was changed to use the packlength and display the
appropriate blob type in the error message.

For binary and varbinary fields, use the slave side character set as the
reference to map them to binary or varbinary fields.
2019-08-27 13:05:04 +05:30
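A sketch of the replication setup described above; the schema is illustrative:

  -- On the master: a 20-character utf8 (utf8mb3) column, stored
  -- internally as 20*3 = 60 octets.
  CREATE TABLE t1 (c VARCHAR(20) CHARACTER SET utf8);
  -- On the slave: the same declared length in utf8mb4, i.e.
  -- 20*4 = 80 octets.
  CREATE TABLE t1 (c VARCHAR(20) CHARACTER SET utf8mb4);
  -- Before the fix, a conversion failure reported character lengths
  -- computed with the wrong charset; after the fix both sides are
  -- reported in octets, with charset information where relevant.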
Monty
d90fa9ad28 Added asan options to mysql-test-run
- Leak detection is now enabled by default
- Also added suppression files for asan and lsan to mysql-test
2019-08-23 22:06:30 +02:00
Alexander Barkov
c1599821a5 Merge remote-tracking branch 'origin/10.4' into 10.5 2019-08-13 23:49:10 +04:00
Monty
c4fd167d5a Fixed access to uninitialized memory when using unique HASH key
Fixed the following issues:
- Call info with HA_STATUS_CONST to ensure that (key_info->rec_per_key)
  contains the latest data
- Don't access rec_per_key if key_info->algorithm == HA_KEY_ALG_LONG_HASH,
  as in this case rec_per_key points to uninitialized data
- Cleaned up code to avoid some extra 'if's and to make things more readable
- Updated test cases that used 'old' rec_per_key values
2019-08-13 17:19:00 +03:00
Oleksandr Byelkin
2792c6e7b0 Merge branch '10.3' into 10.4 2019-07-28 13:43:26 +02:00
Oleksandr Byelkin
d97342b6f2 Merge branch '10.2' into 10.3 2019-07-26 22:42:35 +02:00
Oleksandr Byelkin
32c6f40a63 Merge branch '10.1' into 10.2 2019-07-26 13:39:17 +02:00
Oleksandr Byelkin
4177181e16 Merge branch 'merge-tokudb-5.6' into 10.1 2019-07-26 10:48:12 +02:00
Oleksandr Byelkin
24a0d7c507 5.6.44-86.0 2019-07-26 08:48:46 +02:00
Alexander Barkov
1c27a050e5 MDEV-19818 Reuse new I_S table definition helper classes for TokuDB 2019-06-21 06:48:04 +04:00
Alexander Barkov
b685109596 MDEV-19710 Split the server side code in rpl_utility.cc into virtual methods in Type_handler 2019-06-07 12:47:24 +04:00
Oleksandr Byelkin
c07325f932 Merge branch '10.3' into 10.4 2019-05-19 20:55:37 +02:00
Oleksandr Byelkin
c51f85f882 Merge branch '10.2' into 10.3 2019-05-12 17:20:23 +02:00
Monty
44b8b002f5 Disable 5733_tokudb as the result is not stable 2019-05-09 11:24:06 +03:00
Oleksandr Byelkin
8cbb14ef5d Merge branch '10.1' into 10.2 2019-05-04 17:04:55 +02:00
Vladislav Vaintroub
e8778f1c7c MDEV-19265 Server should throw warning if event is created and event_scheduler = OFF 2019-04-28 12:49:59 +02:00
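A usage sketch of the new warning; the event body is illustrative:

  SET GLOBAL event_scheduler= OFF;
  CREATE EVENT e1 ON SCHEDULE EVERY 1 HOUR DO SET @x= 1;
  -- The server now warns that the event will not run while the
  -- event scheduler is disabled.
  SHOW WARNINGS;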
Sergei Golubchik
f22ed2779f Merge branch 'merge-tokudb-5.6' into 10.1 2019-04-26 18:26:23 +02:00
Sergei Golubchik
33d8a28367 5.6.43-84.3 2019-04-26 17:02:15 +02:00
Varun Gupta
d1a43973ef Adjusted result for tokudb_bugs.db756_card_part_hash_2_pick 2019-04-26 12:50:26 +05:30
Varun Gupta
87472974cd Results updated for tokudb tests 2019-04-25 22:05:54 +05:30
Michael Widenius
b5615eff0d Write information about restart in .result
The idea comes from MySQL, which does something similar
2019-04-01 19:47:24 +03:00
Sachin
d00f19e832 MDEV-371 Unique Index for long columns
This patch implements an engine-independent unique hash index.

Usage:- A unique HASH index can be created automatically for blob/varchar/text columns whose key
 length > handler->max_key_length()
or it can be explicitly specified.

  Automatic Creation:-
   CREATE TABLE t1 (a blob unique);
  Explicit Creation:-
   CREATE TABLE t1 (a int, unique(a) using HASH);

Internal KEY_PART Representations:-
 Long unique key_info will have 2 representations.
 (let's understand this with an example: CREATE TABLE t1 (a blob, b blob, unique(a, b)); )

 1. User Given Representation:- The key_info->key_part array will be similar to what the user
 has defined. So in the case of the example it will have 2 key_parts (a, b).

 2. Storage Engine Representation:- In this case there will be only one key_part and it will
 point to HASH_FIELD. This key_part will always be after the user-defined key_parts.

 So:- User Given Representation          [a] [b] [hash_key_part]
                  key_info->key_part ----^
  Storage Engine Representation          [a] [b] [hash_key_part]
                  key_info->key_part ------------^

 table->s->key_info will have the User Given Representation, while table->key_info will have the
 Storage Engine Representation. The representations can be converted into each other by calling
 the re/setup_keyinfo_hash functions.

Working:-

1. When the user specifies a HASH index or the key_length is > handler->max_key_length(), then in
mysql_prepare_create_table one extra vfield is added (for each long unique key), and
key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.

2. In init_from_binary_frm_image, values for the hash key_part are set (like fieldnr, field and flags).

3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list of
   Item_fields; when an explicit length is given by the user, Item_left is used to truncate the
   Item_field values to that length.

4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which creates the hash
key from table->record[0] and then calls ha_index_read_map; if a duplicate hash is found, the
result is compared field by field.
2019-02-22 00:35:40 +01:00
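A quick sketch of the duplicate check in action; the data is illustrative:

  CREATE TABLE t1 (a BLOB UNIQUE);
  INSERT INTO t1 VALUES (REPEAT('x', 1000));
  -- The hash of the new value collides with the stored hash, and the
  -- field-by-field comparison confirms a real duplicate:
  INSERT INTO t1 VALUES (REPEAT('x', 1000));  -- fails with a duplicate-key error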
Oleksandr Byelkin
93ac7ae70f Merge branch '10.3' into 10.4 2019-02-21 14:40:52 +01:00
Igor Babaev
2e73c56120 Merge branch '10.4' into bb-10.4-mdev7486 2019-02-19 03:18:17 -08:00
Varun Gupta
d6db6df995 MDEV-17903: New optimizer defaults: change optimize_join_buffer_size to be ON
The optimizer switch flag optimize_join_buffer_size is now ON by default.
2019-02-19 14:27:24 +05:30
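A one-line sketch for checking or overriding the new default:

  -- Verify the flag is on by default:
  SELECT @@optimizer_switch LIKE '%optimize_join_buffer_size=on%';
  -- Or turn it off for the current session:
  SET optimizer_switch='optimize_join_buffer_size=off';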
Galina Shalygina
7a77b221f1 MDEV-7486: Condition pushdown from HAVING into WHERE
A condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on fields that are used in the GROUP BY list
or on fields that are equal to grouping fields.
Aggregate functions can't be pushed down.

How the pushdown is performed on an example:

SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);

=>

SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);

The implementation scheme:

1. Extract the most restrictive condition cond from the HAVING clause of
   the select that depends only on the fields that are used in the GROUP BY
   list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
   of the select
3. Remove cond from the HAVING clause if possible

The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().

New test file having_cond_pushdown.test is created.
2019-02-17 23:38:44 -08:00
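A companion sketch for the equality case mentioned above; the schema is hypothetical. Here c is not a grouping field, but a = c makes it equal to one, so c > 5 also qualifies for pushdown:

  CREATE TABLE t1 (a INT, b INT, c INT);
  -- Both a = c and c > 5 depend only on grouping fields or on fields
  -- equal to them, so both can be moved into WHERE:
  SELECT a, MAX(b) FROM t1 GROUP BY a HAVING a = c AND c > 5;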