Commit graph

1832 commits

Alexander Barkov
f6118acda9 A follow-up patch for MDEV-27266 Improve UCA collation performance for utf8mb3 and utf8mb4
Moving these members:

   CHARSET_INFO *cs;
   const MY_UCA_WEIGHT_LEVEL *level;

from my_uca_scanner to a new separate structure my_uca_scanner_param.

Rationale:

During a comparison of two strings these members were initialized twice
(once for each string).

After the change these members are initialized only once, inside
a shared instance of my_uca_scanner_param, and the instance is
shared between the two scanners (its const address is passed as a new
parameter to the underlying scanner functions).

This change gives a slight performance improvement (~5%).
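
In sketch form, the change looks like this (stand-in types and an illustrative
init helper; the real definitions in ctype-uca.c have more members):

  /* Stand-in declarations for illustration only */
  typedef struct charset_info_st CHARSET_INFO;
  typedef struct my_uca_weight_level_st MY_UCA_WEIGHT_LEVEL;

  typedef struct my_uca_scanner_param_st
  {
    CHARSET_INFO *cs;                  /* moved out of my_uca_scanner */
    const MY_UCA_WEIGHT_LEVEL *level;  /* moved out of my_uca_scanner */
  } my_uca_scanner_param;

  /* Initialized one time per comparison, then its const address is
     shared by both scanners (hypothetical helper name): */
  static void scanner_param_init(my_uca_scanner_param *param,
                                 CHARSET_INFO *cs,
                                 const MY_UCA_WEIGHT_LEVEL *level)
  {
    param->cs= cs;
    param->level= level;
  }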
2022-09-02 13:23:24 +04:00
Marko Mäkelä
fe1f8f2c6b Merge 10.10 into 10.11 2022-08-30 13:36:30 +03:00
Marko Mäkelä
e71aca8200 Merge 10.9 into 10.10 2022-08-30 13:33:02 +03:00
Marko Mäkelä
50d6966c50 Merge 10.8 into 10.9 2022-08-30 13:22:57 +03:00
Marko Mäkelä
c8cd162a0a Merge 10.7 into 10.8 2022-08-30 13:04:17 +03:00
Marko Mäkelä
b86be02ecf Merge 10.6 into 10.7 2022-08-30 13:02:42 +03:00
anson1014
966d22b715
Ensure that source files contain only valid UTF8 encodings ()
Modern software (including text editors, static analysis software,
and web-based code review interfaces) often requires source code files
to be interpretable via a consistent character encoding, with UTF-8 or
ASCII (a strict subset of UTF-8) as the default. Several of the MariaDB
source files contain bytes that are not valid in either the UTF-8 or
ASCII encodings, but instead represent strings encoded in the
ISO-8859-1/Latin-1 or ISO-8859-2/Latin-2 encodings.

These inconsistent encodings may prevent software from correctly
presenting or processing such files. Converting all source files to
valid UTF-8 will ensure correct handling.
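
A small illustration (not from the patch): the same character has different
byte sequences in the two encoding families, and the Latin-1 byte is not
well-formed UTF-8 on its own:

  /* 'é' (U+00E9) */
  static const unsigned char as_latin1[]= { 0xE9 };       /* ISO-8859-1 */
  static const unsigned char as_utf8[]=   { 0xC3, 0xA9 }; /* UTF-8      */
  /* In UTF-8, 0xE9 can only be the lead byte of a three-byte sequence,
     so a lone 0xE9 from a Latin-1 encoded file is an invalid sequence. */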

Comments written in Czech were replaced with lightly-corrected
translations from Google Translate. Additionally, comments describing
the proper handling of special characters were changed so that the
comments are now purely UTF-8.

All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.

Co-authored-by: Andrew Hutchings <andrew@linuxjedi.co.uk>
2022-08-30 09:21:40 +01:00
Marko Mäkelä
62b418bd28 Merge 10.10 to 10.11 2022-08-29 14:30:29 +03:00
NTH19
80fbd0ee94
Remove redundant variable () 2022-08-27 10:19:16 +01:00
Igor Babaev
87e8463e04 Fixed some compiler issues that appeared after d7ffb7c3dd
Disabled atomic.rename_trigger.
2022-08-16 09:26:27 -07:00
Alexander Barkov
d8f172c11c MDEV-27266 Improve UCA collation performance for utf8mb3 and utf8mb4
Adding two levels of optimization:

1. For every byte pair [00..FF][00..FF] which:
  a. consists of two ASCII characters or makes a well-formed two-byte character
  b. whose total weight string fits into 4 weights
     (concatenated weight string in case of two ASCII characters,
     or a single weight string in case of a two-byte character)
  c. whose weight is context independent (i.e. does not depend on contractions
     or previous context pairs)
  store weights in a separate array of MY_UCA_2BYTES_ITEM,
  so during scanner_next() we can scan two bytes at a time.
  Byte pairs that do not match the conditions a-c are marked in this array
  as not applicable for optimization and scanned as before.

2. For every byte pair which is applicable for the optimization in (1),
   and which produces only one or two weights, store
   weights in one more array of MY_UCA_WEIGHT2. So in the beginning
   of strnncoll*() we can skip equal prefixes using an even more efficient
   loop. This loop consumes two bytes at a time. The loop scans while the
   two bytes on both sides produce weight strings of equal length
   (i.e. one weight on both sides, or two weights on both sides).
   This allows efficient comparison of:
   - Context independent sequences consisting of two ASCII characters
   - Context independent 2-byte characters
   - Contractions consisting of two ASCII characters, e.g. Czech "ch".
   - Some tricky cases: "ss" vs "SHARP S"
     ("ss" produces two weights, 0xC39F also produces two weights)
2022-08-10 15:04:50 +02:00
Alexander Barkov
a0858b2cff MDEV-27265 Improve contraction performance in UCA collations
Adding a hash table for contractions.

The old code iterated through all items in MY_CONTRACTIONS,
and was much slower, especially for those contractions
at the end of the list.
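
A minimal sketch of the difference (structure and hash details are
illustrative, not the actual MY_CONTRACTIONS layout):

  #include <stddef.h>

  typedef struct contraction_sketch
  {
    unsigned ch[2];                      /* characters of the contraction */
    struct contraction_sketch *next;     /* hash bucket chain             */
  } CONTRACTION_SKETCH;

  enum { N_BUCKETS= 256 };

  static const CONTRACTION_SKETCH *
  find_contraction(CONTRACTION_SKETCH *const bucket[N_BUCKETS],
                   unsigned c0, unsigned c1)
  {
    const CONTRACTION_SKETCH *c;
    /* The old code iterated over the whole list; hashing on the first
       character reduces the scan to one short chain: */
    for (c= bucket[c0 % N_BUCKETS]; c; c= c->next)
      if (c->ch[0] == c0 && c->ch[1] == c1)
        return c;
    return NULL;
  }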
2022-08-10 15:04:40 +02:00
Alexander Barkov
133446828c MDEV-27009 Add UCA-14.0.0 collations
- Added one neutral and 22 tailored (language-specific) collations based on
  Unicode Collation Algorithm version 14.0.0.

  Collations were added for Unicode character sets
  utf8mb3, utf8mb4, ucs2, utf16, utf32.

  Every tailoring was added with four accent and case
  sensitivity flag combinations, e.g.:

  * utf8mb4_uca1400_swedish_as_cs
  * utf8mb4_uca1400_swedish_as_ci
  * utf8mb4_uca1400_swedish_ai_cs
  * utf8mb4_uca1400_swedish_ai_ci

  and their _nopad_ variants:

  * utf8mb4_uca1400_swedish_nopad_as_cs
  * utf8mb4_uca1400_swedish_nopad_as_ci
  * utf8mb4_uca1400_swedish_nopad_ai_cs
  * utf8mb4_uca1400_swedish_nopad_ai_ci

- Introducing the concept of contextually typed named collations:

  CREATE DATABASE db1 CHARACTER SET utf8mb4;
  CREATE TABLE db1.t1 (a CHAR(10) COLLATE uca1400_as_ci);

  The idea is that there is no need to specify the character set prefix
  in the new collation names. It's enough to type just the suffix
  "uca1400_as_ci". The character set is taken from the context.

  In the above example script the context character set is utf8mb4.
  So the CREATE TABLE will make a column with the collation
  utf8mb4_uca1400_as_ci.

  Short collation names can be used in any part of the SQL syntax
  where the COLLATE clause is understood.

- New collations are displayed only once
  (without character set combinations) by these statements:

     SELECT * FROM INFORMATION_SCHEMA.COLLATIONS;
     SHOW COLLATION;

  For example, all these collations:
  - utf8mb3_uca1400_swedish_as_ci
  - utf8mb4_uca1400_swedish_as_ci
  - ucs2_uca1400_swedish_as_ci
  - utf16_uca1400_swedish_as_ci
  - utf32_uca1400_swedish_as_ci
  have just one entry in INFORMATION_SCHEMA.COLLATIONS and SHOW COLLATION,
  with COLLATION_NAME equal to "uca1400_swedish_as_ci", which is the suffix
  without the character set name:

SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';

+-----------------------+
| COLLATION_NAME        |
+-----------------------+
| uca1400_swedish_as_ci |
+-----------------------+

  Note, the behaviour of old collations did not change.
  Non-unicode collations (e.g. latin1_swedish_ci) and
  old UCA-4.0.0 collations (e.g. utf8mb4_unicode_ci)
  are still displayed with the character set prefix, as before.

- The structure of the table INFORMATION_SCHEMA.COLLATIONS was changed.

  The NOT NULL constraint was removed from these columns:
  - CHARACTER_SET_NAME
  - ID
  - IS_DEFAULT
  and from the corresponding columns in SHOW COLLATION.

  For example:

SELECT COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+--------------------+------+------------+
| COLLATION_NAME        | CHARACTER_SET_NAME | ID   | IS_DEFAULT |
+-----------------------+--------------------+------+------------+
| uca1400_swedish_as_ci | NULL               | NULL | NULL       |
+-----------------------+--------------------+------+------------+

  The NULL value in these columns now means that the collation
  is applicable to multiple character sets.
  The behaviour of old collations did not change.
  Make sure your client programs can handle NULL values in these columns.

- The structure of the table
  INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY was changed.

  Three new NOT NULL columns were added:
  - FULL_COLLATION_NAME
  - ID
  - IS_DEFAULT

  New collations have multiple entries in COLLATION_CHARACTER_SET_APPLICABILITY.
  The column COLLATION_NAME contains the collation name without the character
  set prefix. The column FULL_COLLATION_NAME contains the collation name with
  the character set prefix.

  Old collations have the full collation name in both FULL_COLLATION_NAME and
  COLLATION_NAME.

SELECT COLLATION_NAME, FULL_COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY
WHERE FULL_COLLATION_NAME RLIKE '^(utf8mb4|latin1).*swedish.*ci$';
+-----------------------------+-------------------------------------+--------------------+------+------------+
| COLLATION_NAME              | FULL_COLLATION_NAME                 | CHARACTER_SET_NAME | ID   | IS_DEFAULT |
+-----------------------------+-------------------------------------+--------------------+------+------------+
| latin1_swedish_ci           | latin1_swedish_ci                   | latin1             |    8 | Yes        |
| latin1_swedish_nopad_ci     | latin1_swedish_nopad_ci             | latin1             | 1032 |            |
| utf8mb4_swedish_ci          | utf8mb4_swedish_ci                  | utf8mb4            |  232 |            |
| uca1400_swedish_ai_ci       | utf8mb4_uca1400_swedish_ai_ci       | utf8mb4            | 2368 |            |
| uca1400_swedish_as_ci       | utf8mb4_uca1400_swedish_as_ci       | utf8mb4            | 2370 |            |
| uca1400_swedish_nopad_ai_ci | utf8mb4_uca1400_swedish_nopad_ai_ci | utf8mb4            | 2372 |            |
| uca1400_swedish_nopad_as_ci | utf8mb4_uca1400_swedish_nopad_as_ci | utf8mb4            | 2374 |            |
+-----------------------------+-------------------------------------+--------------------+------+------------+

- Other INFORMATION_SCHEMA queries:

  SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLUMNS;
  SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.PARAMETERS;
  SELECT TABLE_COLLATION FROM INFORMATION_SCHEMA.TABLES;
  SELECT DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
  SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.ROUTINES;
  SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.EVENTS;
  SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.EVENTS;
  SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.ROUTINES;
  SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.ROUTINES;
  SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.TRIGGERS;
  SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.TRIGGERS;
  SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.VIEWS;

  display full collation names, including the character set prefix,
  for all collations, including new collations.

  Corresponding SHOW commands also display full collation names
  in collation-related columns:

  SHOW CREATE TABLE t1;
  SHOW CREATE DATABASE db1;
  SHOW TABLE STATUS;
  SHOW CREATE FUNCTION f1;
  SHOW CREATE PROCEDURE p1;
  SHOW CREATE EVENT ev1;
  SHOW CREATE TRIGGER tr1;
  SHOW CREATE VIEW v1;

  These INFORMATION_SCHEMA queries and SHOW statements may change in
  the future, to display short collation names.
2022-08-10 15:04:24 +02:00
Alexander Barkov
6bc10f8026 MDEV-27009 Add UCA-14.0.0 collations - adding version aware implicit weight handling
Implicit weights are now handled according to the Unicode version
(14.0.0 vs earlier versions).

- Adding a new member MY_UCA_INFO::version

- Copy logical positions and the version from "src_uca" to "new_uca"
  in init_weight_level().

- Adding a "const MY_UCA_INFO *" parameter to a few functions
  so they know the Unicode version and generate implicit weights accordingly:
  - during the collation initialization time, to pages which are
    a mixture of explicit and implicit weights
  - during comparison time, for fully implicit pages
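
In sketch form, the new member lets the shared code paths pick the right
algorithm (simplified; the real functions return more than a single weight):

  #include <stdint.h>

  /* The per-version functions from the MDEV-27009 uca-dump changes: */
  uint16_t my_uca_520_implicit_weight_primary(uint32_t codepoint);
  uint16_t my_uca_1400_implicit_weight_primary(uint32_t codepoint);

  typedef struct
  {
    unsigned version;    /* the new MY_UCA_INFO::version member
                            (representation illustrative)        */
  } MY_UCA_INFO_SKETCH;

  static uint16_t
  implicit_weight_primary(const MY_UCA_INFO_SKETCH *uca, uint32_t cp)
  {
    return uca->version >= 1400 ?
           my_uca_1400_implicit_weight_primary(cp) :
           my_uca_520_implicit_weight_primary(cp);
  }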
2022-08-10 15:04:11 +02:00
Alexander Barkov
d7ffb7c3dd MDEV-27009 Add UCA-14.0.0 collations - dump logical positions and contractions
- uca-dump can now dump logical positions as a set of "#define" directives.
  Logical positions for 4.0.0 and for 5.2.0 were calculated and put into
  ctype-uca.c manually. That required some effort: analyzing allkeys.txt
  with the help of grep and sort.
  Now when defining a new MY_UCA_INFO it's possible to use the new #define's
  instead of calculating logical positions manually.
  Logical positions also print their weights in DUCET format as a comment
  before the define:

/*
[.0000.0021.0002]
[.0000.0117.0002]
*/

  The comment helps to see the weight ranges on various levels,
  which makes it easier to debug the code.

- uca-dump can now dump built-in DUCET contractions

- Adding a new uca-dump command line option --no-contractions; this is useful
  if one needs to re-dump the 4.0.0 and 5.2.0 data in a ctype-uca.c compatible way.

- Adding a new uca-dump command line option --case-first=upper|lower.
  This can be useful if one needs to dump with UPPER case first by default.
  It's not yet decided if we'll use --case-first=upper during the dump though.

- Moving parts of the code from the main loop into separate functions
  parse_chars() and parse_weights(). This allows reusing the code between
  single characters and contractions.

- Adding a new function my_ducet_weight_normalize(), to cut zero weights
  from a weight string, e.g. [AAAA][0000][BBBB] -> [AAAA][BBBB]
  (see the sketch after this list).
  This helps to reuse the code between single characters and contractions.

- Weight normalization is now done before printing, in separate loops inside
  my_ducet_normalize(). Before this change, normalization was done during
  printing, inside the printing loop. This helps to separate the steps:
  loading -> normalizing -> printing.
  This makes it easier to follow what's going on, e.g. while debugging.

- Fixing ctype-uca.c to handle built-in contractions of any length.
  Previously we had only built-in contractions in utf8mb4_thai_520_w2,
  which contains only 2-character contractions.
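
A sketch of the my_ducet_weight_normalize() idea mentioned above (simplified
signature; the real function works on the dump tool's weight strings):

  #include <stddef.h>
  #include <stdint.h>

  /* Cut zero weights from a weight string, e.g.
     {0xAAAA, 0x0000, 0xBBBB} -> {0xAAAA, 0xBBBB}; returns the new length. */
  static size_t
  weight_normalize(uint16_t *weights, size_t nweights)
  {
    size_t i, dst= 0;
    for (i= 0; i < nweights; i++)
      if (weights[i] != 0)
        weights[dst++]= weights[i];
    return dst;
  }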
2022-08-10 15:03:58 +02:00
Alexander Barkov
0736c03d56 MDEV-27009 Add UCA-14.0.0 collations - Adding implicit weight handling for Unicode-14.0.0
1. Adding separate functions for different Unicode versions
  - my_uca_520_implicit_weight_primary()
   It calculates implicit weights according to the old algorithm
   that we used to dump Unicode-5.2.0 weights.

  - my_uca_1400_implicit_weight_primary()
    It calculates implicit weights according to
    https://unicode.org/reports/tr10/#Values_For_Base_Table
    as of November 2021, Unicode version 14.0.0
    (a simplified sketch follows this list).

2. Adding the "@version" line recognition when dumping allkeys.txt.
   Implicit weights are dumped according to @version.

3. Dumping the scanned version as a "#define"

4. Removing the dumping of MY_UCA_NPAGES, MY_UCA_NCHARS, MY_UCA_CMASK,
   MY_UCA_PSHIFT, as they are defined in ctype-uca.c. Also removing the
   dumping of "main()"; it's not needed. The intent is to generate an *.h
   file which can be put directly into the MariaDB source tree.

5. Adding a structure MY_DUCET. It now contains weights for single
   characters and version-related members. Later we'll add contractions
   and logical positions here.
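
A simplified sketch of the 14.0.0 rules from the UTS #10 section linked in
item 1 (the Tangut, Nushu and Khitan special cases are omitted, and the block
tests are reduced to rough range checks; see the specification for the exact
conditions):

  #include <stdint.h>

  static uint16_t
  uca_1400_implicit_weight_primary(uint32_t cp)
  {
    if (cp >= 0x4E00 && cp <= 0x9FFF)        /* core Han ideographs           */
      return (uint16_t) (0xFB40 + (cp >> 15));
    if (cp >= 0x3400 && cp <= 0x3FFFF)       /* other Han ideographs, roughly */
      return (uint16_t) (0xFB80 + (cp >> 15));
    return (uint16_t) (0xFBC0 + (cp >> 15)); /* unassigned code points        */
  }
  /* The second collation element's primary weight is (cp & 0x7FFF) | 0x8000. */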
2022-08-10 15:03:47 +02:00
Alexander Barkov
bb84f61a26 MDEV-27009 Add UCA-14.0.0 collations - adding uca-dump into build targets
- Adding uca-dump into build targets
- Adding ctype-uca.h and moving implicit weight related routines there
- Reusing implicit weight routines in ctype-uca.c and uca-dump.c
- Adding handling of command line arguments to uca-dump
- Fixing some compile-time warnings in uca-dump.c
2022-08-10 15:03:35 +02:00
Marko Mäkelä
f53f64b7b9 Merge 10.8 into 10.9 2022-07-28 10:47:33 +03:00
Marko Mäkelä
f79cebb4d0 Merge 10.7 into 10.8 2022-07-28 10:33:26 +03:00
Marko Mäkelä
742e1c727f Merge 10.6 into 10.7 2022-07-27 18:26:21 +03:00
Marko Mäkelä
30914389fe Merge 10.5 into 10.6 2022-07-27 17:52:37 +03:00
Marko Mäkelä
098c0f2634 Merge 10.4 into 10.5 2022-07-27 17:17:24 +03:00
Oleksandr Byelkin
3bb36e9495 Merge branch '10.3' into 10.4 2022-07-27 11:02:57 +02:00
Rucha Deodhar
dbe39f14fe MDEV-28762: recursive call of some json functions without stack control
Analysis: Some recursive json functions don't check for stack overrun.
Fix: Add check_stack_overrun(). The last argument is NULL because it is not
used.
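
In sketch form, each recursive entry point gains a guard like this (fragment;
THD and check_stack_overrun() come from the server headers, and
STACK_MIN_SIZE stands in for whatever margin the actual fix uses):

  static int handle_item(THD *thd /* , ... */)
  {
    if (check_stack_overrun(thd, STACK_MIN_SIZE, NULL))
      return 1;                 /* error already reported; unwind safely */
    /* ... work that may recurse back into handle_item() ... */
    return 0;
  }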
2022-07-20 19:24:48 +05:30
Sergei Golubchik
bf2bdd1a1a Merge branch '10.8' into 10.9 2022-05-19 14:07:55 +02:00
Sergei Golubchik
0a1d9d0681 MDEV-27415 main.json_normalize and main.json_equals fail with UBSAN runtime error
UBSAN: out-of-bounds array read in json

json_lib.c:847:25: runtime error: index 200 out of bounds for type 'json_string_char_classes [128]'
json_lib.c:847:25: runtime error: load of address 0x56286f7175a0 with insufficient space for an object of type 'json_string_char_classes'

Fixes main.json_equals and main.json_normalize.
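
The shape of the problem, in sketch form (array and class names are
illustrative): a classifier table sized for ASCII was indexed with full byte
values.

  enum char_class { C_ETC /* , ... */ };
  static const enum char_class chr_map[128]= { C_ETC /* , ... */ };

  /* Before: chr_map[c] with c up to 0xFF read past the 128-entry table.
     After:  bytes >= 128 get a default class instead. */
  #define CHR_CLASS(c) ((unsigned char)(c) < 128 ? \
                        chr_map[(unsigned char)(c)] : C_ETC)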
2022-05-11 19:21:59 +02:00
Sergei Golubchik
443c2a715d Merge branch '10.7' into 10.8 2022-05-11 12:21:36 +02:00
Sergei Golubchik
fd132be117 Merge branch '10.6' into 10.7 2022-05-11 11:25:33 +02:00
Sergei Golubchik
3bc98a4ec4 Merge branch '10.5' into 10.6 2022-05-10 14:01:23 +02:00
Sergei Golubchik
ef781162ff Merge branch '10.4' into 10.5 2022-05-09 22:04:06 +02:00
Alexander Barkov
0e4bc67eab 10.4 specific changes for "MDEV-27494 Rename .ic files to .inl"
Renaming ctype-uca.ic to ctype-uca.inl

This file was introduced in 10.4,
so it did not get into the main 10.2 patch for MDEV-27494
2022-04-28 10:52:11 +04:00
Rucha Deodhar
43fa8e0b8f MDEV-28319: Assertion `cur_step->type & JSON_PATH_KEY' failed in json_find_path
Analysis: When trying to find a path and handling the match for the path,
the value at the current index is not set to 0 in array_counters. This causes
a wrong current step value, which eventually causes a wrong cur_step->type
value.
Fix: Set the value at the current index of array_counters to 0.
2022-04-26 16:13:19 +05:30
Rucha Deodhar
4730a6982f MDEV-28350: Test failing on buildbot with UBSAN
Analysis: There were two kinds of failing tests on buildbot with UBSAN:
1) runtime error: signed integer overflow, and
2) runtime error: load of value is not a valid value for the type.
Signed integer overflow was occurring because the addition of two integers
(the size of the json array + the item number in the array) overflowed in
json_path_parts_compare. This overflow happens because a->n_item_end
wasn't set.
The second error was occurring because c_path->p.types_used is not
initialized, but the value is used later on to check for a negative path index.
Fix: For the signed integer overflow, use a->n_item_end only in the case of a
range, so that it is set.
2022-04-26 13:59:43 +05:30
Rucha Deodhar
3716eaff4e MDEV-28326: Server crashes in json_path_parts_compare
Analysis: When trying to compare json paths, the array_sizes variable is
NULL at the beginning. But computing an address by adding an offset to this
NULL pointer, while recursively calling json_path_parts_compare() to handle
a double wildcard, is undefined behaviour, and the array_sizes variable
eventually becomes non-NULL (gets some address).
This eventually results in a crash.
Fix: If the array_sizes variable is NULL, then pass NULL recursively as well.
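
The shape of the fix, in sketch form (call and variable names simplified; the
real call is the recursive json_path_parts_compare() described above):

  /* Before: "array_sizes + n" was computed even when array_sizes == NULL,
     which is undefined behaviour in C even if the result is never read. */
  rc= compare_parts(a_next, b_cur,
                    array_sizes ? array_sizes + n : NULL);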
2022-04-26 12:37:11 +05:30
Rucha Deodhar
c69d72c2e4 MDEV-28072: JSON_EXTRACT has inconsistent behavior with '0' value in json
path (when range is used)

Analysis: When a 0 comes after a space, the json path parser changes the
state to JE_SYN instead of PS_Z (meaning: parse zero). Hence the warning.
Fix: Make the state PS_Z instead of JE_SYN.
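
In sketch form, the fix changes one cell of the path parser's transition
table (all names and table dimensions here are illustrative):

  enum { JE_SYN= -1, PS_Z= 1 };      /* error / "parse zero" states */
  enum { PS_SP= 0, N_STATES= 1 };    /* "reading spaces" row shown  */
  enum { C_ZERO= 0, N_CLASSES= 1 };  /* column for the '0' class    */

  /* transition[current_state][character_class] -> next state */
  static const int transition[N_STATES][N_CLASSES]=
  {
    /* PS_SP */ { /* C_ZERO */ PS_Z /* was JE_SYN */ }
  };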
2022-04-15 01:04:52 +05:30
Rucha Deodhar
95a9078efc MDEV-28071: JSON_EXISTS returns always 1 if it is used range notation for
json path
Analysis: When searching for the given path in a json string, if the current
step is of array range type, then the path was considered reached, which meant
the path exists. So the output was always true. The end indexes of the range
were not evaluated.
Fix: If the current step type for a path is array range, then check whether
the value array_counter[] is in the range of n_item and n_item_end. Only if it
is does the path exist, and only then return true. If the range criteria are
never met, return false.
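
The shape of the fix, in sketch form (field names as in the description
above; the counter indexing is simplified):

  /* A range step matches only while the current array index lies
     between both bounds: */
  if (cur_step->n_item <= array_counter &&
      array_counter <= cur_step->n_item_end)
    return 1;    /* path reached: JSON_EXISTS is true */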
2022-04-15 01:04:25 +05:30
Rucha Deodhar
e6511a39f8 vcol.wrong_arena failing on buildbot when current date is '2022-03-17'
Analysis: When the current date is '2022-03-17', dayname() gives 'Thursday'.
The previous json state is PS_KEYX, which means the key started with a quote,
so the json path parser is now supposed to parse the key. The keyname starts
with 'T', but the path transition table has JE_SYN when the previous state is
PS_KEYX and the next letter is 'T'. So it gives an error.
Fix: We want to continue parsing the quoted keyname, so JE_SYN is incorrect.
Replaced it with PS_KNMX.
2022-04-15 01:02:44 +05:30
Rucha Deodhar
c781cefd8a MDEV-27911: Implement range notation for json path
A range can be thought of in a similar manner as the wildcard (*), where
more than one element is processed. To implement range notation, the json
path parser was extended to parse the 'to' keyword, and JSON_PATH_ARRAY_RANGE
was added as a path type. If there is a 'to' keyword, then
JSON_PATH_ARRAY_RANGE is used for the path type along with the existing type.
A new integer, n_item_end, stores the end index of the range.
When there is a 'to' keyword, the index is stored in n_item_end; otherwise it
is stored in n_item.
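
In sketch form, the step structure after this change (simplified), for a path
like '$[2 to 4]':

  typedef struct json_path_step_sketch
  {
    int type;        /* existing type, with JSON_PATH_ARRAY_RANGE or'ed in
                        when the 'to' keyword was seen                     */
    int n_item;      /* start index, e.g. 2                                */
    int n_item_end;  /* end index, e.g. 4 (set only for ranges)            */
  } json_path_step_sketch;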
2022-04-15 01:02:44 +05:30
Rucha Deodhar
abe9712194 MDEV-27972: Unexpected behavior with negative zero (-0) in
JSON Path

Analysis: When we have '-' followed by 0, the state is
changed to JE_SYN, meaning syntax error.
Fix: Change the state to PS_INT instead, because we are
reading '0' next (an integer) and it is not a syntax error.
2022-04-13 21:16:32 +05:30
Rucha Deodhar
dfcbb30a92 MDEV-22224: Support JSON Path negative index
This patch can be viewed as combination of two parts:
1) Enabling '-' in the path so that the parser does not give out a warning.
2) Setting the negative index to a correct value and returning the
   appropriate value.

1) To enable using the negative index in the path:
To make the parser not return a warning when a negative index is used in the
path, '-' needs to be allowed among the json path characters. P_NEG is added
to enable this and is made recognizable by setting index 45 of
json_path_chr_map[] to P_NEG (instead of the previous P_ETC),
because 45 corresponds to '-' in Unicode.
When the path is being parsed and '-' is encountered, the parser should
recognize it as parsing a '-' sign, so a new json state PS_NEG is required.
When the state is PS_NEG, it means that a negative integer is
going to be parsed, so is_negative_index of the current step is set to 1 and
n_item is set accordingly when an integer is encountered after '-'.
Next, parsing proceeds with the rest of the path to get the correct path.
The next step is parsing the json and returning the correct value.

2) Setting the negative index to a correct value and returning the value:
While parsing the json, if we encounter an array and the path step for the
array is a negative index (n_item < 0), then we can count the number of
elements in the array and set n_item to the correct corresponding value. This
is done in json_skip_array_and_count.
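
In sketch form (variable names simplified; the element count comes from the
json_skip_array_and_count mentioned above):

  /* With is_negative_index set, $[-1] addresses the last element, so the
     effective index is derived from the counted array length: */
  if (cur_step->is_negative_index)
    effective_index= n_elements - cur_step->n_item;  /* e.g. 5 - 1 = 4 */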
2022-04-13 21:16:32 +05:30
Marko Mäkelä
8680eedb26 Merge 10.8 into 10.9 2022-03-30 09:41:14 +03:00
Marko Mäkelä
5c69e93630 Merge 10.7 into 10.8 2022-03-30 09:34:07 +03:00
Marko Mäkelä
a4d753758f Merge 10.6 into 10.7 2022-03-30 08:52:05 +03:00
Marko Mäkelä
b242c3141f Merge 10.5 into 10.6 2022-03-29 16:16:21 +03:00
Marko Mäkelä
d62b0368ca Merge 10.4 into 10.5 2022-03-29 12:59:18 +03:00
Marko Mäkelä
ae6e214fd8 Merge 10.3 into 10.4 2022-03-29 11:13:18 +03:00
Marko Mäkelä
020e7d89eb Merge 10.2 into 10.3 2022-03-29 09:53:15 +03:00
Alexander Barkov
0c4c064f98 MDEV-27743 Remove Lex::charset
This patch also fixes:

MDEV-27690 Crash on `CHARACTER SET csname COLLATE DEFAULT` in column definition
MDEV-27853 Wrong data type on column `COLLATE DEFAULT` and table `COLLATE some_non_default_collation`
MDEV-28067 Multiple conflicting column COLLATE clauses are not rejected
MDEV-28118 Wrong collation of `CAST(.. AS CHAR COLLATE DEFAULT)`
MDEV-28119 Wrong column collation on MODIFY + CONVERT
2022-03-22 17:12:15 +04:00
Alexey Botchkov
6277e7df6b MDEV-22742 UBSAN: Many overflow issues in strings/decimal.c - runtime error: signed integer overflow: x * y cannot be represented in type 'long long int' (on optimized builds).
Avoid integer overflow: do the check before the calculation.
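
The general pattern of such a fix, as a minimal sketch (not the literal
strings/decimal.c code): validate the operands first, because the signed
overflow itself is already undefined behaviour in C.

  #include <limits.h>

  /* Returns 1 (leaving *res untouched) if x * y would overflow. */
  static int mul_ll_checked(long long x, long long y, long long *res)
  {
    if (x > 0 && y > 0 && x > LLONG_MAX / y)
      return 1;
    /* ... the other sign combinations need symmetric checks ... */
    *res= x * y;
    return 0;
  }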
2022-03-21 15:05:42 +04:00
Marko Mäkelä
18bb95b608 Merge 10.7 into 10.8 2022-03-14 11:52:11 +02:00