Moving these members:
CHARSET_INFO *cs;
const MY_UCA_WEIGHT_LEVEL *level;
from my_uca_scanner to a new separate structure my_uca_scanner_param.
Rationale:
During a comparison of two strings, these members were initialized twice
(once for each string).
After the change, these members are initialized only once, inside
a shared instance of my_uca_scanner_param, and that instance is
shared between the two scanners (its const address is passed as a new
parameter to the underlying scanner functions).
This change gives a slight performance improvement (~5%).
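A minimal sketch of the idea, showing only the two members mentioned above
(the init helper and the scanner-init calls in the comment are assumptions
for illustration, not the actual MariaDB API):

  typedef struct my_uca_scanner_param_st
  {
    CHARSET_INFO *cs;                  /* shared by both scanners      */
    const MY_UCA_WEIGHT_LEVEL *level;  /* initialized once per compare */
  } my_uca_scanner_param;

  static inline void
  my_uca_scanner_param_init(my_uca_scanner_param *param,
                            CHARSET_INFO *cs,
                            const MY_UCA_WEIGHT_LEVEL *level)
  {
    param->cs= cs;
    param->level= level;
  }

  /*
    strnncoll*() then fills one my_uca_scanner_param and passes its
    const address to both scanners, instead of copying cs and level
    into each my_uca_scanner (hypothetical calls):

      my_uca_scanner_param param;
      my_uca_scanner_param_init(&param, cs, level);
      scanner_init(&scanner1, &param, s, slen);
      scanner_init(&scanner2, &param, t, tlen);
  */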
Modern software (including text editors, static analysis software,
and web-based code review interfaces) often requires source code files
to be interpretable via a consistent character encoding, with UTF-8 or
ASCII (a strict subset of UTF-8) as the default. Several of the MariaDB
source files contain bytes that are not valid in either the UTF-8 or
ASCII encodings, but instead represent strings encoded in the
ISO-8859-1/Latin-1 or ISO-8859-2/Latin-2 encodings.
These inconsistent encodings may prevent software from correctly
presenting or processing such files. Converting all source files to
valid UTF-8 ensures correct handling.
Comments written in Czech were replaced with lightly-corrected
translations from Google Translate. Additionally, comments describing
the proper handling of special characters were changed so that the
comments themselves are now valid UTF-8.
All new code of the whole pull request, including one or several files
that are either new or modified, is contributed under the
BSD-new license. I am contributing on behalf of my employer,
Amazon Web Services, Inc.
Co-authored-by: Andrew Hutchings <andrew@linuxjedi.co.uk>
Adding two levels of optimization:
1. For every byte pair [00..FF][00..FF] which:
a. consists of two ASCII characters or makes a well-formed two-byte character
b. whose total weight string fits into 4 weights
(concatenated weight string in case of two ASCII characters,
or a single weight string in case of a two-byte character)
c. whose weight is context independent (i.e. does not depend on contractions
or previous context pairs)
store weights in a separate array of MY_UCA_2BYTES_ITEM,
so during scanner_next() we can scan two bytes at a time.
Byte pairs that do not satisfy conditions a-c are marked in this array
as not applicable for the optimization and are scanned as before.
2. For every byte pair which is applicable for the optimization in #1,
and which produces only one or two weights, store the
weights in one more array, of MY_UCA_WEIGHT2. So at the beginning
of strnncoll*() we can skip equal prefixes using an even more efficient
loop (see the sketch after this list). This loop consumes two bytes at a
time. The loop scans while the two bytes on both sides produce weight
strings of equal length (i.e. one weight on both sides, or two weights
on both sides).
This allows efficient comparison of:
- Context independent sequences consisting of two ASCII characters
- Context independent 2-byte characters
- Contractions consisting of two ASCII characters, e.g. Czech "ch".
- Some tricky cases: "ss" vs "SHARP S"
("ss" produces two weights, 0xC39F also produces two weights)
Adding a hash table for contractions.
The old code iterated through all items in MY_CONTRACTIONS
and was much slower, especially for contractions
at the end of the list.
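Conceptually, the lookup changes from a linear scan to a hashed bucket probe.
A minimal sketch; the hash structure, my_contraction_hash() and
my_contraction_eq() are hypothetical helpers for illustration only:

  typedef struct contraction_node_st
  {
    const MY_CONTRACTION *item;
    struct contraction_node_st *next;
  } CONTRACTION_NODE;

  typedef struct
  {
    CONTRACTION_NODE **bucket;
    size_t nbuckets;
  } MY_CONTRACTION_HASH;

  /* Hypothetical lookup replacing the linear scan over MY_CONTRACTIONS */
  static const MY_CONTRACTION *
  my_uca_contraction_hash_find(const MY_CONTRACTION_HASH *hash,
                               const my_wc_t *wc, size_t len)
  {
    const CONTRACTION_NODE *node=
      hash->bucket[my_contraction_hash(wc, len) % hash->nbuckets];
    for ( ; node; node= node->next)
      if (my_contraction_eq(node->item, wc, len))
        return node->item;          /* found: weights for this contraction */
    return NULL;                    /* no such contraction                 */
  }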
- Added one neutral and 22 tailored (language specific) collations based on
Unicode Collation Algorithm version 14.0.0.
Collations were added for Unicode character sets
utf8mb3, utf8mb4, ucs2, utf16, utf32.
Every tailoring was added with four accent- and case-sensitivity
flag combinations, e.g.:
* utf8mb4_uca1400_swedish_as_cs
* utf8mb4_uca1400_swedish_as_ci
* utf8mb4_uca1400_swedish_ai_cs
* utf8mb4_uca1400_swedish_ai_ci
and their _nopad_ variants:
* utf8mb4_uca1400_swedish_nopad_as_cs
* utf8mb4_uca1400_swedish_nopad_as_ci
* utf8mb4_uca1400_swedish_nopad_ai_cs
* utf8mb4_uca1400_swedish_nopad_ai_ci
- Introducing the concept of contextually typed named collations:
CREATE DATABASE db1 CHARACTER SET utf8mb4;
CREATE TABLE db1.t1 (a CHAR(10) COLLATE uca1400_as_ci);
The idea is that there is no need to specify the character set prefix
in the new collation names. It's enough to type just the suffix
"uca1400_as_ci"; the character set is taken from the context.
In the above example script the context character set is utf8mb4,
so the CREATE TABLE will create a column with the collation
utf8mb4_uca1400_as_ci.
Short collation names can be used in any part of the SQL syntax
where the COLLATE clause is understood.
- New collations are displayed only once
(without character set combinations) by these statements:
SELECT * FROM INFORMATION_SCHEMA.COLLATIONS;
SHOW COLLATION;
For example, all these collations:
- utf8mb3_uca1400_swedish_as_ci
- utf8mb4_uca1400_swedish_as_ci
- ucs2_uca1400_swedish_as_ci
- utf16_uca1400_swedish_as_ci
- utf32_uca1400_swedish_as_ci
have just one entry in INFORMATION_SCHEMA.COLLATIONS and SHOW COLLATION,
with COLLATION_NAME equal to "uca1400_swedish_as_ci", which is the suffix
without the character set name:
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+
| COLLATION_NAME |
+-----------------------+
| uca1400_swedish_as_ci |
+-----------------------+
Note that the behaviour of old collations did not change.
Non-unicode collations (e.g. latin1_swedish_ci) and
old UCA-4.0.0 collations (e.g. utf8mb4_unicode_ci)
are still displayed with the character set prefix, as before.
- The structure of the table INFORMATION_SCHEMA.COLLATIONS was changed.
The NOT NULL constraint was removed from these columns:
- CHARACTER_SET_NAME
- ID
- IS_DEFAULT
and from the corresponding columns in SHOW COLLATION.
For example:
SELECT COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATIONS
WHERE COLLATION_NAME LIKE '%uca1400_swedish_as_ci';
+-----------------------+--------------------+------+------------+
| COLLATION_NAME | CHARACTER_SET_NAME | ID | IS_DEFAULT |
+-----------------------+--------------------+------+------------+
| uca1400_swedish_as_ci | NULL | NULL | NULL |
+-----------------------+--------------------+------+------------+
The NULL value in these columns now means that the collation
is applicable to multiple character sets.
The behaviour of old collations did not change.
Make sure your client programs can handle NULL values in these columns.
- The structure of the table
INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY was changed.
Three new NOT NULL columns were added:
- FULL_COLLATION_NAME
- ID
- IS_DEFAULT
New collations have multiple entries in COLLATION_CHARACTER_SET_APPLICABILITY.
The column COLLATION_NAME contains the collation name without the character
set prefix. The column FULL_COLLATION_NAME contains the collation name with
the character set prefix.
Old collations have the full collation name in both FULL_COLLATION_NAME and
COLLATION_NAME.
SELECT COLLATION_NAME, FULL_COLLATION_NAME, CHARACTER_SET_NAME, ID, IS_DEFAULT
FROM INFORMATION_SCHEMA.COLLATION_CHARACTER_SET_APPLICABILITY
WHERE FULL_COLLATION_NAME RLIKE '^(utf8mb4|latin1).*swedish.*ci$';
+-----------------------------+-------------------------------------+--------------------+------+------------+
| COLLATION_NAME | FULL_COLLATION_NAME | CHARACTER_SET_NAME | ID | IS_DEFAULT |
+-----------------------------+-------------------------------------+--------------------+------+------------+
| latin1_swedish_ci | latin1_swedish_ci | latin1 | 8 | Yes |
| latin1_swedish_nopad_ci | latin1_swedish_nopad_ci | latin1 | 1032 | |
| utf8mb4_swedish_ci | utf8mb4_swedish_ci | utf8mb4 | 232 | |
| uca1400_swedish_ai_ci | utf8mb4_uca1400_swedish_ai_ci | utf8mb4 | 2368 | |
| uca1400_swedish_as_ci | utf8mb4_uca1400_swedish_as_ci | utf8mb4 | 2370 | |
| uca1400_swedish_nopad_ai_ci | utf8mb4_uca1400_swedish_nopad_ai_ci | utf8mb4 | 2372 | |
| uca1400_swedish_nopad_as_ci | utf8mb4_uca1400_swedish_nopad_as_ci | utf8mb4 | 2374 | |
+-----------------------------+-------------------------------------+--------------------+------+------------+
- Other INFORMATION_SCHEMA queries:
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.COLUMNS;
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.PARAMETERS;
SELECT TABLE_COLLATION FROM INFORMATION_SCHEMA.TABLES;
SELECT DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
SELECT COLLATION_NAME FROM INFORMATION_SCHEMA.ROUTINES;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.EVENTS;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.EVENTS;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.ROUTINES;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.ROUTINES;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT DATABASE_COLLATION FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT COLLATION_CONNECTION FROM INFORMATION_SCHEMA.VIEWS;
display full collation names, including the character set prefix,
for all collations, including the new collations.
Corresponding SHOW commands also display full collation names
in collation related columns:
SHOW CREATE TABLE t1;
SHOW CREATE DATABASE db1;
SHOW TABLE STATUS;
SHOW CREATE FUNCTION f1;
SHOW CREATE PROCEDURE p1;
SHOW CREATE EVENT ev1;
SHOW CREATE TRIGGER tr1;
SHOW CREATE VIEW;
These INFORMATION_SCHEMA queries and SHOW statements may change in
the future to display short collation names.
Implicit weights are now handled according to the Unicode version
(14.0.0 vs earlier versions).
- Adding a new member MY_UCA_INFO::version
- Copy logical positions and the version from "src_uca" to "new_uca"
in init_weight_level().
- Adding a "const MY_UCA_INFO *" parameter to a few functions
to know Unicode version to generate implicit weights accordingly:
- during the collation initialization time, to pages which are
a mixture of explicit and implicit weights
- during comparison time, for fully implicit pages
- uca-dump can now dump logical positions as a set of "#define" directives.
Logical positions for 4.0.0 and for 5.2.0 were calculated and put into
ctype-uca.c manually. That required some effort, analyzing allkeys.txt
with the help of grep and sort.
Now, when defining a new MY_UCA_INFO, it's possible to use the new #defines
instead of calculating logical positions manually.
Logical positions also print their weights in DUCET format as a comment
before the define:
/*
[.0000.0021.0002]
[.0000.0117.0002]
*/
The comment helps to see the weight ranges on various levels,
which makes the code easier to debug.
- uca-dump can now dump built-in DUCET contractions
- Adding a new uca-dump command line option --no-contractions; this is useful
if one needs to re-dump the 4.0.0 and 5.2.0 data in a ctype-uca.c-compatible way.
- Adding a new uca-dump command line option --case-first=upper|level.
This can be useful if one needs to dump with UPPER case first by default.
It's not yet decided whether we'll use --case-first=upper during the dump, though.
- Moving parts of the code from the main loop into separate functions,
parse_chars() and parse_weights(). This allows reusing the code between
single characters and contractions.
- Adding a new function my_ducet_weight_normalize(), to cut zero weights
from a weight string, e.g. [AAAA][0000][BBBB] -> [AAAA][BBBB]
(a sketch follows this list).
This helps to reuse the code between single characters and contractions.
- Weight normalization is now done before printing, in separate loops inside
my_ducet_normalize(). Before this change, normalization was done during
printing, inside the printing loop. This helps to separate the steps:
loading -> normalizing -> printing,
which makes it easier to follow what's going on, e.g. while debugging.
- Fixing ctype-uca.c to handle built-in contractions of any length.
Previously the only built-in contractions were in utf8mb4_thai_520_w2,
which contains only 2-character contractions.
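A minimal sketch of the normalization itself (representing the weight string
as a plain uint16 array is an assumption; uca-dump's actual types may differ):

  /* Remove zero weights, e.g. [AAAA][0000][BBBB] -> [AAAA][BBBB];
     returns the new number of weights */
  static size_t my_ducet_weight_normalize(uint16 *weights, size_t nweights)
  {
    size_t src, dst= 0;
    for (src= 0; src < nweights; src++)
    {
      if (weights[src] != 0)
        weights[dst++]= weights[src];
    }
    return dst;
  }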
1. Adding separate functions for different Unicode versions
(a sketch of the 14.0.0 formula follows this numbered list):
- my_uca_520_implicit_weight_primary()
It calculates implicit weights according to the old algorithm
that we used to dump Unicode-5.2.0 weights.
- my_uca_1400_implicit_weight_primary()
It calculates implicit weights according to
https://unicode.org/reports/tr10/#Values_For_Base_Table
as of November 2021, Unicode version 14.0.0.
2. Adding "@version" line recognition when dumping allkeys.txt.
Implicit weights are dumped according to @version.
3. Dumping the scanned version as a "#define".
4. Removing the dumping of MY_UCA_NPAGES, MY_UCA_NCHARS, MY_UCA_CMASK and
MY_UCA_PSHIFT, as they are defined in ctype-uca.c. Also removing the dumping
of "main()", as it's not needed. The intent is to generate an *.h file which
can be put directly into the MariaDB source tree.
5. Adding a structure MY_DUCET. It now contains weights for single
characters and version-related members. Later we'll add contractions
and logical positions here.
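A simplified sketch of the 14.0.0 base-table formula from TR10. The return
type is an assumption, the Han classification here uses hard-coded ranges
instead of Unicode properties, and the special Tangut/Nushu/Khitan ranges
are omitted:

  typedef struct { uint16 weight[2]; } MY_UCA_IMPLICIT_WEIGHT;  /* assumed */

  static MY_UCA_IMPLICIT_WEIGHT
  my_uca_1400_implicit_weight_primary(my_wc_t cp)
  {
    MY_UCA_IMPLICIT_WEIGHT res;
    uint16 base;
    if (cp >= 0x4E00 && cp <= 0x9FFF)         /* core Han (simplified test)  */
      base= 0xFB40;
    else if (cp >= 0x20000 && cp <= 0x3FFFF)  /* Han extensions (simplified) */
      base= 0xFB80;
    else
      base= 0xFBC0;                           /* unassigned and other chars  */
    res.weight[0]= (uint16) (base + (cp >> 15));        /* AAAA */
    res.weight[1]= (uint16) ((cp & 0x7FFF) | 0x8000);   /* BBBB */
    return res;
  }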
- Adding uca-dump into build targets
- Adding ctype-uca.h and moving implicit weight related routines there
- Reusing implicit weight routines in ctype-uca.c and uca-dump.c
- Adding handling of command line arguments to uca-dump
- Fixing some compile-time warnings in uca-dump.c
UBSAN: out of bound array read in json
json_lib.c:847:25: runtime error: index 200 out of bounds for type 'json_string_char_classes [128]'
json_lib.c:847:25: runtime error: load of address 0x56286f7175a0 with insufficient space for an object of type 'json_string_char_classes'
fixes main.json_equals and main.json_normalize
Analysis: When trying to find the path and handling the match for the path,
the value at the current index is not set to 0 for array_counters. This causes
a wrong current step value, which eventually causes a wrong cur_step->type value.
Fix: Set the value at the current index of array_counters to 0.
Analysis: There were two kinds of failing tests on buildbot with UBSAN:
1) runtime error: signed integer overflow, and
2) runtime error: load of value is not a valid value for type.
The signed integer overflow was occurring because the addition of two integers
(the size of the json array + the item number in the array) overflowed in
json_path_parts_compare. This overflow happened because a->n_item_end
wasn't set.
The second error was occurring because c_path->p.types_used is not
initialized, but the value is used later on to check for a negative path index.
Fix: For the signed integer overflow, use a->n_item_end only in the range case,
so that it is set.
Analysis: When trying to compare json paths, the array_sizes variable is
NULL at the beginning. But adding an offset to the NULL pointer while
recursively calling json_path_parts_compare() to handle the double wildcard
is undefined behaviour, and the array_sizes variable eventually becomes
non-NULL (acquires some address).
This eventually results in a crash.
Fix: If the array_sizes variable is NULL, then pass NULL recursively as well.
path (when range is used)
Analysis: When 0 comes after a space, the json path parser changes the
state to JE_SYN instead of PS_Z (meaning "parse zero"). Hence the warning.
Fix: Make the state PS_Z instead of JE_SYN.
json path
Analysis: When searching for the given path in a json string, if the current
step is of array range type, then the path was considered reached, meaning the
path exists. So the output was always true. The end indexes of the range were
not evaluated.
Fix: If the current step type for a path is array range, then check whether
the value in array_counter[] is within the range of n_item and n_item_end.
If it is, the path exists; only then return true. If the range criteria are
never met, return false.
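A minimal sketch of the check described in the fix, written as a hypothetical
helper (the field names follow the description above; the helper itself is not
part of the actual patch):

  /* TRUE if the current array index satisfies the path step */
  static my_bool json_step_matches_index(const json_path_step_t *step, int idx)
  {
    if (step->type & JSON_PATH_ARRAY_RANGE)        /* "n_item to n_item_end" */
      return idx >= step->n_item && idx <= step->n_item_end;
    return idx == step->n_item;                    /* plain single index     */
  }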
Analysis: When the current date is '2022-03-17', dayname() gives 'Thursday'.
The previous json state is PS_KEYX, which means a key started with a quote.
So now the json path parser is supposed to parse the key.
The key name starts with 'T'. But the path transition table has JE_SYN when
the previous state is PS_KEYX and the next letter is 'T', so it gives an error.
Fix: We want to continue parsing the quoted key name, so JE_SYN is incorrect.
Replaced it with PS_KNMX.
A range can be thought of in a similar manner to the wildcard (*), where
more than one element is processed. To implement the range notation, the
json parser was extended to parse the 'to' keyword, and JSON_PATH_ARRAY_RANGE
was added as a path type. If there is a 'to' keyword, JSON_PATH_ARRAY_RANGE is
used for the path type along with the existing type.
The new integer that stores the end index of the range is n_item_end.
When there is a 'to' keyword, the integer is stored in n_item_end; otherwise
it is stored in n_item.
JSON Path
Analysis: When we have '-' followed by 0, the state is
changed to JE_SYN, meaning a syntax error.
Fix: Change the state to PS_INT instead, because we are
reading '0' next (an integer) and it is not a syntax error.
This patch can be viewed as a combination of two parts:
1) Enabling '-' in the path so that the parser does not give out a warning.
2) Setting the negative index to a correct value and returning the
appropriate value.
1) To enable using a negative index in the path:
To make the parser not return a warning when a negative index is used in the
path, '-' needs to be allowed among the json path characters. P_NEG is added
to enable this, and is made recognizable by setting index 45 of
json_path_chr_map[] to P_NEG (instead of the previous P_ETC),
because 45 is the code point of '-'.
When the path is being parsed and '-' is encountered, the parser should
recognize it as parsing a '-' sign, so a new json state PS_NEG is required.
When the state is PS_NEG, it means that a negative integer is
going to be parsed, so is_negative_index of the current step is set to 1 and
n_item is set accordingly when an integer is encountered after '-'.
Then the rest of the path is parsed as usual to get the correct path.
The next step is parsing the json and returning the correct value.
2) Setting the negative index to the correct value and returning the value:
While parsing the json, if we encounter an array and the path step for the
array is a negative index (n_item < 0), then we can count the number of
elements in the array and set n_item to the correct corresponding value.
This is done in json_skip_array_and_count.
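A minimal sketch of that conversion, only to illustrate the arithmetic (the
helper itself and the signature of json_skip_array_and_count are assumptions):

  /* Hypothetical helper: resolve a negative step index once the array
     length is known (with 5 elements, -1 becomes 4, -2 becomes 3, ...) */
  static int resolve_negative_index(json_engine_t *je, json_path_step_t *step)
  {
    int n_elements= 0;
    if (step->n_item >= 0)
      return 0;                                     /* nothing to do     */
    if (json_skip_array_and_count(je, &n_elements)) /* signature assumed */
      return 1;                                     /* parse error       */
    step->n_item+= n_elements;                      /* -1 -> n_elements-1 */
    return 0;
  }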
This patch also fixes:
MDEV-27690 Crash on `CHARACTER SET csname COLLATE DEFAULT` in column definition
MDEV-27853 Wrong data type on column `COLLATE DEFAULT` and table `COLLATE some_non_default_collation`
MDEV-28067 Multiple conflicting column COLLATE clauses are not rejected
MDEV-28118 Wrong collation of `CAST(.. AS CHAR COLLATE DEFAULT)`
MDEV-28119 Wrong column collation on MODIFY + CONVERT