This patch also fixes:
MDEV-33050 Built-in schemas like oracle_schema are accent insensitive
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-case-table-names=0
- Removing the virtual function strcasecmp() from MY_COLLATION_HANDLER
- Adding a wrapper function CHARSET_INFO::streq(), to compare
two strings for equality. For now it calls strnncoll() internally.
In the future it will turn into a virtual function.
- Adding new accent-sensitive, case-insensitive collations:
- utf8mb4_general1400_as_ci
- utf8mb3_general1400_as_ci
They implement accent-sensitive, case-insensitive comparison.
The weight of a character is equal to the code point of its
upper case variant (see the weight sketch after this list).
These collations use Unicode-14.0.0 casefolding data.
The result of
my_charset_utf8mb3_general1400_as_ci.strcoll()
is very close to the former
my_charset_utf8mb3_general_ci.strcasecmp()
There is only a difference in a couple dozen rare characters, due to:
- the switch from "tolower" to "toupper" comparison, to make
utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
- the switch from Unicode-3.0.0 to Unicode-14.0.0
This difference should be tolerable. See the list of affected
characters in the MDEV description.
Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
Unlike utf8mb4_general_ci, it does not treat all non-BMP characters
as equal.
- Adding classes representing names of the file based database objects:
Lex_ident_db
Lex_ident_table
Lex_ident_trigger
Their comparison collation depends on the underlying
file system case sensitivity and on --lower-case-table-names
and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.
- Adding classes representing names of other database objects,
whose names are compared case insensitively,
using my_charset_utf8mb3_general1400_as_ci:
Lex_ident_column
Lex_ident_sys_var
Lex_ident_user_var
Lex_ident_sp_var
Lex_ident_ps
Lex_ident_i_s_table
Lex_ident_window
Lex_ident_func
Lex_ident_partition
Lex_ident_with_element
Lex_ident_rpl_filter
Lex_ident_master_info
Lex_ident_host
Lex_ident_locale
Lex_ident_plugin
Lex_ident_engine
Lex_ident_server
Lex_ident_savepoint
Lex_ident_charset
engine_option_value::Name
- All the mentioned Lex_ident_xxx classes implement a method streq():
if (ident1.streq(ident2))
do_equal();
This method works as a wrapper for CHARSET_INFO::streq().
- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
in class members and in function/method parameters.
- Replacing all calls like
system_charset_info->coll->strcasecmp(ident1, ident2)
with
ident1.streq(ident2)
- Taking advantage of the C++11 user-defined literal operator
for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
data types. Use example:
const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
is now a shorter version of:
const Lex_ident_column primary_key_name=
Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
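A toy model of the general1400_as_ci weight function mentioned above.
The helper here is a hypothetical stand-in for the real Unicode-14.0.0
casefolding tables, reduced to just enough data for the demo:

  #include <stdint.h>
  #include <stdio.h>

  typedef uint32_t my_wc_t;            /* a Unicode code point */

  /* Hypothetical stand-in for the Unicode-14.0.0 "toupper" mapping. */
  static my_wc_t toy_toupper(my_wc_t wc)
  {
    if (wc >= 'a' && wc <= 'z')
      return wc - 'a' + 'A';
    if (wc == 0x00E1)                  /* U+00E1 a-acute -> U+00C1 */
      return 0x00C1;
    return wc;
  }

  /* The weight of a character is the code point of its upper case
     variant: 'a' and 'A' get the same weight, while 'a' (0x0041) and
     a-acute (0x00C1) do not, which makes the collation accent
     sensitive but case insensitive. */
  static my_wc_t general1400_as_ci_weight(my_wc_t wc)
  {
    return toy_toupper(wc);
  }

  int main(void)
  {
    printf("w('a')=%04X w('A')=%04X w(U+00E1)=%04X\n",
           (unsigned) general1400_as_ci_weight('a'),
           (unsigned) general1400_as_ci_weight('A'),
           (unsigned) general1400_as_ci_weight(0x00E1));
    return 0;
  }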
This is a refactoring patch; it does not change the behaviour.
The MTR tests are being added only to cover the LIKE predicate better
(these tests should have been added earlier under the terms of MDEV-9711).
This patch does not need its own specific MTR tests.
Moving the duplicated code into a new shared file ctype-wildcmp.inl
and including it from multiple places, to define the following functions
(see the include-pattern sketch after this list):
- my_wildcmp_uca_impl(), in ctype-uca.c
For utf8mb3, utf8mb4, ucs2, utf16, utf32, using cs->cset->mb_wc().
For UCA based collations.
- my_wildcmp_mb2_or_mb4_general_ci_impl(), in ctype-ucs2.c:
For ucs2, utf16, utf32, using cs->cset->mb_wc().
For general_ci-style collations:
- xxx_general_ci
- xxx_general_mysql500_ci
- xxx_general_nopad_ci
- my_wildcmp_mb2_or_mb4_bin_impl(), in ctype-ucs2.c:
For ucs2, utf16, utf32, using cs->cset->mb_wc().
For _bin collations:
- xxx_bin
- xxx_nopad_bin
- my_wildcmp_utf8mb3_general_ci_impl(), in ctype-utf8.c
Optimized for utf8mb3, using my_mb_wc_utf8mb3_quick().
For general_ci-style collations:
- utf8mb3_general_ci
- utf8mb3_general_mysql500_ci
- utf8mb3_general_nopad_ci
- my_wildcmp_utf8mb4_general_ci_impl(), in ctype-utf8.c
Optimized for utf8mb4, using my_mb_wc_utf8mb4_quick().
For general_ci-style collations:
- utf8mb4_general_ci
- utf8mb4_general_nopad_ci
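For illustration, the sharing works through the usual parameterized-include
pattern; the macro names below are illustrative, not necessarily the ones
ctype-wildcmp.inl actually uses:

  /* In ctype-utf8.c (sketch): set the parameters, then include the
     shared implementation once per specialization. */
  #define MY_FUNCTION_NAME(x)      my_ ## x ## _utf8mb3_general_ci_impl
  #define MY_MB_WC(cs, pwc, s, e)  my_mb_wc_utf8mb3_quick(pwc, s, e)
  #include "ctype-wildcmp.inl"
  #undef MY_FUNCTION_NAME
  #undef MY_MB_WC

  #define MY_FUNCTION_NAME(x)      my_ ## x ## _utf8mb4_general_ci_impl
  #define MY_MB_WC(cs, pwc, s, e)  my_mb_wc_utf8mb4_quick(pwc, s, e)
  #include "ctype-wildcmp.inl"
  #undef MY_FUNCTION_NAME
  #undef MY_MB_WC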
Under the terms of MDEV-27490 we'll add support for non-BMP identifiers
and upgrade the casefolding information to Unicode version 14.0.0.
In Unicode-14.0.0, conversion to lower and upper case can increase the
octet length of the string, so in-place conversion will no longer be possible.
This patch removes virtual functions performing in-place casefolding:
- my_charset_handler_st::casedn_str()
- my_charset_handler_st::caseup_str()
and fixes the code to use the non-inplace functions instead:
- my_charset_handler_st::casedn()
- my_charset_handler_st::caseup()
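A minimal sketch of the calling-convention change, assuming the handler's
(cs, src, srclen, dst, dstlen) argument order; the buffer sizing at call
sites is up to the caller and must allow for expansion:

  #include <stddef.h>
  #include "m_ctype.h"             /* CHARSET_INFO */

  /* Lower-case "src" into a separate caller-provided buffer instead of
     rewriting it in place; returns the number of bytes written. The
     destination must allow for growth, since Unicode-14.0.0
     casefolding can increase the octet length. */
  static size_t casedn_copy(CHARSET_INFO *cs,
                            const char *src, size_t src_len,
                            char *dst, size_t dst_size)
  {
    return cs->cset->casedn(cs, (char *) src, src_len, dst, dst_size);
  }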
Base 62 uses 0-9, A-Z, and then a-z to represent the numbers 0-61. This
patch increases the range of the string functions to cover this.
Based on ideas and tests in PR #2589, but re-written into the charset
functions.
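For reference, the digit mapping looks like this (an illustrative helper,
not the actual patch code):

  /* Map a base-62 digit to its value:
     '0'..'9' -> 0..9, 'A'..'Z' -> 10..35, 'a'..'z' -> 36..61. */
  static int base62_digit(char c)
  {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'Z') return c - 'A' + 10;
    if (c >= 'a' && c <= 'z') return c - 'a' + 36;
    return -1;                     /* not a base-62 digit */
  }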
Includes a fix by Sergei for a UBSAN complaint:
ctype-simple.c:683:38: runtime error: negation of -9223372036854775808
cannot be represented in type 'long long int'; cast to an unsigned
type to negate this value to itself
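The pattern the message suggests is roughly this (a sketch, not the
actual hunk from ctype-simple.c):

  #include <limits.h>

  /* Negate through an unsigned type: for v == LLONG_MIN the result
     wraps back to the same bit pattern instead of overflowing. */
  static unsigned long long ull_abs(long long v)
  {
    return v < 0 ? 0ULL - (unsigned long long) v
                 : (unsigned long long) v;
  }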
Co-authored-by: Weijun Huang <huangweijun1001@gmail.com>
Co-authored-by: Sergei Golubchik <serg@mariadb.org>
Modify the NS_ZERO state in the JSON number parser to allow
exponential notation with a zero coefficient (e.g. 0E-4).
The NS_ZERO state transition on 'E' was updated to move to the
NS_EX state rather than returning a syntax error. A similar change
was made for the NS_ZE1 (negative zero) starter state.
This allows the accepted number grammar to include cases like:
- 0E4
- -0E-10
which were previously disallowed. Numeric parsing remains
the same for all other states.
Test cases are added to func_json.test to validate parsing for
various exponential numbers starting with zero coefficients.
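A compressed sketch of the transition; the state names follow the
description above, and the function shape is illustrative rather than
copied from the parser:

  enum json_num_state { NS_ZERO, NS_ZE1, NS_EX, NS_ERROR };

  /* In NS_ZERO (a leading '0' was read) an 'E'/'e' now starts an
     exponent instead of being rejected; NS_ZE1 ("-0") was changed the
     same way. Other transitions (e.g. '.') are elided here. */
  static enum json_num_state after_zero(int c)
  {
    if (c == 'E' || c == 'e')
      return NS_EX;                /* previously: syntax error */
    return NS_ERROR;
  }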
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services.
The patch for "MDEV-25440: Indexed CHAR ... broken with NO_PAD collations"
fixed these scenarios from MDEV-26743:
- Basic latin letter vs equal accented letter
- Two letters vs equal (but space padded) expansion
However, this scenario was still broken:
- Basic latin letter (but followed by an ignorable character)
vs equal accented letter
Fix:
When processing a string with trailing ignorable characters for a
NOPAD collation, like:
'<non-ignorable><ignorable><ignorable>'
the string gets virtually converted to:
'<non-ignorable><ignorable><ignorable><space><space><space>...'
After the fix the code works differently in these two cases:
1. <space> fits into the "nchars" limit
2. <space> does not fit into the "nchars" limit
Details:
1. If "nchars" is large enough (4+ in this example),
return weights as follows:
'[weight-for-non-ignorable, 1 char] [weight-for-space-character, 3 chars]'
i.e. the weight for the virtual trailing space character now indicates
that it corresponds to total 3 characters:
- two ignorable characters
- one virtual trailing space character
2. If "nchars" is small (3), then the virtual trailing space character
does not fit into the "nchar" limit, so return 0x00 as weight, e.g.:
'[weight-for-non-ignorable, 1 char] [0x00, 2 chars]'
Adding corresponding MTR tests and unit tests.
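The two cases can be pictured with this toy rendering (illustrative
weight values; this is not the real weight-iterator API):

  typedef struct { unsigned weight; unsigned nchars; } weight_part;

  /* Input: '<non-ignorable><ignorable><ignorable>', NOPAD collation. */
  enum { W_NON_IGNORABLE= 0x120F, W_SPACE= 0x0209 };

  /* Case 1: nchars >= 4 - the virtual trailing space fits, and its
     weight covers 3 characters (two ignorables + the virtual space). */
  static const weight_part case1[]= { { W_NON_IGNORABLE, 1 },
                                      { W_SPACE,         3 } };

  /* Case 2: nchars == 3 - the virtual trailing space does not fit,
     so the two trailing ignorables yield a 0x00 weight. */
  static const weight_part case2[]= { { W_NON_IGNORABLE, 1 },
                                      { 0x00,            2 } };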
as first character in key
Analysis:
While parsing the path, if '-' is encountered as a part of the key,
the state of the parser changes to error. Hence NULL is returned eventually.
Fix:
If '-' is encountered as part of the key, change the state appropriately
to continue scanning the key.
(Variant#3: Allow cross-charset comparisons, use a special
CHARSET_INFO to create lookup keys. Review input addressed.)
Equalities that compare utf8mb{3,4}_general_ci strings, like:
WHERE ... utf8mb3_key_col=utf8mb4_value (MB3-4-CMP)
can now be used to construct ref[const] access and also participate
in multiple-equalities.
This means that utf8mb3_key_col can be used for key-lookups when
compared with an utf8mb4 constant, field or expression using '=' or
'<=>' comparison operators.
This is controlled by optimizer_switch='cset_narrowing=on', which is
OFF by default.
IMPLEMENTATION
Item value comparison in (MB3-4-CMP) is done using utf8mb4_general_ci.
This is valid as any utf8mb3 value is also an utf8mb4 value.
When making index lookup value for utf8mb3_key_col, we do "Charset
Narrowing": characters that are in the Basic Multilingual Plane (=BMP) are
copied as-is, as they can be represented in utf8mb3. Characters that are
outside the BMP cannot be represented in utf8mb3 and are replaced
with U+FFFD, the "Replacement Character".
In utf8mb4_general_ci, the Replacement Character compares as equal to any
character that's not in BMP. Because of this, the constructed lookup value
will find all index records that would be considered equal by the original
condition (MB3-4-CMP).
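The narrowing step itself can be sketched per code point; this is a
simplified model, not the server's actual copy loop:

  #include <stdint.h>

  typedef uint32_t my_wc_t;

  /* Build the utf8mb3 lookup key from a utf8mb4 value: BMP characters
     pass through unchanged; anything above U+FFFF cannot be encoded in
     utf8mb3 and becomes U+FFFD. Since utf8mb4_general_ci makes U+FFFD
     compare equal to every non-BMP character, the narrowed key matches
     all rows the original condition would match. */
  static my_wc_t narrow_to_utf8mb3(my_wc_t wc)
  {
    return wc <= 0xFFFF ? wc : 0xFFFD;
  }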
Approved-by: Monty <monty@mariadb.org>
In the hexlo function, the element type of the array hex_lo_digit is not
explicitly declared as signed char, causing elements with a value of -1
to be converted to 255 on Arm64. The problem occurs because "char" is
unsigned by default with the Arm64 compiler, but signed with the x86
compiler. This problem can be seen in https://godbolt.org/z/rT775xshj
The above issue causes a "use-after-poison" exception in the
my_mb_wc_filename function. The code snippet where the error occurred is
shown below, copied from the following link:
5fc19e7137/strings/ctype-utf8.c (L2728)
2728 if ((byte1= hexlo(byte1)) >= 0 &&
2729 (byte2= hexlo(byte2)) >= 0)
{
2731 int byte3= hexlo(s[3]);
…
}
At line 2729, byte2 can be 0, which indicates the end of the string s.
(1) On x86, hexlo(0) returns -1 and line 2731 is skipped, as expected.
(2) On Arm64, hexlo(0) returns 255 and line 2731 is executed, not as
expected, accessing s[3] after the null character of string s, thus
raising the "use-after-poison" error.
The problem was discovered when executing the main.mysqlcheck test.
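The issue reduces to the following toy program (hypothetical names):

  #include <stdio.h>

  static const char        broken_entry= -1; /* 255 where char is unsigned */
  static const signed char fixed_entry=  -1; /* -1 on every platform */

  int main(void)
  {
    /* On Arm64 the first test prints 1, so a ">= 0" guard fails to
       reject the -1 sentinel; declaring the table as signed char
       makes the guard work everywhere. */
    printf("broken >= 0: %d, fixed >= 0: %d\n",
           broken_entry >= 0, fixed_entry >= 0);
    return 0;
  }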
Signed-off-by: Xiaotong Niu <xiaotong.niu@arm.com>
(Review input addressed)
(Added handling of UPDATE/DELETE and partitioning w/o index)
If the properties of the used collation allow, do the following
equivalent rewrites:
1. UPPER(key_col)=expr -> key_col=expr
expr=UPPER(key_col) -> expr=key_col
(also rewrite both sides of the equality at the same time)
2. UPPER(key_col) IN (constant-list) -> key_col IN (constant-list)
- Mark utf8mb{3,4}_general_ci as collations that allow this.
- Add optimizer_switch='sargable_casefold=ON' to control this.
(ON by default in this patch)
- Cover the rewrite in Optimizer Trace, rewrite name is
"sargable_casefold_removal".
Also fixes:
MDEV-30982 UBSAN: runtime error: null pointer passed as argument 2, which is declared to never be null in my_strnncoll_binary on DELETE
Calling memcmp() with a NULL pointer is undefined behaviour
according to the C standard, even if the length argument is 0.
Adding length == 0 checks before calling memcmp() in:
- my_strnncoll_binary()
- my_strnncoll_8bit_bin()
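The shape of the guard is simply this (a sketch, not the exact patch
hunk):

  #include <string.h>

  /* memcmp() with a NULL pointer is undefined behaviour even when
     len == 0, so only call it for a non-zero length. */
  static int safe_memcmp(const void *a, const void *b, size_t len)
  {
    return len == 0 ? 0 : memcmp(a, b, len);
  }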
Modern software (including text editors, static analysis software,
and web-based code review interfaces) often requires source code files
to be interpretable via a consistent character encoding, with UTF-8 or
ASCII (a strict subset of UTF-8) as the default. Several of the MariaDB
source files contain bytes that are not valid in either the UTF-8 or
ASCII encodings, but instead represent strings encoded in the
ISO-8859-1/Latin-1 or ISO-8859-2/Latin-2 encodings.
These inconsistent encodings may prevent software from correctly
presenting or processing such files. Converting all source files to
valid UTF-8 will ensure correct handling.
Comments written in Czech were replaced with lightly-corrected
translations from Google Translate. Additionally, comments describing
the proper handling of special characters were changed so that the
comments are now purely UTF-8.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Co-authored-by: Andrew Hutchings <andrew@linuxjedi.co.uk>
collations
Fix by Alexey Botchkov
The 'value_len' is calculated incorrectly for multibyte charsets. In the
read_strn() function we get the length of the string including the final
'"' character, so we have to subtract its length from the value_len. And
a length of 1 isn't correct for the ucs2 charset (it must be 2).
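In other words, something like the sketch below, with an assumed minimal
charset descriptor; in the real code the quote's byte length comes from
the charset (1 byte in utf8, 2 in ucs2):

  #include <stddef.h>

  struct toy_charset { size_t mbminlen; };  /* bytes per ASCII char */

  /* read_strn() returned the value including its closing '"'; drop
     the quote's length instead of a hard-coded 1. */
  static size_t value_len_without_quote(const struct toy_charset *cs,
                                        size_t len_with_quote)
  {
    return len_with_quote - cs->mbminlen;
  }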
collations
Analysis:
When we have a negative index, the value in the array_counter[] array is
going to be -1 at some point (because for a json path with a negative
index the initial counter value is -<size_of_array>, and as we move
forward through the array while parsing it and matching the path, this
value increments). Since SKIPPED_STEP_MARK is the maximum uint value, it
gets compared to signed int values in the array, eventually compares
equal to -1, and messes up the path matching.
Fix:
Make SKIPPED_STEP_MARK the maximum of INT32.
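The underlying C pitfall, as a toy program:

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
    int counter= -1;                  /* in-progress negative index */
    unsigned int old_mark= UINT_MAX;  /* old SKIPPED_STEP_MARK */
    int new_mark= INT_MAX;            /* new SKIPPED_STEP_MARK */

    /* The signed counter is converted to unsigned for the first
       comparison, so -1 compares equal to UINT_MAX; INT_MAX cannot
       collide with negative counters. */
    printf("old: %d, new: %d\n",
           counter == old_mark,       /* 1 */
           counter == new_mark);      /* 0 */
    return 0;
  }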