Commit graph

294 commits

Alexander Barkov
e6961ea311 MDEV-20034 Add support for the pre-defined weak SYS_REFCURSOR
This patch adds support for SYS_REFCURSOR (a weakly typed cursor)
for both sql_mode=ORACLE and sql_mode=DEFAULT.

Works as a regular stored routine variable, parameter and return value:

- can be passed as an IN parameter to stored functions and procedures
- can be passed as an INOUT and OUT parameter to stored procedures
- can be returned from a stored function

Note, strongly typed REF CURSOR will be added separately.
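
A minimal usage sketch (illustrative routine and variable names; the exact
syntax follows the description in this message):

DELIMITER $$
CREATE FUNCTION f1() RETURNS SYS_REFCURSOR
BEGIN
  DECLARE c SYS_REFCURSOR;
  OPEN c FOR SELECT 1;
  RETURN c;        -- a SYS_REFCURSOR returned from a stored function
END $$
CREATE PROCEDURE p1()
BEGIN
  DECLARE c SYS_REFCURSOR;
  DECLARE v INT;
  SET c= f1();     -- assign a cursor expression to a SYS_REFCURSOR variable
  FETCH c INTO v;  -- fetch through the weakly typed cursor
  CLOSE c;
  SELECT v;
END $$
DELIMITER ;
CALL p1();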

Note, to maintain dependencies easier, some parts of sql_class.h
and item.h were moved to new header files:

- select_results.h:
  class select_result_sink
  class select_result
  class select_result_interceptor

- sp_cursor.h:
  class sp_cursor_statistics
  class sp_cursor

- sp_rcontext_handler.h
  class Sp_rcontext_handler and its descendants

The implementation consists of the following parts:
- A new class sp_cursor_array deriving from Dynamic_array

- A new class Statement_rcontext which contains data shared
  between sub-statements of a compound statement.
  It has a member m_statement_cursors of the sp_cursor_array data type,
  as well as an open cursor counter. THD inherits from Statement_rcontext.

- A new data type handler Type_handler_sys_refcursor in plugins/type_cursor/
  It is designed to store uint16 references -
  positions of the cursor in THD::m_statement_cursors.

- Type_handler_sys_refcursor suppresses some derived numeric features.
  When a SYS_REFCURSOR variable is used as an integer, an error is raised.

- A new abstract class sp_instr_fetch_cursor. It's needed to share
  the common code between "OPEN cur" (for static cursors) and
  "OPER cur FOR stmt" (for SYS_REFCURSORs).

- New sp_instr classes:
  * sp_instr_copen_by_ref      - OPEN sys_ref_cursor FOR stmt;
  * sp_instr_cfetch_by_ref     - FETCH sys_ref_cursor INTO targets;
  * sp_instr_cclose_by_ref     - CLOSE sys_ref_cursor;
  * sp_instr_destruct_variable - to destruct SYS_REFCURSOR variables when
                                 the execution goes out of the BEGIN..END block
                                 where SYS_REFCURSOR variables are declared.
- New methods in LEX:
  * sp_open_cursor_for_stmt   - handles "OPEN sys_ref_cursor FOR stmt".
  * sp_add_instr_fetch_cursor - "FETCH cur INTO targets" for both
                                static cursors and SYS_REFCURSORs.
  * sp_close - handles "CLOSE cur" both for static cursors and SYS_REFCURSORs.

- Changes in cursor functions to handle both static cursors and SYS_REFCURSORs:
  * Item_func_cursor_isopen
  * Item_func_cursor_found
  * Item_func_cursor_notfound
  * Item_func_cursor_rowcount

- A new system variable @@max_open_cursors - to limit the number
  of cursors (static and SYS_REFCURSORs) opened at the same time.
  Its allowed range is [0-65536], with 50 by default.

- A new virtual method Type_handler::can_return_bool() telling
  if calling item->val_bool() is allowed for Items of this data type,
  or if otherwise the "Illegal parameter for operation" error should be raised
  at fix_fields() time.

- New methods in Sp_rcontext_handler:
  * get_cursor()
  * get_cursor_by_ref()

- A new class Sp_rcontext_handler_statement to handle top-level statement-wide
  cursors, which are shared by all sub-statements.

- A new virtual method expr_event_handler() in classes Item and Field.
  It's needed to close (and make available for a new OPEN)
  unused THD::m_statement_cursors elements which do not have any references
  any more. This can happen at various points in time, e.g.
  * after evaluating the parameters of an SQL routine
  * after assigning a cursor expression into a SYS_REFCURSOR variable
  * when leaving a BEGIN..END block with SYS_REFCURSOR variables
  * after setting OUT/INOUT routine actual parameters from formal
    parameters.
2025-03-18 18:31:28 +01:00
Marko Mäkelä
bb9f010432 Merge 11.4 into 11.8 2025-03-05 20:39:47 +02:00
Alexander Barkov
b7d67ceb5f MDEV-36047 Package body variables are not allowed as FETCH targets
It was not possible to use a package body variable as a
fetch target:

CREATE PACKAGE BODY pkg AS
  vc INT := 0;
  FUNCTION f1 RETURN INT AS
    CURSOR cur IS SELECT 1 AS c FROM DUAL;
  BEGIN
    OPEN cur;
    FETCH cur INTO vc; -- this returned "Undeclared variable: vc" error.
    CLOSE cur;
    RETURN vc;
  END;
END;

FETCH assumed that all fetch targets reside in the same sp_rcontext
instance as the cursor. This patch fixes the problem.
Now a cursor and its fetch target can reside in different sp_rcontext
instances.
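
A hedged sketch of the statements needed around the example above to run it
(sql_mode=ORACLE and a matching package specification are assumed):

SET sql_mode=ORACLE;
DELIMITER $$
CREATE PACKAGE pkg AS
  FUNCTION f1 RETURN INT;
END;
$$
DELIMITER ;
-- ...create the package body exactly as shown above...
SELECT pkg.f1();   -- now returns 1 instead of "Undeclared variable: vc"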

Details:

- Adding a helper class sp_rcontext_addr
  (a combination of Sp_rcontext_handler pointer and an offset in the rcontext)

- Adding a new class sp_fetch_target deriving from sp_rcontext_addr.
  Fetch targets in "FETCH cur INTO target1, target2 ..." are now collected
  into this structure instead of sp_variable.
  sp_variable cannot be used any more to store fetch targets,
  because it does not have a pointer to Sp_rcontext_handler
  (it only has the current rcontext offset).

- Removing sp_instr_set members m_rcontext_handler and m_offset.
  Deriving sp_instr_set from sp_rcontext_addr instead.

- Renaming sp_instr_cfetch member  "List<sp_variable> m_varlist"
  to "List<sp_fetch_target> m_fetch_target_list".

- Fixing LEX::sp_add_cfetch() to return the pointer to the
  created sp_fetch_target instance (instead of returning bool).
  This helps to make the grammar in sql_yacc.yy simpler.

- Renaming LEX::sp_add_cfetch() to LEX::sp_add_instr_cfetch(),
  because the meaning of `if (sp_add_cfetch())` changed to the opposite;
  the rename avoids a wrong automatic merge from earlier versions.

- Changing the "List<sp_variable> *vars" parameter of sp_cursor::fetch
  to have the data type "List<sp_fetch_target> *".

- Changing the data type of "List<sp_variable> &vars" in
  sp_cursor::Select_fetch_into_spvars::send_data_to_variable_list()
  to "List<sp_fetch_target> &".

- Adding THD helper methods get_rcontext() and get_variable().

- Moving the code from sql_yacc.yy into a new LEX method
  LEX::make_fetch_target().

- Simplifying the grammar in sql_yacc.yy using the new LEX method.
  Changing the data type of the bison rule sp_fetch_list from "void"
  to "List<sp_fetch_target> *".
2025-02-09 13:56:19 +04:00
Marko Mäkelä
a5b80531fb Merge 11.4 into 11.6 2024-09-04 10:38:25 +03:00
Marko Mäkelä
44733aa8cf Merge 11.2 into 11.4 2024-08-29 19:10:38 +03:00
Marko Mäkelä
e91a799458 Merge 10.11 into 11.2 2024-08-29 16:02:57 +03:00
Marko Mäkelä
cfcf27c6fe Merge 10.6 into 10.11 2024-08-29 07:47:29 +03:00
Sergei Petrunia
9020baf126 Trivial fix: Make test_if_cheaper_ordering() use actual_rec_per_key()
Discovered this while working on MDEV-34720: test_if_cheaper_ordering()
uses rec_per_key, while the original estimate for the access method
is produced in best_access_path() by using actual_rec_per_key().

Make test_if_cheaper_ordering() also use actual_rec_per_key().
Also make several getter functions "const" to make this compile.
Also adjusted the testcase to handle this (the change was backported from
11.0).
2024-08-25 16:05:00 +03:00
Monty
ecc7961140 MDEV-34571 Add page accessed and pages read from disk to table_stats
Trivial patch, using the handler statistics already collected for
the slow query log.

The test cases were changed mainly to use
'select TABLE_SCHEMA ... from information_schema.table_statistics' instead
of 'show table_statistics', to avoid future changes to test results
if we add more columns to table_statistics.
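
A hedged query example in that style (the two new column names are taken from
the commit subject and may differ in the actual implementation):

SELECT TABLE_SCHEMA, TABLE_NAME, PAGES_ACCESSED, PAGES_READ_FROM_DISK
FROM information_schema.TABLE_STATISTICS
WHERE TABLE_SCHEMA='test';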
2024-07-12 11:28:18 +03:00
Monty
94033fcf83 MDEV-33151 Add more columns to TABLE_STATISTICS and USER STATS
Columns added to TABLE_STATISTICS
- ROWS_INSERTED, ROWS_DELETED, ROWS_UPDATED, KEY_READ_HITS and
  KEY_READ_MISSES.

Columns added to CLIENT_STATISTICS and USER_STATISTICS:
- KEY_READ_HITS and KEY_READ_MISSES.

User visible changes (except new columns):
- CLIENT_STATISTICS and USER_STATISTICS have columns KEY_READ_HITS and
  KEY_READ_MISSES added after column ROWS_UPDATED, before SELECT_COMMANDS.
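
A query sketch using the new columns (column names as listed above; the
information_schema table name is assumed):

SELECT TABLE_SCHEMA, TABLE_NAME,
       ROWS_INSERTED, ROWS_DELETED, ROWS_UPDATED,
       KEY_READ_HITS, KEY_READ_MISSES
FROM information_schema.TABLE_STATISTICS
WHERE TABLE_SCHEMA='test';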

Other changes:
- Do not collect table statistics for system tables like index_stats,
  table_stats, performance_schema, information_schema etc., as the user
  has no control over these and they generate noise in the statistics.
- All row variables that are part of user_stats are moved to
  'struct rows_stats' to make it easy to clear all of them at once.
- ha_read_key_misses added to STATUS_VAR

Notes:
- userstat.result has a changed number of rows for handler_read_key.
  This is because use-stat-tables is now disabled for the test.
2024-05-27 12:39:04 +02:00
Monty
b879b8a5c8 More windows changes for 32 bit unsigned timestamp:
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range

- Changed usage of timeval to my_timeval, as the timeval members on Windows
  are 32 bits long, which causes some compiler issues on Windows.
2024-05-27 12:39:02 +02:00
Monty
b8ffd99cee Extends 64 bit windows to support timestamps up to year 2106.
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range

This is done by changing my_time_t from long to unsigned long.
The effect of this is that on Windows, compiling old clients may
produce warnings if they compare my_time_t with a signed variable.

Other things
- Removed my_time_t from include/*.pp files as it is different on windows
  and linux.
- Changed do_abi_check.cmake to first print abi_check and then the
  conflicting file (this makes it easier to find the cause of the error).
2024-05-27 12:39:02 +02:00
Monty
a00e99acca MDEV-33152 Add QUERIES to INDEX_STATISTICS
Other changes:
- Do not collect index statistics for system tables like index_stats,
  table_stats, performance_schema, information_schema etc., as the user
  has no control over these and they generate noise in the statistics.
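
A query sketch using the new column (the other column names are assumptions
about the existing INDEX_STATISTICS layout):

SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, QUERIES
FROM information_schema.INDEX_STATISTICS
WHERE TABLE_SCHEMA='test';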
2024-05-27 12:39:02 +02:00
Alexander Barkov
fd247cc21f MDEV-31340 Remove MY_COLLATION_HANDLER::strcasecmp()
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-case-table-names=0

- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare
  two strings for equality. For now it calls strnncoll() internally.
  In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison.
  The weight of a character is equal to the code point of its
  upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of
     my_charset_utf8mb3_general1400_as_ci.strcoll()
  is very close to the former
     my_charset_utf8mb3_general_ci.strcasecmp()

  There is only a difference in a couple dozen rare characters, because of:
    - the switch from "tolower" to "toupper" comparison, to make
      utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected
  characters in the MDEV description, and the usage example at the end
  of this message.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
  Unlike utf8mb4_general_ci, it does not treat all BMP characters
  as equal.

- Adding classes representing names of the file based database objects:

    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger

  Their comparison collation depends on the underlying
  file system case sensitivity and on --lower-case-table-names
  and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects,
  whose names have case insensitive comparison style,
  using my_charset_utf8mb3_general1400_as_ci:

  Lex_ident_column
  Lex_ident_sys_var
  Lex_ident_user_var
  Lex_ident_sp_var
  Lex_ident_ps
  Lex_ident_i_s_table
  Lex_ident_window
  Lex_ident_func
  Lex_ident_partition
  Lex_ident_with_element
  Lex_ident_rpl_filter
  Lex_ident_master_info
  Lex_ident_host
  Lex_ident_locale
  Lex_ident_plugin
  Lex_ident_engine
  Lex_ident_server
  Lex_ident_savepoint
  Lex_ident_charset
  engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():

  if (ident1.streq(ident2))
     do_equal();

  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
  in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  to
    ident1.streq(ident2)

- Taking advantage of the c++11 user defined literal operator
  for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
  data types. Use example:

  const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;

  is now a shorter version of:

  const Lex_ident_column primary_key_name=
    Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
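
A usage sketch of the new accent sensitive case insensitive collations
(a utf8mb4 connection character set is assumed):

SET NAMES utf8mb4;
SELECT 'a' = 'A' COLLATE utf8mb4_general1400_as_ci;  -- 1: case insensitive
SELECT 'a' = 'ä' COLLATE utf8mb4_general1400_as_ci;  -- 0: accent sensitive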
2024-04-18 15:22:10 +04:00
Alexander Barkov
351a8eecf0 MDEV-32148 Inefficient WHERE timestamp_column=datetime_const_expr
Changing the way the following conditions are evaluated:

    WHERE timestamp_column=datetime_const_expr

(for all comparison operators: =, <=>, <, >, <=, >=, <> and for NULLIF)

Before the change it was always performed as DATETIME.
That was not efficient, as it involved a per-row TIMESTAMP->DATETIME conversion
for timestamp_column. For example, in the case of the SYSTEM time zone
it involved a localtime_r() call, which is known to be slow.

After the change it's performed as TIMESTAMP in many cases.
This avoids the per-row conversion, as it works the other way around:
datetime_const_expr is converted to TIMESTAMP once before the execution stage.

Note, datetime_const_expr must be inside monotone continuous periods of
the current time zone, i.e. not near these anomalies:
- DST changes (spring forward, fall back)
- leap seconds
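
A sketch of a query shape that benefits from the change (illustrative table
and constant):

CREATE TABLE t1 (ts TIMESTAMP NOT NULL, KEY(ts));
-- The constant is converted to TIMESTAMP once before execution, instead of
-- converting ts to DATETIME for every row:
SELECT * FROM t1 WHERE ts = '2024-06-15 10:00:00';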
2024-01-12 15:24:05 +04:00
Sergei Golubchik
fef31a26f3 Merge branch '11.1' into 11.2 2023-12-20 23:43:05 +01:00
Sergei Golubchik
8c8bce05d2 Merge branch '10.11' into 11.0 2023-12-19 15:53:18 +01:00
Sergei Golubchik
fd0b47f9d6 Merge branch '10.6' into 10.11 2023-12-18 11:19:04 +01:00
Sergei Golubchik
e95bba9c58 Merge branch '10.5' into 10.6 2023-12-17 11:20:43 +01:00
Sergei Golubchik
98a39b0c91 Merge branch '10.4' into 10.5 2023-12-02 01:02:50 +01:00
Oleksandr Byelkin
0427c4739e MariaDB 11.1.3 release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEF39AEP5WyjM2MAMF8WVvJMdM0dgFAmVSLiUACgkQ8WVvJMdM
 0dhj4A/7B2GIx75Mv4IcExE2s4bfR7sOKZzvjWbMHysovMHhsHAV5fHN7dRQojyV
 HxmSY8lxykm/LMoJ8RASmrojRZsgvkJ84z+fLK7is327Vms7fW7ZWc3eqotIgs7I
 m9Dz3+wiexvl6NKeHnafTZtkJOe8MEqZEGPV1e8V4I3SAJQWyQLnRr4si/VmjAMi
 miKuieTuKoZUYSkdNLwicEFHXysgg6/U8367sgMsJe9V3HYSD3pVQJ/nboTL5uZL
 vTbmEPS1pICKPvPu75DdedSdxSASMyXis9/IWtk13NqPPzX16uHtjkhffAuBT3+k
 CUgRggTYAuoF3MjvyspIS3pdC/73PBb1O+w/9vlHPiwSXVn3d48Ay55uvFgM/pVB
 UKLorw+As0oH2N1HWUp/d4Rbvrnjdq5OgzhmMTrWDAtYyrNU9Jw5S1CAp+G/s2dD
 5j+FUPBBnHo5UfxI+EVTqUggm56R+vJTx4H3q82n05bdJTJYNJ+nixvsYuf7hS3f
 oEqJAUizgGI3h6FGPD9bN0HSYGblEeNgAYv1YogfVX/Eq10RriWic9PtxxOxgOmE
 n+UhdH4YTTyaTv0jssWTJVmVNzjjXMvI4aB8A1FkXeIz2iohSziSkJzaBuzNq2QY
 kKHr8XqiyNnckcoRxfoxNPtrWcmiykpHOBFnuyMRWoXPKcr7idc=
 =ShdC
 -----END PGP SIGNATURE-----

Merge tag '11.1' into 11.2

MariaDB 11.1.3 release
2023-11-14 18:28:37 +01:00
Oleksandr Byelkin
48af85db21 Merge branch '10.11' into 11.0 2023-11-08 17:09:44 +01:00
Oleksandr Byelkin
04d9a46c41 Merge branch '10.6' into 10.10 2023-11-08 16:23:30 +01:00
Oleksandr Byelkin
b83c379420 Merge branch '10.5' into 10.6 2023-11-08 15:57:05 +01:00
Oleksandr Byelkin
6cfd2ba397 Merge branch '10.4' into 10.5 2023-11-08 12:59:00 +01:00
Alexander Barkov
2b6d241ee4 MDEV-27744 LPAD in vcol created in ORACLE mode makes table corrupted in non-ORACLE
The crash happened with an indexed virtual column whose
value is evaluated using a function that has a different meaning
in sql_mode='' vs sql_mode=ORACLE:

- DECODE()
- LTRIM()
- RTRIM()
- LPAD()
- RPAD()
- REPLACE()
- SUBSTR()

For example:

CREATE TABLE t1 (
  b VARCHAR(1),
  g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL,
  KEY g(g)
);

So far we had replacement XXX_ORACLE() functions for all mentioned functions,
e.g. SUBSTR_ORACLE() for SUBSTR(). So it was possible to correctly re-parse
SUBSTR_ORACLE() even in sql_mode=''.

But it was not possible to re-parse the MariaDB version of SUBSTR()
after switching to sql_mode=ORACLE. It was erroneously interpreted
as SUBSTR_ORACLE().

As a result, this combination worked fine:

SET sql_mode=ORACLE;
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode='';
INSERT ...

But the other way around it crashed:

SET sql_mode='';
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode=ORACLE;
INSERT ...

At CREATE time, SUBSTR was instantiated as Item_func_substr and printed
in the FRM file as substr(). At re-open time with sql_mode=ORACLE, "substr()"
was erroneously instantiated as Item_func_substr_oracle.

Fix:

The fix proposes a symmetric solution. It provides a way to re-parse reliably
all sql_mode dependent functions to their original CREATE TABLE time meaning,
no matter what the open-time sql_mode is.

We take advantage of the same idea we previously used to resolve sql_mode
dependent data types.

Now all sql_mode dependent functions are printed by SHOW using a schema
qualifier when the current sql_mode differs from the function sql_mode:

SET sql_mode='';
CREATE TABLE t1 ... SUBSTR(a,b,c) ..;
SET sql_mode=ORACLE;
SHOW CREATE TABLE t1;   ->   mariadb_schema.substr(a,b,c)

SET sql_mode=ORACLE;
CREATE TABLE t2 ... SUBSTR(a,b,c) ..;
SET sql_mode='';
SHOW CREATE TABLE t2;   ->   oracle_schema.substr(a,b,c)

Old replacement names like substr_oracle() are still understood for
backward compatibility and used in FRM files (for downgrade compatibility),
but they are not printed by SHOW any more.
2023-11-08 15:01:20 +04:00
Marko Mäkelä
d5e15424d8 Merge 10.6 into 10.10
The MDEV-29693 conflict resolution is from Monty, as well as
a bug fix where ANALYZE TABLE wrongly built histograms for
a single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.

Other things:
- Copied main.log_slow from 10.4 to avoid mtr issue

Disabled test:
- spider/bugfix.mdev_27239 because we started to get
  +Error	1429 Unable to connect to foreign data source: localhost
  -Error	1158 Got an error reading communication packets
- main.delayed
  - Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
    This part is disabled for now as it fails randomly with different
    warnings/errors (no corruption).
2023-10-14 13:36:11 +03:00
Alexander Barkov
534a2bf1c6 MDEV-32275 getting error 'Illegal parameter data types row and bigint for operation '+' ' when using ITERATE in a FOR..DO
An "ITERATE innerLoop" did not work properly inside
a WHILE loop, which itself is inside an outer FOR loop:

outerLoop:
  FOR
   ...
   innerLoop:
    WHILE
      ...
      ITERATE innerLoop;
      ...
    END WHILE;
    ...
  END FOR;

It erroneously generated an integer increment code for the outer FOR loop.
There were two problems:
1. "ITERATE innerLoop" worked like "ITERATE outerLoop"
2. It was always integer increment, even in case of FOR cursor loops.

Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry,
  to put the iteration and bound variables on it.

- Other loop types (LOOP, WHILE, REPEAT) do not generate a dedicated
  stack entry.

  The old code erroneously assumed that sp_pcontext::m_for_loop
  either describes the innermost loop (in case the inner loop is FOR),
  or is empty (in case the inner loop is not FOR).

  But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop:
  it describes the closest FOR loop, even if this FOR loop has nested
  non-FOR loops inside.

  So when we're near the ITERATE statement in the above script,
  sp_pcontext::m_for_loop is not empty - it stores information about
  the FOR loop labeled as "outerLoop:".

Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label,
  to remember the explicit or the auto-generated label corresponding
  to the start of the FOR body. It's used during generation
  of "ITERATE loop_label" code to check if "loop_label" belongs
  to the current FOR loop pointed by sp_pcontext::m_for_loop,
  or belongs to a non-FOR nested loop.

- Adding LEX methods sp_for_loop_intrange_iterate() and
  sp_for_loop_cursor_iterate() to reuse the code between
  methods handling:
  * ITERATE
  * END FOR

- Adding a test of Lex_for_loop::is_for_loop_cursor()
  and generating code for either a cursor fetch or an integer increment.
  Before this change, it always erroneously generated the integer increment
  version.

- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside
  Lex_for_loop_st::init(), to avoid uninitialized members.

- Cleanup: Removing a redundant method:
    Lex_for_loop_st::init(const Lex_for_loop_st &other)
  Using Lex_for_loop_st::operator(const Lex_for_loop_st &other) instead.
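
A self-contained sketch of the scenario that is fixed (illustrative names and
values; sql_mode=DEFAULT is assumed):

DELIMITER $$
CREATE PROCEDURE p1()
BEGIN
  DECLARE total INT DEFAULT 0;
  DECLARE j INT;
  outerLoop:
  FOR i IN 1..3 DO
    SET j= 0;
    innerLoop:
    WHILE j < 3 DO
      SET j= j + 1;
      IF j = 2 THEN
        ITERATE innerLoop;   -- must continue the WHILE loop, not the FOR loop
      END IF;
      SET total= total + 1;
    END WHILE innerLoop;
  END FOR outerLoop;
  SELECT total;              -- 6: two counted inner iterations * 3 outer rounds
END $$
DELIMITER ;
CALL p1();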
2023-10-04 16:06:59 +04:00
Monty
e3b36b8f1b MDEV-31957 Concurrent ALTER and ANALYZE collecting statistics can result in stale statistical data
Example of what causes the problem:
T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields,
    then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames temp table in place of the old table.

Now the statistics from ANALYZE match the old, deleted tables.

Fixed by waiting to delete old statistics until ALTER TABLE is
the only one using the old table, and by ensuring that rename of columns
can handle swapping of column names.

rename_columns_in_stat_table() (formerly rename_column_in_stat_tables())
now takes a list of columns to rename. It uses the following algorithm
to update column_stats so that it can handle circular renames:

- While there are columns to be renamed, and this is the first loop or the
  last rename loop changed something:
  - Loop over all columns to be renamed:
    - Change the column name in column_stat.
      - If this fails because of a duplicate key:
        - If this is the first change attempt for this column:
          - Change the column name to a temporary column name.
          - If there was a conflicting row, replace it with the current row.
      - Else:
        - Remove the entry from the column list.

- Loop over all remaining columns in the list:
  - Remove the conflicting row.
  - Change the column from the temporary name to the final name in column_stat.

Other things:
- Don't flush tables for every operation. Only flush when all updates
  are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
  - Fixed that we do not collect statistics for hidden hash columns
    used by UNIQUE constraint on long values.
  - Fixed that we do not collect statistics for blob columns referred by
    generated virtual columns. This was achieved by storing the fields for
    which we want to have statistics in table->has_value_set instead of
    in table->read_set.
- Rename of indexes was not handled for persistent statistics.
  - This is now handled similarly to rename of columns. Renamed indexes
    are now stored in 'rename_stat_indexes' and handled in
    Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may, instead of creating a new index, rename
  an existing generated foreign key index. This was not reflected in
  the index_stats table because this was handled in
  mysql_prepare_create_table() instead of in the mysql_alter() code.
  Fixed by adding a call in mysql_prepare_create_table() to drop the
  changed index.
  I also had to change the code that 'marked the index' to be ignored
  with code that would not destroy the original index name.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
2023-10-03 08:25:30 +03:00
Alexander Barkov
75f25e4ca7 MDEV-30164 System variable for default collations
This patch adds a way to override default collations
(or "character set collations") for desired character sets.

The SQL standard says:
> Each collation known in an SQL-environment is applicable to one
> or more character sets, and for each character set, one or more
> collations are applicable to it, one of which is associated with
> it as its character set collation.

In MariaDB, character set collations have been hard-coded so far,
e.g. utf8mb4_general_ci has been the hard-coded character set collation
for utf8mb4.

This patch allows overriding character set collations (globally per server,
or per session), so, for example, uca1400_ai_ci can be set as the
character set collation for Unicode character sets
(instead of the compiled-in xxx_general_ci).

The array of overridden character set collations is stored in a new
(session and global) system variable @@character_set_collations and
can be set as a comma separated list of charset=collation pairs, e.g.:

SET @@character_set_collations='utf8mb3=uca1400_ai_ci,utf8mb4=uca1400_ai_ci';

The variable is empty by default, which means using the hard-coded
character set collations (e.g. utf8mb4_general_ci for utf8mb4).

The variable can also be set globally by passing it on the server startup
command line, and/or in my.cnf.
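
A usage sketch (the resulting collation name utf8mb4_uca1400_ai_ci is an
assumption):

SET @@character_set_collations='utf8mb4=uca1400_ai_ci';
CREATE TABLE t1 (a TEXT) CHARACTER SET utf8mb4;
SHOW CREATE TABLE t1;   -- the implicit collation is now utf8mb4_uca1400_ai_ci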
2023-07-17 14:56:17 +04:00
Yuchen Pei
9b431d714f MDEV-26137 Improve import tablespace workflow.
Allow ALTER TABLE ... IMPORT TABLESPACE without first creating the table
and then discarding its tablespace.

That is, assuming we want to import table t1 to t2, instead of

CREATE TABLE t2 LIKE t1;
ALTER TABLE t2 DISCARD TABLESPACE;
FLUSH TABLES t1 FOR EXPORT;
--copy_file $MYSQLD_DATADIR/test/t1.cfg $MYSQLD_DATADIR/test/t2.cfg
--copy_file $MYSQLD_DATADIR/test/t1.ibd $MYSQLD_DATADIR/test/t2.ibd
UNLOCK TABLES;
ALTER TABLE t2 IMPORT TABLESPACE;

We can simply do

FLUSH TABLES t1 FOR EXPORT;
--copy_file $MYSQLD_DATADIR/test/t1.cfg $MYSQLD_DATADIR/test/t2.cfg
--copy_file $MYSQLD_DATADIR/test/t1.frm $MYSQLD_DATADIR/test/t2.frm
--copy_file $MYSQLD_DATADIR/test/t1.ibd $MYSQLD_DATADIR/test/t2.ibd
UNLOCK TABLES;
ALTER TABLE t2 IMPORT TABLESPACE;

We achieve this by creating a "stub" table in the second scenario
while opening the table, where t2 does not exist but needs to import
from t1. The "stub" table is similar to a table that is created but
then instructed to discard its tablespace.

We include tests with various row formats, encryption, with indexes
and auto-increment.
2023-07-04 17:56:27 +10:00
Sergei Golubchik
0005f2f06c Merge branch 'bb-10.11-release' into bb-11.0-release 2023-06-05 19:27:00 +02:00
Oleksandr Byelkin
cf56f2d7e8 Merge branch '10.8' into 10.9 2023-05-03 13:27:59 +02:00
Oleksandr Byelkin
043d69bbcc Merge branch '10.5' into 10.6 2023-05-03 09:51:25 +02:00
Oleksandr Byelkin
edf8ce5b97 Merge branch 'bb-10.4-release' into bb-10.5-release 2023-05-02 13:54:54 +02:00
Alexander Barkov
ddcc9d2281 MDEV-31153 New methods Schema::make_item_func_* for REPLACE, SUBSTRING, TRIM
Adding virtual methods to class Schema:

  make_item_func_replace()
  make_item_func_substr()
  make_item_func_trim()

This is a non-functional preparatory change for MDEV-27744.
2023-04-29 08:06:46 +04:00
Marko Mäkelä
2e431ff7e6 Merge 10.11 into 11.0 2023-02-16 13:34:45 +02:00
Marko Mäkelä
0d55914d96 Merge 10.8 into 10.9 2023-02-16 10:25:34 +02:00
Marko Mäkelä
6aec87544c Merge 10.5 into 10.6 2023-02-10 13:03:01 +02:00
Marko Mäkelä
c41c79650a Merge 10.4 into 10.5 2023-02-10 12:02:11 +02:00
Vicențiu Ciorbaru
08c852026d Apply clang-tidy to remove empty constructors / destructors
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .

Code style changes have been done on top. The result of this change
leads to the following improvements:

1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
  ~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.

2. The compiler can better understand the intent of the code, which leads
   to more optimization possibilities. Additionally, it enables detecting
   unused variables that had an empty default constructor but were not
   marked so explicitly.

   A particular change was required following this patch in sql/opt_range.cc:

   result_keys, an unused variable of the template class Bitmap, now correctly
   issues unused variable warnings.

   Setting Bitmap template class constructor to default allows the compiler
   to identify that there are no side-effects when instantiating the class.
   Previously the compiler could not issue the warning as it assumed Bitmap
   class (being a template) would not be performing a NO-OP for its default
   constructor. This prevented the "unused variable warning".
2023-02-09 16:09:08 +02:00
Monty
ed0a723566 Cache file->index_flags(index, 0, 1) in table->key_info[index].index_flags
The reason for this is that we call file->index_flags(index, 0, 1)
multiple times in best_access_path() when optimizing a table.
For example, in InnoDB, the call is not trivial (4 if's and 2 assignments).
Now the function is inlined and is just a memory reference.

Other things:
- handler::is_clustering_key() and pk_is_clustering_key() are now inline.
- Added TABLE::can_use_rowid_filter() to simplify some code.
- Test if we should use a rowid_filter only if can_use_rowid_filter() is
  true.
- Added TABLE::is_clustering_key() to avoid a memory reference.
- Simplify some code using the fact that HA_KEYREAD_ONLY is true implies
  that HA_CLUSTERED_INDEX is false.
- Added DBUG_ASSERT to TABLE::best_range_rowid_filter() to ensure we
  do not call it with a clustering key.
- Reorganized elements in struct st_key to get better memory alignment.
- Updated ha_innobase::index_flags() to not have
  HA_DO_RANGE_FILTER_PUSHDOWN for the clustered index.
2023-02-03 14:38:26 +03:00
Sergei Golubchik
0537ce4e9f remove LEX_USER->is_public
it's now redundant
2022-11-02 00:31:27 +01:00
Oleksandr Byelkin
b0325bd6d6 MDEV-5215 Granted to PUBLIC 2022-11-01 22:15:14 +01:00
Sergei Golubchik
2bd41fc5bf Revert MDEV-25292 Atomic CREATE OR REPLACE TABLE
Specifically:

Revert "MDEV-29664 Assertion `!n_mysql_tables_in_use' failed in innobase_close_connection"
This reverts commit ba875e9396.

Revert "MDEV-29620 Assertion `next_insert_id == 0' failed in handler::ha_external_lock"
This reverts commit aa08a7442a.

Revert "MDEV-29628 Memory leak after CREATE OR REPLACE with foreign key"
This reverts commit c579d66ba6.

Revert "MDEV-29609 create_not_windows test fails with different result"
This reverts commit cb583b2f1b.

Revert "MDEV-29544 SIGSEGV in HA_CREATE_INFO::finalize_locked_tables"
This reverts commit dcd66c3814.

Revert "MDEV-28933 CREATE OR REPLACE fails to recreate same constraint name"
This reverts commit cf6c517632.

Revert "MDEV-28933 Moved RENAME_CONSTRAINT_IDS to include/sql_funcs.h"
This reverts commit f1e1c1335b.

Revert "MDEV-28956 Locking is broken if CREATE OR REPLACE fails under LOCK TABLES"
This reverts commit a228ec80e3.

Revert "MDEV-25292 gcol.gcol_bugfixes --ps fix"
This reverts commit 24fff8267d.

Revert "MDEV-25292 Disable atomic replace for slave-generated or-replace"
This reverts commit 2af15914cb.

Revert "MDEV-25292 backup_log improved"
This reverts commit 34398a20b5.

Revert "MDEV-25292 Atomic CREATE OR REPLACE TABLE"
This reverts commit 93c8252f02.

Revert "MDEV-25292 Table_name class for (db, table_name, alias)"
This reverts commit d145dda9c7.

Revert "MDEV-25292 ha_table_exists() cleanup and improvement"
This reverts commit 409b8a86de.

Revert "MDEV-25292 Cleanups"
This reverts commit 595dad83ad.

Revert "MDEV-25292 Refactoring: moved select_field_count into Alter_info."
This reverts commit f02af1d229.
2022-10-27 23:13:41 +02:00
Aleksey Midenkov
93c8252f02 MDEV-25292 Atomic CREATE OR REPLACE TABLE
Atomic CREATE OR REPLACE allows keeping the old table intact if the
command fails or the server crashes. That is done by creating
a table with a temporary name and filling it with the data
(for CREATE OR REPLACE .. SELECT), then renaming the original table
to another temporary (backup) name and renaming the replacement table
to the original table name. The backup table is kept until the last chance of
failure, and if that happens, the replacement table is thrown away and the
backup recovered. When the command is complete and logged, the backup
table is deleted.
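
A behaviour sketch (illustrative tables): with atomic CREATE OR REPLACE, a
failure or crash during the second statement leaves the original t1 in place
instead of leaving no table behind:

CREATE TABLE t1 (a INT);
CREATE OR REPLACE TABLE t1 (b INT PRIMARY KEY) SELECT 1 AS b;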

Atomic replace algorithm

  Two DDL chains are used for CREATE OR REPLACE:
  ddl_log_state_create (C) and ddl_log_state_rm (D).

  1. (C) Log CREATE_TABLE_ACTION of TMP table (drops TMP table);
  2. Create new table as TMP;
  3. Do everything with TMP (like insert data);

  finalize_atomic_replace():
  4. Link chains: (D) is executed only if (C) is closed;
  5. (D) Log DROP_ACTION of BACKUP;
  6. (C) Log RENAME_TABLE_ACTION from ORIG to BACKUP (replays BACKUP -> ORIG);
  7. Rename ORIG to BACKUP;
  8. (C) Log CREATE_TABLE_ACTION of ORIG (drops ORIG);
  9. Rename TMP to ORIG;

  finalize_ddl() in case of success:
  10. Close (C);
  11. Replay (D): BACKUP is dropped.

  finalize_ddl() in case of error:
  10. Close (D);
  11. Replay (C):
    1) ORIG is dropped (only after finalize_atomic_replace());
    2) BACKUP renamed to ORIG (only after finalize_atomic_replace());
    3) drop TMP.

  If a crash happens, (C) or (D) is replayed in reverse order. (C) is
  replayed if the crash happens before it is closed, otherwise (D) is
  replayed.

Temporary table for CREATE OR REPLACE

  Before dropping "old" table, CREATE OR REPLACE creates "tmp" table.
  ddl_log_state_create holds the drop of the "tmp" table.  When
  everything is OK (data is inserted, "tmp" is ready) ddl_log_state_rm
  is written to replace "old" with "tmp". Until ddl_log_state_create
  is closed ddl_log_state_rm is not executed.

  After the binlogging is done ddl_log_state_create is closed. At that
  point ddl_log_state_rm is executed and "tmp" is replaced with
  "old". That is: final rename is done by the DDL log.

  With that important role of DDL log for CREATE OR REPLACE operation
  replay of ddl_log_state_rm must fail at the first hit error and
  print the error message if possible. F.ex. foreign key error is
  discovered at this phase: InnoDB rejects to drop the "old" table and
  returns corresponding foreign key error code.

Additional notes

  - CREATE TABLE without REPLACE is not affected by this commit.

  - Engines having HTON_EXPENSIVE_RENAME flag set are not affected by
    this commit.

  - CREATE TABLE .. SELECT XID usage is fixed and now there is no need
    to log DROP TABLE via DDL_CREATE_TABLE_PHASE_LOG (see comments in
    do_postlock()). XID is now correctly updated so it disables
    DDL_LOG_DROP_TABLE_ACTION. Note that binary log is flushed at the
    final stage when the table is ready. So if we have XID in the
    binary log we don't need to drop the table.

  - Three variations of CREATE OR REPLACE handled:

    1. CREATE OR REPLACE TABLE t1 (..);
    2. CREATE OR REPLACE TABLE t1 LIKE t2;
    3. CREATE OR REPLACE TABLE t1 SELECT ..;

  - Test case uses 6 combinations for engines (aria, aria_notrans,
    myisam, ib, lock_tables, expensive_rename) and 2 combinations for
    binlog types (row, stmt). Combinations help to check differences
    between the results. Error failures are tested for the above three
    variations.

  - expensive_rename tests CREATE OR REPLACE without atomic
    replace. The effect should be the same as with the old behaviour
    before this commit.

  - Triggers mechanism is unaffected by this change. This is tested in
    create_replace.test.

  - LOCK TABLES is affected. Lock restoration must be done after "rm"
    chain is replayed.

  - Moved ddl_log_complete() from send_eof() to finalize_ddl(). This
    checkpoint was not executed before for normal CREATE TABLE but is
    executed now.

  - CREATE TABLE will now rollback also if writing to the binary
    logging failed. See rpl_gtid_strict.test

Rename and drop via DDL log

  We replay ddl_log_state_rm to drop the old table and rename the
  temporary table. In that case we must throw the correct error
  message if ddl_log_revert() fails (f.ex. on FK error).

  If table is deleted earlier and not via DDL log and the crash
  happened, the create chain is not closed. Linked drop chain is not
  executed and the new table is not installed. But the old table is
  already deleted.

ddl_log.cc changes

  Now we can place action before DDL_LOG_DROP_INIT_ACTION and it will
  be replayed after DDL_LOG_DROP_TABLE_ACTION.

  report_error parameter for ddl_log_revert() allows to fail at first
  error and print the error message if possible.
  ddl_log_execute_action() now can print error message.

  Since we now can handle errors from ddl_log_execute_action() (in
  case of non-recovery execution) unconditional setting "error= TRUE"
  is wrong (it was wrong anyway because it was overwritten at the end
  of the function).

On XID usage

  Like with all other atomic DDL operations XID is used to avoid
  inconsistency between master and slave in the case of a crash after
  binary log is written and before ddl_log_state_create is closed. On
  recovery XIDs are taken from binary log and corresponding DDL log
  events get disabled.  That is done by
  ddl_log_close_binlogged_events().

On linking two chains together

  Chains are executed in the ascending order of entry_pos of execute
  entries. But entry_pos assignment order is undefined: it may assign
  bigger number for the first chain and then smaller number for the
  second chain. So the execution order in that case will be reverse:
  second chain will be executed first.

  To avoid that we link one chain to another. While the base chain
  (ddl_log_state_create) is active the secondary chain
  (ddl_log_state_rm) is not executed. That is: only one chain can be
  executed in two linked chains.

  The interface ddl_log_link_chains() was done in "MDEV-22166
  ddl_log_write_execute_entry() extension".

More on CREATE OR REPLACE .. SELECT

  We use create_and_open_tmp_table() like in ALTER TABLE to create
  temporary TABLE object (tmp_table is (NON_)TRANSACTIONAL_TMP_TABLE).

  After we created such TABLE object we use create_info->tmp_table()
  instead of table->s->tmp_table when we need to check for
  parser-requested tmp-table.

  External locking is required for temporary table created by
  create_and_open_tmp_table(). F.ex. that disables logging for Aria
  transactional tables and without that (when no mysql_lock_tables()
  is done) it cannot work correctly.

  For making external lock the patch requires Aria table to work in
  non-transactional mode. That is usually done by
  ha_enable_transaction(false). But we cannot disable transaction
  completely because: 1. binlog rollback removes pending row events
  (binlog_remove_pending_rows_event()). The row events are added
  during CREATE .. SELECT data insertion phase. 2. replication slave
  highly depends on transaction and cannot work without it.

  So we put temporary Aria table into non-transactional mode with
  "thd->transaction->on hack". See comment for on_save variable.

  Note that Aria table has internal_table mode. But we cannot use it
  because:

  if (!internal_table)
  {
    mysql_mutex_lock(&THR_LOCK_myisam);
    old_info= test_if_reopen(name_buff);
  }

  For internal_table test_if_reopen() is not called and we get a new
  MARIA_SHARE for each file handler. In that case duplicate errors are
  missed because insert and lookup in CREATE .. SELECT is done via two
  different handlers (see create_lookup_handler()).

  For temporary table before dropping TABLE_SHARE by
  drop_temporary_table() we must do ha_reset(). ha_reset() releases
  storage share. Without that the share is kept and the second CREATE
  OR REPLACE .. SELECT fails with:

    HA_ERR_TABLE_EXIST (156): MyISAM table '#sql-create-b5377-4-t2' is
    in use (most likely by a MERGE table). Try FLUSH TABLES.

    HA_EXTRA_PREPARE_FOR_DROP also removes MYISAM_SHARE, but that is
    not needed as ha_reset() does the job.

  ha_reset() is usually done by
  mark_tmp_table_as_free_for_reuse(). But we don't need that mechanism
  for our temporary table.

Atomic_info in HA_CREATE_INFO

  Many functions in CREATE TABLE pass the same parameters. These
  parameters are part of table creation info and should be in
  HA_CREATE_INFO (or whatever). Passing parameters via single
  structure is much easier for adding new data and
  refactoring.

InnoDB changes (revised by Marko Mäkelä)

  row_rename_table_for_mysql(): Specify the treatment of FOREIGN KEY
  constraints in a 4-valued enum parameter. In cases where FOREIGN KEY
  constraints cannot exist (partitioned tables, or internal tables of
  FULLTEXT INDEX), we can use the mode RENAME_IGNORE_FK.
  The mode RENAME_REBUILD is for any DDL operation that rebuilds the
  table inside InnoDB, such as TRUNCATE and native ALTER TABLE
  (or OPTIMIZE TABLE). The mode RENAME_ALTER_COPY is used solely
  during non-native ALTER TABLE in ha_innobase::rename_table().
  Normal ha_innobase::rename_table() will use the mode RENAME_FK.

  CREATE OR REPLACE will rename the old table (if one exists) along
  with its FOREIGN KEY constraints into a temporary name. The replacement
  table will be initially created with another temporary name.
  Unlike in ALTER TABLE, all FOREIGN KEY constraints must be renamed
  and not inherited as part of these operations, using the mode RENAME_FK.

  dict_get_referenced_table(): Let the callers convert names when needed.

  create_table_info_t::create_foreign_keys(): CREATE OR REPLACE creates
  the replacement table with a temporary name, so for
  self-references foreign->referenced_table will be a table with a
  temporary name and charset conversion must be skipped for it.

Reviewed by:

  Michael Widenius <monty@mariadb.org>
2022-08-31 11:55:04 +03:00
Alexander Barkov
e9adc3959e A cleanup for MDEV-27896 Wrong result upon COLLATE latin1_bin CHARACTER SET latin1 on the table or the database level
Changing the error messages in a statement like this:

CREATE DATABASE db1
         COLLATE utf8mb4_bin
         CHARACTER SET utf8mb4
         CHARACTER SET latin1;

from
  COLLATION 'utf8mb4_bin' is not valid for CHARACTER SET 'latin1'

to a more expected:

  Conflicting declarations: 'CHARACTER SET utf8mb4' and 'CHARACTER SET latin1'

In order to do this:
- Adding a new type TYPE_CHARACTER_SET_COLLATE_EXACT into
  Lex_exact_charset_extended_collation_attrs_st

- Removing m_had_charset_exact from its descendant class
  Lex_extended_charset_extended_collation_attrs_st

Additional cleanup:
- Changing methods in Lex_exact_charset_extended_collation_attrs_st
  set_charset(), set_charset_collate_default(), set_charset_collate_binary()
  to get Lex_exact_charset instead of CHARSET_INFO as a parameter,
  to guarantee that the argument is only CHARACTER SET and does not have
  any COLLATE clauses yet. This change is not directly related to
  the error message change.
2022-05-25 11:16:38 +04:00
Alexander Barkov
e7f635e2d2 Step#2 MDEV-27896 Wrong result upon COLLATE latin1_bin CHARACTER SET latin1 on the table or the database level
- Renaming Lex_charset_collation_st to
  Lex_exact_charset_extended_collation_attrs_st

- Renaming Lex_explicit_charset_opt_collate to
  Lex_exact_charset_opt_extended_collate

- Renaming their methods charset_collation() to charset_info(),
  so the name clearly tells that it returns CHARSET_INFO.
  Soon we'll have new classes (e.g. Lex_exact_collation) and
  methods returning Lex_exact_collation. So the old name would be
  confusing about the return type.
2022-05-23 11:30:20 +04:00
Alexander Barkov
64a5fab00e Step#1 MDEV-27896 Wrong result upon COLLATE latin1_bin CHARACTER SET latin1 on the table or the database level
- Adding data type aliases:
  using Lex_column_charset_collation_attrs_st = Lex_charset_collation_st;
  using Lex_column_charset_collation_attrs = Lex_charset_collation;

  and using them all around the code (except lex_charset.*)
  instead of the original names.

- Renaming Lex_field_type_st::lex_charset_collation()
  to charset_collation_attrs()

- Renaming Column_definition::set_lex_charset_collation()
  to set_charset_collation_attrs()

- Renaming Column_definition::lex_charset_collation()
  to charset_collation_attrs()

Rationale:

The name "Lex_charset_collation" was a not very good name.
It does not tell details about its properties:
1. if the charset is optional (yes)
2. if the collation is optional (yes)
3. if the charset can be exact (yes) or context (no)
4. if the collation can be: exact (yes) or context (yes)
5. if the clauses can be repeated multiple times (yes)

We'll need a few new data types soon with different properties.
For example, to fix MDEV-27896 and MDEV-27782, we'll need a new
data type which is very like Lex_charset_collation, but additionally
supports CHARACTER SET DEFAULT (which is allowed on table and database level,
but is not allowed on the column level yet), i.e. with:
  "the charset can be exact (yes) or context (yes)" in N3.

So we'll have to rename Lex_charset_collation to something else,
e.g.: Lex_exact_charset_extended_collation_attrs,
and add a new data type:
e.g. Lex_extended_charset_extended_collation_attrs

Also, we'll possibly allow CHARACTER SET DEFAULT at the column level for
consistency with other places. So the storage on the column level can change:
- from Lex_exact_charset_extended_collation_attrs
- to   Lex_extended_charset_extended_collation_attrs

Adding the aliases introduces a convenient abstraction against
upcoming renames and c++ data type changes.
2022-05-23 11:30:04 +04:00
Marko Mäkelä
504a3b32f6 Merge 10.8 into 10.9 2022-04-28 15:54:03 +03:00