Commit graph

4019 commits

Author SHA1 Message Date
Sergei Golubchik
445198c10e post-fixes for rename 2024-11-05 14:00:50 -08:00
Sergey Vojtovich
97e112fb82 VECTOR indexes support for RENAME TABLE
Rename high-level indexes along with a table.
2024-11-05 14:00:49 -08:00
Sergei Golubchik
ebcbed6d74 post-fixes for TRUNCATE
* fix the truncate-by-handler variant, used by InnoDB
* test that insert works after truncate, meaning graph table was emptied
* test that the vector index size is zero after truncate in MyISAM
2024-11-05 14:00:49 -08:00
Sergey Vojtovich
70575defb7 Fixed TRUNCATE TABLE against VECTOR indexes
This patch fixes only the TRUNCATE-by-recreate variant; there seems to be no
reasonable engine that uses the TRUNCATE-by-handler method for testing.

Reset index_cinfo so that mi_create is not confused by garbage passed via
index_file_name and sets MY_DELETE_OLD flag.

Review question: can we add a test case to make sure the VECTOR index is
indeed empty?
2024-11-05 14:00:49 -08:00
Sergey Vojtovich
91a24ddc5d Test insert ... select with vector index 2024-11-05 14:00:49 -08:00
Sergey Vojtovich
4aa1968b89 Disable VECTOR indexes with partitioned tables 2024-11-05 14:00:49 -08:00
Sergey Vojtovich
7c16bba71d CREATE TABLE ... LIKE loses VECTOR index 2024-11-05 14:00:49 -08:00
Vicențiu Ciorbaru
eec1339f5d MDEV-32886 Vec_FromText and Vec_ToText
This commit introduces two utility functions meant to make working with
vectors simpler.

Vec_ToText converts a binary vector into a json array of numbers
(floats).
Vec_FromText takes in a json array of numbers and converts it into a
little-endian IEEE float sequence of bytes (4 bytes per float).
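
A hedged usage sketch (illustrative values; the HEX() wrapper is only there
to show the binary result):

  SELECT HEX(Vec_FromText('[1, 2, 3]'));
  -- 12 bytes: three little-endian IEEE floats, 4 bytes each
  SELECT Vec_ToText(Vec_FromText('[1, 2, 3]'));
  -- back to a JSON array of floats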
2024-11-05 14:00:49 -08:00
Sergei Golubchik
f44989ff0f UPDATE/DELETE post-fixes 2024-11-05 14:00:49 -08:00
Hugo Wen
0e2b9e7621 MDEV-33408 Initial support for vector DELETE and UPDATE
When the source row is deleted, mark the corresponding node in the HNSW
index by setting `tref` to null. An index is added on `tref` in the
secondary table for faster lookup of the to-be-marked nodes.

The nodes marked as deleted will still be used for search, but will not
be included in the final query results.

Since skipping deleted nodes during graph traversal and excluding them from
newly inserted nodes' neighbor lists could impact performance, we now only
skip these nodes in the search results.

- for some reason the bitmap is not set for hlindex during the delete so
  I had to temporarily comment out one line

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
2024-11-05 14:00:49 -08:00
Sergei Golubchik
049d839350 mhnsw: inter-statement shared cache
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes
* nodes are stored in an array, not List, to better utilize bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes
  (to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set
  are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache,
  nodes are partitioned into 8 mutexes (8 is chosen arbitrarily, might
  need tuning)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the
  graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an
  INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list in
  thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes
  from the secondary cache are invalidated in the shared cache
  while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved
  in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed
  when it reaches the threshold
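
A hedged sketch of adjusting the cache threshold (global scope and the byte
unit are assumptions; the value is illustrative):

  SET GLOBAL mhnsw_cache_size = 1024 * 1024 * 1024;
  SELECT @@mhnsw_cache_size;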
2024-11-05 14:00:49 -08:00
Sergei Golubchik
5c2b7c6e7f mhnsw: configurable parameters
1. introduce alpha. the value of 1.1 is optimal, so hard-code it.

2. hard-code ef_construction=10, best by test

3. rename hnsw_max_connection_per_layer to mhnsw_max_edges_per_node
   (max_connection is rather ambiguous in MariaDB) and add a help text

4. rename hnsw_ef_search to mhnsw_min_limit and add a help text
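
A hedged sketch of using the renamed variables (session scope per the
"Introduce session variables to manage HNSW index parameters" commit;
the values are illustrative):

  SET SESSION mhnsw_max_edges_per_node = 6;
  SET SESSION mhnsw_min_limit = 20;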
2024-11-05 14:00:49 -08:00
Sergei Golubchik
25b4000290 InnoDB support for hlindexes and mhnsw
* mhnsw:
  * use the primary key, innodb loves it (and the index cannot have dupes anyway)
    * MyISAM is ok with that, performance-wise
  * must be ha_rnd_init(0) because we aren't going to scan
    * MyISAM resets the position on ha_rnd_init(0) so query it before
    * oh, and use the correct handler, just in case
  * HA_ERR_RECORD_IS_THE_SAME is no error
* innodb:
  * return ref_length on create
  * don't assume table->pos_in_table_list is set
  * ok, assume away, but only for system versioned tables
* set alter_info on create (InnoDB needs to check for FKs)
* pair external_lock/external_unlock correctly
2024-11-05 14:00:49 -08:00
Sergei Golubchik
3ff7f04fd4 misc changes
* sysvars should be REQUIRED_ARG
* fix a mix of US and UK spelling (use US)
* use consistent naming
* work if VEC_DISTANCE arguments are in the swapped order (const, col)
* work if VEC_DISTANCE argument is NULL/invalid or wrong length
* abort INSERT if the value is invalid or wrong length
* store the "number of neighbors" in a blob in endianness-independent way
* use field->store(longlong, bool) not field->store(double)
* a lot more error checking everywhere
* cleanup after errors
* simplify calling conventions, remove reinterpret_cast's
* todo/XXX comments
* whitespaces
* use float consistently

memory management is still totally PoC quality
2024-11-05 14:00:48 -08:00
Vicențiu Ciorbaru
88839e71a3 Initial HNSW implementation
This commit includes the work done in collaboration with Hugo Wen from
Amazon:

    MDEV-33408 Alter HNSW graph storage and fix memory leak

    This commit changes the way HNSW graph information is stored in the
    second table. Instead of storing connections as separate records, it now
    stores neighbors for each node, leading to significant performance
    improvements and storage savings.

    Compared with the previous approach, the insert speed is 5 times faster,
    search speed improves by 23%, and storage usage is reduced by 73%, based
    on ann-benchmark tests with random-xs-20-euclidean and
    random-s-100-euclidean datasets.

    Additionally, in previous code, vector objects were not released after
    use, resulting in excessive memory consumption (over 20GB for building
    the index with 90,000 records), preventing tests with large datasets.
    Now ensure that vectors are released appropriately during the insert and
    search functions. Note there are still some vectors that need to be
    cleaned up after search query completion. Needs to be addressed in a
    future commit.

    All new code of the whole pull request, including one or several files
    that are either new files or modified ones, are contributed under the
    BSD-new license. I am contributing on behalf of my employer Amazon Web
    Services, Inc.

As well as the commit:

    Introduce session variables to manage HNSW index parameters

    Three variables:

    hnsw_max_connection_per_layer
    hnsw_ef_constructor
    hnsw_ef_search

    ann-benchmark tool is also updated to support these variables in commit
    https://github.com/HugoWenTD/ann-benchmarks/commit/e09784e for branch
    https://github.com/HugoWenTD/ann-benchmarks/tree/mariadb-configurable

    All new code of the whole pull request, including one or several files
    that are either new files or modified ones, are contributed under the
    BSD-new license. I am contributing on behalf of my employer Amazon Web
    Services, Inc.

Co-authored-by: Hugo Wen <wenhug@amazon.com>
2024-11-05 14:00:48 -08:00
Sergei Golubchik
d6add9a03d initial support for vector indexes
MDEV-33407 Parser support for vector indexes

The syntax is

  create table t1 (... vector index (v) ...);

limitation:
* v is a binary string and NOT NULL
* only one vector index per table
* temporary tables are not supported

MDEV-33404 Engine-independent indexes: subtable method

added support for so-called "high-level indexes"; they are not visible
to the storage engine and are implemented on the SQL level. For every
such index in a table, say t1, the server implicitly creates a second
table named like t1#i#05 (where "05" is the index number in t1).
This table has a fixed structure, no frm, is not accessible directly,
doesn't go into the table cache, and needs no MDLs.

MDEV-33406 basic optimizer support for k-NN searches

for a query like SELECT ... ORDER BY func(), the optimizer will use
item_func->part_of_sortkey() to decide which keys can be used
to resolve the ORDER BY.
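
A hedged sketch of the table and k-NN query shape this enables (the blob
column type, the vec_distance call, and the hex literal encoding [1,2,3] as
little-endian floats are illustrative assumptions):

  create table t1 (id int primary key, v blob not null, vector index (v));
  select id from t1
  order by vec_distance(v, x'0000803F0000004000004040')
  limit 10;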
2024-11-05 14:00:48 -08:00
Sergei Golubchik
aa09cb3b11 open frm for DROP TABLE
needed to get partitioning and information about
secondary objects
2024-11-05 14:00:48 -08:00
Sergei Golubchik
062f8eb37d cleanup: key algorithm vs key flags
the information about the index algorithm was stored in two
places, inconsistently split between them.

BTREE index could have key->algorithm == HA_KEY_ALG_BTREE, if the user
explicitly specified USING BTREE or HA_KEY_ALG_UNDEF, if not.

RTREE index had key->algorithm == HA_KEY_ALG_RTREE
and always had key->flags & HA_SPATIAL

FULLTEXT index had key->algorithm == HA_KEY_ALG_FULLTEXT
and always had key->flags & HA_FULLTEXT

HASH index had key->algorithm == HA_KEY_ALG_HASH or HA_KEY_ALG_UNDEF

long unique index always had key->algorithm == HA_KEY_ALG_LONG_HASH

In this commit:

All indexes except BTREE and HASH always have key->algorithm
set, HA_SPATIAL and HA_FULLTEXT flags are not used anymore (except
for storage to keep frms backward compatible).

As a side effect ALTER TABLE now detects FULLTEXT index renames correctly
2024-11-05 14:00:47 -08:00
Sergei Golubchik
44ff2f7831 reject invalid spatial key declarations in the parser 2024-11-05 14:00:47 -08:00
Sergei Golubchik
9fa31c1bd9 cleanup: spaces, casts, comments 2024-11-05 14:00:47 -08:00
Sergei Golubchik
70f000f1dc fix main.plugin_vars test to cleanup after itself 2024-11-05 14:00:46 -08:00
Sergei Golubchik
9ddac64188 make INFORMATION_SCHEMA.STATISTICS.COMMENT not nullable
as it can never be null (only "" or "disabled")
2024-11-05 14:00:46 -08:00
Sergei Golubchik
680bdb76a6 fix for 32bit 2024-11-05 14:00:46 -08:00
Alexander Barkov
5d4a4d2091 Fixing main.type_timestamp failure with --view
The patch for
  MDEV-35250 Assertion `dec <= 6' failed in my_timestamp_binary_length
added a test which depends on
  MDEV-29534 In view FROM_UNIXTIME adds .000000 in the result

Adding --disable_view_protocol around the affected statements.
2024-11-02 12:46:27 +04:00
Sergei Golubchik
ac7fe8b214 fix main.selectivity_notembedded --view 2024-11-01 21:01:31 +01:00
Sergei Golubchik
0a3452cf83 MDEV-35229 fix the test for --view
also, enable it for --ps
2024-11-01 20:52:58 +01:00
Sergei Golubchik
947de4b1db print more digits for floating point options in mariadbd --help 2024-11-01 08:58:43 +01:00
Monty
40810baffe MDEV-33144 Implement the Percona variable slow_query_log_always_write_time
This task is inspired by the Percona implementation of
slow_query_log_always_write_time.

This task implements the variable log_slow_always_query_time (name
matching other MariaDB variables using the slow query log). The
default value for the variable is 31536000 seconds (one year), which
makes MariaDB compatible with older installations.

For queries with an execution time longer than log_slow_always_query_time,
the variables log_slow_rate_limit and log_slow_min_examined_row_limit
will be ignored and the query will be written to the slow query log
if there are no other limitations (like log_slow_filter etc).
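
A hedged usage sketch (the 5 second threshold is illustrative; the unit is
assumed to be seconds, like log_slow_query_time):

  SET GLOBAL log_slow_always_query_time = 5;
  -- queries running longer than 5 seconds are now written to the slow query
  -- log even when log_slow_rate_limit or log_slow_min_examined_row_limit
  -- would otherwise skip them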

Other things:
- long_query_time internal variable renamed to log_slow_query_time.
- More descriptive information for "log_slow_query_time".
2024-11-01 08:58:37 +01:00
Alexander Barkov
556a40dce0 MDEV-35229 NOCOPY has become reserved word bringing wide incompatibility
This patch was suggested by Sergei Golubchik.

It reverts the second patch from the PR:

  commit fa5eeb4931
    Fixed ALTER TABLE NOCOPY keyword failure

and adds NOCOPY_SYM into keyword_func_sp_var_and_label.

The price is one extra shift/reduce conflict in yy_oracle.yy.
This should be tolerable.
2024-10-30 13:58:20 +04:00
Alexander Barkov
8c0a260a5b MDEV-35250 Assertion `dec <= 6' failed in my_timestamp_binary_length
The TIMESTAMP related code did not handle AUTO_SEC_PART_DIGITS.
FROM_UNIXTIME() sets its member 'decimals' to AUTO_SEC_PART_DIGITS.
So some scripts involving FROM_UNIXTIME() crashed on assert in debug
builds and returned unexpected results in release builds.
2024-10-30 11:09:21 +04:00
Yuchen Pei
4b6922a315
MDEV-25008: UPDATE/DELETE: Cost-based choice IN->EXISTS vs Materialization
Single-table UPDATE/DELETE didn't provide an outer_lookup_keys value for
subqueries. This made it impossible to make a meaningful choice between
the IN->EXISTS and Materialization strategies for subqueries.

Fix this:
* Make UPDATE/DELETE save Sql_cmd_dml::scanned_rows,
* Then, subquery's JOIN::choose_subquery_plan() can fetch it from
there for outer_lookup_keys

Details:
UPDATE/DELETE now calls select_lex->optimize_unflattened_subqueries()
twice, like SELECT does (first call optimize_constant_subqueries() in
JOIN::optimize_inner(), then call optimize_unflattened_subqueries() in
JOIN::optimize_stage2()):
1. Call with const_only=true before any optimizations. This allows
range optimizer and others to use the values of cheap const
subqueries.
2. Call it with const_only=false after range optimizer, partition
pruning, etc. outer_lookup_keys value is provided, so it's possible to
pick a good subquery strategy.

Note: PROTECT_STATEMENT_MEMROOT requires that first SP execution
performs subquery optimization for all subqueries, even for degenerate
query plans like "Impossible WHERE". Due to that, we ensure that the
call to optimize_unflattened_subqueries (with const_only=false) even
for degenerate query plans still happens, as was the case before this
change.
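
A hedged sketch of the kind of single-table statement affected (table and
column names are illustrative):

  UPDATE t1 SET a = a + 1 WHERE b IN (SELECT c FROM t2 WHERE d > 100);
  -- the IN subquery can now be costed for IN->EXISTS vs Materialization
  -- using the outer_lookup_keys value saved by the UPDATE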
2024-10-23 23:51:24 +11:00
Oleg Smirnov
fd87e01f38 MDEV-27277 Add a warning when max_sort_length is reached
During query execution, some sorting and grouping operations
on strings may be involved. The system variable max_sort_length defines
the maximum number of bytes to use when comparing strings during
sorting/grouping. Thus, the comparable parts of strings may be less
than their actual size, so the results of the query may not be
sorted/grouped properly.
To indicate that some comparisons were done on truncated lengths,
a new warning has been introduced with this commit.
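
A hedged sketch of a statement that can now raise the warning (names and the
tiny limit are illustrative; the exact warning text is not claimed here):

  SET SESSION max_sort_length = 8;
  SELECT txt FROM t1 ORDER BY txt;
  SHOW WARNINGS;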
2024-10-22 22:39:36 +07:00
Alexander Barkov
0d17c540a5 MDEV-27277 Add a warning when max_sort_length is reached
Step#1: fixing the return type of strnxfrm() from size_t to this structure:

typedef struct
{
  size_t m_output_length;
  size_t m_source_length_used;
  uint m_warnings;
} my_strnxfrm_ret_t;
2024-10-22 21:42:53 +07:00
Alexander Barkov
e1cd3c4033 MDEV-12252 ROW data type for stored function return values
Adding support for the ROW data type in the stored function RETURNS clause:

- explicit ROW(..members...) for both sql_mode=DEFAULT and sql_mode=ORACLE

  CREATE FUNCTION f1() RETURNS ROW(a INT, b VARCHAR(32)) ...

- anchored "ROW TYPE OF [db1.]table1" declarations for sql_mode=DEFAULT

  CREATE FUNCTION f1() RETURNS ROW TYPE OF test.t1 ...

- anchored "[db1.]table1%ROWTYPE" declarations for sql_mode=ORACLE

  CREATE FUNCTION f1() RETURN test.t1%ROWTYPE ...

Adding support for anchored scalar data types in RETURNS clause:

- "TYPE OF [db1.]table1.column1" for sql_mode=DEFAULT

  CREATE FUNCTION f1() RETURNS TYPE OF test.t1.column1;

- "[db1.]table1.column1" for sql_mode=ORACLE

  CREATE FUNCTION f1() RETURN test.t1.column1%TYPE;

Details:

- Adding a new sql_mode_t parameter to
    sp_head::create()
    sp_head::sp_head()
    sp_package::create()
    sp_package::sp_package()
  to guarantee early initialization of sp_head::m_sql_mode.
  Before this change, this member was not initialized at all during
  CREATE FUNCTION/PROCEDURE/PACKAGE statements, and was not used.
  Now it needs to be initialized to write properly the
  mysql.proc.returns column, according to the create time sql_mode.

- Code refactoring to make the things simpler and functions smaller:

  * Adding a new method
    Field_row::row_create_fields(THD *thd, List<Spvar_definition> *list)
    to make a Virtual_tmp_table with Fields for ROW members
    from an explicit definition.

  * Adding a new method
    Field_row::row_create_fields(THD *thd, const Spvar_definition &def)
    to make a Virtual_tmp_table with Fields for ROW members
    from an explicit or a table anchored definition.

  * Adding a new method
    Item_args::add_array_of_item_field(THD *thd, const Virtual_tmp_table &vtable)
    to create an array of Item_field corresponding to all Field instances
    in a Virtual_tmp_table

  * Removing Item_field_row::row_create_items(). It was decomposed
    into the new methods described above.

  * Moving the code from the loop body in sp_rcontext::init_var_items()
    into a separate method Spvar_definition::make_item_field_row(),
    to make the code clearer (smaller functions).
    make_item_field_row() itself uses the new methods described above.

- Changing the data type of sp_head::m_return_field_def
  from Column_definition to Spvar_definition.
  So now it supports not only SQL column field types,
  but also explicit ROW and anchored ROW data types,
  as well as anchored column types.

- Adding a new Column_definition parameter to sp_head::create_result_field().
  Before this patch, create_result_field() took the definition only
  from m_return_field_def. Now it's also called with a local Column_definition
  variable which contains the explicit definition resolved from an
  anchored definition.

- Modifying sql_yacc.yy to support the new grammar.
  Adding new helper methods:
    * sf_return_fill_definition_row()
    * sf_return_fill_definition_rowtype_of()
    * sf_return_fill_definition_type_of()

- Fixing tests in:
  * Virtual_tmp_table::setup_field_pointers() in sql_select.cc
  * Send_field::normalize() in field.h
  * store_column_type()
  to prevent calling Type_handler_row::field_type(),
  which is implemented as a DBUG_ASSERT(0).
  Before this patch the affected methods and functions were called only
  for scalar data types. Now ROW is also possible.

- Adding a new virtual method Field::cols()

- Overriding methods:
   Item_func_sp::cols()
   Item_func_sp::element_index()
   Item_func_sp::check_cols()
   Item_func_sp::bring_value()
  to support the ROW data type.

- Extending the rule sp_return_type to support
  * explicit ROW and anchored ROW data types
  * anchored scalar data types

- Overriding Field_row::sql_type() to print
  the data type of an explicit ROW.
2024-10-21 07:59:29 +04:00
Alexander Barkov
dfaf7e2eb4 MDEV-15751 CURRENT_TIMESTAMP should return a TIMESTAMP [WITH TIME ZONE?]
Changing the return type of the following functions:
  - CURRENT_TIMESTAMP, CURRENT_TIMESTAMP(), NOW()
  - SYSDATE()
  - FROM_UNIXTIME()
from DATETIME to TIMESTAMP.

Note: the old behaviour of NOW() returning DATETIME is still available
as LOCALTIMESTAMP or LOCALTIMESTAMP(), e.g.:

  SELECT
    LOCALTIMESTAMP,     -- DATETIME
    CURRENT_TIMESTAMP;  -- TIMESTAMP

The change in the functions return data type fixes some problems
that occurred near a DST change:

- Problem #1

INSERT INTO t1 (timestamp_field) VALUES (CURRENT_TIMESTAMP);
INSERT INTO t1 (timestamp_field) VALUES (COALESCE(CURRENT_TIMESTAMP));

could result into two different values inserted.

- Problem #2

INSERT INTO t1 (timestamp_field) VALUES (FROM_UNIXTIME(1288477526));
INSERT INTO t1 (timestamp_field) VALUES (FROM_UNIXTIME(1288477526+3600));

could result into two equal TIMESTAMP values near a DST change.

Additional changes:

- FROM_UNIXTIME(0) now returns SQL NULL instead of '1970-01-01 00:00:00'
  (assuming time_zone='+00:00')

- UNIX_TIMESTAMP('1970-01-01 00:00:00') now returns SQL NULL instead of 0
  (assuming time_zone='+00:00')

These additional changes are needed for consistency with TIMESTAMP fields,
which cannot store '1970-01-01 00:00:00 +00:00'
2024-10-19 22:48:23 +02:00
Brandon Nesterenko
e213e916ad MDEV-32014: Fix mysqld--help,win.rdiff 2024-10-17 15:53:00 -06:00
Libing Song
72cc58bb71 MDEV-32014 Rename binlog cache temporary file to binlog file
for large transaction

Description
===========
When a transaction commits, it copies the binlog events from
binlog cache to binlog file. Very large transactions
(eg. gigabytes) can stall other transactions for a long time
because the data is copied while holding LOCK_log, which blocks
other commits from binlogging.

The solution in this patch is to rename the binlog cache file to
a binlog file instead of copying it, if the committing transaction has
a large binlog cache. Rename is a very fast operation; it doesn't
block other transactions for a long time.

Design
======
* binlog_large_commit_threshold
  type: ulonglong
  scope: global
  dynamic: yes
  default: 128MB

  Only binlog cache temporary files larger than 128MB are renamed to a
  binlog file (a usage sketch follows at the end of this message).

* #binlog_cache_files directory
  To support rename, all binlog cache temporary files are managed
  as normal files now. The `#binlog_cache_files` directory is in the same
  directory as the binlog files. It is created at server startup if it doesn't
  exist; otherwise, all files in the directory are deleted at startup.

  The temporary files are named with an ML_ prefix and the memory address
  of the binlog_cache_data object, which guarantees that the name is unique.

* Reserve space
  To support the rename feature, enough space must be reserved at the
  beginning of the binlog cache file. The space is required for the
  Format description, Gtid list, checkpoint and Gtid events when
  renaming it to a binlog file.

  Since binlog_cache_data's cache_log is directly accessed by the binlog,
  online alter and wsrep, it is not easy to update all the code. Thus
  the binlog cache will not reserve space if it is not a session binlog
  cache or if a wsrep session is enabled.

  - m_file_reserved_bytes
    Stores the bytes reserved at the beginning of the cache file.
    It is initialized in write_prepare() and cleared by reset().

    The reserved file header is hidden from callers, so there is no
    change for callers. E.g.
    - get_byte_position() still gets the length of binlog data
      written to the cache, not the file length.
    - truncate(0) will truncate the file to m_file_reserved_bytes, not to 0.

  - write_prepare()
    write_prepare() is called every time anything is written
    into the cache. It calls init_file_reserved_bytes() to create
    the cache file (if it doesn't exist) and reserve suitable space if
    the data written exceeds the buffer's size.

* Binlog_commit_by_rotate
  It encapsulates the code for renaming a binlog cache
  temporary file to a binlog file.
  - should_commit_by_rotate()
    It is called by write_transaction_to_binlog_events() to check if
    a binlog cache should be renamed to a binlog file.
  - commit()
    This is the entry point to rename a binlog cache and commit the
    transaction. Both rename and commit are protected by LOCK_log,
    thus no other transaction can write anything into the renamed
    binlog before it.

    Rename happens in a rotation. After the new binlog file is generated,
    replace_binlog_file() is called to:
    - copy data from the new binlog file to its binlog cache file.
    - write gtid event.
    - rename the binlog cache file to a binlog file.

    After that the rotation continues and succeeds. Then the transaction
    is committed in a separate group by itself. Its cache file will be
    detached and its cache log will be reset before calling
    trx_group_commit_with_engines(). Thus only the Xid event is written.
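
A hedged usage sketch (the 256MB value is illustrative):

  SET GLOBAL binlog_large_commit_threshold = 256 * 1024 * 1024;
  -- transactions whose binlog cache file grows beyond 256MB have the cache
  -- file renamed to a binlog file instead of being copied under LOCK_log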
2024-10-17 07:53:59 -06:00
Yuchen Pei
35cebfdc51
MDEV-15696 Implement SHOW CREATE SERVER
One change is that if the port is not supplied or is out of bounds, the
old behaviour was to print 3306. The new behaviour is to print nothing
(if not supplied) or to print the out-of-bounds value.
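
A hedged sketch (server name and options are illustrative; the exact output
format of SHOW CREATE SERVER is not claimed here):

  CREATE SERVER s1 FOREIGN DATA WRAPPER mysql
    OPTIONS (HOST '127.0.0.1', DATABASE 'db1');
  SHOW CREATE SERVER s1;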
2024-10-15 10:50:23 +11:00
Yuchen Pei
d2eba35653
MDEV-34716 Allow arbitrary options in CREATE SERVER
The existing syntax for CREATE SERVER

CREATE [OR REPLACE] SERVER [IF NOT EXISTS] server_name
    FOREIGN DATA WRAPPER wrapper_name
    OPTIONS (option [, option] ...)

option:
  { HOST character-literal
  | DATABASE character-literal
  | USER character-literal
  | PASSWORD character-literal
  | SOCKET character-literal
  | OWNER character-literal
  | PORT numeric-literal }

With this change we have:

option:
  { HOST character-literal
  | DATABASE character-literal
  | USER character-literal
  | PASSWORD character-literal
  | SOCKET character-literal
  | OWNER character-literal
  | PORT numeric-literal
  | PORT quoted-numerical-literal
  | identifier character-literal}

We store these options as a JSON field in the mysql.servers system
table. We retain the restriction that PORT needs to be a number, but
also allow it to be a quoted number, so that SHOW CREATE SERVER can be
used for dumping. Without an accompanying implementation of SHOW CREATE
SERVER, some mysqldump tests will fail. Therefore this commit should
be immediately followed by the one implementing SHOW CREATE SERVER,
with testing covering both.
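
A hedged sketch of the extended syntax (the server name, the arbitrary
option name, and its value are illustrative):

  CREATE OR REPLACE SERVER s1 FOREIGN DATA WRAPPER mysql
    OPTIONS (HOST '127.0.0.1', PORT '3306', custom_option 'some value');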
2024-10-15 10:50:22 +11:00
Yuchen Pei
2345407b8c
MDEV-34716 Fix mysql.servers socket max length too short
The limit on socket path length on unix according to libc is 108 bytes (see
sockaddr_un::sun_path), but in the table it is a string of max length
64, which results in truncation of the socket path and failure to connect
by plugins, such as Spider, that use servers.
2024-10-15 10:50:22 +11:00
Rex
9315452ea0 MDEV-34941 MDEV-31466-fix column count issue with union in derived table
In specifying a derived table with a union, for example

CREATE TABLE t (c1 INT KEY,c2 INT,c3 INT) ENGINE=MyISAM;
SELECT * FROM (SELECT * FROM t UNION SELECT * FROM t) AS d (d1,d2);

we bypass an earlier check for the correct number of specified column
names, causing a crash.

Fixed by adding a check for the correct number of supplied arguments
in st_select_lex_unit::rename_types_list()
2024-10-15 06:50:19 +12:00
Rex
e90aab7acc MDEV-34931 MDEV-31466 name resolution fails in --view
Fix for MDEV-31466 - add optional derived table column names.
Column names within a SELECT_LEX structure can be left in a non-reparsable
state (as printed out from *::print) after JOIN::prepare.  This caused
an incorrect view definition to be written into the .FRM file.
Fixed by resetting item list names in SELECT_LEX structures representing
derived tables before writing out the view definition.

Reviewed by Igor Babaev (igor@mariadb.com)
2024-10-15 06:08:46 +12:00
Rex
10008b3d3e MDEV-31466 Add optional correlation column list for derived tables
Extend derived table syntax to support column name assignment.
(subquery expression) [as|=] ident [comma separated column name list].
Prior to this patch, the optional comma separated column name list was
not supported.

Processing within the unit of the subquery expression will use
original column names, outside the unit will use the new names.

For example, in the query

select a1, a2 from
  (select c1, c2, c3 from t1 where c2 > 0) as dt (a1, a2, a3)
where a2 > 10;

we see the second column of the derived table dt being used both within,
(where c2 > 0), and outside, (where a2 > 10), the specification.
Both conditions apply to t1.c2.

When multiple unit preparations are required, such as when being used within
a prepared statement or procedure, original column names are needed for
correct resolution. Original names are reset within mysql_derived_reinit().

Item_holder items, used for result tables in both TVC and union preparations,
are renamed before use within st_select_lex_unit::prepare().

During wildcard expansion, if column names are present, item names are
set directly after creation.

Reviewed by Igor Babaev (igor@mariadb.com)
2024-10-15 06:08:46 +12:00
Christian Gonzalez
fd0cc2b1fd Make SESSION_USER() comparable with CURRENT_USER()
Update `SESSION_USER()` behaviour to be comparable with `CURRENT_USER()`.
`SESSION_USER()` will return the user and host columns from `mysql.user`
used to authenticate the user when the session was created.

Historically `SESSION_USER()` was an alias of the `USER()` function. The
main difference from `USER()` behaviour after this change is that
`SESSION_USER()` now returns the host column from `mysql.user` instead of
the client host or ip.
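
A hedged comparison sketch (the returned values depend on the connecting
account):

  SELECT USER(), SESSION_USER(), CURRENT_USER();
  -- USER():         client user and client host/ip
  -- SESSION_USER(): user and host columns from mysql.user used at authentication
  -- CURRENT_USER(): the account the session's privileges are based on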

NOTE: `SESSION_USER_IS_USER` old mode is added to make the change
backward compatible.

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
2024-10-04 13:22:40 +02:00
sts-kokseng.wong
383d1f90dd Address the review comments:
1. Move the unit tests into the compat/oracle suite, sp-param.test file.
2. Remove the added unit test file and result file.
3. Add the type Alter_info::enum_alter_table_algorithm into the union.
4. Remove the extra switch case.
2024-10-04 00:17:37 +02:00
sts-kokseng.wong
43825af101 MDEV-34316 sql_mode=ORACLE: Ignore the NOCOPY keyword in stored routine parameters
Under sql_mode=ORACLE, ignore the NOCOPY keyword in stored routine
parameters. The optimization (pass-by-reference instead of
pass-by-value) that would avoid value copying will be done in a separate
task when needed.
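
A hedged sketch of a parameter declaration that is now accepted (procedure
and parameter names are illustrative; client delimiter handling is omitted):

  SET sql_mode=ORACLE;
  CREATE PROCEDURE p1(a IN OUT NOCOPY INT) AS
  BEGIN
    a:= a + 1;
  END;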
2024-10-04 00:17:37 +02:00
Marko Mäkelä
f493e46494 Merge 11.6 into 11.7 2024-10-03 18:15:13 +03:00
Marko Mäkelä
43465352b9 Merge 11.4 into 11.6 2024-10-03 16:09:56 +03:00
Marko Mäkelä
b53b81e937 Merge 11.2 into 11.4 2024-10-03 14:32:14 +03:00
Marko Mäkelä
12a91b57e2 Merge 10.11 into 11.2 2024-10-03 13:24:43 +03:00