Add support for removing the Content-Type header to the S3 engine. This
is required for compatibility with some S3 providers.
This also adds a provider option to the S3 engine which will turn on
relevant compatibility options for specific providers.
This was required for getting MariaDB S3 engine to work with "Huawei
Cloud S3".
To get Huawei S3 storage to work, one has to set one of the following
S3 options:
s3_provider=Huawei
s3_ssl_no_verify=1
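For example, with a my.cnf-style configuration this could look as follows
(a minimal sketch; the section name is illustrative and either option alone
may be enough):

[mariadbd]
s3_provider=Huawei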
Author: Andrew Hutchings <andrew@mariadb.org>
* restore old ENUM values order, in case someone used
SET @@s3_protocol_version=1
* restore old behavior for Original and Amazon protocol values;
they behave as "Auto" and do not force a specific protocol version
Approved by Andrew Hutchings <andrew@mariadb.org>
The previous commit for MDEV-32884 fixed the s3_protocol_version option,
which was previously always using "Auto", no matter what it was set to. This
patch does several things to keep the old behaviour whilst correcting
for new behaviour and laying the groundwork for the future. This
includes:
* `Original` now means v2 protocol, which is what it effectively was due to
the option not working, so upgrades will still work.
* A new `Legacy` option has been added to mean v1 protocol.
* Options `Path` and `Domain` have been added; these will be the only
two options apart from `Auto` in a future release, and are more
aligned with what this variable means.
* Fixed the s3.debug test so that it works with v2 protocol.
* Fixed the s3.amazon test so that it works with region subdomains.
* Added additional modes to the s3.amazon test.
* Added s3.not_amazon test for the remaining modes.
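As an illustration, one of the new values could be set the same way as the
SET example earlier in this log (a sketch; the value names are those listed
above):

SET @@s3_protocol_version='Path'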
This replaces PR #2902.
These options were needed in some cases, such as when using MinIO, which
requires the port option to be able to connect to the S3 storage.
The symptom was that one could get the error
"Table t1.MAI doesn't exist in s3"
even if the table did exist.
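A sketch of such a setup (this assumes the s3_host_name, s3_port and
s3_use_http options; the host and port values are just an example for a
local MinIO):

[mariadbd]
s3_host_name=127.0.0.1
s3_port=9000
s3_use_http=1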
Other things:
- Improved error message for non-existing S3 files
- Rewrote bool Query_compressed_log_event::write() to make it more readable
(no logic changes).
- Changed DBUG_PRINT of 'is_error:' to 'is_error():' to make it easier to
find error: in traces.
- Ensure that 'db' is never null in Query_log_event (Simplified code).
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
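For example, an operation such as the following (hypothetical names,
assuming a RANGE-partitioned table) is expected to work directly on an S3
table:

ALTER TABLE t1 ADD PARTITION (PARTITION p2 VALUES LESS THAN (2000))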
In addition, partitioned S3 tables can also be replicated.
This is achieved by storing the .frm and .par files of partitioned shared
(S3) tables on S3.
The discovery methods are enhanced by allowing engines that support
discovery to also handle the partitioned table's .frm and .par files.
Things in more detail:
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added hton callback create_partitioning_metadata to inform the handler
that metadata for a partitioned table has changed
- Added back handler::discover_check_version() to be able to check if
a table's or a part table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around
- Appended extension to path for writefrm() to be able to reuse the
function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of non-critical
tracing.
- Only indentation changes in sql_rename.cc
- Ignore some WSREP error messages when there isn't an internet connection
- Force restart of stat_tables_part.test to make result stable
- Fixed compiler warnings in CONNECT
MDEV-19964 S3 replication support
Added new configuration options:
s3_slave_ignore_updates
"If the slave shares the same S3 storage as the master"
s3_replicate_alter_as_create_select
"When converting S3 table to local table, log all rows in binary log"
This allows one to configure slaves to have the S3 storage shared or
independent from the master.
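A slave that shares the master's S3 storage could thus be configured with
something like (a minimal my.cnf-style sketch):

[mariadbd]
s3_slave_ignore_updates=1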
Other things:
Added new session variable '@@sql_if_exists' to force IF EXISTS on DDLs.
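A minimal sketch of the new variable in use (the table name is
illustrative):

SET @@sql_if_exists=1;
DROP TABLE t1;  -- behaves like DROP TABLE IF EXISTS t1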
Changes:
- maria_create() now uses a bit in the parameter flags to check if the table
should be encrypted instead of using maria_encrypted_tables.
- Don't encrypt tables that are to be converted to S3
- Added encrypted flag to ARIA_TABLE_CAPABILITIES
- maria_chk --description now prints if the table is encrypted. Other
operations are not allowed on encrypted tables.
Limit increased from 1000 to 2000.
Avoiding stack overflow by only storing keys and pages on the stack in
recursive functions if there is plenty of space on it.
Other things:
- Use less stack space for b-tree operations as we now only allocate as
much space as needed instead of always allocating HA_MAX_KEY_LENGTH.
- Replaced most usage of my_safe_alloca() in Aria with the stack_alloc
interface.
- Moved my_setstacksize() to mysys/my_pthread.c
The error occurred because the aria_copy_to_s3() function tried to copy the
.frm file of a partition, but a partition does not have its own .frm file.
The same is true for aria_rename_s3().
To fix this issue a new parameter was added to those two functions to
specify whether the .frm file must be copied or not. The parameter is set
to 'false' for partitions.
Also there was another issue with EXCHANGE PARTITION. Briefly, there is the
following sequence of operations (see exchange_name_with_ddl_log() for
details):
1) rename swap table to temporary table,
2) rename partition to swap table,
3) rename temporary table to partition.
On step (1) the .frm file is renamed too. On step (2) the swap table does
not have a .frm file, as the partition does not have it. On step (3) the
partition will have a .frm file, because it will be renamed from the
temporary table. All of this causes errors at different stages of table
access. To fix it, the .frm file is not touched at all for S3 during the
EXCHANGE PARTITION operation. This is implemented in ha_s3::rename_table()
by additionally checking current_thd->lex->alter_info.partition_flags (see
also ALTER_PARTITION_EXCHANGE in sql_yacc.yy).
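For reference, the affected statement has this shape (hypothetical table
and partition names):

ALTER TABLE s3_partitioned EXCHANGE PARTITION p0 WITH TABLE t_swap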
A read-only storage engine that stores its data in (AWS) S3
To store data in S3 one could use ALTER TABLE:
ALTER TABLE table_name ENGINE=S3
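For this to work the server needs S3 connection settings; a sketch with
placeholder values (the plugin loading line and the exact option set are
assumptions about the build and setup):

[mariadbd]
plugin_load_add=ha_s3
s3_access_key=YOUR_ACCESS_KEY
s3_secret_key=YOUR_SECRET_KEY
s3_region=eu-north-1
s3_bucket=mariadb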
libmarias3 integration done by Sergei Golubchik
libmarias3 created by Andrew Hutchings