Commit graph

1177 commits

Author SHA1 Message Date
Marko Mäkelä
345356b868 Merge 10.9 into 10.10 2023-02-16 11:36:38 +02:00
Marko Mäkelä
0d55914d96 Merge 10.8 into 10.9 2023-02-16 10:25:34 +02:00
Sergei Petrunia
10a974adc9 Merge 11.0-selectivity into 11.0 2023-02-15 12:03:12 +03:00
Sergei Golubchik
c6f0814468 more changes to man page handling
* move them from ManPagesX component to X (works better for plugins),
  but keep ManPagesDevelopment as C/C is using it
* move backup manpages to Backup
* move plugin manpages (s3, rocksdb) to plugins
2023-02-12 12:15:22 +01:00
Marko Mäkelä
dbab3e8d90 Merge 10.6 into 10.8 2023-02-10 13:43:53 +02:00
Marko Mäkelä
6aec87544c Merge 10.5 into 10.6 2023-02-10 13:03:01 +02:00
Marko Mäkelä
c41c79650a Merge 10.4 into 10.5 2023-02-10 12:02:11 +02:00
Sergei Golubchik
4fa2747a63 MDEV-29582 post-review fixes
don't include my_progname in the error message; my_error already prefixes
the message with it automatically, which resulted in output like

/usr/bin/mysqladmin: Notice: /usr/bin/mysqladmin is deprecated and will be removed in a future release, use command 'mariadb-admin'

and remove "Notice" so that the problem description directly follows
the executable name.

make the check work when the executable is found via the PATH
(i.e. invoked simply as 'mysql', so readlink cannot find it)

fix the check in mysql_install_db and mysql_secure_installation to not
print the warning if the intermediate path contains the "mysql" substring

add this message also to
* mysql_waitpid
* mysql_convert_table_format
* mysql_find_rows
* mysql_setpermissions
* mysqlaccess
* mysqld_multi
* mysqld_safe
* mysqldumpslow
* mysqlhotcopy
* mysql_ldb

Closes #2273
2023-02-10 10:45:25 +01:00
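
A minimal C++ sketch of the kind of check described above, assuming hypothetical helper names (the real tools are a mix of C programs and shell/Perl scripts, and the actual check uses my_error/readlink as explained in the commit message):

  // Hypothetical sketch: warn when a tool is invoked via its legacy "mysql*" name.
  // Names and wording are illustrative, not the actual MariaDB implementation.
  #include <cstdio>
  #include <cstring>

  static void warn_if_deprecated_name(const char *invoked_as, const char *new_name)
  {
    // Use only the basename, so the check also works when the tool is found
    // via the PATH and readlink() cannot resolve the installation directory.
    const char *base= std::strrchr(invoked_as, '/');
    base= base ? base + 1 : invoked_as;

    if (std::strncmp(base, "mysql", 5) == 0)
      std::fprintf(stderr,
                   "%s is deprecated and will be removed in a future release,"
                   " use command '%s'\n", base, new_name);
  }

  int main(int argc, char **argv)
  {
    (void) argc;
    warn_if_deprecated_name(argv[0], "mariadb-admin");
    return 0;
  }
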
Vicențiu Ciorbaru
08c852026d Apply clang-tidy to remove empty constructors / destructors
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .

Code style changes have been done on top. The result of this change
leads to the following improvements:

1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
  ~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.

2. The compiler can better understand the intent of the code, which leads
   to more optimization possibilities. Additionally it enables detecting
   unused variables whose types had an empty default constructor that was
   not explicitly marked as default.

   One particular change was required in sql/opt_range.cc following this
   patch:

   result_keys, an unused variable of the Bitmap template class, now
   correctly issues an unused-variable warning.

   Setting the Bitmap template class constructor to default allows the
   compiler to see that there are no side-effects when instantiating the
   class. Previously the compiler could not issue the warning, as it could
   not assume that the Bitmap class (being a template) performed a no-op
   in its default constructor. This prevented the unused-variable warning.
2023-02-09 16:09:08 +02:00
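
An illustrative C++ sketch (not the actual server Bitmap code) of why '= default' matters here: with an explicitly defaulted constructor the compiler can see that constructing the object has no side effects, so an unused instance can trigger -Wunused-variable, whereas a user-provided empty constructor body typically suppresses the warning.

  // Illustrative only; the real Bitmap template in the server differs.
  template <unsigned width>
  class BitmapSketch
  {
  public:
    unsigned long buffer[(width + 63) / 64];
    BitmapSketch() = default;   // no side effects: unused instances can be diagnosed
    // BitmapSketch() {}        // empty user-provided body: warning typically suppressed
  };

  int f()
  {
    BitmapSketch<64> result_keys;   // with = default, gcc/clang can flag this as unused
    return 0;
  }
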
Daniel Black
17423c6c51 MDEV-30554 RockDB libatomic linking on riscv64
The existing storage/rocksdb/CMakeLists.txt defined
ATOMIC_EXTRA_LIBS when atomics were required. This was
determined by the toplevel configure.cmake test
(HAVE_GCC_C11_ATOMICS_WITH_LIBATOMIC).

As build_rocksdb.cmake is included after ATOMIC_EXTRA_LIBS
was set, we just need to use it. As such no riscv64
specific macro is needed in build_rocksdb.cmake.

As highlighted by Gianfranco Costamagna (@LocutusOfBorg)
in #2472, overwriting SYSTEM_LIBS was problematic.
This is corrected in case SYSTEM_LIBS is changed elsewhere
in the future.

Closes #2472.
2023-02-07 21:19:40 +11:00
Sergei Petrunia
1529881595 Stabilize rocksdb.rocksdb test. 2023-02-03 14:31:21 +03:00
Monty
1f4a9f086a Removed "<select expression> INTO <destination>" deprecation.
This was done after discussions with Igor, Sanja and Bar.

The main reason for removing the deprecation was to ensure that MariaDB
is always backward compatible whenever possible.

Other things:
- Added statistics counters, mainly for the feedback plugin.
  - INTO OUTFILE
  - INTO variable
  - If INTO is using the old syntax (end of query)
2023-02-03 11:57:50 +03:00
Sergei Petrunia
6c4076fac4 MDEV-30032: EXPLAIN FORMAT=JSON output: part #2: print 'loops'. 2023-02-03 11:22:17 +03:00
Sergei Petrunia
ffe0beca25 MDEV-30032: EXPLAIN FORMAT=JSON output: print costs
Basic printout for join and table execution costs.
2023-02-03 11:01:24 +03:00
Monty
727491b72a Added test cases for preceding test
This includes all test changes from
"Changing all cost calculation to be given in milliseconds"
and forwards.

Some of the things that caused changes in the result files:

- As part of fixing tests, I added 'echo' to some comments to be able to
  find out more easily where things went wrong.
- MATERIALIZED now has a higher cost compared to X than before. Because
  of this some MATERIALIZED types have changed to DEPENDENT SUBQUERY.
  - Some test cases that required MATERIALIZED to repeat a bug were
    changed by adding more rows to force MATERIALIZED to happen.
- 'Filtered' in SHOW EXPLAIN has in many cases changed from 100.00 to
  something smaller. This is because filtered now also takes into
  account the smallest possible ref access and filters, even if they
  were not used. Another reason for 'Filtered' being smaller is that
  we now also take into account implicit filtering done for subqueries
  using FIRSTMATCH.
  (main.subselect_no_exists_to_in)
  This is calculated in best_access_path() and stored in records_out.
- Table orders have changed because of more accurate costs.
- 'index' and 'ALL' for small tables have changed to use 'range' or
   'ref' because of optimizer_scan_setup_cost.
- 'index' can be changed to 'range' as the range optimizer assumes we don't
  have to read from disk the blocks that the range optimizer has already read.
  This can be confusing in the case where there is no obvious where clause
  but instead there is a hidden 'key_column > NULL' added by the optimizer.
  (main.subselect_no_exists_to_in)
- Scan on primary clustered key does not report 'Using Index' anymore
  (It's a table scan, not an index scan).
- For derived tables, the number of rows is now 100 instead of 2,
  which can be seen in EXPLAIN.
- More tests have "Using index for group by" as the cost of this
  optimization is now more correct (lower).
- A primary key could be preferred over a normal key, even if it would
  access more rows, as it's faster to do 1 lookup and 3 'index_next' calls on a
  clustered primary key than one lookup through a secondary key.
  (main.stat_tables_innodb)

Notes:

- There were 4.7% more calls to best_extension_by_limited_search() in
  the main.greedy_optimizer test.  However, examining the test results,
  it looked like the plans were slightly better (eq_ref accesses were
  chained together more), so I assume this is ok.
- I have verified a few test cases where there were notable/unexpected
  changes in the plan, and in all cases the new optimizer plans were
  faster.  (main.greedy_optimizer and some others)
2023-02-03 00:00:35 +03:00
Monty
d9d0e78039 Add limits for how many IO operations a table access will do
This solves the current problem in the optimizer
- SELECT FROM big_table
  - SELECT from small_table where small_table.eq_ref_key=big_table.id

The old code assumed that each eq_ref access would cause an IO.
As the cost of IO is high, this dominated the cost for the later table
which caused the optimizer to prefer table scans + join cache over
index reads.

This patch fixes this issue by limiting the number of expected IO calls,
for rows and index separately, to the size of the table or index, or
to the number of accesses that we expect in a range for the index.

The major changes are:

- Adding a new structure ALL_READ_COST that is mainly used in
  best_access_path() to hold the component parts of the cost we are
  calculating. This allows us to limit the number of IOs when multiplying
  the cost with the previous row combinations.
- All storage engine cost functions are changed to return IO_AND_CPU_COST.
  The virtual cost functions should now return in IO_AND_CPU_COST.io
  the number of disk blocks that will be accessed, instead of the cost
  of the access.
- We are not limiting the io_blocks for table or index scans, as we
  assume that engines may not store these in the 'hot' part of the
  cache. Table and index scans also use far fewer IO blocks than
  key accesses, so the original issue is not as critical with scans.

Other things:
- OPT_RANGE now holds a 'Cost_estimate cost' instead of a lot of different
  costs. All the old costs, like index_only_read, can be extracted
  from 'cost'.
- Added to the start of some functions 'handler *file= table->file'
  to shorten the code that is using the handler.
- handler->cost() is used to convert an ALL_READ_COST or IO_AND_CPU_COST
  to a 'cost in milliseconds'.
- New functions:  handler::index_blocks() and handler::row_blocks()
  which are used to limit the IO.
- Added index_cost and row_cost to Cost_estimate and removed all not
  needed members.
- Removed cost coefficients from Cost_estimate as these don't make sense
  when costs (except IO_BLOCKS) are in milliseconds.
- Removed handler::avg_io_cost() and replaced it with DISK_READ_COST.
- Renamed best_range_rowid_filter_for_partial_join() to
  best_range_rowid_filter() as the old name made code lines too long.
- Changed all SJ_MATERIALIZATION_INFO 'Cost_estimate' variables to
  'double' as the full power of Cost_estimate was not used for these and
  thus just caused storage and performance overhead.
- Changed cost_for_index_read() to use 'worst_seeks' to only limit
  IO, not number of table accesses. With this patch worst_seeks is
  probably not needed anymore, but I kept it around just in case.
- Applying the cost for a filter became much shorter and easier thanks
  to the API changes.
- Adjusted cost for fulltext keys in collaboration with Sergei Golubchik.
- Most test changes caused by this patch are that table scans are changed
  to use indexes.
- Added ha_seq::keyread_time() and ha_seq::key_scan_time() to make
  checking the number of potential IO blocks easier during debugging.
2023-02-02 23:57:30 +03:00
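
A simplified sketch of the central idea above, using hypothetical names rather than the actual ALL_READ_COST code: when a per-lookup cost is multiplied by the number of row combinations from earlier tables, the IO part is capped at the number of blocks the table or index actually has, so repeated eq_ref lookups into a small, cached table are no longer charged one disk read each.

  #include <algorithm>

  // Hypothetical cost pieces, loosely modelled on the IO_AND_CPU_COST idea above.
  struct CostParts
  {
    double io_blocks;   // expected number of distinct blocks read from disk
    double cpu;         // CPU cost of the accesses
  };

  // Combine the per-lookup cost with the number of earlier row combinations,
  // capping IO at the total number of blocks in the index or table.
  static double total_access_cost(const CostParts &per_lookup,
                                  double row_combinations,
                                  double max_blocks,
                                  double disk_read_cost)
  {
    double io= std::min(per_lookup.io_blocks * row_combinations, max_blocks);
    double cpu= per_lookup.cpu * row_combinations;
    return io * disk_read_cost + cpu;
  }
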
Monty
b66cdbd1ea Changing all cost calculation to be given in milliseconds
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.

- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
  - Most engine costs have been found with this program. All steps to
    calculate the new costs are documented in Docs/optimizer_costs.txt

- User optimizer_cost variables are given in microseconds (as individual
  costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
  (9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
  This makes it easy to apply different cost modifiers in ha_..time()
  functions for io and cpu costs.
  - scan_time()
  - rnd_pos_time() & rnd_pos_call_time()
  - keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
  of keys with a given number of ranges and an optional number of blocks
  that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
  thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
  Used heap table costs for json_table. The rest are using default engine
  costs.
- Added the following new optimizer variables:
  - optimizer_disk_read_ratio
  - optimizer_disk_read_cost
  - optimizer_key_lookup_cost
  - optimizer_row_lookup_cost
  - optimizer_row_next_find_cost
  - optimizer_scan_cost
- Moved all engine-specific costs to the OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
  recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
  This allows one to change costs without having to compile a lot of
  files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
  for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
  using 'records_out' (min rows seen for table) to calculate filtering.
  This greatly simplifies the filtering code in
  JOIN_TAB::save_explain_data().

This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed.  These fixes are in the following commits.  To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.

InnoDB changes:
- Updated InnoDB to not divide big range cost with 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
  prevents the optimizer from trying to use index-only scans on
  the clustered key.
- Disabled ha_innobase::scan_time() and ha_innobase::read_time() and
  ha_innobase::rnd_pos_time() as the default engine cost functions now
  work well for InnoDB.

Other things:
- Added the --show-query-costs (\Q) option to mysql.cc to show the query
  cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
  the value that the user has given. This is used to change the cost from
  microseconds (user input) to milliseconds (what the server is
  using internally).
- Added include/my_tracker.h, a useful include file to quickly test
  the cost of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are entered and
  shown in microseconds for the user but stored as milliseconds.
  This is to make the numbers easier to read for the user (fewer
  leading zeros).  Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
  for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
  check_index_intersect_extension() and similar functions to be able
  to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
  to calculate the space used by keys in b-trees. (Before we used numeric
  constants.)
- Removed code that assumed that b-trees have similar costs to binary
  trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
  more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
  based on the new fields from best_access_path().

Bug fixes:
- Some queries did not update last_query_cost (it was 0). Fixed by moving
  the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.

Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure.  When a
  handlerton is created, we also create a new cost variable for the
  handlerton. We also create a new variable if the user changes an
  optimizer cost for a not-yet-loaded handlerton, either with command
  line arguments or with SET
  @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs   The default costs + changes from the
                            command line without an engine specifier.
  heap_optimizer_costs      Heap table costs, used for temporary tables
  tmp_table_optimizer_costs The cost for the default on disk internal
                            temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
  accesses the handler has a pointer to this. The cost is copied
  to the table on first access. If one wants to change the cost one
  must first update the global engine cost and then do a FLUSH TABLES.
  This was done to be able to access the costs for an open table
  without any locks.
- When a handlerton is created, the costs are updated the following way
  (see sql/keycaches.cc for details):
  - Use 'default_optimizer_costs' as a base.
  - Call hton->update_optimizer_costs() to override with the engine's
    default costs.
  - Override the costs that the user has specified for the engine.
  - On handler open, copy the engine cost from the handlerton to TABLE_SHARE.
  - Call handler::update_optimizer_costs() to allow the engine to update
    costs for this particular table.
  - There are two costs stored in THD. These are copied to the handler
    when the table is used in a query:
    - optimizer_where_cost
    - optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
  structure. (Idea/Suggestion by Igor)
2023-02-02 23:54:45 +03:00
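
A compact sketch, under assumed names, of the convention described in the commit above: engine cost functions return an IO_AND_CPU_COST-style pair (expected disk blocks, CPU cost), and a handler::cost()-like helper turns it into a single figure in milliseconds using per-engine optimizer cost variables. The exact formula in the server may differ; see sql/optimizer_costs.h and sql/optimizer_defaults.h for the real definitions.

  // Assumed structures for illustration only.
  struct IoAndCpuCost
  {
    double io;    // number of disk blocks expected to be read
    double cpu;   // CPU cost, already in milliseconds
  };

  struct EngineCosts
  {
    double disk_read_cost;    // ms per block read from disk (SSD-based default)
    double disk_read_ratio;   // fraction of reads assumed to actually hit disk
  };

  // Convert an engine-returned cost into one number in milliseconds.
  static double cost_in_ms(const IoAndCpuCost &c, const EngineCosts &e)
  {
    return c.io * e.disk_read_cost * e.disk_read_ratio + c.cpu;
  }
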
Marko Mäkelä
d6d85c92ee Merge 10.11 into 11.0 2023-01-13 12:33:12 +02:00
Marko Mäkelä
bb3a63903e Merge 10.10 into 10.11 2023-01-13 12:22:30 +02:00
Marko Mäkelä
6ffe9ad0d4 Merge 10.9 into 10.10 2023-01-13 11:45:57 +02:00
Marko Mäkelä
5d5735c181 Merge 10.8 into 10.9 2023-01-13 11:22:29 +02:00
Marko Mäkelä
88c35781cc Merge 10.7 into 10.8 2023-01-13 11:11:04 +02:00
Marko Mäkelä
1e04cafcba Merge 10.6 into 10.7 2023-01-13 10:47:56 +02:00
Marko Mäkelä
3386b30975 Merge 10.5 into 10.6 2023-01-13 10:45:41 +02:00
Marko Mäkelä
73ecab3d26 Merge 10.4 into 10.5 2023-01-13 10:18:30 +02:00
Marko Mäkelä
f27e9c8947 MDEV-29694 Remove the InnoDB change buffer
The purpose of the change buffer was to reduce random disk access,
which could be useful on rotational storage, but maybe less so on
solid-state storage.
When we wished to
(1) insert a record into a non-unique secondary index,
(2) delete-mark a secondary index record,
(3) delete a secondary index record as part of purge (but not ROLLBACK),
and the B-tree leaf page where the record belongs was not in the buffer
pool, we inserted a record into the change buffer B-tree, indexed by
the page identifier. When the page was eventually read into the buffer
pool, we looked up the change buffer B-tree for any modifications to the
page and applied them upon the completion of the read operation. This
was called the insert buffer merge.

We remove the change buffer, because it has been the source of
various hard-to-reproduce corruption bugs, including those fixed in
commit 5b9ee8d819 and
commit 165564d3c3 but not limited to them.

A downgrade will fail with a clear message starting with
commit db14eb16f9 (MDEV-30106).

buf_page_t::state: Merge IBUF_EXIST to UNFIXED and
WRITE_FIX_IBUF to WRITE_FIX.

buf_pool_t::watch[]: Remove.

trx_t: Move isolation_level, check_foreigns, check_unique_secondary,
bulk_insert into the same bit-field. The only purpose of
trx_t::check_unique_secondary is to enable bulk insert into an
empty table. It no longer enables insert buffering for UNIQUE INDEX.

btr_cur_t::thr: Remove. This field was originally needed for change
buffering. Later, its use was extended to cover SPATIAL INDEX.
Much of the time, rtr_info::thr holds this field. When it does not,
we will add parameters to SPATIAL INDEX specific functions.

ibuf_upgrade_needed(): Check if the change buffer needs to be upgraded.

ibuf_upgrade(): Merge and upgrade the change buffer after all redo log
has been applied. Free any pages consumed by the change buffer, and
zero out the change buffer root page to mark the upgrade completed,
and to prevent a downgrade to an earlier version.

dict_load_tablespaces(): Renamed from
dict_check_tablespaces_and_store_max_id(). This needs to be invoked
before ibuf_upgrade().

btr_cur_open_at_rnd_pos(): Specialize for use in persistent statistics.
The change buffer merge does not need this function anymore.

btr_page_alloc(): Renamed from btr_page_alloc_low(). We no longer
allocate any change buffer pages.

row_search_index_entry(), btr_lift_page_up(): Add a parameter thr
for the SPATIAL INDEX case.

rtr_page_split_and_insert(): Specialized from btr_page_split_and_insert().

rtr_root_raise_and_insert(): Specialized from btr_root_raise_and_insert().

Note: The support for upgrading from the MySQL 3.23 or MySQL 4.0
change buffer format that predates the MySQL 4.1 introduction of
the option innodb_file_per_table was removed in MySQL 5.6.5
as part of mysql/mysql-server@69b6241a79
and MariaDB 10.0.11 as part of 1d0f70c2f8.

In the tests innodb.log_upgrade and innodb.log_corruption, we create
valid (upgraded) change buffer pages.

Tested by: Matthias Leich
2023-01-11 17:59:36 +02:00
Marko Mäkelä
3a237f7666 Merge 10.10 into 10.11 2023-01-11 11:13:56 +02:00
Sergei Golubchik
fdcfc25127 Merge branch '10.3' into 10.4 2023-01-10 21:04:17 +01:00
Daniel Black
56948ee54c clang15 warnings - unused vars and old prototypes
clang15 finally errors on old prototype definitions.

It's also a lot fussier about variables that aren't used,
as is the case a number of times with loop counters that
are never examined.

RocksDB was complaining that its get_range function was
declared without the array length in ha_rocksdb.h. A plain
constant is used rather than trying to import the header
providing Rdb_key_def::INDEX_NUMBER_SIZE (which was causing a
lot of errors in other definitions). If the constant
does change, we can be assured that the same compiler warnings
will tell us of the error.

The ha_rocksdb::index_read_map_impl DBUG_EXECUTE_IF was similar
to the existing endless functions used in replication tests.
It's a rather moot point as the rocksdb.force_shutdown test that
uses myrocks_busy_loop_on_row_read is currently disabled.
2023-01-10 17:10:43 +00:00
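
An illustrative C++ fragment (simplified names, not the actual ha_rocksdb.h code) of the array-length point above: declaring the array parameter with an explicit bound in the header, matching the definition, keeps newer compilers from warning about mismatched array parameters; the constant here is a hypothetical stand-in for Rdb_key_def::INDEX_NUMBER_SIZE.

  // Simplified illustration; INDEX_NUMBER_SIZE_SKETCH is a hypothetical stand-in.
  static constexpr int INDEX_NUMBER_SIZE_SKETCH= 4;

  // Header-style declaration: spell out the array length so it matches the definition.
  int get_range_sketch(int index_id, unsigned char buf[2 * INDEX_NUMBER_SIZE_SKETCH]);

  // Definition with the same, explicit bound.
  int get_range_sketch(int index_id, unsigned char buf[2 * INDEX_NUMBER_SIZE_SKETCH])
  {
    buf[0]= static_cast<unsigned char>(index_id);   // toy body
    return 2 * INDEX_NUMBER_SIZE_SKETCH;
  }

  int main()
  {
    unsigned char buf[2 * INDEX_NUMBER_SIZE_SKETCH]= {};
    return get_range_sketch(1, buf) == 2 * INDEX_NUMBER_SIZE_SKETCH ? 0 : 1;
  }
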
Marko Mäkelä
cae5a0328b Merge 10.9 into 10.10 2023-01-10 15:06:25 +02:00
Marko Mäkelä
820ebcec86 Merge 10.8 into 10.9 2023-01-10 14:50:58 +02:00
Marko Mäkelä
92c8d6f168 Merge 10.7 into 10.8
The MDEV-25004 test innodb_fts.versioning is omitted because ever since
commit 685d958e38 InnoDB would not allow
writes to a database where the redo log file ib_logfile0 is missing.
2023-01-10 14:42:50 +02:00
Marko Mäkelä
8356fb68c3 Merge 10.6 into 10.7 2023-01-04 14:52:25 +02:00
Marko Mäkelä
e441c32a0b Merge 10.5 into 10.6 2023-01-03 18:13:11 +02:00
Marko Mäkelä
8b9b4ab3f5 Merge 10.4 into 10.5 2023-01-03 17:08:42 +02:00
Marko Mäkelä
fb0808c450 Merge 10.3 into 10.4 2023-01-03 16:10:02 +02:00
musvaage
7c5609fb64 typos 2022-12-21 12:46:52 +11:00
Marko Mäkelä
0aca3012a1 Merge 10.10 into 10.11 2022-12-14 09:18:30 +02:00
Marko Mäkelä
fa389b9098 Merge 10.9 into 10.10 2022-12-14 08:57:39 +02:00
Marko Mäkelä
b7914f562d Merge 10.8 into 10.9 2022-12-13 18:24:51 +02:00
Marko Mäkelä
d7a4ce3c80 Merge 10.7 into 10.8 2022-12-13 18:11:24 +02:00
Marko Mäkelä
25b91c3f13 Merge 10.6 into 10.7 2022-12-13 18:01:49 +02:00
Marko Mäkelä
a8a5c8a1b8 Merge 10.5 into 10.6 2022-12-13 16:58:58 +02:00
Marko Mäkelä
1dc2f35598 Merge 10.4 into 10.5 2022-12-13 14:39:18 +02:00
Marko Mäkelä
fdf43b5c78 Merge 10.3 into 10.4 2022-12-13 11:37:33 +02:00
Alexander Barkov
6216a2dfa2 MDEV-29473 UBSAN: Signed integer overflow: X * Y cannot be represented in type 'int' in strings/dtoa.c
Fixing a few problems revealed by UBSAN in type_float.test

- multiplication overflow in dtoa.c

- uninitialized Field::geom_type (and Field::srid as well)

- Wrong call-back function types used in combination with SHOW_FUNC.
  Changes in the mysql_show_var_func data type definition were not
  properly addressed all around the code by the following commits:
    b4ff64568c
    18feb62fee
    0ee879ff8a

  Adding a helper SHOW_FUNC_ENTRY() function and replacing
  all mysql_show_var_func declarations using SHOW_FUNC
  with SHOW_FUNC_ENTRY, to catch mysql_show_var_func type mismatches
  at compilation time in the future.
2022-11-17 17:51:01 +04:00
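
A hedged C++ sketch of the idea behind such a helper (the real SHOW_FUNC_ENTRY() in the MariaDB tree may be defined differently): routing every SHOW_FUNC entry through a small wrapper forces the initializer to have exactly the mysql_show_var_func type, so a callback with a wrong signature becomes a compile-time error instead of a latent type mismatch.

  // Illustrative types and helper; the real definitions live in the server headers.
  struct ShowVar;
  enum ShowType { SHOW_FUNC_SKETCH };

  using show_var_func= int (*)(void *thd, ShowVar *var, char *buff);

  struct ShowVarEntry
  {
    const char   *name;
    show_var_func func;
    ShowType      type;
  };

  // The helper only accepts a function of exactly the right type, so passing a
  // callback with a mismatched signature fails to compile.
  constexpr ShowVarEntry show_func_entry(const char *name, show_var_func f)
  {
    return ShowVarEntry{name, f, SHOW_FUNC_SKETCH};
  }

  static int show_uptime(void *, ShowVar *, char *) { return 0; }

  static ShowVarEntry status_vars[]=
  {
    show_func_entry("Uptime", show_uptime),
  };

  int main()
  {
    char buff[16];
    return status_vars[0].func(nullptr, nullptr, buff);
  }
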
Oleksandr Byelkin
e387b396d1 Merge branch '10.10' into 10.11 2022-11-03 11:52:13 +01:00
Oleksandr Byelkin
f8997c68fe Merge branch '10.9' into 10.10 2022-11-03 11:47:10 +01:00
Oleksandr Byelkin
7fef00fdd7 Merge branch '10.8' into 10.9 2022-11-02 21:43:42 +01:00
Oleksandr Byelkin
2e2173a359 Merge branch '10.6' into 10.7 2022-11-02 21:06:47 +01:00