Systemd has a socket activation feature whereby a mariadb.socket
unit defines the sockets to listen on; when a connection
occurs, systemd passes those file descriptors directly to
mariadbd.
The new functionality is utilized when starting as follows:
systemctl start mariadb.socket
The mariadb.socket unit only needs to contain the network
information, i.e. ListenStream= directives; the mariadb.service
unit is still used to start the service itself.
When mariadbd is started in this way, the socket, port, bind-address
and backlog are all assumed to be fully specified by the mariadb.socket
unit, and as such the my.cnf settings and command line
arguments for these network settings are ignored.
See man systemd.socket for how to limit this to specific ports.
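A minimal sketch of such a unit, assuming the conventional MariaDB port and socket path (distribution packaging may differ):
```
# mariadb.socket (illustrative sketch; path and port are assumptions)
[Unit]
Description=MariaDB server socket

[Socket]
ListenStream=/run/mysqld/mysqld.sock
ListenStream=3306

[Install]
WantedBy=sockets.target
```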
In socket activation mode, extra ports (those otherwise specified
with extra_port) are the sockets declared with FileDescriptorName=extra.
These must be in a separate socket unit such as mariadb-extra.socket,
and that unit requires a Service=mariadb.service directive to map it
to the original service. Extra ports need systemd v227 or greater,
where FileDescriptorName= was added (RHEL/CentOS 7 ships v219);
otherwise the extra ports are treated like ordinary ports.
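A hedged sketch of such an extra socket unit (the port number here is only an example):
```
# mariadb-extra.socket (illustrative sketch; needs systemd v227+)
[Unit]
Description=MariaDB extra-port socket

[Socket]
ListenStream=3307
FileDescriptorName=extra
Service=mariadb.service

[Install]
WantedBy=sockets.target
```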
The number of sockets isn't limited when using systemd socket activation
(except by operating system limits on file descriptors and a minimal
amount of memory used per file descriptor). The systemd sockets passed
can have any ownership or permissions, including those that the
mariadbd process wouldn't normally have permission to create.
This implementation is compatible with mariadb.service definitions.
A service started with:
systemctl start mariadb.service
still actually starts mariadb.service and uses all the my.cnf
settings for sockets and ports, as it did previously.
Cleanups from PR 900:
- Use mariadb names instead of mysql, add secure-installation, and
organize the man pages.
- Remove obsolete script `/make_binary_distribution`
- Don't build the `mariadb-install-db` binary in without-server builds.
Based on the man page:
```
The replace program is used by msql2mysql. See msql2mysql(1).
```
msql2mysql is labeled as a Client component, so the dependency should be too.
Closes PR #900
Under WITHOUT_WSREP:
Exclude support files that are server-only, such as:
* wsrep.cnf
* wsrep_notify
* log rotate config files
* mysqld_multi
Exclude man pages of server components
`mallinfo` is deprecated since glibc 2.33 and has been replaced by mallinfo2.
The deprecation causes the server build to fail if the glibc version is > 2.33.
Check whether mallinfo2 exists on the system and use it instead.
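A minimal sketch of the resulting pattern, assuming the build system defines HAVE_MALLINFO2 when the check succeeds (the macro name is an assumption, not necessarily what the build uses):
```
#include <malloc.h>

static unsigned long long allocated_bytes(void)
{
#ifdef HAVE_MALLINFO2
  /* glibc >= 2.33: mallinfo2() reports sizes in size_t fields */
  struct mallinfo2 mi= mallinfo2();
  return mi.uordblks;
#else
  /* older glibc: mallinfo() uses int fields; deprecated since 2.33 */
  struct mallinfo mi= mallinfo();
  return (unsigned long long) mi.uordblks;
#endif
}
```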
The function row_upd_clust_step() invokes several static functions,
some of which used to commit the mini-transaction in some cases.
If innobase_get_computed_value() failed for some reason,
we would fail to invoke mtr_t::commit() and release the buffer pool
page latches. This would likely lead to a hanging server later.
This regression was introduced in
commit 97db6c15ea (MDEV-20618).
row_upd_index_is_referenced(), row_upd_sec_index_entry():
Cleanup: Replace some ibool with bool.
row_upd_clust_rec_by_insert(), row_upd_clust_rec(): Guarantee that
the mini-transaction will always remain in active state.
row_upd_del_mark_clust_rec(): Guarantee that
the mini-transaction will always remain in active state.
This fixes one "leak" of mini-transaction on DB_COMPUTE_VALUE_FAILED.
row_upd_clust_step(): Use only one return path, which will always
invoke mtr.commit(). After a failed row_upd_store_row() call, we
will no longer "leak" the mini-transaction.
This fix was verified by RQG on 10.6 (depending on MDEV-371 that
was introduced in 10.4). Unfortunately, it is challenging to
create a regression test for this, and a test case could soon become
invalid as more bugs in virtual column evaluation are fixed.
A side effect of the MDEV-24589 bug fix is that if
FLUSH TABLE...FOR EXPORT is initiated before the history of an
earlier DROP INDEX operation has been purged, then the data file
will contain allocated pages that belonged to the dropped indexes.
These pages would never be freed after a subsequent IMPORT TABLESPACE.
We will work around this regression by making IMPORT TABLESPACE
tolerate pages that refer to an unknown index.
The change from HAVE_valgrind_or_MSAN to HAVE_valgrind in
af784385b4 was incorrect.
In my_valgrind.h, when building with clang (hence
__has_feature(memory_sanitizer) is false) and -DWITH_VALGRIND=1,
but without memcheck.h, we end up with MEM_CHECK_DEFINED being empty.
If we are also building with CMAKE_BUILD_TYPE=Debug, this results in a
number of [-Werror,-Wunused-variable] errors, because with
MEM_CHECK_DEFINED empty there are no remaining uses of the fixed field
and the InnoDB variables touched by this patch.
So we stop using HAVE_valgrind as a catch-all, and use the name
HAVE_CHECK_MEM to indicate that a MEM_CHECK_DEFINED function exists.
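A hedged sketch of the failure mode; the definitions here are simplified, not the exact my_valgrind.h contents:
```
#ifdef HAVE_VALGRIND_MEMCHECK_H
# include <valgrind/memcheck.h>
# define MEM_CHECK_DEFINED(a, len) VALGRIND_CHECK_MEM_IS_DEFINED(a, len)
#else
# define MEM_CHECK_DEFINED(a, len) /* empty */
#endif

static void example(void)
{
  int checked_field= 0;
  /* If MEM_CHECK_DEFINED expands to nothing, checked_field has no
     remaining uses, and a Debug build with -Werror fails with
     -Wunused-variable. */
  MEM_CHECK_DEFINED(&checked_field, sizeof checked_field);
}
```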
Reviewer: Monty
Corrects: af784385b4
Binlog group commit could lead to a situation where the group commit leader
accesses participant thd's wsrep client state concurrently with the
thread executing the participant thd.
This is because of a race condition in
MYSQL_BIN_LOG::write_transaction_to_binlog_events(),
and was fixed by moving wsrep_ordered_commit() to happen in
MYSQL_BIN_LOG::queue_for_group_commit() under protection of
LOCK_prepare_ordered mutex.
splittable derived
If one of the joined tables of the processed query is a materialized derived
table (or view or CTE) with a GROUP BY clause, then under some conditions it
can be subject to split optimization. With this optimization, new equalities
are injected into the WHERE condition of the SELECT that specifies this
derived table. The injected equalities are generated for all join orders
with which the split optimization can be employed. After the best join order
has been chosen, only some of these equalities are really needed. The
others can be safely removed. If this is not done and some of the injected
equalities involve expressions over semi-joins with look-up access, then
the query may return a wrong result set.
This patch effectively removes equalities injected for split optimization
that are needed only at the optimization stage and not needed for execution.
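For illustration, a query of roughly this shape (table and column names are hypothetical) is a candidate for split optimization: the grouping column of the derived table is used in the join condition, so an equality on d.a can be injected into the derived table's WHERE clause:
```
SELECT t1.a, d.max_b
FROM t1
JOIN (SELECT a, MAX(b) AS max_b FROM t2 GROUP BY a) AS d
  ON d.a = t1.a;
```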
Approved by serg@mariadb.com
- use FIND_PACKAGE(LIBAIO) to find libaio
- Use standard CMake conventions in Find{PMEM,URING}.cmake
- Drop the LIB from LIB{PMEM,URING}_{INCLUDE_DIR,LIBRARIES}
It is cleaner, and consistent with how other packages are handled in CMake.
e.g. a successful FIND_PACKAGE(PMEM) now sets PMEM_FOUND, PMEM_LIBRARIES,
PMEM_INCLUDE_DIR, not LIBPMEM_{FOUND,LIBRARIES,INCLUDE_DIR}.
- Decrease the output: use FIND_PACKAGE with the QUIET argument.
- For Linux packages, either liburing or libaio is required.
If liburing is installed, libaio does not need to be present.
Use FIND_PACKAGE([LIBAIO|URING] REQUIRED) if either library is required.
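A sketch of the resulting usage, following the convention above (the target name innobase is used only as an example):
```
FIND_PACKAGE(URING QUIET)
IF(NOT URING_FOUND)
  # libaio is only required when liburing is not available
  FIND_PACKAGE(LIBAIO REQUIRED)
ENDIF()

IF(URING_FOUND)
  INCLUDE_DIRECTORIES(${URING_INCLUDE_DIR})
  TARGET_LINK_LIBRARIES(innobase ${URING_LIBRARIES})
ELSE()
  INCLUDE_DIRECTORIES(${LIBAIO_INCLUDE_DIR})
  TARGET_LINK_LIBRARIES(innobase ${LIBAIO_LIBRARIES})
ENDIF()
```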
row_upd_clust_step(): Remove the "trigger" on DELETE SYS_INDEXES
that would invoke dict_drop_index_tree(). Let us do it on purge.
row_purge_remove_clust_if_poss_low(): Invoke
dict_drop_index_tree() when purging a delete-marked SYS_INDEXES record.
* Remove usage of wsrep_provider variable in galera_ist_restart_joiner
* Rename galera_load_provider.inc and galera_unload_provider.inc to
galera_start_replication.inc and galera_stop_replication.inc. Their
original names no longer reflected what these include files do.
Follow-up for ce3a2a688d
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
When we are testing under valgrind, we should be
testing the actual code path used in production code.
This optimization of using a fast shutdown did, pre-2009,
help Purify shut down quickly. In
b125770aaa (2009) this was magically
equated to valgrind.
Reviewer: Monty
by compound index
This typo bug may lead to wrong result sets for equi-join queries where
the join operation is supported by a compound index such that the order of
its components differs from the order of the corresponding columns in
the table the index belongs to. The bug manifests itself only when usage
of the BNLH algorithm is forced.
The fix for the bug was provided by Chu Huaxing.
Currently, for the server we only check whether $MYSQL_HOME is set.
Add a check whether $MARIADB_HOME is set and try to read the
configuration file from that directory. Only if $MARIADB_HOME is NULL
do we then check $MYSQL_HOME.
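A minimal sketch of the lookup order (the helper name is hypothetical; the real change lives in the server's configuration-file handling):
```
#include <stdlib.h>

/* Prefer $MARIADB_HOME; fall back to $MYSQL_HOME only when it is unset. */
static const char *config_home_dir(void)
{
  const char *dir= getenv("MARIADB_HOME");
  if (dir == NULL)
    dir= getenv("MYSQL_HOME");
  return dir;  /* may still be NULL if neither variable is set */
}
```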
As suggested by Vladislav Vaintroub, let us remove misleading
and malformatted startup messages.
Even if the global variable srv_use_atomic_writes were set, we would
still invoke my_test_if_atomic_write() to check if writes are atomic
with a particular page size.
When using the default innodb_page_size=16k, page writes should be
atomic on NTFS when using ROW_FORMAT=COMPRESSED and KEY_BLOCK_SIZE<=4.
Disabling srv_use_atomic_writes when innodb_file_per_table=OFF does
not make sense, because that is a dynamic parameter.
We also correct the documentation string of innodb_use_atomic_writes
and remove the duplicate variable innobase_use_atomic_writes.
The debug parameter innodb_simulate_comp_failures injected compression
failures for ROW_FORMAT=COMPRESSED tables, breaking the pre-existing
logic that I had implemented in the InnoDB Plugin for MySQL 5.1 to prevent
compressed page overflows. A much better check is already achieved by
defining UNIV_ZIP_COPY at compilation time.
(Only UNIV_ZIP_DEBUG is part of cmake -DWITH_INNODB_EXTRA_DEBUG=ON.)
In commit eaeb8ec4b8 (MDEV-24653)
an incorrect debug assertion was introduced.
btr_pcur_store_position(): If the only record in the page is the
instant ALTER TABLE metadata record, we cannot expect there to be
a successor page. The situation could be improved by MDEV-24673 later.