but even if this script is called as /bin/mysql_install_db,
it is still a standard install and the scripts are in /usr/share/
(not in /share/)
2. Fix of the bindir path
dict_create_foreign_constraints_low(): Clean up the way in
which the error messages are initialized, and ensure that
the table name is always initialized.
Most of the mtr tests in the galera_3nodes suite fail
for a variety of reasons with a variety of errors.
This patch fixes several substantial flaws
in the galera_3nodes suite tests and in the mtr framework
service files, adapting the tests from galera_3nodes
for the current version of MariaDB.
This patch also synchronizes some galera_3nodes-related
files with the latest changes made for MDEV-17835 (v2 patch)
and for MDEV-18379 in other branches (10.2 and 10.3).
Closes #1161
The code path where the table was not being rebuilt during ALTER TABLE
was not covered by the test. Add coverage, and remove the debug assertion
that could fail in this case.
i_s_innodb_mutexes_fill_table(): Use the C++ RAII pattern
to ensure that the mutexes are released if an OK()
macro returns from the function prematurely.
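For illustration, a minimal RAII sketch (hypothetical names, not the actual InnoDB mutex types): the guard object releases the lock in its destructor, so any early return, including one hidden inside a macro, cannot leak the mutex.

  #include <mutex>

  // The destructor runs on every exit path, so the mutex is always released.
  struct mutex_guard {
    std::mutex &m;
    explicit mutex_guard(std::mutex &mu) : m(mu) { m.lock(); }
    ~mutex_guard() { m.unlock(); }
    mutex_guard(const mutex_guard &) = delete;
    mutex_guard &operator=(const mutex_guard &) = delete;
  };

  int fill_table_sketch(std::mutex &mu)
  {
    mutex_guard guard(mu);                // acquired here
    if (/* a row fails to be stored */ false)
      return 1;                           // mutex still released
    return 0;                             // mutex released here too
  }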
buf_page_is_corrupted(): Read the global variable srv_checksum_algorithm
only once in order to avoid a race condition when
SET GLOBAL innodb_checksum_algorithm=...;
is being executed concurrently with this function.
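A hedged sketch of the pattern (illustrative names and values, not the actual buf_page_is_corrupted() code): the global is copied into a local once, and only the local copy is consulted afterwards.

  #include <atomic>

  // Stand-in for the server global; assumption: the real variable is updated
  // concurrently by SET GLOBAL innodb_checksum_algorithm=...
  std::atomic<unsigned> checksum_algorithm_example{0};

  bool page_checks_sketch()
  {
    // Read the global exactly once; a concurrent SET GLOBAL cannot change
    // the value between the checks below.
    const unsigned curr_algo = checksum_algorithm_example.load();

    if (curr_algo == 0)        // e.g. "crc32"
      return true;
    if (curr_algo == 1)        // e.g. "none"
      return false;
    return true;
  }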
wsrep_certification_rules: Define as a weak global symbol.
While there are separate _embedded.a for statically
linked storage engine plugins, there is only one ha_innodb.so
which is supposed to work with both values of WITH_WSREP.
The merge from 10.0-galera introduced a reference to a global
variable that is only defined when the server is built WITH_WSREP.
We must define that symbol as weak global, so that when
a dynamically linked InnoDB or XtraDB is used with the embedded
server (which never includes write-set replication patches),
the variable will be read as 0, instead of causing a failure to
load the InnoDB or XtraDB plugin.
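A hedged sketch of the weak-symbol idea (GCC/Clang attribute syntax; the exact declaration in the plugin may differ): a strong definition in a server built WITH_WSREP takes precedence, while in the embedded server, which has no such definition, the zero-initialized weak one is used and the plugin still loads.

  extern "C" {
  // Weak definition: overridden by a strong definition when one exists,
  // otherwise the variable simply reads as 0.
  __attribute__((weak)) unsigned long wsrep_certification_rules = 0;
  }

  unsigned long read_certification_rules_sketch()
  {
    return wsrep_certification_rules;   // 0 unless a strong definition exists
  }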
Only starting with MariaDB 10.2 does the .result file echo
"connect" and "connection" statements. There is no way
the test could have passed on debug builds after
commit 1037edcb11
(which looks like an untested backport from a later version).
ha_innobase::commit_inplace_alter_table(): Do not crash if
innobase_update_foreign_cache() returns an error. It can return
an error on ALTER TABLE if an inconsistent FOREIGN KEY constraint
was created earlier when SET foreign_key_checks=0 was in effect.
Instead, report a warning to the client that constraints cannot
be loaded.
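A hypothetical sketch of this error-handling shape (stub names, not InnoDB's actual code): the failure is downgraded from a fatal condition to a reportable warning.

  #include <cstdio>

  enum fk_status { FK_OK, FK_CANNOT_LOAD };

  // Stub standing in for the foreign-key cache reload step.
  static fk_status update_foreign_cache_stub() { return FK_CANNOT_LOAD; }

  void commit_alter_sketch()
  {
    if (update_foreign_cache_stub() != FK_OK)
    {
      // Previously this path could crash; now the client only gets a warning
      // that the FOREIGN KEY constraints could not be loaded.
      std::fprintf(stderr,
                   "Warning: FOREIGN KEY constraints could not be loaded; "
                   "they may have been created with foreign_key_checks=0\n");
    }
  }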
Disable LOAD DATA LOCAL INFILE support by default and
auto-enable it for the duration of one query, if the query
string starts with the word "load". In all other cases the application
should enable LOAD DATA LOCAL INFILE support explicitly.
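For example, a C-API client that needs LOAD DATA LOCAL INFILE can turn it on explicitly before connecting; a minimal sketch using the standard mysql_options()/MYSQL_OPT_LOCAL_INFILE call:

  #include <mysql.h>

  // Enable LOCAL INFILE explicitly for this connection instead of relying
  // on it being enabled by default.
  bool connect_with_local_infile(MYSQL *mysql, const char *host,
                                 const char *user, const char *passwd,
                                 const char *db)
  {
    unsigned int enable_local_infile = 1;
    mysql_options(mysql, MYSQL_OPT_LOCAL_INFILE, &enable_local_infile);

    return mysql_real_connect(mysql, host, user, passwd, db,
                              0 /* port */, nullptr /* unix socket */,
                              0 /* client flags */) != nullptr;
  }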
If the rlimit.rlim_cur value returned by getrlimit is not the
RLIM_INFINITY magic constant, but a *very* large number, we can allocate
too many open files. Restrict set_max_open_files to only return at most
max_file_limit, as passed via its parameter.
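A hedged sketch of the capping logic (not the actual server code): even when rlim_cur is a huge finite number rather than RLIM_INFINITY, never report more files than the requested max_file_limit.

  #include <sys/resource.h>
  #include <algorithm>

  unsigned long set_max_open_files_sketch(unsigned long max_file_limit)
  {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
      return max_file_limit;               // cannot query: keep the request

    unsigned long cur = (rl.rlim_cur == RLIM_INFINITY)
                            ? max_file_limit
                            : static_cast<unsigned long>(rl.rlim_cur);

    // Cap at the caller-supplied limit so a *very* large rlim_cur cannot
    // make us allocate structures for an absurd number of open files.
    return std::min(cur, max_file_limit);
  }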
This patch contains the port of the MDEV-18379 patch
for the 10.1 branch, but also includes a number of changes
made within MDEV-17835, which are necessary for the
normal operation of tests that use IPv6:
1) Fixed flaws in the galera_3nodes mtr suite control scripts
that prevented them from working with mariabackup.
2) Fixed numerous bugs in the SST scripts and in the mtr test
files (galera_3nodes mtr suite) that prevented the use of Galera
with IPv6 addresses.
3) Fixed flaws in the tests for rsync and mysqldump (in the galera_3nodes
mtr test suite). These tests did not pass without these fixes.
4) Currently, the three-node mtr suite for Galera (galera_3nodes)
performs a separate IPv6 availability check via the "have_ipv6.inc"
file. This check duplicates a more accurate check at the suite.pm
level, which can be used by including the file "check_ipv6.inc".
This patch removes this discrepancy between the suites.
5) The GAL-501 test in the galera_3nodes suite did not contain the option
"--bind-address=::", which is needed for the test to work
correctly with IPv6 (at least on some systems), since without
it the server will not listen for connections on the IPv6 interface.
https://jira.mariadb.org/browse/MDEV-18379
and partially https://jira.mariadb.org/browse/MDEV-17835
commit 6a6a1f37798
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Fri Jan 4 12:31:52 2019 +0100
- Fix a few bugs, mainly concerning discovery and calls from OEM
(and prepare new table types)
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
- Fix wrong line estimate
modified: storage/connect/mysql-test/connect/r/part_table.result
modified: storage/connect/mysql-test/connect/t/part_table.test
commit bd7d2e912d9
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Tue Dec 4 23:35:09 2018 +0100
Fix wrong version number
commit 4933680e7ab
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Sun Dec 2 00:25:05 2018 +0100
- Make PlugSubAlloc exportable
Remove an unused parameter from PlugSubSet
modified: storage/connect/global.h
modified: storage/connect/plugutil.cpp
modified: storage/connect/jsonudf.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/user_connect.cc
- Fix a bug making column catalog XML tables fail
modified: storage/connect/tabxml.cpp
- Comment out wrong message
modified: storage/connect/ha_connect.cc
- Update error message when sorting an ODBC table fails
modified: storage/connect/tabodbc.cpp
- Add an error message when getting an address
from an OEM fails.
modified: storage/connect/reldef.cpp
- Make some modifications useful for OEM module writing
Export discovery functions for CSV, JDBC and XML
Remove an unnecessary include from tabjson.h
Move TDBXML::data_charset function from header file to source
modified: storage/connect/tabfmt.h
modified: storage/connect/tabjson.h
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
- Update test result
modified: storage/connect/mysql-test/connect/r/jdbc_oracle.result
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event exceeds its
vanilla byte representation by a factor of 4/3.
When a binlogged event is about 1GB in size, mysqlbinlog generates
a BINLOG query that can't be sent out due to its size.
This is fixed by fragmenting the BINLOG argument string into
(approximate) halves when the base64-encoded event exceeds 1GB.
In such a case mysqlbinlog puts out
SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;
to represent a big BINLOG.
For prompt memory release, the BINLOG handler is made to reset the BINLOG
argument user variables in the middle of processing, as if
@binlog_fragment_{0,1} = NULL had been assigned.
Note that two fragments are enough, though the client and server may still
need to tweak their @@max_allowed_packet to accommodate the fragment
size (which they would have to do anyway with a greater number of
fragments, should that be desired).
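A hedged illustration (not mysqlbinlog's actual implementation) of emitting a large base64-encoded event as two user-variable fragments plus the BINLOG statement shown above:

  #include <cstdio>
  #include <string>

  void print_fragmented_binlog(FILE *out, const std::string &base64_event)
  {
    // Split the encoded event into approximate halves; the fragments are
    // concatenated again on the server before decoding, so the split point
    // need not be exact.
    const size_t half = base64_event.size() / 2;

    std::fprintf(out, "SET @binlog_fragment_0='%s';\n",
                 base64_event.substr(0, half).c_str());
    std::fprintf(out, "SET @binlog_fragment_1='%s';\n",
                 base64_event.substr(half).c_str());
    std::fprintf(out, "BINLOG @binlog_fragment_0, @binlog_fragment_1;\n");
  }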
On the lower level the following changes are made:
Log_event::print_base64()
still calls the encoder and stores the encoded data into a cache, but
now *without* doing any formatting. The formatting is left for the time
when the cache is copied to an output file (e.g. mysqlbinlog output).
The no-formatting behavior is also reflected by the change in the meaning
of the last argument, which now specifies whether to cache the encoded data.
Rows_log_event::print_helper()
is made to invoke a specialized fragmented cache-to-file copying function,
copy_cache_to_file_wrapped(),
which takes care of the fragmenting and optionally wraps the encoded
strings (fragments) into SQL stanzas.
my_b_copy_to_file()
is refactored into my_b_copy_all_to_file(). The former function
is generalized to accept a limit argument that constrains the copying,
and it no longer reinitializes the cache into reading mode.
The limit has no effect on a fully read cache.
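As a rough sketch of the limit semantics (plain stdio rather than the real IO_CACHE API): the copy stops after at most `limit` bytes, and a source that is already exhausted simply copies nothing more, so the limit is moot in that case.

  #include <cstdio>
  #include <algorithm>

  // Copy at most `limit` bytes from `in` to `out`; returns the number copied.
  size_t copy_with_limit_sketch(FILE *in, FILE *out, size_t limit)
  {
    char buf[4096];
    size_t copied = 0;

    while (copied < limit)
    {
      size_t want = std::min(sizeof(buf), limit - copied);
      size_t got = std::fread(buf, 1, want, in);
      if (got == 0)
        break;                                 // nothing left to copy
      if (std::fwrite(buf, 1, got, out) != got)
        break;                                 // write error
      copied += got;
    }
    return copied;
  }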