Generally, use an older version of the SDK, because the newer ones break
with older CMake. On Macs, use a newer version to fix the Mac-specific
build error.
Update AWS SDK version from 1.0.8 to 1.0.100
Commit b64910ce27 (MDEV-12453)
enabled AWS_SDK to build correctly on buildbot.
Travis still had build faults like the one below, despite many common
elements between the builds:
[ 24%] Building CXX object storage/rocksdb/CMakeFiles/rocksdblib.dir/rocksdb/db/internal_stats.cc.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/cstring:79:9: error: no member named 'strcoll' in the global namespace; did you mean 'strtoll'?
using ::strcoll;
~~^
/usr/include/stdlib.h:169:3: note: 'strtoll' declared here
strtoll(const char *__str, char **__endptr, int __base);
^
When MDEV-6076 repurposed the field PAGE_MAX_TRX_ID, it was assumed
that the field was always 0 in the clustered index of old data files.
This was not the case with IMPORT TABLESPACE (introduced in MySQL 5.6
and MariaDB 10.0), which writes the transaction ID to all index
pages, including clustered index pages.
This means that on a data file that was at some point of its life
IMPORTed to an InnoDB instance, MariaDB 10.2.4 or later could interpret
the transaction ID as a persistent AUTO_INCREMENT value.
This also means that future changes that repurpose PAGE_MAX_TRX_ID
in the clustered index may cause trouble with files that were imported
at some point of their life.
There is a separate minor issue that InnoDB is writing PAGE_MAX_TRX_ID
to every secondary index page, even though it is only needed on leaf
pages. From now on we will write PAGE_MAX_TRX_ID as 0 to non-leaf pages,
just to be able to keep stricter debug assertions.
btr_root_raise_and_insert(): Reset the PAGE_MAX_TRX_ID field on non-root
pages of the clustered index, and on the no-longer-leaf root page of
secondary indexes.
AbstractCallback::is_root_page(): Remove. Use page_is_root() instead.
PageConverter::update_index_page(): Reset the PAGE_MAX_TRX_ID to 0
on other pages than the clustered index root page or secondary index
leaf pages.
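The rule these changes enforce can be captured in one predicate. A
minimal sketch, with plain booleans standing in for InnoDB's
dict_index_is_clust(), page_is_root() and page_is_leaf():

    // Where must PAGE_MAX_TRX_ID be preserved?  Only on the clustered
    // index root page (the persistent AUTO_INCREMENT value) and on
    // secondary index leaf pages; everywhere else it can be reset to 0.
    static bool keep_page_max_trx_id(bool clustered, bool is_root,
                                     bool is_leaf)
    {
        return clustered ? is_root : is_leaf;
    }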
Fixed handling of default values with cached temporal functions so that the
CREATE TABLE statement now succeeds.
Fixed virtual column session cleanup.
Fixed the error message.
Added quoting of date/time values in cases where it had been omitted.
Added a test case in default.test.
Updated test result files.
Fixed the bug by failing the statement with an error message that explains
that an auto-increment column may not be used in an expression for a
check constraint.
Added a test case in check_constraint.test.
Updated existing tests and results.
PARS_INTEGER_TOKEN: Remove. The lexer returns only PARS_INT_TOKEN.
PARS_FIXBINARY_LIT, PARS_BLOB_LIT: Remove. These are never returned
by the lexer. In sym_tab_add_bound_lit(), use PARS_STR_LIT.
dict_index_is_sec_or_ibuf(): Use a single arithmetic expression.
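For illustration, a self-contained sketch of the single-expression
form, with stand-in values for the DICT_CLUSTERED and DICT_IBUF flag
bits (the insert buffer index carries both bits):

    #include <cstdint>

    constexpr uint32_t DICT_CLUSTERED = 1;  // stand-in flag values
    constexpr uint32_t DICT_IBUF      = 8;

    // The masked value equals DICT_CLUSTERED only for a clustered
    // non-ibuf index, so one comparison covers "secondary or ibuf".
    static bool is_sec_or_ibuf(uint32_t type)
    {
        return (type & (DICT_CLUSTERED | DICT_IBUF)) != DICT_CLUSTERED;
    }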
rtr_split_page_move_rec_list(): Remove a redundant condition on
dict_index_is_sec_or_ibuf(). This function is always invoked on
a spatial index, which also is a secondary index.
Remove clang-3.8 which doesn't have a repository on apt.llvm.org any
more.
For OSX, xcode8.3 is explicitly specified.
/usr/local/Cellar is used as a cache repository to save brew install
time on OSX (and /usr/local was too big).
Debian autobake.sh is moved to a matrix include.
Other branches of the matrix build test other test suites.
An Ubuntu Galera package is downloaded and used in the test suite.
TYPE=RelWithDebInfo is used with the tests to provide backtraces with
line numbers when crashes occur.
PLUGIN_AWS_KEY_MANAGEMENT is enabled in the build.
Code supporting TYPE=Debug and -DWITH_ASAN=ON is included but not
enabled, due to the large number of errors.
More tests run in parallel (6), as container-based builds seem to
support this. The test case timeout is set to 2 minutes, as large
stalls would push a job over the 50-minute limit.
ccache is enabled where possible. Linux clang builds don't use it, as
the required minimum CMake version isn't available there.
When opening a VIEW fails (because of an absent table, for example), it
is still possible to print its definition, but some variables are not
set (table_list->derived->derived), so it is better not to test that
pointer when a safer alternative (table_list itself) exists.
On Windows it cannot properly append to the file, since it doesn't use
CreateFile(..., FILE_APPEND_DATA, ...).
This fixes main.shutdown failures on Windows.
Annotate_rows_log_event again. When a new annotate event comes,
the server applies it first (which backs up thd->query_string),
then frees the old annotate event, if any. Normally there isn't.
But with sub-statements (e.g. triggers) new annotate event comes
before the first one is freed, so the second event backs up
thd->query_string that was set by the first annotate event. Then
the first event is freed, together with its query string. And then
the second event restores thd->query_string to this freed memory.
Fix: free old annotate event before applying the new one.
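A minimal self-contained sketch of the pattern and the fix, with
illustrative stand-in types (not the server's actual classes):

    #include <cstdlib>

    struct Thd { const char* query_string = nullptr; };
    struct AnnotateEvent { char* query; };  // assume strdup()'d

    static AnnotateEvent* current_annotate = nullptr;

    static void free_annotate(AnnotateEvent* ev)
    {
        std::free(ev->query);  // the buffer a stale backup would alias
        delete ev;
    }

    static void apply_annotate(Thd* thd, AnnotateEvent* ev)
    {
        // Fix: free the old event before applying the new one.  With
        // the buggy order (apply new, then free old), the backup of
        // thd->query_string taken while applying points into the old
        // event's query string, which is then freed and later restored
        // as dangling memory.
        if (current_annotate)
            free_annotate(current_annotate);
        thd->query_string = ev->query;
        current_annotate = ev;
    }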
Automatic shortening of a too-long non-unique key should produce a
note, not a warning. It's a normal optimization that doesn't affect
correctness, and it should never be converted to an error, no matter
how strict sql_mode is.
In MYSQL_ADD_PLUGIN, do not add TARGET_LINK_LIBRARIES twice for the
LINK_LIBRARIES parameter. It is usually harmless to add libraries
twice. However, aws_key_management uses -Wl,-whole-archive to work
around linker issues on Linux, and if libraries are added twice with
whole-archive, linking fails complaining about duplicate symbols.
Use CMAKE_CXX_STANDARD to set C++11 flags with CMake 3.1+ (Apple's
flags differ somewhat from standard clang).
port htonbe16/32/64 macros for rocksdb
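For reference, a minimal sketch of one such macro under the usual
compiler endianness guards (the ported code may differ in detail):

    #include <cstdint>

    // Host order to big-endian: a byte swap on little-endian hosts,
    // a no-op on big-endian ones.
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    #  define htonbe64(x) __builtin_bswap64((uint64_t)(x))
    #else
    #  define htonbe64(x) (x)
    #endif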
Use reinterpret_cast<size_t> to cast macOS's pthread_t (a pointer
type) to size_t, for rocksdb.
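A minimal sketch of the cast as it applies on macOS, where pthread_t
is a pointer to an opaque struct (so a static_cast to an integer is
ill-formed):

    #include <pthread.h>
    #include <cstddef>

    // reinterpret_cast carries the pointer value over into an integer
    // of the same width.
    static size_t thread_id(pthread_t t)
    {
        return reinterpret_cast<size_t>(t);
    }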
When a CTE referring to another CTE from the same WITH clause was used
twice, the server could not find the second CTE and reported a bogus
error message.
This happened because, for any unit created as a clone of a CTE
specification, the pointer to the WITH clause that owned this CTE was
not set.
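An illustrative sketch of the missing step, with hypothetical stand-in
types (the server's actual parser structures differ):

    struct WithClause;
    struct Unit { WithClause* owning_with = nullptr; };

    static Unit clone_cte_spec(const Unit& spec)
    {
        Unit clone;
        // The missing assignment: the clone must inherit the pointer
        // to the owning WITH clause, or CTE lookups that start from
        // the clone cannot reach sibling CTEs.
        clone.owning_with = spec.owning_with;
        return clone;
    }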
Mroonga generated far too many warnings (and hence output) for Travis's
sensibilities on output log file size. So we just remove the storage
engine.
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
Additionally use clang as a compiler, versions 3.8, 3.9 and 4.0
Additionally use gcc/g++-7
Add additional packages used by the build now that they are whitelisted:
- libsnappy-dev - innodb compression
- liblzma-dev - innodb compression
- libzmq-dev - used by Mroonga
- libdistro-info-perl - used by autobake-debian
Change to a container build, as they tend to have more RAM.
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
ha_innobase::defragment_table(): Skip corrupted indexes and
FULLTEXT INDEX. In InnoDB, FULLTEXT INDEX is implemented with
auxiliary tables. We will not defragment them on OPTIMIZE TABLE.
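A self-contained sketch of the skip logic, with stand-in fields for the
index state (the real patch checks InnoDB's corruption flag and the
DICT_FTS type bit):

    struct Index { bool corrupted; bool fulltext; Index* next; };

    // Corrupted indexes cannot be scanned safely, and FULLTEXT INDEX
    // is backed by auxiliary tables that OPTIMIZE TABLE does not
    // defragment.
    static Index* next_defragmentable(Index* index)
    {
        while (index && (index->corrupted || index->fulltext))
            index = index->next;
        return index;
    }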
An attempt to mark a reference as dependent led to transferring this
property to the original view field and, through it, to other
references to this field, which cannot be dependent.
buf_dblwr_create(): Remove a bogus check for the buffer pool size.
Theoretically, there is no problem if the doublewrite buffer is
larger than the buffer pool. It could only cause trouble on crash
recovery, and on recovery the doublewrite buffer is read to a buffer
that is allocated outside of the buffer pool. Moreover, this check
was only performed when the database was initialized for the first
time.
On a normal startup, buf_dblwr_init() would not enforce any
rule on the innodb_buffer_pool_size.
Furthermore, in case of an error, commit the mini-transaction in order
to avoid an assertion failure on shutdown. Yes, this will leave the
doublewrite buffer in a corrupted state, but the doublewrite buffer
should only be initialized when the data files are being initialized
from scratch in the first place.