Close connection handler on connection failure. This fixes 14 failing tests
in the main suite under the clang+ASAN build.
ASAN report for main.connect looks like this:
=================================================================
==25495==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 146280 byte(s) in 115 object(s) allocated from:
#0 0x4fba47 in calloc /fun/cpp_projects/llvm_toolchain/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:138
#1 0x5a7a02 in mysql_init /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:977:26
#2 0x570a7a in do_connect(st_command*) /work/mariadb/client/mysqltest.cc:6096:26
#3 0x584c39 in main /work/mariadb/client/mysqltest.cc:9321:9
#4 0x7fd15514db96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
Indirect leak of 7065600 byte(s) in 115 object(s) allocated from:
#0 0x4fb80f in __interceptor_malloc /fun/cpp_projects/llvm_toolchain/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:129
#1 0x637a83 in my_context_init /work/mariadb/libmariadb/libmariadb/ma_context.c:367:23
#2 0x59fd16 in mysql_optionsv /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:2738:9
#3 0x5bc1d4 in mysql_options /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:3242:10
#4 0x570b94 in do_connect(st_command*) /work/mariadb/client/mysqltest.cc:6103:7
#5 0x584c39 in main /work/mariadb/client/mysqltest.cc:9321:9
#6 0x7fd15514db96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
Indirect leak of 940240 byte(s) in 115 object(s) allocated from:
#0 0x4fb80f in __interceptor_malloc /fun/cpp_projects/llvm_toolchain/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:129
#1 0x64386e in ma_init_dynamic_array /work/mariadb/libmariadb/libmariadb/ma_array.c:49:31
#2 0x649ead in _hash_init /work/mariadb/libmariadb/libmariadb/ma_hash.c:52:7
#3 0x5a3080 in mysql_optionsv /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:2938:13
#4 0x5bc20c in mysql_options4 /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:3248:10
#5 0x56f63b in connect_n_handle_errors(st_command*, st_mysql*, char const*, char const*, char const*, char const*, int, char const*) /work/mariadb/client/mysqltest.cc:5874:3
#6 0x57146b in do_connect(st_command*) /work/mariadb/client/mysqltest.cc:6193:7
#7 0x584c39 in main /work/mariadb/client/mysqltest.cc:9321:9
#8 0x7fd15514db96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
...
Closes #809
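As a rough illustration of the fix described above, here is a minimal sketch of the
pattern in a connect path (the host, user and database values are placeholders, not
the actual mysqltest code): the handle allocated by mysql_init() must be released
with mysql_close() when the connection attempt fails, otherwise the handle and
everything attached to it via mysql_options() leaks, as in the traces above.

  #include <mysql.h>
  #include <cstdio>

  // Hypothetical, simplified version of the connect path.
  static MYSQL *connect_or_cleanup()
  {
    MYSQL *con = mysql_init(nullptr);        // the calloc() seen in the ASAN trace
    if (!con)
      return nullptr;

    if (!mysql_real_connect(con, "localhost", "root", "", "test",
                            0, nullptr, 0))
    {
      std::fprintf(stderr, "connect failed: %s\n", mysql_error(con));
      mysql_close(con);                      // without this, the handle and everything
                                             // attached via mysql_options() leaks
      return nullptr;
    }
    return con;
  }

  int main()
  {
    MYSQL *con = connect_or_cleanup();
    if (con)
      mysql_close(con);
    return 0;
  }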
32-bit int
The row-based slave applier could not correctly parse the table id when
the value exceeded the maximum of a 32-bit unsigned int.
The reason turned out to be that the placeholder receiving the parsed
value was sized as 4 bytes.
Its type is now fixed to ulonglong.
Additionally, the patch works around the 4-byte size of
Rows_log_event::m_table_id on 32-bit platforms. In case the last_table_id
value overflows the 4-byte maximum, no zero value will ever be generated
for m_table_id and the first wrapped-around value is one; this is achieved
by excluding UINT_MAX32 + 1 from TABLE_SHARE::table_map_id.
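A self-contained illustration of the wrap-around rule (a simplified model of the id
counter, not the server code): skipping any value whose low 32 bits are zero, of
which UINT_MAX32 + 1 is the first, guarantees that a 4-byte m_table_id never reads
back as zero and that the first wrapped-around value is one.

  #include <cstdint>
  #include <cassert>

  // Simplified model: a 64-bit global counter handing out table ids, and a
  // 32-bit m_table_id field as on 32-bit platforms.
  static uint64_t last_table_id = 0;

  static uint64_t next_table_map_id()
  {
    uint64_t id = ++last_table_id;
    // Skip values whose low 32 bits are zero (UINT_MAX32 + 1 being the first
    // such value), so a truncated 4-byte m_table_id can never become 0.
    if ((id & 0xFFFFFFFFULL) == 0)
      id = ++last_table_id;
    return id;
  }

  int main()
  {
    last_table_id = 0xFFFFFFFFULL;       // UINT_MAX32: about to wrap
    uint32_t m_table_id = static_cast<uint32_t>(next_table_map_id());
    assert(m_table_id == 1);             // first wrapped-around value is one, not zero
    return 0;
  }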
dict_sys_get_size(): Replace the time-consuming loop with
a crude estimate that can be computed without holding any mutex.
Even before dict_sys->size was removed in MDEV-13325,
not all memory allocations by the InnoDB data dictionary cache
were being accounted for. One example is foreign key constraints.
Another example is virtual column metadata, starting with 10.2.
Issue:
------
When a subquery contains a UNION, the number of subquery
columns is calculated incorrectly. Only the first
query block of the subquery's UNION is considered, an
array index goes out of bounds, and this is caught by an
assertion.
Solution:
---------
Sum up the columns from all query blocks of the query
expression.
Change specific to 5.6/5.5:
---------------------------
The "child" points to the last query block of the UNION
(as opposed to 5.7+ where it points to the first member of
UNION). So "child->master_unit()->first_select()" is used
to reach the first query block of UNION.
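As a rough sketch of the traversal, with simplified stand-in types (Query_block,
column_count and next are placeholders rather than the server's SELECT_LEX API):
starting from the first query block reached via child->master_unit()->first_select(),
accumulate the column counts over every block of the UNION instead of looking only
at the first one.

  #include <cstddef>

  // Simplified stand-ins for the server's query-block structures.
  struct Query_block
  {
    size_t       column_count;
    Query_block *next;     // next query block in the UNION, or nullptr
  };

  // Count subquery columns over the whole query expression, not just the
  // first query block.
  static size_t subquery_column_count(const Query_block *first)
  {
    size_t total = 0;
    for (const Query_block *qb = first; qb != nullptr; qb = qb->next)
      total += qb->column_count;
    return total;
  }

  int main()
  {
    Query_block second{2, nullptr};
    Query_block first{2, &second};
    return subquery_column_count(&first) == 4 ? 0 : 1;
  }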
PROBLEM
-------
Memory sanitizer reports uninitialized comparisons
in log_in_use(), because strings are compared with
memcmp() instead of strncmp().
FIX
---
Use strncmp() to compare strings.
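A small standalone example of the difference (buffer names and sizes are arbitrary):
memcmp() compares the entire buffer, so when two log names are equal its result
depends on the uninitialized bytes after the terminating NUL, which is exactly what
MemorySanitizer reports; strncmp() stops at the NUL and only reads initialized bytes.

  #include <cstring>
  #include <cstdio>

  int main()
  {
    char a[32];                              // only partially initialized, like the
    char b[32];                              // buffers compared in log_in_use()
    std::strcpy(a, "binlog.000001");
    std::strcpy(b, "binlog.000001");

    // memcmp() compares all 32 bytes: the strings are equal, so the result
    // depends on the uninitialized garbage after the terminating NUL, and
    // MemorySanitizer reports an uninitialized comparison.
    int bad = std::memcmp(a, b, sizeof(a));

    // strncmp() stops at the NUL, so it never reads past the initialized
    // part of either buffer.
    int good = std::strncmp(a, b, sizeof(a));

    std::printf("memcmp=%d strncmp=%d\n", bad, good);
    return 0;
  }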
On startup, InnoDB checks whether the files "ib_logfileN"
(for N from 1 to 100) exist and whether they are readable.
A non-existent file aborted the scan.
A directory instead of a file made InnoDB fail.
Now it treats "directory exists" as "file doesn't exist".
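A hedged sketch of the check (the helper name and loop are illustrative, not the
actual InnoDB code): probe each ib_logfileN with stat(), stop the scan when the
name does not exist, and treat a directory under that name the same way instead
of failing.

  #include <sys/stat.h>
  #include <cstdio>

  // Illustrative probe: true only if a regular file exists under 'path'.
  static bool log_file_exists(const char *path)
  {
    struct stat st;
    if (stat(path, &st) != 0)
      return false;              // no such file
    if (S_ISDIR(st.st_mode))
      return false;              // "directory exists" is treated as "file doesn't exist"
    return true;
  }

  int main()
  {
    for (int n = 1; n <= 100; n++)
    {
      char name[32];
      std::snprintf(name, sizeof(name), "ib_logfile%d", n);
      if (!log_file_exists(name))
        break;                   // a missing file (or a directory) ends the scan
      std::printf("found %s\n", name);
    }
    return 0;
  }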
When InnoDB is invoking posix_fallocate() to extend data files, it
was missing a call to fsync() to update the file system metadata.
If file system recovery is needed, the file size could be incorrect.
When the setting innodb_flush_method=O_DIRECT_NO_FSYNC
that was introduced in MariaDB 10.0.11 (and MySQL 5.6) is enabled,
InnoDB would wrongly skip fsync() after extending files.
Furthermore, the merge commit d8b45b0c00
inadvertently removed the XtraDB error checking for posix_fallocate(),
which this fix restores.
fil_flush(): Add the parameter bool metadata=false to request that
fil_buffering_disabled() be ignored.
fil_extend_space_to_desired_size(): Invoke fil_flush() with the
extra parameter. After successful posix_fallocate(), invoke
os_file_flush(). Note: The bookkeeping for fil_flush() would not be
updated in the posix_fallocate() code path, so the "redundant"
fil_flush() should be a no-op.
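Stripped of the fil_flush() bookkeeping, the underlying requirement is plain POSIX:
after posix_fallocate() extends a file, fsync() must still be called so that the new
size (file system metadata) survives recovery. A minimal standalone sketch, with an
arbitrary file name and size:

  #include <fcntl.h>
  #include <unistd.h>
  #include <cstdio>
  #include <cstring>

  int main()
  {
    int fd = open("datafile.ibd", O_CREAT | O_RDWR, 0644);
    if (fd < 0)
    {
      perror("open");
      return 1;
    }

    // Extend the file; posix_fallocate() returns an error number, it does
    // not set errno.
    int err = posix_fallocate(fd, 0, 16 * 1024 * 1024);
    if (err != 0)
    {
      std::fprintf(stderr, "posix_fallocate: %s\n", std::strerror(err));
      close(fd);
      return 1;
    }

    // Flush file metadata; without this the new size may be lost if the
    // file system needs recovery.
    if (fsync(fd) != 0)
    {
      perror("fsync");
      close(fd);
      return 1;
    }

    close(fd);
    return 0;
  }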
Avoid introducing new dependencies or new syntax.
That is, don't use $(...) and don't assume dirname is present.
And remove unsightly /foo/bar/../xyz from the path. Use dirname
instead of ../
In the function QUICK_RANGE_SELECT::init_ror_merged_scan we create a separate
handler if the handler in head->file cannot be reused. The flag free_file tells
us whether we have a separate handler or not.
There are cases where we might create a handler and then hit a failure (for
example, a running ALTER), after which we have to revert to the original
handler. The code does that, but it does not reset the flag free_file in this
case.
Also backported f2c418079d.
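A heavily simplified sketch of the missing reset (the type and function names here
are hypothetical, not the actual QUICK_RANGE_SELECT code): when creating the
separate handler fails and we fall back to head->file, free_file must be set back
to false so that cleanup does not later try to free a handler we do not own.

  // Hypothetical, heavily simplified model of init_ror_merged_scan().
  struct handler { int dummy; };

  struct quick_select
  {
    handler *file;       // handler used by this quick select
    bool     free_file;  // true only if 'file' was created here and must be freed
  };

  // Simulate a creation failure (e.g. due to a concurrent ALTER).
  static handler *create_separate_handler() { return nullptr; }

  static bool init_scan(quick_select *quick, handler *head_file)
  {
    if (handler *h = create_separate_handler())
    {
      quick->file = h;
      quick->free_file = true;   // we own the new handler
      return true;
    }
    quick->file = head_file;     // revert to the original handler...
    quick->free_file = false;    // ...and reset the flag, which the buggy code forgot
    return false;
  }

  int main()
  {
    handler head{0};
    quick_select quick{nullptr, false};
    init_scan(&quick, &head);    // fails here, but leaves free_file == false
    return quick.free_file ? 1 : 0;
  }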
row_drop_table_for_mysql(): Fix a regression introduced in MDEV-16515.
Similar to the follow-up fixes MDEV-16647 and MDEV-17470, we must make
the internal tables of FULLTEXT INDEX immune to kills, to avoid noise
and resource leakage on DROP TABLE or ALTER TABLE. (Orphan internal tables
would be dropped at the next InnoDB startup only.)
Problem:
========
Applying the MLOG_FILE_WRITE_CRYPT_DATA redo log record fails to set the
type of the crypt_data present in the tablespace. While processing the
doublewrite buffer pages, pages then fail to decrypt, which leads to
warning messages.
Fix:
====
Set the type while parsing the MLOG_FILE_WRITE_CRYPT_DATA redo log record.
If the type or length is invalid, mark it as corrupted.
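A hedged sketch of the parse-time validation (the constants and field layout here
are illustrative, not the exact redo record format): read the type and IV length
from the MLOG_FILE_WRITE_CRYPT_DATA payload, flag corruption when they are not one
of the known values, and otherwise store the type into the crypt_data while parsing.

  #include <cstdint>

  // Illustrative constants; the real ones live in the InnoDB encryption code.
  enum { CRYPT_SCHEME_UNENCRYPTED = 0, CRYPT_SCHEME_1 = 1, CRYPT_IV_LEN = 16 };

  struct crypt_data_t { uint8_t type; };

  // Parse the type/length fields of a (simplified) MLOG_FILE_WRITE_CRYPT_DATA
  // payload. Returns false and reports corruption on invalid values.
  static bool parse_write_crypt_data(const uint8_t *ptr, crypt_data_t *crypt_data,
                                     bool *corrupted)
  {
    uint8_t type = ptr[0];
    uint8_t len  = ptr[1];

    if ((type != CRYPT_SCHEME_UNENCRYPTED && type != CRYPT_SCHEME_1) ||
        len != CRYPT_IV_LEN)
    {
      *corrupted = true;        // invalid type or length: mark it as corrupted
      return false;
    }

    crypt_data->type = type;    // the missing step: apply the type while parsing
    return true;
  }

  int main()
  {
    const uint8_t rec[2] = { CRYPT_SCHEME_1, CRYPT_IV_LEN };
    crypt_data_t cd{};
    bool corrupted = false;
    return parse_write_crypt_data(rec, &cd, &corrupted) && !corrupted ? 0 : 1;
  }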
Fix the off-by-one overflow which was introduced with commit
b0fd06a6f2 (MDEV-15670 - unit.my_atomic failed in buildbot with
Signal 11 thrown)
Closes #1098.
If galera.galera_gtid_slave_sst_rsync is repeated more than once it will fail
due to an incorrect GTID position. After stopping the SLAVE node, also reset
the GTID_SLAVE_POS variable.
This mutex can be freed when the server shuts down (when thread_count goes
down to 0), but it is still used inside THD::~THD() when the Statement_map
is destroyed.
The fix is to call Statement_map::reset() at a point where thread_count is
still positive, and to avoid locking LOCK_prepared_stmt_count in the THD
destructor.
When performing a hash search via HASH_SEARCH we first look at a node's key
and then at its pointer to the next node in the chain. If those reside in one
cache line instead of two, we save a memory read.
I found dict_table_t, fil_space_t and buf_page_t suitable for such an
improvement.
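A self-contained illustration of the idea (the field names are generic, not the
actual dict_table_t/fil_space_t/buf_page_t members): when the hash key and the
next-in-chain pointer sit in the same 64-byte cache line, following a hash chain
touches one line per node instead of two.

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Generic hash-chain node: the key that HASH_SEARCH compares and the
  // pointer to the next node in the chain are kept adjacent, so both land
  // in the same 64-byte cache line.
  struct chain_node
  {
    uint64_t    key;          // compared first during the search
    chain_node *next;         // then followed if the key does not match
    char        payload[240]; // the rest of the (large) object
  };

  static const chain_node *hash_search(const chain_node *head, uint64_t key)
  {
    for (const chain_node *n = head; n != nullptr; n = n->next)
      if (n->key == key)
        return n;
    return nullptr;
  }

  int main()
  {
    // Both hot fields fit in the first cache line of the object.
    std::printf("key offset=%zu next offset=%zu\n",
                offsetof(chain_node, key), offsetof(chain_node, next));
    chain_node a{1, nullptr, {}}, b{2, &a, {}};
    std::printf("found key 1: %s\n", hash_search(&b, 1) ? "yes" : "no");
    return 0;
  }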
During database recovery, a transaction with wsrep XID is
recovered from InnoDB in prepared state. However, when the
transaction is looked up with trx_get_trx_by_xid() in
innobase_commit_by_xid(), trx->xid gets cleared in
trx_get_trx_by_xid_low() and commit time serialization history
write does not update the wsrep XID in trx sys header for
that recovered trx. As a result the transaction gets
committed during recovery but the wsrep position does not
get updated appropriately.
As a fix, we preserve trx->xid for Galera over transaction
commit in recovery phase.
Fix authored by: Teemu Ollakka (GaleraCluster) and Marko Mäkelä.
modified: mysql-test/suite/galera/disabled.def
modified: mysql-test/suite/galera/r/galera_gcache_recover_full_gcache.result
modified: mysql-test/suite/galera/r/galera_gcache_recover_manytrx.result
modified: mysql-test/suite/galera/t/galera_gcache_recover_full_gcache.test
modified: mysql-test/suite/galera/t/galera_gcache_recover_manytrx.test
modified: storage/innobase/trx/trx0trx.cc
modified: storage/xtradb/trx/trx0trx.cc
When we have a nested subquery, a subquery that was a dependent subquery
may change to an independent one when we optimize the inner subqueries.
This is handled in st_select_lex::optimize_unflattened_subqueries.
Currently a subquery that changed from dependent to independent during the
optimization phase is still incorrectly shown as dependent in the EXPLAIN
output; this happens because we don't update used_tables for the WHERE
clause, ON clause, etc. after the optimization phase.
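A rough sketch of the stale-cache effect, using simplified stand-in types (the
bitmap name and OUTER_REF_TABLE_BIT constant are placeholders, not the server API):
EXPLAIN decides "dependent" from a cached used-tables map, so after the inner
subquery is rewritten the cache must be refreshed or the old answer persists.

  #include <cstdint>
  #include <cassert>

  typedef uint64_t table_map;
  static const table_map OUTER_REF_TABLE_BIT = 1ULL << 63;

  // Simplified stand-in for a condition item: it caches the set of tables it
  // refers to, and EXPLAIN derives "DEPENDENT SUBQUERY" from that cache.
  struct Cond
  {
    table_map used_tables_cache;
    bool reported_dependent() const
    { return (used_tables_cache & OUTER_REF_TABLE_BIT) != 0; }
  };

  int main()
  {
    Cond where_cond{OUTER_REF_TABLE_BIT | 0x1};   // originally depends on the outer query

    // ... optimizing the inner subqueries removes the outer reference;
    // the fresh set of used tables would be:
    table_map fresh = 0x1;

    assert(where_cond.reported_dependent());      // stale cache: still "dependent"
    where_cond.used_tables_cache = fresh;         // the missing used_tables update
    assert(!where_cond.reported_dependent());     // now EXPLAIN sees it as independent
    return 0;
  }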
If an encrypted table is created during backup, then
mariabackup --backup could wrongly fail.
This caused a failure of the test mariabackup.huge_lsn once on buildbot.
This is due to the way InnoDB creates .ibd files. It would first
write a dummy page 0 with no encryption information. Due to this,
xb_fil_cur_open() could wrongly interpret that the table is not encrypted.
Subsequently, page_is_corrupted() would compare the computed page
checksum to the wrong checksum. (There are both "before" and "after"
checksums for encrypted pages.)
To work around this problem, we introduce a Boolean option
--backup-encrypted that is enabled by default. With this option,
Mariabackup will assume that a nonzero key_version implies that the
page is encrypted. We need this option in order to be able to copy
encrypted tables from MariaDB 10.1 or 10.2, because unencrypted pages
that were originally created before MySQL 5.1.48 could contain nonzero
garbage in the fields that were repurposed for encryption.
Later, MDEV-18128 would clean up the way .ibd files are created,
to remove the need for this option.
page_is_corrupted(): Add missing const qualifiers, and do not check
space->crypt_data unless --skip-backup-encrypted has been specified.
xb_fil_cur_read(): After a failed page read, output a page dump.
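A hedged sketch of the decision that --backup-encrypted enables (the page-field
offset, the helper, and the option flag are written out explicitly here and are
assumptions, not the exact Mariabackup code): with the option on, a nonzero
key_version field on the page is taken to mean "encrypted"; only with
--skip-backup-encrypted is space->crypt_data consulted as well.

  #include <cstdint>

  // Assumed layout: the key_version of an encrypted page is stored in the
  // 4 bytes at offset 26 of the page header
  // (FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION).
  static const unsigned KEY_VERSION_OFFSET = 26;

  static uint32_t read_u32(const uint8_t *p)
  {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
  }

  // opt_backup_encrypted corresponds to --backup-encrypted (default on);
  // crypt_data_present stands in for space->crypt_data != NULL.
  static bool treat_page_as_encrypted(const uint8_t *page,
                                      bool opt_backup_encrypted,
                                      bool crypt_data_present)
  {
    const uint32_t key_version = read_u32(page + KEY_VERSION_OFFSET);
    if (opt_backup_encrypted)
      return key_version != 0;   // assume nonzero key_version means encrypted
    // --skip-backup-encrypted: also require encryption metadata, because old
    // unencrypted pages may carry garbage in this field.
    return crypt_data_present && key_version != 0;
  }

  int main()
  {
    uint8_t page[64] = {0};
    page[KEY_VERSION_OFFSET + 3] = 1;   // key_version = 1
    return treat_page_as_encrypted(page, true, false) ? 0 : 1;
  }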