Disable LOAD DATA LOCAL INFILE support by default and
auto-enable it for the duration of one query, if the query
string starts with the word "load". In all other cases the application
should enable LOAD DATA LOCAL INFILE support explicitly.
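For illustration, a minimal C sketch of an application opting in explicitly
through the standard client API (MYSQL_OPT_LOCAL_INFILE is part of that API;
host, user and password here are placeholders):

  #include <mysql.h>
  #include <stdio.h>

  int main(void)
  {
    MYSQL *con= mysql_init(NULL);
    unsigned int enable_local= 1;

    /* Opt in to LOAD DATA LOCAL INFILE for this connection only */
    mysql_options(con, MYSQL_OPT_LOCAL_INFILE, &enable_local);

    if (!mysql_real_connect(con, "localhost", "user", "pass", "test", 0, NULL, 0))
    {
      fprintf(stderr, "connect failed: %s\n", mysql_error(con));
      mysql_close(con);
      return 1;
    }
    /* ... run LOAD DATA LOCAL INFILE statements here ... */
    mysql_close(con);
    return 0;
  }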
If the rlimit.rlim_cur value returned by getrlimit is not the
RLIM_INFINITY magic constant, but a *very* large number, we can end up
allocating structures for far too many open files. Restrict set_max_open_files
to return at most max_file_limit, as passed via its parameter.
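A rough C sketch of the intended clamping (the function name and parameter
follow the description above; this is not the actual mysys code):

  #include <sys/resource.h>

  static unsigned long set_max_open_files(unsigned long max_file_limit)
  {
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 &&
        rl.rlim_cur != RLIM_INFINITY &&
        (unsigned long) rl.rlim_cur < max_file_limit)
      max_file_limit= (unsigned long) rl.rlim_cur;

    /* Never report more than the limit the caller passed in, even if the
       kernel advertises a huge (but finite) rlim_cur. */
    return max_file_limit;
  }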
This patch contains the port of the MDEV-18379 patch
for 10.1 branch, but also includes a number of changes
made within MDEV-17835, which are necessary for the
normal operation of tests that use IPv6:
1) Fixed flaws in the galera_3nodes mtr suite control scripts that
prevented them from working with mariabackup.
2) Fixed numerous bugs in the SST scripts and in the mtr test
files (galera_3nodes mtr suite) that prevented the use of Galera
with IPv6 addresses.
3) Fixed flaws in the rsync and mysqldump tests (in the galera_3nodes
mtr suite); without these fixes the tests did not pass.
4) Currently, the three-node mtr suite for Galera (galera_3nodes)
uses a separate IPv6 availability check using the "have_ipv6.inc"
file. This check duplicates a more accurate check at suite.pm
level, which can be used by including the file "check_ipv6.inc".
This patch removes this discrepancy between suites.
5) The GAL-501 test in the galera_3nodes suite lacked the
option "--bind-address=::", which is needed for the test to work
correctly with IPv6 (at least on some systems), since without
it the server does not listen for connections on the IPv6 interface
(an example of the option is shown below).
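For reference, the option can be supplied through the test's .opt file or a
server config section; this snippet is illustrative only and is not the
literal file from the suite:

  [mysqld]
  # Listen on the IPv6 wildcard address (also accepts IPv4 on dual-stack hosts)
  bind-address = ::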
https://jira.mariadb.org/browse/MDEV-18379
and partially https://jira.mariadb.org/browse/MDEV-17835
commit 6a6a1f37798
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Fri Jan 4 12:31:52 2019 +0100
- Fix a few bugs, mainly concerning discovery and calls from OEM
(and prepare new table types)
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
- Fix wrong line estimate
modified: storage/connect/mysql-test/connect/r/part_table.result
modified: storage/connect/mysql-test/connect/t/part_table.test
commit bd7d2e912d9
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Tue Dec 4 23:35:09 2018 +0100
Fix wrong version number
commit 4933680e7ab
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Sun Dec 2 00:25:05 2018 +0100
- Make PlugSubAlloc exportable
Remove an unused parameter from PlugSubSet
modified: storage/connect/global.h
modified: storage/connect/plugutil.cpp
modified: storage/connect/jsonudf.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/user_connect.cc
- Fix a bug making column catalog XML tables fail
modified: storage/connect/tabxml.cpp
- Comment out wrong message
modified: storage/connect/ha_connect.cc
- Update error message when sorting an ODBC table fails
modified: storage/connect/tabodbc.cpp
- Add an error message when getting an address
from an OEM fails.
modified: storage/connect/reldef.cpp
- Make some modifications useful for OEM module writing
Export discovery functions for CSV, JDBC and XML
Remove an unnecessary include from tabjson.h
Move TDBXML::data_charset function from header file to source
modified: storage/connect/tabfmt.h
modified: storage/connect/tabjson.h
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
- Update test result
modified: storage/connect/mysql-test/connect/r/jdbc_oracle.result
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event exceeds its
vanilla byte representation by a factor of 4/3.
When a binlogged event is about 1GB in size, mysqlbinlog generates
a BINLOG query that cannot be sent out due to its size.
This is fixed by fragmenting the BINLOG argument C-string into
(approximate) halves when the base64-encoded event exceeds 1GB.
In such a case mysqlbinlog puts out
SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;
to represent a big BINLOG.
For prompt memory release, the BINLOG handler is made to reset the BINLOG argument
user variables in the middle of processing, as if @binlog_fragment_{0,1} = NULL
were assigned.
Note that two fragments are enough, though the client and server may still
need to tweak their @@max_allowed_packet to accommodate the fragment
size (which they would have to do anyway with a greater number of
fragments, should that be desired).
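A self-contained sketch (ordinary C++, not the actual mysqlbinlog code; the
function name and threshold are illustrative) of emitting the fragmented form
shown above for an over-sized base64 payload:

  #include <cstdio>
  #include <string>

  // Print either a single BINLOG statement or the two-fragment form,
  // mimicking the SQL shape described above.
  static void print_binlog_stmt(const std::string &b64,
                                std::string::size_type max_len)
  {
    if (b64.size() <= max_len)
    {
      std::printf("BINLOG '%s';\n", b64.c_str());
      return;
    }
    std::string::size_type half= b64.size() / 2;   // approximate halves
    std::printf("SET @binlog_fragment_0='%s';\n", b64.substr(0, half).c_str());
    std::printf("SET @binlog_fragment_1='%s';\n", b64.substr(half).c_str());
    std::printf("BINLOG @binlog_fragment_0, @binlog_fragment_1;\n");
  }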
On the lower level the following changes are made:
Log_event::print_base64()
still calls the encoder and stores the encoded data into a cache, but
now *without* doing any formatting. The formatting is deferred until
the cache is copied to an output file (e.g. mysqlbinlog output).
The no-formatting behavior is also reflected by the change in the meaning
of the last argument, which now specifies whether to cache the encoded data.
Rows_log_event::print_helper()
is made to invoke a specialized fragmented cache-to-file copying function,
copy_cache_to_file_wrapped(),
which takes care of fragmenting and optionally wraps the encoded
strings (fragments) into SQL stanzas.
my_b_copy_to_file()
is refactored into my_b_copy_all_to_file(). The former function
is generalized to accept a limit argument that constrains the copying,
and it no longer reinitializes the cache into reading mode.
The limit has no effect on a fully read cache.
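A hedged sketch of the intended relationship between the two functions
(signatures assumed from the description above, not copied from mysys;
requires the my_global.h / my_sys.h headers from the source tree):

  // my_b_copy_to_file(cache, file, count): copy at most 'count' bytes,
  // assuming the cache is already positioned for reading.
  // my_b_copy_all_to_file(): reinitialize the cache for reading, then
  // copy everything by passing an "unlimited" count.
  int my_b_copy_all_to_file(IO_CACHE *cache, FILE *file)
  {
    if (reinit_io_cache(cache, READ_CACHE, 0L, 0, 0))
      return 1;
    return my_b_copy_to_file(cache, file, (size_t) -1);
  }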
always logged properly with binlog_row_image=MINIMAL
There are two issues fixed in this commit.
The first is an observation of a multi-table UPDATE binlogged
in row format in binlog_row_image=MINIMAL mode. While the UPDATE targets
a table with an ON UPDATE attribute, its binlog after-image fails
to also record the installed default value.
The reason turns out to be the missed marking of default-capable fields
in TABLE::write_set.
This is fixed by marking such fields similarly to 10.2's MDEV-10134 patch (db7edfed17)
that introduced it. The marking follows up on 93d1e5ce0b841bed's idea
of exploiting the TABLE::rpl_write_set introduced there,
and thus does not mess (in 10.1) with the actual MDEV-10134 agenda.
The patch makes the formerly argument-less TABLE::mark_default_fields_for_write()
accept an argument, which would be TABLE::rpl_write_set.
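A hedged illustration of the marking (helper shape and member names assumed;
this is not the literal 10.1 patch; server headers such as sql/table.h and
sql/field.h are assumed):

  // Mark fields whose value is installed by an ON UPDATE default function,
  // so the MINIMAL after-image still carries the new value.
  static void mark_default_fields(TABLE *table, MY_BITMAP *rpl_write_set)
  {
    for (Field **ptr= table->field; *ptr; ptr++)
    {
      Field *field= *ptr;
      if (field->has_update_default_function())
        bitmap_set_bit(rpl_write_set, field->field_index);
    }
  }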
The second issue is extra columns in the binlog_row_image=MINIMAL before-image
when merely a packed primary key is enough. The test main.mysqlbinlog_row_minimal
always had a wrong result recorded.
This is fixed by invoking the function intended for possible read_set
filtering, which is (supposed to be) called for all types of DML, UPDATE
included; the test results have been corrected.
When *merging* from 10.1 to 10.2, the first "main" part of the patch is unnecessary
since the bug is not observed in 10.2, so only hunks from
sql/sql_class.cc
are required.
Call st_select_lex::update_used_tables in JOIN::optimize_unflattened_subqueries
only when we are sure that the join has not been cleaned up.
Cleanup can happen when we have a non-merged semi-join and an impossible
WHERE, which leads to the cleanup of the join that contains the non-merged semi-join.
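A rough sketch of the guard (member names assumed, not the exact patch):

  // Inside JOIN::optimize_unflattened_subqueries(): only refresh the used
  // tables if the subquery's JOIN still exists and has not been cleaned up.
  if (sl->join && !sl->join->cleaned)
    sl->update_used_tables();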
ASAN noticed a freed memory access during EXECUTE in this script:
PREPARE stmt FROM "SELECT 'x' ORDER BY NAME_CONST( 'f', 'foo' )";
EXECUTE stmt;
In case of a PREPARE statement, all Items, including Item_name_const,
are created on Prepared_statement::main_mem_root.
Item_name_const::fix_fields() did not take this into account
and could allocate the value of Item::name on a wrong memory root,
in this code:
  if (is_autogenerated_name)
  {
    set_name(thd, item_name->c_ptr(), (uint) item_name->length(),
             system_charset_info);
  }
When fix_fields() is called in the reported SQL script, THD's arena already
points to THD::main_mem_root rather than to Prepared_statement::main_mem_root,
so Item::name was allocated on THD::main_mem_root.
Then, at the end of the dispatch_command() for the PREPARE statement,
THD::main_mem_root got cleared. So during EXECUTE, Item::name
pointed to an already freed memory.
This patch changes the code to set the implicit name for Item_name_const
at constructor time rather than at fix_fields() time. This guarantees
that Item_name_const and its Item::name always reside on the same memory root.
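A rough sketch of the idea (constructor signature and helpers mirror the
snippet quoted above; treat this as an approximation, not the exact patch):

  Item_name_const::Item_name_const(THD *thd, Item *name_arg, Item *val)
    :Item(thd), value_item(val), name_item(name_arg)
  {
    StringBuffer<128> name_buffer;
    String *name_str;
    /* Set the implicit name here, on the same memory root as the Item
       itself, instead of in fix_fields() where THD may already point to
       a different (shorter-lived) root. */
    if (name_item->basic_const_item() &&
        (name_str= name_item->val_str(&name_buffer)))
      set_name(thd, name_str->ptr(), name_str->length(), name_str->charset());
  }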
Note, this change makes the code for Item_name_const symmetric with other
constant-like items that set their default implicit names at constructor
call time rather than at fix_fields() time:
- Item_string
- Item_int
- Item_real
- Item_decimal
- Item_null
- Item_param
Problem:
========
The server fails to notify the engine, by not setting the ADD_PK_INDEX and
DROP_PK_INDEX flags, when there is
i) a change in the candidate for the primary key, or
ii) a new candidate for the primary key.
Fix:
====
The server now sets the ADD_PK_INDEX and DROP_PK_INDEX flags while doing ALTER
for the above problematic cases.
Close the connection handler on connection failure. This fixes 14 failing tests in
the main suite under a clang+ASAN build.
ASAN report for main.connect looks like this:
=================================================================
==25495==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 146280 byte(s) in 115 object(s) allocated from:
#0 0x4fba47 in calloc /fun/cpp_projects/llvm_toolchain/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:138
#1 0x5a7a02 in mysql_init /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:977:26
#2 0x570a7a in do_connect(st_command*) /work/mariadb/client/mysqltest.cc:6096:26
#3 0x584c39 in main /work/mariadb/client/mysqltest.cc:9321:9
#4 0x7fd15514db96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
Indirect leak of 7065600 byte(s) in 115 object(s) allocated from:
#0 0x4fb80f in __interceptor_malloc /fun/cpp_projects/llvm_toolchain/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:129
#1 0x637a83 in my_context_init /work/mariadb/libmariadb/libmariadb/ma_context.c:367:23
#2 0x59fd16 in mysql_optionsv /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:2738:9
#3 0x5bc1d4 in mysql_options /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:3242:10
#4 0x570b94 in do_connect(st_command*) /work/mariadb/client/mysqltest.cc:6103:7
#5 0x584c39 in main /work/mariadb/client/mysqltest.cc:9321:9
#6 0x7fd15514db96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
Indirect leak of 940240 byte(s) in 115 object(s) allocated from:
#0 0x4fb80f in __interceptor_malloc /fun/cpp_projects/llvm_toolchain/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:129
#1 0x64386e in ma_init_dynamic_array /work/mariadb/libmariadb/libmariadb/ma_array.c:49:31
#2 0x649ead in _hash_init /work/mariadb/libmariadb/libmariadb/ma_hash.c:52:7
#3 0x5a3080 in mysql_optionsv /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:2938:13
#4 0x5bc20c in mysql_options4 /work/mariadb/libmariadb/libmariadb/mariadb_lib.c:3248:10
#5 0x56f63b in connect_n_handle_errors(st_command*, st_mysql*, char const*, char const*, char const*, char const*, int, char const*) /work/mariadb/client/mysqltest.cc:5874:3
#6 0x57146b in do_connect(st_command*) /work/mariadb/client/mysqltest.cc:6193:7
#7 0x584c39 in main /work/mariadb/client/mysqltest.cc:9321:9
#8 0x7fd15514db96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
...
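A minimal C sketch of the kind of cleanup involved (hypothetical code, not the
actual mysqltest.cc change): the handle allocated by mysql_init() must be
released with mysql_close() even when mysql_real_connect() fails:

  #include <mysql.h>
  #include <stdio.h>

  static int try_connect(const char *host, const char *user, const char *pass)
  {
    MYSQL *con= mysql_init(NULL);
    if (!con)
      return 1;
    if (!mysql_real_connect(con, host, user, pass, NULL, 0, NULL, 0))
    {
      fprintf(stderr, "connect failed: %s\n", mysql_error(con));
      mysql_close(con);        /* without this, ASAN reports leaks like the above */
      return 1;
    }
    /* ... use the connection ... */
    mysql_close(con);
    return 0;
  }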
Closes #809
32 bit int
The row-based slave applier could not correctly parse the table id when
the value exceeded the maximum of a 32-bit unsigned int.
The reason turns out to be that the placeholder for the parsed value
was sized as 4 bytes.
The type is fixed to ulonglong.
Additionally the patch works around the 4-byte size of Rows_log_event::m_table_id
on 32-bit platforms. In case the last_table_id value overflows
the 4-byte maximum, the zero value is never generated for m_table_id
and the first wrapped-around value is one; this is thanks to excluding
UINT_MAX32 + 1 from TABLE_SHARE::table_map_id.
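A tiny standalone illustration (not the server code) of the truncation a
4-byte placeholder causes for a table id above the 32-bit maximum, which is
why the type is widened to ulonglong:

  #include <stdio.h>

  int main(void)
  {
    unsigned long long table_id= 0x123456789AULL;   /* exceeds UINT_MAX32 */
    unsigned int truncated= (unsigned int) table_id; /* 4-byte placeholder */

    printf("full:      %llu\n", table_id);
    printf("truncated: %u\n", truncated);  /* wrong value the applier saw */
    return 0;
  }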
dict_sys_get_size(): Replace the time-consuming loop with
a crude estimate that can be computed without holding any mutex.
Even before dict_sys->size was removed in MDEV-13325,
not all memory allocations by the InnoDB data dictionary cache
were being accounted for. One example is foreign key constraints.
Another example is virtual column metadata, starting with 10.2.
Issue:
------
When a subquery contains UNION, the number of subquery
columns is calculated incorrectly. Only the first
query block of the subquery's UNION is considered, so
array indexing goes out of bounds, which is caught by an
assert.
Solution:
---------
Sum up the columns from all query blocks of the query
expression.
Change specific to 5.6/5.5:
---------------------------
The "child" points to the last query block of the UNION
(as opposed to 5.7+ where it points to the first member of
UNION). So "child->master_unit()->first_select()" is used
to reach the first query block of UNION.
PROBLEM
-------
Memory sanitizer reports uninitialized comparisons
in log_in_use(), because strings are compared with
memcmp() instead of strncmp().
FIX
---
Use strncmp() to compare strings
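A small standalone illustration (not the log_in_use() code) of why memcmp()
on NUL-terminated strings can read uninitialized bytes while strncmp() stops
at the terminator:

  #include <string.h>
  #include <stdio.h>

  int main(void)
  {
    char a[32], b[32];                 /* tails stay uninitialized */
    strcpy(a, "master-bin.000001");
    strcpy(b, "master-bin.000001");

    /* memcmp(a, b, sizeof(a)) would compare all 32 bytes, including the
       uninitialized tails after the NUL terminators; MSAN flags this and
       the result is meaningless.  strncmp() stops at the first NUL, so
       only the defined part of the strings is compared. */
    if (strncmp(a, b, sizeof(a)) == 0)
      printf("equal\n");
    return 0;
  }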
On startup InnoDB checks whether the files "ib_logfileN"
(for N from 1 to 100) exist and whether they're readable.
A non-existent file aborted the scan.
A directory instead of a file made InnoDB fail.
Now it treats "directory exists" as "file doesn't exist".
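A sketch of the check (plain C/POSIX, not the InnoDB code; the readability
check is omitted for brevity):

  #include <sys/stat.h>

  /* Return 1 if 'path' should be treated as an existing log file.
     A directory with the same name is treated as "file does not exist". */
  static int log_file_exists(const char *path)
  {
    struct stat st;
    if (stat(path, &st) != 0)
      return 0;                /* does not exist */
    if (S_ISDIR(st.st_mode))
      return 0;                /* directory: treat as non-existent */
    return 1;
  }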
When InnoDB is invoking posix_fallocate() to extend data files, it
was missing a call to fsync() to update the file system metadata.
If file system recovery is needed, the file size could be incorrect.
When the setting innodb_flush_method=O_DIRECT_NO_FSYNC
that was introduced in MariaDB 10.0.11 (and MySQL 5.6) is enabled,
InnoDB would wrongly skip fsync() after extending files.
Furthermore, the merge commit d8b45b0c00
inadvertently removed XtraDB error checking for posix_fallocate()
which this fix is restoring.
fil_flush(): Add the parameter bool metadata=false to request that
fil_buffering_disabled() be ignored.
fil_extend_space_to_desired_size(): Invoke fil_flush() with the
extra parameter. After successful posix_fallocate(), invoke
os_file_flush(). Note: The bookkeeping for fil_flush() would not be
updated in the posix_fallocate() code path, so the "redundant"
fil_flush() should be a no-op.
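A standalone sketch of the file-extension pattern (plain POSIX, not the
fil0fil.cc code): check posix_fallocate() for errors and flush the file
metadata afterwards:

  #include <fcntl.h>
  #include <unistd.h>

  /* Extend 'fd' to 'new_size' bytes and make sure the new size survives a
     crash: posix_fallocate() allocates the space, fsync() persists the
     updated file system metadata (needed even when data is written with
     O_DIRECT). */
  static int extend_file(int fd, off_t new_size)
  {
    int err= posix_fallocate(fd, 0, new_size);
    if (err != 0)
      return err;              /* check the error instead of ignoring it */
    if (fsync(fd) != 0)
      return -1;
    return 0;
  }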
Avoid introducing new dependencies or new syntax.
That is, don't use $(...) and don't assume dirname is present.
And remove the unsightly /foo/bar/../xyz from the path. Use dirname
instead of ../
In the function QUICK_RANGE_SELECT::init_ror_merged_scan we create a separate handler if the handler in
head->file cannot be reused. The flag free_file tells us whether we have a separate handler or not.
There are cases where we might create a handler and then hit a failure (a concurrent ALTER, for example),
after which we have to revert the handler back to the original one. The code does that,
but it does not reset the 'free_file' flag in this case.
Also backported f2c418079d.