The --skip-write-binlog help text was confusing: it implied the option
only had an effect if Galera was enabled. There are uses beyond Galera,
so we apply SET SESSION SQL_LOG_BIN=0, as implied by the option,
without being conditional on the wsrep status.
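A minimal sketch of the resulting behavior, assuming a stand-in option
flag (illustrative only, not the actual mysql_tzinfo_to_sql.cc code):
the prologue is emitted whenever the option is given, with no wsrep
check.

#include <cstdio>

/* opt_skip_write_binlog is a hypothetical stand-in for the real
   option variable */
static bool opt_skip_write_binlog= true;

int main()
{
  if (opt_skip_write_binlog)
    std::printf("SET SESSION SQL_LOG_BIN=0;\n");
  /* ... generated time zone INSERT statements would follow ... */
  return 0;
}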
Remove the wsrep.mysql_tzinfo_to_sql_symlink{,_skip} tests, as they
offered no additional coverage beyond main.mysql_tzinfo_to_sql_symlink
(no server testing was done).
Introduced a variant of galera.mariadb_tzinfo_to_sql as
galera.mysql_tzinfo_to_sql, which does its testing through the mysql
client rather than importing directly into the server via mysqltest.
Update the man page and mysql_tzinfo_to_sql to have a
--skip-write-binlog option.
merge notes:
10.4:
- conflicts in tztime.cc can revert to this version of --help text.
- tztime.cc - merge execute immediate @prep1, and leave %s%s trunc_tables, lock_tables
after that.
10.6:
- Need to remove the not_embedded.inc in mysql_tzinfo_to_sql.test and
replace it with no_protocol.inc
- leave both mysql_tzinfo_to_sql.test and mariadb_tzinfo_to_sql.sql
tests.
- sql/tztime.cc - keep entirely 10.6 version.
Added a check for whether the platform where the build is being done
supports vfork(). The HAVE_VFORK macro is set when the vfork()
system call is supported. Use the vfork() system call if the
HAVE_VFORK macro is set, else use fork().
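A self-contained sketch of the pattern; spawn() is an illustrative
helper, not MariaDB code, and HAVE_VFORK is assumed to come from the
new build-time check:

#include <unistd.h>

static pid_t spawn(const char *prog)
{
#ifdef HAVE_VFORK
  pid_t pid= vfork();
#else
  pid_t pid= fork();
#endif
  if (pid == 0)
  {
    /* child: after vfork() only exec*() or _exit() are safe */
    execl(prog, prog, (char *) NULL);
    _exit(127);
  }
  return pid;  /* parent: child pid, or -1 on failure */
}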
When a single-row subquery fails with the "Subquery returns more than
1 row" error, it will raise an error and return NULL.
On the other hand, Item_singlerow_subselect sets item->maybe_null=0
for table-less subqueries like "(SELECT not_null_value)" (*)
This discrepancy (item with maybe_null=0 returning NULL) causes the
code in Type_handler_decimal_result::make_sort_key_part() to crash.
Fixed this by allowing inference (*) only when the subquery is NOT a
UNION.
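A mock of the guard; the types and member names below are simplified
stand-ins, not the actual Item_singlerow_subselect code:

struct st_select_lex_unit_mock { bool is_union; };

struct Item_singlerow_subselect_mock
{
  st_select_lex_unit_mock *unit;
  bool maybe_null;

  void infer_not_null_for_tableless_select()
  {
    /* the fix: a UNION can still fail at runtime and return NULL,
       so only a plain table-less SELECT may be marked NOT NULL */
    if (!unit->is_union)
      maybe_null= false;
  }
};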
CONTEXT_ANALYSIS_ONLY_VCOL_EXPR can be used only for exactly that:
context analysis. Items fixed that way cannot be evaluated. But vcols
are going to be evaluated, so they have to be fixed properly, for
evaluation.
Starting with 10.3, an assertion would fail on the rollback of
a recovered incomplete transaction if a table definition violates
a FOREIGN KEY constraint.
DICT_ERR_IGNORE_RECOVER_LOCK: Include also DICT_ERR_IGNORE_FK_NOKEY
so that trx_resurrect_table_locks() will be able to load
table definitions and resurrect IX locks. Previously, if the
FOREIGN KEY constraints of a table were incomplete, the table
would fail to load until rollback, and in 10.3 or later an assertion
would fail that the rollback was not protected by a table IX lock.
Thanks to commit 9de2e60d74 there
will be no problem enforcing subsequent FOREIGN KEY operations even
though a table with an invalid REFERENCES clause was loaded.
When fixing vcols, fix_fields() might call convert_const_to_int(),
which will try to read the field value (from record[0]). Mark the
table as having no data to prevent that, because record[0] is not
initialized yet.
The bug was that the in_vector array in Item_func_in was allocated in
the statement arena, not in table->expr_arena.
Revert part of 5acd391e8b. Instead, change the arena correctly in
fix_all_session_vcol_exprs().
Remove TABLE_ARENA, which was introduced in 5acd391e8b to force item
tree changes to be rolled back (because they were allocated in the
wrong arena and didn't persist; now they do).
* Item_default_value::fix_fields creates a copy of its argument's field.
* Field::default_value is changed when its expression is prepared in
unpack_vcol_info_from_frm()
This means we must unpack any vcol expression that includes DEFAULT(x)
strictly after unpacking x->default_value.
To avoid building and solving this dependency graph on every table open,
we update Item_default_value::field->default_value after all vcols
are unpacked and fixed.
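A mock of the chosen design; all types here are simplified stand-ins
for the real sql/table.cc structures. A second pass patches the copied
fields instead of topologically ordering the unpacking:

#include <vector>

struct Virtual_column_info;
struct Field_mock { Virtual_column_info *default_value; };
struct Default_item_mock
{
  Field_mock *copied_field;  /* the copy made by fix_fields */
  Field_mock *table_field;   /* the field it was copied from */
};

static void fix_default_value_items(std::vector<Default_item_mock> &items)
{
  /* runs after every vcol is unpacked and fixed, so each
     table_field->default_value is final by now */
  for (Default_item_mock &item : items)
    item.copied_field->default_value= item.table_field->default_value;
}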
bt full - to include args and locals.
set print sevenbit on - it is more useful to be able to see the exact
bytes (in case something is dumped as a string and not as hexadecimal
digits).
set print static-members off - there are few interesting (non-const)
static members.
set frame-arguments all - even non-printables are useful to see.
Let's make our bb logs give a little more detail on those
hard-to-reproduce bugs.
Tested on rhel7's gdb-7.6.1-120.el7.
Change `exit;` to `exit(1);` in the function `usage()`.
When we run mtr with a wrong option, the function `usage()` is called
with the wrong option as its argument. Because the first if statement
calls `exit` without a status, mtr terminates with exit status 0
despite the error.
This is a temporary fix for 10.2.
This problem was permanently fixed in 10.9 under terms of MDEV-27743.
This patch should propagate up to 10.8 then null-merged to 10.9.
Creating a temporary table with Spider is nonsense, because a Spider
table cannot hold any physical data and it requires additional effort
to manage even if it is configured correctly.
Set HTON_TEMPORARY_NOT_SUPPORTED to spider_hton->flags.
Reviewed-by: nayuta.yanagisawa@hey.com
Co-authored-by: d8sk4ueun@gmail.com
The partitioning engine does not support the table-level DATA/INDEX
DIRECTORY specification.
If one creates a non-partitioned table with the DATA/INDEX DIRECTORY
option and then performs ALTER TABLE ... PARTITION BY on it, the
DATA/INDEX DIRECTORY specification of the old schema is ignored.
The behavior might be a bit surprising for users because the value
of a usual table option applies to all the partitions. Thus, we raise
a warning on such ALTER TABLE ... PARTITION BY.
The cause of the bug is overflow of uint16 KEY_PART_INFO::length and/or
uint16 KEY_PART_INFO::store_length. The solution is to increase the
size of those variables to the 'uint' type (which is 32 bits long).
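A small illustration of the overflow (the values are arbitrary):

#include <cstdint>
#include <cstdio>

int main()
{
  /* a key part longer than 65535 bytes wraps in a 16-bit field */
  uint16_t length16= (uint16_t) 70000;  /* wraps to 4464 */
  unsigned int length32= 70000;         /* preserved after the fix */
  std::printf("uint16: %u, uint: %u\n", length16, length32);
  return 0;
}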
If JOIN::create_postjoin_aggr_table() encounters errors during
execution, free_tmp_table() is called twice for JOIN_TAB::aggr.
The solution is to initialize JOIN_TAB::aggr only on successful
completion of JOIN::create_postjoin_aggr_table().
tv_usec is a suseconds_t, so we cast to it. This prevents the AIX
(gcc-10) warning:
include/my_time.h: In function 'void my_timeval_trunc(timeval*, uint)':
include/my_time.h:249:65: warning: conversion from 'long int' to 'suseconds_t' {aka 'int'} may change value [-Wconversion]
249 | tv->tv_usec-= my_time_fraction_remainder(tv->tv_usec, decimals);
On macOS the warning is: conversion from 'long int' to
'__darwin_suseconds_t' {aka 'int'} may change value.
On Windows, suseconds_t isn't defined, so we use the existing long
return type of my_time_fraction_remainder.
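A sketch of the fix, using a stand-in for the real
my_time_fraction_remainder() helper from my_time.h:

#include <sys/time.h>

/* stand-in only; the real helper lives in my_time.h */
static long my_time_fraction_remainder(long usec, unsigned int decimals)
{
  (void) decimals;
  return usec % 1000;  /* placeholder arithmetic */
}

static void my_timeval_trunc_sketch(struct timeval *tv,
                                    unsigned int decimals)
{
  /* the explicit cast silences -Wconversion where suseconds_t is int */
  tv->tv_usec-= (suseconds_t)
    my_time_fraction_remainder(tv->tv_usec, decimals);
}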
Reviewed by Marko Mäkelä
Closes: #2079
mariabackup does not load the dictionary during backup, but it does
load tablespaces; that is why fil_system.max_assigned_id is not set by
dict_check_tablespaces_and_store_max_id(). There is no sense in issuing
the warning during backup.
Second execution of a prepared statement for a query containing a
constant subquery with a union that can be optimized away could result
in abnormal server termination for a debug build or incorrect result
set output for a release build.
For example, the following test case crashes a server built with debug
on the second run of the statement EXECUTE stmt:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'EXPLAIN SELECT * FROM t1 HAVING 6 IN ( SELECT 6 UNION SELECT 5 )';
EXECUTE stmt;
EXECUTE stmt;
The reason for the incorrect result set output or abnormal server
termination is careless handling of the data member
fake_select_lex->options inside the function mysql_explain_union().
Once the flag SELECT_DESCRIBE is set in the data member
fake_select_lex->options before calling the methods
SELECT_LEX_UNIT::prepare/SELECT_LEX_UNIT::execute,
the original value of the options is no longer restored.
As a consequence, the next time the prepared statement is re-executed
we have a fake_select_lex with the flag SELECT_DESCRIBE set in the
data member fake_select_lex->options, which is incorrect. As a result,
the method
Item_subselect::assigned()
is not invoked during evaluation of the constant condition (the
constant subquery with union) performed during the OPTIMIZE phase of
query handling.
This leads to the fact that records in the temporary table are not
deleted before calling
table->file->ha_enable_indexes(HA_KEY_SWITCH_ALL)
in the method st_select_lex_unit::optimize().
As a result, table->file->ha_enable_indexes(HA_KEY_SWITCH_ALL) returns
an error and DBUG_ASSERT(0) is fired.
The stack trace down to the line where the error is generated on
re-enabling indexes for the next subselect iteration is below:
st_select_lex_unit::optimize (at sql_union.cc:954)
handler::ha_enable_indexes (at handler.cc:4338)
ha_heap::enable_indexes (at ha_heap.cc:519)
heap_enable_indexes (at hp_clear.c:164)
The code snippet to clarify raising the error is also listed:
int heap_enable_indexes(HP_INFO *info)
{
  int error= 0;
  HP_SHARE *share= info->s;
  if (share->data_length || share->index_length)
    error= HA_ERR_CRASHED; /* <<== error set to HA_ERR_CRASHED
                                   since share->data_length != 0 */
To fix this issue, the original value of unit->fake_select_lex->options
has to be saved before setting the flag SELECT_DESCRIBE and restored
on return from SELECT_LEX_UNIT::prepare/SELECT_LEX_UNIT::execute.
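A mock of the save/restore pattern; the types and the SELECT_DESCRIBE
bit value below are illustrative, not the actual server code:

#include <cstdint>

struct select_lex_mock { uint64_t options; };
static const uint64_t SELECT_DESCRIBE= 1ULL << 2;  /* value illustrative */

static void explain_union_sketch(select_lex_mock *fake_select_lex)
{
  uint64_t save_options= fake_select_lex->options;  /* the fix: save */
  fake_select_lex->options|= SELECT_DESCRIBE;
  /* ... SELECT_LEX_UNIT::prepare() / ::execute() run here ... */
  fake_select_lex->options= save_options;           /* the fix: restore */
}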
As main() invokes parse_page() when -S or -D is set, parse_page() can
be invoked when the -D filename is not set; that is why any attempt to
write to the page dump file must be made only if the file name was set
with -D.
The bug is caused by 2ef7a5a13a
(MDEV-13443).
In commit 437da7bc54 (MDEV-19534),
the default value of the global variable srv_checksum_algorithm
in innochecksum was changed from SRV_CHECKSUM_ALGORITHM_INNODB
to implied 0 (innodb_checksum_algorithm=crc32). As a result,
the function buf_page_is_corrupted() would by default invoke
buf_calc_page_crc32() in innochecksum, and crc32_inited would hold.
This would cause "innochecksum" to fail on a particular page.
The actual problem is older, introduced in 2011 in
mysql/mysql-server@17e497bdb7
(MySQL 5.6.3). It should affect the validation of pages of old
data files that were written with innodb_checksum_algorithm=innodb.
When using innodb_checksum_algorithm=crc32 (the default setting
since MariaDB Server 10.2), some valid pages would be rejected
only because exactly one of the two checksum fields accidentally
matches the innodb_checksum_algorithm=crc32 value.
buf_page_is_corrupted(): Simplify the logic of non-strict
checksum validation, by always invoking buf_calc_page_crc32().
Remove a bogus condition that if only one of the checksum fields
contains the value returned by buf_calc_page_crc32(), the page
is corrupted.
Problem: In regular replication, when the master binlogged using
statement format, the slave might not have written an event to its
binary log when the Query event aimed at a temporary table.
Specifically, this was observed with LOAD DATA INFILE.
This effect was possible because, unlike the master, the slave holds
temporary tables in its pool, so the master-side check for the
existence of a temporary table at the format bin-logging decision did
not apply.
Solution: replace THD::has_thd_temporary_tables() with
THD::has_temporary_tables(), which allows identifying temporary table
presence on either side.
--
Reviewed by Andrei Elkin.
Problem:
========
When using mariadb-binlog with --raw and --stop-never, events from
the master's currently active log file should be written to their
respective log file specified by --result-file and appear on disk.
There is a bug where mariadb-binlog does not flush the result file
to disk when new events are received.
Solution:
========
Add a function call to flush mariadb-binlog’s result file after
receiving an event in --raw mode.
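A sketch of the change, assuming plain stdio stands in for the tool's
real I/O layer:

#include <cstdio>

static bool write_raw_event(std::FILE *result_file,
                            const unsigned char *buf, std::size_t len)
{
  if (std::fwrite(buf, 1, len, result_file) != len)
    return false;
  /* the added flush: make the event visible on disk immediately */
  return std::fflush(result_file) == 0;
}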
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Fix a 2015 typo in the build scripts:
--without-plugin=plugin_file_key_management translates to
-DPLUGIN_PLUGIN_FILE_KEY_MANAGEMENT=NO.
Replace it with a line from 10.4 that builds the plugin
dynamically.
TYPELIBs for ENUM/SET columns could erroneously undergo redundant
hex-unescaping at table open time.
Fix:
- Prevent multiple unescaping of the same TYPELIB
- Prevent sharing TYPELIBs between columns with different mbminlen
Per Marko's comment in JIRA, sql_kill is passing the thread id
as long long. We change the format of the error messages to match,
and cast the thread id to long long in sql_kill_user.
The 10.5 test failure in main.grant_kill showed an incorrect
thread id on a big-endian architecture.
The cause is that the sql_kill_user function assumed the
error was ER_OUT_OF_RESOURCES, when the actual error was
ER_KILL_DENIED_ERROR. ER_KILL_DENIED_ERROR as an error message
requires a thread id to be passed as unsigned long, however a
user/host was passed.
ER_OUT_OF_RESOURCES doesn't even take a user/host, despite
the optimistic comment. We remove this being passed as an
argument to the function so that when MDEV-21978 is implemented
one less compiler format warning is generated (which would
have caught this error sooner).
Thanks Otto for reporting and Marko for analysis.
Problem:
Parse-time conversion from binary to tricky character sets like utf32
produced ill-formed strings. So, later, a crash happened in debug
builds, or a wrong SHOW CREATE TABLE was returned in release builds.
Fix:
1. Backporting a few methods from 10.3:
- THD::check_string_for_wellformedness()
- THD::convert_string() overloads
- THD::make_text_string_connection()
2. Adding a new method THD::reinterpret_string_from_binary(),
which makes sure to either return a well-formed string
(optionally prepended with zero bytes) or return an error.
Travis is dead to us so we don't need all the conditions around it.
Remove depends for no-longer-supported versions:
Debian Jessie and Ubuntu Trusty, Xenial, and Wily are all EOL
as far as we are concerned.
The dependency on an apt cache when running autobake broke the
10.2 aarch64 packages (MDEV-28014). Let's reduce the risk here.
Fixed a typo in my_malloc_size_cb_func. There is no max-thread-mem-used
sys variable in MariaDB, only max-session-mem-used. The relevant entry
in sys_vars.cc is also fixed.
Added a fallback in case we could not allocate the 256 bytes for the
error message containing the exact setting.
btr_cur_optimistic_insert(): Disregard DEBUG_DBUG injection to
invoke btr_page_reorganize() if the page (and the table) is empty.
Otherwise, an assertion would fail in btr_page_reorganize_low()
because PAGE_MAX_TRX_ID is 0 in an empty secondary index leaf page.