don't initialize error_log_handler_list in set_handlers()
* error_log_handler_list is initialized to LOG_FILE early, in init_base()
* set_handlers() always reinitializes it to LOG_FILE anyway, so doing it again is pointless
* after init_base(), concurrent threads start using sql_log_warning,
so a later set_handlers() call shouldn't modify error_log_handler_list
without some protection
This crash happens on a combination of multiple conditions:
- There is a thread#1 running an "ANALYZE FORMAT=JSON" query for a
"SELECT .. FROM INFORMATION_SCHEMA.COLUMNS WHERE .. "
- The WHERE clause contains a stored function call, say f1().
- The WHERE clause is built in such a way that the function f1()
is never actually called, e.g.
WHERE .. AND (TRUE OR f1()=expr)
- The database contains multiple VIEWs that have the function f1() call,
e.g. in their <select list>
- The WHERE clause is built in such a way that these VIEWs match
the condition.
- There is a parallel thread#2 running. It creates, drops, or recreates
some other stored routine, say f2(), which is not used in the ANALYZE query.
It effectively invalidates the stored routine cache for thread#1
without locking.
Note, it is important that f2() is NOT used by the ANALYZE query.
Otherwise, thread#2 would be locked until the ANALYZE query
finishes.
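A minimal sketch of such a setup (all object names are illustrative,
not the actual test case):

CREATE FUNCTION f1() RETURNS INT RETURN 1;
CREATE FUNCTION f2() RETURNS INT RETURN 2;
CREATE VIEW v1 AS SELECT f1() AS c1;
CREATE VIEW v2 AS SELECT f1() AS c1;

-- thread#1: f1() is referenced in the WHERE clause but never evaluated
ANALYZE FORMAT=JSON
SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA='test' AND (TRUE OR f1()=1);

-- thread#2, concurrently: invalidates the SP cache of thread#1
CREATE OR REPLACE FUNCTION f2() RETURNS INT RETURN 3;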
When all of the above conditions are met, the following happens:
1. thread#1 starts the ANALYZE query. It notices a call for the stored function
f1() in the WHERE condition. The function f1() gets parsed and cached
in the SP cache. Its address also gets assigned to Item_func_sp::m_sp.
2. thread#1 starts iterating through all tables that
match the WHERE condition to find the information about their columns.
3. thread#1 processes columns of the VIEW v1.
It notices a call for f1() in the VIEW v1 definition.
But f1() was already cached in step#1 and is up to date.
So nothing happens with the SP cache.
4. thread#2 re-creates f2() in a non-locking mode.
It effectively invalidates the SP cache in thread#1.
5. thread#1 processes columns of the VIEW v2.
It notices a call for f1() in the VIEW v2 definition.
It also notices that the cached version of f1() is not up to date.
It frees the old definition of f1(), parses it again, and puts a
new version of f1() into the SP cache.
6. thread#1 finishes processing rows and generates the JSON output.
When printing the "attached_condition" value, it calls
Item_func_sp::print() for f1(). But this Item_func_sp links
to the old (freed) version of f1().
The above scenario demonstrates that Item_func_sp::m_sp can point to an
already freed instance when Item_func_sp::func_name() is called,
so accessing Item_sp::m_sp->m_handler is not safe.
This patch rewrites the code to use Item_func_sp::m_handler instead,
which is always reliable.
Note, this patch is only a cleanup for MDEV-28166 to quickly fix the regression.
It fixes MDEV-28267. But it does not fix the core problem:
The code behind I_S does not take into account that the SP
cache can be updated while evaluating rows of the COLUMNS table.
This is a corner case and it never happens with any other tables.
I_S.COLUMNS is very special.
Another example of the core problem is reported in MDEV-25243.
The code accesses Item_sp::m_sp->m_chistics of an
already freed m_sp, again. It will be addressed separately.
We will remove the parameter innodb_disallow_writes because it is badly
designed and implemented. The parameter was never allowed at startup.
It was only internally used by Galera snapshot transfer.
If a user executed
SET GLOBAL innodb_disallow_writes=ON;
the server could hang even on subsequent read operations.
During Galera snapshot transfer, we will block writes
to implement an rsync friendly snapshot, as follows:
sst_flush_tables() will acquire a global lock by executing
FLUSH TABLES WITH READ LOCK, which will block any writes
at the high level.
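For reference, the high-level blocking amounts to this pair of statements
(a sketch; the actual SST scripts add coordination around them):

FLUSH TABLES WITH READ LOCK;
-- ... take the rsync snapshot while writes are blocked ...
UNLOCK TABLES;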
sst_disable_innodb_writes(), invoked via ha_disable_internal_writes(true),
will suspend or disable InnoDB background tasks or threads that could
initiate writes. As part of this, log_make_checkpoint() will be invoked
to ensure that anything in the InnoDB buf_pool.flush_list will be written
to the data files. This has the nice side effect that the Galera joiner
will avoid crash recovery.
The changes to sql/wsrep.cc and to the tests are based on a prototype
that was developed by Jan Lindström.
Reviewed by: Jan Lindström
Fixing a typo in the fix for MDEV-19804: a wrong return value in a bool function:
< return NULL;
> return true;
The problem was found because it did not compile on some platforms.
Strangely, there were no visible problems on the platforms where it
did compile, although "return NULL" should compile to "return false"
rather than "return true".
This bug report is about the same issue as MDEV-28129 and MDEV-21173.
The issue is that the macro YYABORT is called instead of MYSQL_YYABORT
on a parse error. As a result, the method LEX::cleanup_lex_after_parse_error
is not called to clean up the data structures created while parsing
the statement.
Second execution of a prepared statement for a query containing a constant
subquery with a union that can be optimized away could result in abnormal
server termination for a debug build, or in incorrect result set output
for a release build.
For example, the following test case crashes a debug build of the server
on the second run of the statement EXECUTE stmt:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'EXPLAIN SELECT * FROM t1 HAVING 6 IN ( SELECT 6 UNION SELECT 5 )';
EXECUTE stmt;
EXECUTE stmt;
The reason for the incorrect result set output or abnormal server termination
is careless handling of the data member fake_select_lex->options inside
the function mysql_explain_union(). Once the flag SELECT_DESCRIBE is set in
the data member fake_select_lex->options before calling the methods
SELECT_LEX_UNIT::prepare/SELECT_LEX_UNIT::execute,
the original value of the options is no longer restored.
As a consequence, the next time the prepared statement is re-executed we have
the fake_select_lex with the flag SELECT_DESCRIBE set in the data member
fake_select_lex->options, which is incorrect. As a result, the method
Item_subselect::assigned()
is not invoked during evaluation of the constant condition (the constant
subquery with union) performed during the OPTIMIZE phase of query handling.
This leads to the fact that records in the temporary table are not deleted
before calling
table->file->ha_enable_indexes(HA_KEY_SWITCH_ALL)
in the method st_select_lex_unit::optimize().
As a result, table->file->ha_enable_indexes(HA_KEY_SWITCH_ALL) returns an
error and DBUG_ASSERT(0) is fired.
The stack trace down to the line where the error is generated on re-enabling
indexes for the next subselect iteration is below:
st_select_lex_unit::optimize (at sql_union.cc:954)
handler::ha_enable_indexes (at handler.cc:4338)
ha_heap::enable_indexes (at ha_heap.cc:519)
heap_enable_indexes (at hp_clear.c:164)
The code snippet to clarify raising the error is also listed:
int heap_enable_indexes(HP_INFO *info)
{
  int error= 0;
  HP_SHARE *share= info->s;
  if (share->data_length || share->index_length)
    error= HA_ERR_CRASHED; /* error is set to HA_ERR_CRASHED here,
                              since share->data_length != 0 */
  ...
}
To fix this issue, the original value of unit->fake_select_lex->options
has to be saved before setting the flag SELECT_DESCRIBE and restored on
return from the invocation of SELECT_LEX_UNIT::prepare/SELECT_LEX_UNIT::execute.
records_are_comparable() requires this condition:
bitmap_is_subset(table->write_set, table->read_set)
On the first iteration vers_update_fields() changes write_set and
read_set. On the second iteration the above condition fails.
Added the missing read bit for ROW_START. Also reorganized
bitmap_set_bit() so it is called only when needed.
Throw ER_NOT_FORM_FILE if the FRM data is wrong (the warning with
ER_VERS_FIELD_WRONG_TYPE is still printed to give deeper insight into
what happened).
Keep ER_VERS_FIELD_WRONG_TYPE for creating partitioned table with
trx-versioning. Tested by MDEV-15951 in trx_id.test
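For reference, a sketch of a partitioned table with trx-id-based versioning
of the kind exercised by trx_id.test (column names are illustrative):

CREATE TABLE t1 (
  a INT,
  s BIGINT UNSIGNED GENERATED ALWAYS AS ROW START,
  e BIGINT UNSIGNED GENERATED ALWAYS AS ROW END,
  PERIOD FOR SYSTEM_TIME(s, e)
) WITH SYSTEM VERSIONING
  PARTITION BY SYSTEM_TIME
    (PARTITION p0 HISTORY, PARTITION pn CURRENT);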
This bug affected queries with IN predicates that contain parameter markers
in the value list. Such queries are executed via prepared statements.
The problem appeared only if the number of elements in the value list
was greater than the set value of the system variable
in_predicate_conversion_threshold.
The patch unconditionally prohibits conversion of an IN predicate to the
equivalent IN predicand if the value list of the IN predicate contains
parameter markers.
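An illustrative sketch of the affected pattern (threshold value and table
are made up):

SET in_predicate_conversion_threshold=2;
CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'SELECT * FROM t1 WHERE a IN (?, ?, ?)';
SET @a=1, @b=2, @c=3;
EXECUTE stmt USING @a, @b, @c;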
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem: In regular replication, when the master binlogged in statement format,
the slave might not have written an event to its own binary log when the
Query event was aimed at a temporary table.
Specifically, this was observed with LOAD DATA INFILE.
This effect was possible because, unlike the master, the slave holds temporary
tables in its pool, so the master-side check for the existence of a
temporary table in the binlogging-format decision did not apply.
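A sketch of the affected pattern (object and file names are illustrative):

-- on the master, with binlog_format=STATEMENT:
CREATE TEMPORARY TABLE tmp1 (a INT);
LOAD DATA INFILE 'data.txt' INTO TABLE tmp1;
-- the slave keeps tmp1 in its own temporary table pool, so it must
-- also write the event to its own binary log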
Solution: replace THD::has_thd_temporary_tables() with
THD::has_temporary_tables, which allows identifying temporary-table
presence on either side.
--
Reviewed by Andrei Elkin.
This commit adds an mtr test reproducing a scenario where, despite
innodb_disallow_writes blocking, writes to the file system can still happen.
The test launches a garbd node, which triggers one of the cluster nodes to
switch to the SST donor state. In this state, all disk activity should be
halted, and e.g. innodb_disallow_writes has been set. The test records an
md5sum aggregate over the mariadb data directory when the node enters the
donor state, and records another md5sum when the node leaves the donor
state. If there is no IO activity in the data directory, these hashes
should be equal.
For this test, the donor state processing has been instrumented so that the
SST donor thread can be stopped when entering the donor state. The test
uses this new dbug sync point to control when to record the md5sums.
A new SST script was added: wsrep_sst_backup. garbd uses the backup method
to make the donor node call this script and enter the donor state.
The backup script could later be extended into a general-purpose backup
method for the cluster.
This commit also fixes a race condition in wsrep_sst_rsync:
* the wsrep_rsync_sst script requests a flush of tables,
and then waits in a loop until mariadbd has created the file tables_flushed,
as confirmation that FLUSH TABLES has completed
* mariadbd's SST donor thread wakes up for the flush tables request, performs
FTWRL, and after this creates the tables_flushed file
* note that the SST script will now continue and start up the rsync sending
* mariadbd's SST donor thread now calls sst_disallow_writes(),
so that InnoDB would set up disk IO blocking; however, rsyncing may already
be ongoing at this point
This race condition is fixed in this commit by performing all disk IO
blocking before creating the tables_flushed file.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
========
When the mysql.gtid_slave_pos table uses the InnoDB engine, and
mysqld starts, it reads the table and begins a transaction. After
reading the value, it should end the transaction and release all
associated locks. The bug reported in DBAAS-7828 shows that when
autocommit is off, the locks are not released, resulting in
indefinite hangs on future attempts to change gtid_slave_pos. In
particular, the transaction was not properly finalized because
thd->server_status was not updated to reflect the end of the
transaction.
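An illustrative sequence (assuming the server was started with autocommit=0
and mysql.gtid_slave_pos using InnoDB):

-- the start-up read of mysql.gtid_slave_pos leaves its locks held,
-- so a later attempt to change the position hangs indefinitely:
SET GLOBAL gtid_slave_pos = '0-1-100';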
Solution:
========
This patch updates the code to properly commit the transaction after
reading gtid_slave_pos during mysqld start-up.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:
========
When both semi-sync and slave compression are enabled, the numbering
on packet headers can become out of sync between the primary and
replica servers. More specifically, after the master flushes its
write, it should increment the counters that track packets. The
bug is such that the master only updates the normal packet counter
and leaves the compressed packet counter alone.
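The combination that triggers the bug can be sketched as follows (variable
names as in the built-in semi-sync plugin):

-- on the primary:
SET GLOBAL rpl_semi_sync_master_enabled=ON;
-- on the replica:
SET GLOBAL rpl_semi_sync_slave_enabled=ON;
SET GLOBAL slave_compressed_protocol=ON;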
Solution:
========
After the master flushes, additionally increment the compressed
packet counter.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This bug could affect prepared statements for the command CREATE VIEW with
a specification that contained an unnamed basic constant in the select list.
If generation of a valid name for the corresponding view column required
resolving conflicts with the names of other columns that were explicitly
defined, then execution of such a prepared statement followed by its
deallocation led to reading from freed memory.
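An illustrative sketch of the pattern (names are made up): the unnamed
constant 1 would get the generated column name "1", which conflicts with
the explicitly defined alias `1` and has to be resolved:

CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'CREATE VIEW v1 AS SELECT 1, 2 AS `1` FROM t1';
EXECUTE stmt;
DEALLOCATE PREPARE stmt;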
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
DECIMAL columns in I_S must be explicitly set to some value.
I_S columns do not have `DEFAULT 0` (after MDEV-18918), so during
restore_record() their record fragments pointed to by Field::ptr are
initialized to zero bytes 0x00.
But an array of 0x00's is not a valid binary DECIMAL value.
So val_decimal() called for such a Field_new_decimal generated a warning
when seeing a wrongly encoded binary DECIMAL value in the record.
Fix:
Explicitly setting INFORMATION_SCHEMA.PROCESSLIST.PROGRESS
to the decimal value of 0 if no progress information is available.
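Before the fix, a query like the following could produce such a warning
when a thread had no progress information; now PROGRESS is explicitly 0:

SELECT ID, PROGRESS FROM INFORMATION_SCHEMA.PROCESSLIST;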
The problem was that the instructions sp_instr_cursor_copy_struct and
sp_instr_copen use the same LEX, adding and removing the "tail" of
prelocked tables while forgetting that the tail of the whole table list is
kept in LEX::query_tables_last. If the LEX is used by only one instruction,
or the query has no prelocked tables, this does not matter.
But to work correctly in all cases, LEX::query_tables_last should
be reset so that new tables are added to the correct place in the list
(after the last table of the LEX instead of after the last table of the
prelocking "tail" which was cut).
TYPELIBs for ENUM/SET columns could erroneously undergo redundant
hex-unescaping at table open time.
Fix:
- Prevent multiple unescaping of the same TYPELIB
- Prevent sharing TYPELIBs between columns with different mbminlen
Per Marko's comment in JIRA, sql_kill is passing the thread id
as long long. We change the format of the error messages to match,
and cast the thread id to long long in sql_kill_user.
The 10.5 test failure in main.grant_kill showed an incorrect
thread id on a big-endian architecture.
The cause of this is that the sql_kill_user function assumed the
error was ER_OUT_OF_RESOURCES, when the actual error was
ER_KILL_DENIED_ERROR. ER_KILL_DENIED_ERROR as an error message
requires a thread id to be passed as unsigned long, however a
user/host was passed.
ER_OUT_OF_RESOURCES doesn't even take a user/host, despite
the optimistic comment. We remove this being passed as an
argument to the function so that when MDEV-21978 is implemented
one less compiler format warning is generated (which would
have caught this error sooner).
Thanks Otto for reporting and Marko for analysis.
commit '6de482a6fefac0c21daf33ed465644151cdf879f'
10.3 no longer errors in truncate_notembedded.test,
but per the comments, a non-crash is all that we are after.
Problem:
Parse-time conversion from binary to tricky character sets like utf32
produced ill-formed strings. So, later a crash happened in debug builds,
or wrong SHOW CREATE TABLE output was returned in release builds.
Fix:
1. Backporting a few methods from 10.3:
- THD::check_string_for_wellformedness()
- THD::convert_string() overloads
- THD::make_text_string_connection()
2. Adding a new method THD::reinterpret_string_from_binary(),
which makes sure to either return a well-formed string
(optionally prepending zero bytes), or return an error.
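An illustrative example (the exact statement is made up): x'61' alone is
not a valid utf32 string; after the fix it is reinterpreted by prepending
zero bytes, yielding 0x00000061 ('a'), instead of an ill-formed value:

CREATE TABLE t1 (a CHAR(10) CHARACTER SET utf32 DEFAULT x'61');
SHOW CREATE TABLE t1;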
Whenever possible, partitioning should use the full
partition plugin name, not the one-byte legacy code.
Normally, ha_partition can get the engine plugin from
table_share->default_part_plugin.
But in some cases, e.g. in DROP TABLE, the table isn't
opened, table_share is NULL, and ha_partition has to parse
the frm, much like dd_frm_type() does.
temporary_tables.cc, sql_table.cc:
When dropping a table, it must be deleted in the engine
first, then the frm file, because the frm can be the only true
source of metadata that the engine might need for DROP.
table.cc:
when opening a partitioned table, if the engine for
partitions is not found, do not fall back to MyISAM.
Not every index-using plan sets bits in table->quick_keys.
QUICK_ROR_INTERSECT_SELECT, for example, doesn't.
Use the fact that select->quick is set instead.
Also allow EXPLAIN to work.
The first step for deprecating innodb_autoinc_lock_mode (see MDEV-27844) is:
- to switch statement binlog format to ROW if binlog format is MIXED and
the statement changes autoincremented fields
- to issue warnings if innodb_autoinc_lock_mode == 2 and binlog format is
STATEMENT (see the sketch after this list)
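An illustrative sketch of the second case (innodb_autoinc_lock_mode is
settable only at server startup):

-- with innodb_autoinc_lock_mode=2 (interleaved):
SET SESSION binlog_format=STATEMENT;
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT) ENGINE=InnoDB;
INSERT INTO t1 (a) VALUES (1);  -- now issues an unsafe-statement warning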
The warning out of OPTIMIZE,
"Statement is unsafe because it uses a system function",
was indeed counterfactual and resulted from checking an
insufficiently strict property of the lex's sql_command_flags.
Fixed by deploying an additional check of whether
the current SQL command that modifies a share->non_determinstic_insert
table is capable of generating ROW format events.
The extra check rules out the unsafety for OPTIMIZE et al, while the
existing check continues to do so for CREATE TABLE (which is
peculiarly tagged as a ROW-event generating sql command).
As a side effect, the sql_sequence.binlog test gets corrected, and
binlog_stm_unsafe_warning.test is reinforced with
an unsafe CREATE..SELECT test.
Fixed typo in my_malloc_size_cb_func. There is no max-thread-mem-used
sys variable in MariaDB, only max-session-mem-used. The relevant entry
in sys_vars.cc is also fixed.
Added a fallback case for when we cannot allocate the 256 bytes for the
error message containing the exact setting.
udf_handler::fix_fields(): Execute an assignment outside "if"
so that GCC 12 will not issue a bogus-looking warning.
Also, deduplicate some error handling code.
[Patch idea by Igor Babaev]
Symptom: for IN (SELECT ...) subqueries using IN-to-EXISTS transformation,
the optimizer was unable to make inferences using multiple equalities.
The cause is the code in Item_in_subselect::inject_in_to_exists_cond(), which
may break invariants that the Multiple-Equality code relies on. In particular,
it may produce a WHERE condition with an empty Item_cond::m_cond_equal.
Fixed this by making sure Item_cond::m_cond_equal is properly filled.
The sys_var class has the deprecation_substitute member to mark
deprecated variables. When it is set, the server produces warnings when
these variables are used. However, plugins had no means to utilize
that functionality.
So, the PLUGIN_VAR_DEPRECATED flag is introduced to set the
deprecation_substitute to the empty string. A non-empty string could
make the warning more informative, but there is no nice way to
specify it, and it is not needed at the moment.
The assertion failure was caused by this query:
select /*id=1*/ ... from t1
where
col= ( select /*id=2*/ ... from ... where corr_cond1
union
select /*id=4*/ ... from ... where corr_cond2)
Here,
- select with id=2 was correlated due to corr_cond1.
- select with id=4 was initially correlated due to corr_cond2, but then
the optimizer optimized away the correlation, making the select with id=4
uncorrelated.
However, since select with id=2 remained correlated, the execution had to
re-compute the whole UNION. When it tried to execute select with id=4, it
hit an assertion (join buffer already free'd).
This is because the select with id=4 had freed its execution structures
after it was executed once. The select is uncorrelated, so it did not
expect it would need to be executed a second time.
Fixed this by adding this logic in
st_select_lex::optimize_unflattened_subqueries():
If a member of a UNION is correlated, mark all its members as
correlated, so that they are prepared to be executed multiple times.