Handling of top-level conjuncts in WHERE whose used_tables() contained
RAND_TABLE_BIT was incorrect in the function make_join_select().
As a result, if such a conjunct referred to fields none of which belonged
to the last joined table, it was pushed twice. (This could be seen
for a test case from subselect.test whose output was changed after this
patch had been applied. In 10.1, when running EXPLAIN FORMAT=JSON for
the query from this test case, we clearly see that one of the conjuncts
is pushed twice.) This fact by itself was not good. Besides, if such a
conjunct was pushed to a table that was the result of materialization
of a semi-join, the query could return a wrong result set. In particular,
this could be observed for queries with semi-join subqueries whose left parts
used stored functions without the DETERMINISTIC specifier.
as well as
MDEV-19500 Update with join stopped working if there is a call to a procedure in a trigger
MDEV-19521 Update Table Fails with Trigger and Stored Function
MDEV-19497 Replication stops because table not found
MDEV-19527 UPDATE + JOIN + TRIGGERS = table doesn't exists error
Reimplement the fix (5d510fdbf0) for
MDEV-18507 can't update temporary table when joined with table with triggers on read-only:
instead of calling open_tables() twice, put the multi-update
prepare code inside the open_tables() loop.
Add a test for an MDL backoff-and-retry loop inside open_tables()
across the multi-update prepare code.
This patch complements the patch that fixes bug MDEV-18479.
This patch takes care of possible overflow when calculating the
estimated number of rows in a materialized derived table / view.
This bug could happen when queries with nested outer joins were
executed employing join buffers. In such an execution, if the method
JOIN_CACHE::join_records() is called when a join buffer has become
full, the 'first_unmatched' field must not be cleaned up in the JOIN_TAB
structure to which the join cache with this buffer is attached.
or server crashes in JOIN::fix_all_splittings_in_plan after EXPLAIN
This patch resolves the problem of overflow when performing
calculations to estimate the cost of an evaluated query execution plan.
In a non-debug build this overflow could cause different kinds of
problems, including server crashes.
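As an illustration only (not the server's code), a minimal sketch of the kind of guarded multiplication such a fix relies on: cost and row estimates are doubles, so products are capped at a chosen ceiling (the MAX_ESTIMATE constant below is hypothetical) instead of being allowed to grow without bound, which keeps later comparisons and conversions well-behaved.

  #include <cstdio>

  // Hypothetical ceiling for cost / row-count estimates (illustration only).
  static const double MAX_ESTIMATE = 1e100;

  // Multiply two non-negative estimates without letting the result exceed
  // the ceiling; all downstream arithmetic then stays well-defined.
  static double safe_mul(double a, double b)
  {
    if (a == 0.0 || b == 0.0)
      return 0.0;
    if (a > MAX_ESTIMATE / b)      // a * b would exceed the ceiling
      return MAX_ESTIMATE;
    return a * b;
  }

  int main()
  {
    double rows_per_scan = 1e90, scans = 1e90;
    printf("clamped estimate: %g\n", safe_mul(rows_per_scan, scans));   // MAX_ESTIMATE
    printf("normal estimate:  %g\n", safe_mul(1000.0, 42.0));           // 42000
    return 0;
  }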
This patch is for MEM_ROOT only.
In debug mode, add 8 bytes of poisoned memory before every allocated chunk.
To the right of every chunk there will be either 1-7 trailing poisoned bytes,
the next chunk's redzone, poisoned non-allocated memory, or the redzone of a
malloc()ed buffer.
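A minimal sketch of the redzone idea under AddressSanitizer, with hypothetical names (Arena, arena_alloc) rather than the actual MEM_ROOT code: each chunk handed out from a malloc()ed block is preceded by 8 poisoned bytes, and the rounding slack after the chunk stays poisoned, so reads or writes just outside the chunk are reported. Build with -fsanitize=address.

  #include <sanitizer/asan_interface.h>
  #include <cstdlib>
  #include <cstdio>

  static const size_t kRedzone = 8;

  struct Arena {
    char  *buf;
    size_t used, size;
  };

  static void *arena_alloc(Arena *a, size_t len)
  {
    size_t rounded = (len + 7) & ~size_t(7);        // keep chunks 8-byte aligned
    if (a->used + kRedzone + rounded > a->size)
      return nullptr;
    char *chunk = a->buf + a->used + kRedzone;      // 8 poisoned bytes stay in front
    a->used += kRedzone + rounded;
    ASAN_UNPOISON_MEMORY_REGION(chunk, len);        // caller may touch [chunk, chunk+len)
    return chunk;                                   // redzone and rounding slack stay poisoned
  }

  int main()
  {
    Arena a = { static_cast<char *>(malloc(1024)), 0, 1024 };
    ASAN_POISON_MEMORY_REGION(a.buf, a.size);       // the whole arena starts out poisoned
    char *p = static_cast<char *>(arena_alloc(&a, 13));
    p[0] = 'x';                                     // OK
    // p[-1] = 'x';                                 // ASan: write into the left redzone
    // p[13] = 'x';                                 // ASan: write into a trailing poisoned byte
    printf("chunk at %p\n", static_cast<void *>(p));
    ASAN_UNPOISON_MEMORY_REGION(a.buf, a.size);     // unpoison before handing back to malloc
    free(a.buf);
    return 0;
  }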
This patch complements the original patch for MDEV-18896 that prevents
conversions to semi-joins in table-less selects used in INSERT statements
in post-5.5 versions of the server.
The test case was corrected as well to ensure that potential conversion
to jtbm semi-joins is also checked (the problem was that one of
the preceding test cases in subselect_sj.test did not restore the
state of the optimizer switch, leaving 'materialization' in the state
'off' and so blocking this check).
Noticed an inconsistency in the state of select_lex::table_list used
in INSERT statements and left a comment about this.
Some places didn't match the previous rules, making the Floor
address wrong.
Additional sed rules:
sed -i -e 's/Place.*Suite .*, Boston/Street, Fifth Floor, Boston/g'
sed -i -e 's/Suite .*, Boston/Fifth Floor, Boston/g'
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
Before killing the server, we must issue FLUSH TABLES in order
to cleanly close any MyISAM system tables, to avoid warnings about
them when restarting.
This patch fixes an invalid read in fill_effective_table_privileges
triggered by a grant_version increase between a PREPARE for a
statement creating a view from I_S and its EXECUTE.
A tmp table was created and freed while preparing the statement, and
TABLE_LIST::table_name was set to point to the tmp table's
TABLE_SHARE::table_name, which no longer existed after preparing was
done.
The grant version increase caused fill_effective_table_privileges,
called during EXECUTE, to try to fetch the updated grant info, and
this is where the dangling table name was used.
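A tiny sketch of the dangling-pointer pattern described above, with hypothetical stand-ins (Share, Ref) for TABLE_SHARE and TABLE_LIST: keeping a raw pointer into an object freed at the end of prepare leaves an invalid read for execute, while copying the name into memory with the statement's lifetime avoids it.

  #include <cstring>
  #include <cstdlib>
  #include <cstdio>

  // Hypothetical stand-ins for TABLE_SHARE and TABLE_LIST (illustration only).
  struct Share { char name[64]; };
  struct Ref   { const char *table_name; };

  int main()
  {
    Share *tmp = static_cast<Share *>(malloc(sizeof(Share)));  // "prepare" builds a tmp table
    strcpy(tmp->name, "tmp_is_table");

    Ref buggy = { tmp->name };          // buggy: raw pointer into the tmp share's memory
    Ref fixed = { strdup(tmp->name) };  // fixed: a copy that outlives the tmp share
    (void) buggy;

    free(tmp);                          // end of prepare: the tmp table and its share are freed

    // "execute" later needs the table name to refetch grant info:
    printf("%s\n", fixed.table_name);     // OK
    // printf("%s\n", buggy.table_name);  // invalid read of freed memory (ASan would flag it)

    free(const_cast<char *>(fixed.table_name));
    return 0;
  }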
Triggers are opened and tables used in triggers are prelocked in
open_tables(). But multi-update can detect which tables will actually
be updated only later, after all main tables are opened.
This means that if a table is used in multi-update but is not actually updated,
its on-update triggers will be opened and their tables will be prelocked,
even though it is unnecessary. This can cause more tables to be
write-locked than needed, causing read_only errors, privilege errors
and lock waits.
Fix: don't open/prelock triggers unless table->updating is true.
In multi-update, after setting table->updating=true, do a second
open_tables() for newly added tables, if any.
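A rough sketch of this two-pass idea using hypothetical names (TableRef, open_tables_for) rather than the real server structures: the first pass skips trigger prelocking for tables whose updating flag is not set, and once multi-update has worked out which tables it really modifies, a second pass opens whatever was skipped.

  #include <string>
  #include <vector>
  #include <cstdio>

  // Hypothetical model of the table list used by a multi-update (not server code).
  struct TableRef {
    std::string name;
    bool updating;          // will this table actually be modified?
    bool triggers_opened;   // have its on-update triggers been opened/prelocked?
  };

  // Pass over the list: open triggers only for tables marked as updating.
  static void open_tables_for(std::vector<TableRef> &tables)
  {
    for (TableRef &t : tables)
      if (t.updating && !t.triggers_opened) {
        t.triggers_opened = true;                     // stand-in for the open/prelock work
        printf("prelocked triggers of %s\n", t.name.c_str());
      }
  }

  int main()
  {
    // First pass: nothing is known to be updated yet, so no triggers are opened.
    std::vector<TableRef> tables = { {"t1", false, false}, {"t2", false, false} };
    open_tables_for(tables);

    // Multi-update resolves its SET list and discovers that only t1 is updated.
    tables[0].updating = true;

    // Second pass opens triggers just for the newly marked table.
    open_tables_for(tables);
    return 0;
  }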
It always required the UPDATE privilege on views, not being able to detect
when a view was not actually updated in multi-update.
Fix: instead of marking all tables as "updating" by default,
only set "updating" on tables that will actually be updated
by multi-update, and mark a view as "updating" if any of the
view's tables is.
InnoDB could return the same list again and again if the buffer
passed to trx_recover_for_mysql() is smaller than the number of
transactions that InnoDB recovered in XA PREPARE state.
We introduce the transaction state TRX_PREPARED_RECOVERED, which
is like TRX_PREPARED, but will be set during trx_recover_for_mysql()
so that each transaction will only be returned once.
Because init_server_components() is invoking ha_recover() twice,
we must reset the state of the transactions back to TRX_PREPARED
after returning the complete list, so that repeated traversals
will see the complete list again, instead of seeing an empty list.
Without this tweak, the test main.tc_heuristic_recover would hang
in MariaDB 10.1.
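A small sketch of one way to realize the behaviour described above, with hypothetical types (Trx, recover_some), not the InnoDB code itself: a PREPARED transaction is marked as handed out the first time it is returned, so a caller with a small buffer sees each transaction only once; when a scan hands out nothing new, the complete list has been returned and the states are flipped back so a later traversal sees the full list again.

  #include <vector>
  #include <cstdio>

  // Hypothetical transaction states (a model of the idea, not InnoDB code).
  enum State { PREPARED, PREPARED_RECOVERED };

  struct Trx { int id; State state; };

  // Fill 'buf' with up to 'len' prepared transactions, each handed out once.
  static size_t recover_some(std::vector<Trx> &trx_list, int *buf, size_t len)
  {
    size_t n = 0;
    for (Trx &t : trx_list) {
      if (t.state == PREPARED) {
        t.state = PREPARED_RECOVERED;   // mark it so it is not returned again
        buf[n++] = t.id;
        if (n == len)
          return n;                     // buffer full; the caller will ask again
      }
    }
    // A scan that hands out nothing new means the complete list has been
    // returned: flip the states back so a later, independent traversal
    // (e.g. a second ha_recover() call) sees the full list, not an empty one.
    if (n == 0)
      for (Trx &t : trx_list)
        if (t.state == PREPARED_RECOVERED)
          t.state = PREPARED;
    return n;
  }

  int main()
  {
    std::vector<Trx> trx = { {1, PREPARED}, {2, PREPARED}, {3, PREPARED} };
    int buf[2];                          // deliberately smaller than the trx count
    size_t n;
    while ((n = recover_some(trx, buf, 2)) > 0)
      for (size_t i = 0; i < n; i++)
        printf("recovered XA transaction %d\n", buf[i]);   // 1, 2, 3 -- each once
    return 0;
  }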
Problem:
========
The mysqlbinlog tool is leaking memory, causing failures in various tests when
compiling and testing with AddressSanitizer or LeakSanitizer like this:
cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_ASAN:BOOL=ON /path/to/source
make -j$(nproc)
cd mysql-test
ASAN_OPTIONS=abort_on_error=1 ./mtr --parallel=auto rpl.rpl_row_mysqlbinlog
CURRENT_TEST: rpl.rpl_row_mysqlbinlog
Direct leak of 112 byte(s) in 1 object(s) allocated from:
#0 0x4eff87 in __interceptor_malloc (/dev/shm/5.5/client/mysqlbinlog+0x4eff87)
#1 0x60eaab in my_malloc /mariadb/5.5/mysys/my_malloc.c:41:10
#2 0x5300dd in Log_event::read_log_event(char const*, unsigned int, char const**,
Format_description_log_event const*, char) /mariadb/5.5/sql/log_event.cc:1568:
#3 0x564a9c in dump_remote_log_entries(st_print_event_info*, char const*)
/mariadb/5.5/client/mysqlbinlog.cc:1978:17
Analysis:
========
The 'mysqlbinlog' tool is used to read binary log events from a remote server.
While reading the binary log, if a fake rotate event is found, the following
actions are taken.
If the 'to-last-log' option is specified, the fake rotate event is processed.
In the absence of 'to-last-log', the fake rotate event is skipped.
In this skipped case the fake rotate event object is not cleaned up,
resulting in a memory leak.
Fix:
===
Clean up the fake rotate event.
This issue is already fixed in MariaDB 10.0.23 and higher versions as part of
commit c3018b0ff4
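A minimal sketch of the kind of cleanup the fix describes, with hypothetical classes (Event, FakeRotate, read_next_event) instead of the real Log_event hierarchy: when the branch that skips the fake rotate event returns early, the heap-allocated event still has to be deleted, otherwise it leaks.

  #include <cstdio>

  // Hypothetical event classes standing in for Log_event / Rotate_log_event.
  struct Event {
    virtual ~Event() {}
    virtual bool is_fake_rotate() const { return false; }
  };
  struct FakeRotate : Event {
    bool is_fake_rotate() const override { return true; }
  };

  static Event *read_next_event() { return new FakeRotate(); }  // events come from the heap

  static void dump_events(bool to_last_log)
  {
    Event *ev = read_next_event();
    if (ev->is_fake_rotate() && !to_last_log) {
      // The buggy version returned here without releasing 'ev', leaking it.
      delete ev;                // the fix: release the skipped event before returning
      return;
    }
    printf("processing event\n");
    delete ev;
  }

  int main()
  {
    dump_events(/*to_last_log=*/false);   // skipped path no longer leaks
    dump_events(/*to_last_log=*/true);
    return 0;
  }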
dict_create_foreign_constraints_low(): Tolerate the keywords
IGNORE and ONLINE between the keywords ALTER and TABLE.
We should really remove the hacky FOREIGN KEY constraint parser
from InnoDB.
Always set SERVER_MORE_RESULTS_EXIST when executing stored procedure statements.
If a statement produces a result, its EOF packet needs this flag (the SP itself ends
with an OK packet). If a statement does not produce a result, its affected rows
count is part of the final OK packet.
select from I_S
Problem:
========
When the applier thread tries to access the 'variable_name' column of the
INFORMATION_SCHEMA.SESSION_VARIABLES table through triggers, it results in an
abnormal exit of the slave server.
Analysis:
========
At the time of replication of stored routines and triggers, their associated
security context will be sent by the master. The applier thread on the slave
server will use this information to set the required security context for the
execution of stored routines and triggers. This is achieved as follows.
->The stored routine object has a member named 'm_security_ctx' which holds the
security context received from master.
->The applier thread's security_ctx is stored into a 'backup' object.
->Set the applier thread's security_ctx to 'm_security_ctx'.
->Upon the completion of stored routine execution restore the original security
context of applier thread from the backup.
During the above process the 'm_security_ctx' object is not initialized
properly. Hence 'external_user' in 'm_security_ctx' holds an invalid value,
and accessing this variable results in an abnormal exit of the server.
Fix:
===
Invoke Security_context::init() from the constructor of the stored routine
object so that 'm_security_ctx' gets initialized properly.
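A tiny sketch of the uninitialized-member problem, with hypothetical classes (SecurityCtx, StoredRoutine) standing in for Security_context and the stored routine object: without the init() call in the constructor, members such as the pointer corresponding to 'external_user' hold indeterminate values and touching them can crash; calling init() gives every member a defined value.

  #include <cstdio>

  // Hypothetical stand-in for Security_context (illustration only).
  struct SecurityCtx {
    const char *user;
    const char *external_user;
    void init() { user = ""; external_user = nullptr; }   // defined, safe defaults
  };

  // Hypothetical stand-in for the stored routine object holding m_security_ctx.
  struct StoredRoutine {
    SecurityCtx m_security_ctx;
    StoredRoutine() { m_security_ctx.init(); }   // the fix: initialize in the constructor
  };

  int main()
  {
    StoredRoutine sp;
    // Without the init() call above, external_user would be an indeterminate
    // pointer and a check like the one below could crash or misbehave.
    if (sp.m_security_ctx.external_user)
      printf("external user: %s\n", sp.m_security_ctx.external_user);
    else
      printf("no external user\n");
    return 0;
  }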
If an IN subquery is used in a table-less select, the current code
should never consider it as a candidate for semi-join optimizations.
Yet the function check_and_do_in_subquery_rewrites() improperly
checked the property "to be a table-less select". As a result,
when such a select with an IN subquery was used in INSERT .. SELECT,
the IN subquery was by mistake registered as a semi-join subquery
and convert_subq_to_sj() was called for it. However, the code of
this function does not assume that the parent select of the subquery
could be a table-less select.
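A rough sketch of the missing guard, with hypothetical names (Select, consider_semijoin) rather than the real optimizer code: an IN subquery should only be registered as a semi-join candidate when its parent select actually has tables in its FROM list; a table-less parent must be rejected up front.

  #include <vector>
  #include <cstdio>

  // Hypothetical model of a select and its subquery predicate (not server code).
  struct Select {
    std::vector<const char *> tables;   // FROM-list tables of the parent select
    bool has_in_subquery;
  };

  // Decide whether an IN subquery may be registered as a semi-join candidate.
  static bool consider_semijoin(const Select &parent)
  {
    if (!parent.has_in_subquery)
      return false;
    // The guard the original check missed: a table-less parent select can never
    // host a semi-join nest, so the conversion must not be attempted.
    if (parent.tables.empty())
      return false;
    return true;
  }

  int main()
  {
    Select tableless = { {}, true };       // e.g. a table-less select with an IN subquery
    Select normal    = { {"t1"}, true };   // ordinary SELECT ... FROM t1 WHERE x IN (...)
    printf("tableless -> %s\n", consider_semijoin(tableless) ? "semijoin" : "no semijoin");
    printf("normal    -> %s\n", consider_semijoin(normal)    ? "semijoin" : "no semijoin");
    return 0;
  }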
row_mysql_handle_errors(): Correct the wrong error handling for
the code DB_FOREIGN_EXCEED_MAX_CASCADE that was introduced in
c0923d396a
commit 35f5429eda
Author: Jimmy Yang <jimmy.yang@oracle.com>
Date: Wed Oct 6 06:55:34 2010 -0700
Manual port Bug #Bug #54582 "stack overflow when opening many tables
linked with foreign keys at once" from mysql-5.1-security to
mysql-5.5-security again.
rb://391 approved by Heikki
No known test case exists for repeating the bug before MariaDB 10.2.
The scenario should be that DB_FOREIGN_EXCEED_MAX_CASCADE is returned,
then InnoDB wrongly skips the rollback to the start of the current
row operation, and finally the SQL layer commits the transaction.
Normally the SQL layer would roll back either the entire transaction or
to the start of the statement. In the faulty scenario, InnoDB would
leave the transaction in an inconsistent state, and the SQL layer could
commit the transaction.
1. Always drop the merged_for_insert flag on cleanup (there could be errors which prevent a TABLE from being assigned).
2. Make the cleanup of the select parts that were touched more precise.
st_select_lex::handle_derived() and mysql_handle_list_of_derived() had
exactly the same implementations.
- Adding a new method LEX::handle_list_of_derived() instead
- Removing public function mysql_handle_list_of_derived()
- Reusing LEX::handle_list_of_derived() in st_select_lex::handle_derived()
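A generic sketch of this kind of de-duplication, with hypothetical classes (Lex, SelectLex) standing in for LEX and st_select_lex: the shared loop lives in a single method and the former duplicate simply delegates to it.

  #include <vector>
  #include <cstdio>

  // Hypothetical stand-ins for derived tables and the LEX / st_select_lex classes.
  struct Derived { const char *name; };

  struct Lex {
    std::vector<Derived> derived_tables;
    // The single shared implementation (cf. LEX::handle_list_of_derived()).
    bool handle_list_of_derived(unsigned phases)
    {
      for (Derived &d : derived_tables)
        printf("handling derived table %s, phases=%u\n", d.name, phases);
      return false;   // false meaning success, just for this sketch
    }
  };

  struct SelectLex {
    Lex *parent_lex;
    // The former duplicate now just delegates (cf. st_select_lex::handle_derived()).
    bool handle_derived(unsigned phases) { return parent_lex->handle_list_of_derived(phases); }
  };

  int main()
  {
    Lex lex;
    lex.derived_tables = { {"dt1"}, {"dt2"} };
    SelectLex sel = { &lex };
    sel.handle_derived(/*phases=*/1);
    return 0;
  }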
candle.exe's preprocessor flags (-dHaveUpgradeWizard=0 -DHaveInnodb=1)
were not passed correctly to EXECUTE_PROCESS.
The fix is to make a list out of the EXTRA_WIX_PREPROCESSOR_FLAGS string
and use that preprocessor flags list in EXECUTE_PROCESS.
Disable LOAD DATA LOCAL INFILE support by default and
auto-enable it for the duration of one query, if the query
string starts with the word "load". In all other cases the application
should enable LOAD DATA LOCAL INFILE support explicitly.
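A small sketch of the auto-enable heuristic described above, using a hypothetical helper (starts_with_word_ci) rather than the actual client library code: leading whitespace is skipped and the first word is compared case-insensitively against "load"; only for such queries would LOAD DATA LOCAL INFILE be enabled, and only for that one query.

  #include <cctype>
  #include <cstdio>

  // Hypothetical helper: does the query text begin with the given keyword
  // (case-insensitively, after leading whitespace, followed by a word boundary)?
  static bool starts_with_word_ci(const char *s, const char *kw)
  {
    while (*s && isspace(static_cast<unsigned char>(*s)))
      s++;                                          // skip leading whitespace
    size_t i = 0;
    for (; kw[i]; i++)
      if (tolower(static_cast<unsigned char>(s[i])) !=
          tolower(static_cast<unsigned char>(kw[i])))
        return false;
    // Require a word boundary so e.g. "loader_stats" does not match.
    return s[i] == '\0' || isspace(static_cast<unsigned char>(s[i]));
  }

  int main()
  {
    const char *queries[] = {
      "  LOAD DATA LOCAL INFILE 'x.csv' INTO TABLE t",
      "SELECT * FROM t",
    };
    for (const char *q : queries)
      printf("local infile %s for: %s\n",
             starts_with_word_ci(q, "load") ? "enabled " : "disabled", q);
    return 0;
  }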