Triggers are opened, and tables used in triggers are prelocked, in
open_tables(). But multi-update can detect which tables will actually
be updated only later, after all main tables are opened.
This means that if a table is used in a multi-update but is not actually
updated, its ON UPDATE triggers will be opened and their tables prelocked
anyway. This can cause more tables to be write-locked than needed,
resulting in read_only errors, privilege errors, and lock waits.
Fix: don't open/prelock triggers unless table->updating is true.
In multi-update, after setting table->updating=true, do a second
open_tables() for newly added tables, if any.
It always required UPDATE privilege on views, not being able to detect
when a view was not actually updated in a multi-update.
Fix: instead of marking all tables as "updating" by default,
only set "updating" on tables that will actually be updated
by the multi-update. And mark a view as "updating" if any of the
view's tables is.
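A hypothetical example of the effect (names made up): a user with only
SELECT privilege on the view v1 can now run

  UPDATE v1, t2 SET t2.a = 1 WHERE v1.id = t2.id;

because v1 is only read here; previously this demanded UPDATE privilege
on v1 as well.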
InnoDB could return the same list again and again if the buffer
passed to trx_recover_for_mysql() is smaller than the number of
transactions that InnoDB recovered in XA PREPARE state.
We introduce the transaction state TRX_PREPARED_RECOVERED, which
is like TRX_PREPARED, but will be set during trx_recover_for_mysql()
so that each transaction will only be returned once.
Because init_server_components() invokes ha_recover() twice,
we must reset the state of the transactions back to TRX_PREPARED
after returning the complete list, so that repeated traversals
will see the complete list again, instead of seeing an empty list.
Without this tweak, the test main.tc_heuristic_recover would hang
in MariaDB 10.1.
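For illustration, transactions end up in this state via the usual XA
sequence (names made up):

  XA START 'trx1';
  INSERT INTO t1 VALUES (1);
  XA END 'trx1';
  XA PREPARE 'trx1';
  -- after a server restart, XA RECOVER must list 'trx1' exactly once

With more prepared transactions than fit into one trx_recover_for_mysql()
buffer, the same XIDs could previously be returned again and again.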
dict_create_foreign_constraints_low(): Tolerate the keywords
IGNORE and ONLINE between the keywords ALTER and TABLE.
We should really remove the hacky FOREIGN KEY constraint parser
from InnoDB.
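A hypothetical statement that used to trip the parser (names made up):

  ALTER IGNORE TABLE child
    ADD CONSTRAINT fk1 FOREIGN KEY (parent_id) REFERENCES parent (id);

dict_create_foreign_constraints_low() re-scans the statement text and
previously did not expect the IGNORE (or ONLINE) keyword after ALTER.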
Always set SERVER_MORE_RESULTS_EXIST when executing stored procedure statements
If a statement produces a result, its EOF packet needs this flag (the SP ends
with an OK packet). If a statement does not produce a result, the affected-rows
count is part of the final OK packet.
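For example (a minimal hypothetical procedure):

  CREATE PROCEDURE p1() SELECT 1;
  CALL p1();

The SELECT's result set ends with an EOF packet that must carry
SERVER_MORE_RESULTS_EXIST; the CALL itself then ends with the final OK
packet.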
select from I_S
Problem:
========
When the applier thread tries to access 'variable_name' of the
INFORMATION_SCHEMA.SESSION_VARIABLES table through triggers, it results in an
abnormal exit of the slave server.
Analysis:
========
At the time of replication of stored routines and triggers, their associated
security context will be sent by the master. The applier thread on the slave
server will use this information to set the required security context for the
execution of stored routines and triggers. This is achieved as follows.
->The stored routine object has a member named 'm_security_ctx' which holds
the security context received from the master.
->The applier thread's security_ctx is stored into a 'backup' object.
->The applier thread's security_ctx is set to 'm_security_ctx'.
->Upon completion of the stored routine's execution, the applier thread's
original security context is restored from the backup.
During the above process the 'm_security_ctx' object is not initialized
properly. Hence the 'external_user' member of 'm_security_ctx' holds an
invalid value, and accessing it results in an abnormal exit of the server.
Fix:
===
Invoke Security_context::init() from the constructor of the stored routine
so that 'm_security_ctx' gets initialized properly.
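A hypothetical trigger of the crashing kind (table names made up):

  CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW
    SET NEW.b = (SELECT VARIABLE_VALUE
                 FROM INFORMATION_SCHEMA.SESSION_VARIABLES
                 WHERE VARIABLE_NAME = 'MAX_CONNECTIONS');

Replicated INSERTs firing such a trigger crashed the applier thread.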
If an IN subquery is used in a table-less select, the current code
should never consider it as a candidate for semi-join optimization.
Yet the function check_and_do_in_subquery_rewrites() improperly
checked the property "to be a table-less select". As a result, when
such a select with an IN subquery was used in INSERT .. SELECT, the
IN subquery was registered as a semi-join subquery by mistake and
convert_subq_to_sj() was called for it. However, the code of
this function does not assume that the parent select of the subquery
can be a table-less select.
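A hypothetical query of the problematic shape (names made up):

  INSERT INTO t1 SELECT 3 IN (SELECT a FROM t2);

The table-less "SELECT 3 IN (...)" must not let the IN subquery be
registered as a semi-join subquery.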
1. Always drop the merged_for_insert flag on cleanup (there could be errors
which prevent the TABLE from being assigned).
2. Do a more precise cleanup of the select parts that were touched.
Disable LOAD DATA LOCAL INFILE support by default and
auto-enable it for the duration of one query, if the query
string starts with the word "load". In all other cases the application
should enable LOAD DATA LOCAL INFILE support explicitly.
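For example (hypothetical file and table names):

  LOAD DATA LOCAL INFILE '/tmp/rows.txt' INTO TABLE t1;

is auto-enabled because the query string starts with "load"; any other
statement that makes the server request a local file is rejected unless
the client enabled LOAD DATA LOCAL INFILE support explicitly.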
The fix for "MDEV-17698 MEMORY engine performance regression"
previously fixed this problem.
- Adding the test for MDEV-17724
- Re-recording wrong results for tests:
* engines/iuds/r/insert_number
* engines/iuds/r/update_delete_number
which started to fail since MDEV-17698
--gdb now accepts an argument, it will be passed to gdb as a command.
Multiple commands can be separated by a (non-standard and non-escapable)
delimiter: semicolon (;).
Old usage with a bare --gdb continues to work too, of course.
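For example (the breakpoint is hypothetical):

  $ ./mysql-test-run.pl --gdb='b mysql_execute_command;r' main.select

runs gdb with the commands 'b mysql_execute_command' and 'r'.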
Cherry-picked c47c0ca50c5441bbd3b1339b905579
mysql_upgrade used to convert all columns of mysql.db to
utf8_general_ci and then back to utf8_bin, in two separate ALTERs.
This failed if UNIQUE indexes in mysql.db contained entries
that differ only in letter case.
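A hypothetical pair of such entries:

  INSERT INTO mysql.db (Host, Db, User) VALUES ('%', 'db1', 'u1'), ('%', 'DB1', 'u1');

These are distinct under utf8_bin, but the intermediate ALTER to
utf8_general_ci made the UNIQUE index see them as duplicates.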
consistently) on Replication Slave
lower_case_table_names 0 -> 1 replication works; it's safe as long as
the mapping of mixed-case names to the lower-case ones is one-to-one.
derived table / view by equality
Now rows of a materialized derived table are always put into a
temporary table before join operation. If BNLH is used to join this
table with the result of a partial join then both operands of the
join are actually put into main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However this kind of data flow is not supported
yet.
Fixed by not allowing use of the hash join algorithm to join a materialized
derived table if it is joined by an equality predicate of the form
f=e where f is a field of the derived table.
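For illustration, a hypothetical query of the affected shape:

  SELECT *
  FROM t1, (SELECT a, COUNT(*) AS cnt FROM t2 GROUP BY a) AS dt
  WHERE dt.a = t1.a;

With the fix, hash join is not chosen for the equality dt.a = t1.a, so the
rows of dt are not buffered in memory a second time.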
e.g. "No option named 'FILE_KEY_MANAGEMENT_SO' in group 'ENV' at lib/My/ConfigFactory.pm line 370."
when a test has `plugin-load-add=@ENV.FILE_KEY_MANAGEMENT_SO`
Two bugs in Aria, related to 2-level fulltext indexes:
* REPAIR calculated the key number incorrectly
* CHECK copied the key into last_key too early and
checking the second-level btree was overwriting it
The problem was that the creation of join_columns was not finished due to an
error (a column in USING was not found), but the next execution tried to use
the join_columns lists.
The solution is to clean up the lists on error. This can eat memory in the
statement MEM_ROOT, but it is an error path, and either the error will be
fixed or the statement/procedure will be removed/altered.
Problem was that SQL level tried to read a record with rnd_pos()
that was already deleted by the same statement.
In the case where the page for the record had been deleted, this
caused an assert.
Fixed by extending the assert to also handle empty pages and
return HA_ERR_RECORD_DELETED for reads to deleted pages.
MySQL bug number 90264
Contribution by Yura Sorokin.
Problem:
File mysys/mf_iocache2.c contains non-instrumented file I/O operations.
This causes inaccurate statistics in PERFORMANCE_SCHEMA.
Solution:
Use the instrumentation APIs (mysql_file_tell instead of my_tell, etc).
Truncate incorrect values in convert_period_to_month() so that
PERIOD_DIFF never returns a value outside the 2^23 range.
And, for safety, increase the buffer sizes for int10_to_str
to be sufficiently big for any int10_to_str result.
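For example:

  SELECT PERIOD_DIFF(200802, 200703);  -- 11

Out-of-range period arguments are now truncated in
convert_period_to_month() instead of producing a result outside the
2^23 range.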
with spatial index
The issue is that since it is a spatial index, at the time of searching the
index for a key (Rows_log_event::find_row) the wrong field image was used:
Field::itRAW instead of Field::itMBR.
We do not accept:
1. We did not have this problem (fixed earlier and better)
d982e717ab Bug#27510150: MYSQLDUMP FAILS FOR SPECIFIC --WHERE CLAUSES
2. We do not have such options (a DBUG_ASSERT put just in case)
bbc2e37fe4 Bug#27759871: BACKRONYM ISSUE IS STILL IN MYSQL 5.7
3. Serg fixed it in another way in this release:
e48d775c6f Bug#27980823: HEAP OVERFLOW VULNERABILITIES IN MYSQL CLIENT LIBRARY
In this case we are setting the field Item_func_eq::in_equality_no for the
semi-join equalities.
This helps us to remove these equalities, as the inner tables are not
available during parent select execution,
while the outer tables are not available during the materialization phase.
We only set it for the equalities of the fields involved with the IN subquery
and reset it for the equalities which do not belong to the IN subquery.
For example, in the case of nested IN subqueries:

  SELECT t1.a FROM t1 WHERE t1.a IN
    (SELECT t2.a FROM t2 WHERE t2.b IN
      (SELECT t3.b FROM t3 WHERE t3.c = 27))

there are two equalities involving the fields of the IN subquery:
1) t2.b = t3.b : the field Item_func_eq::in_equality_no is set when we merge
the grandchild select into the child select
2) t1.a = t2.a : the field Item_func_eq::in_equality_no is set when we merge
the child select into the parent select
But when we perform case 2) we should ensure that we reset the equalities in
the child's WHERE clause.
with join_cache_level>2
During multiple equality propagation for a query in which we have an IN
subquery, the items in the select list of the subquery may not be part of
the multiple equality, because there might be another occurrence of the
same field in the WHERE clause of the subquery.
The function keyuse_is_valid_for_access_in_chosen_plan expects the items in
the select list of the subquery to be the same as the ones in the multiple
equality (through these multiple equalities we create the keyuse array).
The solution is to compare the same field instead of the same Item, because
when we have SEMI JOIN MATERIALIZATION SCAN, we use the copy-back technique
to copy the materialized table fields back to the original fields of the
base tables.
This patch fixes another problem introduced by the patch for mdev-4817.
The latter changed Item_cond::fix_fields() in such a way that it could
call the virtual method is_expensive(). On its first call
the method saves the result in Item::is_expensive_cache; all subsequent
calls return the result from this cache. So if the item
was once determined to be expensive, the method always returns true.
For subqueries this is not good, because non-optimized subqueries are
always considered expensive.
It means that the cache should be invalidated after the call of
optimize_constant_subqueries().
LOSS
ANALYSIS:
=========
When converting from a BLOB/TEXT type to a smaller
BLOB/TEXT type, no warning/error is reported to the user
informing about the truncation/data loss.
FIX:
====
We are now reporting a warning in non-strict mode and an
appropriate error in strict mode.
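For example (a minimal hypothetical case):

  CREATE TABLE t1 (c MEDIUMTEXT);
  INSERT INTO t1 VALUES (REPEAT('a', 1000));
  ALTER TABLE t1 MODIFY c TINYTEXT;

The ALTER truncates the 1000-character value to TINYTEXT's 255-byte limit
and now produces a truncation warning in non-strict mode and an error in
strict mode.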
Due to a legacy bug in the code of make_join_statistics(), the detection of
so-called constant tables could miss some of them in rare queries
that used RIGHT JOIN. As a result these queries had execution plans
different from the execution plans of the equivalent queries with
LEFT JOIN.
Besides, starting from 10.2 this could trigger an assertion failure.
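For example, queries of this hypothetical shape:

  SELECT * FROM t1 RIGHT JOIN t2 ON t1.a = t2.a WHERE t2.pk = 1;
  SELECT * FROM t2 LEFT JOIN t1 ON t1.a = t2.a WHERE t2.pk = 1;

are equivalent, and t2 (accessed by a single primary key value) should be
detected as a constant table in both.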
In this issue we are using the derived_with_keys optimization and using
these keys to do a hash join, which is incorrect.
We cannot create keys for derived tables whose keyparts are of BLOB or TEXT
type: TEXT or BLOB columns can only be indexed over a specified length.
When the definition of the index used for hash join was created
in create_hj_key_for_table(), it could cause a memory overwrite
due to a bug that led to an underestimation of the number of
index components.
MDEV-16512 Server crashes in find_field_in_table_ref on 2nd
execution of SP referring to non-existing field
The problem was in the natural join code: it changed TABLE_LIST and
Item_fields but didn't restore the changed objects if things went wrong,
and so was not able to re-execute after failure.
Some of the problems could have been avoided if we had run
fix_fields before doing the natural join transformations.
Fixed by marking functions complete AFTER they have executed, instead of
at the start.
I also had to change some tests that checked whether Item_fields are usable.
This doesn't fix all known problems, but at least avoids some crashes.
What should be done in the near future is to mark the statement in the SP
as 'not re-executable' and force a reparse of it on the next execution.
Reviewer: Sergei Petrunia <psergey@askmonty.org>
This reverts commit d39629f01e.
Because running mtr for many hours with no output whatsoever
is not really what we should do.
And in 5.5 `make test` just works anyway, nothing to fix here.
The order of outputting stored procedures is important. Stored
procedures must be available on view creation, for views which make use
of them. Make sure to print them before outputting tables.
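A minimal hypothetical case:

  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  CREATE VIEW v1 AS SELECT f1() AS x;

The dump must emit f1 before v1, otherwise reloading the dump fails.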
Assign all tests added via MY_ADD_TEST to a bogus default_ignore target,
so that they are not run by default when doing a bare make test. Add a
default test named MTR that calls the mysql-test-run suite, which is now
the single test run by make test.
In consequence, modified unit/suite.pm to exclude the MTR test and run the
real ctests flagged for the default_ignore target, thus avoiding a circular
loop.
The bug arises when one uses --auto-generate-sql-guid-primary (and
--auto-generate-sql-secondary-indexes) with mysqlslap and also has
sql_mode=STRICT_TRANS_TABLES.
When using this option, mysqlslap should create the column as varchar(36),
but it appears to create it as varchar(32) only. Then if one has
sql_mode=STRICT_TRANS_TABLES, it throws an error, like:
mysqlslap: Cannot run query INSERT INTO t1 VALUES (...)
ERROR : Data too long for column 'id' at row 1
Upstream bug report: BUG#80329.
For non-semi-join subquery optimization we make a cost-based decision
between materialization and the IN->EXISTS transformation. The issue in
this case is that for the IN->EXISTS transformation we run
JOIN::reoptimize with the IN->EXISTS conditions and come up with a new
query plan. But when we compare the cost with materialization and decide
to choose materialization, we need to restore the query plan for
materialization.
The saving and restoring of the keyuse array and the join_tab keyuse was
only done when we had at least one element in the keyuse_array; we now do
it even for 0 elements, to maintain generality.
upon SELECT .. LIMIT 0
The code must differentiate between a SELECT with a contradictory
WHERE/HAVING and one with LIMIT 0.
Also, for the latter, 'Zero limit' is now printed instead of
'Impossible where' in the EXPLAIN output.
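For example (hypothetical table t1):

  EXPLAIN SELECT * FROM t1 WHERE 1 = 0;  -- Extra: Impossible where
  EXPLAIN SELECT * FROM t1 LIMIT 0;      -- Extra: Zero limit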
It should work OK on all Unixes, but on Windows it only worked by accident
in the past, with the client not being Unicode safe.
It stopped working with the Visual Studio 2017 15.7 update.
Analyze core independently of max-save-datadir and max-save-core setting.
Increment $num_saved_cores only if core was actually saved.
"Move any core files from e.g. mysqltest" independently of
max-save-datadir setting. Note: it may overwrite core from mysqld, which
might not be desired (it did work this way even before).
SHOW_ROUTINE_GRANTS
Description :- Server crashes in show_routine_grants().
Analysis :- When "grant_reload_procs_priv" encounters
an error, the grant structures (structures with column,
function and procedure privileges) are freed. Server
crashes when trying to access these structures later.
Fix :- Grant structures are retained even when
"grant_reload_procs_priv()" encounters an error while
reloading column, function and procedure privileges.
multiple times with different arguments.
If the ON expression of an outer join is an OR formula with one
of the disjuncts being a constant formula, then the expression
cannot be null-rejected if the constant formula is true. Otherwise
it can be null-rejected, and if so the outer join can be converted
into an inner join. This optimization was added in the patch for
mdev-4817. Yet the code had a defect: if the query was used in
a stored procedure with parameters and the constant item contained
some of them, then the value of this constant item depended on the
values of the parameters. With some parameters it may be true,
for others not. The validity of the conversion to inner join is checked
only once, and it happens only on the first call of the procedure.
So if the parameters in the first call allowed the conversion, it
was done, and subsequent calls used the transformed query even though
there could be calls whose parameters made the conversion invalid.
Fixed by checking whether the constant disjunct in the ON expression
originally contained an SP parameter. If so, the expression is not
considered null-rejected. For this check a new item attribute
was introduced: Item::with_param. It is calculated for each item
by the fix_fields() functions.
Also moved the call of optimize_constant_subqueries() in
JOIN::optimize to after the call of simplify_joins(). The reason
for this is that after the optimization introduced by the patch
for mdev-4817, simplify_joins() can use the results of execution
of non-expensive constant subqueries, and this is not valid.
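A hypothetical procedure of the affected shape (names made up):

  CREATE PROCEDURE p(p_flag INT)
    SELECT * FROM t1
      LEFT JOIN (t2 LEFT JOIN t3 ON t3.b = t2.b)
        ON t1.a = t2.a AND (t3.c = 5 OR p_flag = 1);

  CALL p(0);  -- (t3.c = 5 OR 0 = 1) rejects NULL-extended t3 rows: the
              -- embedded LEFT JOIN of t3 can be converted to an inner join
  CALL p(1);  -- the constant disjunct is true, so the conversion is
              -- invalid, but the query transformed on the first call
              -- was reused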
The problem was that SJ (semi-join) used a secondary list (array) of the
subquery select list. The items there were prepared once, then cleaned up
(but not really freed from memory, because they were allocated on the
statement MEM_ROOT).
The original list was not prepared after the first execution, because the
select had been removed by the conversion to SJ.
The solution is to use the original list, but prepare it first.
INSERT PRIVILEGES FOR MYSQL.USER TABLE
Description:- Incorrect granting of EXECUTE and ALTER
ROUTINE privileges when the 'automatic_sp_privileges'
variable is set.
Fix:- EXECUTE and ALTER ROUTINE privileges are correctly
granted to the creator of the procedure when
'automatic_sp_privileges' is set.
Disable online ALTER adding a primary key for InnoDB if the
table is opened/locked more than once in the current connection
(see the assert in ha_innobase::add_index()).
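One hypothetical way to hit the restriction:

  LOCK TABLES t1 WRITE, t1 AS t1_alias READ;
  ALTER ONLINE TABLE t1 ADD PRIMARY KEY (a);

Here t1 is opened twice in the connection, so the online path is refused.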
Any expensive WHERE condition for a table-less query with
implicit aggregation was lost. As a result the used aggregate
functions were calculated over a non-empty set of rows even
in the case when the condition was false.
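For example (hypothetical table t1):

  SELECT MAX(1) FROM DUAL WHERE (SELECT COUNT(*) FROM t1) < 0;

The subquery makes the WHERE condition expensive; since it is false, the
aggregation must run over an empty set and MAX(1) must be NULL, not 1.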
Also clarify which --{no-,}defaults* options must be first.
Sample output:
$ client/mysql --help
client/mysql Ver 15.1 Distrib 5.5.59-MariaDB, for Linux (x86_64) using readline 5.1
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Usage: client/mysql [OPTIONS] [database]
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf
The following groups are read: mysql client client-server client-mariadb
The following options may be given as the first argument:
--print-defaults Print the program argument list and exit.
--no-defaults Don't read default options from any option file.
The following specify which files/groups are read (specified before other options):
--defaults-file=# Only read default options from the given file #.
--defaults-extra-file=# Read this file after the global files are read.
--defaults-group-suffix=# Additionally read default groups with # appended as a suffix.
tests running from build directory:
TEST: print defaults ignored as not first
$ sql/mysqld --no-defaults --print-defaults --lc-messages-dir=${PWD}/sql/share
TEST: no startup occurs as --print-defaults specified
$ sql/mysqld --print-defaults --lc-messages-dir=${PWD}/sql/share
sql/mysqld would have been started with the following arguments:
--lc-messages-dir=/home/dan/repos/build-mariadb-5.5/sql/share
TEST: default args can't be anywhere
$ client/mysql --user=bob --defaults-file=/etc/my.cnf
client/mysql: unknown variable 'defaults-file=/etc/my.cnf'
$ client/mysql --user=bob --defaults-group-suffix=.group
client/mysql: unknown variable 'defaults-group-suffix=.group'
/etc/my.cnf:
[client-server.group]
socket=/var/lib/mysql-multi/group/mysqld.sock
user=bob
/etc/my.other.cnf:
socket=/var/lib/mysql-other/mysqld.sock
TEST: defaults file read and suffix also applied
$ client/mysql --defaults-file=/etc/my.other.cnf --defaults-group-suffix=.group
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql-other/mysqld.sock' (2)
TEST: defaults extra file
$ client/mysql --defaults-extra-file=/etc/my.other.cnf --defaults-group-suffix=.group
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql-other/mysqld.sock' (2)
WRONG VALUES
User variables have the default session collation
associated with them, and a select which uses one as part of a
union may infer the collation during type merging.
This leads to problems when the result is of DECIMAL type:
setting the appropriate collation for a DECIMAL result type
is missing in the 5.7 code base.
Added code to set the appropriate collation when the result is
of DECIMAL type during Item_type_holder::join_types().
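A hypothetical shape of the affected query:

  SET @v = 1.50;
  SELECT @v UNION SELECT 2.75;

Here the merged result type of the union column is DECIMAL, inferred from
the user variable and the literal during type merging.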