Merge from mysql-5.6:
revno: 3257
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug11756966
timestamp: Thu 2011-07-14 09:32:01 +0200
message:
Bug#11756966 - 48958: STORED PROCEDURES CAN BE LEVERAGED TO BYPASS
DATABASE SECURITY
The problem was that CREATE PROCEDURE/FUNCTION could be used to
check the existence of databases for which the user had no
privileges and which the user therefore should not be allowed to see.
The reason was that the existence of a given database was checked
before privileges. So trying to create a stored routine in
a non-existent database would give a different error than trying
to create a stored routine in a restricted database.
This patch fixes the problem by changing the order of the checks
for CREATE PROCEDURE/FUNCTION so that privileges are checked first.
This means that trying to create a stored routine in a
non-existent database and in a restricted database will both
give the ER_DBACCESS_DENIED_ERROR error.
Test case added to grant.test.
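A minimal sketch of the new behaviour, with made-up database and user names
(the real test case is in grant.test):

    -- Setup as an administrator; names are illustrative only.
    CREATE DATABASE secret_db;
    CREATE USER 'limited'@'localhost' IDENTIFIED BY 'pw';
    GRANT USAGE ON *.* TO 'limited'@'localhost';

    -- As 'limited', who has no privileges on secret_db and none on the
    -- non-existent no_such_db, both statements now fail with
    -- ER_DBACCESS_DENIED_ERROR, so the error no longer reveals whether
    -- the database exists:
    CREATE PROCEDURE secret_db.p() SELECT 1;
    CREATE PROCEDURE no_such_db.p() SELECT 1;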
on select from I_S.INNODB_CHANGED_PAGES
Analysis: limit_lsn_range_from_condition() incorrectly parses
start_lsn and/or end_lsn conditions.
Fix from SergeyP. Added some test cases.
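The kind of query affected looks roughly like this, assuming the table's
documented space_id/page_id/start_lsn/end_lsn columns (the LSN bounds are
made-up values):

    SELECT space_id, page_id, start_lsn, end_lsn
    FROM information_schema.innodb_changed_pages
    WHERE start_lsn > 1000000 AND end_lsn < 2000000;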
This is a triple bug with one test suite (an illustrative query shape is
sketched below):
1. Incorrect outer table detection.
2. Incorrect leaf table processing for multi-update (it should be full, as
   for regular updates and inserts).
3. ON condition fix_fields() should be called for all tables of the query.
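A hedged sketch of the statement shape these three issues affect; the tables
and columns are hypothetical:

    CREATE TABLE t1 (id INT PRIMARY KEY, a INT);
    CREATE TABLE t2 (id INT PRIMARY KEY, b INT);

    -- Multi-table UPDATE with an outer join: outer-table detection,
    -- leaf-table setup, and fix_fields() of the ON condition must all
    -- cover every table in the query.
    UPDATE t1 LEFT JOIN t2 ON t1.id = t2.id
    SET t1.a = t2.b
    WHERE t2.id IS NULL OR t2.b > 0;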
do_start_slave_sql() is called from maybe_exit().
We should not recurse when maybe_exit() is called for an error during do_start_slave_sql().
Also remove a meaningless (but safe) "goto err".
MDEV-5980: EITS: if condition is used for REF access, its selectivity is still in filtered%
MDEV-5985: EITS: selectivity estimates look illogical for join and non-key equalities
MDEV-6003: EITS: ref access, keypart2=const vs keypart2=expr - inconsistent filtered% value
- Made a number of fixes in table_cond_selectivity() so that it returns
  correct selectivity estimates (an illustrative query is sketched below).
- Added comments in related code.
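An illustrative query of the kind these estimates cover; the schema is
hypothetical and only meant to show ref access plus an extra equality:

    CREATE TABLE t1 (key1 INT, key2 INT, INDEX (key1, key2));
    CREATE TABLE t2 (col1 INT);

    -- EXPLAIN EXTENDED exposes the filtered% column; the equality on
    -- key2 that is already used for ref access should not be counted
    -- again in filtered%.
    EXPLAIN EXTENDED
    SELECT * FROM t1, t2
    WHERE t1.key1 = t2.col1 AND t1.key2 = 5;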
Better comments
The problem is the async binlog checkpointing; this could on rare
occasions occur too late, causing SHOW BINLOG EVENTS to show the
wrong events and producing a .result file difference.
Replication caches the character sets used in a query, to be able to quickly
reuse them for the next query in the common case of them not having changed.
In parallel replication, this caching needs to be per-worker-thread. The
code was not modified to handle this correctly, so the caching in one worker
could cause another worker to run a query using the wrong character set,
causing replication corruption.
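A sketch of the failure mode, with made-up statements; the point is only that
consecutive binlog events can carry different character sets and be applied
by different workers:

    CREATE TABLE t1 (s VARCHAR(10)) CHARACTER SET utf8;

    -- Executed on the master under different client character sets;
    -- each event records its own charset in the binlog.
    SET NAMES latin1;
    INSERT INTO t1 VALUES ('café');
    SET NAMES utf8;
    INSERT INTO t1 VALUES ('café');

    -- With parallel replication the two INSERTs may be applied by
    -- different worker threads; a charset cached by one worker must not
    -- leak into the other, or the bytes are decoded with the wrong set.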
Back-ported from the mysql 5.6 code line the patch with
the following comment:
Fix for Bug#11757108 CHANGE IN EXECUTION PLAN FOR COUNT_DISTINCT_GROUP_ON_KEY
CAUSES PERFORMANCE REGRESSION
The cause for the performance regression is that the access strategy for the
GROUP BY query is changed from using "index scan" in mysql-5.1 to using
"loose index scan" in mysql-5.5. The index used for GROUP BY is unique, and
thus each "loose scan" group will only contain one record. Since loose scan
needs to re-position on each "loose scan" group, this query will do a
re-position for
each index entry. Compared to just reading the next index entry as a normal
index scan does, the use of loose scan for this query becomes more expensive.
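A hedged sketch of the affected query shape; the table is hypothetical, the
point being a COUNT(DISTINCT ...) grouped over a unique index so every
"loose scan" group holds a single row:

    CREATE TABLE t1 (a INT, b INT, UNIQUE KEY (a, b));

    -- With the unique (a, b) index, loose index scan re-positions for
    -- every single index entry, which is what makes it slower than a
    -- plain index scan here.
    SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a;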
The reason loose scan is selected for this query is that, in the current
code, when the size of the "loose scan" group is one, the formula for
calculating the cost estimates becomes almost identical to the cost of using
normal index scan. Differences in use of integer versus floating point arithmetic
can cause one or the other access strategy to be selected.
The main issue with the formula for estimating the cost of using loose scan is
that it does not take into account that it is more costly to do a re-position
for each "loose scan" group compared to just reading the next index entry.
Both index scan and loose scan estimate the CPU cost as:
"number of entries needed to read/scan" * ROW_EVALUATE_COST
The results from testing with the query in this bug indicate that the real
cost of doing a re-position is four to eight times higher than just reading
the next index entry. Thus, the CPU cost estimate for loose scan should be
increased.
To account for the extra work to re-position in the index we increase the
cost for loose index scan to include the cost of navigating the index.
This is modelled as a function of the height of the b-tree:
navigation cost = ceil(log(records in table) / log(index entries per block))
* ROWID_COMPARE_COST;
This will avoid loose index scan being used for indexes where the "loose scan"
group contains very few index entries.
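For illustration only (the numbers are made up): with 1,000,000 records and
100 index entries per block, the navigation factor is
ceil(log(1000000)/log(100)) = 3, which can be checked directly:

    -- Three B-tree levels' worth of work per re-position, to be scaled
    -- by ROWID_COMPARE_COST.
    SELECT CEIL(LOG(1000000) / LOG(100)) AS navigation_factor;  -- returns 3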