- mysql-test-run.pl --valgrind complains when all tests succeed.
- perfschema.all_instances fails on non-Linux, where ENABLE_TEMP_POOL
is not set and therefore the BITMAP mutex is not used.
- MDEV-132: main.mysqldump fails because it depends on exact size of stdio
buffers.
- MDEV-99: rpl.rpl_cant_read_event_incident fails due to a race where the
slave manages to connect while the test case is in the middle of setting up
the master, causing the slave to replicate extra/wrong events.
- MDEV-133: rpl.rpl_rotate_purge_deadlock fails because it issues a
DEBUG_SYNC SIGNAL immediately followed by RESET; this means that sometimes
the intended recipient has no time to see the signal before it is cleared
by the RESET, causing the wait to time out.
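As an illustration, a minimal hypothetical mysqltest fragment (not the
actual test) of the fragile pattern:

  # connection slave: blocks until the signal arrives
  SET DEBUG_SYNC= 'now WAIT_FOR go';
  # connection master: signals, then immediately clears all DEBUG_SYNC state
  SET DEBUG_SYNC= 'now SIGNAL go';
  SET DEBUG_SYNC= 'RESET';   # may wipe 'go' before the waiter has re-checked it,
                             # so the WAIT_FOR eventually times out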
The line in file_contents.test removes all '/lib' substrings from the
path, so the file cannot be found if the path contains such a substring.
As I did not find where this was needed, the line was simply removed.
per-file comments:
mysql-test/t/file_contents.test
MDEV-57: in 5.5, main.file_contents fails on debian5-i386-fulltest.
No more '/lib' substring cutting.
In 5.5, ssl_do() no longer calls report_errors() in case of an SSL error.
Since report_errors() iterated over the list of errors, this means that we
now report the first error in the list, rather than the last. Adjust the
--replace_regex line for the OpenSSL build accordingly in the test case.
Problem: When building the condition for JOIN::outer_ref_cond, the optimizer forgot to take
into account that this condition could depend on constant tables as well.
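A hypothetical query of the kind that might be affected (names made up):
t1 has a single row and no keys, so it is read as a const table, and the
subquery's WHERE refers only to the outer (const) table, which is the
condition that ends up in the subquery's JOIN::outer_ref_cond.

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);          -- one row, no keys: treated as a const table
  CREATE TABLE t2 (b INT);
  INSERT INTO t2 VALUES (1),(2),(3);
  SELECT (SELECT COUNT(*) FROM t2 WHERE t1.a > 0) AS cnt FROM t1;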
mysql-test/r/information_schema_all_engines.result:
Update result
mysql-test/t/information_schema_all_engines.test:
Added --sorted_result as the tables in information_schema are not sorted.
- Create and use a do_copy_nullable_row_to_notnull() function for ref access. It is used
when copying from a not-NULL field in a table that can be NULL-complemented to a not-NULL field.
If init_command was incorrect, we couldn't let users execute
queries, but we couldn't report the issue to the client either,
as the client does not expect error messages before it has even
sent a command. Thus, we simply disconnected such users without
throwing a clear error.
We now go through the proper sequence once (without executing
any user statements) so we can report back what the problem
is. Only then do we disconnect the user.
As always, root remains unaffected by this as init_command is
(still) not executed for them.
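A minimal sketch of the new behavior, assuming the init_connect server
variable (which the test file below suggests) and a non-privileged account;
the exact error message is not reproduced here:

  SET GLOBAL init_connect= 'INSERT INTO no_such_db.t VALUES (1)';  -- deliberately broken
  -- A non-SUPER user now connects:
  --   before: the server silently closed the connection
  --   now:    the client's first command gets a proper error explaining that
  --           the init command failed, and only then is the connection closed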
mysql-test/r/init_connect.result:
We now report a proper error if init_command fails.
Expect as much.
mysql-test/t/init_connect.test:
We now report a proper error if init_command fails.
Expect as much.
sql/sql_connect.cc:
If init_command fails, throw an error explaining this to
the user.
from a heap temptable, which uses pointers to records (that is, byte*
pointers) as rowids.
This meant that for rows with the same sort key value, the order
was determined by memory layout.
Fixed several defects in the greedy optimization:
1) The greedy optimizer calculated the 'compare-cost' (CPU-cost)
for iterating over the partial plan result at each level in
the query plan as 'record_count / (double) TIME_FOR_COMPARE'.
This cost was only used locally for the 'best' calculation at each
level, and *not* accumulated into the total cost for the query plan.
This fix added the 'CPU-cost' of processing 'current_record_count'
records at each level to 'current_read_time' *before* it is used as
the 'accumulated cost' argument to recursive
best_extension_by_limited_search() calls. This ensured that the
cost of a huge join fanout early in the QEP was correctly
reflected in the cost of the final QEP (a worked example follows
this list).
To get identical cost for a 'best' optimized query and a
straight_join with the same join order, the same change was also
applied to optimize_straight_join() and get_partial_join_cost().
2) Furthermore, to get equal cost for a 'best' optimized query and a
straight_join, the new code subtracts the same '0.001' in
optimize_straight_join() as was already done in
best_extension_by_limited_search().
3) When best_extension_by_limited_search() collected the 'best' plan, a
plan was considered 'best' by the check:
'if ((search_depth == 1) || (current_read_time < join->best_read))'
The term '(search_depth == 1)' incorrectly caused a new best plan to be
collected whenever the specified 'search_depth' was reached, even if
this partial query plan was more expensive than what we had already
found.
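A rough worked example of point 1 (the row count is made up, and
TIME_FOR_COMPARE is assumed to have its usual value of 5): if the first
table of a partial plan produces 1,000,000 rows, the compare cost at that
level is 1,000,000 / 5 = 200,000 cost units. Before the fix this amount was
only used for the local 'best' comparison and then dropped, so a plan with
such a fanout early in the QEP could still end up with a small total cost;
after the fix the 200,000 units are added to 'current_read_time' before the
recursive call and therefore show up in the cost of the final QEP.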
Bug#12809202 61854: MYSQLDUMP --SINGLE-TRANSACTION --FLUSH-LOG BREAKS CONSISTENCY
The transaction started by mysqldump gets committed
implicitly when flush-log is specified along with the
single-transaction option, and hence can break
consistency.
This is because COM_REFRESH is executed in order
to flush the logs, and starting from 5.5 this command
performs an implicit commit.
Fixed by making sure that COM_REFRESH is executed
before the transaction is started and not after it.
Note: This patch triggers the following behavioral
changes in mysqldump:
1) After this patch we no longer flush logs before
dumping each database when the --single-transaction
option is given, as was done before (in the
absence of the --lock-all-tables and --master-data
options).
2) Also, after this patch, we start acquiring
FTWRL before flushing logs in cases where only
--single-transaction and --flush-logs are given.
It becomes safe to use mysqldump with these two
options and without the --master-data parameter for
backups.
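An approximate sketch of the statement sequence now used for
--single-transaction --flush-logs; the exact statements are assumptions
based on the description above (the flush is really sent as COM_REFRESH
via mysql_refresh()), the point being that the flush happens before the
snapshot transaction starts:

  FLUSH TABLES WITH READ LOCK;                              -- FTWRL, acquired first
  FLUSH LOGS;                                               -- the flush-logs step (COM_REFRESH)
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;
  UNLOCK TABLES;
  -- ... dump the tables inside the consistent snapshot ...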
client/mysqldump.c:
Bug#12809202 61854: MYSQLDUMP --SINGLE-TRANSACTION
--FLUSH-LOG BREAKS CONSISTENCY
Added logic to make sure that, if flush-log option
is specified, mysql_refresh() is never executed after
the transaction has started.
Added verbose messages for all the executions of
mysql_refresh() in order to track its invocation.
mysql-test/r/mysqldump.result:
Added test case for Bug#12809202.
mysql-test/t/mysqldump.test:
Added test case for Bug#12809202.
unix_timestamp() is used in this part of the code in place of current_time().
Also, since the PB2 machines may be extremely fast, instead of looping through the code,
we use sleep(1.1) so that the variables t0 and t1 get different values.
The time comparison using current_time() stored in an int variable was giving wrong results because
the int representation of current_time() was changed in mysql-trunk but not in mysql-5.5.
The time is stored in the format hh:mm:ss as the 'time' datatype, but as an int it is stored as
hhmmss, though only on trunk; on mysql-5.5, as an int, it is stored as hh.
Hence, the current_time() function has been replaced with the unix_timestamp() function.
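A minimal sketch of the resulting test pattern (variable names are illustrative):

  SET @t0 = UNIX_TIMESTAMP();
  DO SLEEP(1.1);                 -- ensure the two timestamps differ even on very fast PB2 hosts
  SET @t1 = UNIX_TIMESTAMP();
  SELECT @t1 > @t0;              -- plain integer seconds, same meaning on 5.5 and trunk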
The patch differs from the original MySQL patch as follows:
- All test case differences have been reviewed one by one, and
care has been taken to restore the original plan so that each
test case executes the code path it was designed for.
- A bug was found and fixed in MariaDB 5.3 in
Item_allany_subselect::cleanup().
- ORDER BY is not removed because we are unsure of all effects,
and it would prevent enabling ORDER BY ... LIMIT subqueries.
- ref_pointer_array.m_size is not adjusted because we don't do
array bounds checking, and because it looks risky.
Original comment by Jorgen Loland:
-------------------------------------------------------------
WL#5953 - Optimize away useless subquery clauses
For IN/ALL/ANY/SOME/EXISTS subqueries, the following clauses are
meaningless:
* ORDER BY (since we don't support LIMIT in these subqueries)
* DISTINCT
* GROUP BY if there is no HAVING clause and no aggregate
functions
This WL detects and optimizes away these useless parts of the
query during JOIN::prepare().
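For illustration, a made-up query (not from the worklog): the DISTINCT and
the aggregate-free GROUP BY below cannot change which values the IN
predicate sees, so they can be removed during JOIN::prepare(); as noted
above, the MariaDB version of the patch keeps ORDER BY.

  SELECT * FROM t1
  WHERE t1.a IN (SELECT DISTINCT t2.a FROM t2 GROUP BY t2.a, t2.b);
  -- is effectively treated as
  SELECT * FROM t1
  WHERE t1.a IN (SELECT t2.a FROM t2);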
- The problem was that the const-table-reading code would try to evaluate MATCH()
before init_ftfuncs() was called.
- Fixed by making the MATCH function "expensive" so that nobody tries to evaluate it
during the optimization phase.
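A hypothetical query of the affected shape: the primary-key equality makes
t1 a const table, so the const-table-reading code could previously end up
evaluating the MATCH() predicate before init_ftfuncs() had been called.

  CREATE TABLE t1 (id INT PRIMARY KEY, b TEXT, FULLTEXT KEY(b)) ENGINE=MyISAM;
  SELECT * FROM t1 WHERE id = 1 AND MATCH(b) AGAINST ('word');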
- Correctly handle the plan refinement stage for LooseScan plans: run create_ref_for_key() if the
LooseScan plan includes a ref access, and if we don't have any fixed key components, switch to a full index scan.
Problem description:
When a view is created using the FORMAT function, and the FORMAT function uses the locale
option, the definition of the view saved in the server does not contain that locale information.
Ex:
create table test2 (bb decimal (10,2));
insert into test2 values (10.32),(10009.2),(12345678.21);
create view test3 as select format(bb,1,'sk_SK') as cc from test2;
select * from test3;
+--------------+
| cc           |
+--------------+
| 10.3         |
| 10,009.2     |
| 12,345,678.2 |
+--------------+
3 rows in set (0.02 sec)
show create view test3
View: test3
Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1) AS `cc`
from `test2`
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.02 sec)
Problem Analysis:
The function Item_func_format::print(), which prints the query string used to create
the view, does not print the third argument (i.e. the locale information). Hence the
view is created without the locale information.
Problem Solution:
If the argument count is more than 2, we now print the third argument onto the query string as well.
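After the fix, the stored view definition should retain the locale
argument, i.e. roughly:

  Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
  SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1,'sk_SK') AS `cc`
  from `test2`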
Files changed:
sql/item_strfunc.cc
Function call changes: Item_func_format::print()
mysql-test/t/select.test
Added test case to test the bug
mysql-test/r/select.result
Result of the test case appended here
- Let the JTBM optimization code handle the case where the subquery is degenerate and doesn't have a
join query plan. Regular materialization would fall back to IN->EXISTS for such cases. Semi-join
materialization does not have such an option; instead, we introduce and use "constant JTBM join tabs"
(see the sketch after these notes).
- Do a "more thorough" cleanup of the SJ-Materialization join tab in JOIN_TAB::cleanup(). The bug
was due to the fact that JOIN_TAB::cleanup() may be called multiple times for the same tab
if the join has grouping.
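A hypothetical example of a degenerate subquery of the kind described in
the first note: the subquery selects an aggregate from a single-row (const)
table, so it has no real join plan, and its value can be computed once and
attached as a constant JTBM join tab.

  CREATE TABLE t0 (a INT);
  INSERT INTO t0 VALUES (1);
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  SELECT * FROM t1 WHERE t1.a IN (SELECT MAX(a) FROM t0);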
- Changed storage to be 2 bytes instead of sizeof(size_t) (a simple optimization).
- Fixed a bug when using query_cache_strip_comments and a query that started with '('.
- Fixed DBUG_PRINT() calls that used wrong (uninitialized) variables.
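A hypothetical statement of the kind that was mis-handled (the exact
failing queries are in the added test): with query_cache_strip_comments
enabled, query text beginning with '(' or a space confused the code that
locates the database name within the cache key.

  SET GLOBAL query_cache_strip_comments= 1;
  ( SELECT /* a comment */ 'is this cached?' );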
mysql-test/mysql-test-run.pl:
Added some space to make output more readable.
mysql-test/r/query_cache.result:
Updated test results
mysql-test/t/query_cache.test:
Added test with query_cache_strip_comments
sql/mysql_priv.h:
Added QUERY_CACHE_DB_LENGTH_SIZE
sql/sql_cache.cc:
Fixed a bug when using query_cache_strip_comments and a query that started with '('.
Store the db length in 2 bytes instead of size_t.
Get the db length from the correct position (earlier we had an error when a query started with ' ').
Fixed DBUG_PRINT() calls that used wrong (uninitialized) variables.
The range optimizer incorrectly chose a loose index scan for GROUP BY
when there was a correlated WHERE condition. This range access
method cannot be used for correlated conditions, not even via
"range checked for each record", because in general the range access
method can change for each outer record. Loose scan destructively
changes the query plan and removes the GROUP operation, which would
result in wrong query plans if another range access were chosen
dynamically.
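A hypothetical query of the affected shape (not the original test case):
the inner GROUP BY/MIN query over KEY(a,b) looks like a loose index scan
candidate, but its range condition is correlated with the outer query, so
the access method would have to be re-chosen for every row of t1.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT, KEY(a,b));
  SELECT t1.a,
         (SELECT MIN(t2.b)
            FROM t2
           WHERE t2.b > t1.b        -- correlated: the range bound changes per outer row
           GROUP BY t2.a
           LIMIT 1) AS m
    FROM t1;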