Fix by Sergey Petrunia.
This patch only prevents the evaluation of expensive subqueries during optimization.
The crash reported in this bug has been fixed by some other patch.
The fix is to call value->is_null() only when !value->is_expensive(), because is_null()
may trigger evaluation of the Item, which in turn triggers subquery evaluation if the
Item is a subquery.
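A minimal, self-contained sketch of the guard pattern (the Item classes here
are illustrative stand-ins, not the server's Item hierarchy):

    #include <cstdio>

    struct Item {
      virtual ~Item() = default;
      virtual bool is_expensive() const { return false; }
      virtual bool is_null() { return false; }  // may fully evaluate the Item
    };

    struct Item_subselect_like : Item {
      bool is_expensive() const override { return true; }
      // Evaluating this would mean running the subquery.
      bool is_null() override { std::puts("subquery evaluated"); return false; }
    };

    // The guard: only probe NULL-ness when evaluating the Item is cheap.
    static bool known_null_at_optimization(Item *value) {
      return !value->is_expensive() && value->is_null();
    }

    int main() {
      Item_subselect_like sub;
      // Prints nothing: the expensive Item is never evaluated here.
      return known_null_at_optimization(&sub) ? 1 : 0;
    }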
In 10.0, VIO timeouts can be in milliseconds, so we add a new function
mysql_get_timeout_value_ms() which can return millisecond-precision
timeout values.
In 5.5, we do not have millisecond precision for timeouts, but we still
provide the mysql_get_timeout_value_ms() function; this makes it easier
for applications, as they can use the millisecond function with 10.0 and
still work with the 5.5 version of the client library.
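A hedged sketch of how a client using the non-blocking (async) API might
consume this timeout; the poll()-based wait loop is illustrative and error
handling is omitted:

    #include <poll.h>
    #include <mysql.h>

    static void async_connect(MYSQL *mysql, const char *host,
                              const char *user, const char *passwd)
    {
      MYSQL *ret;
      int status= mysql_real_connect_start(&ret, mysql, host, user, passwd,
                                           NULL, 0, NULL, 0);
      while (status)
      {
        struct pollfd pfd;
        int timeout_ms, res, wait= 0;

        pfd.fd= mysql_get_socket(mysql);
        pfd.events= (short) (((status & MYSQL_WAIT_READ)  ? POLLIN  : 0) |
                             ((status & MYSQL_WAIT_WRITE) ? POLLOUT : 0));
        /* Millisecond precision with the 10.0 library; with 5.5 the value
           is still returned in milliseconds but has only second
           granularity. */
        timeout_ms= (status & MYSQL_WAIT_TIMEOUT)
                      ? (int) mysql_get_timeout_value_ms(mysql) : -1;
        res= poll(&pfd, 1, timeout_ms);
        if (res == 0)
          wait|= MYSQL_WAIT_TIMEOUT;
        else
        {
          if (pfd.revents & POLLIN)  wait|= MYSQL_WAIT_READ;
          if (pfd.revents & POLLOUT) wait|= MYSQL_WAIT_WRITE;
        }
        status= mysql_real_connect_cont(&ret, mysql, wait);
      }
    }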
.. into MariaDB 5.3
Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3
This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.
Using the query from the bug report to explain what happens and what
causes the wrong result when ICP is enabled:
1. The t3 table contains four records. The outer query will read
these and for each of these it will execute the subquery.
2. Before the first execution of the subquery, it will be optimized. In
this case the important part is what happens to the first table, t1:
- make_join_select() will call the range optimizer, which decides
that t1 should be accessed using a range scan on the k1 index.
It creates a QUICK_RANGE_SELECT object for this.
- As the last part of optimization, the ICP code pushes the
condition down to the storage engine for table t1 on the k1 index.
This produces the following information in the explain for this table:
2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
Note the use of filesort.
3. The first execution of the subquery does the following (among other
things), due to the need for sorting:
a. Call create_sort_index(), which in turn will call find_all_keys():
b. find_all_keys() will read the required keys for all qualifying
rows from the storage engine. To do this it checks if it has a
quick-select for the table. It will use the quick-select for
reading records. In this case it will read four records from the
storage engine (based on the range criteria). The storage engine
will evaluate the pushed index condition for each record.
c. At the end of create_sort_index() there is code that cleans up a
lot of stuff on the join tab. One of the things that is cleaned
is the select object. The result of this is that the
quick-select object created in make_join_select is deleted.
4. The second execution of the subquery does the same as the first but
the result is different:
a. Call create_sort_index() which again will call find_all_keys()
(same as for the first execution)
b. find_all_keys() will read the keys from the storage engine. To
do this it checks if it has a quick-select for the table. Now
there is NO quick-select object(!) (since it was deleted in
step 3c). So find_all_keys() falls back to reading the table using a
table scan instead. Instead of reading the four relevant records
in the range, it reads the entire table (6 records). It then
evaluates the table's condition (and this is where it goes wrong). Since
the entire condition has been pushed down to the storage engine
using ICP, all 6 records qualify. (Note that the storage engine
will not evaluate the pushed index condition in this case since
it was pushed for the k1 index and now we do a table scan
without any index being used).
The result is that six qualifying key values are returned
instead of four, because the table's condition was not evaluated.
c. As above.
5. The last two executions of the subquery will also produce wrong results
for the same reason.
Summary: The problem occurs because all but the first execution of the
subquery are done as a table scan without evaluating the table's
condition (which has been pushed to the storage engine for a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.
Note that, in addition to causing wrong results, this bug can also
result in bad performance, since the subquery is executed using a table
scan instead of a range scan. This is an issue in MySQL 5.5.
The fix for this problem is to prevent the quick-select object created
by the optimizer from being deleted when create_sort_index() cleans up
the join tab. This ensures that the quick-select object and the
corresponding pushed index condition remain available and are used by
all following executions of the subquery.
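A minimal, self-contained illustration of the ownership problem (plain C++,
not server code): a per-execution cleanup step destroys an object that the
cached plan still owns, so every later execution silently falls back to a
different access method.

    #include <cstdio>
    #include <memory>

    struct QuickRangeScan { void read() { std::puts("range scan"); } };

    struct JoinTab {
      std::unique_ptr<QuickRangeScan> quick;   // created once by the optimizer
      void execute() {
        if (quick) quick->read();              // intended access path
        else std::puts("table scan");          // fallback once the object is gone
      }
    };

    // Buggy cleanup: frees the optimizer-created quick select after sorting.
    void cleanup_after_sort_buggy(JoinTab &tab) { tab.quick.reset(); }

    // Fixed cleanup: leaves the quick select alone so later executions reuse it.
    void cleanup_after_sort_fixed(JoinTab &) {}

    int main() {
      JoinTab tab{std::make_unique<QuickRangeScan>()};
      tab.execute();                  // first execution: range scan
      cleanup_after_sort_buggy(tab);  // models the create_sort_index() cleanup
      tab.execute();                  // second execution: table scan (the bug)
    }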
cmake/cpack_rpm.cmake:
* mark all cnf files with %config(noreplace)
* add the forgotten postun script
sql/sys_vars.cc:
0 for a string variable means "no default". But datadir has a default value.
support-files/rpm/server-postin.sh:
* use mysqld --help to determine the correct datadir in the presence of my.cnf files
(better than my_print_defaults, because it considers the correct group set).
* Only create users and chown/chmod if it's a fresh install, not an upgrade.
* Only run mysql_install_db if datadir does not exist.
Do not include mytop in mariadb-client-5.5 .deb package.
There is already a Debian mytop package, so we get a package conflict.
And there is no reason for the MariaDB project to guerrilla-take-over
mytop maintenance.
- Wrong thd uses in Item_subselect, could lead to crash
- Initialize uninitialized variable in new autoincrement handling code
sql/handler.cc:
More DBUG_PRINT
sql/item_subselect.cc:
Wrong thd uses in Item_subselect, could lead to crash
storage/innobase/handler/ha_innodb.cc:
Initialize variable needed by upper level. This only happens when the auto-increment value wraps around.
storage/xtradb/handler/ha_innodb.cc:
Initialize variable needed by upper level. This only happens when the auto-increment value wraps around.
- Don't abort if plugin table exists
- Use longer timeout for start/stop of mysqld
debian/dist/Debian/mariadb-server-5.5.postinst:
Don't abort if plugin table exists
debian/mariadb-server-5.5.mysql.init:
Use longer timeout for start/stop of mysqld
In some rare cases, when the value of the system variable join_buffer_size
was set to a number less than 256, the function JOIN_CACHE::set_constants
determined the size of an offset in the join buffer to be 1 byte, even though
the minimal join buffer required more than 256 bytes. This could cause
a server crash when records from the join buffer were read.
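A hedged illustration of the offset-size arithmetic (not the actual
JOIN_CACHE::set_constants code; the minimal buffer size below is
hypothetical): the offset size must be derived from the buffer that is
actually used, not from a join_buffer_size value smaller than the minimum.

    #include <cstddef>

    // Bytes needed to address any position inside a buffer of `buffer_size`.
    static size_t offset_size(size_t buffer_size) {
      return buffer_size < 256 ? 1 : buffer_size < 256 * 256 ? 2 : 4;
    }

    int main() {
      size_t requested= 128;     // join_buffer_size set below 256
      size_t minimal=   4096;    // hypothetical minimal join buffer actually used
      size_t effective= requested < minimal ? minimal : requested;

      // Bug pattern: offset_size(requested) == 1, too small for `effective`.
      // Fix pattern: compute offsets from the size that is actually used.
      return offset_size(effective) >= 2 ? 0 : 1;
    }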
The --ignore-db-dir feature was backported from MySQL 5.6.
Some code was added to make commands such as
SELECT * FROM ignored_db.t1;
CALL ignored_db.proc();
USE ignored_db;
take that option into account.
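A hedged sketch of the kind of check this adds (illustrative code, not the
server implementation; the container and return convention are assumptions,
only the db_name_is_in_ignore_list() name comes from the per-file comments
below):

    #include <set>
    #include <string>

    static std::set<std::string> ignored_db_dirs;   // filled from --ignore-db-dir

    // Stand-in for the db_name_is_in_ignore_list() interface mentioned below.
    static bool db_name_is_in_ignore_list(const std::string &db) {
      return ignored_db_dirs.count(db) != 0;
    }

    // Called when a statement such as SELECT, CALL or USE references a schema:
    // an ignored directory is treated as if the database does not exist.
    static bool check_db_name_allowed(const std::string &db) {
      return !db_name_is_in_ignore_list(db);
    }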
per-file comments:
mysql-test/r/ignore_db_dirs_basic.result
test result added.
mysql-test/t/ignore_db_dirs_basic-master.opt
options for the test,
actually the set of --ignore-db-dir lines.
mysql-test/t/ignore_db_dirs_basic.test
test for the feature.
The same test from 5.6 was taken as a basis;
then tests for SELECT, CALL, etc. were added.
per-file comments:
sql/mysql_priv.h
MDEV-495 backport --ignore-db-dir.
interface for db_name_is_in_ignore_list() added.
sql/mysqld.cc
MDEV-495 backport --ignore-db-dir.
--ignore-db-dir handling.
sql/set_var.cc
MDEV-495 backport --ignore-db-dir.
the @@ignore_db_dirs variable added.
sql/sql_show.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
sql/sql_show.h
MDEV-495 backport --ignore-db-dir.
interface added for opt_ignored_db_dirs.
sql/table.cc
MDEV-495 backport --ignore-db-dir.
check if the directory is ignored.
Remove the sql directory from the include path to work around the problem. This removes the ambiguity, since then only one client_settings.h will be in the include paths.
LP bug #1035225 / MySQL bug #66301: INSERT ... ON DUPLICATE KEY UPDATE +
innodb_autoinc_lock_mode=1 is broken
The problem was that when certain INSERT ... ON DUPLICATE KEY UPDATE
were executed concurrently on a table containing an AUTO_INCREMENT
column as a primary key, InnoDB would correctly reserve non-overlapping
AUTO_INCREMENT intervals for each statement, but when the server
encountered the first duplicate key error on the secondary key in one of
the statements and performed an UPDATE, it also updated the internal
AUTO_INCREMENT value to the one from the existing row that caused a
duplicate key error, even though the AUTO_INCREMENT value was not
specified explicitly in the UPDATE clause. It would then proceed with
using AUTO_INCREMENT values from the range reserved previously by another
statement, causing duplicate key errors on the AUTO_INCREMENT column.
Fixed by changing write_record() to ensure that in case of a duplicate
key error the internal AUTO_INCREMENT counter is only updated when the
AUTO_INCREMENT value was explicitly updated by the UPDATE
clause. Otherwise it is restored to what it was before the duplicate key
error, as that value is unused and can be reused for subsequent
successfully inserted rows.
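A self-contained illustration of the counter behaviour the fix restores
(plain C++, not the server's write_record() or InnoDB code): after a
duplicate-key error that is resolved by the UPDATE path without touching the
auto-increment column, the counter is rolled back so the reserved value is
reused.

    #include <cassert>

    struct AutoIncCounter {
      unsigned long long next= 1;
      unsigned long long reserve() { return next++; }   // value handed to an INSERT
    };

    int main() {
      AutoIncCounter counter;

      unsigned long long saved= counter.next;    // state before the row is written
      counter.reserve();                          // value reserved for the insert

      bool duplicate_key_on_secondary= true;      // the UPDATE path is taken instead
      bool autoinc_set_by_update_clause= false;   // UPDATE did not assign the column

      // Fix pattern: restore the counter so the unused value can be handed out
      // to the next successfully inserted row instead of being skipped.
      if (duplicate_key_on_secondary && !autoinc_set_by_update_clause)
        counter.next= saved;

      assert(counter.next == saved);
      return 0;
    }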
sql/sql_insert.cc:
Don't update next_insert_id to the value of a row found during ON DUPLICATE KEY UPDATE.
sql/sql_parse.cc:
Added DBUG_SYNC
sql/table.h:
Added a next_number_field_updated flag to detect changes to auto-increment fields.
Moved some fields to put the bool fields next to each other (better alignment).