A SELECT with a "NULL NOT IN" condition containing a complex
subselect from the same table as in the outer select failed
with an assertion.
The failure was caused by a combination of circumstances:
1) the inner select was optimized by make_join_statistics to use
the QUICK_RANGE_SELECT access method (which implies an index
scan of the table);
2) the subselect was independent of (constant with respect to)
the outer select;
3) a condition was pushed down into the inner select.
During the evaluation of the constant IN expression the optimizer
temporarily changed the access method from index scan to table
scan, but the engine handler had already been initialized for index
access by make_join_statistics. That caused the assertion.
Unnecessary index initialization has been removed from
the QUICK_RANGE_SELECT::init method (QUICK_RANGE_SELECT::reset
reinvokes this initialization).
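For illustration, a query of roughly this shape could hit the
failing path (table and column names here are hypothetical, not
taken from the original test case):
CREATE TABLE t1 (a INT, b INT, KEY (a));
INSERT INTO t1 VALUES (1, 10), (2, 20);
SELECT * FROM t1
WHERE NULL NOT IN (SELECT a FROM t1 WHERE a < 2 AND b > 0);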
The server returned to the client the VARBINARY column type
instead of the DATE type for a result of the COALESCE,
IFNULL, IF, CASE, GREATEST or LEAST functions (for example,
COALESCE over a JOIN) if that result was filesorted in an
anonymous temporary table during query execution.
For example:
SELECT COALESCE(t1.date1, t2.date2) AS result
FROM t1 JOIN t2 ON t1.id = t2.id ORDER BY result;
To create columns of the various date/time types in a
temporary table, the create_tmp_field_from_item() function
uses the Item::tmp_table_field_from_field_type() method.
However, fields of the MYSQL_TYPE_NEWDATE type were missed
there, and VARBINARY columns were created by default.
The necessary condition has been added.
When ordering by a non-PK index column, there were actually
two problems:
1) with a clustered PK, ordering by a non-PK index must also
compare the PK as a last resort to distinguish keys from
each other;
2) a bug in the index search handling in ha_partition (found
when extending the test case).
The solution to 1) was to include the PK in the key comparison
when the PK is clustered and the search is on another index.
The solution to 2) was to remove the optimization from ordered
scan to unordered scan when the PK is clustered.
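A hedged sketch of the affected query shape (illustrative
names; InnoDB stores the PK clustered):
CREATE TABLE t1 (pk INT PRIMARY KEY, b INT, KEY (b))
ENGINE=InnoDB
PARTITION BY HASH (pk) PARTITIONS 2;
-- rows with equal values of b in different partitions need the
-- PK as a tiebreaker during the merge-ordered partition scan
SELECT pk, b FROM t1 ORDER BY b;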
A multi-UPDATE involving a derived table could cause a crash.
When a multi-UPDATE command fails to lock some table, and
subsequently succeeds, the tables need to be reopened if
they were altered. But the reopening procedure failed for
derived tables.
Extra cleanup has been added.
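A hedged sketch of the statement shape involved (hypothetical
tables t1 and t2):
UPDATE t1, (SELECT MAX(id) AS m FROM t2) AS d
SET t1.flag = 1
WHERE t1.id = d.m;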
The problem was that the PACK_KEYS and MAX_ROWS clauses in ALTER TABLE
did not trigger table reconstruction.
The fix is to rebuild the table if PACK_KEYS or MAX_ROWS is specified.
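After the fix, statements such as the following rebuild the
table (table name is illustrative):
ALTER TABLE t1 MAX_ROWS = 1000000;
ALTER TABLE t1 PACK_KEYS = 1;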
When running stored routines, the status variable "Questions" was wrongly
incremented. According to the manual it should contain the "number of
statements that clients have sent to the server".
Introduced a new status variable 'questions' to replace the query_id
variable, which corresponded badly with the number of statements
sent by the client.
The new behavior is meant to be backward compatible with 4.0 and at the
same time work with new features in a similar way.
This is a backport from 6.0
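The counter can be inspected as usual:
SHOW GLOBAL STATUS LIKE 'Questions';
-- counts statements sent by clients, not statements executed
-- inside stored routines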
The code that reads the value of a system variable was extracting the value
at the PREPARE stage and substituting it (as a constant) into the parse tree.
Note that this must be a reversible transformation, i.e. it must be reversed before
each re-execution.
Unfortunately this cannot be reliably done using the current code, because there are
other non-reversible source tree transformations that can interfere with this
reversible transformation.
Fixed by resolving the value not at PREPARE but at EXECUTE time (as the rest
of the functions operate). Added a cache of the value, so that it is constant
throughout the execution of the query. Note that the cache also caches NULL
values.
Updated an obsolete related test suite (variables-big) and the code to test the
result type of system variables (as per bug 74).
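A sketch of the behavior the fix enables; max_join_size is just
an example variable, not one from the original report:
PREPARE s FROM 'SELECT @@max_join_size';
SET @@max_join_size = 1000000;
EXECUTE s; -- the value is read at EXECUTE, so the new setting is seen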
Concurrent execution of 1) a multi-table UPDATE with a
NATURAL/USING join and 2) such a query as "FLUSH TABLES
WITH READ LOCK" or "ALTER TABLE" on the updated table led
to a server crash.
The mysql_multi_update_prepare() function call is optimized
to lock the updated tables only, so it postpones locking
until last, and if locking fails, it cleans up the modified
syntax structures and repeats the query analysis. However,
that cleanup procedure was incomplete for NATURAL/USING join
syntax data: 1) some Field_item items pointed into freed
table structures, and 2) the TABLE_LIST::join_columns field
was not reset.
Major change:
the short-lived Field *Natural_join_column::table_field has
been replaced with a long-lived Item*.
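The crashing scenario, roughly (hypothetical tables; two
concurrent connections):
-- connection 1:
UPDATE t1 NATURAL JOIN t2 SET t1.x = t2.y;
-- connection 2, concurrently:
FLUSH TABLES WITH READ LOCK;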
Warnings with error code 0:
before this fix, several places in the code would raise a warning with an
error code of 0, making it impossible for a stored procedure, a connector,
or a client application to trigger logic to handle the warning.
Also, the warning text was hard-coded, and therefore not translated.
With this fix, new error numbers have been created to represent these
warnings, and the warning text is coded in the errmsg.txt file.
Adds --general-log-file, --slow-query-log-file command-
line options to match system variables of the same names.
Deprecates --log, --log-slow-queries command-line options
and log, log_slow_queries system-variables for v7.0; they
are superseded by general_log/general_log_file and
slow_query_log/slow_query_log_file, respectively.
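The corresponding system variables can also be set at runtime,
for example:
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/tmp/general.log';
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/tmp/slow.log';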
EXPLAIN EXTENDED crashed the server:
when creating a temporary table that contains aggregate functions, a
non-reversible source transformation was performed to redirect aggregate
function arguments towards temporary table columns.
This caused EXPLAIN EXTENDED to fail because it was trying to resolve
references to the (freed) temporary table.
Fixed by preserving the original aggregate function arguments and
using them (instead of the transformed ones) for EXPLAIN EXTENDED.
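A hedged sketch of the kind of query involved (hypothetical
table t1; not the exact test case):
EXPLAIN EXTENDED SELECT SUM(a) FROM t1 GROUP BY b;
SHOW WARNINGS; -- EXPLAIN EXTENDED exposes the resolved query here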
"Trigger fired multiple times leads to gaps in auto_increment sequence".
The bug was that if a trigger fired multiple times inside a top
statement (for example, the top statement is a multi-row INSERT
and the trigger is ON INSERT), and that trigger inserted into an
auto_increment column, then gaps could be observed in the
auto_increment sequence, even if there were no other users of the
database (no concurrency).
The cause was wrong usage of THD::auto_inc_intervals_in_cur_stmt_for_binlog.
Note that the fix changes "class handler", I'll tell the Storage Engine API team.
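An illustrative scenario (hypothetical tables):
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
CREATE TABLE t2 (v INT);
CREATE TRIGGER t2_ai AFTER INSERT ON t2
FOR EACH ROW INSERT INTO t1 (v) VALUES (NEW.v);
INSERT INTO t2 VALUES (1), (2), (3); -- fires the trigger three times
SELECT id FROM t1; -- ids should be consecutive; before the fix,
                   -- gaps could appear here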
MyISAM blocks index usage for bulk insert into zero-records tables.
See ha_myisam::start_bulk_insert() lines from
...
if (file->state->records == 0 ...
...
That causes problems for the partition engine when some partitions have
records and some do not, as the engine uses the same access method for
all partitions.
Now the partition engine doesn't call index_first/index_last
for empty tables.
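A hedged sketch of the problematic pattern (illustrative names):
CREATE TABLE t1 (a INT) ENGINE=MyISAM
PARTITION BY HASH (a) PARTITIONS 2;
INSERT INTO t1 VALUES (0); -- only one partition receives a row
INSERT INTO t1 SELECT a + 1 FROM t1; -- bulk insert spanning empty
                                     -- and non-empty partitions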
per-file comments:
mysql-test/r/partition.result
Bug#38005 Partitions: error with insert select.
test result
mysql-test/t/partition.test
Bug#38005 Partitions: error with insert select.
test case
sql/ha_partition.cc
Bug#38005 Partitions: error with insert select.
ha_engine::index_first and
ha_engine::index_last not called for empty tables.
The problems are located in sql_partition.cc, where the functions
calculating partition_id do not expect an error returned from
item->val_int().
Fixed by adding checks to these functions.
Note - this tries to fix more problems than just the reported bug.
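One plausible shape of the problem (hypothetical, not the exact
test case): a row whose value raises an error while the partition
function is evaluated must be rejected, not inserted anyway:
SET sql_mode = 'STRICT_ALL_TABLES';
CREATE TABLE t1 (a INT) PARTITION BY HASH (a) PARTITIONS 2;
INSERT INTO t1 VALUES ('not a number'); -- error during partition_id
                                        -- calculation must abort the row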
per-file comments:
modified:
mysql-test/r/partition.result
Bug#38083 Error-causing row inserted into partitioned table despite error
test result
mysql-test/t/partition.test
Bug#38083 Error-causing row inserted into partitioned table despite error
test case
sql/opt_range.cc
Bug#38083 Error-causing row inserted into partitioned table despite error
get_part_id() call fixed
sql/partition_info.h
Bug#38083 Error-causing row inserted into partitioned table despite error
get_subpart_id_func interface changed.
sql/sql_partition.cc
Bug#38083 Error-causing row inserted into partitioned table despite error
various functions calculating partition_id and subpart_id didn't expect
an error returned from item->val_int(). Error checks added.
The problem was that the test was trying to obtain a lock on
a table in one connection without ensuring that an insert which
was executed in another connection had released the lock on the
same table.
The solution is to add a dummy select query after the insert to
ensure that the table is unlocked and closed by the time it tries
to lock it again. This is enough to prevent test failures described
in the bug report. As an extra safety measure, concurrent inserts
are disabled.
Removed comments that calculated Table_locks_immediate. This
value is not tested anymore, and its calculation did not reflect
the actual value.
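Schematically (names are illustrative, not the actual test file
contents):
SET GLOBAL concurrent_insert = 0; -- extra safety measure
-- connection con1:
INSERT INTO t1 VALUES (1);
SELECT * FROM t1; -- dummy select: ensures the table is unlocked
                  -- and closed before the next lock attempt
-- connection default:
LOCK TABLES t1 WRITE;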
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
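A hedged sketch of a query with an aggregate pulled up into the
outer select (hypothetical tables t1 and t2):
SELECT (SELECT MAX(t1.a) FROM t2) AS pulled_up
FROM t1
GROUP BY t1.b;
-- MAX(t1.a) is aggregated in the outer select, so it is referenced
-- through the specialized reference mentioned above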
The '@' symbol cannot be used in a host name according to RFC 952.
The fix:
added the function check_host_name(LEX_STRING *str),
which checks that all characters in the host name string are valid and
that the host name length is not more than the maximum host name length
(just moved the check_string_length() function from the parser into
check_host_name()).
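For example, a user specification with an invalid host part is
presumably rejected after the fix (illustrative statement):
GRANT SELECT ON db.* TO 'u'@'bad@host'; -- '@' is not a valid
                                        -- host name character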