EXPLAIN EXTENDED of a nested query containing the error:
1054 Unknown column '...' in 'field list'
may cause a server crash.
A parse error like the one described above forces a call to
JOIN::destroy() on the malformed subquery.
JOIN::destroy() closes and frees temporary tables. However,
fields of these temporary tables may be listed in
st_select_lex::group_list of the outer query, and that
st_select_lex may not clean them up properly. So, after the
JOIN::destroy() call, st_select_lex::group_list may contain
Item_field objects with dangling pointers to freed temporary
table Field objects. That caused the crash.
(moved from Bug 42308)
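A hypothetical sketch of the statement shape described above (table and column names are illustrative, not the original test case): the subquery fails name resolution with error 1054 while the outer query groups by its result.
  CREATE TABLE t1 (a INT);
  --error ER_BAD_FIELD_ERROR
  EXPLAIN EXTENDED
    SELECT (SELECT no_such_column FROM t1) AS s FROM t1 GROUP BY s;
  DROP TABLE t1;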
Details:
- insert_update
  Add the missing DROP TABLE; change error numbers -> names
- varbinary
  Add the missing DROP TABLE
- sp_trans_log
  Add the missing DROP FUNCTION; improve formatting
The issue behind the current bug is unguarded access to mi->slave_running
by the shutdown thread calling end_slave(); that is Bug#29968
(which unfortunately was not cross-linked with the current bug).
Fixed:
by removing the unguarded read of the running status and instead
reading it in terminate_slave_thread()
at the time run_lock is taken (mostly a backport of the Bug#29968 patch,
still with some improvements over that patch - see the error reporting
from terminate_slave_thread()).
The issue of Bug#38716 is fixed here for the 5.0 branch as well.
Note:
There has been a separate artifact identified -
a race condition between init_slave() and end_slave() -
reported as Bug#44467.
The Point() and Linestring() functions created a WKB representation of an
object instead of a real geometry object.
That produced bugs when these values were inserted into tables.
GIS tests fixed accordingly.
per-file messages:
mysql-test/r/gis-rtree.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/r/gis.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/t/gis-rtree.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - GeomFromWKB invocations removed
mysql-test/t/gis.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - AsWKB invocations added
sql/item_geofunc.cc
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
Point() and similar functions to create a proper object
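A minimal usage sketch of the corrected behaviour (table name assumed, not from the original tests): Point() and LineString() can now be stored in a geometry column directly, without wrapping them in GeomFromWKB().
  CREATE TABLE g1 (g GEOMETRY NOT NULL);
  INSERT INTO g1 VALUES (Point(1, 1)), (LineString(Point(0, 0), Point(1, 1)));
  SELECT AsText(g) FROM g1;
  DROP TABLE g1;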
Bug #40925: Equality propagation takes non indexed attribute
Query execution plans and execution time of queries like
  select a, b, c from t1
  where a > '2008-11-21' and b = a limit 10
depended on the order of the equality operator's parameters:
"b = a" and "a = b" were not treated the same.
The equality propagation algorithm has been fixed:
the substitute_for_best_equal_field function should not
substitute a field for an equal field if both fields belong
to the same table.
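A sketch of the symptom (schema assumed for illustration): before the fix the two equivalent predicates could yield different plans.
  CREATE TABLE t1 (a DATE, b DATE, c INT, KEY(a));
  EXPLAIN SELECT a, b, c FROM t1 WHERE a > '2008-11-21' AND b = a LIMIT 10;
  EXPLAIN SELECT a, b, c FROM t1 WHERE a > '2008-11-21' AND a = b LIMIT 10;
  DROP TABLE t1;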
Turned off autocommit at the start of this test per Innobase recommendation.
Noted a significant reduction in run time for this test, with a minor increase in other tests' run times.
1) BUG#43309 - Test main.innodb can't be run twice
Detailed revision comments:
r4701 | vasil | 2009-04-13 17:03:46 +0300 (Mon, 13 Apr 2009) | 6 lines
branches/5.0:
Fix Bug#43309 Test main.innodb can't be run twice
by making the innodb.test reentrant.
mysqldump.test is designed to run with concurrent inserts
disabled. It is disabling concurrent inserts at the very
beginning of the test case, and re-enables them at the
bottom of the test. But for some reason (likely an incorrect
merge) concurrent inserts were enabled in the middle of the test.
The problem is fixed by enabling concurrent inserts only
at the bottom of the test case.
to wrong results
Three problems were found with DES_ENCRYPT()/DES_DECRYPT():
1. The maximum length was not calculated properly. Fixed in fix_length_and_dec().
2. DES_ENCRYPT() had a side effect of sometimes reallocating and changing
the value of its argument. Fixed by explicitly pre-allocating the necessary
space to pad the argument with trailing '*' (stars) when calculating the
DES digest.
3. In DES_ENCRYPT() the string buffer for the result value was not
reallocated to the correct size; only the string length was assigned to it.
Fixed by making sure there is enough space to hold the result.
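A minimal usage sketch of the functions involved (the key string is arbitrary; a server built with SSL support is assumed, otherwise these functions return NULL); after the fix the round trip returns the original value without corrupting the argument buffer.
  SELECT HEX(DES_ENCRYPT('hello', 'my_secret_key'));
  SELECT DES_DECRYPT(DES_ENCRYPT('hello', 'my_secret_key'), 'my_secret_key');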
INFORMATION_SCHEMA tables are based on internal temporary tables which are
removed after each statement execution, so HANDLER commands cannot be used
with INFORMATION_SCHEMA.
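A sketch of the now-rejected usage (any INFORMATION_SCHEMA table would do):
  HANDLER information_schema.tables OPEN;  -- now rejected with an error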
Streamlined how we increase the size of our test table.
The new method shows run time decreased by ~60%.
This is not a guarantee that we will not see test timeouts (the random failures noted in the bug),
but it should significantly reduce the chances of this occurring.
routine does not exist
There is an inconsistency with DROP DATABASE IF EXISTS, DROP TABLE IF
EXISTS and DROP VIEW IF EXISTS: those are binlogged even if the database or
table does not exist, whereas DROP PROCEDURE IF EXISTS is not. It
would be nice, or at least consistent, if DROP PROCEDURE/STATEMENT
worked the same way.
Fixed DROP PROCEDURE|FUNCTION IF EXISTS by adding a call to
mysql_bin_log.write in mysql_execute_command. Also checked whether all
documented "DROP (...) IF EXISTS" statements get binlogged.
NOTE: This is a 5.0 backport patch, as requested by support.
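A minimal sketch of the now-consistent behaviour (procedure name is illustrative): the statement is written to the binary log even though the routine does not exist.
  DROP PROCEDURE IF EXISTS no_such_procedure;
  SHOW BINLOG EVENTS;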
The test started failing following the push for BUG#41541.
Some of the algorithms access bytes beyond the input data;
this can reach up to one byte less than the "word size",
which is BITS_SAVED / 8.
Fixed by adding (BITS_SAVED / 8) - 1 bytes to the buffer size
(i.e. Memory Segment #2) to avoid accessing unallocated data.
The problem is that a SELECT .. FOR UPDATE statement might open
a table and later wait for an impending global read lock without
noticing that it is holding a table that is being waited upon by
the flush phase of the process that took the global read
lock.
The same problem also affected the following statements:
LOCK TABLES .. WRITE
UPDATE .. SET (update and multi-table update)
TRUNCATE TABLE ..
LOAD DATA ..
The solution is to make the above statements wait for an impending
global read lock before opening the tables. If there is no
impending global read lock, the statement raises a temporary
protection against global read locks and progresses smoothly
towards completion.
Important notice: the patch does not try to address all possible
cases, only those which are common and can be fixed unintrusively
enough for 5.0.
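A sketch of the problematic sequence described above, as two client connections (table name assumed); with the fix, connection 2 waits for the impending global read lock before opening the table.
  -- connection 1
  FLUSH TABLES WITH READ LOCK;
  -- connection 2: now blocks up front instead of opening t1 first
  SELECT * FROM t1 FOR UPDATE;
  -- connection 1
  UNLOCK TABLES;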
Original commentary:
Bug #37348: Crash in or immediately after JOIN::make_sum_func_list
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
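A hypothetical sketch of the query shape involved (not the original test case): the SUM() inside the subquery refers only to an outer column, so it is aggregated in, and pulled up to, the outer select.
  CREATE TABLE t1 (a INT, b INT);
  SELECT a, (SELECT SUM(t1.b)) AS s FROM t1 GROUP BY a;
  DROP TABLE t1;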
updates
An attempt to execute a trigger or stored function from a multi-UPDATE
led to an error if the trigger/function used - but didn't update -
a table that was also used by the calling statement. A read-only
reference to tables used in the calling statement should be allowed.
This problem was caused by the fact that the check for conflicting
use of tables in SPs/triggers was performed in open_tables(),
and in the case of multi-UPDATE we don't yet know the exact lock type
at that stage.
We solve the problem by moving this check to lock_tables(), so
it can be performed after exact lock types for tables used by
multi-UPDATE are determined.
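A sketch of the now-allowed pattern (names assumed): the stored function only reads t1, which the calling multi-UPDATE also uses but does not update through the function.
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE FUNCTION f1() RETURNS INT RETURN (SELECT COUNT(*) FROM t1);
  UPDATE t1, t2 SET t2.b = f1() WHERE t1.a = t2.b;
  DROP FUNCTION f1;
  DROP TABLE t1, t2;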
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a union have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
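A minimal sketch of the preserved type (table and column names assumed): since both SELECTs carry the same FLOAT(M,D) definition, the union result keeps it.
  CREATE TABLE t1 (f FLOAT(5,2));
  CREATE TABLE t2 SELECT f FROM t1 UNION SELECT f FROM t1;
  SHOW CREATE TABLE t2;   -- f remains FLOAT(5,2) rather than plain FLOAT
  DROP TABLE t1, t2;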
When the thread executing a DDL statement was killed after it had finished
its execution but before writing the binlog event, the error code in
the binlog event could be wrongly set to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixes the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also includes the following changes in order to
provide the test case:
1) modified mysqltest to support variables in the connection command
2) modified mysql-test-run.pl, adding a new variable MYSQL_SLAVE to
run the mysql client against the slave mysqld.
The problem is that the read and write methods of the shared
memory transport (protocol) didn't react to asynchronous close
events, which could lead to a lock up as the client would wait
(until time out) for a server response that will never come.
The solution is to also wait for close events while waiting
for I/O from or to the server.
Including modifications according to code review,
plus a backport of the fix for
Bug 41932 funcs_1: is_collation_character_set_applicability path
too long for tar,
which was missing in 5.0 (just a renaming of two files).
When an alias is given after NAME_CONST(), the alias name was overwritten.
Now NAME_CONST() resets the field's name in fix_fields() only if there is
no alias; if there is an alias, NAME_CONST() does not reset the field's
name and the alias is kept.
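A minimal sketch of the aliasing behaviour after the fix:
  SELECT NAME_CONST('myname', 42);            -- result column is named "myname"
  SELECT NAME_CONST('myname', 42) AS alias1;  -- the alias wins: column is "alias1"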
due to name_const substitution
Problem:
"In general, statements executed within a stored procedure
are written to the binary log using the same rules that
would apply were the statements to be executed in standalone
fashion. Some special care is taken when logging procedure
statements because statement execution within procedures
is not quite the same as in non-procedure context".
For example, each reference to a local variable in SP's
statements is replaced by NAME_CONST(var_name, var_value).
Queries like
"CREATE TABLE ... SELECT FUNC(local_var ..."
are logged as
"CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ..."
which leads to different field names and
might result in "Incorrect column name" if var_value is long enough.
Fix: in 5.x we'll issue a warning in such a case.
In 6.0 we should get rid of NAME_CONST().
Note: this issue and change should be described in the documentation
("Binary Logging of Stored Programs").
Fine-tuning. Broke out comparison into method by
suggestion of Davi. Clarified comments. Reverting
test-case which I find too brittle; proper test
case in 5.1+.
(Pushing for Azundris)
We allow security-contexts with NULL users (for
system-threads and for unauthenticated users).
If a non-SUPER-user tried to KILL such a thread,
we tried to compare the user-fields to see whether
they owned that thread. Comparing against NULL was
not a good idea.
If KILLer does not have SUPER-privilege, we
specifically check whether both KILLer and KILLee
have a non-NULL user before testing for string-
equality. If either is NULL, we reject the KILL.
After the table is compressed by the myisampack utility,
opening the table in the server produces valgrind warnings.
This happens because when we try to read a record into the buffer
we always assumed that the remaining buffer to read is at least one
word size (2, 4 or 8 bytes). Sometimes the remaining buffer is
smaller than the word size, and trying to read an entire word
ends up in valgrind errors.
Fixed by reading byte by byte when we detect that the remaining
buffer size is less than the word size.
expired timeout on debx86-b in PB
Moved the resource-intensive test case for bug #41486 into
a separate test file to reduce execution time for mysql.test.
When an INSERT DELAYED operation is performed, the time_zone info is not
kept in the row info. So when the row is actually inserted some time later,
the time_zone is not written into the binlog.
This causes wrong results for TIMESTAMP columns on the slave.
Our solution is to store the time_zone info with the delayed row and to
restore the time_zone from the row info when that row is executed later by
another thread. This way the correct time_zone info is written into the
binlog and the slave gets the correct result.
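A sketch of the affected pattern (schema and time zone are illustrative; a MyISAM table is assumed, as INSERT DELAYED requires a supporting engine); with the fix, the session time zone travels with the delayed row, so the replicated TIMESTAMP value is correct.
  CREATE TABLE t1 (ts TIMESTAMP);
  SET time_zone = '+05:00';
  INSERT DELAYED INTO t1 VALUES (NOW());
  FLUSH TABLE t1;   -- make sure the delayed-insert thread has flushed the row
  SELECT ts FROM t1;
  DROP TABLE t1;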
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements of this tests
in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- move the subtests for the bugs 38499 and 36691 into separate scripts
- runtime under excessive parallel I/O load after applying the fix
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498
Do not throw an error after checking only the first and second arguments.
Continue checking the third and subsequent arguments, and if one of
them is stronger according to the coercibility rules,
that argument's collation is set as the result collation.
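An illustrative sketch only (character sets and the CONCAT() call are chosen for illustration, not taken from the original report): aggregation now keeps going past the first two arguments, so a later argument can determine the result character set/collation.
  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1,
                   b VARCHAR(10) CHARACTER SET latin1,
                   c VARCHAR(10) CHARACTER SET utf8);
  SELECT CHARSET(CONCAT(a, b, c)) FROM t1;   -- the third argument wins
  DROP TABLE t1;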
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
an option.
mysqldump tries to re-insert the data into the federated
table, which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated in the bug page, this can be considered similar
to the MERGE engine with its "view only" nature.
Fixed by adding the FEDERATED engine to the exception
list so that its data is ignored.
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. This does
not actually make any sense, as those variables are not
related. The input buffer size limit is now always set to 1
MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished same actions via SQL / use of the server.
Runs as before on *nix systems and now runs on Windows without Cygwin as well.
Since there is more than one duplicate value in the table, when adding the
unique index it is not deterministic which value will be reported as causing a
problem. Replace the reported value with '' so that it doesn't affect the
results.
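A mysqltest-style sketch of the masking described (table, column and the exact pattern are assumed, not taken from the actual test):
  --replace_regex /Duplicate entry '[^']*'/Duplicate entry ''/
  --error ER_DUP_ENTRY
  ALTER TABLE t1 ADD UNIQUE INDEX (a);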
--ignore-table option
mysqldump would correctly omit temporary tables for views, but would
incorrectly still emit all CREATE VIEW statements.
Backport a fix from 5.1, where we capture the names we want to emit
views for in one pass (the placeholder tables) and in the pass where
we actually emit the views, we don't emit a view if it wasn't in that
list.
The copy of the original arguments of an aggregate function was not
initialized until after fix_fields().
Sometimes (e.g. when there is an error processing the statement)
print() can be called with no corresponding fix_fields() call.
Fixed by adding a check that the Item is fixed before using the
arguments copy.
+ Fix for Bug#43114 wait_until_count_sessions too restrictive, random PB failures
+ Removal of a lot of other weaknesses found
+ modifications according to review
return no rows
The algorithm of determining the best key for loose index scan is doing a loop
over the available indexes and selects the one that has the best cost.
It retrieves the parameters of the current index into a set of variables.
If the cost of using the current index is lower than the best cost so far it
copies these variables into another set of variables that contain the
information for the best index so far.
After having checked all the indexes it uses these variables (outside of the
index loop) to create the table read plan object instance.
There was a single omission: the key_infix/key_infix_len variables were used
outside of the loop without being preserved in the loop for the best index
so far.
This caused these variables to get overwritten by the next index(es) checked.
Fixed by adding variables to hold the data for the current index, passing
the new variables to the function that assigns values to them and copying
the new variables into the existing ones when selecting a new current best
index.
To avoid further such problems moved the declarations of the variables used
to keep information about the current index inside the loop's compound
statement.
Started fix in 5.0 as the same issue is here.
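A hypothetical sketch of a query shape that exercises the key infix (schema assumed): an equality on a key part between the GROUP BY prefix and the MIN/MAX column makes the infix non-empty, so the loose index scan plan depends on the variables described above.
  CREATE TABLE t1 (a INT, b INT, c INT, KEY(a, b, c));
  EXPLAIN SELECT a, MIN(c), MAX(c) FROM t1 WHERE b = 2 GROUP BY a;
  DROP TABLE t1;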
Revised the queries used, given what appears to be the scope of this test, so that only the manipulated variables are selected.
Added tests for values that are / are not multiples of 1024 to test rounding / constraints.
This behavior is not currently documented (a docs bug has been opened).
Problem: storing "SELECT ... INTO @var ..." results in variables we used val_xxx()
methods which returned results of the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use val_xxx_result() counterparts to get proper results.
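A minimal sketch of the affected pattern (data chosen for illustration): with GROUP BY/HAVING the variable must receive the value belonging to the proper row of the group, which the val_xxx_result() counterparts now provide.
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
  SELECT MAX(b) INTO @m FROM t1 GROUP BY a HAVING a = 2;
  SELECT @m;
  DROP TABLE t1;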
There was a problem when a DELIMITER command was not the first
command on the line. In this case an extra line feed was added
to the glob buffer, and this caused subsequent attempts
to enter this delimiter to fail.
Fixed by not adding a newline to the glob buffer if the
command being added is a DELIMITER.
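A sketch of the offending client input (the statements themselves are arbitrary): the DELIMITER command appears after another command on the same line.
  select 1; delimiter //
  select 2//
  delimiter ;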
Both of our own implementations of rint(3) were inconsistent with the
most common behavior of rint() on those platforms that have it: round
to nearest, break ties by rounding to nearest even.
Fixed by leaving just one implementation of rint() in our source tree,
and changing its behavior to match the most common native
implementations on other platforms.
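A sketch of the observable effect (values chosen for illustration): for approximate (double) values, ties now round to the nearest even integer, matching native rint().
  SELECT ROUND(2.5e0), ROUND(3.5e0), ROUND(-2.5e0);
  -- expected with round-half-to-even: 2, 4, -2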
A signed integer format specifier forced the binlog header to print
server_id as negative if the unsigned value has the sign bit set.
Fixed by correcting the specifier to correspond to typeof(server_id) == ulong.
Moved the test case for the bug into a separate file (and restored the
original innodb_mysql test setup).
Used the new wait_show_condition test macro to avoid using sleep.
Replaced Unix calls with mysql-test-run's built-in functions / SQL manipulation where possible.
Replaced error codes with error names as well.
Disabled two tests on Windows due to more complex Unix command usage
See Bug#41307, Bug#41308