- Starting time of a query sent by file bootstrapping wasn't initialized,
so the starting time defaulted to 0. This value was later used by the
NOW() item and translated to 1970-01-01 11:00:00.
- Marking the time with thd->set_time() before the call to
mysql_parse() resolves this issue.
UPDATE contains wrong data if the SELECT employs a temporary table.
If the UPDATE values of an INSERT .. SELECT .. ON DUPLICATE KEY UPDATE
statement contain fields from the SELECT part, and the SELECT employs a
temporary table, then those fields will contain wrong values because they
aren't redirected to fetch their data from the temporary table.
The solution is to add these fields to the SELECT's all_fields list,
to store pointers to those fields in the SELECT's ref_pointer_array, and
to access them via Item_ref objects.
The substitution for Item_ref objects is done in the new function called
Item_field::update_value_transformer(). It is called through the
item->transform() mechanism at the end of the select_insert::prepare()
function.
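A minimal sketch of the affected pattern (table and column names are
hypothetical): the GROUP BY forces the SELECT part through a temporary
table, and before this fix the reference to t2.c in the UPDATE clause
could read its value from the wrong place.

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  CREATE TABLE t2 (a INT, c INT);
  INSERT INTO t1 VALUES (1, 0);
  INSERT INTO t2 VALUES (1, 10), (1, 20);
  -- The GROUP BY makes the SELECT employ a temporary table;
  -- t2.c in the UPDATE clause must be read from that table.
  INSERT INTO t1
    SELECT t2.a, t2.c FROM t2 GROUP BY t2.a
    ON DUPLICATE KEY UPDATE b = t2.c;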
duplicate key entries on slave" (two concurrent connections doing
multi-row INSERT DELAYED to insert into an auto_increment column,
caused replication slave to stop with "duplicate key error" (and
binlog was wrong)), and BUG#26116 "If multi-row INSERT
DELAYED has errors, statement-based binlogging breaks" (the binlog
was not accounting for all rows inserted, or slave could stop).
The fix is: if (statement-based) binlogging is on, a multi-row
INSERT DELAYED is silently converted to a non-delayed INSERT.
Note: it is not possible to test BUG#25507 in 5.0 (requires mysqlslap),
so it is tested only in the changeset for 5.1. However, BUG#26116
is tested here, and the fix for BUG#25507 is the same code change.
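A hypothetical illustration of the conversion (table name invented): with
statement-based binary logging enabled, the multi-row delayed insert below
is now executed as a plain INSERT, so every inserted row is accounted for
in the binlog.

  CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
  -- Silently executed as a non-delayed INSERT when binlogging is on:
  INSERT DELAYED INTO t VALUES (NULL), (NULL), (NULL);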
were evaluated.
According to the new rules for string comparison, partial indexes on text
columns can be used in the same cases in which partial indexes on varchar
columns can be used.
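For illustration, a sketch under these new rules (names hypothetical): a
prefix index on a TEXT column can now serve the same comparisons a prefix
index on a VARCHAR column would.

  CREATE TABLE t (txt TEXT, KEY idx_txt (txt(10)));
  -- The partial index idx_txt can now be considered for this lookup:
  SELECT * FROM t WHERE txt = 'abcdef';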
- Implement the --secure-file-priv=<dir> option, which limits
"load_file", "LOAD DATA" and "SELECT .. INTO OUTFILE" to working
with files in the specified directory.
- Use the above option for mysqld in mysql-test-run.pl
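A sketch of the effect, assuming mysqld was started with
--secure-file-priv=/var/lib/mysql-files (the directory is hypothetical):

  -- Allowed: the file lies inside the secure directory.
  SELECT LOAD_FILE('/var/lib/mysql-files/data.txt');
  -- Rejected: the target lies outside the secure directory.
  SELECT 1 INTO OUTFILE '/tmp/out.txt';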
operations)
Before this change, the boolean predicates:
- X IS TRUE,
- X IS NOT TRUE,
- X IS FALSE,
- X IS NOT FALSE
were implemented by expanding the Item tree in the parser, by using a
construct like:
Item_func_if(Item_func_ifnull(X, <value>), <value>, <value>)
Each <value> was a constant integer, either 0 or 1.
A bug in the implementation of the function IF(a, b, c), in
Item_func_if::fix_length_and_dec(), would cause the following:
When the arguments b and c are both unsigned, the result type of the
function was signed, instead of unsigned.
When the result of the if function is signed, space for the sign could be
counted twice (in the max() expression for a signed argument, and in the
total), causing the member max_length to be too high.
An effect of this is that the final type of IF(x, int(1), int(1)) would be
int(2) instead of int(1).
With this fix, the problems found in Item_func_if::fix_length_and_dec()
have been fixed.
While it's semantically correct to represent 'X IS TRUE' with
Item_func_if(Item_func_ifnull(X, <value>), <value>, <value>),
there are, however, several problems with this construct.
a)
Building the parse tree involves:
- creating 5 Item instances (3 ints, 1 ifnull, 1 if),
- creating each Item calls my_pthread_getspecific_ptr() once in the operator
new(size), and a second time in the Item::Item() constructor, resulting
in a total of 10 calls to get the current thread.
Evaluating the expression involves evaluating up to 4 nodes at runtime.
This representation could be greatly simplified and improved.
b)
Transforming the parse tree internally with if(ifnull(...)) is fine as long
as this transformation is internal to the server implementation.
With views however, the result of the parse tree is later exposed by the
::print() functions, and stored as part of the view definition.
Doing this has long term consequences:
1)
The original semantic 'X IS TRUE' is lost, and replaced by the
if(ifnull(...)) expression. As a result, SHOW CREATE VIEW does not restore
the original code.
2)
Should a future version of MySQL implement the SQL BOOLEAN data type for
example, views created today using 'X IS NULL' can be exported using
mysqldump, and imported again. Such views would be converted correctly and
automatically to use a BOOLEAN column in the future version.
With 'X IS TRUE' and the current implementation, views using these
"boolean" predicates would not be converted during the export/import, and
would use integer columns instead.
The difference traces back to how SHOW CREATE VIEW preserves 'X IS NULL' but
does not preserve the 'X IS TRUE' semantic.
With this fix, the internal representation of the 'X IS TRUE' boolean
predicates has changed, so that:
- dedicated Item classes are created for each predicate,
- only 1 Item is created to represent 1 predicate,
- my_pthread_getspecific_ptr() is invoked 1 time instead of 10,
- SHOW CREATE VIEW preserves the original semantic, and prints 'X IS TRUE'.
Note that, because of the fix in Item_func_if, views created before this fix
will:
- correctly use an int(1) type instead of int(2) for boolean predicates,
- incorrectly print the if(ifnull(...), ...) expression in SHOW CREATE VIEW,
since the original semantic (X IS TRUE) has been lost,
- operate properly except for the syntax used in SHOW CREATE VIEW;
no action is needed.
Views created after this fix will operate correctly, and will preserve the
original code semantic in SHOW CREATE VIEW.
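A small sketch of the visible change (view and table names hypothetical):

  CREATE TABLE t (a INT);
  CREATE VIEW v AS SELECT a IS TRUE AS flag FROM t;
  -- SHOW CREATE VIEW now prints the predicate as "`a` is true"
  -- instead of the old if(ifnull(`a`,0),1,0) expansion.
  SHOW CREATE VIEW v;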
ENUMs weren't allowed to contain the character 0xff, a perfectly good
character in some locales. This was circumvented by mapping 0xff in ENUMs
to ',', thereby preventing actual commas from being used. Now if 0xff
makes an appearance, we find a character not used in the enum and use that
as a separator. If no such character exists, we throw an error.
Any solution would have broken some sort of existing behaviour. This
solution should serve both factions (those with 0xff and those with ','
in their enums), but
WILL REQUIRE A DUMP/RESTORE CYCLE FROM THOSE WITH 0xff IN THEIR ENUMS. :-/
That is, mysqldump with their current server, and restore when upgrading to one with
this patch.
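For illustration, a hypothetical enum exercising both camps; the separator
is now chosen dynamically instead of being hard-wired to ',':

  -- A value containing a comma and a value containing 0xff (here 'ÿ'
  -- in latin1) can now coexist in one ENUM definition:
  CREATE TABLE t (e ENUM('a,b', 'cÿd') CHARACTER SET latin1);
  INSERT INTO t VALUES ('a,b');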
The crash happens because the same I_S table is filled a second time in
the case of a subselect with ORDER BY. table->sort.io_cache, previously
allocated in create_sort_index(), is deleted during the second filling
(function get_schema_tables_result). There are two places where an
I_S table can be filled: JOIN::exec and create_sort_index().
To fix the bug, we check whether the table was already filled
in one of these places and, if so, skip processing it in the second.
The function make_unireg_sortorder ignored the fact that any
view field is represented by a 'ref' object.
This could lead to wrong results for queries containing
both GROUP BY and ORDER BY clauses.
A wrong order of statements in QUICK_GROUP_MIN_MAX_SELECT::reset
caused a crash when a query with DISTINCT was executed by a loose scan
for an InnoDB table that had been emptied.
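A minimal reproduction sketch (assumed shape of the failing query):

  CREATE TABLE t (a INT, b INT, KEY (a, b)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1), (1, 2), (2, 1);
  DELETE FROM t;
  -- A loose (group min/max) scan over the emptied table used to crash:
  SELECT DISTINCT a FROM t;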
present.
A view created with CREATE VIEW ... ORDER BY ... cannot be resolved with
the MERGE algorithm, even when no other part of the CREATE VIEW statement
would require the view to be resolved using the TEMPTABLE algorithm.
The check for presence of the ORDER BY clause in the underlying select is
removed from the st_lex::can_be_merged() function.
The ORDER BY list of the underlying select is appended to the ORDER BY list
of the embedding select.
Objects of the class Item_equal contain an auxiliary member
eval_item of the type cmp_item that is used only for direct
evaluation of multiple equalities. Currently a multiple equality
is evaluated directly only in the cases when the equality holds
at most for one row in the result set.
The compare collation of eval_item was determined incorrectly.
It could lead to returning incorrect results for some queries.
"update existingtable set anycolumn=nonexisting order by nonexisting" would crash
the server.
Though we would find the reference to a field, that doesn't mean we can then use
it to set some values. It could be a reference to another field. If it is NULL,
don't try to use it to set values in the Item_field and instead return an error.
Compared to the previous patch, this signals the error at its actual
location, rather than letting the subsequent dereference signal it.
"INSERT... ON DUPLICATE KEY UPDATE skips auto_increment values".
When an INSERT ... ON DUPLICATE KEY UPDATE on a table with
an auto_increment column inserted some autogenerated values and
also updated some rows, some autogenerated values were not used
(for example, even if 10 was the largest autoinc value in the table
at the start of the statement, 12 could be the first autogenerated
value inserted by the statement, instead of 11). One autogenerated
value was lost per updated row, exhausting the range of the
auto_increment column faster.
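A sketch of the symptom (names hypothetical):

  CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY,
                  k INT UNIQUE, v INT);
  INSERT INTO t (k, v) VALUES (1, 0);             -- id 1
  -- The first row is an update, the second a real insert; before the
  -- fix the updated row still consumed an auto-increment value, so
  -- the new row got id 3 instead of id 2.
  INSERT INTO t (k, v) VALUES (1, 1), (2, 1)
    ON DUPLICATE KEY UPDATE v = VALUES(v);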
Bug introduced by fix of BUG#20188; present since 5.0.24 and 5.1.12.
This bug breaks replication from a pre-5.0.24 master.
But the present bugfix, as it makes INSERT ON DUP KEY UPDATE
behave like pre-5.0.24, breaks replication from a [5.0.24,5.0.34]
master to a fixed (5.0.36) slave! To warn users against this when
they upgrade their slave, as agreed with the support team, we add
code for a fixed slave to detect that it is connected to a buggy
master in a situation (INSERT ON DUP KEY UPDATE into autoinc column)
likely to break replication, in which case it cannot replicate, so it
stops and prints a message to the slave's error log and to SHOW SLAVE
STATUS.
For 5.0.36->[5.0.24,5.0.34] replication we cannot warn, as the master
does not know the slave's version (but we have always recommended that
users keep the slave at least as new as the master).
As agreed with support, I'll also ask for an alert to be put into
the MySQL Network Monitoring and Advisory Service.
View check option clauses were ignored for updates of multi-table
views when the updates could not be performed on the fly and the rows
to update had to be put into temporary tables first.
with a column of the DATETIME type could return a wrong
result set if the WHERE clause included a BETWEEN condition
on the column.
Fixed the method Item_func_between::fix_length_and_dec,
where the aggregation type for BETWEEN predicates was calculated
incorrectly if the first argument was a view column of the
DATETIME type.
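A sketch of the query shape that was affected (names hypothetical):

  CREATE TABLE t (d DATETIME);
  CREATE VIEW v AS SELECT d FROM t;
  -- The aggregation type for BETWEEN is now computed correctly when
  -- the first argument is a view column of type DATETIME:
  SELECT * FROM v WHERE d BETWEEN '2007-01-01' AND '2007-12-31';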
Before this change, local variables in stored procedures, stored functions
or triggers, when declared with a type of bit(N), would not evaluate their
value properly.
The problem was that the data was incorrectly typed as a string,
causing for example bit b'1', implemented as a byte 0x01, to be interpreted
as a string starting with the character 0x01. This later would cause
implicit conversions to integers or booleans to fail.
The root cause of this problem was an incorrect translation between field
types, like bit(N), and internal types used when representing values in Item
objects.
Also, before this change, the function HEX() would sometimes print extra
"0" characters when invoked with bit(N) values.
With this fix, the type translation (sp_map_result_type, sp_map_item_type)
has been changed so that bit(N) fields are represented with integer values.
A consequence is that, for the function HEX(), when called with a stored
procedure local variable of type bit(N) as argument, HEX() is provided with an
integer instead of a string, and therefore does not print "0" padding.
A test case for Bug 12976 was present in the test suite, and has been
updated.
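A sketch of the corrected behaviour (routine name hypothetical; entered
with an appropriate DELIMITER in the mysql client):

  CREATE FUNCTION f() RETURNS INT
  BEGIN
    DECLARE b BIT(8) DEFAULT b'1';
    -- b is now carried as an integer, so the comparison below is true,
    -- and HEX(b) would print '1' with no "0" padding.
    RETURN b = 1;
  END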
INSERT ... ON DUPLICATE KEY UPDATE reports that a record was updated when
the duplicate key occurs even if the record wasn't actually changed
because the update values are the same as those in the record.
Now the compare_record() function is used to check whether the record was
changed, and the update of a record is reported only if the record differs
from the original one.
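An illustration of the corrected reporting (names hypothetical):

  CREATE TABLE t (a INT PRIMARY KEY, b INT);
  INSERT INTO t VALUES (1, 10);
  -- The duplicate row is set to its current values: compare_record()
  -- detects no change, so this is no longer reported as an update.
  INSERT INTO t VALUES (1, 10) ON DUPLICATE KEY UPDATE b = 10;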
- Small difference in output from 'X509_NAME_oneline' between OpenSSL and
yaSSL. OpenSSL uses an extension that allows the email address of the cert
holder.
- Imported patch for yaSSL "add email to DN output"
Ignoring error codes from type conversion allows default (wrong) values to
go unnoticed in the formation of index search conditions.
Fixed by correctly checking for conversion errors.
fails
The bug was introduced with the push of the fix for bug#20953: after
the error on view creation we never reset the error state, so some
valid statements would give the same error after that.
The solution is to properly reset the error state.
This performance degradation for UPDATEs could be observed in update
statements for which the search key cannot be converted to any valid
value of the type of the search column, as in the condition
int_fld=99999999999999999999999999, even though it is guaranteed
that no row has such a key value.
The bug could cause a sub-optimal execution plan to be chosen for
a single-table query if a unique index with many null keys was
defined for the table.
It happened because the code of the check_quick_keys function
assumed that any key may occur in a unique index
only once. Yet this is not true for keys with nulls, which may
have multiple occurrences in the index.
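For illustration (hypothetical table): a unique index may hold the same
NULL key many times, which the range check must account for.

  CREATE TABLE t (a INT, UNIQUE KEY (a));
  -- Several NULLs are permitted in a UNIQUE index:
  INSERT INTO t VALUES (NULL), (NULL), (NULL), (1);
  -- The optimizer no longer assumes this matches at most one row:
  SELECT * FROM t WHERE a IS NULL;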
Two problems here:
Problem 1:
While constructing the join columns list the optimizer does as follows:
1. Sets the join_using_fields/natural_join members of the right JOIN
operand.
2. Makes a "table reference" (TABLE_LIST) to parent the two tables.
3. Assigns the join_using_fields/is_natural_join of the wrapper table
using the join_using_fields/natural_join of the rightmost table.
4. Sets join_using_fields to NULL for the right JOIN operand.
5. Passes the parent table up to the same procedure on the upper
level.
Step 1 overrides the join_using_fields that are set for a nested
join wrapping table in step 4.
Fixed by introducing a designated variable, SELECT_LEX::prev_join_using,
to pass the data from step 1 to step 4 without destroying the wrapping
table data.
Problem 2:
The optimizer checks for ambiguous columns while transforming
NATURAL JOIN/JOIN USING to JOIN ON. While doing that there was no
distinction between columns that are used in the generated join
condition (where ambiguity can be checked) and the other columns
(where ambiguity can be checked only when resolving references
coming from outside the JOIN construct itself).
Fixed by allowing the non-USING columns to be present in multiple
copies in both sides of the join and moving the ambiguity check
to the place where unqualified references to the join columns are
resolved (find_field_in_natural_join()).
When a merge table is opened, compare the column and key definitions of
the underlying tables against the column and key definitions of the merge
table.
If any of the underlying tables has a different column/key definition,
refuse to open the merge table.
The optimizer takes columns away from GROUP BY/DISTINCT if they constitute
all the parts of a unique index.
However, if some of the columns can contain NULLs, this cannot be done
(because a UNIQUE index can have multiple rows with NULL values).
Fixed by not using UNIQUE indexes with nullable columns to remove
grouping columns from GROUP BY/DISTINCT.
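A minimal sketch (hypothetical table) of why the shortcut is invalid for
nullable columns:

  CREATE TABLE t (a INT, UNIQUE KEY (a));
  INSERT INTO t VALUES (NULL), (NULL);
  -- The index is unique yet holds two NULL rows, so DISTINCT must not
  -- be optimized away; the correct result is a single NULL row:
  SELECT DISTINCT a FROM t;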
Depending on the query, we use different data processing methods
and can lose some data in the case of double (and, in 4.1, decimal) fields.
The fix consists of two parts:
1. double comparison changed, now double a is equal to double b
if (a-b) is less than 5*0.1^(1 + max(a->decimals, b->decimals)).
For example, if a->decimals==1, b->decimals==2, a==b if (a-b)<0.005
2. if we use a temporary table, store double values there as is
to avoid any data conversion (rounding).
Checking for NULL before calling the val_xxx()
methods only detects arguments that are
known to be NULL at compile time.
Arguments that may or may not contain
NULLs (e.g. function calls and possibly others)
are not checked at all.
Fixed by first calling the val_xxx() method and
then checking for null in SEC_TO_TIME().
In addition, QUARTER() was not returning 0 (as all the
val_int() functions do when processing a NULL value).
Before this fix, an IN predicate of the form "IN (( subselect ))", with two
pairs of parentheses, would be evaluated as a single-row subselect: if the
subselect returned more than 1 row, the statement would fail.
The SQL:2003 standard defines a special exception in the specification,
and mandates that this particular form of IN predicate shall be equivalent
to "IN ( subselect )", which involves a table subquery and works with more
than 1 row.
This fix implements "IN (( subselect ))", "IN ((( subselect )))", etc.
as per the SQL:2003 requirement.
All the details related to the implementation of this change have been
commented in the code, and the relevant sections of the SQL:2003 spec
are given for reference, so they are not repeated here.
Having access to the spec is a requirement to review in depth this patch.
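A short illustration of the accepted form (tables hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t2 VALUES (1), (2);
  -- Treated as a table subquery per SQL:2003, so the two rows returned
  -- by the subselect no longer cause an error:
  SELECT * FROM t1 WHERE a IN ((SELECT a FROM t2));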
Objects of the classes Item_func_is_not_null_test and Item_func_trig_cond
must be transparent to the method Item::split_sum_func2, as these classes
are pure helpers. This means that Item::split_sum_func2 should
treat those objects as pure wrappers.
The bug report has demonstrated the following two problems.
1. If an ORDER/GROUP BY list includes a constant expression that is
optimized away and, at the same time, contains single-row
subselects that return more than one row, no error is reported.
Strictly speaking, the standard allows ignoring the error in this case.
Nevertheless, a corresponding fatal error is now reported in this case.
2. If a query requires sorting by expressions containing single-row
subselects that, however, return more than one row, then the execution
of the query may cause a server crash.
To fix this some code has been added that blocks execution of a subselect
item in case of a fatal error in the method Item_subselect::exec.
Problem: "Data truncated" warning was incorrectly generated
when storing a Japanese character encoded in utf8
into a cp932 column.
Reason: An incorrect warning condition
compared the original length of the character in bytes
(which is 3 in utf8) to the converted length of the
character in bytes (which is 2 in cp932).
Fix: use "how many bytes were scanned from input" instead
of "how many bytes were put to the column" in the condition.
on duplicate key".
INSERT ... SELECT ... ON DUPLICATE KEY UPDATE which was used in
stored routine or as prepared statement and which in its ON DUPLICATE
KEY clause erroneously tried to assign a value to a column mentioned only
in its SELECT part was properly emitting an error on the first execution
but succeeded on the second and following executions.
Code which is responsible for name resolution of fields mentioned in
UPDATE clause (e.g. see select_insert::prepare()) modifies table list
and Name_resolution_context used in this process. It uses
Name_resolution_context_state::save_state/restore_state() to revert
these modifications. Unfortunately those two methods failed to revert
properly modifications to TABLE_LIST::next_name_resolution_table
and this broke name resolution process for successive executions.
This patch fixes Name_resolution_context_state::save_state/restore_state()
in such way that it properly handles TABLE_LIST::next_name_resolution_table.
When inserting into a join-based view, the update fields from the ON
DUPLICATE KEY UPDATE clause were not checked to be from the table being
inserted into and were silently ignored.
The new check_view_single_update() function is added to check that the
insert/update fields all belong to the same single table of the view.
We use the INT_RESULT type if all arguments are of type INT for the 'if',
'case' and 'coalesce' functions, regardless of the arguments' unsigned
flag, so sometimes we can exceed the INT bounds.
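For instance (a sketch): all arguments below are integers, but the value
fits only in an unsigned BIGINT; the result must not be clipped to the
signed range.

  SELECT IF(1, 18446744073709551600, 1);
  SELECT COALESCE(18446744073709551600);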
tables' lock."
Execution of ALTER TABLE ... ENABLE KEYS on a table (which can take a
rather long time) prevented concurrent execution of all statements using
tables.
The problem was caused by the fact that we were holding the LOCK_open
mutex for the whole duration of this statement, and particularly during
the call to handler::enable_indexes(). This behavior was introduced as
part of the fix for bug 14262 "SP: DROP PROCEDURE|VIEW (maybe more) write
to binlog too late (race cond)".
The patch simply restores the old behavior. Note that we can safely do
this, as this operation takes an exclusive lock (similar to a name-lock)
which blocks both DML and DDL on the table being altered.
It also introduces mysql-test/include/wait_show_pattern.inc helper script
which is used to make test-case for this bug robust enough.
After the fix for bug#21798, JOIN stores the pointer to the buffer for
sorting fields. It is used both when sorting for grouping and when sorting
for ordering. If the ORDER BY clause has more elements than the GROUP BY
clause, a memory overrun occurs.
Now the length of the ORDER BY list is always passed to the
make_unireg_sortorder() function, and it allocates a buffer big enough
for the bigger list.
UNION over correlated and uncorrelated SELECTS.
In such subqueries each uncorrelated SELECT should be considered
uncacheable. Otherwise join_free is called for it, and in many cases
this causes problems.
WL#3681 (ALTER TABLE ORDER BY)
Before this fix, the ALTER TABLE statement implemented an ORDER BY option
with the following characteristics:
1) The ORDER BY clause accepts a list of criteria, with optional ASC or
DESC keywords.
2) Each criterion can be a general expression, involving operators,
native functions, stored functions, user defined functions, subselects ...
With this fix:
1) has been left unchanged, since it's a de-facto existing feature
that was already present in the code base and partially covered in the
test suite. Code coverage for ASC and DESC was missing and has been
improved.
2) has been changed to limit the kinds of criteria that are permissible:
now only a column name is valid.
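For illustration (table and column names hypothetical):

  -- Still valid: a list of column names with optional ASC/DESC.
  ALTER TABLE t ORDER BY a DESC, b ASC;
  -- Now rejected: a general expression is no longer a valid criterion.
  ALTER TABLE t ORDER BY a + 1;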
crashes server
A check for a null value is reliable only after calling one of the
val_xxx() methods. If no val_xxx() method is called,
the null_value flag will be set only for certain types of NULL
values (such as SQL constant NULLs).
This caused a crash while trying to dereference a NULL pointer
that is returned by val_str() for NULL values.
Fixed by swapping the order of the val_xxx() call and the null_value check.
The problem was that if a prepared statement accessed a view, the
access to the tables listed in the query after that view was done in
the security context of the view.
The bug was in the assigning of the security context to the tables
belonging to a view: we traversed the list of all query tables
instead. It didn't show up in the normal (non-prepared) statements
because of the different order of the steps of checking privileges
and descending into a view for normal and prepared statements.
The solution is to traverse the list and stop once the last table
belonging to the view has been processed.
when they contain the '!' operator.
Added an implementation for the method Item_func_not::print.
The method encloses any NOT expression in extra parentheses to avoid
incorrect stored representations of views that use the '!' operator.
Without this change, when a view was created containing
the expression !0*5, its stored representation contained not this
expression but rather the expression not(0)*5.
The operator '!' has a higher precedence than '*', while NOT has
a lower precedence than '*'. That's why the expression !0*5
is interpreted as (!0)*5, whereas the stored expression not(0)*5 is
re-interpreted as not((0)*5), unless sql_mode is set to
HIGH_NOT_PRECEDENCE.
Now we translate !0*5 into (not(0))*5.
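A sketch of the corrected output (view name hypothetical):

  CREATE VIEW v AS SELECT !0*5 AS x;
  -- The stored representation now reads (not(0))*5, so the view keeps
  -- evaluating x to 5 when re-parsed.
  SELECT * FROM v;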
The optimizer needs to evaluate whether predicates are better
evaluated using an index. IN is one such predicate.
To qualify, an IN predicate must involve a field of the index
on the left and constant arguments on the right.
However, whether an expression is a constant can be determined only
by knowing the preceding tables in the join order.
Assuming that only IN predicates whose right-side expressions are
constant for the whole query qualify limits the scope of
possible optimizations of the IN predicate (more specifically, it
doesn't allow the "Range checked for each record" optimization for
such an IN predicate).
Fixed by not pre-determining the optimizability of the IN predicate
in the case when the right-side IN operands are not all SQL constant
expressions.
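A sketch of a query shape that now benefits (tables hypothetical): the
right-side operands are constant only with respect to each row of t1, so
t2 can use the "Range checked for each record" strategy.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, KEY (a));
  EXPLAIN SELECT * FROM t1, t2 WHERE t2.a IN (t1.a, t1.b);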
of untouched rows in full table scans".
SELECT ... FOR UPDATE/LOCK IN SHARE MODE statements as well as
UPDATE/DELETE statements which were executed using full table
scan were not releasing locks on rows which didn't satisfy
WHERE condition.
This bug surfaced in 5.0 and affected NDB tables. (InnoDB tables
intentionally don't support such unlocking in default mode).
This problem occurred because the code implementing joins didn't call
handler::unlock_row() for rows which didn't satisfy the part of the
condition attached to this particular table/level of the nested loop.
We solve the problem by adding this call.
Note that we already had this call in place in 4.1, but it was lost
(actually, not quite correctly placed) when we introduced nested
joins.
Also note that additional QA should be requested once this patch is
pushed, as the interaction between handler::unlock_row() and many recent
MySQL features such as subqueries, unions and views is not tested enough.