result.
For built-in functions like sqrt() the function names are hard-coded and can be
compared by pointer. But this isn't the case for user-defined stored
functions: their names are dynamic and must be compared as strings.
Now the Item_func::eq() function employs my_strcasecmp() to compare
the names of user-defined stored functions.
away.
During the optimization stage WHERE conditions can be changed or even
removed altogether if they are known for certain to be true or false. Thus they
aren't shown in EXPLAIN EXTENDED, which prints conditions after optimization.
Now if all elements of an Item_cond have been removed, the Item_cond is replaced
with an Item_int holding the int value of the Item_cond.
If conditions were optimized away entirely, the values of the
saved cond_value and having_value are printed instead.
DATE/DATETIME values fall outside the four currently supported
basic value types (INT, STRING, REAL and DECIMAL).
So expressions (not fields) of type DATE/DATETIME are
generally treated as STRING values. This is not so
when they are compared: then they are compared as
INTEGER values.
But the rule for comparison as INTEGERs must be checked
explicitly each time a comparison is performed.
filesort is one such place. However, the check was not
done there, and hence expressions (not fields) of type
DATE/DATETIME were sorted by their string representation.
Fixed to compare them as INTEGER values for filesort.
Functions over sum functions weren't set up correctly for the ORDER BY clause,
which led to a wrong order of the result set.
The split_sum_func() function is now called for each ORDER BY item that
contains a sum function, to set it up correctly.
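A minimal sketch of the affected pattern (table and column names are
placeholders): the ORDER BY item wraps a sum function inside another
function, so it must be set up via split_sum_func().

  SELECT a, SUM(b) FROM t1 GROUP BY a ORDER BY ABS(SUM(b));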
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
error)
Bug 25345 (Cursors from Functions)
This fix resolves a long-standing issue originally reported with bug 8407,
which affected the behavior of Stored Procedures, Stored Functions and Triggers
in many different ways, causing the symptoms reported by all the bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved with sub-statements.
Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers,
- all the stored procedures
involved, and recursively inspecting these objects' definitions to find more
references to more objects, until the list of every object referenced does
not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the tables (possibly) used at once is to
prevent deadlocks.
One problem with this approach is that, if the execution path the code
really takes during runtime does not use a given table, and if the table is
missing, the server would not execute the statement.
This in particular has a major impact on triggers, since a missing table
referenced by an update/delete trigger would prevent an insert trigger from
running.
Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.
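For illustration, a hedged sketch (routine and table names are placeholders)
of the kind of handler logic that never ran before this fix, because
pre-locking failed before the routine body was ever executed:

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
      SELECT 'missing table trapped' AS msg;
    -- reaching this statement used to be impossible if the table
    -- was missing; now the handler above gets a chance to run
    SELECT * FROM no_such_table;
  END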
With this fix, some constraints in the internal implementation of the
pre-locking code have been relaxed, so that failure to open a table does not
necessarily prevent execution of a stored routine.
In particular, the pre-locking mechanism is now behaving as follows:
1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.
2) the next step, which is to open all the tables involved, only attempts
to open the tables added by the pre-locking code, but silently fails, without
reporting any error or invoking any exception handler, if a table is not
present. This is achieved by trapping internal errors with
Prelock_error_handler.
3) the locking step only locks tables that were successfully opened.
4) when executing sub-statements, the list of tables used by each statement
is evaluated as before. The tables needed by the sub-statement are expected
to be already opened and locked. Statements referencing tables that were not
opened in step 2) will fail to find them in the open list, and only at
this point will execution of the user code fail.
5) when a runtime exception is raised at 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables()
6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.
With all these changes, a side effect is that view security is impacted, in
two different ways.
First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap exceptions
during execution and still return a valid result, there is no way to decide
when the view is created whether a missing table really causes the view to be
invalid.
Secondly, testing for the existence of tables is now done later, during
execution. View security, which consists of trapping errors and returning a
generic ER_VIEW_INVALID (to prevent disclosing information), was only
implemented at very specific phases covering *opening* tables, but not
covering runtime execution. Because of this existing limitation,
errors that were previously trapped and converted into ER_VIEW_INVALID are
no longer trapped, causing table names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.
Using INSERT DELAYED on MERGE tables could lead to table
corruption.
The manual lists the storage engines that can be used with
INSERT DELAYED; MERGE is not in this list. Yet an attempt
to use it anyway was not rejected.
This bug was not detected earlier because it can work under
special circumstances, most notably low concurrency.
To be safe, this patch rejects any attempt to use INSERT
DELAYED on MERGE tables.
- Add a test case that shows how the slave server hangs in "STOP SLAVE"
when run on MySQL version 5.0.33 compiled with OpenSSL.
It works fine with the latest version of MySQL, since that problem
has been fixed by the patch for bug#24148. The fix has been noted in
the changelog for MySQL 5.0.36
The alias_name_used flag was not set for the outer references
in subqueries. As a result, any outer reference resolved against
an alias was replaced with the full field name when the frm
representation of a view with a subquery was generated.
If the subquery and the outer query referenced the same table in
their FROM lists, this replacement effectively changed the meaning
of the view and led to wrong results for selects from this view.
Modified several functions to ensure that the right value of
the alias_name_used flag is set for outer references resolved against
aliases.
When the ORDER BY clause gets fixed, it is allowed to search the current
item_list in order to find aliased fields and expressions. This is OK for a
SELECT but wrong for an UPDATE statement. If the ORDER BY clause
contains a non-existing field that is mentioned in the UPDATE set list,
the server will crash due to dereferencing a non-existing (0x0) field.
Now, when an Item_field is fixed, searching the item list for
aliased expressions and fields is allowed only for SELECTs.
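A hedged sketch of the crashing pattern (names are placeholders), mirroring
the statement quoted in a related entry below: no_such_col does not exist in
t1 but is mentioned in the SET list.

  UPDATE t1 SET a = no_such_col ORDER BY no_such_col;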
Several problems here:
1. The conversion to double of a hex string const item
was not taking into account the unsigned flag.
2. IN was not behaving in the same way as comparisons
when performed over an INT/DATE/DATETIME/TIMESTAMP column
and a constant. Ordinary comparisons in that case
convert the constant to an INTEGER value and do int
comparisons. Fixed IN to do the same (see the sketch after this list).
3. IN was not taking into account the unsigned flag when
calculating <expr> IN (<int_const1>, <int_const2>, ...).
Extended the implementation of IN to store and process
the unsigned flag for its arguments.
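A sketch of point 2 (table and column names are placeholders): the ordinary
comparison below converts the constant to an INTEGER value, and with this fix
the IN predicate performs the same conversion.

  SELECT * FROM t1 WHERE date_col =  '2007-01-01';
  SELECT * FROM t1 WHERE date_col IN ('2007-01-01', '2007-01-02');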
If we compare two items A and B, with B being (a constant) of a
larger type, then A gets promoted to B's type for comparison if
it's a constant, function, or CAST() column, but B gets demoted
to A's type if A is a (not explicitly CAST()) column. This is
counter-intuitive and not mandated by the standard.
The optimisation is now disabled where it would be lossy, so the field value
will properly get promoted and compared as a binary string (rather
than as integers).
to return NULL for non-NULL arguments.
This is not the case as it can return NULL
for invalid hexadecimal strings.
Fixed by setting the maybe_null flag.
results)
Before this fix, the function BENCHMARK() would fail to evaluate expressions
like "(select avg(a) from t1)" in debug builds (with an assert),
or would report a time of zero in non-debug builds.
The root cause is that evaluation of DECIMAL_RESULT expressions was not
supported in Item_func_benchmark::val_int().
This has been fixed by this change.
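Based on the expression quoted above, a call of this shape (table and column
names are placeholders) used to hit the assert or report zero time, and is now
evaluated correctly:

  SELECT BENCHMARK(1000, (SELECT AVG(a) FROM t1));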
INSERT DELAYED inserted garbage for BIT columns.
When the delayed thread cloned the TABLE object, it didn't adjust bit_ptr
to the newly created record (though it correctly adjusted ptr and null_ptr).
This is fixed by correctly adjusting bit_ptr when performing a clone.
With this fix BIT values are stored correctly by INSERT DELAYED.
Bug#15126 character_set_database is not replicated (LOAD DATA INFILE needs it)
Positions of some binlog events were changed because of
additional logging of @@collation_database.
This patch fixes problem that LOAD DATA could use different
character sets when loading files on master and on slave sides:
- Adding replication of thd->variables.collation_database
- Adding optional character set clause into LOAD DATA
Note: the second way, with an explicit CHARACTER SET clause,
should be the recommended way to load data using an alternative
character set.
The old way, using "SET @@character_set_database=xxx", should be
gradually deprecated.
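A sketch of the recommended form (file, table and character set names are
placeholders):

  LOAD DATA INFILE 'data.txt' INTO TABLE t1 CHARACTER SET utf8;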
Problem: DROP TRIGGER was not properly handled in combination
with slave filters, which made replication stop.
Fix: load the table name before checking slave filters when
dropping a trigger.
- Use mysql_system_tables.sql to create MySQL system tables in
all places where we create them (mysql_install_db, mysql-test-run.pl
and mysql_fix_privilege_tables.sql)
Log messages from shell scripts were put into the var/log/<test id>.log
file. Now this file is used by mysql-test-run.pl, so the log
messages are moved to var/log/<test id>.script.log.
Triggers in SBR mode."
BUG#14914 "SP: Uses of session variables in routines are not always
replicated"
BUG#25167 "Dupl. usage of user-variables in trigger/function is not
replicated correctly"
User-defined variables used inside of stored functions/triggers in
statements which did not update tables directly were not replicated.
We also had problems with replication of user-defined variables which
were used in triggers (or stored functions called from table-updating
statements) more than once.
This patch addresses the first issue by enabling logging of all
references to user-defined variables in triggers/stored functions
and not only references from table-updating statements.
The second issue stemmed from the fact that for user-defined
variables used from triggers or stored functions called from
table-updating statements we were writing binlog events for each
reference instead of only one event for the first reference.
This problem is already solved for stored functions called from
non-updating statements with help of "event unioning" mechanism.
So the patch simply extends this mechanism to the case affected.
It also fixes a small problem in this mechanism which caused wrong
logging of references to user variables in cases when a non-updating
statement called several stored functions which used the same
variable and some of these function calls were omitted from the binlog
as they were not updating any tables.
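A hedged sketch of the first issue (function, table and variable names are
placeholders): a non-updating statement invokes a stored function that both
reads a user variable and updates a table, so the variable's value must be
written to the binlog for the slave.

  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    INSERT INTO t1 VALUES (@v);
    RETURN 0;
  END
  SET @v = 10;
  SELECT f1();  -- before this patch, @v was not logged here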
When handling DELETE ... FROM, if there is no
condition the statement is internally transformed into
TRUNCATE for more efficient execution by the
storage handler.
The check for validity of the optional ORDER BY
clause was done after the check for the above
optimization, and was not performed when the
optimization could be applied.
Moved the validity check for ORDER BY before
the optimization, so it is performed regardless of
whether the optimization applies.
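A hedged sketch (names are placeholders): with no WHERE clause, this
statement used to be transformed into TRUNCATE before the ORDER BY column
was validated, so the invalid column went unnoticed.

  DELETE FROM t1 ORDER BY no_such_column;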
"INSERT... ON DUPLICATE KEY UPDATE skips auto_increment values"
didn't make it into 5.0.36 and 5.1.16,
so we need to adjust the bug-detection-based-on-version-number code.
Because the rpl tree is at a too-old version, rpl_insert_id cannot pass,
so I am disabling it (as is already the case in 5.1-rpl for the same reason),
and the replication team will re-enable it when they merge 5.0 and 5.1 into
their trees (thus getting the right version number).
it doesn't select.
This bug was fixed along with bug #16861: User defined variable can
have a wrong value if a tmp table was used.
There the fix consisted of Item_func_set_user_var overloading the method
Item::save_in_field. Consider the query from the test case:
INSERT INTO foo( bar, baz )
SELECT
bar,
@newBaz := 1 + baz
FROM
foo
WHERE
quux <= 0.1;
Here the assignment expression '@newBaz := 1 + baz' is represented by an
Item_func_set_user_var. Its member method save_in_field, which writes the
value of this assignment into the result field, writes the val_xxx() value,
which is not updated at this point. In the fix introduced by the patch,
the save_in_field method reads the actual variable value instead.
See also comment for
ChangeSet@1.2368.1.3, 2007-01-09 23:24:56+03:00, evgen@moonbone.local +4 -0
and comment for
Item_func_set_user_var::save_in_field (item_func.cc)
created for sorting.
Any outer reference in a subquery was represented by an Item_field object.
If the outer select employs a temporary table, all such fields should be
replaced with fields from that temporary table in order to point to the
actual data. This replacement wasn't done and resulted in wrong
subquery evaluation and a wrong result of the whole query.
Now any outer field is represented by two objects: an Item_field placed in the
outer select and an Item_outer_ref in the subquery. The Item_field object is
processed as a normal field and the reference to it is saved in the
ref_pointer_array. Thus the Item_outer_ref always references the correct
field. The original field is replaced with a reference in the
Item_field::fix_outer_field() function.
A new function called fix_inner_refs() is added to fix fields referenced from
inner selects and to fix references (Item_ref objects) to these fields.
The new Item_outer_ref class is a descendant of the Item_direct_ref class.
It additionally stores a reference to the original field and is designed to
behave more like a field.
Having the maybe_null flag unset for geometry/spatial functions leads to
wrong Item_func_isnull::val_int() results.
Fix: set the maybe_null flag and add is_null() methods.
The cause of im_daemon_life_cycle.imtest random failures was the following
behaviour of some implementations of LinuxThreads: suppose a
process has several threads (in LinuxThreads, there is a separate process for
each thread). When the main process gets killed, the parent receives SIGCHLD
before all threads (child processes) die. In other words, the parent receives
SIGCHLD while its child is not completely dead.
In terms of IM, that means IM-angel receives SIGCHLD when IM-main is not dead
yet and still holds some resources. After receiving SIGCHLD, IM-angel restarts
IM-main, but IM-main fails to initialize, because the previous instance (copy)
of IM-main still holds the server socket (TCP port).
Another problem here was that IM-angel restarted IM-main only if it was killed
by a signal. If it exited with an error, IM-angel assumed an intended /
graceful shutdown and exited itself.
So, when the second instance of IM-main failed to initialize, IM-angel assumed
an intended shutdown and quit.
The fix is:
1. to change IM-angel so that it restarts IM-main if it exits with an error code;
2. to change IM-main so that it returns a proper exit code in case of failure.
Several problems fixed:
1. There was a "catch-all" context initialization in setup_tables()
that was causing the table that we insert into to be visible in the
SELECT part of an INSERT .. SELECT .. statement with no tables in
its FROM clause. It served as a safety net for contexts that various
parts of the code left under-initialized.
Fixed by removing the "catch-all" statement and initializing the
context in the parser.
2. An incomplete name resolution context when resolving the right-hand
values in the ON DUPLICATE KEY UPDATE ... part of an INSERT ... SELECT ...
caused columns from NATURAL JOIN/JOIN USING table references in the
FROM clause of the select to be unavailable (see the sketch after this list).
Fixed by establishing a proper name resolution context.
3. When setting up the special name resolution context for problem 2,
there was no check for cases where an aggregate function without a
GROUP BY effectively makes the column from the SELECT part of an
INSERT ... SELECT unavailable for ON DUPLICATE KEY UPDATE.
Fixed by checking for that condition when setting up the name
resolution context.
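A hedged sketch of problem 2 (table and column names are placeholders):
before the fix, columns of the NATURAL JOIN in the FROM clause were not
visible in the ON DUPLICATE KEY UPDATE part.

  INSERT INTO t1 (a)
    SELECT t2.a FROM t2 NATURAL JOIN t3
    ON DUPLICATE KEY UPDATE a = t2.a + 1;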
The problem happened because those tests were using "cp932" and "ucs2"
without checking whether these character sets are available. This fix moves
test parts so that character-set-specific parts are tested only if the
character sets are available:
- some parts were moved to "ctype_ucs.test" and "ctype_cp932.test"
- some parts were moved to the newly added tests "innodb-ucs2.test", "mysqlbinlog-cp932.test" and "sp-ucs2.test"
- The starting time of a query sent by file bootstrapping wasn't initialized
and defaulted to 0. This value was later used by the NOW() item
and translated to 1970-01-01 11:00:00.
- Marking the time with thd->set_time() before the call to
mysql_parse resolves this issue.
UPDATE contains wrong data if the SELECT employs a temporary table.
If the UPDATE values of an INSERT .. SELECT .. ON DUPLICATE KEY UPDATE
statement contain fields from the SELECT part and the select employs a
temporary table, then those fields will contain wrong values because they
aren't corrected to fetch data from the temporary table.
The solution is to add these fields to the select's all_fields list,
to store pointers to those fields in the select's ref_pointer_array and
to access them via Item_ref objects.
The substitution with Item_ref objects is done in the new function
Item_field::update_value_transformer(). It is called through the
item->transform() mechanism at the end of the select_insert::prepare()
function.
duplicate key entries on slave" (two concurrent connections doing
multi-row INSERT DELAYED to insert into an auto_increment column
caused the replication slave to stop with a "duplicate key error", and
the binlog was wrong), and BUG#26116 "If multi-row INSERT
DELAYED has errors, statement-based binlogging breaks" (the binlog
was not accounting for all rows inserted, or the slave could stop).
The fix is that: if (statement-based) binlogging is on, a multi-row
INSERT DELAYED is silently converted to a non-delayed INSERT.
Note: it is not possible to test BUG#25507 in 5.0 (requires mysqlslap),
so it is tested only in the changeset for 5.1. However, BUG#26116
is tested here, and the fix for BUG#25507 is the same code change.
were evaluated.
According to the new rules for string comparison, partial indexes on text
columns can be used in the same cases as partial indexes on varchar
columns.
- Implement the --secure-file-priv=<dir> option that limits
"load_file", "LOAD DATA" and "SELECT .. INTO OUTFILE" to work
only with files in the specified dir.
- Use the above option for mysqld in mysql-test-run.pl
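A hedged usage sketch (paths are placeholders): with the server started with
--secure-file-priv=/var/db-files, only files under that directory are
accessible.

  SELECT * FROM t1 INTO OUTFILE '/var/db-files/t1.txt'; -- allowed
  SELECT LOAD_FILE('/etc/passwd');  -- rejected, returns NULL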
operations)
Before this change, the boolean predicates:
- X IS TRUE,
- X IS NOT TRUE,
- X IS FALSE,
- X IS NOT FALSE
were implemented by expanding the Item tree in the parser, by using a
construct like:
Item_func_if(Item_func_ifnull(X, <value>), <value>, <value>)
Each <value> was a constant integer, either 0 or 1.
A bug in the implementation of the function IF(a, b, c), in
Item_func_if::fix_length_and_dec(), would cause the following:
when the arguments b and c are both unsigned, the result type of the
function was signed instead of unsigned.
When the result of the if function is signed, space for the sign could be
counted twice (in the max() expression for a signed argument, and in the
total), causing the member max_length to be too high.
An effect of this is that the final type of IF(x, int(1), int(1)) would be
int(2) instead of int(1).
With this fix, the problems found in Item_func_if::fix_length_and_dec()
have been fixed.
While it's semantically correct to represent 'X IS TRUE' with
Item_func_if(Item_func_ifnull(X, <value>), <value>, <value>),
there are however more problems with this construct.
a)
Building the parse tree involves:
- creating 5 Item instances (3 ints, 1 ifnull, 1 if),
- creating each Item calls my_pthread_getspecific_ptr() once in the operator
new(size), and a second time in the Item::Item() constructor, resulting
in a total of 10 calls to get the current thread.
Evaluating the expression involves evaluating up to 4 nodes at runtime.
This representation could be greatly simplified and improved.
b)
Transforming the parse tree internally with if(ifnull(...)) is fine as long
as this transformation is internal to the server implementation.
With views however, the result of the parse tree is later exposed by the
::print() functions, and stored as part of the view definition.
Doing this has long term consequences:
1)
The original semantic 'X IS TRUE' is lost, and replaced by the
if(ifnull(...)) expression. As a result, SHOW CREATE VIEW does not restore
the original code.
2)
Should a future version of MySQL implement the SQL BOOLEAN data type for
example, views created today using 'X IS NULL' can be exported using
mysqldump, and imported again. Such views would be converted correctly and
automatically to use a BOOLEAN column in the future version.
With 'X IS TRUE' and the current implementations, views using these
"boolean" predicates would not be converted during the export/import, and
would use integer columns instead.
The difference traces back to how SHOW CREATE VIEW preserves 'X IS NULL' but
does not preserve the 'X IS TRUE' semantic.
With this fix, the internal representation of 'X IS TRUE' boolean predicates
has changed, so that:
- dedicated Item classes are created for each predicate,
- only 1 Item is created to represent 1 predicate,
- my_pthread_getspecific_ptr() is invoked 1 time instead of 10,
- SHOW CREATE VIEW preserves the original semantic, and prints 'X IS TRUE'.
Note that, because of the fix in Item_func_if, views created before this fix
will:
- correctly use an int(1) type instead of int(2) for boolean predicates,
- incorrectly print the if(ifnull(...), ...) expression in SHOW CREATE VIEW,
since the original semantic (X IS TRUE) has been lost,
- except for the syntax used in SHOW CREATE VIEW, these views will operate
properly; no action is needed.
Views created after this fix will operate correctly, and will preserve the
original code semantic in SHOW CREATE VIEW.
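A sketch of the visible effect (names are placeholders):

  CREATE VIEW v1 AS SELECT a IS TRUE AS b FROM t1;
  SHOW CREATE VIEW v1;
  -- before the fix the definition printed the expanded
  -- if(ifnull(...)) form; now it preserves "a IS TRUE"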
ENUMs weren't allowed to contain the character 0xff, a perfectly good character in some locales.
This was circumvented by mapping 0xff in ENUMs to ',', thereby preventing actual commas from
being used. Now if 0xff makes an appearance, we find a character not used in the enum and
use that as the separator. If no such character exists, we throw an error.
Any solution would have broken some sort of existing behaviour. This solution should
serve both factions (those with 0xff and those with ',' in their enums), but
WILL REQUIRE A DUMP/RESTORE CYCLE FROM THOSE WITH 0xff IN THEIR ENUMS. :-/
That is, mysqldump with their current server, and restore when upgrading to one with
this patch.
The crash happens because a second filling of the same I_S table occurs in
the case of a subselect with ORDER BY. The table->sort.io_cache previously
allocated in create_sort_index() is deleted during the second filling
(in function get_schema_tables_result). There are two places where an
I_S table can be filled: JOIN::exec and create_sort_index().
To fix the bug, we check whether the table was already filled
in one of these places and skip processing it in the second.
The function make_unireg_sortorder ignored the fact that any
view field is represented by a 'ref' object.
This could lead to wrong results for queries containing
both GROUP BY and ORDER BY clauses.
A wrong order of statements in QUICK_GROUP_MIN_MAX_SELECT::reset
caused a crash when a query with DISTINCT was executed by a loose scan
for an InnoDB table that had been emptied.
present.
A view created with CREATE VIEW ... ORDER BY ... could not be resolved with
the MERGE algorithm, even when no other part of the CREATE VIEW statement
would require the view to be resolved using the TEMPTABLE algorithm.
The check for the presence of the ORDER BY clause in the underlying select is
removed from the st_lex::can_be_merged() function.
The ORDER BY list of the underlying select is appended to the ORDER BY list
of the embedding select.
Objects of the class Item_equal contain an auxiliary member
eval_item of the type cmp_item that is used only for direct
evaluation of multiple equalities. Currently a multiple equality
is evaluated directly only in cases when the equality holds
for at most one row in the result set.
The compare collation of eval_item was determined incorrectly,
which could lead to returning incorrect results for some queries.
"update existingtable set anycolumn=nonexisting order by nonexisting" would crash
the server.
Though we may find a reference to a field, that doesn't mean we can then use
it to set values: it could be a reference to another field. If it is NULL,
don't try to use it to set values in the Item_field; return an error instead.
Compared to the previous patch, this signals the error at the location where
it occurs, rather than letting the subsequent dereference signal it.
"INSERT... ON DUPLICATE KEY UPDATE skips auto_increment values".
When an INSERT ... ON DUPLICATE KEY UPDATE using
an autoincrement column inserted some autogenerated values and
also updated some rows, some autogenerated values were not used
(for example, even if 10 was the largest autoinc value in the table
at the start of the statement, 12 could be the first autogenerated
value inserted by the statement, instead of 11). One autogenerated
value was lost per updated row. This led to exhausting the range of the
autoincrement column faster.
Bug introduced by fix of BUG#20188; present since 5.0.24 and 5.1.12.
This bug breaks replication from a pre-5.0.24 master.
But the present bugfix, as it makes INSERT ON DUP KEY UPDATE
behave like pre-5.0.24, breaks replication from a [5.0.24, 5.0.34]
master to a fixed (5.0.36) slave! To warn users against this when
they upgrade their slave, as agreed with the support team, we add
code for a fixed slave to detect that it is connected to a buggy
master in a situation (INSERT ON DUP KEY UPDATE into an autoinc column)
likely to break replication; in that case it cannot replicate safely, so it
stops and prints a message to the slave's error log and to SHOW SLAVE
STATUS.
For 5.0.36 -> [5.0.24, 5.0.34] replication we cannot warn, as the master
does not know the slave's version (but we have always recommended having
the slave at least as new as the master).
As agreed with support, I'll also ask for an alert to be put into
the MySQL Network Monitoring and Advisory Service.
View check option clauses were ignored for updates of multi-table
views when the updates could not be performed on the fly and the rows
to update had to be put into temporary tables first.
with a column of the DATETIME type could return a wrong
result set if the WHERE clause included a BETWEEN condition
on the column.
Fixed the method Item_func_between::fix_length_and_dec,
where the aggregation type for BETWEEN predicates was calculated
incorrectly if the first argument was a view column of the
DATETIME type.
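A hedged sketch of the affected shape (names are placeholders), where dt is
a view column of DATETIME type:

  SELECT * FROM v1 WHERE dt BETWEEN '2007-01-01' AND '2007-01-31';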
Before this change, local variables in stored procedures / stored functions
or triggers, when declared with a type of bit(N), would not evaluate their
value properly.
The problem was that the data was incorrectly typed as a string,
causing for example bit b'1', implemented as a byte 0x01, to be interpreted
as a string starting with the character 0x01. This would later cause
implicit conversions to integers or booleans to fail.
The root cause of this problem was an incorrect translation between field
types, like bit(N), and internal types used when representing values in Item
objects.
Also, before this change, the function HEX() would sometimes print extra "0"
characters when invoked with bit(N) values.
With this fix, the type translation (sp_map_result_type, sp_map_item_type)
has been changed so that bit(N) fields are represented with integer values.
A consequence is that, for the function HEX(), when called with a stored
procedure local variable of type bit(N) as argument, HEX() is provided with an
integer instead of a string, and therefore does not print "0" padding.
A test case for Bug 12976 was present in the test suite, and has been updated.
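A hedged sketch of the affected pattern (names are placeholders):

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v BIT(8) DEFAULT b'1';
    -- before the fix, v was handled as the string 0x01, so the
    -- comparison failed and HEX(v) could print extra "0" characters
    SELECT v = 1, HEX(v);
  END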
updated.
INSERT ... ON DUPLICATE KEY UPDATE reported that a record was updated when
the duplicate key occurred, even if the record wasn't actually changed
because the update values were the same as those already in the record.
Now the compare_record() function is used to check whether the record was
changed, and an update of a record is reported only if the record differs
from the original one.
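A hedged sketch (names are placeholders): if the row with id = 1 already
contains a = 10, this statement no longer reports the row as updated.

  INSERT INTO t1 (id, a) VALUES (1, 10)
    ON DUPLICATE KEY UPDATE a = 10;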
- Small difference in output from 'X509_NAME_Oneline' between OpenSSL and yaSSL. OpenSSL uses
an extension that allows the email address of the cert holder.
- Imported patch for yaSSL "add email to DN output"
Ignoring error codes from type conversion allows default (wrong) values to
go unnoticed in the formation of index search conditions.
Fixed by correctly checking for conversion errors.
fails
The bug was introduced with the push of the fix for bug#20953: after
an error on view creation we never reset the error state, so some
valid statements would give the same error after that.
The solution is to properly reset the error state.
This performance degradation for UPDATEs could be observed in update
statements for which the search key cannot be converted to any valid
value of the type of the search column, as in the condition
int_fld=99999999999999999999999999, even though it can be guaranteed
that no row with such a key value exists.
The bug could cause choosing a sub-optimal execution plan for
a single-table query if a unique index with many null keys was
defined for the table.
It happened because the code of the check_quick_keys function
assumed that any key may occur in a unique index
only once. Yet this is not true for keys with nulls, which may
have multiple occurrences in the index.