IN/BETWEEN predicates in sorting expressions.
Wrong results may occur when the select list contains an expression
with an IN/BETWEEN predicate that differs from a sorting expression
only by an additional NOT.
Added the method Item_func_opt_neg::eq to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, incorrect.
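A minimal sketch of the kind of query affected (table and data are
hypothetical):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2), (3);
    -- the select-list expression differs from the sorting expression
    -- only by NOT; before the fix the two could be treated as equal
    SELECT a NOT IN (1, 2) FROM t1 ORDER BY a IN (1, 2);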
The problem was that THD::db_access variable was not restored after
database switch in stored-routine-execution code.
The fix is to restore THD::db_access in this case.
Unfortunately, this fix requires additional changes, because
prepare_schema_table(), called at the parsing stage, checked privileges.
That was wrong according to our design, but the flaw hadn't struck so far
because it was masked. All privilege checks must be done at the execution
stage in order to be compatible with prepared statements and stored
routines. So, this patch also contains a fix for prepare_schema_table(),
which moves the checks to the execution phase.
LEFT JOIN
Fixed that in certain situations MATCH ... AGAINST returns false hits
for NULLs produced by LEFT JOIN when there is no fulltext index
available.
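A hedged repro sketch (tables and data are hypothetical; boolean mode
is used because it works without a fulltext index):

    CREATE TABLE t1 (id INT);
    CREATE TABLE t2 (id INT, txt VARCHAR(100));
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (1, 'word');
    -- t2.txt is NULL for the unmatched row and must not be a hit
    SELECT t1.id, MATCH (t2.txt) AGAINST ('word' IN BOOLEAN MODE) AS hit
    FROM t1 LEFT JOIN t2 ON t1.id = t2.id;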
- Change the check of the return value of 'SSL_CTX_set_cipher_list'
in order to handle 0 as an error when setting the cipher.
- Thanks to Dan Lukes for finding the problem!
conditions.
When allocating memory for KEY_FIELD/SARGABLE_PARAM structures the
function update_ref_and_keys did not take into account the fact that
a single row equality could be replaced by several simple equalities.
Fixed by adjusting the counter cond_count accordingly for each subquery
when substituting simple equalities for a row equality.
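For illustration, a row equality like the one below is internally
rewritten into several simple equalities (table is hypothetical):

    -- (a, b) = (1, 2) becomes a = 1 AND b = 2, producing more
    -- KEY_FIELD/SARGABLE_PARAM entries than a single predicate
    SELECT * FROM t1 WHERE (a, b) = (1, 2);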
Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls are
more than it makes sense to do in typical cases)
- Don't call sel_arg->test_use_count() if we've already allocated
more than MAX_SEL_ARGS elements. The test will succeed but will take
too much time for the test suite (and not provide much value).
NO_AUTO_VALUE_ON_ZERO mode.
In the NO_AUTO_VALUE_ON_ZERO mode the table->auto_increment_field_not_null
variable is used to indicate that a non-NULL value was specified by the user
for an auto_increment column. When an INSERT .. ON DUPLICATE KEY UPDATE
updates the auto_increment field, this variable is set to true and stays
unchanged for the next insert operation. This makes the next inserted row sometimes wrongly have
0 as the value of the auto_increment field.
Now the fill_record() function resets the table->auto_increment_field_not_null
variable before filling the record.
The table->auto_increment_field_not_null variable is also reset by the
open_table() function, in case we missed some auto_increment_field_not_null
handling bug.
Now the table->auto_increment_field_not_null is reset at the end of the
mysql_load() function.
Reset the table->auto_increment_field_not_null variable after each
write_row() call in the copy_data_between_tables() function.
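A hedged sketch of the failure scenario (schema and values are
hypothetical; the comments restate the description above):

    SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT UNIQUE);
    INSERT INTO t1 (v) VALUES (1);
    -- the update part touches the auto_increment column, setting
    -- auto_increment_field_not_null without resetting it afterwards
    INSERT INTO t1 (id, v) VALUES (1, 1) ON DUPLICATE KEY UPDATE id = id;
    -- before the fix, this row could sometimes wrongly get id = 0
    INSERT INTO t1 (v) VALUES (2);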
- Read the pid from the pidfile in order to be able to kill the real
process instead of the pseudo process. Most platforms will have the
same real_pid as pid.
- Kill using the real pid
ARCHIVE table
ARCHIVE table was truncated by REPAIR TABLE ... USE_FRM statement.
The table handler returned its file name extensions in a wrong order.
REPAIR TABLE believed it had to use the meta file to create a new table
from it.
With the fixed order, REPAIR TABLE now uses the data file to create
a new table. So REPAIR TABLE ... USE_FRM now works well with the
ARCHIVE engine.
This issue affects 5.0 only, since in 5.1 ARCHIVE engine stores meta
information and data in the same file.
- GRANT and REVOKE statements didn't have the "updating" flag set and
thus statements with a table specified would not replicate if
slave filtering rules were turned on.
For example "GRANT ... ON test.t1 TO ..." would not replicate.
mark the test as requiring that storage engine (if we need to do that)
Make --ndb and --with-ndbcluster an alias for
--mysqld=--default-storage-engine=ndbcluster
#27176: Assigning a string to an year column has unexpected results
#26359: Strings becoming truncated and converted to numbers under STRICT mode
Problems:
1. When storing a string to an integer field we didn't check
whether strntoull10rnd() returned the MY_ERRNO_EDOM error.
Fix: check for MY_ERRNO_EDOM.
2. When storing a string to a YEAR field we used the my_strntol() function.
Fix: use strntoull10rnd() instead.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
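A hedged illustration of the symptom (table and values are hypothetical):

    CREATE TABLE t1 (y YEAR, i INT);
    SET sql_mode = 'STRICT_ALL_TABLES';
    -- non-numeric strings assigned to YEAR/INT columns should raise an
    -- error in strict mode instead of being silently converted
    INSERT INTO t1 VALUES ('foo', 'bar');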
A table with the maximum number of key segments and a maximum-length key
name would have a corrupted .frm file, due to an incorrect calculation of
the complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
- Added PARAM::alloced_sel_args where we count the # of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and not do any more cloning
if we've already created MAX_SEL_ARGS SEL_ARG objects in cloning.
- Added comments about space complexity of SEL_ARG-graph
representation.
Problem: SOUNDEX returned an invalid string for international
characters in multi-byte character sets.
For example: for a Chinese/Japanese 3-byte long character
_utf8 0xE99885 it took only the very first byte 0xE9,
put it into the output string and then appended three
DIGIT ZERO characters, so the result was 0xE9303030 - which
is an invalid utf8 string.
Fix: make SOUNDEX() multi-byte aware, putting only complete
characters into the result, thus returning only valid strings.
This patch also makes SOUNDEX() compatible with UCS2.
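A hedged sketch of the repro, using the character described above:

    SET NAMES utf8;
    -- before the fix the result was the invalid utf8 string 0xE9303030,
    -- built from the first byte of the 3-byte character plus '000'
    SELECT HEX(SOUNDEX(_utf8 0xE99885));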
Geometry fields have a result type string and a
special subclass to cater for the differences
between them and the base class (just like
DATE/TIME).
When creating temporary tables for results of
functions that return results of type GEOMETRY
we must construct fields of the derived class
instead of the base class.
Fixed by creating a GEOMETRY field (Field_geom)
instead of a generic BLOB (Field_blob) in temp
tables for the results of GIS functions that
have GEOMETRY return type (Item_geometry_func).
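For illustration, a hedged example that materializes a GIS function
result in a new table:

    -- the result of a GEOMETRY-returning function must be stored in a
    -- Field_geom, not a generic Field_blob
    CREATE TABLE t2 SELECT POINT(1, 1) AS p;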
- Turn off verification of the peer if both ca_path and ca_file are null,
i.e. when only passing --ssl-key=<client_key> and --ssl-cert=<client_cert>
to the mysql utility programs.
The server will authenticate the client according to the GRANT tables,
but the client won't authenticate the server.
execution breaks replication.
When a stored routine is executed, we switch the current
database to the one in which the routine
was created. When the stored routine finishes,
we switch back to the original database.
The problem was that if the original database did not
exist (anymore) after routine execution, we raised an error.
The fix is to report a warning, and switch to the NULL database.
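A hedged sketch of the scenario (database and routine names are
hypothetical):

    USE db1;
    -- if db1 is dropped (e.g. from another connection) while the
    -- routine runs, the server now reports a warning and switches to
    -- the NULL database instead of raising an error on return
    CALL db2.p();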
If a set function with an outer reference s(outer_ref) cannot be aggregated
in the outer query against which the reference has been resolved, then MySQL
interprets s(outer_ref) in the same way as it would interpret s(const).
However, the standard requires throwing an error in this situation.
Added some code to support this requirement in ANSI mode.
Corrected another minor bug in Item_sum::check_sum_func.
- mysqldump executes a SHOW CREATE VIEW statement to generate the text
that it outputs. When the function name is retrieved, its database
name is unconditionally prepended. This change causes the function's
database name to be prepended only when it was used to define the
function.
When creating a temporary table the concise column type
of a string expression is decided based on its length:
- if its length is under 512 it is stored as either
varchar or char.
- otherwise it is stored as a BLOB.
There is a flag (convert_blob_length) to create_tmp_field
that, when > 0, forces creation of a varchar if the
max blob length is under convert_blob_length.
However it must be verified that convert_blob_length
(settable through a SQL option in some cases) is
under the maximum that can be stored in a varchar column.
While performing that check for expressions in
create_tmp_field_from_item the max length of the blob was
used instead. This causes blob columns to be created in the
heap temp table used by GROUP_CONCAT (where blobs must not
be created in the temp table because of the constant
convert_blob_length that is passed to create_tmp_field()).
And since these blob columns are not expected in that place,
we get wrong results.
Fixed by checking that the value of the flag variable is
within the limits that fit into a VARCHAR, instead of checking
the max length of the blob column.
- 1.84e+19 converted to unsigned bigint should be
18400000000000000000 < 18446744073709551615.
- The test will still fail on windows, and is extracted
into a new bug report.
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes
of this type unusable because they rejected many different
values as duplicates.
The problem was that multibyte character detection was tried
on the internal numeric column value. Many values were not
identified as characters. Their key value became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
Problem: GROUP BY on empty ucs2 strings crashed server.
Reason: sometimes mi_unique_hash() is executed with
ptr=null and length=0, which means "empty string".
The branch of code handling the UCS2 character set
was not safe against ptr=null and fell into an
endless loop even if length=0, because of pointer
arithmetic overflow.
Fix: add a special check for length=0 to avoid pointer arithmetic
overflow.
to 0 causes wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset(),
which is called when storing a NULL value into
the column.
The fix is to rewrite the MBR::overlaps() function: compute the dimension
of both arguments and the dimension of the intersection, then test that
all three dimensions are the same (e.g., all are polygons).
Add tests for all MBR* functions for various combinations of shapes, lines and points.
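A hedged example of the kind of call affected (WKT values are
illustrative):

    -- a polygon and a line on its boundary: the dimension of the
    -- intersection differs from that of the polygon, so this is
    -- not an overlap
    SELECT MBROverlaps(
        GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'),
        GeomFromText('LINESTRING(0 0, 0 2)'));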
thd->options' OPTION_STATUS_NO_TRANS_UPDATE bit was not restored at the end
of an SF() invocation where SF() modified a non-transactional table.
As a result of this artifact it was not possible to detect whether there
were any side effects when the top-level query ends.
If the top-level query table was not modified and the bit was lost, there
would be no binlogging.
Fixed by preserving the bit inside the thd->no_trans_update struct. The
struct aggregates two bool flags telling whether the current query and the
current transaction modified any non-transactional table.
The flags (stmt, all) are dropped at the end of the query and of the
transaction, respectively.
context was used as an argument of GROUP_CONCAT.
Ensured correct setting of the depended_from field in references
generated for set functions aggregated in outer selects.
A wrong value of this field resulted in wrong maps returned by
used_tables() for these references.
Made sure that a temporary table field is added for any set function
aggregated in outer context when creation of a temporary table is
needed to execute the inner subquery.
Apply the following InnoDB snapshots:
innodb-5.0-ss1319
innodb-5.0-ss1331
innodb-5.0-ss1333
innodb-5.0-ss1341
Fixes:
- Bug #21409: Incorrect result returned when in READ-COMMITTED with query_cache ON
At low transaction isolation levels we let each consistent read set
its own snapshot.
- Bug #23666: strange Innodb_row_lock_time_% values in show status; also millisecs wrong
On Windows ut_usectime returns secs and usecs relative to the UNIX
epoch (which is Jan, 1 1970).
- Bug #25494: LATEST DEADLOCK INFORMATION is not always cleared
lock_deadlock_recursive(): When the search depth or length is exceeded,
rewind lock_latest_err_file and display the two transactions at the
point of aborting the search.
- Bug #25927: Foreign key with ON DELETE SET NULL on NOT NULL can crash server
Prevent ALTER TABLE ... MODIFY ... NOT NULL on columns for which
there is a foreign key constraint ON ... SET NULL.
- Bug #26835: Repeatable corruption of utf8-enabled tables inside InnoDB
The bug could be reproduced as follows:
Define a table so that the first column of the clustered index is
a VARCHAR or a UTF-8 CHAR in a collation where sequences of bytes
of differing length are considered equivalent.
Insert and delete a record. Before the delete-marked record is
purged, insert another record whose first column is of different
length but equivalent to the first record. Under certain conditions,
the insertion can be incorrectly performed as update-in-place.
Likewise, an operation that could be done as update-in-place can
unnecessarily be performed as delete and insert, but that would not
cause corruption, merely degraded performance.
another user.
When the DEFINER clause isn't specified in the ALTER statement, it is loaded
from the view definition. If the definer differs from the current user, an
error is thrown because only a super-user can set other users as definers.
Now if the DEFINER clause is omitted in the ALTER VIEW statement, the
definer from the original view is used without a check.
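A hedged sketch (user and view names are hypothetical):

    -- view originally defined by another user
    CREATE DEFINER = 'other_user'@'localhost' VIEW v1 AS SELECT 1;
    -- no DEFINER clause given: the original definer is now kept
    -- without requiring the SUPER privilege
    ALTER VIEW v1 AS SELECT 2;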
The server starts any binlog dump from a Format_description_log_event.
This shifted all offset calculations in mysqlbinlog and made it
stop the dump earlier than --stop-position. Now mysqlbinlog
takes the Format_description_log_event into account.
Possible problems: a function call could be eliminated from the WHERE
clause and only be evaluated once; a function could be evaluated during
the table and item setup phase, which could cause side effects not to be
registered in the binlog.
Fixed by introducing Item_func_sp::used_tables() returning the correct
table_map constant.
in index search MySQL was not explicitly
suppressing warnings, so if the context
happened to enable warnings (e.g. INSERT ..
SELECT), the warnings resulting from converting
the data the key is compared to were
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
what it actually means (Monty approved the renaming)
- correcting description of transaction_alloc command-line options
(our manual is correct)
- fix for a failure of rpl_trigger.
The problem in this bug arises when we create temporary tables. When
temporary tables are created for unions, there is some
inference being carried out regarding the type of the column.
Whenever this column type is inferred to be REAL (i.e. FLOAT or
DOUBLE), MySQL will always try to maintain exact precision, and
if that is not possible (there are hardware limits, since FLOAT
and DOUBLE are stored as approximate values) will switch to
using approximate values. The problem here is that at this point
the information about number of significant digits is not
available. Furthermore, the number of significant digits should
be increased for the AVG function, however, this was not properly
handled. There are 4 parts to the problem:
#1: DOUBLE and FLOAT fields don't display their proper display
lengths in max_display_length(). This is hard-coded as 53 for
DOUBLE and 24 for FLOAT. Now changed to instead return the
field_length.
#2: Type holders for temporary tables do not preserve the
max_length of the Items from which they are created; it is
instead reverted to the 53 and 24 from above. This causes
*all* fields to get non-fixed significant digits.
#3: AVG function does not update max_length (display length)
when updating number of decimals.
#4: The function that switches to a non-fixed number of
significant digits should use DBL_DIG + 2 or FLT_DIG + 2 as
cut-off values (since fixed precision does not use the 'e'
notation).
Of these points, #1 is the controversial one, but this
change is preferred and has been cleared with Monty. The
function causes quite a few unit tests to blow up and they had
to be changed, but each one is annotated and motivated. We
frequently see the magical 53 and 24 give way to more relevant
numbers.
fix for cast( AS DATETIME) + 0 operation.
I just implemented the Item_datetime_typecast::val() method
as is usually done in other classes.
Should be fixed more radically in 5.0
of its argument happened to be a decimal expression returning
the NULL value.
The crash was due to the fact that the function in_decimal::set did
not take into account that val_decimal() could return 0 if
the decimal expression had been evaluated to NULL.
on a database.
The problem was that we required no fewer privileges on the base tables
than we have on the view.
The fix is to be more flexible and allow creating such a view (the
necessary privileges will be checked at runtime).
An INTO clause can be specified only for the last select of a UNION, and
it receives the result of the whole query. But it was wrongly allowed in
non-last selects of a UNION, which led to a confusing query result.
Now INTO is allowed only in the last select of a UNION.
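A hedged illustration (variable and table names are hypothetical):

    -- formerly accepted, with confusing results; now rejected
    SELECT a INTO @v FROM t1 UNION SELECT a FROM t2;
    -- correct: the INTO clause belongs to the last select only
    SELECT a FROM t1 UNION SELECT a INTO @v FROM t2;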
aggregated in outer context returned wrong results.
This happened only if the subquery did not contain any references
to outer fields.
As there were no references to outer fields, the subquery was
erroneously taken for a non-correlated one.
Now any set function aggregated in outer context makes the subquery
correlated.
Shift the ID values up into a range where they will not collide with those
which we use for real data, when we fill the system tables.
Will be merged up to 5.0 where it is needed for 5.0.38.
To correctly decide which predicates can be evaluated with a given table
the optimizer must know the exact set of tables that a predicate depends
on. If that mask is too wide (refers to non-existing tables), the optimizer
can erroneously skip a predicate.
One such case of a wrong table usage mask was the aggregate functions.
They have an all-1 mask (meaning they depend on all tables, including
non-existent ones).
Fixed by making a real used_tables mask for the aggregates. The mask is
constructed in the following way :
1. OR the table dependency masks of all the arguments of the aggregate.
2. If all the arguments of the function are from the local name resolution
context and it is evaluated in the same name resolution
context where it is referenced all the tables from that name resolution
context are OR-ed to the dependency mask. This is to denote that an
aggregate function depends on the number of rows it processes.
3. Handle correctly the case of an aggregate function optimization (such that
the aggregate function can be pre-calculated and made a constant).
Made sure that an aggregate function is never a constant (unless subject
to a specific optimization and pre-calculation).
One other flaw was revealed and fixed in the process : references were
not calling the recalculation method for used_tables of their targets.
Removed the wrong fix for bug#27006.
The bug was introduced by the fix for bug#19978 and was fixed by Monty on
2007/02/21.
trigger.test, trigger.result:
Corrected the test case for bug#27006.
Using a MEMORY table BTREE index for scanning for updatable rows
could lead to an infinite loop.
Every time a key was inserted into a btree index, the position
in the index scan was cleared. The search started from the
beginning and found the same key again.
Now we do not clear the position on key insert any more.
- Build sql files for netware from the mysql_system_tables*.sql files
- Fix comments about mysql_create_system_tables.sh
- Use mysql_install_db.sh to create system tables for mysql-test-run-shell
- Fix mysql-test-run.pl to also look in share/mysql for the mysql_system*.sql files
Changeset coded today by Magnus Svensson, just the application to 5.0.38 is by Joerg Bruehe.
Problem: to handle a situation when the size of an event on the master is greater than max_allowed_packet on the slave, we checked for the wrong constant (ER_NET_PACKET_TOO_LARGE instead of CR_NET_PACKET_TOO_LARGE).
Solution: test for the client "packet too large" error code instead of the server one in the slave I/O thread.
UPDATE if the row wasn't actually changed.
This bug was caused by the fix for bug#19978. It caused AFTER UPDATE
triggers not to fire if a row wasn't actually changed by the update part
of the INSERT .. ON DUPLICATE KEY UPDATE.
Now triggers are always fired if a row is touched by the INSERT ... ON
DUPLICATE KEY UPDATE.
- Stored procedures returning unsigned values return signed values if
the text protocol is used. The reason is that the stored procedure item
Item_func_sp wasn't initializing the member variables properly based
on the information contained in the associated result field.
- The patch is to initialize the member variables in the appropriate
order upon field-item association, in ::fix_fields.
- The field type of an Item_func_sp was hard-coded to MYSQL_TYPE_VARCHAR.
This is changed to return the type of the actual result field.
- Member function name sp_result_field was refactored to the more
appropriate init_result_field.
- Member function name find_and_check_access was refactored to
sp_check_access.
when index is used
When the table contained TEXT columns with empty contents
('', zero length, but not NULL) _and_ strings starting with
control characters like tabulator or newline, the empty values
were not found in a "records in range" estimate. Hence count(*)
missed these records.
The reason was a different set of search flags used for key
insert and key range estimation.
I decided to fix the set of flags used in range estimation.
Otherwise millions of databases around the world would require
a repair after an upgrade.
The consequence is that the manual must be fixed, which claims
that TEXT columns are compared with "end space padding". This
is true for CHAR/VARCHAR but wrong for TEXT. See also bug 21335.
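A hedged repro sketch (table, index and data are hypothetical):

    CREATE TABLE t1 (t TEXT, KEY (t(10)));
    INSERT INTO t1 VALUES (''), ('\t x'), ('\n y');
    -- before the fix the range estimate missed the empty values,
    -- so COUNT(*) could be wrong when the index was used
    SELECT COUNT(*) FROM t1 WHERE t = '';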
INSERT uses query_id to verify what fields are
mentioned in the fields list of the INSERT command.
However, the check for that was made after the
ON DUPLICATE KEY clause was processed. This caused all
the fields mentioned in ON DUPLICATE KEY to be
considered as mentioned in the fields list of
the INSERT.
Moved the check up, right after processing the
fields list.
touched but not actually changed.
The LAST_INSERT_ID() is reset to 0 if no rows were inserted or changed.
This is the case when an INSERT ... ON DUPLICATE KEY UPDATE updates a row
with the same values as the row contains.
Now the LAST_INSERT_ID() value is reset to 0 only if there were no rows
successfully inserted or touched.
The new 'touched' field is added to the COPY_INFO structure. It holds the
number of rows that were touched no matter whether they were actually
changed or not.
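A hedged sketch (table and values are hypothetical):

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT UNIQUE);
    INSERT INTO t1 (v) VALUES (1);
    -- the row is touched but not changed: the update assigns the
    -- value the row already contains
    INSERT INTO t1 (v) VALUES (1) ON DUPLICATE KEY UPDATE v = 1;
    -- before the fix this was reset to 0; now touched rows count
    SELECT LAST_INSERT_ID();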
Before this fix, the parser would accept illegal code in SQL exception
handlers that later caused the runtime to crash when executing the code,
due to memory violations in the exception handler stack.
The root cause of the problem is instructions within an exception handler
that jump to code located outside of the handler. This is illegal according
to the SQL 2003 standard, since labels located outside the handler are not
supposed to be visible (they are "out of scope"), so any instruction that
jumps to these labels, like ITERATE or LEAVE, should not parse.
The section of the standard that is relevant for this is :
SQL:2003 SQL/PSM (ISO/IEC 9075-4:2003)
section 13.1 <compound statement>,
syntax rule 4
<quote>
The scope of the <beginning label> is CS excluding every <SQL schema
statement> contained in CS and excluding every
<local handler declaration list> contained in CS. <beginning label> shall
not be equivalent to any other <beginning label>s within that scope.
</quote>
With this fix, the C++ class sp_pcontext, which represent the "parsing
context" tree (a.k.a symbol table) of a stored procedure, has been changed
as follows:
- constructors have been cleaned up, so that only building a root node for
the tree is public; building nodes inside a tree is not public.
- a new member, m_label_scope, indicates if a given syntactic context
belongs to a DECLARE HANDLER block,
- label resolution, in the method find_label(), has been changed to
implement the restriction of scope regarding labels used in a compound
statement.
The actions in the parser, when parsing the body of a SQL exception handler,
have been changed as follows:
- the implementation of an exception handler (DECLARE HANDLER) now creates
explicitly a new sp_pcontext, to isolate the code inside the handler from
the containing compound statement context.
- as a result, registering exception handlers occurs in the parent
context; see the rule sp_hcond_element
- the code in sp_hcond_list has been cleaned up, to avoid code duplication
In addition, the flags IN_SIMPLE_CASE and IN_HANDLER, declared in sp_head.h,
have been removed, since they are unused and broken by design: as seen with
Bug 19194 (Right recursion in parser for CASE causes excessive stack usage,
limitation), representing a stack in a single flag is not possible.
Tests in sp-error have been added to show that illegal constructs are now
rejected.
Tests in sp have been added for code coverage, to show that ITERATE or LEAVE
statements are legal when jumping to a label in scope, inside the body of
an exception handler.
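A hedged example of a construct that is now rejected at parse time
(labels and body are illustrative):

    CREATE PROCEDURE p()
    outer_loop: LOOP
      BEGIN
        DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
          -- illegal: outer_loop is out of scope inside the handler,
          -- so this LEAVE no longer parses
          LEAVE outer_loop;
        SELECT 1;
      END;
      LEAVE outer_loop;
    END LOOP;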
A different set of conditions was used to verify
the validity of index definitions over a GEOMETRY
column in ALTER TABLE and CREATE TABLE.
The difference was in how the validity of the
sub-key notion is checked.
Fixed by extending the CREATE TABLE condition to
support the cases allowed in ALTER TABLE.
Made SHOW CREATE TABLE not display spatial
indexes using the sub-key notion.
differences in tables
Certain merge tables were wrongly reported as having incorrect definition:
- Some fields that are 1 byte long (e.g. TINYINT, CHAR(1)) might
be internally cast (in certain cases) to a different type at the
storage engine layer. (affects 4.1 and up)
- If tables in a merge (and the MERGE table itself) had a short VARCHAR
column (less than 4 bytes) and at least one (but not all) tables were
ALTER'ed (even to an identical table: ALTER TABLE xxx ENGINE=yyy), table
definitions went out of sync. (affects 4.1 only)
This is fixed by relaxing the check for underlying table conformance and
setting the field type to FIELD_TYPE_STRING if a varchar is shorter than 4
when a table is created.
when the column is to be read from a derived table column which
was specified as a concatenation of string literals.
The bug happened because Item_string::append did not adjust the
value of Item_string::max_length. As a result, the temporary
table column defined to store the concatenation of literals was
not wide enough to hold the whole value.
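A hedged repro sketch ('abc' 'def' is SQL literal concatenation; names
are hypothetical):

    -- the derived table column must be wide enough for the full
    -- concatenated value 'abcdef'
    SELECT c FROM (SELECT 'abc' 'def' AS c) AS dt;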
after single-row table substitution could lead to a wrong result set.
The bug happened because the function Item_field::replace_equal_field
erroneously assumed that any field included in a multiple equality
with a constant had already been substituted for this constant.
This is not true for fields becoming constant after row substitutions
for constant tables.
When the SUBSTRING() function was used over a LONGTEXT field, the
max_length of the SUBSTRING() result was wrongly calculated and set to 0.
As the max_length parameter is used during tmp field creation, it limits
the length of the result field and leads to printing an empty string
instead of the correct result.
Now the Item_func_substr::fix_length_and_dec() function correctly calculates
the max_length parameter.
When rand() is called multiple times inside a stored procedure, the server does
not binlog the correct random seed values.
This patch corrects the problem by resetting rand_used= 0 in
THD::cleanup_after_query() allowing the system to save the random seeds if needed
for each command in a stored procedure body.
However, rand_used is not reset if executing in a stored function or
trigger, because these operations are binlogged by call and thus only the
calling statement needs to detect the call to rand() made by its
substatements. These substatements must not set rand_used to 0 because
that would remove the detection of rand() by the calling statement.
construct references invalid name.
Derived tables currently cannot use outer references.
Thus there is no outer context for them.
The 4.1 code takes this fact into account while the
Item_field::fix_outer_field code of 5.0 lost the check that blocks
any attempts to resolve names in outer context for derived tables.
incorrect key file for table
In certain cases it could happen that deleting a row could
corrupt an RTREE index.
According to Guttman's algorithm, page underflow is handled
by storing the page in a list for later re-insertion. The
keys from the stored pages have to be inserted into the
remaining pages of the same level of the tree. Hence the
level number is stored in the re-insertion list together
with the page.
In the MySQL RTree implementation the level counts from zero
at the root page, with increasing numbers for levels down the tree.
If during re-insertion of the keys the tree height grows, all
level numbers become invalid. The remaining keys will be
inserted at the wrong level.
The fix is to increment the level numbers stored in the
reinsert list after a split of the root block during reinsertion.
result.
For built-in functions like sqrt(), function names are hard-coded and can
be compared by pointer. But this isn't the case for user-defined stored
functions: names there are dynamic and should be compared as strings.
Now the Item_func::eq() function employs the my_strcasecmp() function to
compare user-defined stored function names.
away.
During the optimization stage the WHERE conditions can be changed or even
removed entirely if they are known for sure to be true or false. Thus they
aren't shown in EXPLAIN EXTENDED, which prints conditions after
optimization.
Now if all elements of an Item_cond were removed, this Item_cond is
substituted with an Item_int holding the int value of the Item_cond.
If there were conditions that were totally optimized away, the values of
the saved cond_value and having_value are printed instead.
DATE/DATETIME values are outside the currently supported
4 basic value types (INT, STRING, REAL and DECIMAL).
So expressions (not fields) of compile-time type DATE/DATETIME are
generally considered STRING values. This is not so
when they are compared: then they are compared as
INTEGER values.
But the rule for comparison as INTEGERS must be checked
explicitly each time a comparison is to be performed.
filesort is one such place. However, the check was not
done there, and hence expressions (not fields) of type
DATE/DATETIME were sorted by their string representation.
Fixed to compare them as INTEGER values in filesort.
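A hedged illustration (table is hypothetical):

    CREATE TABLE t1 (s VARCHAR(19));
    -- CAST(...) yields a DATETIME expression, not a field; filesort
    -- must compare such expressions as integers, not as strings
    SELECT s FROM t1 ORDER BY CAST(s AS DATETIME);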
Functions over sum functions weren't set up correctly for the ORDER BY
clause, which led to a wrong order of the result set.
The split_sum_func() function is now called for each ORDER BY item that
contains a sum function, to set it up correctly.
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
error)
Bug 25345 (Cursors from Functions)
This fix resolves a long-standing issue originally reported with bug 8407,
which affected the behavior of Stored Procedures, Stored Functions and
Triggers in many different ways, causing the symptoms reported by all the
bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved with sub statements.
Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers
- all the stored procedures
involved, and recursively inspecting these objects' definitions to find
more references to more objects, until the list of every object referenced
does not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the tables (possibly) used at once is to
prevent deadlocks.
One problem with this approach is that, if the execution path the code
really takes during runtime does not use a given table, and if the table is
missing, the server would not execute the statement.
This in particular has a major impact on triggers, since a missing table
referenced by an update/delete trigger would prevent an insert trigger
from running.
Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.
With this fix, the internal implementation of the pre-locking code has been
relaxed of some constraints, so that failure to open a table does not
necessarily prevent execution of a stored routine.
In particular, the pre-locking mechanism is now behaving as follows:
1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.
2) the next step, which is to open all the tables involved, only attempts
to open the tables added by the pre-locking code, but silently fails
without reporting any error or invoking any exception handler if the table
is not present. This is achieved by trapping internal errors with
Prelock_error_handler
3) the locking step only locks tables that were successfully opened.
4) when executing sub statements, the list of tables used by each statement
is evaluated as before. The tables needed by the sub statement are expected
to be already opened and locked. Statement referencing tables that were not
opened in step 2) will fail to find the table in the open list, and only at
this point will execution of the user code fail.
5) when a runtime exception is raised at 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables()
6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.
With all these changes, a side effect is that view security is impacted, in
two different ways.
First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap exceptions
during execution and still return a valid result, there is no way to
decide, when the view is created, whether a missing table really causes
the view to be invalid.
Secondly, testing for existence of tables is now done later during
execution. View security, which consists of trapping errors and returning
a generic ER_VIEW_INVALID (to prevent disclosing information), was only
implemented at very specific phases covering *opening* tables, but not
covering the runtime execution. Because of this existing limitation,
errors that were previously trapped and converted into ER_VIEW_INVALID are
not trapped, causing table names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.
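A hedged sketch of the behavior this enables (names are hypothetical;
1146 is ER_NO_SUCH_TABLE):

    CREATE PROCEDURE p()
    BEGIN
      DECLARE CONTINUE HANDLER FOR 1146
        SELECT 'missing table trapped';
      SELECT * FROM no_such_table;
    END;
    -- before the fix the pre-locking step failed and the handler never
    -- ran; now the error is raised at execution time and can be trapped
    CALL p();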
Using INSERT DELAYED on MERGE tables could lead to table
corruptions.
The manual lists a couple of storage engines that can be
used with INSERT DELAYED. MERGE is not in this list.
Nevertheless, an attempt to use it was not rejected.
This bug was not detected earlier as it can work under
special circumstances. Most notable is low concurrency.
To be safe, this patch rejects any attempt to use INSERT
DELAYED on MERGE tables.
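A hedged example (table names are hypothetical):

    CREATE TABLE t1 (a INT) ENGINE = MyISAM;
    CREATE TABLE m1 (a INT) ENGINE = MERGE UNION = (t1)
      INSERT_METHOD = LAST;
    -- previously accepted and potentially corrupting; now rejected
    INSERT DELAYED INTO m1 VALUES (1);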
- Add a test case that shows how the slave server hangs in "STOP SLAVE"
when run on MySQL version 5.0.33 compiled with OpenSSL.
Works fine with the latest version of MySQL since that problem
has been fixed by the patch for bug#24148. The fix has been noted in
the changelog for MySQL 5.0.36.
The flag alias_name_used was not set for the outer references
in subqueries. This resulted in any outer reference resolved against an
alias being replaced with the full field name when the frm
representation of a view with a subquery was generated.
If the subquery and the outer query referenced the same table in
their from lists, this replacement effectively changed the meaning
of the view and led to wrong results for selects from this view.
Modified several functions to ensure setting the right value of
the alias_name_used flag for outer references resolved against
aliases.
When the ORDER BY clause gets fixed it's allowed to search in the current
item_list in order to find aliased fields and expressions. This is ok for a
SELECT but wrong for an UPDATE statement. If the ORDER BY clause
contains a non-existing field that is mentioned in the UPDATE set list,
the server will crash due to the use of a non-existing (0x0) field.
Now, when an Item_field is being fixed, searching the item list for
aliased expressions and fields is allowed only for SELECTs.
Several problems here:
1. The conversion to double of a hex string const item
was not taking into account the unsigned flag.
2. IN was not behaving in the same way as comparisons
when performed over an INT/DATE/DATETIME/TIMESTAMP column
and a constant. The ordinary comparisons in that case
convert the constant to an INTEGER value and do int
comparisons. Fixed the IN to do the same.
3. IN was not taking into account the unsigned flag when
calculating <expr> IN (<int_const1>, <int_const2>, ...).
Extended the implementation of IN to store and process
the unsigned flag for its arguments.
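A hedged example of the second point (table and values are hypothetical):

    -- the string constants are now converted to INTEGER values, as in
    -- ordinary comparisons over a DATE column
    SELECT * FROM t1 WHERE date_col IN ('2007-04-25', '2007-04-26');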
If we compare two items A and B, with B being (a constant) of a
larger type, then A gets promoted to B's type for comparison if
it's a constant, function, or CAST() column, but B gets demoted
to A's type if A is a (not explicitly CAST()) column. This is
counter-intuitive and not mandated by the standard.
Disabled the optimisation where it would be lossy, so the field value
will properly get promoted and compared as a binary string (rather
than as integers).
to return NULL for non-NULL arguments.
This is not the case, as it can return NULL
for invalid hexadecimal strings.
Fixed by setting the maybe_null flag.
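A hedged illustration:

    -- 'GG' is not valid hexadecimal, so the result is NULL even though
    -- the argument is non-NULL; hence maybe_null must be set
    SELECT UNHEX('GG');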