result.
For built-in functions like sqrt(), function names are hard-coded and can be
compared by pointer. But this isn't the case for user-defined stored
functions - their names are dynamic and should be compared as strings.
Now the Item_func::eq() function employs my_strcasecmp() to compare
user-defined stored function names.
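One situation where this comparison matters is when the server checks whether
two expressions are the same item, for example when matching a select-list
expression against a GROUP BY expression (a hedged sketch; the function and
table names are hypothetical):

  CREATE FUNCTION f1(x INT) RETURNS INT RETURN x + 1;
  SELECT f1(a) FROM t1 GROUP BY f1(a);
  -- the two f1(a) items are compared via Item_func::eq(); with a
  -- case-insensitive string comparison of the names they match as intended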
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
error)
Bug 25345 (Cursors from Functions)
This fix resolves a long-standing issue originally reported as bug 8407,
which affects the behavior of Stored Procedures, Stored Functions and Triggers
in many different ways, causing the symptoms reported by all the bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved in sub-statements.
Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers
- all the stored procedures
involved, and recursively inspect these objects definition to find more
references to more objects, until the list of every object referenced does
not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the tables (possibly) used at once is to
prevent deadlocks.
One problem with this approach is that, if the execution path actually taken
at runtime does not use a given table, and that table happens to be missing,
the server would still refuse to execute the statement.
This in particular has a major impact on triggers, since a missing table
referenced by an update/delete trigger would prevent an insert trigger from running.
Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.
With this fix, the internal implementation of the pre-locking code has been
relaxed of some constraints, so that failure to open a table does not
necessarily prevent execution of a stored routine.
In particular, the pre-locking mechanism is now behaving as follows:
1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.
2) the next step, which is to open all the tables involved, only attempts
to open the tables added by the pre-locking code, and silently fails without
reporting any error or invoking any exception handler if a table is not
present. This is achieved by trapping internal errors with
Prelock_error_handler.
3) the locking step only locks tables that were successfully opened.
4) when executing sub-statements, the list of tables used by each statement
is evaluated as before. The tables needed by the sub-statement are expected
to be already opened and locked. Statements referencing tables that were not
opened in step 2) will fail to find the table in the open list, and only at
this point will execution of the user code fail.
5) when a runtime exception is raised in step 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables()
6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.
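A minimal sketch of the new behaviour, assuming a hypothetical procedure p1
and a table no_such_table that does not exist:

  CREATE TABLE t1 (a INT);
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02' SELECT 'missing table trapped';
    INSERT INTO no_such_table VALUES (1);  -- raises ER_NO_SUCH_TABLE at runtime
    INSERT INTO t1 VALUES (1);             -- execution continues here
  END//
  DELIMITER ;
  CALL p1();
  -- before this fix, the CALL failed during pre-locking and the handler never
  -- ran; now the missing table is detected only when the sub-statement
  -- executes (step 4) and the handler is given a chance to run (step 6)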
With all these changes, a side effect is that view security is impacted, in
two different ways.
First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap exceptions
during execution and still return a valid result, there is no way to decide,
when the view is created, whether a missing table really makes the view invalid.
Secondly, testing for the existence of tables is now done later during
execution. View security, which consists of trapping errors and returning a
generic ER_VIEW_INVALID (to prevent disclosing information), was only
implemented at very specific phases covering *opening* tables, but not
covering runtime execution. Because of this existing limitation,
errors that were previously trapped and converted into ER_VIEW_INVALID are
no longer trapped, causing table names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.
The problem happened because those tests were using "cp932" and "ucs2" without checking whether these character sets are available. This fix moves test parts so that character-set-specific parts are tested only if the character sets are available:
- some parts were moved to "ctype_ucs.test" and "ctype_cp932.test"
- some parts were moved to the newly added tests "innodb-ucs2.test", "mysqlbinlog-cp932.test" and "sp-ucs2.test"
Problem:
When creating a temporary field for a temporary table in create_tmp_field_from_field(), a resulting field is created as an exact copy of an original one (in Field::new_field()). However, Field_enum and Field_set contain a pointer (typelib) to memory allocated in the parent table's MEM_ROOT, which under some circumstances may be deallocated later by the time a temporary table is used.
Solution:
Override the new_field() method for Field_enum and Field_set and create a separate copy of the typelib structure in there.
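As an illustration of the kind of statement that copies an ENUM field into a
temporary table (table and column names are hypothetical):

  CREATE TABLE t1 (e ENUM('a','b','c'));
  -- the GROUP BY materializes e into a temporary-table field via
  -- create_tmp_field_from_field(); the copy now gets its own typelib
  SELECT e, COUNT(*) FROM t1 GROUP BY e;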
Post-commit issues fixed
* Test results for other tests fixed due to added error #s
* Memory allocation/free issues found with running with valgrind
* Fix to mysql-test-run shell script to run federated_server test (installs
mysql.servers table properly)
Bug#21025 (misleading error message when creating functions named 'x', or 'y')
Bug#22619 (Spaces considered harmful)
This change contains a fix to report warnings or errors, and multiple tests
cases.
Before this fix, name collisions between:
- Native functions
- User Defined Functions
- Stored Functions
were not systematically reported, leading to confusing behavior.
I) Native / User Defined Function
Before this fix, it was possible to create a UDF named "foo", with the same
name as a native function "foo", but it was impossible to invoke the UDF,
since the syntax "foo()" always refers to the native function.
After this fix, creating a UDF fails with an error if there is a name
collision with a native function.
II) Native / Stored Function
Before this fix, it was possible to create a SF named "db.foo", with the same
name as a native function "foo", but this was confusing since the syntax
"foo()" would refer to the native function. To refer to the Stored Function,
the user had to use the "db.foo()" syntax.
After this fix, creating a Stored Function reports a warning if there is a
name collision with a native function.
III) User Defined Function / Stored Function
Before this fix, creating a User Defined Function "foo" and a Stored Function
"db.foo" are mutually exclusive operations. Whenever the second function is
created, an error is reported. However, the test suite did not cover this
behavior.
After this fix, the behavior is unchanged, and is now covered by test cases.
Note that the code change in this patch depends on the fix for Bug 21114.
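A hedged sketch of cases I) and II); the function names and library name below
are illustrative only, with PI() standing in for the colliding native function:

  CREATE FUNCTION pi RETURNS REAL SONAME 'udf_example.so';
  -- case I: now rejected with an error, because PI() is a native function
  CREATE FUNCTION test.pi() RETURNS DOUBLE DETERMINISTIC RETURN 3.14;
  -- case II: now reports a warning; the stored function must be invoked as test.pi()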
The problem was that THD::row_count_func was zeroed too. It was zeroed
as a fix for bug 4905 "Stored procedure doesn't clear for "Rows affected"".
However, the proper solution is not to zero it, because THD::row_count_func has
already been set to -1 in mysql_execute_command() by a later fix, which obsoletes
the incorrect fix for bug 4905.
This patch reverts a change introduced by Bug 6951, which incorrectly
set thd->abort_on_warning for stored procedures.
As per internal discussions about the SQL_MODE=TRADITIONAL,
the correct behavior is to *not* abort on warnings even inside an INSERT/UPDATE
trigger.
Tests for Stored Procedures, Stored Functions, Triggers involving SQL_MODE
have been included or revised, to reflect the intended behavior.
(reposting approved patch, to work around source control issues, no review needed)
The syntax of the CALL statement, to invoke a stored procedure, has been
changed to make the use of parenthesis optional in the argument list.
With this change, "CALL p;" is equivalent to "CALL p();".
While the SQL spec does not explicitly mandate this syntax, supporting it
is needed for practical reasons, for integration with JDBC / ODBC connectors.
Also, warnings in the sql/sql_yacc.yy file, which were not reported by Bison 2.1
but are now reported by Bison 2.2, have been fixed.
The warnings found were:
bison -y -p MYSQL -d --debug --verbose sql_yacc.yy
sql_yacc.yy:653.9-18: warning: symbol UNLOCK_SYM redeclared
sql_yacc.yy:656.9-17: warning: symbol UNTIL_SYM redeclared
sql_yacc.yy:658.9-18: warning: symbol UPDATE_SYM redeclared
sql_yacc.yy:5169.11-5174.11: warning: unused value: $2
sql_yacc.yy:5208.11-5220.11: warning: unused value: $5
sql_yacc.yy:5221.11-5234.11: warning: unused value: $5
conflicts: 249 shift/reduce
"unused value: $2" correspond to the $$=$1 assignment in the 1st {} block
in table_ref -> join_table {} {},
which does not procude a result ($$) for the rule but an intermediate $2
value for the action instead.
"unused value: $5" are similar, with $$ assignments in {} actions blocks
which are not for the final reduce.
There was a possible stack overrun in an edge case that handles an invalid body of
a SP in mysql.proc. That should only be the case when mysql.proc has been changed
manually. However, due to bug 21513, it can be exploited without having access
to mysql.proc, only by being able to create a stored routine.
containing a select statement that uses an aggregating IN subquery.
Added a parameter to the function fix_prepare_information
to restore correctly the having clause for the second execution.
Saved the and/or structure of the having conditions at the proper moment,
before any calls to split_sum_func2 that could modify the having structure by
adding new Item_ref objects. (These additions are produced not with
the statement mem_root, but rather with the execution mem_root.)
The problem was that if after FLUSH TABLES WITH READ LOCK the user
issued DROP/ALTER PROCEDURE/FUNCTION the operation would fail (as
expected), but after UNLOCK TABLE any attempt to execute the same
operation would lead to the error 1305 "PROCEDURE/FUNCTION does not
exist", and an attempt to execute any stored function will also fail.
This happened because under FLUSH TABLES WITH READ LOCK we couldn't open
and lock mysql.proc table for update, and this fact was erroneously
remembered by setting mysql_proc_table_exists to false, so subsequent
statements believed that mysql.proc doesn't exist, and thus that there
are no functions and procedures in the database.
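The scenario, roughly, was (the procedure name is hypothetical):

  FLUSH TABLES WITH READ LOCK;
  DROP PROCEDURE p1;   -- fails as expected: mysql.proc cannot be opened for update
  UNLOCK TABLES;
  DROP PROCEDURE p1;   -- before the fix: ERROR 1305, PROCEDURE p1 does not exist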
As a solution, we remove mysql_proc_table_exists flag completely. The
reason is that this optimization didn't work most of the time anyway.
Even if open of mysql.proc failed for some reason when we were trying to
call a function or a procedure, we were setting mysql_proc_table_exists
back to true to force table reopen for the sake of producing the same
error message (the open can fail for a number of reasons). The solution
could have been to remember the reason why open failed, but that's a lot
of code for optimization of a rare case. Hence we simply remove this
optimization.
if join is used
For procedures with selects that use complicated joins with an ON expression,
re-execution could erroneously ignore this ON expression, giving an
incorrect result.
The problem was that optimized ON expression wasn't saved for
re-execution. The solution is to properly save it.
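A hedged sketch of an affected case (names are illustrative):

  CREATE PROCEDURE p1() SELECT t1.a, t2.b FROM t1 JOIN t2 ON t1.a = t2.a;
  CALL p1();   -- first execution: correct result
  CALL p1();   -- before the fix, re-execution could ignore the ON expression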
The following procedure was not possible if max_sp_recursion_depth was 0:
create procedure show_proc() show create procedure show_proc;
Actually there is no recursive call, but the limit was checked anyway.
Solved by temporarily increasing the thread's limit just before the fetch from the cache
and decreasing it after that.
User names (and host names) have a limit on length. The server code relies on these
limits when storing the names. The problem was that sometimes these limits
were not checked properly, which could lead to a buffer overflow.
The fix is to check the length of the user/host name in the parser and, if the string
is too long, throw an error.
create function func() returns char(10) binary ...
is no longer possible. This will be re-enabled when
bug 2676 "DECLARE can't have COLLATE clause in stored procedure"
is fixed.
Fix after 2nd review
The following procedure was not possible if max_sp_recursion_depth was 0:
create procedure show_proc() show create procedure show_proc;
Actually there is no recursive call, but the limit was checked anyway.
Solved by temporarily increasing the thread's limit just before the fetch from the cache
and decreasing it after that.
Before this fix,
- a runtime error in a statement in a stored procedure with no error handlers
was properly detected (as expected)
- a runtime error in a statement with an error handler inherited from a
non-local runtime context (i.e., proc a with a handler, calling proc b) was
properly detected (as expected)
- a runtime error in a statement with a *local* error handler was handled
as follows:
a) the statement would succeed, regardless of the error condition (bug),
b) the error handler would be called (as expected).
The root cause is that functions like my_message_sql would "forget" to set
the thread flag thd->net.report_error to 1, because of the check involving
sp_rcontext::found_handler_here().
Failure to set this flag would cause, later in the call stack,
in Item_func::fix_fields() at line 190, the code to return FALSE and consider
that executing the statement was successful.
With this fix :
- error handling code, that was duplicated in different places in the code,
is now implemented in sp_rcontext::handle_error(),
- handle_error() correctly sets thd->net.report_error when a handler is
present, regardless of the handler location (local, or in the call stack).
A test case, bug8153_subselect, has been written to demonstrate the change
of behavior before and after the fix.
Another test case, bug8153_function_a, has also been written.
This test has the same behavior before and after the fix.
This test has been written to demonstrate that the previous expected
result of procedure bug18787, was incorrect, since select no_such_function()
should fail and therefore not produce a result.
The incorrect result for bug18787 has the same root cause as Bug#8153,
and the expected result has been adjusted.
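The demonstrated change of behaviour corresponds roughly to the following
sketch (names are hypothetical):

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SELECT 'handled';
    SELECT no_such_function();
  END
  -- before the fix, the SELECT "succeeded" and produced a result in addition
  -- to invoking the handler; now the SELECT fails and only the handler runs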
Fix for BUG#16676: Database CHARSET not used for stored procedures
The problem in BUG#16211 is that CHARSET-clause of the return type for
stored functions is just ignored.
The problem in BUG#16676 is that if a character set is not explicitly
specified for an sp-variable, the server character set is used instead
of the database one.
The fix has two parts:
- always store CHARSET-clause of the return type along with the
type definition in mysql.proc.returns column. "Always" means that
CHARSET-clause is appended even if it has not been explicitly
specified in CREATE FUNCTION statement (this affects BUG#16211 only).
Storing CHARSET-clause if it is not specified is essential to avoid
changing character set if the database character set is altered in
the future.
NOTE: this change is not backward compatible with the previous releases.
- use database default character set if CHARSET-clause is not explicitly
specified (this affects both BUG#16211 and BUG#16676).
NOTE: this also breaks backward compatibility.
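A minimal sketch of the new behaviour, with hypothetical names:

  CREATE DATABASE d1 CHARACTER SET utf8;
  USE d1;
  CREATE FUNCTION f1() RETURNS CHAR(10) RETURN 'x';
  -- the return type is stored in mysql.proc.returns with an explicit
  -- CHARACTER SET utf8 clause, taken from the database default rather than
  -- the server character set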
When there is no index defined, filesort is used to sort the result of a
query. If there is a function in the select list and the result set should be
ordered by its value, then this function will be evaluated twice: the first time
to get the value of the sort key and the second time to send its value to the user.
This happens because filesort, when it sorts a table, remembers only the values of
its fields but not the values of functions.
All functions are affected. But taking into account that SP and UDF functions
can be both expensive and non-deterministic, a temporary table should be used
to store their results and then be sorted, to avoid evaluating the SP twice and to
get a correct result.
If an expression referenced in an ORDER clause contains a SP or UDF
function, force the use of a temporary table.
A new Item_processor function called func_type_checker_processor is added
to check whether the expression contains a function of a particular type.
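For example (names are illustrative):

  CREATE FUNCTION f1(x INT) RETURNS INT RETURN x;
  SELECT f1(a) FROM t1 ORDER BY f1(a);
  -- before: filesort evaluated f1() once for the sort key and once more when
  --         sending the row; now a temporary table stores the function result
  --         and that stored value is sorted and returned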
"real" table fails in JOINs".
This is a regression caused by the fix for Bug 18444.
This fix removed the assignment of empty_c_string to table->db performed
in add_table_to_list, as neither me nor anyone else knew what it was
there for. Now we know it and it's covered with tests: the only case
when a table database name can be empty is when the table is a derived
table. The fix puts the assignment back but makes it a bit more explicit.
Additionally, finally drop sp.result.orig which was checked in by mistake.
and Stored Procedure
The essence of the bug was that for every re-execution of stored
routine or prepared statement new items for character set conversions
were created, thus increasing the number of items and the time of their
processing, and creating a memory leak.
No test case is provided since the current test suite can't cover this type
of bug.
DESCRIBE returned the type BIGINT for a column of a view if the column
was specified by an expression over values of the type INT.
E.g. for the view defined as follows:
CREATE VIEW v1 AS SELECT COALESCE(f1,f2) FROM t1
DESCRIBE returned type BIGINT for the only column of the view if f1,f2 are
columns of the INT type.
At the same time DESCRIBE returned type INT for the only column of the table
defined by the statement:
CREATE TABLE t2 SELECT COALESCE(f1,f2) FROM t1.
This inconsistency was removed by the patch.
Now the code chooses between INT/BIGINT depending on the
precision of the aggregated column type.
Thus both DESCRIBE commands above return type INT for v1 and t2.
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and always make a deep copy
(strmake, strdup)
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- the table->db member of a TABLE_LIST object should always be initialized in the
parser or by the creator of the object.
For prepared statements it means that if the current database is changed
after a statement is prepared, the database that was current at prepare
remains active. This also means that you can not prepare a statement that
implicitly refers to the current database if the latter is not set.
This is not documented, and therefore needs documentation. This is NOT a
change in behavior for almost all SQL statements except:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1 --
until this patch t1 or t2 could be evaluated at the first execution of
prepared statement.
CURRENT_DATABASE() still works OK and is evaluated at every execution
of prepared statement.
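For example, with hypothetical databases db1 and db2:

  USE db1;
  PREPARE s FROM 'OPTIMIZE TABLE t1';
  USE db2;
  EXECUTE s;   -- with this patch, still operates on db1.t1, the database
               -- that was current at PREPARE time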
Note, that in stored routines this is not an issue as the default
database is the database of the stored procedure and "use" statement
is prohibited in stored routines.
This patch makes obsolete the use of check_db_used (it was not really used in the
old code either) and all other places that check table->db and assign it
from THD::db if it's NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place uses thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
(via strcmp/my_strcasecmp, whatever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
See also comments to individual files.
with PREPARE fails with weird error".
More generally, re-executing a stored procedure with a complex SP cursor query
could lead to a crash.
The cause of the problem was that SP cursor queries were not optimized
properly at first execution: their parse tree belongs to sp_instr_cpush,
not sp_instr_copen, and thus the tree was tagged "EXECUTED" when the
cursor was declared, not when it was opened. This led to loss of optimization
transformations performed at first execution, as sp_instr_copen saw that the
query is already "EXECUTED" and therefore either not ran first-execution
related blocks or wrongly rolled back the transformations caused by
first-execution code.
The fix is to update the state of the parsed tree only when the tree is
executed, as opposed to when the instruction containing the tree is executed.
Assignment of i->state is moved to reset_lex_and_exec_core.
Under row-based replication, DELETE FROM will now always be
replicated as individual row deletions, while TRUNCATE TABLE will
always be replicated as a statement.
garbles data if longer than 766 chars.
The problem is that when a stored routine returns BLOBs to the previous
caller, the BLOBs are shallow-copied (i.e. only pointers to the data are
copied). The fix is to also copy the data of the BLOBs.
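A minimal illustration (names are hypothetical):

  CREATE FUNCTION f1() RETURNS BLOB RETURN REPEAT('a', 1000);
  SELECT LENGTH(f1());
  -- before the fix, return values longer than 766 characters could come back
  -- garbled because only the pointer to the data was copied to the caller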
or implicitly uses stored function gives "Table not locked" error'
A CREATE TABLE ... SELECT ... statement which was explicitly or implicitly
(through a view) using a stored function gave a "Table not locked" error.
The actual bug resides in the current locking scheme of CREATE TABLE SELECT
code, which first opens and locks tables of the SELECT statement itself,
and then, having SELECT tables locked, creates the .FRM, opens the .FRM and
acquires lock on it. This scheme opens a possibility for a deadlock, which
was present and ignored since version 3.23 or earlier. This scheme also
conflicts with the invariant of the prelocking algorithm -- no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception for this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to the prelocked mode.
We can't supply a better fix in 5.0.
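For example (names are hypothetical):

  CREATE FUNCTION f1() RETURNS INT RETURN (SELECT COUNT(*) FROM t1);
  CREATE TABLE t2 SELECT f1() AS c;
  -- before the fix this failed with "Table not locked"; it now works, at the
  -- cost of allowing open-and-lock inside prelocked mode for this statement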
a misnamed function
... in the presence of a continue handler. The problem was that with a
handler, execution continued as if the function existed and had set a
useful return value (which it hadn't).
The fix is to set a null return value and do an error return when a function
wasn't found.
counter".
When TRUNCATE TABLE was called within a stored procedure, the
auto_increment counter was not reset to 0, even though a straight
TRUNCATE on this table did reset it.
This fix makes TRUNCATE in stored procedures be handled exactly
the same way as a straight TRUNCATE. We achieve this by rolling
back the fix for bug 8850, which is no longer needed since stored
procedures don't require prelocked mode anymore (and TRUNCATE is
not allowed in stored functions or triggers).
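For example (names are hypothetical):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY);
  CREATE PROCEDURE p1() TRUNCATE TABLE t1;
  INSERT INTO t1 VALUES (NULL), (NULL);
  CALL p1();
  INSERT INTO t1 VALUES (NULL);   -- id starts again from 1, as with a direct TRUNCATE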
fix_fields() was not called for "order by" variables if the type was a
"constant integer", and thus interpreted as a column index.
However, a local variable is an expression and should not be interpreted
as a column index. Instead it behaves just like when using a user variable
for instance (i.e. it will not affect the ordering).
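A sketch of the affected construct (names are hypothetical):

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE i INT DEFAULT 1;
    SELECT a, b FROM t1 ORDER BY i;
    -- i is treated as an expression (like a user variable), not as the
    -- column index 1, so it does not affect the ordering
  END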
produce wrong data
By default Item_func_sp::val_str() returns a string from its result_field
internal buffer. When grouping is present, Item_copy_string is used to
store the SP function result, but it doesn't additionally buffer the result.
When the next record is read, the internal buffer is overwritten, and because of
this Item_copy_string::val_str() will have wrong data, thus producing
a weird query result.
Item_func_sp::val_str() now makes a copy of the returned value to prevent
occasional corruption.
time per connection
Removed const_string() method from Item_string (it was only used in one
place, in a bad way). Defer possible SP variable, and access data directly
instead, in date_format item.
The idea is to add the DEFINER-clause to CREATE PROCEDURE and CREATE FUNCTION
statements. Almost all support for definers in stored routines had already been
implemented before this patch.
NOTE: this patch changes the behaviour of dumping stored routines in mysqldump.
Before this patch, mysqldump did not dump the DEFINER-clause for stored routines,
and this was documented behaviour. In order to get full information about stored
routines, one had to dump the mysql.proc table. This patch changes this
behaviour, so that the DEFINER-clause is dumped.
Since the DEFINER-clause was not supported in CREATE PROCEDURE | FUNCTION statements
before this patch, the clause is covered by additional version-specific comments.
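For example (user and routine names are illustrative):

  CREATE DEFINER = 'admin'@'localhost' PROCEDURE p1() SELECT 1;
  -- mysqldump now emits the DEFINER-clause for this routine, wrapped in
  -- version-specific comments so that older servers ignore it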
The problem was a code generation bug: cpop instructions were not generated
when using ITERATE back to an outer block from a context with a declared
cursor; this would make it push a new cursor without popping in-between,
eventually overrunning the cursor stack with a crash as the result.
Fixed the calculation of how many cursors to pop (in sp_pcontext.cc:
diff_cursors()), and also corrected diff_cursors() and diff_handlers()
so that, when doing a "leave", the last context we're leaving is not included
(we are then jumping to the appropriate pop instructions).
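A hedged sketch of the affected pattern (names are illustrative):

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE x INT DEFAULT 0;
    lbl: WHILE x < 10 DO
      SET x = x + 1;
      BEGIN
        DECLARE c CURSOR FOR SELECT 1;
        ITERATE lbl;   -- leaves the inner block; its cursor must be popped here,
                       -- which is what the missing cpop instructions failed to do
      END;
    END WHILE;
  END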
Implement table-level TRIGGER privilege to control access to triggers.
Before this patch the global SUPER privilege was used for this purpose, which
was a big security problem.
In detail, before this patch the SUPER privilege was required:
- for the user at CREATE TRIGGER time to create a new trigger;
- for the user at DROP TRIGGER time to drop the existing trigger;
- for the definer at trigger activation time to execute the trigger (if the
definer loses SUPER privilege, all its triggers become unavailable);
This patch changes the behaviour in the following way:
- the TRIGGER privilege on the trigger's subject table is required:
- for the user at CREATE TRIGGER time to create a new trigger;
- for the user at DROP TRIGGER time to drop the existing trigger;
- for the definer at trigger activation time to execute the trigger
(if the definer loses TRIGGER privilege on the subject table, all its
triggers on this table become unavailable).
- SUPER privilege is still required:
- for the user at CREATE TRIGGER time to explicitly set the trigger
definer to the user other than CURRENT_USER().
When the server works with a database of a previous version (without the TRIGGER
privilege), or if the database is being upgraded from a previous version, the
TRIGGER privilege is granted to those users who have the CREATE privilege.
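For example (names are illustrative):

  GRANT TRIGGER ON db1.t1 TO 'joe'@'localhost';
  -- 'joe' can now create, drop and (as definer) execute triggers on db1.t1;
  -- SUPER is still needed only to set a definer other than CURRENT_USER()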
After trying multiple inheritance (too messy and hard to make it work) and
subclassing jump_if_not (worked, but ugly), decided on this solution
instead:
Inserting an abstract sp_instr_opt_meta class as parent for all instructions
with destinations makes it possible to handle a continuation pointer for
sp_instr_set_case_expr too.
Note: No special test case; the fix is captured by the changed behaviour of
bug14643_2, and bug14498_4 (formerly disabled), in sp.test.