select OK.
The SQL parser was using Item::name to transfer user-defined function
attributes to the user-defined function (UDF). It was not distinguishing
between UDF call arguments and stored procedure call arguments. Setting
Item::name caused the Item_ref::print() method to print the argument as
a quoted identifier, and caused views that reference aggregate functions
as UDF call arguments (and rely on Item::print() for the view text to
be stored) to throw an undefined identifier error.
Overloaded Item_ref::print() to print aggregate functions as such when
printing references to aggregate functions taken out of context by
split_sum_func2().
Fixed the parser to properly detect use of an AS clause in stored
procedure arguments as an error.
Fixed printing of UDF call arguments to properly print the UDF
attributes.
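A minimal sketch of the failing pattern (the UDF name my_udf is an
assumption, standing in for any loaded UDF):
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT my_udf(SUM(a)) AS s FROM t1;
SELECT * FROM v1; -- before the fix, the stored view text printed the
                  -- aggregate argument as a quoted identifier, so this
                  -- raised an undefined identifier error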
Problem: "greater than" and "less than" XPath operators appeared to have been implemented in reverse.
Fix: swap arguments to eq_func() and eq_func_reverse() to provide the
correct operation result.
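A hypothetical illustration using ExtractValue() (not the original test
case):
SELECT ExtractValue('<a><b>1</b><b>3</b></a>', '/a/b[. > 2]');
-- must return 3; with the operators reversed it behaved like [. < 2]
-- and returned 1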
When doing CREATE UNIQUE INDEX, which MySQL silently converts to a
primary key, NDB is not informed, so the table will be useless.
Change so that we never do online add index without a primary key.
This is not good, but it's better than a useless table.
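A sketch of the affected pattern (table and index names are made up):
CREATE TABLE t1 (a INT NOT NULL) ENGINE=NDBCLUSTER;
CREATE UNIQUE INDEX ui ON t1 (a);
-- MySQL silently promotes the unique index on the NOT NULL column to
-- the primary key; before this change NDB was not told about the
-- promotion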
warnings in sql_trigger.cc and sql_view.cc".
According to the current version of the C++ standard, the offsetof()
macro can't be used for non-POD types, so warnings were emitted when we
tried to use this macro for the TABLE_LIST and Table_triggers_list
classes. Note that despite these warnings it was probably a safe
thing to do.
This fix tries to circumvent the limitation by implementing a
custom version of the offsetof() macro to be used with these
classes. This hack should go away once we refactor the
File_parser class.
Alternative approaches, such as disabling this warning for
sql_trigger.cc/sql_view.cc or for the whole server, were
considered less explicit. Also, I was unable to find a way
to disable a particular warning for a particular _part_ of a
file in GCC.
- compilation on the Alpha platform was broken because the Alpha-specific code was not updated after replacing the SIGRETURN_FRAME_COUNT constant with a variable
If elements of a not-top-level IN subquery were accessed by an index
and the subquery result set included a NULL value, then the quantified
predicate that contained the subquery was evaluated to NULL when
it should return a non-NULL value.
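A hypothetical illustration of the affected shape:
CREATE TABLE t1 (a INT, KEY(a));
INSERT INTO t1 VALUES (1), (NULL);
SELECT (1 IN (SELECT a FROM t1)) IS NULL;
-- the IN predicate is not top-level, the subquery is resolved through
-- the index on a, and the result set contains NULL; 1 IN (1, NULL) is
-- TRUE, so IS NULL must be 0, yet the predicate could evaluate to NULL
-- before the fix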
This patch reverts a change introduced by Bug 6951, which incorrectly
set thd->abort_on_warning for stored procedures.
As per internal discussions about SQL_MODE=TRADITIONAL,
the correct behavior is to *not* abort on warnings, even inside an
INSERT/UPDATE trigger.
Tests for Stored Procedures, Stored Functions, Triggers involving SQL_MODE
have been included or revised, to reflect the intended behavior.
(reposting approved patch, to work around source control issues, no review needed)
with only a COMMENT clause. Strangely, it has manifested itself
on only two platforms. This is a fix for bug#23423
"Syntax error for "ALTER EVENT ... COMMENT ..." on just two platforms"
Backport from 5.1.
Raised STACK_MIN_SIZE for Debian GNU/Linux Sid,
Linux kernel 2.6.16,
gcc version 3.3.6 (Debian 1:3.3.6-13),
libc6-dbg 2.3.6.ds1-4,
Pentium4 (x86),
BUILD/compile-pentium-debug-max
Raised to about 100 bytes above the required minimum.
When a statement to be prepared contained CREATE PROCEDURE, CREATE
FUNCTION or CREATE TRIGGER statements with a syntax error in them, the
preparation would fail with a syntax error message, but the memory
could be corrupted.
The problem occurred because we switch the memroot when parsing stored
routine or trigger definitions, and on a parse error we restored the
original memroot only after performing some memory operations. In more
detail:
- the prepared statement would activate its own memory root to parse
the definition of the stored procedure;
- the SP would replace this memory root with its own memory root to
parse the SP statements;
- a syntax error would happen;
- the prepared statement would restore the original memory root;
- the stored procedure would restore what it thought was the original
memory root, but was actually the statement memory root.
That led to a double free: in the destruction of the statement and in
the next call to mysql_parse().
The solution is to restore the memroot right after the failed parsing.
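A repro sketch; the exact statement shape is an assumption, with a
deliberate typo ("SELCT") to trigger the parse error:
PREPARE stmt FROM 'CREATE PROCEDURE p1() BEGIN SELCT 1; END';
-- the preparation fails with a syntax error while the routine's memroot
-- is still active, setting up the double free described above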
The value to be shown in SHOW PROCESSLIST is not
initialized when the THD is created and will be random for
unauthenticated connections.
To the documentor: a random value, instead of NULL, was shown
in SHOW PROCESSLIST for still-unauthenticated connections.
We sometimes miss some records when using the RANGE method if we have
partial key segments.
Example:
Create table t1(a char(2), key(a(1)));
insert into t1 values ('a'), ('xx');
select a from t1 where a > 'x';
We call index_read(), passing the 'x' key and the HA_READ_AFTER_KEY
flag, in handler::read_range_first(), which is wrong because we have
a partial key segment for the field and might miss records like 'xx'.
Fix: don't use open segments in such a case.
(COUNT(*) = 1) not working in SELECT inside prepared statement.
Note: the warning was introduced in 5.0 and 5.1; 4.1 is OK with the
original fix.
The problem was that in 5.0 and 5.1, clear() for group functions may
access the hybrid_type member, and this member is initialized in
fix_fields().
So we should not call clear() from the item cleanup() methods, as
cleanup() may be called for unfixed items.
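A hypothetical repro sketch:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
PREPARE stmt FROM 'SELECT (COUNT(*) = 1) FROM t1';
EXECUTE stmt; -- the comparison could yield a wrong result before the fix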
list using a function
When executing dependent subqueries, they are re-inited and re-executed
for each row of the outer context.
The cause of the bug is that during subquery reinitialization and
re-execution the optimizer reallocates JOIN::join_tab and JOIN::table
in make_simple_join(), and the local variable 'sortorder' in
create_sort_index(), which is allocated by make_unireg_sortorder().
Care must be taken not to allocate anything into the thread's memory
pool while re-initializing query plan structures between subquery
re-executions.
All such items must be cached and reused, because the thread's memory
pool is freed only at the end of the whole query.
Note that they must be cached and reused even for queries that are not
otherwise cacheable, because otherwise the thread's memory pool will
grow every time a cacheable query is re-executed.
We provide additional members to the JOIN structure to store references
to the items that need to be cached.
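A hypothetical shape of an affected query (tables t1, t2 are made up):
SELECT t1.a,
       (SELECT t2.c FROM t2 WHERE t2.b = t1.a ORDER BY t2.c LIMIT 1)
FROM t1;
-- the dependent subquery is re-inited and re-executed for every row of
-- t1; the sort set up by create_sort_index() on each re-execution must
-- reuse cached structures instead of allocating from the thread's
-- memory pool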
account predicates that become sargable after reading const tables.
In some cases this resulted in choosing non-optimal execution plans.
Now info about such potentially sargable predicates is saved in
an array, and after reading const tables we check whether these
predicates have become sargable.
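A hypothetical illustration, assuming t1(pk INT PRIMARY KEY, a INT) and
t2(b INT, KEY(b)):
SELECT * FROM t1, t2 WHERE t1.pk = 1 AND t2.b = t1.a;
-- t1 is read as a const table through its primary key; only after that
-- does t2.b = t1.a become a sargable condition usable for ref access
-- on t2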
Problem: renaming of the FRM and ARC files didn't use file name
encoding, so RENAME TABLE for views failed for views having
"tricky" characters in their names.
Fix: add build_table_filename() in the missing places.
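A hypothetical example with a "tricky" (non-ASCII) view name:
CREATE VIEW `vü` AS SELECT 1;
RENAME TABLE `vü` TO `wü`;
-- failed before the fix because the FRM/ARC file names were built
-- without file name encoding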
When using an index for GROUP BY and range access, the server isolates
a set of ranges based on the conditions over the key parts of the index
used. Then it uses only the ranges over the GROUP BY fields to jump
from one group to another. Since the GROUP BY fields may form a prefix
of the index, we may use only a prefix of the ranges produced by the
range optimizer.
Each range carries a notion of whether it includes its border values.
The problem is that when using a range prefix, the last range is open
because it assumes that there is a range on the next key part. Thus,
when we use a prefix range as-is, it excludes all border values.
The solution is, when ignoring the suffix of the range conditions (to
jump over the GROUP BY prefix only), to change the remaining intervals
so they always contain their borders, e.g. if the whole range was:
(1,-inf) <= (<group_by_col>,<min_max_arg_col>) < (1, 3) we must make it
(1) <= (<group_by_col>) <= (1), because (a,b) < (c1,c2) means:
a < c1 OR (a = c1 AND b < c2).
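A hypothetical query of the affected shape, matching the interval
above:
CREATE TABLE t1 (a INT, b INT, KEY(a, b));
SELECT a, MIN(b) FROM t1 WHERE a = 1 AND b < 3 GROUP BY a;
-- only the range over the GROUP BY prefix (a) is used to jump between
-- groups, so its borders must be made inclusive once the condition on
-- b is dropped from the interval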
When resolving unqualified name references, MySQL was not
checking the item type of the reference. Thus e.g. a string
literal item, which by convention has a name equal to its string
value, would also work as a reference to a SELECT list item or a
table field.
Fixed by allowing only Item_ref or Item_field to be referenced by
(unqualified) name.
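A hypothetical illustration of the bogus resolution (the exact shape is
an assumption):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
SELECT 'oops', COUNT(*) FROM t1 HAVING oops = 'oops';
-- before the fix, the unqualified name oops in HAVING could resolve to
-- the string literal 'oops' in the SELECT list (whose internal name
-- equals its value) instead of raising an unknown column error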
mysql_alter_table() was able to rename only a table.
The view/table renaming code is moved from the function rename_tables
to a new function called do_rename().
The mysql_alter_table() function calls it when it needs to rename a view.
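The user-visible operation this enables, illustratively:
CREATE VIEW v1 AS SELECT 1;
ALTER TABLE v1 RENAME TO v2; -- now routed through do_rename()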
Bug #21785 "Server crashes after rename of the log table" and
Bug #21966 "Strange warnings on create like/repair of the log
tables"
According to the patch, from now on, one should use RENAME to
perform a log table rotation (this should also be reflected in
the manual).
Here is a sample:
use mysql;
CREATE TABLE IF NOT EXISTS general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
The rules for renaming the log tables are as follows:
IF 1. Log tables are enabled
AND 2. Rename operates on the log table and nothing is being
renamed to the log table.
DO 3. Throw an error message.
ELSE 4. Perform rename.
The RENAME query itself will go to the old (backup) table. This is
consistent with the behaviour we have with the binlog ROTATE LOGS
statement.
Other problems solved by the patch are:
1) REPAIR of a log table is now an exclusive operation (as it should
be); this also eliminates the lock-related warnings; and
2) CREATE TABLE ... LIKE now uses a usual read lock on the source table
rather than a name lock, which is too restrictive. This way we get rid
of another log-table-related warning, which occurred because of the
above fact (as a side effect, the name lock resulted in a warning).
Do not consider SHOW commands slow queries just because they don't use
proper indexes.
This bug fix is not needed in 5.1, and the code changes will be null
merged. However, the test cases will be propagated up to 5.1.
should fail to create
The problem was that this type of error was checked during view
creation, which doesn't happen when CREATE VIEW is a statement of
a created stored routine.
The solution is to perform the checks at parse time. The idea of the
fix is that the parser checks whether a construct just parsed is
allowed in the current circumstances by testing certain flags, and
these flags are reset for VIEWs.
The side effect of this change is that if users already have
such bogus routines, they will now get an error when trying to do
SHOW CREATE PROCEDURE proc;
(and some others), and when trying to execute such a routine they will get
ERROR 1457 (HY000): Failed to load routine test.p5. The table mysql.proc is missing, corrupt, or contains bad data (internal code -6)
However, there should be very few such users (if any), and they may
(and should) drop these bogus routines.
Both on its own and in the plugin shutdown... not so good. The code is
a bit simpler, and we could now technically remove the panic call
entirely if we wanted to.
The Cached_item_decimal::cmp() method wasn't checking for a null
pointer returned from val_decimal() of the item being cached.
This led to a server crash.
The Cached_item_decimal::cmp() method now checks for null values.
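A hypothetical repro shape:
CREATE TABLE t1 (d DECIMAL(10,2));
INSERT INTO t1 VALUES (NULL), (1.00);
SELECT d, COUNT(*) FROM t1 GROUP BY d;
-- detecting group changes caches the DECIMAL value; the NULL row makes
-- val_decimal() return a null pointer, which crashed cmp() before the
-- fix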
hangs on Linux
If REPAIR TABLE ... USE_FRM is issued for a table that is located in a
database other than the default one, a server crash could happen.
Fix: in reopen_name_locked_table, take the database name from
table_list (user-specified or default database) instead of from thd
(default database).
Affects 4.1 only.
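A hypothetical repro shape (other_db stands for any non-default
database):
USE test;
REPAIR TABLE other_db.t1 USE_FRM;
-- could crash before the fix, since reopen_name_locked_table looked up
-- the table under the default database instead of other_db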
Examined rows are counted for every join part. The per-join-part
counter was incremented over all iterations. The result variable
was overwritten at the end of every iteration, so the final result was
the number of rows examined by the join part that ended its execution
last; the numbers of the other join parts were lost.
Now we reset the per-join-part counter before every iteration and
add it to the result variable at the end of the iteration. That
way we get the sum over all iterations of all join parts.
No test case. Testing this needs a look into the slow query log.
I don't know of a way to do this portably with the test suite.
The bug is present only in 4.1, will be null-merged to 5.0
For InnoDB, check the value of thd->transaction.all.innodb_active_trans instead of thd->transaction.stmt.innobase_tid to see if we really need to roll back.
- Type casting was not consistent: when adding a WEEK interval to a
DATE type, the result type was DATETIME and not DATE, as is the norm.
- By changing the order of the date type enumerations, the type casting
bug is resolved. To comply with the new order, the array
interval_type_to_name needed to change accordingly.
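An illustration of the corrected behavior:
SELECT DATE '2006-01-01' + INTERVAL 1 WEEK;
-- now returns a DATE ('2006-01-08'); before the fix the result type
-- was DATETIME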
A transaction on the slave SQL thread got blocked by a lock of a local
transaction on the slave's mysqld. Since the default is
slave-transaction-retries=10, the replicated transaction was replayed.
That failed because of a policy, new as of 5.0.13, of not rolling back
a timed-out transaction. Effectively, the first round of the timed-out
transaction became committed by the replay's first "BEGIN".
It was decided to backport the already existing method working in 5.1,
implemented for bug #16228 to handle the symmetrical deadlock problem.
That patch introduced execution of end_trans whenever a replicated
transaction deadlocks or times out.
Note that this solution can be practically suboptimal: in light of the
changed behavior on timeout, we could still replay only the last
statement, but only at a high rate of timing-out replicated
transactions.
This decoupling allows enum interval_type to be reordered in future
versions of MySQL without affecting backward compatibility in the
events code.
This changeset doesn't change any exposed behavior but makes the events
code more robust against changes outside of its code base.
To the reviewer: there is no regression test included, as it is
impossible to construct one with the current infrastructure. To test
the code, one has to create an event, then change the order of
enum interval_type in my_time.h, update sql/time.cc, recompile the
server and run it with the scheduler running.
Removing code to step the group log position and just stepping
the event log position. If the group log position were stepped
one time too many, it might be that the group starts at a position
that is not possible, e.g., at a Rows_log_event, or between an
Intvar_log_event and the following associated Query_log_event.
statement.
The problem was that during statement re-execution, if the result was
empty, the old result could be returned for group functions.
The solution is to implement a proper cleanup() method in group
functions.
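A hypothetical repro shape:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2);
PREPARE stmt FROM 'SELECT MAX(a) FROM t1 WHERE a > ?';
SET @p = 0;
EXECUTE stmt USING @p;  -- returns 2
SET @p = 10;
EXECUTE stmt USING @p;  -- empty result: before the fix the stale 2
                        -- could be returned instead of NULL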
In a trigger or a function used in a statement, it is possible to
SELECT from a table being modified by the statement. However,
encapsulating such a SELECT into a view and selecting from the view
instead of doing the direct SELECT was not possible.
This happened because tables used by views (which in their turn
were used from functions/triggers) were not excluded from the checks
in the unique_table() routine, as happens for the rest of the tables
added to the statement table list for prelocking.
With this fix we ignore all such tables in unique_table(), thus
providing consistency: inside a trigger or a function, SELECT from
a view may be used wherever a plain SELECT is allowed. Modification of
the same table from a function or trigger is still disallowed. Also,
this patch doesn't affect the case where SELECT from the table being
modified is done outside of a function or trigger; such SELECTs are
still disallowed (this limitation, and the visibility problem when a
function selects from a table being modified, are subjects of bug
21326). See also bug 22427.
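A hypothetical illustration of what the fix allows:
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (m INT);
CREATE VIEW v1 AS SELECT MAX(a) AS m FROM t1;
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
  INSERT INTO t2 SELECT m FROM v1;
INSERT INTO t1 VALUES (1);
-- selecting from v1 (a view over t1, the table being modified) inside
-- the trigger failed the unique_table() check before the fix, although
-- a plain SELECT from t1 in the same place was allowed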
The syntax of the CALL statement, used to invoke a stored procedure,
has been changed to make the use of parentheses optional in the
argument list. With this change, "CALL p;" is equivalent to "CALL p();".
While the SQL spec does not explicitly mandate this syntax, supporting
it is needed for practical reasons, for integration with the JDBC /
ODBC connectors.
Also, warnings in the sql/sql_yacc.yy file, which were not reported by Bison 2.1
but are now reported by Bison 2.2, have been fixed.
The warnings found were:
bison -y -p MYSQL -d --debug --verbose sql_yacc.yy
sql_yacc.yy:653.9-18: warning: symbol UNLOCK_SYM redeclared
sql_yacc.yy:656.9-17: warning: symbol UNTIL_SYM redeclared
sql_yacc.yy:658.9-18: warning: symbol UPDATE_SYM redeclared
sql_yacc.yy:5169.11-5174.11: warning: unused value: $2
sql_yacc.yy:5208.11-5220.11: warning: unused value: $5
sql_yacc.yy:5221.11-5234.11: warning: unused value: $5
conflicts: 249 shift/reduce
"unused value: $2" correspond to the $$=$1 assignment in the 1st {} block
in table_ref -> join_table {} {},
which does not procude a result ($$) for the rule but an intermediate $2
value for the action instead.
"unused value: $5" are similar, with $$ assignments in {} actions blocks
which are not for the final reduce.
Currently SQL_BIG_RESULT is checked only at compile time.
However, additional optimizations may take place after
this check that change the sort method from 'filesort'
to sorting via an index. As a result, the actual plan
executed is not the one specified by the SQL_BIG_RESULT
hint. Similarly, there is no such test when executing
EXPLAIN, resulting in incorrect output.
The patch corrects the problem by testing for
SQL_BIG_RESULT both during the explain and execution
phases.
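A hypothetical shape of an affected query, assuming an index on t1(a):
SELECT SQL_BIG_RESULT a, COUNT(*) FROM t1 GROUP BY a;
-- the hint requests filesort-based grouping; both EXPLAIN and the
-- executed plan must now honor it instead of silently switching to an
-- index scan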
This is an addition to the fix for bug21617. Valgrind reports an error
when opening a merge table that has underlying tables with fewer
indexes than the merge table itself.
Copy at most min(file->keys, table->key_parts) elements from the
rec_per_key array.
This fixes problems when a merge table and its subtables have different
numbers of keys.
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return the value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by the current
statement if the call happened after the generation, like in:
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
crash for, e.g., NDB):
Before, mysqlbinlog printed table map events as a separate statement, so
when executing the event, the opened table was subsequently closed
when the statement ended. Instead, the row-based events that make up
a statement are now printed as *one* BINLOG statement, which means
that the table maps and the following *_rows_log_event events are
executed fully before the statement ends.
Changing implementation of BINLOG statement to be able to read the
emitted format, which now consists of several chunks of BASE64-encoded
data.
Though this is not a storage-engine-specific problem, I was able to
repeat it only with the BDB and NDB engines; that was the reason to
add a test case to ndb_update.test. Depending on the engine, different
bad things could happen:
BDB removed duplicate rows, which is not expected;
NDB returned an error.
Fix: for a multi-table UPDATE, notify the storage engine about UPDATE
IGNORE as is done in a single-table UPDATE.
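A hypothetical repro shape:
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=NDBCLUSTER;
CREATE TABLE t2 (b INT) ENGINE=NDBCLUSTER;
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (2);
UPDATE IGNORE t1, t2 SET t1.a = 1 WHERE t1.a = t2.b;
-- the update makes t1.a collide with an existing key; with IGNORE the
-- engine must be told to skip the duplicate-key error, as already
-- happened for single-table UPDATE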