context was used as an argument of GROUP_CONCAT.
Ensured correct setting of the depended_from field in references
generated for set functions aggregated in outer selects.
A wrong value of this field resulted in wrong maps returned by
used_tables() for these references.
Made sure that a temporary table field is added for any set function
aggregated in outer context when creation of a temporary table is
needed to execute the inner subquery.
To correctly decide which predicates can be evaluated with a given
table, the optimizer must know the exact set of tables that a predicate
depends on. If that mask is too wide (refers to non-existing tables),
the optimizer can erroneously skip a predicate.
One such case of a wrong table usage mask was the aggregate functions:
they had an all-1 mask (meaning they depend on all tables, including
non-existent ones).
Fixed by constructing a real used_tables mask for the aggregates. The
mask is constructed in the following way:
1. OR the table dependency masks of all the arguments of the aggregate.
2. If all the arguments of the function are from the local name
resolution context, and the function is evaluated in the same name
resolution context where it is referenced, all the tables from that
name resolution context are OR-ed into the dependency mask. This
denotes that an aggregate function depends on the number of rows it
processes.
3. Correctly handle the case of an aggregate function optimization
(where the aggregate function can be pre-calculated and made a
constant).
Made sure that an aggregate function is never a constant (unless it is
subject to such a specific optimization and pre-calculation).
One other flaw was revealed and fixed in the process: references were
not calling the recalculation method for used_tables of their targets.
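A sketch of the kind of query affected (table and column names are
hypothetical), where a set function belongs to the outer select but
appears inside a subquery:

  SELECT (SELECT GROUP_CONCAT(t1.a) FROM t2 LIMIT 1) FROM t1;
  -- GROUP_CONCAT(t1.a) is aggregated in the outer select, so its
  -- used_tables mask must cover exactly t1 rather than all tables.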
touched but not actually changed.
LAST_INSERT_ID() was reset to 0 if no rows were inserted or changed.
This is the case when an INSERT ... ON DUPLICATE KEY UPDATE updates a
row with the same values as the row already contains.
Now the LAST_INSERT_ID() value is reset to 0 only if there were no rows
successfully inserted or touched.
The new 'touched' field is added to the COPY_INFO structure. It holds
the number of rows that were touched, no matter whether they were
actually changed or not.
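A sketch of the case in question (hypothetical table):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
  INSERT INTO t1 (a) VALUES (1);         -- LAST_INSERT_ID() is 1
  INSERT INTO t1 (id, a) VALUES (1, 1)
    ON DUPLICATE KEY UPDATE a = 1;       -- row touched, values unchanged
  SELECT LAST_INSERT_ID();               -- used to be reset to 0,
                                         -- now keeps the value 1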
fixes).
The legend: on a replication slave, in case a trigger creation was
filtered out by application of a replicate-do-table/
replicate-ignore-table rule, the parsed definition of the trigger was
not cleaned up properly. The LEX::sphead member was left around and
leaked memory. Until support of replicate-ignore-table rules for
triggers was actually implemented by the patch for Bug 24478, it was
never the case that "case SQLCOM_CREATE_TRIGGER" was not executed once
a trigger was parsed, so the deletion of lex->sphead there worked and
the memory did not leak.
The fix:
The real cause of the bug is that there is no single place where we
can clean up the main LEX after parse. And the reason we cannot have
just one or two places where we clean up the LEX is the asymmetric
behaviour of MYSQLparse in case of success or error.
One of the root causes of this behaviour is the code in Item::Item()
constructor. There, a newly created item adds itself to THD::free_list
- a single-linked list of Items used in a statement. Yuck. This code
is unaware that we may have more than one statement active at a time,
and always assumes that the free_list of the current statement is
located in THD::free_list. One day we need to be able to explicitly
allocate an item in a given Query_arena.
Thus, when parsing a definition of a stored procedure, like
CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END;
we actually need to reset THD::mem_root, THD::free_list and THD::lex
to parse the nested procedure statement (SELECT *).
The actual reset and restore is implemented in semantic actions
attached to sp_proc_stmt grammar rule.
The problem is that in case of a parsing error inside a nested
statement the Bison-generated parser would abort immediately, without
executing the restore part of the semantic action. This would leave
THD in an in-the-middle-of-parsing state.
This is why we couldn't have had a single place where we clean up the LEX
after MYSQLparse - in case of an error we needed to do a clean up
immediately, in case of success a clean up could have been delayed.
This left the door open for a memory leak.
The following possibilities were considered when working on a fix:
- patch the replication logic to do the clean up. Rejected as it
breaks module borders; replication code should not need to know the
gory details of the clean up procedure after CREATE TRIGGER.
- wrap MYSQLparse with a function that would do a clean up.
Rejected as ideally we should fix the problem when it happens, not
adjust for it outside of the problematic code.
- make sure MYSQLparse cleans up after itself by invoking the clean up
functionality in the appropriate places before return. Implemented in
this patch.
- use a %destructor rule for sp_proc_stmt to restore THD - cleaner
than the previous approach, but rejected because it needs a careful
analysis of the side effects, this patch is for 5.0, and long term we
need to use the next alternative anyway
- make sure that sp_proc_stmt doesn't juggle with THD - this is a
large work that will affect many modules.
Cleanup: move main_lex and main_mem_root from Statement to its
only two descendants Prepared_statement and THD. This ensures that
when a Statement instance was created for purposes of statement backup,
we do not involve LEX constructor/destructor, which is fairly expensive.
To verify that the transformation preserves functionality, please
check the respective constructors and destructors of Statement,
Prepared_statement and THD - these members were used only there.
This cleanup is unrelated to the patch.
Bug 18914 (Calling certain SPs from triggers fail)
Bug 20713 (Functions will not not continue for SQLSTATE VALUE '42S02')
Bug 21825 (Incorrect message error deleting records in a table with a
trigger for inserting)
Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
error)
Bug 25345 (Cursors from Functions)
This fix resolves a long standing issue originally reported with bug
8407, which affects the behavior of Stored Procedures, Stored Functions
and Triggers in many different ways, causing the symptoms reported by
all the bugs listed.
In all cases, the root cause of the problem traces back to 8407 and how the
server locks tables involved with sub statements.
Prior to this fix, the implementation of stored routines would:
- compute the transitive closure of all the tables referenced by a top level
statement
- open and lock all the tables involved
- execute the top level statement
"transitive closure of tables" means collecting:
- all the tables,
- all the stored functions,
- all the views,
- all the table triggers
- all the stored procedures
involved, and recursively inspect these objects definition to find more
references to more objects, until the list of every object referenced does
not grow any more.
This mechanism is known as "pre-locking" tables before execution.
The motivation for locking all the (possibly) used tables at once is
to prevent deadlocks.
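For instance (all object names hypothetical), pre-locking an INSERT
into a table with a trigger collects everything reachable from it:

  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
    FOR EACH ROW INSERT INTO t2 VALUES (f1(NEW.a));
  -- an INSERT INTO t1 pre-locks t1, the trigger t1_bi, t2, the
  -- stored function f1 and, recursively, any tables used by f1.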
One problem with this approach is that, if the execution path the code
really takes during runtime does not use a given table, and if the table is
missing, the server would not execute the statement.
This in particular has a major impact on triggers, since a missing
table referenced by an update/delete trigger would prevent an insert
trigger from running.
Another problem is that stored routines might define SQL exception handlers
to deal with missing tables, but the server implementation would never give
user code a chance to execute this logic, since the routine is never
executed when a missing table causes the pre-locking code to fail.
With this fix, the internal implementation of the pre-locking code has been
relaxed of some constraints, so that failure to open a table does not
necessarily prevent execution of a stored routine.
In particular, the pre-locking mechanism is now behaving as follows:
1) the first step, to compute the transitive closure of all the tables
possibly referenced by a statement, is unchanged.
2) the next step, which is to open all the tables involved, only
attempts to open the tables added by the pre-locking code, but silently
fails without reporting any error or invoking any exception handler if
the table is not present. This is achieved by trapping internal errors
with Prelock_error_handler
3) the locking step only locks tables that were successfully opened.
4) when executing sub statements, the list of tables used by each
statement is evaluated as before. The tables needed by the sub
statement are expected to be already opened and locked. Statements
referencing tables that were not opened in step 2) will fail to find
the table in the open list, and only at this point will execution of
the user code fail.
5) when a runtime exception is raised at 4), the instruction continuation
destination (the next instruction to execute in case of SQL continue
handlers) is evaluated.
This is achieved with sp_instr::exec_open_and_lock_tables()
6) if a user exception handler is present in the stored routine, that
handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
trapped by stored routines. If no handler exists, then the runtime execution
will fail as expected.
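A sketch of the resulting behaviour (hypothetical object names): a
routine referencing a missing table now fails only when the offending
statement is executed, and a handler can trap the error:

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
      SELECT 'table is missing';
    SELECT * FROM no_such_table;  -- fails only here (step 4) and the
                                  -- handler is invoked (step 6)
  END;
  CALL p1();  -- previously the CALL failed during pre-locking, before
              -- any user code had a chance to run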
With all these changes, a side effect is that view security is impacted, in
two different ways.
First, a view defined as "select stored_function()", where the stored
function references a table that may not exist, is considered valid.
The rationale is that, because the stored function might trap
exceptions during execution and still return a valid result, there is
no way to decide, when the view is created, whether a missing table
really makes the view invalid.
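For example (hypothetical names), the following view is now accepted
even if the table read by f1() does not exist at view creation time:

  CREATE FUNCTION f1() RETURNS INT
    RETURN (SELECT COUNT(*) FROM maybe_missing_table);
  CREATE VIEW v1 AS SELECT f1();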
Secondly, testing for existence of tables is now done later during
execution. View security, which consists of trapping errors and
returning a generic ER_VIEW_INVALID (to prevent disclosing
information), was only implemented at very specific phases covering
*opening* tables, but not covering the runtime execution. Because of
this existing limitation, errors that were previously trapped and
converted into ER_VIEW_INVALID are no longer trapped, causing table
names to be reported to the user.
This change is exposing an existing problem, which is independent and will
be resolved separately.
can be specified
Currently MySQL allows one to specify what indexes to ignore during
join optimization. The scope of the current USE/FORCE/IGNORE INDEX
statement is only the FROM clause, while all other clauses are not
affected.
However, in certain cases, the optimizer
may incorrectly choose an index for sorting and/or grouping, and
produce an inefficient query plan.
This task provides the means to specify what indexes are
ignored/used for what operation in a more fine-grained manner, thus
making it possible to manually force a better plan. We do this
by extending the current IGNORE/USE/FORCE INDEX syntax to:
IGNORE/USE/FORCE INDEX [FOR {JOIN | ORDER | GROUP BY}]
so that:
- if no FOR is specified, the index hint will apply everywhere.
- if MySQL is started with the compatibility option --old_mode then
an index hint without a FOR clause works as in 5.0 (i.e., the
index will only be ignored for JOINs, but can still be used to
compute ORDER BY).
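For example (hypothetical table and index names):

  SELECT a, b FROM t1 USE INDEX FOR ORDER BY (i_a) ORDER BY a;
  -- i_a may be used to resolve the ORDER BY, while index selection
  -- for the join itself is left unrestricted.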
See WL#3527 for further details.
This patch fixes a problem where LOAD DATA could use different
character sets when loading files on the master and slave sides:
- Adding replication of thd->variables.collation_database
- Adding optional character set clause into LOAD DATA
Note: the second way, with an explicit CHARACTER SET clause, should be
the recommended way to load data using an alternative character set.
The old way, using "SET @@character_set_database=xxx", should be
gradually deprecated.
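A sketch of the recommended explicit form (file and table names are
hypothetical):

  LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1
    CHARACTER SET utf8
    FIELDS TERMINATED BY ',';
  -- the character set is part of the statement, so master and slave
  -- parse the file identically.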
Triggers in SBR mode."
BUG#14914 "SP: Uses of session variables in routines are not always
replicated"
BUG#25167 "Dupl. usage of user-variables in trigger/function is not
replicated correctly"
User-defined variables used inside of stored functions/triggers in
statements which did not update tables directly were not replicated.
We also had problems with replication of user-defined variables which
were used in triggers (or stored functions called from table-updating
statements) more than once.
This patch addresses the first issue by enabling logging of all
references to user-defined variables in triggers/stored functions
and not only references from table-updating statements.
The second issue stemmed from the fact that for user-defined
variables used from triggers or stored functions called from
table-updating statements we were writing binlog events for each
reference instead of only one event for the first reference.
This problem is already solved for stored functions called from
non-updating statements with the help of the "event unioning"
mechanism. So the patch simply extends this mechanism to the affected
case.
It also fixes a small problem in this mechanism which caused wrong
logging of references to user variables in cases when a non-updating
statement called several stored functions which used the same variable
and some of these function calls were omitted from the binlog as they
were not updating any tables.
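A sketch of the first issue (hypothetical names): the reference to @a
happens inside the function, not in the table-updating statement
itself, and before this patch it could be missing from the binlog:

  SET @a = 1;
  CREATE FUNCTION f1() RETURNS INT RETURN @a;
  INSERT INTO t1 VALUES (f1());  -- the slave must see @a = 1 to
                                 -- insert the same value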
Fixed compile-pentium64 scripts
Fixed wrong estimate of update_with_key_prefix in sql-bench
Merge bk-internal.mysql.com:/home/bk/mysql-5.1 into mysql.com:/home/my/mysql-5.1
Fixed unsafe define of uint4korr()
Fixed that --extern works with mysql-test-run.pl
Small trivial cleanups
This also fixes a bug in counting the number of rows that are updated when we have many simultaneous queries
Move all connection handling and command execution main loop from sql_parse.cc to sql_connection.cc
Split handle_one_connection() into reusable sub functions.
Split create_new_thread() into reusable sub functions.
Added thread_scheduler; Preliminary interface code for future thread_handling code.
Use 'my_thread_id' for internal thread id's
Make thr_alarm_kill() depend on thread_id instead of thread
Make thr_abort_locks_for_thread() depend on thread_id instead of thread
In store_globals(), set my_thread_var->id to be thd->thread_id.
Use my_thread_var->id as basis for my_thread_name()
The above changes make the connection between THD and threads looser.
Added a lot of DBUG_PRINT() and DBUG_ASSERT() functions
Fixed compiler warnings
Fixed core dumps when running with --debug
Removed setting of signal masks (was never used)
Made event code call pthread_exit() (portability fix)
Fixed that event code doesn't call DBUG_xxx functions before my_thread_init() is called.
Made handling of thread_id and thd->variables.pseudo_thread_id uniform.
Removed one common 'not freed memory' warning from mysqltest
Fixed a couple of warnings about use of uninitialized variables (unlikely cases)
Suppress compiler warnings from bdb and (for the moment) warnings from ndb
- Implement --secure-file-priv=<dir> option that limits
"load_file", "LOAD DATA" and "SELECT .. INTO OUTFILE" to work
with files in the specified dir.
- Use above option for mysqld in mysql-test-run.pl
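A sketch of the effect, assuming the server was started with
--secure-file-priv=/var/lib/mysql-files (path hypothetical):

  SELECT LOAD_FILE('/etc/passwd');                 -- returns NULL
  SELECT LOAD_FILE('/var/lib/mysql-files/a.txt');  -- allowed
  SELECT a INTO OUTFILE '/tmp/out.txt' FROM t1;    -- fails with an error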
Protocol_simple->Protocol_text; Protocol_prep->Protocol_binary
and also THD::protocol_simple->THD::protocol_text,
THD::protocol_prep->THD::protocol_binary.
Reason: in the long term the binary protocol is not bound to be used
only with prepared statements (see WL#3559 "Decouple binary protocol
from prepared statements"). Renaming now is pressing because
the fix for BUG#735 "Prepared Statements: there is
no support for Query Cache" will introduce a new member
in class Query_cache_flags telling about the protocol's nature.
Other reason: "simple" is less accurate than "text".
Future patches for BUG#735 will rely on this cset.
Corrected spelling in copyright text
Makefile.am:
Don't update the files from BitKeeper
Many files:
Removed "MySQL Finland AB & TCX DataKonsult AB" from copyright header
Adjusted year(s) in copyright header
Many files:
Added GPL copyright text
Removed files:
Docs/Support/colspec-fix.pl
Docs/Support/docbook-fixup.pl
Docs/Support/docbook-prefix.pl
Docs/Support/docbook-split
Docs/Support/make-docbook
Docs/Support/make-makefile
Docs/Support/test-make-manual
Docs/Support/test-make-manual-de
Docs/Support/xwf
from log):
When row-based logging is used, the CREATE-SELECT is written as two
parts: as a CREATE TABLE statement and as the rows for the table. For
both transactional and non-transactional tables, the CREATE TABLE
statement was written to the transaction cache, as were the rows, and
on statement end, the entire transaction cache was written to the binary
log if the table was non-transactional. For transactional tables, the
events were kept in the transaction cache until end of transaction (or
end of statement, for statements that were not part of a transaction).
For the case when AUTOCOMMIT=0 and we are creating a transactional table
using a create select, we would then keep the CREATE TABLE statement and
the rows for the CREATE-SELECT, while executing the following statements.
On a rollback, the transaction cache would then be cleared, which would
also remove the CREATE TABLE statement. Hence no table would be created
on the slave, while there is an empty table on the master.
This relates to BUG#22865 where the table being created exists on the
master, but not on the slave during insertion of rows into the newly
created table. This occurs since the CREATE TABLE statement was still
in the transaction cache until the statement finished executing, and
possibly longer if the table was transactional.
This patch changes the behaviour of the CREATE-SELECT statement by
adding an implicit commit at the end of the statement when creating
non-temporary tables. Hence, non-temporary tables will be written to
the binary log on completion, and in the event of AUTOCOMMIT=0, a new
transaction will be started. Temporary tables do not commit an ongoing
transaction: neither as a pre- nor as a post-commit.
The events for both transactional and non-transactional tables are
saved in the transaction cache, and written to the binary log at end
of the statement.
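A sketch of the changed behaviour (hypothetical names, with InnoDB as
the transactional engine):

  SET AUTOCOMMIT = 0;
  CREATE TABLE t2 ENGINE=InnoDB SELECT * FROM t1;  -- commits implicitly
  ROLLBACK;
  -- before this patch the rollback discarded the cached CREATE TABLE
  -- event, leaving t2 on the master but never creating it on the slave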
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
I did run gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes