mariadb/sql/sql_prepare.cc


/* Copyright (c) 2002, 2013, Oracle and/or its affiliates. All rights reserved.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */
/**
  @file

  This file contains the implementation of prepared statements.

  When one prepares a statement:

  - Server gets the query from client with command 'COM_STMT_PREPARE',
    in the following format:

    [COM_STMT_PREPARE:1] [query]

  - Parse the query and recognize any parameter markers '?' and
    store their information in lex->param_list

  - Allocate a new statement for this prepare; and keep this in
    'thd->stmt_map'.

  - Without executing the query, return to the client the total
    number of parameters along with result-set metadata information
    (if any) in the following format:

    @verbatim
    [STMT_ID:4]
    [Column_count:2]
    [Param_count:2]
    [Params meta info (stubs only for now)]  (if Param_count > 0)
    [Columns meta info] (if Column_count > 0)
    @endverbatim

  During prepare the tables used in a statement are opened, but no
  locks are acquired. Table opening will block any DDL during the
  operation, and we do not need any locks as we neither read nor
  modify any data during prepare. Tables are closed after prepare
  finishes.

  When one executes a statement:

  - Server gets the command 'COM_STMT_EXECUTE' to execute the
    previously prepared query. If there are any parameter markers, the
    client will send the data in the following format:

    @verbatim
    [COM_STMT_EXECUTE:1]
    [STMT_ID:4]
    [NULL_BITS:(param_count+7)/8]
    [TYPES_SUPPLIED_BY_CLIENT(0/1):1]
    [[length]data]
    [[length]data] .. [[length]data]
    @endverbatim

    (Note: only string/binary typed parameters are preceded by a length
    field; all other types are sent without one.)

  - If it is the first execute, or the parameter types were altered by
    the client, set up the conversion routines.

  - Assign parameter items from the supplied data.

  - Execute the query without re-parsing and send back the results
    to the client.

  During execution of a prepared statement, tables are opened and locked
  the same way they would be for a normal (non-prepared) statement.
  Tables are unlocked and closed after the execution.

  When one supplies long data for a placeholder:

  - Server gets the long data in pieces with command type
    'COM_STMT_SEND_LONG_DATA'.

  - The packet received has the following format:

    [COM_STMT_SEND_LONG_DATA:1][STMT_ID:4][parameter_number:2][data]

  - Data from the packet is appended to the long data value buffer for
    this placeholder.

  - The client may stop supplying data chunks at any point. The server
    doesn't care; it also doesn't notify the client whether it got the
    data or not: if there is any error, it will be returned at statement
    execute time.
*/
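/*
  Illustrative client-side sketch (not part of the server): the public C
  API wraps the COM_STMT_* commands described above. The functions below
  are the standard libmysql calls; the table and column names are made up.

    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    const char *query= "SELECT a FROM t1 WHERE b = ?";
    mysql_stmt_prepare(stmt, query, strlen(query));   // COM_STMT_PREPARE

    long value= 42;
    MYSQL_BIND bind[1];
    memset(bind, 0, sizeof(bind));
    bind[0].buffer_type= MYSQL_TYPE_LONG;   // handled by set_param_int32
    bind[0].buffer= (void *) &value;
    mysql_stmt_bind_param(stmt, bind);

    mysql_stmt_execute(stmt);                         // COM_STMT_EXECUTE
    mysql_stmt_close(stmt);                           // COM_STMT_CLOSE
*/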
#include "my_global.h" /* NO_EMBEDDED_ACCESS_CHECKS */
#include "sql_priv.h"
#include "unireg.h"
#include "sql_class.h" // set_var.h: THD
#include "set_var.h"
#include "sql_prepare.h"
#include "sql_parse.h" // insert_precheck, update_precheck, delete_precheck
#include "sql_base.h" // open_normal_and_derived_tables
#include "sql_cache.h" // query_cache_*
#include "sql_view.h" // create_view_precheck
#include "sql_delete.h" // mysql_prepare_delete
#include "sql_select.h" // for JOIN
#include "sql_insert.h" // upgrade_lock_type_for_insert, mysql_prepare_insert
#include "sql_update.h" // mysql_prepare_update
#include "sql_db.h" // mysql_opt_change_db, mysql_change_db
#include "sql_acl.h" // *_ACL
#include "sql_derived.h" // mysql_derived_prepare,
// mysql_handle_derived
#include "sql_cursor.h"
#include "sp_head.h"
#include "sp.h"
#include "sp_cache.h"
#include "probes_mysql.h"
#ifdef EMBEDDED_LIBRARY
/* include MYSQL_BIND headers */
#include <mysql.h>
#else
#include <mysql_com.h>
#endif
#include "lock.h" // MYSQL_OPEN_FORCE_SHARED_MDL
#include "transaction.h" // trans_rollback_implicit
/**
A result class used to send cursor rows using the binary protocol.
*/
class Select_fetch_protocol_binary: public select_send
{
Protocol_binary protocol;
public:
Select_fetch_protocol_binary(THD *thd);
virtual bool send_result_set_metadata(List<Item> &list, uint flags);
virtual bool send_data(List<Item> &items);
virtual bool send_eof();
#ifdef EMBEDDED_LIBRARY
void begin_dataset()
{
protocol.begin_dataset();
}
#endif
};
/****************************************************************************/
/**
Prepared_statement: a statement that can contain placeholders.
*/
class Prepared_statement: public Statement
{
public:
enum flag_values
{
IS_IN_USE= 1,
IS_SQL_PREPARE= 2
};
THD *thd;
Select_fetch_protocol_binary result;
Item_param **param_array;
Server_side_cursor *cursor;
uint param_count;
uint last_errno;
uint flags;
char last_error[MYSQL_ERRMSG_SIZE];
#ifndef EMBEDDED_LIBRARY
bool (*set_params)(Prepared_statement *st, uchar *data, uchar *data_end,
uchar *read_pos, String *expanded_query);
#else
bool (*set_params_data)(Prepared_statement *st, String *expanded_query);
#endif
bool (*set_params_from_vars)(Prepared_statement *stmt,
List<LEX_STRING>& varnames,
String *expanded_query);
public:
Prepared_statement(THD *thd_arg);
virtual ~Prepared_statement();
void setup_set_params();
virtual Query_arena::Type type() const;
virtual void cleanup_stmt();
bool set_name(LEX_STRING *name);
inline void close_cursor() { delete cursor; cursor= 0; }
inline bool is_in_use() { return flags & (uint) IS_IN_USE; }
inline bool is_sql_prepare() const { return flags & (uint) IS_SQL_PREPARE; }
void set_sql_prepare() { flags|= (uint) IS_SQL_PREPARE; }
bool prepare(const char *packet, uint packet_length);
bool execute_loop(String *expanded_query,
bool open_cursor,
uchar *packet_arg, uchar *packet_end_arg);
bool execute_server_runnable(Server_runnable *server_runnable);
/* Destroy this statement */
void deallocate();
private:
/**
The memory root to allocate parsed tree elements (instances of Item,
SELECT_LEX and other classes).
*/
MEM_ROOT main_mem_root;
private:
bool set_db(const char *db, uint db_length);
bool set_parameters(String *expanded_query,
uchar *packet, uchar *packet_end);
bool execute(String *expanded_query, bool open_cursor);
bool reprepare();
bool validate_metadata(Prepared_statement *copy);
void swap_prepared_statement(Prepared_statement *copy);
};
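/*
  Rough lifecycle, sketched from the interface above: a statement is
  created and registered in thd->stmt_map when the client issues
  COM_STMT_PREPARE (or the SQL PREPARE command); prepare() parses the
  query and sends the metadata back; execute_loop() runs the statement
  for each COM_STMT_EXECUTE, re-preparing it via reprepare() (which uses
  validate_metadata() to check that the result-set metadata is still
  compatible) when the underlying objects have changed; deallocate()
  destroys the statement on COM_STMT_CLOSE or DEALLOCATE PREPARE.
*/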
/**
Execute one SQL statement in an isolated context.
*/
class Execute_sql_statement: public Server_runnable
{
public:
Execute_sql_statement(LEX_STRING sql_text);
virtual bool execute_server_code(THD *thd);
private:
LEX_STRING m_sql_text;
};
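/*
  Sketch of the intended flow (based on the interfaces in this file):
  Ed_connection::execute_direct() wraps the SQL text in an
  Execute_sql_statement and hands it to
  Prepared_statement::execute_server_runnable(), which switches to the
  statement's own arena and invokes execute_server_code(thd); the
  results are captured by Protocol_local below.
*/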
class Ed_connection;
/**
  Protocol_local: a helper class that intercepts result-set data
  and buffers it in memory instead of writing it to the network.
*/
class Protocol_local :public Protocol
{
public:
Protocol_local(THD *thd, Ed_connection *ed_connection);
~Protocol_local() { free_root(&m_rset_root, MYF(0)); }
protected:
virtual void prepare_for_resend();
virtual bool write();
virtual bool store_null();
virtual bool store_tiny(longlong from);
virtual bool store_short(longlong from);
virtual bool store_long(longlong from);
virtual bool store_longlong(longlong from, bool unsigned_flag);
virtual bool store_decimal(const my_decimal *);
virtual bool store(const char *from, size_t length, CHARSET_INFO *cs);
virtual bool store(const char *from, size_t length,
CHARSET_INFO *fromcs, CHARSET_INFO *tocs);
virtual bool store(MYSQL_TIME *time);
virtual bool store_date(MYSQL_TIME *time);
virtual bool store_time(MYSQL_TIME *time);
virtual bool store(float value, uint32 decimals, String *buffer);
virtual bool store(double value, uint32 decimals, String *buffer);
virtual bool store(Field *field);
virtual bool send_result_set_metadata(List<Item> *list, uint flags);
virtual bool send_out_parameters(List<Item_param> *sp_params);
#ifdef EMBEDDED_LIBRARY
void remove_last_row();
#endif
virtual enum enum_protocol_type type() { return PROTOCOL_LOCAL; };
virtual bool send_ok(uint server_status, uint statement_warn_count,
ulonglong affected_rows, ulonglong last_insert_id,
const char *message);
virtual bool send_eof(uint server_status, uint statement_warn_count);
virtual bool send_error(uint sql_errno, const char *err_msg, const char* sqlstate);
private:
bool store_string(const char *str, size_t length,
CHARSET_INFO *src_cs, CHARSET_INFO *dst_cs);
bool store_column(const void *data, size_t length);
void opt_add_row_to_rset();
private:
Ed_connection *m_connection;
MEM_ROOT m_rset_root;
List<Ed_row> *m_rset;
size_t m_column_count;
Ed_column *m_current_row;
Ed_column *m_current_column;
};
/******************************************************************************
Implementation
******************************************************************************/
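/**
  Check whether parameter number param_no is marked NULL in the
  null-bitmap that precedes the parameter values in a COM_STMT_EXECUTE
  packet: bit (param_no & 7) of byte (param_no / 8). For example,
  param_no 10 tests bit 2 of pos[1].
*/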
inline bool is_param_null(const uchar *pos, ulong param_no)
{
return pos[param_no/8] & (1 << (param_no & 7));
}
/**
  Find a prepared statement in the statement map by id.

  Returns NULL if there is no prepared statement with the given id;
  reporting an error in that case is left to the caller.

  @param thd                thread handle
  @param id                 statement id

  @return
    0 if the statement was not found, a pointer otherwise.
*/
static Prepared_statement *
find_prepared_statement(THD *thd, ulong id)
{
/*
  To strictly separate namespaces of SQL prepared statements and C API
  prepared statements find() will return 0 if there is a named prepared
  statement with such an id.
*/
Statement *stmt= thd->stmt_map.find(id);
if (stmt == 0 || stmt->type() != Query_arena::PREPARED_STATEMENT)
return NULL;
return (Prepared_statement *) stmt;
}
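/*
  A sketch of the typical lookup performed by the handlers of
  COM_STMT_EXECUTE/FETCH/RESET/CLOSE (hypothetical caller; `packet'
  points at the command payload, which starts with the 4-byte id):

    ulong stmt_id= uint4korr(packet);
    Prepared_statement *stmt;
    if (!(stmt= find_prepared_statement(thd, stmt_id)))
      return;   // report ER_UNKNOWN_STMT_HANDLER and bail out
*/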
/**
Send prepared statement id and metadata to the client after prepare.
@todo
Fix this nasty upcast from List<Item_param> to List<Item>
@return
0 in case of success, 1 otherwise
*/
#ifndef EMBEDDED_LIBRARY
static bool send_prep_stmt(Prepared_statement *stmt, uint columns)
{
NET *net= &stmt->thd->net;
uchar buff[12];
uint tmp;
int error;
THD *thd= stmt->thd;
DBUG_ENTER("send_prep_stmt");
buff[0]= 0; /* OK packet indicator */
int4store(buff+1, stmt->id);
int2store(buff+5, columns);
int2store(buff+7, stmt->param_count);
buff[9]= 0; // Guard against a 4.1 client
tmp= min(stmt->thd->warning_info->statement_warn_count(), 65535);
int2store(buff+10, tmp);
/*
Send types and names of placeholders to the client
XXX: fix this nasty upcast from List<Item_param> to List<Item>
*/
error= my_net_write(net, buff, sizeof(buff));
if (stmt->param_count && ! error)
{
error= thd->protocol_text.send_result_set_metadata((List<Item> *)
&stmt->lex->param_list,
Protocol::SEND_EOF);
}
if (!error)
/* Flag that a response has already been sent */
thd->stmt_da->disable_status();
DBUG_RETURN(error);
}
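/*
  Resulting layout of the 12-byte prepare-OK packet built above, shown
  for a hypothetical statement with id 1, two columns and two parameters
  (all multi-byte fields are little-endian):

    00            OK packet indicator
    01 00 00 00   statement id
    02 00         column count
    02 00         parameter count
    00            filler, guards against a 4.1 client
    00 00         warning count
*/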
#else
static bool send_prep_stmt(Prepared_statement *stmt,
uint columns __attribute__((unused)))
{
THD *thd= stmt->thd;
thd->client_stmt_id= stmt->id;
thd->client_param_count= stmt->param_count;
thd->clear_error();
thd->stmt_da->disable_status();
return 0;
}
#endif /*!EMBEDDED_LIBRARY*/
#ifndef EMBEDDED_LIBRARY
/**
  Read the length of the parameter data and return it to the caller.

  Reads the data length, positions the packet at the first byte after
  it, and returns the length to the caller.

  @param packet             a pointer to the data
  @param len                remaining packet length

  @return
    Length of data piece.
*/
static ulong get_param_length(uchar **packet, ulong len)
{
reg1 uchar *pos= *packet;
if (len < 1)
return 0;
if (*pos < 251)
{
(*packet)++;
return (ulong) *pos;
}
if (len < 3)
return 0;
if (*pos == 252)
{
(*packet)+=3;
return (ulong) uint2korr(pos+1);
}
if (len < 4)
return 0;
if (*pos == 253)
{
(*packet)+=4;
return (ulong) uint3korr(pos+1);
}
if (len < 5)
return 0;
(*packet)+=9; // Must be 254 when here
/*
  In our client-server protocol all numbers bigger than 2^24 are
  stored as 8 bytes with uint8korr. Here we always know that the
  parameter length is less than 2^32, so we don't look at the second
  4 bytes. But we still need to obey the protocol, hence the 9 in the
  assignment above.
*/
return (ulong) uint4korr(pos+1);
}
#else
#define get_param_length(packet, len) len
#endif /*!EMBEDDED_LIBRARY*/
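/*
  Worked example of the length encoding read by get_param_length(): a
  first byte below 251 is the length itself; 252, 253 and 254 announce a
  2-, 3- and 8-byte length respectively (only the low 4 bytes are used in
  the 8-byte case). All integers are little-endian, so:

    0x2A            -> length 42,             packet advanced by 1 byte
    0xFC 0x10 0x27  -> length 0x2710 = 10000, packet advanced by 3 bytes
*/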
/**
  Data conversion routines.

  All these functions read the data from pos, convert it to the requested
  type, and assign it to param; pos is advanced by a predefined length.

  Note that NULL handling is examined at first execution only (i.e. when
  the input types are altered); for all subsequent executions no values
  are read for NULL parameters.

  @param param              parameter item
  @param pos                input data buffer
  @param len                length of data in the buffer
*/
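/*
  A sketch of how these routines are driven (hypothetical caller, modelled
  on the binary-protocol parameter assignment in this file): each
  Item_param has a set_param_func pointer installed by
  setup_one_conversion_function() below, and the execute code walks the
  parameter array roughly like this:

    for (uint i= 0; i < stmt->param_count; i++)
    {
      Item_param *param= stmt->param_array[i];
      if (!is_param_null(null_bitmap, i))
        param->set_param_func(param, &read_pos,
                              (ulong) (data_end - read_pos));
      else
        param->set_null();
    }
*/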
static void set_param_tiny(Item_param *param, uchar **pos, ulong len)
{
#ifndef EMBEDDED_LIBRARY
if (len < 1)
return;
#endif
int8 value= (int8) **pos;
param->set_int(param->unsigned_flag ? (longlong) ((uint8) value) :
(longlong) value, 4);
*pos+= 1;
}
static void set_param_short(Item_param *param, uchar **pos, ulong len)
{
int16 value;
#ifndef EMBEDDED_LIBRARY
if (len < 2)
return;
value= sint2korr(*pos);
#else
shortget(value, *pos);
#endif
param->set_int(param->unsigned_flag ? (longlong) ((uint16) value) :
(longlong) value, 6);
*pos+= 2;
}
static void set_param_int32(Item_param *param, uchar **pos, ulong len)
{
int32 value;
#ifndef EMBEDDED_LIBRARY
if (len < 4)
return;
value= sint4korr(*pos);
#else
longget(value, *pos);
#endif
param->set_int(param->unsigned_flag ? (longlong) ((uint32) value) :
(longlong) value, 11);
*pos+= 4;
}
static void set_param_int64(Item_param *param, uchar **pos, ulong len)
{
longlong value;
#ifndef EMBEDDED_LIBRARY
if (len < 8)
return;
value= (longlong) sint8korr(*pos);
#else
longlongget(value, *pos);
#endif
param->set_int(value, 21);
*pos+= 8;
}
static void set_param_float(Item_param *param, uchar **pos, ulong len)
{
float data;
#ifndef EMBEDDED_LIBRARY
if (len < 4)
return;
float4get(data,*pos);
#else
floatget(data, *pos);
#endif
param->set_double((double) data);
*pos+= 4;
}
static void set_param_double(Item_param *param, uchar **pos, ulong len)
{
double data;
#ifndef EMBEDDED_LIBRARY
if (len < 8)
return;
float8get(data,*pos);
#else
doubleget(data, *pos);
#endif
param->set_double((double) data);
*pos+= 8;
}
static void set_param_decimal(Item_param *param, uchar **pos, ulong len)
{
ulong length= get_param_length(pos, len);
param->set_decimal((char*)*pos, length);
*pos+= length;
}
#ifndef EMBEDDED_LIBRARY
/*
Read date/time/datetime parameter values from network (binary
protocol). See writing counterparts of these functions in
libmysql.c (store_param_{time,date,datetime}).
*/
/**
@todo
Add warning 'Data truncated' here
*/
static void set_param_time(Item_param *param, uchar **pos, ulong len)
{
MYSQL_TIME tm;
ulong length= get_param_length(pos, len);
if (length >= 8)
{
uchar *to= *pos;
uint day;
tm.neg= (bool) to[0];
day= (uint) sint4korr(to+1);
tm.hour= (uint) to[5] + day * 24;
tm.minute= (uint) to[6];
tm.second= (uint) to[7];
tm.second_part= (length > 8) ? (ulong) sint4korr(to+8) : 0;
if (tm.hour > 838)
{
/* TODO: add warning 'Data truncated' here */
tm.hour= 838;
tm.minute= 59;
tm.second= 59;
}
tm.day= tm.year= tm.month= 0;
}
else
set_zero_time(&tm, MYSQL_TIMESTAMP_TIME);
param->set_time(&tm, MYSQL_TIMESTAMP_TIME,
MAX_TIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
*pos+= length;
}
static void set_param_datetime(Item_param *param, uchar **pos, ulong len)
{
MYSQL_TIME tm;
ulong length= get_param_length(pos, len);
if (length >= 4)
{
uchar *to= *pos;
tm.neg= 0;
tm.year= (uint) sint2korr(to);
tm.month= (uint) to[2];
tm.day= (uint) to[3];
if (length > 4)
{
tm.hour= (uint) to[4];
tm.minute= (uint) to[5];
tm.second= (uint) to[6];
}
else
tm.hour= tm.minute= tm.second= 0;
tm.second_part= (length > 7) ? (ulong) sint4korr(to+7) : 0;
}
else
set_zero_time(&tm, MYSQL_TIMESTAMP_DATETIME);
param->set_time(&tm, MYSQL_TIMESTAMP_DATETIME,
MAX_DATETIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
*pos+= length;
}
static void set_param_date(Item_param *param, uchar **pos, ulong len)
{
MYSQL_TIME tm;
ulong length= get_param_length(pos, len);
if (length >= 4)
{
uchar *to= *pos;
tm.year= (uint) sint2korr(to);
tm.month= (uint) to[2];
tm.day= (uint) to[3];
tm.hour= tm.minute= tm.second= 0;
tm.second_part= 0;
tm.neg= 0;
}
else
set_zero_time(&tm, MYSQL_TIMESTAMP_DATE);
param->set_time(&tm, MYSQL_TIMESTAMP_DATE,
MAX_DATE_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
*pos+= length;
}
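/*
  Summary of the binary date/time layouts decoded above. A one-byte
  length (0, 4, 7, 8, 11 or 12) precedes each payload; missing trailing
  fields default to zero:

    TIME:     neg:1  days:4  hour:1  minute:1  second:1  [usec:4]
    DATE:     year:2  month:1  day:1
    DATETIME: year:2  month:1  day:1  [hour:1 minute:1 second:1 [usec:4]]
*/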
#else /*!EMBEDDED_LIBRARY*/
/**
@todo
Add warning 'Data truncated' here
*/
void set_param_time(Item_param *param, uchar **pos, ulong len)
{
MYSQL_TIME tm= *((MYSQL_TIME*)*pos);
tm.hour+= tm.day * 24;
tm.day= tm.year= tm.month= 0;
if (tm.hour > 838)
{
/* TODO: add warning 'Data truncated' here */
tm.hour= 838;
tm.minute= 59;
tm.second= 59;
}
param->set_time(&tm, MYSQL_TIMESTAMP_TIME,
MAX_TIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
}
void set_param_datetime(Item_param *param, uchar **pos, ulong len)
{
MYSQL_TIME tm= *((MYSQL_TIME*)*pos);
tm.neg= 0;
param->set_time(&tm, MYSQL_TIMESTAMP_DATETIME,
MAX_DATETIME_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
}
void set_param_date(Item_param *param, uchar **pos, ulong len)
{
MYSQL_TIME *to= (MYSQL_TIME*)*pos;
param->set_time(to, MYSQL_TIMESTAMP_DATE,
MAX_DATE_WIDTH * MY_CHARSET_BIN_MB_MAXLEN);
}
#endif /*!EMBEDDED_LIBRARY*/
static void set_param_str(Item_param *param, uchar **pos, ulong len)
{
ulong length= get_param_length(pos, len);
if (length > len)
length= len;
param->set_str((const char *)*pos, length);
*pos+= length;
}
#undef get_param_length
static void setup_one_conversion_function(THD *thd, Item_param *param,
uchar param_type)
{
switch (param_type) {
case MYSQL_TYPE_TINY:
param->set_param_func= set_param_tiny;
param->item_type= Item::INT_ITEM;
param->item_result_type= INT_RESULT;
break;
case MYSQL_TYPE_SHORT:
param->set_param_func= set_param_short;
param->item_type= Item::INT_ITEM;
param->item_result_type= INT_RESULT;
break;
case MYSQL_TYPE_LONG:
param->set_param_func= set_param_int32;
param->item_type= Item::INT_ITEM;
param->item_result_type= INT_RESULT;
break;
case MYSQL_TYPE_LONGLONG:
param->set_param_func= set_param_int64;
param->item_type= Item::INT_ITEM;
param->item_result_type= INT_RESULT;
break;
case MYSQL_TYPE_FLOAT:
param->set_param_func= set_param_float;
param->item_type= Item::REAL_ITEM;
param->item_result_type= REAL_RESULT;
break;
case MYSQL_TYPE_DOUBLE:
param->set_param_func= set_param_double;
param->item_type= Item::REAL_ITEM;
param->item_result_type= REAL_RESULT;
break;
case MYSQL_TYPE_DECIMAL:
case MYSQL_TYPE_NEWDECIMAL:
param->set_param_func= set_param_decimal;
param->item_type= Item::DECIMAL_ITEM;
param->item_result_type= DECIMAL_RESULT;
break;
case MYSQL_TYPE_TIME:
param->set_param_func= set_param_time;
param->item_type= Item::STRING_ITEM;
param->item_result_type= STRING_RESULT;
break;
case MYSQL_TYPE_DATE:
param->set_param_func= set_param_date;
param->item_type= Item::STRING_ITEM;
param->item_result_type= STRING_RESULT;
break;
case MYSQL_TYPE_DATETIME:
case MYSQL_TYPE_TIMESTAMP:
param->set_param_func= set_param_datetime;
param->item_type= Item::STRING_ITEM;
param->item_result_type= STRING_RESULT;
break;
case MYSQL_TYPE_TINY_BLOB:
case MYSQL_TYPE_MEDIUM_BLOB:
case MYSQL_TYPE_LONG_BLOB:
case MYSQL_TYPE_BLOB:
param->set_param_func= set_param_str;
param->value.cs_info.character_set_of_placeholder= &my_charset_bin;
param->value.cs_info.character_set_client=
thd->variables.character_set_client;
DBUG_ASSERT(thd->variables.character_set_client);
param->value.cs_info.final_character_set_of_str_value= &my_charset_bin;
param->item_type= Item::STRING_ITEM;
param->item_result_type= STRING_RESULT;
break;
default:
/*
The client library ensures that we won't get any other typecodes
except the typecodes above and the typecodes for string types.
Marking the label as 'default' lets us handle malformed packets
as well.
*/
{
CHARSET_INFO *fromcs= thd->variables.character_set_client;
CHARSET_INFO *tocs= thd->variables.collation_connection;
uint32 dummy_offset;
param->value.cs_info.character_set_of_placeholder= fromcs;
param->value.cs_info.character_set_client= fromcs;
/*
Setup source and destination character sets so that they
are different only if conversion is necessary: this will
make later checks easier.
*/
param->value.cs_info.final_character_set_of_str_value=
String::needs_conversion(0, fromcs, tocs, &dummy_offset) ?
tocs : fromcs;
param->set_param_func= set_param_str;
/*
Exact value of max_length is not known unless data is converted to
charset of connection, so we have to set it later.
*/
param->item_type= Item::STRING_ITEM;
param->item_result_type= STRING_RESULT;
}
}
param->param_type= (enum enum_field_types) param_type;
}
#ifndef EMBEDDED_LIBRARY
/**
Check whether this parameter data type is compatible with long data.
Used to detect whether a long data stream has been supplied to an
incompatible data type.
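The check relies on the BLOB and string type codes forming a
contiguous range in enum enum_field_types.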
*/
inline bool is_param_long_data_type(Item_param *param)
{
return ((param->param_type >= MYSQL_TYPE_TINY_BLOB) &&
(param->param_type <= MYSQL_TYPE_STRING));
}
/**
Routines to assign parameters from data supplied by the client.
Update the parameter markers by reading data from the packet and
generate a valid query for logging.
@note
This function, along with other _with_log functions, is called when one of
binary, slow or general logs is open. Logging of prepared statements in
all cases is performed by means of conventional queries: if parameter
data was supplied from C API, each placeholder in the query is
replaced with its actual value; if we're logging a [Dynamic] SQL
prepared statement, parameter markers are replaced with variable names.
Example:
@verbatim
mysqld_stmt_prepare("UPDATE t1 SET a=a*1.25 WHERE a=?")
--> general log gets [Prepare] UPDATE t1 SET a=a*1.25 WHERE a=?
mysqld_stmt_execute(stmt);
--> general and binary logs get
[Execute] UPDATE t1 SET a=a*1.25 WHERE a=1
@endverbatim
If a statement has been prepared using SQL syntax:
@verbatim
PREPARE stmt FROM "UPDATE t1 SET a=a*1.25 WHERE a=?"
--> general log gets
[Query] PREPARE stmt FROM "UPDATE ..."
EXECUTE stmt USING @a
--> general log gets
[Query] EXECUTE stmt USING @a;
@endverbatim
@retval
0 if success
@retval
1 otherwise
*/
static bool insert_params_with_log(Prepared_statement *stmt, uchar *null_array,
uchar *read_pos, uchar *data_end,
String *query)
{
THD *thd= stmt->thd;
Item_param **begin= stmt->param_array;
Item_param **end= begin + stmt->param_count;
uint32 length= 0;
String str;
const String *res;
DBUG_ENTER("insert_params_with_log");
if (query->copy(stmt->query(), stmt->query_length(), default_charset_info))
DBUG_RETURN(1);
for (Item_param **it= begin; it < end; ++it)
{
Item_param *param= *it;
if (param->state != Item_param::LONG_DATA_VALUE)
{
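/* Bit i set in the null bitmap marks parameter i as NULL. */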
if (is_param_null(null_array, (uint) (it - begin)))
param->set_null();
else
{
if (read_pos >= data_end)
DBUG_RETURN(1);
param->set_param_func(param, &read_pos, (uint) (data_end - read_pos));
if (param->state == Item_param::NO_VALUE)
DBUG_RETURN(1);
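/*
LIMIT takes only nonnegative integer constants, but in the binary
protocol the client may send the value as a string or float. Coerce
such parameters to INT here and reject negative values below.
*/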
if (param->limit_clause_param && param->state != Item_param::INT_VALUE)
{
param->set_int(param->val_int(), MY_INT64_NUM_DECIMAL_DIGITS);
param->item_type= Item::INT_ITEM;
if (!param->unsigned_flag && param->value.integer < 0)
DBUG_RETURN(1);
}
}
}
/*
A long data stream was supplied for this parameter marker.
This was done after prepare, prior to providing a placeholder
type (the types are supplied at execute). Check that the
supplied type of placeholder can accept a data stream.
*/
else if (! is_param_long_data_type(param))
DBUG_RETURN(1);
res= param->query_val_str(thd, &str);
if (param->convert_str_value(thd))
DBUG_RETURN(1); /* out of memory */
if (query->replace(param->pos_in_query+length, 1, *res))
DBUG_RETURN(1);
length+= res->length()-1;
}
DBUG_RETURN(0);
}
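/**
Assign parameter values from data supplied by the client.
Non-logging counterpart of insert_params_with_log(): parameter values
are read from the execute packet, but no query text is built.
*/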
static bool insert_params(Prepared_statement *stmt, uchar *null_array,
uchar *read_pos, uchar *data_end,
String *expanded_query)
{
Item_param **begin= stmt->param_array;
Item_param **end= begin + stmt->param_count;
DBUG_ENTER("insert_params");
for (Item_param **it= begin; it < end; ++it)
{
Item_param *param= *it;
if (param->state != Item_param::LONG_DATA_VALUE)
{
if (is_param_null(null_array, (uint) (it - begin)))
param->set_null();
else
{
if (read_pos >= data_end)
DBUG_RETURN(1);
param->set_param_func(param, &read_pos, (uint) (data_end - read_pos));
if (param->state == Item_param::NO_VALUE)
DBUG_RETURN(1);
}
}
/*
A long data stream was supplied for this parameter marker.
This was done after prepare, prior to providing a placeholder
type (the types are supplied at execute). Check that the
supplied type of placeholder can accept a data stream.
*/
else if (! is_param_long_data_type(param))
DBUG_RETURN(1);
if (param->convert_str_value(stmt->thd))
DBUG_RETURN(1); /* out of memory */
}
DBUG_RETURN(0);
}
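/**
Set up parameter conversion functions from the type codes supplied by
the client. A sketch of the execute-packet fragment consumed here and
by the insert_params* functions above:
@verbatim
[null bitmap: (param_count + 7) / 8 bytes]
[new-params-bound flag: 1 byte]
if the flag is set:
[2-byte type code per parameter; bit 15 is the unsigned flag]
@endverbatim
*/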
static bool setup_conversion_functions(Prepared_statement *stmt,
uchar **data, uchar *data_end)
{
/* skip null bits */
uchar *read_pos= *data + (stmt->param_count+7) / 8;
DBUG_ENTER("setup_conversion_functions");
if (*read_pos++) // types supplied / first execute
{
/*
First execute or types altered by the client: set up the
conversion routines for all parameters (one time).
*/
Item_param **it= stmt->param_array;
Item_param **end= it + stmt->param_count;
THD *thd= stmt->thd;
for (; it < end; ++it)
{
ushort typecode;
const uint signed_bit= 1 << 15;
if (read_pos >= data_end)
DBUG_RETURN(1);
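/* Type codes are 2 bytes, little-endian; bit 15 carries the unsigned flag. */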
typecode= sint2korr(read_pos);
read_pos+= 2;
(**it).unsigned_flag= test(typecode & signed_bit);
setup_one_conversion_function(thd, *it, (uchar) (typecode & ~signed_bit));
}
}
*data= read_pos;
DBUG_RETURN(0);
}
#else
/**
Embedded counterparts of parameter assignment routines.
The main difference between the embedded library and the server is
that in the embedded case we don't serialize/deserialize parameter data.
Additionally, for an unknown reason, the client-side flag raised for
changed types of placeholders is ignored, and we simply set up the
conversion functions at each execute (TODO: fix).
*/
static bool emb_insert_params(Prepared_statement *stmt, String *expanded_query)
{
THD *thd= stmt->thd;
Item_param **it= stmt->param_array;
Item_param **end= it + stmt->param_count;
MYSQL_BIND *client_param= stmt->thd->client_params;
DBUG_ENTER("emb_insert_params");
for (; it < end; ++it, ++client_param)
{
Item_param *param= *it;
setup_one_conversion_function(thd, param, client_param->buffer_type);
if (param->state != Item_param::LONG_DATA_VALUE)
{
if (*client_param->is_null)
param->set_null();
else
{
uchar *buff= (uchar*) client_param->buffer;
param->unsigned_flag= client_param->is_unsigned;
param->set_param_func(param, &buff,
client_param->length ?
*client_param->length :
client_param->buffer_length);
if (param->state == Item_param::NO_VALUE)
DBUG_RETURN(1);
}
}
if (param->convert_str_value(thd))
DBUG_RETURN(1); /* out of memory */
}
DBUG_RETURN(0);
}
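/**
Embedded-server counterpart of insert_params_with_log(): assign
parameter values from the client's MYSQL_BIND array and build the
query text used for logging.
*/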
static bool emb_insert_params_with_log(Prepared_statement *stmt,
String *query)
{
THD *thd= stmt->thd;
Item_param **it= stmt->param_array;
Item_param **end= it + stmt->param_count;
MYSQL_BIND *client_param= thd->client_params;
String str;
const String *res;
uint32 length= 0;
DBUG_ENTER("emb_insert_params_with_log");
if (query->copy(stmt->query(), stmt->query_length(), default_charset_info))
DBUG_RETURN(1);
for (; it < end; ++it, ++client_param)
{
Item_param *param= *it;
setup_one_conversion_function(thd, param, client_param->buffer_type);
if (param->state != Item_param::LONG_DATA_VALUE)
{
if (*client_param->is_null)
param->set_null();
else
{
uchar *buff= (uchar*)client_param->buffer;
param->unsigned_flag= client_param->is_unsigned;
param->set_param_func(param, &buff,
client_param->length ?
*client_param->length :
client_param->buffer_length);
if (param->state == Item_param::NO_VALUE)
DBUG_RETURN(1);
}
}
res= param->query_val_str(thd, &str);
if (param->convert_str_value(thd))
DBUG_RETURN(1); /* out of memory */
if (query->replace(param->pos_in_query+length, 1, *res))
DBUG_RETURN(1);
length+= res->length()-1;
}
DBUG_RETURN(0);
}
#endif /*!EMBEDDED_LIBRARY*/
/**
Set up data conversion routines using an array of parameter
markers from the original prepared statement.
Swap the parameter data of the original prepared
statement to the new one.
Used only when we re-prepare a prepared statement.
There are two reasons for this function to exist:
1) In the binary client/server protocol, parameter metadata
is sent only at first execute. Consequently, if we need to
reprepare a prepared statement at a subsequent execution,
we may not have metadata information in the packet.
In that case we use the parameter array of the original
prepared statement to set up parameter types of the new
prepared statement.
2) In the binary client/server protocol, we may supply
long data in pieces. When the last piece is supplied,
we assemble the pieces and convert them from client
character set to the connection character set. After
that the parameter value is only available inside
the parameter, the original pieces are lost, and thus
we can only assign the corresponding parameter of the
reprepared statement from the original value.
@param[out] param_array_dst parameter markers of the new statement
@param[in] param_array_src parameter markers of the original
statement
@param[in] param_count total number of parameters. Is the
same in src and dst arrays, since
the statement query is the same
@return this function never fails
*/
static void
swap_parameter_array(Item_param **param_array_dst,
Item_param **param_array_src,
uint param_count)
{
Item_param **dst= param_array_dst;
Item_param **src= param_array_src;
Item_param **end= param_array_dst + param_count;
for (; dst < end; ++src, ++dst)
(*dst)->set_param_type_and_swap_value(*src);
}
/**
Assign prepared statement parameters from user variables.
@param stmt Statement
@param varnames List of variables. Caller must ensure that number
of variables in the list is equal to number of statement
parameters
@param query Ignored
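Used for statements prepared with SQL syntax, e.g.:
@verbatim
PREPARE stmt FROM 'SELECT * FROM t1 WHERE a=?';
EXECUTE stmt USING @a; -- @a supplies the value of the marker
@endverbatim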
*/
static bool insert_params_from_vars(Prepared_statement *stmt,
List<LEX_STRING>& varnames,
String *query __attribute__((unused)))
{
Item_param **begin= stmt->param_array;
Item_param **end= begin + stmt->param_count;
user_var_entry *entry;
LEX_STRING *varname;
List_iterator<LEX_STRING> var_it(varnames);
DBUG_ENTER("insert_params_from_vars");
for (Item_param **it= begin; it < end; ++it)
{
Item_param *param= *it;
varname= var_it++;
entry= (user_var_entry*)my_hash_search(&stmt->thd->user_vars,
(uchar*) varname->str,
varname->length);
if (param->set_from_user_var(stmt->thd, entry) ||
param->convert_str_value(stmt->thd))
DBUG_RETURN(1);
}
DBUG_RETURN(0);
}
/**
Do the same as insert_params_from_vars but also construct the query
text for the binary log.
@param stmt Prepared statement
@param varnames List of variables. Caller must ensure that number of
variables in the list is equal to number of statement
parameters
@param query The query with parameter markers replaced with corresponding
user variables that were used to execute the query.
*/
static bool insert_params_from_vars_with_log(Prepared_statement *stmt,
List<LEX_STRING>& varnames,
String *query)
{
Item_param **begin= stmt->param_array;
Item_param **end= begin + stmt->param_count;
user_var_entry *entry;
LEX_STRING *varname;
List_iterator<LEX_STRING> var_it(varnames);
String buf;
const String *val;
uint32 length= 0;
THD *thd= stmt->thd;
DBUG_ENTER("insert_params_from_vars_with_log");
if (query->copy(stmt->query(), stmt->query_length(), default_charset_info))
DBUG_RETURN(1);
for (Item_param **it= begin; it < end; ++it)
{
Item_param *param= *it;
varname= var_it++;
entry= (user_var_entry *) my_hash_search(&thd->user_vars, (uchar*)
varname->str, varname->length);
/*
We have to call the setup_one_conversion_function() here to set
the parameter's members that might be needed later
(e.g. value.cs_info.character_set_client is used in the query_val_str()).
*/
setup_one_conversion_function(thd, param, param->param_type);
if (param->set_from_user_var(thd, entry))
DBUG_RETURN(1);
val= param->query_val_str(thd, &buf);
if (param->convert_str_value(thd))
DBUG_RETURN(1); /* out of memory */
if (query->replace(param->pos_in_query+length, 1, *val))
DBUG_RETURN(1);
length+= val->length()-1;
}
DBUG_RETURN(0);
}
/**
Validate INSERT statement.
@param stmt prepared statement
@param table_list global/local table list
@retval
FALSE success
@retval
TRUE error, error message is set in THD
*/
static bool mysql_test_insert(Prepared_statement *stmt,
TABLE_LIST *table_list,
List<Item> &fields,
List<List_item> &values_list,
List<Item> &update_fields,
List<Item> &update_values,
enum_duplicates duplic)
{
THD *thd= stmt->thd;
List_iterator_fast<List_item> its(values_list);
List_item *values;
DBUG_ENTER("mysql_test_insert");
if (insert_precheck(thd, table_list))
goto error;
/*
Open a temporary memory pool for temporary data allocated by derived
tables and the preparation procedure.
Note that this is done without locks (they should not be needed as we
will not access any data here).
If we were to use locks, we would have to ensure we are not using
TL_WRITE_DELAYED, as having two such locks can cause table corruption.
*/
if (open_normal_and_derived_tables(thd, table_list,
MYSQL_OPEN_FORCE_SHARED_MDL))
goto error;
if ((values= its++))
{
uint value_count;
ulong counter= 0;
Item *unused_conds= 0;
if (table_list->table)
{
// Non-NULL sentinel so that insert_values is not allocated during prepare
table_list->table->insert_values=(uchar *)1;
}
if (mysql_prepare_insert(thd, table_list, table_list->table,
fields, values, update_fields, update_values,
duplic, &unused_conds, FALSE, FALSE, FALSE))
goto error;
value_count= values->elements;
its.rewind();
if (table_list->lock_type == TL_WRITE_DELAYED &&
- For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. - Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
2006-06-04 17:52:22 +02:00
!(table_list->table->file->ha_table_flags() & HA_CAN_INSERT_DELAYED))
{
my_error(ER_DELAYED_NOT_SUPPORTED, MYF(0), (table_list->view ?
table_list->view_name.str :
table_list->table_name));
goto error;
}
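  /*
    Check every row in the VALUES lists: each row must supply exactly
    value_count expressions, and each expression is fixed with
    MARK_COLUMNS_NONE, since values to be inserted are not read from the
    table and thus no column usage bits need to be set at this point.
  */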
while ((values= its++))
{
counter++;
if (values->elements != value_count)
{
my_error(ER_WRONG_VALUE_COUNT_ON_ROW, MYF(0), counter);
goto error;
}
if (setup_fields(thd, 0, *values, MARK_COLUMNS_NONE, 0, 0))
goto error;
}
}
DBUG_RETURN(FALSE);
error:
/* insert_values is cleared in open_table */
DBUG_RETURN(TRUE);
}
/**
Validate UPDATE statement.
@param stmt prepared statement
  @param table_list list of tables used in this query
@todo
- here we should send types of placeholders to the client.
@retval
0 success
@retval
1 error, error message is set in THD
@retval
2 convert to multi_update
*/
static int mysql_test_update(Prepared_statement *stmt,
TABLE_LIST *table_list)
{
int res;
THD *thd= stmt->thd;
uint table_count= 0;
SELECT_LEX *select= &stmt->lex->select_lex;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
uint want_privilege;
#endif
DBUG_ENTER("mysql_test_update");
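  /*
    Tables are opened with MYSQL_OPEN_FORCE_SHARED_MDL: only shared
    metadata locks are taken, which is enough for preparing (but not
    executing) the statement and avoids blocking concurrent DDL longer
    than necessary.
  */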
if (update_precheck(thd, table_list) ||
open_tables(thd, &table_list, &table_count, MYSQL_OPEN_FORCE_SHARED_MDL))
goto error;
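  /*
    A multi-table view cannot be updated through the single-table code
    path; return 2 so that the caller converts this statement to a
    multi-update (see @retval above).
  */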
if (table_list->multitable_view)
{
DBUG_ASSERT(table_list->view != 0);
DBUG_PRINT("info", ("Switch to multi-update"));
/* pass counter value */
thd->lex->table_count= table_count;
/* convert to multiupdate */
DBUG_RETURN(2);
}
  /*
    thd->fill_derived_tables() is certainly FALSE here, as this is
    preparation of a prepared statement, so we do not even check it.
  */
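  /* Prepare derived tables (views and subqueries in the FROM clause)
     so that the field references below can be resolved against them. */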
if (mysql_handle_derived(thd->lex, &mysql_derived_prepare))
goto error;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
/* Force privilege re-checking for views after they have been opened. */
want_privilege= (table_list->view ? UPDATE_ACL :
table_list->grant.want_privilege);
#endif
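  /*
    mysql_prepare_update() does the single-table UPDATE setup: it
    resolves the WHERE condition and the ORDER BY list passed below.
  */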
if (mysql_prepare_update(thd, table_list, &select->where,
select->order_list.elements,
select->order_list.first))
goto error;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
table_list->grant.want_privilege= want_privilege;
table_list->table->grant.want_privilege= want_privilege;
table_list->register_want_access(want_privilege);
#endif
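  /*
    Resolve the SET target columns (select->item_list for an UPDATE).
    View items are deliberately not wrapped here, presumably so that
    references resolve to the underlying base-table fields and the
    column-privilege bookkeeping around this call operates on the real
    fields.
  */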
thd->lex->select_lex.no_wrap_view_item= TRUE;
res= setup_fields(thd, 0, select->item_list, MARK_COLUMNS_READ, 0, 0);
thd->lex->select_lex.no_wrap_view_item= FALSE;
if (res)
goto error;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
/* Check values */
table_list->grant.want_privilege=
table_list->table->grant.want_privilege=
(SELECT_ACL & ~table_list->table->grant.privilege);
table_list->register_want_access(SELECT_ACL);
#endif
- For table handler that doesn't support NULL in keys, we would give an error when creating a primary key with NULL fields, even after the fields has been automaticly converted to NOT NULL. - Creating a primary key on a SPATIAL key, would fail if field was not declared as NOT NULL. Cleanups: - Removed not used condition argument to setup_tables - Removed not needed item function reset_query_id_processor(). - Field->add_index is removed. Now this is instead maintained in (field->flags & FIELD_IN_ADD_INDEX) - Field->fieldnr is removed (use field->field_index instead) - New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort and should also enable us to return column results in some cases where we couldn't before. - Changed column bitmap handling in opt_range.cc to be aligned with TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code) - Broke up found too long lines - Moved some variable declaration at start of function for better code readability. - Removed some not used arguments from functions. (setup_fields(), mysql_prepare_insert_check_table()) - setup_fields() now takes an enum instead of an int for marking columns usage. - For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of handler::ha_xxxx() for faster execution. - Changed some constants to enum's and define's. - Using separate column read and write sets allows for easier checking of timestamp field was set by statement. - Remove calls to free_io_cache() as this is now done automaticly in ha_reset() - Don't build table->normalized_path as this is now identical to table->path (after bar's fixes to convert filenames) - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to do comparision with the 'convert-dbug-for-diff' tool. Things left to do in 5.1: - We wrongly log failed CREATE TABLE ... SELECT in some cases when using row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result) Mats has promised to look into this. - Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this throughly). Lars has promosed to do this.
2006-06-04 17:52:22 +02:00
if (setup_fields(thd, 0, stmt->lex->value_list, MARK_COLUMNS_NONE, 0, 0))
2005-06-08 17:31:34 +02:00
goto error;
2005-06-08 17:57:26 +02:00
/* TODO: here we should send types of placeholders to the client. */
2005-06-08 17:31:34 +02:00
DBUG_RETURN(0);
error:
DBUG_RETURN(1);
}


/**
  Validate DELETE statement.

  @param stmt               prepared statement
  @param table_list         list of tables used in this query

  @retval
    FALSE             success
  @retval
    TRUE              error, error message is set in THD
*/
static bool mysql_test_delete(Prepared_statement *stmt,
                              TABLE_LIST *table_list)
{
  THD *thd= stmt->thd;
  LEX *lex= stmt->lex;
  DBUG_ENTER("mysql_test_delete");

  if (delete_precheck(thd, table_list) ||
      open_normal_and_derived_tables(thd, table_list,
                                     MYSQL_OPEN_FORCE_SHARED_MDL))
    goto error;

  if (!table_list->table)
  {
    my_error(ER_VIEW_DELETE_MERGE_VIEW, MYF(0),
             table_list->view_db.str, table_list->view_name.str);
    goto error;
  }

  DBUG_RETURN(mysql_prepare_delete(thd, table_list, &lex->select_lex.where));
error:
  DBUG_RETURN(TRUE);
}
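
/*
  Illustrative client-side sketch (not part of the server build): a
  prepared DELETE like the one validated by mysql_test_delete() above.
  This assumes a connected MYSQL handle 'mysql'; the table name 't1' is
  hypothetical.

    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    const char *query= "DELETE FROM t1 WHERE id = ?";

    if (!stmt ||
        mysql_stmt_prepare(stmt, query, (unsigned long) strlen(query)))
      fprintf(stderr, "PREPARE failed: %s\n",
              stmt ? mysql_stmt_error(stmt) : "out of memory");
    if (stmt)
      mysql_stmt_close(stmt);
*/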


/**
  Validate SELECT statement.

  In case of success, if this query is not EXPLAIN, send column list info
  back to the client.

  @param stmt               prepared statement
  @param tables             list of tables used in the query

  @retval
    0                 success
  @retval
    1                 error, error message is set in THD
  @retval
    2                 success, and statement metadata has been sent
*/
static int mysql_test_select(Prepared_statement *stmt,
                             TABLE_LIST *tables)
{
  THD *thd= stmt->thd;
  LEX *lex= stmt->lex;
  SELECT_LEX_UNIT *unit= &lex->unit;
  DBUG_ENTER("mysql_test_select");

  lex->select_lex.context.resolve_in_select_list= TRUE;

  ulong privilege= lex->exchange ? SELECT_ACL | FILE_ACL : SELECT_ACL;
  if (tables)
  {
    if (check_table_access(thd, privilege, tables, FALSE, UINT_MAX, FALSE))
      goto error;
  }
  else if (check_access(thd, privilege, any_db, NULL, NULL, 0, 0))
    goto error;

  if (!lex->result && !(lex->result= new (stmt->mem_root) select_send))
  {
    my_error(ER_OUTOFMEMORY, MYF(ME_FATALERROR),
             static_cast<int>(sizeof(select_send)));
    goto error;
  }

  if (open_normal_and_derived_tables(thd, tables, MYSQL_OPEN_FORCE_SHARED_MDL))
    goto error;

  thd->lex->used_tables= 0;                     // Updated by setup_fields

  /*
    unit->prepare() calls JOIN::prepare. Tables have not been set up by
    any command-specific code here, so setup_tables() will be called as
    usual, and we pass 0 as setup_tables_done_option.
  */
  if (unit->prepare(thd, 0, 0))
    goto error;
  if (!lex->describe && !stmt->is_sql_prepare())
  {
    /* Make a copy of the item list, as change_columns() may change it */
    List<Item> fields(lex->select_lex.item_list);

    /* Change columns if a procedure like ANALYSE() is used */
    if (unit->last_procedure && unit->last_procedure->change_columns(fields))
      goto error;

    /*
      We can use lex->result here, as it should have been prepared in
      the unit->prepare() call above.
    */
    if (send_prep_stmt(stmt, lex->result->field_count(fields)) ||
        lex->result->send_result_set_metadata(fields, Protocol::SEND_EOF) ||
        thd->protocol->flush())
      goto error;

    DBUG_RETURN(2);
  }
  DBUG_RETURN(0);

error:
  DBUG_RETURN(1);
}
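
/*
  Illustrative client-side sketch (not part of the server build): the
  column metadata that mysql_test_select() sends on a successful PREPARE
  of a SELECT (return code 2) is what mysql_stmt_result_metadata()
  exposes in the client library. This assumes a connected MYSQL handle
  'mysql'; table and column names are hypothetical.

    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    const char *query= "SELECT a, b FROM t1 WHERE a > ?";

    if (stmt &&
        mysql_stmt_prepare(stmt, query, (unsigned long) strlen(query)) == 0)
    {
      MYSQL_RES *meta= mysql_stmt_result_metadata(stmt);
      if (meta)                      /* non-NULL: statement returns rows */
      {
        printf("result set has %u columns\n", mysql_num_fields(meta));
        mysql_free_result(meta);
      }
    }
    if (stmt)
      mysql_stmt_close(stmt);
*/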


/**
  Validate and prepare for execution DO statement expressions.

  @param stmt               prepared statement
  @param tables             list of tables used in this query
  @param values             list of expressions

  @retval
    FALSE             success
  @retval
    TRUE              error, error message is set in THD
*/
static bool mysql_test_do_fields(Prepared_statement *stmt,
                                 TABLE_LIST *tables,
                                 List<Item> *values)
{
  THD *thd= stmt->thd;

  DBUG_ENTER("mysql_test_do_fields");
  if (tables && check_table_access(thd, SELECT_ACL, tables, FALSE,
                                   UINT_MAX, FALSE))
    DBUG_RETURN(TRUE);

  if (open_normal_and_derived_tables(thd, tables, MYSQL_OPEN_FORCE_SHARED_MDL))
    DBUG_RETURN(TRUE);
  DBUG_RETURN(setup_fields(thd, 0, *values, MARK_COLUMNS_NONE, 0, 0));
}
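
/*
  Illustrative client-side sketch (not part of the server build): DO can
  also be prepared through the text protocol (SQL PREPARE), in which case
  stmt->is_sql_prepare() holds on the server side while this function
  runs. This assumes a connected MYSQL handle 'mysql'; the lock name 'x'
  is arbitrary.

    const char *prep= "PREPARE s FROM 'DO RELEASE_LOCK(''x'')'";

    if (mysql_real_query(mysql, prep, (unsigned long) strlen(prep)) ||
        mysql_real_query(mysql, "EXECUTE s", 9UL) ||
        mysql_real_query(mysql, "DEALLOCATE PREPARE s", 20UL))
      fprintf(stderr, "%s\n", mysql_error(mysql));
*/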


/**
  Validate and prepare for execution SET statement expressions.

  @param stmt               prepared statement
  @param tables             list of tables used in this query
  @param var_list           list of variable assignment expressions

  @retval
    FALSE             success
  @retval
    TRUE              error, error message is set in THD
*/
static bool mysql_test_set_fields(Prepared_statement *stmt,
                                  TABLE_LIST *tables,
                                  List<set_var_base> *var_list)
{
  DBUG_ENTER("mysql_test_set_fields");
  List_iterator_fast<set_var_base> it(*var_list);
  THD *thd= stmt->thd;
  set_var_base *var;

  if ((tables && check_table_access(thd, SELECT_ACL, tables, FALSE,
                                    UINT_MAX, FALSE)) ||
      open_normal_and_derived_tables(thd, tables, MYSQL_OPEN_FORCE_SHARED_MDL))
    goto error;

  while ((var= it++))
  {
    if (var->light_check(thd))
      goto error;
  }

  DBUG_RETURN(FALSE);
error:
  DBUG_RETURN(TRUE);
}


/**
  Validate and prepare for execution CALL statement expressions.

  @param stmt               prepared statement
  @param tables             list of tables used in this query
  @param value_list         list of expressions

  @retval
    FALSE             success
  @retval
    TRUE              error, error message is set in THD
*/
static bool mysql_test_call_fields(Prepared_statement *stmt,
TABLE_LIST *tables,
List<Item> *value_list)
{
DBUG_ENTER("mysql_test_call_fields");
List_iterator<Item> it(*value_list);
THD *thd= stmt->thd;
Item *item;
if ((tables && check_table_access(thd, SELECT_ACL, tables, FALSE,
UINT_MAX, FALSE)) ||
open_normal_and_derived_tables(thd, tables, MYSQL_OPEN_FORCE_SHARED_MDL))
goto err;
while ((item= it++))
{
if ((!item->fixed && item->fix_fields(thd, it.ref())) ||
item->check_cols(1))
goto err;
}
DBUG_RETURN(FALSE);
err:
DBUG_RETURN(TRUE);
}
/**
Check internal SELECT of the prepared command.
@param stmt prepared statement
@param specific_prepare command-specific prepare function
@param setup_tables_done_option options to be passed to LEX::unit.prepare()
@note
This function won't directly open tables used in the select. They should
be opened either by the calling function (in which case you probably
should use select_like_stmt_test_with_open()) or by the
"specific_prepare" call (as happens in case of multi-update).
@retval
FALSE success
@retval
TRUE error, error message is set in THD
*/
static bool select_like_stmt_test(Prepared_statement *stmt,
int (*specific_prepare)(THD *thd),
ulong setup_tables_done_option)
{
DBUG_ENTER("select_like_stmt_test");
THD *thd= stmt->thd;
LEX *lex= stmt->lex;
lex->select_lex.context.resolve_in_select_list= TRUE;
if (specific_prepare && (*specific_prepare)(thd))
DBUG_RETURN(TRUE);
thd->lex->used_tables= 0; // Updated by setup_fields
/* Calls JOIN::prepare */
DBUG_RETURN(lex->unit.prepare(thd, 0, setup_tables_done_option));
}
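
/*
  Example: a command-specific checker plugs in its own prepare hook;
  multi-update, for instance, calls

    select_like_stmt_test(stmt, &mysql_multi_update_prepare,
                          OPTION_SETUP_TABLES_DONE);

  (see mysql_test_multiupdate() below).
*/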
/**
Check internal SELECT of the prepared command (with opening of used
tables).
@param stmt prepared statement
@param tables list of tables to be opened
before calling specific_prepare function
@param specific_prepare command-specific prepare function
@param setup_tables_done_option options to be passed to LEX::unit.prepare()
@retval
FALSE success
@retval
TRUE error
*/
static bool
select_like_stmt_test_with_open(Prepared_statement *stmt,
TABLE_LIST *tables,
int (*specific_prepare)(THD *thd),
ulong setup_tables_done_option)
{
DBUG_ENTER("select_like_stmt_test_with_open");
/*
We should not call LEX::unit.cleanup() after this
open_normal_and_derived_tables() call because we don't allow
prepared EXPLAIN yet, so derived tables will clean up after
themselves.
*/
if (open_normal_and_derived_tables(stmt->thd, tables,
MYSQL_OPEN_FORCE_SHARED_MDL))
DBUG_RETURN(TRUE);
DBUG_RETURN(select_like_stmt_test(stmt, specific_prepare,
setup_tables_done_option));
}
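
/*
  Both mysql_test_multidelete() and mysql_test_insert_select() below use
  this variant, since their specific_prepare hooks expect the tables to
  have been opened already.
*/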
/**
Validate a CREATE TABLE statement and prepare it for execution.
@param stmt prepared statement (the tables used come from stmt->lex)
@retval
FALSE success
@retval
TRUE error, error message is set in THD
*/
static bool mysql_test_create_table(Prepared_statement *stmt)
{
DBUG_ENTER("mysql_test_create_table");
THD *thd= stmt->thd;
LEX *lex= stmt->lex;
SELECT_LEX *select_lex= &lex->select_lex;
bool res= FALSE;
bool link_to_local;
TABLE_LIST *create_table= lex->query_tables;
TABLE_LIST *tables= lex->create_last_non_select_table->next_global;
if (create_table_precheck(thd, tables, create_table))
DBUG_RETURN(TRUE);
if (select_lex->item_list.elements)
{
/* Base table and temporary table are not in the same name space. */
if (!(lex->create_info.options & HA_LEX_CREATE_TMP_TABLE))
create_table->open_type= OT_BASE_ONLY;
if (open_normal_and_derived_tables(stmt->thd, lex->query_tables,
MYSQL_OPEN_FORCE_SHARED_MDL))
DBUG_RETURN(TRUE);
select_lex->context.resolve_in_select_list= TRUE;
lex->unlink_first_table(&link_to_local);
res= select_like_stmt_test(stmt, 0, 0);
lex->link_first_table_back(create_table, link_to_local);
}
else
{
/*
Check that the source table exists, and also record
its metadata version. Even though not strictly necessary,
we validate metadata of all CREATE TABLE statements,
which keeps metadata validation code simple.
*/
if (open_normal_and_derived_tables(stmt->thd, lex->query_tables,
MYSQL_OPEN_FORCE_SHARED_MDL))
DBUG_RETURN(TRUE);
}
DBUG_RETURN(res);
}
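
/*
  Examples: the first branch above validates the SELECT part of, e.g.,

    CREATE TABLE t2 SELECT a, b FROM t1;

  while a statement without a select list, such as

    CREATE TABLE t2 LIKE t1;

  takes the second branch, which only opens the referenced tables to
  record their metadata version.
*/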
/**
@brief Validate a CREATE VIEW statement and prepare it for execution.
@param stmt prepared statement
@note This function handles CREATE VIEW commands.
@retval FALSE Operation was a success.
@retval TRUE An error occurred.
*/
static bool mysql_test_create_view(Prepared_statement *stmt)
{
DBUG_ENTER("mysql_test_create_view");
THD *thd= stmt->thd;
LEX *lex= stmt->lex;
bool res= TRUE;
/* Skip first table, which is the view we are creating */
bool link_to_local;
TABLE_LIST *view= lex->unlink_first_table(&link_to_local);
TABLE_LIST *tables= lex->query_tables;
if (create_view_precheck(thd, tables, view, lex->create_view_mode))
goto err;
if (open_normal_and_derived_tables(thd, tables, MYSQL_OPEN_FORCE_SHARED_MDL))
goto err;
lex->context_analysis_only|= CONTEXT_ANALYSIS_ONLY_VIEW;
res= select_like_stmt_test(stmt, 0, 0);
err:
/* Put the view back for PS re-execution. */
lex->link_first_table_back(view, link_to_local);
DBUG_RETURN(res);
}
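
/*
  Example:

    PREPARE s FROM 'CREATE VIEW v1 AS SELECT a FROM t1';

  Note that ALTER VIEW (VIEW_ALTER mode) never reaches this function: it
  is rejected with ER_UNSUPPORTED_PS in check_prepared_statement().
*/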
/**
  Validate a multi-update statement and prepare it for execution.
@param stmt prepared statement
@param tables list of tables used in this query
@param converted converted to multi-update from usual update
@retval
FALSE success
@retval
TRUE error, error message is set in THD
*/
static bool mysql_test_multiupdate(Prepared_statement *stmt,
TABLE_LIST *tables,
bool converted)
{
/* If we switched from a normal update, privileges have already been checked. */
if (!converted && multi_update_precheck(stmt->thd, tables))
return TRUE;
return select_like_stmt_test(stmt, &mysql_multi_update_prepare,
OPTION_SETUP_TABLES_DONE);
}
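
/*
  Example: an explicit multi-table update such as

    UPDATE t1, t2 SET t1.a = t2.a WHERE t1.id = t2.id;

  arrives here directly, while a single-table UPDATE that
  mysql_test_update() decided to convert arrives with converted == TRUE,
  in which case privileges have already been checked.
*/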
/**
Validate a multi-delete statement and prepare it for execution.
@param stmt prepared statement
@param tables list of tables used in this query
2007-10-16 21:37:31 +02:00
@retval
FALSE success
@retval
TRUE error, error message in THD is set.
*/
static bool mysql_test_multidelete(Prepared_statement *stmt,
TABLE_LIST *tables)
{
stmt->thd->lex->current_select= &stmt->thd->lex->select_lex;
if (add_item_to_list(stmt->thd, new Item_null()))
{
my_error(ER_OUTOFMEMORY, MYF(ME_FATALERROR), 0);
goto error;
}
if (multi_delete_precheck(stmt->thd, tables) ||
select_like_stmt_test_with_open(stmt, tables,
&mysql_multi_delete_prepare,
OPTION_SETUP_TABLES_DONE))
goto error;
if (!tables->table)
{
my_error(ER_VIEW_DELETE_MERGE_VIEW, MYF(0),
tables->view_db.str, tables->view_name.str);
goto error;
}
return FALSE;
error:
return TRUE;
}
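
/*
  Example of a statement validated here:

    DELETE t1 FROM t1, t2 WHERE t1.id = t2.id;

  The ER_VIEW_DELETE_MERGE_VIEW branch fires when the first table turns
  out to be a view merging several tables, so tables->table is not set.
*/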
/**
Wrapper for mysql_insert_select_prepare(), to adjust the list of local
tables after the open_normal_and_derived_tables() call.
@param thd thread handle
@note
We need to remove the first local table after
open_normal_and_derived_tables(), because mysql_handle_derived
uses local table lists.
*/
static int mysql_insert_select_prepare_tester(THD *thd)
{
SELECT_LEX *first_select= &thd->lex->select_lex;
TABLE_LIST *second_table= first_select->table_list.first->next_local;
/* Skip the first table, which is the table we are inserting into */
first_select->table_list.first= second_table;
thd->lex->select_lex.context.table_list=
thd->lex->select_lex.context.first_name_resolution_table= second_table;
return mysql_insert_select_prepare(thd);
}
/**
Validate an INSERT ... SELECT statement and prepare it for execution.
@param stmt prepared statement
@param tables list of tables used in this query
@retval
FALSE success
@retval
TRUE error, error message is set in THD
*/
static bool mysql_test_insert_select(Prepared_statement *stmt,
TABLE_LIST *tables)
{
int res;
LEX *lex= stmt->lex;
TABLE_LIST *first_local_table;
if (tables->table)
{
// don't allocate insert_values
tables->table->insert_values=(uchar *)1;
}
if (insert_precheck(stmt->thd, tables))
return 1;
/* Store it, because mysql_insert_select_prepare_tester() changes it. */
first_local_table= lex->select_lex.table_list.first;
DBUG_ASSERT(first_local_table != 0);
res=
select_like_stmt_test_with_open(stmt, tables,
&mysql_insert_select_prepare_tester,
OPTION_SETUP_TABLES_DONE);
/* revert changes made by mysql_insert_select_prepare_tester */
lex->select_lex.table_list.first= first_local_table;
return res;
}
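
/*
  Example: for

    INSERT INTO t1 SELECT * FROM t2;

  the tester above makes name resolution for the SELECT part start at t2
  by skipping the insert target t1, and the saved list head is restored
  afterwards so the statement can be re-executed.
*/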
/**
Perform semantic analysis of the parsed tree and send a response packet
to the client.
This function
- opens all tables and checks access rights
- validates semantics of statement columns and SQL functions
by calling fix_fields.
@param stmt prepared statement
@retval
FALSE success, statement metadata is sent to client
@retval
TRUE error, error message is set in THD (but not sent)
*/
static bool check_prepared_statement(Prepared_statement *stmt)
{
THD *thd= stmt->thd;
LEX *lex= stmt->lex;
SELECT_LEX *select_lex= &lex->select_lex;
TABLE_LIST *tables;
enum enum_sql_command sql_command= lex->sql_command;
int res= 0;
DBUG_ENTER("check_prepared_statement");
DBUG_PRINT("enter",("command: %d param_count: %u",
sql_command, stmt->param_count));
lex->first_lists_tables_same();
tables= lex->query_tables;
/* set context for commands which do not use setup_tables */
lex->select_lex.context.resolve_in_table_list_only(select_lex->
get_table_list());
/* Reset warning count for each query that uses tables */
if (tables)
thd->warning_info->opt_clear_warning_info(thd->query_id);
switch (sql_command) {
case SQLCOM_REPLACE:
case SQLCOM_INSERT:
res= mysql_test_insert(stmt, tables, lex->field_list,
lex->many_values,
lex->update_list, lex->value_list,
lex->duplicates);
break;
case SQLCOM_UPDATE:
res= mysql_test_update(stmt, tables);
/* mysql_test_update returns 2 if we need to switch to multi-update */
if (res != 2)
break;
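    /* Fall through: mysql_test_update() requested a switch to multi-update. */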
case SQLCOM_UPDATE_MULTI:
res= mysql_test_multiupdate(stmt, tables, res == 2);
break;
case SQLCOM_DELETE:
res= mysql_test_delete(stmt, tables);
break;
/* The following allow WHERE clause, so they must be tested like SELECT */
case SQLCOM_SHOW_DATABASES:
case SQLCOM_SHOW_TABLES:
case SQLCOM_SHOW_TRIGGERS:
case SQLCOM_SHOW_EVENTS:
case SQLCOM_SHOW_OPEN_TABLES:
case SQLCOM_SHOW_FIELDS:
case SQLCOM_SHOW_KEYS:
case SQLCOM_SHOW_COLLATIONS:
case SQLCOM_SHOW_CHARSETS:
case SQLCOM_SHOW_VARIABLES:
case SQLCOM_SHOW_STATUS:
case SQLCOM_SHOW_TABLE_STATUS:
case SQLCOM_SHOW_STATUS_PROC:
case SQLCOM_SHOW_STATUS_FUNC:
case SQLCOM_SELECT:
res= mysql_test_select(stmt, tables);
if (res == 2)
{
/* Statement and field info has already been sent */
DBUG_RETURN(FALSE);
}
break;
case SQLCOM_CREATE_TABLE:
res= mysql_test_create_table(stmt);
break;
case SQLCOM_CREATE_VIEW:
if (lex->create_view_mode == VIEW_ALTER)
{
my_message(ER_UNSUPPORTED_PS, ER(ER_UNSUPPORTED_PS), MYF(0));
goto error;
}
res= mysql_test_create_view(stmt);
break;
case SQLCOM_DO:
res= mysql_test_do_fields(stmt, tables, lex->insert_list);
break;
case SQLCOM_CALL:
res= mysql_test_call_fields(stmt, tables, &lex->value_list);
break;
case SQLCOM_SET_OPTION:
res= mysql_test_set_fields(stmt, tables, &lex->var_list);
break;
case SQLCOM_DELETE_MULTI:
res= mysql_test_multidelete(stmt, tables);
break;
case SQLCOM_INSERT_SELECT:
case SQLCOM_REPLACE_SELECT:
res= mysql_test_insert_select(stmt, tables);
break;
/*
Note that we don't need to have cases in this list if they are
marked with CF_STATUS_COMMAND in sql_command_flags
*/
case SQLCOM_DROP_TABLE:
case SQLCOM_RENAME_TABLE:
case SQLCOM_ALTER_TABLE:
case SQLCOM_COMMIT:
case SQLCOM_CREATE_INDEX:
case SQLCOM_DROP_INDEX:
case SQLCOM_ROLLBACK:
case SQLCOM_TRUNCATE:
case SQLCOM_DROP_VIEW:
case SQLCOM_REPAIR:
case SQLCOM_ANALYZE:
case SQLCOM_OPTIMIZE:
case SQLCOM_CHANGE_MASTER:
case SQLCOM_RESET:
case SQLCOM_FLUSH:
case SQLCOM_SLAVE_START:
case SQLCOM_SLAVE_STOP:
case SQLCOM_INSTALL_PLUGIN:
case SQLCOM_UNINSTALL_PLUGIN:
case SQLCOM_CREATE_DB:
case SQLCOM_DROP_DB:
case SQLCOM_ALTER_DB_UPGRADE:
case SQLCOM_CHECKSUM:
case SQLCOM_CREATE_USER:
case SQLCOM_RENAME_USER:
case SQLCOM_DROP_USER:
case SQLCOM_ASSIGN_TO_KEYCACHE:
case SQLCOM_PRELOAD_KEYS:
case SQLCOM_GRANT:
case SQLCOM_REVOKE:
case SQLCOM_KILL:
break;
case SQLCOM_PREPARE:
case SQLCOM_EXECUTE:
case SQLCOM_DEALLOCATE_PREPARE:
default:
/*
Trivial check of all status commands. This is easier than listing
them in the case list above, and leaves less chance for mistakes.
*/
if (!(sql_command_flags[sql_command] & CF_STATUS_COMMAND))
{
/* All other statements are not supported yet. */
my_message(ER_UNSUPPORTED_PS, ER(ER_UNSUPPORTED_PS), MYF(0));
goto error;
}
break;
}
if (res == 0)
DBUG_RETURN(stmt->is_sql_prepare() ?
FALSE : (send_prep_stmt(stmt, 0) || thd->protocol->flush()));
error:
DBUG_RETURN(TRUE);
}
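
/*
  A statement type that is neither handled above nor flagged with
  CF_STATUS_COMMAND cannot be prepared. For instance (a sketch, based on
  the case list above, which has no SQLCOM_LOAD entry), something like

    PREPARE s FROM 'LOAD DATA INFILE "f.txt" INTO TABLE t1';

  is answered with ER_UNSUPPORTED_PS.
*/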
/**
Initialize the array of parameters in the statement from the LEX.
(We need quick access to items by number in mysql_stmt_get_longdata.)
This is done to avoid using malloc/realloc in the parser.
*/
static bool init_param_array(Prepared_statement *stmt)
{
LEX *lex= stmt->lex;
if ((stmt->param_count= lex->param_list.elements))
{
if (stmt->param_count > (uint) UINT_MAX16)
{
/* Error code to be defined in 5.0 */
my_message(ER_PS_MANY_PARAM, ER(ER_PS_MANY_PARAM), MYF(0));
return TRUE;
}
Item_param **to;
List_iterator<Item_param> param_iterator(lex->param_list);
/* Use thd->mem_root as it points at statement mem_root */
stmt->param_array= (Item_param **)
alloc_root(stmt->thd->mem_root,
sizeof(Item_param*) * stmt->param_count);
if (!stmt->param_array)
return TRUE;
for (to= stmt->param_array;
to < stmt->param_array + stmt->param_count;
++to)
{
*to= param_iterator++;
}
}
return FALSE;
}
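
/*
  Example: for

    PREPARE s FROM 'SELECT * FROM t1 WHERE a = ? AND b = ?';

  the parser has collected two Item_param objects in lex->param_list, so
  param_count becomes 2 and stmt->param_array gives direct access to each
  placeholder by its ordinal number.
*/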
/**
COM_STMT_PREPARE handler.
Given a query string with parameter markers, create a prepared
statement from it and send PS info back to the client.
If parameter markers are found in the query, then store the information
using Item_param along with maintaining a list in lex->param_list, so
that fast and direct retrieval can be made without going through all
field items.
@param packet query to be prepared
@param packet_length query string length, including ignored
trailing NULL or quote char.
@note
This function parses the query and sends the total number of parameters
and result set metadata information back to the client (if any), without
executing the query, i.e. without any log/disk writes. This allows the
query to be re-executed without re-parsing during execute.
@return
none: in case of success a new statement id and metadata are sent
to the client, otherwise an error message is set in THD.
*/
void mysqld_stmt_prepare(THD *thd, const char *packet, uint packet_length)
{
Protocol *save_protocol= thd->protocol;
Prepared_statement *stmt;
DBUG_ENTER("mysqld_stmt_prepare");
DBUG_PRINT("prep_query", ("%s", packet));
/* First of all clear possible warnings from the previous command */
mysql_reset_thd_for_next_command(thd);
if (! (stmt= new Prepared_statement(thd)))
DBUG_VOID_RETURN; /* out of memory: error is set in Sql_alloc */
if (thd->stmt_map.insert(thd, stmt))
{
/*
The error is set in the insert. The statement itself
will also be deleted there (this is how the hash works).
*/
DBUG_VOID_RETURN;
}
thd->protocol= &thd->protocol_binary;
if (stmt->prepare(packet, packet_length))
{
/* Statement map deletes statement on erase */
thd->stmt_map.erase(stmt);
}
thd->protocol= save_protocol;
sp_cache_enforce_limit(thd->sp_proc_cache, stored_program_cache_size);
sp_cache_enforce_limit(thd->sp_func_cache, stored_program_cache_size);
/* check_prepared_statement sends the metadata packet in case of success */
DBUG_VOID_RETURN;
}
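
/*
  Illustration only (a hedged client-side sketch, not part of the
  server): mysql_stmt_prepare() in the C API is what sends the
  COM_STMT_PREPARE packet handled above. The connection "mysql" and
  the table/column names are assumptions made for the example.

    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    const char *query= "SELECT a FROM t1 WHERE b = ?";
    if (mysql_stmt_prepare(stmt, query, strlen(query)))
      fprintf(stderr, "prepare failed: %s\n", mysql_stmt_error(stmt));
    else
      printf("placeholders: %lu\n", mysql_stmt_param_count(stmt));
    mysql_stmt_close(stmt);
*/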
/**
Get an SQL statement text from a user variable or from plain text.
If the statement is plain text, just assign the
pointers, otherwise allocate memory in thd->mem_root and copy
the contents of the variable, possibly with character
set conversion.
@param[in] lex main lex
@param[out] query_len length of the SQL statement (is set only
in case of success)
@retval
non-zero success
@retval
0 in case of error (out of memory)
*/
static const char *get_dynamic_sql_string(LEX *lex, uint *query_len)
{
THD *thd= lex->thd;
char *query_str= 0;
if (lex->prepared_stmt_code_is_varref)
{
/* This is PREPARE stmt FROM or EXECUTE IMMEDIATE @var. */
String str;
CHARSET_INFO *to_cs= thd->variables.collation_connection;
bool needs_conversion;
user_var_entry *entry;
String *var_value= &str;
uint32 unused, len;
/*
Convert @var contents to a string in the connection character set.
Although an int/real/NULL value can never be a valid query, we still
convert it so that error messages are uniform.
*/
if ((entry=
(user_var_entry*)my_hash_search(&thd->user_vars,
(uchar*)lex->prepared_stmt_code.str,
lex->prepared_stmt_code.length))
&& entry->value)
{
my_bool is_var_null;
var_value= entry->val_str(&is_var_null, &str, NOT_FIXED_DEC);
/*
A NULL variable value has already been ruled out by the entry->value
check above, so under normal conditions we cannot get NULL here.
*/
DBUG_ASSERT(!is_var_null);
if (!var_value)
goto end;
}
else
{
/*
The variable is absent or NULL, so set it to something reasonable
to get a readable error message during parsing.
*/
str.set(STRING_WITH_LEN("NULL"), &my_charset_latin1);
}
needs_conversion= String::needs_conversion(var_value->length(),
var_value->charset(), to_cs,
&unused);
len= (needs_conversion ? var_value->length() * to_cs->mbmaxlen :
var_value->length());
if (!(query_str= (char*) alloc_root(thd->mem_root, len+1)))
goto end;
if (needs_conversion)
{
uint dummy_errors;
len= copy_and_convert(query_str, len, to_cs, var_value->ptr(),
var_value->length(), var_value->charset(),
&dummy_errors);
}
else
memcpy(query_str, var_value->ptr(), var_value->length());
query_str[len]= '\0'; // Safety (mostly for debug)
*query_len= len;
}
else
{
query_str= lex->prepared_stmt_code.str;
*query_len= lex->prepared_stmt_code.length;
}
end:
return query_str;
}
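
/*
  Illustrative SQL (assumed example) for the two branches above: the
  first PREPARE takes the plain-text path and just assigns pointers;
  the second takes the user-variable path, which copies the contents
  into thd->mem_root and converts them to the connection character set
  if needed.

    PREPARE s1 FROM 'SELECT 1 + ?';
    SET @q= 'SELECT 1 + ?';
    PREPARE s2 FROM @q;
*/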
/**
SQLCOM_PREPARE implementation.
Prepare an SQL prepared statement. This is called from
mysql_execute_command and should therefore behave like an
ordinary query (e.g. should not reset any global THD data).
@param thd thread handle
@return
none: in case of success, OK packet is sent to the client,
otherwise an error message is set in THD
*/
void mysql_sql_stmt_prepare(THD *thd)
{
LEX *lex= thd->lex;
LEX_STRING *name= &lex->prepared_stmt_name;
Prepared_statement *stmt;
const char *query;
uint query_len= 0;
DBUG_ENTER("mysql_sql_stmt_prepare");
if ((stmt= (Prepared_statement*) thd->stmt_map.find_by_name(name)))
{
/*
If there is a statement with the same name, remove it. It is OK to
remove the old statement even if inserting the new one then fails.
*/
if (stmt->is_in_use())
{
my_error(ER_PS_NO_RECURSION, MYF(0));
DBUG_VOID_RETURN;
}
stmt->deallocate();
}
if (! (query= get_dynamic_sql_string(lex, &query_len)) ||
! (stmt= new Prepared_statement(thd)))
{
DBUG_VOID_RETURN; /* out of memory */
}
stmt->set_sql_prepare();
/* Set the name first, insert should know that this statement has a name */
if (stmt->set_name(name))
{
delete stmt;
DBUG_VOID_RETURN;
}
if (thd->stmt_map.insert(thd, stmt))
{
/* The statement is deleted and an error is set if insert fails */
DBUG_VOID_RETURN;
}
if (stmt->prepare(query, query_len))
{
/* Statement map deletes the statement on erase */
thd->stmt_map.erase(stmt);
}
else
my_ok(thd, 0L, 0L, "Statement prepared");
DBUG_VOID_RETURN;
}
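
/*
  Illustrative SQL (assumed example) for this handler. Re-using a
  statement name deallocates the old statement first, unless it is
  currently in use, in which case ER_PS_NO_RECURSION is raised.

    PREPARE s FROM 'SELECT ?';
    PREPARE s FROM 'SELECT ? + 1';   -- old 's' is deallocated first
    DEALLOCATE PREPARE s;
*/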
/**
Reinit prepared statement/stored procedure before execution.
@todo
When the new table structure is ready, have a status bit
to indicate that the table was altered, redo the setup_* calls
and reopen the tables.
*/
void reinit_stmt_before_use(THD *thd, LEX *lex)
{
SELECT_LEX *sl= lex->all_selects_list;
DBUG_ENTER("reinit_stmt_before_use");
/*
We have to update the "thd" pointer in LEX, in all its units and in
LEX::result, since statements that belong to a trigger body are
associated with a TABLE object and can therefore be used from
different threads.
*/
lex->thd= thd;
if (lex->empty_field_list_on_rset)
{
lex->empty_field_list_on_rset= 0;
lex->field_list.empty();
}
for (; sl; sl= sl->next_select_in_list())
{
if (!sl->first_execution)
{
/* Remove the option that was set by mysql_explain_union(). */
sl->options&= ~SELECT_DESCRIBE;
/* see unique_table() */
sl->exclude_from_table_unique_test= FALSE;
/*
Copy the WHERE and HAVING clause pointers so that optimisation
does not damage the prepared originals.
*/
if (sl->prep_where)
{
sl->where= sl->prep_where->copy_andor_structure(thd);
sl->where->cleanup();
}
else
sl->where= NULL;
if (sl->prep_having)
{
sl->having= sl->prep_having->copy_andor_structure(thd);
sl->having->cleanup();
}
else
sl->having= NULL;
DBUG_ASSERT(sl->join == 0);
ORDER *order;
/* Fix GROUP list */
if (sl->group_list_ptrs && sl->group_list_ptrs->size() > 0)
{
for (uint ix= 0; ix < sl->group_list_ptrs->size() - 1; ++ix)
{
order= sl->group_list_ptrs->at(ix);
order->next= sl->group_list_ptrs->at(ix+1);
}
}
for (order= sl->group_list.first; order; order= order->next)
order->item= &order->item_ptr;
/* Fix ORDER list */
for (order= sl->order_list.first; order; order= order->next)
order->item= &order->item_ptr;
/* clear the no_error flag for INSERT/UPDATE IGNORE */
sl->no_error= FALSE;
}
{
SELECT_LEX_UNIT *unit= sl->master_unit();
unit->unclean();
unit->types.empty();
/* for derived tables & PS (which can't be reset by Item_subselect) */
unit->reinit_exec_mechanism();
unit->set_thd(thd);
}
}
/*
TODO: When the new table structure is ready, have a status bit
to indicate that the table was altered, redo the setup_* calls
and reopen the tables.
*/
/*
NOTE: We should reset the whole table list here, including all tables
added by the prelocking algorithm (this is not a problem for
substatements, since they have their own table list).
*/
for (TABLE_LIST *tables= lex->query_tables;
tables;
tables= tables->next_global)
{
tables->reinit_before_use(thd);
}
/* Reset MDL tickets for procedures/functions */
for (Sroutine_hash_entry *rt=
(Sroutine_hash_entry*)thd->lex->sroutines_list.first;
rt; rt= rt->next)
rt->mdl_request.ticket= NULL;
/*
Cleanup of the special case of DELETE t1, t2 FROM t1, t2, t3 ...
(multi-delete). We do a full cleanup, although at the moment the only
member of the MULTI-DELETE table list that needs cleaning is 'table'.
*/
for (TABLE_LIST *tables= lex->auxiliary_table_list.first;
tables;
tables= tables->next_global)
{
tables->reinit_before_use(thd);
}
lex->current_select= &lex->select_lex;
/* restore original list used in INSERT ... SELECT */
if (lex->leaf_tables_insert)
lex->select_lex.leaf_tables= lex->leaf_tables_insert;
if (lex->result)
{
lex->result->cleanup();
lex->result->set_thd(thd);
}
lex->allow_sum_func= 0;
lex->in_sum_func= NULL;
DBUG_VOID_RETURN;
}
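
/*
  Illustration (assumed SQL example): repeated execution of one
  prepared statement is what makes the restore work above necessary;
  the second EXECUTE runs on a LEX that the optimizer already modified
  during the first one.

    PREPARE s FROM 'SELECT a FROM t1 WHERE a > ? ORDER BY a';
    SET @v= 1;
    EXECUTE s USING @v;   -- optimizer rewrites WHERE/ORDER pointers
    EXECUTE s USING @v;   -- reinit_stmt_before_use() restores them
*/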
/**
Clear parameters of data left over from a previous execution or from
long data.
@param stmt prepared statement for which parameters should
be reset
*/
static void reset_stmt_params(Prepared_statement *stmt)
{
Item_param **item= stmt->param_array;
Item_param **end= item + stmt->param_count;
for (;item < end ; ++item)
(**item).reset();
}
/**
COM_STMT_EXECUTE handler: execute a previously prepared statement.
If there are any parameters, then replace parameter markers with the
data supplied from the client, and then execute the statement.
This function uses the binary protocol to send a possible result set
to the client.
@param thd current thread
@param packet_arg parameter types and data, if any
@param packet_length packet length, including the terminator character.
@return
none: in case of success OK packet or a result set is sent to the
client, otherwise an error message is set in THD.
*/
void mysqld_stmt_execute(THD *thd, char *packet_arg, uint packet_length)
{
uchar *packet= (uchar*)packet_arg; // GCC 4.0.1 workaround
ulong stmt_id= uint4korr(packet);
ulong flags= (ulong) packet[4];
/* Query text for binary, general or slow log, if any of them is open */
String expanded_query;
uchar *packet_end= packet + packet_length;
Prepared_statement *stmt;
Protocol *save_protocol= thd->protocol;
bool open_cursor;
DBUG_ENTER("mysqld_stmt_execute");
packet+= 9; /* stmt_id (4 bytes) + flags (1 byte) + iteration count (4 bytes) */
/* First of all clear possible warnings from the previous command */
mysql_reset_thd_for_next_command(thd);
if (!(stmt= find_prepared_statement(thd, stmt_id)))
{
char llbuf[22];
my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0), static_cast<int>(sizeof(llbuf)),
llstr(stmt_id, llbuf), "mysqld_stmt_execute");
DBUG_VOID_RETURN;
}
#if defined(ENABLED_PROFILING)
thd->profiling.set_query_source(stmt->query(), stmt->query_length());
#endif
DBUG_PRINT("exec_query", ("%s", stmt->query()));
DBUG_PRINT("info",("stmt: 0x%lx", (long) stmt));
open_cursor= test(flags & (ulong) CURSOR_TYPE_READ_ONLY);
thd->protocol= &thd->protocol_binary;
stmt->execute_loop(&expanded_query, open_cursor, packet, packet_end);
thd->protocol= save_protocol;
sp_cache_enforce_limit(thd->sp_proc_cache, stored_program_cache_size);
sp_cache_enforce_limit(thd->sp_func_cache, stored_program_cache_size);
/* Close connection socket; for use with client testing (Bug#43560). */
DBUG_EXECUTE_IF("close_conn_after_stmt_execute", vio_close(thd->net.vio););
DBUG_VOID_RETURN;
}
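
/*
  Illustration only (a hedged client-side sketch; "stmt" is a
  previously prepared MYSQL_STMT and connection setup is assumed):
  this is roughly what produces the COM_STMT_EXECUTE packet handled
  above, including the CURSOR_TYPE_READ_ONLY flag tested there.

    MYSQL_BIND bind[1];
    int value= 42;
    unsigned long type= (unsigned long) CURSOR_TYPE_READ_ONLY;

    memset(bind, 0, sizeof(bind));
    bind[0].buffer_type= MYSQL_TYPE_LONG;
    bind[0].buffer= (void*) &value;
    mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, (void*) &type);
    mysql_stmt_bind_param(stmt, bind);
    mysql_stmt_execute(stmt);
*/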
/**
SQLCOM_EXECUTE implementation.
Execute a prepared statement using parameter values from
lex->prepared_stmt_params and send the result to the client using
the text protocol. This is called from mysql_execute_command and
therefore should behave like an ordinary query (e.g. not change
global THD data such as the warning count or server status).
@param thd thread handle
@return
none: in case of success, OK (or result set) packet is sent to the
client, otherwise an error is set in THD
*/
void mysql_sql_stmt_execute(THD *thd)
{
LEX *lex= thd->lex;
Prepared_statement *stmt;
LEX_STRING *name= &lex->prepared_stmt_name;
/* Query text for binary, general or slow log, if any of them is open */
String expanded_query;
DBUG_ENTER("mysql_sql_stmt_execute");
DBUG_PRINT("info", ("EXECUTE: %.*s\n", (int) name->length, name->str));
if (!(stmt= (Prepared_statement*) thd->stmt_map.find_by_name(name)))
{
my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0),
static_cast<int>(name->length), name->str, "EXECUTE");
DBUG_VOID_RETURN;
}
if (stmt->param_count != lex->prepared_stmt_params.elements)
{
my_error(ER_WRONG_ARGUMENTS, MYF(0), "EXECUTE");
DBUG_VOID_RETURN;
}
DBUG_PRINT("info",("stmt: 0x%lx", (long) stmt));
(void) stmt->execute_loop(&expanded_query, FALSE, NULL, NULL);
DBUG_VOID_RETURN;
}
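
/*
  Illustrative SQL (assumed example): the number of USING arguments
  must match the number of placeholders, otherwise ER_WRONG_ARGUMENTS
  is raised as above.

    PREPARE s FROM 'SELECT ? + ?';
    SET @a= 1, @b= 2;
    EXECUTE s USING @a, @b;
*/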
/**
COM_STMT_FETCH handler: fetches the requested number of rows from a cursor.
@param thd Thread handle
@param packet Packet from client (with stmt_id & num_rows)
@param packet_length Length of packet
*/
void mysqld_stmt_fetch(THD *thd, char *packet, uint packet_length)
{
/* Assume there is always space for 8-16 bytes in the packet. */
ulong stmt_id= uint4korr(packet);
ulong num_rows= uint4korr(packet+4);
Prepared_statement *stmt;
Statement stmt_backup;
Server_side_cursor *cursor;
DBUG_ENTER("mysqld_stmt_fetch");
/* First of all clear possible warnings from the previous command */
mysql_reset_thd_for_next_command(thd);
status_var_increment(thd->status_var.com_stmt_fetch);
if (!(stmt= find_prepared_statement(thd, stmt_id)))
{
char llbuf[22];
my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0), static_cast<int>(sizeof(llbuf)),
llstr(stmt_id, llbuf), "mysqld_stmt_fetch");
DBUG_VOID_RETURN;
}
cursor= stmt->cursor;
if (!cursor)
{
my_error(ER_STMT_HAS_NO_OPEN_CURSOR, MYF(0), stmt_id);
DBUG_VOID_RETURN;
}
thd->stmt_arena= stmt;
thd->set_n_backup_statement(stmt, &stmt_backup);
cursor->fetch(num_rows);
if (!cursor->is_open())
{
stmt->close_cursor();
reset_stmt_params(stmt);
}
thd->restore_backup_statement(stmt, &stmt_backup);
thd->stmt_arena= thd;
DBUG_VOID_RETURN;
}
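
/*
  Illustration only (hedged client-side sketch; "stmt" is a prepared
  MYSQL_STMT for which CURSOR_TYPE_READ_ONLY was set before execution):
  once the rows buffered on the client run out, each mysql_stmt_fetch()
  call issues a COM_STMT_FETCH requesting STMT_ATTR_PREFETCH_ROWS rows.

    unsigned long prefetch= 5;
    mysql_stmt_attr_set(stmt, STMT_ATTR_PREFETCH_ROWS, (void*) &prefetch);
    mysql_stmt_execute(stmt);
    while (mysql_stmt_fetch(stmt) == 0)
      ;                               // process one bound row here
*/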
/**
Reset a prepared statement in case there was a recoverable error.
This function resets the statement to the state it was in right after
prepare. It can be used to:
- clear an error that happened during mysqld_stmt_send_long_data
- cancel a long data stream for all placeholders without
  having to call mysqld_stmt_execute
- close an open cursor
Sends an 'OK' packet in case of success (statement was reset)
or an 'ERROR' packet (unrecoverable error/statement not found/etc).
@param thd Thread handle
@param packet Packet with stmt id
*/
void mysqld_stmt_reset(THD *thd, char *packet)
{
/* There is always space for 4 bytes in buffer */
ulong stmt_id= uint4korr(packet);
Prepared_statement *stmt;
DBUG_ENTER("mysqld_stmt_reset");
/* First of all clear possible warnings from the previous command */
mysql_reset_thd_for_next_command(thd);
status_var_increment(thd->status_var.com_stmt_reset);
if (!(stmt= find_prepared_statement(thd, stmt_id)))
{
char llbuf[22];
my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0), static_cast<int>(sizeof(llbuf)),
llstr(stmt_id, llbuf), "mysqld_stmt_reset");
DBUG_VOID_RETURN;
}
stmt->close_cursor();
/*
Clear parameters from data which could be set by
mysqld_stmt_send_long_data() call.
*/
reset_stmt_params(stmt);
stmt->state= Query_arena::STMT_PREPARED;
general_log_print(thd, thd->command, NullS);
my_ok(thd);
DBUG_VOID_RETURN;
}
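
/*
  Illustration only (hedged client-side sketch; "stmt", "chunk" and
  "chunk_len" are assumed names): mysql_stmt_reset() issues the
  COM_STMT_RESET packet handled above, discarding long data
  accumulated via mysql_stmt_send_long_data() and closing any open
  cursor, so the statement can be executed again from the prepared
  state.

    mysql_stmt_send_long_data(stmt, 0, chunk, chunk_len);
    if (mysql_stmt_reset(stmt))
      fprintf(stderr, "reset failed: %s\n", mysql_stmt_error(stmt));
*/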
/**
Delete a prepared statement from memory.
@note
We don't send any reply to this command.
*/
void mysqld_stmt_close(THD *thd, char *packet)
{
/* There is always space for 4 bytes in packet buffer */
ulong stmt_id= uint4korr(packet);
Prepared_statement *stmt;
DBUG_ENTER("mysqld_stmt_close");
thd->stmt_da->disable_status();
if (!(stmt= find_prepared_statement(thd, stmt_id)))
DBUG_VOID_RETURN;
/*
Currently the only way a statement can be deallocated while it is
in use is from within Dynamic SQL.
*/
DBUG_ASSERT(! stmt->is_in_use());
stmt->deallocate();
general_log_print(thd, thd->command, NullS);
DBUG_VOID_RETURN;
}
/**
SQLCOM_DEALLOCATE implementation.
Close an SQL prepared statement. As this can be called from Dynamic
SQL, we should be careful not to close a statement that is currently
being executed.
@return
none: OK packet is sent in case of success, otherwise an error
message is set in THD
*/
void mysql_sql_stmt_close(THD *thd)
{
Prepared_statement* stmt;
LEX_STRING *name= &thd->lex->prepared_stmt_name;
DBUG_PRINT("info", ("DEALLOCATE PREPARE: %.*s\n", (int) name->length,
name->str));
if (! (stmt= (Prepared_statement*) thd->stmt_map.find_by_name(name)))
my_error(ER_UNKNOWN_STMT_HANDLER, MYF(0),
static_cast<int>(name->length), name->str, "DEALLOCATE PREPARE");
else if (stmt->is_in_use())
my_error(ER_PS_NO_RECURSION, MYF(0));
else
{
stmt->deallocate();
my_ok(thd);
}
}
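/*
  For reference, the SQL-level interface handled above (a minimal sketch;
  the statement name and query text are illustrative only):

    PREPARE stmt1 FROM 'SELECT 1';
    EXECUTE stmt1;
    DEALLOCATE PREPARE stmt1;

  DEALLOCATE PREPARE fails with ER_UNKNOWN_STMT_HANDLER if no statement
  with the given name exists, and with ER_PS_NO_RECURSION if the statement
  is still in use (e.g. when a statement attempts to deallocate itself
  during its own execution).
*/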
/**
Handle long data in pieces from client.
Get a part of a long data value. To keep the protocol efficient, we do
not send any return packets here; if something goes wrong, the error is
reported on 'execute'. We assume that the client takes care of sending
all parts to the server (the server does not check for an 'end of
column' marker).
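
As parsed below, a COM_STMT_SEND_LONG_DATA packet is laid out as follows
(MYSQL_LONG_DATA_HEADER covers the first six bytes):

  4 bytes   statement id      (read with uint4korr)
  2 bytes   parameter number  (read with uint2korr)
  n bytes   data to append to the parameter value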
@param thd Thread handle
@param packet String to append
@param packet_length Length of string (including end \\0)
*/
void mysql_stmt_get_longdata(THD *thd, char *packet, ulong packet_length)
{
ulong stmt_id;
uint param_number;
Prepared_statement *stmt;
Item_param *param;
#ifndef EMBEDDED_LIBRARY
char *packet_end= packet + packet_length;
#endif
DBUG_ENTER("mysql_stmt_get_longdata");
status_var_increment(thd->status_var.com_stmt_send_long_data);
thd->stmt_da->disable_status();
#ifndef EMBEDDED_LIBRARY
/* Minimal size of long data packet is 6 bytes */
if (packet_length < MYSQL_LONG_DATA_HEADER)
DBUG_VOID_RETURN;
#endif
stmt_id= uint4korr(packet);
packet+= 4;
if (!(stmt=find_prepared_statement(thd, stmt_id)))
DBUG_VOID_RETURN;
param_number= uint2korr(packet);
packet+= 2;
#ifndef EMBEDDED_LIBRARY
if (param_number >= stmt->param_count)
{
/* Error will be sent in execute call */
stmt->state= Query_arena::STMT_ERROR;
stmt->last_errno= ER_WRONG_ARGUMENTS;
sprintf(stmt->last_error, ER(ER_WRONG_ARGUMENTS),
"mysqld_stmt_send_long_data");
DBUG_VOID_RETURN;
}
#endif
param= stmt->param_array[param_number];
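/*
  Install a temporary Diagnostics_area and Warning_info while appending
  the data: this command sends no reply packet, so any error raised by
  set_longdata() (e.g. if the accumulated value grows too large) is
  stashed in the statement and reported at EXECUTE time instead.
*/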
Diagnostics_area new_stmt_da, *save_stmt_da= thd->stmt_da;
Warning_info new_warning_info(thd->query_id, false);
Warning_info *save_warning_info= thd->warning_info;
thd->stmt_da= &new_stmt_da;
thd->warning_info= &new_warning_info;
#ifndef EMBEDDED_LIBRARY
param->set_longdata(packet, (ulong) (packet_end - packet));
#else
param->set_longdata(thd->extra_data, thd->extra_length);
#endif
if (thd->stmt_da->is_error())
{
stmt->state= Query_arena::STMT_ERROR;
stmt->last_errno= thd->stmt_da->sql_errno();
strmake(stmt->last_error, thd->stmt_da->message(), sizeof(stmt->last_error)-1);
}
thd->stmt_da= save_stmt_da;
thd->warning_info= save_warning_info;
general_log_print(thd, thd->command, NullS);
DBUG_VOID_RETURN;
}
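/*
  For reference, a minimal client-side sketch of the protocol handled
  above, using the C API call mysql_stmt_send_long_data(). Parameter
  binding via mysql_stmt_bind_param() is elided and the buffer names
  are illustrative only:

    const char *q= "INSERT INTO t1 (blob_col) VALUES (?)";
    MYSQL_STMT *stmt= mysql_stmt_init(mysql);
    mysql_stmt_prepare(stmt, q, strlen(q));
    // Send the value of parameter 0 in two pieces; the server replies
    // only at mysql_stmt_execute() time.
    mysql_stmt_send_long_data(stmt, 0, chunk1, chunk1_len);
    mysql_stmt_send_long_data(stmt, 0, chunk2, chunk2_len);
    mysql_stmt_execute(stmt);
*/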
/***************************************************************************
Select_fetch_protocol_binary
****************************************************************************/
Select_fetch_protocol_binary::Select_fetch_protocol_binary(THD *thd_arg)
:protocol(thd_arg)
{}
bool Select_fetch_protocol_binary::send_result_set_metadata(List<Item> &list, uint flags)
{
bool rc;
Protocol *save_protocol= thd->protocol;
/*
Protocol::send_result_set_metadata caches the information about column types:
this information is later used to send data. Therefore, the same
dedicated Protocol object must be used for all operations with
a cursor.
*/
thd->protocol= &protocol;
rc= select_send::send_result_set_metadata(list, flags);
thd->protocol= save_protocol;
return rc;
}
bool Select_fetch_protocol_binary::send_eof()
{
/*
Don't send EOF if we're in error condition (which implies we've already
sent or are sending an error)
*/
if (thd->is_error())
return true;
::my_eof(thd);
return false;
}
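/**
  Send one row of the result set, temporarily substituting the dedicated
  cursor protocol for the connection's current protocol (see the note in
  send_result_set_metadata() above).
*/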
bool
Select_fetch_protocol_binary::send_data(List<Item> &fields)
{
Protocol *save_protocol= thd->protocol;
bool rc;
thd->protocol= &protocol;
rc= select_send::send_data(fields);
thd->protocol= save_protocol;
return rc;
}
/*******************************************************************
* Reprepare_observer
*******************************************************************/
/** Push an error to the error stack and return TRUE for now. */
bool
Reprepare_observer::report_error(THD *thd)
{
/*
This 'error' is purely internal to the server:
- No exception handler is invoked,
- No condition is added in the condition area (warn_list).
The diagnostics area is set to an error status to enforce
that this thread's execution stops and returns to the caller,
backtracking all the way to Prepared_statement::execute_loop().
*/
thd->stmt_da->set_error_status(thd, ER_NEED_REPREPARE,
ER(ER_NEED_REPREPARE), "HY000");
m_invalidated= TRUE;
return TRUE;
}
/*******************************************************************
* Server_runnable
*******************************************************************/
Server_runnable::~Server_runnable()
{
}
///////////////////////////////////////////////////////////////////////////
Execute_sql_statement::
Execute_sql_statement(LEX_STRING sql_text)
:m_sql_text(sql_text)
{}
/**
Parse and execute a statement. Does not prepare the query.
Allows a statement to be executed from within another statement.
The main property of the implementation is that it does not
affect the environment -- i.e. you can run many
executions without having to clean up or reset the THD in between.
*/
bool
Execute_sql_statement::execute_server_code(THD *thd)
{
bool error;
if (alloc_query(thd, m_sql_text.str, m_sql_text.length))
return TRUE;
Parser_state parser_state;
if (parser_state.init(thd, thd->query(), thd->query_length()))
return TRUE;
parser_state.m_lip.multi_statements= FALSE;
lex_start(thd);
error= parse_sql(thd, &parser_state, NULL) || thd->is_error();
if (error)
goto end;
thd->lex->set_trg_event_type_for_tables();
error= mysql_execute_command(thd);
/* Write to the general log only on success and outside of stored routines. */
if (error == 0 && thd->spcont == NULL)
general_log_write(thd, COM_STMT_EXECUTE,
thd->query(), thd->query_length());
end:
lex_end(thd->lex);
return error;
}
/***************************************************************************
Prepared_statement
****************************************************************************/
Prepared_statement::Prepared_statement(THD *thd_arg)
:Statement(NULL, &main_mem_root,
STMT_INITIALIZED, ++thd_arg->statement_id_counter),
thd(thd_arg),
result(thd_arg),
param_array(0),
cursor(0),
param_count(0),
last_errno(0),
flags((uint) IS_IN_USE)
{
init_sql_alloc(&main_mem_root, thd_arg->variables.query_alloc_block_size,
thd_arg->variables.query_prealloc_size);
*last_error= '\0';
}
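/**
  Decide which parameter-substitution routines this statement will use.
  If the expanded query text is needed anyway -- because the statement
  is written to the binary, general or slow log, or is a candidate for
  the query cache -- install the _with_log variants; otherwise install
  the cheaper variants that do not build a textual copy of the query.
*/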
void Prepared_statement::setup_set_params()
{
/*
Note: BUG#25843 applies here too (query cache lookup uses thd->db, not
db from "prepare" time).
*/
if (query_cache_maybe_disabled(thd)) // we won't expand the query
lex->safe_to_cache_query= FALSE; // so don't cache it at execution
/*
Decide if we have to expand the query (because we must write it to logs or
because we want to look it up in the query cache) or not.
*/
if ((mysql_bin_log.is_open() && is_update_query(lex->sql_command)) ||
opt_log || opt_slow_log ||
query_cache_is_cacheable_query(lex))
{
set_params_from_vars= insert_params_from_vars_with_log;
#ifndef EMBEDDED_LIBRARY
set_params= insert_params_with_log;
#else
set_params_data= emb_insert_params_with_log;
#endif
}
else
{
set_params_from_vars= insert_params_from_vars;
#ifndef EMBEDDED_LIBRARY
set_params= insert_params;
#else
set_params_data= emb_insert_params;
#endif
}
}
/**
Destroy this prepared statement, cleaning up all used memory
and resources.
This is called from ::deallocate() to handle COM_STMT_CLOSE and
DEALLOCATE PREPARE or when THD ends and all prepared statements are freed.
*/
Prepared_statement::~Prepared_statement()
{
DBUG_ENTER("Prepared_statement::~Prepared_statement");
DBUG_PRINT("enter",("stmt: 0x%lx cursor: 0x%lx",
(long) this, (long) cursor));
delete cursor;
/*
We must call free_items() even if cleanup_stmt() has been called, as some
items, like Item_param, don't release everything until free_items().
*/
free_items();
if (lex)
{
delete lex->result;
delete (st_lex_local *) lex;
}
free_root(&main_mem_root, MYF(0));
DBUG_VOID_RETURN;
}
Query_arena::Type Prepared_statement::type() const
{
return PREPARED_STATEMENT;
}
void Prepared_statement::cleanup_stmt()
{
DBUG_ENTER("Prepared_statement::cleanup_stmt");
DBUG_PRINT("enter",("stmt: 0x%lx", (long) this));
cleanup_items(free_list);
thd->cleanup_after_query();
thd->rollback_item_tree_changes();
DBUG_VOID_RETURN;
}
bool Prepared_statement::set_name(LEX_STRING *name_arg)
{
name.length= name_arg->length;
name.str= (char*) memdup_root(mem_root, name_arg->str, name_arg->length);
return name.str == 0;
}
/**
Remember the current database.
We must reset/restore the current database during execution of
a prepared statement since it affects the execution environment:
privileges, @@character_set_database, and others.
@return TRUE if out of memory (an error), FALSE on success.
*/
bool
Prepared_statement::set_db(const char *db_arg, uint db_length_arg)
{
/* Remember the current database. */
if (db_arg && db_length_arg)
{
db= this->strmake(db_arg, db_length_arg);
db_length= db_length_arg;
}
else
{
db= NULL;
db_length= 0;
}
return db_arg != NULL && db == NULL;
}
/**************************************************************************
Common parts of mysql_[sql]_stmt_prepare, mysql_[sql]_stmt_execute.
Essentially, these functions do all the magic of preparing/executing
a statement, leaving network communication, input data handling and
global THD state management to the caller.
***************************************************************************/
/**
Parse statement text, validate the statement, and prepare it for execution.
You should not change global THD state in this function, if at all
possible: it may be called from any context, e.g. when executing
a COM_* command, an SQLCOM_* command, or a stored procedure.
@param packet statement text
@param packet_len length of the statement text, in bytes
@note
Precondition:
The caller must ensure that thd->change_list and thd->free_list
are empty: this function will not back them up but will free them
at the end of its execution.
@note
Postcondition:
thd->mem_root contains unused memory allocated during validation.
*/
bool Prepared_statement::prepare(const char *packet, uint packet_len)
{
bool error;
Statement stmt_backup;
Query_arena *old_stmt_arena;
DBUG_ENTER("Prepared_statement::prepare");
/*
If this is an SQLCOM_PREPARE, we also increase Com_prepare_sql.
However, it seems handy if com_stmt_prepare is always incremented,
no matter what kind of prepare is processed.
*/
status_var_increment(thd->status_var.com_stmt_prepare);
if (! (lex= new (mem_root) st_lex_local))
DBUG_RETURN(TRUE);
if (set_db(thd->db, thd->db_length))
DBUG_RETURN(TRUE);
/*
alloc_query() uses thd->mem_root and thd->query, so we must back up
both the current statement and the active arena here.
*/
thd->set_n_backup_statement(this, &stmt_backup);
thd->set_n_backup_active_arena(this, &stmt_backup);
if (alloc_query(thd, packet, packet_len))
{
thd->restore_backup_statement(this, &stmt_backup);
thd->restore_active_arena(this, &stmt_backup);
DBUG_RETURN(TRUE);
}
old_stmt_arena= thd->stmt_arena;
thd->stmt_arena= this;
Parser_state parser_state;
if (parser_state.init(thd, thd->query(), thd->query_length()))
{
thd->restore_backup_statement(this, &stmt_backup);
thd->restore_active_arena(this, &stmt_backup);
thd->stmt_arena= old_stmt_arena;
DBUG_RETURN(TRUE);
}
parser_state.m_lip.stmt_prepare_mode= TRUE;
parser_state.m_lip.multi_statements= FALSE;
lex_start(thd);
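/*
  Mark that we are only doing context analysis: among other things this
  keeps constant subqueries from being evaluated during PREPARE, since
  the statement is not actually executed here.
*/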
lex->context_analysis_only|= CONTEXT_ANALYSIS_ONLY_PREPARE;
error= parse_sql(thd, & parser_state, NULL) ||
thd->is_error() ||
init_param_array(this);
lex->set_trg_event_type_for_tables();
/*
While doing context analysis of the query (in check_prepared_statement)
we allocate a lot of additional memory: for open tables, JOINs, derived
tables, etc. Let's save a snapshot of the current parse tree in the
statement and restore the original THD. In cases when some tree
transformation can be reused on execute, we set thd->mem_root again from
stmt->mem_root (see setup_wild for one place where we do that).
*/
thd->restore_active_arena(this, &stmt_backup);
/*
If called from a stored procedure, ensure that we won't rollback
external changes when cleaning up after validation.
*/
DBUG_ASSERT(thd->change_list.is_empty());
/*
Marker used to release metadata locks acquired while the prepared
statement is being checked.
*/
MDL_savepoint mdl_savepoint= thd->mdl_context.mdl_savepoint();
/*
The only case where we should have items in the thd->free_list is
after stmt->set_params_from_vars(), which may in some cases create
Item_null objects.
*/
if (error == 0)
error= check_prepared_statement(this);
/*
Currently CREATE PROCEDURE/TRIGGER/EVENT are prohibited in prepared
statements: ensure we have no memory leak here if someone tries
to PREPARE stmt FROM "CREATE PROCEDURE ..."
*/
DBUG_ASSERT(lex->sphead == NULL || error != 0);
/* The order is important */
lex->unit.cleanup();
/* No need to commit statement transaction, it's not started. */
DBUG_ASSERT(thd->transaction.stmt.is_empty());
close_thread_tables(thd);
thd->mdl_context.rollback_to_savepoint(mdl_savepoint);
/*
Transaction rollback was requested since an MDL deadlock was
discovered while trying to open tables. Roll back the transaction
in all storage engines including the binary log, and release all
locks.
Once dynamic SQL is allowed as a substatement, the if-statement
below has to be adjusted to not roll back when in a substatement.
*/
DBUG_ASSERT(! thd->in_sub_stmt);
if (thd->transaction_rollback_request)
{
trans_rollback_implicit(thd);
thd->mdl_context.release_transactional_locks();
}
lex_end(lex);
cleanup_stmt();
thd->restore_backup_statement(this, &stmt_backup);
thd->stmt_arena= old_stmt_arena;
if (error == 0)
{
setup_set_params();
lex->context_analysis_only&= ~CONTEXT_ANALYSIS_ONLY_PREPARE;
state= Query_arena::STMT_PREPARED;
flags&= ~ (uint) IS_IN_USE;
/*
Log COM_STMT_PREPARE to the general log. Note that in case of SQL
prepared statements this causes two records to be output:
Query PREPARE stmt from @user_variable
Prepare <statement SQL text>
This is considered user-friendly, since in the
second log entry we output the actual statement text.
Do not print anything if this is an SQL prepared statement and
we're inside a stored procedure (also called Dynamic SQL) --
sub-statements inside stored procedures are not logged into
the general log.
*/
if (thd->spcont == NULL)
general_log_write(thd, COM_STMT_PREPARE, query(), query_length());
}
DBUG_RETURN(error);
}
/**
Assign parameter values either from variables, in case of SQL PS
or from the execute packet.
@param expanded_query a container with the original SQL statement.
'?' placeholders will be replaced with
their values in case of success.
The result is used for logging and replication
@param packet pointer to execute packet.
NULL in case of SQL PS
@param packet_end end of the packet. NULL in case of SQL PS
@todo Use a parameter source class family instead of 'if's, and
support stored procedure variables.
@retval TRUE an error occurred when assigning a parameter (likely
a conversion error or out of memory, or malformed packet)
@retval FALSE success
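
For illustration, the two call shapes as they occur in
execute_loop() below (a sketch; expanded_query, packet and
packet_end stand for the caller's variables):

@code
  /* SQL PS, e.g. EXECUTE stmt USING @a: no network packet */
  set_parameters(expanded_query, NULL, NULL);
  /* Binary protocol: values arrive in the mysqld_stmt_execute packet */
  set_parameters(expanded_query, packet, packet_end);
@endcode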
*/
bool
Prepared_statement::set_parameters(String *expanded_query,
uchar *packet, uchar *packet_end)
{
bool is_sql_ps= packet == NULL;
bool res= FALSE;
if (is_sql_ps)
{
/* SQL prepared statement */
res= set_params_from_vars(this, thd->lex->prepared_stmt_params,
expanded_query);
}
else if (param_count)
{
#ifndef EMBEDDED_LIBRARY
uchar *null_array= packet;
res= (setup_conversion_functions(this, &packet, packet_end) ||
set_params(this, null_array, packet, packet_end, expanded_query));
#else
/*
In embedded library we re-install conversion routines each time
we set parameters, and also we don't need to parse packet.
So we do it in one function.
*/
res= set_params_data(this, expanded_query);
#endif
}
if (res)
{
my_error(ER_WRONG_ARGUMENTS, MYF(0),
is_sql_ps ? "EXECUTE" : "mysqld_stmt_execute");
reset_stmt_params(this);
}
return res;
}
/**
Execute a prepared statement. Re-prepare it a limited number
of times if necessary.
Try to execute a prepared statement. If there is a metadata
validation error, prepare a new copy of the prepared statement,
swap the old and the new statements, and try again.
If there is a validation error again, repeat the above, but
perform no more than MAX_REPREPARE_ATTEMPTS attempts in total.
@note We have to try several times in a loop since we
release metadata locks on tables after prepared statement
prepare. Therefore, a DDL statement may sneak in between prepare
and execute of a new statement. If this happens repeatedly
more than MAX_REPREPARE_ATTEMPTS times, we give up.
@return TRUE if an error, FALSE if success
@retval TRUE either MAX_REPREPARE_ATTEMPTS has been reached,
or some general error
@retval FALSE successfully executed the statement, perhaps
after having reprepared it a few times.
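
In outline, a condensed sketch of the loop below (the
CF_REEXECUTION_FRAGILE check and fatal-error handling are omitted):

@code
reexecute:
  reprepare_observer.reset_reprepare_observer();
  thd->m_reprepare_observer= &reprepare_observer;
  error= execute(expanded_query, open_cursor) || thd->is_error();
  thd->m_reprepare_observer= NULL;
  if (error && reprepare_observer.is_invalidated() &&
      reprepare_attempt++ < MAX_REPREPARE_ATTEMPTS && ! reprepare())
    goto reexecute;
@endcode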
*/
bool
Prepared_statement::execute_loop(String *expanded_query,
bool open_cursor,
uchar *packet,
uchar *packet_end)
{
const int MAX_REPREPARE_ATTEMPTS= 3;
Reprepare_observer reprepare_observer;
bool error;
int reprepare_attempt= 0;
/* Check if we got an error when sending long data */
if (state == Query_arena::STMT_ERROR)
{
my_message(last_errno, last_error, MYF(0));
return TRUE;
}
if (set_parameters(expanded_query, packet, packet_end))
return TRUE;
reexecute:
reprepare_observer.reset_reprepare_observer();
/*
If the free_list is not empty, we'll wrongly free some externally
allocated items when cleaning up after validation of the prepared
statement.
*/
DBUG_ASSERT(thd->free_list == NULL);
/*
Install the metadata observer. If some metadata version is
different from prepare time and an observer is installed,
the observer method will be invoked to push an error into
the error stack.
*/
if (sql_command_flags[lex->sql_command] &
CF_REEXECUTION_FRAGILE)
{
DBUG_ASSERT(thd->m_reprepare_observer == NULL);
thd->m_reprepare_observer = &reprepare_observer;
}
error= execute(expanded_query, open_cursor) || thd->is_error();
thd->m_reprepare_observer= NULL;
if (error && !thd->is_fatal_error && !thd->killed &&
reprepare_observer.is_invalidated() &&
reprepare_attempt++ < MAX_REPREPARE_ATTEMPTS)
{
DBUG_ASSERT(thd->stmt_da->sql_errno() == ER_NEED_REPREPARE);
thd->clear_error();
error= reprepare();
if (! error) /* Success */
goto reexecute;
}
reset_stmt_params(this);
return error;
}
bool
Prepared_statement::execute_server_runnable(Server_runnable *server_runnable)
{
Statement stmt_backup;
bool error;
Query_arena *save_stmt_arena= thd->stmt_arena;
Item_change_list save_change_list;
thd->change_list.move_elements_to(&save_change_list);
state= STMT_CONVENTIONAL_EXECUTION;
if (!(lex= new (mem_root) st_lex_local))
return TRUE;
thd->set_n_backup_statement(this, &stmt_backup);
thd->set_n_backup_active_arena(this, &stmt_backup);
thd->stmt_arena= this;
error= server_runnable->execute_server_code(thd);
thd->cleanup_after_query();
thd->restore_active_arena(this, &stmt_backup);
thd->restore_backup_statement(this, &stmt_backup);
thd->stmt_arena= save_stmt_arena;
save_change_list.move_elements_to(&thd->change_list);
/* Items and memory will be freed in the destructor */
return error;
}
/**
Reprepare this prepared statement.
Currently this is implemented by creating a new prepared
statement, preparing it with the original query and then
swapping the new statement and the original one.
@retval TRUE an error occurred. Possible errors include
incompatibility of new and old result set
metadata
@retval FALSE success, the statement has been reprepared
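
In essence (a condensed sketch of the method body; database
switching and name handling are omitted):

@code
  Prepared_statement copy(thd);
  error= copy.prepare(query(), query_length()) ||
         validate_metadata(&copy);
  if (! error)
    swap_prepared_statement(&copy);
  /* "copy" now holds the old statement; its destructor frees it */
@endcode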
*/
bool
Prepared_statement::reprepare()
{
char saved_cur_db_name_buf[NAME_LEN+1];
LEX_STRING saved_cur_db_name=
{ saved_cur_db_name_buf, sizeof(saved_cur_db_name_buf) };
LEX_STRING stmt_db_name= { db, db_length };
bool cur_db_changed;
bool error;
Prepared_statement copy(thd);
copy.set_sql_prepare(); /* To suppress sending metadata to the client. */
status_var_increment(thd->status_var.com_stmt_reprepare);
if (mysql_opt_change_db(thd, &stmt_db_name, &saved_cur_db_name, TRUE,
&cur_db_changed))
return TRUE;
error= ((name.str && copy.set_name(&name)) ||
copy.prepare(query(), query_length()) ||
validate_metadata(&copy));
if (cur_db_changed)
mysql_change_db(thd, &saved_cur_db_name, TRUE);
if (! error)
{
swap_prepared_statement(&copy);
swap_parameter_array(param_array, copy.param_array, param_count);
#ifndef DBUG_OFF
is_reprepared= TRUE;
#endif
/*
Clear possible warnings during reprepare; it has to be completely
transparent to the user. We use clear_warning_info() since
there was no separate query id issued for the re-prepare.
Sic: we can't simply silence warnings during reprepare, because if
it fails, we need to return all the warnings to the user.
*/
thd->warning_info->clear_warning_info(thd->query_id);
}
return error;
}
/**
Validate statement result set metadata (if the statement returns
a result set).
Currently we only check that the number of columns of the result
set did not change.
This is a helper method used during re-prepare.
@param[in] copy the re-prepared prepared statement to verify
the metadata of
@retval TRUE error, ER_PS_REBIND is reported
@retval FALSE the statement returns no or compatible metadata
*/
bool Prepared_statement::validate_metadata(Prepared_statement *copy)
{
/**
If this is an SQL prepared statement or EXPLAIN,
return FALSE -- the metadata of the original SELECT,
if any, has not been sent to the client.
*/
if (is_sql_prepare() || lex->describe)
return FALSE;
if (lex->select_lex.item_list.elements !=
copy->lex->select_lex.item_list.elements)
{
/** Column counts mismatch, update the client */
thd->server_status|= SERVER_STATUS_METADATA_CHANGED;
}
return FALSE;
}
/**
Replace the original prepared statement with a prepared copy.
This is a private helper that is used as part of statement
reprepare.
@return This function does not return any errors.
*/
void
Prepared_statement::swap_prepared_statement(Prepared_statement *copy)
{
Statement tmp_stmt;
/* Swap memory roots. */
swap_variables(MEM_ROOT, main_mem_root, copy->main_mem_root);
/* Swap the arenas */
tmp_stmt.set_query_arena(this);
set_query_arena(copy);
copy->set_query_arena(&tmp_stmt);
/* Swap the statement parent classes */
tmp_stmt.set_statement(this);
set_statement(copy);
copy->set_statement(&tmp_stmt);
/* Swap ids back, we need the original id */
swap_variables(ulong, id, copy->id);
/* Swap mem_roots back, they must continue pointing at the main_mem_roots */
swap_variables(MEM_ROOT *, mem_root, copy->mem_root);
/*
Swap the old and the new parameters array. The old array
is allocated in the old arena.
*/
swap_variables(Item_param **, param_array, copy->param_array);
Apply and review: 3655 Jon Olav Hauglid 2009-10-19 Bug #30977 Concurrent statement using stored function and DROP FUNCTION breaks SBR Bug #48246 assert in close_thread_table Implement a fix for: Bug #41804 purge stored procedure cache causes mysterious hang for many minutes Bug #49972 Crash in prepared statements The problem was that concurrent execution of DML statements that use stored functions and DDL statements that drop/modify the same function might result in incorrect binary log in statement (and mixed) mode and therefore break replication. This patch fixes the problem by introducing metadata locking for stored procedures and functions. This is similar to what is done in Bug#25144 for views. Procedures and functions now are locked using metadata locks until the transaction is either committed or rolled back. This prevents other statements from modifying the procedure/function while it is being executed. This provides commit ordering - guaranteeing serializability across multiple transactions and thus fixes the reported binlog problem. Note that we do not take locks for top-level CALLs. This means that procedures called directly are not protected from changes by simultaneous DDL operations so they are executed at the state they had at the time of the CALL. By not taking locks for top-level CALLs, we still allow transactions to be started inside procedures. This patch also changes stored procedure cache invalidation. Upon a change of cache version, we no longer invalidate the entire cache, but only those routines which we use, only when a statement is executed that uses them. This patch also changes the logic of prepared statement validation. A stored procedure used by a prepared statement is now validated only once a metadata lock has been acquired. A version mismatch causes a flush of the obsolete routine from the cache and statement reprepare. Incompatible changes: 1) ER_LOCK_DEADLOCK is reported for a transaction trying to access a procedure/function that is locked by a DDL operation in another connection. 2) Procedure/function DDL operations are now prohibited in LOCK TABLES mode as exclusive locks must be taken all at once and LOCK TABLES provides no way to specifiy procedures/functions to be locked. Test cases have been added to sp-lock.test and rpl_sp.test. Work on this bug has very much been a team effort and this patch includes and is based on contributions from Davi Arnaut, Dmitry Lenev, Magne Mæhre and Konstantin Osipov.
2009-12-29 13:19:05 +01:00
/* Don't swap flags: the copy has IS_SQL_PREPARE always set. */
/* swap_variables(uint, flags, copy->flags); */
/* Swap names, the old name is allocated in the wrong memory root */
swap_variables(LEX_STRING, name, copy->name);
/* Ditto */
swap_variables(char *, db, copy->db);
DBUG_ASSERT(db_length == copy->db_length);
DBUG_ASSERT(param_count == copy->param_count);
DBUG_ASSERT(thd == copy->thd);
last_error[0]= '\0';
last_errno= 0;
}
/**
Execute a prepared statement.
You should not change global THD state in this function, if at all
possible: it may be called from any context, e.g. when executing
a COM_* command, an SQLCOM_* command, or a stored procedure.
@param expanded_query A query for binlogging which has all parameter
markers ('?') replaced with their actual values.
@param open_cursor True if an attempt to open a cursor should be made.
Currently used only in the binary protocol.
@note
Preconditions, postconditions.
- See the comment for Prepared_statement::prepare().
@retval
FALSE ok
@retval
TRUE Error
*/
bool Prepared_statement::execute(String *expanded_query, bool open_cursor)
{
Statement stmt_backup;
Query_arena *old_stmt_arena;
bool error= TRUE;
char saved_cur_db_name_buf[NAME_LEN+1];
LEX_STRING saved_cur_db_name=
{ saved_cur_db_name_buf, sizeof(saved_cur_db_name_buf) };
bool cur_db_changed;
LEX_STRING stmt_db_name= { db, db_length };
status_var_increment(thd->status_var.com_stmt_execute);
if (flags & (uint) IS_IN_USE)
{
my_error(ER_PS_NO_RECURSION, MYF(0));
return TRUE;
}
/*
For SHOW VARIABLES lex->result is NULL, as it's a non-SELECT
command. For such queries we don't return an error and don't
open a cursor -- the client library will recognize this case and
materialize the result set.
For SELECT statements lex->result is created in
check_prepared_statement. lex->result->check_simple_select() is TRUE
in INSERT ... SELECT and similar commands.
*/
if (open_cursor && lex->result && lex->result->check_simple_select())
{
DBUG_PRINT("info",("Cursor asked for not SELECT stmt"));
return TRUE;
}
/* In case the command has a call to SP which re-uses this statement name */
flags|= IS_IN_USE;
close_cursor();
/*
If the item change list is not empty, we'll wrongly free some externally
allocated items when cleaning up after execution of this statement.
*/
DBUG_ASSERT(thd->change_list.is_empty());
/*
The only case where we should have items in the thd->free_list is
after stmt->set_params_from_vars(), which may in some cases create
Item_null objects.
*/
thd->set_n_backup_statement(this, &stmt_backup);
/*
Change the current database (if needed).
Force switching, because the database of the prepared statement may be
NULL (prepared statements can be created while no current database
selected).
*/
if (mysql_opt_change_db(thd, &stmt_db_name, &saved_cur_db_name, TRUE,
&cur_db_changed))
goto error;
/* Allocate query. */
if (expanded_query->length() &&
alloc_query(thd, (char*) expanded_query->ptr(),
expanded_query->length()))
{
my_error(ER_OUTOFMEMORY, MYF(ME_FATALERROR), expanded_query->length());
goto error;
}
/*
Expanded query is needed for slow logging, so we want thd->query
to point at it even after we restore from backup. This is ok, as
expanded query was allocated in thd->mem_root.
*/
stmt_backup.set_query_inner(thd->query_string);
/*
At the first execution of a prepared statement we may perform logical
transformations of the query tree. Such changes should be performed
on the parse tree of the current prepared statement, and new items should
be allocated in its memory root. Set the appropriate pointer in THD
to the arena of the statement.
*/
old_stmt_arena= thd->stmt_arena;
thd->stmt_arena= this;
reinit_stmt_before_use(thd, lex);
/* Go! */
if (open_cursor)
error= mysql_open_cursor(thd, &result, &cursor);
else
{
/*
Try to find it in the query cache, if not, execute it.
Note that multi-statements cannot exist here (they are not supported in
prepared statements).
*/
if (query_cache_send_result_to_client(thd, thd->query(),
thd->query_length()) <= 0)
{
MYSQL_QUERY_EXEC_START(thd->query(),
thd->thread_id,
(char *) (thd->db ? thd->db : ""),
&thd->security_ctx->priv_user[0],
(char *) thd->security_ctx->host_or_ip,
1);
error= mysql_execute_command(thd);
MYSQL_QUERY_EXEC_DONE(error);
}
}
/*
Restore the current database (if changed).
Force switching back to the saved current database (if changed),
because it may be NULL. In this case, mysql_change_db() would generate
an error.
*/
if (cur_db_changed)
mysql_change_db(thd, &saved_cur_db_name, TRUE);
/* Assert that if there was an error, no cursor is open */
DBUG_ASSERT(! (error && cursor));
if (! cursor)
cleanup_stmt();
thd->set_statement(&stmt_backup);
thd->stmt_arena= old_stmt_arena;
if (state == Query_arena::STMT_PREPARED)
state= Query_arena::STMT_EXECUTED;
if (error == 0 && this->lex->sql_command == SQLCOM_CALL)
{
if (is_sql_prepare())
thd->protocol_text.send_out_parameters(&this->lex->param_list);
else
thd->protocol->send_out_parameters(&this->lex->param_list);
}
/*
Log COM_STMT_EXECUTE to the general log. Note that in case of SQL
prepared statements this causes two records to be output:
Query EXECUTE <statement name>
Execute <statement SQL text>
This is considered user-friendly, since in the
second log entry we output values of parameter markers.
Do not print anything if this is an SQL prepared statement and
we're inside a stored procedure (also called Dynamic SQL) --
sub-statements inside stored procedures are not logged into
the general log.
*/
if (error == 0 && thd->spcont == NULL)
general_log_write(thd, COM_STMT_EXECUTE, thd->query(), thd->query_length());
error:
flags&= ~ (uint) IS_IN_USE;
return error;
}
/** Common part of DEALLOCATE PREPARE and mysqld_stmt_close. */
void Prepared_statement::deallocate()
{
/* We account deallocate in the same manner as mysqld_stmt_close */
status_var_increment(thd->status_var.com_stmt_close);
/* Statement map calls delete stmt on erase */
thd->stmt_map.erase(this);
}
/***************************************************************************
* Ed_result_set
***************************************************************************/
/**
Use operator delete to free memory of Ed_result_set.
Accessing members of a class after the class has been destroyed
is a violation of the C++ standard, but it is common practice in
the server code.
*/
void Ed_result_set::operator delete(void *ptr, size_t size) throw ()
{
if (ptr)
{
/*
Make a stack copy, otherwise free_root() will attempt to
write to freed memory.
*/
MEM_ROOT own_root= ((Ed_result_set*) ptr)->m_mem_root;
free_root(&own_root, MYF(0));
}
}
/**
Initialize an instance of Ed_result_set.
Instances of the class, as well as all result set rows, are
always allocated in the memory root passed over as the second
argument. In the constructor, we take over ownership of the
memory root. It will be freed when the class is destroyed.
Sic: Ed_result_set is not designed to be allocated on the stack.
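
Typical construction, mirroring send_eof() below (a sketch;
column_count stands for the number of columns in the set):

@code
  MEM_ROOT rset_root;
  init_sql_alloc(&rset_root, MEM_ROOT_BLOCK_SIZE, 0);
  List<Ed_row> *rows= new (&rset_root) List<Ed_row>;
  Ed_result_set *rset= new (&rset_root) Ed_result_set(rows, column_count,
                                                      &rset_root);
  /* rset_root is now cleared: the result set owns the memory */
@endcode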
*/
Ed_result_set::Ed_result_set(List<Ed_row> *rows_arg,
size_t column_count_arg,
MEM_ROOT *mem_root_arg)
:m_mem_root(*mem_root_arg),
m_column_count(column_count_arg),
m_rows(rows_arg),
m_next_rset(NULL)
{
/* Take over responsibility for the memory */
clear_alloc_root(mem_root_arg);
}
/***************************************************************************
* Ed_connection
***************************************************************************/
/**
Create a new "execute direct" connection.
*/
Ed_connection::Ed_connection(THD *thd)
:m_warning_info(thd->query_id, false),
Backport of revno 2630.28.10, 2630.28.31, 2630.28.26, 2630.33.1, 2630.39.1, 2630.28.29, 2630.34.3, 2630.34.2, 2630.34.1, 2630.29.29, 2630.29.28, 2630.31.1, 2630.28.13, 2630.28.10, 2617.23.14 and some other minor revisions. This patch implements: WL#4264 "Backup: Stabilize Service Interface" -- all the server prerequisites except si_objects.{h,cc} themselves (they can be just copied over, when needed). WL#4435: Support OUT-parameters in prepared statements. (and all issues in the initial patches for these two tasks, that were discovered in pushbuild and during testing). Bug#39519: mysql_stmt_close() should flush all data associated with the statement. After execution of a prepared statement, send OUT parameters of the invoked stored procedure, if any, to the client. When using the binary protocol, send the parameters in an additional result set over the wire. When using the text protocol, assign out parameters to the user variables from the CALL(@var1, @var2, ...) specification. The following refactoring has been made: - Protocol::send_fields() was renamed to Protocol::send_result_set_metadata(); - A new Protocol::send_result_set_row() was introduced to incapsulate common functionality for sending row data. - Signature of Protocol::prepare_for_send() was changed: this operation does not need a list of items, the number of items is fully sufficient. The following backward incompatible changes have been made: - CLIENT_MULTI_RESULTS is now enabled by default in the client; - CLIENT_PS_MULTI_RESUTLS is now enabled by default in the client.
2009-10-21 22:02:06 +02:00
m_thd(thd),
m_rsets(0),
m_current_rset(0)
{
}
/**
Free all result sets of the previous statement, if any,
and reset warnings and errors.
Called before execution of the next query.
*/
void
Ed_connection::free_old_result()
{
while (m_rsets)
{
Ed_result_set *rset= m_rsets->m_next_rset;
delete m_rsets;
m_rsets= rset;
}
m_current_rset= m_rsets;
m_diagnostics_area.reset_diagnostics_area();
m_warning_info.clear_warning_info(m_thd->query_id);
}
/**
A simple wrapper that uses a helper class to execute SQL statements.
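
A usage sketch (process_rows() is hypothetical; checking the
connection's diagnostics area on failure is omitted):

@code
  Ed_connection con(thd);
  LEX_STRING sql= { C_STRING_WITH_LEN("SELECT 1") };
  if (! con.execute_direct(sql))              /* FALSE means success */
    process_rows(con.store_result_set());
@endcode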
*/
bool
Ed_connection::execute_direct(LEX_STRING sql_text)
{
Execute_sql_statement execute_sql_statement(sql_text);
DBUG_PRINT("ed_query", ("%s", sql_text.str));
return execute_direct(&execute_sql_statement);
}
/**
Execute a fragment of server functionality without an effect on
thd, and store results in memory.
Conventions:
- the code fragment must finish with OK, EOF or ERROR.
- the code fragment doesn't have to close thread tables,
free memory, commit the statement transaction, or do any other
cleanup that is normally done at the end of dispatch_command().
@param server_runnable A code fragment to execute.
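
A sketch of a caller-provided fragment (My_runnable is hypothetical;
the only interface relied upon is execute_server_code(), invoked by
Prepared_statement::execute_server_runnable() above):

@code
  class My_runnable : public Server_runnable
  {
  public:
    virtual bool execute_server_code(THD *thd)
    {
      my_ok(thd);             /* must finish with OK, EOF or ERROR */
      return FALSE;
    }
  };

  My_runnable runnable;
  bool rc= ed_connection->execute_direct(&runnable);
@endcode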
*/
bool Ed_connection::execute_direct(Server_runnable *server_runnable)
{
bool rc= FALSE;
Protocol_local protocol_local(m_thd, this);
Prepared_statement stmt(m_thd);
Protocol *save_protocol= m_thd->protocol;
Diagnostics_area *save_diagnostics_area= m_thd->stmt_da;
Warning_info *save_warning_info= m_thd->warning_info;
DBUG_ENTER("Ed_connection::execute_direct");
free_old_result(); /* Delete all data from previous execution, if any */
m_thd->protocol= &protocol_local;
m_thd->stmt_da= &m_diagnostics_area;
m_thd->warning_info= &m_warning_info;
rc= stmt.execute_server_runnable(server_runnable);
m_thd->protocol->end_statement();
m_thd->protocol= save_protocol;
m_thd->stmt_da= save_diagnostics_area;
m_thd->warning_info= save_warning_info;
/*
Protocol_local makes use of m_current_rset to keep
track of the last result set, while adding result sets to the end.
Reset it to point to the first result set instead.
*/
m_current_rset= m_rsets;
DBUG_RETURN(rc);
}
/**
A helper method that is called only during execution.
Although Ed_connection doesn't support multi-statements,
a statement may generate many result sets. All subsequent
result sets are appended to the end.
@pre This is called only by Protocol_local.
*/
void
Ed_connection::add_result_set(Ed_result_set *ed_result_set)
{
if (m_rsets)
{
m_current_rset->m_next_rset= ed_result_set;
/* While appending, use m_current_rset as a pointer to the tail. */
m_current_rset= ed_result_set;
}
else
m_current_rset= m_rsets= ed_result_set;
}
/**
Release ownership of the current result set to the client.
Since we use a simple linked list for result sets,
this method uses a linear search for the previous result
set to exclude the released instance from the list.
@todo Use double-linked list, when this is really used.
XXX: This has never been tested with more than one result set!
@pre There must be a result set.
*/
Ed_result_set *
Ed_connection::store_result_set()
{
Ed_result_set *ed_result_set;
DBUG_ASSERT(m_current_rset);
if (m_current_rset == m_rsets)
{
/* Assign the return value */
ed_result_set= m_current_rset;
/* Exclude the return value from the list. */
m_current_rset= m_rsets= m_rsets->m_next_rset;
}
else
{
Ed_result_set *prev_rset= m_rsets;
/* Assign the return value. */
ed_result_set= m_current_rset;
/* Exclude the return value from the list */
while (prev_rset->m_next_rset != m_current_rset)
prev_rset= prev_rset->m_next_rset;
m_current_rset= prev_rset->m_next_rset= m_current_rset->m_next_rset;
}
ed_result_set->m_next_rset= NULL; /* safety */
return ed_result_set;
}
/*************************************************************************
* Protocol_local
**************************************************************************/
Protocol_local::Protocol_local(THD *thd, Ed_connection *ed_connection)
:Protocol(thd),
m_connection(ed_connection),
m_rset(NULL),
m_column_count(0),
m_current_row(NULL),
m_current_column(NULL)
{
clear_alloc_root(&m_rset_root);
}
/**
Called between two result set rows.
Prepare structures to fill result set rows.
Unfortunately, we can't return an error here. If memory allocation
fails, the error will have to be returned later, as is done
in methods such as @sa store_column().
*/
void Protocol_local::prepare_for_resend()
{
DBUG_ASSERT(alloc_root_inited(&m_rset_root));
opt_add_row_to_rset();
/* Start a new row. */
m_current_row= (Ed_column *) alloc_root(&m_rset_root,
sizeof(Ed_column) * m_column_count);
m_current_column= m_current_row;
}
/**
In "real" protocols this is called to finish a result set row.
Unused in the local implementation.
*/
bool Protocol_local::write()
{
return FALSE;
}
/**
A helper function to add the current row to the current result
set. Called in @sa prepare_for_resend(), when a new row is started,
and in send_eof(), when the result set is finished.
*/
void Protocol_local::opt_add_row_to_rset()
{
if (m_current_row)
{
/* Add the old row to the result set */
Ed_row *ed_row= new (&m_rset_root) Ed_row(m_current_row, m_column_count);
if (ed_row)
m_rset->push_back(ed_row, &m_rset_root);
}
}
/**
Add a NULL column to the current row.
*/
bool Protocol_local::store_null()
{
if (m_current_column == NULL)
return TRUE; /* prepare_for_resend() failed to allocate memory. */
bzero(m_current_column, sizeof(*m_current_column));
++m_current_column;
return FALSE;
}
/**
A helper method to add any column to the current row
in its binary form.
Allocates memory for the data in the result set memory root.
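
Integer values are stored in host byte order; a consumer reads them
back by copying from Ed_column::str (a sketch; "col" stands for a
column of a fetched row):

@code
  int32 v;
  memcpy(&v, col->str, sizeof(v));    /* value stored by store_long() */
@endcode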
*/
bool Protocol_local::store_column(const void *data, size_t length)
{
if (m_current_column == NULL)
return TRUE; /* prepare_for_resend() failed to allocate memory. */
/*
alloc_root() automatically aligns memory, so we don't need to
do any extra alignment if we're pointing to, say, an integer.
*/
m_current_column->str= (char*) memdup_root(&m_rset_root,
data,
length + 1 /* Safety */);
if (! m_current_column->str)
return TRUE;
m_current_column->str[length]= '\0'; /* Safety */
m_current_column->length= length;
++m_current_column;
return FALSE;
}
/**
Store a string value in a result set column, optionally
having converted it to character_set_results.
*/
bool
Protocol_local::store_string(const char *str, size_t length,
CHARSET_INFO *src_cs, CHARSET_INFO *dst_cs)
{
/* Store with conversion */
uint error_unused;
if (dst_cs && !my_charset_same(src_cs, dst_cs) &&
src_cs != &my_charset_bin &&
dst_cs != &my_charset_bin)
{
if (convert->copy(str, length, src_cs, dst_cs, &error_unused))
return TRUE;
str= convert->ptr();
length= convert->length();
}
return store_column(str, length);
}
/** Store a tiny int as is (1 byte) in a result set column. */
bool Protocol_local::store_tiny(longlong value)
{
char v= (char) value;
return store_column(&v, 1);
}
/** Store a short as is (2 bytes, host order) in a result set column. */
bool Protocol_local::store_short(longlong value)
{
int16 v= (int16) value;
return store_column(&v, 2);
}
/** Store a "long" as is (4 bytes, host order) in a result set column. */
bool Protocol_local::store_long(longlong value)
{
int32 v= (int32) value;
return store_column(&v, 4);
}
/** Store a "longlong" as is (8 bytes, host order) in a result set column. */
bool Protocol_local::store_longlong(longlong value, bool unsigned_flag)
{
int64 v= (int64) value;
return store_column(&v, 8);
}
/** Store a decimal in string format in a result set column */
bool Protocol_local::store_decimal(const my_decimal *value)
{
char buf[DECIMAL_MAX_STR_LENGTH];
String str(buf, sizeof (buf), &my_charset_bin);
int rc;
rc= my_decimal2string(E_DEC_FATAL_ERROR, value, 0, 0, 0, &str);
if (rc)
return TRUE;
return store_column(str.ptr(), str.length());
}
/** Convert to cs_results and store a string. */
bool Protocol_local::store(const char *str, size_t length,
CHARSET_INFO *src_cs)
{
CHARSET_INFO *dst_cs;
dst_cs= m_connection->m_thd->variables.character_set_results;
return store_string(str, length, src_cs, dst_cs);
}
/** Store a string. */
bool Protocol_local::store(const char *str, size_t length,
CHARSET_INFO *src_cs, CHARSET_INFO *dst_cs)
{
return store_string(str, length, src_cs, dst_cs);
}
/* Store MYSQL_TIME (in binary format) */
bool Protocol_local::store(MYSQL_TIME *time)
{
return store_column(time, sizeof(MYSQL_TIME));
}
/** Store MYSQL_TIME (in binary format) */
bool Protocol_local::store_date(MYSQL_TIME *time)
{
return store_column(time, sizeof(MYSQL_TIME));
}
/** Store MYSQL_TIME (in binary format) */
bool Protocol_local::store_time(MYSQL_TIME *time)
{
return store_column(time, sizeof(MYSQL_TIME));
}
/* Store a floating point number, as is. */
bool Protocol_local::store(float value, uint32 decimals, String *buffer)
{
return store_column(&value, sizeof(float));
}
/* Store a double precision number, as is. */
bool Protocol_local::store(double value, uint32 decimals, String *buffer)
{
return store_column(&value, sizeof (double));
}
/* Store a Field. */
bool Protocol_local::store(Field *field)
{
if (field->is_null())
return store_null();
return field->send_binary(this);
}
/** Called to start a new result set. */
bool Protocol_local::send_result_set_metadata(List<Item> *columns, uint)
{
DBUG_ASSERT(m_rset == 0 && !alloc_root_inited(&m_rset_root));
init_sql_alloc(&m_rset_root, MEM_ROOT_BLOCK_SIZE, 0);
if (! (m_rset= new (&m_rset_root) List<Ed_row>))
return TRUE;
m_column_count= columns->elements;
return FALSE;
}
/**
Normally this is a separate result set with OUT parameters
of stored procedures. Currently unsupported for the local
version.
*/
bool Protocol_local::send_out_parameters(List<Item_param> *sp_params)
{
return FALSE;
}
/** Called for statements that don't have a result set, at statement end. */
bool
Protocol_local::send_ok(uint server_status, uint statement_warn_count,
ulonglong affected_rows, ulonglong last_insert_id,
const char *message)
{
/*
Just make sure nothing is sent to the client; we have grabbed
the status information in the connection diagnostics area.
*/
return FALSE;
}
/**
Called at the end of a result set. Append a complete
result set to the list in Ed_connection.
Don't send anything to the client, but instead finish
building of the result set at hand.
*/
bool Protocol_local::send_eof(uint server_status, uint statement_warn_count)
{
Ed_result_set *ed_result_set;
DBUG_ASSERT(m_rset);
opt_add_row_to_rset();
m_current_row= 0;
ed_result_set= new (&m_rset_root) Ed_result_set(m_rset, m_column_count,
&m_rset_root);
m_rset= NULL;
if (! ed_result_set)
return TRUE;
/* In case of successful allocation memory ownership was transferred. */
DBUG_ASSERT(!alloc_root_inited(&m_rset_root));
/*
Link the created Ed_result_set instance into the list of connection
result sets. Never fails.
*/
m_connection->add_result_set(ed_result_set);
return FALSE;
}
/** Called to send an error to the client at the end of a statement. */
bool
Protocol_local::send_error(uint sql_errno, const char *err_msg, const char*)
{
/*
Just make sure that nothing is sent to the client (default
implementation).
*/
return FALSE;
}
#ifdef EMBEDDED_LIBRARY
void Protocol_local::remove_last_row()
{ }
#endif