NDB table".
The SQL layer was not marking fields which were used in triggers as such. As
a result these fields were not always properly retrieved/stored by the handler
layer, so one could get wrong values or lose changes in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
This patch also contains the following cleanup of the ha_ndbcluster code:
we no longer rely on reading the LEX::sql_command value in the handler in order
to determine whether we can enable the optimization which allows us to handle the
REPLACE statement in a more efficient way, by doing replaces directly in the
write_row() method without reporting an error to the SQL layer.
Instead we rely on the SQL layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases when it should not
be used (e.g. if there are ON DELETE triggers on the table) and use it in some
additional cases where it is applicable (e.g. for LOAD DATA REPLACE).
Finally, this patch includes a fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem caused by improper field mark-up.
During row replacement, fields which weren't explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they
retained the values from the old row version. The fix is to mark all table
fields as set for the REPLACE statement. Note that in 5.1 this problem is
already solved by notifying the handler that it should save values from all
fields only when a real replacement happens.
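A minimal SQL sketch of the bug#20728 shape (table name, columns and data are made up for illustration):

  CREATE TABLE t1 (
    pk INT NOT NULL PRIMARY KEY,
    uk INT NOT NULL,
    payload VARCHAR(32),
    UNIQUE KEY (uk)
  ) ENGINE=NDBCLUSTER;

  INSERT INTO t1 VALUES (1, 10, 'old');

  -- Only pk and uk are listed; before the fix, payload could silently keep
  -- the value from the replaced row instead of being reset to its default.
  REPLACE INTO t1 (pk, uk) VALUES (1, 10);

  SELECT payload FROM t1 WHERE pk = 1;  -- expected: NULL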
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
Before every operation that requires a write lock, we need to wait for the
release of the global read lock if one is set.
But we must not wait if we have locked tables with LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
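A two-session SQL sketch of the rule above (t1 is a hypothetical table):

  -- Session 1: acquire the global read lock; it waits until all
  -- write locks are released.
  FLUSH TABLES WITH READ LOCK;

  -- Session 2: a statement that needs a write lock must wait until the
  -- global read lock is released (unless the session holds its tables
  -- via LOCK TABLES).
  INSERT INTO t1 VALUES (1);   -- blocks

  -- Session 1: release the global read lock; session 2 proceeds.
  UNLOCK TABLES;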
schemas
The function check_one_table_access(), called to check access to tables in
SELECT/INSERT/UPDATE, was doing additional checks/modifications that don't hold
in the context of setup_tables_and_check_access().
That's why check_one_table_access() was split in two: the functionality needed by
setup_tables_and_check_access() moved into check_single_table_access(), and the rest
of the functionality stays in check_one_table_access(), which now calls the new
check_single_table_access() function.
The check for view security was lacking in several points:
1. Checking with the right set of permissions: each table ref that
participates in a view carries the right credentials to use in its
security_ctx member, but these weren't used for checking the credentials.
This made it hard to enforce the SQL SECURITY DEFINER|INVOKER property
consistently.
2. Because of the above, security checking for views was simply ruled out
explicitly in several places.
3. Security was checked only for the columns of the tables that are
brought into the query from a view. So if there was no column reference
outside of the view definition, the lack of access to the tables in the
view in SQL SECURITY INVOKER mode was not detected.
The fix addresses these three points.
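A sketch of point 3 (all names invented): with SQL SECURITY INVOKER, access to the underlying tables must be checked even when the query references no columns explicitly.

  CREATE TABLE base_t (a INT);
  CREATE SQL SECURITY INVOKER VIEW v_inv AS SELECT a FROM base_t;

  -- Executed by an invoker who has rights on v_inv but not on base_t:
  -- no column of base_t is referenced outside the view definition, yet
  -- the statement must still fail with an access error.
  SELECT COUNT(*) FROM v_inv;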
Replaced COND_refresh with COND_global_read_lock because of a bug in NPTL threads when using different mutexes as arguments to pthread_cond_wait().
The original code caused a hang in FLUSH TABLES WITH READ LOCK in some circumstances because pthread_cond_broadcast() was not delivered to other threads.
This fixes:
Bug#16986: Deadlock condition with MyISAM tables
Bug#20048: FLUSH TABLES WITH READ LOCK causes a deadlock
specific to the 5.0 version of the patch is motivated by the fact that a wrapper over
MYSQLLOG::write cannot help in 5.0, where the query's charset is embedded into the event instance in the constructor.
The pattern used to generate binlog entries for DROPped temporary tables in close_temporary_tables()
was buggy: it could not deal with a table whose name contains a grave accent.
The fix uses `append_identifier()' for quoting and doubling the accents.
The binlog also lacked encoding info about the DROPped temporary table.
The idea of the fix is to switch temporarily to system_charset_info when a temporary table
is DROPped for the binlog. Since it is the server, not the client, that automatically generates the query,
the binlog should carry the server's encoding for the coming DROP.
`write_binlog_with_system_charset()' is introduced to replace similar problematic places in the code.
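A sketch of the problematic identifier (hypothetical name):

  -- A temporary table whose name contains a grave accent.
  CREATE TEMPORARY TABLE `t``1` (a INT);

  -- When the connection ends, the server (not the client) generates the
  -- DROP for the binlog; append_identifier() quotes the name and doubles
  -- the accent, and the statement is written using the server's charset,
  -- roughly:
  --   DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t``1`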
or implicitly uses stored function gives "Table not locked" error'
A CREATE TABLE ... SELECT ... statement which explicitly or implicitly
(through a view) used a stored function gave a "Table not locked" error.
The actual bug resides in the current locking scheme of the CREATE TABLE ... SELECT
code, which first opens and locks the tables of the SELECT statement itself,
and then, with the SELECT tables locked, creates the .FRM, opens the .FRM and
acquires a lock on it. This scheme opens the possibility of a deadlock, which
has been present and ignored since version 3.23 or earlier. This scheme also
conflicts with an invariant of the prelocking algorithm -- no table can
be open and locked while there are tables locked in prelocked mode.
The patch makes an exception to this invariant when doing CREATE TABLE ...
SELECT, thus extending the possibility of a deadlock to prelocked mode.
We can't supply a better fix in 5.0.
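A reproduction sketch (t1, f1 and v1 are hypothetical names):

  DELIMITER //
  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    RETURN (SELECT COUNT(*) FROM t1);
  END//
  DELIMITER ;

  CREATE VIEW v1 AS SELECT f1() AS c;

  -- Both forms used to fail with the "Table not locked" error:
  CREATE TABLE t2 SELECT f1();        -- explicit use of the function
  CREATE TABLE t3 SELECT * FROM v1;   -- implicit use through the view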
Bug #19606: ssl variables are not displayed in show variables
Bug #19616: log_queries_not_using_indexes is not listed in show variables
Make basedir, datadir, tmpdir, log_queries_not_using_indexes, ssl_ca,
ssl_capath, ssl_cert, ssl_cipher, and ssl_key all available both from
SHOW VARIABLES and as @@variables.
As a side-effect of this change, log_queries_not_using_indexes can
be changed at runtime (but only globally, not per-connection).
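For example, after this change:

  SHOW VARIABLES LIKE 'ssl%';
  SELECT @@basedir, @@datadir, @@tmpdir;

  -- log_queries_not_using_indexes can now also be changed at runtime,
  -- globally only:
  SET GLOBAL log_queries_not_using_indexes = 1;
  SELECT @@global.log_queries_not_using_indexes;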
This performance degradation was due to the fact that some
cost evaluation code added in 4.1 to the function find_best was
not merged into the code of the function best_access_path, which was added
together with other code for the greedy optimizer.
Added a parameter to the function print_plan. The parameter contains the
accumulated cost for a given partial join.
The patch does not include a special test case since this performance
degradation is hard to reproduce with a simple example.
TODO: make the function find_best use the function best_access_path
in order to remove the duplication of code, which might result in incomplete
merges in the future.
too many open statements". The patch adds a new global variable
@@max_prepared_stmt_count. This variable limits the total number
of prepared statements in the server. The default value of
@@max_prepared_stmt_count is 16382: 16382 small statements
(a SELECT against 3 tables with GROUP BY, ORDER BY and LIMIT) consume
100MB of RAM. Once this limit has been reached, the server will
refuse to prepare a new statement and return ER_UNKNOWN_ERROR
(unfortunately, we can't add new errors to 4.1 without breaking 5.0). The limit can be changed after startup
and accepts any value from 0 to 1 million. If
the new value of the limit is less than the current
statement count, no new statements can be added, while the existing ones
can still be used. Additionally, the current count of prepared
statements is now available through a global read-only variable
@@prepared_stmt_count.
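Usage sketch:

  -- Inspect the limit and the current count:
  SELECT @@max_prepared_stmt_count, @@prepared_stmt_count;

  -- The limit is global and changeable at runtime (0 .. 1 million):
  SET GLOBAL max_prepared_stmt_count = 1000;

  -- Once the count has reached the limit, a further PREPARE is refused
  -- (reported as ER_UNKNOWN_ERROR in 4.1):
  PREPARE stmt FROM 'SELECT 1';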
After FLUSH STATUS, max_used_connections was reset to 0 and was not
updated while cached threads were reused, until the moment a new
thread was created.
The first fix suggested in the original bug report was implemented:
a) On flushing the status, set max_used_connections to
threads_connected, not to 0.
b) Check whether it is necessary to increment max_used_connections when
taking a thread from the cache as well as when creating new threads.
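Expected behaviour after the fix (sketch):

  SHOW STATUS LIKE 'Max_used_connections';
  FLUSH STATUS;
  -- The counter restarts from the current Threads_connected value,
  -- not from 0, and is bumped when a cached thread is reused as well
  -- as when a new thread is created.
  SHOW STATUS LIKE 'Max_used_connections';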
GROUP_CONCAT uses its own temporary table. When ROLLUP is present,
a second copy of Item_func_group_concat is created. This copy receives the
same list of arguments as the original group_concat does. When the copy is
set up, the result_fields of the functions from the argument list are reset to the
temporary table of this copy.
As a result, data from the functions flows directly to the ROLLUP copy
and the original group_concat functions show wrong results.
Since queries with COUNT(DISTINCT ...) use temporary tables to store
the results of the COUNT function, they are also affected by this bug.
The idea of the fix is to copy the content of the result_field for the function
under GROUP_CONCAT/COUNT from the first temporary table to the second one,
rather than setting result_field to point to the second temporary table.
To achieve this, a force_copy_fields flag is added to the Item_func_group_concat
and Item_sum_count_distinct classes. This flag is initialized to 0 and set to 1
in the make_unique() member function of both classes.
The TMP_TABLE_PARAM structure is modified to include a similar flag as
well.
The create_tmp_table() function passes that flag to create_tmp_field().
When the flag is set, create_tmp_field() will use result_field
as a source field and will not reset that result_field to the newly created
field for Item_func_result_field and its descendants. Due to this, a
copy function will be created to copy data from the old result_field to the newly
created field.
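A sketch of the affected query shape (table and data invented):

  CREATE TABLE t1 (grp INT, val VARCHAR(10));
  INSERT INTO t1 VALUES (1,'a'), (1,'b'), (2,'c');

  -- With ROLLUP, a second Item_func_group_concat is set up internally;
  -- before the fix the per-group rows could come out wrong while data
  -- flowed straight into the ROLLUP copy.
  SELECT grp, GROUP_CONCAT(val) FROM t1 GROUP BY grp WITH ROLLUP;

  -- COUNT(DISTINCT ...) also uses a temporary table and was affected too.
  SELECT grp, COUNT(DISTINCT val) FROM t1 GROUP BY grp WITH ROLLUP;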
"MySQL server crashes if you try to access an InnoDB table"
The crash was caused by a schizophrenic mysqld: two memory locations for the logically same function,
with conflicting values.
Fixed by backporting the 5.1 changes to the have_xyz_db declarations.
if --skip-grant-tables specified.
The problem is that there is a check that prevents creating a definer
with an empty host name.
In --skip-grant-tables mode this check prevents the user from creating a
trigger/view without explicitly specifying its definer. This happens because
in --skip-grant-tables mode CURRENT_USER is ''@''. According to Sanja this
check was implemented intentionally.
However, according to the MySQL manual it is possible to specify an empty host
name (as well as an empty user name). Moreover, the behaviour for stored routines
is different in this respect -- we allow them to be created with an implicit
definer.
Based on this, we believe it is OK to change the behaviour for views to be
similar to the behaviour for stored routines.
The idea of the fix is to extend support for non-SUID triggers for backward
compatibility. Formerly, non-SUID triggers appeared when a "new" server
was started against an "old" database. Now they are also created when
a "new" slave receives updates from an "old" master.
- Added empty constructors and virtual destructors to many classes and structs
- Removed some usage of the offsetof() macro to instead use C++ class pointers
trigger starts trigger".
In short, the deadlock/crash happened when execution of a statement which used
stored functions or activated triggers coincided with alteration of the
tables used by these functions or triggers (in a highly concurrent environment).
The bug was caused by incorrect handling of tables from the prelocked set in
the open_tables() function when a refresh happened. This fix replaces the
old smart but not very robust way of handling tables after a refresh (which
closed only old tables) with a new one which simply closes all tables opened so
far and restarts open_tables().
Also fixed handling of temporary tables in close_tables_for_reopen().
No test case is present since the bug manifests itself only in a concurrent environment.
Problem #1: INSERT...SELECT, Version for 5.0.
Extended the unique-table check with a check of the lock data.
MERGE sub-tables cannot be detected by name checks alone.
Problem #1: INSERT...SELECT, Version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the SELECT result.
The duplicate detection now works on the locks after
open_and_lock_tables().
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway as it should in theory
fix a possible MERGE table problem with multi-table UPDATE.
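A sketch of the previously failing shape (all names invented):

  CREATE TABLE m1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m2 (a INT) ENGINE=MyISAM;
  CREATE TABLE mrg (a INT) ENGINE=MERGE UNION=(m1, m2) INSERT_METHOD=LAST;

  -- The source table m1 is also a sub-table of the target MERGE table;
  -- a name check alone cannot see that, the lock data can.  In the 4.1
  -- version this now works by buffering the SELECT result.
  INSERT INTO mrg SELECT a FROM m1;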
according to the standard.
The idea is to use Field classes to implement stored routine
variables. Also, we should provide a facade to the Item hierarchy
via the Item_field class (this is necessary, since stored routine variables
take part in expressions). A small usage sketch follows the bug list below.
The patch fixes the following bugs:
- BUG#8702: Stored Procedures: No Error/Warning shown for inappropriate data
type matching;
- BUG#8768: Functions: For any unsigned data type, -ve values can be passed
and returned;
- BUG#8769: Functions: For Int datatypes, out of range values can be passed
and returned;
- BUG#9078: STORED PROCDURE: Decimal digits are not displayed when we use
DECIMAL datatype;
- BUG#9572: Stored procedures: variable type declarations ignored;
- BUG#12903: upper function does not work inside a function;
- BUG#13705: parameters to stored procedures are not verified;
- BUG#13808: ENUM type stored procedure parameter accepts non-enumerated
data;
- BUG#13909: Varchar Stored Procedure Parameter always BINARY string (ignores
CHARACTER SET);
- BUG#14161: Stored procedure cannot retrieve bigint unsigned;
- BUG#14188: BINARY variables have no 0x00 padding;
- BUG#15148: Stored procedure variables accept non-scalar values;
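A small usage sketch of the enforced typing (hypothetical procedure; exact diagnostics follow the usual column/field rules):

  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v TINYINT UNSIGNED;
    DECLARE s VARCHAR(3);
    SET v = -1;        -- out-of-range value is no longer silently accepted
    SET s = 'abcdef';  -- value is truncated to the declared length
    SELECT v, s;
  END//
  DELIMITER ;

  CALL p1();
  SHOW WARNINGS;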
Allow for configuration of the maximum number of indexes per table.
Added and used a configure.in macro.
Replaced fixed limits with the configurable limit.
Limited MyISAM indexes to their hard limit.
Fixed a bug in opt_range.cc for many indexes with InnoDB.
Tested for 2, 63, 64, 65, 127, 128, 129, 255, 256, and 257 indexes.
Testing this part of the bugfix requires rebuilding the server
with different options. This cannot be done with our test suite.
Therefore I added the necessary test files to the bug report.
If you repeat the tests, please note that the ps_* tests fail for
everything but 64 indexes. This is because of differences in the
metadata, namely field lengths for index names etc.
Post-review fixes that simplify the way access rights
are checked during name resolution and factor out all
entry points to check access rights into one single
function.
CREATE TABLE and PS/SP": make sure that 'typelib' object for
ENUM values and 'Item_string' object for DEFAULT clause are
created in the statement memory root.
Version for 5.0.
It fixes three problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked, though
the function comment clearly said it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
3. In 5.0 (not in 4.1 or 4.0) DROP TABLE had a possible deadlock flaw when run
concurrently with FLUSH TABLES WITH READ LOCK. I call the fix for this
problem "the 5.0 addendum fix".
Indeed, now that a stored procedure CALL is not binlogged, but the invoked substatements are,
the restrictions applied by log-bin-trust-routine-creators=0 are superfluous for procedures.
They still need to apply to functions, where function calls are written to the binlog (for example as "DO myfunc(3)").
We rename the variable to log-bin-trust-function-creators but allow the old name until some future version (and issue a warning if the old name is used).
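For example:

  -- New name (preferred):
  SET GLOBAL log_bin_trust_function_creators = 1;
  -- Old name still accepted for now, but produces a warning:
  SET GLOBAL log_bin_trust_routine_creators = 1;
  SHOW VARIABLES LIKE 'log_bin_trust%';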
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked, though
the function comment clearly said it must be.
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
droping trigger on InnoDB table".
The deadlock occurred when we tried to create two triggers for
the same InnoDB table concurrently and both threads were able to reach
close_cached_table() simultaneously. The bugfix implements a new approach to
table locking and table cache invalidation during creation/dropping
of triggers.
No test case is supplied since the bug was repeatable only under high concurrency.
Some options were declared as 'bool', but since those are being
handled in my_getopt.c, bool can be machine dependent. To make
sure it works in all circumstances, the type should be my_bool
for C (not C++) programs.
A handlerton array is now created instead of using sys_table_types_st. All storage engines can now have inits, and the giant ifdefs for startup are now gone. Not completely clean yet; handlertons will next be merged with sys_table_types. Federated and Archive now have real cleanup if their inits fail.
This bug occurs when a trigger for a table used by a DML statement is created
or changed while the statement is waiting in lock_tables(). In this situation
the prelocking set which we have calculated becomes invalid, which can easily lead
to errors and in some cases even to crashes.
With the proposed patch we no longer silently reopen tables in lock_tables();
instead the caller of lock_tables() becomes responsible for reopening tables and
recalculating the prelocking set.
present): the problem originally was that the tables in auxilliary_tables did not have
the correct real_name, which caused problems in the second call to tables_ok().
The fix corrects the real_name problem, and also sets the updating flag properly,
which makes the second call to tables_ok() unnecessary.
Information_schema DB
Bug#9846 Inappropriate error displayed while
dropping table from 'INFORMATION_SCHEMA'
Bug#10734 Grant of privileges other than 'select' and
'create view' should fail on schema
Bug#10708 SP's can use INFORMATION_SCHEMA as ROUTINE_SCHEMA
Cumulative fix for the bugs above (after review, 2nd version):
added privilege checks for the INFORMATION_SCHEMA database and tables.
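Behaviour covered by the added checks (sketch):

  -- Bug#9846: dropping a table from INFORMATION_SCHEMA now reports a
  -- proper access error instead of an inappropriate one.
  DROP TABLE information_schema.tables;

  -- Bug#10734: grants other than SELECT and CREATE VIEW on the schema
  -- must fail.
  GRANT ALL ON information_schema.* TO 'someuser'@'localhost';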
The idea of the patch is to separate statement processing logic,
such as parsing, validation of the parsed tree, execution and cleanup,
from global query processing logic, such as logging, resetting
priorities of a thread, resetting the stored procedure cache, and resetting
the thread's count of errors and warnings.
This makes PREPARE and EXECUTE behave similarly to the rest of the SQL
statements and allows their use in stored procedures.
This patch contains a change in behaviour:
until recently, for each SQL prepared statement command, two queries
were written to the general log, e.g.
[Query] prepare stmt from @stmt_text;
[Prepare] select * from t1 <-- contents of @stmt_text
The change was necessary to prevent [Prepare] commands from being written
to the general log when executing a stored procedure with Dynamic SQL.
We should consider whether the old behaviour is preferable and possibly
restore it.
This patch refixes Bug#7115, Bug#10975 (partially) and Bug#10605 (various bugs
in Dynamic SQL reported before it was disabled).
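A sketch of Dynamic SQL in a stored procedure, now possible (names invented):

  DELIMITER //
  CREATE PROCEDURE count_rows(IN tbl VARCHAR(64))
  BEGIN
    SET @q = CONCAT('SELECT COUNT(*) FROM ', tbl);
    PREPARE stmt FROM @q;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END//
  DELIMITER ;

  CALL count_rows('t1');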
Renames:
- current_arena to stmt_arena: the thread may have more than one
'current' arena: one for runtime data, and one for the parsed
tree of a statement. Only one of them is active at any moment.
- set_item_arena -> set_query_arena, because Item_arena was renamed to
Query_arena a while ago
- set_n_backup_item_arena -> set_n_backup_active_arena;
the active arena is the arena thd->mem_root and thd->free_list
are currently pointing at.
- restore_backup_item_arena -> restore_active_arena (with the same
rationale)
- change_arena_if_needed -> activate_stmt_arena_if_needed; this
method sets thd->stmt_arena active if it's not done yet.
"Interleaved SPs execution is now binlogged properly, "SELECT spfunc()" is binlogged too.
The known remaining issue is binlogging/replication of "a routine is deleted while it is executed" scenario.
"Process NATURAL and USING joins according to SQL:2003".
* Some of the main problems fixed by the patch:
- in "SELECT *" queries the * is expanded correctly according to
ANSI for arbitrary natural/using joins;
- natural/using joins are correctly transformed into JOIN ... ON
for any number/nesting of the joins;
- column references are correctly resolved against natural joins
of any nesting and combined with arbitrary other joins.
* This patch also contains a fix for name resolution of items
inside the ON condition of JOIN ... ON - in this case items must
be resolved only against the JOIN operands. To support such
'local' name resolution, the patch introduces a stack of
name resolution contexts used at parse time.
NOTICE:
This patch is not complete in the sense that:
- there are 2 test cases that still do not pass -
one in join.test, one in select.test. Both are marked
with a comment "TODO: WL#2486";
- it does not include a new test specific for the task.
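A sketch of the standardized behaviour (tables invented):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, c INT);
  CREATE TABLE t3 (a INT, d INT);

  -- SELECT * expansion: the common column a appears once, first.
  SELECT * FROM t1 NATURAL JOIN t2;

  -- Nested natural/using joins are transformed into JOIN ... ON
  -- internally and may be combined with other joins.
  SELECT * FROM t1 NATURAL JOIN t2 JOIN t3 USING (a);

  -- Inside an ON condition, names resolve only against the operands
  -- of that join.
  SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;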
result set".
To enable full access to the contents of I_S tables from stored functions
or statements that use them, we manipulate the thread's open tables
state and ensure that we won't cause a deadlock when we open tables, by
ignoring flushes and name-locks.
Building the contents of I_S.TABLES no longer requires locking tables,
since we use the handler::info() method with the HA_STATUS_AUTO flag instead
of handler::update_auto_increment() for obtaining information about
auto-increment values. But this also means that handlers have to implement
support for the HA_STATUS_AUTO flag (InnoDB in particular needs it).
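For example, a stored function can now read I_S contents directly (sketch; names invented):

  CREATE FUNCTION table_count(db VARCHAR(64)) RETURNS INT
    READS SQL DATA
    RETURN (SELECT COUNT(*) FROM information_schema.tables
            WHERE table_schema = db);

  SELECT table_count('test');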
When creating a temporary table for UNION, pass the TMP_TABLE_FORCE_MYISAM flag to
create_tmp_table() if we will be using fulltext function(s) when reading from the
temporary table.
Fixed bug #12154: a query returned the error "Column <name> cannot be null".
The problem was due to a bug in the function setup_table_map:
the flag maybe_null was set up incorrectly for the inner tables of
nested outer joins.
join_nested.result, join_nested.test:
Added a test case for bug #12154.
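A sketch of the affected query shape (tables invented): the inner tables of the nested outer join (t2 and t3 here) can be NULL-complemented, so their maybe_null flag must be set.

  SELECT t1.a, t3.d
  FROM t1
    LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a
  WHERE t3.d IS NULL OR t3.d > 0;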
Change bool in C code to my_bool.
Added --enable_parsing and --disable_parsing to mysqltest to avoid having to comment out parts of tests.
Added comparison of LEX_STRINGs and use it to compare file types for view and trigger files.
Fixed a portability problem with bool in C programs.
Moved close_thread_tables out from under the LOCK_thread_count mutex (safety fix).
my_sleep() -> pthread_cond_timedwait()