The STACK_MIN_SIZE constant is currently set to 8192, when we actually
need (empirically discovered) 9236 bytes to raise a fatal error, on
Ubuntu Dapper Drake, libc6 2.3.6-0ubuntu2, Linux kernel 2.6.15-27-686,
on x86. I'm taking that as the new lower bound, plus 100 bytes of
wiggle room for sundry word sizes and stack behaviors.
The added test verifies in a cross-platform way that there are no gaps
between the space that we think we need and what we actually need to report
an error.
DOCUMENTERS: This also adds "let" to the mysqltest commands that evaluate
their argument to expand variables therein (only to the right of the "=",
of course).
Presence of a subquery in the ON expression of a join
should not block merging the view that contains this join.
Before this patch such views were converted into temporary-table views.
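For illustration, a minimal sketch of the kind of view affected (table,
column, and view names are hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE TABLE t3 (c INT);
  -- The ON expression contains a subquery. Before the patch a view
  -- containing such a join was needlessly converted into a
  -- temporary-table view instead of being merged.
  CREATE VIEW v1 AS
    SELECT t1.a, t2.b
    FROM t1 JOIN t2
      ON t1.a = t2.b AND t1.a IN (SELECT c FROM t3);
  SELECT * FROM v1;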
Fix for the bug where an UPDATE that uses a key and invokes a trigger
that modifies a field in this key does not stop (version for 5.0 only).
An UPDATE statement whose WHERE clause used a key, and which invoked a
trigger that modified a field in this key, ran indefinitely.
This problem occurred because, when an UPDATE statement was executed in
update-on-the-fly mode (in which a row is updated right during evaluation
of the select for the WHERE clause), the new version of the row became
visible to the select representing the WHERE clause and was updated
again and again.
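A minimal sketch of the scenario (table and trigger names are
hypothetical):

  CREATE TABLE t1 (k INT, d INT, KEY (k));
  INSERT INTO t1 VALUES (1, 0), (2, 0);
  -- The trigger modifies the very field over which the UPDATE scans.
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW SET NEW.k = NEW.k + 1;
  -- Updated rows became visible to the ongoing key scan again and
  -- again, so before the fix this statement never terminated.
  UPDATE t1 SET d = d + 1 WHERE k > 0;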
We already solve this problem for UPDATE statements that do not invoke
triggers by detecting that we are going to update a field in the key
used for scanning, and then performing the update in two steps: during
the first step we gather information about the rows to be updated, and
only then do we perform the actual updates. We also do this for
MULTI-UPDATE, and in its case we even detect the situation when such
fields are updated in triggers (actually we simply assume that we
always update fields used in the key if there is a BEFORE UPDATE
trigger).
The fix simply extends this check, which is done in the
check_if_key_used()/QUICK_SELECT_I::check_if_keys_used() routine/method,
in such a way that it also detects cases when a field used in the key is
updated in a trigger. As a nice side effect we get a more precise, and
thus more optimal performance-wise, check for MULTI-UPDATE.
Also, check_if_key_used()/QUICK_SELECT_I::check_if_keys_used() were
renamed to is_key_used()/QUICK_SELECT_I::is_keys_used() to better
reflect that they are boolean predicates.
Note that this check is implemented in a much more elegant way in 5.1.
The problem was that if after FLUSH TABLES WITH READ LOCK the user
issued DROP/ALTER PROCEDURE/FUNCTION the operation would fail (as
expected), but after UNLOCK TABLE any attempt to execute the same
operation would lead to the error 1305 "PROCEDURE/FUNCTION does not
exist", and an attempt to execute any stored function would also fail.
This happened because under FLUSH TABLES WITH READ LOCK we couldn't open
and lock the mysql.proc table for update, and this fact was erroneously
remembered by setting mysql_proc_table_exists to false, so subsequent
statements believed that mysql.proc did not exist, and thus that there
were no functions or procedures in the database.
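A sketch of the reproduction, with a hypothetical procedure name:

  CREATE PROCEDURE p1() SELECT 1;
  FLUSH TABLES WITH READ LOCK;
  DROP PROCEDURE p1;   -- fails as expected under the global read lock
  UNLOCK TABLES;
  -- Before the fix the failed open of mysql.proc had been remembered
  -- as "mysql.proc does not exist", so this reported error 1305
  -- instead of dropping the procedure:
  DROP PROCEDURE p1;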
As a solution, we remove the mysql_proc_table_exists flag completely.
The reason is that this optimization didn't work most of the time
anyway. Even if the open of mysql.proc failed for some reason when we
were trying to call a function or a procedure, we set
mysql_proc_table_exists back to true to force a table reopen for the
sake of producing the same error message (the open can fail for a number
of reasons). The alternative would have been to remember the reason why
the open failed, but that's a lot of code for an optimization of a rare
case. Hence we simply remove this optimization.
Fix for error 1003 "Incorrect table name": in a multi-table DELETE the
list of tables to delete from actually references the tables of the FROM
list by alias, e.g.:
DELETE alias_of_t1 FROM t1 alias_of_t1 WHERE ....
is a valid statement.
So we must turn off the syntactic validity check of the table name for
alias_of_t1, because it is an alias and not a table name (even if it
looks like one). In order to do that we add a special flag
(TL_OPTION_ALIAS) to disable the name checking for the aliases in
multi-table DELETE.
User names and host names have limits on their length, and the server
code relies on these limits when storing the names. The problem was that
sometimes these limits were not checked properly, which could lead to a
buffer overflow. The fix is to check the length of the user/host name in
the parser and, if the string is too long, raise an error.
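For illustration, a sketch assuming the 5.0-era limits (16 characters
for a user name, 60 for a host name):

  -- The parser now rejects the over-long user name with an error
  -- instead of letting it overflow a buffer deeper in the server:
  GRANT SELECT ON db1.* TO 'user_name_longer_than_sixteen'@'localhost';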
Fix the case when __attribute__() is stubbed out, add ATTRIBUTE_FORMAT()
for specifying __attribute__((format(...))) safely, make more use of the
format attribute, and fix some of the warnings that this turns up (plus
a bonus unrelated one).
The SELECT privilege instead of the INSERT privilege was required for an
insert into a view. This wrong behaviour appeared after the fix for bug
#20989, whose intention was to require only the SELECT privilege for all
tables except the very first one of a complex INSERT query. But that
patch did it in the wrong way and led to requiring the wrong access
right for an insert into a view.
The setup_tables_and_check_access() function now accepts two want_access
parameters: one is used for the first table, the other for the remaining
tables.
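A sketch of the intended behaviour (database, view, and user names are
hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  GRANT INSERT ON db1.v1 TO 'u1'@'localhost';  -- INSERT only, no SELECT
  -- Executed as u1: before this fix the statement was wrongly rejected
  -- because the server demanded the SELECT privilege on v1:
  INSERT INTO v1 VALUES (1);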
Fix for a race observed in the 'conc_sys' test:
concurrent execution of a SELECT involving at least two
INFORMATION_SCHEMA tables, a DROP DATABASE statement, and a DROP TABLE
statement could result in a stalled connection for the SELECT statement.
The problem was that for the first query of the join there was a race
between the select from I_S.TABLES and DROP DATABASE: the error (no such
database) was prepared to be sent to the client, but join processing
continued. On the second query, to I_S.COLUMNS, there was a race with
DROP TABLE, but this error (no such table) was downgraded to a warning,
and thd->net.report_error was reset. As a result, neither a result set
nor an error was sent to the client.
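For illustration, a hedged sketch of the affected query shape (the
schema name is hypothetical):

  SELECT t.TABLE_NAME, c.COLUMN_NAME
  FROM INFORMATION_SCHEMA.TABLES t
  JOIN INFORMATION_SCHEMA.COLUMNS c
    ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
  WHERE t.TABLE_SCHEMA = 'db1';
  -- racing against concurrent DROP DATABASE db1 / DROP TABLE statements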
The solution is to stop join processing once it is clear we are going to
report an error, and also to downgrade file-system errors like 'no such
database' to warnings (unless we are in a 'SHOW' command), because I_S
is designed not to use locks, and a query to I_S should not abort if
something is dropped in the middle.
No test case is provided, since this bug is the result of a race and is
timing-dependent. But we do test that plain SHOW TABLES and SHOW COLUMNS
give an error if there is no such database or table, respectively.
Fix for BUG#16676: Database CHARSET not used for stored procedures
The problem in BUG#16211 is that the CHARSET clause of the return type
of a stored function was simply ignored.
The problem in BUG#16676 is that if a character set was not explicitly
specified for an sp-variable, the server character set was used instead
of the database one.
The fix has two parts:
- always store the CHARSET clause of the return type along with the
type definition in the mysql.proc.returns column. "Always" means that
the CHARSET clause is appended even if it has not been explicitly
specified in the CREATE FUNCTION statement (this affects BUG#16211
only). Storing the CHARSET clause even when it is not specified is
essential to avoid changing the character set if the database character
set is altered in the future.
NOTE: this change is not backward compatible with previous releases.
- use the database default character set if the CHARSET clause is not
explicitly specified, as in the sketch after this list (this affects
both BUG#16211 and BUG#16676).
NOTE: this also breaks backward compatibility.
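A minimal sketch of the second part (database and function names are
hypothetical; DELIMITER juggling as in the mysql client):

  CREATE DATABASE d1 CHARACTER SET utf8;
  USE d1;
  DELIMITER //
  -- No explicit CHARACTER SET on the return type or the variable:
  -- both now default to the database character set (utf8 here), and
  -- the resolved CHARSET clause is stored in mysql.proc.returns.
  CREATE FUNCTION f1() RETURNS CHAR(10)
  BEGIN
    DECLARE v CHAR(10) DEFAULT 'abc';
    RETURN v;
  END//
  DELIMITER ;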
NDB table".
SQL-layer was not marking fields which were used in triggers as such. As
result these fields were not always properly retrieved/stored by handler
layer. So one might got wrong values or lost changes in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
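A sketch of how the bug could show up (table and trigger names are
hypothetical; the NDB engine is assumed to be available):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=NDBCLUSTER;
  INSERT INTO t1 VALUES (1, 0);
  -- Column b is used only inside the trigger, not in the UPDATE below.
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW SET NEW.b = NEW.b + 1;
  -- Before the fix b was not marked as used, so the handler could skip
  -- reading/writing it and the trigger's change was lost:
  UPDATE t1 SET a = a + 10 WHERE a = 1;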
Also this patch contains the following cleanup of ha_ndbcluster code:
we no longer rely on reading the LEX::sql_command value in the handler
in order to determine whether we can enable the optimization that allows
us to handle REPLACE statements more efficiently, by doing replaces
directly in the write_row() method without reporting an error to the
SQL layer.
Instead we rely on the SQL layer informing us whether this optimization
is applicable, by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases when it should
not be used (e.g. if the table has DELETE triggers), and we use it in
some additional cases where it is applicable (e.g. for LOAD DATA
REPLACE).
Finally, this patch includes a fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem caused by improper field mark-up. During
row replacement, fields that weren't explicitly used in the REPLACE
statement were not marked as fields to be saved (updated), so they
retained the values from the old row version. The fix is to mark all
table fields as set for REPLACE statements. Note that in 5.1 we already
solve this problem by notifying the handler that it should save values
from all fields only in the case when a real replacement happens.
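A sketch of the bug#20728 symptom (table and column names are
hypothetical):

  CREATE TABLE t1 (pk INT PRIMARY KEY, u INT UNIQUE, d INT)
    ENGINE=NDBCLUSTER;
  INSERT INTO t1 VALUES (1, 10, 100);
  -- Column d is not mentioned in the replacing row. Before the fix d
  -- was not marked to be saved and kept the old value 100; with all
  -- fields marked as set, the new row gets d = NULL as expected:
  REPLACE INTO t1 (pk, u) VALUES (1, 10);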
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
Before every operation that requires a write lock, we need to wait for
the release of the global read lock if one is set. But we must not wait
if we have locked tables with LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
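For illustration, a sketch of the waiting rule across two connections
(the table t1 is hypothetical):

  -- connection 1
  FLUSH TABLES WITH READ LOCK;  -- sets the global read lock
  -- connection 2
  INSERT INTO t1 VALUES (1);    -- needs a write lock, so it waits
  -- connection 1
  UNLOCK TABLES;                -- signals the waiters; the INSERT proceeds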
Fix for a privilege check involving multiple schemas:
the function check_one_table_access(), called to check access to tables
in SELECT/INSERT/UPDATE, was doing additional checks/modifications that
don't hold in the context of setup_tables_and_check_access().
That's why check_one_table_access() was split in two: the functionality
needed by setup_tables_and_check_access() moved into the new
check_single_table_access(), and the rest of the functionality stays in
check_one_table_access(), which is made to call the new
check_single_table_access() function.
The check for view security was lacking on several points:
1. Checking with the right set of permissions: for each table reference
that participates in a view, the right credentials to use were in its
security_ctx member, but these weren't used for checking the
credentials. This made it hard to enforce the SQL SECURITY
DEFINER|INVOKER property consistently.
2. Because of the above, the security checking for views was simply
ruled out explicitly in several places.
3. Security was checked only for the columns of the tables that are
brought into the query from a view. So if there was no column reference
outside of the view definition, the lack of access to the view's tables
in SQL SECURITY INVOKER mode went undetected (see the sketch below).
The fix below addresses these three points.
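A minimal sketch of point 3 (database, user, and object names are
hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE SQL SECURITY INVOKER VIEW v1 AS SELECT a FROM t1;
  GRANT SELECT ON db1.v1 TO 'u1'@'localhost';  -- no rights on t1 itself
  -- Executed as u1: no column of t1 is referenced outside the view
  -- definition, yet the invoker's access to t1 must still be checked;
  -- before the fix this SELECT was wrongly allowed:
  SELECT * FROM v1;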