privileges
This problem is specific to 4.1. It does not affect 4.0, and it was already
fixed in 5.x.
Any MySQL user who is allowed to issue a multi-table UPDATE statement and who
has any column/table grants can update any table on the server (the mysql
grant tables are no exception).
check_grant() accepts the number of tables (from the table list) to be checked
as its 5th parameter. When checking grants for a multi-table UPDATE, the number
of tables must be 1. It must never be 0 (in 5.x there is a
DBUG_ASSERT(number > 0) in the check_grant() function).
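As an illustration only (hypothetical types and names, not the real sql_acl.cc
code; check_grant()'s exact signature is not reproduced here), the point is
that the grant check must always be asked to verify at least one table:

  #include <cassert>
  #include <cstddef>

  /* Stand-in for a table-list element. */
  struct TableRef { const char *db; const char *name; TableRef *next; };

  /* Hypothetical stand-in for check_grant(): walks 'number' tables and
     verifies each one.  The assert mirrors the DBUG_ASSERT(number > 0)
     present in 5.x; passing 0 would silently skip all per-table checks. */
  bool check_grant_sketch(const TableRef *tables, std::size_t number)
  {
    assert(number > 0);                 /* never check "zero" tables */
    for (std::size_t i= 0; i < number && tables; i++, tables= tables->next)
    {
      /* ... per-table / per-column privilege lookup would go here ... */
    }
    return false;                       /* false == access granted */
  }

For a multi-table UPDATE the caller must therefore pass 1, not 0, as the table
count.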
Before this fix,
- a runtime error in a statement in a stored procedure with no error handlers
was properly detected (as expected)
- a runtime error in a statement with an error handler inherited from a
non-local runtime context (i.e., proc a with a handler, calling proc b) was
properly detected (as expected)
- a runtime error in a statement with a *local* error handler was executed
as follows:
a) the statement would succeed, regardless of the error condition (bug),
b) the error handler would be called (as expected).
The root cause is that functions like my_message_sql would "forget" to set
the thread flag thd->net.report_error to 1, because of the check involving
sp_rcontext::found_handler_here().
Failure to set this flag would later cause, further up the call stack,
the code in Item_func::fix_fields() at line 190 to return FALSE and consider
that executing the statement was successful.
With this fix:
- error handling code, which was duplicated in different places in the code,
is now implemented in sp_rcontext::handle_error(),
- handle_error() correctly sets thd->net.report_error when a handler is
present, regardless of the handler location (local, or in the call stack);
a sketch of this flow is given after this list.
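A minimal sketch of the intended control flow (hypothetical, heavily simplified
types; the real sp_rcontext interface and the thd->net.report_error member are
richer than shown here):

  /* Simplified stand-in for THD: only the flag that used to be left
     unset when a *local* handler was found. */
  struct THD { int report_error; };

  struct sp_rcontext_sketch
  {
    bool has_local_handler;     /* handler declared in the current frame */
    bool has_outer_handler;     /* handler inherited from a caller       */

    /* Single place that decides whether a handler will run.  In either
       case the error flag must be raised so the failing statement is not
       treated as having succeeded. */
    bool handle_error(THD *thd)
    {
      if (has_local_handler || has_outer_handler)
      {
        thd->report_error= 1;   /* previously forgotten for local handlers */
        return true;            /* a handler will take over */
      }
      return false;             /* no handler: the error propagates */
    }
  };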
A test case, bug8153_subselect, has been written to demonstrate the change
of behavior before and after the fix.
Another test case, bug8153_function_a, has also been written.
This test has the same behavior before and after the fix.
It has been written to demonstrate that the previously expected
result of procedure bug18787 was incorrect, since select no_such_function()
should fail and therefore not produce a result.
The incorrect result for bug18787 has the same root cause as Bug#8153,
and the expected result has been adjusted.
columns
Fixed confusing warning.
Quoting the INSERT section of the manual:
----
Inserting NULL into a column that has been declared NOT NULL. For
multiple-row INSERT statements or INSERT INTO ... SELECT statements, the
column is set to the implicit default value for the column data type. This
is 0 for numeric types, the empty string ('') for string types, and the
"zero" value for date and time types. INSERT INTO ... SELECT statements are
handled the same way as multiple-row inserts because the server does not
examine the result set from the SELECT to see whether it returns a single
row. (For a single-row INSERT, no warning occurs when NULL is inserted into
a NOT NULL column. Instead, the statement fails with an error.)
----
This is also true for LOAD DATA INFILE. For INSERT, the user can specify the
DEFAULT keyword as a value to set a column to its default. There is no similar
feature available for LOAD DATA INFILE.
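A rough model of the behaviour described above (hypothetical helper, not server
code): a NULL destined for a NOT NULL column is replaced by the type's implicit
default, with a warning, in multi-row INSERT, INSERT ... SELECT and LOAD DATA
contexts, while a single-row INSERT fails with an error:

  #include <optional>
  #include <string>

  enum class LoadContext { SINGLE_ROW_INSERT, MULTI_ROW_OR_LOAD };
  enum class Outcome { STORED, STORED_WITH_WARNING, ERROR };

  /* 'implicit_default' would be 0 for numeric types, '' for string types
     and the "zero" value for date and time types. */
  Outcome store_not_null(const std::optional<std::string> &value,
                         const std::string &implicit_default,
                         LoadContext ctx,
                         std::string *out)
  {
    if (value)
    {
      *out= *value;
      return Outcome::STORED;
    }
    if (ctx == LoadContext::SINGLE_ROW_INSERT)
      return Outcome::ERROR;              /* statement fails with an error */
    *out= implicit_default;               /* set column to implicit default */
    return Outcome::STORED_WITH_WARNING;  /* and warn about the conversion */
  }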
The problem was that the grammar allowed creating a function with an optional
definer clause and defining it as a UDF with the SONAME keyword.
Such a combination should be reported as an error.
The solution is not to change the grammar itself, but to introduce a
specific check for UDFs in the yacc actions in 'create_function_tail',
which now reports ER_WRONG_USAGE when both DEFINER and SONAME are used.
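The check itself amounts to something like the following (illustrative only;
hypothetical flags and function name, the real check lives in the yacc action
for create_function_tail):

  /* Reject a UDF body (SONAME) combined with an explicit DEFINER clause. */
  bool validate_create_function(bool has_definer_clause, bool is_udf_soname)
  {
    if (is_udf_soname && has_definer_clause)
    {
      /* report ER_WRONG_USAGE for DEFINER together with SONAME */
      return false;
    }
    return true;
  }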
When executing ALTER VIEW, all the attributes of the view were overwritten.
This is contrary to the user's expectations.
Some of the view attributes are now preserved: namely security and
algorithm. This means that if they are not specified in ALTER VIEW,
their values are preserved from CREATE VIEW instead of being reset to defaults.
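A small sketch of the preservation rule (hypothetical types; only the algorithm
and security attributes are modelled):

  #include <optional>
  #include <string>

  struct ViewAttrs { std::string algorithm; std::string security; };

  /* Attributes not given in ALTER VIEW keep the value recorded by
     CREATE VIEW instead of being reset to defaults. */
  ViewAttrs merge_view_attrs(const ViewAttrs &existing,
                             const std::optional<std::string> &new_algorithm,
                             const std::optional<std::string> &new_security)
  {
    ViewAttrs result= existing;            /* start from stored attributes */
    if (new_algorithm) result.algorithm= *new_algorithm;
    if (new_security)  result.security= *new_security;
    return result;
  }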
difference between timestamp in values of months and quarters.)
Problem: when requesting a timestamp difference in months or quarters, only
the date (and not the time) was examined for the comparison.
Solution: increased the precision of the comparison.
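A hedged sketch of the refined comparison (hypothetical helper, assuming
beg <= end and ignoring the month-end edge cases the server handles
separately): the whole-month count is decremented when the day and time of day
of the end value have not yet reached those of the start value:

  #include <cstdint>

  struct DateTime { int year, month, day, hour, minute, second; };

  /* Quarters would simply be months_between(...) / 3. */
  int64_t months_between(const DateTime &beg, const DateTime &end)
  {
    int64_t months= (end.year - beg.year) * 12 + (end.month - beg.month);

    /* Compare day-of-month and time-of-day with second precision; before
       the fix only the date part was taken into account. */
    long beg_tod= beg.day * 86400L + beg.hour * 3600 + beg.minute * 60
                  + beg.second;
    long end_tod= end.day * 86400L + end.hour * 3600 + end.minute * 60
                  + end.second;
    if (months > 0 && end_tod < beg_tod)
      months--;                            /* last month not fully elapsed */
    return months;
  }

With this sketch, '2006-01-10 14:30:28' to '2006-02-10 14:30:27' yields
0 months, not 1.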
'conc_sys' test
Concurrent execution of a SELECT involving at least two INFORMATION_SCHEMA
tables, a DROP DATABASE statement and a DROP TABLE statement could
result in a stalled connection for the SELECT statement.
The problem was that for the first query of the join there was a race
between the select from I_S.TABLES and DROP DATABASE: the error (no
such database) was prepared to be sent to the client, but the join
processing was continued. On the second query, to I_S.COLUMNS, there was a
race with DROP TABLE, but that error (no such table) was downgraded to
a warning, and thd->net.report_error was reset. As a result, neither a result
set nor an error was sent to the client.
The solution is to stop join processing once it is clear we are going
to report an error, and also to downgrade file system errors such as
'no such database' to warnings (unless we are executing a SHOW command),
because I_S is designed not to use locks and a query against I_S should not
abort if something is dropped in the middle.
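A rough sketch of both parts of the fix (hypothetical names; the real logic is
spread across the join code and the I_S table-filling code):

  /* Part 1: once an error has been scheduled for the client, stop
     processing further tables of the join instead of continuing (and
     possibly having the flag cleared later). */
  struct THD_sketch { bool report_error; };

  bool continue_join_step(THD_sketch *thd)
  {
    if (thd->report_error)
      return false;             /* abort: an error will be sent */
    /* ... process the next table of the join ... */
    return true;
  }

  /* Part 2: while filling an I_S table, "no such database/table" is only
     a hard error for plain SHOW commands; an I_S SELECT runs without
     locks, so objects dropped mid-query merely produce warnings. */
  enum class StmtKind { SHOW_COMMAND, INFORMATION_SCHEMA_SELECT };

  bool is_hard_error(StmtKind kind)
  {
    return kind == StmtKind::SHOW_COMMAND;
  }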
No test case is provided since this bug is the result of a race and is
timing dependent. But we do test that plain SHOW TABLES and SHOW COLUMNS
give an error if there is no such database or table, respectively.
can be not replicable.
Currently, CREATE statements written to the binlog are constructed as follows:
- the beginning of the statement is re-created;
- the rest of the statement is copied from the original query.
The problem appears when a version-specific comment (produced by
mysqldump) starts in the re-created part of the statement and is closed in the
copied part: the closing comment mark is present, but the opening one is not.
The proper fix would be to re-create the whole original statement, but we
cannot implement that in 5.0. So, for 5.0, the fix is simply to cut off the
closing comment mark. This technique is also used for the SHOW CREATE PROCEDURE
statement (so we are able to reuse existing code).
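The workaround amounts to the following kind of string surgery (hypothetical
helper, not the actual function name): when the copied tail of the query ends
with a closing comment mark whose opening '/*!xxxxx' was lost in the re-created
part, the closing mark is simply cut off:

  #include <string>

  std::string cut_trailing_comment_close(std::string stmt)
  {
    /* Trim trailing whitespace first. */
    while (!stmt.empty() && (stmt.back() == ' ' || stmt.back() == '\n'))
      stmt.pop_back();
    /* Drop an unmatched closing comment mark at the very end. */
    if (stmt.size() >= 2 && stmt.compare(stmt.size() - 2, 2, "*/") == 0)
      stmt.erase(stmt.size() - 2);
    return stmt;
  }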