which does not work. Removing these attempted privileges makes
this identical to option 5, so the option is removed completely. The
spirit of the program appears to be aimed at database privileges, so
no additional option for granting global privileges is added, as it
could be unexpected. Fixes bug#14618 (same as the previous patch, this
time applied to the -maint tree).
When using concurrent insert with parallel index reads, it could
happen that reading sessions found keys that pointed to records not
yet written to the data file. The result was a report of a corrupted
table, but this was a false alarm.
When inserting a record into a table with indexes, the keys are
inserted into the indexes before the record is written to the data
file. When the insert happens concurrently with selects, an
index read can find a key that references a record that has not yet
been written to the data file. To avoid any access to such a record,
the select saves the current end-of-file position when it starts.
Since concurrent inserts are always appended at the end of the data
file, the select can easily ignore any concurrently inserted record.
The problem was that this check was only done for non-exact key
searches (partial key or using >, >=, < or <=).
The fix is to ignore concurrently inserted records for exact key
searches as well.
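A minimal self-contained sketch of the idea (illustrative names, not
the actual MyISAM code): the reader remembers the data-file length
when it starts and skips any key whose record position lies at or
beyond that saved length, i.e. records appended by concurrent inserts
after the read began; after the fix this applies to exact key
searches too.

  #include <cstdint>
  #include <vector>

  struct KeyEntry { uint64_t record_pos; int value; };  // index entry -> record offset

  struct ConcurrentReader {
    uint64_t saved_eof;                                  // data file length at read start
    explicit ConcurrentReader(uint64_t eof) : saved_eof(eof) {}

    // A key is visible only if its record was already in the data file
    // when the read started; applies to exact lookups and range scans.
    bool visible(const KeyEntry &k) const { return k.record_pos < saved_eof; }
  };

  int main() {
    std::vector<KeyEntry> index = { {0, 1}, {128, 2}, {256, 3} };
    ConcurrentReader reader(200);           // record at offset 256 was inserted later
    int seen = 0;
    for (const auto &k : index)
      if (reader.visible(k))
        ++seen;                             // the concurrently inserted record is ignored
    return seen == 2 ? 0 : 1;
  }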
No test case. Concurrent inserts cannot be tested with the test
suite. Test cases are attached to the bug report.
mysqld hasn't been built on AIX with ndb-everything in quite a while.
This allowed a variety of changes to be added that broke the AIX build
for both the GNU and IBM compilers (but the IBM suite in particular).
This changeset lets the build complete on AIX 5.2 for users of both the
GNU and the IBM suites. All good?
INSERT DELAYED on a replication slave was converted to a regular INSERT,
whereas it should try a concurrent INSERT first.
With this patch we try to convert a delayed insert into a concurrent
insert on a replication slave. If that is impossible for some reason, we
fall back to a regular insert.
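A tiny sketch of the decision (illustrative enum and function, not the
server code): on a slave the delayed insert is first downgraded to a
concurrent insert, and only falls back to a regular insert when that is
not possible.

  enum insert_mode { INSERT_CONCURRENT, INSERT_REGULAR };

  // INSERT DELAYED is never executed as such on a replication slave.
  insert_mode slave_insert_mode(bool concurrent_insert_possible) {
    return concurrent_insert_possible ? INSERT_CONCURRENT : INSERT_REGULAR;
  }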
No test case for this fix. I do not see anything indicating this is a
regression; we have behaved this way since Nov 2000.
- Don't call mysql_select() several times for the select that enumerates
a temporary table with the results of the UNION. Making this call for
every subquery execution caused O(#enumerated-rows-in-the-outer-query)
memory allocations.
- Instead, call join->reinit() and join->exec() (sketched below), and
= disable constant table detection for such joins,
= provide special handling for table-less constant subqueries.
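An illustrative sketch of the pattern (types and methods are
placeholders, not the server API): the join over the UNION's temporary
table is set up once, and each subsequent subquery execution only
re-initializes and re-executes it instead of going through
mysql_select() again.

  struct Join {
    bool prepared = false;
    void prepare() { prepared = true; }   // expensive setup, done once
    void reinit()  {}                     // cheap reset of execution state
    void exec()    {}                     // enumerate the temporary table
  };

  void execute_union_result_select(Join &join) {
    if (!join.prepared)
      join.prepare();                     // first execution of the subquery
    else
      join.reinit();                      // later executions: no re-preparation
    join.exec();
  }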
SELECT statement itself returns empty.
As a result of this bug 'SELECT AGGREGATE_FUNCTION(fld) ... GROUP BY'
can return one row instead of an empty result set.
When the GROUP BY clause only has fields of constant tables
(with a single row), the optimizer deletes the group_list.
After that we lose the information about whether the query had a
GROUP BY clause. This information is important because
SELECT min(x) FROM empty_table; and
SELECT min(x) FROM empty_table GROUP BY y; have to return
different results: the first query should return one row,
the second an empty result set.
So we add the 'group_optimized_away' flag to remember the case
when a GROUP BY exists in the query but is removed
by the optimizer, and check this flag in end_send_group().
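A self-contained sketch of the flag's role (illustrative structure, not
the optimizer code): even when group_list is removed, the query must
still be treated as grouped, so an empty input produces an empty result
set instead of one row.

  #include <cstddef>

  struct JoinState {
    bool has_group_list;          // GROUP BY still present after optimization
    bool group_optimized_away;    // set when the optimizer removes group_list
  };

  // Number of rows to send when the input table is empty.
  std::size_t rows_for_empty_input(const JoinState &j) {
    if (j.has_group_list || j.group_optimized_away)
      return 0;                   // grouped query: empty result set
    return 1;                     // plain aggregate, e.g. SELECT MIN(x): one row with NULL
  }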
to "my_config.h". Not to pollute the top directory, and to get more control
over what is included. Made the include path for "libedit" pick up its own
"config.h" first.
Backport of a correction for a Mac OS X build problem: a global variable
that is not initialized is "common" and can't be used in shared libraries
unless special flags are used (bug#26218).
Bug #27417 thd->no_trans_update.stmt lost value inside of SF-exec-stack
Once set, the flag could later be reset inside a stored routine
execution stack.
The reason was that there was no check of whether a new statement had
started at the time of resetting.
The problem affects most binloggable DML queries. Note that multi-update
is covered by the fix for bug@27716, and multi-delete by bug@29136.
Fixed by saving the parent statement's flag of whether the statement
modified a non-transactional table, and unioning (merging) that value
with the one gained in mysql_execute_command.
The thd->no_trans_update members are moved into thd->transaction.`member`;
assertions are added.
Effectively, the following properties now hold.
1. At the end of a substatement, thd->transaction.stmt.modified_non_trans_table
reflects whether such a table was modified by the substatement.
This also respects the requirements of THD::really_abort_on_warning().
2. thd->transaction.stmt.modified_non_trans_table is eventually computed as
the union of the values of all invoked sub-statements.
That fixes this bug#27417.
The computation of thd->transaction.all.modified_non_trans_table is refined
to be based on the stmt value in all cases, including INSERT .. SELECT
statements, which before the patch had an extra issue (bug@28960).
Minor issues are also fixed in mysql_load, mysql_delete, and the binlogging
of INSERT INTO temp_table SELECT.
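A simplified, self-contained sketch of the save-and-merge rule (the
member name follows the commit message; the surrounding code is
illustrative): the parent statement's flag is saved before a
sub-statement runs and then unioned with the sub-statement's result, so
the information can no longer be lost.

  struct StmtTransaction { bool modified_non_trans_table = false; };

  // Illustrative sub-statement: sets the flag if it touches a
  // non-transactional table.
  static void sub_statement(StmtTransaction &stmt, bool touches_non_trans_table) {
    if (touches_non_trans_table)
      stmt.modified_non_trans_table = true;
  }

  void execute_sub_statement(StmtTransaction &stmt, bool touches_non_trans_table) {
    bool saved = stmt.modified_non_trans_table;   // parent statement's value
    stmt.modified_non_trans_table = false;        // start the sub-statement clean

    sub_statement(stmt, touches_non_trans_table);

    stmt.modified_non_trans_table |= saved;       // union with the parent's value
  }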
The supplied test verifies the fix only to a limited extent, mostly through
assertions. The ultimate testing is deferred to bug@13270 and bug@23333.
When InnoDB detects a deadlock it calls ha_rollback_trans() to roll back the
main transaction. But such an action isn't allowed from inside triggers and
functions. When it happens, the 'Explicit or implicit commit' error is thrown
even if there are no commit/rollback statements in the trigger/function. This
confuses users.
Now the convert_error_code_to_mysql() function doesn't call the
ha_rollback_trans() function directly but rather calls the
mark_transaction_to_rollback function and returns an error.
The sp_rcontext::find_handler() now doesn't allow errors to be caught by the
trigger/function error handlers when the thd->is_fatal_sub_stmt_error flag
is set. Procedures are still allowed to catch such errors.
The sp_rcontext::find_handler function now accepts a THD handle as a parameter.
The transaction_rollback_request and the is_fatal_sub_stmt_error flags are
added to the THD class. They are initialized by the THD class constructor.
Now the ha_autocommit_or_rollback function rolls back the main transaction
when not in a sub-statement and thd->transaction_rollback_request
is set.
The THD::restore_sub_statement_state function now resets the
thd->is_fatal_sub_stmt_error flag on exit from a sub-statement.
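A simplified sketch of the new control flow (plain struct instead of
THD; the error value is a placeholder): the deadlock handler only
records a rollback request and marks the error fatal for
sub-statements, and the rollback itself happens at the top level.

  struct Session {
    bool transaction_rollback_request = false;
    bool is_fatal_sub_stmt_error      = false;
    bool in_sub_statement             = false;
  };

  // Where the storage engine reports a deadlock
  // (cf. convert_error_code_to_mysql()).
  int on_deadlock(Session &thd) {
    thd.transaction_rollback_request = true;   // ask for a rollback, do not do it here
    if (thd.in_sub_statement)
      thd.is_fatal_sub_stmt_error = true;      // trigger/function handlers must not catch it
    return -1;                                 // placeholder for the deadlock error code
  }

  // At statement end (cf. ha_autocommit_or_rollback()).
  void end_of_statement(Session &thd) {
    if (!thd.in_sub_statement && thd.transaction_rollback_request) {
      /* roll back the main transaction here */
      thd.transaction_rollback_request = false;
      thd.is_fatal_sub_stmt_error = false;
    }
  }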
An SP with local variables with non-ASCII names crashed the server.
The server replaces SP local variable names with NAME_CONST calls
when putting statements into the binary log. It used the UTF8-encoded
item names as variable names for the replacement inside the NAME_CONST
calls. However, the statement string may be encoded in any
known character set, as chosen by the SET NAMES statement.
The server used the byte length of the UTF8-encoded names to advance
the position in the query string, which led to an array index overrun.
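A small self-contained illustration of the length mismatch (the byte
values are an example, not taken from the bug report): the same
variable name is four bytes in UTF-8 but only two bytes in a single-byte
character set such as cp1251, so advancing by the UTF-8 length walks
past the token and eventually past the end of the statement string.

  #include <iostream>
  #include <string>

  int main() {
    std::string utf8_name   = "\xD0\xB0\xD0\xB1";   // Cyrillic name in UTF-8: 4 bytes
    std::string stmt_cp1251 = "SET \xE0\xE1 = 1";   // same name in cp1251: 2 bytes

    std::size_t wrong_advance = utf8_name.size();   // 4, based on the UTF-8 name
    std::size_t real_length   = 2;                  // bytes the name occupies in the statement

    std::cout << "advance used: " << wrong_advance
              << ", bytes actually present: " << real_length << "\n";
    // Repeatedly advancing by the larger value overruns the statement buffer.
  }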
information schema table.
The get_schema_views_record() function fills records in the VIEWS table of
the information schema with data about the given views. Among other info,
the is_updatable flag is set. But the check of whether the view is updatable
did not cover all cases and thus sometimes provided wrong info.
This could confuse users.
Now the get_schema_views_record() function additionally calls the
view->can_be_merge() function to find out whether the view can be updated or
not.
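A sketch of the added condition (illustrative struct; can_be_merge() is
the method named above): the is_updatable column is set to YES only when
the view is both declared updatable and can be processed with the merge
algorithm.

  struct View {
    bool declared_updatable;                 // updatable according to its definition
    bool mergeable;                          // can be processed with the merge algorithm
    bool can_be_merge() const { return mergeable; }
  };

  const char *is_updatable_value(const View &view) {
    return (view.declared_updatable && view.can_be_merge()) ? "YES" : "NO";
  }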
The subst_spvars function is used to create a query string with SP variables
substituted with their values. This string is used later for the binary log
and for the query cache. The problem is that the
query_cache_send_result_to_client function requires some additional space
after the query to store the database name and query cache flags. This
space wasn't reserved by the subst_spvars function, which led to memory
corruption and a crash.
Now the subst_spvars function reserves additional space for the query cache.
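A minimal sketch of the allocation rule (sizes and names are
illustrative): the buffer holding the expanded query must be sized for
the statement text plus the trailing bytes the query cache stores after
it (database name and flags).

  #include <string>

  std::string alloc_query_buffer(const std::string &expanded_query,
                                 std::size_t db_name_len,
                                 std::size_t query_cache_flags_size) {
    std::string buf;
    // Reserve room for the statement plus the bytes the query cache appends.
    buf.reserve(expanded_query.size() + 1 + db_name_len + query_cache_flags_size);
    buf.assign(expanded_query);
    return buf;
  }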