The problem was that some facilities (like the CONVERT_TZ() function or
the server-side HELP statement) may require implicit access to some
tables in the 'mysql' database. This access was done by the ordinary
means of adding such tables to the list of tables the query is going to
open. However, if LOCK TABLES had been issued before that, opening such
implicit tables would fail with a "table was not locked" error.
The solution is to treat certain tables as MySQL system tables, as we
already do for mysql.proc. Such tables may be opened for reading at any
moment, regardless of any locks in effect. The cost of this is that a
system table may be locked for writing only together with other system
tables; it is disallowed to lock a system table for writing while
holding any other lock on any other table.
After this patch the following tables are treated as MySQL system
tables:
mysql.help_category
mysql.help_keyword
mysql.help_relation
mysql.help_topic
mysql.proc (it already was)
mysql.time_zone
mysql.time_zone_leap_second
mysql.time_zone_name
mysql.time_zone_transition
mysql.time_zone_transition_type
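For illustration only (this is not the server code; the LockRequest type
and the function names are invented), the system-table set and the
write-lock restriction above can be sketched like this:

  #include <set>
  #include <string>
  #include <vector>

  // A table counts as a "MySQL system table" if it is one of the mysql.*
  // tables listed in this patch.
  static bool is_system_table(const std::string &db, const std::string &name)
  {
    static const std::set<std::string> system_tables = {
        "help_category", "help_keyword", "help_relation", "help_topic",
        "proc", "time_zone", "time_zone_leap_second", "time_zone_name",
        "time_zone_transition", "time_zone_transition_type"};
    return db == "mysql" && system_tables.count(name) != 0;
  }

  struct LockRequest { std::string db, table; bool for_write; };

  // Returns false if the requested lock list combines a write lock on a
  // system table with any lock on a non-system table.
  static bool lock_list_is_allowed(const std::vector<LockRequest> &locks)
  {
    bool system_write = false, non_system = false;
    for (const LockRequest &l : locks)
    {
      if (is_system_table(l.db, l.table))
        system_write = system_write || l.for_write;
      else
        non_system = true;
    }
    return !(system_write && non_system);
  }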
These tables are now opened with open_system_tables_for_read() and
closed with close_system_tables(), or a single table may be opened with
open_system_table_for_update() and closed with close_thread_tables()
(the latter is used for the mysql.proc table, which is updated as part
of normal MySQL server operation). These functions may be used even when
some tables have already been opened and locked.
NOTE: online update of the time zone tables is not possible during
replication, because the time zone cache is flushed neither on LOCK
TABLES nor on FLUSH TABLES, so the master may serve stale time zone
data from its cache, while the slave will load the updated data from the
time zone tables.
With this patch, statements that change metadata (in the mysql database)
are logged as statements, while normal changes (e.g., using INSERT, DELETE,
and/or UPDATE) are logged according to the binlog format in effect.
The log tables (i.e., general_log and slow_log) are not replicated at all.
With this patch, the following statements are replicated as statements:
GRANT, REVOKE (ALL), CREATE USER, DROP USER, and RENAME USER.
index_read(), index_read_idx(), index_read_last(), and
records_in_range() now take a 'ulonglong keypart_map' argument instead
of 'uint keylen': a bitmap showing which keyparts are present in the key
value.
A fallback method is provided for handlers that have not yet been
updated to the new interface.
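As a rough sketch of what such a fallback has to do (illustrative
standalone code, not MySQL source; the function name and key part widths
are made up), a contiguous-prefix keypart_map can be translated back
into the old-style key length:

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Translate a contiguous-prefix keypart_map (bit N set = keypart N present)
  // into the total key length, given the stored width of each key part.
  static unsigned key_length_from_map(const std::vector<unsigned> &keypart_lengths,
                                      uint64_t keypart_map)
  {
    unsigned len = 0;
    for (std::size_t i = 0; i < keypart_lengths.size() && (keypart_map & 1); ++i)
    {
      len += keypart_lengths[i];
      keypart_map >>= 1;
    }
    return len;
  }

  int main()
  {
    // Hypothetical index on (a INT, b INT, c CHAR(10)): stored widths 4, 4, 10.
    std::vector<unsigned> parts = {4, 4, 10};
    // keypart_map = 0x3 (binary 011): the key value contains the first two parts.
    std::printf("keylen=%u\n", key_length_from_map(parts, 0x3)); // prints keylen=8
    return 0;
  }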
Removed a lot of compiler warnings
Removed unused variables, functions and labels
Initialized some variables that could be used uninitialized (fatal bugs)
%ll -> %l
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed incorrect assignments to bool
- Fixed incorrect calls with bool arguments
- Added missing argument to store(longlong), which caused the wrong store method to be called (illustrated below).
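The store() item is an overload-resolution trap; a minimal standalone
example (not the server's Field class, only the same shape of overloads)
shows how omitting the second argument silently selects a different
method:

  #include <cstdio>

  struct Field
  {
    void store(long long nr, bool is_unsigned)
    { std::printf("store(longlong=%lld, unsigned=%d)\n", nr, (int) is_unsigned); }
    void store(double nr)
    { std::printf("store(double=%f)\n", nr); }
  };

  int main()
  {
    Field f;
    f.store(42LL, false);  // intended overload
    f.store(42LL);         // missing argument: resolves to store(double)
    return 0;
  }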
- CREATE PROCEDURE stored the database name based on the query context
instead of the 'current database' as set by 'USE', which is what the
manual describes. Based on this behavior, the bug reporter interpreted
the statement filtering as a bug for DROP PROCEDURE.
- Removed the code which changes the db context.
- Added code to check that a valid db was supplied.
Use lazy initialization for the Query_tables_list::sroutines hash.
This should significantly decrease the amount of memory consumed
by stored routines, as we will no longer allocate the chunk of memory
required for this HASH for each statement in a routine.
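A minimal standalone sketch of the lazy-initialization idea (invented
class and member names, not the actual Query_tables_list code):

  #include <memory>
  #include <string>
  #include <unordered_set>

  // Backing storage for the set of routines is allocated only when a routine
  // is actually added, so statements that use no stored routines pay nothing.
  class SroutineSet
  {
    std::unique_ptr<std::unordered_set<std::string>> hash_;

  public:
    void add_routine(const std::string &qualified_name)
    {
      if (!hash_)  // lazy allocation on first use
        hash_.reset(new std::unordered_set<std::string>());
      hash_->insert(qualified_name);
    }

    bool uses_routine(const std::string &qualified_name) const
    {
      return hash_ && hash_->count(qualified_name) != 0;
    }
  };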
(race condition)
It was possible for one thread to interrupt a Data Definition Language
statement and thereby get messages to the binlog out of order. Consider:
Connection 1: Drop Foo x
Connection 2: Create or replace Foo x
Connection 2: Log "Create or replace Foo x"
Connection 1: Log "Drop Foo x"
The local end would have Foo x, but the replicated slaves would not.
The fix is to wrap each DDL operation and its logging in the same mutex.
Since we already use mutexes for the various parts of altering the server,
this only entails moving the logging calls down close to the action, inside
the mutex protection.
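A self-contained sketch of the idea, with invented names rather than
server code: the action and the write of its log entry happen under one
mutex, so another connection cannot interleave its own action and log
record between them:

  #include <algorithm>
  #include <mutex>
  #include <string>
  #include <vector>

  std::mutex ddl_mutex;             // protects both the DDL action and its log entry
  std::vector<std::string> binlog;  // stands in for the binary log

  void drop_object(std::vector<std::string> &objects, const std::string &name)
  {
    std::lock_guard<std::mutex> guard(ddl_mutex);
    objects.erase(std::remove(objects.begin(), objects.end(), name), objects.end());
    binlog.push_back("DROP " + name);    // logged before the mutex is released
  }

  void create_object(std::vector<std::string> &objects, const std::string &name)
  {
    std::lock_guard<std::mutex> guard(ddl_mutex);
    objects.push_back(name);
    binlog.push_back("CREATE " + name);  // same: action and log cannot be split
  }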
There was a possible stack overrun in an edge case that handles an invalid
stored procedure body in mysql.proc. That should only happen when mysql.proc
has been changed manually. However, due to bug 21513, it can be exploited
without having access to mysql.proc, merely by being able to create a
stored routine.
The problem was that if, after FLUSH TABLES WITH READ LOCK, the user
issued DROP/ALTER PROCEDURE/FUNCTION, the operation would fail (as
expected), but after UNLOCK TABLES any attempt to execute the same
operation would lead to error 1305 "PROCEDURE/FUNCTION does not
exist", and an attempt to execute any stored function would also fail.
This happened because under FLUSH TABLES WITH READ LOCK we couldn't open
and lock the mysql.proc table for update, and this fact was erroneously
remembered by setting mysql_proc_table_exists to false, so subsequent
statements believed that mysql.proc didn't exist, and thus that there
were no functions or procedures in the database.
As a solution, we remove the mysql_proc_table_exists flag completely. The
reason is that this optimization didn't work most of the time anyway.
Even if opening mysql.proc failed for some reason when we were trying to
call a function or a procedure, we set mysql_proc_table_exists
back to true to force a table reopen for the sake of producing the same
error message (the open can fail for a number of reasons). An alternative
would have been to remember the reason why the open failed, but that's a
lot of code for optimizing a rare case. Hence we simply remove this
optimization.
The following procedure was not possible when max_sp_recursion_depth is 0:
create procedure show_proc() show create procedure show_proc;
There is no actual recursive call, but the limit is still checked.
Solved by temporarily increasing the thread's limit just before the fetch
from the cache and decreasing it afterwards.
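A small standalone sketch of this save/bump/restore pattern (the Thread,
sp_cache_lookup and fetch_for_show_create names are hypothetical, not
the server's):

  #include <map>
  #include <string>

  struct Routine { std::string body; };

  struct Thread
  {
    unsigned long max_sp_recursion_depth = 0;  // corresponds to the system variable
    unsigned long current_sp_depth = 0;        // how deep in routine calls we already are
    std::map<std::string, Routine> sp_cache;
  };

  // The cache lookup path that applies the recursion-depth check.
  static const Routine *sp_cache_lookup(Thread &thd, const std::string &name)
  {
    if (thd.current_sp_depth >= thd.max_sp_recursion_depth)
      return nullptr;                          // "recursion limit exceeded"
    auto it = thd.sp_cache.find(name);
    return it == thd.sp_cache.end() ? nullptr : &it->second;
  }

  // SHOW CREATE PROCEDURE inside the procedure's own body is not real
  // recursion, so lift the limit by one just around the fetch, then restore it.
  static const Routine *fetch_for_show_create(Thread &thd, const std::string &name)
  {
    ++thd.max_sp_recursion_depth;
    const Routine *r = sp_cache_lookup(thd, name);
    --thd.max_sp_recursion_depth;
    return r;
  }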