Commit graph

3120 commits

Author SHA1 Message Date
Praveenkumar Hulakund
cf4231a7f9 Bug#18790730 - CROSS-DATABASE FOREIGN KEY WITHOUT PERMISSIONS
CHECK.

Analysis:
----------
The issue is that while creating or altering an InnoDB table,
if a foreign key defined on the table references a parent
table on which the user has no access privileges, the
table is created without any error being reported.

Currently the privilege level REFERENCES_ACL is unused:
it is not considered for access evaluation while creating a
table with a foreign key constraint or adding a foreign
key constraint to a table. Moreover, even when no privileges at all
are granted to the user, the access evaluation on the parent table is skipped.

Fix:
---------
For DML statements no changes are made, since support does not
want permission checks added to every operation.

So, as a fix, a function "check_fk_parent_table_access" was added
to check whether any of the SELECT_ACL, INSERT_ACL, UPDATE_ACL,
DELETE_ACL or REFERENCES_ACL privileges is granted to the user
at the table level. If none of them is granted, an error is reported.
This function is called during CREATE TABLE and ALTER TABLE
operations.
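
For illustration only, a minimal standalone sketch of the kind of privilege test the new function performs; the bit values and signature below are assumptions, not the server's actual code:

  #include <cstdint>

  // Hypothetical privilege bits; the real server defines these differently.
  enum : uint32_t {
    SELECT_ACL     = 1u << 0,
    INSERT_ACL     = 1u << 1,
    UPDATE_ACL     = 1u << 2,
    DELETE_ACL     = 1u << 3,
    REFERENCES_ACL = 1u << 4
  };

  // Returns true if the user holds at least one of the required privileges
  // on the parent table; otherwise the CREATE/ALTER should be rejected.
  static bool check_fk_parent_table_access(uint32_t table_level_privileges)
  {
    const uint32_t required = SELECT_ACL | INSERT_ACL | UPDATE_ACL |
                              DELETE_ACL | REFERENCES_ACL;
    return (table_level_privileges & required) != 0;
  }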
2014-09-10 10:50:17 +05:30
Gleb Shchepa
7141ae8561 Bug #18978946: BACKPORT TO 5.6: BUGFIX FOR 18017820 "BISON 3 BREAKS MYSQL BUILD"
Backport of the fix:

: Bug 18017820: BISON 3 BREAKS MYSQL BUILD
: ========================================    
: 
: The source of the reported problem is a removal of a few deprecated
: things from Bison 3.x: 
: * YYPARSE_PARAM macro (use the %parse-param bison directive instead),
: * YYLEX_PARAM macro (use %lex-param instead),
: 
: The fix removes obsolete macro calls and introduces use of
: %parse-param and %lex-param directives.
2014-06-23 19:59:15 +04:00
Thayumanavar
c7ca708fd5 BUG#18054998 - BACKPORT FIX FOR BUG#11765785 to 5.5
This is a backport of the patch of bug#11765785. Commit message
by Prabakaran Thirumalai from bug#11765785 is reproduced below:
Description:
------------
The global query ID (global_query_id) is not incremented for the PING and
statistics commands. These two query types are filtered out before
incrementing the global query id. This causes a race condition and
results in duplicate query ids for different queries originating from
different connections.
      
Analysis:
---------
sql_parse.cc::dispatch_command() is the only place in the code which sets
thd->query_id to global_query_id and then increments it based on the
query type. In all other places it is incremented first and then
assigned to thd->query_id.

This is done so that global_query_id is not incremented for the PING
and statistics commands in the dispatch_command() function.
      
Fix:
----
As per the suggestion from Serg, "There is no reason to skip query_id for
the PING and STATISTICS command.", the check which filters out
PING and statistics commands is removed.

Instead of using get_query_id() and next_query_id(), which can still
cause a race condition if a context switch happens soon after executing
get_query_id(), the code is changed to use next_query_id() instead of
get_query_id(), as is done in other parts of the code which deal with
global_query_id.

Removed the get_query_id() function and forced next_query_id() callers
to use the return value by specifying the warn_unused_result attribute.
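
A minimal standalone sketch of this pattern, assuming C++11 atomics and GCC/Clang attributes; the server uses its own atomic primitives and attribute macros:

  #include <atomic>
  #include <cstdint>

  static std::atomic<uint64_t> global_query_id{1};

  // The caller must consume the return value; ignoring it draws a warning.
  __attribute__((warn_unused_result))
  static uint64_t next_query_id()
  {
    // fetch_add returns the previous value and increments in one atomic step,
    // so there is no window for two connections to observe the same id.
    return global_query_id.fetch_add(1);
  }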
2014-01-13 12:04:16 +05:30
Anirudh Mangipudi
37502cfaae Bug #17357535 BACKPORT BUG#16241992 TO 5.5
Problem:
COM_CHANGE_USER allows brute-force attempts to crack a password at a very high
rate as it does not cause any significant delay after a login attempt has
failed. This issue was reproduced using the John-The-Ripper password
cracking tool, through which about 5000 passwords per second could be attempted.

Solution:
The non-GA version's solution was to disconnect the connection when a login
attempt failed. Since our aim is to reduce the rate at which passwords
are tested, we introduced a sleep(1) after every failed login attempt. This
significantly increases the delay with which a password can be cracked.
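
A sketch of the throttling idea, with a hypothetical check_credentials() helper standing in for the server's authentication path:

  #include <cstring>
  #include <unistd.h>

  // Hypothetical credential check, standing in for the real auth code.
  static bool check_credentials(const char *user, const char *password)
  {
    return std::strcmp(user, "app") == 0 && std::strcmp(password, "secret") == 0;
  }

  static bool authenticate(const char *user, const char *password)
  {
    bool ok = check_credentials(user, password);
    if (!ok)
      sleep(1);  // delay failed attempts: ~1 guess/second instead of ~5000
    return ok;
  }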
2013-10-18 17:14:39 +05:30
Ashish Agarwal
d75c58e11f WL#7076: Backporting wl6715 to support both formats
in 5.5, 5.6, 5.7.
2013-08-23 09:07:09 +05:30
Praveenkumar Hulakund
39932dcffa Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
"SHOW PROCESSLIST"

Merging from 5.1 to 5.5
2013-08-21 10:44:22 +05:30
Praveenkumar Hulakund
10a6aa256e Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
"SHOW PROCESSLIST"

Analysis:
----------
The problem here is that if one connection changes its
default db while at the same time another connection executes
"SHOW PROCESSLIST" and wants to read the db of that
connection, there is a chance of accessing invalid
memory.

The db name stored in THD is not guarded while changing the user's
DB nor while reading the user's DB in "SHOW PROCESSLIST".
So, if THD.db is freed by the thd "owner" thread while another
thread executing a "SHOW PROCESSLIST" statement tries to read
and copy THD.db at the same time, we may end up in the issue
reported here.

Fix:
----------
The mutex "LOCK_thd_data" is used to guard THD.db while freeing it
and while copying it to the processlist.
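
A minimal standalone sketch of the locking pattern, using std::mutex and a plain struct in place of THD and LOCK_thd_data:

  #include <mutex>
  #include <string>

  struct Session {
    std::mutex lock_thd_data;   // stands in for THD::LOCK_thd_data
    std::string db;             // stands in for THD::db
  };

  // Owner thread: change the default database under the mutex.
  void set_db(Session &s, const std::string &new_db)
  {
    std::lock_guard<std::mutex> guard(s.lock_thd_data);
    s.db = new_db;              // free + replace happens while locked
  }

  // SHOW PROCESSLIST thread: copy the name under the same mutex.
  std::string copy_db_for_processlist(Session &s)
  {
    std::lock_guard<std::mutex> guard(s.lock_thd_data);
    return s.db;                // safe copy; the owner cannot free it concurrently
  }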
2013-08-21 10:39:40 +05:30
Dmitry Lenev
b07ec61f85 Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".

The problem in the first bug report was that although a deadlock involving
metadata locks was reported using the same error code and message as an InnoDB
deadlock, it didn't roll back the transaction like the latter. This caused
confusion to users, as in some cases after ER_LOCK_DEADLOCK the transaction
could be restarted immediately and in some cases a rollback was
required.

The problem in the second bug report was that although an InnoDB deadlock
caused transaction rollback in all storage engines, it didn't cause release
of metadata locks. So concurrent DDL on the tables used in the transaction was
blocked until an implicit or explicit COMMIT or ROLLBACK was issued in the
connection which got the InnoDB deadlock.

The former issue stemmed from the fact that when support for detecting
and reporting metadata lock deadlocks was added, we erroneously assumed
that InnoDB doesn't roll back the transaction on deadlock but only the last
statement (while this is actually what happens on InnoDB lock timeout), and
so didn't implement rollback of transactions on MDL deadlocks.

The latter issue was caused by the fact that rollback of a transaction due
to deadlock is carried out by setting the THD::transaction_rollback_request
flag at the point where the deadlock is detected and performing the rollback
inside the trans_rollback_stmt() call when this flag is set. And
trans_rollback_stmt() is not aware of MDL locks, so no MDL locks are
released.

This patch solves these two problems in the following way:

- In case an MDL deadlock is detected, transaction rollback is requested
  by setting the THD::transaction_rollback_request flag.

- Code performing rollback of the transaction when THD::transaction_rollback_request
  is set is moved out of trans_rollback_stmt(). Now we handle the rollback request
  on the same level as we call trans_rollback_stmt() and release statement/
  transaction MDL locks.
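
A rough sketch of the resulting control flow, with hypothetical names and empty stubs; it only shows where the rollback request is honoured relative to MDL release:

  struct Session {
    bool transaction_rollback_request = false;  // set when a deadlock is detected
  };

  // Stubs standing in for the real server functions (hypothetical names).
  static void rollback_statement(Session &) {}                     // trans_rollback_stmt() analogue
  static void rollback_transaction(Session &) {}                   // full transaction rollback
  static void release_statement_and_transaction_mdl(Session &) {}  // MDL release

  static void end_statement(Session &s)
  {
    rollback_statement(s);                      // unaware of MDL, as before
    if (s.transaction_rollback_request) {       // honoured at this level now
      rollback_transaction(s);
      s.transaction_rollback_request = false;
    }
    release_statement_and_transaction_mdl(s);   // MDL released alongside it
  }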
2013-08-20 13:12:34 +04:00
Ashish Agarwal
f5b5e6b951 WL#7076: Backporting wl6715 to support both formats in 5.5, 5.6, 5.7
Backporting wl6715 to mysql-5.5
2013-07-02 11:58:39 +05:30
Ashish Agarwal
da6538b6cb Bug#16169063: SECURITY CONCERN BECAUSE OF INSUFFICIENT LOGGING
PROBLEM: If multiple statements are sent in a single
         request then only the last statement was
         getting logged. An attacker could bypass the
         audit log just by sending two consecutive
         statements in one request.

SOLUTION: Each statement from a single request is
          now logged.
2013-03-07 12:12:58 +05:30
Igor Solodovnikov
cb08544b57 bug#14163155 COM_CHANGE_USER DOESN'T WORK WITH CHARACTER-SET-SERVER=UCS2 IN
5.1 SERVER

The problem was caused by the COM_CHANGE_USER parsing code. That code ignored
the character set number passed in the COM_CHANGE_USER packet. Instead the
character_set_client value was used. The user name was not converted at all.

Fixed by using the passed character set number to convert both the db and user
names. If COM_CHANGE_USER does not contain a character set number then
character_set_client is used to convert both names.
2013-02-07 19:46:08 +02:00
Gleb Shchepa
9c59f5a573 Bug #15948123: SERVER WORKS INCORRECT WITH LONG TABLE ALIASES
Code in the MDL subsystem assumes that identifiers of objects can't
be longer than NAME_LEN characters. This assumption was broken
when one tried to construct an MDL_key based on a table alias, which
can have arbitrary length. Since MDL_keys (and MDL locks) are
not really used for table aliases, this patch changes the code to
not initialize the MDL_key object for table list elements representing
aliases.
2012-12-05 16:53:33 +04:00
Thayumanavar
8afcdfe9c0 BUG#14458232 - CRASH IN THD_IS_TRANSACTION_ACTIVE DURING
THREAD POOLING STRESS TEST
PROBLEM:
Connection stress tests which consist of concurrent
KILL CONNECTION statements interleaved with mysql ping queries
cause a mysqld server which uses the thread pool scheduler
to crash.
FIX:
Killing a connection involves shutdown and close of the client
socket, and this can cause EPOLLHUP (or EPOLLERR) events to be
queued and handled after disarming and cleanup of
the connection object (THD) has been done. We disarm the
connection by modifying the epoll mask to zero, which
ensures no events come in, release the ownership of the waiting
thread that collects events, and then do the cleanup of the THD
object. As per the linux kernel epoll source code (
http://lxr.linux.no/linux+*/fs/eventpoll.c#L1771), EPOLLHUP
(or EPOLLERR) can't be masked even if we set the EPOLL mask
to zero. So we disarm the connection and thus prevent
execution of any query processing handler/queueing to the
client ctx. queue by removing the client fd from the epoll
set via EPOLL_CTL_DEL. There is also a race condition which
involves the following threads:
1) Thread X executing KILL CONNECTION Y and is in THD::awake
and using mysys_var (holding LOCK_thd_data).
2) Thread Y in tp_process_event executing and is being killed.
3) Thread Z receives the KILL flag internally and possibly calls
the tp_thd_cleanup function which sets the thread session variable
and changes mysys_var.
The fix for the above race is to set the thread session variable
under LOCK_thd_data.
We also do not call THD::awake if we find the thread to be killed
in the thread list but its KILL_CONNECTION flag is already set,
thus avoiding any possible concurrent cleanup. This patch
is approved by Mikael Ronstrom via email review.
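
Illustrative sketch, assuming Linux epoll: merely setting the event mask to zero does not suppress EPOLLHUP/EPOLLERR, so the fd is removed from the epoll set entirely before THD cleanup runs:

  #include <sys/epoll.h>

  // Returns 0 on success, -1 on error (errno set), like epoll_ctl itself.
  static int disarm_connection(int epoll_fd, int client_fd)
  {
    // EPOLL_CTL_DEL detaches the fd, so no further events (including
    // EPOLLHUP/EPOLLERR) can be queued for this connection.
    return epoll_ctl(epoll_fd, EPOLL_CTL_DEL, client_fd, nullptr);
  }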
2012-11-09 14:54:35 +05:30
Praveenkumar Hulakund
ca9a3a8915 Bug#14003080:65104: MAX_USER_CONNECTIONS WITH PROCESSLIST EMPTY
Analysis:
-------------
If the server is started with limits on MAX_CONNECTIONS and
MAX_USER_CONNECTIONS, then at most MAX_USER_CONNECTIONS connections from any
particular user and at most MAX_CONNECTIONS clients in total can be connected
to the server.

The server maintains a counter for total CONNECTIONS and for CONNECTIONS
from each particular user.

Here, MAX_CONNECTIONS connections are created to the server. Among these
MAX_CONNECTIONS, connections from a particular user (say USER1) are
also created. The number of connections from USER1 is less than
MAX_USER_CONNECTIONS. After that there is one more connection request from
USER1. Since USER1 can still create connections, as it hasn't reached
MAX_USER_CONNECTIONS, the server increments the per-user CONNECTIONS counter.
As the server already has MAX_CONNECTIONS connections, the next check against
the total CONNECTION count fails. In this case control is returned WITHOUT
decrementing the per-user CONNECTIONS counter. So the per-user CONNECTIONS
counter keeps incrementing for each attempt until current connections are
closed. Because of this the per-user CONNECTIONS counter reaches
MAX_USER_CONNECTIONS, and subsequent connections from USER1 always return with
the MAX_USER_CONNECTIONS limit error, even when the total number of connections
to the server is less than MAX_CONNECTIONS.

Fix:
-------------
This issue occurred because the counters were not handled properly in the
server. The code is changed to handle the per-user connection counters properly.
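
A simplified sketch of the corrected counter handling; std::mutex and the names below are stand-ins for the server's own locks and variables:

  #include <mutex>

  struct UserStats { unsigned connections = 0; };

  static std::mutex counters_mutex;
  static unsigned total_connections = 0;
  static const unsigned max_connections = 100;
  static const unsigned max_user_connections = 10;

  static bool try_accept_connection(UserStats &user)
  {
    std::lock_guard<std::mutex> guard(counters_mutex);
    if (user.connections >= max_user_connections)
      return false;                      // per-user limit reached
    ++user.connections;
    if (total_connections >= max_connections) {
      --user.connections;                // the missing decrement caused the bug
      return false;                      // global limit reached
    }
    ++total_connections;
    return true;
  }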
2012-05-28 11:14:43 +05:30
Gopal Shankar
047fea0682 Bug#12636001 : deadlock from thd_security_context
PROBLEM:
Threads end up in a deadlock due to locks acquired as described
below,

con1: Run Query on a table. 
  It is important that this SELECT must back off while
  trying to open t1 and enter wait_for_condition().
  The SELECT then is blocked trying to lock mysys_var->mutex
  which is held by con3. The very significant fact here is
  that mysys_var->current_mutex will still point to LOCK_open,
  even if LOCK_open is no longer held by con1 at this point.

con2: Try dropping table used in con1 or query some table.
  It will hold LOCK_open and be blocked trying to lock
  kernel_mutex held by con4.

con3: Try killing the query run by con1.
  It will hold THD::LOCK_thd_data belonging to con1 while
  trying to lock mysys_var->current_mutex belonging to con1.
  But current_mutex will point to LOCK_open which is held
  by con2.

con4: Get innodb engine status
  It will hold kernel_mutex, trying to lock THD::LOCK_thd_data
  belonging to con1 which is held by con3.

So while technically only con2, con3 and con4 participate in the
deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
is a vital component of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
         kernel_mutex -> THD::LOCK_thd_data)

FIX:
LOCK_thd_data has responsibility of protecting,
1) thd->query, thd->query_length
2) VIO
3) thd->mysys_var (used by KILL statement and shutdown)
4) THD during thread delete.

Among the above responsibilities, 1), 2) and (3,4) seem to be three
independent groups of responsibility. If a different LOCK owns
responsibility (3,4), the above mentioned deadlock cycle
can be avoided. This fix introduces LOCK_thd_kill to handle
responsibility (3,4), which eliminates the deadlock issue.

Note: The problem is not found in 5.5. Introduction of the MDL subsystem
caused metadata locking responsibility to be moved from TDC/TC to the
MDL subsystem. Due to this, the responsibility of LOCK_open is reduced.
As the use of LOCK_open is removed in open_table() and
mysql_rm_table(), the above mentioned CYCLE does not form.
Revision IDs for the changes:
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
2012-05-17 18:07:59 +05:30
Andrei Elkin
bf66e3ab63 merge bug11754117-45670 fixes from 5.1. 2012-04-21 13:24:39 +03:00
Andrei Elkin
f3509d1d67 BUG#11754117 incorrect logging of INSERT into auto-increment
BUG#11761686 insert_id event is not filtered.
  
Two issues are covered.
  
INSERT into an autoincrement field which is not the first part of a composite
primary key is unsafe by autoincrement logging design. The case is specific to
the MyISAM engine because InnoDB does not allow such a table definition.
  
However, no warning was issued and no row-format logging was done in MIXED
mode; that is now fixed.
  
Int-, Rand- and User-var log-events were not filtered along with their parent
query, which made it possible for them to corrupt the execution context of the
following query.
  
Fixed by deferring their execution until the parent query.

******
Bug#11754117 

Post review fixes.
2012-04-20 19:41:20 +03:00
Nuno Carvalho
0eea06c5d0 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Merge from 5.1 into 5.5.
2012-04-18 10:12:19 +01:00
Nuno Carvalho
a9a7e6ea24 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this 
output - for instance MEM generates the SUM of the sizes of the logs 
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an 
account that holds open the connection whilst monitoring, and can lock
out administrators when the server gets overloaded and reaches
max_connections - there is already another SUPER privileged account
connected, the "monitor". 

As SHOW MASTER STATUS, and all other replication related statements,
return with either REPLICATION CLIENT or SUPER privileges, this worklog 
is to make SHOW MASTER LOGS and SHOW BINARY LOGS be consistent with this
as well, and allow both of these commands with either SUPER or 
REPLICATION CLIENT. 
This allows monitoring tools to not require a SUPER privilege any more,
so is safer in overloaded situations, as well as being more secure, as 
lighter privileges can be given to users of such tools or scripts.
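
An illustrative sketch of the new test, assuming bit-mask privileges; the constants below are stand-ins, not the server's actual ACL values:

  enum Privileges : unsigned {
    SUPER_ACL       = 1u << 0,
    REPL_CLIENT_ACL = 1u << 1
  };

  // Either privilege is now sufficient, matching SHOW MASTER STATUS.
  static bool may_show_binary_logs(unsigned granted_privileges)
  {
    return (granted_privileges & (SUPER_ACL | REPL_CLIENT_ACL)) != 0;
  }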
2012-04-18 10:08:01 +01:00
Tatjana Azundris Nuernberg
1c1bcb1c80 BUG 13454045 - 63524: BUG #35396 "ABNORMAL/IMPOSSIBLE/LARGE QUERY_TIME AND LOCK_TIME" HAPPENS A
If a query's end time is before its start time, the system clock has been turned back
(daylight saving time etc.). When the system clock is changed, we can't tell for certain a
given query was actually slow. We did not protect against logging such a query with a bogus
execution time (resulting from end_time - start_time being negative), and possibly logging it
even though it did not really take long to run.

We now have a sanity check in place.
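
A minimal sketch of such a sanity check, assuming microsecond timestamps; the names are illustrative only:

  #include <cstdint>

  static uint64_t query_duration(uint64_t start_utime, uint64_t end_utime)
  {
    // If the system clock was turned back, end may precede start;
    // report 0 instead of a huge bogus unsigned value.
    return end_utime > start_utime ? end_utime - start_utime : 0;
  }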
2012-02-19 03:18:49 +00:00
MySQL Build Team
5734bae576 Updated/added copyright headers 2012-02-16 10:48:16 +01:00
Rohit Kalhans
9153fddf58 Backout the patch for bug#11758263. 2012-02-08 12:10:55 +05:30
Rohit Kalhans
6df5a61d2e BUG#11758263 50440: MARK UNORDERED UPDATE WITH AUTOINC UNSAFE
Problem: Statements that write to tables with auto_increment columns
      based on the selection from another table may lead to master
      and slave going out of sync, as the order in which the rows
      are retrieved from the table may differ on master and slave.
      
      Solution: We mark writing to a table with an auto_increment column
      as unsafe. This will cause the execution of such statements to
      throw a warning and forces the statement to be logged in ROW format if
      the logging format is MIXED.
      
      Changes:
      1. All statements that write to a table with auto_increment
      column(s) based on rows fetched from another table will now
      be unsafe.
      2. CREATE TABLE with SELECT will now be unsafe.
2012-02-08 00:33:08 +05:30
Dmitry Shulga
d460f1689d Fixed bug#11753187 (formerly known as bug 44585): SP_CACHE BEHAVES AS
MEMORY LEAK.

Background:
 - There are caches for stored functions and stored procedures (SP-cache);
 - There is no similar cache for events;
 - Triggers are cached together with TABLE objects;
 - Those SP-caches are per-session (i.e. specific to each session);
 - A stored routine is represented by a sp_head-instance internally;
 - SP-cache basically contains sp_head-objects of stored routines, which
   have been executed in a session;
 - sp_head-object is added into the SP-cache before the corresponding
   stored routine is executed;
 - SP-cache is flushed at the end of the session.

The problem was that the SP-cache might grow without any limit. Although this
was not a pure memory leak (the SP-cache is flushed when the session is closed),
this is still a problem, because the user might consume a lot of memory by
executing many stored routines.

The patch fixes this problem in the least intrusive way. A soft limit
(similar to the size of the table definition cache) is introduced. To represent
this limit, the new runtime configuration parameter 'stored_program_cache'
is introduced. The value of this parameter is stored in the new global
variable stored_program_cache_size, which is used to detect when the SP-cache
overflows.

The parameter 'stored_program_cache' limits the number of cached routines for
each thread. It has the following min/default/max values given by support:
  min = 256, default = 256, max = 512 * 1024.
Also it should be noted that this parameter limits the size of
each cache (for stored procedures and for stored functions) separately.

The SP-cache size is checked after a top-level statement is parsed.
If the SP-cache size exceeds the limit specified by the parameter
'stored_program_cache' then the SP-cache is flushed and the memory allocated
for cache objects is freed. This approach allows the cache to be flushed safely
even when there are dependencies among stored routines.
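
A simplified standalone sketch of the soft-limit check, using a std::map where the server has its own sp_cache structures; types and names are illustrative:

  #include <map>
  #include <string>

  struct Routine { /* parsed representation of a stored routine */ };

  struct Sp_cache {
    std::map<std::string, Routine *> routines;

    // Called after a top-level statement is parsed.
    void enforce_limit(size_t stored_program_cache)
    {
      if (routines.size() <= stored_program_cache)
        return;
      for (auto &entry : routines)   // flush the whole cache at once, which is
        delete entry.second;         // safe even with inter-routine dependencies
      routines.clear();
    }
  };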
2012-01-25 15:59:30 +06:00
hery.ramilison@oracle.com
85f07c7d7d Merge from mysql-5.5.18-release 2011-11-17 09:00:58 +01:00
Andrei Elkin
a7127418a7 Bug#11763573 - 56299: MUTEX DEADLOCK WITH COM_BINLOG_DUMP, BINLOG PURGE, AND PROCESSLIST/KILL
The bug case is similar to the one fixed earlier in bug_49536.
A deadlock involving LOCK_log appears to be possible because the purge-running
thread holds LOCK_log even though there is no need to do so, a fact which was
exploited by the earlier bug fixes.

Fixed with a small reengineering of rotate_and_purge(), adding two new methods
and setting up a policy to execute those instead of the former
rotate_and_purge(RP_LOCK_LOG_IS_ALREADY_LOCKED).
The policy for using rotate() and purge() is that if the caller acquires
LOCK_log itself, it should call rotate(), release the mutex and then run purge().

A side effect of this patch is refining the error message of bug#11747416 to print
the whole path.
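
A sketch of the calling policy, with std::mutex standing in for LOCK_log and stub rotate()/purge() functions; hypothetical, not the server's code:

  #include <mutex>

  static std::mutex lock_log;   // stands in for the binlog LOCK_log

  static void rotate() { /* switch to a new binlog file; caller holds lock_log */ }
  static void purge()  { /* remove old binlog files; must NOT hold lock_log */ }

  static void rotate_then_purge()
  {
    {
      std::lock_guard<std::mutex> guard(lock_log);
      rotate();
    }           // lock_log released here ...
    purge();    // ... so purging cannot deadlock against dump/kill threads
  }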
2011-10-27 17:14:41 +03:00
Tatjana Azundris Nuernberg
ec56c16c4b Bug12589870 post merge fixes - manual merge 2011-10-19 03:42:09 +01:00
Tatjana Azundris Nuernberg
7d882c19ec Bug12589870 post-merge fixes for Sparc64 and friends 2011-10-19 03:21:31 +01:00
Georgi Kodinov
665ef20368 merge mysql-5.5->mysql-5.5-security 2011-10-12 15:07:15 +03:00
Georgi Kodinov
492e5b9bce auto-merge mysql-5.1->mysql-5.1-security 2011-10-12 14:34:44 +03:00
Magne Mahre
5611bcfaa9 Merge from 5.1-security 2011-10-07 14:10:15 +02:00
Magne Mahre
e02c3d7fb7 BUG#12589870 CRASHES WITH MULTIQUERY PACKET + USE<DB> + QUERY CACHE
A buffer large enough to hold the query _plus_ some additional
data is allocated before parsing is started.   The additional data 
is used by the query cache, and consists of the name of the current 
database and a set of flags.
 
When a packet containing multiple SQL statements is sent to the
server and one of the statements changes the current database
(a "USE <db>" statement), and the name of the new current database
is longer than that of the previous one, there is not enough space in the
buffer for the new name, and we write past the buffer boundary.

The fix adds an extra field to store the number of bytes
allocated to the database name in the buffer.  If the current
database name changes, and the new name is longer than the
previous one, we refuse to cache the query.
2011-10-07 14:08:31 +02:00
Rohit Kalhans
b140784fbc BUG#11758262 - 50439: MARK INSERT...SEL...ON DUP KEY UPD,REPLACE...SEL,CREATE...[IGN|REPL] SEL
Problem: The following statements can cause the slave to go out of sync
if logged in statement format:
INSERT IGNORE...SELECT
INSERT ... SELECT ... ON DUPLICATE KEY UPDATE
REPLACE ... SELECT
UPDATE IGNORE
CREATE ... IGNORE SELECT
CREATE ... REPLACE SELECT
           
Background: Since the order of the rows returned by the SELECT
statement (or otherwise) may differ on master and slave, the
above statements may cause the slave to go out of sync with
the master.
      
Fix:
Issue a warning when statements like the above are executed and
the binary logging format is STATEMENT. If the logging format is MIXED,
use row based logging. Marking a statement as unsafe has been
done in sql/sql_parse.cc instead of sql/sql_yacc.cc, because when a
token has been parsed we cannot be sure whether the parsing
of the other tokens has been done as well.
      
Six new warning messages have been added, one for each unsafe statement.
      
binlog.binlog_unsafe.test has been updated to incorporate these additional unsafe statements.
2011-09-29 14:47:27 +05:30
Alexander Nozdrin
bccea2cc5d Manual merge from mysql-5.1-security. 2011-09-23 20:49:23 +04:00
Alexander Nozdrin
41dc304928 Fix for Bug#13001491: MYSQL_REFRESH CRASHES WHEN STORED ROUTINES ARE RUN CONCURRENTLY.
The main problem was that lex_start() was not called before processing
COM_REFRESH.

Another problem discovered was that failures to flush the error log were not
properly handled, which resulted in a server crash.

The user-visible effect of these problems were:
  - if the COM_REFRESH command was sent after SQL queries of some sort,
    the server would crash.
  - if COM_REFRESH was requested with REFRESH_LOG only, and the error log
    failed to flush, the server would crash. The error log fails to flush
    when it points to an unavailable file (for example, due to restricted
    permissions).

The fixes are:
  - call lex_start() in the beginning of COM_REFRESH;
  - handle failures to flush the error log properly, i.e. raise ER_UNKNOWN_ERROR.
2011-09-22 18:31:16 +04:00
Tatjana Azundris Nuernberg
be6f501f47 manual merge 2011-08-08 17:45:43 +01:00
Tatjana Azundris Nuernberg
3c00efc42e Bug#11758414/Bug#50614: Default storage_engine not honored when set from within a stored procedure
When CREATE TABLE wasn't given ENGINE=... it would determine
the default ENGINE at parse-time rather than at execution
time, leading to incorrect behaviour (namely, later changes
to the default engine being ignored) when calling CREATE TABLE
from a stored procedure.

We now defer working out the default engine till execution of
CREATE TABLE.
2011-07-12 06:08:52 +01:00
Kent Boortz
b6e6097c95 Updated/added copyright headers 2011-07-03 17:47:37 +02:00
Kent Boortz
1400d7a2cc Updated/added copyright headers 2011-06-30 17:37:13 +02:00
Kent Boortz
e5ce023f57 Updated/added copyright headers 2011-06-30 17:31:31 +02:00
Dmitry Shulga
8867ad80ac Fixed bug#11753738 (formerly known as bug#45235) - 5.1 DOES NOT SUPPORT 5.0-ONLY
SYNTAX TRIGGERS IN ANY WAY

Tables with triggers using deprecated (5.0-only) syntax became
unavailable for any DML and DDL after an upgrade to the 5.1 version of the
server. An attempt to execute any statement on such a table resulted in a
parsing error being reported. Since this included DROP TRIGGER and DROP TABLE
statements (actually, the latter was allowed but was not functioning
properly for such tables) it was impossible to fix the problem without
manual operations on .TRG and .TRN files in the data directory.

The problem was that failure to parse a trigger body (due to 5.0-only
syntax) when opening the trigger file for a table prevented the table
from being opened. This made all operations on the table impossible
(except DROP TABLE which, due to a peculiarity in its implementation,
dropped the table but left the trigger files around).

This patch solves this problem by silencing the error which occurs when
we parse the trigger body during table open. The error message is preserved
for future use and the table is marked as having a broken trigger.
We also try to analyze the parse tree to recover the trigger name, which
will be needed in order to drop the broken trigger. DML statements
which invoke triggers on a table marked as having a broken trigger
are prohibited and emit the saved error message. The same happens for
DDL which changes triggers, except DROP TRIGGER and DROP TABLE which
try their best to do what was requested. The table is no longer
marked as having a broken trigger when the last such trigger is dropped.
2011-06-10 10:52:39 +07:00
Guilhem Bichot
25221cccd2 Fix for BUG#11755168 '46895: test "outfile_loaddata" fails (reproducible)'.
In sql_class.cc, 'row_count', of type 'ha_rows', was used as last argument for
ER_TRUNCATED_WRONG_VALUE_FOR_FIELD which is
"Incorrect %-.32s value: '%-.128s' for column '%.192s' at row %ld".
So 'ha_rows' was used as 'long'.
On SPARC32 Solaris builds, 'long' is 4 bytes and 'ha_rows' is 'longlong' i.e. 8 bytes.
So the printf-like code was reading only the first 4 bytes.
Because the CPU is big-endian, 1LL is 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01
so the first four bytes yield 0. So the warning message had "row 0" instead of
"row 1" in test outfile_loaddata.test:
-Warning	1366	Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 1
+Warning	1366	Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 0

All error-messaging functions which internally invoke some printf-like function
are potential candidates for such mistakes.
One apparently easy way to catch such mistakes is to use
ATTRIBUTE_FORMAT (from my_attribute.h).
But this works only when the call site has both:
a) the format as a string literal
b) the types of arguments.
So:
  func(ER(ER_BLAH), 10);
will silently not be checked, because ER(ER_BLAH) is not known at
compile time (it is known at run-time, and depends on the chosen
language).
And
  func("%s", a va_list argument);
has the same problem, as the *real* type of arguments is not
known at this site at compile time (it's known in some caller).
Moreover,
  func(ER(ER_BLAH));
though possibly correct (if ER(ER_BLAH) has no '%' markers), will not
compile (gcc says "error: format not a string literal and no format
arguments").

Consequences:
1) ATTRIBUTE_FORMAT is here added only to functions which in practice
take "string literal" formats: "my_error_reporter" and "print_admin_msg".
2) it cannot be added to the other functions: my_error(),
push_warning_printf(), Table_check_intact::report_error(),
general_log_print().

To do a one-time check of functions listed in (2), the following
"static code analysis" has been done:
1) replace
  my_error(ER_xxx, arguments for substitution in format)
with the equivalent
  my_printf_error(ER_xxx,ER(ER_xxx), arguments for substitution in
format),
so that we have ER(ER_xxx) and the arguments *in the same call site*
2) add ATTRIBUTE_FORMAT to push_warning_printf(),
Table_check_intact::report_error(), general_log_print()
3) replace ER(xxx) with the hard-coded English text found in
errmsg.txt (like: ER(ER_UNKNOWN_ERROR) is replaced with
"Unknown error"), so that a call site has the format as string literal
4) this way, ATTRIBUTE_FORMAT can effectively do its job
5) compile, fix errors detected by ATTRIBUTE_FORMAT
6) revert steps 1-2-3.
The present patch has no compiler error when submitted again to the
static code analysis above.
It cannot catch all problems though: see Field::set_warning(), in
which a call to push_warning_printf() has a variable error
(thus, not replaceable by a string literal); I checked set_warning() calls
by hand though.

See also WL 5883 for one proposal to avoid such bugs from appearing
again in the future.

The issues fixed in the patch are:
a) mismatch in types (like 'int' passed to '%ld')
b) more arguments passed than specified in the format.
This patch resolves mismatches by changing the type/number of arguments,
not by changing error messages of sql/share/errmsg.txt. The latter would be wrong,
per the following old rule: errmsg.txt must be as stable as possible; no insertions
or deletions of messages, no changes of type or number of printf-like format specifiers,
are allowed, as long as the change impacts a message already released in a GA version.
If this rule is not followed:
- Connectors, which use error message numbers, will be confused (by insertions/deletions
of messages)
- using errmsg.sys of MySQL 5.1.n with mysqld of MySQL 5.1.(n+1)
could produce wrong messages or crash; such usage can easily happen if
installing 5.1.(n+1) while /etc/my.cnf still has --language=/path/to/5.1.n/xxx;
or if copying mysqld from 5.1.(n+1) into a 5.1.n installation.
When fixing b), I have verified that the superfluous arguments were not used in the format
in the first 5.1 GA (5.1.30 'bteam@astra04-20081114162938-z8mctjp6st27uobm').
Had they been used, then passing them today, even if the message doesn't use them
anymore, would have been necessary, as explained above.
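
For illustration, a small standalone example of the kind of check ATTRIBUTE_FORMAT enables, assuming GCC/Clang; the function below is an example, not one of the server's reporters:

  #include <cstdarg>
  #include <cstdio>

  // The format attribute lets the compiler match '%' specifiers
  // against the types of the arguments actually passed.
  __attribute__((format(printf, 1, 2)))
  static void report_error(const char *fmt, ...)
  {
    va_list args;
    va_start(args, fmt);
    vfprintf(stderr, fmt, args);
    va_end(args);
  }

  // report_error("at row %ld", (long) row_count);   // OK
  // report_error("at row %ld", some_64bit_counter); // diagnosed at compile time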
2011-05-16 22:04:01 +02:00
Marc Alff
2d8c476b77 Bug#12565712 - 61205: "QUERIES PER SECOND AVG" VALUE IN STATUS OUTPUT IS INCORRECT
Before this fix, a server executing 1009 queries in 1000 seconds would give a status of:
Queries per second avg: 1.9
The printf format used to print the decimal part, computed separately, is incorrect.

With this fix, the correct result is printed:
Queries per second avg: 1.009

Tested manually, no test case provided.
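
A standalone sketch, not the actual server code, showing one way to print the fractional part so that 1009 queries over 1000 seconds yields 1.009:

  #include <cstdio>

  static void print_qps(unsigned long long queries, unsigned long long uptime_seconds)
  {
    if (uptime_seconds == 0)
      uptime_seconds = 1;                        // avoid division by zero
    printf("Queries per second avg: %llu.%03llu\n",
           queries / uptime_seconds,
           (queries * 1000ULL / uptime_seconds) % 1000ULL);
  }

  // print_qps(1009, 1000) prints "Queries per second avg: 1.009"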
2011-06-29 10:15:05 +02:00
Dmitry Lenev
db114007eb Fix for bug #12641342 - "61401: UPDATE PERFORMANCE DEGRADES
GRADUALLY IF A TRIGGER EXISTS".

This bug manifested itself in two ways:

- Firstly, execution of any data-changing statement which
  required prelocking (i.e. involved a stored function or
  trigger) as part of a transaction slowed down a bit all
  subsequent statements in this transaction. So performance
  in a transaction which periodically involved such statements
  gradually degraded over time.
- Secondly, execution of any data-changing statement which
  required prelocking as part of a transaction prevented
  concurrent FLUSH TABLES WITH READ LOCK from proceeding
  until the end of the transaction instead of the end of the
  particular statement.
  
The problem was caused by incorrect handling of the metadata lock
used in the FTWRL implementation for statements requiring prelocked
mode.
Each statement which changes data acquires global IX lock
with STATEMENT duration. This lock is supposed to block 
concurrent FTWRL from proceeding until the statement ends.

When entering prelocked mode, durations of all metadata locks
acquired so far were changed to EXPLICIT, to prevent 
substatements from releasing these locks. When prelocked mode
was left, durations of metadata locks were changed to
TRANSACTIONAL (with a few exceptions) so they can be properly
released at the end of transaction. 
Unfortunately, this meant that the global IX lock blocking
FTWRL with STATEMENT duration was moved to TRANSACTIONAL
duration after execution of statement requiring prelocking.

Since each subsequent statement that required prelocking and
tried to acquire the global IX lock with STATEMENT duration got
a new instance of MDL_ticket, which was later moved to
TRANSACTIONAL duration, this led to unwarranted growth of the
number of tickets with TRANSACTIONAL duration in this
connection's MDL_context. As a result, searching for other
tickets in it became slow and acquisition of other metadata
locks by this transaction started to hog CPU.

Moreover, this also meant that after execution of statement
requiring prelocking concurrent FTWRL was blocked
until the end of transaction instead of end of statement.

This patch solves this problem by not moving locks to EXPLICIT
duration when the thread enters prelocked mode (unless it is a real
LOCK TABLES mode). This step turned out to be not really
necessary as substatements don't try to release metadata locks.
Consequently, the global IX lock blocking FTWRL keeps its
STATEMENT duration and is properly released at the end of
statement and the above issue goes away.
2011-06-16 19:18:16 +04:00
Dmitry Lenev
c61e346f51 Fix for bug #11762012 - "54553: INNODB ASSERTS IN
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".

Attempt to update an InnoDB temporary table under LOCK TABLES
led to assertion failure in both debug and production builds
if this temporary table was explicitly locked for READ. The
same scenario works fine for MyISAM temporary tables.

The assertion failure was caused by a discrepancy between the lock
that was requested on the rows of the temporary table at LOCK TABLES
time and by the update operation. Since the SQL layer requested a
read lock at LOCK TABLES time, the InnoDB engine assumed that upcoming
statements which are going to be executed under LOCK TABLES will
only read the table and therefore should acquire only an S-lock.
An update operation broke this assumption by requesting an X-lock.

Possible approaches to fixing this problem are:

1) Skip locking of temporary tables as locking doesn't make any
   sense for connection-local objects.
2) Prohibit changing of temporary table locked by LOCK TABLES ...
   READ.

Unfortunately both of these approaches have drawbacks which make
them unviable for stable versions of server.

So this patch takes another approach and changes the code in such a way
that LOCK TABLES for a temporary table will always request a write
lock. In the 5.5 version of this patch the switch from read lock to
write lock is done on the SQL layer.
2011-05-26 19:50:06 +04:00
Guilhem Bichot
12f651ac9d Merge from 5.1. 2011-05-21 10:21:08 +02:00
bjorn.munch@oracle.com
f152d4cf05 Merge from mysql-5.5.12-release 2011-05-06 10:27:04 +02:00
Alexander Nozdrin
506ff594c8 A patch for Bug#11763166 (55847: SHOW WARNINGS returns empty
result set when SQLEXCEPTION is active).

The problem was in a hackish THD::no_warnings_for_error attribute.
When it was set, an error was not written to Warning_info -- only
Diagnostics_area state was changed. That means, Diagnostics_area
might contain error state, which is not present in Warning_info.

The user-visible problem was that in some cases SHOW WARNINGS
returned empty result set (i.e. there were no warnings) while
the previous SQL statement failed. According to the MySQL
protocol errors must be presented in warning list.

The main idea of this patch is to remove THD::no_warnings_for_error.
There were a few places where it was used:
  - sql_admin.cc, handling of REPAIR TABLE USE_FRM.
  - sql_show.cc, when calling fill_schema_table_from_frm().
  - sql_show.cc, when calling fill_table().
The fix is to either use internal-error-handlers, or to use
temporary Warning_info storing warnings, which might be ignored.

This patch is needed to fix Bug 11763162 (55843).
2011-04-15 16:02:22 +04:00
Tor Didriksen
4a6d020e8d Bug#11829785 EXPLAIN EXTENDED CRASH WITH RIGHT OUTER JOIN, SUBQUERIES
This is a backport of
Bug #46860 Crash/segfault using EXPLAIN EXTENDED on query using UNION in subquery.
2011-03-24 11:27:11 +01:00
Alexander Barkov
e5fdeac0f6 Bug#11764503 (Bug#57341) Query in EXPLAIN EXTENDED shows wrong characters
@ mysql-test/r/ctype_latin1.result
  @ mysql-test/r/ctype_utf8.result
  @ mysql-test/t/ctype_latin1.test
  @ mysql-test/t/ctype_utf8.test
  Adding tests

  @ sql/mysqld.h
  @ sql/item.cc
  @ sql/sql_parse.cc
  @ sql/sql_view.cc

  Refactoring (thanks to Guilhem for the idea):

  Item_string::print() was hard to understand because of the different
  QT_ constants: in "query_type==QT_x", QT_x is explicitly included
  but the other two QT_ are implicitly excluded. The combinations
  with '||' and '&&' make this even harder.
  - logic is now more "explicit" by changing QT_ constants to a bitmap of flags:
    QT_ORDINARY: no change,
    QT_IS -> QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS,
    QT_EXPLAIN -> QT_TO_SYSTEM_CHARSET
    (QT_EXPLAIN was introduced in the first version of the Bug#57341 patch)
  - Item_string::print() is rewritten using those flags

  Bugfix itself:

  When QT_TO_SYSTEM_CHARSET is used alone (with no QT_WITHOUT_INTRODUCERS),
  we print string literals as follows:

  - display introducers if they were in the original query
  - print ASCII characters as is
  - print non-ASCII characters using hex-escape
  Note: as "EXPLAIN" output is only for human readability purposes
  and does not need to be a pasrable SQL, so using hex-escape is Ok.
  ErrConvString class perfectly suites for hex escaping purposes.
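
  A sketch of the bitmap-of-flags idea with illustrative values; the server's
  actual enum_query_type values may differ:

    // Illustrative bit values, not the real constants.
    enum Query_type_flags : unsigned {
      QT_ORDINARY            = 0,
      QT_TO_SYSTEM_CHARSET   = 1u << 0,   // convert literals to system charset
      QT_WITHOUT_INTRODUCERS = 1u << 1    // suppress _charset introducers
      // QT_IS would be QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS
    };

    // With QT_TO_SYSTEM_CHARSET alone, non-ASCII characters are hex-escaped.
    static bool print_as_hex_escape(unsigned query_type, bool char_is_ascii)
    {
      return (query_type & QT_TO_SYSTEM_CHARSET) &&
             !(query_type & QT_WITHOUT_INTRODUCERS) &&
             !char_is_ascii;
    }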
2011-03-04 18:43:28 +03:00