Commit graph

70871 commits

Author SHA1 Message Date
Tatjana Azundris Nuernberg
b4a7756186 Bug#11764559: UMASK IS IGNORED BY ERROR LOG
The mysqld_safe script did not heed the MySQL-specific environment variable
$UMASK, leading to divergent behavior between mysqld and mysqld_safe.

The patch adds an approximation of mysqld's behavior to mysqld_safe,
within the bounds dictated by the requirement that mysqld_safe run on
even the most basic of shells (proper '70s sh, not just bash
with a fancy symlink).

The patch also adds an approximation of said behavior to mysqld_multi
(in Perl).

(backport)

manual merge
2012-10-17 07:36:40 +01:00
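
A minimal, standalone C++ sketch of the behavior being approximated (mysqld
honoring an octal $UMASK environment variable). The default mask value and the
program itself are illustrative assumptions; the actual fix lives in the
mysqld_safe shell script and mysqld_multi (Perl).

  #include <cstdio>
  #include <cstdlib>
  #include <sys/stat.h>    // umask()
  #include <sys/types.h>   // mode_t

  int main() {
    // Assumed default if $UMASK is not set; mysqld's real default may differ.
    mode_t mask = 0027;
    if (const char *env = std::getenv("UMASK")) {
      // Accept octal input such as "0027" or "027".
      mask = static_cast<mode_t>(std::strtoul(env, nullptr, 8) & 0777);
    }
    umask(mask);
    std::printf("effective umask: %03o\n", mask);
    return 0;
  }
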
Tatjana Azundris Nuernberg
b86aea6ce5 Bug#11764559: UMASK IS IGNORED BY ERROR LOG
The mysqld_safe script did not heed the MySQL-specific environment variable
$UMASK, leading to divergent behavior between mysqld and mysqld_safe.

The patch adds an approximation of mysqld's behavior to mysqld_safe,
within the bounds dictated by the requirement that mysqld_safe run on
even the most basic of shells (proper '70s sh, not just bash
with a fancy symlink).

The patch also adds an approximation of said behavior to mysqld_multi
(in Perl).
2012-10-17 07:22:06 +01:00
Neeraj Bisht
510d048b7c Bug#11745891 - LAST_INSERT(ID) DOES NOT SUPPORT BIGINT UNSIGNED
Problem:-
Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does
not work for values which are greater than the maximum signed BIGINT.

Analysis:-
last_insert_id() returns the first auto-incremented value for a column,
and an auto-incremented value can only be positive.

In our code, when initializing the last_insert_id object, we were
treating it as a signed BIGINT, so when the auto-incremented value grows
beyond the maximum signed BIGINT, last_insert_id() gives a negative result.

Solution:
When fetching the value from last_insert_id(), we now set the
unsigned_flag, so that it takes only an unsigned BIGINT value.

sql/item_func.cc:
  Here the unsigned value was being converted to a signed value.
sql/item_func.h:
  last_insert_id() gives an auto-incremented value which can only be
  positive, so it is defined as an unsigned longlong and sets the
  unsigned_flag to 1.
2012-10-16 23:26:35 +05:30
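
A small standalone C++ illustration of the failure mode described above:
interpreting an auto-increment value above the signed BIGINT maximum as signed
makes it appear negative, while an explicit unsigned flag (as in the fix) keeps
the value intact. The sample value is an arbitrary assumption.

  #include <cstdint>
  #include <cstdio>

  int main() {
    // An auto-increment value above the signed BIGINT maximum (2^63 + 41).
    uint64_t auto_inc = (1ULL << 63) + 41;

    // Pre-fix behaviour: the value is viewed as signed and turns negative
    // (implementation-defined before C++20; wraps on common platforms).
    int64_t as_signed = static_cast<int64_t>(auto_inc);
    std::printf("signed view:   %lld\n", (long long) as_signed);

    // With an unsigned flag, as the fix sets, the value stays positive.
    bool unsigned_flag = true;
    if (unsigned_flag)
      std::printf("unsigned view: %llu\n", (unsigned long long) auto_inc);
    return 0;
  }
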
Neeraj Bisht
bdb4104cf6 Bug#11745891 - LAST_INSERT(ID) DOES NOT SUPPORT BIGINT UNSIGNED
Problem:-
Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does
not work for values which are greater than the maximum signed BIGINT.

Analysis:-
last_insert_id() returns the first auto-incremented value for a column,
and an auto-incremented value can only be positive.

In our code, when initializing the last_insert_id object, we were
treating it as a signed BIGINT, so when the auto-incremented value grows
beyond the maximum signed BIGINT, last_insert_id() gives a negative result.

Solution:
When fetching the value from last_insert_id(), we now set the
unsigned_flag, so that it takes only an unsigned BIGINT value.

sql/item_func.cc:
  Here the unsigned value was being converted to a signed value.
sql/item_func.h:
  last_insert_id() gives an auto-incremented value which can only be
  positive, so it is defined as an unsigned longlong and sets the
  unsigned_flag to 1.
2012-10-16 23:18:48 +05:30
unknown
5f37d738be 2012-10-16 14:27:33 +02:00
Marko Mäkelä
bb371b649d Merge mysql-5.1 to mysql-5.5. 2012-10-16 14:35:19 +03:00
Marko Mäkelä
aec624762b Bug#14729221 IN-PLACE ALTER TABLE REPORTS '' INSTEAD OF
REAL DUPLICATE VALUE FOR PREFIX KEYS

innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
instead of dict_index_get_nth_col_pos() to find the column.
2012-10-16 14:24:15 +03:00
Krunal Bauskar krunal.bauskar@oracle.com
04c7730006 Removed warning messages, as they have changed in mysql-5.6 and mysql-trunk; this is left over from changes that got up-merged. 2012-10-15 14:16:46 +05:30
Krunal Bauskar krunal.bauskar@oracle.com
ed6732dd16 bug#14704286
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)

If a secondary index is being used for SELECT query evaluation and this
query is operating with a consistent read snapshot, it might take a long time
for the secondary index to return control to MySQL, as MVCC kicks in.

If the user issues "kill query <id>" while the query is actively accessing the
secondary index, it will not be honored, as there is no hook to check
for this condition. Added a hook for this check.

-----
In parallel, the case of a secondary index taking too long to evaluate under a
consistent read snapshot is being examined for performance improvement. WL#6540.
2012-10-15 09:49:50 +05:30
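
A hedged, standalone analogue of the added hook: a long-running secondary
index scan periodically checks a kill flag so that KILL QUERY takes effect.
The flag, batch size, and function names are illustrative stand-ins for the
server's internal interfaces, not the actual InnoDB code.

  #include <atomic>
  #include <cstdio>

  std::atomic<bool> query_killed{false};   // set when "kill query <id>" arrives

  bool scan_secondary_index(long n_rows) {
    for (long row = 0; row < n_rows; ++row) {
      // Check the kill flag every 1000 records so the check stays cheap.
      if (row % 1000 == 0 && query_killed.load(std::memory_order_relaxed)) {
        std::puts("scan aborted: query was killed");
        return false;
      }
      // ... visit one index record, possibly doing MVCC undo lookups ...
    }
    return true;
  }

  int main() {
    query_killed = true;                    // simulate KILL QUERY <id>
    scan_secondary_index(1000000);
    return 0;
  }
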
Krunal Bauskar krunal.bauskar@oracle.com
9bfc910f2f bug#14704286
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)

If a secondary index is being used for SELECT query evaluation and this
query is operating with a consistent read snapshot, it might take a long time
for the secondary index to return control to MySQL, as MVCC kicks in.

If the user issues "kill query <id>" while the query is actively accessing the
secondary index, it will not be honored, as there is no hook to check
for this condition. Added a hook for this check.

-----
In parallel, the case of a secondary index taking too long to evaluate under a
consistent read snapshot is being examined for performance improvement. WL#6540.
2012-10-15 09:24:33 +05:30
Marc Alff
206d4f13db Merge mysql-5.1 --> mysql-5.5 2012-10-12 22:59:21 +02:00
Marc Alff
fc1fbe159a Bug#14629232 SECURITY VULNERABILITY WITH SHOW PROFILE
This fix resolves a security vulnerability of SHOW PROFILE.

See the bug report for details.
2012-10-12 19:38:45 +02:00
Vasil Dimov
614496d502 Null merge mysql-5.1 -> mysql-5.5 (the problem fixed in 5.1 does not
exist in 5.5).
2012-10-10 22:25:16 +03:00
Vasil Dimov
1d16fc16dc Fix compilation error in debug mode:
os/os0file.c:1332: error: ISO C90 forbids mixed declarations and code
2012-10-10 22:22:10 +03:00
Vasil Dimov
61bd7d0ce9 Merge mysql-5.1 -> mysql-5.5 2012-10-09 16:41:13 +03:00
Vasil Dimov
4cefe863ae Port the test for Bug#14708715 from 5.1/innodb_plugin into 5.1/innodb
although the bug does not exist in 5.1/innodb.
2012-10-09 16:29:00 +03:00
Vasil Dimov
8baabf3090 Update the ChangeLog with the fix of Bug#14708715 2012-10-09 16:08:06 +03:00
Vasil Dimov
93398c307f Fix Bug#14708715 CREATE TABLE MEMORY LEAK ON DB_OUT_OF_FILE_SPACE
The problem is in the error handling in row_create_table_for_mysql().
In the 'disk full' case we may forget to call dict_mem_table_free() on
the table object.

Approved by:	Marko (rb:1377 and rb:1386)
2012-10-09 16:02:58 +03:00
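
A standalone sketch of the error-handling pattern behind the fix: the
'disk full' error path must release the table object it was handed.
DB_OUT_OF_FILE_SPACE is the real InnoDB error name; the structures and
ownership rules below are simplified assumptions.

  #include <cstdio>
  #include <cstdlib>

  struct table_def { const char *name; };

  enum db_err { DB_SUCCESS, DB_OUT_OF_FILE_SPACE };

  db_err create_table(table_def *table, bool disk_full) {
    if (disk_full) {
      std::free(table);             // the fix: free the object on this path too
      return DB_OUT_OF_FILE_SPACE;  // pre-fix code returned without freeing
    }
    // ... persist the definition; ownership of 'table' is consumed here ...
    std::free(table);
    return DB_SUCCESS;
  }

  int main() {
    table_def *t = static_cast<table_def *>(std::malloc(sizeof *t));
    t->name = "t1";
    std::printf("create_table -> %d\n", create_table(t, /*disk_full=*/true));
    return 0;
  }
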
Harin Vadodaria
5427d33e62 Bug #14211140: CRASH WHEN GRANTING OR REVOKING PROXY
PRIVILEGES

Description: The (user,host) pair from the security context is used
             for privilege checking at the time of granting or
             revoking proxy privileges. This creates a problem
             when the server is started with the
             --skip-name-resolve option, because host will not
             contain any value. Checks should depend on
             consistent values regardless of how the server is
             started. Further, the privilege check should use the
             (priv_user,priv_host) pair rather than values
             obtained from the inbound connection, because
             this pair represents the correct account context
             obtained from the mysql.user table.
2012-10-09 18:15:40 +05:30
Annamalai Gurusami
d5d53d1902 Fixing a compilation issue. 2012-10-09 12:25:02 +05:30
Praveenkumar Hulakund
35a05a6008 Bug#11756600 - SLAVE THREAD CAN CRASH IF EVENT SCHEDULER
FAILS TO READ EVENT TABLE AT STARTUP.

This issue is fixed in 5.5+ versions. This patch adds a test
case for this scenario.
2012-10-08 23:35:15 +05:30
Annamalai Gurusami
378a7d1ef5 Bug #14036214 MYSQLD CRASHES WHEN EXECUTING UPDATE IN TRX WITH
CONSISTENT SNAPSHOT OPTION

A transaction is started with a consistent snapshot.  After 
the transaction is started new indexes are added to the 
table.  Now when we issue an update statement, the optimizer
chooses an index.  When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".

This error message is propagated up to the SQL layer, but
the my_error() API is never called.  The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).

Hence the following check in Protocol::end_statement()
fails.

 516   case Diagnostics_area::DA_EMPTY:
 517   default:
 518     DBUG_ASSERT(0);
 519     error= send_ok(thd->server_status, 0, 0, 0, NULL);
 520     break;

The fix is to backport the fix of bugs 14365043, 11761652 
and 11746399. 

14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED

rb://1227 approved by guilhem and mattiasj.
2012-10-08 19:40:30 +05:30
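
A hedged, standalone analogue of the backported idea: the return code of the
index-initialization call must be checked and converted into a reported error,
so the diagnostics area never stays in DA_EMPTY. All types and names below are
simplified stand-ins, not the server's actual handler interface.

  #include <cstdio>

  struct Diagnostics { bool has_status = false; int err = 0; };

  static int change_active_index() { return 1030; }   // simulate an init failure

  static void report_error(Diagnostics &da, int err) {
    da.has_status = true;          // statement now carries a proper error status
    da.err = err;
  }

  static int run_update(Diagnostics &da) {
    int rc = change_active_index();
    if (rc != 0) {                 // pre-fix code ignored this return value
      report_error(da, rc);
      return rc;
    }
    return 0;
  }

  int main() {
    Diagnostics da;
    int rc = run_update(da);
    std::printf("rc=%d, diagnostics set: %d\n", rc, (int) da.has_status);
    return 0;
  }
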
Marko Mäkelä
52a4ef95e8 Merge mysql-5.1 to mysql-5.5.
Also, add debug check for trx_id sanity to row_upd_rec_sys_fields().
2012-10-08 16:18:54 +03:00
Marko Mäkelä
b06620868e Bug#14731482 UPDATE OR DELETE CORRUPTS A RECORD WITH A LONG PRIMARY KEY
We did not allocate enough bits for index->trx_id_offset, causing an
UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
to corrupt the PRIMARY KEY.

dict_index_t: Allocate enough bits.

dict_index_build_internal_clust(): Check for overflow of
index->trx_id_offset. Trip a debug assertion when overflow occurs.

rb:1380 approved by Jimmy Yang
2012-10-08 16:01:50 +03:00
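
A standalone illustration of the underlying bug class: a bit-field that is too
narrow silently truncates large offsets, and a debug assertion catches the
overflow once enough bits are allocated. The field widths are illustrative,
not the actual dict_index_t layout.

  #include <cassert>
  #include <cstdio>

  struct index_meta_narrow { unsigned trx_id_offset : 10; };  // max 1023
  struct index_meta_wide   { unsigned trx_id_offset : 12; };  // enough bits

  int main() {
    unsigned real_offset = 1100;   // PRIMARY KEY longer than 1024 bytes

    index_meta_narrow bad{};
    bad.trx_id_offset = real_offset & ((1u << 10) - 1);   // silently truncated
    std::printf("narrow field stores %u (corruption)\n",
                (unsigned) bad.trx_id_offset);

    index_meta_wide good{};
    assert(real_offset < (1u << 12));   // the fix also asserts on overflow
    good.trx_id_offset = real_offset;
    std::printf("wide field stores   %u\n", (unsigned) good.trx_id_offset);
    return 0;
  }
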
Jon Olav Hauglid
bfba296d40 Bug#14640599 MEMORY LEAK WHEN EXECUTING STORED ROUTINE EXCEPTION HANDLER
When a SP handler is activated, memory is allocated to hold the
MESSAGE_TEXT for the condition that caused the activation.

The problem was that this memory was allocated on the MEM_ROOT belonging
to the stored program. Since this MEM_ROOT is not freed until the
stored program ends, a stored program that causes lots of handler
activations can start using lots of memory. In 5.1 and earlier the
problem did not exist as no MESSAGE_TEXT was allocated if a condition
was raised with a handler present. However, this behavior led to
a number of other issues such as Bug#23032.

This patch fixes the problem by allocating enough memory for the
necessary MESSAGE_TEXTs in the SP MEM_ROOT when the SP starts and
then re-using this memory each time a handler is activated.
      
This is the 5.5 version of the patch.
2012-10-04 16:15:13 +02:00
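
A standalone sketch of the allocation strategy described above: reserve the
condition-message buffers once when the routine starts and reuse them on every
handler activation, so memory use stays flat. The buffer size and structure
names are assumptions for illustration.

  #include <cstdio>
  #include <vector>

  struct HandlerSlot {
    char message_text[512];   // fixed MESSAGE_TEXT buffer, allocated up front
  };

  int main() {
    // Allocated once when the stored program starts (one slot per handler).
    std::vector<HandlerSlot> slots(4);

    for (int activation = 0; activation < 1000000; ++activation) {
      // Each activation reuses the pre-allocated slot instead of allocating
      // a fresh copy on the routine's MEM_ROOT.
      std::snprintf(slots[0].message_text, sizeof slots[0].message_text,
                    "condition raised in activation %d", activation);
    }
    std::printf("last message: %s\n", slots[0].message_text);
    return 0;
  }
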
Tor Didriksen
30d35590a3 Bug#13713525 CREATE_INITIAL_DB.CMAKE IS FAILING ON WINDOWS, STILL "DEVENV" RETURNS 0
This bug depends on cmake version.

For cmake 2.6 (which is still in use for some pushbuild trees)
the main build would succeed, even if create_initial_db failed.

The problem was the chaining of commands in the CUSTOM_COMMAND
to produce 'initdb.dep'. It first invokes cmake to run mysqld,
then invokes 'touch' to create the file. Moving the 'touch'
command makes the error propagate properly for both cmake 2.6 and 2.8
2012-10-03 16:05:07 +02:00
Jon Olav Hauglid
2943c8131a Bug#14495351: CRASH IN HA_PARTITION::HANDLE_UNORDERED_NEXT
Follow-up patch - Fix broken build:
error: format ‘%u’ expects argument of type ‘unsigned int’,
but argument 2 has type ‘key_part_map {aka long unsigned int}’
[-Werror=format]
2012-10-03 15:00:43 +02:00
Tor Didriksen
540d0cd28e Bug#14683676 ENDLESS MEMORY CONSUMPTION IN SETUP_REF_ARRAY WITH MAX IN SUBQUERY
n_child_sum_items kept increasing.
Since it is used for calculating the size of ref_pointer_array,
we will allocate larger and larger chunks of memory, until we hit some
operating system limit.
The memory is free()d at disconnect, but is most likely *not*
returned to the operating system.
2012-10-01 13:12:38 +02:00
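
A standalone sketch of the leak pattern: a size counter that is never reset
between executions makes every execution allocate a larger buffer. Names and
numbers below are illustrative, not the server's actual bookkeeping.

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  int main() {
    std::size_t n_child_sum_items = 0;   // should be recomputed per execution
    for (int execution = 0; execution < 5; ++execution) {
      n_child_sum_items += 3;            // buggy pattern: keeps accumulating
      std::vector<void *> ref_pointer_array(10 + n_child_sum_items);
      std::printf("execution %d allocates %zu slots\n",
                  execution, ref_pointer_array.size());
    }
    return 0;
  }
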
Joerg Bruehe
5b8e3631b0 Merge 5.1.66 into main 5.1 2012-10-01 11:19:59 +02:00
Annamalai Gurusami
b59a64e2f3 Bug #13249921 ASSERT !BPAGE->FILE_PAGE_WAS_FREED, USUALLY IN
TRANSACTION ROLLBACK

Description:  During the rollback operation, a blob page 
is removed earlier than desired.  Consider the following scenario:

1. create table t1(a int primary key,b blob) engine=innodb;
2. insert into t1 values (1,repeat('b',9000));
3. begin;
4. update t1 set b=concat(b,'b');
5. update t1 set a=a+1;
6. insert into t1 values (1,repeat('b',9000));
7. rollback;

The update operation in line 5 produces 2 undo log records. The first
undo record (TRX_UNDO_DEL_MARK_REC) goes to trx->update_undo and the
second undo record (TRX_UNDO_INSERT_REC) goes to trx->insert_undo.
During rollback, they are executed out of order.

When the undo record TRX_UNDO_DEL_MARK_REC is applied/executed,
the blob ownership is also reset.  Because of this the blob page
is released earlier than desired.  This blob page should have been
freed only as part of applying/executing the undo record
TRX_UNDO_INSERT_REC.

This problem can be avoided by executing the undo records in
order.  This patch makes InnoDB execute the undo records
in order.

rb://1125 approved by Marko.
2012-09-28 16:02:58 +05:30
unknown
faca6ed81e 2012-09-26 18:29:09 +01:00
Akhila Maddukuri
422e6b520d Description:
-----------
After compiling from source, during make test I got the following error:

test main.loaddata failed with error
CURRENT_TEST: main.loaddata
mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1
CHARACTER SET ucs2
(@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'

I noticed other tests are skipped because of no ucs2
main.mix2_myisam_ucs2                    [ skipped ]  Test requires:'
have_ucs2'

Should main.loaddata be skipped if there is no ucs2?

How To Repeat:
-------------
Run make test on compiled source that doesn't have ucs2

Suggested fix:
-------------
The failing piece of the test should be moved from mysql-test/t/loaddata.test to
mysql-test/t/ctype_ucs.test.
2012-09-26 16:38:42 +05:30
Tor Didriksen
b079b388a5 Backport
Bug #11764313 57135: CRASH IN ITEM_FUNC_CASE::FIND_ITEM WITH CASE WHEN
Bug #11764818 57692: Crash in item_func_in::val_int() with ZEROFILL
2012-09-25 16:03:05 +02:00
unknown
66c7b315b1 2012-09-25 18:44:21 +05:30
unknown
bb11c81be5 2012-09-25 18:00:26 +05:30
Jon Olav Hauglid
58de166062 Bug#14621627 THREAD CACHE IS UNFAIR
When a client connects to a MySQL server, first a THD object is created.
If there are any idle server threads waiting, the THD object is then added
to a list and a server thread is woken up. This thread then retrieves the 
THD object from the list and starts executing.

The problem was that this list of THD objects waiting for a server thread
was not working in a FIFO fashion, but rather LIFO. This is unfair, as it means
that the last THD added (= last client connected) will be assigned a server
thread first.

Note however that for this to be a problem, several clients must be able
to connect and have THD objects constructed before any server thread
manages to be woken up. This is not a very likely scenario.

This patch fixes the problem by changing the THD list to work FIFO
rather than LIFO.

This is the 5.1/5.5 version of the patch.
2012-09-25 13:09:53 +02:00
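
A minimal illustration of the fairness point, using a std::deque as a stand-in
for the server's list of waiting THD objects (the real change is in the
thread-cache code, not in a standard container).

  #include <cstdio>
  #include <deque>

  int main() {
    std::deque<int> waiting;               // ids of connections waiting for a thread
    for (int id = 1; id <= 3; ++id) waiting.push_back(id);

    // LIFO (pre-fix): the most recently connected client is served first.
    std::printf("LIFO would serve: %d\n", waiting.back());

    // FIFO (the fix): the longest-waiting client is served first.
    std::printf("FIFO serves:      %d\n", waiting.front());
    waiting.pop_front();
    return 0;
  }
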
Raghav Kapoor
815aad6928 BUG#13864642: DROP/CREATE USER BEHAVING ODDLY
BACKGROUND:
In certain situations DROP USER fails to remove all privileges
belonging to the user being dropped from in-memory structures.
The current workaround is to run DROP USER twice in the scenario below,
or to run FLUSH PRIVILEGES after DROP USER.

ANALYSIS:
In MySQL, when we grant some stored routine privileges to a
user, they are stored in their respective hash.
When doing DROP USER, all the stored routine privilege entries
associated with that user have to be deleted from their respective
hash.
The root cause for this bug is that some entries from the hash
are not getting deleted.
The problem is that the code that deletes entries from the hash tries
to do so while iterating over it, without taking enough measures
to address the fact that such deletion can reshuffle elements in
the hash. If the user/administrator creates the same user again,
MySQL throws the error 'Error 1396 ER_CANNOT_USER'.
This prompts the user to either do FLUSH PRIVILEGES or do DROP USER
again. This behaviour is not desirable, as it is a workaround and
does not solve the problem mentioned above.

FIX:
This bug is fixed by introducing a dynamic array to store the
pointers to all stored routine privilege objects that either have
to be deleted or updated. This is done in 3 steps.
Step 1: Fetching the element from the hash and checking whether
it is to be deleted or updated.
Step 2: Storing the pointer to that privilege object in the dynamic array.
Step 3: Traversing the dynamic array to perform the appropriate action,
either delete or update.
This is a much cleaner way to delete or update the privilege entries
associated with some user and solves the problem mentioned above.
Also, the code has been refactored a bit by introducing an enum
instead of hard-coded numbers used for the respective dynamic arrays
and hashes in the handle_grant_struct() function.
2012-09-25 15:58:46 +05:30
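
A standalone analogue of the three-step fix: scan the hash once, remember the
entries that belong to the dropped user, and only then delete them, instead of
erasing while iterating. The real code uses MySQL's HASH and a DYNAMIC_ARRAY of
privilege-object pointers; the container and data here are illustrative.

  #include <cstdio>
  #include <string>
  #include <unordered_map>
  #include <vector>

  int main() {
    std::unordered_multimap<std::string, std::string> proc_privs = {
        {"bob", "p1"}, {"bob", "p2"}, {"alice", "p3"}, {"bob", "p4"}};

    // Step 1 + 2: iterate once and store pointers (here: iterators) to the
    // entries that must be removed for the dropped user.
    std::vector<decltype(proc_privs)::iterator> to_delete;
    for (auto it = proc_privs.begin(); it != proc_privs.end(); ++it)
      if (it->first == "bob") to_delete.push_back(it);

    // Step 3: perform the deletions only after iteration has finished.
    for (auto it : to_delete) proc_privs.erase(it);

    std::printf("entries left: %zu\n", proc_privs.size());
    return 0;
  }
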
Rohit Kalhans
7c671a7ead BUG#14548159: Followup patch to fix some issues on PB2 2012-09-23 15:45:22 +05:30
Rohit Kalhans
5530c5e38d BUG#14548159: NUMEROUS CASES OF INCORRECT IDENTIFIER
QUOTING IN REPLICATION 

Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.

Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
2012-09-22 17:50:51 +05:30
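
A small sketch of the quoting rule the specialized append functions implement:
wrap the identifier in backticks and double any backtick inside it, so the
statement written to the binary log parses back to the same name. The helper
below is illustrative, not the server's actual append code.

  #include <cstdio>
  #include <string>

  static std::string quote_identifier(const std::string &name) {
    std::string out = "`";
    for (char c : name) {
      if (c == '`') out += "``";   // escape an embedded backtick by doubling it
      else out += c;
    }
    out += "`";
    return out;
  }

  int main() {
    // Prints `weird``db``name`, which the parser reads back as weird`db`name.
    std::printf("%s\n", quote_identifier("weird`db`name").c_str());
    return 0;
  }
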
Nirbhay Choubey
f820334bbf Bug#14645196 MYSQL CLIENT'S USE COMMAND FAILS
WHEN DBNAME CONTAINS MULTIPLE QUOTES

MySQL client's USE command might fail if the
database name contains multiple quotes (backticks).

The reason behind the failure is the method
that the client uses to remove/escape the quotes
while parsing the USE command's option (dbname),
where the option parsing might terminate if a
matching quote is found.

Also, C APIs like mysql_select_db() expect a
normalized dbname. In certain cases, the client
might fail to normalize the dbname the same way the
server does, and hence mysql_select_db() would fail.

Fixed by getting the normalized dbname (indirectly)
from the server, by directly sending "USE dbname"
as a query to the server, followed by a "SELECT DATABASE()".
The above steps are only performed if the number of quotes
in the dbname is greater than 2. Once the normalized
dbname is received, the original db is restored.
2012-09-21 23:28:55 +05:30
unknown
373c428fbb 2012-09-20 12:31:26 +03:00
Marko Mäkelä
6bbe24e9a0 Bug#14636528 INNODB CHANGE BUFFERING IS NOT ENTIRELY CRASH-SAFE
Delete-mark change buffer records when resorting to a pessimistic
delete from the change buffer B-tree. Skip delete-marked records in
the change buffer merge and when estimating whether an operation can
be buffered. Without this fix, we could try to apply the same buffered
changes multiple times if the server was killed at the right moment.

In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
delete-marked (already processed) records.

ibuf_delete_rec(): Add a crash point before optimistic delete. If the
optimistic delete fails, flag the record processed before
mtr_commit().

ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
processed) records.

Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
btr_cur_set_deleted_flag_for_ibuf() and add a parameter.

rb:1307 approved by Jimmy Yang
2012-09-19 22:35:38 +03:00
Marko Mäkelä
a5f36f4c53 Merge mysql-5.1 to working copy. 2012-09-17 15:07:09 +03:00
Harin Vadodaria
9d007e075d Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST
INC_HOST_ERRORS() IS CALLED.

Issue       : The sequence of calls to inc_host_errors()
              and reset_host_errors() required some
              changes in order to maintain a correct
              connection error count.

Solution    : The call to reset_host_errors() is shifted
              to a location after which no calls to
              inc_host_errors() are made.
2012-09-17 17:02:17 +05:30
Marko Mäkelä
300f3fb77f Bug#12701488 ASSERT PAGE_ZIP_VALIDATE, UNIV_ZIP_DEBUG
page_zip_validate(), page_zip_validate_low(): Add a parameter for the
B-tree index.

page_zip_validate_low(): If the page contents do not match, check
that the record link chains match. Furthermore, if dict_index_t is
passed, check that the records match. (This reduces coverage a bit: if
index=NULL, we will ignore differences in record contents, that is,
the page payload.)

rb:1264 approved by Inaam Rana
2012-09-17 14:21:00 +03:00
Sujatha Sivakumar
5cbdb90827 Bug#11750014:ASSERTION TRX_DATA->EMPTY() IN BINLOG_CLOSE_CONNECTION
Problem:
=======

trx_data->empty() assert happens at `binlog_close_connection'

Analysis:
========

The trx_data->empty() function checks that there are no pending events
and that the transaction cache is empty. This function returns
"true" if no pending events are present and the cache is empty;
otherwise it returns false. The `binlog_close_connection' call
expects the above function to return true, but if the
return value is false then the assert is raised.

This bug was reproducible in a disk-full scenario. In this
scenario, try to do an insert operation so that
a new pending event is created and flushing this pending
event fails. Due to this failure the server goes down
and invokes `binlog_close_connection' for a clean closure.
Since the pending event still remains, the assert is raised.
This assert is caused only with non-transactional databases.


Fix:
===

In a disk-full scenario, when the insertion fails, the
transaction is rolled back and `binlog_end_trans`
is called to flush the pending events. But the flush operation
fails as the disk is full, and the function simply returns
`1' without taking any action to delete the pending event.

This leaves the event in place until the connection is
closed.  A `delete pending' statement has been added to
do the required clean-up action.

sql/log.cc:
  Added "delete pending" statement to clean pending event
2012-09-17 11:48:02 +05:30
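
A standalone analogue of the added cleanup in sql/log.cc: when flushing the
pending event fails (disk full), delete the pending event before returning the
error, so nothing is left behind at connection close. The types below are
simplified assumptions, not the server's binlog classes.

  #include <cstdio>

  struct Event { const char *data; };

  struct TrxCache {
    Event *pending = nullptr;
    bool empty() const { return pending == nullptr; }
  };

  static bool disk_is_full() { return true; }   // simulate the failure

  static int flush_pending(TrxCache &cache) {
    if (disk_is_full()) {
      delete cache.pending;       // the added "delete pending" cleanup
      cache.pending = nullptr;
      return 1;                   // still propagate the error
    }
    cache.pending = nullptr;      // flushed successfully
    return 0;
  }

  int main() {
    TrxCache cache;
    cache.pending = new Event{"row data"};
    int err = flush_pending(cache);
    std::printf("flush returned %d, cache empty: %d\n", err, (int) cache.empty());
    return 0;
  }
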
unknown
ab7358fb35 2012-09-12 15:08:44 +03:00
Tor Didriksen
f0b52a9e7e Backport Bug#13724099 2012-09-12 08:36:12 +02:00
Joerg Bruehe
813b661d00 Spec file change to work around cast ulonglong -> int. 2012-09-11 12:47:32 +02:00
Andrei Elkin
8a048ecb59 merge bug14597605 to the main repo. 2012-09-10 17:33:41 +03:00