Commit graph

64562 commits

Author SHA1 Message Date
unknown
a2be7d26a8 BUG #13625278 - PB2 SHOULD PROVIDE MORE USEFUL INFORMATION FOR TIMEOUTS 2013-02-05 11:58:21 +05:30
unknown
ddb60a719b Bug #16190704: MTR STILL LOSES THE FAILED RUN LOGS AT RETRY-FAIL 2013-02-04 20:25:30 +05:30
Gleb Shchepa
7ebfe30b53 Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY
Some queries with nested "SELECT ... FROM DUAL" subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.

There were a few different issues with similar assertion
failures on different queries:

1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).

Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they aren't
treated as constants from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-constantness of such subqueries was missed.

Fix: the Item_ref::const_item() function has been overridden to
take into account both the (*ref)->const_item() status and the
tricky Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there (a simplified
sketch follows at the end of this message).

2. In some cases, instead of calling const_item(), we check the
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.

Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.

3. The Item_func_regex class didn't propagate with_subselect
either, since it overrides the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.

Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.

4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of the
"args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it's OK to call Item_func_isnull::val_int() etc. from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.

Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.

5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that the Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
  <  <is_not_null_test>(a),
  ---
  >  <cache>(<is_not_null_test>(a)),
or
  < having (<is_not_null_test>(a) and <is_not_null_test>(a))
  ---
  > having 1
etc.

Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
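A minimal standalone sketch of the idea behind fix 1 (the
Item_ref::const_item() override). The class shapes below are illustrative
stand-ins, not the real server headers; only the const_item()/used_tables()
interplay described above is modelled.

  #include <cstdint>

  typedef uint64_t table_map;

  struct Item {                                   // drastically reduced Item stand-in
    virtual table_map used_tables() const = 0;
    virtual bool const_item() const              // old default rule:
    { return used_tables() == 0; }               // "no used tables" == constant
    virtual ~Item() {}
  };

  // A "SELECT ... FROM DUAL" subquery: empty used-table map, yet never
  // constant during context analysis.
  struct Item_subselect_stub : Item {
    table_map used_tables() const { return 0; }
    bool const_item() const { return false; }
  };

  struct Item_ref : Item {
    Item **ref;
    explicit Item_ref(Item **r) : ref(r) {}
    table_map used_tables() const { return (*ref)->used_tables(); }
    // The fix, in spirit: also consult the referenced item, because the
    // "used_tables() == 0" check alone wrongly classifies the subquery
    // stand-in above as constant.
    bool const_item() const
    { return (*ref)->const_item() && used_tables() == 0; }
  };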
2013-01-31 08:46:30 +04:00
Yasufumi Kinoshita
c3d2803c43 Bug #16220051 : INNODB_BUG12400341 FAILS ON VALGRIND WITH TOO MANY ACTIVE CONCURRENT TRANSACTION
innodb_bug12400341.test is disabled for the Valgrind daily test.
It might be affected by undo slots left over from the previous test,
because of the slower execution.
2013-01-31 12:42:43 +09:00
Chaithra Gopalareddy
e1ee9581cb Bug#14096619: UNABLE TO RESTORE DATABASE DUMP
Backport of Bug#13581962

mysql-test/r/cast.result:
  Added test result for Bug#13581962,Bug#14096619
mysql-test/t/cast.test:
  Added test case for Bug#13581962,Bug#14096619
sql/item_func.h:
  limit max length by MY_INT64_NUM_DECIMAL_DIGITS
2013-01-31 06:39:15 +05:30
unknown
7aa707f233 2013-01-30 15:17:19 +01:00
Krunal Bauskar krunal.bauskar@oracle.com
ed15e9c270 - BUG#1608883: KILLING A QUERY INSIDE INNODB CAUSES IT TO EVENTUALLY CRASH
WITH AN ASSERTION

  Correcting the build failure caused by the changes
  checked in under the revision mentioned below.
  (Changes: DEBUG_SYNC_C should be disabled for innodb_plugin under
   the Windows environment. Note: only for innodb_plugin.)

  revno: 3915
  revision-id: krunal.bauskar@oracle.com-20130114051951-ang92lkirop37431
  parent: nisha.gopalakrishnan@oracle.com-20130112054337-gk5pmzf30d2imuw7
  committer: Krunal Bauskar krunal.bauskar@oracle.com
  branch nick: mysql-5.1
  timestamp: Mon 2013-01-14 10:49:51 +0530
2013-01-30 08:17:24 +05:30
Neeraj Bisht
265814f2ae Bug#16208709 - CRASH IN GET_SEL_ARG_FOR_KEYPART ON SELECT DISTINCT
ON COL WITH COMPOSITE INDEX

This problem is caused by the patch for Bug#11751794.
While checking for the keypart covering a non-grouping attribute, we are not
checking whether the root node of the SEL_ARG* tree for the index has any
value or not.
2013-01-29 10:05:00 +05:30
Nuno Carvalho
d1378565bb BUG#16200555: EMPTY NAME FOR USER VARIABLE IS ALLOWED AND BREAKS STATEMENT BINARY LOGGING
In a previous fix, user variables with a zero-length name were incorrectly
treated as event corruption, despite being allowed by the server.

Fix this wrong assumption by again allowing user variables with a zero-length
name in the binary log.
2013-01-28 19:05:09 +00:00
Venkatesh Duggirala
7e0901b97f Bug#16084594 USER_VAR ITEM IN 'LOAD FILE QUERY' WAS NOT
PROPERLY QUOTED IN BINLOG FILE
Problem: In a load data file query, user variables are allowed
inside "Into_list" and "Set_list". The user variables used
inside these two lists are not properly guarded with backticks
while the server is writing to the binlog. Hence user variable names
like a` cannot be used in this context.

Fix: Properly quote these variables while
writing to the binlog.

mysql-test/r/func_compress.result:
  changing result file
mysql-test/r/variables.result:
  changing result file
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
  changing result file
sql/item_func.cc:
  Quote the user variable items
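A small self-contained sketch of the kind of quoting described above (wrap
the user-variable name in backticks and double any embedded backtick). The
helper name and the std::string-based buffer handling are illustrative
assumptions, not the actual sql/item_func.cc code.

  #include <string>

  // Hypothetical helper: backtick-quote a user-variable name so that a
  // name such as  a`  is replayed safely from the binlog.
  static std::string quote_user_var(const std::string &name)
  {
    std::string out;
    out.reserve(name.size() + 2);
    out += '`';
    for (std::string::size_type i = 0; i < name.size(); i++) {
      if (name[i] == '`')
        out += '`';                 // escape a backtick by doubling it
      out += name[i];
    }
    out += '`';
    return out;
  }

  // Example: quote_user_var("a`") == "`a```".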
2013-01-28 14:41:54 +05:30
Venkata Sidagam
26f662be1c BUG#11908153 CRASH AND/OR VALGRIND ERRORS IN FIELD_BLOB::GET_KEY_IMAGE
Backporting bug patch from 5.5 to 5.1.
This fix is applicable to BUG#14362617 as well
2013-01-24 14:56:12 +05:30
Venkata Sidagam
776df0a366 Bug #11752803 SERVER CRASHES IF MAX_CONNECTIONS DECREASED BELOW
CERTAIN LEVEL
      
Problem description: mysqld crashes when we update the max_connections
variable to a value lower than the number of currently open connections.

Analysis: The "alarm_queue.max_elements" size is decided at
server start time, and it gets modified if we change the max_connections
value. In the current scenario the value of "alarm_queue.max_elements"
is decremented when max_connections is set to 2. When updating the
"alarm_queue.max_elements" value we are not updating the "max_used_alarms"
value. Hence, instead of getting the warning "thr_alarm queue is full",
the server ends up asserting at the time of inserting new
elements into the queue.

Fix: the fix is to dynamically increase the size of the alarm_queue.
In order to do that, queue_insert_safe() should be used instead of
queue_insert().
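A reduced illustration of the difference described in the fix: instead of
asserting when the queue is full (the queue_insert() behaviour), grow the
storage and then insert (the queue_insert_safe() behaviour). The real alarm
queue is a mysys QUEUE/binary heap; the vector-based stand-in below is only
a sketch.

  #include <cstdio>
  #include <vector>

  struct AlarmQueueStub {
    std::vector<int> elements;      // stand-in for the heap of pending alarms
    size_t max_elements;

    explicit AlarmQueueStub(size_t max_elems) : max_elements(max_elems) {}

    void insert_safe(int alarm)
    {
      if (elements.size() >= max_elements) {
        // Grow dynamically instead of hitting the "queue is full" assertion.
        max_elements = max_elements * 2 + 1;
        std::printf("thr_alarm queue grown to %zu elements\n", max_elements);
      }
      elements.push_back(alarm);
    }
  };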
2013-01-24 14:02:54 +05:30
Yasufumi Kinoshita
65cb30b3b9 Bug #16089381 : POSSIBLE NUMBER UNDERFLOW AROUND CALLING PAGE_ZIP_EMPTY_SIZE()
Some callers of page_zip_empty_size() ignored the possibility of it returning 0, which could cause an underflow.

rb#1837 approved by Marko
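A sketch of the guarded caller pattern implied above; page_zip_empty_size_stub()
and its formula are placeholders for the real function, the point being only
that a result of 0 must be checked before subtracting from it.

  #include <cstddef>

  // Stand-in: the real page_zip_empty_size() can legitimately return 0
  // for small compressed page sizes.
  static size_t page_zip_empty_size_stub(size_t n_fields, size_t zip_size)
  {
    size_t overhead = n_fields * 32;                // made-up overhead formula
    return zip_size > overhead ? zip_size - overhead : 0;
  }

  static size_t usable_space(size_t n_fields, size_t zip_size, size_t reserved)
  {
    size_t empty = page_zip_empty_size_stub(n_fields, zip_size);
    if (empty <= reserved)
      return 0;                   // guard: subtracting here would underflow
    return empty - reserved;
  }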
2013-01-23 14:59:36 +09:00
Gleb Shchepa
19ea7c031d Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY
Some queries with nested "SELECT ... FROM DUAL" subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.

There were a few different issues with similar assertion
failures on different queries:

1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).

Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they aren't
treated as constants from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-constantness of such subqueries was missed.

Fix: the Item_ref::const_item() function has been overridden to
take into account both the (*ref)->const_item() status and the
tricky Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there.

2. In some cases, instead of calling const_item(), we check the
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.

Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.

3. The Item_func_regex class didn't propagate with_subselect
either, since it overrides the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.

Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.

4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of the
"args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it's OK to call Item_func_isnull::val_int() etc. from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.

Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.

5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that the Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
  <  <is_not_null_test>(a),
  ---
  >  <cache>(<is_not_null_test>(a)),
or
  < having (<is_not_null_test>(a) and <is_not_null_test>(a))
  ---
  > having 1
etc.

Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
2013-01-23 09:51:50 +04:00
Marko Mäkelä
e7283ceaf0 Bug#16067973 DROP TABLE SLOW WHEN IT DECOMPRESS COMPRESSED-ONLY PAGES
buf_page_get_gen(): Do not attempt to decompress a compressed-only
page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by
btr_search_drop_page_hash_when_freed(). There cannot be any adaptive
hash index pointing to a page that does not exist in uncompressed
format in the buffer pool.

innodb_buffer_pool_evict_update(): New function for debug builds, to handle
SET GLOBAL innodb_buffer_pool_evict='uncompressed'
by evicting all uncompressed page frames of compressed tablespaces
from the buffer pool.

rb#1873 approved by Jimmy Yang
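A control-flow sketch of the BUF_PEEK_IF_IN_POOL change described above. The
enum, struct and helper below are simplified stand-ins for the InnoDB buffer
pool internals, not the real buf_page_get_gen() signature.

  enum buf_mode_stub { BUF_GET_STUB, BUF_PEEK_IF_IN_POOL_STUB };

  struct buf_page_stub { bool compressed_only; };

  // When the caller is only peeking and the page exists only in compressed
  // form, report it as absent instead of decompressing it: no adaptive hash
  // index entry can point to a page with no uncompressed frame.
  static buf_page_stub *get_page_for_mode(buf_page_stub *bpage, buf_mode_stub mode)
  {
    if (bpage && bpage->compressed_only && mode == BUF_PEEK_IF_IN_POOL_STUB)
      return nullptr;             // skip the (pointless) decompression
    return bpage;                 // the real code would decompress here if needed
  }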
2013-01-21 14:59:49 +02:00
Venkatesh Duggirala
bc21e8cd69 Bug#11752707-SLAVE CRASHES IF RBR HAS AS DESTINATION A VIEW
RATHER THAN A TABLE

Problem: In RBR, if a table is converted into a view at the slave
(i.e., "drop table 'object1'" & "create view 'object1'"), then any
DML operation on the table at the master causes a crash at the slave.

Analysis: The slave prepares the list of tables to be opened for DML when it
receives Table_map_log_event(s), and the same list is sent to the
open_table function. The open_table logic assumes that if the list
contains a view object, it also contains the "select_lex" object of
that view. In the above special case, the table object does not
contain 'select_lex', as it is a base table at the master. Since it
is a view at the slave, the open_table logic goes to the 'mysql_make_view()'
function, which assumes that 'select_lex' exists for the object.

Fix: While preparing the 'tables to be opened' list, we should make
sure that the required table type is 'base table'. If it is not a
base table while opening the object, mysql_make_view will throw an
error similar to 'object is not a base table'.

sql/log_event.cc:
  Require that all table_map_log_event objects are
  base tables on the slave as well.
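A simplified sketch of the base-table requirement described in the fix; the
structures are illustrative stand-ins for the TABLE_LIST entries built from
Table_map_log_event, not the real sql/log_event.cc code.

  #include <stdexcept>
  #include <string>

  enum table_kind { BASE_TABLE, VIEW };

  struct rbr_table_entry {
    std::string name;
    table_kind  opened_as;        // what the object turned out to be on the slave
    table_kind  required_type;    // what the row event expects
  };

  // Row events always require a base table, so an object that is now a
  // view is reported as an error instead of being "opened" without a
  // select_lex and crashing later in mysql_make_view().
  static void check_rbr_target(const rbr_table_entry &t)
  {
    if (t.required_type == BASE_TABLE && t.opened_as == VIEW)
      throw std::runtime_error("'" + t.name + "' is not a base table");
  }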
2013-01-19 06:01:46 +05:30
Anirudh Mangipudi
01208b5b0f BUG#14117025: UNABLE TO RESTORE DUMP
Problem: When a view with a specific character set and collation
is created on another view with a different character set and collation, the
dump restoration results in an "illegal mix of collations" error.
SOLUTION: To avoid this confusion of collations, the datatype used in the
created table is hardcoded as "tinyint NOT NULL". This does not matter, as the
table created is dropped at runtime; tinyint is specifically used to
avoid hitting row size conflicts.
2013-01-16 18:26:27 +05:30
Neeraj Bisht
3930dbf75c Bug#11751794 MYSQL GIVES THE WRONG RESULT WITH SOME SPECIAL USAGE
Consider the following query:

SELECT f_1,..,f_m, AGGREGATE_FN(C)
FROM t1
WHERE ...
GROUP BY ...

Loose index scan ("Using index for group-by") can be used for
this query if there is an index 'i' covering all fields in the
select list, and the GROUP BY clause makes up a prefix f1,...,fn
of 'i'. Furthermore, according to rule NGA2 of
get_best_group_min_max(), the WHERE clause must contain a
conjunction of equality predicates for all fields fn+1,...,fm.

The problem in this bug was that a query with a WHERE clause that
broke NGA2 was not detected and therefore used loose index scan.
This led to a wrong result. The query had an index
covering (c1,c2) and had:
  "WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
   GROUP BY c1"
or 
  "WHERE (c1 = 1 ) OR (c1 = 2 AND c2 = 'b')
   GROUP BY c1"


This WHERE clause cannot be transformed to a conjunction of
equality predicates.

The solution is to introduce another rule, NGA3, that complements
NGA2. NGA3 says that if a gap field (field between those
listed in GROUP BY and C in the index) has a predicate, then
there can only be one range in the query. This requirement is
more strict than it has to be in theory. BUG 15947433 will deal
with that.


sql/opt_range.cc:
  check for the repetition of a non-group field.
2013-01-16 15:03:42 +05:30
Neeraj Bisht
65af83f642 Bug#11758009 - UNION EXECUTION ORDER WRONG ?
Problem:-
In the case of a blob data field, UNION ALL doesn't give the correct result.

Analysis:-
In a MyISAM table, when we don't want to check distinctness for a particular
key, we set the key_map to zero.

While writing a record to a MyISAM table, we check distinctness with the help
of keys, by checking whether that key is active in the key_map and then writing
the record.

In the case of a blob field, we check for distinctness via a unique constraint,
where we do not check whether that unique key is active or not in the key_map.

Solution:-
Before checking for distinctness, check whether any key is active in the key_map.

storage/myisam/mi_write.c:
  check whether key_map is active before checking distinct.
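A tiny sketch of the check added by the solution: before the unique-constraint
(blob) distinct check, verify that any key is still active in the key map. The
key_map type and function name are stand-ins for the MyISAM internals.

  #include <cstdint>

  typedef uint64_t key_map_stub;    // bit k set => key k participates in checks

  // UNION ALL disables all keys by setting the key map to zero, so the
  // unique-constraint check on the blob column must be skipped as well.
  static bool duplicate_check_needed(key_map_stub key_map)
  {
    return key_map != 0;            // only check distinctness if a key is active
  }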
2013-01-15 14:24:35 +05:30
Neeraj Bisht
99645e5be5 BUG#14303860 - EXECUTING A SELECT QUERY WITH TOO
MANY WILDCARDS CAUSES A SEGFAULT

Back port from 5.6 and trunk
2013-01-14 14:59:48 +05:30
Krunal Bauskar krunal.bauskar@oracle.com
54c47527e2 - BUG#1608883: KILLING A QUERY INSIDE INNODB CAUSES IT TO EVENTUALLY CRASH
WITH AN ASSERTION

  Recently we added a check to handle the kill query signal for long-running
  queries.
  While the query interruption is reported, we must ensure the cursor is restored
  to a proper state for the HANDLER interface to work correctly.
  A normal select query will not face this problem: on receiving the interrupt,
  the select query is aborted and a new select query results in re-initialization
  (including the cursor).

  rb://1836. Approved by Marko.
2013-01-14 10:49:51 +05:30
Nisha Gopalakrishnan
c4afaa4242 BUG#11757250: REPLACE(...) INSIDE A STORED PROCEDURE.
Analysis:
--------

The REPLACE operation produces incorrect output when a
user variable is supplied as an argument and there
are multiple rows on which the operation is performed.

Consider the example below:

SET @var='(( 00000000 ++ 00000000 ))';
SELECT REPLACE(@var, '00000000', table_name) AS a FROM
INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';

Invalid output:
  +---------------------------------------+
  | REPLACE(@var, '00000000', TABLE_NAME) |
  +---------------------------------------+
  | (( columns_priv ++ columns_priv ))    |
  | (( columns_priv ++ columns_priv ))    |
      ......
      ......
  | (( columns_priv ++ columns_priv ))    |
  | (( columns_priv ++ columns_priv ))    |
  | (( columns_priv ++ columns_priv ))    |
  +---------------------------------------+

The user variable supplied as the string argument to the REPLACE
operation is overwritten after the first iteration
to '(( columns_priv ++ columns_priv ))'.
This overwritten string is then used for the subsequent REPLACE
iterations. Since the pattern string is no longer found, the
invalid output shown above is returned.

Fix:
---
If the Alloced_length is zero, realloc() and create a
copy of the string, which is then used for the REPLACE
operation in every iteration.
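A reduced model of the fix: before REPLACE rewrites the string in place, take
a private copy whenever the item does not own the buffer (Alloced_length == 0),
so later rows still see the original user-variable value. The struct below is
a stand-in for the server's String class.

  #include <cstdlib>
  #include <cstring>

  struct StringBuf {
    char  *ptr;
    size_t length;
    size_t alloced_length;        // 0 => buffer is not ours, do not modify in place
  };

  static void ensure_owned_copy(StringBuf *s)
  {
    if (s->alloced_length != 0)
      return;                                    // already our own allocation
    char *copy = (char *) std::malloc(s->length + 1);
    if (!copy)
      return;                                    // allocation-failure handling elided
    std::memcpy(copy, s->ptr, s->length);
    copy[s->length] = '\0';
    s->ptr = copy;
    s->alloced_length = s->length + 1;
  }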
2013-01-12 11:13:37 +05:30
Aditya A
21bdf21380 Bug#15843818 PARTITIONING BY RANGE WITH TO_DAYS ALWAYS
INCLUDES FIRST PARTITION WHEN PRUNING


PROBLEM
-------

TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, which
are stored in the first partition; therefore the first partition
was always included in the scan when a range was specified.


FIX
---

The fix is a small optimization which prunes the scanning of the
NULL/first partition if the dates specified in the range are valid
and in the same year and month. The TO_SECONDS() function is not
supported in 5.1, so it has been removed from the fix and test
scripts for the mysql-5.1 version.
2013-01-11 16:27:37 +05:30
Chaithra Gopalareddy
8b41f491c8 Bug#11760726: LEFT JOIN OPTIMIZED INTO JOIN LEADS TO
INCORRECT RESULTS

This is a backport of fix for Bug#13068506.

mysql-test/r/join_outer.result:
  Added test result for Bug#13068506
mysql-test/t/join_outer.test:
  Added test case for Bug#13068506
sql/item.h:
  Implement Item_outer_ref::not_null_tables()
2013-01-10 16:17:13 +05:30
Praveenkumar Hulakund
5e399d1cd0 Bug#11749556: DEBUG ASSERTION WHEN ACCESSING A VIEW AND
AVAILABLE MEMORY IS TOO LOW 

Analysis:
---------
In function "mysql_make_view", "table->view" is initialized
after parsing(using File_parser::parse) the view definition.
If "::parse" function fails then control is moved to label 
"err:". Here we have assert (table->view == thd->lex). 
This assert fails if "::parse" function fails, as 
table->view is not initialized yet.

File_parser::parse fails if data being parsed is incorrect/
corrupted or when memory allocation fails. In this scenario
its failing because of failure in memory allocation.

Fix:
---------
In case of failure in function "File_parser::parse", moving
to label "err:" is incorrect. Modified code to move
to label "end:".
2013-01-10 14:34:27 +05:30
Sunny Bains
df5444d47c Bug#13997024 SEGV IN SYNC_ARRAY_CELL_PRINT PRINTING OUT LONG SEMAPHORE WAIT DATA
Backport fix from mysql-5.6.
2013-01-10 10:01:50 +11:00
unknown
7a89d68f01 Raise version number after cloning 5.1.68 2013-01-08 12:42:36 +01:00
Satya Bodapati
1dbc62ce2e Post Fix to Bug#14628410 - ASSERTION `! IS_SET()' FAILED IN
DIAGNOSTICS_AREA::SET_OK_STATUS

Use DBUG_RETURN() instead of return() if DBUG_ENTER() is
used in the function. This patch is to fix the Windows
pb2 failure on mysql-5.1.

Approved by Marko. rb#1792
2013-01-07 16:56:16 +05:30
Nirbhay Choubey
e7c2ae94dd Bug#16066243 PB2 FAILURES I_MAIN.BUG15912213 AND
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1

Part 2: Fix for test failures on Windows.
2013-01-07 16:16:08 +05:30
unknown
7a9cf3ccb5 2013-01-04 18:32:42 +01:00
Satya Bodapati
eab9f8f4f4 Post Fix to Bug#14628410 - ASSERTION `! IS_SET()' FAILED IN
DIAGNOSTICS_AREA::SET_OK_STATUS

The test fails on the 5.1 Valgrind build. This is because of a close(-1)
system call.

Fixed by adding extra checks for a valid file descriptor.

Approved by Vasil(Calvin). rb#1792
2013-01-04 17:30:39 +05:30
Nirbhay Choubey
1ef420b8d4 Bug#16066243 PB2 FAILURES I_MAIN.BUG15912213 AND
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1

While converting a directory name to a file name, a
file separator (FN_LIBCHAR) might get appended
to the resulting file name. This can result in an
off-by-one error when the length of the input string
is equal to FN_REFLEN. In this case, the terminating
'\0' gets written beyond the buffer allocated to store
the result.

Fixed by incrementing the dst buffer size by 1. As
extra safety, switched to strnmov() and added a debug
assert to check the length of the input file name.

No test case added as the scenario is already
covered by the test cases added for bugs in
the description.
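A standalone sketch of the corrected conversion: the destination buffer
reserves one extra byte for a possibly appended separator plus the terminating
'\0', with a debug-only length assertion on the input. FN_REFLEN_STUB,
FN_LIBCHAR_STUB and the copy logic are simplified stand-ins for the mysys
directory-name helpers.

  #include <cassert>
  #include <cstring>

  static const size_t FN_REFLEN_STUB  = 512;   // stand-in for FN_REFLEN
  static const char   FN_LIBCHAR_STUB = '/';   // stand-in for FN_LIBCHAR

  static void dir_to_filename(const char *dir, char *dst, size_t dst_size)
  {
    assert(std::strlen(dir) < FN_REFLEN_STUB);   // debug check on the input length
    assert(dst_size >= FN_REFLEN_STUB + 1);      // the "+ 1" from the fix
    std::strncpy(dst, dir, dst_size - 2);        // bounded copy, room for sep + '\0'
    dst[dst_size - 2] = '\0';
    size_t len = std::strlen(dst);
    if (len && dst[len - 1] != FN_LIBCHAR_STUB) {
      dst[len]     = FN_LIBCHAR_STUB;            // append the separator safely
      dst[len + 1] = '\0';
    }
  }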
2013-01-04 16:38:12 +05:30
Venkatesh Duggirala
3932392030 BUG#11753923-SQL THREAD CRASHES ON DISK FULL
Problem: If the disk becomes full while writing to the binlog,
the server instance hangs until someone frees up space.
After the user frees up the disk space, the MySQL server crashes
with an assert (m_status != DA_EMPTY).

Analysis: wait_for_free_space is called in an
infinite loop, i.e., the server instance will hang until
someone frees up space. So there is no need to
set the status bit in the diagnostics area.

Fix: Replace my_error/my_printf_error with
sql_print_warning(), which prints the warning to the error log.

include/my_sys.h:
  Provision to call sql_print_warning from mysys files
mysys/errors.c:
  Replace my_error/my_printf_error with
  sql_print_warning() which prints the warning in error log.
mysys/my_error.c:
  implementation of my_printf_warning
mysys/my_write.c:
  Adding logic to break infinite loop in the simulation
sql/mysqld.cc:
  Provision to call sql_print_warning from mysys files
2013-01-02 16:31:58 +05:30
Kent Boortz
10f8266d50 Updated README and client executables copyright year to 2013 2013-01-01 03:33:40 +01:00
unknown
2fb1b2d74c 2012-12-29 23:46:31 +05:30
Venkatesh Duggirala
ec70b93e7b BUG#14726272- BACKPORT FIX FOR BUG 11746142 TO 5.5 AND 5.1
Details of BUG#11746142: CALLING MYSQLD WHILE ANOTHER 
INSTANCE IS RUNNING, REMOVES PID FILE
Fix: Before removing the pid file, ensure it was created
by the same process; leave it intact otherwise.

sql/mysqld.cc:
  delete_pid_file() introduced, which checks that the pid file
          belongs to the process before removing it
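A POSIX-flavoured sketch of the delete_pid_file() idea described above: read
the pid recorded in the file and remove the file only when it matches our own
pid. The function name mirrors the description, but the file handling is a
simplified assumption, not the actual sql/mysqld.cc code.

  #include <cstdio>
  #include <unistd.h>              // getpid(); POSIX assumed for this sketch

  static void delete_pid_file_sketch(const char *pid_path)
  {
    FILE *f = std::fopen(pid_path, "r");
    if (!f)
      return;                               // no pid file, nothing to do
    long file_pid = 0;
    int  got = std::fscanf(f, "%ld", &file_pid);
    std::fclose(f);
    if (got == 1 && file_pid == (long) getpid())
      std::remove(pid_path);                // ours: safe to delete
    // otherwise another running instance owns it -- leave it intact
  }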
2012-12-28 16:13:48 +05:30
Nirbhay Choubey
825459b8cd Bug#16046140 BIN/MYSQLD_SAFE: TEST: ARGUMENT EXPECTED
Some shell interpreters do not support the '-e' test
primary for constructing conditions.

man test 1 (on S10)
...skip...
-e file True if file exists. (Not available in sh.)
...skip...

Hence, checking for the existence of a file using
'-e' might result in a syntax error in such
shells.

Fixed by replacing it with '-f'.
2012-12-27 17:33:34 +05:30
Mattias Jonsson
6b7182d9a3 Bug#14589559 Post push fix for valgrind warnings. 2012-12-27 02:27:00 +01:00
Chaithra Gopalareddy
fa61c0499a Bug#12347040: MEMORY LEAK IN CONVERT_TZ COULD POSSIBLY CAUSE
DOS ATTACKS
      
Problem:
For detailed description, see Bug#42502. This bug is a duplicate
of Bug#42502. The complete fix for Bug#42502 was not made as
proposed. Hence the bug still persists.
      
Fix:
Make the changes as originally proposed for the fix of Bug#42502,
which is to remove the memory allocation that happens before we
actually check for any errors.

sql/tztime.cc:
  Remove the double allocation for tz_info
2012-12-26 20:21:19 +05:30
unknown
5cf9e19365 Merge from mysql-5.1.67-release 2012-12-26 12:42:47 +01:00
Annamalai Gurusami
76059a4a1d Fixing a pb2 issue. There is some difference in the EXPLAIN output between my local machine and the pb2 machines. 2012-12-24 16:49:42 +05:30
Chaithra Gopalareddy
259a5a301c Bug#11757005: UNION CONVERTS UNSIGNED MEDIUMINT AND BIGINT
TO SIGNED
Problem:
When we join field types in a UNION, we usually
upgrade the datatypes to the largest one present in the query.
In the case of MEDIUMINT, this was not happening.
Analysis:
When joined with the types LONG and LONGLONG, MEDIUMINT should get
upgraded to LONG and LONGLONG respectively.
In the given query, the constant '1' is created internally as a LONGLONG
with the SIGNED flag enabled. As a result, while combining
types for the field, LONGLONG together with MEDIUMINT gets converted
to LONG first. LONG with MEDIUMINT (of the third select) gets converted
to MEDIUMINT. The SIGNED flag would be that of the first field.
As a result, the final result would be SIGNED MEDIUMINT.
Fix:
While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with LONG
are converted to LONGLONG and LONG respectively. Also made some
changes for FLOAT and DOUBLE.


sql/field.cc:
  Changed merge types for MEDIUMINT.
2012-12-24 06:39:54 +05:30
Tor Didriksen
4a35c6d44b Bug#16027468 ADDRESSSANITIZER BUG IN MYSQLTEST
DBUG_ENTER and DBUG_LEAVE must *always* match,
otherwise all subsequent DBUG_ENTER calls will 
be poking into undefined stack frames.
2012-12-20 10:56:09 +01:00
prabakaran thirumalai
98aaf18bc7 Bug#14627287 THREAD CACHE - BYPASSES PRIVILEGES
Analysis:
When thread cache is enabled, it does not properly initialize
thd->start_utime when a thread is picked from the thread cache.
This breaks the quota management mechanism. 
THD::time_out_user_resource_limits() resets 
m_user_connect->conn_per_hour to 0 based on thd->start_utime

Fix:
Initialize start_utime when cached thread is reused.

Notes:
Re-enabled tests which were disabled because of this issue.
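A minimal model of the fix: a THD handed out from the thread cache gets a
fresh start_utime, so the per-hour quota counters are not reset spuriously.
The struct and the time helper are stand-ins, not the real server code.

  #include <cstdint>
  #include <ctime>

  struct ThdConnStub {
    uint64_t start_utime;          // connection start time, in microseconds
  };

  static uint64_t micro_time_stub()
  { return (uint64_t) std::time(0) * 1000000ULL; }   // coarse stand-in clock

  // The fix, in spirit: reinitialize start_utime whenever a cached thread
  // is reused, otherwise time_out_user_resource_limits() keeps resetting
  // conn_per_hour and the quota is bypassed.
  static void reuse_cached_thread(ThdConnStub *thd)
  {
    thd->start_utime = micro_time_stub();
  }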
2012-12-21 11:04:49 +05:30
Vasil Dimov
0dd066cb6f Fix Bug#16000909 MEMORY LEAK, MYSQL_INPLACE_ALTER_TABLE
This is a followup to the fix of
Bug#14628410 ASSERTION `! IS_SET()' FAILED IN DIAGNOSTICS_AREA::SET_OK_STATUS
(satya.bodapati@oracle.com-20121213132316-5joz4phltx9yhjs7)

In innobase_mysql_tmpfile(): allocate/open the file after
the return(-1); statement.
2012-12-18 20:55:30 +02:00
Ahmad Abdullateef
febe03c2db BUG#14727815 - CRASH IN PTHREAD_RWLOCK_WRLOCK/SRW_UNLOCK
IN QUERY CACHE CODE

DESCRIPTION:
MySQL Server crashes sporadically when Query Caching is on and
the server has high contention among clients. 


ANALYSIS :

Scenario 1:
In Query_cache::move_by_type(), when handling RESULT or its related blocks,
a write lock is acquired on the parent Query block. However, the next and prev
pointers are cached in local variables before lock acquisition. In an extremely
high contention scenario there is a possibility that
Query_cache::append_result_data() is operating on the same query block
and, as a consequence, might append a new Result block to the end of the Result
block linked list of the Query. This would manipulate the next and prev pointers
of the block being processed in move_by_type(); however, the local pointers
would still point to the previous nodes, thereby causing data corruption leading to a crash.

FIX :

Scenario 1:
The next, prev pointers are now accessed only after Lock acquisition in 
Query_cache::move_by_type().
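A minimal model of the ordering fix: acquire the block's lock first and only
then read the next/prev links, so a concurrent append cannot leave stale
pointers in local variables. std::mutex stands in for the query cache's block
lock; the structures are illustrative.

  #include <mutex>

  struct ResultBlockStub { ResultBlockStub *next, *prev; };

  struct QueryBlockStub {
    std::mutex       lock;         // stands in for the query block write lock
    ResultBlockStub *result;
  };

  static void move_result_block(QueryBlockStub *query)
  {
    std::lock_guard<std::mutex> guard(query->lock);   // lock FIRST ...
    ResultBlockStub *next = query->result->next;      // ... then cache the links
    ResultBlockStub *prev = query->result->prev;
    (void) next;
    (void) prev;                   // relinking of the block would happen here
  }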
2012-12-18 22:12:56 +05:30
Vasil Dimov
7bdd8b481c Fix Bug#13463493 INNODB PLUGIN WERE CHANGED, BUT STILL USE THE
SAME VERSION NUMBER 1.0.17

Now that InnoDB/InnoDB Plugin is no longer separately developed and
distributed from the MySQL server it does not need its own version number.
Thus use the MySQL version instead.

"Removing" the version altogether is not feasible because the config
variable 'innodb_version' cannot be removed in GA branches.

Reviewed by:	Marko (rb#1751)
2012-12-18 16:51:41 +02:00
Ramil Kalimullin
0fa867fd91 Fix for BUG#15948580 UPDATE_XML() CRASHES THE SERVER.
Problem: an overflow of the tag buffer leads to a server crash.
Fix: a bounds check has been added.


sql/item_xmlfunc.cc:
  Fix for BUG#15948580 UPDATE_XML() CRASHES THE SERVER.
  
    - XML tag/attribute level shouldn't exceed MAX_LEVEL as we use a
  static buffer to store them in the MY_XML_USER_DATA.
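A reduced model of the bounds check: refuse to descend deeper than the static
per-level buffer can hold instead of writing past its end. MAX_LEVEL_STUB and
the struct layout are illustrative stand-ins for MY_XML_USER_DATA.

  #include <cstddef>

  static const size_t MAX_LEVEL_STUB = 80;     // stand-in for the real MAX_LEVEL

  struct XmlUserDataStub {
    size_t level;                              // current nesting depth
    int    pos[MAX_LEVEL_STUB];                // static buffer, one slot per level
  };

  static bool enter_tag(XmlUserDataStub *data, int item_pos)
  {
    if (data->level >= MAX_LEVEL_STUB)
      return false;                            // too deep: report an error, no overflow
    data->pos[data->level++] = item_pos;
    return true;
  }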
2012-12-14 13:55:30 +04:00
Inaam Rana
117e2d1b6b Bug#14329288 IS THE CALL TO IBUF_MERGE_OR_DELETE_FOR_PAGE FROM
BUF_PAGE_GET_GEN REDUNDANT?

rb://1711
approved by: Marko Makela

When decompressing a compressed page that had already been accessed
in the buffer pool, do not attempt to merge buffered changes.
2012-12-14 11:24:57 +05:00
Ravinder Thakur
2d16c5bd4b bug#11761752: DO NOT ALLOW USE OF ALTERNATE DATA STREAMS ON NTFS FILESYSTEM.
File names with a colon are being disallowed because of the Alternate Data
Stream (ADS) feature of NTFS that could be misused. ADS allows data to be
written to alternate streams of a normal file. The data in alternate
streams cannot be seen by normal tools on Windows (explorer, cmd.exe). As
a result, someone could use this feature to hide large amounts of data in
alternate streams, and admins would have no easy way of figuring out the
files that are using that disk space. The fix also disallows ADS in the
scenarios where the file name is passed through some dynamic variable.

An important thing about the fix is that it DOES NOT disallow ADS file
names if they are not dynamic (i.e. if the file is created by using some
option that needs local access to the MySQL server, for example the error log
file). The reasoning is that if some MySQL option related to files
requires access to the local machine (it is not dynamic), then the user can very
well create data in ADS by some other means. This fixes only those scenarios
which could allow users to create data in ADS over the wire.

File names with a colon are disallowed only on Windows. UNIX
(Linux in particular) supports NTFS, but it is not a common
scenario for someone to configure an NTFS file system to store MySQL
data on Linux.

Changes in file bug11761752-master.opt are needed due to 
bug number 15937938.
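A sketch of the acceptance rule described above, for file names that arrive
through dynamic variables. The helper name is hypothetical, the check is
compiled only on Windows, and for brevity it ignores the legitimate
drive-letter colon (e.g. "C:\..."), which the real fix still has to allow.

  #include <cstring>

  static bool dynamic_filename_allowed(const char *name)
  {
  #ifdef _WIN32
    // A colon would name an NTFS alternate data stream (e.g. "t1.MYD:x"),
    // so reject it when the path is supplied over the wire.
    if (std::strchr(name, ':') != 0)
      return false;
  #endif
    (void) name;                   // non-Windows builds: nothing to restrict
    return true;
  }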
2012-12-13 20:33:44 +05:30