Commit graph

71773 commits

Thayumanavar
c7ca708fd5 BUG#18054998 - BACKPORT FIX FOR BUG#11765785 to 5.5
This is a backport of the patch of bug#11765785. Commit message
by Prabakaran Thirumalai from bug#11765785 is reproduced below:
Description:
------------
Global Query ID (global_query_id) is not incremented for the PING and
statistics commands. These two query types are filtered out before
incrementing the global query id. This causes a race condition and
results in duplicate query ids for different queries originating from
different connections.
      
Analysis:
---------
sql_parse.cc::dispatch_command() is the only place in the code which sets
thd->query_id to global_query_id and then increments it based on the
query type. Everywhere else it is incremented first and then
assigned to thd->query_id.

This was done so that global_query_id is not incremented for the PING
and statistics commands in the dispatch_command() function.
      
Fix:
----
As per suggestion from Serg, "There is no reason to skip query_id for 
the PING and STATISTICS command.", removing the check which filters 
PING and statistics commands.
      
Using get_query_id() followed by next_query_id() can still cause a race
condition if a context switch happens right after get_query_id() is
executed, so the code is changed to use next_query_id() alone, as is
done in the other parts of the code that deal with global_query_id.
      
Removed the get_query_id() function and forced callers of next_query_id()
to use its return value by specifying the warn_unused_result attribute.
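A minimal sketch of the resulting pattern, using std::atomic as a stand-in for the server's own atomics and [[nodiscard]] in place of the warn_unused_result attribute; names are illustrative, not the server's:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical stand-in for the server's global_query_id; the real code uses
// MySQL's own atomics wrappers and __attribute__((warn_unused_result)).
static std::atomic<std::uint64_t> global_query_id{1};

// [[nodiscard]] plays the role of warn_unused_result: callers must consume
// the returned id.
[[nodiscard]] static std::uint64_t next_query_id() {
  // fetch_add returns the previous value and increments atomically, so two
  // connections can never end up with the same query id.
  return global_query_id.fetch_add(1, std::memory_order_relaxed);
}

int main() {
  std::uint64_t id = next_query_id();  // e.g. what gets assigned to thd->query_id
  return id == 0;                      // never zero here; exit status 0
}
```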
2014-01-13 12:04:16 +05:30
Venkata Sidagam
ff06148679 Bug #17760379 COLLATIONS WITH CONTRACTIONS BUFFER-OVERFLOW THEMSELVES IN THE FOOT
Description: A typo in create_tailoring() causes the "contraction_flags" to be written
into cs->contractions in the wrong place. This causes two problems:
(1) Anyone relying on `contraction_flags` to decide "could this character be
part of a contraction" is 100% broken.
(2) Anyone relying on `contractions` to determine the weight of a contraction
is mostly broken

Analysis: When we prepare the contractions in create_tailoring(), we corrupt the
cs->contractions memory location, which is supposed to hold the weights (8K) plus the contraction information (256 bytes). The contraction information was being stored starting at the 4K offset instead, because of a logic flaw in the code.

Fix: When we create the contractions, the contraction information must be addressed as (char*) (cs->contractions + 0x40*0x40) instead of ((char*) cs->contractions) + 0x40*0x40. This moves the pointer 8K bytes past "cs->contractions" and stores the contraction information from there. Similarly, when we calculate it for LIKE range queries, it must be calculated from the 8K offset onwards, by changing the expression to (const char*) (cs->contractions + 0x40*0x40). And for ucs2 charsets, my_cs_can_be_contraction_head() and my_cs_can_be_contraction_tail() must be modified to point to the locations beyond 8K.
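A small illustration of why the two expressions differ, assuming (as in the real contraction table) that the weights are 16-bit values; the array and names here are made up for the example:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Assume, as in the real contraction table, that the weights are 16-bit.
  static std::uint16_t contractions[0x40 * 0x40 + 0x100];

  // Arithmetic on a uint16_t* scales by sizeof(uint16_t), so this points
  // 0x40*0x40 elements = 8192 bytes past the start: right after the weights.
  char *flags_fixed = (char *)(contractions + 0x40 * 0x40);

  // Casting to char* first scales by one byte, landing only 4096 bytes in,
  // i.e. in the middle of the weight array -- the bug described above.
  char *flags_buggy = ((char *)contractions) + 0x40 * 0x40;

  std::printf("fixed offset: %td bytes, buggy offset: %td bytes\n",
              flags_fixed - (char *)contractions,
              flags_buggy - (char *)contractions);
  return 0;
}
```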
2014-01-11 14:48:29 +05:30
Sujatha Sivakumar
605aa82f5d Bug#17081415:>=4GB ROW EVENT CRASHES SERVER WITH WILD MEMCPY
OF ROW DATA

Problem:
========
Inserting a row larger than 4GB when the server uses RBR leads
to a crash.

Analysis:
========
Row-based binary logging logs changes to individual table
rows. During the execution of DML statements in RBR, the
actual row data is stored in the "m_rows_buf" buffer and
the buffer contents are written to the binary log.
"m_rows_buf" is prepared within the function
"Rows_log_event::do_add_row_data".

When a huge row is specified, as in this bug scenario where the
row size is 4294971520 > UINT_MAX (4294967295), "m_rows_buf" is
reallocated to accommodate the row data and the row is then
copied into the buffer. During this realloc call the length is
cast to "uint", which overflows. Because of the overflow the
reallocated buffer is smaller than what was requested, and the
subsequent copy of the row data into the buffer crashes.

Hence rows of size > 4GB cannot be written to the binary log.
By default the event_length is stored within 4 bytes, which
restricts how large an event can grow, so such large rows
cannot be replicated using row-based replication.

Fix:
===
An error is now generated if the row size exceeds the 4GB limit.
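A hedged sketch of the overflow and the added guard; the function and variable names are illustrative, not those of Rows_log_event::do_add_row_data:

```cpp
#include <climits>
#include <cstdio>

// Illustrative only; names do not match the server sources.
static bool add_row_data(unsigned long long needed_bytes) {
  // The old code effectively cast the length to uint before reallocating:
  // 4294971520 truncated to a 32-bit uint becomes 4224, so the buffer ends
  // up far smaller than requested and the later memcpy runs wild.
  if (needed_bytes > UINT_MAX) {
    std::fprintf(stderr, "row too large to be written to the binary log\n");
    return false;  // report an error instead of silently truncating the size
  }
  unsigned int alloc_size = (unsigned int)needed_bytes;  // safe now
  return alloc_size == needed_bytes;
}

int main() {
  return add_row_data(4294971520ULL) ? 0 : 1;  // rejected, exits with 1
}
```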
2014-01-10 15:11:56 +05:30
Luis Soares
d94513ca85 BUG#17066269
- Automerged from bug branch into latest mysql-5.5.
- Fixed trailing whitespaces.
- Updated the copyright notice year to 2014.
2014-01-09 12:53:49 +00:00
Murthy Narkedimilli
c94e2e0e96 updating the current copyright year which reflects in the MySQL welcome message. 2014-01-09 16:07:14 +05:30
mithun
1e04605abc Bug #17307201 : FAILING ASSERTION: PREBUILT->TRX->CONC_STATE == 1
FROM SUBSELECT
ISSUE         : In the function find_all_keys, if the selected row
                does not satisfy the condition, unlock_row is called
                to release the locked row. If the condition contains
                a subquery and an InnoDB error occurs during its
                execution, unlock_row must not be called: when the
                error is a deadlock, InnoDB rolls back the
                transaction, and calling unlock_row without a
                transaction is invalid, hence the assertion failure.
SOLUTION      : Call unlock_row only if no error occurred
                previously.
                The solution is backported from 5.6,
                defect number 14226481.
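A minimal sketch of the changed control flow, with stand-in stubs instead of the real THD error state and handler calls:

```cpp
#include <cstdio>

// Minimal sketch; the real check lives in filesort's find_all_keys() and
// consults the THD error state, not a global flag.
static bool error_occurred = false;      // stand-in for "an error is pending"

static void unlock_row() { std::puts("row lock released"); }

static void skip_non_matching_row() {
  // Release the row lock only when no earlier error (for example a deadlock
  // that already rolled back the transaction inside a subquery) is pending.
  if (!error_occurred)
    unlock_row();
}

int main() {
  skip_non_matching_row();   // normal case: the lock is released
  error_occurred = true;
  skip_non_matching_row();   // error case: unlock_row() is skipped
  return 0;
}
```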
2014-01-09 11:17:51 +05:30
mysql-builder@oracle.com
8843dd4dc1 2014-01-09 07:39:10 +05:30
Aditya A
657ffcd74c Bug#16287752 INNODB_DATA_FILE_PATH MINIMUM SIZE
IN DOCUMENTATION
Problem 
-------
The documentation says that we support a 'K' suffix
when specifying the size of an InnoDB data file in the
innodb_data_file_path server variable, but the function
srv_parse_megabytes() handles only 'M' (megabytes) and
'G' (gigabytes).

Fix
---
Modify srv_parse_megabytes() to handle kilobytes.

Add to the documentation that when a size is specified
in KB it should be a multiple of 1024; otherwise it will
be rounded off to the nearest MB (megabyte) boundary
(e.g. a size given as 2313KB will be treated as 2 MB).

[ Approved by Marko #rb 2387 ]
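A hedged sketch of the size parsing with the new 'K' handling; this is not the real srv_parse_megabytes(), just an illustration of the rounding behaviour described above:

```cpp
#include <cstdio>
#include <cstdlib>

// Parse "<number>[K|M|G]" and return the size in megabytes, with 'K' values
// rounded down to a full megabyte, so "2313K" becomes 2 MB as described above.
static unsigned long parse_size_to_mb(const char *str) {
  char *end = nullptr;
  unsigned long value = std::strtoul(str, &end, 10);
  switch (*end) {
    case 'G': case 'g': return value * 1024;  // gigabytes -> megabytes
    case 'M': case 'm': return value;         // already megabytes
    case 'K': case 'k': return value / 1024;  // kilobytes -> megabytes
    default:            return value;         // assume megabytes if no suffix
  }
}

int main() {
  std::printf("%lu MB\n", parse_size_to_mb("2313K"));  // prints "2 MB"
  return 0;
}
```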
2014-01-08 22:25:41 +05:30
Anirudh Mangipudi
14be195187 Bug#16715064 MYSQL COMMUNITY UTILITIES CANNOT CONNECT TO MYSQL ENTERPRISE
WITH SSL ENABLED
Problem:
It was reported that MySQL Community utilities cannot connect to a MySQL
Enterprise 5.6.x server with SSL configured. We can reproduce the issue
when we try to connect to a MySQL Enterprise server with a MySQL client
using the --ssl-ca parameter.
We get ERROR 2026 (HY000): SSL connection error: unknown error number.

Solution:
The root cause of the problem was determined to be the difference in how
OpenSSL (Enterprise) and yaSSL (Community) handle certificates. OpenSSL expects
a blank certificate to be sent when a parameter (ssl-ca, ssl-cert or ssl-key)
has not been specified. yaSSL, on the other hand, sends no certificate, and
since OpenSSL does not expect this behaviour it returns an unknown SSL error.
The issue was resolved by adding to yaSSL the capability to send a blank
certificate when any of these parameters is missing.
2014-01-08 18:31:42 +05:30
Nisha Gopalakrishnan
1ef8ed17f1 BUG#17324415:GETTING MYSQLD --HELP AS ROOT EXITS WITH 1
Analysis
--------

Running 'mysqld --help --verbose' as the root user without
the '--user' option displays the help contents but
exits at the end with code 1.

While starting the server, a validation is performed to
ensure that when the server is started as root it is
done with the '--user' option; otherwise we abort. In the
help case, we dump the help contents and then abort.

Fix:
---
During the validation, we skip aborting the server in case
the help option is used under the condition mentioned
above.

NOTE: Test case has not been added since it requires using 
      'root' user.
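A simplified sketch of the validation order after the fix; the flag names are illustrative, not the server's actual option variables:

```cpp
#include <cstdio>

static bool running_as_root = true;    // geteuid() == 0 in the real check
static bool user_option_set = false;   // --user was given
static bool help_requested  = true;    // --help (--verbose) was given

int main() {
  if (running_as_root && !user_option_set && !help_requested) {
    std::fprintf(stderr, "Fatal error: use --user to run mysqld as root\n");
    return 1;                // abort only when actually starting the server
  }
  std::puts("printing help text");
  return 0;                  // --help now exits with status 0 even as root
}
```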
2014-01-08 10:04:05 +05:30
Bharathy Satish
c052bec059 Bug #17503460 MYSQL READ ONLY DOESN'T WORK FOR DROP TRIGGER
Problem: DROP TRIGGER succeeds even after the read_only
variable has been set to ON.
Fix: Report the standard error
(ER_OPTION_PREVENTS_STATEMENT) when the global read_only
variable is set to ON.
2014-01-07 15:11:05 +05:30
laasya.moduludu@oracle.com
8e9bb710ba Raise version number after cloning 5.5.36 2014-01-06 11:43:05 +01:00
Murthy Narkedimilli
496abd0814 Updated/added copyright headers 2014-01-06 10:52:35 +05:30
Arun Kuruvila
51519388f1 Bug #16324629 : SERVER CRASHES ON UPDATE/JOIN FEDERATED +
LOCAL TABLE WHEN ONLY 1 LOCAL ROW

Description: When updating a federated table with UPDATE...
JOIN, the server consistently crashes with Signal 11 when
only 1 row exists in the local table involved in the join 
and that 1 row can be joined with a row in the federated 
table.

Analysis: Interaction between the federated engine and the
optimizer results in the crash. In our scenario, i.e. a local
table having only one row, the program follows a different
path because the join optimizer treats the table as a
constant table, so "index_read()" happens in the prepare
phase (the optimizer plan is different for constant table
joins). In this case "index_read_idx_map()" (inside
handler.cc) calls "index_read()"; inside "index_read()" the
matching rows are fetched and "stored_result" is populated
by calling "store_result()". Just after "index_read()",
"index_end()" is called, and "index_end()" frees
"stored_result" by calling "free_result()". So when execution
reaches the "position()" function, we hit the assertion
"DBUG_ASSERT(stored_result);". In all other scenarios (i.e. a
table with more than one row) the optimizer plan is different
and "index_read()" happens in the execution phase.

Fix: Add a separate ha_federated member function for
"index_read_idx_map()" which handles the federated engine
separately, so that position() is called before index_end()
in the constant table scenario.
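A rough sketch of the idea behind the fix, with hypothetical member names that do not reproduce the real ha_federated interface: keep the fetched result alive until position() has run, and only then release it:

```cpp
#include <memory>
#include <vector>

struct ResultSet { std::vector<int> rows; };

class FederatedSketch {
  std::unique_ptr<ResultSet> stored_result;

 public:
  int index_read_idx_map_sketch() {
    stored_result.reset(new ResultSet{{42}});  // fetch the matching row(s)
    position_sketch();   // needs stored_result; runs before the result is freed
    index_end_sketch();  // only now is it safe to release stored_result
    return 0;
  }
  void position_sketch() { /* would assert if stored_result were already gone */ }
  void index_end_sketch() { stored_result.reset(); }
};

int main() { return FederatedSketch().index_read_idx_map_sketch(); }
```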
2013-12-30 11:39:55 +05:30
Aditya A
96ae8409e1 Bug#12762390 SHOW INNODB STATUS REPORTS NON-FK
ERRORS IN THE FK SECTION

ANALYSIS
--------

Any error during the renaming of the table was
incorrectly logged in dict_foreign_err_file
and showed up in the foreign key section of
"SHOW ENGINE INNODB STATUS".

FIX
---
Prevent the renaming error from being logged in the
dict_foreign_err_file section.

[ Approved by Marko #rb 2501 ]
2013-12-29 16:55:24 +05:30
Satya Bodapati
d584c71c50 BUG#16752251 - INNODB DOESN'T REDO-LOG INSERT BUFFER MERGE OPERATION IF
IT IS DONE IN-PLACE

Add testcase as innodb-change-buffer-recovery.test
2013-12-26 14:33:52 +05:30
Venkata Sidagam
0fb30122ca Bug #17780290 PUBLISH THE GIS TEST FOR BUG#16451878
Adding the test cases for the BUG#16451878.
2013-12-19 16:08:38 +05:30
Bjorn Munch
6ad10c8f4a Followup fix for Bug 17827378 MTR DOES NOT REPORT IF A TEST
FAILS TO DROP CREATED EVENTS:

- Check for triggers should exclude mtr's own
- Move the code to before checksum table as it might affect result
  of some audit_log tests (does in 5.6)
- Replace SHOW STATUS LIKE 'slave_open_temp_tables' to be like in 5.6
2013-12-18 14:01:15 +01:00
Luis Soares
eec2ee94cd BUG#17066269: AUTO_INC VALUE NOT PROPERLY GENERATED WITH RBR AND
AUTO_INC COLUMN ONLY ON SLAVE

In RBR, if the slave's table has one additional auto_inc column,
then it will insert the value 0 instead of generating the next
auto_inc number.

We fix this by checking whether an extra auto_inc column exists
when compared to the column data of the row event; if so, we
explicitly set it to NULL and flag the engine that a nulled
auto_inc column will be inserted.
2013-12-18 11:17:24 +00:00
Tor Didriksen
b2bb09529c MTR's internal check of the test case 'main.events_trans' failed.
fix: DROP EVENT e1;
2013-12-18 11:08:21 +01:00
Tor Didriksen
a3c9775657 Bug#16316074 RFE: MAKE TMPDIR A BUILD-TIME CONFIGURABLE OPTION
Bug#68338    RFE: make tmpdir a build-time configurable option

Background: Some distributions use tmpfs for mounting /tmp by
default, which has some advantages, but brings also new
issues. Fedora started using tmpfs on /tmp in version 18 for
example. If not configured otherwise in my.cnf, MySQL uses
system's constant P_tmpdir expanded to /tmp on Linux. This can
introduce some problems with limited space in /tmp and also some
data loss in case of replication slave [1].

If distributions would like to use /var/tmp, which should be
better for MySQL purposes, then we have to patch the source or
change the tmpdir option in my.cnf, which is however not updated
if it already exists.

Thus, it would be useful to be able to specify default tmpdir
path using a configure option, while using P_tmpdir in case it is
not defined explicitly.

Based on a contribution from Honza Horak
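A small sketch of a build-time default with a P_tmpdir fallback; the DEFAULT_TMPDIR macro name here is an assumption used for illustration:

```cpp
#include <cstdio>

// Let the default tmpdir be injected at build time
// (e.g. with -DDEFAULT_TMPDIR='"/var/tmp"') and fall back to the system's
// P_tmpdir when nothing was configured.
#ifndef DEFAULT_TMPDIR
#  ifdef P_tmpdir
#    define DEFAULT_TMPDIR P_tmpdir
#  else
#    define DEFAULT_TMPDIR "/tmp"
#  endif
#endif

int main() {
  std::printf("default tmpdir: %s\n", DEFAULT_TMPDIR);
  return 0;
}
```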
2013-12-18 11:05:18 +01:00
Venkatesh Duggirala
42be8c16f5 Bug17632978 SLAVE CRASHES IF ROW EVENT IS CORRUPTED
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Post Push: Fixing Werror compiler issue
2013-12-18 13:52:49 +05:30
Venkatesh Duggirala
b0a5086c36 Bug#17632978 SLAVE CRASHES IF ROW EVENT IS CORRUPTED
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Problem: If the slave receives a corrupted row event,
the slave server crashes.

Analysis: When the slave unpacks the row event, it does
not validate the data before applying the event. If the
data is corrupted, e.g. the length of a field is wrong,
it can end up reading wrong data, leading to a crash.
A similar problem happens when the mysqlbinlog tool is used
against a corrupted binlog with the '-v' option. Due to the
-v option, the tool tries to print the values of all the
fields. A corrupted field length can lead to a crash.

Fix: Before unpacking a field, its length is verified.
Only if it falls within the event range is the field
unpacked; otherwise the "ER_SLAVE_CORRUPT_EVENT" error
is thrown. In the mysqlbinlog -v case, the field value is
not printed and processing of the file is stopped.
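A hedged sketch of the kind of boundary check described; names and structure are illustrative, not the server's unpack code:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Before unpacking a field, make sure the length read from the event still
// lies inside the event buffer.
static bool field_fits(const unsigned char *ptr, const unsigned char *event_end,
                       std::uint32_t field_length) {
  if (static_cast<std::size_t>(event_end - ptr) < field_length) {
    std::fprintf(stderr, "field length runs past the event boundary\n");
    return false;  // the server raises ER_SLAVE_CORRUPT_EVENT at this point
  }
  return true;     // safe to unpack field_length bytes starting at ptr
}

int main() {
  unsigned char event[16] = {0};
  // A corrupted length (1000 here) is rejected instead of being read blindly.
  return field_fits(event, event + sizeof(event), 1000) ? 1 : 0;
}
```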
2013-12-17 22:11:22 +05:30
Kent Boortz
a5eccbc33a Bug#29716 : Bug#11746921 : MYSQL_INSTALL_DB REFERS TO THE (OBSOLETE) MYSQLBUG SCRIPT DURING INSTALLATION
Bug#68742 : Bug#16530527 : OBSOLETE BUGREPORT ADDRESSES
2013-12-14 13:05:36 +01:00
Marc Alff
d155874d83 Push to mysql-5.5 2013-12-13 10:26:05 +01:00
sayantan dutta
1bc36178c6 Bug #17827378 - MTR DOES NOT REPORT IF A TEST FAILS TO DROP CREATED EVENTS 2013-12-12 12:20:57 +05:30
Marc Alff
b734cd73ff Bug#17928281 'CHECK_PERFORMANCE_SCHEMA()' LEAVES 'CURRENT_THD' REFERRING
DESTRUCTED THD OBJ 

Prior to the fix, the function check_performance_schema() could leave
behind stale pointers in thread local storage, for the following keys:
- THR_THD (used by _current_thd)
- THR_MALLOC (used for memory allocation)
This is an unsafe practice which can potentially cause crashes,
and which can cause other bugs when the code is modified during maintenance.

With this fix, thread local storage keys used temporarily within
function check_performance_schema() are cleaned up after use.
2013-12-11 11:15:23 +01:00
Guilhem Bichot
9418fea133 Bug#16539979 - BASIC SELECT COUNT(DISTINCT ID) IS BROKEN
Bug#17867117 - ERROR RESULT WHEN "COUNT + DISTINCT + CASE WHEN" NEED MERGE_WALK 

Problem:
COUNT DISTINCT gives incorrect result when it uses a Unique
Tree and its last inserted record has null value.

Here is how COUNT DISTINCT is processed, given that this query is not
using loose index scan.

When a row is produced as a result of joining tables (there is only
one table here), we store the SELECTed value in a Unique tree. This
allows elimination of any duplicates, and thus implements DISTINCT.

When we have processed all rows like this, we walk the Unique tree,
counting its elements, in Aggregator_distinct::endup() (tree->walk());
for each element we call Item_sum_count::add(). That function wants to
ignore any NULL value, and for that it checks item_sum->args[0]->
null_value. This is a mistake: when walking the Unique tree, the value
to be aggregated is not item_sum->args[0] but rather table->field[0].

Solution:
instead of item_sum->args[0]->null_value, use arg_is_null(), which
knows where to look (as in the fix for bug 57932).

As a consequence of this solution, we have to make arg_is_null() a
little more general:
1) Because it was so far only used for AVG() (which always has a
single argument), this function looked at a single argument; now
that it has to work with COUNT(DISTINCT expression1,expression2), it
must look at all arguments.
2) Because we start using arg_is_null() for COUNT(DISTINCT), i.e. in
Item_sum_count::add(), it implies that we are also using it for
COUNT(no DISTINCT) (same add()). For COUNT(no DISTINCT), the
nullness to check is that of item_sum->args[0]. But the null_value
of such an item is reliable only if val_*() has been called on it. So
far arg_is_null() was always used after a call to arg_val*(), so it
could rely on null_value; but for COUNT there is no call to arg_val*(),
so arg_is_null() has to call is_null() instead.

Testcase for 16539979 by Neeraj. Testcase for 17867117 contributed by
Xiaobin Lin from Taobao.
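A conceptual sketch (not the server's Item/Field classes) of why the NULL check must look at the walked element rather than at a stale null_value flag:

```cpp
#include <cstdio>
#include <optional>
#include <set>

int main() {
  // Distinct values collected from the rows; the last inserted one was NULL.
  std::set<std::optional<int>> unique_tree = {1, 2, std::nullopt};

  long count = 0;
  for (const auto &elem : unique_tree) {
    if (!elem.has_value())   // check the walked element itself
      continue;              // COUNT(DISTINCT ...) ignores NULL
    ++count;
  }
  std::printf("COUNT(DISTINCT col) = %ld\n", count);  // 2
  return 0;
}
```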
2013-12-04 12:32:42 +01:00
Hery Ramilison
30de8be1f3 Upmerge of the mysql-5.1.73 build 2013-12-04 04:04:44 +01:00
a2b610677c Merge from mysql-5.1.73-release 2013-12-03 20:47:36 +01:00
Pavan Naik
4f5a317153 BUG#16321920 : CREATE A SEPARATE INNODB_ZIP TEST SUITE
Fix :
-------	

Created separate suites called innodb_zip and i_innodb_zip that contain all the compression tests.

Running the new suites with the following compression-related parameters:

* innodb_compression_level = {1/9}
* innodb_log_compressed_pages = {ON/OFF}
2013-11-29 15:13:47 +05:30
Balasubramanian Kandasamy
a33a1e55a3 Added backported repo rpm files from mysql-5.5.35-uln branch 2013-11-29 08:24:49 +01:00
mysql-builder@oracle.com
f852c1235f 2013-11-27 14:23:03 +05:30
Balasubramanian Kandasamy
fb1ead3dd0 Updated the url and sql_mode in my.cnf 2013-11-25 15:08:52 +01:00
Balasubramanian Kandasamy
0870107b7e Backport 5.5.35 EL6 uln repo rpms 2013-11-25 14:49:29 +01:00
Anirudh Mangipudi
b32f13ee47 Bug#12428404 MYSQLD.EXE CRASHES WHEN EXTRACTVALUE() IS CALLED WITH
MALFORMED XPATH EXP
Problem:
A malformed XPATH expression in an ExtractValue query causes
a server crash. The malformed XPATH expression results when
the position attribute in the substring function begins with "..".
Solution:
The original crash happens because the "../" is evaluated
prematurely: it tries to access the XML while it has not been
parsed yet. The premature evaluation happens because the
val_nodeset functions are marked as constant, in which case we
proceed to evaluate them in the JOIN::prepare stage only. The
solution is to mark the val_nodeset functions as non-constant.
This forces us to evaluate the functions in the JOIN::exec stage
and thus avoid any premature evaluation of the XML strings.
2013-11-25 13:50:19 +05:30
Anirudh Mangipudi
0f89c3667b Bug#12428404 MYSQLD.EXE CRASHES WHEN EXTRACTVALUE() IS CALLED
WITH MALFORMED XPATH EXP
Problem:
A malformed XPATH expression in an ExtractValue query causes
a server crash. The malformed XPATH expression results when
the position attribute in the substring function begins
with "..".
Solution:
The original crash happens because the "../" is evaluated
prematurely: it tries to access the XML while it has not
been parsed yet. The premature evaluation happens because
the val_nodeset functions are marked as constant, in which
case we proceed to evaluate them in the JOIN::prepare stage
only. The solution is to mark the val_nodeset functions as
non-constant. This forces us to evaluate the functions in
the JOIN::exec stage and thus avoid any premature evaluation
of the XML strings.
2013-11-25 13:49:07 +05:30
Arun Kuruvila
16db26fcf1 Bug #17168602 MYSQL_PLUGIN REMOVES NON-DIRECTORY TYPE FILES
SPECIFIED WITH THE BASEDIR OPTION 

Description: The mysql_plugin client attempts to remove any
filename specified to the --basedir option. The problem is
that if the filename does not end with a slash, it will 
attempt to unlink it, which succeeds for files, but not for
directories.

Analysis: When mysql_plugin is started with the basedir
option and the path of a file is given as basedir, that
file gets deleted. This is because the code uses the
function my_delete, which unlinks the given file path.

Fix: As a fix, we replace that call with the function
my_free, which only frees the pointer holding the
file path.
2013-11-25 12:31:09 +05:30
Mattias Jonsson
d58799ed17 backport of Bug#17401628
revid:mattias.jonsson@oracle.com-20131119103616-u6t82s8cpgp0q3ex

Use of uninitialized memory in the priority queue used for returning records
in sorted order.

It happens if no previous partition has returned a row since the
beginning of index_init, an index_read* call returned
HA_ERR_KEY_NOT_FOUND for all partitions (otherwise the record
buffer/priority queue would be initialized), and an index_next/prev
call follows in which all partitions return HA_ERR_END_OF_FILE.
2013-11-20 13:13:18 +01:00
mithun
f847e58869 Bug #17708621 : EXCEEDING SORT_BUFFER_SIZE (FILE SORT)
WITH SORT ABORTED LEAKS FILE DESCRIPTORS

ISSUE : The IO_CACHE used for the index_merge quick select
is freed only on successful retrieval of all rows
from the index merge.
If this row retrieval is interrupted or fails (for
example by a KILL_QUERY signal), the IO_CACHE
resources allocated by the index_merge quick select
are not freed, and hence the temporary file associated
with it is not closed either.
This leads to a file descriptor leak.

SOLUTION : As part of the file sort operation we now always
free the IO_CACHE allocated by the index_merge quick select.
2013-11-18 18:12:01 +05:30
mysql-builder@oracle.com
4f5eab3987 2013-11-14 15:00:08 +05:30
Atanu Ghosh
0c2030e250 Bug #17049656 : MYSQLD --LOCAL-SERVICE PARAMETER DOES NOT WORK
Problem: The "--local-install" service does not perform as expected for, at least,
         Windows.

Fix: A NULL pointer was dereferenced due to which there was crash.A check was introduced
     for NULL string before dereferencing it.No test cases written as it is a bug during 
     installation.
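A tiny illustration of the class of fix, with a hypothetical parameter name:

```cpp
#include <cstdio>
#include <cstring>

// Guard a possibly-NULL string before dereferencing it.
static void handle_service_option(const char *service_name) {
  if (service_name == nullptr) {       // the missing check behind the crash
    std::puts("no service name supplied, skipping");
    return;
  }
  std::printf("service name length: %zu\n", std::strlen(service_name));
}

int main() {
  handle_service_option(nullptr);      // previously crashed, now handled
  handle_service_option("MySQL");
  return 0;
}
```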
2013-11-14 14:27:31 +05:30
Venkatesh Duggirala
2577662c7a Bug#17641586 INCORRECTLY PRINTED BINLOG DUMP INFORMATION
Problem:
When log_warnings is greater than 1, the master prints binlog
dump thread information in the mysqld.1.err file.
The information contains the slave server id, binlog file and
binlog position. The slave server id is a uint32 and the print
format was wrongly specified (%d instead of %u).
Hence a server id larger than 2 billion gets printed as a
negative value.
Eg: Start binlog_dump to slave_server(-1340259414),
pos(mysql-bin.001663, 325187493)

Fix: Changed the uint32 format to %u.
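A self-contained illustration of the format mismatch, using the server id behind the log line quoted above:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // With %d the id appears negative, with %u it prints the real value.
  std::uint32_t server_id = 2954707882u;
  std::printf("with %%d: %d\n", static_cast<int>(server_id));  // -1340259414
  std::printf("with %%u: %u\n", server_id);                    // 2954707882
  return 0;
}
```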
2013-11-12 22:09:10 +05:30
mithun
8934a80b6d Bug #14057034 : WASTED CPU CYCLES IN MY_UTF8_UNI WHERE
RESULTING MY_WC_T RESULT IS NOT USED
Issue         : The handler functions my_ismbchar_utf8 and
              my_well_formed_len_mb for charset utf8
              call the unicode conversion function
              to validate and to find the character
              length. Because of this, instructions
              which convert utf8 to unicode are
              executed for no use.
              A similar issue exists with charset utf8mb4.
Solution      : Reorganized the code such that the character
              validation part of the unicode conversion
              handler is extracted (duplicated) into a
              separate function. my_ismbchar_utf8 and
              my_well_formed_len_mb now call the new
              function, which only validates and returns
              the length of the mb (utf8) character.
              A similar fix for charset utf8mb4.
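A hedged sketch of validating a UTF-8 sequence and returning its length without building the code point; simplified (the real functions also reject overlong and out-of-range encodings):

```cpp
#include <cstdio>

// Determine the length of a UTF-8 sequence from its lead byte and validate
// the continuation bytes, without ever computing the code point.
static int mb_char_length(const unsigned char *s, const unsigned char *end) {
  if (s >= end) return 0;
  int len;
  if      (s[0] < 0x80)           len = 1;
  else if ((s[0] & 0xE0) == 0xC0) len = 2;
  else if ((s[0] & 0xF0) == 0xE0) len = 3;
  else if ((s[0] & 0xF8) == 0xF0) len = 4;
  else return 0;                             // invalid lead byte
  if (end - s < len) return 0;               // truncated sequence
  for (int i = 1; i < len; i++)
    if ((s[i] & 0xC0) != 0x80) return 0;     // bad continuation byte
  return len;                                // length only, no conversion done
}

int main() {
  const unsigned char euro[] = {0xE2, 0x82, 0xAC};   // U+20AC in UTF-8
  std::printf("%d\n", mb_char_length(euro, euro + sizeof(euro)));  // prints 3
  return 0;
}
```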
2013-11-12 16:42:46 +05:30
Christopher Powers
aec0856920 Bug#17702677 WRONG INSTRUMENTATION INTERFACE FOR MYSQL_COND_TIMEDWAIT
Fix Windows build break
2013-11-07 15:44:57 -06:00
Marc Alff
127e75859c Push to mysql-5.5 2013-11-07 18:29:43 +01:00
Sujatha Sivakumar
7c69ec0a18 Bug#16736412: THE SERVER WAS CRASHED WHILE EXECUTING
"SHOW BINLOG EVENTS"

Fixing post push test issue. 
Changing the debug simulation.
2013-11-07 17:30:57 +05:30
Neeraj Bisht
97657db919 Bug#16691598 - ORDER BY LOWER(COLUMN) PRODUCES OUT-OF-ORDER RESULTS
Problem:-
We have created a table with UTF8_BIN collation.
When the query has an ORDER BY clause over a function
call, we get the result in incorrect order.
Note: the bug is not there in 5.5.

Analysis:
In 5.5, UTF16_BIN has a minimum multi-byte length of 2 and a maximum
of 4. In make_sortkey(), for a 2-byte character we assume that the
resulting length will be 2 bytes per character. But when we use
my_strnxfrm_unicode_full_bin(), we store sorting weights using 3 bytes
per character. This results in a truncated sort key.

The same thing happens for UTF8MB4, which has a 1-byte minimum
multi-byte length and a 4-byte maximum. We assume the resulting data
is 1 byte per character, which again results in truncation.

Solution:-
Use strnxfrm (i.e. the MY_CS_STRNXFRM macro) for the sort, in
which the resulting length does not depend on the source length.
2013-11-07 16:46:24 +05:30
mysql-builder@oracle.com
d6893cd391 2013-11-06 17:21:13 +05:30
Sujatha Sivakumar
f9d2b6a8c9 Bug#16736412: THE SERVER WAS CRASHED WHILE EXECUTING
"SHOW BINLOG EVENTS"

Problem:
========
mysqld crashed after executing "show binlog events in
'mysql-bin.000005' from 99"; the crash happened randomly.

Analysis:
========
During construction of a LOAD EVENT or NEW LOAD EVENT object,
if the starting offset provided is incorrect, then
all the object members retrieved from that offset
are also invalid. Sometimes this leads to out-of-bound
address offsets. In the bug scenario, the file name is
extracted from an invalid address and then fed to the
strlen(fname) function. Passing an invalid address to strlen
leads to the crash.

Fix:
===
Validate that the given offset falls within the event
boundary.
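An illustrative boundary check only; the names are not those of the Load_log_event constructor:

```cpp
#include <cstddef>
#include <cstdio>

// An offset taken from the event must land inside the event before anything
// (such as a file name) is read from it.
static bool offset_is_valid(std::size_t offset, std::size_t event_len) {
  return offset < event_len;
}

int main() {
  const std::size_t event_len = 96;       // total length of the event
  const std::size_t fname_offset = 4000;  // bogus value from a bad start position
  if (!offset_is_valid(fname_offset, event_len)) {
    std::fprintf(stderr, "malformed event: offset outside the event boundary\n");
    return 1;                             // fail instead of strlen() on garbage
  }
  return 0;
}
```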
2013-11-06 15:00:49 +05:30