It is an error to call mysql_thread_init() before libmysql has been
initialized with mysql_library_init(). Thus, to fix this bug we need to
detect whether the library has been initialized and return an error
result if mysql_thread_init() is called while the library is
uninitialized.
Fixed by checking my_thread_global_init_done and returning nonzero if
the library is not initialized.
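A minimal sketch of the guard, assuming the usual shape of the mysys
per-thread init routine (the exact body differs):

  my_bool my_thread_init(void)
  {
    if (!my_thread_global_init_done)
      return 1;               /* library not initialized: report failure */
    /* ... normal per-thread initialization ... */
    return 0;
  }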
WRITTEN WHILE ROWS REMAINS
Problem:
========
When TRUNCATE TABLE fails on a transaction-based engine, the statement
is still logged to the binary log even though the operation errors
out. Because of this the master keeps its data, but the TRUNCATE is
written to the binary log, which causes master/slave inconsistency.
Analysis:
========
TRUNCATE TABLE is executed either by dropping and re-creating the
table or by deleting rows. In the second case the existing code is
written in such a way that the truncate statement is always binlogged,
even if an error occurs. This is not correct.
Binlogging of the TRUNCATE TABLE statement should depend on whether
the truncate is executed transactionally. If the table uses a
transactional engine, we log TRUNCATE TABLE only on successful
completion.
If the table is non-transactional, partial changes may already have
been applied when the error occurred; in such cases we log the
statement in spite of the error, since some rows may have been
removed, so the statement has to be sent to the slave.
Fix:
===
The table handler is consulted to determine whether the truncate is
being executed in a transaction-based mode, and the statement is
binlogged accordingly.
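A minimal sketch of the decision; the enum and helper below are
illustrative, not the exact names added to sql/sql_truncate.h:

  /* Result of the truncate operation (names are illustrative). */
  enum Truncate_result
  {
    TRUNCATE_OK,                 /* success: safe to binlog             */
    TRUNCATE_FAILED_BINLOG,      /* non-transactional engine: partial
                                    changes possible, binlog anyway     */
    TRUNCATE_FAILED_SKIP_BINLOG  /* transactional engine: rolled back,
                                    do not binlog                       */
  };

  static bool should_binlog_truncate(Truncate_result res)
  {
    return res == TRUNCATE_OK || res == TRUNCATE_FAILED_BINLOG;
  }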
mysql-test/suite/binlog/r/binlog_truncate_kill.result:
Added test case to test the fix for Bug#17942050.
mysql-test/suite/binlog/t/binlog_truncate_kill.test:
Added test case to test the fix for Bug#17942050.
sql/sql_truncate.cc:
Check whether the truncation is successful or not and return the
appropriate values so that binlogging can be done based on that.
sql/sql_truncate.h:
Added a new enum.
The problem was in the validation of input data for blob types.
When assigned binary data, the character blob types only checked
whether the length of the data was a multiple of the minimum character
length for the destination charset.
And since e.g. UTF-8's minimum character length is 1 (because it is a
variable-length encoding), even byte sequences that are invalid UTF-8
strings (e.g. with a wrong leading byte) were copied verbatim into
UTF-8 columns when coming from binary strings or fields.
Storing invalid data in string columns had all kinds of ill effects on
code that assumed the stored data was validly encoded to begin with.
Fixed by additionally checking the incoming binary string for
validity when assigning it to a non-binary string column.
Made sure that conversions to charsets with no known "invalid" byte
ranges are not covered by the extra check.
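A minimal sketch of such a validity check, assuming the usual
CHARSET_INFO::cset->well_formed_len() interface (not the exact patch):

  static bool str_is_well_formed(CHARSET_INFO *cs,
                                 const char *str, size_t length)
  {
    int error= 0;
    /* Length of the well-formed prefix; an error or a short prefix
       means the bytes are not valid in this character set. */
    size_t valid_len= cs->cset->well_formed_len(cs, str, str + length,
                                                length, &error);
    return error == 0 && valid_len == length;
  }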
Removed trailing spaces.
Test case added.
archive table which is using an auto increment column, the
server hangs. In order to recover the mysqld process, it
has to be terminated abnormally using SIGKILL. The problem
is observed in mysql-5.5.
Bug #18065452 "PREPARING" STATE HOGS CPU WITH ARCHIVE
+ SUBQUERY
Analysis: This happens because the server is trapped in an infinite
loop in the function
"subselect_indexsubquery_engine::exec()". This function resolves the
correlated subquery by doing an index lookup through the appropriate
engine. In the case of the archive engine, "table->status" is not set
to STATUS_NOT_FOUND after the end of the records is reached. As a
result the loop is never terminated.
Fix: The "table->status" is set to STATUS_NOT_FOUND when
the end of records is reached.
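A minimal sketch of the fix (placement and the end_of_records flag are
illustrative; STATUS_NOT_FOUND and HA_ERR_END_OF_FILE are the server's
existing symbols):

  /* ... inside the archive engine's row-read path ... */
  if (end_of_records)
  {
    table->status= STATUS_NOT_FOUND; /* terminates the caller's lookup loop */
    return HA_ERR_END_OF_FILE;
  }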
THE PERFORMANCE UNDER HEAVY INSERT
Problem:
There are three memset calls to initialize the memory for the system
fields on each insert.
Solution:
Instead of calling memset three times, we can combine them into one
memset call. This reduces CPU usage under heavy insert load.
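A sketch of the idea, assuming the three system fields are laid out
contiguously (the DATA_* length names are InnoDB's; the pointer names
are illustrative):

  /* before: three separate clears */
  memset(row_id_ptr,   0, DATA_ROW_ID_LEN);
  memset(trx_id_ptr,   0, DATA_TRX_ID_LEN);
  memset(roll_ptr_ptr, 0, DATA_ROLL_PTR_LEN);

  /* after: one clear over the contiguous region */
  memset(row_id_ptr, 0,
         DATA_ROW_ID_LEN + DATA_TRX_ID_LEN + DATA_ROLL_PTR_LEN);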
Approved by Marko rb-4916
LOCAL AND IMPORT ERRORS
Description:
-----------
This bug happens because the current algorithm is designed so that, in
the case of a LOCAL load of data, the remaining part of the file is
read after an error in order to return the proper error message to the
client side.
The problem with the current implementation is that the data stream
from the client side is cleared only in the case where line delimiters
exist, which is not the case with, for example, fixed-width fields.
Fix:
----
Ported the patch provided by Sinisa Milivojevic in the bug report for
this issue to the 5.5+ versions.
As part of this patch the code is changed to clear the data stream by
calling the new member function "READ_INFO::skip_data_till_eof".
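A minimal sketch of the new member function, assuming the file's
existing GET/my_b_EOF read helpers (the real body is in
sql/sql_load.cc):

  void READ_INFO::skip_data_till_eof()
  {
    /* Drain the rest of the LOCAL data stream so the error can be
       reported without desynchronizing the client protocol. */
    while (GET != my_b_EOF)
      ;
  }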
Before this fix, specially crafted queries
using the INFORMATION_SCHEMA could crash the server.
The root cause was a buffer overflow,
see the (private) bug comments for details.
With this fix, the buffer overflow condition is properly handled,
and the queries involved do return the expected result.
Bug#17894997 CMAKE WARNING WRT INTERFACE_LINK_LIBRARIES
Bug#17905155 CMAKE WARNING WHEN GENERATING MAKEFILE
Bug#71089 CMake warning when generating Makefile
Use old policy for LINK_INTERFACE_LIBRARIES.
'mysql_config --libs' outputs -L/path/to/library;
on SunOS we also want it to output '-R/path/to/library'
so that the libraries can be found at runtime.
cmake/libutils.cmake:
Add an informational message, to show dependencies on OS libraries.
FAILING ASSERTION: FLEN == LEN
Problem:
A broken invariant was triggered when building a unique index on a
binary column while the input data contained duplicate keys. This was
broken in debug builds only.
Fix:
The fixed length of the binary datatype can be greater than the length
of the shorter prefix on which the index is being created, so the
assertion must allow for that.
Problem: While printing the server version, the mysql client
does not check for buffer overflow in a String variable.
Solution: Used a different print function which checks the
allocated length before writing into the string.
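Illustrative only (not the client's exact code; the buffer size and
the initialized MYSQL handle named mysql are assumptions): bound the
write to the buffer instead of assuming the version string fits:

  char buf[64];
  /* snprintf() never writes past sizeof(buf), unlike sprintf() */
  snprintf(buf, sizeof(buf), "Server version: %s",
           mysql_get_server_info(&mysql));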
ACCEPTED BUT PARSED INCORRECTLY
When we are setting the value of a system variable, we can set it
like:
set sys_var="Iden1.Iden2"; //1
set sys_var='Iden1.Iden2'; //2
set sys_var=Iden1.Iden2; //3
set sys_var=.ident1.ident2; //4
set sys_var=`Iden1.Iden2`; //5
While parsing, cases 1 (when ANSI_QUOTES is enabled) and 2 are taken
as string literals (an item of type Item_string is created).
Cases 3 and 4 are taken as an Item_field, where Iden1 is a table name
and iden2 is a field name.
Case 5 is again of Item_field type, where iden1.iden2 as a whole is
taken as the field name.
Now in cases such as 3 and 4, when we assign some value to a system
variable (which can take string or enumeration type data), we set only
the field part.
This means only the iden2 value will be set for the system variable,
which produces a wrong result.
Solution:
(for string types) We need to document that setting a system variable
that takes a string to such an identifier is not allowed; otherwise
the behaviour is unexpected.
(for enumeration types)
If we pass iden1.iden2, we give the error ER_WRONG_TYPE_FOR_VAR
(Incorrect argument type to variable).
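For example, for an enumeration-type variable the assignment is now
rejected (illustrative session, reusing the sys_var placeholder from
above):

  set sys_var=iden1.iden2;
  ERROR 1232 (42000): Incorrect argument type to variable 'sys_var'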
mysql-test/suite/sys_vars/t/general_log_file_basic.test:
Earlier we used to give the ER_WRONG_VALUE_FOR_VAR error, but the
patch for Bug#32748 (Inconsistent handling of assignments to
general_log_file/slow_query_log_file) quoted this line. I am not able
to find any relation between this and the changes of that patch, so I
think we should give an error in this case.
mysql-test/suite/sys_vars/t/slow_query_log_file_basic.test:
Earlier we used to give the ER_WRONG_VALUE_FOR_VAR error, but the
patch for Bug#32748 (Inconsistent handling of assignments to
general_log_file/slow_query_log_file) quoted this line. I am not able
to find any relation between this and the changes of that patch, so I
think we should give an error in this case.
Problem:
In the clustered index, when an update operation is done, the overall
scenario (after rb#4479) is as follows:
1. Delete-mark the old record that is to be updated.
2. The old record disowns the blobs.
3. Insert the new record into the clustered index.
4. For non-updated blobs, the new record must own them. Verified by an
assert.
5. Non-updated blobs are marked as inherited in the new record.
Scenario involving DB_LOCK_WAIT:
If step 3 times out, then we will skip steps 1 and 2 and continue from
step 3. This skipping is achieved by the UPD_NODE_INSERT_BLOB state.
In this case, step 4 is not correct: because steps 1 and 2 were
already performed, the blobs have been disowned, and the new record
need not own them. Hence the assert failure.
Solution:
The assert in step 4 is removed. Instead, code is added to ensure that
the record owns the blob.
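A minimal sketch of the replacement for the assert, with illustrative
helper names (the real change operates on InnoDB's blob ownership
bits):

  /* Instead of asserting ownership of a non-updated blob, make sure
     the new record owns it; was: ut_ad(blob_is_owned(rec, field)). */
  if (!blob_is_owned(rec, field))
    blob_set_owned(rec, field, mtr);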
Note:
This is a regression caused by rb#4479.
rb#4571 approved by Marko
AUTO_INCREMENT_INCREMENT
Problem:
=======
When the auto_increment_increment system variable decreases, the
immediate next value of the auto-increment column is not affected.
Solution:
========
Get the previously inserted value of the auto-increment column by
subtracting the previous auto_increment_increment from the next
auto-increment value. After that, calculate the current auto-increment
value using the newly changed auto_increment_increment variable.
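A worked sketch of the computation described above (values and
variable names are illustrative):

  /* old increment = 10, next autoinc = 31 -> last inserted = 31 - 10 = 21
     new increment = 5                     -> new next     = 21 + 5  = 26 */
  ulonglong last_inserted= next_value - old_increment;
  ulonglong new_next= last_inserted + new_increment;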
Approved by Sunny [rb#4394]
Problem:
It was reported that on the Debian and KFreeBSD platforms, on i386
architecture machines, certain SSL tests were failing. main.ssl_connect,
rpl.rpl_heartbeat_ssl, rpl.rpl_ssl1, rpl.rpl_ssl, main.ssl_cipher and
main.func_encrypt were the tests that were reportedly failing
(crashing). The reason for the crashes is said to be the assembly code
in yaSSL.
Solution:
A workaround was initially suggested, i.e. enabling the
-DTAOCRYPT_DISABLE_X86ASM flag, which would prevent the crash, but at
the expense of a 4X reduction in speed. Since this was unacceptable,
the fix is to have the functions that use assembly take their input
variables from the function call using extended inline assembly on
GCC, instead of relying on direct assembly code.
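Illustrative only (not yaSSL's actual routines): extended inline
assembly declares the inputs and outputs to GCC instead of assuming
values are already sitting in fixed registers:

  /* Rotate x left by n bits. "+r" ties x to a register as both input
     and output; "c" forces n into %ecx, since "roll" takes its count
     in %cl. */
  static unsigned rotl(unsigned x, unsigned n)
  {
    __asm__("roll %%cl, %0" : "+r"(x) : "c"(n));
    return x;
  }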
CONFIG FILES CAUSES TEST
Utility as "mysql_upgrade" forks "mysql"/"mysqlcheck". Attaching
"mysql_upgrade" shows following calls after forking "mysql" or
"mysql_check" when configuration file information is passed as
first argument to "mysql_upgrade".
strace -f ./mysql_upgrade --defaults-file=../pdb/my.cnf --socket=../pdb/mysql.sock -f
[pid 6254] stat("/etc/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/etc/mysql/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/usr/local/mysql/etc/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/home/user_name/.my.cnf", {st_mode=S_IFREG|0664, st_size=19, ...}) = 0
[pid 6254] open("/home/user_name/.my.cnf", O_RDONLY) = 3
When the tool forks "mysqlcheck"/"mysql", "--no-defaults" is passed
as the first argument. But before forking, the function "find_tool" of
"mysql_upgrade" checks whether the tool can be executed by calling
"mysqlcheck --help" and "mysql --help", and there none of the
"--no-defaults", "--defaults-file" or "--defaults-extra-file"
arguments are passed to "mysql" and "mysqlcheck".
So my.cnf is searched for in the default paths.
Fix:
------
Modified the code to pass "--no-defaults" as the first argument to
"mysql" and "mysqlcheck" when checking whether the tool can be
executed.
Performance schema tables are local to a server, and operations on
them should not be executed by the slave from the relay log.
From 5.6.10 onwards, P_S events are not written to the binary log.
But prior to that, from MySQL 5.5 onwards, P_S events are written to
the binary log by the master.
The following are the problematic scenarios:
1. Master 5.5 -> Slave 5.5
========================
A) RBR: Slave crashes.
B) SBR: P_S statements are replicated.
2. Master 5.5 -> Slave 5.6
========================
A) RBR: The SQL thread generates an error.
B) SBR: P_S statements are replicated.
3. 5.5 binlog executed on a 5.5 server using mysqlbinlog|mysql
=================================================================
A) RBR: Server crash (because of the BINLOG'...' statement).
B) SBR: P_S statements are executed.
4. 5.5 binlog executed on a 5.6 server using mysqlbinlog|mysql
================================================================
A) RBR: SQL error (because of the BINLOG'...' statement).
B) SBR: P_S statements are executed.
The generalized behaviour should be:
a) The slave SQL thread should certainly ignore P_S events read from
the relay log.
b) mysqlbinlog|mysql should replay the binlog successfully.
Problem:
The function row_upd_changes_ord_field_binary() is used to decide whether to
use row_upd_clust_rec_by_insert() or row_upd_clust_rec(). The function
row_upd_changes_ord_field_binary() does not make use of charset information.
Based on binary comparison it decides that r1 and r2 differ in their ordering
fields.
In the function row_upd_clust_rec_by_insert(), an update is done by delete +
insert. These operations internally make use of cmp_dtuple_rec_with_match()
to compare records r1 and r2. This comparison takes place with the use of
charset information.
This means that it is possible for the deleted record to be reused by
the subsequent insert. In the given scenario, the characters 'a' and
'A' are considered equal in my_charset_latin1. When this happens, the
ownership information of the externally stored blobs is not correctly
handled.
Solution:
When an update is done by delete followed by insert, disown the relevant
externally stored fields during the delete marking itself (within the same
mtr). If the insert succeeds, then nothing with respect to blob ownership
needs to be done. If the insert fails, then the disown done earlier will be
removed when the operation is rolled back.
rb#4479 approved by Marko.
The maximum value for innodb_thread_sleep_delay is 4294967295 (32-bit)
or 18446744073709551615 (64-bit) microseconds. This is far too big,
since in practice innodb_thread_sleep_delay is limited by
innodb_adaptive_max_sleep_delay whenever that variable is set to a
non-zero value (its default is 150,000).
Solution:
The maximum value of innodb_thread_sleep_delay should be the same as
the maximum value of innodb_adaptive_max_sleep_delay, which is 1000000.
Approved by Jimmy, rb#4429
Backported only the softlink part of the patch,
*not* the bumping of library version.
With this patch, the libmysql/ directory contains:
libmysqlclient.a
libmysqlclient_r.a -> libmysqlclient.a
libmysqlclient_r.so -> libmysqlclient.so*
libmysqlclient_r.so.18 -> libmysqlclient.so.18*
libmysqlclient_r.so.18.0.0 -> libmysqlclient.so.18.0.0*
libmysqlclient.so -> libmysqlclient.so.18*
libmysqlclient.so.18 -> libmysqlclient.so.18.0.0*
libmysqlclient.so.18.0.0*
Bug#68338 RFE: make tmpdir a build-time configurable option
Post-push fix: 'cmake -LH | grep TMP' showed TMPDIR as a BOOL option,
which was a bit confusing: show it as a PATH instead.
This is a backport of the patch of bug#11765785. Commit message
by Prabakaran Thirumalai from bug#11765785 is reproduced below:
Description:
------------
Global Query ID (global_query_id) is not incremented for the PING and
statistics commands. These two query types are filtered out before the
global query id is incremented. This causes a race condition and
results in duplicate query ids for different queries originating from
different connections.
Analysis:
---------
sql_parse.cc::dispatch_command() is the only place in the code which
sets thd->query_id to global_query_id and only then increments it,
based on the query type. In all other places global_query_id is
incremented first and then assigned to thd->query_id.
This is done so that global_query_id is not incremented for the PING
and statistics commands in the dispatch_command() function.
Fix:
----
As per the suggestion from Serg, "There is no reason to skip query_id
for the PING and STATISTICS command.", the check which filters out the
PING and statistics commands is removed.
Instead of using get_query_id() and next_query_id(), which can still
cause a race condition if a context switch happens right after
executing get_query_id(), the code is changed to use next_query_id()
alone, as is done in the other parts of the code that deal with
global_query_id.
Removed the get_query_id() function and forced next_query_id() callers
to use the return value by specifying the warn_unused_result
attribute.
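A minimal sketch of the racy pattern versus the fixed one (simplified
to the names used above):

  /* racy: a context switch between the read and the increment lets
     two connections observe the same id */
  thd->query_id= get_query_id();   /* read global_query_id      */
  next_query_id();                 /* increment global_query_id */

  /* fixed: a single atomic fetch-and-increment */
  thd->query_id= next_query_id();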
Description: A typo in create_tailoring() causes the "contraction_flags" to be written
into cs->contractions in the wrong place. This causes two problems:
(1) Anyone relying on `contraction_flags` to decide "could this character be
part of a contraction" is 100% broken.
(2) Anyone relying on `contractions` to determine the weight of a
contraction is mostly broken.
Analysis: When we are preparing the contractions in create_tailoring(),
we corrupt the cs->contractions memory location, which is supposed to
store the weights (8k bytes) plus the contraction information (256
bytes). We started storing the contraction information after the 4k
location because of a logic flaw in the code.
Fix: When we create the contractions, we need to calculate the
contraction location as (char*) (cs->contractions + 0x40*0x40) instead
of ((char*) cs->contractions) + 0x40*0x40. This moves past the 8k
bytes of "cs->contractions" and stores the contraction information
from there. Similarly, when calculating it for LIKE range queries, we
need to start from the 8k byte offset onwards, which is done by
changing the logic to (const char*) (cs->contractions + 0x40*0x40).
And for ucs2 charsets we need to modify
my_cs_can_be_contraction_head() and my_cs_can_be_contraction_tail()
to point to the 8k+ locations.
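Why the cast placement matters, as a standalone sketch (assuming, per
the analysis above, that the contraction table stores 16-bit weights,
so pointer arithmetic scales by 2):

  #include <stdint.h>

  /* correct: offset by 0x40*0x40 uint16_t elements == 8192 bytes */
  char *flags_location(uint16_t *contractions)
  {
    return (char *) (contractions + 0x40 * 0x40);
  }

  /* buggy: cast first, so the offset is only 4096 bytes */
  char *buggy_flags_location(uint16_t *contractions)
  {
    return ((char *) contractions) + 0x40 * 0x40;
  }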
OF ROW DATA
Problem:
========
Inserting a row larger than 4G when the server uses RBR leads to a
crash.
Analysis:
========
Row-based binary logging logs changes to individual table rows.
During the execution of DML statements in RBR, the actual row data is
stored in the "m_rows_buf" buffer, and this buffer's contents are
written to the binary log. "m_rows_buf" is prepared in the function
"Rows_log_event::do_add_row_data".
When a huge row is specified, as in this bug scenario where the row
size is 4294971520 > UINT_MAX (4294967295), "m_rows_buf" is
reallocated to accommodate the row data and the row is then copied
into the buffer. During this realloc call the length is type-cast to
"uint", which overflows. Because of the overflow, the reallocated
buffer is smaller than what was requested, and copying the row data
into it results in a crash.
Hence rows of size > 4GB cannot be written to the binary log. By
default the event_length is stored within 4 bytes, which in turn
restricts how large an event can grow, so large rows cannot be
replicated using row-based replication.
Fix:
===
An error is generated if the row size exceeds the 4GB limit.
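A minimal sketch of the added guard (names simplified; the real check
sits in Rows_log_event::do_add_row_data in sql/log_event.cc, and the
error code shown is illustrative):

  /* Refuse rows whose length cannot fit in the 4-byte event length,
     instead of letting the cast to uint silently overflow. */
  if (row_length > (size_t) UINT_MAX)
  {
    my_error(ER_BINLOG_ROW_LOGGING_FAILED, MYF(0));
    return 1;
  }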
sql/log_event.cc:
An error is generated if the row size exceeds the 4GB limit.
Debug simulations were added to test the fix.