BUG#64503: mysql frequently ignores --relay-log-space-limit
When the SQL thread goes to sleep, waiting for more events, it sets
the flag ignore_log_space_limit to true. This gives the IO thread a
chance to queue some more events, and ultimately the SQL thread will be
able to purge the log once it is rotated. At that point the SQL thread
resets ignore_log_space_limit to false. However, between the time
the SQL thread has set the ignore flag and the time it resets it, the
IO thread will be queuing events in the relay log, possibly going way
over the limit.
This patch makes the IO and SQL threads synchronize when they reach
the space limit and only ask for one event at a time. Thus the SQL
thread sets the ignore_log_space_limit flag and the IO thread resets it
to false every time it processes one more event. In addition, every time
the SQL thread processes the next event, and the limit has been
reached, it checks whether the IO thread should rotate. If it should, it
instructs the IO thread to rotate, giving the SQL thread a chance to
purge the logs (freeing space). Finally, this patch removes the
resetting of the ignore_log_space_limit flag from purge_first_log,
because this is now reset by the IO thread every time it processes the
next event when the limit has been reached.
If the SQL thread is in a transaction, it cannot purge, so there is no
point in asking the IO thread to rotate. The only thing it can do is
to ask for more events until the transaction is over (then it can ask
the IO thread to rotate and purge the log right away). Otherwise, there would
be a deadlock (the SQL thread would not be able to purge and the IO thread would
not be able to queue events so that the SQL thread could finish the transaction).
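For context, a minimal usage sketch of the option this patch enforces; the option
name is from the bug title, the values below are hypothetical:
  # slave my.cnf (hypothetical value): cap total relay log space at 4MB
  [mysqld]
  relay-log-space-limit = 4194304
  -- after the fix, the Relay_Log_Space column on the slave should stay close to
  -- the configured limit even under a sustained stream of events from the master:
  SHOW SLAVE STATUS\G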
Problem: Grouping results by VALUES(alias for string literal) causes
the server to crash.
Item_insert_values is not constructed to handle argument types other
than a field or a reference to a field. In this case, the
argument is an Item_string, and this causes
Item_insert_values::fix_fields() to crash.
Fix: Issue an error message when the argument to Item_insert_values is
not a field or a reference to a field.
This is slightly at odds with the documentation, which states that
VALUES should return NULL, but the error message is only issued in
cases where the server would otherwise crash, so there is no change in
behavior for queries that already work. Future versions will restrict
the syntax so that using VALUES in this way is illegal.
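A hedged sketch of a query of the reported shape (table and alias names are
hypothetical); before the fix this crashed, after the fix it returns an error:
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  -- VALUES() applied to an alias for a string literal instead of a field:
  SELECT 'text' AS lit FROM t1 GROUP BY VALUES(lit);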
mysql-test/r/errors.result:
Add test case for bug #13031606.
mysql-test/t/errors.test:
Add test case for bug #13031606.
sql/item.cc:
Issue error message if argument is not field or reference to field.
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output from
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in the
SELECT part of SELECT INTO it will be properly marked as non-const
for this query.
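A minimal sketch, with hypothetical names, of the statement shape this change
addresses: the variable receiving the SELECT INTO output also appears in the
SELECT part.
  SET @v = 0;
  SELECT @v + 1 INTO @v;   -- @v must be treated as non-const for this query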
Test case added.
Used proper fast iterator.
storage/innobase/include/sync0rw.ic:
Prerequisite for compiling with gcc4 on solaris: ignore result from
os_compare_and_swap_ulint
storage/myisam/mi_dynrec.c:
Prerequisite for compiling with gcc4 on solaris: cast to void*
A defect in the subquery substitution code may lead to a server crash:
setting the substitution's name should be followed by setting its length
(to keep the two in sync).
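One hypothetical statement of the shape named in the bug title below (a scalar
subquery over a global variable used as a geometry function argument); the
actual test query may differ:
  SELECT ASTEXT((SELECT @@global.max_allowed_packet));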
mysql-test/r/gis.result:
BUG#12537203 - CRASH WHEN SUBSELECTING GLOBAL VARIABLES IN GEOMETRY FUNCTION ARGUMENTS
test result.
mysql-test/t/gis.test:
BUG#12537203 - CRASH WHEN SUBSELECTING GLOBAL VARIABLES IN GEOMETRY FUNCTION ARGUMENTS
test case.
sql/item_subselect.cc:
BUG#12537203 - CRASH WHEN SUBSELECTING GLOBAL VARIABLES IN GEOMETRY FUNCTION ARGUMENTS
set substitution's name length as well as the name itself (to keep them in sync).
Problem:
lack of incoming geometry data validation may
lead to a server crash when the ISCLOSED() function is called.
Solution:
added the necessary check of the incoming data.
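A minimal sketch, with hypothetical data, of what the added check guards:
  SELECT ISCLOSED(GEOMFROMTEXT('LINESTRING(0 0, 1 1, 0 0)'));  -- well-formed, returns 1
  -- a LINESTRING value carrying no points (e.g. truncated binary input) must now
  -- be rejected instead of being dereferenced, which previously crashed 64-bit builds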
mysql-test/r/gis.result:
Fix for BUG#12414917 - ISCLOSED() CRASHES ON 64-BIT BUILDS
test result.
mysql-test/t/gis.test:
Fix for BUG#12414917 - ISCLOSED() CRASHES ON 64-BIT BUILDS
test case.
sql/spatial.cc:
Fix for BUG#12414917 - ISCLOSED() CRASHES ON 64-BIT BUILDS
check that a LINESTRING has at least one point, as we
rely on that further on.
There are two threads. In one thread, a DML operation involving a
cascaded update is in progress. In another thread, an ALTER TABLE
adding a foreign key constraint is running. Under these
circumstances, it is possible for the DML thread to access a
dict_foreign_t object that has been freed by the DDL thread.
The debug sync test case provides the sequence of operations.
Without the fix, the test case will crash the server (because of the
newly added assert). With the fix, the ALTER TABLE statement will return
an error message.
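A hedged sketch of the two concurrent statements involved (table and column
names are hypothetical):
  -- connection 1: DML driving a cascaded update through an existing
  -- FOREIGN KEY ... ON UPDATE CASCADE from child to parent
  UPDATE parent SET id = id + 1;
  -- connection 2: concurrently adding another foreign key constraint on child
  ALTER TABLE child ADD CONSTRAINT fk2 FOREIGN KEY (other_id) REFERENCES parent (id);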
Backporting the fix from MySQL 5.5 to 5.1
rb:961
rb:947
Analysis:
========================
sql_mode "NO_BACKSLASH_ESCAPES": when the user wants to use the backslash as an
ordinary input character, rather than as the escape character in a string literal,
sql_mode can be set to "NO_BACKSLASH_ESCAPES". With this mode enabled, the
backslash becomes an ordinary character like any other.
The SQL_MODE setting applies to the current client session. When a stored
procedure is created, MySQL stores the current sql_mode and always executes the
stored procedure with the sql_mode stored with it, regardless of the server SQL
mode in effect when the routine is invoked.
In the reported scenario, the routine is created with
sql_mode=NO_BACKSLASH_ESCAPES, and it is executed while the invoker's sql_mode
is "" (not set), via the statement "call testp('Axel\'s')".
Since the invoker's sql_mode is "" (not set), the '\' in 'Axel\'s' (the argument to
the procedure) is treated as an escape character, and column "a" (of table "t1") is
updated with "Axel's". The binary log generated for the above update operation is:
set sql_mode=XXXXXX (for no_backslash_escapes)
update test.t1 set a= NAME_CONST('var',_latin1'Axel\'s' COLLATE 'latin1_swedish_ci');
When logging stored procedure statements, the local variables (parameters) used in
the statements are replaced with NAME_CONST(var_name, var_value), an internal function
(http://dev.mysql.com/doc/refman/5.6/en/miscellaneous-functions.html#function_name-const).
On the slave, these logs are applied, and NAME_CONST is parsed to obtain the variable
and its value. Since the stored procedure was created with
sql_mode="NO_BACKSLASH_ESCAPES", that sql_mode is also logged, so the slave sets it
before executing the routine's statements. With sql_mode set to
"NO_BACKSLASH_ESCAPES" on the slave, while parsing the NAME_CONST of the string
variable, '\' is treated as a non-escape character, and parsing reports an error for
the "'" (there is only one "'" and no usable backslash).
The slave's parsing under sql_mode "NO_BACKSLASH_ESCAPES" was correct; the error was
introduced when writing the binary log, where the "'" (of Axel's) was escaped with a
"\" character. In fact, all special characters (n, r, ', ", \, 0, ...) were escaped
when writing NAME_CONST for string variables (parameters, local variables) to the
binary log, irrespective of the "NO_BACKSLASH_ESCAPES" sql_mode. So, basically, the
problem is that logging a string parameter did not take the sql_mode value into
account.
Fix:
========================
When sql_mode is set to "NO_BACKSLASH_ESCAPES", escaping characters such as
(n, r, ', ", \, 0, ...) should be avoided. To do so, added a check that skips
escaping these characters when writing NAME_CONST for string variables to the
binary log.
When sql_mode is set to NO_BACKSLASH_ESCAPES, the quote character "'" is instead
represented as ''.
http://dev.mysql.com/doc/refman/5.6/en/string-literals.html (There are several
ways to include quote characters within a string.)
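A hedged reconstruction of the reported scenario (the names t1, a, testp and the
value 'Axel\'s' come from the description above; the exact DDL may differ from the
original test):
  CREATE TABLE t1 (a VARCHAR(30));
  SET sql_mode = 'NO_BACKSLASH_ESCAPES';
  CREATE PROCEDURE testp(var VARCHAR(30)) UPDATE t1 SET a = var;
  SET sql_mode = '';               -- invoker session runs without the mode
  CALL testp('Axel\'s');           -- '\' acts as an escape here, so a = "Axel's"
  -- Before the fix, the binary log wrote NAME_CONST('var', ...) with the quote
  -- escaped by '\' even though the logged sql_mode was NO_BACKSLASH_ESCAPES,
  -- so the slave failed to parse the statement.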
mysql-test/r/sql_mode.result:
Added test case for Bug#12601974.
mysql-test/suite/binlog/r/binlog_sql_mode.result:
Appended result of test cases added for Bug#12601974.
mysql-test/suite/binlog/t/binlog_sql_mode.test:
Added test case for Bug#12601974.
mysql-test/t/sql_mode.test:
Appended test cases added for Bug#12601974.
Problem - The default port number shown in SHOW SLAVE HOSTS is always 3306,
even though the slave is actually listening on a different port number.
This is a problem because the user cannot be sure whether this port
value can be trusted, and a client trying to read the replication
topology can get confused.
Fix - 3306 ceases to be the default value of report-port. Moreover, report-port
no longer has a static default.
Instead we initialize report-port to 0 as the new default value and change
it based on two checks:
1) If report-port is not set, the slave reports the port number it is listening
on (i.e. if report-port is not set we get the actual value of the slave's
port number).
2) If report-port is set, we show the value report-port is set to as the slave's
port number.
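A hedged usage sketch (the slave port 3310 is hypothetical; 9000 is the value
used in the new test):
  -- slave started with --port=3310 and no --report-port: the master now sees 3310
  -- slave restarted with --report-port=9000: the master sees 9000
  SHOW SLAVE HOSTS;   -- run on the master to inspect the port reported by the slave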
mysql-test/include/show_slave_hosts.inc:
A .inc file is added to run SHOW SLAVE HOSTS in the new test.
mysql-test/r/mysqld--help-notwin.result:
Updated the result file to show the default value passed for report-port.
mysql-test/suite/rpl/r/rpl_report_port.result:
The result file for the new test that is added.
mysql-test/suite/rpl/r/rpl_show_slave_hosts.result:
Updated the result file to show the default value passed for report-port.
mysql-test/suite/rpl/t/rpl_report_port-slave.opt:
Option file for the new test added.
mysql-test/suite/rpl/t/rpl_report_port.test:
Added a test to check the correct functionality of report-port.
We check this by running replication twice.
In the first run we do not set the value of report-port through the opt file
and get the actual port number of the slave.
We then restart the server with report-port set to some value (in this case 9000)
and check the value reported for the slave's port number.
mysql-test/suite/sys_vars/t/report_port_basic.test:
Update the test file to show the value for report-port. It is replaced with
SLAVE_PORT as the actual value of the report-port will change with each run.
sql/mysqld.cc:
Changed the value reported by report-port:
1. If the value for report-port is not set we assign report-port to be the
actual port number of the slave (mysqld_port).
2. If report-port is set we get the value set for the report-port.
sql/sys_vars.cc:
Passed 0 as the default value of the report-port.
PROBLEM: After WL 4144, when using MyISAM Merge tables, the routine
open_and_lock_tables will append the base tables that make up the
MERGE table to the list of tables to lock. This has two side-effects in
replication:
1. On the master side, we log additional table maps for the base
tables, since they appear in the list of locked tables, even
though we don't really use them on the slave.
2. On the slave side, when opening a MERGE table while applying a
ROW event, additional tables are appended to the list of tables
to lock.
Side-effect #1 is not harmful. It's just that when using MyISAM Merge
tables a few more table maps may be logged.
Side-effect #2 is harmful, because the list rli->tables_to_lock is an
extended structure from TABLE_LIST in which the extra fields are
filled from the table maps that are processed. Since
open_and_lock_tables appends tables to the list after all table map
events have been processed we end up with entries without
replication/table map data on them. Thus when trying to access that
info for these extra tables, the server will crash.
SOLUTION: We fix side-effect #2 by making sure that we only access the
replication part of the structure for those entries in the list that were
accounted for when processing the corresponding table map events. All
in all, we never go beyond rli->tables_to_lock_count.
We also deploy an assertion when clearing rli->tables_to_lock, making
sure that the base tables are not in the list anymore (they were closed
in close_thread_tables).
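A hedged sketch of the affected setup (table names are hypothetical), replicated
with binlog_format=ROW:
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  CREATE TABLE tm (a INT) ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=FIRST;
  INSERT INTO tm VALUES (1);
  -- applying the resulting row event on the slave opened the base tables t1 and
  -- t2 as well, appending them to rli->tables_to_lock without table-map data.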
CHECK_SIMPLE_EQUALITY
PROBLEM:
Crash in "check_simple_equality" when using a subquery with "IN" and
"ALL" in prepare.
ANALYSIS:
Crash can be reproduced using a simplified query like this one:
prepare s from "select 1 from g1 where 1 < all (
select @:=(1 in (select 1 from g1)) from g1)";
This bug is currently present only in 5.5 and 5.1. It is fixed as part
of worklog #1110 in 5.6. We are taking one change from there to fix this
in 5.5 and 5.1.
The problem arises because we try to evaluate "is_null"
on an argument which is part of a subquery
(in Item_is_not_null_test::update_used_tables()).
But this evaluation should only be done when no subquery is
present, which is to say when "with_subselect" is not set.
With respect to the above query, we create an object of type
"Item_in_optimizer", which by definition is always associated with a
subquery. While in 5.6 we set "with_subselect" to true for the
"Item_in_optimizer" object, we do not do the same in 5.5. This results in
the evaluation of "is_null" ending in a coredump.
So we now set "with_subselect" to true for "Item_in_optimizer"
in 5.1 and 5.5.
mysql-test/r/func_in.result:
Result file changes for the test case added
mysql-test/t/func_in.test:
Test case added for Bug#13012483
sql/item_cmpfunc.h:
Changed Item_in_optimizer::Item_in_optimizer() to set "with_subselect"
to true.
PARTITION STATISTICS
Problem was the fix for bug#11756867; it always used the first
partitions, and stopped after it had checked 10 [sub]partitions
(or until it found a partition which would contain a match).
This results in bad statistics for tables where the first 10 partitions
don't represent the majority of the data (like when the first 10
partitions only contain a few rows in total).
The solution is to take statistics from the partitions containing
the most rows instead:
Added an array of partition ids which is sorted by number of records
in descending order.
This array is used in records_in_range to cover as many records as
possible in as few calls as possible.
Also changed the limit on how many partitions to use for the statistics
from a static maximum of 10 partitions to a dynamic model:
the maximum number of partitions is now log2(total number of partitions),
taken from the ordered array.
It will continue calling records_in_range on partitions until it has
checked:
(total rows in matching partitions) * (maximum number of partitions)
/ (number of used partitions)
Also reverted the changes to ha_partition::scan_time() and
ha_partition::estimate_rows_upper_bound() to before
the fix for bug#11756867, since they are not as slow as
records_in_range.
RESULT FROM PREVIOUS TRANSACTION
The current Query Cache API is not fully compatible with
the partitioning engine.
There is no good way to implement support for the QC because:
1) a static callback for ha_partition would need to have access
to all partition names and call the underlying callback for each
[sub]partition with the correct name.
2) pruning would be impossible: even if one used the ulonglong
engine_data, the table is invalidated by the QC whenever
engine_data changes.
So the only viable solution to avoid incorrect data is to not allow
caching of queries that use partitioned tables.
(There are some extra changes due to the removal of \r as line break.)
If a query's end time is before its start time, the system clock has been turned back
(daylight saving time etc.). When the system clock is changed, we can't tell for certain whether a
given query was actually slow. We did not protect against logging such a query with a bogus
execution time (resulting from end_time - start_time being negative), and possibly logging it
even though it did not really take long to run.
We now have a sanity check in place.
sql/sql_parse.cc:
Make sure the end time is not before the start time - otherwise, we can be SURE the system clock
was changed in between, but not by how much. In other words, when the clock is changed,
we don't know how long a query ran, or whether it was slow.
On shutdown(), Windows can drop traffic still queued for sending even if that
wasn't specifically requested. As a result, fatal errors (those after whose
signaling the server drops the connection) were sometimes only
seen as "connection lost" on the client side, because the server-side
shutdown() erroneously discarded the correct error message before sending
it.
On Windows, we now use the Windows API to access the (non-broken) equivalent
of shutdown().
Backport from trunk
include/violite.h:
export mysql_socket_shutdown(). It lives in vio in the backport.
sql/mysqld.cc:
Go through our own shutdown() rather than straight to the POSIX one.
vio/viosocket.c:
Define mysql_socket_shutdown(). On UNIXoid systems, it's just a wrapper for shutdown(), but
on Windows it uses DisconnectEx, which is magic.
This patch is a backport of some of the cleanups/refactorings that were done
as part of WL#1393 Optimizing filesort with small limit.
mysql-test/t/bug13633383.test:
New test case.
mysql-test/valgrind.supp:
Changed signature for find_all_keys
Fixed a typo in the comment.
Fixed test cases which previously were not throwing warnings due to the
disable warnings macro.
sql/sql_base.cc:
Change in indentation and fixing a typo in the comment.
Problem: Statements that write to tables with auto_increment columns
based on the selection from another table, may lead to master
and slave going out of sync, as the order in which the rows
are retrieved from the table may differ on master and slave.
Solution: We mark statements that write to a table with an auto_increment
column based on rows selected from another table as unsafe. This
will cause the execution of such statements to throw a warning
and forces the statement to be logged in ROW format if the logging
format is MIXED.
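A hedged illustration of statement shapes that are now flagged (table names are
hypothetical):
  CREATE TABLE src (a INT);
  CREATE TABLE dst (id INT AUTO_INCREMENT PRIMARY KEY, a INT);
  INSERT INTO dst (a) SELECT a FROM src;   -- write to an autoinc table based on a SELECT: unsafe
  CREATE TABLE dst2 (id INT AUTO_INCREMENT PRIMARY KEY) SELECT a FROM src;   -- CREATE ... SELECT: unsafe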
Changes:
1. All the statements that write to a table with auto_increment
column(s) based on the rows fetched from another table, will now
be unsafe.
2. CREATE TABLE with SELECT will now be unsafe.
sql/share/errmsg-utf8.txt:
Added new warning messages.
sql/sql_base.cc:
-Created a function to check for statements that write to
tables with an auto_increment column based on a select.
-Marked all the statements that write to a table
with an auto_increment column based on rows fetched
from other table(s) as unsafe.
sql/sql_table.cc:
Mark CREATE TABLE [with auto_increment column] ... SELECT as unsafe.
Problem: Statements that write to tables with auto_increment columns
based on the selection from another table, may lead to master
and slave going out of sync, as the order in which the rows
are retrieved from the table may differ on master and slave.
Solution: We mark statements that write to a table with an auto_increment
column as unsafe. This will cause the execution of such statements to
throw a warning and forces the statement to be logged in ROW format if
the logging format is MIXED.
Changes:
1. All the statements that write to a table with auto_increment
column(s) based on the rows fetched from another table, will now
be unsafe.
2. CREATE TABLE with SELECT will now be unsafe.
sql/share/errmsg-utf8.txt:
Added new warning messages.
sql/sql_base.cc:
created a new function that checks for select + write on an autoinc table
and made all such statements unsafe.
sql/sql_parse.cc:
made create auto-increment table + select unsafe
IS EXECUTED TWICE FROM P
This bug is a duplicate of bug 12567331, which was pushed to the
optimizer backporting tree on 2011-06-11. This is just a back-port of
the fix. Both test cases are included as they differ somewhat.