ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for bug#11751794.
While checking for the keypart covering a non-grouping attribute, we were
not checking whether the root node of the SEL_ARG* tree for the index has
any cvalue or not.
In a previous fix, user variables with a zero-length name were incorrectly
treated as event corruption, even though they are allowed by the server.
Fix this wrong assumption by once again allowing user variables with a
zero-length name in the binary log.
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA INFILE query, user variables are allowed
inside the "Into_list" and "Set_list". The user variables used
inside these two lists are not properly guarded with backticks
while the server is writing into the binlog. Hence user variable names
like a` cannot be used in this context.
Fix: Properly quote these variables while
writing into the binlog.
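As a hedged illustration, a statement of the affected kind (table, columns
and file name are made up); the user variable name contains a backtick, which
after the fix is written to the binlog doubled inside backtick quotes:
LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1
  (c1, @`a``b`)
  SET c2 = @`a``b`;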
CERTAIN LEVEL
Problem description: mysqld crashes when we update the max_connections
variable to a value lower than the number of currently open connections.
Analysis: The "alarm_queue.max_elements" size is decided at
server start time and gets modified when we change the max_connections
value. In the current scenario the value of "alarm_queue.max_elements"
is decremented when max_connections is set to 2. When updating the
"alarm_queue.max_elements" value we are not updating the "max_used_alarms"
value. Hence, instead of getting the warning "thr_alarm queue is full",
the server ends up asserting at the time of inserting new
elements into the queue.
Fix: the fix is to dynamically increase the size of the alarm_queue.
In order to do that, queue_insert_safe() should be used instead of
queue_insert().
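A rough reproduction sketch under the assumption that more than two client
connections are already open (the counts are illustrative):
-- with, say, 10 client connections already open:
SET GLOBAL max_connections = 2;
-- before the fix, subsequent activity could assert the server
-- instead of producing the "thr_alarm queue is full" warning.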
buf_page_get_gen(): Do not attempt to decompress a compressed-only
page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by
btr_search_drop_page_hash_when_freed(). There cannot be any adaptive
hash index pointing to a page that does not exist in uncompressed
format in the buffer pool.
innodb_buffer_pool_evict_update(): New function for debug builds, to handle
SET GLOBAL innodb_buffer_pool_evict='uncompressed'
by evicting all uncompressed page frames of compressed tablespaces
from the buffer pool.
rb#1873 approved by Jimmy Yang
RATHER THAN A TABLE
Problem: In RBR, if a table is converted into a view on the slave
(i.e., "drop table 'object1'" & "create view 'object1'"), then any
DML operation on the table at the master causes a crash at the slave.
Analysis: The slave prepares a list of tables to be opened for DML when it
receives Table_map_log_event(s), and the same list is sent to the
open_table function. The open_table logic assumes that if the list
contains a view object, it also contains the "select_lex" object of
that view. In the above special case, the table object does not
contain a 'select_lex' as it is a base table at the master. Since it
is a view at the slave, the open_table logic goes into the 'mysql_make_view()'
function, which assumes that a 'select_lex' exists for the object.
Fix: While preparing the 'tables to be opened' list, we should make
sure that the required table type is 'base table'. If the object is not
a base table while opening it, mysql_make_view will throw an
error similar to 'object is not a base table'.
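An illustrative reproduction sketch (object name is made up), assuming
row-based replication is in effect:
-- on the master:
CREATE TABLE object1 (a INT);
-- on the slave only:
DROP TABLE object1;
CREATE VIEW object1 AS SELECT 1 AS a;
-- on the master (replicated as Table_map + Write_rows events):
INSERT INTO object1 VALUES (1);
-- before the fix this crashed the slave; with the fix the slave
-- reports an error similar to 'object is not a base table'.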
Problem: When a view with a specific character set and collation
is created on top of another view with a different character set and collation,
the dump restoration results in an "illegal mix of collations" error.
SOLUTION: To avoid this mix of collations, the column datatype used in the
standin CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter,
as the table created is dropped during restore, and tinyint is specifically
used to avoid hitting row-size conflicts.
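An illustrative sketch of the kind of standin definition a dump could contain
after the fix (the view, column and base-table names are made up; only the
hardcoded column type is the point):
-- temporary standin written for the view, later dropped and
-- replaced by the real view definition during restore:
CREATE TABLE `v1` (
  `c1` tinyint NOT NULL
);
DROP TABLE `v1`;
CREATE VIEW `v1` AS SELECT `c1` FROM `t1`;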
Consider the following query:
SELECT f_1,..,f_m, AGGREGATE_FN(C)
FROM t1
WHERE ...
GROUP BY ...
Loose index scan ("Using index for group-by") can be used for
this query if there is an index 'i' covering all fields in the
select list, and the GROUP BY clause makes up a prefix f_1,...,f_n
of 'i'. Furthermore, according to rule NGA2 of
get_best_group_min_max(), the WHERE clause must contain a
conjunction of equality predicates for all fields f_n+1,...,f_m.
The problem in this bug was that a query with a WHERE clause that
broke NGA2 was not detected and therefore used loose index scan.
This led to wrong results. The query had an index
covering (c1,c2) and had:
"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
or
"WHERE (c1 = 1 ) OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
This WHERE clause cannot be transformed to a conjunction of
equality predicates.
The solution is to introduce another rule, NGA3, that complements
NGA2. NGA3 says that if a gap field (a field between those
listed in GROUP BY and C in the index) has a predicate, then
there can only be one range in the query. This requirement is
stricter than it needs to be in theory. BUG 15947433 will deal
with that.
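As a hedged illustration (the table and index are hypothetical, not from the
bug report), assuming an index on (c1, c2, c3) and the aggregate on c3:
-- NGA3 still permits loose index scan: a single range, with the
-- gap field c2 restricted by one equality predicate:
SELECT c1, MIN(c3) FROM t1 WHERE c2 = 'a' GROUP BY c1;
-- NGA3 now rejects loose index scan: the predicate on the gap
-- field c2 produces more than one range over the index:
SELECT c1, MIN(c3)
FROM t1
WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
GROUP BY c1;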
Problem:-
In the case of a BLOB field, UNION ALL doesn't give the correct result.
Analysis:-
In a MyISAM table, when we don't want to check distinctness for a particular
key, we set the key_map to zero.
While writing a record into the MyISAM table, we check distinctness with the
help of the keys, by checking whether the key is active in key_map before
writing the record.
In the case of a BLOB field, we check distinctness via a unique constraint,
where we do not check whether that unique key is active in key_map.
Solution:-
Before checking for distinctness, check whether any key is active in key_map.
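A minimal reproduction sketch under the stated analysis (the table name and
data are made up); UNION ALL must preserve duplicates even for BLOB columns:
CREATE TABLE t1 (b BLOB);
INSERT INTO t1 VALUES ('x');
SELECT b FROM t1
UNION ALL
SELECT b FROM t1;
-- expected: two rows; before the fix the unique constraint on the
-- internal temporary table could drop the duplicate BLOB row.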
WITH AN ASSERTION
Recently we added a check to handle the kill query signal for long-running
queries.
While the query interruption is reported, it must be ensured that the cursor
is restored to a proper state for the HANDLER interface to work correctly.
A normal select query will not face this problem, as on receiving the
interrupt the select query is aborted and a new select query results in
re-initialization (including of the cursor).
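An illustrative HANDLER session (table name made up) showing why the cursor
state matters after an interrupted read:
HANDLER t1 OPEN;
HANDLER t1 READ FIRST;
-- suppose a long HANDLER ... READ is interrupted here by KILL QUERY;
-- the cursor must be left in a consistent state so the session can
-- keep reading from where it stopped:
HANDLER t1 READ NEXT;
HANDLER t1 CLOSE;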
rb://1836. Approved by Marko.
Analysis:
--------
The REPLACE operation produces incorrect output when
a user variable is supplied as an argument and there
are multiple rows on which the operation is performed.
Consider the example below:
SET @var='(( 00000000 ++ 00000000 ))';
SELECT REPLACE(@var, '00000000', table_name) AS a FROM
INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';
Invalid output:
+---------------------------------------+
| REPLACE(@var, '00000000', TABLE_NAME) |
+---------------------------------------+
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
......
......
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
+---------------------------------------+
The user argument supplied as the string to the REPLACE
operation is overwritten after the first iteration
to '(( columns_priv ++ columns_priv ))'.
This overwritten string is then used for the subsequent
REPLACE iterations. Since the pattern string is no longer
found, the invalid output shown above is returned.
Fix:
---
If Alloced_length is zero, realloc() and create a
copy of the string, which is then used for the REPLACE
operation in every iteration.
INCLUDES FIRST PARTITION WHEN PRUNING
PROBLEM
-------
TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, which
were stored in the first partition; therefore the first partition
was always included in the scan when a range was specified.
FIX
---
The fix is a small optimization which prunes the scanning of the
NULL/first partition if the dates specified in the range are valid
and in the same year and month. The TO_SECONDS()
function is not supported in 5.1, so it has been removed from the fix and
test scripts for the mysql-5.1 version.
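An illustrative sketch (schema and partition boundaries are made up) of the
pruning this enables:
CREATE TABLE t1 (d DATE)
PARTITION BY RANGE (TO_DAYS(d)) (
  PARTITION p0 VALUES LESS THAN (TO_DAYS('2013-01-01')),
  PARTITION p1 VALUES LESS THAN (TO_DAYS('2013-02-01')),
  PARTITION p2 VALUES LESS THAN MAXVALUE);
-- with valid dates in the same year and month, the NULL/first
-- partition p0 can now be pruned from the scan:
EXPLAIN PARTITIONS SELECT * FROM t1
  WHERE d BETWEEN '2013-01-10' AND '2013-01-20';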
AVAILABLE MEMORY IS TOO LOW
Analysis:
---------
In the function "mysql_make_view", "table->view" is initialized
after parsing (using File_parser::parse) the view definition.
If the "::parse" function fails, control moves to label
"err:", where we have an assert (table->view == thd->lex).
This assert fails if the "::parse" function fails, as
table->view is not yet initialized.
File_parser::parse fails if the data being parsed is incorrect/
corrupted or when memory allocation fails. In this scenario
it is failing because of a failure in memory allocation.
Fix:
---------
In case of failure in the function "File_parser::parse", moving
to label "err:" is incorrect. Modified the code to move
to label "end:".
DIAGNOSTICS_AREA::SET_OK_STATUS
Use DBUG_RETURN() instead of return() if DBUG_ENTER() is
used in the function. This patch is to fix the Windows
pb2 failure on mysql-5.1
Approved by Marko. rb#1792
DIAGNOSTICS_AREA::SET_OK_STATUS
Test fails on the 5.1 valgrind build. This is because of a close(-1)
system call.
Fixed by adding extra checks for a valid file descriptor.
Approved by Vasil(Calvin). rb#1792
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1
While converting a directory name to a filename, a
file separator (FN_LIBCHAR) might get appended
to the resulting file name. This can result in an
off-by-one error when the length of the input string
is equal to FN_REFLEN. In this case, the terminating
'\0' gets written beyond the buffer allocated to store
the result.
Fixed by incrementing the dst buffer size by 1. As
extra safety, switched to strnmov() and added a debug
assert to check the length of the input file name.
No test case added as the scenario is already
covered by the test cases added for bugs in
the description.
Problem: If the disk becomes full while writing into the binlog,
the server instance hangs until someone frees up the space.
After the user frees up the disk space, the mysql server crashes
with an assert (m_status != DA_EMPTY).
Analysis: wait_for_free_space is being called in an
infinite loop, i.e., the server instance will hang until
someone frees up the space, so there is no need to
set the status bit in the diagnostics area.
Fix: Replace my_error/my_printf_error with
sql_print_warning(), which prints the warning in the error log.
Details of BUG#11746142: CALLING MYSQLD WHILE ANOTHER
INSTANCE IS RUNNING, REMOVES PID FILE
Fix: Before removing the pid file, ensure it was created
by the same process, leave it intact otherwise.
Some shell interpreters do not support the '-e' test
primary for constructing conditions.
man test 1 (on S10)
...skip...
-e file True if file exists. (Not available in sh.)
...skip...
Hence, checking for the existence of a file using
'-e' might result in a syntax error in such
shell programs.
Fixed by replacing it with '-f'.
DOS ATTACKS
Problem:
For a detailed description, see Bug#42502. This bug is a duplicate
of Bug#42502. The complete fix for Bug#42502 was not made as
proposed, hence the bug still persists.
Fix:
Make the changes as originally proposed for the fix of Bug#42502,
which is to remove the memory allocation done before we actually
check for any errors.
TO SIGNED
Problem:
When we are joining types (of fields) in the case of a union, we usually
upgrade the datatypes to the largest one present in the query.
In the case of MEDIUMINT, this is not happening.
Analysis:
When joined with types LONG and LONGLONG, mediumint should get
upgraded to LONG and LONGLONG respectively.
W.r.t. the given query, the constant '1' will be created internally as a
LONGLONG with the SIGNED flag enabled. As a result, while combining the
types for the field, LONGLONG along with MEDIUMINT gets converted
to LONG first. LONG with MEDIUMINT (of the third select) gets converted
to MEDIUMINT. The SIGNED flag would be that of the first field.
As a result, the final result would be a SIGNED MEDIUMINT.
Fix:
While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with LONG
are now converted to LONGLONG and LONG respectively. Also made some
changes for FLOAT and DOUBLE.
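A hedged reproduction sketch (the table and query are made up, since the
original query is not shown); the point is the merged type of the UNION
result column:
CREATE TABLE t1 (c MEDIUMINT UNSIGNED);
INSERT INTO t1 VALUES (16000000);
CREATE TABLE t2 AS
  SELECT 1 AS c
  UNION SELECT c FROM t1
  UNION SELECT c FROM t1;
-- before the fix the merged column could come out as a signed
-- MEDIUMINT, truncating the unsigned value; after the fix the
-- MEDIUMINT/LONGLONG combination is widened to LONGLONG.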
Analysis:
When the thread cache is enabled, it does not properly initialize
thd->start_utime when a thread is picked from the thread cache.
This breaks the quota management mechanism:
THD::time_out_user_resource_limits() resets
m_user_connect->conn_per_hour to 0 based on thd->start_utime.
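For context, a hedged illustration of the kind of per-hour resource limits
whose accounting depends on thd->start_utime (the account and limits are
made up):
GRANT USAGE ON *.* TO 'app'@'localhost'
  WITH MAX_CONNECTIONS_PER_HOUR 10
       MAX_QUERIES_PER_HOUR 100;
-- time_out_user_resource_limits() resets these per-hour counters
-- based on thd->start_utime, so the value must also be set for
-- connections served by a reused cached thread.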
Fix:
Initialize start_utime when cached thread is reused.
Notes:
Re-enabled the tests which were disabled because of this issue.
This is a followup to the fix of
Bug#14628410 ASSERTION `! IS_SET()' FAILED IN DIAGNOSTICS_AREA::SET_OK_STATUS
(satya.bodapati@oracle.com-20121213132316-5joz4phltx9yhjs7)
In innobase_mysql_tmpfile(): allocate/open the file after
the return(-1); statement.
IN QUERY CACHE CODE
DESCRIPTION:
MySQL Server crashes sporadically when Query Caching is on and
the server has high contention among clients.
ANALYSIS :
Scenario 1:
In Query_cache::move_by_type(), when handling a RESULT or its related blocks,
a Write Lock is acquired on its parent Query block. However, the next and prev
pointers are cached in local variables before lock acquisition. In an extremely
high contention scenario there exists a possibility that
Query_cache::append_result_data() is operating on the same query block
and, as a consequence, might append a new Result block to the end of the
Result blocks linked list of the Query. This would manipulate the next and
prev pointers of the block being processed in move_by_type(); however, the
local pointers would still point to the previous nodes, thereby causing data
corruption leading to a crash.
FIX :
Scenario 1:
The next and prev pointers are now accessed only after lock acquisition in
Query_cache::move_by_type().
SAME VERSION NUMBER 1.0.17
Now that InnoDB/InnoDB Plugin is no longer developed and
distributed separately from the MySQL server, it does not need its own
version number. Thus, use the MySQL version instead.
"Removing" the version altogether is not feasible because the config
variable 'innodb_version' cannot be removed in GA branches.
Reviewed by: Marko (rb#1751)
BUF_PAGE_GET_GEN REDUNDANT?
rb://1711
approved by: Marko Makela
When decompressing a compressed page that had already been accessed
in the buffer pool, do not attempt to merge buffered changes.
File names with colons are being disallowed because of the Alternate Data
Stream (ADS) feature of NTFS that could be misused. ADS allows data to be
written to alternate streams of a normal file. The data in alternate
streams cannot be seen by normal tools on Windows (explorer, cmd.exe). As
a result someone can use this feature to hide a large amount of data in
alternate streams and admins will have no easy way of figuring out which
files are using that disk space. The fix also disallows ADS in the
scenarios where a file name is passed via some dynamic variable.
An important thing about the fix is that it DOES NOT disallow ADS file
names if they are not dynamic (i.e. if the file is created by using some
option that needs local access to the MySQL server, for example the error
log file). The reasoning is that if some file-related MySQL option
requires access to the local machine (it is not dynamic), then the user can
just as well create data in ADS by some other means. This fixes only those
scenarios which would allow users to create data in ADS over the wire.
File names with colons are disallowed only on Windows. UNIX
(Linux in particular) supports NTFS, but it is not a common
scenario for someone to configure an NTFS file system to store MySQL
data on Linux.
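As a hedged illustration (assuming general_log_file is one of the affected
dynamic file-name variables; the path is made up), the kind of over-the-wire
ADS request the fix rejects on Windows:
SET GLOBAL general_log_file = 'C:/mysql-data/general.log:hidden_stream';
-- with the fix, a Windows server refuses the colon in the stream
-- suffix; file names configured only locally (e.g. the error log
-- set at startup) are not affected.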
Changes in file bug11761752-master.opt are needed due to
bug number 15937938.
The error codes returned from the merge file/temp file creation functions
were ignored.
Use the return codes of row_merge_file_create() and innobase_mysql_tmpfile()
to return the error to the caller if file creation fails.
Approved by Marko. rb#1618
DOPROCESSREPLY()
Description: Function DoProcessReply() calls function
decrypt_message() in a while loop without
performing a check on available buffer
space. This can cause buffer overflow and
crash the server. This patch is a fix provided
by Sawtooth to resolve the issue.
ROBUST AGAINST BUGS IN CALLERS".
Both the MDL subsystem and the Table Definition Cache code assume
that callers ensure that the names of objects passed to them are
not longer than NAME_LEN bytes. Unfortunately, due to bugs in
callers, this assumption might be broken in some cases. As a
result we get nasty bugs causing buffer overruns when we
construct an MDL key or TDC key from object names.
This patch makes the TDC code more robust against such bugs by
ensuring that we always check the size of the result buffer when
constructing TDC keys. This doesn't free its callers from
ensuring that both db and table names are shorter than
NAME_LEN bytes. But at least this step prevents buffer
overruns in case of a bug in the caller, replacing them with less
harmful behavior.
This is 5.1-only version of patch.
This patch introduces new version of create_table_def_key()
helper function which constructs TDC key without risk of
result buffer overrun. Places in code that construct TDC keys
were changed to use this function.
Also changed rm_temporary_table() and open_new_frm() functions
to avoid use of "unsafe" strmov() and strxmov() functions and
use safer strnxmov() instead.
Problem:
Before the ALTER TABLE statement, the array
dict_index_t::stat_n_diff_key_vals had proper values calculated
and updated. But after the ALTER TABLE statement, all the values
of this array are 0.
Because of this, the statistics returned by innodb_rec_per_key() are
different before and after the ALTER TABLE statement. Running the
ANALYZE TABLE command populates the statistics correctly.
Solution:
After ALTER TABLE statement, set the flag dict_table_t::stat_initialized
correctly so that the table statistics will be recalculated properly when
the table is next loaded. But note that we still don't choose loose
index scans. This fix only ensures that an ALTER TABLE does not change
the optimizer plan.
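A hedged illustration (table name made up) of how the stale statistics were
visible; SHOW INDEX cardinalities are derived from these per-index
statistics:
SHOW INDEX FROM t1;      -- cardinality from stat_n_diff_key_vals
ALTER TABLE t1 ADD COLUMN extra INT;
SHOW INDEX FROM t1;      -- before the fix: stale/zeroed statistics here
ANALYZE TABLE t1;        -- recalculates the statistics correctly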
rb://1639 approved by Marko and Jimmy.
Patch to fix post-push failures in pb2.
BUG#15872504 - REMOVE MYSQL-TEST/INCLUDE/GET_BINLOG_DUMP_THREAD_ID.INC
=== Problem ===
The file named "mysql-test/include/get_binlog_dump_thread_id.inc" is not
used anywhere. In any case, this file does wrong things in the wrong way:
1) The file seems to assume there is only one dump thread, but there may
be many.
2) You can get this information in a much easier way using the command:
"select thread_id from threads where processlist_command='Binlog Dump';"
=== Fix ===
removed file 'mysql-test/include/get_binlog_dump_thread_id.inc'