Consider the following query:
SELECT f1, ..., fm, AGGREGATE_FN(C)
FROM t1
WHERE ...
GROUP BY ...
Loose index scan ("Using index for group-by") can be used for
this query if there is an index 'i' covering all fields in the
select list, and the GROUP BY clause makes up a prefix f1,...,fn
of 'i'. Furthermore, according to rule NGA2 of
get_best_group_min_max(), the WHERE clause must contain a
conjunction of equality predicates for all fields fn+1,...,fm.
The problem in this bug was that a query with a WHERE clause that
broke NGA2 was not detected, and loose index scan was therefore
used. This led to wrong results. The query had an index
covering (c1,c2) and had:
"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
or
"WHERE (c1 = 1 ) OR (c1 = 2 AND c2 = 'b')
GROUP BY c1"
This WHERE clause cannot be transformed to a conjunction of
equality predicates.
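A minimal reproduction sketch (the table, data, and aggregate are
illustrative, not taken from the original bug report):
  CREATE TABLE t1 (c1 INT, c2 CHAR(1), KEY k1 (c1, c2));
  INSERT INTO t1 VALUES (1,'a'), (1,'b'), (2,'a'), (2,'b');
  -- Two disjoint ranges over the gap field c2; if loose index scan
  -- is wrongly chosen here, rows can be skipped and the aggregate
  -- comes out wrong:
  SELECT c1, MAX(c2)
  FROM t1
  WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
  GROUP BY c1;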
The solution is to introduce another rule, NGA3, that complements
NGA2. NGA3 says that if a gap field (a field between those
listed in GROUP BY and C in the index) has a predicate, then
there can be only one range in the query. This requirement is
more strict than it has to be in theory. BUG 15947433 will deal
with that.
Analysis:
--------
The REPLACE operation produces incorrect output when a user
variable is supplied as an argument and the operation is
performed on multiple rows.
Consider the example below:
SET @var='(( 00000000 ++ 00000000 ))';
SELECT REPLACE(@var, '00000000', table_name) AS a FROM
INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';
Invalid output:
+---------------------------------------+
| REPLACE(@var, '00000000', TABLE_NAME) |
+---------------------------------------+
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
......
......
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
+---------------------------------------+
The user-variable argument supplied as the subject string to the
REPLACE operation is overwritten after the first row with
'(( columns_priv ++ columns_priv ))'. This overwritten string is
then used for the subsequent REPLACE iterations. Since the
pattern string is no longer found, REPLACE returns the invalid
output shown above.
Fix:
---
If Alloced_length is zero, realloc() and create a copy of the
string, which is then used for the REPLACE operation on every row.
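A minimal sketch of the corrected behavior (names and values are
illustrative):
  SET @var = '(( X ))';
  SELECT REPLACE(@var, 'X', t.name)
  FROM (SELECT 'a' AS name UNION SELECT 'b') AS t;
  -- Each row substitutes into the original @var value,
  -- producing '(( a ))' and '(( b ))'.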
INCLUDES FIRST PARTITION WHEN PRUNING
PROBLEM
-------
TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, and such
dates are stored in the first partition; therefore the first
partition was always included in the scan when a range was specified.
FIX
---
The fix is a small optimization that prunes the NULL/first partition
from the scan if the dates specified in the range are valid and fall
within the same year and month. The TO_SECONDS() function is not
supported in 5.1, so it was removed from the fix and test scripts
for the mysql-5.1 version.
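A hypothetical illustration of the pruning (schema and dates are
made up):
  CREATE TABLE t (d DATE)
  PARTITION BY RANGE (TO_DAYS(d)) (
    PARTITION p0 VALUES LESS THAN (TO_DAYS('2012-01-01')),
    PARTITION p1 VALUES LESS THAN (TO_DAYS('2012-02-01')),
    PARTITION p2 VALUES LESS THAN MAXVALUE
  );
  EXPLAIN PARTITIONS
  SELECT * FROM t WHERE d BETWEEN '2012-01-05' AND '2012-01-25';
  -- Before the fix: scans p0,p1 (p0 holds NULL/invalid dates).
  -- After the fix: scans p1 only, since both range bounds are valid
  -- dates in the same year and month.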
AVAILABLE MEMORY IS TOO LOW
Analysis:
---------
In function "mysql_make_view", "table->view" is initialized
after parsing (using File_parser::parse) the view definition.
If the "::parse" function fails, control moves to label
"err:", where we have assert (table->view == thd->lex).
This assert fails if "::parse" fails, as table->view is not
yet initialized.
File_parser::parse fails if the data being parsed is incorrect/
corrupted or when memory allocation fails. In this scenario
it fails because of a memory allocation failure.
Fix:
---------
In case of failure in the function "File_parser::parse", moving
to label "err:" is incorrect. The code was modified to move
to label "end:" instead.
Problem: If the disk becomes full while writing to the binlog,
the server instance hangs until someone frees up the space.
After the user frees up disk space, the server crashes
with an assert (m_status != DA_EMPTY).
Analysis: wait_for_free_space is called in an
infinite loop, i.e., the server instance hangs until
someone frees up the space, so there is no need to
set the status bit in the diagnostic area.
Fix: Replace my_error/my_printf_error with
sql_print_warning(), which prints the warning to the error log.
Details of BUG#11746142: CALLING MYSQLD WHILE ANOTHER
INSTANCE IS RUNNING, REMOVES PID FILE
Fix: Before removing the pid file, ensure it was created
by the same process; leave it intact otherwise.
DOS ATTACKS
Problem:
For a detailed description, see Bug#42502. This bug is a duplicate
of Bug#42502, but the complete fix for Bug#42502 was not made as
proposed, hence the bug still persists.
Fix:
Make the changes as originally proposed for the fix of Bug#42502,
which is to remove the memory allocation before we actually
check for any errors.
TO SIGNED
Problem:
When we join types (of fields) in a union, we usually
upgrade the datatypes to the largest present in the query.
In the case of MEDIUMINT, this is not happening.
Analysis:
When joined with types LONG and LONGLONG, MEDIUMINT should get
upgraded to LONG and LONGLONG respectively.
W.r.t. the given query, the constant '1' is created internally as a
LONGLONG with the SIGNED flag enabled. As a result, while combining
types for the field, LONGLONG together with MEDIUMINT is first
converted to LONG. LONG together with MEDIUMINT (of the third select)
is then converted to MEDIUMINT. The SIGNED flag is taken from the
first field. As a result, the final type is SIGNED MEDIUMINT.
Fix:
While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with LONG
are now converted to LONGLONG and LONG respectively. Some changes
were also made for FLOAT and DOUBLE.
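A hypothetical sketch of the scenario (table and column names are
made up):
  CREATE TABLE t1 (m MEDIUMINT);
  CREATE TABLE t2 AS
    SELECT 1 AS c
    UNION SELECT m FROM t1
    UNION SELECT m FROM t1;
  SHOW CREATE TABLE t2;
  -- Before the fix, column c could come out as SIGNED MEDIUMINT;
  -- after the fix, MEDIUMINT joined with the LONGLONG constant
  -- yields BIGINT (LONGLONG).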
Analysis:
When the thread cache is enabled, thd->start_utime is not properly
initialized when a thread is picked from the thread cache.
This breaks the quota management mechanism:
THD::time_out_user_resource_limits() resets
m_user_connect->conn_per_hour to 0 based on thd->start_utime.
Fix:
Initialize start_utime when a cached thread is reused.
Notes:
Re-enabled tests which were disabled because of this issue.
IN QUERY CACHE CODE
DESCRIPTION:
MySQL Server crashes sporadically when Query Caching is on and
the server has high contention among clients.
ANALYSIS :
Scenario 1:
In Query_cache::move_by_type(), when handling a RESULT or related
block, a write lock is acquired on its parent Query block. However,
the next and prev pointers are cached in local variables before lock
acquisition. In an extremely high contention scenario there is a
possibility that Query_cache::append_result_data() is operating on
the same query block and, as a consequence, might append a new
Result block to the end of the Query's linked list of Result blocks.
This would manipulate the next and prev pointers of the block being
processed in move_by_type(); however, the local pointers would still
point to the previous nodes, thereby causing data corruption leading
to a crash.
FIX :
Scenario 1:
The next, prev pointers are now accessed only after Lock acquisition in
Query_cache::move_by_type().
File names with colon are being disallowed because of the Alternate Data
Stream (ADS) feature of NTFS that could be misused. ADS allows data to be
written to alternate streams of a normal file. The data in alternate
streams cannot be seen by normal tools on Windows (explorer, cmd.exe). As
a result someone can use this feature to hide large amounts of data in
alternate streams and admins will have no easy way of figuring out the
files that are using that disk space. The fix also disallows ADS in the
scenarios where file name is passed as some dynamic variable.
An important thing about the fix is that it DOES NOT disallow ADS file
names if they are not dynamic (i.e. if the file is created by using some
option that needs local access to the MySQL server, for example error log
file). The reasoning is that if some MySQL option related to files
requires access to the local machine (it is not dynamic), then the user
can very well create data in ADS by some other means. This fixes only
those scenarios
which can allow users to create data in ADS over the wire.
File names with colon are being disallowed only on Windows. UNIX
(Linux in particular) supports NTFS, but it would not be a common
scenario for someone to configure an NTFS file system to store MySQL
data on Linux.
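A hypothetical illustration (the variable, path, and exact error
text are made up; general_log_file is used here only as an example
of a dynamic file-name variable):
  -- On Windows, a dynamic file-name variable with an ADS-style
  -- colon in the file name would now be rejected:
  SET GLOBAL general_log_file = 'log.txt:hidden_stream';
  -- expected to fail with something like ER_WRONG_VALUE_FOR_VAR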
Changes in file bug11761752-master.opt are needed due to
bug number 15937938.
ROBUST AGAINST BUGS IN CALLERS".
Both the MDL subsystem and the Table Definition Cache code assume
that callers ensure that the names of objects passed to them are
no longer than NAME_LEN bytes. Unfortunately, due to bugs in
callers, this assumption might be broken in some cases. As a
result we get nasty bugs causing buffer overruns when we
construct an MDL key or a TDC key from object names.
This patch makes the TDC code more robust against such bugs by
ensuring that we always check the size of the result buffer when
constructing TDC keys. This doesn't free callers from
ensuring that both db and table names are shorter than
NAME_LEN bytes, but at least this step prevents buffer
overruns in case of a bug in the caller, replacing them with less
harmful behavior.
This is the 5.1-only version of the patch.
This patch introduces new version of create_table_def_key()
helper function which constructs TDC key without risk of
result buffer overrun. Places in code that construct TDC keys
were changed to use this function.
Also changed rm_temporary_table() and open_new_frm() functions
to avoid use of "unsafe" strmov() and strxmov() functions and
use safer strnxmov() instead.
Using too long table aliases in stored routines might
have caused server crashes.
Code in sp_head::merge_table_list(), which is responsible
for collecting information about tables used in a stored
routine, was not aware of the fact that a table alias might
have arbitrary length; i.e., it assumed that a table alias
can't be longer than NAME_LEN bytes and allocated the buffer
for a key identifying the table accordingly.
This patch fixes the issue by ensuring that we use a
dynamically allocated buffer for the table key when the table
alias is too long. By default a stack-based buffer is used,
in which NAME_LEN bytes are reserved for the table alias.
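A hypothetical reproduction sketch (procedure, table, and alias are
made up; NAME_LEN is 64 bytes, so the alias below exceeds it):
  CREATE TABLE t1 (a INT);
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    -- table alias longer than NAME_LEN (64) bytes:
    SELECT a FROM t1 AS
      aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;
  END//
  DELIMITER ;
  CALL p1();  -- could crash an unfixed server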
=== Problem ===
The test is dependent on binlog positions and checks
to see if the command 'START SLAVE' functions correctly
with the 'UNTIL' clause added to it. The 'UNTIL' clause
is added to specify that the slave should start and run
until the SQL thread reaches a given point in the master
binary log or in the slave relay log.
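For reference, a sketch of the statement under test (file name and
position are illustrative):
  START SLAVE UNTIL
    MASTER_LOG_FILE = 'master-bin.000001', MASTER_LOG_POS = 311;
  -- In the test, the position should be obtained with the mysqltest
  -- helper query_get_value(SHOW MASTER STATUS, Position, 1) rather
  -- than hard-coded.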
The test uses hard coded values for MASTER_LOG_POS and
RELAY_LOG_POS, instead of extracting them using the
query_get_value() function. There is a test,
'rpl.rpl_row_until', which does a similar thing but uses the
query_get_value() function to set the values of
MASTER_LOG_POS/RELAY_LOG_POS. To be precise,
rpl.rpl_row_until is a modified version of
engines/funcs.rpl_row_until.test.
The use of hard coded values may lead the slave to stop at a position
which may differ from the expected position in the binlog file,
an example being the failure of engines/funcs.rpl_row_until in
mysql-5.1 given as:
"query 'select * from t2' failed. Table 'test.t2' doesn't exist".
In this case, the slave actually ran a couple of extra commands,
as a result of which the slave first deleted the table and then
ran a select query on it, leading to the above mentioned failure.
=== Fix ===
1) Fixed the code for failure seen in rpl.rpl_row_until.
This test was also failing although the symptoms of
failure were different.
2) Copied the contents from rpl.rpl_row_until into
engines/funcs.rpl_row_until.
3) Updated engines/funcs.rpl_row_until.result accordingly.
FORMAT_DESCRIPTION_LOG_EVENT::CALC_SERVER_VERSION_SPLIT
Problem: When reading a Format_description_log_event, the code assumed
the MySQL version is always valid and used a DBUG_ASSERT to check the
version number. However, a user may give a wrong binlog offset, or even
a faked binary event which includes an invalid MySQL version. This
caused a server crash.
Fix: The assertions are removed and an error is reported if the MySQL
version in the Format_description_log_event is invalid.
Description: A very large database name causes a buffer
overflow in the functions acl_get() and
check_grant_db() in sql_acl.cc. It happens
due to an unguarded string copy operation.
This fix puts the required sanity checks in place
before copying the db string to the destination buffer.
The problem is related to the changes made in bug#13025132.
get_partition_set() can do dynamic pruning, which limits the
partitions to scan even further. This was not accounted for when
setting the start of the preallocated record buffer used in
the priority queue, leading to the wrong buffer being used
(including a wrong preset partition id connected to that buffer).
The solution is to fast-forward the buffer pointer to point to the
correct partition record buffer.
Analysis
---------
my_stat() calls stat(), and if the stat() call fails we try to set
the variable my_errno, which is actually thread-specific data.
We try to get the address of this thread-specific data using
my_pthread_getspecific(), but for the purge thread we have not
defined any thread-specific data, so it returns NULL, and when
dereferencing NULL we get a segmentation fault.
init_available_charsets(), seen in the core stack, is invoked
through pthread_once(), which is used for one-time initialization.
Since free_charsets() is called before the innodb plugin shutdown,
the purge thread calls init_available_charsets(), which leads
to the crash.
Fix
---
Call free_charsets() after the innodb plugin shutdown, since purge
threads are still using the charsets.
PROBLEM
-------
optimize on a partition will recreate the whole table
instead of just the partition.
ANALYSIS
--------
At present innodb doesn't support the optimize option, so we do a
rebuild of the whole table and then call analyze() on the table.
Presently, for any optimize() option (on table or partition) we
display the following info to the user:
"Table does not support optimize, doing recreate + analyze instead".
FIX
---
It was decided that for the GA versions (5.1 and 5.5), whenever the
user tries to optimize a partition(s), we will display the following
info to the user:
"Table does not support optimize on partitions.
All partitions will be rebuilt and analyzed."
Earlier, partitions were not analyzed; now all partitions will be
analyzed. If the user wants to optimize the whole table, we will
display the previous info to the user, i.e.
"Table does not support optimize, doing recreate + analyze instead".
For 5.6+ versions we will raise a new bug to support the optimize()
option in innodb.
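A sketch of the resulting behavior (table and partition names are
illustrative; the note texts are the ones quoted above):
  ALTER TABLE t1 OPTIMIZE PARTITION p1;
  -- note: Table does not support optimize on partitions.
  --       All partitions will be rebuilt and analyzed.
  OPTIMIZE TABLE t1;
  -- note: Table does not support optimize, doing recreate + analyze
  --       instead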
Problem:-
When we execute a query which has a subquery with GROUP BY, ORDER BY,
and a BLOB column, it results in a memory leak.
Analysis:-
This happens for a subquery which has GROUP BY on a BLOB and ORDER BY
on another field, where the BLOB is not a key. We allocate a tmp
buffer to copy_field to take care of the BLOB value. This copy_field
value can have copies of itself in two join objects, so while freeing
this copy_field we have to take care that it is not deleted twice.
The double deletion of tmp_table_param.copy_field is handled by two
patches.
One by Kostja:
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.
and the other by Oleksandr:
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).
Both of these patches were committed in different branches, and while
merging both were kept, but there is no need for Kostja's patch as
Oleksandr's patch handles this.
FAILED IN DEACTIVATE_DDL_LOG_ENTRY
deallocate_ddl_log_entry() can be called without having
locked LOCK_gdl. It uses a global buffer for reading and
writing entries in the ddl_log, and since it is not protected
by any mutex, two concurrent threads can overwrite the
content in the global buffer, so it can be different from
what was read.
Thread a reads entry 1 into the global buffer, thread b reads
entry 2 into the global buffer, thread a writes from the global
buffer into entry 1
-> entry 1 now has the content of entry 2.
This is especially bad for replace entries, which use
two phases, and do not deactivate the whole entry
after the first phase, but increase the phase instead.
Fixed by using thread local storage (stack) instead of global
storage (global buffer).
Also added buffer and size arguments to
read/write_ddl_log_file_entry.
Also, only the first bytes are now read/written for entries in
deactivate_ddl_log_entry.
Also fixed the scenario where it would try to recover from a server
compiled with a different value of IO_SIZE (very uncommon!).
Updated the patch with set_ddl_log_entry_from_buf
and removed read_ddl_log_entry.
Manually tested, no test case included.
Problem:-
Using last_insert_id() on an auto_incremented BIGINT UNSIGNED column
does not work for values which are greater than the maximum signed
BIGINT.
Analysis:-
last_insert_id() returns the first auto_incremented value for a
column, and an auto_incremented value can have only positive values.
In our code, when we initialize a last_insert_id object, we take it
as a signed BIGINT, so when the auto_incremented value grows beyond
the maximum signed BIGINT, last_insert_id gives a negative result.
Solution:
When we fetch the value from last_insert_id, we set the
unsigned_flag, so that it takes only unsigned BIGINT values.
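A minimal sketch of the scenario (the 2^63 starting value is
illustrative):
  CREATE TABLE t1 (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY);
  ALTER TABLE t1 AUTO_INCREMENT = 9223372036854775808;  -- 2^63
  INSERT INTO t1 VALUES (NULL);
  SELECT LAST_INSERT_ID();
  -- before the fix: a negative value; after: 9223372036854775808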
This bug had two problems:
P1) Reads out of bounds;
P2) Writes out of bounds.
PROBLEM P1
----------
User_var_log_event unmarshalling from the binlog was not performing
range checks when using the name_len and val_len variables to walk
the event buffer.
Added range checks to User_var_log_event unmarshalling to prevent
unmarshalling errors.
PROBLEM P2
----------
The User_var_log_event value was allocated on the thread stack, which
caused stack frame errors when the User_var_log_event value was
bigger than the thread stack size.
The value is now allocated on heap memory.
n_child_sum_items kept increasing.
Since it is used for calculating the size of ref_pointer_array,
we will allocate larger and larger chunks of memory, until we hit some
operating system limit.
The memory is free()d at disconnect, but is most likely *not*
returned to the operating system.
When a client connects to a MySQL server, first a THD object is created.
If there are any idle server threads waiting, the THD object is then added
to a list and a server thread is woken up. This thread then retrieves the
THD object from the list and starts executing.
The problem was that this list of THD objects waiting for a server
thread was not working in a FIFO fashion, but rather LIFO. This is
unfair, as it means that the last THD added (= last client connected)
will be assigned a server thread first.
Note however that for this to be a problem, several clients must be
able to connect and have THD objects constructed before any server
thread manages to be woken up. This is not a very likely scenario.
This patch fixes the problem by changing the THD list to work FIFO
rather than LIFO.
This is the 5.1/5.5 version of the patch.
BACKGROUND:
In certain situations DROP USER fails to remove all privileges
belonging to the user being dropped from in-memory structures.
The current workaround is to run DROP USER twice in the scenario
below, or to run FLUSH PRIVILEGES after DROP USER.
ANALYSIS:
In MySQL, when we grant some stored routine privileges to a
user, they are stored in their respective hash.
When doing DROP USER, all the stored routine privilege entries
associated with that user have to be deleted from their respective
hash.
The root cause of this bug is that some entries in the hash
are not getting deleted.
The problem is that the code that deletes entries from the hash tries
to do so while iterating over it, without taking enough measures
to address the fact that such deletion can reshuffle elements in
the hash. If the user/administrator then creates the same user again,
the error 'Error 1396 ER_CANNOT_USER' is thrown by MySQL.
This prompts the user to either do FLUSH PRIVILEGES or do DROP USER
again. This behaviour is not desirable, as it is a workaround and
does not solve the problem mentioned above.
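A hypothetical sketch of the failure scenario before the fix (user,
routine, and host names are made up):
  GRANT EXECUTE ON PROCEDURE db1.p1 TO 'u1'@'localhost';
  GRANT EXECUTE ON PROCEDURE db1.p2 TO 'u1'@'localhost';
  DROP USER 'u1'@'localhost';    -- may leave stale in-memory entries
  CREATE USER 'u1'@'localhost';  -- ERROR 1396 ER_CANNOT_USER
  FLUSH PRIVILEGES;              -- the pre-fix workaround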
FIX:
This bug is fixed by introducing a dynamic array to store the
pointers to all stored routine privilege objects that either have
to be deleted or updated. This is done in 3 steps.
Step 1: Fetching the element from the hash and checking whether
it is to be deleted or updated.
Step 2: Storing the pointer to that privilege object in dynamic array.
Step 3: Traversing the dynamic array to perform the appropriate action
either delete or update.
This is a much cleaner way to delete or update the privilege entries
associated with some user, and it solves the problem mentioned above.
The code has also been refactored a bit by introducing an enum
instead of the hard-coded numbers used for the respective dynamic
arrays and hashes in the handle_grant_struct() function.
QUOTING IN REPLICATION
Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.
Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
INC_HOST_ERRORS() IS CALLED.
Issue : Sequence of calling inc_host_errors()
and reset_host_errors() required some
changes in order to maintain correct
connection error count.
Solution : Call to reset_host_errors() is shifted
to a location after which no calls to
inc_host_errors() are made.
Problem:
=======
trx_data->empty() assert happens at `binlog_close_connection'
Analysis:
========
The trx_data->empty() function checks that there are no pending
events and that the transaction cache is empty. It returns
"true" if no pending events are present and the cache is empty;
otherwise it returns false. The `binlog_close_connection' call
expects the above function to return true; if the
return value is false, the assert is raised.
This bug was reproducible in a disk-full scenario. In this
scenario, an insert operation is attempted so that
a new pending event is created, and flushing this pending
event fails. Due to this failure the server goes down
and invokes `binlog_close_connection' for clean closure.
Since the pending event still remains, the assert is raised.
This assert is raised only for non-transactional databases.
Fix:
===
In a disk-full scenario, when the insertion fails, the
transaction is rolled back and `binlog_end_trans'
is called to flush the pending events. But the flush operation
fails as the disk is full, and the function simply returns
`1' without taking any action to delete the pending event.
This leaves the event to remain till the closure of the
connection. A `delete pending' statement has been added to
do the required clean-up.
An "orthographic" typo in User_var::set_deferred() was made in fixes for
bug@14275000. While editing the signature of the initial patch to remove
the only argument, the assigned value of the argument remained in the body ...
to be successfully compiled (!) thanks to names coincidence:
the arg to User_var method and its member.
Fixed with correcting the typo.
Additional patch to remove the part_id -> ref_buffer offset.
The partition id and the associated record buffer can
be found without having to calculate them.
By initializing the buffer for each used partition, and then reusing
the key buffer from the queue, such a map is not needed.
The buffer for the current read row from each partition
(m_ordered_rec_buffer) used for sorted reads was
allocated on open and freed when the ha_partition handler
was closed or destroyed.
For tables with many partitions and big records this could
take up too much valuable memory.
The solution is to only allocate the memory when it is needed
and free it when no longer needed, i.e. allocate it in
index_init and free it in index_end (and, to handle failures,
also free it on reset, close, etc.).
Also, only the needed memory is allocated, according to
partition pruning.
Manually tested that it does not use as much memory and
releases it after queries.
MASTER-MASTER AND USING SET USE
Problem:
=======
In a master-master set-up, a master can show a wrong
'SHOW SLAVE STATUS' output.
Requirements:
- master-master
- log_slave_updates
This is caused by using SET user-variables and then using
them to perform writes. From then on, the master that performed
the insert will have a SHOW SLAVE STATUS that is wrong, and
it will never get updated until a write happens on the other
master. On "Master A" the "exec_master_log_pos" is not
getting updated.
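A hypothetical reproduction sketch, run on the master that issues
the write (the table name is made up; both servers run with
log_slave_updates):
  SET @v = 1;
  INSERT INTO t1 VALUES (@v);
  SHOW SLAVE STATUS\G
  -- Exec_Master_Log_Pos stalls here until a write happens on the
  -- other master.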
Analysis:
========
The slave receives a "User_var" event from the master and, after
applying the event, when the "log_slave_updates" option is
enabled, the slave tries to write this applied event into
its own binary log. At the time of writing this event the
slave should use the "originating server-id". But in the
above case the server always logs the "user var events"
using its own global server-id. Due to this, in a
"master-master" replication setup, when the event comes back
to the originating server, the "User_var_event" doesn't get
skipped. "User_var_events" are context-based events and they
are always followed by a query event which marks their end of
group. Due to the above mentioned problem with "User_var_event"
logging, the "User_var_event" never gets skipped whereas
its corresponding "query_event" does get skipped. Hence the
"User_var" event always waits for the next "query event"
and the "Exec_master_log_position" does not get updated
properly.
Fix:
===
The `MYSQL_BIN_LOG::write' function is used to write events
into the binary log. Within this function a new object for
"User_var_log_event" is created, and this new object is used
to write the "User_var" event into the binlog. The "User var"
event is inherited from "Log_event", which has
different overloaded constructors. When a "THD" object
is present, the "Log_event(thd,...)" constructor should be used
to initialise the object, and in the absence of a valid
"THD" object the minimal "Log_event()" constructor should be
used. In the above mentioned problem the default minimal
constructor was always used, which is incorrect. This minimal
constructor has been replaced with "Log_event(thd,...)".