Problem:
Using last_insert_id() on an AUTO_INCREMENT BIGINT UNSIGNED column does
not work for values greater than the maximum signed BIGINT.
Analysis:
last_insert_id() returns the first auto-incremented value for a column,
and an auto-incremented value can only be positive.
In our code, when initializing a last_insert_id object, we were
treating it as a signed BIGINT, so once the auto-incremented value
exceeds the maximum signed BIGINT, last_insert_id() gives a negative
result.
Solution:
When fetching the value for last_insert_id(), we now set the
unsigned_flag, so that it takes only unsigned BIGINT values.
sql/item_func.cc:
Here the unsigned value was being converted to a signed value.
sql/item_func.h:
last_insert_id() gives an auto-incremented value which can only be
positive, so it is now defined as an unsigned longlong and sets the
unsigned_flag to 1.
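The following is a minimal sketch (with illustrative names, not the
server's actual classes) of the idea behind the fix: the cached value
is stored as an unsigned 64-bit integer and the item marks itself
unsigned, so values above the signed BIGINT maximum (2^63 - 1) are no
longer reinterpreted as negative.

  #include <cstdint>
  #include <cstdio>

  struct LastInsertIdSketch {
    bool unsigned_flag = false;  // mirrors the Item's unsigned_flag
    uint64_t value = 0;          // unsigned longlong, not signed

    void set(uint64_t last_id) {
      unsigned_flag = true;      // auto_increment values are positive
      value = last_id;
    }
  };

  int main() {
    LastInsertIdSketch id;
    id.set(9223372036854775808ULL);  // 2^63, above max signed BIGINT
    std::printf("%llu\n", (unsigned long long)id.value);  // stays positive
    return 0;
  }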
This bug had two problems:
P1) Reads out of bounds;
P2) Writes out of bounds.
PROBLEM P1
----------
User_var_log_event unmarshalling from the binlog was not performing
range checks when using the name_len and val_len variables to walk the
event buffer.
Added range checks to User_var_log_event unmarshalling to prevent
reads past the end of the event buffer.
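A hedged sketch of the kind of range checking described (names and the
length encoding are illustrative, not the actual event layout): each
length field is validated against the remaining buffer before the read
position advances.

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  bool unmarshal_user_var(const unsigned char *buf, size_t buf_len) {
    size_t pos = 0;

    if (buf_len - pos < 4) return false;         // room for name_len?
    uint32_t name_len;
    std::memcpy(&name_len, buf + pos, 4);
    pos += 4;
    if (name_len > buf_len - pos) return false;  // check before the walk
    pos += name_len;

    if (buf_len - pos < 4) return false;         // room for val_len?
    uint32_t val_len;
    std::memcpy(&val_len, buf + pos, 4);
    pos += 4;
    if (val_len > buf_len - pos) return false;   // same check for value
    return true;
  }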
PROBLEM P2
----------
The User_var_log_event value was allocated on the thread stack, which
caused stack frame errors when the value was bigger than the thread
stack size.
The value is now allocated on the heap instead.
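A small sketch of the P2 change (illustrative code, not the event class
itself): the buffer is sized from val_len and lives on the heap, so a
value larger than the thread stack cannot overflow a stack frame.

  #include <cstddef>
  #include <cstring>
  #include <memory>

  void copy_event_value(const unsigned char *src, size_t val_len) {
    // Heap allocation instead of a fixed-size stack buffer.
    std::unique_ptr<unsigned char[]> val(new unsigned char[val_len]);
    std::memcpy(val.get(), src, val_len);
    // ... use the value; memory is released when val goes out of scope
  }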
PRIVILEGES
Description: The (user,host) pair from the security context is used
for privilege checking at the time of granting or
revoking proxy privileges. This creates a problem
when the server is started with the
--skip-name-resolve option, because host will not
contain any value. Checks should depend on
consistent values regardless of the way the server
is started. Further, the privilege check should use
the (priv_user,priv_host) pair rather than values
obtained from the inbound connection, because
this pair represents the correct account context
obtained from the mysql.user table.
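A hedged sketch of the check described above (types and names are
illustrative): the comparison uses priv_user/priv_host, which come from
the mysql.user table, instead of the connection's (user, host) pair,
which may be incomplete under --skip-name-resolve.

  #include <string>

  struct SecurityContextSketch {
    std::string user, host;            // from the inbound connection
    std::string priv_user, priv_host;  // resolved from mysql.user
  };

  bool matches_grantor_account(const SecurityContextSketch &sctx,
                               const std::string &acl_user,
                               const std::string &acl_host) {
    // Use the authenticated account context, not the raw endpoint.
    return sctx.priv_user == acl_user && sctx.priv_host == acl_host;
  }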
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails:
516 case Diagnostics_area::DA_EMPTY:
517 default:
518 DBUG_ASSERT(0);
519 error= send_ok(thd->server_status, 0, 0, 0, NULL);
520 break;
The fix is to backport the fixes for bugs 14365043, 11761652
and 11746399:
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
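A hedged sketch of the pattern those backports enforce (illustrative
types; the real code reports via my_error()/handler::print_error()):
the result of index initialization is checked, reported, and
propagated, so the diagnostics area never stays DA_EMPTY after a failed
statement.

  struct HandlerSketch {
    virtual int ha_index_init(unsigned idx, bool sorted) = 0;
    virtual void print_error(int err) = 0;  // fills the diagnostics area
    virtual ~HandlerSketch() = default;
  };

  int init_index_scan(HandlerSketch *h, unsigned idx) {
    int err = h->ha_index_init(idx, /*sorted=*/true);
    if (err != 0) {
      h->print_error(err);  // e.g. HA_ERR_TABLE_DEF_CHANGED gets reported
      return err;           // and propagated instead of silently dropped
    }
    return 0;
  }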
When a SP handler is activated, memory is allocated to hold the
MESSAGE_TEXT for the condition that caused the activation.
The problem was that this memory was allocated on the MEM_ROOT belonging
to the stored program. Since this MEM_ROOT is not freed until the
stored program ends, a stored program that causes lots of handler
activations can start using lots of memory. In 5.1 and earlier the
problem did not exist, as no MESSAGE_TEXT was allocated if a condition
was raised with a handler present. However, this behavior led to
a number of other issues such as Bug#23032.
This patch fixes the problem by allocating enough memory for the
necessary MESSAGE_TEXTs in the SP MEM_ROOT when the SP starts and
then re-using this memory each time a handler is activated.
This is the 5.5 version of the patch.
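A hedged sketch of the allocation strategy described (sizes and names
are illustrative; the real code allocates from the SP MEM_ROOT): one
MESSAGE_TEXT-sized slot per handler is allocated when the stored
program starts, and every activation reuses its slot instead of
allocating again.

  #include <cstddef>
  #include <cstring>
  #include <vector>

  constexpr size_t kMessageTextSize = 512;  // assumed max condition text

  struct StoredProgramSketch {
    std::vector<char> handler_msg_buf;  // allocated once, at SP start

    explicit StoredProgramSketch(size_t n_handlers)
        : handler_msg_buf(n_handlers * kMessageTextSize) {}

    // Reuse the same slot on every activation of handler i.
    void activate_handler(size_t i, const char *message_text) {
      char *slot = handler_msg_buf.data() + i * kMessageTextSize;
      std::strncpy(slot, message_text, kMessageTextSize - 1);
      slot[kMessageTextSize - 1] = '\0';
    }
  };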
This bug depends on cmake version.
For cmake 2.6 (which is still in use for some pushbuild trees)
the main build would succeed, even if create_initial_db failed.
The problem was the chaining of commands in the CUSTOM_COMMAND
used to produce 'initdb.dep': it first invokes cmake to run mysqld,
then invokes 'touch' to create the file. Moving the 'touch'
command to a separate step makes the error propagate properly for both
cmake 2.6 and 2.8.
Follow-up patch - Fix broken build:
error: format ‘%u’ expects argument of type ‘unsigned int’,
but argument 2 has type ‘key_part_map {aka long unsigned int}’
[-Werror=format]
n_child_sum_items kept increasing.
Since it is used for calculating the size of ref_pointer_array,
we will allocate larger and larger chunks of memory, until we hit some
operating system limit.
The memory is free()d at disconnect, but is most likely *not*
returned to the operating system.
When a client connects to a MySQL server, first a THD object is created.
If there are any idle server threads waiting, the THD object is then added
to a list and a server thread is woken up. This thread then retrieves the
THD object from the list and starts executing.
The problem was that this list of THD objects waiting for a server thread,
was not working in a FIFO fashion, but rather LIFO. This is unfair, as it means
that the last THD added (=last client connected) will be assigned a server
thread first.
Note however that for this to be a problem, several clients must be able
to connect and have THD objects constructed before any server thread
manages to be woken up. This is not a very likely scenario.
This patch fixes the problem by changing the THD list to work FIFO
rather than LIFO.
This is the 5.1/5.5 version of the patch.
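A hedged illustration of the change (locking omitted; the real list is
protected by the server's connection mutex): waiting THDs are appended
at the tail and server threads take from the head, so the first client
to connect is the first to get a thread.

  #include <deque>

  struct THD;  // opaque connection descriptor for this sketch

  class WaitingThdList {
    std::deque<THD *> list_;
   public:
    void add(THD *thd) { list_.push_back(thd); }  // enqueue at the tail

    THD *take() {                                 // dequeue from the head
      if (list_.empty()) return nullptr;
      THD *thd = list_.front();
      list_.pop_front();
      return thd;
    }
  };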
BACKGROUND:
In certain situations DROP USER fails to remove all privileges
belonging to the user being dropped from in-memory structures.
The current workaround is to do DROP USER twice in the scenario below,
OR to do FLUSH PRIVILEGES after doing DROP USER.
ANALYSIS:
In MySQL, when we grant some stored routine privileges to a
user, they are stored in their respective hash.
When doing DROP USER, all the stored routine privilege entries
associated with that user have to be deleted from their respective
hashes.
The root cause of this bug is that some entries from the hash
are not getting deleted.
The problem is that the code that deletes entries from the hash tries
to do so while iterating over it, without taking enough measures
to address the fact that such deletion can reshuffle elements in
the hash. If the user/administrator creates the same user again,
the error 'Error 1396 ER_CANNOT_USER' is thrown by MySQL.
This prompts the user to either do FLUSH PRIVILEGES or do DROP USER
again. This behaviour is not desirable, as it is a workaround and
does not solve the problem mentioned above.
FIX:
This bug is fixed by introducing a dynamic array to store the
pointers to all stored routine privilege objects that either have
to be deleted or updated. This is done in 3 steps.
Step 1: Fetching the element from the hash and checking whether
it is to be deleted or updated.
Step 2: Storing the pointer to that privilege object in dynamic array.
Step 3: Traversing the dynamic array to perform the appropriate action
either delete or update.
This is a much cleaner way to delete or update the privilege entries
associated with some user and solves the problem mentioned above.
Also, the code has been refactored a bit by introducing an enum
instead of the hard-coded numbers used for the respective dynamic
arrays and hashes in the handle_grant_struct() function.
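A hedged sketch of the three-step fix with standard containers standing
in for the server's HASH and DYNAMIC_ARRAY (the key format is assumed
for illustration): entries are only collected while iterating, and the
hash is mutated afterwards, so reshuffling during deletion cannot cause
entries to be skipped.

  #include <string>
  #include <unordered_map>
  #include <vector>

  using PrivHash = std::unordered_map<std::string, int>;

  void drop_user_routine_privs(PrivHash &hash, const std::string &user) {
    std::vector<std::string> to_delete;
    for (const auto &entry : hash)            // Step 1: inspect entries
      if (entry.first.rfind(user + "@", 0) == 0)
        to_delete.push_back(entry.first);     // Step 2: remember them
    for (const auto &key : to_delete)         // Step 3: now mutate safely
      hash.erase(key);
  }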
QUOTING IN REPLICATION
Problem: Misquoted or unquoted identifiers could cause
incorrect statements to be logged to the binary log.
Fix: We use specialized functions to append quoted identifiers in
the statements generated by the server.
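A hedged sketch of the kind of quoting such a helper performs (the
server's real helper for this is append_identifier(); this standalone
version is illustrative): wrap the name in backticks and double any
embedded backtick so the logged statement parses back to the same
identifier.

  #include <string>

  std::string quote_identifier(const std::string &ident) {
    std::string out = "`";
    for (char c : ident) {
      if (c == '`') out += "``";  // escape the quote char by doubling it
      else out += c;
    }
    out += '`';
    return out;
  }

  // quote_identifier("my`table") yields `my``table`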
INC_HOST_ERRORS() IS CALLED.
Issue : The sequence of calls to inc_host_errors()
and reset_host_errors() required some
changes in order to maintain a correct
connection error count.
Solution : The call to reset_host_errors() is shifted
to a location after which no calls to
inc_host_errors() are made.
Problem:
=======
trx_data->empty() assert happens at `binlog_close_connection'
Analysis:
========
The trx_data->empty() function checks that there are no pending events
and that the transaction cache is empty. This function returns
"true" if no pending events are present and the cache is empty.
Otherwise it returns false. The `binlog_close_connection' call
expects the above function to return true, but if the
return value is false, the assert is raised.
This bug was reproducible in a disk-full scenario. With the disk
full, try to do an insert operation so that
a new pending event is created and flushing this pending
event fails. Due to this failure the server goes down
and invokes `binlog_close_connection' for clean closure.
Since the pending event still remains, the assert is raised.
This assert is raised only for non-transactional databases.
Fix:
===
In a disk-full scenario, when the insertion fails, the
transaction is rolled back and `binlog_end_trans`
is called to flush the pending events. But the flush operation
fails as the disk is full, and the function simply returns
`1' without taking any action to delete the pending event.
This leaves the event in place until the connection is closed.
A `delete pending' statement has been added to
do the required cleanup.
sql/log.cc:
Added a "delete pending" statement to clean up the pending event.
An "orthographic" typo in User_var::set_deferred() was made in fixes for
bug@14275000. While editing the signature of the initial patch to remove
the only argument, the assigned value of the argument remained in the body ...
to be successfully compiled (!) thanks to names coincidence:
the arg to User_var method and its member.
Fixed with correcting the typo.
The partitioning engine does not implement index_next for partitions
which return HA_ERR_KEY_NOT_FOUND in index_read_map.
If HA_ERR_KEY_NOT_FOUND was returned by a partition during
index_read_map, that partition would not be included in following
calls to index_next. If no partition returned a row in index_read_map,
then the subsequent call to index_next would try to use a non-existing
handler (index out of bounds).
Even after fixing the index out of bounds, the problem remained
whenever at least one partition did return a row.
So it is really two connected bugs:
1) crash due to index out of bounds (-1 unsigned);
2) not including partitions that returned HA_ERR_KEY_NOT_FOUND.
Fixed by recording the partitions that returned HA_ERR_KEY_NOT_FOUND
and including them too when doing handle_ordered_next the first time.
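A hedged, simplified sketch of the bookkeeping described (the real code
works on the partition handler array; names and the error value are
stand-ins): partitions whose initial read reported "key not found" are
flagged and revisited on the first ordered-next pass.

  #include <cstddef>
  #include <vector>

  constexpr int kKeyNotFound = 120;  // stand-in for HA_ERR_KEY_NOT_FOUND

  struct OrderedScanSketch {
    std::vector<bool> key_not_found;  // one flag per used partition

    explicit OrderedScanSketch(size_t parts)
        : key_not_found(parts, false) {}

    void record_read_result(size_t part, int error) {
      if (error == kKeyNotFound)
        key_not_found[part] = true;  // revisit on the first next-call
    }

    bool needs_first_next(size_t part) const {
      return key_not_found[part];
    }
  };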
Bug#14530242 CRASH / MEMORY CORRUPTION IN FILESORT_BUFFER::GET_RECORD_BUFFER WITH MYISAM
This is a backport of
Bug#12694872 - VALGRIND: 18,816 BYTES IN 196 BLOCKS ARE DEFINITELY LOST
Bug#13340270: assertion table->sort.record_pointers == __null
Bug#14536113 CRASH IN CLOSEFRM (TABLE.CC) OR UNPACK (FIELD.H) ON SUBQUERY WITH MYISAM TABLES
Also:
removed and re-added test files with file-ids from trunk.
In fill_schema_table_by_open(): free item list before restoring active arena.
sql/sql_show.cc:
Replaced i_s_arena.free_items with DBUG_ASSERT(i_s_arena.free_list == NULL)
(there's nothing to free in that list)
The use of Thread_iterator did not work on Windows (linking problems).
Solution: Change the interface between the thread_pool and the server
to only use simple free functions.
This patch is for 5.5 only (mimics a similar solution in 5.6).
ENABLE AUDIT PLUGIN WHEN DDL
OPERATION HAPPENING
PROBLEM: While unloading the plugin, its state is
not checked before it is reaped.
This can lead to simultaneous freeing of
plugin memory by more than one thread.
Multiple deallocation leads to a server
crash. In the present bug two threads
deallocate the audit_log plugin.
SOLUTION: A check is added to ensure that only
one thread is unloading the plugin.
NOTE: No mtr test is added as it requires
multiple threads to access a critical
section. debug_sync cannot be used in
the current scenario because we don't
have access to the thread pointer in
some of the plugin functions. IMHO no
test case in the current time frame.
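A hedged sketch of the added check (illustrative names; the server's
plugin code uses its own lock and state flags): the plugin's state is
inspected and claimed under a lock before reaping, so only one thread
ever frees the plugin memory.

  #include <mutex>

  enum class PluginState { READY, DELETING, FREED };

  struct PluginSketch {
    PluginState state = PluginState::READY;
  };

  std::mutex plugin_lock;  // stands in for the plugin subsystem lock

  bool reap_plugin(PluginSketch *p) {
    {
      std::lock_guard<std::mutex> guard(plugin_lock);
      if (p->state != PluginState::READY)
        return false;                    // another thread is unloading
      p->state = PluginState::DELETING;  // claim it under the lock
    }
    // ... free plugin resources exactly once ...
    p->state = PluginState::FREED;
    return true;
  }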
NUMBERS
If a system variable was declared as deprecated without mention of an
alternative, the message would look funny, e.g. for @@delayed_insert_limit:
Warning 1287 '@@delayed_insert_limit' is deprecated and
will be removed in MySQL .
The message was meant to display the version number, but it's not
possible to give one when declaring a system variable.
The fix does two things:
1) The definition of the message
ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT is changed so that it does
not display a version number. I.e. in English the message now reads:
Warning 1287 The syntax '@@delayed_insert_limit' is deprecated and
will be removed in a future version.
2) The message ER_WARN_DEPRECATED_SYNTAX_WITH_VER is discontinued in
favor of ER_WARN_DEPRECATED_SYNTAX for system variables. This change
was already done in versions 5.6 and above as part of wl#5265. This
part is simply back-ported from the worklog.
FAILED IN CHECK_LOCK_AND_ST
Problem:
--------
lock_tables() is supposed to invoke check_lock_and_start_stmt()
for each TABLE_LIST which is directly used by the top-level statement.
TABLE_LIST->prelocking_placeholder is set only for a TABLE_LIST
which is used indirectly by stored programs invoked by the top-level
statement. Hence check_lock_and_start_stmt() should always see
TABLE_LIST->prelocking_placeholder==false, but it is
observed that this assert fails.
The failure is found during RQG test rqg_signal_resignal.
Analysis:
---------
open_tables() invokes open_and_process_routines(), which
finds all the TABLE_LIST entries that belong to the routine and
adds them to thd->lex->query_tables. If, during this process,
open_and_process_routines() fails for some reason,
we are supposed to chop off all the TABLE_LIST entries found during
the calls to open_and_process_routines(). But in practice this
is not happening.
thd->lex->query_tables_own_last is supposed to point to a
node in thd->lex->query_tables, namely the first
TABLE_LIST used indirectly by stored programs invoked by the
top-level statement. This is found not to be set correctly
when we need to chop off TABLE_LIST entries after
open_and_process_routines() has failed.
close_tables_for_reopen() does chop off all the TABLE_LIST entries
added after thd->lex->query_tables_own_last. It is invoked
upon error in open_and_process_routines(). This call does
not work as expected, because thd->lex->query_tables_own_last
is not set, or is not set correctly.
Further, when open_tables() restarts the process of finding
the TABLE_LIST entries belonging to stored programs, and since
thd->lex->query_tables_own_last points to an incorrect node,
there is a possibility of the new iteration setting
thd->lex->query_tables_own_last past some old nodes that
belong to stored programs, added earlier and not removed.
Later, when open_tables() completes, lock_tables() ends up
invoking check_lock_and_start_stmt() for TABLE_LIST entries which
belong to stored programs, which is not the expected behavior,
and hence we hit the assert
TABLE_LIST->prelocking_placeholder==false.
Due to the above behavior, if a user application tries to
execute a SQL statement which invokes some stored function,
and the lock grant on the stored function fails due to a
deadlock, then mysqld crashes.
Fix:
----
open_tables() remembers save_query_tables_last, which points
to thd->lex->query_tables_last before the calls to
open_and_process_routines(). If no
thd->lex->query_tables_own_last is set, we now set
thd->lex->query_tables_own_last to save_query_tables_last.
This makes sure that the call to close_tables_for_reopen()
chops off the list correctly; in other words, we now
remove all the nodes added to thd->lex->query_tables by
previous calls to open_and_process_routines().
Further, it is found that the problem exists starting
from 5.5, due to a code refactoring effort related to
open_tables(). Hence, the fix will be pushed in 5.5, 5.6
and trunk.
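A hedged sketch of the chop-off logic using a bare linked list in place
of the TABLE_LIST chain (names are illustrative): the end of the
statement's own tables is saved before routine processing, and on
failure the chain is cut back to that point so no routine tables
survive for the retry.

  struct TableListSketch {
    TableListSketch *next_global = nullptr;
  };

  struct LexSketch {
    TableListSketch *query_tables = nullptr;
    TableListSketch **query_tables_own_last = nullptr;  // may be unset
  };

  void chop_off_routine_tables(LexSketch *lex,
                               TableListSketch **save_query_tables_last) {
    // The fix: fall back to the saved position if own_last was never set.
    if (lex->query_tables_own_last == nullptr)
      lex->query_tables_own_last = save_query_tables_last;
    *lex->query_tables_own_last = nullptr;  // drop everything after it
  }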
The documentation for class Item_outer_ref was wrong:
(*ref) may point to an Item_field as well
(see e.g. Item_outer_ref::fix_fields).
So this cast in get_store_key() was wrong:
(*(Item_ref**)((Item_ref*)keyuse->val)->ref)->ref_type()
Additional patch to remove the part_id -> ref_buffer offset map.
The partition id and the associated record buffer can
be found without having to calculate them.
By initializing them for each used partition, and then reusing
the key buffer from the queue, there is no need for
such a map.