LEN <= SIZEOF(ULONGLONG)
This bug was caught during the WL#6255 work on ALTER TABLE...ADD COLUMN in
MySQL 5.6, but the bug exists in all InnoDB versions that support
auto-increment columns.
row_search_autoinc_read_column(): When reading the maximum value of
the auto-increment column, and the column only contains NULL values,
return 0. This corresponds to the case when the table is empty in
row_search_max_autoinc().
rb:1415 approved by Sunny Bains
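A minimal SQL illustration of the empty-table case that the all-NULL column is now mapped to (hypothetical table name, not part of the original fix):

  CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
  -- the table is empty, so the maximum auto-increment value is taken as 0
  -- and the first insert is assigned the value 1
  INSERT INTO t VALUES (NULL);
  SELECT id FROM t;   -- returns 1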
The mysqld_safe script did not heed the MySQL-specific environment variable
$UMASK, leading to divergent behavior between mysqld and mysqld_safe.
The patch adds an approximation of mysqld's behavior to mysqld_safe,
within the bounds dictated by the attempt to have mysqld_safe run on
even the most basic of shells (a proper '70s sh, not just bash
with a fancy symlink).
The patch also adds an approximation of said behavior to mysqld_multi
(in Perl).
(backport)
Problem:
Using last_insert_id() on an AUTO_INCREMENT BIGINT UNSIGNED column does
not work for values which are greater than the maximum signed BIGINT.
Analysis:
last_insert_id() returns the first auto-incremented value for a column,
and an auto-incremented value can only be positive.
In our code, when we initialize a last_insert_id object, we treat it as
a signed BIGINT, so when the auto-incremented value grows beyond the
maximum signed BIGINT, last_insert_id gives a negative result.
Solution:
When fetching the value from last_insert_id, we now set the
unsigned_flag, so that it takes only unsigned BIGINT values.
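A hedged reproduction sketch (hypothetical table name and seed value) showing the value range in question:

  CREATE TABLE t (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY)
      ENGINE=InnoDB AUTO_INCREMENT=18446744073709551610;
  INSERT INTO t VALUES (NULL);
  -- with the fix this returns 18446744073709551610;
  -- before it, the value was reported as a negative signed BIGINT
  SELECT LAST_INSERT_ID();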
REAL DUPLICATE VALUE FOR PREFIX KEYS
innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
instead of dict_index_get_nth_col_pos() to find the column.
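A hedged example (hypothetical data) of the situation being reported, where the real duplicate value is the column prefix rather than the full column value:

  CREATE TABLE t (c VARCHAR(100)) ENGINE=InnoDB;
  INSERT INTO t VALUES ('abcdef'), ('abcxyz');
  -- the two values differ, but share the 3-character prefix 'abc', so
  -- adding the unique prefix index fails with a duplicate-key error;
  -- the reported duplicate entry should be the real prefix value
  ALTER TABLE t ADD UNIQUE INDEX (c(3));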
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is being used for SELECT query evaluation and the
query is operating with a consistent read snapshot, it might take a long
time for the secondary index to return control to MySQL, as MVCC kicks in.
If the user issues "kill query <id>" while the query is actively accessing
the secondary index, the kill is not honored because there is no hook to
check for this condition. Added a hook for this check.
-----
In parallel, the case of a secondary index taking too long to evaluate
under a consistent read snapshot is being examined for performance
improvement in WL#6540.
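A hedged sketch of the scenario (hypothetical table, index, and column names; <id> stands for the connection id of session 1):

  -- session 1: long-running consistent-read scan over a secondary index
  -- (slow because MVCC has to look up old versions through the undo log)
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  SELECT COUNT(*) FROM big_table FORCE INDEX (sec_idx) WHERE sec_col > 0;
  -- session 2: with the added hook, the kill is honored even while
  -- session 1 is still inside the secondary-index/MVCC lookups
  KILL QUERY <id>;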
The problem is in the error handling in row_create_table_for_mysql().
In the 'disk full' case we may forget to call dict_mem_table_free() on
the table object.
Approved by: Marko (rb:1377 and rb:1386)
We did not allocate enough bits for index->trx_id_offset, causing an
UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
to corrupt the PRIMARY KEY.
dict_index_t: Allocate enough bits.
dict_index_build_internal_clust(): Check for overflow of
index->trx_id_offset. Trip a debug assertion when overflow occurs.
rb:1380 approved by Jimmy Yang
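A hedged sketch (hypothetical names) of a table with a fixed-length PRIMARY KEY longer than 1024 bytes (5 x CHAR(255) latin1 = 1275 bytes):

  CREATE TABLE t (
      a CHAR(255), b CHAR(255), c CHAR(255), d CHAR(255), e CHAR(255),
      val INT,
      PRIMARY KEY (a, b, c, d, e)
  ) ENGINE=InnoDB CHARACTER SET latin1;
  -- before the fix, an UPDATE or DELETE on such a table could corrupt
  -- the PRIMARY KEY because index->trx_id_offset overflowed
  DELETE FROM t WHERE val = 0;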
n_child_sum_items kept increasing.
Since it is used for calculating the size of ref_pointer_array,
we will allocate larger and larger chunks of memory, until we hit some
operating system limit.
The memory is free()d at disconnect, but is most likely *not*
returned to the operating system.
TRANSACTION ROLLBACK
Description: During the rollback operation, a blob page
is removed earlier than desired. Consider the following scenario:
1. create table t1(a int primary key,b blob) engine=innodb;
2. insert into t1 values (1,repeat('b',9000));
3. begin;
4. update t1 set b=concat(b,'b');
5. update t1 set a=a+1;
6. insert into t1 values (1,repeat('b',9000));
7. rollback;
The update operation in step 5 produces two undo log records. The first
undo record (TRX_UNDO_DEL_MARK_REC) goes to trx->update_undo and the
second undo record (TRX_UNDO_INSERT_REC) goes to trx->insert_undo.
During rollback, they are executed out of order.
When the undo record TRX_UNDO_DEL_MARK_REC is applied/executed,
the blob ownership is also reset. Because of this, the blob page
is released earlier than desired. This blob page should have been
freed only as part of applying/executing the undo record
TRX_UNDO_INSERT_REC.
This problem can be avoided by executing the undo records in
order. This patch makes InnoDB execute the undo records in order.
rb://1125 approved by Marko.
-----------
After compiling from source, during make test I got the following error:
test main.loaddata failed with error
CURRENT_TEST: main.loaddata
mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1
CHARACTER SET ucs2
(@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'
I noticed other tests are skipped because of no ucs2:
main.mix2_myisam_ucs2 [ skipped ] Test requires: 'have_ucs2'
Should main.loaddata be skipped if there is no ucs2?
How To Repeat:
-------------
Run make test on compiled source that doesn't have ucs2
Suggested fix:
-------------
The failing piece of the test should be moved from mysql-test/t/loaddata.test
to mysql-test/t/ctype_ucs.test.
When a client connects to a MySQL server, first a THD object is created.
If there are any idle server threads waiting, the THD object is then added
to a list and a server thread is woken up. This thread then retrieves the
THD object from the list and starts executing.
The problem was that this list of THD objects waiting for a server thread
was not working in a FIFO fashion, but rather LIFO. This is unfair, as it
means that the last THD added (= last client connected) will be assigned a
server thread first.
Note however that for this to be a problem, several clients must be able
to connect and have THD objects constructed before any server thread
manages to be woken up. This is not a very likely scenario.
This patch fixes the problem by changing the THD list to work FIFO
rather than LIFO.
This is the 5.1/5.5 version of the patch.
BACKGROUND:
In certain situations DROP USER fails to remove all privileges
belonging to the user being dropped from the in-memory structures.
The current workaround is to run DROP USER twice in the scenario below,
or to run FLUSH PRIVILEGES after DROP USER.
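A hedged sketch of such a scenario (hypothetical user, database, and routine names; assumes db1.p1 and db1.f1 already exist):

  CREATE USER 'u1'@'localhost';
  GRANT EXECUTE ON PROCEDURE db1.p1 TO 'u1'@'localhost';
  GRANT EXECUTE ON FUNCTION db1.f1 TO 'u1'@'localhost';
  DROP USER 'u1'@'localhost';
  -- without the fix, stale in-memory routine privilege entries may remain,
  -- and re-creating the user can fail with Error 1396 ER_CANNOT_USER
  CREATE USER 'u1'@'localhost';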
ANALYSIS:
In MySQL, when we grant stored routine privileges to a
user, they are stored in their respective hash.
When doing DROP USER, all the stored routine privilege entries
associated with that user have to be deleted from their respective
hash.
The root cause of this bug is that some entries from the hash
are not getting deleted.
The problem is that the code that deletes entries from the hash tries
to do so while iterating over it, without taking enough measures
to address the fact that such deletion can reshuffle elements in
the hash. If the user/administrator creates the same user again,
they are thrown an error 'Error 1396 ER_CANNOT_USER' from MySQL.
This prompts the user to either do FLUSH PRIVILEGES or do DROP USER
again. This behaviour is not desirable, as it is a workaround and
does not solve the problem mentioned above.
FIX:
This bug is fixed by introducing a dynamic array to store the
pointers to all stored routine privilege objects that either have
to be deleted or updated. This is done in 3 steps.
Step 1: Fetch the element from the hash and check whether
it is to be deleted or updated.
Step 2: Store the pointer to that privilege object in the dynamic array.
Step 3: Traverse the dynamic array to perform the appropriate action,
either delete or update.
This is a much cleaner way to delete or update the privilege entries
associated with a user and solves the problem mentioned above.
The code has also been refactored a bit by introducing an enum
instead of the hard-coded numbers used for the respective dynamic arrays
and hashes in the handle_grant_struct() function.
QUOTING IN REPLICATION
Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.
Fix: Use specialized functions to append quoted identifiers to
the statements generated by the server.
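A hedged illustration (hypothetical identifiers): a backtick inside an identifier must be doubled when a statement mentioning it is written to the binary log, otherwise the logged statement is invalid or refers to the wrong object:

  CREATE TABLE `t``1` (`c``1` INT);
  -- a server-generated statement mentioning this table must quote the
  -- identifiers as `t``1` and `c``1`, not as t`1 / c`1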
WHEN DBNAME CONTAINS MULTIPLE QUOTES
The MySQL client's USE command might fail if the
database name contains multiple quotes (backticks).
The reason for the failure is the method
the client uses to remove/escape the quotes
while parsing the USE command's option (dbname):
the option parsing might terminate prematurely when a
matching quote is found.
Also, C APIs like mysql_select_db() expect a
normalized dbname. In certain cases, the client
might fail to normalize the dbname the same way the
server does, and hence mysql_select_db() would fail.
Fixed by getting the normalized dbname (indirectly)
from the server: the client sends "USE dbname"
as a query to the server, followed by "SELECT DATABASE()".
These steps are only performed if the number of quotes
in the dbname is greater than 2. Once the normalized
dbname is received, the original database is restored.
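A hedged sketch of the kind of name involved (hypothetical database name with embedded backticks):

  CREATE DATABASE `a``b``c`;
  -- in the mysql command-line client, before the fix, USE could
  -- mis-parse the name and fail or select the wrong database
  USE `a``b``c`;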
Delete-mark change buffer records when resorting to a pessimistic
delete from the change buffer B-tree. Skip delete-marked records in
the change buffer merge and when estimating whether an operation can
be buffered. Without this fix, we could try to apply the same buffered
changes multiple times if the server was killed at the right moment.
In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
delete-marked (already processed) records.
ibuf_delete_rec(): Add a crash point before optimistic delete. If the
optimistic delete fails, flag the record processed before
mtr_commit().
ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
processed) records.
Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
btr_cur_set_deleted_flag_for_ibuf() and add a parameter.
rb:1307 approved by Jimmy Yang
INC_HOST_ERRORS() IS CALLED.
Issue: The sequence of calls to inc_host_errors() and reset_host_errors()
required some changes in order to maintain a correct connection error
count.
Solution: The call to reset_host_errors() is shifted to a location after
which no calls to inc_host_errors() are made.
page_zip_validate(), page_zip_validate_low(): Add a parameter for the
B-tree index.
page_zip_validate_low(): If the page contents do not match, check
that the record link chains match. Furthermore, if a dict_index_t is
passed, check that the records match. (This reduces coverage a bit: if
index=NULL, we will ignore differences in record contents, that is,
the page payload.)
rb:1264 approved by Inaam Rana
Problem:
=======
trx_data->empty() assert happens at `binlog_close_connection'
Analysis:
========
The trx_data->empty() function checks that there are no pending events
and that the transaction cache is empty. This function returns
"true" if no pending events are present and the cache is empty;
otherwise it returns false. The `binlog_close_connection' call
expects the above function to return true, but if the
return value is false, the assert is raised.
This bug was reproducible in a disk-full scenario: try to do an
insert operation so that a new pending event is created, and
flushing this pending event fails. Due to this failure the server
goes down and invokes `binlog_close_connection' for a clean closure.
Since the pending event still remains, the assert is raised.
This assert is raised only for non-transactional tables.
Fix:
===
In a disk-full scenario, when the insertion fails, the
transaction is rolled back and `binlog_end_trans`
is called to flush the pending events. But the flush operation
fails, as the disk is full, and the function simply returns
`1' without taking any action to delete the pending event.
This leaves the event in place until the connection is closed.
A `delete pending' statement has been added to
do the required clean-up.
An "orthographic" typo in User_var::set_deferred() was made in the fixes
for bug@14275000. While editing the signature of the initial patch to
remove the only argument, the assignment from that argument remained in
the body, and it still compiled successfully (!) thanks to a name
coincidence between the argument of the User_var method and its member.
Fixed by correcting the typo.
THOUGH IT IS NOT.
The following error message is misleading because it claims
that the BLOB space is not counted.
"ERROR 1118 (42000): Row size too large. The maximum row size for
the used table type, not counting BLOBs, is 8126. You have to
change some columns to TEXT or BLOBs"
When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row. So
the above error message is changed as follows, depending on
the row format used:
For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format,
BLOB prefix of 0 bytes is stored inline."
For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
rb://1252 approved by Marko Makela
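A hedged reproduction sketch of the ROW_FORMAT=COMPACT case (hypothetical table; exact numbers depend on the page size, and whether the error is raised at CREATE TABLE or at INSERT depends on innodb_strict_mode and the server version):

  -- with ROW_FORMAT=COMPACT each overflowing BLOB/TEXT column still keeps
  -- a 768-byte prefix in the row, so eleven long TEXT values no longer fit
  -- within the ~8126-byte row size limit of a 16K page
  CREATE TABLE t (c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT, c6 TEXT,
                  c7 TEXT, c8 TEXT, c9 TEXT, c10 TEXT, c11 TEXT)
      ENGINE=InnoDB ROW_FORMAT=COMPACT;
  INSERT INTO t VALUES (REPEAT('a',1000), REPEAT('a',1000), REPEAT('a',1000),
                        REPEAT('a',1000), REPEAT('a',1000), REPEAT('a',1000),
                        REPEAT('a',1000), REPEAT('a',1000), REPEAT('a',1000),
                        REPEAT('a',1000), REPEAT('a',1000));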
PAGE SPLIT
page_rec_get_nth_const(): Map nth==0 to the page infimum.
btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
should never be positioned on the page infimum.
btr_index_page_validate(): Add test instrumentation for checking the
return values of page_rec_get_nth_const() during CHECK TABLE, and for
checking that the page directory slot 0 always contains only one
record, the predefined page infimum record.
page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
assertions guarding against accessing the page slot 0.
page_copy_rec_list_start(): Clarify a comment about ret_pos==0.
rb:1248 approved by Jimmy Yang
ha_innodb::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
The patch by Jorgen Loland only removed the assertion from the
built-in InnoDB, not from the InnoDB Plugin.
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
This assertion catches unnecessary calls to this method,
but such calls do not harm correctness.
HEURISTICS FOR COMPRESSED PAGE SIZE
The fix of Bug#12845774 was supposed to skip known-to-fail
btr_cur_optimistic_insert() calls. There was only one such call, in
btr_cur_pessimistic_update(). All other callers of
btr_cur_pessimistic_insert() would release and reacquire the B-tree
page latch before attempting the pessimistic insert. This would allow
other threads to restructure the B-tree, allowing (and requiring) the
insert to succeed as an optimistic (single-page) operation.
Failure to attempt an optimistic insert before a pessimistic one would
trigger an attempt to split an empty page.
rb:1234 approved by Sunny Bains
Additional patch to remove the part_id -> ref_buffer offset.
The partition id and the associated record buffer can
be found without having to calculate the offset.
By initializing it for each used partition, and then reusing
the key buffer from the queue, such a map is not needed.