The ha_innobase table handler contained two search key buffers
(srch_key_val1, srch_key_val2) of fixed size used to store the search
key. The size of these buffers was fixed at
REC_VERSION_56_MAX_INDEX_COL_LEN + 2. But this size is not sufficient
to hold the search key. Hence the following assert in
row_sel_convert_mysql_key_to_innobase() failed.
	/* Storing may use at most data_len bytes of buf */

	if (UNIV_LIKELY(!is_null)) {
		ut_a(buf + data_len <= original_buf + buf_len);
		row_mysql_store_col_in_innobase_format(
			dfield, buf,
			FALSE, /* MySQL key value format col */
			key_ptr + data_offset, data_len,
			dict_table_is_comp(index->table));
		buf += data_len;
	}
The buffer size is now calculated with the formula
MAX_KEY_LENGTH + MAX_REF_PARTS*2. This properly takes into account
the extra bytes needed to store the length of each column. An index
can contain at most MAX_REF_PARTS columns, and for each
column 2 bytes are needed to store its length.
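As a rough, self-contained illustration of the sizing rule (the macro values
below are placeholders; the real ones come from the MySQL headers):

    /* Illustration only, not the actual handler code: each key column may be
       preceded by a 2-byte length, so the buffer must hold the maximum key
       length plus 2 bytes for every possible key part. */
    #define MAX_KEY_LENGTH 3072   /* placeholder value */
    #define MAX_REF_PARTS  16     /* placeholder value */

    static unsigned char srch_key_val1[MAX_KEY_LENGTH + MAX_REF_PARTS * 2];
    static unsigned char srch_key_val2[MAX_KEY_LENGTH + MAX_REF_PARTS * 2];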
rb://1238 approved by Marko and Vasil Dimov.
Backport from mysql-5.6 the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)
Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
The assertion introduced in the fix for Bug#13817703
is too strong: a negative number can be greater
than the column's maximum value when the column value is
negative.
rb://978 Approved by Jimmy Yang.
rb:1236 approved by Marko Makela
WARNING
This patch is for mysql-5.5 only,
to be null-merged to mysql-5.6 and mysql-trunk.
This is a partial rollback of the file io instrumentation,
removing the instrumentation for mysql_file_stat in the archive engine.
See the bug comments for details.
two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert)
HEURISTICS FOR COMPRESSED PAGE SIZE
The fix of Bug#12845774 was supposed to skip known-to-fail
btr_cur_optimistic_insert() calls. There was only one such call, in
btr_cur_pessimistic_update(). All other callers of
btr_cur_pessimistic_insert() would release and reacquire the B-tree
page latch before attempting the pessimistic insert. This would allow
other threads to restructure the B-tree, allowing (and requiring) the
insert to succeed as an optimistic (single-page) operation.
Failure to attempt an optimistic insert before a pessimistic one would
trigger an attempt to split an empty page.
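A minimal sketch of the calling pattern described above (all names are
invented for illustration; this is not the actual InnoDB code):

    /* Always try the cheap in-page (optimistic) insert first; only fall back
       to a page split (pessimistic) insert if that fails. */
    enum { OP_SUCCESS = 0, OP_FAIL = 1 };

    static int optimistic_insert(void *page, const void *rec)
    { (void) page; (void) rec; return OP_FAIL; }      /* stub */
    static int pessimistic_insert(void *page, const void *rec)
    { (void) page; (void) rec; return OP_SUCCESS; }   /* stub */

    static int insert_record(void *page, const void *rec)
    {
        if (optimistic_insert(page, rec) == OP_SUCCESS)
            return OP_SUCCESS;
        /* Splitting directly, without the optimistic attempt, is what could
           end up trying to split an (almost) empty page. */
        return pessimistic_insert(page, rec);
    }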
rb:1234 approved by Sunny Bains
sql/handler.cc:
SHOW INNODB STATUS sometimes returns 0 even if it has generated an error.
This code is here to catch it until InnoDB some day is fixed.
storage/innobase/handler/ha_innodb.cc:
Catch at least one of the possible errors from SHOW INNODB STATUS to provide a correct return code.
storage/xtradb/handler/ha_innodb.cc:
Catch at least one of the possible errors from SHOW INNODB STATUS to provide a correct return code.
support-files/my-huge.cnf.sh:
Fixed typo
Facebook got a case where the page compresses really well so that
btr_cur_optimistic_update() returns DB_UNDERFLOW, but when a record
gets updated, the compression rate radically changes so that
btr_cur_insert_if_possible() cannot insert in place despite
reorganizing/recompressing the page, leading to the assertion failing.
rb:1220 approved by Sunny Bains
COMPRESSED PAGE SIZE
This was submitted as MySQL Bug 61456, with a patch provided by
Facebook. This patch follows the same idea, but instead of adding a
parameter to btr_cur_pessimistic_insert(), we simply remove the
btr_cur_optimistic_insert() call there and add it to the only caller
that needs it.
btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().
btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert()
before invoking btr_cur_pessimistic_insert().
btr_cur_pessimistic_update(): Clarify in a comment why it is not
necessary to invoke btr_cur_optimistic_insert().
btr_root_raise_and_insert(): Assert that the root page is not empty.
This could happen if a pessimistic insert (involving a split or merge)
is performed without first attempting an optimistic (intra-page) insert.
rb:1219 approved by Sunny Bains
btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may
fail after reorganizing the page.
btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages,
because compression may fail after reorganization.
page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the
ret_pos, which may also be the page infimum.
rb:1221
sql/item_subselect.cc:
Added purecov info
sql/sql_select.cc:
Added cast
storage/innobase/handler/ha_innodb.cc:
Added cast
storage/xtradb/btr/btr0btr.c:
Added buf_block_get_frame_fast() to avoid compiler warning
storage/xtradb/handler/ha_innodb.cc:
Added cast
storage/xtradb/include/buf0buf.h:
Innodb has buf_block_get_frame(block) defined as (block)->frame.
Didn't want to do a big change and break XtraDB, as it may use block_get_frame() differently, so I made this quick hack to patch one compiler warning.
client/mysqldump.c:
Slave needs to be initialized with 0
dbug/dbug.c:
Removed a non-existing function
plugin/semisync/semisync_master.cc:
Fixed compiler warning
sql/opt_range.cc:
thd needs to be set early as it's used in some error conditions.
sql/sql_table.cc:
Changed to use uchar* to make array indexing portable
storage/innobase/handler/ha_innodb.cc:
Removed an unused variable
storage/maria/ma_delete.c:
Fixed compiler warning
storage/maria/ma_write.c:
Fixed compiler warning
IN QUERIES
This bug was caused by an incorrect fix of
Bug#13807811 BTR_PCUR_RESTORE_POSITION() CAN SKIP A RECORD
There was nothing wrong with btr_pcur_restore_position(), but with the
use of it in the table scan during index creation.
rb:1206 approved by Jimmy Yang
mysql-test/suite/heap/heap.result:
Added test case for MDEV-436
mysql-test/suite/heap/heap.test:
Added test case for MDEV-436
storage/heap/hp_block.c:
Don't allocate a set of HP_PTRS when not needed. This saves us about 1024 bytes for most allocations.
storage/heap/hp_create.c:
Made the initial allocation of block sizes depend on min_records and max_records.
Make CMakeLists.txt detect whether the installed Boost can be compiled with the
installed compiler and the specified set of compiler options.
Background: even a sufficiently new Boost cannot be compiled with a sufficiently old gcc
in the presence of -fno-rtti.
Problem description:
Table 't' is created with two columns and a compound index on both
columns, under the innodb/myisam engine on the remote machine. On the local
machine the same table is created under the federated engine.
A SELECT whose WHERE clause combines conditions with 'AND' gives wrong
results on the local machine.
Analysis:
The given query is wrongly transformed at the federated engine by the
ha_federated::create_where_from_key() function, and the transformed query is sent to
the remote machine. Hence the local machine shows wrong results.
Given the query "select c1 from t where c1 <= 2 and c2 = 1;",
the query as transformed by ha_federated::create_where_from_key() is:
SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL ) AND
( (`c1` >= 2) AND (`c2` <= 1) )
and this is what is sent to real_query().
In the above, the '<=' and '=' conditions were transformed to '>=' and
'<=' respectively.
The ha_federated::create_where_from_key() function behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to get the "(`c1` IS NOT NULL )" part of the WHERE clause; this
transformation is correct. The end_key is used to get "( (`c1` >= 2)
AND (`c2` <= 1) )", which is wrong: here the given conditions ('<=' and '=')
are changed into the wrong conditions ('>=' and '<=').
The end_key is {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}.
The store_length has the value '5'. Based on the store_length and length
values, the condition is emitted in the HA_READ_AFTER_KEY switch case.
The 'HA_READ_AFTER_KEY' case applies only to the last part of
the end_key; for previous parts execution goes to the 'HA_READ_KEY_OR_NEXT' case,
where '>=' is added as the condition instead of '<='.
Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it applies to all
parts of the end_key, i.e. 'i > 0' identifies the end_key, and this test was added to
the 'if' condition.
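As a hedged illustration of the intent (a simplified, invented helper; not
the actual ha_federated code):

    /* i == 0 means the start_key part of the range is being emitted,
       i > 0 the end_key part; end_key parts must keep an upper-bound
       operator instead of falling back to the '>=' used for the start_key. */
    static const char *range_operator(int i)
    {
        return (i > 0) ? "<=" : ">=";
    }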
mysql-test/suite/federated/federated.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_archive.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_13118.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_25714.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_35333.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_debug.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_innodb.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_server.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_transactions.test:
modified the federated.inc file location
mysql-test/suite/federated/include/federated.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/federated_cleanup.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/have_federated_db.inc:
moved the file from federated suite to federated/include folder
storage/federated/ha_federated.cc:
updated the 'if condition' in ha_federated::create_where_from_key()
function.
Backporting the WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1175 approved by Jimmy Yang.
Backporting the WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.
ISSUE: Incorrect key file. The key file is corrupted:
incorrect key information (keyseg) is read
from the index file. The key definitions in the .MYI
and .FRM files differ. The starting pointer
for reading the keyseg information is changed
to a value greater than the pack_reclength.
memcpy tries to read keyseg information from
unallocated memory, which causes the crash.
SOLUTION: One more check was added to compare
the key definitions in the .MYI and .FRM
files. If the definitions differ, the server
produces an error.
- index_merge/intersection is unable to work on GIS indexes, because:
1. index scans have no Rowid-Ordered-Retrieval property
2. When one does an index-only read over a GIS index, one does not
get the index tuple, because the index only contains the bounding box of the geometry.
This is why the key_copy() call crashed.
This patch fixes #1, which makes the problem go away. Theoretically, it would
be nice to check #2 too, but the SE API semantics are not sufficiently precise to do it.
Now the partition engine adds the underlying tables to the QC and asks the underlying tables' engines for permission to cache the query, and returns the result of the query.
Incorrect QC cleanup in case of table registration failure fixed.
Unified the interface for the myisammrg & partition engines for the QC.
primary key with innodb tables
The bug was triggered if a single ALTER TABLE statement both
added and dropped indexes and ALTER TABLE failed during drop
(e.g. because the index was needed in a foreign key constraint).
In such cases, the server index information would get out of
sync with InnoDB - the added index would be present inside
InnoDB, but not in the server. This could then lead to InnoDB
error messages and/or server crashes.
The root cause is that new indexes are added before old indexes
are dropped. This means that if ALTER TABLE fails while dropping
indexes, index changes will be reverted in the server but not
inside InnoDB.
This patch fixes the problem by dropping any added indexes
if drop fails (for ALTER TABLE statements that both add
and drop indexes).
However, this won't work if we added a primary key as this
key might not be possible to drop inside InnoDB. Therefore,
we resort to the copy algorithm if a primary key is added
by an ALTER TABLE statement that also drops an index.
In 5.6 this bug is more properly fixed by the handler interface
changes done in the scope of WL#5534 "Online ALTER".
- InnoDB now returns handler specific HA_WRONG_CREATE_OPTION instead of MySQL specific ER_ILLEGAL_HA_CREATE_OPTION
- This changes the user level error message from "Unknown error" to "Wrong create options"
mysql-test/r/lowercase_table2.result:
Updated result file
mysql-test/r/partition_innodb_plugin.result:
Updated to new error message
mysql-test/r/partition_open_files_limit.result:
Updated result file
mysql-test/r/row-checksum-old.result:
Updated to new error message
mysql-test/r/row-checksum.result:
Updated to new error message
mysql-test/r/symlink.result:
Updated result file
mysql-test/suite/innodb/r/innodb-create-options.result:
Updated to new error message
mysql-test/suite/innodb/r/innodb-zip.result:
Updated to new error message
mysql-test/suite/innodb/r/innodb.result:
Updated to new error message
storage/innobase/handler/ha_innodb.cc:
Return HA_WRONG_CREATE_OPTION instead of ER_ILLEGAL_HA_CREATE_OPTION
This gives clearer and OS-independent error messages
storage/xtradb/handler/ha_innodb.cc:
Return HA_WRONG_CREATE_OPTION instead of ER_ILLEGAL_HA_CREATE_OPTION
This gives clearer and OS-independent error messages
ISSUE: Incorrect key file. The key file is corrupted:
incorrect key information (keyseg) is read
from the index file. The key definitions in the .MYI
and .FRM files differ. The starting pointer
for reading the keyseg information is changed
to a value greater than the pack_reclength.
memcpy tries to read keyseg information from
unallocated memory, which causes the crash.
SOLUTION: One more check was added to compare
the key definitions in the .MYI and .FRM
files. If the definitions differ, the server
produces an error.
- Better error messages
This fixes things so that one can again run the test system with many threads without having to increase fs.aio-max-nr.
mysql-test/include/mtr_check.sql:
Ignore the INNODB_USE_NATIVE_AIO variable (may change during execution)
mysql-test/mysql-test-run.pl:
Ignore warnings for failure to setup AIO
storage/innobase/os/os0file.c:
Continue without AIO even if we can't allocate resources for AIO
storage/xtradb/os/os0file.c:
Continue without AIO even if we can't allocate resources for AIO
storage/xtradb/srv/srv0start.c:
Give an error message (instead of core dump) if AIO can't be initialized
sql/sql_table.cc:
Added comment
storage/maria/ma_close.c:
Don't store history if it's visible to all.
This fixed the MDEV-306 bug
storage/maria/ma_delete_table.c:
Removed old comment
Delete history state for deleted tables
storage/maria/ma_info.c:
More DBUG_PRINT
storage/maria/ma_open.c:
More DBUG_PRINT
Changes in the InnoDB codebase required to compile and
integrate the MEB codebase with MySQL 5.5.
@ storage/innobase/btr/btr0btr.c
Excluded buffer pool usage from MEB build.
buf_pool_from_bpage calls are in buf0buf.ic, and
the buffer pool functions from that file are
disabled in MEB.
@ storage/innobase/buf/buf0buf.c
Disabling more buffer pool functions unused in MEB.
@ storage/innobase/dict/dict0dict.c
Disabling dict_ind_free that is unused in MEB.
@ storage/innobase/dict/dict0mem.c
The include
#include "ha_prototypes.h"
Was causing conflicts with definitions in my_global.h
Linking C executable mysqlbackup
libinnodb.a(dict0mem.c.o): In function `dict_mem_foreign_table_name_lookup_set':
dict0mem.c:(.text+0x91c): undefined reference to `innobase_get_lower_case_table_names'
libinnodb.a(dict0mem.c.o): In function `dict_mem_referenced_table_name_lookup_set':
dict0mem.c:(.text+0x9fc): undefined reference to `innobase_get_lower_case_table_names'
libinnodb.a(dict0mem.c.o): In function `dict_mem_foreign_table_name_lookup_set':
dict0mem.c:(.text+0x96e): undefined reference to `innobase_casedn_str'
libinnodb.a(dict0mem.c.o): In function `dict_mem_referenced_table_name_lookup_set':
dict0mem.c:(.text+0xa4e): undefined reference to `innobase_casedn_str'
collect2: ld returned 1 exit status
make[2]: *** [mysqlbackup] Error 1
innobase_get_lower_case_table_names
innobase_casedn_str
are functions that are part of ha_innodb.cc, which is not part of the build.
The dict_mem_foreign_table_name_lookup_set
function is not there in the current codebase, meaning we do not use it in MEB.
@ storage/innobase/fil/fil0fil.c
The srv_fast_shutdown variable is declared in
srv0srv.c, which is not compiled in the
mysqlbackup codebase.
This causes an 'undeclared' error.
From the Manual
---------------
innodb_fast_shutdown
--------------------
The InnoDB shutdown mode. The default value is 1
as of MySQL 3.23.50, which causes a "fast" shutdown
(the normal type of shutdown). If the value is 0,
InnoDB does a full purge and an insert buffer merge
before a shutdown. These operations can take minutes,
or even hours in extreme cases. If the value is 1,
InnoDB skips these operations at shutdown.
This ideally does not matter for mysqlbackup.
@ storage/innobase/ha/ha0ha.c
In file included from /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/ha/ha0ha.c:34:0:
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/btr0sea.h:286:17: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
make[2]: *** [CMakeFiles/innodb.dir/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/ha/ha0ha.c.o] Error 1
make[1]: *** [CMakeFiles/innodb.dir/all] Error 2
make: *** [all] Error 2
# include "sync0rw.h" is excluded from hotbackup compilation in dict0dict.h
This causes extern rw_lock_t* btr_search_latch_temp; to throw a failure because
the definition of rw_lock_t is not found.
@ storage/innobase/include/buf0buf.h
Excluding buffer pool functions that are unused from the
MEB codebase.
@ storage/innobase/include/buf0buf.ic
replicated the exclusion of
#include "buf0flu.h"
#include "buf0lru.h"
#include "buf0rea.h"
by looking at the current codebase in <meb-trunk>/src/innodb
@ storage/innobase/include/dict0dict.h
dict_table_x_lock_indexes, dict_table_x_unlock_indexes, dict_table_is_corrupted,
dict_index_is_corrupted, buf_block_buf_fix_inc_func are unused in MEB, were
leading to compilation errors, and hence were excluded.
@ storage/innobase/include/dict0dict.ic
dict_table_x_lock_indexes, dict_table_x_unlock_indexes, dict_table_is_corrupted,
dict_index_is_corrupted, buf_block_buf_fix_inc_func are unused in MEB, were
leading to compilation errors, and hence were excluded.
@ storage/innobase/include/log0log.h
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/log0log.h: At top level:
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/log0log.h:767:2: error: expected specifier-qualifier-list before 'mutex_t'
mutex_t definitions were excluded, as seen from the surrounding code;
hence the definition of log_flush_order_mutex is also excluded.
@ storage/innobase/include/os0file.h
Bug in InnoDB code, create_mode should have been create.
@ storage/innobase/include/srv0srv.h
In file included from /home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/buf/buf0buf.c:50:0:
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/srv0srv.h: At top level:
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/srv0srv.h:120:16: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘srv_use_native_aio’
srv_use_native_aio - we do not use native aio of the OS anyway from MEB. MEB does not compile
InnoDB with this option. Hence disabling it.
@ storage/innobase/include/trx0sys.h
[ 56%] Building C object CMakeFiles/innodb.dir/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c.o
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c: In function ‘trx_sys_read_file_format_id’:
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c:1499:20: error: ‘TRX_SYS_FILE_FORMAT_TAG_MAGIC_N’ undeclared (first use in this function)
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c:1499:20: note: each undeclared identifier is reported only once for each function it appears in
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c: At top level:
/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/include/buf0buf.h:607:1: warning: ‘buf_block_buf_fix_inc_func’ declared ‘static’ but never defined
make[2]: *** [CMakeFiles/innodb.dir/home/narayanan/mysql-server/mysql-5.5-meb-rel3.8-innodb-integration-1/storage/innobase/trx/trx0sys.c.o] Error 1
unused calls excluded to enable compilation
@ storage/innobase/mem/mem0dbg.c
excluding #include "ha_prototypes.h", which leads to definitions in ha_innodb.cc
@ storage/innobase/os/os0file.c
InnoDB is not compiled with AIO support in MEB anyway. Hence this is excluded
from the compilation.
@ storage/innobase/page/page0zip.c
page0zip.c:(.text+0x4e9e): undefined reference to `buf_pool_from_block'
collect2: ld returned 1 exit status
buf_pool_from_block is defined in buf0buf.ic; most of that file is excluded from the MEB compilation.
@ storage/innobase/ut/ut0dbg.c
excluding #include "ha_prototypes.h" since it leads to definitions in ha_innodb.cc
innobase_basename(file) is defined in ha_innodb.cc. Hence excluding that also.
@ storage/innobase/ut/ut0ut.c
cal_tm is unused in MEB and was leading to warnings, hence it is disabled for MEB.
WHEN KILLING
Suppose there is a query waiting for a lock. If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file. Since this is a user
requested interruption, no spurious error message must be logged
in the server log. This patch will remove the error message from
the log.
approved by joh and tatjana
Fixed by backport of:
------------------------------------------------------------
revno: 3402.50.156
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-test
timestamp: Wed 2012-02-08 14:10:23 +0100
message:
Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
This assert could be triggered if an InnoDB table was being moved
to a different database using ALTER TABLE ... RENAME, while this
database concurrently was being dropped by DROP DATABASE.
The reason for the problem was that no metadata lock was taken
on the target database by ALTER TABLE ... RENAME.
DROP DATABASE was therefore not blocked and could remove
the database while ALTER TABLE ... RENAME was executing. This
could cause the assert in InnoDB to be triggered.
This patch fixes the problem by taking an IX metadata lock on
the target database before ALTER TABLE ... RENAME starts
moving a table to a different database.
Note that this problem did not occur with RENAME TABLE which
already takes the correct metadata locks.
Also note that this patch slightly changes the behavior of
ALTER TABLE ... RENAME. Before, the statement would abort and
return an error if a lock on the target table name could not
be taken immediately. With this patch, ALTER TABLE ... RENAME
will instead block and wait until the lock can be taken
(or until we get a lock timeout). This also means that it is
possible to get ER_LOCK_DEADLOCK errors in this situation
since we allow ALTER TABLE ... RENAME to wait and not just
abort immediately.
WHEN KILLING
Suppose there is a query waiting for a lock. If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file. Since this is a user
requested interruption, no spurious error message must be logged
in the server log. This patch will remove the error message from
the log.
approved by joh and tatjana
Optimizer fails using index with ST_Within(g, constant_poly).
per-file comments:
mysql-test/r/gis-rt-precise.result
test result fixed.
mysql-test/r/gis-rtree.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree-dynamic.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree-trans.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree.result
test result fixed.
storage/maria/ma_rt_index.c
Use MBR_INTERSECT mode when optimizing the select WITH ST_Within.
storage/myisam/rt_index.c
Use MBR_INTERSECT mode when optimizing the select WITH ST_Within.
Details:
- Archive storage engine file accesses were not instrumented and thus
were not shown in PS tables.
Fix:
- Added instrumentation code using the PS APIs for I/O.
rb://1088
approved by: Marko Makela
This bug was introduced in the early stages of the plugin. We were not
checking for an implicit lock on a secondary index record for the trx_id that is
stamped on the current version of the clust_index record, in the case where the
clust_index record has a previous delete-marked version.
The following scenario crashes our mysql server:
1. set global innodb_file_per_table=1;
2. create table t1(c1 int) engine=innodb;
3. alter table t1 discard tablespace;
4. alter table t1 add unique index(c1);
Step 4 crashes the server. This patch introduces a check on discarded
tablespace to avoid the crash.
rb://1041 approved by Marko Makela
FULLTEXT INDEX AND CONCURRENT DML.
Problem Statement:
------------------
1) Create a table with FT index.
2) Enable concurrent inserts.
3) In multiple threads do below operations repeatedly
a) truncate table
b) insert into table ....
c) select ... match .. against .. non-boolean/boolean mode
After some time we could observe two different assert core dumps
Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key
root block.
The 1st select thread's block->status is set to BLOCK_ERROR
because the my_pread() in read_block() returns '0'.
Truncate table made the index file size 1024, and pread
was asked to get a block of count bytes (1024 bytes)
from offset 1024, which it cannot read since that is
"end of file"; it returns '0' and sets
"my_errno= HA_ERR_FILE_TOO_SHORT", while key_file_length and
key_root[0] are the same, i.e. 1024. Since the block status has BLOCK_ERROR,
the 1st select thread enters free_block() and will
wait on a condition mutex, after setting the status to
BLOCK_REASSIGNED, in wait_on_readers(). The other select
thread will also work on the same block, sees the status as
BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED
and asserts the server.
2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock',
which allows other threads to continue; it gets the pread()
return value '0' (please see the explanation above), then
tries to get the lock on 'keycache->cache_lock' and waits
there for the lock.
The insert thread requests the block; the block is assigned
from the hash list, the page_status is set to
'PAGE_WAIT_TO_BE_READ' and it goes into read_block(), waiting
in the queue since some other threads are performing
reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread()
value '0', sets the block status to BLOCK_ERROR, goes to
free_block() and then to wait_for_readers().
Now the insert thread wakes up and continues; it checks that
block->status is not BLOCK_READ and asserts.
Fix:
---
In the full-text code, multiple readers of the index file are not guarded.
Hence the code below was added in _ft2_search() and walk_and_match().
To lock the key_root, the following code is used in _ft2_search():
if (info->s->concurrent_insert)
mysql_rwlock_rdlock(&share->key_root_lock[0]);
and to unlock:
if (info->s->concurrent_insert)
mysql_rwlock_unlock(&share->key_root_lock[0]);
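A schematic of the resulting wrapper (simplified, with an assumed signature;
the real types and arguments come from the MyISAM sources):

    static int _ft2_search_internal(FTB *ftb, FTB_WORD *ftbw, my_bool init_search);

    static int _ft2_search(FTB *ftb, FTB_WORD *ftbw, my_bool init_search)
    {
      int res;
      MI_INFO *info= ftb->info;                    /* assumed member */
      if (info->s->concurrent_insert)
        mysql_rwlock_rdlock(&info->s->key_root_lock[0]);
      res= _ft2_search_internal(ftb, ftbw, init_search);
      if (info->s->concurrent_insert)
        mysql_rwlock_unlock(&info->s->key_root_lock[0]);
      return res;
    }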
storage/myisam/ft_boolean_search.c:
Since it is a recursive function, to avoid confusion in taking and
releasing the locks, _ft2_search() was renamed to _ft2_search_internal().
_ft2_search() now takes the lock, calls
_ft2_search_internal() and releases the lock in the case of concurrent
inserts.
storage/myisam/ft_nlq_search.c:
Added read locks code in walk_and_match()
dict_table_replace_index_in_foreign_list(): Replace the dropped index
also in the foreign key constraints of child tables that are
referencing this table.
row_ins_check_foreign_constraint(): If the underlying index is
missing, refuse the operation.
rb:1051 approved by Jimmy Yang
The following scenario crashes our mysql server:
1. set global innodb_file_per_table=1;
2. create table t1(c1 int) engine=innodb;
3. alter table t1 discard tablespace;
4. alter table t1 add unique index(c1);
Step 4 crashes the server. This patch introduces a check on discarded
tablespace to avoid the crash.
rb://1041 approved by Marko Makela
FULLTEXT INDEX AND CONCURRENT DML.
Problem Statement:
------------------
1) Create a table with FT index.
2) Enable concurrent inserts.
3) In multiple threads do below operations repeatedly
a) truncate table
b) insert into table ....
c) select ... match .. against .. non-boolean/boolean mode
After some time we could observe two different assert core dumps
Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key
root block.
The 1st select thread's block->status is set to BLOCK_ERROR
because the my_pread() in read_block() returns '0'.
Truncate table made the index file size 1024, and pread
was asked to get a block of count bytes (1024 bytes)
from offset 1024, which it cannot read since that is
"end of file"; it returns '0' and sets
"my_errno= HA_ERR_FILE_TOO_SHORT", while key_file_length and
key_root[0] are the same, i.e. 1024. Since the block status has BLOCK_ERROR,
the 1st select thread enters free_block() and will
wait on a condition mutex, after setting the status to
BLOCK_REASSIGNED, in wait_on_readers(). The other select
thread will also work on the same block, sees the status as
BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED
and asserts the server.
2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock',
which allows other threads to continue; it gets the pread()
return value '0' (please see the explanation above), then
tries to get the lock on 'keycache->cache_lock' and waits
there for the lock.
The insert thread requests the block; the block is assigned
from the hash list, the page_status is set to
'PAGE_WAIT_TO_BE_READ' and it goes into read_block(), waiting
in the queue since some other threads are performing
reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread()
value '0', sets the block status to BLOCK_ERROR, goes to
free_block() and then to wait_for_readers().
Now the insert thread wakes up and continues; it checks that
block->status is not BLOCK_READ and asserts.
Fix:
---
In the full-text code, multiple readers of the index file are not guarded.
Hence the code below was added in _ft2_search() and walk_and_match().
To lock the key_root, the following code is used in _ft2_search():
if (info->s->concurrent_insert)
mysql_rwlock_rdlock(&share->key_root_lock[0]);
and to unlock:
if (info->s->concurrent_insert)
mysql_rwlock_unlock(&share->key_root_lock[0]);
storage/myisam/ft_boolean_search.c:
Since it is a recursive function, to avoid confusion in taking and
releasing the locks, _ft2_search() was renamed to _ft2_search_internal().
_ft2_search() now takes the lock, calls
_ft2_search_internal() and releases the lock in the case of concurrent
inserts.
storage/myisam/ft_nlq_search.c:
Added read locks code in walk_and_match()
Fixed some mtr test problems
dbug/tests.c:
Fixed compiler warnings
mysql-test/r/handlersocket.result:
Fixed that plugin_license is written
mysql-test/suite/innodb/t/innodb_bug60196.test:
Force sorted results as it was sometimes different on windows
mysql-test/suite/rpl/t/rpl_heartbeat_basic.test:
Prolong test as this failed on windows
mysql-test/t/handlersocket.test:
Fixed that plugin_license is written
plugin/handler_socket/handlersocket/handlersocket.cpp:
Use maria_declare_plugin
plugin/handler_socket/handlersocket/mysql_incl.hpp:
Fixed compiler warning
plugin/handler_socket/libhsclient/auto_addrinfo.hpp:
Fixed compiler warning
sql/handler.h:
Fixed typo
sql/sql_plugin.cc:
Fixed a bug that caused the plugin library name to appear twice in the error message
storage/maria/ma_checkpoint.c:
Fixed compiler warning
storage/maria/ma_loghandler.c:
Fixed compiler warning
unittest/mysys/base64-t.c:
Fixed compiler warning
unittest/mysys/bitmap-t.c:
Fixed compiler warning
unittest/mysys/my_malloc-t.c:
Fixed compiler warning
Fix is done by doing an autocommit in truncate table inside Aria
storage/maria/ha_maria.cc:
Force a commit for TRUNCATE TABLE inside lock tables
Check that we don't call TRUNCATE with concurrent inserts going on.
Make ha_maria::implict_commit faster when we don't have Aria tables in the transaction.
(Most of the patch is just re-indentation because I removed an if level)
BY A CONCURRENT TRANSACTIO
The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB does not provide a clone operation;
ha_innobase::clone() does not exist. handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, what happens is that for one index we do a locking read, and
for the other index we do a non-locking (consistent) read.
The patch introduces ha_innobase::clone() member function.
It is implemented similar to ha_myisam::clone(). It calls the
base class handler::clone() and then does any additional operation
required. I am setting the ha_innobase->prebuilt->select_lock_type
correctly.
rb://1060 approved by Marko
rb://1043
approved by: Sunny Bains
Two internal counters were incremented twice for a single
operation. The counters are:
srv_buf_pool_flushed
buf_lru_flush_page_count
- Workaround linker bug that prevents linking aria test executables
using -fno-common on OSX
- Skip system readline detection (OSX readline is incompatible one)
- Make Xcode generator work
TABLES IN INCORRECT ENGINE
PROBLEM:
CREATE/ALTER TABLE can currently move system tables like
mysql.db, user, host etc. to engines other than MyISAM. This is not
completely supported by mysqld as of now. When some of the system tables
like plugin, servers, event, func, *_priv, time_zone* are moved
to innodb, a mysqld restart crashes. Currently, system tables
can even be moved to BLACKHOLE!
ANALYSIS:
The problem is that there is no check before creating or moving
a system table to some particular engine.
System tables are supposed to reside in MyISAM. We could consider
restricting system tables to exist only in MyISAM, but there could
be future needs for these system tables to be part of other engines
by design. For example, NDB cluster expects some tables to be in the innodb
or ndb engine. This calls for a solution by which system
tables can be supported by any desired engine, with minimal effort.
FIX:
The solution provides a handlerton interface through which the
mysqld server can query a particular storage engine's handlerton for the
system tables that it supports. This way each storage engine
layer can define its own system database and system tables.
The check_engine() function uses the new handlerton function
ha_check_if_supported_system_table() to check whether the db.tablename
provided in the DDL is supported by the SE.
Note: This fix has modified a test in help.test, which was moving
mysql.help_* to innodb. The primary intention of the test was not
to move them between engines.
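A rough sketch of the kind of check described (the function signature, helper
and error name below are assumptions for illustration, not taken from the
actual patch):

    /* Before CREATE/ALTER places a system table into the target engine, ask
       that engine's handlerton whether it supports the table; reject the DDL
       otherwise.  is_system_db() is a hypothetical helper. */
    if (is_system_db(db) &&
        !ha_check_if_supported_system_table(target_hton, db, table_name))
    {
      my_error(ER_UNSUPPORTED_ENGINE, MYF(0), engine_name, db, table_name);
      return TRUE;                      /* refuse the CREATE/ALTER */
    }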
The RB-tree index in the MEMORY table fails if it grows over 4G.
That happened because the old_allocated variable in hp_rb_write_key()
had the uint type. It was changed to the 'size_t' type to be the same as
'rb_tree.allocated'.
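A small, self-contained illustration of the overflow (generic C, not the
engine code): a 32-bit counter wraps once the allocation passes 4 GiB, while
size_t on a 64-bit build keeps counting.

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
      unsigned int old_allocated_u32 = 0xFFFFFFF0u;  /* just below 4 GiB */
      size_t       old_allocated_szt = 0xFFFFFFF0u;

      old_allocated_u32 += 0x20;   /* wraps around to a tiny value */
      old_allocated_szt += 0x20;   /* keeps growing on a 64-bit build */

      printf("uint:   %u\n", old_allocated_u32);
      printf("size_t: %zu\n", old_allocated_szt);
      return 0;
    }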
per-file comments:
storage/heap/hp_write.c
MDEV-80 Memory engine table full at much less than max_heap_table_size with btree index.
uint->size_t for the 'old_allocated'.
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
The crash happened due to a wrong calculation
of the key length during creation of the reference for
the sort order index. The problem is that
keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled
but the used_tables parameter (of the create_ref_for_key() function) does
not have it. So key parts which have OUTER_REF_TABLE_BIT
are omitted, which could lead to an incorrect key length
calculation (zero key length).
mysql-test/r/subselect_innodb.result:
test result
mysql-test/t/subselect_innodb.test:
test case
sql/sql_select.cc:
added OUTER_REF_TABLE_BIT to the used_tables parameter
for create_ref_for_key() function.
storage/innobase/handler/ha_innodb.cc:
added assertion, request from Inno team
storage/innodb_plugin/handler/ha_innodb.cc:
added assertion, request from Inno team
The main problem was a bug in CSV where it provided wrong statistics (it claimed the table was empty when it wasn't).
I also fixed wrong freeing of blobs in the CSV handler. (Any call to handler::read_first_row() on a CSV table with blobs would fail.)
mysql-test/r/csv.result:
Added new test case
mysql-test/r/partition_innodb.result:
Updated test results after fixing bug with impossible partitions and const tables
mysql-test/t/csv.test:
Added new test case
sql/sql_select.cc:
Cleaned up code for handling of partitions.
Also fixed a bug where we didn't treat a table with impossible partitions as a const table.
storage/csv/ha_tina.cc:
Allocate blobroot once.
Fixed failing tests in sys_vars, as we now have stricter checking when setting variables.
mysql-test/suite/sys_vars/r/innodb_adaptive_flushing_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_adaptive_hash_index_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_large_prefix_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_random_read_ahead_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_stats_on_metadata_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_strict_mode_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_support_xa_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/innodb_table_locks_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/rpl_semi_sync_master_enabled_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/r/rpl_semi_sync_slave_enabled_basic.result:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_adaptive_flushing_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_adaptive_hash_index_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_large_prefix_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_random_read_ahead_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_stats_on_metadata_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_strict_mode_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_support_xa_basic.test:
One can now only assign 0 or 1 to boolean variables
mysql-test/suite/sys_vars/t/innodb_table_locks_basic.test:
One can now only assign 0 or 1 to boolean variables
mysys/my_getsystime.c:
Merge + fixed bug that __NR_clock_gettime didn't work in 5.5
Fixed that repair removes the 'table is moved' mark.
mysql-test/suite/maria/r/maria-autozerofill.result:
Test case for lp:967914
mysql-test/suite/maria/t/maria-autozerofill.test:
Test case for lp:967914
storage/maria/ha_maria.cc:
Fixed that repair removes the 'table is moved' mark.
Fix the calculation of the next autoinc value when offset > 1. Some of the
results have changed due to the changes in the allocation calculation. The
new calculation will result in slightly bigger gaps for bulk inserts.
rb://866 Approved by Jimmy Yang.
Backported from mysql-trunk (5.6)
LOCK_THREAD_COUNT
When using the performance schema file io instrumentation in MySQL 5.5,
a thread would loop forever inside lf_pinbox_put_pins, when disconnecting.
It would also hold LOCK_thread_count while doing so, effectively killing the
server.
The root cause of the loop in lf_pinbox_put_pins() is a leak of LF_PINS,
when used with the filename_hash LF_HASH table in the performance schema.
This fix contains the following changes:
1)
Added the missing call to lf_hash_search_unpin(), to prevent the leak.
2)
In mysys/lf_alloc-pin.c, there was some extra debugging code
(MY_LF_EXTRA_DEBUG) written to detect precisely this kind of issue,
but it was never used.
Replaced MY_LF_EXTRA_DEBUG with DBUG_OFF, so that leaks similar to this one
can be always detected in regular debug builds.
3)
Backported the fix for the following bug, from 5.6 to 5.5:
Bug#13417446 - 63339: INCORRECT FILE PATH IN PEFORMANCE_SCHEMA ON WINDOWS
The issue was that check/optimize/analyze did not zerofill the table before they started to work on it.
Added one more argument to the not-often-used function handler::auto_repair() to allow the handler to decide when to auto-repair.
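A minimal sketch of the idea (assumed signature and error codes; not the
actual declaration):

    /* The handler now receives the error code and decides itself whether
       that error warrants an automatic repair. */
    static int auto_repair_sketch(int error)
    {
      return error == HA_ERR_CRASHED_ON_USAGE ||   /* classic crash marker */
             error == HA_ERR_OLD_FILE;             /* e.g. a moved table   */
    }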
mysql-test/suite/maria/r/maria-autozerofill.result:
Test case for lp:944422
mysql-test/suite/maria/t/maria-autozerofill.test:
Test case for lp:944422
sql/ha_partition.cc:
Added argument to auto_repair()
sql/ha_partition.h:
Added argument to auto_repair()
sql/handler.h:
Added argument to auto_repair()
sql/table.cc:
Let auto_repair() decide which errors to trigger auto-repair
storage/archive/ha_archive.h:
Added argument to auto_repair()
storage/csv/ha_tina.h:
Added argument to auto_repair()
storage/maria/ha_maria.cc:
Give better error & warning messages for auto-repaired tables.
storage/maria/ha_maria.h:
Added argument to auto_repair()
Always auto-repair in case of moved table.
storage/maria/ma_open.c:
Remove special handling of HA_ERR_OLD_FILE (this is now handled in auto_repair())
storage/myisam/ha_myisam.h:
Added argument to auto_repair()
HANG IN PREPARING WITH 100% CPU USAGE
An infinite loop in the subselect_indexsubquery_engine::exec()
function caused a server hang with 100% CPU usage.
The BLACKHOLE storage engine didn't update the handler's
table->status variable after index operations, which
caused an infinite "while(!table->status)" execution.
Index access methods of the BLACKHOLE engine handler
have been updated to set the table->status variable to
STATUS_NOT_FOUND or 0 when such a method returns a
HA_ERR_END_OF_FILE error or 0, respectively.
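A schematic of the pattern (written in plain-C shape for brevity; not the
actual patch, which touches the C++ handler methods):

    /* Each BLACKHOLE index access method must now reflect its outcome in
       table->status, otherwise callers looping on "while (!table->status)"
       never terminate. */
    static int blackhole_index_read_sketch(TABLE *table)
    {
      int rc= HA_ERR_END_OF_FILE;          /* the engine stores no rows */
      table->status= rc ? STATUS_NOT_FOUND : 0;
      return rc;
    }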
mysql-test/r/blackhole.result:
Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE
New test file for the BLACKHOLE engine.
mysql-test/t/blackhole.test:
Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE
New test file for the BLACKHOLE engine.
storage/blackhole/ha_blackhole.cc:
Bug #11880012: INDEX_SUBQUERY, BLACKHOLE,
HANG IN PREPARING WITH 100% CPU USAGE
Index access methods of the BLACKHOLE engine handler
have been updated to set table->status variable to
STATUS_NOT_FOUND or 0 when such a method returns a
HA_ERR_END_OF_FILE error or 0 respectively, like
we do in MyISAM storage engine.
mysql-test/suite/innodb/t/group_commit_crash.test:
remove autoincrement to avoid rbr being used for insert ... select
mysql-test/suite/innodb/t/group_commit_crash_no_optimize_thread.test:
remove autoincrement to avoid rbr being used for insert ... select
mysys/my_addr_resolve.c:
a pointer to a buffer is returned to the caller -> the buffer cannot be on the stack
mysys/stacktrace.c:
my_vsnprintf() is ok here, in 5.5
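A self-contained illustration of the my_addr_resolve.c note above (generic
code, not the actual function): a buffer whose address is handed back to the
caller must outlive the call, so it cannot be an automatic (stack) variable.

    #include <stdio.h>

    /* BROKEN: returns the address of a stack buffer that dies on return. */
    static const char *resolve_broken(void)
    {
      char buf[64];
      snprintf(buf, sizeof(buf), "symbol+0x%x", 0x2a);
      return buf;                        /* dangling pointer */
    }

    /* OK for this use case: a static buffer survives the call. */
    static const char *resolve_ok(void)
    {
      static char buf[64];
      snprintf(buf, sizeof(buf), "symbol+0x%x", 0x2a);
      return buf;
    }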
rb://942
approved by: Marko Makela
We don't need to scan the LRU list to drop AHI entries when dropping a table.
AHI entries are already removed when we free up the extents for the btree.
Patch and test case by Patryk Pomykalski
mysql-test/suite/innodb/r/innodb-autoinc.result:
updated test case
storage/innodb_plugin/handler/ha_innodb.cc:
Fixed that we properly reserve values for auto_increment
storage/xtradb/handler/ha_innodb.cc:
Fixed that we properly reserve values for auto_increment
- Use the native memcmp() supplied with the C runtime instead of the hand-unrolled ptr_compare_N loop.
Prior to the fix, ptr_compare_0() had 3.7% of samples in OLTP-RO in-memory.
The fix brings this down to 1.8% (all memcmp samples).
- InnoDB: fix UT_RELAX_CPU to be defined as YieldProcessor, as was also originally intended
(but the intention was lost in the #ifdef maze); see the sketch below.
This reduces the number of ut_delay() samples in the profile from 1.5% to 0.5%.
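A minimal sketch of the intended Windows mapping (assumption: the Win32
YieldProcessor() macro from <windows.h>; the real definition lives inside
InnoDB's own #ifdef maze):

    #ifdef _WIN32
    # include <windows.h>
    /* Make the spin-loop relax hint an actual CPU yield hint instead of
       a no-op. */
    # define UT_RELAX_CPU() YieldProcessor()
    #endif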
- Optimize away calls to hp_rec_hashnr() by caching the hash
- Try to get more rows / block (to minimize overhead of HP_PTRS) in HEAP tables.
storage/heap/_check.c:
Optimize away calls to hp_rec_hashnr() by caching the hash.
Print cleanups
storage/heap/heapdef.h:
Added place to hold calculated hash value for row
storage/heap/hp_create.c:
Try to get more rows / block (to minimize overhead of HP_PTRS)
storage/heap/hp_delete.c:
Optimize away calls to hp_rec_hashnr() by caching the hash.
storage/heap/hp_hash.c:
Optimize away calls to hp_rec_hashnr() by caching the hash.
Remove some not needed DBUG_PRINT
storage/heap/hp_test2.c:
Increased max table size as heap tables now take a bit more space (a few %)
storage/heap/hp_write.c:
Optimize away calls to hp_rec_hashnr() by caching the hash.
Remove duplicated code
More DBUG_PRINT
storage/maria/ma_create.c:
More DBUG_PRINT
Remove Aria state history for drop/rename
mysql-test/suite/maria/r/maria-recovery2.result:
Updated old (wrong) test result
sql/handler.cc:
Fixed wrong argument to implict_commit
storage/maria/ha_maria.cc:
Ensure that we don't use file->trn if THD_TRN is 0. (This means that implict_commit() has been called and the trn object is not ours anymore)
storage/maria/ma_extra.c:
Remove Aria state history for drop/rename
storage/maria/ma_rename.c:
Remove Aria state history for rename
storage/maria/ma_state.c:
More DBUG_PRINT
- Don't use SAFEMALLOC on valgrind builds (slows things down)
- Added back lost option from 5.3: debug-mutex-deadlock-detector
- Flush pages before taking lock mutex (speeds up closing of Aria tables).
BUILD/SETUP.sh:
- Don't use SAFEMALLOC on valgrind builds (slows things down)
sql/lock.cc:
Make default argument explicit (improves readability)
sql/mysqld.cc:
Removed compiler warnings
Sorted debug options alphabetically
Added back lost option from 5.3: debug-mutex-deadlock-detector
storage/maria/ma_close.c:
Flush pages before taking lock mutex (speeds up closing of Aria tables).
storage/maria/ma_open.c:
More DBUG_PRINT
storage/maria/maria_def.h:
Better DBUG_PRINT
storage/maria/trnman.c:
Better DBUG_PRINT
- Mark test components, plugins etc with COMPONENT Test, to get them excluded from the MSI
- Only include debug symbols for client and embedded libs and also
mysqld.exe and server plugins (so we can still get a callstack in case of a crash)
The rest (all *.pdbs, test components, MTR) can be obtained from the big ZIP distribution, if required.
FROM BUFFER POOL
rb://975
approved by: Marko Makela
There is a race in lock_validate() where we try to access a page
without ensuring that the tablespace stays valid during the operation,
i.e. that it is not deleted. This patch tries to fix that by using an
existing flag (the flag is renamed to make its name more generic,
in line with its new use).
IN OS_THREAD_EQ
rb://977
approved by: Marko Makela
The rw_lock::writer_thread field contains the thread id of the current x-holder
or wait-x thread. This field is uninitialized at lock creation and is
written to for the first time when an attempt is made to x-lock.
The current code considers ::writer_thread a valid memory region only when
the lock is held in x-mode (or there is an x-waiter). This is
overkill and it generates valgrind warnings.
The fix is to consider ::writer_thread a valid memory region once it
has been written to.
Reasoning:
==========
The ::writer_thread can be safely considered valid because:
* We only ever do the comparison with the current calling thread's id.
* We only ever do the comparison when the ::recursive flag is set.
* We always unset the ::recursive flag in x-unlock.
* The same thread cannot be unlocking and attempting to lock at the same
time.
* thread_id recycling is not an issue, because before an id is recycled
the thread must leave InnoDB, meaning it must release all locks, meaning
it must unset the ::recursive flag.
Added 'from_end' as an extra parameter to Field::unpack() to detect wrong 'from' data.
Change ha_archive::unpack_row() to detect wrong field lengths.
Replication code changed to detect wrong field information in events.
mysql-test/r/archive.result:
Added test case for lp:917689
sql/field.cc:
Added 'from_end' as an extra parameter to Field::unpack() to detect wrong 'from' data.
Removed not used 'unpack_key' functions.
sql/field.h:
Added 'from_end' as an extra parameter to Field::unpack() to detect wrong 'from' data.
Removed not used 'unpack_key' functions.
Removed some not needed unpack() functions.
sql/filesort.cc:
Added buffer end parameter to unpack_addon_fields()
sql/log_event.h:
Added end of buffer argument to unpack_row()
sql/log_event_old.cc:
Added end of buffer argument to unpack_row()
sql/log_event_old.h:
Added end of buffer argument to unpack_row()
sql/records.cc:
Added buffer end parameter to unpack_addon_fields()
sql/rpl_record.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record.h:
Added end of buffer argument to unpack_row()
sql/rpl_record_old.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record_old.h:
Added end of buffer argument to unpack_row()
sql/table.h:
Added buffer end parameter to unpack()
storage/archive/ha_archive.cc:
Change ha_archive::unpack_row() to detect wrong field lengths.
This fixes lp:917689
IN LOCK_VALIDATE()
rb://917
approved by: Marko Makela
In lock_validate() the limit is used to release the kernel_mutex during
the validation, to obey the latching order.
If we do the limit++ then we are rechecking the same lock most times on
each iteration because limit is being incremented by one and
<space, page_no> will nearly always be > limit. If we set the limit
correctly to (space, page+1) then we are actually making progress
during the iteration.
truncating, inserting the same set of rows. When a table is
re-created with the same set of rows, the data file size must
not grow.
rb:968
Approved by Marko.
This bug has been there at least since MySQL 4.0.9. (Before 4.0.9, the
code probably was even more severely broken.)
btr_pcur_restore_position(): When cursor restoration fails, before
invoking btr_pcur_store_position() move to the previous or next record
unless cursor->rel_pos==BTR_PCUR_ON or the record was not a user
record.
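A schematic of that rule (simplified control flow with invented helpers; not
the actual patch):

    /* After a failed optimistic restore: unless the position was stored ON a
       user record, step to the adjacent record (previous or next, depending
       on cursor->rel_pos) before storing the position again, so that the
       stored position cannot end up skipping a record. */
    if (cursor->rel_pos != BTR_PCUR_ON && rec_is_user_record(rec))
        reposition_to_neighbour(cursor);   /* invented helper */
    btr_pcur_store_position(cursor, mtr);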
This bug can cause skipped records when btr_pcur_store_position() is
called on the last record of a page. A symptom would be record count
mismatch in CHECK TABLE, or failure to find a record to delete-mark or
update or purge. The following operations should be affected by the
bug:
* row_search_for_mysql(): SELECT, UPDATE, REPLACE, CHECK TABLE,
(almost anything else than INSERT)
* foreign key CASCADE operations
* row_merge_read_clustered_index(): index creation (since MySQL 5.1
InnoDB Plugin)
* multi-threaded purge (after MySQL 5.5): not sure, but it might fail
to purge some records
Not all callers of btr_pcur_restore_position() should be affected.
Anything that asserts or checks that restoration succeeds is
unaffected. For example, cursor restoration on the change buffer tree
should always succeed, because access is being protected by additional
latches. Likewise, rollback, or any code that accesses data dictionary
tables while holding dict_sys->mutex, should be safe.
rb:967 approved by Jimmy Yang
Fixed memory leaks and compiler warnings in ha_sphinx.cc
Added HA_MUST_USE_TABLE_CONDITION_PUSHDOWN so that an engine can force index condition to be used
mysql-test/suite/sphinx/sphinx.result:
Added testing of pushdown conditions and sphinx status variables.
mysql-test/suite/sphinx/sphinx.test:
Added testing of pushdown conditions and sphinx status variables.
mysql-test/suite/sphinx/suite.pm:
Print version number if sphinx version is too old.
sql/handler.h:
Added HA_MUST_USE_TABLE_CONDITION_PUSHDOWN so that an engine can force index condition to be used
sql/sql_base.cc:
Added 'thd' argument to check_unused() to be able to set 'entry->in_use' if we call handler->extra().
This was needed as sphinx (and possibly other storage engines) assumes that 'in_use' is set if handler functions are called.
sql/sql_select.cc:
Test if handler is forcing pushdown condition to be used.
storage/sphinx/ha_sphinx.cc:
Updated to version 2.0.4
Fixed memory leaks and compiler warnings.
storage/sphinx/ha_sphinx.h:
Updated to version 2.0.4
storage/sphinx/snippets_udf.cc:
Updated to version 2.0.4
storage/innobase/include/sync0rw.ic:
Prerequisite for compiling with gcc4 on solaris: ignore result from
os_compare_and_swap_ulint
storage/myisam/mi_dynrec.c:
Prerequisite for compiling with gcc4 on solaris: cast to void*
This is needed as the last log entry may be a DDL statement that is not processed, and then a table may be left in a 'not properly closed' state even if the information in it is correct.
There are two threads. In one thread, a DML operation is going on
involving a cascaded update operation. In another thread, an ALTER
TABLE ... ADD FOREIGN KEY constraint is happening. Under these
circumstances, it is possible for the DML thread to access a
dict_foreign_t object that has been freed by the DDL thread.
The debug sync test case provides the sequence of operations.
Without the fix, the test case crashes the server (because of a
newly added assert). With the fix, the ALTER TABLE statement returns
an error message.
Backporting the fix from MySQL 5.5 to 5.1
rb:961
rb:947
Fixed wrong mutex order bug in Aria when flush_log_for_bitmap() was called when table is not yet marked for change.
include/my_base.h:
Added flag that table is opened only for status
mysql-test/r/myisam-big.result:
Test case for lp:925377
mysql-test/t/myisam-big.test:
Test case for lp:925377
sql/sql_base.cc:
If thd->version == 0 (happens only when we are opening a table that is flushed under MYSQL_LOCK_IGNORE_FLUSH), open the table in HA_OPEN_FOR_STATUS mode
storage/maria/ma_bitmap.c:
Fixed wrong mutex order bug in Aria when flush_log_for_bitmap() was called when table is not yet marked for change.
storage/maria/ma_dbug.c:
Ignore last_version <= 1 as these are either flushed or only opened for status
storage/maria/ma_open.c:
Use last_version=1 as a marker that table was opened with HA_OPEN_FOR_STATUS.
In this case we just open a new version of the table in read only mode.
storage/myisam/mi_create.c:
Update prototype
storage/myisam/mi_dbug.c:
Ignore last_version <= 1 as these are either flushed or only opened for status
storage/myisam/mi_open.c:
Use last_version=1 as a marker that table was opened with HA_OPEN_FOR_STATUS.
If HA_OPEN_FOR_STATUS is used, we will not assert if there is an old not-to-be-used version of the table existing.
In this case we just open a new version of the table in read only mode.
storage/myisam/myisamdef.h:
Updated prototype
also filed as Bug#13146269, Bug#13713178
btr_get_size(): Add mtr_t parameter. Require that the caller S-latches
index->lock. If index->page==FIL_NULL or the index is to be dropped,
return ULINT_UNDEFINED to indicate that the statistics are
unavailable.
dict_update_statistics(): If btr_get_size() returns ULINT_UNDEFINED,
fake the index cardinality statistics.
dict_index_set_page(): Unused function, remove.
row_drop_table_for_mysql(): Before starting to drop the table, mark
the indexes unavailable in the data dictionary cache while holding
index->lock X-latch.
ha_innobase::prepare_drop_index(), ha_innobase::final_drop_index():
When setting index->to_be_dropped, acquire the index->lock X-latch.
rb:960 approved by Jimmy Yang
The issue was that Aria allowed keys that were too long to be created (so that the internal buffer was not big enough to hold the whole key).
Key lengths are now limited to HA_MAX_KEY_LENGTH (1000), as in MyISAM.
Fixed a failure in "_ma_apply_redo_index: Assertion `new_page_length == 0", as found by buildbot.
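A hedged sketch of the kind of limit now enforced (an illustrative check
only; the actual patch may enforce this differently, e.g. through the
handler's maximum supported key length):

    /* Reject keys longer than what the internal key buffer can hold,
       mirroring the MyISAM limit. */
    if (key_length > HA_MAX_KEY_LENGTH)
    {
      my_error(ER_TOO_LONG_KEY, MYF(0), HA_MAX_KEY_LENGTH);
      return TRUE;                       /* refuse to create the key */
    }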
mysql-test/suite/maria/r/maria.result:
Updated results
mysql-test/suite/maria/r/maria3.result:
Updated results. Added test for bug fix
mysql-test/suite/maria/t/maria3.test:
Updated results. Added test for bug fix
mysql-test/suite/maria/t/optimize.test:
Updated test for new max key length
storage/maria/ha_maria.cc:
Limit key to HA_MAX_KEY_LENGTH.
storage/maria/ma_key_recover.c:
Limit used page length to max page size (this is in line with the code that writes the entry to the log).
This fixes failure in "_ma_apply_redo_index: Assertion `new_page_length == 0", as found by buildbot.
storage/maria/ma_search.c:
Extra DBUG
storage/maria/ma_write.c:
Added test to detect errors earlier.
There are two threads. In one thread, a DML operation is going on
involving a cascaded update operation. In another thread, an ALTER
TABLE ... ADD FOREIGN KEY constraint is happening. Under these
circumstances, it is possible for the DML thread to access a
dict_foreign_t object that has been freed by the DDL thread.
The debug sync test case provides the sequence of operations.
Without the fix, the test case crashes the server (because of a
newly added assert). With the fix, the ALTER TABLE statement returns
an error message.
rb:947
approved by Jimmy Yang
The problem was a crash in internal temporary (Maria) files when the row length exceeded 65535.
mysql-test/suite/maria/r/maria3.result:
Added test case
mysql-test/suite/maria/t/maria3.test:
Added test case
storage/maria/ma_open.c:
Added support for row length > 65535.
This fixes crash when using tables with longer row lengths.
mysql-test-run auto-disables all optional plugins.
mysql-test/include/default_client.cnf:
no @OPT.plugindir anymore
mysql-test/include/default_mysqld.cnf:
don't disable plugins manually - mtr can do it better
mysql-test/suite/innodb/t/innodb_bug47167.test:
mtr now uses suite-dir as an include path
mysql-test/suite/innodb/t/innodb_file_format.test:
mtr now uses suite-dir as an include path
mysql-test/t/partition_binlog.test:
this test uses partitions
storage/example/mysql-test/mtr/t/source.result:
update results. as mysqltest includes the correct overlayed include
storage/innobase/handler/ha_innodb.cc:
the assert is wrong
Fixed compiler warnings found by buildbot
Makefile.am:
Removed extra empty line
cmd-line-utils/libedit/sys.h:
Fixed that strndup() doesn't give compiler warnings
mysql-test/Makefile.am:
Fixes for 'make distcheck'
plugin/auth_pam/auth_pam.c:
Ensure that prototype for strndup() is included on linux
sql/share/Makefile.am:
Fixes for 'make distcheck'
storage/innodb_plugin/btr/btr0sea.c:
Fixed compiler warning
support-files/Makefile.am:
Fixes for 'make distcheck'
"relocation R_X86_64_PC32 against `handler_index_cond_check' can not be used when making a shared object; recompile with -fPIC"
don't use visibility=hidden for external functions
- In mi_rkey(), do correct handling of case where mi_yield_and_check_if_killed()
detects that the thread was killed (all other similar functions in MyISAM/Aria have
slightly different code and do not have this problem).
- Also fixed assignment in DBUG_ASSERT
- This is the 2nd variant of the fix:
= make .result file smaller
= run KILLable statements in a separate connection, otherwise we could end up trying to
KILL the final "DROP TABLE" statement
Fixed README with link to source
Merged InnoDB change to XtraDB
README:
Added information of where to find MariaDB code
storage/archive/ha_archive.cc:
Removed memset() of rows; a MariaDB checksum doesn't touch unused data.
This happened when there were more than 1024 open Aria tables during a checkpoint.
mysql-test/mysql-test-run.pl:
Fixed that variable names are consistent between external and internal server.
mysql-test/suite/maria/suite.pm:
Test for aria-block-size instead of 'aria' as 'aria' is not set for embedded server.
This should be ok for aria tests, as aria is never disabled for these.
storage/maria/ma_checkpoint.c:
Fixed bug when there are more than 1024 open Aria tables during checkpoint.
common icp callback in the handler.cc.
It can also increment status counters, without making the engine
dependent on the exact THD layout (that is different in embedded).