LEN <= SIZEOF(ULONGLONG)
This bug was caught during the WL#6255 ALTER TABLE...ADD COLUMN work in MySQL
5.6, but the bug exists in all InnoDB versions that support
auto-increment columns.
row_search_autoinc_read_column(): When reading the maximum value of
the auto-increment column, and the column only contains NULL values,
return 0. This corresponds to the case when the table is empty in
row_search_max_autoinc().
rb:1415 approved by Sunny Bains
Generalized support for auto-updated and/or auto-initialized timestamp
and datetime columns. This patch is a reimplementation of MySQL's
"WL#5874: CURRENT_TIMESTAMP as DEFAULT for DATETIME columns". In order to
ease future merges, this implementation reuses a few function and variable
names from MySQL's patch; however, the implementation is quite different.
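For illustration, the kind of column definition this enables (a minimal sketch; table and
column names are made up):
create table t1 (
  id int not null,
  created datetime default current_timestamp,
  updated datetime default current_timestamp on update current_timestamp
);
Here 'created' is auto-initialized on INSERT, and 'updated' is both auto-initialized on
INSERT and auto-updated on UPDATE.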
TODO:
The only unresolved problem in this patch is the semantics of LOAD DATA for
TIMESTAMP and DATETIME columns in the cases when there are missing or NULL
columns. I couldn't fully comprehend the logic behind MySQL's behavior and
its relationship with their own documentation, so I left the results to be
more consistent with all other LOAD cases.
The problematic test cases can be seen by running the test file function_defaults,
and observing the test case differences. Those were left on purpose for discussion.
REAL DUPLICATE VALUE FOR PREFIX KEYS
innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
instead of dict_index_get_nth_col_pos() to find the column.
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is being used for SELECT query evaluation, and the
query is operating with a consistent read snapshot, it can take a long time for
the secondary index scan to return control to MySQL, as MVCC kicks in.
If the user issues "kill query <id>" while the query is actively accessing the
secondary index, the kill is not honored, as there is no hook to check
for this condition. Added a hook for this check.
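A sketch of the scenario (table, index and connection details are illustrative):
-- connection 1 (long-running scan over a secondary index under a consistent snapshot):
start transaction with consistent snapshot;
select b from t1 force index(k_b) where b between 1 and 1000000;
-- connection 2, while the SELECT is still running:
kill query <id>;   -- <id> from SHOW PROCESSLIST; without the hook this was not honored
                   -- until the secondary index scan returned to the SQL layer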
-----
In parallel, the case of a secondary index taking too long to evaluate under a
consistent read snapshot is being examined for performance improvement in WL#6540.
The problem is in the error handling in row_create_table_for_mysql().
In the 'disk full' case we may forget to call dict_mem_table_free() on
the table object.
Approved by: Marko (rb:1377 and rb:1386)
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now when we issue an UPDATE statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails.
516 case Diagnostics_area::DA_EMPTY:
517 default:
518 DBUG_ASSERT(0);
519 error= send_ok(thd->server_status, 0, 0, 0, NULL);
520 break;
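A minimal sketch of the scenario (names are illustrative):
-- connection 1:
start transaction with consistent snapshot;
-- connection 2:
alter table t1 add index k_b (b);
-- connection 1: the optimizer may now pick the new index
update t1 set b = b + 1;   -- InnoDB reports HA_ERR_TABLE_DEF_CHANGED ("insufficient history for index")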
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
We did not allocate enough bits for index->trx_id_offset, causing an
UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
to corrupt the PRIMARY KEY.
dict_index_t: Allocate enough bits.
dict_index_build_internal_clust(): Check for overflow of
index->trx_id_offset. Trip a debug assertion when overflow occurs.
rb:1380 approved by Jimmy Yang
- Wrong thd usage in Item_subselect could lead to a crash
- Initialize an uninitialized variable in the new auto-increment handling code
sql/handler.cc:
More DBUG_PRINT
sql/item_subselect.cc:
Wrong thd usage in Item_subselect could lead to a crash
storage/innobase/handler/ha_innodb.cc:
Initialize variable needed by upper level. This only happens when auto-increment value wraps over.
storage/xtradb/handler/ha_innodb.cc:
Initialize variable needed by upper level. This only happens when auto-increment value wraps over.
This allows us to avoid calculating variables (including those involving
mutexes) that don't match the given wildcard in SHOW STATUS LIKE '...'
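For example, for a statement like the following only the variables whose names match
the pattern need to be calculated:
SHOW STATUS LIKE 'Handler_read%';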
Removed all references to active_mi that could cause problems for multi-source replication.
Added START|STOP ALL SLAVES
Added SHOW ALL SLAVES STATUS
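The new statements in use (a brief sketch):
STOP ALL SLAVES;
START ALL SLAVES;
SHOW ALL SLAVES STATUS;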
include/mysql/plugin.h:
Added SHOW_SIMPLE_FUNC
include/mysql/plugin_audit.h.pp:
Updated .pp file
include/mysql/plugin_auth.h.pp:
Updated .pp file
include/mysql/plugin_ftparser.h.pp:
Updated .pp file
mysql-test/suite/multi_source/info_logs.result:
New columns in SHOW ALL SLAVES STATUS
mysql-test/suite/multi_source/info_logs.test:
Test new syntax
mysql-test/suite/multi_source/simple.result:
New columns in SHOW ALL SLAVES STATUS
mysql-test/suite/multi_source/simple.test:
test new syntax
mysql-test/suite/multi_source/syntax.result:
Updated result
mysql-test/suite/multi_source/syntax.test:
Test new syntax
mysql-test/suite/rpl/r/rpl_skip_replication.result:
Updated result
plugin/semisync/semisync_master_plugin.cc:
SHOW_FUNC -> SHOW_SIMPLE_FUNC
sql/item_create.cc:
Simplify code
sql/lex.h:
Added SLAVES keyword
sql/log.cc:
Constant -> define
sql/log_event.cc:
Added comment
sql/mysqld.cc:
SHOW_FUNC -> SHOW_SIMPLE_FUNC
Made slave_retried_trans, slave_received_heartbeats and heartbeat_period multi-source safe
Clear variable denied_connections and slave_retried_transactions on startup
sql/mysqld.h:
Added slave_retried_transactions
sql/rpl_mi.cc:
create_signed_file_name -> create_logfile_name_with_suffix
Added start_all_slaves() and stop_all_slaves()
sql/rpl_mi.h:
Added prototypes
sql/rpl_rli.cc:
create_signed_file_name -> create_logfile_name_with_suffix
added executed_entries
sql/rpl_rli.h:
Added executed_entries
sql/share/errmsg-utf8.txt:
More and better error messages
sql/slave.cc:
Added more fields to SHOW ALL SLAVES STATUS
Added slave_retried_transactions
Changed constants -> defines
sql/sql_class.h:
Added comment
sql/sql_insert.cc:
active_mi.rli -> thd->rli_slave
sql/sql_lex.h:
Added SQLCOM_SLAVE_ALL_START & SQLCOM_SLAVE_ALL_STOP
sql/sql_load.cc:
active_mi.rli -> thd->rli_slave
sql/sql_parse.cc:
Added START|STOP ALL SLAVES
sql/sql_prepare.cc:
Added SQLCOM_SLAVE_ALL_START & SQLCOM_SLAVE_ALL_STOP
sql/sql_reload.cc:
Made REFRESH RELAY LOG multi-source safe
sql/sql_repl.cc:
create_signed_file_name -> create_logfile_name_with_suffix
Don't send my_ok() from start_slave() or stop_slave() so that we can call it for all master connections
sql/sql_show.cc:
Compare wild cards early for all variables
sql/sql_yacc.yy:
Added START|STOP ALL SLAVES
Added SHOW ALL SLAVES STATUS
sql/sys_vars.cc:
Made replicate_events_marked_for_skip,slave_net_timeout and rpl_filter multi-source safe.
sql/sys_vars.h:
Simplify Sys_var_rpl_filter
Check whether the index allows NULL values, as is done in MyISAM. A UNIQUE index with NULL can contain several NULL entries, so we have to continue even if we have found a row.
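A small illustration of why the scan must continue (table definition is illustrative):
create table t1 (a int, b int, unique key (a));
insert into t1 values (null, 1), (null, 2), (null, 3);  -- all accepted: UNIQUE allows multiple NULLs
select b from t1 where a is null;                       -- must return all three rows, not stop at the first match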
mysql-test/suite/innodb/include/restart_and_reinit.inc:
drop and recreate mysql.innodb* tables when deleting innodb table spaces
mysql-test/t/ssl_8k_key-master.opt:
with the loose- prefix, SSL errors are ignored
sql-common/client.c:
compiler warnings
sql/field.cc:
use the new function
sql/item.cc:
don't convert time to double or decimal via longlong;
this loses the sub-second part.
Use dedicated functions.
sql/item.h:
incorrect cast_to_int type for params
sql/item_strfunc.cc:
use the new function
sql/lex.h:
unused
sql/my_decimal.h:
helper macro
sql/sql_plugin.cc:
workaround for a compiler warning
sql/sql_yacc.yy:
unused
sql/transaction.cc:
fix the merge for SERVER_STATUS_IN_TRANS_READONLY protocol flag
storage/sphinx/CMakeLists.txt:
compiler warnings
TRANSACTION ROLLBACK
Description: During the rollback operation, a blob page
is removed earlier than desired. Consider following scenario:
1. create table t1(a int primary key,b blob) engine=innodb;
2. insert into t1 values (1,repeat('b',9000));
3. begin;
4. update t1 set b=concat(b,'b');
5. update t1 set a=a+1;
6. insert into t1 values (1,repeat('b',9000));
7. rollback;
The update operation in line 5 produces two undo log records. The first
undo record (TRX_UNDO_DEL_MARK_REC) goes to trx->update_undo and the
second undo record (TRX_UNDO_INSERT_REC) goes to trx->insert_undo.
During rollback, they are executed out of order.
When the undo record TRX_UNDO_DEL_MARK_REC is applied/executed,
the blob ownership is also reset. Because of this the blob page
is released earlier than desired. This blob page must have been
freed only as part of applying/executing the undo record
TRX_UNDO_INSERT_REC.
This problem can be avoided by executing the undo records in
order. This patch makes InnoDB execute the undo records
in order.
rb://1125 approved by Marko.
and small collateral changes
mysql-test/lib/My/Test.pm:
somehow with "print" we get truncated writes sometimes
mysql-test/suite/perfschema/r/digest_table_full.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/dml_handler.result:
host table is not ported over yet
mysql-test/suite/perfschema/r/information_schema.result:
host table is not ported over yet
mysql-test/suite/perfschema/r/nesting.result:
this differs, because we don't rewrite general log queries, and multi-statement
packets are logged as one entry. This result file is identical to what mysql-5.6.5
produces with the --log-raw option.
mysql-test/suite/perfschema/r/relaylog.result:
MariaDB modifies the binlog index file directly, while MySQL 5.6 has a feature "crash-safe binlog index" and modifies a special "crash-safe" shadow copy of the index file and then moves it over. That's why this test shows "NONE" index file writes in MySQL and "MANY" in MariaDB.
mysql-test/suite/perfschema/r/server_init.result:
MariaDB initializes the "manager" resources from the "manager" thread, and starts this thread only when --flush-time is not 0. MySQL 5.6 initializes "manager" resources unconditionally on server startup.
mysql-test/suite/perfschema/r/stage_mdl_global.result:
this differs, because MariaDB disables query cache when query_cache_size=0. MySQL does not
do that, and this causes useless mutex locks and waits.
mysql-test/suite/perfschema/r/statement_digest.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_consumers.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_long_query.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/rpl/r/rpl_mixed_drop_create_temp_table.result:
will be updated to match 5.6 when alfranio.correia@oracle.com-20110512172919-c1b5kmum4h52g0ni and anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y are merged
mysql-test/suite/rpl/r/rpl_non_direct_mixed_mixing_engines.result:
will be updated to match 5.6 when anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y is merged
- Don't connect right away in ha_cassandra::open. If we connect there, it becomes
impossible to do SHOW CREATE TABLE when the server is not present.
- Note: CREATE TABLE still requires that a connection is present, as it needs
to check whether the specified DDL can be used with Cassandra. We could
delay that check also, but then one would not be able to find out about
errors in the table DDL until doing a SELECT.
- Add capability to retry calls that have failed with UnavailableException or
[Cassandra's] TimedOutException.
- We don't retry for Thrift errors yet, although we could easily do so now.
Delete-mark change buffer records when resorting to a pessimistic
delete from the change buffer B-tree. Skip delete-marked records in
the change buffer merge and when estimating whether an operation can
be buffered. Without this fix, we could try to apply the same buffered
changes multiple times if the server was killed at the right moment.
In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
delete-marked (already processed) records.
ibuf_delete_rec(): Add a crash point before the optimistic delete. If the
optimistic delete fails, flag the record as processed before
mtr_commit().
ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
processed) records.
Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
btr_cur_set_deleted_flag_for_ibuf() and add a parameter.
rb:1307 approved by Jimmy Yang
create table t1 (a smallint primary key auto_increment);
insert into t1 values(32767);
insert into t1 values(NULL);
ERROR 1062 (23000): Duplicate entry '32767' for key 'PRIMARY'
Now one always gets error HA_ERR_AUTOINC_ERANGE=167 "Out of range value for column", independent of
storage engine, SQL mode or number of inserted rows. This is a unique error that is easier to test for in replication.
Another bug fix is that we now get an error when trying to insert a too big auto-generated value, even in non-strict mode.
Before, one instead got the maximum column value inserted.
This patch also fixes some issues with inserting negative numbers into an auto-increment column.
Fixed so that ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are compared as equal between master and slave.
This ensures that replication works from an old master to a new slave for auto-increment overflow errors.
Added SQLSTATE errors for handler errors
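To illustrate the new behavior described above (a sketch; the exact message text is not shown):
create table t2 (a smallint primary key auto_increment);
insert into t2 values (32767);          -- reaches the SMALLINT maximum
insert into t2 values (NULL);           -- now fails with HA_ERR_AUTOINC_ERANGE (167), regardless of engine or SQL mode
insert ignore into t2 values (32767);   -- duplicate key now produces a warning instead of passing silently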
Smaller bug fixes:
* Added warnings for duplicate key errors when using INSERT IGNORE
* Fixed a bug where using --skip-log-bin followed by --log-bin set log-bin to "0"
* Allow one to see how cmake is called by using --just-print --just-configure
BUILD/FINISH.sh:
--just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
cmake/configure.pl:
--just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
include/CMakeLists.txt:
Added handler_state.h
include/handler_state.h:
SQLSTATE for handler error messages.
Required for HA_ERR_AUTOINC_ERANGE, but solves also some other cases.
mysql-test/extra/binlog_tests/binlog.test:
Fixed old wrong behaviour
Added more tests
mysql-test/extra/binlog_tests/binlog_insert_delayed.test:
Reset binary log to only print what's necessary in show_binlog_events
mysql-test/extra/rpl_tests/rpl_auto_increment.test:
Update to new error codes
mysql-test/extra/rpl_tests/rpl_insert_delayed.test:
Ignore warnings as this depends on how the test is run
mysql-test/include/strict_autoinc.inc:
One now gets an error on overflow
mysql-test/r/auto_increment.result:
Update results after fixing error message
mysql-test/r/auto_increment_ranges_innodb.result:
Test new behaviour
mysql-test/r/auto_increment_ranges_myisam.result:
Test new behaviour
mysql-test/r/commit_1innodb.result:
Added warnings for duplicate key error
mysql-test/r/create.result:
Added warnings for duplicate key error
mysql-test/r/insert.result:
Added warnings for duplicate key error
mysql-test/r/insert_select.result:
Added warnings for duplicate key error
mysql-test/r/insert_update.result:
Added warnings for duplicate key error
mysql-test/r/mix2_myisam.result:
Added warnings for duplicate key error
mysql-test/r/myisam_mrr.result:
Added warnings for duplicate key error
mysql-test/r/null_key.result:
Added warnings for duplicate key error
mysql-test/r/replace.result:
Update to new error codes
mysql-test/r/strict_autoinc_1myisam.result:
Update to new error codes
mysql-test/r/strict_autoinc_2innodb.result:
Update to new error codes
mysql-test/r/strict_autoinc_3heap.result:
Update to new error codes
mysql-test/r/trigger.result:
Added warnings for duplicate key error
mysql-test/r/xtradb_mrr.result:
Added warnings for duplicate key error
mysql-test/suite/binlog/r/binlog_innodb_row.result:
Updated result
mysql-test/suite/binlog/r/binlog_row_binlog.result:
Out of range data for auto-increment is not inserted anymore
mysql-test/suite/binlog/r/binlog_statement_insert_delayed.result:
Updated result
mysql-test/suite/binlog/r/binlog_stm_binlog.result:
Out of range data for auto-increment is not inserted anymore
mysql-test/suite/binlog/r/binlog_unsafe.result:
Updated result
mysql-test/suite/innodb/r/innodb-autoinc.result:
Update to new error codes
mysql-test/suite/innodb/r/innodb-lock.result:
Updated results
mysql-test/suite/innodb/r/innodb.result:
Updated results
mysql-test/suite/innodb/r/innodb_bug56947.result:
Updated results
mysql-test/suite/innodb/r/innodb_mysql.result:
Updated results
mysql-test/suite/innodb/t/innodb-autoinc.test:
Update to new error codes
mysql-test/suite/maria/maria3.result:
Updated result
mysql-test/suite/maria/mrr.result:
Updated result
mysql-test/suite/optimizer_unfixed_bugs/r/bug43617.result:
Updated result
mysql-test/suite/rpl/r/rpl_auto_increment.result:
Update to new error codes
mysql-test/suite/rpl/r/rpl_insert_delayed,stmt.rdiff:
Updated results
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
Updated results
mysql-test/t/auto_increment.test:
Update to new error codes
mysql-test/t/auto_increment_ranges.inc:
Test new behaviour
mysql-test/t/auto_increment_ranges_innodb.test:
Test new behaviour
mysql-test/t/auto_increment_ranges_myisam.test:
Test new behaviour
mysql-test/t/replace.test:
Update to new error codes
mysys/my_getopt.c:
Fixed a bug where using --skip-log-bin followed by --log-bin set log-bin to "0"
sql/handler.cc:
Ignore negative values for signed auto-increment columns
Always give an error if we get an overflow for an auto-increment-column (instead of inserting the max value)
Ensure that the row number is correct for the out-of-range-value error message.
******
Fixed wrong printing of column name for "Out of range value" errors
Fixed that INSERT_ID is correctly replicated also for out-of-range autoincrement values
Fixed that print_keydup_error() can also be used to generate warnings
******
Return HA_ERR_AUTOINC_ERANGE (167) instead of ER_WARN_DATA_OUT_OF_RANGE for auto-increment overflow
sql/handler.h:
Allow INSERT IGNORE to continue also after out-of-range inserts.
Fixed that print_keydup_error() can also be used to generate warnings
sql/log_event.cc:
Added DBUG_PRINT
Fixed so that ER_AUTOINC_READ_FAILED, ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are compared as equal between master and slave.
This ensures that replication works from an old master to a new slave for auto-increment overflow errors.
sql/sql_insert.cc:
Add warnings for duplicate key errors when using INSERT IGNORE
sql/sql_state.c:
Added handler errors
sql/sql_table.cc:
Update call to print_keydup_error()
storage/innobase/handler/ha_innodb.cc:
Fixed increment handling of auto-increment columns to be consistent with rest of MariaDB.
storage/xtradb/handler/ha_innodb.cc:
Fixed increment handling of auto-increment columns to be consistent with rest of MariaDB.
page_zip_validate(), page_zip_validate_low(): Add a parameter for the
B-tree index.
page_zip_validate_low(): If the page contents do not match, check
that the record link chains match. Furthermore, if dict_index_t is
passed, check that the records match. (This reduces coverage a bit: if
index=NULL, we will ignore differences in record contents, that is,
the page payload.)
rb:1264 approved by Inaam Rana
- Added option thrift_port, which allows specifying which port to connect to.
- Not adding username/password; it turns out there are no authentication
schemes in the stock Cassandra distribution.
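A sketch of how the new option might be used; keyspace/column family names are made up,
and the other table options shown (thrift_host, keyspace, column_family) are the existing ones:
create table cf1 (rowkey varchar(36) primary key, data varchar(60)) engine=cassandra
  thrift_host='127.0.0.1' thrift_port=9160 keyspace='ks1' column_family='cf1';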
Introduce a new storage engine API method commit_checkpoint_request().
This is used to replace the fsync() at the end of every storage engine
commit with a single fsync() when a binlog is rotated.
Binlog rotation is now done during group commit instead of being
delayed until unlog(), removing some server stall and avoiding an
expensive lock/unlock of LOCK_log inside unlog().
mysql-test/suite/heap/heap_hash.result:
Added test case
mysql-test/suite/heap/heap_hash.test:
Added test case
storage/heap/hp_hash.c:
Limit key data length to max key length
rb://1293
approved by: Marko Makela
There is a race when dropping a single-table tablespace: a reader
thread can initiate a read request before the delete flag is set, and
before the read is finished the deleting thread can attempt to free the
fil_node.
This patch checks the status in fil_io() to make sure that the
tablespace is not being deleted. If it is being deleted, then
an error is returned instead of attempting I/O.
THOUGH IT IS NOT.
The following error message is misleading because it claims
that the BLOB space is not counted.
"ERROR 1118 (42000): Row size too large. The maximum row size for
the used table type, not counting BLOBs, is 8126. You have to
change some columns to TEXT or BLOBs"
When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row. So
the above error message is changed as follows, depending on
the row format used:
For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format,
BLOB prefix of 0 bytes is stored inline."
For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
rb://1252 approved by Marko Makela
PAGE SPLIT
page_rec_get_nth_const(): Map nth==0 to the page infimum.
btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
should never be positioned on the page infimum.
btr_index_page_validate(): Add test instrumentation for checking the
return values of page_rec_get_nth_const() during CHECK TABLE, and for
checking that the page directory slot 0 always contains only one
record, the predefined page infimum record.
page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
assertions guarding against accessing the page slot 0.
page_copy_rec_list_start(): Clarify a comment about ret_pos==0.
rb:1248 approved by Jimmy Yang
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
The patch by Jorgen Loland only removed the assertion from the
built-in InnoDB, not from the InnoDB Plugin.
- Full table scan internally uses LIMIT n, and re-starts the scan from
the last seen rowkey value. rowkey ranges are inclusive, so we will
see the same rowkey again. We should ignore it.
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
This assertion catches unnecessary calls to this method,
but such calls are not harming correctness.
- We use HA_MRR_NO_ASSOC ("optimizer_switch=join_cache_hashed") mode
- Not able to use BKA's buffers yet.
- There is a variable to control batch size
- There are status counters.
- Needed to make some fixes in BKA code (to be checked with Igor)
The ha_innobase table handler contained two search key buffers
(srch_key_val1, srch_key_val2) of fixed size used to store the search
key. The size of these buffers was fixed at
REC_VERSION_56_MAX_INDEX_COL_LEN + 2, but this size is not sufficient
to hold the search key. Hence the following assertion in
row_sel_convert_mysql_key_to_innobase() failed.
2438 /* Storing may use at most data_len bytes of buf */
2439
2440 if (UNIV_LIKELY(!is_null)) {
2441 ut_a(buf + data_len <= original_buf + buf_len);
2442 row_mysql_store_col_in_innobase_format(
2443 dfield, buf,
2444 FALSE, /* MySQL key value format col */
2445 key_ptr + data_offset, data_len,
2446 dict_table_is_comp(index->table));
2447 buf += data_len;
2448 }
The buffer size is now calculated with the formula
MAX_KEY_LENGTH + MAX_REF_PARTS*2. This properly takes into account
the extra bytes needed to store the length for each column. An index
can contain a maximum of MAX_REF_PARTS columns in it, and for each
column 2 bytes are needed to store length.
rb://1238 approved by Marko and Vasil Dimov.
Backport from mysql-5.6 the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)
Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
The assertion introduced in the fix for Bug#13817703
is too strong: a negative number can be greater
than the column's maximum value when the column value
is a negative number.
rb://978 Approved by Jimmy Yang.
rb:1236 approved by Marko Makela
WARNING
This patch is for mysql-5.5 only,
to be null-merged to mysql-5.6 and mysql-trunk.
This is a partial rollback of the file io instrumentation,
removing the instrumentation for mysql_file_stat in the archive engine.
See the bug comments for details.
two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert)
HEURISTICS FOR COMPRESSED PAGE SIZE
The fix of Bug#12845774 was supposed to skip known-to-fail
btr_cur_optimistic_insert() calls. There was only one such call, in
btr_cur_pessimistic_update(). All other callers of
btr_cur_pessimistic_insert() would release and reacquire the B-tree
page latch before attempting the pessimistic insert. This would allow
other threads to restructure the B-tree, allowing (and requiring) the
insert to succeed as an optimistic (single-page) operation.
Failure to attempt an optimistic insert before a pessimistic one would
trigger an attempt to split an empty page.
rb:1234 approved by Sunny Bains
sql/handler.cc:
SHOW INNODB STATUS sometimes returns 0 even if it has generated an error.
This code is here to catch it until InnoDB some day is fixed.
storage/innobase/handler/ha_innodb.cc:
Catch at least one of the possible errors from SHOW INNODB STATUS to provide a correct return code.
storage/xtradb/handler/ha_innodb.cc:
Catch at least one of the possible errors from SHOW INNODB STATUS to provide a correct return code.
support-files/my-huge.cnf.sh:
Fixed typo
Facebook got a case where the page compresses really well so that
btr_cur_optimistic_update() returns DB_UNDERFLOW, but when a record
gets updated, the compression rate radically changes so that
btr_cur_insert_if_possible() can not insert in place despite
reorganizing/recompressing the page, leading to the assertion failing.
rb:1220 approved by Sunny Bains
COMPRESSED PAGE SIZE
This was submitted as MySQL Bug 61456 and a patch provided by
Facebook. This patch follows the same idea, but instead of adding a
parameter to btr_cur_pessimistic_insert(), we simply remove the
btr_cur_optimistic_insert() call there and add it to the only caller
that needs it.
btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().
btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert()
before invoking btr_cur_pessimistic_insert().
btr_cur_pessimistic_update(): Clarify in a comment why it is not
necessary to invoke btr_cur_optimistic_insert().
btr_root_raise_and_insert(): Assert that the root page is not empty.
This could happen if a pessimistic insert (involving a split or merge)
is performed without first attempting an optimistic (intra-page) insert.
rb:1219 approved by Sunny Bains
btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may
fail after reorganizing the page.
btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages,
because compression may fail after reorganization.
page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the
ret_pos, which may also be the page infimum.
rb:1221
sql/item_subselect.cc:
Added purecov info
sql/sql_select.cc:
Added cast
storage/innobase/handler/ha_innodb.cc:
Added cast
storage/xtradb/btr/btr0btr.c:
Added buf_block_get_frame_fast() to avoid compiler warning
storage/xtradb/handler/ha_innodb.cc:
Added cast
storage/xtradb/include/buf0buf.h:
InnoDB has buf_block_get_frame(block) defined as (block)->frame.
Didn't want to make a big change that could break XtraDB, as it may use buf_block_get_frame() differently, so I made this quick hack to patch one compiler warning.
client/mysqldump.c:
Slave needs to be initialized with 0
dbug/dbug.c:
Removed a non-existent function
plugin/semisync/semisync_master.cc:
Fixed compiler warning
sql/opt_range.cc:
thd needs to be set early as it's used in some error conditions.
sql/sql_table.cc:
Changed to use uchar* to make array indexing portable
storage/innobase/handler/ha_innodb.cc:
Removed an unused variable
storage/maria/ma_delete.c:
Fixed compiler warning
storage/maria/ma_write.c:
Fixed compiler warning
IN QUERIES
This bug was caused by an incorrect fix of
Bug#13807811 BTR_PCUR_RESTORE_POSITION() CAN SKIP A RECORD
There was nothing wrong with btr_pcur_restore_position(), but with the
use of it in the table scan during index creation.
rb:1206 approved by Jimmy Yang
mysql-test/suite/heap/heap.result:
Added test case for MDEV-436
mysql-test/suite/heap/heap.test:
Added test case for MDEV-436
storage/heap/hp_block.c:
Don't allocate a set of HP_PTRS when not needed. This saves us about 1024 bytes for most allocations.
storage/heap/hp_create.c:
Made the initial allocation of block sizes depend on min_records and max_records.
Make CMakeLists.txt detect whether the installed Boost can be compiled with the
installed compiler and the specified set of compiler options.
Background: even a sufficiently new Boost cannot be compiled with a sufficiently old gcc
in the presence of -fno-rtti
Problem description:
Table 't' is created with two columns and a compound index on both
columns, under the InnoDB/MyISAM engine on the remote machine. On the local
machine the same table is created under the FEDERATED engine.
A SELECT whose WHERE clause combines conditions with 'AND' gives wrong
results on the local machine.
Analysis:
The given query is wrongly transformed at the federated engine by the
ha_federated::create_where_from_key() function, and the transformed query is sent to
the remote machine. Hence the local machine shows wrong results.
Given query "select c1 from t where c1 <= 2 and c2 = 1;"
Query transformed, after ha_federated::create_where_from_key() function is:
SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL ) AND
( (`c1` >= 2) AND (`c2` <= 1) ) and the same sent to real_query().
In the above the '<=' and '=' conditions were transformed to '>=' and
'<=' respectively.
ha_federated::create_where_from_key() behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to generate the "(`c1` IS NOT NULL )" part of the WHERE clause; this
transformation is correct. The end_key is used to generate "( (`c1` >= 2)
AND (`c2` <= 1) )", which is wrong: here the given conditions ('<=' and '=')
are changed into the wrong conditions ('>=' and '<=').
The end_key is {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}
and store_length has the value '5'. Based on the store_length and length
values, the condition is applied in the HA_READ_AFTER_KEY switch case.
The 'HA_READ_AFTER_KEY' case is applicable only to the last part of
the end_key; for previous parts execution goes to the 'HA_READ_KEY_OR_NEXT' case,
where '>=' is added as the condition instead of '<='.
Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case to apply to all
parts of the end_key, i.e. 'i > 0' is used for the end_key; hence it was added to
the if condition.
mysql-test/suite/federated/federated.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_archive.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_13118.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_25714.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_35333.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_debug.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_innodb.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_server.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_transactions.test:
modified the federated.inc file location
mysql-test/suite/federated/include/federated.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/federated_cleanup.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/have_federated_db.inc:
moved the file from federated suite to federated/include folder
storage/federated/ha_federated.cc:
updated the 'if condition' in ha_federated::create_where_from_key()
function.
Backporting the WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1175 approved by Jimmy Yang.
Backporting the WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.
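Once backported, the information is available through INFORMATION_SCHEMA tables; for
example (a sketch, column names as in the MySQL 5.6 INNODB_BUFFER_POOL_STATS table):
select pool_id, pool_size, free_buffers, database_pages
from information_schema.innodb_buffer_pool_stats;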
ISSUE: Incorrect key file. The key file is corrupted:
incorrect key information (keyseg) is read
from the index file. The key definitions in the .MYI
and .FRM files differ. The starting pointer
for reading the keyseg information is changed
to a value greater than the pack_reclength.
Memcpy then tries to read keyseg information from
unallocated memory, which causes the crash.
SOLUTION: One more check added to compare
the key definition in the .MYI and .FRM
files. If the definitions differ, the server
produces an error.
- index_merge/intersection is unable to work on GIS indexes, because:
1. index scans have no Rowid-Ordered-Retrieval property
2. When one does an index-only read over a GIS index, they do not
get the index tuple, because the index only contains the bounding box of the geometry.
This is why the key_copy() call crashed.
This patch fixes #1, which makes the problem go away. Theoretically, it would
be nice to check #2 too, but the SE API semantics are not sufficiently precise to do it.
Now the partition engine adds underlying tables to the query cache (QC) and asks the underlying tables' engine for permission to cache the query and to return the result of the query.
Fixed incorrect QC cleanup in case of table registration failure.
Unified the interface for the myisammrg and partition engines for the QC.
primary key with innodb tables
The bug was triggered if a single ALTER TABLE statement both
added and dropped indexes and ALTER TABLE failed during drop
(e.g. because the index was needed in a foreign key constraint).
In such cases, the server index information would get out of
sync with InnoDB - the added index would be present inside
InnoDB, but not in the server. This could then lead to InnoDB
error messages and/or server crashes.
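A sketch of the kind of statement that triggers this (table and index names are illustrative):
create table parent (a int primary key) engine=innodb;
create table child (a int, b int, key idx_a (a),
  constraint fk foreign key (a) references parent (a)) engine=innodb;
alter table child add index idx_b (b), drop index idx_a;
-- the DROP fails because idx_a is needed by the foreign key; before this fix,
-- idx_b could remain inside InnoDB while the server reverted its own metadata.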
The root cause is that new indexes are added before old indexes
are dropped. This means that if ALTER TABLE fails while dropping
indexes, index changes will be reverted in the server but not
inside InnoDB.
This patch fixes the problem by dropping any added indexes
if the drop fails (for ALTER TABLE statements that both add
and drop indexes).
However, this won't work if we added a primary key as this
key might not be possible to drop inside InnoDB. Therefore,
we resort to the copy algorithm if a primary key is added
by an ALTER TABLE statement that also drops an index.
In 5.6 this bug is more properly fixed by the handler interface
changes done in the scope of WL#5534 "Online ALTER".
- InnoDB now returns the handler-specific HA_WRONG_CREATE_OPTION instead of the MySQL-specific ER_ILLEGAL_HA_CREATE_OPTION
- This changes the user-level error message from "Unknown error" to "Wrong create options"
mysql-test/r/lowercase_table2.result:
Updated result file
mysql-test/r/partition_innodb_plugin.result:
Updated to new error message
mysql-test/r/partition_open_files_limit.result:
Updated result file
mysql-test/r/row-checksum-old.result:
Updated to new error message
mysql-test/r/row-checksum.result:
Updated to new error message
mysql-test/r/symlink.result:
Updated result file
mysql-test/suite/innodb/r/innodb-create-options.result:
Updated to new error message
mysql-test/suite/innodb/r/innodb-zip.result:
Updated to new error message
mysql-test/suite/innodb/r/innodb.result:
Updated to new error message
storage/innobase/handler/ha_innodb.cc:
Return HA_WRONG_CREATE_OPTION instead of ER_ILLEGAL_HA_CREATE_OPTION
This gives clearer and OS-independent error messages
storage/xtradb/handler/ha_innodb.cc:
Return HA_WRONG_CREATE_OPTION instead of ER_ILLEGAL_HA_CREATE_OPTION
This gives clearer and OS-independent error messages