wsrep_max_ws_rows causes cluster to break when running
Galera cluster in TOI mode
Problem:
While copying records to a temporary table during ALTER TABLE,
if there are more than wsrep_max_ws_rows records, the
command fails.
Fix:
Since the temporary table records are not placed into the
binary log, wsrep_affected_rows must not be incremented for them.
Added a test.
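A minimal sketch of the guard, assuming the usual THD and TABLE_SHARE
members (not the literal patch):

  /* Count a row towards wsrep_max_ws_rows only if it can reach
     the binary log, i.e. the table is not a temporary table. */
  if (table->s->tmp_table == NO_TMP_TABLE)
    thd->wsrep_affected_rows++;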
Remove the debug parameter innodb_force_recovery_crash that was
introduced into MySQL 5.6 by me in WL#6494, which allowed InnoDB
to resize the redo log on startup.
Let innodb.log_file_size actually start up the server, but ensure
that the InnoDB storage engine refuses to start up in each of the
scenarios.
that must not send a response
Problem:- When using wsrep (with Galera) and issuing commands that can
cause deadlocks, deadlock exception errors were sent in response to
commands such as close prepared statement and close connection which,
by spec, must not send a response.
Solution:- In dispatch_command(), handle the COM_QUIT and COM_STMT_CLOSE
commands even in case of an error, so that no error response is sent for them.
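A minimal sketch of the check, assuming the wsrep conflict-state API of
that era (the dispatch_end label is hypothetical; not the literal patch):

  /* Let COM_QUIT and COM_STMT_CLOSE proceed even when the
     transaction was aborted; they must not get a reply. */
  if (thd->wsrep_conflict_state == ABORTED &&
      command != COM_QUIT && command != COM_STMT_CLOSE)
  {
    my_message(ER_LOCK_DEADLOCK, "Deadlock: wsrep aborted transaction",
               MYF(0));
    goto dispatch_end;   /* hypothetical label: skip normal dispatch */
  }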
Patch Credit:- Jaka Močnik
The temporary tables created for recursive table references
should be closed in close_thread_tables(), because they might
be used in statements like ANALYZE WITH r AS (...) SELECT * FROM r
where r is defined through recursion.
As noted in MDEV-8841, any test that kills the server must issue
FLUSH TABLES, so that tables of crash-unsafe storage engines will
not be corrupted. Consistently issue this statement after any
mtr.add_suppression() calls.
Also, do not invoke shutdown_server directly, but use helpers instead.
recv_scan_log_recs(): Remember if redo log apply is needed,
even if starting up in innodb_read_only mode.
recv_recovery_from_checkpoint_start_func(): Refuse
innodb_read_only startup if redo log apply is needed.
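A minimal sketch of the refusal, assuming the existing recv_needed_recovery
and srv_read_only_mode globals (not the literal patch):

  /* Crash recovery would have to apply redo log, which a
     read-only startup cannot do, so give up early. */
  if (recv_needed_recovery && srv_read_only_mode) {
    ib::error() << "Redo log apply is needed,"
                   " but the server is in innodb_read_only mode";
    return(DB_READ_ONLY);
  }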
Do not kill the server after calling mtr.add_suppression(), because
the procedure modifies a crash-unsafe table, and we do not want to
corrupt that table.
Wait only if innodb_num_page_compressed_trim_op shows that
we have succeeded in doing at least a few trim operations
(which will happen on insert, if possible).
crashes server
This bug is the result of merging the Oracle MySQL follow-up fix
BUG#22963169 MYSQL CRASHES ON CREATE FULLTEXT INDEX
without merging the base bug fix:
Bug#79475 Insert a token of 84 4-bytes chars into fts index causes
server crash.
Unlike the above-mentioned fixes in MySQL, our fix will not change
the storage format of fulltext indexes in InnoDB or XtraDB
when a character encoding with mbmaxlen=2 or mbmaxlen=3 is used
and the length of a word is between 128 and 84*mbmaxlen bytes.
The Oracle fix would allocate 2 length bytes for these cases.
Compatibility with other MySQL and MariaDB releases is ensured by
persisting the used maximum length in the SYS_COLUMNS table in the
InnoDB data dictionary.
This fix also removes some unnecessary strcmp() calls when checking
for the legacy default collation my_charset_latin1
(my_charset_latin1.name=="latin1_swedish_ci").
fts_create_one_index_table(): Store the actual length in bytes.
This metadata will be written to the SYS_COLUMNS table.
fts_zip_initialize(): Initialize only the first byte of the buffer.
Actually the code should not even care about this first byte, because
the length is set to 0.
FTS_MAX_WORD_LEN: Define as HA_FT_MAXCHARLEN * 4 aka 336 bytes,
not as 254 bytes.
row_merge_create_fts_sort_index(): Set the actual maximum length of the
column in bytes, similar to fts_create_one_index_table().
row_merge_fts_doc_tokenize(): Remove the redundant parameter word_dtype.
Use the actual maximum length of the column. Calculate the extra_size
in the same way as row_merge_buf_encode() does.
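A minimal sketch of the length-byte rule that row_merge_buf_encode()
applies (variable names simplified; not the literal patch):

  /* One length byte is enough when the encoded length fits in
     7 bits; longer words need two length bytes. */
  ulint len=        word->text.f_len;
  ulint extra_size= (len < 128) ? 1 : 2;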
Neither dict_foreign_find_index nor dict_foreign_qualify_index
considered virtual columns as possible foreign key columns,
and there was an assertion that ruled virtual columns out.
Fixed by also looking up the referencing and referenced columns
among the virtual columns when needed.
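A minimal sketch of the extra lookup, assuming the usual dict_table_t
virtual-column accessors (not the literal patch):

  /* Also try to match the column name against virtual columns. */
  for (ulint i= 0; i < table->n_v_cols; i++) {
    if (!innobase_strcasecmp(dict_table_get_v_col_name(table, i),
                             col_name)) {
      return(true);   /* found among the virtual columns */
    }
  }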
The fields st_select_lex::cond_pushed_into_where and
st_select_lex::cond_pushed_into_having should be re-initialized
for the unit specifying a derived table at every re-execution
of the query that uses this derived table, because the result
of condition pushdown may be different for different executions.
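A minimal sketch of the re-initialization (the helper name is hypothetical):

  /* Called before each re-execution so that condition pushdown
     starts from a clean slate. */
  void st_select_lex::reset_pushdown_conditions()
  {
    cond_pushed_into_where=  NULL;
    cond_pushed_into_having= NULL;
  }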
Before "MDEV-10709 Expressions as parameters to Dynamic SQL" only
user variables were syntactically allowed as EXECUTE parameters.
User variables were OK as both IN and OUT parameters.
When Item_param was bound to an actual parameter (a user variable),
it automatically meant that the bound Item was settable.
The DBUG_ASSERT() in Protocol_text::send_out_parameters() guarded that
the actual parameter is really settable.
After MDEV-10709, any kind of expression is allowed as an EXECUTE IN parameter.
But the patch for MDEV-10709 forgot to check that only descendants of
Settable_routine_parameter should be allowed as OUT parameters.
So an attempt to pass a non-settable parameter as an OUT parameter
made server crash on the above mentioned DBUG_ASSERT.
This patch changes Item_param::get_settable_routine_parameter(),
which previously always returned "this". Now, when Item_param is bound
to some Item, it caches whether the bound Item is settable.
Item_param::get_settable_routine_parameter() now returns "this" only
if the bound actual parameter is settable, and returns NULL otherwise.
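A minimal sketch of the new behaviour (the member name is illustrative):

  Settable_routine_parameter *
  Item_param::get_settable_routine_parameter()
  {
    /* Only settable actual parameters may serve as OUT parameters;
       anything else makes the caller fail cleanly instead of assert. */
    return m_is_settable_routine_parameter ? this : NULL;
  }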
The problem was that the implementation merged from 10.1 was
incompatible with InnoDB 5.7.
buf0buf.cc: Add functions that return whether we should punch
a hole and how big it should be.
buf0flu.cc: Add the written page to IORequest.
fil0fil.cc: Remove an unneeded status call; when a tablespace
is created, test whether the file system supports sparse files
and punch hole. Add a call to get the file system
block size. The file node used is added to IORequest. Added
functions to check whether punch hole is supported and to
set it up.
ha_innodb.cc: Remove the unneeded status variables (trim512-32768)
and trim_op_saved. Deprecate innodb_use_trim and
set it ON by default. Add a function to set innodb_use_trim
dynamically.
dberr.h: Add the error code DB_IO_NO_PUNCH_HOLE, returned
if the punch hole operation fails.
fil0fil.h: Add a punch_hole flag to fil_space_t and a
block size to fil_node_t.
os0api.h: Header for helper functions in buf0buf.cc and
fil0fil.cc that are needed by os0file.h.
os0file.h: Remove the unneeded m_block_size from IORequest;
add bpage to IORequest so the actual size of the block
is known, and m_fil_node so the tablespace file system
block size and its punch hole support are known.
os0file.cc: Add a punch_hole() function to IORequest
that does the punch hole operation, gets the file
system block size, and determines whether the file
system supports sparse files (for punch hole).
page0size.h: Stop disabling the implicit copy assignment
and use it to implement the copy_from() function.
buf0dblwr.cc, buf0flu.cc, buf0rea.cc, fil0fil.cc, fil0fil.h,
os0file.h, os0file.cc, log0log.cc, log0recv.cc:
Remove the unneeded write_size parameter from fil_io()
calls.
srv0mon.h, srv0srv.h, srv0mon.cc: Remove the unneeded
trim512-trim32768 status variables, and remove them
from the monitor tests.
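For context, a minimal sketch of how a hole is punched on Linux
(a hypothetical standalone helper; the real logic lives in os0file.cc):

  #define _GNU_SOURCE
  #include <fcntl.h>

  /* Release the on-disk blocks of a compressed page's trailing range
     while keeping the logical file size unchanged. */
  static int punch_hole(int fd, off_t off, off_t len)
  {
    return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                     off, len);
  }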
Problem: Item_param::basic_const_item() returned true when fixed==false.
This unexpected combination made Item::const_charset_converter() crash
on asserts.
Fix:
- Changing all Item_param::set_xxx() to set "fixed" to true.
This fixes the problem.
- Additionally, changing all Item_param::set_xxx() to set
Item_param::item_type, to avoid duplicate code, and for consistency,
to make the code symmetric between different constant types.
Before this patch only set_null() set item_type.
- Moving Item_param::state and Item_param::item_type from public to private,
to make it easier to ensure that these members stay in sync with "fixed"
and with each other.
- Adding a new argument "unsigned_arg" to Item::set_decimal(),
and reusing it in two places to avoid duplicate code.
- Adding a new method Item_param::fix_temporal() and reusing it in two places.
- Adding methods has_no_value(), has_long_data_value(), has_int_value(),
instead of direct access to Item_param::state.
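A minimal sketch of the pattern now followed by every
Item_param::set_xxx() (simplified):

  void Item_param::set_int(longlong i, uint32 max_length_arg)
  {
    value.integer= i;
    state= INT_VALUE;
    item_type= INT_ITEM;   /* set by every set_xxx(), not only set_null() */
    max_length= max_length_arg;
    fixed= true;           /* basic_const_item()==true now implies fixed */
  }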
[0-9]*[.]?[0-9]* wasn't a sufficient regex to cover the
%lg format used in Json_writer::add_double. Exponent formats
were missed.
Here we normalize all the replace_regex expressions for
ANALYZE FORMAT=JSON into one include file.
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
Problem:- When setting max_binlog_stmt_cache_size=18446744073709547520
from either the command line or a .cnf file, the server fails to start.
Solution:- Added one more function, eval_num_suffix_ull(), which uses
strtoull() to get an unsigned ulonglong from a string, and getopt_ull()
now calls this function instead of eval_num_suffix(). Also renamed the
previous eval_num_suffix() to eval_num_suffix_ll() to stay consistent.
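A minimal sketch of the new helper (simplified; overflow checks and
error reporting omitted):

  #include <stdlib.h>

  /* Parse "<digits>[kKmMgG]" into an unsigned 64-bit value. */
  static unsigned long long eval_num_suffix_ull(const char *arg)
  {
    char *end;
    unsigned long long num= strtoull(arg, &end, 10);
    switch (*end) {
    case 'k': case 'K': num*= 1024ULL; break;
    case 'm': case 'M': num*= 1024ULL * 1024; break;
    case 'g': case 'G': num*= 1024ULL * 1024 * 1024; break;
    }
    return num;
  }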
==== Description ====
Flashback can roll back instances/databases/tables to an old snapshot.
It is implemented at the server level using full-image binary logs (--binlog-row-image=FULL), so it supports all engines.
Currently, it is a feature inside the mysqlbinlog tool (with the --flashback argument).
Because the flashback binlog events are kept in memory, you should check that there is enough memory in your machine.
==== New Arguments to mysqlbinlog ====
--flashback (-B)
It makes mysqlbinlog work in flashback mode.
==== New Arguments to mysqld ====
--flashback
Set up the server to use flashback. This enables the binary log in row mode
and enables the extra logging of DDLs needed by the flashback feature.
==== Example ====
Given a table "t" in database "test", we can compare the output with and without "--flashback".
#client/mysqlbinlog /data/mysqldata_10.0/binlog/mysql-bin.000001 -vv -d test -T t --start-datetime="2013-03-27 14:54:00" > /tmp/1.sql
#client/mysqlbinlog /data/mysqldata_10.0/binlog/mysql-bin.000001 -vv -d test -T t --start-datetime="2013-03-27 14:54:00" -B > /tmp/2.sql
Then, by importing the output flashback file (/tmp/2.sql), you can flash back your database/table to the specified time (--start-datetime).
If you know the exact position, "--start-position" also works; mysqlbinlog will output the flashback logs that can flash back to the "--start-position" position.
==== Implementation ====
1. If binlog_format is ROW (binlog-row-image=FULL in 10.1 and later), all column values are stored in the row event, so we can get the data as it was before the mis-operation.
2. Do the following:
2.1 Change the event type: INSERT->DELETE, DELETE->INSERT.
For example:
INSERT INTO t VALUES (...) ---> DELETE FROM t WHERE ...
DELETE FROM t ... ---> INSERT INTO t VALUES (...)
2.2 For Update_Event, swap the SET part and the WHERE part.
For example:
UPDATE t SET cols1 = vals1 WHERE cols2 = vals2
--->
UPDATE t SET cols2 = vals2 WHERE cols1 = vals1
2.3 For a multi-row event, reverse the row sequence, from the last row to the first row.
For example:
DELETE FROM t WHERE id=1; DELETE FROM t WHERE id=2; ...; DELETE FROM t WHERE id=n;
--->
DELETE FROM t WHERE id=n; ...; DELETE FROM t WHERE id=2; DELETE FROM t WHERE id=1;
2.4 Output the events in reverse order, from the last one back to the first one where the mis-operation happened.
For example: