- It turns out that ha_rocksdb::table_flags() can return
HA_PRIMARY_KEY_IN_READ_INDEX for all kinds of tables, as its meaning
is "if there is a PK, the PK columns contribute to the secondary index
tuple". There is no assumption that a given PK column can be decoded
from the secondary index.
(This should probably be fixed in the upstream, too, but I was unable
to construct a testcase showing it is necessary.)
- Following the above, we can undo the init_with_fields() changes in
table.cc. MyRocks calls init_with_fields() from ha_rocksdb::open(),
which sets the index-only read capabilities properly.
==== Description ====
Flashback can roll back instances/databases/tables to an old snapshot.
It is implemented at the server level on top of full-image row-based binary logs (--binlog-row-image=FULL), so it supports all engines.
Currently it is a feature of the mysqlbinlog tool (enabled with the --flashback argument).
Because the flashback binlog events are stored in memory, you should check that your machine has enough memory.
==== New Arguments to mysqlbinlog ====
--flashback (-B)
Makes mysqlbinlog work in flashback mode.
==== New Arguments to mysqld ====
--flashback
Sets up the server to support flashback. This enables the binary log in row mode
and extra logging for the DDL statements needed by the flashback feature.
==== Example ====
Given a table "t" in database "test", we can compare the output with and without "--flashback":
#client/mysqlbinlog /data/mysqldata_10.0/binlog/mysql-bin.000001 -vv -d test -T t --start-datetime="2013-03-27 14:54:00" > /tmp/1.sql
#client/mysqlbinlog /data/mysqldata_10.0/binlog/mysql-bin.000001 -vv -d test -T t --start-datetime="2013-03-27 14:54:00" -B > /tmp/2.sql
Then, by importing the flashback output file (/tmp/2.sql), you can flash the database/table back to the given time (--start-datetime).
If you know the exact position, "--start-position" also works: mysqlbinlog will output flashback logs that roll the data back to the "--start-position" position.
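To apply the flashback, import the generated file back into the server, e.g. (assuming the client is run from the same build tree; adjust the path and credentials as needed):
#client/mysql < /tmp/2.sql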
==== Implementation ====
1. As we know, if binlog_format is ROW (with binlog-row-image=FULL in 10.1 and later), all column values are stored in the row events, so we can get the data as it was before the mis-operation.
2. The transformation then does the following:
2.1 Change Event Type, INSERT->DELETE, DELETE->INSERT.
For example:
INSERT INTO t VALUES (...) ---> DELETE FROM t WHERE ...
DELETE FROM t ... ---> INSERT INTO t VALUES (...)
2.2 For an Update_Event, swap the SET part and the WHERE part.
For example:
UPDATE t SET cols1 = vals1 WHERE cols2 = vals2
--->
UPDATE t SET cols2 = vals2 WHERE cols1 = vals1
2.3 For a multi-row event, reverse the row sequence, from the last row to the first row.
For example:
DELETE FROM t WHERE id=1; DELETE FROM t WHERE id=2; ...; DELETE FROM t WHERE id=n;
--->
DELETE FROM t WHERE id=n; ...; DELETE FROM t WHERE id=2; DELETE FROM t WHERE id=1;
2.4 Output the events in reverse order, from the last one back to the first one after the point where the mis-operation happened. A combined example follows below.
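For example, combining rules 2.1-2.4 (an illustrative statement-level view with hypothetical data; the actual transformation operates on row events, not on SQL text):
-- Original operations, as logged:
INSERT INTO t VALUES (1), (2);
UPDATE t SET a = 10 WHERE a = 1;
DELETE FROM t WHERE a = 2;
-- Flashback output: events reversed, types inverted, UPDATE images swapped,
-- and the rows of the multi-row INSERT undone in reverse order:
INSERT INTO t VALUES (2);
UPDATE t SET a = 1 WHERE a = 10;
DELETE FROM t WHERE a = 2;
DELETE FROM t WHERE a = 1;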
Most notably, this includes MDEV-11623, which provides a fix and
an upgrade procedure for the InnoDB file format incompatibility
that is present in MariaDB Server 10.1.0 through 10.1.20.
In other words, this merge should address
MDEV-11202 InnoDB 10.1 -> 10.2 migration does not work
When a query containing a WITH clause is printed by the EXPLAIN
EXTENDED command, there should not be any data expansion in
the query specifications of the WITH elements of this WITH
clause.
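A minimal illustration (hypothetical table and CTE names):
CREATE TABLE t1 (a INT);
EXPLAIN EXTENDED
WITH cte AS (SELECT a FROM t1 WHERE a > 10)
SELECT * FROM cte;
SHOW WARNINGS;  -- the reconstructed query should keep the definition of
                -- cte as written, not expand it into the main query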
MySQL 5.7 allows temporary tables to be created in ROW_FORMAT=COMPRESSED.
The usefulness of this is questionable. WL#7899 in MySQL 8.0.0
prevents the creation of such compressed tables, so that all InnoDB
temporary tables will be located inside the predefined
InnoDB temporary tablespace.
Pick up and adjust some tests from MySQL 5.7 and 8.0.
dict_tf_to_fsp_flags(): Remove the parameter is_temp.
fsp_flags_init(): Remove the parameter is_temporary.
row_mysql_drop_temp_tables(): Remove. There cannot be any temporary
tables in InnoDB. (This never removed #sql* tables in the datadir
which were created by DDL.)
dict_table_t::dir_path_of_temp_table: Remove.
create_table_info_t::m_temp_path: Remove.
create_table_info_t::create_options_are_invalid(): Do not allow
ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE for temporary tables.
create_table_info_t::innobase_table_flags(): Do not unnecessarily
prevent CREATE TEMPORARY TABLE with SPATIAL INDEX.
(MySQL 5.7 does allow this.)
fil_space_belongs_in_lru(): The only FIL_TYPE_TEMPORARY tablespace
is never subjected to closing least-recently-used files.
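A sketch of the intended behavior after this change (exact error or warning text may differ):
SET innodb_strict_mode = ON;
CREATE TEMPORARY TABLE t_comp (a INT) ROW_FORMAT=COMPRESSED;
-- now refused (or ignored with a warning when innodb_strict_mode=OFF)
CREATE TEMPORARY TABLE t_geo (g GEOMETRY NOT NULL, SPATIAL INDEX(g));
-- no longer unnecessarily prevented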
In Item_partition_func_safe_string(THD *thd, const char *name_arg,
uint length, CHARSET_INFO *cs= NULL), 'name_arg' is the value
of the string constant and 'length' is the length of this constant,
so length == strlen(name_arg).
check_that_all_fields_are_given_values() relied on write_set,
but was run too early, before triggers had updated the write_set.
Also, when triggers are present, fields might get values conditionally,
so we need to check that all fields are given values for every row; see the sketch below.
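A hypothetical illustration of the conditional case:
CREATE TABLE t1 (a INT, b INT NOT NULL);
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
  SET NEW.b = IF(NEW.a > 0, NEW.a, NEW.b);
-- the trigger assigns b only when a > 0, so whether b was given a value
-- can only be decided per row, after the trigger has run:
INSERT INTO t1 (a) VALUES (1), (-1);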
Gtid_list_log_event::do_apply_event() did not free_root(thd->mem_root).
It can allocate on this root in record_gtid(), and in some scenarios there is
nothing else that does the free_root(), leading to a temporary memory leak until
the SQL thread stops. One scenario is circular replication with only one
active master. The active master receives back only its own events on the slave
connection, all of which are ignored. But whenever the SQL thread catches up with
the IO thread, a Gtid_list_log_event is applied, leading to the leak.
* Remove duplicate lines from tests
* Use thd instead of current_thd
* Remove extra wsrep_binlog_format_names
* Correctly merge union patch from 5.5 wrt duplicate rows.
* Correctly merge SELinux changes into 10.1
It was used for get_datetime_value() and for thd->is_error().
But in fact, get_datetime_value() never used the thd argument, because the
cache ptr argument was NULL, and the thd->is_error() check was not needed
at that place at all.
It used current_thd->alloc() and allocated on the thd's execution arena,
not on table->expr_arena.
Remove THD::arena_for_cached_items, which was temporarily set in
update_virtual_fields() and replaced the THD arena in get_datetime_value().
Instead, set the THD arena to table->expr_arena for the whole duration
of update_virtual_fields().
Item_func_le includes an Arg_comparator. The Arg_comparator remembered
the current_thd during fix_fields() and used that value during
execution to allocate an Item_cache in get_datetime_value().
But for vcols, fix_fields() and val_int() can happen in different threads.
The same bug existed in Item_func_in using in_datetime or cmp_item_datetime;
both also remembered current_thd at fix_fields() time to use it later
for get_datetime_value().
As a fix, these objects no longer remember the current_thd,
and get_datetime_value() uses current_thd at run time. This
should not increase the number of current_thd calls much, as
the Item_cache is created only once anyway.
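A sketch of vcol definitions exercising the affected code paths
(hypothetical table; Item_func_le via '<=', Item_func_in via IN):
CREATE TABLE t1 (
  d DATETIME,
  v1 INT AS (d <= '2001-01-01 00:00:00') VIRTUAL,
  v2 INT AS (d IN ('2001-01-01 00:00:00', '2002-02-02 00:00:00')) VIRTUAL
);
-- fix_fields() for these expressions may run in one thread while the
-- values are later computed in another, so no THD must be cached.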
Fixing Item::decimal_precision() to return at least one digit.
This fixes the problem reported in MDEV.
Also, fixing Item_func_signed::fix_length_and_dec() to reserve
space for at least one digit (plus one character for an optional sign).
This is needed to have CONVERT(expr,SIGNED) and CONVERT(expr,UNSIGNED)
create correct string fields when they appear in string context, e.g.:
CREATE TABLE t1 AS SELECT CONCAT(CONVERT('',SIGNED));
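A possible way to verify the resulting field (hypothetical session):
CREATE TABLE t1 AS SELECT CONCAT(CONVERT('',SIGNED)) AS c;
SHOW CREATE TABLE t1;  -- c should be wide enough for at least one digit
                       -- plus an optional sign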
- Changed the error handler interface so that handlers can change the error
  level in the handler
- Give warnings and errors when calculating virtual columns
- On INSERT/UPDATE, an error is fatal in strict mode
- SELECT and DELETE will only give a warning if a virtual field generates an error
- Added VCOL_UPDATE_FOR_DELETE and VCOL_UPDATE_INDEX_FOR_REPLACE to be able to
  easily detect in update_virtual_fields() whether we should use an error
  handler to mask errors or not. An example of the intended behavior follows below.
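A hypothetical illustration (assuming sql_mode includes
STRICT_TRANS_TABLES and ERROR_FOR_DIVISION_BY_ZERO):
CREATE TABLE t1 (a INT, v INT AS (1/a) PERSISTENT);
INSERT INTO t1 (a) VALUES (0);  -- the division-by-zero error is fatal here
-- For a plain SELECT or DELETE, a failing vcol computation would only
-- raise a warning.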
Found and fixed 2 problems:
- Filesort addon fields didn't mark virtual columns properly
- multi-range-read calculated the vcol bitmap but was not using it.
This caused the wrong vcol field to be calculated on read, which caused the assert.
When updating a table with virtual BLOB columns, the following might
happen:
- an old record is read from the table; it has no virtual blob values
- update_virtual_fields() is run and the vcol blob gets its value into the
  record. But only a pointer to the value is in table->record[0];
  the value itself is in the Field_blob::value String (though it doesn't
  have to be! It can be in the record, if the column is just a copy of
  another column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table, record[1]); the old record is now in record[1]
- fill_record() prepares new values in record[0]; the vcol blob is updated,
  and the new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
  *new* vcol blob value. Or record[1] has a pointer to nowhere if
  Field_blob::value had to realloc. A reproduction sketch follows below.
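A minimal sketch of the scenario (hypothetical data):
CREATE TABLE t1 (b VARCHAR(100), c BLOB AS (b) VIRTUAL);
INSERT INTO t1 (b) VALUES ('old');
UPDATE t1 SET b = 'new';
-- reading the old row computes c into Field_blob::value; preparing the
-- new row overwrites that same buffer, so the saved old record may end
-- up pointing at the new value (or at freed memory after a realloc)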
To fix this, I have introduced a new String object, 'read_value', in
Field_blob. When virtual columns are updated after a row has been read,
the allocated value is stored in 'read_value' instead of 'value'. The
allocated blobs for the new row are stored in 'value' as before.
As a safety precaution, I also made the INSERT DELAYED handling of
blobs more general by using 'value' to store strings instead of the
record. This ensures that virtual functions on delayed insert
work as in the case of a normal insert.
Triggers now properly update the read, write, and vcol maps for used
fields. This means that we don't need VCOL_UPDATE_FOR_READ_WRITE anymore
and there is no need for any other special handling of triggers in
update_virtual_fields().
To be able to test how many times virtual fields are invoked, I also
relaxed the rule so that one can use local (@) variables in DEFAULT and
non-persistent virtual field expressions, as shown below.
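A hypothetical use of the relaxed rule to count vcol evaluations:
SET @cnt = 0;
CREATE TABLE t1 (a INT, v INT AS (@cnt := @cnt + 1) VIRTUAL);
INSERT INTO t1 (a) VALUES (1), (2);
SELECT a, v FROM t1;
SELECT @cnt;  -- the number of times the vcol expression was evaluated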
- MDEV-11621 rpl.rpl_gtid_stop_start fails sporadically in buildbot
- MDEV-11620 rpl.rpl_upgrade_master_info fails sporadically in buildbot
The issue above was probably that the build machine was overworked and the
shutdown took longer than 30 and 10 seconds respectively, which caused MyISAM
tables to be marked as crashed.
Fixed by flushing MyISAM tables before doing a forced shutdown/kill.
I also increased the timeout for forced shutdown from 10 seconds to 60 seconds
to fix other possible issues on slow machines.
Also fixed some compiler warnings.
- privilege_table_io.test didn't properly reset roles_mapping
- Fixed memory allocation problem with CHECK CONSTRAINT, found when
running --valgrind main.check_constraint
- Atomic writes are enabled by default
- Automatically detect if the device supports atomic writes and use them if
  atomic writes are enabled
- Remove the ATOMIC WRITE options from CREATE TABLE
- Atomic write is a device option, not a table option, as the table may
  crash if the media changes
- Add support for SHANNON SSD cards
The problem was with deleting a non-existing .frm file for a storage engine that
doesn't have .frm files (yet).
Fixed by not giving an error for non-existing .frm files for storage engines
that are using discovery.
Also fixed a valgrind suppression related to the given test case.