Commit graph

3415 commits

Author SHA1 Message Date
Andrei Elkin
ca9ed393ef MDEV-13073. rpl.perf_buildin_semisync_issue40 is corrected to expect the Rpl_semi_sync_master_clients value of 1 (ll.307..). Explicit sleeps are converted to wait_xyz. 2017-12-18 22:59:05 +02:00
Andrei Elkin
f279d3c43a MDEV-13073. This part converts the Ali patch's identifiers to the MariaDB standard. Some renaming is also done, as well as whitespace removal. 2017-12-18 13:43:38 +02:00
Andrei Elkin
c0ea3056b6 MDEV-13073. This part adds a new test (a refined version of Ali's)
and the affected result files.

Specifically for rpl_semi_sync_after_sync*, the changed results reflect
the fact that, thanks to the fixes in the dump thread functionality,
there is no longer a zombie thread to kill, nor does such a thread represent
a semisync client (so the counter drops).
2017-12-18 13:43:37 +02:00
Andrei Elkin
e972125f11 MDEV-13073 This part merges the Ali semisync-related changes,
specifically the ack-receiving functionality.
Semisync is turned into static (built-in) server code instead of a plugin,
so its functions are invoked at the same points as RUN_HOOKS.
RUN_HOOKS and the observer interface remain; they are to be removed by a
later patch.

Todo:
  React to the killed status in repl_semisync_master.wait_after_sync(). Currently
  Repl_semi_sync_master::commit_trx does not check the killed status.

  A few bugfixes were found that are present in MySQL, and it is unclear
  whether/how they are covered here. Those include:

  Bug#15985893: GTID SKIPPED EVENTS ON MASTER CAUSE SEMI SYNC TIME-OUTS
  Bug#17932935 CALLING IS_SEMI_SYNC_SLAVE() IN EACH FUNCTION CALL
                 HAS BAD PERFORMANCE
  Bug#20574628: SEMI-SYNC REPLICATION PERFORMANCE DEGRADES WITH A HIGH NUMBER OF THREADS
2017-12-18 13:43:37 +02:00
Monty
2e53b96a0a Moved semisync from a plugin to normal server
Part of MDEV-13073 AliSQL Optimize performance of semisync

Did the following renames to match other similar variables:
key_ss_mutex_LOCK_binlog_      -> key_LOCK_binlog
key_ss_cond_COND_binlog_send_  -> key_COND_binlog_send
COND_binlog_send_              -> COND_binlog_send
LOCK_binlog_                   -> LOCK_binlog

debian/mariadb-server-10.2.install does not install semisync libs.
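
Since semisync is now built into the server, enabling it no longer requires
INSTALL PLUGIN. A minimal sketch (same variable names as the former plugin):

  SET GLOBAL rpl_semi_sync_master_enabled = ON;   -- on the master
  SET GLOBAL rpl_semi_sync_slave_enabled  = ON;   -- on each slave
  SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_clients';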
2017-12-18 13:43:36 +02:00
Vesa Pentti
d91d1c8dbc Test cleanup related to MDEV-12501
* Removing unnecessary --plugin-maturity=unknown definitions from tests
2017-12-16 15:34:48 +00:00
Marko Mäkelä
34841d2305 Merge bb-10.2-ext into 10.3 2017-12-12 09:57:17 +02:00
Vesa Pentti
99bcec295d MDEV-12501 -- set --maturity-level by default
* Note: breaking change; since this commit, a plugin that has
    worked so far might get rejected due to plugin maturity
  * mariabackup is not affected (allows all plugins)
  * VERSION file defines SERVER_MATURITY, which defines the
    corresponding numeric value as SERVER_MATURITY_LEVEL in
    include/mysql_version.h
  * The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
  * Logs a warning if a plugin has maturity lower than
    SERVER_MATURITY_LEVEL
  * Tests suppress the plugin maturity warning
  * Tests use --plugin-maturity=unknown by default so as not to fail
    due to the stricter plugin maturity handling (see the sketch below)
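
A hedged illustration of the new behaviour ('ha_example' is only a
placeholder plugin name):

  SHOW GLOBAL VARIABLES LIKE 'plugin_maturity';
  -- defaults to one level below the server's own maturity
  INSTALL SONAME 'ha_example';
  -- rejected if the plugin declares a maturity below @@plugin_maturity;
  -- a warning is logged when it is merely below the server maturity level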
2017-12-09 23:34:43 +00:00
Marko Mäkelä
976f6fb1b6 Merge bb-10.2-ext into 10.3 2017-12-06 19:36:33 +02:00
Marko Mäkelä
ce07676502 Merge 10.2 into bb-10.2-ext 2017-12-06 19:34:03 +02:00
Marko Mäkelä
1d526f31fb Merge 10.1 into 10.2 2017-12-05 14:23:57 +02:00
Vesa Pentti
5868a184fa Revert "MDEV-12501 -- set --maturity-level by default"
This reverts commit 1af2d7ba23.
2017-12-05 08:49:28 +00:00
Vesa Pentti
1af2d7ba23 MDEV-12501 -- set --maturity-level by default
* Note: breaking change; since this commit, a plugin that has
    worked so far might get rejected due to plugin maturity
  * mariabackup is not affected (allows all plugins)
  * VERSION file defines SERVER_MATURITY, which defines the
    corresponding numeric value as SERVER_MATURITY_LEVEL in
    include/mysql_version.h
  * The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
  * Logs a warning if a plugin has maturity lower than
    SERVER_MATURITY_LEVEL
  * Tests suppress the plugin maturity warning
  * Tests use --plugin-maturity=unknown by default so as not to fail
    due to the stricter plugin maturity handling
2017-12-04 21:12:35 +00:00
Varun Gupta
60c446584c MDEV-7773: Aggregate stored functions
This commit implements aggregate stored functions. The basic idea behind
the feature is:

* Implement a special instruction FETCH GROUP NEXT ROW that will pause
the execution of the stored function. When the instruction is reached,
execution of the initial query resumes "as if" the function returned.
This gives the server the opportunity to advance to the next row in the
result set.

* Stored aggregates behave like regular aggregate functions. The
implementation therefore resides in the class Item_sum_sp. Because it is
an aggregate function, for each new row in the group, the
Item_sum_sp::add() method will be called. This is when execution resumes
and the function does another iteration to "add" one extra element to
the final result.

* When the end of the group is reached, the val_xxx() method will be called for
the item. This case is handled by another execute step for the stored
function, only with a special flag to force a call to the return
handler. See Item_sum_sp::execute() for details.

To allow these pause-and-resume semantics, we must preserve the function
context across executions. This is stored in Item_sp::sp_query_arena only for
aggregate stored functions and has no impact on regular functions.

We also require aggregate stored functions to include the "FETCH GROUP NEXT ROW"
instruction.
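
A minimal sketch of such a function (agg_sum, x, total, t1 and col1/col2 are
illustrative names only):

  CREATE AGGREGATE FUNCTION agg_sum(x INT) RETURNS INT
  BEGIN
    DECLARE total INT DEFAULT 0;
    -- the NOT FOUND handler fires once the group is exhausted
    DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;
    LOOP
      FETCH GROUP NEXT ROW;  -- pauses; execution resumes in Item_sum_sp::add()
      SET total = total + x;
    END LOOP;
  END;

  SELECT col1, agg_sum(col2) FROM t1 GROUP BY col1;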

Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
2017-12-04 13:22:29 +02:00
Monty
d7b0b8ddac MDEV-10688 rpl.rpl_row_log_innodb failed in buildbot
Problem was that a Binlog_checkpoint event can happen at random times.
Fixed by not writing a binlog checkpoint for the rpl_log test.

Other things:
- Removed the unused variable "$keep_gtid_events"
- Added an option for show_binlog_events to skip binlog_checkpoint
2017-12-03 15:21:53 +02:00
Marko Mäkelä
7cb3520c06 Merge bb-10.2-ext into 10.3 2017-11-30 08:16:37 +02:00
Alexander Barkov
5b697c5a23 Merge remote-tracking branch 'origin/10.2' into bb-10.2-ext 2017-11-29 12:06:48 +04:00
Andrei Elkin
c666ca7b1b MDEV-12012. Post-push attempt to catch the rpl_gtid_delete_domain failure seen on P8. The test is made more verbose. 2017-11-23 22:10:31 +02:00
Sergei Golubchik
7f1900705b Merge branch '10.1' into 10.2 2017-11-21 19:47:46 +01:00
Alexander Barkov
4a8039b04e Merge remote-tracking branch 'origin/10.2' into bb-10.2-ext 2017-11-20 11:12:08 +04:00
Andrei Elkin
aae4932775 MDEV-12012/MDEV-11969 Can't remove GTIDs for a stale GTID Domain ID
As reported in MDEV-11969, "there's no way to ditch knowledge" about a
domain that is no longer updated on a server. Besides cluttering the output
in the DBA console, stale domains can prevent the slave from connecting to
the master, as MDEV-12012 witnesses.
Which domains are obsolete must be evaluated by the user (DBA), according
to whether the domain info is still relevant and whether the domain will
ever receive any update.

This patch introduces a method to discard obsolete gtid domains from
the server binlog state. The removal requires that no event group from such
a domain is present in the existing binlog files. If there are any, the
containing logs must first be PURGEd in order for

  FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)

to succeed. Otherwise the command returns an error.

The list of obsolete domains can be computed by intersecting two sets -
the earliest (first) binlog's Gtid_list and the current value of
@@global.gtid_binlog_state - and extracting the domain id components from
the items in the intersection.
The new DELETE_DOMAIN_ID variant of FLUSH still rotates the binlog,
omitting the deleted domains from the active binlog file's Gtid_list.
Note though that when the command is ineffective - none of the domains
requested for deletion exists in the binlog state - no rotation occurs.
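
A hedged sketch of the workflow (file names and domain ids 1,2 are
illustrative only):

  SHOW BINARY LOGS;                                   -- find the earliest file
  SHOW BINLOG EVENTS IN 'master-bin.000001' LIMIT 2;  -- its Gtid_list event
  SELECT @@global.gtid_binlog_state;
  -- domains present in both, and known to be dead, are deletion candidates
  PURGE BINARY LOGS TO 'master-bin.000042';  -- no events from them may remain
  FLUSH BINARY LOGS DELETE_DOMAIN_ID=(1,2);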

Obsolete domain deletion is not harmful to connected slaves as long
as the purge of master-side binlog files is synchronized with FLUSH-DELETE_DOMAIN_ID.
The slaves must have processed the last event from the purged files as usual,
so that they do not later request a gtid from a file that is already gone.
While the command is not replicated (unlike an ordinary FLUSH BINARY LOGS),
slaves, even though they carry extra domains, won't suffer from reconnection
errors, thanks to the master-slave gtid connection protocol allowing the master
to be ignorant of a gtid domain.
Should such a slave be promoted to the master role at failover, it may run
the ex-master's

 FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)

to clean its own binlog state.

NOTES.
  suite/perfschema/r/start_server_low_digest.result
is re-recorded as a consequence of internal parser code changes.
2017-11-15 22:26:32 +02:00
Marko Mäkelä
a48aa0cd56 Merge bb-10.2-ext into 10.3 2017-11-10 16:12:45 +02:00
Monty
bce807f70f Rename some error messages that use MySQL -> MariaDB 2017-11-05 22:23:32 +02:00
Marko Mäkelä
a4948dafcd MDEV-11369 Instant ADD COLUMN for InnoDB
For InnoDB tables, adding, dropping and reordering columns has
required a rebuild of the table and all its indexes. Since MySQL 5.6
(and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing
concurrent modification of the tables.

This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT
and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously,
with only minor changes performed to the table structure. The counter
innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS
is incremented whenever a table rebuild operation is converted into
an instant ADD COLUMN operation.

ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.
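
A hedged usage sketch (table and column names are illustrative):

  CREATE TABLE t (id INT PRIMARY KEY, a VARCHAR(10)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 'x');
  ALTER TABLE t ADD COLUMN b INT;   -- appended last: expected to be instant
  SELECT variable_value FROM information_schema.global_status
   WHERE variable_name = 'INNODB_INSTANT_ALTER_COLUMN';  -- counter incremented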

Some usability limitations will be addressed in subsequent work:

MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY
and ALGORITHM=INSTANT
MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE

The format of the clustered index (PRIMARY KEY) is changed as follows:

(1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT,
and a new field PAGE_INSTANT will contain the original number of fields
in the clustered index ('core' fields).
If instant ADD COLUMN has not been used or the table becomes empty,
or the very first instant ADD COLUMN operation is rolled back,
the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset
to 0 and FIL_PAGE_INDEX.

(2) A special 'default row' record is inserted into the leftmost leaf,
between the page infimum and the first user record. This record is
distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the
same format as records that contain values for the instantly added
columns. This 'default row' always has the same number of fields as
the clustered index according to the table definition. The values of
'core' fields are to be ignored. For other fields, the 'default row'
will contain the default values as they were during the ALTER TABLE
statement. (If the column default values are changed later, those
values will only be stored in the .frm file. The 'default row' will
contain the original evaluated values, which must be the same for
every row.) The 'default row' must be completely hidden from
higher-level access routines. Assertions have been added to ensure
that no 'default row' is ever present in the adaptive hash index
or in locked records. The 'default row' is never delete-marked.

(3) In clustered index leaf page records, the number of fields must
reside between the number of 'core' fields (dict_index_t::n_core_fields
introduced in this work) and dict_index_t::n_fields. If the number
of fields is less than dict_index_t::n_fields, the missing fields
are replaced with the column value of the 'default row'.
Note: The number of fields in the record may shrink if some of the
last instantly added columns are updated to the value that is
in the 'default row'. The function btr_cur_trim() implements this
'compression' on update and rollback; dtuple::trim() implements it
on insert.

(4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new
status value REC_STATUS_COLUMNS_ADDED will indicate the presence of
a new record header that will encode n_fields-n_core_fields-1 in
1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header
always explicitly encodes the number of fields.)

We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for
covering the insert of the 'default row' record when instant ADD COLUMN
is used for the first time. Subsequent instant ADD COLUMN can use
TRX_UNDO_UPD_EXIST_REC.

This is joint work with Vin Chen (陈福荣) from Tencent. The design
that was discussed in April 2017 would not have allowed import or
export of data files, because instead of the 'default row' it would
have introduced a data dictionary table. The test
rpl.rpl_alter_instant is exactly as contributed in pull request .
The test innodb.instant_alter is based on a contributed test.

The redo log record format changes for ROW_FORMAT=DYNAMIC and
ROW_FORMAT=COMPACT are as contributed. (With this change present,
crash recovery from MariaDB 10.3.1 will fail in spectacular ways!)
Also the semantics of higher-level redo log records that modify the
PAGE_INSTANT field is changed. The redo log format version identifier
was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1.

Everything else has been rewritten by me. Thanks to Elena Stepanova,
the code has been tested extensively.

When rolling back an instant ADD COLUMN operation, we must empty the
PAGE_FREE list after deleting or shortening the 'default row' record,
by calling either btr_page_empty() or btr_page_reorganize(). We must
know the size of each entry in the PAGE_FREE list. If rollback left a
freed copy of the 'default row' in the PAGE_FREE list, we would be
unable to determine its size (if it is in ROW_FORMAT=COMPACT or
ROW_FORMAT=DYNAMIC) because it would contain more fields than the
rolled-back definition of the clustered index.

UNIV_SQL_DEFAULT: A new special constant that designates an instantly
added column that is not present in the clustered index record.

len_is_stored(): Check if a length is an actual length. There are
two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.

dict_col_t::def_val: The 'default row' value of the column.  If the
column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.

dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(),
instant_value().

dict_col_t::remove_instant(): Remove the 'instant ADD' status of
a column.

dict_col_t::name(const dict_table_t& table): Replaces
dict_table_get_col_name().

dict_index_t::n_core_fields: The original number of fields.
For secondary indexes and if instant ADD COLUMN has not been used,
this will be equal to dict_index_t::n_fields.

dict_index_t::n_core_null_bytes: Number of bytes needed to
represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).

dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that
n_core_null_bytes was not initialized yet from the clustered index
root page.

dict_index_t: Add the accessors is_instant(), is_clust(),
get_n_nullable(), instant_field_value().

dict_index_t::instant_add_field(): Adjust clustered index metadata
for instant ADD COLUMN.

dict_index_t::remove_instant(): Remove the 'instant ADD' status
of a clustered index when the table becomes empty, or the very first
instant ADD COLUMN operation is rolled back.

dict_table_t: Add the accessors is_instant(), is_temporary(),
supports_instant().

dict_table_t::instant_add_column(): Adjust metadata for
instant ADD COLUMN.

dict_table_t::rollback_instant(): Adjust metadata on the rollback
of instant ADD COLUMN.

prepare_inplace_alter_table_dict(): First create the ctx->new_table,
and only then decide if the table really needs to be rebuilt.
We must split the creation of table or index metadata from the
creation of the dictionary table records and the creation of
the data. In this way, we can transform a table-rebuilding operation
into an instant ADD COLUMN operation. Dictionary objects will only
be added to cache when table rebuilding or index creation is needed.
The ctx->instant_table will never be added to cache.

dict_table_t::add_to_cache(): Modified and renamed from
dict_table_add_to_cache(). Do not modify the table metadata.
Let the callers invoke dict_table_add_system_columns() and if needed,
set can_be_evicted.

dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the
system columns (which will now exist in the dict_table_t object
already at this point).

dict_create_table_step(): Expect the callers to invoke
dict_table_add_system_columns().

pars_create_table(): Before creating the table creation execution
graph, invoke dict_table_add_system_columns().

row_create_table_for_mysql(): Expect all callers to invoke
dict_table_add_system_columns().

create_index_dict(): Replaces row_merge_create_index_graph().

innodb_update_n_cols(): Renamed from innobase_update_n_virtual().
Call my_error() if an error occurs.

btr_cur_instant_init(), btr_cur_instant_init_low(),
btr_cur_instant_root_init():
Load additional metadata from the clustered index and set
dict_index_t::n_core_null_bytes. This is invoked
when table metadata is first loaded into the data dictionary.

dict_boot(): Initialize n_core_null_bytes for the four hard-coded
dictionary tables.

dict_create_index_step(): Initialize n_core_null_bytes. This is
executed as part of CREATE TABLE.

dict_index_build_internal_clust(): Initialize n_core_null_bytes to
NO_CORE_NULL_BYTES if table->supports_instant().

row_create_index_for_mysql(): Initialize n_core_null_bytes for
CREATE TEMPORARY TABLE.

commit_cache_norebuild(): Call the code to rename or enlarge columns
in the cache only if instant ADD COLUMN is not being used.
(Instant ADD COLUMN would copy all column metadata from
instant_table to old_table, including the names and lengths.)

PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields.
This is repurposing the 16-bit field PAGE_DIRECTION, of which only the
least significant 3 bits were used. The original byte containing
PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.

page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT.

page_ptr_get_direction(), page_get_direction(),
page_ptr_set_direction(): Accessors for PAGE_DIRECTION.

page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.

page_direction_increment(): Increment PAGE_N_DIRECTION
and set PAGE_DIRECTION.

rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes,
and assume that heap_no is always set.
Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records,
even if the record contains fewer fields.

rec_offs_make_valid(): Add the parameter 'leaf'.

rec_copy_prefix_to_dtuple(): Assert that the tuple is only built
on the core fields. Instant ADD COLUMN only applies to the
clustered index, and we should never build a search key that has
more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR.
All these columns are always present.

dict_index_build_data_tuple(): Remove assertions that would be
duplicated in rec_copy_prefix_to_dtuple().

rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose
number of fields is between n_core_fields and n_fields.

cmp_rec_rec_with_match(): Implement the comparison between two
MIN_REC_FLAG records.

trx_t::in_rollback: Make the field available in non-debug builds.

trx_start_for_ddl_low(): Remove dangerous error-tolerance.
A dictionary transaction must be flagged as such before it has generated
any undo log records. This is because trx_undo_assign_undo() will mark
the transaction as a dictionary transaction in the undo log header
right before the very first undo log record is being written.

btr_index_rec_validate(): Account for instant ADD COLUMN

row_undo_ins_remove_clust_rec(): On the rollback of an insert into
SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the
last column from the table and the clustered index.

row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(),
trx_undo_update_rec_get_update(): Handle the 'default row'
as a special case.

dtuple_t::trim(index): Omit a redundant suffix of an index tuple right
before insert or update. After instant ADD COLUMN, if the last fields
of a clustered index tuple match the 'default row', there is no
need to store them. While trimming the entry, we must hold a page latch,
so that the table cannot be emptied and the 'default row' be deleted.

btr_cur_optimistic_update(), btr_cur_pessimistic_update(),
row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low():
Invoke dtuple_t::trim() if needed.

row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling
row_ins_clust_index_entry_low().

rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number
of fields to be between n_core_fields and n_fields. Do not support
infimum,supremum. They are never supposed to be stored in dtuple_t,
because page creation nowadays uses a lower-level method for initializing
them.

rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the
number of fields.

btr_cur_trim(): In an update, trim the index entry as needed. For the
'default row', handle rollback specially. For user records, omit
fields that match the 'default row'.

btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
Skip locking and adaptive hash index for the 'default row'.

row_log_table_apply_convert_mrec(): Replace 'default row' values if needed.
In the temporary file that is applied by row_log_table_apply(),
we must identify whether the records contain the extra header for
instantly added columns. For now, we will allocate an additional byte
for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table
has been subject to instant ADD COLUMN. The ROW_T_DELETE records are
fine, as they will be converted and will only contain 'core' columns
(PRIMARY KEY and some system columns) that are converted from dtuple_t.

rec_get_converted_size_temp(), rec_init_offsets_temp(),
rec_convert_dtuple_to_temp(): Add the parameter 'status'.

REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED:
An info_bits constant for distinguishing the 'default row' record.

rec_comp_status_t: An enum of the status bit values.

rec_leaf_format: An enum that replaces the bool parameter of
rec_init_offsets_comp_ordinary().
2017-10-06 09:50:10 +03:00
Marko Mäkelä
2c1067166d Merge bb-10.2-ext into 10.3 2017-10-04 08:24:06 +03:00
Alexander Barkov
6857cb57fe MDEV-13967 Parameter data type control for Item_long_func
- Implementing stricter data type control for Item_long_func descendants
- Cleanup: renaming Type_handler::can_return_str_ascii() to can_return_text()
  (a better name).
2017-10-01 00:30:58 +04:00
Alexander Barkov
ca38b93e35 MDEV-13965 Parameter data type control for Item_longlong_func 2017-09-29 22:44:07 +04:00
Marko Mäkelä
4a32e2395e Merge bb-10.2-ext into 10.3 2017-09-25 22:05:56 +03:00
Sergei Golubchik
1320ad5b92 Merge branch '10.2' into bb-10.2-ext 2017-09-23 20:22:30 +02:00
Sergei Golubchik
f1ce69f3a9 Merge branch '10.1' into 10.2
But without f4f48e06215..f8a800bec81 - fixes for MDEV-12672
and related issues. A 10.2-specific fix follows...
2017-09-22 02:27:00 +02:00
Sergei Golubchik
f4f48e0621 MDEV-12672 Replicated TIMESTAMP fields given wrong value near DST change
Implement a special Copy_field method for timestamps, that copies
timestamps without converting them to MYSQL_TIME (the conversion
is lossy around DST change dates).
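
A sketch of why the MYSQL_TIME round-trip is ambiguous (assumes the time zone
tables are loaded; Europe/Berlin left DST on 2017-10-29):

  SET time_zone = '+00:00';
  SET @t1 = UNIX_TIMESTAMP('2017-10-29 00:30:00');  -- 00:30 UTC
  SET @t2 = @t1 + 3600;                             -- 01:30 UTC
  SET time_zone = 'Europe/Berlin';
  SELECT FROM_UNIXTIME(@t1), FROM_UNIXTIME(@t2);    -- both show 02:30:00 local

Two distinct TIMESTAMP values map to the same MYSQL_TIME, so copying through
MYSQL_TIME cannot preserve the original value around the DST change.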
2017-09-21 22:03:21 +02:00
Sergei Golubchik
2e3a16e366 Merge branch '10.0' into 10.1 2017-09-21 22:02:21 +02:00
Marko Mäkelä
e3d44f5d62 Merge bb-10.2-ext into 10.3 2017-09-21 08:12:19 +03:00
Marko Mäkelä
72a8024217 After-merge fix: Adjust some results. 2017-09-21 07:58:08 +03:00
Sergei Golubchik
b7434bacbd include/master-slave.inc must always be included last 2017-09-20 18:17:50 +02:00
Marko Mäkelä
fc3b1a7d2f Merge 10.2 into bb-10.2-ext 2017-09-20 17:47:49 +03:00
Vicențiu Ciorbaru
d66856c4f7 Update testcase post merge 2017-09-20 00:46:08 +03:00
Vicențiu Ciorbaru
22c322c649 Merge branch '10.1' into 10.2 2017-09-19 12:43:02 +03:00
Vicențiu Ciorbaru
ec6042bda0 Merge branch '10.0' into 10.1 2017-09-19 12:06:50 +03:00
Sergei Golubchik
6670b4e58c MDEV-13712 Spelling errors in the error message 2017-09-18 10:12:23 +02:00
Alexander Barkov
434e283507 MDEV-13685 Can not replay binary log due to Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8mb4_general_ci,COERCIBLE) for operation 'concat' 2017-09-15 12:25:06 +04:00
Sergey Vojtovich
dd4e9cdded Get rid of Field::do_save_field_metadata()
It doesn't serve any purpose but generates an extra virtual function call.
2017-08-31 15:03:12 +04:00
Sergei Golubchik
bb8e99fdc3 Merge branch 'bb-10.2-ext' into 10.3 2017-08-26 00:34:43 +02:00
Sergei Golubchik
27412877db Merge branch '10.2' into bb-10.2-ext 2017-08-25 10:25:48 +02:00
Sergei Golubchik
77c41fa725 small cleanup of rpl.rpl_stop_slave 2017-08-24 01:05:54 +02:00
Sergei Golubchik
cb1e76e4de Merge branch '10.1' into 10.2 2017-08-17 11:38:34 +02:00
Marko Mäkelä
620ba97cfc Merge remote-tracking branch 'origin/bb-10.2-ext' into 10.3 2017-08-09 12:59:39 +03:00
Sergei Golubchik
8e8d42ddf0 Merge branch '10.0' into 10.1 2017-08-08 10:18:43 +02:00
Alexander Barkov
988a9daa94 Merge remote-tracking branch 'origin/10.2' into bb-10.2-ext
Conflicts:
	mysql-test/r/func_json.result
	mysql-test/r/win.result
	mysql-test/t/func_json.test
	mysql-test/t/win.test
	sql/share/errmsg-utf8.txt
	storage/rocksdb/ha_rocksdb.cc
	storage/rocksdb/mysql-test/rocksdb/r/tbl_opt_data_index_dir.result
2017-08-07 21:35:34 +04:00
Sergei Golubchik
496cea45e2 update error messages for 10.0 2017-07-27 12:42:40 +02:00