When an attempt to connect to the remote server fails, Spider retries
the connection up to 1000 times or until an attempt succeeds. If the
remote server remains unavailable, this is perceived as a hang.
I have introduced changes in Spider's table status handler to fix this problem.
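A minimal self-contained sketch of the bounded-retry idea behind the
fix; the names, the retry bound and the error handling are hypothetical
and do not reflect Spider's actual code:

  struct Remote
  {
    int failures_left= 2;
    bool connect() { return --failures_left < 0; }  // stub: succeeds on 3rd try
  };

  static const int MAX_CONNECT_RETRIES= 3;  // hypothetical bound

  int connect_with_bounded_retries(Remote &srv)
  {
    for (int attempt= 0; attempt < MAX_CONNECT_RETRIES; attempt++)
      if (srv.connect())
        return 0;  // connected
    return 1;      // give up and report an error instead of hanging
  }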
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Merged From:
Branch bb-10.3-MDEV-15712.
Unexpected data truncation could occur when storing data into a
compressed BLOB column that uses a multi-byte, variable-length
character set. The reason was that an incorrect character-count limit
was enforced for such blobs.
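A self-contained illustration of the pitfall (not the server code):
in utf8 a character may occupy up to 3 bytes, so a byte count must not
be enforced as if it were a character count:

  #include <cstdio>
  #include <cstring>

  int main()
  {
    const char *s= "\xE2\x82\xAC\xE2\x82\xAC\xE2\x82\xAC"; // 3 chars, 9 bytes
    size_t byte_len= strlen(s);     // 9
    size_t char_len= byte_len / 3;  // 3: each character here is 3 bytes
    printf("bytes=%zu chars=%zu\n", byte_len, char_len);
    // Enforcing a 3-character column limit against byte_len would
    // wrongly truncate a value that actually fits.
    return 0;
  }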
Added a --skip-test-db option to mysql_install_db. If specified, no
test database is created and no related grants are issued.
Removed the --skip-auth-anonymous-user option of mysql_install_db.
It is now covered by --skip-test-db.
Dropped some Debian patches that did the same.
Removed unused make_win_bin_dist.1, make_win_bin_dist and
mysql_install_db.pl.in.
Implement innodb_flush_method as an enum parameter in Mariabackup,
instead of ignoring the option and hard-wiring it to a default value.
xb0xb.h: Remove. Only xtrabackup.cc refers to the enum parameters.
innodb_flush_method_names[], innodb_flush_method_typelib[]:
Define as non-static, so that mariabackup can share the definitions.
srv_file_flush_method: Change the type to ulong, to match the
assignment in init_one_value() and handle_options() in mariabackup.
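A hedged sketch of how such an enum parameter is typically declared
with a TYPELIB within the server sources (typelib.h provides TYPELIB,
NullS and array_elements); the value list and initializers here are
illustrative, not verbatim:

  const char *innodb_flush_method_names[]= {
    "fsync", "O_DSYNC", "littlesync", "nosync", "O_DIRECT",
    "O_DIRECT_NO_FSYNC", NullS
  };
  TYPELIB innodb_flush_method_typelib= {
    array_elements(innodb_flush_method_names) - 1,
    "innodb_flush_method_typelib",
    innodb_flush_method_names, NULL
  };
  ulong srv_file_flush_method;  // non-static, shared with mariabackup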
The checks that used to be enabled by the flags
UNIV_AHI_DEBUG, UNIV_DDL_DEBUG, UNIV_DEBUG_FILE_ACCESSES
were already enabled in debug builds. So, there is no point
in setting these.
Only UNIV_ZIP_DEBUG is set independently of the debug build.
Allow WITH_INNODB_EXTRA_DEBUG to be set for non-debug builds as well.
Currently it only implies UNIV_ZIP_DEBUG, that is, extra validation
for operations on ROW_FORMAT=COMPRESSED tables.
page_zip_validate_low(): Allow the code to be built on a non-debug server.
buf_LRU_block_remove_hashed(): Allow the code to be built without
WITH_INNODB_AHI.
- Removed the test HA_FT_WTYPE == HA_KEYTYPE_FLOAT, as this never
  worked: HA_KEYTYPE_FLOAT is an enum value, which the preprocessor
  cannot see (unknown identifiers evaluate to 0 in #if).
- Define HA_FT_MAXLEN to 126 (it was tested before but never defined)
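A minimal demonstration of why such an #if test can never work as
intended (the enum value here is illustrative):

  #include <stdio.h>

  enum ha_base_keytype { HA_KEYTYPE_FLOAT= 4 };  /* invisible to #if */
  #define HA_FT_WTYPE HA_KEYTYPE_FLOAT

  int main(void)
  {
  #if HA_FT_WTYPE == HA_KEYTYPE_FLOAT  /* expands to 0 == 0: always true */
    puts("taken regardless of the enum's actual value");
  #endif
    return 0;
  }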
Changed all my_strcasecmp() calls that use lexical keywords to use
lex_string_eq(). This is faster because strcasecmp() is only called
for strings of equal length.
Removed the unused function lex_string_syseq().
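A minimal sketch of the idea (the real helper lives in the SQL layer
and compares using the right character set; this is simplified):

  #include <cstddef>
  #include <strings.h>

  struct LEX_CSTRING { const char *str; size_t length; };

  // The cheap length check filters out strings of different lengths,
  // so the case-insensitive scan runs only when the lengths match.
  static inline bool lex_string_eq(const LEX_CSTRING &a, const LEX_CSTRING &b)
  {
    return a.length == b.length && strncasecmp(a.str, b.str, a.length) == 0;
  }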
- test-alter now correctly drops all columns
- test-alter has a new test that times adding columns in the middle of
  a table
- test-insert has a new test to check updates that don't change data
- test-insert: update_with_key_prefix didn't change data; now fixed
The assertion would fail with the following trace:
rec_init_offsets_comp_ordinary(..., format=REC_LEAF_COLUMNS_ADDED)
rec_init_offsets()
rec_get_offsets_func()
rec_copy_prefix_to_dtuple()
dict_index_build_data_tuple()
btr_pcur_restore_position_func()
When btr_cur_store_position() had stored pcur->old_rec, the table
contained instantly added columns. The table was emptied
(dict_index_t::remove_instant() invoked) between the 'store' and 'restore'
operations, causing the assertion to fail. Here is a non-deterministic
test case to repeat the scenario:
--source include/have_innodb.inc
--connect (con1,localhost,root,,test)
CREATE TABLE t1 (pk INT PRIMARY KEY) ENGINE = InnoDB;
INSERT INTO t1 VALUES (0);
ALTER TABLE t1 ADD COLUMN a INT;
ALTER TABLE t1 ADD UNIQUE KEY (a);
DELETE FROM t1;
send INSERT INTO t1 VALUES (1,0),(2,0);
--connection default
DELETE FROM t1; # the assertion could fail here
DROP TABLE t1;
--disconnect con1
The fix is to normalize the pcur->old_rec so that when the
record prefix is stored, it will always be in the plain format.
This can be done, because the record prefix never includes any
instantly added columns. (It can only include key columns, which
can never be instantly added.)
rec_copy_prefix_to_buf(): Convert REC_STATUS_COLUMNS_ADDED to
REC_STATUS_ORDINARY format.
dict_index_copy_rec_order_prefix(): Avoid invoking
dict_index_get_n_unique_in_tree_nonleaf().
create_index(): Simplify code for creating SPATIAL or FULLTEXT index.
rec_copy_prefix_to_buf(): Skip the loop for SPATIAL INDEX.
Only allocate n_uniq elements for offsets, instead of index->n_fields.
(Statistics are never computed on spatial indexes, so we never need
to access more fields even in rec_copy_prefix_to_buf().)
There is only one log_sys and only one log_sys.log.
log_t::files::create(): Replaces log_init().
log_t::files::close(): Replaces log_group_close(), log_group_close_all().
fil_close_log_files(): if (free) log_sys.log_close();
The callers that passed free=true used to call log_group_close_all().
log_header_read(): Replaces log_group_header_read().
log_t::files::file_header_bufs_ptr: Use a single allocation.
log_t::files::file_header_bufs[]: Statically allocate the pointers.
log_t::files::set_fields(): Replaces log_group_set_fields().
log_t::files::calc_lsn_offset(): Replaces log_group_calc_lsn_offset().
Simplify the computation by using fewer variables.
log_t::files::read_log_seg(): Replaces log_group_read_log_seg().
log_sys_t::complete_checkpoint(): Replaces log_io_complete_checkpoint().
fil_aio_wait(): Move the logic from log_io_complete().
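A generic sketch of the single-allocation pattern behind
file_header_bufs (sizes and names illustrative):

  #include <cstdlib>
  #include <cstddef>

  enum { MAX_LOG_FILES= 100, HDR_SIZE= 2048 };  // illustrative sizes

  static unsigned char *pool;  // one heap allocation for all headers
  static unsigned char *file_header_bufs[MAX_LOG_FILES];  // static pointers

  void create_header_bufs(size_t n_files)
  {
    pool= static_cast<unsigned char*>(calloc(n_files, HDR_SIZE));
    for (size_t i= 0; i < n_files; i++)
      file_header_bufs[i]= pool + i * HDR_SIZE;  // carve up the block
  }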
There is only one redo log subsystem in InnoDB. Allocate the object
statically, to avoid unnecessary dereferencing of the pointer.
log_t::create(): Renamed from log_sys_init().
log_t::close(): Renamed from log_shutdown().
log_t::checkpoint_buf_ptr: Remove. Allocate log_t::checkpoint_buf
statically.
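A minimal sketch of the resulting pattern (member details invented
for brevity):

  struct log_t
  {
    bool m_initialised= false;
    void create() { m_initialised= true;  /* set up buffers, mutexes */ }
    void close()  { m_initialised= false; /* tear them down */ }
  };

  log_t log_sys;  // the only instance; accessed directly, no dereference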
Bind more InnoDB parameters directly to MYSQL_SYSVAR and
remove "shadow variables".
innodb_change_buffering: Declare as ENUM, not STRING.
innodb_flush_method: Declare as ENUM, not STRING (see the sketch below).
innodb_log_buffer_size: Bind directly to srv_log_buffer_size,
without rounding it to a multiple of innodb_page_size.
LOG_BUFFER_SIZE: Remove.
SysTablespace::normalize_size(): Renamed from normalize().
innodb_init_params(): A new function to initialize and validate
InnoDB startup parameters.
innodb_init(): Renamed from innobase_init(). Invoke innodb_init_params()
before actually trying to start up InnoDB.
srv_start(bool): Renamed from innobase_start_or_create_for_mysql().
Added the input parameter create_new_db.
SRV_ALL_O_DIRECT_FSYNC: Define only for _WIN32.
xb_normalize_init_values(): Merge to innodb_init_param().
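A hedged sketch of binding innodb_flush_method directly as an enum
system variable; the flags, description and default shown here are
illustrative:

  static MYSQL_SYSVAR_ENUM(flush_method, srv_file_flush_method,
    PLUGIN_VAR_RQCMDARG | PLUGIN_VAR_READONLY,
    "With which method to flush data.",
    NULL, NULL,                     // no check/update functions needed
    0,                              // default: the first enum value
    &innodb_flush_method_typelib);  // TYPELIB shared with mariabackup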
This reverts commit 72deed5988
which was merged as commit 29d4ac2ceb.
The change caused regressions, such as mysql-test-run failing
to run at all with --valgrind, or aborting on Windows on the
first test failure.
The code passing positions in the query to constructors of
Rewritable_query_parameter descendants (e.g. Item_splocal)
was not reliable. It used various Lex_input_stream methods:
- get_tok_start()
- get_tok_start_prev()
- get_tok_end()
- get_ptr()
to find positions of the recently scanned tokens.
The challenge was mostly to choose between get_tok_start()
and get_tok_start_prev(), taking into account the current
grammar (depending on whether lookahead takes place before
or after we read the positions in each particular rule).
But this approach did not work at all in combination
with token contractions, when MYSQLlex() translates
two tokens into one token ID, for example:
WITH ROLLUP -> WITH_ROLLUP_SYM
After such a contraction, the tokenizer is already one more token ahead.
So in the query fragment:
"GROUP BY d, spvar WITH ROLLUP"
get_tok_start() points to "ROLLUP".
get_tok_start_prev() points to "WITH".
As a result, it was "WITH" that was erroneously replaced
with NAME_CONST() instead of "spvar".
This patch modifies the code to do it a different way.
Changes:
1. For keywords and identifiers, the tokenizer now
returns a LEX_CSTRING pointing directly to the query
fragment. So query positions are now just available using:
- $1.str - for the beginning of a token
- $1.str+$1.length - for the end of a token
2. Identifiers are not allocated on the THD memory root
in the tokenizer any more. Allocation is now done
on later stages, in methods like LEX::create_item_ident().
3. Two LEX_CSTRING based structures were added (see the sketch
after this list):
- Lex_ident_cli_st - used to store the "client side"
identifier representation, pointing to the
query fragment. Note, these identifiers
are encoded in @@character_set_client
and can have broken byte sequences.
- Lex_ident_sys_st - used to store the "server side"
identifier representation, pointing to the
THD allocated memory. This representation
guarantees that the identifier was checked
for being well-formed, and is encoded in utf8.
4. To distinguish between two identifier types
in the grammar, two Bison types were added:
<ident_cli> and <ident_sys>
5. All non-reserved keywords were marked as
being of the type <ident_cli>.
All reserved keywords are still of the type NONE.
6. All curly brackets in rules collecting
non-reserved keywords into non-terminal
symbols were removed, e.g.:
Was:
keyword_sp_data_type:
BIT_SYM {}
| BOOLEAN_SYM {}
Now:
keyword_sp_data_type:
BIT_SYM
| BOOLEAN_SYM
It is important NOT to have brackets here! With no explicit
action, Bison applies its default action $$ = $1, which makes
sure that the underlying Lex_ident_cli_st structure correctly
passes up to the calling rule; an empty {} action would
prevent that.
7. The code to scan identifiers and keywords
was moved from lex_one_token() into new
Lex_input_stream methods:
scan_ident_sysvar()
scan_ident_start()
scan_ident_middle()
scan_ident_delimited()
This was done to:
- get rid of an enormous number of references to &yylval->lex_str
- and remove a lot of references like lip->xxx
8. The allocating functionality which puts identifiers on the
THD memory root now resides in methods of Lex_ident_sys_st,
and in THD::to_ident_sys_alloc().
get_quoted_token() was removed.
9. Cleanup: check_simple_select() was moved to LEX as a method.
10. Cleanup: Some more functionality was moved from *.yy
into new LEX methods:
make_item_colon_ident_ident()
make_item_func_call_generic()
create_item_qualified_asterisk()
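A hedged sketch of the two identifier structures from item 3; the
real definitions carry more members and methods:

  struct LEX_CSTRING { const char *str; size_t length; };

  // "Client side": points into the query text, encoded in
  // @@character_set_client; may contain broken byte sequences.
  struct Lex_ident_cli_st : public LEX_CSTRING { };

  // "Server side": points to THD-allocated memory; checked to be
  // well-formed, and encoded in utf8.
  struct Lex_ident_sys_st : public LEX_CSTRING { };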
The test assumes that `@@global.auto_increment_offset` is equal to
`wsrep_local_index + 1`, which is normally the case when galera runs
with the option `wsrep_auto_increment_control` enabled.
However, if some prior test performs a restart of a server, then
`wsrep_local_index` may change, and galera will set the value of
`auto_increment_offset` accordingly. If `auto_increment_offset`
changes during a test run, mtr complains. To avoid that, tests that
perform restarts include `auto_increment_offset_save.inc` and
`auto_increment_offset_restore.inc`, which reset the value of
`auto_increment_offset`. When that happens, `auto_increment_offset`
is no longer equal to `wsrep_local_index + 1`, and the test fails.
To avoid this problem, simply check that the offsets are different
on the nodes that compose the cluster.
trx_undof_page_add_undo_rec_log(): Write the MLOG_UNDO_INSERT
record instead of the equivalent MLOG_2BYTES and MLOG_WRITE_STRING.
This essentially reverts commit 9ee8917dfd.
In MariaDB 10.3, I attempted to simplify the crash recovery code
by making use of lower-level redo log records. It turns out that
we must keep the redo log parsing code in order to allow crash-upgrade
from older MariaDB versions (MDEV-14848).
Now, it further turns out that the InnoDB redo log record format is
suboptimal for logging multiple changes to a single page. This simple
change to the redo logging of undo log significantly affects the
INSERT and UPDATE performance.
Essentially, we wrote
(space_id,page_number,MLOG_2BYTES,2 bytes)
(space_id,page_number,MLOG_WRITE_STRING,N+4 bytes)
instead of the previously written
(space_id,page_number,MLOG_UNDO_INSERT,N+2 bytes)
that is, one extra record header and 4 extra payload bytes
for each appended undo log record.
The added redo log volume caused a single-threaded INSERT
(without innodb_adaptive_hash_index) of
1,000,000 rows to consume 11 seconds instead of 9 seconds,
and a subsequent UPDATE of 30,000,000 rows to consume 64 seconds
instead of 58 seconds. If we omitted all redo logging for the
undo log, the INSERT would consume only 4 seconds.
The trx_t::undo_mutex covered both some main-memory data structures
(trx_undo_t) and access to undo pages. The trx_undo_t is only
accessed by the thread that is associated with a running transaction.
Likewise, each transaction has its private set of undo pages.
The thread that is associated with an active transaction may
lock multiple undo pages concurrently, but no other thread may
lock multiple pages of a foreign transaction.
Concurrent access to the undo logs of an active transaction is possible,
but trx_undo_get_undo_rec_low() only locks one undo page at a time,
without ever holding any undo_mutex.
It seems that the trx_t::undo_mutex would have been necessary if
multi-threaded execution or rollback of a single transaction
had been implemented in InnoDB.