RENAME TABLE code tries to update EITS statistics. It hung because
it used an index on (db_name, table_name) to find the table and attempted
to update those same columns at the same time. The fix is to do what the
SQL UPDATE statement does when it updates an index that it also uses for scanning:
- first, buffer the rowids of the rows to be updated,
- then make a second pass to actually update the rows.
Also fixed the call to rename_table_in_stat_tables() in sql_rename.cc
to pass the correct new database (before, it passed the old db_name, so
cross-database renames were not handled correctly).
Variant #2, with review feedback addressed.
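For illustration, the EITS rows live in tables keyed by (db_name, table_name),
so the rename is logically equivalent to an UPDATE whose WHERE clause scans the
very index its SET clause modifies. A minimal sketch, assuming example names
db1.t1 and db2.t2:

  -- roughly what RENAME TABLE db1.t1 TO db2.t2 does to one of the EITS tables
  UPDATE mysql.table_stats
     SET db_name = 'db2', table_name = 't2'
   WHERE db_name = 'db1' AND table_name = 't1';
  -- the WHERE clause scans the same (db_name, table_name) index that the
  -- SET clause changes, hence the need to buffer rowids and update in a
  -- second pass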
rpl/rpl_mdev382 ; Wrong replace in show_binlog_events2.inc
binlog/database ; Different error on Solaris if file exists
mroonga/repair_table_no_index_file ; Different system error on Solaris
partition_not_blackhole ; Different error on Solaris
partition_myisam ; Different error on Solaris
Some other failures in mroonga were because have_32bit.inc didn't correctly
detect 64 bits on Solaris. Fixed by using DEFAULT_MACHINE instead of
MACHINE_TYPE for Sys_version_compile_machine.
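For reference, the value backing Sys_version_compile_machine (which
have_32bit.inc relies on) can be inspected with:

  SHOW VARIABLES LIKE 'version_compile_machine';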
Make it possible to change the feedback plugin wait intervals
* only in debug builds
* and to force the feedback report to be ignored
Update the test to use this feature.
In domain ID based filtering, a flag is used to filter out
the events that belong to a particular domain. This flag gets
set when the IO thread receives a GTID_EVENT for a domain on the
filter list, and it is reset at the last event in the GTID group.
The reset, however, was wrongly done before the decision whether to
write the event to the relay log or filter it out was made. As a result,
the last event in the group would always pass through the filter.
Fixed by deferring the reset logic. Also added a test case.
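For context, a minimal sketch of how a slave enables domain-ID based
filtering (the domain id 1 is only an example value):

  STOP SLAVE;
  CHANGE MASTER TO MASTER_USE_GTID=slave_pos, IGNORE_DOMAIN_IDS=(1);
  START SLAVE;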
The problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
starting new commits, then waits for running commits to complete. But
in-order parallel replication needs commits to happen in a particular
order, so this can easily deadlock.
To fix this problem, this patch introduces a way to temporarily pause
the parallel replication worker threads. Before starting FTWRL, we let
all worker threads complete in-progress transactions, and then
wait. Then we proceed to take the global read lock. Once the lock is
obtained, we unpause the worker threads. Now commits are blocked from
starting by the global read lock, so the deadlock will no longer occur.
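A minimal sketch of the sequence that could previously deadlock, assuming a
slave configured for in-order parallel replication (the thread count is an
example value):

  STOP SLAVE;
  SET GLOBAL slave_parallel_threads = 4;
  START SLAVE;
  -- e.g. issued by a backup tool; before this fix it could deadlock against
  -- worker threads that must commit in a fixed order
  FLUSH TABLES WITH READ LOCK;
  UNLOCK TABLES;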
While querying INFORMATION_SCHEMA, the check for a table's engine
only used the table name, not the schema name; so if there were different
rows with the same table name, a wrong one could be retrieved.
The result of the check affected the decision whether the contents
of the table should be dumped and whether the DELAYED option could be used.
Fixed by adding a clause for table_schema to the query.
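A hedged sketch of the corrected lookup (the exact query text in the client
code may differ); the table_schema condition is the part added by the fix:

  SELECT engine FROM information_schema.tables
   WHERE table_name   = 't1'
     AND table_schema = 'db1';  -- previously missing, so a same-named table
                                -- from another schema could match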
Not printing the value" with binlog-row-image=minimal"
Merged Rows_log_event::print_verbose_one_row() and log_event_print_value()
with MySQL 5.7
Added a flush after writing Table_map_log_event() to fix the wrong order of
lines in the output. This causes a lot of changes in some test results.
Patch from Daniel Black:
- Change the charset of mysql.column_stats.{min_value, max_value} from
utf8_bin varchar to varbinary
- Adjust the code that saves/reads the data accordingly.
- Also provide an upgrade statement in mysql_system_tables_fix.sql
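A hedged sketch of such an upgrade statement (the column lengths are
assumptions; mysql_system_tables_fix.sql holds the authoritative definition):

  ALTER TABLE mysql.column_stats
    MODIFY min_value varbinary(255) DEFAULT NULL,
    MODIFY max_value varbinary(255) DEFAULT NULL;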
The bitmap implementation defines two template Bitmap classes: one
optimized for 64-bit-wide (the default) bitmaps, and one used for
all other widths.
In order to optimize the computations, the Bitmap<64> class defines its
own member functions for bitmap operations; the other one, however,
relies on mysys' bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the non-64-bit Bitmap class, intersect() wrongly reset the
received bitmap while initialising a new local bitmap structure
(bitmap_init() clears the bitmap buffer), so the received bitmap was
getting cleared.
Fixed by initialising the local bitmap structure with a temporary
buffer and later copying the received bitmap into the initialised
bitmap structure.
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when supplied
on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger a build failure if the MAX_INDEXES value is
greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
  skipping of tests for which the output differs with different
  MAX_INDEXES values (a sketch follows this list).
* Introduced include/max_indexes.inc, which gets modified by cmake
  to reflect the MAX_INDEXES value used to build the server. This file
  simply sets an mtr variable '$max_indexes' to the MAX_INDEXES
  value, which is then consumed by the include files introduced above.
* Portions of some tests that depend on the MAX_INDEXES value have been
  moved to separate test files.
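A hedged sketch of how such an include file could be structured (the real
files under mysql-test/include may differ):

  # include/have_max_indexes_64.inc (sketch)
  --source include/max_indexes.inc
  if (`SELECT '$max_indexes' <> '64'`)
  {
    --skip Test requires a server built with MAX_INDEXES=64
  }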
MDEV-8938 Server Crash on Update with joins
Make the unique-table check after setup_fields() of UPDATE, because the
unique-table check can materialize a table, and we do not need field
resolving after materialization.
Analysis: Lengths which are not UNIV_SQL_NULL, but bigger than the following
number, indicate that a field contains a reference to an externally
stored part of the field in the tablespace. The length field then
contains the sum of the following flag and the locally stored length.
This was incorrectly set to
  #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_MAX)
when it should be
  #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_DEF)
Additionally, we need to disable support for > 16K page sizes for
row-compressed tables, because a compressed page directory entry
reserves 14 bits for the start offset and 2 bits for flags.
This limits the uncompressed page size to 16K (with 14 bits, the largest
addressable offset is 2^14 = 16384 bytes). To support larger pages, the
page directory entry would need to be larger.
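A hedged illustration of the resulting restriction (the exact error or
warning depends on the version and on innodb_strict_mode):

  -- with innodb_page_size = 32K or 64K, compressed row format is refused
  SET SESSION innodb_strict_mode = ON;
  CREATE TABLE t1 (a INT) ENGINE=InnoDB
    ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;  -- rejected when page size > 16K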
The regression is caused by the change to the bind-address server parameter
in MDEV-8083; the server now listens on IPv4 only by default.
The problem, however, is that on Windows a connection to the server on
localhost appears to be much faster if the server listens on IPv6/dual stack:
mysql_real_connect() tries to connect to the IPv6 loopback first, and if
this fails, the failing connect() call takes several seconds.
To fix, use bind-address=* on Windows, and 127.0.0.1 elsewhere.
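For illustration, the two values as they would appear in a configuration
file (a hedged example; the patch itself may set them as built-in defaults
rather than via configuration):

  [mysqld]
  # Windows: listen on all interfaces / dual stack
  bind-address = *
  # other platforms:
  # bind-address = 127.0.0.1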