The replayer did not signal replaying waiters. Added a
mysql_cond_broadcast() call once replaying is over.
An assertion on the client error failed after a replay attempt failed
due to a certification failure. At this point the transaction does not
go through the client state, so the client error cannot be overridden.
Assign ER_LOCK_DEADLOCK to the THD directly instead.
Use a timed condition wait when waiting for replayers to finish, and
check whether the transaction has been BF aborted during the wait.
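A minimal sketch of both changes, using standard C++ primitives instead
of the server's mysql_mutex_t/mysql_cond_t wrappers; the names
replayers_active and bf_aborted are illustrative, not the actual wsrep
members:

#include <chrono>
#include <condition_variable>
#include <mutex>

struct ReplayState
{
  std::mutex mtx;
  std::condition_variable cond;
  int replayers_active= 0;
  bool bf_aborted= false;        // set elsewhere if the waiter is BF aborted
};

// Waiter side: a timed wait so that a BF abort arriving during the wait
// is noticed instead of blocking indefinitely.
static bool wait_for_replayers(ReplayState &st)
{
  std::unique_lock<std::mutex> lock(st.mtx);
  while (st.replayers_active > 0)
  {
    st.cond.wait_for(lock, std::chrono::milliseconds(100));
    if (st.bf_aborted)
      return false;              // give up: the transaction was BF aborted
  }
  return true;
}

// Replayer side: broadcast once replaying is over so that all waiters
// wake up (the missing signal described above).
static void replay_done(ReplayState &st)
{
  std::lock_guard<std::mutex> lock(st.mtx);
  --st.replayers_active;
  st.cond.notify_all();
}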
Temporarily disable WSREP while executing RESET MASTER. In a situation
where two nodes are both master and slave, first stop the slave on both
nodes and then reset the master.
Enforce stricter causality check with wsrep_sync_wait.
Optimized the code that removed multiple equalities pushed from HAVING
into WHERE. Now this removal is postponed until all multiple equalities
are eliminated in substitute_for_best_equal_field().
The variable controls the amount of sampling ANALYZE TABLE performs.
If ANALYZE TABLE with histogram collection is too slow, one can reduce the
time taken by setting analyze_sample_percentage to a lower percentage of
the total number of rows.
Setting it to 0 will use a formula to compute how many rows to sample:
the number of rows collected has a lower bound of 50,000 and
increases logarithmically with the table size, with a coefficient of 4096.
The coefficient is chosen so that we expect an estimation error of less
than 3%, according to the paper:
"Random Sampling for Histogram Construction: How much is enough?"
– Surajit Chaudhuri, Rajeev Motwani, Vivek Narasayya, ACM SIGMOD, 1998.
The drawback of sampling is that the avg_frequency number is computed
imprecisely and will yield a smaller value than the real one.
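A rough illustration of the sample size growth described above (this is
a reading of the text, not the server's exact expression; the function
name sampled_rows is hypothetical):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Lower bound of 50,000 sampled rows, logarithmic growth with
// coefficient 4096, never more than the whole table.
static double sampled_rows(double total_rows)
{
  double by_formula= 4096.0 * std::log2(total_rows);
  return std::min(total_rows, std::max(50000.0, by_formula));
}

int main()
{
  for (double n : {1e5, 1e7, 1e9})
    std::printf("%.0f rows -> sample about %.0f rows\n", n, sampled_rows(n));
  return 0;
}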
The add method no longer needs to be given the row order number. It was
only used to detect whether the minimum/maximum value had been populated
yet, so as to force an update on the first encounter of a value.
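A hypothetical sketch of the idea: instead of passing a row order number
just to detect the first value, keep a "seen a value yet" flag and seed
min/max unconditionally on the first call to add():

#include <algorithm>

struct MinMaxCollector
{
  bool   has_value= false;
  double min_val= 0, max_val= 0;

  void add(double v)
  {
    if (!has_value)               // first value seeds both bounds
    {
      min_val= max_val= v;
      has_value= true;
      return;
    }
    min_val= std::min(min_val, v);
    max_val= std::max(max_val, v);
  }
};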
Remove CMake INSTALL command for COMPONENT DataFiles.
mysql_install_db.exe will calculate the default datadir, so that it can
be called without any parameters.
The check for the streaming replication logging format in
THD::decide_logging_format() was also performed for DDLs running
in TOI mode. This caused DROP DATABASE to fail if streaming
replication was enabled.
Added a check for the THD wsrep execution mode so that the format check
is performed only if the THD is in local processing mode (i.e. not TOI).
Added galera_sr_create_drop test to verify that CREATE/DROP
statements pass even if streaming replication is on.
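A minimal sketch of the execution-mode guard described above, with
hypothetical flag names (the real code inspects the THD wsrep state):

struct WsrepThdFlags
{
  bool local;   // THD executes in local processing mode
  bool toi;     // statement runs under total order isolation
  bool sr_on;   // streaming replication is enabled
};

// The streaming-replication logging format check applies only to locally
// processed transactions, not to DDL running in TOI mode.
static bool need_sr_logging_format_check(const WsrepThdFlags &thd)
{
  return thd.sr_on && thd.local && !thd.toi;
}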
In the tree bb-10.4-mdev7486, the crash was caused by a problem similar
to the one in mdev-16765:
Item_cond::excl_dep_on_group_fields_for_having_pushdown() was missing.
If we instantly change the size of a fixed-length field
and treat it as kind-of variable-length, then we will need
conversions between old column values and new ones.
I tried adding such a conversion to row_build(), but then I
noticed that more conversions would be needed, because
old values still appeared in a freshly rebuilt secondary index,
causing a mismatch when trying to search with the correct
longer value that was converted in my provisional fix to row_build().
So, we will revert the essential part of
MDEV-15563: Instant ROW_FORMAT=REDUNDANT column extension
(commit 22feb179ae), but not
remove any tests.
Make sure that the Annotate_rows_log_event is written into the
binlog only for the first fragment of the current statement.
Also avoid flushing the pending rows event when calculating the bytes
generated by the transaction.
Added and recorded a test which verifies that the binlog
contains only one Annotate_rows_log_event per statement
with various SR settings. Re-recorded mysql-wsrep-features#136,
which produces different output now that the excessive log events
are suppressed.
innobase_build_col_map_add(): Do not assume that old_field->pack_length()
equals field->pack_length(). Fix submitted by Aleksey Midenkov.
innobase_instant_try(): Assert that the column length of fixed-length
NOT NULL columns only changes for ROW_FORMAT=REDUNDANT.
Change the defaults:
-histogram_size=0
+histogram_size=254
-histogram_type=SINGLE_PREC_HB
+histogram_type=DOUBLE_PREC_HB
Adjust the testcases:
- Some have ignorable changes in EXPLAIN outputs and
more counter increments due to EITS table reads.
- Testcases that meaningfully depend on the old defaults
are changed to use the old values.
The Create_field::charset can contain garbage for columns
that the SQL layer does not consider as being string columns.
InnoDB considers BIT a string column for historical reasons
(and backward compatibility with old persistent InnoDB metadata),
and therefore it checked the charset.
Field::charset() is consistently my_charset_bin for BIT,
so we can trust that one.
Fix clang warning: 'this' pointer cannot be null in well-defined C++ code;
pointer may be assumed to always convert to true
The only caller of TABLE::best_range_rowid_filter_for_partial_join()
already seems to be assuming that s->table != NULL.
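A sketch of the pattern only, with a simplified stand-in for
TABLE::best_range_rowid_filter_for_partial_join(): a "this == NULL"
style check inside a member function is what triggers the warning; the
null check belongs at the call site, which already assumes a non-null
table:

struct Table
{
  double best_filter_gain() const { return 1.0; }   // hypothetical member
};

static double filter_gain(const Table *table)
{
  // caller-side null check instead of testing "this" inside the member
  return table != nullptr ? table->best_filter_gain() : 0.0;
}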
When the node is a JOINER and the binlog is enabled but binlog-index is
not set in the configuration, a NULL pointer was used, which caused a
segfault. Fixed by checking for a NULL pointer before using the variable.
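A sketch only, with illustrative names; the point is simply that the
index file name may be NULL when binlog-index is not configured:

#include <cstdio>

static void report_binlog_index(const char *binlog_index_name)
{
  if (binlog_index_name == nullptr)             // not set in configuration
  {
    std::puts("binlog index not configured; deriving the default name");
    return;
  }
  std::printf("binlog index: %s\n", binlog_index_name);
}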
Condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on the fields that are used in the GROUP BY list
or depends on the fields that are equal to grouping fields.
Aggregate functions can't be pushed down.
How the pushdown is performed in this example:
SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);
=>
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);
The implementation scheme:
1. Extract the most restrictive condition cond from the HAVING clause of
the select that depends only on the fields that are used in the GROUP BY
list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
of the select
3. Remove cond from the HAVING clause if it is possible
The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().
New test file having_cond_pushdown.test is created.
Galera versions below 4.x do not generate a unique sequence number
for view events. Take this into account when writing the SE checkpoint
to avoid debug assertion in InnoDB.
Field_str::is_equal(): Do not allow instant conversions between
BIT (which is stored big-endian) and integer types (which can
be stored big-endian or little-endian, depending on storage engine).
row_sel_field_store_in_mysql_format_func(): Properly extend
narrower integer and DATA_FIXBINARY values to the current format.
DATA_FIXBINARY was incorrectly padded with 0x20 instead of 0.
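A sketch of the padding rule for DATA_FIXBINARY (function name and
signature are illustrative, not the actual InnoDB code): when a value
stored under an older, narrower definition is extended to the current
column length, the tail must be filled with 0x00 bytes, not 0x20:

#include <cstring>

// Assumes dest_len >= src_len.
static void extend_fixbinary(unsigned char *dest, std::size_t dest_len,
                             const unsigned char *src, std::size_t src_len)
{
  std::memcpy(dest, src, src_len);
  std::memset(dest + src_len, 0x00, dest_len - src_len);  // not 0x20 (space)
}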
In tests that directly write InnoDB data file pages,
compute the innodb_checksum_algorithm=crc32 checksums,
instead of writing the 0xdeadbeef value used by
innodb_checksum_algorithm=none. In this way, these tests
will not cause failures when executing
./mtr --mysqld=--loose-innodb-checksum-algorithm=strict_crc32
1. Renaming Type_handler_json to Type_handler_json_longtext
There will be other JSON handlers soon, e.g. Type_handler_json_varchar.
2. Making the code more symmetric for data types:
- Adding a new virtual method
Type_handler::Column_definition_validate_check_constraint()
- Moving JSON-specific code from sql_yacc.yy to
Type_handler_json_longtext::Column_definition_validate_check_constraint()
3. Adding new files sql_type_json.cc and sql_type_json.h
and moving the JSON-related Type_handler code into these files.
instant_alter_column_possible(): Add the other MDEV-17459 work-around
condition. The existence of fulltext indexes only prevents instant
DROP COLUMN or changing the order of columns. Other forms of instant
ALTER TABLE are not a problem.
Before commit 4e7ee166a9 that merged
the MDEV-18295 fix from 10.3, the work-around of MDEV-17459 in
instant_alter_column_possible() was categorically refusing any
ALGORITHM=INSTANT if any FULLTEXT INDEX was present. After that commit,
a related condition was only present in prepare_inplace_alter_table_dict()
but not in the other callers of instant_alter_column_possible().
Fix some of the galera_sr suite test failures.
Remove tests that are not going to be run because the required
feature is not supported.
modified: mysql-test/suite/galera_sr/disabled.def
deleted: mysql-test/suite/galera_sr/r/GCF-574.result
modified: mysql-test/suite/galera_sr/r/galera_sr_cc_slave.result
modified: mysql-test/suite/galera_sr/r/galera_sr_kill_all_norecovery.result
modified: mysql-test/suite/galera_sr/r/galera_sr_load_data.result
deleted: mysql-test/suite/galera_sr/r/galera_sr_sbr.result
modified: mysql-test/suite/galera_sr/r/mysql-wsrep-features#148.result
deleted: mysql-test/suite/galera_sr/r/mysql-wsrep-features#29.result
deleted: mysql-test/suite/galera_sr/t/GCF-574.test
modified: mysql-test/suite/galera_sr/t/galera_sr_cc_slave.test
modified: mysql-test/suite/galera_sr/t/galera_sr_kill_all_norecovery.cnf
modified: mysql-test/suite/galera_sr/t/galera_sr_kill_all_norecovery.test
modified: mysql-test/suite/galera_sr/t/galera_sr_load_data.test
deleted: mysql-test/suite/galera_sr/t/galera_sr_sbr.test
modified: mysql-test/suite/galera_sr/t/mysql-wsrep-features#148.test
deleted: mysql-test/suite/galera_sr/t/mysql-wsrep-features#29.test