innodb.innodb_stats_drop_locked and
innodb.innodb_stats_fetch_nonexistent fail in buildbot on Windows
Analysis: The problem is that the innodb_stats_create_on_corrupted
test renames mysql.innodb_index_stats, and all the remaining
tests depend on this table.
Fix: After renaming the table back to the original, restart mysqld to
make sure that the table is correct.
- Drop all tables in explain_json.test
- Tabular form should print ref='' when type='fulltext' (another peculiarity
of the traditional EXPLAIN format)
- String_list::append_str should allocate memory for \0, too
- Some temporary code for EXPLAIN JSON and join buffering.
Use traditional statistics estimation by default (innodb-stats-traditional=true).
There could be a performance regression for customers if there are a lot of
open table operations.
* adjust viossl.c to take into account the new code
(SSL_get_error is used now; we cannot simply remap it)
* remove unnecessary version check
* update the test to 10.0
This came with the upgrade from yassl 2.3.0 to 2.3.4 -
ssl tests started to hang on Windows. By comparing the versions and
removing changes, I narrowed it down to this:
 void input_buffer::set_current(uint i)
 {
-    if (i)
-        check(i - 1, size_);
-    current_ = i;
+    if (error_ == 0 && i && check(i - 1, size_) == 0)
+        current_ = i;
+    else
+        error_ = -1;
 }
In 2.3.0 i==0 was only used to avoid the check; in 2.3.4 it is an error.
But there are places in the code that do set_current(0), and others that
do, for example, { before=get_current(); ...; set_current(before); } - and the
initial value of current_ is 0.
So I suspect that set_current(0) should not be an error; it should
only skip the check().
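A minimal sketch of the suspected intended behaviour, reusing the
input_buffer members from the diff above (my guess at a fix, not the
upstream patch): i == 0 is a plain reset that skips the bounds check,
and only a genuinely out-of-range index sets the error flag.

void input_buffer::set_current(uint i)
{
    if (i == 0)
        current_ = 0;                 // reset, as in 2.3.0: skip the check
    else if (error_ == 0 && check(i - 1, size_) == 0)
        current_ = i;                 // index is in range: accept it
    else
        error_ = -1;                  // out of range: record the error
}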
innodb_stats_sample_pages
Analysis: If you set the number of analyzed pages
to a very low number compared to the actual pages in
that table/index, those pages are picked at random
(default 8 pages), which means that a query run
after ANALYZE TABLE can return different results
each time. If the index tree is small, smaller than
10 * n_sample_pages + total_external_size, then the
estimate is ok. For bigger index trees it is
common that we do not see any borders between
key values in the few pages we pick. But still
there may be n_sample_pages different key values,
or even more. And the code just tries to
approximate to n_sample_pages (8).
Fix: (1) Introduced a new dynamic configuration variable,
innodb_stats_sample_traditional, that retains
the current design. Default false.
(2) If traditional sampling is not used, we use
n_sample_pages = max(min(srv_stats_sample_pages,
                         index->stat_index_size),
                     log2(index->stat_index_size) *
                         srv_stats_sample_pages);
(see the sketch after the macro below)
(3) Introduced a new dynamic configuration variable,
stat_modified_counter (default = 0), which, if set,
caps the number of row updates after which statistics are re-estimated.
If the user has provided such an upper bound on how many rows need to be
updated before we calculate new statistics, we use the minimum of the
provided value and 1/16 of the table. If no upper bound is provided
(srv_stats_modified_counter = 0, default), then new statistics are
calculated once 1/16 of the table has been modified
since the last time a statistics batch was run.
We calculate statistics at most every 16th round, since we may have
a counter table which is very small and updated very often.
/**
@param t table
@return true if the table has changed too much and stats need to be
recalculated */
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
    ((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
    ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
    16 + (t)->stat_n_rows / 16))
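For illustration, a minimal standalone sketch of the non-traditional
sampling formula from fix (2). The function name and signature are
hypothetical; only the formula itself comes from the patch description
above.

#include <cmath>
#include <algorithm>

// Hypothetical helper mirroring fix (2); assumes stat_index_size >= 1.
static unsigned long long
calc_n_sample_pages(unsigned long long srv_stats_sample_pages,
                    unsigned long long stat_index_size)
{
    unsigned long long capped =
        std::min(srv_stats_sample_pages, stat_index_size);
    unsigned long long scaled = (unsigned long long)
        (std::log2((double) stat_index_size) * srv_stats_sample_pages);
    // Small indexes keep the plain minimum; for large index trees the
    // log2 term dominates and proportionally more pages are sampled.
    return std::max(capped, scaled);
}

With the default srv_stats_sample_pages = 8, an index tree of 1024 pages
would be sampled with max(min(8, 1024), log2(1024) * 8) = 80 pages instead
of the old fixed 8.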
followup:
* explicitly disable SSLv2 and SSLv3, keep other protocols enabled
* fix a compiler warning
* rename the test and combinations to avoid confusion
vio/viossl.c:
fix a compiler warning
When the optimizer considers an option to use Loose Scan, it should
still consider UNIQUE keys (previously, MDEV-4120 disabled loose scan
for all kinds of unique indexes, which was wrong).
However, we should not use Loose Scan when trying to satisfy
"SELECT DISTINCT col1, col2, .. colN"
when using an index defined as UNIQUE(col1, col2, ... colN).
The problem is that page compressed tables currently require atomic_blobs, and
that feature is currently not available for row_format=redundant.
Fix: Disallow the page compressed create option if the table uses
row_format=redundant.
The bug was that a full memory barrier was missing in the code that ensures that
a waiter on an InnoDB mutex will not go to sleep unless it is guaranteed to be
woken up again by another thread currently holding the mutex. This made
possible a race where a thread could get stuck waiting for a mutex that is in
fact no longer locked. If that thread was also holding other critical locks,
this could stall the entire server. There is an error monitor thread that can
break the stall; it runs about once per second. But if the error monitor
thread itself got stuck or was not running, then the entire server could hang
indefinitely.
This was introduced on i386/amd64 platforms in 5.5.40 and 10.0.13 by an
incorrect patch that tried to fix the similar problem for PowerPC.
This commit reverts the incorrect PowerPC patch, and instead implements a fix
for PowerPC that does not change i386/amd64 behaviour, making PowerPC work
similarly to i386/amd64.
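For illustration, a minimal sketch of the store/barrier/load pattern the fix
restores, written with std::atomic; this is not the actual InnoDB mutex code,
and wait_for_wakeup()/post_wakeup() are hypothetical stand-ins for the sync
array. The full fences guarantee that either the waiter sees the mutex was
released, or the releasing thread sees the waiters flag and posts the wakeup,
so the wakeup cannot be lost:

#include <atomic>

void wait_for_wakeup();  // hypothetical: block on the sync array
void post_wakeup();      // hypothetical: signal the sync array

std::atomic<int> lock_word{0};  // 1 = locked, 0 = free
std::atomic<int> waiters{0};    // 1 = some thread intends to sleep

void lock_slow_path()
{
    while (lock_word.exchange(1, std::memory_order_acquire) != 0) {
        waiters.store(1, std::memory_order_relaxed);
        // Full barrier: the waiters store must become visible before we
        // re-read lock_word; without it the CPU may reorder the two and
        // the wakeup can be lost (the race described above).
        std::atomic_thread_fence(std::memory_order_seq_cst);
        if (lock_word.load(std::memory_order_relaxed) == 0)
            continue;           // released meanwhile: retry the exchange
        wait_for_wakeup();      // safe: the holder is guaranteed to see waiters
    }
}

void unlock()
{
    lock_word.store(0, std::memory_order_release);
    // Full barrier: the lock_word store must become visible before we
    // read the waiters flag below.
    std::atomic_thread_fence(std::memory_order_seq_cst);
    if (waiters.exchange(0, std::memory_order_relaxed))
        post_wakeup();          // wake any thread that set the flag
}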
The problem is that the binlog position is updated before
Executed_log_entries and Slave_SQL_State. So it's possible to hit
the moment when MASTER_POS_WAIT (and hence sync_with_master) has already
returned success, but Slave_SQL_State and Executed_log_entries have not
been updated yet.
Fixing it by adding a wait on the expected Executed_log_entries value.
in mysql_upgrade: do FLUSH PRIVILEGES at the end, not together with
mysql_fix_privilege_tables
mysql-test/t/mysql_upgrade-6984.opt:
use a dummy second option to force server restart after the test
* use the same HAVE_C/CXX_ variables for compiler flag tests as the rest of
the server and tokudb - to reuse cached results
* plugin's name should be "mroonga", not "ha_mroonga"
* don't use set_property(TARGET plugin_name ...), it aborts cmake when a plugin
is disabled, because the target doesn't exist in that case
result: mroonga can now be disabled from the cmake command line
Use the same restriction for character_set_client on the command line
and from SQL.
Also: remove a strange hack from thd_init_client_charset() that contradicted
the manual (collation_connection and character_set_results were not always set).