CRASHES ON EVERY START ATTEMPT
Description:
------------
The push_warning_printf function is used to print a warning message
to the client, so it must not be invoked while the server is
recovering. Moreover, current_thd is NULL during server startup.
Solution:
---------
- Avoid printing the warning during recovery.
This patch has already been pushed in mysql-5.6.
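A minimal standalone sketch of the guard (the function and logging
below are ours, not the server's; only the current_thd NULL check
reflects the fix):

  #include <stdio.h>

  struct THD { int unused; };      /* stand-in for the server's THD */
  static THD *current_thd= NULL;   /* NULL while the server starts up */

  void maybe_warn(const char *table_name)
  {
    if (current_thd == NULL)       /* recovering: no client session */
    {
      /* Log to the error log instead of calling push_warning_printf() */
      fprintf(stderr, "recovery: %s is marked as crashed\n", table_name);
      return;
    }
    /* A real push_warning_printf(current_thd, ...) would be safe here */
    printf("client warning: %s is marked as crashed\n", table_name);
  }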
The problem was that repair() locked and unlocked tables, which left
already locked tables in a wrong state (see the sketch after the file
list below).
include/my_check_opt.h:
Added option T_NO_LOCKS to disable locking during repair()
Fixed duplicated bit T_NO_CREATE_RENAME_LSN
mysql-test/suite/rpl/r/myisam_external_lock.result:
Test case for MDEV-6871
mysql-test/suite/rpl/t/myisam_external_lock-slave.opt:
Test case for MDEV-6871
mysql-test/suite/rpl/t/myisam_external_lock.test:
Test case for MDEV-6871
storage/maria/ha_maria.cc:
Don't lock tables during enable_indexes()
Removed some calls to current_thd
storage/myisam/ha_myisam.cc:
Don't lock tables during enable_indexes()
Removed some calls to current_thd
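A standalone sketch of the shape of the fix (the bit values and
function bodies here are illustrative, not the server's definitions):
enable_indexes() tells repair() to skip its own lock/unlock because
the caller already holds the table locks.

  #include <stdio.h>

  #define T_QUICK    (1UL << 0)    /* illustrative bit values */
  #define T_NO_LOCKS (1UL << 1)

  static int do_repair(unsigned long flags)
  {
    if (!(flags & T_NO_LOCKS))
      printf("repair: lock + unlock tables\n");  /* old behaviour: broke
                                                    the caller's locks */
    printf("repair: rebuilding indexes\n");
    return 0;
  }

  int enable_indexes()
  {
    /* Caller already holds the locks, so suppress locking in repair() */
    return do_repair(T_QUICK | T_NO_LOCKS);
  }

  int main() { return enable_indexes(); }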
modified:
storage/connect/rcmsg.c
- Avoid memory copying when reading an ODBC table whose entire
content is already in the result set.
modified:
storage/connect/odbconn.cpp
storage/connect/odbconn.h
storage/connect/tabodbc.cpp
storage/connect/tabodbc.h
On PPC64 a highly loaded server may crash due to an assertion failure
in the InnoDB rwlocks code.
This happened because the load order between "recursive" and
"writer_thread" wasn't properly enforced.
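A simplified standalone illustration of the ordering problem (field
and function names are ours, not InnoDB's): on a weakly ordered CPU
such as POWER, two plain loads can be observed out of order, so a
reader may pair recursive == true with a stale writer_thread. An
acquire load of "recursive" enforces the required order.

  #include <atomic>
  #include <thread>

  struct rw_lock_t
  {
    std::atomic<bool>            recursive{false};
    std::atomic<std::thread::id> writer_thread{};
  };

  /* Writer side: publish writer_thread first, then set recursive with
     release semantics, so readers that see recursive == true also see
     the matching writer_thread value. */
  void set_writer(rw_lock_t &lock)
  {
    lock.writer_thread.store(std::this_thread::get_id(),
                             std::memory_order_relaxed);
    lock.recursive.store(true, std::memory_order_release);
  }

  /* Reader side: the acquire load orders the two loads on PPC64 too. */
  bool held_by_me(const rw_lock_t &lock)
  {
    if (!lock.recursive.load(std::memory_order_acquire))
      return false;
    return lock.writer_thread.load(std::memory_order_relaxed)
           == std::this_thread::get_id();
  }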
modified:
storage/connect/value.cpp
- These files were committed even though not modified (?)
modified:
storage/connect/ha_connect.cc
storage/connect/odbconn.cpp
storage/connect/odbconn.h
storage/connect/tabodbc.cpp
after Operating system error number 36 in a file operation.
Analysis: os_file_get_status did not handle error ENAMETOOLONG
correctly.
Fix: Add correct handling for the error ENAMETOOLONG. Note that in
the InnoDB case the error is not passed all the way up to the server;
that would be a bigger revamp.
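A standalone sketch of the idea (the enum and wrapper are ours, not
the InnoDB function): ENAMETOOLONG, which is OS error 36 on Linux,
gets its own explicit case instead of falling into the generic error
path.

  #include <sys/stat.h>
  #include <errno.h>
  #include <stdio.h>

  enum file_status { FILE_OK, FILE_NOT_FOUND, FILE_NAME_TOO_LONG, FILE_ERROR };

  file_status get_file_status(const char *path)
  {
    struct stat st;
    if (stat(path, &st) == 0)
      return FILE_OK;
    switch (errno)
    {
    case ENOENT:
    case ENOTDIR:
      return FILE_NOT_FOUND;     /* missing file: not a hard error */
    case ENAMETOOLONG:           /* OS error 36: handle explicitly */
      return FILE_NAME_TOO_LONG;
    default:
      perror("stat");
      return FILE_ERROR;
    }
  }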
Use traditional statistics estimation by default
(innodb-stats-traditional=true). There could be a performance
regression for customers if there are a lot of open-table operations.
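For example, in my.cnf (option spelling as given above; set it to
false to opt into the logarithmic sampling described in the
innodb_stats_sample_pages entry below):

  [mysqld]
  # default after this change: keep the traditional sampling behaviour
  innodb-stats-traditional = true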
modified:
storage/connect/odbconn.cpp
storage/connect/odbconn.h
storage/connect/tabodbc.cpp
storage/connect/tabodbc.h
- Moving the calls to VerifyConnect and GetConnectInfo into the try
block in ODBConn::Open (potential crash in case of a throw; see the
sketch after the file list below)
modified:
storage/connect/odbconn.cpp
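A standalone sketch of the control-flow change (the class name,
exception type, and reporting are simplified placeholders): calls
that may throw must execute inside the try block so a throw is caught
and reported instead of escaping Open().

  #include <stdexcept>
  #include <stdio.h>

  struct ODBConnSketch
  {
    void VerifyConnect()  { throw std::runtime_error("bad DSN"); }
    void GetConnectInfo() {}

    bool Open()
    {
      try
      {
        VerifyConnect();   /* moved inside the try block: may throw */
        GetConnectInfo();  /* moved inside the try block: may throw */
        return true;
      }
      catch (const std::exception &e)
      {
        fprintf(stderr, "Open failed: %s\n", e.what());
        return false;      /* reported as an error, not a crash */
      }
    }
  };

  int main() { ODBConnSketch c; return c.Open() ? 0 : 1; }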
- Handling all ODBC date data types (91, 92, 93); see the sketch
after the file list below
modified:
storage/connect/ha_connect.cc
storage/connect/odbconn.cpp
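The codes are the standard ODBC 3.x date/time type identifiers from
sql.h; a sketch of the mapping (the translate function and target
enum are illustrative):

  #define SQL_TYPE_DATE       91   /* values as defined in ODBC sql.h */
  #define SQL_TYPE_TIME       92
  #define SQL_TYPE_TIMESTAMP  93

  enum col_type { TYPE_DATE, TYPE_TIME, TYPE_DATETIME, TYPE_ERROR };

  col_type translate_odbc_type(int sqltype)
  {
    switch (sqltype)
    {
    case SQL_TYPE_DATE:      return TYPE_DATE;
    case SQL_TYPE_TIME:      return TYPE_TIME;
    case SQL_TYPE_TIMESTAMP: return TYPE_DATETIME;
    default:                 return TYPE_ERROR;
    }
  }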
- Not assuming string results from ODBC catalog functions are
zero-terminated (sketched below)
modified:
storage/connect/odbconn.cpp
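A minimal sketch of the safe pattern (the helper is ours): use the
length indicator that SQLBindCol/SQLFetch fill in rather than calling
strlen() on the buffer.

  #include <cstddef>
  #include <string>

  /* buf comes from a column bound with SQLBindCol; str_len_or_ind is
     the indicator the driver fills in on SQLFetch. */
  std::string column_value(const char *buf, long str_len_or_ind)
  {
    if (str_len_or_ind < 0)        /* e.g. SQL_NULL_DATA (-1) */
      return std::string();
    /* Use the reported byte count; the buffer may lack a trailing NUL */
    return std::string(buf, (std::size_t) str_len_or_ind);
  }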
innodb_stats_sample_pages
Analysis: If you set the number of analyzed pages to a very low
number compared to the actual number of pages in the table/index, the
server randomly picks those pages (default 8 pages), which leads to
the fact that a query after ANALYZE TABLE returns different results.
If the index tree is small, smaller than
10 * n_sample_pages + total_external_size, then the estimate is OK.
For bigger index trees it is common that we do not see any borders
between key values in the few pages we pick. But still there may be
n_sample_pages different key values, or even more, and we just try to
approximate to n_sample_pages (8).
Fix: (1) Introduced a new dynamic configuration variable
innodb_stats_sample_traditional that retains the current design.
Default: false.
(2) If the traditional sample is not used, we use
    n_sample_pages = max(min(srv_stats_sample_pages,
                             index->stat_index_size),
                         log2(index->stat_index_size) *
                             srv_stats_sample_pages);
(a worked example follows the macro below).
(3) Introduced a new dynamic configuration variable
stat_modified_counter (default = 0) which, if set, bounds how many
row updates are needed before statistics are re-estimated.
If the user has provided an upper bound for how many rows need to be
updated before we calculate new statistics, we use the minimum of the
provided value and 16 + (number of rows) / 16. If no upper bound is
provided (srv_stats_modified_counter = 0, the default), then we
calculate new statistics when 1/16 of the table has been modified
since the last time a statistics batch was run.
We calculate statistics at most every 16th round, since we may have
a counter table which is very small and updated very often.
/** Determine whether a table has changed too much and its statistics
should be recalculated.
@param t table
@return true if the table has changed too much and stats need to be
recalculated */
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
  ((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
    ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
    16 + (t)->stat_n_rows / 16))
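A standalone worked example of the non-traditional sampling formula
from (2) (the function and example values are ours): for an index of
10000 leaf pages and srv_stats_sample_pages = 8, min(8, 10000) = 8
and log2(10000) * 8 is about 106, so 106 pages are sampled instead
of 8.

  #include <algorithm>
  #include <cmath>
  #include <stdio.h>

  unsigned long long sample_pages(unsigned long long srv_stats_sample_pages,
                                  unsigned long long stat_index_size)
  {
    double a= std::min<double>((double) srv_stats_sample_pages,
                               (double) stat_index_size);
    double b= std::log2((double) stat_index_size)
              * (double) srv_stats_sample_pages;
    return (unsigned long long) std::max(a, b);
  }

  int main()
  {
    printf("%llu\n", sample_pages(8, 10000));   /* prints 106 */
    return 0;
  }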
The bug was that a full memory barrier was missing in the code that
ensures that a waiter on an InnoDB mutex will not go to sleep unless
it is guaranteed to be woken up again by another thread currently
holding the mutex. This made possible a race where a thread could get
stuck waiting for a mutex that is in fact no longer locked. If that
thread was also holding other critical locks, this could stall the
entire server. There is an error monitor thread that can break the
stall; it runs about once per second. But if the error monitor thread
itself got stuck or was not running, then the entire server could hang
indefinitely.
This was introduced on i386/amd64 platforms in 5.5.40 and 10.0.13 by
an incorrect patch that tried to fix a similar problem for PowerPC.
This commit reverts the incorrect PowerPC patch, and instead implements a fix
for PowerPC that does not change i386/amd64 behaviour, making PowerPC work
similarly to i386/amd64.
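A simplified standalone illustration of the invariant (names are
ours, not InnoDB's): the waiter's store of its waiters flag must
become globally visible before it re-reads the lock word, and the
unlocker's store of the lock word before it reads the waiters flag.
Each side needs a full (StoreLoad) barrier, expressed here with
sequentially consistent atomics; without it, both threads can read
stale values, the waiter sleeps, the unlocker sees no waiters, and no
wakeup is ever sent.

  #include <atomic>

  std::atomic<int> lock_word{1};   /* 1 = locked, 0 = free */
  std::atomic<int> waiters{0};

  bool waiter_may_sleep()
  {
    waiters.store(1, std::memory_order_seq_cst);    /* full barrier */
    /* Safe: if the lock is still held here, the holder is guaranteed
       to see waiters == 1 when it releases, and will wake us up. */
    return lock_word.load(std::memory_order_seq_cst) != 0;
  }

  bool unlocker_must_wake()
  {
    lock_word.store(0, std::memory_order_seq_cst);  /* full barrier */
    return waiters.load(std::memory_order_seq_cst) != 0;
  }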