CODE
Problem: UDF doesn't handle arguments properly when they are of
string type, due to a misplaced break. The argument length is
also not set properly when the argument is NULL.
Solution: Fixed the code by putting the break in the right place
and by setting the argument length to zero when the argument
is NULL.
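A minimal sketch of the pattern being fixed. The helper name and buffer
handling are illustrative assumptions, not the server's actual UDF code;
only the UDF_ARGS fields are the real API:

#include <mysql.h>   /* UDF_ARGS, STRING_RESULT, ... */
#include <cstring>

/* Illustrative only: copy the i-th UDF argument into buf/len. The two
   bugs described above would look like (1) a misplaced break letting
   the STRING_RESULT case fall through, and (2) *len left untouched for
   a NULL argument instead of being set to zero. */
void copy_udf_argument(UDF_ARGS *args, unsigned i,
                       char *buf, unsigned long *len)
{
    if (args->args[i] == NULL)
    {
        *len = 0;                       /* NULL argument: length is zero */
        return;
    }
    switch (args->arg_type[i])
    {
    case STRING_RESULT:
        memcpy(buf, args->args[i], args->lengths[i]);
        *len = args->lengths[i];
        break;                          /* the break that must sit here */
    case INT_RESULT:
    case REAL_RESULT:
    default:
        *len = 0;                       /* numeric args handled elsewhere */
        break;
    }
}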
On PPC64 a heavily loaded server may crash due to an assertion failure
in the InnoDB rwlocks code.
This happened because the load order between "recursive" and
"writer_thread" wasn't properly enforced.
Applied the fix previously pushed into 10.0.
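A minimal sketch of the ordering that has to hold, using C++11 atomics
for illustration; the struct below is a hypothetical stand-in for the
real rw-lock, and the actual fix uses InnoDB's own barrier macros:

#include <atomic>

struct rw_lock_t
{
    std::atomic<unsigned long> writer_thread;  /* write-locking thread id */
    std::atomic<bool>          recursive;      /* writer_thread is valid */
};

/* The writer must publish writer_thread before setting recursive; the
   release/acquire pair guarantees that a reader seeing recursive == true
   also sees the matching writer_thread. On PPC64's weak memory model the
   accesses may otherwise be reordered, tripping the assertion. */
void set_writer(rw_lock_t *lock, unsigned long self)
{
    lock->writer_thread.store(self, std::memory_order_relaxed);
    lock->recursive.store(true, std::memory_order_release);
}

bool locked_by_me(const rw_lock_t *lock, unsigned long self)
{
    return lock->recursive.load(std::memory_order_acquire)
        && lock->writer_thread.load(std::memory_order_relaxed) == self;
}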
Jan's initial commit comment:
The problem is that the test could open Microsoft C++ Client Debugger
windows with an abort exception. Let's not try to test this on
Windows.
Description: When querying a subset of columns from
information_schema.TABLES, the "Opened_tables" counter is incremented
even though no tables are actually opened in the SE.
Analysis: When information about tables is collected for statements like
"SELECT ENGINE FROM I_S.TABLES", we do not perform full-blown table opens
in the SE; instead we only use information from table shares in the Table
Definition Cache or from .FRMs. Still, in order to simplify the I_S
implementation, mock TABLE objects are created from the TABLE_SHARE
during this process. This is done by calling the open_table_from_share()
function with special arguments. Since this function always increments
the "Opened_tables" counter, calls to it can be mistakenly interpreted
as full-blown table opens in the SE.
Note that the claim that "'SELECT ENGINE FROM I_S.TABLES' statement
doesn't use the Table Cache" is nevertheless factually correct. But it
misses the point, since such statements a) don't use full-blown TABLE
objects and therefore don't do table opens, and b) still use the Table
Definition Cache.
Fix: We now increment the counter only when db_stat (i.e. the open
flags for ha_open()) is non-zero, so that building a mock TABLE object
for I_S purposes no longer counts as a table open.
We considered an optimization which would use TABLE objects from the
Table Cache when available instead of constructing mock TABLE objects,
but found it too intrusive for stable releases.
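A heavily simplified, hypothetical model of the fix; the real
open_table_from_share() takes many more parameters:

#include <cstdio>

struct StatusVars { unsigned long opened_tables; };

/* The counter is bumped only when db_stat (the ha_open() flags) is
   non-zero, i.e. when the storage engine is really asked to open the
   table; mock TABLE objects built for I_S queries pass db_stat == 0. */
void open_table_from_share(StatusVars *status, unsigned db_stat)
{
    /* ... construct the TABLE object from the share ... */
    if (db_stat)
        status->opened_tables++;
}

int main()
{
    StatusVars status = { 0 };
    open_table_from_share(&status, 0);  /* I_S mock open: not counted */
    open_table_from_share(&status, 1);  /* full-blown open: counted */
    std::printf("Opened_tables = %lu\n", status.opened_tables);  /* 1 */
    return 0;
}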
Analysis:
--------
Certain queries using intrinsic temporary tables may fail due to
name clashes in the file name for the temporary table when
'temp-pool' is enabled.
'temp-pool' tries to reduce the number of different filenames used for
temp tables by allocating them from a small pool, in order to avoid
problems in the Linux kernel, using a three-part filename:
<tmp_file_prefix>_<pid>_<temp_pool_slot_num>.
The bit corresponding to temp_pool_slot_num is set in the bitmap
maintained for the temp-pool when the slot is used for a file name.
It is cleared after the temp table is deleted, making the slot
available for re-use.
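For illustration, a sketch of the naming and slot bookkeeping described
above; the bitset and helper name are assumptions, not the server's
actual types:

#include <bitset>
#include <cstdio>
#include <unistd.h>   /* getpid() */

std::bitset<64> temp_pool;   /* one bit per temp-pool slot */

/* Pick the first free slot, mark it used, and build the three-part name
   <tmp_file_prefix>_<pid>_<temp_pool_slot_num>, e.g. "#sql_277e_0". */
int alloc_temp_name(char *buf, size_t len)
{
    for (int slot = 0; slot < 64; slot++)
        if (!temp_pool.test(slot))
        {
            temp_pool.set(slot);
            std::snprintf(buf, len, "#sql_%x_%d",
                          (unsigned) getpid(), slot);
            return slot;
        }
    return -1;  /* pool exhausted */
}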
Under an error condition, 'create_tmp_table()' tries to clear the same
bit twice by calling both 'free_tmp_table()' and
'bitmap_lock_clear_bit()': 'free_tmp_table()' deletes the table/file
and already clears the bit by calling the same
'bitmap_lock_clear_bit()' function.
The reported issue can be triggered under the timing window below, for
an error condition while creating the temp table:
a) THD1: Due to an error, clears the temp-pool slot number it used by
   calling 'free_tmp_table()'.
b) THD2: Is in the process of creating a temp table, using an unused
   slot number from the bitmap.
c) THD1: Clears the slot number used by THD2, by calling
   'bitmap_lock_clear_bit()' after completing the 'free_tmp_table()'
   call.
d) THD3: Uses the slot number used by THD2, since it was freed by THD1.
   When it tries to create the temp file using that slot number, an
   error is reported, since the file is currently in use by THD2.
   [The error: Error 'Can't create/write to file
   '/tmp/#sql_277e_0.MYD' (Errcode: 17)']
Another issue, which may occur in 5.6 and trunk:
When opening the temporary table fails after its creation (due to a
ulimit or OOM error), the file is not deleted. Thus further attempts
to use the same slot number in the 'temp-pool' result in failure.
Fix:
---
a) Under the error condition, calling 'bitmap_lock_clear_bit()' to
   clear the bit is unnecessary, since 'free_tmp_table()' deletes the
   table/file and clears the bit. Hence the redundant
   'bitmap_lock_clear_bit()' call in 'create_tmp_table()' has been
   removed. This closes the timing window under which the reported
   issue can be seen (see the sketch after this list).
b) If opening the temporary table fails, the file is deleted, allowing
   the temp-pool slot number to be reused for subsequent temporary
   table creation.
c) Also, if the attempt to create the temp table fails because it
   already exists, the temp-pool slot for it is marked as used, to keep
   the problem from re-appearing.
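A simplified model of fix (a): on error, the slot is freed exactly
once, via 'free_tmp_table()' only. Names and the bitset are stand-ins
for the server's locked bitmap:

#include <bitset>

std::bitset<64> temp_pool;

void bitmap_lock_clear_bit(int slot) { temp_pool.reset(slot); }

void free_tmp_table(int slot)
{
    /* deletes the table/file ... */
    bitmap_lock_clear_bit(slot);   /* ... and clears the bit once */
}

bool create_tmp_table_failing(int slot)
{
    temp_pool.set(slot);           /* slot claimed for the new table */
    /* ... creation fails here ... */
    free_tmp_table(slot);
    /* fix (a): no second bitmap_lock_clear_bit(slot) call here; the
       redundant clear could release a slot that another thread had
       already re-allocated in the meantime */
    return false;
}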
Use traditional statistics estimation by default (innodb-stats-traditional=true).
There could be a performance regression for customers if there are a
lot of table-open operations.
This came with the upgrade from yaSSL 2.3.0 to 2.3.4: SSL tests
started to hang on Windows. By comparing and removing changes I got
down to this:
void input_buffer::set_current(uint i)
{
-    if (i)
-        check(i - 1, size_);
-    current_ = i;
+    if (error_ == 0 && i && check(i - 1, size_) == 0)
+        current_ = i;
+    else
+        error_ = -1;
}
In 2.3.0, i == 0 was only used to skip the check; in 2.3.4 it is an
error. But there are places in the code that do set_current(0), and
others that do, for example, { before = get_current(); ...;
set_current(before); }, while the initial value of current_ is 0.
So I suspect that set_current(0) should not be an error; it should
only skip the check().
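A minimal sketch of what I suspect set_current() should look like,
with a simplified stand-in for yaSSL's input_buffer (the real class
has more state):

#include <cstddef>

class input_buffer
{
    int    error_;
    size_t current_;
    size_t size_;

    int check(size_t i, size_t limit) const { return i < limit ? 0 : -1; }

public:
    explicit input_buffer(size_t size)
        : error_(0), current_(0), size_(size) {}

    void set_current(size_t i)
    {
        if (i == 0)
            current_ = 0;    /* as in 2.3.0: skip the check, no error */
        else if (error_ == 0 && check(i - 1, size_) == 0)
            current_ = i;
        else
            error_ = -1;     /* real out-of-bounds access */
    }

    size_t get_current() const { return current_; }
};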
followup:
* explicitly disable SSLv2 and SSLv3, keep other protocols enabled
* fix a compiler warning
* rename the test and combinations to avoid confusion
vio/viossl.c:
fix a compiler warning
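For illustration, this is how such a protocol restriction is typically
spelled against the OpenSSL API; the function below is hypothetical and
the actual change lives in the server's SSL setup code:

#include <openssl/ssl.h>

SSL_CTX *new_server_ctx()
{
    /* SSLv23_server_method() negotiates the highest mutually supported
       version; the options then take SSLv2 and SSLv3 out of the set
       while leaving the TLS protocols enabled. */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
    if (ctx)
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
    return ctx;
}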
innodb_stats_sample_pages
Analysis: If you set the number of analyzed pages to a very low number
compared to the actual number of pages in the table/index, ANALYZE
randomly picks that many pages (default 8), which leads to the same
query returning different results after each ANALYZE TABLE. If the
index tree is small, smaller than 10 * n_sample_pages +
total_external_size, then the estimate is OK. For bigger index trees
it is common that we do not see any borders between key values in the
few pages we pick. But there may still be n_sample_pages different key
values, or even more, and the code then just approximates the estimate
to n_sample_pages (8).
Fix: (1) Introduced a new dynamic configuration variable
innodb_stats_sample_traditional that retains the current design.
Default: false.
(2) If traditional sampling is not used, we use
    n_sample_pages = max(min(srv_stats_sample_pages,
                             index->stat_index_size),
                         log2(index->stat_index_size) *
                             srv_stats_sample_pages);
(see the sketch after the macro below).
(3) Introduced a new dynamic configuration variable
stat_modified_counter (default = 0) which, if set, bounds how many row
updates may accumulate before statistics are re-estimated. If the user
has provided such an upper bound, we use the minimum of the provided
value and 1/16 of the table. If no upper bound is provided
(srv_stats_modified_counter = 0, the default), we calculate new
statistics once 1/16 of the table has been modified since the last
time a statistics batch was run.
/**
We calculate statistics at most every 16th round, since we may have
a counter table which is very small and updated very often.
@param t table
@return true if the table has changed too much and stats need to be
recalculated
*/
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
	((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
	 ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
	 16 + (t)->stat_n_rows / 16))
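A self-contained sketch of the two calculations above;
calc_n_sample_pages and changed_too_much are hypothetical helper names
mirroring fix (2) and the macro:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

/* Fix (2): scale the number of sampled pages with log2 of the index
   size instead of always using the fixed srv_stats_sample_pages. */
uint64_t calc_n_sample_pages(uint64_t srv_stats_sample_pages,
                             uint64_t stat_index_size)
{
    uint64_t fixed  = std::min(srv_stats_sample_pages, stat_index_size);
    uint64_t scaled = (uint64_t)(std::log2((double) stat_index_size)
                                 * srv_stats_sample_pages);
    return std::max(fixed, scaled);
}

/* Same decision as DICT_TABLE_CHANGED_TOO_MUCH: recalculate when the
   modification counter exceeds min(user bound, 16 + rows/16), or just
   16 + rows/16 when no bound is set. */
bool changed_too_much(int64_t stat_modified_counter,
                      int64_t stat_n_rows,
                      int64_t srv_stats_modified_counter)
{
    int64_t threshold = 16 + stat_n_rows / 16;
    if (srv_stats_modified_counter)
        threshold = std::min(srv_stats_modified_counter, threshold);
    return stat_modified_counter > threshold;
}

int main()
{
    /* with the default of 8 sample pages, a 1,000,000-page index is
       sampled with ~159 pages instead of 8 */
    std::printf("%llu\n",
                (unsigned long long) calc_n_sample_pages(8, 1000000));
    /* 70000 modified rows out of 1,000,000 exceeds 16 + 62500 */
    std::printf("%d\n", changed_too_much(70000, 1000000, 0));  /* 1 */
    return 0;
}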