When compressed MyISAM files are opened, they are always memory mapped,
which sometimes causes memory swapping problems.
When we mmap compressed MyISAM tables whose size is greater than the
available memory, the kswapd0 process utilization becomes very high,
consuming 30-40% of the CPU. This happens only with Linux kernels older
than 2.6.9. With newer Linux kernels this high CPU consumption does not
occur, so the option may not be required there.
The option 'myisam_mmap_size' is added to limit the amount of memory used
for memory mapping of MyISAM files. This option is not dynamic.
The default value on 32-bit systems is 4294967295 bytes and on 64-bit
systems it is 18446744073709547520 bytes.
Note: the test case only tests the option variable. The actual bug has to
be tested manually.
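
As an illustration only (not the server's actual code), the idea behind such
a cap could look like the following C sketch; mmap_limit stands in for
myisam_mmap_size and map_if_within_limit is a hypothetical helper:

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>

  static uint64_t mmap_limit = UINT64_MAX;   /* cap, e.g. myisam_mmap_size */
  static uint64_t mmap_used  = 0;            /* bytes currently mapped     */

  /* Map a compressed table only while the total stays under the cap;
     otherwise return NULL so the caller falls back to regular reads. */
  static void *map_if_within_limit(int fd, size_t length)
  {
    if (mmap_used + length > mmap_limit)
      return NULL;
    void *addr = mmap(NULL, length, PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED)
      return NULL;
    mmap_used += length;
    return addr;
  }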
returns incorrect results with where
An outer join of a const table (outer) and a normal table
(inner) with GROUP BY on a field from the outer table would
optimize away the GROUP BY, and thus trigger the optimization
that avoids a temporary table when grouping is performed on
columns from the const table, hence executing the query with
filesort and without a temporary table. But this should not be
done if there is non-indexed access to the inner table,
since filesort does not handle joins: it expects ref access,
range access, or a table scan. The join condition would
thus not be applied.
Fixed by always forcing execution with a temporary table for
queries combining ROLLUP with an outer join. This is a
slightly broader class of queries than needs fixing, but
it is hard to ascertain the position of a ROLLUP field with
respect to the outer join with the current query representation.
Problem: when inserting a record we do not set the unused null bits in the
record buffer if no default field values are used.
That may lead to a wrong live checksum calculation.
Fix: set the unused null bits in the record buffer in such cases.
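
A toy C sketch of the idea (not the MyISAM record format itself): if the
trailing bits of the null bitmap are left uninitialized, two otherwise
identical records can checksum differently, so the unused bits are forced
to a fixed value before the record is written. The helper name is
hypothetical:

  #include <stdint.h>

  /* The null bitmap for n nullable columns occupies (n + 7) / 8 bytes.
     Force the unused high bits of the last byte to 1 so that the bitmap
     contents are deterministic for checksumming. */
  static void set_unused_null_bits(uint8_t *null_bytes, unsigned nullable_columns)
  {
    unsigned used_bits = nullable_columns % 8;
    if (used_bits)   /* last byte is only partially used */
      null_bytes[nullable_columns / 8] |= (uint8_t)(0xFFu << used_bits);
  }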
int join_read_key(JOIN_TAB*)
The eq_ref access method uses TABLE_REF (accessed through
JOIN_TAB) to save state and to track whether this is the
first row it finds or not.
This state was not reset on subquery re-execution,
causing an assert.
Fixed by resetting the state before the subquery
re-execution.
NULLable BIGINT and INT columns in comparison
Problem: a consequence of the fix for bug 43668.
Some Arg_comparator inner initialization was missed,
which may lead to unpredictable (wrong) comparison
results.
Fix: always properly initialize the Arg_comparator
before its usage.
check_key_in_view() had one code branch which returned with "return TRUE"
rather than "DBUG_RETURN(TRUE)". Only affected debug builds.
No test case added.
The problem is that the purge_logs implementation in ndb (ndbcluster_binlog_index_purge_file)
calls mysql_parse (with (thd->options & OPTION_BIN_LOG) == 0),
but MYSQL_BIN_LOG first takes LOCK_log and then checks thd->options.
The solution in this patch is to change rotate_and_purge so that it does not hold
LOCK_log when calling purge_logs_before_date. I think this is safe,
as other purge functions are called without holding LOCK_log, e.g. purge_master_logs.
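
A generic pthread sketch of the locking pattern involved (all names here are
stand-ins for the server code, and the real interaction also involves the ndb
callback and thd->options): releasing the mutex before calling a routine that
acquires the same non-recursive mutex avoids a self-deadlock.

  #include <pthread.h>

  static pthread_mutex_t LOCK_log = PTHREAD_MUTEX_INITIALIZER;

  static void purge_logs_before_date(void)
  {
    pthread_mutex_lock(&LOCK_log);      /* acquires LOCK_log internally */
    /* ... remove old log files ... */
    pthread_mutex_unlock(&LOCK_log);
  }

  static void rotate_and_purge(void)
  {
    pthread_mutex_lock(&LOCK_log);
    /* ... rotate the active log ... */
    pthread_mutex_unlock(&LOCK_log);    /* release BEFORE purging: calling
                                           purge_logs_before_date() while still
                                           holding the mutex would deadlock */
    purge_logs_before_date();
  }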
A 'LOAD DATA CONCURRENT [LOCAL] INFILE ...' statement is binlogged only as
'LOAD DATA [LOCAL] INFILE ...' in SBR and MBR. As a result, if replication is on,
queries on slaves will be blocked by the replication SQL thread.
This patch writes 'CONCURRENT' into the log event if the 'CONCURRENT' option
is present in the original statement, in SBR and MBR.
5.0 buffer overflow for ER_UPDATE_INFO, or truncated info message in 5.1
5.0.86 has a buffer overflow/crash, and 5.1.40 has a truncated message.
errmsg.txt contains this:
ER_UPDATE_INFO
rum "Linii identificate (matched): %ld Schimbate: %ld Atentionari
(warnings): %ld"
When that is sprintf'd into a buffer of size STRING_BUFFER_USUAL_SIZE,
a buffer overflow can happen.
The solution to this is to use MYSQL_ERRMSG_SIZE for the buffer size,
instead of STRING_BUFFER_USUAL_SIZE. This will allow longer strings.
To avoid potential crashes, we will also use my_snprintf instead of
sprintf.
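
A minimal C sketch of the approach (illustrative only: the function name,
the message text, and MY_ERRMSG_SIZE below are stand-ins for the server's
ER_UPDATE_INFO handling and MYSQL_ERRMSG_SIZE):

  #include <stdio.h>

  #define MY_ERRMSG_SIZE 512   /* stand-in for MYSQL_ERRMSG_SIZE */

  static void format_update_info(char out[MY_ERRMSG_SIZE],
                                 long matched, long changed, long warnings)
  {
    /* snprintf never writes past the given size, unlike sprintf, so a long
       translated format string can only be truncated, never overflow. */
    snprintf(out, MY_ERRMSG_SIZE,
             "Rows matched: %ld  Changed: %ld  Warnings: %ld",
             matched, changed, warnings);
  }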
timestamp primary key
Since TIMESTAMP values are adjusted by the current time zone
settings in both numeric and string contexts, using any
expressions involving TIMESTAMP values as a
(sub)partitioning function leads to nondeterministic behavior of
partitioned tables. The effect may vary depending on the storage
engine; it can be either incorrect data being retrieved or
stored, or an assertion failure. The root cause of this is the
fact that the calculated partition ID may differ from a
previously calculated ID for the same data due to timezone
adjustments of the partitioning expression value.
Fixed by disallowing any expressions involving TIMESTAMP values
in partitioning functions, with the following two
exceptions:
1. Creating or altering into a partitioned table that violates
the above rule is not allowed, but opening such existing tables
results in a warning rather than an error so that such tables
can be fixed.
2. UNIX_TIMESTAMP() is the only way to get a
timezone-independent value from a TIMESTAMP column, because it
returns the internal representation (a time_t value) of a
TIMESTAMP argument verbatim. So UNIX_TIMESTAMP(timestamp_column)
is allowed and should be used to fix existing tables if one
wants to use TIMESTAMP columns with partitioning.
As documented in the bug report, the double-checked locking
pattern has inherent issues and cannot guarantee correct
initialization.
This patch replaces the logic in init_available_charsets()
with the use of pthread_once(3). A wrapper function,
my_pthread_once(), is introduced and is used in lieu of direct
calls to init_available_charsets(). Related defines
MY_PTHREAD_ONCE_* are also introduced.
For the Windows platform, the implementation in lp:sysbench is
ported. For single-thread use, a simple define calls the
function and sets the pthread_once control variable.
Charset initialization is modified to use my_pthread_once().
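
A hedged sketch of the pattern using plain pthread_once(3); the
my_pthread_once() wrapper and MY_PTHREAD_ONCE_* defines from the patch are
only paraphrased here, and the surrounding function names are stand-ins:

  #include <pthread.h>

  static pthread_once_t charsets_once = PTHREAD_ONCE_INIT;

  static void init_available_charsets(void)
  {
    /* Runs exactly once, even if many threads race to get here;
       pthread_once() provides the ordering guarantees that the
       double-checked-locking version could not. */
    /* ... build the list of available character sets ... */
  }

  static void ensure_charsets_initialized(void)
  {
    pthread_once(&charsets_once, init_available_charsets);
    /* charset data is now safe to read */
  }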
The help text for --init-slave=name:
"Command(s) that are executed when a slave connects to this master".
This text indicates that the --init-slave option is set on a master
server, and that the master passes the option's argument to any slave
that connects to it. This is wrong. Actually, the --init-slave option
can only be set on a slave server, and the slave server then executes
the argument each time the SQL thread starts.
Correct the help text for the --init-slave option as follows:
"Command(s) that are executed by a slave server each time the SQL thread starts."
SPATIAL and FULLTEXT indexes don't support algorithm
selection.
Selecting an algorithm for them is now disabled by creating a
special grammar rule for these index types in the parser.
Added some encapsulation of duplicated parser code.
A few problems were found in the fix for bug 43668:
1) Comparison of a YEAR column with NULL always returned TRUE;
2) Comparison of a YEAR column with constants always returned
an unpredictable result;
3) Unnecessary conversion warnings when comparing a non-integer
constant with a NULL value in a YEAR column.
The problems described above have been resolved, with one
exception: comparison of a zero (i.e. invalid) YEAR column value
with 00 or 2000 still fails (this is not a regression), so
MIN/MAX on a YEAR column containing a zero value still fails.
Arg_comparator uses Item_cache objects to store constants being compared when
they need a type conversion. Because this cache wasn't initialized properly,
Arg_comparator might produce a wrong comparison result.
The Arg_comparator::cache_converted_constant function now initializes the cache
prior to usage.
This fix was proposed by Sergey Petrunya and contributed
under SCA by sca@askmonty.org.
The cause of this valgrind error is that in the function
add_cond_and_fix() in sql_select.cc an Item_cond_and object is
created. This is marked as fixed but does not have a correct
table_map() attribute. Later, in make_join_select(), if
engine_condition_pushdown is in use, this table map is used and
results in the valgrind error.
The fix is to add a call to update_used_tables() in add_cond_and_fix()
so that the table map is updated correctly.
This patch is tested by multiple existing tests (e.g. the innodb_mysql,
innodb, fulltext, and compress tests all produce this valgrind
warning/error without this fix).
There are three issues that caused rpl_killed_ddl to fail sporadically
in pb2:
1) thd->clear_error() was not called before creating the Query event
if the operation executed successfully;
2) DATABASE d2 might not exist because the statement to CREATE or
ALTER it was killed;
3) because of bug 43353, killing a query that does DROP FUNCTION or
DROP PROCEDURE can result in the SP not being found.
This patch fixed all of the above issues by:
1) calling thd->clear_error() if the operation succeeded;
2) adding IF EXISTS to the DROP DATABASE d2 statement;
3) temporarily disabling testing of DROP FUNCTION/PROCEDURE IF EXISTS.
Part 2:
There was a special optimization on the ref access method for
ORDER BY ... DESC that was applied without actually looking at the type of
the index selected for ORDER BY.
Fixed SELECT ... ORDER BY ... DESC (it uses a different code path compared
to the ASC case, which was fixed by the previous fix).
{PROCEDURE|FUNCTION} FROM ...'
The master would hit an assertion when the binary log was
active. This was due to the fact that the thread's diagnostics
area was being cleared before writing to the binlog, regardless
of whether mysql_routine_grant returned an error or not. When
mysql_routine_grant returned an error, the return value and the
diagnostics area contents would mismatch. Consequently, neither
my_ok would be called nor would an error be signaled in the
diagnostics area, eventually triggering the assertion in
net_end_statement.
We fix this by not clearing the diagnostics area at binlogging
time.
solaris after a crash
This patch adds a Solaris-specific version of
print_stacktrace() which uses printstack(2), available on all
Solaris versions since Solaris 9. (While Solaris 11 adds
support for the glibc functions backtrace_*() as of
PSARC/2007/162, printstack() is used for consistency over all
Solaris versions.)
The symbol names are mangled, so use of c++filt may be
required as described in the MySQL documentation.
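
A minimal Solaris-only sketch (printstack() and its header are real Solaris
interfaces; the surrounding function is illustrative, not the server's
print_stacktrace()):

  #include <ucontext.h>   /* printstack() on Solaris */
  #include <unistd.h>     /* STDERR_FILENO */

  static void print_stacktrace(void)
  {
    /* Writes the (mangled) frames of the calling thread's stack to stderr;
       run the output through c++filt to demangle the symbol names. */
    printstack(STDERR_FILENO);
  }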
escaped field names
When in mixed or statement mode, the master logs LOAD DATA
queries by resorting to an Execute_load_query_log_event. This
event does not contain the original query, but a rewritten
version of it, which includes the table field names. However, the
rewrite does not escape the field names. If these names match a
reserved keyword, then the slave will stop with a syntax error
when executing the event.
We fix this by escaping the field names, as is already done
for the table name.
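
An illustrative C helper (not the server's rewrite code; the function name
is hypothetical) showing the kind of quoting applied to each field name, so
that a name matching a reserved keyword such as desc is written as `desc`
in the rewritten statement:

  /* Caller must provide a buffer of at least 2 * strlen(name) + 3 bytes. */
  static void quote_identifier(char *dst, const char *name)
  {
    char *d = dst;
    *d++ = '`';
    for (const char *p = name; *p; p++)
    {
      if (*p == '`')
        *d++ = '`';      /* double any embedded backtick */
      *d++ = *p;
    }
    *d++ = '`';
    *d = '\0';
  }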