1. Unused static inline functions are only removed at -xO4;
otherwise test binaries will depend on various mysys
symbols that they don't use. Link the tests with libmysys.
2. Sphinx - don't explicitly instantiate templates before
they're defined. Or, rather, don't instantiate them explicitly at
all.
3. GIS - don't use anonymous unions and structs.
CONSTRAINT.
Analysis
=======
INSERT and UPDATE operations using the IGNORE keyword report an
error when they cause FOREIGN KEY constraint violations, despite
the IGNORE keyword.
Foreign key violation errors were not ignored; they were reported
as errors instead of warnings even when IGNORE was set.
Fix
===
Added code to ignore the foreign key violation errors and
report them as warnings when the IGNORE keyword is used.
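A minimal sketch of the affected scenario (the table and column
names are illustrative, not from the original bug report):

    CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE child (
      id INT,
      parent_id INT,
      FOREIGN KEY (parent_id) REFERENCES parent (id)
    ) ENGINE=InnoDB;
    -- 42 does not exist in parent: before the fix this reported an
    -- error despite IGNORE; after the fix the statement proceeds and
    -- the violation is reported as a warning.
    INSERT IGNORE INTO child VALUES (1, 42);
    SHOW WARNINGS;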
The problem was that the created table was not marked as used (its
query_id was not set), so opening tables for a stored function picked
it up (as an open placeholder) and used it, changing TABLE internals.
The SELECT mentioned in the bug attempted to create a temporary table
using the maria storage engine. The table needs to have a primary key
so that duplicates can be removed. Unfortunately this use case requires
a key longer than allowed, and the tmp table got created without the
key.
We must not allow materialization for the subquery if the total key
length or number of key parts is greater than what the storage engine
supports.
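A hypothetical example of the kind of subquery affected (the column
sizes are chosen only to exceed a typical engine key length limit):

    CREATE TABLE t1 (a VARCHAR(2000), b VARCHAR(2000));
    -- Materializing the subquery would require a unique key over
    -- (a, b), which is longer than the storage engine allows, so the
    -- optimizer must pick a different subquery strategy.
    SELECT * FROM t1 WHERE (a, b) IN (SELECT a, b FROM t1);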
When one evaluates a row-based comparison like (X, Y) = (A, B), one
should first call bring_value() for the Item that returns the row
value. If you don't do that and just attempt to read the values of X
and Y, you get stale values.
Semi-join/Materialization can take a row-based comparison apart and
make ref access from it. In that case, we need to call bring_value()
to get the index lookup components.
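A sketch of the query shape in question (names are placeholders):

    -- Semi-join can turn the implicit row comparison
    -- (t1.x, t1.y) = (t2.a, t2.b) into ref access on an index of t2;
    -- bring_value() must be called before reading the components used
    -- as index lookup values.
    SELECT * FROM t1
    WHERE (t1.x, t1.y) IN (SELECT t2.a, t2.b FROM t2);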
Consider a query with a subquery of the form t.key=(select ...).
Suppose the parent query uses this equality for ref access.
It will attempt to evaluate the subquery in get_best_combination(),
right before the join->join_tab[...] array is filled. The problem was
that subquery optimization will attempt to look at parent's join->join_tab
to check how many times subquery will be executed (and crash).
Fixed by not doing that when the subquery is constant (non-constant
subqueries are only evaluated during join execution, so they are not
affected).
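A sketch of such a query (names are illustrative):

    -- The scalar subquery is constant, so the equality can be used
    -- for ref access on t.key_col; evaluating it no longer touches
    -- the not-yet-filled join->join_tab array of the parent.
    SELECT * FROM t WHERE t.key_col = (SELECT MAX(a) FROM t2);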
create_partition_index_description() had wrong logic to calculate
length of the key value buffer that is used by the range optimizer.
For some reason it used MAX(partitioning_columns_len,
subpartitioning_columns_len) while it should use SUM of these values.
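An illustrative table shape (hypothetical names) where this matters:
with both partitioning and subpartitioning columns, the buffer must
hold key values for all of them at once.

    CREATE TABLE t1 (a INT, b INT)
    PARTITION BY RANGE (a)
    SUBPARTITION BY HASH (b)
    SUBPARTITIONS 4
    (PARTITION p0 VALUES LESS THAN (10),
     PARTITION p1 VALUES LESS THAN (MAXVALUE));
    -- Pruning over both a and b needs space for a's key value plus
    -- b's, i.e. the SUM of the column lengths, not their MAX.
    SELECT * FROM t1 WHERE a < 5 AND b = 3;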
ITEM_PARAM::SAFE_CHARSET_CONVERTER
ISSUE:
------
Charset conversion on a null parameter is not handled
correctly.
SOLUTION:
---------
Item_param's charset converter does not handle the case
where it might have to deal with a null value. This is
fine for other charset converters since the value is not
supplied to them at runtime.
The fix is to check if the parameter is now set to null and
return an Item_null object. Also, there is no need to
initialize Item_param's cnvitem in the constructor to a
string. This can be done in
Item_param::safe_charset_converter() itself.
The members of Item_param, cnvbuf and cnvstr, have been removed,
and cnvitem has been made a local variable in
Item_param::safe_charset_converter().
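A sketch of one way a parameter can require charset conversion while
its value is NULL (the statement is illustrative):

    PREPARE stmt FROM 'SELECT CONVERT(? USING utf8)';
    SET @p = NULL;
    -- The converter must notice that the parameter is NULL and return
    -- an Item_null instead of converting a non-existent string value.
    EXECUTE stmt USING @p;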
'SYSTEM LOCK' IN PROCESSLIST
Analysis
=========
SHOW PROCESSLIST shows 'System Lock' in the 'State' field while
LOAD DATA INFILE is running.
The thd->proc_info update is missing in the LOAD DATA INFILE path.
Thus any request will get the last status updated by lock_table()
during open_table().
Fix:
=======
Update the state information in the LOAD DATA INFILE path.
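How the symptom can be observed (the file path is hypothetical):

    -- Session 1:
    LOAD DATA INFILE '/tmp/big_file.txt' INTO TABLE t1;
    -- Session 2, while the load is running:
    SHOW PROCESSLIST;
    -- Before the fix the State column kept showing the stale
    -- 'System Lock' set by lock_table(); after the fix it reflects
    -- the progress of LOAD DATA INFILE.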
GENERATED BY THE EXP() FUNCTION
When generating the error message for numeric overflow, pass a flag to
Item::print() that prevents it from expanding constant expressions and
parameters to the values they evaluate to.
For consistency, also pass the flag to Item::print() when
Item_func_spatial_collection::fix_length_and_dec() generates an error
message. It doesn't make any difference at the moment, since constant
expressions haven't been evaluated yet when this function is called.
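For example (illustrative value; EXP(750) overflows DOUBLE):

    SELECT EXP(750);
    -- ERROR 1690 (22003): DOUBLE value is out of range in 'exp(750)'
    -- With the flag, the message quotes the expression as written
    -- rather than the value it evaluates to.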
This occurs when replication stops with an error, domain-based parallel
replication is used, and the GTID position contains more than one domain.
Furthermore, it relates to the case where the SQL thread is restarted
without first stopping the IO thread.
In this case, the file/offset relay-log position does not correctly
represent the slave's multi-dimensional position, because other domains may
be far ahead of, or behind, the domain with the failing event. So the code
reverts the relay log position back to the start of a relay log file that is
known to be before all active domains.
There was a bug that when the SQL thread was restarted, the
rli->relay_log_state was incorrectly initialised from @@gtid_slave_pos. This
position will likely be too far ahead, due to reverting the relay log
position. Thus, if the replication fails again after the SQL thread restart,
the rli->restart_gtid_pos might be updated incorrectly. This in turn would
cause a second SQL thread restart to replicate from the wrong position, if
the IO thread was still left running.
The fix is to initialise rli->relay_log_state from @@gtid_slave_pos only
when we actually purge and re-fetch relay logs from the master, not at every
SQL thread start.
A related problem is the use of sql_slave_skip_counter to resolve
replication failures in this kind of scenario. Since the slave position
is multi-dimensional, sql_slave_skip_counter cannot work properly: it
is indeterminate exactly which event is to be skipped, and this is
unlikely to work as expected for the user. So make this an error when
domain-based parallel replication is used with multiple domains,
suggesting instead that the user set @@gtid_slave_pos to reliably skip
the desired event.
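For example, instead of sql_slave_skip_counter, the failing event can
be skipped deterministically by advancing the GTID position (the GTID
values are illustrative):

    STOP SLAVE;
    -- Move domain 0 past the failing event, leaving the position of
    -- the other domain unchanged.
    SET GLOBAL gtid_slave_pos = '0-1-100,1-2-200';
    START SLAVE;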
PROCEDURE RESULTS IN GARBAGE BYTES
Issue:
-----
This problem occurs under the following conditions:
a) A stored procedure has a variable declared as TEXT/BLOB.
b) Data is copied into the variable using the
SELECT...INTO syntax from a TEXT/BLOB column.
Data corruption can occur in such cases.
SOLUTION:
---------
The blob type does not allocate space for the string to be
stored. Instead it contains a pointer to the source string.
Since the source is deallocated immediately after the
select statement, this can cause data corruption.
As part of the fix for Bug #21143080, when the source was
part of the table's write-set, blob would allocate the
necessary space. But that fix missed the possibility that,
as in the above case, the target might be a variable.
This fix adds back the copy_blobs check that was removed by
the earlier fix.
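A sketch of the affected pattern (names are illustrative):

    CREATE TABLE t1 (c TEXT);
    INSERT INTO t1 VALUES (REPEAT('x', 1000));
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE v TEXT;
      -- Without copy_blobs, v points into a buffer that is freed
      -- right after the SELECT, so reading v later returns garbage.
      SELECT c INTO v FROM t1 LIMIT 1;
      SELECT LENGTH(v), LEFT(v, 10);
    END //
    DELIMITER ;
    CALL p1();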
The problem was that wait_for_slave_io_to_start reported that the IO
thread was ready when it was still initializing. This caused the test
suite to continue too early, for example before the semi-sync plugin
was properly enabled.
Fixed by introducing a new internal stage: "Preparing". Slave_IO_Running is
now set to "Yes" only when all initializing is done and the IO thread is
ready to read things from the master.
The only test affected by this change is rpl_flsh_tbls, which got
stuck in the preparing phase while trying to read the GTID position
from a table. Fixed by having this test wait for Preparing instead
of Yes.
- Added a check for whether the connection is killed, to shortcut
  reading of connection data. This will allow us later in 10.2 to do
  a cleaner shutdown of slaves (fewer errors in the log).
- Add new status variables: Slaves_connected, Slaves_running and
  Slave_connections (see the example below).
- Use MYSQL_SLAVE_NOT_RUN instead of 0 with slave_running.
- Don't print obvious extra warnings to the error log when the slave
  is shut down normally.
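The new status variables can be inspected with, for example:

    SHOW GLOBAL STATUS LIKE 'Slave%';
    -- includes Slaves_connected, Slaves_running and Slave_connections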
UNIX_TIMESTAMP(STR_TO_DATE('201506', "%Y%M"
Issue:
-----
When an invalid date is supplied to the UNIX_TIMESTAMP
function from STR_TO_DATE, no check is performed before
converting it to a timestamp value.
SOLUTION:
---------
Add the check_date function and proceed to the timestamp
conversion only if it succeeds.
No warning will be returned for dates having a zero
month/day, since partial dates are allowed. UNIX_TIMESTAMP
will simply return zero for such values.
The problem has been handled in 5.6+ with WL#946.
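For example, with the numeric month specifier %m (behaviour as
described above):

    -- '201506' parses to '2015-06-00': the day is zero, so this is a
    -- partial date; no warning is raised and UNIX_TIMESTAMP returns 0.
    SELECT UNIX_TIMESTAMP(STR_TO_DATE('201506', '%Y%m'));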
COLUMNS
ANALYSIS:
=========
A valgrind error is reported when CREATE TABLE .. SELECT
involving BIT columns triggers a column type redefinition.
In general the pack_flag is set for BIT columns in
'mysql_prepare_create_table()'. However, during the above
operation, redefined column types were handled after the
special handling for BIT columns, and thus pack_flag ended
up not being set correctly, triggering the valgrind error.
FIX:
====
The patch fixes this problem by setting pack_flag correctly
for BIT columns in the case of column type redefinition.
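A sketch of a redefinition that takes this code path (the widths are
hypothetical):

    CREATE TABLE t1 (b BIT(8));
    INSERT INTO t1 VALUES (b'10101010');
    -- Column b is redefined in the CREATE clause while its data comes
    -- from the SELECT; pack_flag must be set for the BIT column in
    -- this redefinition path as well.
    CREATE TABLE t2 (b BIT(16)) SELECT b FROM t1;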
On shutdown, feedback was sending a short report without creating
a THD. At that point current_thd was still pointing to the already
destroyed THD from the previous full report.
backport from 10.1:
commit bfe703a
Author: Sergei Golubchik <serg@mariadb.org>
Date: Tue Feb 3 18:19:56 2015 +0100
don't let current_thd to point to a destroyed THD
Problem:
=======
rpl_binlog_index.test fails with the following valgrind error:

  Conditional jump or move depends on uninitialised value(s)
    at 0x4C2F842: __memcmp_sse4_1
       (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    0x739E39: find_uniq_filename(char*) (log.cc:2212)
    0x73A11B: MYSQL_LOG::generate_new_name(char*, char const*)
       (log.cc:2492)
    0x73A1ED: MYSQL_LOG::init_and_set_log_file_name(char const*,
       char const*, enum_log_type, cache_type) (log.cc:2289)
    0x73B6F5: MYSQL_BIN_LOG::open(char const*, enum_log_type,
Analysis and fix:
=================
This issue was fixed as part of the Bug#20459363 fix in 5.6 and
above. Hence, backporting the fix to MySQL-5.5.
that was mistakenly merged from mysql-5.5.47
(introduces valgrind failures in main.sp, because Field_varstring
columns are created as FIELD_NORMAL and that causes aria to
read bytes between the actual value length and field max length)