The mysql_rm_table_no_locks() function was modified.
When we construct the log record for DROP TABLE, we now
check whether there is a comment before the first table name
and, if so, add it to the record.
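As a minimal standalone sketch of the idea behind comment_length() (the helper below is illustrative, not the code committed to sql/sql_table.cc): given a position in the query text, return the length of a /* ... */ comment that starts there, or 0 if there is none.

  #include <cstddef>

  static size_t leading_comment_length(const char *pos, const char *end)
  {
    if (end - pos < 4 || pos[0] != '/' || pos[1] != '*')
      return 0;                              /* no comment at this position */
    for (const char *p= pos + 2; p + 1 < end; p++)
    {
      if (p[0] == '*' && p[1] == '/')
        return (size_t)(p + 2 - pos);        /* include the closing marker */
    }
    return 0;                                /* unterminated: treat as no comment */
  }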
per-file comments:
sql/sql_table.cc
MDEV-340 Save replication comments for DROP TABLE.
comment_length() function implemented to find comments in the query,
call it in mysql_rm_table_no_locks() and use the result to form log record.
mysql-test/suite/binlog/r/binlog_drop_if_exists.result
MDEV-340 Save replication comments for DROP TABLE.
test result updated.
mysql-test/suite/binlog/t/binlog_drop_if_exists.test
MDEV-340 Save replication comments for DROP TABLE.
test case added.
The partition engine now adds underlying tables to the query cache (QC) and asks the underlying tables' engines for permission to cache the query and return the query result.
Incorrect QC cleanup in case of table registration failure fixed.
Unified the QC interface for the myisammrg and partition engines.
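A rough model of the behaviour described above (names and types are illustrative, not the actual partition handler code): each underlying table is registered with the query cache, and caching is allowed only if every underlying engine agrees.

  #include <vector>

  struct UnderlyingTable
  {
    bool engine_allows_caching;   /* answer from the underlying table's engine */
  };

  /* Register every underlying table with the query cache; refuse to cache the
     query as soon as one underlying engine refuses. */
  static bool register_partitions_in_qc(const std::vector<UnderlyingTable> &parts)
  {
    for (const UnderlyingTable &t : parts)
    {
      if (!t.engine_allows_caching)
        return false;             /* one engine vetoes caching of the query */
      /* ... the real code would add 't' to the query cache's table list here */
    }
    return true;                  /* all engines agreed: the result may be cached */
  }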
COUNT DISTINCT GROUP BY
PROBLEM:
To calculate the final result of count(distinct(select 1))
we call the 'end_send' function instead of 'end_send_group'.
'end_send' cannot be called if we have aggregate functions
that need to be evaluated.
ANALYSIS:
While evaluating whether the loose_index_scan option can be
used for the query, the variable 'is_agg_distinct' is set to
'false' because the item in the distinct clause is not a
field. But we then choose loose_index_scan without taking
this into consideration.
So, while setting the final 'select_function' to evaluate
the result, 'precomputed_group_by' is set to TRUE because
loose_index_scan was chosen and the query supposedly has no
agg_distinct (which is clearly wrong, as it has one).
As a result, the 'end_send' function is chosen as the final
select_function instead of 'end_send_group'. The difference
between the two is that 'end_send_group' evaluates the
aggregates while 'end_send' does not. Hence the wrong result.
FIX:
The variable 'is_agg_distinct' always represents whether
'loose_index_scan' can be chosen for the aggregate_distinct
functions present in the select.
So, we check this variable before continuing with the
loose_index_scan option.
sql/opt_range.cc:
Do not continue with loose_index_scan if is_agg_distinct is not set in
the case of agg_distinct functions.
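As a hedged sketch of the decision (the real change is in sql/opt_range.cc; the function and parameter names below mirror the description, not the exact code):

  /* have_agg_distinct: the query contains aggregate DISTINCT functions.
     is_agg_distinct:   loose_index_scan can handle those functions
                        (false when the DISTINCT argument is not a field). */
  static bool may_use_loose_index_scan(bool have_agg_distinct,
                                       bool is_agg_distinct)
  {
    if (have_agg_distinct && !is_agg_distinct)
      return false;          /* agg_distinct present but not coverable: give up */
    return true;             /* other preconditions are checked elsewhere */
  }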
primary key with innodb tables
The bug was triggered if a single ALTER TABLE statement both
added and dropped indexes and ALTER TABLE failed during drop
(e.g. because the index was needed in a foreign key constraint).
In such cases, the server index information would get out of
sync with InnoDB - the added index would be present inside
InnoDB, but not in the server. This could then lead to InnoDB
error messages and/or server crashes.
The root cause is that new indexes are added before old indexes
are dropped. This means that if ALTER TABLE fails while dropping
indexes, index changes will be reverted in the server but not
inside InnoDB.
This patch fixes the problem by dropping any added indexes
if the drop fails (for ALTER TABLE statements that both add
and drop indexes).
However, this won't work if a primary key was added, as it
might not be possible to drop this key inside InnoDB. Therefore,
we resort to the copy algorithm if a primary key is added
by an ALTER TABLE statement that also drops an index.
In 5.6 this bug is more properly fixed by the handler interface
changes done in the scope of WL#5534 "Online ALTER".
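A compact model of the decision described above (illustrative names only, not the server's ALTER TABLE code):

  /* Fall back to the copy algorithm when a primary key is added together
     with an index drop: a primary key added in place may be impossible to
     drop again inside InnoDB, so it could not be rolled back if the later
     drop step failed. */
  enum class AlterAlgorithm { INPLACE, COPY };

  static AlterAlgorithm choose_alter_algorithm(bool adds_primary_key,
                                               bool drops_index)
  {
    if (adds_primary_key && drops_index)
      return AlterAlgorithm::COPY;
    return AlterAlgorithm::INPLACE;
  }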
KEY UPDATES WITH A LIMIT OF 1
Problem: The unsafety warning for statements such as
UPDATE ... WHERE pk=1 LIMIT 1 is thrown when binlog-format
= STATEMENT, despite the fact that such statements are
actually safe. This leads to filling up the disk space
with false warnings.
Solution: This is not a complete fix for the problem, but it
prevents the disks from getting filled up. It should
therefore be regarded as a workaround. In the future it
should be superseded by the server's general suppression/
filtering framework. It should also be noted that another
worklog is supposed to remove the artificial unsafety of
this case.
We use a warning suppression mechanism to detect a warning
flood, enable the suppression, and disable it again when the
average warnings/second has dropped to acceptable limits.
Activation: The suppression for LIMIT unsafe statements is
activated when the last 50 warnings were logged in less
than 50 seconds.
Suppression: Once activated, this suppression prevents the
individual warnings from being logged in the error log, and
instead prints one warning for every 50 warnings with the note:
"The last warning was repeated N times in last S seconds"
Note that this suppression works only on the error log; the
warnings seen by the clients remain as they are (i.e. one
warning per unsafe statement).
Deactivation: The suppression is deactivated once the average
number of warnings/sec has gone down to acceptable limits.
sql/sql_class.cc:
Added code to suppress warnings while logging them to the error log.
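As a self-contained model of that policy (illustrative, assuming a plain 50-warning/50-second window; the real code in sql/sql_class.cc differs in detail):

  #include <cstdio>
  #include <ctime>

  class UnsafeWarningThrottle
  {
    static const unsigned WINDOW= 50;   /* warnings per decision window     */
    static const long     PERIOD= 50;   /* seconds: "50 warnings in < 50 s" */
    bool active;                        /* suppression currently enabled?   */
    unsigned count;                     /* warnings seen in current window  */
    time_t window_start;                /* when the current window began    */

  public:
    UnsafeWarningThrottle() : active(false), count(0), window_start(0) {}

    /* Returns true if the individual warning should still be written to the
       error log, false if it is being suppressed. */
    bool log_warning(time_t now)
    {
      if (count == 0)
        window_start= now;
      count++;
      bool write_individual= !active;
      if (count == WINDOW)              /* decision point every 50 warnings */
      {
        long elapsed= (long) (now - window_start);
        if (active)                     /* summarize the suppressed batch   */
          fprintf(stderr,
                  "The last warning was repeated %u times in last %ld seconds\n",
                  count, elapsed);
        active= (elapsed < PERIOD);     /* stay active only while rate high */
        count= 0;
      }
      return write_individual;
    }
  };

Only the error-log path would consult such a throttle; the warning sent to the client is unaffected, matching the note above.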
Problem:
=======
The return value from my_b_write is ignored by `my_b_write_quoted',
`my_b_write_bit', and `Query_log_event::print_query_header'.
Most callers of `my_b_printf' also ignore the return value; `log_event.cc'
has many calls to it.
Analysis:
========
`my_b_write' is used to write data into a file. If the write fails, it
sets an appropriate error number and error message through a my_error()
call and sets IO_CACHE::error to -1.
`my_b_printf' is also used to write data into a file; it
internally invokes my_b_write to do the write operation. Upon
success it returns the number of characters written to the file, and on
error it returns -1, sets the error through my_error(), and also sets
IO_CACHE::error to -1. Most of the event-specific print functions,
for example `Create_file_log_event::print', `Execute_load_log_event::print',
etc., make several calls to the above two functions, and no check of the
return value is done after the 'print' call. All the above-mentioned
abuse cases deal with the client side.
Fix:
===
As part of the bug fix, a check for IO_CACHE::error == -1 has been added
at a very high level, after the call to the 'print' function. There are
a few more places where the return value of "my_b_write" is ignored;
those are mentioned below.
+++ mysys/mf_iocache2.c 2012-06-04 07:03:15 +0000
@@ -430,7 +430,8 @@
memset(buffz, '0', minimum_width - length2);
else
memset(buffz, ' ', minimum_width - length2);
- my_b_write(info, buffz, minimum_width - length2);
+++ sql/log.cc 2012-06-08 09:04:46 +0000
@@ -2388,7 +2388,12 @@
{
end= strxmov(buff, "# administrator command: ", NullS);
buff_len= (ulong) (end - buff);
- my_b_write(&log_file, (uchar*) buff, buff_len);
At these places appropriate return value handlers have been added.
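A hedged sketch of the shape of those handlers (illustrative, not the literal replacement lines; 'err' stands for the surrounding function's existing error exit):

  if (my_b_write(info, buffz, minimum_width - length2))
    goto err;   /* my_b_write() returns non-zero on failure and has already
                   set IO_CACHE::error to -1 */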
client/mysqlbinlog.cc:
check for IO_CACHE::error == -1 has been added after the call to
the event specific print functions
mysys/mf_iocache2.c:
Added handler to check the written value of `my_b_write'
sql/log.cc:
Added handler to check the written value of `my_b_write'
sql/log_event.cc:
Added error simulation statements in `Create_file_log_event::print'
and `Execute_load_query_log_event::print'
sql/rpl_utility.h:
Removed the extra ';'
Fixes for BUG11761686 left a flaw that managed to slip away from testing.
Only the effective filtering branch was actually tested, with a regression test
added to rpl_filter_tables_not_exist.
The reason for the failure is premature destruction of mem-root-allocated memory
at the end of the deferred User-var event's do_apply_event().
Fixed by bypassing free_root() in the deferred execution branch.
Deallocation of the items created in do_apply_event() is done by the base code
through THD::cleanup_after_query() -> free_items(), which the parent Query
event cannot miss.
sql/log_event.cc:
Do not call free_root() in the case of a deferred User-var event.
Necessary methods are added to the User-var class and do_apply_event() is refined.
sql/log_event.h:
Necessary methods to avoid destroying mem-root-based memory when
applying a User-var event are defined.
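A hedged fragment of the shape of the change (not the literal log_event.cc code; is_deferred() stands for one of the newly added User-var methods, and the free_root() flags argument is schematic):

  /* Skip freeing the mem-root for a deferred event so that the items
     allocated in do_apply_event() survive until
     THD::cleanup_after_query() -> free_items() releases them. */
  if (!is_deferred())
    free_root(thd->mem_root, 0);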
Print the warning (note):
YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
Several fixes:
* sql-common/client.c
Added a validity check of the fields metadata packet sent
by the server.
Now libmysql will check if the length of the data sent by
the server matches what's expected by the protocol before
using the data.
* client/mysqltest.cc
Fixed the error handling code in mysqltest to avoid sending
new commands when reading the result set failed (and
there is unread data in the pipe).
* sql_common.h + libmysql/libmysql.c + sql-common/client.c
unpack_fields() now generates a proper error when it fails.
Added a new argument to this function to support the error
generation.
* sql/protocol.cc
Added a debug trigger to cause the server to send a NULL
instead of the packet expected by the client, for testing
purposes.
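As a rough, standalone illustration of the kind of check added to the client (the helper and its names are hypothetical, not the actual libmysql code): before interpreting a length-prefixed value in a metadata packet, verify that the announced length fits inside the packet.

  #include <cstddef>

  /* Hypothetical helper: 'pos' points at the value, 'end' one past the last
     byte of the packet, 'claimed_len' is the length announced by the server. */
  static bool value_fits_in_packet(const unsigned char *pos,
                                   const unsigned char *end,
                                   size_t claimed_len)
  {
    /* Reject the packet if the announced length runs past its end; the caller
       then raises a malformed-packet error instead of reading past the buffer. */
    return pos <= end && claimed_len <= (size_t)(end - pos);
  }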
The introduction of a cost-based decision between filesort and index for
UPDATE statements changed the detection of the fact that the index used to
scan the table is being updated. The new design missed the case of index
merge, where there is no single index to check. This worked until a recent
change in InnoDB, after which the server went into infinite recursion if an
update of the used index wasn't properly detected.
The fix reuses the 'used key being updated' detection code from 5.1.
sql/sql_update.cc:
Bug#14248833: UPDATE ON INNODB TABLE ENTERS RECURSION
The check for used key being updated is extended to cover the case when
index merge is used.
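A hedged fragment of the extended detection (illustrative, not the literal sql_update.cc change; 'merged_key_numbers' is a schematic stand-in for however the quick select exposes the indexes it reads, while is_key_used() is the existing server helper that checks a key against the updated columns):

  /* With index merge there is no single scanned index, so every index read
     by the merge must be checked against the columns being updated. */
  used_key_is_modified= false;
  for (uint keynr : merged_key_numbers)
    used_key_is_modified|= is_key_used(table, keynr, table->write_set);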
TABLE_LIST::check_single_table was made aware of the fact that a table
attached to a merged view can now be an (unopened) temporary table
(in 5.2 it was always a leaf table, or none in the case of several tables).
The new file is fully synced to disk, as is the binlog index. This fixes a
window where a crash would leave the next server restart unable to detect
that a crash occurred, causing recovery to fail.
MySQL introduced a class Deferred_log_events. This class keeps a pointer
last_added. The code was keeping this pointer around even after the memory
pointed to was freed, and later comparing the bogus pointer against other
allocated memory. This is illegal, and can randomly produce false equal
comparisons depending on whatever the malloc() subsystem decides to return.
Virtual columns of ENUM and SET data types were not supported properly
in the original patch that introduced virtual columns into MariaDB 5.2.
The problem was that for any virtual column the patch used the
interval_id field of the column's definition in the frm file as
a reference to the virtual column expression.
The fix stores the optional interval_id of the virtual column in the
extended header of the virtual column expression.