RBR AND RC
Description: When scanning and locking rows with < or <=, InnoDB also
locks the next row past the end of the range, even though row-based
binary logging and the READ COMMITTED isolation level are used.
Solution: In the handler, when a row is identified as falling outside
the range (as specified in the query predicates), request the storage
engine to unlock the row (if possible). This is done in
handler::read_range_first() and handler::read_range_next().
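A minimal sketch of the idea in handler::read_range_next() (heavily
simplified; compare_key() checks the fetched row against the end of the
range, and unlock_row() is the existing handler call that lets the
engine release the last-read row):

  int handler::read_range_next()
  {
    int result= index_next(table->record[0]);
    if (result)
      return result;
    if (compare_key(end_range) <= 0)
      return 0;                /* row is inside the range: keep its lock */
    /*
      The row is past the end of the range and will not be part of the
      result, so ask the storage engine to unlock it (if it can).
    */
    unlock_row();
    return HA_ERR_END_OF_FILE;
  }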
COUNT DISTINCT GROUP BY
PROBLEM:
To calculate the final result of count(distinct(select 1))
we call the 'end_send' function instead of 'end_send_group'.
'end_send' cannot be called if there are aggregate functions
that need to be evaluated.
ANALYSIS:
While evaluating the loose_index_scan option for the query, the
variable 'is_agg_distinct' is set to 'false' because the item in the
distinct clause is not a field. But loose_index_scan is then chosen
without taking this into consideration.
So, while setting the final 'select_function' to evaluate the result,
'precomputed_group_by' is set to TRUE, on the grounds that
loose_index_scan was chosen and the query has no agg_distinct (which
is clearly wrong, as it has one).
As a result, the 'end_send' function is chosen as the final
select_function instead of 'end_send_group'. The difference between
the two is that 'end_send_group' evaluates the aggregates while
'end_send' does not. Hence the wrong result.
FIX:
The variable 'is_agg_distinct' represents whether 'loose_index_scan'
can be chosen for the aggregate_distinct functions present in the
select. So we now check this variable before continuing with the
loose_index_scan option.
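A hypothetical sketch of the corrected decision (is_agg_distinct,
end_send and end_send_group come from the analysis above; the other
names are illustrative):

  /*
    Keep loose_index_scan, and the precomputed_group_by shortcut that
    selects 'end_send', only when is_agg_distinct confirms that the
    aggregate DISTINCT functions can really be precomputed; otherwise
    'end_send_group' must evaluate them.
  */
  if (have_agg_distinct_function && !is_agg_distinct)
    use_loose_index_scan= false;
  const bool precomputed_group_by= use_loose_index_scan && is_agg_distinct;
  Next_select_func end_select= precomputed_group_by ? end_send
                                                    : end_send_group;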
primary key with innodb tables
The bug was triggered if a single ALTER TABLE statement both
added and dropped indexes and ALTER TABLE failed during drop
(e.g. because the index was needed in a foreign key constraint).
In such cases, the server index information would get out of
sync with InnoDB - the added index would be present inside
InnoDB, but not in the server. This could then lead to InnoDB
error messages and/or server crashes.
The root cause is that new indexes are added before old indexes
are dropped. This means that if ALTER TABLE fails while dropping
indexes, index changes will be reverted in the server but not
inside InnoDB.
This patch fixes the problem by dropping any added indexes
if the drop fails (for ALTER TABLE statements that both add
and drop indexes).
However, this won't work if we added a primary key, as it might not
be possible to drop that key inside InnoDB. Therefore, we resort to
the copy algorithm if a primary key is added by an ALTER TABLE
statement that also drops an index.
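A hypothetical sketch of the recovery path (all names illustrative,
not the actual patch):

  /* New indexes are created first; the old ones are dropped last. */
  if (add_indexes(table, keys_to_add))
    goto err;
  if (drop_indexes(table, keys_to_drop))
  {
    /*
      The drop failed (e.g. an index is still needed by a foreign key
      constraint): drop the indexes we just added so the server's view
      of the table stays in sync with InnoDB, then report the error.
    */
    drop_indexes(table, keys_to_add);
    goto err;
  }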
In 5.6 this bug is more properly fixed by the handler interface
changes done in the scope of WL#5534 "Online ALTER".
KEY UPDATES WITH A LIMIT OF 1
Problem: The unsafety warning for statements such as
UPDATE ... WHERE pk=1 LIMIT 1 is thrown when binlog-format=STATEMENT,
despite the fact that such statements are actually safe. This leads
to filling up the disk space with false warnings.
Solution: This is not a complete fix for the problem, but it prevents
the disk from filling up and should therefore be regarded as a
workaround. In the future it should be superseded by the server's
general suppress/filtering framework. It should also be noted that
another worklog is expected to remove the artificial unsafety of this
case.
We use a warning suppression mechanism to detect a warning flood,
enable the suppression, and disable it again when the average number
of warnings per second has dropped to an acceptable level.
Activation: The suppression for LIMIT unsafe statements is activated
when the last 50 warnings were logged in less than 50 seconds.
Suppression: Once activated, the suppression prevents the individual
warnings from being logged in the error log, but prints one warning
for every 50 suppressed, with the note:
"The last warning was repeated N times in last S seconds"
Note that this suppression works only on the error log; the warnings
seen by clients remain as they are (i.e. one warning per unsafe
statement).
Deactivation: The suppression is deactivated once the average number
of warnings per second has gone down to an acceptable level.
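A self-contained sketch of the rate-based suppression (the constants
come from the description above; all function and variable names are
illustrative):

  #include <cstdio>
  #include <ctime>

  static const unsigned long WINDOW_COUNT= 50; /* warnings per window  */
  static const long WINDOW_SECS= 50;           /* window length (secs) */

  static std::time_t window_start= 0;
  static unsigned long count_in_window= 0;
  static bool suppressed= false;

  /* Called once per unsafe-statement warning bound for the error log. */
  void log_unsafe_warning(const char *msg)
  {
    std::time_t now= std::time(NULL);
    if (count_in_window == 0)
      window_start= now;
    count_in_window++;

    if (!suppressed)
      std::fprintf(stderr, "%s\n", msg);  /* normal, unsuppressed path */

    if (count_in_window >= WINDOW_COUNT)
    {
      if (suppressed)                 /* summarize the suppressed batch */
        std::fprintf(stderr,
                     "The last warning was repeated %lu times in last"
                     " %ld seconds\n",
                     count_in_window, (long) (now - window_start));
      /*
        Activate when the last 50 warnings arrived within 50 seconds;
        deactivate once the average rate is back under 1 warning/second.
      */
      suppressed= (now - window_start) <= WINDOW_SECS;
      count_in_window= 0;
    }
  }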
Problem:
=======
The return value from my_b_write is ignored by `my_b_write_quoted',
`my_b_write_bit' and `Query_log_event::print_query_header'.
Most callers of `my_b_printf' also ignore the return value;
`log_event.cc' has many such calls.
Analysis:
========
`my_b_write' is used to write data into a file. If the write fails,
it sets an appropriate error number and error message through a
my_error() call and sets IO_CACHE::error to -1.
`my_b_printf' is also used to write data into a file; it internally
invokes my_b_write to do the write operation. On success it returns
the number of characters written to the file; on error it returns -1,
sets the error through my_error(), and also sets IO_CACHE::error to
-1. Most of the event-specific print functions, for example
`Create_file_log_event::print' and `Execute_load_log_event::print',
make several calls to the above two functions and do not check the
return value after the 'print' call. All of these abuse cases are on
the client side.
Fix:
===
As part of the bug fix, a check for IO_CACHE::error == -1 has been
added at a very high level, after the call to the 'print' function.
There are a few more places where the return value of my_b_write is
ignored; those are shown below.
+++ mysys/mf_iocache2.c 2012-06-04 07:03:15 +0000
@@ -430,7 +430,8 @@
memset(buffz, '0', minimum_width - length2);
else
memset(buffz, ' ', minimum_width - length2);
- my_b_write(info, buffz, minimum_width - length2);
+++ sql/log.cc 2012-06-08 09:04:46 +0000
@@ -2388,7 +2388,12 @@
{
end= strxmov(buff, "# administrator command: ", NullS);
buff_len= (ulong) (end - buff);
- my_b_write(&log_file, (uchar*) buff, buff_len);
At these places appropriate return value handlers have been added.
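For illustration, a sketch of the high-level check (simplified; 'head'
stands for the IO_CACHE the event prints into):

  ev->print(result_file, print_event_info);
  /* my_b_write()/my_b_printf() record failures in IO_CACHE::error. */
  if (head->error == -1)
    goto err;               /* stop processing and report the failure */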
The fix for BUG11761686 left a flaw that managed to slip past testing:
only the effective filtering branch was actually covered by the
regression test added to rpl_filter_tables_not_exist.
The failure is caused by too-early destruction of mem-root-allocated
memory at the end of the deferred User-var event's do_apply_event().
Fixed by bypassing free_root() in the deferred execution branch.
Deallocation of the items created in do_apply_event() is instead done
by the base code through THD::cleanup_after_query() -> free_items(),
which the parent Query event cannot miss.
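A simplified sketch of the change at the end of
User_var_log_event::do_apply_event() (the free_root() flags are
illustrative):

  /*
    In the deferred case the mem-root-allocated items must outlive this
    call: they are freed later via THD::cleanup_after_query() ->
    free_items() when the parent Query event finishes.
  */
  if (!is_deferred())
    free_root(thd->mem_root, MYF(MY_KEEP_PREALLOC));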
HANDLE_FATAL_SIGNAL IN STRNLEN
Fixed the following bounds checking problems:
1. In check_if_legal_filename(), make sure the NUL-terminated string
is long enough before accessing its bytes. Prevents a potential read
past the end of the buffer (see the sketch below).
2. In my_wc_mb_filename() of the filename charset, check for the end
of the destination buffer before writing single-byte characters into
it. Prevents write-past-end-of-buffer errors (and the garbled stack
in the cases reported here).
Added test cases.
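A self-contained sketch of the pattern behind fix #1 (hypothetical
helper; check_if_legal_filename() itself compares the name against a
list of reserved words):

  #include <cctype>

  /*
    Compare 'name' against one reserved word without ever stepping past
    the NUL terminator of 'name': if the name ends first there is no
    match and, crucially, no read beyond the end of the buffer.
  */
  static bool matches_reserved(const char *name, const char *reserved)
  {
    for (; *reserved; name++, reserved++)
    {
      if (*name == '\0')
        return false;                        /* name is too short */
      if (std::toupper((unsigned char) *name) != *reserved)
        return false;
    }
    /* A match counts only at the end of the name or at an extension. */
    return *name == '\0' || *name == '.';
  }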
1. The clear text password client plugin is disabled by default.
2. Added an environment variable LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN
that, when set to something starting with '1', 'Y' or 'y', enables
the clear text plugin for all connections.
3. Added a new mysql_options() option, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
that takes a my_bool argument. When the value of the argument is
non-zero, the clear text plugin is enabled for this connection only
(see the sketch after this list).
4. Added an enable-cleartext-plugin config file option that takes a
numeric argument. If the value of the argument is non-zero, the clear
text plugin is enabled for the connection.
5. Added a boolean command line option --enable_cleartext_plugin to
mysql, mysqlslap and mysqladmin. When specified, it calls
mysql_options() with the effect of #3.
6. Added a new CLEARTEXT option to the connect command in mysqltest.
When specified, it enables the cleartext plugin for the connection.
7. Added test cases and updated existing ones that need the clear
text plugin.
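A minimal client-side sketch of option #3 (connection parameters are
placeholders):

  #include <mysql.h>

  int main()
  {
    MYSQL *conn= mysql_init(NULL);
    my_bool enable= 1;
    /* Enable the clear text plugin for this connection only. */
    mysql_options(conn, MYSQL_ENABLE_CLEARTEXT_PLUGIN, &enable);
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            NULL, 0, NULL, 0))
      return 1;
    mysql_close(conn);
    return 0;
  }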
executing
The problem is that mysqldump lacks information about the objects a
view depends on, so it can't dump views and tables in the proper
order. Thus it needs to create a "stand-in" MyISAM table for each
view while dumping the tables, which it later drops and replaces with
the actual view definition.
But since a view can have many more columns than an actual table,
creating these stand-in tables may be problematic.
There's no portable way to find out how many columns a MyISAM table
can have; it's a complicated formula depending on internal server
constants. Thus we can't have a reliable error check without
repeating that logic and formula inside mysqldump.
1. Changed the type of the columns of the stand-in tables mysqldump
makes to satisfy view dependencies from the original type to
smallint, to save on row space.
2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if the number of columns in a view
exceeds 1000 (sketched below).
3. Added a test case.
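A hypothetical sketch of warning #2 (names illustrative):

  #include <stdio.h>

  /* Called while writing the stand-in table for a view. */
  static void warn_if_too_wide(const char *view_name,
                               unsigned long columns)
  {
    if (columns > 1000)
      fprintf(stderr,
              "-- Warning: view %s has more than 1000 columns; "
              "replaying the dump may fail when creating its "
              "stand-in table.\n",
              view_name);
  }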
This is a followup patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
The test was disabled in mysql trunk and mysql 5.5 because in release
builds mysqlbinlog is not debug compiled whereas mysqld is. Since the
have_debug.inc script checks only whether mysqld is debug compiled,
the test was not being skipped on release builds.
We resolve this problem by creating a new inc file,
mysqlbinlog_have_debug.inc, which checks exclusively whether
mysqlbinlog is debug compiled and, if not, skips the test.