slave
The stored-routine code took the contents of the (lowest) parser
and copied it directly to the binlog, which caused problems whenever
there was a special case of interpretation at the parser level --
as there is for the "/*!VER */" comments. The trailing "*/" caused
errors on the slave, naturally.
Now, since by that point we have a /properly/ created parse tree (as
the rest of the server should!) for the stored-routine CREATE, we
can construct a correct statement from that information, instead of
writing uncertain text from an unknown parser state.
Fortunately, there's already a function nearby that does exactly
that.
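For illustration only, a minimal, self-contained sketch of the idea: rebuild the statement from parsed fields instead of echoing the raw parser buffer. The names RoutineDef and build_create_stmt are hypothetical stand-ins, not the server's actual types.

    #include <iostream>
    #include <string>

    // Hypothetical stand-in for the parsed representation of a routine;
    // the real server keeps this information in its parse tree.
    struct RoutineDef {
      std::string name;
      std::string params;
      std::string body;
    };

    std::string build_create_stmt(const RoutineDef &def) {
      // Reassemble a clean CREATE statement; no stray "*/" from a version
      // comment can leak in, because nothing is copied from the raw buffer.
      return "CREATE PROCEDURE " + def.name + "(" + def.params + ") " + def.body;
    }

    int main() {
      // Raw source as the parser saw it, wrapped in a /*!VER ... */ comment.
      const std::string raw = "/*!50003 CREATE PROCEDURE p() SELECT 1 */";
      const RoutineDef def{"p", "", "SELECT 1"};

      std::cout << "raw text : " << raw << "\n";  // trailing */ would break the slave
      std::cout << "from tree: " << build_create_stmt(def) << "\n";
    }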
---
Update for Bug#36570. Qualify routine names with db name when
writing to the binlog ONLY if the source text is qualified.
Addresses the following bugs:
Change the fix for Bug#32440 to show bytes instead of kilobytes in
INFORMATION_SCHEMA.TABLES.DATA_FREE.
branches/5.1: Fix bug#29507 TRUNCATE shows too many rows affected
In InnoDB, the row count is only a rough estimate used by SQL
optimization. InnoDB now returns a row count of 0 for the TRUNCATE
operation.
branches/5.1: Fix bug#35537 - Innodb doesn't increment handler_update
and handler_delete
Add the calls to ha_statistic_increment() in ha_innobase::delete_row()
and ha_innobase::update_row().
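A self-contained sketch of the pattern behind this fix, using illustrative counter and handler names rather than the server's real types: the status counter is bumped at the start of the handler's update/delete entry points.

    #include <cstdio>

    // Illustrative counters; the real ones live in the server's status variables.
    struct StatusCounters {
      unsigned long handler_update = 0;
      unsigned long handler_delete = 0;
    };

    struct ExampleHandler {
      StatusCounters *stats;

      int update_row() {
        ++stats->handler_update;   // analogous to the ha_statistic_increment() call added to ha_innobase::update_row()
        /* ... perform the actual update ... */
        return 0;
      }

      int delete_row() {
        ++stats->handler_delete;   // analogous to the ha_statistic_increment() call added to ha_innobase::delete_row()
        /* ... perform the actual delete ... */
        return 0;
      }
    };

    int main() {
      StatusCounters counters;
      ExampleHandler h{&counters};
      h.update_row();
      h.delete_row();
      std::printf("handler_update=%lu handler_delete=%lu\n",
                  counters.handler_update, counters.handler_delete);
    }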
Fix Bug#36169 create innodb compressed table with too large row size crashed
Sometimes row_drop_table_for_mysql(index->table_name, trx, FALSE) is
invoked from row_create_index_for_mysql() after the index object has
been freed, so copy the table name to a safe place beforehand and use
the copy.
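A minimal sketch of that lifetime fix, with hypothetical stand-ins for the InnoDB structures involved: the name is copied before the owning object can be freed, and only the copy is used afterwards.

    #include <iostream>
    #include <string>

    // Hypothetical stand-in for the index object that owns the name.
    struct Index {
      const char *table_name;
    };

    void free_index(Index *&index) {
      delete index;
      index = nullptr;  // after this, index->table_name must not be used
    }

    void drop_table_by_name(const std::string &name) {
      std::cout << "dropping table " << name << "\n";
    }

    int main() {
      Index *index = new Index{"test/t1"};

      // Copy the name to a safe place *before* the index object is freed,
      // then operate on the copy rather than on index->table_name.
      std::string table_name_copy = index->table_name;
      free_index(index);
      drop_table_by_name(table_name_copy);
    }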
Fix Bug#36434 ha_innodb.so is installed in the wrong directory
Replace pkglib_LTLIBRARIES with pkgplugin_LTLIBRARIES, which was
forgotten in this commit: http://lists.mysql.com/commits/40206
The event scheduler was not designed to work in embedded mode. This
patch disables and excludes the event scheduler when the server is
compiled as an embedded build.
The problem was that mysql_create_view did not remove all comment
characters when writing to the binlog, resulting in a parse error of
the statement on the slave side.
The solution is to use the recreated SELECT clause
and add a generated CHECK OPTION clause if needed.
- Disable the "prefer full scan on clustered primary key over full scan
of any secondary key" rule introduced by BUG#35850.
- Update test results accordingly
(bk trigger: file this for BUG#35850)
The bug is a regression introduced by the patch for bug32798.
The code in Item_func_group_concat::clear() relied on the 'distinct'
variable to check if 'unique_filter' was initialized. That, however,
is not always valid because Item_func_group_concat::setup() can take
shortcuts in some cases without initializing 'unique_filter'.
Fixed by checking the value of 'unique_filter' instead of 'distinct'
before dereferencing.
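A minimal sketch of the change, using illustrative types rather than the real Item_func_group_concat class: guard the dereference on the pointer itself, not on a flag that may be set even when the filter was never created.

    #include <memory>
    #include <set>

    struct GroupConcatState {
      bool distinct = true;                          // user asked for DISTINCT
      std::unique_ptr<std::set<int>> unique_filter;  // may stay null after setup() shortcuts

      void clear() {
        // Before: if (distinct) unique_filter->clear();   // may dereference null
        if (unique_filter)                           // After: check the pointer itself
          unique_filter->clear();
      }
    };

    int main() {
      GroupConcatState state;   // distinct == true, but no filter was created
      state.clear();            // safe: no null dereference
    }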
The problem is that since MyISAM's concurrent_insert is on by
default, some concurrent SELECT statements might not see changes
made by INSERT statements in other connections, even if the
INSERT statement has returned.
The solution is to disable concurrent_insert so that INSERT
statements return only after the data is actually visible to other
statements.
100% effectively on Windows
The mysqltest docs state that the 'replace_result' command
doesn't perform any escape processing.
However the current implementation was processing backslash
escapes in the from/to strings.
This prevents replacing e.g. paths on Windows (where backslash
is used as a path separator).
Fixed by removing the backslash escape processing from
'replace_result'.
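An illustration of the behavioral difference (the helper name is hypothetical, not mysqltest code): with escape processing removed, a Windows path in the from/to string survives intact instead of losing its separators.

    #include <iostream>
    #include <string>

    // Old behavior: interpret '\' as an escape character and swallow it.
    std::string interpret_backslash_escapes(const std::string &s) {
      std::string out;
      for (size_t i = 0; i < s.size(); ++i) {
        if (s[i] == '\\' && i + 1 < s.size())
          out += s[++i];          // backslash consumed as an escape
        else
          out += s[i];
      }
      return out;
    }

    int main() {
      const std::string from = "C:\\mysql\\data";
      std::cout << "escaped : " << interpret_backslash_escapes(from) << "\n"; // "C:mysqldata"
      std::cout << "verbatim: " << from << "\n";                              // path kept intact
    }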
for ENGINE=MRG_MYISAM (should be optimized out).
Before WL#3281 the MERGE engine had the HA_NOT_EXACT_COUNT flag
unset, and it worked with the COUNT optimization as desired.
After the removal of the HA_NOT_EXACT_COUNT flag, neither
HA_STATS_RECORDS_IS_EXACT (the opposite of the former
HA_NOT_EXACT_COUNT flag) nor the modern HA_HAS_RECORDS flag was
added to the MERGE table flag mask.
1. The HA_HAS_RECORDS table flag has been set.
2. The ha_myisammrg::records method has been overridden to
calculate the total number of records in the underlying tables
(see the sketch below).
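A self-contained sketch of item 2, with illustrative types in place of the real handler classes: the merge table reports an exact count by summing the counts of its underlying tables.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct ChildTable {
      uint64_t rows;
      uint64_t records() const { return rows; }
    };

    struct MergeTable {
      std::vector<ChildTable> children;

      uint64_t records() const {
        uint64_t total = 0;
        for (const ChildTable &child : children)
          total += child.records();   // exact count, enabling the COUNT(*) optimization
        return total;
      }
    };

    int main() {
      MergeTable merge{{{10}, {25}, {7}}};
      std::cout << "total rows: " << merge.records() << "\n";  // 42
    }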
When a zero length is provided to the my_decimal_length_to_precision
function along with unsigned_flag set to false, it returns a negative value.
For queries that employ temporary tables this may cause a failed assertion
or excessive memory consumption during temporary table creation.
Now the my_decimal_length_to_precision and my_decimal_precision_to_length
functions take unsigned_flag into account only if the length/precision
argument is non-zero.
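A minimal sketch of the corrected arithmetic, written to mirror the description above rather than the exact source: the sign position is only subtracted when the length is non-zero, so a zero length can never produce a negative (wrapped) precision.

    #include <cassert>
    #include <iostream>

    unsigned length_to_precision(unsigned length, unsigned scale, bool unsigned_flag) {
      assert(length || !scale);                                // a scale requires a length
      unsigned dot = (scale > 0) ? 1 : 0;                      // decimal point occupies one char
      unsigned sign = (unsigned_flag || length == 0) ? 0 : 1;  // sign only if there is room for it
      return length - dot - sign;
    }

    int main() {
      std::cout << length_to_precision(0, 0, false) << "\n";   // 0, not a huge wrapped value
      std::cout << length_to_precision(10, 2, false) << "\n";  // 8
    }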
The function test_if_skip_sort_order ignored any covering index used for ref
access of a table in a query with ORDER BY if this index was incompatible
with the ORDER BY list and there was another covering index compatible with
this list.
As a result, sub-optimal execution plans were chosen for some queries
with an ORDER BY clause.
impossible WHERE/HAVING clause
(subselect_single_select_engine::exec).
The allocation and initialization of the joined table list t1, t2, ... of
subqueries like:
NOT IN (SELECT ... FROM t1,t2,... WHERE 0)
is optimized out; however, the server still tries to traverse this list.
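An illustration of the failure mode and the defensive guard it calls for; the types are hypothetical and this is only a sketch of the general pattern, not the actual fix location.

    #include <iostream>
    #include <vector>

    struct JoinTab { const char *name; };

    void traverse(const std::vector<JoinTab> *join_list) {
      if (!join_list)                 // the list was optimized away; nothing to walk
        return;
      for (const JoinTab &tab : *join_list)
        std::cout << tab.name << "\n";
    }

    int main() {
      traverse(nullptr);              // "WHERE 0" case: the list was never allocated
      std::vector<JoinTab> tables{{"t1"}, {"t2"}};
      traverse(&tables);              // normal case
    }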
Grouping or ordering of long values in non-indexed BLOB/TEXT columns
with the GBK or BIG5 character set crashes the server.
The MySQL server uses sorting (the filesort procedure) on a temporary
table to evaluate the GROUP BY clause when no suitable index is available.
That procedure takes into account only the first @max_sort_length bytes
(a system variable, usually 1024) of the TEXT/BLOB sorting key string.
The my_strnxfrm_gbk and my_strnxfrm_big5 functions filled temporary keys
with data of the whole blob length instead of @max_sort_length bytes.
That buffer overrun has been fixed.
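A sketch of the nature of the fix, with an illustrative function in place of my_strnxfrm_gbk/my_strnxfrm_big5: never write more sort-key bytes than the destination buffer (@max_sort_length) holds, even when the source BLOB/TEXT value is longer.

    #include <algorithm>
    #include <cstdio>
    #include <cstring>

    size_t make_sort_key(char *dst, size_t dstlen, const char *src, size_t srclen) {
      size_t len = std::min(srclen, dstlen);  // clamp to the key buffer size
      std::memcpy(dst, src, len);             // previously the whole blob length was used
      if (len < dstlen)
        std::memset(dst + len, ' ', dstlen - len);  // pad the remainder of the key
      return dstlen;
    }

    int main() {
      char key[16];
      const char *blob = "a very long BLOB value that exceeds the sort key length";
      make_sort_key(key, sizeof key, blob, std::strlen(blob));
      std::printf("%.16s\n", key);  // only the first 16 bytes were written
    }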