The problem was that block_size was not set on partitioned tables, so
keys_per_block was incorrect. This skewed the cost calculation for the
read time of indexes (including the cost for group min/max), which
resulted in bad optimizer decisions.
Fixed by setting stats.block_size correctly.
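As an illustration (a hedged sketch only: the formula, names, and
numbers are simplified assumptions, not the optimizer's actual cost
code), an unset block_size collapses keys_per_block and inflates the
estimated number of blocks to read:

    #include <algorithm>
    #include <cstdio>

    // Toy model of index read cost: keys_to_read / keys_per_block blocks.
    double index_read_cost(unsigned block_size, unsigned key_length,
                           unsigned long keys_to_read)
    {
      unsigned keys_per_block =
          std::max(1u, block_size / std::max(1u, key_length));
      return static_cast<double>(keys_to_read) / keys_per_block;
    }

    int main()
    {
      // block_size left at 0: keys_per_block degenerates to 1, so the
      // estimate balloons to one block per key and misleads the optimizer.
      std::printf("unset: %.0f blocks\n", index_read_cost(0, 16, 100000));
      // block_size set correctly: ~1024 keys per 16K block.
      std::printf("set:   %.0f blocks\n", index_read_cost(16384, 16, 100000));
      return 0;
    }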
Error code not set in the Query_log_event under strict sql_mode
When the master was executing in sql_mode='traditional' (which
implies that really_abort_on_warning returns TRUE - because of
MODE_STRICT_ALL_TABLES), the error code (ER_DUP_ENTRY in the
reported case) was not being set in the
Query_log_event. Therefore, even when a failure was expected while
replaying the statement on the slave, replication would stop with an
error, because the Query_log_event was transporting 0 instead of the
expected error code.
This happened because, by the time the master fetched the error code
to set it in the Query_log_event, the executing thread had been marked
as killed: THD::killed == THD::KILL_BAD_DATA. This made the error-code
fetch routine check the thd->killed value instead of
thd->main_da.sql_errno(). What's more, the server maps the thd->killed
value to 0 when thd->killed == THD::KILL_BAD_DATA. So this is a double
inconsistency, as we should not even be checking thd->killed, but
rather thd->main_da.sql_errno().
We fix this by extending the condition used to choose whether to
check thd->main_da.sql_errno() or thd->killed, so that it takes into
consideration the case when thd->killed == THD::KILL_BAD_DATA.
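A minimal sketch of the extended condition (simplified, self-contained
types; the real routine works on THD and the diagnostics area):

    enum killed_state { NOT_KILLED, KILL_BAD_DATA, KILL_CONNECTION };

    struct ThdLike
    {
      killed_state killed;
      int stmt_errno;          // stands in for thd->main_da.sql_errno()
    };

    // Error code to store in the Query_log_event.
    int query_error_code(const ThdLike *thd)
    {
      // The fix adds the KILL_BAD_DATA case: a statement aborted by
      // strict mode must report its own error (e.g. ER_DUP_ENTRY), not
      // a kill status that maps to 0.
      if (thd->killed == NOT_KILLED || thd->killed == KILL_BAD_DATA)
        return thd->stmt_errno;
      return 0;                // simplified stand-in for killed_errno()
    }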
Implicit DROP of temporary tables + failing statements
The implicit DROP event for a temporary table was not getting the
LOG_EVENT_THREAD_SPECIFIC_F flag because, in the previously executed
statement in the same thread (which might even be a failed statement),
the thread_specific_used flag was set to FALSE (in
mysql_reset_thd_for_next_command) and was not set back to TRUE before
the connection was shut down. This means that the implicit DROP event
took the FALSE value from thread_specific_used and did not set
LOG_EVENT_THREAD_SPECIFIC_F in the event header. As a consequence,
mysqlbinlog would not print the pseudo_thread_id for the DROP event,
because one of the requirements for the printout is that this flag is
set to TRUE.
We fix this by setting thread_specific_used whenever we are
binlogging a DROP in close_temporary_tables, and resetting it to
its previous value afterward.
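A sketch of the save/set/restore pattern (hypothetical, simplified
types; the actual change is in close_temporary_tables()):

    struct ThdLike
    {
      bool thread_specific_used;
    };

    void binlog_implicit_drop(ThdLike *thd)
    {
      // Temporary tables are session-scoped, so the generated DROP event
      // must carry LOG_EVENT_THREAD_SPECIFIC_F regardless of what the
      // previous (possibly failed) statement left behind.
      bool saved = thd->thread_specific_used;
      thd->thread_specific_used = true;
      // ... write the DROP TABLE Query_log_event here; the event header
      //     derives LOG_EVENT_THREAD_SPECIFIC_F from this flag ...
      thd->thread_specific_used = saved;   // restore the previous value
    }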
The MERGE engine failed to open a child table from a different
database if the child table/database name contains characters that
are subject to table-name-to-filename encoding (WL#1324).
Another problem was that the MERGE engine didn't properly open a
child table from the same database if the child table name contains
characters like '/' or '#'.
The problem was that table-name-to-filename encoding was applied
inconsistently:
* On CREATE: encode table name + database name if the child
  table is in a different database; do not encode the table
  name if the child table is in the same database;
* No decoding on open.
With this fix, child table/database names are always encoded on
CREATE and decoded on open. Compatibility with older tables is
preserved.
Along with this patch comes a fix for SHOW CREATE TABLE, which used
to show the child table/database path instead of the child
table/database names.
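To illustrate the symmetry the fix restores, here is a self-contained
sketch; the escape scheme below is invented for illustration and is
not the server's actual WL#1324 encoding:

    #include <cstdio>
    #include <string>

    // Encode on CREATE: escape characters that are unsafe in filenames,
    // for both the database part and the table part of the child path.
    std::string encode_name(const std::string &name)
    {
      std::string out;
      for (char c : name)
      {
        if (c == '/' || c == '#')
        {
          char buf[8];
          std::snprintf(buf, sizeof(buf), "@%02x",
                        static_cast<unsigned>(static_cast<unsigned char>(c)));
          out += buf;
        }
        else
          out += c;
      }
      return out;
    }

    // Decode on open: the exact inverse, so any child name round-trips.
    std::string decode_name(const std::string &name)
    {
      std::string out;
      for (std::string::size_type i = 0; i < name.size(); ++i)
      {
        if (name[i] == '@' && i + 2 < name.size())
        {
          out += static_cast<char>(
              std::stoi(name.substr(i + 1, 2), nullptr, 16));
          i += 2;
        }
        else
          out += name[i];
      }
      return out;
    }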
If an outer query is broken, a subquery might not even get set up.
EXPLAIN EXTENDED did not expect this and merrily tried to
de-reference all of the half-set-up info.
We now catch this case and print as much as we have. This costs us
nothing, as it does not make regular execution slower.
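A minimal sketch of the defensive pattern (simplified types; not the
server's EXPLAIN code):

    #include <cstdio>

    struct SubqueryInfo
    {
      const char *text;    // may be missing if setup never completed
      bool set_up;
    };

    void explain_subquery(const SubqueryInfo *sub)
    {
      // Print as much as we have instead of dereferencing half-built state.
      if (sub == nullptr || !sub->set_up || sub->text == nullptr)
      {
        std::puts("<subquery not fully set up>");
        return;
      }
      std::printf("subquery: %s\n", sub->text);
    }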
(backport from 5.1)
Queries following a bulk INSERT ... SELECT into an empty MyISAM table
may break it. This was a pure MyISAM problem.
When a bulk insert into an empty table completes, MyISAM may want to
enable indexes via repair by sort. If repair by sort fails (e.g.
because of an insufficient buffer), MyISAM falls back to repair with
the key cache, requesting repair of the data file as well. Repair of
the data file performs data file substitution, which means that the
current table instance will point to the new data file, while other
cached table instances still point to the old, deleted data file.
This is fixed by not requesting repair of the data file during enable
indexes.
Explicit REPAIR is not affected, since it flushes all
table instances.
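A hypothetical sketch of the failover (flag names invented; the real
logic is in MyISAM's enable-indexes path):

    // Repair request flags (illustrative only).
    enum
    {
      REPAIR_BY_SORT       = 1,
      REPAIR_WITH_KEYCACHE = 2,
      REPAIR_DATA_FILE     = 4
    };

    int do_repair(int flags)           // stand-in for the repair routine
    {
      (void) flags;
      return 0;
    }

    int enable_indexes_sketch()
    {
      int err = do_repair(REPAIR_BY_SORT);
      if (err)
      {
        // Before the fix the fallback also requested REPAIR_DATA_FILE,
        // which substituted the data file and left other cached table
        // instances pointing at the old, deleted file.
        err = do_repair(REPAIR_WITH_KEYCACHE);
      }
      return err;
    }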
This patch fixes some typos and poorly formulated sentences in
the output from mysqld --help --verbose.
Some of the problems described in the bug report are already
handled by the patch for Bug#49447, and are therefore not
included in this patch.
START SLAVE UNTIL MASTER ... stops only the SQL thread.
rpl_slave_skip erroneously deployed waiting for the stop of both
threads. Corrected by deploying the correct macro.
Note that a similar bug, Bug#47749, was fixed earlier in mysql-trunk.
Checksum may differ for the same data when using BIT fields
Problem: the checksum for BIT fields may be computed incorrectly in
some cases due to their storage peculiarity.
Fix: convert a BIT field to a string, then calculate its checksum.
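A minimal sketch of the idea (toy checksum and types; the server uses
the field's own string conversion inside CHECKSUM TABLE):

    #include <cstdint>
    #include <string>

    // Toy rolling checksum over a string's bytes.
    uint32_t fold(uint32_t crc, const std::string &s)
    {
      for (unsigned char c : s)
        crc = crc * 31 + c;
      return crc;
    }

    // Checksumming the raw in-record bytes of a BIT column can pick up
    // storage-dependent filler bits; rendering the value to a string
    // first makes the checksum depend only on the logical value.
    uint32_t checksum_bit_field(uint32_t crc, uint64_t bit_value)
    {
      return fold(crc, std::to_string(bit_value));
    }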
A client doing multiple mysql_library_init() and
mysql_library_end() calls over the lifetime of the process may
experience lost character set data, potentially even a
SIGSEGV.
This patch reinstates the reloading of character set data when
mysql_library_init() is called again after mysql_library_end().
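For reference, the client-side pattern that used to misbehave
(standard libmysqlclient calls):

    #include <mysql.h>
    #include <cstdio>

    int main()
    {
      for (int i = 0; i < 2; ++i)
      {
        // The second round used to find stale or freed character set data.
        if (mysql_library_init(0, nullptr, nullptr))
        {
          std::fprintf(stderr, "client library init failed\n");
          return 1;
        }
        /* ... connect, run queries, disconnect ... */
        mysql_library_end();
      }
      return 0;
    }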
Incremental commit based on previous patch.
Addresses reviewer comments: move the resetting of
thd->current_stmt_binlog_row_based to after binlog_query takes
place.
The problem was that the CSV storage engine does not support NULL
fields, yet some early 5.1 versions created the log tables
(general_log and slow_log) with nullable fields. On top of this, when
altering a CSV table column, all fields of the table must be NOT
NULL, otherwise the alteration fails.
The solution is to ensure that during upgrade all columns of the
log tables are NOT NULL.
The test failed due to Bug#29790. However, the logic of the failing
part does not need to select from I_S. Fixed by removing the
non-deterministic I_S select, which was redundant, from the part of
the test dealing with Bug#22864.
The problem is that cond->fix_fields(thd, 0) breaks the condition
(it cuts off the 'having' part). The reason is that a NULL-valued
Item pointer is present in the middle of the Item list, and it breaks
the Item-processing loop.
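A sketch of the failure mode (simplified; the server walks its own
Item list structures during fix_fields()):

    #include <vector>

    struct Item { /* ... */ };

    void process_items(const std::vector<Item *> &items)
    {
      for (Item *it : items)
      {
        if (it == nullptr)
          break;   // bug: a NULL entry in the middle silently truncates
                   // processing, dropping everything after it (the
                   // HAVING condition in the reported case)
        // ... fix fields of *it ...
      }
    }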