The problem was that mysqltest could attempt to execute a
SHOW WARNINGS statement through a connection that was not
properly reaped, thus violating its own rules.
The solution is to skip SHOW WARNINGS if a connection has
not been properly reaped.
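A minimal mysqltest sketch of the pattern involved (connection and
query are made up):

    connect (con1, localhost, root,,);
    send SELECT SLEEP(5);     # con1 is now busy until reaped
    connection default;
    connection con1;          # back on con1 before reaping: the implicit
                              # SHOW WARNINGS must be skipped here
    reap;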
The problem is that not all column names retrieved from a SELECT
statement can be used as view column names due to length and format
restrictions. The server failed to properly check the conformity
of those automatically generated column names before storing the
final view definition on disk.
Since columns retrieved from a SELECT statement can be anything
ranging from functions to constant values of any format and length,
the solution is to rewrite any name that is not acceptable as a
view column name into a pre-defined format.
The name is rewritten to "Name_exp_%u", where %u translates to the
position of the column. To avoid this conversion scheme, define
explicit names for the view columns via the column_list clause.
Also, aliases are now only generated for top level statements.
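A minimal sketch of the scheme (view names and the constant are made
up): the select expression below produces a column name longer than
the 64-character identifier limit, so the stored definition gets a
generated name unless the column_list clause supplies one.

    CREATE VIEW v1 AS
      SELECT 'a string constant well over sixty-four characters, which cannot serve as a column name';
    -- stored as: SELECT ... AS `Name_exp_1`
    CREATE VIEW v2 (c1) AS
      SELECT 'a string constant well over sixty-four characters, which cannot serve as a column name';
    -- stored as: SELECT ... AS `c1`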
The problem was that bits of the destructive equality propagation
optimization weren't being undone after the execution of a stored
program. Modifications to the parse tree that are based on transient
properties must be undone to enable the re-execution of stored
programs.
The solution is to clean up any references to predicates generated
by the equality propagation during the execution of a stored program.
Spatial indexes were not checking for the out-of-record condition
in the handler "next" command when the previous command didn't find
rows.
Fixed by making the rtree index check for the end-of-rows condition
before re-using the key from the previous search.
Fixed another crash if the tree has changed since the last search.
Added a test case for the other error.
auto_increment on duplicate entry
The bug was that when INSERT_ID was used and the storage
engine was told to release any reserved but not used
auto_increment values, it set the highest auto_increment
value to INSERT_ID.
The fix was to check whether the auto_increment value was forced
by the user (INSERT_ID) or by the slave thread, i.e. not auto-
generated, so that only generated values may be released.
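A sketch of the statement sequence involved (table is made up):

    CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
    SET INSERT_ID = 100;           -- value forced by the user
    INSERT INTO t (v) VALUES (1);  -- inserts id = 100
    INSERT INTO t (v) VALUES (2);  -- auto-generated value; releasing
                                   -- unused values must not pull the
                                   -- counter back to the INSERT_ID value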
consider clustered primary keys
When choosing the shortest index for a covering index scan,
the optimizer ignored the fact that a clustered primary
key read involves the whole table data.
The find_shortest_key function has been modified to take
into account the fact that a clustered PK has the longest
key of all possible covering indices.
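For illustration (InnoDB table is made up):

    CREATE TABLE t (
      id  BIGINT PRIMARY KEY,   -- clustered: covering, but a scan of it
      a   INT,                  -- reads the whole table data
      pad VARCHAR(255),
      KEY k (a)
    ) ENGINE=InnoDB;
    EXPLAIN SELECT COUNT(*) FROM t;  -- should pick index k, not PRIMARY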
The problem was that block_size on partitioned tables was not set,
so keys_per_block was incorrect, which affects the cost calculation
for the read time of indexes (including the cost for group min/max)
and resulted in a bad optimizer decision.
Fixed by setting stats.block_size correctly.
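A sketch of a query whose plan depends on that estimate (schema is
made up):

    CREATE TABLE tp (a INT, b INT, KEY (a, b))
      PARTITION BY HASH (a) PARTITIONS 4;
    EXPLAIN SELECT a, MIN(b) FROM tp GROUP BY a;
    -- the group min/max cost estimate relies on keys_per_block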
mode
When the master was executing in sql_mode='traditional' (which
implies that really_abort_on_warning returns TRUE - because of
MODE_STRICT_ALL_TABLES), the error code (ER_DUP_ENTRY in the
reported case) was not being set in the
Query_log_event. Therefore, even when a failure was expected while
replaying the statement on the slave, replication would stop with an
error, because the Query_log_event was not transporting the expected
error code, but 0 instead.
This was because when the master was getting the error code to
set it in the Query_log_event, the executing thread would be
assumed to have been killed:
THD::killed==THD::KILL_BAD_DATA. This would make the error code
fetch routine check the thd->killed value instead of
thd->main_da.sql_errno(). What's more, the server would ignore
the thd->killed value when thd->killed == THD::KILL_BAD_DATA and
return 0 instead. So this is a double inconsistency, as we should
not even check thd->killed but rather thd->main_da.sql_errno().
We fix this by extending the condition used to choose whether to
check the thd->main_da.sql_errno() or thd->killed, so that it
takes into consideration the case when:
thd->killed==THD::KILL_BAD_DATA.
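A sketch of the scenario on the master (table is made up; MyISAM makes
the partially executed statement end up in the binary log):

    SET sql_mode = 'TRADITIONAL';
    CREATE TABLE t (a INT PRIMARY KEY) ENGINE=MyISAM;
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2), (1);  -- fails with ER_DUP_ENTRY after the
                                    -- first row; the Query_log_event must
                                    -- carry that error code, not 0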
+ failing statements
The implicit DROP event for a temporary table did not get the
LOG_EVENT_THREAD_SPECIFIC_F flag because, in the previously
executed statement in the same thread, which might even be a
failed statement, the thread_specific_used flag is set to
FALSE (in mysql_reset_thd_for_next_command) and not set back to
TRUE before the connection is shut down. This means that the
implicit DROP
event will take the FALSE value from thread_specific_used and
will not set LOG_EVENT_THREAD_SPECIFIC_F in the event header. As
a consequence, mysqlbinlog will not print the pseudo_thread_id
from the DROP event, because one of the requirements for the
printout is that this flag is set to TRUE.
We fix this by setting thread_specific_used whenever we are
binlogging a DROP in close_temporary_tables, and resetting it to
its previous value afterward.
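A sketch of the sequence (statement-based binlogging assumed):

    CREATE TEMPORARY TABLE tmp1 (a INT);
    INSERT INTO no_such_table VALUES (1);  -- failing statement; resets
                                           -- thread_specific_used
    -- disconnect: the implicit DROP TEMPORARY TABLE event must still be
    -- flagged thread-specific so that mysqlbinlog prints
    -- SET @@session.pseudo_thread_id before it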
work in 5.1.40)
The MERGE engine failed to open a child table from a different
database if the child table/database name contained characters
that are subject to table-name-to-filename encoding (WL#1324).
Another problem was that the MERGE engine didn't properly open a
child table from the same database if the child table name
contained characters such as '/' and '#'.
The problem was that table name to file name encoding was
applied inconsistently:
* On CREATE: encode table name + database name if child
table is in different database; do not encode table
name if child table is in the same database;
* No decoding on open.
With this fix, child table/database names are always
encoded on CREATE and decoded on open. Compatibility
with older tables is preserved.
Along with this patch comes a fix for SHOW CREATE TABLE,
which used to show the child table/database path instead
of the child table/database names.
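For example (names are made up; '#' is legal in quoted identifiers but
must be encoded in file names):

    CREATE DATABASE `db#1`;
    CREATE TABLE `db#1`.`t#1` (a INT) ENGINE=MyISAM;
    CREATE TABLE test.m1 (a INT) ENGINE=MERGE UNION = (`db#1`.`t#1`);
    SELECT * FROM test.m1;      -- must open the child via its encoded path
    SHOW CREATE TABLE test.m1;  -- must show names, not file paths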
If an outer query is broken, a subquery might not even get set up.
EXPLAIN EXTENDED did not expect this and merrily tried to de-ref all
of the half-setup info.
We now catch this case and print as much as we have, as it doesn't cost us
anything (doesn't make regular execution slower).
backport from 5.1
insert...select
Queries following a bulk insert into an empty MyISAM table
could break it. This was a pure MyISAM problem.
When a bulk insert into an empty table completes, MyISAM
may want to enable indexes via repair by sort. If repair
by sort fails (e.g. due to an insufficient buffer), MyISAM
falls back to repair with the key cache, requesting repair
of the data file.
Repair of the data file performs data file substitution,
which means that the current table instance will point to
the new data file, while other cached table instances still
point to the old, deleted data file.
This is fixed by not requesting repair of the data file
when enabling indexes.
Explicit REPAIR is not affected, since it flushes all
table instances.
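A sketch of the failure pattern (table names are made up; the sort
buffer is shrunk so that repair by sort fails and the key cache
fallback kicks in):

    SET SESSION myisam_sort_buffer_size = 4096;
    INSERT INTO t_empty SELECT * FROM t_big;  -- bulk insert enables
                                              -- indexes afterwards
    SELECT COUNT(*) FROM t_empty;             -- used to hit the stale,
                                              -- substituted data file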
This patch fixes some typos and poorly formulated sentences in
the output from mysqld --help --verbose.
Some of the problems described in the bug report are already
handled by the patch for Bug#49447, and are therefore not
included in this patch.
START SLAVE UNTIL MASTER ... specifies that only the SQL thread
should stop. rpl_slave_skip erroneously deployed waiting for both
threads to stop.
Corrected by deploying the correct macro.
Note that a similar bug, bug@47749, was fixed earlier in mysql-trunk.
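For example, a test issuing

    START SLAVE UNTIL MASTER_LOG_FILE = 'master-bin.000001',
                      MASTER_LOG_POS = 463;

(file name and position are placeholders) must wait only for the SQL
thread, e.g. via include/wait_for_slave_sql_to_stop.inc, since the I/O
thread keeps running past the UNTIL position.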
for same data when using bit fields
Problem: the checksum for BIT fields could be computed incorrectly
in some cases due to the storage peculiarities of the type.
Fix: convert a BIT field to a string, then calculate its checksum.
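Illustration (table is made up):

    CREATE TABLE tb (b BIT(8)) ENGINE=MyISAM;
    INSERT INTO tb VALUES (b'10101010');
    CHECKSUM TABLE tb;  -- now computed from the string form of b, so
                        -- the same data always yields the same checksum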
A client doing multiple mysql_library_init() and
mysql_library_end() calls over the lifetime of the process may
experience lost character set data, potentially even a
SIGSEGV.
This patch reinstates the reloading of character set data when
a mysql_library_init() is done after a mysql_library_end().
Incremental commit based on the previous patch.
Addresses reviewer comments to move the resetting of
thd->current_stmt_binlog_row_based to after binlog_query
takes place.