Bug#51830: Incorrect partition pruning on range partition
(regression)
The problem was that partition pruning did not exclude the
last partition if the range was beyond it
(i.e. the last partition was not defined with MAXVALUE).
The fix is to not include the last partition if the
partitioning function value is not within the partition
range.
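A minimal sketch of the scenario (table, column and values are
illustrative, not taken from the actual test case):

  CREATE TABLE t1 (a INT)
  ENGINE = InnoDB
  PARTITION BY RANGE (a)
    (PARTITION p0 VALUES LESS THAN (10),
     PARTITION p1 VALUES LESS THAN (100));  -- no MAXVALUE partition

  -- The range lies entirely beyond p1, so with the fix
  -- EXPLAIN PARTITIONS should list no partitions at all:
  EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 200;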
mysql-test/r/partition_innodb.result:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Updated result
mysql-test/r/partition_pruning.result:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Updated result
mysql-test/t/partition_innodb.test:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Added test for pruning in InnoDB, since it does not show
for MyISAM due to 'Impossible WHERE noticed after reading
const tables'.
mysql-test/t/partition_pruning.test:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Added test
sql/sql_partition.cc:
Bug#51830: Incorrect partition pruning on range partition
(regression)
Also increase the partition id if not inside the last partition
(and no MAXVALUE is defined).
Added comments and DBUG_ASSERT.
SET autocommit=1 while XA transaction is active may
cause various side effects, including memory corruption
and server crash.
The problem is that SET autocommit=1 and further queries
attempt to commit local transaction, whereas XA transaction
is still active.
As local and XA transactions are mutually exclusive, this
patch forbids enabling autocommit mode while XA transaction
is active.
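A minimal sketch of the now-forbidden sequence (table name is
illustrative; the exact error raised is an XA state error):

  CREATE TABLE t1 (a INT) ENGINE = InnoDB;
  XA START 'xid1';
  INSERT INTO t1 VALUES (1);
  SET autocommit = 1;  -- rejected after this patch: XA transaction active
  XA END 'xid1';
  XA ROLLBACK 'xid1';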
mysql-test/r/xa.result:
A test case for BUG#51342.
mysql-test/t/xa.test:
A test case for BUG#51342.
sql/set_var.cc:
Forbid enabling autocommit mode while XA transaction is
active.
MySQL uses two source layouts when building: the bzr
layout and the source package layout.
The previous fix for bug 35250 contained one change that is
valid for both modes and a number of changes that are valid
only for the bzr source layout.
The important thing was to fix the source package layout.
And for this the change in configure.in was sufficient.
It's not trivial (and not requested by this bug) to support
VPATH builds from the bzr trees.
This is why the other changes are reverted and the change to
fix the VPATH build for source distributions is left intact.
The test case added in previous patch missed a RESET MASTER on
test start up. Without it, showing binary log contents can
sometimes show spurious entries from previously executed tests,
ultimately causing test failure - result mismatch.
The test file was added in:
revid:luis.soares@sun.com-20100224190153-k0bpdx9abe88uoo2
This patch also moves the test case into binlog_innodb_row.test
file. This way we avoid having yet another test file,
binlog_row_innodb_truncate.test, whose only purpose is to host
one test case. This had actually been suggested during the
original patch review, but somehow binlog_innodb_row.test was
missed when searching for a file to host the test case.
The problem was that UNINSTALL PLUGIN wasn't performing privilege
checks before removing a plugin. Any user (including users without
any kind of privileges) could uninstall any plugin.
The solution is to verify if the user has the DELETE privilege for
the mysql.plugin table before uninstalling a plugin.
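A minimal sketch of the new requirement (user and plugin names are
hypothetical):

  -- Without DELETE on mysql.plugin, this now fails with an
  -- access-denied error:
  UNINSTALL PLUGIN example;

  -- Granting the privilege makes it possible again:
  GRANT DELETE ON mysql.plugin TO 'plugin_admin'@'localhost';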
mysql-test/r/plugin_not_embedded.result:
Add test case result for Bug#51770.
mysql-test/t/plugin_not_embedded-master.opt:
Add example plugin path.
mysql-test/t/plugin_not_embedded.test:
Add test case for Bug#51770.
Skip embedded, as the test relies on privilege checks.
The problem is that Item_direct_view_ref, which is inherited
from Item_ident, updates orig_table_name and table_name with
the same values. The fix is to introduce a new constructor
into Item_ident and its descendants which updates
orig_table_name and table_name separately.
mysql-test/r/metadata.result:
test case
mysql-test/t/metadata.test:
test case
sql/item.cc:
new constructor which updates
orig_table_name and table_name
separately.
sql/item.h:
new constructor which updates
orig_table_name and table_name
separately.
sql/table.cc:
used new constructor
The problem is that not all column names retrieved from a SELECT
statement can be used as view column names due to length and format
restrictions. The server failed to properly check the conformity
of those automatically generated column names before storing the
final view definition on disk.
Since columns retrieved from a SELECT statement can be anything
ranging from functions to constant values of any format and length,
the solution is to rewrite any names that are not acceptable as a
view column name to a pre-defined format.
The name is rewritten to "Name_exp_%u", where %u translates to the
position of the column. To avoid this conversion scheme, define
explicit names for the view columns via the column_list clause.
Also, aliases are now only generated for top level statements.
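A minimal sketch of the renaming (expressions are illustrative):

  -- The expression is not a legal column name, so it is auto-named:
  CREATE VIEW v1 AS SELECT CONCAT('a', 'b');
  SHOW CREATE VIEW v1;  -- the column appears as Name_exp_1

  -- An explicit column_list avoids the rewrite:
  CREATE VIEW v2 (c1) AS SELECT CONCAT('a', 'b');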
mysql-test/include/view_alias.inc:
Add test case for Bug#40277
mysql-test/r/compare.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/group_by.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/ps.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/subselect.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/subselect3.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/type_datetime.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/union.result:
Bug#40277: SHOW CREATE VIEW returns invalid SQL
mysql-test/r/view.result:
Add test case result for Bug#40277
mysql-test/r/view_alias.result:
Add test case result for Bug#40277
mysql-test/t/view_alias.test:
Add test case for Bug#40277
sql/sql_view.cc:
Check if auto generated column names are conforming. Also, the
make_unique_view_field_name function is not used as it uses the
original name to construct a new one, which does not work if the
name is invalid.
The problem was that bits of the destructive equality propagation
optimization weren't being undone after the execution of a stored
program. Modifications to the parse tree that are based on transient
properties must be undone to enable the re-execution of stored
programs.
The solution is to cleanup any references to predicates generated
by the equality propagation during the execution of a stored program.
mysql-test/r/trigger.result:
Add test case result for Bug#51650.
mysql-test/t/trigger.test:
Add test case for Bug#51650.
sql/item.cc:
Remove a reference to an equality predicate.
Spatial indexes were not checking for the out-of-records condition
in the handler's "next" command when the previous command didn't
find any rows.
Fixed by making the rtree index check for the end-of-rows condition
before re-using the key from the previous search.
Fixed another crash if the tree has changed since the last search.
Added a test case for the other error.
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
The bug was that when INSERT_ID was used and the storage
engine was told to release any reserved but unused
auto_increment values, it set the highest auto_increment
value to INSERT_ID.
The fix is to check whether the auto_increment value was forced
by the user (INSERT_ID) or by the slave thread, i.e. not
auto-generated, so that only generated values may be released.
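A minimal sketch of the scenario (schema and values are illustrative):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY)
  ENGINE = MyISAM
  PARTITION BY HASH (a) PARTITIONS 2;

  INSERT INTO t1 VALUES (1);
  SET INSERT_ID = 100;
  INSERT INTO t1 VALUES (1);   -- fails with a duplicate-key error
  -- The forced value (100) was never generated by the engine, so the
  -- failed insert must not pull the table's auto_increment back down:
  INSERT INTO t1 VALUES (NULL);
  SELECT LAST_INSERT_ID();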
mysql-test/r/partition_error.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
updated result
mysql-test/suite/parts/inc/partition_auto_increment.inc:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added test
mysql-test/suite/parts/r/partition_auto_increment_archive.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added result; note that the ARCHIVE engine only allows
increasing auto_increment values.
mysql-test/suite/parts/r/partition_auto_increment_blackhole.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added result, note that blackhole accepts all inserts :)
mysql-test/suite/parts/r/partition_auto_increment_innodb.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added result, note that innodb rolls back inserts on error,
but keeps the auto_increment value.
mysql-test/suite/parts/r/partition_auto_increment_memory.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added result; note that MEMORY and MyISAM insert all rows
before the error.
mysql-test/suite/parts/r/partition_auto_increment_myisam.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added result; note that MEMORY and MyISAM insert all rows
before the error.
mysql-test/suite/parts/r/partition_auto_increment_ndb.result:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added result, note that NDB does not seem to handle
INSERT_ID as other engines. (Martin will look into it).
mysql-test/t/partition_error.test:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
Added test
sql/ha_partition.cc:
Bug#50392: insert_id is not reset for partitioned tables
auto_increment on duplicate entry
If next_insert_id was not auto-generated (i.e. it was
forced by INSERT_ID or by the slave thread), then we cannot
lower the reserved auto_increment value, since the engine has
not actually reserved any values.
Bug #39653: find_shortest_key in sql_select.cc does not
consider clustered primary keys
When choosing the shortest index for a covering index scan,
the optimizer ignored the fact that a clustered primary
key read involves the whole table data.
The find_shortest_key function has been modified to
take into account the fact that a clustered PK has the
longest key of the possible covering indices.
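A minimal sketch of the effect (schema is illustrative):

  CREATE TABLE t1 (
    a INT PRIMARY KEY,  -- clustered in InnoDB: "covers" the whole row
    b INT,
    KEY k_b (b)
  ) ENGINE = InnoDB;

  -- A covering scan should now pick the short secondary index k_b
  -- rather than the clustered primary key:
  EXPLAIN SELECT COUNT(*) FROM t1;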
mysql-test/r/innodb_mysql.result:
Test case for bug #39653.
mysql-test/t/innodb_mysql.test:
Test case for bug #39653.
sql/sql_select.cc:
Bug #39653: find_shortest_key in sql_select.cc does not
consider clustered primary keys
The find_shortest_key function has been modified to
take into account the fact that a clustered PK has the
longest key of the possible covering indices.
There was no check for foreign keys when altering partitioned
tables.
Added check for FK when altering partitioned tables.
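A minimal sketch of the now-rejected statement (schema is
illustrative):

  CREATE TABLE parent (a INT PRIMARY KEY) ENGINE = InnoDB;
  CREATE TABLE child (a INT) ENGINE = InnoDB
    PARTITION BY HASH (a) PARTITIONS 2;

  -- Rejected after the fix: foreign keys are not supported
  -- on partitioned tables.
  ALTER TABLE child ADD FOREIGN KEY (a) REFERENCES parent (a);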
mysql-test/r/partition_innodb.result:
Bug#50104: Partitioned table with just 1 partition works with fk
Updated test result
mysql-test/t/partition_innodb.test:
Bug#50104: Partitioned table with just 1 partition works with fk
Added test for adding FK on partitioned tables (both 1 and 2
partitions)
sql/sql_partition.cc:
Bug#50104: Partitioned table with just 1 partition works with fk
Disabled adding foreign key when altering a partitioned table.
The problem was that block_size was not set on partitioned tables,
so keys_per_block was incorrect, which affected the cost
calculation for index read time (including the cost for
group min/max) and resulted in a bad optimizer decision.
Fixed by setting stats.block_size correctly.
mysql-test/r/partition_range.result:
Bug#48229: group by performance issue of partitioned table
Added result
mysql-test/t/partition_range.test:
Bug#48229: group by performance issue of partitioned table
Added test
sql/ha_partition.cc:
Bug#48229: group by performance issue of partitioned table
Added missing assignment of stats.block_size.
mode
When the master was executing in sql_mode='traditional' (which
implies that really_abort_on_warning returns TRUE - because of
MODE_STRICT_ALL_TABLES), the error code (ER_DUP_ENTRY in the
reported case) was not being set in the
Query_log_event. Therefore, even if a failure was to be expected
when replaying the statement on the slave, a failure would occur,
because the Query_log_event was not transporting the expected
error code, but 0 instead.
This was because when the master was getting the error code to
set it in the Query_log_event, the executing thread would be
assumed to have been killed:
THD::killed==THD::KILL_BAD_DATA. This would make the error-code
fetch routine check the thd->killed value instead of
thd->main_da.sql_errno(). What's more, the server would disregard
the thd->killed value if thd->killed == THD::KILL_BAD_DATA and
return 0 instead. So this is a double inconsistency, as we should
not even check thd->killed but rather thd->main_da.sql_errno().
We fix this by extending the condition used to choose whether to
check the thd->main_da.sql_errno() or thd->killed, so that it
takes into consideration the case when:
thd->killed==THD::KILL_BAD_DATA.
+ failing statements
The implicit DROP event for a temporary table was not getting the
LOG_EVENT_THREAD_SPECIFIC_F flag, because in the previously
executed statement in the same thread, which might even be a
failed statement, the thread_specific_used flag is set to
FALSE (in mysql_reset_thd_for_next_command) and is not set to
TRUE again before the connection is shut down. This means that
the implicit DROP event will take the FALSE value from
thread_specific_used and will not set LOG_EVENT_THREAD_SPECIFIC_F
in the event header. As a consequence, mysqlbinlog will not print
the pseudo_thread_id from the DROP event, because one of the
requirements for the printout is that this flag is set to TRUE.
We fix this by setting thread_specific_used whenever we are
binlogging a DROP in close_temporary_tables, and resetting it to
its previous value afterward.
work in 5.1.40)
The MERGE engine fails to open a child table from a different
database if the child table/database name contains characters
that are subject to table-name-to-filename encoding
(WL1324).
Another problem is that the MERGE engine didn't properly open a
child table from the same database if the child table name
contains characters like '/' or '#'.
The problem was that table name to file name encoding was
applied inconsistently:
* On CREATE: encode table name + database name if child
table is in different database; do not encode table
name if child table is in the same database;
* No decoding on open.
With this fix, child table/database names are always
encoded on CREATE and decoded on open. Compatibility
with older tables is preserved.
Along with this patch comes fix for SHOW CREATE TABLE,
which used to show child table/database path instead
of child table/database names.
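A minimal sketch (table names are illustrative):

  CREATE TABLE `t#child` (a INT) ENGINE = MyISAM;
  CREATE TABLE m1 (a INT) ENGINE = MERGE UNION = (`t#child`);

  SELECT * FROM m1;      -- the child must open despite '#' in its name
  SHOW CREATE TABLE m1;  -- now shows table names, not file paths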
mysql-test/r/merge.result:
A test case for BUG#48265.
mysql-test/std_data/bug48265.frm:
MERGE table from 5.0 to test fix for BUG#48265 compatibility.
mysql-test/t/merge.test:
A test case for BUG#48265.
storage/myisammrg/ha_myisammrg.cc:
On CREATE always write child table/database name encoded
by table name to filename encoding to dot-MRG file.
On open decode child table/database name.
Compatibility with previous versions is preserved.
Fixed ::append_create_info() to return child
table/database name instead of path.
storage/myisammrg/myrg_open.c:
Move if (has_path) branch from myrg_parent_open() to
myisammrg_parent_open_callback. The callback function
needs to know if child table was written along with
database name to dot-MRG file. Needed for compatibility
reasons.
If an outer query is broken, a subquery might not even get set up.
EXPLAIN EXTENDED did not expect this and merrily tried to de-ref all
of the half-setup info.
We now catch this case and print as much as we have, as it doesn't cost us
anything (doesn't make regular execution slower).
backport from 5.1
mysql-test/r/explain.result:
Show that EXPLAIN EXTENDED with a subquery and an illegal outer query doesn't crash.
Show also that SHOW WARNINGS will render an additional Note in the hope of
being, well, helpful.
mysql-test/t/explain.test:
If we have only half a query for EXPLAIN EXTENDED to print (i.e.,
incomplete subquery info as outer query is illegal), we should
provide the user with as much info as we easily can if they ask
for it. What we should not do is crash when they come asking for
help, that violates etiquette in some countries.
sql/item_subselect.cc:
If the sub-query's actually set up, print it. Otherwise, elide.
insert...select
Queries following a bulk insert into an empty MyISAM table
may break it. This was a pure MyISAM problem.
When a bulk insert into an empty table is complete, MyISAM
may want to enable indexes via repair by sort. If repair
by sort fails (e.g. insufficient buffer), MyISAM falls back
to repair with the key cache, requesting repair of the data
file. Repair of the data file performs data file substitution.
This means that the current table instance will point to the
new data file, while other cached table instances still point
to the old, deleted data file.
This is fixed by not requesting repair of the data file
when enabling indexes.
Explicit REPAIR is not affected, since it flushes all
table instances.
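A minimal sketch of the failing sequence (schema is illustrative;
a large enough data set plus the tiny sort buffer is one way to
provoke the repair-by-sort failure described above):

  CREATE TABLE t1 (a INT, KEY (a)) ENGINE = MyISAM;
  CREATE TABLE t2 (a INT);
  -- ... populate t2 with many rows ...
  SET SESSION myisam_sort_buffer_size = 4096;
  INSERT INTO t1 SELECT * FROM t2;  -- bulk insert into the empty t1
  CHECK TABLE t1;                   -- used to report corruption afterwards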
mysql-test/r/myisam.result:
A test case for BUG#51307.
mysql-test/t/myisam.test:
A test case for BUG#51307.
storage/myisam/ha_myisam.cc:
When enabling indexes do not attempt to repair data file.
START SLAVE UNTIL MASTER ... specifies that only the SQL thread
is to stop. rpl_slave_skip erroneously deployed waiting for both
threads to stop. Corrected by deploying the correct macro.
Note that a similar bug, bug@47749, was fixed earlier in mysql-trunk.
mysql-test/suite/rpl/t/rpl_slave_skip.test:
Changed two calls that waited for both threads to go offline
into waits for the SQL thread to stop.
Bug#51304: checksum table gives different results
for same data when using bit fields
Problem: the checksum for BIT fields may be computed incorrectly
in some cases due to their storage peculiarity.
Fix: convert a BIT field to a string, then calculate its checksum.
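A minimal sketch (schema and values are illustrative):

  CREATE TABLE t1 (a BIT(7), b INT) ENGINE = MyISAM;
  INSERT INTO t1 VALUES (b'1010101', 1);
  -- Identical data should now always yield the same checksum:
  CHECKSUM TABLE t1 EXTENDED;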
mysql-test/r/myisam.result:
Fix for bug#51304: checksum table gives different results
for same data when using bit fields
- test result.
mysql-test/t/myisam.test:
Fix for bug#51304: checksum table gives different results
for same data when using bit fields
- test case.
sql/sql_table.cc:
Fix for bug#51304: checksum table gives different results
for same data when using bit fields
- convert BIT fields to strings when calculating their checksums,
as some bits may be stored among the NULL bits in the record buffer.
The Item_field::print method did not take into
account fields whose values may be NULL.
The fix is to print 'NULL' if the field value is NULL.
mysql-test/r/explain.result:
test case
mysql-test/r/func_str.result:
result fix
mysql-test/r/having.result:
result fix
mysql-test/r/select.result:
result fix
mysql-test/r/subselect.result:
result fix
mysql-test/r/union.result:
result fix
mysql-test/t/explain.test:
test case
sql/item.cc:
print 'NULL' if field value is null.
The problem was that the CSV storage engine does not support NULL
fields, yet in some early 5.1 version the log tables (general_log
and slow_log) were created with null fields. On top of this, when
altering a CSV table column, all fields of the table must be NOT
NULL otherwise the alteration fails.
The solution is to ensure that during upgrade all columns of the
log tables are NOT NULL.
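A hypothetical sketch of the kind of statement the upgrade script
applies (the actual DDL lives in scripts/mysql_system_tables_fix.sql;
the column shown is illustrative):

  ALTER TABLE mysql.general_log
    MODIFY argument MEDIUMTEXT NOT NULL;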
mysql-test/r/log_tables_upgrade.result:
Add test case result for Bug#49823.
mysql-test/std_data/bug49823.CSV:
Sample data for test.
mysql-test/std_data/bug49823.frm:
Add a CSV table which mimics the general_log table, except that
it contains a nullable column.
mysql-test/t/log_tables_upgrade.test:
Add test case for Bug#49823.
scripts/mysql_system_tables_fix.sql:
Ensure that all columns of the log tables are NOT NULL.
The test failed due to Bug #29790.
However, the logic of the failing part does not need to select
from I_S. The fix removes the non-deterministic I_S select, as it
is redundant, from the part of the test dealing with BUG@22864.
mysql-test/suite/rpl/r/rpl_row_create_table.result:
results updated.
mysql-test/suite/rpl/t/disabled.def:
rpl_row_create_table is re-enabled.
mysql-test/suite/rpl/t/rpl_row_create_table.test:
removed the redundant, non-deterministic I_S select
from the part of the test dealing with BUG@22864.
The problem is that cond->fix_fields(thd, 0) breaks
the condition (cuts off 'having'). The reason is that
a NULL-valued Item pointer is present in the
middle of the Item list, and it breaks the Item
processing loop.
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/item_cmpfunc.h:
added an assert to make sure that we do not add a NULL-valued Item pointer
sql/sql_select.cc:
skip adding an item to the condition if the Item pointer is NULL;
skip adding a list to the condition if the list is empty.
Bug#50843: Filesort used instead of clustered index led to
performance degradation.
The filesort + join cache combination is preferred to a full index
scan because it is usually faster. But that is not the case when
the index is a clustered one.
Now the test_if_skip_sort_order function prefers filesort only if
the index isn't clustered.
mysql-test/r/innodb_mysql.result:
Added a test case for the bug#50843.
mysql-test/t/innodb_mysql.test:
Added a test case for the bug#50843.
sql/sql_select.cc:
Bug#50843: Filesort used instead of clustered index led to
performance degradation.
Now test_if_skip_sort_order function prefers filesort only if index isn't
clustered.
Detailed revision comments:
r6538 | sunny | 2010-01-30 00:43:06 +0200 (Sat, 30 Jan 2010) | 6 lines
branches/5.1: Check *first_value every time against the column max
value and set *first_value to next autoinc if it's > col max value,
i.e. do not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
Detailed revision comments:
r6536 | sunny | 2010-01-30 00:13:42 +0200 (Sat, 30 Jan 2010) | 6 lines
branches/5.1: Check *first_value every time against the column max
value and set *first_value to next autoinc if it's > col max value,
i.e. do not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
Propagation of a large unsigned numeric constant
in the WHERE expression led to a wrong result.
For example,
"WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND FOO(a)",
where a is an UNSIGNED BIGINT and FOO() accepts strings,
was transformed to "... AND FOO('-1')".
That has been fixed.
Also, EXPLAIN EXTENDED printed incorrect numeric constants in
transformed WHERE expressions like the above. That has been
fixed too.
mysql-test/r/bigint.result:
Added test case for bug #45360.
mysql-test/t/bigint.test:
Added test case for bug #45360.
sql/item.cc:
Bug #45360: wrong results
Since the Item_int_with_ref (and underlying Item_int)
class accepts both signed and unsigned 64-bit values,
the Item_int::val_str and Item_int::print methods have been
modified to take unsigned_flag into account.
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense,
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
mysql-test/r/join.result:
Added a test case for bug #50335.
mysql-test/t/join.test:
Added a test case for bug #50335.
sql/sql_select.cc:
Removing the assertion in eq_ref_table() as it does not make
any sense. We also have to take care of the 'found' counter so
that we count multiple references only once. We rely on this
fact later in eq_ref_table().
Backport of bug@30703 to 5.1.
The fixes are backed up with a regression test.
mysql-test/include/test_fieldsize.inc:
the wait for stop now applies exclusively to the SQL thread.
mysql-test/suite/rpl/r/rpl_show_slave_running.result:
new results file is added.
mysql-test/suite/rpl/t/rpl_show_slave_running.test:
regression test for bug#30703 is added.
sql/mysqld.cc:
refining the `show status like slave_running' handler to
correspond to that of `show slave status'.
sql/slave.cc:
A dbug-sync point is added to complement the regression test.
For temporary tables that are created with an engine that does
not provide the HTON_CAN_RECREATE, the truncate operation is
performed resorting to the optimized handler::ha_delete_all_rows
method. However, this means that the truncate shares its
execution path, from mysql_delete, with truncate on regular
tables and other delete operations. As a consequence, the
truncate operation for the temporary table is logged even in
row mode, because there is no distinction between this and the
other delete operations at binlogging time.
We fix this by checking, before writing to the binary log,
whether (i) the binlog format, when the truncate operation was
issued, is ROW; (ii) the operation is a truncate; and (iii) the
table is a temporary table. If all three conditions are met, we
skip writing to the binlog. A side effect of this fix is that we
limit the scope of setting and resetting
current_stmt_binlog_row_based. Now we just set and reset it
inside mysql_delete, in the boundaries of the
handler::ha_write_row loop. This way we have access to the real
value of thd->current_stmt_binlog_row_based inside
mysql_delete.
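A minimal sketch of the no-longer-logged operation (schema is
illustrative):

  SET SESSION binlog_format = ROW;
  CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE = InnoDB;
  -- With the fix, this TRUNCATE is not written to the binary log:
  TRUNCATE TABLE tmp1;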
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result:
Updated result for spurious truncate table.
mysql-test/suite/binlog/t/binlog_row_innodb_truncate.test:
Test case.
sql/sql_delete.cc:
Added check in mysql_delete before writing the TRUNCATE statement
to the binary log. Additionally, removed the set/reset of
current_stmt_binlog_row_based so that it happens just in the
boundaries of the handler::ha_write_row loop inside mysql_delete.
When EXPLAIN EXTENDED tries to print column names, it checks whether the
referenced table is CONST (in which case, the column's value rather than
its name will be printed). If no proper table is referenced (i.e.
because a derived table was used that has since gone out of scope),
this will fail spectacularly.
This ports an equivalent of the fix for Bug 43354.
mysql-test/r/func_gconcat.result:
Show that EXPLAIN EXTENDED on a GROUP_CONCAT() on a derived table
no longer crashes the server.
mysql-test/t/func_gconcat.test:
Show that EXPLAIN EXTENDED on a GROUP_CONCAT() on a derived table
no longer crashes the server.
sql/item_sum.cc:
Do not de-ref what cannot be, that is, temp-tables that have gone away.
This is of questionable utility anyway, since our deref has the sole
purpose of checking whether the table is const (in which case, we'll
substitute the column with its value in EXPLAIN EXTENDED - that is all).
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
The code now distinguishes between IGNORE and ERROR_FOR_NULL and
saves/restores the check-field options.
mysql-test/r/trigger.result:
Show that UPDATE...SET...NULL on NOT NULL columns doesn't behave differently
when run after a trigger.
mysql-test/t/trigger.test:
Show that UPDATE...SET...NULL on NOT NULL columns doesn't behave differently
when run after a trigger.
sql/field_conv.cc:
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL.
Distinguish between the two.
sql/sp_head.cc:
raise error as needed
sql/sql_class.cc:
Save and restore check-fields options.
sql/sql_class.h:
Make room so we can save check-fields options.
sql/sql_insert.cc:
raise error as needed
I found three issues during the analysis:
1. Memory leak caused by temp_buf not being freed;
2. Memory leak caused when handling argv;
3. Conditional jump that depended on uninitialized values.
Issue #1
--------
DESCRIPTION: when mysqlbinlog is reading from a remote location,
the event temp_buf references the incoming stream (in the NET
object), which is not freed by mysqlbinlog explicitly. On the
other hand, when it is reading a local binary log, temp_buf
points to a temporary buffer that needs to be explicitly freed.
In both cases the temp_buf was not freed by mysqlbinlog; instead
it was set to 0. This clearly disregards the free required in
the second case, thus creating a memory leak.
FIX: we make temp_buf conditionally freed, depending on
the value of remote_opt. A similar fix is already present
in the most recent codebases.
Issue #2
--------
DESCRIPTION: load_defaults is called by parse_args, and it
reads default options from configuration files and puts them
BEFORE the arguments that are already in argc and argv. This is
done using a MEM_ROOT. However, parse_args calls
handle_options immediately after, which changes argv. Later,
when freeing the defaults, the pointers to the MEM_ROOT won't
match, causing the memory not to be freed:
void free_defaults(char **argv)
{
MEM_ROOT ptr;
memcpy_fixed((char*) &ptr,(char *) argv - sizeof(ptr), sizeof(ptr));
free_root(&ptr,MYF(0));
}
FIX: we remove load_defaults from parse_args and call it
beforehand. Then we save argv with defaults in defaults_argv
BEFORE calling parse_args (which can then call handle_options
at will). It turned out that this is in fact a kind of
backport of BUG#38468 into 5.1, so I merged in its test case
as well and added an error check for the load_defaults call.
Fix based on:
revid:zhenxing.he@sun.com-20091002081840-uv26f0flw4uvo33y
Issue #3
--------
DESCRIPTION: the st_print_event_info constructor
did not initialize the sql_mode member, although it did
initialize sql_mode_inited (set to false). This would later
raise a warning in valgrind when printing the sql_mode in the
event header, as this printout is protected by a check against
the sql_mode_inited and sql_mode variables. Given that sql_mode
was not initialized, valgrind would output the warning.
FIX: we add initialization of sql_mode to the
st_print_event_info constructor.
client/mysqlbinlog.cc:
- Conditionally free ev->temp_buf.
- save defaults_argv before handle_options is called.
mysql-test/t/mysqlbinlog.test:
Added test case from BUG#38468.
sql/log_event.cc:
Added initialization of sql_mode for st_print_event_info.
Table corruption happens during table reading in the
ha_tina::find_current_row() function.
The Field::store() method returns an error (true) if the stored
value is 0.
The fix: a special case was added for the enum type, which
correctly processes a 0 value.
Additional fix: INSERT...(default) and INSERT...() now have the
same behaviour for the enum type.
mysql-test/r/csv.result:
test result
mysql-test/r/default.result:
result fix
mysql-test/t/csv.test:
test case
sql/item.cc:
Changes:
do not print a warning for the 'enum' type if there is no
default value; set the default value.
storage/csv/ha_tina.cc:
Table corruption happens during table reading in the
ha_tina::find_current_row() function.
The Field::store() method returns an error (true) if the stored
value is 0.
The fix: a special case was added for the enum type, which
correctly processes a 0 value.
Some logic would always group tests by suite;
disable this when using --noreorder.
Also fix getting the array from collect_one_suite() in this case.
Amended according to a previous review comment.
The problem is that during temporary table creation, uneven bits
are not taken into account for hidden fields. This leads to an
incorrect calculation and allocation of the null-bytes size for
the table record, and if a grouped value is NULL we set the wrong
bit for it (see end_update()).
Fixed by adding a separate calculation of uneven bits for hidden
fields.
mysql-test/r/type_bit.result:
test case
mysql-test/t/type_bit.test:
test case
sql/sql_select.cc:
added a separate calculation of uneven bits for hidden fields
This bug is just one facet of stored routines not being able to
detect changes in meta-data (WL#4179). This particular problem
can be triggered within a single session due to the improper
management of the pre-locking list if the view is expanded after
the pre-locking list is calculated.
Since the overall solution for the meta-data detection issue is
planned for a later release, for now a workaround is used to
fix this particular aspect that only involves a single session.
The workaround is to flush the thread-local stored routine cache
every time a view is created or modified, causing locally cached
routines to be re-evaluated upon invocation.
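A minimal sketch of the single-session effect (names are
illustrative; the original bug scenario is more involved):

  CREATE VIEW v1 AS SELECT 1 AS a;
  CREATE PROCEDURE p1() SELECT a FROM v1;
  CALL p1();
  ALTER VIEW v1 AS SELECT 2 AS a;
  -- The thread-local SP cache was flushed by ALTER VIEW, so the
  -- routine is re-evaluated and sees the new view definition:
  CALL p1();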
mysql-test/r/sp-bugs.result:
Add test case result for Bug#50624.
mysql-test/t/sp-bugs.test:
Add test case for Bug#50624.
sql/sp_cache.cc:
Update function description.
sql/sql_view.cc:
Invalidate the SP cache if a view is being created or modified.
The problem becomes apparent only if HAVE_purify is undefined.
It is related to the code in the open_table_from_share()
function, where we initialize the record buffer only if
HAVE_purify is enabled. So with HAVE_purify off, the record
buffer is not initialized at table-open time.
Next, we read a key, find a NULL value and update the appropriate
null bit, but do not update the record buffer. After that the
record is stored in the join cache (store_record_in_cache). For
CHAR fields we strip trailing spaces, and in our case this
procedure uses the uninitialized record buffer.
The fix is to skip the space-stripping procedure for NULL values
of CHAR fields (partially based on the 6.0 JOIN_CACHE
implementation).
mysql-test/r/join.result:
test case
mysql-test/t/join.test:
test case
sql/field.cc:
code updated according to new CACHE_FIELD struct
sql/sql_select.cc:
code updated according to new CACHE_FIELD struct
sql/sql_select.h:
CACHE_FIELD struct:
added new fields: Field *field, uint type;
removed fields: Field_blob *blob_field, bool strip;
removed in MySQL 6.0
CREATE TABLE... TYPE= returns the warning "The syntax
'TYPE=storage_engine' is deprecated and will be removed in
MySQL 6.0. Please use 'ENGINE=storage_engine' instead"
This syntax is deprecated already from version 5.4.4, so
the message has been changed.
In addition, the deprecation macro was changed to reflect
the ServerPT decision not to include version number in the
warning message.
A number of test result files have been changed as a
consequence of the change in the deprecation macro.
Queries optimized with GROUP_MIN_MAX didn't clean up the KEYREAD
optimization properly. As a result, subsequent queries may
return incomplete rows (fields are initialized to default
values).
mysql-test/r/group_min_max.result:
A test case for BUG#49902.
mysql-test/t/group_min_max.test:
A test case for BUG#49902.
sql/opt_range.cc:
Refactor of KEYREAD optimization switch so that KEYREAD
handler state is in sync with st_table::key_read flag.
All SQL code is supposed to switch KEYREAD optimization
via st_table::set_keyread().
sql/opt_sum.cc:
Refactor of KEYREAD optimization switch so that KEYREAD
handler state is in sync with st_table::key_read flag.
All SQL code is supposed to switch KEYREAD optimization
via st_table::set_keyread().
sql/sql_select.cc:
Refactor of KEYREAD optimization switch so that KEYREAD
handler state is in sync with st_table::key_read flag.
All SQL code is supposed to switch KEYREAD optimization
via st_table::set_keyread().
sql/sql_update.cc:
Refactor of KEYREAD optimization switch so that KEYREAD
handler state is in sync with st_table::key_read flag.
All SQL code is supposed to switch KEYREAD optimization
via st_table::set_keyread().
sql/table.cc:
Refactor of KEYREAD optimization switch so that KEYREAD
handler state is in sync with st_table::key_read flag.
All SQL code is supposed to switch KEYREAD optimization
via st_table::set_keyread().
sql/table.h:
Refactor of KEYREAD optimization switch so that KEYREAD
handler state is in sync with st_table::key_read flag.
All SQL code is supposed to switch KEYREAD optimization
via st_table::set_keyread().
Grouping by a subquery in a query with a distinct aggregate
function led to a wrong result (wrong and unordered
grouping values).
There are two related problems:
1) A query like this:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa
returned a wrong result, because the outer reference "t1.a"
in the subquery was substituted with an Item_ref item.
The Item_ref item obtains data from the result_field object,
which is refreshed only after the end of each group. This data
is not applicable to filesort, since filesort() doesn't care
about groups (and doesn't update result_field objects with
copy_fields() and so on). That data is also not applicable to
the group separation algorithm: end_send_group() checks every
record with test_if_group_changed(), which evaluates Item_ref
items, but it refreshes those Item_ref-s only after the end
of the group. That is a vicious circle, and the grouped column
values in the output are shifted.
Fix: if
a) we are grouping by a subquery and
b) that subquery has outer references to the FROM list
of the grouping query,
then we substitute these outer references with
Item_direct_ref-like references under aggregate
functions: Item_direct_ref obtains data directly
from the current record.
2) The query with a non-trivial grouping expression like:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa+0
also returned a wrong result, since JOIN::exec() substitutes
references to top-level aliases in the SELECT list with Item_copy
caching items. Item_copy items have the same refreshing policy
as Item_ref items, so the whole grouping expression with
Item_copy inside returns a wrong result in filesort() and
end_send_group().
Fix: include the aliased items in the GROUP BY item tree instead
of Item_ref references to them.
mysql-test/r/group_by.result:
Test case for bug #45640
mysql-test/t/group_by.test:
Test case for bug #45640
sql/item.cc:
Bug #45640: optimizer bug produces wrong results
Item_field::fix_fields() has been modified to resolve
aliases in GROUP BY item trees into aliased items instead
of Item_ref items.
sql/item.h:
Bug #45640: optimizer bug produces wrong results
- Item::find_item_processor() has been introduced.
- Item_ref::walk() has been modified to apply processors
to itself too (not only to referenced item).
sql/mysql_priv.h:
Bug #45640: optimizer bug produces wrong results
fix_inner_refs() has been modified to accept group_list
parameter.
sql/sql_lex.cc:
Bug #45640: optimizer bug produces wrong results
Initialization of st_select_lex::group_fix_field has
been added.
sql/sql_lex.h:
Bug #45640: optimizer bug produces wrong results
The st_select_lex::group_fix_field field has been introduced
to control alias resolution in Item_field::fix_fields.
sql/sql_select.cc:
Bug #45640: optimizer bug produces wrong results
- The fix_inner_refs function has been modified to treat
subquery outer references like outer fields under aggregate
functions, if they are included in GROUP BY item tree.
- The find_order_in_list function has been modified to
fix Item_field alias fields included in the GROUP BY item
trees in a special manner.
logging is disabled
The server would hit an assertion because of a DBUG violation:
there was a missing DBUG_RETURN, and a plain return
was used instead.
This patch replaces the return with DBUG_RETURN.
into slow log
While processing a statement, down the mysql_parse execution
stack, the thd->enable_slow_log can be assigned to
opt_log_slow_admin_statements, depending whether one is executing
administrative statements, such as ALTER TABLE, OPTIMIZE,
ANALYZE, etc, or not. This can have an impact on slow logging for
statements that are executed after an administrative statement
execution is completed.
When executing statements directly from the user this is fine,
because thd->enable_slow_log is reset right at the beginning
of the dispatch_command function, i.e., every time a new
statement is set to execute.
On the other hand, for the slave SQL thread (sql_thd) the story
is a bit different. When in SBR, the sql_thd applies statements
by calling mysql_parse. Right after, it calls the
log_slow_statement function to log them if they take too long.
Calling mysql_parse directly is fine, but it also means that the
dispatch_command function is bypassed. As a consequence,
thd->enable_slow_log does not get a chance to be reset before
the next statement to be executed by the sql_thd. If the
statement just executed by the sql_thd was an administrative
statement and logging of admin statements was disabled, this
means that sql_thd->enable_slow_log will be set to 0 (disabled)
from that moment on. End result: the sql_thd stops logging slow
statements.
We fix this by resetting the value of sql_thd->enable_slow_log to
the value of opt_log_slow_slave_statements right after
log_slow_statement is called by the sql_thd.
To 5.x Release
Notes
=====
This is a backport of BUG#23300 into 5.1 GA.
Original cset revid (in betony):
luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h
Description
===========
When using replication, the slave will not write to the slow
query log any queries replicated from the master, even if the
option "--log-slow-slave-statements" is set and they take more
than "long_query_time" to execute.
In order to log slow queries in the replication thread one needs
to set --log-slow-slave-statements, so that the SQL thread is
initialized with the correct switch. Although setting this flag
correctly configures the slave thread option to log slow queries,
there is an issue with the condition that is used to check
whether to log the slow query or not. When replaying binlog
events, the statement contains the SET TIMESTAMP clause, which
will force the slow-logging condition check to fail.
Consequently, the slow query logging will not take place.
This patch addresses this issue by removing the second condition
from log_slow_statements, as it prevents slow queries from being
logged and seems to be deprecated.
When using MyISAM tables and processing concurrent DML
statements, the server may send back an OK to the client
before actually finishing the transaction commit procedure. This
has been reported before in BUG@37521 and BUG@29334.
This particular test case gets affected, because it performs the
following sequence:
connect (conn2, ...)
connection conn2;
LOAD DATA CONCURRENT ...
disconnect (conn2, ...)
connection master;
sync_slave_with_master
diff_tables
At this point diff_tables may report a difference in the table
content (the master seems to be missing the conn2 rows).
To work around this MyISAM concurrent-DML issue and
make this test case deterministic, we wait on conn2 until the
inserted rows show up in the table. After this, the test case
proceeds as it normally would before this patch.
write()/read()
Sometimes stopping/restarting the master or the slave can cause
a network error, which can produce the 'invalid file descriptor
-1 in syscall write()/read()' warnings. All involved test
cases except rpl_slave_load_remove_tmpfile hit this
kind of network error, so the warnings are expected.
The 'rpl_slave_load_remove_tmpfile' case hits a file error
instead, but it is deliberately testing that error, with the
following code:
DBUG_EXECUTE_IF("remove_slave_load_file_before_write",
my_close(fd,MYF(0)); fd= -1; my_delete(fname, MYF(0)););
So it is expected too.
To fix the problem, add the valgrind warnings to the global
suppression list to suppress them.
mysql-test/include/mtr_warnings.sql:
Added code to suppress valgrind warnings: invalid file
descriptor -1 in syscall write()/read().
The test case rpl_binlog_corruption fails on Windows because when
adding a line to the binary log index file it gets terminated
with CR+LF (which, btw, is the normal case on Windows, but not on
Unixes, which use LF). This causes a mismatch between the relay
log names, causing mysqld to report that it cannot find the log
file.
We fix this by creating the instrumented index file through
mysql, i.e., using SELECT ... INTO DUMPFILE ..., as opposed to
relying ultimately on OS commands like: -- echo "..." >
index. These changes go into the following file and make the
procedure platform independent:
include/setup_fake_relay_log.inc
Side note: when using SELECT ... INTO DUMPFILE ..., one needs to
check whether mysqld is running with secure_file_priv. If it is,
we do it in two steps: 1. create the file in the allowed
location; 2. move it to the datadir. If it is not, we just create
the file directly in the datadir (so the previous step 2 is not
needed).
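A minimal sketch of the DUMPFILE step (the path and file names are
hypothetical placeholders; '\n' keeps the index-file line
LF-terminated on any platform):

  SELECT 'slave-relay-bin.000001\n'
    INTO DUMPFILE '/path/to/datadir/relay-log.index';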
Performing a fulltext prefix search (a word with the truncation
operator) may cause a dead loop. The ft_min_word_len value
doesn't actually matter.
The problem was introduced along with the "smarter index merge"
optimization.
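A minimal sketch of the query shape that could loop (schema and
words are illustrative):

  CREATE TABLE t1 (a VARCHAR(255), FULLTEXT KEY (a)) ENGINE = MyISAM;
  INSERT INTO t1 VALUES ('aaaxx'), ('aaayy');
  -- A truncated word (prefix search) could previously never return:
  SELECT * FROM t1 WHERE MATCH (a) AGAINST ('+aaa*' IN BOOLEAN MODE);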
mysql-test/r/fulltext.result:
A test case for BUG#50351.
mysql-test/t/fulltext.test:
A test case for BUG#50351.
storage/myisam/ft_boolean_search.c:
When going up to first-level tree, we need to restore docid[0],
so it informs fulltext index merge not to enter this second-level
tree again (avoiding dead-loop).
There were two problems:
The first was the symptom, caused by bad error handling in
ha_partition: it did not handle print_error etc. when
it had no partitions (when used by the dummy handler).
The second was the real problem: when dropping tables,
the server reused the table type (storage engine) from when the
lock was asked for, not the table type that it had when gaining
the exclusive name lock. So it tried to delete tables
via the wrong storage engine.
The solution for the first problem was to accept some handler
calls to the partitioning handler even if it was not set up
with any partitions, and where possible to fall back
to the base handler's default functions.
The solution for the second problem was to remove the
optimization that reused the definition from the cache, and
instead always check the frm-file while holding the LOCK_open
mutex
(updated with a fix for a debug-print crash and better
comments as required by the reviewer, and removed the
optimization that avoided reading the frm-file).
mysql-test/r/partition_debug_sync.result:
Bug#42438: Crash ha_partition::change_table_ptr
New result file using DEBUG_SYNC for deterministic results.
mysql-test/t/partition_debug_sync.test:
Bug#42438: Crash ha_partition::change_table_ptr
New test file using DEBUG_SYNC for deterministic results.
sql/ha_partition.cc:
Bug#42438: Crash ha_partition::change_table_ptr
allow some handler calls, used by error handling, even when
no partitions are set up. Fall back to default handling if possible.
sql/sql_base.cc:
Bug#42438: Crash ha_partition::change_table_ptr
Added DEBUG_SYNC point for deterministic test cases.
sql/sql_table.cc:
Bug#42438: Crash ha_partition::change_table_ptr
Always use the table type written in the .frm-file
(i.e. the current table type) when deleting a table.
Moved the check for log-table to not depend of the cache.
Added DEBUG_SYNC points for deterministic test cases.
REVOKE/GRANT; ALTER EVENT.
The following statements support CURRENT_USER() where a user is needed:
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
However, when these statements are binlogged, CURRENT_USER() is
written literally as 'CURRENT_USER()'; it is not expanded to the
real user name. When the slave executes the log event,
'CURRENT_USER()' is expanded to the user of the slave SQL thread,
but the SQL thread's user name is always NULL. This breaks
replication.
After this patch, all of the above statements are rewritten when
they are binlogged, with CURRENT_USER() expanded to the real
user's name and host.
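A minimal sketch (user and database names are hypothetical):

  GRANT SELECT ON db1.* TO CURRENT_USER();
  -- After the patch, the binlogged form carries the expanded name,
  -- e.g.: GRANT SELECT ON db1.* TO 'user1'@'localhost'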
Fixed two problems:
1. test_if_order_by_key() was continuing on the primary key
as if it had a primary key suffix (as the secondary keys do).
This led to crashes in ORDER BY <pk>,<pk>.
Fixed by not treating the primary key as a secondary one
and not depending on it being clustered with a primary key.
2. The cost calculation was trying to read the records
per key when operating on ORDER BYs that order on all of the
secondary key + some of the primary key.
This leads to crashes because of out-of-bounds array access.
Fixed by assuming we'll find 1 record per key in such cases.
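A minimal sketch of the first crashing shape (schema is
illustrative):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE = InnoDB;
  -- Ordering twice by the primary key used to crash in
  -- test_if_order_by_key():
  SELECT a FROM t1 ORDER BY a, a;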
Bug#49897: crash in ptr_compare when char(0) NOT NULL
column is used for ORDER BY
Problem: filesort isn't meant for null-length sort data
(e.g. char(0)), which leads to a server crash.
Fix: disregard the sort order if the sort-data record length is 0
(there is nothing to sort).
mysql-test/r/select.result:
Fix for bug#49897: crash in ptr_compare when char(0) NOT NULL
column is used for ORDER BY
- test result.
mysql-test/t/select.test:
Fix for bug#49897: crash in ptr_compare when char(0) NOT NULL
column is used for ORDER BY
- test case.
sql/filesort.cc:
Fix for bug#49897: crash in ptr_compare when char(0) NOT NULL
column is used for ORDER BY
- assert added as filesort cannot handle null length sort data.
sql/sql_select.cc:
Fix for bug#49897: crash in ptr_compare when char(0) NOT NULL
column is used for ORDER BY
- don't sort null length data e.g. in case of ORDER BY CHAR(0).
The problem was that a DROP TRIGGER statement inside a stored
procedure could cause a crash in subsequent invocations. This
was due to the addition, on the first execution, of a temporary
table reference to the stored procedure query table list. In
a subsequent invocation, there would be an attempt to reinitialize
the temporary table reference, which by then was already gone.
The solution is to backup and reset the query table list each
time a trigger needs to be dropped. This ensures that any temp
changes to the query table list are discarded. It is safe to
do so at this time as drop trigger is restricted from more
complicated scenarios (ie, not allowed within stored functions,
etc).
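A minimal sketch of the previously crashing pattern (names are
illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;
  CREATE PROCEDURE p1() DROP TRIGGER trg1;
  CALL p1();
  -- The second invocation used to crash; now it simply fails
  -- because the trigger no longer exists:
  CALL p1();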
mysql-test/r/sp-bugs.result:
Add test case result for Bug#50423
mysql-test/t/sp-bugs.test:
Add test case for Bug#50423
sql/sql_trigger.cc:
Backup and reset the query table list.
Remove now unnecessary manual reset of the query table list.
Server crashes when accessing ARCHIVE table with missing
.ARZ file.
When opening a table, ARCHIVE didn't properly pass through the
error code from the lower-level azopen() to the higher-level
open() method.
mysql-test/r/archive.result:
A test case for BUG#48757.
mysql-test/t/archive.test:
A test case for BUG#48757.
storage/archive/ha_archive.cc:
Pass through error code from azopen().
Bulk REPLACE or bulk INSERT ... ON DUPLICATE KEY UPDATE may
break a dynamic-record MyISAM table.
The problem is limited to bulk REPLACE and INSERT ... ON
DUPLICATE KEY UPDATE, because only these operations may
be done via UPDATE internally and may request a write cache.
When flushing the write cache, MyISAM may write the remaining
cached data at a wrong position. Fixed by requesting the write
cache to seek to the correct position.
mysql-test/r/myisam.result:
A test case for BUG#49628.
mysql-test/t/myisam.test:
A test case for BUG#49628.
storage/myisam/mi_dynrec.c:
delete_dynamic_record() may change data file position.
IO cache must be notified as it may still have cached
data, which has to be flushed later.
table and view...
Invalid memory reads occurred after a query referencing a MyISAM
table multiple times with a write lock. Invalid memory reads may
lead to a server crash, valgrind warnings, or incorrect values
in INFORMATION_SCHEMA.TABLES.{TABLE_ROWS, DATA_LENGTH,
INDEX_LENGTH, ...}.
This may happen when one of the table instances gets closed
after a query, e.g. when out of slots in the open tables cache.
UNION, MERGE and VIEW are irrelevant.
The problem was that MyISAM didn't restore the state info
pointer to its default value.
myisam/mi_locking.c:
When a query is referencing MyISAM table multiple times
with a write lock, all table instances share the same
state info, pointing to MI_INFO::save_state of
"primary" table instance.
When the lock is released, the state pointer was restored only
for the primary table instance. Secondary table instances
were still pointing to the save_state of the primary table
instance.
The primary table instance may get closed, leaving the
secondary table instances' state pointers pointing to freed
memory. That's mostly OK, since the next lock will update the
state info pointer to the correct value. But there are some
cases when this secondary table instance state info is accessed
without a lock, e.g. INFORMATION_SCHEMA, MERGE (in 5.1
and up), or MyISAM itself for DBUG purposes.
Restore the default value of the state pointer unconditionally,
for both primary and secondary table instances.
mysql-test/r/myisam.result:
A test case for BUG#48438.
mysql-test/t/myisam.test:
A test case for BUG#48438.
In the case of 'CREATE VIEW', the subselect transformation does
not happen (see JOIN::prepare).
During fix_fields, Item_row may call the is_null() method for its
arguments, which leads to item evaluation (of the wrong subselect
in our case, as the transformation did not happen before). This
is_null() call does not make sense for 'CREATE VIEW'.
Note:
Only Item_row is affected, because other items don't call
is_null() during fix_fields() for their arguments.
mysql-test/r/view.result:
test case
mysql-test/t/view.test:
test case
sql/item_row.cc:
skip is_null() call in case of 'CREATE VIEW' as unnecessary.
SHOW CREATE TABLE on a view (v1) that contains a function whose
statement uses another view (v2) could trigger an infinite loop
if the view referenced within the function causes a warning to
be raised while opening the said view (v2).
The problem was an infinite loop over the stack of internal error
handlers. It would be triggered if the stack contained
two or more handlers and the first two handlers didn't handle the
raised condition; in this case, the loop variable would always
point to the second handler in the stack.
The solution is to correct the loop variable assignment so that
the loop is able to iterate over all handlers in the stack.
mysql-test/r/view.result:
Add test case result for Bug#48449.
mysql-test/std_data/bug48449.frm:
Add an incomplete view definition that causes a warning to be
issued.
mysql-test/t/view.test:
Add test case for Bug#48449
sql/sql_class.cc:
Iterate over all handlers in the stack.
in multitable delete/subquery
SQL_BUFFER_RESULT should not have an effect on non-SELECT
statements according to our documentation.
Fixed by not passing it through to multi-table DELETE (similarly
to how it's done for multi-table UPDATE).
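A minimal sketch of the statement shape affected (schema is
illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- SQL_BUFFER_RESULT in the subquery is now ignored by the
  -- multi-table DELETE instead of influencing its execution:
  DELETE t1 FROM t1
   WHERE t1.a IN (SELECT SQL_BUFFER_RESULT b FROM t2);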
The problem was that a failure to open a view wasn't being
properly handled. When opening a view with unknown definer,
the open procedure would be treated as successful and would
later crash when attempting to lock the view (which wasn't
opened to begin with).
The solution is to skip further processing when opening a
table if it fails with a fatal error.
mysql-test/r/view.result:
Add test case result for Bug#47734.
mysql-test/t/view.test:
Add test case for Bug#47734.
sql/sql_base.cc:
Skip further processing if opening a table failed due to
a fatal error (for the statement).
being logged to slow query log
The problem is that the execution time for a multi-statement
stored procedure as a whole may not be accurate, and thus the
procedure may not be entered into the slow query log even if its
total time exceeds long_query_time. The reason for this is that
THD::utime_after_lock, used for the time calculation, may be
reset at the start of each new statement, possibly leaving the
measured SP execution time equal to the time spent executing the
last statement in the SP.
This patch stores the utime on start of SP execution, and
restores it on exit of SP execution. A test is added.
mysql-test/suite/sys_vars/r/slow_query_log_func.result:
New test results for #47905.
mysql-test/suite/sys_vars/t/slow_query_log_func.test:
New test case for #47905.
sql/sp_head.cc:
Save and restore the THD::utime_after_lock on entry and
exit of execute_procedure().
error causes debug assertion
The IGNORE option of the multiple-table UPDATE command was
not intended to suppress errors caused by the
sql_safe_updates mode. This mode raises an error if the
execution of UPDATE does not use a key for row retrieval,
and should continue to do so regardless of the IGNORE option.
However the implementation of IGNORE does not support
exceptions to the rule; it always converts errors to
warnings and cannot be extended. The Internal_error_handler
interface offers the infrastructure to handle individual
errors, making sure that the error raised by
sql_safe_updates is not silenced.
Fixed by implementing an Internal_error_handler and using it
for UPDATE IGNORE commands.
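A minimal sketch of the preserved error (schema is illustrative):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (b INT);
  SET SQL_SAFE_UPDATES = 1;
  -- No key is used to retrieve t1's rows, so this must still fail
  -- despite IGNORE:
  UPDATE IGNORE t1, t2 SET t1.a = 1 WHERE t1.b = t2.b;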
WL#5182 is a follow-up to WL#5154, deprecating a few more options
and system variables.
client/client_priv.h:
The warning message has been changed to not include
a specific version number in the text.
client/mysql.cc:
--no-tee is deprecated
client/mysqldump.c:
--all is deprecated
-a now points to create-options
mysql-test/r/mysqlbinlog.result:
Warning text changed
mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result:
Warning text changed
sql/mysql_priv.h:
The warning message has been changed to not include
a specific version number in the text.
sql/mysqld.cc:
--use-symbolic-links is deprecated
-s now points to --symbolic-links
--warnings is deprecated
-W now points to --log-warnings
myisam_max_extra_sort_file_size is deprecated
record_buffer is deprecated
--log-update is deprecated
--sql-bin-update-same is deprecated
--skip-locking is deprecated
--skip-symlink is deprecated
--enable-locking is deprecated
--delay-key-write-for-all-tables is deprecated
The 'rpl_get_master_version_and_clock' test verifies whether the
slave I/O thread tries to reconnect to the master when it tries
to get the values of UNIX_TIMESTAMP and SERVER_ID from the master
during a network disconnection.
The master server is restarted to produce the transient network
disconnection; during that period, COM_REGISTER_SLAVE failures
are written to the server log file as the slave I/O thread tries
to register on the master.
To fix the problem, suppress the COM_REGISTER_SLAVE failures in
the server log file with an mtr suppression, because they are
expected.
mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result:
Removed mtr.add_suppression("Get master clock failed with error: ")
and mtr.add_suppression("Get master SERVER_ID failed with error: "),
because they are now suppressed globally.
When replicating from a 4.1 master to a 5.0 slave, START SLAVE
UNTIL could stop too late.
The event's length, which is necessary for calculating the
beginning of an event, did not correspond to the master's genuine
information at the event's execution time. That piece of info was
changed when the event was relay-logged, due to the
binlog_version<4 event conversion performed by the IO thread.
Fixed by storing the master's genuine Query_log_event size in a
new status variable when the event is relay-logged. The stored
info is extracted at event execution time and is then used to
calculate the correct start position of the event in the
until-pos stopping routine.
The new status variable's algorithm is only active when the event
comes from a master of version < 5.0 (binlog_version < 4).
mysql-test/r/rpl_until.result:
results changed.
mysql-test/std_data/bug47142_master-bin.000001:
a binlog from 4.1 master to replace one of the running 5.x master is added as
part of Bug #47142 regression test.
mysql-test/t/rpl_until.test:
Regression test for Bug #47142 is added.
sql/log_event.cc:
Storing and extracting the master's genuine size of the event from the status
var of the event packet header.
The binlog_version<4 query-log-event is
a. converted into the modern binlog_version==4 to store the original size of the event
into a new status var; the converted representation goes into the relay log.
b. the converted event is read out and the stored size is engaged in the start pos calculation.
The new status is active only for events that IO thread instantiates for the sake of the conversion.
sql/log_event.h:
Incrementing the max szie of MAX_SIZE_LOG_EVENT_STATUS because of the new status var;
Defining the new status variable to hold the master's genuine event size;
Augmenting the Query_log_event with a new member to hold a value to store/extact from the status
var of the event packet header.
Detailed revision comments:
r6471 | calvin | 2010-01-16 01:43:27 +0200 (Sat, 16 Jan 2010) | 4 lines
branches/5.1: fix bug#49396: main.innodb test fails in embedded mode
Changed replace_result to use $MYSQLD_DATADIR. Tested in both
embedded mode and normal server mode.