Created new .test file - mysqldump_restore that does test restore from mysqldump
output for a limited number of basic cases.
Created new .inc file - mysqldump.inc - renames original table and uses mysqldump
output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
New patch incorporating review feedback prior to push.
mysqldump.test - removed redundant call to include/have_log_bin.inc (was used twice in the test!)
assertion .\filesort.cc, line 797
A query with the "ORDER BY @@some_system_variable" clause,
where @@some_system_variable is NULL, causes assertion
failure in the filesort procedures.
The reason for the failure is the value of
Item_func_get_system_var::maybe_null: it was unconditionally
set to false even when the variable's value was NULL.
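A minimal sketch of the failing shape; @@some_system_variable is the
report's placeholder for any system variable whose current value is NULL:
  SELECT 1 ORDER BY @@some_system_variable;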
Created new .test file - mysqldump_restore that does test restore from mysqldump
of basic cases.
Created new .inc file - mysqldump.inc - renames original table and uses mysqldump
output to recreate the table, then uses diff_tables.inc to compare the two tables.
Backported include/diff_tables.inc to facilitate this testing.
bug#44766: valgrind error when using convert() in a subquery
Problem: the input and output buffers may be the same when
converting a string to some character set.
That may lead to wrong results or valgrind warnings.
Fix: use different buffers.
warnings after uncompressed_length
UNCOMPRESSED_LENGTH() did not validate its argument. In
particular, if the argument length was less than 4 bytes,
an uninitialized memory value was returned as a result.
Since the result of COMPRESS() is either an empty string or
a 4-byte length prefix followed by compressed data, the bug was
fixed by ensuring that the argument of UNCOMPRESSED_LENGTH() is
either an empty string or contains at least 5 bytes (as done in
UNCOMPRESS()). This is the best we can do to validate input
without decompressing.
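For reference, the argument format being validated can be observed
directly (standard COMPRESS()/UNCOMPRESSED_LENGTH() behavior):
  SELECT LENGTH(COMPRESS(''));                             -- 0: empty input stays empty
  SELECT LENGTH(COMPRESS('a')) >= 5;                       -- 1: 4-byte length prefix plus data
  SELECT UNCOMPRESSED_LENGTH(COMPRESS(REPEAT('a', 100)));  -- 100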
Detail:
The results for the "normal" server testcase variants
were already adjusted to the modified information_schema
content. Therefore just do the same for the embedded
server variants.
The internal InnoDB FK parser does not recognize '\'' as a quotation symbol.
The suggested fix is to add a check for the '\'' symbol to the quotation
condition in the dict_strip_comments() function.
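A hypothetical illustration of the input class involved (an assumption, not
taken from the report): a single-quoted string containing comment-like markers
must not confuse the comment stripper:
  CREATE TABLE t1 (
    a INT PRIMARY KEY,
    b INT,
    FOREIGN KEY (b) REFERENCES t1 (a)
  ) ENGINE=InnoDB COMMENT = 'quoted /* not a comment */ text';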
with a "HAVING" clause though query works
SELECT from views defined like:
CREATE VIEW v1 (view_column)
AS SELECT c AS alias FROM t1 HAVING alias
fails with error 1356:
View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights
to use them
CREATE VIEW form with a (column list) substitutes
SELECT column names/aliases with names from a
view column list.
However, alias references in the HAVING clause were
not substituted.
The Item_ref::print function has been modified
to write correct aliased names of underlying
items into VIEW definition generation/.frm file.
The RAND(N) function, where N is a field of a "constant" table
(a table of a single row), failed with a SIGFPE.
Evaluation of RAND(N) relies on the constant status of its argument.
The current server "seeded" the random value for each constant argument
only once, in the Item_func_rand::fix_fields method.
Then the server skipped the call to seed_random() in the
Item_func_rand::val_real method for such constant arguments.
However, the non-constant status of an argument may change
after the call to fix_fields if the argument is a field of a
"constant" table. Thus, pre-initialization of the random value
in the fix_fields method is too early.
Initialization of random value by seed_random() has been
removed from Item_func_rand::fix_fields method.
The Item_func_rand::val_real method has been modified to
call seed_random() on the first evaluation of this method
if an argument is a function.
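A minimal repro sketch matching the description above (hypothetical names):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  SELECT RAND(a) FROM t1;   -- t1 is a single-row "constant" table
  DROP TABLE t1;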
In order to better support the usage of
IBMDB2I tables from within RPG programs,
the storage engine should ensure that the
RCDFMT name is consistent and predictable
for DB2 tables.
This patch appends a "RCDFMT <name>"
clause to the CREATE TABLE statement
that is passed to DB2. <name> is
generated from the original name of
the table itself. This ensures a
consistent and deterministic mapping
from the original table.
For the sake of simplicity only
the alpha-numeric characters are
preserved when generating the new
name, and these are upper-cased;
other characters are replaced with
an underscore (_). Following DB2
system identifier rules, the name
always begins with an alpha-character
and has a maximum of ten characters.
If no usable characters are found in
the table name, the name X is used.
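Illustrative mappings under these rules (hypothetical table names):
  order_items  ->  RCDFMT ORDER_ITEM  (upper-cased, truncated to ten characters)
  order#2      ->  RCDFMT ORDER_2     (non-alphanumeric characters become _)
  ####         ->  RCDFMT X           (no usable characters)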
always rolls back.
The global variable max_binlog_cache_size cannot be set to more than 4GB on
32 bit systems, limiting transactions of all storage engines to 4GB of changes.
The problem is that max_binlog_cache_size is declared as ulong, which is 4 bytes
on 32 bit and 8 bytes on 64 bit machines.
Fixed by using ulonglong for max_binlog_cache_size, which is 8 bytes on both 32
and 64 bit machines. The range for max_binlog_cache_size on 32 bit and 64 bit
systems is 4096-18446744073709547520 bytes.
Details:
Most tests mentioned within the bug report were already fixed.
The test modified here failed in stability (high parallel load) tests.
Details:
1. Take care that disconnects are finished before the test terminates.
2. Correct wrong handling of send/reap in events_stress which caused
random garbled output
3. Minor beautifying of script code
It turns out that this test case no longer fails with the discrepancy
in numbers that was the original cause for disabling this test (and showed
potential genuine issues with the query cache). Therefore
this test is being enabled after some minor adjustment of error codes and
messages.
Details:
1. Add missing "disconnect <session>"
2. Take care that the disconnects are finished when the test terminates
3. Replace error names by error numbers
4. Minor beautifying of script code
Field_time::get_time() did not initialize some members of
MYSQL_TIME which led to valgrind warnings when those members
were accessed in Protocol_simple::store_time().
It is unlikely that this bug could result in wrong data
being returned, since Field_time::get_time() initializes the
'day' member of MYSQL_TIME to 0, so the value of 'day'
in Protocol_simple::store_time() would be 0 regardless
of the values for 'year' and 'month'.
In a UNION, if the last SELECT is written without braces and has an
ORDER BY clause, that clause belongs to the
global UNION. It is parsed as part of the last SELECT
and used further as the 'unit->global_parameters->order_list' value.
During DESCRIBE EXTENDED we call select_lex->print_order() for the
last SELECT, whose order fields refer to a tmp table
that has already been freed. This leads to a crash.
The fix is to clean up global_parameters->order_list
instead of fake_select_lex->order_list.
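A hypothetical sketch of the affected shape:
  CREATE TABLE t1 (a INT);
  EXPLAIN EXTENDED SELECT a FROM t1 UNION SELECT a FROM t1 ORDER BY a;
  DROP TABLE t1;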
The suspected reason for the failure is that safe_process.exe already runs in a job that does not allow breakaways.
The fix is to use a fallback - make the newly created process the root of a new process group. This allows killing the process together with its descendants via GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pid).
It is not possible to prevent the server from starting if a mandatory
built-in plugin fails to start. This can in some cases lead to data
corruption when the old table name space suddenly is used by a different
storage engine.
A boolean command line option in the form of --foobar is automatically
created for every existing plugin "foobar". By changing this command line
option from a boolean to a tristate { OFF, ON, FORCE } it is possible to
specify the plugin loading policy for each plugin.
The behavior is specified as follows:
OFF = Disable the plugin and start the server
ON = Enable the plugin and start the server even if an error occurs
during plugin initialization.
FORCE = Enable the plugin but don't start the server if an error occurs
during plugin initialization.
UNIX sockets need to be on a path shorter than 70 characters on some older platforms.
MTRv1 tries to fix this by moving the socket to $TMPDIR; however, this causes
issues with certain tests on Windows.
Fixed by not applying any hacks on Windows - Windows does not need them.
SQL_SELECT::test_quick_select
The crash was caused by an incomplete cleanup of JOIN_TAB::select
during the filesort of rows for GROUP BY clause inside a subquery.
Queries where a quick index access is replaced with filesort were
affected. For example:
SELECT 1 FROM
(SELECT COUNT(DISTINCT c1) FROM t1
WHERE c2 IN (1, 1) AND c3 = 2 GROUP BY c2) x
Quick index access related data in the SQL_SELECT::test_quick_select
function was inconsistent after an incomplete cleanup.
The cleanup has been completed to prevent crashes in the
SQL_SELECT::test_quick_select function.
and HAVING
When calculating GROUP BY the server caches some expressions. It does
that by allocating a string slot (Item_copy_string) and assigning the
value of the expression to it. This effectively means that the result
type of the expression can be changed from whatever it was to a string.
As this substitution takes place after the compile-time result type
calculation for IN but before the run-time type calculations,
it causes the type calculations in the IN function done at run time
to get unexpected results different from what was prepared at compile time.
In the CASE ... WHEN ... THEN ... statement there was a similar problem
and it was solved by artificially adding a STRING argument to the matrix
at compile time, so if any of the arguments of the CASE function changes
its type to a string it will still be covered by the information prepared
at compile time.
Extended the CASE fix to cover the IN case.
An alternative way of fixing this problem is by caching the result type of
the arguments at compile time and using the cached information at run time
instead of re-calculating the result types.
Preferred the CASE approach for uniformity and for localization of the fix.
BUG#42101 - Race condition in innodb_commit_concurrency
Detailed revision comments:
r4994 | marko | 2009-05-14 15:04:55 +0300 (Thu, 14 May 2009) | 18 lines
branches/5.1: Prevent a race condition in innobase_commit() by ensuring
that innodb_commit_concurrency>0 remains constant at run time. (Bug #42101)
srv_commit_concurrency: Make this a static variable in ha_innodb.cc.
innobase_commit_concurrency_validate(): Check that innodb_commit_concurrency
is not changed from or to 0 at run time. This is needed, because
innobase_commit() assumes that innodb_commit_concurrency>0 remains constant.
Without this limitation, the checks for innodb_commit_concurrency>0
in innobase_commit() should be removed and that function would have to
acquire and release commit_cond_m at least twice per invocation.
Normally, innodb_commit_concurrency=0, and introducing the mutex operations
would mean significant overhead.
innodb_bug42101.test, innodb_bug42101-nonzero.test: Test cases.
rb://123 approved by Heikki Tuuri
Problem: when executing queries like "ALTER TABLE view1;", we don't
check the new view's name (which is not specified),
which leads to a server crash.
Fix: do nothing (to be consistent with the behaviour for tables)
in such cases.
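A minimal sketch of the statement shape from the report (hypothetical names):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  ALTER TABLE v1;   -- crashed before the fix; now a no-op, as for tables
  DROP VIEW v1;
  DROP TABLE t1;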
Disabling these two tests as they are affected by this bug / causing PB2 failures
on Windows platforms. Can always disable via include/not_windows.inc if
the bug fix looks like it will take some time.
"freeing items"
The calculation of the table map log event in the event constructor
was one byte shorter than what would be actually written. This would
lead to a mismatch between the number of bytes written and the event
end_log_pos, causing bad event alignment in the binlog (corrupted
binlog) or in the transaction cache while fixing positions
(MYSQL_BIN_LOG::write_cache). This could lead to an impossible-to-read
binlog or even infinite loops in MYSQL_BIN_LOG::write_cache.
This patch addresses this issue by correcting the expected event
length in the Table_map_log_event constructor, when the field metadata
size exceeds 255.
Problem: when using LOAD_FILE(), in some cases we pass a file name string
without a trailing '\0' to fn_format(), which relies on it.
That may lead to valgrind warnings.
Fix: add a trailing '\0' to the file name passed to fn_format().
The problem is that the internal variable used to specify a
transaction with consistent read was being used outside the
processing context of a START TRANSACTION WITH CONSISTENT
SNAPSHOT statement. The practical consequence was that a
consistent snapshot specification could leak to unrelated
transactions on the same session.
The solution is to ensure a consistent snapshot clause is
only relied upon for the START TRANSACTION statement.
This is already fixed in a similar way on 6.0.
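A sketch of the leak on one session (assumed flow, not the test case itself):
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  COMMIT;
  BEGIN;    -- before the fix, this unrelated transaction could still
  COMMIT;   -- behave as if a consistent snapshot had been requested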
In the output from mysqlbinlog, incident log events were
represented as just a comment. Since the incident log event
represents an incident that could cause the contents of the
database to change without being logged to the binary log,
it means that if the SQL is applied to a server, it could
potentially lead to the databases being out of sync.
In order to handle that, this patch adds the statement "RELOAD
DATABASE" to the SQL output for the incident log event. This will
require a DBA to edit the file and handle the case as appropriate
before applying the output to a server.
Problem: when storing "SELECT ... INTO @var ..." results in variables, we used val_xxx()
methods, which returned results of the current row.
So, in some cases (e.g. SELECT DISTINCT, GROUP BY or HAVING) we got data
from the first row of a new group (where we evaluate a clause) instead of
data from the last row of the previous group.
Fix: use val_xxx_result() counterparts to get proper results.
The --hexdump option crashed mysqlbinlog when used together
with the --read-from-remote-server option due to use of
uninitialized memory.
Since Log_event::print_header() relies on temp_buf to be
initialized when the --hexdump option is present,
dump_remote_log_entries() was fixed to setup temp_buf to point
to the start of a binlog event as done in
dump_local_log_entries().
The root cause of this bug is identical to the one for
bug #17654. The latter was fixed in 5.1 and up, so this
patch is backport of the patches for bug #17654 to 5.0.
Only 5.0 needs a changelog entry.
Details:
innodb-autoinc-optimize
Add DROP TABLE which is missing (Backport of fix from 5.1)
innodb_notembedded
Take care that the disconnects of additional sessions
are completed.
Note:
The merge 5.0 -> 5.1 for innodb-autoinc-optimize
should be a "null" merge = no changes in 5.1.
with seg fault
Multiple-table DELETE from a table joined to itself may cause
server crash. This was originally discovered with MEMORY engine,
but may affect other engines with different symptoms.
The problem was that the server violated the SE API by performing a
parallel table scan in one handler and removing records through
another (the delete-on-the-fly optimization).
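A hypothetical repro sketch (originally observed with the MEMORY engine):
  CREATE TABLE t1 (a INT) ENGINE=MEMORY;
  INSERT INTO t1 VALUES (1),(2);
  DELETE x FROM t1 AS x, t1 AS y WHERE x.a = y.a;
  DROP TABLE t1;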
Certain multi-updates gave different results on InnoDB compared
to MyISAM, due to on-the-fly updates being used on the former while
the update order matters.
Fixed by turning off on-the-fly updates when update order
dependencies are present.
on cp932 and sjis environment.
Problem: case conversion erroneously changed the second bytes
of multi-byte sequences because single-byte functions were
called by mistake.
Fix: call multi-byte aware functions instead.
'INSERT ... SELECT' statements
The code that produces result rows expected that a duplicate row
error could not occur in INSERT ... SELECT statements with
unfulfilled WHERE conditions. This may happen, however, if the
SELECT list contains only aggregate functions.
Fixed by checking if an error occurred before trying to send EOF
to the client.
1. Replaced waiting for the SQL thread to stop with waiting for an SQL error on the slave and a stopped
SQL thread.
2. Removed debug code because it is already implemented in MTR2.
EXPLAIN EXTENDED of a nested query containing an error:
1054 Unknown column '...' in 'field list'
may cause a server crash.
A parse error like the one described above forces a call to
JOIN::destroy() on the malformed subquery.
That JOIN::destroy function closes and frees temporary
tables. However, temporary fields of these tables
may be listed in the st_select_lex::group_list of the outer
query, and that st_select_lex may not clean them up
properly. So, after the JOIN::destroy call, that
st_select_lex::group_list may contain Item_field
objects with dangling pointers to freed temporary
table Field objects. That caused the crash.
This patch adds corrections to the original patch
submitted 2009-04-08 (http://lists.mysql.com/commits/71607):
- fixed that the original patch didn't work because of an
incorrect condition;
- added a test case.
The error happens because sp_head::MULTI_RESULTS is not set for an SP
that contains a 'SHOW TABLE STATUS' command.
The fix is to add a SQLCOM_SHOW_TABLE_STATUS case to the
sp_get_flags_for_command() function.
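A minimal repro sketch (hypothetical names):
  CREATE PROCEDURE p1() SHOW TABLE STATUS LIKE 't1';
  CALL p1();   -- failed before the fix because MULTI_RESULTS was not set
  DROP PROCEDURE p1;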
A bug in the initialization of key segment information made it point
to the wrong bit, since a bit index was used when its int value
was needed. This led to misinterpretation of bit columns
read from the MyISAM record format when a NULL bit pushed them over
a byte boundary.
Fixed by using the int value of the bit instead.
MTR is stuck for about 20 seconds checking for free ports.
The reason is that perl's connect() takes 1 second on Windows
if the port is not open.
This patch fixes the mtr_ping_port implementation on Windows
to use Net::Ping for the port checking with a small (0.1 sec) timeout.
This patch also removes a pointless second call to check_ports_free()
in the case of an auto build thread.
(moved from Bug 42308)
Details:
- insert_update
Add DROP TABLE which was missing, error numbers -> names
- varbinary
Add DROP TABLE which was missing
- sp_trans_log
Add missing DROP function, improved formatting
The issue in the current bug is unguarded access to mi->slave_running
by the shutdown thread calling end_slave(); that is bug#29968
(which, alas, happened not to be cross-linked with the current bug).
Fixed:
by removing the unguarded read of the running status
and reading it instead in terminate_slave_thread()
at the time run_lock is taken (mostly backporting bug#29968, still with some
improvements over that patch - see the error reporting from
terminate_slave_thread()).
The issue of bug#38716 is fixed here for the 5.0 branch as well.
Note:
There has been a separate artifact identified -
a race condition between init_slave() and end_slave() -
reported as Bug#44467.
The Point() and Linestring() functions created a WKB representation of an
object instead of a real geometry object.
That produced bugs when these values were inserted into tables.
GIS tests fixed accordingly.
per-file messages:
mysql-test/r/gis-rtree.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/r/gis.result
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test result
mysql-test/t/gis-rtree.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - GeomFromWKB invocations removed
mysql-test/t/gis.test
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
test fixed - AsWKB invocations added
sql/item_geofunc.cc
Bug#38990 Arbitrary data input plus GIS functions causes mysql server crash
Point() and similar functions to create a proper object
Bug #40925: Equality propagation takes non indexed attribute
Query execution plans and execution time of queries like
select a, b, c from t1
where a > '2008-11-21' and b = a limit 10
depended on the order of equality operator parameters:
"b = a" and "a = b" are not same.
An equality propagation algorithm has been fixed:
the substitute_for_best_equal_field function should not
substitute a field for an equal field if both fields belong
to the same table.
Replaced "--exec diff" with "--diff_files", a mysqltest command that runs a
non-operating-system-specific diff. Removed the file rpl_000015-slave.sh as it is not
necessary in the new MTR.
Turned off autocommit at the start of this test per Innobase recommendation.
Noted a significant reduction in run time for this test, with a minor increase in other tests' run-times.
1) BUG#43309 - Test main.innodb can't be run twice
Detailed revision comments:
r4701 | vasil | 2009-04-13 17:03:46 +0300 (Mon, 13 Apr 2009) | 6 lines
branches/5.0:
Fix Bug#43309 Test main.innodb can't be run twice
by making the innodb.test reentrant.
perl
The problem here was the method by which MTR gets its unique thread ids.
Prior to this patch, the method was to maintain a global
table of (pid, mtr_unique_id) pairs. The table was backed by a text
file. The table was cleaned up once in a while, and dead processes leaking
unique_ids were determined with kill(0) or by scripting tasklist
on Windows.
This method is flawed specifically on native Windows Perl. fork() is
implemented by starting a new thread and giving it a synthetic negative PID
(threadID*(-1)) until this thread creates a new process with exec().
However, neither tasklist nor any other native Windows tool can cope with
negative perl PIDs. This led to incorrect determination of dead processes
and reusing already used mtr_unique_ids.
The patch introduces an alternative, portable method of solving the unique-id
problem. When a process needs a unique id in the range [min...max], it just
starts to open files named min, min+1, ... max in a loop. After a file is
opened, we do a non-blocking flock(). When flock() succeeds, the process has
allocated the ID. When the process dies, the file is unlocked. Checks for zombies
are not necessary.
Since the change would create co-existence problems with older versions
of MTR, because of the different way of calculating IDs, the default ID range
is changed from 250-299 to 300-349.
Another fix that was necessary to enable the --parallel option was to serialize
spawn() calls on Windows; specifically, IO redirects needed to be protected.
This patch also fixes hanging CTRL-C (as described in Bug #38629) for the
"new" MTR. The fix was already in 6.0 and is now downported.
single quote fails in 5.1.x
Performing fulltext prefix search (a word with truncation
operator) may cause a dead-loop.
The problem was in the smarter index merge algorithm - it was writing
a record reference to an incorrect memory area.
The warning happens because the string argument is not zero-terminated.
The fix is to add a new parameter 'length' to SQL_CRYPT() and
use ptr() instead of c_ptr().
mysqldump.test is designed to run with concurrent inserts
disabled. It disables concurrent inserts at the very
beginning of the test case and re-enables them at the
bottom of the test. But for some reason (likely an incorrect
merge) concurrent inserts were enabled in the middle of the test.
The problem is fixed by enabling concurrent inserts only
at the bottom of the test case.
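The usual save/disable/restore pattern, sketched (exact variable names in the
test may differ):
  SET @old_concurrent_insert = @@global.concurrent_insert;
  SET @@global.concurrent_insert = 0;
  -- ... test body ...
  SET @@global.concurrent_insert = @old_concurrent_insert;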
In order to define the --slave-load-tmpdir, the init_relay_log_file()
was calling fn_format(MY_PACK_FILENAME) which internally was indirectly
calling strmov_overlapp() (through pack_dirname) and the following
warning message was being printed out while running in Valgrind:
"source and destination overlap in strcpy".
We fixed the issue by removing the flag MY_PACK_FILENAME as it was not
necessary. In a nutshell, with this flag the function fn_format() tried
to replace a directory by either "~", "." or "..". However, we wanted
exactly to remove such strings.
In this patch, we also refactored the functions init_relay_log_file()
and check_temp_dir(). The former was refactored to call the fn_format()
with the flag MY_SAFE_PATH along with the MY_RETURN_REAL_PATH, in order
to avoid issues with long directories and return an absolute path,
respectively. The flag MY_SAFE_UNPACK_FILENAME was removed too as it was
responsible for removing "~", "." or ".." only from the file parameter
and we wanted to remove such strings from the directory parameter in
the fn_format(). This result is stored in an rli variable, which is then
processed by the other function in order to verify if the directory exists
and if we are able to create files in it.
to wrong results
3 problems were found with DES_ENCRYPT/DES_DECRYPT:
1. The max length was not calculated properly. Fixed in fix_length_and_dec()
2. DES_ENCRYPT had a side effect of sometimes reallocating and changing
the value of its argument. Fixed by explicitly pre-allocating the necessary
space to pad the argument with trailing '*' (stars) when calculating the
DES digest.
3. In DES_ENCRYPT, the string buffer for the result value was not
reallocated to the correct size and only the string length was assigned to it.
Fixed by making sure there's enough space to hold the result.
Information schema tables are based on internal tmp tables, which are removed
after each statement execution, so HANDLER commands cannot be used with
information schema.
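A sketch of the rejected shape (exact error text omitted):
  HANDLER information_schema.tables OPEN;   -- now returns an error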
using it.
The crash was due to a null pointer present for select_lex while
processing the view.
Adding a check while opening the view to see if it is a child of a
merge table fixed this problem.
Streamlined how we increase the size of our test table.
The new method shows run time decreased by ~60%.
This is not a guarantee that we will not see test timeouts (the random failures noted in the bug),
but it should significantly reduce the chances of this occurring.
Killing insert-select statement on MyISAM corrupts the table.
Killing the insert-select statement corrupts the MyISAM table only
when the destination table is empty and when it has indexes. When
we bulk insert huge data and if the destination table is empty we
disable the indexes for fast inserts, data is then inserted and
indexes are re-enabled after the bulk_insert operation.
Killing the query aborts the repair table operation during the enable
indexes phase, leading to table corruption.
We now truncate the table when we detect that enable indexes is
killed for the bulk insert query. As we have an empty table before the
operation, we can fix it by truncating the table.
1) BUG#43309 - Test main.innodb can't be run twice
2) Follow up fix for BUG#43309, adds explanatory comments.
Detailed revision comments:
r4575 | vasil | 2009-03-30 15:55:31 +0300 (Mon, 30 Mar 2009) | 8 lines
branches/5.1:
Fix Bug#43309 Test main.innodb can't be run twice
Make the innodb mysql-test more flexible by inspecting how much a
variable of interest has changed since the start of the test. Do not
assume the variables have zero values at the start of the test.
r4659 | vasil | 2009-04-06 15:34:51 +0300 (Mon, 06 Apr 2009) | 6 lines
branches/5.1:
Followup to r4575 and the fix of Bug#43309 Test main.innodb can't be run twice:
Add an explanatory comment, as suggested by Patrick Crews in the bug report.
problems
1) BUG#39320 - innodb crash in file btr/btr0pcur.c line 217 with
innodb_locks_unsafe_for_binlog
2) Fixes a bug in multi-table semi-consistent reads.
3) Fixes email address from dev@innodb.com to innodb_dev_ww@oracle.com
4) Fixes warning message generated by main.innodb test
Detailed revision comments:
r4399 | marko | 2009-03-12 09:38:05 +0200 (Thu, 12 Mar 2009) | 5 lines
branches/5.1: row_sel_get_clust_rec_for_mysql(): Store the cursor position
also for unlock_row(). (Bug #39320)
rb://96 approved by Heikki Tuuri.
r4400 | marko | 2009-03-12 10:06:44 +0200 (Thu, 12 Mar 2009) | 8 lines
branches/5.1: Fix a bug in multi-table semi-consistent reads.
Remember the acquired record locks per table handle (row_prebuilt_t)
rather than per transaction (trx_t), so that unlock_row should successfully
unlock all non-matching rows in multi-table operations.
This deficiency was found while investigating Bug #39320.
rb://94 approved by Heikki Tuuri.
r4481 | marko | 2009-03-19 15:01:48 +0200 (Thu, 19 Mar 2009) | 6 lines
branches/5.1: row_unlock_for_mysql(): Do not unlock records that were
modified by the current transaction. This bug was introduced or unmasked
in r4400.
rb://97 approved by Heikki Tuuri
r4573 | vasil | 2009-03-30 14:17:13 +0300 (Mon, 30 Mar 2009) | 4 lines
branches/5.1:
Fix email address from dev@innodb.com to innodb_dev_ww@oracle.com
r4574 | vasil | 2009-03-30 14:27:08 +0300 (Mon, 30 Mar 2009) | 38 lines
branches/5.1:
Restore the state of INNODB_THREAD_CONCURRENCY to silence this warning:
TEST RESULT TIME (ms)
------------------------------------------------------------
worker[1] Using MTR_BUILD_THREAD 250, with reserved ports 12500..12509
main.innodb [ pass ] 8803
MTR's internal check of the test case 'main.innodb' failed.
This means that the test case does not preserve the state that existed
before the test case was executed. Most likely the test case did not
do a proper clean-up.
This is the diff of the states of the servers before and after the
test case was executed:
mysqltest: Logging to '/tmp/autotest.sh-20090330_033000-5.1.5Hg8CY/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.log'.
mysqltest: Results saved in '/tmp/autotest.sh-20090330_033000-5.1.5Hg8CY/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.result'.
mysqltest: Connecting to server localhost:12500 (socket /tmp/autotest.sh-20090330_033000-5.1.5Hg8CY/mysql-5.1/mysql-test/var/tmp/mysqld.1.sock) as 'root', connection 'default', attempt 0 ...
mysqltest: ... Connected.
mysqltest: Start processing test commands from './include/check-testcase.test' ...
mysqltest: ... Done processing test commands.
--- /tmp/autotest.sh-20090330_033000-5.1.5Hg8CY/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.result 2009-03-30 14:12:31.000000000 +0300
+++ /tmp/autotest.sh-20090330_033000-5.1.5Hg8CY/mysql-5.1/mysql-test/var/tmp/check-mysqld_1.reject 2009-03-30 14:12:41.000000000 +0300
@@ -99,7 +99,7 @@
INNODB_SUPPORT_XA ON
INNODB_SYNC_SPIN_LOOPS 20
INNODB_TABLE_LOCKS ON
-INNODB_THREAD_CONCURRENCY 8
+INNODB_THREAD_CONCURRENCY 16
INNODB_THREAD_SLEEP_DELAY 10000
INSERT_ID 0
INTERACTIVE_TIMEOUT 28800
mysqltest: Result content mismatch
not ok
r4576 | vasil | 2009-03-30 16:25:10 +0300 (Mon, 30 Mar 2009) | 4 lines
branches/5.1:
Revert a change to Makefile.am that I committed accidentally in r4574.
Engine doesn't allow me to.
In case of incompatible changes between old and new table
versions, the mysqlcheck program prints error messages like
this:
error: Table upgrade required. Please do
"REPAIR TABLE `table_name`" to fix it!
However, InnoDB doesn't support REPAIR TABLE query, so the
message is confusing.
Error message text has been changed to:
Table upgrade required. Please do "REPAIR TABLE `table_name`"
or dump/reload to fix it!
mysql_rename_view cannot rename a view if the database is not the same.
The fix is to add a new argument 'new_db' to mysql_rename_view() and
allow renaming across databases
(only for ALTER DATABASE ... UPGRADE DATA DIRECTORY NAME).
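The one statement that exercises the cross-database rename, sketched (the
#mysql50# prefix marks a pre-5.1 encoded directory name; the name is hypothetical):
  ALTER DATABASE `#mysql50#a-b-c` UPGRADE DATA DIRECTORY NAME;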
comment can't be read back
A change to the lexer in 5.1 caused slash-asterisk-bang-version
sections to be terminated early if there exists a slash-asterisk-
style comment inside it. Nesting comments is usually illegal,
but we rely on versioned comment blocks in mysqldump, and the
contents of those sections must be allowed to have comments.
The problem was that when encountering open-comment tokens and
consuming -or- passing through the contents, the "in_comment"
state at the end was clobbered with the not-in-a-comment value,
regardless of whether we were in a comment before this or not.
So, """/*!VER one /* two */ three */""" would lose its in-comment
state between "two" and "three". Save the echo and in-comment
state, and restore it at the end of the comment if we consume a
comment.
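A sketch of the shape that must survive, using an always-true version number:
  SELECT 1 /*!32302 + 1 /* inner comment */ + 1 */;   -- expected result: 3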
routine does not exist
There is an inconsistency with DROP DATABASE IF EXISTS, DROP TABLE IF
EXISTS and DROP VIEW IF EXISTS: those are binlogged even if the DB or
TABLE does not exist, whereas DROP PROCEDURE IF EXISTS does not. It
would be nice or at least consistent if DROP PROCEDURE/STATEMENT
worked the same too.
Fixed DROP PROCEDURE|FUNCTION IF EXISTS by adding a call to
mysql_bin_log.write in mysql_execute_command. Also checked that all
documented "DROP (...) IF EXISTS" statements get binlogged.
NOTE: This is a 5.0 backport patch as requested by support.
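The now-logged shape, sketched:
  DROP PROCEDURE IF EXISTS p1;   -- binlogged even when p1 does not exist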
The result set for multi-row statements is not the same between STMT and
RBR and among different versions. Thus to avoid test failures, we are not
printing out such result sets. Note, however, that this does not have
impact on coverage and accuracy since the execution is able to continue
without further issues when an error is found on the master and such error
is set to be skipped.
The test started failing following the push for BUG#41541.
Some of the algorithms access bytes beyond the input data
and this can affect up to one byte less than "word size"
which is BITS_SAVED / 8.
Fixed by adding (BITS_SAVED / 8) - 1 bytes to the buffer size
(i.e. Memory Segment #2) to avoid accessing unallocated data.
We didn't expect "error: no error", although this is
in fact a legitimate state (if something is erroneous
on the master, but not on the slave, e.g. INSERT fails
on master due to UNIQUE constraint which does not exist
on slave).
We now list the ignore for "0: no error" the same way
as any other ignore; moreover, if no or an empty
--slave-skip-errors is passed at start-up, we show
"OFF" instead of empty list, as intended. (The code
for that was there, but was only run for the empty-
argument case, even if it subsequently tested for
both conditions.)
RBR was not considering the option --slave-skip-errors.
To fix the problem, we are reporting the ignored ERROR(s) as warnings thus avoiding
stopping the SQL Thread. Besides, it fixes the output of "SHOW VARIABLES LIKE
'slave_skip_errors'" which was showing nothing when the value "all" was assigned
to --slave-skip-errors.
@sql/log_event.cc
skipped rbr errors when the option skip-slave-errors is set.
@sql/slave.cc
fixed the output for SHOW VARIABLES LIKE 'slave_skip_errors'
@test-cases
fixed the output of rpl.rpl_idempotency
updated the test case rpl_skip_error
1. Test case was rewritten completely.
2. Test covers 3 cases:
a) cause a deadlock on the slave, wait for transaction retries, unlock the slave before the lock
timeout;
b) cause a deadlock on the slave and wait for the 'lock wait timeout exceeded' error on the slave;
c) same as b) but with max relay log size = 0;
3. Added comments inline.
4. Updated result file.
The problem is that a SELECT .. FOR UPDATE statement might open
a table and later wait for an impending global read lock without
noticing whether it is holding a table that is being waited upon
by the flush phase of the process that took the global read
lock.
The same problem also affected the following statements:
LOCK TABLES .. WRITE
UPDATE .. SET (update and multi-table update)
TRUNCATE TABLE ..
LOAD DATA ..
The solution is to make the above statements wait for an impending
global read lock before opening the tables. If there is no
impending global read lock, the statement raises a temporary
protection against global read locks and progresses smoothly
towards completion.
Important notice: the patch does not try to address all possible
cases, only those which are common and can be fixed unintrusively
enough for 5.0.
The MySQL server crashes because an unsafe-statement warning is wrongly elevated to an error,
which sets the error status of the thread's Diagnostics_area in THD::binlog_query().
Yet the caller believes that binary logging shouldn't touch the status, so it will
also set the status later via my_ok(), my_error() or my_message(), separately,
according to the execution result of the statement or transaction.
But the status of the thread's Diagnostics_area is allowed to be set only once.
Fixed by clearing the error wrongly set by binary logging, but keeping the warning message.
Altered the test to accommodate the new behavior of max_allowed_packet.
Had to disconnect / reconnect the default connection for the new value to register.
Re-enabled certain parts of the test that were commented out and added
some setup / cleanup code to ensure proper reset of max_allowed_packet at the end of the test.
Re-recorded the .result file to account for changes to the test.
Original commentary:
Bug #37348: Crash in or immediately after JOIN::make_sum_func_list
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause similar discrepancy
don't go undetected.
- Update testcase funcs_1.is_routines to only query
information_schema.routines about the routines in the 'test'
database (where it has created its routines)
The problem is that XML functions (items) do not reset null_value
before their execution, so further item execution may use the
null_value of the previous result.
The fix is to reset null_value.
- Some tests need to modify the server(s) so much that a total restart of all servers is
necessary after the test. Make it possible for a test to signal that it wants mtr.pl to restart
all servers.
on 5.0
The server crashes on an assert in net_end_statement indicating that the
Diagnostics area wasn't set properly during execution.
This happened on a multi table DELETE operation using the IGNORE keyword.
The keyword is supposed to allow execution to continue on a best-effort
basis despite some non-fatal errors. Instead, execution stopped and no client
response was sent which would have led to a protocol error if it hadn't been
for the assert.
This patch corrects this issue by checking for the existence of an IGNORE
option before setting an error state during row-by-row delete iteration.
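A sketch of the affected statement shape (hypothetical tables):
  DELETE IGNORE t1, t2 FROM t1 JOIN t2 ON t1.a = t2.a;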
Test was flakey on some machines and showed spurious
reds for races.
New-and-improved test makes do with fewer statements,
no mysqltest-variables, and no backticks. Should hopefully
be more robust. Heck, it's debatable whether we
should have a test for this, anyway.
updates
An attempt to execute a trigger or stored function with a multi-UPDATE
which used - but didn't update - a table that was also used by
the calling statement led to an error. Read-only references to
tables used in the calling statement should be allowed.
This problem was caused by the fact that check for conflicting
use of tables in SP/triggers was performed in open_tables(),
and in case of multi-UPDATE we didn't know exact lock type at
this stage.
We solve the problem by moving this check to lock_tables(), so
it can be performed after exact lock types for tables used by
multi-UPDATE are determined.
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a union have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
When the thread executing a DDL was killed after it finished its
execution but before writing the binlog event, the error code in
the binlog event could be set wrongly to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixed the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also included the following changes in order to
provide the test case.
1) modified mysqltest to support variables in the connection command
2) modified mysql-test-run.pl, add new variable MYSQL_SLAVE to
run mysql client against the slave mysqld.
The problem is that the read and write methods of the shared
memory transport (protocol) didn't react to asynchronous close
events, which could lead to a lock up as the client would wait
(until time out) for a server response that will never come.
The solution is to also wait for close events while waiting
for I/O from or to the server.
Bug report and patch submitted by: Armin Schöffmann
including modifications according to code review
+ backport of the fix for
Bug 41932 funcs_1: is_collation_character_set_applicability path
too long for tar
which was missing in 5.0 (just a renaming of two files)
Bug#319 if while a non-transactional slave is replicating a transaction possible problem
It is impossible to roll back a mixed-engines transaction when one of the engines is
non-transactional. In replication that fact is crucial because the slave cannot safely
re-apply a transaction that was interrupted with STOP SLAVE.
Fixed by making STOP SLAVE not take effect immediately when the current
group of replication events has modified a non-transactional table. In order for the slave to stop,
either the group needs to finish or the user must issue KILL QUERY|CONNECTION slave_thread_id.
When an alias was added after NAME_CONST, the alias name was overwritten.
NAME_CONST now re-sets the field's name in the fix_fields() function only if
there is no alias.
If there is an alias, NAME_CONST doesn't re-set the field's name and keeps the
old (alias) name.
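A sketch of the corrected behaviour:
  SELECT NAME_CONST('x', 1);        -- column is named x
  SELECT NAME_CONST('x', 1) AS y;   -- the alias wins: column is named y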
due to name_const substitution
Problem:
"In general, statements executed within a stored procedure
are written to the binary log using the same rules that
would apply were the statements to be executed in standalone
fashion. Some special care is taken when logging procedure
statements because statement execution within procedures
is not quite the same as in non-procedure context".
For example, each reference to a local variable in SP's
statements is replaced by NAME_CONST(var_name, var_value).
Queries like
"CREATE TABLE ... SELECT FUNC(local_var ..."
are logged as
"CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ..."
that leads to different field names and
might result in "Incorrect column name" if var_value is long enough.
Fix: in 5.x we'll issue a warning in such a case.
In 6.0 we should get rid of NAME_CONST().
Note: this issue and change should be described in the documentation
("Binary Logging of Stored Programs").
Fine-tuning. Broke out comparison into method by
suggestion of Davi. Clarified comments. Reverting
test-case which I find too brittle; proper test
case in 5.1+.
(Pushing for Azundris)
We allow security-contexts with NULL users (for
system-threads and for unauthenticated users).
If a non-SUPER-user tried to KILL such a thread,
we tried to compare the user-fields to see whether
they owned that thread. Comparing against NULL was
not a good idea.
If KILLer does not have SUPER-privilege, we
specifically check whether both KILLer and KILLee
have a non-NULL user before testing for string-
equality. If either is NULL, we reject the KILL.
The issue happened to be two-fold.
The table map event was recorded into the binlog with
an incorrect size when the number of columns exceeded 251.
The row-based event incorrectly recorded and restored the m_width member
under the same conditions.
Fixed by correcting m_data_size and m_width.
After a table is compressed by the myisampack utility,
opening the table in the server produced valgrind warnings.
This happened because when we try to read a record into the buffer,
we always assume that the remaining buffer to read is equal
to the word size (2, 4 or 8 bytes) we read. Sometimes the
remaining buffer size is less than the word size, and trying to read an
entire word ends up in valgrind errors.
Fixed by reading byte by byte when we detect that the remaining buffer
size is less than the word size.
expired timeout on debx86-b in PB
Turned off the general log when importing the DB dump in the test
case for bug #41486, due to a bug in the CSV engine code that
makes logging long SQL queries too slow.
expired timeout on debx86-b in PB
Moved the resource-intensive test case for bug #41486 into
a separate test file to reduce execution time for mysql.test.
LOAD_FILE
LOAD_FILE is not safe to replicate in STATEMENT mode, because it
depends on a file (which is loaded on master and may not exist in
slave(s)). This leads to scenarios in which the slave replicates the
statement with 'load_file' and it will try to load the file from the local
file system. Given that the file may not exist in the slave filesystem
the operation will not succeed (probably returning NULL), causing
master and slave(s) to diverge. However, when using MIXED mode
replication, this can be made to work, if the statement including
LOAD_FILE is marked as unsafe, triggering a switch to ROW mode,
meaning that the contents of the file are written to binlog as row
events. Consequently, the contents from the file in the master will
reach the slave via the binlog.
This patch addresses this bug by marking the load_file function as
unsafe. When in mixed mode and when LOAD_FILE is issued, there will be
a switch to row mode. Furthermore, when in statement mode, the
LOAD_FILE will raise a warning that the statement is unsafe in that
mode.
The problem is that after a disconnect, the DROP TEMPORARY TABLE event was not
written into the binlog. So after syncing with the slave, the TEMPORARY table
on the slave was not removed.
The fix is to wait for the DROP TEMPORARY TABLE event to be written into the
binlog before syncing the slave with the master.
When doing an 'INSERT DELAYED' operation, the time_zone info was not kept with the row info.
So when the insert was performed some time later, the time_zone was not written into the binlog.
This caused wrong results for timestamp columns on the slave.
Our solution is to store the time_zone info with the delayed row and to
restore the time_zone from the row info when another thread executes that row in the future.
This way we write correct time_zone info into the binlog and get correct results on the slave.
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements of this tests
in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- move the subtests for the bugs 38499 and 38691 into separate scripts
- runtime under excessive parallel I/O load after applying the fix
lock_multi [ pass ] 22887
lock_multi_bug38499 [ pass ] 536926
lock_multi_bug38691 [ pass ] 258498
Don't throw an error after checking the first and the second arguments.
Continue with checking the third and higher arguments, and if one of
them is stronger according to the coercibility rules,
then that argument's collation is set as the result collation.
When loading a dump created by the mysqldump tool, an error is
thrown saying the storage engine for the table doesn't have
an option.
mysqldump tries to re-insert the data into the federated
table which causes the error. Since the data is already
available on the remote server, mysqldump shouldn't try
to dump the data again for FEDERATED tables.
As stated in the bug page, it can be considered similar
to the MERGE ENGINE with "view only" nature.
Fixed by adding the "FEDERATED ENGINE" to the exception
list to ignore the data.
mysql.procs_priv table itself does not get replicated.
Inserting routine privilege record into mysql.procs_priv table
is triggered by creating function/procedure statements
according to current user's privileges.
Because the current user of the SQL thread has GLOBAL_ACL,
no mysql.procs_priv privilege check is needed
when creating/altering/executing routines.
Consequently, a user with the GLOBAL_ACL privilege
doesn't insert a routine privilege record into
mysql.procs_priv when creating a routine.
Fixed by switching the current user of the SQL thread to the definer user if
the definer user exists on the slave.
That populates procs_priv; otherwise the SQL thread
user is kept and procs_priv remains unchanged.
Compiling with debug and assigning an invalid directory to --slave-load-tmpdir
was crashing the slave due to the following assertion DBUG_ASSERT(! is_set() ||
can_overwrite_status). This assertion assumes that a thread can change its
state once (i.e. ok,error, etc) before aborting, cleaning/resuming or completing
its execution unless the overwrite flag (i.e. can_overwrite_status) is true.
The Append_block_log_event::do_apply_event which is responsible for creating
temporary file(s) was not cleaning the thread state. Thus a failure while
trying to create a file in an invalid temporary directory was causing the crash.
To fix the problem we check if the temporary directory is valid before starting
the SQL Thread and reset the thread state before creating a file in
Append_block_log_event::do_apply_event.
~40Mb after mysqldump/import
When the input string exceeds the maximum allowed size for the
internal buffer, batch_readline() returns a truncated string.
Since there was no way for a caller to determine whether the
string was truncated or not, the command line client assumed
batch_readline() to always return the whole input string and
appended a newline character. This resulted in garbled data
when importing dumps containing strings longer than the
maximum input buffer size.
Fixed by adding a flag to the batch_readline() interface to
signal a truncated string to the caller.
Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set
up depending on the client's max_allowed_packet value. It does
not actually make any sense, as those variables are not
related. The input buffer size limit is now always set to 1
MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified
limit due to insufficient checks in intern_read_line().
Any statement reading a corrupt archive data file
(CHECK/REPAIR/SELECT/UPDATE/DELETE) could cause an assertion
failure in debug builds. This assertion has been removed
and an error is returned instead.
Also fixed that CHECK/REPAIR returned a vague error message
when it met corruption in the archive data file. This is
fixed by returning a proper error code.
become negative
- merged the fix to 5.1
- extended to cover I_S.PROCESSLIST.TIME
- Changed the column type of I_S.PROCESSLIST.TIME from LONGLONG
UNSIGNED to LONG (to match the SHOW PROCESSLIST type)
- Added a test case
"Release_lock("hello")" is now also successful when delivering NULL, replaced two sleeps by wait_condition. The last two "sleep 1" have not been replaced as all tried wait conditions leaded to nondeterministic results, especially to succeeding concurrent updates. To replace the sleeps there should be some time planned (or internal knowledge of the server may help).
If a sys-var has a base and a block-size>1, and then a
user-supplied value >= minimum ended up below minimum
thanks to block-size alignment, we threw a warning.
This meant for instance that when getting, then setting
the minimum, we'd see a warning. This was needlessly
confusing. (updated patch)
The problem arises because we set wrong start and stop positions for the query string in the binlog.
Those two values are stored as part of the head info of the query string.
When we parse the binlog, we first get the position values and then get the query string according to them.
But it seems the two values are not calculated correctly after Yacc parsing.
We don't want to touch too much of the yacc code because it may influence other code.
So we just add one space after the 'INTO' keyword when parsing.
This easily resolves the problem.
select where .. (col=col and col=col) or ... (false expression)
Problem: the optimizer didn't take into account a singular case
where we eliminated all the predicates at the AND level of WHERE.
That may lead to wrong results.
Fix: replace (a=a AND a=a...) with TRUE if we eliminated all the
predicates.
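A hypothetical repro sketch of the shape in the title (NOT NULL column, so
a=a is not affected by NULL semantics):
  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1),(2);
  SELECT * FROM t1 WHERE (a = a AND a = a) OR 1 = 0;   -- expected: both rows
  DROP TABLE t1;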
Revised patch incorporating cleaner test code brought up during review.
Removed the use of grep and accomplished same actions via SQL / use of the server.
Runs as before on *nix systems and now runs on Windows without Cygwin as well.