mysql-test/lib/mtr_process.pl:
Remove junk code/comments
Moved the creation of var/log out of function mtr_kill_leftovers
mysql-test/mysql-test-run.pl:
Rename 'kill_running_server' to the more appropriate name 'kill_running_servers'
If no var/log dir exists when mtr_kill_leftovers is started it should be created
Remove "unless" syntax, too perlish
Fix typo
hangs on Linux
If REPAIR TABLE ... USE_FRM was issued for a table located in a database
other than the default one, a server crash could happen.
In reopen_name_locked_table, take the database name from table_list (the
user-specified or default database) instead of from thd (the default database).
Affects 4.1 only.
mysql-test/r/repair.result:
A test case for BUG#22562.
mysql-test/t/repair.test:
A test case for BUG#22562.
sql/sql_base.cc:
In reopen_name_locked_table, take the database name from table_list (the
user-specified or default database) instead of from thd (the default database).
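A minimal sketch of the idea; the identifiers below other than table_list and
thd are assumptions about the 4.1 TABLE_LIST layout, not a quote of the patch:

    /* Inside reopen_name_locked_table(): derive the database from the
       TABLE_LIST, which holds either the user-specified database or the
       already-filled-in default one, instead of from thd->db, which is
       only the default database and may not match the table at all. */
    const char *db=   table_list->db;         /* was effectively thd->db */
    const char *name= table_list->real_name;  /* table name as given     */
    /* ... rebuild the table cache key and reopen the table using (db, name) ... */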
'run_testcase_need_master/slave_restart'
Remove the faulty qw
Only look for mysql_fix_privilege_tables when not on Windows
mysql-test/lib/mtr_cases.pl:
Move all code to determine when to restart into 'run_testcase_need_master/slave_restart'
Add the possibility to write --force-restart in a -master.opt file. This will
force a restart, and since the master is not started with any special options,
there is no need to restart again afterwards.
mysql-test/mysql-test-run.pl:
Remove the qw surrounding ENV{'LD_LIBRARY_PATH'}
Only look for the sh script mysql_fix_privileges when not on Windows
Remove warnings about using uninitialized variables
Improve the restart logic; all code to determine when to restart is
now in run_testcase_need_master_restart and run_testcase_need_slave_restart
mysql-test/t/bdb-alter-table-2-master.opt:
Use --force-restart
mysql-test/t/not_embedded_server-master.opt:
Use --force-restart
into polly.local:/home/kaa/src/maint/m41-maint--07OGk
sql/field.cc:
Auto merged
sql/item_timefunc.cc:
Auto merged
mysql-test/r/func_time.result:
Manually merged
mysql-test/t/func_time.test:
Manually merged
The bug is present only in 4.1, will be null-merged to 5.0
For InnoDB, check the value of thd->transaction.all.innodb_active_trans instead of thd->transaction.stmt.innobase_tid to see if we really need to roll back.
mysql-test/r/innodb_mysql.result:
Added testcase for bug #22728 "Handler_rollback value is growing"
mysql-test/t/innodb_mysql.test:
Added testcase for bug #22728 "Handler_rollback value is growing"
sql/handler.cc:
For InnoDB, check the value of thd->transaction.all.innodb_active_trans instead of thd->transaction.stmt.innobase_tid to see if we really need to roll back.
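In sketch form (only the two member names above are taken from this message;
the surrounding control flow is illustrative, not the actual handler.cc code):

    /* Roll back in InnoDB, and count a Handler_rollback, only when a
       transaction is really active for the connection, not merely because
       a statement-level InnoDB handle happens to exist. */
    if (thd->transaction.all.innodb_active_trans)   /* was: ...stmt.innobase_tid */
    {
      /* perform the InnoDB rollback */
    }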
Bug#21354: (COUNT(*) = 1) not working in SELECT inside prepared statement.
The problem was that during statement re-execution, if the result was
empty, the old result could be returned for group functions.
The solution is to implement a proper cleanup() method in group
functions.
mysql-test/r/ps.result:
Add result for bug#21354: (COUNT(*) = 1) not working in SELECT inside
prepared statement.
mysql-test/t/func_gconcat.test:
Add a comment that the test case is from bug#836.
mysql-test/t/ps.test:
Add test case for bug#21354: (COUNT(*) = 1) not working in SELECT inside
prepared statement.
sql/item_sum.cc:
Call clear() in Item_sum_count::cleanup().
sql/item_sum.h:
Add comments.
Add proper cleanup() methods.
Change Item_sum::no_rows_in_result() to call clear() instead of reset(),
as the latter also issues add(), and there is nothing to add when there
are no rows in result.
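A simplified sketch of the shape of the fix (the bodies actually added to
sql/item_sum.cc and sql/item_sum.h are richer; class and macro names follow
the 4.1 tree but are assumptions here):

    /* Between executions of a prepared statement the accumulated value must
       be thrown away, otherwise an empty result reuses the previous count. */
    void Item_sum_count::cleanup()
    {
      DBUG_ENTER("Item_sum_count::cleanup");
      clear();                    /* resets the internal counter to 0 */
      Item_sum_int::cleanup();    /* let the base class clean up too  */
      DBUG_VOID_RETURN;
    }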
When the client program had its stdout file descriptor closed by the calling
shell, after some amount of work (enough to fill a socket buffer) the server
would complain about a packet error and then disconnect the client.
This is a serious security problem. If stdout is closed before the mysql
client is exec()d, then the first socket() call allocates file number 1 to
communicate with the server. Subsequent write()s to that file number (as when
printing results that come back from the database) then go to the server on
the command channel instead. So one should be able to craft data which, upon
being selected back from the server to the client and injected into the
command stream, becomes valid MySQL protocol that does something nasty when
sent /back/ to the server.
The solution is to explicitly close the file descriptor that we *printf() to,
so that the libc layer and the OS layer both agree that the file is closed.
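The check can be done with plain POSIX calls; a self-contained sketch of the
approach (not the literal client/mysql.cc code):

    #include <stdio.h>
    #include <unistd.h>

    /* If stdout was closed by the calling shell, make the stdio stream agree
       with the OS, so that descriptor 1 can later be reused (e.g. by socket())
       without printf() output leaking into it. */
    static void close_stdout_if_dead(void)
    {
      int probe= dup(fileno(stdout));   /* fails with -1 if stdout is closed  */
      if (probe == -1)
        fclose(stdout);                 /* libc and OS now both agree: closed */
      else
        close(probe);                   /* stdout is fine; drop the probe fd  */
    }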
BitKeeper/etc/collapsed:
BitKeeper file /home/cmiller/work/mysql/bug17583/my41-bug17583/BitKeeper/etc/collapsed
client/mysql.cc:
If standard output is not open (specifically, if dup() of its file number
fails) then we explicitly close it so that future uses of the file descriptor
behave correctly for a closed file.
mysql-test/r/mysql_client.result:
Prove that the problem of writing SQL output to the command socket no longer
exists.
mysql-test/t/mysql_client.test:
Prove that the problem of writing SQL output to the command socket no longer
exists.
into chilla.local:/home/mydev/mysql-4.1-bug8283-one
myisam/mi_check.c:
Auto merged
myisam/mi_packrec.c:
Auto merged
myisam/sort.c:
Auto merged
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair works with one thread per
index. The first of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads did not know the new record positions and put the positions
from the old data file into the index.
In the new design there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that for the parallel repair of compressed
tables a common bit_buff and rec_buff were used. I changed this so
that thread specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
include/my_sys.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
include/myisam.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of checksum calculation in mi_check.c.
'calc_checksum' is now in myisamdef.h:st_mi_sort_param.
myisam/mi_check.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Implemented a new parallel repair design.
Using a synchronized shared read/write cache.
Allowed for thread specific bit_buff, rec_buff, and calc_checksum.
myisam/mi_open.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added DBUG output.
myisam/mi_packrec.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Allowed for thread specific bit_buff and rec_buff.
myisam/myisamdef.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Commented on checksum calculation variables.
Allowed for thread specific bit_buff.
Added DBUG output for better table crash detection.
myisam/sort.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added implications of the new parallel repair design.
Renamed 'info' -> 'sort_param'.
Added DBUG output.
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test results.
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test cases.
mysys/mf_iocache.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
We now allow a writer to synchronize itself with the
readers of a shared cache. When all threads have joined in the lock,
the writer copies the data from its write buffer to the shared
read buffer.
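Conceptually this is a barrier on each block; a self-contained sketch with
made-up names (pthread-based, not the actual mysys/mf_iocache.c code):

    #include <pthread.h>
    #include <string.h>
    #include <stddef.h>

    struct share_sketch
    {
      pthread_mutex_t mutex;
      pthread_cond_t  cond;
      unsigned int    waiting;       /* threads already waiting in the lock */
      unsigned int    total;         /* writer + readers                    */
      unsigned long   generation;    /* bumped once per released block      */
      const char     *write_buffer;  /* published by the writer on arrival  */
      size_t          length;
      char           *read_buffer;   /* what the reader threads consume     */
    };

    /* Every repair thread calls this once per block.  The last thread to
       arrive copies the writer's freshly written block into the shared read
       buffer and releases everybody, so all readers see the new, contiguous
       record positions. */
    static void sync_on_block(struct share_sketch *share, int is_writer,
                              const char *buffer, size_t length)
    {
      pthread_mutex_lock(&share->mutex);
      if (is_writer)                            /* publish the new block */
      {
        share->write_buffer= buffer;
        share->length= length;
      }
      unsigned long gen= share->generation;
      if (++share->waiting == share->total)     /* last thread has joined */
      {
        memcpy(share->read_buffer, share->write_buffer, share->length);
        share->waiting= 0;
        share->generation++;
        pthread_cond_broadcast(&share->cond);   /* release writer and readers */
      }
      else
      {
        while (gen == share->generation)        /* guard against spurious wakeups */
          pthread_cond_wait(&share->cond, &share->mutex);
      }
      pthread_mutex_unlock(&share->mutex);
    }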
to run the whole testcase to find which testcases need to be checked more carefully
and then just "copy and paste" the suspicious test case names to
a new mysql-test-run.pl command.
mysql-test/mysql-test-run.pl:
Mark test cases that fail "check_testcase"
Make run_check_testcase return a value indicating whether the check failed
mysql-test/include/check-testcase.test:
New BitKeeper file ``mysql-test/include/check-testcase.test''
Move the code to look for exe_mysqld earlier => to initial_setup
Fix warnings detected by running with "diagnostics"
Remove unused option "opt_result_ext"
Init "path_ndb_examples_dir"
mysql-test/lib/mtr_cases.pl:
Set default number of slaves to 0
Remove unused/uninitialized "$opt_result_ext"
mysql-test/lib/mtr_report.pl:
Remove unused/uninitialized "$opt_result_ext"
the "mysqld --version" command will print "/path/.libs/lt-mysqld Ver x.x.x"
mysql-test/mysql-test-run.pl:
Modify the regex for parsing the mysqld version, as mysqld is sometimes a libtool wrapper and
the "mysqld --version" command will print "/path/.libs/lt-mysqld"
into neptunus.(none):/home/msvensson/mysql/mysql-4.1-maint
mysql-test/r/subselect.result:
Auto merged
mysql-test/t/ps.test:
Auto merged
mysql-test/t/subselect.test:
Auto merged
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
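A simplified sketch of the corrected value logic (only the identifiers named
in this message are taken from the source; the surrounding code is illustrative):

    longlong Item_func_last_insert_id::val_int()
    {
      THD *thd= current_thd;
      if (arg_count)                    /* LAST_INSERT_ID(expr): return expr */
        return args[0]->val_int();
      return thd->current_insert_id;    /* LAST_INSERT_ID(): first value
                                           generated by the previous statement,
                                           never one from the current statement */
    }
    /* fix_fields() additionally sets thd->last_insert_id_used, so the binary
       log gets a LAST_INSERT_ID_EVENT only when the statement really uses it. */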
mysql-test/r/rpl_insert_id.result:
Add result for bug#21726: Incorrect result with multiple invocations
of LAST_INSERT_ID.
mysql-test/t/rpl_insert_id.test:
Add test case for bug#21726: Incorrect result with multiple invocations
of LAST_INSERT_ID.
sql/item_func.cc:
Add implementation of Item_func_last_insert_id::fix_fields(), where we
set THD::last_insert_id_used when statement calls LAST_INSERT_ID().
In Item_func_last_insert_id::val_int(), return THD::current_insert_id
if called like LAST_INSERT_ID(), otherwise return value of argument if
called like LAST_INSERT_ID(expr).
sql/item_func.h:
Add declaration of Item_func_last_insert_id::fix_fields().
sql/log_event.cc:
Do not set THD::last_insert_id_used on LAST_INSERT_ID_EVENT. Though we
know the statement will call LAST_INSERT_ID(), it has not been called yet.
sql/set_var.cc:
In sys_var_last_insert_id::value_ptr(), set THD::last_insert_id_used,
and return THD::current_insert_id for @@LAST_INSERT_ID.
sql/sql_class.h:
Update comments.
Remove THD::insert_id(), as it has lost its purpose now.
sql/sql_insert.cc:
Now it is OK to read THD::last_insert_id directly.
sql/sql_load.cc:
Now it is OK to read THD::last_insert_id directly.
sql/sql_parse.cc:
In mysql_execute_command(), remember THD::last_insert_id (first
generated value of the previous statement) in THD::current_insert_id,
which then will be returned for LAST_INSERT_ID() and @@LAST_INSERT_ID.
sql/sql_select.cc:
If "IS NULL" is replaced with "= <LAST_INSERT_ID>", use right value,
which is THD::current_insert_id, and also set THD::last_insert_id_used
to issue binary log LAST_INSERT_ID_EVENT.
sql/sql_update.cc:
Now it is OK to read THD::last_insert_id directly.
tests/mysql_client_test.c:
Add test case for bug#21726: Incorrect result with multiple invocations
of LAST_INSERT_ID.
Improve 'run_testcase_need_slave_restart' to detect if a slave restart really is necessary.
So far all rpl tests require a slave restart, but for all other tests it can be skipped.
Improve the sort order used by --reorder
mysql-test/lib/mtr_cases.pl:
Improve the sort order used by reorder
mysql-test/mysql-test-run.pl:
Improve 'run_testcase_need_master_restart' to require restart if master is not already started
Improve 'run_testcase_need_slave_restart' to detect if a slave restart really is necessary.
So far all rpl tests require a slave restart, but for all other tests it can be skipped.
Cleanup .progress, .reject, .log and .warnings files produced by mysqltest
client/mysqltest.c:
Add printout of file in which warning was detected
mysql-test/include/ctype_like_escape.inc:
Remove warnings, convert -- comments to # comments
mysql-test/mysql-test-run.pl:
Cleanup all files produced by mysqltest before starting mysqltest again
mysql-test/mysql-test-run.pl:
Add policy directive about keeping mysqltest framework tools identical in all versions
Cleanup the initial comment to reflect current state
but having it on tmpfs gives a big speedup.
mysql-test/mysql-test-run.pl:
Make use of opt_mem and let 4.1 allow vardir to be set. Still relies on the var/ directory
but having it on tmpfs gives a big speedup.
mysql-test/mysql-test-run.pl:
Use same location for slave-load-tmpdir in all versions
mysql-test/mysql-test-run.sh:
Use same location for slave-load-tmpdir in all versions
mysql-test/r/rpl_loaddata.result:
Update result after changing slave-load-tmpdir to use a shorter path
mysql-test/r/rpl_loaddatalocal.result:
Update result after changing slave-load-tmpdir to use a shorter path
mysql-test/r/rpl_log.result:
Update result after changing slave-load-tmpdir to use a shorter path
mysql-test/t/rpl_loaddatalocal.test:
Use MYSQLTEST_VARDIR when specifying the path to load from (backport from 5.0)
Use the new command "remove_file" instead of "system rm"
Though this is not a storage engine specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the
reason to add a test case to ndb_update.test. As a result of the engine
not being notified about IGNORE, different bad things could happen:
BDB removed duplicate rows, which is not expected.
NDB returned an error.
For multi table update, notify the storage engine about UPDATE IGNORE
as it is done in single table UPDATE.
mysql-test/r/ndb_update.result:
A test case for bug#21381.
mysql-test/t/ndb_update.test:
A test case for bug#21381.
sql/sql_update.cc:
For multi table update, notify the storage engine about UPDATE IGNORE
as it is done in single table UPDATE.
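In sketch form, the multi-table path now gives the handler the same hint the
single-table path already gives it (the 'ignore' flag name and the exact spot
in the multi-update code are assumptions):

    /* For UPDATE IGNORE, tell each updated table's storage engine that
       duplicate-key errors are to be ignored, exactly as single-table
       UPDATE does, so engines such as BDB and NDB behave consistently. */
    if (ignore)
      table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);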
from var/ to a tmpfs area and thereby speed up the execution of the testsuite
significantly
mysql-test/mysql-test-run.pl:
Add new option --mem to mysql-test-run.pl. It will automatically set up a symlink
from var/ to a tmpfs area and thereby speed up the execution of the testsuite
significantly