into zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-4.1-maint
configure.in:
Auto merged
mysql-test/t/ps.test:
Auto merged
sql/handler.cc:
Auto merged
sql/sql_delete.cc:
Auto merged
sql/sql_select.cc:
Auto merged
sql/table.cc:
Auto merged
tests/mysql_client_test.c:
Auto merged
myisam/sort.c:
Manual merge.
mysql-test/r/innodb_mysql.result:
Manual merge.
mysql-test/t/innodb_mysql.test:
Manual merge.
mysys/mf_iocache.c:
Manual merge.
into bodhi.local:/opt/local/work/mysql-4.1-runtime
mysql-test/r/ps.result:
Auto merged
mysql-test/t/func_gconcat.test:
Auto merged
sql/item_func.cc:
Auto merged
sql/item_func.h:
Auto merged
sql/item_sum.cc:
Auto merged
sql/log_event.cc:
Auto merged
sql/mysql_priv.h:
Auto merged
sql/mysqld.cc:
Auto merged
sql/set_var.cc:
Auto merged
sql/sql_class.h:
Auto merged
sql/sql_delete.cc:
Auto merged
sql/sql_select.cc:
Auto merged
sql/sql_update.cc:
Auto merged
Backport of the fix for bug #8143: A date with value 0 is treated as a NULL value
mysql-test/r/delete.result:
Fix for bug #23412: delete rows with null date field
- test result
mysql-test/t/delete.test:
Fix for bug #23412: delete rows with null date field
- test case
sql/sql_delete.cc:
Fix for bug #23412: delete rows with null date field
- during SELECT query processing we convert 'date[time]_field is null'
conditions into 'date[time]_field = 0000-00-00[ 00:00:00]' for NOT NULL
DATE and DATETIME fields. To be consistent, we have to do the same for
DELETE queries, so we call remove_eq_conds() in mysql_delete() as well.
This may also simplify and speed up DELETE query execution.
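A minimal sketch of the scenario (table and column names are illustrative,
not from the patch):
create table t1 (d date not null);
insert into t1 values ('0000-00-00'), ('2006-12-14');
select * from t1 where d is null;  -- matches the zero date for NOT NULL fields
delete from t1 where d is null;    -- should now delete that same row
With the remove_eq_conds() call in place, both statements see the condition
rewritten to d = '0000-00-00' and agree on which rows match.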
If an error happens during DELETE IGNORE, nothing could be sent to the
client, thus leaving it frozen expecting the reply.
The problem was that if some error occurred, it wouldn't be reported to
the client because of IGNORE, but success wouldn't be reported either.
MySQL 4.1 would not freeze the client, but would report
ERROR 1105 (HY000): Unknown error
instead, which is also a bug.
The solution is to report success if we are in DELETE IGNORE and some
non-fatal error has happened.
mysql-test/r/innodb_mysql.result:
Add result for bug#18819: DELETE IGNORE hangs on foreign key parent
delete.
mysql-test/t/innodb_mysql.test:
Add test case for bug#18819: DELETE IGNORE hangs on foreign key parent
delete.
sql/sql_delete.cc:
Report success if we have got an error, but we are in DELETE IGNORE, and
the error is not fatal (if it is, it would be reported to the client).
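A hedged illustration of the fixed behaviour (hypothetical InnoDB tables):
create table parent (id int primary key) engine=innodb;
create table child (id int, foreign key (id) references parent (id)) engine=innodb;
insert into parent values (1);
insert into child values (1);
delete ignore from parent where id = 1;
The foreign key constraint makes the row delete fail with a non-fatal error;
with IGNORE the statement must still send an OK to the client instead of
leaving it waiting for a reply.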
Add command "exit" to mysqltest that stops processing further
commands and goes directly to result file processing.
client/mysqltest.c:
Add command "exit" to mysqltest
mysql-test/r/mysqltest.result:
Add command "exit" to mysqltest
mysql-test/t/mysqltest.test:
Add command "exit" to mysqltest
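Sketch of the new command in a test script (illustrative only, not a test
from this changeset):
select 1;
exit;
select 2;
Everything after "exit" is skipped; mysqltest stops reading commands and
goes directly to result file processing.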
client/mysqldump.c:
fflush stderr after printing of error message
mysql-test/include/have_lowercase0.inc:
Remove extra ;
mysql-test/r/rpl000015.result:
Update result
mysql-test/r/rpl_change_master.result:
Update result
mysql-test/r/rpl_error_ignored_table.result:
Update result
mysql-test/r/rpl_loaddata.result:
Update result
mysql-test/r/rpl_log.result:
Update result
mysql-test/r/rpl_max_relay_size.result:
Update result
mysql-test/r/rpl_replicate_do.result:
Update result
mysql-test/t/lowercase_table3.test:
Backport from 5.0
mysql-test/t/mysql_protocols.test:
Backport from 5.0
mysql-test/t/rpl000015.test:
Backport from 5.0
mysql-test/t/rpl_change_master.test:
Backport from 5.0
mysql-test/t/rpl_drop_db.test:
Backport from 5.0
mysql-test/t/rpl_error_ignored_table.test:
Backport from 5.0
mysql-test/t/rpl_loaddata.test:
Backport from 5.0
mysql-test/t/rpl_log-master.opt:
Use the --force-restart option in master.opt to force a restart for this test case
mysql-test/t/rpl_log.test:
Backport from 5.0
mysql-test/t/rpl_max_relay_size.test:
Backport from 5.0
mysql-test/t/rpl_replicate_do.test:
Backport from 5.0
We sometimes miss records when using the RANGE method if we have
partial key segments.
Example:
Create table t1(a char(2), key(a(1)));
insert into t1 values ('a'), ('xx');
select a from t1 where a > 'x';
We call index_read() passing the 'x' key and the HA_READ_AFTER_KEY flag
in handler::read_range_first(), which is wrong because we have
a partial key segment for the field and might miss records like 'xx'.
Fix: don't use open segments in such a case.
mysql-test/r/range.result:
Fix for bug #20732: Partial index and long sjis search with '>' fails sometimes
- test result.
mysql-test/t/range.test:
Fix for bug #20732: Partial index and long sjis search with '>' fails sometimes
- test case.
sql/opt_range.cc:
Fix for bug #20732: Partial index and long sjis search with '>' fails sometimes
- check if we have a partial key segment for an Item_func::GT_FUNC;
if so, don't set the NEAR_MIN flag, in order to use HA_READ_KEY_OR_NEXT
instead of HA_READ_AFTER_KEY.
sql/opt_range.h:
Fix for bug #20732: Partial index and long sjis search with '>' fails sometimes
- key segment 'flag' slot added.
sql/sql_select.cc:
Fix for bug #20732: Partial index and long sjis search with '>' fails sometimes
- test (HA_PART_KEY_SEG | HA_NULL_PART), as we split them in sql/table.cc
sql/table.cc:
Fix for bug #20732: Partial index and long sjis search with '>' fails sometimes
- set HA_NULL_PART flag instead of HA_PART_KEY_SEG in order not to mix them.
REPAIR TABLE could crash the server if there was not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but all statements that create an index by sort:
repair by sort, parallel repair, bulk insert.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
myisam/sort.c:
maxbuffer is the number of BUFFPEKs for repair. It is calculated
as records / keys, where keys is the number of keys that can be stored
in memory (myisam_sort_buffer_size). There must be sufficient
memory to store both the BUFFPEKs and the keys. That was checked
correctly before this patch. However, there is another
requirement that wasn't checked: there must be sufficient
memory for at least one key per BUFFPEK, otherwise repair
by sort/parallel repair cannot operate.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
mysql-test/r/repair.result:
A test case for BUG#23175.
mysql-test/t/repair.test:
A test case for BUG#23175.
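An illustrative repro of the out-of-memory path (values are made up; the
point is that the keys-per-BUFFPEK ratio drops below one):
set session myisam_sort_buffer_size = 4096;
create table t1 (a char(255) not null, key (a));
-- insert many rows here, so that records / keys yields a large maxbuffer
repair table t1;
Before the patch this could crash the server; now REPAIR must fail with an
error when the sort buffer cannot hold at least one key per BUFFPEK.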
When resolving unqualified name references MySQL was not
checking the item type of the reference. Thus
e.g. a string literal item, which by convention has a name
equal to its string value, would also work as a reference to
a SELECT list item or a table field.
Fixed by allowing only Item_ref or Item_field to be referenced by
(unqualified) name.
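Example of the ambiguity (hypothetical table):
create table t1 (b int);
insert into t1 values (1), (2);
select 'b' from t1 group by 'b';
Here 'b' is a string literal whose name happens to be "b"; before the fix it
could be resolved as a reference to column b, instead of being treated as a
constant (a single group).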
mysql-test/r/func_gconcat.result:
Bug #14019: group by converts literal string to column name
- removed non-deterministic test case: ORDER BY a constant
means no order.
mysql-test/r/group_by.result:
Bug #14019: group by converts literal string to column name
- test case
mysql-test/t/func_gconcat.test:
Bug #14019: group by converts literal string to column name
- removed non-deterministic test case: ORDER BY a constant
means no order.
mysql-test/t/group_by.test:
Bug #14019: group by converts literal string to column name
- test case
sql/sql_base.cc:
Bug #14019: group by converts literal string to column name
- resolve unqualified by name refs only for real references
We didn't set null_value to 0 in Item_func_compress::val_str() for
non-NULL results.
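A small illustration of the symptom (hypothetical table):
create table t1 (a varchar(10));
insert into t1 values (null), ('text');
select compress(a) is null from t1;
Once the NULL row had set null_value, the following non-NULL row was also
reported as NULL, because val_str() never reset null_value back to 0.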
mysql-test/r/func_compress.result:
Fix for bug #23254: COMPRESS(NULL) makes all further COMPRESS() calls on same Item return NULL
- test result.
mysql-test/t/func_compress.test:
Fix for bug #23254: COMPRESS(NULL) makes all further COMPRESS() calls on same Item return NULL
- test case.
sql/item_strfunc.cc:
Fix for bug #23254: COMPRESS(NULL) makes all further COMPRESS() calls on same Item return NULL
- set null_value.
REPAIR TABLE ... USE_FRM hangs on Linux.
If REPAIR TABLE ... USE_FRM is issued for a table that is located in a
database other than the default one, a server crash could happen.
In reopen_name_locked_table, take the database name from table_list (user
specified or default database) instead of from thd (default database).
Affects 4.1 only.
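Repro sketch (database and table names are illustrative):
create database d1;
create table d1.t1 (a int);
use test;
repair table d1.t1 use_frm;
Before the fix, reopen_name_locked_table looked the table up under the
default database (test) instead of d1.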
mysql-test/r/repair.result:
A test case for BUG#22562.
mysql-test/t/repair.test:
A test case for BUG#22562.
sql/sql_base.cc:
In reopen_name_locked_table, take the database name from table_list (user
specified or default database) instead of from thd (default database).
into polly.local:/home/kaa/src/maint/m41-maint--07OGk
sql/field.cc:
Auto merged
sql/item_timefunc.cc:
Auto merged
mysql-test/r/func_time.result:
Manually merged
mysql-test/t/func_time.test:
Manually merged
The bug is present only in 4.1 and will be null-merged to 5.0.
For InnoDB, check the value of thd->transaction.all.innodb_active_trans
instead of thd->transaction.stmt.innobase_tid to see if we really need to
roll back.
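A hedged way to observe the symptom (the exact triggering statements are
not spelled out in this description):
show status like 'Handler_rollback';
select 1;
show status like 'Handler_rollback';
With the fix, the counter only grows when an InnoDB transaction was actually
active and really had to be rolled back.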
mysql-test/r/innodb_mysql.result:
Added testcase for bug #22728 "Handler_rollback value is growing"
mysql-test/t/innodb_mysql.test:
Added testcase for bug #22728 "Handler_rollback value is growing"
sql/handler.cc:
For InnoDB, check the value of thd->transaction.all.innodb_active_trans
instead of thd->transaction.stmt.innobase_tid to see if we really need to
roll back.
Fix for bug#21354: (COUNT(*) = 1) not working in SELECT inside prepared
statement.
The problem was that during statement re-execution, if the result was
empty, the old result could be returned for group functions.
The solution is to implement a proper cleanup() method in group
functions.
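Sketch of the re-execution problem (hypothetical table):
create table t1 (i int);
insert into t1 values (1);
prepare stmt from 'select count(*) = 1 from t1';
execute stmt;  -- returns 1
delete from t1;
execute stmt;  -- must return 0; before the fix the old result could leak through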
mysql-test/r/ps.result:
Add result for bug#21354: (COUNT(*) = 1) not working in SELECT inside
prepared statement.
mysql-test/t/func_gconcat.test:
Add a comment that the test case is from bug#836.
mysql-test/t/ps.test:
Add test case for bug#21354: (COUNT(*) = 1) not working in SELECT inside
prepared statement.
sql/item_sum.cc:
Call clear() in Item_sum_count::cleanup().
sql/item_sum.h:
Add comments.
Add proper cleanup() methods.
Change Item_sum::no_rows_in_result() to call clear() instead of reset(),
as the latter also issues add(), and there is nothing to add when there
are no rows in result.
When the client program had its stdout file descriptor closed by the calling
shell, after some amount of work (enough to fill a socket buffer) the server
would complain about a packet error and then disconnect the client.
This is a serious security problem. If stdout is closed before mysql is
exec()d, then the first socket() call allocates file number 1 to communicate
with the server. Subsequent write()s to that file number (as when printing
results that come back from the database) then go to the server on the
command channel. So one should be able to craft data which, upon being
selected back from the server to the client, is injected into the command
stream and becomes valid MySQL protocol that does something nasty when sent
back to the server.
The solution is to explicitly close the file descriptor that we *printf() to,
so that the libc layer and the OS layer both agree that the file is closed.
BitKeeper/etc/collapsed:
BitKeeper file /home/cmiller/work/mysql/bug17583/my41-bug17583/BitKeeper/etc/collapsed
client/mysql.cc:
If standard output is not open (specifically, if dup() of its file number
fails) then we explicitly close it so that future uses of the file descriptor
behave correctly for a closed file.
mysql-test/r/mysql_client.result:
Prove that the problem of writing SQL output to the command socket no longer
exists.
mysql-test/t/mysql_client.test:
Prove that the problem of writing SQL output to the command socket no longer
exists.
into chilla.local:/home/mydev/mysql-4.1-bug8283-one
myisam/mi_check.c:
Auto merged
myisam/mi_packrec.c:
Auto merged
myisam/sort.c:
Auto merged
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it does not only rebuild all
indexes, but also the data file.
Non-quick parallel repair works so that there is one thread per
index. The first of these threads also rebuilds the data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. But the other
threads didn't know the new record positions and put the positions
from the old data file into the index.
The new design is such that there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that a common bit_buff and rec_buff were used for
the parallel repair of compressed tables. I changed this so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
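Illustrative trigger for the old bug (row counts are arbitrary):
set session myisam_repair_threads = 2;
create table t1 (a int, b char(200), key (a), key (b));
-- insert a few thousand rows, then delete some to punch holes in the data file
optimize table t1;
With holes present, the non-first repair threads used to build index entries
pointing at the old record positions, corrupting the table.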
include/my_sys.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
include/myisam.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of checksum calculation in mi_check.c.
'calc_checksum' is now in myisamdef.h:st_mi_sort_param.
myisam/mi_check.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Implemented a new parallel repair design.
Using a synchronized shared read/write cache.
Allowed for thread specific bit_buff, rec_buff, and calc_checksum.
myisam/mi_open.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added DBUG output.
myisam/mi_packrec.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Allowed for thread specific bit_buff and rec_buff.
myisam/myisamdef.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Commented on checksum calculation variables.
Allowed for thread specific bit_buff.
Added DBUG output for better table crash detection.
myisam/sort.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added implications of the new parallel repair design.
Renamed 'info' -> 'sort_param'.
Added DBUG output.
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test results.
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test cases.
mysys/mf_iocache.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
We now allow a writer to synchronize itself with the
readers of a shared cache. When all threads join in the lock,
the writer copies the data from its write buffer to the shared
read buffer.
into neptunus.(none):/home/msvensson/mysql/mysql-4.1-maint
mysql-test/r/subselect.result:
Auto merged
mysql-test/t/ps.test:
Auto merged
mysql-test/t/subselect.test:
Auto merged
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return the value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
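A short sketch of the first fixed case, LAST_INSERT_ID(expr) (hypothetical
table):
create table t1 (i int auto_increment primary key, j int);
insert into t1 values (null, 0);
select last_insert_id(100);  -- must return 100
select last_insert_id();     -- subsequent calls now see 100
This follows the documented behaviour: LAST_INSERT_ID(expr) returns expr and
stores it as the value returned by later LAST_INSERT_ID() calls.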
mysql-test/r/rpl_insert_id.result:
Add result for bug#21726: Incorrect result with multiple invocations
of LAST_INSERT_ID.
mysql-test/t/rpl_insert_id.test:
Add test case for bug#21726: Incorrect result with multiple invocations
of LAST_INSERT_ID.
sql/item_func.cc:
Add implementation of Item_func_last_insert_id::fix_fields(), where we
set THD::last_insert_id_used when statement calls LAST_INSERT_ID().
In Item_func_last_insert_id::val_int(), return THD::current_insert_id
if called like LAST_INSERT_ID(), otherwise return value of argument if
called like LAST_INSERT_ID(expr).
sql/item_func.h:
Add declaration of Item_func_last_insert_id::fix_fields().
sql/log_event.cc:
Do not set THD::last_insert_id_used on LAST_INSERT_ID_EVENT. Though we
know the statement will call LAST_INSERT_ID(), it hasn't been called yet.
sql/set_var.cc:
In sys_var_last_insert_id::value_ptr(), set THD::last_insert_id_used,
and return THD::current_insert_id for @@LAST_INSERT_ID.
sql/sql_class.h:
Update comments.
Remove THD::insert_id(), as it has lost its purpose now.
sql/sql_insert.cc:
Now it is OK to read THD::last_insert_id directly.
sql/sql_load.cc:
Now it is OK to read THD::last_insert_id directly.
sql/sql_parse.cc:
In mysql_execute_command(), remember THD::last_insert_id (first
generated value of the previous statement) in THD::current_insert_id,
which then will be returned for LAST_INSERT_ID() and @@LAST_INSERT_ID.
sql/sql_select.cc:
If "IS NULL" is replaced with "= <LAST_INSERT_ID>", use right value,
which is THD::current_insert_id, and also set THD::last_insert_id_used
to issue binary log LAST_INSERT_ID_EVENT.
sql/sql_update.cc:
Now it is OK to read THD::last_insert_id directly.
tests/mysql_client_test.c:
Add test case for bug#21726: Incorrect result with multiple invocations
of LAST_INSERT_ID.
mysql-test/mysql-test-run.pl:
Use same location for slave-load-tmpdir in all versions
mysql-test/mysql-test-run.sh:
Use same location for slave-load-tmpdir in all versions
mysql-test/r/rpl_loaddata.result:
Update result after changing slave-load-tmpdir to use a shorter path
mysql-test/r/rpl_loaddatalocal.result:
Update result after changing slave-load-tmpdir to use a shorter path
mysql-test/r/rpl_log.result:
Update result after changing slave-load-tmpdir to use a shorter path
mysql-test/t/rpl_loaddatalocal.test:
Use MYSQLTEST_VARDIR when specifying the path to load from (backport from 5.0)
Use the new command "remove_file" instead of "system rm"
Though this is not a storage engine specific problem, I was able to
repeat it with the BDB and NDB engines only; that was the
reason to add a test case to ndb_update.test. Different bad things
could happen as a result:
BDB removed duplicate rows, which is not expected;
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
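Sketch of the affected statement shape (hypothetical tables):
create table t1 (a int primary key) engine=ndbcluster;
create table t2 (a int) engine=ndbcluster;
insert into t1 values (1), (2);
insert into t2 values (1), (2);
update ignore t1, t2 set t1.a = 1 where t1.a = t2.a;
Updating both rows of t1 to the same key value causes a duplicate-key error
for the second row; with IGNORE the engine must be told to skip it, exactly
as single-table UPDATE does.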
mysql-test/r/ndb_update.result:
A test case for bug#21381.
mysql-test/t/ndb_update.test:
A test case for bug#21381.
sql/sql_update.cc:
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
into mysql.com:/home/gluh/MySQL/Merge/4.1-kt
include/m_ctype.h:
Auto merged
mysql-test/r/ctype_utf8.result:
Auto merged
mysql-test/t/ctype_utf8.test:
Auto merged
sql/table.cc:
Auto merged
sql/unireg.cc:
Auto merged
Improve "check_eol_junk" to detect junk even when multi-line comments are
in the way, i.e. take advantage of the fact that a # comment is always
terminated by a newline.
Add tests for the above.
client/mysqltest.c:
Improve "check_eol_junk" to detect junk although there are multi line comments in the way.
I.e take advantage of the fact that a # comment is always terminated by a new line
mysql-test/r/mysqltest.result:
Update result file
mysql-test/t/mysqltest.test:
Add test for improved check_eol_junk