mysql-test/r/func_sapdb.result:
Added test cases for date range edge cases.
mysql-test/r/func_time.result:
Added test cases for date range edge cases.
mysql-test/t/func_sapdb.test:
Added test cases for date range edge cases.
mysql-test/t/func_time.test:
Added test cases for date range edge cases.
'SELECT DISTINCT a,b FROM t1' should not use a temp table if there is a
unique index (or primary key) on a.
There are a number of other similar cases that can be computed without a
temp table: multi-part unique indexes, primary keys, or GROUP BY instead
of DISTINCT.
When a GROUP BY/DISTINCT clause contains all key parts of a unique
index, then it is guaranteed that the fields of the clause will be
unique, therefore we can optimize away GROUP BY/DISTINCT altogether.
This optimization has two effects:
* there is no need to create a temporary table to compute the
GROUP/DISTINCT operation (or the temporary table will be smaller if only
GROUP BY is removed and DISTINCT stays, or vice versa)
* the statement in effect becomes updatable in Connector/Java, because
the result set columns will be direct references to the primary key of
the table (instead of to the temporary table that they currently
reference).
Implemented a check that will optimize away GROUP BY/DISTINCT for queries like
the above.
Currently it works only for a single non-constant table in the FROM clause.
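A minimal self-contained sketch of such a coverage check (simplified
stand-in types and invented names, not the actual sql_select.cc
structures):

  #include <set>
  #include <string>
  #include <vector>

  struct KeyDef {
    bool unique;                      // unique index or primary key?
    std::vector<std::string> parts;   // column names, in index order
  };

  // GROUP BY/DISTINCT can be optimized away when every key part of some
  // unique index appears in the clause: the rows are already unique.
  bool clause_covers_unique_key(const std::set<std::string> &clause_fields,
                                const std::vector<KeyDef> &keys)
  {
    for (const KeyDef &key : keys) {
      if (!key.unique)
        continue;
      bool covered= true;
      for (const std::string &part : key.parts)
        if (!clause_fields.count(part)) { covered= false; break; }
      if (covered)
        return true;                  // no temp table needed
    }
    return false;
  }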
mysql-test/r/distinct.result:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- test case
mysql-test/t/distinct.test:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- test case
sql/sql_select.cc:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- disable GROUP BY if it contains the fields of a unique index.
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
mysql-test/mysql-test-run.sh:
Auto merged
ndb/include/kernel/GlobalSignalNumbers.h:
Auto merged
ndb/src/kernel/blocks/dbdict/Dbdict.cpp:
Auto merged
ndb/src/kernel/blocks/dbdict/Dbdict.hpp:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
Fix a minor issue with Bug#16206 (bdb.test failed if the tree is compiled
without blackhole).
include/my_sys.h:
Change declaration of my_strdup_with_length to accept const char *,
not const byte *: in 5 places out of 6 where this function is used,
it's being passed char *, not byte * (see the call-site sketch below)
mysql-test/r/bdb.result:
Remove dependency on an optional engine (updated test results).
mysql-test/t/bdb.test:
Remove dependency on an optional engine.
mysys/my_malloc.c:
my_strdup_with_length: const byte * -> const char *
mysys/safemalloc.c:
my_strdup_with_length: const byte * -> const char *
sql/ha_federated.cc:
my_strdup_with_length: const byte * -> const char *
sql/log_event.cc:
my_strdup_with_length: const byte * -> const char *
sql/set_var.cc:
my_strdup_with_length: const byte * -> const char *
sql/sql_class.h:
Change db_length type to uint from uint32 (see also table.h)
sql/table.h:
Change the type of db_length to uint from uint32: LEX_STRING uses uint
for its length, and we need a small, consistent set of types for storing
lengths to minimize casts and compile failures.
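A hedged sketch of a typical my_strdup_with_length call site after the
signature change noted above for include/my_sys.h (the return type and
the MYF flag are assumptions; only the parameter type change is from the
note itself):

  #include <my_global.h>   /* uint, MYF() */
  #include <my_sys.h>      /* my_strdup_with_length() */

  /* After the change, callers that hold a char* need no cast to byte*. */
  char *dup_with_length(const char *from, uint length)
  {
    return my_strdup_with_length(from, length, MYF(MY_WME));
  }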
allow user to specify scan batch size in readTuples
ndb/include/ndbapi/NdbIndexScanOperation.hpp:
Allow user to specify batch size
ndb/include/ndbapi/NdbScanOperation.hpp:
Allow user to specify batch size
ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
Fix so that last row works even if batch is complete
ndb/src/ndbapi/NdbReceiver.cpp:
Allow user to specify batch size
ndb/src/ndbapi/NdbScanOperation.cpp:
Allow user to specify batch size
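A hedged usage sketch of the extended API (parameter order and enum
names follow the public NDB API as I understand it; defaults may differ
by version):

  #include <NdbApi.hpp>

  int start_batched_scan(NdbScanOperation *op)
  {
    // Read-lock scan, no special scan flags, parallelism chosen by NDB,
    // and an explicit batch size of 64 rows per batch (the new knob).
    return op->readTuples(NdbOperation::LM_Read,
                          0,    /* scan_flags */
                          0,    /* parallel */
                          64);  /* batch */
  }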
Very complex select statements can create temporary tables
that are too big to be represented as a MyISAM table.
This was not checked at table creation time, but only at
open time. The result was an attempt to delete the
"impossible" table.
But if the server is built --with-raid, MyISAM tries to
open the table before deleting the files. It needs to find
out if the table uses the raid support and how many raid
chunks there are. This is done with an open "for repair",
which will almost always succeed.
But in this case we have an "impossible" table. The open
failed. Hence the files were not deleted. Also the error
message was a bit unspecific.
I turned an open error in this situation into the assumption
that the table has no raid support. Thus we try to delete the
normal data file. This may, however, leave existing raid
chunks behind.
I also added a check in mi_create() to prevent the creation
of an "impossible" table. A more decriptive error message is
given in this case.
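The new check could look roughly like this sketch (illustrative only;
the real code uses MyISAM's own error codes and header layout):

  #include <cstdint>

  // The index-file header stores its own length in a 16-bit field, so a
  // computed header length above 64KB - 1 cannot be represented.
  static int check_table_header_length(uint32_t info_length)
  {
    if (info_length > UINT16_MAX)
      return 1;   /* refuse to create the "impossible" table */
    return 0;
  }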
No test case. The required select statement is way too
large for the test suite. I added a test script to the
bug report.
myisam/mi_create.c:
Bug#11824 - internal /tmp/*.{MYD,MYI} files remain, causing subsequent queries to fail
Added a check to mi_create() that the table description
header of the index file does not exceed 64KB. The header
has only 16 bits to encode its length.
myisam/mi_delete_table.c:
Bug#11824 - internal /tmp/*.{MYD,MYI} files remain, causing subsequent queries to fail
Interpret an error at table open as the table not having a raid
configuration. Thus try to delete the
normal data file, but leave behind raid chunks if
they exist.
- make sure to allocate just enough pages in the fragments by using the actual
row count from the backup, to avoid over-allocation of pages to fragments, and
thus avoid the bug
ndb/include/kernel/GlobalSignalNumbers.h:
Bug #19852 Restoring backup made from cluster with full data memory fails
- distribute fragment complete to all participants to update row count
ndb/include/kernel/signaldata/BackupContinueB.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- time-slice writing of fragment info to the ctl file
ndb/include/kernel/signaldata/BackupImpl.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
- new signal fragment complete to all participants
ndb/include/kernel/signaldata/BackupSignalData.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/include/kernel/signaldata/DictTabInfo.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add min and max rows to dict tab info
ndb/include/kernel/signaldata/LqhFrag.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to add frag req
ndb/include/kernel/signaldata/TupFrag.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to add frag req
ndb/include/ndbapi/NdbDictionary.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added get/set of min max rows
ndb/src/common/debugger/signaldata/BackupImpl.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/BackupSignalData.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/DictTabInfo.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to dict tab info
ndb/src/common/debugger/signaldata/LqhFrag.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/backup/Backup.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/Backup.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupFormat.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupInit.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new signal fragment complete to all participants
ndb/src/kernel/blocks/dbdict/Dbdict.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added max and min rows to dict table object
ndb/src/kernel/blocks/dbdict/Dbdict.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added max and min rows to dict table object
ndb/src/kernel/blocks/dblqh/Dblqh.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
- move memory allocation for the fragment to after attributes are added, to get the correct head size
- allocate pages to fragments according to min rows setting
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- grow page allocation starting from 2 irrespective of first page allocation
ndb/src/mgmsrv/MgmtSrvr.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bits on bytes and records
ndb/src/mgmsrv/MgmtSrvr.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bits on bytes and records
ndb/src/ndbapi/NdbDictionary.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/tools/restore/Restore.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add retrieval of fragment info
ndb/tools/restore/Restore.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add retrieval of fragment info
ndb/tools/restore/consumer_restore.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- set min in restore to the actual row count (this is the actual bug fix)
sql/ha_ndbcluster.cc:
Bug #19852 Restoring backup made from cluster with full data memory fails
- set min and max rows according to sql definition
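A hedged sketch of the new dictionary calls (the setter names come from
the NdbDictionary.hpp note above; the surrounding function is
illustrative):

  #include <NdbApi.hpp>

  void set_row_estimates(NdbDictionary::Table &tab,
                         Uint64 min_rows, Uint64 max_rows)
  {
    // Min rows drives the up-front page allocation per fragment; restore
    // sets it to the actual row count taken from the backup.
    tab.setMinRows(min_rows);
    tab.setMaxRows(max_rows);
  }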
When building the UPDATE query to send to the remote server, the
federated storage engine built the query incorrectly if it was updating
a field to be NULL.
Thanks to Björn Steinbrink for an initial patch for the problem.
mysql-test/r/federated.result:
Add new results
mysql-test/t/federated.test:
Add new regression test
sql/ha_federated.cc:
Fix logic of how fields are added to SET and WHERE clauses of an
UPDATE statement. Fields that were NULL were being handled incorrectly.
Also reorganizes the code a little bit so the update of the two
clauses is consistent.
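A simplified sketch of the corrected SET-clause construction (stand-in
std::string types; the real ha_federated.cc code works with Field and
String objects):

  #include <string>

  void append_set_assignment(std::string &query,
                             const std::string &field_name,
                             bool field_is_null,
                             const std::string &quoted_value)
  {
    query+= field_name;
    if (field_is_null)
      query+= " = NULL";             // the previously mishandled case
    else
      query+= " = " + quoted_value;  // value already quoted/escaped
    query+= ", ";                    // caller trims the last separator
  }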
For compatibility, don't use {..,..} in pattern matching
make_binary_distribution.sh:
Added .dylib and .sl as shared library extensions
scripts/make_binary_distribution.sh:
Added .dylib and .sl as shared library extensions
scripts/make_sharedlib_distribution.sh:
For compatibility, don't use {..,..} in pattern matching
into mysql.com:/opt/local/work/mysql-5.0-17199
mysql-test/r/create.result:
Auto merged
mysql-test/t/create.test:
Auto merged
sql/item_strfunc.cc:
Auto merged
sql/log_event.cc:
Auto merged
sql/slave.cc:
Auto merged
sql/sp_head.cc:
Auto merged
sql/sql_class.h:
Auto merged
sql/sql_db.cc:
Auto merged
sql/sql_insert.cc:
Auto merged
sql/sql_lex.h:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/sql_table.cc:
Auto merged
sql/sql_yacc.yy:
Auto merged
mysql-test/r/sp.result:
SCCS merged
mysql-test/t/sp.test:
SCCS merged
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and always make a deep copy
(strmake, strdup)
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- the table->db member of a TABLE_LIST object should always be initialized
in the parser or by the creator of the object.
For prepared statements this means that if the current database is changed
after a statement is prepared, the database that was current at prepare
time remains active. This also means that you cannot prepare a statement
that implicitly refers to the current database if the latter is not set.
This is not documented and therefore needs documentation. For almost all
SQL statements this is NOT a change in behaviour; the exceptions are:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1
Until this patch, t1 or t2 in these statements could be evaluated at the
first execution of the prepared statement.
CURRENT_DATABASE() still works OK and is evaluated at every execution
of the prepared statement.
Note that in stored routines this is not an issue, as the default
database is the database of the stored routine, and the "use" statement
is prohibited in stored routines.
This patch makes the use of check_db_used obsolete (it was not really
used in the old code either), along with all other places that check
table->db and assign it from THD::db if it is NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place uses thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
(via strcmp/my_strcasecmp, whichever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
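A minimal sketch of the deep-copy setters this implies (member names and
exact bodies are assumptions; my_strdup_with_length and the MY_* flags
are the mysys facilities mentioned elsewhere in this log):

  void THD::set_db(const char *new_db, uint new_db_len)
  {
    /* Free the old value; never alias a buffer owned by someone else */
    my_free(db, MYF(MY_ALLOW_ZERO_PTR));
    db= new_db ? my_strdup_with_length(new_db, new_db_len, MYF(MY_WME)) : NULL;
    db_length= db ? new_db_len : 0;
  }

  void THD::reset_db(char *new_db, uint new_db_len)
  {
    /* Shallow, temporary reset: the caller guarantees the lifetime */
    db= new_db;
    db_length= new_db_len;
  }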
See also comments to individual files.
mysql-test/r/create.result:
Modify the result file: a database can never be NULL.
mysql-test/r/ps.result:
Update test results (Bug#17199 et al)
mysql-test/r/sp.result:
Update test results (Bug#17199 et al)
mysql-test/t/create.test:
Update the id of the returned error.
mysql-test/t/ps.test:
Add test coverage for prepared statements and current database. In scope of
work on Bug#17199 "Problem when view calls function from another database."
mysql-test/t/sp.test:
Add a test case for Bug#17199 "Problem when view calls function from another
database." and Bug#18444 "Fully qualified stored function names don't work
correctly in SELECT statements". Test a complementary problem.
sql/item_strfunc.cc:
Touch the code that reads thd->db (cleanup).
sql/log_event.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/slave.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/slave.h:
Remove a declaration for a method that is used only in one module.
sql/sp.cc:
Rewrite sp_use_new_db: this is a cleanup that I needed in order to understand
this function and ensure that it has no bugs.
sql/sp.h:
Add a new declaration for sp_use_new_db (uses LEX_STRINGs) and a comment.
sql/sp_head.cc:
- drop sp_name_current_db_new - a creator of sp_name class that was used
when sp_name was created for an identifier without an explicitly initialized
database. Now we pass thd->db to the constructor of sp_name right in the
parser.
- rewrite sp_head::init_strings: name->m_db is always set now
- use the new variant of sp_use_new_db
- we don't need to update thd->db with the SP MEM_ROOT pointer anymore when
parsing a stored procedure, as no one will refer to it (yes!)
sql/sp_head.h:
- remove unneeded methods and members
sql/sql_class.h:
- introduce 3 THD methods to work with THD::db:
.set_db to assign the current database
.reset_db to reset the current database (temporarily) or set it to NULL
.opt_copy_db_to - to deep-copy thd->db to a pointer if it's not NULL
sql/sql_db.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/sql_insert.cc:
- replace checks with asserts: table_list->db must always be set in the parser.
sql/sql_lex.h:
- add a comment
sql/sql_parse.cc:
- implement the invariant described in the changeset comment.
- remove juggling with lex->sphead in SQLCOM_CREATE_PROCEDURE:
now db_load_routine uses its own LEX object and doesn't damage the main
LEX.
- add DBUG_ASSERT(0) to unused "check_db_used"
sql/sql_table.cc:
- replace a check with an assert (table_ident->db)
sql/sql_trigger.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/sql_udf.cc:
- use thd->set_db instead of direct modification of thd->db
sql/sql_view.cc:
- replace a check with an assert (view->db)
sql/sql_yacc.yy:
- make sure that we always copy table->db or name->db or ident->db or
select_lex->db from thd->db if the former is not set. If thd->db
is not set but is accessed, return an error.
sql/tztime.cc:
- be nice, never copy thd->db by pointer.
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
INSERT DELAYED crashed in 5.0 on a table with a varchar that
could be NULL and was created pre-5.0 (Bugs 16218 and 13707).
INSERT DELAYED corrupted data in 5.0 on a table with varchar
fields that was created pre-5.0 (Bugs 17294 and 16611).
In the case of INSERT DELAYED the open table is copied from the
delayed insert thread to be able to create a record for the
queue. When copying the fields, a method was used that
converted old varchar fields to new varchar fields and did not
set up some pointers into the record buffer of the table.
The field conversion was responsible for the misinterpretation
of the record contents by the delayed insert thread. The wrong
pointer setup was responsible for the crashes.
For Bug 13707 (Server crash with INSERT DELAYED on MyISAM table)
I fixed the above mentioned method to set up one of the pointers.
For Bug 16218 I set up the other pointers too.
But when looking at the corruptions I became aware that converting
the field type was totally wrong for INSERT DELAYED. The copied
table is used to create a record that is to be sent to the
delayed insert thread. Of course it can interpret the record
correctly only if all field types are the same in both table
objects.
So I revoked the fix for Bug 13707 and changed the new_field()
method so that it can suppress conversions.
No test case as this is a migration problem. One needs to
create a table with 4.x and use it with 5.x. I added two
test scripts to the bug report.
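A stand-in sketch of the keep_type idea (not the real Field hierarchy;
clone_exact/clone_upgraded are invented names for the two behaviours):

  struct Field {
    virtual ~Field() {}
    virtual Field *clone_exact() const= 0;      // same type, same layout
    virtual Field *clone_upgraded() const= 0;   // may convert old VARCHAR

    Field *new_field(bool keep_type) const
    {
      // INSERT DELAYED passes keep_type == true: the copied table must
      // have exactly the same field types, or the delayed insert thread
      // misinterprets the record buffer.
      return keep_type ? clone_exact() : clone_upgraded();
    }
  };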
sql/field.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
sql/field.h:
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
sql/sql_insert.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added comments. Made small style fixes.
Used the new parameter 'keep_type' of Field::new_field()
to avoid field type conversion. The table copy must have
exactly the same types of fields as the original table.
Otherwise the record contents created by the foreground
thread could be misinterpreted by the delayed insert thread.
sql/sql_select.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
sql/sql_trigger.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
sql/table.cc:
Bug#16218 - Crash on insert delayed
Bug#17294 - INSERT DELAYED putting an \n before data
Bug#16611 - INSERT DELAYED corrupts data
Bug#13707 - Server crash with INSERT DELAYED on MyISAM table
Combined as Bug#16218.
Added parameter 'keep_type' to Field::new_field().
Undid the change from Bug 13707 (Server crash with INSERT
DELAYED on MyISAM table).
I solved all four bugs in sql/sql_insert.cc by making exact
duplicates of the fields. The new_field() method converts
certain field types, which is wrong for INSERT DELAYED.
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
We need to wait for the release of a global read lock if one
is set before every operation that requires a write lock.
But we must not wait if we have locked tables by LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
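The helper mentioned below (broadcast_refresh() in sql/lock.cc)
presumably boils down to something like this sketch:

  #include <pthread.h>

  /* Assumed globals, named as in the notes below. */
  extern pthread_cond_t COND_refresh;
  extern pthread_cond_t COND_global_read_lock;

  /* Every wakeup on COND_refresh must also wake threads waiting for the
     global read lock, or they can sleep forever (deadlock). */
  void broadcast_refresh(void)
  {
    pthread_cond_broadcast(&COND_refresh);
    pthread_cond_broadcast(&COND_global_read_lock);
  }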
mysql-test/r/lock_multi.result:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added test results.
mysql-test/t/lock_multi.test:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added tests for possible deadlocks that did not occur
with the stress test suite.
mysys/thr_lock.c:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added a protection against an infinite loop that occurs
with the test case for Bug #20662.
sql/lock.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
Added the definition of a new function that signals
COND_global_read_lock whenever COND_refresh is signalled.
sql/mysql_priv.h:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added a declaration for a new function that signals
COND_global_read_lock whenever COND_refresh is signalled.
sql/sql_base.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
sql/sql_handler.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
sql/sql_insert.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Removed global read lock handling from inside of
INSERT DELAYED. It is handled on a higher level now.
sql/sql_parse.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Wait for the release of a global read lock if one is set
before every operation that requires a write lock.
But don't wait if locked tables exist already.
sql/sql_table.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Removed global read lock handling from inside of
CREATE TABLE. It is handled on a higher level now.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
into mysql.com:/home/tnurnberg/mysql-5.0-maint-18462
mysql-test/r/mysqldump.result:
Auto merged
mysql-test/t/mysqldump.test:
Auto merged
client/mysqldump.c:
SCCS merged
change names of some undocumented ndb status variables to better reflect what
their values mean
sql/ha_ndbcluster.cc:
rename some status variables to better reflect what they show.
part 1 - make sure return code is propagated from request tracker
ndb/src/kernel/vm/RequestTracker.hpp:
propagate return value
ndb/src/kernel/vm/SafeCounter.hpp:
make sure the object is not initialized if seize() fails, so that the destructor doesn't assert
Sometimes the helper connection (that is watching for the main connection
to time out) would itself time out first, causing the test to fail.
mysql-test/t/wait_timeout.test:
Increase the connection timeout in connection wait_con so we will not lose
the connection that is watching for the real wait_timeout to trigger.
sql/sql_table.cc:
Check for FN_DEVCHAR in the table name just before file creation. This allows temporary tables to contain FN_DEVCHAR in the name.
sql/table.cc:
Removed the check for FN_DEVCHAR at this level because it prevented Windows from creating any table with FN_DEVCHAR in the name.
Problem:
mysqld --collation-server=xxx --character-set-server=yyy
didn't work as expected: collation_server was set not to xxx,
but to the default collation of character set "yyy".
With a different argument order it worked as expected:
mysqld --character-set-server=yyy --collation-server=xxx
Fix:
initialize default_collation_name to 0
when processing --character-set-server,
but only if --collation-server has not been specified
on the command line.
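A sketch of the corrected option handling (variable names follow the
text above; this is not the literal mysqld.cc code):

  static const char *default_character_set_name= 0;
  static const char *default_collation_name= 0;
  static bool collation_server_given= false;

  static void handle_character_set_server(const char *value)
  {
    default_character_set_name= value;
    /* Before the fix this reset was unconditional, so an earlier
       --collation-server on the command line was silently lost. */
    if (!collation_server_given)
      default_collation_name= 0;  /* charset's default collation */
  }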
mysql-test/r/ctype_ucs2_def.result:
Adding test case
mysql-test/t/ctype_ucs2_def-master.opt:
Specifying variables in reverse order, to cover the bug.
mysql-test/t/ctype_ucs2_def.test:
Adding test case
sql/mysqld.cc:
Don't clear default_collation_name when processing
--character-set-server if collation has already
been specified using --collation-server
The problem was a call to convert_dirname() with a destination buffer
that did not have room for the trailing slash added by that function.
This could cause the instance manager to crash in some cases.
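A hedged sketch of a safe call (FN_REFLEN is the usual mysys path-buffer
size; convert_dirname is declared in my_sys.h):

  #include <my_sys.h>
  #include <cstring>

  void to_dirname(char *dst /* >= FN_REFLEN bytes */, const char *src)
  {
    /* The destination must be larger than the source: convert_dirname()
       may append a trailing slash (plus the terminating NUL). */
    convert_dirname(dst, src, src + strlen(src));
  }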
mysys/mf_dirname.c:
Clarify in comments that the convert_dirname destination must be larger
than the source to accommodate a trailing slash.
server-tools/instance-manager/instance_options.cc:
Fix buffer overrun.
into moonbone.local:/work/tmp_merge-5.0-opt-mysql
mysql-test/r/key.result:
Auto merged
mysql-test/t/key.test:
Auto merged
sql/table.cc:
Auto merged
support-files/mysql.spec.sh:
Auto merged