mysql-test/t/federated.test:
Use --replace_result to make test work on non-standard ports.
mysql-test/r/federated.result:
Use --replace_result to make test work on non-standard ports.
into mysql.com:/home/mydev/mysql-5.0-bug11824
myisam/mi_key.c:
Auto merged
mysql-test/r/func_sapdb.result:
Auto merged
mysql-test/r/gis-rtree.result:
Auto merged
mysql-test/r/symlink.result:
Auto merged
mysql-test/t/func_sapdb.test:
Auto merged
mysql-test/t/gis-rtree.test:
Auto merged
sql/item_timefunc.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
myisam/mi_check.c:
Manual merge
mysql-test/r/func_time.result:
Manual merge
mysql-test/t/func_time.test:
Manual merge
into mysql.com:/home/mydev/mysql-5.0-bug11824
configure.in:
Auto merged
mysql-test/r/federated.result:
Auto merged
mysql-test/t/federated.test:
Auto merged
sql/ha_federated.cc:
Auto merged
sql/mysqld.cc:
Auto merged
mysql-test/t/federated.test:
Use --replace_result to make test work on non-standard ports.
mysql-test/r/federated.result:
Use --replace_result to make test work on non-standard ports.
Pushbuild fixes to result file, test, and header file for federated.
mysql-test/r/federated.result:
BUG #19773
Pushbuild fixes - result file had hard-coded port
mysql-test/t/federated.test:
BUG #19773
Pushbuild fixes: the test was missing --replace_result
sql/ha_federated.h:
BUG #19773
HP-UX and Windows builds failed on a method declaration that used both a variable named row and *row
A corrupt table with dynamic record format can crash the
server when trying to select from it.
I fixed the crash that resulted from the particular type
of corruption that has been reported for this bug.
No test case. To test it, one needs a table with a very special
corruption. The bug report contains a file with such a table.
myisam/mi_dynrec.c:
Bug#19835 - Binary copy of corrupted tables crash the server when issuing a query
Added a protection against corrupted records. A dynamic
record header with invalid 'next' pointer could trigger
the assert in _mi_get_block_info(). Now I avoid this by
reporting a corruption error.
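As an illustration of the approach described above (a hedged sketch, not the actual mi_dynrec.c code; BlockHeader, check_next_block and the error values are hypothetical), the idea is to validate a record block's 'next' pointer against the data file length and report a corruption error instead of letting an assertion fire:

#include <cstdint>
#include <cstdio>

struct BlockHeader {
  uint64_t next_filepos;   // position of the next block of this record
  uint32_t data_length;    // bytes of record data stored in this block
};

enum ReadStatus { READ_OK, ERR_CRASHED };

// A corrupted dynamic-record header can point past the end of the data
// file; validate it and report corruption instead of asserting.
ReadStatus check_next_block(const BlockHeader &header, uint64_t data_file_length)
{
  if (header.next_filepos >= data_file_length)
  {
    std::fprintf(stderr, "record block points outside the data file: corruption\n");
    return ERR_CRASHED;   // caller turns this into a "table is crashed" error
  }
  return READ_OK;
}

int main()
{
  BlockHeader bad{1u << 20, 100};          // 'next' pointer far beyond EOF
  return check_next_block(bad, 4096) == ERR_CRASHED ? 0 : 1;
}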
CHECK TABLE could complain about a fully intact spatial index.
A wrong comparison operator was used for table checking.
The result was that it checked for non-matching spatial keys.
This succeeded if at least two different keys were present,
but failed if only the matching key was present.
I fixed the key comparison.
myisam/mi_check.c:
Bug#17877 - Corrupted spatial index
Fixed the comparison operator for checking a spatial index.
Using MBR_EQUAL | MBR_DATA to compare for equality and
include the data pointer in the comparison. The latter
finds the index entry that points to the current record.
This is necessary for non-unique indexes.
The old operator, SEARCH_SAME, is unknown to the rtree
search functions and handled like MBR_DISJOINT.
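A minimal sketch of why the flag choice matters, assuming simplified stand-ins (the enum values and KeyEntry below are illustrative, not the MyISAM rtree types): a flag the rtree comparison does not recognise matches nothing, while MBR_EQUAL | MBR_DATA matches the exact bounding box plus the row pointer, which is what a non-unique index needs:

#include <cstdint>

enum SearchFlag : unsigned {
  MBR_EQUAL   = 1u << 0,   // bounding boxes must match exactly
  MBR_DATA    = 1u << 1,   // also compare the row pointer stored in the key
  SEARCH_SAME = 1u << 7    // b-tree flag, unknown to the rtree search code
};

struct KeyEntry {
  double xmin, xmax, ymin, ymax;   // minimal bounding rectangle
  uint64_t row_pos;                // data file position of the row
};

// Returns true if 'a' matches 'b' under the given flags. An unrecognised
// flag matches nothing, which is how SEARCH_SAME ended up behaving like
// MBR_DISJOINT and made CHECK TABLE miss an intact key.
bool rtree_key_matches(const KeyEntry &a, const KeyEntry &b, unsigned flags)
{
  if (flags & MBR_EQUAL)
  {
    if (a.xmin != b.xmin || a.xmax != b.xmax ||
        a.ymin != b.ymin || a.ymax != b.ymax)
      return false;
    // In a non-unique index several entries can share one MBR; the row
    // pointer selects the entry that belongs to the current record.
    if ((flags & MBR_DATA) && a.row_pos != b.row_pos)
      return false;
    return true;
  }
  return false;   // unknown flag: treated as "no match" (disjoint)
}

int main()
{
  KeyEntry k{0, 1, 0, 1, 42};
  bool with_new_flags = rtree_key_matches(k, k, MBR_EQUAL | MBR_DATA);
  bool with_old_flag  = rtree_key_matches(k, k, SEARCH_SAME);
  return (with_new_flags && !with_old_flag) ? 0 : 1;
}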
myisam/mi_key.c:
Bug#17877 - Corrupted spatial index
Added a missing DBUG_RETURN.
myisam/rt_index.c:
Bug#17877 - Corrupted spatial index
Included the data pointer in the copy of the search key.
This is necessary for searching the index entry that points
to a specific record if the search_flag contains MBR_DATA.
myisam/rt_mbr.c:
Bug#17877 - Corrupted spatial index
Extended the RT_CMP() macro with an assert for an
unexpected comparison operator.
mysql-test/r/gis-rtree.result:
Bug#17877 - Corrupted spatial index
The test result.
mysql-test/t/gis-rtree.test:
Bug#17877 - Corrupted spatial index
The test case.
sp_grant_privileges(), the function that GRANTs EXECUTE + ALTER privs on an SP,
did so by creating a user entry with no password; mysql_routine_grant() would then
write that "change" to the user table.
mysql-test/r/sp-security.result:
prove that creating a stored procedure will not destroy the creator's password
mysql-test/t/sp-security.test:
prove that creating a stored procedure will not destroy the creator's password
sql/sql_acl.cc:
get password from ACLs, convert to correct format, and use it when
forcing GRANTS for SPs
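A hedged sketch of that idea (AclUser, find_acl_user and GrantSpec are hypothetical stand-ins, not the sql_acl.cc types): look the creator up in the in-memory ACLs and reuse the stored, already-hashed password when building the implicit GRANT, so an empty password is never written back:

#include <map>
#include <optional>
#include <string>
#include <utility>

struct AclUser { std::string host, user, hashed_password; };

static std::map<std::pair<std::string, std::string>, AclUser> acl_cache;

std::optional<AclUser> find_acl_user(const std::string &host, const std::string &user)
{
  auto it = acl_cache.find({host, user});
  if (it == acl_cache.end())
    return std::nullopt;
  return it->second;
}

struct GrantSpec { std::string user, host, password; bool password_is_hashed; };

// Build the implicit "GRANT EXECUTE, ALTER ROUTINE ... TO creator" request.
GrantSpec build_sp_creator_grant(const std::string &host, const std::string &user)
{
  GrantSpec grant{user, host, "", true};
  if (auto acl = find_acl_user(host, user))
    grant.password = acl->hashed_password;   // keep the existing credential
  // If no ACL entry exists the password stays empty, as before.
  return grant;
}

int main()
{
  acl_cache[{"localhost", "creator"}] = {"localhost", "creator", "*HASHEDPASSWORD"};
  GrantSpec grant = build_sp_creator_grant("localhost", "creator");
  return grant.password == "*HASHEDPASSWORD" ? 0 : 1;
}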
Final-review fixes per Monty, pre-push. OK'd for
push. Please see each file's comments.
mysql-test/r/federated.result:
BUG #19773
Results for multi-table deletes, updates
mysql-test/t/federated.test:
BUG #19773
Test multi-table update and delete. Added DROP TABLE to the end of the previous test.
sql/ha_federated.cc:
BUG #19773
Post-review changes, per Monty. 3rd patch, OK'd for push.
- Added index_read_idx_with_result_set, which uses the result set passed to it
- Hash by entire connection scheme
- Protected the store_result result set used for table scans by adding a method-level
result set to index_read_idx and index_read, which is passed to index_read_with_result;
that method in turn iterates over the single record via read_next.
This is a change from having two result sets in the first two patches.
This keeps the code clean and avoids the need for yet another result set.
- Rewrote ::position and ::rnd_pos to store the position: if there is a primary key,
use it; if not, use the record buffer.
- Rewrote get_share to key the hash on the connect string instead of the table name
- delete_row: added subtraction of affected->rows from "records"
- Added read_next to handle what rnd_next used to do (converting a raw record
to a query and vice versa)
- Removed many DBUG_PRINT lines
- Removed memset initialisation, since the subsequent loop accomplishes the same
- Removed unnecessary mysql_free_result lines
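A sketch of the "hash by entire connection scheme" point above, under the assumption of simplified types (ShareRegistry and FederatedShare are illustrative; the engine uses its own HASH): keying shares on the full connect string means two local tables that point at different remote servers never share state:

#include <memory>
#include <string>
#include <unordered_map>

struct FederatedShare {
  std::string connect_string;   // the full scheme, also used as the hash key
  unsigned use_count = 0;
};

class ShareRegistry {
public:
  std::shared_ptr<FederatedShare> get_share(const std::string &connect_string)
  {
    auto &slot = shares_[connect_string];        // key: whole connect string
    if (!slot)
      slot = std::make_shared<FederatedShare>(FederatedShare{connect_string, 0});
    slot->use_count++;
    return slot;
  }

  void free_share(const std::shared_ptr<FederatedShare> &share)
  {
    if (--share->use_count == 0)
      shares_.erase(share->connect_string);
  }

private:
  std::unordered_map<std::string, std::shared_ptr<FederatedShare>> shares_;
};

int main()
{
  ShareRegistry reg;
  auto a = reg.get_share("mysql://root@host1:3306/db/t1");
  auto b = reg.get_share("mysql://root@host2:3306/db/t1");  // same table name,
  return (a != b) ? 0 : 1;                                   // different share
}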
sql/ha_federated.h:
BUG #19773
Fixed "SET " to " SET " to make sure built statements are built with
"UPDATE `t1` SET .." instead of "UPDATE `t1`SET"
This finishes bug#18516, as far as "generic RPMs" are concerned.
support-files/mysql.spec.sh:
Revert all previous attempts to call "mysql_upgrade" during RPM upgrade,
there are some more aspects which need to be solved before this is possible.
For now, just ensure the binary "mysql_upgrade" is delivered and installed.
This finishes bug#18516, as far as "generic RPMs" are concerned.
Produce a warning if DATA/INDEX DIRECTORY is specified in
ALTER TABLE statement.
Ignoring of these options is documented in the symbolic links
section of the manual.
mysql-test/r/symlink.result:
Modified test result according to fix for BUG#1662.
sql/sql_parse.cc:
Produce a warning if DATA/INDEX DIRECTORY is specified in
ALTER TABLE statement.
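A rough sketch of the intended behaviour, with hypothetical names (AlterOptions, push_warning, check_ignored_directory_options) standing in for the server internals: warn that the option is ignored and drop it instead of silently discarding it:

#include <cstdio>
#include <string>

struct AlterOptions {
  std::string data_directory;
  std::string index_directory;
};

void push_warning(const char *msg) { std::fprintf(stderr, "Warning: %s\n", msg); }

void check_ignored_directory_options(AlterOptions &opts)
{
  if (!opts.data_directory.empty() || !opts.index_directory.empty())
  {
    push_warning("DATA DIRECTORY / INDEX DIRECTORY option ignored");
    opts.data_directory.clear();    // the options have no effect in ALTER TABLE
    opts.index_directory.clear();
  }
}

int main()
{
  AlterOptions o{"/mnt/data", ""};
  check_ignored_directory_options(o);
  return o.data_directory.empty() ? 0 : 1;
}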
manual merge from 4.0.
support-files/mysql.spec.sh:
Manual merge of the fix for bug#20216.
(became necessary because 4.0 and 4.1 spec files use different file sort order).
mysql-test/r/func_sapdb.result:
test cases for date range edge cases added
mysql-test/r/func_time.result:
test cases for date range edge cases added
mysql-test/t/func_sapdb.test:
test cases for date range edge cases added
mysql-test/t/func_time.test:
test cases for date range edge cases added
into poseidon.ndb.mysql.com:/home/tomas/mysql-5.0-main
mysql-test/mysql-test-run.sh:
Auto merged
ndb/include/kernel/GlobalSignalNumbers.h:
Auto merged
ndb/src/kernel/blocks/dbdict/Dbdict.cpp:
Auto merged
ndb/src/kernel/blocks/dbdict/Dbdict.hpp:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
Fix a minor issue with Bug#16206 (bdb.test failed if the tree is compiled
without blackhole).
include/my_sys.h:
Change declaration of my_strdup_with_length to accept const char *,
not const byte *: in 5 places out of 6 where this function is used,
it's being passed char *, not byte *
mysql-test/r/bdb.result:
Remove dependency on an optional engine (updated test results).
mysql-test/t/bdb.test:
Remove dependency on an optional engine.
mysys/my_malloc.c:
my_strdup_with_length: const byte * -> const char *
mysys/safemalloc.c:
my_strdup_with_length: const byte * -> const char *
sql/ha_federated.cc:
my_strdup_with_length: const byte * -> const char *
sql/log_event.cc:
my_strdup_with_length: const byte * -> const char *
sql/set_var.cc:
my_strdup_with_length: const byte * -> const char *
sql/sql_class.h:
Change db_length type to uint from uint32 (see also table.h)
sql/table.h:
Change the type of db_length to uint from uint32: LEX_STRING uses uint for
length; we need a small and consistent set of types to store lengths, to
minimize casts and compile failures.
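A simplified sketch of the signature change (not the real mysys code; the flags argument is omitted and strdup_with_length is a local stand-in): taking const char * lets the common char * callers avoid casts:

#include <cstdlib>
#include <cstring>

// described change: my_strdup_with_length(const byte *from, ...)
//               ->  my_strdup_with_length(const char *from, ...)
char *strdup_with_length(const char *from, size_t length)
{
  char *to = static_cast<char *>(std::malloc(length + 1));
  if (to)
  {
    std::memcpy(to, from, length);
    to[length] = '\0';
  }
  return to;
}

int main()
{
  const char *db = "test";           // a char* caller: no cast needed now
  char *copy = strdup_with_length(db, std::strlen(db));
  int rc = (copy && std::strcmp(copy, "test") == 0) ? 0 : 1;
  std::free(copy);
  return rc;
}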
Very complex select statements can create temporary tables
that are too big to be represented as a MyISAM table.
This was not checked at table creation time, but only at
open time. The result was an attempt to delete the
"impossible" table.
But if the server is built --with-raid, MyISAM tries to
open the table before deleting the files. It needs to find
out if the table uses the raid support and how many raid
chunks there are. This is done with an open "for repair",
which will almost always succeed.
But in this case we have an "impossible" table. The open
failed. Hence the files were not deleted. Also the error
message was a bit unspecific.
I turned an open error in this situation into the assumption
that the table has no raid support. Thus deletion of the normal
data file is attempted. This may however leave existing
raid chunks behind.
I also added a check in mi_create() to prevent the creation
of an "impossible" table. A more descriptive error message is
given in this case.
No test case. The required select statement is way too
large for the test suite. I added a test script to the
bug report.
myisam/mi_create.c:
Bug#11824 - internal /tmp/*.{MYD,MYI} files remain, causing subsequent queries to fail
Added a check to mi_create() that the table description
header of the index file does not exceed 64KB. The header
has only 16 bits to encode its length.
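A minimal sketch of the new check, assuming illustrative constants and error codes (the real limit and error symbol live in mi_create.c): refuse to create a table whose index-file header would not fit in the 16-bit length field:

#include <cstdint>
#include <cstdio>

constexpr uint32_t kMaxHeaderLength = 65535;   // what 16 bits can encode

enum CreateStatus { CREATE_OK, ERR_TOO_BIG_DEFINITION };

CreateStatus check_table_definition(uint32_t computed_header_length)
{
  if (computed_header_length > kMaxHeaderLength)
  {
    std::fprintf(stderr, "table definition too large (%u bytes, max %u)\n",
                 (unsigned) computed_header_length, (unsigned) kMaxHeaderLength);
    return ERR_TOO_BIG_DEFINITION;   // refuse to create the "impossible" table
  }
  return CREATE_OK;
}

int main()
{
  return check_table_definition(70000) == ERR_TOO_BIG_DEFINITION ? 0 : 1;
}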
myisam/mi_delete_table.c:
Bug#11824 - internal /tmp/*.{MYD,MYI} files remain, causing subsequent queries to fail
Interpret an error in table open as the table not having a raid
configuration. Thus try to delete the normal data file, but
leave behind raid chunks if they exist.
- make sure to allocate just enough pages in the fragments by using the actual
row count from the backup, to avoid over-allocation of pages to fragments, and
thus avoid the bug
ndb/include/kernel/GlobalSignalNumbers.h:
Bug #19852 Restoring backup made from cluster with full data memory fails
- distribute fragment complete to all participants to update row count
ndb/include/kernel/signaldata/BackupContinueB.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- time-slice the writing of fragment info to the ctl file
ndb/include/kernel/signaldata/BackupImpl.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
- new signal fragment complete to all participants
ndb/include/kernel/signaldata/BackupSignalData.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/include/kernel/signaldata/DictTabInfo.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add min and max rows to dict tab info
ndb/include/kernel/signaldata/LqhFrag.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to add frag req
ndb/include/kernel/signaldata/TupFrag.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to add frag req
ndb/include/ndbapi/NdbDictionary.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added get/set of min max rows
ndb/src/common/debugger/signaldata/BackupImpl.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/BackupSignalData.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/DictTabInfo.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to dict tab info
ndb/src/common/debugger/signaldata/LqhFrag.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/backup/Backup.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/Backup.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupFormat.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupInit.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new signal fragment complete to all participants
ndb/src/kernel/blocks/dbdict/Dbdict.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added max and min rows to dict table object
ndb/src/kernel/blocks/dbdict/Dbdict.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added max and min rows to dict table object
ndb/src/kernel/blocks/dblqh/Dblqh.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
- move memory allocation for the fragment to after the attributes are added, to get the correct header size
- allocate pages to fragments according to min rows setting
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- grow page allocation starting from 2 irrespective of first page allocation
ndb/src/mgmsrv/MgmtSrvr.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bits on bytes and records
ndb/src/mgmsrv/MgmtSrvr.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bits on bytes and records
ndb/src/ndbapi/NdbDictionary.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/tools/restore/Restore.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add retrieval of fragment info
ndb/tools/restore/Restore.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add retrieval of fragment info
ndb/tools/restore/consumer_restore.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- set min in restore to the actual row count (this is the actual bug fix)
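A hedged sketch of that restore-side fix (TableDef and apply_backup_row_count are hypothetical stand-ins for the NDB restore internals): use the row count actually found in the backup as the table's minimum row count, so fragments pre-allocate only the pages they need:

#include <cstdint>

struct TableDef {
  uint64_t min_rows = 0;   // lower bound used to pre-allocate fragment pages
  uint64_t max_rows = 0;   // upper bound (0 = unlimited)
};

// The actual row count comes from the per-fragment info section that the
// backup now writes to its ctl file (see the Backup.cpp notes above).
void apply_backup_row_count(TableDef &def, uint64_t rows_in_backup)
{
  def.min_rows = rows_in_backup;
  if (def.max_rows != 0 && def.max_rows < def.min_rows)
    def.max_rows = def.min_rows;   // keep the bounds consistent
}

int main()
{
  TableDef t;
  t.max_rows = 1000000;
  apply_backup_row_count(t, 4242);
  return t.min_rows == 4242 ? 0 : 1;
}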
sql/ha_ndbcluster.cc:
Bug #19852 Restoring backup made from cluster with full data memory fails
- set min and max rows according to sql definition
When building the UPDATE query to send to the remote server, the
federated storage engine built the query incorrectly if it was updating
a field to be NULL.
Thanks to Björn Steinbrink for an initial patch for the problem.
mysql-test/r/federated.result:
Add new results
mysql-test/t/federated.test:
Add new regression test
sql/ha_federated.cc:
Fix logic of how fields are added to SET and WHERE clauses of an
UPDATE statement. Fields that were NULL were being handled incorrectly.
Also reorganizes the code a little bit so the update of the two
clauses is consistent.
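A simplified, self-contained sketch of the corrected clause building (not the engine's buffer code, and value escaping is omitted): a NULL field contributes "col = NULL" to SET and "col IS NULL" to WHERE:

#include <cstddef>
#include <optional>
#include <string>
#include <vector>

struct FieldValue {
  std::string name;
  std::optional<std::string> value;   // std::nullopt represents SQL NULL
};

std::string build_set_clause(const std::vector<FieldValue> &fields)
{
  std::string clause = " SET ";
  for (size_t i = 0; i < fields.size(); ++i)
  {
    if (i) clause += ", ";
    clause += "`" + fields[i].name + "` = ";
    clause += fields[i].value ? "'" + *fields[i].value + "'" : "NULL";
  }
  return clause;
}

std::string build_where_clause(const std::vector<FieldValue> &fields)
{
  std::string clause = " WHERE ";
  for (size_t i = 0; i < fields.size(); ++i)
  {
    if (i) clause += " AND ";
    clause += "`" + fields[i].name + "`";
    clause += fields[i].value ? " = '" + *fields[i].value + "'" : " IS NULL";
  }
  return clause;
}

int main()
{
  std::vector<FieldValue> f{{"a", std::nullopt}, {"b", std::string("1")}};
  return (build_set_clause(f) == " SET `a` = NULL, `b` = '1'" &&
          build_where_clause(f) == " WHERE `a` IS NULL AND `b` = '1'") ? 0 : 1;
}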
For compatibility, don't use {..,..} in pattern matching
make_binary_distribution.sh:
Added .dylib and .sl as shared library extensions
scripts/make_binary_distribution.sh:
Added .dylib and .sl as shared library extensions
scripts/make_sharedlib_distribution.sh:
For compatibility, don't use {..,..} in pattern matching
into mysql.com:/opt/local/work/mysql-5.0-17199
mysql-test/r/create.result:
Auto merged
mysql-test/t/create.test:
Auto merged
sql/item_strfunc.cc:
Auto merged
sql/log_event.cc:
Auto merged
sql/slave.cc:
Auto merged
sql/sp_head.cc:
Auto merged
sql/sql_class.h:
Auto merged
sql/sql_db.cc:
Auto merged
sql/sql_insert.cc:
Auto merged
sql/sql_lex.h:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/sql_table.cc:
Auto merged
sql/sql_yacc.yy:
Auto merged
mysql-test/r/sp.result:
SCCS merged
mysql-test/t/sp.test:
SCCS merged
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and always make a deep copy
(strmake, strdup)
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- the TABLE_LIST member table->db should always be initialized in the parser or
by the creator of the object.
For prepared statements it means that if the current database is changed
after a statement is prepared, the database that was current at prepare
remains active. This also means that you cannot prepare a statement that
implicitly refers to the current database if the latter is not set.
This is not documented and therefore needs documentation. For almost all
SQL statements this is NOT a change in behavior; the exceptions are:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1 --
until this patch, t1 or t2 could be evaluated at the first execution of
a prepared statement.
CURRENT_DATABASE() still works OK and is evaluated at every execution
of a prepared statement.
Note that in stored routines this is not an issue, as the default
database is the database of the stored procedure and the "use" statement
is prohibited in stored routines.
This patch makes the use of check_db_used obsolete (it was never really used in
the old code either), along with all other places that check table->db and assign
it from THD::db if it's NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place uses thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
(via strcmp/my_strcasecmp, whatever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
See also comments to individual files.
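A small sketch of the first two invariants above using plain C string helpers (strmake_copy and same_database are local stand-ins for strmake/strdup and my_strcasecmp; case handling in the real server depends on lower_case_table_names):

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <strings.h>   // strcasecmp

char *strmake_copy(const char *src, size_t len)
{
  char *dst = static_cast<char *>(std::malloc(len + 1));
  if (dst) { std::memcpy(dst, src, len); dst[len] = '\0'; }
  return dst;
}

// Wrong: keeps a pointer into thd->db, which may be freed or changed later.
//   const char *current_db = thd->db;
// Right: deep-copy while the value is known to be valid.
//   char *current_db = strmake_copy(thd->db, thd->db_length);

// Wrong: if (db1 == db2) ...   (pointer identity says nothing about the names)
// Right:
bool same_database(const char *db1, const char *db2)
{
  if (db1 == nullptr || db2 == nullptr)
    return db1 == db2;                  // only equal if both mean "no database"
  return strcasecmp(db1, db2) == 0;     // compare by value
}

int main()
{
  char *copy = strmake_copy("mysql", 5);
  bool ok = same_database(copy, "MySQL") && !same_database(copy, nullptr);
  std::free(copy);
  return ok ? 0 : 1;
}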
mysql-test/r/create.result:
Modify the result file: a database can never be NULL.
mysql-test/r/ps.result:
Update test results (Bug#17199 et al)
mysql-test/r/sp.result:
Update test results (Bug#17199 et al)
mysql-test/t/create.test:
Update the id of the returned error.
mysql-test/t/ps.test:
Add test coverage for prepared statements and current database. In scope of
work on Bug#17199 "Problem when view calls function from another database."
mysql-test/t/sp.test:
Add a test case for Bug#17199 "Problem when view calls function from another
database." and Bug#18444 "Fully qualified stored function names don't work
correctly in SELECT statements". Test a complementary problem.
sql/item_strfunc.cc:
Touch the code that reads thd->db (cleanup).
sql/log_event.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/slave.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/slave.h:
Remove a declaration for a method that is used only in one module.
sql/sp.cc:
Rewrite sp_use_new_db: this is a cleanup that I needed in order to understand
this function and ensure that it has no bugs.
sql/sp.h:
Add a new declaration for sp_use_new_db (uses LEX_STRINGs) and a comment.
sql/sp_head.cc:
- drop sp_name_current_db_new - a creator of sp_name class that was used
when sp_name was created for an identifier without an explicitly initialized
database. Now we pass thd->db to constructor of sp_name right in the
parser.
- rewrite sp_head::init_strings: name->m_db is always set now
- use the new variant of sp_use_new_db
- we don't need to update thd->db with the SP MEM_ROOT pointer anymore when
parsing a stored procedure, as no one will refer to it (yes!)
sql/sp_head.h:
- remove unneeded methods and members
sql/sql_class.h:
- introduce 3 THD methods to work with THD::db:
.set_db to assign the current database
.reset_db to reset the current database (temporarily) or set it to NULL
.opt_copy_db_to - to deep-copy thd->db to a pointer if it's not NULL
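A hedged sketch of the ownership rules these three accessors encode (a stand-in Session class, not the actual THD code; memory handling is simplified): set_db keeps its own deep copy, reset_db only swaps pointers for a temporary switch, and opt_copy_db_to hands out a copy the caller owns:

#include <cstddef>
#include <cstdlib>
#include <cstring>

class Session {                           // stand-in for THD
public:
  char *db = nullptr;
  size_t db_length = 0;

  // Assign the current database: free the old copy, keep our own new copy.
  void set_db(const char *new_db, size_t len)
  {
    std::free(db);
    db = nullptr;
    db_length = 0;
    if (new_db)
    {
      db = static_cast<char *>(std::malloc(len + 1));
      if (db)
      {
        std::memcpy(db, new_db, len);
        db[len] = '\0';
        db_length = len;
      }
    }
  }

  // Temporarily point at caller-owned storage (or NULL); no copy, no free.
  // The caller must restore the saved pointer before set_db runs again.
  void reset_db(char *new_db, size_t len)
  {
    db = new_db;
    db_length = len;
  }

  // Deep-copy the current database into *dst if one is set.
  bool opt_copy_db_to(char **dst, size_t *len) const
  {
    if (db == nullptr)
      return false;
    *dst = static_cast<char *>(std::malloc(db_length + 1));
    if (*dst == nullptr)
      return false;
    std::memcpy(*dst, db, db_length + 1);
    *len = db_length;
    return true;
  }
};

int main()
{
  Session s;
  s.set_db("test", 4);

  // Typical temporary switch (e.g. running a routine from another schema):
  char *saved = s.db;
  size_t saved_len = s.db_length;
  char other[] = "other_db";
  s.reset_db(other, sizeof(other) - 1);
  // ... work with "other_db" as the current database ...
  s.reset_db(saved, saved_len);           // restore the owned copy

  char *copy = nullptr;
  size_t copy_len = 0;
  bool ok = s.opt_copy_db_to(&copy, &copy_len) && copy_len == 4;
  std::free(copy);
  s.set_db(nullptr, 0);                   // release the owned copy
  return ok ? 0 : 1;
}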
sql/sql_db.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/sql_insert.cc:
- replace checks with asserts: table_list->db must always be set in the parser.
sql/sql_lex.h:
- add a comment
sql/sql_parse.cc:
- implement the invariant described in the changeset comment.
- remove juggling with lex->sphead in SQLCOM_CREATE_PROCEDURE:
now db_load_routine uses its own LEX object and doesn't damage the main
LEX.
- add DBUG_ASSERT(0) to unused "check_db_used"
sql/sql_table.cc:
- replace a check with an assert (table_ident->db)
sql/sql_trigger.cc:
While we are at it, replace direct access to thd->db with a method.
Should simplify future conversion of THD::db to LEX_STRING.
sql/sql_udf.cc:
- use thd->set_db instead of direct modification of thd->db
sql/sql_view.cc:
- replace a check with an assert (view->db)
sql/sql_yacc.yy:
- make sure that we always copy table->db or name->db or ident->db or
select_lex->db from thd->db if the former is not set. If thd->db
is not set but is accessed, return an error.
sql/tztime.cc:
- be nice, never copy thd->db by pointer.