ALTER TABLE with RENAME and other alterations causes lost tables
Using the RENAME clause combined with other clauses of ALTER TABLE led to
data loss (the data was still there but not accessible). This could happen if the
other changes did not actually change the table. Adding and dropping fields and
indexes was safe. Renaming a column with MODIFY or CHANGE was an unsafe operation
if the actual column didn't change (e.g. changing from int to int, which is a no-op).
Depending on the storage engine (SE) the behavior differed:
1) MyISAM/MEMORY - the ALTER TABLE statement completes
without any error, but the next SELECT against the new table fails.
2) InnoDB (and any other transactional engine) - the ALTER TABLE statement
fails. The following files are left in the database directory:
`new_table_name.frm` and a temporary table's frm. If the SE is file
based, the data and index files are present but still carry the old
names. What happens is that for InnoDB the table is not renamed in the
internal data dictionary (DDIC).
Fixed by adding an additional call to mysql_rename_table(), which must
not include the FRM file rename, because that has already been done during
the file name juggling.
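A minimal sketch of the problematic statement shape (table and column names
are illustrative, not taken from the original report):

  -- The CHANGE clause is a no-op (INT stays INT), so only the rename matters.
  CREATE TABLE t1 (a INT, b INT) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1, 1);

  -- Before the fix: on MyISAM/MEMORY this completed without error,
  -- but the next SELECT against the renamed table failed;
  -- on InnoDB the ALTER TABLE statement itself failed.
  ALTER TABLE t1 CHANGE a a INT, RENAME TO t2;
  SELECT * FROM t2;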
Backport of functionality from the private 5.2 tree.
Added new syntax to the parser, a new mysql.servers table and associated code
to be used by the FEDERATED storage engine to allow central storage of connection
information, per the WL entry.
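A hedged usage sketch of the new syntax (server, host and table names are
placeholders; the option list follows the CREATE SERVER grammar):

  -- Connection details live centrally in mysql.servers instead of being
  -- repeated in every FEDERATED table's CONNECTION string.
  CREATE SERVER fedlink
    FOREIGN DATA WRAPPER mysql
    OPTIONS (USER 'remote_user', HOST '192.168.1.106', DATABASE 'test', PORT 3306);

  CREATE TABLE t1_federated (id INT, msg VARCHAR(32))
    ENGINE=FEDERATED CONNECTION='fedlink/t1';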
Fixed compiler warnings (detected by VC++):
- Removed unused variables
- Added casts
- Fixed wrong assignments to bool
- Fixed wrong calls with bool arguments
- Added a missing argument to store(longlong), which caused the wrong store method to be called
- Changed some ulong parameters/variables to ulonglong (possible serious bug)
- Added casts to get rid of warnings for safe assignments from longlong to long (and similar)
- Added casts to function parameters
- Fixed signed/unsigned comparisons
- Added some constructors to structures
- Removed some non-portable constructs
Better fix for Bug#21428 "skipped 9 bytes from file: socket (3)" on "mysqladmin shutdown"
(Added a new parameter to net_clear() to define when we want the communication buffer to be emptied)
Problems (appear only under some circumstances):
1. We get a reference to a deleted table when searching
thd->handler_tables_hash in mysql_ha_read().
2. The assertion DBUG_ASSERT(table->file->inited == handler::NONE) fails in
close_thread_table().
Fix: end open index scans and table scans and remove references to the
tables from the handler tables hash. After this preparation it is safe
to close the tables. The close can no longer fail on open index/table
scans and the closed table will not be used again by handler functions.
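A hypothetical two-connection sketch of the kind of sequence involved (the
table name and the concurrent statement are illustrative):

  -- connection 1:
  HANDLER t1 OPEN;
  HANDLER t1 READ FIRST;
  -- connection 2 (concurrently) forces the table to be closed and reopened:
  FLUSH TABLES;
  -- connection 1: before the fix this could reach a stale entry in
  -- thd->handler_tables_hash or trip the close_thread_table() assertion.
  HANDLER t1 READ NEXT;
  HANDLER t1 CLOSE;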
Moved .progress files into the log directory
Moved 'cluster' database tables into the mysql database, so that 'cluster' is no longer a reserved database name
Fixed bug where mysqld got a core dump when trying to use a table created by MySQL 3.23
Fixed some compiler warnings
Fixed small memory leak in libmysql
Note that this changeset doesn't include the new mysqldump.c code required to run some tests. This will be added when I merge 5.0 into 5.1.
The problem was that VIEW columns always had implicit derivation.
Fix: the derivation is now copied from the original expression
given in the VIEW definition.
For example:
- a VIEW column which comes from a string constant
in the CREATE VIEW definition now has coercible derivation;
- a VIEW column having a COLLATE clause
in the CREATE VIEW definition now has explicit derivation.
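An illustrative sketch of the two cases listed above (table, view and
collation names are made up):

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
  -- v1.c comes from a string constant: coercible derivation.
  CREATE VIEW v1 AS SELECT 'foo' AS c;
  -- v2.c carries an explicit COLLATE clause: explicit derivation.
  CREATE VIEW v2 AS SELECT a COLLATE latin1_german1_ci AS c FROM t1;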
Due to the complexity of this change, everything is documented in WL#3565.
This patch is the third iteration; it takes into account the comments
received to date.
(4.1 version, with post-review fixes)
The fix for another bug (Bug#6439) limited FROM_UNIXTIME() to
TIMESTAMP_MAX_VALUE, which is 2145916799 or 2037-12-31 23:59:59 GMT;
however, the Unix timestamp in general is not considered to be limited
by this value. All values up to power(2,31)-1 are valid.
This patch extends the allowed TIMESTAMP range so that the maximum
TIMESTAMP value is power(2,31)-1. It also corrects the
FROM_UNIXTIME() and UNIX_TIMESTAMP() functions, so that the
maximum allowed UNIX_TIMESTAMP() value is power(2,31)-1. FROM_UNIXTIME()
is fixed accordingly to allow conversion of dates up to
2038-01-19 03:14:07 UTC. The patch also fixes the CONVERT_TZ()
function to allow the extended range of dates.
The main problem solved in the patch is possible overflow
of the variables used in the conversion from broken-down time
representation to time_t (required for UNIX_TIMESTAMP()).
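A quick sanity check of the extended range (session time zone set to UTC;
the expected results in the comments reflect the behavior described above):

  SET time_zone = '+00:00';
  SELECT FROM_UNIXTIME(2147483647);             -- '2038-01-19 03:14:07'
  SELECT UNIX_TIMESTAMP('2038-01-19 03:14:07'); -- 2147483647
  SELECT FROM_UNIXTIME(2147483648);             -- NULL: beyond power(2,31)-1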
This is a performance issue for queries with subqueries whose
evaluation requires a filesort.
Allocating memory for the sort buffer at each evaluation of a
subquery may take a significant amount of time if the buffer is rather big.
With the fix we allocate the buffer at the first evaluation of the
subquery and reuse it at each subsequent evaluation.
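An illustrative query shape (tables and columns are made up): the correlated
subquery needs a filesort and is re-evaluated for every outer row, so
previously the sort buffer was allocated and freed once per row of t1:

  SELECT t1.id,
         (SELECT t2.val
            FROM t2
           WHERE t2.ref = t1.id
           ORDER BY t2.weight DESC
           LIMIT 1) AS top_val
    FROM t1;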
If the user has specified the --max-connections=N or --table-open-cache=M
options to the server, a warning could be given that some values were
recalculated, and table-open-cache could be assigned a greater value.
Note that both the warning and the increase of table-open-cache were totally
harmless.
This patch fixes the recalculation code to ensure that table-open-cache will
never be increased automatically and that a warning will be given only if
some values had to be decreased due to operating system limits.
No test case is provided because we can neither predict nor control
operating system limits for the maximum number of open files.
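A hedged way to observe the effect after server startup: compare the requested
--max-connections / --table-open-cache values with what was actually applied;
with the fix, table_open_cache is only ever adjusted downward (with a warning)
when the operating system's open-files limit requires it:

  SHOW VARIABLES LIKE 'max_connections';
  SHOW VARIABLES LIKE 'table_open_cache';
  SHOW VARIABLES LIKE 'open_files_limit';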