Bug#4968 "Stored procedure crash if cursor opened on altered table"
Bug#19733 "Repeated alter, or repeated create/drop, fails"
Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
stored procedure."
Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
Test cases for bugs 4968, 19733, 6895 will be added in 5.0.
Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
statements in stored routines or as prepared statements caused
incorrect results (and crashes in versions prior to 5.0.25).
In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE
SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.
The problem of bugs 4968, 19733, 19182 and 6895 was that the functions
mysql_prepare_table, mysql_create_table and mysql_alter_table were not
re-execution friendly: during their operation they modified the contents
of LEX (members create_info, alter_info, key_list, create_list),
thus making the LEX unusable for the next execution.
In particular, these functions removed processed columns and keys from
create_list, key_list and drop_list. See sql_table.cc
(drop_it.remove() and similar patterns) for examples.
The fix is to supply to these functions a usable copy of each of the
above structures at every re-execution of an SQL statement.
To simplify memory management, LEX::key_list and LEX::create_list
were added to LEX::alter_info, a fresh copy of which is created for
every execution.
The crash in bug 22060 stemmed from the fact that the above-mentioned
functions were not only modifying the HA_CREATE_INFO structure in
LEX, but also changing it to point into the volatile memory of
the execution memory root.
The patch solves this problem by creating and using an on-stack
copy of HA_CREATE_INFO (note that the code in 5.1 already creates and
uses a copy of this structure in mysql_create_table()/alter_table(),
but this approach didn't work well for the CREATE TABLE SELECT statement).
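A minimal sketch of the kind of statement affected (table and procedure
names are illustrative):
CREATE PROCEDURE p1() ALTER TABLE t1 AUTO_INCREMENT = 100;
CALL p1(); -- first execution succeeded
CALL p1(); -- re-execution used LEX contents consumed by the first call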
- Using the DATA/INDEX DIRECTORY option on Windows put the data/index file
into the default directory because the OS doesn't support readlink().
- The procedure for changing the data/index file directory is
different under Windows.
- With this fix we report a warning if the DATA/INDEX DIRECTORY option is
used but the OS doesn't support readlink().
The ROW_FORMAT option was lost during CREATE/DROP INDEX.
This fix makes CREATE/DROP INDEX retain ROW_FORMAT by telling
mysql_alter_table() that ROW_FORMAT is not being changed when creating
or dropping indexes.
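Illustrative repro sketch (table/index names are hypothetical):
CREATE TABLE t1 (a INT, b CHAR(10)) ENGINE=MyISAM ROW_FORMAT=FIXED;
CREATE INDEX i1 ON t1 (a);
SHOW TABLE STATUS LIKE 't1'; -- Row_format now remains Fixed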
The problem is that a GEOMETRY NOT NULL column can't automatically set
any value as a default one. We always tried to complete the LOAD DATA
command even if there was not enough data in the file. That doesn't work
for GEOMETRY NOT NULL. Now Field_*::reset() returns an error indication,
and it is checked in mysql_load().
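A sketch of the affected case (the file name is hypothetical, and the
file is assumed to contain too few columns):
CREATE TABLE t1 (a INT, g GEOMETRY NOT NULL);
LOAD DATA INFILE 'data.txt' INTO TABLE t1;
-- now fails with an error instead of inserting a bogus default for g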
Problem: replication of LC_TIME_NAMES didn't work.
Thus, INSERTs or UPDATEs using date_format() always
worked with en_US on the slave side.
Fix: adding a ONE_SHOT implementation for LC_TIME_NAMES.
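For example (illustrative), a statement like the following now replicates
with the master's locale instead of en_US:
SET lc_time_names = 'fr_FR';
INSERT INTO t1 VALUES (DATE_FORMAT('2006-01-01', '%W'));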
This error was displayed any time the SELECT statement needed a temporary
table to return correct results, because the object (select_dumpvar) that
represents variables named in the INTO clause stored the results before
the temporary table was taken into account. The problem was fixed by
creating the necessary Item_func_set_user_var objects only once the
correct data is ready.
ALTER TABLE DISABLE KEYS didn't work when modifying the table.
ENABLE|DISABLE KEYS combined with any ALTER TABLE option other than
RENAME TO did nothing. Also, if the table had disabled keys
and was ALTERed, the resulting table ended up with enabled keys.
Fixed by checking whether the table had disabled keys and enabling them
in the copied table.
There was an improper order of doing chained operations.
To the documentor: ENABLE|DISABLE KEYS combined with RENAME TO, and no other
ALTER TABLE clause, leads to a server crash independent of the presence of
indexes and data in the table.
The problem was that some functions (namely IN() starting with 4.1, and
CHAR() starting with 5.0) were returning NULL in certain conditions
while not setting their maybe_null flag. Because of that there could
be problems with the 'IS NULL' check, and with statements that depend on
the function value domain, like CREATE TABLE t1 SELECT 1 IN (2, NULL);.
The fix is to set maybe_null correctly.
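For instance (a sketch), before the fix the column created by
CREATE TABLE t1 SELECT 1 IN (2, NULL) AS c;
could be declared NOT NULL even though the expression evaluates to NULL.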
The server sends the number of columns to the client.
It used a limited "fast" function for that instead of the
general one. This fast function cannot send numbers larger
than 2 bytes.
This caused the client to expect a smaller number of columns.
As a result, the client wrote outside of the allocated memory
buffer.
Fixed the server to use the general function to send the column
count.
Fixed the client to check the column count before writing
column data.
Problem: after the introduction of the LC_TIME_NAMES variable, the
function date_format() can return international non-ASCII
characters in month and weekday names. Thus, it cannot return
a binary string anymore, because inserting the result of date_format()
into a column with a non-utf8 character set produces garbage.
Fix: date_format() now returns a character string, using
"collation_connection" to determine the character set and collation
of the returned value. This allows results of date_format() to be
inserted properly into columns with various character sets.
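Illustrative example (locale and column charset are hypothetical):
SET lc_time_names = 'ru_RU';
CREATE TABLE t1 (s VARCHAR(32) CHARACTER SET koi8r);
INSERT INTO t1 VALUES (DATE_FORMAT('2006-01-01', '%W'));
-- the weekday name is now converted properly instead of producing garbage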
- When returning metadata for scalar subqueries, the actual type of the
column was calculated based on the value type, which limits the actual
type of a scalar subselect to the set of (currently) 3 basic types:
integer, double precision or string. This is the reason that columns
of types other than the basic ones (e.g. date/time) were reported as
being of the corresponding basic type.
Fixed by storing/returning information on the column type in addition
to the result type.
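For example (a sketch; the exact reported type depends on the subquery):
CREATE TABLE t1 SELECT (SELECT CURDATE()) AS d;
-- d can now be created as a DATE column instead of a generic string column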
Problem: GROUP_CONCAT on a multi-byte column could truncate
in the middle of a multi-byte character when applying the
group_concat_max_len limit, producing an invalid
multi-byte character in the result string.
This is the second, easier version: it reuses the old "warning_for_row"
flag instead of the "result_is_full" flag that was introduced
in the previous commit.
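A sketch of the affected case (utf8 data assumed; names illustrative):
SET group_concat_max_len = 5;
SELECT GROUP_CONCAT(c) FROM t1;
-- the result is now cut on a character boundary, not mid-character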
Item_func_mod objects never had maybe_null set, so users had no reason
to expect that the result could be NULL, and might therefore deduce
wrong results. Now maybe_null is set.
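For example, MOD returns NULL for a zero divisor:
SELECT MOD(1, 0); -- NULL, so maybe_null must be set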
The parser allocates an Item_field for references by name in ORDER BY
expressions. Such expressions, however, may point not only to an
Item_field in the select list (or to a table column) but also to an
arbitrary Item. This caused Item_field::fix_fields to throw an error
about a missing column.
The fix substitutes an Item_ref for the Item_field reference when it
does not point to an Item_field.
(4.1 version, with post-review fixes)
The fix for another bug (6439) limited FROM_UNIXTIME() to
TIMESTAMP_MAX_VALUE, which is 2145916799 or 2037-12-31 23:59:59 GMT;
however, the unix timestamp in general is not considered to be limited
by this value. All dates up to power(2,31)-1 are valid.
This patch extends the allowed TIMESTAMP range so that the max
TIMESTAMP value is power(2,31)-1. It also corrects the
FROM_UNIXTIME() and UNIX_TIMESTAMP() functions, so that the
max allowed UNIX_TIMESTAMP() value is power(2,31)-1. FROM_UNIXTIME()
is fixed accordingly to allow conversion of dates up to
2038-01-19 03:14:07 UTC. The patch also fixes the CONVERT_TZ()
function to allow the extended range of dates.
The main problem solved in the patch is possible overflows
of the variables used in the broken-down time to time_t
conversion (required for UNIX_TIMESTAMP).
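For example, the new upper bound is now representable (with the time zone
set to UTC):
SELECT FROM_UNIXTIME(2147483647); -- 2038-01-19 03:14:07
SELECT UNIX_TIMESTAMP('2038-01-19 03:14:07'); -- 2147483647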
If an error happened during DELETE IGNORE, nothing could be sent to the
client, thus leaving it frozen expecting the reply.
The problem was that if some error occurred, it wouldn't be reported to
the client because of IGNORE, but success wouldn't be reported either.
MySQL 4.1 would not freeze the client, but would report
ERROR 1105 (HY000): Unknown error
instead, which is also a bug.
The solution is to report success if we are in DELETE IGNORE and some
non-fatal error has happened.
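Sketch of the affected shape (names illustrative):
DELETE IGNORE FROM t1 WHERE a = 1;
-- a non-fatal error during the delete now reports success instead of
-- leaving the client waiting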
If elements of a non-top-level IN subquery were accessed by an index and
the subquery result set included a NULL value, then the quantified
predicate that contained the subquery was evaluated to NULL when
it should have returned a non-NULL value.
We sometimes missed records when using the RANGE method with
partial key segments.
Example:
Create table t1(a char(2), key(a(1)));
insert into t1 values ('a'), ('xx');
select a from t1 where a > 'x';
We call index_read() passing the 'x' key and the HA_READ_AFTER_KEY flag
in handler::read_range_first(), which is wrong because we have
a partial key segment for the field and might miss records like 'xx'.
Fix: don't use open segments in such a case.
REPAIR TABLE could crash the server if there was not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but all statements that create indexes by sorting:
repair by sort, parallel repair, bulk insert.
Now an error is returned if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
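Illustrative trigger (the buffer value is hypothetical):
SET SESSION myisam_sort_buffer_size = 4096;
REPAIR TABLE t1;
-- now returns an error instead of crashing when the buffer is too small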
When resolving unqualified name references MySQL was not
checking the item type of the reference. Thus
e.g. a string literal item, which by convention has a name
equal to its string value, would also work as a reference to
a SELECT list item or a table field.
Fixed by allowing only Item_ref or Item_field to be referenced by
(unqualified) name.
If REPAIR TABLE ... USE_FRM was issued for a table located in a database
other than the default one, a server crash could happen.
In reopen_name_locked_table, take the database name from table_list (user
specified or default database) instead of from thd (default database).
Affects 4.1 only.
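Sketch of the affected statement (database/table names illustrative):
USE test;
REPAIR TABLE otherdb.t1 USE_FRM; -- previously could crash the server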
The bug is present only in 4.1 and will be null-merged to 5.0.
For InnoDB, check the value of thd->transaction.all.innodb_active_trans
instead of thd->transaction.stmt.innobase_tid to see if we really need
to roll back.
The problem was that during statement re-execution, if the result was
empty, the old result could be returned for group functions.
The solution is to implement a proper cleanup() method in group
functions.
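An illustrative sequence (statement and names are hypothetical):
PREPARE s FROM 'SELECT MAX(a) FROM t1 WHERE a < ?';
SET @v = 100; EXECUTE s USING @v; -- non-empty result
SET @v = 0; EXECUTE s USING @v;
-- empty result; previously the MAX value from the first execution could
-- be returned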
When the client program had its stdout file descriptor closed by the calling
shell, after some amount of work (enough to fill a socket buffer) the server
would complain about a packet error and then disconnect the client.
This is a serious security problem. If stdout is closed before mysql is
exec()'d, then the first socket() call allocates file number 1 to communicate
with the server. Subsequent write()s to that file number (as when printing
results that come back from the database) go back to the server on
the command channel. So one should be able to craft data which, upon being
selected back from the server to the client, is injected into the command
stream and becomes valid MySQL protocol, doing something nasty when sent
back to the server.
The solution is to explicitly close the file descriptor that we *printf() to,
so that the libc layer and the OS layer both agree that the file is closed.
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all
indexes, but also the data file.
Non-quick parallel repair works so that there is one thread per
index. The first of the threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. But the other
threads didn't know the new record positions and put the positions
from the old data file into the index.
In the new design, a shared io cache is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that for the parallel repair of compressed
tables a common bit_buff and rec_buff were used. I changed it so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
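For reference, the configuration that triggers non-quick parallel repair
(table name illustrative):
SET SESSION myisam_repair_threads = 2;
OPTIMIZE TABLE t1;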
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return the value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by the current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
Though this is not a storage engine specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the
reason to add a test case to ndb_update.test. As a result of the bug,
different bad things could happen:
BDB removed duplicate rows, which is not expected;
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
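Sketch of the statement shape that now notifies the engine (names
illustrative):
UPDATE IGNORE t1, t2 SET t1.a = t2.a WHERE t1.id = t2.id;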
- bug #11655 "Wrong time is returning from nested selects - maximum time exists"
- input and output TIME values were not validated properly in several conversion functions
- bug #20927 "sec_to_time treats big unsigned as signed"
- integer overflows were not checked in several functions. As a result, input values like 2^32 or 3600*2^32 were treated as 0 (see the example after this list)
- BIGINT UNSIGNED values were treated as SIGNED in several functions
- in cases where both input string truncation and an out-of-range TIME value occur, only a 'truncated incorrect time value' warning was produced
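For example, per the overflow item above:
SELECT SEC_TO_TIME(4294967296); -- 2^32; previously treated as 0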
The fix for bug 7894 replaces field(s) in a non-aggregate function with an
item reference if such a field was specified in the GROUP BY clause, in
order to get a correct result.
When ROLLUP is involved this led to a wrong result, because the value of
such a field is obtained through a copy function and the copying happens
after the function evaluation.
Such replacement isn't needed if grouping is also done by such a function.
The change_group_ref() function now isn't called for a function present in
the group list.
RTree keys are really different from BTree keys and need specific
parameters to be set by the optimizer in order to work.
Sometimes the optimizer doesn't set those properly.
Here we decided just to add code to check that the parameters
are correct, and hope to fix the optimizer someday.
A crash could happen when selecting from a merge table that has underlying
tables with fewer indexes than the merge table itself.
If the number of keys in the merge table is not bigger than the requested
key number, return an error.
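Repro sketch (names illustrative):
CREATE TABLE t1 (a INT) ENGINE=MyISAM;
CREATE TABLE m1 (a INT, KEY(a)) ENGINE=MERGE UNION=(t1);
SELECT * FROM m1 WHERE a = 1;
-- previously could crash; now returns an error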
A problem with an ALL/ANY quantified subquery in HAVING:
the Item::split_sum_func2 method should not create an Item_ref
for objects of any class derived from Item_subselect.
Item_substr's results were improperly stored in a temporary table due to
a wrongly calculated max_length value for multi-byte charsets when two
arguments are specified.
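Sketch of an affected expression (utf8 literal assumed):
CREATE TABLE t1 SELECT SUBSTRING(_utf8 'абвгд', 2) AS s;
-- s's max_length is now calculated correctly for the multi-byte charset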
Any default value for an ENUM field over a UCS2 charset was corrupted
when we put it into the frm file, as it had been overwritten by its
HEX representation.
The fix is to save a copy of the structure that represents the ENUM
type and to use this copy when writing the default values.
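Sketch of an affected definition:
CREATE TABLE t1 (e ENUM('a','b') CHARACTER SET ucs2 DEFAULT 'b');
SHOW CREATE TABLE t1; -- the default value is no longer corrupted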
Fixed a confusing error message from the storage engine when
it fails to open an underlying table. The error message is issued
when a table is _opened_ (not when it is created).
A problem occurred on re-execution of a prepared statement that uses an
aggregating IN subquery with a HAVING clause.
A wrong order of the call of split_sum_func2 for the HAVING
clause of the subquery and the transformation of the
subquery resulted in the creation of an AND/OR structure
that could not be restored at an execution of the prepared
statement.
A communication packet can also be a binlog event sent from the master to
the slave.
To be sent by the master dump thread and accepted by the slave I/O thread,
both have to have a max_allowed_packet bigger than the one the client
connection had.
The patch adds an estimation of the MAX possible replication header size
for events in the binlog that embed a user query. Only events of
query_log_event type, i.e. just plain queries, require attention.
0xFF is the internal separator for SET|ENUM names.
If this symbol is present in SET|ENUM names, we replace it with
',' (the deprecated separator for SET|ENUM names) during frm creation
and restore it to 0xFF during frm opening.
- Honor unsigned_flag in the corresponding functions
- Use compare_int_signed_unsigned()/compare_int_unsigned_signed() instead of explicit comparison in GREATEST() and LEAST() (see the example below)
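For example (sketch):
SELECT GREATEST(1, 18446744073709551615);
-- the BIGINT UNSIGNED operand is now compared as unsigned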
VALUES() was considered a constant. This caused it to be replaced
(or pre-calculated) using uninitialized values before the actual
execution took place.
It is now marked as non-constant (though still not dependent on tables)
to prevent the pre-calculation.
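VALUES() appears in ON DUPLICATE KEY UPDATE; a sketch of the affected
shape (names illustrative):
INSERT INTO t1 (id, a) VALUES (1, 10)
ON DUPLICATE KEY UPDATE a = VALUES(a) + 1;
-- VALUES(a) is no longer pre-calculated from uninitialized data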
Corrected test case after removal of fix for bug#16377
type_date.test:
Corrected test case after removal of fix for bug#16377
item_cmpfunc.cc:
Removed changes to agg_cmp_type() made in the fix for bug#16377
1003: Incorrect table name
In a multi-table DELETE, the set of tables to delete from actually
references the tables in the other list, e.g.:
DELETE alias_of_t1 FROM t1 alias_of_t1 WHERE ....
is a valid statement.
So we must turn off the syntactic table-name validity check for
alias_of_t1, because it is not a table name (even if it looks like one).
In order to do that we add a special flag (TL_OPTION_ALIAS) to
disable the name checking for the aliases in multi-table DELETE.
Variable character_set_results can legally be NULL (for "no conversion").
This could result in a NULL dereference that crashed the server. Fixed.
(Also ran some additional precursory tests to see whether anything else
could be broken, but found no breakage so far.)
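The legal setting involved (sketch):
SET character_set_results = NULL; -- "no conversion"
-- some result-sending paths previously dereferenced the NULL pointer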
The problem was due to a prior fix for bug 9676, which limited
the number of rows stored in a temporary table to the value of the LIMIT
clause. This optimization is not applicable to non-group queries with
aggregate functions. The fix disables the optimization in this case.
The problem was that during DROP TEMPORARY TABLE we tried to acquire
the name lock, though temporary tables belong to a single connection, so
no race is possible.
The solution is to not use table name locking while executing
DROP TEMPORARY TABLE.
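Sketch:
CREATE TEMPORARY TABLE t1 (a INT);
DROP TEMPORARY TABLE t1; -- no longer takes the global name lock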