doesn't find the column"
When 4.1 tables with a VARCHAR column were used with a 5.0 server
and a query needed a temporary table to resolve itself, the
table metadata for the VARCHAR column sent to the client was incorrect:
the MYSQL_FIELD::table member was empty.
The bug was caused by the implicit "upgrade" from the old VARCHAR to the
new VARCHAR hard-coded in Field::new_field, which did not preserve
the information about the original table. Thus, the field metadata
of the "upgraded" field pointed to an auxiliary temporary table
created for query execution.
The fix is to copy the pointer to the original table to the new field.
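For illustration, a query of the affected kind; the table and column names
are made up, and the table is assumed to have been created by a 4.1 server
(old VARCHAR format):
  SELECT v FROM t41 GROUP BY v;
  -- the query is resolved through a temporary table; before the fix the
  -- metadata for column v arrived at the client with an empty MYSQL_FIELD::table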
- BUG#15934: Instance manager fails to work;
- BUG#18020: IM connect problem;
- BUG#18027: IM: Server_ID differs;
- BUG#18033: IM: Server_ID not reported;
- BUG#21331: Instance Manager: Connect problems in tests;
Only the test suite has been changed
(the server codebase has not been modified).
When a view was used inside a trigger or a function, the lock type for
tables used in the view was always set to READ (thus making the view
non-updatable), even if we were trying to update the view.
The solution is to set the lock type properly.
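A minimal reproduction sketch; the table, view and trigger names are made up:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v2 AS SELECT b FROM t2;
  CREATE TRIGGER t1_ai AFTER INSERT ON t1 FOR EACH ROW
    INSERT INTO v2 VALUES (NEW.a);
  -- before the fix t2 was locked for READ when the trigger fired,
  -- so the INSERT through the view failed
  INSERT INTO t1 VALUES (1);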
init_dumping now accepts a function pointer to the table- or view-specific init_dumping function. This allows both tables and views to use init_dumping.
erroneous check
Problem: there were actually two problems in the server code. The check
for SQLCOM_FLUSH in stored functions/triggers did not follow the existing
architecture, which uses sp_get_flags_for_command() from sp_head.cc.
That function was also missing a check for SQLCOM_FLUSH, which is
problematic in combination with prelocking. This changeset fixes both of
these deficiencies as well as the erroneous check in
sp_head::is_not_allowed_in_function(), which was a copy&paste error.
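For illustration, a statement of the kind that is now rejected consistently
(the function name is arbitrary):
  DELIMITER //
  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    FLUSH TABLES;   -- FLUSH is not allowed in stored functions or triggers
    RETURN 1;
  END //
  DELIMITER ;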
const tables. This resulted in choosing extremely inefficient
execution plans in some cases when the distribution of data in the
joined tables was skewed (see the customer test case for the bug).
The following procedure could not be executed if max_sp_recursion_depth was 0:
  create procedure show_proc() show create procedure show_proc;
Actually there is no recursive call, but the limit was checked anyway.
Solved by temporarily increasing the thread's limit just before the fetch from
the cache and decreasing it afterwards.
The mysql client uses the default character set on reconnect. The default
character set is now controlled by the client's charset command while the
client is running. The charset command now also issues a SET NAMES command
to the server to make sure that the client's charset settings are in sync
with the server's.
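A usage sketch (the character set name is just an example):
  charset utf8
  -- besides switching the client-side setting, the client now also sends
  -- the equivalent of the following to the server:
  SET NAMES utf8;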
- Fix typo in Item_func_export_set::fix_length_and_dec() which caused character set aggregation to fail
- Remove default argument from last arg of agg_arg_charsets() function, to reduce potential errors
User names and host names have a limit on their length. The server code
relies on these limits when storing the names. The problem was that
sometimes these limits were not checked properly, which could lead to a
buffer overflow.
The fix is to check the length of the user/host name in the parser and, if
the string is too long, throw an error.
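For illustration (the over-long user name is made up):
  GRANT SELECT ON db1.* TO 'a_user_name_far_longer_than_the_allowed_maximum'@'localhost';
  -- now rejected in the parser with an error saying the string is too long
  -- for a user name, instead of risking a buffer overflow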
GROUP BY/DISTINCT pruning optimization must be done before ORDER BY
optimization, because ORDER BY may be removed when GROUP BY/DISTINCT
sorts as a side effect. E.g. in
  SELECT DISTINCT <non-key-col>,<pk> FROM t1 ORDER BY <non-key-col>
DISTINCT must be removed before ORDER BY; if done the other way around,
both would be removed.
used.
Sorting by RAND() uses a temporary table in order to get correct results.
A user-defined variable was set while filling the temporary table, and later
on it was substituted for its value from the temporary table. Due to this
it contained the last value stored in the temporary table.
Now, if the result_field is set for the Item_func_set_user_var object, it
updates the variable from the result_field value when the row is sent to the
client. Item_func_set_user_var::check() now accepts a use_result_field
parameter. Depending on its value, either the result_field or args[0] is used
to get the current value.
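An illustrative query of the affected kind (table, column and variable names
are made up):
  SELECT @v := id, id FROM t1 ORDER BY RAND();
  -- before the fix @v kept the last value stored in the temporary table;
  -- now it is updated from the result_field as each row is sent to the client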
When an X.509 subject was required for a connection, we tested whether it was
the right one, but did not refuse the connection if it was not. Fixed.
(the corrected changeset now uses --replace_results for the socket path)
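For illustration, an account of the affected kind (the subject string is just
an example):
  GRANT USAGE ON *.* TO 'ssl_user'@'localhost'
    REQUIRE SUBJECT '/C=SE/ST=Uppsala/O=MySQL AB/CN=client';
  -- a connection presenting a certificate with a different subject is now refused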
server to crash".
A crash caused by an assertion failure happened when one ran SHOW OPEN TABLES
while concurrently doing DROP TABLE (or RENAME TABLE, CREATE TABLE LIKE,
or any other command that takes a name-lock) in another connection.
In the non-debug version of the server, the problem showed up as wrong output
of the SHOW OPEN TABLES statement (it was missing name-locked tables).
Finally, in 5.1 both debug and non-debug versions simply crashed in
this situation due to a NULL-pointer dereference.
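A sketch of the race, using two client connections and an arbitrary table name:
  -- connection 1
  DROP TABLE t1;      -- takes a name-lock and puts a placeholder into the table cache
  -- connection 2, at the same time
  SHOW OPEN TABLES;   -- hit the assert (debug builds) or produced wrong output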
This problem was caused by the fact that the table placeholders which were
added to the table cache in order to obtain a name-lock had
TABLE_SHARE::table_name set to 0. Therefore they broke the assumption that
this member is non-0 for all tables in the table cache, which was checked by
an assert in list_open_tables() (in 5.1 this function simply relies on it).
The fix simply sets this member for such placeholders to an appropriate value,
making this assumption true again.
This patch also includes a test for the similar bug 12212 "Crash that happens
during removing of database name from cache", which reappeared in 5.1 as bug 19403.
A date can be represented as an int (like 20060101) and as a string (like
"2006.01.01"). When a DATE/TIME field is compared in one SELECT against both
representations, the constant propagation mechanism leads to a comparison of
the DATE as a string with the DATE as an int. In this example it compares the
integers 2006 and 20060101. Obviously the comparison fails although they
represent the same date.
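An illustrative query (table and column names are made up):
  SELECT * FROM t1 WHERE date_col = 20060101 AND date_col = '2006.01.01';
  -- before the fix, constant propagation could end up comparing the two
  -- constants directly as integers, i.e. 2006 with 20060101, which fails
  -- although both denote the same date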
Now the Item_bool_func2::fix_length_and_dec() function sets the comparison
context for the items being compared, i.e. if the items are compared as
strings, the comparison context is STRING.
The constant propagation mechanism now doesn't mix items used in different
comparison contexts. The context check is done in the
Item_field::equal_fields_propagator() and change_cond_ref_to_const()
functions.
Also, a better fix for bug 21159 is introduced.