mysqldump / SHOW CREATE TABLE will show the NEXT available value for
the PK, rather than the *first* one that was available (the one named
in the original CREATE TABLE ... AUTO_INCREMENT = ... statement).
This should produce correct and robust behaviour for the obvious use
cases -- when no data were inserted, we'll produce a statement
featuring the same value the original CREATE TABLE had; if we dump
with values, INSERTing the values on the target machine should set the
correct next_ID anyway (and if not, we'll still have our AUTO_INCREMENT =
... clause to do that). Lastly, just the CREATE statement (with no data)
for a table that saw inserts would still result in a table that new
values can safely be inserted into.
There seems to be no robust way, however, to see whether the next_ID
field is > 1 because it was set to something else with CREATE TABLE
... AUTO_INCREMENT = ..., or because there is an AUTO_INCREMENT column
in the table (but no initial value was set with AUTO_INCREMENT = ...)
and then one or more rows were INSERTed, counting up next_ID. This
means that in both cases, we'll generate an AUTO_INCREMENT =
... clause in SHOW CREATE TABLE / mysqldump. As we also show info on,
say, charsets even if the user did not explicitly give that info in
their own CREATE TABLE, this shouldn't be an issue.
As per above, the next_ID will be affected by any INSERTs that have
taken place, though. This /should/ result in correct and robust
behaviour, but it may look non-intuitive to some users if they CREATE
TABLE ... AUTO_INCREMENT = 1000 and later (after some INSERTs) have
SHOW CREATE TABLE give them a different value (say, CREATE TABLE
... AUTO_INCREMENT = 1006), so the docs should possibly feature a
caveat to that effect.
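For illustration (table and column names are made up), this is the kind
of sequence the caveat would describe:

  CREATE TABLE t1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY)
    AUTO_INCREMENT = 1000;
  INSERT INTO t1 VALUES (), (), (), (), (), ();
  SHOW CREATE TABLE t1;  -- now reports AUTO_INCREMENT=1006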
It's not very intuitive the way it works now (with the fix), but it's
*correct*. We're not storing the original value anyway; if we wanted
that, we'd have to change the on-disk representation.
If we do dump/load cycles with empty DBs, nothing will change. This
changeset includes an additional test case that proves that tables
with rows will create the same next_ID for AUTO_INCREMENT = ... across
dump/restore cycles.
Confirmed by support as likely solution for client's problem.
There were two distinct bugs: a parse error was returned for a valid
statement, and that error wasn't reported to the client.
The fix ensures that EXPLAIN SELECT..INTO is accepted by the parser and
that any other parse error is reported to the client.
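For illustration, a statement of the kind that was wrongly rejected
(table and variable names are made up):

  EXPLAIN SELECT a INTO @v FROM t1;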
This performance degradation was due to the fact that some
cost evaluation code added in 4.1 to the function find_best was
not merged into the function best_access_path, which was added
together with the rest of the code for the greedy optimizer.
Added a parameter to the function print_plan. The parameter contains
the accumulated cost for a given partial join.
The patch does not include a special test case since this performance
degradation is hard to reproduce with a simple example.
TODO: make the function find_best use the function best_access_path
in order to remove duplication of code, which might result in incomplete
merges in the future.
Bug#17667: An attacker has the opportunity to bypass query logging.
This adds a new, local-only printf format specifier to our *printf functions
that allows us to print known-size buffers that must not be interpreted as
NUL-terminated "strings."
It uses this format-specifier to print to the log, thus fixing this
problem.
In the code that converts IN predicates to EXISTS predicates the
select list elements are changed to the constant 1. Example:
SELECT ... FROM ... WHERE a IN (SELECT c FROM ...)
is transformed to:
SELECT ... FROM ... WHERE EXISTS (SELECT 1 FROM ... HAVING a = c)
However, the IN subquery may have no FROM clause and may not be a
simple select:
SELECT ... FROM ... WHERE a IN (SELECT f(..) AS c UNION SELECT ...)
This query is transformed to:
SELECT ... FROM ... WHERE EXISTS (SELECT 1 FROM
  (SELECT f(..) AS c UNION SELECT ...) x HAVING a = c)
In the above query c in the HAVING clause is made to be an
Item_null_helper (a subclass of Item_ref) pointing to the real
Item_field (which is not referenced anywhere else in the query anymore).
This is done because Item_ref_null_helper collects information about
whether there are NULL values in the result. This is OK for directly
executed statements, because the Item_field pointed to by the
Item_null_helper is already fixed when the transformation is done. But
when executed as a prepared statement all the Item instances are
"un-fixed" before the recompilation of the prepared statement. So when
the Item_null_helper gets fixed it discovers that the Item_field it
points to is not fixed and issues an error. The remedy is to keep the
original select list references when there are no tables in the FROM
clause. So the above becomes:
SELECT ... FROM ... WHERE EXISTS (SELECT c FROM
  (SELECT f(..) AS c UNION SELECT ...) x HAVING a = c)
In this way c is referenced directly in the select list as well as by
reference in the HAVING clause, so it gets correctly fixed even in
prepared statements. And since the Item_null_helper subclass of
Item_ref_null_helper is no longer used anywhere else, it is removed.
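For illustration, a prepared statement of the affected shape (t1 and
its column a are made-up names): the IN subquery is a UNION whose parts
have no FROM clause. Before the fix, re-execution of statements of this
shape could fail during re-preparation:

  PREPARE stmt FROM
    'SELECT * FROM t1 WHERE t1.a IN (SELECT 1 AS c UNION SELECT 2)';
  EXECUTE stmt;
  EXECUTE stmt;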
Concurrent read and update of privilege structures (like a simultaneous
run of SHOW GRANTS and ADD USER) could result in a server crash.
Ensure that proper locking of ACL structures is done.
No test case is provided because this bug can't be reproduced
deterministically.
too much memory. Instead, either create the equivalent SEL_TREE
manually, or create only two ranges that strictly include the area to
scan.
(Note: just to re-iterate: increasing NOT_IN_IGNORE_THRESHOLD will make
the optimization run slower for big IN-lists, but the server will not
run out of memory. The O(N^2) memory use has been eliminated.)
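For illustration, a query of the affected shape (names are made up);
with a sufficiently long NOT IN list the range analysis could previously
consume excessive memory:

  SELECT COUNT(*) FROM t1
  WHERE t1.key_col NOT IN (1, 2, 3, /* ...thousands of constants... */ 99999);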
supported in SP but not in PS": just enable them in prepared
statements; the supporting functionality was already implemented when
they were enabled in stored procedures.
trigger fails".
In cases when the CONVERT_TZ() function was used in a trigger or stored
function (or in a stored procedure which was called from a trigger or
stored function) an error about a non-existing '.' table was reported.
Statements that use the CONVERT_TZ() function should have the time zone
related tables in their table list. The tz_init_table_list() function
which is used to produce the part of the table list containing those
tables didn't set the TABLE_LIST::db_length/table_name_length members
properly. As a result the time zone tables needed for the CONVERT_TZ()
function were incorrectly handled by the prelocking algorithm and a
"Table '.' doesn't exist" error was emitted.
This fix changes tz_init_table_list() in such a way that it properly
initializes the TABLE_LIST::table_name_length/db_length members and thus
produces a table list which can be handled by the prelocking algorithm
correctly.
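For illustration, a trigger of the affected kind (table and column
names are made up); before the fix, firing such a trigger could produce
the "Table '.' doesn't exist" error:

  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
  FOR EACH ROW
    SET NEW.created_utc = CONVERT_TZ(NEW.created_local, 'MET', 'UTC');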
The fix refines the algorithm of generating DROPs for binlog.
Temp tables with common pseudo_thread_id are clustered into one query.
Consequently one replication event per pseudo_thread_id is generated.
Backporting a changeset made for 5.0. Comments from there:
The fix refines the algorithm of generating DROPs for binlog.
Temp tables with common pseudo_thread_id are clustered into one query.
Consequently one replication event per pseudo_thread_id is generated.
An error was emitted when one tried to select information from a view
which used the merge algorithm and which also had the CONVERT_TZ()
function in its select list.
This bug was caused by the wrong assumption that the global table list
for a view which is handled using the merge algorithm begins with the
tables belonging to the main select of this view. Currently the above
assumption is false only when one uses the CONVERT_TZ() function in the
view's select list, but in the future other cases may be added (for
example, we may support merging of views with subqueries in the select
list one day). Relying on this false assumption led to the usage of the
wrong table list for field lookups and therefore to errors.
With this fix we explicitly use a pointer to the beginning of the main
select's table list.
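For illustration, a view of the affected kind (names are made up);
selecting from it used to fail with an error:

  CREATE ALGORITHM=MERGE VIEW v1 AS
    SELECT id, CONVERT_TZ(created_at, 'UTC', 'MET') AS created_met
    FROM t1;
  SELECT * FROM v1;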
Simply exclude INFORMATION_SCHEMA from LOAD DATA FROM MASTER just like
we exclude the `mysql` database.
There's no test for this, because it requires an unfiltered 'LOAD DATA
FROM MASTER' which is too likely to cause chaos (such as when running
the test suite against an external server).
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the order lists were concatenated and the LIMIT
clause was completely neglected.
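For illustration, a query of the affected form (names are made up);
before the fix the inner LIMIT could be ignored and the two order lists
mixed up:

  (SELECT a, b FROM t1 ORDER BY a LIMIT 5) ORDER BY b;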
Bug #15684: @@version_* are not all SELECTable
Added the appropriate information as read-only system variables, and
also removed some special-case handling of @@version along the way.
@@version_bdb was added, but isn't included in the test because it
depends on the presence of BDB.
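For example, these can now be read as ordinary system variables (the
exact set depends on the build):

  SELECT @@version, @@version_comment, @@version_compile_os,
         @@version_compile_machine;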
The SQL standard doesn't allow using in the HAVING clause fields that
are not present in the GROUP BY clause and not under any aggregate
function in the HAVING clause. However, MySQL allows using such fields.
This extension assumes that the non-grouping fields will have the same
group-wise values. Otherwise, the result will be unpredictable. Allowing
this extension even in the strict MODE_ONLY_FULL_GROUP_BY sql mode
results in a misunderstanding of the HAVING capabilities.
The new error message ER_NON_GROUPING_FIELD_USED is added. It says
"non-grouping field '%-.64s' is used in %-.64s clause". This message is
supposed to be used for reporting errors when some field is not found in
the GROUP BY clause but has to be present there. Use cases for this
message are this bug and the case when a field is present in a SELECT
item list not under any aggregate function and there is a GROUP BY
clause present which doesn't mention that field. It renders the
ER_WRONG_FIELD_WITH_GROUP error message obsolete, as it is more
descriptive.
The resolve_ref_in_select_and_group() function now reports the
ER_NON_GROUPING_FIELD_USED error if the strict mode is set and the field
for the HAVING clause is found in the SELECT item list only.
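For illustration (names are made up), with the strict mode enabled the
following now fails with ER_NON_GROUPING_FIELD_USED instead of silently
returning an unpredictable result:

  SET sql_mode = 'ONLY_FULL_GROUP_BY';
  SELECT a FROM t1 GROUP BY a HAVING b > 1;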
After a locking error the open table(s) were not fully
cleaned up for reuse. But they were put into the open table
cache even before the lock was tried. The next statement
reused the table(s) with a wrong lock type set up. This
tricked MyISAM into believing that it doesn't need to update
the table statistics. Hence CHECK TABLE reported a mismatch
of record count and table size.
Fortunately nothing worse has been detected yet. The effect
of the test case was that the insert worked on a read locked
table. (!)
I added a new function that clears the lock type from all
tables that were prepared for a lock. I call this function
when a lock fails.
No test case. One test would add 50 seconds to the
test suite. Another test requires file mode modifications.
I added a test script to the bug report. It contains three
cases for failing locks. All could reproduce a table
corruption. All are fixed by this patch.
This bug was not lock timeout specific.
Removed sp-goto.test, sp-goto.result and all (disabled) GOTO code.
Also removed some related code that's not needed any more (no possible
unresolved label references any more, so no need to check for them).
NB: Keeping the ER_SP_GOTO_IN_HNDLR in errmsg.txt; it might become useful
in the future, and removing it (and thus re-enumerating error codes)
might upset things. (Anything referring to explicit error codes.)
Conversion from int and real numbers to UCS2 didn't work correctly:
CONVERT(100, CHAR(50) UNICODE)
CONVERT(103.9, CHAR(50) UNICODE)
The problem appeared because numbers have the binary charset, so a
simple charset recast binary->ucs2 was performed instead of a real
conversion.
Fixed by making numbers pretend to be non-binary.
used
In simple queries the result of the GROUP_CONCAT() function was always
of varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512
chars and a temporary table is used during the select, the result is
converted to blob, due to the policy of not storing fields longer than
512 chars in a tmp table as varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT()
is now always converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
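For illustration (names are made up), assuming the concatenated values
exceed 512 chars, the result type is now blob whether or not a temporary
table is used to resolve the query:

  SET SESSION group_concat_max_len = 4096;
  SELECT k, GROUP_CONCAT(long_text_col) FROM t1 GROUP BY k;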
CONNECTION_ID() was implemented as a constant Item, i.e. an instance of
Item_static_int_func class holding value computed at creation time.
Since Items are created on parsing, and trigger statements are parsed
on table open, the first connection to open a particular table would
effectively set its own CONNECTION_ID() inside trigger statements for
that table.
Re-implement CONNECTION_ID() as a class derived from Item_int_func, and
compute connection_id on every call to fix_fields().
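For illustration, a trigger of the affected kind (names are made up);
before the fix, CONNECTION_ID() in the trigger body could return the id
of whichever connection first opened the table rather than the current
one:

  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
  FOR EACH ROW
    SET NEW.created_by_conn = CONNECTION_ID();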
If the second or the third argument of a BETWEEN predicate was
a constant expression, like '2005.09.01' - INTERVAL 6 MONTH,
while the other two arguments were fields, then the predicate
was evaluated incorrectly and the query returned a wrong
result set.
The bug was introduced in 5.0.17 by the fix for bug 12612.
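For illustration, a predicate of the affected shape (names are made
up), with fields as the first two arguments and a constant expression
as the third:

  SELECT * FROM t1, t2
  WHERE t1.start_date BETWEEN t2.start_date
                          AND '2005.09.01' - INTERVAL 6 MONTH;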
a misnamed function
... in the presence of a continue handler. The problem was that with a
handler, execution continued as if the function existed and had set a
useful return value (which it hadn't).
The fix is to set a null return value and do an error return when a
function isn't found.
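For illustration, a routine of the affected kind (all names are made
up); before the fix, the call to the missing function continued under
the handler as if it had returned something meaningful:

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @handled = 1;
    SET @res = no_such_function(42);
  END;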
The function agg_cmp_type in item_cmpfunc.cc neglected the fact that
the first argument in a BETWEEN/IN predicate could be a field of a view.
As a result, in the case when the retrieved table was hidden by a view
over it and the arguments in the BETWEEN/IN predicates were of the
date/time type, the function did not perform conversion of the constant
arguments to the same format as the first field argument.
If the formats of the arguments differed, it caused a wrong evaluation
of the predicates.
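For illustration, a query of the affected kind (names are made up),
where d is a DATE column accessed through a view:

  CREATE VIEW v1 AS SELECT d FROM t1;
  SELECT * FROM v1 WHERE d BETWEEN '2006.01.01' AND '2006.02.01';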
too many open statements". The patch adds a new global variable
@@max_prepared_stmt_count. This variable limits the total number
of prepared statements in the server. The default value of
@@max_prepared_stmt_count is 16382. 16382 small statements
(a select against 3 tables with GROUP, ORDER and LIMIT) consume
100MB of RAM. Once this limit has been reached, the server will
refuse to prepare a new statement and return ER_UNKNOWN_ERROR
(unfortunately, we can't add new errors to 4.1 without breaking 5.0). The limit is changeable after startup
and can accept any value from 0 to 1 million. In case
the new value of the limit is less than the current
statement count, no new statements can be added, while the old
still can be used. Additionally, the current count of prepared
statements is now available through a global read-only variable
@@prepared_stmt_count.
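A sketch using the variables described above:

  SHOW GLOBAL VARIABLES LIKE 'max_prepared_stmt_count';
  SET GLOBAL max_prepared_stmt_count = 10000;
  SELECT @@prepared_stmt_count;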
gives wrong results". Implement previously missing
Item_row::cleanup. The bug is not repeatable in 5.0, probably
due to a coincidence: the problem is present in 5.0 as well.
The idea of the fix is for the master to send the FD (format
description) event with `created' set to 0 to a reconnecting slave
(upon slave_net_timeout, no master crash) to avoid destroying temp
tables.
In the case of a connect by the slave to the master after a master
crash, temp tables have already been cleaned up, so the slave cannot
keep `orphan' temp tables.
Also added comments and fixed some coding style issues (mostly in
comments too).
There are no functional changes, so no tests or documentation needed.
(This was originally part of a bugfix, but it was decided to not include this
in that patch; instead it's done separately.)
After FLUSH STATUS, max_used_connections was reset to 0 and wasn't
updated while cached threads were reused, until the moment a new
thread was created.
The first suggested fix from the original bug report was implemented:
a) On flushing the status, set max_used_connections to
threads_connected, not to 0.
b) Check if it is necessary to increment max_used_connections when
taking a thread from the cache as well as when creating new threads.
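For illustration, the counter can be observed like this; after the fix
it no longer drops to 0 and stays there while cached threads are being
reused:

  FLUSH STATUS;
  SHOW STATUS LIKE 'Max_used_connections';
  SHOW STATUS LIKE 'Threads_connected';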
The problem was due to the fact that with --lower-case-table-names set to 1
the function find_field_in_group did not convert the prefix 'HU' in
HU.PROJ.CITY into lower case when looking for it in the group list. Yet the
names in the group list were extended by the database name in lower case.
counter".
When TRUNCATE TABLE was called within a stored procedure the
auto_increment counter was not reset to 0, even though a straight
TRUNCATE of this table did reset it.
This fix makes TRUNCATE in stored procedures be handled exactly
the same way as straight TRUNCATE. We achieve this by rolling
back the fix for bug 8850, which is no longer needed since stored
procedures don't require prelocked mode anymore (and TRUNCATE is
not allowed in stored functions or triggers).
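For illustration (names are made up); after the fix the auto_increment
counter restarts from 1 in both the direct and the stored procedure
case:

  CREATE PROCEDURE truncate_t1()
    TRUNCATE TABLE t1;
  CALL truncate_t1();
  INSERT INTO t1 (payload) VALUES ('first row after truncate');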
Additional fix for INSERT DELAYED with subselect.
Originally detected in 5.1, but 5.0 could also be affected.
The user thread creates a dummy table object,
which is not included in the lock. The 'real' table is
opened and locked by the 'delayed' system thread.
The dummy object is now marked as not locked and this is
tested in mysql_lock_have_duplicate().
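For illustration, a statement of the affected shape (names are made
up), where the inserted value comes from a subselect:

  INSERT DELAYED INTO t1 (a) VALUES ((SELECT MAX(b) FROM t2));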
Multi-table UPDATE uses a temporary table to store new values for
fields. Along with the new values, the rowid of the record to be
updated is stored in a Field_string field. The table to be updated is
set as the source table of the rowid field.
But when the temporary table creates the tmp field for the rowid field
it converts it to a varstring field, because the table to be updated
was created by version 4.1. Due to this the stored rowids were broken
and no records for update were found.
The flag can_alter_field_type is added to the Field_string class. When
it is set to 0 the field won't be converted to varstring. The
Field_string::type() function now always returns MYSQL_TYPE_STRING if
can_alter_field_type is set to 0.
The multi_update::initialize_tables() function now sets the
can_alter_field_type flag to 0 for the rowid fields, denying conversion
of the field to a varstring field.
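For illustration, a statement of the affected kind (names are made up),
run against tables originally created by a 4.1 server:

  UPDATE t1, t2
  SET t1.status = t2.status
  WHERE t1.id = t2.id;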
The wrong value was being reported as the field_length for BIT
fields, resulting in confusion for at least Connector/J. The
field_length is now always the number of bits in the field, as
it should be.
The bug was caused by the wrong behaviour of mysql_insert(), which in
the case of INSERT DELAYED into a view exited with
thd->net.report_error == 0.
This blocked error reporting to the client, which started waiting
infinitely for a response to the query.
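For illustration, a statement of the affected kind (names are made up);
before the fix the client could hang waiting for a reply instead of
receiving an error:

  CREATE VIEW v1 AS SELECT * FROM t1;
  INSERT DELAYED INTO v1 (a) VALUES (1);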
The code in opt_sum_query that prevented the COUNT/MIN/MAX
optimization from being applied to outer joins was not adjusted
after introducing nested joins. As a result, if an outer join
contained a reference to a view as an inner table, the code of
opt_sum_query missed the presence of the ON expression and
erroneously applied the mentioned optimization.
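For illustration, a query of the affected kind (names are made up),
where v2 is a view used as the inner table of an outer join:

  CREATE VIEW v2 AS SELECT * FROM t2;
  SELECT COUNT(*) FROM t1 LEFT JOIN v2 ON t1.id = v2.t1_id;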