This performance degradation occurred because some cost evaluation code added to the function find_best in 4.1 was not merged into the function best_access_path, which was added together with the other code for the greedy optimizer.
Added a parameter to the function print_plan; it contains the accumulated cost for a given partial join.
The patch does not include a special test case since this performance
degradation is hard to reproduce with a simple example.
TODO: make the function find_best use the function best_access_path
in order to remove the code duplication, which might result in incomplete
merges in the future.
Bug#17667: An attacker has the opportunity to bypass query logging.
This adds a new, local-only printf format specifier to our *printf functions
that allows us to print known-size buffers that must not be interpreted as
NUL-terminated "strings."
The patch uses this format specifier when printing to the log, thus fixing
the problem.
The code that converts IN predicates to EXISTS predicates changes
the select list elements to the constant 1. Example:
SELECT ... FROM ... WHERE a IN (SELECT c FROM ...)
is transformed to :
SELECT ... FROM ... WHERE EXISTS (SELECT 1 FROM ... HAVING a = c)
However, there can be no FROM clause in the IN subquery and it may not be
a simple select:
SELECT ... FROM ... WHERE a IN (SELECT f(..) AS c UNION SELECT ...)
This query is transformed to:
SELECT ... FROM ... WHERE EXISTS (SELECT 1 FROM (SELECT f(..) AS c UNION SELECT ...) x HAVING a = c)
In the above query, c in the HAVING clause is made an Item_null_helper
(a subclass of Item_ref) pointing to the real Item_field (which is no
longer referenced anywhere else in the query).
This is done because Item_ref_null_helper collects information on whether
there are NULL values in the result. This is OK for directly executed
statements, because the Item_field pointed to by the Item_null_helper is
already fixed when the transformation is done. But when executed as
a prepared statement, all the Item instances are "un-fixed" before the
recompilation of the prepared statement. So when the Item_null_helper
gets fixed, it discovers that the Item_field it points to is not fixed
and issues an error.
The remedy is to keep the original select list references when there are
no tables in the FROM clause. So the above becomes:
SELECT ... FROM ... WHERE EXISTS (SELECT c FROM (SELECT f(..) AS c UNION SELECT ...) x HAVING a = c)
In this way c is referenced directly in the select list as well as by
reference in the HAVING clause, so it gets correctly fixed even with
prepared statements. And since the Item_null_helper subclass of
Item_ref_null_helper is not used anywhere else, it is removed.
Supplying --skip-rpl to mysql-test-run.pl would always disable the
slaves, but those slaves may still be needed for the federated tests.
Now we only disable the slaves when they are not used by any of the
tests.
too much memory. Instead, either create the equivalent SEL_TREE manually, or create only two ranges that
strictly include the area to scan.
(Note, just to re-iterate: increasing NOT_IN_IGNORE_THRESHOLD will make the optimization run slower for big
IN-lists, but the server will not run out of memory; the O(N^2) memory use has been eliminated.)
supported in SP but not in PS": just enable them in prepared
statements, the supporting functionality was implemented when
they were enabled in stored procedures.
trigger fails".
When the CONVERT_TZ() function was used in a trigger or stored function
(or in a stored procedure called from a trigger or stored function),
an error about a non-existing '.' table was reported.
Statements that use the CONVERT_TZ() function should have the time zone
related tables in their table list. The tz_init_table_list() function,
which is used to produce the part of the table list containing those
tables, didn't set the TABLE_LIST::db_length/table_name_length members
properly. As a result the time zone tables needed for the CONVERT_TZ()
function were incorrectly handled by the prelocking algorithm and a
"Table '.' doesn't exist" error was emitted.
This fix changes tz_init_table_list() so that it properly initializes the
TABLE_LIST::table_name_length/db_length members and thus produces a table
list which the prelocking algorithm can handle correctly.
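As an illustration, a trigger of roughly this shape (the table and column names here are invented for the example) would previously hit the error described above:
-- hypothetical schema, for illustration only
CREATE TABLE t1 (id INT, created_utc DATETIME, created_local DATETIME);
CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
  SET NEW.created_local = CONVERT_TZ(NEW.created_utc, 'UTC', 'Europe/Moscow');
INSERT INTO t1 (id, created_utc) VALUES (1, NOW());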
The fix refines the algorithm of generating DROPs for the binlog.
Temp tables with a common pseudo_thread_id are clustered into one query.
Consequently, one replication event per pseudo_thread_id is generated.
Backporting a changeset made for 5.0. Comments from there:
The fix refines the algorithm of generating DROPs for the binlog.
Temp tables with a common pseudo_thread_id are clustered into one query.
Consequently, one replication event per pseudo_thread_id is generated.
An error was emitted when one tried to select information from a view which
used the merge algorithm and which also had CONVERT_TZ() in its select list.
This bug was caused by the wrong assumption that the global table list for
a view handled using the merge algorithm begins with the tables belonging
to the main select of this view. Currently this assumption only breaks
when CONVERT_TZ() is used in the view's select list, but in the future
other cases may be added (for example, we may one day support merging of
views with subqueries in the select list). Relying on this false assumption
led to the use of the wrong table list for field lookups and therefore to
errors.
With this fix we explicitly use a pointer to the beginning of the main
select's table list.
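A minimal sketch of the previously failing scenario (names are hypothetical):
-- hypothetical view, for illustration only
CREATE VIEW v1 AS SELECT id, CONVERT_TZ(created_utc, 'UTC', 'MET') AS created_met FROM t1;
SELECT * FROM v1;
Since v1 uses the merge algorithm and has CONVERT_TZ() in its select list, the SELECT used to produce an error; with the fix it returns the converted values.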
Make the InnoDB option "loose", as the server might be
started with this option just to find out that the test
is to be skipped in this configuration (bug#17359).
Now NDB is only initialized and started when the tests that are
being run will make use of it. The same thing is also done for the
slave databases and the instance manager.
After review from Magnus: Only take a snapshot of the data directories
that are in use.
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the order lists were concatenated and the limit clause was
completely ignored.
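A query of the affected form (table and column names are hypothetical):
(SELECT a FROM t1 ORDER BY a DESC LIMIT 3) ORDER BY a;
Before the fix the two order lists were merged and the LIMIT of the parenthesized select was dropped, so the result set was wrong.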
Bug #15684: @@version_* are not all SELECTable
Added the appropriate information as read-only system variables, and
also removed some special-case handling of @@version along the way.
@@version_bdb was added, but isn't included in the test because it
depends on the presence of BDB.
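For example, queries such as the following (the exact set of available variables depends on the build) now work:
SELECT @@version, @@version_comment, @@version_compile_os, @@version_compile_machine;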
This was a case of too much code. The --start-dirty option should act
just like --start-and-exit, except it skips the database initialization
step. Now it does, which means it picks up the options from the specified
test case.
The SQL standard doesn't allow using fields in the HAVING clause that are
neither present in the GROUP BY clause nor under an aggregate function in
the HAVING clause. However, MySQL allows using such fields. This extension
assumes that the non-grouping fields have the same group-wise values;
otherwise, the result is unpredictable. Allowing this extension in the
strict MODE_ONLY_FULL_GROUP_BY SQL mode leads to misunderstanding of
HAVING capabilities.
A new error message, ER_NON_GROUPING_FIELD_USED, is added. It says
"non-grouping field '%-.64s' is used in %-.64s clause". This message is
intended for reporting errors when some field is not found in the
GROUP BY clause but has to be present there. Use cases for this message are
this bug and the case where a field is present in the SELECT item list not
under any aggregate function while a GROUP BY clause is present which
doesn't mention that field. Being more descriptive, it renders the
ER_WRONG_FIELD_WITH_GROUP error message obsolete.
The resolve_ref_in_select_and_group() function now reports the
ER_NON_GROUPING_FIELD_USED error if the strict mode is set and the field
for the HAVING clause is found in the SELECT item list only.
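For example, with the strict mode enabled a query along these lines (hypothetical names) is now rejected with ER_NON_GROUPING_FIELD_USED instead of returning an unpredictable result:
SET sql_mode = 'ONLY_FULL_GROUP_BY';
SELECT a, MAX(b) FROM t1 GROUP BY a HAVING c > 0;  -- c is neither grouped nor aggregated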
Fixes for Bug#12429: Replication tests fail: "Slave_IO_Running" (?) differs related to MySQL 4.1
and Bug#16920 rpl_deadlock_innodb fails in show slave status (reported for MySQL 5.1)
Updating data in a HEAP table with a BTREE index results in a wrong
index_length counter value, which keeps growing after each update.
When inserting a new record into the tree, the counter is incremented by:
sizeof(TREE_ELEMENT) + key_size + tree->size_of_element
But when deleting an element from the tree, the counter is decremented only by:
sizeof(TREE_ELEMENT) + tree->size_of_element
This fix makes the allocated memory counter for the tree accurate, that is,
the counter is also decreased by key_size when deleting a tree element.
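The effect could be observed via SHOW TABLE STATUS; a sketch (hypothetical table):
CREATE TABLE h1 (a INT, KEY idx1 USING BTREE (a)) ENGINE=MEMORY;
INSERT INTO h1 VALUES (1), (2), (3);
UPDATE h1 SET a = a + 1;
UPDATE h1 SET a = a + 1;
SHOW TABLE STATUS LIKE 'h1';
Before the fix Index_length grew with every update of the indexed values; with the fix it stays stable.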
The bug caused a reported index corruption in the cases when
key_cache_block_size was not a multiple of myisam_block_size,
e.g. when key_cache_block_size=1536 while myisam_block_size=1024.
Removed sp-goto.test, sp-goto.result and all (disabled) GOTO code.
Also removed some related code that's not needed any more (no possible
unresolved label references any more, so no need to check for them).
NB: Keeping the ER_SP_GOTO_IN_HNDLR in errmsg.txt; it might become useful
in the future, and removing it (and thus re-enumerating error codes)
might upset things. (Anything referring to explicit error codes.)
does not have "NOT NULL" attribute set. Also, calculate the padding
characters more safely, so that a negative number doesn't cause it to
print MAXINT-n spaces.
MySQL 4.1
and Bug#16920 rpl_deadlock_innodb fails in show slave status (reported for MySQL 5.1)
- backport of several fixes done in MySQL 5.0 to 4.1
- fix for new discovered instability (see comment on Bug#12429 + Bug#16920)
- reenabling of testcases
Conversion from int and real numbers to UCS2 didn't work correctly:
CONVERT(100, CHAR(50) UNICODE)
CONVERT(103.9, CHAR(50) UNICODE)
The problem appeared because numbers have the binary charset, so a
simple charset recast binary->ucs2 was performed
instead of a real conversion.
Fixed by making numbers pretend to be non-binary.
used
In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512 chars and
a temporary table is used during the select, the result is converted to blob,
due to the policy of not storing fields longer than 512 chars in the tmp
table as varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is
now always converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
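A sketch of the previously inconsistent behaviour (hypothetical names; assume the concatenated value exceeds 512 chars):
SELECT GROUP_CONCAT(txt) FROM t1;                                 -- was reported as VARCHAR
SELECT grp, GROUP_CONCAT(txt) FROM t1 GROUP BY grp ORDER BY grp DESC;  -- typically goes through a tmp table, was reported as BLOB
With the fix both report BLOB once the result exceeds 512 chars.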
CONNECTION_ID() was implemented as a constant Item, i.e. an instance of the
Item_static_int_func class holding a value computed at creation time.
Since Items are created on parsing, and trigger statements are parsed
on table open, the first connection to open a particular table would
effectively set its own CONNECTION_ID() inside trigger statements for
that table.
Re-implement CONNECTION_ID() as a class derived from Item_int_func, and
compute connection_id on every call to fix_fields().
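A hypothetical trigger illustrating the problem:
-- hypothetical schema, for illustration only
CREATE TABLE audit_log (who BIGINT, val INT);
CREATE TABLE t1 (a INT);
CREATE TRIGGER t1_ai AFTER INSERT ON t1 FOR EACH ROW
  INSERT INTO audit_log VALUES (CONNECTION_ID(), NEW.a);
Before the fix, every connection inserting into t1 logged the connection id of whichever connection had opened the table first; now CONNECTION_ID() is evaluated for the executing connection.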
a race between updating and checking Max_used_connections. This is done in
a loop until either the disconnect has finished or the timeout has expired.
In the latter case the test will fail.
If the second or the third argument of a BETWEEN predicate was
a constant expression, like '2005.09.01' - INTERVAL 6 MONTH,
while the other two arguments were fields then the predicate
was evaluated incorrectly and the query returned a wrong
result set.
The bug was introduced in 5.0.17 by the fix for Bug#12612.
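A query shape that hit the bug (hypothetical names):
SELECT * FROM t1 WHERE d1 BETWEEN d2 AND '2005.09.01' - INTERVAL 6 MONTH;
Here d1 and d2 are fields and the third argument is a constant expression, which is exactly the case that was evaluated incorrectly.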
a misnamed function
... in the presence of a continue handler. The problem was that with a
handler, execution continued as if the function existed and had set a
useful return value (which it hadn't).
The fix is to set a null return value and do an error return when a function
isn't found.
The function agg_cmp_type in item_cmpfunc.cc neglected the fact that
the first argument in a BETWEEN/IN predicate could be a field of a view.
As a result, in the case when the retrieved table was hidden by a view
over it and the arguments in the BETWEEN/IN predicates were of
the date/time type, the function did not convert the constant arguments
to the same format as the first field argument.
If the formats of the arguments differed, this caused a wrong evaluation
of the predicates.
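A sketch of the affected case (hypothetical names):
CREATE VIEW v1 AS SELECT event_date FROM t1;
SELECT * FROM v1 WHERE event_date BETWEEN '2006-01-15' AND '2006-02-15';
SELECT * FROM v1 WHERE event_date IN ('2006-01-15', '2006-02-15');
Because event_date is accessed through the view, the string constants were previously not converted to the date format of the field before comparison.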
too many open statements". The patch adds a new global variable
@@max_prepared_stmt_count. This variable limits the total number
of prepared statements in the server. The default value of
@@max_prepared_stmt_count is 16382. 16382 small statements
(a select against 3 tables with GROUP, ORDER and LIMIT) consume
100MB of RAM. Once this limit has been reached, the server will
refuse to prepare a new statement and return ER_UNKNOWN_ERROR
(unfortunately, we can't add new errors to 4.1 without breaking 5.0). The limit is changeable after startup
and can accept any value from 0 to 1 million. In case
the new value of the limit is less than the current
statement count, no new statements can be added, while existing ones
can still be used. Additionally, the current count of prepared
statements is now available through a global read-only variable
@@prepared_stmt_count.
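Both variables behave like ordinary system variables, e.g.:
SHOW GLOBAL VARIABLES LIKE 'max_prepared_stmt_count';
SET GLOBAL max_prepared_stmt_count = 10000;
SELECT @@global.prepared_stmt_count;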
gives wrong results". Implement previously missing
Item_row::cleanup. The bug is not repeatable in 5.0, probably
due to a coincidence: the problem is present in 5.0 as well.
The idea of the fix is for the master to send the FD event with `created'
set to 0 to a reconnecting slave (upon slave_net_timeout, no master crash)
to avoid destroying temp tables.
When a slave connects to the master after a master crash, the temp tables
have already been cleaned up, so the slave cannot keep `orphan' temp tables.
After FLUSH STATUS, max_used_connections was reset to 0 and wasn't
updated while cached threads were being reused, until the moment a new
thread was created.
The first suggested fix from the original bug report was implemented:
a) On flushing the status, set max_used_connections to
threads_connected, not to 0.
b) Check whether it is necessary to increment max_used_connections when
taking a thread from the cache as well as when creating new threads.
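The effect can be observed with (exact values depend on the running server):
FLUSH STATUS;
SHOW STATUS LIKE 'Max_used_connections';
SHOW STATUS LIKE 'Threads_connected';
After the fix, Max_used_connections restarts from the current Threads_connected value rather than from 0, and is also bumped when a cached thread is reused.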
The problem was due to the fact that with --lower-case-table-names set to 1
the function find_field_in_group did not convert the prefix 'HU' in
HU.PROJ.CITY into lower case when looking for it in the group list. Yet the
names in the group list were extended by the database name in lower case.
counter".
When TRUNCATE TABLE was called within a stored procedure, the
auto_increment counter was not reset to 0, even though a straight
TRUNCATE of this table did reset it.
This fix makes TRUNCATE in stored procedures be handled exactly
the same way as a straight TRUNCATE. We achieve this by rolling
back the fix for bug 8850, which is no longer needed since stored
procedures don't require prelocked mode anymore (and TRUNCATE is
not allowed in stored functions or triggers).
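A sketch of the now-consistent behaviour (hypothetical names):
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
CREATE PROCEDURE p1() TRUNCATE TABLE t1;
INSERT INTO t1 VALUES (NULL), (NULL), (NULL);
CALL p1();
INSERT INTO t1 VALUES (NULL);  -- id restarts from 1, as with a straight TRUNCATE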
The wrong value was being reported as the field_length for BIT
fields, resulting in confusion for at least Connector/J. The
field_length is now always the number of bits in the field, as
it should be.
The bug was caused by wrong behaviour of mysql_insert() which in case
of INSERT DELAYED into a view exited with thd->net.report_error == 0.
This blocked error reporting to the client, which then waited indefinitely
for a response to the query.
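The hang could be triggered with a statement of this shape (hypothetical names):
CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT a FROM t1;
INSERT DELAYED INTO v1 VALUES (1);
Before the fix the client waited forever for a reply; with the fix the error is reported to the client immediately.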
The code in opt_sum_query that prevented the COUNT/MIN/MAX
optimization from being applied to outer joins was not adjusted
after introducing nested joins. As a result, if an outer join
contained a reference to a view as an inner table, the code of
opt_sum_query missed the presence of ON expressions and
erroneously applied the mentioned optimization.
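A sketch of the query shape that was wrongly optimized (hypothetical names):
CREATE VIEW v2 AS SELECT b FROM t2;
SELECT COUNT(*) FROM t1 LEFT JOIN v2 ON t1.a = v2.b;
Because the inner table of the outer join is a view, opt_sum_query overlooked the ON expression and applied the COUNT/MIN/MAX shortcut even though it must not be used for outer joins.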
Multiple equalities were not adjusted after reading constant tables.
It resulted in neglecting good index-based methods that could be
used to access other tables.