a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the current
statement at the end of dispatch_command(). To avoid issuing
redundant disk syncs, an optimization of the two-phase commit
protocol is implemented that bypasses the two-phase commit when
the transaction is read-only.
The problem is not about intervals and doesn't actually cause a 'full table scan'.
We have an optimization for DISTINCT: when we have
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the joined table once one conforming row has been found.
It stopped working in 5.0 because we return NESTED_LOOP_OK when we come upon
that case in evaluate_join_record(), and that doesn't break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
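A minimal sketch (hypothetical table and column names) of the query shape this optimization targets:

  -- only a column from the first join table is selected with DISTINCT
  SELECT DISTINCT t1.a
  FROM t1 JOIN t2 ON t2.b = t1.a;
  -- once one matching t2 row is found for a given t1.a, the remaining
  -- t2 rows need not be read for that value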
when executed in version 5
ZEROFILL is a field attribute only, so we can't always
propagate constants for zerofill fields: the values and
expression results don't carry that flag.
Fixed by converting the const value to a string and
using that in const propagation when the context allows it,
and by disabling const propagation for fields with the ZEROFILL
flag in all other cases.
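A minimal sketch (hypothetical table and column names) of the kind of statement affected:

  -- the zerofill padding belongs to the column, not to the constant
  CREATE TABLE t1 (zf INT(5) ZEROFILL);
  INSERT INTO t1 VALUES (40);
  SELECT zf, CONCAT(zf) FROM t1 WHERE zf = 40;
  -- zf displays as '00040'; const propagation must not turn
  -- CONCAT(zf) into CONCAT(40) and lose the padding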
two timestamp fields.
The actual problem here was that CREATE TABLE allowed a zero
date as the default value for a TIMESTAMP column in NO_ZERO_DATE mode.
The thing is that a type-specific rule applies to TIMESTAMP:
column_name TIMESTAMP == column_name TIMESTAMP DEFAULT 0
whereas for any other date data type
column_name TYPE == column_name TYPE DEFAULT NULL
The fix is to raise an error when we're in NO_ZERO_DATE mode and
there is a TIMESTAMP column without an explicit default value.
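A minimal sketch (hypothetical table name) of a statement that is now rejected:

  SET sql_mode = 'NO_ZERO_DATE';
  -- the second TIMESTAMP column gets an implicit DEFAULT 0,
  -- which NO_ZERO_DATE now rejects with an error
  CREATE TABLE t1 (ts1 TIMESTAMP, ts2 TIMESTAMP);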
for wildcard values.
The server ignored the escape character before wildcards when
calculating priority values for sorting the privilege
list. (Actually, the server counted an escape character as an
ordinary wildcard like % or _.) I.e. a table name template
with a wildcard character like 'tbl_1' had a higher priority in
the privilege list than a concrete table name without wildcards
like 'tbl\_1', and some privileges of 'tbl\_1' were hidden
by privileges for 'tbl_1'.
The get_sort function has been modified to ignore escaped
wildcards, as it should.
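For illustration only, the same escaped vs. unescaped wildcard distinction as it appears in database-level grants (hypothetical names); the fix itself is in the generic get_sort() priority calculation:

  -- `db\_1`.* names one concrete database, `db_1`.* is a pattern
  GRANT SELECT ON `db\_1`.* TO 'u'@'localhost';
  GRANT SELECT ON `db_1`.* TO 'u'@'localhost';
  -- before the fix the escape character was counted as a wildcard,
  -- so the concrete entry could be sorted as if it were a pattern
  -- and shadowed by the real pattern entry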
type conversion.
Instead of copying the whole character string from a temporary
buffer, the server copied a short-lived pointer to that string
into a long-lived structure. That has been fixed.
behave randomly with mysql_change_user.
The test case had to be moved into the not_embedded_server.test file,
because SHOW GLOBAL STATUS does not work properly in the embedded
server (see bug 34517).
but not collation.
The problem here was that text literals in a view were always
dumped with a character set introducer. That led to losing
collation information.
The fix is to dump the character set introducer only if it was
present in the original query. That is now possible because
losing the character set of string literals in views is no
longer a problem -- after WL#4052 the view is dumped
in the original character set.
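A minimal sketch (hypothetical view name and collation choice) of the information that used to be lost:

  SET NAMES latin1 COLLATE latin1_german2_ci;
  CREATE VIEW v1 AS SELECT 'abc' AS c;
  -- before the fix the stored definition became SELECT _latin1'abc',
  -- so the literal reverted to latin1's default collation
  -- (latin1_swedish_ci) instead of keeping latin1_german2_ci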
behave randomly with mysql_change_user.
The problem was that global status variables were not updated
in THD::check_user(), so thread statistics were lost after
COM_CHANGE_USER.
The fix is to update the global status variables with the thread's
ones before preparing the thread for the new user.
The problem is that AFTER UPDATE triggers fired only if the
new data was different from the old data in the row. The trigger
should fire regardless of whether the data actually changes.
The solution is to fire the trigger on UPDATE even if there are
no changes to the values (because the new value is the same as the old one).
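A minimal sketch (hypothetical names) of an UPDATE that previously did not activate the trigger:

  CREATE TABLE t1 (id INT PRIMARY KEY, val INT);
  CREATE TABLE audit_log (id INT);
  CREATE TRIGGER t1_au AFTER UPDATE ON t1
    FOR EACH ROW INSERT INTO audit_log VALUES (NEW.id);
  INSERT INTO t1 VALUES (1, 10);
  -- sets val to its current value; the AFTER UPDATE trigger
  -- now fires for this row as well
  UPDATE t1 SET val = 10 WHERE id = 1;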
or trigger crashes server
Under some circumstances a combination of VIEWs, subselects with outer
references and PS/SP/triggers could lead to the use of uninitialized memory
and a server crash as a result.
Fixed by changing the code in Item_field::fix_fields() so that in cases
where the field is a VIEW reference, we first check whether the field
is also an outer reference, and mark it appropriately before returning.
The unsignedness of large integer user variables was not being
properly preserved when they were fed to prepared statements. This
happened because the unsigned flag wasn't being updated when the
user variable was converted to a parameter.
The solution is to copy the unsigned flag when converting the
user variable to a parameter and to take the unsigned flag into
account when converting the integer to a string.
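A minimal sketch of the affected case:

  -- a value above the signed BIGINT range only fits as unsigned
  SET @big = 18446744073709551615;
  PREPARE stmt FROM 'SELECT ? AS v';
  EXECUTE stmt USING @big;
  -- before the fix the parameter could come back as a negative
  -- signed value instead of 18446744073709551615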
An out-of-memory error was thrown when the sort buffer size was too small.
This led to user confusion.
Now filesort reports an error saying that the sort buffer is too small.
The problem is that one cannot create a stored routine if sql_mode
contains NO_ENGINE_SUBSTITUTION or PAD_CHAR_TO_FULL_LENGTH. Also, when
an event is created, the mode is silently lost if sql_mode contains one
of the aforementioned values. This was happening because the table
definitions which store sql_mode values weren't being updated to accept
the new values of sql_mode.
The solution is to update, in a backwards compatible manner, the various
table definitions (columns) that store the sql_mode value to take the
new possible values into account. One incompatible change is that if an
event that is being created can't be stored in the mysql.event table, an
error will be raised.
The test cases also ensure that new SQL modes will be added to the mysql.proc
and mysql.event tables, otherwise the tests will fail.
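A minimal sketch (hypothetical routine name) of the previously failing case:

  SET sql_mode = 'NO_ENGINE_SUBSTITUTION';
  -- before the fix this failed because the sql_mode column in
  -- mysql.proc had no matching SET member for the new mode
  CREATE PROCEDURE p1() SELECT 1;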
and my_innodb_commit_concurrency global variables.
The type of the my_innodb_autoextend_increment and
my_innodb_commit_concurrency variables has been changed to
GET_ULONG.
The server handled truncation when assigning too-long values
to CHAR/VARCHAR/TEXT columns in different ways when the
truncated characters were spaces:
1. CHAR(N) columns silently ignored end-space truncation;
2. TEXT columns posted a truncation warning/error in
non-strict/strict mode;
3. VARCHAR columns always posted a truncation note,
in any mode.
Space truncation processing has been synchronised across
CHAR/VARCHAR/TEXT columns: the current behavior of VARCHAR
columns has been adopted as the standard.
Binary-encoded string/BLOB columns are not affected.
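A minimal sketch (hypothetical table name) of the now-uniform behavior:

  CREATE TABLE t1 (c CHAR(3), vc VARCHAR(3));
  -- the assigned values end in spaces beyond the column length;
  -- CHAR, VARCHAR and TEXT columns now all post the same truncation
  -- note that VARCHAR used to post
  INSERT INTO t1 VALUES ('abc   ', 'abc   ');
  SHOW WARNINGS;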
The problem is that deprecated syntax warnings were not being
suppressed when the stored routine was parsed for its first
execution. It doesn't make sense to print out deprecated
syntax warnings when the routine is being executed, because this
kind of warning only matters when the routine is being created.
The solution is to suppress deprecated syntax warnings when
parsing the stored routine for loading into the cache (which may
mean that the routine is being executed for the first time).
When issuing a column-level grant on a table which requires pre-locking, the
server crashed.
The reason behind the crash was that data structures used by the lock API
weren't properly reinitialized in the case of a column-level grant.
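A minimal sketch (hypothetical names) of the crashing scenario; the trigger is what makes the table require pre-locking:

  CREATE TABLE t1 (a INT, b INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @x = NEW.a;
  -- a column-level grant on such a table crashed before the fix
  GRANT UPDATE (a) ON test.t1 TO 'someuser'@'localhost';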
on table creates
The problem was an incompatible syntax for key definitions in CREATE
TABLE.
5.0 supports only the following syntax for key definitions (see "CREATE
TABLE syntax" in the manual):
{INDEX|KEY} [index_name] [index_type] (index_col_name,...)
While the 5.1 parser supports the above syntax, the "preferred" syntax was
changed to:
{INDEX|KEY} [index_name] (index_col_name,...) [index_type]
The latter syntax is used in 5.1 for the SHOW CREATE TABLE output, which
led to dumps generated by 5.1 being incompatible with 5.0.
Fixed by changing the parser in 5.0 to support both the 5.0 and 5.1 syntax
for key definitions.
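A minimal sketch (hypothetical table names) showing both forms that 5.0 now accepts:

  -- 5.0-style: index_type before the column list
  CREATE TABLE t1 (a INT, KEY k1 USING BTREE (a));
  -- 5.1-style, as emitted by 5.1 SHOW CREATE TABLE: index_type after
  CREATE TABLE t2 (a INT, KEY k2 (a) USING BTREE);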
Simple subselects are pulled up into the enclosing select. This operation
replaces the pulled subselect with the first item from the select list of
the subselect.
If an alias is defined for the subselect, it is inherited by the replacement item.
As this is done after the fix_fields phase, the alias isn't shown if the
replacement item is a stored function. This happens because Item_func_sp::make_field
builds the send field from its result_field and ignores the defined alias.
Now, when an alias is defined, Item_func_sp::make_field sets it on
the returned field.
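A minimal sketch (hypothetical function name) of the query shape; the alias is what used to be lost from the result set metadata:

  CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;
  -- the subselect is pulled up and replaced by f1(); the result
  -- column should still be named my_alias
  SELECT (SELECT f1()) AS my_alias;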
pre-locking.
The crash was caused by an implicit assumption in check_table_access() that
the table_list parameter is always a part of lex->query_tables.
When iterating over the passed list of tables, check_table_access() used
to stop only when lex->query_tables_last_not_own was reached.
In case of pre-locking, lex->query_tables_last_own is not NULL and points
to some element of lex->query_tables. When the parameter
of check_table_access() was not part of lex->query_tables, the loop's exit
condition could never be satisfied, and a crash would happen when the current
table pointer pointed beyond the end of the provided list.
The fix is to change the signature of check_table_access() to also accept
a numeric limit on loop iterations, similarly to check_grant(), and to
supply this limit in all places where we want to check access to tables
that are outside lex->query_tables, or just want to check access to one table.
The problem is that the Table_locks_waited variable was incremented only
when the lock request succeeded. If a thread waiting for the lock
got killed or the lock request was aborted, the variable would
not be incremented, leading to inaccurate values.
The solution is to increment Table_locks_waited whenever the
lock request is queued. This better reflects the intended behavior
of the variable -- to show how many times a lock had to be waited for.
Two disjuncts containing equalities of the form key=const1 and key=const2 can
be merged into one if const1 is equal to const2. To check this, the common
collation of the constants was used rather than the collation of the field key.
For example, when the default collation of the constants was case insensitive
while the collation of the field was case sensitive, the two OR-ed equality
predicates key='b' and key='B' were incorrectly merged into one, key='b'. As a
result, ref access was used instead of range access and wrong result sets were
returned in many cases.
Fixed the problem by comparing the constants of the OR-ed predicates using the
collation of the key field.
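A minimal sketch (hypothetical table, case-sensitive column collation) of the affected query:

  CREATE TABLE t1 (k VARCHAR(10) COLLATE latin1_bin, KEY (k));
  INSERT INTO t1 VALUES ('b'), ('B');
  -- with a case-sensitive column 'b' and 'B' are distinct values,
  -- so these predicates must not be merged into a single k='b'
  SELECT * FROM t1 WHERE k = 'b' OR k = 'B';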
The problem was that CREATE/RENAME/DROP USER statements were written to the
binary log regardless of errors, even when no data had been changed.
After this patch, CREATE/RENAME/DROP USER statements are not written to the
binary log if they make no changes; if a statement does make changes, it is
logged together with the possible error code.
This patch is based on the patch for BUG#29749, which is not pushed.
Bug 33983 (Stored Procedures: wrong end <label> syntax is accepted)
The server used to crash when REPEAT or another control instruction
was used in conjunction with labels and a LEAVE instruction.
The crash was caused by a missing "pop" of handlers or cursors in the
code representing the stored program. When executing the code in a loop,
this missing "pop" would result in a stack overflow, corrupting memory.
Code generation has been fixed to produce the missing h_pop/c_pop
instructions.
Also, the logic checking that labels at the beginning and the end of a
statement are matched was incorrect, causing Bug 33983.
End labels, when used, must match the label used at the beginning of a block.
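A minimal sketch (hypothetical procedure, shown without the client DELIMITER handling) of the construct involved; the end label must now match the one at the start of the block:

  CREATE PROCEDURE p1()
  BEGIN
    my_loop: REPEAT
      LEAVE my_loop;
    UNTIL FALSE END REPEAT my_loop;  -- END REPEAT other_label is rejected
  END;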
Bug#25347: mysqlcheck -A -r doesn't repair table marked as crashed
mysqlcheck tested the nullness of the engine type to decide whether a
"table" is a view or not. That also falsely catches tables that
are severely damaged.
Instead, use SHOW FULL TABLES to test whether a "table" is a view
or not.
(Don't add a new function. Instead, get the original data in a smarter way.)
Make it safe for use against databases created before views were introduced.
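A minimal sketch of the check mysqlcheck now relies on:

  -- the Table_type column distinguishes views from base tables,
  -- even when a base table is too damaged to report its engine
  SHOW FULL TABLES FROM test WHERE Table_type = 'VIEW';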
Add a new variable m_highest_seen for when we only peek at the auto_increment
NEXTID and do not retrieve it into the cache. Add a new method to check the
tupleId before calling the data node.
ndb_restore.result, ndb_restore.test:
Changed test to use information_schema to check auto_increment
DictCache.cpp, Ndb.cpp:
Add a new variable m_highest_seen for when we only peek at the auto_increment NEXTID and do not retrieve it into the cache. Add a new method to check the tupleId before calling the data node. When setting the auto_increment value we also read up the new value; this is useful if we use the table for the first time in this MySQL Server and haven't yet seen the NEXTID value. The kernel will avoid updating, since it already has the value, but will also read up the NEXTID value to ensure we don't need to do this again.
ndb_auto_increment.result:
Updated result file since it was incorrect
The problem occurred when one had a subquery that had an equality X=Y where
Y referred to a named select list expression from the parent select. MySQL
crashed when trying to use the X=Y equality for ref-based access.
Fixed by allowing non-Item_field items in the described case.
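A heavily hypothetical sketch of the query shape (table and column names invented); the subquery's equality compares an indexed column with a named expression from the parent select:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT, KEY (b));
  SELECT a + 0 AS named_expr
  FROM t1
  GROUP BY named_expr
  HAVING (SELECT COUNT(*) FROM t2 WHERE t2.b = named_expr) > 0;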
Fixes:
Bug #18942: DROP DATABASE does not drop an orphan FOREIGN KEY constraint
Fix Bug#18942 by dropping all foreign key constraints at the end of
DROP DATABASE. Usually, by then, there are no foreign key constraints
left because all of them are dropped when the relevant tables are
dropped. This code is to ensure that any orphaned FKs are wiped too.
Bug #29157: UPDATE, changed rows incorrect
Return HA_ERR_RECORD_IS_THE_SAME from ha_innobase::update_row() if no
columns were updated.
Bug #32440: InnoDB free space info does not appear in SHOW TABLE STATUS or I_S
Put information about the free space in a tablespace in
INFORMATION_SCHEMA.TABLES.DATA_FREE. This information was previously
available in INFORMATION_SCHEMA.TABLES.TABLE_COMMENT, but MySQL has
removed it from there recently.
The stored value is in kilobytes.
This can be considered a permanent workaround for
http://bugs.mysql.com/32440. "Workaround" because that bug is about the
data missing from TABLE_COMMENT, and that is not actually solved here.
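A minimal sketch of where the free-space figure now shows up (the stored value is in kilobytes):

  SELECT TABLE_NAME, DATA_FREE
  FROM INFORMATION_SCHEMA.TABLES
  WHERE ENGINE = 'InnoDB' AND TABLE_SCHEMA = 'test';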
The ROUND(X, D) function would change the Item::decimals field during
execution to achieve the effect of a dynamic number of decimal digits.
This caused a series of bugs:
Bug #30617: Round() function not working under some circumstances in InnoDB
Bug #33402: ROUND with decimal and non-constant cannot round to 0 decimal places
Bug #30889: filesort and order by with float/numeric crashes server
Fixed by never changing the number of shown digits for DECIMAL when
used with a nonconstant number of decimal digits.
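A minimal sketch (hypothetical table) of the affected pattern, where the number of decimals is not a constant:

  CREATE TABLE t1 (x DECIMAL(10,4), d INT);
  INSERT INTO t1 VALUES (1.2345, 2), (1.2345, 0);
  -- D varies per row; the declared number of decimals of the result
  -- is no longer changed on the fly during execution
  SELECT ROUND(x, d) FROM t1;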