The problem is that when a stored procedure is being parsed for
the first execution, the body is copied to a temporary buffer
which is discarded some time after the statement is parsed.
And during this parsing phase, the rule for CREATE VIEW was
holding a reference to the string being parsed for use during
the execution of the CREATE VIEW statement, leading to invalid
memory access later.
The solution is to allocate and copy the SELECT of a CREATE
VIEW statement using the thread memory root, which is set to
the permanent arena of the stored procedure.
a table name.
The problem was that fill_defined_view_parts() did not return
an error if a table is going to be altered. That happened if
the table was already in the table cache. In that case,
open_table() returned a non-NULL value (a valid TABLE instance
from the cache).
The fix is to ensure that an error is thrown even if the table
is in the cache.
The test case for bug#31048 checks that there is no crash on stack
overrun. But due to different stack sizes on different platforms it failed
on some of them.
The new test case checks that a query with at least 4 levels of subquery
nesting works without stack overrun, and that deeper nesting doesn't
cause a crash.
floating point numbers
Some math functions did not check whether the result is a valid number
(i.e. not +/-inf or NaN).
Fixed by validating the result where necessary and returning NULL in
case of invalid result.
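For illustration, a hypothetical session showing the fixed behavior (the exact set of affected functions is per the patch; later versions may raise errors instead of returning NULL):
    SELECT POW(-2, 0.5);  -- result would be NaN: now returns NULL
    SELECT EXP(750);      -- result would be +inf: now returns NULL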
Fixes the following bugs:
- Bug #33349: possible race condition revolving around data dictionary and repartitioning
Introduce retry/sleep logic as a workaround for a transient bug
where ::open fails for partitioned tables randomly if we are using
one file per table.
- Bug #34053: normal users can enable innodb_monitor logging
In CREATE TABLE and DROP TABLE check whether the table in question is one
of the magic innodb_monitor tables and whether the user has enough rights
to mess with it before doing anything else.
- Bug #22868: 'Thread thrashing' with > 50 concurrent conns under an upd-intensive workload
- Bug #29560: InnoDB >= 5.0.30 hangs on adaptive hash rw-lock 'waiting for an X-lock'
This is a combination of changes that forward ports the scalability fix applied to 5.0
through r1001.
It reverts changes r149 and r122 (these were 5.1-specific changes made in lieu of
the 5.0 scalability fix).
Then it applies r1001, the original 5.0 scalability fix, and finally r2082, which
fixes an issue with the original fix.
- Bug #30930: Add auxiliary function to retrieve THD::thread_id
Add thd_get_thread_id() function. Also make check_global_access() function
visible to InnoDB under INNODB_COMPATIBILITY_HOOKS #define.
and ps-protocol
Finding a routine should be a transparent operation as
far as the binary log is concerned.
But it was influencing the binary log because of the TIMESTAMP
column in the proc table.
Fixed by preserving and restoring the time_zone usage flag when
searching for a stored routine in the proc table.
breaks replication
NAME_CONST() didn't replicate constant character set and collation
correctly.
With this fix NAME_CONST() inherits collation from the value argument.
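A minimal illustration (the literal is hypothetical; NAME_CONST() is what the server emits when replicating stored-program local variables):
    -- The collation of the value argument is now inherited:
    SELECT COLLATION(NAME_CONST('n', _latin1'a' COLLATE latin1_bin));
    -- expected: latin1_bin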
a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the current
statement at the end of dispatch_command(). In order to not issue
redundant disc syncs, an optimization of the two-phase commit
protocol is implemented to bypass the two phase commit if
the transaction is read-only.
Problem is not about intervals and doesn't actually cause 'full table scan'.
We have an optimization for DISTINCT: when we have
'DISTINCT field_from_first_join_table', we don't need to read all the
rows from the joined table once we have found one conforming row.
It stopped working in 5.0 because we return NESTED_LOOP_OK when we come upon
that case in evaluate_join_record(), and that doesn't break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
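A hypothetical query of the affected shape:
    -- DISTINCT covers only t1 columns, so scanning t2 can stop at
    -- the first match for each t1 row:
    SELECT DISTINCT t1.a FROM t1 JOIN t2 ON t1.a = t2.a;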
when executed in version 5
Zero fill is a field attribute only, so we can't always
propagate constants for ZEROFILL fields: the values and
expression results don't have that flag.
Fixed by converting the const value to a string and
using that in const propagation when the context allows it.
Disable const propagation for fields with the ZEROFILL flag in
all other cases.
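For illustration (table and column names hypothetical), ZEROFILL affects display, not the stored value:
    CREATE TABLE t (a INT(4) ZEROFILL);
    INSERT INTO t VALUES (1);
    -- a = 1 matches, but the constant 1 carries no zero-fill, so it
    -- must not blindly replace 'a' in other contexts:
    SELECT a, CONCAT(a) FROM t WHERE a = 1;  -- both show '0001'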
two timestamp fields.
The actual problem here was that CREATE TABLE allowed a zero
date as a default value for a TIMESTAMP column in NO_ZERO_DATE mode.
The thing is that for the TIMESTAMP data type a specific rule is applied:
column_name TIMESTAMP == column_name TIMESTAMP DEFAULT 0
whereas for any other date data type
column_name TYPE == column_name TYPE DEFAULT NULL
The fix is to raise an error when we're in NO_ZERO_DATE mode and
there is a TIMESTAMP column without a default value.
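A hypothetical reproduction (the second TIMESTAMP column is the one that gets the implicit DEFAULT 0):
    SET sql_mode = 'NO_ZERO_DATE';
    -- ts2 implicitly gets DEFAULT 0, which NO_ZERO_DATE must reject:
    CREATE TABLE t (ts1 TIMESTAMP, ts2 TIMESTAMP);  -- now an error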
from storage engine
Federated may crash a server, return a wrong result set, or return the
"ERROR 1030 (HY000): Got error 1430 from storage engine" message
when a local (engine=federated) table has a key over a nullable
column.
The problem was a wrong implementation of the function that creates
the WHERE clause of the remote query from a key.
This is *not* a fix to the bug. I'm only disabling the failing part of
mysqldump.test until the bug is fixed. Whoever fixes it, please re-enable
the test.
for wildcard values.
The server ignored the escape character before wildcards during
the calculation of priority values for the sorting of a privilege
list. (Actually the server counted an escape character as an
ordinary wildcard like % or _.) I.e. a table name template
with a wildcard character like 'tbl_1' had a higher priority in
a privilege list than a concrete table name without wildcards
like 'tbl\_1', and some privileges of 'tbl\_1' were hidden
by privileges for 'tbl_1'.
The get_sort function has been modified so that escaped
wildcards are no longer counted as wildcards.
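An illustration at the database level, where GRANT accepts patterns (names hypothetical):
    -- 'db\_1' names one concrete database; 'db_1' is a pattern in
    -- which _ matches any character. The concrete entry must sort
    -- ahead of the pattern:
    GRANT SELECT ON `db\_1`.* TO 'u'@'localhost';
    GRANT INSERT ON `db_1`.*  TO 'u'@'localhost';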
type conversion.
Instead of copying the whole character string from a temporary
buffer, the server copied a short-lived pointer to that string
into a long-lived structure. That has been fixed.
and
bug#33932 assertion at handle_slave_sql if init_slave_thread() fails
The asserts were caused by:
bug#33931: the thd being deleted at the time the error code was executing,
plus a missed initialization;
bug#33932: the initialization of the slave_is_running member was missed.
Fixed by relocating the initialization of the mi members and removing the
'delete thd'. This is safe to do, as the deletion happens later, explicitly,
in the caller of init_slave_thread().
Todo: when merging, the test should be moved into suite/bugs for 5.x (when x>0).
behave randomly with mysql_change_user.
The test case had to be moved into not_embedded_server.test file,
because SHOW GLOBAL STATUS does not work properly in embedded
server (see bug 34517).
but not collation.
The problem here was that text literals in a view were always
dumped with a character set introducer. That led to losing
collation information.
The fix is to dump the character set introducer only if it was present
in the original query. That is now possible because the character set
of string literals in views is no longer lost -- after WL#4052 the view
is dumped in the original character set.
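For illustration (hypothetical view):
    CREATE VIEW v AS SELECT 'abc' COLLATE latin1_german2_ci AS c;
    -- The dump no longer adds a _latin1 introducer that was not in
    -- the original query:
    SHOW CREATE VIEW v;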
behave randomly with mysql_change_user.
The problem was that global status variables were not updated
in THD::check_user(), so thread statistics were lost after
COM_CHANGE_USER.
The fix is to update the global status variables with the thread ones
before preparing the thread for the new user.
mysql-test/t/variables.test, because:
- mysql-test/suite/rpl/t/rpl_variables.test does not replicate anything,
so should not be in the rpl suite.
- mysql-test/t/variables.test is the place for testing variable-related
problems and features.
- I will soon commit a patch containing a test case that tests
replication of variables. It would be good if I could call the test case
mysql-test/suite/rpl/t/rpl_variables.test. I'm making place for that now.
The problem is that AFTER UPDATE triggers will fire only if the
new data is different from the old data on the row. The trigger
should fire regardless of whether there are changes to the data.
The solution is to fire the trigger on UPDATE even if the row is not
actually changed (because the new value is the same as the old one).
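A hypothetical check of the fixed behavior:
    CREATE TABLE t (a INT);
    INSERT INTO t VALUES (1);
    CREATE TRIGGER t_au AFTER UPDATE ON t FOR EACH ROW SET @fired = 1;
    SET @fired = 0;
    UPDATE t SET a = 1;  -- same value; the trigger now fires anyway
    SELECT @fired;       -- 1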
or trigger crashes server
Under some circumstances a combination of VIEWs, subselects with outer
references and PS/SP/triggers could lead to use of uninitialized memory
and server crash as a result.
Fixed by changing the code in Item_field::fix_fields() so that in cases
when the field is a VIEW reference, we first check whether the field
is also an outer reference, and mark it appropriately before returning.
The unsignedness of large integer user variables was not being
properly preserved when they were fed to prepared statements. This was
happening because the unsigned flag wasn't being updated when
the user variable was converted to a parameter.
The solution is to copy the unsigned flag when converting the
user variable to a parameter and to take the unsigned flag into
account when converting the integer to a string.
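A minimal illustration:
    SET @v = 18446744073709551615;  -- largest unsigned 64-bit value
    PREPARE s FROM 'SELECT ?';
    EXECUTE s USING @v;  -- now prints 18446744073709551615 rather
                         -- than a negative signed interpretation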
An out-of-memory error was thrown when the sort buffer size was too small.
This led to user confusion.
Now filesort throws an error message about the sort buffer being too small.
Problem was an incorrect data length in the key_restore function,
resulting in overwriting the search key.
Solution: remove one byte from the length if uneven bits are used.
Problem was that Field_bit used the Field::hash() function, which did not
know about the use of the null byte for storing bits.
This resulted in a wrong length, which was caught by valgrind.
Solution: created a Field_bit::hash() that uses Field_bit::val_int()
and the my_charset_bin collation function hash_sort.
Also use the store function for platform independence.
After changes to the bug fix for bug 26379 (Combination of FLUSH
TABLE and REPAIR TABLE corrupts a MERGE table) the test case
merge-big failed.
Repaired the test case.
Removed tests for INSERT ... SELECT, which is disabled for MERGE.
Test case change only.
The problem is that one cannot create a stored routine if sql_mode
contains NO_ENGINE_SUBSTITUTION or PAD_CHAR_TO_FULL_LENGTH. Also, when
an event is created, the mode is silently lost if sql_mode contains one
of the aforementioned. This was happening because the table definitions
which stored sql_mode values weren't being updated to accept new values
of sql_mode.
The solution is to update, in a backwards compatible manner, the various
table definitions (columns) that store the sql_mode value to take into
account the new possible values. One incompatible change is that if an event
that is being created can't be stored to the mysql.event table, an error
will be raised.
The test cases also ensure that new SQL modes will be added to the mysql.proc
and mysql.event tables; otherwise the tests will fail.
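A hypothetical check:
    SET sql_mode = 'NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH';
    -- Previously failed (or, for events, silently lost the mode);
    -- the mode is now stored intact in mysql.proc / mysql.event:
    CREATE PROCEDURE p() SELECT 1;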
and my_innodb_commit_concurrency global variables.
Type of the my_innodb_autoextend_increment and the
my_innodb_commit_concurrency variables has been changed to
GET_ULONG.
The server handled truncation for the assignment of too-long values
into CHAR/VARCHAR/TEXT columns in different ways when the
truncated characters were spaces:
1. CHAR(N) columns silently ignored end-space truncation;
2. TEXT columns posted a truncation warning/error in
non-strict/strict mode;
3. VARCHAR columns always posted a truncation note,
in any mode.
Space truncation processing has been synchronised across
CHAR/VARCHAR/TEXT columns: the current behavior of VARCHAR
columns has been made the standard.
Binary-encoded string/BLOB columns are not affected.
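For illustration (hypothetical table):
    CREATE TABLE t (c CHAR(3), v VARCHAR(3));
    -- Trailing-space truncation now posts a note for CHAR as well,
    -- as VARCHAR already did:
    INSERT INTO t VALUES ('abc   ', 'abc   ');
    SHOW WARNINGS;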
The problem is that deprecated syntax warnings were not being
suppressed when the stored routine is being parsed for the first
execution. It doesn't make sense to print out deprecated
syntax warnings when the routine is being executed, because this
kind of warning only matters when the routine is being created.
The solution is to suppress deprecated syntax warnings when
parsing the stored routine for loading into the cache (might
mean that the routine is being executed for the first time).
does not use trans tables
There were two issues.
A ROLLBACK statement was recorded in the binlog even though a multi-update
had not modified any non-transactional table.
The reason for this artifact was a false initial value of multi_update::transactional_tables.
Another artifact, explained on the bug page, is that
`ha_autocommit_or_rollback' works differently depending on whether
a transactional engine has been compiled in.
Fixed by setting multi_update::transactional_tables to zero at initialization
time. A multi-update on a non-transactional table won't cause a ROLLBACK in the
binlog with either compilation option.
The second artifact constitutes a self-standing issue (to be reported
separately).
Problem: some collation handlers called an incorrect version
of my_like_range_xxx(), which led to wrong min_str and max_str,
so the LIKE range optimizer threw away good records.
Fix: changed the wrong handlers to call the proper version of
my_like_range_xxx().
When issuing a column-level grant on a table which requires pre-locking, the
server crashed.
The reason behind the crash was that the data structures used by the lock API
weren't properly reinitialized in the case of a column-level grant.
on table creates
The problem was in incompatible syntax for key definition in CREATE
TABLE.
5.0 supports only the following syntax for key definition (see "CREATE
TABLE syntax" in the manual):
{INDEX|KEY} [index_name] [index_type] (index_col_name,...)
While the 5.1 parser supports the above syntax, the "preferred" syntax was
changed to:
{INDEX|KEY} [index_name] (index_col_name,...) [index_type]
The above syntax is used in 5.1 for the SHOW CREATE TABLE output, which
led to dumps generated by 5.1 being incompatible with 5.0.
Fixed by changing the parser in 5.0 to support both 5.0 and 5.1 syntax
for key definition.
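Both forms are now accepted by 5.0 (illustrative):
    -- 5.0 syntax: index_type before the column list
    CREATE TABLE t1 (a INT, KEY k USING BTREE (a));
    -- 5.1 preferred syntax: index_type after the column list
    CREATE TABLE t2 (a INT, KEY k (a) USING BTREE);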
Simple subselects are pulled into upper selects. This operation replaces the
pulled subselect with the first item from the select list of the subselect.
If an alias is defined for a subselect, it is inherited by the replacement item.
As this is done after the fix_fields phase, this alias isn't shown if the
replacement item is a stored function. This happens because the Item_func_sp::make_field
function builds the send field from its result_field and ignores the defined alias.
Now, when an alias is defined, the Item_func_sp::make_field function sets it on
the returned field.
pre-locking.
The crash was caused by an implicit assumption in check_table_access() that
table_list parameter is always a part of lex->query_tables.
When iterating over the passed list of tables, check_table_access() used
to stop only when lex->query_tables_last_not_own was reached.
In case of pre-locking, lex->query_tables_last_own is not NULL and points
to some element of lex->query_tables. When the parameter
of check_table_access() was not part of lex->query_tables, the loop
invariant could never be violated, and a crash would happen when the current
table pointer pointed beyond the end of the provided list.
The fix is to change the signature of check_table_access() to also accept
a numeric limit of loop iterations, similarly to check_grant(), and
supply this limit in all places when we want to check access of tables
that are outside lex->query_tables, or just want to check access to one table.
The problem is that Table_locks_waited was incremented only
when the lock request succeeded. If a thread waiting for the lock
got killed or the lock request was aborted, the variable would
not be incremented, leading to inaccurate values in the variable.
The solution is to increment Table_locks_waited whenever the
lock request is queued. This better reflects the intended behavior
of the variable -- to show how many times a lock was waited for.
Two disjuncts containing equalities of the form key=const1 and key=const2 can
be merged into one if const1 is equal to const2. To check this, the common
collation of the constants was used rather than the collation of the field key.
For example, when the default collation of the constants was case insensitive
while the collation of the field was case sensitive, the two or-ed equality
predicates key='b' and key='B' were incorrectly merged into one, key='b'. As a
result, ref access was used instead of range access and wrong result sets were
returned in many cases.
Fixed the problem by comparing the constants in the or-ed predicates using the
collation of the key field.
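For illustration (hypothetical table):
    CREATE TABLE t (k VARCHAR(10) COLLATE latin1_bin, KEY (k));
    -- With a case-sensitive key collation these are two distinct
    -- ranges; they were wrongly collapsed into one:
    SELECT * FROM t WHERE k = 'b' OR k = 'B';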
"Plugin enum variables can't be set from command line"
fix a crash on the LOCK_plugins mutex when loading plug-ins from the command line.
fix an off-by-one bug when loading multiple plug-ins from the command line.
initialize command-line handling for ENUM and SET plugin variable types.
The problem was that when creating/renaming/dropping users, the statement was
logged regardless of error, even if no data had been changed.
After this patch, create/rename/drop user statements are not written to the
binlog if the statement makes no changes; if the statement does make any
changes, it is logged with the possible error code.
This patch is based on the patch for BUG#29749, which is not pushed
myisamchk always showed Character set: latin1_swedish_ci (8),
regardless of what DEFAULT CHARSET the table had.
When the server created a MyISAM table, it did not copy the
characterset number into the MyISAM create info structure.
Added assignment of charset number to MI_CREATE_INFO.
Bug 33983 (Stored Procedures: wrong end <label> syntax is accepted)
The server used to crash when REPEAT or another control instruction
was used in conjunction with labels and a LEAVE instruction.
The crash was caused by a missing "pop" of handlers or cursors in the
code representing the stored program. When executing the code in a loop,
this missing "pop" would result in a stack overflow, corrupting memory.
Code generation has been fixed to produce the missing h_pop/c_pop
instructions.
Also, the logic checking that labels at the beginning and the end of a
statement are matched was incorrect, causing Bug 33983.
End labels, when used, must match the label used at the beginning of a block.
Bug#25347: mysqlcheck -A -r doesn't repair table marked as crashed
mysqlcheck tested nullness of the engine type to know whether the
"table" is a view or not. That also falsely catches tables that
are severely damaged.
Instead, use SHOW FULL TABLES to test whether a "table" is a view
or not.
(Don't add a new function. Instead, get the original data in a smarter way.)
Made it safe for use against databases from before views appeared.
Add a new variable, m_highest_seen, for when we only peek at the auto_increment NEXTID and do not retrieve it to the cache. Add a new method to check the tupleId before calling the data node.
ndb_restore.result, ndb_restore.test:
Changed test to use information_schema to check auto_increment
DictCache.cpp, Ndb.cpp:
Add a new variable, m_highest_seen, for when we only peek at the auto_increment NEXTID and do not retrieve it to the cache. Add a new method to check the tupleId before calling the data node. When setting the auto_increment value we'll also read up the new value; this is useful if we use the table for the first time in this MySQL server and haven't yet seen the NEXTID value. The kernel will avoid updating, since it already has the value, but will also read up the NEXTID value to ensure we don't need to do this again.
ndb_auto_increment.result:
Updated result file since it was incorrect
The problem occurred when one had a subquery that had an equality X=Y where
Y referred to a named select list expression from the parent select. MySQL
crashed when trying to use the X=Y equality for ref-based access.
Fixed by allowing non-Item_field items in the described case.
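A hypothetical query of the affected shape ('m' is a named select-list expression of the parent select, referenced inside the subquery):
    SELECT t1.a AS m,
           (SELECT t2.b FROM t2 WHERE t2.x = m) AS s
    FROM t1;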
"Update of CSV row incorrect for some BLOBs"
When reading in rows, move BLOB columns into temporary storage not
allocated by the Field_blob class, or else the row update operation will
alter the original row and make MySQL think that nothing has changed.
Also fix incrementing of wrong statistics values.
Fixes:
Bug #18942: DROP DATABASE does not drop an orphan FOREIGN KEY constraint
Fix Bug#18942 by dropping all foreign key constraints at the end of
DROP DATABASE. Usually, by then, there are no foreign constraints
left because all of them are dropped when the relevant tables are
dropped. This code is to ensure that any orphaned FKs are wiped too.
Bug #29157: UPDATE, changed rows incorrect
Return HA_ERR_RECORD_IS_THE_SAME from ha_innobase::update_row() if no
columns were updated.
Bug #32440: InnoDB free space info does not appear in SHOW TABLE STATUS or I_S
Put information about the free space in a tablespace in
INFORMATION_SCHEMA.TABLES.DATA_FREE. This information was previously
available in INFORMATION_SCHEMA.TABLES.TABLE_COMMENT, but MySQL has
removed it from there recently.
The stored value is in kilobytes.
This can be considered a permanent workaround for
http://bugs.mysql.com/32440. "Workaround" because that bug is about the
data missing from TABLE_COMMENT, and that is actually not solved.
and a char-field > 128 exists
CHECK TABLE (non-QUICK) and any form of REPAIR TABLE wrongly rated
records as corrupted under the following conditions:
1. The table has dynamic row format and
2. it has a CHAR like column > 127 bytes (but not VARCHAR)
(for multi-byte character sets this could be less than 127
characters) and
3. it has records with > 127 bytes significant length in that column
(a byte beyond byte position 127 must be non-space).
Affected were the statements CHECK TABLE, REPAIR TABLE, OPTIMIZE TABLE,
ALTER TABLE. CHECK TABLE reported and marked the table as crashed if any
record was present that fulfilled condition 3. The other statements
deleted these records.
The problem was a signed/unsigned compare in MyISAM code. A
char to uchar change became necessary after the big byte to uchar
change.
The ROUND(X, D) function would change the Item::decimals field during
execution to achieve the effect of a dynamic number of decimal digits.
This caused a series of bugs:
Bug #30617:Round() function not working under some circumstances in InnoDB
Bug #33402:ROUND with decimal and non-constant cannot round to 0 decimal places
Bug #30889:filesort and order by with float/numeric crashes server
Fixed by never changing the number of shown digits for DECIMAL when
used with a nonconstant number of decimal digits.
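For illustration (hypothetical table; the second argument is nonconstant):
    CREATE TABLE t (d DECIMAL(10,2), n INT);
    INSERT INTO t VALUES (10.12, 0), (10.12, 1);
    -- The shown scale of the DECIMAL result no longer changes from
    -- row to row:
    SELECT ROUND(d, n) FROM t;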
The name resolution for correlated subqueries and HAVING clauses
failed to distinguish which of the two was being performed when there
was a reference to an outer aliased field.
Fixed by adding a condition that checks whether HAVING clause name
resolution is being performed.
value when inserting into a view.
The mysql_prepare_insert function checks that all fields of the target table
that are directly or indirectly (through a view) specified in the INSERT
statement have a default value. This check can be skipped if the INSERT
statement doesn't mention any insert fields. In the case of a view, this
allows fields that aren't mentioned in the view to bypass the check.
Now the fields of the target table are always checked to have a default value
when the insert goes through a view.
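A hypothetical reproduction:
    CREATE TABLE t (a INT NOT NULL, b INT NOT NULL);
    CREATE VIEW v AS SELECT a FROM t;
    -- 'b' is not part of the view and has no default; this is now
    -- checked instead of slipping through:
    INSERT INTO v VALUES (1);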
columns (default datatype value is assigned).
The mysql_update function has been modified so that trying to set a
NOT NULL field to NULL generates an error, rather than the warning issued
in the set_field_to_null_with_conversions function.
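A minimal illustration (hypothetical table):
    CREATE TABLE t (a INT NOT NULL);
    INSERT INTO t VALUES (1);
    -- Now an error; previously a warning, with 'a' silently set to
    -- the data type's default value (0):
    UPDATE t SET a = NULL;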
When resolving references we need to take into consideration
the view "fields" and allow qualified access to them.
Fixed by extending the reference resolution to process view
fields correctly.
Problem was that the mix of handlers was not consistent between
CREATE and ALTER.
Changed so that it works like this:
- All partitions must use the same engine,
AND it must be the same as the table's.
- If one does NOT specify an engine at the table level,
then one must either NOT specify any engine on any
partition/subpartition OR specify it for ALL partitions/subpartitions.
Note that after a table has been created, the storage engine
is specified for all parts of the table (table/partition/subpartition),
and so when using ALTER, one does not need to specify it (unless one
wants to change the storage engine; then one has to specify it at the
table level).
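A consistent definition, for illustration:
    -- Specify the same engine everywhere, or only at the table
    -- level, or not at all:
    CREATE TABLE t (a INT)
    ENGINE = MyISAM
    PARTITION BY RANGE (a)
      (PARTITION p0 VALUES LESS THAN (10) ENGINE = MyISAM,
       PARTITION p1 VALUES LESS THAN (20) ENGINE = MyISAM);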
server crash.
The filesort implementation has an optimization for subquery execution which
consists of reusing previously allocated buffers. In particular the call to
the read_buffpek_from_file function might be skipped when a big enough buffer
for buffer descriptors (buffpeks) is already allocated. Besides allocating
memory for buffpeks, this function fills the allocated buffer with data read
from disk. Skipping it might lead to using arbitrary memory as fields' data
and, finally, to a crash.
Now the read_buffpek_from_file function is always called. It allocates a
new buffer only when necessary, but always fills it with correct data.
Problem was that there is no support for symlinked files on Windows in
mysqld, so we failed when trying to create them.
Solution: Ignore the DATA/INDEX DIRECTORY clause for partitions and push
a warning (just like for a MyISAM table).
Resetting the query cache by issuing SET GLOBAL query_cache_size=0 caused the server
to crash if the server was concurrently saving a new result set to the query cache. The
reason for this was that the invalidation wasn't waiting for the result writers to
release the block-level locks on the query cache.
to be compiled in
The problem was that on a statically built server an attempt to create
a UDF resulted in a different, but reasonable error ("Can't open shared
library" instead of "UDFs are unavailable with the --skip-grant-tables
option"), which caused a failure for the test case for bug #32020.
Fixed by moving the test case for bug #32020 from skip_grants.test to a
separate test to ensure that it is only run when the server is built
with support for dynamically loaded libraries.
read_buffer_size set on master
BUG#33413 show binlog events fails if binlog has event size of close
to max_allowed_packet
The size of Append_block replication event was determined solely by
read_buffer_size whereas the rest of replication code deals with
max_allowed_packet.
When the former parameter was set larger than the latter, there were
two artifacts: the master could not read events from the binlog, and
SHOW BINLOG EVENTS did not display them.
Fixed with
- fragmenting the used io-cached buffer into pieces, each of size less
than max_allowed_packet (bug#30435);
- incrementing the max_allowed_packet of the thread handling SHOW BINLOG
EVENTS by the maximum estimated replication header size.
Now, every transaction (including autocommit transactions) starts with
a BEGIN and ends with a COMMIT/ROLLBACK in the binlog.
Added a test case, and updated lots of test case result files.
w/ Field_date instead of Field_newdate
Field_date was still used in temp table creation.
Fixed by using Field_newdate consistently throughout the server
except when reading tables defined with older MySQL version.
No test suite is possible because both Field_date and Field_newdate
return the same values in all the metadata calls.
When the server-id was set dynamically, the server_id member of the current thread was not updated.
Now the server_id member of the current thread is updated after the global variable value is changed.
Complementary patch since LOAD DATA INFILE was not covered in
the previous patch.
This patch adds a check so that the slave skip counter is not
decreased to zero if seeing a BEGIN_LOAD_QUERY_EVENT,
APPEND_BLOCK_EVENT, or CREATE_FILE_EVENT since these cannot
end a group. The group is terminated by an EXECUTE_LOAD_QUERY_EVENT
or DELETE_FILE_EVENT.