There was a double-free of the Unique member of Item_func_group_concat.
This was not causing a crash because Unique is a descendant of
Sql_alloc.
Fixed to free the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
Bug#34678 @@debug variable's incremental mode
The problem is that the per-thread debugging settings stack wasn't
being deallocated before thread termination, leaking the stack
memory. The chosen solution is to push a new state if the current
one is set to the initial settings, and to pop it (free) once the
thread finishes.
Fixed a missed case in the patch for Bug#31931.
Also makes Bug#33722 a duplicate of Bug#31931.
Added tests for better coverage.
Replaced some legacy function calls.
documentation
While the manual mentions FRAC_SECOND only for the TIMESTAMPADD()
function, it was also possible to use FRAC_SECOND with DATE_ADD(),
DATE_SUB() and +/- INTERVAL.
Fixed the parser to match the manual, i.e. using FRAC_SECOND for
anything other than TIMESTAMPADD()/TIMESTAMPDIFF() now produces a
syntax error.
Additionally, the patch allows MICROSECOND to be used in TIMESTAMPADD/
TIMESTAMPDIFF and marks FRAC_SECOND as deprecated.
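For illustration, a sketch of the accepted and rejected forms after this change (table-free queries, runnable anywhere):
  SELECT TIMESTAMPADD(MICROSECOND, 1, '2008-01-01');      -- now allowed
  SELECT TIMESTAMPADD(FRAC_SECOND, 1, '2008-01-01');      -- allowed, but deprecated
  SELECT DATE_ADD('2008-01-01', INTERVAL 1 MICROSECOND);  -- allowed
  -- SELECT DATE_ADD('2008-01-01', INTERVAL 1 FRAC_SECOND);  -- now a syntax error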
If setting a system-variable provided by a plug-in failed, no OK or
error was sent in some cases, hanging the client. We now send an error
in the case from the ticket (integer-argument out of range in STRICT
mode). We also provide a semi-generic fallback message for possible
future cases like this where an error is signalled, but no message is
sent to the client. The error/warning handling is unified so it's the
same again for variables provided by plugins and those in the server
proper.
SQL mode PAD_CHAR_TO_FULL_LENGTH affected mysqld's user table too. If
enabled, user name and host were space-padded and no longer matched
the login data of incoming connections.
The patch disregards the pad flag while loading the privileges, so the
ability to log in does not depend on the SQL mode.
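A plausible repro sketch, assuming the mode was set globally before the privilege tables were re-read:
  SET GLOBAL sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
  FLUSH PRIVILEGES;  -- before the fix, User/Host came back space-padded
  -- subsequent logins for existing accounts then failed to match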
In BENCHMARK(count, expr), count could overflow/wrap around.
The patch changes count to a sufficiently large data type and adds a
warning for negative count values.
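A sketch of the affected calls (the exact warning text is not quoted here):
  SELECT BENCHMARK(4294967297, MD5('x'));  -- count no longer wraps around
  SELECT BENCHMARK(-1, MD5('x'));          -- now produces a warning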
log-slave-updates and circular replication
The slave SQL thread could execute one extra event when some events
were skipped by the slave I/O thread (e.g. events originated by the
same server), even though the UNTIL condition requested it to stop before.
This happens because we compare with the end position of the previously
executed event. This is fine when no events were skipped by the slave
I/O thread, as the end position of the previous event equals the start
position of the event to be executed. Otherwise this position equals
the start position of the skipped event.
This is fixed by:
- reading the event to be executed before checking if the UNTIL condition
  is satisfied;
- comparing against the start position of the event to be executed. Since
  we do not have the start position available, we compute it by subtracting
  the event length from the end position (which is available);
- stopping immediately if, at slave SQL thread start time, there are no
  events in the event queue that meet the UNTIL condition, as in this
  case we do not want to wait for the next event.
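For illustration, the kind of UNTIL condition affected (log file name and position are hypothetical):
  START SLAVE UNTIL MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=447;
  -- the SQL thread must stop before executing the event that starts at
  -- position 447, even when the I/O thread skipped preceding events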
between 5.0 and 5.1.
The problem was that in the patch for Bug#11986 it was decided
to store the original query in UTF8 encoding for INFORMATION_SCHEMA.
This approach, however, turned out to be quite difficult to implement
properly. The main problem is preserving the same IS output after
dump/restore.
So, the fix is to roll back to the previous functionality, but also
to fix it to support multi-character-set queries properly. The idea
is to generate the INFORMATION_SCHEMA query from the item tree after
parsing the view declaration. The IS query should:
- be completely in UTF8;
- not contain character set introducers.
For more information, see WL4052.
suite)
Under some circumstances a combination of aggregate functions and
GROUP BY in a SELECT query over a VIEW could lead to incorrect
calculation of the result type of the aggregate function. This in
turn could result in incorrect results, or assertion failures on debug
builds.
Fixed by changing the logic in Item_sum_hybrid::fix_fields() so that
the argument's item is dereferenced before calling its type() method.
The problem is that CREATE VIEW statements inside prepared statements
weren't being expanded during the prepare phase, which led to objects
not being allocated in the appropriate memory arenas.
The solution is to perform the validation of CREATE VIEW statements
during the prepare phase of a prepared statement. The validation
during the prepare phase ensures that transformations of the parsed
tree will use the permanent arena of the prepared statement.
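A minimal sketch of the affected pattern:
  PREPARE stmt FROM 'CREATE VIEW v1 AS SELECT 1';
  EXECUTE stmt;
  DROP VIEW v1;
  EXECUTE stmt;  -- re-execution relies on objects from the statement's permanent arena
  DEALLOCATE PREPARE stmt;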
a table name.
The problem was that fill_defined_view_parts() did not return
an error if a table was going to be altered. That happened if
the table was already in the table cache. In that case,
open_table() returned a non-NULL value (a valid TABLE instance from
the cache).
The fix is to ensure that an error is thrown even if the table
is in the cache.
(This is a backport of the original patch for 5.1)
The problem is that when a stored procedure is being parsed for
the first execution, the body is copied to a temporary buffer
which is discarded some time after the statement is parsed.
During this parsing phase, the rule for CREATE VIEW was
holding a reference to the string being parsed for use during
the execution of the CREATE VIEW statement, leading to invalid
memory access later.
The solution is to allocate and copy the SELECT of a CREATE
VIEW statement using the thread memory root, which is set to
the permanent arena of the stored procedure.
The test case for bug#31048 checks that there is no crash on stack
overrun. But due to different stack sizes on different platforms it failed
on some of them.
The new test case checks that a query with at least 4 levels of subquery
nesting works without a stack overrun and that other levels of nesting
don't cause a crash.
floating point numbers
Some math functions did not check whether the result is a valid number
(i.e. not +/-inf or NaN).
Fixed by validating the result where necessary and returning NULL in
case of an invalid result.
Fixes the following bugs:
- Bug #33349: possible race condition revolving around data dictionary and repartitioning
Introduce retry/sleep logic as a workaround for a transient bug
where ::open fails for partitioned tables randomly if we are using
one file per table.
- Bug #34053: normal users can enable innodb_monitor logging
In CREATE TABLE and DROP TABLE check whether the table in question is one
of the magic innodb_monitor tables and whether the user has enough rights
to mess with it before doing anything else.
- Bug #22868: 'Thread thrashing' with > 50 concurrent conns under an update-intensive workload
- Bug #29560: InnoDB >= 5.0.30 hangs on adaptive hash rw-lock 'waiting for an X-lock'
This is a combination of changes that forward-port the scalability fix applied to 5.0
through r1001.
It reverts changes r149 and r122 (these were 5.1-specific changes made in lieu of
the 5.0 scalability fix).
Then it applies r1001 from 5.0, which is the original scalability fix.
Finally it applies r2082, which fixes an issue with the original fix.
- Bug #30930: Add auxiliary function to retrieve THD::thread_id
Add thd_get_thread_id() function. Also make check_global_access() function
visible to InnoDB under INNODB_COMPATIBILITY_HOOKS #define.
and ps-protocol
Finding a routine should be a transparent operation as
far as the binary log is concerned.
But it was influencing the binary log because of the TIMESTAMP
column in the proc table.
Fixed by preserving and restoring the time_zone usage flag when
searching for a stored routine in the proc table.
breaks replication
NAME_CONST() didn't replicate constant character set and collation
correctly.
With this fix NAME_CONST() inherits collation from the value argument.
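A sketch of the fixed behavior:
  SELECT COLLATION(NAME_CONST('n', _latin1'a' COLLATE latin1_bin));
  -- returns latin1_bin: the collation of the value argument is inherited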
a SELECT doesn't cause ROLLBACK of statem".
The idea of the fix is to ensure that we always commit the current
statement at the end of dispatch_command(). In order to not issue
redundant disk syncs, an optimization of the two-phase commit
protocol is implemented to bypass the two-phase commit if
the transaction is read-only.
The problem is not about intervals and doesn't actually cause a 'full table scan'.
We have an optimization for DISTINCT: when we have
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the JOIN-ed table once we have found one conforming row.
It stopped working in 5.0 because we return NESTED_LOOP_OK if we come upon
that case in evaluate_join_record(), and that doesn't break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
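For illustration, a query of the affected shape (hypothetical tables t1 and t2):
  SELECT DISTINCT t1.a FROM t1, t2 WHERE t1.a = t2.b;
  -- once one matching row of t2 is found for a t1 row, the scan of t2
  -- can stop; the fix restores that early exit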
when executed in version 5
Zero fill is a field attribute only, so we can't always
propagate constants for zerofill fields: the values and
expression results don't have that flag.
Fixed by converting the const value to a string and
using that in const propagation when the context allows it.
Const propagation is disabled for fields with the ZEROFILL flag in
all other cases.
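A sketch of the context (hypothetical table):
  CREATE TABLE t1 (a INT(4) ZEROFILL);
  INSERT INTO t1 VALUES (1);
  SELECT CONCAT(a) FROM t1 WHERE a = 1;
  -- must return '0001': the padding belongs to the field, not to the
  -- constant 1, so the constant can't blindly replace the field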
two timestamp fields.
The actual problem here was that CREATE TABLE allowed a zero
date as a default value for a TIMESTAMP column in NO_ZERO_DATE mode.
The thing is that for the TIMESTAMP data type a specific rule is applied:
  column_name TIMESTAMP == column_name TIMESTAMP DEFAULT 0
whereas for any other date data type
  column_name TYPE == column_name TYPE DEFAULT NULL
The fix is to raise an error when we're in NO_ZERO_DATE mode and
there is a TIMESTAMP column without a default value.
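A sketch of the fixed behavior:
  SET sql_mode = 'NO_ZERO_DATE';
  CREATE TABLE t1 (ts TIMESTAMP);  -- now an error: the implicit DEFAULT 0 is a zero date
  CREATE TABLE t2 (ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP);  -- still accepted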
from storage engine
Federated may crash a server, return a wrong result set, or return
an "ERROR 1030 (HY000): Got error 1430 from storage engine" message
when a local (engine=federated) table has a key over a nullable
column.
The problem was a wrong implementation of the function that creates
the WHERE clause for the remote query from a key.
This is *not* a fix to the bug. I'm only disabling the failing part of
mysqldump.test until the bug is fixed. Whoever fixes it, please re-enable
the test.
for wildcard values.
The server ignored the escape character before wildcards during
the calculation of priority values for sorting of a privilege
list. (Actually, the server counted an escape character as an
ordinary wildcard like % or _.) I.e. a table name template
with a wildcard character like 'tbl_1' had a higher priority in
a privilege list than a concrete table name without wildcards
like 'tbl\_1', and some privileges of 'tbl\_1' were hidden
by privileges for 'tbl_1'.
The get_sort function has been modified to ignore escaped
wildcards as intended.
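For illustration, two database-level grants where the escape matters (hypothetical account):
  GRANT SELECT ON `db\_1`.* TO 'u'@'localhost';  -- exact name db_1
  GRANT SELECT ON `db_1`.*  TO 'u'@'localhost';  -- '_' acts as a wildcard
  -- the escaped (more specific) entry must sort ahead of the pattern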
type conversion.
Instead of copying the whole character string from a temporary
buffer, the server copied a short-lived pointer to that string
into a long-lived structure. That has been fixed.
and
bug#33932 assertion at handle_slave_sql if init_slave_thread() fails
The asserts were caused by:
bug#33931: the thd was deleted while the error-handling code was
executing, plus a missed initialization;
bug#33932: the initialization of the slave_is_running member was missed.
Fixed by relocating the initialization of the mi members and removing the
'delete thd'. This is safe to do, as the deletion happens later, explicitly,
in the caller of init_slave_thread().
Todo: when merging, the test is better moved into suite/bugs for 5.x (when x>0).
behave randomly with mysql_change_user.
The test case had to be moved into the not_embedded_server.test file,
because SHOW GLOBAL STATUS does not work properly in the embedded
server (see bug 34517).
but not collation.
The problem here was that text literals in a view were always
dumped with a character set introducer. That led to losing
collation information.
The fix is to dump the character set introducer only if it was
in the original query. That is now possible because there
is no longer a problem with losing the character set of string
literals in views -- after WL#4052 the view is dumped
in the original character set.
behave randomly with mysql_change_user.
The problem was that global status variables were not updated
in THD::check_user(), so thread statistics were lost after
COM_CHANGE_USER.
The fix is to update the global status variables with the thread's
values before preparing the thread for the new user.
mysql-test/t/variables.test, because:
- mysql-test/suite/rpl/t/rpl_variables.test does not replicate anything,
so should not be in the rpl suite.
- mysql-test/t/variables.test is the place for testing variable-related
problems and features.
- I will soon commit a patch containing a test case that tests
replication of variables. It would be good if I could call the test case
mysql-test/suite/rpl/t/rpl_variables.test. I'm making place for that now.
The problem is that AFTER UPDATE triggers fired only if the
new data was different from the old data in the row. The trigger
should fire regardless of whether there are changes to the data.
The solution is to fire the trigger on UPDATE even if there are
no changes to the value (because the new value is the same as the old one).
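A sketch of the fixed semantics (hypothetical table and counter):
  CREATE TABLE t1 (a INT);
  CREATE TRIGGER t1_au AFTER UPDATE ON t1 FOR EACH ROW SET @fired = @fired + 1;
  INSERT INTO t1 VALUES (1);
  SET @fired = 0;
  UPDATE t1 SET a = 1;  -- same value: the trigger must still fire (@fired = 1)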
or trigger crashes server
Under some circumstances a combination of VIEWs, subselects with outer
references and PS/SP/triggers could lead to use of uninitialized memory
and server crash as a result.
Fixed by changing the code in Item_field::fix_fields() so that in cases
when the field is a VIEW reference, we first check whether the field
is also an outer reference, and mark it appropriately before returning.
The unsignedness of large integer user variables was not being
properly preserved when they were fed to prepared statements. This was
happening because the unsigned flag wasn't being updated when the
user variable was converted to a parameter.
The solution is to copy the unsigned flag when converting the
user variable to a parameter and to take the unsigned flag into
account when converting the integer to a string.
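A sketch of the fixed behavior:
  SET @v = 18446744073709551615;  -- largest unsigned BIGINT
  PREPARE stmt FROM 'SELECT ?';
  EXECUTE stmt USING @v;  -- must print 18446744073709551615, not -1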
An out-of-memory error was thrown when the sort buffer size was too
small, which confused users.
Now filesort throws an error message about the sort buffer being too small.
Problem was an incorrect data length in the key_restore function,
resulting in overwriting the search key.
Solution: remove one byte from the length if uneven bits are used.
Problem was that Field_bit used the Field::hash() function, which did not
know about the use of the null byte for storing bits,
resulting in a wrong length, which was caught by valgrind.
Solution: created a Field_bit::hash() that uses Field_bit::val_int()
and the my_charset_bin collation function hash_sort.
Also use the store function for platform independence.
After changes to the bug fix for bug 26379 (Combination of FLUSH
TABLE and REPAIR TABLE corrupts a MERGE table) the test case
merge-big failed.
Repaired the test case.
Removed tests for INSERT ... SELECT, which is disabled for MERGE.
Test case change only.
The problem is that one cannot create a stored routine if sql_mode
contains NO_ENGINE_SUBSTITUTION or PAD_CHAR_TO_FULL_LENGTH. Also, when
an event is created, the mode is silently lost if sql_mode contains one
of the aforementioned values. This was happening because the table definitions
which store sql_mode values weren't being updated to accept the new values
of sql_mode.
The solution is to update, in a backwards-compatible manner, the various
table definitions (columns) that store the sql_mode value to take into
account the new possible values. One incompatible change is that if an event
that is being created can't be stored in the mysql.event table, an error
will be raised.
The test cases also ensure that new SQL modes will be added to the mysql.proc
and mysql.event tables, otherwise the tests will fail.
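A sketch of a statement that previously failed (hypothetical routine name):
  SET sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
  CREATE PROCEDURE p1() SELECT 1;  -- mysql.proc can now store this sql_mode value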
and my_innodb_commit_concurrency global variables.
The type of the my_innodb_autoextend_increment and
my_innodb_commit_concurrency variables has been changed to
GET_ULONG.
The server handled truncation for assignment of too-long values
into CHAR/VARCHAR/TEXT columns in different ways when the
truncated characters are spaces:
1. CHAR(N) columns silently ignored end-space truncation;
2. TEXT columns posted a truncation warning/error in
   non-strict/strict mode;
3. VARCHAR columns always posted a truncation note in
   any mode.
Space-truncation processing has been synchronised across
CHAR/VARCHAR/TEXT columns: the current behavior of VARCHAR
columns has been propagated as the standard.
Binary-encoded string/BLOB columns are not affected.
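For illustration (hypothetical table; the exact note text is not quoted):
  CREATE TABLE t1 (c CHAR(2), v VARCHAR(2));
  INSERT INTO t1 VALUES ('ab   ', 'ab   ');
  -- both columns now post the same truncation note for the trailing
  -- spaces; previously CHAR ignored them silently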
The problem is that deprecated-syntax warnings were not being
suppressed when the stored routine is being parsed for the first
execution. It doesn't make sense to print out deprecated-syntax
warnings when the routine is being executed, because this
kind of warning only matters when the routine is being created.
The solution is to suppress deprecated-syntax warnings when
parsing the stored routine for loading into the cache (which might
mean that the routine is being executed for the first time).
does not use trans tables
There were two issues.
A ROLLBACK statement was recorded in the binlog even though a multi-update
had not modified any non-transactional table.
The reason for this artifact was a false initial value of
multi_update::transactional_tables.
Yet another artifact explained on the bug page is that
`ha_autocommit_or_rollback' works differently depending on whether
a transactional engine has been compiled in.
Fixed by setting multi_update::transactional_tables to zero at
initialization time. A multi-update on a non-transactional table won't
cause a ROLLBACK in the binlog with either compilation option.
The second artifact mentioned constitutes a self-standing issue (to be
reported separately).
Problem: some collation handlers called an incorrect version
of my_like_range_xxx(), which led to a wrong min_str and max_str,
so the LIKE range optimizer threw away good records.
Fix: changed the wrong handlers to call the proper version of
my_like_range_xxx().
When issuing a column-level grant on a table which requires pre-locking,
the server crashed.
The reason behind the crash was that the data structures used by the lock
API weren't properly reinitialized in the case of a column-level grant.
on table creates
The problem was incompatible syntax for key definitions in CREATE
TABLE.
5.0 supports only the following syntax for key definition (see "CREATE
TABLE syntax" in the manual):
{INDEX|KEY} [index_name] [index_type] (index_col_name,...)
While 5.1 parser supports the above syntax, the "preferred" syntax was
changed to:
{INDEX|KEY} [index_name] (index_col_name,...) [index_type]
The above syntax is used in 5.1 for the SHOW CREATE TABLE output, which
led to dumps generated by 5.1 being incompatible with 5.0.
Fixed by changing the parser in 5.0 to support both 5.0 and 5.1 syntax
for key definition.
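For illustration, both orderings now parse in 5.0 (using the MEMORY engine, where an index_type is typical):
  CREATE TABLE t1 (a INT, KEY k1 USING HASH (a)) ENGINE=MEMORY;  -- 5.0 order
  CREATE TABLE t2 (a INT, KEY k2 (a) USING HASH) ENGINE=MEMORY;  -- 5.1 order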
Simple subselects are pulled up into outer selects. This operation replaces
the pulled subselect with the first item from the select list of the subselect.
If an alias is defined for the subselect, it is inherited by the replacement item.
As this is done after the fix_fields phase, this alias isn't shown if the
replacement item is a stored function. This happens because the
Item_func_sp::make_field function builds the send field from its
result_field and ignores the defined alias.
Now, when an alias is defined, the Item_func_sp::make_field function sets it
on the returned field.
pre-locking.
The crash was caused by an implicit assumption in check_table_access() that
the table_list parameter is always a part of lex->query_tables.
When iterating over the passed list of tables, check_table_access() used
to stop only when lex->query_tables_last_not_own was reached.
In case of pre-locking, lex->query_tables_last_own is not NULL and points
to some element of lex->query_tables. When the parameter
of check_table_access() was not part of lex->query_tables, the loop invariant
could never be violated, and a crash would happen when the current table
pointer pointed beyond the end of the provided list.
The fix is to change the signature of check_table_access() to also accept
a numeric limit of loop iterations, similarly to check_grant(), and to
supply this limit in all places where we want to check access to tables
that are outside lex->query_tables, or just want to check access to one table.
The problem is that Table_locks_waited was incremented only
when the lock request succeeded. If a thread waiting for the lock
got killed or the lock request was aborted, the variable would
not be incremented, leading to inaccurate values in the variable.
The solution is to increment Table_locks_waited whenever the
lock request is queued. This better reflects the intended behavior
of the variable -- to show how many times a lock was waited for.
Two disjuncts containing equalities of the form key=const1 and key=const2 can
be merged into one if const1 is equal to const2. To check this, the common
collation of the constants was used rather than the collation of the field key.
For example, when the default collation of the constants was case insensitive
while the collation of the field was case sensitive, the two OR-ed equality
predicates key='b' and key='B' were incorrectly merged into one (key='b'). As a
result, ref access was used instead of range access and wrong result sets were
returned in many cases.
Fixed the problem by comparing the constants in the OR-ed predicates using the
collation of the key field.
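A sketch of the affected pattern (hypothetical table):
  CREATE TABLE t1 (f VARCHAR(10) COLLATE latin1_bin, KEY (f));
  SELECT * FROM t1 WHERE f = 'b' OR f = 'B';
  -- with the case-sensitive key collation the two predicates must not
  -- be merged; range access over both values is required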
"Plugin enum variables can't be set from command line"
Fix a crash on the LOCK_plugins mutex when loading plug-ins from the command line.
Fix an off-by-one bug when loading multiple plug-ins from the command line.
Initialize command-line handling for the ENUM and SET plugin variable types.
The problem was that when creating/renaming/dropping users, the statement
was logged regardless of error, even if no data had been changed.
After this patch, CREATE/RENAME/DROP USER statements are not written to the
binlog if the statement makes no changes; if the statement does make any
changes, the statement is logged with the possible error code.
This patch is based on the patch for BUG#29749, which is not pushed.
myisamchk always showed "Character set: latin1_swedish_ci (8)",
regardless of what DEFAULT CHARSET the table had.
When the server created a MyISAM table, it did not copy the
character set number into the MyISAM create-info structure.
Added assignment of the charset number to MI_CREATE_INFO.
Bug 33983 (Stored Procedures: wrong end <label> syntax is accepted)
The server used to crash when REPEAT or another control instruction
was used in conjunction with labels and a LEAVE instruction.
The crash was caused by a missing "pop" of handlers or cursors in the
code representing the stored program. When executing the code in a loop,
this missing "pop" would result in a stack overflow, corrupting memory.
Code generation has been fixed to produce the missing h_pop/c_pop
instructions.
Also, the logic checking that labels at the beginning and the end of a
statement are matched was incorrect, causing Bug 33983.
End labels, when used, must match the label used at the beginning of a block.
Bug#25347: mysqlcheck -A -r doesn't repair table marked as crashed
mysqlcheck tests the nullness of the engine type to know whether a
"table" is a view or not. That also falsely catches tables that
are severely damaged.
Instead, use SHOW FULL TABLES to test whether a "table" is a view
or not.
(Don't add a new function. Instead, get the original data in a smarter way.)
Make it safe for use against databases from before views appeared.
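For illustration, the distinguishing output (hypothetical database contents):
  SHOW FULL TABLES IN test;
  -- Tables_in_test | Table_type
  -- t1             | BASE TABLE
  -- v1             | VIEW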
Add a new variable m_highest_seen for when we only peek at the auto_increment NEXTID and do not retrieve it to the cache. Add a new method to check the tupleId before calling the data node.
ndb_restore.result, ndb_restore.test:
Changed test to use information_schema to check auto_increment
DictCache.cpp, Ndb.cpp:
Add a new variable m_highest_seen for when we only peek at the auto_increment NEXTID and do not retrieve it to the cache. Add a new method to check the tupleId before calling the data node. When setting the auto_increment value we also read up the new value; this is useful if we use the table for the first time in this MySQL server and haven't yet seen the NEXTID value. The kernel will avoid updating since it already has the value, but will also read up the NEXTID value to ensure we don't need to do this again.
ndb_auto_increment.result:
Updated result file since it was incorrect