This is an addition to the fix for bug#21617. Valgrind reports an error when
opening a merge table that has underlying tables with fewer indexes than
the merge table itself.
Copy at most min(file->keys, table->key_parts) elements from the rec_per_key array.
This fixes problems when a merge table and its subtables have different numbers of keys.
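A minimal sketch of the scenario, with hypothetical table names (not from the original test case):
  CREATE TABLE t1 (a INT, b INT, KEY(a)) ENGINE=MyISAM;
  -- The MERGE table declares more keys than its underlying table.
  CREATE TABLE m1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MERGE UNION=(t1);
  SELECT * FROM m1;  -- before the fix, opening m1 read past t1's shorter rec_per_key array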
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return the value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by the current
statement if the call happens after the generation, as in:
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
Though this is not a storage-engine-specific problem, I was able to
repeat it with the BDB and NDB engines only, which was the reason to
add a test case to ndb_update.test. As a result, different bad things
could happen:
BDB removed duplicate rows, which is not expected.
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE,
as is done in single-table UPDATE.
The executing code had a safety assertion so that it refused to free Items
that it didn't create. However, there is a case (undefined user variables)
which would put Items into the list to be freed.
Instead, do something that is more risky, in the expectation that the code will
be refactored soon, as Kostja wants to do: remove the assertions from
prepare() and execute(), and put one assertion at a higher level, before
stmt->set_params_from_vars(), which may then create new to-be-freed Items.
- bug #11655 "Wrong time is returning from nested selects - maximum time exists"
- input and output TIME values were not validated properly in several conversion functions
- bug #20927 "sec_to_time treats big unsigned as signed"
- integer overflows were not checked in several functions; as a result, input values like 2^32 or 3600*2^32 were treated as 0
- BIGINT UNSIGNED values were treated as SIGNED in several functions
- in cases where both input string truncation and an out-of-range TIME value occur, only a 'truncated incorrect time value' warning was produced
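Hedged examples of the listed cases (the exact pre-fix output is not quoted here, only the misbehaviour named above):
  SELECT SEC_TO_TIME(4294967296);            -- 2^32 seconds: was treated as 0
  SELECT SEC_TO_TIME(3600 * 4294967296);     -- 3600*2^32: likewise treated as 0
  SELECT SEC_TO_TIME(18446744073709551615);  -- BIGINT UNSIGNED: was treated as SIGNED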
Set a flag when a SHOW command is parsed, and check it in log_slow_statement(). SHOW commands are not counted as slow queries, even if they use table scans.
(race condition)
It was possible for one thread to interrupt a Data Definition Language
statement and thereby get messages to the binlog out of order. Consider:
Connection 1: Drop Foo x
Connection 2: Create or replace Foo x
Connection 2: Log "Create or replace Foo x"
Connection 1: Log "Drop Foo x"
Local end would have Foo x, but the replicated slaves would not.
The fix for this is to wrap each such DDL statement and its logging in the same mutex.
Since we already use mutexes for the various parts of altering the server,
this only entails moving the logging events down close to the action, inside
the mutex protection.
invocations of LAST_INSERT_ID.
Reading of LAST_INSERT_ID() inside a stored function wasn't noticed by the caller,
and no LAST_INSERT_ID_EVENT was issued for the binary log.
The solution is to add THD::last_insert_id_used_bin_log, which is much
like THD::last_insert_id_used, but is reset only for upper-level
statements. This new variable is used to issue LAST_INSERT_ID_EVENT.
- Type casting was not consistent: when adding a DATE type with
  a WEEK interval, the result type was DATETIME and not DATE, as is the
  norm.
- By changing the order of the internal date enumerations, the deviant
  type casting is resolved (Item_date_add_interval::fix_length_and_dec(),
  which determines the result type for this operation, assumes that addition
  of any interval with value <= INTERVAL_DAY to a date value will result
  in a date). There are two independent places to change:
  interval_names[] and interval_type.
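A short illustration of the corrected cast (table and column names are hypothetical):
  CREATE TABLE t1 (d DATE);
  INSERT INTO t1 VALUES ('2006-01-01');
  SELECT d + INTERVAL 1 WEEK FROM t1;  -- now a DATE, not a DATETIME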
Non-upper-level INSERTs (the ones in the body of a stored procedure,
stored function, or trigger) into a table that has an AUTO_INCREMENT
column didn't affect the result of LAST_INSERT_ID() on this level.
The problem was introduced with the fix of bug 6880, which in turn was
introduced with the fix of bug 3117, where the current insert_id value was
remembered on the first call to LAST_INSERT_ID() (bug 3117) and was
returned from that function until it was reset before the next
_upper-level_ statement (bug 6880).
The fix for bug#21726 brings back the behaviour of version 4.0, and
implements the following: remember the insert_id value at the beginning
of the statement or expression (at that point it equals
the first insert_id value generated by the previous statement), and
return that remembered value from LAST_INSERT_ID() or @@LAST_INSERT_ID.
Thus, the value returned by LAST_INSERT_ID() is not affected by values
generated by current statement, nor by LAST_INSERT_ID(expr) calls in
this statement.
Version 5.1 does not have this bug (it was fixed by WL 3146).
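A hedged sketch of the restored behaviour, reusing the table shape from the example above:
  CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
  INSERT INTO t1 VALUES (NULL, 0);                 -- generates id 1
  INSERT INTO t1 VALUES (NULL, LAST_INSERT_ID());  -- j is set to 1, the value
                                                   -- remembered at statement start
  SELECT LAST_INSERT_ID();                         -- returns 2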
The fix for bug 7894 replaces a field in a non-aggregate function with an item
reference if such a field was specified in the GROUP BY clause, in order to
get a correct result.
When ROLLUP is involved this led to a wrong result, because the value of such a
field is obtained through a copy function, and copying happens after the function
evaluation.
Such replacement isn't needed if grouping is also done by such a function.
The change_group_ref() function now isn't called for a function present in
the group list.
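A hypothetical query shape for the case where the replacement is now skipped, because grouping is done by the very same function:
  SELECT a + 1, SUM(b) FROM t1 GROUP BY a + 1 WITH ROLLUP;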
After the patch for bug 21698, equality propagation stopped
working for BETWEEN and IN predicates with STRING arguments.
This changeset completes the solution of the above patch.
Problem: for character sets having mbmaxlen == 2,
any ALTER TABLE changed a TEXT column type to MEDIUMTEXT,
due to a wrong "internal length to create length" formula.
Fix: removing rounding code introduced in early 4.1 times,
which is not correct anymore.
Also adding a multibyte-aware VARCHAR copying function, to put a correct column prefix,
taking into account the number of characters (instead of just limiting the number of bytes).
For example, for a KEY(col(3)) on a UTF8 column, when copying the string 'foo bar foo'
we should put only the 3 leftmost characters: 'foo'.
9 characters were incorrectly put before this fix.
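A minimal sketch of the prefix-key case just described (table and column names are hypothetical):
  CREATE TABLE t1 (col VARCHAR(32) CHARACTER SET utf8, KEY(col(3)));
  INSERT INTO t1 VALUES ('foo bar foo');  -- the key prefix must hold the 3 leftmost
                                          -- characters ('foo'), not 9 as before the fix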
We use the condition from CHECK OPTION twice while handling the UPDATE command.
First we construct an 'update_cond' AND 'option_cond'
condition to select the records to be updated, then we check the
'option_cond' for the updated row.
The problem is that the first 'AND' condition is optimized during the 'select',
which can break the 'option_cond' structure, so it becomes unusable for
the second use, checking the updated row.
A possible solution is either to use a copy of the condition in the first
use, or to make the optimization less traumatic for the operands.
I picked the first one.
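A hedged illustration of the double use (the view and table are hypothetical):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1 WHERE a < 10 WITH CHECK OPTION;
  -- 'a < 10' is ANDed into the select condition to find the rows, and is then
  -- checked again against each updated row; optimizing the first copy must not
  -- corrupt the second.
  UPDATE v1 SET a = a + 1;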
On an INSERT into an updatable but non-insertable view, an error message was
issued stating that the view was not updatable. This could confuse the
user.
A new error message is introduced. It is shown when a user tries to insert
into a non-insertable view.
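A hypothetical example of such a view (the exact insertability rules are assumed here, not quoted from the patch): v1 is updatable through column a, but not insertable because column c is a derived expression.
  CREATE TABLE t1 (a INT, b INT);
  CREATE VIEW v1 AS SELECT a, b + 1 AS c FROM t1;
  INSERT INTO v1 (a) VALUES (1);  -- now reports the new "non-insertable" error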
while space allocation
Under some circumstances the DISTINCT clause can be converted to grouping.
In such cases grouping is performed by all items in the select list.
If an ORDER BY clause is present, then items from it are prepended to the group list.
But the case with ORDER BY wasn't taken into account when allocating the
array for sum functions. This led to memory corruption and a crash.
The JOIN::alloc_func_list() function now allocates additional space if an
ORDER BY clause is specified and the DISTINCT -> GROUP BY optimization is
possible.
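A hedged sketch of the query shape involved; whether DISTINCT is actually converted to grouping depends on the optimizer, so this only mirrors the description above:
  -- DISTINCT may be converted to grouping over the select-list items,
  -- with the ORDER BY item prepended to the group list.
  SELECT DISTINCT a, b FROM t1 ORDER BY c;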
create_tmp_table()".
The fix for bug 21787 "COUNT(*) + ORDER BY + LIMIT returns wrong
result" introduced valgrind warnings which occurred during execution
of information_schema.test and sp-prelocking.test in version 5.0.
There were no user-visible effects.
The latter fix made create_tmp_table() dependent on the
THD::lex::current_select value. Valgrind warnings occurred when this
function was executed and the THD::lex::current_select member pointed
to an uninitialized SELECT_LEX instance.
This fix tries to remove this dependency by moving some logic
outside of the create_tmp_table() function.
If mysqld is linked against a system-installed zlib (which is likely compiled w/o
LFS) and an archive table exceeds 2G, mysqld will likely be terminated with SIGXFSZ.
Prior to the actual write, perform a check whether there is space in the data file.
This fixes the abnormal process termination with SIGXFSZ.
No test case for this bugfix.
Post-review fixes as indicated by Serg.
Manual testing of the error cases was done in 5.0, thanks to the support for
DBUG_EXECUTE_IF to insert errors.
Unable to write a test case for mysql-test until 5.1, which supports setting
debug options at runtime.
The function receives an exactly-sized buffer (not a NUL-terminated C string)
and passes it into a printf function to be interpreted with "%s".
Instead, create an intermediate String object, and copy the data into it,
and pass in a pointer to the String's NUL-terminated buffer.
There was a possible stack overrun in an edge case which handles an invalid body of
a SP in mysql.proc. That would be the case when mysql.proc has been changed
manually. Though, due to bug 21513, it can be exploited without having access
to mysql.proc, only being able to create a stored routine.
Re-execution of a parametrized prepared statement or a stored routine
with a SELECT that uses a LEFT JOIN with a second table having only one row
could yield an incorrect result.
The problem appeared only for left joins with a second table having only
one row (aka a const table) and equality conditions in the ON or WHERE clauses
that depend on the argument passed. Once the condition was false for the
second, const table, a NULL row was created for it, and any field involved
got the NULL-value flag, which then was never reset.
The cause of the problem was that Item_field::null_value could be set
without being reset for re-execution. The solution is to reset
Item_field::null_value in Item_field::cleanup().
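A hedged repro shape (table names and the condition are hypothetical):
  -- t2 holds exactly one row, so it is treated as a const table.
  PREPARE s FROM 'SELECT t1.a, t2.b FROM t1 LEFT JOIN t2 ON t2.b = ?';
  SET @p = 0; EXECUTE s USING @p;  -- condition false: NULL-complemented t2 row
  SET @p = 1; EXECUTE s USING @p;  -- before the fix, t2.b could still read as NULL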
The STACK_MIN_SIZE is currently set to 8192, when we actually need
(empirically discovered) 9236 bytes to raise a fatal error, on Ubuntu
Dapper Drake, libc6 2.3.6-0ubuntu2, Linux kernel 2.6.15-27-686, on x86.
I'm taking that as a new lower bound, plus 100B of wiggle-room for sundry
word sizes and stack behaviors.
The added test verifies in a cross-platform way that there are no gaps
between the space that we think we need and what we actually need to report
an error.
DOCUMENTERS: This also adds "let" to the mysqltest commands that evaluate
an argument to expand variables therein. (Only right of the "=", of course.)
The presence of a subquery in the ON expression of a join
should not block merging the view that contains this join.
Before this patch such views were converted
into temporary-table views.
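A hypothetical view of the shape concerned, which is now merged rather than materialized:
  CREATE VIEW v1 AS
    SELECT t1.a, t2.b
    FROM t1 JOIN t2 ON t2.a = (SELECT MAX(x) FROM t3);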
an ALL/ANY quantified subquery in HAVING.
The Item::split_sum_func2 method should not create an Item_ref
for objects of any class derived from Item_subselect.
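A hedged example of the query class named above (tables are hypothetical):
  -- An ALL-quantified subquery in HAVING; split_sum_func2 must not wrap the
  -- Item_subselect in an Item_ref.
  SELECT a, SUM(b) FROM t1 GROUP BY a
  HAVING SUM(b) > ALL (SELECT SUM(b) FROM t2);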
- The definition of the result type of a date function didn't
  include INTERVAL_WEEK.
- This patch adds an explicit test for INTERVAL_WEEK, which results
  in the result type for an Item_date_add_interval operation
  being DATE rather than DATETIME when one parameter is
  INTERVAL_WEEK.
Item_substr's results are improperly stored in a temporary table due to
a wrongly calculated max_length value for multi-byte charsets if two
arguments are specified.
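A hedged query shape that sends a two-argument SUBSTRING() result through a temporary table (DISTINCT is just one way to force that; names are hypothetical):
  SELECT DISTINCT SUBSTRING(col, 2) FROM t1;  -- max_length must respect the
                                              -- multi-byte charset of col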
this key does not stop" (version for 5.0 only).
An UPDATE statement whose WHERE clause used a key, and which invoked a trigger
that modified a field in this key, ran indefinitely.
This problem occurred because, in cases when the UPDATE statement was
executed in update-on-the-fly mode (in which a row is updated right
during evaluation of the select for the WHERE clause), the new version of
the row became visible to the select representing the WHERE clause and was
updated again and again.
We already solve this problem for UPDATE statements which do not
invoke triggers by detecting the fact that we are going to update a
field in the key used for scanning, and performing the update in two steps:
during the first step we gather information about the rows to be
updated, and then we do the actual updates. We also do this for
MULTI-UPDATE, and in its case we even detect the situation when such
fields are updated in triggers (actually we simply assume that
we always update fields used in the key if we have a before-update
trigger).
The fix simply extends this check, which is done in the check_if_key_used()/
QUICK_SELECT_I::check_if_keys_used() routine/method, in such a way that
it also detects cases when a field used in the key is updated in a trigger.
As a nice side-effect we have a more precise, and thus performance-wise
more optimal, check for MULTI-UPDATE.
Also check_if_key_used()/QUICK_SELECT_I::check_if_keys_used() were
renamed to is_key_used()/QUICK_SELECT_I::is_keys_used() in order to
better reflect that they are boolean predicates.
Note that this check is implemented in a much more elegant way in 5.1,
so this ChangeSet will be null-merged to mysql-5.1.
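A hedged repro sketch of the endless-update scenario (schema and trigger are hypothetical):
  CREATE TABLE t1 (a INT, b INT, KEY(a));
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW SET NEW.a = NEW.a + 1;  -- the trigger modifies the scanned key
  UPDATE t1 SET b = 0 WHERE a > 0;  -- before the fix, updated rows could re-enter
                                    -- the key scan again and again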
Bugs fixed:
- Bug #21638: InnoDB: possible segfault in page_copy_rec_list_end_no_locks
If prefix_len is specified, write it to the insert buffer instead of type->len
- bug #10746: InnoDB: Error: stored_select_lock_type is 0 inside ::start_stmt()!
INSERT ... SELECT uses LOCK_NONE in innodb_locks_unsafe_for_binlog mode, so do
not check for LOCK_X or LOCK_S in start_stmt()
Any default value for an ENUM field over a UCS2 charset was corrupted
when we put it into the frm file, as it had been overwritten by its
HEX representation.
To fix it, we now save a copy of the structure that represents the ENUM
type, and when putting the default values we use this copy.
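A minimal hedged repro (names are hypothetical):
  CREATE TABLE t1 (c ENUM('a','b') CHARACTER SET ucs2 DEFAULT 'b');
  SHOW CREATE TABLE t1;  -- before the fix, the default could come back in HEX form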
that returns the results of aggregation by GROUP_CONCAT.
The crash was due to an overflow of the field
sortorder->length.
The fix prevents this overflow, exploiting the fact that the
value of sortorder->length cannot be greater than the value of
thd->variables.max_sort_length.
Bug#20627 - INSERT DELAYED does not honour auto_increment_* variables
INSERT DELAYED ignored an explicitly set INSERT_ID and session
specific auto_increment_* variables.
The problem was that the inserts are done by a system thread,
which does not have access to the session variables of the user
thread.
Following a proposal by Guilhem, I fixed it so that the variables are
copied to the data structure for every delayed row. The system
thread sets its session variables from these values.
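A hedged illustration of the settings that the delayed-insert thread now honours:
  CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
  SET SESSION auto_increment_increment = 10, auto_increment_offset = 5;
  SET INSERT_ID = 100;
  INSERT DELAYED INTO t1 VALUES (NULL);  -- the values above are now copied with
                                         -- the row and applied by the system thread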
Fixed a confusing error message from the storage engine when
it fails to open an underlying table. The error message is issued
when a table is _opened_ (not when it is created).
statement that uses an aggregating IN subquery with a
HAVING clause.
A wrong order of the call of split_sum_func2 for the HAVING
clause of the subquery and the transformation of the
subquery resulted in the creation of an AND/OR structure
that could not be restored for a re-execution of the prepared
statement.
containing a select statement that uses an aggregating IN subquery.
Added a parameter to the function fix_prepare_information
to correctly restore the HAVING clause for the second execution.
Saved the AND/OR structure of the HAVING conditions at the proper moment,
before any calls of split_sum_func2 that could modify the HAVING structure
by adding new Item_ref objects. (These additions are produced not with
the statement mem_root, but rather with the execution mem_root.)
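A hedged shape of the failing statement class (tables, columns, and the parameter placement are hypothetical):
  PREPARE s FROM
    'SELECT a FROM t1
     WHERE a IN (SELECT MAX(b) FROM t2 GROUP BY c HAVING SUM(b) > ?)';
  SET @v = 10;
  EXECUTE s USING @v;
  EXECUTE s USING @v;  -- the second execution needs the saved HAVING structure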
Removed the changes to Item_func_between::fix_length_and_dec() made in the fix for bug#16377.
query_cache.result:
Corrected a test case after removing the fix for bug#16377.
The cause of the bug was an incomplete fix for bug 18080.
The problem was that setup_tables() unconditionally reset the
name resolution context to its 'tables' argument, which pointed
to the first table of an SQL statement.
The bug fix limits resetting of the name resolution context in
setup_tables() to the cases when the context was not set
by earlier parser/optimizer phases.
Only MyISAM tables locked with LOCK TABLES ... WRITE were affected.
A query that is optimized with index_merge doesn't reflect rows
inserted within LOCK TABLES.
MyISAM doesn't flush its state within LOCK TABLES, and the index_merge
optimization creates a copy of the handler, which thus gets an
outdated MyISAM state.
A new handler->clone() method is introduced to fix this problem.
For non-MyISAM storage engines it allocates a handler and opens
it with ha_open(). For MyISAM it additionally copies the MyISAM state
pointer to the cloned handler.
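A hedged repro shape (schema hypothetical; the OR over two indexed columns is just one way to get an index_merge plan):
  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MyISAM;
  LOCK TABLES t1 WRITE;
  INSERT INTO t1 VALUES (1, 2);
  SELECT * FROM t1 WHERE a = 1 OR b = 2;  -- before the fix, the cloned handler
                                          -- saw the state from before the INSERT
  UNLOCK TABLES;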
The problem was that if after FLUSH TABLES WITH READ LOCK the user
issued DROP/ALTER PROCEDURE/FUNCTION the operation would fail (as
expected), but after UNLOCK TABLE any attempt to execute the same
operation would lead to the error 1305 "PROCEDURE/FUNCTION does not
exist", and an attempt to execute any stored function will also fail.
This happened because under FLUSH TABLES WITH READ LOCK we couldn't open
and lock mysql.proc table for update, and this fact was erroneously
remembered by setting mysql_proc_table_exists to false, so subsequent
statements believed that mysql.proc doesn't exist, and thus that there
are no functions and procedures in the database.
As a solution, we remove mysql_proc_table_exists flag completely. The
reason is that this optimization didn't work most of the time anyway.
Even if the open of mysql.proc failed for some reason when we were trying to
call a function or a procedure, we were setting mysql_proc_table_exists
back to true to force a table reopen for the sake of producing the same
error message (the open can fail for a number of reasons). The solution
could have been to remember the reason why the open failed, but that's a lot
of code for optimizing a rare case. Hence we simply remove this
optimization.
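A hedged repro sequence following the description above (p1 is assumed to exist):
  FLUSH TABLES WITH READ LOCK;
  DROP PROCEDURE p1;  -- fails, as expected under the global read lock
  UNLOCK TABLES;
  DROP PROCEDURE p1;  -- previously failed with error 1305 (PROCEDURE does not
                      -- exist) because mysql_proc_table_exists had been cleared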
A communication packet can also be a binlog event sent from the master to the slave.
For it to be sent by the master dump thread and accepted by the slave I/O thread, both
have to have a max_allowed_packet value bigger than the one the client connection had.
The patch adds an estimation of the MAX possible replication header size for events
in the binlog that embed a user query. Only events of query_log_event type, i.e.
just plain queries, require attention.
0xFF is the internal separator for SET|ENUM names.
If this symbol is present in SET|ENUM names, then we replace it with
',' (the deprecated separator for SET|ENUM names) during frm creation,
and restore it to 0xFF during frm opening.
wrong results
Mark the containing Item(s) (usually an Item_subselect descendant) of
a subselect as containing aggregate functions if it has references
to aggregate functions that are calculated outside its context.
This tells end_send_group() not to make a copy of an Item_subselect
descendant in the select list, and causes the correct value to be returned.
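A hedged example of the query class: the scalar subselect references an aggregate computed in the outer select (schema hypothetical):
  -- SUM(a) is aggregated in the outer context; the wrapping subselect must be
  -- marked as containing an aggregate function.
  SELECT b, (SELECT SUM(a)) FROM t1 GROUP BY b;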
- Honor unsigned_flag in the corresponding functions
- Use compare_int_signed_unsigned()/compare_int_unsigned_signed() instead of explicit comparison in GREATEST() and LEAST()
VALUES() was considered a constant. This caused replacing
(or pre-calculating) it using uninitialized values before the actual
execution took place.
Mark it as a non-constant (still not dependent on tables) to prevent
the pre-calculation.
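A hedged example of where VALUES() must not be pre-calculated (table shape hypothetical, with a unique key on a):
  -- VALUES(b) stands for the value that would have been inserted; treating it
  -- as a constant pre-calculated it from uninitialized data.
  INSERT INTO t1 (a, b) VALUES (1, 2)
    ON DUPLICATE KEY UPDATE b = VALUES(b) + 1;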
Corrected a test case after the removal of the fix for bug#16377.
type_date.test:
Corrected a test case after the removal of the fix for bug#16377.
item_cmpfunc.cc:
Removed the changes to agg_cmp_type() made in the fix for bug#16377.
equal constant under any circumstances.
In fact this substitution can be allowed if the field is
not of a string type, or if the field reference serves as
an argument of a comparison predicate.
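A hedged illustration of the two contexts (reading the string-field hazard as a padding/collation issue is my interpretation, not a quote from the patch):
  SELECT * FROM t1 WHERE a = 'abc' AND a > 'abb';  -- substitution allowed: the
                                                   -- field is a comparison argument
  SELECT CONCAT(a) FROM t1 WHERE a = 'abc';        -- string field outside a
                                                   -- comparison: not substituted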
Select_type in the EXPLAIN output for the query SELECT * FROM t1 was
'SIMPLE', while for the query SELECT * FROM v1, where the view v1
was defined as SELECT * FROM t1, the EXPLAIN output contained 'PRIMARY'
for the select_type column.
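The same contrast as a runnable sketch (hypothetical names):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT * FROM t1;
  EXPLAIN SELECT * FROM t1;  -- select_type: SIMPLE
  EXPLAIN SELECT * FROM v1;  -- previously PRIMARY; now consistent with the base query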
In 5.0 we made LOAD DATA INFILE autocommit in all engines, while
only NDB wanted that. Users and trainers complained that it affected
InnoDB and was a change compared to 4.1 where only NDB autocommitted.
To revert to the behaviour of 4.1, we move the autocommit logic out of mysql_load() into
ha_ndbcluster::external_lock().
The result is that if the target is an NDB table, LOAD DATA INFILE commits
all uncommitted NDB changes as well as its own changes, but it does not
affect other engines.
Note: even though there is no "commit the full transaction at end"
anymore, LOAD DATA INFILE stays disabled in routines (re-entrancy
problems, per a comment of Pem).
Note: ha_ndbcluster::has_transactions() does not give reliable results
because it says "yes" even if transactions are disabled in this engine...
1003: Incorrect table name
In a multi-table DELETE the set of tables to delete from actually
references the tables in the other list, e.g.:
DELETE alias_of_t1 FROM t1 alias_of_t1 WHERE ....
is a valid statement.
So we must turn off the syntactical validity check of the table name for alias_of_t1,
because it's not a table name (even if it looks like one).
In order to do that we add a special flag (TL_OPTION_ALIAS) to
disable the name checking for the aliases in multi-table DELETE.
21913: DATE_FORMAT() crashes the MySQL server if used through the mysql-connector-j driver.
Variable character_set_results can legally be NULL (for "no conversion").
This could result in a NULL dereference that crashed the server. Fixed.
(Also ran some additional precursory tests to see whether I could break
anything else, but no breakage so far.)
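A hedged repro shape (the exact crashing sequence depended on the client setup described in the report):
  SET character_set_results = NULL;       -- legal: means "no conversion"
  SELECT DATE_FORMAT(NOW(), '%W %M %Y');  -- previously could dereference NULL
                                          -- and crash the server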