ALTER TABLE DISABLE KEYS doesn't work when modifying the table
ENABLE|DISABLE KEYS combined with any ALTER TABLE option other
than RENAME TO did nothing. Also, if the table had disabled keys
and was ALTER-ed, then the resulting table ended up with enabled keys.
Fixed by checking whether the table had disabled keys and enabling them
in the copied table.
statements
Currently the optimizer evaluates loose index scan only for top-level SELECT
statements.
Extend loose index scan applicability by:
- Testing the applicability of loose scan for each sub-select, instead of the
whole query. This change enables loose index scan for sub-queries.
- Allowing non-select statements with SELECT parts (like, e.g.
CREATE TABLE .. SELECT ...) to use loose index scan, as illustrated below.
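As an illustration (hypothetical tables; assume t1 has an index on (a,b)),
statements like these can now use loose index scan:

  CREATE TABLE t2 SELECT a, MAX(b) FROM t1 GROUP BY a;
  SELECT * FROM t3 WHERE t3.c IN (SELECT MAX(b) FROM t1 GROUP BY a);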
- CREATE PROCEDURE stored the database name based on the query context instead
of the current database as set by 'USE', contrary to the manual.
Based on this behavior, the bug reporter interpreted the filtering
statements as a bug for DROP PROCEDURE.
- Removed the code which changes db context.
- Added code to check that a valid db was supplied.
When implicitly converting string fields to numbers, the
string-to-number conversion error was not sent to the client.
Added code to send the conversion error as a warning.
We also need to prevent generation of warnings from the places
where val_xxx() methods are called for the sole purpose of updating
the Item::null_value flag.
To achieve that, a special function is added (and called):
update_null_value(). This function will set the no_errors flag and
will call val_xxx(). The warning generation in Field_string::val_xxx()
will use the flag when generating the conversion warnings.
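A minimal sketch of the visible change (hypothetical table):

  CREATE TABLE t1 (s VARCHAR(10));
  INSERT INTO t1 VALUES ('not a number');
  SELECT s + 1 FROM t1;  -- the string-to-number conversion now
                         -- produces a warning on the client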
Problem: when loading mysqlbinlog dumps, CREATE PROCEDURE statements having
semicolons in their bodies failed.
Fix: use the safe delimiter "/*!*/;" when dumping log entries.
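A sketch of the resulting dump format (procedure body is illustrative):

  DELIMITER /*!*/;
  CREATE PROCEDURE p1()
  BEGIN
    SELECT 1;
    SELECT 2;
  END
  /*!*/;
  DELIMITER ;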
mysql-test-run.pl:
Removed "use diagnostics"; it reduces Perl speed significantly. It can be
re-enabled with "perl -Mdiagnostics mysql-test-run.pl".
mtr_report.pl:
Don't try to output the "skipped" comment if there is none (bug#24471)
master_port after a "change master" will be set to the compiled-in default value,
i.e. not always the same as what the master reports as its port number.
If a view was created with the DEFINER security and the definer user was
later dropped, then a SELECT from the view threw an error message saying
that no definer user is registered. This is OK for root, but reveals too
much to a mere user.
Now the st_table_list::prepare_view_securety_context() function reveals
the absence of the definer only to a superuser, and throws the 'access denied'
error to others.
Removed "use diagnostics", reduces Perl speed significantly. Can be
enabled with "perl -Mdiagnostics mysql-test-run.pl".
mtr_report.pl:
Don't try output "skipped" comment if there is none (bug#24471)
traces in Valgrind (broken libc6-dbg).
Installing libc6-dbg on Debian will still provide proper backtraces, even
without setting LD_LIBRARY_PATH explicitly.
fails randomly.
The problem was that the test case used the command line tool (mysql)
without specifying the connect_timeout argument. In some cases,
this led to the test case hanging.
The fix is to specify --connect_timeout=1 when starting mysql.
Also, the patch contains polishing and various cleanups to simplify
further analysis of the problems.
The patch affects only the test suite; no server codebase has been
touched.
Load shared libraries from zlib (fixes mysql-test-run.pl not working on some Solaris boxes)
Added a connect timeout to the test to make im_daemon_life_cycle more predictable
- Client side readline functions unconditionally search for Unix '\n' line
endings. In this case, the delimiter statement was set to '//\r' instead
of the intended '//'. When removing the '\n', check for and remove a
preceding '\r' character as well.
mysql_fix_privilege_tables.sh's ability to convert the system tables as of
3.20 to the current system table format
Add a similar test for 4.1.23 tables - but use "mysql < mysql_fix_privilege_tables.sql"
so it can be run on any platform.
(Mostly in DBUG_PRINT() and unused arguments)
Fixed bug in query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
Do not issue a 'read-only' error in case of DROP TEMPORARY TABLE on a non-existing temporary table.
Instead produce the correct "Unknown table" error or warning (in cases when the IF EXISTS clause was specified).
To a documentor: the part of the manual describing the 'read_only' system variable should be clarified to state the following:
"When the read_only variable is set to ON, all operations which create/update/drop tables are rejected, with the following exceptions:
1. Any operation performed by the replication thread on a slave server
2. Any operation performed by a user that has the SUPER privilege
3. Any operation that creates/updates/drops only temporary tables"
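For example, for a non-SUPER user on a server running with read_only=ON:

  CREATE TEMPORARY TABLE tmp1 (a INT);    -- allowed: only a temporary table
  DROP TEMPORARY TABLE tmp1;              -- allowed
  DROP TEMPORARY TABLE IF EXISTS tmp2;    -- now an "Unknown table" warning,
                                          -- not a 'read-only' error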
limitation)
Note to the reviewer
====================
Warning: reviewing this patch is somewhat involved.
Due to the nature of several issues all affecting the same area,
fixing each issue separately is not practical, since each fix cannot be
implemented and tested independently.
In particular, the issues with
- rule recursion
- nested case statements
- forward jump resolution (backpatch list)
are tightly coupled (see below).
Definitions
===========
The expression
CASE expr
WHEN expr THEN expr
WHEN expr THEN expr
...
END
is a "Simple Case Expression".
The expression
CASE
WHEN expr THEN expr
WHEN expr THEN expr
...
END
is a "Searched Case Expression".
The statement
CASE expr
WHEN expr THEN stmts
WHEN expr THEN stmts
...
END CASE
is a "Simple Case Statement".
The statement
CASE
WHEN expr THEN stmts
WHEN expr THEN stmts
...
END CASE
is a "Searched Case Statement".
A "Left Recursive" rule is like
list:
element
| list element
;
A "Right Recursive" rule is like
list:
element
| element list
;
Left and right recursion produce the same language; the difference only
affects the *order* in which the text is parsed.
In a recursive descent parser (usually written manually), right recursion works
very well, and is typically implemented with a while loop.
In a bottom-up parser (yacc/bison), left recursion works very well,
and is implemented naturally by the parser stack.
In both cases, using the wrong type of recursion is very bad and should be
avoided, as it causes technical issues with the parser implementation.
Before this change
==================
The "Simple Case Expression" and "Searched Case Expression" were both
implemented by the "when_list" and "when_list2" rules, which are left
recursive (ok).
These rules, however, used lex->when_list instead of using the parser stack,
which is more complex than necessary, and potentially dangerous because
of other rules using THD::reset_lex.
The "Simple Case Statement" and "Searched Case Statements" were implemented
by the "sp_case", "sp_whens" and in part by "sp_proc_stmt" rules.
Both cases were right recursive (bad).
The grammar involved was convoluted, and is assumed to be the result of
tweaks to get the code generation to work, but is not what someone would
naturally write.
In addition, using a common rule for both "Simple" and "Searched" case
statements was implemented with sp_head::m_flags |= IN_SIMPLE_CASE,
which is a flag and not a stack, and therefore does not take into account
*nested* case statements. This leads to incorrect generated code, and either
a server crash or an incorrect result.
With regards to the backpatch mechanism, a *different* backpatch list was
created for each jump from "WHEN expr THEN stmt" to "END CASE", which
relied on the grammar to be right recursive.
This is a mis-use of the backpatch list, since this list can resolve
multiple references to the same target at once.
The optimizer algorithm used to detect dead code in the "assembly" SQL
instructions, implemented by sp_head::opt_mark(uint ip), was recursive
in some cases (a conditional jump pointing forward to another conditional
jump).
In case of specially crafted code, like
- a long list of "IF expr THEN stmt END IF"
- a long CASE statement
this would actually cause a server crash with a stack overflow.
In general, having a stack that grows proportionally with user data (the
SQL code given by the client in a CREATE PROCEDURE) is to be avoided.
In debug builds only, creating a SP / SF / Trigger which had a significant
amount of code would spend --literally-- several minutes in sp_head::create,
because of the debug code involved with DBUG_PRINT("info", ("Code %s ...
There are several issues with this code:
- in a CASE with 5 000 WHEN, there are 15 000 instructions generated,
which create a string representation of the code which is 500 000 bytes
long,
- using a String instead of an io stream causes performance to degrade
to a total server freeze, as time is spent doing realloc of a buffer
that is always too short,
- printing a 500 000 byte long string in the debug log is too verbose,
- generating this string even when DBUG_PRINT is off is useless,
- having code that can potentially affect the server behavior inside
#ifdef / #endif is useful in some cases, but is also a bad practice.
After this change
=================
"Case Expressions" (both simple and searched) have been simplified to
not use LEX::when_list, which has been removed.
Considering all the issues affecting case statements, the grammar for these
has been totally rewritten.
The existing actions, used to generate "assembly" sp_inst* code, have been
preserved but moved in the new grammar, with the following changes:
a) Bison rules are no longer shared between "Simple" and "Searched" case
statements, because a stack instead of a flag is required to handle them.
Nested statements are handled naturally by the parser stack, which by
definition uses the correct rule in the correct context.
Nested statements of the opposite type (simple vs searched) work correctly.
The flag sp_head::IN_SIMPLE_CASE is no longer used.
This is a step towards resolution of WL#2999, which correctly identified
that temporary parsing flags do not belong to sp_head.
The code in the action is shared by means of the case_stmt_action_xxx()
helpers.
b) The backpatch mechanism, used to resolve forward jumps in the generated
code, has been changed to:
- create a label for the instruction following 'END CASE',
- register each jump at the end of a "WHEN expr THEN stmt" in a *unique*
backpatch list associated with the 'END CASE' label
- resolve all the forward jumps for this label at once.
In addition, the code involving backpatch has been commented, so that a
reader can now understand by reading matching "Registering" and "Resolving"
comments how the forward jumps are resolved and what target they resolve to,
as this is far from evident when reading the code alone.
The implementation of sp_head::opt_mark() has been revised to avoid
recursive calls from jump instructions, and instead add the jump location
to the list of paths to explore during the flow analysis of the instruction
graph, with a call to sp_head::add_mark_lead().
In addition, the flow analysis will stop if an instruction has already
been marked as reachable, which the previous code failed to do in the
recursive case.
sp_head::opt_mark() is now private, to prevent new calls to this method from
being introduced.
The debug code present in sp_head::create() has been removed.
Considering that SHOW PROCEDURE CODE is also available in debug builds,
and can be used anytime regardless of the trace level - as opposed to the
removed trace, printed at "CREATE PROCEDURE" time and only if the trace
was on - removing the code actually makes debugging easier (usable trace).
Tests have been written to cover the parser overflow (big CASE),
and to cover nested CASE statements.
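A sketch of the nested, mixed-type case statements now covered by the tests
(hypothetical procedure):

  CREATE PROCEDURE p1(x INT)
  BEGIN
    CASE x                          -- simple case statement
    WHEN 1 THEN
      CASE                          -- nested searched case statement
      WHEN x > 0 THEN SELECT 'one';
      ELSE SELECT 'other';
      END CASE;
    ELSE SELECT 'not one';
    END CASE;
  END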
(this is the 5.0 patch, because 4.1 differs)
There was an improper order of doing chained operations.
To the documentor: ENABLE|DISABLE KEYS combined with RENAME TO, and no other
ALTER TABLE clause, leads to a server crash regardless of the presence of
indexes and data in the table.
The problem was that some functions (namely IN() starting with 4.1, and
CHAR() starting with 5.0) were returning NULL in certain conditions,
while they didn't set their maybe_null flag. Because of that, there could
be some problems with the 'IS NULL' check, and statements that depend on the
function value domain, like CREATE TABLE t1 SELECT 1 IN (2, NULL);.
The fix is to set maybe_null correctly.
- Added 'SET NAMES <charset>' upon ::open
- Added test and results for simple UTF test
federated.test:
BUG #17044 Federated Storage Engine not UTF8 clean
New test. Using hex - pasting various charsets in the terminal doesn't work.
federated.result:
BUG# 17044 Federated Storage Engine not UTF8 clean
New test results
ha_federated.cc:
BUG# 17044 Federated Storage Engine not UTF8 clean
Upon ::open, set names to table's charset
Problem: When we have a really large number (between 2^63 and 2^64)
as the left side of the mod operator, it gets improperly coerced
into a signed value.
Solution: Added check to see if the "negative" number is really
positive, and if so, cast it.
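A sketch of the affected case (the literal is illustrative):

  SELECT 18446744073709551614 % 7;  -- left operand lies between 2^63 and 2^64
                                    -- and was previously treated as negative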
The problem was that THD::row_count_func was zeroed too. It was zeroed
as a fix for bug 4905 "Stored procedure doesn't clear for "Rows affected"".
However, the proper solution is not to zero it, because THD::row_count_func
has already been set to -1 in mysql_execute_command() by a later fix, which
obsoletes the incorrect fix for bug 4905.
The regression is caused by the fix for bug 14767. When INSERT ... SELECT
used a view in the SELECT list that was not inlined, and there was an
active transaction, the server could crash in Query_cache::invalidate.
On INSERT ... SELECT only the table being inserted into is invalidated.
Thus views that can't be inlined are skipped from invalidation.
The bug manifests itself in two ways, so there are two test cases.
One checks that only the table being inserted into is invalidated.
The second one checks that there is no crash on INSERT ... SELECT.
This change set implements the DROP TRIGGER IF EXISTS functionality.
This is considered a bug fix and not a feature, because without it
there is no known method to write a database creation script that can create
a trigger without failing, when executed on a database that may or may not
already contain a trigger of the same name.
Implementing this functionality closes an orthogonality gap between triggers
and stored procedures / stored functions (which do support the DROP IF
EXISTS syntax).
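This allows idempotent creation scripts such as (hypothetical names):

  DROP TRIGGER IF EXISTS trg1;
  CREATE TRIGGER trg1 BEFORE INSERT ON t1
    FOR EACH ROW SET @cnt = @cnt + 1;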
In sql_trigger.cc, in mysql_create_or_drop_trigger,
the code has been reordered to:
- perform the tests that do not depend on the file system (access()),
- get the locks (wait_if_global_read_lock, LOCK_open)
- call access()
- perform the operation
- write to the binlog
- unlock (LOCK_open, start_waiting_global_read_lock)
This is to ensure that all the code that depends on the presence of the
trigger file is executed in the same critical section,
and prevents race conditions similar to the case fixed by Bug 14262:
- thread 1 executes DROP TRIGGER IF EXISTS, access() returns a failure
- thread 2 executes CREATE TRIGGER
- thread 2 logs CREATE TRIGGER
- thread 1 logs DROP TRIGGER IF EXISTS
The patch itself is based on code contributed by the MySQL community,
under the terms of the Contributor License Agreement (See Bug 18161).
The code that set up data to be passed to user-defined functions was very
old and analyzed the "Type" of the data that was passed into the UDF, when
it really should analyze the "return_type", which is hard-coded for simple
Items and works correctly for complex ones like functions.
---
Added test at Sergei's behest.
The server sends a number of columns to the client.
It uses a limited "fast" function for that instead of the
general one. This fast function cannot send numbers larger
than 2 bytes.
This causes the client to expect a smaller number of columns.
The client writes outside of the allocated memory buffer
as a result.
Fixed the server to use the general function to send column
count.
Fixed the client to check the column count before writing
column data.
stored function invoked from different connections".
Invocation of trigger which was using stored function from different
connections caused server crashes (for non-debug server this happened
in highly concurrent environment, but debug server failed on assertion
in relatively simple scenario).
Item_func_sp was not safe to use in triggers (in other words, for
re-execution from different threads) as the artificial TABLE object
pointed to by Item_func_sp::dummy_table referenced an incorrect THD
object. To fix the problem we force re-initialization of this
object for each re-execution of the statement.
specifying DEFAULT
This was not specific to datetime. When there is no default value
for a column, and the user inserted DEFAULT, we would write
uninitialized memory to the table.
Now, insist on writing a default value, a zero-ish value, the same
one that comes from inserting NULL into a not-NULL field.
(This is, at best, really strange behavior that comes from allowing
sloppy usage, and serves as a good reason always to run one's server
in a strict SQL mode.)
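A minimal sketch of the affected statement (hypothetical table, non-strict
SQL mode):

  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (DEFAULT);  -- previously wrote uninitialized memory;
                                    -- now inserts 0, as if NULL were inserted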
When compiling GROUP BY, Item_ref instances are dereferenced in
setup_copy_fields(), i.e. replaced with the corresponding Item_field
(if they point to one) or with Item_copy_string in the other cases.
Since the Item_ref (in the Item_field case) is no longer used, the information
about the aliases stored in it is lost.
Fixed by preserving the column, table and DB alias when dereferencing Item_ref.
- Detect if a table has field of type MYSQL_TYPE_VAR_STRING while running
"CHECK TABLE t FOR UPGRADE" and indicate it need to be fixed
with "REPAIR TABLE t".
- When running a "REPAIR TABLE t" or "ALTER TABLE t FORCE" on the above
table, install a special copy function to trim off the trailing spaces
which we can safely say the pre-5.0 mysqld didn't put there.
The problem was that all VIEW columns always had implicit derivation.
Fix: derivation is now copied from the original expression
given in the VIEW definition.
For example:
- a VIEW column which comes from a string constant
in the CREATE VIEW definition now has coercible derivation;
- a VIEW column having a COLLATE clause
in the CREATE VIEW definition now has explicit derivation.
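A minimal sketch (hypothetical view):

  CREATE VIEW v1 AS
    SELECT 'abc' AS c1,                     -- c1: coercible derivation
           'abc' COLLATE latin1_bin AS c2;  -- c2: explicit derivation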
Problem: when embedding a character string with an introducer of charset X
into an SQL query which is generally in character set Y, the string constants
were escaped according to their own character set (i.e. X); then, after reading
such a "mixed" query from the binlog, the string constants were unescaped
using the character set of the query (i.e. Y), instead of X, which gave wrong
results or even syntax errors with tricky charsets (e.g. sjis).
Fix: when embedding a string constant of charset X into a query of charset Y,
the string constant is now escaped according to character set Y, instead of
its own character set X.
on large length
Problem: Most (all) of the numeric inputs were being coerced into
int (32 bit) sized variables. Works OK for sane inputs; any input
larger than 2^32 (or 2^31 for signed vars) exhibited predictable
wrapping behavior (up to about 10^18) and then started having really
strange behaviour past that point (since the conversion to 64 bit int
from the DECIMAL type can do weird things on out of range numbers).
Solution: 1) Add many tests. 2) Convert input from (u)long type to
(u)longlong. 3) Do (sometimes multiple) sanity checks on input,
keeping in mind that sometimes a negative longlong is not a negative
longlong (if the unsigned_flag is set). 4) Emulate existing behavior
w/rt negative and "small" out-of-bounds values.
Problem: after the introduction of the LC_TIME_NAMES variable, the
function date_format() can return international non-ascii
characters in month and weekday names. Thus, it cannot return
a binary string anymore, because inserting a result of date_format()
into a column with a non-utf8 character set produces garbage.
Fix: date_format() now returns a character string, using
"collation_connection" to detect the character set and collation
for the returned value. This allows inserting
results of date_format() properly into columns with
various character sets.
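A sketch of the behaviour (assuming the 'ru_RU' locale is available):

  SET lc_time_names = 'ru_RU';
  SELECT DATE_FORMAT('2007-01-01', '%W');  -- the weekday name is now returned
                                           -- in the connection character set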
- When returning metadata for scalar subqueries the actual type of the
column was calculated based on the value type, which limits the actual
type of a scalar subselect to the set of (currently) 3 basic types :
integer, double precision or string. This is the reason that columns
of types other than the basic ones (e.g. date/time) are reported as
being of the corresponding basic type.
Fixed by storing/returning information for the column type in addition
to the result type.
Problem: GROUP_CONCAT on a multi-byte column can truncate
in the middle of a multibyte character when applying
group_concat_max_len limit. It produces an invalid
multi-byte character in the result string.
This is the second, easier version: it reuses the old "warning_for_row" flag
instead of introducing "result_is_full", which was
added in the previous commit.
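A minimal sketch of the symptom (hypothetical utf8 column):

  SET group_concat_max_len = 5;
  SELECT GROUP_CONCAT(name) FROM t1;  -- previously the result could be cut in
                                      -- the middle of a multi-byte character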
The Item_func_mod objects never had maybe_null set, so users had no reason
to expect that they could be NULL, and might therefore deduce wrong results.
Now, set maybe_null.
The parser allocates an Item_field for references by name in ORDER BY
expressions. Such expressions, however, may point not only to an Item_field
in the select list (or to a table column) but also to an arbitrary Item.
This causes Item_field::fix_fields to throw an error about a missing
column.
The fix substitutes an Item_ref for the Item_field when the reference
does not point to an Item_field.
When we write 'query=...' string to a frm file for views on a slave,
identifiers are not properly quoted due to the missing OPTION_QUOTE_SHOW_CREATE
flag in the thd->options.
Fix: properly set thd->options for the slave thread.
Necessary changes if one of the test scripts is to be used with an RPM installation (bug#17194).
This change handles finding the server and the other programs,
but it does not solve the problem to get a writable "var" directory.
If we want to avoid world-writable directories below "/usr/share/mysql-test" (and we do!),
any automatic solution would require fixed decisions which may not match the local installation.
For the Perl script, use "--vardir"; for the shell script, create "mysql-test/var" manually.
(4.1 version, with post-review fixes)
The fix for another Bug (6439) limited FROM_UNIXTIME() to
TIMESTAMP_MAX_VALUE, which is 2145916799 or 2037-12-31 23:59:59 GMT;
however, the unix timestamp in general is not considered to be limited
by this value. All timestamps up to power(2,31)-1 are valid.
This patch extends allowed TIMESTAMP range so, that max
TIMESTAMP value is power(2,31)-1. It also corrects
FROM_UNIXTIME() and UNIX_TIMESTAMP() functions, so that
max allowed UNIX_TIMESTAMP() is power(2,31)-1. FROM_UNIXTIME()
is fixed accordingly to allow conversion of dates up to
2038-01-19 03:14:07 UTC. The patch also fixes CONVERT_TZ()
function to allow extended range of dates.
The main problem solved in the patch is possible overflows
of variables, used in broken-time representation to time_t
conversion (required for UNIX_TIMESTAMP).
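For example (result shown for a UTC time zone):

  SELECT FROM_UNIXTIME(2147483647);  -- now returns '2038-01-19 03:14:07'
                                     -- instead of NULL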
sync using replicate-wild-ignore-table
Problem: changes to character set variables
before an action on a replication-ignored table
make the slave forget the new variable values.
Fix: initialize one_shot variables only when
4.1 -> 5.x replication is running.
This is a performance issue for queries with subqueries whose
evaluation requires filesort.
Allocation of memory for the sort buffer at each evaluation of a
subquery may take a significant amount of time if the buffer is rather big.
With the fix we allocate the buffer at the first evaluation of the
subquery and reuse it at each subsequent evaluation.
Evaluate "NULL IN (SELECT ...)" in a special way: Disable pushed-down
conditions and their "consequences":
= Do full table scans instead of unique_[index_]subquery lookups.
= Change appropriate "ref_or_null" accesses to full table scans in
subquery's joins.
Also cache the value of NULL IN (SELECT ...) if the SELECT is not correlated
wrt any upper select, as sketched below.
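A sketch of the intended semantics (hypothetical table t1):

  SELECT NULL IN (SELECT a FROM t1);
  -- 0 (FALSE) if t1 is empty, NULL otherwise; since the subquery is not
  -- correlated, the value is computed once and then reused from the cache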
Item::val_xxx() may be called by the server several times at execute time
for a single query. Calls to val_xxx() may be very expensive and sometimes
(count(distinct), sum(distinct), avg(distinct)) not possible.
To avoid that problem the results of calculation for these aggregate
functions are cached so that val_xxx() methods just return the calculated
value for the second and subsequent calls.
It's not possible to flush the global status variables in 5.0.
Updated the test case so that it works by recording the value of handle_rollback
before and comparing it to the value after.
Problem: SHOW CREATE TABLE printed garbage in the table
name for tables having TURKISH I
(i.e. LATIN CAPITAL LETTER I WITH DOT ABOVE)
when lower_case_table_names=1.
Reason: in some cases, during lower/upper conversion in utf8,
the result string can be shorter than the original string
(including the above letter). The old implementation of caseup_str()
and casedn_str() didn't handle the result length properly,
assuming that the length cannot change.
This fix changes the result type of cs->cset->casedn_str()
and cs->cset->caseup_str() from VOID to UINT, to return
the result length, as well as to put the '\0' terminator in the
proper place.
Also, my_caseup_str_utf8() and my_casedn_str_utf8() were
rewritten not to use strlen(), for performance purposes.
This was done with the help of new functions - my_utf8_uni_no_range()
and my_uni_utf8_no_range() - for null-terminated strings.
Problem: Too confusing error message when cannot convert
between string and column character sets on INSERT and UPDATE.
Fix: produce a better error message, instead of "Data too long",
in such cases.
Additional changes: Adding "DROP TABLE IF EXISTS" into several
tests to be safe against failures in previous tests.
decimal->ulong conversion fixed to assign max possible ULONG if decimal
is bigger
Item_func_unsigned now handles DECIMAL parameter separately as we can't
rely on decimal::val_int result here.
an updatable view.
When there's a VIEW on a base table that has an AUTO_INCREMENT column, and
this VIEW doesn't provide access to such a column, after an INSERT into such
a VIEW, LAST_INSERT_ID() did not return the value just generated.
This behaviour is intended and correct, because if the VIEW doesn't list
some columns, then these columns are effectively hidden from the user,
and so are any side effects of inserting default values into them.
However, there was a bug that such statement inserting into a view would
reset LAST_INSERT_ID() instead of leaving it unchanged.
This patch restores the original value of LAST_INSERT_ID() instead of
resetting it to zero.
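A sketch of the fixed behaviour (hypothetical table and view):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, b INT);
  CREATE VIEW v1 AS SELECT b FROM t1;
  INSERT INTO t1 (b) VALUES (1);   -- LAST_INSERT_ID() now returns 1
  INSERT INTO v1 (b) VALUES (2);   -- LAST_INSERT_ID() still returns 1,
                                   -- no longer reset to 0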
If an error happens during DELETE IGNORE, nothing could be sent to the
client, thus leaving it frozen expecting the reply.
The problem was that if some error occurred, it wouldn't be reported to
the client because of IGNORE, but neither would success be reported.
MySQL 4.1 would not freeze the client, but would report
ERROR 1105 (HY000): Unknown error
instead, which is also a bug.
The solution is to report success if we are in DELETE IGNORE and some
non-fatal error has happened.