The load_file() string function should return NULL rather than throw an error if
the file doesn't exist, as per the manual.
mysql-test/t/outfile.test:
expect NULL rather than error if file given to load_file() doesn't exist
mysql-test/t/func_str.test:
show that load_file() will return NULL rather than throw an error
if file doesn't exist
mysql-test/r/outfile.result:
expect NULL rather than error if file given to load_file() doesn't exist
mysql-test/r/func_str.result:
expect NULL rather than error if file given to load_file() doesn't exist
sql/item_strfunc.cc:
load_file() should return NULL as per the docs if file not found,
rather than throw an error
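A minimal sketch of the intended behaviour (the path is hypothetical and assumed not to exist on the server host):

  SELECT load_file('/no/such/file');
  -- returns NULL after the fix instead of raising an error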
A query with group by and having clauses could return a wrong
result set if the having condition contained a constant conjunct
that evaluated to FALSE.
It happened because the pushdown condition for the table with
grouping columns lost its constant conjuncts.
Pushdown conditions are always built by the function make_cond_for_table,
which ignores constant conjuncts. This is apparently not correct when
constant false conjuncts are present.
mysql-test/r/having.result:
Added a test case for bug #14927.
mysql-test/t/having.test:
Added a test case for bug #14927.
sql/sql_lex.cc:
Fixed bug #14927.
Initialized fields for having conditions in st_select_lex::init_query().
sql/sql_lex.h:
Fixed bug #14927.
Added a field to restore having conditions for execution in SP and PS.
sql/sql_prepare.cc:
Fixed bug #14927.
Added code to restore having conditions for execution in SP and PS.
sql/sql_select.cc:
Fixed bug #14927.
Performed evaluation of constant expressions in having clauses.
If the having condition contains a constant conjunct that is always false,
an empty result set is returned after the optimization phase.
In this case the corresponding EXPLAIN command now returns
"Impossible HAVING" in the last column.
hog memory".
During each invocation of a stored function or trigger, some objects whose
lifetime is one function call (e.g. sp_rcontext) were allocated on the
arena/memroot of the calling statement. This led to consumption of a fixed
amount of memory for each function/trigger invocation, so statements which
involve a lot of them were hogging memory. This in turn led to OOM
crashes or freezes.
This fix introduces a new memroot and arena for objects whose lifetime is
the whole duration of the function call, so all memory consumed by such
objects is freed at the end of the call.
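An illustrative shape of affected statement (the stored function f() and table big_table are hypothetical):

  -- each invocation of f() used to allocate its sp_rcontext on the calling
  -- statement's memroot, so a scan like this accumulated memory until the
  -- whole statement finished
  SELECT f(col) FROM big_table;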
sql/sp_head.cc:
sp_head::execute_function():
Introduced a new memroot and arena for objects whose lifetime is the whole
duration of the function call (e.g. sp_rcontext, sp_cursor). We can't
use the caller's arena/memroot for those objects because then some
fixed amount of memory would be consumed for each function/trigger
invocation, and statements which involve a lot of them would hog memory.
Got rid of the param_values array to avoid excessive juggling with arenas.
The bug was as follows: when merge_key_fields() encounters "t.key=X OR t.key=Y" it will
try to join them into ref_or_null access via "t.key=X OR NULL". In order to make this
inference it checks whether Y<=>NULL, ignoring the fact that the value of Y may not be
known yet. The fix is to make the Y<=>NULL check only when the value of Y is known
(i.e. it is a constant).
TODO: When merging to 5.0, replace used_tables() with const_item() everywhere in merge_key_fields().
mysql-test/r/innodb_mysql.result:
Testcase for BUG16798
mysql-test/t/innodb_mysql.test:
Testcase for BUG16798
sql/sql_select.cc:
BUG#16798: Inapplicable ref_or_null query plan and bad query result on random occasions
In merge_key_fields() don't call val->is_null() if the value of val is not known.
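An illustrative query of the problematic shape (table and column names are hypothetical):

  SELECT * FROM t1, t2 WHERE t1.key_col = 10 OR t1.key_col = t2.col;
  -- t2.col is not a constant, so its value is unknown when merge_key_fields()
  -- runs; the optimizer must not fold the OR into ref_or_null access on t1.key_col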
after merge.
Concurrent read and update of privilege structures (like a simultaneous
run of SHOW GRANTS and ADD USER) could result in a server crash.
Ensure that proper locking of ACL structures is done.
No test case is provided because this bug can't be reproduced
deterministically.
sql/sql_acl.cc:
Ensure that access to ACL data is protected by acl_cache->lock mutex.
Use system_charset_info for host names consistently.
Remove check_acl_user(). Use find_acl_user() instead.
sql/sql_acl.h:
Remove check_acl_user() declaration.
sql/sql_parse.cc:
Use is_acl_user() instead of check_acl_user().
into mysql.com:/home/tomash/src/mysql_ab/mysql-5.0-merge
mysql-test/r/func_misc.result:
Manual merge of the fix for bug#16501.
mysql-test/t/func_misc.test:
Manual merge of the fix for bug#16501.
sql/item_func.cc:
Manual merge of the fix for bug#16501.
sql/sql_acl.cc:
For the fix of bug#16372, use the local version, since the fix for 5.0 is
different and will go in a separate ChangeSet.
The reason for the bug is that `get_var_with_binlog' performs the missed
assignment of the variables as a side effect. In doing so it eventually calls
`free_underlaid_joins', passing as an argument `thd->lex->select_lex' of the lex
which belongs to the user query, not to the one being emulated, i.e. SET @var1:=NULL.
`get_var_with_binlog' is refined to supply a temporary lex to sql_set_variables's stack.
mysql-test/r/rpl_user_variables.result:
results changed
mysql-test/t/rpl_user_variables.test:
Added a problematic query to be binlogged.
sql/item_func.cc:
BUG#19136: Crashing log-bin and uninitialized user variables
The reason for the bug is in how `get_var_with_binlog' performs the missed
assignment of the variables: `free_underlaid_joins' gets as an argument
`thd->lex->select_lex', which belongs to the user query, not to the one being
emulated, i.e. SET @var1:=NULL.
`get_var_with_binlog' is refined to supply a temporary lex to sql_set_variables's stack.
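A sketch of the kind of statement involved (the table name is hypothetical; log-bin is assumed to be enabled):

  -- @a has never been assigned; to binlog this statement the server has to
  -- record the variable's (NULL) value, which is where the emulated
  -- SET @a:=NULL assignment comes in
  INSERT INTO t1 VALUES (@a);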
into ua141d10.elisa.omakaista.fi:/home/my/bk/mysql-5.0
mysql-test/r/date_formats.result:
Auto merged
mysql-test/t/date_formats.test:
Auto merged
sql/item_timefunc.cc:
Merged from 4.1
mysql-test-run now fails in case of warnings
mysql-test/lib/mtr_report.pl:
Fail if warnings are found
mysql-test/mysql-test-run.sh:
Fail if warnings are found
sql/sql_lex.cc:
Initialize st_lex properly
sql/sql_view.cc:
Fixed problem with unaligned memory (wrong free)
TIME_FORMAT using "%l:%i" returns 36:00 with 24:00:00 in TIME column
mysql-test/r/date_formats.result:
Added test case for Bug#11324,
"TIME_FORMAT using "%l:%i" returns 36:00 with 24:00:00 in TIME column"
mysql-test/t/date_formats.test:
Added test case for Bug#11324,
"TIME_FORMAT using "%l:%i" returns 36:00 with 24:00:00 in TIME column"
into ua141d10.elisa.omakaista.fi:/home/my/bk/mysql-5.0
mysql-test/r/gis-rtree.result:
Auto merged
mysql-test/r/ansi.result:
Merged from 4.1
mysql-test/r/auto_increment.result:
Merged from 4.1
mysql-test/r/mysqldump.result:
Merged from 4.1
mysql-test/r/symlink.result:
Merged from 4.1
mysql-test/t/auto_increment.test:
Merged from 4.1
mysql-test/t/mysqldump.test:
Merged from 4.1
sql/set_var.cc:
Merged from 4.1
sql/sql_show.cc:
Merged from 4.1
into rurik.mysql.com:/home/igor/dev/mysql-5.0-0
mysql-test/r/subselect.result:
Auto merged
sql/mysql_priv.h:
Auto merged
sql/sql_select.cc:
Auto merged
mysqldump / SHOW CREATE TABLE will show the NEXT available value for
the PK, rather than the *first* one that was available (that named in
the original CREATE TABLE ... AUTO_INCREMENT = ... statement).
This should produce correct and robust behaviour for the obvious use
cases -- when no data were inserted, we'll produce a statement
featuring the same value the original CREATE TABLE had; if we dump
with values, INSERTing the values on the target machine should set the
correct next_ID anyway (and if not, we'll still have our AUTO_INCREMENT =
... to do that). Lastly, just the CREATE statement (with no data) for
a table that saw inserts would still result in a table that new values
could safely be inserted into.
There seems to be no robust way, however, to see whether the next_ID
field is > 1 because it was set to something else with CREATE TABLE
... AUTO_INCREMENT = ..., or because there is an AUTO_INCREMENT column
in the table (but no initial value was set with AUTO_INCREMENT = ...)
and then one or more rows were INSERTed, counting up next_ID. This
means that in both cases, we'll generate an AUTO_INCREMENT =
... clause in SHOW CREATE TABLE / mysqldump. As we also show info on,
say, charsets even if the user did not explicitly give that info in
their own CREATE TABLE, this shouldn't be an issue.
As per above, the next_ID will be affected by any INSERTs that have
taken place, though. This /should/ result in correct and robust
behaviour, but it may look non-intuitive to some users if they CREATE
TABLE ... AUTO_INCREMENT = 1000 and later (after some INSERTs) have
SHOW CREATE TABLE give them a different value (say, CREATE TABLE
... AUTO_INCREMENT = 1006), so the docs should possibly feature a
caveat to that effect.
It's not very intuitive the way it works now (with the fix), but it's
*correct*. We're not storing the original value anyway; if we wanted
that, we'd have to change the on-disk representation.
If we do dump/load cycles with empty DBs, nothing will change. This
changeset includes an additional test case that proves that tables
with rows will create the same next_ID for AUTO_INCREMENT = ... across
dump/restore cycles.
Confirmed by support as likely solution for client's problem.
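A hypothetical illustration of the behaviour described above (table name and values are illustrative):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) AUTO_INCREMENT = 1000;
  INSERT INTO t1 VALUES (NULL), (NULL), (NULL);
  SHOW CREATE TABLE t1;
  -- the output now carries AUTO_INCREMENT=1003 (the next available value),
  -- not the 1000 originally given in the CREATE TABLE statement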
mysql-test/r/auto_increment.result:
test for creation of AUTO_INCREMENT=... clause
mysql-test/r/gis-rtree.result:
Add AUTO_INCREMENT=... clauses where appropriate
mysql-test/r/mysqldump.result:
show that AUTO_INCREMENT=... will survive dump/restore cycles
mysql-test/r/symlink.result:
Add AUTO_INCREMENT=... clauses where appropriate
mysql-test/t/auto_increment.test:
test for creation of AUTO_INCREMENT=... clause
mysql-test/t/mysqldump.test:
show that AUTO_INCREMENT=... will survive dump/restore cycles
sql/sql_show.cc:
Add AUTO_INCREMENT=... to the output of SHOW CREATE TABLE if there is an
AUTO_INCREMENT column and NEXT_ID > 1 (the default). We must not print
the clause for engines that do not support this as it would break the
import of dumps, but as of this writing, the test for whether
AUTO_INCREMENT columns are allowed and whether AUTO_INCREMENT=...
is supported is identical: !(file->table_flags() & HA_NO_AUTO_INCREMENT).
Because of that, we do not explicitly test for the feature,
but may extrapolate its existence from that of an AUTO_INCREMENT column.
into mysql.com:/space/pekka/ndb/version/my50
mysql-test/r/ndb_blob.result:
Auto merged
mysql-test/t/ndb_blob.test:
Auto merged
ndb/include/kernel/signaldata/TcKeyReq.hpp:
Auto merged
ndb/include/ndbapi/NdbBlob.hpp:
Auto merged
ndb/src/ndbapi/NdbBlob.cpp:
Auto merged
ndb/test/ndbapi/testBlobs.cpp:
Auto merged
sql/sql_table.cc:
Auto merged
ndb/tools/delete_all.cpp:
nuts
There were two distinct bugs: a parse error was returned for a valid
statement, and that error wasn't reported to the client.
The fix ensures that EXPLAIN SELECT..INTO is accepted by the parser and any
other parse error will be reported to the client.
mysql-test/r/explain.result:
Add result for bug#15463.
mysql-test/t/explain.test:
Add test case for bug#15463.
sql/sql_parse.cc:
Assert that if a parsing error has occurred then an appropriate error message
has been pushed onto the error stack.
sql/sql_yacc.yy:
If there is no lex->result in select_var_ident rule, then we have
to be in DESCRIBE mode.
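A minimal sketch of the now-accepted statement (the user variable name is illustrative):

  EXPLAIN SELECT 1 INTO @v;
  -- previously this valid statement failed with a parse error that was not
  -- even reported to the client; after the fix it parses and EXPLAIN output
  -- is produced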