Due to the complexity of this change, everything is documented in WL#3565.
This patch is the third iteration; it takes into account the comments
received to date.
Events: crash with procedure which alters events with function
Post-review CS
This fix also changes the handling of the KILL command combined with a
subquery: the error returned is now "not supported", instead of a parse
error. The error for CREATE|ALTER EVENT has also been changed
to "not supported yet" instead of a parse error.
In the case of an SP call, the error is "not supported yet". This change
removes code from the parser that does not belong there. Still,
LEX::expr_allows_subselect remains, because it simplifies the handling
of SQLCOM_HA_READ, which forbids subselects.
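A minimal sketch of the kind of post-parse check described above, with
hypothetical helper names rather than the real parser hooks:

  #include <cstdio>

  struct Lex_sketch {
    bool expr_allows_subselect;   // false for e.g. SQLCOM_HA_READ
  };

  // Hypothetical check performed outside the grammar rules: instead of a
  // generic parse error, report a clearer "not supported (yet)" error when
  // a subquery shows up where the statement forbids it.
  bool check_subselect_allowed(const Lex_sketch &lex, bool stmt_has_subselect)
  {
    if (stmt_has_subselect && !lex.expr_allows_subselect)
    {
      std::fprintf(stderr,
                   "ERROR: subqueries are not supported here (yet)\n");
      return false;               // caller aborts the statement
    }
    return true;
  }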
Fixed some possibly fatal wrong arguments to printf()-style functions.
Initialized some uninitialized variables.
Fixed a bug in stored procedures and continue handlers
(Fixes Bug#22150)
(4.1 version, with post-review fixes)
The fix for another bug (6439) limited FROM_UNIXTIME() to
TIMESTAMP_MAX_VALUE, which is 2145916799 or 2037-12-31 23:59:59 GMT;
however, the unix timestamp in general is not considered to be limited
by this value. All values up to power(2,31)-1 are valid.
This patch extends the allowed TIMESTAMP range so that the maximum
TIMESTAMP value is power(2,31)-1. It also corrects the
FROM_UNIXTIME() and UNIX_TIMESTAMP() functions, so that the
maximum allowed UNIX_TIMESTAMP() value is power(2,31)-1. FROM_UNIXTIME()
is fixed accordingly to allow conversion of dates up to
2038-01-19 03:14:07 UTC. The patch also fixes the CONVERT_TZ()
function to allow the extended range of dates.
The main problem solved in the patch is possible overflow of the
variables used in the conversion from broken-down time representation
to time_t (required for UNIX_TIMESTAMP()).
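A minimal sketch of an overflow-safe conversion under the new upper bound;
the names, constant and layout are illustrative, not the server's:

  #include <cstdint>

  static const int64_t TIMESTAMP_UPPER_BOUND = 2147483647LL;  // power(2,31)-1

  // Days since 1970-01-01 for a Gregorian (civil) date.
  static int64_t days_from_civil(int64_t y, int m, int d)
  {
    y -= m <= 2;
    int64_t era = (y >= 0 ? y : y - 399) / 400;
    int64_t yoe = y - era * 400;                              // [0, 399]
    int64_t doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;
    int64_t doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;      // [0, 146096]
    return era * 146097 + doe - 719468;
  }

  // Convert a broken-down UTC time to a unix timestamp, doing all the
  // arithmetic in 64 bits so that dates close to 2038-01-19 cannot
  // overflow, and returning -1 for values outside the allowed range.
  int64_t broken_time_to_ts_sketch(int year, int month, int day,
                                   int hour, int min, int sec)
  {
    int64_t ts = days_from_civil(year, month, day) * 86400LL
               + hour * 3600LL + min * 60LL + sec;
    if (ts < 0 || ts > TIMESTAMP_UPPER_BOUND)
      return -1;                                              // out of range
    return ts;
  }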
Use lazy initialization for the Query_tables_list::sroutines hash.
This should significantly decrease the amount of memory consumed
by stored routines, as we no longer allocate the chunk of memory
required for this HASH for each statement in a routine.
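The real code uses the server's HASH type; the sketch below only illustrates
the lazy-initialization pattern, with standard containers and an illustrative
struct name:

  #include <memory>
  #include <string>
  #include <unordered_map>

  // Stand-in for Query_tables_list: the routines hash is allocated only the
  // first time a routine is added, so statements that reference no stored
  // routines pay no memory for it.
  struct Query_tables_list_sketch
  {
    std::unique_ptr<std::unordered_map<std::string, int>> sroutines;

    std::unordered_map<std::string, int> &sroutines_hash()
    {
      if (!sroutines)                      // lazy initialization
        sroutines.reset(new std::unordered_map<std::string, int>());
      return *sroutines;
    }
  };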
sync using replicate-wild-ignore-table
Problem: changes to character set variables
before an action on a replication-ignored table
make the slave forget the new variable values.
Fix: initialize one_shot variables only when
4.1 -> 5.x replication is running.
This is a performance issue for queries with subqueries whose
evaluation requires a filesort.
Allocating memory for the sort buffer at each evaluation of a
subquery may take a significant amount of time if the buffer is rather big.
With the fix we allocate the buffer at the first evaluation of the
subquery and reuse it at each subsequent evaluation.
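A small sketch of the allocate-once-and-reuse idea; the names are
illustrative, the server keeps the buffer in the subquery's execution state:

  #include <cstddef>
  #include <vector>

  struct Subquery_sort_state_sketch
  {
    std::vector<unsigned char> sort_buffer;  // kept across evaluations

    // Returns a buffer of at least `needed` bytes; allocated on the first
    // evaluation of the subquery and reused (or grown) afterwards.
    unsigned char *get_sort_buffer(std::size_t needed)
    {
      if (sort_buffer.size() < needed)
        sort_buffer.resize(needed);
      return sort_buffer.data();
    }
  };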
Evaluate "NULL IN (SELECT ...)" in a special way: Disable pushed-down
conditions and their "consequences":
= Do full table scans instead of unique_[index_subquery] lookups.
= Change appropriate "ref_or_null" accesses to full table scans in
subquery's joins.
Also cache the value of NULL IN (SELECT ...) if the SELECT is not correlated
with any upper select.
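An illustrative cache for the uncorrelated case; the names and the tri-state
layout are assumptions, not the server's Item classes:

  #include <functional>

  struct Null_in_subselect_cache_sketch
  {
    bool cached  = false;
    bool is_null = false;   // UNKNOWN result of NULL IN (SELECT ...)
    bool value   = false;

    // run_subselect performs the expensive evaluation (with pushed-down
    // conditions disabled); it is called at most once because the subquery
    // is not correlated with any upper select.
    bool evaluate(const std::function<bool(bool *)> &run_subselect,
                  bool *null_out)
    {
      if (!cached)
      {
        value  = run_subselect(&is_null);
        cached = true;
      }
      *null_out = is_null;
      return value;
    }
  };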
The rli should be a separate module (i.e. a class) to make it easier to
maintain the code, e.g. by having checks within the rli verifying the sanity
of its data and by making member variables private. This will also ease the
implementation of multi-source replication and, at least in my fantasies :),
make it possible at some point in the future to have separate replication
servers.
Item::val_xxx() may be called by the server several times at execute time
for a single query. Calls to val_xxx() may be very expensive, and for some
functions (count(distinct), sum(distinct), avg(distinct)) repeating the
calculation is not possible.
To avoid this problem the results of the calculation for these aggregate
functions are cached, so that the val_xxx() methods just return the
calculated value on the second and subsequent calls.
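A minimal sketch of the caching pattern; the class and member names are
illustrative stand-ins for the Item_sum-based implementation:

  #include <functional>

  struct Cached_aggregate_sketch
  {
    std::function<double()> compute;  // expensive {count|sum|avg}(distinct) work
    double cached_value   = 0.0;
    bool   value_computed = false;

    explicit Cached_aggregate_sketch(std::function<double()> f)
      : compute(std::move(f)) {}

    double val_real()
    {
      if (!value_computed)
      {
        cached_value   = compute();   // done once per execution
        value_computed = true;
      }
      return cached_value;            // later calls just return the cache
    }
  };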
If the user has specified --max-connections=N or --table-open-cache=M
options to the server, a warning could be given that some values were
recalculated, and table-open-cache could be assigned a greater value.
Note that both the warning and the increase of table-open-cache were totally
harmless.
This patch fixes the recalculation code to ensure that table-open-cache will
never be increased automatically and that a warning will be given only if
some values had to be decreased due to operating system limits.
No test case is provided because we can neither predict nor control
operating system limits for the maximal number of open files.
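A sketch of the intended recalculation behaviour; the cost model (one file
per connection, two per cached table) and the function name are assumptions
made only for illustration:

  #include <algorithm>
  #include <cstdio>

  void adjust_open_files_limits_sketch(unsigned long os_file_limit,
                                       unsigned long *max_connections,
                                       unsigned long *table_open_cache)
  {
    unsigned long wanted = *max_connections + 2 * *table_open_cache;
    if (wanted <= os_file_limit)
      return;                                // nothing recalculated, no warning

    // Only ever decrease the user's values, never increase them.
    unsigned long cache_candidate =
        (os_file_limit > *max_connections)
            ? (os_file_limit - *max_connections) / 2 : 0UL;
    unsigned long new_cache = std::min(*table_open_cache, cache_candidate);
    unsigned long conn_candidate =
        (os_file_limit > 2 * new_cache) ? os_file_limit - 2 * new_cache : 0UL;
    unsigned long new_conn = std::min(*max_connections, conn_candidate);

    // The warning is given only because something had to be decreased.
    std::fprintf(stderr,
                 "Warning: lowering table_open_cache to %lu and "
                 "max_connections to %lu (OS open-files limit is %lu)\n",
                 new_cache, new_conn, os_file_limit);
    *table_open_cache = new_cache;
    *max_connections  = new_conn;
  }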
It's not possible to flush the global status variables in 5.0.
Update the test case so it works by recording the value of handle_rollback
before and comparing it to the value after.
Problem: SHOW CREATE TABLE printed garbage in the table
name for tables having TURKISH I
(i.e. LATIN CAPITAL LETTER I WITH DOT ABOVE)
when lower-case-table-names=1.
Reason: in some cases during lower/upper case conversion in utf8,
the result string can be shorter than the original string
(including the above letter). The old implementations of caseup_str()
and casedn_str() didn't handle the result length properly,
assuming that the length cannot change.
This fix changes the result type of cs->cset->casedn_str()
and cs->cset->caseup_str() from VOID to UINT, to return
the result length, as well as to put the '\0' terminator in the
proper place.
Also, my_caseup_str_utf8() and my_casedn_str_utf8() were
rewritten not to use strlen(), for performance purposes.
This was done by adding new functions - my_utf8_uni_no_range()
and my_uni_utf8_no_range() - for null-terminated strings.
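An illustrative single-byte version of the new contract (the real functions
work per character set, and on utf8 the length can actually shrink); the
point is only that the function now writes the terminator itself and returns
the result length:

  #include <cctype>
  #include <cstddef>

  std::size_t casedn_str_sketch(char *str)
  {
    char *dst = str;
    for (const char *src = str; *src; src++)
      *dst++ = static_cast<char>(
          std::tolower(static_cast<unsigned char>(*src)));
    *dst = '\0';                                 // terminator at the new end
    return static_cast<std::size_t>(dst - str);  // result length, may differ
  }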
Problem: a too confusing error message when a value cannot be converted
between the string and column character sets on INSERT or UPDATE.
Fix: produce a better error message, instead of "Data too long",
in such cases.
Additional changes: added "DROP TABLE IF EXISTS" to several
tests to be safe against failures in previous tests.
The decimal->ulong conversion was fixed to assign the maximum possible
ULONG if the decimal value is larger.
Item_func_unsigned now handles a DECIMAL parameter separately, as we can't
rely on the result of decimal::val_int here.
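A sketch of the clamping behaviour, using double as a stand-in for the
decimal type and an illustrative function name:

  #include <climits>

  unsigned long decimal_to_ulong_sketch(double decimal_value)
  {
    // Clamp to the maximum possible ULONG when the decimal is larger,
    // instead of relying on an int conversion that can wrap around.
    if (decimal_value >= static_cast<double>(ULONG_MAX))
      return ULONG_MAX;
    if (decimal_value <= 0.0)
      return 0UL;            // lower-bound handling is an assumption here
    return static_cast<unsigned long>(decimal_value);
  }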
an updatable view.
When there is a VIEW on a base table that has an AUTO_INCREMENT column, and
this VIEW doesn't provide access to that column, LAST_INSERT_ID() did not
return the value just generated after an INSERT into such a VIEW.
This behaviour is intended and correct, because if the VIEW doesn't list
some columns then these columns are effectively hidden from the user,
and so are any side effects of inserting default values into them.
However, there was a bug: such a statement inserting into a view would
reset LAST_INSERT_ID() instead of leaving it unchanged.
This patch restores the original value of LAST_INSERT_ID() instead of
resetting it to zero.
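A minimal sketch of the save-and-restore idea, with illustrative names (the
server keeps last_insert_id in the THD):

  #include <functional>

  struct Session_sketch
  {
    unsigned long long last_insert_id = 0;
  };

  // Preserve the session's last-insert-id around a statement that must not
  // change it, restoring the saved value instead of leaving it reset to 0.
  void execute_preserving_insert_id(
      Session_sketch &session,
      const std::function<void(Session_sketch &)> &stmt)
  {
    unsigned long long saved = session.last_insert_id;
    stmt(session);                       // may clobber last_insert_id
    session.last_insert_id = saved;      // restore the original value
  }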
- As a side effect of the patch to generate lex_hash.h only once
on the machine where the source dist was produced, a problem
was found when compiling a mysqld without partition support: it
would crash when looking up the lex symbols due to a mismatch between
lex.h and the generated lex_hash.h.
- Remove the ifdef for partition in lex.h.
- Fix a minor problem with "EXPLAIN PARTITION" when not compiled with
partition support (this existed also without the above patch).
- Add a test case that will be run when we don't have partition
support compiled into mysqld.
- Return error ER_FEATURE_DISABLED if the user tries to use PARTITION
when there is no support for it (see the sketch below).
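A sketch of the feature-disabled check; the build macro and the message text
are illustrative:

  #include <cstdio>

  bool partition_syntax_allowed_sketch()
  {
  #ifdef WITH_PARTITION_STORAGE_ENGINE        // illustrative build macro
    return true;
  #else
    std::fprintf(stderr,
                 "ERROR (ER_FEATURE_DISABLED): the 'partitioning' feature is "
                 "disabled in this build of mysqld\n");
    return false;
  #endif
  }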
When executing the init_connect statement, thd->net.vio is set to 0, to
forbid sending any results to the client. As a side effect we don't log
possible errors, either.
Now we write warnings to the error log if an init_connect query
fails.
If an error happens during DELETE IGNORE, nothing could be sent to the
client, thus leaving it frozen expecting a reply.
The problem was that if some error occurred, it wouldn't be reported to
the client because of IGNORE, but neither would success be reported.
MySQL 4.1 would not freeze the client, but would report
ERROR 1105 (HY000): Unknown error
instead, which is also a bug.
The solution is to report success if we are in DELETE IGNORE and some
non-fatal error has happened.
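A sketch of the decision the fix makes at the end of the statement; the types
and names are illustrative:

  #include <cstdio>

  enum class Reply { OK, ERROR };

  Reply finish_delete_sketch(bool ignore_flag, bool error_occurred, bool fatal)
  {
    if (error_occurred && ignore_flag && !fatal)
    {
      // With IGNORE, a non-fatal error must not leave the client without a
      // reply: downgrade it and send a normal OK packet.
      std::fprintf(stderr, "Note: non-fatal error ignored by DELETE IGNORE\n");
      return Reply::OK;
    }
    return error_occurred ? Reply::ERROR : Reply::OK;
  }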
select OK.
The SQL parser was using Item::name to transfer user-defined function
attributes to the user-defined function (UDF). It was not distinguishing
between user-defined function call arguments and stored procedure call
arguments. Setting Item::name caused the Item_ref::print() method to print
the argument as a quoted identifier and caused views that reference
aggregate functions as UDF call arguments (and rely on Item::print() to
produce the text of the view to store) to throw an undefined identifier
error.
Overloaded Item_ref::print() to print aggregate functions as such when
printing the references to aggregate functions taken out of context by
split_sum_func2().
Fixed the parser to properly detect the use of an AS clause in stored
procedure arguments as an error.
Fixed printing of the arguments of a UDF call to properly print the UDF
attribute.
Problem: "greater than" and "less than" XPath operators appeared to have been implemented in reverse.
Fix: swap arguments to eq_func() and eq_func_reverse() to provide correct operation result.
When doing CREATE UNIQUE INDEX, which MySQL silently converts to a PK, ndb
is not informed, so the table will be useless.
Change so that we never do online add index without a primary key.
This is not good, but it's better than a useless table.
warnings in sql_trigger.cc and sql_view.cc".
According to the current version of the C++ standard, the offsetof() macro
can't be used for non-POD types, so warnings were emitted when we
tried to use this macro for the TABLE_LIST and Table_triggers_list
classes. Note that despite these warnings it was probably a safe
thing to do.
This fix tries to circumvent this limitation by implementing a
custom version of the offsetof() macro to be used with these
classes. This hack should go away once we refactor the
File_parser class.
Alternative approaches, such as disabling this warning for
sql_trigger.cc/sql_view.cc or for the whole server, were
considered less explicit. Also, I was unable to find a way
to disable a particular warning for a particular _part_ of a
file in GCC.
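A sketch of the kind of macro described above (the exact name and base
address are illustrative; as noted, it is a deliberate hack):

  #include <cstddef>

  // Compute the offset of MEMBER inside the non-POD class TYPE by pretending
  // an object lives at a small non-zero address, sidestepping the
  // offsetof()-on-non-POD warning.
  #define my_offsetof_sketch(TYPE, MEMBER) \
    ((size_t)((char *)&(((TYPE *)0x10)->MEMBER) - (char *)0x10))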