Do not issue a 'read-only' error for DROP TEMPORARY TABLE on a non-existing temporary table.
Instead produce the correct "Unknown table" error, or a warning when the IF EXISTS clause was specified.
To the documentor: the part of the manual describing the 'read_only' system variable should be clarified to state the following:
"When the read_only variable is set to ON, all operations which create/update/drop tables are rejected with the exceptions for:
1. Any operation performed by the replication thread on a slave server
2. Any operation performed by a user that has the SUPER privilege
3. Any operation that creates/updates/drops only temporary tables"
(this is the 5.0 patch, because 4.1 differs)
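An illustrative session (the table name t1 is hypothetical), run by a
non-SUPER user on a non-replication connection:

    SET GLOBAL read_only = ON;          -- done beforehand by a SUPER user
    CREATE TEMPORARY TABLE t1 (a INT);  -- allowed: temporary tables only
    DROP TEMPORARY TABLE t1;            -- allowed: temporary tables only
    DROP TEMPORARY TABLE t1;            -- "Unknown table" error,
                                        -- not a 'read-only' error
    DROP TEMPORARY TABLE IF EXISTS t1;  -- "Unknown table" warning only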
The chained operations were performed in the wrong order.
To the documentor: ENABLE|DISABLE KEYS combined with RENAME TO, and no other
ALTER TABLE clause, leads to a server crash, independently of whether the
table contains indexes or data.
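A minimal repro (table names hypothetical); before the fix, either of the
combined statements crashed the server:

    CREATE TABLE t1 (a INT, KEY (a));
    ALTER TABLE t1 DISABLE KEYS, RENAME TO t2;
    ALTER TABLE t2 ENABLE KEYS, RENAME TO t1;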
Problem: When we have a really large number (between 2^63 and 2^64)
as the left side of the mod operator, it gets improperly coerced
into a signed value.
Solution: Added a check to see whether the "negative" number is really
positive, and if so, cast it.
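For example (the exact values are illustrative):

    SELECT 18446744073709551615 % 10;
    -- before: the left side was coerced to the signed value -1,
    --         so the result was -1
    -- after:  5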
The problem was that THD::row_count_func was zeroed too. It was zeroed
as a fix for bug #4905 "Stored procedure doesn't clear for 'Rows affected'".
However, the proper solution is not to zero it, because THD::row_count_func
has already been set to -1 in mysql_execute_command() by a later fix, which
obsoletes the incorrect fix for #4905.
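THD::row_count_func is the value the ROW_COUNT() SQL function reports. A
sketch of the intended behavior, assuming the documented semantics that a
statement which affects no rows leaves ROW_COUNT() at -1 (table name
hypothetical):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2);
    SELECT ROW_COUNT();   -- 2: rows affected by the INSERT
    SELECT ROW_COUNT();   -- -1: the preceding SELECT affected no rows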
The code that set up data to be passed to user-defined functions was very
old and analyzed the "Type" of the data that was passed into the UDF, when
it really should have analyzed the "return_type", which is hard-coded for
simple Items and works correctly for complex ones like functions.
---
Added test at Sergei's behest.
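The failing path is exercised when a complex Item, such as a function
result, is passed as a UDF argument. A hypothetical repro, assuming a UDF
my_udf built as my_udf.so:

    CREATE FUNCTION my_udf RETURNS STRING SONAME 'my_udf.so';
    SELECT my_udf('literal');          -- simple Item: "Type" was enough
    SELECT my_udf(CONCAT('a', 'b'));   -- complex Item (function): only
                                       --   "return_type" describes it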
specifying DEFAULT
This was not specific to datetime. When there is no default value
for a column, and the user inserted DEFAULT, we would write
uninitialized memory to the table.
Now, insist on writing a default value: a zero-ish value, the same
one that comes from inserting NULL into a NOT NULL field.
(This is, at best, really strange behavior that comes from allowing
sloppy usage, and serves as a good reason always to run one's server
in a strict SQL mode.)
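A sketch of the affected case (table name hypothetical; non-strict SQL
mode assumed):

    CREATE TABLE t1 (a INT NOT NULL);   -- no DEFAULT clause
    INSERT INTO t1 VALUES (DEFAULT);
    -- before: uninitialized memory was written into the row
    -- after:  the zero-ish implicit default (0 for INT) is written,
    --         as when NULL is assigned to a NOT NULL column in
    --         non-strict mode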
on large length
Problem: Most (all) of the numeric inputs were being coerced into
int (32 bit) sized variables. Works OK for sane inputs; any input
larger than 2^32 (or 2^31 for signed vars) exhibited predictable
wrapping behavior (up to about 10^18) and then started having really
strange behavior past that point (since the conversion to 64-bit int
from the DECIMAL type can do weird things on out-of-range numbers).
Solution: 1) Add many tests. 2) Convert input from (u)long type to
(u)longlong. 3) Do (sometimes multiple) sanity checks on input,
keeping in mind that sometimes a negative longlong is not a negative
longlong (if the unsigned_flag is set). 4) Emulate existing behavior
with respect to negative and "small" out-of-bounds values.
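For instance, assuming REPEAT is among the affected functions (the count
is illustrative):

    SELECT REPEAT('a', 4294967297);   -- count = 2^32 + 1
    -- before: the count wrapped in a 32-bit variable, so this behaved
    --         like REPEAT('a', 1)
    -- after:  the count is kept in a (u)longlong and sanity-checked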
The Item_func_mod objects never had maybe_null set, so users had no reason
to expect that a mod expression could be NULL, and might therefore deduce
wrong results. Now, set maybe_null.
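The case that makes maybe_null necessary; in the default SQL mode a zero
divisor yields NULL:

    SELECT 10 % 0;   -- NULL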
When we write the 'query=...' string to a .frm file for views on a slave,
identifiers are not properly quoted, due to the missing
OPTION_QUOTE_SHOW_CREATE flag in thd->options.
Fix: properly set thd->options for the slave thread.
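Why the quoting matters, sketched with a hypothetical view over a
reserved-word column:

    CREATE TABLE t1 (`order` INT);
    CREATE VIEW v1 AS SELECT `order` FROM t1;
    -- without OPTION_QUOTE_SHOW_CREATE, the query string stored in the
    -- view's .frm on the slave reads: select order from t1   (invalid)
    -- with the flag set:              select `order` from t1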