Test was flaky on some machines and showed spurious
failures due to races.
The new-and-improved test makes do with fewer statements,
no mysqltest variables, and no backticks. It should
hopefully be more robust. Heck, it's debatable whether we
should have a test for this, anyway.
updates
An attempt to execute a trigger or stored function with a multi-UPDATE
which used - but didn't update - a table that was also used by
the calling statement led to an error. Read-only references to
tables used in the calling statement should be allowed.
This problem was caused by the fact that the check for conflicting
use of tables in SPs/triggers was performed in open_tables(),
and in the case of multi-UPDATE the exact lock types are not yet known
at that stage.
We solve the problem by moving this check to lock_tables(), so
that it can be performed after the exact lock types for the tables used
by the multi-UPDATE have been determined.
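A hypothetical sketch of the kind of scenario this describes, assuming the
multi-UPDATE sits inside the routine (table and function names are made up,
not taken from the actual test case):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (1);

  DELIMITER //
  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    -- Multi-table UPDATE inside the function: t2 is updated, t1 is only read.
    UPDATE t2, t1 SET t2.b = t2.b + 1 WHERE t2.b = t1.a;
    RETURN 1;
  END//
  DELIMITER ;

  -- The calling statement also uses t1. Since the function's multi-UPDATE
  -- merely reads t1, this should be allowed, but it used to fail with a
  -- conflicting-use error.
  SELECT f1() FROM t1;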
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a union have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
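A hypothetical illustration of the case the fix targets (table and column
names are made up):

  CREATE TABLE t1 (f FLOAT(5,2));
  CREATE TABLE t2 (f FLOAT(5,2));

  -- Both SELECTs contribute FLOAT(5,2) in the same position, so the UNION
  -- result column should keep FLOAT(5,2) instead of being converted to a
  -- plain FLOAT.
  CREATE TABLE t3 SELECT f FROM t1 UNION SELECT f FROM t2;
  SHOW CREATE TABLE t3;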
The problem is that the read and write methods of the shared
memory transport (protocol) didn't react to asynchronous close
events, which could lead to a lockup as the client would wait
(until timeout) for a server response that would never come.
The solution is to also wait for close events while waiting
for I/O from or to the server.
Bug report and patch submitted by: Armin Schöffmann
including modifications according to code review
+ backport of the fix for
Bug 41932 funcs_1: is_collation_character_set_applicability path
too long for tar
which was missing in 5.0 (just a renaming of two files)
When an alias was specified after NAME_CONST, the alias name was overwritten.
NAME_CONST now re-sets the field's name in fix_fields() only if there is no
alias. If there is an alias, NAME_CONST doesn't re-set the field's name and the
alias is kept.
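For example (a hedged sketch; the names are arbitrary):

  SELECT NAME_CONST('given_name', 42) AS alias_name;

  -- The result column must be headed "alias_name"; before the fix the
  -- alias was overwritten and the column came back as "given_name".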
due to name_const substitution
Problem:
"In general, statements executed within a stored procedure
are written to the binary log using the same rules that
would apply were the statements to be executed in standalone
fashion. Some special care is taken when logging procedure
statements because statement execution within procedures
is not quite the same as in non-procedure context".
For example, each reference to a local variable in SP's
statements is replaced by NAME_CONST(var_name, var_value).
Queries like
"CREATE TABLE ... SELECT FUNC(local_var ..."
are logged as
"CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ..."
which leads to different field names and
might result in an "Incorrect column name" error if var_value is long enough.
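A hypothetical sketch of a statement that hits this (names and lengths are
illustrative):

  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE local_var VARCHAR(100);
    SET local_var = REPEAT('a', 100);
    -- No alias, so the result column is named after the expression. In the
    -- binary log the expression becomes
    -- CONCAT(NAME_CONST('local_var', 'aaa...')), which differs from the
    -- original name and can exceed the column-name length limit.
    CREATE TABLE t1 SELECT CONCAT(local_var);
  END//
  DELIMITER ;
  CALL p1();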
Fix: in 5.x we'll issue a warning in such a case.
In 6.0 we should get rid of NAME_CONST().
Note: this issue and change should be described in the documentation
("Binary Logging of Stored Programs").
Fine-tuning. Broke out the comparison into a method at
Davi's suggestion. Clarified comments. Reverting the
test case, which I find too brittle; the proper test
case goes into 5.1+.
(Pushing for Azundris)
We allow security-contexts with NULL users (for
system-threads and for unauthenticated users).
If a non-SUPER-user tried to KILL such a thread,
we tried to compare the user-fields to see whether
they owned that thread. Comparing against NULL was
not a good idea.
If the KILLer does not have the SUPER privilege, we now
specifically check whether both KILLer and KILLee
have a non-NULL user before testing for string
equality. If either is NULL, we reject the KILL.
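An illustrative way to hit the old code path (the thread id is made up):

  -- Connected as a user without the SUPER privilege, pick the id of a
  -- thread whose security context has no user (e.g. a system thread shown
  -- in SHOW PROCESSLIST) and try to kill it:
  KILL 42;

  -- The ownership test used to compare user names even when one side was
  -- NULL; for non-SUPER users such a KILL is now rejected outright.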
The issue turned out to be two-fold.
The table map event was recorded into the binlog with an incorrect size
when the number of columns exceeded 251.
The row-based event also recorded and restored its m_width member
incorrectly under the same conditions.
Fixed by correcting m_data_size and m_width.
After a table is compressed by the myisampack utility,
opening the table in the server produces valgrind warnings.
This happens because when we try to read a record into the buffer
we always assume that the remaining buffer to read is equal
to the word size (2, 4 or 8 bytes) we read. Sometimes the
remaining buffer is smaller than the word size, and trying to read an
entire word then results in valgrind errors.
Fixed by reading byte by byte when we detect that the remaining buffer
is smaller than the word size.
expired timeout on debx86-b in PB
Turned off the general log when importing the DB dump in the test
case for bug #41486, due to a bug in the CSV engine code that
makes logging long SQL queries too slow.