Before sorting, the HAVING condition is split into two parts: the
first part is the condition related to the sorted table, and the rest
remains as the HAVING part. The extraction of the HAVING part did not
take into account that some of the conditions might be non-const but
have 'used_tables' == 0 (independent subqueries), and because of that
these conditions were cut off by the make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the third
argument of make_cond_for_table(). This allows extracting the
elements which belong to the sorted table and, in addition, the
elements which are independent subqueries.
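A hedged illustration of the affected query shape (table and column
names are hypothetical, not the original test case): the HAVING
clause mixes a condition on the grouped table with an uncorrelated
subquery whose used_tables map is 0:

CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (c INT);
SELECT a, MAX(b) FROM t1
GROUP BY a
HAVING a > 0 AND (SELECT MIN(c) FROM t2) > 0
ORDER BY a;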
Bug#11764671 57533: UNINITIALISED VALUES IN COPY_AND_CONVERT (SQL_STRING.CC) WITH CERTAIN CHA
When ROUND() evaluates a decimal result it uses the Item::decimals
value as the fraction (scale) of the result. In some cases
Item::decimals is greater than the real fraction of the result, and
uninitialised memory of the result (decimal) buffer can be used in
further calculations. The issue was introduced by the fix for
Bug#33143. The fix is to remove the erroneous assignment.
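For illustration only (not the original repro): ROUND() can trim the
scale of its argument, so the real fraction of the result can be
smaller than the scale the argument declares:

SELECT ROUND(123.456789, 2);
-- the result carries 2 fractional digits although the argument
-- declares a scale of 6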
The bug was fixed by the patch for Bug#11763109 - 55779: SELECT DOES
NOT WORK PROPERLY IN MYSQL SERVER VERSION "5.1.42 SUSE MYSQL" (the
exact same fix as was proposed for this bug). However, since the
motivation for the two bug reports was completely different, it still
makes sense to push the test case.
This patch contains only the test case.
Some multibyte sequences could be considered by the my_mbcharlen()
function to be a multibyte character, while the more exact
my_ismbchar() check disagrees. In such a case the multibyte sequence
is pushed into the 'stack' buffer, which is too small to accommodate
the sequence.
The fix is to allocate the stack buffer according to the maximum
character length.
Fix for --vs-config applied
Find.pm incorrectly tested an uninitialized local variable instead
of the global one; corrected.
Find.pm is also wrong in 5.5: it uses a non-existent global variable.
Fix this when merging up.
There are two problems with ANALYSE():
1. Memory leak
It happens because do_select() can overwrite the JOIN::procedure
field (with a zero value in our case) and the JOIN destructor does
not free the memory allocated for JOIN::procedure. The fix is to save
the original JOIN::procedure before the do_select() call and restore
it after do_select() execution.
2. Wrong result
If the ANALYSE() procedure is used for a statement with a LIMIT
clause, it could return an empty result set. It happens because the
analyse::end_of_records() call is missing: the first end_send() call
returns NESTED_LOOP_QUERY_LIMIT, and the second end_send() call with
the end_of_records flag enabled never happens. The fix is to return
NESTED_LOOP_OK from end_send() if a procedure is active (see the
sketch after this list).
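A hedged sketch of the second problem (hypothetical table and column
names, not the original test case):

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (3);
SELECT a FROM t1 LIMIT 1 PROCEDURE ANALYSE();
-- before the fix this could return an empty result set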
When we create the temporary result table for a UNION, an incorrect
max_length is used for the YEAR field, which leads to an incorrect
field value and an incorrect result string length, as the YEAR field
value calculation depends on the field length.
The fix is to use the underlying item's max_length for the
Item_sum_hybrid::max_length initialization.
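A hedged illustration (hypothetical names, not the original test
case): Item_sum_hybrid covers MIN()/MAX(), so a UNION over such an
aggregate of a YEAR column materializes a YEAR field in the temporary
result table:

CREATE TABLE t1 (y YEAR);
INSERT INTO t1 VALUES (1999), (2001);
SELECT MAX(y) FROM t1 UNION SELECT MAX(y) FROM t1;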
The Valgrind warning happens due to an early null-value check in
Item_func_in::fix_length_and_dec() (before item evaluation). As a
result, NULL-valued items with uninitialized values are placed into
the value array, which leads to Valgrind warnings during the value
array sorting.
The fix is to check the null value after item evaluation; the item is
evaluated in the in_array::set() method.
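A hedged illustration (not the original test case): a constant NULL
among the values of an IN list ends up in the sorted value array:

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
SELECT a FROM t1 WHERE a IN (2, NULL, 3);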
SELECT from a view with an underlying HAVING clause failed with the
message: "1356: View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights to use them"
The bug is a regression of the fix for bug 11750328 - 40825 (a
similar case, but there the HAVING clause references an aliased
field).
In the old fix for bug 40825 the Item_field::name_length value has
been used in place of the real length of Item_field::name. However,
in some cases Item_field::name_length is not in sync with the
actual name length (TODO: combine name and name_length into a
solid String field).
The Item_ref::print() method has been modified to calculate the
actual name length every time.
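A hedged sketch of the affected construct (hypothetical names, not
the original test case): the view's HAVING clause references a column
directly rather than through an alias:

CREATE TABLE t1 (a INT, b INT);
CREATE VIEW v1 AS
SELECT a, MAX(b) FROM t1 GROUP BY a HAVING MAX(b) > 0;
SELECT * FROM v1;
-- before the fix this could fail with error 1356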
create_schema if auto-generate-sql also set.
mysqlslap uses a schema to run its tests on and later drops it if
auto-generate-sql is used. This can be a problem if the schema is an
already existing one. If create-schema is used together with the
auto-generate-sql option, mysqlslap drops the specified database
while performing the cleanup.
Fixed by introducing an option, --no-drop, which, if used, prevents
the dropping of the schema at the end of the test.
on lctn2 systems
There was a local variable in get_all_tables() to store the
"original" value of the database name, as it can get lowercased
depending on the lower_case_table_names value.
get_all_tables() iterates over database names and for each
database iterates over the tables in it.
The "original" db name was assigned in the table names loop.
Thus the first table is ok, but the second and subsequent tables
get the lowercased name from processing the first table.
Fixed by moving the assignment of the original database name
from the inner (table name) to the outer (database name) loop.
Test suite added.
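A hedged sketch of the symptom (hypothetical names; assumes a server
running with lower_case_table_names=2, e.g. on a case-insensitive
file system):

CREATE DATABASE MixedCase;
CREATE TABLE MixedCase.t1 (a INT);
CREATE TABLE MixedCase.t2 (a INT);
SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'MixedCase';
-- before the fix, rows after the first could report the lowercased
-- schema name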
In string context the MIN() and MAX() functions do not take into
account the unsignedness of an UNSIGNED BIGINT argument column.
For example:
CREATE TABLE t1 (a BIGINT UNSIGNED);
INSERT INTO t1 VALUES (18446668621106209655);
SELECT CONCAT(MAX(a)) FROM t1;
returns -75452603341961.
The Valgrind warning happens because the null-value check happens
too late in Item_func_month::val_str() (after the result string
calculation). The fix is to check the null value before the result
string calculation.
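A hedged illustration (not the original test case): MONTH()
evaluated in string context with a NULL argument exercises the late
null-value check:

SELECT CONCAT(MONTH(NULL));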
ASSERTION TABLE->DB_STAT FAILED IN
SQL_BASE.CC::OPEN_TABLE() DURING I_S Q
This assert could be triggered if a statement requiring a name
lock on a table (e.g. DROP TRIGGER) executed concurrently
with an I_S query which also used the table.
One connection first started an I_S query that opened a given table.
Then another connection started a statement requiring a name lock
on the same table. This statement was blocked since the table was
in use by the I_S query. When the I_S query resumed and tried to
open the table again as part of get_all_tables(), it would encounter
a table instance with an old version number representing the pending
name lock. Since I_S queries ignore version checks and thus pending
name locks, it would try to continue. This caused it to encounter
the assert. The assert checked that the TABLE instance found with a
different version, was a real, open table. However, since this TABLE
instance instead represented a pending name lock, the check would
fail and trigger the assert.
This patch fixes the problem by removing the assert. It is ok for
TABLE::db_stat to be 0 in this case since the TABLE instance can
represent a pending name lock.
Test case added to lock_sync.test.
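A hedged two-connection sketch of the race (hypothetical object
names; the actual pushed test lives in lock_sync.test):

-- connection 1: a long-running I_S query that opens t1
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 't1';
-- connection 2: a statement needing a name lock on t1, issued while
-- the I_S query is still using the table
DROP TRIGGER t1_bi;
-- before the fix, the resumed I_S query could hit the db_stat assert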
The Valgrind warning happens due to the uninitialized
cached_format_type field, which is used later in the
Item_func_str_to_date::val_str() method.
The fix is to initialize the cached_format_type field.
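A hedged illustration (not the original test case): STR_TO_DATE()
evaluated in string context goes through val_str():

SELECT STR_TO_DATE('2011-01-01', '%Y-%m-%d');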
The assert fails due to an overflow in
Item_func_int_val::fix_num_length_and_dec(), because geometry
functions have a max_length value equal to max_field_size
(4294967295U). The fix is to skip the max_length calculation for some
boundary cases.
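A purely hypothetical illustration (not the original test case):
Item_func_int_val covers FLOOR()/CEILING(), so applying one of them
to a geometry-valued expression feeds the huge max_length into the
integer-rounding length calculation:

SELECT FLOOR(POINT(1, 1));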
The assertion happens due to the missing initialization of
unsigned_flag for the Item_func_set_user_var object. It leads to an
incorrect calculation of the decimal field size.
The fix is to add the initialization of unsigned_flag.
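A hedged, hypothetical illustration (not the original test case):
materializing a @var := ... assignment into a table forces a decimal
field to be created from the Item_func_set_user_var item:

CREATE TABLE t1 AS SELECT @v := 1.5 AS c;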
Problem: mysqlbinlog --server-id may filter out Format_description_log_events.
If mysqlbinlog does not process the Format_description_log_event,
then mysqlbinlog cannot read the rest of the binary log correctly.
This can have the effect that mysqlbinlog crashes, generates an error,
or generates output that causes mysqld to crash, generate an error,
or corrupt data.
Fix: Never filter out Format_description_log_events. Also, never filter
out Rotate_log_events.
ARE NOT BEING HONORED
max_allowed_packet works in conjunction with net_buffer_length:
max_allowed_packet is an upper bound of net_buffer_length, so it does
not make sense to set max_allowed_packet below net_buffer_length.
Added a warning (using ER_UNKNOWN_ERROR and a specific message) when
this is done, both in the log at startup and when setting either the
max_allowed_packet or the net_buffer_length variable.
Added a test case.
Fixed several tests that broke the above rule.
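A hedged illustration (values are arbitrary): lowering
max_allowed_packet below the current net_buffer_length now produces
the warning:

SET GLOBAL net_buffer_length = 1048576;
SET GLOBAL max_allowed_packet = 16384;
SHOW WARNINGS;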
The slave was not able to find the correct row in the innodb
table, because the row fetched from the innodb table would not
match the before image. This happened because the (don't care)
bytes in the NULLed fields would change once the row was stored
in the storage engine (from zero to the default value). This
would make the bulk memory comparison (using memcmp) fail.
We fix this by taking a preventive measure and avoiding memcmp
for tables that contain nullable fields. Therefore, we protect
the slave search routine from engines that return arbitrary
values for don't care bytes (in the nulled fields). Instead, the
slave thread will only check null_bits and those fields that are
not set to NULL when comparing the before image against the
storage engine row.
Issue:
------
Due to prefix matching, a database like 'k' matched 'ka', and the
events of 'ka' were displayed for SHOW EVENTS on 'k'.
Resolution:
-----------
The scan for listing events in a schema is now done on an exact match
of the database (schema) name instead of just a prefix.
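A hedged illustration (hypothetical names):

CREATE DATABASE k;
CREATE DATABASE ka;
CREATE EVENT ka.e1 ON SCHEDULE EVERY 1 DAY DO SET @x = 1;
SHOW EVENTS FROM k;
-- before the fix this also listed ka.e1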
There is a race between two threads: user thread and the dump
thread. The former sets a debug instruction that makes the latter wait
before processing an Xid event. There can be cases in which the dump
thread has not yet processed the previous Xid event, causing it to
wait one Xid event too soon, thus causing sync_slave_with_master
never to resume.
We fix this by moving the instructions that set the debug variable
after calling sync_slave_with_master.