Valgrind warning happens due to a missing NULL value check in
Item::get_date. The fix is to add this check.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item.cc:
added check for NULL value
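A minimal standalone sketch of the kind of check added (illustrative code, not the server's actual Item::get_date, whose signature and types differ):

    #include <cstdio>

    struct DateTime { int year, month, day; };

    // Hypothetical stand-in for Item::get_date: report NULL to the
    // caller before touching the output buffer with undefined data.
    bool get_date(const DateTime *value, DateTime *ltime) {
        if (value == nullptr) {      // the missing NULL value check
            *ltime = DateTime{0, 0, 0};
            return true;             // true signals NULL/error here
        }
        *ltime = *value;             // safe: value is known non-NULL
        return false;
    }

    int main() {
        DateTime out;
        if (get_date(nullptr, &out))
            std::puts("NULL handled without reading undefined data");
    }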
Valgrind warning happens because the null value check happens too late
in Item_func_month::val_str() (after the result string calculation).
The fix is to check the null value before the result string calculation.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.h:
check the null value before the result string calculation.
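A standalone sketch of the reordering (names are illustrative, not the real Item_func_month API): the null flag is tested before any result string work is done.

    #include <cstdio>
    #include <string>

    struct MonthFunc {
        bool null_value;
        int  month;                  // undefined when null_value is true

        const char *val_str(std::string *out) {
            if (null_value)          // check first, as the fix does
                return nullptr;      // SQL NULL: no string is built
            static const char *names[] = {"January", "February", "March"};
            *out = names[month - 1]; // computed only for non-NULL input
            return out->c_str();
        }
    };

    int main() {
        std::string buf;
        MonthFunc f{true, 0};
        std::printf("%s\n", f.val_str(&buf) ? buf.c_str() : "NULL");
    }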
ASSERTION TABLE->DB_STAT FAILED IN
SQL_BASE.CC::OPEN_TABLE() DURING I_S QUERY
This assert could be triggered if a statement requiring a name
lock on a table (e.g. DROP TRIGGER) executed concurrently
with an I_S query which also used the table.
One connection first started an I_S query that opened a given table.
Then another connection started a statement requiring a name lock
on the same table. This statement was blocked since the table was
in use by the I_S query. When the I_S query resumed and tried to
open the table again as part of get_all_tables(), it would encounter
a table instance with an old version number representing the pending
name lock. Since I_S queries ignore version checks and thus pending
name locks, it would try to continue. This caused it to encounter
the assert. The assert checked that the TABLE instance found with a
different version was a real, open table. However, since this TABLE
instance instead represented a pending name lock, the check would
fail and trigger the assert.
This patch fixes the problem by removing the assert. It is ok for
TABLE::db_stat to be 0 in this case since the TABLE instance can
represent a pending name lock.
Test case added to lock_sync.test.
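Roughly, the change relaxes an invariant of this shape (a sketch only; the real assert lived in sql_base.cc and the placeholder flag name is an assumption):

    #include <cassert>

    struct TABLE { int db_stat; bool open_placeholder; };

    // Before the fix: assert(table->db_stat) rejected placeholders.
    // After: db_stat == 0 is fine when the instance only represents
    // a pending name lock.
    void check_found_table(const TABLE *table) {
        assert(table->db_stat != 0 || table->open_placeholder);
    }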
Issue:
======
Test case correction for bug#11751148.
mysql-test/r/events_bugs.result:
Result file correction for bug#11751148.
mysql-test/t/events_bugs.test:
Test case correction for bug#11751148.
Valgrind warning happens due to a missing NULL value check in
Item_func::val_decimal. The fix is to add this check.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_func.cc:
added check for NULL value
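The same guard pattern, sketched standalone (illustrative types, not the real Item_func::val_decimal signature): return a null pointer for SQL NULL instead of converting an undefined value.

    #include <cstdio>

    struct Decimal { long long intg; };

    const Decimal *val_decimal(bool null_value, const long long *raw,
                               Decimal *buf) {
        if (null_value)          // the missing check
            return nullptr;      // NULL result, buffer left untouched
        buf->intg = *raw;        // conversion only for non-NULL input
        return buf;
    }

    int main() {
        Decimal d;
        std::printf("%s\n", val_decimal(true, nullptr, &d) ? "value"
                                                           : "NULL");
    }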
Valgrind warning happens due to an uninitialized cached_format_type field
which is used later in the Item_func_str_to_date::val_str() method.
The fix is to initialize the cached_format_type field.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
initialize the cached_format_type field
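In sketch form (illustrative types): the fix amounts to giving the field a defined value at construction, so the later read in val_str() never sees uninitialized memory.

    enum format_type { FORMAT_NONE, FORMAT_DATE, FORMAT_TIME };

    struct StrToDateFunc {
        format_type cached_format_type;
        // The added initialization; FORMAT_NONE is an assumed default.
        StrToDateFunc() : cached_format_type(FORMAT_NONE) {}
    };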
Analysis:
There are two code paths through which JOIN::exec may produce
an all-NULL row for an empty result set. One goes via the
function return_zero_rows(), when query processing detects
early that the WHERE clause is false; the other one is via
do_select() in the case of join execution.
In the case of do_select(), the problem was that the executor
didn't set TABLE::null_row to 1. As a result, when sending the only
result row, the evaluation of each field didn't detect that all
non-aggregated fields are NULL, because Field::is_null returned
false after checking that field->table->null_row was false.
Given that each non-aggregated field was not considered NULL,
select_result::send_data sent whatever was in the buffer of each
field. However, since there was no actual data in the field buffer,
send_data() accessed and sent whatever junk was in the field's
data buffer.
Solution:
Similar to the analogous case in return_zero_rows(), mark the
current row of all tables as NULL before sending the
artificially created NULL row.
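A standalone sketch of the solution (structures are illustrative; only the null_row idea mirrors the server): flag every table before emitting the artificial row, so per-field NULL tests consult the flag rather than stale buffer contents.

    #include <cstdio>
    #include <vector>

    struct Table { bool null_row = false; int buf = 42; /* junk */ };

    bool field_is_null(const Table &t) {
        return t.null_row;   // is_null consults table->null_row first
    }

    int main() {
        std::vector<Table> join_tables(2);
        for (Table &t : join_tables)  // the added step on this code path
            t.null_row = true;
        std::printf("field is %s\n",
                    field_is_null(join_tables[0]) ? "NULL" : "junk");
    }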
The assert fails due to an overflow which happens in
Item_func_int_val::fix_num_length_and_dec(), as
geometry functions have a max_length value equal to
max_field_size (4294967295U). The fix is to skip
the max_length calculation for some boundary cases.
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
skip max_length calculation
if argument max_length is near max_field_size.
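The boundary guard, sketched standalone (function and parameter names are assumptions): adding the decimal count to a max_length near max_field_size would wrap around, so the recalculation is skipped in that case.

    #include <climits>

    unsigned fix_max_length(unsigned arg_max_length, unsigned decimals) {
        if (arg_max_length > UINT_MAX - decimals)  // would overflow
            return arg_max_length;                 // skip recalculation
        return arg_max_length + decimals;          // normal case
    }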
The assertion happens due to missing initialization of unsigned_flag
for the Item_func_set_user_var object. This leads to an incorrect
calculation of the decimal field size.
The fix is to add initialization of unsigned_flag.
mysql-test/r/variables.result:
test case
mysql-test/t/variables.test:
test case
sql/item_func.cc:
add initialization of unsigned_flag.
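In sketch form (illustrative type): the fix is a plain constructor initialization, so the decimal-size computation starts from a defined flag.

    struct SetUserVarFunc {
        bool unsigned_flag;
        SetUserVarFunc() : unsigned_flag(false) {}  // the added init
    };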
Valgrind warning happens due to a missing
'end of the string' check. The fix is to
check if we reached the end of the string.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
check if we reached the end of
the string after skipping leading spaces.
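A standalone sketch of the added bounds check (names are illustrative): stop at the end of the buffer while skipping leading spaces instead of reading past it.

    #include <cctype>
    #include <cstdio>
    #include <cstring>

    const char *skip_spaces(const char *str, const char *end) {
        while (str < end && isspace((unsigned char)*str))  // bound added
            ++str;
        return str;   // may equal end: caller must handle empty input
    }

    int main() {
        const char *s = "   ";
        const char *end = s + std::strlen(s);
        std::printf("reached end: %s\n",
                    skip_spaces(s, end) == end ? "yes" : "no");
    }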
Problem: mysqlbinlog --server-id may filter out Format_description_log_events.
If mysqlbinlog does not process the Format_description_log_event,
then mysqlbinlog cannot read the rest of the binary log correctly.
This can have the effect that mysqlbinlog crashes, generates an error,
or generates output that causes mysqld to crash, generate an error,
or corrupt data.
Fix: Never filter out Format_description_log_events. Also, never filter
out Rotate_log_events.
client/mysqlbinlog.cc:
Process Format_description_log_events even when the
server_id does not match the number given by --server-id.
mysql-test/t/mysqlbinlog.test:
Add test case.
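The rule reduces to a filter of this shape (a sketch; the event-type names follow the replication protocol, the surrounding code is illustrative):

    enum Log_event_type { FORMAT_DESCRIPTION_EVENT, ROTATE_EVENT,
                          WRITE_ROWS_EVENT };

    bool keep_event(Log_event_type type, unsigned event_server_id,
                    unsigned filter_server_id) {
        if (type == FORMAT_DESCRIPTION_EVENT || type == ROTATE_EVENT)
            return true;                             // never filter these
        return event_server_id == filter_server_id;  // filter the rest
    }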
MAX_ALLOWED_PACKET AND NET_BUFFER_LENGTH ARE NOT BEING HONORED
max_allowed_packet works in conjunction with net_buffer_length.
max_allowed_packet is an upper bound of net_buffer_length.
So it doesn't make sense to set the upper bound lower than the value
it bounds.
Added a warning (using ER_UNKNOWN_ERROR and a specific message)
when this is done (in the log at startup and when setting either
the max_allowed_packet or the net_buffer_length variable).
Added a test case.
Fixed several tests that broke the above rule.
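A sketch of the sanity check (the warning text is an assumption; the real patch reuses ER_UNKNOWN_ERROR with a specific message):

    #include <cstdio>

    void check_packet_limits(unsigned long max_allowed_packet,
                             unsigned long net_buffer_length) {
        if (net_buffer_length > max_allowed_packet)
            std::fprintf(stderr,
                         "Warning: 'max_allowed_packet' (%lu) is below "
                         "'net_buffer_length' (%lu)\n",
                         max_allowed_packet, net_buffer_length);
    }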
Analysis:
A query with implicit grouping is one with aggregate functions and
no GROUP BY clause. MariaDB inherits from MySQL an SQL extension
that allows mixing aggregate functions with non-aggregate fields.
If a query with such mixed select clause produces an empty result
set, the meaning of aggregate functions is well defined - either
NULL (MIN, MAX, etc.), or 0 (count(*)). However, the non-aggregated
fields must also have some value, and the only reasonable value in
the case of empty result is NULL.
The cause of the many wrong results was that if a field is declared
as non-nullable (e.g. because it is a PK or NOT NULL), the semantic
analysis and the optimization phases treat this field as non-nullable,
and generate all related query plan elements based on this assumption.
Later during execution, these incorrectly configured/generated query
plan elements result in a wrong result because the selected fields
are treated as not null, per the assumption made during optimization.
Solution:
Detect before the context analysis phase that a query uses implicit
grouping with mixed aggregates/non-aggregates, and set all fields
as nullable. The parser already walks the SELECT clause, and
already sets Item::with_sum_func for Items that reference aggregate
functions. The patch adds a symmetric Item::with_field so that all
Items that reference an Item_field are marked during their
construction at parse time in the same way as with aggregate function
use.
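A standalone sketch of the detection (with_sum_func and with_field come from the description above; the structures are illustrative): a SELECT list mixing aggregates and fields, with no GROUP BY, has all its fields marked nullable.

    struct SelectItem { bool with_sum_func, with_field, maybe_null; };

    void mark_implicit_grouping(SelectItem *items, int n,
                                bool has_group_by) {
        bool any_sum = false, any_field = false;
        for (int i = 0; i < n; i++) {
            any_sum   |= items[i].with_sum_func;
            any_field |= items[i].with_field;
        }
        if (any_sum && any_field && !has_group_by)   // implicit grouping
            for (int i = 0; i < n; i++)
                items[i].maybe_null = true;          // empty set => NULL
    }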
Implement binlog_optimize_thread_scheduling option to allow benchmarking the
effect of running commit_ordered() for multiple transactions all in one
thread.
Issue:
------
Due to prefix match, a database like 'k' was matching 'ka', and events of 'ka' were getting displayed for SHOW EVENTS on 'k'.
Resolution:
-----------
The scan that lists events in a schema is now done on an exact match of the database (schema) name instead of just a prefix.
mysql-test/r/events_bugs.result:
modified expected file with the expected results.
mysql-test/t/events_bugs.test:
added a test case to reproduce the scenario.
sql/event_db_repository.cc:
The scan for the schema name is now done on an exact db name match.
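In sketch form (illustrative): the comparison changes from a prefix test to an exact one.

    #include <cstring>

    bool db_matches(const char *scanned_db, const char *wanted_db) {
        // before: strncmp(scanned_db, wanted_db, strlen(wanted_db)) == 0
        return std::strcmp(scanned_db, wanted_db) == 0;  // exact match
    }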
and a different fix for lp:736370:
cache the temporal expression in Item_cache_int, not in Item_string;
invoke get_datetime_value() to create a correct Item_cache_int;
implement Item_cache_int::clone, as it's a proper constant.
The problem was that the server didn't check the resulting size of a
prepared statement argument which was set using the
mysql_send_long_data() API. By calling mysql_send_long_data() several
times it was possible to create an overly big string and thus force
the server to allocate memory for it. There was no way to limit this
allocation.
The solution is to add a check of the result string's size against
the value of the max_long_data_size start-up parameter. When the
intermediate string exceeds max_long_data_size, an appropriate error
message is emitted.
We can't use the existing max_allowed_packet parameter for this
purpose since its value is limited to 1GB, and therefore using it as
a limit for data set through the mysql_send_long_data() API would
have been an incompatible change. The newly introduced
max_long_data_size parameter takes its value from max_allowed_packet
unless it is specified explicitly. This new parameter is marked as
deprecated and will eventually be replaced by the max_allowed_packet
parameter. The value of max_long_data_size can be set only at server
startup.
mysql-test/t/variables.test:
Added checking for new start-up parameter max_long_data_size.
sql/item.cc:
Added call to my_message() when accumulated string exceeds
max_long_data_size value. my_message() calls error handler
that was installed in mysql_stmt_get_longdata before call
to Item_param::set_longdata.
The error handler then sets state, last_error and last_errno
fields for current statement to values which correspond to
error which was caught.
sql/mysql_priv.h:
Added max_long_data_size variable declaration.
sql/mysqld.cc:
Added support for start-up parameter 'max_long_data_size'.
This parameter limits size of data which can be sent from
client to server using mysql_send_long_data() API.
sql/set_var.cc:
Added variable 'max_long_data_size' into list of variables
displayed by command 'show variables'.
sql/sql_prepare.cc:
Added error handler class Set_longdata_error_handler.
This handler is used to catch any errors that can be
generated during execution of Item_param::set_longdata().
The code that checks the statement's state during statement
execution was moved from Prepared_statement::execute() to
Prepared_statement::execute_loop() in order not to call
set_parameters() when the statement has already failed during
set_long_data() execution. If this hadn't been done,
the call to set_parameters() would have failed.
tests/mysql_client_test.c:
A test case for bug #56976 was added.
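The limit itself reduces to a check of this shape (a sketch; the error text paraphrases the idea, and the accumulation code around it is an assumption):

    #include <cstdio>

    static unsigned long max_long_data_size = 1024UL * 1024 * 1024;

    bool may_append_long_data(unsigned long accumulated,
                              unsigned long chunk) {
        if ((unsigned long long) accumulated + chunk > max_long_data_size) {
            std::fprintf(stderr, "Parameter set through "
                                 "mysql_send_long_data() exceeds "
                                 "'max_long_data_size' bytes\n");
            return false;   // the my_message() path: statement fails
        }
        return true;        // safe to grow the accumulated string
    }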
Analysis (BUG#719198):
The assert failed because the execution code for
partial matching is designed with the assumption that
NULLs on the left side are detected as early as possible,
and a NULL result is returned before any lookups are
performed at all.
However, in the case of an Item_cache object on the left
side, NULL was not detected properly, because detection
was done via is_null(), which Item_cache does not override,
so the call resolved to the default Item::is_null(),
which always returns FALSE.
Solution:
Implement Item_cache::is_null().
******
Analysis (BUG#730604):
The method Item_field::is_null() determines if an item is NULL from its
Item_field::field object. However, for Item_fields that represent internal
temporary tables, Item_field::field represents the field of the original
table that was the source for the temporary table (in this case t1.f3).
Both in the committed test case and in the original bug report, the
current value of t1.f3 is not NULL. This results in an incorrect count of NULLs
for this column. As a consequence, all related Ordered_key buffers are
allocated with incorrect sizes. Depending on the exact query and data,
these incorrect sizes result in various crashes or failed asserts.
Solution:
The correct value of the current field of the internal temp table is
in Item_field::result_field. This value is determined by
Item::is_null_result().
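Both fixes reduce to answering the NULL question from the right place; in sketch form (class layout is illustrative):

    struct Cache { bool value_cached; bool cached_null; };

    // BUG#719198: a cache must answer from its cached flag instead of
    // falling back to the default Item::is_null(), which returns FALSE.
    bool cache_is_null(const Cache &c) {
        return c.value_cached && c.cached_null;
    }

    // BUG#730604 (in words): for an internal temp table, the current
    // value lives in Item_field::result_field, so NULL-ness must be
    // read from there (Item::is_null_result()), not from the original
    // table's field.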