Teach both valgrind and asan to catch this bug:
mem_heap_t* heap = mem_heap_create(1024);
byte* p = reinterpret_cast<byte*>(heap) + sizeof(mem_heap_t);
*p = 123;
Overflows of the last allocation in a block will be caught too.
mem_heap_create_block(): poison newly allocated memory
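With the unused part of a block poisoned, both tools flag the write above.
A sketch of the poisoning, using InnoDB's UNIV_MEM_FREE() wrapper
(block and len are placeholders; the exact offsets in the real function
may differ):

  /* mem_heap_create_block(): mark everything after the block header as
     inaccessible until mem_heap_alloc() hands it out */
  UNIV_MEM_FREE((byte*) block + MEM_BLOCK_HEADER_SIZE,
                len - MEM_BLOCK_HEADER_SIZE);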
PREBUILT->TABLE->N_MYSQL_HANDLES_OPENED == 1
ANALYSIS:
=========
Adding a unique index to an InnoDB table that is locked as
multiple instances may trigger an InnoDB assert.
When we add a primary key or a unique index, we need to
drop the original table and rebuild all indexes. InnoDB
expects that only the instance of the table that is being
rebuilt is open during the process. In the current
scenario we have opened multiple instances of the table.
This triggers an assert during table rebuild.
'Locked_tables_list' encapsulates a list of all
instances of tables locked by LOCK TABLES statement.
FIX:
===
We now temporarily close all instances of the
table except the one that is being altered and later
reopen them via Locked_tables_list::reopen_tables().
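A minimal sketch of the approach; only Locked_tables_list::reopen_tables()
is named above, while the close_all_tables_for_name() helper and its
skip_table argument are assumptions about the surrounding API:

  /* Close every open instance of the table except the one being altered,
     keeping the LOCK TABLES list entries so they can be reopened later. */
  close_all_tables_for_name(thd, table->s,
                            false,   /* keep the locked tables list entries */
                            table);  /* assumed: the instance being altered */

  /* ... drop the original table and rebuild all indexes ... */

  thd->locked_tables_list.reopen_tables(thd);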
my_safe_alloca()/my_safe_afree() work as alloca() or malloc()/free()
depending on the memory size to allocate, that is, depending on
reclength here. They only work correctly if reclength doesn't
change in the middle.
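The size-based switch can be pictured with this illustrative
reimplementation (not the real macros; the threshold name and value are
made up):

  #include <alloca.h>
  #include <stdlib.h>

  /* Small requests come from the stack, large ones from the heap, so the
     free path must take the same size-based decision as the alloc path. */
  #define SAFE_ALLOCA_MAX 4096   /* made-up cutoff */
  #define my_safe_alloca_demo(size) \
    ((size) <= SAFE_ALLOCA_MAX ? alloca(size) : malloc(size))
  #define my_safe_afree_demo(ptr, size) \
    do { if ((size) > SAFE_ALLOCA_MAX) free(ptr); } while (0)

If reclength changes between the allocation and the free, the two macros
can take different branches, which is exactly the failure mode described
above.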
Description:- The MySQL client crashes when trying to connect
to a fake server which is sending incorrect packets.
Analysis:- The MySQL client crashes when it tries to read the
server version details.
Fix:- A check is added in "read_one_row()".
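The kind of guard involved can be sketched on its own (illustrative names,
not the actual patch):

  #include <cstring>
  #include <cstddef>

  /* Copy a length-prefixed field out of a server packet, refusing to
     read past the end of the packet buffer. */
  static bool copy_field(const unsigned char *pos, const unsigned char *end,
                         size_t len, char *out, size_t out_size)
  {
    if (pos > end || len > (size_t)(end - pos) || len >= out_size)
      return false;       /* malformed packet: fail instead of overreading */
    memcpy(out, pos, len);
    out[len]= '\0';
    return true;
  }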
PREVIOUS TO MYSQL 8.0
Description :
-------------
The mysqld--defaults-file test fails when the test suite is run from a
non-canonical path, which happens when the current working directory
when mysql-test-run.pl is started contains a symbolic link.
The problem is that this test case uses --replace-result with
$MYSQL_TEST_DIR. This variable is a potentially non-canonical path
based on the current working directory when mtr is started. However,
the path in the expected error message from mysqld contains a
canonical path. This means the message does not contain $MYSQL_TEST_DIR
when mtr's working directory differs from its canonical form.
Because other tests produce output that may contain non-canonical
paths, making $MYSQL_TEST_DIR always canonical is not a fix.
Fix :
-----
Introduced a new environment variable '$ABS_MYSQL_TEST_DIR', which will
contain the canonical path to the test directory, and replaced
$MYSQL_TEST_DIR with the new variable in the main.mysqld--defaults-file
test file.
This is a back-port of BUG#24579973.
Change-Id: I3b8df6f2d7ce2b04e188a896d76250cc1addbbc1
Problem
=======
When decoding corrupt binary log files, the server may misbehave
without detecting the corruption of the events.
This patch makes binary log decoding in the MySQL server more resilient.
Fixes for events de-serialization and apply
===========================================
@sql/log_event.cc
Query_log_event::Query_log_event: added a check to ensure the query length
respects the event buffer limits (the general bounds-check pattern is
sketched after this list).
Query_log_event::do_apply_event: extended a debug print, added a check
on the character set to determine whether it is "parseable", and
verified that the database name is valid for the system collation.
Start_log_event_v3::do_apply_event: report an error on applying a
non-supported binary log version.
Load_log_event::copy_log_event: added a check on the table_name length.
User_var_log_event::User_var_log_event: added checks to avoid reading
out of buffer limits.
User_var_log_event::do_apply_event: reported a sanity check error
properly and added individual sanity checks for variable types that
expect a fixed (or minimum) number of bytes to be read.
Rows_log_event::Rows_log_event: added checks to avoid reading out of
buffer limits.
@sql/log_event_old.cc
Old_rows_log_event::Old_rows_log_event: added a sanity check to avoid
reading out of buffer limits.
@sql/sql_priv.h
Added a sanity check to the available_buffer() function.
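The bounds-check pattern shared by the changes above, as a standalone
sketch (the helper name is illustrative, not the server's
available_buffer()):

  #include <cstddef>

  /* Bytes still readable between the cursor and the end of the buffer. */
  static inline size_t bytes_left(const char *buf, size_t buf_len,
                                  const char *ptr)
  {
    return ptr >= buf + buf_len ? 0 : (size_t)(buf + buf_len - ptr);
  }

Each constructor then compares a length it has just read from the event
against bytes_left() before copying, and flags the event as invalid when
the check fails instead of reading past the buffer.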
Whenever one copies an IO_CACHE struct, one must remember to call
setup_io_cache(); otherwise the IO_CACHE's current_pos and end_pos
self-references will keep pointing to the previous struct's memory,
which could go out of scope. Commit 9003869390
fixes this problem in a more general fashion by removing the
self-references altogether, but for 5.5 we'll keep the old behaviour.
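A sketch of the rule, using the mysys setup_io_cache() call (the cache
variables are placeholders):

  IO_CACHE copy_cache= original_cache;  /* shallow struct copy */
  setup_io_cache(&copy_cache);          /* re-point current_pos/end_pos at
                                           copy_cache instead of the source */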
To disable debug instrumentation, save and restore the original value
of the variable DEBUG_DBUG. Assigning -d,... will enable the output of
a lot of unrelated DBUG messages to the server error log.
mem_heap_free_heap_top(): Remove UNIV_MEM_ASSERT_W() and unpoison
the memory region first, because part of it may have been poisoned
by an earlier mem_heap_free_top() call.
Poison the address range at the end.
mem_heap_block_free(): Poison the address range at the end.
UNIV_MEM_ASSERT_AND_ALLOC(): Replace with UNIV_MEM_ALLOC().
We want to keep the address ranges poisoned (inaccessible) as
long as possible.
UNIV_MEM_ASSERT_AND_FREE(): Replace with UNIV_MEM_FREE().
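The intended ordering, sketched with the UNIV_MEM_* wrappers named above
(the pointer and length variables are placeholders):

  /* mem_heap_free_heap_top(): part of the range may already be poisoned by
     an earlier mem_heap_free_top() call, so unpoison it before using it. */
  UNIV_MEM_ALLOC(old_top, bytes_freed);
  /* ... trim the blocks above old_top and adjust the free offsets ... */
  UNIV_MEM_FREE(old_top, bytes_freed);   /* re-poison the freed range */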
MDEV-14957: JOIN::prepare gets unusable "conds" as argument
Do not touch merged derived tables (the merge is irreversible).
Fix the first argument of in_optimizer for calls possible before fix_fields().
Problem:-
If we create a table using MyISAM/Aria, then the following crashes the server:
CREATE TABLE t1(a bit(1), b int auto_increment , index(a,b));
insert into t1 values(1,1);
Or this sequence of statements:
CREATE TABLE t1 (b BIT(1), pk INTEGER AUTO_INCREMENT PRIMARY KEY);
ALTER TABLE t1 ADD INDEX(b,pk);
INSERT INTO t1 VALUES (1,b'1');
ALTER TABLE t1 DROP PRIMARY KEY;
Reason:-
The reason for this is
1st- find_ref_key() finds what key an auto_increment field belongs to by
comparing key_part->offset and field->ptr. But BIT fields might have
zero length in the record, so a key might have many key parts with the
same offset. That is, comparing offsets cannot uniquely identify the
correct key part.
2nd- Since next_number_key_offset is zero, MyISAM/Aria will think that
the auto_increment field is in the first part of the key.
3rd- MyISAM/Aria will call retrieve_auto_key, which will see the first
key_part field as a BIT field and call assert(0).
Solution:-
Many key parts might have the same offset, but BIT fields do not
support auto_increment. So, we can skip all key parts over BIT fields,
and then comparing offsets will be unambiguous.
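A sketch of the adjusted matching loop (simplified, not the literal patch;
field_offset is a placeholder for the offset derived from field->ptr):

  /* find_ref_key(): locate the key part the auto_increment field belongs
     to, skipping BIT fields, which may occupy zero bytes in the record and
     can therefore share an offset with the following key part. */
  for (uint part= 0; part < key->user_defined_key_parts; part++)
  {
    if (key->key_part[part].field->type() == MYSQL_TYPE_BIT)
      continue;                            /* BIT cannot be auto_increment */
    if (key->key_part[part].offset == field_offset)
      return part;                         /* now an unambiguous match */
  }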
/home/kevg/work/mariadb/sql/sql_partition.cc:286:47: error: cannot initialize a parameter of type 'HA_CREATE_INFO *' (aka 'st_ha_create_information *') with an rvalue of type 'ulonglong' (aka 'unsigned long long')
(ulonglong)0, (uint)0);
^~~~~~~~~~~~
/home/kevg/work/mariadb/sql/partition_info.h:281:72: note: passing argument to parameter 'info' here
bool set_up_defaults_for_partitioning(handler *file, HA_CREATE_INFO *info,
^
The assertion failure was caused by an incorrectly set read_set for
functions in the ORDER BY clause in part of a union, when we are using
a mergeable view and the order by clause can be skipped (removed).
An ORDER BY clause can be skipped if it belongs to one part of a UNION,
as ordering that part has no meaningful effect on the result set when
multiple SELECT queries are UNIONed. The server is aware of this
optimization and tries to remove the ORDER BY clause before
JOIN::prepare. The problem is that we need to throw an error when the
ORDER BY clause contains invalid columns. To do this, we attempt to
resolve the ORDER BY expressions and then drop them if resolution
succeeded. However, ORDER BY resolution had the side
effect of adding the expressions to the all_fields list, which is used
to construct temporary tables to store the result. We may ignore
the ORDER BY clause, but the tmp table still tries to compute the
values for the expressions, even though the columns are never used.
The assertion only shows itself if the order by clause contains members
which were not previously in the select list, and are part of a
function.
There is an additional question as to why this only manifests when using
VIEWS and not when using a regular table. The difference lies with the
"reset" of the read_set for the temporary table during
SELECT_LEX::update_used_tables() in JOIN::optimize(). The changes
introduced in fdf789a7ea cleared the
read_set when a mergeable view is encountered in the TABLE_LIST
definition.
Upon initial order_list resolution, the table's read_set is updated
correctly. JOIN::optimize() will only reset the read_set if it
encounters a VIEW. Since we no longer have an ORDER BY clause in
JOIN::optimize(), we never get to correctly update the read_set again.
Other relevant commit by Timour, which first introduced the order
resolution when we "can_skip_sort_order":
883af99e7d
Solution:
Don't add the resolved ORDER BY elements to all_fields. We only resolve
them to check if an error should be returned for the query. Ignore them
completely otherwise.
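Conceptually (the add_to_all_fields flag is a hypothetical name used only
to illustrate the idea, not the real parameter list):

  /* Resolve the ORDER BY items so that invalid columns still raise an
     error, but do not append them to all_fields, so no tmp-table column
     is ever created for them. */
  if (setup_order(thd, ref_ptrs, tables, fields, all_fields, order_list,
                  false /* hypothetical: do not add to all_fields */))
    DBUG_RETURN(true);    /* invalid column in ORDER BY: report the error */
  /* The ORDER BY list itself is then discarded, since sorting one part of
     a UNION has no effect on the final result. */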
It has its limitations, e.g. it assumes that only one gdb and
only one valgrind process is running. And the hard-coded
one-second delay might be too short for slow machines.
Still, it's better than "doesn't work at all".
instrument table->record[0], table->record[1] and share->default_values.
One should not access the record image beyond share->reclength, even
if table->record[0] has some unused space after it (functions that
work with records might get a copy of the record as an argument,
and that copy, not being record[0], might not have this buffer space
at the end). See b80fa4000d and 444587d8a3
TRASH was mapped to TRASH_FREE and was supposed to be used for memory
that should not be accessed anymore, while TRASH_ALLOC() is to be
used for uninitialized but to-be-used memory.
But sometimes TRASH() was used in the latter sense.
Remove the TRASH() macro; always use explicit TRASH_ALLOC() or TRASH_FREE().
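Intended usage after the change, as a short sketch (buffer and size are
placeholders):

  TRASH_ALLOC(buf, buf_size);  /* about to be used: filled with garbage
                                  and marked as undefined */
  /* ... the buffer is in use ... */
  TRASH_FREE(buf, buf_size);   /* released: filled with garbage and marked
                                  as not addressable */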
InnoDB limited the maximum number of bytes per character to 4.
But, the filename character set that was introduced in MySQL 5.1
uses up to 5 bytes per character.
To allow InnoDB tables to be created with wider characters, let
us split the mbminmaxlen fields into mbminlen and mbmaxlen, and increase
the limit to 7 bytes per character. This will increase the payload size
of dtype_t and dict_col_t by one bit. The storage size will be unchanged
(54 bits and 77 bits will use the same number of bytes as the
previous sizes 53 and 76 bits).
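A sketch of the resulting layout in dtype_t/dict_col_t (only the relevant
members; the 3-bit widths follow from the one-bit growth described above):

  unsigned mbminlen:3;  /* minimum length of a character, in bytes */
  unsigned mbmaxlen:3;  /* maximum length of a character, in bytes (up to 7) */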