To disable debug instrumentation, save and restore the original value
of the variable DEBUG_DBUG. Assigning -d,... will enable the output of
a lot of unrelated DBUG messages to the server error log.
mem_heap_free_heap_top(): Remove UNIV_MEM_ASSERT_W() and unpoison
the memory region first, because part of it may have been poisoned
by an earlier mem_heap_free_top() call.
Poison the address range at the end.
mem_heap_block_free(): Poison the address range at the end.
UNIV_MEM_ASSERT_AND_ALLOC(): Replace with UNIV_MEM_ALLOC().
We want to keep the address ranges poisoned (inaccessible) for as
long as possible.
UNIV_MEM_ASSERT_AND_FREE(): Replace with UNIV_MEM_FREE().
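A minimal standalone sketch of this ordering, assuming the Valgrind
client requests that UNIV_MEM_ALLOC()/UNIV_MEM_FREE() wrap in univ.i;
the heap layout and names are illustrative, not the actual InnoDB code:

  #include <valgrind/memcheck.h>
  #include <cstddef>

  #define UNIV_MEM_ALLOC(addr, size) VALGRIND_MAKE_MEM_UNDEFINED(addr, size)
  #define UNIV_MEM_FREE(addr, size)  VALGRIND_MAKE_MEM_NOACCESS(addr, size)

  struct heap_block_sketch { unsigned char* buf; size_t top; };

  /* Shrink the block to new_top.  Part of the freed tail may already
  be poisoned by an earlier free, so unpoison the whole range first
  (instead of asserting it writable), and poison it again at the very
  end. */
  static void free_heap_top_sketch(heap_block_sketch* b, size_t new_top)
  {
    size_t len= b->top - new_top;
    UNIV_MEM_ALLOC(b->buf + new_top, len);
    /* ... overwrite the freed tail with trash bytes here ... */
    b->top= new_top;
    UNIV_MEM_FREE(b->buf + new_top, len);
  }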
MDEV-14957: JOIN::prepare gets unusable "conds" as argument
Do not touch merged derived tables (the merge is irreversible).
Fix the first argument of in_optimizer for calls that may happen
before fix_fields().
Problem:
If we create a table using MyISAM/Aria, then this crashes the server:
CREATE TABLE t1(a bit(1), b int auto_increment , index(a,b));
insert into t1 values(1,1);
Or this sequence of statements:
CREATE TABLE t1 (b BIT(1), pk INTEGER AUTO_INCREMENT PRIMARY KEY);
ALTER TABLE t1 ADD INDEX(b,pk);
INSERT INTO t1 VALUES (1,b'1');
ALTER TABLE t1 DROP PRIMARY KEY;
Reason:
1st: find_ref_key() finds which key an auto_increment field belongs to
by comparing key_part->offset and field->ptr. But BIT fields might
have zero length in the record, so a key might have many key parts
with the same offset. That is, comparing offsets cannot uniquely
identify the correct key part.
2nd: Since next_number_key_offset is zero, MyISAM/Aria will think that
the auto_increment is in the first part of the key.
3rd: MyISAM/Aria will call retrieve_auto_key(), which will see the
first key_part field as a BIT field and hit assert(0).
Solution:
Many key parts might have the same offset, but BIT fields do not
support auto_increment. So, we can skip all key parts over BIT fields,
and then comparing offsets will be unambiguous.
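A standalone sketch of that idea, with stand-in types instead of the
real KEY_PART_INFO/Field structures (the actual fix belongs in
find_ref_key()):

  #include <cstddef>
  #include <vector>

  struct field_sk    { size_t offset; bool is_bit; };
  struct key_part_sk { const field_sk* field; size_t offset; };

  /* Locate the key part holding the auto_increment field by offset.
  BIT fields may occupy zero bytes in the record, so several parts
  can share an offset; they cannot be auto_increment, so skip them. */
  static int find_auto_inc_part(const std::vector<key_part_sk>& parts,
                                const field_sk* auto_inc)
  {
    for (size_t i= 0; i < parts.size(); i++)
    {
      if (parts[i].field->is_bit)
        continue;                 /* ambiguous zero-length offset */
      if (parts[i].offset == auto_inc->offset)
        return (int) i;           /* offsets are now unambiguous */
    }
    return -1;
  }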
/home/kevg/work/mariadb/sql/sql_partition.cc:286:47: error: cannot initialize a parameter of type 'HA_CREATE_INFO *' (aka 'st_ha_create_information *') with an rvalue of type 'ulonglong' (aka 'unsigned long long')
(ulonglong)0, (uint)0);
^~~~~~~~~~~~
/home/kevg/work/mariadb/sql/partition_info.h:281:72: note: passing argument to parameter 'info' here
bool set_up_defaults_for_partitioning(handler *file, HA_CREATE_INFO *info,
^
The assertion failure was caused by an incorrectly set read_set for
functions in the ORDER BY clause of one part of a UNION, when a
mergeable view is used and the ORDER BY clause can be skipped
(removed).
An ORDER BY clause can be skipped if it belongs to just one branch of
the UNION, as the ordering of an individual SELECT is not meaningful
in the UNIONed result set. The server is aware of this optimization
and tries to remove the ORDER BY clause before JOIN::prepare. The
problem is that we still need to throw an error when the ORDER BY
clause contains invalid columns. To do this, we
attempt resolving the ORDER BY expressions, then subsequently drop them
if resolution succeeded. However, ORDER BY resolution had the side
effect of adding the expressions to the all_fields list, which is used
to construct temporary tables to store the result. Even though the
ORDER BY clause is ignored, the tmp table would still compute the
values of those expressions, although the columns are never used.
The assertion only shows itself if the order by clause contains members
which were not previously in the select list, and are part of a
function.
There is an additional question as to why this only manifests when using
VIEWS and not when using a regular table. The difference lies with the
"reset" of the read_set for the temporary table during
SELECT_LEX::update_used_tables() in JOIN::optimize(). The changes
introduced in fdf789a7ea cleared the
read_set when a mergeable view is encountered in the TABLE_LIST
definition.
Upon initial order_list resolution, the table's read_set is updated
correctly. JOIN::optimize() will only reset the read_set if it
encounters a VIEW. Since we no longer have an ORDER BY clause in
JOIN::optimize() we never get to correctly update the read_set again.
A related commit by Timour first introduced the order resolution
when we "can_skip_sort_order":
883af99e7d
Solution:
Don't add the resolved ORDER BY elements to all_fields. We only resolve
them to check if an error should be returned for the query. Ignore them
completely otherwise.
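A rough standalone sketch of that approach, using stand-in types (the
real code resolves Item trees during JOIN::prepare): resolve purely
for error checking, without registering anything in all_fields:

  #include <stdexcept>
  #include <string>
  #include <vector>

  struct item_sk { std::string name; };

  /* Stand-in resolver: fails if the column is unknown. */
  static void fix_field_sk(const item_sk& it,
                           const std::vector<std::string>& columns)
  {
    for (const auto& c : columns)
      if (c == it.name)
        return;
    throw std::runtime_error("Unknown column '" + it.name +
                             "' in ORDER BY");
  }

  /* The ORDER BY will be dropped: validate its items so that invalid
  columns still raise an error, but do NOT append them to all_fields,
  so the tmp table never tries to compute them. */
  static void check_skippable_order(const std::vector<item_sk>& order,
                                    const std::vector<std::string>& columns,
                                    std::vector<item_sk>& all_fields)
  {
    for (const auto& it : order)
      fix_field_sk(it, columns);
    (void) all_fields;            /* intentionally left untouched */
  }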
It has its limitations, e.g. it assumes that only one gdb and only
one valgrind process are running. And the hard-coded one-second delay
might be too short for slow machines.
Still, it's better than "doesn't work at all".
Instrument table->record[0], table->record[1] and share->default_values.
One should not access a record image beyond share->reclength, even
if table->record[0] has some unused space after it (functions that
work with records might get a copy of the record as an argument,
and that copy, not being record[0], might not have this buffer space
at the end). See b80fa4000d and 444587d8a3.
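A minimal sketch of such instrumentation, assuming the Valgrind
client request that the server's MEM_NOACCESS() wrapper expands to;
the buffer layout and the alloced_length parameter are illustrative:

  #include <valgrind/memcheck.h>
  #include <cstddef>

  /* Poison the slack after the record image so that any access past
  share->reclength is reported. */
  static void poison_record_tail(unsigned char* record, size_t reclength,
                                 size_t alloced_length)
  {
    VALGRIND_MAKE_MEM_NOACCESS(record + reclength,
                               alloced_length - reclength);
  }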
TRASH() was mapped to TRASH_FREE() and was supposed to be used for
memory that should not be accessed anymore, while TRASH_ALLOC() is to
be used for uninitialized but to-be-used memory.
But sometimes TRASH() was used in the latter sense.
Remove the TRASH() macro; always use explicit TRASH_ALLOC() or
TRASH_FREE().
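For illustration, a sketch of the two distinct patterns, modelled on
my_valgrind.h; treat the fill bytes as an assumption, and note that
the real macros additionally mark the range undefined or inaccessible
for Valgrind:

  #include <cstring>

  /* Uninitialized but to-be-used memory: fill with a recognizable byte. */
  #define TRASH_ALLOC_SKETCH(p, len) memset((p), 0xA5, (len))
  /* Freed memory that must not be accessed anymore. */
  #define TRASH_FREE_SKETCH(p, len)  memset((p), 0x8F, (len))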
InnoDB limited the maximum number of bytes per character to 4.
But the filename character set that was introduced in MySQL 5.1
uses up to 5 bytes per character.
To allow InnoDB tables to be created with wider characters, let us
split the mbminmaxlen fields into mbminlen and mbmaxlen, and increase
the limit to 7 bytes per character. This will increase the payload size
of dtype_t and dict_col_t by one bit. The storage size will be unchanged
(54 bits and 77 bits will use the same number of bytes as the
previous sizes 53 and 76 bits).
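Schematically, assuming two 3-bit fields (only these are shown; the
remaining dtype_t and dict_col_t members are omitted).  That matches
the one-bit increase quoted above, since the old combined mbminmaxlen
field needed 5 bits:

  struct dtype_mb_sketch {
    unsigned mbminlen : 3;  /* minimum length of a character, in bytes */
    unsigned mbmaxlen : 3;  /* maximum length of a character, in bytes;
                               3 bits allow the new limit of 7 */
  };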
Resolving a stacktrace including functions in dynamic libraries requires
us to look inside the libraries for the symbols. Addr2line needs to be
started with the correct binary for each address on the stack. To do
this, we use dladdr() to figure out which library the address belongs
to; if the running addr2line was started on a different binary, we
fork it again with the correct one.
We only have one addr2line process running at any point during the
stacktrace resolving step. The maximum number of forks for addr2line should
generally be around 6.
One fork for server stacktrace code, one for plugin code, one when
going back into server code, one for the pthread library, one for
libc, and one for the _start function in the server. More can come up
if a plugin calls a server function which goes back into a plugin,
etc.
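A standalone sketch of the per-frame lookup, assuming dladdr() from
<dlfcn.h> (on glibc this needs _GNU_SOURCE and -ldl); the fork/exec
plumbing and the caching of the running addr2line child are omitted:

  #include <dlfcn.h>
  #include <cstdint>

  /* Find which object file contains addr, and the offset to feed to
  an addr2line process started on that object.  For a shared object
  the offset is relative to the load base; a non-PIE main binary
  would want the raw address instead. */
  static void resolve_frame_sketch(void* addr, const char** binary,
                                   uintptr_t* offset)
  {
    Dl_info info;
    if (dladdr(addr, &info) && info.dli_fname)
    {
      *binary= info.dli_fname;
      *offset= (uintptr_t) addr - (uintptr_t) info.dli_fbase;
    }
    else
    {
      *binary= nullptr;           /* fall back to the main executable */
      *offset= (uintptr_t) addr;
    }
  }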
In this case we were using the derived_with_keys optimization, but we
could not create a key because the length of the key was greater than
the maximum allowed (MI_MAX_KEY_LENGTH). To do the join we needed to
create a hash join key instead, but the EXPLAIN output still referred
to the derived keys, which were created but not used.
In the function JOIN::shrink_join_buffers the iteration over joined
tables was organized incorrectly. This could cause a crash if
the optimizer chose to materialize a semi-join that used join caches
for which the sizes must be adjusted.
MEMORY engine needs the record length to be at least sizeof(void*),
because it stores a pointer there (linking deleted records into a list).
So when the reclength is less than sizeof(void*), it's set to sizeof(void*).
That is done inside heap_create(), and the upper layer doesn't know
that the engine writes beyond share->reclength.
While this is usually safe (the in-memory record size is rounded up
to sizeof(double), so even if share->reclength is too small,
share->rec_buff_len is not), it could cause problems in code that
copies records and expects them to fit in share->reclength,
e.g. in partitioning.
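Schematically, with illustrative names (the real adjustment happens
inside heap_create()):

  #include <algorithm>
  #include <cstddef>

  /* The engine links deleted records into a list through the record
  bytes, so its internal record length may exceed share->reclength. */
  static size_t heap_reclength_sketch(size_t share_reclength)
  {
    return std::max(share_reclength, sizeof(unsigned char*));
  }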
* get_rec_bits() was always reading two bytes, even if the
bit field consisted of only one byte (see the sketch below)
* In various places the code used field->pack_length() bytes
starting from field->ptr, while it should be field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
an argument to memcmp(), but field_length is the number of bits!
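A sketch of the corrected read, modelled on the server's
get_rec_bits() helper (treat the exact shape as an assumption):

  #include <cstdint>

  /* Read len bits starting at bit ofs of ptr[0]; touch ptr[1] only
  when the bit field actually spans into the second byte. */
  static inline uint8_t get_rec_bits_sketch(const uint8_t* ptr,
                                            unsigned ofs, unsigned len)
  {
    uint16_t val= ptr[0];
    if (ofs + len > 8)
      val|= (uint16_t) (ptr[1] << 8);
    return (uint8_t) ((val >> ofs) & ((1u << len) - 1));
  }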
If the property is not found, set it to the empty string, otherwise
it'll show up as libmysql_link_flags-NOTFOUND on the linker command
line, and the linker won't like it.
Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES
already puts it into LINK_FLAGS.
optimizer_switch
For DATE and DATETIME columns defined as NOT NULL,
"date_notnull IS NULL" has to be modified to:
"date_notnull IS NULL OR date_notnull == 0"
if date_notnull is from an inner table of an outer join;
"date_notnull == 0" otherwise.
This must hold for such columns of mergeable views and derived
tables as well. So far the code did the above re-writing only
for columns of base tables and temporary tables.
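A string-level standalone sketch of the two cases (the server
performs this rewrite on Item trees in the optimizer, not on SQL
text):

  #include <string>

  static std::string rewrite_is_null_sketch(const std::string& col,
                                            bool inner_of_outer_join)
  {
    if (inner_of_outer_join)
      return col + " IS NULL OR " + col + " = 0";
    return col + " = 0";
  }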