The counter for SELECT numbering is now stored with the statement (it used to be global).
It therefore always has an accurate value that does not depend on statement
preparation being interrupted by errors such as a missing table in
a view definition.
Whenever one copies an IO_CACHE struct, one must remember to call
setup_io_cache(); otherwise the IO_CACHE's current_pos and end_pos
self-references will still point into the previous struct's memory, which
could go out of scope. Commit 9003869390
fixes this problem in a more general fashion by removing the
self-references altogether, but for 5.5 we'll keep the old behaviour.
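A minimal, self-contained sketch of the pitfall, using a hypothetical simplified
struct rather than the real IO_CACHE definition: copying a struct that holds
pointers into its own storage leaves the copy pointing at the original's memory,
so it has to be re-initialized after the copy (the role setup_io_cache() plays here).

#include <cstring>

// Hypothetical simplification of a cache that keeps self-referencing pointers.
struct cache_t
{
  char  buffer[64];
  char *current_pos;   // points into this->buffer
  char *end_pos;       // points into this->buffer
};

// Analogue of setup_io_cache(): re-aim the self-references at *this* struct.
static void setup_cache(cache_t *c)
{
  c->current_pos= c->buffer;
  c->end_pos=     c->buffer + sizeof(c->buffer);
}

static void copy_cache(cache_t *dst, const cache_t *src)
{
  memcpy(dst, src, sizeof(*dst));
  // Without this call dst->current_pos / dst->end_pos still point into
  // src->buffer, which may go out of scope or be freed.
  setup_cache(dst);
}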
thd->lex->part_info should be kept intact during PS execution;
otherwise the second execution gets the modified part_info.
Let's modify thd->work_part_info instead.
Item_xml_str_func::fix_fields() used a local "String tmp" as a buffer
for args[1]->val_str(). "tmp" was freed at the end of fix_fields(),
while Items created during my_xpath_parse() still pointed to its fragments.
Fix: add a new member Item_xml_str_func::m_xpath_query and store the result
of args[1]->val_str() into it.
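A minimal sketch of the lifetime bug and the fix, with hypothetical names standing
in for Item_xml_str_func and the XPath items: anything that keeps pointers into the
parsed query text must point into a buffer that outlives fix_fields(), i.e. a member,
not a local.

#include <string>

struct parsed_item
{
  const char *fragment;   // points into some externally owned buffer
};

struct xml_func
{
  std::string m_xpath_query;   // analogue of Item_xml_str_func::m_xpath_query
  parsed_item item;

  // Buggy variant: 'tmp' is destroyed on return, leaving item.fragment dangling.
  void fix_fields_buggy(const char *arg)
  {
    std::string tmp(arg);
    item.fragment= tmp.c_str();        // dangling after return
  }

  // Fixed variant: keep the query text in a member with the same lifetime
  // as the items that point into it.
  void fix_fields_fixed(const char *arg)
  {
    m_xpath_query= arg;
    item.fragment= m_xpath_query.c_str();
  }
};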
remove HA_EXTRA_PREPARE_FOR_RENAME - neither OPTIMIZE nor REPAIR needs it
(it was introduced in b58e79566c when replacing remove_table_from_cache()
with wait_while_table_is_used(), even though remove_table_from_cache()
did not have it).
"tokudb_alter_table.drop_add_pk_part_104 leaves a temporary file behind"
Fixed by copying 3 lines from 10.1 to 10.0 that clean up the temporary
file for partitioned tables.
May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table"
I was not able to reproduce the crash; however, there was no protection in
print_keydup_error() if the storage engine reported the wrong key number.
This patch adds such a protection (a sketch of the check follows after the
list below) and should stop any further crashes in this case.
Other things:
- Added extra protection in Aria to not set errkey to more than the number of
  keys. (I don't think this is the cause of this crash, but better safe than
  sorry.)
- Extended test_if_equal_repl_errors() to handle different cases of
  ER_DUP_ENTRY. This is mainly a precaution for the future.
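A minimal sketch of the kind of check meant above; the structures and helper names
are illustrative, not the actual print_keydup_error() signature:

#include <cstdio>

// Hypothetical table metadata.
struct table_meta
{
  unsigned key_count;
  const char **key_names;
};

// Report a duplicate-key error; fall back to a generic message if the
// storage engine handed us a key number that is out of range.
static void report_dup_key(const table_meta *t, unsigned errkey)
{
  if (errkey >= t->key_count)
  {
    fprintf(stderr, "Duplicate entry for key <unknown>\n");
    return;
  }
  fprintf(stderr, "Duplicate entry for key '%s'\n", t->key_names[errkey]);
}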
MDEV-14957: JOIN::prepare gets unusable "conds" as argument
Do not touch merged derived tables (the merge is irreversible)
Fix the first argument of in_optimizer for calls possible before fix_fields()
Problem:
If we create a table using MyISAM/Aria, the following crashes the server:
CREATE TABLE t1(a bit(1), b int auto_increment , index(a,b));
insert into t1 values(1,1);
Or this sequence of statements:
CREATE TABLE t1 (b BIT(1), pk INTEGER AUTO_INCREMENT PRIMARY KEY);
ALTER TABLE t1 ADD INDEX(b,pk);
INSERT INTO t1 VALUES (1,b'1');
ALTER TABLE t1 DROP PRIMARY KEY;
Reason:
1st: find_ref_key() finds which key an auto_increment field belongs to by
comparing key_part->offset and field->ptr. But BIT fields might have
zero length in the record, so a key might have many key parts with the
same offset. That is, comparing offsets cannot uniquely identify the
correct key part.
2nd: Since next_number_key_offset is zero, MyISAM/Aria will think that the
auto_increment field is in the first part of the key.
3rd: MyISAM/Aria will call retrieve_auto_key, which sees the first key_part
field as a bit field and calls assert(0).
Solution:
Many key parts might have the same offset, but BIT fields do not
support auto_increment. So we can skip all key parts over BIT fields,
and then the offset comparison becomes unambiguous.
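A minimal sketch of the idea, using simplified hypothetical structures instead of
the real KEY_PART_INFO/Field classes:

#include <cstddef>

enum field_type { FT_BIT, FT_INT, FT_OTHER };

struct key_part_t
{
  size_t     offset;   // offset of the field inside the record
  field_type type;
};

// Return the index of the key part that holds the auto_increment field,
// or -1 if it is not found.  BIT fields may occupy zero bytes in the record,
// so several key parts can share one offset; they are skipped because a BIT
// column cannot be auto_increment, which makes the offset comparison unique.
static int find_auto_inc_key_part(const key_part_t *parts, int n_parts,
                                  size_t auto_inc_offset)
{
  for (int i= 0; i < n_parts; i++)
  {
    if (parts[i].type == FT_BIT)
      continue;                        // skip: cannot be auto_increment
    if (parts[i].offset == auto_inc_offset)
      return i;
  }
  return -1;
}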
/home/kevg/work/mariadb/sql/sql_partition.cc:286:47: error: cannot initialize a parameter of type 'HA_CREATE_INFO *' (aka 'st_ha_create_information *') with an rvalue of type 'ulonglong' (aka 'unsigned long long')
(ulonglong)0, (uint)0);
^~~~~~~~~~~~
/home/kevg/work/mariadb/sql/partition_info.h:281:72: note: passing argument to parameter 'info' here
bool set_up_defaults_for_partitioning(handler *file, HA_CREATE_INFO *info,
^
The assertion failure was caused by an incorrectly set read_set for
functions in the ORDER BY clause of one part of a union, when a mergeable
view is used and the ORDER BY clause can be skipped (removed).
An ORDER BY clause can be skipped if it belongs to just one part of the UNION,
since the ordering of an individual SELECT is not meaningful in the final
UNIONed result set. The server is aware of this optimization and tries to
remove the ORDER BY clause before JOIN::prepare. The problem is that we still
need to throw an error when the ORDER BY clause contains invalid columns.
To do this, we attempt to resolve the ORDER BY expressions and subsequently
drop them if resolution succeeded. However, ORDER BY resolution had the side
effect of adding the expressions to the all_fields list, which is used
to construct temporary tables that store the result. We may be ignoring
the ORDER BY clause, but the tmp table still tried to compute the
values of those expressions, even though the columns are never used.
The assertion only shows itself if the ORDER BY clause contains members
which were not previously in the select list and are part of a
function.
There is an additional question as to why this only manifests when using
VIEWs and not when using a regular table. The difference lies in the
"reset" of the read_set for the temporary table during
SELECT_LEX::update_used_tables() in JOIN::optimize(). The changes
introduced in fdf789a7ea cleared the
read_set when a mergeable view is encountered in the TABLE_LIST
definition.
Upon initial order_list resolution, the table's read_set is updated
correctly. JOIN::optimize() will only reset the read_set if it
encounters a VIEW. Since we no longer have the ORDER BY clause in
JOIN::optimize(), we never get to correctly update the read_set again.
Other relevant commit by Timour, which first introduced the order
resolution when we "can_skip_sort_order":
883af99e7d
Solution:
Don't add the resolved ORDER BY elements to all_fields. We only resolve
them to check if an error should be returned for the query. Ignore them
completely otherwise.
Instrument table->record[0], table->record[1] and share->default_values.
One should not access a record image beyond share->reclength, even
if table->record[0] has some unused space after it (functions that
work with records might get a copy of the record as an argument,
and that copy, not being record[0], might not have this buffer space
at the end). See b80fa4000d and 444587d8a3
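A small illustration of the rule, assuming only the table->record[0] /
table->s->reclength layout described above (the stand-in structs are hypothetical):

#include <cstring>
#include <cstdlib>

// Hypothetical stand-ins for TABLE_SHARE::reclength and TABLE::record[0].
struct share_t { size_t reclength; };
struct table_t { share_t *s; unsigned char *record[2]; };

// Copy a record image.  The copy is exactly s->reclength bytes long, so any
// code that reads past reclength is a bug even if it happens to work on
// record[0] (which may have extra, unused space after it).
static unsigned char *copy_record(const table_t *table)
{
  unsigned char *copy= (unsigned char*) malloc(table->s->reclength);
  if (copy)
    memcpy(copy, table->record[0], table->s->reclength);
  return copy;
}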
TRASH() was mapped to TRASH_FREE() and was supposed to be used for memory
that should not be accessed anymore, while TRASH_ALLOC() is to be
used for uninitialized but to-be-used memory.
But sometimes TRASH() was used in the latter sense.
Remove the TRASH() macro; always use explicit TRASH_ALLOC() or TRASH_FREE().
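A rough sketch of the intended split; the fill patterns and helper names are
illustrative rather than the exact definitions in the server headers:

#include <cstring>

// Poison freshly allocated, not-yet-initialized memory: reading it is a bug,
// but it will legitimately be written to soon.
static inline void trash_alloc(void *ptr, size_t length)
{
  memset(ptr, 0xA5, length);
}

// Poison memory that is being released: neither reads nor writes are
// expected after this point.
static inline void trash_free(void *ptr, size_t length)
{
  memset(ptr, 0x8F, length);
}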
In this case we were using the derived_with_keys optimization, but we could not create a key
because the length of the key was greater than the maximum allowed (MI_MAX_KEY_LENGTH).
To do the join we needed to create a hash join key instead, but the EXPLAIN output
still showed that we were referring to derived keys which were created but not used.
In the function JOIN::shrink_join_buffers the iteration over joined
tables was organized incorrectly. This could cause a crash if
the optimizer chose to materialize a semi-join that used join caches
whose sizes must be adjusted.
* get_rec_bits() was always reading two bytes, even if the
  bit field consisted of only one byte
* In various places the code used field->pack_length() bytes
  starting from field->ptr, while it should have used field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
  an argument to memcmp(), but field_length is the number of bits! (See the sketch below.)
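A small sketch of the last point (a simplified, hypothetical comparison routine,
not the actual Field_bit code): field_length counts bits, so it has to be converted
to bytes before being handed to memcmp():

#include <cstring>

// Compare two BIT-field images.  field_length is in *bits*; memcmp() wants
// bytes, so passing field_length directly would read past the value.
static int bit_field_cmp(const unsigned char *a, const unsigned char *b,
                         unsigned field_length_bits)
{
  size_t bytes= (field_length_bits + 7) / 8;
  return memcmp(a, b, bytes);
}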
optimizer_switch
For DATE and DATETIME columns defined as NOT NULL,
"date_notnull IS NULL" has to be modified to:
"date_notnull IS NULL OR date_notnull == 0" if date_notnull is from an inner table of an outer join;
"date_notnull == 0" otherwise.
This must hold for such columns of mergeable views and derived
tables as well. So far the code did the above rewriting only
for columns of base tables and temporary tables.
The function trans_rollback_to_savepoint(), unlike trans_savepoint(),
did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT
inside an XA transaction incorrectly returned an error
"...command cannot be executed ... in the ACTIVE state...".
Partially merging a MySQL patch:
7fb5c47390311d9b1b5367f97cb8fedd4102dd05
This is WL#7193 (Decouple THD and st_transactions)...
The currently merged part includes these changes:
- Introducing st_xid_state::check_has_uncommitted_xa()
- Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(),
so now both allow XA_ACTIVE.
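A rough sketch of the shared check described above; the state names and the exact
set of rejected states are assumptions drawn from this description, not the real
st_xid_state code:

enum xa_states { XA_NOTR, XA_ACTIVE, XA_IDLE, XA_PREPARED };

struct xid_state_t
{
  xa_states xa_state;

  // Hypothetical analogue of st_xid_state::check_has_uncommitted_xa():
  // true means "refuse the statement".  ACTIVE is deliberately not in the
  // rejected set, so SAVEPOINT and ROLLBACK TO SAVEPOINT keep working
  // inside an active XA transaction.
  bool check_has_uncommitted_xa() const
  {
    return xa_state == XA_IDLE || xa_state == XA_PREPARED;
  }
};

// Both trans_savepoint() and trans_rollback_to_savepoint() would start with
// the same guard instead of each hard-coding its own state list.
static bool reject_if_uncommitted_xa(const xid_state_t *xs)
{
  return xs->check_has_uncommitted_xa();
}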
(from 10.1 to 10.0-galera)
This conflicted significantly with 7d550c76be,
which added --defaults-group-suffix support.
Took the approach of 4bb49d84a9 and adapted the
--defaults-group-suffix handling to be consistent.
The changes are as follows:
SST scripts now use $MY_PRINT_DEFAULTS rather than the lowercase form, for
consistency, and this includes all required --defaults arguments.
Backport/merge by Daniel Black <daniel@linux.vnet.ibm.com>
The problem occurred in the following scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for UNLOCK
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the query to be unlocked
T1 a) - has not yet unlocked the query in the QC and crashes on the attempt to unlock,
        because the QC signals have already been destroyed
   b) - if the above was done before the destruction, it executes end_of_result for the
        first time at exit, after try_lock(), which sees the QC disabled and returns TRUE.
        But it does not reset query_cache_tls->first_query_block, which leads to a
        second call of end_of_result when the diagnostics arena already has an
        inappropriate status (not is_eof()).
The fix is:
1) wait for all queries to be unlocked before destroying them, by locking and
   unlocking (a generic sketch of this pattern follows below)
2) remove query_cache_tls->first_query_block if the QC is disabled
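A generic sketch of the pattern behind fix (1), assuming only that the crash comes
from destroying synchronization objects while another thread still holds them; the
names are illustrative, not the actual query cache code:

#include <mutex>

// Illustrative stand-in for the query cache: acquiring and releasing the
// lock before tearing anything down guarantees that a thread still inside
// (T1 above) has finished its unlock first.
struct guarded_cache
{
  std::mutex lock;
  bool disabled= false;

  void disable()
  {
    {
      std::lock_guard<std::mutex> guard(lock); // waits for in-flight users
      disabled= true;
    }
    // Only now is it safe to destroy the synchronization objects.
  }
};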
with joins, SQ, ORDER BY, semijoin=on
A bug in get_sort_by_table() could mislead the function
setup_semijoin_dups_elimination(). As a result the optimizer
could produce invalid execution plans for queries with ORDER BY
and subquery predicates that could be converted to semi-joins.
Remove non-prepared FT functions (i.e. those belonging to removed clauses) from the list.
In a later version this will be fixed by building the list during preparation.