The MEMORY engine needs the record length to be at least sizeof(void*),
because it stores a pointer there (linking deleted records into a list).
So when reclength is less than sizeof(void*), it is set to sizeof(void*).
That is done inside heap_create(), and the upper layer doesn't know
that the engine writes beyond share->reclength.
While this is usually safe (the in-memory record size is rounded up to
sizeof(double), so even if share->reclength is too small,
share->rec_buff_len is not), it can cause problems in code that
copies records and expects them to fit in share->reclength,
e.g. in partitioning.
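A minimal, self-contained C++ sketch of the adjustment described above, with a hypothetical struct standing in for the real heap share (illustrative names, not the actual heap_create() code):

  #include <algorithm>
  #include <cstddef>

  // Hypothetical stand-in for the relevant part of the MEMORY engine's share.
  struct heap_share_sketch
  {
    size_t reclength;   // record length as requested by the upper layer
  };

  // Deleted records are linked into a free list by storing a pointer inside
  // the record body, so every record must be able to hold at least one pointer.
  static void adjust_reclength(heap_share_sketch *share)
  {
    share->reclength = std::max(share->reclength, sizeof(void *));
  }

The upper layer keeps using the reclength it asked for, which is exactly the mismatch described above.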
* get_rec_bits() always read two bytes, even if the
bit field occupied only one byte
* In various places the code used field->pack_length() bytes
starting from field->ptr, while it should have used field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
an argument to memcmp(), but field_length is the number of bits!
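A self-contained illustration of the third point (hypothetical helper, not the real Field_bit methods): since field_length counts bits, it has to be converted to a byte count before being passed to memcmp().

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // field_length_bits is a bit count; memcmp() needs bytes, so round up.
  static int bit_field_cmp(const unsigned char *a, const unsigned char *b,
                           uint32_t field_length_bits)
  {
    size_t bytes = (field_length_bits + 7) / 8;
    return std::memcmp(a, b, bytes);
  }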
If the property is not found, set it to the empty string,
otherwise it shows up as libmysql_link_flags-NOTFOUND on the linker
command line, and the linker won't like it.
Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES
already puts it into LINK_FLAGS.
optimizer_switch
For DATE and DATETIME columns defined as NOT NULL,
"date_notnull IS NULL" has to be modified to:
"date_notnull IS NULL OR date_notnull == 0"
if date_notnull is from an inner table of an outer join;
"date_notnull == 0" otherwise.
This must hold for such columns of mergeable views and derived
tables as well. So far the code did the above rewriting only
for columns of base tables and temporary tables.
The function trans_rollback_to_savepoint(), unlike trans_savepoint(),
did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT
inside an XA transaction incorrectly returned an error
"...command cannot be executed ... in the ACTIVE state...".
Partially merging a MySQL patch:
7fb5c47390311d9b1b5367f97cb8fedd4102dd05
This is WL#7193 (Decouple THD and st_transactions)...
The currently merged part includes these changes:
- Introducing st_xid_state::check_has_uncommitted_xa()
- Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(),
so now both allow XA_ACTIVE.
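A hedged, simplified sketch of the shared check (a free function with a stand-in enum here; the real code is a method of st_xid_state and reports an error for the blocked states): the key point is that XA_ACTIVE no longer blocks the statement.

  // Hypothetical simplification of st_xid_state::check_has_uncommitted_xa().
  enum xa_state_t { XA_NOTR, XA_ACTIVE, XA_IDLE, XA_PREPARED, XA_ROLLBACK_ONLY };

  // Returns true ("refuse the statement") only for states where the XA
  // transaction is past its ACTIVE phase; XA_ACTIVE is allowed, which is what
  // lets ROLLBACK TO SAVEPOINT work inside an active XA transaction.
  static bool check_has_uncommitted_xa(xa_state_t xa_state)
  {
    return xa_state == XA_IDLE ||
           xa_state == XA_PREPARED ||
           xa_state == XA_ROLLBACK_ONLY;
  }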
The problem occurred in the following scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for UNLOCK
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the query to be unlocked
T1 a) - has not yet unlocked the query in the QC and crashes on the attempt to unlock,
because the QC signals have already been destroyed
b) - if the unlock happened before the destruction, it executes end_of_result a first
time at exit, after try_lock() sees that the QC is disabled and returns TRUE.
But it does not reset query_cache_tls->first_query_block, which leads to a
second call of end_of_result when the diagnostics area already has an
inappropriate status (not is_eof()).
The fix is:
1) wait for all queries to be unlocked (by locking and unlocking them) before
destroying them
2) clear query_cache_tls->first_query_block if the QC is disabled
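A generic, hedged illustration of fix (1), with std::mutex standing in for the query-cache locks (not the actual query cache code): locking and unlocking each per-query lock once cannot complete until every thread still holding one of them has released it.

  #include <mutex>
  #include <vector>

  // Blocks until no thread holds any of the per-query locks any more. This
  // assumes the cache has already been disabled, so no new queries can take
  // a lock; after it returns it is safe to destroy the signals.
  static void wait_until_all_queries_unlocked(std::vector<std::mutex> &query_locks)
  {
    for (std::mutex &m : query_locks)
    {
      m.lock();     // waits while some other thread still holds this lock
      m.unlock();   // we only needed to observe that it was released
    }
  }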
with joins, SQ, ORDER BY, semijoin=on
A bug in get_sort_by_table() could mislead the function
setup_semijoin_dups_elimination(). As a result the optimizer
could produce invalid execution plans for queries with ORDER BY
and subquery predicates that could be converted to semi-joins.
Remove non-prepared FT functions (i.e. those belonging to removed clauses) from the list.
In a later version this will be fixed by building the list during preparation.
This bug happens when locking the same Aria "transactional" table
(page format) more than once with LOCK TABLES and inserting into one
of them with INSERT ... SELECT when the table is empty.
Fixed by ensuring we don't use fast bulk insert if the table is opened
twice with LOCK TABLES (as this changes table->s->state).
Code changes:
- Added use_count to MARIA_USED_TABLES to be able to check if a
table is opened twice for a statement/LOCK TABLES
- Don't clear history or reset info->start_state if we
don't have versioning. One reason for the bug was
that info->start_state was set to point to different
states for the two tables. If there is no versioning,
info->start_state should always point to info->s->state.common.
Other things:
- Also fixed some typos that were noticed while scanning the code
- More DBUG_PRINT
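A hedged, simplified sketch of the use_count check (hypothetical struct, not the real Aria structures): fast bulk insert is only allowed when the table is used once in the current statement/LOCK TABLES.

  // Hypothetical stand-in for the per-statement table tracking.
  struct used_table_sketch
  {
    unsigned use_count;   // how many times this table is opened in the statement
  };

  // Opening the same transactional Aria table twice under LOCK TABLES means the
  // instances share table->s->state, so the fast bulk-insert path is unsafe.
  static bool may_use_fast_bulk_insert(const used_table_sketch &t)
  {
    return t.use_count == 1;
  }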
The warning was originally added in
commit c67663054a
(MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
was analyzed in https://lists.mysql.com/mysql/176250
on November 9, 2004.
Originally, the limit was 20,000 undo log headers or transactions,
but in commit 9d6d1902e0
in MySQL 5.5.11 it was increased to 2,000,000.
The message can be triggered when the progress of purge is prevented
by a long-running transaction (or just an idle transaction whose
read view was started a long time ago): run many transactions
that UPDATE or DELETE some records, then start another transaction
with a read view, and then execute more than 2,000,000 further
transactions that UPDATE or DELETE records in InnoDB tables. When
the oldest long-running transaction finally completes, purge would
run up to the next-oldest transaction, and there would still be more
than 2,000,000 transactions to purge.
Because the message can be triggered when the database is obviously
not corrupted, it should be removed. Heavy users of InnoDB should be
monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
there is no need to spam the error log.
The XtraDB option innodb_track_changed_pages causes
the function log_group_read_log_seg() to be invoked
even when recv_sys==NULL, leading to a SIGSEGV.
This regression was caused by
MDEV-11027 (InnoDB log recovery is too noisy).
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a
column prefix of an externally stored column, the already parsed
part of the undo log record may contain a reference to
an off-page column. This is the case in the bug58912 test in
innodb.innodb.
This is a regression caused by MDEV-14051 'Undo log record is too big.'
Purge in the secondary index is wrongly skipped in
row_purge_upd_exist_or_extern() because node->row alone does not contain all
indexed columns.
trx_undo_rec_get_partial_row(): Add the parameter for node->update
so that the updated columns will be copied from the initial part
of the undo log record.
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
If a translation table is present when we materialize the derived table,
change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
In the function make_sortkey a tmp buffer was defined, and in the absence of
param->tmp_buffer, the tmp buffer used the sort_keys buffer. The sort_keys buffer
has a length defined in sort_field->length, while the length of param->tmp_buffer
is stored in param->rec_length. Make sure to use the appropriate length
based on which buffer we are using, otherwise we'll overflow.
Also added a type cast to size_t during the calculation of the sort keys
buffer size to avoid an overflow if the buffer size exceeds 32 bits.
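A self-contained illustration of the second fix (hypothetical function, not the real filesort code): widening to size_t before multiplying keeps the buffer-size calculation from wrapping at 32 bits.

  #include <cstddef>
  #include <cstdint>

  // Without the cast the product is computed in 32-bit arithmetic and can
  // silently wrap; casting one operand first performs the multiplication in
  // size_t (64 bits on the platforms of interest).
  static size_t sort_keys_buffer_bytes(uint32_t max_keys, uint32_t rec_length)
  {
    return static_cast<size_t>(max_keys) * rec_length;
  }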
Problem:
The command was:
find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+
which was supposed to work as follows:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes - don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)
Now -exec ... \+ works like this:
every newly found path is appended to the end of the command line.
When the accumulated command line length reaches `getconf ARG_MAX` (~2Gb),
it's executed, and find continues, appending to a new command line.
What happens here is that find appends some directory to the command line,
then dives into it, and starts appending files from that directory.
At some point the command line overflows, rm -rf gets executed and removes
the whole directory. Then find tries to continue scanning the directory
that was already removed.
Fix: don't dive into directories that will be recursively removed
anyway; use -prune for them. Basically, we should be pruning both paths
that have matched $cpat and paths that have not matched it. This is
achieved by pruning unconditionally, before the regex is tested:
find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+
Patch credit: Serg
This reverts commit 7603463a46.
The commit itself is fine; however, when volatile is disabled, compiler
optimizations mess up our double results due to precision differences.
Revert the patch until a proper solution is found.
This was added in c796415943 but would hurt all other compilers
because of Visual Studio. Hopefully this has been fixed now.
Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
For a BIT field, null_bit is not set to 0 even when the field is defined as NOT NULL.
So now, in the function TABLE::create_key_part_by_field, if the bit field is not
nullable then null_bit is explicitly set to 0.
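A minimal sketch of the described change (hypothetical struct and helper, not the real TABLE::create_key_part_by_field): for a NOT NULL bit field the key part's null_bit is forced to 0.

  // Hypothetical stand-in for the relevant piece of a key part.
  struct key_part_sketch
  {
    unsigned char null_bit;   // 0 means this key part cannot be NULL
  };

  // For BIT fields the field's null_bit may be non-zero even when the column
  // is NOT NULL, so clear it explicitly for non-nullable fields.
  static void set_key_part_null_bit(key_part_sketch *kp, bool field_is_nullable,
                                    unsigned char field_null_bit)
  {
    kp->null_bit = field_is_nullable ? field_null_bit : 0;
  }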
PROBLEM
-------
This warning message is printed when trx_sys->rseg_history_len is greater than some
arbitrary magic number (2000000). Given the reproducing scenario, where we keep
a read view open and run a lot of transactions on the table, which increases the
history length, it is entirely possible for trx_sys->rseg_history_len to exceed 2000000.
So this is not a bug due to corruption of the history length. The warning message was
just added to test some scenario and never removed.
FIX
---
1. Print this warning message only for debug versions.
2. Modified the warning message with more detailed information.
3. Don't crash even in debug versions.
[#rb 17929 Reviewed by jimmy and satya]