The function trans_rollback_to_savepoint(), unlike trans_savepoint(),
did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT
inside an XA transaction incorrectly returned an error
"...command cannot be executed ... in the ACTIVE state...".
Partially merging a MySQL patch:
7fb5c47390311d9b1b5367f97cb8fedd4102dd05
This is WL#7193 (Decouple THD and st_transactions)...
The currently merged part includes these changes:
- Introducing st_xid_state::check_has_uncommitted_xa()
- Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(),
so now both allow XA_ACTIVE.
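A minimal sketch of the shared check, as reconstructed from this
description (the error-reporting details are assumptions):
  bool st_xid_state::check_has_uncommitted_xa() const
  {
    if (xa_state == XA_IDLE ||
        xa_state == XA_PREPARED ||
        xa_state == XA_ROLLBACK_ONLY)
    {
      my_error(ER_XAER_RMFAIL, MYF(0), xa_state_names[xa_state]);
      return true;
    }
    /* XA_NOTR and XA_ACTIVE fall through: savepoint statements
       are allowed inside an active XA transaction */
    return false;
  }
Both trans_savepoint() and trans_rollback_to_savepoint() now call this
instead of open-coding the state check.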
The problem occurred in the following scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for the UNLOCK
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the
query to be unlocked
T1 a) - still holds its not-yet-unlocked query in the QC and crashes on
the attempt to unlock it, because the QC signals have already been
destroyed;
b) - if the unlock happened before the destruction, T1 executes
end_of_result the first time on exit, right after try_lock(), which
sees that the QC is disabled and returns TRUE. But it does not reset
query_cache_tls->first_query_block, which leads to a second call of
end_of_result when the diagnostics arena already has an inappropriate
status (not is_eof()).
The fix is:
1) wait for all queries to be unlocked before destroying the signals,
by locking and unlocking the QC (see the sketch below)
2) reset query_cache_tls->first_query_block if the QC is disabled
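A rough sketch of both parts of the fix (method names follow the
description; signatures and surrounding code are assumptions):
  /* 1) disabling side: taking and releasing the cache lock
        guarantees every registered query has been unlocked before
        the mutex/condition signals are destroyed */
  lock();     /* blocks until T1 has unlocked its query block */
  unlock();
  /* only now is it safe to destroy the signals */
  /* 2) query side: when try_lock() finds the cache disabled, also
        drop the per-thread block pointer so end_of_result() cannot
        run a second time on a stale diagnostics arena */
  if (try_lock(thd))            /* TRUE: cache is disabled */
  {
    query_cache_tls->first_query_block= NULL;
    DBUG_VOID_RETURN;
  }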
with joins, SQ, ORDER BY, semijoin=on
A bug in get_sort_by_table() could mislead the function
setup_semijoin_dups_elimination(). As a result the optimizer
could produce invalid execution plans for queries with ORDER BY
and subquery predicates that could be converted to semi-joins.
Remove non-prepared FT functions (that is, those belonging to
removed clauses) from the list.
In a later version this will be fixed by building the list during
preparation.
This bug happens when locking the same Aria "transactional" table
(page format) more than once with LOCK TABLES and inserting into one
of them with INSERT ... SELECT when the table is empty.
Fixed by ensuring we don't use fast bulk insert if table is opened
twice with LOCK TABLES (as this changes table->s->state)
Code changes:
- Added use_count to MARIA_USED_TABLES to be able to check if a
table is opened twice for a statement/lock table
- Don't clear the history or reset info->start_state if we
don't have versioning. One reason for the bug was
that info->start_state was set to point to different
states for the two tables. If there is no versioning,
info->start_state should always point to info->s->state.common
(see the sketch below).
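A simplified sketch of that invariant (the surrounding condition is
an assumption based on the description above):
  /* without versioning every open handler must share the table
     state; never leave start_state pointing at a private copy */
  if (!share->have_versioning)
    info->start_state= &share->state.common;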
Other things:
- Also fixed some typos that were noticed while scanning the code
- More DBUG_PRINT
STATEMENTS
ANALYSIS:
=========
A user not having FILE privilege is not allowed to create
custom data/index directories for a table or for its
partitions via CREATE TABLE but is allowed to do so via
ALTER TABLE statement.
ALTER TABLE ignores DATA DIRECTORY and INDEX DIRECTORY when
given as table options. The issue occurs during the
creation of partitions for a table via ALTER TABLE
statement with the DATA DIRECTORY and/or INDEX DIRECTORY
options. The issue exists because of the absence of a FILE
privilege check for the user.
FIX:
====
A FILE privilege check has been introduced for resolving
the above scenario.
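A minimal sketch of the added check, reusing the existing
check_global_access() helper (the exact call site in the
partitioning code is an assumption):
  /* reject DATA DIRECTORY / INDEX DIRECTORY for a partition
     unless the user holds the FILE privilege */
  if ((part_elem->data_file_name || part_elem->index_file_name) &&
      check_global_access(thd, FILE_ACL))
    return true;   /* check_global_access() has raised the error */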
Followup: now that the man pages have actually been removed,
we no longer need to take deliberate action to ignore them.
Thus we can remove that part of the original change.
RPM: drop the conditional removal
DEB: remove from the exclude list
The warning was originally added in
commit c67663054a
(MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
was analyzed in https://lists.mysql.com/mysql/176250
on November 9, 2004.
Originally, the limit was 20,000 undo log headers or transactions,
but in commit 9d6d1902e0
in MySQL 5.5.11 it was increased to 2,000,000.
The message can be triggered when the progress of purge is prevented
by a long-running transaction (or just an idle transaction whose
read view was started a long time ago), by running many transactions
that UPDATE or DELETE some records, then starting another transaction
with a read view, and finally by executing more than 2,000,000
transactions that UPDATE or DELETE records in InnoDB tables. Then,
when the oldest long-running transaction is completed, purge would
run up to the next-oldest transaction, and there would still be more
than 2,000,000 transactions to purge.
Because the message can be triggered when the database is obviously
not corrupted, it should be removed. Heavy users of InnoDB should be
monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
there is no need to spam the error log.
The XtraDB option innodb_track_changed_pages causes
the function log_group_read_log_seg() to be invoked
even when recv_sys==NULL, leading to the SIGSEGV.
This regression was caused by
MDEV-11027 InnoDB log recovery is too noisy
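One plausible shape of a guard; this is a defensive sketch only,
not necessarily the actual fix:
  /* log_group_read_log_seg() assumes recovery structures exist;
     when invoked from the changed-pages tracker outside recovery,
     recv_sys is NULL and must not be dereferenced */
  if (recv_sys == NULL)
    return;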
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a
column prefix of an externally stored column, the already parsed
part of the undo log record may contain a reference to
an off-page column. This is the case in the bug58912 test in
innodb.innodb.
This is a regression caused by MDEV-14051 'Undo log record is too big.'
Purge in the secondary index is wrongly skipped in
row_purge_upd_exist_or_extern() because node->row alone does not
contain all indexed columns.
trx_undo_rec_get_partial_row(): Add the parameter for node->update
so that the updated columns will be copied from the initial part
of the undo log record.
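Roughly, the extended signature (parameter comments reconstructed
from the description; details are assumptions):
  byte*
  trx_undo_rec_get_partial_row(
          const byte*    ptr,    /*!< in: remaining part of the undo
                                  log record after the general header */
          dict_index_t*  index,  /*!< in: clustered index */
          const upd_t*   update, /*!< in: node->update; updated columns
                                  are copied from the initial part of
                                  the undo log record */
          dtuple_t**     row,    /*!< out: partial row */
          ibool          ignore_prefix, /*!< in: whether prefixes of
                                  externally stored columns may be
                                  ignored */
          mem_heap_t*    heap);  /*!< in: memory heap */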
* don't use the Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
If a translation table is present when we materialize the derived
table, change it to point to the materialized table.
Added debug info to see what really happens and with which derived
table.
In the function make_sortkey a tmp buffer was defined, and in the
absence of param->tmp_buffer, the tmp buffer used the sort_keys
buffer. The sort_keys buffer has a length defined in
sort_field->length, while the length of param->tmp_buffer is stored
in param->rec_length. Make sure to use the appropriate length based
on which buffer we are using, otherwise we'll overflow.
Also added a type cast to size_t during the calculation of the sort
keys buffer size to avoid an overflow if the buffer size exceeds
32 bits.
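A condensed sketch of both fixes (the variable names around the
selection are assumptions):
  /* pick the length belonging to the buffer actually in use */
  uint length= param->tmp_buffer ? param->rec_length
                                 : sort_field->length;
  /* widen before multiplying so the byte count cannot wrap at
     32 bits */
  size_t buffer_size= (size_t) num_records * param->rec_length;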
This reverts commit 7603463a46.
The commit itself is fine, however when disabling volatile, compiler
optimizations mess up our double results due to precision differences.
Revert the patch till a proper solution is found.
IS DROPPED
ANALYSIS:
=========
It is advised not to tamper with the system tables.
When the primary key is dropped from a system table, certain
operations on the table which try to access the table's key
information may lead to server exit.
FIX:
====
An appropriate error is now reported in such a case.
This was added in c796415943 but would hurt all other compilers
because of Visual Studio. Hopefully this has been fixed now.
Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
ROOT
DESCRIPTION
===========
If the .pid file is created at a world-writable location,
it can be compromised by replacing the server's PID with
another running server's (or some other non-mysql process's)
PID, causing abnormal behaviour.
ANALYSIS
========
In such a case, the user should be warned that the .pid file is
being created at a world-writable location.
FIX
===
A new function is_file_or_dir_world_writable() is defined
and it is called in create_pid_file() before .pid file
creation. If the location is world-writable, a relevant
warning is thrown.
NOTE
====
1. The PID file is always created with permission bits 0664, so
for the outside world it is read-only.
2. Ignoring the case when permission is denied to get the
dir stats since the .pid file creation would fail anyway in
such a case.
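A minimal sketch of the new check (the stat()-based body, the path
variable and the warning text are assumptions consistent with the
NOTE above):
  #include <sys/stat.h>
  /* returns 1 if 'path' is world-writable, 0 if not, and
     -1 if the stats could not be obtained (ignored per NOTE 2) */
  static int is_file_or_dir_world_writable(const char *path)
  {
    struct stat st;
    if (stat(path, &st) < 0)
      return -1;
    return (st.st_mode & S_IWOTH) ? 1 : 0;
  }
  /* in create_pid_file(), before creating the file: */
  if (is_file_or_dir_world_writable(pidfile_dir) == 1)
    sql_print_warning("Insecure configuration for --pid-file: ...");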
For a BIT field, null_bit is not set to 0 even when the field is
defined as NOT NULL. So now, in the function
TABLE::create_key_part_by_field, if the bit field is not nullable,
the null_bit is explicitly set to 0.
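A sketch of the change in TABLE::create_key_part_by_field (the
exact condition is an assumption):
  /* a NOT NULL BIT field must not claim a null bit in the
     key part */
  if (field->type() == MYSQL_TYPE_BIT && !field->real_maybe_null())
    key_part_info->null_bit= 0;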
MDL_CONTEXT::TRY_ACQUIRE_LOCK_IMPL
ANALYSIS:
=========
Server sometimes exited when multiple threads tried to
acquire and release metadata locks simultaneously (as is
necessary, for example, to access a table). The same problem
could have occurred when new objects were registered/
deregistered in Performance Schema.
The problem was caused by a bug in LF_HASH - our lock free
hash implementation which is used by metadata locking
subsystem in 5.7 branch. In 5.5 and 5.6 we only use LF_HASH
in Performance Schema Instrumentation implementation. So
for these versions, the problem was limited to P_S.
The problem was in the my_lfind() function, which searches for
a specific hash element by going through the element
list. During this search it loads information about the element
being checked, such as its key pointer and hash value, into local
variables. Then it confirms that they were not corrupted by a
concurrent delete operation (which would set the pointer to 0)
by checking whether the element is still in the list. The latter
check did not take into account that the compiler (and
processor) can reorder reads in such a way that the load of the
key pointer happens after the check, making the result of the
check invalid.
FIX:
====
This patch fixes the problem by ensuring that no such
reordering can take place. This is achieved by using
my_atomic_loadptr(), which contains compiler and processor
memory barriers, for the check mentioned above and for other
similar places.
The default (for non-Windows systems) implementation of
my_atomic*() relies on the old __sync intrinsics and implements
my_atomic_loadptr() as a read-modify-write operation. To avoid
the scalability/performance penalty associated with the addition
of my_atomic_loadptr() calls, we change my_atomic*() to use the
newer __atomic intrinsics when available. This new default
implementation doesn't have such a drawback.
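A condensed sketch of the confirmed read in my_lfind() (variable
names are assumptions; the real code operates on the cursor's link
pointers):
  /* speculatively read the fields of the current element */
  cur_hashnr= node->hashnr;
  cur_keyptr= node->key;
  /* re-check that the element is still linked into the list; the
     atomic load carries compiler and processor memory barriers,
     so the reads above cannot be reordered to after this check */
  if (my_atomic_loadptr((void *volatile *) prev_link) != node)
    goto retry;                 /* concurrently deleted: restart */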
PROBLEM
-------
This warning message is printed when trx_sys->rseg_history_len is
greater than some arbitrary magic number (2000000). Given the
reproducing scenario, where we keep a read view open and run a lot of
transactions on a table, which increases the history length, it is
entirely possible for trx_sys->rseg_history_len to exceed 2000000.
So this is not a bug due to corruption of the history length. The
warning message was just added to test some scenario and was never
removed.
FIX
---
1. Print this warning message only for debug versions.
2. Modified the warning message with more detailed information.
3. Don't crash even in debug versions.
[#rb 17929 Reviewed by jimmy and satya]
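A rough sketch of fix (1) above, the debug-only warning (the
message text here is a placeholder, not the actual wording):
  #ifdef UNIV_DEBUG
    if (trx_sys->rseg_history_len > 2000000) {
      ut_print_timestamp(stderr);
      fprintf(stderr,
              " InnoDB: Warning: purge lag (%lu) is unusually long;"
              " a long-running transaction may be blocking purge.\n",
              (ulong) trx_sys->rseg_history_len);
    }
  #endif /* UNIV_DEBUG */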
Imported missing test case from MySQL 5.7 for
commit 25781c154396dbbc21023786aa3be070057d6999
Author: Annamalai Gurusami <annamalai.gurusami@oracle.com>
Date: Mon Feb 24 14:00:03 2014 +0530
Bug #17604730 ASSERTION: *CURSOR->INDEX->NAME == TEMP_INDEX_PREFIX
MariaDB 5.5 does not seem to be affected.
Issue:
------
VALUES() doesn't have a type() function and is considered an
Item_field.
Solution for 5.7:
-----------------
Add a new type() function for Item_values_insert.
On 8.0 and trunk it was fixed by Mithun's Bug#19601973.
Solution for 5.6:
-----------------
Additionally Bug#17458914 is backported.
This will address the problem of using VALUES() in
INSERT ... ON DUPLICATE KEY UPDATE. Create a field object
only if it is in the UPDATE clause, else return a NULL
item.
This will also address the problems mentioned in
Bug#14789787 and Bug#16756402.
Solution for 5.5:
-----------------
As mentioned above Bug#17458914 is backported.
Additionally Bug#14786324 is also backported.
When VALUES() is detected outside its meaningful place,
it should be treated as NULL and is thus replaced with a
Field_null object, with the same name as the original
field.
Fields with type NULL are generally not handled well inside
the server (e.g. InnoDB will not accept them, and it is
impossible to create them in regular tables). So create a
new const NULL item instead.
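A minimal sketch of that 5.5 behaviour (the surrounding resolution
logic and variable names are assumptions):
  /* VALUES(col) referenced outside INSERT ... ON DUPLICATE KEY
     UPDATE: return a constant NULL item carrying the original
     field's name, instead of wrapping a Field_null */
  if (!in_on_duplicate_key_update)
    return new Item_null(field_name);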
uses alias in HAVING when sql_mode = 'ONLY_FULL_GROUP_BY'
This patch corrects the patch for bug#18739: non-standard
HAVING extension was allowed in strict ANSI sql mode
added in 2006 by commit 4b7c4cd27f.
As a result of the incompleteness of the fix in the above commit,
if a query with GROUP BY contained an aggregate function with an
alias, and this alias was used in the HAVING clause of the query,
the server reported an error when sql_mode was set to
'ONLY_FULL_GROUP_BY'.
Removed relevant man pages from file lists for RPM and DEB
RPM: added conditional removal of them, so it works both before and
after man pages are actually removed
DEB: added to exclude list (5.6+)
NOT UPDATE FILE ON DISK
Description:- When the server variable "myisam_use_mmap" is
enabled, MyISAM tables on Windows are not updating the file
on disk even when the server variable "flush" is set to 1.
This in turn leaves the table corrupted when a power failure
is encountered.
Analysis:- When the server variable "myisam_use_mmap" is set,
files of MyISAM tables will be memory mapped using the OS
APIs mmap()/munmap()/msync() on Unix and CreateFileMapping()
/UnmapViewOfFile()/FlushViewOfFile() on Windows. msync() and
FlushViewOfFile() are responsible for flushing the changes
made to the in-core copy of a file that was mapped into
memory using mmap()/CreateFileMapping() back to the
file system. When the flush happens is determined by the OS
unless it is explicitly requested using msync()/FlushViewOfFile().
When the server variables "myisam_use_mmap" and "flush" are
enabled, MyISAM is only flushing the files from file system
cache to disk using "mysql_file_sync()" and not the memory
mapped file from memory to FS cache using "my_msync()"
["my_msync()" in turn calls msync() on Unix and
FlushViewOfFile() on Windows].
Fix:- As part of the fix, if server variable
"myisam_use_mmap" is enabled along with "flush",
"my_msync()" is invoked to flush the data in memory to file
system cache and followed by "mysql_file_sync()" which will
flush the data from file system cache to disk.
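A condensed sketch of the resulting flush order (member names for
the mapped region follow MyISAM's share structure, but the exact
call site is an assumption):
  if (share->file_map)                /* table is memory mapped */
  {
    /* flush the in-memory copy to the file system cache ... */
    my_msync(info->dfile, share->file_map,
             (size_t) share->mmaped_length, MS_SYNC);
  }
  /* ... then flush the file system cache to disk */
  mysql_file_sync(info->dfile, MYF(MY_WME));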