Analysis: The problem is that the SQL layer calls the handler API after the
storage engine has already returned an error state. InnoDB performs an
internal rollback when it notices a transaction error (e.g. a lock wait
timeout or a deadlock), and after that the transaction is naturally not in
a correct state to continue.
Fix: Do not continue fetch operations if the transaction is not started.
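As a rough illustration of the fix, the sketch below (standalone C++; transaction,
handler_fetch_next and the status codes are made-up names, not the server's handler
API) bails out of a fetch when the transaction is no longer started:

    #include <cstdio>

    // Hypothetical status codes standing in for handler API error constants.
    enum fetch_status { FETCH_OK, FETCH_END_OF_DATA, FETCH_ERR_NOT_STARTED };

    struct transaction {
      bool started;   // false once the engine has rolled the transaction back
    };

    // Sketch of a fetch entry point: bail out instead of touching engine
    // state that was already invalidated by the internal rollback.
    fetch_status handler_fetch_next(const transaction &trx) {
      if (!trx.started)                // rolled back (deadlock, lock wait timeout, ...)
        return FETCH_ERR_NOT_STARTED;  // do not continue the fetch
      // ... normal row fetch would happen here ...
      return FETCH_OK;
    }

    int main() {
      transaction trx{false};          // simulate a transaction already rolled back
      if (handler_fetch_next(trx) == FETCH_ERR_NOT_STARTED)
        std::puts("fetch refused: transaction is not started");
    }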
Description:- mysqlslap is a diagnostic utility designed to
emulate client load for a MySQL server and to report the
timing of each stage. This utility crashes when invalid
values are passed to the options 'num_int_cols_opt',
'num_chars_cols_opt' or 'engine'.
Analysis:- mysqlslap uses "parse_option()" to parse the
values specified for the options 'num_int_cols_opt',
'num_chars_cols_opt' and 'engine'. These options take
comma-separated values. In "parse_option()", the
comma-separated values are split and copied into a buffer
without checking the length of the string being copied. The
size of the buffer is defined by the macro HUGE_STRING_LENGTH,
whose value is 8196. So if the length of any of the
comma-separated values exceeds HUGE_STRING_LENGTH, the
result is a buffer overflow.
Fix:- A check is introduced in "parse_option()" to verify
whether the size of the string to be copied exceeds
HUGE_STRING_LENGTH. If it does, the error "Invalid value
specified for the option 'xxx'" is reported.
The option length was also calculated incorrectly for the last
comma-separated value, so that is fixed as well.
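A minimal sketch of the kind of length check described above (standalone C++;
copy_option_value and option_name are illustrative names, only HUGE_STRING_LENGTH
comes from the actual code):

    #include <cstdio>
    #include <cstring>

    #define HUGE_STRING_LENGTH 8196   // buffer size used for each comma-separated value

    // Copy one comma-separated token into a fixed-size buffer, rejecting
    // tokens that would overflow it (this mirrors the described fix).
    static bool copy_option_value(const char *start, const char *end,
                                  char *buff, const char *option_name) {
      const size_t len = (size_t)(end - start);   // length of this value only
      if (len >= HUGE_STRING_LENGTH) {
        std::fprintf(stderr, "Invalid value specified for the option '%s'\n",
                     option_name);
        return false;
      }
      std::memcpy(buff, start, len);
      buff[len] = '\0';
      return true;
    }

    int main() {
      char buff[HUGE_STRING_LENGTH];
      const char *value = "innodb,myisam";
      const char *comma = std::strchr(value, ',');
      // First token: "innodb". The last token runs to the end of the string
      // (end = value + strlen(value)), which is where the length used to be
      // miscalculated.
      if (copy_option_value(value, comma, buff, "engine"))
        std::printf("parsed: %s\n", buff);
    }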
jessie has a newer automake, so the build-depends could not be satisfied.
Refresh the build-depends; remove automake, libtool,
doxygen, texlive-latex-base and ghostscript.
Remove the code that checks for correct options for
CHECK/REPAIR VIEW and rewrite the parser grammar to perform
that check instead. This changes error messages as follows:
-ERROR 42000: You have an error ... near '' at line 1
+ERROR 42000: You have an error ... near 'quick' at line 1
Swap the key length when WORDS_BIGENDIAN is defined.
Make the IOFF structure depend on WORDS_BIGENDIAN.
modified: storage/connect/connect.cc
modified: storage/connect/xindex.h
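For illustration, a hedged sketch of the byte-order handling (standalone C++;
swap16 and the ioff layout here are assumptions, not the CONNECT sources):

    #include <cstdint>
    #include <cstdio>

    // Hypothetical 16-bit byte swap used when the on-disk index layout is
    // little-endian but the host is big-endian.
    static inline uint16_t swap16(uint16_t v) {
      return (uint16_t)((v >> 8) | (v << 8));
    }

    // Illustrative offset pair whose member order depends on the host byte
    // order, in the spirit of the IOFF change described above.
    struct ioff {
    #if defined(WORDS_BIGENDIAN)
      uint16_t high;   // field order swapped on big-endian hosts
      uint16_t low;
    #else
      uint16_t low;
      uint16_t high;
    #endif
    };

    int main() {
      uint16_t key_length = 0x0102;
    #if defined(WORDS_BIGENDIAN)
      key_length = swap16(key_length);   // value is stored little-endian on disk
    #endif
      std::printf("key length = %u\n", (unsigned)key_length);
    }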
Problem :
---------
This is a regression of Bug#19138298. In purge_node_t::validate_pcur
we try to get offsets for all columns of the clustered index from the
record stored in the persistent cursor. This fails when the stored
record does not contain all fields of the index: the stored record
keeps only the fields needed to uniquely identify the entry.
Solution :
----------
1. Use pcur.old_n_fields to get the fields that are actually stored.
2. Add a comment noting the dependency between the stored fields in the
   purge node ref and the stored cursor.
3. Return if the cursor record is not already stored, as it is not safe
   to access the cursor record directly without a latch.
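A simplified, self-contained illustration of point 1 (the stored_cursor type and
get_offsets are made-up stand-ins, not the InnoDB API): offsets are computed only
for the fields actually stored in the cursor, not for every column of the index.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Illustrative stand-in for a stored cursor position.
    struct stored_cursor {
      std::vector<size_t> field_lengths;            // only the unique-identifying fields
      size_t old_n_fields() const { return field_lengths.size(); }
    };

    // Compute byte offsets for at most the stored fields; asking for more
    // fields than were stored is what used to fail.
    static std::vector<size_t> get_offsets(const stored_cursor &pcur,
                                           size_t n_fields) {
      std::vector<size_t> offsets;
      size_t pos = 0;
      for (size_t i = 0; i < n_fields && i < pcur.old_n_fields(); i++) {
        offsets.push_back(pos);
        pos += pcur.field_lengths[i];
      }
      return offsets;
    }

    int main() {
      stored_cursor pcur{{4, 8}};                   // e.g. only the primary-key fields
      // Use the stored field count (point 1), not the full column count of the index.
      std::vector<size_t> offs = get_offsets(pcur, pcur.old_n_fields());
      std::printf("offsets computed for %zu stored fields\n", offs.size());
    }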
Reviewed-by: Marko Makela <marko.makela@oracle.com>
RB: 9139
Problem :
---------
This is a regression of Bug#19138298. During purge, if
btr_pcur_restore_position() fails, we set found_clust to FALSE
so that a possible clustered index record can be found in future
calls for the same undo entry. This, however, overwrites old_rec_buf
when pcur is initialized again in the next call, leaking the old buffer.
The leak is reproducible in a local environment with the
test provided along with Bug#19138298.
Solution :
----------
If btr_pcur_restore_position() fails, close the cursor.
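A minimal sketch of the cleanup pattern (hypothetical cursor type, not btr_pcur_t):
closing the cursor on a failed restore frees its saved-record buffer before the
cursor can be re-initialized.

    #include <cstdlib>
    #include <cstdio>

    // Hypothetical cursor holding a heap-allocated copy of the old record.
    struct cursor {
      unsigned char *old_rec_buf = nullptr;
    };

    static void store_position(cursor &c) {
      c.old_rec_buf = static_cast<unsigned char *>(std::malloc(128)); // saved record copy
    }

    static bool restore_position(cursor &) { return false; }  // simulate a failed restore

    // Closing the cursor releases old_rec_buf; re-initializing the cursor
    // without closing would overwrite the pointer and leak the buffer.
    static void close_cursor(cursor &c) {
      std::free(c.old_rec_buf);
      c.old_rec_buf = nullptr;
    }

    int main() {
      cursor pcur;
      store_position(pcur);
      if (!restore_position(pcur)) {
        close_cursor(pcur);   // the fix: close, so old_rec_buf is freed before any re-init
        std::puts("restore failed: cursor closed, no leak");
      }
    }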
Reviewed-by: Marko Makela <Marko.Makela@oracle.com>
Reviewed-by: Annamalai Gurusami <Annamalai.Gurusami@oracle.com>
RB: 9074
Was added in function TranslateSQLType.
modified: storage/connect/ha_connect.cc
modified: storage/connect/odbconn.cpp
modified: storage/connect/value.h
Add some tracing, in particular in the indexing routines.
modified: storage/connect/block.h
modified: storage/connect/ha_connect.cc
modified: storage/connect/plugutil.c
modified: storage/connect/xindex.cpp
modified: storage/connect/xindex.h
This is an addendum to the fix for MDEV-7026. The ARM memory model is
similar to that of PowerPC and thus needs the same semantics with
respect to memory barriers. That is, os_atomic_test_and_set_*_release()
must be a store with a release barrier followed by a full barrier.
Unlike on x86, using __sync_lock_test_and_set(), which is implemented
as "exclusive load with acquire barriers + exclusive store", is
insufficient in contexts where the os_atomic_test_and_set_*_release()
macros are used.
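A hedged sketch of the difference using GCC builtins (tas_release and
tas_acquire_only are illustrative names, not the actual macros): the release
variant needs a release store/exchange followed by a full barrier, while
__sync_lock_test_and_set() only provides acquire semantics.

    #include <cstdio>

    // Illustrative release-style test-and-set: exchange with release ordering,
    // followed by a full barrier, as the commit message describes.
    static inline long tas_release(volatile long *ptr, long new_val) {
      long old = __atomic_exchange_n(ptr, new_val, __ATOMIC_RELEASE);
      __atomic_thread_fence(__ATOMIC_SEQ_CST);   // full barrier after the release store
      return old;
    }

    // For contrast: __sync_lock_test_and_set() is only an acquire barrier,
    // which is why it is insufficient here on ARM (and PowerPC).
    static inline long tas_acquire_only(volatile long *ptr, long new_val) {
      return __sync_lock_test_and_set(ptr, new_val);
    }

    int main() {
      volatile long word = 0;
      std::printf("release TAS old=%ld\n", tas_release(&word, 1));
      std::printf("acquire TAS old=%ld\n", tas_acquire_only(&word, 2));
    }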
When the slave processes the master restart format_description event,
parallel replication needs to complete any prior events before processing
the restart event (which closes temporary tables and such).
This happens in wait_for_workers_idle(); however, it was not waiting long
enough. The wait used wait_for_prior_commit(), but at that point tables
can still be open, which led to an assertion failure in this case.
So change wait_for_workers_idle() to wait until all worker threads have
reached finish_event_group(), at which point all tables should have been
closed.
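A simplified sketch of the stronger wait using standard C++ threads (not the
replication code; the counter and condition variable are stand-ins): the
coordinator blocks until every worker has signalled that it finished its
event group.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    static std::mutex mtx;
    static std::condition_variable cond;
    static int workers_in_event_group = 0;

    // Called by a worker when it has fully finished its event group
    // (the analogue of reaching finish_event_group()).
    static void finish_event_group() {
      std::lock_guard<std::mutex> lk(mtx);
      --workers_in_event_group;
      cond.notify_all();
    }

    // Coordinator-side wait: only proceed with the restart event once no
    // worker is still inside an event group (tables closed, etc.).
    static void wait_for_workers_idle() {
      std::unique_lock<std::mutex> lk(mtx);
      cond.wait(lk, [] { return workers_in_event_group == 0; });
    }

    int main() {
      workers_in_event_group = 2;
      std::vector<std::thread> workers;
      for (int i = 0; i < 2; i++)
        workers.emplace_back([] { finish_event_group(); });
      wait_for_workers_idle();
      std::puts("all workers idle: safe to process the restart event");
      for (auto &t : workers) t.join();
    }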
from the PTOS argument. For this to work, finding the table options is now
split into HA_CONNECT functions and exported functions available from
outside ha_connect.
modified: storage/connect/ha_connect.cc
modified: storage/connect/libdoc.cpp
modified: storage/connect/mycat.h
modified: storage/connect/plgdbsem.h
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
The BIN table formats have been changed to handle floating point values
on both big-endian and little-endian machines.
modified: storage/connect/ha_connect.cc
modified: storage/connect/mysql-test/connect/r/bin.result
modified: storage/connect/mysql-test/connect/t/bin.test
modified: storage/connect/reldef.cpp
modified: storage/connect/tabdos.cpp
modified: storage/connect/tabdos.h
modified: storage/connect/tabfix.cpp
modified: storage/connect/tabfix.h
As the open(2) man page suggests, we should open the same file in the same
mode to get better performance. For some data files, we first call
os_file_create_simple_no_error_handling_func() to open them, and then call
os_file_create_func() again. We have to make sure that if direct I/O is
specified, both functions open the file with O_DIRECT.
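A hedged sketch of the point with plain POSIX open(), not the os_file_*
wrappers: every open of the same data file should agree on the O_DIRECT flag
when direct I/O is requested.

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE 1      // for O_DIRECT on Linux
    #endif
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    // Single helper deciding the flags, so every open of the same data file
    // uses the same mode (in particular the same O_DIRECT setting).
    static int open_data_file(const char *path, bool use_direct_io) {
      int flags = O_RDWR;
    #ifdef O_DIRECT
      if (use_direct_io)
        flags |= O_DIRECT;     // both the "simple" and the regular open paths get this
    #endif
      return open(path, flags);
    }

    int main() {
      int fd1 = open_data_file("ibdata1", true);   // first open (simple path)
      int fd2 = open_data_file("ibdata1", true);   // second open (regular path)
      if (fd1 >= 0) close(fd1);
      if (fd2 >= 0) close(fd2);
      std::puts("both opens used the same O_DIRECT setting");
      return 0;
    }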
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
RB: 8981
The MySQL server uses Henry Spencer's library for regular
expressions to support the REGEXP/RLIKE string operator.
This changeset adapts a recent fix from the upstream for
better 32-bit compatibility. (Note that we cannot simply use
the current upstream version as a drop-in replacement
for the version used by the server as the latter has
been extended to understand MySQL charsets etc.)
modified: storage/connect/tabfix.cpp
modified: storage/connect/reldef.cpp
The JSON array index (position) is now 0-based by default. This corresponds
to what all JSON applications and functions do. Also fix the ROWNUM calculation.
modified: storage/connect/jsonudf.cpp
modified: storage/connect/mysql-test/connect/r/json.result
modified: storage/connect/mysql-test/connect/t/json.test
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
modified: storage/connect/filamfix.cpp
Second version of the new BIN table field format (the first one was buggy)
modified: storage/connect/tabfix.cpp
modified: storage/connect/tabfix.h
Make bin.test not fail on big-endian machines
modified: storage/connect/mysql-test/connect/r/bin.result
modified: storage/connect/mysql-test/connect/t/bin.test
Fix a bug causing a wrong default offset to be generated when virtual
or special columns were placed between standard columns. Also
calculate the correct offset for BIN columns with the new field format.
modified: storage/connect/reldef.cpp
In particular, enable setting the length and the endianness.
This should solve all problems on IBM390s machines.
modified: storage/connect/ha_connect.cc
modified: storage/connect/tabfix.cpp
modified: storage/connect/tabfix.h
Make DBF tables usable on big-endian machines (test version)
modified: storage/connect/filamdbf.cpp
Ignore git files in storage/connect
modified: .gitignore
Do not use the format function attribute for the sql_print_xxx() family of
functions, as they use a MariaDB-specific extension of printf instead
of the one provided by the system.
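For illustration, a minimal example of GCC's format attribute on a hypothetical
logging function (not the server's sql_print_error()): with the attribute, the
arguments are checked against standard printf syntax, so an extended
MariaDB-style specifier would be flagged even though the server's own formatter
accepts it.

    #include <cstdarg>
    #include <cstdio>

    // With the format attribute the compiler checks arguments against the
    // *system* printf syntax; the commit drops this for sql_print_xxx()
    // because those functions use MariaDB's extended formatter instead.
    void log_standard(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

    void log_standard(const char *fmt, ...) {
      va_list ap;
      va_start(ap, fmt);
      std::vfprintf(stderr, fmt, ap);
      va_end(ap);
    }

    int main() {
      log_standard("%d rows\n", 42);      // fine: standard printf specifier
      // log_standard("%`s\n", "tbl");    // an extended specifier like this would
                                          // be rejected by -Wformat with the attribute
      return 0;
    }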
AVOID DEADLOCK AFTER RESTORE
Analysis
--------
Accessing a restored NDB table in an active multi-statement
transaction resulted in a "deadlock found" error.
MySQL Server needs to discover the metadata of an NDB table from the
data nodes after the table is restored from backup. Metadata
discovery happens on the first access to the restored table.
The current code mandates that this statement be the first one
in the transaction, because discovery needs an exclusive
metadata lock on the table. A lock upgrade at this point can
lead to an MDL deadlock, and the code was written at a time
when the MDL deadlock detector did not exist. When discovery
is attempted in a statement other than the first one in the
transaction, an ER_LOCK_DEADLOCK error is reported
pessimistically.
Fix:
---
Removed the constraint, as any potential deadlock will be
handled by the deadlock detector. Also changed the discovery code
to keep the metadata locks of the active transaction.
The same issue was present in the table auto-repair scenario; the
same fix is applied in the repair path as well.
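A schematic sketch of the control-flow change (all names hypothetical, not the
ndbcluster code): instead of pessimistically returning ER_LOCK_DEADLOCK when the
statement is not the first in the transaction, the exclusive MDL is simply
requested and a deadlock, if any, is reported by the deadlock detector.

    #include <cstdio>

    enum result { OK, ERR_LOCK_DEADLOCK };

    // Hypothetical stand-in for the MDL machinery: fails only when the
    // deadlock detector finds a real deadlock.
    static bool acquire_exclusive_mdl(const char * /*table*/) {
      return true;
    }

    static result discover_table(const char *table, bool first_stmt_in_trx) {
      // Old behaviour: refuse unless this was the first statement in the transaction.
      // if (!first_stmt_in_trx) return ERR_LOCK_DEADLOCK;   // pessimistic
      (void)first_stmt_in_trx;
      if (!acquire_exclusive_mdl(table))                     // new behaviour
        return ERR_LOCK_DEADLOCK;                            // only on a real deadlock
      // ... read the table metadata from the data nodes, keep the MDL ...
      return OK;
    }

    int main() {
      if (discover_table("restored_ndb_table", /*first_stmt_in_trx=*/false) == OK)
        std::puts("discovery succeeded inside an active transaction");
    }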