- Added mem_root to all calls to new Item
- Added private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item.
This saves us one call to current_thd per Item creation (see the
sketch below).
- Added a mandatory thd parameter to the constructors of Item (and all
derived classes).
- Added a thd parameter to all routines that may create items.
- Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
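A minimal sketch of the operator new pattern described above, assuming
simplified MEM_ROOT and Item types rather than the real MariaDB classes:

    #include <cstddef>
    #include <cstdlib>

    struct MEM_ROOT { /* simplified; the real one hands out arena blocks */ };

    // Simplified stand-in for MariaDB's alloc_root().
    void *alloc_root(MEM_ROOT *, size_t size) { return malloc(size); }

    class Item {
    public:
      // Placement form: `new (mem_root) Item(...)` allocates from the
      // arena, so Item creation never needs a current_thd lookup.
      static void *operator new(size_t size, MEM_ROOT *mem_root) noexcept {
        return alloc_root(mem_root, size);
      }
      // Matching placement delete (only used during constructor
      // unwinding); arena memory is freed in bulk, so it is a no-op.
      static void operator delete(void *, MEM_ROOT *) noexcept {}

    private:
      // Plain operator new is private and left undefined: a bare
      // `new Item` without a mem_root no longer compiles.
      static void *operator new(size_t size);
    };

    // Usage (thd->mem_root is hypothetical here):
    //   Item *item= new (thd->mem_root) Item();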
When wsrep is enabled, for any update on InnoDB tables the
corresponding keys are appended to galera's transaction writeset
(wsrep_append_keys()). However, for LOAD DATA this step was skipped
if binary logging was disabled or was not ROW-based.
As a result, updates from LOAD DATA into non-partitioned tables
replicated fine, since wsrep implicitly enables binary logging (if it
is not already explicitly enabled), but the same did not work for
partitioned tables, because binary logging is temporarily disabled
for them (ha_partition::write_row()).
Fixed by removing the unwanted conditions from the check.
Also backported some changes from 10.0-galera to make sure
wsrep_load_data_splitting affects LOAD DATA commands only.
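A hedged sketch of the condition change, over a simplified THD stand-in;
the real decision lives around wsrep_append_keys() and uses the server's
own state flags:

    // Whether to append keys to galera's transaction writeset.
    struct ThdState {
      bool wsrep_enabled;      // wsrep active for this transaction
      bool binlog_open;        // binary logging currently enabled
      bool binlog_row_format;  // binlog format is ROW
    };

    bool should_append_wsrep_keys(const ThdState &thd) {
      // Before the fix (conceptually):
      //   return thd.wsrep_enabled && thd.binlog_open &&
      //          thd.binlog_row_format;
      // which skipped LOAD DATA into partitioned tables, because
      // ha_partition::write_row() temporarily disables binary logging.
      // After the fix the binlog conditions are dropped:
      return thd.wsrep_enabled;
    }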
In the original code, one got an automatic DEFAULT value in some cases but not in others.
For example:
create table t1 (a int primary key) - No default
create table t2 (a int, primary key(a)) - DEFAULT 0
create table t1 SELECT .... - Default for all fields, even if they were defined as NOT NULL
ALTER TABLE ... MODIFY could sometimes add an unexpected DEFAULT value.
The patch is quite big because we had so many test cases that used
CREATE ... SELECT or CREATE ... (...PRIMARY KEY(xxx)), which no longer get an automatic DEFAULT.
Other things:
- Removed warnings from InnoDB when waiting for a semaphore (got this when testing things with --big)
The issue was twofold (in both MyISAM and Aria):
- OPTIMIZE and REPAIR failed if there was an old .TMM file around. As OPTIMIZE and REPAIR are protected against concurrent execution, I decided to change this so that we just truncate the file if it exists (see the sketch below).
- I had missed checking for an error condition when creation of the temporary index file failed. This caused the strange behaviour that it looked as if OPTIMIZE had worked once.
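A minimal sketch of the new .TMM handling, using plain POSIX calls
instead of the mysys wrappers the real code uses:

    #include <fcntl.h>
    #include <unistd.h>

    // Open (or create) the temporary index file for OPTIMIZE/REPAIR.
    // A stale .TMM file from an earlier aborted run is simply
    // truncated, since OPTIMIZE/REPAIR cannot run concurrently on the
    // same table.
    int open_tmm_file(const char *path) {
      int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0660);
      if (fd < 0)
        return -1;  // propagate the error; missing this check was the
                    // second half of the bug
      return fd;
    }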
modified: storage/connect/json.cpp
modified: storage/connect/json.h
modified: storage/connect/jsonudf.cpp
modified: storage/connect/mysql-test/connect/r/json_udf.result
modified: storage/connect/mysql-test/connect/t/json.test
modified: storage/connect/tabjson.cpp
Fix wrong calculation of Estimated Length when the table has virtual or special columns
modified: storage/connect/reldef.h
modified: storage/connect/tabdos.cpp
Fix wrong handling of null values in ODBCCOL::ReadColumn
modified: storage/connect/tabodbc.cpp
Fix crash when SetValue_char is called with a negative length value.
This can happen in odbconn.cpp when SQLFetch returns SQL_NO_TOTAL (-4) as length.
modified: storage/connect/odbconn.cpp
modified: storage/connect/value.cpp
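A hedged sketch of the guard against negative lengths; the function shape
is inferred from the commit text, not copied from the CONNECT engine:

    #include <cstring>

    #define SQL_NO_TOTAL (-4)  // ODBC: driver cannot determine length

    // Simplified stand-in for the CONNECT value class.
    struct CharValue {
      char buf[256];

      void SetValue_char(const char *p, int n) {
        // SQLFetch may report SQL_NO_TOTAL instead of a real length;
        // a negative n previously flowed into memcpy and crashed.
        if (n < 0)
          n = 0;
        if (n > (int) sizeof(buf) - 1)
          n = (int) sizeof(buf) - 1;
        memcpy(buf, p, (size_t) n);
        buf[n] = '\0';
      }
    };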
Analysis: The problem was that when a new tablespace is created, a
default encryption info is also created and stored in the tablespace.
Later, a new encryption info was created with the correct key_id, but
that did not affect the IV.
Fix: Push the encryption mode and key_id down to the lower levels and
create the correct encryption info when a new tablespace is created.
This fix does not contain a test case because currently an incorrect
encryption key causes page corruption and a lot of error messages in
the error log, causing mtr to fail.
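A hedged sketch of the fix's shape, with invented helper names; the real
code threads the mode and key_id through the tablespace creation path:

    #include <cstddef>
    #include <random>

    struct CryptInfo {
      int encryption_mode;    // requested mode (e.g. default/on/off)
      unsigned key_id;        // requested encryption key id
      unsigned char iv[16];   // initialization vector
    };

    static void fill_random_iv(unsigned char *buf, size_t n) {
      std::random_device rd;
      for (size_t i = 0; i < n; i++)
        buf[i] = (unsigned char) rd();
    }

    // Create the crypt info once, with the user-requested mode and
    // key_id. Previously a default CryptInfo was stored at tablespace
    // creation and a new one created later with the correct key_id,
    // which did not affect the already-stored IV.
    CryptInfo make_crypt_info(int encryption_mode, unsigned key_id) {
      CryptInfo info;
      info.encryption_mode = encryption_mode;
      info.key_id = key_id;
      fill_random_iv(info.iv, sizeof(info.iv));
      return info;
    }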
Fixed a failure when using a key cache of more than 45G with a
key_cache_block_size of 1024 or less.
The problem was that some of the arguments to my_multi_malloc() became
larger than 4G.
Fix:
- Introduced my_multi_malloc_large() that can handle big regions.
- Changed MyISAM and Aria key caches to use my_multi_malloc_large().
I didn't change the default my_multi_malloc(), as that would have been
too big a patch and we don't allocate 4G blocks anywhere else.
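A hedged sketch of a my_multi_malloc_large()-style helper, assuming the
usual mysys convention of (&ptr1, len1, &ptr2, len2, ..., NULL); the key
point of the fix is that lengths are size_t, so a single region may
exceed 4G:

    #include <cstdarg>
    #include <cstddef>
    #include <cstdlib>

    // Allocate several regions with one malloc(). Terminate the
    // argument list with a (void **) nullptr.
    void *multi_malloc_large(void **first_ptr, size_t first_len, ...)
    {
      va_list args;
      size_t total = first_len;
      void **ptr;

      // First pass: add up all region lengths.
      va_start(args, first_len);
      while ((ptr = va_arg(args, void **)) != nullptr)
        total += va_arg(args, size_t);
      va_end(args);

      char *block = static_cast<char *>(malloc(total));
      if (block == nullptr)
        return nullptr;

      // Second pass: hand out consecutive sub-regions to each pointer.
      char *pos = block;
      *first_ptr = pos;
      pos += first_len;
      va_start(args, first_len);
      while ((ptr = va_arg(args, void **)) != nullptr)
      {
        *ptr = pos;
        pos += va_arg(args, size_t);
      }
      va_end(args);
      return block;
    }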
PROBLEM
Whenever we insert into a unique secondary index, we take shared
locks on all possible duplicate records present in the table.
But during a REPLACE on the unique secondary index,
we take exclusive locks on all the duplicate records.
When records are deleted, they are first delete-marked
and later purged by the purge thread. While purging a
record we call lock_update_delete(), which in turn calls
lock_rec_inherit_to_gap() to inherit the locks of the deleted
records. In REPEATABLE READ mode we inherit all the locks
from the record to the next record, but in READ COMMITTED
mode we skip inheriting them as gap type locks. We make an
exception here if the lock on the record is in shared mode:
we assume that it was set during insert for a unique secondary
index and needs to be inherited to stop constraint violations.
We did not handle the case where exclusive locks are set during
REPLACE; we skipped inheriting the locks of those records, thereby
causing constraint violations.
FIX
While inheriting the locks, check whether the transaction is
allowed to do TRX_DUP_REPLACE/TRX_DUP_IGNORE; if so,
inherit the locks.
[ Reviewed by Jimmy #rb9709]
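A hedged sketch of the changed inheritance condition; names follow the
commit text, but the real logic in lock_rec_inherit_to_gap() is more
involved:

    enum LockMode { LOCK_S, LOCK_X };

    static const unsigned TRX_DUP_IGNORE  = 1;  // INSERT IGNORE
    static const unsigned TRX_DUP_REPLACE = 2;  // REPLACE

    struct Trx {
      bool repeatable_read;  // simplified isolation level
      unsigned duplicates;   // TRX_DUP_* flags of the lock holder
    };

    bool inherit_lock_to_gap(LockMode mode, const Trx &trx) {
      if (trx.repeatable_read)
        return true;  // inherit all locks as gap locks
      // READ COMMITTED normally skips gap inheritance. Exceptions:
      // shared locks set by duplicate checking on unique secondary
      // indexes, and (the fix) exclusive locks set by transactions
      // allowed to do TRX_DUP_REPLACE/TRX_DUP_IGNORE.
      return mode == LOCK_S ||
             (trx.duplicates & (TRX_DUP_REPLACE | TRX_DUP_IGNORE)) != 0;
    }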
The root cause is that x86 has a stronger memory model than ARM
processors, and the GCC builtins didn't issue the correct fences when
setting/unsetting the lock word, in particular during mutex release.
The solution is to rewrite the atomic TAS operations: replace '__sync_'
with '__atomic_' where possible.
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
Reviewed-by: Bin Su <bin.x.su@oracle.com>
Reviewed-by: Debarun Banerjee <debarun.banerjee@oracle.com>
Reviewed-by: Krunal Bauskar <krunal.bauskar@oracle.com>
RB: 9782
RB: 9665
RB: 9783
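An illustration of the '__sync_' to '__atomic_' rewrite for a TAS lock
word; a generic sketch, not the actual InnoDB mutex code:

    #include <cstdint>

    typedef volatile uint32_t lock_word_t;

    // Acquire: exchange the lock word with 1; a non-zero return means
    // the mutex was already taken.
    // Replaces __sync_lock_test_and_set(lw, 1).
    inline uint32_t tas_lock(lock_word_t *lw) {
      return __atomic_exchange_n(lw, 1u, __ATOMIC_ACQUIRE);
    }

    // Release: replaces __sync_lock_release(lw). On x86 the difference
    // is invisible, but on ARM's weaker memory model the explicit
    // RELEASE ordering ensures critical-section writes are published
    // before the lock word is cleared.
    inline void tas_unlock(lock_word_t *lw) {
      __atomic_store_n(lw, 0u, __ATOMIC_RELEASE);
    }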
Analysis: The handler used the table flag HA_REQUIRE_PRIMARY_KEY, but a
bug in the sql_table.cc function mysql_prepare_create_table internally
marked a secondary key with NOT NULL columns as a unique key and then
did not fail on the requirement that the table should have a primary
key or a unique key.
Analysis: The handler table flag HA_REQUIRE_PRIMARY_KEY alone is not
enough to force a primary or unique key if the table has at least one
NOT NULL column and a secondary key on that column.
Fix: Add an additional check that the table really has a primary key or
a unique key in InnoDB terms.
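A hedged sketch of the extra validation over simplified structures; the
real check sits in mysql_prepare_create_table() and uses the server's
KEY flags:

    // HA_REQUIRE_PRIMARY_KEY must only be satisfied by keys the user
    // actually declared; an internally promoted NOT NULL secondary key
    // does not count.
    struct KeyDef {
      bool is_primary;           // declared PRIMARY KEY
      bool is_unique;            // declared UNIQUE
      bool internally_promoted;  // NOT NULL secondary key promoted
    };

    bool satisfies_require_primary_key(const KeyDef *keys, unsigned n) {
      for (unsigned i = 0; i < n; i++)
        if ((keys[i].is_primary || keys[i].is_unique) &&
            !keys[i].internally_promoted)
          return true;
      return false;
    }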
MDEV-8409: Changing file-key-management-encryption-algorithm causes crash and no real info why
Analysis: Both bugs have two different error cases. First, at startup,
when the server reads the latest checkpoint but the requested
key_version, key management plugin, or encryption algorithm or method
is not found, this leads to a corrupted log entry. Second, similarly,
when reading the system tablespace, if the requested key_version, key
management plugin, or encryption algorithm or method is not found, this
leads to buffer pool page corruption.
Fix: First, when reading the checkpoint at startup, check whether the
log record may be encrypted; if we find that it could be, print an
error message and do not start the server. Second, if a page in the
buffer pool seems corrupted but we find that there is crypt_info, print
an additional error message before asserting.
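A hedged sketch of the first check, with invented helper names; the real
code lives in InnoDB's redo log recovery path:

    #include <cstdio>

    // Hypothetical stub: in reality this queries the key management
    // plugin for the requested key version.
    static bool encryption_key_available(unsigned key_version) {
      return key_version == 1;  // pretend only version 1 exists
    }

    // Refuse to start if the checkpoint may be encrypted with a key we
    // cannot fetch, instead of replaying garbage as log records.
    bool check_checkpoint_encryption(unsigned key_version) {
      if (key_version != 0 && !encryption_key_available(key_version)) {
        fprintf(stderr,
                "InnoDB: Redo log is encrypted with key_version %u but "
                "the key management plugin cannot provide that key. "
                "Aborting startup.\n", key_version);
        return false;
      }
      return true;
    }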
There are several different ways to incorrectly define a
foreign key constraint. In many cases, the error messages
produced by earlier MariaDB versions for these cases
were not very clear or helpful. This patch improves
the warning messages produced by foreign key parsing.
INSERT INDEX RECORD
Problem:
=======
The IBUF_BITMAP_FREE bits in the ibuf bitmap array are used to indicate
the free space available in a leaf page. The IBUF_BITMAP_FREE bits
indicated more free space than actually existed on the leaf page.
Solution:
=========
Do not update the ibuf bitmap array for a secondary index leaf page when
the insert operation is done by updating an existing delete-marked
record in the index.
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 9544
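A hedged, much-simplified sketch of the rule the fix enforces; the real
change is in InnoDB's insert path and the ibuf code:

    struct LeafPageState {
      unsigned free_bytes;        // actual free space on the leaf page
      unsigned ibuf_bitmap_free;  // free space per IBUF_BITMAP_FREE
    };

    // Called after inserting into a secondary index leaf page.
    // Converting an existing delete-marked record back into a live one
    // does not change the free space the bitmap tracks; updating the
    // bitmap in that case made IBUF_BITMAP_FREE claim more free space
    // than the page really had.
    void update_ibuf_bitmap_after_insert(LeafPageState *page,
                                         bool reused_delete_marked_rec,
                                         unsigned new_free_bytes) {
      page->free_bytes = new_free_bytes;
      if (reused_delete_marked_rec)
        return;  // the fix: skip the bitmap update in this case
      page->ibuf_bitmap_free = new_free_bytes;  // simplified encoding
    }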