Adding Converter_double_to_longlong and reusing it in:
1. Field_longlong::store(double nr)
2. Field_double::val_int()
3. Item::val_int_from_real()
4. Item_dyncol_get::val_int()
As a good side effect, overflow during conversion in the mentioned
val_xxx() methods now produces exactly the same warning.
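For illustration, a minimal sketch of such a shared converter (signed case
only; the helper name and rounding policy here are assumptions, not the
actual class):

  #include <cmath>
  #include <cstdint>

  // All range checking and the overflow signal live in one place, so
  // every caller can raise the identical truncation warning.
  static int64_t double_to_longlong(double nr, bool *overflow)
  {
    *overflow= false;
    if (std::isnan(nr))
    { *overflow= true; return 0; }
    nr= std::rint(nr);                          // assumed rounding policy
    if (nr < -9223372036854775808.0)            // below LLONG_MIN
    { *overflow= true; return INT64_MIN; }
    if (nr >= 9223372036854775808.0)            // 2^63 and above
    { *overflow= true; return INT64_MAX; }
    return (int64_t) nr;
  }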
It's not enough to look ahead for NOT NULL IS: this also breaks queries like
SELECT NOT NULL <=> NULL;
and it adds no value anymore, as the grammar now requires parentheses
- Force usage of () around complex DEFAULT expressions
- Give error if DEFAULT expression contains invalid characters
- Don't use const_charset_conversion for stored Item_func_sysconst expressions,
as the result is not constant across different executions
- Fixed Item_func_user() to not store calculated value in str_value
MDEV-10134 Add full support for DEFAULT
- Added support for using tables with MySQL 5.7 virtual fields,
including MySQL 5.7 syntax
- Better error messages also for old cases
- CREATE ... SELECT now also updates timestamp columns
- Blob can now have default values
- Added a new system variable "check_constraint_checks" to turn off
CHECK constraint checking if needed.
- Removed some engine-independent tests in the vcol suite so that only MyISAM is tested
- Moved some tests from 'include' to 't'. Should some day be done for all tests.
- FRM version increased to 11 if one uses virtual fields or constraints
- Changed to use a bitmap to check if a field has been assigned a value, instead of
setting the HAS_EXPLICIT_VALUE bit in the field flags (see the sketch after this list)
- Expressions can now be up to 65K in total
- Ensure we are not referring to uninitialized fields when handling virtual fields or defaults
- Changed check_vcol_func_processor() to return a bitmap of used types
- Had to change some functions that calculated their cached value in fix_fields() to do
this in val() or get_date() instead.
- store_now_in_TIME() now takes a THD argument
- fill_record() now updates default values
- Add a lookahead for NOT NULL, to be able to handle DEFAULT 1+1 NOT NULL
- Automatically generate a name for constraints that don't have a name
- Added support for ALTER TABLE DROP CONSTRAINT
- Ensure that partition functions register virtual fields used. This fixes
some bugs when using virtual fields in a partitioning function
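A rough illustration of the bitmap approach from the list above (generic
types and hypothetical names, not the server's MY_BITMAP code):

  #include <cstddef>
  #include <vector>

  // One bit per field of the TABLE, set when the statement assigns the
  // field an explicit value; replaces the HAS_EXPLICIT_VALUE field flag.
  struct TableWriteState
  {
    std::vector<bool> has_value_set;             // sized to the field count

    explicit TableWriteState(size_t fields) : has_value_set(fields, false) {}

    void mark_assigned(size_t field_index)
    { has_value_set[field_index]= true; }

    // DEFAULT expressions and virtual fields are evaluated only for
    // fields that were never explicitly assigned.
    bool needs_default(size_t field_index) const
    { return !has_value_set[field_index]; }
  };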
Use Item->neg() to generate negative Item_num's
instead of wrapping them in Item_func_neg(Item_num); a sketch of the folding
follows below.
Based on the following commit:
Author: Monty <monty@mariadb.org>
Date: Mon May 30 22:44:00 2016 +0300
Make negative number their own token
The negation (-) operator will call Item->neg() on underlying numeric constants
and remove itself (like the NOT() function does today for other NOT functions).
This simplifies things:
- -1 is no longer an expression but a basic_const_item
- improves the optimizer
- DEFAULT -1 doesn't need special handling anymore
- When we add DEFAULT expressions, -1 will be treated exactly like 1
- printing of items no longer puts parentheses around all negative numbers
Other things fixed:
- Fixed that a longlong converted to a decimal gets a more appropriate size
- Fixed that "-0.0" read into a decimal is interpreted as 0.0
Print default values for BLOB's.
This is a partial commit of automatic changes, made to keep the real commit smaller.
All changes here are related to the fact that we now print DEFAULT NULL for blob and
text fields, like we do for all other fields.
The State column of SHOW PROCESSLIST can contain NULL for threads that are
being initialized (between when a new connection was acknowledged and when it
starts waiting for network data).
Fixed the test case to handle such cases by waiting for State to become an
empty string.
Since the wsrep_sync_wait and wsrep_causal_reads variables are related,
they are always kept in sync whenever one of them changes.
The same is attempted on server start, where wsrep_sync_wait gets updated
based on wsrep_causal_reads' value. But since wsrep_causal_reads
is OFF by default, wsrep_sync_wait's value got modified and lost
its WSREP_SYNC_WAIT_BEFORE_READ bit.
Fixed by syncing the wsrep_sync_wait and wsrep_causal_reads values
individually on server start in mysqld_get_one_option(), based
on the command-line arguments used.
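A sketch of the per-option synchronization (the bit value follows this
message; the function shape is an assumption):

  // wsrep_sync_wait is a bitmask; wsrep_causal_reads maps to its first bit.
  static const unsigned WSREP_SYNC_WAIT_BEFORE_READ= 1;

  unsigned wsrep_sync_wait= 0;
  bool     wsrep_causal_reads= false;

  // Called per command-line option, so each variable is synced only when
  // it was actually given, instead of unconditionally deriving one from
  // the other's (possibly default) value.
  void on_option_seen(bool causal_reads_given, bool sync_wait_given)
  {
    if (causal_reads_given)
    {
      if (wsrep_causal_reads)
        wsrep_sync_wait|=  WSREP_SYNC_WAIT_BEFORE_READ;
      else
        wsrep_sync_wait&= ~WSREP_SYNC_WAIT_BEFORE_READ;
    }
    if (sync_wait_given)
      wsrep_causal_reads= (wsrep_sync_wait & WSREP_SYNC_WAIT_BEFORE_READ) != 0;
  }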
In CTAS, handlers get registered under the statement transaction
(st_transactions::stmt), while ha_fake_trx_id(), used by CTAS,
looked under the standard transaction (st_transactions::all) for
registered handlers, and thus it failed to grab a fake transaction
ID. As a result, with no valid transaction ID, wsrep commit failed
with an error.
ha_fake_trx_id() now looks for handlers registered under 'stmt'
in case 'all' is empty. Also modified the logic to print warning
only once if none of the registered handlers have fake_trx_id.
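In outline, the adjusted lookup (hypothetical simplified types; only the
fallback order and the warn-once behavior follow this message):

  #include <cstdio>

  struct Handler { bool has_fake_trx_id; Handler *next; };
  struct Transaction { Handler *ha_list; };
  struct Transactions { Transaction all, stmt; };

  // CTAS registers handlers under 'stmt' only, so fall back to it when
  // 'all' is empty; warn once if no handler can supply a fake trx id.
  void fake_trx_id(Transactions *trans)
  {
    Handler *h= trans->all.ha_list ? trans->all.ha_list
                                   : trans->stmt.ha_list;
    bool found= false;
    for (; h && !found; h= h->next)
      found= h->has_fake_trx_id;
    if (!found)
      std::fprintf(stderr,
                   "warning: no registered handler has a fake_trx_id\n");
  }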
On wsrep_cluster_address update, the node restarts replication
and attempts to connect to the new address. In this process it
makes a call to the wsrep provider's connect API, which could lead
to a segfault if the wsrep provider is not loaded (wsrep_on=OFF).
Fixed by making sure that it proceeds only if a provider is
loaded.
commit ef92aaf9ec
Author: Jan Lindström <jan.lindstrom@mariadb.com>
Date: Wed Jun 22 22:37:28 2016 +0300
MDEV-10083: Orphan ibd file when playing with foreign keys
Analysis: row_drop_table_for_mysql did not allow dropping a
referenced table, even in the case when the actual creation of the
referenced table was not successful, if foreign_key_checks=1.
Fix: Allow dropping the referenced table even if foreign_key_checks=1
when the actual table creation returned an error.
Decimals with float, double and decimal now work the following way:
- DECIMAL_NOT_SPECIFIED is used when declaring DECIMALS without a firm number
of decimals. It's only used in asserts and my_decimal_int_part.
- FLOATING_POINT_DECIMALS (31) is used to mark that a FLOAT or DOUBLE
was defined without decimals. This is regarded as a floating point value.
- Max decimals allowed for FLOAT and DOUBLE is FLOATING_POINT_DECIMALS-1
- Clients assume that float and double values with decimals >= NOT_FIXED_DEC are
floating point values (no fixed decimals)
- In the .frm, decimals=FLOATING_POINT_DECIMALS is used to define
floating point for float and double (31, like before)
To ensure compatibility with old clients we do the following (summarized in
the sketch at the end of this message):
- When storing float and double, we change NOT_FIXED_DEC to
FLOATING_POINT_DECIMALS.
- When creating fields from the .frm, we change FLOATING_POINT_DECIMALS
back to NOT_FIXED_DEC for float and double
- When sending the definition for a float/double field without decimals
to the client as part of a result set, we convert NOT_FIXED_DEC to
FLOATING_POINT_DECIMALS.
- variance() and std() have been changed to limit the decimals to
FLOATING_POINT_DECIMALS - 1, so that the double is not treated as a
floating point value. (This was to preserve compatibility.)
- FLOAT and DOUBLE still have 30 as max number of decimals.
Bugs fixed:
variance() printed more decimals than we support for double values.
New behaviour:
- Strings now have 38 decimals instead of 30 when converted to decimal
- CREATE ... SELECT with a decimal with > 30 decimals will create a column
with a smaller range than before as we are trying to preserve the number of
decimals.
Other changes:
- We are now using the obsolete bit FIELDFLAG_LEFT_FULLSCREEN to specify
decimals > 31
- NOT_FIXED_DEC is now declared in one place
- For clients, NOT_FIXED_DEC is always 31 (to ensure compatibility).
On the server NOT_FIXED_DEC is DECIMAL_NOT_SPECIFIED (39)
- AUTO_SEC_PART_DIGITS is taken from DECIMAL_NOT_SPECIFIED
- DOUBLE conversion functions are now using DECIMAL_NOT_SPECIFIED instead of
NOT_FIXED_DEC
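The compatibility mapping can be summarized in a sketch (constant values
are those quoted above; the helper names are illustrative):

  typedef unsigned int uint;

  static const uint FLOATING_POINT_DECIMALS= 31;  // .frm / client marker
  static const uint DECIMAL_NOT_SPECIFIED  = 39;  // server-side NOT_FIXED_DEC

  // Storing a float/double definition into the .frm:
  uint decimals_to_frm(uint dec)
  {
    return dec == DECIMAL_NOT_SPECIFIED ? FLOATING_POINT_DECIMALS : dec;
  }

  // Creating a field back from the .frm:
  uint decimals_from_frm(uint dec)
  {
    return dec == FLOATING_POINT_DECIMALS ? DECIMAL_NOT_SPECIFIED : dec;
  }

  // Sending metadata to a client, which assumes >= 31 means "floating":
  uint decimals_to_client(uint dec)
  {
    return dec >= DECIMAL_NOT_SPECIFIED ? FLOATING_POINT_DECIMALS : dec;
  }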
Fix the replication failure caused by incorrect initialization of
THD::invoker_host and THD::invoker_user.
Breakdown of the failure is this:
Query_log_event::host and Query_log_event::user can have their
LEX_STRING's set to length 0, but the actual str member points to
garbage. Code afterwards copies Query_log_event::host and user to
THD::invoker_host and THD::invoker_user.
Calling code for these members expects both members to be initialized,
e.g. that the str member is a NULL-terminated string and that length has the
appropriate size.
The bug is apparent when the username is longer than the rolename.
It was caused by a simple typo that made a memcmp call compare a
different number of bytes than necessary.
The fix was proposed by Igor Pashev. I have reviewed it and it is the
correct approach. Test case introduced by me, using the details provided
in the MDEV.
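The general shape of such a typo, illustratively (not the actual
grant-table code):

  #include <cstring>

  // Buggy: compares only role_len bytes, so a longer user name that merely
  // starts with the role name compares equal ("alice" vs role "ali").
  bool names_match_buggy(const char *user, size_t user_len,
                         const char *role, size_t role_len)
  {
    (void) user_len;                          // ignored by mistake
    return memcmp(user, role, role_len) == 0;
  }

  // Fixed: lengths must match before the bytes are compared.
  bool names_match_fixed(const char *user, size_t user_len,
                         const char *role, size_t role_len)
  {
    return user_len == role_len && memcmp(user, role, user_len) == 0;
  }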
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
1. remove unnecessary rpl-tokudb combination file.
2. fix rpl_ignore_table to clean up properly (not leave test
grants in memory)
3. check_temp_dir() is supposed to set the error in stmt_da - do
it even when called multiple times; this fixes a crash when
rpl.rpl_slave_load_tmpdir_not_exist is run twice.
Cherry-picked commits from codership/mysql-wsrep.
MW-284: Slave I/O retry on ER_COM_UNKNOWN_ERROR
The slave would treat ER_COM_UNKNOWN_ERROR as a fatal error and stop.
The fix here is to treat it as a network error and rely on the
built-in mechanism to retry.
MW-284: Add an MTR test
The combination of --remove_file and --write_file on the .expect file creates
a race condition which can be hit by MTR, which reads the file in a loop.
Instead, the .expect file should be changed with --append_file.
This was fixed in 10.x, but in 5.5 the sporadic failure still affected buildbot.
Fixed 3 test files which used the problematic combination.
mysqld maintains a list of TABLE objects for all temporary
tables created within a session in THD. Here each table is
represented by a TABLE object.
A query referencing a particular temporary table more
than once, however, failed with an ER_CANT_REOPEN_TABLE error
because a TABLE_SHARE was allocated together with the TABLE,
so temporary tables always had only one TABLE per TABLE_SHARE.
This patch lifts this restriction by separating TABLE and
TABLE_SHARE objects and storing TABLE_SHAREs for temporary
tables in a list in THD, and TABLEs in a list within their
respective TABLE_SHAREs.
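In outline, the new ownership model (illustrative struct shapes, not the
actual server types):

  #include <list>
  #include <string>

  // Before: one TABLE embedded its TABLE_SHARE, so a temporary table
  // could be open only once per statement. Now the share owns a list of
  // TABLEs, one per open use.
  struct Table
  {
    // per-use handler state; several of these may reference one share
  };

  struct TmpTableShare
  {
    std::string table_name;        // the definition, shared by all opens
    std::list<Table> open_tables;
  };

  struct SessionTempTables          // lives in THD
  {
    std::list<TmpTableShare> shares;

    // Re-opening a temp table adds another TABLE to its share instead of
    // failing with ER_CANT_REOPEN_TABLE.
    Table *open(TmpTableShare &share)
    {
      share.open_tables.emplace_back();
      return &share.open_tables.back();
    }
  };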
Set index corruption directly and persistently in the InnoDB dictionary
for this test. Do not allow creating new indexes if one of the
existing indexes is already marked as corrupted.
Post-fix #2:
- Update test results
- Make the optimization conditional on an @@optimizer_switch flag.
- The optimization is now disabled by default, so .result files
are changed back to be what they were before the MDEV-8989 patch.
- Validate the specified wsrep_start_position value by also
checking the return status of wsrep->sst_received. This also
ensures that changes to wsrep_start_position are not allowed
when the node is not in the JOINING state.
- Do not allow decrease in seqno within same UUID.
- The initial checkpoint in SEs should be [0...:-1].
Revert following bug fix:
Bug#20685029: SLAVE IO THREAD SHOULD STOP WHEN DISK IS
FULL
Bug#21753696: MAKE SHOW SLAVE STATUS NON BLOCKING IF IO
THREAD WAITS FOR DISK SPACE
This fix results in a deadlock between slave IO thread
and SQL thread.
including the test case:
https://github.com/mysql/mysql-server/commit/520aedfe
INNODB: "DATA DIRECTORY" OPTION OF CREATE TABLE FAILS WITH PWRITE() OS
ERROR 22
Fix for version mysql-5.6
PROBLEM
========
From version mysql-5.6.27 onwards, InnoDB fails to create a table
with an explicit 'data directory' option when innodb_flush_method
is set to O_DIRECT. While creating the link file we get a pwrite
error 22 due to the alignment restrictions imposed by the O_DIRECT
flag, which is being set for the link file created.
FIX
===
Fixed the above issue by using file IO functions to create
the link file that are not subject to the O_DIRECT flag's
alignment restrictions.
Reviewed-by: Kevin Lewis <kevin.lewis@oracle.com>
Reviewed-by: Shaohua Wang <shaohua.wang@oracle.com>
RB: 11387
An additional warning saying "tc-log cannot be enabled"
is emitted when InnoDB is installed at runtime on mysqld
built with wsrep-patch (-DWITH_WSREP=ON).
This happens because installing InnoDB increments the
total number of 2PC-capable engines, and with the wsrep patch
already enabled the total count goes above 1. Even though
this condition is sufficient to enable tc-logging, it is
not permitted at runtime, and thus the warning.
Updated the testcase to avoid the warning.
take MDL_SHARED_WRITE instead of MDL_SHARED_NO_READ_WRITE
for OPTIMIZE TABLE. For engines that need a stronger
lock (like MyISAM), reopen the table with
MDL_SHARED_NO_READ_WRITE.
INSERTS/UPDATES ON TEMPORARY TABLES
Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON
READ-ONLY SERVERS
Problem:
========
Running 5.5.14 in read-only mode, we can create temporary tables
but cannot insert or update records in them. When we
try, we get Error 1290: The MySQL server is running with the
--read-only option so it cannot execute this statement.
Analysis:
=========
This bug is very specific to binlog being enabled and
binlog-format being stmt/mixed. Standalone server without
binlog enabled or with row based binlog-mode works fine.
How standalone server and row based replication work:
=====================================================
A standalone server and row-based replication mark
transactions as read_write only when they modify
non-temporary tables as part of the current transaction.
Because of this, when the code enters the commit phase it checks
whether a transaction is read_write or not. If the transaction
is read_write and global read-only mode is enabled, the
transaction fails with a 'server is running in read-only mode'
error.
In statement-based mode, at the time of writing
to the binary log a binlog handler is created, and it is always
marked as read_write. In the case of temporary tables, even
though the engine did not mark the transaction as read_write,
the new transaction started by the binlog handler is
considered read_write.
Hence when the code enters the commit phase it finds
one handler with a read_write transaction even when
we are only modifying a temporary table. This causes the server
to throw an error when global read-only mode is enabled.
Fix:
====
At commit time in ha_commit_trans(), if a read_write
transaction is found, we should check whether this transaction
comes from a handler other than the binlog handler. This
ensures that the statement is blocked only when a genuine
read_write transaction is sent by an engine apart from the
binlog handler.
check_user_can_set_role() used find_user_exact() to get the
permissions for the SET ROLE NONE command, which returned NULL too often,
for instance when the user authenticated as 'user'@'%'.
Now we use find_user_wild() instead.
The problem was that in-place online ALTER TABLE was used on a table
that had a mismatch between the MySQL frm file and the InnoDB data dictionary.
Fixed so that the traditional "Copy" method is used if the MySQL frm file
and the InnoDB data dictionary are not consistent.
FAILURES
Analysis:
=========
The test script did not ensure that "assert_grep.inc" is
called only after the 'Disk is full' error has been written to the
error log.
The test checks for the "Queueing master event to the relay log"
state. But this state is set before invoking 'queue_event'.
The actual 'Disk is full' error happens at a much lower level.
It can happen that we reset the debug point
before the actual disk-full simulation occurs, so the
"Disk is full" message never appears in the error log.
To guarantee determinism we need a mechanism whereby,
after writing the "Disk is full" error message to the error
log, we signal the test to execute SHOW SLAVE STATUS and then reset
the debug point.
Fix:
===
Added a debug sync point to make the script deterministic.
In some cases, MariaDB 10.0 could write a master.info file that was read
incorrectly by 10.1 and could cause the server to fail to start after an upgrade.
(If writing a new master.info file that is shorter than the old, extra
junk may remain at the end of the file. This is handled properly in
10.1 with an END_MARKER line, but this line is not written by
10.0. The fix here is to make 10.1 robust at reading the master.info
files written by 10.0).
Fix several things around reading master.info and read_mi_key_from_file():
- read_mi_key_from_file() did not distinguish between a line with and
without an equals sign ('=').
- If a line was empty, read_mi_key_from_file() would incorrectly return
the key from the previous call.
- An extra using_gtid=X line left-over by MariaDB 10.0 might incorrectly
be read and overwrite the correct value.
- Fix an incorrect usage of strncmp() which should be strcmp() (see the sketch below).
- Add test cases.
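A sketch of the stricter parsing (an illustrative helper, not the actual
read_mi_key_from_file() code):

  #include <cstring>

  // A line yields a key only if it contains '='; an empty or key-less
  // line must not hand back the key from the previous call.
  bool read_key_line(const char *line, char *key, size_t key_size)
  {
    const char *eq= strchr(line, '=');
    if (!eq || eq == line)
      return false;                   // no '=' sign: not a key/value line
    size_t len= (size_t)(eq - line);
    if (len >= key_size)
      return false;
    memcpy(key, line, len);
    key[len]= '\0';                   // always overwrite stale contents
    return true;
  }

  // Exact key matching, so "using_gtid" does not also match longer keys:
  // strcmp(key, "using_gtid") == 0 rather than
  // strncmp(key, "using_gtid", 10) == 0.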
The previous fix using a wait condition did not work because of MDEV-9867, so
we have to use a conditional sleep instead. The sleep will only happen if the test
is executed after another one which also ran a buffer pool dump without a
server restart between the two tests.
The cause of the issue is that while DROP DATABASE holds the
metadata lock and is in progress through its
execution, a concurrently running CREATE FUNCTION successfully checks
for the existence of the database and then
waits on the metadata lock. Once DROP DATABASE writes to the
binlog and finally releases the metadata lock on the schema
object, the CREATE FUNCTION waiting on the metadata lock
proceeds along its code path, succeeds, and writes to the binlog.
Added a 5-minute timeout before automatically removing threads from the thread
cache.
This solves a problem with jemalloc, which is slow with a small
thread cache, and also makes the thread cache big enough that most users
don't have to touch it.
LOG-SLOW-SLAVE-STATEMENTS NOT DISPLAYED.
These parameters were moved from the command-line options to
the system variables section. Treatment of
opt_log_slow_slave_statements was changed to allow
dynamic changes of the variable.
delete deferred events after they're executed
(otherwise they can be executed again for a sub-statement)
See also
commit 0e78d1d
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date: Wed Mar 20 11:20:47 2013 +0530
BUG#15850951-DUPLICATE ERROR IN REPLICATION WITH SLAVE
TRIGGERS AND AUTO INCREMENT
Some statements are always replicated in STATEMENT binlog format.
So upon their execution, the current binlog format is temporarily
switched to STATEMENT even though the session's format is different.
This state, stored in THD's current_stmt_binlog_format, was getting
incorrectly masked by wsrep_forced_binlog_format, causing assertions
and unintended generation of row events.
Backported galera.galera_forced_binlog_format and added a test
specific to this case.
In RHEL7/RHEL7.1 the libcrack behavior seems to have been modified so that
the "foobar" password is considered bad (due to the descending "ba") earlier than
expected. For details google for cracklib-2.9.0-simplistic.patch.
Adjusted the affected passwords not to contain descending or ascending sequences.
Analysis:
-- InnoDB has n (>0) redo-log files.
-- In the first page of the redo log there are 2 checkpoint records at fixed locations (the checkpoint is not encrypted)
-- Every checkpoint record carries up to 5 crypt_keys containing the keys used for encryption/decryption
-- On crash recovery we read all checkpoints in every file
-- Recovery starts by reading from the latest checkpoint forward
-- Problem is that the latest checkpoint might not always contain the key we need to decrypt all the
redo-log blocks (see MDEV-9422 for one example)
-- Furthermore, there is no way to identify whether a log block is corrupted or encrypted
For example, a checkpoint can contain the following keys:
write chk: 4 [ chk key ]: [ 5 1 ] [ 4 1 ] [ 3 1 ] [ 2 1 ] [ 1 1 ]
so over time we could have a checkpoint
write chk: 13 [ chk key ]: [ 14 1 ] [ 13 1 ] [ 12 1 ] [ 11 1 ] [ 10 1 ]
killall -9 mysqld causes crash recovery, and on crash recovery we read as
many checkpoints as there are log files, e.g.
read [ chk key ]: [ 13 1 ] [ 12 1 ] [ 11 1 ] [ 10 1 ] [ 9 1 ]
read [ chk key ]: [ 14 1 ] [ 13 1 ] [ 12 1 ] [ 11 1 ] [ 10 1 ] [ 9 1 ]
This is problematic, as we could still scan log blocks e.g. from checkpoint 4 and we no
longer know the correct key.
CRYPT INFO: for checkpoint 14 search 4
CRYPT INFO: for checkpoint 13 search 4
CRYPT INFO: for checkpoint 12 search 4
CRYPT INFO: for checkpoint 11 search 4
CRYPT INFO: for checkpoint 10 search 4
CRYPT INFO: for checkpoint 9 search 4 (NOTE: NOT FOUND)
For every checkpoint, the code generated a new encryption key based on the key
from the encryption plugin and random numbers. Only the random numbers are
stored in the checkpoint.
Fix: Generate only one key for every log file. If the checkpoint contains only
one key, use that key to encrypt/decrypt all log blocks. If the checkpoint
contains more than one key (this is the case for databases created
using MariaDB server versions 10.1.0 - 10.1.12 if log encryption was
used), and the checkpoint_no we look for is found among the keys on the
checkpoint, we use that key to decrypt the log block. For encryption we
always use the first key. If the checkpoint_no we look for is not found
among the keys on the checkpoint, we use the first key.
The code was also modified so that if the log is not encrypted, we do not
generate any empty keys. If we have a log block and no keys are found in the
checkpoint, we assume that the log block is unencrypted. Log corruption or
missing keys is detected by comparing log block checksums. If we have
keys but the current log block checksum is correct, we again assume the
log block to be unencrypted. This is because the current implementation
stores only the checksum computed before encryption; a new checksum taken
after encryption but before the disk write is not stored anywhere.
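The resulting key selection can be sketched as follows (illustrative;
checksum handling and key material are simplified):

  #include <cstdint>
  #include <vector>

  struct CryptKey { uint32_t checkpoint_no; /* key material */ };

  // Pick the key for a log block per the rules above: a single key serves
  // the whole file; with several keys, match on checkpoint_no, else fall
  // back to the first key. A block whose checksum already verifies is
  // treated as unencrypted.
  const CryptKey *pick_log_key(const std::vector<CryptKey> &keys,
                               uint32_t checkpoint_no,
                               bool checksum_ok)
  {
    if (keys.empty() || checksum_ok)  // unencrypted log block
      return nullptr;
    if (keys.size() == 1)
      return &keys[0];
    for (const CryptKey &k : keys)
      if (k.checkpoint_no == checkpoint_no)
        return &k;
    return &keys[0];                  // not found: use the first key
  }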
It could happen that one of the previous tests had already executed a
buffer pool dump and set the status variable, so when the variable is
checked, the check passes too early, before the dump starts and
the dump file is created. See a more detailed explanation in MDEV-9713.
Fixed by waiting for the current time to change in case it equals
the timestamp in the status variable, and then checking that
the status variable not only matches the expected pattern but also
differs from the previous value, whatever it was.
In the row_search_for_mysql function in XtraDB there was old logic
that initialized the null bytes. This caused the server to think that
the key value is NULL and continue down an incorrect path.