Restore the detection of the default charset in command line utilities.
It worked up to 10.1, but was broken by Connector/C.
Moved the code for detecting the default charset from sql-common/client.c
to mysys, and made the command line utilities use this code when no
charset was specified on the command line.
The server and command line tools now support the option --tls_version to
specify the TLS version between client and server. Valid values are TLSv1.0,
TLSv1.1, TLSv1.2, TLSv1.3 or a combination of them. E.g.
--tls_version=TLSv1.3
--tls_version=TLSv1.2,TLSv1.3
In case there is a gap between versions, the lowest version will be used:
--tls_version=TLSv1.1,TLSv1.3 -> Only TLSv1.1 will be available.
If the TLS library in use doesn't support the specified TLS version, it will
use the default configuration.
Limitations:
SSLv3 is not supported. The default configuration doesn't support TLSv1.0 anymore.
The TLSv1.3 protocol is currently only supported by OpenSSL 1.1.0 (client and
server) and GnuTLS 3.6.5 (client only).
Overview of TLS implementations and protocols
Server:
+-----------+-----------------------------------------+
| Library   | Supported TLS versions                  |
+-----------+-----------------------------------------+
| WolfSSL   | TLSv1.1, TLSv1.2                        |
+-----------+-----------------------------------------+
| OpenSSL   | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
| LibreSSL  | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
Client (MariaDB Connector/C):
+-----------+-----------------------------------------+
| Library   | Supported TLS versions                  |
+-----------+-----------------------------------------+
| GnuTLS    | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
| Schannel  | (TLSv1.0), TLSv1.1, TLSv1.2             |
+-----------+-----------------------------------------+
| OpenSSL   | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
| LibreSSL  | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
Use the recommended speedup options for WolfSSL:
- Enable x64 assembly code on Intel.
- In my_crypt.cc, align the EVP_CIPHER_CTX buffer, since some members need
  an alignment of 16 (for AES-NI instructions) when assembly is enabled.
- Adjust MY_AES_CTX_SIZE.
- Enable fastmath in WolfSSL (large integer math).
This patch complements the patch that fixes bug MDEV-18479: it takes care
of a possible overflow when calculating the estimated number of rows in a
materialized derived table / view.
- Add new submodule for WolfSSL
- Build and use wolfssl and wolfcrypt instead of yassl/taocrypt
- Use HAVE_WOLFSSL instead of HAVE_YASSL
- Increase MY_AES_CTX_SIZE, to avoid compile time asserts in my_crypt.cc
(sizeof(EVP_CIPHER_CTX) is larger on WolfSSL)
This patch is for MEM_ROOT only.
In debug mode, add 8 bytes of poisoned memory before every allocated chunk.
To the right of every chunk there will be either 1-7 trailing poisoned bytes,
the next chunk's redzone, poisoned non-allocated memory, or the redzone of
a malloc()ed buffer.
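A minimal sketch of the redzone idea using the ASan poisoning interface (the
allocator shape and sizes here are illustrative, not the actual MEM_ROOT code):
  #include <stddef.h>
  #include <sanitizer/asan_interface.h>

  /* Carve a chunk out of a larger arena, leaving an 8-byte poisoned
     redzone in front of it so ASan reports any buffer underflow. */
  static char *alloc_with_redzone(char *arena, size_t size)
  {
    char *chunk= arena + 8;
    __asan_poison_memory_region(arena, 8);       /* redzone before the chunk */
    __asan_unpoison_memory_region(chunk, size);  /* the usable allocation */
    return chunk;
  }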
Some places didn't match the previous rules, making the Floor
address wrong.
Additional sed rules:
sed -i -e 's/Place.*Suite .*, Boston/Street, Fifth Floor, Boston/g'
sed -i -e 's/Suite .*, Boston/Fifth Floor, Boston/g'
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
With MAX_INDEXES=64 (the default), key_map = Bitmap<64> is just a wrapper
around ulonglong and thus "trivial" (it can be bzero-ed or memcpy-ed and
still stays valid).
With MAX_INDEXES=128, key_map = Bitmap<128> is not a "trivial" type
anymore. The implementation uses MY_BITMAP, and MY_BITMAP contains pointers,
which make the Bitmap invalid when it is memcpy-ed/bzero-ed.
The problem in 10.4 is that there are many new key_map members inside TABLE
or KEY, and those are often memcpy-ed and bzero-ed.
The fix makes Bitmap "trivial" by inlining most of the MY_BITMAP
functionality. Pointers/heap allocations are not used anymore.
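A minimal sketch of the "trivial" layout (simplified; the real template has
many more operations, and uint/ulonglong are the server's usual typedefs):
  #include <string.h>

  template <uint width>
  class Bitmap
  {
    /* Inline storage only: no pointers, no heap, so the object can be
       safely memcpy-ed and bzero-ed. */
    ulonglong buffer[(width + 63) / 64];
  public:
    void clear_all()          { memset(buffer, 0, sizeof(buffer)); }
    void set_bit(uint n)      { buffer[n / 64] |= 1ULL << (n % 64); }
    bool is_set(uint n) const { return buffer[n / 64] & (1ULL << (n % 64)); }
  };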
SHOW STATUS LIKE 'Open_files' was showing 18446744073709551615.
my_file_opened used statistic_increment/statistic_decrement, so
off-by-one errors were normal and expected. But they confused
monitoring tools, so let's move my_file_opened to use atomics.
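A sketch of the atomic counter (names illustrative; relaxed ordering is
enough for a statistics counter):
  #include <atomic>

  static std::atomic<long> my_file_opened_counter(0);

  void count_open()  { my_file_opened_counter.fetch_add(1, std::memory_order_relaxed); }
  void count_close() { my_file_opened_counter.fetch_sub(1, std::memory_order_relaxed); }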
This reverts commit 21b2fada7a
and commit 81d71ee6b2.
The MDEV-18464 change introduces a few data race issues. Contrary to
the documentation, the field trx_t::victim is not always being protected
by lock_sys_t::mutex and trx_t::mutex. Most importantly, it seems
that KILL QUERY could wrongly avoid acquiring both mutexes when
invoking lock_trx_handle_wait_low(), in case another thread had
already set trx->victim=true.
We also revert MDEV-12009, because it should depend on the MDEV-18464
fix being present.
As noted in kill_one_thread(), SUPER should be able to kill even
system threads, i.e. threads/queries flagged as high priority (BF) or
the wsrep applier thread. A normal user should not be able to kill
threads/queries flagged as high priority (BF) or the wsrep applier
thread.
Now we can afford it. Fix -Werror errors. Note:
* old gcc is bad at detecting uninitialized variables, so disable the check there.
* time_t is int or long; cast it for printf.
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, use disable_ps_protocol
for SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
This patch contains a fix for the MDEV-17262/17243 issues and
new mtr test.
These issues (MDEV-17262/17243) have two causes:
1) After an intermediate commit, a transaction loses its status
of "transaction that is registered in MySQL for the 2pc coordinator"
(in InnoDB), because since version 10.2 the write_row() function
(located in ha_innodb.cc) does not call
trx_register_for_2pc(m_prebuilt->trx) while processing split
transactions. It is necessary to restore this call inside write_row()
when an intermediate commit was made (for a split transaction).
Similarly, we need to set the flag of a started transaction
(m_prebuilt->sql_stat_start) after an intermediate commit.
The table->file->extra(HA_EXTRA_FAKE_START_STMT) called from the
wsrep_load_data_split() function (located in sql_load.cc)
will also do this, but too late. As a result, the call
to the wsrep_append_keys() function from the InnoDB engine may be
lost, or the function may be called with an invalid transaction identifier.
2) If a transaction with a LOAD DATA statement is divided into
logical mini-transactions (of 10K rows each) and the binlog is rotated,
then in rare cases, due to the wsrep handler re-registration at the
boundary of the split, the last portion of data may be lost. Since the
splitting of LOAD DATA into mini-transactions is technical,
I believe that we should not allow these mini-transactions to fall
into separate binlogs. Therefore, it is necessary to prohibit the
rotation of the binlog in the middle of processing a LOAD DATA statement.
https://jira.mariadb.org/browse/MDEV-17262 and
https://jira.mariadb.org/browse/MDEV-17243
* MDEV-16509 Improve wsrep commit performance with binlog disabled
Release the commit order critical section early after trx_commit_low() if
the binlog is not the transaction coordinator. In order to avoid two-phase
commit, binlog_hton is not registered for the THD during IO_CACHE population.
Implemented a test which verifies that transactions release
commit order early.
This optimization changes behavior during recovery, as the commit
is not two-phase when the binlog is off. Fixed and re-recorded
wsrep-recover-v25 and wsrep-recover to match the behavior.
* MDEV-18730 Ordering for wsrep binlog group commit
Previously, out-of-order execution was allowed for wsrep commits.
Established proper ordering by populating wait_for_commit
for every wsrep THD and making the group commit leader wait for
prior commits before proceeding to trx_group_commit_leader().
* MDEV-18730 Added a test case to verify correct commit ordering
* MDEV-16509, MDEV-18730 Review fixes
Use WSREP_EMULATE_BINLOG() macro to decide if the binlog_hton
should be registered. Whitespace/syntax fixes and cleanups.
* MDEV-16509 Require binlog for galera_var_innodb_disallow_writes test
If the commit to InnoDB is done in one phase, the native InnoDB behavior
is that the transaction is committed in memory before it is persisted to
disk. This means that innodb_disallow_writes=ON may not prevent the
transaction from becoming visible to other readers before the commit is
completely over. On the other hand, if the commit is two-phase (as it is
with the binlog), the transaction will block in the prepare phase.
Fixed the test to use the binlog, which enforces two-phase commit, which
in turn makes the commit block before the changes become visible to
other connections. This guarantees that the test produces the expected
result.
The problem was that we skipped background persistent statistics calculation
on applier nodes if the thread was marked as high priority (a.k.a. BF).
However, on applier nodes all replicated DDL is executed
as high priority, i.e. BF.
Fixed by allowing background persistent statistics calculation on
applier nodes even when the thread is marked as BF. This could lead to
BF lock waits, but queries on that node need those statistics.
Removed redundant plugin_thdvar_cleanup() from end_connection(): it is
called by THD::free_connection(), which always follows end_connection().
Saves at least one lock(LOCK_plugin) and one
rdlock(LOCK_system_variables_hash).
Benchmarked on a 2-socket/20-core/40-thread Broadwell system using the
sysbench connect benchmark @ 40 threads (with SELECT 1 disabled).
10.2 shows a moderate improvement: 136219.93 -> 137766.31 CPS.
10.3's improvement is somewhat better: 93018.29 -> 101379.77 CPS.
Also backported a MyRocks memory leak fix from 10.4, which turned out to
be unrelated.
When compiling with CMAKE_BUILD_TYPE=Debug and WITH_ASAN using clang-7 -O2,
the following tests could fail due to insufficient stack size:
main.signal_demo3 and sys_vars.max_sp_recursion_depth_func.
If we have a 2+ node cluster which is replicating from an async master
and the binlog_format is set to STATEMENT and multi-row inserts are executed
on a table with an auto_increment column such that values are automatically
generated by MySQL, then the server node generates wrong auto_increment
values, which are different from what was generated on the async master.
In the title of MDEV-9519 it was proposed to ban START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.
The causes and fixes:
1. We need to improve the processing of auto-increment value changes
after a change of the cluster size.
2. If wsrep_auto_increment_control is switched on during operation of
the node, then we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on during operation of the node, which leads to inconsistent
results on different nodes in some scenarios.
3. If wsrep_auto_increment_control is switched off during operation of the
node, then we must restore the original values of the auto_increment_increment
and auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).
https://jira.mariadb.org/browse/MDEV-9519
This patch implements an engine-independent unique hash index.
Usage: a unique HASH index can be created automatically for a
blob/varchar/text column whose key length > handler->max_key_length(),
or it can be specified explicitly.
Automatic creation:
CREATE TABLE t1 (a blob unique);
Explicit creation:
CREATE TABLE t1 (a int, unique(a) using HASH);
Internal KEY_PART representations:
A long unique key_info will have 2 representations.
(Let's understand this with an example: create table t1(a blob, b blob, unique(a, b));)
1. User-given representation: the key_info->key_part array will be similar
to what the user has defined. So in the case of the example it will have
2 key_parts (a, b).
2. Storage engine representation: in this case there will be only one key_part
and it will point to the HASH_FIELD. This key_part is always placed after the
user-defined key_parts.
So:- User-given representation      [a] [b] [hash_key_part]
    key_info->key_part ----^
    Storage engine representation   [a] [b] [hash_key_part]
    key_info->key_part ------------^
table->s->key_info will have the user-given representation, while
table->key_info will have the storage engine representation. One
representation can be changed into the other by calling the
re/setup_keyinfo_hash functions.
Working:
1. When the user specifies a HASH index, or the key_length is >
handler->max_key_length(), one extra vfield is added in
mysql_prepare_create_table (for each long unique key), and
key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image, the values for the hash key_part are set
(like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is
used with a list of Item_fields; when an explicit length is given by the
user, Item_left is used to concatenate the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called,
which creates the hash key from table->record[0] and then calls
ha_index_read_map; if a duplicate hash is found, the result is compared
field by field.
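For illustration, a duplicate insert into such a table is expected to be
rejected via the hash lookup described in step 4 (the exact error text may
vary by version):
  CREATE TABLE t1 (a blob unique);
  INSERT INTO t1 VALUES (REPEAT('x', 10000));
  INSERT INTO t1 VALUES (REPEAT('x', 10000));  -- fails with a duplicate key error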
This patch adds support for expiring user passwords.
The following statements are extended:
CREATE USER user@localhost PASSWORD EXPIRE [option]
ALTER USER user@localhost PASSWORD EXPIRE [option]
If no option is specified, the password is expired with immediate
effect. If option is DEFAULT, global policy applies according to
the default_password_lifetime system var (if 0, password never
expires, if N, password expires every N days). If option is NEVER,
the password never expires and if option is INTERVAL N DAY, the
password expires every N days.
The feature also supports the disconnect_on_expired_password system
var and the --connect-expired-password client option.
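A usage sketch combining the statements and options described above (user
name illustrative):
  CREATE USER foo@localhost PASSWORD EXPIRE INTERVAL 30 DAY;
  ALTER USER foo@localhost PASSWORD EXPIRE NEVER;
  ALTER USER foo@localhost PASSWORD EXPIRE;  -- expires immediately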
Closes #1166
The global variable wsrep_debug can now be used to filter wsrep-lib messages
based on the debug level provided.
The type of wsrep_debug is now unsigned int, and tests and configuration
files were changed accordingly.
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases usage of such filters reduces
the number of disk seeks spent for fetching table rows.
In this implementation the choice of which filter to apply
(if any) is made purely on cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
Besides that, this patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
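A hypothetical illustration of the kind of query that can benefit: two range
conditions on different indexes of the same table, where the rowid filter
built from one index prunes row fetches done through the other (whether a
filter is actually used depends on the optimizer's cost estimates):
  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=InnoDB;
  EXPLAIN SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 AND b BETWEEN 100 AND 200;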
Implemented and integrated THD_list as a replacement for the global
thread list. It uses its own mutex instead of LOCK_thread_count for THD
list protection.
Removed unused first_global_thread() and next_global_thread().
delayed_insert_threads is now protected by LOCK_delayed_insert, although
this patch doesn't fix the very wrong synchronization of this variable.
After this patch there are only 2 legitimate uses of LOCK_thread_count
left, both in mysqld.cc: thread_count and ready_to_exit.
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
Disable LOAD DATA LOCAL INFILE support by default and
auto-enable it for the duration of one query, if the query
string starts with the word "load". In all other cases the application
should enable LOAD DATA LOCAL INFILE support explicitly.
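A minimal client-side sketch of the explicit opt-in (standard C API; error
handling omitted):
  #include <mysql.h>

  MYSQL *mysql= mysql_init(NULL);
  unsigned int enable_local_infile= 1;
  /* Explicitly allow LOAD DATA LOCAL INFILE for this connection */
  mysql_options(mysql, MYSQL_OPT_LOCAL_INFILE, &enable_local_infile);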
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event exceeds its
vanilla byte representation by 4/3 times.
When a binlogged event's size is about 1GB, mysqlbinlog generates
a BINLOG query that can't be sent out due to its size.
It is fixed by fragmenting the BINLOG argument C-string into
(approximate) halves when the base64-encoded event is over 1GB in size.
In such a case mysqlbinlog puts out
SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;
to represent a big BINLOG.
For prompt memory release, the BINLOG handler is made to reset the BINLOG
argument user variables in the middle of processing, as if
@binlog_fragment_{0,1} = NULL were assigned.
Notice that 2 fragments are enough, though the client and server may still
need to tweak their @@max_allowed_packet to accommodate the fragment
size (which they would have to do anyway with a greater number of
fragments, should that be desired).
On the lower level the following changes are made:
Log_event::print_base64()
still calls the encoder and stores the encoded data into a cache, but
now *without* doing any formatting. The latter is left for the time
when the cache is copied to an output file (e.g. mysqlbinlog output).
The no-formatting behavior is also reflected by the change in the meaning
of the last argument, which specifies whether to cache the encoded data.
Rows_log_event::print_helper()
is made to invoke a specialized fragmenting cache-to-file copying function,
copy_cache_to_file_wrapped()
which takes care of fragmenting and optionally wraps the encoded
strings (fragments) into SQL stanzas.
my_b_copy_to_file()
is refactored into my_b_copy_all_to_file(). The former function
is generalized to accept a limit argument to constrain the copying, and
does not reinitialize the cache into reading mode anymore.
The limit has no effect on a fully read cache.
Also, apply the MDEV-17957 changes to encrypted page checksums,
and remove error message output from the checksum function,
because these messages would be useless noise when mariabackup
is retrying reads of corrupted-looking pages, and not that
useful during normal server operation either.
The error messages in fil_space_verify_crypt_checksum()
should be refactored separately.
SIGHUP causes debug info to be written to the error log and a reload of
logs/privileges/tables/etc. The server should only do this when
a user intentionally sends SIGHUP, not when a parent terminal gets
disconnected or something.
In particular, not ignoring kernel SIGHUP causes FLUSH PRIVILEGES
at some random point during non-systemd Debian upgrades (Debian
restarts mysqld, the debian-start script runs mysql_upgrade in the
background, the postinst script ends and the kernel sends SIGHUP to all
background processes it has started). And during mysql_upgrade the
privilege tables aren't necessarily ready to be reloaded.
Current implementation is conflicting. If `UNICODE` is defined, `FormatMessage()` will be `FormatMessageW()`, and variable `win_errormsg` with type `char` can not be passed to it, which should be changed to `TCHAR` instead. Since we don't use UNICODE here, we can use `FormatMessageA()` directly to avoid conversion error.
```
my_global.h(1092): error C2664: 'DWORD FormatMessageW(D
WORD,LPCVOID,DWORD,DWORD,LPWSTR,DWORD,va_list *)' : cannot convert argument 5 from 'char [2048]' to 'LPWSTR'
```
MDEV-17625 Different warnings when comparing garbage to DATETIME vs TIME
- Splitting the processes of data type conversion (to TIME/DATE/DATETIME)
and warning generation.
Warnings are now only collected during conversion (in an "int" variable),
and are pushed at the very end of conversion (not in parallel).
Warnings generated by the low level routines str_to_xxx() and number_to_xxx()
can now be changed at the end, when TIME_FUZZY_DATES is applied,
from "Invalid value" to "Truncated invalid value".
Now "Illegal value" is issued only when the low level routine returned
an error and TIME_FUZZY_DATES was not set. Otherwise, if the low level
routine returned "false" (success), or if NULL was converted to a zero
datetime by TIME_FUZZY_DATES, then "Truncated illegal value"
is issued. This gives better warnings.
- Methods Type_handler::Item_get_date() and
Type_handler::Item_func_hybrid_field_type_get_date() now only
convert and collect warning information, but do not push warnings.
- Changing the return data type for Type_handler::Item_get_date()
and Type_handler::Item_func_hybrid_field_type_get_date() from
"bool" to "void". The conversion result (success vs error) can be
checked by testing ltime->time_type: MYSQL_TIME_{NONE|ERROR}
mean error, other values mean success.
- Adding new wrapper methods Type_handler::Item_get_date_with_warn() and
Type_handler::Item_func_hybrid_field_type_get_date_with_warn()
to do conversion followed by raising warnings, and changing
the code to call new Type_handler::***_with_warn() methods.
- Adding a helper class Temporal::Status, a wrapper
for MYSQL_TIME_STATUS with automatic initialization.
- Adding a helper class Temporal::Warn, to collect warnings
but without actually raising them. Moving a part of ErrConv
into a separate class ErrBuff, and deriving both Temporal::Warn
and ErrConv from ErrBuff. The ErrBuff part of Temporal::Warn
is used to collect textual representation of the input data.
- Adding a helper class Temporal::Warn_push. It's used
to collect warning information during conversion, and
automatically pushes warnings to the diagnostics area
on its destructor time (in case of non-zero warning).
- Moving more code from various functions inside class Temporal.
- Adding more Temporal_hybrid constructors and
protected Temporal methods make_from_xxx(),
which convert and only collect warning information, but do not
actually raise warnings.
- Now the low level functions str_to_datetime() and str_to_time()
always set status->warning if the return value is "true" (error).
- Now the low level functions number_to_time() and number_to_datetime()
set the "*was_cut" argument if the return value is "true" (error).
- Adding a few DBUG_ASSERTs to make sure that str_to_xxx() and
number_to_xxx() always set warnings on error.
- Adding new warning flags MYSQL_TIME_WARN_EDOM and MYSQL_TIME_WARN_ZERO_DATE
for the code symmetry. Before this change there was a special
code path for (rc==true && was_cut==0) which was treated by
Field_temporal::store_invalid_with_warning as "zero date violation".
Now was_cut==0 always means that there are no errors/warnings/notes
to be raised, no matter what rc is.
- Using new Temporal_hybrid constructors in combination with
Temporal::Warn_push inside str_to_datetime_with_warn(),
double_to_datetime_with_warn(), int_to_datetime_with_warn(),
Field::get_date(), Item::get_date_from_string(), and a few other places.
- Removing methods Dec_ptr::to_datetime_with_warn(),
Year::to_time_with_warn(), my_decimal::to_datetime_with_warn(),
Dec_ptr::to_datetime_with_warn().
Fixing Sec6::to_time() and Sec6::to_datetime() to
convert and only collect warnings, without raising warnings.
Now warning raising functionality resides in Temporal::Warn_push.
- Adding classes Longlong_hybrid_null and Double_null, to
return both value and the "IS NULL" flag. Adding methods
Item::to_double_null(), to_longlong_hybrid_null(),
Item_func_hybrid_field_type::to_longlong_hybrid_null_op(),
Item_func_hybrid_field_type::to_double_null_op().
Removing separate classes VInt and VInt_op, as they
have been replaced by a single class Longlong_hybrid_null.
- Adding a helper method Temporal::type_name_by_timestamp_type(),
moving a part of make_truncated_value_warning() into it,
and reusing in Temporal::Warn::push_conversion_warnings().
- Removing Item::make_zero_date() and
Item_func_hybrid_field_type::make_zero_mysql_time().
They provided duplicate functionality.
Now this code resides in Temporal::make_fuzzy_date().
The latter is now called for all Item types when data type
conversion (to DATE/TIME/DATETIME) is involved, including
Item_field and Item_direct_view_ref.
This fixes MDEV-17563: Item_direct_view_ref now correctly converts
NULL to a zero date when TIME_FUZZY_DATES says so.
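An illustrative pair of statements in the class addressed by the MDEV-17625
part (exact warning texts depend on the version; the point is that garbage
compared to DATETIME and to TIME now warns consistently):
  CREATE TABLE t1 (d DATETIME, t TIME);
  SELECT * FROM t1 WHERE d = 'garbage';
  SELECT * FROM t1 WHERE t = 'garbage';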
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
sql\sql_acl.cc(3114): error C2220: warning treated as error - no 'object' file generated
sql\sql_acl.cc(3114): warning C4390: ';': empty controlled statement found; is this the intent?
Support SET PASSWORD for authentication plugins.
Authentication plugin API is extended with two optional methods:
* hash_password() is used to compute a password hash (or digest)
from the plain-text password. This digest will be stored in mysql.user
table
* preprocess_hash() is used to convert this digest into some memory
representation that can be later used to authenticate a user.
Built-in plugins convert the hash from hexadecimal or base64 to binary,
to avoid doing it on every authentication attempt.
Note a change in behavior: when loading privileges (on startup or on
FLUSH PRIVILEGES) an account with an unknown plugin was loaded with a
warning (e.g. "Plugin 'foo' is not loaded"). But such an account could
not be used for authentication until the plugin was installed. Now an
account like that will not be loaded at all (with a warning, still).
Indeed, without the plugin's preprocess_hash() method the server cannot know
how to load the account. Thus, if a new authentication plugin is
installed at run-time, one might need FLUSH PRIVILEGES to activate all
existing accounts that were using this new plugin.
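For example (a sketch, assuming the ed25519 plugin, which implements these
methods, is installed):
  INSTALL SONAME 'auth_ed25519';
  CREATE USER u1@localhost IDENTIFIED VIA ed25519 USING PASSWORD('secret');
  SET PASSWORD FOR u1@localhost = PASSWORD('new_secret');  -- hashed via hash_password()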
- Backported MYSQL_SYSVAR_SIZE_T to 10.0.
- The parameter innodb_ft_result_cache_limit was only 32 bits wide
also on 64-bit systems. Make it size_t, so that it will be 64 bits
on 64-bit systems.
- Added a test case that shows how the innodb_ft_result_cache_limit
variable behaves on 32-bit and 64-bit systems.
Problems:
Functions LEAST() and GREATEST() in TIME context, as well as functions
TIMESTAMP(a,b) and ADDTIME(a,b), returned confusing results when the
input TIME-alike value in a number or in a string was out of the TIME
supported range.
In case of TIMESTAMP(a,b) and ADDTIME(a,b), the second argument
value could get extra unexpected digits. For example, in:
ADDTIME('2001-01-01 00:00:00', 10000000) or
ADDTIME('2001-01-01 00:00:00', '1000:00:00')
the second argument was converted to '838:59:59.999999'
with six fractional digits, which contradicted "decimals"
previously set to 0 in fix_length_and_dec().
These unexpected fractional digits led to confusing function results.
Changes:
1. GREATEST(), LEAST()
- fixing Item_func_min_max::get_time_native()
to respect "decimals" set by fix_length_and_dec().
If a value of some numeric or string time-alike argument
goes outside of the TIME range and gets limited to '838:59:59.999999',
it's now right-truncated to the correct fractional precision.
- fixing Type_handler_temporal_result::Item_func_min_max_fix_attributes()
to take into account arguments' time_precision() or datetime_precision(),
rather than rely on "decimals" calculated by the generic implementation
in Type_handler::Item_func_min_max_fix_attributes(). This makes
GREATEST() and LEAST() return better data types, with the same
fractional precision with what TIMESTAMP(a,b) and ADDTIME(a,b) return
for the same arguments, and with DATE(a) and TIMESTAMP(a).
2. Item_func_add_time and Item_func_timestamp
It was semantically wrong to apply the limit of the TIME data type
to the argument "b", which plays the role of "INTERVAL DAY TO SECOND" here.
Changing the code to fetch the argument "b" as INTERVAL rather than as TIME.
The low level routine calc_time_diff() now gets the interval
value without limiting to '838:59:59.999999', so in these examples:
ADDTIME('2001-01-01 00:00:00', 10000000)
ADDTIME('2001-01-01 00:00:00', '1000:00:00')
calc_time_diff() gets '1000:00:00' as is. The SQL function result
now gets limited to the supported result data type range
(datetime or time) inside calc_time_diff(), which now calculates
the return value using the real fractional digits that
came directly from the arguments (without the effect of limiting
to the TIME range), so the result does not have any unexpected
fractional digits any more.
Detailed changes in TIMESTAMP() and ADDTIME():
- Adding a new class Interval_DDhhmmssff. It's similar to Time, but:
* does not try to parse datetime format, as it's not needed for
functions TIMESTAMP() and ADDTIME().
* does not cut values to '838:59:59.999999'
The maximum supported Interval_DDhhmmssff's hard limit is
'UINT_MAX32:59:59.999999'. The maximum used soft limit is:
- '87649415:59:59.999999' (in 'hh:mm:ss.ff' format)
- '3652058 23:59:59.999999' (in 'DD hh:mm:ss.ff' format)
which is a difference between:
- TIMESTAMP'0001-01-01 00:00:00' and
- TIMESTAMP'9999-12-31 23:59:59.999999'
(the minimum datetime that supports arithmetic, and the
maximum possible datetime value).
- Fixing get_date() methods in the classes related to functions
ADDTIME(a,b) and TIMESTAMP(a,b) to use the new class Interval_DDhhmmssff
for fetching data from the second argument, instead of get_date().
- Fixing fix_length_and_dec() methods in the classes related
to functions ADDTIME(a,b) and TIMESTAMP(a,b) to use
Interval_DDhhmmssff::fsp(item) instead of item->time_precision()
to get the fractional precision of the second argument correctly.
- Splitting the low level function str_to_time() into smaller pieces
to reuse the code. Adding a new function str_to_DDhhmmssff(), to
parse "INTERVAL DAY TO SECOND" values.
After these changes, functions TIMESTAMP() and ADDTIME()
return much more predictable results, in terms of fractional
digits, and in terms of the overall result.
The full ranges of DATETIME and TIME values are now covered by TIMESTAMP()
and ADDTIME(), so the following can now be calculated:
SELECT ADDTIME(TIMESTAMP'0001-01-01 00:00:00', '87649415:59:59.999999');
-> '9999-12-31 23:59:59.999999'
SELECT TIMESTAMP(DATE'0001-01-01', '87649415:59:59.999999')
-> '9999-12-31 23:59:59.999999'
SELECT ADDTIME(TIME'-838:59:59.999999', '1677:59:59.999998');
-> '838:59:59.999999'
pthread_detach_this_thread() was intended to be defined to something
meaningful only on some ancient unixes, which don't have
pthread_attr_setdetachstate() defined. Otherwise, on normal unixes,
threads are created detached in the first place.
This was broken in 0f01bf2676, so that
we started calling pthread_detach() for already detached threads.
The intention was to detach the aria checkpoint thread.
However, in 87007dc2f7 aria service threads
were made joinable with appropriate handling, which makes the breaking
revision unnecessary.
Revert remnants of 0f01bf2676, so that
pthread_detach_this_thread() is meaningful only on some ancient unixes
again.
- Remove the function prototype for shared memory (no longer used), and
  unused VIO members.
- Do not call DisconnectNamedPipe on the pipe handle. CloseHandle() is enough.
- Made output aligned in aria_chk -d.
- Aria engine error texts are now written instead of "Undefined error".
- When running with --check --force, tables with wrong TRNs but otherwise
  correct are now zerofilled.
- Fixed several bugs in check and recovery related to fulltext.
- When doing recovery, store the highest found TRID in aria_control_file.
Before this, the
We do not accept:
1. We did not have this problem (fixed earlier and better)
d982e717ab Bug#27510150: MYSQLDUMP FAILS FOR SPECIFIC --WHERE CLAUSES
2. We do not have such options (a DBUG_ASSERT put in just in case)
bbc2e37fe4 Bug#27759871: BACKRONYM ISSUE IS STILL IN MYSQL 5.7
3. Serg fixed it in another way in this release:
e48d775c6f Bug#27980823: HEAP OVERFLOW VULNERABILITIES IN MYSQL CLIENT LIBRARY
After the MDEV-13118 fix there's no code in the server that
wants caseup/casedn to change the argument in place for simple
charsets. Let's remove this logic and always return the result in a
new string for all charsets, both simple and complex.
1. Removing the optimization that *some* character sets used in casedn()
and caseup(), which allowed (and required) changing the case in-place,
overwriting the string passed as the "src" argument.
Now all CHARSET_INFO's work in the same way:
none of them change the source string in-place; all of them convert
case from the source string to the destination string, leaving
the source string untouched.
2. Adding "const" qualifier to the "char *src" parameter
to caseup() and casedn().
3. Removing duplicate implementations in ctype-mb.c.
Now both caseup() and casedn() implementations for all CJK character sets
use internally the same function my_casefold_mb()
(the former my_casefold_mb_varlen()).
4. Removing the "unused" attribute from parameters of some my_case{up|dn}_xxx()
implementations, as the affected parameters are now *used* in the code.
Previously these parameters were used only in DBUG_ASSERT().
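A sketch of the resulting contract (an assumed call shape for illustration;
the actual virtual functions live in the charset handler structures):
  char dst[64];
  /* "src" is const and left untouched; the case-folded copy goes to dst */
  size_t dstlen= cs->cset->casedn(cs, "AbC", 3, dst, sizeof(dst));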
This is to mark that a field is indirectly part of a key, which simplifies
checking whether we need to have this field up to date to evaluate a key.
For example:
CREATE TABLE t1 (a int, b int as (a) virtual,
c int as (b) virtual, index(c))
would mark a and b with PART_INDIRECT_KEY_FLAG.
c is marked with PART_KEY_FLAG as before.
specific temporary errors
The optimistic parallel slave's worker thread could face a run-time error
due to the algorithm's specifics, which allow for conflicts like the
reported "Can't find record in 'table'".
A typical stack is like
{noformat}
#0 handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1 0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2 0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3 0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4 0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5 0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6 0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7 0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8 apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9 0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}
Here {{handler::print_error}} computes whether to write the
current error to the error log when --log-warnings > 1. The decision flag
is consulted by {{my_message_sql()}}, which can eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier case, any conflict is
attempted to be resolved with a rollback and a retry until success. Hence
the logging is at least extraneous.
The case is fixed by adding a new flag {{ME_LOG_AS_WARN}}, which
{{handler::print_error}} may propagate further through {{my_error}}
when the error comes from an optimistically running slave worker thread.
The new flag effectively requests the warning level for the errlog record,
while the thread's DA records the actual error (which is regarded as a
temporary one by the parallel slave error handler).
O_TMPFILE creates a temporary file not attached to a filename.
This is what we want for MY_TEMPORARY.
We preserve a state O_TMPFILE_works, because the kernel version or
filesystem could cause failure.
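A minimal sketch of the pattern (Linux-specific; the fallback path is
illustrative):
  #include <fcntl.h>
  #include <stdlib.h>
  #include <unistd.h>

  /* Try an unnamed temp file first; fall back to create+unlink
     if the kernel or filesystem doesn't support O_TMPFILE. */
  int fd= open("/tmp", O_TMPFILE | O_RDWR, 0600);
  if (fd < 0)
  {
    char name[]= "/tmp/myXXXXXX";
    if ((fd= mkstemp(name)) >= 0)
      unlink(name);   /* immediately detach the file from its name */
  }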
Closes #662
The original problem was reported by Wlad: re-compilation of 10.3 on top of
a 10.2 build would cache undefined HAVE_ISINF from 10.2, whereas it is
expected to be 1 in 10.3.
std::isinf() seems to be available on all supported platforms.
don't rely on imprecise my_file_opened | my_stream_opened,
scan the array for open handles instead.
also, remove my_print_open_files() and embed it in my_end()
to have one array scan instead of two.
simplify. move common code inside, specify common flags inside,
rewrite dead code (`if (mode & O_TEMPORARY)` on Linux, where
`O_TEMPORARY` is always 0) to actually do something.
Description:- Client applications establish a connection, via TCP, to a
server which does not support SSL, even when SSL is
enforced via MYSQL_OPT_SSL_MODE or MYSQL_OPT_SSL_ENFORCE or
MYSQL_OPT_SSL_VERIFY_SERVER_CERT.
Analysis:- There existed no error handling for catching client
applications which enforce an SSL connection while connecting to a
server which does not support SSL.
Fix:- Error handling is done to catch the above mentioned
scenarios.
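A client-side sketch of the scenario now caught (standard C API; against a
server built without SSL support, the connect call is expected to fail
instead of silently falling back to plain TCP):
  #include <mysql.h>
  #include <stdio.h>

  MYSQL mysql;
  my_bool enforce= 1;
  mysql_init(&mysql);
  mysql_options(&mysql, MYSQL_OPT_SSL_ENFORCE, &enforce);
  if (!mysql_real_connect(&mysql, "localhost", "user", "pwd", NULL, 0, NULL, 0))
    fprintf(stderr, "%s\n", mysql_error(&mysql));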
This previously unreported warning comes from casting size_t to ulong
in sql_hset.h in Hash_Set::at().
Change my_hash_element to accept a size_t index parameter.
The performance schema's likely/unlikely assumed that the performance schema
is enabled by default, which caused a performance degradation for
default installations that don't have the performance schema enabled.
Fixed by changing the likely/unlikely in PS to assume it's
not enabled. This can be changed by compiling with
-DPSI_ON_BY_DEFAULT.
Other changes:
- Added psi_likely/psi_unlikely that depend on
  PSI_ON_BY_DEFAULT; psi_likely() is assumed to be true
  if PS is enabled (see the sketch after this list).
- Added likely/unlikely to some PS interface code.
- Moved pfs_enabled to mysys (it was initialized but not used before).
- Added "if (pfs_likely(pfs_enabled))" around calls to PS to avoid
  an extra call if PS is not enabled.
- Moved the check of flag_global_instrumentation before other flags
  to speed up the case when PS is not enabled.
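A plausible shape of the new macros (a sketch; the actual definitions may
differ):
  #ifdef PSI_ON_BY_DEFAULT
  #define psi_likely(x)   likely(x)
  #define psi_unlikely(x) unlikely(x)
  #else
  #define psi_likely(x)   unlikely(x)   /* PS assumed disabled by default */
  #define psi_unlikely(x) likely(x)
  #endif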
To use:
- Compile with -DUSE_MY_LIKELY
- Change (with replace) all likely/unlikely to my_likely/my_unlikely:
  replace likely my_likely unlikely my_unlikely -- *c *h
- Start mysqld with -T
- Run some test
- When mysqld has shut down cleanly, the report will be on stderr
Modern compilers (such as GCC 8) emit warnings that the
'register' keyword is deprecated and not valid C++17.
Let us remove most use of the 'register' keyword.
Code in 'extra/' is not touched.
In MDEV-8743, the port/socket of mysqld was changed to set FD_CLOEXEC.
The existing mysql_socket_socket function already set that with
SOCK_CLOEXEC when the socket was created. So here we move the fcntl
functionality to the mysql_socket_socket as port/socket are the only
callers.
Preprocessor checks of SOCK_CLOEXEC cannot be done, as it is defined to an
enum constant (in bits/socket_type.h), which the preprocessor evaluates
to 0 whether or not it is present.
Preprocessor logic for arithmetic and non-arithmetic defines is
hard/nonportable/ugly to read. As such, we just check in my_global.h
and define HAVE_SOCK_CLOEXEC if we have it.
There was a disparity in behaviour between defined(WITH_WSREP) and
not, depending on the OS, so the WITH_WSREP condition was removed
from the fcntl call.
All sockets are now marked SOCK_CLOEXEC/FD_CLOEXEC.
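A minimal sketch of the create-with-cloexec pattern (simplified; the
function name is illustrative):
  #include <sys/socket.h>
  #include <fcntl.h>

  int make_listen_socket(void)
  {
  #ifdef HAVE_SOCK_CLOEXEC
    int fd= socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
  #else
    int fd= socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (fd >= 0)
      fcntl(fd, F_SETFD, FD_CLOEXEC);   /* mark close-on-exec manually */
  #endif
    return fd;
  }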
strace of mysqld with SOCK_CLOEXEC:
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 10
write(2, "180419 14:52:40 [Note] Server socket created on IP: '127.0.0.1'.\n", 65180419 14:52:40 [Note] Server socket created on IP: '127.0.0.1'.
) = 65
setsockopt(10, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(10, {sa_family=AF_INET, sin_port=htons(16020), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(10, 150) = 0
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 11
write(2, "180419 14:52:40 [Note] Server socket created on IP: '127.0.0.1'.\n", 65180419 14:52:40 [Note] Server socket created on IP: '127.0.0.1'.
) = 65
setsockopt(11, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(11, {sa_family=AF_INET, sin_port=htons(16021), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(11, 150) = 0
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 12
unlink("/home/dan/repos/build-mariadb-server-10.0/mysql-test/var/tmp/mysqld.1.sock") = -1 ENOENT (No such file or directory)
setsockopt(12, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
umask(000) = 006
bind(12, {sa_family=AF_UNIX, sun_path="/home/dan/repos/build-mariadb-server-10.0/mysql-test/var/tmp/mysqld.1.sock"}, 110) = 0
umask(006) = 000
listen(12, 150) = 0
The strmake_buf() macro should only be used with char[] arrays,
and never with char* pointers. To distinguish between the two,
we create a new variable of the same type and initialize it
using an array initializer; this causes a compilation failure
with pointers. The variable is unused and will be removed by the
compiler. It's enough to do this check only with gcc, so
it doesn't have to be portable.
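One plausible rendering of the trick, assuming GCC statement expressions
(the actual macro lives in the string headers and may differ in detail):
  #ifdef __GNUC__
  #define strmake_buf(D, S) ({                                   \
    /* valid for char[N]; fails to compile for char* */          \
    typeof (D) __unused_check __attribute__((unused)) = { 2 };   \
    strmake(D, S, sizeof(D) - 1); })
  #else
  #define strmake_buf(D, S) strmake(D, S, sizeof(D) - 1)
  #endif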
Use systemd's EXTEND_TIMEOUT_USEC to advise systemd of progress.
Move towards progress measures rather than purely time-based measures.
Progress is reported at numerous shutdown/startup locations (see the
sketch after this list), including:
* for innodb_fast_shutdown=0, trx_roll_must_shutdown(), for rolling back incomplete transactions;
* merging the change buffer (in srv_shutdown(bool ibuf_merge));
* purging history (srv_do_purge).
Thanks Marko for feedback and suggestions.
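A sketch of the notification call (libsystemd's sd_notifyf; the counter
variable is illustrative):
  #include <systemd/sd-daemon.h>

  /* Inside a long-running shutdown/startup loop: request another
     30 seconds whenever measurable progress has been made. */
  sd_notifyf(0, "EXTEND_TIMEOUT_USEC=%llu\nSTATUS=Rolled back %lu transactions",
             30ULL * 1000000, rolled_back_count);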
Use a high accuracy timer on Windows 8.1+ for system versioning; it needs
an accurate, high resolution start-of-query time.
Continue to use the inaccurate (but much faster) timer function
GetSystemTimeAsFileTime() where accuracy does not matter, e.g. in
set_timespec_time_nsec() or my_time().
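A sketch of the precise-timer lookup (GetSystemTimePreciseAsFileTime is the
Windows 8+ API commonly used for this; it is resolved dynamically here since
older Windows versions lack it):
  #include <windows.h>

  typedef VOID (WINAPI *get_time_fn)(LPFILETIME);

  static FILETIME get_precise_time(void)
  {
    static get_time_fn precise;
    FILETIME ft;
    if (!precise)
      precise= (get_time_fn) GetProcAddress(GetModuleHandleA("kernel32"),
                                            "GetSystemTimePreciseAsFileTime");
    (precise ? precise : GetSystemTimeAsFileTime)(&ft);
    return ft;
  }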