Don't compare "field == table->next_number_field" because the field
can be a special nullable field copy created by the trigger.
Compare field_index values instead.
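A minimal sketch of the idea, with types reduced to the relevant members (not the server's actual definitions):

    struct Field { unsigned field_index; };   // position of the column in the table
    struct TABLE { Field *next_number_field; };

    // Pointer comparison fails when "field" is the special nullable copy
    // created for BEFORE triggers; comparing positions still works.
    bool is_autoinc_column(const Field *field, const TABLE *table)
    {
      return table->next_number_field &&
             field->field_index == table->next_number_field->field_index;
    }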
Cannot do password validation in fix_lex_user(): at that point we don't
know what "GRANT ... TO user" means - creating a new user with
an empty password (validation needed) or granting privileges
to an existing user (no validation needed).
Move the validation down into replace_user_table(), and copy it into
check_change_password().
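A rough sketch of the decision that becomes possible at that depth (names and signature are illustrative, not the server's):

    bool validate_password(const char *pwhash);   // stands in for the plugin hook

    // Inside replace_user_table() we finally know whether the row is being
    // created or an existing user is merely receiving privileges.
    bool maybe_validate(bool user_exists, bool password_supplied,
                        const char *pwhash)
    {
      if (user_exists && !password_supplied)
        return false;                    // plain GRANT: nothing to validate
      return validate_password(pwhash);  // new user or changed password
    }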
Undo the change in test_if_skip_sort_order() that set ref_key=-1 when
a variant of index_merge is used (was made in fix for MDEV-9021).
It turned out that the test_if_cheaper_ordering() call below assumes that
ref_key=-1 means "no index is used", that is, "an inefficient full table
scan is done".
This is not the same as index_merge: index_merge can actually be quite
efficient. So ref_key=MAX_KEY is used to denote that some index is used,
just not any particular index.
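A condensed sketch of the distinction (logic simplified; MAX_KEY is the server's out-of-range key number):

    static const unsigned MAX_KEY= 64;   // sentinel: one past the last real key

    // -1 must keep meaning "no index at all, full table scan";
    // index_merge does use indexes (possibly efficiently), so report MAX_KEY.
    int classify_ref_key(bool full_table_scan, bool index_merge, unsigned keynr)
    {
      if (full_table_scan) return -1;
      if (index_merge)     return (int) MAX_KEY;  // some index, no single ref_key
      return (int) keynr;                         // ordinary single-index access
    }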
MDEV-9408 CREATE TABLE SELECT MAX(int_column) creates different columns for table vs view
There were three almost identical pieces of code:
- Field *Item_func::tmp_table_field();
- Field *Item_sum::create_tmp_field();
- Field *create_tmp_field_from_item();
with differences in very small details (hence the bugs):
only Item_func::tmp_table_field() was correct; the other two were not.
Removing the two incorrect pieces of redundant code and joining
these three functions/methods into a single virtual method,
Item::create_tmp_field().
Additionally, moving Item::make_string_field() and
Item::tmp_table_field_from_field_type() from the public into the
protected section of the class declaration, as they are no longer
needed outside of Item.
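A sketch of the resulting shape of the class (types and signatures simplified, not the server's actual declarations):

    struct TABLE;
    class Field;

    class Item
    {
    public:
      virtual ~Item() {}
      // Single virtual entry point replacing the three near-identical copies.
      virtual Field *create_tmp_field(bool group, TABLE *table);
    protected:
      // Previously public; now only reachable from create_tmp_field() overrides.
      Field *make_string_field(TABLE *table);
      Field *tmp_table_field_from_field_type(TABLE *table);
    };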
This occurs when replication stops with an error, domain-based parallel
replication is used, and the GTID position contains more than one domain.
Furthermore, it relates to the case where the SQL thread is restarted
without first stopping the IO thread.
In this case, the file/offset relay-log position does not correctly
represent the slave's multi-dimensional position, because other domains may
be far ahead of, or behind, the domain with the failing event. So the code
reverts the relay log position back to the start of a relay log file that is
known to be before all active domains.
There was a bug that when the SQL thread was restarted, the
rli->relay_log_state was incorrectly initialised from @@gtid_slave_pos. This
position will likely be too far ahead, due to reverting the relay log
position. Thus, if the replication fails again after the SQL thread restart,
the rli->restart_gtid_pos might be updated incorrectly. This in turn would
cause a second SQL thread restart to replicate from the wrong position, if
the IO thread was still left running.
The fix is to initialise rli->relay_log_state from @@gtid_slave_pos only
when we actually purge and re-fetch relay logs from the master, not at every
SQL thread start.
A related problem is the use of sql_slave_skip_counter to resolve
replication failures in this kind of scenario. Since the slave position is
multi-dimensional, sql_slave_skip_counter can not work properly - it is
indeterminate exactly which event is to be skipped, and is unlikely to work
as expected for the user. So make this an error in the case where
domain-based parallel replication is used with multiple domains, suggesting
instead that the user set @@gtid_slave_pos to reliably skip the desired event.
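A condensed sketch of the new guard (names are illustrative):

    // sql_slave_skip_counter counts events in a single linear stream; with
    // several active domains there is no single well-defined "next event".
    bool check_skip_counter(unsigned long skip_counter, bool parallel_domains,
                            unsigned active_domains)
    {
      if (skip_counter && parallel_domains && active_domains > 1)
        return true;   // raise an error: use @@gtid_slave_pos to skip instead
      return false;
    }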
Also fixes:
MDEV-9391 InnoDB does not produce warnings when doing WHERE int_column=varchar_column
MDEV-9337 ALTER from DECIMAL and INT to DATETIME returns a wrong result
MDEV-9340 Copying from INT/DOUBLE to ENUM is inconsistent
MDEV-9392 Copying from DECIMAL to YEAR is not consistent about warnings
The problem was that wait_for_slave_io_to_start reported that the IO thread
was ready when it was still initializing. This caused the test suite to
continue too early, for example before the semi-sync plugin was properly
enabled.
Fixed by introducing a new internal stage: "Preparing". Slave_IO_Running is
now set to "Yes" only when all initialization is done and the IO thread is
ready to read things from the master.
The only test affected by this change is rpl_flsh_tbls, which got stuck in
the preparing phase while trying to read the GTID position from a table.
Fixed by having this test wait for Preparing instead of Yes.
UNIX_TIMESTAMP(STR_TO_DATE('201506', "%Y%M"
Issue:
-----
When an invalid date is supplied to the UNIX_TIMESTAMP
function from STR_TO_DATE, no check is performed before
converting it to a timestamp value.
SOLUTION:
---------
Add a call to check_date(), and only if it succeeds,
proceed to the timestamp conversion.
No warning will be returned for dates having a zero
month/day, since partial dates are allowed; UNIX_TIMESTAMP
will simply return zero for such values.
The problem has been handled in 5.6+ with WL#946.
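A simplified sketch of the added validation; the real code uses check_date() on a MYSQL_TIME, the types here are reduced stand-ins:

    #include <cstdint>

    struct Date { unsigned year, month, day; };

    // Reject genuinely invalid dates before converting; let partial dates
    // (zero month or day) through, yielding 0 without a warning.
    bool unix_timestamp_from_date(const Date &d, int64_t *ts)
    {
      if (d.month > 12 || d.day > 31)   // stands in for check_date()
        return true;                    // invalid date: error out
      if (d.month == 0 || d.day == 0)   // partial dates are allowed
      {
        *ts= 0;                         // UNIX_TIMESTAMP() returns 0
        return false;
      }
      *ts= 0 /* real conversion to seconds since the epoch goes here */;
      return false;
    }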
NOT NULL constraint must be checked *after* the BEFORE triggers.
That is, for INSERT and UPDATE statements even NOT NULL fields
must be able to store a NULL temporarily, at least while
BEFORE INSERT/UPDATE triggers are running.
The bug was caused by the mysql_show_grants() -> get_current_user() ->
has_auth() call chain accessing uninitialized SSL-related fields within the LEX.
plugin_init() works like this:
1. init MyISAM
2. load plugins from mysql.plugin, if it's a MyISAM table
3. init all not initialized plugins
4. all done, if step 2 loaded mysql.plugin,
otherwise:
5. load plugins from mysql.plugin
6. init all not initialized plugins
Now, with --help --verbose, step 3 will not actually
initialize the plugins, and if mysql.plugin is unreadable,
step 6 will try to initialize the existing plugins again.
Fix: when skipping initialization because of --help,
change the plugin status away from PLUGIN_IS_UNINITIALIZED,
so that step 6 does not pick the plugin up a second time.
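A sketch of the fix's shape; the concrete target state used here (PLUGIN_IS_DISABLED) is an assumption, the essential part is only that the plugin stops being PLUGIN_IS_UNINITIALIZED:

    enum plugin_state { PLUGIN_IS_UNINITIALIZED, PLUGIN_IS_READY,
                        PLUGIN_IS_DISABLED };

    struct st_plugin_int { plugin_state state; };

    void init_plugin(st_plugin_int *p, bool opt_help)
    {
      if (opt_help)
        p->state= PLUGIN_IS_DISABLED;  // assumed state; step 6 now skips it
      else
        p->state= PLUGIN_IS_READY;     // normal initialization path
    }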
10.0 has an "analyze table .. persistent for all" syntax. This adds
--persistent to mysqlcheck (mysqlanalyze) to perform this extended
analyze table operation.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
As a fix for MDEV-8208, for initial wsrep threads, the
invocation of init_for_queries() was moved after plugin
initialization. Because of this, the OPTION_BEGIN bit of the wsrep
applier THD (originally set in wsrep_replication_process)
got reset by the implicit commit within init_for_queries().
As a result, events from a multi-statement transaction from
another node were committed separately by the applier thread,
which leads to an assertion as they all carry the same seqno.
Fixed by making sure that thd->variables.option_bits is restored
after init_for_queries(). Also restored server_status.
Added a test case.
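A reduced sketch of the save/restore described above (THD trimmed to the two relevant members; not the server's real definitions):

    typedef unsigned long long ulonglong;

    struct THD { ulonglong option_bits; unsigned server_status; };
    void init_for_queries(THD *thd);   // may perform an implicit commit

    void applier_init(THD *thd)
    {
      ulonglong saved_bits= thd->option_bits;   // holds OPTION_BEGIN
      unsigned saved_status= thd->server_status;
      init_for_queries(thd);           // implicit commit clears OPTION_BEGIN
      thd->option_bits= saved_bits;    // restore transactional state
      thd->server_status= saved_status;
    }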
Backport pull request #125 from grooverdan/MDEV-8923_innodb_buffer_pool_dump_pct to 10.0
WL#6504 InnoDB buffer pool dump/load enhancements
This patch consists of two parts:
1. Dump only the hottest N% of the buffer pool(s)
2. Prevent hogging the server during BP load
From MySQL - commit b409342c43ce2edb68807100a77001367c7e6b8e
Add testcases for innodb_buffer_pool_dump_pct_basic.
Part of the code authored by Daniel Black
MDEV-8923: port innodb_buffer_pool_dump_pct from MySQL
WL#6504 InnoDB buffer pool dump/load enhancements
This patch consists of two parts:
1. Dump only the hottest N% of the buffer pool(s)
2. Prevent hogging the server during BP load
From MySQL - commit b409342c43ce2edb68807100a77001367c7e6b8e
Add testcases for innodb_buffer_pool_dump_pct.
Part of the code authored by Daniel Black.
InnoDB compared column names case-sensitively, while according to the
Storage Engine API column names should be compared case-insensitively.
This can cause the FRM and the InnoDB data dictionary to
go out of sync.
- Added missing setting of table->rpl_write_set in record_gtid(), required by Galera
- Removed output of WSREP_PATCH_VERSION from galera_defaults, as this can change over time
- Limited galera_many_tables_pk and galera_many_tables_nopk to 900 tables, as
on many systems the default open table limit is 1024
Don't let network errors from mysql_close() leak into THD.
* remove incorrect upstream fix
** table->in_use can be NULL, must use ha_thd()
** clear_error() may remove earlier errors, don't use it
* fix the bug properly in federated and federatedx
fix innodb auto-increment handling
three bugs:
1. innobase_next_autoinc treated the case of current<offset incorrectly
2. ha_innobase::get_auto_increment didn't recalculate current when increment changed
3. ha_innobase::get_auto_increment didn't pass offset down to innobase_next_autoinc
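For reference, a sketch of the arithmetic bugs 1 and 3 are about (simplified, no overflow handling): the next value is the smallest member of the sequence offset, offset+increment, offset+2*increment, ... strictly greater than current.

    #include <cstdint>

    uint64_t next_autoinc(uint64_t current, uint64_t increment, uint64_t offset)
    {
      if (current < offset)            // bug 1: this case was mishandled
        return offset;
      // Round current up to the next value congruent to offset mod increment.
      return offset + ((current - offset) / increment + 1) * increment;
    }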
Item_func_coalesce::fix_length_and_dec() calls
Item_func::count_string_result_length(), which called agg_arg_charsets()
with the wrong flags, so the collation derivation of the COALESCE result was
not properly set to DERIVATION_COERCIBLE; it erroneously stayed
DERIVATION_NUMERIC. So GREATEST() misinterpreted the argument as
a number rather than a string and did not calculate its own length properly.
mysqldump --routines fails to dump databases containing the backslash ("\")
character. This happened because the escaped database name was being used as
an identifier while changing the current database. Such identifiers are not
supposed to be escaped; they must be properly quoted instead.
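A minimal sketch of the distinction (illustrative helper, not mysqldump's actual code): identifiers are quoted by wrapping them in backquotes and doubling embedded backquotes, leaving backslashes alone.

    #include <string>

    std::string quote_identifier(const std::string &name)
    {
      std::string out= "`";
      for (char c : name)
        out+= (c == '`') ? "``" : std::string(1, c);
      out+= '`';
      return out;   // e.g. foo\bar  ->  `foo\bar`  (backslash left alone)
    }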
Analysis: a debug-only assertion fails because an I_S function (the I_S
table is an XtraDB feature) calls buf_block_get_frame() on every page it
reads, and that function debug-asserts that the page is buffer-fixed, which
is not the case in an I_S query.
Fixed by holding the buffer page mutex while the fields are read directly.
Problem:
========
1) Drop table queries are re-generated by the server
before the events (queries) are written into the binlog,
for various reasons. If the table name/db name contains
non-regular (multi-byte) characters, such as accented
Latin characters, the generated query is wrong and thus
breaks replication.
2) In the edge case where the table name/db name contains
64 characters, the server throws an assertion failure:
assert(M_TBLLEN < 128)
3) In the edge case where the db name contains 64 Latin
characters, the binlog content is interpreted badly,
leading to replication failure.
Analysis & Fix :
================
1) The parser reads the table name from the query, converts
it to the standard charset (utf8), and stores it in the table_name variable.
When the drop table query is regenerated from the same table_name
variable, it should be converted back to the original charset
from the standard charset (utf8).
2) An accented Latin character takes two bytes in utf8. The limit
of an identifier is 64 characters, and SYSTEM_CHARSET_MBMAXLEN is set to 3,
so there is a possibility that a table name/db name contains 3 * 64 bytes.
Hence the assert is changed to
(M_TBLLEN <= NAME_CHAR_LEN*SYSTEM_CHARSET_MBMAXLEN)
3) db_len in the binlog event header takes 1 byte;
db_len ranges from 0 to 192 bytes (3 * 64).
While reading db_len from the event, the server
was casting to uint instead of uchar, which led
to a bad db_len. This problem is fixed by changing the
cast type to uchar.
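A sketch of the decoding detail in problem 3 (simplified): db_len can be up to 192, which does not fit in a signed char, so the byte must be read through uchar.

    // db_len occupies exactly one byte in the event header.
    unsigned read_db_len(const char *ptr)
    {
      return (unsigned char) *ptr;   // correct: 0..255
      // (unsigned int) *ptr would sign-extend byte values >= 128 into garbage
    }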
This includes fixing all utilities to not have any memory leaks,
as safemalloc warnings stopped tests from passing on MacOSX.
- Ensure that all clients take character-set-dir, as the
libmysqlclient library will use it.
- mysql-test-run now passes character-set-dir to all external clients.
- Changed dynstr_free() so that it can be called twice (makes freeing code easier)
- Changed rpl_global_gtid_slave_state to be allocated dynamically, as it
includes a mutex that needs to be initialized/destroyed before my_end() is called.
- Removed rpl_slave_state::init() and rpl_slave_state::deinit(), as
their jobs are better handled by the constructor and destructor.
- Print alias instead of table_name in check_duplicate_key as
table_name may have been converted to lower case.
Other things:
- Fixed a case in time_to_datetime_with_warn() where we were
using && instead of & in a test (see the sketch after this list)
- Better error from check_slave_param
- Better error message from TokuDB if it can't be compiled.
- Marked rpl_mixed_drop_create_temp_table and
rpl_stm_drop_create_temp_table as big tests to stop timeout
failures on power8
- Added sync_slave_with_master to semisync_future-7591 to
ensure that slave is up to date with master before calling
rpl_end.
- Disabled compiler warnings from connect and mroonga and on
MacOSX.
Mroonga:
- Fixed a bug when testing if a file is a normal file that can be deleted
- Marked a lot of date and datetime tests to not run on macosx.
This is because mktime() can't handle negative years and this
restricts mroonga so that it can only store dates after the year 1900.
- Added some extra commands to rpl_start_stop to ensure that the
IO thread has connected to the master before we shut down the server.
- If signal() returns sighandler_t, use this with the alarm code
- Added missing tests to sys_vars
- Fixed some possible overflow bugs in tabxml.cpp
Post-fix: The test case pushed with the fix had each node
acting as slave to the other two nodes, with different sets
of filters on server_ids. The slave's gtid_slave_pos is
updated after it processes the events received from master
nodes irrespective of whether the events were filtered
or not. Thus, sync_with_master_gtid.inc could unblock even
on filtered events.
As a result, sync_with_master_gtid.inc would fail to block
until the desired changes have been replicated. Fixed by
simplifying the topology.
Also, modified the CHANGE MASTER commands to ignore events based
on gtid_domain_id instead of server_id.
Problem & Analysis: If DML invokes a trigger or a
stored function that inserts into an AUTO_INCREMENT column,
that DML has to be marked as an 'unsafe' statement. If the
tables are locked in the transaction prior to the DML statement
(using LOCK TABLES), then the same statement is not marked as
'unsafe'. The logic that checks for unsafeness
is guarded by if (!thd->locked_tables_mode). Hence, if
we lock the tables prior to the DML statement, execution does *not*
enter this if block, and the statement is not marked
as unsafe.
Fix: Irrespective of the locked_tables_mode value, the unsafeness
check should be done. With this patch, the code is moved
out to the decide_logging_format() function, where all these checks
happen, and without the if (!thd->locked_tables_mode) guard.
Along with the case specified in the bug scenario
(BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS), we also identified that the
BINLOG_STMT_UNSAFE_AUTOINC_NOT_FIRST,
BINLOG_STMT_UNSAFE_WRITE_AUTOINC_SELECT and BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS
cases were also guarded by thd->locked_tables_mode, which is not right. All
of those checks were also moved to the decide_logging_format() function.
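A heavily reduced sketch of the change's shape (hypothetical names, not the actual decide_logging_format() code):

    struct THD { int locked_tables_mode; };
    enum unsafe_type { BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS };
    void set_stmt_unsafe(unsafe_type);   // marks the statement unsafe

    void decide_logging_format_checks(THD *thd, bool trigger_writes_autoinc)
    {
      (void) thd->locked_tables_mode;  // deliberately NOT used as a guard anymore
      if (trigger_writes_autoinc)
        set_stmt_unsafe(BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS);
    }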
RENAME TABLE code tries to update EITS statistics. It hung because
it used an index on (db_name, table_name) to find the table and attempted
to update these values at the same time. The fix is to do what an SQL UPDATE
statement does when updating an index it also uses for scanning:
- First, buffer the rowids of the rows to be updated,
- then make a second pass to actually update the rows.
Also fixed the call to rename_table_in_stat_tables() in sql_rename.cc
to pass the correct new database (before, it passed the old db_name, so cross-
database renames were not handled correctly).
Variant #2, with review feedback addressed.
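A minimal sketch of the buffered two-pass pattern described above (illustrative helpers, not the server's handler API):

    #include <vector>
    #include <cstdint>

    typedef uint64_t rowid_t;
    rowid_t index_lookup_next(bool *found);   // scan via the (db,table) index
    void    update_row(rowid_t rid);          // moves the row within that index

    void update_found_rows()
    {
      std::vector<rowid_t> rowids;
      bool found;
      // Pass 1: buffer the rowids; updating now would move rows under the scan.
      for (rowid_t rid= index_lookup_next(&found); found;
           rid= index_lookup_next(&found))
        rowids.push_back(rid);
      // Pass 2: update from the stable buffered list.
      for (rowid_t rid : rowids)
        update_row(rid);
    }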
rpl/rpl_mdev382 ; Wrong replace in show_binlog_events2.inc
binlog/database ; Different error on Solaris if file exists
mroonga/repair_table_no_index_file ; Different system error on Solaris
partition_not_blackhole ; Different error on Solaris
partition_myisam ; Different error on Solaris
Some other failures in mroonga were because have_32bit.inc didn't correctly
detect 64 bits on Solaris. Fixed by using DEFAULT_MACHINE instead of
MACHINE_TYPE for Sys_version_compile_machine.
make it possible to change feedback plugin wait intervals
* only in debug builds
* and force the feedback report to be ignored
update the test to use this feature
In domain ID based filtering, a flag is used to filter out
the events that belong to a particular domain. This flag gets
set when the IO thread receives a GTID_EVENT for a domain on the
filter list, and it is reset at the last event in the GTID group.
The resetting, however, was wrongly done before the decision to
write or filter the event was made. As a result, the
last event in the group would always pass through the filter.
Fixed by deferring the reset logic. Also added a test case.
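A condensed sketch of the corrected ordering (hypothetical names):

    struct Event { bool ends_gtid_group; };
    bool domain_filtered= false;   // set on GTID_EVENT for a filtered domain

    void queue_event(const Event &ev, void (*write_to_relay_log)(const Event &))
    {
      if (!domain_filtered)        // decide first, while the flag is still valid
        write_to_relay_log(ev);
      if (ev.ends_gtid_group)      // only then reset at the end of the group
        domain_filtered= false;
    }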
Problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
starting new commits, then waits for running commits to complete. But
in-order parallel replication needs commits to happen in a particular
order, so this can easily deadlock.
To fix this problem, this patch introduces a way to temporarily pause
the parallel replication worker threads. Before starting FTWRL, we let
all worker threads complete in-progress transactions, and then
wait. Then we proceed to take the global read lock. Once the lock is
obtained, we unpause the worker threads. Now commits are blocked from
starting by the global read lock, so the deadlock will no longer occur.
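The sequence as a sketch (function names are illustrative, not the server's API):

    void pause_parallel_workers();   // wait out in-progress transactions, then hold
    void resume_parallel_workers();
    void take_global_read_lock();

    void flush_tables_with_read_lock()
    {
      pause_parallel_workers();   // no worker is mid-commit anymore
      take_global_read_lock();    // cannot deadlock against commit ordering now
      resume_parallel_workers();  // the read lock itself blocks new commits
    }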
DOING BAD DDL IN PREPARED STATEMENT
Analysis
========
A repeated execution of the prepared statement 'ALTER TABLE v1
CHECK PARTITION', where v1 is a view, leads to a server exit.
ALTER TABLE ... CHECK PARTITION is not applicable to views,
and the check for this is missing. This leads to
further execution and the creation of a derived table for the view
(allocated under the temp_table mem_root). Any reference to the open
view or related pointers during the second execution leads to a
server exit, as these were freed when the previous execution closed.
Fix:
======
Added a check for views in mysql_admin_table() for PARTITION
operations. This prevents mysql_admin_table() from
going ahead, creating the temp table, and hitting the related issues.
Changed the error message for admin table operations on views to
be more appropriate.
While querying INFORMATION_SCHEMA, the check for a table's engine
used only the table name, but not the schema name; so, if there were
different rows with the same table name, a wrong one could be retrieved.
The result of the check affected the decision whether the contents
of the table should be dumped and whether the DELAYED option could be used.
Fixed by adding a clause for table_schema to the query.
"Not printing the value" with binlog-row-image=minimal
Merged Rows_log_event::print_verbose_one_row() and log_event_print_value()
with MySQL 5.7
Added a flush after writing Table_map_log_event() to fix the wrong order of
lines in the output. This causes a lot of changes in some test results.
Patch from Daniel Black:
- Change the charset of mysql.column_stats.{min_value, max_value} from
utf8_bin varchar to varbinary
- Adjust the code that saves/reads the data accordingly.
- Also provide an upgrade statement in mysql_system_tables_fix.sql
The bitmap implementation defines two template Bitmap classes: one
optimized for 64-bit wide bitmaps (the default), while the other is used
for all other widths.
In order to optimize the computations, the Bitmap<64> class defines its
own member functions for bitmap operations; the other one, however,
relies on mysys' bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the case of the non-64-bit Bitmap class, intersect() wrongly reset the
received bitmap while initialising a new local bitmap structure
(bitmap_init() clears the bitmap buffer); thus, the received bitmap was
getting cleared.
Fixed by initializing the local bitmap structure using a temporary
buffer and later copying the received bitmap into the initialised bitmap
structure.
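A reduced sketch of the corrected intersect() (plain word arrays instead of MY_BITMAP; fixed-size scratch for brevity):

    #include <cstring>
    #include <cstdint>
    #include <cassert>

    // Intersect map with map2 without clobbering map2: the old code
    // effectively "initialised" (zeroed) the caller's own buffer first.
    void bitmap_intersect(uint64_t *map, const uint64_t *map2, size_t words)
    {
      uint64_t tmp[4];
      assert(words <= 4);                      // scratch capacity for the sketch
      std::memcpy(tmp, map2, words * sizeof(uint64_t));  // leave map2 untouched
      for (size_t i= 0; i < words; i++)
        map[i]&= tmp[i];
    }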
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when supplied
on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger a build failure if the MAX_INDEXES value is
greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping of tests for which the output differs with different
MAX_INDEXES.
* Introduced include/max_indexes.inc which would get modified by cmake
to reflect the MAX_INDEXES value used to build the server. This file
simply sets an mtr variable '$max_indexes' to show the MAX_INDEXES
value, which will then be consumed by the above introduced include file.
* Some tests (portions), dependent on MAX_INDEXES value, have been moved
to separate test files.
MDEV-8938 Server Crash on Update with joins
Make the unique table check after setup_fields() of UPDATE, because the
unique table check can materialize the table, and we do not need field
resolving after materialization.
Analysis: lengths which are not UNIV_SQL_NULL, but bigger than the following
number, indicate that a field contains a reference to an externally
stored part of the field in the tablespace. The length field then
contains the sum of the following flag and the locally stored length.
This was incorrectly set to

    #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_MAX)

when it should be

    #define UNIV_EXTERN_STORAGE_FIELD (UNIV_SQL_NULL - UNIV_PAGE_SIZE_DEF)
Additionally, we need to disable support for page sizes > 16K for
row-compressed tables, because a compressed page directory entry
reserves 14 bits for the start offset and 2 bits for flags.
This limits the uncompressed page size to 16K (14 bits can address
at most 2^14 = 16384 bytes). To support larger pages, the page
directory entry needs to be larger.