Analysis: The purge thread does not have a thd and has no access to
the handlerton.
Fix: If thd does not exist, use sql_print_warning instead of
push_warning_printf.
Added a SESSION-only system variable "wsrep_dirty_reads" to allow SELECT
queries to pass even when the node is not prepared to accept queries
(wsrep_ready=OFF). Added a test case.
The problem was that the maximum value of the transaction_prealloc_size
session system variable was ULONG_MAX which meant that it was possible
to cause the server to allocate excessive amounts of memory.
This patch fixes the problem by reducing the maximum value of
transaction_prealloc_size and transaction_alloc_block_size down
to 128K.
Note that transactions will still be able to allocate more than
128K if needed; this patch just reduces the amount that can be
preallocated, as well as the maximum size of the incremental
allocation blocks.
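To illustrate the distinction between the preallocated block and the
incremental allocation blocks, here is a minimal arena sketch
(hypothetical names and structure, not the server's MEM_ROOT code):

#include <cstddef>
#include <cstdlib>
#include <vector>

// New upper bound for the preallocated block (sketch value).
static const size_t kMaxPrealloc = 128 * 1024;

// Hypothetical arena: one preallocated block (now capped at 128K) plus
// additional blocks allocated on demand, so a transaction can still use
// more than 128K in total.
struct TxnArena {
  std::vector<void *> blocks;
  size_t block_size;

  TxnArena(size_t prealloc_size, size_t alloc_block_size)
      : block_size(alloc_block_size > kMaxPrealloc ? kMaxPrealloc
                                                   : alloc_block_size)
  {
    if (prealloc_size > kMaxPrealloc)
      prealloc_size = kMaxPrealloc;           // clamp, don't fail
    blocks.push_back(std::malloc(prealloc_size));
  }

  void *grow()                  // incremental block, unlimited count
  {
    void *p = std::malloc(block_size);
    blocks.push_back(p);
    return p;
  }

  ~TxnArena()
  {
    for (void *p : blocks)
      std::free(p);
  }
};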
special character sets like utf16, utf32, ucs2.
Analysis: The MySQL server does not support a few special character sets,
such as utf16, utf32 and ucs2, as the client's character set.
This is a known limitation, listed in the documentation:
http://dev.mysql.com/doc/refman/5.5/en/charset-connection.html.
The default value of the default-character-set parameter is 'auto',
which means that if the server's character set is not supported,
the client's character set is automatically changed to a predefined
character set, which is 'latin1' in the current code.
Eg:
$ ./mysql -uroot -S$SOCKET_FILE --default-character-set=utf16
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'utf16'
$ ./mysql -uroot -S$SOCKET_FILE connects to the server successfully,
with 'latin1' as the default client-side character set.
When the IO thread tries to connect to the master, it sets the server's
character set as the client's character set. When the slave server is
started with one of these special character sets, the IO thread (which
acts like a client connection to the master) fails because of the
limitation described above.
Fix: The IO thread now behaves the same way as a regular client: if the
server's character set is not supported as the client's character set,
the default client character set (latin1) is used instead.
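A rough illustration of the fallback described above (hypothetical
helper, not the actual replication code):

#include <cstring>

// Pick the character set that the slave IO thread (or any client)
// should announce to the server.
static const char *choose_client_charset(const char *server_charset)
{
  // Character sets that cannot be used as a client character set.
  static const char *unsupported[] = { "utf16", "utf32", "ucs2" };

  for (const char *cs : unsupported)
    if (std::strcmp(server_charset, cs) == 0)
      return "latin1";           // fall back to the predefined default

  return server_charset;         // otherwise mirror the server's charset
}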
Fix:
===
Backport Bug#11756194 to mysql-5.5: the slave breaks if
'DROP DATABASE' fails on the master and there are mismatched tables
on the slave.
'DROP TABLE <deleted tables>' was binlogged when
'DROP DATABASE' failed and at least one table was deleted
from the database. The log event would cause the slave SQL thread
to stop if some of the tables did not exist on the slave.
After this patch, the statement is always binlogged with the
'IF EXISTS' option.
There was a bug in lock handling when mixing INSERT ... SELECT on the same table.
mysql-test/suite/maria/insert_select.result:
Test case for MDEV-4010
mysql-test/suite/maria/insert_select.test:
Test case for MDEV-4010
mysys/thr_lock.c:
We wrongly allowed TL_WRITE_CONCURRENT_INSERT when there was a TL_READ_NO_INSERT lock
The problem was that repair() locked and unlocked tables, which left already locked tables in the wrong state
include/my_check_opt.h:
Added option T_NO_LOCKS to disable locking during repair()
Fixed duplicated bit T_NO_CREATE_RENAME_LSN
mysql-test/suite/rpl/r/myisam_external_lock.result:
Test case for MDEV-6871
mysql-test/suite/rpl/t/myisam_external_lock-slave.opt:
Test case for MDEV-6871
mysql-test/suite/rpl/t/myisam_external_lock.test:
Test case for MDEV-6871
storage/maria/ha_maria.cc:
Don't lock tables during enable_indexes()
Removed some calls to current_thd
storage/myisam/ha_myisam.cc:
Don't lock tables during enable_indexes()
Removed some calls to current_thd
Applied the fix previously pushed into 10.0.
Jan's initial commit comment:
The problem is that the test could open a Microsoft C++ Client Debugger
window with an abort exception. Let's not try to test this on
Windows.
innodb_stats_sample_pages
Analysis: If the number of analyzed pages is set to a very low
number compared to the actual number of pages in
that table/index, those pages are picked at random
(8 pages by default). As a result, a query run
after ANALYZE TABLE can return different results each time. If
the index tree is small, smaller than 10 *
n_sample_pages + total_external_size, then the
estimate is OK. For bigger index trees it is
common that we do not see any borders between
key values in the few pages we pick, even though
there may be n_sample_pages different key values,
or even more; the estimate then just
approximates to n_sample_pages (8).
Fix: (1) Introduced a new dynamic configuration variable,
innodb_stats_sample_traditional, that retains
the current design. Default: false.
(2) If traditional sampling is not used, we use
n_sample_pages = max(min(srv_stats_sample_pages,
index->stat_index_size),
log2(index->stat_index_size)*
srv_stats_sample_pages);
(3) Introduced a new dynamic configuration variable,
srv_stats_modified_counter (default = 0), which, if set,
bounds the number of row modifications needed before statistics
are re-estimated. If the user has provided such a bound,
we use the minimum of the provided value and
16 + 1/16 of the table's rows. If no bound is provided
(srv_stats_modified_counter = 0, default), new statistics are
calculated once 16 + 1/16 of the table's rows have been modified
since the last time a statistics batch was run.
The constant 16 keeps statistics from being recalculated on every
change to a very small, frequently updated counter table.
@param t table
@return true if the table has changed too much and stats need to be
recalculated
*/
#define DICT_TABLE_CHANGED_TOO_MUCH(t) \
((ib_int64_t) (t)->stat_modified_counter > (srv_stats_modified_counter ? \
ut_min(srv_stats_modified_counter, (16 + (t)->stat_n_rows / 16)) : \
16 + (t)->stat_n_rows / 16))
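A standalone sketch of the two calculations above, with simplified
types and names (illustrative only, not the InnoDB code):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Number of leaf pages to sample when the non-traditional algorithm is
// selected (innodb_stats_sample_traditional = OFF).
static uint64_t calc_n_sample_pages(uint64_t srv_stats_sample_pages,
                                    uint64_t index_size_in_pages)
{
  double log2_size = std::log2(static_cast<double>(index_size_in_pages));
  return std::max(std::min(srv_stats_sample_pages, index_size_in_pages),
                  static_cast<uint64_t>(log2_size * srv_stats_sample_pages));
}

// DICT_TABLE_CHANGED_TOO_MUCH, restated: recalculate when the number of
// modified rows exceeds min(user limit, 16 + rows/16), or just
// 16 + rows/16 when no user limit is set.
static bool changed_too_much(uint64_t modified_rows, uint64_t n_rows,
                             uint64_t srv_stats_modified_counter)
{
  uint64_t threshold = 16 + n_rows / 16;
  if (srv_stats_modified_counter)
    threshold = std::min(srv_stats_modified_counter, threshold);
  return modified_rows > threshold;
}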
MDEV-6789 segfault in Item_func_from_unixtime::get_date on updating table with virtual columns
* prohibit VALUES in partitioning expression
* prohibit user and system variables in virtual column expressions
* fix Item_func_date_format to cache locale (for %M/%W to return the same as MONTHNAME/DAYNAME)
* fix Item_func_from_unixtime to cache time_zone directly, not THD (and not to crash)
* added tests for other incorrectly allowed (in vcols) functions to see that they don't crash
A "field" could be either an Item_field or
(if loading into a view) an Item_direct_ref that references Item_field.
Also: when iterating fields, use fields of the TABLE_LIST (table or view),
not fields of a TABLE (actual underlying table - might have more columns).
Do not allow the server to start if binlog_format is set
to a format other than ROW. Also restrict changing the
GLOBAL/SESSION binlog_format value at runtime.
The test complains that the server failed to disappear upon shutdown /
wait_for_disconnect. Trying to solve the probable race condition by
adding a wait before the restart.
The test case had a classic mistake: SET DEBUG_SYNC='now SIGNAL xxx'
followed immediately by SET DEBUG_SYNC='RESET'. This makes it
possible for the waiter to miss the signal, if it does not manage
to wake up prior to the RESET.
The test cases had some --replace_result $USER USER. The problem is that the
value of $USER can be anything, depending on the name of the unix account that
runs the test suite. So random parts of the result can be erroneously
replaced, causing test failures.
Fix by making the replacements more specific, so they will match only the
intended stuff regardless of the value of $USER.
Description:
Use the correct length when moving to the next field in cmp_ref. The store
length already includes the length bytes of blobs, which was already taken
into account earlier for blob types.
Approved by Mattias, Jimmy [rb-7088]
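A simplified sketch of the pointer advance being fixed (hypothetical
field descriptor; the real cmp_ref code differs):

#include <cstddef>
#include <cstdint>

// Hypothetical per-field descriptor used when comparing two row
// references field by field.
struct FieldRef {
  size_t store_length;    // bytes occupied in the ref, INCLUDING any
                          // length bytes stored in front of a blob value
  bool   is_blob;
  size_t length_bytes;    // number of length bytes for blob fields
};

// Advance to the next field within a packed reference buffer.
static const uint8_t *next_field(const uint8_t *ptr, const FieldRef &f)
{
  // Correct: store_length already accounts for the blob length bytes,
  // so they must not be added a second time.
  return ptr + f.store_length;

  // Buggy version (double-counts the blob length bytes):
  // return ptr + f.store_length + (f.is_blob ? f.length_bytes : 0);
}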
The debug configuration parameter innodb_optimistic_insert_debug
which was introduced for testing corner cases in B-tree handling
had a bug in it. The value 1 would trigger an infinite sequence
of page splits.
Fix: When the value 1 is specified, disable this debug feature.
Approved by Yasufumi Kinoshita
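A minimal sketch of the guard implied by the fix (simplified; the
actual check in the InnoDB B-tree code is more involved):

#include <cstdint>

// Debug-only knob: force a page split once a page holds this many
// records.  The special value 1 could previously never be satisfied
// without splitting again, causing an endless sequence of splits.
static uint32_t limit_optimistic_insert_debug = 0;   // 0 = feature off

static bool should_force_split(uint32_t n_recs_on_page)
{
  // Treat both 0 (unset) and 1 as "feature disabled".
  if (limit_optimistic_insert_debug <= 1)
    return false;
  return n_recs_on_page >= limit_optimistic_insert_debug;
}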
dict_set_corrupted(): Use the canonical way of searching for
less-than-equal (PAGE_CUR_LE) and then checking low_match.
The code that was introduced in MySQL 5.5.17 in
Bug#11830883 SUPPORT "CORRUPTED" BIT FOR INNODB TABLES AND INDEXES
could position the cursor on the page supremum, and then attempt
to overwrite non-existing 7th field of the 1-field supremum record.
Approved by Jimmy Yang
Some of the tables are created in InnoDB and some in MyISAM.
We need to create all tables in InnoDB. The fix is to add engine=innodb
to the CREATE TABLE statements.
Approved in IM by Marko and Vasil.
The test case waits for other threads to complete, but the wait is only 2
seconds. This is likely to sometimes be too little on our heavily loaded
buildbot VMs, that can easily stall for more than 2 seconds from time to time.
So let's try to increase the timeout (to about 40 seconds) and see if it
helps.
The test case runs SHOW SLAVE HOSTS. The output of this is only stable after
all slaves have had time to register with the master; this happens
asynchronously.
The test was waiting for the slave with server_id=3 to appear in the output,
but it was missing a similar wait for server_id=2. Thus, if server_id=2 was
much slower to connect for some reason, it could be missing from the output,
causing the test to fail.
innodb_buffer_pool_pages_total depends on the page size. On Power8 the
page size is 64K (65536 bytes), compared to 4K on Intel. As we round
allocations up to the page size, we may get slightly more memory for the
buffer pool.
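For example (back-of-the-envelope arithmetic only, not the server's
actual buffer pool sizing code): rounding the same requested allocation
up to a 4K versus a 64K OS page can yield a slightly different number
of 16K InnoDB pages.

#include <cstdint>
#include <cstdio>

// Round an allocation up to the OS page size, then count how many 16K
// InnoDB pages fit.  Purely illustrative.
static uint64_t pool_pages(uint64_t requested_bytes, uint64_t os_page_size)
{
  const uint64_t innodb_page_size = 16 * 1024;
  uint64_t rounded =
      (requested_bytes + os_page_size - 1) / os_page_size * os_page_size;
  return rounded / innodb_page_size;
}

int main()
{
  uint64_t req = 8 * 1024 * 1024 + 20 * 1024;   // a request of ~8 MB
  std::printf("4K  OS pages: %llu InnoDB pages\n",
              (unsigned long long) pool_pages(req, 4 * 1024));    // 513
  std::printf("64K OS pages: %llu InnoDB pages\n",
              (unsigned long long) pool_pages(req, 64 * 1024));   // 516
  return 0;
}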
Normally, SET SESSION SQL_LOG_BIN is used by DBAs to run a
non-conflicting command locally only, ensuring it does not
get replicated.
Setting GLOBAL SQL_LOG_BIN would not require all sessions to
disconnect. When SQL_LOG_BIN is changed globally, it does not
immediately take effect for any sessions. It takes effect by
becoming the session-level default inherited at the start of
each new session, and this setting is kept and cached for the
duration of that session. Setting it intentionally is unlikely
to have a useful effect under any circumstance; setting it
unintentionally, such as while intending to use SET [SESSION],
is potentially disastrous. Accidentally using SET GLOBAL
SQL_LOG_BIN shows no immediate effect to the user and does not
have the desired session-level effect, which can cause other
problems: local-only maintenance gets binlogged and executed on
slaves, and transactions from new sessions (started after
SQL_LOG_BIN is changed globally) are not binlogged and
replicated, which would result in irrecoverable or
hard-to-recover data loss.
This is how GLOBAL variables normally work, but in a replication
context it does not look right: on a working server (with
connected sessions) 'SET GLOBAL sql_log_bin' is issued and none
of those connections is affected. An inexperienced DBA, after
noticing that the command did "nothing", will change the session
variable and most probably won't unset the global one, causing
new sessions to not be binlogged.
Setting GLOBAL SQL_LOG_BIN allows a DBA to stop binlogging in all
new sessions, which can be used to make a server "replication
read-only" without restarting it. But this has such a big
requirement, stopping all existing connections first, that it is
more likely to make a mess; it is too risky to allow the GLOBAL
variable.
The statement 'SET GLOBAL SQL_LOG_BIN=N' will produce an error
in 5.5, 5.6 and 5.7. Reading the GLOBAL SQL_LOG_BIN will produce
a deprecation warning in 5.7.
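A rough sketch of the scope check this implies (hypothetical helper and
error text, not the actual sys_vars code):

#include <stdexcept>

enum class VarScope { SESSION, GLOBAL };

// Hypothetical check run before assigning sql_log_bin: reject the
// GLOBAL scope outright, as 5.5/5.6/5.7 now do.
static void check_sql_log_bin_assignment(VarScope scope)
{
  if (scope == VarScope::GLOBAL)
    throw std::invalid_argument(
        "sql_log_bin can only be set with SESSION scope");  // illustrative
}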
FROM A FUNCTION
Scenario:
In a stored procedure, CREATE TABLE statement is not allowed. But an
exception is provided for CREATE TEMPORARY TABLE. We can create a temporary
table in a stored procedure.
Let there be two stored functions f1 and f2 and two stored procedures p1 and
p2. Their properties are as follows:
. stored function f1() calls stored procedure p1().
. stored function f2() calls stored procedure p2().
. stored procedure p1() creates temporary table t1.
. stored procedure p2() does DML on t1.
Consider the following situation:
1. Autocommit mode is on.
2. select f1()
3. select f2()
Step 2: In this step, t1 would be created via p1(). A table level transaction
lock would have been taken. The ::external_lock() would not have been called
on this table. At the end of step 2, because autocommit mode is on, this
table-level lock will be released.
Step 3: When we execute DML on table t1 via p2() we have two problems:
Problem 1:
The function ha_innobase::external_lock() would have been called, but since
it is a SELECT query, no table-level locks would have been taken. Hence the
following assert will fail:
ut_ad(lock_table_has(thr_get_trx(thr), index->table, LOCK_IX));
Solution:
The solution would be to identify this situation and take a table level lock
and use the proper lock type prebuilt->select_lock_type = LOCK_X for DML
operations.
Problem 2:
Another problem is that in step 3, ha_innobase::open() is never called on
the table t1.
Solution:
The solution would be to identify this situation and re-initialize the
handler of table t1.
rb#6429 approved by Krunal.
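A highly simplified sketch of the two corrective actions (hypothetical
types and names; the real fix lives in the InnoDB handler and is more
involved):

enum LockType { LOCK_NONE, LOCK_IX, LOCK_X };

// Hypothetical stand-ins for the handler/prebuilt structures.
struct Prebuilt { LockType select_lock_type; };
struct Handler  {
  bool opened;
  Prebuilt prebuilt;
  void reopen() { opened = true; }   // stand-in for re-initializing
};

// Called before running DML on a temporary table created inside a
// stored function/procedure chain, where external_lock() only saw a
// SELECT and the table-level lock from step 2 is already gone.
static void prepare_temp_table_for_dml(Handler &h, bool is_dml)
{
  if (!h.opened)
    h.reopen();                            // Problem 2: handler never opened
  if (is_dml && h.prebuilt.select_lock_type == LOCK_NONE)
    h.prebuilt.select_lock_type = LOCK_X;  // Problem 1: proper lock type
}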
Problem:
Creation of a table fails when innodb_strict_mode is enabled, but the same
table is created without any warning when innodb_strict_mode is disabled.
Solution:
If creation of a table fails with an error when innodb_strict_mode is
enabled, it must issue a warning when innodb_strict_mode is disabled.
rb#6723 approved by Krunal.
DELETE FROM ports WHERE ports.id = 'f37aa3fe-ab99-4d0f-a566-6cd3169d7516'
where table ports has foreign keys.
Verified that current 5.5-galera is not affected and added test case
to regression set.
Problem:
We maintain two rb trees in each dict_table_t. The foreign_rbt must be in
sync with foreign_list. The referenced_rbt must be in sync with
referenced_list. There is one function which checks this consistency and it
failed, resulting in an assert failure.
The root cause of the problem was identified: the search order was
lost in the referenced_rbt. This is because, while renaming the table,
we did not refresh this referenced_rbt.
Solution:
When a foreign key is renamed, we must delete it from and re-insert it
into both foreign_rbt and referenced_rbt.
rb#6412 approved by Jimmy.
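A sketch of the fix using a std::map as a stand-in for the rb-trees
(illustrative only; the real code uses InnoDB's own rb-tree structures):

#include <map>
#include <string>

struct ForeignKey { std::string id; /* ... */ };

// Stand-ins for foreign_rbt / referenced_rbt: ordered by the foreign
// key name, so they must be re-keyed when the key is renamed.
using FkTree = std::map<std::string, ForeignKey *>;

static void rename_foreign_key(FkTree &foreign_rbt, FkTree &referenced_rbt,
                               ForeignKey *fk, const std::string &new_id)
{
  // Deleting and re-inserting keeps the search order consistent with
  // the new name; updating fk->id in place would silently break lookups.
  foreign_rbt.erase(fk->id);
  referenced_rbt.erase(fk->id);
  fk->id = new_id;
  foreign_rbt[new_id] = fk;
  referenced_rbt[new_id] = fk;
}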
sporadically
Fix: Modify the test to be smaller so that the testcase timeout does not
trigger. We already have a test for the --big-test setup
(innodb.innodb_simulate_comp_failures).
bzr merge -r4264 maria/5.5
Text conflict in sql/mysqld.cc
Text conflict in storage/xtradb/btr/btr0cur.c
Text conflict in storage/xtradb/buf/buf0buf.c
Text conflict in storage/xtradb/buf/buf0lru.c
Text conflict in storage/xtradb/handler/ha_innodb.cc
5 conflicts encountered.
Fix the bug properly (plugin cannot be unloaded as long as it's locked).
Enable and fix the test case.
Significantly reduce the number of LOCK_plugin locks for semisync
(practically all locks were removed).
~40% of the bugfixes(*) applied
~40% of the bugfixes reverted (incorrect or we're not buggy)
~20% of the bugfixes applied, despite us not being buggy
(*) only changes in the server code, e.g. not cmakefiles
This bug only happens when partitioned tables are used in LOCK TABLES and
implicit_commit() is called
(as part of trying to execute a CREATE TABLE within LOCK TABLES).
The problem was that Aria could not move the tables from one transaction to
the new one, as thd->open_tables contained
a partitioned table and not an Aria table.
Fix:
- Store a list of all open tables that are part of a share in share->open_tables
- In maria::implicit_commit(), use transaction->used_tables & share->open_tables to find out which tables
were part of the current transaction, instead of using thd->open_tables, which may contain partitioned tables.
mysql-test/suite/maria/maria_partition.result:
Added test case
mysql-test/suite/maria/maria_partition.test:
Added test case
storage/maria/ha_maria.cc:
Use trn->used_tables and share->open_tables to find out which tables were part of the current transaction instead of using thd->open_tables.
storage/maria/ma_close.c:
Remove closed table from share->open_list
storage/maria/ma_open.c:
Add table to share->open_list
storage/maria/ma_state.c:
Added comment
storage/maria/maria_def.h:
Added share->open_list, a list of all tables that are using this share.
- adjusted a test result according to the change made for MDEV-6100;
- added an explicit timezone for engines/iuds, since MTR in
MariaDB does not set it the way MySQL's does, and tests with a constant
TIMESTAMP can have a different outcome
Galera tests used the default base/SST ports, which led to
failures due to port conflicts when run in parallel.
Fixed by setting them to ports generated by the mtr framework.
The test case makes use of the fine DEBUG_SYNC facility. Furthermore,
since it needs synchronization on internal threads (dump and SQL
threads) the server code has DEBUG_SYNC commands internally deployed
and activated through the DBUG_EXECUTE_IF macro. The internal
DEBUG_SYNC commands are then controlled from the test case through the
DEBUG variable.
There were three problems around the DEBUG + DEBUG_SYNC facility
usage:
1. When signaling the SQL thread to continue, the test would
immediately reset the DEBUG_SYNC variable. This could mean that the SQL
thread might lose the signal and continue to wait forever;
2. A similar scenario was happening with the dump thread on the
master. This thread was instructed to wait, and later it would be
signaled to continue, but immediately after the DEBUG_SYNC would be
reset. This could lead to the dump thread missing the signal and
waiting forever;
3. The test was not cleaning itself up with respect to the
instrumentation of the dump thread. This would leave the
conditional execution of an internal DEBUG_SYNC command active
(through the usage of DBUG_EXECUTE_IF).
We fix #1 and #2 by waiting for the threads to receive the signal and
only then issuing the reset. We fix #3 by resetting the DEBUG variable,
thus deactivating the dump thread's internal DEBUG_SYNC command.
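The underlying race is the classic lost-signal problem; here is a
minimal standalone analogy (not the DEBUG_SYNC implementation itself):

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool signalled = false;     // stands for the DEBUG_SYNC signal
bool consumed  = false;     // waiter acknowledges it saw the signal

void waiter()               // e.g. the SQL or dump thread
{
  std::unique_lock<std::mutex> lk(m);
  cv.wait(lk, [] { return signalled; });  // would wait forever if the
  consumed = true;                        // signal were reset too early
  cv.notify_all();
}

void test_thread()
{
  std::unique_lock<std::mutex> lk(m);
  signalled = true;                       // SET DEBUG_SYNC='now SIGNAL x'
  cv.notify_all();
  // Fix: wait until the waiter has actually consumed the signal
  cv.wait(lk, [] { return consumed; });
  signalled = false;                      // only now: SET DEBUG_SYNC='RESET'
}

int main() { std::thread w(waiter), t(test_thread); w.join(); t.join(); }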
Bug#16415173 CRLF INSTEAD OF LF IN SQL-BENCH SCRIPTS
Correct perms and convert Windows-style line endings to UNIX style on some files.
Fix perms on installed ini files.
(MySQL 5.5 version)
DISCONNECT CON1 AND CON2
Problem:
The test suite/binlog/t/binlog_killed.test makes the connections
con1 and con2 but forgets to disconnect them and wait until that
operation has finished at the end of the test.
This mistake has the potential to harm subsequent tests in
case these tests depend on the content of the processlist.
Solution:
Added disconnect + wait_until_disconnected.inc
within the test cleanup.
That particular part of the slave connecting to the master was missing code
to handle retries in case of network errors. The same problem is present in
MySQL 5.5, but fixed in MySQL 5.6.
Fixed with this patch by adding the code (mostly identical to MySQL 5.6), and
also adding a test case.
I checked the other queries sent to the master during slave connect, and they
now all seem to handle reconnects in case of network failures.
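As an illustration of the retry pattern added here (a generic sketch
with hypothetical callbacks, not the actual slave IO-thread code):

#include <chrono>
#include <functional>
#include <thread>

// Retry a query used during slave connect, reconnecting on network
// errors instead of giving up immediately.  The two callbacks stand in
// for "run this query on the master" and "re-establish the connection".
static bool run_with_retry(const std::function<bool()> &run_query,
                           const std::function<bool()> &reconnect,
                           int max_retries)
{
  for (int attempt = 0; attempt <= max_retries; ++attempt)
  {
    if (run_query())
      return true;                        // success, no retry needed
    if (attempt == max_retries || !reconnect())
      break;                              // out of retries, or reconnect failed
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
  return false;
}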