INSTALL SONAME ignores attempts to load the same plugin twice; it is not an error
(because one can load one plugin by name and then INSTALL SONAME for the rest).
But Federated and FederatedX are different plugins, despite having the same name.
Now plugin_add() only considers two plugins identical if their names are the same
string (compared as pointers). Otherwise it reports an error.
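A minimal sketch of the scenario (using the FederatedX library mentioned above; exact library and plugin names are illustrative):
  INSTALL PLUGIN federated SONAME 'ha_federatedx';
  INSTALL SONAME 'ha_federatedx';   -- loading the same library again is silently ignored, not an error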
rpl.rpl_gtid_crash fails on PPC64
This is an addition to the original patch.
Restored show binlog events output and adjusted filters to
replace [\d-\d-\d,\d-\d-\d,\d-\d-\d] with [#-#-#].
Remove the "don't update the row for b'' and store uninitialized bytes on disk" change.
Update test cases to allow DEFAULT b'', because b'' is a valid expression elsewhere.
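A hedged example of what the adjusted tests now accept (table and column names are made up):
  CREATE TABLE t1 (f BIT(8) DEFAULT b'');   -- b'' is accepted as a default, like any other bit-value literal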
4229: MDEV-5670: Assertion failure in file buf0lru.c line 2355
Add more status information in case the failure is repeatable.
4230: MDEV-5673: Crash while parallel dropping multiple tables under heavy load
Improve long semaphore wait output to include all semaphore waits
and try to find out if there is a sequence of waiters.
4233: Fix compiler errors on product build.
4237: Fix too aggressive long semaphore wait output and add guard against introducing
compression failures on insert buffer.
4238: Fix test failure caused by simulated compression failure on
IBUF_DUMMY table.
Reason for the problem was that the hash of changed files in the key cache was too small (was 128). Fixed by making the hash size larger and changeable.
- Introduced key-cache-file-hash-size (default 512) for MyISAM and aria_pagecache_file_hash_size (default 512) for Aria.
- Added new status variable "Feature_delay_key_write" which counts the number of tables opened that use delay_key_write
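A quick way to check the new settings and counter (a sketch; the hash size options can also be given on the command line as --key-cache-file-hash-size=# and --aria-pagecache-file-hash-size=#):
  SHOW GLOBAL VARIABLES LIKE '%file_hash_size';        -- key_cache_file_hash_size, aria_pagecache_file_hash_size (default 512)
  SHOW GLOBAL STATUS LIKE 'Feature_delay_key_write';   -- number of tables opened with delay_key_write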
mysql-test/r/features.result:
Added test of Feature_delay_key_write
mysql-test/r/key_cache.result:
Updated tests as the number of blocks has changed
mysql-test/r/mysqld--help.result:
Updated result
mysql-test/suite/maria/maria3.result:
Updated result
mysql-test/suite/sys_vars/r/key_cache_file_hash_size_basic.result:
Test new variable
mysql-test/suite/sys_vars/t/aria_pagecache_file_hash_size_basic.test:
Test new variable
mysql-test/suite/sys_vars/t/key_cache_file_hash_size_basic.test:
Test new variable
mysql-test/t/features.test:
Added test of Feature_delay_key_write
mysql-test/t/key_cache.test:
Updated tests as the number of blocks has changed
mysys/mf_keycache.c:
Made CHANGED_BLOCKS_HASH dynamic
sql/handler.cc:
Updated call to init_key_cache()
sql/mysqld.cc:
Added "Feature_delay_key_write"
Added support for key-cache-file-hash-size
sql/mysqld.h:
Added support for key-cache-file-hash-size
sql/sql_class.h:
Added feature_files_opened_with_delayed_keys
sql/sys_vars.cc:
Added key_cache_file_hash_size
storage/maria/ha_maria.cc:
Added pagecache_file_hash_size
Added counting of files with delay_key_write
storage/maria/ma_checkpoint.c:
Fixed compiler warning
storage/maria/ma_pagecache.c:
Made PAGECACHE_CHANGED_BLOCKS_HASH into a variable
storage/maria/ma_pagecache.h:
Made PAGECACHE_CHANGED_BLOCKS_HASH into a variable
storage/maria/ma_rt_test.c:
Updated parameters for init_pagecache()
storage/maria/ma_test1.c:
Updated parameters for init_pagecache()
storage/maria/ma_test2.c:
Updated parameters for init_pagecache()
storage/maria/ma_test3.c:
Updated parameters for init_pagecache()
storage/maria/maria_chk.c:
Updated parameters for init_pagecache()
storage/maria/maria_ftdump.c:
Updated parameters for init_pagecache()
storage/maria/maria_pack.c:
Updated parameters for init_pagecache()
storage/maria/maria_read_log.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_consist.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_rwconsist.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_rwconsist2.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_pagecache_single.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_first_lsn-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_max_lsn-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_multigroup-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_multithread-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_noflush-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_nologs-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_pagecache-t.c:
Updated parameters for init_pagecache()
storage/maria/unittest/ma_test_loghandler_purge-t.c:
Updated parameters for init_pagecache()
storage/myisam/ha_myisam.cc:
Added counting of files with delay_key_write
storage/myisam/mi_check.c:
Updated call to init_key_cache()
storage/myisam/mi_test1.c:
Updated call to init_key_cache()
storage/myisam/mi_test2.c:
Updated call to init_key_cache()
storage/myisam/mi_test3.c:
Updated call to init_key_cache()
storage/myisam/mi_test_all.sh:
Fixed broken test
storage/myisam/myisam_ftdump.c:
Updated call to init_key_cache()
storage/myisam/myisamchk.c:
Updated call to init_key_cache()
storage/myisam/myisamlog.c:
Updated call to init_key_cache()
The bug was that my_real_read() called net_before_header_psi() multiple times for long packets.
Fixed by adding a flag when we are reading a header.
Also did some cleanups to the interface of my_net_read() to avoid unnecessary calls if the performance schema is not used.
- Added my_net_read_packet() as a replacement for my_net_read(); it takes a new parameter telling whether we are doing a read for a new command. my_net_read() is still in the client library for old clients.
- Removed THD->m_server_idle (not needed anymore, as this is now given as an argument to my_net_read_packet()).
- Added tests for compressed protocol and big packets
include/mysql.h.pp:
Added my_net_read_packet() as a replacement for my_net_read()
include/mysql_com.h:
Added my_net_read_packet() as a replacement for my_net_read()
mysql-test/r/mysql_client_test_comp.result:
New test
mysql-test/t/mysql_client_test-master.opt:
Added max_allowed_packet to be able to test big packets and packet size overflows.
mysql-test/t/mysql_client_test_comp-master.opt:
New test
mysql-test/t/mysql_client_test_nonblock-master.opt:
Added max_allowed_packet to be able to test big packets and packet size overflows.
sql-common/client.c:
Use my_net_read_packet()
sql/mf_iocache.cc:
Use my_net_read_packet()
sql/mysqld.cc:
Removed THD->m_server_idle (not needed anymore, as this is now given as an argument to my_net_read_packet())
sql/net_serv.cc:
Added an argument to my_real_read() to indicate if we are reading the first block of the next statement and should call the performance schema.
Added 'compatibility function' my_net_read().
Added my_net_read_packet(), which is a new version of my_net_read() with a new parameter if we are doing a read for a new command from the server.
sql/sql_class.cc:
Removed m_server_idle (not needed anymore)
sql/sql_class.h:
Removed m_server_idle (not needed anymore)
sql/sql_parse.cc:
Removed m_server_idle (not needed anymore)
tests/mysql_client_test.c:
Added tests for compressed protocol and big packets
Merge the patches into MariaDB 10.0 main.
With this patch, parallel replication will now automatically retry a
transaction that fails due to deadlock or other temporary error, same as
single-threaded replication.
We catch deadlocks with InnoDB transactions due to enforced commit order. If
T1 must commit before T2 in parallel replication and T1 ends up waiting for T2
inside InnoDB, we kill T2 and retry it later to resolve the deadlock
automatically.
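A minimal sketch of trying the feature on a slave (values are illustrative; slave_parallel_threads must be changed while the slave threads are stopped, and the existing slave_transaction_retries limit also applies to these automatic retries):
  STOP SLAVE;
  SET GLOBAL slave_parallel_threads = 4;
  START SLAVE;
  SHOW GLOBAL VARIABLES LIKE 'slave_transaction_retries';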
Fix a bug discovered in Buildbot valgrind builds. The logic checking for slave
init thread completion was reversed, so depending on thread scheduling, server
startup could hang.
Also add another variant of SSL valgrind suppression, needed for different
library version.
When a MyISAM query is killed midway, the query is logged to the binlog marked
with the error.
The slave does not attempt to run the query, but aborts with a suitable error
message in the error log for the DBA to act on.
In this case, the parallel replication code would check the sql_errno() code,
even though no my_error() had been set. In debug builds, this causes an assertion.
Fixed the code to check that we actually have an error set before querying for
an error code.
After-review changes.
For this patch in 10.0, we do not introduce a new public storage engine API,
we just fix the InnoDB/XtraDB issues. In 10.1, we will make a better public
API that can be used for all storage engines (MDEV-6429).
Eliminate the background thread that did deadlock kills asynchronously.
Instead, we ensure that the InnoDB/XtraDB code can handle doing the kill from
inside the deadlock detection code (when thd_report_wait_for() needs to kill a
later thread to resolve a deadlock).
(We preserve the part of the original patch that introduces dedicated mutex
and condition for the slave init thread, to remove the abuse of
LOCK_thread_count for start/stop synchronisation of the slave init thread).
- In print_explain_row(), do not forget to print r_rows.
- Switch Explain_update from using its own counters to re-using
Table_access_tracker.
- Make ANALYZE UPDATE code structure uniform with ANALYZE DELETE.
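For reference, a hedged example of the two statement forms this touches (table and column names are made up):
  ANALYZE UPDATE t1 SET a = a + 1 WHERE b < 10;
  ANALYZE DELETE FROM t1 WHERE b < 10;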
The test case makes use of the fine DEBUG_SYNC facility. Furthermore,
since it needs synchronization on internal threads (dump and SQL
threads) the server code has DEBUG_SYNC commands internally deployed
and activated through the DBUG_EXECUTE_IF macro. The internal
DEBUG_SYNC commands are then controlled from the test case through the
DEBUG variable.
There were three problems around the DEBUG + DEBUG_SYNC facility
usage:
1. When signaling the SQL thread to continue, the test would
immediately reset the DEBUG_SYNC variable. This could mean that the SQL
thread might lose the signal and continue to wait forever;
2. A similar scenario was happening with the dump thread on the
master. This thread was instructed to wait, and later it would be
signaled to continue, but immediately after, the DEBUG_SYNC would be
reset. This could lead to the dump thread missing the signal and
waiting forever;
3. The test was not cleaning itself up with respect to the
instrumentation of the dump thread. This would leave the
conditional execution of an internal DEBUG_SYNC command active
(through the usage of DBUG_EXECUTE_IF).
We fix #1 and #2 by waiting for the threads to receive the signal and
only then issuing the reset. We fix #3 by resetting the DEBUG variable,
thus deactivating the dump thread's internal DEBUG_SYNC command.
The sql_slave_skip_counter is important to be able to recover replication from
certain errors. Often, an appropriate solution is to set
sql_slave_skip_counter to skip over a problem event. But setting
sql_slave_skip_counter produced an error in GTID mode, with a suggestion to
instead set @@gtid_slave_pos to point past the problem event. This however is
not always possible; for example, in case of an INCIDENT event, that event
does not have any GTID to assign to @@gtid_slave_pos.
With this patch, sql_slave_skip_counter now works in GTID mode the same way as
in non-GTID mode. When set, that many initial events are skipped when the SQL
thread starts, plus as many extra events as are needed to completely skip any
partially skipped event group. The GTID position is updated to point past the
skipped event(s).
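Usage is now the same in GTID mode as in non-GTID mode, e.g. (the counter value is illustrative):
  STOP SLAVE;
  SET GLOBAL sql_slave_skip_counter = 1;   -- skips the problem event, plus the rest of its event group if needed
  START SLAVE;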
MDEV-6344: mysqldump issues FLUSH TABLES, which gets written into binlog and replicated
Add a --gtid option (for compatibility, the original behaviour is preserved
when --gtid is not used).
With --gtid, --master-data and --dump-slave output the GTID position (the
old-style file/offset position is still output, but commented out). Also, a
CHANGE MASTER TO master_use_gtid=slave_pos is output to ensure a provisioned
slave is configured in GTID, as requested.
Without --gtid, the GTID position is still output, if available, but commented
out.
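As a rough sketch, the header of a dump taken with --master-data and --gtid is expected to look something like this (exact comment text, file names and positions are illustrative):
  -- CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=313;   -- old-style position, commented out
  SET GLOBAL gtid_slave_pos='0-1-42';
  CHANGE MASTER TO master_use_gtid=slave_pos;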
Also fix MDEV-6344, to avoid FLUSH TABLES getting into the binlog. Otherwise a
mysqldump on a slave server will silently inject a GTID which does not exist
on the master, which is highly undesirable.
Also fix an incorrect error handling around obtaining binlog position with
--master-data (was probably unlikely to trigger in most cases).
Analysis: For some reason the table stats for a table pointed to from an index
are not initialized. Added additional warning output for this situation
and table stats initialization. This is better than asserting.
These tests use search_pattern_in_file.inc to search the error log for
expected output. However, search_pattern_in_file.inc by default searched only
the first 50000 bytes, so if the error log grew too big the tests would fail.
This patch extends search_pattern_in_file.inc with an option to specify how
much of the file to search, and whether to search from the start of the file
or from the end. Then the rpl.rpl_checksum and rpl.rpl_gtid_errorlog test
cases are fixed to search the last 50000 bytes of the error log, which will
work no matter how large prior tests have made it.
The direct cause of the assertion was missing error handling in
record_gtid(). If ha_commit_trans() fails for the statement commit, there was
missing code to catch the error and do ha_rollback_trans() in this case; this
caused close_thread_tables() to assert.
Normally, this error case is not hit, but in this case it was triggered due to
another bug: When a transaction T1 fails during parallel replication, the code
would signal following transactions that they could start to run without
properly marking the error condition. This caused subsequent transactions to
incorrectly start replicating, only to get an error later during their own
commit step. This was particularly serious if the subsequent transactions were
DDL or MyISAM updates, which cannot be rolled back and would leave replication
in an inconsistent state.
Fixed by 1) in case of error, only signal following transactions to continue
once the error has been properly marked and those transactions will know not
to start; and 2) implement proper error handling in record_gtid() in the case
that statement commit fails.
If replication breaks in GTID mode, it is not trivial to determine the GTID of
the failing event group. This is a problem, as such GTID is needed eg. to
explicitly set @@gtid_slave_pos to skip to after that event group, or to
compare errors on different servers, etc.
Fix by ensuring that relevant slave errors logged to the error log include the
GTID of the event group containing the problem event.
This is MySQL Bug#59123. The message string stored in an INCIDENT event was
not zero-terminated. This caused any following checksum bytes (if enabled on
the master) to be output to the error log as trailing garbage when the message
was printed to the error log.
Backport the patch from MySQL 5.6:
revno: 2876.228.200
revision-id: zhenxing.he@sun.com-20110111051323-w2xnzvcjn46x6h6u
committer: He Zhenxing <zhenxing.he@sun.com>
timestamp: Tue 2011-01-11 13:13:23 +0800
message:
BUG#59123 rpl_stm_binlog_max_cache_size fails sporadically with found warnings
Also add a test case.
Bug#16415173 CRLF INSTEAD OF LF IN SQL-BENCH SCRIPTS
Correct perms and converts from Windows style to UNIX style line endings on some files.
Fix perms on installed ini files.
(MySQL 5.5 version)
- "ANALYZE $stmt" should discard select's output, but it should still
evaluate the output columns (otherwise, subqueries in select list
are not executed)
- SHOW EXPLAIN's code practice of calling JOIN::save_explain_data()
after JOIN::exec() is disastrous for ANALYZE, because it resets
all counters after the first execution. This practice is now stopped:
= "Late" test_if_skip_sort_order() calls explicitly update their part
of the query plan.
= Also, I had to rewrite I_S optimization to actually have optimization
and execution stages.
MySQL 5.6 implemented WL#344, which is about a MASTER_DELAY option to CHANGE
MASTER. But as part of this worklog, the format of the relay-log.info file was
changed. The new format is not understood by earlier versions, nor by
MariaDB 10.0, so switching the server to those versions would cause the slave to
abort with an error due to reading incorrect data out of relay-log.info.
Fix this by backporting from the WL#344 patch just the code that understands
the new relay-log.info format. We still write out the old format, and none of
the MASTER_DELAY feature is backported with this commit.
CORRUPTS FRM
Analysis:
---------
ALTER TABLE on a partitioned table resulted in the wrong
engine being written into the table's FRM file and displayed
in SHOW CREATE TABLE.
The prep_alter_part_table() modifies the partition_info object
for TABLE instance representing the old version of table.
If the ALTER TABLE ENGINE statement fails, the partition_info
object for the TABLE contains the altered storage engine name.
The SHOW CREATE TABLE uses the TABLE object to display the table
information, hence displays incorrect storage engine for the table.
Also a subsequent successful ALTER TABLE operation will write the
incorrect engine information into the FRM file.
Fix:
---
A copy of the partition_info object is created before modification so
that any changes will not cause the original partition_info object
to be modified if the ALTER TABLE fails. (Backported part of the code
provided as fix for bug#14156617 in mysql-5.6.6).
* enum values to index different ACL tables, instead of hard-coded numbers
(even different in different functions).
* move TABLE_LIST initialization into open_grant_tables()
and use it everywhere
* change few my_bool's to bool's
* Don't write frm for tmp tables
* pass frm image down to open_table_uncached, when possible
* don't use truncate-by-recreate for temp tables - cannot recreate
without frm, and delete_all_rows is faster anyway
Auto-generate the allowed list of values for enum/set/flagset options
in --help output. But don't do that when the help text already has them.
Also, remove lists of values from help strings of various options, where
they were simply listed without any additional information.
It checked that the default is lz4, which only worked on systems that
had lz4 and did not have lzo. Now it checks that the default is zlib,
which works on systems that have neither lz4 nor lzo, like our package
builders in buildbot. This is intentional; we don't want to introduce
additional dependencies (lz4, lzo) for our packages just yet.
This can (and will) be reconsidered, and this test can (and will)
be updated again.
The INCIDENT_EVENT always caused slave error and abort, without checking
--slave-skip-errors.
Now, if error 1590 (ER_SLAVE_INCIDENT) is included in the --slave-skip-errors
list, incident events will be ignored.
This is a merge of this MySQL 5.6 patch:
revision-id: frazer@mysql.com-20110314170916-ypgin17otj3ucx95
committer: Frazer Clement <frazer@mysql.com>
timestamp: Mon 2011-03-14 17:09:16 +0000
message:
Bug#11799671 NOT POSSIBLE TO SKIP INCIDENT ERRORS
DISCONNECT CON1 AND CON2
Problem:
The test suite/binlog/t/binlog_killed.test makes the connections
con1 and con2 but forgets to disconnect them + wait till that
operation is finished at test end.
This mistake has the potential to harm subsequent tests in
case these tests depend on the content of the processlist.
Solution:
Added disconnect + wait_until_disconnected.inc
within the test cleanup.
- Filesort has an optimization where it reads only columns that are
needed before the sorting is done.
- When ref(_or_null) is picked by the join optimizer, it may remove parts
of WHERE clause that are guaranteed to be true.
- However, if we use quick select, we must put all of the range columns into the
read set. Not doing so may cause us to fail to detect the end of the range.
That particular part of slave connect to master was missing code to handle
retry in case of network errors. The same problem is present in MySQL 5.5, but
fixed in MySQL 5.6.
Fixed with this patch, by adding the code (mostly identical to MySQL 5.6), and
also adding a test case.
I checked other queries done towards master during slave connect, and they now
all seem to handle reconnect in case of network failures.
Adapt default_tmp_storage_engine implementation from mysql-5.6
New feature (as compared to 5.6), default_tmp_storage_engine=NULL
means that temporary tables will use default_storage_engine value.
This makes the behavior backward compatible.
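A small sketch of the backward-compatible behaviour (engine and table names are illustrative):
  SET SESSION default_tmp_storage_engine = NULL;   -- temporary tables fall back to @@default_storage_engine
  CREATE TEMPORARY TABLE tmp1 (a INT);
  SET SESSION default_tmp_storage_engine = Aria;   -- or name an engine explicitly, as in 5.6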
When plugin=mysql_native_password (or mysql_old_password) take the password
from *either* password *or* authentication_string, whichever is set.
This makes no sense, but alas, that's what MySQL-5.6 does.
replication causing replication to fail.
Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel
replication worker threads. Replace it with a better, more selective solution.
The issue is with certain edge cases of InnoDB gap locks, for example between
INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to
block the INSERT, if the DELETE runs first, while the record lock set by
INSERT does not block the DELETE, if the INSERT runs first. This can cause a
conflict between the two in parallel replication on the slave even though they
ran without conflicts on the master.
With this patch, InnoDB will ask the server layer about the two involved
transactions before blocking on a gap lock. If the server layer tells InnoDB
that the transactions are already fixed wrt. commit order, as they are in
parallel replication, InnoDB will ignore the gap lock and allow the two
transactions to proceed in parallel, avoiding the conflict.
Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now
asks the server layer for any preferences about which transaction to roll
back. In case of parallel replication with two transactions T1 and T2 fixed to
commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the
deadlock victim, not T1. This helps in some cases to avoid excessive deadlock
rollback, as T2 will in any case need to wait for T1 to complete before it can
itself commit.
Also some misc. fixes found during development and testing:
- Remove thd_rpl_is_parallel(), it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
worker thread is killed to resolve a deadlock with fixed commit
ordering. There are some cases, eg. in sql/sql_parse.cc, where a KILL_QUERY
can be ignored if the query otherwise completed successfully, and this
could cause the deadlock kill to be lost, so that the deadlock was not
correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel
replication are not printed to the error log; there were some places
around the replication code with extra error logging. These conditions can
occur occasionally and are handled automatically without breaking
replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at
the end of a transaction, to be able to detect and resolve deadlocks due to
commit ordering. But this value was also used as a flag to mark whether
record_gtid() had been called, by being set to zero, losing the value. Now,
introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains
valid for the entire duration of the transaction.
- Fix one place where the code to handle ignored errors called reset_killed()
unconditionally, even if no error was caught that should be ignored. This
could cause loss of a deadlock kill signal, breaking deadlock detection and
resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command(). This could
cause a prior error condition to remain for the next event executed,
causing assertions about errors already being set and possibly giving
incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker
threads after each event execution; this caused the deadlock detection and
handling code to not be able to correctly process the associated
transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at
wait_for_commit::unregister_wait_for_prior_commit(). This could cause the
error condition to wrongly propagate to a later wait_for_prior_commit(),
causing spurious ER_PRIOR_COMMIT_FAILED errors.
- Do not put the binlog background thread into the processlist. It causes
too many result differences in mtr, but also it probably is not useful
for users to pollute the process list with a system thread that does not
really perform any user-visible tasks...
SLOW/CRASHES SEMAPHORE
Problem:
There are 200,000 tables - fk_000001, fk_000002 ... fk_200000. All of them
are related to the same parent_table through a foreign key constraint.
When the parent_table is loaded into the dictionary cache, all the child tables
will also be loaded. This takes a lot of time. Since this operation happens
when the dictionary latch is taken, the scenario leads to "long semaphore wait"
situation and the server gets killed.
Analysis:
A simple performance analysis showed that the slowness is because of the
dict_foreign_find() function. It does a linear search on two linked list
table->foreign_list and table->referenced_list, looking for a particular
foreign key object based on foreign->id as the key. This is called two
times for each foreign key object.
Solution:
Introduce an rb-tree in table->foreign_rbt and table->referenced_rbt, which
are some sort of index on table->foreign_list and table->referenced_list
respectively, using foreign->id as the key. These rbt structures will be
solely used by dict_foreign_find().
rb#5599 approved by Vasil
The method JOIN_CACHE::init may fail (return 1) if some conditions on the
used join buffer are not satisfied. For example it fails if join_buffer_size
is greater than join_buffer_space_limit. The conditions should be checked
when running the EXPLAIN command for the query. That's why the method
JOIN_CACHE::init has to be called for EXPLAIN commands as well.
Part#1.
table_cond_selectivity() should discount the selectivity of the table's
conditions only when it counts that selectivity to begin with.
For non-ref-based access methods (ALL/range/index_merge/etc),
we start with sel=1.0 and hence do not need to discount any
selectivities.
- When the range optimizer cannot convert the lookup value into a [VAR]CHAR(n) column,
it should produce:
= "Impossible range" for equality
= "no range" for non-equalities.
Merge from mysql-5.6:
revno: 3257
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug11756966
timestamp: Thu 2011-07-14 09:32:01 +0200
message:
Bug#11756966 - 48958: STORED PROCEDURES CAN BE LEVERAGED TO BYPASS
DATABASE SECURITY
The problem was that CREATE PROCEDURE/FUNCTION could be used to
check the existence of databases for which the user had no
privileges and therefore should not be allowed to see.
The reason was that existence of a given database was checked
before privileges. So trying to create a stored routine in
a non-existent database would give a different error than trying
to create a stored routine in a restricted database.
This patch fixes the problem by changing the order of the checks
for CREATE PROCEDURE/FUNCTION so that privileges are checked first.
This means that trying to create a stored routine in a
non-existent database and in a restricted database both will
give ER_DBACCESS_DENIED_ERROR error.
Test case added to grant.test.
MDEV-6099 Bad results for DATE_ADD(.., INTERVAL 2000000000000000000.0 SECOND)
MDEV-6097 Inconsistent results for CAST(int,decimal,double AS DATETIME)
MDEV-6100 No warning on CAST(9000000 AS TIME)
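A hedged illustration of the MDEV-6100 case (the exact warning text and code are assumptions, not shown here):
  SELECT CAST(9000000 AS TIME);   -- now expected to raise a truncation warning instead of converting silently
  SHOW WARNINGS;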
We have to run the derived table prepare before the unique table check, to mark the derived table (in this case the unique table check can turn that table into a materialized one).
Bug #18167356: EXPLAIN W/ EXISTS(SELECT* UNION SELECT*)
WHERE ONE OF SELECT* IS DISTINCT FAILS.
the bugfix itself was not merged - MariaDB doesn't have this bug.
replication causing replication to fail.
In parallel replication, we run transactions from the master in parallel, but
force them to commit in the same order they did on the master. If we force T1
to commit before T2, but T2 holds eg. a row lock that is needed by T1, we get
a deadlock when T2 waits until T1 has committed.
Usually, we do not run T1 and T2 in parallel if there is a chance that they
can have conflicting locks like this, but there are certain edge cases where
it can occasionally happen (eg. MDEV-5914, MDEV-5941, MDEV-6020). The bug was
that this would cause replication to hang, eventually getting a lock timeout
and causing the slave to stop with error.
With this patch, InnoDB will report back to the upper layer whenever a
transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel
replication transactions, and T2 needs to commit later than T1, we can thus
detect the deadlock; we then kill T2, setting a flag that causes it to catch
the kill and convert it to a deadlock error; this error will then cause T2 to
roll back and release its locks (so that T1 can commit), and later T2 will be
re-tried and eventually also committed.
The kill happens asynchronously in a slave background thread; this is
necessary, as the reporting from InnoDB about lock waits happen deep inside
the locking code, at a point where it is not possible to directly call
THD::awake() due to mutexes held.
Deadlock is assumed to be (very) rarely occurring, so this patch tries to
minimise the performance impact on the normal case where no deadlocks occur,
rather than optimise the handling of the occasional deadlock.
Also fix transaction retry due to deadlock when it happens after a transaction
already signalled to later transactions that it started to commit. In this
case we need to undo this signalling (and later redo it when we commit again
during retry), so following transactions will not start too early.
Also add a missing thd->send_kill_message() that got triggered during testing
(this corrects an incorrect fix for MySQL Bug#58933).
- When the optimizer chose LooseScan, make_join_readinfo() should
use the index that was chosen for LooseScan, and should not try
to find a better (shortest) index.
Handle retry of event groups that span multiple relay log files.
- If retry reaches the end of one relay log file, move on to the next.
- Handle refcounting of relay log files, and avoid purging relay log
files until all event groups have completed that might have needed
them for transaction retry.
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and it has both
AGG(DISTINCT) and MIN()/MAX()functions. Then, result values
of MIN/MAX() is set improperly.
When query has AGG(DISTINCT) then end_select is set to
end_send_group. "end_send_group" keeps doing aggregation
until it sees a record from next group. And, then it will
send out the result row of that group.
Since query also has MIN()/MAX() and loose index scan is
used, values of MIN/MAX() are set as part of loose index
scan itself. Setting MIN()/MAX() values as part of loose
index scan overwrites values computed in end_send_group.
This caused invalid result.
For such queries to work, loose index scan should stop
performing MIN/MAX() aggregation and let end_send_group
do the same. But according to the current design, loose index
scan can produce only one row per group key. If we have both
MIN() and MAX() then it has to give two records out. This is
not possible as the interface has to use the common buffer
record[0] for both records at a time.
SOLUTIONS:
----------
For such queries to work we need a new interface for loose
index scan. Hence, do not choose loose_index_scan for such
cases. So a new rule SA7 is introduced to take care of the
same.
SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
MIN/MAX() functions then loose index scan access
method is not used."
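A hedged illustration of a query shape that rule SA7 now excludes from loose index scan (table, column and index names are made up):
  CREATE TABLE t1 (f1 INT, f2 INT, f3 INT, KEY k1 (f1, f2, f3));
  SELECT f1, COUNT(DISTINCT f2), MAX(f3) FROM t1 GROUP BY f1;   -- has both AGG(DISTINCT) and MAX(), so loose index scan is not chosen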
mysql-test/r/group_min_max.result:
Expected result.
mysql-test/t/group_min_max.test:
1. Test with various combination of AGG(DISTINCT) and
MIN(), MAX() functions.
2. Corrected the plan for old queries.
sql/opt_range.cc:
A new rule SA7 is introduced.
Implement that if the first retry fails, we can do another attempt.
Add testcases to test multi-retry that succeeds in second attempt, and
multi-retry that eventually fails due to exceeding slave_trans_retries.
This patch allows up to 64K pages for tables with DYNAMIC, COMPACT
and REDUNDANT row formats. Tables with the COMPRESSED row format still
allow only <= 16K page size. Note that a single row size must still be
<= 16K and the max key length is not affected.
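A hedged sketch of what this allows (innodb_page_size is a startup-only option; table names are illustrative):
  SHOW GLOBAL VARIABLES LIKE 'innodb_page_size';   -- e.g. 65536 on a server started with 64K pages
  CREATE TABLE t1 (a INT, b VARCHAR(2000)) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;   -- fine with 64K pages; COMPRESSED tables remain limited to <= 16K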
on select from I_S.INNODB_CHANGED_PAGES
Analysis: limit_lsn_range_from_condition() incorrectly parses
start_lsn and/or end_lsn conditions.
Fix from SergeyP. Added some test cases.
SHOW PROCESSLIST, SHOW BINLOGS
Fixing post-push test failure (MTR does not like giving
127.0.0.1 for localhost in case of an --embedded run; it thinks
it is an external IP address)
SHOW PROCESSLIST, SHOW BINLOGS
Problem: A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way
Thread 1: Dump thread ( Slave is reconnecting, so on
Master, a new dump thread is trying to kill
zombie dump threads. It acquired the zombie thread's
LOCK_thd_data and is about to acquire
mysys_var->current_mutex, which is LOCK_log.)
Thread 2: Application thread is executing show binlogs and
acquired LOCK_log and it is about to acquire
LOCK_index.
Thread 3: Application thread is executing Purge binary logs
and acquired LOCK_index and it is about to
acquire LOCK_thread_count.
Thread 4: Application thread is executing show processlist
and acquired LOCK_thread_count and it is
about to acquire zombie dump thread's
LOCK_thd_data.
Deadlock Cycle:
Thread 1 -> Thread 2 -> Thread 3-> Thread 4 ->Thread 1
The same above deadlock was observed even when thread 4 is
executing 'SELECT * FROM information_schema.processlist' command and
acquired LOCK_thread_count and it is about to acquire zombie
dump thread's LOCK_thd_data.
Analysis:
There are four locks involved in the deadlock. LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count, LOCK_index are global mutexes
where as LOCK_thd_data is local to a thread.
We can divide these four locks in two groups.
Group 1 consists of LOCK_log and LOCK_index and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of other two mutexes
LOCK_thread_count, LOCK_thd_data and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no specific predefined lock order defined
to follow in the MySQL system when it comes to locks across these
two groups. In the above problematic example,
there is no problem in the way we are acquiring the locks
if you see each thread individually.
But if you combine all 4 threads, they end up in a deadlock.
Fix:
Since everything seems to be fine in the way threads are taking locks,
in this patch we are changing the duration of the locks in Thread 4
to break the deadlock. i.e., before the patch, Thread 4
('show processlist' command) mysqld_list_processes()
function acquires LOCK_thread_count for the complete duration
of the function and it also acquires/releases
each thread's LOCK_thd_data.
LOCK_thread_count is used to protect addition and
deletion of threads in the global threads list. While show
processlist is looping through all the existing threads,
it will be a problem if a thread exits, but there is no problem
if a new thread is added to the system. Hence a new mutex is
introduced "LOCK_thd_remove" which will protect deletion
of a thread from global threads list. All threads which are
getting exited should acquire LOCK_thd_remove
followed by LOCK_thread_count. (It should take LOCK_thread_count
also because other places of the code still think that thread exit
is protected with LOCK_thread_count. In this fix, we are changing
only the 'show processlist' query logic.)
(Eg: unlink_thd logic will be protected with
LOCK_thd_remove).
Logic of mysqld_list_processes() (or fill_schema_processlist())
will now be protected with 'LOCK_thd_remove' instead of
'LOCK_thread_count'.
Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
Start implementing that an event group can be re-tried in parallel replication
if it fails with a temporary error (like deadlock).
Patch is very incomplete, just some very basic retry works.
Stuff still missing (not complete list):
- Handle moving to the next relay log file, if event group to be retried
spans multiple relay log files.
- Handle refcounting of relay log files, to ensure that we do not purge a
relay log file and then later attempt to re-execute events out of it.
- Handle description_event_for_exec - we need to save this somehow for the
possible retry - and use the correct one in case it differs between relay
logs.
- Do another retry attempt in case the first retry also fails.
- Limit the max number of retries.
- Lots of testing will be needed for the various edge cases.
ISSUE:
------
For a UNION of selects, the rows examined by the query will be the sum
of rows examined by the individual select operations and the rows
examined for the union operation. The value of the session-level
global counter that is used to count the rows examined by a
select statement should be accumulated and reset before it
is used for the next select statement. But we missed to
reset it. Because of this, the examined row count of a
select query is accounted more than once.
SOLUTION:
---------
In union reset the session level global counter used to
accumulate count of examined rows after its value is saved.
mysql-test/r/union.result:
Expected output of testcase added.
mysql-test/t/union.test:
Test to verify examined row count of Union operations.
sql/sql_union.cc:
Reset the value of thd->examined_row_count after
accumulating the value.
Problem:
If there is a predicate on a column referenced by MIN/MAX and
that predicate is not present in all the disjunctions on
keyparts earlier in the compound index, Loose Index Scan will
not return correct result.
Analysis:
When loose index scan is chosen, range optimizer currently
groups all the predicates that contain group parts separately
and minmax parts separately. It therefore applies all the
conditions on the group parts first to the fetched row.
Then in the call to next_max, it processes the conditions
which have min/max keypart.
For ex in the following query:
Select f1, max(f2) from t1 where (f1 = 10 and f2 = 13) or
(f1 = 3) group by f1;
Condition (f2 = 13) would be applied even for rows that
satisfy (f1 = 3) thereby giving wrong results.
Solution:
Do not choose loose_index_scan for such cases. So a new rule
WA2 is introduced to take care of the same.
WA2: "If there are predicates on C, these predicates must
be in conjunction with all predicates on all earlier keyparts
in I."
To do the same, the fix reuses the function get_constant_key_infix().
Since this function would fail for all multi-range conditions, it
is re-written to recognize when the sub-conditions are
equivalent across the disjuncts; it will now succeed in that case.
To achieve this, a new helper function called all_same()
is introduced.
The fix also moves the test of NGA3 up to the former only
caller, get_constant_key_infix().
mysql-test/r/group_min_max_innodb.result:
Added test result change for Bug#17909656
mysql-test/t/group_min_max_innodb.test:
Added test cases for Bug#17909656
sql/opt_range.cc:
Introduced Rule WA2 because of Bug#17909656
A typo led to not including the last list values (partition).
Also improved pruning to skip the last partition if not used.
rb#4762 approved by Aditya and Marko.
- Fixed bug where we were using the wrong checksum algorithm when using VARCHAR with fixed-length rows
- Ensure in myisampack that HA_OPTION_NULL_FIELDS is set for tables with null fields.
mysql-test/r/myisampack.result:
Updated results
mysql-test/t/myisampack.test:
Added more tests
storage/myisam/mi_open.c:
Use correct checksum algorithm when we have VARCHAR fields with fixed length records
storage/myisam/myisampack.c:
Ensure HA_OPTION_NULL_FIELDS is set for tables with null fields.
(This was not set by default for not compressed tables without checksums to keep MyISAM tables compatible with MySQL)
Problem: Uninstallation of semi sync plugin causes replication to
break.
Analysis: A semisync-enabled replication is a mutual agreement between
Master and Slave when the connection (I/O thread) is established.
Once the I/O thread is started and if semisync is enabled on both
master and slave, master appends special magic header to events
using semisync plugin functions and sends it to slave. And slave
expects that each event will have that special magic header format
and reads those bytes using semisync plugin functions.
When semisync replication is in use, if users execute
uninstallation of the plugin on the master, the slave gets confused while
interpreting that event's content because it expects the special
magic header at the beginning of the event. The slave SQL thread will
be stopped with a "Missing magic number in the header" error.
A similar problem will happen if uninstallation of the plugin happens
on the slave when semisync replication is in use. The master sends
the events with the magic header and the slave does not know about the
added magic header and thinks that it received a corrupted event.
Hence the slave SQL thread stops with a "Found corrupted event" error.
Fix: Uninstallation of semisync plugin will be blocked when semisync
replication is in use and will throw 'ER_UNKNOWN_ERROR' error.
To detect that semisync replication is in use, this patch uses
semisync status variable values.
> On Master, it checks for 'Rpl_semi_sync_master_status' to be OFF
before allowing the uninstallation of rpl_semi_sync_master plugin.
>> Rpl_semi_sync_master_status is OFF when
>>> there is no dump thread running
>>> there are no semisync slaves
> On Slave, it checks for 'Rpl_semi_sync_slave_status' to be OFF
before allowing the uninstallation of rpl_semi_sync_slave plugin.
>> Rpl_semi_sync_slave_status is OFF when
>>> there is no I/O thread running
>>> replication is asynchronous replication.
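A hedged sketch of the new behaviour on a master (the same applies on a slave with rpl_semi_sync_slave and Rpl_semi_sync_slave_status):
  SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status';   -- ON while semisync replication is active
  UNINSTALL PLUGIN rpl_semi_sync_master;                   -- now refused with ER_UNKNOWN_ERROR instead of breaking replication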
It is a triple bug with one test suite:
1. Incorrect outer table detection
2. Incorrect leaf table processing for multi-update (should be full, like for usual updates and inserts)
3. ON condition fix_fields() should be called for all tables of the query.
do_start_slave_sql() is called from maybe_exit().
We should not recurse when maybe_exit() is called for an error during do_start_slave_sql().
Also remove a meaningless (but safe) "goto err".
ARE PERMANENTLY SKIPPED IN 5.5/5.6).
The problem was that some result files were not updated,
so the tests were skipped.
The fix is to record updated result files.
BREAKS RBR
Analysis:
--------
A table created using a query of the format:
CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
breaks the Row Based Replication.
The query above creates a table having a field of datatype
'bigint' with a display width of 3000 which is beyond the
maximum acceptable value of 255.
In the RBR mode, a CREATE TABLE SELECT statement is
replicated as a combination of a CREATE TABLE statement
equivalent to the one returned by SHOW CREATE TABLE and
row events for the rows inserted. When this CREATE TABLE event
is executed on the slave, an error is reported:
Display width out of range for column 'a' (max = 255)
The following is the output of 'SHOW CREATE TABLE t1':
CREATE TABLE t1(`a` bigint(3000) DEFAULT NULL)
ENGINE=InnoDB DEFAULT CHARSET=latin1;
The problem is due to the combination of two facts:
1) The above CREATE TABLE SELECT statement uses the display
width of the result of DIV operation as the display width
of the column created without validating the width for out
of bound condition.
2) The DIV operation incorrectly returns the length of its first
argument as the display width of its result; thus allowing
creation of a table with an incorrect display width of 3000
for the field.
Fix:
----
This fix changes the DIV operation implementation to correctly
evaluate the display width of its result. We check if the estimated
width of DIV's result crosses the maximum width for an integer
value (21) and, if so, set it to this maximum value.
This patch also fixes maximum display width evaluation
for the DIV function when its first argument is in UCS2.
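Using the statement quoted above, the display width is now capped (a sketch; the exact resulting column definition is an assumption):
  CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
  SHOW CREATE TABLE t1;   -- `a` bigint(21) DEFAULT NULL, instead of bigint(3000)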
Bug#18396916 MAIN.OUTFILE_LOADDATA TEST FAILS ON ARM, AARCH64, PPC/PPC64
The recorded results for the failing tests were wrong.
They were introduced by the patch for
Bug#30946 mysqldump silently ignores --default-character-set when used with --tab
Correct results were returned for platforms where 'char' is implemented as unsigned.
This was reported as
Bug#46895 Test "outfile_loaddata" fails (reproducible)
Bug#11755168 46895: TEST "OUTFILE_LOADDATA" FAILS (REPRODUCIBLE)
The patch for that bug fixed only parts of the problem,
leaving the incorrect results in the .result file.
Solution: use 'uchar' for field_terminator and line_terminator on all platforms.
Also: remove some unnecessary casts, leaving the ones we actually need.
Replication caches the character sets used in a query, to be able to quickly
reuse them for the next query in the common case of them not having changed.
In parallel replication, this caching needs to be per-worker-thread. The
code was not modified to handle this correctly, so the caching in one worker
could cause another worker to run a query using the wrong character set,
causing replication corruption.
MDEV-5980: EITS: if condition is used for REF access, its selectivity is still in filtered%
MDEV-5985: EITS: selectivity estimates look illogical for join and non-key equalities
MDEV-6003: EITS: ref access, keypart2=const vs keypart2=expr - inconsistent filtered% value
- Made a number of fixes in table_cond_selectivity() so that it returns
correct selectivity estimates.
- Added comments in related code.
Better comments
Back-ported from the mysql 5.6 code line the patch with
the following comment:
Fix for Bug#11757108 CHANGE IN EXECUTION PLAN FOR COUNT_DISTINCT_GROUP_ON_KEY
CAUSES PEFORMANCE REGRESSION
The cause for the performance regression is that the access strategy for the
GROUP BY query is changed from using "index scan" in mysql-5.1 to use "loose
index scan" in mysql-5.5. The index used for group by is unique and thus each
"loose scan" group will only contain one record. Since loose scan needs to
re-position on each "loose scan" group this query will do a re-position for
each index entry. Compared to just reading the next index entry as a normal
index scan does, the use of loose scan for this query becomes more expensive.
The cause for selecting to use loose scan for this query is that in the current
code when the size of the "loose scan" group is one, the formula for
calculating the cost estimates becomes almost identical to the cost of using
normal index scan. Differences in use of integer versus floating point arithmetic
can cause one or the other access strategy to be selected.
The main issue with the formula for estimating the cost of using loose scan is
that it does not take into account that it is more costly to do a re-position
for each "loose scan" group compared to just reading the next index entry.
Both index scan and loose scan estimates the cpu cost as:
"number of entries needed too read/scan" * ROW_EVALUATE_COST
The results from testing with the query in this bug indicate that the real
cost of doing a re-position is four to eight times higher than just reading the
next index entry. Thus, the cpu cost estimate for loose scan should be increased.
To account for the extra work to re-position in the index we increase the
cost for loose index scan to include the cost of navigating the index.
This is modelled as a function of the height of the b-tree:
navigation cost= ceil(log(records in table)/log(indexes per block))
* ROWID_COMPARE_COST;
This will avoid loose index scan being used for indexes where the "loose scan"
group contains very few index entries.
A typo in include file caused the timeout counter to be 10x less than
expected. It was fixed in 5.5.20+ along with a bigger change; now also
fixed in 5.1-5.3.
MDEV-5985: EITS: selectivity estimates look illogical for join and non-key equalities
MDEV-6003: EITS: ref access, keypart2=const vs keypart2=expr - inconsistent filtered% value
- Made a number of fixes in table_cond_selectivity() so that it returns
correct selectivity estimates.
- Added comments in related code.
The problem is the async binlog checkpointing; this could on rare
occasions occur too late, causing SHOW BINLOG EVENTS to show the
wrong events and cause a .result file difference.
back-ported the patch for bug #13256831 from mysql-5.6 code line.
Here's the comment this patch was provided with:
Fixed bug#13256831 - ERROR 1032 (HY000): CAN'T FIND RECORD.
This bug only occurs if a user tries to update a base table using
an updatable view and this view was created as a join for which
the clause 'WITH CHECK OPTION' was specified.
The reason for the bug was that when such an update was
executed, row positions were not properly handled for tables
that were not updated but had constraints that had to be
checked due to the 'WITH CHECK OPTION' clause.
The reason for the bug was that when such update is executed
then for tables specified in the view definition and
also listed in the 'WITH CHECK OPTION' clause the positioning to
row being updated is not performed.
Both bugs are caused by the same problem: the function optimize_cond() should
update the value of *cond_equal rather than the value of join->cond_equal,
because it is called not only for the WHERE condition, but for the HAVING
condition as well.
WRITTEN WHILE ROWS REMAINS
Problem:
========
When truncate table fails while using transaction-based
engines, even though the operation errors out we still
continue and log it to the binlog. Because of this the master has
data, but the truncate will be written to the binary log, which
will cause inconsistency.
Analysis:
========
Truncate table can happen either through drop and create of
the table or by deleting rows. In the second case the existing
code is written in such a way that even if an error occurs
the truncate statement will always be binlogged, which is not
correct.
Binlogging of TRUNCATE TABLE statement should check whether
truncate is executed "transactionally or not". If the table
is transaction based we log the TRUNCATE TABLE only on
successful completion.
If the table is non-transactional, there is a possibility that on
error we could have partial changes done; hence in such cases
we do log in spite of errors, as some of the rows might have
been removed, so the statement has to be sent to the slave.
Fix:
===
Using the table handler, whether truncate table is being executed
in transaction-based mode or not is identified, and the statement
is binlogged accordingly.
mysql-test/suite/binlog/r/binlog_truncate_kill.result:
Added test case to test the fix for Bug#17942050.
mysql-test/suite/binlog/t/binlog_truncate_kill.test:
Added test case to test the fix for Bug#17942050.
sql/sql_truncate.cc:
Check if truncation is successful or not and return appropriate
return values so that binlogging can be done based on that.
sql/sql_truncate.h:
Added a new enum.
Add a testcase and backport this fix:
Bug#14338686: MYSQL IS GENERATING DIFFERENT AND SLOWER
(IN NEWER VERSIONS) EXECUTION PLAN
PROBLEM:
While checking for an index to sort for the order by clause
in this query
"SELECT datestamp FROM contractStatusHistory WHERE
contract_id = contracts.id ORDER BY datestamp asc limit 1;"
we do not calculate the number of rows to be examined correctly.
As a result we choose index 'idx_contractStatusHistory_datestamp'
defined on the 'datestamp' field, rather than choosing index
'contract_id'. And hence the lower performance.
ANALYSIS:
While checking if an index is present to give the records in
sorted order(datestamp), we consider the selectivity of the
'ref_key'(contract_id here) using 'table->quick_condition_rows'.
'ref_key' here can be an index from 'REF_ACCESS' or from 'RANGE'.
As this is a 'REF_ACCESS', 'table->quick_condition_rows' is not
set to the actual value, which is 2. Instead it is set to the number
of tuples present in the table, indicating that every row that
is selected would be satisfying the condition present in the query.
Hence, the selectivity becomes 1 even when we choose the index
on the order by column instead of the join_condition.
But, in reality, as only 2 rows satisfy the condition, we need to
examine half of the entire data set to get one tuple when we
choose index on the order by column.
Had we chosen the 'REF_ACCESS' we would have examined only 2 tuples.
Hence the delay in executing the query specified.
FIX:
While calculating the selectivity of the ref_key:
For REF_ACCESS consider quick_rows[ref_key] if range
optimizer has an estimate for this key. Else consider
'rec_per_key' statistic.
For RANGE ACCESS consider 'table->quick_condition_rows'.
- Make JOIN::const_key_parts include keyparts for which
the WHERE clause has an equality in form
"t.key_part=reference_outside_this_select"
- This allows to avoid filesort'ing in some cases (and also
avoid a difficult choice between using filesort or using an index)
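A hedged example of the kind of query this helps (all names are made up): with a composite index on (kp1, kp2), the correlated equality t2.kp1 = t1.a makes kp1 constant inside each execution of the subquery, so the ORDER BY on kp2 can be resolved by the index instead of filesort:
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (kp1 INT, kp2 INT, payload VARCHAR(100), KEY k1 (kp1, kp2));
  SELECT t1.a,
         (SELECT t2.kp2 FROM t2 WHERE t2.kp1 = t1.a ORDER BY t2.kp2 LIMIT 1) AS first_kp2
  FROM t1;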
The problem was that the view substitutes its fields on prepare and reverts the change after execution. After prepare, during optimization, the exists2in conversion substituted an argument of '=' with the constant '1', but then one of the arguments of '=' was reverted to the view field reference. This led to an incorrect WHERE condition on the second execution.
To fix the problem we replace the whole '=' with '1' permanently.
MDEV-5984: EITS: Incorrect filtered% value for single-table select with range access
- Fix calculate_cond_selectivity_for_table() to work correctly with range accesses
over multi-component keys:
= First, take selectivity of all possible range scans into account. Remember which
fields were used by the range scans.
= Then, calculate selectivity produced by sargable predicates on fields. If a
field was used in a possible range access, assume its selectivity is already
taken into account.
- Fix table_cond_selectivity(): when quick select is used, selectivity of
COND(table) is taken into account in matching_candidates_in_table(). In
table_cond_selectivity() we should not apply it for the second time.
MDEV-5971 Asymmetry between CAST(DATE'2001-00-00') to INT and TO CHAR in prepared statements
Consistently set the maybe_null flag; even a not-NULL temporal literal may become NULL
in the restrictive sql_mode.
This will help the following replication scenario:
MariaDB 10.0 master (statement replication) -> MariaDB 10.0 slave (row based replication) -> MySQL or MariaDB 5.x slave
mysql-test/r/mysqld--help.result:
Updated help text
mysql-test/suite/rpl/r/create_or_replace_mix.result:
Added more tests
mysql-test/suite/rpl/r/create_or_replace_row.result:
Added more tests
mysql-test/suite/rpl/r/create_or_replace_statement.result:
Added more tests
mysql-test/suite/rpl/t/create_or_replace.inc:
Added more tests
sql/handler.h:
Added org_options so that we can detect what came from the query and what was possibly added later.
sql/sql_insert.cc:
Only write CREATE OR REPLACE if it was originally specified or if we delete a conflicting table as part of the create
sql/sql_parse.cc:
Remember original create options
sql/sql_table.cc:
Only write CREATE OR REPLACE if it was originally specified or if we delete a conflicting table as part of the create
sql/sys_vars.cc:
Updated help text
(Inadequate background LRU flushing for write workloads with InnoDB compression).
If InnoDB compression is used and the workload has writes, the
following situation is possible. The LRU flusher issues an LRU flush
request for an instance. buf_do_LRU_batch decides to perform
unzip_LRU eviction and this eviction might fully satisfy the
request. Then buf_flush_LRU_tail checks the number of flushed pages in
the last iteration, finds it to be zero, and wrongly decides not to
flush that instance anymore.
Fixed by maintaining unzip_LRU eviction counter in struct
flush_counter_t variables, and checking it in buf_flush_LRU_tail when
deciding whether to stop flushing the current instance.
Added test cases for new configuration files to get mysql-test-run suite sys_vars
to pass. Fix some small errors.
Backported multi_update_check_table_access() from 5.6
The code is slightly different in MariaDB, because we instantiate fields in merged tables earlier.
mysql-test/mysql-test-run.pl:
Fixed comment
mysql-test/r/view_grant.result:
Merged test case from 5.6
mysql-test/t/view_grant.test:
Merged test case from 5.6
sql/sql_parse.cc:
Reset orig_want_privilege as this will be rechecked later.
If not, we will have a problem in mysql_multi_update_prepare() for the call to mysql_handle_derived()
sql/sql_update.cc:
Backport multi_update_check_table_access() from 5.6
Fixed use-copy option to mysql-test-run
mysql-test/mysql-test-run.pl:
Fixed use-copy and added comment
sql/log.cc:
Make copy_up_file_and_fill() safe for disk full
mysql-test/r/create_or_replace.result:
More tests for create or replace
mysql-test/t/create_or_replace.test:
More tests for create or replace
sql/log.cc:
Don't use binlog_hton if the binlog is not enabled
sql/sql_base.cc:
We have to call restart_trans_for_tables also if tables were not locked with LOCK TABLES.
If not, we will get a crash in TokuDB
sql/sql_insert.cc:
Don't call binlog_reset_cache() if we don't have binary log open
sql/sql_table.cc:
Don't log to binary log if not open
Better test of whether we were using create or replace ... select
storage/tokudb/mysql-test/tokudb_mariadb/r/create_or_replace.result:
More tests for create or replace
storage/tokudb/mysql-test/tokudb_mariadb/t/create_or_replace.test:
More tests for create or replace
Copied relevant test cases and code from the MySQL 5.6 tree
Testing of my_use_symdir moved to engines.
mysql-test/r/partition_windows.result:
Updated result file
mysql-test/suite/archive/archive_no_symlink-master.opt:
Testing of symlinks with archive
mysql-test/suite/archive/archive_no_symlink.result:
Testing of symlinks with archive
mysql-test/suite/archive/archive_no_symlink.test:
Testing of symlinks with archive
mysql-test/suite/archive/archive_symlink.result:
Testing of symlinks with archive
mysql-test/suite/archive/archive_symlink.test:
Testing of symlinks with archive
sql/log_event.cc:
Updated comment
sql/partition_info.cc:
Don't test my_use_symdir here
sql/sql_parse.cc:
Updated comment
sql/sql_table.cc:
Don't test my_use_symdir here
sql/table.cc:
Added more DBUG_PRINT
storage/archive/ha_archive.cc:
Give warnings for index_file_name and if we can't use data directory
storage/myisam/ha_myisam.cc:
Give warnings if we can't use data directory or index directory
Bug #3329 Incomplete lower_case_table_names=2 implementation
The problem was that check_db_name() converted database names to lower case also in case of lower_case_table_names=2.
Fixed by removing the conversion in check_db_name for lower_case_table_names = 2 and instead converting db name to
lower case at the same places as table names are converted.
Fixed bug that SHOW CREATE DATABASE FOO showed information for database 'foo'.
I also removed some checks of lower_case_table_names when it was enough to use table_alias_charset.
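A hedged example of the SHOW CREATE DATABASE fix, on a server running with lower_case_table_names=2 (the database name is illustrative):
  CREATE DATABASE MixedCase;
  SHOW CREATE DATABASE MixedCase;   -- now reports MixedCase instead of the lowercased name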
mysql-test/mysql-test-run.pl:
Added --use-copy argument to force mysql-test-run to copy files instead of doing symlinks. This is needed when you run
with test directory on another file system
mysql-test/r/lowercase_table.result:
Updated results
mysql-test/r/lowercase_table2.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_innodb.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_memory.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_myisam.result:
Updated results
mysql-test/t/lowercase_table.test:
Added tests with mixed case databases
mysql-test/t/lowercase_table2.test:
Added tests with mixed case databases
sql/log.cc:
Don't check lower_case_table_names when we can use table_alias_charset
sql/sql_base.cc:
Don't check lower_case_table_names when we can use table_alias_charset
sql/sql_db.cc:
Use cmp_db_names() for checking if current database changed.
mysql_rm_db() now converts db to lower case if lower_case_table_names was used.
Changed database options cache to use table_alias_charset. This fixed a bug where SHOW CREATE DATABASE showed wrong information.
sql/sql_parse.cc:
Change also db name to lower case when file names are changed.
Don't need to store a copy of the database name anymore when lower_case_table_names == 2, as check_db_name() doesn't convert in this case.
Updated arguments to mysqld_show_create_db().
When adding table to TABLE_LIST also convert db name to lower case if needed (same way as we do with table names).
sql/sql_show.cc:
mysqld_show_create_db() now also takes original name as argument for output to user.
sql/sql_show.h:
Updated prototype for mysqld_show_create_db()
sql/sql_table.cc:
In mysql_rename_table(), do same conversions to database name as we do for the file name
- Histogram::find_bucket() should not walk off the end of the value range.
- Address review feedback in Histogram::point_selectivity(): different handling
for zero-width buckets, and explanations.
The reason was that a couple of variables that hold the number of rows used to calculate buffers were uint and caused an overflow.
Fixed by changing variables that could hold the number of rows from uint to ulong and also adding a cast for this test.
include/heap.h:
Reorder to get better alignment. Changed variables that could hold number of rows from uint to ulong
mysql-test/suite/heap/heap.result:
Added test case
mysql-test/suite/heap/heap.test:
Added test case
mysql-test/suite/plugins/t/server_audit.test:
Added sleep as we want to have disconnect logged before we try a new connect
storage/heap/ha_heap.cc:
Changed variables that could hold number of rows from uint to ulong
Limit number of rows to 4G (as most of the variables that holds rows are ulong anyway)
reset records_changed when key_stat_version is changed to not cause increments for every row changed
storage/heap/ha_heap.h:
changed records_changed to ulong as this can get big
storage/heap/hp_create.c:
Changed variables that could hold number of rows from uint to ulong
Added cast (fixed the original bug)
storage/heap/hp_delete.c:
Changed variables that could hold number of rows from uint to ulong
storage/heap/hp_open.c:
Removed not needed cast
storage/heap/hp_write.c:
Changed variables that could hold number of rows from uint to ulong
support-files/compiler_warnings.supp:
Removed extra : from suppression
- Fix Histogram::point_selectivity() to work in the case where the
passed value_pos=0 (or 1) and the first (or the last) bucket in the
histogram has zero value-range (i.e. one value).
[Attempt #2]
- Use a new selectivity calculation formula in Histogram::point_selectivity.
The formula is different from the old one because it was developed from scratch;
it doesn't have any possible division-by-zero problems.