The patch lifts the limitation of the current implementation
of ALTER TABLE that did not allow building unique/primary
indexes by sort for the MyISAM and Aria engines.
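For example, an ALTER TABLE of the following shape on an Aria (or MyISAM)
table can now build the unique and primary indexes by sort (table and
column names are illustrative):

  CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL) ENGINE=Aria;
  -- the primary key and the unique index may now be built by sorting
  -- the rows rather than by inserting keys one by one
  ALTER TABLE t1 ADD PRIMARY KEY (a), ADD UNIQUE KEY (b);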
- Register counters directly in the array passed to maria_declare_plugin. As
a consequence, FLUSH TABLES will reset the counters.
- Update test results accordingly.
Analysis:
The following call stack shows that it is possible to set Item_cache::value_cached and the relevant value
without setting Item_cache::example.
#0 Item_cache_temporal::store_packed at item.cc:8395
#1 get_datetime_value at item_cmpfunc.cc:915
#2 resolve_const_item at item.cc:7987
#3 propagate_cond_constants at sql_select.cc:12264
#4 propagate_cond_constants at sql_select.cc:12227
#5 optimize_cond at sql_select.cc:13026
#6 JOIN::optimize at sql_select.cc:1016
#7 st_select_lex::optimize_unflattened_subqueries at sql_lex.cc:3161
#8 JOIN::optimize_unflattened_subqueries at opt_subselect.cc:4880
#9 JOIN::optimize at sql_select.cc:1554
The fix is to set Item_cache_temporal::example even when the value is
set directly by Item_cache_temporal::store_packed. This makes the
Item_cache_temporal object consistent.
The have_openssl variable was ON even when OpenSSL was not used (but YaSSL was).
Fix that, so that have_openssl really corresponds to OpenSSL.
Rename not_openssl.inc to not_ssl.inc and fix the test accordingly.
With MDEV-532, the binlog_checkpoint event is logged asynchronously
from a binlog background thread. This causes sporadic failures in
some test cases whose output depends on the order of events in the
binlog.
Fix using an include file that waits until the binlog checkpoint
event has been logged before proceeding with the test case.
backport improved bootstrap error handling from 5.6
Was:
revno: 3768.1.1
committer: Christopher Powers <chris.powers@oracle.com>
timestamp: Wed 2012-05-02 22:16:40 -0500
message:
Bug#11766342 INITIAL DB CREATION FAILS ON WINDOWS WITH AN ASSERT IN SQL_ERROR.CC
Improved bootstrap error handling:
- Detect and report file i/o errors
- Report query size errors with nearest query text
This was failing not only for P_S, but for any engine that had the
HA_PRIMARY_KEY_REQUIRED_FOR_DELETE flag set (in the tree, only P_S and FEDERATED).
Because of this flag, read_set and write_set were (possibly) changed
on update. But later the code modified these bitmaps and restored them to the default
state, losing the HA_PRIMARY_KEY_REQUIRED_FOR_DELETE-related changes.
sql/handler.cc:
Small optimization:
don't change the *write* set only because all columns have to be *read*.
revno: 3383
revision-id: georgi.kodinov@oracle.com-20110818083108-qa3h3ufqu4zne80a
committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
timestamp: Thu 2011-08-18 11:31:08 +0300
message:
Bug #11766001: 59026: ALLOW MULTIPLE --PLUGIN-LOAD OPTIONS
Implemented support for a new command line option:
--plugin-load-add=<comma-separated-name-equals-value-list>
This option takes the same type of arguments that --plugin-load does
and complements --plugin-load (that continues to operate as before) by
appending its argument to the list specified by --plugin-load.
So --plugin-load can be considered a composite option consisting of
resetting the plugin load list and then calling --plugin-load-add to process
the argument.
Note that the order in which you specify --plugin-load and --plugin-load-add
is important: "--plugin-load=x --plugin-load-add=y" will be equivalent to
"--plugin-load=x,y", whereas "--plugin-load-add=y --plugin-load=x" will be
equivalent to "--plugin-load=x".
Incompatible change: the --help --verbose command will no longer print the
--plugin-load variable's value (as it doesn't have one). Otherwise both --plugin-load
and --plugin-load-add are mentioned in it.
Make the commit checkpoint inside InnoDB be asynchronous.
Implement a background thread in binlog to do the writing and flushing of
binlog checkpoint events to disk.
If a query referenced some system statistical tables, but not all of them,
then executing an ANALYZE command simultaneously with this query could
lead to a deadlock.
The fix prohibited reading statistics from system statistical tables
for such queries.
Removed the function unlock_tables_n_open_system_tables_for_write()
as not used anymore.
Performed some minor refactoring of the code in sql_statistics.cc.
engine-independent statistics.
If a table was created with the InnoDB engine, then the execution of the
ANALYZE command over this table blocked any INSERT/DELETE/UPDATE
of the table.
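An illustrative sketch of the old blocking behaviour (table and column
names are placeholders):

  -- connection 1: collect engine-independent statistics on an InnoDB table
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  -- connection 2: before the fix, this statement was blocked until
  -- the ANALYZE above had finished
  UPDATE t1 SET a = a + 1;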
engine-independent statistics.
When the primary key was dropped or changed, statistics on secondary
indexes for the prefixes that included components of the primary
key were not removed from the table mysql.index_stats.
Also fixed: in some cases, when a column was changed, statistics
on the indexes that included this column were not removed from the
table mysql.index_stats.
Also disabled the test mdev-504 for --ps-protocol.
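For the mysql.index_stats cleanup described above, an illustrative check
(table and index names are placeholders) could be:

  -- t1 has PRIMARY KEY (pk) and KEY k1 (a); statistics were collected earlier
  ALTER TABLE t1 DROP PRIMARY KEY;
  -- rows for the prefixes of k1 that extended into the old primary key
  -- must no longer be present
  SELECT index_name, prefix_arity FROM mysql.index_stats
  WHERE db_name = DATABASE() AND table_name = 't1';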
Analysis:
In the beginning of JOIN::cleanup there is code that is supposed to
free all filesort buffers. The code assumes that the table being sorted
is the first non-constant table. To get this table it calls:
first_top_level_tab(this, WITHOUT_CONST_TABLES)
However, first_top_level_tab() returned the wrong table: the first
one in the plan, instead of the first non-constant table. There is no other
place outside filesort() where sort buffers may be freed. As a result, the
sort buffer was not freed, and there was a memory leak.
Solution:
Change first_top_level_tab() to test for WITH_CONST_TABLES instead of
WITHOUT_CONST_TABLES.
mysql-test/r/create.result:
Updated test results
mysql-test/t/create.test:
Updated test
sql/sql_base.cc:
Use push_internal_handler/pop_internal_handler to avoid errors & warnings instead of clear_error.
Give a warning instead of an error for CREATE TABLE IF NOT EXISTS when the table exists.
sql/sql_parse.cc:
Check if we failed because the table exists (can only happen from CREATE)
sql/sql_table.cc:
Check if we failed because the table exists (can only happen from CREATE)
Analysis:
The reason for the suboptimal plan when querying INFORMATION_SCHEMA (IS)
tables through a view was that the view columns that participate in an
equality are wrapped in an Item_direct_view_ref and were not recognized
as direct column references.
Solution:
Use the original Item_field objects via the real_item() method.
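A sketch of the affected query shape, assuming a simple view over an
INFORMATION_SCHEMA table (view and schema names are illustrative):

  CREATE VIEW v1 AS
    SELECT table_schema, table_name FROM information_schema.tables;
  -- with the fix, the equality below is again seen as a direct column
  -- reference, so the I_S lookup optimization can be applied
  SELECT * FROM v1 WHERE v1.table_schema = 'test';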
mysql-test/r/create.result:
Added test case to show that CREATE TABLE also does not wait if the table exists.
mysql-test/t/create.test:
Added test case to show that CREATE TABLE also does not wait if the table exists.
sql/sql_base.cc:
Also clear warnings from acquire_locks if we retry.
- Added an option to check_if_table_exists() to quickly check if the table exists (either as a SHARE or an .FRM file)
- Extended lock_table_names() to not wait for metadata locks if CREATE IF NOT EXISTS is used (see the example below)
mysql-test/r/create.result:
New test case
mysql-test/t/create.test:
New test case
sql/sql_base.cc:
Added an option to check_if_table_exists() to quickly check if the table exists (either as a SHARE or an .FRM file)
Extended lock_table_names() to not wait for metadata locks if CREATE IF NOT EXISTS is used.
sql/sql_base.h:
Updated prototype
sql/sql_db.cc:
Added extra argument to call to check_if_table_exists()
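For the CREATE IF NOT EXISTS change above, a sketch of the intended
behaviour, assuming another connection holds a conflicting lock on the
table (statements are illustrative):

  -- connection 1
  LOCK TABLES t1 WRITE;
  -- connection 2: t1 already exists, so the statement no longer waits
  -- for the metadata lock; it returns at once with a "table exists" warning
  CREATE TABLE IF NOT EXISTS t1 (a INT);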
1. The PERSISTENT FOR clause of the ANALYZE command overrides
the setting of the system variable use_stat_tables:
with this clause ANALYZE unconditionally collects persistent
statistics.
2. ANALYZE collects persistent statistics only for tables of
the USER category. So it never collects persistent statistics
for system tables.
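For example (table, column and index names are illustrative):

  -- collects persistent statistics regardless of the current value
  -- of use_stat_tables
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  -- or restricted to particular columns and indexes
  ANALYZE TABLE t1 PERSISTENT FOR COLUMNS (a, b) INDEXES (idx1);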
Renamed the system statistical tables:
table_stat -> table_stats
column_stat -> column_stats
index_stat -> index_stats
to be in line with the names of the InnoDB statistical tables
from MySQL 5.6: innodb_table_stats and innodb_index_stats.
When inserting a record with INSERT ... ON DUPLICATE KEY UPDATE, the server calls
the ha_index_read_idx_map handler function to look for a record
that violates unique key constraints. The third parameter of this call
should mark only the base components of the index in which the server
searches for the record. Any hidden components of the primary key
must be left unmarked.
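A statement shape of this kind exercises the affected path, assuming an
engine with extended secondary keys such as InnoDB (names are placeholders);
the lookup in the unique index on u must mark only its base component, not
the hidden primary-key part:

  CREATE TABLE t1 (id INT PRIMARY KEY, u INT, v INT, UNIQUE KEY (u)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 10, 0);
  -- the duplicate on u is looked up via ha_index_read_idx_map
  INSERT INTO t1 (id, u, v) VALUES (2, 10, 1)
    ON DUPLICATE KEY UPDATE v = VALUES(v);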
The problem was that debug binaries try to print the item in order to assign a human-readable name to it.
But the subquery item had already been freed (join_free/cleanup with full cleanup), so the Item_field referred to a temporary
table whose memory had already been freed.
If the settings of the system variables do not allow using the join buffer
for a join query with GROUP BY <f1,...> / ORDER BY <f1,...>, then
filesort is not needed if the first joined table is scanned in
an order compatible with the order specified by the list <f1,...>.
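An illustrative case (names are placeholders), assuming t1 has an index
on (a) that the optimizer uses to scan it as the first table:

  -- with the join buffer disallowed by the relevant settings, the rows
  -- of t1 already arrive in an order compatible with the GROUP BY list,
  -- so no filesort is needed
  SELECT t1.a, COUNT(*)
  FROM t1 JOIN t2 ON t2.a = t1.a
  GROUP BY t1.a;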
The invalid implementation of the method Field_bit::cmp_max could
trigger a valgrind complaint or lead to incorrect statistical
data when collecting engine-independent statistics on BIT fields.
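The affected path can be triggered by collecting engine-independent
statistics on a BIT column, for example (names are placeholders):

  CREATE TABLE t1 (b BIT(7));
  INSERT INTO t1 VALUES (b'1010101'), (b'0000111');
  -- collects engine-independent statistics for column b,
  -- exercising Field_bit::cmp_max
  ANALYZE TABLE t1 PERSISTENT FOR ALL;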