Fix based on review by Tomas: conform to a bug we haven't fixed yet.
sql/ha_ndbcluster.cc:
Return -1 on error in ::records(). This produces a compiler warning, is a bug, and is evil;
however, until we go and really fix the bug properly, it's best to conform.
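For illustration only (not the server's handler code): a minimal sketch, assuming an unsigned 64-bit row-count type similar to ha_rows, of why returning -1 from an unsigned-returning records() draws a compiler warning and silently wraps to a huge value instead of signalling an error. All names here are hypothetical.

```cpp
#include <cstdint>
#include <cstdio>

typedef std::uint64_t my_row_count;   // stand-in for an unsigned row-count type

// Returning -1 from a function with an unsigned return type is typically
// flagged by compilers and wraps to the largest representable value,
// so callers cannot tell an error from a gigantic table.
my_row_count broken_records(bool error) {
  if (error)
    return -1;   // becomes 18446744073709551615, not an error marker
  return 42;
}

int main() {
  std::printf("%llu\n", (unsigned long long) broken_records(true));
  return 0;
}
```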
After the recent handler changes, fast count(*) for cluster was broken.
Since we maintain an exact row count for ndb, we can easily use it for an optimisation.
With this patch and use_exact_count DISABLED, we use the fast way
of getting count(*) but do not use the exact count for the optimiser.
With this patch and use_exact_count ENABLED, we use the fast way of
getting count(*) and use the exact count for the optimiser.
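A minimal sketch of the idea, with purely illustrative names (FakeClusterHandler, stats_records) rather than the real ha_ndbcluster code: the engine keeps an exact row count, records() serves COUNT(*) from it cheaply, and a use_exact_count-style switch decides whether the optimizer may also treat that number as exact.

```cpp
#include <cstdint>
#include <iostream>

struct FakeClusterHandler {
  std::uint64_t exact_rows = 0;       // maintained on every insert/delete
  bool use_exact_count = false;       // mirrors the server option conceptually

  std::uint64_t records() const {     // fast COUNT(*): no table scan needed
    return exact_rows;
  }
  // What the optimizer consumes for join planning: exact only when asked for.
  std::uint64_t stats_records() const {
    return use_exact_count ? exact_rows : 10000;  // 10000 = rough default estimate
  }
};

int main() {
  FakeClusterHandler h;
  h.exact_rows = 123456;
  std::cout << "COUNT(*): " << h.records() << "\n";               // always exact and fast
  std::cout << "optimizer estimate: " << h.stats_records() << "\n";
  h.use_exact_count = true;
  std::cout << "optimizer estimate: " << h.stats_records() << "\n";
  return 0;
}
```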
sql/ha_ndbcluster.cc:
Implement handler::records() and set appropriate handler flag.
sql/ha_ndbcluster.h:
We implement handler::records() for fast count(*).
'SELECT DISTINCT a,b FROM t1' should not use a temp table if there is a unique
index (or primary key) on a.
There are a number of other similar cases that can be computed without a
temp table: multi-part unique indexes, primary keys, or using GROUP BY
instead of DISTINCT.
When a GROUP BY/DISTINCT clause contains all key parts of a unique
index, the fields of the clause are guaranteed to be unique, so we can
optimize away the GROUP BY/DISTINCT altogether.
This optimization has two effects:
* there is no need to create a temporary table to compute the
GROUP/DISTINCT operation (or the temporary table will be smaller if only GROUP
is removed and DISTINCT stays, or if DISTINCT is removed and GROUP BY stays)
* the statement in effect becomes updatable in Connector/Java,
because the result set columns are direct references to the primary key of
the table (instead of the temporary table that it currently references).
Implemented a check that optimizes away GROUP BY/DISTINCT for queries like
the above; see the sketch below.
Currently it works only for a single non-constant table in the FROM clause.
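A hypothetical sketch of the check described above (not the actual sql_select.cc code): if the set of grouped/distinct columns covers every key part of some unique index, each group can contain at most one row, so the GROUP BY/DISTINCT can be dropped.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct UniqueIndex { std::vector<std::string> key_parts; };

// True when some unique index has all of its key parts among the grouped columns.
bool group_by_is_redundant(const std::set<std::string>& grouped_cols,
                           const std::vector<UniqueIndex>& unique_indexes) {
  for (const UniqueIndex& idx : unique_indexes) {
    bool all_covered = true;
    for (const std::string& part : idx.key_parts)
      if (!grouped_cols.count(part)) { all_covered = false; break; }
    if (all_covered)
      return true;   // values are already unique; no temp table needed
  }
  return false;
}

int main() {
  std::vector<UniqueIndex> idx = {{{"a"}}};                      // UNIQUE(a)
  std::cout << group_by_is_redundant({"a", "b"}, idx) << "\n";   // 1: DISTINCT a,b is redundant
  std::cout << group_by_is_redundant({"b"}, idx) << "\n";        // 0: still needs a temp table
  return 0;
}
```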
mysql-test/r/distinct.result:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- test case
mysql-test/t/distinct.test:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- test case
sql/sql_select.cc:
Bug #16458: Simple SELECT FOR UPDATE causes "Result Set not updatable" error
- disable GROUP BY if it contains all key parts of a unique index.
Bug #19852 Restoring backup made from cluster with full data memory fails
- make sure to allocate just enough pages in the fragments by using the actual
row count from the backup, to avoid over-allocation of pages to fragments and
thus avoid the bug (see the sizing sketch below)
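A rough sizing sketch of the idea, under assumed page/row sizes and hypothetical names (not the Dbtup allocation code): derive the number of pages to pre-allocate for a fragment from the backed-up row count instead of a fixed minimum, so restore does not over-allocate DataMemory.

```cpp
#include <cstdint>
#include <iostream>

// Pages needed for min_rows rows of row_size_bytes each, rounded up.
std::uint64_t pages_for_fragment(std::uint64_t min_rows,
                                 std::uint64_t row_size_bytes,
                                 std::uint64_t page_size_bytes = 32768) {
  std::uint64_t rows_per_page = page_size_bytes / row_size_bytes;
  if (rows_per_page == 0) rows_per_page = 1;
  return (min_rows + rows_per_page - 1) / rows_per_page;
}

int main() {
  // Restoring a fragment that actually held 10,000 rows of roughly 200 bytes:
  std::cout << pages_for_fragment(10000, 200) << " pages\n";
  return 0;
}
```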
ndb/include/kernel/GlobalSignalNumbers.h:
Bug #19852 Restoring backup made from cluster with full data memory fails
- distribute fragment complete to all participants to update row count
ndb/include/kernel/signaldata/BackupContinueB.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- time slice writing of fragment info to ctl file
ndb/include/kernel/signaldata/BackupImpl.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
- new signal fragment complete to all participants
ndb/include/kernel/signaldata/BackupSignalData.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/include/kernel/signaldata/DictTabInfo.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add min and max rows to dict tab info
ndb/include/kernel/signaldata/LqhFrag.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to add frag req
ndb/include/kernel/signaldata/TupFrag.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to add frag req
ndb/include/ndbapi/NdbDictionary.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added get/set of min max rows
ndb/src/common/debugger/signaldata/BackupImpl.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/BackupSignalData.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bit on bytes and records
ndb/src/common/debugger/signaldata/DictTabInfo.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to dict tab info
ndb/src/common/debugger/signaldata/LqhFrag.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/backup/Backup.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/Backup.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupFormat.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new section in backup with per fragment info in ctl file
- 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupInit.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- new signal fragment complete to all participants
ndb/src/kernel/blocks/dbdict/Dbdict.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added max and min rows to dict table object
ndb/src/kernel/blocks/dbdict/Dbdict.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added max and min rows to dict table object
ndb/src/kernel/blocks/dblqh/Dblqh.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dblqh/DblqhMain.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/Dbtup.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- added min and max rows to frag req
- move memory allocation to fragment to after adding of attributes to get correct headsize
- allocate pages to fragments according to min rows setting
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- grow page allocation starting from 2 irrespective of first page allocation
ndb/src/mgmsrv/MgmtSrvr.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bits on bytes and records
ndb/src/mgmsrv/MgmtSrvr.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- 32 -> 64 bits on bytes and records
ndb/src/ndbapi/NdbDictionary.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/src/ndbapi/NdbDictionaryImpl.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- min and max rows in dict
ndb/tools/restore/Restore.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add retrieval of fragment info
ndb/tools/restore/Restore.hpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- add retrieval of fragment info
ndb/tools/restore/consumer_restore.cpp:
Bug #19852 Restoring backup made from cluster with full data memory fails
- set min rows in restore to the actual row count (this is the actual bug fix)
sql/ha_ndbcluster.cc:
Bug #19852 Restoring backup made from cluster with full data memory fails
- set min and max rows according to sql definition
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
We need to wait for the release of a global read lock, if one
is set, before every operation that requires a write lock.
But we must not wait if we have locked tables with LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
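A condensed sketch of the broadcast_refresh() idea using standard C++ primitives instead of the server's mysys wrappers (the mutex name and overall structure are assumptions, not the actual server code): every place that used to signal only the refresh condition now wakes global-read-lock waiters as well, so a thread blocked waiting for write locks to be released cannot miss a wake-up.

```cpp
#include <condition_variable>
#include <mutex>

std::mutex state_mutex;                          // stands in for the server's table-state mutex
std::condition_variable COND_refresh;            // "the table cache changed"
std::condition_variable COND_global_read_lock;   // "a write lock was released"

// Replaces bare COND_refresh notifications: both waiter groups get woken.
void broadcast_refresh() {
  COND_refresh.notify_all();
  COND_global_read_lock.notify_all();
}

int main() {
  std::lock_guard<std::mutex> guard(state_mutex);
  broadcast_refresh();   // callable wherever COND_refresh used to be signalled
  return 0;
}
```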
mysql-test/r/lock_multi.result:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added test results.
mysql-test/t/lock_multi.test:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added tests for possible deadlocks that did not occur
with the stress test suite.
mysys/thr_lock.c:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added a protection against an infinite loop that occurs
with the test case for Bug #20662.
sql/lock.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
Added the definition of a new function that signals
COND_global_read_lock whenever COND_refresh is signalled.
sql/mysql_priv.h:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Added a declaration for a new function that signals
COND_global_read_lock whenever COND_refresh is signalled.
sql/sql_base.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
sql/sql_handler.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
sql/sql_insert.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Removed global read lock handling from inside of
INSERT DELAYED. It is handled on a higher level now.
sql/sql_parse.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Wait for the release of a global read lock if one is set
before every operation that requires a write lock.
But don't wait if locked tables exist already.
sql/sql_table.cc:
Bug#16986 - Deadlock condition with MyISAM tables
Addendum fixes after changing the condition variable
for the global read lock.
Removed global read lock handling from inside of
CREATE TABLE. It is handled on a higher level now.
Signal COND_global_read_lock whenever COND_refresh
is signalled by using the new function broadcast_refresh().
into mysql.com:/opt/local/work/mysql-5.1-runtime
mysql-test/r/sp-prelocking.result:
Auto merged
mysql-test/r/sp.result:
Auto merged
mysql-test/t/sp.test:
Auto merged
sql/sp_head.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Auto merged
storage/ndb/src/ndbapi/ndberror.c:
Auto merged
strings/ctype-mb.c:
Auto merged
mysql-test/t/sp-prelocking.test:
Manual merge.
into mysql.com:/opt/local/work/mysql-5.1-runtime
mysql-test/r/information_schema.result:
Auto merged
sql/CMakeLists.txt:
Auto merged
sql/mysql_priv.h:
Auto merged
sql/mysqld.cc:
Auto merged
sql/events.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/sql_show.cc:
Auto merged
sql/sql_yacc.yy:
Auto merged
mysql-test/t/events_microsec.test:
SCCS merged
change names of some undocumented ndb status variables to better reflect what
their values mean
sql/ha_ndbcluster.cc:
rename some status variables to better reflect what they show.
Make sure we can only drop files from the correct file group.
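A hypothetical sketch of the safety check (not the NDB dictionary API): before dropping a datafile, verify that it actually belongs to the file group named in the statement.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

struct Datafile { std::string path; std::string tablespace; };

// Refuse to drop a datafile that belongs to a different tablespace.
void drop_datafile(const Datafile& file, const std::string& target_tablespace) {
  if (file.tablespace != target_tablespace)
    throw std::runtime_error("datafile belongs to tablespace '" + file.tablespace +
                             "', not '" + target_tablespace + "'");
  std::cout << "dropping " << file.path << "\n";   // the real file removal would go here
}

int main() {
  Datafile f{"data_01.dat", "ts1"};
  drop_datafile(f, "ts1");          // ok
  try { drop_datafile(f, "ts2"); }  // rejected: wrong file group
  catch (const std::exception& e) { std::cout << e.what() << "\n"; }
  return 0;
}
```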
mysql-test/r/ndb_dd_ddl.result:
add testcase
mysql-test/t/ndb_dd_ddl.test:
add testcase
sql/ha_ndbcluster.cc:
Make sure the correct tablespace is used when dropping a datafile.
storage/ndb/include/ndbapi/NdbDictionary.hpp:
Cleanup {data/undo}file get{tablespace/logfilegroup}
storage/ndb/src/ndbapi/NdbDictionary.cpp:
Cleanup {data/undo}file get{tablespace/logfilegroup}
storage/ndb/src/ndbapi/NdbDictionaryImpl.hpp:
Cleanup {data/undo}file get{tablespace/logfilegroup}
storage/ndb/tools/restore/consumer_restore.cpp:
Cleanup {data/undo}file get{tablespace/logfilegroup}
into lmy004.:/work/mysql-5.1-runtime-bug16992
mysql-test/t/events.test:
Auto merged
mysql-test/t/events_grant.test:
Auto merged
sql/sql_show.cc:
Auto merged
into perch.ndb.mysql.com:/home/jonas/src/mysql-5.1-new-ndb
sql/ha_ndbcluster.cc:
Auto merged
storage/ndb/src/kernel/blocks/ERROR_codes.txt:
Auto merged
storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp:
Auto merged
storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Auto merged
support-files/mysql.spec.sh:
Auto merged
into mysql.com:/home/emurphy/src/bk-clean/mysql-5.1
mysql-test/mysql-test-run.sh:
Auto merged
mysql-test/valgrind.supp:
Auto merged
mysql-test/r/func_str.result:
Auto merged
mysql-test/r/insert_select.result:
Auto merged
mysql-test/r/myisam.result:
Auto merged
mysql-test/t/func_time.test:
Auto merged
mysql-test/t/myisam.test:
Auto merged
mysql-test/t/select.test:
Auto merged
mysys/Makefile.am:
Auto merged
sql/field.cc:
Auto merged
sql/field.h:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
sql/item_cmpfunc.cc:
Auto merged
sql/item_strfunc.cc:
Auto merged
sql/item_strfunc.h:
Auto merged
sql/mysqld.cc:
Auto merged
sql/opt_sum.cc:
Auto merged
sql/slave.cc:
Auto merged
sql/sql_select.cc:
Auto merged
storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Auto merged
storage/ndb/src/ndbapi/ndberror.c:
Auto merged
include/Makefile.am:
manual merge
mysql-test/r/func_time.result:
manual merge
mysql-test/r/select.result:
manual merge
sql/sql_table.cc:
Check for FN_DEVCHAR in the table name just before file creation. This allows temporary tables to contain FN_DEVCHAR in the name.
sql/table.cc:
Removed the check for FN_DEVCHAR at this level because it prevented Windows from creating any table with FN_DEVCHAR in the name (see the sketch below).
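An illustrative sketch of the relocated check, with hypothetical names (DEVICE_CHAR standing in for FN_DEVCHAR, and invented functions): validating the name only when a physical file is about to be created lets internally generated temporary-table names contain the character, while user-visible tables are still rejected.

```cpp
#include <iostream>
#include <string>

const char DEVICE_CHAR = ':';   // assumption: a character forbidden in Windows file names

bool name_contains_device_char(const std::string& name) {
  return name.find(DEVICE_CHAR) != std::string::npos;
}

// Called only when a real on-disk file is about to be created.
bool create_table_file(const std::string& name, bool is_internal_tmp) {
  if (!is_internal_tmp && name_contains_device_char(name)) {
    std::cout << "rejected: '" << name << "' contains the device character\n";
    return false;
  }
  std::cout << "created file for '" << name << "'\n";
  return true;
}

int main() {
  create_table_file("t:1", false);        // user table: rejected
  create_table_file("#sql_tmp:1", true);  // internal temporary table: allowed
  return 0;
}
```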
Fix for Bug#18897 "Events: unauthorized action possible with alter event rename".
The ALTER EVENT ... RENAME statement did not check privileges
for the target database. It also caused server crashes when the
target database was not specified explicitly and there was
no current database.
This fix adds the missing privilege check and a check for the case
when the target database is specified neither explicitly nor implicitly.
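A hypothetical sketch of the two added checks (not the real sql_parse.cc code): resolve the target database of ALTER EVENT ... RENAME from the explicit name or the current database, fail cleanly when neither exists, and only then verify the privilege on that target database. All names and the privilege lookup are illustrative.

```cpp
#include <iostream>
#include <string>

struct Session { std::string current_db; /* empty = no database selected */ };

bool has_event_privilege(const Session&, const std::string& db) {
  return db == "allowed_db";   // stand-in for the real privilege lookup
}

bool alter_event_rename(const Session& s, const std::string& explicit_target_db,
                        const std::string& new_name) {
  const std::string& target_db =
      !explicit_target_db.empty() ? explicit_target_db : s.current_db;
  if (target_db.empty()) {                     // previously: crash
    std::cout << "ERROR: no database selected\n";
    return false;
  }
  if (!has_event_privilege(s, target_db)) {    // previously: missing check
    std::cout << "ERROR: access denied to " << target_db << "\n";
    return false;
  }
  std::cout << "renamed event to " << target_db << "." << new_name << "\n";
  return true;
}

int main() {
  alter_event_rename(Session{""}, "", "xyz");              // no database selected
  alter_event_rename(Session{"test"}, "secret_db", "xyz"); // privilege denied
  alter_event_rename(Session{"allowed_db"}, "", "xyz");    // ok
  return 0;
}
```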
mysql-test/r/events_bugs.result:
update result
mysql-test/t/events_bugs.test:
Add test case for Bug#18897 "Events: unauthorized action possible with alter event rename":
- test rename to db the user does not have access to
- test rename when there is no selected db
sql/sql_parse.cc:
Additional check for the situation when no database is selected:
CREATE EVENT abc, ALTER EVENT db.abc RENAME TO xyz,
and DROP EVENT abc
won't work if there is no selected database.
into mysql.com:/usr/local/mysql/tmp-5.1
server-tools/instance-manager/instance_options.cc:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
sql/mysqld.cc:
Auto merged
sql/rpl_injector.cc:
Auto merged
sql/rpl_injector.h:
Auto merged
into moonbone.local:/work/tmp_merge-5.0-opt-mysql
mysql-test/r/key.result:
Auto merged
mysql-test/t/key.test:
Auto merged
sql/table.cc:
Auto merged
support-files/mysql.spec.sh:
Auto merged
A UNIQUE KEY consisting of NOT NULL columns
was displayed as PRIMARY KEY in "DESC t1".
According to the code, that was intentional
behaviour, for reasons unknown to me.
This code was written before BitKeeper time,
so I cannot check who made this change and why.
After a discussion on dev-public, a decision
was made to remove this code.
mysql-test/r/key.result:
Adding test case.
mysql-test/t/key.test:
Adding test case.
sql/table.cc:
Removing old wrong code
This change allows us to use the stmt_binlog function in the code without ifdefs.
(We should avoid having ifdefs in .cc and .c files.)
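A generic sketch of the pattern, with a hypothetical hook name (stmt_binlog_hook) rather than the real function's signature: when the feature is compiled out, the header supplies an inline no-op with the same signature, so .cc files call it unconditionally and need no #ifdef at the call sites.

```cpp
#include <iostream>

#if defined(HAVE_ROW_BINLOG)
void stmt_binlog_hook(const char* query) {
  std::cout << "binlogging: " << query << "\n";   // the real work would go here
}
#else
inline void stmt_binlog_hook(const char*) {}      // feature compiled out: no-op
#endif

int main() {
  stmt_binlog_hook("INSERT INTO t1 VALUES (1)");  // no #ifdef needed at the call site
  std::cout << "done\n";
  return 0;
}
```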
sql/handler.h:
Removed compiler warnings
Fixed wrong table flags type in ndbcluster that caused previous commit to fail
client/mysqltest.c:
Portability fix
mysys/my_bitmap.c:
Remove compiler warning
mysys/my_handler.c:
Remove compiler warning
mysys/thr_lock.c:
Remove compiler warning
plugin/fulltext/plugin_example.c:
Remove compiler warning
sql/ha_ndbcluster.h:
Fixed wrong type of handler flags (caused previous commit to fail)
sql/ha_ndbcluster_binlog.cc:
Remove compiler warning
sql/handler.cc:
Indentation cleanups
sql/mysql_priv.h:
Changed log_output_options to ulong to remove a compiler warning (and a wrong test on 64-bit machines; see the sketch after this file list)
sql/mysqld.cc:
Changed log_output_options to ulong to remove compiler warning (and wrong test on 64 bit machines)
Split initialization of variables of different types to remove compiler warning
sql/set_var.cc:
Fixed indentation
sql/set_var.h:
sys_var_log_output now takes a pointer to ulong
storage/archive/archive_test.c:
Remove compiler warning
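A tiny illustration of why the type of log_output_options mattered, assuming an LP64 platform (this is not server code): unsigned long is 8 bytes there while int is 4, so a system variable handled through a ulong pointer must point at a variable that really is a ulong, otherwise size and sign tests go wrong on 64-bit machines.

```cpp
#include <iostream>

int main() {
  std::cout << "sizeof(unsigned int)  = " << sizeof(unsigned int)  << "\n";  // 4 on LP64
  std::cout << "sizeof(unsigned long) = " << sizeof(unsigned long) << "\n";  // 8 on LP64
  return 0;
}
```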
The server crashed in some cases when a query required MIN/MAX
aggregation for a 'ucs2' field.
In these cases the aggregation caused calls of the function
update_tmptable_sum_func, which indirectly invoked
the method Item_sum_hybrid::min_max_update_str_field()
containing a call to strip_sp for a ucs2 character set.
The latter led directly to the crash, as it used my_isspace,
which is undefined for the ucs2 character set.
Actually the call to strip_sp is not needed at all in this
situation and has been removed by the fix.
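An illustrative sketch (not strip_sp itself) of why a byte-wise space-stripping helper cannot be applied to ucs2 data: every ucs2 character is two bytes, so trimming single 0x20 bytes can cut a character in half, which is why the fix removes the call rather than trying to make it charset-safe.

```cpp
#include <iostream>
#include <string>

// Naive byte-wise trim, valid only for single-byte character sets.
std::string strip_trailing_space_bytes(std::string s) {
  while (!s.empty() && s.back() == ' ') s.pop_back();
  return s;
}

int main() {
  // "A " in UCS-2 big-endian: 0x00 0x41 0x00 0x20 (4 bytes, 2 characters).
  std::string ucs2 = {'\0', 'A', '\0', ' '};
  std::string trimmed = strip_trailing_space_bytes(ucs2);
  std::cout << "bytes before: " << ucs2.size()
            << ", after: " << trimmed.size() << "\n";   // 4 -> 3: the string is now corrupt
  return 0;
}
```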
mysql-test/r/ctype_ucs.result:
Added a test case for bug #20076.
mysql-test/t/ctype_ucs.test:
Added a test case for bug #20076.
is_injective -> table_flag() HA_HAS_OWN_BINLOGGING
(Faster and easier to understand)
Allow cluster_binlogging also in mixed replication mode.
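A schematic sketch of the refactoring direction, with illustrative names and an assumed bit position (not the real handler flag values): the property moves from a per-handler virtual predicate into one bit of the table-flags bitmask, which is cheaper to test and sits next to the other capability flags.

```cpp
#include <cstdint>
#include <iostream>

const std::uint64_t FLAG_HAS_OWN_BINLOGGING = 1ULL << 5;   // hypothetical bit position

struct HandlerLike {
  std::uint64_t flags;
  explicit HandlerLike(std::uint64_t f) : flags(f) {}
  // One bitwise AND instead of a virtual is_injective()-style call.
  bool has_own_binlogging() const { return flags & FLAG_HAS_OWN_BINLOGGING; }
};

int main() {
  HandlerLike cluster(FLAG_HAS_OWN_BINLOGGING);   // an engine that binlogs itself
  HandlerLike myisam(0);
  std::cout << cluster.has_own_binlogging() << " "
            << myisam.has_own_binlogging() << "\n";
  return 0;
}
```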
mysql-test/t/rpl_truncate_7ndb.test:
Ensure that test is only run with mixed or row based replication
sql/ha_ndbcluster.cc:
Enforce row based replication if a cluster table is used
sql/ha_ndbcluster.h:
Remove is_injective() (Is now a table flag)
sql/ha_ndbcluster_binlog.cc:
Use cluster binlogging also in mixed binary logging
(Using a cluster table will enforce row based replication in mixed mode, so this should be ok)
sql/handler.cc:
is_injective -> HA_HAS_OWN_BINLOGGING
sql/handler.h:
is_injective -> HA_HAS_OWN_BINLOGGING
mysql-test/include/have_binlog_format_mixed_or_row.inc:
New BitKeeper file ``mysql-test/include/have_binlog_format_mixed_or_row.inc''
mysql-test/r/rpl_truncate_7ndb_2.result:
New BitKeeper file ``mysql-test/r/rpl_truncate_7ndb_2.result''
mysql-test/t/rpl_truncate_7ndb_2-master.opt:
New BitKeeper file ``mysql-test/t/rpl_truncate_7ndb_2-master.opt''
mysql-test/t/rpl_truncate_7ndb_2.test:
New BitKeeper file ``mysql-test/t/rpl_truncate_7ndb_2.test''
Fix for Bug#15217 "... with PREPARE fails with weird error".
More generally, re-executing a stored procedure with a complex SP cursor query
could lead to a crash.
The cause of the problem was that SP cursor queries were not optimized
properly at first execution: their parse tree belongs to sp_instr_cpush,
not sp_instr_copen, and thus the tree was tagged "EXECUTED" when the
cursor was declared, not when it was opened. This led to the loss of the
optimization transformations performed at first execution, because
sp_instr_copen saw that the query was already "EXECUTED" and therefore
either did not run the first-execution-related blocks or wrongly rolled
back the transformations made by the first-execution code.
The fix is to update the state of the parsed tree only when the tree is
executed, as opposed to when the instruction containing the tree is executed.
The assignment of i->state is moved to reset_lex_and_exec_core.
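A toy model of the fix with hypothetical names (not sp_head.cc): the "first execution done" state must be flipped where the query tree actually runs (cursor OPEN), not where the instruction that merely owns the tree (cursor DECLARE) runs; otherwise the first-execution-only optimization step is skipped on later executions.

```cpp
#include <iostream>

enum QueryState { INITIALIZED, EXECUTED };

struct CursorQuery {
  QueryState state = INITIALIZED;
  void execute() {
    if (state == INITIALIZED)
      std::cout << "running first-execution optimizations\n";
    std::cout << "executing query\n";
    state = EXECUTED;       // fix: the transition happens here, where the tree runs
  }
};

struct DeclareCursorInstr {   // sp_instr_cpush analogue: owns the tree, does not run it
  CursorQuery* q;
  void run() { /* the buggy version flipped q->state to EXECUTED here */ }
};

struct OpenCursorInstr {      // sp_instr_copen analogue: actually runs the tree
  CursorQuery* q;
  void run() { q->execute(); }
};

int main() {
  CursorQuery q;
  DeclareCursorInstr decl{&q};
  OpenCursorInstr open{&q};
  decl.run();
  open.run();   // first-execution optimizations now run as intended
  return 0;
}
```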
mysql-test/r/sp.result:
Test results fixed (Bug#15217)
mysql-test/t/sp.test:
Add a test case for Bug#15217
sql/sp_head.cc:
Move assignment of stmt_arena->state to reset_lex_and_exec_core
into mysql.com:/mnt/raid/alik/MySQL/devel/5.1-rt-bug20294
mysql-test/t/disabled.def:
Auto merged
sql/mysql_priv.h:
Auto merged
sql/mysqld.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/sql_show.cc:
Auto merged