Commit graph

230 commits

Author SHA1 Message Date
Michael Widenius
3bc932ec17 Merge with 5.1 2012-03-28 13:49:07 +03:00
Michael Widenius
74b0649332 Fixed lp:944422 "mysql_upgrade destroys Maria tables?"
The issue was that check/optimize/analyze did not zerofill the table before they started to work on it.
Added one more argument to the rarely used function handler::auto_repair() to let the handler decide when to auto-repair.
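
Below is a minimal C++ sketch of the extended interface, assuming the new argument is the error code that was seen; the names and error codes are illustrative stand-ins, not the actual declarations from sql/handler.h:

  #include <cstdio>

  // Simplified stand-in for the handler base class: the engine is now told
  // which error occurred and decides itself whether to auto-repair.
  struct handler_sketch {
    virtual bool auto_repair(int error) const { (void) error; return false; }
    virtual ~handler_sketch() {}
  };

  // Hypothetical Aria-like engine: always repair when the table was moved
  // or still needs zerofill (the error codes below are made up).
  struct aria_sketch : handler_sketch {
    enum { ERR_OLD_FILE = 185, ERR_MOVED_TABLE = 186 };
    bool auto_repair(int error) const override {
      return error == ERR_OLD_FILE || error == ERR_MOVED_TABLE;
    }
  };

  int main() {
    aria_sketch h;
    std::printf("repair moved table: %d\n", h.auto_repair(aria_sketch::ERR_MOVED_TABLE));
    return 0;
  }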


mysql-test/suite/maria/r/maria-autozerofill.result:
  Test case for lp:944422
mysql-test/suite/maria/t/maria-autozerofill.test:
  Test case for lp:944422
sql/ha_partition.cc:
  Added argument to auto_repair()
sql/ha_partition.h:
  Added argument to auto_repair()
sql/handler.h:
  Added argument to auto_repair()
sql/table.cc:
  Let auto_repair() decide which errors should trigger auto-repair
storage/archive/ha_archive.h:
  Added argument to auto_repair()
storage/csv/ha_tina.h:
  Added argument to auto_repair()
storage/maria/ha_maria.cc:
  Give better error & warning messages for auto-repaired tables.
storage/maria/ha_maria.h:
  Added argument to auto_repair()
  Always auto-repair in case of moved table.
storage/maria/ma_open.c:
  Remove special handling of HA_ERR_OLD_FILE (this is now handled in auto_repair())
storage/myisam/ha_myisam.h:
  Added argument to auto_repair()
2012-03-28 13:22:21 +03:00
Mattias Jonsson
5584e61f35 merge of bug#1364811 into mysql-5.5 2012-03-14 21:57:15 +01:00
Mattias Jonsson
645bddecaf merge from mysql-5.1 2012-02-29 21:18:50 +01:00
Mattias Jonsson
937ee6b7a0 merge into mysql-5.1 2012-02-29 20:51:38 +01:00
Mattias Jonsson
8325fe02b3 Bug#13694811: THE OPTIMIZER WRONGLY USES THE FIRST INNODB
PARTITION STATISTICS

The problem was the fix for bug#11756867: it always used the first
partitions and stopped after it had checked 10 [sub]partitions
(or earlier, if it found a partition that would contain a match).

This resulted in bad statistics for tables where the first 10 partitions
don't represent the majority of the data (for example, when the first 10
partitions contain only a few rows in total).

The solution was to take statistics from the partitions containing
the most rows instead:

Added an array of partition ids which is sorted by number of records
in descending order.

This array is used in records_in_range to cover as many records as
possible in as few calls as possible.

Also changed the limit on how many partitions to use for the statistics
from a static maximum of 10 partitions to a dynamic model:
the maximum number of partitions is now log2(total number of partitions),
taken from the ordered array.
It will keep calling the partitions' records_in_range until it has
checked:
(total rows in matching partitions) * (maximum number of partitions)
/ (number of used partitions)
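
A rough C++ sketch of that selection logic (illustrative only; the real code lives in ha_partition and also applies the row-coverage stopping rule quoted above, which this sketch omits):

  #include <algorithm>
  #include <cmath>
  #include <cstdio>
  #include <vector>

  int main() {
    // Rows per partition; here the first partitions are nearly empty.
    std::vector<long> part_rows = {2, 1, 0, 3, 900, 850, 10, 5000, 7, 1};

    // Partition ids ordered by number of records, descending.
    std::vector<size_t> ordered(part_rows.size());
    for (size_t i = 0; i < ordered.size(); i++) ordered[i] = i;
    std::sort(ordered.begin(), ordered.end(),
              [&](size_t a, size_t b) { return part_rows[a] > part_rows[b]; });

    // Dynamic limit: at most log2(total number of partitions) partitions.
    size_t max_parts =
        std::max<size_t>(1, (size_t) std::log2((double) part_rows.size()));

    long covered = 0;
    size_t used = 0;
    for (; used < max_parts; used++)
      covered += part_rows[ordered[used]];   // records_in_range() would be called here

    std::printf("sampled %zu of %zu partitions, covering %ld rows\n",
                used, part_rows.size(), covered);
    return 0;
  }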

Also reverted the changes to ha_partition::scan_time() and
ha_partition::estimate_rows_upper_bound() to their state before
the fix for bug#11756867, since they are not as slow as
records_in_range.
2012-02-22 23:13:36 +01:00
Mattias Jonsson
74374933c8 Bug#11761296: 53775: QUERY ON PARTITIONED TABLE RETURNS CACHED
RESULT FROM PREVIOUS TRANSACTION

The current Query Cache API is not fully compatible with
the partitioning engine.

There is no good way to implement support for the QC, because:
1) a static callback for ha_partition would need access
to all partition names and would have to call the underlying callback
for each [sub]partition with the correct name;
2) pruning would be impossible: even if the ulonglong
engine_data were used, changing engine_data makes the QC
invalidate the table.

So the only viable solution to avoid incorrect data is to not allow
caching of queries using partitioned tables.
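
A toy C++ sketch of the resulting behaviour (the real server goes through the handler's query-cache registration hook; the single boolean callback below is a simplified stand-in):

  #include <cstdio>

  // Simplified model of the query-cache handshake: before caching a result,
  // the cache asks each table's handler whether the table may be cached at all.
  struct handler_sketch {
    virtual bool allow_query_cache() const { return true; }
    virtual ~handler_sketch() {}
  };

  // Sketch of the fix: the partitioning handler simply declines, so queries
  // touching partitioned tables are never cached.
  struct ha_partition_sketch : handler_sketch {
    bool allow_query_cache() const override { return false; }
  };

  int main() {
    ha_partition_sketch part;
    std::printf("cacheable: %s\n", part.allow_query_cache() ? "yes" : "no");
    return 0;
  }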

(There are some extra changes, due to removal of \r as line break)
2012-02-20 22:59:11 +01:00
Sergei Golubchik
4f435bddfd 5.3 merge 2012-01-13 15:50:02 +01:00
Michael Widenius
6d4224a31c Merge with 5.2.
no_error handling for select (used by INSERT ... SELECT) still needs to be fixed, but I will do that in a separate commit
2011-12-11 11:34:44 +02:00
Michael Widenius
6920457142 Merge with MariaDB 5.1 2011-11-24 18:48:58 +02:00
Michael Widenius
a8d03ab235 Initial merge with MySQL 5.1 (XtraDB still needs to be merged)
Fixed up copyright messages.
2011-11-21 19:13:14 +02:00
Sergei Golubchik
0e007344ea mysql-5.5.18 merge 2011-11-03 19:17:05 +01:00
Sergei Golubchik
76f0b94bb0 merge with 5.3
sql/sql_insert.cc:
  CREATE ... IF NOT EXISTS may do nothing, but
  it is still not a failure. don't forget to my_ok it.
sql/sql_table.cc:
  small cleanup
2011-10-19 21:45:18 +02:00
Kent Boortz
027b5f1ed4 Updated/added copyright headers 2011-07-03 17:47:37 +02:00
Sergei Golubchik
9809f05199 5.5-merge 2011-07-02 22:08:51 +02:00
Kent Boortz
68f00a5686 Updated/added copyright headers 2011-06-30 17:37:13 +02:00
Jon Olav Hauglid
9b076952ec Bug#11853126 RE-ENABLE CONCURRENT READS WHILE CREATING
SECONDARY INDEX IN INNODB

The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update the .FRM. If this happened, InnoDB
and the server got out of sync with regard to which indexes
actually existed. Therefore the patch for Bug#11815600 again
disabled concurrent reads.

This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after lock upgrade, which prevents the original
regression.

In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(): the former for creating indexes without
making them visible and the latter for committing (i.e. making
visible) the new indexes or reverting the changes.
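
The two-phase flow, sketched in C++ with simplified names (the real entry points are handler_add_index() and handler::final_add_index(); the context object and error handling here are made up for illustration):

  #include <cstdio>

  struct Add_index_ctx { bool created; };   // stand-in for the real context object

  // Phase 1: build the index while concurrent reads are still allowed,
  // but do not make it visible yet.
  bool add_index_phase1(Add_index_ctx *ctx) {
    ctx->created = true;                    // created, still inactive
    return true;
  }

  // Phase 2: after the lock upgrade, either activate the new index or
  // revert the change if the ALTER failed, keeping InnoDB and the server in sync.
  void add_index_phase2(Add_index_ctx *ctx, bool commit) {
    std::puts(commit ? "index made active" : "index creation reverted");
    ctx->created = commit;
  }

  int main() {
    Add_index_ctx ctx = {false};
    bool ok = add_index_phase1(&ctx);       // concurrent reads allowed here
    /* ... lock upgrade needed to update the .FRM happens here ... */
    add_index_phase2(&ctx, ok);             // done under the upgraded lock
    return 0;
  }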

Large parts of this patch were written by Marko Mäkelä.

Test case added to innodb_mysql_lock.test.
2011-06-01 10:06:55 +02:00
Michael Widenius
f197991f41 Merge with 5.1-microseconds
A lot of small fixes and new test cases.

client/mysqlbinlog.cc:
  Cast removed
client/mysqltest.cc:
  Added missing DBUG_RETURN
include/my_pthread.h:
  set_timespec_time_nsec() now only takes one argument
mysql-test/t/date_formats.test:
  Remove --disable_ps_protocol as the ps protocol now also supports microseconds
mysys/my_uuid.c:
  Changed to use my_interval_timer() instead of my_getsystime()
mysys/waiting_threads.c:
  Changed to use my_hrtime()
sql/field.h:
  Added bool special_const_compare() for fields that may convert values before compare (like year)
sql/field_conv.cc:
  Added test to get optimal copying of identical temporal values.
sql/item.cc:
  Return that item_int is equal if it's positive, even if unsigned flag is different.
  Fixed Item_cache_str::save_in_field() to have the same null check as other similar functions
  Added proper NULL check to Item_cache_int::save_in_field()
sql/item_cmpfunc.cc:
  Don't call convert_constant_item() if there is nothing that is worth converting.
  Simplified test when years should be converted
sql/item_sum.cc:
  Mark cache values in Item_sum_hybrid as not constants to ensure they are not replaced by other cache values in compare_datetime()
sql/item_timefunc.cc:
  Changed sec_to_time() to take a my_decimal argument to ensure we don't lose any sub-seconds.
  Added Item_temporal_func::get_time() (This simplifies some things)
sql/mysql_priv.h:
  Added Lazy_string_decimal()
sql/mysqld.cc:
  Added my_decimal constants max_seconds_for_time_type, time_second_part_factor
sql/table.cc:
  Changed expr_arena to be of type CONVENTIONAL_EXECUTION to ensure that we don't lose any items that are created by fix_fields()
sql/tztime.cc:
  TIME_to_gmt_sec() now sets *in_dst_time_gap in case of errors
  This is needed to be able to detect if timestamp is 0
storage/maria/lockman.c:
  Changed from my_getsystime() to set_timespec_time_nsec()
storage/maria/ma_loghandler.c:
  Changed from my_getsystime() to my_hrtime()
storage/maria/ma_recovery.c:
  Changed from my_getsystime() to microsecond_interval_timer()
storage/maria/unittest/trnman-t.c:
  Changed from my_getsystime() to microsecond_interval_timer()
storage/xtradb/handler/ha_innodb.cc:
  Added support for the new time, datetime and timestamp types
unittest/mysys/thr_template.c:
  my_getsystime() -> my_interval_timer()
unittest/mysys/waiting_threads-t.c:
  my_getsystime() -> my_interval_timer()
2011-05-28 05:11:32 +03:00
Sergei Golubchik
7c459960ec db_low_byte_first is gone 2011-05-20 23:56:13 +02:00
Michael Widenius
3631146442 Original idea from Zardosht Kasheff to add HA_CLUSTERED_INDEX
- Added a lot of code comments
- Updated get_best_ror_intersec() to prefer index scans on non-clustered keys over clustered keys.
- Use HA_CLUSTERED_INDEX to decide whether HA_MRR_INDEX_ONLY should be used (see the sketch below).
- When testing whether an index or filesort should resolve ORDER BY, use the HA_CLUSTERED_INDEX flag instead of primary_key_is_clustered().
- Use HA_TABLE_SCAN_ON_INDEX instead of primary_key_is_clustered() to decide whether ALTER TABLE ... ORDER BY will have any effect.
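
A small C++ sketch of the flag-based check that replaces the virtual call (flag value, names and the decision printed are simplified for illustration):

  #include <cstdio>

  typedef unsigned long long Table_flags;
  const Table_flags HA_CLUSTERED_INDEX_SKETCH = 1ULL << 0;

  struct handler_sketch {
    virtual Table_flags ha_table_flags() const { return 0; }
    virtual ~handler_sketch() {}
  };

  // An InnoDB-like engine advertises that its primary key is clustered.
  struct innodb_sketch : handler_sketch {
    Table_flags ha_table_flags() const override { return HA_CLUSTERED_INDEX_SKETCH; }
  };

  int main() {
    innodb_sketch file;
    // Instead of calling a virtual primary_key_is_clustered(), test the flag:
    if (file.ha_table_flags() & HA_CLUSTERED_INDEX_SKETCH)
      std::puts("clustered PK: skip HA_MRR_INDEX_ONLY, prefer non-clustered keys for ROR intersect");
    return 0;
  }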

sql/ha_partition.h:
  Added a comment warning that the code is unsafe to use with multiple storage engines at the same time
sql/handler.h:
  Added HA_CLUSTERED_INDEX.
  Documented primary_key_is_clustered()
sql/opt_range.cc:
  Added code comments
  Updated get_best_ror_intersec() to ignore clustered keys.
  Optimized away cpk_scan_used and one instance of current_thd (Simpler code)
  Use HA_CLUSTERED_INDEX to define if one should use HA_MRR_INDEX_ONLY
sql/sql_select.cc:
  Changed comment to #ifdef
  For test of using index or filesort to resolve ORDER BY, use HA_CLUSTERED_INDEX flag instead of primary_key_is_clustered()
  (The change is smaller than it looks because of an indentation change)
sql/sql_table.cc:
  Use HA_TABLE_SCAN_ON_INDEX instead of primary_key_is_clustered() to decide if ALTER TABLE ... ORDER BY will have any effect.
storage/innobase/handler/ha_innodb.h:
  Added support for HA_CLUSTERED_INDEX
storage/innodb_plugin/handler/ha_innodb.cc:
  Added support for HA_CLUSTERED_INDEX
storage/xtradb/handler/ha_innodb.cc:
  Added support for HA_CLUSTERED_INDEX
2011-05-18 19:26:30 +03:00
Kent Boortz
789aa8c485 Updated/added copyright headers 2011-07-04 01:25:49 +02:00
Kent Boortz
02e07e3b51 Updated/added copyright headers 2011-06-30 17:46:53 +02:00
Michael Widenius
1be5462d59 Merge with MariaDB 5.1 2011-05-03 19:10:10 +03:00
Michael Widenius
e415ba0fb2 Merge with MySQL 5.1.57/58
Moved some BSD string functions from Unireg
2011-05-02 20:58:45 +03:00
Mattias Jonsson
bd92ea4311 Bug#11766249 bug#59316: PARTITIONING AND INDEX_MERGE MEMORY LEAK
Update for the previous patch according to reviewers' comments.

Updated the ha_partition constructors to use the common
init_handler_variables function

Added use of defines for size and offset to get better readability for the code that reads
and writes the .par file. Also refactored the get_from_handler_file function.
2011-04-20 17:52:33 +02:00
Mattias Jonsson
f7b98c25f4 Manual merge from 5.1 2011-04-20 19:53:08 +02:00
Mattias Jonsson
e0887df8e1 Bug#11766249 bug#59316: PARTITIONING AND INDEX_MERGE MEMORY LEAK
When executing a row-ordered-retrieval index merge,
the handler was cloned, but it used the wrong
memory root: instead of allocating memory
on the thread/query's mem_root, it used the table's
mem_root, so the memory was not released in the
table object and was not freed until the table was
closed.

The solution was to ensure that memory used during cloning
of a handler is allocated from the correct memory root.

This was implemented by fixing handler::clone() to also
take a name argument, so it can be used with partitioning,
and by having ha_partition allocate only its own ref and then
call clone() on the original ha_partition's partition handlers,
using the results as the cloned partitions.
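
A C++ sketch of the memory-root part of the fix (MEM_ROOT handling is heavily simplified; the only point illustrated is that the clone allocates from the caller-supplied root, not from the table's root):

  #include <cstdio>
  #include <cstddef>
  #include <new>
  #include <vector>

  // Toy memory root: everything allocated from it lives until the root is destroyed.
  struct Mem_root_sketch {
    std::vector<char*> blocks;
    void *alloc(std::size_t n) { char *p = new char[n]; blocks.push_back(p); return p; }
    ~Mem_root_sketch() { for (char *p : blocks) delete[] p; }
  };

  struct handler_sketch {
    char *ref;                                 // per-handler row position buffer
    // Fixed clone(): takes the table name and the memory root to allocate from,
    // so a handler cloned for one query is freed together with that query.
    handler_sketch *clone(const char *name, Mem_root_sketch *root) const {
      handler_sketch *h = new (root->alloc(sizeof(handler_sketch))) handler_sketch();
      h->ref = (char *) root->alloc(16);       // ref allocated on the query's root
      std::printf("cloned handler for %s\n", name);
      return h;
    }
  };

  int main() {
    Mem_root_sketch query_root;                // released at end of the query
    handler_sketch original = handler_sketch();
    handler_sketch *copy = original.clone("t1#P#p0", &query_root);
    (void) copy;
    return 0;                                  // query_root frees the clone's memory
  }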

Fix of .bzrignore on Windows with VS 2010
2011-03-25 12:36:02 +01:00
Michael Widenius
3358cdd504 Merge with 5.1 to get in changes from MySQL 5.1.55 2011-02-28 19:39:30 +02:00
Michael Widenius
f2ca9c8784 Applied patch for lp:585688 "maridb crashes in federatedx code" from lp:~atcurtis/maria/federatedx:
- Fixed Partition engine to store CONNECTION string for partitions.
  Removed HA_NO_PARTITION flag from FederatedX.
  Added test 'federated_partition' to suite.
- lp:#585688 - maridb crashes in federatedx code
  FederatedX handler instances created on one thread and used on
  another thread (via the table cache) when "show table status" is executed
  crashed because the txn member was not initialized for the current thread.
  Added test 'federated_bug_585688' to suite.

Author for the patch is Antony Curtis

mysql-test/suite/federated/federated_bug_585688.result:
  Test for lp:585688
mysql-test/suite/federated/federated_bug_585688.test:
  Test for lp:585688
mysql-test/suite/federated/federated_partition-slave.opt:
  Test for partition support in federatedx
mysql-test/suite/federated/federated_partition.result:
  Test for partition support in federatedx
mysql-test/suite/federated/federated_partition.test:
  Test for partition support in federatedx
mysql-test/t/partition_federated.test:
  Updated error message
sql/ha_partition.cc:
  Added support for connection strings to partitions for federatedx
sql/ha_partition.h:
  Added support for connection strings to partitions for federatedx
sql/partition_element.h:
  Added support for connection strings to partitions for federatedx
sql/sql_yacc.yy:
  Added support for connection strings to partitions for federatedx
storage/federatedx/ha_federatedx.cc:
  Added support for partitions.
  FederatedX handler instances, created on one thread and used on another thread (via table cache) when "show table status"
  is executed crashed because txn member was not initialized for current thread.
2011-02-10 22:40:59 +02:00
Sergei Golubchik
31a78529bc virtual columns:
* move a capability from a virtual handler method to table_flags()
 * rephrase error messages to avoid hard-coded English parts
 * admit in test cases that they need xtradb, not innodb

mysql-test/suite/vcol/t/rpl_vcol.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_blocked_sql_funcs_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_column_def_options_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_handler_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_ins_upd_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_keys_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_non_stored_columns_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_partition_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_select_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_supported_sql_funcs_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_trigger_sp_innodb.test:
  this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_view_innodb.test:
  this test needs xtradb, it will fail with innodb
sql/ha_partition.h:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
sql/handler.h:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
sql/share/errmsg.txt:
  no hard-coded English parts in the error messages (ER_UNSUPPORTED_ACTION_ON_VIRTUAL_COLUMN)
sql/sql_table.cc:
  no hard-coded English parts in the error messages
sql/table.cc:
  * check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
  * no "csv workaround" is needed
  * no hard-coded English parts in the error messages
storage/maria/ha_maria.cc:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/maria/ha_maria.h:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/myisam/ha_myisam.cc:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/myisam/ha_myisam.h:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/xtradb/handler/ha_innodb.cc:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/xtradb/handler/ha_innodb.h:
  check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
2010-12-31 10:39:14 +01:00
Mattias Jonsson
a998586d45 merge of bug#58147, including rename of the new argument,
to_binlog -> binlog_stmt.
2010-12-03 10:33:29 +01:00
Mattias Jonsson
2737a72278 Bug#58147: ALTER TABLE w/ TRUNCATE PARTITION fails
but the statement is written to binlog

TRUNCATE PARTITION was written to the binlog
even if it failed before calling any partition's
truncate function.

Solved by adding an argument to truncate_partition
to flag whether it should be written to the binlog or not.

It should be written to the binlog once a call to any
partition's truncate function has been made.
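
A C++ sketch of the control flow (the flag name follows the later rename to binlog_stmt noted above; partition count and failure injection are made up for illustration):

  #include <cstdio>

  // Only log the statement if at least one partition's truncate function
  // was actually reached (i.e. data may have been changed).
  bool truncate_partition_sketch(int first_failing_part, bool *binlog_stmt) {
    *binlog_stmt = false;
    for (int part = 0; part < 4; part++) {
      if (part == first_failing_part)
        return false;                 // failed before touching this partition
      *binlog_stmt = true;            // a partition's truncate was called: must log
      std::printf("truncated partition %d\n", part);
    }
    return true;
  }

  int main() {
    bool binlog_stmt;
    truncate_partition_sketch(0, &binlog_stmt);  // fails before any partition
    std::printf("write to binlog: %s\n", binlog_stmt ? "yes" : "no");
    return 0;
  }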

mysql-test/r/partition_binlog.result:
  New result file
mysql-test/t/partition_binlog.test:
  New test file, including DROP PARTITION binlog test
sql/ha_partition.cc:
  Added argument to avoid binlogging a failed truncate_partition that
  has not yet changed any data.
sql/ha_partition.h:
  Added argument to avoid excessive binlogging
sql/sql_partition_admin.cc:
  Avoid binlogging TRUNCATE PARTITION if it fails before
  any partition has tried to truncate.
2010-12-01 22:47:40 +01:00
Michael Widenius
1e5061fe3b merge with 5.1 2010-11-30 23:11:03 +02:00
Sergei Golubchik
65ca700def merge.
checkpoint.
does not compile.
2010-11-25 18:17:28 +01:00
Sergei Golubchik
bc2e383e4a mysql-5.1 -> mysql-5.5 merge 2010-11-05 10:59:51 +01:00
Davi Arnaut
a5efb91dea Bug#49938: Failing assertion: inode or deadlock in fsp/fsp0fsp.c
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock

- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.

- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign checks
with SET foreign_key_checks=0 before truncate. Alternatively, if
foreign key checks are necessary, please use a DELETE statement
without a WHERE condition.

Problem description:

The problem was that for storage engines that do not support
truncating a table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated with the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.

Solution:

The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.

Also, the method is not invoked and an error is thrown if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency, as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
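
A compact C++ sketch of the new flow (names are simplified; the real method is handler::truncate(), called from the TRUNCATE implementation under an exclusive metadata lock):

  #include <cstdio>

  struct handler_sketch {
    // New truncate-specific method; engines that cannot drop-and-recreate
    // implement it, and it is only called under an exclusive MDL lock.
    virtual int truncate() { return delete_all_rows_sketch(); }
    virtual int delete_all_rows_sketch() { return 0; }
    virtual ~handler_sketch() {}
  };

  // Server-side rule: refuse to truncate a table that is parent in a
  // non-self-referencing foreign key relationship.
  int truncate_table_sketch(handler_sketch *file,
                            bool parent_in_fk, bool self_referencing) {
    if (parent_in_fk && !self_referencing) {
      std::puts("error: cannot truncate a table referenced in a foreign key constraint");
      return 1;
    }
    /* ... acquire the exclusive metadata lock here ... */
    return file->truncate();
  }

  int main() {
    handler_sketch h;
    return truncate_table_sketch(&h, /*parent_in_fk=*/true, /*self_referencing=*/false);
  }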

mysql-test/suite/innodb/t/innodb-truncate.test:
  Add test cases for truncate and foreign key checks.
  Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test:
  FK is not necessary, test is related to auto-increment.
  
  Update error number, truncate is no longer invoked if
  table is parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test:
  Update error number, truncate is no longer invoked if
  table is parent in a FK relationship.
  
  Use delete instead of truncate, test is used to check
  the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc:
  Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test:
  Update error number, truncate is no longer invoked if
  table is parent in a FK relationship.
mysql-test/t/mdl_sync.test:
  Modify test case to reflect and ensure that truncate takes
  an exclusive metadata lock.
mysql-test/t/trigger-trans.test:
  Update error number, truncate is no longer invoked if
  table is parent in a FK relationship.
sql/ha_partition.cc:
  Reorganize the various truncate methods. delete_all_rows is now
  passed directly to the underlying engines, as is truncate. The
  code responsible for truncating individual partitions is moved
  to ha_partition::truncate_partition, which is invoked when an
  ALTER TABLE t1 TRUNCATE PARTITION p statement is executed.

  Since the partition truncate can no longer be invoked via
  delete, the bitmap operations are not necessary anymore. The
  explicit reset of the auto-increment value is also removed,
  as the underlying engines are now responsible for resetting
  the value.
sql/handler.cc:
  Wire up the handler truncate method.
sql/handler.h:
  Introduce and document the truncate handler method. It assumes
  certain use cases of delete_all_rows.
  
  Add method to retrieve the list of foreign keys referencing a
  table. Method is used to avoid truncating tables that are
  parent in a foreign key relationship.
sql/share/errmsg-utf8.txt:
  Add error message for truncate and FK.
sql/sql_lex.h:
  Introduce a flag so that the partition engine can detect when
  a partition is being truncated. Used to give a special error.
sql/sql_parse.cc:
  Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc:
  Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc:
  Change the truncate table implementation to use the new truncate
  handler method and to not rely on row-by-row delete anymore.
  
  The truncate handler method is always invoked with an exclusive
  metadata lock. Also, it is no longer possible to truncate a
  table that is parent in some non-self-referencing foreign key.
storage/archive/ha_archive.cc:
  Rename method as the description indicates that in the future
  this could be a truncate operation.
storage/blackhole/ha_blackhole.cc:
  Implement truncate as no operation for the blackhole engine in
  order to remain compatible with older releases.
storage/federated/ha_federated.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required to support partition truncate as this
  form of truncate does not implement the drop and recreate
  protocol.
storage/heap/ha_heap.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required to support partition truncate as this
  form of truncate does not implement the drop and recreate
  protocol.
storage/ibmdb2i/ha_ibmdb2i.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required to support partition truncate as this
  form of truncate does not implement the drop and recreate
  protocol.
storage/innobase/handler/ha_innodb.cc:
  Rename delete_all_rows to truncate. InnoDB now does truncate
  under an exclusive metadata lock.

  Introduce and reorganize methods used to retrieve the list
  of foreign keys referenced by, or referencing, a table.
storage/myisammrg/ha_myisammrg.cc:
  Introduce truncate method that invokes delete_all_rows.
  This is required in order to remain compatible with earlier
  releases where truncate would resort to a row-by-row delete.
2010-10-06 11:34:28 -03:00
Mattias Jonsson
cfcf51b719 merge 2010-10-01 13:39:49 +02:00
Mattias Jonsson
814fbc5b6f Bug#51851: Server with SBR locks mutex twice on
LOAD DATA into partitioned MyISAM table

The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto-increment and repair).

Solved by adding a specific mutex for the partitioning
auto_increment.
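
A small C++ sketch of the separation, using std::mutex in place of the server's mysys mutexes (structure and names are illustrative only):

  #include <cstdio>
  #include <mutex>

  // Before: repair (MyISAM) and auto-increment (partitioning) both serialized on
  // table_share->mutex. After: the partitioning auto-increment has its own lock.
  struct Partition_share_sketch {
    std::mutex auto_inc_mutex;             // dedicated mutex, kept in ha_data
    unsigned long long next_auto_inc_val = 1;

    unsigned long long reserve_auto_inc() {
      std::lock_guard<std::mutex> guard(auto_inc_mutex);
      return next_auto_inc_val++;          // safe even while another thread repairs
    }
  };

  int main() {
    Partition_share_sketch share;
    std::printf("reserved id %llu\n", share.reserve_auto_inc());
    return 0;
  }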

Also added destruction of the ha_data structure in
free_table_share (which is to be propagated
into 5.5).

This is a 5.1 ONLY patch, already fixed in 5.5+.
2010-10-01 13:39:04 +02:00
Mattias Jonsson
92655149a6 merge 2010-09-13 15:56:56 +02:00
Sergei Golubchik
a3d80d952d merge with 5.1 2010-09-11 20:43:48 +02:00
Mattias Jonsson
af951a6c36 Bug#55458: Partitioned MyISAM table gets crashed by multi-table update
Updated according to reviewers' comments.
2010-09-07 17:56:43 +02:00
Mattias Jonsson
b409fad8cc Bug#55458: Partitioned MyISAM table gets crashed by multi-table update
The problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached,
but ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) was not.

The solution was to also cache the latter call and forward it when moving
to a new partition to scan.
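
Sketched in C++ (the flag values and the partition-switch hook are simplified stand-ins for the real ::extra() plumbing):

  #include <cstdio>

  enum { HA_EXTRA_CACHE_SK = 1, HA_EXTRA_PREPARE_FOR_UPDATE_SK = 2 };

  struct ha_partition_sketch {
    bool extra_cache = false;
    bool prepare_for_update = false;       // the call that was previously dropped

    void extra(int operation) {
      if (operation == HA_EXTRA_CACHE_SK)              extra_cache = true;
      if (operation == HA_EXTRA_PREPARE_FOR_UPDATE_SK) prepare_for_update = true;
    }

    // When the scan moves on to the next partition, forward every cached
    // ::extra() call to that partition's handler.
    void start_scan_on_partition(int next_part) {
      if (extra_cache)
        std::printf("part %d: extra(HA_EXTRA_CACHE)\n", next_part);
      if (prepare_for_update)
        std::printf("part %d: extra(HA_EXTRA_PREPARE_FOR_UPDATE)\n", next_part);
    }
  };

  int main() {
    ha_partition_sketch h;
    h.extra(HA_EXTRA_CACHE_SK);
    h.extra(HA_EXTRA_PREPARE_FOR_UPDATE_SK);
    h.start_scan_on_partition(1);          // multi-table UPDATE moves to partition 1
    return 0;
  }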

mysql-test/r/partition.result:
  test result
mysql-test/t/partition.test:
  New test from bug report.
sql/ha_partition.cc:
  cache the HA_EXTRA_PREPARE_FOR_UPDATE just like HA_EXTRA_CACHE.
sql/ha_partition.h:
  Added cache flag for HA_EXTRA_PREPARE_FOR_UPDATE
2010-08-10 10:43:12 +02:00
Sergei Golubchik
68f02c65ef merge with 5.1 2010-07-25 17:09:21 +02:00
Sergei Golubchik
069a068c90 restore the unintentionally broken ABI 2010-07-23 22:37:21 +02:00
Davi Arnaut
97c3182312 WL#5498: Remove dead and unused source code
Remove code that has been disabled for a long time.
2010-07-23 17:09:27 -03:00
Michael Widenius
e9166ca152 Fix for LP#588251: doStartTableScan() result not checked.
The issue was that we didn't always check the result of ha_rnd_init(), which caused a problem for handlers that returned an error in this code.
- Changed the prototype of ha_rnd_init() to ensure that we get a compile warning if the result is not checked.
- Added ha_rnd_init_with_error(), which prints an error on failure (see the sketch below).
- Checked all usage of ha_rnd_init() and ensured we generate an error message on failures.
- Changed init_read_record() to return 1 on failure.
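
A C++ sketch of those two points, using a gcc/clang attribute for the must-check warning and a thin error-reporting wrapper (function bodies and the error text are illustrative):

  #include <cstdio>

  struct handler_sketch {
    // warn_unused_result makes ignoring the return value of ha_rnd_init()
    // a compile-time warning on gcc/clang.
    __attribute__((warn_unused_result)) int ha_rnd_init(bool scan) {
      (void) scan;
      return 0;                            // 0 = ok, non-zero = engine error code
    }

    // Convenience wrapper: report the error so callers can simply bail out.
    int ha_rnd_init_with_error(bool scan) {
      int error = ha_rnd_init(scan);
      if (error)
        std::fprintf(stderr, "got error %d when reading table\n", error);
      return error;
    }
  };

  int main() {
    handler_sketch h;
    return h.ha_rnd_init_with_error(true) ? 1 : 0;
  }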




sql/create_options.cc:
  Fixed wrong printf
sql/event_db_repository.cc:
  Check result from init_read_record()
sql/events.cc:
  Check result from init_read_record()
sql/filesort.cc:
  Check result from ha_rnd_init()
sql/ha_partition.cc:
  Check result from ha_rnd_init()
sql/ha_partition.h:
  Fixed compiler warning
sql/handler.cc:
  Added ha_rnd_init_with_error()
  Check result from ha_rnd_init()
sql/handler.h:
  Added ha_rnd_init_with_error()
  Changed prototype of ha_rnd_init() to ensure that we get a compile warning if result is not checked
sql/item_subselect.cc:
  Check result from ha_rnd_init()
sql/log.cc:
  Check result from ha_rnd_init()
sql/log_event.cc:
  Check result from ha_rnd_init()
sql/log_event_old.cc:
  Check result from ha_rnd_init()
sql/mysql_priv.h:
  init_read_record() now returns error code on failure
sql/opt_range.cc:
  Check result from ha_rnd_init()
sql/records.cc:
  init_read_record() now returns error code on failure
  Check result from ha_rnd_init()
sql/sql_acl.cc:
  Check result from init_read_record()
sql/sql_cursor.cc:
  Print error if ha_rnd_init() fails
sql/sql_delete.cc:
  Check result from init_read_record()
sql/sql_help.cc:
  Check result from init_read_record()
sql/sql_plugin.cc:
  Check result from init_read_record()
sql/sql_select.cc:
  Check result from ha_rnd_init()
  Print error if ha_rnd_init() fails.
sql/sql_servers.cc:
  Check result from init_read_record()
sql/sql_table.cc:
  Check result from init_read_record()
sql/sql_udf.cc:
  Check result from init_read_record()
sql/sql_update.cc:
  Check result from init_read_record()
storage/example/ha_example.cc:
  Don't return error on rnd_init()
storage/ibmdb2i/ha_ibmdb2i.cc:
  Removed irrelevant comment
2010-07-17 01:41:44 +03:00
Mattias Jonsson
7df0864598 merge 2010-07-09 15:02:27 +02:00
Mattias Jonsson
52d5941bf1 merge 2010-07-09 15:00:33 +02:00
Mattias Jonsson
70b02d3aed Bug#52517: Regression in ROW level replication performance with partitions
In bug#28430, HA_PRIMARY_KEY_REQUIRED_FOR_POSITION
was disabled in the partitioning engine by the first patch.
That bug was later fixed a second time, but that flag
was still left disabled.

There is no need to disable this flag, as doing so leads to bad
choices in row replication.

sql/ha_partition.h:
  Not disabling the HA_PRIMARY_KEY_REQUIRED_FOR_POSITION flag.
  Updated the comment (it has nothing to do with the hidden key).
sql/handler.h:
  Updated comments about HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
2010-07-09 13:15:26 +02:00
Mattias Jonsson
9edde02ebb Bug#52455: Subpar INSERT ON DUPLICATE KEY UPDATE performance with many partitions
The handler function for reading one row from a specific index
was not optimized in the partitioning handler since it
used the default implementation.
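
A C++ sketch of the idea behind the shortcut (the handler calls are reduced to prints and the partition selection is invented; only the HA_READ_KEY_EXACT case gets the fast path, as in the commit):

  #include <cstdio>

  enum find_flag_sketch { HA_READ_KEY_EXACT_SK, HA_READ_AFTER_KEY_SK };

  // Generic fallback: the default implementation goes through a full
  // index_init / index_read_map / index_end cycle on the partitions.
  static int generic_read(int part) {
    std::printf("part %d: index_init / index_read_map / index_end\n", part);
    return 1;                              // pretend: not found here
  }

  // Specialized path: for an exact key lookup, read each used partition
  // directly and stop as soon as the row is found.
  int index_read_idx_map_sketch(find_flag_sketch flag) {
    const int used_parts = 4;
    if (flag == HA_READ_KEY_EXACT_SK) {
      for (int part = 0; part < used_parts; part++) {
        std::printf("part %d: direct exact read\n", part);
        if (part == 2)
          return 0;                        // found: no further handler calls needed
      }
      return 1;
    }
    for (int part = 0; part < used_parts; part++)
      if (generic_read(part) == 0)
        return 0;
    return 1;
  }

  int main() { return index_read_idx_map_sketch(HA_READ_KEY_EXACT_SK); }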

No test case since it is performance only, verified by hand.

sql/ha_partition.cc:
  Implemented an optimized version of index_read_idx_map
  for the case when the find flag == HA_READ_KEY_EXACT,
  which is the common case.
sql/ha_partition.h:
  Declared ha_partition::index_read_idx_map
2010-07-09 01:09:31 +02:00