Commit graph

119 commits

Author SHA1 Message Date
gkodinov/kgeorge@magare.gmz
c00605c6cf Merge magare.gmz:/home/kgeorge/mysql/work/B35298-5.0-bugteam
into  magare.gmz:/home/kgeorge/mysql/work/B35298-5.1-bugteam
2008-05-01 14:54:59 +03:00
gkodinov/kgeorge@magare.gmz
e22ef24263 Fix for bug #35298: GROUP_CONCAT with DISTINCT can crash the server
The bug is a regression introduced by the patch for bug #32798.

The code in Item_func_group_concat::clear() relied on the 'distinct'
variable to check whether 'unique_filter' was initialized. That, however,
is not always valid, because Item_func_group_concat::setup() can take
shortcuts in some cases without initializing 'unique_filter'.

Fixed by checking the value of 'unique_filter' instead of 'distinct'
before dereferencing.
2008-05-01 13:49:26 +03:00
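The essence of this fix is the classic "test the pointer you are about to dereference" pattern. A minimal self-contained sketch with stand-in types (the real Item_func_group_concat logic is richer):

  struct Unique { void reset() {} };            // stand-in for the server's Unique

  struct GroupConcatLike {                      // stand-in for Item_func_group_concat
      bool distinct = false;                    // user wrote DISTINCT
      Unique *unique_filter = nullptr;          // may stay null even when
                                                // distinct == true, because
                                                // setup() can take shortcuts
      void clear() {
          // Buggy: if (distinct) unique_filter->reset();   // may crash
          if (unique_filter)                    // fixed: gate on the pointer
              unique_filter->reset();
      }
  };

  int main() {
      GroupConcatLike g;
      g.distinct = true;                        // setup() shortcut: no filter
      g.clear();                                // safe with the fixed check
  }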
gkodinov/kgeorge@magare.gmz
9b58d7b1f8 Merge magare.gmz:/home/kgeorge/mysql/work/B34747-5.0-opt
into  magare.gmz:/home/kgeorge/mysql/work/B34747-5.1-opt
2008-02-28 15:45:54 +02:00
gkodinov/kgeorge@magare.gmz
0443189d89 Bug #34747: crash in debug assertion check after derived table
Was a double free of the Unique member of Item_func_group_concat.
This was not causing a crash because the Unique is a descendant of
Sql_alloc.
Fixed to free the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
2008-02-28 13:31:19 +02:00
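One common way to express "free only what this instance allocated" is an explicit ownership flag next to the shared pointer. A hypothetical sketch of the pattern (illustrative member names, not the actual server code):

  struct Unique { };                            // stand-in

  struct GroupConcatLike {
      Unique *unique_filter = nullptr;
      bool owns_unique = false;                 // illustrative: true only in the
                                                // instance that did the allocation
      void setup() {
          unique_filter = new Unique;
          owns_unique = true;
      }

      // A copy shares the pointer, not the ownership.
      void share_from(const GroupConcatLike &origin) {
          unique_filter = origin.unique_filter;
          owns_unique = false;
      }

      void cleanup() {
          if (owns_unique)                      // fixed: only the allocator
              delete unique_filter;             // frees, so no double free
          unique_filter = nullptr;
          owns_unique = false;
      }
  };

  int main() {
      GroupConcatLike a, b;
      a.setup();
      b.share_from(a);
      b.cleanup();                              // no-op: b doesn't own it
      a.cleanup();                              // the single, owning free
  }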
mhansson/martin@linux-st28.site
bcfdd27481 Merge mhansson@bk-internal:/home/bk/mysql-5.0-opt
into  linux-st28.site:/home/martin/mysql/src/bug32798-united/my50-bug32798-united-push
2007-12-15 12:02:33 +01:00
mhansson/martin@linux-st28.site
e8d1380263 Merge mhansson@bk-internal:/home/bk/mysql-5.1-opt
into  linux-st28.site:/home/martin/mysql/src/bug32798-united/my51-bug32798-united-push
2007-12-15 11:54:02 +01:00
mhansson/martin@linux-st28.site
8e8ba8e825 Merge linux-st28.site:/home/martin/mysql/src/bug32798-united/my50-bug32798-united
into  linux-st28.site:/home/martin/mysql/src/bug32798-united/my51-bug32798-united
2007-12-14 12:40:39 +01:00
mhansson/martin@linux-st28.site
cdeecc5154 Bug#32798: DISTINCT in GROUP_CONCAT clause fails when ordering by a column
with null values

For queries containing GROUP_CONCAT(DISTINCT fields ORDER BY fields), there
was a limitation that the DISTINCT fields had to be the same as the ORDER BY
fields, owing to the fact that a single sorted tree was used for keeping
track of tuples, ordering and uniqueness. Fixed by introducing a second
structure to handle uniqueness so that the original structure only has to
order the result.
2007-12-14 12:24:20 +01:00
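The shape of the fix can be illustrated with two ordinary containers: one keyed on the DISTINCT columns for uniqueness, the other keyed on the ORDER BY columns for ordering (a toy sketch, not the server's tree/Unique code):

  #include <iostream>
  #include <set>
  #include <string>
  #include <utility>
  #include <vector>

  int main() {
      // (distinct value, order-by key) pairs as they arrive from rows.
      std::vector<std::pair<std::string, int>> rows = {
          {"a", 3}, {"b", 1}, {"a", 2}, {"c", 2}};

      std::set<std::string> seen;                      // uniqueness only
      std::multiset<std::pair<int, std::string>> tree; // ordering only

      for (const auto &r : rows)
          if (seen.insert(r.first).second)             // new DISTINCT value?
              tree.insert({r.second, r.first});        // order by the sort key

      for (const auto &t : tree)
          std::cout << t.second << ' ';                // prints: b c a
      std::cout << '\n';
  }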
gluh@eagle.(none)
a2fa8f724a Merge mysql.com:/home/gluh/MySQL/Merge/5.0-opt
into  mysql.com:/home/gluh/MySQL/Merge/5.1-opt
2007-10-29 14:53:42 +04:00
gluh@mysql.com/eagle.(none)
b943d8cf3c Bug#30897 GROUP_CONCAT returns extra comma on empty fields
The fix is a copy of Martin Friebe's suggestion:
added testing of no_appended, which will be false if anything,
including the empty string, is in the result.
2007-10-29 14:53:10 +04:00
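The point of the flag is that "has anything been appended" and "is the result non-empty" are different questions once empty strings are legal values. A toy sketch:

  #include <iostream>
  #include <string>
  #include <vector>

  int main() {
      std::vector<std::string> values = {"", "x", "y"};  // empty field first
      std::string result;
      bool no_appended = true;        // nothing appended yet

      for (const auto &v : values) {
          if (!no_appended)           // a length test (result.empty()) would
              result += ',';          // miss the appended empty string here
          result += v;
          no_appended = false;        // flips even for the empty string
      }
      std::cout << result << '\n';    // prints ",x,y": the empty value keeps
  }                                   // its place, with no stray separators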
ramil/ram@ramil.myoffice.izhnet.ru
b3d7188639 Merge mysql.com:/home/ram/work/b31154/b31154.5.0
into  mysql.com:/home/ram/work/b31154/b31154.5.1
2007-10-15 09:44:22 +05:00
ramil/ram@mysql.com/ramil.myoffice.izhnet.ru
84d1e3f8f0 Fix for bug #31154: field.h:1649: virtual int Field_bit::cmp(const uchar*, const uchar*): Assertion
Problem: GROUP_CONCAT(DISTINCT BIT_FIELD...) uses a tree to store keys,
which are constructed using temporary table fields;
see Item_func_group_concat::setup().
As a) we don't store null bits in the tree, where the bit fields store parts
of their data, and b) there's no method to properly compare two table records,
we've got a problem.

Fix: convert BIT fields to INT in the temporary table used.
2007-10-11 17:20:34 +05:00
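A stand-in sketch of why byte-wise key comparison breaks when part of a BIT value lives outside the compared storage, and why widening to an integer fixes it (toy types, not the server's):

  #include <cstdint>
  #include <cstring>
  #include <set>

  // A BIT-style value whose storage is split: most bits in `bytes`, one
  // bit kept out-of-band (as with the null-bit area mentioned above).
  struct SplitBitValue {
      uint8_t bytes;      // inside the memcmp'ed key range
      uint8_t extra_bit;  // outside of it
  };

  // Tree keys compared as flat bytes see only `bytes`, so two distinct
  // values that differ in extra_bit wrongly compare as duplicates.
  bool keys_equal(const SplitBitValue &a, const SplitBitValue &b) {
      return std::memcmp(&a.bytes, &b.bytes, sizeof a.bytes) == 0;
  }

  // Fix direction: store the whole value in an integer column, so all of
  // it sits in the comparable bytes.
  uint16_t as_int(const SplitBitValue &v) {
      return static_cast<uint16_t>(v.bytes) |
             static_cast<uint16_t>(v.extra_bit) << 8;
  }

  int main() {
      SplitBitValue x{0x2A, 0}, y{0x2A, 1};           // two distinct values
      bool collapsed = keys_equal(x, y);              // true: wrongly equal
      std::set<uint16_t> tree = {as_int(x), as_int(y)};
      return (collapsed && tree.size() == 2) ? 0 : 1; // INT keys stay distinct
  }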
gshchepa/uchum@gleb.loc
c404cd7c9c Merge gleb.loc:/home/uchum/work/bk/5.0-opt
into  gleb.loc:/home/uchum/work/bk/5.1-opt
2007-07-20 04:04:57 +05:00
evgen@moonbone.local
934089a82b Bug#29850: Wrong charset of GROUP_CONCAT result when the select employs
a temporary table.

The result string of the Item_func_group_concat wasn't initialized in the
copy constructor of the Item_func_group_concat class. This led to a
wrong charset of the GROUP_CONCAT result when the select employs a temporary
table.

The copy constructor of the Item_func_group_concat class now correctly
initializes the charset of the result string.
2007-07-19 20:21:23 +04:00
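The essence is a copy constructor that must carry over every piece of derived state, the result string's charset included. A stand-in sketch (illustrative types):

  struct CharsetInfo { const char *name; };

  struct ResultString {
      CharsetInfo cs{"binary"};   // default charset stand-in; buffer omitted
  };

  struct GroupConcatLike {        // stand-in for Item_func_group_concat
      ResultString result;

      GroupConcatLike() { result.cs = CharsetInfo{"utf8"}; }

      // The buggy copy constructor left `result` default-constructed, so
      // the clone built for the temporary table reported the wrong charset.
      GroupConcatLike(const GroupConcatLike &other) {
          result.cs = other.result.cs;   // the one-line essence of the fix
      }
  };

  int main() {
      GroupConcatLike original;
      GroupConcatLike clone(original);   // clone.result.cs is now "utf8"
  }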
mhansson@dl145s.mysql.com
d6cf093408 Merge dl145s.mysql.com:/users/mhansson/mysql/push/bug23856/my50-bug23856
into  dl145s.mysql.com:/users/mhansson/mysql/push/bug23856/mysql-5.0o-pushee
2007-05-22 14:48:49 +02:00
mhansson@dl145s.mysql.com
6870631ed4 Merge dl145s.mysql.com:/users/mhansson/mysql/push/bug23856/my51-bug23856
into  dl145s.mysql.com:/users/mhansson/mysql/push/bug23856/mysql-5.1o-pushee
2007-05-22 09:54:04 +02:00
mhansson@dl145s.mysql.com
2abc880b82 Merge dl145s.mysql.com:/users/mhansson/mysql/push/bug23856/my50-bug23856
into  dl145s.mysql.com:/users/mhansson/mysql/push/bug23856/my51-bug23856
2007-05-21 14:28:31 +02:00
mhansson@dl145s.mysql.com
6530f6806a bug#23856 2007-05-21 10:27:33 +02:00
holyfoot/hf@hfmain.(none)
85d5dfedf6 Merge mysql.com:/d2/hf/mrg/mysql-5.0-opt
into  mysql.com:/d2/hf/mrg/mysql-5.1-opt
2007-05-18 20:00:49 +05:00
mhansson/martin@linux-st28.site
b1375104b3 bug#28273: GROUP_CONCAT and ORDER BY: No warning when result gets truncated.
When using GROUP_CONCAT with ORDER BY, a tree is used for the sorting, as
opposed to the normal nested-loops join used when there is no ORDER BY.

The tree traversal that generates the result counts the rows that have been
cut down (as they get cut down to the field's max_size).
But the check of that count happened before the tree traversal, so no
warning was generated if the output was truncated.

Fixed by moving the check to after the tree traversal.
2007-05-11 16:05:20 +03:00
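The order-of-operations bug is easy to state in miniature: the truncation counter is only incremented during the output pass, so any check of it must come after that pass. A toy sketch:

  #include <iostream>
  #include <string>
  #include <vector>

  int main() {
      std::vector<std::string> ordered_rows = {"aaaa", "bbbb"};
      const std::size_t max_len = 6;   // stand-in for group_concat_max_len
      std::size_t cut_count = 0;
      std::string result;

      // (Checking cut_count here, before the traversal, can only see 0.)

      for (const auto &r : ordered_rows) {        // the "tree traversal"
          if (result.size() + r.size() > max_len) {
              result.append(r, 0, max_len - result.size());
              ++cut_count;                        // counted during traversal
              break;
          }
          result += r;
      }

      if (cut_count)                              // fixed: check afterwards
          std::cerr << "warning: " << cut_count << " row(s) truncated\n";
      std::cout << result << '\n';                // prints "aaaabb"
  }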
evgen@moonbone.local
f470ac2000 Merge moonbone.local:/mnt/gentoo64/work/bk-trees/mysql-5.0-opt
into  moonbone.local:/mnt/gentoo64/work/bk-trees/mysql-5.1-opt
2007-03-31 02:42:40 +04:00
gkodinov/kgeorge@magare.gmz
4f2ec8f3de Bug #26815:
When creating a temporary table, the concise column type
of a string expression is decided based on its length:
- if its length is under 512 it is stored as either
  varchar or char;
- otherwise it is stored as a BLOB.

There is a flag (convert_blob_length) to create_tmp_field()
that, when > 0, allows forcing creation of a varchar if the
max blob length is under convert_blob_length.
However it must be verified that convert_blob_length
(settable through a SQL option in some cases) is
under the maximum that can be stored in a varchar column.
While performing that check for expressions in
create_tmp_field_from_item(), the max length of the blob was
used instead. This caused blob columns to be created in the
heap temp table used by GROUP_CONCAT (where blobs must not
be created in the temp table because of the constant
convert_blob_length that is passed to create_tmp_field()).
And since these blob columns are not expected in that place,
we got wrong results.
Fixed by checking that the value of the flag variable is
within the limits that fit into VARCHAR instead of the max
length of the blob column.
2007-03-27 19:28:04 +03:00
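A reduced model of the corrected decision (constants and names are illustrative, not the server's actual limits):

  #include <cstdint>

  enum class TmpFieldType { VARCHAR_FIELD, BLOB_FIELD };

  // Decide the temp-table column type for a string expression.
  TmpFieldType tmp_string_type(uint32_t expr_max_len,
                               uint32_t convert_blob_length) {
      const uint32_t kVarcharMax = 65535;  // illustrative VARCHAR byte cap

      // Fixed: verify the *flag* fits in a VARCHAR. The buggy variant
      // tested expr_max_len against kVarcharMax here, letting BLOB columns
      // leak into heap temp tables that cannot hold them.
      if (convert_blob_length > 0 &&
          convert_blob_length <= kVarcharMax &&
          expr_max_len <= convert_blob_length)
          return TmpFieldType::VARCHAR_FIELD;

      return TmpFieldType::BLOB_FIELD;
  }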
bar@bar.intranet.mysql.r18.ru
41a324ea3a Merge mysql.com:/usr/home/bar/mysql-5.0.b23451
into  mysql.com:/usr/home/bar/mysql-5.1.b23451
2006-11-08 22:14:36 +04:00
bar@mysql.com/bar.intranet.mysql.r18.ru
ac3ce653b4 Merge mysql.com:/usr/home/bar/mysql-4.1.b23451v2
into  mysql.com:/usr/home/bar/mysql-5.0.b23451
2006-11-08 17:03:37 +04:00
bar@mysql.com/bar.intranet.mysql.r18.ru
599b731660 Bug#23451 GROUP_CONCAT truncates a multibyte utf8 character
Problem: GROUP_CONCAT on a multi-byte column can truncate
  in the middle of a multi-byte character when applying the
  group_concat_max_len limit. It produces an invalid
  multi-byte character in the result string.

This is the second, easier version: it reuses the old
"warning_for_row" flag instead of introducing "result_is_full",
which was added in the previous commit.
2006-11-07 12:45:48 +04:00
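The server's fix goes through its charset library, but the boundary rule itself fits in a few lines; a UTF-8-only toy version:

  #include <cstddef>
  #include <iostream>
  #include <string>

  // Cut to at most max_bytes without landing inside a multi-byte
  // character: back up over UTF-8 continuation bytes (10xxxxxx).
  std::size_t safe_cut(const std::string &s, std::size_t max_bytes) {
      if (s.size() <= max_bytes)
          return s.size();                  // nothing to cut
      std::size_t n = max_bytes;
      while (n > 0 && (static_cast<unsigned char>(s[n]) & 0xC0) == 0x80)
          --n;                              // step back to a character start
      return n;
  }

  int main() {
      std::string s = "ab\xE2\x82\xAC";     // "ab" + a 3-byte euro sign
      // A raw byte cut at 4 would emit 2 bytes of a broken character;
      // the boundary-aware cut stops at "ab".
      std::cout << s.substr(0, safe_cut(s, 4)) << '\n';
  }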
gkodinov@dl145s.mysql.com
aaed398254 Merge dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.0-opt
into  dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.1-opt
2006-10-19 16:43:46 +02:00
gkodinov/kgeorge@macbook.gmz
dff0a1ac7c Merge macbook.gmz:/Users/kgeorge/mysql/work/B14019-4.1-opt
into  macbook.gmz:/Users/kgeorge/mysql/work/B14019-5.0-opt
2006-10-16 13:24:54 +03:00
gkodinov/kgeorge@macbook.gmz
115616381d BUG#14019 : group by converts literal string to column name
When resolving unqualified name references MySQL was not
   checking the item type of the reference. Thus
   e.g. a string literal item, which by convention has a name
   equal to its string value, would also work as a reference to
   a SELECT list item or a table field.
   Fixed by allowing only Item_ref or Item_field to be referenced by
   (unqualified) name.
2006-10-16 13:10:25 +03:00
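In miniature, the resolver change is a type filter on name matches (stand-in types; the real code works on the Item class hierarchy):

  #include <string>
  #include <vector>

  enum class ItemType { FIELD_ITEM, REF_ITEM, STRING_ITEM };

  struct Item {
      ItemType type;
      std::string name;  // a string literal's name equals its value
  };

  // Resolve an unqualified name against the SELECT list.
  const Item *resolve_by_name(const std::vector<Item> &select_list,
                              const std::string &name) {
      for (const auto &it : select_list) {
          if (it.name != name)
              continue;
          // Fixed: a name match alone is not enough; only fields and
          // references may be targets of an unqualified name.
          if (it.type == ItemType::FIELD_ITEM || it.type == ItemType::REF_ITEM)
              return &it;
      }
      return nullptr;  // a literal 'b' no longer matches a column named b
  }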
gkodinov/kgeorge@macbook.gmz
a01e7b12ce Merge macbook.gmz:/Users/kgeorge/mysql/work/B21174-5.0-opt
into  macbook.gmz:/Users/kgeorge/mysql/work/B21174-5.1-opt
2006-09-27 13:03:41 +03:00
igor@rurik.mysql.com
f2225cab27 Fixed bug #22015: crash with GROUP_CONCAT over a derived table
that returns the results of aggregation by GROUP_CONCAT.
The crash was due to an overflow that happened for the field
sortorder->length.
The fix prevents this overflow by exploiting the fact that the
value of sortorder->length cannot be greater than the value of
thd->variables.max_sort_length.
2006-09-20 08:08:57 -07:00
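The fix pattern is "clamp before you narrow": bound the computed length by max_sort_length while it is still wide. A sketch with illustrative types:

  #include <algorithm>
  #include <cstdint>

  // sortorder->length stand-in: clamp the item's declared width to the
  // session's max_sort_length before storing it in the narrower field,
  // so the assignment cannot overflow.
  uint32_t sort_key_length(uint64_t item_max_length, uint32_t max_sort_length) {
      return static_cast<uint32_t>(
          std::min<uint64_t>(item_max_length, max_sort_length));
  }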
gkodinov@dl145s.mysql.com
ce8ed889d7 Merge dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.0-opt
into  dl145s.mysql.com:/data/bk/team_tree_merge/MERGE/mysql-5.1
2006-09-18 12:57:20 +02:00
gkodinov/kgeorge@macbook.gmz
91e93eb7d0 Merge macbook.gmz:/Users/kgeorge/mysql/work/B16792-4.1-opt
into  macbook.gmz:/Users/kgeorge/mysql/work/B16792-5.0-opt
2006-09-05 17:09:12 +03:00
gkodinov/kgeorge@macbook.gmz
9ff33b5d93 Bug #16792 query with subselect, join, and group not returning proper values
Treat queries with no FROM clause and aggregate functions as normal queries,
so the aggregate functions get correctly calculated as if there were one row.
This means that they will be considered to have one row, so COUNT(*) will
return 1 instead of 0. Other aggregates will behave in a compatible manner.
2006-08-10 16:45:02 +03:00
evgen@sunlight.local
dda7a95c59 Merge epotemkin@bk-internal.mysql.com:/home/bk/mysql-5.1-opt
into  sunlight.local:/local_work/tmp_merge-5.1-opt-mysql
2006-08-01 09:24:19 +04:00
evgen@sunlight.local
ef4f149536 Merge sunlight.local:/local_work/tmp_merge-5.0-opt-mysql
into  sunlight.local:/local_work/tmp_merge-5.1-opt-mysql
2006-07-30 00:33:24 +04:00
sergefp@mysql.com
699291a8e6 BUG#14940 "MySQL choose wrong index", v.2
- Make the range-et-al optimizer produce E(#table records after the table
  condition is applied),
- Make the join optimizer use this value,
- Add a "filtered" column to EXPLAIN EXTENDED to show the
  fraction of records left after the table condition is applied,
- Adjust test results, add comments.
2006-07-28 21:27:01 +04:00
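For reference, the "filtered" column is a percentage, so the estimate the join optimizer consumes is simply (a sketch of the arithmetic, not the optimizer code):

  // E(#records after the table condition) from EXPLAIN EXTENDED's numbers:
  // `rows` is the raw estimate, `filtered` the surviving percentage.
  double expected_rows(double rows, double filtered) {
      return rows * filtered / 100.0;   // e.g. 1000 rows at 12.5% -> 125
  }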
gkodinov/kgeorge@macbook.gmz
9380bb837f Bug#16712: group_concat returns odd string instead of intended result
When calculating GROUP_CONCAT, all blob fields are transformed
  to varchar when making the temp table.
  However a varchar has at most 2 bytes for length.
  This fix makes the conversion only for blobs whose max length
  is below that limit.
  Otherwise the blob field is created by a make_string_field() call.
2006-07-25 11:45:10 +03:00
monty@mysql.com
74cc73d461 This changeset is largely a handler cleanup changeset (WL#3281), but it includes fixes and cleanups that were found necessary while testing the handler changes.
Changes that require code changes in other storage engines.
(Note that all changes are very straightforward and one should find all issues
by compiling a --debug build and fixing all compiler errors and all
asserts in field.cc while running the test suite.)

- New optional handler function introduced: reset()
  This is called after every DML statement to make it easy for a handler to
  do statement-specific cleanups.
  (The only case it's not called is when we force the file to be closed.)

- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
  should be moved to handler::reset()

- table->read_set contains a bitmap over all columns that are needed
  in the query. read_row() and similar functions only need to read these
  columns.
- table->write_set contains a bitmap over all columns that will be updated
  in the query. write_row() and update_row() only need to update these
  columns.
  The above bitmaps should now be up to date in all contexts
  (including ALTER TABLE and filesort()).

  The handler is informed of any changes to the bitmap after
  fix_fields() by calling the virtual function
  handler::column_bitmaps_signal(). If the handler does caching of
  these bitmaps (instead of using table->read_set, table->write_set),
  it should redo the caching in this call. As the signal may be sent
  several times, it's probably best to set a variable in the signal
  handler and redo the caching on read_row() / write_row() if the
  variable was set.

- Removed the read_set and write_set bitmap objects from the handler class

- Removed all column bit handling functions from the handler class.
  (One now uses the normal bitmap functions in my_bitmap.c instead
  of handler-dedicated bitmap functions.)

- field->query_id is removed. One should instead check
  table->read_set and table->write_set to see whether a field is used in the query.

- handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and
  handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now
  instead use table->read_set to check for which columns to retrieve.

- If a handler needs to call Field->val() or Field->store() on columns
  that are not used in the query, one should install a temporary
  all-columns-used map while doing so. For this, we provide the following
  functions:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
  field->val();
  dbug_tmp_restore_column_map(table->read_set, old_map);

  and similar for the write map:

  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
  field->val();
  dbug_tmp_restore_column_map(table->write_set, old_map);

  If this is not done, you will sooner or later hit a DBUG_ASSERT
  in the field store() / val() functions.
  (For non-DBUG binaries, dbug_tmp_use_all_columns() and
  dbug_tmp_restore_column_map() are inline dummy functions and should
  be optimized away by the compiler.)

- If one needs to temporarily set the column map for all binaries (and not
  just to avoid the DBUG_ASSERT() in the Field::store() / Field::val()
  methods) one should use the functions tmp_use_all_columns() and
  tmp_restore_column_map() instead of the above dbug_ variants.

- All 'status' fields in the handler base class (like records,
  data_file_length etc) are now stored in a 'stats' struct. This makes
  it easier to know which status variables are provided by the base
  handler. This requires some trivial variable renames in the extra()
  function.

- New virtual function handler::records(). This is called to optimize
  COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true.
  (stats.records is not supposed to be an exact value. It only has to
  be 'reasonable enough' for the optimizer to be able to choose a good
  optimization path.)

- Non-virtual handler::init() function added for caching of virtual
  constants from the engine.

- Removed has_transactions() virtual method. Now one should instead return
  HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
  transactions.

- The 'xxxx_create_handler()' function now has a MEM_ROOT argument
  that is to be used with 'new handler_name()' to allocate the handler
  in the right area. The xxxx_create_handler() function is also
  responsible for any initialization of the object before returning.

  For example, one should change:

  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }

  ->

  static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }

- New optional virtual function: use_hidden_primary_key().
  This is called in case of an update/delete when
  (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
  but we don't have a primary key. This allows the handler to take
  precautions, remembering any hidden primary key, to be able to
  update/delete any found row. The default handler marks all columns
  to be read.

- handler::table_flags() now returns a ulonglong (to allow for more flags).

- New/changed table_flags()
  - HA_HAS_RECORDS	    Set if ::records() is supported
  - HA_NO_TRANSACTIONS	    Set if engine doesn't support transactions
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                            Set if we should mark all primary key columns for
			    read when reading rows as part of a DELETE
			    statement. If there is no primary key,
			    all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ  Set if engine will not read all columns in some
			    cases (based on table->read_set)
 - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
   			    Renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
 - HA_DUPP_POS              Renamed to HA_DUPLICATE_POS
 - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                            Set this if we should mark ALL key columns for
                            read when reading rows as part of a DELETE
                            statement. In case of an update we will mark
                            all keys for read for which any key part changed
                            value.
  - HA_STATS_RECORDS_IS_EXACT
			     Set this if stats.records is exact.
			     (This saves us some extra records() calls
			     when optimizing COUNT(*))
			    

- Removed table_flags()
  - HA_NOT_EXACT_COUNT     Now one should instead use HA_HAS_RECORDS if
			   handler::records() gives an exact count() and
			   HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME	   Removed (no one supported this one)

- Removed the no-longer-needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk()

- Renamed handler::dupp_pos to handler::dup_pos

- Removed the unused variable handler::sortkey


Upper level handler changes:

- ha_reset() now does some overall checks and calls ::reset()
- ha_table_flags() added. This is a cached version of table_flags(). The
  cache is created at engine creation time and updated on open.


MySQL level changes (not obvious from the above):

- DBUG_ASSERT() added to check that column usage matches what is set
  in the column usage bitmaps. (This found a LOT of bugs in the current
  column-marking code.)

- Before, in 5.1, all used columns were marked in read_set and only updated
  columns were marked in write_set. Now we only mark columns in read_set
  for which we need a value.

- Column bitmaps are created in open_binary_frm() and open_table_from_share().
  (Before this was in table.cc)

- handler::table_flags() calls are replaced with handler::ha_table_flags()

- For calling field->val() you must have the corresponding bit set in
  table->read_set. For calling field->store() you must have the
  corresponding bit set in table->write_set. (There are asserts in
  all store()/val() functions to catch wrong usage)

- thd->set_query_id is renamed to thd->mark_used_columns and instead
  of setting this to an integer value, this has now the values:
  MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
  Changed also all variables named 'set_query_id' to mark_used_columns.

- In filesort() we now inform the handler of exactly which columns are needed
  for doing the sort and choosing the rows.

- The TABLE_SHARE object has an 'all_set' column bitmap one can use
  when one needs a column bitmap with all columns set.
  (This is used for table->use_all_columns() and other places.)

- The TABLE object has 3 column bitmaps:
  - def_read_set     Default bitmap for columns to be read
  - def_write_set    Default bitmap for columns to be written
  - tmp_set          Can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set,
  that the handler should use to find out which columns are used in which way.

- count() optimization now calls handler::records() instead of using
  handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).

- Added an extra argument to Item::walk() to indicate if we should also
  traverse subqueries.

- Added TABLE parameter to cp_buffer_from_ref()

- Don't close tables created with CREATE ... SELECT but keep them in
  the table cache. (Faster usage of newly created tables).


New interfaces:

- table->clear_column_bitmaps() to initialize the bitmaps for tables
  at start of new statements.

- table->column_bitmaps_set() to set up new column bitmaps and signal
  the handler about this.

- table->column_bitmaps_set_no_signal() for the few cases where we need
  to set up new column bitmaps but don't signal the handler (as the handler
  has already been signaled about these before). Used for the moment
  only in opt_range.cc when doing ROR scans.

- table->use_all_columns() to install a bitmap where all columns are marked
  as used in both the read and the write set.

- table->default_column_bitmaps() to install the normal read and write
  column bitmaps, but not signaling the handler about this.
  This is mainly used when creating TABLE instances.

- table->mark_columns_needed_for_delete(),
  table->mark_columns_needed_for_update() and
  table->mark_columns_needed_for_insert() to allow us to put additional
  columns in the column usage maps if the handler so requires.
  (The handler indicates what it needs in handler->table_flags().)

- table->prepare_for_position() to allow us to tell the handler that it
  needs to read primary key parts to be able to store them in
  future table->position() calls.
  (This replaces the table->file->ha_retrieve_all_pk function.)

- table->mark_auto_increment_column() to tell the handler that we are going
  to update columns that are part of any auto_increment key.

- table->mark_columns_used_by_index() to mark all columns that are part of
  an index. It will also send extra(HA_EXTRA_KEYREAD) to the handler to allow
  it to quickly know that it only needs to read columns that are part
  of the key. (The handler can also use the column map for detecting this,
  but a simpler/faster handler can just monitor the extra() call.)

- table->mark_columns_used_by_index_no_reset() to, in addition to the other
  columns, also mark all columns that are used by the given key.

- table->restore_column_maps_after_mark_index() to restore to default
  column maps after a call to table->mark_columns_used_by_index().

- New item function register_field_in_read_map(), for marking used columns
  in table->read_map. Used by filesort() to mark all used columns.

- Maintain in TABLE->merge_keys the set of all keys that are used in the query.
  (Simplifies some optimization loops.)

- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key
  but a field in the clustered key is not assumed to be part of all indexes.
  (Used in opt_range.cc for faster loops.)

-  dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
   tmp_use_all_columns() and tmp_restore_column_map() functions to temporarily
   mark all columns as usable. The 'dbug_' versions are primarily intended
   inside a handler when it wants to just call the Field::store() and
   Field::val() functions, but doesn't need the column maps set for any
   other usage (i.e. bitmap_is_set() is never called).

- We can't use compare_records() to skip updates for handlers that return
  a partial column set when the read_set doesn't cover all columns in the
  write set. The reason for this is that if we have a column marked only for
  write, we can't at the MySQL level know whether the value changed or not.
  The reason this worked before was that MySQL marked all to-be-written
  columns as also to be read. The new 'optimal' bitmaps exposed this hidden
  bug.

- open_table_from_share() no longer sets up a temporary MEM_ROOT
  object as a thread-specific variable for the handler. Instead we
  send the to-be-used MEM_ROOT to get_new_handler().
  (Simpler, faster code.)



Bugs fixed:

- Column marking was not done correctly in a lot of cases
  (ALTER TABLE, when using triggers, auto_increment fields etc).
  This could potentially result in wrong values being inserted in table
  handlers relying on the old column maps or field->query_id being correct.
  Especially when it comes to triggers, there may be cases where the
  old code would cause lost/wrong values for NDB and/or InnoDB tables.

- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags:
  OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
  This allowed me to remove some wrong warnings about
  "Some non-transactional changed tables couldn't be rolled back".

- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset
  (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose
  some warnings about
  "Some non-transactional changed tables couldn't be rolled back".

- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
  which could cause delete_table to report random failures.

- Fixed core dumps for some tests when running with --debug

- Added missing FN_LIBCHAR in mysql_rm_tmp_tables().
  (This has probably caused us to not properly remove temporary files after
  a crash.)

- slow_logs was not properly initialized, which could maybe cause
  extra/lost entries in the slow log.

- If we get a duplicate row on insert, change the column map to read and
  write all columns while retrying the operation. This is required by
  the definition of REPLACE and also ensures that fields that are only
  part of UPDATE are properly handled. This fixed a bug in NDB where
  REPLACE wrongly copied some column values from the replaced row.

- For table handlers that don't support NULL in keys, we would give an error
  when creating a primary key with NULL fields, even after the fields had been
  automatically converted to NOT NULL.

- Creating a primary key on a SPATIAL key would fail if the field was not
  declared as NOT NULL.


Cleanups:

- Removed the unused condition argument to setup_tables()

- Removed the no-longer-needed item function reset_query_id_processor().

- Field->add_index is removed. This is now instead maintained in
  (field->flags & FIELD_IN_ADD_INDEX).

- Field->fieldnr is removed (use field->field_index instead)

- New argument to filesort() to indicate that it should return a set of
  row pointers (instead of the used columns). This allowed me to remove some references
  to sql_command in filesort and should also enable us to return column
  results in some cases where we couldn't before.

- Changed column bitmap handling in opt_range.cc to be aligned with TABLE
  bitmap, which allowed me to use bitmap functions instead of looping over
  all fields to create some needed bitmaps. (Faster and smaller code)

- Broke up some too-long lines.

- Moved some variable declarations to the start of functions for better code
  readability.

- Removed some unused arguments from functions.
  (setup_fields(), mysql_prepare_insert_check_table())

- setup_fields() now takes an enum instead of an int for marking column
   usage.

- For internal temporary tables, use handler::write_row(),
  handler::delete_row() and handler::update_row() instead of
  handler::ha_xxxx() for faster execution.

- Changed some constants to enum's and define's.

- Using separate column read and write sets allows for easier checking
  of whether the timestamp field was set by the statement.

- Removed calls to free_io_cache(), as this is now done automatically in ha_reset()

- Don't build table->normalized_path as this is now identical to table->path
  (after bar's fixes to convert filenames)

- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it easier to
  do comparisons with the 'convert-dbug-for-diff' tool.


Things left to do in 5.1:

- We wrongly log failed CREATE TABLE ... SELECT in some cases when using
  row-based logging (as shown by the test case binlog_row_mix_innodb_myisam.result).
  Mats has promised to look into this.

- Test that my fix for CREATE TABLE ... SELECT is indeed correct.
  (I added several test cases for this, but in this case it's better that
  someone else also tests this thoroughly.)
  Lars has promised to do this.
2006-06-04 18:52:22 +03:00
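Most of the per-engine work above revolves around honoring table->read_set / table->write_set. A self-contained sketch of the consuming side, with stand-in types (the server itself uses MY_BITMAP and bitmap_is_set()):

  #include <bitset>
  #include <cstddef>

  constexpr std::size_t kMaxCols = 64;
  using ColumnBitmap = std::bitset<kMaxCols>;   // stand-in for MY_BITMAP

  struct TableLike {
      ColumnBitmap read_set;    // columns whose values the query needs
      ColumnBitmap write_set;   // columns the query will store into
  };

  // An engine that announces HA_PARTIAL_COLUMN_READ may fetch only the
  // columns marked in read_set instead of materializing the whole row.
  struct PartialReadEngine {
      void fetch_column(std::size_t /*col*/) { /* engine-specific I/O */ }

      void read_row(const TableLike &t) {
          for (std::size_t c = 0; c < kMaxCols; ++c)
              if (t.read_set.test(c))   // like bitmap_is_set(table->read_set, c)
                  fetch_column(c);      // skip everything the query won't read
      }
  };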
evgen@moonbone.local
6ea27b1013 Manually merged 2006-04-25 13:04:39 +04:00
igor@rurik.mysql.com
639e875032 Post merge fixes 2006-04-21 08:19:38 -07:00
evgen@moonbone.local
e347ec5cbe func_gconcat.result, func_gconcat.test:
Remove duplicate test case for bug#14169
2006-04-20 13:34:14 +04:00
igor@rurik.mysql.com
692da27388 Post merge fix 2006-04-20 00:42:12 -07:00
igor@rurik.mysql.com
27cc6f4bc3 Merge rurik.mysql.com:/home/igor/dev/mysql-4.1-2
into  rurik.mysql.com:/home/igor/dev/mysql-5.0-0
2006-04-19 18:08:15 -07:00
igor@rurik.mysql.com
67458961cf Temporarily commented out a query from the test case for bug 14169 to make it pass with --ps-protocol. 2006-04-19 16:08:37 -07:00
evgen@moonbone.local
8b4eae6e4f func_gconcat.result:
Corrected test case for the bug#14169 to make it pass in --ps-protocol mode.
2006-04-20 00:27:43 +04:00
evgen@moonbone.local
ac54aa2aee Fixed bug#14169: type of group_concat() result changed to blob if tmp_table was
used

In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512 chars and a
temporary table is used during the select, then the result is converted to
blob, due to the policy of not storing fields longer than 512 chars in a tmp
table as varchar fields.

In order to provide consistent behaviour, the result of GROUP_CONCAT() will
now always be converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
2006-04-12 23:05:38 +04:00
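The consistency rule reduces to one deterministic type decision that no longer depends on whether a temp table happened to be used (a sketch; the constant mirrors the 512-char policy limit described above):

  enum class ResultType { VARCHAR_RESULT, BLOB_RESULT };

  // Declared result type of GROUP_CONCAT: always BLOB above the policy
  // limit, temp table or not (mirrors the field_type() change).
  ResultType group_concat_result_type(unsigned max_length) {
      const unsigned kPolicyLimit = 512;
      return max_length > kPolicyLimit ? ResultType::BLOB_RESULT
                                       : ResultType::VARCHAR_RESULT;
  }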
hartmut@mysql.com
4a4c459fa3 Merge mysql.com:/home/hartmut/projects/mysql/dev/5.0
into  mysql.com:/home/hartmut/projects/mysql/dev/5.1
2006-04-07 11:23:55 +02:00
gluh@eagle.intranet.mysql.r18.ru
4ef7e5b5f7 Fix for bug#18281 group_concat changes charset to binary
Skip charset aggregation for ORDER BY columns.
2006-04-07 13:19:31 +05:00
igor@rurik.mysql.com
d6af1b6e39 Merge rurik.mysql.com:/home/igor/dev/mysql-5.0-0
into  rurik.mysql.com:/home/igor/dev/mysql-5.1-0
2006-04-01 02:57:56 -08:00
evgen@sunlight.local
eb075f2255 Manual merge 2006-03-30 17:14:55 +04:00