Add support for direct update and direct delete requests for the Spider
storage engine.
A direct update/delete request handles all qualified rows in a single
operation rather than one row at a time.
Contains Spiral patches:
006_mariadb-10.2.0.direct_update_rows.diff MDEV-7704
008_mariadb-10.2.0.partition_direct_update.diff MDEV-7706
010_mariadb-10.2.0.direct_update_rows2.diff MDEV-7708
011_mariadb-10.2.0.aggregate.diff MDEV-7709
027_mariadb-10.2.0.force_bulk_update.diff MDEV-7724
061_mariadb-10.2.0.mariadb-10.1.8.diff MDEV-12870
- The differences compared to the original patches:
- Most of the parameters of the new functions were unnecessary and have
  been removed.
- Changed the bit positions of the new handler flags, taking into account
  handler flags not needed by other Spiral patches and handler flags
  merged from MySQL.
- Added info_push() (was originally part of the bulk access patch)
- Didn't include code related to handler socket
- Added HA_CAN_DIRECT_UPDATE_AND_DELETE
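A minimal sketch of how an engine advertises and implements this capability.
Only the flag name comes from this patch; the class name, method signatures
and the execute_remote_update() helper below are illustrative, not the real
Spider code:

  ulonglong ha_example::table_flags() const
  {
    /* Tell the server that qualified rows can be updated/deleted in one call */
    return HA_CAN_DIRECT_UPDATE_AND_DELETE | other_handler_flags;
  }

  int ha_example::direct_update_rows(ha_rows *updated_rows)
  {
    /*
      A Spider-like engine can forward a single UPDATE ... WHERE ... to the
      remote data node instead of doing one fetch/update round trip per row.
    */
    *updated_rows= execute_remote_update();
    return 0;
  }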
Original author: Kentoku SHIBA
First reviewer: Jacob Mathew
Second reviewer: Michael Widenius
Most "new" failures fixed in the following files:
- sql_select.cc
- item.cc
- item_func.cc
- opt_subselect.cc
Other things:
- Allocate udf_handler strings in mem_root
- Required changes in sql_string.h
- Add mem_root as argument to some new [] calls
- Mark udf_handler strings as thread specific
- Removed some comment blocks with code
This was done to get more information about where time is spent.
Now we can get proper timing for time spent in commit, rollback,
binlog write etc.
The following stages were added:
- Commit
- Commit_implicit
- Rollback
- Rollback implicit
- Binlog write
- Init for update
- This is used instead of "Init" for insert, update and delete.
- Starting cleanup
The following stages were changed:
- "Unlocking tables" stage reset stage to previous stage at end
- "binlog write" stage resets stage to previous stage at end
- "end" -> "end of update loop"
- "cleaning up" -> "Reset for next command"
- Added stage_searching_rows_for_update when searching for rows
to be deleted.
Other things:
- Renamed all stages to start with a capital letter (previously there was
  no consistency)
- Increased performance_schema_max_stage_classes from 150 to 160.
- Most of the test changes in performance schema comes from renaming of
stages.
- Removed duplicate output of variables and initial state in a lot of
performance schema tests.
This was done to make it easier to change a default value for a
performance variable without affecting all tests.
- Added start_server_variables.test to check configuration
- Removed some duplicate "closing tables" stages
- Updated position for "stage_init_update" and "stage_updating" for
delete, insert and update to be just before update loop (for more
exact timing).
- Don't set "Checking permissions" twice in a row.
- Remove stage_end stage from creating views (not done for create table
either).
- Updated default performance history size from 10 to 20 because of new
stages
- Ensure that ps_enabled is correct (to be used in a later patch)
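For context, a sketch of how one of the stages added above is declared and
entered, following the existing PSI_stage_info / THD_STAGE_INFO pattern; the
exact declaration and call sites are assumed:

  /* Stage descriptor (typically in mysqld.cc / mysqld.h): */
  PSI_stage_info stage_binlog_write= { 0, "Binlog write", 0 };

  /* At the point where the binary log is written: */
  THD_STAGE_INFO(thd, stage_binlog_write);  /* time from here on is accounted
                                               to the "Binlog write" stage in
                                               SHOW PROCESSLIST and the
                                               performance schema */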
Multi-update first runs a select to find the affected rows, then performs
a separate update step. In the second step, WITH CHECK OPTION rows
are read with rnd_read, but the first step might have been done with
keyread.
keyread over indexed virtual columns only reads the column's value, not
dependent base columns. This is reflected in the read_set too. But on
the rnd_read step base columns must be read - thus we need to update the
read_set before doing updates.
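An illustrative sketch (not the actual patch) of the kind of fix described:
before the rnd_read/update step, walk each virtual column's generation
expression and register its base columns in read_set again. The members
(vcol_info, expr) and the register_field_in_read_map processor exist in the
tree, but their exact use here is assumed:

  static void remark_vcol_base_columns(TABLE *table)
  {
    for (Field **fp= table->field; *fp; fp++)
    {
      Field *field= *fp;
      if (field->vcol_info &&                                 /* virtual column   */
          bitmap_is_set(table->read_set, field->field_index)) /* used by keyread  */
        field->vcol_info->expr->walk(&Item::register_field_in_read_map,
                                     1, table);               /* mark base columns */
    }
  }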
- Added the sql/mariadb.h file, which should be included first by files in the
  sql directory unless sql_plugin.h is used (sql_plugin.h adds SHOW variables,
  which must be done before my_global.h is included); see the include-order
  sketch after this list
- Removed a lot of includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicated includes of my_sys.h
- Replaced include of my_config.h with my_global.h
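A sketch of the intended include order for a file under sql/ after this change
(the headers other than mariadb.h are just typical examples):

  #include "mariadb.h"     /* must come first; replaces direct use of my_global.h */
  #include "sql_class.h"   /* THD and friends */
  #include "sql_base.h"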
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
Fixed by moving clear_error() from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that says whether
  clear_error() should be called or not. By default it is called, but no
  longer from dispatch_command(), which was the original problem.
- Added an optional parameter to clear_error() to force calling of
  reset_diagnostics_area(). Previously clear_error() only called
  reset_diagnostics_area() if there was no error, so we normally
  called reset_diagnostics_area() twice (see the sketch after this list).
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources)
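A small standalone illustration of the shape of the clear_error() change
described above: an optional parameter lets the caller force
reset_diagnostics_area(), while the old conditional behaviour (abstracted
here as needs_reset) is kept for the default case. Toy types, not the
server's real Diagnostics_area:

  #include <cstdio>

  struct Toy_diagnostics
  {
    bool needs_reset= false;   /* stands in for the old internal condition */
    void reset_diagnostics_area() { needs_reset= false; std::puts("reset"); }
    void clear_error(bool force= false)
    {
      /* New: the caller can force a reset; otherwise the old conditional
         behaviour is kept, so the common path does not reset twice. */
      if (force || needs_reset)
        reset_diagnostics_area();
    }
  };

  int main()
  {
    Toy_diagnostics da;
    da.clear_error();        /* default path: may skip the reset            */
    da.clear_error(true);    /* forced reset, e.g. at start of do_command() */
    return 0;
  }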
Benefits of this patch:
- Removed a lot of calls to strlen(), especially for field_string
- Strings generated by the parser are now const strings, giving less chance
  of accidentally changing a string (the structs involved are sketched after
  this list)
- Removed a lot of calls with LEX_STRING as parameter (changed to pointer)
- More uniform code
- Item::name_length was not kept up to date. Now fixed
- Several bugs found and fixed (access to null pointers,
  access of freed memory, wrong arguments to printf-like functions)
- Removed a lot of casts from (const char*) to (char*)
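For reference, the two structs involved, paraphrased from the server headers
(see the include/mysql/ headers for the authoritative definitions):

  #include <stddef.h>

  typedef struct st_mysql_lex_string
  {
    char   *str;
    size_t  length;
  } LEX_STRING;

  typedef struct st_mysql_const_lex_string
  {
    const char *str;
    size_t      length;
  } LEX_CSTRING;

  /* Because the length travels with the (const) pointer, call sites can pass
     parser-generated names without strlen() and without risking accidental
     modification of the string. */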
Changes:
- This caused some ABI changes
- lex_string_set now uses LEX_CSTRING
- Some functions now take const char* instead of char*
- Create_field::change and after changed to LEX_CSTRING
- handler::connect_string, comment and engine_name() changed to LEX_CSTRING
- Checked printf() related calls to find bugs. Found and fixed several
errors in old code.
- A lot of changes from LEX_STRING to LEX_CSTRING, especially related to
parsing and events.
- Some changes from LEX_STRING and LEX_STRING& to LEX_CSTRING*
- Some changes from char* to const char*
- Added printf argument checking for my_snprintf()
- Introduced null_clex_str, star_clex_string, temp_lex_str to simplify
code
- Added item_empty_name and item_used_name to be able to distinguish between
  items that were given an empty name and items that were not given a name.
  This is used in sql_yacc.yy to know when to give an item a name.
- select table_name."*" is no longer the same as table_name.*
- Removed the unused function Item::rename()
- Added a comparison of item->name_length before some calls to
  my_strcasecmp() to speed up the comparison
- Moved Item_sp_variable::make_field() from item.h to item.cc
- Some minimal code changes to avoid copying to const char *
- Fixed wrong error message in wsrep_mysql_parse()
- Fixed wrong code in find_field_in_natural_join() where real_item() was
  set when it shouldn't have been
- ER_ERROR_ON_RENAME was used with extra arguments.
- Removed some (wrong) ER_OUTOFMEMORY, as alloc_root will already
give the error.
TODO:
- Check possible unsafe casts in plugin/auth_examples/qa_auth_interface.c
- Change code to not modify LEX_CSTRING for database name
(as part of lower_case_table_names)
Filesort temporarily changes read_set to be tmp_set and marks only
fields needed for filesort. Add an assert to ensure that it doesn't
overwrite the old value of tmp_set, that is that read_set was *not*
already tmp_set when filesort was invoked.
Fix sql_update.cc, which was doing exactly that: changing read_set to
tmp_set, configuring tmp_set for keyread, and then invoking filesort.
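The added assertion is of this kind (exact placement and form in filesort()
assumed):

  /* read_set must not already be tmp_set when filesort() takes it over */
  DBUG_ASSERT(table->read_set != &table->tmp_set);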
mark_columns_used_by_index used to do
reset + mark_columns_used_by_index_no_reset + start keyread + set bitmaps
Now prepare_for_keyread does that, while mark_columns_used_by_index
does only reset + mark_columns_used_by_index_no_reset,
just as its name suggests.
TABLE::add_read_columns_used_by_index() is conceptually wrong,
it *adds* columns used by index to the bitmap, without clearing
it first. But it also enables keyread, meaning that *only* columns
from the index will be read. It is supposed to be used to
add columns used by an index to a bitmap that already has columns
of a primary key - for engines where a primary key is part of every
index.
The correct fix is to change mark_columns_used_by_index() to
take into account extended keys.
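A rough sketch of the split described above; the argument lists and member
names are assumed, not copied from the tree:

  void TABLE::mark_columns_used_by_index(uint index, MY_BITMAP *bitmap)
  {
    bitmap_clear_all(bitmap);                            /* reset ...        */
    mark_columns_used_by_index_no_reset(index, bitmap);  /* ... then mark    */
  }

  void TABLE::prepare_for_keyread(uint index, MY_BITMAP *bitmap)
  {
    mark_columns_used_by_index(index, bitmap);           /* fill the bitmap  */
    column_bitmaps_set(bitmap, bitmap);                  /* install bitmaps  */
    file->ha_start_keyread(index);                       /* enable keyread   */
  }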
This reverts 1d0acc7754 and cf97cbd1db.
Move TABLE::key_read into handler, because in index merge and DS-MRR
there can be many handlers per table, and some of them use
keyread while others don't. "keyread" is really a per-handler
property, not a per-TABLE one.
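A simplified sketch of the per-handler keyread state; the member and wrapper
names are assumed, and the real code lives in handler.h:

  class handler
  {
  protected:
    uint keyread;                   /* index used for keyread, MAX_KEY if off */
  public:
    bool keyread_enabled() const { return keyread < MAX_KEY; }
    int ha_start_keyread(uint idx)
    {
      if (keyread_enabled())        /* per-handler state: index merge / DS-MRR */
        return 0;                   /* handlers no longer share one TABLE flag */
      keyread= idx;
      return extra(HA_EXTRA_KEYREAD);
    }
    int ha_end_keyread()
    {
      if (!keyread_enabled())
        return 0;
      keyread= MAX_KEY;
      return extra(HA_EXTRA_NO_KEYREAD);
    }
  };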
- Changed the error handler interface so that handlers can change the error
  level in the handler (sketched below)
- Give warnings and errors when calculating virtual columns
- On insert/update, errors are fatal in strict mode.
- SELECT and DELETE will only give a warning if a virtual field generates an error
- Added VCOL_UPDATE_FOR_DELETE and VCOL_UPDATE_INDEX_FOR_REPLACE to be able to
easily detect in update_virtual_fields() if we should use an error
handler to mask errors or not.
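A sketch of the kind of error handler this enables: because the warning level
is now passed by pointer, a handler used while computing virtual columns can
downgrade an error to a warning (the exact Internal_error_handler signature
is assumed):

  class Vcol_error_to_warning_handler : public Internal_error_handler
  {
  public:
    bool handle_condition(THD *thd, uint sql_errno, const char *sqlstate,
                          Sql_condition::enum_warning_level *level,
                          const char *msg, Sql_condition **cond_hdl)
    {
      if (*level == Sql_condition::WARN_LEVEL_ERROR)
        *level= Sql_condition::WARN_LEVEL_WARN;  /* mask the error           */
      return false;                              /* let normal reporting run */
    }
  };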
When updating a table with virtual BLOB columns, the following might
happen:
- an old record is read from the table, it has no virtual blob values
- update_virtual_fields() is run, vcol blob gets its value into the
record. But only a pointer to the value is in the table->record[0],
the value is in Field_blob::value String (but it doesn't have to be!
it can be in the record, if the column is just a copy of another
  column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table, record[1]) is called; the old record is now in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To fix this I have introduced a new String object 'read_value' in
Field_blob. When updating virtual columns when a row has been read,
the allocated value is stored in 'read_value' instead of 'value'. The
allocated blobs for the new row is stored in 'value' as before.
As a safety precaution, I also made the INSERT DELAYED handling of
blobs more general by using 'value' to store strings instead of the
record. This ensures that virtual functions on delayed insert
work as in the case of a normal insert.
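A standalone toy illustration of the aliasing problem and of the read_value
idea (toy types, not the real Field_blob):

  #include <cassert>
  #include <cstring>
  #include <string>

  struct Toy_blob_field
  {
    std::string value;       /* buffer for the blob of the *new* row           */
    std::string read_value;  /* buffer for the blob computed for the *old* row */

    const char *store_old(const char *data)   /* vcol computed after the read  */
    { read_value= data; return read_value.c_str(); }

    const char *store_new(const char *data)   /* vcol computed for the update  */
    { value= data; return value.c_str(); }
  };

  int main()
  {
    Toy_blob_field blob;
    /* record[0]/record[1] only hold pointers to the blob data: */
    const char *record1_ptr= blob.store_old("old blob");   /* saved old row    */
    const char *record0_ptr= blob.store_new("new blob");   /* prepared new row */
    /* With separate buffers the old pointer stays valid; with a single
       'value' buffer it would now point at the new (or reallocated) data. */
    assert(std::strcmp(record1_ptr, "old blob") == 0);
    assert(std::strcmp(record0_ptr, "new blob") == 0);
    return 0;
  }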
Triggers are now properly updating the read, write and vcol maps for used
fields. This means that we don't need VCOL_UPDATE_FOR_READ_WRITE anymore
and there is no need for any other special handling of triggers in
update_virtual_fields().
To be able to test how many times virtual fields are invoked, I also
relaxed the rules so that local (@) variables can be used in DEFAULT and
non-persistent virtual field expressions.