'version' variables.
The warnings occur on the Windows build, yet they are also valid
on 32-bit Unix.
Fix is to consistently use 64bit integer on all platforms.
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and long cast in DBUG)
- Also fix printf-format warnings
Make the above mentioned warnings fatal.
- fix pthread_join on Windows to set return value.
- Added TABLE_SHARE->not_usable_by_query_cache
- Moved TABLE->no_replicate to TABLE_SHARE->no_replicate as it's same for
all TABLE instances
- Renamed TABLE_SHARE->cached_row_logging_check to can_do_row_logging
- Added sql/mariadb.h file that should be included first by files in sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be done before my_global.h is included)
- Removed a lot of includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicated includes of my_sys.h
- Replaced include my_config.h with my_global.h
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and change max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
Fixed by moving clear_error() from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter specifying whether we
should call clear_error() or not. By default it's called, but no longer from
dispatch_command(), which was the original problem.
- Added an optional parameter to clear_error() to force calling of
reset_diagnostics_area(). Before, clear_error() only called
reset_diagnostics_area() if there was no error, so we normally
called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources)
The problem was that on open of a sequence we didn't check whether the table is a view, which is not allowed.
We are now generating a proper error message for this case.
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
statement emitted implicitly when a client connection is closed.
So this patch adds such ddl mark for the missing cases.
The difference to Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
When running setup_fields() during the final step of INSERT ... SELECT,
the final setup_fields() does not have any sum functions. Our current
condition for calling split_sum_func() would however attempt to use an empty
NULL sum_func_list if the item contained a window function.
The solution is to not perform another split_sum_func for the item
containing a window function if we do not actually have a sum_func_list.
table_already_fk_prelocked() was looking for a table in the wrong
list (not the complete list of prelocked tables, but only in its tail,
starting from the current table - which is always empty for the last
added table), so for circular FKs it kept adding same tables to the list
indefinitely.
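A minimal sketch of the difference between the two scans, using invented stand-in types (a std::list of a simple TableRef) rather than the real TABLE_LIST structures:

  #include <cassert>
  #include <iterator>
  #include <list>
  #include <string>

  struct TableRef { std::string db, name; };

  // Buggy variant: starts scanning from the most recently added element,
  // so earlier entries are never seen and, with circular FKs, the same
  // table keeps being re-added forever.
  static bool already_prelocked_tail(std::list<TableRef>::iterator from,
                                     std::list<TableRef>::iterator end,
                                     const TableRef &t)
  {
    for (auto it= from; it != end; ++it)
      if (it->db == t.db && it->name == t.name)
        return true;
    return false;
  }

  // Fixed variant: scans the complete prelocked list from its beginning.
  static bool already_prelocked(const std::list<TableRef> &all, const TableRef &t)
  {
    for (const auto &e : all)
      if (e.db == t.db && e.name == t.name)
        return true;
    return false;
  }

  int main()
  {
    std::list<TableRef> prelocked{{"db", "parent"}, {"db", "child"}};
    TableRef again{"db", "parent"};
    assert(!already_prelocked_tail(std::prev(prelocked.end()), prelocked.end(), again));
    assert(already_prelocked(prelocked, again));   // the full scan finds it
  }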
* sys fields are NULL by default (with exceptions, see comment about NOT_NULL_FLAG in #77);
* error codes renamed, messages cleared out;
* SHOW CREATE TABLE fixed;
* set_max() fix;
* redundant flag setters/getters removed;
* flags are set in sql_yacc.yy, redundant copy_info_about_generated_fields() eliminated.
* BEGIN_TS(), COMMIT_TS() SQL functions;
* VTQ, instead of a packed value, stores secs + usecs like my_timestamp_to_binary() does;
* versioned SELECT to IB is translated with COMMIT_TS();
* SQL fixes:
- FOR_SYSTEM_TIME_UNSPECIFIED condition compares to TIMESTAMP_MAX_VALUE;
- segfault fix#36: multiple execute of prepared stmt;
- different tables to same stored procedure fix (#39)
* Fixes of previous parts: ON DUPLICATE KEY, other misc fixes.
Benefits of this patch:
- Removed a lot of calls to strlen(), especially for field_string
- Strings generated by the parser are now const strings, less chance of
accidentally changing a string
- Removed a lot of calls with LEX_STRING as parameter (changed to pointer)
- More uniform code
- Item::name_length was not kept up to date. Now fixed
- Several bugs found and fixed (Access to null pointers,
access of freed memory, wrong arguments to printf like functions)
- Removed a lot of casts from (const char*) to (char*)
Changes:
- This caused some ABI changes
- lex_string_set now uses LEX_CSTRING
- Some functions now take const char* instead of char*
- Create_field::change and after changed to LEX_CSTRING
- handler::connect_string, comment and engine_name() changed to LEX_CSTRING
- Checked printf() related calls to find bugs. Found and fixed several
errors in old code.
- A lot of changes from LEX_STRING to LEX_CSTRING, especially related to
parsing and events.
- Some changes from LEX_STRING and LEX_STRING & to LEX_CSTRING*
- Some changes for char* to const char*
- Added printf argument checking for my_snprintf()
- Introduced null_clex_str, star_clex_string, temp_lex_str to simplify
code
- Added item_empty_name and item_used_name to be able to distinguish between
items that were given an empty name and items that were not given a name.
This is used in sql_yacc.yy to know when to give an item a name.
- select table_name."*" is no longer the same as table_name.*
- removed the unused function Item::rename()
- Added comparison of item->name_length before some calls to
my_strcasecmp() to speed up comparison
- Moved Item_sp_variable::make_field() from item.h to item.cc
- Some minimal code changes to avoid copying to const char *
- Fixed wrong error message in wsrep_mysql_parse()
- Fixed wrong code in find_field_in_natural_join() where real_item() was
set when it shouldn't have been
- ER_ERROR_ON_RENAME was used with extra arguments.
- Removed some (wrong) ER_OUTOFMEMORY, as alloc_root will already
give the error.
TODO:
- Check possible unsafe casts in plugin/auth_examples/qa_auth_interface.c
- Change code to not modify LEX_CSTRING for database name
(as part of lower_case_table_names)
This was wrong because:
- There was no reason to roll back the name of an item that will be deleted
after the query.
- name_length was not rolled back
- Changing real_item() doesn't work as it may be used many times in the
same query
After removing all the old code and extending the test case, all the
related test cases pass.
Sanja and I concluded that the old code isn't needed anymore. If it
is still needed for some scenario not covered by our test system, it needs
to be coded in some other way, so better to remove the wrong code.
Working features:
CREATE OR REPLACE [TEMPORARY] SEQUENCE [IF NOT EXISTS] name
[ INCREMENT [ BY | = ] increment ]
[ MINVALUE [=] minvalue | NO MINVALUE ]
[ MAXVALUE [=] maxvalue | NO MAXVALUE ]
[ START [ WITH | = ] start ] [ CACHE [=] cache ] [ [ NO ] CYCLE ]
ENGINE=xxx COMMENT=".."
SELECT NEXT VALUE FOR sequence_name;
SELECT NEXTVAL(sequence_name);
SELECT PREVIOUS VALUE FOR sequence_name;
SELECT LASTVAL(sequence_name);
SHOW CREATE SEQUENCE sequence_name;
SHOW CREATE TABLE sequence_name;
CREATE TABLE sequence-structure ... SEQUENCE=1
ALTER TABLE sequence RENAME TO sequence2;
RENAME TABLE sequence TO sequence2;
DROP [TEMPORARY] SEQUENCE [IF EXISTS] sequence_names
Missing features
- SETVAL(value,sequence_name), to be used with replication.
- Check replication, including checking that sequence tables are marked
not transactional.
- Check that a commit happens for NEXT VALUE that changes table data (may
already work)
- ALTER SEQUENCE. ANSI SQL version of setval.
- Share identical sequence entries to not add things twice to table list.
- testing insert/delete/update/truncate/load data
- Run and fix Alibaba sequence tests (part of mysql-test/suite/sql_sequence)
- Write documentation for NEXT VALUE / PREVIOUS_VALUE
- NEXTVAL in DEFAULT
- Ensure that NEXTVAL in DEFAULT uses database from base table
- Two NEXTVAL for same row should give same answer.
- Oracle syntax sequence_table.nextval, without any FOR or FROM.
- Sequence tables are treated as 'not read constant tables' by SELECT; it would
be better if we had a separate list for sequence tables so that
SELECT doesn't know about them, except if referred to with FROM.
Other things done:
- Improved output for safemalloc backtrack
- frm_type_enum changed to Table_type
- Removed lex->is_view and replaced it with lex->table_type. This allows
us to more easily check whether an item is a view, sequence or table.
- Added table flag HA_CAN_TABLES_WITHOUT_ROLLBACK, needed for handlers
that want to support sequences
- Added handler calls:
- engine_name(), to simplify getting engine name for partition and sequences
- update_first_row(), to be able to do efficient sequence implementations.
- Made binlog_log_row() global to be able to call it from ha_sequence.cc
- Added handler variable: row_already_logged, to be able to flag that the
changed row is already logged to the replication log.
- Added CF_DB_CHANGE and CF_SCHEMA_CHANGE flags to simplify
deny_updates_if_read_only_option()
- Added sp_add_cfetch() to avoid new conflicts in sql_yacc.yy
- Moved code for add_table_options() out from sql_show.cc::show_create_table()
- Added String::append_longlong() and used it in sql_show.cc to simplify code.
- Added extra option to dd_frm_type() and ha_table_exists to indicate if
the table is a sequence. Needed by DROP SEQUENCE to not drop a table.
Commit 7450cb7f6 caused gcc-6.3.1 errors:
mariadb-server/sql/sql_base.cc: In function ‘bool fix_all_session_vcol_exprs(THD*, TABLE_LIST*)’:
mariadb-server/sql/sql_base.cc:4821:7: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation]
for (Field **df= t->default_field; df && *df; df++)
^~~
mariadb-server/sql/sql_base.cc:4826:9: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘for’
for (Virtual_column_info **cc= t->check_constraints; cc && *cc; cc++)
^~~
It was obvious from 7450cb7f6 that the indenting should have been removed
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
* rename to "keyread" (to avoid conflicts with tokudb),
* change from bool to uint and store the keyread index number there
* provide a bool accessor to check if keyread is enabled
move TABLE::key_read into handler. Because in index merge and DS-MRR
there can be many handlers per table, and some of them use
key read while others don't. "keyread" is really a per-handler
property, not a per-TABLE one.
Optionally do table->update_default_fields() even for INSERT
that supposedly provides values for all columns. Because these
"values" might be DEFAULT, which would need table->update_default_fields()
at the end.
Also set Item_default_value::used_tables() from the default expression.
Non-zero used_field() means that mysql_insert() will initialize all
fields to their default values (with restore_record()) even if
all columns are later provided with values. Because default expressions
may refer to other columns and they must be initialized.
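A toy model of why the defaults are applied first (made-up column names, a plain std::map standing in for the record; not the real restore_record()/fill_record() code):

  #include <cstdio>
  #include <map>
  #include <string>

  // Toy model only: column "b" is declared as DEFAULT (a + 1), i.e. its
  // default expression reads another column. All columns are first
  // initialized from their static defaults, so the expression never sees
  // an uninitialized field, even when the INSERT lists every column but
  // passes DEFAULT for "b".
  int main()
  {
    std::map<std::string, int> row;

    // step 1: initialize every column from its static default
    row["a"]= 0;
    row["b"]= 0;

    // step 2: apply the values the INSERT actually provided
    row["a"]= 42;                 // explicit value
    row["b"]= row["a"] + 1;       // the "value" was DEFAULT -> evaluate expression

    std::printf("a=%d b=%d\n", row["a"], row["b"]);   // a=42 b=43
  }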
The temporary tables created for recursive table references
should be closed in close_thread_tables(), because they might
be used in the statements like ANALYZE WITH r AS (...) SELECT * from r
where r is defined through recursion.
- Changed error handlers interface so that they can change error level in
the handler
- Give warnings and errors when calculating virtual columns
- On INSERT/UPDATE an error is fatal in strict mode.
- SELECT and DELETE will only give a warning if a virtual field generates an error
- Added VCOL_UPDATE_FOR_DELETE and VCOL_UPDATE_INDEX_FOR_REPLACE to be able to
easily detect in update_virtual_fields() if we should use an error
handler to mask errors or not.
When updating a table with virtual BLOB columns, the following might
happen:
- an old record is read from the table, it has no virtual blob values
- update_virtual_fields() is run, vcol blob gets its value into the
record. But only a pointer to the value is in the table->record[0],
the value is in Field_blob::value String (but it doesn't have to be!
it can be in the record, if the column is just a copy of another
columns: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table,record[1]), old record now is in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To fix this I have introduced a new String object 'read_value' in
Field_blob. When updating virtual columns when a row has been read,
the allocated value is stored in 'read_value' instead of 'value'. The
allocated blobs for the new row are stored in 'value' as before.
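A simplified stand-alone model of the problem and the fix, with std::string standing in for the String buffers of Field_blob (names and layout are illustrative only):

  #include <cassert>
  #include <cstring>
  #include <string>

  // Simplified model (not the real Field_blob code): a "record" stores only
  // a pointer to the blob payload; the payload itself lives in a string
  // buffer owned by the field.
  struct BlobField
  {
    std::string value;       // buffer for the value of the current row
    std::string read_value;  // buffer for the value of the row that was read
  };

  struct Record { const char *blob_ptr; };

  int main()
  {
    BlobField f;

    // Read the old row: with the fix, the computed vcol blob goes into
    // read_value, i.e. a buffer that fill_record() will not touch.
    f.read_value= "old-blob";
    Record record1{ f.read_value.c_str() };   // store_record(table, record[1])

    // Prepare the new row: the new value goes into 'value' and may allocate
    // freely; it can no longer invalidate the pointer saved in record[1].
    f.value= std::string(1024, 'x');          // fill_record() for record[0]
    Record record0{ f.value.c_str() };

    assert(std::strcmp(record1.blob_ptr, "old-blob") == 0);   // still valid
    assert(record0.blob_ptr[0] == 'x');
  }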
I also made, as a safety precaution, the insert delayed handling of
blobs more general by using value to store strings instead of the
record. This ensures that virtual functions on delayed insert
work as in the case of a normal insert.
Triggers are now properly updating the read, write and vcol maps for used
fields. This means that we don't need VCOL_UPDATE_FOR_READ_WRITE anymore
and there is no need for any other special handling of triggers in
update_virtual_fields().
To be able to test how many times virtual fields are invoked, I also
relaxed the rules so that one can use local (@) variables in DEFAULT and
non-persistent virtual field expressions.
Implementation of MDEV-7660 introduced unwanted incompatible change:
modifications under LOCK TABLES with autocommit enabled are rolled back on
disconnect. Previously everything was committed, because LOCK TABLES didn't
adjust autocommit setting.
This patch restores original behavior by reverting some changes done in
MDEV-7660:
- sql/sql_parse.cc: do not reset autocommit on LOCK TABLES
- sql/sql_base.cc: do not set autocommit on UNLOCK TABLES
- test cases: main.lock_tables_lost_commit, main.partition_explicit_prune,
rpl.rpl_switch_stm_row_mixed, tokudb.nested_txn_implicit_commit,
tokudb_bugs.db806
But it makes InnoDB tables under LOCK TABLES ... READ [LOCAL] not protected
against DML. To restore protection some changes from WL#6671 were merged,
specifically MDL_SHARED_READ_ONLY and test cases.
WL#6671 merge highlights:
- Not all tests merged.
- In MySQL LOCK TABLES ... READ acquires MDL_SHARED_READ_ONLY for all engines,
in MariaDB MDL_SHARED_READ is always acquired first and then upgraded to
MDL_SHARED_READ_ONLY for InnoDB only.
- The above allows us to omit MDL_SHARED_WRITE_LOW_PRIO implementation in
MariaDB, which is rather useless with InnoDB. In MySQL it is needed to
preserve locking behavior between low priority writes and LOCK TABLES ... READ
for non-InnoDB engines (covered by sys_vars.sql_low_priority_updates_func).
- Omitted HA_NO_READ_LOCAL_LOCK, we rely on lock_count() instead.
- Omitted "piglets": in MariaDB stream of DML against InnoDB table may lead to
concurrent LOCK TABLES ... READ starvation.
- HANDLER ... OPEN acquires MDL_SHARED_READ instead of MDL_SHARED in MariaDB.
- Omitted SNRW->X MDL lock upgrade for IMPORT/DISCARD TABLESPACE under LOCK
TABLES.
- Omitted strong locks for views, triggers and SP under LOCK TABLES.
- Omitted IX schema lock for LOCK TABLES READ.
- Omitted deadlock weight juggling for LOCK TABLES.
Full WL#6671 merge status:
- innodb.innodb-lock: fully merged
- main.alter_table: not merged due to different HANDLER solution
- main.debug_sync: fully merged
- main.handler_innodb: not merged due to different HANDLER solution
- main.handler_myisam: not merged due to different HANDLER solution
- main.innodb_mysql_lock: fully merged
- main.insert_notembedded: fully merged
- main.lock: not merged (due to no strong locks for views)
- main.lock_multi: not merged
- main.lock_sync: fully merged (partially in MDEV-7660)
- main.mdl_sync: not merged
- main.partition_debug_sync: not merged due to different HANDLER solution
- main.status: fully merged
- main.view: fully merged
- perfschema.mdl_func: not merged (no such test in MariaDB)
- perfschema.table_aggregate_global_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_global_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_global_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_global_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_4u_3t: not merged (didn't fail in MariaDB)
- sys_vars.sql_low_priority_updates_func: not merged
- include/thr_rwlock.h: not merged, rw_pr_lock_assert_write_owner and
rw_pr_lock_assert_not_write_owner are macros in MariaDB
- sql/handler.h: not merged (HA_NO_READ_LOCAL_LOCK)
- sql/mdl.cc: partially merged (MDL_SHARED_READ_ONLY only)
- sql/mdl.h: partially merged (MDL_SHARED_READ_ONLY only)
- sql/lock.cc: fully merged
- sql/sp_head.cc: not merged
- sql/sp_head.h: not merged
- sql/sql_base.cc: partially merged (MDL_SHARED_READ_ONLY only)
- sql/sql_base.h: not merged
- sql/sql_class.cc: fully merged
- sql/sql_class.h: fully merged
- sql/sql_handler.cc: merged partially (different solution in MariaDB)
- sql/sql_parse.cc: partially merged, mostly omitted low priority write part
- sql/sql_reload.cc: not merged comment change
- sql/sql_table.cc: not merged SNRW->X upgrade for IMPORT/DISCARD TABLESPACE
- sql/sql_view.cc: not merged
- sql/sql_yacc.yy: not merged (MDL_SHARED_WRITE_LOW_PRIO, MDL_SHARED_READ_ONLY)
- sql/table.cc: not merged (MDL_SHARED_WRITE_LOW_PRIO)
- sql/table.h: not merged (MDL_SHARED_WRITE_LOW_PRIO)
- sql/trigger.cc: not merged
- storage/innobase/handler/ha_innodb.cc: merged store_lock()/lock_count()
changes (in MDEV-7660), didn't merge HA_NO_READ_LOCAL_LOCK
- storage/innobase/handler/ha_innodb.h: fully merged in MDEV-7660
- storage/myisammrg/ha_myisammrg.cc: not merged comment change
- storage/perfschema/table_helper.cc: not merged (no MDL support in MariaDB PFS)
- unittest/gunit/mdl-t.cc: not merged
- unittest/gunit/mdl_sync-t.cc: not merged
MariaDB specific changes:
- handler.heap: different HANDLER solution, MDEV-7660
- handler.innodb: different HANDLER solution, MDEV-7660
- handler.interface: different HANDLER solution, MDEV-7660
- handler.myisam: different HANDLER solution, MDEV-7660
- main.mdl_sync: MDEV-7660 specific changes
- main.partition_debug_sync: removed test due to different HANDLER solution,
MDEV-7660
- main.truncate_coverage: removed test due to different HANDLER solution,
MDEV-7660
- mysql-test/include/mtr_warnings.sql: additional cleanup, MDEV-7660
- mysql-test/lib/v1/mtr_report.pl: additional cleanup, MDEV-7660
- plugin/metadata_lock_info/metadata_lock_info.cc: not in MySQL
- sql/sql_handler.cc: MariaDB specific fix for mysql_ha_read(), MDEV-7660
... with wsrep_replicate_myisam enabled
Internal updates to system statistical tables could wrongly
trigger an additional total-order replication if
wsrep_replicate_myisam is enabled.
Fixed by adding a check to skip total-order replication for
stat tables.
for InnoDB tables"
Don't use thr_lock.c locks for InnoDB tables. Below is list of changes that
were needed to implement this:
- HANDLER OPEN acquires MDL_SHARED_READ instead of MDL_SHARED
- HANDLER READ calls external_lock() even if SE is not going to be locked by
THR_LOCK
- InnoDB lock wait timeouts are now honored which are much shorter by default
than server lock wait timeouts (1 year vs 50 seconds)
- with @@autocommit= 1 LOCK TABLES disables autocommit implicitly, though
the user still sees @@autocommit= 1
- the above starts implicit transaction
- transactions started by LOCK TABLES are now rolled back on disconnect
(previously everything was committed due to autocommit)
- transactions started by LOCK TABLES are now rolled back by ROLLBACK
(previously everything was committed due to autocommit)
- it is now impossible to change BINLOG_FORMAT under LOCK TABLES (at least
to statement) due to running transaction
- LOCK TABLES WRITE is additionally handled by MDL
- ...in contrast LOCK TABLES READ protection against DML is pure InnoDB
- combining transactional and non-transactional tables under LOCK TABLES
may cause rolled back changes in transactional table and "committed"
changes in non-transactional table
- user may disable innodb_table_locks, which will basically cause LOCK TABLES
to be a no-op
Removed tests for BUG#45143 and BUG#55930 which cover InnoDB + THR_LOCK. To
operate properly these tests require code flow to go through THR_LOCK debug
sync points, which is not the case after this patch. These tests are removed
by WL#6671 as well. An alternative is to port them to different storage engine.
Internal updates to system statistical tables could wrongly
trigger an additional total-order replication if
wsrep_replicate_myisam is enabled.
Fixed by adding a check to skip total-order replication for
stat tables.
Test: galera.galera_var_replicate_myisam_on
* revert part of the db7edfe that moved calculations from
fix_fields to val_str for Item_func_sysconst and descendants
* mark session state dependent functions in check_vcol_func_processor()
* re-run fix_fields for all such functions for every statement
* fix CURRENT_USER/CURRENT_ROLE not to use Name_resolution_context
(that is allocated on the stack in unpack_vcol_info_from_frm())
Note that NOW(), CURDATE(), etc use lazy initialization and do *not*
force fix_fields to be re-run. The rule is:
* lazy initialization is *not* allowed, if it changes metadata (so,
e.g. DAYNAME() cannot use it)
* lazy initialization is *preferable* if it has side effects (e.g.
NOW() sets thd->time_zone_used=1, so it's better to do it when
the value of NOW is actually needed, not when NOW is simply prepared)
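A rough sketch of the rule, with invented FakeThd/ItemNow types standing in for the real THD and item classes:

  #include <cassert>
  #include <ctime>

  // Toy model only; names are invented. "Metadata" (max_length) must be
  // fixed at prepare time, so it may not be computed lazily. The side
  // effect (time_zone_used) is cheap to defer until val() actually runs.
  struct FakeThd { bool time_zone_used= false; };

  struct ItemNow
  {
    unsigned max_length= 19;     // metadata: known without evaluating anything
    bool cached= false;
    time_t cached_value= 0;

    time_t val(FakeThd *thd)
    {
      if (!cached)               // lazy: evaluated once per statement
      {
        cached_value= time(nullptr);
        thd->time_zone_used= true;   // side effect only if value is needed
        cached= true;
      }
      return cached_value;
    }
  };

  int main()
  {
    FakeThd thd;
    ItemNow now;
    assert(now.max_length == 19);    // metadata available before evaluation
    assert(!thd.time_zone_used);     // side effect not triggered yet
    now.val(&thd);
    assert(thd.time_zone_used);
  }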
(like DROP TABLE) has been scheduled before conflicting DML's (like INSERT)
are committed.
What makes these bugs hard to detect is that in most cases any wrong
scheduling is caught by MDL locks. It's only when there are timing issues
that the bugs (usually deadlocks) are noticed.
This patch adds a DBUG_ASSERT() that detects, in parallel replication,
if a DDL is scheduled before any depending DMLs are committed.
It does this by checking if there are any conflicting replication locks
when the DDL is about to wait for getting its MDL lock.
I also did some minor code cleanups in sql_base.cc to make this code
similar to other related code.
.. share->last_version' failed in myisam/mi_open.c:67: test_if_reopen
During the RENAME operation since the renamed temporary table is also
opened and added to myisam_open_list/maria_open_list, resetting the
last_version at the end of operation (HA_EXTRA_PREPARE_FOR_RENAME)
will cause an assertion failure when a subsequent query tries to open
an additional temporary table instance and thus attempts to reuse it
from the open table list.
This commit fixes the issue by skipping flush/close operations executed
toward the end of ALTER for temporary tables. It also enables a shortcut
for simple ALTERs (like rename, disable/enable keys) on temporary
tables.
As safety checks, added some assertions at code points that should not
be hit for temporary tables.
* remove a confusing method name - Field::set_default_expression()
* remove handler::register_columns_for_write()
* rename stuff
* add asserts
* remove unlikely unlikely
* remove redundant if() conditions
* fix mark_unsupported_function() to report the most important violation
* don't scan vfield list for default values (vfields don't have defaults)
* move handling for DROP CONSTRAINT IF EXIST where it belongs
* don't protect engines from Alter_inplace_info::ALTER_ADD_CONSTRAINT
* comments
MDEV-10134 Add full support for DEFAULT
- Added support for using tables with MySQL 5.7 virtual fields,
including MySQL 5.7 syntax
- Better error messages also for old cases
- CREATE ... SELECT now also updates timestamp columns
- Blob can now have default values
- Added new system variable "check_constraint_checks", to turn off
CHECK constraint checking if needed.
- Removed some engine independent tests in suite vcol to only test myisam
- Moved some tests from 'include' to 't'. Should some day be done for all tests.
- FRM version increased to 11 if one uses virtual fields or constraints
- Changed to use a bitmap to check if a field has got a value, instead of
setting HAS_EXPLICIT_VALUE bit in field flags
- Expressions can now be up to 65K in total
- Ensure we are not referring to uninitialized fields when handling virtual fields or defaults
- Changed check_vcol_func_processor() to return a bitmap of used types
- Had to change some functions that calculated cached value in fix_fields to do
this in val() or getdate() instead.
- store_now_in_TIME() now takes a THD argument
- fill_record() now updates default values
- Add a lookahead for NOT NULL, to be able to handle DEFAULT 1+1 NOT NULL
- Automatically generate a name for constraints that don't have a name
- Added support for ALTER TABLE DROP CONSTRAINT
- Ensure that partition functions register virtual fields used. This fixes
some bugs when using virtual fields in a partitioning function
Since a query can now refer to the same temporary table
multiple times, find_dup_table()/find_table_in_list()
have been updated to also consider this new possibility.
mysqld maintains a list of TABLE objects for all temporary
tables created within a session in THD. Here each table is
represented by a TABLE object.
A query referencing a particular temporary table more
than once, however, failed with an ER_CANT_REOPEN_TABLE error
because a TABLE_SHARE was allocated together with the TABLE,
so temporary tables always had only one TABLE per TABLE_SHARE.
This patch lifts this restriction by separating TABLE and
TABLE_SHARE objects and storing TABLE_SHAREs for temporary
tables in a list in THD, and TABLEs in a list within their
respective TABLE_SHAREs.
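A simplified sketch of the resulting ownership layout (all type and function names here are invented, not the server's):

  #include <cassert>
  #include <list>
  #include <memory>
  #include <string>

  // Rough shape of the new layout: one share per temporary table, each
  // share owning any number of open instances, and the session owning
  // the list of shares.
  struct TmpTableShare;
  struct TmpTable { TmpTableShare *share; bool in_use= false; };

  struct TmpTableShare
  {
    std::string name;
    std::list<std::unique_ptr<TmpTable>> instances;   // the TABLEs of this share
  };

  struct SessionTmpTables                              // lives in the THD
  {
    std::list<std::unique_ptr<TmpTableShare>> shares;

    // A query referring to the same temporary table twice simply gets (or
    // opens) a second instance instead of failing with ER_CANT_REOPEN_TABLE.
    TmpTable *use_instance(TmpTableShare *share)
    {
      for (auto &t : share->instances)
        if (!t->in_use) { t->in_use= true; return t.get(); }
      share->instances.push_back(std::unique_ptr<TmpTable>(new TmpTable{share}));
      share->instances.back()->in_use= true;
      return share->instances.back().get();
    }
  };

  int main()
  {
    SessionTmpTables session;
    session.shares.push_back(std::unique_ptr<TmpTableShare>(new TmpTableShare));
    TmpTableShare *share= session.shares.back().get();
    TmpTable *a= session.use_instance(share);
    TmpTable *b= session.use_instance(share);   // second reference, same share
    assert(a != b && a->share == b->share);
  }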
- unused TABLE_SHARE::deleting and TABLE_LIST::deleting flags were removed
- kill_delayed_threads_for_table() and intern_close_table() are now private
methods of table cache
- removed free_share flag of closefrm(): it was never used for temporary
tables and was rarely useful for regular tables
- To ensure that mallocs are marked for the correct THD, even if they are
allocated in another thread, I added the thread_id to the THD constructor
- Added st_my_thread_var to thr_lock_info_init() to avoid a call to my_thread_var
- Moved things from THD::THD() to THD::init()
- Moved some things to THD::cleanup()
- Added THD::free_connection() and THD::reset_for_reuse()
- Added THD to CONNECT::create_thd()
- Added THD::thread_dbug_id and st_my_thread_var->dbug_id. These are needed
to ensure that we have a constant thread_id used for debugging with a THD,
even if it changes thread_id (=connection_id)
- Set variables.pseudo_thread_id in constructor. Removed not needed sets.
THAT ACTUALLY EXISTS
ANALYSIS:
=========
Stored functions updating a view where the view table has a
trigger defined that updates another table fail, reporting
an error that the table doesn't exist.
If there is a trigger defined on a table, a variable
'trg_event_map' will be set to a non-zero value after the
parsed tree creation. This indicates what triggers we need to
pre-load for the TABLE_LIST when opening an associated table.
During the prelocking phase, the variable 'trg_event_map'
will not be set for the view table. This value will be set
after the processing of triggers defined on the table. During
the processing of sub-statements, 'locked_tables_mode' will be
set to 'LTM_PRELOCKED' which denotes that further locking
of tables/functions cannot be done. This results in the other
table not being locked and thus further processing results in
an error getting reported.
FIX:
====
During the prelocking of view, the value of 'trg_event_map'
of the view is copied to 'trg_event_map' of the next table
in the TABLE_LIST. This results in the locking of tables
associated with the trigger as well.
This bug revealed a serious problem: if the same partition list
was used in two window specifications then the temporary table created
to calculate window functions contained fields for two identical
partitions. This problem was fixed as well.
Window functions need to have their own column in the work (temp) table,
like aggregate functions do.
They don't need val_int() -> val_int_result() conversion though, so they
should be wrapped with Item_direct_ref, not Item_aggregate_ref.
This fix also fixes a connection hang when trying to do INSERT DELAYED to a crashed table.
Added crash_mysqld.inc to allow easy crash+restart of mysqld
filesort and init_read_record() for the same table.
This will simplify code for WINDOW FUNCTIONS (MDEV-6115)
- Filesort_info renamed to SORT_INFO and moved to filesort.h
- filesort now returns SORT_INFO
- init_read_record() now takes a SORT_INFO parameter.
- unique declaration is moved to uniques.h
- subselect caching of buffers is now more explicit than before
- filesort_buffer is now reusable even if rec_length has changed.
- filesort_free_buffers() and free_io_cache() calls are removed
- Remove one malloc() when using get_addon_fields()
Other things:
- Added --debug-assert-on-not-freed-memory option to make it easier to
debug some not-freed-memory issues.
- Item_sum_count::remove() should check if the argument's value is NULL.
- Window Function item must have its Item_window_func::split_sum_func
called,
- and it must call split_sum_func for aggregate's arguments (see the
comment near Item_window_func::split_sum_func for explanation why)
"Re-factor the code for post-join operations".
The patch mainly contains the code ported from mysql-5.6 and
created for two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm
The first task was implemented for mysql-5.6 by Ole John Aske.
It allows all decisions on the ORDER BY operation to be made at the
optimization stage.
The second task implemented for mysql-5.6 by Evgeny Potemkin adds JOIN_TAB
nodes for post-join operations that require temporary tables. It allows
these operations to be executed within the nested loops algorithm, which used
to be used before this task only for join queries. Besides this, the task moves
all planning of the execution of these operations from the execution phase
to the optimization phase.
Some other re-factoring changes of mysql-5.6 were pulled in, mainly because
it was easier to pull them in than roll them back. In particular all
changes concerning Ref_ptr_array were incorporated.
The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.
Create a CONNECT object on client connect and pass it to the worker thread which creates the THD.
Split LOCK_thread_count into different mutexes
Added LOCK_thread_start to synchronize threads
Moved most usage of LOCK_thread_count to dedicated functions
Use next_thread_id() instead of thread_id++
Other things:
- Thread id now starts from 1 instead of 2
- Added cast for thread_id as thread id is now of type my_thread_id
- Made THD->host const (To ensure it's not changed)
- Removed some DBUG_PRINT() about entering/exiting mutex as these were already logged by the mutex code
- Fixed that aborted_connects and connection_errors_internal are counted in all cases
- Don't take locks for current_linfo when we set it (not needed as it was 0 before)
Don't compare "field == table->next_number_field" because the field
can be special nullable field copy created by the trigger.
Compare field_index values instead.
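A toy illustration of why the pointer comparison fails while the field_index comparison works (FakeField and the helper functions are invented for this example):

  #include <cassert>

  // The trigger may work on a nullable *copy* of the field, so pointer
  // identity fails even though both objects describe the same column;
  // comparing positions does not.
  struct FakeField { unsigned field_index; };

  static bool is_autoinc_by_ptr(const FakeField *f, const FakeField *autoinc)
  { return f == autoinc; }

  static bool is_autoinc_by_index(const FakeField *f, const FakeField *autoinc)
  { return f->field_index == autoinc->field_index; }

  int main()
  {
    FakeField original{2};
    FakeField trigger_copy{2};      // same column, different object
    assert(!is_autoinc_by_ptr(&trigger_copy, &original));
    assert(is_autoinc_by_index(&trigger_copy, &original));
  }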
NOT NULL constraint must be checked *after* the BEFORE triggers.
That is, for INSERT and UPDATE statements even NOT NULL fields
must be able to store a NULL temporarily at least while
BEFORE INSERT/UPDATE triggers are running.
* move common code to a new set_bad_null_error() function
* move repeated comparison out of the loop
* remove unused code
* unused method Table_triggers_list::set_table
* redundant condition (if (table) after table was dereferenced)
* add an assert
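A toy illustration of the NOT NULL ordering described above (Column, insert_row() and the trigger callback are invented for this sketch):

  #include <cassert>
  #include <functional>
  #include <optional>

  // Simplified model of the ordering: during INSERT the column may hold
  // NULL while the BEFORE trigger runs; only afterwards is the NOT NULL
  // constraint enforced.
  struct Column
  {
    bool not_null;
    std::optional<int> value;   // nullopt models SQL NULL
  };

  static bool insert_row(Column &col, std::optional<int> v,
                         const std::function<void(Column&)> &before_trigger)
  {
    col.value= v;               // may be NULL here even for a NOT NULL column
    before_trigger(col);        // the trigger gets a chance to fill it in
    return !(col.not_null && !col.value);   // constraint checked last
  }

  int main()
  {
    Column c{true, std::nullopt};
    // The BEFORE INSERT trigger supplies the missing value, so the insert succeeds.
    bool ok= insert_row(c, std::nullopt, [](Column &col){ col.value= 1; });
    assert(ok);
  }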
On shutdown feedback was sending a short report without creating
a THD. At that point current_thd was pointing to the already
destroyed THD from the previous full report.
backport from 10.1:
commit bfe703a
Author: Sergei Golubchik <serg@mariadb.org>
Date: Tue Feb 3 18:19:56 2015 +0100
don't let current_thd to point to a destroyed THD
Problem & Analysis: If DML invokes a trigger or a
stored function that inserts into an AUTO_INCREMENT column,
that DML has to be marked as 'unsafe' statement. If the
tables are locked in the transaction prior to DML statement
(using LOCK TABLES), then the same statement is not marked as
'unsafe' statement. The logic that checks for unsafeness
is protected with if (!thd->locked_tables_mode). Hence if
we lock the tables prior to the DML statement, it does *not* enter
this if condition. Hence the statement is not marked
as an unsafe statement.
Fix: Irrespective of locked_tables_mode value, the unsafeness
check should be done. Now with this patch, the code is moved
out to the 'decide_logging_format()' function where all these checks
are happening, and also without 'if(!thd->locked_tables_mode)'.
Along with the specified test case in the bug scenario
(BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS), we also identified that
other cases BINLOG_STMT_UNSAFE_AUTOINC_NOT_FIRST,
BINLOG_STMT_UNSAFE_WRITE_AUTOINC_SELECT, BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS
are also protected with thd->locked_tables_mode which is not right. All
of those checks also moved to 'decide_logging_format()' function.
HA_MYISAMMRG.CC:631
Analysis
========
Any attempt to open a temporary MyISAM merge table consisting
of a view in its list of tables (not the last table in the list)
under LOCK TABLES causes the server to exit.
The current implementation doesn't perform sanity checks during
merge table creation. This allows a merge table to be created
with incompatible tables (tables with a non-MyISAM engine),
views, or even with tables that don't exist in the system.
During view open, the check that verifies whether the requested view
is part of a merge table is missing in the LOCK TABLES path
in open_table(). This leads to opening of the underlying table
with parent_l having a NULL value. Later when attaching child
tables to parent, this hits an ASSERT as all child tables
should have parent_l pointing to merge parent. If the operation
does not happen under LOCK TABLES mode, open_table() checks
for view's parent_l and returns error.
Fix:
======
A check is added before opening a view under LOCK TABLES in open_table()
to verify whether it is part of a merge table. An error is returned
if the view is part of a merge table.
find_item_in_list() now recognizes view fields as fields even if they refer to an expression.
The problem of the schema name not being taken into account for fields that have one and for
derived tables is fixed.
Duplicated code removed
(MDEV-8617: Post-fix for 10.1)
* Reset THD's PS members before returning when node is
not ready
* Add CF_SKIP_WSREP_CHECK flag to COM_STMT_XXX commands
* Skip TO replication of COM_STMT_PREPAREs for MyISAM
* Updated tests
Problem: Not all permanent Item_direct_view_ref objects were in the permanent list of used items of the view.
Solution: Detect creation of permanent view/derived table references and put them in the permanent list at once.
- Part 3: Adding mem_root to push_back() and push_front()
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tag's in read_xml()
- Added mem_root to all calls to new Item
- Added private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item.
This saves us one call to current_thd per Item creation
Added mandatory thd parameter to Item (and all derivative classes) constructor.
Added thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
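A rough sketch of the allocation pattern this enforces, with a stand-in Arena instead of the real MEM_ROOT and a FakeItem instead of Item:

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Stand-in arena; the real MEM_ROOT works differently, this only models
  // "allocate from a pool that is freed as a whole".
  struct Arena
  {
    std::vector<char*> blocks;
    void *alloc(std::size_t size)
    {
      char *p= new char[size];
      blocks.push_back(p);
      return p;
    }
    ~Arena() { for (char *p : blocks) delete[] p; }
  };

  class FakeItem
  {
  public:
    explicit FakeItem(int v) : value(v) {}
    // The only way to create a FakeItem: the caller must supply the arena.
    static void *operator new(std::size_t size, Arena *arena)
    { return arena->alloc(size); }
    static void operator delete(void*, Arena*) {}   // matching placement form
    static void operator delete(void*) {}           // the arena owns the memory
    int value;
  private:
    static void *operator new(std::size_t)= delete; // no plain "new FakeItem"
  };

  int main()
  {
    Arena arena;
    FakeItem *item= new (&arena) FakeItem(42);      // arena passed explicitly
    std::printf("%d\n", item->value);
  }                                                 // arena frees all blocks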
Alternative fix that doesn't cause view.test crash in --ps:
Remember when Item_ref was fixed right in the constructor
and did not have a full Item_ref::fix_fields() call. Later
in PS/SP, after Item_ref::cleanup, we use this knowledge
to avoid doing full fix_fields() for items that were never
supposed to be fix_field'ed.
Simplify the test case.
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known or if there was many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some places where current_thd is not necessary defined.
- Removing calls to current_thd when we have access to thd
Part of this is optimization (not calling current_thd when not needed),
but part is bug fixing for error conditions where current_thd is not defined
(for example during startup and shutdown of mysqld)
Notable renames done as otherwise a lot of functions would have to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed old 'tab' prefix in JOIN_TAB::save_explain_data() and use members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
Fixed several optimizer issues related to GROUP BY:
a) Referring to a SELECT column in HAVING sometimes calculated it twice, which caused problems with non-deterministic functions
b) Removing duplicate fields and constants from GROUP BY was done too late for the "using index for group by" optimization to work
c) EXPLAIN SELECT ... GROUP BY wrongly showed 'Using filesort' in some cases involving "Using index for group-by"
a) was fixed by:
- Changed last argument to Item::split_sum_func2() from bool to int to allow more flags
- Added flag argument to Item::split_sum_func() to allow one to specify if the item was in the SELECT part
- Mark all split_sum_func() calls from SELECT with SPLIT_SUM_SELECT
- Changed split_sum_func2() to do nothing if called with an argument that is not a sum function and doesn't include sum functions, if we are not an argument to SELECT.
This ensures that in a case like
select a*sum(b) as f1 from t1 where a=1 group by c having f1 <= 10;
That 'a' in the SELECT part is stored as a reference in the temporary table together with sum(b), while the 'a' in HAVING isn't (not needed as 'a' is already a reference to a column in the result)
b) was fixed by:
- Added an extra remove_const() pass for GROUP BY arguments before make_join_statistics() in case of one table SELECT.
This allows get_best_group_min_max() to optimize things better.
c) was fixed by:
- Added test for group by optimization in JOIN::exec_inner for
select->quick->get_type() == QUICK_SELECT_I::QS_TYPE_GROUP_MIN_MAX
item.cc:
- Simplified Item::split_sum_func2()
- Split test to make them faster and easier to read
- Changed last argument to Item::split_sum_func2() from bool to int to allow more flags
- Added flag argument to Item::split_sum_func() to allow one to specify if the item was in the SELECT part
- Changed split_sum_func2() to do nothing if called with an argument that is not a sum function and doesn't include sum functions, if we are not an argument to SELECT.
opt_range.cc:
- Simplified get_best_group_min_max() by first calculating how many group_by elements there are.
- Use join->group instead of join->group_list to test if group by, as join->group_list may be NULL if everything was optimized away.
sql_select.cc:
- Added an extra remove_const() pass for GROUP BY arguments before make_join_statistics() in case of one table SELECT.
- Use group instead of group_list to test if group by, as group_list may be NULL if everything was optimized away.
- Moved printing of "Error in remove_const" to remove_const() instead of having it in caller.
- Simplified some if tests by re-ordering code.
- update_depend_map_for_order() and remove_const() fixed to handle the case where make_join_statistics() has not yet been called (join->join_tab is 0 in this case)
SELECT ... WHERE XX IN (SELECT YY)
this was transformed to something like:
SELECT ... WHERE IF_EXISTS(SELECT ... HAVING XX=YY)
The bug was that for normal execution XX was fixed in the original outer SELECT context, while in PS it was fixed in the subquery context, and this confused the optimizer.
Fixed by ensuring that XX is always fixed in the outer context.
Do not call handler::rebind_psi() and handler::unbind_psi() when performance
schema is compiled out.
Overhead change:
handler::rebind_psi 0.04% -> out of radar
handler::unbind_psi 0.03% -> out of radar
open_table 0.21% -> 0.18%
close_thread_table 0.05% -> 0.05%
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called dozens of times implicitly at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root allocation,
passing through THD pointer whenever it is needed.
Number of sql_alloc() calls reduced 345 -> 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59
sql_alloc() overhead dropped 0.25 -> 0.06
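A sketch of the two allocation paths being compared; the Arena type and function names are stand-ins, not the real MEM_ROOT or sql_alloc():

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // sql_alloc() has to find THD::mem_root through thread-local storage on
  // every call; allocating directly from an arena the caller already has
  // skips that lookup and can be inlined.
  struct Arena
  {
    std::vector<char*> blocks;
    void *alloc(std::size_t n) { char *p= new char[n]; blocks.push_back(p); return p; }
    ~Arena() { for (char *p : blocks) delete[] p; }
  };

  static thread_local Arena *current_arena= nullptr;   // models pthread_getspecific()

  // "sql_alloc() style": separate function, TLS lookup on every call.
  void *tls_alloc(std::size_t n) { return current_arena->alloc(n); }

  // "direct mem_root style": caller passes the arena it already knows.
  inline void *direct_alloc(Arena *arena, std::size_t n) { return arena->alloc(n); }

  int main()
  {
    Arena arena;
    current_arena= &arena;
    char *a= static_cast<char*>(tls_alloc(16));
    char *b= static_cast<char*>(direct_alloc(&arena, 16));
    std::printf("%p %p\n", static_cast<void*>(a), static_cast<void*>(b));
  }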
delete_dynamic() was called 9-11x per OLTP RO query + 3x per BEGIN/COMMIT.
3 calls were performed by LEX_MASTER_INFO. Added condition to call those only
for CHANGE MASTER.
1 call was performed by lock_table_names()/Hash_set/my_hash_free(). Hash_set was
supposed to be used for DDL and LOCK TABLES to gather database names, while it
was initialized/freed for DML too. In fact Hash_set didn't do any useful job
here. Hash_set was removed and MDL requests are now added directly to the list.
The rest 5-7 calls are done by optimizer, mostly by Explain_query and friends.
Since dynamic arrays are used in most cases, they can hardly be optimized.
my_hash_free() overhead dropped 0.02 -> out of radar.
delete_dynamic() overhead dropped 0.12 -> 0.04.
AVOID DEADLOCK AFTER RESTORE
Analysis
--------
Accessing the restored NDB table in an active multi-statement
transaction was resulting in deadlock found error.
MySQL Server needs to discover metadata of NDB table from
data nodes after table is restored from backup. Metadata
discovery happens on the first access to restored table.
Current code mandates this statement to be the first one
in the transaction. This is because discover needs exclusive
metadata lock on the table. Lock upgrade at this point can
lead to MDL deadlock and the code was written at the time
when MDL deadlock detector was not present. In case when
discovery attempted in the statement other than the first
one in transaction ER_LOCK_DEADLOCK error is reported
pessimistically.
Fix:
---
Removed the constraint as any potential deadlock will be
handled by the deadlock detector. Also changed code in discover
to keep metadata locks of the active transaction.
The same issue was present in the table auto repair scenario. The same
fix is added in the repair path also.
This was a regression from the patch for MDEV-7668.
A test was incorrect, so the slave would not properly handle re-using
temporary tables, which led to replication failure in this case.
It is possible for Item_field to have a NULL field_name. This is true if
the Item_field is created based on a field in a temporary table that has
no name. It is thus necessary to do a null check before attempting a
strcmp.
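A minimal illustration of the null-safe comparison (hypothetical helper, not the actual patch):

  #include <cassert>
  #include <cstring>

  // Guard the name comparison because an Item_field built from an unnamed
  // temporary-table column can have a NULL field_name.
  static bool same_field_name(const char *a, const char *b)
  {
    if (a == nullptr || b == nullptr)
      return false;                 // unnamed fields never match by name
    return std::strcmp(a, b) == 0;
  }

  int main()
  {
    assert(same_field_name("a", "a"));
    assert(!same_field_name(nullptr, "a"));   // no crash on NULL field_name
  }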
Make sure that in parallel replication, we execute wait_for_prior_commit()
before setting table->in_use for a temporary table. Otherwise we can end up
with two parallel replication worker threads competing with each other for
use of a temporary table.
Re-factor the use of find_temporary_table() to be able to handle errors
in the caller (as wait_for_prior_commit() can return error in case of
deadlock kill).
[This commit cherry-picked to be able to merge MDEV-7936, of which it
is a pre-requisite, into both 10.0 and 10.1.]
Parallel replication depends on locking (table locks, row locks, etc.) to
prevent two conflicting transactions from running and committing in parallel.
But temporary tables are designed to be visible only to one thread, and have
no such locking.
In the concrete issue, an intermediate master could commit a CREATE TEMPORARY
TABLE in the same group commit as in INSERT into that table. Thus, a
lower-level master could attempt to run them in parallel and get an error.
More generally, we need protection from parallel replication trying to run
transactions in parallel that access a common temporary table.
This patch simply causes use of a temporary table from parallel replication
to wait for all previous transactions to commit, serialising the replication
at that point.
(A more fine-grained locking could be added later, possibly. However,
using temporary tables in statement-based replication is in any case
normally undesirable; for example a restart of the server will lose
temporary tables and can break replication).
Note that row-based replication is not affected, as it does not do any
temporary tables on the slave-side.
This patch also cleans up the locking around protecting the list of
temporary tables in Relay_log_info. This used to take the
rli->data_lock at the end of every statement, which is very bad for
concurrency. With this patch, the lock is not taken unless temporary
tables (with statement-based binlogging) are in use on the slave.
Do not use merge_for_insert for commands which use SELECT, because the optimizer can't work with such tables.
Fixes which make multi-delete work with normally merged views.
* reset current_thd in THD::~THD, otherwise my_malloc_size_cb_func()
might access THD after it was destroyed.
* remove now redundant set_current_thd(0) calls that follow delete thd.
The reason for the failure was a bug in an include file on Debian that causes 'struct stat'
to have different sizes depending on the environment.
This patch fixes so that we always include my_global.h or my_config.h before we include any other files.
Other things:
- Removed #include <my_global.h> in some include files; Better to always do this at the top level to have as few
"always-include-this-file-first' files as possible.
- Removed usage of some include files that where already included by my_global.h or by other files.
client/mysql_plugin.c:
Use my_global.h first
client/mysqlslap.c:
Remove duplicated include files
extra/comp_err.c:
Remove duplicated include files
include/m_string.h:
Remove duplicated include files
include/maria.h:
Remove duplicated include files
libmysqld/emb_qcache.cc:
Use my_global.h first
plugin/semisync/semisync.h:
Use my_pthread.h first
sql/datadict.cc:
Use my_global.h first
sql/debug_sync.cc:
Use my_global.h first
sql/derror.cc:
Use my_global.h first
sql/des_key_file.cc:
Use my_global.h first
sql/discover.cc:
Use my_global.h first
sql/event_data_objects.cc:
Use my_global.h first
sql/event_db_repository.cc:
Use my_global.h first
sql/event_parse_data.cc:
Use my_global.h first
sql/event_queue.cc:
Use my_global.h first
sql/event_scheduler.cc:
Use my_global.h first
sql/events.cc:
Use my_global.h first
sql/field.cc:
Use my_global.h first
Remove duplicated include files
sql/field_conv.cc:
Use my_global.h first
sql/filesort.cc:
Use my_global.h first
Remove duplicated include files
sql/gstream.cc:
Use my_global.h first
sql/ha_ndbcluster.cc:
Use my_global.h first
sql/ha_ndbcluster_binlog.cc:
Use my_global.h first
sql/ha_ndbcluster_cond.cc:
Use my_global.h first
sql/ha_partition.cc:
Use my_global.h first
sql/handler.cc:
Use my_global.h first
sql/hash_filo.cc:
Use my_global.h first
sql/hostname.cc:
Use my_global.h first
sql/init.cc:
Use my_global.h first
sql/item.cc:
Use my_global.h first
sql/item_buff.cc:
Use my_global.h first
sql/item_cmpfunc.cc:
Use my_global.h first
sql/item_create.cc:
Use my_global.h first
sql/item_geofunc.cc:
Use my_global.h first
sql/item_inetfunc.cc:
Use my_global.h first
sql/item_row.cc:
Use my_global.h first
sql/item_strfunc.cc:
Use my_global.h first
sql/item_subselect.cc:
Use my_global.h first
sql/item_sum.cc:
Use my_global.h first
sql/item_timefunc.cc:
Use my_global.h first
sql/item_xmlfunc.cc:
Use my_global.h first
sql/key.cc:
Use my_global.h first
sql/lock.cc:
Use my_global.h first
sql/log.cc:
Use my_global.h first
sql/log_event.cc:
Use my_global.h first
sql/log_event_old.cc:
Use my_global.h first
sql/mf_iocache.cc:
Use my_global.h first
sql/mysql_install_db.cc:
Remove duplicated include files
sql/mysqld.cc:
Remove duplicated include files
sql/net_serv.cc:
Remove duplicated include files
sql/opt_range.cc:
Use my_global.h first
sql/opt_subselect.cc:
Use my_global.h first
sql/opt_sum.cc:
Use my_global.h first
sql/parse_file.cc:
Use my_global.h first
sql/partition_info.cc:
Use my_global.h first
sql/procedure.cc:
Use my_global.h first
sql/protocol.cc:
Use my_global.h first
sql/records.cc:
Use my_global.h first
sql/records.h:
Don't include my_global.h
Better to do this at the upper level
sql/repl_failsafe.cc:
Use my_global.h first
sql/rpl_filter.cc:
Use my_global.h first
sql/rpl_gtid.cc:
Use my_global.h first
sql/rpl_handler.cc:
Use my_global.h first
sql/rpl_injector.cc:
Use my_global.h first
sql/rpl_record.cc:
Use my_global.h first
sql/rpl_record_old.cc:
Use my_global.h first
sql/rpl_reporting.cc:
Use my_global.h first
sql/rpl_rli.cc:
Use my_global.h first
sql/rpl_tblmap.cc:
Use my_global.h first
sql/rpl_utility.cc:
Use my_global.h first
sql/set_var.cc:
Added comment
sql/slave.cc:
Use my_global.h first
sql/sp.cc:
Use my_global.h first
sql/sp_cache.cc:
Use my_global.h first
sql/sp_head.cc:
Use my_global.h first
sql/sp_pcontext.cc:
Use my_global.h first
sql/sp_rcontext.cc:
Use my_global.h first
sql/spatial.cc:
Use my_global.h first
sql/sql_acl.cc:
Use my_global.h first
sql/sql_admin.cc:
Use my_global.h first
sql/sql_analyse.cc:
Use my_global.h first
sql/sql_audit.cc:
Use my_global.h first
sql/sql_base.cc:
Use my_global.h first
sql/sql_binlog.cc:
Use my_global.h first
sql/sql_bootstrap.cc:
Use my_global.h first
sql/sql_cache.cc:
Use my_global.h first
sql/sql_class.cc:
Use my_global.h first
sql/sql_client.cc:
Use my_global.h first
sql/sql_connect.cc:
Use my_global.h first
sql/sql_crypt.cc:
Use my_global.h first
sql/sql_cursor.cc:
Use my_global.h first
sql/sql_db.cc:
Use my_global.h first
sql/sql_delete.cc:
Use my_global.h first
sql/sql_derived.cc:
Use my_global.h first
sql/sql_do.cc:
Use my_global.h first
sql/sql_error.cc:
Use my_global.h first
sql/sql_explain.cc:
Use my_global.h first
sql/sql_expression_cache.cc:
Use my_global.h first
sql/sql_handler.cc:
Use my_global.h first
sql/sql_help.cc:
Use my_global.h first
sql/sql_insert.cc:
Use my_global.h first
sql/sql_lex.cc:
Use my_global.h first
sql/sql_load.cc:
Use my_global.h first
sql/sql_locale.cc:
Use my_global.h first
sql/sql_manager.cc:
Use my_global.h first
sql/sql_parse.cc:
Use my_global.h first
sql/sql_partition.cc:
Use my_global.h first
sql/sql_plugin.cc:
Added comment
sql/sql_prepare.cc:
Use my_global.h first
sql/sql_priv.h:
Added error if we use this before including my_global.h
This check is here because so many files include sql_priv.h first.
sql/sql_profile.cc:
Use my_global.h first
sql/sql_reload.cc:
Use my_global.h first
sql/sql_rename.cc:
Use my_global.h first
sql/sql_repl.cc:
Use my_global.h first
sql/sql_select.cc:
Use my_global.h first
sql/sql_servers.cc:
Use my_global.h first
sql/sql_show.cc:
Added comment
sql/sql_signal.cc:
Use my_global.h first
sql/sql_statistics.cc:
Use my_global.h first
sql/sql_table.cc:
Use my_global.h first
sql/sql_tablespace.cc:
Use my_global.h first
sql/sql_test.cc:
Use my_global.h first
sql/sql_time.cc:
Use my_global.h first
sql/sql_trigger.cc:
Use my_global.h first
sql/sql_udf.cc:
Use my_global.h first
sql/sql_union.cc:
Use my_global.h first
sql/sql_update.cc:
Use my_global.h first
sql/sql_view.cc:
Use my_global.h first
sql/sys_vars.cc:
Added comment
sql/table.cc:
Use my_global.h first
sql/thr_malloc.cc:
Use my_global.h first
sql/transaction.cc:
Use my_global.h first
sql/uniques.cc:
Use my_global.h first
sql/unireg.cc:
Use my_global.h first
sql/unireg.h:
Removed inclusion of my_global.h
storage/archive/ha_archive.cc:
Added comment
storage/blackhole/ha_blackhole.cc:
Use my_global.h first
storage/csv/ha_tina.cc:
Use my_global.h first
storage/csv/transparent_file.cc:
Use my_global.h first
storage/federated/ha_federated.cc:
Use my_global.h first
storage/federatedx/federatedx_io.cc:
Use my_global.h first
storage/federatedx/federatedx_io_mysql.cc:
Use my_global.h first
storage/federatedx/federatedx_io_null.cc:
Use my_global.h first
storage/federatedx/federatedx_txn.cc:
Use my_global.h first
storage/heap/ha_heap.cc:
Use my_global.h first
storage/innobase/handler/handler0alter.cc:
Use my_global.h first
storage/maria/ha_maria.cc:
Use my_global.h first
storage/maria/unittest/ma_maria_log_cleanup.c:
Remove duplicated include files
storage/maria/unittest/test_file.c:
Added comment
storage/myisam/ha_myisam.cc:
Move sql_plugin.h first as this includes my_global.h
storage/myisammrg/ha_myisammrg.cc:
Use my_global.h first
storage/oqgraph/oqgraph_thunk.cc:
Use my_config.h and my_global.h first
One could not include my_global.h before oqgraph_thunk.h (don't know why)
storage/spider/ha_spider.cc:
Use my_global.h first
storage/spider/hs_client/config.cpp:
Use my_global.h first
storage/spider/hs_client/escape.cpp:
Use my_global.h first
storage/spider/hs_client/fatal.cpp:
Use my_global.h first
storage/spider/hs_client/hstcpcli.cpp:
Use my_global.h first
storage/spider/hs_client/socket.cpp:
Use my_global.h first
storage/spider/hs_client/string_util.cpp:
Use my_global.h first
storage/spider/spd_conn.cc:
Use my_global.h first
storage/spider/spd_copy_tables.cc:
Use my_global.h first
storage/spider/spd_db_conn.cc:
Use my_global.h first
storage/spider/spd_db_handlersocket.cc:
Use my_global.h first
storage/spider/spd_db_mysql.cc:
Use my_global.h first
storage/spider/spd_db_oracle.cc:
Use my_global.h first
storage/spider/spd_direct_sql.cc:
Use my_global.h first
storage/spider/spd_i_s.cc:
Use my_global.h first
storage/spider/spd_malloc.cc:
Use my_global.h first
storage/spider/spd_param.cc:
Use my_global.h first
storage/spider/spd_ping_table.cc:
Use my_global.h first
storage/spider/spd_sys_table.cc:
Use my_global.h first
storage/spider/spd_table.cc:
Use my_global.h first
storage/spider/spd_trx.cc:
Use my_global.h first
storage/xtradb/handler/handler0alter.cc:
Use my_global.h first
storage/xtradb/handler/i_s.cc:
Use my_global.h first
We should assume that the storage engine will report the first duplicate key for this case.
The old code for suppressing the unsafe logging error with LIMIT didn't work because of wrong usage of my_interval_timer().
Suppress unsafe logging errors to the error log if we get too many unsafe logging errors in a short time.
This is to not overflow the error log with meaningless errors.
- Each error code is suppressed and counted separately.
- We do a 5 minute suppression of new errors if we get more than 10 errors in that time.
Only print unsafe logging errors if log_warnings > 1.
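A sketch of the suppression scheme with the numbers above (illustrative code only, not the server's implementation):

  #include <chrono>
  #include <cstdio>
  #include <map>

  // Count each error code separately and, once more than 10 occurrences
  // are seen inside a 5 minute window, stop writing that error to the log
  // until the window expires.
  using Clock= std::chrono::steady_clock;

  struct Suppressor
  {
    struct State { Clock::time_point window_start{}; unsigned count= 0; };
    std::map<int, State> per_error;

    bool should_log(int error_code, Clock::time_point now= Clock::now())
    {
      State &s= per_error[error_code];
      if (s.count == 0 || now - s.window_start > std::chrono::minutes(5))
      {
        s.window_start= now;        // start a new window
        s.count= 0;
      }
      return ++s.count <= 10;       // log the first 10, suppress the rest
    }
  };

  int main()
  {
    Suppressor sup;
    unsigned logged= 0;
    for (int i= 0; i < 100; i++)
      if (sup.should_log(1592))     // some unsafe-statement error code
        logged++;
    std::printf("logged %u of 100\n", logged);   // prints "logged 10 of 100"
  }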
mysql-test/suite/binlog/r/binlog_stm_unsafe_warning.result:
Update test results as INSERT ... ON DUPLICATE KEY UPDATE doesn't get logged anymore
mysql-test/suite/binlog/r/binlog_unsafe.result:
Update test results as INSERT ... ON DUPLICATE KEY UPDATE doesn't get logged anymore
mysql-test/suite/engines/README:
Fixed typos
mysql-test/suite/rpl/r/rpl_known_bugs_detection.result:
Update test results as INSERT ... ON DUPLICATE KEY UPDATE doesn't get logged anymore
sql/sql_base.cc:
Don't log warning if there are two unique keys used with INSERT .. ON DUPLICATE KEY UPDATE.
We should assume that the storage engine will report the first duplicate key for this case.
sql/sql_class.cc:
Suppress error in binary log if we get too many unsafe logging errors in a short time.
Only print unsafe logging errors if log_warnings > 1
MDEV-6560 Assertion `! is_set() ' failed in Diagnostics_area::set_ok_status on killing CREATE OR REPLACE
MDEV-6525 Assertion `table->pos_in_locked_tables == __null || table->pos_in_locked_tables->table == table' failed in mark_used_tables_as_free_for_reuse, locking problems and binlogging problems on CREATE OR REPLACE under lock.
mysql-test/r/create_or_replace.result:
Added test for MDEV-6560
mysql-test/t/create_or_replace.test:
Added test for MDEV-6560
mysql-test/valgrind.supp:
Added suppression for OpenSuse 12.3
sql/sql_base.cc:
More DBUG
sql/sql_class.cc:
Changed thd_sqlcom_can_generate_row_events() so that it does not report that CREATE OR REPLACE is generating row events.
This is safe as this function is only used by InnoDB/XtraDB to check if a query is generating row events as part of another transaction. As CREATE is always run as its own transaction, this isn't a problem.
This fixed MDEV-6525.
sql/sql_table.cc:
Remember if reopen_tables() generates an error (which can only happen in case of KILL).
This fixed MDEV-6560
Merged lp:maria/maria-10.0-galera up to revision 3879.
Added new functions to the handler API to forcefully abort_transaction,
produce fake_trx_id, get_checkpoint and set_checkpoint for XA. These
were added for the future possibility to add more storage engines that
could use Galera replication.