* don't disclose INVISIBLE_FULL columns in USING and NATURAL JOIN
* other INVISIBLE columns must be mentioned by name in USING,
they are hidden from NATURAL JOIN (see the sketch below)
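A minimal sketch of the intended behaviour for user-level INVISIBLE
columns (table and column names are made up):

  CREATE TABLE t1 (a INT, h INT INVISIBLE);
  CREATE TABLE t2 (b INT, h INT INVISIBLE);
  SELECT * FROM t1 NATURAL JOIN t2;   -- h is not treated as a common column
  SELECT * FROM t1 JOIN t2 USING (h); -- allowed: h is mentioned by name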
Standard-compatible behavior for UPDATE: all assignments in SET
are executed "simultaneously", not left-to-right, so `SET a=b, b=a`
swaps the values.
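For example, given a hypothetical table:

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 2);
  UPDATE t1 SET a=b, b=a;  -- both assignments read the old values
  SELECT a, b FROM t1;     -- a=2, b=1, not the a=2, b=2 that
                           -- left-to-right evaluation would give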
The test used to result in a mismatch due to unaccounted-for specifics
of the master-slave handshake protocol: the Slave_IO_Running status is
set to true while the semisync master status is set to active a bit
later. The test is refined to expect that.
Fixed Truncate_versioning_privilege to work like any other privilege
during upgrade:
- If the privilege field does not exist, add it to the user and db tables.
If the user had super_privilege then the user will also get the new
Truncate_versioning_privilege.
This is done to ensure that if one had GRANT ALL PRIVILEGES before, one
will continue to have it after running mysql_upgrade.
This also fixes a bug where the Truncate_versioning_privilege
did not return an error.
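Roughly, the upgrade step amounts to the following (the exact column
definition and its position in mysql.user are assumptions, not the
literal upgrade script; the corresponding column is also added to
mysql.db):

  ALTER TABLE mysql.user
    ADD Truncate_versioning_priv ENUM('N','Y') NOT NULL DEFAULT 'N';
  UPDATE mysql.user SET Truncate_versioning_priv='Y' WHERE Super_priv='Y';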
Corrected the code of st_select_lex::find_table_def_in_with_clauses() for
proper identification of CTE references used in embedded CTEs.
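A hypothetical shape of an affected query, with a CTE reference used
inside an embedded CTE:

  WITH cte1 AS (SELECT 1 AS a),
       cte2 AS (WITH cte3 AS (SELECT a FROM cte1) SELECT * FROM cte3)
  SELECT * FROM cte2;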
When storing '0001-01-01 10:20:30x' into a TIME column, execution went
through the last code branch in Field_time::store_TIME_with_warning(),
around the test for (ltime->year || ltime->month). This then resulted
in wrong results because:
1. Field_time::store_TIME() does not check YYYYMM against zero.
It assumes that ltime->days and ltime->hours are already properly set.
So it mixed days into hours, even when YYYYMM was not zero.
2. Field_time_hires::store_TIME() does not check YYYYMM against zero.
It assumes that ltime->year, ltime->month, ltime->days and ltime->hours
are already properly set. So it always mixed days and even months(!) and
years(!) into hours, using pack_time(). This gave even worse results
compared to #1.
3. Field_timef::store_TIME() did not check the entire YYYYMM for being
zero. It only checked MM, but did not check YYYY. In case of a zero MM,
it mixed days into hours, even if YYYY was not zero.
The wrong code was in TIME_to_longlong_time_packed().
In the new version, Field_time::store_TIME_with_warning() is responsible
for preparing the YYYYMMDD part properly in all code branches
(with trailing garbage like 'x' and without trailing garbage).
It was reorganized into a more straightforward style.
Field_time::store_TIME(), Field_time_hires::store_TIME() and
TIME_to_longlong_time_packed() were fixed to do a DBUG_ASSERT
on non-zero ltime->year or ltime->month. The code testing ltime->month
was removed from TIME_to_longlong_time_packed(), as it's now
properly done on the caller level.
Truncation was moved from Field_timef::store_TIME() to
Field_time::store_TIME_with_warning().
So now all three methods Field_time*::store_TIME() assume a properly
set input value:
- Only zero ltime->year and ltime->month are allowed.
- The value must be already properly truncated according to decimals()
(this will help to add rounding soon, see MDEV-8894)
A "const" qualifier was added to the argument of Field_time*::store_TIME().
Problem:-
create or replace table t1 (pk int auto_increment primary key invisible, i int);
alter table t1 modify pk int invisible;
This last ALTER makes an invisible column which is NOT NULL and does not
have a default value.
Analysis:-
This is caused because our error check for the NOT_NULL_FLAG and
NO_DEFAULT_VALUE_FLAG flags misses this sql_field, but this is not the
fault of the error check: this field comes via mysql_prepare_alter_table
and does not have the NO_DEFAULT_VALUE_FLAG flag turned on. (In the
CREATE TABLE case, NO_DEFAULT_VALUE_FLAG would have been turned on by
Column_definition::check and this would have generated an error.)
Solution:-
Moved the error check towards the end of mysql_prepare_create_table,
because by that point NO_DEFAULT_VALUE_FLAG has been applied to the
required columns.
Problem:- If we create a table field with a dynamic default value, then
that field always gets a NULL value.
Analysis:- This is because in fill_record we simply skip invisible
columns, assuming that share->default_values (the default values are
always copied into table->record[0] before insert) will have a default
value for them. This is true for constant defaults, but not for dynamic
defaults.
Solution:- We simply clear all_fields_have_value, which makes
update_default_fields be called in the case of a dynamic default, so
the default expression is evaluated and the value is set in the field.
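A hypothetical illustration (the column names and the default expression
are made up):

  CREATE TABLE t1 (a INT, b DOUBLE INVISIBLE DEFAULT (RAND()));
  INSERT INTO t1 (a) VALUES (1);
  -- before the fix b stayed NULL, because fill_record() skipped the
  -- invisible column and the dynamic default was never evaluated
  SELECT a, b FROM t1;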
When identifying a table name the following should be taken into account:
a CTE name cannot be qualified with a database name; if the name is so
qualified, it is considered the name of a non-CTE table.
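For example (db1 is a hypothetical database containing a base table t1):

  WITH t1 AS (SELECT 1 AS a)
  SELECT * FROM t1;       -- resolves to the CTE
  WITH t1 AS (SELECT 1 AS a)
  SELECT * FROM db1.t1;   -- qualified, so resolves to the base table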
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because processlist_db wasn't always
correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length after my_casedn_str() of table/db names
- Removed an unused version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
MDEV-11415 Remove excessive undo logging during ALTER TABLE…ALGORITHM=COPY
Move a test from innodb.rename_table_debug to innodb.alter_copy.
ha_innobase::extra(HA_EXTRA_BEGIN_ALTER_COPY): Register id-versioned
tables so that mysql.transaction_registry will be updated, even for
empty tables that are subjected to ALTER TABLE…ALGORITHM=COPY.
If a crash occurs during ALTER TABLE…ALGORITHM=COPY, InnoDB would spend
a lot of time rolling back writes to the intermediate copy of the table.
To reduce the amount of busy work done, a work-around was introduced in
commit fd069e2bb3 in MySQL 4.1.8 and 5.0.2,
to commit the transaction after every 10,000 inserted rows.
A proper fix would have been to disable the undo logging altogether and
to simply drop the intermediate copy of the table on subsequent server
startup. This is what happens in MariaDB 10.3 with MDEV-14717, MDEV-14585.
In MariaDB 10.2, the intermediate copy of the table would be left behind
with a name starting with the string #sql.
This is a backport of a bug fix from MySQL 8.0.0 to MariaDB,
contributed by jixianliang <271365745@qq.com>.
Unlike recent MySQL, MariaDB supports ALTER IGNORE. For that operation
InnoDB must for now keep the undo logging enabled, so that the latest
row can be rolled back in case of an error.
In Galera cluster, the LOAD DATA statement will retain the existing
behaviour and commit the transaction after every 10,000 rows if
the parameter wsrep_load_data_splitting=ON is set. The logic to do
so (the wsrep_load_data_split() function and the call to
handler::extra(HA_EXTRA_FAKE_START_STMT)) is joint work
by Ji Xianliang and Marko Mäkelä.
The original fix:
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date: Wed Dec 2 16:09:15 2015 +0530
Bug#17479594 AVOID INTERMEDIATE COMMIT WHILE DOING ALTER TABLE ALGORITHM=COPY
Problem:
During ALTER TABLE, we commit and restart the transaction for every
10,000 rows, so that the rollback after recovery would not take so long.
Fix:
Suppress the undo logging during the copy alter operation. If fts_index
is present then insert directly into the fts auxiliary table rather
than doing it at commit time.
ha_innobase::num_write_row: Remove the variable.
ha_innobase::write_row(): Remove the hack for committing every 10000 rows.
row_lock_table_for_mysql(): Remove the extra 2 parameters.
lock_get_src_table(), lock_is_table_exclusive(): Remove.
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Reviewed-by: Shaohua Wang <shaohua.wang@oracle.com>
Reviewed-by: Jon Olav Hauglid <jon.hauglid@oracle.com>
- Galera tests that were not updated with connection change
messages
- Test where the out-of-memory error was changed (we are now using the
standard out-of-memory error in most places)
- Removed tokudb tests that use include files that don't exist
in MariaDB
- Removed an unsupported MariaDB startup option from an option file
- Galera tests that were not updated with connection change
messages
- Disabled some TokuDB tests that always timed out.
These should be enabled again when we have an option to
specify timeouts per test.
Do not SET DEBUG_DBUG=-d,... in tests. To disable debug instrumentation,
save and restore the original value of the variable DEBUG_DBUG.
Assigning -d,... will enable the output of a lot of unrelated DBUG
messages to the server error log.
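The recommended pattern looks like this (the debug keyword is a
placeholder; session scope shown):

  SET @saved_dbug = @@debug_dbug;
  SET debug_dbug = '+d,some_debug_keyword';
  # ... the instrumented part of the test ...
  SET debug_dbug = @saved_dbug;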
Tests started to fail after a merge of MDEV-15107 (from bb-10.2-ext to 10.3),
because MDEV-15107 additionally fixed this problem:
MDEV-15112 Inconsistent evaluation of spvariable=0 in strict mode
Modified tests not to rely on the pre-MDEV-15112 behavior.
Now we don't open all partitions if the partitions to use were
explicitly specified.
ha_partition::m_opened_partition bitmap added to track
partitions that were actually opened.
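For example, with explicit partition selection only the listed
partitions need to be opened (hypothetical partitioned table t1):

  SELECT * FROM t1 PARTITION (p0);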
replicate_events_marked_for_skip=FILTER_ON_MASTER
When events of a big transaction are binlogged at an offset of more
than 2GB from the beginning of the log, the semisync master's dump
thread lost such events.
The events were skipped by the dump thread because it computed their
skipping status erroneously.
The current fixes make sure the skipping status is computed correctly.
The test verifies them simulating the 2GB offset.
The InnoDB RNG maintains global state, causing otherwise unnecessary bus
traffic. Even worse, this is cross-mutex traffic; that is, different
mutexes suffer from each other's contention.
A fixed delay of 4 was verified to give the best throughput by OLTP
update index and read-write benchmarks on Intel Broadwell (2/20/40) and
ARM (1/46/46).
join_tab->distinct=true means "Before doing record read with this
join_tab, call join_tab->remove_duplicates() to eliminate duplicates".
remove_duplicates() assumes that
- there is a temporary table $T with rows that are to be de-duplicated
- there is a previous join_tab (e.g. with join_tab->fields) which was
used to populate the temporary table $T.
When the query has an "Impossible WHERE" and a window function, the above
conditions are not met (but we still might need a window function
computation step when the query has implicit grouping).
The fix is to not add the remove_duplicates step if the select execution
is degenerate (we'll have at most one row in the output anyway).
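A hypothetical query shape that hits this case (DISTINCT, a window
function, and an impossible WHERE):

  SELECT DISTINCT SUM(a) OVER () FROM t1 WHERE 1 = 0;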