The main fix was to not cache derived tables, as they may be temporary tables that are deleted before the next query.
This was a bit tricky, as Item_field::fix_fields() depended on cached_table being set to resolve some columns.
mysql-test/r/sp-bugs.result:
Added test case
mysql-test/t/sp-bugs.test:
Added test case
sql/item.cc:
Fixed fix_outer_field() to handle the case where the found field did not have cached_table set.
The idea is that if cached_table is not available, use from_field->table->pos_in_table_list instead (see the sketch after this file list).
sql/records.cc:
Also accept INTERNAL_TMP_TABLE for memmap
sql/sql_base.cc:
More DBUG_PRINT
Ensured that setup_natural_join_row_types() is not run twice.
The original code modified context->first_name_resolution_table also for subsequent executions.
This was wrong, as it could give incorrect results if some joins had been optimized away between calls.
sql/sql_derived.cc:
Mark derived tables as internal temporary tables (INTERNAL_TMP_TABLE), not as NON_TRANSACTIONAL_TMP_TABLE.
This is more correct, as the tables are not visible to the end user.
sql/sql_insert.cc:
Reset pos_in_table_list before calling fix_fields.
One consequence of no longer caching all generated tables in Item_ident is that
pos_in_table_list needs to be correct in calls to fix_fields().
sql/sql_lex.cc:
More DBUG_PRINT
sql/sql_parse.cc:
Don't cache derived tables, as they may be temporary tables that are deleted before the next query.
sql/sql_select.cc:
Reset table_vector. This was required as some code checked the vector to see if temporary tables had already been created.
sql/table.cc:
Mark tables with field translations as cacheable (as these will not disappear between statement executions).
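
A minimal sketch of the fallback idea from the sql/item.cc change above. The member names follow the MariaDB sources, but the helper itself is illustrative, not the actual patch, and assumes the server's internal headers:

  /* If the Item has no cached TABLE_LIST (e.g. a derived or temporary table
     that is no longer cached between executions), fall back to the
     TABLE_LIST that the resolved field's TABLE currently points at. */
  static TABLE_LIST *field_table_list(Item_field *item_field, Field *from_field)
  {
    if (item_field->cached_table)                 // normal, cached case
      return item_field->cached_table;
    return from_field->table->pos_in_table_list;  // uncached (derived) case
  }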
THD::activate_stmt_arena_if_needed() should be used to temporarily activate the statement arena, instead of using THD::set_n_backup_active_arena() directly, because the following scenario is possible:
1) func1 saves the current arena and activates copy1 of the statement arena
2) func2 saves copy1 of the statement arena set up by func1 and activates copy2
3) some changes are made to copy2
4) func2 stores the changed copy2 back into the statement arena and activates copy1
5) func1 stores the unchanged copy1 back into the statement arena (overwriting the changed copy2, so the changes are lost) and activates the arena that was active before.
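
A minimal sketch of the recommended pattern, using the real THD methods named above; the surrounding function and its contents are illustrative only and assume the server's internal headers:

  /* Activate the statement arena only if it is not already active, so that
     nested callers do not overwrite each other's changes as in steps 1-5. */
  void allocate_on_stmt_arena(THD *thd)
  {
    Query_arena backup;
    Query_arena *arena= thd->activate_stmt_arena_if_needed(&backup);
    /* ... allocate Items or other statement-lifetime objects here ... */
    if (arena)                                    // restore only if we switched
      thd->restore_active_arena(arena, &backup);
  }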
This only happened when using ORDER BY on a primary key part where all other key parts were constant.
Removal of duplicated expressions in ORDER BY (the old code already did this in some strange cases).
mysql-test/r/group_by.result:
Fixed results to take into account that duplicate order by parts are now deleted
mysql-test/r/group_by_innodb.result:
Ensure extended keys are on
mysql-test/r/innodb_ext_key.result:
More tests
mysql-test/r/order_by.result:
More tests
mysql-test/t/group_by.test:
Fixed results to take into account that duplicate order by parts are now deleted
mysql-test/t/group_by_innodb.test:
Ensure extended keys are on
mysql-test/t/innodb_ext_key.test:
More tests
mysql-test/t/order_by.test:
More tests
sql/sql_select.cc:
Fixed bug where we looked at extended key parts when we shouldn't
Removal of duplicated expressions in ORDER BY
sql/table.cc:
Indentation fixes
Don't set TABLE_SHARE::keys before TABLE_SHARE::key_info is set;
otherwise an error might leave only the first property set, which would
confuse TABLE_SHARE::destroy().
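
A minimal sketch of the ordering constraint described above (the helper is illustrative, not the actual patch, and assumes the server's internal headers):

  /* Publish key_info before keys, so an error between the two assignments
     cannot leave the share with keys > 0 but key_info == NULL. */
  static void publish_share_keys(TABLE_SHARE *share, KEY *key_info_buffer,
                                 uint key_count)
  {
    share->key_info= key_info_buffer;   // set the pointer first
    share->keys= key_count;             // only then publish the count
  }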
The fix_fields() call protocol was broken (a zero pointer was passed as the link to the item, which is only valid if you are sure that there can be no Items which transform themselves).
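
A minimal sketch of the calling convention meant above; Item::fix_fields() is the real method, the wrapper is illustrative:

  /* Pass a real reference slot, so an Item that transforms itself during
     fixing can store its replacement back into *item_ref. Passing a zero
     pointer is only valid when no such transformation can happen. */
  static bool fix_item_with_ref(THD *thd, Item **item_ref)
  {
    return (*item_ref)->fix_fields(thd, item_ref);
  }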
Implement discovery of table non-existence, and related changes:
1. Split GTS_FORCE_DISCOVERY (which meant two different things in
two different functions) into GTS_FORCE_DISCOVERY and GTS_USE_DISCOVERY.
2. Move GTS_FORCE_DISCOVERY implementation into open_table_def().
3. In recover_from_failed_open() clear old errors *before* discovery,
not after successful discovery. The final error should come
from the discovery.
4. On forced discovery, delete the table's .frm file first. Discovery will
write a new one, if desired.
5. If the frm file exists, but not the table in the engine, force
rediscovery if the engine supports it.
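
A minimal sketch of how the split looks at call sites; the flag names are taken from the list above, but the numeric values and the exact call shape are assumptions, not the actual patch:

  #define GTS_USE_DISCOVERY    8  /* discover the table only if no frm exists       */
  #define GTS_FORCE_DISCOVERY 16  /* delete the old frm and rediscover from engine  */

  /* caller that merely tolerates a missing frm */
  open_table_def(thd, share, GTS_TABLE | GTS_USE_DISCOVERY);

  /* caller that wants the definition re-read from the engine unconditionally */
  open_table_def(thd, share, GTS_TABLE | GTS_FORCE_DISCOVERY);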
1. The default db type for partitions was stored as a 1-byte DB_TYPE code,
which doesn't work for dynamically generated codes.
2. The storage engine plugin for the default db type wasn't locked at all,
which could trivially crash for dynamic plugins.
Now the storage engine name is stored in the extra2 section,
and the plugin is correctly locked.
Partitioning didn't store the name of the default storage engine for partitions
in the frm file - it only stored the typecode. Typecodes aren't stable and
might vary depending on the order in which storage engines are loaded (which
can be changed even from my.cnf, without recompilation).
As a temporary workaround for 5.5, we hard-code Aria's typecode, to make sure it
never changes.
Merge of 10.0-mdev26 feature tree into 10.0-base.
Global transaction ID is prepended to each event group in the binlog.
A slave connect can request to start from a GTID position instead of specifying
a file name/offset in the master binlog. This facilitates an easy switch to a new
master.
The slave GTID state is stored in a table, mysql.rpl_slave_state, which can be
InnoDB to get a crash-safe slave state.
A GTID includes a replication domain ID, which allows keeping track of distinct
positions for each of multiple masters.
* persistent table versions in the extra2
* ha_archive::frm_compare using TABLE_SHARE::tabledef_version (see the sketch at the end of this section)
* distinguish between "important" and "optional" extra2 frm values
* write engine-defined attributes (aka "table options") to extra2, not to extra,
but still read from the old location, if they're found there.
* comments
* cosmetic changes, *(ptr+5) -> ptr[5]
* a couple of trivial functions -> inline
* remove unused argument from pack_header()
* create_frm() no longer creates the frm file (the function used to prepare and
fill a memory buffer and call my_create() at the end; now it only prepares the
memory buffer). Renamed accordingly.
* don't call pack_screen twice, go for a smaller screen area in the first attempt
* remove useless calls to check_duplicate_warning()
* don't write unireg screens to .frm files
* remove make_new_entry(); it's basically dead code, always calculating
and writing the same string value into the frm. Replace the function call
with the constant string.
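
A minimal sketch, for the ha_archive::frm_compare item above, of using the persistent table version; the helper name is hypothetical, only the TABLE_SHARE::tabledef_version usage is taken from the list, and the server's internal headers are assumed:

  /* Decide whether a stored table definition matches the current one by
     comparing the persistent tabledef_version instead of byte-comparing
     whole frm images. */
  static bool tabledef_version_matches(const TABLE_SHARE *share,
                                       const uchar *stored, size_t stored_length)
  {
    return share->tabledef_version.length == stored_length &&
           memcmp(share->tabledef_version.str, stored, stored_length) == 0;
  }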