This task implements the optimizer trace.
This feature produces a trace for any SELECT/UPDATE/DELETE,
which contains information about decisions taken by the optimizer during
the optimization phase (choice of table access method, various costs,
transformations, etc). This feature helps explain why some decisions were
taken by the optimizer and why others were rejected.
Trace is session-local, controlled by the @@optimizer_trace variable.
To enable the optimizer trace, run:
set @@optimizer_trace='enabled=on';
To display the trace one can run:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
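A minimal end-to-end session (the table and query are illustrative):
set @@optimizer_trace='enabled=on';
SELECT * FROM t1 WHERE a > 10;
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
The trace shown is the one for the last traced statement in the session.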
This task also involves:
MDEV-18489: Limit the memory used by the optimizer trace. This
introduces a switch optimizer_trace_max_mem_size, which limits
the memory used by the optimizer trace. It was implemented by
Sergei Petrunia.
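For example, one could limit the trace to 1MB (the value is illustrative):
SET @@optimizer_trace_max_mem_size=1048576;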
Find the indexes of a table whose parts participate in one constraint.
These indexes are called constraint correlated.
New methods: TABLE::find_constraint_correlated_indexes() and the
virtual method check_index_dependence() were added.
For each index, its own constraint correlated index map was created,
where all indexes that are constraint correlated with the current one are
marked.
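For example, given a table like (illustrative):
CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b), CHECK (a < b));
the indexes on a and b are constraint correlated, since their keyparts
participate in the same CHECK constraint.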
The results of this task are used for MDEV-16188 (Use in-memory
PK filters built from range index scans).
Remove TABLE_SHARE::error_table_name() and TABLE_SHARE::orig_table_name
(which was allocated in the wrong memroot in this bug).
Instead, simply set TABLE_SHARE::table_name correctly.
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases usage of such filters reduces
the number of disk seeks spent for fetching table rows.
In this implementation the choice of which filter to apply
(if any) is made purely on cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
Besides this, the patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections of the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
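An illustrative query where such a filter may be chosen (schema and
ranges are examples):
CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b));
EXPLAIN SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 AND b BETWEEN 100 AND 200;
With suitable statistics the optimizer may build a rowid filter from the
range over b and apply it while scanning the range over a, saving disk
seeks on row fetches.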
always logged properly with binlog_row_image=MINIMAL
There are two issues fixed in this commit.
The first was observed with a multi-table UPDATE binlogged
in row format in binlog_row_image=MINIMAL mode. When the UPDATE targets
a table with an ON UPDATE attribute, its binlog after-image fails
to record the newly installed default value.
The reason turned out to be missed marking of default-capable fields
in TABLE::write_set.
This is fixed by marking such fields similarly to 10.2's MDEV-10134 patch (db7edfed17)
that introduced it. The marking follows 93d1e5ce0b841bed's idea
of exploiting TABLE::rpl_write_set introduced there,
and thus does not mess (in 10.1) with the actual MDEV-10134 agenda.
The patch makes the formerly argument-less TABLE::mark_default_fields_for_write()
accept an argument, which will be TABLE::rpl_write_set.
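A minimal scenario for the first issue (names are illustrative):
CREATE TABLE t1 (a INT PRIMARY KEY,
                 ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                    ON UPDATE CURRENT_TIMESTAMP);
CREATE TABLE t2 (a INT PRIMARY KEY);
SET SESSION binlog_row_image=MINIMAL;
UPDATE t1, t2 SET t1.a = t1.a + 10 WHERE t1.a = t2.a;
The after-image of t1 must also record the newly installed ts value.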
The second issue is extra columns in the binlog_row_image=MINIMAL before-image
when merely a packed primary key is enough. The test main.mysqlbinlog_row_minimal
always had a wrong result recorded.
This is fixed by invoking a function intended for possible read_set
filtering, which is (supposed to be) called for all types of DML, UPDATE
included; the test results have been corrected.
When *merging* from 10.1 to 10.2 the first "main" part of the patch is unnecessary
since the bug is not observed in 10.2, so only hunks from
sql/sql_class.cc
are required.
use frm_version, not mysql_version when parsing frm
In particular, virtual columns are stored according to
frm_version. And CHECK TABLE will overwrite mysql_version
with the current server version, so it cannot correctly
describe the frm format.
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Added new lock types to MDL_BACKUP for all backup stages, and
  a new MDL lock needed for the backup stages.
- Renamed MDL_BACKUP_STMT to MDL_BACKUP_DDL
- flush_tables() takes a new parameter that decides what should be flushed.
- InnoDB, Aria (transactional tables with checksums), Blackhole, Federated
and Federatedx tables are marked to be safe for online backup. We are
using MDL_BACKUP_TRANS_DML instead of MDL_BACKUP_DML locks for these,
which allows any DML to proceed on these tables during the whole
backup process until BACKUP STAGE COMMIT, which will block the final
commit.
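The intended flow from a backup tool is sketched below (stage names
follow the final MDEV-5336 design and may differ in this intermediate
commit):
BACKUP STAGE START;
BACKUP STAGE FLUSH;
BACKUP STAGE BLOCK_DDL;
BACKUP STAGE BLOCK_COMMIT;
BACKUP STAGE END;
DML on the online-backup-safe engines above keeps running until the
commit-blocking stage.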
If a table had a KEY_BLOCK_SIZE attribute, but no ROW_FORMAT,
it would be created as ROW_FORMAT=COMPRESSED in InnoDB.
However, TRUNCATE TABLE would lose the KEY_BLOCK_SIZE attribute
and create the table with the innodb_default_row_format (DYNAMIC).
This is a regression that was introduced by MDEV-13564.
update_create_info_from_table(): Copy also KEY_BLOCK_SIZE.
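For example (illustrative):
CREATE TABLE t1 (a INT) ENGINE=InnoDB KEY_BLOCK_SIZE=4;
TRUNCATE TABLE t1;
Before the fix, the truncated table was recreated with
ROW_FORMAT=DYNAMIC (the innodb_default_row_format), losing the implied
ROW_FORMAT=COMPRESSED.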
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
Race condition. field->flags were copied from s->field->flags during
field->clone(), early in open_table_from_share(). But s->field->flags
were getting their PART_INDIRECT_KEY_FLAG bit much later in
TABLE::mark_columns_used_by_virtual_fields() and only once per share.
If two threads were executing the code between field->clone()
and mark_columns_used_by_virtual_fields() at the same time, only
one would get PART_INDIRECT_KEY_FLAG bits in field[].
The patch fixes two bugs:
1) BEGIN_TIMESTAMP of MYSQL.TRANSACTION_REGISTRY is '0-0-0' on replication record insertion
2) BEGIN_TIMESTAMP equals COMMIT_TIMESTAMP in MYSQL.TRANSACTION_REGISTRY
Fixed by calling THD::set_time() at the appropriate places.
The problem was that the original alias was replaced with a newly allocated
string, but constraint items were still pointing to the original alias.
Fixed by storing the original alias, used when printing constraints, in the
table's mem_root.
sql/table.cc:8561:42: error: non-constant-expression cannot be narrowed
from type 'uint' (aka 'unsigned int') to
'__darwin_suseconds_t' (aka 'int') in
initializer list [-Wc++11-narrowing]
timeval end_time= {thd->query_start(), uint(thd->query_start_sec_part())};
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sql/table.cc:8561:42: note: insert an explicit cast to silence this issue
timeval end_time= {thd->query_start(), uint(thd->query_start_sec_part())};
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
static_cast<__darwin_suseconds_t>( )
a table value constructor shows wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of the subselect used in an ALL/ANY subquery. In this case
the result of the derived table used a sink of the class select_subselect
rather than of the class select_unit. Thus the previous fix could cause
memory overwrites when running EXPLAIN for queries with table value
constructors in ALL/ANY subselects.
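An illustrative query of the affected shape (names are examples):
CREATE TABLE t1 (a INT);
EXPLAIN SELECT * FROM t1 WHERE a > ANY (VALUES (1), (2), (3));
Here the table value constructor serves as the specification of the
subselect of an ANY subquery.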
After iterating all fields and setting PART_INDIRECT_KEY_FLAG as
necessary, TABLE::mark_columns_used_by_virtual_fields() remembers
in TABLE_SHARE that this operation was done and need not be repeated.
But as the flag is set in TABLE_SHARE, PART_INDIRECT_KEY_FLAG must
be set in TABLE_SHARE::field[], not only in TABLE::field[].
Otherwise all new TABLEs opened from this TABLE_SHARE will
never have it.
In this issue we are using the derived_with_keys optimization and we are using these keys to do a hash join, which is incorrect.
We cannot create keys for derived tables whose keyparts are of BLOB or TEXT type. TEXT or BLOB columns can only be
indexed over a specified length.
One can create a table with the same name for a `field` and a `table` `check` constraint.
For example:
`create table t(a int check(a>0), constraint a check(a>10));`
But when inserting new rows, the same error is always raised.
For example with
```insert into t values (-1);```
and
```insert into t values (10);```
the same error `ER_CONSTRAINT_FAILED` is obtained and it is not clear which constraint is violated.
This patch solves this so that if a field constraint is violated, the first parameter
in the error message is `table.field_name`, and if a table constraint is violated, the first parameter
in the error message is `constraint_name`.
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO:
MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
MDEV-16512 Server crashes in find_field_in_table_ref on 2nd
execution of SP referring to non-existing field
The problem was that the natural join code changed TABLE_LIST and
Item_fields but didn't restore the changed things if something went wrong,
and so was not able to re-execute after a failure.
Some of the problems could have been avoided if we had run
fix_fields before doing the natural join transformations.
Fixed by marking functions complete AFTER they have executed, instead of at the
start.
I also had to change some tests that checked if Item_fields are usable.
This doesn't fix all known problems, but at least avoids some crashes.
What should be done in the near future is to mark the statement in the SP
as 'not re-executable' and force a reparse of it on next execution.
Reviewer: Sergei Petrunia <psergey@askmonty.org>
The bug was that innobase_get_computed_value() trashed record[0] and the data
in Field_blob::value.
Fixed by using a record on the heap for innobase_get_computed_value().
Reviewer: Marko Mäkelä
This is to mark that a field is indirectly part of a key, which simplifies
checking if we need to have this field up to date to evaluate a key.
For example:
CREATE TABLE t1 (a int, b int as (a) virtual,
c int as (b) virtual, index(c))
would mark a and b with PART_INDIRECT_KEY_FLAG.
c is marked with PART_KEY_FLAG as before.
materialization scan over materialization lookup
For non-mergeable semi-joins we don't store the estimates of the IN subquery in table->file->stats.records.
In the function TABLE_LIST::fetch_number_of_rows, we store the number of rows in the tables
(estimates in case of derived tables/views).
Currently we don't store the estimates for non-mergeable semi-joins, which leads to the problem of selecting
materialization scan over materialization lookup.
Fixed this by storing these estimates appropriately.
MDEV-16426 Optimizer erroneously treats equal constants of different formats as same
A cleanup for MDEV-14630: fixing a crash in Item_decimal::eq().
Problems:
- old implementations of Item_decimal::eq() and
Item_temporal_literal::eq() were not symmetric
with Item_param::eq(); this caused MDEV-11361.
- old implementations for DECIMAL and temporal data types
did not take into account that in the case when eq() is called
with binary_cmp==true, eq() should check not only the equality
of the two values, but also the equality of their decimal precision.
This causes MDEV-16426.
- Item_decimal::eq() crashes with "item" pointing
to a non-DECIMAL value. Before MDEV-14630
non-DECIMAL values were filtered out by the test:
type() == item->type()
as literals of different types had different type().
After MDEV-14630, type() for literals of all data types returns CONST_ITEM.
This caused failures in tests:
./mtr engines/iuds.insert_number
./mtr --ps --embedded main.explain_slowquerylog
(revealed by buildbot)
The essence of the fix:
Making literals and Item_param reuse the same code to avoid
asymmetries between Item_param::eq(Item_literal) and
Item_literal::eq(Item_param), now and in the future, and to
avoid code duplication between Item_literal and Item_param.
Adding tests for "decimals" for DECIMAL and temporal data types,
to treat constants of different scale as not equal when "binary_cmp"
is "true".
Details:
1. Adding a helper class Item_const to make extracting constant values from Items easier
2. Deriving Item_basic_value from Item_const
3. Joining Type_handler::Item_basic_value_eq() and Item_basic_value_bin_eq()
into a single method with an extra "binary_cmp" argument
(it looks simple this way) and renaming the new method to Item_const_eq().
Modifying its implementations to operate with
Item_const instead of Item_basic_value.
4. Adding a new class Type_handler_hex_hybrid,
to handle hex constants like 0x616263.
5. Removing Item::VARBIN_ITEM and fixing Item_hex_constant to
use type_handler_hex_hybrid instead of type_handler_varchar.
Item_hex_hybrid::type() now returns CONST_ITEM, like all
other literals do.
6. Moving virtual methods Item::type_handler_for_system_time() and
Item::cast_to_int_type_handler() from Item to Type_handler.
7. Removing Item_decimal::eq() and Item_temporal_literal::eq().
These classes are now handled by the generic Item_basic_value::eq().
8. Implementing Type_handler_temporal_result::Item_const_eq()
and Type_handler_decimal_result::Item_const_eq(),
this fixes MDEV-11361.
9. Adding tests for "decimals" into
Type_handler_decimal_result::Item_const_eq() and
Type_handler_temporal_result::Item_const_eq()
in case if "binary_cmp" is true.
This fixes MDEV-16426.
10. Moving Item_cache out of Item_basic_value.
They share nothing. It simplifies implementation
of Item_basic_value::eq(). Deriving Item_cache
directly from Item.
11. Adding class DbugStringItemTypeValue, which
uses Item::print() internally, and using
it instead of the old debug printing code.
This gives nicer output in func_debug.result.
Changes N5 and N6 do not directly relate to the bugs fixed,
but make the code fully symmetric across all literal types.
Without a new handler Type_handler_hex_hybrid we'd have
to keep two code branches (for regular literals and for
hex hybrid literals).
The problem described in the bug report happened because the code
did not test check_cols(1) after fix_fields() in a few places.
Additionally, fix_fields() could be called multiple times for SP variables,
because they are all fixed at an early stage in append_for_log().
Solution:
1. Adding a few helper methods
- fix_fields_if_needed()
- fix_fields_if_needed_for_scalar()
- fix_fields_if_needed_for_bool()
- fix_fields_if_needed_for_order_by()
and using them in many cases instead of fix_fields() where
the "fixed" status is not definitely known to be "false".
2. Adding DBUG_ASSERT(!fixed) into Item_splocal*::fix_fields()
to catch double execution.
3. Adding tests.
As a good side effect, the patch removes a lot of duplicate code (~60 lines):
if (!item->fixed &&
item->fix_fields(..) &&
item->check_cols(1))
return true;
restore its original semantics by allowing only columns
in the write_set. Generated columns work around the assert
by temporarily updating the write_set.
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message when
check_killed() was triggered, which could lead to aborted queries without the error
being properly set. Fixed by setting the error message by default if
check_killed() noticed that a kill had been requested.
This allowed me to remove a lot of calls to thd->send_kill_message().
The crash happened because, during discovery, table->work_part_info was not properly
reset before execution.
Fixed by resetting it before executing ALTER TABLE, CREATE TABLE or
mysql_create_frm_image.
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL,
  all bits in read_set were set, which caused a
  different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't in all
cases mark vcol_set with the vcol_field.
- TABLE::update_virtual_fields() has to update all
vcol fields on REPLACE if binary logging with FULL
is used.
- VCOL_UPDATE_INDEXED should update all vcol fields part
of an index that was not updated by VCOL_UPDATE_FOR_READ
- max_row_length() calculated the length of NULL and not
  used fields. This didn't cause any crash, but used
more memory than needed.
The logic and the implementation scheme are similar to those of
MDEV-9197 Pushdown conditions into non-mergeable views/derived tables.
How the pushdown is made, shown on an example:
select * from t1
where a>3 and b>10 and
(a,b) in (select x,max(y) from t2 group by x);
-->
select * from t1
where a>3 and b>10 and
(a,b) in (select x,max(y)
from t2
where x>3
group by x
having max(y)>10);
The implementation scheme:
1. Search for the condition cond that depends only on the fields
from the left part of the IN subquery (left_part)
2. Find fields F_group in the select of the right part of the
IN subquery (right_part) that are used in the GROUP BY
3. Extract from cond the condition cond_where that depends only on the
fields from the left_part that stay at the same places in the left_part
(have the same indexes) as the F_group fields in the projection of the
right_part
4. Transform cond_where so it can be pushed into the WHERE clause of the
right_part, and delete cond_where from cond
5. Transform cond so it can be pushed into the HAVING clause of the right_part
The optimization is made in
Item_in_subselect::pushdown_cond_for_in_subquery() and is controlled by the
variable condition_pushdown_for_subquery.
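Assuming it is exposed as an optimizer_switch flag (as in later
releases), it can be toggled with:
SET optimizer_switch='condition_pushdown_for_subquery=off';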
New test file in_subq_cond_pushdown.test is created.
There are also some changes made to setup_jtbm_semi_joins().
Now it is decomposed into two procedures: setup_degenerate_jtbm_semi_joins(),
which is called before optimize_cond() for cond, and setup_jtbm_semi_joins(),
which is called after optimize_cond().
The new setup_jtbm_semi_joins() is made in such a way that the result of its work is
the same as if it was called before optimize_cond().
The code that is common for pushdown into materialized derived tables and into materialized
IN subqueries is factored out into pushdown_cond_for_derived(),
Item_in_subselect::pushdown_cond_for_in_subquery() and
st_select_lex::pushdown_cond_into_where_clause().
The problem was that verify_constraints() didn't check if
there was an error as part of evaluating constraints
(can happen in strict mode).
In a one-row insert the error was ignored when using
binary logging, as binary logging clears errors if the insert
succeeded. In a multi-row insert the error was noticed
for the second row.
After this fix one will get an error for both one-row and
multi-row inserts if the constraints generate a warning
in strict mode.
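A scenario of the kind described (illustrative):
CREATE TABLE t1 (a VARCHAR(10), CHECK (a >= 0));
SET sql_mode='STRICT_ALL_TABLES';
INSERT INTO t1 VALUES ('x');
INSERT INTO t1 VALUES ('x'), ('y');
Evaluating the constraint converts 'x' to a number, raising a truncation
warning; in strict mode this should make both the one-row and the
multi-row insert fail.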
MDEV-16100 FOR SYSTEM_TIME erroneously resolves string user variables as transaction IDs
Problem:
Vers_history_point::resolve_unit() tested item->result_type() before
item->fix_fields() was called.
- Item_func_get_user_var::result_type() returned REAL_RESULT by default.
This caused MDEV-16100.
- Item_func_sp::result_type() crashed on assert.
This caused MDEV-16094.
Changes:
1. Adding item->fix_fields() into Vers_history_point::resolve_unit()
before using data type specific properties of the history point
expression.
2. Adding a new virtual method Type_handler::Vers_history_point_resolve_unit()
3. Implementing type-specific
Type_handler_xxx::Type_handler::Vers_history_point_resolve_unit()
in the way to:
a. resolve temporal and general purpose string types to TIMESTAMP
b. resolve BIT and general purpose INT types to TRANSACTION
c. disallow use of non-relevant data type expressions in FOR SYSTEM_TIME
Note, DOUBLE and DECIMAL data types are disallowed intentionally.
- DOUBLE does not have enough precision to hold huge BIGINT UNSIGNED values
- DECIMAL rounds on conversion to INT
Both the lack of precision and the rounding might potentially lead to
very unpredictable results when a wrong transaction ID would be chosen.
If one really wants dangerous use of DOUBLE and DECIMAL, explicit CAST
can be used:
FOR SYSTEM_TIME AS OF CAST(double_or_decimal AS UNSIGNED)
QQ: perhaps DECIMAL(N,0) could still be allowed.
4. Adding a new virtual method Item::type_handler_for_system_time(),
to make HEX hybrids and bit literals work as TRANSACTION rather
than TIMESTAMP.
5. sql_yacc.yy: replacing the rule temporal_literal with "TIMESTAMP TEXT_STRING".
Other temporal literals now resolve to TIMESTAMP through the new
Type_handler methods. No special grammar needed. This removed
a few shift/reduce conflicts.
(TIMESTAMP related conflicts in "history_point:" will be removed separately)
6. Removing the "timestamp_only" parameter from
vers_select_conds_t::resolve_units() and Vers_history_point::resolve_unit().
It was a hint telling that a table did not have any TRANSACTION-aware
system time columns, so it's OK to resolve to TIMESTAMP in case of uncertainty.
In the new implementation it works as follows:
- the decision between TIMESTAMP and TRANSACTION is first made
based on the expression data type only
- then, in case if the expression resolved to TRANSACTION, the table
is checked if TRANSACTION-aware columns really exist.
This way is safer against possible ALTER TABLE statements changing
ROW START and ROW END columns from "BIGINT UNSIGNED" to "TIMESTAMP(x)"
or the other way around.
failure upon SELECT with impossible condition
The problem appears because of a wrong implementation of the
Item_func_in::build_clone() method. It didn't clone 'array' and 'cmp_fields'
fields for the cloned IN predicate and this could cause crashes.
The Item_func_in::fix_length_and_dec() method was refactored and a new method
named Item_func_in::create_array() was created. It allows creating 'array'
for cloned IN predicates in a proper way.
Store transaction start time in thd->transaction.start_time.
THD::transaction_time() wraps over transaction.start_time, taking into
account the current status of BEGIN.
Don't use hidden system time in versioning,
but keep the system time logic in THD
to work around a low-res system clock and
replication not versioned to versioned.
This reverts MDEV-14788 (System versioning cannot
be based on local timestamps, as it is now).
Versioning is based on local timestamps again,
but timestamps are protected by MDEV-15923
(option to control who can set session @@timestamp).
Make sure that SELECT_LEX_UNIT::derived behaves as documented
(points to the "TABLE_LIST representing this union in the
embedding select"). For recursive CTE this was not necessarily
the case, it could've pointed to the TABLE_LIST inside the CTE,
not in the embedding select.
To fix:
* don't update unit->derived in mysql_derived_prepare(), pass derived
as an argument to st_select_lex_unit::prepare()
* prefer to set unit->derived in TABLE_LIST::init_derived()
to the TABLE_LIST in the embedding select, not to the recursive
reference. Fail if there are multiple TABLE_LISTs in the embedding
select with conflicting FOR SYSTEM_TIME clauses.
cleanup:
* remove redundant THD* argument from st_select_lex_unit::prepare()
table.cc:
virtual columns must be computed for INSERT, if they're part
of the partitioning expression.
This change broke gcol.gcol_partition_innodb.
Fix CHECK TABLE for partitioned tables and vcols.
sql_partition.cc:
mark prerequisite base columns in full_part_field_set
ha_partition.cc:
initialize vcol_set accordingly
Modern compilers (such as GCC 8) emit warnings that the
'register' keyword is deprecated and not valid C++17.
Let us remove most use of the 'register' keyword.
Code in 'extra/' is not touched.
Item::derived_field_transformer_for_having
The crash occurred due to an inappropriate handling of multiple equalities
when pushing conditions into materialized views/derived tables. If equalities
extracted from a multiple equality can be pushed into a materialized
view/derived table they should be plainly conjuncted with other pushed
predicates rather than form a separate AND sub-formula.
Because NOW() is set to system time, unless overridden.
And both should follow big manual system time changes,
while still coping with lowres system clocks.
Ignoring system time changes is both confusing and
breaks with restarts.
Vers SQL: TRT fix getting TRX_ID by COMMIT_TS
Fixed wrong assumption that records are ordered by COMMIT_TS.
This is anyway a quick hack until tempesta-tech#314 is done.
See also FIXME and TODO in TR_table::query(MYSQL_TIME, bool).
Test: SEES case for trx_id.test [closes #456]
Lots of changes:
* calculate the current history partition in ::external_lock(),
not in ::write_row() or ::update_row()
* remove dynamically collected per-partition row_end stats
* no full table scan in open_table_from_share to calculate these
stats, no manual MDL/thr_locks in open_table_from_share
* no shared stats in TABLE_SHARE = no mutexes or condition waits when
calculating current history partition
* always compare timestamps, don't convert them to MYSQL_TIME
(avoid DST ambiguity, and it's faster too)
* correct interval handling, 1 month = 1 month, not 30 * 24 * 3600 seconds
* save/restore first partition start time, and count intervals from there
* only allow dropping the first partitions if INTERVAL is used
* when adding new history partitions, split the data in the last history
  partition if it has overflowed
* show partition boundaries in INFORMATION_SCHEMA.PARTITIONS
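An illustrative table the new interval handling applies to (names are
examples):
CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
(PARTITION p0 HISTORY, PARTITION pn CURRENT);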
is not supported
Allowed to use recursive references in derived tables.
As a result usage of recursive references in operands of
INTERSECT / EXCEPT is now supported.
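For example, a recursive reference inside a derived table is now
allowed (illustrative):
WITH RECURSIVE r AS (
  SELECT 1 AS a
  UNION
  SELECT dt.a + 1 FROM (SELECT a FROM r) AS dt WHERE dt.a < 5
)
SELECT * FROM r;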
enum_mark_columns -> enum_column_usage
mark_used_columns -> column_usage
further commits will replace MARK_COLUMN_NONE with
COLUMN_READ and COLUMN_WRITE that convey the intention
without causing columns to be marked
Handle string length as size_t, consistently (almost always:))
Change function prototypes to accept size_t, where in the past
ulong or uint were used. change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
This will make it easier to see how memory allocation is done when debugging
with either DBUG or gdb.
Will especially help when debugging stored procedures
The main change is a name argument as the second argument to init_alloc_root()
and init_sql_alloc().
Other things:
- Added DBUG_ENTER/EXIT to some Virtual_tmp_table functions
gcc7 warning:
sql/table.cc: In member function ‘int TABLE_SHARE::init_from_binary_frm_image(THD*, bool, const uchar*, size_t)’:
sql/table.cc:2032:11: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (vers_can_native)
^~
sql/table.cc:2037:9: note: here
default:
^~~~~~~
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because the processlist db wasn't always
  correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length of table/db names after my_casedn_str()
- Removed not used version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
Now we don't open partitions if it was explicitly specified.
ha_partition::m_opened_partition bitmap added to track
partitions that were actually opened.