Decimals with float, double and decimal now work the following way:
- DECIMAL_NOT_SPECIFIED is used when declaring a DECIMAL without a fixed
number of decimals. It's only used in asserts and my_decimal_int_part.
- FLOATING_POINT_DECIMALS (31) is used to mark that a FLOAT or DOUBLE
was defined without decimals. This is regarded as a floating point value.
- Max decimals allowed for FLOAT and DOUBLE is FLOATING_POINT_DECIMALS-1
- Clients assume that float and double with decimals >= NOT_FIXED_DEC are
floating point values (no fixed number of decimals).
- In the .frm, decimals=FLOATING_POINT_DECIMALS is used to define
floating point for float and double (31, as before).
To ensure compatibility with old clients we do the following (a sketch of
the mapping is given after this list):
- When storing float and double, we change NOT_FIXED_DEC to
FLOATING_POINT_DECIMALS.
- When creating fields from a .frm we change FLOATING_POINT_DECIMALS to
NOT_FIXED_DEC for float and double.
- When sending the definition of a float/double field without decimals
to the client as part of a result set, we convert NOT_FIXED_DEC to
FLOATING_POINT_DECIMALS.
- variance() and std() have been changed to limit their decimals to
FLOATING_POINT_DECIMALS-1, so that the double result is not converted to
a floating point value. (This was done to preserve compatibility.)
- FLOAT and DOUBLE still have 30 as max number of decimals.
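A minimal sketch of the float/double compatibility mapping described
above, using the constant values from this list; the helper function
names are hypothetical, not the actual server code:

    #include <cassert>

    // Values from the description above.
    static const unsigned CLIENT_NOT_FIXED_DEC    = 31; // what old clients expect
    static const unsigned FLOATING_POINT_DECIMALS = 31; // stored in the .frm
    static const unsigned DECIMAL_NOT_SPECIFIED   = 39; // server NOT_FIXED_DEC

    // When storing a FLOAT/DOUBLE definition (e.g. into the .frm).
    unsigned decimals_to_frm(unsigned dec)
    {
      return dec == DECIMAL_NOT_SPECIFIED ? FLOATING_POINT_DECIMALS : dec;
    }

    // When creating a FLOAT/DOUBLE field from the .frm.
    unsigned decimals_from_frm(unsigned dec)
    {
      return dec == FLOATING_POINT_DECIMALS ? DECIMAL_NOT_SPECIFIED : dec;
    }

    // When sending a FLOAT/DOUBLE field definition in a result set.
    unsigned decimals_to_client(unsigned dec)
    {
      assert(dec < FLOATING_POINT_DECIMALS || dec == DECIMAL_NOT_SPECIFIED);
      return dec == DECIMAL_NOT_SPECIFIED ? CLIENT_NOT_FIXED_DEC : dec;
    }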
Bugs fixed:
variance() printed more decimals than we support for double values.
New behaviour:
- Strings now have 38 decimals instead of 30 when converted to decimal
- CREATE ... SELECT with a decimal with > 30 decimals will create a column
with a smaller range than before, as we now try to preserve the number of
decimals.
Other changes:
- We are now using the obsolete bit FIELDFLAG_LEFT_FULLSCREEN to specify
decimals > 31
- NOT_FIXED_DEC is now declared in one place
- For clients, NOT_FIXED_DEC is always 31 (to ensure compatibility).
On the server NOT_FIXED_DEC is DECIMAL_NOT_SPECIFIED (39)
- AUTO_SEC_PART_DIGITS is taken from DECIMAL_NOT_SPECIFIED
- DOUBLE conversion functions are now using DECIMAL_NOT_SPECIFIED instead of
NOT_FIXED_DEC
mysqld maintains a list of TABLE objects for all temporary
tables created within a session in THD. Here each table is
represented by a TABLE object.
A query referencing a particular temporary table more than once, however,
failed with an ER_CANT_REOPEN_TABLE error, because a TABLE_SHARE was
allocated together with the TABLE, so temporary tables always had exactly
one TABLE per TABLE_SHARE.
This patch lifts this restriction by separating the TABLE and TABLE_SHARE
objects: TABLE_SHAREs for temporary tables are stored in a list in THD,
and TABLEs in a list within their respective TABLE_SHAREs.
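A minimal sketch of the new ownership layout, with std::list standing in
for the server's intrusive lists and the structures reduced to what
matters here:

    #include <list>

    struct TABLE;                      // one open instance of a table

    struct TABLE_SHARE                 // one per temporary table definition
    {
      std::list<TABLE*> used_tables;   // all TABLE instances for this share
    };

    struct THD                         // session object
    {
      // Before: one TABLE (with an embedded TABLE_SHARE) per temporary
      // table, so the same temporary table could not be opened twice in
      // one query. After: shares live in the session and instances live
      // in the share, so a second reference to the same temporary table
      // simply adds another TABLE to used_tables.
      std::list<TABLE_SHARE*> temporary_tables;
    };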
- unused TABLE_SHARE::deleting and TABLE_LIST::deleting flags were removed
- kill_delayed_threads_for_table() and intern_close_table() are now private
methods of table cache
- removed free_share flag of closefrm(): it was never used for temporary
tables and was rarely useful for regular tables
Benefits:
- Speeds up insert, update and delete by avoiding 1-2 function calls per
write/update/delete.
- Avoids calling write_locked_table_maps() when not needed.
- The inlined code is much smaller than before.
- Updating of table->s->cached_row_logging_check was moved to when the
table is opened.
- Moved some bool members together in the handler class to get better
alignment.
"Re-factor the code for post-join operations".
The patch mainly contains code ported from mysql-5.6 that was created for
two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm
The first task was implemented for mysql-5.6 by Ole John Aske. It makes
it possible to take all decisions on the ORDER BY operation at the
optimization stage.
The second task, implemented for mysql-5.6 by Evgeny Potemkin, adds
JOIN_TAB nodes for post-join operations that require temporary tables.
It allows these operations to be executed within the nested loops
algorithm, which before this task was used only for join queries. Besides
that, this task moves all planning of the execution of these operations
from the execution phase to the optimization phase.
Some other re-factoring changes from mysql-5.6 were pulled in, mainly
because it was easier to pull them in than to roll them back. In
particular, all changes concerning Ref_ptr_array were incorporated.
The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.
The following were left in a semi-improved state to keep the patch size
reasonable:
- Field operator new: left thd_alloc(current_thd)
- Sql_alloc operator new: left thd_alloc(thd_get_current_thd())
- Item_args constructors: left thd_alloc(thd)
- Item_func_interval::fix_length_and_dec(): no THD arg, have to call current_thd
- Item_func_dyncol_exists::val_int(): same
- Item_dyncol_get::val_str(): same
- Item_dyncol_get::val_int(): same
- Item_dyncol_get::val_real(): same
- Item_dyncol_get::val_decimal(): same
- Item_singlerow_subselect::fix_length_and_dec(): same
The re-factoring also resulted in some changes in query execution plans.
Fixed by introducing table->rpl_write_set, which specifies which columns
should be stored in the binary log.
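A minimal sketch of the idea; MY_BITMAP and bitmap_is_set() are the mysys
primitives, while the struct and the packing loop are illustrative:

    #include "my_bitmap.h"            // MY_BITMAP, bitmap_is_set() (mysys)

    struct Table_sketch
    {
      MY_BITMAP write_set;            // columns the statement updates
      MY_BITMAP rpl_write_set;        // columns to store in the binary log
      unsigned field_count;
    };

    // Pack only the columns selected for replication into the row event,
    // so engine-side changes to write_set cannot affect the binary log.
    void pack_row_for_binlog(Table_sketch *table)
    {
      for (unsigned i= 0; i < table->field_count; i++)
      {
        if (bitmap_is_set(&table->rpl_write_set, i))
        {
          /* append column i to the row image */
        }
      }
    }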
Other things:
- Removed some unneeded references to read_set and write_set, to make the
code that actually changes read_set and write_set easier to read
(in opt_range.cc)
- Added error handling for a failed unpack_current_row()
- Added a missing call to mark_columns_needed_for_insert() for DELAYED
INSERT
- Removed the unused functions in_read_set() and in_write_set()
- In rpl_record.cc, removed the unused variable 'error'
The bitmap implementation defines two Bitmap template classes: one
optimized for 64-bit wide bitmaps (the default) and one used for all
other widths.
To optimize the computations, the Bitmap<64> class defines its own member
functions for bitmap operations; the other class relies on the mysys
bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the non 64-bit Bitmap class, intersect() wrongly reset the received
bitmap: it initialised a new local bitmap structure over the received
buffer, and since bitmap_init() clears the bitmap buffer, the received
bitmap was getting cleared.
Fixed by initializing the local bitmap structure over a temporary buffer
and copying the received bitmap into the initialised structure afterwards
(a sketch follows).
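A sketch of the corrected intersect(), in the style of sql/sql_bitmap.h;
bitmap_init() and bitmap_intersect() are the mysys calls, the exact code
is illustrative:

    // Inside the non 64-bit Bitmap class ('map' is its MY_BITMAP member):
    void intersect(ulonglong map2buff)
    {
      ulonglong buff2;               // temporary buffer for the local bitmap
      MY_BITMAP map2;
      bitmap_init(&map2, (my_bitmap_map*) &buff2, sizeof(ulonglong) * 8, 0);
      buff2= map2buff;               // copy AFTER init has cleared the buffer
      bitmap_intersect(&map, &map2); // the received value stays intact
    }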
Issue 2:
The non 64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when it is
supplied on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks
have been put in place to trigger a build failure if the MAX_INDEXES
value is greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping of tests for which the output differs with different
MAX_INDEXES.
* Introduced include/max_indexes.inc, which gets modified by cmake to
reflect the MAX_INDEXES value used to build the server. This file
simply sets an mtr variable '$max_indexes' to the MAX_INDEXES value,
which is then consumed by the include files introduced above.
* Some tests (or portions of tests) that depend on the MAX_INDEXES value
have been moved to separate test files.
__MEMMOVE_SSSE3_BACK FROM STRING::COPY
Issue:
-----
While using row comparators, the store_value functions call
val_xxx functions in the prepare phase. This can cause
valgrind issues.
SOLUTION:
---------
Setting up of the comparators should be done by alloc_comparators in the
prepare phase. Also, make sure store_value is called only during the
execute phase.
This is a backport of the fix for Bug#17755540.
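A sketch of the intended phase split, using a simplified stand-in class;
alloc_comparators and store_value come from the description above,
everything else is illustrative:

    class Row_comparator_sketch
    {
    public:
      bool fix_fields()                  // prepare phase
      {
        // Only allocate the per-column comparators here; do not call
        // val_xxx()/store_value() on values that cannot be evaluated yet.
        return alloc_comparators();
      }

      int compare()                      // execute phase
      {
        store_value();                   // val_xxx() is safe to call now
        /* ... actual comparison ... */
        return 0;
      }

    private:
      bool alloc_comparators() { /* allocate comparator array */ return false; }
      void store_value()       { /* calls val_xxx() on the argument */ }
    };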
Problem: Not all permanent Item_direct_view_ref objects were in the
permanent list of used items of the view.
Solution: Detect the creation of permanent view/derived table references
and put them in the permanent list at once.
- Part 3: Adding mem_root to push_back() and push_front()
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tags in read_xml()
- Added mem_root to all calls to new Item
- Added a private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item (see the sketch after
this list). This saves one call to current_thd per Item creation.
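A minimal sketch of the allocation change, assuming the mysys alloc_root()
allocator; the Item body is reduced to the allocation operators:

    #include <cstddef>

    struct MEM_ROOT;                                 // mysys memory pool
    void *alloc_root(MEM_ROOT *root, size_t size);   // mysys allocator

    class Item
    {
    public:
      // Allocate the Item on the given mem_root.
      static void *operator new(size_t size, MEM_ROOT *mem_root) throw()
      { return alloc_root(mem_root, size); }

      // Matching placement delete (used only if a constructor throws).
      static void operator delete(void *, MEM_ROOT *) {}
      static void operator delete(void *, size_t) {}

    private:
      // Plain operator new is private: a bare "new Item(...)" does not
      // compile, so every Item allocation must name a mem_root.
      static void *operator new(size_t size);
    };

Call sites then read "new (thd->mem_root) Item_int(thd, 1)", so no
current_thd lookup is needed to find the mem_root.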
Added a mandatory thd parameter to the constructors of Item (and all
derived classes).
Added a thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known, or when
there were many calls to current_thd in the same function (see the
sketch after this list).
- Changed ER(ER_...) to ER_THD_OR_DEFAULT(current_thd, ER_...) in some
places where current_thd is not necessarily defined.
- Removed calls to current_thd where we already have access to thd.
Part of this is optimization (not calling current_thd when not needed),
but part of it is bug fixing for error conditions where current_thd is
not defined (for example, at startup and shutdown of mysqld).
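A sketch of the usage pattern; ER_THD and ER_THD_OR_DEFAULT are the real
macros, the wrapper functions are hypothetical and assume the server's
usual headers:

    const char *unknown_error_msg(THD *thd)
    {
      // thd is passed explicitly: no pthread_getspecific() call through
      // current_thd is needed to find the session's error messages.
      return ER_THD(thd, ER_UNKNOWN_ERROR);
    }

    const char *unknown_error_msg_maybe_no_thd()
    {
      // Safe even where no THD is attached (startup/shutdown of mysqld).
      return ER_THD_OR_DEFAULT(current_thd, ER_UNKNOWN_ERROR);
    }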
Notable renames, done because otherwise a lot of functions would have had
to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed old 'tab' prefix in JOIN_TAB::save_explain_data() and use members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
When writing rows with a minimal row image, it is possible to receive
empty events. In that case m_curr_row and m_rows_end are equal; however,
the event implies an insert into the table with the default values
associated with that table.
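A sketch of how the empty-event case can be handled; restore_record() and
handler::ha_write_row() are the real server primitives, the surrounding
code is illustrative:

    // Inside the row event's write handler:
    if (m_curr_row == m_rows_end)
    {
      // A minimal row image with zero columns means "insert a row
      // consisting entirely of the table's default values".
      restore_record(table, s->default_values);  // defaults -> record[0]
      error= table->file->ha_write_row(table->record[0]);
    }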
Depending on which binlog_row_image we are using, we must mark the
columns to update differently in both the before image and the after
image.
This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555,
MDEV-7590, MDEV-7581 and MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset
for re-execution, and this caused an overwrite of a random memory
position.
The fix was to move non_agg_fields from select_lex to JOIN, which is
properly reset.
Performance schema discovery fails if the connection has no active
database set. This happens due to a restriction in the SQL parser: a
table name with no database name is ambiguous in that case.
Fixed by temporarily substituting the default database with the database
of the table being discovered.
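A sketch of the workaround, with hypothetical helper names standing in
for the server's THD database accessors:

    class THD;
    const char *thd_current_db(THD *thd);          // hypothetical accessor
    void thd_set_db(THD *thd, const char *db);     // hypothetical setter
    bool run_discovery_parse(THD *thd);            // hypothetical parse call

    // Parse the discovered table's definition with its own database
    // temporarily installed as the default, so the unqualified table name
    // in the definition is no longer ambiguous.
    bool discover_with_db(THD *thd, const char *table_db)
    {
      const char *saved_db= thd_current_db(thd);
      thd_set_db(thd, table_db);
      bool err= run_discovery_parse(thd);
      thd_set_db(thd, saved_db);                   // restore the user's db
      return err;
    }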
Added taking into account the implicit dependency of a constant view
field on a nullable table of a left join.
Fixed finding the real table when checking whether it turned to NULL
(materialized views & derived tables are taken into account).
Removed an incorrect uninitialization.
This bug manifests due to wrong computation and evaluation of
keyinfo->key_length. The issues were (a sketch of both fixes follows the
list):
* Using table->file->max_key_length() as an absolute value that must not
be reached by a key, while it actually represents the maximum number of
bytes possible for a table key.
* Incorrectly computing the keyinfo->key_length size during KEY_PART_INFO
creation: the metadata for the key, such as the field length (for
strings), was added twice.
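A sketch of both fixes; handler::max_key_length(), Field::key_length()
and ER_TOO_LONG_KEY are real server names, the surrounding code is
illustrative:

    // Fix 1: max_key_length() is the maximum usable key size, so only a
    // key that *exceeds* it must be rejected (it was previously treated
    // as a value that may never be reached).
    if (keyinfo->key_length > table->file->max_key_length())
    {
      my_error(ER_TOO_LONG_KEY, MYF(0), table->file->max_key_length());
      return true;
    }

    // Fix 2: count the field's storage length exactly once when building
    // the KEY_PART_INFO, instead of adding the string metadata a second
    // time on top of an already-counted length.
    key_part_info->length= (uint16) field->key_length();
    keyinfo->key_length+= key_part_info->length;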
* error reporting was never needed
* avoid the useless transformation of hton to db_type and back to hton
* in many cases the engine was guaranteed to exist; added asserts
* use ha_default_handlerton() instead of ha_checktype(DB_TYPE_DEFAULT)