Decimals with float, double and decimal now work the following way:
- DECIMAL_NOT_SPECIFIED is used when declaring DECIMALS without a firm number
of decimals. It's only used in asserts and my_decimal_int_part.
- FLOATING_POINT_DECIMALS (31) is used to mark that a FLOAT or DOUBLE
was defined without decimals. This is regarded as a floating point value.
- Max decimals allowed for FLOAT and DOUBLE is FLOATING_POINT_DECIMALS-1
- Clients assume that float and double with decimals >= NOT_FIXED_DEC are
floating point values (no decimals)
- In the .frm file, decimals=FLOATING_POINT_DECIMALS is used to mark
floating point for float and double (31, as before)
To ensure compatibility with old clients we do:
- When storing float and double, we change NOT_FIXED_DEC to
FLOATING_POINT_DECIMALS.
- When creating fields from a .frm file, we change FLOATING_POINT_DECIMALS
to NOT_FIXED_DEC for float and double
- When sending the definition for a float/double field without decimals
to the client as part of a result set, we convert NOT_FIXED_DEC to
FLOATING_POINT_DECIMALS (sketched in the example below)
- variance() and std() have been changed to limit the decimals to
FLOATING_POINT_DECIMALS-1 so that the returned double is not treated as a
floating point value. (This was done to preserve compatibility.)
- FLOAT and DOUBLE still have 30 as max number of decimals.
Bugs fixed:
variance() printed more decimals than we support for double values.
New behaviour:
- Strings now have 38 decimals instead of 30 when converted to decimal
- CREATE ... SELECT with a decimal with > 30 decimals will create a column
with a smaller range than before as we are trying to preserve the number of
decimals.
Other changes:
- We are now using the obsolete bit FIELDFLAG_LEFT_FULLSCREEN to specify
decimals > 31
- NOT_FIXED_DEC is now declared in one place
- For clients, NOT_FIXED_DEC is always 31 (to ensure compatibility).
On the server NOT_FIXED_DEC is DECIMAL_NOT_SPECIFIED (39)
- AUTO_SEC_PART_DIGITS is taken from DECIMAL_NOT_SPECIFIED
- DOUBLE conversion functions are now using DECIMAL_NOT_SPECIFIED instead of
NOT_FIXED_DEC
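To make the compatibility rule concrete, here is a minimal sketch of the
conversion described above. The constant values match the text, but
decimals_for_client() is a hypothetical helper, not the server's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Values as described above; the real definitions live in the server headers.
static const uint8_t FLOATING_POINT_DECIMALS = 31; // "floating point" marker on the wire
static const uint8_t DECIMAL_NOT_SPECIFIED   = 39; // server-internal NOT_FIXED_DEC

// What a float/double field's decimals look like to a client: anything at
// or above the marker is sent as 31, as old clients expect.
static uint8_t decimals_for_client(uint8_t server_decimals)
{
  if (server_decimals >= FLOATING_POINT_DECIMALS)
    return FLOATING_POINT_DECIMALS;
  return server_decimals; // a fixed number of decimals, sent unchanged
}

int main()
{
  assert(decimals_for_client(DECIMAL_NOT_SPECIFIED) == FLOATING_POINT_DECIMALS);
  assert(decimals_for_client(2) == 2);
  return 0;
}
```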
mysqld maintains in THD a list of TABLE objects for all temporary
tables created within a session, each table being represented by a
TABLE object.
A query referencing a particular temporary table more than once,
however, failed with an ER_CANT_REOPEN_TABLE error, because a
TABLE_SHARE was allocated together with the TABLE, so temporary
tables always had exactly one TABLE per TABLE_SHARE.
This patch lifts that restriction by separating TABLE and
TABLE_SHARE objects and storing TABLE_SHAREs for temporary
tables in a list in THD, and TABLEs in a list within their
respective TABLE_SHAREs (see the sketch below).
- unused TABLE_SHARE::deleting and TABLE_LIST::deleting flags were removed
- kill_delayed_threads_for_table() and intern_close_table() are now private
methods of table cache
- removed free_share flag of closefrm(): it was never used for temporary
tables and was rarely useful for regular tables
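A self-contained model of the new layout, with illustrative names rather
than the server's actual classes: each temporary table definition gets one
share, and any number of open instances hang off it, so reopening no
longer fails:

```cpp
#include <list>
#include <string>

struct Table { int handler_state = 0; };  // one open use of a table

struct TableShare                         // one per temporary table definition
{
  std::string name;
  std::list<Table> open_tables;           // all TABLEs sharing this definition
};

struct Session                            // stands in for THD
{
  std::list<TableShare> temp_table_shares;
};

// Opening the same temporary table twice now just adds a second instance
// to its share instead of failing with ER_CANT_REOPEN_TABLE.
static Table *open_temp_table(Session &s, const std::string &name)
{
  for (TableShare &share : s.temp_table_shares)
    if (share.name == name)
    {
      share.open_tables.emplace_back();
      return &share.open_tables.back();
    }
  return nullptr; // not a temporary table of this session
}

int main()
{
  Session s;
  s.temp_table_shares.push_back({"t1", {}});
  Table *a = open_temp_table(s, "t1");
  Table *b = open_temp_table(s, "t1"); // second use of t1 in the same query
  return (a && b && a != b) ? 0 : 1;
}
```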
- To ensure that mallocs are marked for the correct THD, even if the THD is
allocated in another thread, I added the thread_id to the THD constructor
- Added st_my_thread_var to thr_lock_info_init() to avoid a call to my_thread_var
- Moved things from THD::THD() to THD::init()
- Moved some things to THD::cleanup()
- Added THD::free_connection() and THD::reset_for_reuse()
- Added THD to CONNECT::create_thd()
- Added THD::thread_dbug_id and st_my_thread_var->dbug_id. These are needed
to ensure that we have a constant thread id to use for debugging with a THD,
even if its thread_id (= connection_id) changes (see the sketch below)
- Set variables.pseudo_thread_id in the constructor. Removed sets that are no longer needed.
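As an illustration of the dbug_id idea (not the server's code; the names
here are stand-ins): the debug id is fixed at construction, while the
user-visible thread_id may be reassigned when a THD is reused:

```cpp
#include <cassert>
#include <cstdint>

struct SessionIds
{
  const uint64_t dbug_id; // constant for the lifetime of the object
  uint64_t thread_id;     // connection id; may change on reuse

  explicit SessionIds(uint64_t id) : dbug_id(id), thread_id(id) {}

  void reuse_for_connection(uint64_t new_connection_id)
  {
    thread_id = new_connection_id; // dbug_id intentionally stays the same
  }
};

int main()
{
  SessionIds ids(17);
  ids.reuse_for_connection(42);
  assert(ids.dbug_id == 17 && ids.thread_id == 42);
  return 0;
}
```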
* simplify the code at default: label
* bugfix: flush the checksum for NULL fields
(perfschema.checksum was failing)
* cleanup: put repeated code into a function
Checksum implementations contain optimizations for calculating
checksums of larger blocks of memory.
This optimization calls my_checksum on one larger block of memory
rather than calling it on multiple adjacent pieces of memory while
going through the table columns for each table row.
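The optimization is valid because the checksum is computed incrementally,
so one call over a contiguous span equals several calls over its adjacent
pieces. A toy demonstration, with my_checksum replaced by a simple stand-in
that has the same incremental shape (the real function is CRC-based):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-in for my_checksum(): any checksum that carries its state in the
// first argument and consumes bytes left to right has this property.
static uint32_t my_checksum(uint32_t crc, const unsigned char *pos, size_t len)
{
  while (len--)
    crc = (crc >> 1) + ((crc & 1U) << 31) + *pos++;
  return crc;
}

int main()
{
  unsigned char row[12] = {1,2,3,4,5,6,7,8,9,10,11,12}; // three adjacent 4-byte columns

  uint32_t per_column = 0;                 // one call per column...
  for (int c = 0; c < 3; c++)
    per_column = my_checksum(per_column, row + c * 4, 4);

  uint32_t per_block = my_checksum(0, row, sizeof(row)); // ...or one call per row

  assert(per_column == per_block);
  return 0;
}
```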
Simplified the use of filesort and init_read_record() for the same table.
This will simplify code for WINDOW FUNCTIONS (MDEV-6115); a sketch of the
new ownership flow follows the change list below.
- Filesort_info renamed to SORT_INFO and moved to filesort.h
- filesort now returns SORT_INFO
- init_read_record() now takes a SORT_INFO parameter.
- unique declaration is moved to uniques.h
- subselect caching of buffers is now more explicit than before
- filesort_buffer is now reusable even if rec_length has changed.
- filesort_free_buffers() and free_io_cache() calls are removed
- Remove one malloc() when using get_addon_fields()
Other things:
- Added --debug-assert-on-not-freed-memory option to make it easier to
debug some not-freed-memory issues.
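A self-contained model of the new flow, with illustrative names (the real
interface is SORT_INFO in filesort.h): the sort step returns one object
that the read step consumes, instead of both sides sharing buffers stashed
on the table:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct SortResult                       // stands in for SORT_INFO
{
  std::vector<size_t> sorted_positions; // sorted row ids; in the server this
                                        // would also own buffers / IO_CACHE
};

static SortResult do_filesort(const std::vector<int> &rows)
{
  SortResult res;
  for (size_t i = 0; i < rows.size(); i++)
    res.sorted_positions.push_back(i);
  std::sort(res.sorted_positions.begin(), res.sorted_positions.end(),
            [&rows](size_t a, size_t b) { return rows[a] < rows[b]; });
  return res;                           // like the new filesort() returning SORT_INFO
}

// Like init_read_record() taking the sort result as an explicit parameter.
static void read_sorted(const std::vector<int> &rows, const SortResult &sort)
{
  for (size_t pos : sort.sorted_positions)
    std::printf("%d\n", rows[pos]);
}

int main()
{
  std::vector<int> rows = {42, 7, 19};
  SortResult sort = do_filesort(rows);
  read_sorted(rows, sort);
  return 0; // sort's buffers are freed in one place, when it goes out of scope
}
```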
fix a few cases where a successful ALTER was not binlogged:
* on errors after the completed ALTER, binlog it, then return an error
* don't let thd->killed abort open_table() after a completed
online ALTER.
"Re-factor the code for post-join operations".
The patch mainly contains the code ported from mysql-5.6 and
created for two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm
The first task was implemented for mysql-5.6 by Ole John Aske.
It makes it possible to take all decisions on the ORDER BY operation at
the optimization stage.
The second task, implemented for mysql-5.6 by Evgeny Potemkin, adds JOIN_TAB
nodes for post-join operations that require temporary tables. It allows
these operations to be executed within the nested loops algorithm, which
before this task was used only for join queries. Besides that, this task
moves all planning of the execution of these operations from the execution
phase to the optimization phase.
Some other re-factoring changes of mysql-5.6 were pulled in, mainly because
it was easier to pull them in than roll them back. In particular all
changes concerning Ref_ptr_array were incorporated.
The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.
ANALYSIS:
=========
A valgrind error is reported when CREATE TABLE .. SELECT
involving BIT columns triggers a column type redefinition.
In general, the pack_flag is set for BIT columns in
'mysql_prepare_create_table()'. However, during the above
operation, redefined column types were handled after the
special handling for BIT columns, and thus pack_flag ended
up not being set correctly, triggering the valgrind error.
FIX:
====
The patch fixes this problem by setting pack_flag correctly
for BIT columns in the case of column type redefinition, as
modelled in the sketch below.
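A miniature of the ordering problem, using made-up types and flag values
rather than the server's: pack_flag has to be computed from the column's
final type, i.e. after the redefinition is applied:

```cpp
#include <cassert>
#include <cstdint>

enum Type { TYPE_BIT, TYPE_INT };

struct Column
{
  Type type;
  uint32_t pack_flag;
};

static uint32_t pack_flag_for(Type t)
{
  return t == TYPE_BIT ? 0x0800u : 0x0001u; // invented values, for illustration
}

int main()
{
  Column col = {TYPE_INT, pack_flag_for(TYPE_INT)}; // type from the SELECT part

  // The bug: the explicit redefinition to BIT was applied after the
  // special BIT handling, so pack_flag was never recomputed.
  col.type = TYPE_BIT;

  // The fix: set pack_flag for BIT columns when the type is redefined.
  col.pack_flag = pack_flag_for(col.type);

  assert(col.pack_flag == pack_flag_for(TYPE_BIT));
  return 0;
}
```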
On shutdown, the feedback plugin was sending a short report without
creating a THD. At that point current_thd was pointing to the already
destroyed THD from the previous full report.
backport from 10.1:
commit bfe703a
Author: Sergei Golubchik <serg@mariadb.org>
Date: Tue Feb 3 18:19:56 2015 +0100
don't let current_thd to point to a destroyed THD
Problem:
========
1) Drop table queries are re-generated by the server
before writing the events (queries) into the binlog,
for various reasons. If the table name/db name contains
non-regular characters (like latin characters),
the generated query is wrong. Hence it breaks
replication.
2) In the edge case when the table name/db name contains
64 characters, the server throws an assert:
assert(M_TBLLEN < 128)
3) In the edge case when the db name contains 64 latin
characters, the binlog content is interpreted badly,
which leads to replication failure.
Analysis & Fix :
================
1) The parser reads the table name from the query, converts it to the
standard charset (utf8) and stores it in the table_name variable.
When the drop table query is regenerated with the same table_name
variable, it should be converted back to the original charset
from the standard charset (utf8).
2) A latin character takes two bytes per character, the identifier
limit is 64 characters, and SYSTEM_CHARSET_MBMAXLEN is set to 3.
So there is a possibility that the table name/db name takes up
3 * 64 bytes. Hence the assert is changed to
(M_TBLLEN <= NAME_CHAR_LEN*SYSTEM_CHARSET_MBMAXLEN)
3) db_len in the binlog event header takes 1 byte, and
db_len ranges from 0 to 192 bytes (3 * 64).
While reading db_len from the event, the server was casting it
to uint instead of uchar; a value above 127 gets sign-extended
on the way to uint, leading to a bad db_len. This problem is fixed
by changing the cast type to uchar (demonstrated below).
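The sign-extension effect can be shown in a few lines; db_len and the
value 192 come from the analysis above, the rest is a plain C++
demonstration:

```cpp
#include <cassert>
#include <cstdio>

int main()
{
  // db_len occupies one byte in the event header and can reach 192 (3 * 64).
  signed char raw = (signed char) 192;     // the byte seen through a signed type: -64

  unsigned int bad  = (unsigned int) raw;  // sign-extended: 4294967232
  unsigned int good = (unsigned char) raw; // the intended length: 192

  std::printf("bad=%u good=%u\n", bad, good);
  assert(good == 192 && bad != 192);
  return 0;
}
```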
This includes fixing all utilities to not have any memory leaks,
as safemalloc warnings stopped tests from passing on MacOSX.
- Ensure that all clients take character-set-dir, as the
libmysqlclient library will use it.
- mysql-test-run now passes character-set-dir to all external clients.
- Changed dynstr_free() so that it can be called twice (made freeing code easier)
- Changed rpl_global_gtid_slave_state to be allocated dynamically as it
includes a mutex that needs to be initialized/destroyed before my_end() is called.
- Removed rpl_slave_state::init() and rpl_slave_state::deinit() as
their jobs are better handled by the constructor and destructor.
- Print alias instead of table_name in check_duplicate_key as
table_name may have been converted to lower case.
Other things:
- Fixed a case in time_to_datetime_with_warn() where we were
using && instead of & in a test (see the sketch below)
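The && vs & difference matters whenever the operands are bit flags; a
minimal illustration (the flag values here are made up, not the actual
time_to_datetime_with_warn() flags):

```cpp
#include <cassert>

int main()
{
  unsigned int flags  = 0x4; // some unrelated flag happens to be set
  unsigned int wanted = 0x2; // the flag the test is actually about

  // Logical && only tests that both sides are non-zero, so it "matches"
  // even though the wanted bit is not set; bitwise & tests the bit itself.
  assert((flags && wanted) == true);
  assert((flags & wanted) == 0);
  return 0;
}
```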
The following were left in a semi-improved state to keep the patch size reasonable:
- Field operator new: left thd_alloc(current_thd)
- Sql_alloc operator new: left thd_alloc(thd_get_current_thd())
- Item_args constructors: left thd_alloc(thd)
- Item_func_interval::fix_length_and_dec(): no THD arg, have to call current_thd
- Item_func_dyncol_exists::val_int(): same
- Item_dyncol_get::val_str(): same
- Item_dyncol_get::val_int(): same
- Item_dyncol_get::val_real(): same
- Item_dyncol_get::val_decimal(): same
- Item_singlerow_subselect::fix_length_and_dec(): same
Don't write DROP TEMPORARY TABLE to the binary log
if we didn't write the CREATE TEMPORARY TABLE statement.
- Enable old code from one of my older changesets that didn't make it into 10.0
- Fix test cases that failed because they expected DROP TEMPORARY TABLE in the log.
PROBLEM
In 5.5, when doing a rename of a column, we ignore the case difference
between the old and new column names while comparing them. So if the
change is only in case, we don't even mark the field FIELD_IS_RENAMED;
we just update the .frm file but don't recreate the table, as is the
norm when ALTER is used. This leads to an inconsistency in the InnoDB
data dictionary which causes index creation to fail.
FIX
According to the documentation, any InnoDB column rename should trigger
a rebuild of the table. Therefore, for InnoDB tables we will do a strcmp()
between the column names, and if there is a case change in the column name
we will trigger a rebuild (illustrated below).
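The comparison change in miniature: a case-insensitive compare treats a
case-only rename as "unchanged", while a byte-exact strcmp() detects it.
strcasecmp() is the POSIX case-insensitive compare, used here only for
illustration:

```cpp
#include <cassert>
#include <cstring>
#include <strings.h> // strcasecmp(), POSIX

static bool renamed_5_5(const char *old_name, const char *new_name)
{
  return strcasecmp(old_name, new_name) != 0; // "a" -> "A" not detected
}

static bool renamed_fixed(const char *old_name, const char *new_name)
{
  return strcmp(old_name, new_name) != 0;     // case change is a real rename
}

int main()
{
  assert(!renamed_5_5("a", "A"));  // old behaviour: field not marked renamed
  assert(renamed_fixed("a", "A")); // fixed: InnoDB gets to rebuild the table
  return 0;
}
```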
ALTER TABLE should either bypass enforce-storage-engine, or mysql_upgrade
should refuse to run
Allow the user to alter the contents of an existing table without
enforcing the storage engine. However, enforce the storage engine on
ALTER TABLE x ENGINE=y;
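A sketch of that decision rule under illustrative names (effective_engine()
is hypothetical, not the server's function): the enforced engine only kicks
in when the ALTER statement names an engine explicitly:

```cpp
#include <cassert>
#include <string>

// requested is empty when ALTER TABLE did not specify ENGINE=...
static std::string effective_engine(const std::string &enforced,
                                    const std::string &requested,
                                    const std::string &current)
{
  if (requested.empty())
    return current;                               // plain ALTER keeps the engine
  return enforced.empty() ? requested : enforced; // explicit ENGINE= is enforced
}

int main()
{
  // enforce-storage-engine=InnoDB on a MyISAM table:
  assert(effective_engine("InnoDB", "",       "MyISAM") == "MyISAM");
  assert(effective_engine("InnoDB", "MyISAM", "MyISAM") == "InnoDB");
  return 0;
}
```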
- Part 4: Removing calls to sql_alloc() and sql_calloc()
Other things:
- Added current_thd in some places to make it clear that it's called (easier to remove later)
- Move memory allocation from Item_func_case::fix_length_and_dec() to Item_func_case::fix_fields()
- Added mem_root to some new calls
- Fixed some wrong UNINIT_VAR() calls
- Fixed a bug in generate_partition_syntax() in case of errors
- Added mem_root as an argument to new thread_info
- Simplified my_parse_error() call in sql_yacc.yy
- Part 3: Adding mem_root to push_back() and push_front()
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tags in read_xml()
- Added mem_root to all calls to new Item
- Added a private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item (modelled in the sketch below).
This saves one call to current_thd per Item creation
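A self-contained model of the pattern (toy MEM_ROOT, illustrative class
name): the plain operator new is private, so every Item-like object must be
created against an explicit mem_root, removing the per-creation current_thd
lookup:

```cpp
#include <cstddef>
#include <new>

struct MEM_ROOT                    // toy arena; the real one grows in blocks
{
  alignas(16) char buf[4096];
  size_t used = 0;
  void *alloc(size_t n)
  {
    void *p = buf + used;
    used += (n + 15) & ~size_t(15); // keep allocations aligned
    return p;
  }
};

class Item_example
{
public:
  static void *operator new(size_t size, MEM_ROOT *root)
  {
    return root->alloc(size);      // no current_thd lookup needed
  }
  static void operator delete(void *, MEM_ROOT *) {} // matching placement delete
  int value = 0;

private:
  static void *operator new(size_t); // plain `new Item_example` won't compile
};

int main()
{
  MEM_ROOT root;
  Item_example *item = new (&root) Item_example; // caller must name a mem_root
  item->value = 1;
  return item->value - 1; // arena memory is released wholesale, not per item
}
```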