numbers into char fields" and bug #12860 "Difference in zero padding of
exponent between Unix and Windows"
Rewrote the code that determines what 'precision' argument should be
passed to sprintf() to fit the string representation of the input number
into the field.
We get finer control over conversion by pre-calculating the exponent, so
we are able to determine which conversion format, 'e' or 'f', will be
used by sprintf().
We also remove the leading zero from the exponent on Windows to make it
compatible with the sprintf() output on other platforms.
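As a standalone illustration of the idea (the helper name and the exact
precision arithmetic here are assumptions, not the server code): pre-compute
the decimal exponent to know which conversion sprintf() will use, then derive
the largest precision that still fits the field.

  #include <math.h>
  #include <stdio.h>

  /* Illustrative helper: choose 'e' or 'f' conversion and a precision
     so that the printed value fits into field_length characters. */
  static void format_into_field(double nr, int field_length,
                                char *buf, size_t buflen)
  {
    int exp= (nr == 0.0) ? 0 : (int) floor(log10(fabs(nr)));
    if (exp < 0 || exp >= field_length - 1)
    {
      int prec= field_length - 7;          /* room for sign, point, e+NN */
      snprintf(buf, buflen, "%.*e", prec < 0 ? 0 : prec, nr);
    }
    else
    {
      int prec= field_length - exp - 2;    /* digits that fit after the point */
      snprintf(buf, buflen, "%.*f", prec < 0 ? 0 : prec, nr);
    }
  }

  int main(void)
  {
    char buf[32];
    format_into_field(123456.789, 8, buf, sizeof(buf));
    puts(buf);                             /* prints "123456.8": 8 chars */
    return 0;
  }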
PS-protocol data is stored in a different format: MYSQL_RECORDS->data
contains a link to the record content, not to an array of links to the
fields' contents. So we have to handle it separately for the
embedded-server query cache.
1. A bad error message was given when an attempt was made to use a
MERGE table with an InnoDB child table.
2. After selecting from a correct MERGE table and then altering
one of the children to InnoDB, incorrect results were returned.
These bugs have been fixed with the patch for bug 26379 (Combination
of FLUSH TABLE and REPAIR TABLE corrupts a MERGE table).
For verification, I added the test case from the bug report.
filesort() uses the file->estimate_rows_upper_bound() call to allocate
internal buffers. If this function returns a value smaller than the
number of rows that will be returned later in find_all_keys(), it can
cause a server crash.
Fixed by implementing ha_federated::estimate_rows_upper_bound() to
return the maximum possible number of rows.
The present estimation for FEDERATED always returns 0 if the remote
table linked to is a VIEW.
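A standalone model of the shape of the fix (types reduced to primitives;
HA_POS_ERROR is the server's conventional "maximum/unknown" row count):

  #include <stdio.h>

  typedef unsigned long long ha_rows;     /* modelled after the server type */
  #define HA_POS_ERROR (~(ha_rows) 0)     /* conventional "maximum/unknown" */

  /* Model: FEDERATED cannot know the remote row count up front,
     so report the maximum to keep filesort buffers large enough. */
  static ha_rows estimate_rows_upper_bound(void)
  {
    return HA_POS_ERROR;
  }

  int main(void)
  {
    printf("%llu\n", estimate_rows_upper_bound());
    return 0;
  }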
The parser rejected valid INTERVAL() expressions when they were used as
operands of arithmetic operators. The problem was that the way the
expression and interval grammar rules were organized caused
shift/reduce conflicts.
The solution is to tweak the interval rules to avoid the shift/reduce
conflicts by removing the broken interval_expr rule and explicitly
specifying its contents where necessary.
Original fix by Davi Arnaut, revised and improved rules by Marc Alff
The following clarification should be made in The Manual:
Standard SQL is quite clear that, if new columns are added
to a table after a view on that table is created with
"select *", the new columns will not become part of the view.
In all cases, the view definition (view structure) is frozen
at CREATE time, so changes to the underlying tables do not
affect the view structure.
Default values of variables were not subject to upper/lower bounds
and step, while explicitly set values were. Bounds and step are now
also applied to defaults; defaults are corrected quietly, while values
given by the user are corrected with a correction warning as needed.
Lastly, very large values could wrap around and start from 0 again.
They are now capped at the maximum value of the respective data type
if no lower maximum is specified in the variable's definition.
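A standalone sketch of the correction policy described above (the function
name and the exact rounding rule are assumptions for illustration):

  #include <stdio.h>

  /* Clamp a requested value into [min,max] and round it down to the
     variable's block (step) size, as is now done for defaults too. */
  static unsigned long fix_sysvar_value(unsigned long val,
                                        unsigned long min_val,
                                        unsigned long max_val,
                                        unsigned long block)
  {
    if (val < min_val) val= min_val;
    if (val > max_val) val= max_val;           /* no wrap-around past max */
    return min_val + ((val - min_val) / block) * block;
  }

  int main(void)
  {
    /* e.g. min 1024, max 1048576, step 1024: 4097 is corrected to 4096 */
    printf("%lu\n", fix_sysvar_value(4097, 1024, 1048576, 1024));
    return 0;
  }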
This bug is actually two bugs in one, one of which is CREATE TRIGGER under
LOCK TABLES and the other is CREATE TRIGGER under LOCK TABLES simultaneous
to a FLUSH TABLES WITH READ LOCK (global read lock). Both situations could
lead to a server crash or deadlock.
The first problem arises from the fact that when under LOCK TABLES, if
the table is in the set of locked tables, the table is already open and
doesn't need to be reopened (it's not a placeholder). Also in this case,
if the table is not write locked, an exclusive lock can't be acquired
because of a possible deadlock with another thread also holding a (read)
lock on the table. The second issue arises from the fact that one should
never wait for a global read lock while holding any locked tables,
because the global read lock is waiting for those tables and this leads
to a circular-wait deadlock.
The solution for the first case is to check if the table is write
locked and upgrade the write lock to an exclusive lock, and to fail for
tables that are not write locked. Grabbing the exclusive lock in this
case also ensures that the table is opened only by the calling thread.
The second issue is partly fixed by not waiting for the global read
lock if the thread is holding any locked tables.
The second issue is only partly addressed in this patch because it turned
out to be much wider and also affects other DDL statements. Reported as
Bug#32395
Killing a CREATE TABLE ... LIKE source_table statement that is waiting
for a name-lock on the source table causes a bad lock interaction.
mysql_create_like_table() has a bug: if the connection is killed while
waiting for the name-lock on the source table, it jumps to the wrong
error path and tries to unlock the source table and LOCK_open, but
neither was locked.
The solution is to simply return when the name-lock request is killed;
it's safe to do so because no lock was acquired and no cleanup is
needed.
The original bug report also contains descriptions of other problems
related to this scenario, but they are either already fixed in 5.1 or
will be addressed separately (see the bug report for details).
The bug was that for ordered index scans, ha_partition::index_init() did
not put index columns into table->read_set if the underlying storage
engine did not have HA_PARTIAL_COLUMN_READ flag.
This caused an assertion failure when handle_ordered_index_scan() tried
to sort the records according to the index order.
Fixed by making ha_partition::index_init() put index columns into table->read_set
for all ordered scans.
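A standalone model of the fix (a plain 64-bit mask stands in for the
server's read_set bitmap; column numbers are invented for illustration):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
    uint64_t read_set= 0;                 /* stands in for table->read_set */
    int index_cols[]= {1, 4};             /* columns of the scan index */

    /* what ha_partition::index_init() now does for every ordered scan:
       mark all index columns as read so records can be merge-sorted */
    for (int i= 0; i < 2; i++)
      read_set|= UINT64_C(1) << index_cols[i];

    printf("read_set=0x%llx\n", (unsigned long long) read_set); /* 0x12 */
    return 0;
  }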
There's currently no way of knowing the determinism of a UDF, and the
optimizer and the sequence() UDFs were making wrong assumptions about
what the is_const member means.
Also there was no implementation of update_used_tables(), causing the
optimizer to overwrite the information returned by the <udf>_init
function.
Fixed by aligning the assumptions about the semantics of is_const and
providing an implementation of update_used_tables().
Added a TODO item for the UDF API change needed to make a better
implementation.
Problem: passing a non-constant name to the NAME_CONST function (for
example, a name argument such as RAND() rather than a literal) results
in a crash.
Fix: check the NAME_CONST name argument; return a fake item type if we
got non-constant argument(s).
Loading 4.1 grant tables into 5.0 or 5.1 failed silently because the
procs_priv table was missing. This caused the server to crash on any
attempt to store new grants because of uninitialized structures.
This patch breaks the grant loading function up into two phases so that
a missing procs_priv table produces a warning instead of crashing the
server.
self-join
When doing DELETE with self-join on a MyISAM or MERGE table, it could
happen that a record being retrieved in join_read_next_same() has
already been deleted by previous iterations. That caused the engine's
index_next_same() method to fail with HA_ERR_RECORD_DELETED error and
the whole DELETE query to be aborted with an error.
Fixed by suppressing the HA_ERR_RECORD_DELETED error in
ha_myisam::index_next_same() and ha_myisammrg::index_next_same(). Since
HA_ERR_RECORD_DELETED can only be returned by MyISAM, there is no point
in filtering this error in the SQL layer.
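A standalone model of the suppression loop (error codes as in my_base.h;
the data layout and the engine_index_next_same stand-in are invented):

  #include <stdio.h>

  #define HA_ERR_RECORD_DELETED 134       /* as in my_base.h */
  #define HA_ERR_END_OF_FILE    137

  static int rows[]= {-1, -1, 42};        /* -1 models a deleted record */
  static int pos= 0;

  static int engine_index_next_same(int *out)
  {
    if (pos >= 3)
      return HA_ERR_END_OF_FILE;
    *out= rows[pos++];
    return *out < 0 ? HA_ERR_RECORD_DELETED : 0;
  }

  int main(void)
  {
    int row= 0, error;
    do                                    /* the added retry, instead of
                                             aborting the whole DELETE */
      error= engine_index_next_same(&row);
    while (error == HA_ERR_RECORD_DELETED);
    printf("error=%d row=%d\n", error, row);   /* error=0 row=42 */
    return 0;
  }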
crashes MySQL 5.122
There was a difference in how UNIONs are handled at the top level
and in subqueries.
Because the rules for subqueries syntactically allowed cases that are
not currently supported by the server, we had crashes (this bug) or
wrong results (bug 32051).
Fixed by making the syntax rules for UNIONs in subqueries match the
ones at the top level.
These rules however do not support nesting UNIONs, e.g.
(SELECT a FROM t1 UNION ALL SELECT b FROM t2)
UNION
(SELECT c FROM t3 UNION ALL SELECT d FROM t4)
Support for statements with nested UNIONs will be
added in a future version.
Problems:
1. when looking for a matching partition we missed the fact that the
maximum allowed value is in PARTITION p VALUES LESS THAN MAXVALUE;
2. one could insert the maximum value if the numeric maximum value was
the last range (this should only work with LESS THAN MAXVALUE);
3. one could not have both the numeric maximum value and MAXVALUE as
ranges (the same value, but different meanings).
Fix: consider the maximum value as a supremum.
table cache is full
After reading last record from freshly opened archive table
(e.g. after flush table, or if there is no room in table cache),
the table is reported as crashed.
The problem was that azio wrongly invalidated the azio_stream when it
met EOF.
FLUSH TABLES WITH READ LOCK fails to properly detect write locked
tables when running under low priority updates.
The problem is that when trying to acquire a global read lock, the
reload_acl_and_cache() function fails to properly check if the thread
has a low priority write lock, which later may cause a server crash or
deadlock.
The solution is to simply check whether the thread has any of the
possible exclusive write lock types.
is_last_prefix <= 0, file .\opt_range.cc.
SELECT ... GROUP BY bit field failed with an assertion if the
bit length of that field was not divisible by 8.
Problem: setting the Item_func_rollup_const::null_value property from the argument's
null_value before (i.e. without) evaluating the argument may result in a crash due to
a wrong null_value.
Fix: use is_null() to set Item_func_rollup_const::null_value instead, as it evaluates
the argument if necessary and returns a proper value.
Problem: even if an Item_xml_str_func successor returns NULL, it doesn't have
the corresponding property (maybe_null) set, which leads to a failed assertion.
Fix: set the nullability property (maybe_null) of Item_xml_str_func.
Index lookup does not always guarantee that we can
simply remove the relevant conditions from the WHERE
clause. Reasons can be e.g. conversion errors,
partial indexes etc.
The optimizer was removing these parts of the WHERE condition without
any further checking, which led to "false positives" when using
indexes.
Fixed by re-checking the index reference conditions (via the WHERE
clause) when using indexes with subqueries.
Problem: we have CHECK TABLE options allowed (by accident?) for
ANALYZE/OPTIMIZE TABLE.
Fix: disable them.
Note: it might require additional fixes in 5.1/6.0
Problem: when caching 00000000-00000099 dates as integer values we were
improperly shifting them up twice in get_datetime_value().
Fix: don't shift cached DATETIME values up a second time.
In several cases, an error when processing the query would cause mysql to
return to the top level without printing warnings. Fix is to always
print any available warnings before returning to the top level.
only on some occasions
Referencing an element from the SELECT list in a WHERE
clause is not permitted. The namespace of the WHERE
clause is the table columns only. This was not enforced
correctly when resolving outer references in sub-queries.
Fixed by not allowing references to aliases in the WHERE clause of a
subquery.
The problem is that DROP TABLE and other DDL statements failed to
automatically close handlers associated with tables that were marked
for reopen (FLUSH TABLES).
The current implementation fails to properly discard handlers of
dropped tables (that were marked for reopen) because it searches
on the open handler tables list and using the current alias of the
table being dropped. The problem is that it must not use the open
handler tables list to search because the table might have been
closed (marked for reopen) by a flush tables command and also it
must not use the current table alias at all since multiple different
aliases may be associated with a single table. This is especially
visible when a user has two open handlers (using aliases) of the same
table and a flush tables command is issued before the table is
dropped (see test case). Scanning the handler table list is also
useless for dropping handlers associated with temporary tables,
because temporary tables are not kept in the THD::handler_tables
list.
The solution is to simply scan the handler hash table, searching for
and deleting all handlers with matching table names, if the reopen
flag is not passed to the flush function (indicating that the handlers
should be deleted). All matching handlers are deleted even if the
associated table is not open.
8-bit escape, termination and enclosure characters were silently
ignored by the SELECT INTO OUTFILE query, while the LOAD DATA INFILE
algorithm is 8-bit clean, so data was corrupted during encoding.
Bug#31610 Remove outdated and redundant tests:
partition_02myisam partition_03ndb
Bug#32405 testsuite parts: partition_char_myisam wrong content
and cleanup of testsuite
- remove/correct wrong comments
- remove workarounds for fixed bugs
- replace error numbers with error names
- exclude subtests from execution which fail now because of
new limitations for partitioning functions
- remove code for the no longer intended dual use
(fast test in regression tests / slow test in the test suite)
- analyze and fix problems with partition_char_innodb
- fix problems caused by last change of error numbers
- introduce an error name to error number mapping, which makes
maintenance after the next error renumbering easier
Loose index scan does the grouping so the temp table does
not need to do it, even when sorting.
Fixed by checking whether the grouping is already done before doing
sorting and grouping in a temp table, and doing only the sorting in
that case.
The problem was with LINEAR HASH/KEY partitioning: crashes occurred
because a wrong partition id was returned when creating the new altered
partitions (because of a wrong linear hash mask).
Solution: update the linear hash mask before using it for the new
altered table.
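A standalone sketch of linear-hash partition lookup, showing why a stale
mask maps rows to the wrong partition (simplified; the function name and
values are assumptions):

  #include <stdio.h>

  /* Linear hash: mask is the smallest 2^n - 1 covering all partitions.
     A value masked past the real partition count is folded once more. */
  static unsigned linear_hash_part_id(unsigned hash, unsigned num_parts,
                                      unsigned mask)
  {
    unsigned id= hash & mask;
    if (id >= num_parts)
      id&= mask >> 1;
    return id;
  }

  int main(void)
  {
    /* after ALTER grows the table to 5 partitions, the mask must be 7 */
    printf("%u\n", linear_hash_part_id(4, 5, 7)); /* 4: correct */
    printf("%u\n", linear_hash_part_id(4, 5, 3)); /* 0: stale mask, wrong */
    return 0;
  }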
The problem: ha_partition::read_range_first() could return a record that is
outside of the scanned range. If that record happened to be in the next
subsequent range, it would satisfy the WHERE and appear in the output twice.
(we would get it the second time when scanning the next subsequent range)
Fix:
Made ha_partition::read_range_first() check whether the returned record
is within the scanned range, like other read_range_first()
implementations do.
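A standalone model of the added end-of-range check (data and names are
invented for illustration):

  #include <stdio.h>

  /* One partition's keys; return its first key >= lo, but only if it
     also satisfies the end of the range (the added check vs hi). */
  static int first_key_in_range(const int *keys, int n, int lo, int hi)
  {
    for (int i= 0; i < n; i++)
      if (keys[i] >= lo)
        return keys[i] <= hi ? keys[i] : -1;
    return -1;
  }

  int main(void)
  {
    int part[]= {2, 7};
    /* scanning range [3,5]: 7 is past the range end; without the check
       it would be returned here and again for the next range [6,10] */
    printf("%d\n", first_key_in_range(part, 2, 3, 5));  /* -1: none */
    return 0;
  }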
This bug is actually two. The first one manifests itself on an EXPLAIN
SELECT query with nested subqueries that employs the filesort
algorithm. The whole SELECT under explain is marked as
UNCACHEABLE_EXPLAIN to preserve some temporary structures for explain.
As a side effect, the values of nested subqueries weren't cached and
the subqueries were re-evaluated many times. Each time a buffer for
filesort was allocated but wasn't freed, because freeing occurs at the
end of the topmost SELECT. Thus all available memory was eaten up step
by step and an OOM event occurred.
The second bug manifests itself on SELECT queries with conditions where
a subquery result is compared with a key field and the subquery itself
also has such a condition. When a long chain of such nested subqueries
is present, a stack overrun occurs. This happens because at some point
the range optimizer temporarily puts the PARAM structure on the stack.
Its size is about 8K, so the stack is exhausted very fast.
Now the subselect_single_select_engine::exec function allows subquery result
caching when the UNCACHEABLE_EXPLAIN flag is set.
Now the SQL_SELECT::test_quick_select function calls the check_stack_overrun
function for stack checking purposes to prevent server crash.
The SPATIAL key is actually fine, but the chk_key() function mistakenly
returns an error. It tries to compare checksums of btree and SPATIAL
keys while the checksum for the SPATIAL key isn't calculated (it is
always 0). The same situation with FULLTEXT keys is handled using the
full_text_keys counter, so this is fixed by counting both SPATIAL and
FULLTEXT keys in that counter.
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Made convert_constant_item() check if the constant expression is NULL
before attempting to store it in a field. Attempts to store NULL in a
NOT NULL field caused query errors.
Server failed in assert() when we tried to create a DECIMAL() temp field
with a scale of more than the allowed 30. Now we limit the scale to the
allowed maximum. A truncation warning is thrown as necessary.
Problem: there's no guarantee that the user variable item's result_field
is assigned when we're adjusting its table read map.
Fix: check the result_field before using it.
This is a regression from 2007-05-18 when code to zero out the returned struct was
added to number_to_datetime(); zero for time_type corresponds to MYSQL_TIMESTAMP_DATE.
We now explicitly set the type we return (MYSQL_TIMESTAMP_DATETIME).
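A standalone model of the regression (the enum is reduced to the two
relevant members; in the real enum MYSQL_TIMESTAMP_DATE is likewise 0):

  #include <stdio.h>
  #include <string.h>

  enum ts_type { MYSQL_TIMESTAMP_DATE= 0, MYSQL_TIMESTAMP_DATETIME= 1 };
  struct my_time { int year, month, day; enum ts_type time_type; };

  int main(void)
  {
    struct my_time t;
    memset(&t, 0, sizeof(t));     /* the 2007-05-18 zeroing: time_type
                                     silently becomes ..._DATE (0) */
    t.time_type= MYSQL_TIMESTAMP_DATETIME;   /* the explicit fix */
    printf("%d\n", t.time_type);  /* 1 */
    return 0;
  }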
checked for each record'
The problem was in incorrectly calculated length of the buffer used to
store a hexadecimal representation of an index map in
select_describe(). This could result in buffer overrun and stack
corruption under some circumstances.
Fixed by correcting the calculation.
corrupts a MERGE table
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Bug 25038 - Waiting TRUNCATE
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Bug 19627 - temporary merge table locking
Bug 27660 - Falcon: merge table possible
Bug 30273 - merge tables: Can't lock file (errno: 155)
The problems were:
Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE
corrupts a MERGE table
1. A thread trying to lock a MERGE table performs busy waiting while
REPAIR TABLE or a similar table administration task is ongoing on
one or more of its MyISAM tables.
2. A thread trying to lock a MERGE table performs busy waiting until all
threads that did REPAIR TABLE or similar table administration tasks
on one or more of its MyISAM tables in LOCK TABLES segments do UNLOCK
TABLES. The difference from problem #1 is that the busy waiting
takes place *after* the administration task. It is terminated by
UNLOCK TABLES only.
3. Two FLUSH TABLES within a LOCK TABLES segment can invalidate the
lock. This does *not* require a MERGE table. The first FLUSH TABLES
can be replaced by any statement that requires other threads to
reopen the table. In 5.0 and 5.1 a single FLUSH TABLES can provoke
the problem.
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Trying DML on a MERGE table, which had a child locked and
repaired by another thread, caused an infinite loop in the server.
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Locking a MERGE table and its children in parent-child order
and flushing the child deadlocked the server.
Bug 25038 - Waiting TRUNCATE
Truncating a MERGE child, while the MERGE table was in use,
let the truncate fail instead of waiting for the table to
become free.
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Repairing a child of an open MERGE table corrupted the child.
It was necessary to FLUSH the child first.
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Flushing and optimizing locked MERGE children crashed the server.
Bug 19627 - temporary merge table locking
Use of a temporary MERGE table with non-temporary children
could corrupt the children.
Temporary tables are never locked. So we now prohibit
non-temporary children of a temporary MERGE table.
Bug 27660 - Falcon: merge table possible
It was possible to create a MERGE table with non-MyISAM children.
Bug 30273 - merge tables: Can't lock file (errno: 155)
This was a Windows-only bug. Table administration statements
sometimes failed with "Can't lock file (errno: 155)".
These bugs are fixed by a new implementation of MERGE table open.
When opening a MERGE table in open_tables() we now add the
child tables to the list of tables to be opened by open_tables()
(the "query_list"). The children are not opened in the handler at
this stage.
After opening the parent, open_tables() opens each child from the
now extended query_list. When the last child is opened, we remove
the children from the query_list again and attach the children to
the parent. This behaves similarly to the old open. However, it does
not open the MyISAM tables directly, but grabs them from the already
open children.
When closing a MERGE table in close_thread_table() we detach the
children only. Closing of the children is done implicitly because
they are in thd->open_tables.
For more detail see the comment at the top of ha_myisammrg.cc.
Changed from open_ltable() to open_and_lock_tables() in all places
that can be relevant for MERGE tables. The latter can handle tables
added to the list on the fly. Where open_ltable() was used in a loop
over a list of tables, the list had to be temporarily terminated
after every table for open_and_lock_tables().
table_list->required_type is set to FRMTYPE_TABLE to avoid open of
special tables. Handling of derived tables is suppressed.
These details are handled by the new function
open_n_lock_single_table(), which has nearly the same signature as
open_ltable() and can replace it in most cases.
In reopen_tables() some of the tables opened by a thread can be
closed and reopened. When a MERGE child is affected, the parent
must be closed and reopened too. Closing of the parent is forced
before the first child is closed. Reopen happens in the order of
thd->open_tables. MERGE parents do not attach their children
automatically at open. This is done after all tables are reopened.
So all children are open when attaching them.
Special lock handling like mysql_lock_abort() or mysql_lock_remove()
needs to be suppressed for MERGE children or forwarded to the parent.
This depends on the situation. In loops over all open tables one
suppresses child lock handling. When a single table is touched,
forwarding is done.
Behavioral changes:
===================
This patch changes the behavior of temporary MERGE tables.
Temporary MERGE must have temporary children.
The old behavior was wrong. A temporary table is not locked. Hence
even non-temporary children were not locked. See
Bug 19627 - temporary merge table locking.
You cannot change the union list of a non-temporary MERGE table
when LOCK TABLES is in effect. The following does *not* work:
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ...;
LOCK TABLES t1 WRITE, t2 WRITE, m1 WRITE;
ALTER TABLE m1 ... UNION=(t1,t2) ...;
However, you can do this with a temporary MERGE table.
You cannot create a MERGE table with CREATE ... SELECT, neither
as a temporary MERGE table, nor as a non-temporary MERGE table.
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ... SELECT ...;
Gives error message: table is not BASE TABLE.
different error code depending on platform.
On Mac OS X, KILL statement issued to kill the current
connection would return a different error code and message than on
other platforms ('MySQL server has gone away' instead of 'Shutdown
in progress').
The reason for this difference was that on Mac OS X we have macro
SIGNAL_WITH_VIO_CLOSE defined. This macro forces KILL
implementation to close the communication socket of the thread
that is being killed. SIGNAL_WITH_VIO_CLOSE macro is defined on
platforms where just sending a signal is not a reliable mechanism
to interrupt the thread from sleeping on a blocking system call.
In a nutshell, closing the socket is a hack to work around an
operating system bug and awake the blocked thread no matter what.
However, if the thread that is being killed is the same
thread that issued KILL statement, closing the socket leads to a
prematurely lost connection. At the same time it is not necessary
to close the socket in this case, since the thread in question
is not inside a blocking system call.
The fix, therefore, is to not close the socket if the thread that
is being killed is the same that issued KILL statement, even with
defined SIGNAL_WITH_VIO_CLOSE.
It's not an InnoDB-specific bug.
The error is in the QUEUE code, in the way we handle queue->max_at_top.
It's either '0' or '-2' and we apply the '^' operation to it to get the
proper direction. But the queue->compare() function can sometimes
return '-2' as a comparison result. In that case we get
queue->compare() ^ queue->max_at_top == 0 (when max_at_top is -2)
and the _downheap() function code goes the wrong way here:
...
if (next_index < elements &&
    (queue->compare(queue->first_cmp_arg,
                    queue->root[next_index]+offset_to_key,
                    queue->root[next_index+1]+offset_to_key) ^
     queue->max_at_top) > 0)
  next_index++;
...
Fixed by changing max_at_top to be either 1 or -1 and doing
'* max_at_top' to get the proper direction.
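A standalone demonstration of why the '^' trick breaks and '*' works:

  #include <stdio.h>

  int main(void)
  {
    int cmp= -2;                       /* a legal compare() result */
    /* old scheme: max_at_top is 0 or -2; -2 ^ -2 == 0, so the test
       (cmp ^ max_at_top) > 0 misses this case */
    printf("%d\n", (cmp ^ -2) > 0);    /* 0: wrong branch taken */
    /* new scheme: max_at_top is 1 or -1; multiplication only flips
       the sign, never cancels the value */
    printf("%d\n", (cmp * -1) > 0);    /* 1: correct */
    return 0;
  }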
storage engine system variables was not validated and
unexpected value was assigned.
The check_func_enum function used subtraction from a uint value, with
a probably negative intended result. That result, of type uint, was
compared with 0 after a cast to the signed long type. On architectures
where the long type is wider than the int type, the result of the
comparison was unexpected.
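A standalone demonstration of the wrap-around (on an LP64 platform,
where long is wider than int):

  #include <stdio.h>

  int main(void)
  {
    unsigned int var= 0;
    /* var - 1 wraps to UINT_MAX; on LP64 the cast to long keeps the
       value positive, so a "< 0" validity check never fires */
    printf("%ld\n", (long) (var - 1));   /* 4294967295, not -1 */
    /* comparing in a same-width signed type behaves as intended */
    printf("%d\n",  (int)  (var - 1));   /* -1 in practice */
    return 0;
  }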
command and reported to a client.
The fact that a timestamp field will be set to NOW on UPDATE wasn't
shown by the SHOW command or reported to clients through connectors.
This led to problems in the ODBC connector and might lead to user
confusion.
A new field flag called ON_UPDATE_NOW_FLAG is added.
Constructors of Field_timestamp set it when a field should be set to
NOW on UPDATE.
The get_schema_column_record function now reports whether a timestamp
field will be set to NOW on UPDATE.
The columns in HAVING can reference the GROUP BY and
SELECT columns. There can be "table" prefixes when
referencing these columns. And these "table" prefixes
in HAVING use the table alias if available.
This means that table aliases are subject to the same
storage rules as table names and are dependent on
lower_case_table_names in the same way as the table
names are.
Fixed by:
1. Treating table aliases as table names and making them lowercase
when printing out the SQL statement for view persistence.
2. Using case-insensitive comparison for table aliases when requested
by lower_case_table_names.
max_length parameter for BLOB-returning functions must be big enough
for any possible content. Otherwise the field created for a table
will be too small.
The partition handler failed to update tables partitioned on a
timestamp field, as it calculated the timestamp field AFTER it
calculated the number of the partition for a record.
Fixed by adding a timestamp_field->set_time() call and disabling
subsequent such calls.
Merge fix
partition_mgm did not require have_symlink.
Moved the test case to partition_symlink, which requires
have_symlink, and should work on both *nix and Windows.