Anti-patch. This patch undoes the previously pushed patch. It is
null-merged into versions 5.1 and above, since the original patch is still
desired there.
When executing a DROP VIEW statement on the master, the statement was
written into the binary log without checking for possible errors, so it was
always binlogged with the error code cleared even if an error occurred, for
example when some of the views being dropped do not exist. This would cause
failure on the slave.
The binary log is now written after the check for errors: if at least one
view has been dropped, the query is binlogged, possibly with an error.
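A minimal sketch of the scenario (table and view names are hypothetical):
  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  -- v1 exists, v_missing does not: the statement drops v1 but fails overall;
  -- it must still reach the binary log (together with its error code) so
  -- that the slave drops v1 as well.
  DROP VIEW v1, v_missing;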
numbers into char fields" and bug #12860 "Difference in zero padding of
exponent between Unix and Windows"
Rewrote the code that determines what 'precision' argument should be
passed to sprintf() to fit the string representation of the input number
into the field.
We get finer control over conversion by pre-calculating the exponent, so
we are able to determine which conversion format, 'e' or 'f', will be
used by sprintf().
We also remove the leading zero from the exponent on Windows to make it
compatible with the sprintf() output on other platforms.
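A hypothetical SQL-level illustration of the intended behaviour (table and
values made up): the string form of the DOUBLE has to fit the CHAR column,
and the exponent must look the same on all platforms.
  CREATE TABLE t1 (c CHAR(20));
  INSERT INTO t1 VALUES (1.0E+100), (-1.0E-300);
  SELECT c FROM t1;   -- expected e.g. '1e+100', not a Windows-style '1e+0100',
                      -- and no abort even when the value barely fits the column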
filesort() uses the file->estimate_rows_upper_bound() call to allocate
internal buffers. If this function returns a value smaller than the
number of rows that will later be returned by find_all_keys(),
the server can crash.
Fixed by implementing ha_federated::estimate_rows_upper_bound() to
return maximum possible number of rows.
The present FEDERATED estimate always returns 0 when the remote object
linked to is a VIEW.
Default values of variables were not subject to upper/lower bounds
and step, while setting variables was. Bounds and step are also
applied to defaults now; defaults are corrected quietly, values
given by the user are corrected, and a correction-warning is thrown
as needed. Lastly, very large values could wrap around, starting
from 0 again. They are bounded at the maximum value for the
respective data-type now if no lower maximum is specified in the
variable's definition.
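A hypothetical illustration (the variable and the exact warning text are
examples only):
  SET @@session.sort_buffer_size = 1;   -- below the minimum: corrected upward
  SHOW WARNINGS;                        -- e.g. 'Truncated incorrect sort_buffer_size value'
  SET @@session.sort_buffer_size = DEFAULT;   -- defaults are corrected quietly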
Denormalized DOUBLEs can't be handled properly by old MIPS processors, so we
need to enable a specific mode for them so that IRIX will use software
rounding to handle such numbers.
The server status wasn't properly sent to the client after an error by the
embedded server. This wasn't noticed before, as one usually stops retrieving
results after getting an error.
Killing a CREATE TABLE ... LIKE source_table statement that is waiting for a
name-lock on the source table causes a bad lock interaction.
mysql_create_like_table() has a bug: if the connection is killed while
waiting for the name-lock on the source table, it jumps to the wrong error
path and tries to unlock the source table and LOCK_open, although neither
was locked.
The solution is to simply return when the name-lock request is killed; this
is safe because no lock was acquired and no cleanup is needed.
The original bug report also describes other problems related to this
scenario, but they are either already fixed in 5.1 or will be addressed
separately (see the bug report for details).
There's currently no way of knowing the determinism of a UDF, and the
optimizer and the sequence() UDFs were making wrong assumptions about what
the is_const member means.
In addition, there was no implementation of update_used_tables(), causing
the optimizer to overwrite the information returned by the <udf>_init
function.
Fixed by reconciling the assumptions about the semantics of is_const and
providing an implementation of update_used_tables().
Added a TODO item for the UDF API change needed to allow a better
implementation.
memory corruptions.
The right pointer field of the SEL_ARG structure was not
initialized in the constructor and sometimes that led to
server crashes.
There is no testcase because the bug occurs only when
uninitialized memory has particular values, which can't be
re-created in the test suite.
Problem: passing a non-constant name to the NAME_CONST function results in a crash.
Fix: check the NAME_CONST name argument; return fake item type if we got
non-constant argument(s).
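For illustration, the difference between a constant and a non-constant name
argument:
  SELECT NAME_CONST('my_col', 42);   -- OK: returns 42 in a column named 'my_col'
  SELECT NAME_CONST(RAND(), 42);     -- non-constant name: now rejected with an
                                     -- error instead of crashing the server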
self-join
When doing DELETE with self-join on a MyISAM or MERGE table, it could
happen that a record being retrieved in join_read_next_same() has
already been deleted by previous iterations. That caused the engine's
index_next_same() method to fail with HA_ERR_RECORD_DELETED error and
the whole DELETE query to be aborted with an error.
Fixed by suppressing the HA_ERR_RECORD_DELETED error in
ha_myisam::index_next_same() and ha_myisammrg::index_next_same(). Since
HA_ERR_RECORD_DELETED can only be returned by MyISAM, there is no point
in filtering this error in the SQL layer.
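A hypothetical repro sketch (table layout made up) of such a multi-table
DELETE with a self-join on MyISAM:
  CREATE TABLE t1 (a INT, b INT, KEY (a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1,1),(1,2),(2,1),(2,2);
  -- Earlier iterations of the same DELETE may remove rows that a later
  -- index_next_same() call still tries to fetch; that must not abort the query.
  DELETE x FROM t1 AS x, t1 AS y WHERE x.a = y.a AND y.b = 2;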
insert ... select.
The 5.0 manual page for mysql_insert_id() does not mention anything
about INSERT ... SELECT, though its current behavior is inconsistent
with what the manual says about plain INSERT.
Fixed by changing the AUTO_INCREMENT and mysql_insert_id() handling
logic in INSERT ... SELECT to be consistent with the INSERT behavior,
the manual, and the changes in 5.1 introduced by WL3146:
- mysql_insert_id() now returns the first automatically generated
AUTO_INCREMENT value that was successfully inserted by INSERT ... SELECT
- if an INSERT ... SELECT statement is executed, and no automatically
generated value is successfully inserted, mysql_insert_id() now returns
the ID of the last inserted row.
Sending several "KILL QUERY" statements to target a connection running
"SELECT SLEEP" could freeze the server.
The locking order in Item_func_sleep was wrong and could lead to a deadlock.
This patch solves the issue by correcting the locking order.
crashes MySQL 5.122
There was a difference in how UNIONs are handled
at the top level and inside a subquery.
Because the rules for sub-queries were syntactically
allowing cases that are not currently supported by
the server we had crashes (this bug) or wrong results
(bug 32051).
Fixed by making the syntax rules for UNIONs match the
ones at top level.
These rules however do not support nesting UNIONs, e.g.
(SELECT a FROM t1 UNION ALL SELECT b FROM t2)
UNION
(SELECT c FROM t3 UNION ALL SELECT d FROM t4)
Support for statements with nested UNIONs will be
added in a future version.
strmake() calls are easy to get wrong. Add checks in extra
debug mode to identify possible exploits.
Remove some dead code.
Remove some off-by-one errors identified with new checks.
FLUSH TABLES WITH READ LOCK fails to properly detect write locked
tables when running under low priority updates.
The problem is that when trying to acquire a global read lock, the
reload_acl_and_cache() function fails to properly check whether the thread
has a low priority write lock, which may later cause a server crash or
deadlock.
The solution is to simply check whether the thread holds any of the possible
types of exclusive write locks.
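A hypothetical two-connection scenario (table name made up) that exercises
this path:
  -- connection 1 (server running with --low-priority-updates):
  LOCK TABLES t1 LOW_PRIORITY WRITE;
  -- connection 2: must correctly see the write lock and wait for it,
  -- rather than crash or deadlock:
  FLUSH TABLES WITH READ LOCK;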
is_last_prefix <= 0, file .\opt_range.cc.
SELECT ... GROUP BY bit field failed with an assertion if the
bit length of that field was not divisible by 8.
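A hypothetical repro sketch (table made up); the BIT length is deliberately
not a multiple of 8:
  CREATE TABLE t1 (b BIT(3), KEY (b));
  INSERT INTO t1 VALUES (1),(2),(3),(3);
  SELECT b+0 FROM t1 GROUP BY b;   -- previously could hit the is_last_prefix assertion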
Problem: setting the Item_func_rollup_const::null_value property from the
argument's null_value before (i.e. without) evaluating the argument may
result in a crash due to a wrong null_value.
Fix: use is_null() to set Item_func_rollup_const::null_value instead, as it
evaluates the argument if necessary and returns a proper value.
Index lookup does not always guarantee that we can
simply remove the relevant conditions from the WHERE
clause. Reasons can be e.g. conversion errors,
partial indexes etc.
The optimizer was removing these parts of the WHERE
condition without any further checking.
This leads to "false positives" when using indexes.
Fixed by checking the index reference conditions
(using WHERE) when using indexes with sub-queries.
Problem: we have CHECK TABLE options allowed (by accident?) for
ANALYZE/OPTIMIZE TABLE.
Fix: disable them.
Note: it might require additional fixes in 5.1/6.0
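For illustration (table name hypothetical): the options stay valid for
CHECK TABLE itself but are now rejected for ANALYZE/OPTIMIZE:
  CHECK TABLE t1 QUICK;        -- still valid
  OPTIMIZE TABLE t1 QUICK;     -- previously accepted by accident, now a syntax error
  ANALYZE TABLE t1 EXTENDED;   -- likewise rejected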
Fixes the following bugs:
- Bug #29560: InnoDB >= 5.0.30 hangs on adaptive hash rw-lock 'waiting for an X-lock'
Fixed a race condition in the rw_lock where an os_event_reset()
can overwrite an earlier os_event_set() triggering an indefinite
wait.
NOTE: This fix for Windows is different from that for other platforms.
NOTE2: This bug was introduced by the scalability fix to sync0arr, which was
applied to 5.0 only. Therefore it need not be applied to the 5.1 tree. If we
decide to port the scalability fix to 5.1, then this fix should be ported as
well.
- Bug #32125: Database crash due to ha_innodb.cc:3896: ulint convert_search_mode_to_innobase
When an unknown find_flag is encountered in convert_search_mode_to_innobase(),
do not call assert(0); instead queue a MySQL error using my_error() and
return the error code PAGE_CUR_UNSUPP. Change the functions that call
convert_search_mode_to_innobase() to handle that error code by "canceling"
execution and returning an appropriate error code further upstream.
only on some occasions
Referencing an element from the SELECT list in a WHERE
clause is not permitted. The namespace of the WHERE
clause is the table columns only. This was not enforced
correctly when resolving outer references in sub-queries.
Fixed by not allowing references to aliases in a
sub-query in WHERE.
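For illustration (table and alias names hypothetical):
  CREATE TABLE t1 (a INT);
  SELECT a + 1 AS b FROM t1 WHERE b > 0;       -- rejected: WHERE sees only table columns
  SELECT a + 1 AS b FROM t1 WHERE a + 1 > 0;   -- correct form
  -- The same rule is now enforced when the alias is referenced as an outer
  -- reference from inside a subquery in the WHERE clause.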
8-bit escape, termination and enclosure characters were silently ignored by
the SELECT ... INTO OUTFILE query, but the LOAD DATA INFILE algorithm is
8-bit clean, so data was corrupted during encoding.
Loose index scan does the grouping so the temp table does
not need to do it, even when sorting.
Fixed by checking whether the grouping is already done before doing sorting
and grouping in a temp table, and doing only the sorting in that case.
led to creating corrupted index.
Corrected fix. A new method called prepare2 is added to the select_create
class. As all preparations are done by the select_create::prepare function,
it doesn't do anything. The algorithm for calling the start_bulk_insert
function is slightly changed: now it's called from the select_insert::prepare2
function when the SQL_BUFFER_RESULT flag is set.
The is_bulk_insert_mode flag is removed as it is not needed anymore.
This bug is actually two bugs. The first one manifests itself on an EXPLAIN
SELECT query with nested subqueries that employs the filesort algorithm.
The whole SELECT under EXPLAIN is marked as UNCACHEABLE_EXPLAIN to preserve
some temporary structures for EXPLAIN. As a side effect, the values of
nested subqueries weren't cached and the subqueries were re-evaluated many
times. Each time a buffer for filesort was allocated but wasn't freed,
because freeing occurs at the end of the topmost SELECT. Thus all available
memory was eaten up step by step and an OOM event occurred.
The second bug manifests itself on SELECT queries with conditions where
a subquery result is compared with a key field and the subquery itself also
has such a condition. When a long chain of such nested subqueries is present,
a stack overrun occurs. This happens because at some point the range optimizer
temporarily puts the PARAM structure on the stack. Its size is about 8K, so
the stack is exhausted very fast.
Now the subselect_single_select_engine::exec function allows subquery result
caching when the UNCACHEABLE_EXPLAIN flag is set.
Now the SQL_SELECT::test_quick_select function calls the check_stack_overrun
function for stack checking purposes, to prevent a server crash.
When the server was out of memory it crashed because of an invalid memory access.
This patch adds detection of failed memory allocations and makes the server
output a proper error message.
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Made convert_constant_item() check if the constant expression is NULL
before attempting to store it in a field. Attempts to store NULL in a
NOT NULL field caused query errors.
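A minimal sketch of the affected comparison (table made up):
  CREATE TABLE t1 (a BIGINT NOT NULL);
  INSERT INTO t1 VALUES (1);
  -- The constant expression evaluates to NULL; this must return an empty
  -- result instead of error 1048 "Column 'a' cannot be null":
  SELECT * FROM t1 WHERE a = 1 + NULL;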
Server failed in assert() when we tried to create a DECIMAL() temp field
with a scale of more than the allowed 30. Now we limit the scale to the
allowed maximum. A truncation warning is thrown as necessary.
checked for each record'
The problem was in incorrectly calculated length of the buffer used to
store a hexadecimal representation of an index map in
select_describe(). This could result in buffer overrun and stack
corruption under some circumstances.
Fixed by correcting the calculation.
The columns in HAVING can reference the GROUP BY and
SELECT columns. There can be "table" prefixes when
referencing these columns. And these "table" prefixes
in HAVING use the table alias if available.
This means that table aliases are subject to the same
storage rules as table names and are dependent on
lower_case_table_names in the same way as the table
names are.
Fixed by:
1. Treating table aliases as table names and making them lowercase when
printing out the SQL statement for view persistence.
2. Using case-insensitive comparison for table aliases when requested by
lower_case_table_names.
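A hypothetical illustration, assuming lower_case_table_names is non-zero
(names made up):
  CREATE VIEW v1 AS
    SELECT MyAlias.a, COUNT(*) FROM t1 AS MyAlias
    GROUP BY MyAlias.a
    HAVING MyAlias.a > 0;
  -- The alias prefix used in HAVING is stored in lowercase in the view
  -- definition and compared case-insensitively, just like a table name.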
max_length parameter for BLOB-returning functions must be big enough
for any possible content. Otherwise the field created for a table
will be too small.
After adding an index the <VARBINARY> IN (SELECT <BINARY> ...)
clause returned a wrong result: the VARBINARY value was illegally padded
with zero bytes to the length of the BINARY column for the index search.
(<VARBINARY>, ...) IN (SELECT <BINARY>, ... ) clauses are affected too.
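A hypothetical sketch of the affected comparison (tables made up):
  CREATE TABLE t1 (a VARBINARY(10));
  CREATE TABLE t2 (b BINARY(10), KEY (b));
  INSERT INTO t1 VALUES ('abc');
  INSERT INTO t2 VALUES ('abc');   -- stored as 'abc' plus zero-byte padding
  -- 'abc' must not be padded to 10 bytes for the index search, so this now
  -- correctly returns an empty result:
  SELECT a FROM t1 WHERE a IN (SELECT b FROM t2);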
UNIQUE (eq-ref) lookups result in the table being considered a "constant" table.
Queries that consist of only constant tables are processed in do_select() in a
special way that doesn't invoke evaluate_join_record(), and therefore doesn't
increase the counters join->examined_rows and join->thd->row_count.
The patch increases these counters in this special case.
NOTICE:
This behavior seems to contradict what the documentation says in Sect. 5.11.4:
"Queries handled by the query cache are not added to the slow query log, nor
are queries that would not benefit from the presence of an index because the
table has zero rows or one row."
No test case in 5.0 as issue shows only in slow query log, and other counters
can give subtly different values (with regard to counting in create_sort_index(),
synthetic rows in ROLLUP, etc.).
In mysql_create_like_table() we 'downcase' the complete path to the
.frm file. This works fine in the standalone server, as there we usually
only have './' as the path to the data home, but it doesn't work in
the embedded server where we add the real path there, so if a
directory has uppercase letters in its name, it won't be found.
Fixed by 'downcasing' only the database/table pair.
BETWEEN was more lenient with regard to what it accepted as a DATE/DATETIME
in comparisons than greater-than and less-than were. ChangeSet makes < >
comparisons similarly robust with regard to trailing garbage (" GMT-1")
and "missing" leading zeros. Now all three comparators behave similarly
in that they throw a warning for "junk" at the end of the data, but then
proceed anyway if possible. Before, < and > fell back on a string (rather
than date) comparison when a warning condition was raised during the
string-to-date conversion. Now the fallback only happens on actual errors,
while warning conditions still result in a warning being delivered to the
client.
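A hypothetical illustration of the now-uniform behaviour (table and dates
made up):
  CREATE TABLE t1 (d DATETIME);
  INSERT INTO t1 VALUES ('2007-10-01 12:00:00');
  -- Trailing junk and missing leading zeros raise a warning, but the
  -- comparison still proceeds as a date comparison for <, > and BETWEEN alike:
  SELECT * FROM t1 WHERE d > '2007-9-30 12:00:00 GMT-1';
  SELECT * FROM t1 WHERE d BETWEEN '2007-9-30' AND '2007-10-2 GMT-1';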
The bug is a regression introduced by the fix for bug30596. The problem
was that in cases when groups in GROUP BY correspond to only one row,
and there is ORDER BY, the GROUP BY was removed and the ORDER BY
rewritten to ORDER BY <group_by_columns> without checking if the
columns in GROUP BY and ORDER BY are compatible. This led to
incorrect ordering of the result set as it was sorted using the
GROUP BY columns. Additionally, the code discarded ASC/DESC modifiers
from ORDER BY even if its columns were compatible with the GROUP BY
ones.
This patch fixes the regression by checking if ORDER BY columns form a
prefix of the GROUP BY ones, and rewriting ORDER BY only in that case,
preserving the ASC/DESC modifiers. That check is sufficient, since the
GROUP BY columns contain a unique index.
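A hypothetical illustration of the prefix check (table made up; the GROUP BY
columns cover the unique index):
  CREATE TABLE t1 (a INT, b INT, c INT, PRIMARY KEY (a, b));
  -- ORDER BY columns form a prefix of the GROUP BY columns:
  -- the rewrite is allowed and DESC is preserved.
  SELECT a, b FROM t1 GROUP BY a, b ORDER BY a DESC;
  -- Not a prefix: the GROUP BY must no longer be thrown away.
  SELECT a, b FROM t1 GROUP BY a, b ORDER BY c;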
When running mysqlbinlog on a 64-bit machine with a corrupt relay log,
mysqlbinlog crashes. In this case the crash is caused because a request for
18446744073709534806U bytes is issued, which apparently can be served on a
64-bit machine (speculatively, I assume), but then the memcpy() issued later
to copy the data segfaults.
The request for that number of bytes comes from a computation of
data_len - server_vars_len where server_vars_len is corrupt in such a sense
that it is > data_len. This causes a wrap-around, producing the value given
above.
This patch adds a check that server_vars_len is not greater than data_len
before the subtraction, and aborts reading the event in that case, marking
the event as invalid. It also adds checks that reading the server variables
does not go outside the bounds of the available space, giving a limited
amount of integrity checking.
causes out of memory errors
The code in mysql_create_function() and mysql_drop_function() assumed
that the only reason for UDFs being uninitialized at that point is an
out-of-memory error during initialization. However, another possible
reason for that is the --skip-grant-tables option in which case UDF
initialization is skipped and UDFs are unavailable.
The solution is to check whether mysqld is running with
--skip-grant-tables and issue a proper error in such a case.
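For illustration, using the stock example UDF shipped with the server (the
shared object name may differ by platform); with the server started with
--skip-grant-tables these now fail with a proper error instead of a
misleading out-of-memory one:
  CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';
  DROP FUNCTION metaphon;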
HOUR(), MINUTE(), ... returned spurious results when used on a DATE-cast.
This happened because the DATE-cast object did not overload the get_time()
method of its superclass Item. The default method was inappropriate here and
misinterpreted the data.
The patch adds the missing method; get_time() on DATE-casts now returns SQL
NULL on NULL input, and 0 otherwise. This coincides with the way DATE columns
behave.
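For illustration:
  SELECT HOUR(CAST('2007-10-01 18:20:55' AS DATE));   -- 0: a DATE has no time part
  SELECT MINUTE(CAST(NULL AS DATE));                   -- NULL on NULL input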
When constructing a key image stricter date checking (from sql_mode)
should not be enabled, because it will reject invalid dates that the
server would otherwise accept for searching when there's no index.
Fixed by disabling strict date checking when constructing a key image.
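A hypothetical sketch (sql_mode and values made up) of the kind of lookup
involved:
  SET sql_mode = 'ALLOW_INVALID_DATES';
  CREATE TABLE t1 (d DATE, KEY (d));
  INSERT INTO t1 VALUES ('2007-02-30');
  -- The key image for the lookup is now built without stricter date checking,
  -- so the indexed search finds the row just like a full scan would:
  SELECT * FROM t1 WHERE d = '2007-02-30';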
variable in where clause.
Problem: the new_item() method of Item_uint used an incorrect
constructor. "new Item_uint(name, max_length)" calls
Item_uint::Item_uint(const char *str_arg, uint length) which assumes the
first argument to be the string representation of the value, not the
item's name. This could result in either a server crash or incorrect
results depending on usage scenarios.
Fixed by using the correct constructor in new_item():
Item_uint::Item_uint(const char *str_arg, longlong i, uint length).
tables or more
The problem was that the optimizer used the join buffer in cases when
the result set is ordered by filesort. This resulted in the ORDER BY
clause being ignored, and the records being returned in the order
determined by the order of matching records in the last table in join.
Fixed by relaxing the condition in make_join_readinfo() to take
filesort-ordered result sets into account, not only index-ordered ones.
Bug #31956 auto increment bugs in MySQL Cluster: Added utility method and constant for internal prefetch default
ndb_auto_increment.result:
BitKeeper file /home/marty/MySQL/mysql-5.0-ndb/mysql-test/r/ndb_auto_increment.result
mysqld.cc:
Bug #25176 Trying to set ndb_autoincrement_prefetch_sz always fails: Changed pointer to max value
Bug #31956 auto increment bugs in MySQL Cluster: Changed meaning of ndb_autoincrement_prefetch_sz to specify prefetch between statements, changed default to 1 (with internal prefetch to at least 32 inside a statement)
ndb_insert.test, ndb_insert.result:
Moved auto_increment tests to ndb_auto_increment.test
ndb_auto_increment.test:
BitKeeper file /home/marty/MySQL/mysql-5.0-ndb/mysql-test/t/ndb_auto_increment.test
ha_ndbcluster.cc:
Bug #31956 auto increment bugs in MySQL Cluster: Changed meaning of ndb_autoincrement_prefetch_sz to specify prefetch between statements, changed default to 1 (with internal prefetch to at least 32 inside a statement), added handling of updates of pk/unique key with auto_increment
Bug #32055 Cluster does not handle auto inc correctly with insert ignore statement
Since bug@20166, which changed the binlog file name generation to be based
on pidfile_name instead of the previous glob_hostname, the binlog file name
suddenly started to be stored solely in absolute path format, including the
case when the --log-bin option specified a relative path.
More seriously, the path for the binlog file can unexpectedly lead into the
pid-file directory, so that after any proper fix for this bug there might be
consequences similar to those in the bug report for anyone who upgrades from
post-fix-bug@20166-pre-fix-bug@28597 to post-fix-bug@28597.
Fixed by preserving `pidfile_name' (introduced by bug@20166) but stripping
off its directory part. This restores the original logic of storing the
names in a format compatible with the --log-bin option and with the
requirement that a relative --log-bin path correspond to the data directory.
Side effects of this fix:
- effectively fixes bug@27070, refining its test;
- ensures no overrun of buff can happen anymore (Bug#31836 insufficient
  space reserved for the suffix of relay log file name);
- bug#31837: --remove_file $MYSQLTEST_VARDIR/tmp/bug14157.sql missed
  in rpl_temporary.test;
- fixes Bug@28603 Invalid log-bin default location.