- Improve mysql_upgrade and add comments describing its logic
- Don't look for mysql and mysqlcheck randomly; use the directory where mysql_upgrade
was started from
- Don't look for mysql_fix_privilege_tables.sql randomly; compile
the mysql_fix_privilege_tables.sql file into the binary and use that to upgrade
the system tables of MySQL
- Check for any unexpected error returned from running the mysql_fix_privilege_tables SQL
- Fix bug#26639, bug#24248 and bug#25405
conditions when executing an equijoin query with a WHERE condition
containing a subquery predicate of the form join_attr NOT IN (SELECT ...).
To resolve a problem of the correct evaluation of the expression
attr NOT IN (SELECT ...)
an array of guards is created to make it possible to filter out some
predicates of the EXISTS subquery into which the original subquery
predicate is transformed, in the cases when attr takes the NULL value.
If attr is defined as a field that cannot be NULL, then such an array
is not needed and is not created.
However, if the field also occurs in an equijoin predicate t2.a=t1.b
and table t1 is accessed before table t2, then it may happen that
the EXISTS subquery is pushed down to the condition evaluated just after
table t1 has been accessed. In this case any occurrence of t2.a is
replaced by t1.b. When t1.b takes the NULL value an attempt is
made to turn on the corresponding guard. This action caused a crash as
no guard array had been created.
Now the code of Item_in_subselect::set_cond_guard_var checks that the guard
array has been created before setting a guard variable on. Otherwise the
method does nothing. This cannot result in returning a row that could be
rejected, as the condition t2.a=t1.b will be checked later anyway.
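A hypothetical query of the affected shape (table and column names are made up; t2.a is declared NOT NULL while t1.b is nullable):

  SELECT * FROM t1, t2
  WHERE t2.a = t1.b
    AND t2.a NOT IN (SELECT c FROM t3);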
bug #27715: mysqld --character-sets-dir buffer overflow
bug #26851: Mysql Client --pager Buffer Overflow
Using strmov() to copy an argument may cause overflow
if the argument's length is bigger than the buffer:
use strmake() instead.
Also, we have to increase the error message buffer size to fit
the longest message.
The Item_outer_ref class, based on the Item_direct_ref class, was always used
to represent an outer field. But if the outer select is a grouping one and the
outer field isn't under an aggregate function that is aggregated in that
outer select, an Item_ref object should be used to represent such a field.
If the outer select in which the outer field is resolved isn't grouping then
the Item_field class should be used to represent such a field.
This logic also should be used for an outer field resolved through its alias
name.
Now Item_field::fix_outer_field() uses Item_outer_ref objects to
represent aliased and non-aliased outer fields for grouping outer selects
only.
Now the fix_inner_refs() function chooses which class to use to access the outer
field - Item_ref or Item_direct_ref. An object of the chosen class
substitutes for the original field in the Item_outer_ref object.
The direct_ref and the found_in_select_list fields were added to the
Item_outer_ref class.
When a table status is requested by a statement like SHOW TABLE
STATUS and another statement (e.g. DELETE) concurrently sets the
number of records to 0, we may get a division-by-zero error,
which crashes the server.
This is fixed by using the thread-local variable x->records instead
of the shared info->state->records when we check whether it is zero and
divide by it.
Problem: the single-byte do_varstring1() function was called, which didn't
check the limit on the "number of characters" and checked only the "number of bytes".
Fix: add a multi-byte aware function, do_varstring1_mb(),
that limits on the "number of characters".
IGNORE/USE/FORCE INDEX hints were honored when choosing a FULLTEXT
index.
With this fix these hints are ignored. For regular indexes we may
perform table scan instead of index lookup when IGNORE INDEX was
specified. We cannot do this for FULLTEXT in NLQ mode.
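For illustration, a hedged sketch of an affected query (names are hypothetical):

  SELECT * FROM articles IGNORE INDEX (ft_idx)
  WHERE MATCH (body) AGAINST ('database');
  -- the IGNORE INDEX hint is now ignored when choosing the FULLTEXT index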
Support of views wasn't implemented for the TRUNCATE statement.
Now TRUNCATE on views has the same semantics as DELETE FROM view:
mysql_truncate() checks whether the table is a view and falls back
to delete if so.
In order to properly initialize LEX::updatable for a view,
st_lex::can_use_merged() now allows usage of merged views for the
TRUNCATE statement.
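As an illustration (names are hypothetical):

  CREATE VIEW v1 AS SELECT * FROM t1;
  TRUNCATE TABLE v1;   -- now handled by mysql_truncate() falling back to DELETE FROM v1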
are used as arguments of the IN predicate.
Added a function to check the compatibility of row expressions. Made sure that this
function is called for Item_func_in objects by fix_length_and_dec().
The function CRC32() returns unsigned integer.
But the metadata (the unsigned flag) for the
function was set incorrectly.
As a result, type arithmetic based on the
function's metadata (like finding the concise
type of a temporary table column to hold the result)
returned incorrect results.
Fixed by returning correct type information.
This fix is based on code contributed by Martin Friebe
(martin@hybyte.com) on 2007-03-30.
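A sketch of the effect (table name hypothetical):

  CREATE TABLE t1 SELECT CRC32('MySQL') AS c;
  -- with correct metadata, column c is created with an UNSIGNED integer type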
MERGE engine may return incorrect values when several representations
of equal keys are present in the index. For example "groß" and "gross"
or "gross" and "gross " (trailing space), which are considered equal,
but have different lengths.
The problem was that key length was not recalculated after key lookup.
Only MERGE engine is affected.
Added a test case.
The problem was fixed by the fix for bug #17379.
The problem was that because of some conditions
the optimizer always preferred range or full index
scan access methods to lookup access methods even
when the latter were much cheaper.
The optimizer transforms DISTINCT into a GROUP BY
when possible.
It does that by constructing the same structure
(a list of ORDER instances) the parser makes when
parsing GROUP BY.
While doing that it also eliminates duplicates.
But if a duplicate is found it doesn't advance the
pointer into ref_pointer_array, so the next
(and subsequent) ORDER structures point to the wrong
element in the SELECT list.
Fixed by advancing the pointer in ref_pointer_array
even in the case of a duplicate.
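A hypothetical query shape that exercises the duplicate-elimination path (the select list contains the same column twice):

  SELECT DISTINCT a, b, a FROM t1;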
Problem: setting/displaying @@LC_TIME_NAMES didn't distinguish between
the GLOBAL and SESSION variable types - the SESSION variable
was always set/shown.
Fix: set either the global or the session value.
Also, "mysqld --lc-time-names" was added to set the "global default" value.
NO_AUTO_VALUE_ON_ZERO mode.
The table->auto_increment_field_not_null variable wasn't reset after
reading a row, which could lead to inserting a wrong value into the auto-increment
field of the following row.
The table->auto_increment_field_not_null variable is now reset right after a
row is written in the read_fixed_length() and read_sep_field()
functions.
Removed wrong setting of the table->auto_increment_field_not_null variable in
the read_sep_field() function.
In certain cases AFTER UPDATE/DELETE triggers on NDB tables that referenced
the subject table didn't see the results of the operation which caused the invocation
of those triggers. In other words, an AFTER trigger invoked as a result of an update
(or deletion) of a particular row saw the version of this row before the update (or
deletion).
The problem occurred because the NDB handler in those cases postponed the actual
update/delete operations to be able to perform them later as one batch.
This fix solves the problem by disabling this optimization for a particular
operation if the subject table has an AFTER trigger defined for this operation.
To achieve this we introduce two new flags for the handler::extra() method:
HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH.
These are called if there exist AFTER DELETE/UPDATE triggers during a
statement that can potentially generate calls to delete_row()/update_row().
This includes multi_delete/multi_update statements as well as insert statements
that do a delete/update as part of an ON DUPLICATE KEY UPDATE clause.
IN/BETWEEN predicates in sorting expressions.
Wrong results could occur when the select list contains an expression
with an IN/BETWEEN predicate that differs from a sorting expression by
an additional NOT only.
Added the method Item_func_opt_neg::eq to compare correctly expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
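A hypothetical query of the affected shape:

  SELECT a NOT IN (1,2) FROM t1 ORDER BY a IN (1,2);
  -- the two expressions differ by NOT only and are no longer treated as equal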
The problem was that THD::db_access variable was not restored after
database switch in stored-routine-execution code.
The fix is to restore THD::db_access in this case.
Unfortunately, this fix requires additional changes,
because in prepare_schema_table(), called at the parsing stage, we checked
privileges. That was wrong according to our design, but this flaw hadn't
struck so far, because it was masked. All privilege checks must be
done at the execution stage in order to be compatible with prepared statements
and stored routines. So, this patch also contains a patch for
prepare_schema_table(), which moves the checks to the execution phase.
LEFT JOIN
Fixed that in certain situations MATCH ... AGAINST returns false hits
for NULLs produced by LEFT JOIN when there is no fulltext index
available.
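A sketch of the affected pattern (names hypothetical):

  SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id
  WHERE MATCH (t2.comment) AGAINST ('word' IN BOOLEAN MODE);
  -- NULL rows produced by the LEFT JOIN no longer yield false hits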
- Change the check for the return value of 'SSL_CTX_set_cipher_list'
in order to handle 0 as an error when setting the cipher.
- Thanks to Dan Lukes for finding the problem!
conditions.
When allocating memory for KEY_FIELD/SARGABLE_PARAM structures the
function update_ref_and_keys did not take into account the fact that
a single row equality could be replaced by several simple equalities.
Fixed by adjusting the counter cond_count accordingly for each subquery
when replacing a row equality with several simple equalities.
Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls are
more than it makes sense to do in typical cases)
- Don't call sel_arg->test_use_count() if we've already allocated
more than MAX_SEL_ARGS elements. The test will succeed but will take
too much time for the test suite (and not provide much value).
NO_AUTO_VALUE_ON_ZERO mode.
In the NO_AUTO_VALUE_ON_ZERO mode the table->auto_increment_field_not_null
variable is used to indicate that a non-NULL value was specified by the user
for an auto_increment column. When an INSERT .. ON DUPLICATE updates the
auto_increment field this variable is set to true and stays unchanged for the
next insert operation. This makes the next inserted row sometimes wrongly have
0 as the value of the auto_increment field.
Now the fill_record() function resets the table->auto_increment_field_not_null
variable before filling the record.
The table->auto_increment_field_not_null variable is also reset by the
open_table() function, in case we missed some auto_increment_field_not_null
handling bug.
Now the table->auto_increment_field_not_null is reset at the end of the
mysql_load() function.
Reset the table->auto_increment_field_not_null variable after each
write_row() call in the copy_data_between_tables() function.
- Read the pid from the pidfile in order to be able to kill the real process
instead of the pseudo process. Most platforms will have the same real_pid
as pid.
- Kill using the real pid
ARCHIVE table
The ARCHIVE table was truncated by the REPAIR TABLE ... USE_FRM statement.
The table handler returned its file name extensions in a wrong order.
REPAIR TABLE believed it had to use the meta file to create a new table
from it.
With the fixed order, REPAIR TABLE now uses the data file to create
a new table. So REPAIR TABLE ... USE_FRM now works well with the ARCHIVE
engine.
This issue affects 5.0 only, since in 5.1 ARCHIVE engine stores meta
information and data in the same file.
- GRANT and REVOKE statements didn't have the "updating" flag set and
thus statements with a table specified would not replicate if
slave filtering rules were turned on.
For example "GRANT ... ON test.t1 TO ..." would not replicate.
mark the test as requiring that storage engine (if we need to do that)
Make --ndb and --with-ndbcluster an alias for
--mysqld=--default-storage-engine=ndbcluster
#27176: Assigning a string to an year column has unexpected results
#26359: Strings becoming truncated and converted to numbers under STRICT mode
Problems:
1. When storing a string into an integer field we didn't check
whether strntoull10rnd() returns the MY_ERRNO_EDOM error.
Fix: check for MY_ERRNO_EDOM.
2. When storing a string into a YEAR field we used the my_strntol() function.
Fix: use strntoull10rnd() instead.
Thanks to Martin Friebe for finding and submitting a fix for this bug!
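A hedged illustration of the two cases (table name hypothetical):

  CREATE TABLE t1 (y YEAR, i INT);
  INSERT INTO t1 VALUES ('2005', 'foo');
  -- the YEAR string is now converted with strntoull10rnd(), and the
  -- non-numeric string for the INT column now hits the MY_ERRNO_EDOM check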
A table with the maximum number of key segments and the maximum length key name
would have a corrupted .frm file, due to an incorrect calculation of the
complete key length. Now the key length is computed correctly (I hope) :-)
MyISAM would reject a table with the maximum number of keys and the maximum
number of key segments in all keys. It would allow one less than this total
maximum. Now MyISAM accepts a table defined with the maximum. (This is a
very minor issue.)
- Added PARAM::alloced_sel_args where we count the # of SEL_ARGs
created by SEL_ARG tree cloning operations.
- Made the range analyzer shortcut and not do any more cloning
if we've already created MAX_SEL_ARGS SEL_ARG objects in cloning.
- Added comments about space complexity of SEL_ARG-graph
representation.
Problem: SOUNDEX returned an invalid string for international
characters in multi-byte character sets.
For example: for a Chinese/Japanese 3-byte long character
_utf8 0xE99885 it took only the very first byte 0xE9,
put it into the output string and then appended three
DIGIT ZERO characters, so the result was 0xE9303030 - which
is an invalid utf8 string.
Fix: make SOUNDEX() multi-byte aware and put only complete
characters into the result, thus returning only valid strings.
This patch also makes SOUNDEX() compatible with UCS2.
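The example from the description, as a query:

  SELECT HEX(SOUNDEX(_utf8 0xE99885));
  -- previously the result was the invalid utf8 sequence 0xE9303030;
  -- now only complete characters are put into the result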
Geometry fields have a result type string and a
special subclass to cater for the differences
between them and the base class (just like
DATE/TIME).
When creating temporary tables for results of
functions that return results of type GEOMETRY
we must construct fields of the derived class
instead of the base class.
Fixed by creating a GEOMETRY field (Field_geom)
instead of a generic BLOB (Field_blob) in temp
tables for the results of GIS functions that
have GEOMETRY return type (Item_geometry_func).
- Turn off verification of the peer if both ca_path and ca_file are null,
i.e. when only passing --ssl-key=<client_key> and --ssl-cert=<client_cert>
to the mysql utility programs.
The server will authenticate the client according to the GRANT tables
but the client won't authenticate the server.
execution breaks replication.
When a stored routine is executed, we switch the current
database to the database in which the routine
was created. When the stored routine finishes,
we switch back to the original database.
The problem was that if the original database does not
exist (anymore) after routine execution, we raised an error.
The fix is to report a warning, and switch to the NULL database.
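A sketch of the scenario (names hypothetical):

  CREATE DATABASE db1;
  CREATE DATABASE db2;
  CREATE PROCEDURE db2.p1() DROP DATABASE db1;
  USE db1;
  CALL db2.p1();
  -- previously this raised an error on return to the dropped db1; now a
  -- warning is issued and the session is left with no current database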
If a set function with an outer reference s(outer_ref) cannot be aggregated
in the outer query against which the reference has been resolved, then MySQL
interprets s(outer_ref) in the same way as it would interpret s(const).
However, the standard requires throwing an error in this situation.
Added some code to support this requirement in ANSI mode.
Corrected another minor bug in Item_sum::check_sum_func.
- mysqldump executes a SHOW CREATE VIEW statement to generate the text
that it outputs. When the function name is retrieved, its database
name is unconditionally prepended. This change causes the function's
database name to be prepended only when it was used to define the
function.
When creating a temporary table the concise column type
of a string expression is decided based on its length:
- if its length is under 512 it is stored as either
varchar or char.
- otherwise it is stored as a BLOB.
There is a flag (convert_blob_length) to create_tmp_field
that, when > 0, forces creation of a varchar if the
max blob length is under convert_blob_length.
However, it must be verified that convert_blob_length
(settable through a SQL option in some cases) is
under the maximum that can be stored in a varchar column.
While performing that check for expressions in
create_tmp_field_from_item, the max length of the blob was
used instead. This causes blob columns to be created in the
heap temp table used by GROUP_CONCAT (where blobs must not
be created in the temp table because of the constant
convert_blob_length that is passed to create_tmp_field()).
And since these blob columns are not expected in that place
we get wrong results.
Fixed by checking that the value of the flag variable is
within the limits that fit into VARCHAR, instead of the max length
of the blob column.
- 1.84e+15 converted to unsigned bigint should be
18400000000000000000 < 18446744073709551615.
- The test will still fail on Windows, and is extracted
into a new bug report.
causes incorrect duplicate entries
Keys for BTREE indexes on ENUM and SET columns of MEMORY tables
with character set UTF8 were computed incorrectly. Many
different column values got the same key value.
Apart from possible performance problems, it made unique indexes
of this type unusable because it rejected many different
values as duplicates.
The problem was that multibyte character detection was tried
on the internal numeric column value. Many values were not
identified as characters. Their key value became blank-filled.
Thanks to Alexander Barkov and Ramil Kalimullin for the patch,
which sets the character set of ENUM and SET key segments to
the pseudo binary character set.
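A hypothetical table definition of the affected kind:

  CREATE TABLE t1 (
    e ENUM('a','b','c') CHARACTER SET utf8,
    UNIQUE KEY USING BTREE (e)
  ) ENGINE=MEMORY;
  -- different enum values are no longer rejected as duplicate key values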
Problem: GROUP BY on empty ucs2 strings crashed server.
Reason: sometimes mi_unique_hash() is executed with
ptr=null and length=0, which means "empty string".
The branch of code handling the UCS2 character set
was not safe against ptr=null and fell into an
endless loop even if length=0, because of pointer
arithmetic overflow.
Fix: add a special check for length=0 to avoid pointer arithmetic
overflow.
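A minimal sketch mirroring the description (the exact setup needed to reach mi_unique_hash() may differ):

  CREATE TABLE t1 (s VARCHAR(10) CHARACTER SET ucs2);
  INSERT INTO t1 VALUES ('');
  SELECT s, COUNT(*) FROM t1 GROUP BY s;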
to 0 causes wrong (large) length to be read
from the row in _mi_calc_blob_length() when
storing NULL values in (e.g.) POINT columns.
This large length is then used to allocate
a block of memory that (on some OSes) causes
trouble.
Fixed by calling the base class's
Field_blob::reset() from Field_geom::reset()
that is called when storing a NULL value into
the column.
The fix is to rewrite the MBR::overlaps() function to compute the dimension of both
arguments and the dimension of the intersection, and test that all three dimensions are the
same (e.g., all are Polygons).
Add tests for all MBR* functions for various combinations of shapes, lines and points.
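For instance, a case the dimension test now rejects (geometries are illustrative):

  SELECT MBROverlaps(
    GeomFromText('POLYGON((0 0, 3 0, 3 3, 0 3, 0 0))'),
    GeomFromText('POINT(1 1)')) AS do_overlap;
  -- returns 0: the intersection (a point) does not have the same dimension
  -- as both arguments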
thd->options' OPTION_STATUS_NO_TRANS_UPDATE bit was not restored at the end of SF() invocation, where
SF() modified a non-transactional table.
As a result of this artifact it was not possible to detect whether there were any side effects when the
top-level query ends.
If the top-level query's table was not modified and the bit is lost, there would be no binlogging.
Fixed by preserving the bit inside the thd->no_trans_update struct. The struct aggregates two bool flags
telling whether the current query and the current transaction modified any non-transactional table.
The flags, stmt and all, are cleared at the end of the query and the transaction.
context was used as an argument of GROUP_CONCAT.
Ensured correct setting of the depended_from field in references
generated for set functions aggregated in outer selects.
A wrong value of this field resulted in wrong maps returned by
used_tables() for these references.
Made sure that a temporary table field is added for any set function
aggregated in outer context when creation of a temporary table is
needed to execute the inner subquery.
Apply the following InnoDB snapshots:
innodb-5.0-ss1319
innodb-5.0-ss1331
innodb-5.0-ss1333
innodb-5.0-ss1341
Fixes:
- Bug #21409: Incorrect result returned when in READ-COMMITTED with query_cache ON
At low transaction isolation levels we let each consistent read set
its own snapshot.
- Bug #23666: strange Innodb_row_lock_time_% values in show status; also millisecs wrong
On Windows ut_usectime returns secs and usecs relative to the UNIX
epoch (which is Jan, 1 1970).
- Bug #25494: LATEST DEADLOCK INFORMATION is not always cleared
lock_deadlock_recursive(): When the search depth or length is exceeded,
rewind lock_latest_err_file and display the two transactions at the
point of aborting the search.
- Bug #25927: Foreign key with ON DELETE SET NULL on NOT NULL can crash server
Prevent ALTER TABLE ... MODIFY ... NOT NULL on columns for which
there is a foreign key constraint ON ... SET NULL.
- Bug #26835: Repeatable corruption of utf8-enabled tables inside InnoDB
The bug could be reproduced as follows:
Define a table so that the first column of the clustered index is
a VARCHAR or a UTF-8 CHAR in a collation where sequences of bytes
of differing length are considered equivalent.
Insert and delete a record. Before the delete-marked record is
purged, insert another record whose first column is of different
length but equivalent to the first record. Under certain conditions,
the insertion can be incorrectly performed as update-in-place.
Likewise, an operation that could be done as update-in-place can
unnecessarily be performed as delete and insert, but that would not
cause corruption but merely degraded performance.
another user.
When the DEFINER clause isn't specified in the ALTER statement, it is loaded
from the view definition. If the definer differs from the current user, an
error is thrown, because only a super-user can set other users as definers.
Now, if the DEFINER clause is omitted in the ALTER VIEW statement, the
definer from the original view is used without a check.
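A hedged sketch of the changed behaviour (user and view names hypothetical):

  -- connected as a user without the SUPER privilege, altering a view that was
  -- originally created with DEFINER = 'other_user'@'%':
  ALTER VIEW v1 AS SELECT a, b FROM t1;
  -- the original definer is kept and no SUPER-privilege error is raised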
The server starts any binlog dump from a Format_description_log_event;
this shifted all offset calculations in mysqlbinlog and made it
stop the dump earlier than --stop-position. Now mysqlbinlog
takes the Format_description_log_event into account.
Possible problems: the function call could be eliminated from the WHERE clause and only
be evaluated once; the function can be evaluated during the table and item setup phase, which could
cause side effects not to be registered in the binlog.
Fixed by introducing Item_func_sp::used_tables(), returning the correct table_map constant.
in index search MySQL was not explicitly
suppressing warnings. And if the context
happens to enable warnings (e.g. INSERT ..
SELECT), the warnings resulting from converting
the data that the key is compared to are
reported to the client.
Fixed by suppressing warnings when converting
the data to the same type as the key parts.
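A hypothetical statement of the kind that used to leak such warnings:

  CREATE TABLE t1 (d DATETIME, KEY (d));
  CREATE TABLE t2 (d DATETIME);
  INSERT INTO t2 SELECT d FROM t1 WHERE d = '2007-01-01 bad';
  -- the warning from converting the comparison constant to the key's type
  -- is no longer reported to the client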
what it actually means (Monty approved the renaming)
- correcting the description of the transaction_alloc command-line options
(our manual is correct)
- fix for a failure of rpl_trigger.