The '@' symbol cannot be used in the host name according to RFC 952.
The fix:
added the function check_host_name(LEX_STRING *str),
which checks that all characters in the host name string are valid and
that the host name is not longer than the maximum host name length
(the check_string_length() function was simply moved from the parser into check_host_name()).
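An illustrative statement of the kind the new check rejects (the account and host names are made up):

  GRANT USAGE ON *.* TO 'someuser'@'some@host';   -- '@' inside the host part is refused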
The problem:
The I_S.VIEWS table does not check for the SHOW_VIEW_ACL|SELECT_ACL
privileges on a view. This leads to a discrepancy between SHOW CREATE VIEW
and I_S.VIEWS.
The fix:
added the appropriate privilege check.
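An illustrative comparison (database, view, and account names are made up); for an account that has neither SELECT nor SHOW VIEW on the view, both statements should now behave consistently:

  SHOW CREATE VIEW db1.v1;
  SELECT TABLE_NAME, VIEW_DEFINITION
    FROM INFORMATION_SCHEMA.VIEWS
   WHERE TABLE_SCHEMA = 'db1' AND TABLE_NAME = 'v1';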
When analyzing the possible index use cases the server was re-using an internal structure.
This is wrong, as this internal structure gets updated during the analysis.
Fixed by making a copy of the internal structure for every place it needs to be used.
Also stopped the generation of empty SEL_TREE structures that unnecessarily
complicate the analysis.
from stored procedure.
Problem: we replace all references to local variables in stored procedures
with NAME_CONST(name, value) when writing to the binary log. However, if the
value's collation differs, we might get an 'illegal mix of collations'
error because we don't pass the collation to the function.
Fix: pass the value's collation to NAME_CONST().
Note: we should actually pass the value's derivation to NAME_CONST() as well.
That is impossible without modifying the parser. For now we always set the
derivation to DERIVATION_IMPLICIT, the same as local variables have.
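A rough sketch of the affected pattern (all table, procedure, and character set names are made up):

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET koi8r);
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v VARCHAR(10) CHARACTER SET koi8r DEFAULT 'text';
    INSERT INTO t1 VALUES (v);
  END//
  DELIMITER ;
  CALL p1();
  -- Before the fix the INSERT was written to the binary log roughly as
  --   INSERT INTO t1 VALUES (NAME_CONST('v', 'text'))
  -- losing the value's collation; after the fix the collation is kept,
  -- roughly NAME_CONST('v', _koi8r'text' COLLATE koi8r_general_ci).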
The JOIN for the subselect wasn't cleaned up if an error occurred
during sub_select() execution. That led to an assertion failure
in close_thread_tables().
part of the 6.0 code backported
per-file comments:
mysql-test/r/sp-error.result
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test result
mysql-test/t/sp-error.test
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test case
sql/sp_head.cc
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
lex->unit.cleanup() call added if not substatement
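An illustrative call of the shape the test exercises (table and procedure names are made up); the subquery returns more than one row, so the CALL must now fail cleanly instead of hitting the assertion:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  CREATE PROCEDURE p1(IN x INT) SET @unused = x;
  CALL p1((SELECT a FROM t1));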
Machines with hostname set to "localhost" cause uniqueness errors in
the SQL bootstrap data.
Now, insert no rows for the case where the (lowercased) hostname is
the same as the already-inserted 'localhost' name. Also, fix a few tests
that expect certain local accounts to have a particular host name.
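A minimal sketch of the idea only (the real bootstrap script is more involved): a row for the machine's host name is produced only when that name does not lower-case to 'localhost', so it can never collide with the 'localhost' row that is always inserted.

  -- assume @current_hostname holds the machine's host name
  SET @current_hostname = 'LOCALHOST';                 -- example value
  SELECT @current_hostname AS Host, 'root' AS User
    FROM DUAL
   WHERE LOWER(@current_hostname) <> 'localhost';      -- yields no row here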
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address range
overlapped that of the new field value. This caused problems when the
cached value's storage was reallocated to provide access to a new
character set representation. The patch checks the address ranges, and if
they overlap, the new field value is copied to new storage before it is
converted to the new character set.
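An illustrative statement of the general shape involved (table, column, and procedure names are made up): the source and the target of the assignment are the same blob value, so the old and the new representations can overlap when a character set conversion forces a reallocation.

  CREATE TABLE t1 (t TEXT CHARACTER SET utf8);
  INSERT INTO t1 VALUES (REPEAT('x', 1024));
  CREATE PROCEDURE p1()
    UPDATE t1 SET t = SUBSTRING(t, 1, 512);
  CALL p1();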
The fix for bug 31887 was incomplete: it assumes that all the
field types accepted by the IS_NUM macro are descendants of
Field_num and tries to zero-fill the values before doing constant
substitution with such fields when they are compared to constant string
values.
The only exception to this is Field_timestamp: it is accepted by the IS_NUM
macro, but is not a descendant of Field_num.
Fixed by excluding timestamp fields (Field_timestamp) from the zero-filling
done when the constant they are compared with is converted to a string.
Note that this will not exclude the timestamp columns from const
propagation.
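An illustrative query of the affected shape (names are made up): a TIMESTAMP column is compared both with another column and with a string constant, and the constant must not be zero-filled the way constants compared with true numeric fields are.

  CREATE TABLE t1 (ts TIMESTAMP, dt DATETIME);
  SELECT * FROM t1 WHERE ts = dt AND ts = '2008-01-23 12:34:56';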
Details:
- backport from 5.1 to 5.0 of some improvements which prevent
  sporadic failures
- @@GLOBAL.CONCURRENT_INSERT= 0 also for the slave server
- --sorted_result before all SELECTs which have result
  sets with more than one row (see the fragment below)
- Replace error numbers with error names
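An illustrative mysqltest fragment for the --sorted_result change (the statement itself is made up); the command sorts the output of the following statement only, so the recorded result no longer depends on row order:

  --sorted_result
  SELECT Host, User FROM mysql.user;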
Moved the fix for this bug to 5.0, as other mysqldump bugs seem tied to concurrent_insert being on.
Setting concurrent_insert off during this test, as INSERTs weren't being
completely processed before the calls to mysqldump, resulting in failing tests.
Altered the .test file to turn concurrent_insert off during the test and to restore it
to whatever value it had at the start of the test when complete.
Re-recorded the .result file to account for changes to variables in the test.
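An illustrative mysqltest pattern for this kind of change (the actual test may differ in detail):

  SET @old_concurrent_insert= @@global.concurrent_insert;
  SET @@global.concurrent_insert= 0;
  # ... INSERTs and the mysqldump invocations ...
  SET @@global.concurrent_insert= @old_concurrent_insert;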
in open_table()
Problem: repeating "CREATE ... (AUTO_INCREMENT) ... SELECT" may lead to
an assertion failure.
Fix: reset table->auto_increment_field_not_null after writing each record.
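An illustrative repetition of the pattern (names are made up; the original failing case may differ in detail):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) SELECT 1 AS b;
  DROP TABLE t1;
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) SELECT 1 AS b;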
INSERT .. SELECT .. ON DUPLICATE KEY UPDATE col=DEFAULT
In order to get correct values for update fields that
belong to the SELECT part of an INSERT .. SELECT .. ON
DUPLICATE KEY UPDATE statement, the server adds the referenced
fields to the select list. Part of the code that does this
transformation is shared between the implementations of
the DEFAULT(col) function and the DEFAULT keyword (in
the col=DEFAULT expression), and the implementation of
the DEFAULT keyword was incomplete.
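An illustrative statement of the affected shape (names are made up):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT DEFAULT 10);
  CREATE TABLE t2 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1);
  INSERT INTO t2 VALUES (1, 2);
  INSERT INTO t1 (a, b)
    SELECT a, b FROM t2
    ON DUPLICATE KEY UPDATE b = DEFAULT;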
returns unexpected result
If:
1. a table has a non-nullable BIT column c1 with a length
shorter than 8 bits and some additional non-nullable
columns c2 etc., and
2. the WHERE clause is of the form (c1 = constant) AND c2 ...,
the SELECT query returns an unexpected result set.
The server stores BIT columns in a tricky way to save disk
space: if the column's bit length is not divisible by 8, the
server places the remainder bits among the null bits at the start
of a record. The remaining bytes are stored in the record itself,
and Field::ptr points to these bytes.
However, if the bit length of the whole column is less than 8,
there are no remaining bytes, and there is nothing to store in
the record at its regular place. In this case Field::ptr points
to bytes actually occupied by the next column in the record.
If both columns (the BIT column and the next one) are NOT NULL,
the Field::eq function incorrectly deduces that they are the
same column, so the query transformation/equal item elimination
code (see build_equal_items_for_cond) may mix these columns
and damage conditions containing references to them.
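An illustrative table and query of the affected shape (names and values are made up):

  CREATE TABLE t1 (c1 BIT(3) NOT NULL, c2 INT NOT NULL) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (b'101', 5), (b'011', 3);
  SELECT * FROM t1 WHERE c1 = b'101' AND c2 = 5;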
used causes server crash.
When the loose index scan access method is used, the values of aggregate functions
are precomputed by it. Aggregation of such functions shouldn't be performed
in this case, and the functions should be treated as normal ones.
The create_tmp_table function wasn't taking this into account, which led to
a crash if a query had MIN/MAX aggregate functions and employed a temporary table
and a loose index scan.
Now the JOIN::exec and the create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when the loose index scan is used.
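An illustrative query of the affected shape (names are made up): MIN/MAX over the second index column grouped by the first can use a loose index scan, while the ORDER BY on the aggregate typically requires a temporary table.

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT a, MIN(b), MAX(b) FROM t1 GROUP BY a ORDER BY MAX(b);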
Problem: the data consistency check (maximum record length) for a correct
MyISAM table with the CHECKSUM=1 and ROW_FORMAT=DYNAMIC options
may fail due to a wrong internal MyISAM parameter. As a result the
table may be marked as 'corrupted'.
Fix: properly set the MyISAM maximum record length parameter.
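An illustrative table of the affected kind (names are made up); before the fix, CHECK TABLE could spuriously report such a table as corrupted:

  CREATE TABLE t1 (a VARCHAR(255), b BLOB) ENGINE=MyISAM
    CHECKSUM=1 ROW_FORMAT=DYNAMIC;
  CHECK TABLE t1;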
Bug#26687 rpl_ddl test fails if run with --innodb option
Details:
- The current test and the expected results only fit
  if the slave uses MyISAM for mysqltest1.t1.
  Therefore skip the test if these conditions are not met.
- The solution for 5.1 will look quite different
because "ps_ddl" is already much improved in
MySQL 5.1.
test_if_data_home_dir fixed to look at the real path.
Checks added to mi_open for symlinks into the data home directory.
per-file messages:
include/my_sys.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
my_is_symlink interface added
include/myisam.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink interface added
myisam/mi_check.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile calls modified
myisam/mi_open.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
code added to mi_open to check for symlinks into data home directory.
mi_open_datafile now accepts 'original' file path to check if it's
an allowed symlink.
myisam/mi_static.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink defined
myisam/myisamchk.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile call modified
myisam/myisamdef.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
mi_open_datafile interface modified - 'real_path' parameter added
mysql-test/t/symlink.test
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error codes corrected, as some paths are now rejected for pointing inside the data home directory
mysql-test/r/symlink.result
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error messages corrected in the result
mysys/my_symlink.c
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
my_is_symlink() implemented
my_realpath() now returns the 'realpath' even if a file isn't a symlink
sql/mysql_priv.h
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
test_if_data_home_dir interface
sql/mysqld.cc
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
myisam_test_invalid_symlink set to 'test_if_data_home_dir'
sql/sql_parse.cc
Bug#32167 another privilege bypass with DATA/INDEX DIRECTORY.
error messages corrected
test_if_data_home_dir code fixed
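An illustrative statement of the kind that is now rejected (the path is a made-up example of a location inside the server's data directory):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM
    DATA DIRECTORY = '/var/lib/mysql/mysql';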
Send_field.org_col_name has a broken value on second execution.
It happens when the result field is created from a field which belongs to a view,
due to a forgotten assignment of some Send_field attributes.
The fix:
set Send_field.org_col_name and org_table_name to the correct values during Send_field initialization.
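An illustrative scenario (names are made up), assuming the second execution of a prepared statement against a view is the kind of secondary execution affected:

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a AS col FROM t1;
  PREPARE stmt FROM 'SELECT col FROM v1';
  EXECUTE stmt;
  EXECUTE stmt;   -- the column metadata of this execution was affected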
The Length value is the length of the field, while
Max_length is the length of the field value.
So Max_length cannot be more than Length.
The fix: fixed the calculation of the Item_empty_string item length.
(Patch applied and queued at the request of Trudy/Davi.)
When the fractional part in a multiplication of DECIMALs
overflowed, we truncated the first operand rather than the
longest. Now we truncate the least significant places instead,
for more precise multiplication.
(Queued at the request of Trudy/Davi.)
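An illustrative multiplication (values are made up) whose exact fractional result needs more decimal places than the result type can hold, so some places must be dropped; with the fix it is the least significant ones that are truncated:

  SELECT CAST(1.123456789012345678901234567890 AS DECIMAL(65,30))
       * CAST(1.987654321098765432109876543210 AS DECIMAL(65,30)) AS product;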