The server created "arc" directories inside database directories and
maintained useless copies of .frm files there.
Creation and renaming of those copies, as well as creation of the "arc"
directories themselves, have been discontinued.
The removal procedure has been kept untouched so that existing database
directories can still be cleaned up by the DROP DATABASE statement.
The view renaming procedure has also been updated to remove these
directories.
The JOIN for a subselect was not cleaned up if an error occurred during
sub_select() execution. This led to an assertion failure in
close_thread_tables().
Part of the 6.0 code was backported.
per-file comments:
mysql-test/r/sp-error.result
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test result
mysql-test/t/sp-error.test
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test case
sql/sp_head.cc
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
lex->unit.cleanup() call added if not substatement
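A minimal sketch of the added cleanup, assuming the server's THD and LEX
member names (this shows the idea only, not the literal 5.0 diff):

      /* If statement execution failed and we are not inside a sub-statement,
         clean up the units of the statement's LEX so that JOINs created for
         subselects are freed before close_thread_tables() runs. */
      if (!thd->in_sub_stmt)
        thd->lex->unit.cleanup();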
Machines with the hostname set to "localhost" caused uniqueness errors in
the SQL bootstrap data.
Now, zero rows are inserted in the cases where the (lowercased) hostname is
the same as the already-inserted 'localhost' name. A few tests that expect
certain local accounts to have a particular host name have also been fixed.
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While the new blob-field value was being stored, the cached value's address
range overlapped that of the new field value. This caused problems when the
cached value's storage was reallocated to hold a new character set
representation. The patch checks the address ranges and, if they overlap,
copies the new field value to separate storage before it is converted to
the new character set.
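The overlap test itself is simple. A standalone sketch of the kind of check
the patch performs (the function name is illustrative, not the actual field
code):

    #include <cstddef>

    /* Two byte ranges [a, a + a_len) and [b, b + b_len) overlap iff each
       one starts before the other ends.  On overlap, the new field value is
       first copied to separate storage and only then converted to the new
       character set. */
    static bool ranges_overlap(const char *a, size_t a_len,
                               const char *b, size_t b_len)
    {
      return a < b + b_len && b < a + a_len;
    }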
The fix for bug 31887 was incomplete: it assumed that all the field types
matched by the IS_NUM macro are descendants of Field_num and tried to
zero-fill the values before doing constant substitution with such fields
when they are compared to constant string values.
The only exception to this is Field_timestamp: it is matched by the IS_NUM
macro, but it is not a descendant of Field_num.
Fixed by excluding timestamp fields (Field_timestamp) from the zero-filling
when converting the constant they are compared with to a string.
Note that this does not exclude timestamp columns from const
propagation.
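The corrected classification amounts to something like this (a minimal
sketch using the IS_NUM macro from the client headers; the server-side
predicate may be shaped differently):

    #include <mysql.h>   /* IS_NUM() and enum enum_field_types */

    /* Zero-fill the constant for the comparison only for genuinely numeric
       field types: IS_NUM() also matches MYSQL_TYPE_TIMESTAMP, which must
       be skipped because Field_timestamp is not a Field_num descendant. */
    static bool zerofill_for_compare(enum enum_field_types type)
    {
      return IS_NUM(type) && type != MYSQL_TYPE_TIMESTAMP;
    }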
Details:
- backport, from 5.1 to 5.0, of some improvements that prevent sporadic
  test failures
- set @@GLOBAL.CONCURRENT_INSERT= 0 for the slave server as well
- add --sorted_result before all SELECTs whose result sets contain more
  than one row
- replace error numbers with error names
Moved the fix for this bug to 5.0, as other mysqldump bugs seem tied to
concurrent_insert being on.
concurrent_insert is set to off during this test because INSERTs were not
being completely processed before the calls to mysqldump, resulting in
failing tests.
Altered the .test file to turn concurrent_insert off during the test and to
restore it, when complete, to whatever value it had at the start of the test.
Re-recorded the .result file to account for the changes to variables in the
test.
The problem here is that symbols cannot be loaded, because the symbol path
is not set and the default path does not include the directory where the
PDB is located.
The problem is _not_ reproducible on the machine where mysqld.exe is built:
if the PDB is not found in the symbol path, dbghelp falls back to the fully
qualified PDB path given in the executable header, and on the build host
this succeeds.
The solution is to calculate the symbol path and pass it to the
SymInitialize() call.
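A minimal sketch of that approach (the helper name is made up; the real
code may assemble the path differently). It derives the executable's
directory and hands it to SymInitialize() as the user-supplied symbol
search path:

    #include <windows.h>
    #include <dbghelp.h>   /* link with dbghelp.lib */
    #include <string>

    static void init_symbol_path()
    {
      char exe[MAX_PATH];
      GetModuleFileNameA(NULL, exe, sizeof(exe));
      std::string path(exe);
      std::string::size_type slash= path.find_last_of("\\/");
      if (slash != std::string::npos)
        path.resize(slash);                      /* directory of the .exe */
      SymInitialize(GetCurrentProcess(), path.c_str(), TRUE);
    }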
mysqldump creates stand-in tables before dumping the actual view.
Those tables used the default storage engine; if the view had more columns
than that engine supports (a pathological case, arguably), loading the dump
would fail. The temporary stand-ins are now created as MyISAM tables to
prevent this.
Problem: repeating "CREATE... ( AUTOINCREMENT) ... SELECT" may lead to
an assertion failure in open_table().
Fix: reset table->auto_increment_field_not_null after writing each record.
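A hedged sketch of the reset (the flag name follows the server's TABLE
structure; where exactly the assignment was placed may differ from this):

      /* After the record has been written, clear the per-row flag so that
         the next record, or a repeated CREATE ... SELECT, does not see a
         stale value and trip the assertion. */
      table->auto_increment_field_not_null= FALSE;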
INSERT .. SELECT .. ON DUPLICATE KEY UPDATE col=DEFAULT
In order to get correct values from update fields that
belong to the SELECT part of an INSERT .. SELECT .. ON
DUPLICATE KEY UPDATE statement, the server adds the referenced
fields to the select list. Part of the code that does this
transformation is shared between the implementations of
the DEFAULT(col) function and the DEFAULT keyword (in
the col=DEFAULT expression), and the implementation of
the DEFAULT keyword was incomplete.
Bug#33031 app linked to libmysql.lib crash if run as service in vista under
localsystem
There are some problems with using DllMain hook functions on Windows to
automatically do global and per-thread initialization for libmysql.dll:
1) per-thread initialization (DLL_THREAD_ATTACH)
MySQL internally counts the number of active threads, which causes a delay
in my_end() if not all threads have exited. But there are threads that can
be started either by Windows internally (often in TCP/IP scenarios) or by
the user himself; those threads do not necessarily use libmysql.dll
functionality, but they nonetheless contribute to the count of open threads.
2) process initialization (DLL_PROCESS_ATTACH)
my_init() calls WSAStartup(), which itself loads DLLs and can lead to a
deadlock in the Windows loader.
The fix is to remove the DLL initialization code from libmysql.dll in the
general case. I still leave an environment variable, LIBMYSQL_DLLINIT,
which if set to any value will cause the old behavior (the DLL init hooks
will be called). This environment variable exists only to prevent breakage
of existing Windows-only applications that do not call mysql_thread_init()
and work OK today. Use of LIBMYSQL_DLLINIT is discouraged, and it will be
removed in 6.0.
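The resulting hook looks roughly like this (a sketch, not the shipped
libmysql code; the only part taken from the description above is that
LIBMYSQL_DLLINIT gates the old behavior):

    #include <windows.h>
    #include <stdlib.h>

    BOOL APIENTRY DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
    {
      /* Without LIBMYSQL_DLLINIT in the environment, do nothing here;
         applications are expected to call mysql_library_init() and
         mysql_thread_init() explicitly. */
      if (getenv("LIBMYSQL_DLLINIT") == NULL)
        return TRUE;

      switch (reason)                    /* old, implicit initialization */
      {
      case DLL_PROCESS_ATTACH: /* global init, e.g. my_init()    */ break;
      case DLL_THREAD_ATTACH:  /* per-thread init                */ break;
      case DLL_THREAD_DETACH:  /* per-thread cleanup             */ break;
      case DLL_PROCESS_DETACH: /* global cleanup, e.g. my_end()  */ break;
      }
      return TRUE;
    }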
A SELECT query returns an unexpected result set if:
1. a table has a not nullable BIT column c1 with a length
   shorter than 8 bits and some additional not nullable
   columns c2 etc., and
2. the WHERE clause is of the form (c1 = constant) AND c2 ...
The server stores BIT columns in a tricky way to save disk
space: if a column's bit length is not divisible by 8, the
server places the remainder bits among the null bits at the start
of a record. The remaining bytes are stored in the record itself,
and Field::ptr points to these bytes.
However, if the bit length of the whole column is less than 8,
there are no remaining bytes, and there is nothing to store in
the record at its regular place. In this case Field::ptr points
to bytes actually occupied by the next column in the record.
If both columns (the BIT column and the next one) are NOT NULL,
the Field::eq function incorrectly deduces that they are the
same column, so the query transformation / equal item elimination
code (see build_equal_items_for_cond) may mix these columns up
and damage conditions containing references to them.
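One way to repair the comparison is to make the BIT field's equality check
also look at where its leftover bits live. A hypothetical sketch following
the server's Field_bit member names (the actual fix may differ in detail):

      /* Two BIT fields are the same column only if, in addition to the
         base Field::eq() check, they keep their leftover bits at the same
         place among the null bits. */
      bool Field_bit::eq(Field *field)
      {
        return Field::eq(field) &&
               field->type() == MYSQL_TYPE_BIT &&
               bit_ptr == ((Field_bit*) field)->bit_ptr &&
               bit_ofs == ((Field_bit*) field)->bit_ofs;
      }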
A query that uses the loose index scan access method could cause a server
crash.
When the loose index scan access method is used, the values of aggregate
functions are precomputed by it. Aggregation of such functions should not
be performed again in this case, and the functions should be treated as
normal ones.
The create_tmp_table function was not taking this into account, which led
to a crash when a query had MIN/MAX aggregate functions and employed both
a temporary table and the loose index scan.
Now the JOIN::exec and create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when the loose index scan is used.
Problem: the data consistency check (maximum record length) for a correct
MyISAM table with the CHECKSUM=1 and ROW_FORMAT=DYNAMIC options
may fail due to a wrong internal MyISAM parameter. As a result, the table
may be marked as 'corrupted'.
Fix: properly set the MyISAM maximum record length parameter.