replication
The MySQL server uses the wrong lock type (always TL_READ instead of
TL_READ_NO_INSERT when appropriate) for tables used in
subqueries of an UPDATE statement. In some cases this leads to
broken replication, as statements are written to the binlog in the
wrong order.
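A minimal, self-contained sketch of the decision this fix is about. The
enum values mirror the server's thr_lock_type names, but the function and
its parameters are simplified stand-ins, not the actual server code: a
table that is only read inside a data-changing statement needs
TL_READ_NO_INSERT under statement-based binlogging, so a concurrent INSERT
cannot end up in the binlog in the wrong order relative to the UPDATE.

  #include <stdio.h>

  /* Toy model: why a read inside an UPDATE needs TL_READ_NO_INSERT. */
  enum thr_lock_type { TL_READ, TL_READ_NO_INSERT, TL_WRITE };

  static enum thr_lock_type
  read_lock_type_for_table(int statement_modifies_data, int binlog_on)
  {
    /* Plain TL_READ lets a concurrent INSERT into the read table be
       logged between this statement and that INSERT, so the slave may
       replay them in the wrong order. */
    if (statement_modifies_data && binlog_on)
      return TL_READ_NO_INSERT;
    return TL_READ;
  }

  int main(void)
  {
    printf("lock for subquery table of UPDATE: %s\n",
           read_lock_type_for_table(1, 1) == TL_READ_NO_INSERT
             ? "TL_READ_NO_INSERT" : "TL_READ");
    return 0;
  }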
Restore a stub of the removed mysql_odbc_escape_string function
to fix an ABI breakage. The function was intended to be private
and used only by Connector/ODBC, but, unfortunately, it was exported
as part of the ABI. Nonetheless, only a stub is restored, as the
original function is inherently broken and shouldn't be used.
This restoration only applies to MySQL 5.0; later versions will
address this differently, via reworked library versioning.
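For illustration only, a sketch of what an ABI-preserving stub can look
like. The signature below is an assumption, not copied from the libmysql
headers; the only point is that the exported symbol stays in the shared
library so existing binaries keep linking, while the body does nothing.

  /* Hypothetical stub: signature is assumed. Keeping the symbol exported
     preserves the ABI; returning 0 signals "not usable". */
  typedef struct st_mysql MYSQL;   /* opaque forward declaration */

  char *mysql_odbc_escape_string(MYSQL *mysql,
                                 char *to, unsigned long to_length,
                                 const char *from, unsigned long from_length,
                                 void *param,
                                 char *(*extend_buffer)(void *, char *,
                                                        unsigned long *))
  {
    /* silence unused-parameter warnings; the original implementation
       was inherently broken and is intentionally not restored */
    (void) mysql; (void) to; (void) to_length; (void) from;
    (void) from_length; (void) param; (void) extend_buffer;
    return 0;
  }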
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and substituted it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because depended_from
is not set in all cases, and the two checks do not contradict each other).
Fixed by bringing the old condition back as a complement to the
new one.
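A compilable toy model of the combined condition. The member names echo
the server's Item_ident::depended_from and the OUTER_REF_TABLE_BIT
table-map bit, but the types and the bit value here are simplified
assumptions:

  #include <stdbool.h>
  #include <stdio.h>

  #define OUTER_REF_TABLE_BIT (1ULL << 63)   /* placeholder bit value */

  struct field_ref {
    unsigned long long used_tables;   /* what used_tables() would return */
    void *depended_from;              /* set when resolved in an outer SELECT */
  };

  /* depended_from is not set in all cases, so the OUTER_REF_TABLE_BIT
     check is kept as a complement rather than being replaced. */
  static bool is_outer_reference(const struct field_ref *f)
  {
    return f->depended_from != NULL ||
           (f->used_tables & OUTER_REF_TABLE_BIT) != 0;
  }

  int main(void)
  {
    struct field_ref only_bit = { OUTER_REF_TABLE_BIT, NULL };
    printf("outer: %d\n", is_outer_reference(&only_bit));
    return 0;
  }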
- Create the "dummy" thread as joinable and wait for it to
exit before continuing in 'my_thread_global_init'
- This way we know that the pthread library is initialized
by one thread only
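A self-contained sketch of the pattern described above. The function name
here is made up; the real code is in mysys. The dummy thread is created
joinable and joined before anything else runs, so the pthread library
completes its one-time initialization from a single, known thread.

  #include <pthread.h>
  #include <stddef.h>

  static void *dummy_thread(void *arg)
  {
    (void) arg;          /* the thread does nothing; it only exists so the
                            pthread library initializes itself here */
    return NULL;
  }

  /* made-up name; stands in for the relevant part of my_thread_global_init */
  int init_pthreads_from_one_thread(void)
  {
    pthread_t tid;
    pthread_attr_t attr;

    if (pthread_attr_init(&attr))
      return 1;
    /* joinable, not detached, so we can wait for it */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    if (pthread_create(&tid, &attr, dummy_thread, NULL))
      return 1;
    pthread_join(tid, NULL);          /* continue only after it has exited */
    pthread_attr_destroy(&attr);
    return 0;
  }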
Solaris binary packages should be compiled with '-g0', not '-g'
The main fix for this is done in the build tools,
but in the sources it affects "configure.in"
which sets "DEBUG_CXXFLAGS" to be used in all debug builds.
When parsing the service installation parameter in
default_service_handling(), make sure the value of the
optional parameter doesn't overwrite its name.
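Purely as an illustration of the principle (none of the names below come
from the real default_service_handling() in mysqld.cc): the optional value
gets its own buffer instead of being written over the buffer that already
holds the parameter name.

  #include <stdio.h>
  #include <string.h>

  /* Illustration only: parse "--install[=name]"-style input so that the
     optional value never overwrites the buffer holding the name. */
  int main(void)
  {
    const char *arg = "--install=MySQL";
    char name[32], value[32];
    const char *eq = strchr(arg, '=');

    if (eq)
    {
      snprintf(name, sizeof(name), "%.*s", (int) (eq - arg), arg);
      snprintf(value, sizeof(value), "%s", eq + 1);   /* separate buffer */
    }
    else
    {
      snprintf(name, sizeof(name), "%s", arg);
      value[0] = '\0';
    }
    printf("name='%s' value='%s'\n", name, value);
    return 0;
  }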
The problem is that the argument buffer can be used as the result buffer,
which leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
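A toy model of the rule; the structures are simplified stand-ins, not the
server's Item classes. The argument's own buffer may be reused for the
result only when the argument is not a constant, because a constant must
keep its stored value for later executions of the statement.

  #include <stdbool.h>
  #include <stdio.h>

  struct str_arg {
    char buf[64];     /* the argument's own string buffer */
    bool constant;    /* true for literals / constant expressions */
  };

  /* Reusing a constant argument's buffer would silently change its value,
     so the 'old buffer' is only reused for non-constant arguments. */
  static char *pick_result_buffer(struct str_arg *arg, char *scratch)
  {
    return arg->constant ? scratch : arg->buf;
  }

  int main(void)
  {
    struct str_arg literal = { "hello", true };
    char scratch[64];
    char *out = pick_result_buffer(&literal, scratch);

    snprintf(out, sizeof(scratch), "HELLO-RESULT");   /* goes to scratch */
    printf("literal is untouched: %s\n", literal.buf);
    return 0;
  }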
can crash under load
Backport from 5.1.
Also includes key cache fixes from:
Bug 44068 (RESTORE can disable the MyISAM Key Cache)
Bug 40944 (Backup: crash after myisampack)
name as existing view
When trying to create a table with the same name as an existing view over a
join, the MySQL server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
the check against existing tables assumes that a base table object has
always been created.
In this case, since it is a view over multiple tables, there is no MySQL
derived table object.
Fixed the logic which checks for an existing table so that it no longer
assumes a table object has been created when the base table is a view over
multiple tables.
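A small compilable model of the corrected check. Types and names are
simplified here; the real logic lives in the server's table-opening code.
When the existing name is a view over multiple tables there is no base
table object, so the code must not assume one exists.

  #include <stdbool.h>
  #include <stdio.h>

  struct base_table;                 /* opaque; absent for a multi-table view */

  struct existing_object {
    struct base_table *table;        /* underlying table object, if any */
    bool is_view;
  };

  /* The old logic assumed 'table' was always set; the fixed check treats a
     view (which may have no table object) as a name clash too. */
  static bool name_already_used(const struct existing_object *obj)
  {
    return obj->is_view || obj->table != NULL;
  }

  int main(void)
  {
    struct existing_object multi_table_view = { NULL, true };
    printf("clash: %d\n", name_already_used(&multi_table_view));
    return 0;
  }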
The myisamchk tool generates warnings when run on MyISAM files (.MYI or .MYD).
This is because of the conversion of max_value for certain options in myisamchk
from signed long to unsigned long.
The max value for the options key_buffer_size, read_buffer_size,
write_buffer_size and sort_buffer_size is given as (long) ~0L, which becomes
-1 when cast from signed long to longlong and then to ulonglong. When
(ulonglong) -1 is compared with the maximal value for the GET_ULONG data type,
it is adjusted to (ulonglong) ULONG_MAX and the warning is thrown.
Fixed by using the right max size.
Max values for the variables (from mysqld.cc)
----------------------------
1. key_buffer_size
5.0: ULONG_MAX
5.1: SIZE_T_MAX
6.0: SIZE_T_MAX
2. read_buffer_size and write_buffer_size
5.0: INT_MAX32
5.1: INT_MAX32
6.0: INT_MAX32
3. sort_buffer_size (aka myisam_sort_buffer_size)
5.0: UINT_MAX32
5.1: ULONG_MAX
6.0: ULONG_MAX
Note: testcase not attached
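The cast chain described above can be reproduced standalone. The snippet
below only demonstrates how (long) ~0L turns into a huge unsigned 64-bit
value once it reaches the unsigned comparison; it is not the actual
myisamchk or my_getopt code.

  #include <stdio.h>

  int main(void)
  {
    long               max_as_long = (long) ~0L;                /* -1        */
    long long          widened     = (long long) max_as_long;   /* still -1  */
    unsigned long long as_ull      = (unsigned long long) widened;

    /* -1 wraps around to the largest unsigned 64-bit value, which is what
       the option-limit check then has to adjust, producing the warning */
    printf("%ld -> %lld -> %llu\n", max_as_long, widened, as_ull);
    return 0;
  }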
function,file sql_base.cc
When uncacheable queries are written to a temp table the optimizer must
preserve the original JOIN structure, because it is re-using the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable sub-queries.
But top level queries can also benefit from this mechanism, especially if
they're using index access and need a reset.
Fixed by not limiting the saving of JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
CREATE TABLE...LIKE...
The mysql server option 'sync_frm' is ignored when a table is created with
the syntax CREATE TABLE .. LIKE..
Fixed by adding the MY_SYNC flag and calling my_sync() from my_copy() when
the flag is set.
In mysql_create_table(), when 'sync_frm' is set, the MY_SYNC flag is passed
to my_copy().
Note: a test case is not attached; this can be tested manually using a debugger.
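A standalone illustration of what the MY_SYNC flag asks my_copy() to do.
POSIX calls are used here instead of the mysys wrappers, and COPY_SYNC is a
made-up stand-in for MY_SYNC: after the copy, fsync() the destination so the
new .frm actually reaches disk when sync_frm is enabled.

  #include <fcntl.h>
  #include <sys/types.h>
  #include <unistd.h>

  #define COPY_SYNC 1   /* stand-in for the MY_SYNC flag */

  /* Simplified copy: read 'from', write 'to', and fsync() the destination
     when asked to, which is what sync_frm requests for the new .frm file. */
  static int copy_file(const char *from, const char *to, int flags)
  {
    char buf[4096];
    ssize_t n;
    int err = 0;
    int src = open(from, O_RDONLY);
    int dst = open(to, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (src < 0 || dst < 0)
      err = -1;
    while (!err)
    {
      n = read(src, buf, sizeof(buf));
      if (n == 0)
        break;                                   /* end of file */
      if (n < 0 || write(dst, buf, (size_t) n) != n)
        err = -1;
    }
    if (!err && (flags & COPY_SYNC))
      err = fsync(dst);                          /* the step this fix adds */
    if (src >= 0) close(src);
    if (dst >= 0) close(dst);
    return err;
  }

  int main(void)
  {
    return copy_file("t1.frm", "t2.frm", COPY_SYNC) ? 1 : 0;
  }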
Failing to connect would release parts of the MYSQL struct.
We would then proceed to try to connect again without
re-initializing the struct.
We now prevent the unwanted freeing of data we'll still need.
with gcc 4.3.2
This patch fixes a number of GCC warnings about variables used
before being initialized. A new macro UNINIT_VAR() is introduced
for use in the variable declaration, and LINT_INIT() usage will
be gradually deprecated. (A workaround is used for g++, pending
a patch for a g++ bug.)
GCC warnings for unused results (attribute warn_unused_result)
for a number of system calls (present at least in later
Ubuntus, where the usual void cast trick doesn't work) are
also fixed.
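A sketch of the self-initialization idea behind UNINIT_VAR(). The real
macro in my_global.h has more cases and its exact g++ fallback is not
reproduced here; this only shows how the warning is silenced without
emitting real initialization code in the C case.

  #include <stdio.h>

  /* Illustration only; see my_global.h for the real definition. */
  #ifdef __cplusplus
  #define UNINIT_VAR(x) x= 0      /* stand-in for the g++ workaround */
  #else
  #define UNINIT_VAR(x) x= x      /* self-init: no code, no gcc warning */
  #endif

  int main(void)
  {
    int i;
    int UNINIT_VAR(value);        /* expands to "int value= value;" in C */

    for (i= 0; i < 3; i++)
      if (i == 0)
        value= 42;                /* gcc cannot prove this branch is taken */
    printf("%d\n", value);
    return 0;
  }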
When a connection is dropped, any remaining temporary table is also automatically
dropped and the SQL statement for this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary log as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
field references
This error requires a combination of factors:
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. In particular,
it will not call make_join_statistics(), which fills in various
structures for each referenced table.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However, if this SELECT list happens to contain a correlated
subquery, this subquery is evaluated in the normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized, due to the "impossible WHERE" at this level,
we get a NULL pointer dereference.
Fixed by using a better condition for discovering whether a field is
"local" to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case, since
for outer references to constant tables Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if the field
count exceeds 10 (the number of values for PROCEDURE ANALYSE()),
then we get a crash in the fill_record() function.
The code was using a special global buffer for the value of IS NULL ranges.
This buffer was not always long enough to be copied by a regular memcpy(). As a
result, read buffer overflows could occur.
Fixed by setting the null byte to 1 and zero-filling the rest of the field's
disk image with bzero() (instead of relying on the buffer and memcpy()).
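A minimal sketch of the safer construction. Names and layout are
simplified and the real change is in the range optimizer: mark the null
byte and zero-fill the rest of the key image, so no shared source buffer
has to be long enough for a memcpy().

  #include <string.h>
  #include <stdio.h>

  /* image[0] is the null-indicator byte, followed by field_length bytes. */
  static void store_null_key_image(unsigned char *image, size_t field_length)
  {
    image[0]= 1;                          /* field IS NULL */
    memset(image + 1, 0, field_length);   /* no read from any source buffer */
  }

  int main(void)
  {
    unsigned char image[1 + 16];
    store_null_key_image(image, sizeof(image) - 1);
    printf("null byte: %u\n", image[0]);
    return 0;
  }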
decrease for INSERTs
Bulk inserts (multiple-row, CREATE ... SELECT, INSERT ... SELECT) into
MyISAM tables were performed inefficiently. This was mainly affecting
use cases where read_buffer_size was considerably large (>256K) and a low
number of rows was inserted (e.g. 30-100).
The problem was that during I/O cache initialization (which happens
before each bulk insert) the allocated I/O buffer was unnecessarily
initialized to '\0'.
This was happening because of a clash in flag values. MyISAM tells the I/O
cache to wait for free space (if out of disk space) by passing the
MY_WAIT_IF_FULL flag. Since MY_WAIT_IF_FULL and MY_ZEROFILL have the
same value, the memory allocator was initializing the memory to '\0'.
The performance gain provided by this patch may only be visible with
non-debug binaries, since safemalloc always initializes allocated memory
to 0xA5A5...
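The clash can be shown in isolation. The two flag values below are
placeholders standing in for MY_WAIT_IF_FULL and MY_ZEROFILL from
my_sys.h; the point is only that two flags sharing one value make a
request for one of them imply the other.

  #include <stdio.h>

  /* Placeholder values; the real flags are defined in my_sys.h. */
  #define MY_WAIT_IF_FULL 32   /* "wait and retry if the disk is full" */
  #define MY_ZEROFILL     32   /* "zero the allocated memory"          */

  int main(void)
  {
    int myflags = MY_WAIT_IF_FULL;   /* the I/O cache only wants the wait */

    if (myflags & MY_ZEROFILL)       /* ...but this also tests true */
      puts("I/O buffer would be needlessly cleared before each bulk insert");
    return 0;
  }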
This is a partial correction to the original fix for bug#37098
Get rid of "Installed (but unpackaged)" files in the RPM build
which used a wrong variable.
The check for stack overflow was independent of the size of the
structure stored in the heap.
Fixed by adding sizeof(PARAM) to the requested free heap size.
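A toy version of the corrected check; the names and sizes are invented and
the real code uses the server's stack-overrun check. The requested free
space now accounts for sizeof(PARAM) as well, not just a generic margin.

  #include <stdio.h>
  #include <stddef.h>

  #define SAFETY_MARGIN (8 * 1024)         /* generic margin (made up) */

  struct param { char key_parts[4096]; };  /* stands in for the optimizer's PARAM */

  static int enough_space(size_t bytes_free, size_t wanted)
  {
    return bytes_free >= wanted;
  }

  int main(void)
  {
    size_t bytes_free = 10 * 1024;
    printf("old check passes: %d, new check passes: %d\n",
           enough_space(bytes_free, SAFETY_MARGIN),
           enough_space(bytes_free, SAFETY_MARGIN + sizeof(struct param)));
    return 0;
  }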