compression
Since uint3korr() may read 4 bytes depending on build flags and
platform, allocate 1 extra "safety" byte in the network buffer
for the cases when uint3korr() in my_real_read() is called to
read the last 3 bytes in the buffer.
It is difficult in practice to construct a reliable and reasonably
small test case for this bug, as that would require constructing
an input stream such that a certain sequence of bytes in a
compressed packet happens to be the last 3 bytes of the network
buffer.
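A minimal standalone sketch of why the extra byte matters, assuming a
uint3korr() variant implemented as a 4-byte load plus a mask (the buffer
size, offsets, and helper name here are illustrative, not the server's
actual code):

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Hypothetical 3-byte reader that, like some uint3korr() builds,
    // performs a 4-byte load and masks off the high byte for speed.
    static uint32_t read3_via_4byte_load(const unsigned char *p) {
      uint32_t v;
      memcpy(&v, p, 4);          // touches one byte past the 3 bytes we need
      return v & 0x00FFFFFFu;    // keep only the low 3 bytes (little-endian view)
    }

    int main() {
      const size_t net_buffer_length = 16;
      // One extra "safety" byte keeps the 4-byte load on the last 3 bytes
      // of the buffer inside the allocation.
      unsigned char *buff =
          static_cast<unsigned char *>(malloc(net_buffer_length + 1));
      if (buff == nullptr) return 1;
      memset(buff, 0, net_buffer_length + 1);
      buff[13] = 0x01; buff[14] = 0x02; buff[15] = 0x03;  // last 3 payload bytes
      printf("%u\n", (unsigned) read3_via_4byte_load(buff + 13));
      free(buff);
      return 0;
    }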
The maximum value of the max_join_size variable is set by converting
a signed type (long int) with a negative value (-1) to a wider unsigned
type (unsigned long long), which yields the largest possible value of
the wider unsigned type -- as per the language conversion rules. But,
depending on build options, the type of max_join_size might be a
narrower type (ha_rows, i.e. unsigned long), which causes a warning to
be thrown once the large value is truncated to fit.
The solution is to ensure that the maximum value of the variable is
always set to the maximum value of the integer type of max_join_size.
Furthermore, it would be interesting to always have a fixed type for
this variable, but this would incur a change of behavior which is
not acceptable for a GA version. See Bug#35346.
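A small sketch of the conversion rules described above, with ha_rows_t
standing in for ha_rows (whether it is narrower than unsigned long long
depends on the build):

    #include <cstdio>

    int main() {
      typedef unsigned long ha_rows_t;  // stand-in for ha_rows; may be
                                        // narrower than unsigned long long
      long sentinel = -1;

      // Converting -1 to a wider unsigned type yields that type's maximum
      // value, as per the language conversion rules.
      unsigned long long as_ull = (unsigned long long) sentinel;

      // Squeezing that value back into a narrower unsigned type truncates
      // it, which is what produced the warning.
      ha_rows_t truncated = (ha_rows_t) as_ull;

      // Using the maximum of the variable's own type avoids the truncation.
      ha_rows_t max_join_size_max = ~(ha_rows_t) 0;

      printf("as_ull=%llu truncated=%lu max=%lu\n", as_ull,
             (unsigned long) truncated, (unsigned long) max_join_size_max);
      return 0;
    }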
Post-merge fix: the test case could fail due to a conversion of the
max_join_size value to an integer. Fixed by preserving the value
as a string for comparison purposes.
procedures causes crashes!
The problem in that bug report was mostly fixed by the
patch for Bug#38691.
However, the attached test case focused on another crash or
valgrind warning problem: a SHOW PROCESSLIST query accesses
freed memory of an SP instruction that runs in a parallel
connection.
Changes to thd->query/thd->query_length in dangerous
places have been guarded with the per-thread
LOCK_thd_data mutex (the THD::LOCK_delete mutex has been
renamed to THD::LOCK_thd_data).
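A simplified sketch of the locking pattern, with ThdLike standing in for
THD and std::mutex standing in for the server's mutex type; the real code
differs in detail:

    #include <cstdio>
    #include <mutex>
    #include <string>

    // Simplified stand-in for THD: the real server guards thd->query and
    // thd->query_length with the per-thread LOCK_thd_data mutex.
    struct ThdLike {
      std::mutex LOCK_thd_data;
      const char *query = nullptr;
      size_t query_length = 0;

      // Writer side: the owning connection swaps in new statement text.
      void set_query(const char *q, size_t len) {
        std::lock_guard<std::mutex> guard(LOCK_thd_data);
        query = q;
        query_length = len;
      }
    };

    // Reader side: another connection (e.g. serving SHOW PROCESSLIST)
    // copies the text under the same mutex, so it never reads a pointer
    // that is being freed.
    std::string snapshot_query(ThdLike &thd) {
      std::lock_guard<std::mutex> guard(thd.LOCK_thd_data);
      return thd.query ? std::string(thd.query, thd.query_length)
                       : std::string();
    }

    int main() {
      ThdLike thd;
      thd.set_query("SELECT 1", 8);
      printf("%s\n", snapshot_query(thd).c_str());
      return 0;
    }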
In create_myisam_from_heap(), mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. when a temporary
MyISAM table gets overrun due to its MAX_ROWS limit
while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was
converted to a warning due to the IGNORE clause, so neither an
'ok' nor an 'error' packet could be sent back to the client. This
condition led to a hanging client when using a 5.0 server, or an
assertion failure in 5.1.
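A rough sketch of the rule, with invented names (SessionLike,
copy_heap_to_disk) standing in for the server's THD and
create_myisam_from_heap(); the error-code values are placeholders:

    #include <cstdio>

    // Placeholder error codes; the real values come from the server headers.
    enum { HA_OK = 0, HA_ERR_RECORD_FILE_FULL = 135, HA_ERR_CRASHED = 126 };

    struct SessionLike {
      bool is_fatal = false;
      void fatal_error() { is_fatal = true; }  // mirrors thd->fatal_error()
    };

    // Any copy error other than "record file full" on the in-memory table is
    // promoted to a fatal error, so an IGNORE clause cannot silently
    // downgrade it to a warning.
    int copy_heap_to_disk(SessionLike *session, int copy_error) {
      if (copy_error != HA_OK && copy_error != HA_ERR_RECORD_FILE_FULL)
        session->fatal_error();
      return copy_error;
    }

    int main() {
      SessionLike s;
      copy_heap_to_disk(&s, HA_ERR_CRASHED);
      printf("fatal=%d\n", s.is_fatal ? 1 : 0);
      return 0;
    }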
not logged
Errors encountered during initialization of the SSL subsystem
are printed to stderr, rather than to the error log.
This patch adds a parameter to several SSL init functions to
report the error (if any) out to the caller. The function
init_ssl() in mysqld.cc is moved after the initialization of
the log subsystem, so that any error messages can be logged to
the error log. Printing of messages to stderr has been
retained to get diagnostic output in a client context.
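A sketch of the out-parameter reporting pattern; init_ssl_sketch and its
signature are illustrative, not the server's actual functions:

    #include <cstdio>

    // The init routine hands any error text back to its caller instead of
    // printing it directly.
    static bool init_ssl_sketch(bool simulate_failure, const char **error_out) {
      *error_out = nullptr;
      if (simulate_failure) {
        *error_out = "SSL context setup failed";
        return false;
      }
      return true;
    }

    int main() {
      const char *err = nullptr;
      if (!init_ssl_sketch(true, &err)) {
        // Server context: the log subsystem is already up at this point, so
        // the message can go to the error log; a client would keep printing
        // to stderr.
        fprintf(stderr, "[ERROR] %s\n", err);
      }
      return 0;
    }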
When an item is moved to the upper select during optimization,
the item's context is left unchanged. This caused wrong results in
PS/SP mode.
Item_ident::remove_dependence_processor now sets the item's context
to that of the select to which the item is moved.
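A toy model of the fix, with invented types standing in for the server's
item, context and select objects; it only shows that the moved item's
context must now reference the destination select:

    #include <cstdio>

    // Invented stand-ins for select and name-resolution context objects.
    struct SelectLike { int nest_level; };
    struct ContextLike { SelectLike *select; };
    struct ItemLike { ContextLike context; };

    // When an item is pulled up into the outer select, its context must be
    // updated to point at that select as well (the essence of the fix).
    void move_item_to_upper_select(ItemLike &item, SelectLike &upper_select) {
      item.context.select = &upper_select;
    }

    int main() {
      SelectLike inner{2}, outer{1};
      ItemLike item{{&inner}};
      move_item_to_upper_select(item, outer);
      printf("item now resolves in select at nest level %d\n",
             item.context.select->nest_level);
      return 0;
    }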
it returns misleading 'table is full'
InnoDB returns a misleading error message "table is full"
when the number of active concurrent transactions is greater
than 1024.
Fixed by adding the error code ER_TOO_MANY_CONCURRENT_TRXS to the
error codes. InnoDB should return HA_TOO_MANY_CONCURRENT_TRXS
to MySQL, which is then mapped to ER_TOO_MANY_CONCURRENT_TRXS.
Note: a test case is not written as this was reproducible only by
changing InnoDB code.
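A sketch of the error-code mapping step; the enum names echo the ones
above, but the numeric values and the mapping function are placeholders:

    #include <cstdio>

    enum HandlerError { HA_ERR_TABLE_FULL = 1, HA_TOO_MANY_CONCURRENT_TRXS = 2 };
    enum ServerError  { ER_RECORD_FILE_FULL = 100,
                        ER_TOO_MANY_CONCURRENT_TRXS = 101 };

    // The storage engine reports a handler-level code; the server translates
    // it into a specific client-facing error instead of falling back to a
    // generic "table is full" message.
    ServerError map_handler_error(HandlerError e) {
      switch (e) {
        case HA_TOO_MANY_CONCURRENT_TRXS: return ER_TOO_MANY_CONCURRENT_TRXS;
        case HA_ERR_TABLE_FULL:           return ER_RECORD_FILE_FULL;
      }
      return ER_RECORD_FILE_FULL;
    }

    int main() {
      printf("mapped code: %d\n",
             map_handler_error(HA_TOO_MANY_CONCURRENT_TRXS));
      return 0;
    }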
In a subselect, all fields from outer selects are marked as dependent on
the selects they belong to. In some cases the optimizer substitutes the
subselect with an equivalent expression. For example, "a_field IN
(SELECT outer_field)" is substituted with "a_field = outer_field". As
the outer_field is moved to the upper select, it is not really outer
anymore, but it was left marked as outer. If an index exists over
a_field, the optimizer chooses a wrong execution plan and thus returns
a wrong result.
Now the Item_in_subselect::single_value_transformer function removes the
dependent marking from fields when a subselect is optimized away.
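A toy model of the marking clean-up; FieldItem and its flag are invented
stand-ins for the server's item dependency bookkeeping:

    #include <cstdio>

    // Invented stand-in for a field item with a dependency marking.
    struct FieldItem {
      const char *name;
      bool depends_on_outer;
    };

    // When "a_field IN (SELECT outer_field)" is rewritten as
    // "a_field = outer_field", the field moved into the upper select must
    // lose its outer-dependency flag (the essence of the fix).
    void on_subselect_optimized_away(FieldItem &moved_field) {
      moved_field.depends_on_outer = false;
    }

    int main() {
      FieldItem outer_field = {"outer_field", true};
      on_subselect_optimized_away(outer_field);
      printf("%s depends_on_outer=%d\n", outer_field.name,
             outer_field.depends_on_outer ? 1 : 0);
      return 0;
    }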
This patch is a follow-up to http://lists.mysql.com/commits/76678.
When an allocation failure occurs for the buffer in the dynamic
array, an error condition was being set. However, the dynamic array
is usable even if the memory allocation fails, and since in most
cases the thread can continue to work without any problems, the
error condition should not be set here.
This patch removes the setting of the error condition when the
memory allocation for the buffer in the dynamic array fails.
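A sketch of the intended behavior, assuming a simplified dynamic array
(DynArray is invented, not the server's DYNAMIC_ARRAY API): initialization
never raises an error, and allocation is retried when an element is
actually appended:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    struct DynArray {
      unsigned char *buffer;
      size_t elements, max_elements, element_size;
    };

    // Initialization never reports an error: on allocation failure the
    // array is simply left empty.
    void dyn_array_init(DynArray *arr, size_t element_size, size_t init_count) {
      arr->elements = 0;
      arr->element_size = element_size;
      arr->buffer = (unsigned char *) malloc(element_size * init_count);
      arr->max_elements = arr->buffer ? init_count : 0;
    }

    // Appending is where a failure actually matters, so it is reported here.
    bool dyn_array_append(DynArray *arr, const void *elem) {
      if (arr->elements == arr->max_elements) {
        size_t new_count = arr->max_elements ? arr->max_elements * 2 : 16;
        unsigned char *p = (unsigned char *)
            realloc(arr->buffer, new_count * arr->element_size);
        if (p == nullptr) return false;
        arr->buffer = p;
        arr->max_elements = new_count;
      }
      memcpy(arr->buffer + arr->elements++ * arr->element_size,
             elem, arr->element_size);
      return true;
    }

    int main() {
      DynArray a;
      dyn_array_init(&a, sizeof(int), 4);
      int v = 42;
      printf("append ok=%d\n", dyn_array_append(&a, &v) ? 1 : 0);
      free(a.buffer);
      return 0;
    }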
sort_buffer_size cannot allocate
The NULL return from tree_insert() (on low memory) was not
checked for in Item_func_group_concat::add(). As a result,
a crash happens under low-memory conditions.
Fixed by properly checking the return code.
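A minimal sketch of the fix's shape; tree_insert_sketch and add_value are
invented stand-ins for tree_insert() and Item_func_group_concat::add():

    #include <cstdio>
    #include <cstdlib>

    struct TreeNode { int value; };

    // Stand-in for a tree insertion routine that returns NULL when it
    // cannot allocate memory for the new element.
    TreeNode *tree_insert_sketch(int value, bool out_of_memory) {
      if (out_of_memory) return nullptr;
      TreeNode *node = (TreeNode *) malloc(sizeof(TreeNode));
      if (node != nullptr) node->value = value;
      return node;
    }

    // Treat a NULL return as an error instead of dereferencing the pointer
    // and crashing.
    bool add_value(int value, bool out_of_memory) {
      TreeNode *node = tree_insert_sketch(value, out_of_memory);
      if (node == nullptr) {
        fprintf(stderr, "out of memory while aggregating values\n");
        return true;    // error
      }
      free(node);
      return false;     // success
    }

    int main() {
      printf("error=%d\n", add_value(1, /*out_of_memory=*/true) ? 1 : 0);
      return 0;
    }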
Fixed the following problems:
1. A cmake 2.6 warning because of a changed default in
how dependencies on libraries with a specified
path are resolved.
Fixed by requiring cmake 2.6.
2. Removed an obsolete pre-NT4 hack, including Windows
system defines used to alter the behavior of windows.h.
3. Disabled warning C4065 on compiling sql_yacc.cc because
of a known incompatibility in some of the newer bison binaries.
match against.
The server crashes when executing a prepared statement with duplicate
MATCH() function calls in the SELECT and ORDER BY expressions, e.g.:
SELECT MATCH(a) AGAINST('test') FROM t1 ORDER BY MATCH(a) AGAINST('test')
This query gets optimized by the server, so the value returned
by MATCH() from the SELECT list is reused for ORDER BY purposes.
To make this optimization, the server compares items from the
SELECT and ORDER BY lists. We were getting a server crash because
the comparison function for the MATCH() item is not intended to be
called at this point of execution.
In 5.0 and 5.1 this problem is worked around by resetting the MATCH()
item to the state it had during PREPARE.
In 6.0 a correct comparison function will be implemented and
duplicate MATCH() items from the ORDER BY list will be
optimized.
without error
When using quick access methods for searching rows in UPDATE or
DELETE, there was no check whether a fatal error had already been
sent to the client while evaluating the quick condition.
As a result, a false OK (following the error) was sent to the
client and the error was thus transformed into a warning.
Fixed by checking for errors sent to the client during
SQL_SELECT::check_quick() and treating them as real errors.
Fixed a wrong test case in group_min_max.test.
Fixed a wrong return code in mysql_update() and mysql_delete().
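A simplified sketch of the added check, with invented stand-ins for THD
and SQL_SELECT::check_quick():

    #include <cstdio>

    // Invented stand-in for the session's error state (thd in the server).
    struct SessionLike {
      bool error_sent = false;
      bool is_error() const { return error_sent; }
    };

    // Stand-in for SQL_SELECT::check_quick(): it may raise an error on the
    // session while deciding whether a range scan can be used.
    bool check_quick_sketch(SessionLike &session, bool fail) {
      if (fail) session.error_sent = true;
      return !fail;
    }

    // UPDATE/DELETE path: an error raised inside the quick check has to be
    // propagated instead of being followed by an OK packet.
    int do_update_sketch(SessionLike &session, bool quick_fails) {
      bool range_usable = check_quick_sketch(session, quick_fails);
      if (session.is_error()) return 1;   // real error, do not send OK
      (void) range_usable;
      return 0;                           // success path
    }

    int main() {
      SessionLike s;
      printf("rc=%d\n", do_update_sketch(s, /*quick_fails=*/true));
      return 0;
    }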
The crash happened because for views which are joins
we have table_list->table == 0, and a call to any
table_list->table method leads to a crash.
The fix is to perform the table_list->table->file->extra()
call for all tables belonging to the view.
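A toy model of the fix, assuming a simplified table list (the types and
the used_tables member are invented, not the server's TABLE_LIST layout):

    #include <cstdio>
    #include <vector>

    struct HandlerLike {
      void extra(int operation) { printf("extra(%d)\n", operation); }
    };
    struct TableLike { HandlerLike file; };
    struct TableListLike {
      TableLike *table;                          // 0 when the entry is a join view
      std::vector<TableListLike *> used_tables;  // underlying tables of the view
    };

    // A view over a join has no single table object, so the per-table call
    // must be issued for every underlying table instead of dereferencing a
    // null table pointer.
    void apply_extra(TableListLike *tl, int operation) {
      if (tl->table != nullptr) {
        tl->table->file.extra(operation);
        return;
      }
      for (TableListLike *t : tl->used_tables)
        apply_extra(t, operation);
    }

    int main() {
      TableLike t1, t2;
      TableListLike l1{&t1, {}}, l2{&t2, {}};
      TableListLike join_view{nullptr, {&l1, &l2}};
      apply_extra(&join_view, 1);
      return 0;
    }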
without proper formatting
The problem is that a suitably crafted database identifier
supplied to COM_CREATE_DB or COM_DROP_DB can cause a SIGSEGV,
and thereby a denial of service. The database name is printed
to the log without using a format string, so potential
attackers can control the behavior of my_b_vprintf() by
supplying their own format string. A CREATE or DROP privilege
would be required.
This patch supplies a format string to the printing of the
database name. A test case is added to mysql_client_test.
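An illustration of the general pattern being fixed, using plain printf()
rather than my_b_vprintf():

    #include <cstdio>

    // BAD: user-controlled text passed as the format argument lets '%'
    // directives in the name drive the logger into reading arguments that
    // were never supplied (shown commented out, since it can crash):
    //   printf(db_name);

    // GOOD: a fixed format string treats the database name as plain data.
    void log_db_name(const char *db_name) {
      printf("create database: %s\n", db_name);
    }

    int main() {
      log_db_name("test_%s_%n_db");   // printed literally, no crash
      return 0;
    }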
But Should Be "Zero Rows"
After applying the InnoDB snapshot 5.0-ss5406 for Bug#40565, the Windows
push build tests failed because of a missing cast of a void * pointer in
the row0sel.c file.
Informed the InnoDB developers and received a patch by email.