Analysis: In the check_trx_exists() function InnoDB allocates
a new trx if no trx is found in the thd, but this newly
allocated trx is not registered with the thd. This is unsafe,
because nothing prevents the InnoDB plugin from being uninstalled
while there is an active transaction. This can cause crashes, hangs
and other odd behavior. It may also corrupt the stack, as
function pointers are not available after dlclose.
Fix: Use thd_set_ha_data() when
manipulating per-connection handler data. It does the appropriate
plugin locking.
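A minimal sketch of the resulting pattern, assuming the usual InnoDB names
(innodb_hton_ptr, trx_allocate_for_mysql) and the plugin API accessors
thd_get_ha_data()/thd_set_ha_data():

    static trx_t *check_trx_exists(THD *thd)
    {
      trx_t *trx= static_cast<trx_t*>(thd_get_ha_data(thd, innodb_hton_ptr));
      if (trx == NULL)
      {
        trx= trx_allocate_for_mysql();
        /* Register the trx with the connection; this takes a plugin
           lock, so InnoDB cannot be unloaded while the trx exists. */
        thd_set_ha_data(thd, innodb_hton_ptr, trx);
      }
      return trx;
    }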
The test has performance-schema in its opt file, so it failed when the server was compiled without performance schema.
Make the option loose; MTR will then be able to reach the have_perfschema.inc check and skip the test gracefully.
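For example, in the test's .opt file (the exact option line is illustrative):

    # --performance-schema would abort a server built without P_S;
    # the loose- prefix turns the unknown option into a warning, so
    # MTR reaches the have_perfschema.inc check and skips gracefully
    --loose-performance-schema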
The problem was that the test simply takes too long on slow I/O and
triggers the testcase timeout. Reduced the number of operations and
inserts to make the test shorter.
When the macro is expanded in an expression like ~QPLAN_QC_NO (e.g. in the
Query_cache::send_result_to_client() function in sql/sql_cache.cc), then without
the parentheses the expression is evaluated to a wrong value.
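A sketch of the failure mode (the actual bit value of QPLAN_QC_NO is
illustrative):

    #define QPLAN_QC_NO_BAD  1 << 6    /* unparenthesized definition */
    #define QPLAN_QC_NO     (1 << 6)   /* parenthesized (fixed) */

    unsigned int bad= ~QPLAN_QC_NO_BAD; /* ~1 << 6 == (~1) << 6: wrong mask */
    unsigned int ok=  ~QPLAN_QC_NO;     /* ~(1 << 6): the intended mask */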
The fix is that if the slave has a different integer size than
the master, then the slave will assume the master has the same signed/unsigned
modifier as the slave.
This means that one can safely change, on the slave, an int to a bigint
or an unsigned int to an unsigned bigint. Changing an unsigned int to a
signed bigint will cause replication failures when the high bit of the
unsigned int is set.
We can't give an error if the signedness is different on the master and slave,
as the binary log doesn't contain the signedness of the column on the master.
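A self-contained illustration of why the high bit matters when the slave
assumes its own signedness (hypothetical values, not server code):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
      uint32_t master_val= 0x80000000u;  /* unsigned int with high bit set */
      /* a slave with a signed bigint column reinterprets the 32-bit value
         as signed before widening it to 64 bits */
      int64_t slave_val= static_cast<int32_t>(master_val);
      std::printf("master=%u slave=%lld\n",
                  master_val, static_cast<long long>(slave_val));
      /* prints: master=2147483648 slave=-2147483648 */
      return 0;
    }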
Let mysqld_safe_syslog.cnf force-disable the error log so that logging to syslog is
not affected by a previous log_error setting.
Added handling of --skip-log-error to mysqld_safe.
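The shipped mysqld_safe_syslog.cnf then looks roughly like this (a sketch,
using mysqld_safe's usual underscore option spelling):

    [mysqld_safe]
    skip_log_error
    syslog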
INCORRECT RESULTS
Issue:
-----
Updating varchar and text fields in the same update
statement can produce incorrect results. When a varchar
field is assigned to the text field and the varchar field
is then set to a different value, the text field's result
contains the varchar field's new value.
SOLUTION:
---------
Currently the blob type does not allocate space for the
string to be stored. Instead it contains a pointer to the
varchar string. So when the varchar field is changed as
part of the update statement, the value contained in the
blob also changes.
The fix would be to actually store the value by allocating
space for the blob's string. We can avoid allocating this
space when the varchar field is not being written into.
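A self-contained illustration of the aliasing problem (simplified
stand-ins, not the actual Field classes):

    #include <cstring>
    #include <string>
    #include <cassert>

    struct Varchar { char buf[32]; };

    /* buggy: the blob only remembers a pointer into the varchar buffer */
    struct BlobRef  { const char *ptr; size_t len; };

    /* fixed: the blob owns a copy of the bytes */
    struct BlobCopy { std::string value; };

    int main()
    {
      Varchar v;
      std::strcpy(v.buf, "old");

      BlobRef  ref=  { v.buf, 3 };              /* SET text_col= varchar_col */
      BlobCopy copy= { std::string(v.buf, 3) };

      std::strcpy(v.buf, "new");                /* ..., varchar_col= 'new' */

      assert(std::strncmp(ref.ptr, "new", 3) == 0); /* reported wrong result */
      assert(copy.value == "old");                  /* intended behavior */
      return 0;
    }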
Post-push fix: broken build on Windows.
The problem is the min/max macros from windows.h,
which interfere with a template function called max.
Solution: ADD_DEFINITIONS(-DNOMINMAX)
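A minimal reproduction of the break (illustrative):

    #include <windows.h>   /* defines max() as a macro unless NOMINMAX is set */
    #include <algorithm>

    int f(int a, int b)
    {
      /* without -DNOMINMAX the preprocessor rewrites this call using the
         max() macro and the code no longer compiles; defining NOMINMAX
         (or writing (std::max)(a, b)) avoids the macro expansion */
      return std::max(a, b);
    }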
DATABASE WHEN USING TABLE ALIASES
Issue:
-----
When using table aliases for deleting, MySQL checks
privileges against the current database and not the
privileges on the actual table or the database in which
the table resides.
SOLUTION:
---------
While checking privileges for multi-deletes,
correspondent_table should be used, since it points to the
correct table and database.
YASSL-COMPILED SERVER/CLIENT
Description: thread_pool.thread_pool_connect hangs when the server and
client are compiled with yaSSL.
Bug-fix: Test thread_pool.thread_pool_connect was temporarily disabled for
yaSSL. However, now that yaSSL is fixed, it runs OK. The bug was
introduced by one of the yaSSL updates: set_current was not working for
i == 0. Now this is fixed. yaSSL is updated to 2.3.7d.
INITIAL STARTUP
Description: When using mysql_ssl_rsa_setup to get an SSL-enabled server
(after running mysqld --initialize), the server doesn't answer
"mysqladmin ping" properly for the first 30 seconds after startup.
Bug-fix: yaSSL validated the certificate date to the minute but should have
validated it to the second. This is why SSL on the server side was not up right
away after new certs were created with mysql_ssl_rsa_setup. The fix for
that was submitted by Todd. yaSSL was updated to 2.3.7c.
Affects at least 5.6 and 5.7. In the customer case, the "client" happened to
be a replication slave, and therefore the customer's server crashed.
Bug-fix:
The bug was in yaSSL. Todd Ouska has provided us with the patch.
(cherry picked from commit 42ffa91aad898b02f0793b669ffd04f5c178ce39)
MYSQLADMIN -U ROOT -P
DESCRIPTION
===========
Crash occurs when no command is given while executing the
mysqladmin utility.
ANALYSIS
========
In mask_password() the final write to the array 'temp_argv'
is done without checking whether the corresponding index 'argc'
is valid (non-negative). In case it is negative
(which happens when this function is called with 'argc'=0),
it may cause a SEGFAULT. Logically, in such a case
mask_password() should not have been called, as it would do
nothing useful.
FIX
===
mask_password() is now called only after checking 'argc'. The
function is called only when 'argc' is positive;
otherwise the process terminates.
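A sketch of the guard (the names follow the analysis above; the calling
convention and error-handling details are illustrative):

    /* in main(), before any command processing */
    if (argc <= 0)
    {
      /* no command given: report usage and terminate instead of
         calling mask_password() with an invalid 'argc' */
      usage();
      exit(1);
    }
    temp_argv= mask_password(argc, &argv);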
Problem :
---------
The specific issue reported in this bug is with a range/list column
value that is allocated and initialized by evaluating the partition
expression (item tree) during execution. After evaluation the range
list value is marked fixed [part_column_list_val]. During the next
execution, we don't re-evaluate the expression and use the old value,
since it is marked fixed.
Solution :
----------
One way to solve the issue is to mark all column values as not fixed
during clone, so that the expression is always re-evaluated once we
attempt partition_info::fix_column_value_functions() after cloning
the part_info object during execution of DDL on a partitioned table.
Reviewed-by: Jimmy Yang <Jimmy.Yang@oracle.com>
Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9424
- The table type is MYSQL
- The query WHERE clause includes an indexed column
- The WHERE clause contains a < or <= operator on this column
Change version date
modified: storage/connect/ha_connect.cc
modified: storage/connect/tabmysql.cpp
Add visual studio 2013 files to ignore
modified: .gitignore
The valid minimum value for query_cache_min_res_unit is 512. But an attempt
to set a value greater than or equal to ULONG_MAX (the max value)
resulted in a query_cache_min_res_unit value of 0. This results in a
crash while searching for a memory block smaller than the valid
minimum value to store query results.
Free memory blocks in the query cache are stored in bins according
to their size. The bins are stored in size-descending order.
For a memory block request the appropriate bin is searched using a
binary search algorithm. The minimum free memory block request
expected is 512 bytes, and the appropriate bin is searched for a block
greater than or equal to 512 bytes.
Because of the bug, query_cache_min_res_unit is set to 0. Due
to this, memory blocks smaller than the minimum size in the free
memory block bins may be requested. The search for a bin
for this invalid input size fails and returns a garbage index.
Accessing the bins array element with this index causes the issue
reported.
The valid value range for query_cache_min_res_unit is
512 to ULONG_MAX (when the value is greater than the max allowed value,
the max allowed value is used, i.e. ULONG_MAX). While setting the result unit
block size (query_cache_min_res_unit), the size is memory-aligned by
using the macro ALIGN_SIZE. The ALIGN_SIZE logic is as below:
(input_size + sizeof(double) - 1) & ~(sizeof(double) - 1)
For an unsigned long variable, when input_size is greater than or
equal to ULONG_MAX-(sizeof(double)-1), the above expression
results in the value 0.
Fix:
-----
Compare the value set for query_cache_min_res_unit with the max
aligned value that can be stored in a ulong variable.
If it is greater, set it to the max aligned value for
the ulong type.
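A self-contained demonstration of the wrap-around and the clamp, with
ALIGN_SIZE written out as in the analysis above (on a build where ulong
and size_t have the same width):

    #include <cstdio>
    #include <climits>

    #define ALIGN_SIZE(A) (((A) + sizeof(double) - 1) & ~(sizeof(double) - 1))

    int main()
    {
      unsigned long requested= ULONG_MAX;
      /* ULONG_MAX + 7 wraps around to 6, and 6 & ~7 == 0 */
      std::printf("aligned= %lu\n", (unsigned long) ALIGN_SIZE(requested));

      /* the fix: clamp to the largest aligned ulong value first */
      unsigned long max_aligned= ULONG_MAX & ~(sizeof(double) - 1);
      if (requested > max_aligned)
        requested= max_aligned;
      std::printf("clamped= %lu\n", (unsigned long) ALIGN_SIZE(requested));
      return 0;
    }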
Analysis: The problem is that InnoDB does not have support for generating
a CURRENT_TIMESTAMP or constant default.
Fix: Add an additional check for whether the column has changed from NULL
to NOT NULL and the column default has changed. If this is the first column
definition whose SQL type is TIMESTAMP and it is defined as NOT NULL and
it has either a constant default or a function default, we must use the
"Copy" method for ALTER TABLE.
MULTIPLE THREADS
Description:- The utility "mysqlimport" does not use
multiple threads for execution with the option
"--use-threads". While importing multiple
files and multiple tables, "mysqlimport" uses a single
thread even if the number of threads is specified with the
"--use-threads" option.
Analysis:- This utility uses ifdef HAVE_LIBPTHREAD to check
for the libpthread library and, if it is defined, uses the
libpthread library for multithreading. Since HAVE_LIBPTHREAD
is not defined anywhere in the source, the "--use-threads"
option is silently ignored.
Fix:- "-DTHREADS" is added to the COMPILE_FLAGS, which
enables pthreads. The HAVE_LIBPTHREAD macro is removed.
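A sketch of the kind of build change (the CMake target name and file
location are assumptions):

    # client/CMakeLists.txt: compile mysqlimport with THREADS defined so
    # the multi-threaded code path is built in
    SET_TARGET_PROPERTIES(mysqlimport PROPERTIES COMPILE_FLAGS "-DTHREADS")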
- Removing use of calls to current_thd
- More DBUG_PRINT
- Code style changes
- Made some local functions static
- Ensure that calls to print_keyuse are locked with a mutex to get all lines in the same debug packet
SELECT ... WHERE XX IN (SELECT YY)
this was transformed to something like:
SELECT ... WHERE IF_EXISTS(SELECT ... HAVING XX=YY)
The bug was that for normal execution XX was fixed in the original outer SELECT context, while in PS it was fixed in the subquery context, and this confused the optimizer.
Fixed by ensuring that XX is always fixed in the outer context.
This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555, MDEV-7590, MDEV-7581 and MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset for re-execution, and this caused an overwrite of a random memory position.
The fix was to move non_agg_fields from select_lex to JOIN, which is properly reset.
The --gtid-ignore-duplicates option was not working correctly with row-based
replication. When a row event was completed, but before committing, there
was a small window where another multi-source SQL thread could wrongly try
to re-execute the same transaction, without properly ignoring the duplicate
GTID. This would lead to duplicate key error or out-of-order GTID error or
similar.
Thanks to Matt Neth for reporting this and giving an easy way to reproduce
the issue.