Logging slow stored procedures caused the slow log to record
very large lock times. The lock times were the result of a
negative number being cast to an unsigned integer.
The lock time appeared negative because one of the
measurement points was reset after execution,
causing it to change order with the start time of the
statement.
This bug is related to bug 47905, which in turn was
introduced because of a joint fix for bugs 12480, 12481, 12482 and 11587.
The fix is to reset only start_time before each statement
execution in an SP, while not resetting start_utime or
utime_after_lock, which are used for measuring the
performance of the SP. start_time is used to set the
timestamp on the replication event, which controls how
the slave interprets time functions like NOW().
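As a rough illustration of why the wrong reset order produces the huge logged values, here is a self-contained sketch (made-up variable names and values, not the server code): the lock time is a difference of microsecond counters, and a negative difference cast to an unsigned type becomes an enormous number.
  #include <cstdio>
  int main()
  {
    unsigned long long start_utime= 2000000;        // taken at statement start
    unsigned long long utime_after_lock= 2000500;   // taken once locks are acquired
    // Buggy order: start_utime is reset after execution, so it ends up later
    // than utime_after_lock and the unsigned subtraction wraps around.
    unsigned long long reset_start_utime= 3000000;
    printf("bogus lock time: %llu\n", utime_after_lock - reset_start_utime);
    // Fixed order: only start_time (the replication timestamp) is reset per
    // statement; start_utime is left alone, so the difference stays sane.
    printf("real lock time:  %llu\n", utime_after_lock - start_utime);
    return 0;
  }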
The problem is in the Item_func_isnull::update_used_tables() function:
a bracket is in the wrong place. Because of that, the isnull item is
erroneously treated as a const item. The fix is to put the brackets in
the right place.
mysql-test/r/func_isnull.result:
test case
mysql-test/t/func_isnull.test:
test case
sql/item_cmpfunc.h:
set brackets in the right place.
NET::skip_big_packet isn't defined for the embedded server,
hide it in such a case.
sql/sql_connect.cc:
Fix for bug #53912: Fails to build from source
- hide net.skip_big_packet for the embedded server,
as it isn't defined there.
Some of the server implementations don't support dates later
than 2038 due to the internal time type being 32 bit.
Added checks so that the server will refuse dates that cannot
be handled: either an error is thrown when setting the date at
runtime, or the server refuses to start or shuts down if
the system date cannot be stored in my_time_t.
Post-push fix.
There was a valgrind issue in the loop that checks whether there
are NULL fields in the UNIQUE KEY. In detail, on the last
iteration the server could read beyond the key_part array boundaries,
causing valgrind to output warnings.
We fix this by correcting the loop, i.e., moving the part that reads
from key_part inside the loop statement block. This way
the assignment is protected by the loop condition.
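A minimal sketch of the corrected loop shape (hypothetical types and names, not the actual server code): the read from the key_part array happens only while the loop condition holds, so the element one past the end is never touched.
  struct KeyPartSketch { bool maybe_null; };
  static bool key_has_nullable_part(const KeyPartSketch *key_part, unsigned parts)
  {
    // Buggy shape: advancing and dereferencing key_part outside the guarded
    // block could read key_part[parts] on the last iteration.
    // Fixed shape: the read is inside the block protected by 'i < parts'.
    for (unsigned i= 0; i < parts; i++)
    {
      if (key_part[i].maybe_null)
        return true;
    }
    return false;
  }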
When using unique keys with nullable parts in RBR, the slave can
choose the wrong row to update. This happens because a table with
a unique key containing nullable parts cannot strictly guarantee
uniqueness. As stated in the manual, for all engines, a UNIQUE
index allows multiple NULL values for columns that can contain
NULL.
We fix this at the slave by extending the checks before assuming
that the row found through a unique index is the correct
one. This means that when a record (R) is fetched from the storage
engine and a key that is not primary (K) is used, the server does
the following (sketched after this list):
- If K is unique and has no nullable parts, it returns R;
- Otherwise, if any field in the before image that is part of K
is null, it does an index scan;
- If there is no NULL field in the BI part of K, then it returns R.
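The decision above as a compact sketch (reduced to two flags with hypothetical names; not the actual slave code in log_event.cc):
  enum FindRowAction { RETURN_FETCHED_ROW, DO_INDEX_SCAN };
  FindRowAction choose_row_action(bool key_unique_without_nullable_parts,
                                  bool before_image_has_null_in_key)
  {
    if (key_unique_without_nullable_parts)
      return RETURN_FETCHED_ROW;   // uniqueness strictly guaranteed, R is correct
    if (before_image_has_null_in_key)
      return DO_INDEX_SCAN;        // NULLs break uniqueness, fall back to a scan
    return RETURN_FETCHED_ROW;     // no NULL in the BI part of K, return R
  }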
A side change: renamed the existing test case file and added a
test case covering the changes in this patch.
The Field_time::get_date method does not initialize the MYSQL_TIME::time_type field.
The fix is to initialize this field.
mysql-test/r/type_time.result:
test case
mysql-test/t/type_time.test:
test case
sql/field.cc:
- use Field_time::get_time in Field_time::get_date
- removed duplicated code in the Field_time::get_date method
and .tar.gz, windows vs linux..
On Intel x86 machines index selection by the MySQL query
optimizer could sometimes depend on the compiler version and
optimization flags used to build the server binary.
The problem was a result of a known issue with floating point
calculations on x86: since the internal FPU precision (80-bit)
differs from the precision used by programs (32-bit float or 64-bit
double), the result of calculating a complex expression may
depend on how FPU registers are allocated by the compiler and
whether intermediate values are spilled from FPU to memory. In
this particular case compiler versions and optimization flags
had an effect on cost calculation when choosing the best index
in best_access_path().
A possible solution to this problem which has already been
implemented in mysql-trunk is to limit FPU internal precision
to 64 bits. So the fix is a backport of the relevant code to
5.1 from mysql-trunk.
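For reference, the backported code is essentially the standard glibc recipe for limiting x87 precision; a sketch follows (the exact preprocessor guards used in mysqld.cc may differ):
  #if defined(__i386__) && defined(HAVE_FPU_CONTROL_H)
  #include <fpu_control.h>
  static void set_fpu_double_precision(void)
  {
    fpu_control_t cw;
    _FPU_GETCW(cw);
    cw= (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;   /* limit to 64-bit (double) precision */
    _FPU_SETCW(cw);
  }
  #endif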
configure.in:
Configure check for fpu_control.h
mysql-test/r/explain.result:
Test case for bug #48537.
mysql-test/t/explain.test:
Test case for bug #48537.
sql/mysqld.cc:
Backport of the code to switch FPU on x86 to 64-bit precision.
without FOR UPDATE is causing a lock".
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression which was introduced when fixing
bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. When
statement/mixed binary logging was enabled, this
type of lock was converted to a TL_READ_NO_INSERT lock at
open_tables() time and caused the InnoDB engine to acquire
shared locks on reads from these tables. Although in some
cases such behavior was correct (e.g. for subqueries in
DELETE), in the case of SELECT it caused unnecessary locking.
This patch implements a minimal version of the fix for the
specific problem described in the bug report, which is supposed
to be not too risky for pushing into the 5.1 tree.
The 5.5 tree already contains a more appropriate solution
which also addresses other related issues like bug 53921
"Wrong locks for SELECTs used stored functions may lead
to broken SBR".
This patch tries to solve the problem by ensuring that the
TL_READ_DEFAULT lock, which is set in the parser for
tables participating in subqueries, is interpreted at
open_tables() time as either TL_READ_NO_INSERT or TL_READ.
TL_READ is used only if we know that this is a SELECT
and that this particular table is not used by a stored
function.
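A simplified sketch of that interpretation (hypothetical helper, not the exact server code; the real read_lock_type_for_table() in sql_base.cc works from LEX and TABLE_LIST pointers and also checks how binary logging is configured):
  enum lock_choice { USE_TL_READ, USE_TL_READ_NO_INSERT };
  lock_choice interpret_tl_read_default(bool stmt_or_mixed_binlog_on,
                                        bool statement_is_select,
                                        bool table_in_prelocking_list)
  {
    if (!stmt_or_mixed_binlog_on)
      return USE_TL_READ;              // nothing to serialize for the binary log
    if (statement_is_select && !table_in_prelocking_list)
      return USE_TL_READ;              // plain SELECT, not reachable via a stored function
    return USE_TL_READ_NO_INSERT;      // keep reads serializable for statement-based logging
  }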
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in the locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT ... IN SHARE MODE.
In 4.1 (as well as in 5.0 and 5.1 before the fix for bug 39843)
the server would use a snapshot InnoDB read for subqueries
in SELECT FOR UPDATE and SELECT ... IN SHARE MODE statements,
regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking
read), he/she could request it explicitly by providing a FOR
UPDATE/IN SHARE MODE clause for each individual subquery.
The patch for bug 39843 broke this behaviour (which was not
documented or tested), and started to use locking reads for
all subqueries in SELECT ... FOR UPDATE/IN SHARE MODE.
This patch restores 4.1 behaviour.
This patch should be mostly null-merged into 5.5 tree.
mysql-test/include/check_concurrent_insert.inc:
Added an auxiliary script which checks whether a statement
reading a table allows concurrent inserts into it.
mysql-test/include/check_no_concurrent_insert.inc:
Added an auxiliary script which checks that a statement
reading a table doesn't allow concurrent inserts into it.
mysql-test/include/check_no_row_lock.inc:
Added an auxiliary script which checks that a statement
reading a table doesn't take locks on its rows.
mysql-test/include/check_shared_row_lock.inc:
Added an auxiliary script which checks whether a statement
reading a table takes shared locks on some of its rows.
mysql-test/r/bug39022.result:
After bug #46947 'Embedded SELECT without FOR UPDATE is
causing a lock' was fixed, the test case for bug 39022 had to
be adjusted in order to trigger the execution path on which the
original problem was encountered.
mysql-test/r/innodb_mysql_lock2.result:
Added coverage for handling of locking in various cases when
we read data from InnoDB tables (includes test case for
bug #46947 'Embedded SELECT without FOR UPDATE is causing a
lock').
mysql-test/r/lock_sync.result:
Added coverage for handling of locking in various cases when
we read data from MyISAM tables.
mysql-test/t/bug39022.test:
After bug #46947 'Embedded SELECT without FOR UPDATE is
causing a lock' was fixed, the test case for bug 39022 had to
be adjusted in order to trigger the execution path on which the
original problem was encountered.
mysql-test/t/innodb_mysql_lock2.test:
Added coverage for handling of locking in various cases when
we read data from InnoDB tables (includes test case for
bug #46947 'Embedded SELECT without FOR UPDATE is causing a
lock').
mysql-test/t/lock_sync.test:
Added coverage for handling of locking in various cases when
we read data from MyISAM tables.
sql/mysql_priv.h:
The read_lock_type_for_table() function now takes pointers to
LEX and TABLE_LIST objects as its arguments, since to
correctly determine the lock type it needs to know what
statement is being performed and whether the table element for
which the lock type is determined belongs to the prelocking list.
sql/sql_base.cc:
Changed read_lock_type_for_table() to return the weaker TL_READ
type of lock when we are executing a SELECT (and so
won't update tables directly) and the table doesn't belong to the
statement's prelocking list and thus can't be used by a
stored function. It is OK to do so since in this case the table
won't be used by a statement or function call which will be
written to the binary log, so the serializability requirements
for it can be relaxed.
One result of this change is that SELECTs on InnoDB
tables no longer take shared row locks on tables which
are used in subqueries (i.e. bug #46947 is fixed).
Another result is that for similar SELECTs on MyISAM tables
concurrent inserts are allowed.
In order to implement this change, the signature of
the read_lock_type_for_table() function was changed to
take pointers to LEX and TABLE_LIST objects.
sql/sql_update.cc:
The read_lock_type_for_table() function now takes pointers to
LEX and TABLE_LIST objects as its arguments, since to
correctly determine the lock type it needs to know what
statement is being performed and whether the table element for
which the lock type is determined belongs to the prelocking list.
Fixed failing test innodb.innodb-autoinc.test
Enabled innodb test suite
mysql-test/mysql-test-run.pl:
Enabled innodb test suite
mysql-test/r/innodb-autoinc.result:
Removed test as it exists in suite innodb
mysql-test/suite/innodb/t/disabled.def:
Removed innodb-autoinc
mysql-test/suite/innodb/t/innodb-autoinc.test:
Update to be able to run with plugin
mysql-test/t/innodb-autoinc.test:
Removed test as it exists in suite innodb
sql/filesort.cc:
Removed unused variable
sql/slave.cc:
Removed compiler warnings
storage/pbxt/src/ha_pbxt.cc:
Removed unused variable
storage/xtradb/dict/dict0crea.c:
Fixed compiler warning about unsigned comparison
support-files/compiler_warnings.supp:
Disabled some irrelevant warnings
There are two problems:
1. In the simplify_joins function we calculate table dependencies. If the STRAIGHT_JOIN hint
is used for the whole SELECT, we do not take it into account and as a result some dependencies
might be lost. This leads to an incorrect table order being returned by the
join_tab_cmp_straight() function.
2. make_join_statistics() calculates the transitive closure for the relations a particular
JOIN_TAB is 'dependent on'.
We aggregate the dependent table_map of a JOIN_TAB by adding dependencies from other
tables which we depend on. However, this may also make new dependencies
visible after we have completed processing a certain JOIN_TAB.
Both problems affect condition pushdown; as a result a condition might be pushed
into the wrong table (leading to a crash) or even omitted (leading to a wrong result).
The fix:
1. Use the modified 'transitive closure' algorithm provided by Ole John Aske
(see the sketch after this note).
2. Update table dependencies in simplify_joins according to the
global STRAIGHT_JOIN hint.
Note: the patch also fixes bugs 46091 & 51492
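A sketch of the closure step referred to in point 1 (hypothetical shape, not the server code): dependencies of dependencies are folded in repeatedly until a fixed point, so relations that only become visible after a later JOIN_TAB has been processed are not lost.
  #include <vector>
  #include <cstdint>
  #include <cstddef>
  typedef uint64_t table_map;   // one bit per table, as in the server
  void close_dependencies(std::vector<table_map> &dependent)
  {
    bool changed= true;
    while (changed)
    {
      changed= false;
      for (size_t i= 0; i < dependent.size(); i++)
      {
        table_map deps= dependent[i];
        for (size_t j= 0; j < dependent.size(); j++)
          if (deps & (table_map(1) << j))   // table i depends on table j ...
            deps|= dependent[j];            // ... so it inherits j's dependencies
        if (deps != dependent[i])
        {
          dependent[i]= deps;
          changed= true;
        }
      }
    }
  }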
mysql-test/r/join_outer.result:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
1. Use the modified 'transitive closure' algorithm provided by Ole John Aske
2. Update table dependencies in simplify_joins according to the
global STRAIGHT_JOIN hint.
Fixed some bugs introduced in 5.1.47
Disabled some tests until we have merged with the latest XtraDB
configure.in:
Added a check for whether valgrind/memcheck.h exists
storage/pbxt/src/ha_pbxt.cc:
LOCK_plugin is no longer locked in init
bitmap_is_set(table->read_set, field_index))
An UPDATE on an InnoDB table modifying the same index that is used
to satisfy the WHERE condition could trigger a debug assertion
under some circumstances.
Since for engines with the HA_PRIMARY_KEY_IN_READ_INDEX flag
set the results of an index scan on a secondary index are extended
with the primary key value, if a query involves only columns from
the primary key and a secondary index, the latter is considered
to be covering.
That tricks mysql_update() into marking for reading only columns
from the secondary index when it does an index scan to retrieve
rows to update in case a part of that key is also being
updated. However, there may be other columns in the WHERE clause
that are part of the primary key, but not of the secondary one.
What we actually want to do in this case is to add the index
columns to the existing WHERE columns bitmap rather than
replace it.
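A tiny illustration of the difference (generic bit masks, not the server's MY_BITMAP API): replacing the read set drops the WHERE columns that come from the primary key, while OR-ing the index columns in keeps both.
  #include <cstdint>
  typedef uint64_t column_bitmap;   // one bit per column, purely illustrative
  void mark_index_columns_replace(column_bitmap *read_set, column_bitmap index_cols)
  {
    *read_set= index_cols;          // buggy: previously marked WHERE columns are lost
  }
  void mark_index_columns_add(column_bitmap *read_set, column_bitmap index_cols)
  {
    *read_set|= index_cols;         // fixed: index columns are added to the existing set
  }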
mysql-test/r/innodb_mysql.result:
Test case for bug #53830.
mysql-test/t/innodb_mysql.test:
Test case for bug #53830.
sql/sql_update.cc:
Add index columns to the read_set bitmap, don't replace it.
sql/table.cc:
Added a new add_read_columns_used_by_index() function to
st_table.
sql/table.h:
Added a new add_read_columns_used_by_index() function to
st_table.
Problem: a user with the SELECT privilege on some table may dump another table
by issuing a COM_TABLE_DUMP command, due to a missing check of the table name.
Fix: check the table name.
sql/sql_parse.cc:
Fix for bug #53907: Table dump command can be abused to dump arbitrary tables.
- check the given table name when performing the COM_TABLE_DUMP command.
tests/mysql_client_test.c:
Fix for bug #53907: Table dump command can be abused to dump arbitrary tables.
- test case.
The problem was that a wrong error was reported.
Fixed by adding a new error which better explains the problem.
mysql-test/r/partition_error.result:
Bug#49161: Out of memory; restart server and try again (needed 2 bytes)
Updated test result
mysql-test/t/partition_error.test:
Bug#49161: Out of memory; restart server and try again (needed 2 bytes)
Added test case
sql/ha_partition.cc:
Bug#49161: Out of memory; restart server and try again (needed 2 bytes)
Better error message. (used ER_UNKNOWN_ERROR to avoid merge
problems in mysql-trunk+)
This fixes a recently introduced regression, where a variable is
not defined for the embedded server. Although the embedded server
is not supported in 5.0, make it at least compile.
at mf_iocache.c, line 1722
The slave crashed while two threads, the IO thread and a user thread,
raced for the same mutex (the append_buffer_lock protecting the
relay log's IO_CACHE). The IO thread was trying to flush the
cache, and for that it was grabbing the append_buffer_lock.
However, the other thread was closing and reopening the relay log
when the IO thread tried to lock. Closing and reopening the log
includes destroying and reinitialising the IO_CACHE
mutex. Therefore, the IO thread tried to lock a destroyed mutex.
We fix this by backporting the patch for BUG#50364, which fixed this
bug in MySQL server 5.5+. The patch deploys the missing
synchronization when flush_master_info is called and the relay
log is flushed by the IO thread. In detail, the patch backports the
following revision (from mysql-trunk):
- luis.soares@sun.com-20100203165617-b1yydr0ee24ycpjm
This patch already includes the post-push fix also in BUG#50364:
- luis.soares@sun.com-20100222002629-0cijwqk6baxhj7gr
data directory name command
The check_db_name function has been modified to validate tails of
#mysql50#-prefixed database names for compliance with MySQL 5.0
database name encoding rules (the check_table_name function call
has been reused).
mysql-test/r/renamedb.result:
Updated test case.
mysql-test/r/upgrade.result:
Test case for bug #53804.
mysql-test/t/renamedb.test:
Updated test case.
mysql-test/t/upgrade.test:
Test case for bug #53804.
sql/mysql_priv.h:
Bug #53804: serious flaws in the alter database .. upgrade
data directory name command
The check_mysql50_prefix function has been added.
sql/sql_table.cc:
Bug #53804: serious flaws in the alter database .. upgrade
data directory name command
- The check_mysql50_prefix function has been added.
- The check_n_cut_mysql50_prefix function has been refactored
to share code with the new check_mysql50_prefix function.
sql/table.cc:
Bug #53804: serious flaws in the alter database .. upgrade
data directory name command
The check_db_name function has been modified to validate tails of
#mysql50#-prefixed database names for compliance with MySQL 5.0
database name encoding rules.
Item_hex_string::Item_hex_string
The status of memory allocation in the Lex_input_stream (called
from the Parser_state constructor) was not checked, which led to
a parser crash in case of an out-of-memory error.
The solution is to introduce a new init() member function in
Parser_state and Lex_input_stream so that the status of memory
allocation can be returned to the caller.
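A minimal sketch of the pattern (hypothetical classes, not the actual server declarations): the fallible allocation moves out of the constructor into init(), which reports failure to the caller instead of leaving a half-built object that crashes later.
  #include <cstdlib>
  class Lex_input_stream_sketch
  {
    char *buf;
  public:
    Lex_input_stream_sketch() : buf(NULL) {}
    bool init(size_t length)                       // true means failure
    {
      buf= static_cast<char*>(malloc(length + 1));
      return buf == NULL;
    }
    ~Lex_input_stream_sketch() { free(buf); }
  };
  class Parser_state_sketch
  {
    Lex_input_stream_sketch lip;
  public:
    bool init(size_t query_length) { return lip.init(query_length); }
  };
  // Caller: if (parser_state.init(query_length)) return true;  // report OOM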
mysql-test/r/error_simulation.result:
Added a test case for bug #42064.
mysql-test/t/error_simulation.test:
Added a test case for bug #42064.
mysys/my_alloc.c:
Added error injection code for the regression test.
mysys/my_malloc.c:
Added error injection code for the regression test.
mysys/safemalloc.c:
Added error injection code for the regression test.
sql/event_data_objects.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/mysqld.cc:
Added error injection code for the regression test.
sql/sp.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/sql_lex.cc:
Moved memory allocation from constructor to the separate init()
member function.
Added error injection code for the regression test.
sql/sql_lex.h:
Moved memory allocation from constructor to the separate init()
member function.
sql/sql_parse.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/sql_partition.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/sql_prepare.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/sql_trigger.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/sql_view.cc:
Use the new init() member function of Parser_state and check
its return value to handle memory allocation failures.
sql/thr_malloc.cc:
Added error injection code for the regression test.
The server crashes on 64-bit Linux with a 'double free or corruption'
message; on 32-bit, mysql-test-run silently fails at the bootstrap
stage. The problem is that FreeState() is called twice
for the init_settings struct in the _db_end_ function.
The fix is to remove the superfluous FreeState() call.
Additional fix:
fixed a discrepancy in the result file when the
debug & valgrind options are enabled
for MTR.
dbug/dbug.c:
The problem is that FreeState() is called twice
for init_settings struct in _db_end_ function.
The fix is to remove superfluous FreeState() call.
mysql-test/r/variables_debug.result:
fixed discrepancy of result file when
debug & valgrind options are enabled
for MTR.
mysql-test/t/variables_debug.test:
fixed discrepancy of result file when
debug & valgrind options are enabled
for MTR.
sql/set_var.cc:
fixed discrepancy of result file when
debug & valgrind options are enabled
for MTR.
This patch fixes two problems, described as follows:
1 - If there is an ongoing transaction and a temporary table is created or
dropped, any failed statement that follows the "create" or "drop" commands
triggers a rollback, and as a consequence the slave will go out of sync because
the binary log will have a wrong sequence of events.
To fix the problem, we changed the expression that evaluates when the
cache should be flushed after the rollback of either a statement or a
transaction.
2 - When a "CREATE TEMPORARY TABLE SELECT * FROM" was executed,
OPTION_KEEP_LOG was not set in thd->options. For that reason, if
the transaction had updated only transactional engines and was rolled
back at the end (e.g. due to a deadlock), the changes were not written
to the binary log, including the creation of the temporary table.
To fix the problem, we now set OPTION_KEEP_LOG in thd->options
when a "CREATE TEMPORARY TABLE SELECT * FROM" is executed.
sql/log.cc:
Reorganized the code based on the following functions:
- bool ending_trans(const THD* thd, const bool all);
- bool trans_has_updated_non_trans_table(const THD* thd);
- bool trans_has_no_stmt_committed(const THD* thd, const bool all);
- bool stmt_has_updated_non_trans_table(const THD* thd);
sql/log.h:
Added functions to organize the code in log.cc.
sql/log_event.cc:
Removed the OPTION_KEEP_LOG since it must be used only when
creating and dropping temporary tables.
sql/log_event_old.cc:
Removed the OPTION_KEEP_LOG since it must be used only when
creating and dropping temporary tables.
sql/sql_parse.cc:
When a "CREATE TEMPORARY TABLE SELECT * FROM" was executed the
OPTION_KEEP_LOG was not set into the thd->options.
To fix the problem, we have set the OPTION_KEEP_LOG into the
thd->options when a "CREATE TEMPORARY TABLE SELECT * FROM"
is executed.
Bug #50087 Interval arithmetic for Event_queue_element is not portable.
Subtraction of two unsigned months yielded a (very large) positive value.
Conversion of this to a signed value was not necessarily well defined.
Solution: do the subtraction on signed values.
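A small demonstration of the portability hazard (standalone example, not the event scheduler code): unsigned subtraction wraps around, and converting the wrapped value back to a signed type is not guaranteed to give the intended negative number.
  #include <cstdio>
  int main()
  {
    unsigned int month_begin= 5, month_now= 7;
    unsigned int diff_unsigned= month_begin - month_now;        // wraps to 4294967294
    long diff_signed= (long) month_begin - (long) month_now;    // -2, as intended
    printf("unsigned: %u  signed: %ld\n", diff_unsigned, diff_signed);
    return 0;
  }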
mysql-test/r/events_scheduling.result:
Add test case.
mysql-test/t/events_scheduling.test:
Add test case.
sql/event_data_objects.cc:
Convert month to signed before doing the subtraction.
Analysis showed that when accessing the I_S table
ROUTINES we perform unnecessary allocations
with the get_field() function for every processed row, which
in turn causes significant memory growth.
The fix is to avoid the use of get_field().
sql/sql_show.cc:
The store_schema_proc() function is changed
to avoid the use of the get_field() function.
mode
Post-push fix after backporting the patch to 5.1-bugteam:
1 - changed the name of some variables to be equivalent to pe.
2 - fixed the patch to mark a statement as unsafe when both a
self-logging engine and a regular engine are accessed and one of them
is updated.
ha_myisam::index_first(uchar*)") at assert.c:81
Single-table DELETE crash/assertion similar to single-table
UPDATE bug 14272.
Same resolution as for bug 14272:
Don't run an index scan when we should use a quick select.
This could cause failures because there are table handlers (like federated)
that support quick select scanning but do not support index scanning.
mysql-test/r/delete.result:
Test case for bug #53450.
mysql-test/t/delete.test:
Test case for bug #53450.
sql/sql_delete.cc:
Bug #53450: Crash / assertion "virtual int
ha_myisam::index_first(uchar*)") at assert.c:81
The mysql_delete function has been modified so that it does not use
init_read_record_idx instead of init_read_record for the
quick select.
- INSERT with RAND() doesn't require row based logging again
- Fixed some bugs in opt_range() where table->key_read was wrongly used
.bzrignore:
Ignore new xtstat binary
mysql-test/r/index_merge_myisam.result:
Update results (old result was wrong)
mysql-test/suite/binlog/r/binlog_stm_binlog.result:
Added drop table first
mysql-test/suite/binlog/r/binlog_stm_unsafe_warning.result:
Added test for when RAND() requires row based logging
mysql-test/suite/binlog/t/binlog_stm_binlog.test:
Added drop table first
mysql-test/suite/binlog/t/binlog_stm_unsafe_warning.test:
Added test for when RAND() requires row based logging
scripts/make_binary_distribution.sh:
Removed typo from last commit
sql/item_create.cc:
Don't require row based logging when using RAND() with INSERT
sql/opt_range.cc:
Revert wrong patch from Oracle:
- As QUICK_RANGE_SELECT uses its own 'file' handler for the tables, one can't use 'table->key_read' as a flag to detect whether index only read (keyread) is used or not
- Don't set keyread if keyread is already enabled
- Don't disable key read, if we didn't enable it ourselves
- Simplify code (and ensure that we do proper cleanup of index only read)
sql/opt_range.h:
Added flags to detect if the range optimizer enabled index only read (key read) or not
sql/opt_sum.cc:
Use our more optimized macros
sql/sql_lex.h:
Added a 'readable' function to check whether we are in a subquery function or not (i.e. not a normal query or a subquery in the FROM clause)
sql/sql_select.cc:
Use our more optimized keyread macros
Added asserts early
Simplified code in eliminate_item_equal()
Fixed substitute_for_best_equal_field() so that it doesn't core dump under out-of-memory conditions.
Removed an unneeded test for 'field->maybe_null()'
Replaced master_unit()->item with is_subquery_function() (more readable)
sql/sql_update.cc:
Use our more optimized keyread macros
sql/table.cc:
Use our more optimized keyread macros
sql/table.h:
Use separate functions to enable/disable Index only reads
- Safer, more readable, better logging and faster.
NULL from outer join query
Problem: when optimising MIN/MAX() queries without a GROUP BY clause
by replacing the aggregate expression with a constant, we may set it
to NULL, disregarding the fact that outer joins may be involved.
Fix: don't replace MIN/MAX() with NULL if there are outer joins.
Note: the fix itself is just
- if (!count)
+ if (!count && !outer_tables)
set to NULL
The rest of the patch eliminates repeated code to improve speed
and to ease maintenance of the code.
mysql-test/r/group_by.result:
Fix for bug#52051: Aggregate functions incorrectly returns
NULL from outer join query
- test result.
mysql-test/t/group_by.test:
Fix for bug#52051: Aggregate functions incorrectly returns
NULL from outer join query
- test case.
sql/opt_sum.cc:
Fix for bug#52051: Aggregate functions incorrectly returns
NULL from outer join query
- when optimising MIN/MAX() queries without a GROUP BY clause by
replacing them with a constant, take into account that
there may be outer joins involved.
- repeated code for the MIN/MAX optimization in opt_sum_query() is
eliminated by introducing new functions that read MIN/MAX values
using an index and by combining the MIN and MAX cases into one.
update statements
Only SELECT statements report any examined rows in the slow
log. Slow UPDATE, DELETE and INSERT statements report 0 rows
examined, unless the statement has a condition including a
SELECT substatement.
This patch adds counting of examined rows for the UPDATE and
DELETE statements. An INSERT ... VALUES statement will still
not report any rows as examined.
sql/sql_class.h:
Added more docs for THD::examined_row_count.
sql/sql_delete.cc:
Add incrementing thd->examined_row_count.
sql/sql_update.cc:
Add incrementing thd->examined_row_count.
MySQL handles the join syntax "JOIN ... USING( field1,
... )" and natural joins by building the same parse tree as
a corresponding join with an "ON t1.field1 = t2.field1 ..."
expression would produce. This parse tree was not cleaned up
properly in the following scenario. If a thread tries to
lock some tables and finds that the tables were dropped and
re-created while waiting for the lock, it cleans up column
references in the statement by means of a per-statement free
list. But if the statement was part of a stored procedure,
column references on the stored procedure's free list
weren't cleaned up and thus contained pointers to freed
objects.
Fixed by adding a call to clean up the current prepared
statement's free list.
This is a backport from MySQL 5.1
remember range endpoints
The Loose Index Scan optimization keeps track of a sequence
of intervals. For the current interval it maintains the
current interval's endpoints. But the maximum endpoint was
not stored in the SQL layer; rather, it relied on the
storage engine to retain this value in-between reads. By
coincidence this holds for MyISAM and InnoDB. Not for the
partitioning engine, however.
Fixed by making the key values iterator
(QUICK_RANGE_SELECT) keep track of the current maximum endpoint.
This is also more efficient as we save a call through the
handler API in case of open-ended intervals.
The code to calculate endpoints was extracted into
separate methods in QUICK_RANGE_SELECT, and it was possible to
get rid of some code duplication as part of the fix.
The MYSQL_BIN_LOG m_table_map_version member and its associated
functions were not used in the logic of binlogging and replication;
this patch removes all related code.
sql/log.cc:
removed unused m_table_map_version variable and functions
sql/log.h:
removed unused m_table_map_version variable and functions
sql/log_event.h:
Removed unused LOG_EVENT_UPDATE_TABLE_MAP_VERSION_F flag
sql/sql_class.cc:
Removed unused LOG_EVENT_UPDATE_TABLE_MAP_VERSION_F flag
sql/sql_load.cc:
Removed unused LOG_EVENT_UPDATE_TABLE_MAP_VERSION_F flag
sql/table.cc:
removed unused table_map_version variable
sql/table.h:
removed unused table_map_version variable
When using a non-transactional table (t1) on the master
and with autocommit disabled, no COMMIT is recorded
in the binary log ending the statement. Therefore, if
the slave has t1 in a transactional engine, then it will
be as if a transaction is started but never ends. This is
actually BUG#29288 all over again.
We fix this by cherry-picking the cset for BUG#29288, which
was pushed to a later MySQL version. The revision picked
was: mats@sun.com-20090923094343-bnheplq8n95opjay.
Additionally, a test case for covering the scenario depicted
in the bug report is included in this cset.
The fix actually reverts the change introduced
by the patch for bug 51494.
The fact is that the patches for bugs 52177 & 48419
fix bugs 51494 & 50575 as well.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
reverted wrong fix for bug 51494
truncates text/blob to 766 chars
mysqldump and SELECT ... INTO OUTFILE truncated long BLOB/TEXT
values to size of 766 bytes (MAX_FIELD_WIDTH or 255 * 3 + 1).
The select_export::send_data method has been modified to
reallocate a conversion buffer for long field data.
mysql-test/r/mysqldump.result:
Test case for bug #53088.
mysql-test/r/outfile_loaddata.result:
Test case for bug #53088.
mysql-test/t/mysqldump.test:
Test case for bug #53088.
mysql-test/t/outfile_loaddata.test:
Test case for bug #53088.
sql/sql_class.cc:
Bug #53088: mysqldump with -T & --default-character-set set
truncates text/blob to 766 chars
The select_export::send_data method has been modified to
reallocate a conversion buffer for long field data.
greedy_search optimizer_search_depth=0
The algorithm inside restore_prev_nj_state failed to
properly update the counters within the NESTED_JOIN
tree. The counter was decremented each time a table in the
node was removed from the QEP; the correct thing to do is
to decrement it only when the last table in the child node
is removed from the plan. This led to node counters
getting negative values, and the plan thus appeared
impossible. An assertion caught this.
Fixed by not recursing up the tree unless the last table in
the join nest node is removed from the plan.
Bug#53417 my_getwd() makes assumptions on the buffer sizes which not always hold true
The mysys library contains many functions for rewriting file paths. Most of these
functions make implicit assumptions about the buffer sizes they write to. If a path is passed
to my_realpath() it will propagate to my_getwd(), which assumes that the buffer holding
the path name is larger than 2 bytes. This is not true in all cases.
In the special case where a VARBIN_ITEM is passed as an argument to the LOAD_FILE function
this can lead to a crash.
This patch fixes the issue by introducing more safeguards against buffer overruns.
This is the 5.1 merge and extension of the fix.
The server was happily accepting paths in table names in all places a table
name is accepted (e.g. in a SELECT). This allowed any user with some
privilege over some database to read all tables in all databases on any
MySQL server instance whose file system the server has access to.
Fixed by:
1. making sure no path elements are allowed in a quoted table name when
constructing the path (note that the path symbols are still valid in table names
when they're properly escaped by the server).
2. checking the #mysql50#-prefixed names the same way they're checked for
path elements in MySQL 5.0.
When issuing a 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER' statement, the previous
position along with the new position is dumped into the error log. Namely,
the following information is printed out: skip_counter, group_relay_log_name
and group_relay_log_pos.
When issuing a 'CHANGE MASTER TO' statement, key elements of the previous
state, namely the host, port, the master_log_file and the master_log_pos
are dumped into the error log.
Iterative patch improvement. Previously committed patch
caused wrong result on Windows. The previous patch also
broke secure_file_priv for symlinks since not all file
paths which must be compared against this variable are
normalized using the same norm.
The server variable opt_secure_file_priv wasn't
normalized properly and caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to do different interpretations of the
--secure-file-priv option.
The patch moves code to the server initialization
routines so that the path always is normalized
once and only once.
It was also intended that setting the option
to an empty string should be equal to
lifting all previously set restrictions. This
is also fixed by this patch.
mysql-test/r/loaddata.result:
* Removed test code which will currently break the much used --mem feature of mtr.
mysql-test/t/loaddata.test:
* Removed test code which will currently break the much used --mem feature of mtr.
sql/item_strfunc.cc:
* Replaced string comparing code on opt_secure_file_priv with an interface which guarantees that both file paths are normalized using the same norm on all platforms.
sql/mysql_priv.h:
* Added signature for is_secure_file_path()
sql/mysqld.cc:
* New function for checking if a path is compatible with the secure path restriction.
* Added initialization of the opt_secure_file_priv variable.
sql/sql_class.cc:
* Replaced string comparing code on opt_secure_file_priv with an interface which guarantees that both file paths are normalized using the same norm on all platforms.
sql/sql_load.cc:
* Replaced string comparing code on opt_secure_file_priv with an interface which guarantees that both file paths are normalized using the same norm on all platforms.
The server was not checking the table name supplied to COM_FIELD_LIST
for validity and compliance with acceptable table name standards.
Fixed by checking the table name for compliance similar to how it's
normally checked by the parser and returning an error message if
it's not compliant.
WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code checking the above condition so that subqueries
are not excluded and hence are handled in the same way as top
level SELECTs.
mysql-test/r/explain.result:
Added a test case for bug #48419.
mysql-test/r/ps.result:
Updated test results to take into account the new (and more correct)
"Extra" comments in execution plans.
mysql-test/t/explain.test:
Added a test case for bug #48419.
sql/sql_select.cc:
There is no point in excluding subqueries from checking
for identically false WHERE conditions.
The server could be tricked into reading packets indefinitely if it
received a packet larger than the maximum size of one packet.
This problem is aggravated by the fact that it can be triggered
before authentication.
The solution is to not skip big packets for non-authenticated
sessions. If a big packet is sent before a session is
authenticated, an error is returned and the connection is closed.
include/mysql_com.h:
Add skip flag. Only used in server builds.
sql/net_serv.cc:
Control whether big packets can be skipped.
Problem: "COM_FIELD_LIST is an old command of the MySQL server, before there was real move to only
SQL. Seems that the data sent to COM_FIELD_LIST( mysql_list_fields() function) is not
checked for sanity. By sending long data for the table a buffer is overflown, which can
be used deliberately to include code that harms".
Fix: check incoming data length.
sql/sql_parse.cc:
Fix for bug #53237: mysql_list_fields/COM_FIELD_LIST stack smashing
- check incoming mysql_list_fields() table name arg length.
sql/sql_base.cc:
Replace strmov() with strnmov() to remove the possibility for buffer overflow.
sql/sql_parse.cc:
Reject COM_FIELD_LIST with too-big table or wildcard argument.
(libmysqlclient doesn't allow sending too long arguments anyway, but we
need this to protect against buffer overflow exploits).
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense,
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
mysql-test/r/join.result:
Added a test case for bug #50335.
mysql-test/t/join.test:
Added a test case for bug #50335.
sql/sql_select.cc:
Removing the assertion in eq_ref_table() as it does not make
any sense. We also have to take care of the 'found' counter so
that we count multiple references only once. We rely on this
fact later in eq_ref_table().
of sync
In RBR, table->s->last_null_bit_pos can sometimes be zero. This
has an impact on the slave when it compares records fetched from the
storage engine against records in the binary log event. If
last_null_bit_pos is zero, the slave, while comparing in the
log_event.cc:record_compare function, would set all bits in the last
null byte to 1 (assuming all 8 were unused). It would thus lose the
ability to distinguish records that were similar in contents except
for the fact that some field was null in one record, but not in the
other. Ultimately this would cause wrong matches, and in the specific
case depicted in the bug report the same record would be updated
twice, resulting in a lost update.
Additionally, in the record_compare function the slave was setting the
X bit unconditionally. There are cases where the X bit does not exist
in the record header. This could also lead to wrong matches between
records.
We fix both by conditionally setting the bits: (i) unused null_bits
are set only if last_null_bit_pos > 0; (ii) the X bit is set only if
HA_OPTION_PACK_RECORD is in use.
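A simplified sketch of the conditional setting described above (hypothetical helper, not the actual record_compare code):
  #include <cstdint>
  void normalize_header_bits(uint8_t *last_null_byte,
                             unsigned last_null_bit_pos,     // 0 means all 8 bits are used
                             uint8_t *record_first_byte,
                             bool uses_ha_option_pack_record)
  {
    if (last_null_bit_pos > 0)
    {
      // Only the bits from last_null_bit_pos upwards are really unused.
      uint8_t unused_mask= (uint8_t) (0xFFU << last_null_bit_pos);
      *last_null_byte|= unused_mask;
    }
    if (uses_ha_option_pack_record)
      *record_first_byte|= 0x01;      // the X bit only exists for packed records
  }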
mysql-test/extra/rpl_tests/rpl_record_compare.test:
Shared part of the test case for MyISAM and InnoDB.
mysql-test/suite/rpl/t/rpl_row_rec_comp_innodb.test:
InnoDB test case.
mysql-test/suite/rpl/t/rpl_row_rec_comp_myisam.test:
MyISAM test case. Added also coverage for Field_bits case.
sql/log_event.cc:
Deployed conditional setting of unused bits at record_compare.
sql/log_event_old.cc:
Same change as in log_event.cc.
Correcting a patch mistake. The converted file path is placed in 'buff', not in opt_secure_file_priv.
mysql-test/r/loaddata.result:
* Updated test case; since secure_file_priv is now normalized the previous values have changed.
sql/mysqld.cc:
* Fixed patch mistake
The server variable opt_secure_file_priv wasn't
normalized properly and caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to do different interpretations of the
--secure-file-priv option.
The patch moves code to the server initialization
routines so that the path always is normalized
once and only once.
It was also intended that setting the option
to an empty string should be equal to
lifting all previously set restrictions. This
is also fixed by this patch.
sql/mysqld.cc:
* If --secure-file-priv is an empty string then the option variable
should be unset.
* opt_secure_file_priv should be normalized once when the server starts.
sql/sql_load.cc:
* moved variable normalization code to fix_paths()
Potential deadlock situation involving LOCK_plugin,
LOCK_global_system_variables and LOCK_status.
This patch backports the fix from next-mr, unlocking
LOCK_plugin before calling plugin->init() and
add_status_vars().
Arg_comparator initializes the 'comparators' array in case of
ROW comparison and does not free this array on destruction.
This leads to memory leaks.
The fix:
- added an Arg_comparator::cleanup() method which frees
the 'comparators' array;
- added an Item_bool_func2::cleanup() method which calls
the Arg_comparator::cleanup() method.
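A minimal sketch of the fix (hypothetical shape, not the actual Arg_comparator declaration): the per-column comparator array allocated for ROW comparisons is now released explicitly by cleanup().
  class Arg_comparator_sketch
  {
    Arg_comparator_sketch *comparators;   // one comparator per ROW column
  public:
    Arg_comparator_sketch() : comparators(0) {}
    void set_cmp_func_row(unsigned columns)
    {
      comparators= new Arg_comparator_sketch[columns];
    }
    void cleanup()
    {
      delete [] comparators;              // was never freed before the fix
      comparators= 0;
    }
    ~Arg_comparator_sketch() { cleanup(); }
  };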
mysql-test/r/ps.result:
test case
mysql-test/r/row.result:
test case
mysql-test/t/ps.test:
test case
mysql-test/t/row.test:
test case
sql/item_cmpfunc.h:
-added Arg_comparator::cleanup() method which frees
'comparators' array.
-added Item_bool_func2::cleanup() method which calls
Arg_comparator::cleanup() method
union...order by (select... where...)
The problem is that mysql tries to materialize and
cache the scalar sub-queries at JOIN::optimize time
even for EXPLAIN, where the number of columns is
totally different from what's expected.
Fixed by not executing the scalar subqueries
for EXPLAIN.
to cleanup open connections
It was possible to UNINSTALL a storage engine plugin while the binding
between a THD object and the storage engine was still active (e.g. in
the middle of a transaction).
To avoid unclean deactivation (uninstall) of a storage engine plugin
in the middle of a transaction, an additional storage engine plugin
lock is acquired by thd_set_ha_data().
If ha_data is not null and the storage engine plugin was not locked
by thd_set_ha_data() in this connection before, the storage engine
plugin gets locked.
If ha_data is null and the storage engine plugin was locked by
thd_set_ha_data() in this connection before, the storage engine
plugin lock gets released.
If handlerton::close_connection() didn't reset ha_data, the server does
it immediately after calling handlerton::close_connection().
Note that this is just a framework fix; storage engines must switch
from thd_ha_data() to thd_set_ha_data() if they see fit.
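A simplified, self-contained sketch of the pinning protocol described above (hypothetical types, not the server API):
  struct Plugin_ref_sketch { int refcount; };
  struct Engine_slot_sketch
  {
    void *ha_ptr;                   // engine's per-connection data
    Plugin_ref_sketch *pinned;      // plugin pinned on behalf of this connection, if any
  };
  void set_ha_data_sketch(Engine_slot_sketch *slot, Plugin_ref_sketch *plugin,
                          void *ha_data)
  {
    if (ha_data && !slot->pinned)
    {
      plugin->refcount++;           // first attach: pin the plugin so UNINSTALL must wait
      slot->pinned= plugin;
    }
    else if (!ha_data && slot->pinned)
    {
      slot->pinned->refcount--;     // data cleared (e.g. at close_connection): unpin
      slot->pinned= 0;
    }
    slot->ha_ptr= ha_data;
  }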
include/mysql/plugin.h:
As thd_{get|set}_ha_data() have some extra logic now, they
must be implemented on server side.
include/mysql/plugin.h.pp:
As thd_{get|set}_ha_data() have some extra logic now, they
must be implemented on server side.
sql/handler.cc:
Make sure ha_data is reset and ha_data lock is released.
sql/handler.h:
hton is not supposed to be updated by ha_lock_engine(),
make it const.
sql/sql_class.cc:
As thd_{get|set}_ha_data() have some extra logic now, they
must be implemented on server side.
sql/sql_class.h:
Added ha_data lock.
on LOAD DATA
Two problems:
1. LOAD DATA was not checking for SQL errors and was sending an OK
packet even when errors had already been reported. Fixed to check for
SQL errors in addition to the error conditions already detected.
2. There was an over-ambitious assert() on the server checking that the
protocol is always followed by the client. This can cause crashes on
debug servers when clients do not complete the protocol exchange for some
reason (e.g. the --send command in mysqltest). Fixed by keeping the assert
only on the client side, since the server always completes the protocol
exchange.
Removed random failures from test suite
mysql-test/extra/rpl_tests/rpl_insert_id_pk.test:
Make test predictable.
mysql-test/include/maria_empty_logs.inc:
We can't use 'Threads_connected' for synchronization, as the 'check_warnings' thread that just quit may still be counted in 'Threads_connected'.
Now we just wait until MySQLD answers again, which should be good enough for our purposes
mysql-test/suite/binlog/r/binlog_index.result:
Updated results file
mysql-test/suite/binlog/t/binlog_index-master.opt:
Added option file to not get stack traces in .err file.
mysql-test/suite/binlog/t/binlog_index.test:
Added 'flush tables' to remove warning about crashed suppression file from logs
mysql-test/suite/pbxt/r/multi_statement.result:
Updated results
mysql-test/suite/pbxt/t/multi_statement-master.opt:
Added options so that slow query testing makes sense
sql/events.cc:
Don't write Event Scheduler startup message if warnings are turned off.
sql/handler.cc:
Removed compiler warning
sql/log.cc:
Removed compiler warning
sql/mysqld.cc:
Added option 'test-expect-abort'; If this is set, we don't write message to log in case of 'DBUG_ABORT'.
(Gives us smaller, easier to read log files)
sql/set_var.cc:
Removed compiler warning
sql/slave.cc:
Removed compiler warning
sql/sql_plugin.cc:
Don't write warnings about disabled plugin if using --log_warnings=0
storage/xtradb/include/ut0lst.h:
Removed compiler warning
support-files/compiler_warnings.supp:
Suppress warning from xtradb