INVOKER-security view access check wrong".
When privilege checks were done for tables used from an
INVOKER-security view which was in turn used from a
DEFINER-security view, the connection's active security
context was incorrectly used instead of the security context
with the privileges of the DEFINER-security view's creator.
This meant that users who had enough rights to access
the DEFINER-security view, and as a result were supposed to
be able to access it successfully, were unable to do so
when they didn't have privileges on the underlying tables
of the INVOKER-security view.
This problem was caused by the fact that for INVOKER-security
views the TABLE_LIST::security_ctx member of the underlying tables
was set to 0 even in cases when the view was used from
another, DEFINER-security view. This meant that when privileges
on these underlying tables were checked in
setup_tables_and_check_access(), the active connection's security
context was used instead of the context corresponding to the
creator of the calling view.
This fix addresses the problem by ensuring that the underlying
tables of an INVOKER-security view inherit the security context
from the view, so that the correct security context is used for
privilege checks on the underlying tables when such a view
is used from another view with DEFINER security.
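A rough, self-contained sketch of the inheritance being described (the struct
below is a simplified stand-in for TABLE_LIST and the helper is hypothetical;
only the security_ctx member and the INVOKER/DEFINER semantics come from the
text above):

  struct Security_context { const char *priv_user; };

  // Simplified stand-in for the relevant part of TABLE_LIST.
  struct Table_ref
  {
    Table_ref        *merge_underlying_list; // underlying tables of a merged view
    Table_ref        *next_local;            // next table on the same level
    Security_context *security_ctx;          // context used for privilege checks
                                             // (0 means "use the connection's context")
  };

  // When an INVOKER-security view is itself used from a DEFINER-security view,
  // 'view_ctx' is the DEFINER's context; when the view is used directly it is 0.
  // The fix makes the underlying tables inherit this context instead of always
  // leaving it at 0.
  static void inherit_security_ctx(Table_ref *invoker_view, Security_context *view_ctx)
  {
    for (Table_ref *tbl= invoker_view->merge_underlying_list; tbl; tbl= tbl->next_local)
      tbl->security_ctx= view_ctx;
  }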
tmptable needed
The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields()). Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.
Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT() function.
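A small, self-contained model of the pointer clash (the buffers below are
entirely hypothetical; only the idea that DEFAULT() repoints its argument
during fix_fields() and that temporary-table creation shifts the same pointers
again comes from the text above):

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    unsigned char record[16]  = {0};    // row buffer of the original table
    unsigned char defaults[16]= {42};   // buffer holding the default values
    unsigned char tmp_rec[16] = {0};    // row buffer of the temporary table

    // A field normally points into 'record'; DEFAULT(f) repoints it into
    // 'defaults' during fix_fields().
    unsigned char *field_ptr= defaults;

    // Creating the temporary table rebases field pointers from 'record' to
    // 'tmp_rec' by adding the difference of the buffer addresses...
    std::intptr_t shift= (std::intptr_t) tmp_rec - (std::intptr_t) record;
    unsigned char *rebased= (unsigned char *) ((std::intptr_t) field_ptr + shift);

    // ...which is only meaningful for pointers still inside 'record'.  For a
    // DEFAULT() argument the result points to garbage, hence the fix skips
    // the rebase for such fields.
    std::printf("rebased pointer still valid: %s\n",
                (rebased >= defaults && rebased < defaults + 16) ? "yes" : "no");
    return 0;
  }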
if max_allowed_packet >= 16M.
This bug was introduced by the patch for bug#42503.
This patch restores the behaviour that existed before the patch
for bug#42503 was applied.
The problem from a user point of view was that on Solaris the
time-related functions (e.g. NOW(), SYSDATE(), etc.) would always
return a fixed time.
This bug was happening due to logic in the time-retrieving
wrapper function which would only call the time() function every
half second. The interval between calls was calculated
using gethrtime(), and the logic relied on the fact that the time
returned by it is monotonic.
Unfortunately, due to bugs in the gethrtime() implementation,
there are some cases where the time returned by it can drift
(See Solaris bug id 6600939), potentially causing the interval
calculation logic to fail.
The solution is to retrieve the correct time whenever a drift in
the time returned by gethrtime() is detected. That is, do not
use the cached time whenever the values (previous and current)
returned by gethrtime() are not monotonically increasing.
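A minimal sketch of the fixed wrapper logic (self-contained;
clock_gettime(CLOCK_MONOTONIC) stands in for gethrtime(), and the names are
hypothetical -- only the half-second cache and the "refresh on non-monotonic
values" rule come from the text above):

  #include <time.h>

  // Return the wall-clock time, calling time() at most every half second,
  // but never trusting the cache when the monotonic clock appears to drift
  // backwards between calls.
  static time_t cached_time(void)
  {
    static long long last_ns= 0;
    static time_t    last_result= 0;

    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);              // stand-in for gethrtime()
    long long now_ns= ts.tv_sec * 1000000000LL + ts.tv_nsec;

    if (last_result == 0 ||                           // first call
        now_ns < last_ns ||                           // drift detected: refresh
        now_ns - last_ns >= 500000000LL)              // half a second has passed
    {
      last_result= time(NULL);
      last_ns= now_ns;
    }
    return last_result;
  }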
to crash mysqld".
handler::pushed_cond was not always properly reset when table objects were
recycled via the table cache.
handler::pushed_cond is now set to NULL in handler::ha_reset(). This should
prevent pushed conditions from (incorrectly) re-appearing in later queries.
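The gist of the change, as a hedged sketch (handler is modelled by a tiny
stand-in struct; only the pushed_cond member and the ha_reset()/table-cache
behaviour are taken from the text above):

  #include <cstddef>

  struct Item;                         // pushed-down condition, opaque here

  struct handler_model                 // hypothetical stand-in for class handler
  {
    const Item *pushed_cond;           // condition pushed down to the engine, if any

    int reset() { return 0; }          // engine-specific reset (no-op in this model)

    int ha_reset()
    {
      // A condition pushed by one query must not survive into the next query
      // that picks this table object up again from the table cache.
      pushed_cond= NULL;
      return reset();
    }
  };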
From a user perspective, the problem is that a FLUSH LOGS or SIGHUP
signal could end up associating stdout and stderr with random
files. In the case of this bug report, the streams would end up
associated with InnoDB ibd files.
The freopen(3) function is not thread-safe on FreeBSD. What this
means is that if another thread calls open(2) while freopen()
is executing, that thread's fd returned by open(2) may get
re-associated with the file being passed to freopen(3). See FreeBSD
PR number 79887 for reference:
http://www.freebsd.org/cgi/query-pr.cgi?pr=79887
This problem is worked around by substituting an internal hook within
the FILE structure. This avoids the loss of atomicity by not having
the original fd closed before it is duplicated.
Patch based on the original work by Vasil Dimov.
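For illustration only (this is not the FILE-hook workaround used by the patch):
the property that matters -- the original fd is never released, so no
concurrent open(2) can be handed that fd number -- is the same one provided by
the common open()+dup2() idiom sketched below; the function name and path
handling are hypothetical.

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  // Re-point 'stream' at 'path' without ever closing its fd first, so a
  // concurrent open(2) in another thread cannot be handed that fd number.
  static int redirect_stream(FILE *stream, const char *path)
  {
    int new_fd= open(path, O_WRONLY | O_CREAT | O_APPEND, 0640);
    if (new_fd < 0)
      return -1;
    if (dup2(new_fd, fileno(stream)) < 0)   // atomically replaces the old fd
    {
      close(new_fd);
      return -1;
    }
    close(new_fd);
    return 0;
  }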
Do not use nested AC_CHECK_FUNC() calls because they result in:
./configure: line 52688: syntax error: unexpected end of file
(which happens only on some platforms and does not happen on others;
I have no idea what the reason for this is)
AC_CHECK_FUNCS(f1 f2 f3, ACTION_IF_PRESENT)
ACTION_IF_PRESENT is executed if any of f1, f2 or f3 is present.
Fix this misuse: we want the action to be executed only if all of the
functions are present.
This assert could be triggered if -1 was inserted into
an auto increment column by a statement writing more than
one row.
Unless explicitly given, an interval of auto increment values
is generated when a statement first needs an auto increment
value. The triggered assert checks that the auto increment
counter is equal to or higher than the lower bound of this
interval.
Generally, the auto increment counter starts at 1 and is
incremented by 1 each time it is used. However, inserting an
explicit value into the auto increment column sets the auto
increment counter to this value + 1 if this value is higher
than the current value of the auto increment counter.
This bug was triggered if the explicit value was -1. Since the
value was converted to unsigned before any comparisons were made,
it was found to be higher than the current value of the auto
increment counter and the counter was set to -1 + 1 (which, after
the unsigned conversion, wraps around to 0). This value
was below the reserved interval and caused the assert to be
triggered the next time the statement tried to write a row.
With the patch for Bug#39828, this bug is no longer repeatable.
Now, -1 + 1 is detected as an "overflow" which causes the auto
increment counter to be set to ULONGLONG_MAX. This avoids hitting
the assert for the next insert and causes a new interval of
auto increment values to be generated. This resolves the issue.
This patch therefore only contains a regression test and no code
changes. Test case added to auto_increment.test.
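The conversion and wrap-around described above can be reproduced with plain
integer arithmetic (nothing MySQL-specific below):

  #include <climits>
  #include <cstdio>

  int main()
  {
    long long explicit_value= -1;

    // Converted to unsigned, -1 becomes the largest possible value...
    unsigned long long as_unsigned= (unsigned long long) explicit_value;
    std::printf("equals ULLONG_MAX: %d\n", as_unsigned == ULLONG_MAX);

    // ...so it always compares higher than the current counter, and the
    // counter is then set to value + 1, which wraps around to 0 -- below any
    // previously reserved interval of auto increment values.
    unsigned long long next_counter= as_unsigned + 1;
    std::printf("next counter: %llu\n", next_counter);   // prints 0
    return 0;
  }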
mysqlbinlog only prints "use $database" statements to its output stream
when the active default database changes between events. This could cause
a "No Database Selected" error when dropping and recreating that database.
To fix the problem, we clear print_event_info->db when printing an event
containing a CREATE/DROP/ALTER DATABASE statement, so that the Query_log_event
following such a statement is printed with a "use 'db'" statement again
(except for events containing only transaction keywords).
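A simplified, self-contained model of the decision (the struct and function
below are hypothetical; only print_event_info->db and the "print use only when
the database changes" behaviour come from the text above):

  #include <cstdio>
  #include <string>

  struct Print_event_info_model        // hypothetical stand-in
  {
    std::string db;                    // database named by the last "use" printed
  };

  static void print_query_event(Print_event_info_model *info,
                                const std::string &event_db,
                                const std::string &query,
                                bool is_database_ddl)   // CREATE/DROP/ALTER DATABASE?
  {
    if (info->db != event_db)          // default database changed: emit "use"
    {
      std::printf("use %s/*!*/;\n", event_db.c_str());
      info->db= event_db;
    }
    std::printf("%s/*!*/;\n", query.c_str());

    // The fix: after a CREATE/DROP/ALTER DATABASE statement, forget the cached
    // database so the next query event is printed with a "use" statement again.
    if (is_database_ddl)
      info->db.clear();
  }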
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
The assert happens due to improper calculation of max_length
in the Item_func_div object: if the dividend has max_length == 0, then
Item_func_div::max_length is set to 0 under some circumstances.
The fix:
if decimals == NOT_FIXED_DEC then set
Item_func_div::max_length to the maximum possible
DOUBLE length value.
multiline inserts into partition
Bug#57071: EXTRACT(WEEK from date_col) cannot be
allowed as partitioning function
Renamed the function according to reviewers' comments.
Bug#57071: EXTRACT(WEEK from date_col) cannot be allowed as partitioning function
There were functions allowed as partitioning functions
that implicitly allowed casts. That could result in unacceptable
behaviour.
The solution was to check that the arguments of date and time functions
have allowed types (a field of DATE/DATETIME/TIME type, depending on the function).
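A rough model of the kind of check introduced (self-contained and
hypothetical; only the rule that the argument must be a field of an allowed
temporal type, rather than something implicitly cast, comes from the text
above):

  enum Arg_kind { ARG_FIELD, ARG_OTHER_EXPRESSION };
  enum Arg_type { TYPE_INT, TYPE_DATE, TYPE_DATETIME, TYPE_TIME };

  // Is this argument acceptable for a date/time partitioning function that
  // expects 'required_type'?  Anything that would rely on an implicit cast
  // (a non-field argument or a field of the wrong type) is rejected.
  static bool allowed_partitioning_argument(Arg_kind kind, Arg_type type,
                                            Arg_type required_type)
  {
    return kind == ARG_FIELD && type == required_type;
  }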
Problem: master executed a statement that would fail on slave
(namely, DROP USER 'create_rout_db'@'localhost').
Then the test did:
--let $rpl_only_running_threads= 1
--source include/rpl_reset.inc
rpl_reset.inc calls rpl_sync.inc, which first checks which of
the threads are running and then syncs those threads that are
running. If the SQL thread fails after the check, the sync will
fail. So there was a race in the test and it failed on some
slow hosts.
Fix: Don't replicate the failing statement.
Item_sum_max/Item_sum_min incorrectly set the null_value flag, and the
attempt to get the result in parent functions leads to a crash.
This happens due to double evaluation of the function argument:
the first evaluation happens in the comparator and the second one
happens in Item_cache::cache_value().
The fix is to introduce a new Item_cache object which
holds the result of the argument and to use this cached value
as the argument of the comparator.
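A minimal illustration of why evaluating the argument only once matters
(self-contained; the side-effecting argument below is hypothetical and merely
stands in for an Item whose repeated evaluation yields different results):

  #include <cstdio>

  static int calls= 0;
  static int evaluate_argument() { return ++calls; }   // evaluation has a side effect

  int main()
  {
    // Double evaluation: the value that was compared is not the value stored.
    int compared= evaluate_argument();
    int stored=   evaluate_argument();
    std::printf("double evaluation: compared=%d stored=%d\n", compared, stored);

    // Cached evaluation (the approach of the fix): evaluate once, then feed
    // the same cached value both to the comparator and to the result.
    calls= 0;
    int cached= evaluate_argument();
    std::printf("cached evaluation: compared=%d stored=%d\n", cached, cached);
    return 0;
  }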
This bug fix requires that Bug #58912 be fixed as well (bzr revision id
marko.makela@oracle.com-20101221093919-mcmmgd4zpse9567d). Otherwise,
another double BLOB free could occur when InnoDB would try to perform
an update-in-place as delete-and-insert-by-update-in-place.
row_upd_clust_rec_by_insert(): Do not disown the externally stored
columns from the old record (btr_cur_mark_extern_inherited_fields())
until after checking the foreign key constraints and successfully
inserting the updated record. If a lock wait timeout occurs between
the delete-marking of the old record and the insertion of the updated
record, mark the columns inherited before retrying the insert.
Distinguish the state UPD_NODE_INSERT_BLOB from
UPD_NODE_INSERT_CLUSTERED.
btr_cur_del_mark_set_clust_rec(): Replace the cursor parameter with
block, rec, index, offsets so that the offsets need not be recalculated.
Assert that rec is on a clustered index leaf page.
btr_cur_disown_inherited_fields(): Renamed from
btr_cur_mark_extern_inherited_fields(). Use
upd_get_field_by_field_no(). Assert that there are externally stored
columns. Assert that a mini-transaction is passed. Remove the return
status. (The only caller, row_upd_clust_rec_by_insert(), will have
determined that some fields have changed ownership.)
btr_cur_mark_dtuple_inherited_extern(): Rename to
row_upd_clust_rec_by_insert_inherit_func() and declare as static. Add
the debug parameters rec, offsets. When rec is given, assert that the
off-page columns match those in the insert tuple and that the off-page
columns are owned by the record. Assert that the non-updated off-page
columns in the insert tuple are owned, and mark them inherited.
row_upd_clust_rec_by_insert_inherit(): A wrapper macro for
row_upd_clust_rec_by_insert_inherit_func().
row_undo_mod_upd_exist_sec(): Adjust a comment about
row_upd_clust_rec_by_insert().
rb:508 approved by Jimmy Yang
row_upd_changes_ord_field_binary(): Do not return TRUE if the update
vector changes a column that is covered by a prefix index, but does
not change the column prefix. Add the row_ext_t parameter for
determining whether the prefixes of externally stored columns match.
dfield_datas_are_binary_equal(): Add the parameter len, for comparing
column prefixes when len > 0.
innodb.test: Add a test case where the patch of Bug #55284 failed
without this fix.
rb:537 approved by Jimmy Yang
Normally, an auto_increment value is generated for a column by
inserting either NULL or 0 into it. NO_AUTO_VALUE_ON_ZERO
suppresses this behavior for 0 so that only NULL generates
the auto_increment value. This behavior is also followed by
a slave, specifically by the SQL thread, when applying events
in the statement format from a master. However, when applying
events in the row format, the flag was ignored, thus causing
an assertion failure:
"Assertion failed: next_insert_id == 0, file .\handler.cc"
In fact, we never need to generate an auto_increment value for
the column when applying events in row format on the slave. So we
prevent it from happening by using 'MODE_NO_AUTO_VALUE_ON_ZERO'.
Refactoring: get rid of all the sql_mode checks in Rows_log_event
when applying it, to avoid problems caused by the inconsistency
of the sql_mode on slave and master, as the sql_mode is not set for
Rows_log_event.
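For reference, the NO_AUTO_VALUE_ON_ZERO semantics being relied on, as a small
self-contained model (the function is hypothetical; the rule it encodes is the
one described above):

  #include <cstdio>

  // Should a new auto_increment value be generated for this inserted value?
  static bool generates_auto_increment(bool value_is_null, long long value,
                                       bool no_auto_value_on_zero)
  {
    if (value_is_null)
      return true;                          // NULL always generates a value
    if (value == 0)
      return !no_auto_value_on_zero;        // 0 generates one only without the mode
    return false;                           // explicit non-zero values are kept as-is
  }

  int main()
  {
    // With the mode forced on while applying row events, a 0 coming from the
    // master's row image is stored verbatim and never triggers generation.
    std::printf("%d\n", generates_auto_increment(false, 0, true));   // prints 0
    return 0;
  }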
Problem: Warnings for truncated data were generated on hosts with
long host names because @@hostname was inserted into a CHAR(40) column.
Fix: Change CHAR(40) to TEXT.
Major replication test framework cleanup. This does the following:
- Ensure that all tests clean up the replication state when they
finish, by making check-testcase check the output of SHOW SLAVE STATUS.
This implies:
- Slave must not be running after the test has finished. This is good
because it removes the risk of sporadic errors in subsequent
tests when a test forgets to sync correctly.
- Slave SQL and IO errors must be cleared when the test ends. This is
good because we will notice if a test gets an unexpected error in
the slave threads near the end.
- We no longer have to clean up before a test starts.
- Ensure that all tests that wait for an error in one of the slave
threads wait for a specific error. It is no longer possible to
source wait_for_slave_[sql|io]_to_stop.inc when there is an error
in one of the slave threads. This is good because:
- If a test expects an error but there is a bug that causes
another error to happen, or if it stops the slave thread without
an error, then we will notice.
- When developing tests, wait_for_*_to_[start|stop].inc will fail
immediately if there is an error in the relevant slave thread.
Before this patch, we had to wait for the timeout.
- Remove duplicated and repeated code for setting up unusual replication
topologies. Now, there is a single file that is capable of setting
up arbitrary topologies (include/rpl_init.inc, but
include/master-slave.inc is still available for the most common
topology). Tests can now end with include/rpl_end.inc, which will clean
up correctly no matter what topology is used. The topology can be
changed with include/rpl_change_topology.inc.
- Improved debug information when tests fail. This includes:
- debug info is printed on all servers configured by include/rpl_init.inc
- User can set $rpl_debug=1, which makes auxiliary replication files
print relevant debug info.
- Improved documentation for all auxiliary replication files. Now they
describe purpose, usage, parameters, and side effects.
- Many small code cleanups:
- Made have_innodb.inc output a sensible error message.
- Moved contents of rpl000017-slave.sh into rpl000017.test
- Added mysqltest variables that expose the current state of
disable_warnings/enable_warnings and friends.
- Too many to list here: see per-file comments for details.