The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that assumption does not hold:
all test cases for this bug are valid queries with multiple
identical WHERE expressions that reference the same table and
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
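For illustration only, a query of the rough shape the test cases use
(table and column names are hypothetical):

  SELECT t1.a
  FROM t1, t2
  WHERE t2.b = t1.a AND t2.c = t1.a   -- two identical references to t1.a
  ORDER BY t1.a;                      -- ...which also appears in ORDER BY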
Problem: ALTER TABLE ADD INDEX may lead to table copying if there are
numeric fields with a non-default display width modifier specified.
Fix: compare the numeric fields' storage lengths when deciding whether
they can be considered 'equal' for table alteration purposes.
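For illustration only (hypothetical table), the kind of statement that
should no longer force a table copy:

  CREATE TABLE t1 (a INT(5) NOT NULL);   -- non-default display width
  ALTER TABLE t1 ADD INDEX idx_a (a);    -- storage length is unchanged, so no copy is needed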
Previously installed dynamic plugins are deliberately not loaded
on startup when --skip-grant-tables is enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN statements were still allowed, and resulted in
inconsistent error messages (reporting a duplicate plugin or that
the plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
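For illustration only (plugin and library names are hypothetical), the
behavior after the fix when the server runs with --skip-grant-tables:

  INSTALL PLUGIN example SONAME 'ha_example.so';
  -- now rejected with ER_OPTION_PREVENTS_STATEMENT instead of a misleading
  -- "duplicate plugin" error
  UNINSTALL PLUGIN example;
  -- rejected the same way, instead of "plugin does not exist"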
partition if multiple columns used
The problem was that range scanning through the sorted array of
the column list values did not use a correct index calculation.
Fixed by also taking the number of columns into account in the
calculation.
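For illustration only (hypothetical table), a multi-column case of the
kind affected:

  CREATE TABLE t1 (a INT, b INT)
  PARTITION BY RANGE COLUMNS (a, b)
  (PARTITION p0 VALUES LESS THAN (10, 10),
   PARTITION p1 VALUES LESS THAN (20, 20),
   PARTITION p2 VALUES LESS THAN (MAXVALUE, MAXVALUE));
  SELECT * FROM t1 WHERE a > 5 AND a < 15;  -- pruning range-scans the sorted column list values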
transaction
BUG#52616 Temp table prevents switch binlog format from STATEMENT to ROW
Before WL#2687 and BUG#46364, every non-transactional change that happened
after a transactional change was written to the trx-cache and flushed upon
committing the transaction. WL#2687 and BUG#46364 changed this behavior:
non-transactional changes are now written to the binary log upon committing
the statement.
A binary log event is identified as transactional or non-transactional through
a flag in the Log_event, which is set according to the underlying storage
engine the event stems from. In the current bug, this flag was not being
set properly when DROP TEMPORARY TABLE was executed.
However, while fixing this bug we figured out that changes to temporary tables
should always be written to the trx-cache if there is an ongoing transaction.
Otherwise, binlog events would be produced in reverse order.
Regarding concurrency, keeping changes to temporary tables in the trx-cache is
also safe, as temporary tables are only visible to the owner connection.
In this patch, we classify the following statements as unsafe:
1 - INSERT INTO t_myisam SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_myisam
3 - CREATE TEMPORARY TABLE t_myisam_temp SELECT * FROM t_myisam
On the other hand, the following statements are classified as safe:
1 - INSERT INTO t_innodb SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_innodb
The patch also guarantees that transactions containing a DROP TEMPORARY are
always written to the binary log regardless of the mode and the outcome:
commit or rollback. In particular, the DROP TEMPORARY is extended with the
IF EXISTS clause when the current statement logging format is set to row.
Finally, the patch allows switching from STATEMENT to MIXED/ROW when there
are temporary tables, but not the other way around.
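For illustration only (hypothetical temporary table), the direction of the
switch that is now allowed:

  SET SESSION binlog_format = 'STATEMENT';
  CREATE TEMPORARY TABLE tmp1 (a INT);
  SET SESSION binlog_format = 'ROW';         -- allowed: STATEMENT -> ROW/MIXED
  SET SESSION binlog_format = 'STATEMENT';   -- rejected while the temporary table exists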
Arg_comparator initializes the 'comparators' array in case of
ROW comparison and does not free this array on destruction.
This leads to memory leaks.
The fix:
- added an Arg_comparator::cleanup() method which frees the
'comparators' array.
- added an Item_bool_func2::cleanup() method which calls
the Arg_comparator::cleanup() method.
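For illustration only (hypothetical table), a ROW comparison of the kind
that makes Arg_comparator allocate the 'comparators' array:

  SELECT * FROM t1 WHERE (a, b) = (1, 2);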
When re-setting the GLOBAL debug settings (SET GLOBAL debug='') the
server was setting the data elements of the top (initial) frame to 0
without freeing the underlying memory. As these are global settings,
there is a good chance that something has already been allocated there.
Fixed by:
1. making sure the allocated data is cleaned up before re-setting it
while parsing a debug string
2. making sure the data allocated in the global settings is freed on
shutdown.
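For illustration only, the statement sequence described above (only
meaningful on a debug build; the control string is just an example):

  SET GLOBAL debug = 'd:t';   -- allocates settings in the top (initial) frame
  SET GLOBAL debug = '';      -- re-set: previously the old elements were zeroed but never freed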
This assertion could be triggered during execution of OPTIMIZE TABLE for
InnoDB tables. As part of optimize for InnoDB tables, the table is recreated
and then opened again. If the reopen failed for any reason, the assertion
would be triggered. This could for example be caused by a concurrent DROP
TABLE executed by a different connection. The reason for the assertion was
that any failures during reopening were ignored.
This patch fixes the problem by making sure that the result of reopening the
table is checked and that any error messages are sent to the client.
Test case added to innodb_mysql_sync.test.
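For illustration only (hypothetical table), the concurrent scenario
described above:

  -- connection 1:
  OPTIMIZE TABLE t1;   -- for InnoDB: the table is recreated and reopened
  -- connection 2, at the same time:
  DROP TABLE t1;       -- if this wins, the reopen in connection 1 fails;
                       -- the failure is now reported instead of hitting the assertion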
This was a deadlock between CREATE/ALTER/DROP EVENT and a query
accessing both the mysql.event table and I_S.GLOBAL_VARIABLES.
The root of the problem was that the LOCK_event_metadata mutex was
used to both protect the "event_scheduler" global system variable
and the internal event data structures used by CREATE/ALTER/DROP EVENT.
The deadlock would occur if CREATE/ALTER/DROP EVENT held
LOCK_event_metadata while trying to open the mysql.event table,
at the same time as the query had mysql.event open, trying to
lock LOCK_event_metadata to access "event_scheduler".
This bug was fixed in the scope of Bug#51160 by using only
LOCK_global_system_variables to protect "event_scheduler".
This makes it so that the query above won't lock LOCK_event_metadata,
thereby preventing this deadlock from occurring.
This patch contains no code changes.
Test case added to lock_sync.test.
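For illustration only, the two sides of the deadlock described above (the
exact query text is hypothetical):

  -- connection 1:
  CREATE EVENT ev1 ON SCHEDULE EVERY 1 DAY DO SET @x = 1;
  -- connection 2, concurrently: a query touching both mysql.event and
  -- I_S.GLOBAL_VARIABLES
  SELECT * FROM mysql.event, INFORMATION_SCHEMA.GLOBAL_VARIABLES
  WHERE VARIABLE_NAME = 'event_scheduler';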
even if myisam-recover is OFF
The problem was that a corrupted MyISAM table was auto repaired
even if the myisam_recover_options server variable (or the
myisam_recover option) was set to OFF.
The reason was that the auto_repair() function, which is supposed
to say if auto repair is to be used, did not use the server variable
setting correctly. This bug was a regression introduced by WL#4738.
This patch fixes the problem by making sure auto_repair() returns
FALSE if myisam_recover_options is set to OFF.
Test case added to myisam.test.
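For illustration only (hypothetical corrupted table), the behavior expected
after the fix:

  SHOW GLOBAL VARIABLES LIKE 'myisam_recover_options';   -- OFF
  SELECT * FROM t1_corrupted;
  -- must now report the corruption instead of silently auto repairing the table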
------------------------------------------------------------
revno: 2875.107.114
revision-id: dlenev@mysql.com-20100201114306-cve0yq5akrxjoei0
parent: dlenev@mysql.com-20100121204303-sr6d1436mac7x6vz
committer: Dmitry Lenev <dlenev@mysql.com>
branch nick: mysql-next-4284-nl-push
timestamp: Mon 2010-02-01 14:43:06 +0300
message:
Implement new type-of-operation-aware metadata locks.
Add a wait-for graph based deadlock detector to the
MDL subsystem.
Fixes bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock" and
bug #37346 "innodb does not detect deadlock between update and
alter table".
The first bug manifested itself as an unwarranted abort of a
transaction with an ER_LOCK_DEADLOCK error by a concurrent ALTER
statement, when this transaction tried to repeat the use of a
table which it had already used in a similar fashion before
ALTER started.
The second bug showed up as a deadlock between table-level
locks and InnoDB row locks, which was "detected" only after
innodb_lock_wait_timeout timeout.
A transaction would start using the table and modify a few
rows.
Then ALTER TABLE would come in, and start copying rows
into a temporary table. Eventually it would stumble on
the modified records and get blocked on a row lock.
The first transaction would try to do more updates, and get
blocked on thr_lock.c lock.
This situation of circular wait would only get resolved
by a timeout.
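In SQL terms, the circular wait above looks roughly like this (hypothetical
table and columns, illustration only):

  -- connection 1:
  BEGIN;
  UPDATE t1 SET val = val + 1 WHERE id < 10;    -- takes InnoDB row locks
  -- connection 2:
  ALTER TABLE t1 ADD COLUMN c INT;              -- copies rows, blocks on a row lock
  -- connection 1 again:
  UPDATE t1 SET val = val + 1 WHERE id >= 10;   -- blocks on the thr_lock.c lock held by ALTER
  -- circular wait, previously resolved only by innodb_lock_wait_timeout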
Both these bugs stemmed from inadequate solutions to the
problem of deadlocks occurring between different
locking subsystems.
In the first case we tried to avoid deadlocks between the metadata
locking and table-level locking subsystems when upgrading a shared
metadata lock to an exclusive one.
Transactions holding the shared lock on the table and waiting for
some table-level lock used to be aborted too aggressively.
We also allowed ALTER TABLE to start in the presence of transactions
that modify the subject table. ALTER TABLE acquires a
TL_WRITE_ALLOW_READ lock at start, and that blocks all writes
against the table (naturally, we don't want any writes to be lost
when switching the old and the new table). The TL_WRITE_ALLOW_READ
lock, in turn, would block the started transactions on the thr_lock.c
lock, should they do more updates. This, again, led to the need
to abort such transactions.
The second bug occurred simply because we didn't have any
mechanism to detect deadlocks between the table-level locks
in thr_lock.c and row-level locks in InnoDB, other than
innodb_lock_wait_timeout.
This patch solves both these problems by moving lock conflicts
which are causing these deadlocks into the metadata locking
subsystem, thus making it possible to avoid or detect such
deadlocks inside MDL.
To do this we introduce new type-of-operation-aware metadata
locks, which allow the MDL subsystem to know not only the fact that a
transaction has used or is going to use some object, but also what
kind of operation it has carried out or is going to carry out on the
object.
This, along with the addition of a special kind of upgradable
metadata lock, allows ALTER TABLE to wait until all
transactions which have updated the table have completed.
This solves the second issue.
Another special type of upgradable metadata lock is acquired
by LOCK TABLE WRITE. This second lock type solves the
first issue, since aborting table-level lock waits in the event of
DDL under LOCK TABLES also becomes unnecessary.
Below follows the list of incompatible changes introduced by
this patch:
- From now on, ALTER TABLE and CREATE/DROP TRIGGER SQL (i.e. those
statements that acquire TL_WRITE_ALLOW_READ lock)
wait for all transactions which have *updated* the table to
complete.
- From now on, LOCK TABLES ... WRITE, REPAIR/OPTIMIZE TABLE
(i.e. all statements which acquire TL_WRITE table-level lock) wait
for all transactions which *updated or read* from the table
to complete.
As a consequence, innodb_table_locks=0 option no longer applies
to LOCK TABLES ... WRITE.
- DROP DATABASE, DROP TABLE, RENAME TABLE no longer abort
statements or transactions which use tables being dropped or
renamed, and instead wait for these transactions to complete.
- Since LOCK TABLES WRITE now takes a special metadata lock,
which is not compatible with reads or writes against the subject
table and is transaction-wide, the thr_lock.c deadlock avoidance
algorithm that used to ensure the absence of deadlocks between LOCK
TABLES WRITE and other statements is no longer sufficient, even for
MyISAM. The wait-for graph based deadlock detector of the MDL
subsystem is now involved and may sometimes be necessary. This may
lead to an ER_LOCK_DEADLOCK error produced for multi-statement
transactions even if these only use MyISAM:
session 1: begin;
session 1: update t1 ...
session 2: lock table t2 write, t1 write;
           -- gets a lock on t2, blocks on t1
session 1: update t2 ...
           (ER_LOCK_DEADLOCK)
- Finally, support of LOW_PRIORITY option for LOCK TABLES ... WRITE
was abandoned.
LOCK TABLE ... LOW_PRIORITY WRITE from now on has the same
priority as the usual LOCK TABLE ... WRITE.
SELECT HIGH_PRIORITY no longer trumps LOCK TABLE ... WRITE in
the wait queue.
- We do not take upgradable metadata locks on implicitly
locked tables. So if one has, say, a view v1 that uses
table t1, and issues:
LOCK TABLE v1 WRITE;
FLUSH TABLE t1; -- (or just 'FLUSH TABLES'),
an error is produced.
In order to be able to perform DDL on a table under LOCK TABLES,
the table must be locked explicitly in the LOCK TABLES list.
@ mysql-test/include/handler.inc
Adjusted test case to trigger an execution path on which bug 41110
"crash with handler command when used concurrently with alter
table" and bug 41112 "crash in mysql_ha_close_table/get_lock_data
with alter table" were originally discovered. Left old test case
which no longer triggers this execution path for the sake of
coverage.
Added test coverage for HANDLER SQL statements and type-aware
metadata locks.
Added a test for the global shared lock and HANDLER SQL.
Updated tests to take into account that the old simple deadlock
detection heuristic was replaced with a graph-based deadlock
detector.
@ mysql-test/r/debug_sync.result
Updated results (see debug_sync.test).
@ mysql-test/r/handler_innodb.result
Updated results (see handler.inc test).
@ mysql-test/r/handler_myisam.result
Updated results (see handler.inc test).
@ mysql-test/r/innodb-lock.result
Updated results (see innodb-lock.test).
@ mysql-test/r/innodb_mysql_lock.result
Updated results (see innodb_mysql_lock.test).
@ mysql-test/r/lock.result
Updated results (see lock.test).
@ mysql-test/r/lock_multi.result
Updated results (see lock_multi.test).
@ mysql-test/r/lock_sync.result
Updated results (see lock_sync.test).
@ mysql-test/r/mdl_sync.result
Updated results (see mdl_sync.test).
@ mysql-test/r/sp-threads.result
SHOW PROCESSLIST output has changed due to the fact that waiting
for LOCK TABLES WRITE now happens within the metadata locking
subsystem.
@ mysql-test/r/truncate_coverage.result
Updated results (see truncate_coverage.test).
@ mysql-test/suite/funcs_1/datadict/processlist_val.inc
SELECT FROM I_S.PROCESSLIST output has changed due to the fact that
waiting for LOCK TABLES WRITE now happens within the metadata locking
subsystem.
@ mysql-test/suite/funcs_1/r/processlist_val_no_prot.result
SELECT FROM I_S.PROCESSLIST output has changed due to the fact that
waiting for LOCK TABLES WRITE now happens within the metadata locking
subsystem.
@ mysql-test/suite/rpl/t/rpl_sp.test
Updated to a new SHOW PROCESSLIST state name.
@ mysql-test/t/debug_sync.test
Use LOCK TABLES READ instead of LOCK TABLES WRITE as the latter
no longer allows triggering the execution path that involves waiting
on the thr_lock.c lock and therefore reaching the debug sync point
covered by this test.
@ mysql-test/t/innodb-lock.test
Adjusted test case to the fact that innodb_table_locks=0 option is
no longer supported, since LOCK TABLES WRITE handles all its
conflicts within MDL subsystem.
@ mysql-test/t/innodb_mysql_lock.test
Added test for bug #37346 "innodb does not detect deadlock between
update and alter table".
@ mysql-test/t/lock.test
Added test coverage which checks the fact that we no longer support
DDL under LOCK TABLES on tables which were locked implicitly.
Adjusted existing test cases accordingly.
@ mysql-test/t/lock_multi.test
Added test for bug #46272 "MySQL 5.4.4, new MDL: unnecessary
deadlock". Adjusted other test cases to take into account the
fact that waiting for LOCK TABLES ... WRITE now happens within MDL
subsystem.
@ mysql-test/t/lock_sync.test
Since LOCK TABLES ... WRITE now takes SNRW metadata lock for
tables locked explicitly we have to implicitly lock InnoDB tables
(through view) to trigger the table-level lock conflict between
TL_WRITE and TL_WRITE_ALLOW_WRITE.
@ mysql-test/t/mdl_sync.test
Added basic test coverage for type-of-operation-aware metadata
locks. Also covered with tests some use cases involving HANDLER
statements in which a deadlock could arise.
Adjusted existing tests to take type-of-operation-aware MDL into
account.
@ mysql-test/t/multi_update.test
Updated to a new SHOW PROCESSLIST state name.
@ mysql-test/t/truncate_coverage.test
Adjusted test case after making LOCK TABLES WRITE wait until
transactions that use the table being locked have completed.
Updated to the changed name of the DEBUG_SYNC point.
@ sql/handler.cc
Global read lock functionality has been
moved into a class.
@ sql/lock.cc
Global read lock functionality has been
moved into a class.
Updated code to use the new MDL API.
@ sql/mdl.cc
Introduced new type-of-operation aware metadata locks.
To do this:
- Changed MDL_lock to use one list for waiting requests and one
list for granted requests. For each list, added a bitmap
that holds information about which lock types the list contains.
Added a helper class MDL_lock::List to manipulate the granted
and waiting lists while keeping the bitmaps in sync
with the list contents.
- Changed lock-compatibility functions to use bitmaps that
define compatibility.
- Introduced a graph-based deadlock detector inspired by
waiting_threads.c from the Maria implementation.
- Now that we have a deadlock detector, and no longer have
a global lock to protect individual lock objects, but rather
use an rw lock per object, removed redundant code for upgrade,
and the global read lock. Changed the MDL API to
no longer require the caller to acquire the global
intention exclusive lock by means of a separate method.
Removed a few more methods that became redundant.
- Removed the deadlock detection heuristic; it has been made
obsolete by the deadlock detector.
- With operation-type-aware metadata locks, the MDL subsystem has
become aware of potential conflicts between DDL and open
transactions. This made it possible to remove calls to
mysql_abort_transactions_with_shared_lock() from acquisition
paths for exclusive lock and lock upgrade. Now we can simply
wait for these transactions to complete without fear of
deadlock. Function mysql_lock_abort() has also become
unnecessary for all conflicting cases except when a DDL
conflicts with a connection that has an open HANDLER.
@ sql/mdl.h
Introduced new type-of-operation aware metadata locks.
Introduced a graph based deadlock detector and supporting
methods.
Added comments.
Got rid of redundant API calls.
Renamed m_lt_or_ha_sentinel to m_trans_sentinel,
since now it guards the global read lock as well as
LOCK TABLES and HANDLER locks.
@ sql/mysql_priv.h
Moved the global read lock functionality into a
class.
Added MYSQL_OPEN_FORCE_SHARED_MDL flag which forces
open_tables() to take MDL_SHARED on tables instead of
metadata locks specified in the parser. We use this to
allow PREPARE run concurrently in presence of
LOCK TABLES ... WRITE.
Added signature for find_table_for_mdl_upgrade().
@ sql/set_var.cc
Global read lock functionality has been
moved into a class.
@ sql/sp_head.cc
When creating TABLE_LIST elements for prelocking or
system tables set the type of request for metadata
lock according to the operation that will be performed
on the table.
@ sql/sql_base.cc
- Updated code to use the new MDL API.
- In order to avoid lock starvation we take upgradable
locks all at once. As a result, implicitly locked tables no
longer get an upgradable lock. Consequently, DDL and FLUSH
TABLES for such tables are prohibited.
find_write_locked_table() was replaced by
find_table_for_mdl_upgrade() function.
open_table() was adjusted to return a TABLE instance with an
upgradable ticket when necessary.
- We no longer wait for all locks on OT_WAIT back off
action -- only on the lock that caused the wait
conflict. Moreover, now we distinguish between cases when we
have to wait due to a conflict in MDL and due to an old version
of the table in the TDC.
- Update mysql_notify_threads_having_share_locks()
to only abort thr_lock.c waits of threads that
have open HANDLERs, since only lock conflicts with
these threads can now lead to deadlocks not detectable
by the MDL deadlock detector.
- Remove mysql_abort_transactions_with_shared_locks()
which is no longer needed.
@ sql/sql_class.cc
Global read lock functionality has been moved into a class.
Re-arranged code in THD::cleanup() to simplify assert.
@ sql/sql_class.h
Introduced a class to encapsulate global read lock
functionality.
Now sentinel in MDL subsystem guards the global read lock
as well as LOCK TABLES and HANDLER locks. Adjusted code
accordingly.
@ sql/sql_db.cc
Global read lock functionality has been moved into a class.
@ sql/sql_delete.cc
We no longer acquire upgradable metadata locks on tables
which are locked by LOCK TABLES implicitly. As a result,
TRUNCATE TABLE is no longer allowed for such tables.
Updated code to use the new MDL API.
@ sql/sql_handler.cc
Inform MDL_context about the presence of open HANDLERs.
Since HANDLERs break the MDL protocol by acquiring a table-level
lock while holding only an S metadata lock on a table, the MDL
subsystem should take special care of such contexts (now
this is the only case when mysql_lock_abort() is used).
@ sql/sql_parse.cc
Global read lock functionality has been moved into a class.
Do not take upgradable metadata locks when opening tables
for CREATE TABLE SELECT as it is not necessary and limits
concurrency.
When initializing TABLE_LIST objects before adding them
to the table list set the type of request for metadata lock
according to the operation that will be performed on the
table.
We no longer acquire upgradable metadata locks on tables
which are locked by LOCK TABLES implicitly. As a result, FLUSH
TABLES is no longer allowed for such tables.
@ sql/sql_prepare.cc
Use MYSQL_OPEN_FORCE_SHARED_MDL flag when opening
tables during PREPARE. This allows PREPARE to run
concurrently in presence of LOCK TABLES ... WRITE.
@ sql/sql_rename.cc
Global read lock functionality has been moved into a class.
@ sql/sql_show.cc
Updated code to use the new MDL API.
@ sql/sql_table.cc
Global read lock functionality has been moved into a class.
We no longer acquire upgradable metadata locks on tables
which are locked by LOCK TABLES implicitly. As a result, DROP
TABLE is no longer allowed for such tables.
Updated code to use the new MDL API.
@ sql/sql_trigger.cc
Global read lock functionality has been moved into a class.
We no longer acquire upgradable metadata locks on tables
which are locked by LOCK TABLES implicitly. As a result,
CREATE/DROP TRIGGER is no longer allowed for such tables.
Updated code to use the new MDL API.
@ sql/sql_view.cc
Global read lock functionality has been moved into a class.
Fixed the results of a wrong merge that led to misuse of the GLR API.
The CREATE VIEW statement is not a commit statement.
@ sql/table.cc
When resetting TABLE_LIST objects for PS or SP re-execution
set the type of request for metadata lock according to the
operation that will be performed on the table. Do the same
in auxiliary function initializing metadata lock requests
in a table list.
@ sql/table.h
When initializing TABLE_LIST objects set the type of request
for metadata lock according to the operation that will be
performed on the table.
@ sql/transaction.cc
Global read lock functionality has been moved into a class.
for write by another connection
The problem was that if a table was locked in one connection by
LOCK TABLES ... WRITE, REPAIR TABLE or OPTIMIZE TABLE, SHOW CREATE
TABLE from another connection would be blocked. As SHOW CREATE TABLE
only reads metadata about the table, such blocking is not needed.
The problem was that when SHOW CREATE TABLE tried to get a metadata
lock on the table in order to open it, it used the wrong type of
metadata lock request. It used MDL_SHARED_READ which is used when
the intent is to read both table metadata and table data. Instead
it should have used MDL_SHARED_HIGH_PRIO which signifies an intent
to only read metadata.
This patch fixes the problem by making sure SHOW CREATE TABLE uses
the MDL_SHARED_HIGH_PRIO metadata lock request type when trying to
open the table. The patch also fixes a similar problem with the
mysql_list_fields API call.
Test case added to show_check.test.
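For illustration only (hypothetical table), the scenario that no longer
blocks:

  -- connection 1:
  LOCK TABLES t1 WRITE;
  -- connection 2:
  SHOW CREATE TABLE t1;   -- previously blocked; now only reads metadata and proceeds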
Allow stored procedure variables in LIMIT clause.
Only allow variables of INTEGER types.
Handle negative values by means of an implicit cast to UNSIGNED
(similarly to prepared statement placeholders).
Add tests.
Make sure replication works by not doing NAME_CONST substitution
for variables in LIMIT clause.
Add replication tests.
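A minimal sketch of the now-supported usage (procedure and table names are
hypothetical; DELIMITER handling omitted for brevity):

  CREATE PROCEDURE p1(p_limit INT)
  BEGIN
    -- an SP variable of an INTEGER type in LIMIT; negative values are
    -- implicitly cast to UNSIGNED, as with prepared statement placeholders
    SELECT * FROM t1 LIMIT p_limit;
  END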
ChangeSet@1.2703, 2007-12-07 09:35:28-05:00, cmiller@zippy.cornsilk.net +40 -0
Bug#13174: SHA2 function
Patch contributed by Bill Karwin; paper unnumbered CLA in Seattle.
Implement SHA2 functions.
Chad added code to make it work with YaSSL. Also, he removed the
(probable) bug of embedded server never using SSL-dependent
functions. (libmysqld/Makefile.am didn't read ANY autoconf defs.)
Function specification:
SHA2( string cleartext, integer hash_length )
-> string hash, or NULL
where hash_length is one of 224, 256, 384, or 512. If either argument is
NULL or the length is unsupported, then the result is NULL. The
resulting string is always the length of the hash_length parameter
or is NULL.
Include the canonical hash examples from the NIST in the test
results.
---
Polish and address concerns of reviewers.
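Usage per the specification above (the 'abc' digest is the canonical NIST
SHA-256 test vector):

  SELECT SHA2('abc', 256);
  -- -> ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
  SELECT SHA2('abc', 123);   -- unsupported length -> NULL
  SELECT SHA2(NULL, 256);    -- NULL input -> NULL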
Problem: Segmentation fault in add_group_and_distinct_keys() when accessing
a field of what is assumed to be an Item_field object.
Cause: In the case of views, the item added to the list by
is_indexed_agg_distinct() was not of type Item_field, but Item_ref.
Resolution: Add the real Item_field object, the one referred to by the
Item_ref object, to the list instead.
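For illustration only, a query of the general shape that reaches this code
(view, table, and column names are hypothetical):

  CREATE VIEW v1 AS SELECT a, b FROM t1;
  SELECT COUNT(DISTINCT a) FROM v1 GROUP BY b;
  -- the aggregated column arrives as an Item_ref, not an Item_field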
on LOAD DATA
Two problems:
1. LOAD DATA was not checking for SQL errors and was sending an OK
packet even when errors had already been reported. Fixed to check for
SQL errors in addition to the error conditions already detected.
2. There was an over-ambitious assert() on the server to check that the
protocol is always followed by the client. This can cause crashes on
debug servers when clients do not complete the protocol exchange for
some reason (e.g. the --send command in mysqltest). Fixed by keeping the
assert only on the client side, since the server always completes the
protocol exchange.
The failing assertion was written with the assumption that a NULL
string can never be passed to my_strtod(). However, an empty string
may be passed under some circumstances by passing str == NULL and
*end == NULL.
Fixed the assertion to take the above case into account.
We should disable const subselect item evaluation because
subselect transformation does not happen in view_prepare_mode
and thus val_...() methods cannot be called.
The problem is that we cannot use make_cond_for_table().
This function relies on the used_tables() condition,
which is not set properly for subqueries.
As a result, the subquery is not filtered out.
The fix is to use the remove_eq_conds() function instead
of make_cond_for_table(). The 'remove_eq_conds()'
algorithm relies on the const_item() value, which allows
it to handle subqueries in the right way.
Conflicts:
Text conflict in mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result
Text conflict in sql/log.cc
Text conflict in sql/set_var.cc
Text conflict in sql/sql_class.cc
Procedure, while DECIMAL works
Selecting the CONCAT(...<SP variable>...) result into
a user variable may return wrong data.
Item_func_concat::val_str contains a number of memory
allocation-saving tricks. One of them concatenates
strings in place, inserting the value of one string
at the beginning of the other string. However,
this trick didn't account for strings that point
to the same data buffer: this is possible when
a CONCAT() parameter is a stored procedure variable -
Item_sp_variable::val_str() uses the intermediate
Item_sp_variable::str_value field, where it may
store a reference to an external buffer.
The Item_func_concat::val_str function has been
modified to take into account val_str functions
(such as Item_sp_variable::val_str) that return
a pointer to an internal Item member variable
that may reference an externally provided buffer.
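For illustration only (hypothetical procedure; DELIMITER handling omitted),
the pattern described above:

  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v VARCHAR(10) DEFAULT 'xyz';
    -- SP variable inside CONCAT(), selected into a user variable
    SELECT CONCAT('<', v, '>') INTO @result;
  END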
Conflicts:
Text conflict in mysql-test/r/func_str.result
Text conflict in mysql-test/suite/sys_vars/r/myisam_sort_buffer_size_basic_32.result
Text conflict in mysql-test/suite/sys_vars/r/myisam_sort_buffer_size_basic_64.result
Text conflict in mysql-test/t/func_str.test
Text conflict in sql/mysqld.cc
Text conflict in sql/protocol.cc
Text conflict in storage/myisam/mi_open.c
on index
The 'my_decimal' class has two members which can be used to access the
value. The member variable buf (inherited from the parent class decimal_t)
is set to the member variable buffer so that both point to the same value.
Item_copy_decimal::copy() uses memcpy to clone 'my_decimal'. The member
buffer is declared as an array, and memcpy copies the values
of the array, but the inherited member buf, which should be pointing at
the beginning of the array 'buffer', keeps pointing to the beginning of
the buffer in the original object (the one being cloned). Further updates
on 'my_decimal' update only the inherited member 'buf' but leave
buffer unchanged.
Later, when the new object (which now holds an inconsistent value) is cloned
again using the proper cloning function 'my_decimal2decimal', the buf pointer
is fixed, resulting in the loss of the current value.
Using my_decimal2decimal instead of memcpy in Item_copy_decimal::copy()
fixed this problem.