Bug#12637786 was fixed with rb:692 by Marko, but that fix has a remaining
bug. It added this assert:
    ut_ad(ind_field->prefix_len);
before a section of code that assumes there is a prefix_len.
The patch replaced code that explicitly avoided this case with a check for
prefix_len. It turns out that the purge thread can reach that assert
without a prefix_len because it does not use a row_ext_t*.
When UNIV_DEBUG is not defined, the effect of this is that the purge thread
sets dfield->len to zero and then cannot find the entry in the index to
purge. So secondary index entries remain unpurged.
This patch removes the assert. Instead, it wraps the section of code that
assumes a prefix_len in
    if (ind_field->prefix_len) {...}
This is the way the patch I provided to Marko did it.
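A minimal sketch of the guarded section (the function names are real InnoDB
ones, but the arguments are abbreviated and the actual code in row0row.c
differs):

    /* Shrink the field to the index prefix only when the index really
       defines one.  The purge thread reaches this code without a
       row_ext_t*, so ind_field->prefix_len may be 0 here; asserting on
       it (or unconditionally truncating) is wrong. */
    if (ind_field->prefix_len) {
        len = dtype_get_at_most_n_mbchars(
            /* ... column charset info ... */
            ind_field->prefix_len, len, dfield_get_data(dfield));
        dfield_set_len(dfield, len);
    }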
The test case is simply modified to do a sleep(10) in order to give the
purge thread a chance to run. Without the code change to row0row.c, this
modified test case will assert if InnoDB was compiled with UNIV_DEBUG.
I tried sleep(5), but it did not always assert.
OLD VALUE OF INPUT PARAMETER.
The user-visible problem was that CASE-control-flow function
(not CASE-statement) misbehaved in stored routines under some
circumstances. The problem resulted in a crash or wrong data
returned. The error happened when expressions in CASE-function
were not of the same character set.
A CASE-function should return values of the same character set
for all branches. Internally, that means a new Item-instance
for the CONVERT(... USING <some charset>)-function is added
to the item tree when needed. The problem was that such changes
were not properly recorded using THD::change_item_tree(),
so dangling pointers remained in the item tree after
THD::rollback_item_tree_changes(), which led to undefined
behavior (i.e. crash / wrong data) for subsequent executions of
the stored routine.
This bug was introduced by a patch for Bug 11753363
(44793 - CHARACTER SETS: CASE CLAUSE, UCS2 OR UTF32, FAILURE).
The fixed function is Item_func_case::fix_length_and_dec().
New CONVERT-items are added in agg_item_set_converter(),
which calls THD::change_item_tree().
The problem was that an intermediate array was passed
to agg_item_set_converter(). Thus, THD::change_item_tree() there
was called on intermediate objects.
Note: those intermediate objects are allocated on THD's
memory root, so it's Ok to put them into "changed item lists".
The fix is to track changes on the correct objects.
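A hedged illustration of the invariant (THD::change_item_tree() and
Item_func_conv_charset are real server entities; the surrounding variables
are made up for the sketch):

    /* The address recorded by change_item_tree() must be the slot that
       actually lives in the item tree.  If an intermediate copy of the
       arguments is passed, rollback_item_tree_changes() later restores
       the copy, and the real tree keeps a dangling pointer. */
    Item *conv= new Item_func_conv_charset(args[i], target_charset);
    thd->change_item_tree(&args[i], conv);  /* &args[i]: real tree slot */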
row_build_index_entry(): In innodb_file_format=Barracuda
(ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED), a secondary index on a
full column can refer to a field that is stored off-page in the
clustered index record. Take that into account.
rb:692 approved by Jimmy Yang
BOGUS "THE TABLE MYSQL.PROC IS MISSING,..."
There was a race condition between loading a stored routine
(function/procedure/trigger) specified by fully qualified name
SCHEMA_NAME.PROC_NAME and dropping the stored routine database.
The problem was that there was a window for a race condition when one server
thread tried to load a stored routine being executed while another thread
tried to drop the stored routine's schema.
This race condition window exists in the implementation of
mysql_change_db(), which is called by db_load_routine() while loading a
stored routine into the cache. mysql_change_db() calls
check_db_dir_existence(), which might fail because the specified database
was dropped by a concurrently executing DROP SCHEMA statement.
db_load_routine() calls mysql_change_db() with the flag 'force_switch' set
to 'true', so when the referenced database is not found, my_error() is not
called and mysql_change_db() returns OK.
This shadows the schema-opening error in db_load_routine(), whose
subsequent attempt to parse the stored routine then fails.
An error is thus returned to sp_cache_routines_and_add_tables_aux(), but
since my_error() was never called during error generation, THD::main_da
was not set, and the generic "mysql.proc table corrupt" error is reported
from sp_cache_routines_and_add_tables_aux().
The fix is to install an error handler inside db_load_routine() for
the mysql_change_db() call, and to check afterwards whether ER_BAD_DB_ERROR
was caught.
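A hedged sketch of the error-handler idea (Internal_error_handler and
push/pop_internal_handler() are the server's interfaces, but the class
below, its members, and the abbreviated handle_condition() signature are
illustrative only):

    class Bad_db_error_handler : public Internal_error_handler
    {
    public:
      Bad_db_error_handler() : m_error_caught(false) {}
      /* signature abbreviated for the sketch */
      bool handle_condition(THD *thd, uint sql_errno /* , ... */)
      {
        if (sql_errno == ER_BAD_DB_ERROR)
        {
          m_error_caught= true;
          return true;    /* swallow: do not report this condition */
        }
        return false;     /* anything else is handled as usual */
      }
      bool error_caught() const { return m_error_caught; }
    private:
      bool m_error_caught;
    };

    /* in db_load_routine(): */
    Bad_db_error_handler db_not_found_handler;
    thd->push_internal_handler(&db_not_found_handler);
    /* ... switch to the routine's database ... */
    thd->pop_internal_handler();
    if (db_not_found_handler.error_caught())
      /* report ER_BAD_DB_ERROR instead of "mysql.proc is missing" */;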
TO POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
ALTER TABLE MODIFY/CHANGE ... FIRST did nothing except renaming
columns if the new version of the table had exactly the same
structure as the old one (i.e. as a result of such a statement, the
names of columns changed their order as specified but the data in
the columns didn't). The same thing happened for ALTER TABLE DROP
COLUMN/ADD COLUMN statements which were supposed to produce a new
version of the table with exactly the same structure as the old one.
I.e. in the latter case the result was the same as if the old column
had been renamed instead of dropped, and a new column with the
default value had been created.
Both these problems were caused by the fact that the ALTER TABLE
implementation incorrectly interpreted both situations as a
simple renaming of columns and assumed that the in-place ALTER TABLE
algorithm could be used for them.
This patch fixes the problem by ensuring that whenever some
column is moved to the first position or some column is dropped,
the default ALTER TABLE algorithm involving table copying is
always used. This is achieved by detecting such situations in
mysql_prepare_alter_table() and setting Alter_info::change_level
to ALTER_TABLE_DATA_CHANGED for them.
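A hedged sketch of the detection (Alter_info::change_level and
ALTER_TABLE_DATA_CHANGED are named in this message; the two condition
flags are hypothetical placeholders for the actual checks in
mysql_prepare_alter_table()):

    /* Force the copying ALTER TABLE algorithm whenever the column
       layout really changes, even if the resulting definition looks
       identical to the old one. */
    if (column_was_dropped || column_moved_to_first)
      alter_info->change_level= ALTER_TABLE_DATA_CHANGED;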
SYNTAX TRIGGERS IN ANY WAY
A table with triggers using deprecated (5.0-only) syntax became
unavailable for any DML and DDL after an upgrade to version 5.1 of the
server. An attempt to execute any statement on such a table resulted
in a parsing error being reported. Since this included DROP TRIGGER
and DROP TABLE statements (actually, the latter was allowed but did
not function properly for such tables), it was impossible to fix the
problem without manual operations on the .TRG and .TRN files in the
data directory.
The problem was that failure to parse the trigger body (due to 5.0-only
syntax) when opening the trigger file for a table prevented the table
from being opened. This made all operations on the table impossible
(except DROP TABLE, which due to a peculiarity in its implementation
dropped the table but left the trigger files around).
This patch solves the problem by silencing the error which occurs when
we parse the trigger body during table open. The error message is
preserved for future use and the table is marked as having a broken
trigger. We also try to analyze the parse tree to recover the trigger
name, which will be needed in order to drop the broken trigger. DML
statements which invoke triggers on a table marked as having a broken
trigger are prohibited and emit the saved error message. The same
happens for DDL which changes triggers, except DROP TRIGGER and DROP
TABLE, which try their best to do what was requested. The table is no
longer marked as having a broken trigger once the last such trigger
is dropped.
THE EVENT STATUS.
Any ALTER EVENT statement on a disabled event re-enabled it
(unless the ALTER EVENT statement explicitly disabled the event).
The problem was that during processing of an ALTER EVENT statement
the value of the status field was overwritten unconditionally even
if no new value was specified explicitly. As a consequence the field
was set to the default value for status, which corresponds to ENABLE.
The solution is to check whether the status field was explicitly
specified in the ALTER EVENT statement before assigning a new value
to it.
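A hedged sketch of the guard (the status_changed flag is a hypothetical
placeholder for however the parser records that ALTER EVENT mentioned
the status):

    /* Only touch the status field when ALTER EVENT actually specified
       ENABLE/DISABLE; otherwise keep the stored value. */
    if (parse_data->status_changed)
      event->status= parse_data->status;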
Problem: when invalid data was inserted into indexed GEOMETRY fields
(e.g. a NULL value for a NOT NULL field), MyISAM reported
"ERROR 126 (HY000): Incorrect key file for table; try to repair it"
due to misuse of the key deletion function.
Fix: always use R-tree key functions for R-tree based indexes
and B-tree key functions for B-tree based indexes.
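A hedged sketch of the dispatch (rtree_delete(), _mi_ck_delete(), and
HA_SPATIAL exist in MyISAM, but the real call sites differ from this
simplified form):

    /* Pick the key-removal routine that matches the index structure;
       calling the B-tree routine on an R-tree index corrupts the key
       file and triggers error 126. */
    if (keyinfo->flag & HA_SPATIAL)
      error= rtree_delete(info, keynr, key, key_length);
    else
      error= _mi_ck_delete(info, keynr, key, key_length);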
FAIL IN EMBEDDED SERVER
FreeBSD 64-bit needs FP_X_DNML in the fpsetmask() mask to prevent
exceptions from propagating into mysql (as a threaded application).
However, fpsetmask() itself is deprecated in favor of fedisableexcept().
1. Fixed the #ifdef to check for FP_X_DNML instead of i386.
2. Added a configure.in check for fedisableexcept(); if present,
this function is called instead of fpsetmask().
No need for new tests, as the existing tests cover this already.
Removed the affected tests from the experimental list.
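A hedged sketch of the resulting setup code (fedisableexcept()/fpsetmask()
and the FP_X_*/FE_ALL_EXCEPT constants are real; the HAVE_* macro names
follow the usual autoconf convention and are assumptions here):

    #if defined(HAVE_FEDISABLEEXCEPT)
      /* modern <fenv.h> interface: stop all FP exceptions from trapping */
      fedisableexcept(FE_ALL_EXCEPT);
    #elif defined(HAVE_FPSETMASK) && defined(FP_X_DNML)
      /* legacy <ieeefp.h> interface: mask every exception, including
         the denormal-operand one (FP_X_DNML) that bit FreeBSD 64-bit */
      fpsetmask(~(FP_X_INV | FP_X_DNML | FP_X_OFL | FP_X_UFL |
                  FP_X_DZ  | FP_X_IMP));
    #endif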
The check for an empty password in the user account was checking the
wrong field. Fixed to check the proper password hash.
Test case added.
Fixed the native_password and old_password plugins that suffered from
the same problem.
Disambiguated the auth_string ACL_USER member: previously it was used
for both the password and the authentication string (depending on the
plugin). Now fixed to contain either the authentication string
specified or an empty string.
SECONDARY INDEX IN INNODB
This is a follow-up patch.
This patch moves part of the new test coverage to a test file that is
only run on debug builds, since it uses debug-only features and
therefore broke the test case on release builds.
SECONDARY INDEX IN INNODB
The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update the .FRM file. If this happened, InnoDB
and the server got out of sync with regards to which indexes
actually existed. Therefore the patch for Bug#11815600 again
disabled concurrent reads.
This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after lock upgrade, which prevents the original
regression.
In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(). The former creates indexes without
making them visible and the latter commits (i.e. makes visible)
new indexes or reverts the changes.
Large parts of this patch were written by Marko Mäkelä.
Test case added to innodb_mysql_lock.test.
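A hedged sketch of the two-phase flow from the caller's side (function
names from this message; argument lists abbreviated and error paths
elided, so this is illustrative rather than the actual server code):

    /* Phase 1: build the index while concurrent reads are allowed;
       the new index is not yet visible to queries. */
    error= handler_add_index(/* table, key info, ... */);

    /* ... upgrade the metadata lock, update the .FRM file ... */

    /* Phase 2: after the lock upgrade, either publish the index or
       revert the change if the ALTER TABLE failed. */
    error= file->final_add_index(/* ..., commit= */ !error);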
With this change, the index prefix column length limit is lifted from
767 bytes to 3072 bytes if "innodb_large_prefix" is set to "true".
rb://603 approved by Marko
The test case problem stemmed from the fact that a debug sync
signal is a global variable that persists until overwritten
by a new signal. This means that if two different signals
are raised in sequence, a thread waiting for the first signal
might miss it if the second signal sets the global variable
before the thread wakes up.
The solution is to deliver a subsequent signal only after the
waiting thread has received it.
will create multiple running events.
A CREATE EVENT IF NOT EXISTS on an event that existed and was enabled
caused multiple instances of the event to run. Disabling the event
didn't help. If the event was dropped, it stopped running, but when
created again, multiple instances of the event were still running. The
only way out of this situation was to restart the server.
The problem was that Event_db_repository::create_event() didn't return
enough information to discriminate between the situation when the event
didn't exist and was created and the situation when the event did exist
and was not created (but a warning was emitted). As a result, in the
latter case the event was added to the in-memory queue of events a
second time, which led to unwarranted multiple executions of the same
event.
The solution is to add an out-parameter to the
Event_db_repository::create_event() method which signals that the event
was not created because it already exists, so that it is not added to
the in-memory queue again.
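A hedged sketch of the changed call (the out-parameter name
event_already_exists is an assumption; argument lists abbreviated):

    bool event_already_exists;
    if (db_repository->create_event(/* thd, parse_data, ... */
                                    &event_already_exists))
      return true;                       /* creation failed outright */
    if (!event_already_exists)
      event_queue->create_event(/* ... */);  /* queue only new events */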
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
An attempt to update an InnoDB temporary table under LOCK TABLES
led to an assertion failure in both debug and production builds
if this temporary table was explicitly locked for READ. The
same scenario works fine for MyISAM temporary tables.
The assertion failure was caused by a discrepancy between the lock
that was requested on the rows of the temporary table at LOCK TABLES
time and the lock requested by the update operation. Since the SQL
layer requested a read lock at LOCK TABLES time, the InnoDB engine
assumed that upcoming statements executed under LOCK TABLES would
only read the table and therefore should acquire only an S-lock.
An update operation broke this assumption by requesting an X-lock.
Possible approaches to fixing this problem are:
1) Skip locking of temporary tables, as locking doesn't make any
sense for connection-local objects.
2) Prohibit changing a temporary table locked by LOCK TABLES ...
READ.
Unfortunately both of these approaches have drawbacks which make
them unviable for stable versions of the server.
So this patch takes another approach and changes the code in such a
way that LOCK TABLES for a temporary table will always request a
write lock. In the 5.5 version of this patch the switch from read
lock to write lock is done on the SQL layer.
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
An attempt to update an InnoDB temporary table under LOCK TABLES
led to an assertion failure in both debug and production builds
if this temporary table was explicitly locked for READ. The
same scenario works fine for MyISAM temporary tables.
The assertion failure was caused by a discrepancy between the lock
that was requested on the rows of the temporary table at LOCK TABLES
time and the lock requested by the update operation. Since the SQL
layer requested a read lock at LOCK TABLES time, the InnoDB engine
assumed that upcoming statements executed under LOCK TABLES would
only read the table and therefore should acquire only an S-lock.
An update operation broke this assumption by requesting an X-lock.
Possible approaches to fixing this problem are:
1) Skip locking of temporary tables, as locking doesn't make any
sense for connection-local objects.
2) Prohibit changing a temporary table locked by LOCK TABLES ...
READ.
Unfortunately both of these approaches have drawbacks which make
them unviable for stable versions of the server.
So this patch takes another approach and changes the code in such a
way that LOCK TABLES for a temporary table will always request a
write lock. In the 5.1 version of this patch the switch from read
lock to write lock is done inside InnoDB's handler methods, as doing
it on the SQL layer causes compatibility troubles with FLUSH TABLES
WITH READ LOCK.
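A hedged sketch of the 5.1-style switch (the placement inside InnoDB's
store_lock handling and the temporary-table flag are assumptions;
TL_READ/TL_WRITE and thd_in_lock_tables() are real server interfaces):

    /* Under LOCK TABLES, escalate read locks on temporary tables to
       write locks so a later UPDATE cannot contradict the lock mode
       InnoDB recorded at LOCK TABLES time. */
    if (lock_type == TL_READ &&
        table_is_temporary &&           /* hypothetical flag */
        thd_in_lock_tables(thd))
      lock_type= TL_WRITE;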
Problem: MYSQL_BIN_LOG::reset_logs acquires mutexes in the wrong order.
The correct order is first LOCK_thread_count and then LOCK_log; this
function does it the other way around. This leads to a deadlock when
run in parallel with a thread that takes the two locks in the correct
order (for example, a thread that is disconnecting).
Fix: change the order of the locks in MYSQL_BIN_LOG::reset_logs:
first LOCK_thread_count and then LOCK_log.
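A hedged illustration of the ordering rule (mysql_mutex_lock() and the
two mutex names are from the server; the body is simplified):

    mysql_mutex_lock(&LOCK_thread_count);  /* outer lock, taken first */
    mysql_mutex_lock(&LOCK_log);           /* inner lock, taken second */

    /* ... delete and re-create the binary logs ... */

    mysql_mutex_unlock(&LOCK_log);
    mysql_mutex_unlock(&LOCK_thread_count);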
The assertion happens due to a missing NULL value check in the
Item_func_round::fix_length_and_dec() function.
The fix: added a NULL value check for the second parameter.
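A hedged sketch of the added guard (not the exact patch; args[] is the
standard Item_func argument array):

    /* ROUND(x, d): when d is a NULL constant, val_int() returns 0 with
       null_value set; fall back to zero decimals instead of feeding an
       unchecked value to the length computation. */
    longlong decimals_par= args[1]->val_int();
    if (args[1]->null_value)
      decimals_par= 0;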