The task is to
(a) add support for comments on indexes and
(b) increase the maximum length of column, table and the new index comments.
The patch is committed on behalf of Yoshinori Matsunobu (Yoshinori.Matsunobu@Sun.COM).
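For illustration, a minimal sketch of the resulting syntax (table, column and index names are made up):

  CREATE TABLE t1 (
    a INT COMMENT 'column comment'
  ) COMMENT = 'table comment';
  CREATE INDEX i_a ON t1 (a) COMMENT 'index comment';
  SHOW CREATE TABLE t1;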
When EXPLAIN EXTENDED tries to print column names, it checks whether the
referenced table is CONST (in which case, the column's value rather than
its name will be printed). If no proper table is referenced (i.e. because
a derived table was used that has since gone out of scope), this will fail
spectacularly.
This ports an equivalent of the fix for Bug 43354.
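A hedged sketch of the statement shape involved; whether the failure actually triggers depends on the optimizer treating the underlying table as const, so this only illustrates the kind of query, not a guaranteed repro:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  EXPLAIN EXTENDED SELECT 1 FROM (SELECT a FROM t1) AS d WHERE d.a = 1;
  SHOW WARNINGS;   -- the rewritten query is printed here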
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now the code distinguishes between IGNORE and ERROR_FOR_NULL and saves/restores
the check-field options.
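A hedged illustration of the affected statement class (the table and trigger are made up); with IGNORE, assigning NULL to a NOT NULL column should be converted to the implicit default with a warning, and the presence of a trigger should not change that:

  CREATE TABLE t1 (a INT NOT NULL, b INT);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW SET NEW.b = 1;
  INSERT INTO t1 VALUES (1, 0);
  UPDATE IGNORE t1 SET a = NULL;   -- warning; a becomes 0, not an error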
an INFORMATION_SCHEMA table
When a prepared statement using a merged view containing an information
schema table was executed, a metadata lock of the view was not taken.
This meant that it was possible for concurrent view DDL to execute,
thereby breaking the binary log. For example, it was possible
for DROP VIEW to appear in the binary log before a query using the view.
This also happened when a statement in a stored routine was executed a
second time.
For such views, the information schema table is merged into the view
during the prepare phase (or first execution of a statement in a routine).
The problem was that we took a short cut and were not executing full-blown
view opening during subsequent executions of the statement. As a result,
a metadata lock on the view was not taken to protect the view definition.
This patch resolves the problem by making sure a metadata lock is taken
for views even after information schema tables are merged into them.
Test case added to view.test.
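A hedged sketch of the scenario (the view name is illustrative):

  CREATE VIEW v1 AS SELECT table_name FROM information_schema.tables;
  PREPARE stmt FROM 'SELECT * FROM v1';
  EXECUTE stmt;   -- the I_S table is merged into the view here
  EXECUTE stmt;   -- must still take a metadata lock on v1, so a concurrent
                  -- DROP VIEW v1 cannot end up before it in the binary log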
Table corruption happened during table reads in the ha_tina::find_current_row() function:
the Field::store() method returns an error (true) if the stored value is 0.
The fix:
added a special case for the ENUM type which correctly processes a 0 value.
Additional fix:
INSERT...(default) and INSERT...() now have the same behaviour for the ENUM type.
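A hedged illustration of the additional fix; the table definition is an assumption made for the example:

  CREATE TABLE t1 (e ENUM('a','b') NOT NULL);
  INSERT INTO t1 (e) VALUES (DEFAULT);
  INSERT INTO t1 () VALUES ();
  SELECT e FROM t1;   -- both rows should now contain the same value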
The problem was that a shared InnoDB row lock was taken when executing
SELECT statements inside a stored function as a part of a transaction
using REPEATABLE READ. This prevented other transactions from updating
the row.
InnoDB uses multi-versioning and consistent nonlocking reads. SELECTs
should therefore not acquire locks and block other transactions
wishing to do updates.
This bug is no longer repeatable with the changes introduced in the scope
of metadata locking.
Test case added to innodb_mysql.test.
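A hedged sketch of the scenario (names and data are invented); the SELECT inside the function should use a consistent non-locking read, so the second session's UPDATE should not block:

  CREATE TABLE t1 (id INT PRIMARY KEY, val INT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 10);
  CREATE FUNCTION f1() RETURNS INT READS SQL DATA
    RETURN (SELECT val FROM t1 WHERE id = 1);

  -- session 1
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  START TRANSACTION;
  SELECT f1();

  -- session 2: should not wait for a row lock held by session 1
  UPDATE t1 SET val = 20 WHERE id = 1;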
WL#5154 was a task for formally deprecating and removing items that
were mentioned in the manual as having been deprecated since MySQL
4.1 or 5.0, but that had never been removed.
Since WL#5154 was created, examination of mysqld.cc, mysql.cc, and
mysqldump.c reveals additional deprecations not mentioned in the
manual. (In some cases, the items are simply not mentioned in the
5.1+ manuals.)
This is a follow-on task to deprecate and remove these additional
items.
The deprecation happened in MySQL 5.1, and the options/variables
are now removed from the code.
The problem is that during temporary table creation uneven bits
are not taken into account for hidden fields. This leads to an incorrect
calculation and allocation of the null-bytes size for the table record, and
if a grouped value is NULL we set the wrong bit for this value (see end_update()).
Fixed by adding a separate calculation of uneven bits for hidden fields.
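A hedged illustration, assuming the uneven bits come from a nullable BIT column that becomes a hidden field in the GROUP BY temporary table (the table is made up):

  CREATE TABLE t1 (b BIT(3), i INT);
  INSERT INTO t1 VALUES (NULL, 1), (b'101', 2), (b'101', 3);
  -- b is grouped on but not selected, so it ends up as a hidden
  -- field of the temporary table used for grouping
  SELECT SQL_BUFFER_RESULT COUNT(*) FROM t1 GROUP BY b;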
If a prepared statement used both a MyISAMMRG table and a stored
function or trigger, execution could fail with "No such table"
error or crash.
The error would come from a failure of the MyISAMMRG engine
to meet the expectations of the prelocking algorithm, in
particular to keep the lex->query_tables_own_last pointer
in sync with the lex->query_tables_last pointer and the contents
of lex->query_tables. When adding merge children, the merge
engine would extend the table list. Then, when adding
prelocked tables, the prelocking algorithm would use a pointer
to the last merge child to assign to lex->query_tables_own_last.
Then, when merge children were removed at the end of
open_tables(), lex->query_tables_own_last
was not updated, and kept pointing
to a removed merge child.
The fix ensures that query_tables_own_last is always in
sync with lex->query_tables_last.
This is a regression introduced by WL#4144 and present only
in next-4284 tree and 6.0.
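A hedged reproduction sketch (object names are invented): a prepared statement that uses both a MERGE table and a stored function, executed twice:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;
  PREPARE stmt FROM 'SELECT * FROM m1 WHERE a = f1()';
  EXECUTE stmt;
  EXECUTE stmt;   -- used to fail with "No such table" or crash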
TEMPORARY + HANDLER + LOCK + SP".
Server crashed when one:
1) Opened a HANDLER or acquired the global read lock,
2) then locked one or several temporary tables with a
LOCK TABLES statement (but no base tables),
3) then issued any statement causing a commit (explicit
or implicit), and
4) issued a statement which should have closed the HANDLER
or released the global read lock.
The problem was that when entering LOCK TABLES mode in the
scenario described above we incorrectly set the transactional
MDL sentinel to zero. As a result, during commit all metadata
locks were released (including the lock for the open HANDLER or
the global metadata shared lock). The subsequent attempt to
release the same metadata lock a second time, which happened
during HANDLER CLOSE or during release of the global read lock,
caused the crash.
This patch fixes the problem by changing MDL_context's
set_trans_sentinel() method to set the sentinel to the correct
value (it should point to the most recent ticket).
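A hedged sketch following the steps above (whether these exact statements trigger the crash depends on the tree; treat it as an outline, not a verified repro):

  CREATE TABLE t1 (a INT);
  CREATE TEMPORARY TABLE tmp1 (a INT);
  HANDLER t1 OPEN;          -- step 1: open HANDLER
  LOCK TABLES tmp1 WRITE;   -- step 2: lock only a temporary table
  COMMIT;                   -- step 3: explicit commit
  HANDLER t1 CLOSE;         -- step 4: used to crash here
  UNLOCK TABLES;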
DDL workload".
When a RENAME TABLE or LOCK TABLES ... WRITE statement which
mentioned the same table several times was aborted during
the process of acquiring metadata locks (due to a discovered
deadlock or because of a KILL statement), the server
might have crashed.
When the attempt to acquire all requested locks failed, we
went through the list of requests and released, one by one,
the locks we had managed to acquire by that moment. Since
in the scenario described above the list of requests contained
duplicates, this led to releasing the same ticket twice and,
as a result, a crash.
This patch solves the problem by employing a different approach
to releasing locks when the acquisition of all requested locks
fails: we now take an MDL savepoint before starting to acquire
locks and simply roll back to it if things go bad.
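Statements of the affected shape, for illustration (table names are arbitrary):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t3 (a INT);
  -- the same table name appears more than once in a single statement
  RENAME TABLE t1 TO t2, t2 TO t1;
  LOCK TABLES t3 WRITE, t3 AS a1 WRITE;
  UNLOCK TABLES;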
This bug is just one facet of stored routines not being able to
detect changes in meta-data (WL#4179). This particular problem
can be triggered within a single session due to the improper
management of the pre-locking list if the view is expanded after
the pre-locking list is calculated.
Since the overall solution for the meta-data detection issue is
planned for a later release, for now a workaround is used to
fix this particular aspect that only involves a single session.
The workaround is to flush the thread-local stored routine cache
every time a view is created or modified, causing locally cached
routines to be re-evaluated upon invocation.
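A hedged single-session sketch of the behaviour this enables (all names are invented):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  CREATE PROCEDURE p1() SELECT * FROM v1;
  CALL p1();                                      -- p1 is cached against the old view
  CREATE OR REPLACE VIEW v1 AS SELECT a FROM t2;
  CALL p1();                                      -- cache flushed; the new definition is used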
function with distinct.
Loose index scan is used to find MIN/MAX values using an appropriate index and
thus avoid grouping. For each found row it updates non-aggregated
fields with values from the row containing the found MIN/MAX value.
Without loose index scan, non-aggregated fields are copied by the end_send_group
function. With loose index scan there is no need for end_send_group and
end_send is used instead. Non-aggregated fields still need to be copied and
this was wrongly implemented in QUICK_GROUP_MIN_MAX_SELECT::get_next.
WL#3220 added a case where loose index scan can be used with end_send_group to
optimize the calculation of aggregate functions with distinct. In this case
the row found by QUICK_GROUP_MIN_MAX_SELECT::get_next might belong to the next
group, and copying it produces a wrong result.
The update of non-aggregated fields is moved from
QUICK_GROUP_MIN_MAX_SELECT::get_next to the end_send function.
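A hedged example of the query shapes involved (index and data are assumptions): the first is the WL#3220 distinct-aggregate case, the second the MIN/MAX case where non-aggregated fields are copied:

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  INSERT INTO t1 VALUES (1,1),(1,2),(2,1),(2,1);
  SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a;
  SELECT a, MIN(b), MAX(b) FROM t1 GROUP BY a;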
failed in enter_locked_tables_mode".
The server was aborted due to an assertion failure when one tried to
execute a statement requiring prelocking (i.e. firing triggers
or using stored functions) while having open HANDLERs.
The problem was that the THD::enter_locked_tables_mode() method,
which was called at the beginning of execution of a prelocked
statement, assumed there were no open HANDLERs. It had to do
so because the corresponding THD::leave_locked_tables_mode()
method was unable to properly restore the MDL sentinel when
leaving LOCK TABLES/prelocked mode in the presence of open
HANDLERs.
This patch solves the problem by changing the latter method
to properly restore the MDL sentinel, thus removing the need for
this assumption. As a side effect, it lifts an unjustified
limitation by allowing HANDLERs to be kept open when entering
LOCK TABLES mode.
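A hedged sketch of the now-permitted combination (names are made up):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE FUNCTION f1() RETURNS INT READS SQL DATA
    RETURN (SELECT COUNT(*) FROM t2);
  HANDLER t1 OPEN;
  SELECT f1();       -- prelocked statement with an open HANDLER;
                     -- previously hit the assertion
  HANDLER t1 CLOSE;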
causing crashes!
Adding a SPATIAL INDEX on a non-geometrical column caused a
segmentation fault when the table was subsequently
inserted into.
A check was added in mysql_prepare_create_table() to explicitly
verify whether non-geometrical columns are used in a
spatial index, and to raise an error if so.
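For illustration, a statement of the now-rejected form (the column is arbitrary):

  CREATE TABLE t1 (a INT NOT NULL, SPATIAL INDEX (a));
  -- previously accepted, after which INSERT INTO t1 VALUES (1) crashed;
  -- now the CREATE TABLE itself returns an error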
corruption and crash results
An index creation statement where the index key
is larger/wider than the column it references
should throw an error.
A statement like:
CREATE TABLE t1 (a CHAR(1), PRIMARY KEY (A(255)))
did not raise an error, but a segmentation fault followed when
an insertion was attempted on the table.
The partial-key validation clause has been
restructured to (hopefully) better document which
uses of partial keys are valid.
Fixing problems discovered by "mtr --embedded" and "mtr --ps"
@ libmysqld/lib_sql.cc
"mtr --embedded --do-test=ps" failed.
Applying a change similar to the one previously done in protocol.cc,
to make the embedded version work the same as the client/server version
(a bug in the WL#2649 patch).
@ mysql-test/include/ctype_numconv.inc
@ mysql-test/r/ctype_binary.result
@ mysql-test/r/ctype_cp1251.result
@ mysql-test/r/ctype_latin1.result
@ mysql-test/r/ctype_ucs.result
- Changing tinyint(30) to tinyint(4)
due to problems with "mtr --ps"
Possibly a bug in libmysql.cc, in function fetch_long_with_conversion().
The zerofill buffer is too short.
- Commenting out tests with get_lock/release_lock:
"mtr --ps" failed for some reason in ctype_cp1251.
This patch introduces timeouts for metadata locks.
The timeout is specified in seconds using the new dynamic system
variable "lock_wait_timeout" which has both GLOBAL and SESSION
scopes. Allowed values range from 1 to 31536000 seconds (= 1 year).
The default value is 1 year.
The new server parameter "lock-wait-timeout" can be used to set
the default value upon server startup.
"lock_wait_timeout" applies to all statements that use metadata locks.
These include DML and DDL operations on tables, views, stored procedures
and stored functions. They also include LOCK TABLES, FLUSH TABLES WITH
READ LOCK and HANDLER statements.
The patch also changes thr_lock.c code (table data locks used by MyISAM
and other simplistic engines) to use the same system variable.
InnoDB row locks are unaffected.
One exception to the handling of the "lock_wait_timeout" variable
is delayed inserts. All delayed inserts are executed with a timeout
of 1 year regardless of the setting for the global variable. As the
connection issuing the delayed insert gets no notification of
delayed insert timeouts, we want to avoid unnecessary timeouts.
It's important to note that the timeout value is used for each lock
acquired and that one statement can take more than one lock.
A statement can therefore block for longer than the lock_wait_timeout
value before reporting a timeout error. When a lock timeout occurs,
ER_LOCK_WAIT_TIMEOUT is reported.
Test case added to lock_multi.test.
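A minimal usage sketch (the table name is arbitrary) of how the new variable surfaces:

  -- session 1
  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE;

  -- session 2
  SET SESSION lock_wait_timeout = 5;
  DROP TABLE t1;   -- waits up to 5 seconds for the metadata lock,
                   -- then fails with ER_LOCK_WAIT_TIMEOUT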
added:
include/ctype_numconv.inc
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/t/ctype_binary.test
Adding tests
modified:
mysql-test/r/bigint.result
mysql-test/r/case.result
mysql-test/r/create.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
mysql-test/r/func_gconcat.result
mysql-test/r/func_str.result
mysql-test/r/metadata.result
mysql-test/r/ps_1general.result
mysql-test/r/ps_2myisam.result
mysql-test/r/ps_3innodb.result
mysql-test/r/ps_4heap.result
mysql-test/r/ps_5merge.result
mysql-test/r/show_check.result
mysql-test/r/type_datetime.result
mysql-test/r/type_ranges.result
mysql-test/r/union.result
mysql-test/suite/ndb/r/ps_7ndb.result
mysql-test/t/ctype_cp1251.test
mysql-test/t/ctype_latin1.test
mysql-test/t/ctype_ucs.test
mysql-test/t/func_str.test
Fixing tests
@ sql/field.cc
- Return str result using my_charset_numeric.
- Using real multi-byte aware str_to_XXX functions
to handle tricky charset values properly (e.g. UCS2)
@ sql/field.h
- Changing derivation of non-string field types to DERIVATION_NUMERIC.
- Changing binary() for numeric/datetime fields to always
return TRUE even if charset is not my_charset_bin. We need
this to keep ha_base_keytype() returning HA_KEYTYPE_BINARY.
- Adding BINARY_FLAG into some fields, because it's not
being set automatically anymore with the
"my_charset_bin to my_charset_numeric" change.
- Changing derivation for numeric/datetime datatypes to a weaker
value, to make "SELECT concat('string', field)" use the character
set of the string literal for the result of the function.
@ sql/item.cc
- Implementing generic val_str_ascii().
- Using max_char_length() instead of direct read of max_length
to make "tricky" charsets like UCS2 work.
NOTE: in the future we'll possibly remove all direct reads of max_length
- Fixing Item_num::safe_charset_converter().
Previously it aligned a binary string to
a character string (for example by adding a leading 0x00
when doing binary->UCS2 conversion). Now it just
converts from my_charset_numeric to "tocs".
- Using val_str_ascii() in Item::get_time() to make UCS2 arguments work.
- Other misc changes
@ sql/item.h
- Changing MY_COLL_CMP_CONV and MY_COLL_ALLOW_CONV to
bit operations instead of hard-coded bit masks.
- Adding new method DTCollation.set_numeric().
- Adding new methods to Item.
- Adding helper functions to make code look nicer:
agg_item_charsets_for_string_result()
agg_item_charsets_for_comparison()
- Changing charset for Item_num-derived items
from my_charset_bin to my_charset_numeric
(which is an alias for latin1).
@ sql/item_cmpfunc.cc
- Using new helper functions
- Other misc changes
@ sql/item_cmpfunc.h
- Fixing strcmp() to return max_length=2.
Previously it returned 1, which was wrong,
because it did not fit '-1'.
@ sql/item_func.cc
- Using new helper functions
- Other minor changes
@ sql/item_func.h
- Removing unused functions
- Adding helper functions
agg_arg_charsets_for_string_result()
agg_arg_charsets_for_comparison()
- Adding set_numeric() into constructors of numeric items.
- Using fix_length_and_charset() and fix_char_length()
instead of direct write to max_length.
@ sql/item_geofunc.cc
- Changing class for Item_func_geometry_type and
Item_func_as_wkt from Item_str_func to
Item_str_ascii_func, to make them return UCS2 result
properly (when character_set_connection=ucs2).
@ sql/item_geofunc.h
- Changing class for Item_func_geometry_type and
Item_func_as_wkt from Item_str_func to
Item_str_ascii_func, to make them return UCS2 result
properly (when @@character_set_connection=ucs2).
@ sql/item_strfunc.cc
- Implementing Item_str_func::val_str().
- Renaming val_str to val_str_ascii for some items,
to make them work with UCS2 properly.
- Using new helper functions
- All single-argument functions that expect string
result now call this method:
agg_arg_charsets_for_string_result(collation, args, 1);
This enables character set conversion to @@character_set_connection
in case of pure numeric input.
@ sql/item_strfunc.h
- Introducing Item_str_ascii_func - for functions
which return pure ASCII data, for performance purposes,
as well as for the cases when the old implementation
of val_str() was heavily 8-bit oriented and implementing
a UCS2-aware version is tricky.
@ sql/item_sum.cc
- Using new helper functions.
@ sql/item_timefunc.cc
- Using my_charset_numeric instead of my_charset_bin.
- Using fix_char_length(), fix_length_and_charset()
and fix_length_and_charset_datetime()
instead of direct write to max_length.
- Using tricky-charset aware function str_to_time_with_warn()
@ sql/item_timefunc.h
- Using new helper functions for charset and length initialization.
- Changing base class for Item_func_get_format() to make
it return UCS2 properly (when character_set_connection=ucs2).
@ sql/item_xmlfunc.cc
- Using new helper function
@ sql/my_decimal.cc
- Adding a new DECIMAL to CHAR converter
with real multibyte support (e.g. UCS2)
@ sql/mysql_priv.h
- Introducing a new derivation level for numeric/datetime data types.
- Adding macros for my_charset_numeric and MY_REPERTOIRE_NUMERIC.
- Adding prototypes for str_set_decimal()
- Adding prototypes for character-set aware str_to_xxx() functions.
@ sql/protocol.cc
- Changing charsetnr to "binary" in client-side metadata for
numeric/datetime data types.
@ sql/time.cc
- Adding to_ascii() helper function, to convert a string
in any character set to ascii representation. In the
future it can be extended to understand digits written
in various non-Latin scripts.
- Adding real multi-byte character-set-aware versions of str_to_XXXX,
to make this type of query work correctly:
INSERT INTO t1 SET datetime_column=ucs2_expression;
@ strings/ctype-ucs2.c
- endptr was not calculated correctly. INSERTing UCS2
values into numeric columns returned warnings about
truncated wrong data.
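A hedged illustration of the statements these str_to_XXXX/endptr changes target (the table is made up):

  CREATE TABLE t1 (d DATETIME, n INT);
  INSERT INTO t1 SET d = CONVERT('2010-04-01 10:00:00' USING ucs2);
  INSERT INTO t1 SET n = CONVERT('42' USING ucs2);
  SELECT d, n FROM t1;   -- no bogus truncation warnings expected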
rqg_mdl_stability".
When the start of a statement's wait on a metadata lock
created more than one loop in the waiters graph, the server might
have entered a deadlock condition.
The problem was that in the case described above the MDL deadlock
detector had to perform several searches for a deadlock but
forgot to reset the Deadlock_detection_context before performing
a new search.
Failure to do so broke the assumption, in the code responsible for
choosing a victim, that if Deadlock_detection_context::victim
is set we also have a read lock on m_waiting_for_lock for this
context. As a result this lock could have been unlocked more
times than it was acquired, which corrupted the rwlock's state
and led to a server deadlock.
This fix ensures that such a reset is done before each attempt
to find a deadlock.
The problem becomes apparent only if HAVE_purify is undefined.
It is related to the part of the code in the open_table_from_share() function
where we initialize the record buffer only if HAVE_purify is enabled.
So when HAVE_purify is off, the record buffer is not initialized
at the open-table stage.
Next we read a key, find a NULL value and update the appropriate null bit,
but do not update the record buffer. After that the record is stored
in the join cache (store_record_in_cache). For CHAR fields we
strip trailing spaces, and in our case this procedure uses the
uninitialized record buffer.
The fix is to skip the space-stripping procedure for NULL values
of CHAR fields (partially based on the 6.0 JOIN_CACHE implementation).