Fix URLs.
README:
Fix URL.
mysqltest.result:
Update test result for real_sleep error message.
mysqltest.c:
Fix do_sleep() to print correct command name for real_sleep.
column is increasing when table is recreated with PS/SP":
make use of create_field::char_length more consistent in the code.
Reinit create_field::length from create_field::char_length
for every execution of a prepared statement (actually fixes the
bug).
erroneously abort, reporting a failure to kill child processes, where in
reality the problem was merely that the child had become a zombie because
of a missing waitpid() call.
Bug #17257 ndb, update fails for inner joins if tables do not have Primary Key
change: the area allocated by setValue() may not be around later; store the hidden key in a special member variable instead
Bug #17158 load data infile of char values into table of char with no (PK) fails to load
Bug #17081 Doing "LOAD DATA INFILE" directly after delete can cause missing data
When the setup_fields() function finds a field named '*', it expands it to the list
of all table fields. It does so by checking that the first char of
field_name is '*', but it doesn't check that '*' is the only char.
Due to this, when updating a table with a field named like '*name', such a field
is wrongly treated as '*' and expanded. This makes the list of fields
to update longer than the list of the new values. Later, the fill_record()
function crashes by dereferencing NULL when there are fields left to update
but no more values.
Added a check in the setup_fields() function which ensures that the field
expansion is done only when '*' is the only char in the field name.
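A minimal standalone sketch of that check (not the server's setup_fields() itself),
showing the old first-character test versus the fixed whole-name test:

  #include <stdio.h>

  /* Old test: only the first character was examined, so "*name" matched too. */
  static bool is_star_old(const char *field_name)
  {
    return field_name[0] == '*';
  }

  /* Fixed test: '*' must be the only character in the field name. */
  static bool is_star_fixed(const char *field_name)
  {
    return field_name[0] == '*' && field_name[1] == '\0';
  }

  int main()
  {
    printf("old('*name')=%d fixed('*name')=%d fixed('*')=%d\n",
           is_star_old("*name"), is_star_fixed("*name"), is_star_fixed("*"));
    return 0;
  }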
fix for bug #15828 after review
calling val_str() before testing the null value makes the function safe for NULL values returned by dynamic functions; the earlier fix was incomplete and only covered constant NULL values
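A toy sketch of that evaluation order, using a hypothetical Arg type in place of the
server's Item arguments: null_value can only be trusted after val_str() has run.

  #include <stdio.h>

  /* Hypothetical stand-in for an argument whose value is computed lazily. */
  struct Arg {
    bool null_value;
    const char *compute() { return nullptr; }  /* e.g. a dynamic function returning NULL */
    const char *val_str() {                    /* computes the value and sets null_value */
      const char *res = compute();
      null_value = (res == nullptr);
      return res;
    }
  };

  int main()
  {
    Arg a = { false };
    const char *res = a.val_str();   /* evaluate first ...                     */
    if (a.null_value)                /* ... only then is null_value meaningful */
      printf("NULL\n");
    else
      printf("%s\n", res);
    return 0;
  }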
When InnoDB compares a varchar field in ucs2 with a given key using a binary collation,
it calls my_strnncollsp_ucs2_bin() to perform the comparison.
Because the field length was less than the key length, the field should be padded
with trailing spaces in order to get a correct result.
Because my_strnncollsp_ucs2_bin() was calling my_strnncoll_ucs2_bin(), which
doesn't pad the field, a wrong comparison result was returned. This resulted in
a wrong result set.
my_strnncollsp_ucs2_bin() now compares fields the way my_strnncollsp_ucs2() does,
but using binary collation.
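A simplified single-byte sketch of the space-padded ("sp") comparison; the real routine
works on 2-byte UCS-2 units, but the padding idea is the same:

  #include <string.h>

  /* Compare a and b as if the shorter one were padded with ' ' up to the longer
     length (simplified model of a *_strnncollsp_*_bin routine). */
  static int bin_compare_space_padded(const unsigned char *a, size_t alen,
                                      const unsigned char *b, size_t blen)
  {
    size_t common = alen < blen ? alen : blen;
    int cmp = memcmp(a, b, common);
    if (cmp != 0 || alen == blen)
      return cmp;
    /* Compare the tail of the longer operand against spaces. */
    const unsigned char *tail = alen > blen ? a + common : b + common;
    size_t tail_len = (alen > blen ? alen : blen) - common;
    int sign = alen > blen ? 1 : -1;
    for (size_t i = 0; i < tail_len; i++)
      if (tail[i] != ' ')
        return sign * (tail[i] < ' ' ? -1 : 1);
    return 0;
  }

  int main()
  {
    const unsigned char a[] = "abc";
    const unsigned char b[] = "abc   ";
    return bin_compare_space_padded(a, 3, b, 6) == 0 ? 0 : 1;  /* equal under padding */
  }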
field.cc:
BLOB variants have a limit in bytes,
unlike CHAR/VARCHAR, which have limits in number of characters.
A tinyblob column can store up to 255 bytes.
In the case of basic Latin letters (which use 1 byte per character)
we can store up to 255 characters in a tinyblob column.
When passing a utf8 tinyblob column as an argument into
a function (e.g. COALESCE) we need to reserve 3*255 bytes,
i.e. multiply the length in bytes by mbcharlen for the character set.
Although in reality a tinyblob column can never be 3*255 bytes long,
we need to set max_length to the multiplied value so that fix_length_and_dec()
of the function caller (e.g. COALESCE) calculates the correct max_length
for the column being created.
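A small numeric sketch of that reservation, assuming a 255-byte tinyblob and 3 bytes
per character for utf8 (mbmaxlen is used here as the per-character factor):

  #include <stdio.h>

  int main()
  {
    unsigned max_data_length = 255;   /* a tinyblob stores at most 255 bytes */
    unsigned mbmaxlen        = 3;     /* utf8: at most 3 bytes per character */
    /* Reserve the worst case the function caller has to assume. */
    unsigned max_length      = max_data_length * mbmaxlen;
    printf("max_length = %u\n", max_length);   /* 765 */
    return 0;
  }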
ctype_utf8.result, ctype_utf8.test:
Adding test case.
for the UCA collation, isalnum and strnncollsp don't agree on whether
0xC2A0 is a space (strnncollsp is right, isalnum is wrong).
They still don't agree; the bug was fixed by avoiding strnncollsp.
table' lockup".
Changes from the innodb-4.1-ss11 snapshot.
Do not call os_file_create_tmpfile() at runtime. Instead, create
a temporary file at startup and guard access to it with a mutex.
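A minimal sketch of the create-at-startup-and-serialize pattern, using POSIX tmpfile()
and a pthread mutex rather than the InnoDB primitives:

  #include <stdio.h>
  #include <pthread.h>

  /* Created once at startup; all later use is serialized by the mutex. */
  static FILE *shared_tmpfile = nullptr;
  static pthread_mutex_t tmpfile_mutex = PTHREAD_MUTEX_INITIALIZER;

  static void startup_init(void)
  {
    shared_tmpfile = tmpfile();        /* no temporary-file creation at runtime */
  }

  static void use_tmpfile(const char *data)
  {
    pthread_mutex_lock(&tmpfile_mutex);
    rewind(shared_tmpfile);
    fputs(data, shared_tmpfile);       /* work with the shared temporary file */
    fflush(shared_tmpfile);
    pthread_mutex_unlock(&tmpfile_mutex);
  }

  int main()
  {
    startup_init();
    use_tmpfile("hello\n");
    return 0;
  }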
Also, fix bugs:
10511: "Wrong padding of UCS2 CHAR columns in ON UPDATE CASCADE";
13778: "If FOREIGN_KEY_CHECKS=0, one can create inconsistent FOREIGN
KEYs". When FOREIGN_KEY_CHECKS=0 we still need to check that
datatypes between foreign key references are compatible.
Also, added test cases (also for bug 9802).
into parts when converting to Unicode.
m_ctype.h:
Reorganizing mb_wc return codes to be able
to return "an unassigned N-byte-long character".
sql_string.cc:
Adding code to detect and properly handle
unassigned characters (i.e. those characters
which are correctly formed according to the
character set specification but do not have
a Unicode mapping).
Many files:
Fixing conversion function to return new codes.
ctype_ujis.test, ctype_gbk.test, ctype_big5.test:
Adding a test case.
ctype_ujis.result, ctype_gbk.result, ctype_big5.result:
Fixing results accordingly.
Fix for bug#12429: Replication tests fail: "Slave_IO_Running" differs:
The value is not important, and it depends on timing. Mask it.
Backport and extension of a fix made by Matthias in 5.0, originally it was
1.1976 05/12/05 17:57:48 mleich@mysql.com
ctype-euc_kr.c:
ctype-gb2312.c:
Adding specific well_formed_length functions
for gb2312 and euckr, to allow storing characters
which are correct according to the character set
specifications but just don't have Unicode mapping.
Previously only those which have a Unicode mapping
could be stored, while unassigned characters led
to data truncation.
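A rough sketch of a structure-only well-formedness check for gb2312; the lead/trail
byte ranges used here are approximations, not the server's exact tables:

  #include <stddef.h>

  /* Count how many leading bytes of [b, e) form structurally valid gb2312
     sequences, judging by byte ranges only and never by Unicode mapping.
     Approximate ranges: lead 0xA1..0xF7, trail 0xA1..0xFE. */
  static size_t gb2312_well_formed_len(const unsigned char *b,
                                       const unsigned char *e)
  {
    const unsigned char *p = b;
    while (p < e) {
      if (*p < 0x80) {
        p++;                              /* single ASCII byte */
      } else if (*p >= 0xA1 && *p <= 0xF7 && p + 1 < e &&
                 p[1] >= 0xA1 && p[1] <= 0xFE) {
        p += 2;                           /* structurally valid double byte */
      } else {
        break;                            /* ill-formed: stop here */
      }
    }
    return (size_t)(p - b);
  }

  int main()
  {
    const unsigned char s[] = { 'A', 0xA1, 0xA1, 0xFF };
    return gb2312_well_formed_len(s, s + 4) == 3 ? 0 : 1;
  }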
Many files:
new file
Problem #1: INSERT...SELECT, Version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
The duplicate detection now works after open_and_lock_tables(),
on the locks.
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway, as it should in theory
fix a possible MERGE table problem with multi-table update.
- Set max_length of Item_func_uuid to max_length*system_charset_info->mbmaxlen
Note! Item_func_uuid should be set to use the 'ascii' charset once hex(), format(), md5()
etc. use 'ascii'
- Committing again, the old patch seems to have been lost.
field.cc:
Adding longlong range checking to return
LONGLONG_MIN or LONGLONG_MAX when out of range.
Using the (longlong) cast only when the range is OK.
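A standalone sketch of the saturating conversion, using LLONG_MIN/LLONG_MAX from
<limits.h> in place of the server's LONGLONG_MIN/LONGLONG_MAX:

  #include <limits.h>
  #include <stdio.h>

  /* Saturate instead of letting the (long long) cast overflow. */
  static long long double_to_longlong_saturating(double nr)
  {
    if (nr <= (double) LLONG_MIN)
      return LLONG_MIN;
    if (nr >= (double) LLONG_MAX)
      return LLONG_MAX;
    return (long long) nr;               /* cast only when the range is OK */
  }

  int main()
  {
    printf("%lld\n", double_to_longlong_saturating(1e30));    /* LLONG_MAX */
    printf("%lld\n", double_to_longlong_saturating(-1e30));   /* LLONG_MIN */
    printf("%lld\n", double_to_longlong_saturating(42.9));    /* 42        */
    return 0;
  }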
cast.test:
Adding test case.
cast.result:
Fixing results accordingly.
depending on table order
multi_update::send_data() was counting updates, not updated rows. Thus if one
record had several updates, it was counted several times in 'rows matched'
even though it was updated only once.
multi_update::send_data() now counts only unique rows.
Problem #1: INSERT...SELECT
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
The duplicate detection now works after open_and_lock_tables(),
on the locks.
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway, as it should in theory
fix a possible MERGE table problem with multi-table update.
CREATE TABLE and PS/SP": make sure that 'typelib' object for
ENUM values and 'Item_string' object for DEFAULT clause are
created in the statement memory root.
crash
resolve_const_item() substitutes an item which will evaluate to a constant with an
equivalent constant item, based on the item's result type. In this case the
subselect was resolved as constant, and resolve_const_item() was substituting
its result's Item_caches with Item_null. Later an Item_cache function was called
on the Item_null object, which caused a server crash.
resolve_const_item() now substitutes constants for items with
result_type == ROW_RESULT only for Item_rows.
item_strfunc.h, item_strfunc.cc, item.cc:
Try to convert a const item into the destination
character set. If the conversion happens without
data loss, then cache the converted value
and return it during val_str().
Otherwise, if the conversion loses data, return
the 'Illegal mix of collations' error, as happened
previously.
ctype_recoding.result, ctype_recoding.test:
Fixing tests accordingly.
ps_grant.result:
Fixing result order.
grant.result:
Adding test case,
fixing result order.
grant.test:
Adding test case.
sql_acl.cc:
Fixed that my_charset_latin1 was incorrectly used instead of system_charset_info.
This problem was previously fixed by Ingo in 5.0.
This patch is basically a backport of the same changes into 4.1.
Two handler objects were present, one used for an insert and the other for a select.
The state of the statistics was local to the handler object, and thus the other handler
object didn't notice the insert.
The fix includes (a sketch of the version-counter idea follows this list):
1) Add a new variable key_stat_version, incremented whenever the statistics were
considered in need of an update (previously key_stats_ok= FALSE was set in those places)
2) Add a new handler variable key_stat_version, assigned wherever key_stats_ok= TRUE was
set previously
3) Fix records_in_range to return records if records <= 1
4) Fix records_in_range to add 2 to rec_per_key to ensure we don't report 0 or 1 when that
isn't the case, and thus invoke incorrect optimisations.
5) Fix unique key handling for HEAP tables in records_in_range
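A minimal sketch of the version-counter idea with hypothetical names (TableShare,
Handler): each handler remembers the statistics version it last used and refreshes
its cached statistics once the shared counter has moved on.

  #include <stdint.h>

  /* Shared per-table state, visible to every handler opened on the table. */
  struct TableShare {
    uint32_t key_stat_version;
    void invalidate_key_stats() { key_stat_version++; }  /* e.g. after an insert */
  };

  /* Per-handler state: one per open instance of the table. */
  struct Handler {
    TableShare *share;
    uint32_t seen_key_stat_version;

    void update_key_stats_if_needed() {
      if (seen_key_stat_version != share->key_stat_version) {
        /* ... recompute the cached index statistics here ... */
        seen_key_stat_version = share->key_stat_version;
      }
    }
  };

  int main()
  {
    TableShare share = { 0 };
    Handler h = { &share, UINT32_MAX };  /* UINT32_MAX forces the first refresh */
    h.update_key_stats_if_needed();      /* first use: refresh                  */
    share.invalidate_key_stats();        /* insert through another handler      */
    h.update_key_stats_if_needed();      /* version changed: refresh again      */
    return 0;
  }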
non-deterministic result in the test case for BUG#7947
the fix for BUG#7947 also corrected the result of the mix_innodb_myisam_binlog test, which
previously was missing DO RELEASE_LOCK() in the output of SHOW BINLOG EVENTS
Initialized usable_keys from table->keys_in_use instead of ~0
in test_if_skip_sort_order(). It was possible that a disabled
index was used for sorting.
Version for 4.0.
It fixes two problems:
1. The cause of the bug was that we did not check the table version for
the HANDLER ... READ commands. We did not notice when a table was
replaced by a new one. This can happen during ALTER TABLE, REPAIR
TABLE, and OPTIMIZE TABLE (there might be more cases). I call the fix
for this problem "the primary bug fix".
2. mysql_ha_flush() was not always called with LOCK_open locked,
though the function comment clearly said it must be (see the sketch after this item).
I changed the code so that the locking is done when required. I call
the fix for this problem "the secondary fix".
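One common way to express such a locking contract, sketched with hypothetical names
(flush_handler_tables, LOCK_open_mutex); the caller states whether it already holds
the mutex, so the function locks only when required:

  #include <pthread.h>

  static pthread_mutex_t LOCK_open_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* Must run under LOCK_open_mutex; is_locked tells us whether the caller
     already holds it. */
  static void flush_handler_tables(bool is_locked)
  {
    if (!is_locked)
      pthread_mutex_lock(&LOCK_open_mutex);

    /* ... walk and flush the open HANDLER tables here ... */

    if (!is_locked)
      pthread_mutex_unlock(&LOCK_open_mutex);
  }

  int main()
  {
    flush_handler_tables(false);          /* we do not hold the lock yet   */
    pthread_mutex_lock(&LOCK_open_mutex);
    flush_handler_tables(true);           /* caller already holds the lock */
    pthread_mutex_unlock(&LOCK_open_mutex);
    return 0;
  }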
Removed wrong fix for bug #14009 (use of abs() on null value causes problems with filesort)
Mark that add_time(), time_diff() and str_to_date() can return null values
Procedure analyse() redefines the select's fields_list. setup_copy_fields() assumes
that fields_list is a part of all_fields_list. Because the select has only
3 columns and analyse() redefines it to have 10 columns, an integer overrun in
setup_copy_fields() occurs and the server goes into an almost infinite loop.
Because fields_list is used not only to send data and field types, it is wrong
to allow a procedure to redefine it. This patch separates the select's fields_list
from the procedure's one. Now if a procedure is present, a copy of fields_list is
created in procedure_fields_list and it is used for sending data and fields.
The date field was declared NOT NULL, thus the expression 'datefield is null'
was always false. For SELECT, special handling of such cases is used:
there 'datefield is null' is converted to 'datefield eq "0000-00-00"'.
In mysql_update(), a remove_eq_conds() call was added before the creation of the select.
It performs some optimization of the conds and in particular the conversion
from 'is null' to 'eq'.
remove_eq_conds() also does some evaluation of the conds, and if it finds that
the conds are always false, the update statement is not processed further.
All this allows some update statements to be processed faster thanks to the
optimized conds, and avoids wasting resources when the conds are known to be false.
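A very rough sketch of the short-circuit idea, with hypothetical types standing in for
Item and remove_eq_conds(): folding the condition first lets the statement finish
before touching any rows.

  /* Hypothetical condition node: either a constant or something that must be
     evaluated per row. */
  struct Cond { bool is_const; bool const_value; };

  enum CondValue { COND_OK, COND_TRUE, COND_FALSE };

  static CondValue fold_conds(const Cond *c)
  {
    if (c == nullptr) return COND_TRUE;            /* no WHERE clause        */
    if (c->is_const)  return c->const_value ? COND_TRUE : COND_FALSE;
    return COND_OK;                                /* must evaluate per row  */
  }

  static long run_update(const Cond *where)
  {
    if (fold_conds(where) == COND_FALSE)
      return 0;                                    /* nothing can match: done */
    /* ... otherwise build the select and scan/update the matching rows ... */
    return 0;
  }

  int main()
  {
    Cond always_false = { true, false };
    return run_update(&always_false) == 0 ? 0 : 1;
  }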
collation
By default, constant strings in the second parameter of date_format() have a case-
insensitive collation. Because of this, the expressions date_format(f,'%m') and
date_format(f,'%M') wrongly compare as equal, which results in choosing the wrong
column to sort by.
Now if the second parameter of date_format() is constant, its collation is
changed to case sensitive.
VALUES() can only refer to the table the insert is going into.
But Item_insert_value::fix_fields() was passing its argument the full table list.
This resulted in finding a second column which shouldn't be found, and
failing with an error about an ambiguous field.
Item_insert_value::fix_fields() now passes only the first table of the full
table list.
Option to set the environment variable MTR_BUILD_THREAD to a small
number, from which mysql-test-run calculates port numbers that
will not conflict with other runs using a different thread number.
select distinct char(column) fails with utf8
ctype_utf8.result, ctype_utf8.test:
Adding test case
sql_yacc.yy:
Adding new syntax.
item_strfunc.h:
Fixing wrong max_length calculation.
Also, adding CHAR(x USING charset),
for easier migration from 4.1 to 5.0,
according to Monty's suggestion.
DISTINCT wasn't optimized away and caused creation of a tmp table in a case
where it was not needed. This resulted in an integer overrun and running out of memory.
Fix backported from 4.1: now if the optimizer finds that the result can contain
only one row, it removes DISTINCT.
In short, we now record whenever the slave I/O thread ignores a master's event because of its server id,
and use this info in the slave SQL thread to advance Exec_master_log_pos. If we
do not, this variable stays at the position of the last executed event, i.e. the last *non-ignored*
executed one, which may not be the last event of the master's binlog (and so the slave *looks* behind
the master even though data-wise it is not).
Bug #10308: Parse 'purge master logs' with subselect correctly.
subselect.test:
Bug #10308: Test for 'purge master logs' with subselect.
subselect.result:
Bug #10308: Test result for 'purge master logs' with subselect.
When fixing Item_func_plus in the ORDER BY clause, the field c was searched for in all
opened tables, but because c is an alias it wasn't found there.
This patch adds a flag to select_lex which allows Item_field::fix_fields()
to look in the select's item_list to find aliased fields.
After SHOW TABLE STATUS, last_insert_id wasn't cleared, and the next select
erroneously rewrote the WHERE condition and returned a row;
5.0 isn't affected because of its different SHOW TABLE STATUS handling.
A last_insert_id cleanup was added to mysqld_extend_show_tables().