Description:
-------------
After compiling from source, during make test I got the following error:
test main.loaddata failed with error
CURRENT_TEST: main.loaddata
mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1
CHARACTER SET ucs2
(@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'
I noticed that other tests are skipped because ucs2 is unavailable:
main.mix2_myisam_ucs2 [ skipped ] Test requires: 'have_ucs2'
Should main.loaddata be skipped if there is no ucs2?
How To Repeat:
-------------
Run make test on a source build compiled without ucs2 support.
Suggested fix:
-------------
The failing piece of the test should be moved from mysql-test/t/loaddata.test to
mysql-test/t/ctype_ucs.test, which is guarded as sketched below.
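For reference, ucs2-dependent tests such as ctype_ucs.test are normally
skipped on builds without the character set via an include line like the
following (a sketch of the usual mysql-test convention):

--source include/have_ucs2.inc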
The following error message is misleading because it claims
that the BLOB space is not counted.
"ERROR 1118 (42000): Row size too large. The maximum row size for
the used table type, not counting BLOBs, is 8126. You have to
change some columns to TEXT or BLOBs"
When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used, a
768-byte BLOB prefix is stored inline along with the row, so BLOB
space is in fact partly counted. The error message is therefore
changed as follows, depending on the row format in use:
For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format,
BLOB prefix of 0 bytes is stored inline."
For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
rb://1252 approved by Marko Makela
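As a rough illustration (hypothetical table, not from the original
report): under ROW_FORMAT=COMPACT each BLOB/TEXT column keeps a
768-byte prefix in the row, so enough such columns exceed the limit:

CREATE TABLE blob_row_size (
    c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT, c6 TEXT,
    c7 TEXT, c8 TEXT, c9 TEXT, c10 TEXT, c11 TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPACT;
-- 11 inline prefixes * 768 bytes = 8448 bytes > 8126, so the
-- COMPACT/REDUNDANT variant of the message above is raised (at CREATE
-- time with innodb_strict_mode=ON, otherwise when a row is inserted).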
Backport from mysql-5.6 of the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz) for
Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
The assertion introduced in the fix for Bug#13817703
is too strong: a negative number can be greater
than the column max value when the column value is
a negative number.
rb://978 Approved by Jimmy Yang.
rb://1236 approved by Marko Makela
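As a hedged sketch of the scenario (hypothetical table; the real test
case lives in the revision referenced above), a signed AUTO_INCREMENT
column may legitimately hold a negative value, which the over-strict
assertion did not account for:

CREATE TABLE t_ai (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
-- Explicitly inserting a negative value is allowed for a signed column,
INSERT INTO t_ai VALUES (-1);
-- and the next implicitly generated value must still be computed
-- without tripping the assertion.
INSERT INTO t_ai VALUES (NULL);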
If a system variable was declared as deprecated without mention of an
alternative, the message would look funny, e.g. for @@delayed_insert_limit:
Warning 1287 '@@delayed_insert_limit' is deprecated and
will be removed in MySQL .
The message was meant to display the version number, but it's not
possible to give one when declaring a system variable.
The fix does two things:
1) The definition of the message
ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT is changed so that it does
not display a version number. I.e. in English the message now reads:
Warning 1287 The syntax '@@delayed_insert_limit' is deprecated and
will be removed in a future version.
2) The message ER_WARN_DEPRECATED_SYNTAX_WITH_VER is discontinued in
favor of ER_WARN_DEPRECATED_SYNTAX for system variables. This change
was already done in versions 5.6 and above as part of wl#5265. This
part is simply back-ported from the worklog.
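For example (a sketch; the warning below is the post-fix text quoted
above):

SET GLOBAL delayed_insert_limit = 100;
SHOW WARNINGS;
-- Warning 1287: The syntax '@@delayed_insert_limit' is deprecated and
-- will be removed in a future version.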
This patch is for mysql-5.5 only,
to be null-merged to mysql-5.6 and mysql-trunk.
This is a partial rollback of the file io instrumentation,
removing the instrumentation for mysql_file_stat in the archive engine.
See the bug comments for details.
Problem description: the --ssl-key value is not validated; any bogus
text can be assigned to --ssl-key, it is not verified that the file
exists, and, more importantly, the client is still allowed to connect
to mysqld.
Fix: Added proper validation checks for --ssl-key.
Note:
1) Documentation changes are required for 5.1, 5.5, 5.6 and trunk in
the sections listed below; the details are:
http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl
and
REQUIRE SSL section of
http://dev.mysql.com/doc/refman/5.6/en/grant.html
2) A client using the option '--ssl' should be able to get an SSL
connection. This will be implemented as part of a separate fix in 5.6
and trunk.
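To illustrate the problem (a sketch; the file names are hypothetical):

# Before the fix, a nonexistent key file was silently accepted and the
# client could still connect:
mysql --ssl-cert=client-cert.pem --ssl-key=no-such-file.pem -e "SELECT 1"
# After the fix, the client reports an SSL error instead of connecting.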
Problem description:
Table 't' is created with two columns and a compound index on both
columns, under the InnoDB/MyISAM engine on a remote machine. On the
local machine the same table is created under the FEDERATED engine.
A SELECT whose WHERE clause combines conditions with 'AND' gives wrong
results on the local machine.
Analysis:
The given query is wrongly transformed by the
ha_federated::create_where_from_key() function and then sent to the
remote machine; hence the local machine shows wrong results.
Given query "select c1 from t where c1 <= 2 and c2 = 1;"
The query transformed by ha_federated::create_where_from_key() is:
SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL ) AND
( (`c1` >= 2) AND (`c2` <= 1) ), and this is what is sent to real_query().
In the above, the '<=' and '=' conditions were transformed into '>=' and
'<=' respectively.
ha_federated::create_where_from_key() behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to produce the "(`c1` IS NOT NULL )" part of the WHERE clause;
this transformation is correct. The end_key is used to produce
"( (`c1` >= 2) AND (`c2` <= 1) )", which is wrong: the given
conditions ('<=' and '=') are changed into the wrong conditions
('>=' and '<=').
The end_key contains {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}
and store_length has the value '5'. Based on the store_length and
length values, the condition is applied in the HA_READ_AFTER_KEY
switch case. The 'HA_READ_AFTER_KEY' case is applicable only to the
last part of the end_key; for the previous parts execution goes to the
'HA_READ_KEY_OR_NEXT' case, where '>=' is added as the condition
instead of '<='.
Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it
applies to all parts of the end_key, i.e. 'i > 0' is used for the
end_key; hence it was added to the 'if' condition.
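A minimal repro sketch (hypothetical names; the CONNECTION string is a
placeholder):

-- On the remote server:
CREATE TABLE t (c1 INT, c2 INT, KEY k (c1, c2)) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 1), (2, 1), (3, 1);
-- On the local server:
CREATE TABLE t (c1 INT, c2 INT, KEY k (c1, c2)) ENGINE=FEDERATED
    CONNECTION='mysql://user@remote_host:3306/test/t';
-- Before the fix this returned wrong rows, because the WHERE clause
-- was rewritten as shown above before being sent to the remote server:
SELECT c1 FROM t WHERE c1 <= 2 AND c2 = 1;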
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1175 approved by Jimmy Yang.
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.
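After the backport, the buffer pool can be inspected through
INFORMATION_SCHEMA, for example (a sketch):

SELECT POOL_ID, POOL_SIZE, FREE_BUFFERS, DATABASE_PAGES
FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
SELECT PAGE_TYPE, COUNT(*)
FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
GROUP BY PAGE_TYPE;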
"ORDER BY" AND "LIMIT BY" CLAUSE
PROBLEM:
When a 'LIMIT' clause is specified in a query along with
GROUP BY and ORDER BY, the optimizer chooses the wrong index,
thereby examining more rows than required.
Without the 'LIMIT' clause, however, the optimizer chooses
the right index.
ANALYSIS:
For the query in question, the range optimizer chooses
the first index, as a range is present (on 'a'). The optimizer
then checks for an index that would return records in sorted
order for the GROUP BY clause.
During this check it chooses the second index (on 'c,b,a'), based on
the specified LIMIT and on the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an ORDER BY clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).
FIX:
Do not enforce the 'LIMIT' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.
This fix is backported from 5.6, where the problem was fixed as
part of the changes for WL#5558.
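A schematic example of the query shape involved (hypothetical schema,
not the original test case):

CREATE TABLE t (a INT, b INT, c INT,
                KEY k_a (a), KEY k_cba (c, b, a));
-- With the LIMIT, the optimizer could wrongly prefer k_cba for the
-- GROUP BY, even though the ORDER BY on another column forces a scan
-- of the entire index; without the LIMIT it picks k_a for the range.
SELECT c, MIN(b) FROM t WHERE a > 10 GROUP BY c ORDER BY MIN(b) LIMIT 5;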
rpl_cant_read_event_incident:
The slave applies updates from the bug11747416_32228_binlog.000001
file, which contains a CREATE TABLE t statement and an incident. When
the SQL thread is running slowly, the IO thread may reach the incident
before the SQL thread executes the CREATE TABLE statement.
Fix: execute "DROP TABLE IF EXISTS t" and also perform a RESET MASTER
to clean the slave's binary logs.
rpl_bug41902:
The suppression for the error "MYSQL_BIN_LOG::purge_logs was called
with file ./master-bin.000001 not listed in the index." did not
account for the Windows path form ".\master-bin.000001".
Changed the suppression to "MYSQL_BIN_LOG::purge_logs was called with
file ..master-bin.000001 not listed in the index", which matches both
".\" and "./".
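With mysqltest's standard mechanism, the updated suppression would be
registered like this (a sketch; the string is the one given above, with
".." acting as a pattern that matches either path separator):

call mtr.add_suppression("MYSQL_BIN_LOG::purge_logs was called with file ..master-bin.000001 not listed in the index");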
This is a follow-up patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
The test was disabled in mysql-trunk and mysql-5.5 because in release
builds mysqlbinlog was not debug compiled, whereas mysqld was.
Since the have_debug.inc script checks only whether mysqld is debug
compiled, the test was not being skipped on release builds.
We resolve this problem by creating a new include file,
mysqlbinlog_have_debug.inc, which checks exclusively whether
mysqlbinlog is debug compiled; if not, it skips the test.
1. The clear text password client plugin is disabled by default.
2. Added an environment variable LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN that,
when set to something starting with '1', 'Y' or 'y', enables the clear
text plugin for all connections.
3. Added a new mysql_options() option, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
that takes a my_bool argument. When the value of the argument is
non-zero, the clear text plugin is enabled for this connection only.
4. Added an enable-cleartext-plugin config file option that takes a
numeric argument. If the numeric value of the argument is non-zero,
the clear text plugin is enabled for the connection.
5. Added a boolean command line option "--enable_cleartext_plugin" to
mysql, mysqlslap and mysqladmin. When specified it calls
mysql_options() with the effect of #3 (see the usage sketch after this
list).
6. Added a new CLEARTEXT option to the connect command in mysqltest.
When specified it enables the cleartext plugin for usage.
7. Added test cases and updated existing ones that need the clear text
plugin.
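For example, the connection-wide switches look like this (a sketch;
host and user are placeholders):

# Enable for all connections made by this process:
export LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN=1
# Or enable for a single client invocation:
mysql --enable_cleartext_plugin -h host -u user -p
# Or via the configuration file:
[mysql]
enable-cleartext-plugin=1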
The problem is that mysql lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus it needs to create "stand-in" MyISAM tables for each view while
dumping the tables, which it later drops and replaces with the actual
view definition.
But since views can have many more columns than an actual table,
creating these stand-in tables may be problematic.
There's no portable way to find out how many columns a MyISAM table
can have; it's a complicated formula depending on internal server
constants. Thus we can't have a reliable error check without repeating
the logic and the formula inside mysqldump.
1. Changed the type of the columns of the stand-in tables mysqldump
creates to satisfy view dependencies from the original type to
smallint, to save on row space.
2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if the columns of a view exceed 1000.
3. Added a test case.
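Schematically, a dump now contains something like this for a view v1
(a sketch, not verbatim mysqldump output):

-- Stand-in table, so objects referencing v1 can be created first:
CREATE TABLE `v1` (`col1` smallint, `col2` smallint);
-- ... remaining tables ...
-- Later, the stand-in is replaced by the real view:
DROP TABLE `v1`;
CREATE VIEW `v1` AS SELECT ...;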
Print the warning (note):
"YEAR(x) is deprecated and will be removed in a future release. Please
use YEAR(4) instead"
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)",
where x != 4.
Problem
========
Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.
This failure occurs because the event length is more than the
max_allowed_packet, and adding the max_event_header length makes it
exceed the max_allowed_packet of the Dump thread. This causes the
Dump thread to break replication and throw an error.
That can happen e.g. with row-based replication in an Update_rows
event.
Fix
====
The problem is fixed in 2 steps:
1.) The Dump thread's limit for reading an event is increased to the
upper limit, i.e. the Dump thread reads whatever gets logged in the
binary log.
2.) On the slave side we increase the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using the new server option slave_max_allowed_packet,
included to let the DBA regulate the max_allowed_packet of the slave
threads (IO/SQL), and it facilitates the sending of large packets from
the master to the slave. The large packets are thus received by the
slave and applied successfully.
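On the slave, the new option can be set in the configuration file,
e.g. (a sketch):

[mysqld]
# 1GB upper limit used for the slave IO/SQL threads:
slave_max_allowed_packet=1073741824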
Fixed by backport of:
------------------------------------------------------------
revno: 3402.50.156
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-test
timestamp: Wed 2012-02-08 14:10:23 +0100
message:
Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
This assert could be triggered if an InnoDB table was being moved
to a different database using ALTER TABLE ... RENAME, while this
database concurrently was being dropped by DROP DATABASE.
The reason for the problem was that no metadata lock was taken
on the target database by ALTER TABLE ... RENAME.
DROP DATABASE was therefore not blocked and could remove
the database while ALTER TABLE ... RENAME was executing. This
could cause the assert in InnoDB to be triggered.
This patch fixes the problem by taking an IX metadata lock on
the target database before ALTER TABLE ... RENAME starts
moving a table to a different database.
Note that this problem did not occur with RENAME TABLE which
already takes the correct metadata locks.
Also note that this patch slightly changes the behavior of
ALTER TABLE ... RENAME. Before, the statement would abort and
return an error if a lock on the target table name could not
be taken immediately. With this patch, ALTER TABLE ... RENAME
will instead block and wait until the lock can be taken
(or until we get a lock timeout). This also means that it is
possible to get ER_LOCK_DEADLOCK errors in this situation
since we allow ALTER TABLE ... RENAME to wait and not just
abort immediately.
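The race, schematically (hypothetical names):

-- Session 1: move a table into db2.
ALTER TABLE db1.t1 RENAME db2.t1;
-- Session 2, concurrently: previously this was not blocked and could
-- remove db2 mid-rename, triggering the InnoDB assert; with the patch
-- it waits on the IX metadata lock on db2 until the rename finishes.
DROP DATABASE db2;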
Details:
- Archive storage engine file accesses were not instrumented and thus
were not shown in P_S tables.
Fix:
- Added instrumentation code using the performance schema APIs for
file I/O.
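Once instrumented, archive engine file I/O shows up in the performance
schema, e.g. (a sketch; the exact instrument name is an assumption):

SELECT FILE_NAME, EVENT_NAME
FROM performance_schema.file_instances
WHERE EVENT_NAME LIKE 'wait/io/file/archive/%';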
Updated the result file. Because a multi-row insert now reserves the
auto-increment values beforehand, if any explicitly specified
auto-increment values are present, some of the reserved values are
lost.
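For illustration (hypothetical table): in a multi-row INSERT the
auto-increment values are reserved up front, so an explicit value in
the middle can leave unused reserved values behind:

CREATE TABLE t_gap (id INT AUTO_INCREMENT PRIMARY KEY, v INT)
    ENGINE=InnoDB;
-- Values for all three rows are reserved before execution; the
-- explicit 100 displaces some of them, so later generated ids may
-- show a gap.
INSERT INTO t_gap (id, v) VALUES (NULL, 1), (100, 2), (NULL, 3);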
Problem
========
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
This failure occurs because the event length is more than the total of
max_allowed_packet plus the max_event_header length. Since the event
length exceeds this size, the master Dump thread is unable to send the
packet to the slave.
That can happen e.g. with row-based replication in an Update_rows
event.
Fix
====
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using the new server option included, which is used to
regulate the max_allowed_packet of the slave threads (IO/SQL).
The large packets are thus received by the slave and applied
successfully.