CONFIG FILES CAUSES TEST
Utility as "mysql_upgrade" forks "mysql"/"mysqlcheck". Attaching
"mysql_upgrade" shows following calls after forking "mysql" or
"mysql_check" when configuration file information is passed as
first argument to "mysql_upgrade".
strace -f ./mysql_upgrade --defaults-file=../pdb/my.cnf --socket=../pdb/mysql.sock -f
[pid 6254] stat("/etc/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/etc/mysql/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/usr/local/mysql/etc/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/home/user_name/.my.cnf", {st_mode=S_IFREG|0664, st_size=19, ...}) = 0
[pid 6254] open("/home/user_name/.my.cnf", O_RDONLY) = 3
When the tool forks "mysqlcheck"/"mysql", "--no-defaults" is passed
as the first argument. However, before forking, the "find_tool"
function of "mysql_upgrade" checks whether the tool is executable
by calling "mysqlcheck --help" and "mysql --help". For this check,
none of "--no-defaults", "--defaults-file" or "--defaults-extra-file"
is passed to "mysql" and "mysqlcheck", so my.cnf is searched for in
the default paths.
Fix:
------
Modified the code to pass "--no-defaults" as the first argument to
"mysql" and "mysqlcheck" while checking whether the tool can be executed.
SPECIFIED WITH THE BASEDIR OPTION
Description: The mysql_plugin client attempts to remove any
filename specified to the --basedir option. The problem is
that if the filename does not end with a slash, it will
attempt to unlink it, which succeeds for files, but not for
directories.
Analysis: When mysql_plugin is started with the --basedir option
and the path of a file (rather than a directory) is given as
basedir, that file gets deleted. This happens because the code
calls the function my_delete, which unlinks the given file path.
Fix: Replaced that call with the function my_free, which only
frees the pointer holding the file path.
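To illustrate the difference between the two calls, here is a
standalone sketch using the standard-library analogues of the mysys
wrappers (unlink() for my_delete(), free() for my_free()); the
variable name is hypothetical:

#include <cstdlib>
#include <cstring>
#include <unistd.h>

int main()
{
  /* Heap buffer holding the path passed via --basedir. */
  char *basedir_path = strdup("/tmp/some_existing_file");

  /* Old behaviour (my_delete): unlinks the file named by the path.  */
  /* unlink(basedir_path);          -- would delete the user's file  */

  /* New behaviour (my_free): only releases the path buffer itself.  */
  free(basedir_path);
  return 0;
}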
TO DUMP DATA FROM MYSQL-5.6
Analysis
--------
Dumping mysql-5.6 data using the mysql-5.1/mysql-5.5 'mysqldump'
utility fails with a syntax error.
The server system variable 'sql_quote_show_create', which quotes the
identifiers, is set by the mysqldump utility. The mysqldump utility
of mysql-5.1/mysql-5.5 uses the deprecated syntax 'SET OPTION' to set
the 'sql_quote_show_create' option. Support for that syntax is
removed in mysql-5.6. Hence a syntax error is reported while taking
the dump.
Fix:
---
Changed the 'mysqldump' code to use the syntax
'SET SQL_QUOTE_SHOW_CREATE' to set the 'sql_quote_show_create'
option. That syntax is supported on mysql-5.1, mysql-5.5 and
mysql-5.6.
NOTE: I have not added an mtr test case since it is difficult
to simulate the condition. Also the syntax may not be further
simplified in the future.
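For illustration, the replacement statement as an arbitrary client
program would issue it (a standalone sketch, not the mysqldump source;
the connection parameters are placeholders):

#include <mysql.h>
#include <cstdio>

int main()
{
  MYSQL *conn = mysql_init(NULL);
  if (!mysql_real_connect(conn, "localhost", "root", "", NULL, 0, NULL, 0))
  {
    std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
    return 1;
  }
  /* Accepted by 5.1, 5.5 and 5.6 alike, unlike the deprecated
     "SET OPTION SQL_QUOTE_SHOW_CREATE=1". */
  if (mysql_query(conn, "SET SQL_QUOTE_SHOW_CREATE=1"))
    std::fprintf(stderr, "query failed: %s\n", mysql_error(conn));
  mysql_close(conn);
  return 0;
}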
MYSQL DB FROM REMOTE 5.0.96 SERVER
Problem: The mysqldump tool assumes that the general_log and
slow_log tables exist in the server. If mysqldump is run
against an old server that has no such log tables, it fails.
Analysis: The general_log and slow_log tables were added to
the ignore-table list as part of the bug-26121 fix, which
caused bug-45740 (MYSQLDUMP DOESN'T DUMP GENERAL_LOG
AND SLOW_QUERY CAUSES RESTORE PROBLEM). As part of
the bug-45740 fix, mysqldump adds CREATE TABLE
statements for these two tables. But that fix assumes
that general_log and slow_log are present on every
server. If the new mysqldump is executed against an
old server that has no general_log and slow_log,
mysqldump fails with an error that 'there is no
general_log table'.
Fix: When mysqldump tries to retrieve the general_log
and slow_log table structures, it should first check
whether these tables exist in the server instead of
dumping them blindly.
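A minimal sketch of such an existence check (a standalone program,
not the mysqldump code; the helper name and connection parameters
are placeholders):

#include <mysql.h>
#include <cstdio>

/* Returns 1 if the query returns at least one row, 0 if none, -1 on error. */
static int has_rows(MYSQL *conn, const char *query)
{
  if (mysql_query(conn, query))
    return -1;
  MYSQL_RES *res = mysql_store_result(conn);
  if (!res)
    return -1;
  int found = mysql_num_rows(res) > 0 ? 1 : 0;
  mysql_free_result(res);
  return found;
}

int main()
{
  MYSQL *conn = mysql_init(NULL);
  if (!mysql_real_connect(conn, "localhost", "root", "", "mysql", 0, NULL, 0))
    return 1;
  if (has_rows(conn, "SHOW TABLES LIKE 'general_log'") == 1)
    std::printf("general_log exists, safe to dump its structure\n");
  else
    std::printf("general_log absent, skip it\n");
  mysql_close(conn);
  return 0;
}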
In log_event.h
DESCRIPTION:
Due to the inclusion of an implementation file, namely 'rpl_tblmap.cc',
in a header file, namely 'log_event.h', linker errors occur if
log_event.h is included in an application containing multiple source
files, such as the Binlog API.
The Binlog API requires including log_event.h in its source files,
which leads to multiple-definition errors for the functions of class
'table_mapping' defined in rpl_tblmap.cc.
FIX:
Moved the inclusion from the header file (log_event.h) to the source
files that use this header and have the MYSQL_CLIENT flag set. The only
such file in the current server repository is mysqlbinlog.cc.
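The underlying C++ rule, reduced to a self-contained illustration
(the function name is invented): a definition may appear only once
across all linked translation units, which is why the implementation
file must be included in exactly one source file (mysqlbinlog.cc)
rather than in the shared header.

/* A declaration may be repeated in every file that sees the header;
   the definition must be emitted exactly once, otherwise the linker
   reports multiple definitions -- exactly what happened when
   rpl_tblmap.cc was pulled in through log_event.h. */
void table_mapping_clear();      /* declaration: safe to repeat anywhere */
void table_mapping_clear() {}    /* definition: must exist exactly once  */

int main()
{
  table_mapping_clear();
  return 0;
}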
SINGLE DATABASE; MYSQLBINLOG
Problem: If the last subevent inside an RBR event
is skipped while replaying a binary log
using mysqlbinlog with the --database=... option,
the generated output is missing the end marker ('/*!*/;')
for that RBR event, which causes a syntax error
when the generated output is replayed.
Fix: Append the end marker ('/*!*/;') if the last
subevent is being skipped.
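A minimal sketch of the fix's logic (the flag names and the
surrounding condition are hypothetical, not mysqlbinlog internals):

#include <cstdio>

int main()
{
  bool subevent_filtered = true;  /* the subevent was skipped by --database */
  bool stmt_end_reached  = true;  /* it was the last subevent (STMT_END_F)  */

  if (subevent_filtered && stmt_end_reached)
  {
    /* Even though the row event body is suppressed, the statement
       terminator must still be emitted so the output stays parseable. */
    std::fputs("/*!*/;\n", stdout);
  }
  return 0;
}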
WITH A PORT NUMBER ENCLOSED IN QUOTES
Problem: mysqldump --dump-slave --include-master-host-port
prints the CHANGE MASTER command in the generated logical
backup. The PORT number that is generated with this command
is a string and should be an integer.
Fix: Remove the enclosing quotes from the port number.
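A minimal sketch of the output difference (a standalone illustration;
the output stream and variable are placeholders, not mysqldump
internals):

#include <cstdio>

int main()
{
  FILE *out = stdout;
  unsigned int master_port = 3306;

  /* Old, incorrect form: the port is emitted as a quoted string.     */
  /* std::fprintf(out, "MASTER_PORT='%u'", master_port);              */

  /* Fixed form: the port is emitted as a bare integer literal.       */
  std::fprintf(out, "CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=%u;\n",
               master_port);
  return 0;
}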
(Based on Sinisa's patch)
Added a version checking facility to mysql_upgrade.
The versions used for checking are the version of the
server that mysql_upgrade is going to upgrade and the
server version that mysql_upgrade was built/distributed
with.
Also added an option '--version-check' to enable/disable
the version checking.
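A rough sketch of such a check (a standalone program with a
deliberately simplified comparison; the real option handling and
version parsing live in mysql_upgrade):

#include <mysql.h>
#include <mysql_version.h>
#include <cstdio>
#include <cstring>

int main()
{
  MYSQL *conn = mysql_init(NULL);
  if (!mysql_real_connect(conn, "localhost", "root", "", NULL, 0, NULL, 0))
    return 1;

  /* Version of the running server vs. the version this tool was built with. */
  const char *server_version = mysql_get_server_info(conn);
  if (std::strcmp(server_version, MYSQL_SERVER_VERSION) != 0)
    std::fprintf(stderr,
                 "server reports %s but this tool was built for %s; "
                 "the new --version-check option controls whether such a "
                 "mismatch is treated as an error\n",
                 server_version, MYSQL_SERVER_VERSION);
  mysql_close(conn);
  return 0;
}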
Problem:
=======
Found using AddressSanitizer testing.
The mysqlbinlog utility may perform out-of-bound heap
buffer reads, and thus exhibit undefined behaviour, when processing
RBR events in the old (pre-5.1 GA) format.
The following code in process_event() would only be correct
if Rows_log_event were the base class of the
Write/Update/Delete_rows_log_event_old classes:
case PRE_GA_WRITE_ROWS_EVENT:
case PRE_GA_DELETE_ROWS_EVENT:
case PRE_GA_UPDATE_ROWS_EVENT:
  ...
  Rows_log_event *e= (Rows_log_event*) ev;
  Table_map_log_event *ignored_map=
    print_event_info->m_table_map_ignored.get_table(e->get_table_id());
  ...
  if (e->get_flags(Rows_log_event::STMT_END_F))
  {
    ...
  }
However, Rows_log_event is only the base class for the
Write/Update/Delete_rows_log_event family of classes, not
for their *_old counterparts. So the above typecasts are
incorrect for the old-format RBR events and may result (and,
according to AddressSanitizer reports, do result) in reading
memory outside the previously allocated heap buffer.
Fix:
===
The above-mentioned invalid type cast has been replaced with a cast
to the appropriate old counterpart.
Note: The above-mentioned issue is present only in mysql-5.1 and
5.5. It is already fixed in mysql-5.6 and above as part of
Bug#55790. Hence a few of the relevant changes of Bug#55790 are
being back-ported to fix the current issue.
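For reference, a generic, self-contained illustration of why casting
to the wrong (unrelated) class is dangerous; the struct names below
are invented for the example and are not the server's event classes:

#include <cstdio>

/* Two unrelated layouts, mimicking Rows_log_event vs. the *_old family. */
struct NewRowsEvent { long table_id; int flags; };
struct OldRowsEvent { int  flags; };   /* smaller object, different layout */

int main()
{
  OldRowsEvent old_ev = { 0 };
  void *ev = &old_ev;                   /* what process_event() is handed */

  /* Wrong: treating the old event as the new layout reads past the object. */
  /* NewRowsEvent *bad = (NewRowsEvent *) ev;                               */
  /* long id = bad->table_id;            -- out-of-bounds heap read         */

  /* Correct: cast back to the type that was actually allocated. */
  OldRowsEvent *good = (OldRowsEvent *) ev;
  std::printf("flags=%d\n", good->flags);
  return 0;
}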
INTERACTIVE MODE
In interactive mode, libedit/readline allocates memory
for every new line entered, and the allocated memory
is never freed afterwards.
Fixed by freeing the allocated memory blocks appropriately.
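The pattern of the fix, shown as a minimal standalone readline loop
(not the client's actual input loop):

#include <cstdio>
#include <cstdlib>
#include <readline/readline.h>
#include <readline/history.h>

int main()
{
  char *line;
  while ((line = readline("mysql> ")) != NULL)
  {
    if (*line)
      add_history(line);   /* readline stores its own copy in the history   */
    /* ... process the line ... */
    free(line);            /* the buffer returned by readline() must be freed */
  }
  return 0;
}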
Due to an internal change in the server code in between 5.1 and 5.5
(wl#2649) the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character based hash calculation).
Also enum/set changed from latin1 ci based hash calculation to
binary hash between 5.1 and 5.5. (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of partition id differs.
Also since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and give an error otherwise, listing
the row's partitioning fields. This avoids asserts in InnoDB and
also alerts the user that there is a misplaced row. A detailed error
message will be given, including an entry in the error log consisting
of the table name, partition and row content (PK if it exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it will
default to 2.
This new syntax should probably be ignored by NDB.
3) Since there is a demand for avoiding scanning through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before, which allows manually
upgrading such a table by something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each row is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows; every misplaced row is moved into its correct
partition.
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
Also notice that if the .frm has a version of >= 5.5.3 and ALGORITHM
is not set, it is not possible to know if it consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N, without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
Problem: When a view with a specific character set and collation
is created on another view with a different character set and collation,
restoring the dump results in an 'illegal mix of collations' error.
SOLUTION: To avoid this confusion of collations, the data type used in
the CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter,
as the created table will be dropped at runtime; tinyint is specifically
used to avoid hitting row-size conflicts.
Analysis:
--------
The REPLACE operation produces incorrect output when
a user variable is supplied as an argument and the
operation is performed on multiple rows.
Consider the example below:
SET @var='(( 00000000 ++ 00000000 ))';
SELECT REPLACE(@var, '00000000', table_name) AS a FROM
INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';
Invalid output:
+---------------------------------------+
| REPLACE(@var, '00000000', TABLE_NAME) |
+---------------------------------------+
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
......
......
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
| (( columns_priv ++ columns_priv )) |
+---------------------------------------+
The user variable supplied as the string argument to the
REPLACE operation is overwritten after the first iteration
with '(( columns_priv ++ columns_priv ))'.
This overwritten string is then used for the subsequent
REPLACE iterations; since the pattern string is no longer
found, the invalid output shown above is returned.
Fix:
---
If Alloced_length is zero, reallocate and create a
copy of the string, which is then used for the REPLACE
operation on every iteration.
IN A SQL STATEMENT
While processing each line entered at the prompt, the
mysql client appends a '\n' to all lines except
delimiter commands. However, this exception must not
apply if 'delimiter' is part of a string or a comment;
in that case a '\n' should still be added.
Fixed by adding appropriate checks.
Added a test case.
When a binlog is replayed into a server, e.g.:
$ mysqlbinlog binlog.000001 | mysql
it sets a pseudo slave mode on the client connection so that the server
is able to read binlog events; that is, a format description event is
needed to correctly read the following events.
This pseudo slave mode also applies to the current connection the
replication rules that are needed to correctly apply binlog events.
If a binlog dump is sourced on a connection, the pseudo slave mode
remains in effect afterwards, which, from the customer's perspective,
applies unexpected rules to the following commands.
Added a new SET statement to the binlog dump output that unsets the
pseudo slave mode at the end of the dump file.
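A minimal sketch of the emitted bracketing statements (the session
variable name and the version-comment wrapper are an assumption about
what the server accepts, not taken verbatim from the patch):

#include <cstdio>

int main()
{
  FILE *out = stdout;

  /* Enable pseudo slave mode before the events... */
  std::fputs("/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;\n", out);

  /* ... decoded binlog events would be printed here ... */

  /* ...and switch it off again at the end of the dump file. */
  std::fputs("/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;\n", out);
  return 0;
}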
MYSQLDUMP OUTPUT
Problem: mysqldump, when used with the --routines option, dumps
all the routines of the specified database into the
output. The statements in this output are written
so as to be version safe, using C-style version
comments (of the format
/*!<version num> <sql statement>*/). If a semicolon
is present right before the closing of the comment in the
dump output, it results in a syntax error while
importing.
Solution: Version comments for dumped routines exist
specifically to protect versions older than 5.0.
When the import is done on 5.0 or later versions,
the entire CREATE statement gets executed, as all the
check conditions at the beginning of the comments
are cleared. Since the trade-off is between the
performance of newer versions, which are more in
use, and protection of very old versions, which are
no longer supported, it is proposed that these
comments be removed altogether to maintain
stability of the supported versions.
TABLE DATA IF DUMPS MYSQL DATABA
Problem: If mysqldump is run without --events (or with --skip-events)
it will not dump the mysql.event table's data. This behaviour is inconsistent
with that of the --routines option, which does not affect the dumping of the
mysql.proc table. According to the Manual, --events (--skip-events) defines
whether the Event Scheduler events for the dumped databases should be included
in the mysqldump output, and this has nothing to do with the mysql.event table
itself.
Solution: A warning has been added when mysqldump is used without --events
(or with --skip-events) and a separate patch with the behavioral change
will be prepared for 5.6/trunk.
WHEN DBNAME CONTAINS MULTIPLE QUOTES
The MySQL client's USE command might fail if the
database name contains multiple quotes (backticks).
The reason behind the failure is the method the
client uses to remove/escape the quotes while
parsing the USE command's option (dbname): the
option parsing might terminate early when a
matching quote is found.
Also, C APIs like mysql_select_db() expect a
normalized dbname. In certain cases the client
might fail to normalize the dbname the same way the
server does, and hence mysql_select_db() would fail.
Fixed by getting the normalized dbname (indirectly)
from the server: the "USE dbname" is sent directly as a
query to the server, followed by a "SELECT DATABASE()".
These steps are only performed if the number of quotes
in the dbname is greater than 2. Once the normalized
dbname is received, the original db is restored.
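A minimal sketch of this normalization round-trip (a standalone
program; the database name is a placeholder and is assumed to exist):

#include <mysql.h>
#include <cstdio>

int main()
{
  MYSQL *conn = mysql_init(NULL);
  if (!mysql_real_connect(conn, "localhost", "root", "", NULL, 0, NULL, 0))
    return 1;

  /* Let the server parse the backtick-quoted name instead of the client. */
  if (mysql_query(conn, "USE `a``b`") == 0 &&
      mysql_query(conn, "SELECT DATABASE()") == 0)
  {
    MYSQL_RES *res = mysql_store_result(conn);
    MYSQL_ROW row  = res ? mysql_fetch_row(res) : NULL;
    if (row && row[0])
      std::printf("normalized dbname: %s\n", row[0]);
    if (res)
      mysql_free_result(res);
  }
  mysql_close(conn);
  return 0;
}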
WHEN STDIN IS A PIPE
Problem: mysqlbinlog does not accept input from STDIN when
STDIN is a pipe. This prevents users from passing the input file
through a shell pipe.
Background: The my_seek() function does not check whether the file
descriptor passed to it refers to a regular (seekable) file. The
check_header() function in mysqlbinlog calls my_b_seek()
unconditionally, and this fails when the underlying file is a pipe.
Resolution: We resolve this problem by checking if the underlying file
is a regular file by using my_fstat() before calling my_b_seek().
If the underlying file is not seekable we skip the call to my_b_seek()
in check_header().
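The POSIX analogue of that guard, as a standalone sketch
(fstat()/S_ISREG() stand in for my_fstat(), lseek() for my_b_seek()):

#include <cstdio>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
  int fd = fileno(stdin);
  struct stat st;

  if (fstat(fd, &st) == 0 && S_ISREG(st.st_mode))
    lseek(fd, 4, SEEK_SET);   /* regular file: seeking past the magic bytes is safe */
  else
    std::fprintf(stderr, "stdin is not seekable; reading sequentially instead\n");
  return 0;
}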
SHOW 2012 INSTEAD OF 2011
* Added a new macro to hold the current year :
COPYRIGHT_NOTICE_CURRENT_YEAR
* Modified the ORACLE_WELCOME_COPYRIGHT_NOTICE macro
to take the initial year as a parameter and pick the
current year from the above-mentioned macro.
Problem description:
Giving "help 'contents'" in the mysql client as a first statement
gives error
Analysis:
In the com_server_help() function, the "server_cmd" variable was
initialised with buffer->ptr(). Because "'contents'" (with single
quotes) is passed, the "server_cmd" variable is not updated, so
buffer->ptr() still contains the previous buffer values, which are
sent to mysql_real_query(), hence the error.
Fix:
The "server_cmd" variable is no longer initialised from the buffer;
instead it is always updated with "server_cmd= cmd_buf", whether the
help topic is given with or without single quotes.
As part of error message improvement, a new error message was added
for the "help 'contents'" case.
Problem:
=======
The return value from my_b_write is ignored by: `my_b_write_quoted',
`my_b_write_bit', `Query_log_event::print_query_header'.
Most callers of `my_b_printf' ignore the return value. `log_event.cc'
has many calls to it.
Analysis:
========
`my_b_write' is used to write data into a file. If the write fails, it
sets the appropriate error number and error message through a my_error()
call and sets IO_CACHE::error to -1.
`my_b_printf' is also used to write data into a file; it
internally invokes my_b_write to do the write operation. Upon
success it returns the number of characters written to the file and on
error it returns -1, sets the error through my_error() and also sets
IO_CACHE::error to -1. Most of the event-specific print functions,
for example `Create_file_log_event::print', `Execute_load_log_event::print',
etc., are the ones which make several calls to the above two functions
and do not check the return value after the 'print' call. All the
above-mentioned misuse cases are on the client side.
Fix:
===
As part of the bug fix, a check for IO_CACHE::error == -1 has been
added at a very high level, after the call to the 'print' function.
There are a few more places where the return value of "my_b_write"
is ignored; those are mentioned below.
+++ mysys/mf_iocache2.c 2012-06-04 07:03:15 +0000
@@ -430,7 +430,8 @@
memset(buffz, '0', minimum_width - length2);
else
memset(buffz, ' ', minimum_width - length2);
- my_b_write(info, buffz, minimum_width - length2);
+++ sql/log.cc 2012-06-08 09:04:46 +0000
@@ -2388,7 +2388,12 @@
{
end= strxmov(buff, "# administrator command: ", NullS);
buff_len= (ulong) (end - buff);
- my_b_write(&log_file, (uchar*) buff, buff_len);
At these places appropriate return value handlers have been added.
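A generic, self-contained illustration of the pattern (plain stdio
rather than the mysys IO_CACHE API; the helper name is invented for
the example): every write result is checked and a failure is
propagated instead of being silently dropped.

#include <cstdio>

/* Hypothetical print helper: returns false as soon as any write fails. */
static bool print_admin_command(FILE *out)
{
  if (std::fputs("# administrator command: ", out) == EOF)
    return false;                       /* propagate the failure */
  if (std::fputs("FLUSH LOGS\n", out) == EOF)
    return false;
  return true;
}

int main()
{
  if (!print_admin_command(stdout))
  {
    std::fprintf(stderr, "writing the event header failed\n");
    return 1;
  }
  return 0;
}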
1. Clear text password client plugin disabled by default.
2. Added an environment variable LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN that,
when set to something starting with '1', 'Y' or 'y', will enable the
clear text plugin for all connections.
3. Added a new mysql_options() option, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
that takes a my_bool argument. When the value of the argument is non-zero
the clear text plugin is enabled for this connection only (see the sketch
after this list).
4. Added an enable-cleartext-plugin config file option that takes a
numeric argument. If the numeric value of the argument is non-zero the
clear text plugin is enabled for the connection.
5. Added a boolean command line option "--enable_cleartext_plugin" to
mysql, mysqlslap and mysqladmin. When specified it will call
mysql_options() with the effect of #3.
6. Added a new CLEARTEXT option to the connect command in mysqltest.
When specified it will enable the cleartext plugin for usage.
7. Added test cases and updated existing ones that need the clear text
plugin.
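A minimal sketch of item 3 from a client program's point of view
(the connection parameters are placeholders):

#include <mysql.h>
#include <cstdio>

int main()
{
  MYSQL *conn = mysql_init(NULL);
  my_bool enable_cleartext = 1;

  /* Enable the clear text authentication plugin for this connection only. */
  mysql_options(conn, MYSQL_ENABLE_CLEARTEXT_PLUGIN, &enable_cleartext);

  if (!mysql_real_connect(conn, "localhost", "user", "password", NULL, 0, NULL, 0))
  {
    std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
    return 1;
  }
  mysql_close(conn);
  return 0;
}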
executing
The problem is that mysqldump lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus it needs to create "stand-in" MyISAM tables for each view while
dumping the tables, which it later drops and replaces with the actual
view definition.
But since views can have many more columns than an actual table, creating
these stand-in tables may be problematic.
There's no way to portably find out how many columns a MyISAM table
can have; it's a complicated formula depending on internal server constants.
Thus we can't have a reliable error check without repeating the logic and
the formula inside mysqldump.
1. Changed the type of the columns of the stand-in tables mysqldump
creates to satisfy view dependencies from the original type to smallint,
to save on row space.
2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if the number of columns in a view
exceeds 1000.
3. Added a test case.