KEY HANDLING ON SUBSEQUENT CREATE TABLE IF NOT EXISTS
PROBLEM:
--------
Consider a stored routine which does CREATE TABLE
with a REFERENCES clause. The first call to this routine
invokes the parser, and the parsed items are cached so as
to avoid parsing again on the second execution of the routine.
It is observed that valgrind reports a warning
upon read of the thd->lex->alter_info->key_list->Foreign_key
object, which seems to point to an invalid memory address
during the second execution of the routine. Accessing this
object could theoretically cause a crash.
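A minimal repro sketch of this scenario (table, column and routine names
are illustrative, not taken from the original test case):
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
DELIMITER //
CREATE PROCEDURE p1()
  CREATE TABLE IF NOT EXISTS t2
    (b INT, FOREIGN KEY (b) REFERENCES t1 (a)) ENGINE=InnoDB//
DELIMITER ;
CALL p1(); -- first execution: statement is parsed, parsed items are cached
CALL p1(); -- second execution: cached items are reused; valgrind warns here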
ANALYSIS:
---------
The problem stems from the fact that elements of the
ref_columns list in the thd->lex->alter_info->
key_list->Foreign_key object end up pointing to
objects allocated on the runtime memory root.
During the first execution of the routine we create
a copy of the thd->lex->alter_info object.
As part of this process we create clones of the objects
in Alter_info::key_list, and of the Foreign_key object
in particular.
When the Foreign_key object is cloned, for some reason we
perform only shallow copies of both the Foreign_key::ref_columns
and Foreign_key::columns lists. So the new instance of the
Foreign_key object starts to SHARE the contents of the
ref_columns and columns lists with the original instance.
After that, as part of the cloning process, we call
list_copy_and_replace_each_value() for the elements of the
ref_columns list. As a result, the ref_columns lists in both
the original and the cloned Foreign_key object end up
containing pointers to Key_part_spec objects allocated on
the runtime memory root, because of the shallow copy.
So when we start copying the thd->lex->alter_info object
during the second execution of the stored routine, we indeed
encounter a pointer to a Key_part_spec object allocated on
the runtime mem-root, which was cleared at the end of the
previous execution. This is done in sp_head::execute(),
by a call to free_root(&execute_mem_root, MYF(0)).
As a result we get valgrind warnings about accessing
already-freed memory.
FIX:
----
The safest solution to this problem is to
fix the Foreign_key(Foreign_key, MEM_ROOT) constructor to do
a deep copy of both columns lists, similar to what the
Key(Key, MEM_ROOT) constructor does.
Bug#13011410 CRASH IN FILESORT CODE WITH GROUP BY/ROLLUP
The assert in Bug#13580775 is visible in 5.6 only,
but shows that all versions are vulnerable.
Bug#13011410 crashes in all versions.
filesort tries to re-use the sort buffer between invocations in order to save
malloc/free overhead.
The fix for Bug#11748783 (37359: FILESORT CAN BE MORE EFFICIENT)
added an assert that buffer properties (num_records, record_length) are
consistent between invocations. In fact, they are not necessarily consistent.
Fix: re-allocate the sort buffer if the properties change.
mysql-test/r/partition.result:
New tests.
mysql-test/t/partition.test:
New tests.
sql/filesort.cc:
If we already have allocated a sort buffer in a previous execution,
then verify that it is big enough for the current one.
sql/table.h:
Add sort_keys_size; Number of bytes allocated for the sort_keys buffer.
Fixing the 5.5 part (the 5.6 part will go in a separate commit soon).
Problem:
Item_direct_ref::get_date() incorrectly calculated its "null_value",
which made UNIX_TIMESTAMP(view_column) incorrectly return NULL
for a NOT NULL view_column.
Fix:
Make Item_direct_ref::get_date() calculate null_value
in the same way as the other methods
(val_real, val_str, val_int, val_decimal):
copy null_value from the referenced Item.
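A minimal sketch of the affected case (table, column and view names are
illustrative):
CREATE TABLE t1 (d DATETIME NOT NULL);
INSERT INTO t1 VALUES ('2011-01-01 00:00:00');
CREATE VIEW v1 AS SELECT d FROM t1;
SELECT UNIX_TIMESTAMP(d) FROM v1;
-- Before the fix this could return NULL because Item_direct_ref::get_date()
-- miscalculated null_value; with the fix it returns the timestamp.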
modified:
mysql-test/r/func_time.result
mysql-test/t/func_time.test
sql/item.cc
BUG#11760384 - 52792: MYSQLDUMP IN XML MODE DOES NOT DUMP ROUTINES
mysqldump in xml mode did not dump routines, events or
triggers.
This patch fixes the issue by correcting the if conditions
that disallowed the dump of the above-mentioned objects in
xml mode, and by adding the code required to produce the
dump in xml format.
client/mysqldump.c:
BUG#11760384 - 52792: mysqldump in XML mode does not dump
routines.
Fixed some if conditions to allow execution of dump methods
for xml and further added the relevant code at places to produce
the dump in xml format.
mysql-test/r/mysqldump.result:
Added a test case for Bug#11760384.
mysql-test/t/mysqldump.test:
Added a test case for Bug#11760384.
------------------------------------------------------------
revno: 3258
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug12663165
timestamp: Thu 2011-07-14 10:05:12 +0200
message:
Bug#12663165 SP DEAD CODE REMOVAL DOESN'T UNDERSTAND CONTINUE HANDLERS
When stored routines are loaded, a simple optimizer tries to locate
and remove dead code. The problem was that this dead code removal
did not work correctly with CONTINUE handlers.
If a statement triggers a CONTINUE handler, the following statement
will be executed after the handler statement has completed. This
means that the following statement is not dead code even if the
previous statement unconditionally alters control flow. This fact
was lost on the dead code removal routine, which ended up
removing instructions that could have been executed. This could
then lead to assertions, crashes and generally bad behavior when
the stored routine was executed.
This patch fixes the problem by marking as live code all stored
routine instructions that are in the same scope as a CONTINUE handler.
Test case added to sp.test.
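An illustrative sketch (not the actual sp.test case) of a statement that only
looks like dead code:
DELIMITER //
CREATE FUNCTION f1() RETURNS INT
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @handled = 1;
  -- The RETURN below fails while evaluating its expression, the CONTINUE
  -- handler runs, and execution resumes with the following statement...
  RETURN (SELECT 1 FROM non_existent_table);
  -- ...so this RETURN is reachable and must not be removed as dead code.
  RETURN 0;
END//
DELIMITER ;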
If init_command was incorrect, we couldn't let users execute
queries, but we couldn't report the issue to the client either
as it does not expect error messages before even sending a
command. Thus, we simply disconnected them without throwing
a clear error.
We now go through the proper sequence once (without executing
any user statements) so we can report back what the problem
is. Only then do we disconnect the user.
As always, root remains unaffected by this as init_command is
(still) not executed for them.
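A sketch of the behavior (using the init_connect server variable exercised
by init_connect.test; the failing statement is illustrative):
SET GLOBAL init_connect = 'INSERT INTO no_such_table VALUES (1)';
-- A new connection by a non-privileged user used to be dropped silently;
-- with this patch the client receives a proper error explaining that the
-- init command failed, and only then is the connection closed.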
mysql-test/r/init_connect.result:
We now report a proper error if init_command fails.
Expect as much.
mysql-test/t/init_connect.test:
We now report a proper error if init_command fails.
Expect as much.
sql/sql_connect.cc:
If init_command fails, throw an error explaining this to
the user.
Bug#12809202 61854: MYSQLDUMP --SINGLE-TRANSACTION --FLUSH-LOG BREAKS CONSISTENCY
The transaction started by mysqldump gets committed
implicitly when flush-log is specified along with
single-transaction option, and hence can break
consistency.
This is because COM_REFRESH is executed in order
to flush the logs, and starting from 5.5 this command
performs an implicit commit.
Fixed by making sure that COM_REFRESH is executed
before the transaction has started and not after it.
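Sketched in SQL terms (mysqldump's --single-transaction uses a
consistent-snapshot transaction; db1.t1 is an illustrative table):
-- Old order (broken): the flush happens inside the dump transaction.
START TRANSACTION WITH CONSISTENT SNAPSHOT;
FLUSH LOGS;           -- implicit commit in 5.5+: the snapshot is silently gone
SELECT * FROM db1.t1; -- no longer reads from the intended snapshot
-- New order (fixed): flush the logs first, then start the transaction.
FLUSH LOGS;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT * FROM db1.t1;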
Note : This patch triggers the following behavioral
changes in mysqldump :
1) After this patch we no longer flush logs before
dumping each database when the --single-transaction
option is given (in the absence of the --lock-all-tables
and --master-data options), as was done before.
2) Also, after this patch, we acquire FTWRL before
flushing logs in cases when only --single-transaction
and --flush-logs are given. It becomes safe to use
mysqldump with these two options and without the
--master-data parameter for backups.
client/mysqldump.c:
Bug#12809202 61854: MYSQLDUMP --SINGLE-TRANSACTION
--FLUSH-LOG BREAKS CONSISTENCY
Added logic to make sure that, if flush-log option
is specified, mysql_refresh() is never executed after
the transaction has started.
Added verbose messages for all the executions of
mysql_refresh() in order to track its invocation.
mysql-test/r/mysqldump.result:
Added test case for Bug#12809202.
mysql-test/t/mysqldump.test:
Added test case for Bug#12809202.
unix_timestamp() is now used in this part of the code in place of
current_time().
Also, since the pb2 machines may be extremely fast, instead of looping
through the code we use sleep(1.1) so that the variables t0 and t1 get
different values.
The time comparison using current_time() stored in an int variable was
giving wrong results because the int representation of current_time()
has been changed in mysql-trunk but not in mysql-5.5. The time is stored
in the format hh:mm:ss as the 'time' datatype, but as an int it is
stored as hhmmss on trunk, while on mysql-5.5 it is stored as hh.
Hence, the current_time() function has been replaced with the
unix_timestamp() function.
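The pattern now used in the test is roughly (variable names are illustrative):
SET @t0 = UNIX_TIMESTAMP();
DO SLEEP(1.1);    -- guarantee that t0 and t1 differ even on very fast machines
SET @t1 = UNIX_TIMESTAMP();
SELECT @t1 > @t0; -- seconds-based comparison, independent of how
                  -- CURRENT_TIME() is cast to an integer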
Problem description:
When a view is created using the FORMAT function, and the FORMAT function uses the
locale option, the view definition saved by the server does not contain that locale
information.
Ex:
create table test2 (bb decimal (10,2));
insert into test2 values (10.32),(10009.2),(12345678.21);
create view test3 as select format(bb,1,'sk_SK') as cc from test2;
select * from test3;
+--------------+
| cc           |
+--------------+
| 10.3         |
| 10,009.2     |
| 12,345,678.2 |
+--------------+
3 rows in set (0.02 sec)
show create view test3
View: test3
Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1) AS `cc`
from `test2`
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.02 sec)
Problem Analysis:
The function Item_func_format::print(), which prints the query string used to create
the view, does not print the third argument (i.e. the locale information). Hence the
view is created without the locale information.
Problem Solution:
If the argument count is more than 2, we now print the third argument into the query string as well.
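With the fix, the stored definition of the view from the example above is
expected to keep the locale argument, roughly:
Create View: CREATE ... VIEW `test3` AS select format(`test2`.`bb`,1,'sk_SK') AS `cc`
from `test2`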
Files changed:
sql/item_strfunc.cc
Function call changes: Item_func_format::print()
mysql-test/t/select.test
Added test case to test the bug
mysql-test/r/select.result
Result of the test case appended here
SMALL KEY CACHE
The server crashed on a division by zero because the key cache was not
initialized and its block length, which was used as a divisor, was 0.
The fix is to not allow CACHE INDEX if the key cache is not initialized,
and thus never try LOAD INDEX INTO CACHE for an uninitialized key cache.
Also added some windows files/directories to .bzrignore.
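A sketch of the statements involved (the cache and table names are illustrative):
CACHE INDEX t1 IN small_cache;  -- small_cache was never given a key_buffer_size,
                                -- so it is uninitialized; with the fix this
                                -- statement is rejected instead of letting a
                                -- later LOAD INDEX INTO CACHE t1 divide by a
                                -- zero block length
SET GLOBAL small_cache.key_buffer_size = 1024*1024;  -- initialize the cache
CACHE INDEX t1 IN small_cache;  -- now allowed
LOAD INDEX INTO CACHE t1;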
The predicate is re-written from
((`test`.`g1`.`a` = geometryfromtext('')) or ...
to
((`test`.`g1`.`a` = <cache>(geometryfromtext(''))) or ...
The range optimizer calls save_in_field_no_warnings() in order to fetch keys.
save_in_field_no_warnings() returns 0 because of the cache wrapper,
and get_mm_leaf() then proceeds to call Field_blob::get_key_image(),
which accesses uninitialized data.
mysql-test/r/gis.result:
New test case.
mysql-test/t/gis.test:
New test case.
sql/item.cc:
If we have cached a null_value, then verify that the Field can accept it.
AND HANG IN SHOW TABLE STATUS.
ISSUE: Table corruption due to concurrent queries.
Different threads running INSERT and CHECK queries
lead to table corruption: without proper locking,
rows can be inserted in the middle of a CHECK query.
SOLUTION: In the CHECK query the mutex lock is now held
for a longer time, to handle concurrent
INSERT and CHECK queries.
NOTE: Additionally, we backported the fix for the CHECKSUM
issue (Bug#11758979).
A patch for alter_table-big.test has been committed earlier.
This is a patch for create-big.test:
The test used to time-out after 900 seconds.
It relied on debug sleeps that are no longer present in the
code. Since the sleeps are long gone, fixing the problem didn't
involve just updating the result file or using the macro
"show_binlog_events2.inc" instead of the "show binlog events"
statement. The test needed to be rewritten using debug sync
points, and the result then needed to be updated.
So, the sleeps have been replaced by debug_sync points and the test execution time has
been reduced significantly.
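The general pattern used in place of the sleeps (the sync point name below is
hypothetical, standing in for the points actually used by the test):
# connection con1: park the statement at a sync point instead of sleeping
SET DEBUG_SYNC = 'some_create_table_sync_point SIGNAL parked WAIT_FOR go';
send CREATE TABLE t1 (a INT);
# connection default: wait until con1 is parked, run the concurrent statement
# under test, then let con1 continue
SET DEBUG_SYNC = 'now WAIT_FOR parked';
SET DEBUG_SYNC = 'now SIGNAL go';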
Bug#11827359 60223: MYSQL_UPGRADE PROBLEM WITH OPTION SKIP-WRITE-BINLOG
System tables were not getting upgraded when
mysql_upgrade was run with --skip-write-binlog
option. (Same for --write-binlog.) Also, with
this option, mysql_upgrade_info file was not
getting created after the upgrade.
mysql_upgrade makes use of the mysql client tool in
order to run the upgrade scripts; while doing so it
passes some of the command line options (used to
start mysql_upgrade) directly to the mysql client.
The cause of this bug was that options like
skip-write-binlog and upgrade-system-tables
were being passed to the mysql tool along with the
other options, and hence the mysql execution failed
due to the presence of these invalid options.
Fixed this issue by filtering out the above-mentioned
options from the list of options that will be passed to
the mysql and mysqlcheck tools. However, since --write-binlog
is supported by mysqlcheck, that option is passed
explicitly while running mysqlcheck (not part of this patch;
already the case).
Checking the contents of the general log after the upgrade
is not doable via an mtr test, so a manual test was performed.
A test was added to verify the creation of mysql_upgrade_info.
client/mysql_upgrade.c:
Bug#11827359 60223: MYSQL_UPGRADE PROBLEM WITH
OPTION SKIP-WRITE-BINLOG
With this patch, --upgrade-system-tables and
--write-binlog options will not be added to the
list of options, used to start mysql and mysqlcheck
tools.
mysql-test/r/mysql_upgrade.result:
Added a testcase for Bug#11827359.
mysql-test/t/mysql_upgrade.test:
Added a testcase for Bug#11827359.
alter_table-big.test was failing due to the use of the RAND() function, which is no longer
replication safe.
This has been modified to use static values.
Also, 'sleep' has been replaced with 'debug_sync', and the execution time of the
test has been reduced significantly.
This test has now been taken out of the disabled.def file and is being enabled.
A patch for this bug has already been pushed; a minor change is made here.
The database to be used after re-enabling the disabled code is 'TEST',
but 'MYSQL' was being used instead.
That is the minor change being made here.
Don't do this for echo, instead:
1) Enable replacements also for assignment from backquoted SQL
2) Allow replace_regex to take a variable for the *entire* argument list
With this, the test can be amended, but only in its version in trunk
TESTS: CRASH, CORRUPTION, 4G MEMOR
Issue: Valgrind errors due to CHECKSUM and OPTIMIZE
queries against ARCHIVE tables with NULL columns,
because the table record buffer was not initialized.
Solution: Initialize the record buffer.
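A minimal sketch of the statements involved (the table layout is illustrative):
CREATE TABLE t1 (a INT, b VARCHAR(10)) ENGINE=ARCHIVE; -- nullable columns
INSERT INTO t1 VALUES (1, NULL), (NULL, 'x');
CHECKSUM TABLE t1 EXTENDED; -- used to read the uninitialized record buffer
OPTIMIZE TABLE t1;          -- likewise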