Bug#35161
Fixed memory leak when failing to open a partition.
Bug#20129
Added tests for verifying REPAIR PARTITION.
mysql-test/std_data/parts/t1_will_crash#P#p1_first_1024.MYD:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
Created by:
CREATE TABLE t1_will_crash (
a VARCHAR(255),
b INT,
c LONGTEXT,
PRIMARY KEY (a, b)) ENGINE=MyISAM
PARTITION BY HASH (b)
PARTITIONS 7;
INSERT INTO t1_will_crash VALUES ...
and then the first 1024 bytes were copied into this file with
head -c 1024 var/master-data/test/t1_will_crash#P#p1.MYD
mysql-test/std_data/parts/t1_will_crash#P#p2.MYD:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after _mi_mark_file_changed in mi_write
was done.
mysql-test/std_data/parts/t1_will_crash#P#p2.MYI:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after _mi_mark_file_changed in mi_write
was done.
mysql-test/std_data/parts/t1_will_crash#P#p3.MYI:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after *share->write_record was done.
mysql-test/std_data/parts/t1_will_crash#P#p4.MYI:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after flush_cached_blocks
mysql-test/std_data/parts/t1_will_crash#P#p6.MYD:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after _mi_write_part_record in
write_dynamic_record returned for the first time.
mysql-test/std_data/parts/t1_will_crash#P#p6_2.MYD:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after _mi_write_part_record in
write_dynamic_record returned for the second time.
mysql-test/std_data/parts/t1_will_crash#P#p6_3.MYD:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
(see mysql-test/suite/parts/t/partition_repair_myisam.test)
copy of file right after _mi_write_part_record in
write_dynamic_record returned for the third time.
(data file fully updated).
mysql-test/suite/parts/r/partition_recover_myisam.result:
Bug#35161
Renamed since it was a test of recovery,
and to make the 'repair' name free for use
without --myisam-recover.
mysql-test/suite/parts/r/partition_repair_myisam.result:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
New result file for testing CHECK/REPAIR of partitioned tables
mysql-test/suite/parts/t/partition_recover_myisam-master.opt:
Bug#35161
Renamed since it was a test of recovery,
and to make the 'repair' name free for use
without --myisam-recover.
mysql-test/suite/parts/t/partition_recover_myisam.test:
Bug#35161
Renamed since it was a test of recovery,
and to make the 'repair' name free for use
without --myisam-recover.
mysql-test/suite/parts/t/partition_repair_myisam.test:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... not working
New test file for testing CHECK/REPAIR of partitioned tables
sql/ha_partition.cc:
Bug#35161
Fix of a memory leak when opening a partition failed.
Tilde expansion could fail when it had to expand to an empty string (such as
when HOME is set to an empty string), especially on systems where size_t is
unsigned.
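A minimal standalone sketch of the pitfall (hypothetical helper, not the
actual mysys code): with an unsigned length, "len - 1" wraps to a huge
value when the expansion is empty, so the empty case must be handled
before any such arithmetic.

#include <cstdlib>
#include <cstring>

/* Hypothetical illustration only: copy the expansion of "~" (taken from
   HOME) into "to". Because size_t is unsigned, "len - 1" would wrap when
   HOME="" unless the empty case is checked first. */
static size_t expand_tilde_sketch(char *to, size_t to_size)
{
  const char *home= std::getenv("HOME");
  size_t len= home ? std::strlen(home) : 0;
  if (to_size == 0)
    return 0;
  if (len == 0)                    /* empty expansion: nothing to copy */
  {
    to[0]= '\0';
    return 0;
  }
  if (home[len - 1] == '/')        /* safe: len is known to be non-zero */
    len--;
  if (len >= to_size)
    len= to_size - 1;
  std::memcpy(to, home, len);
  to[len]= '\0';
  return len;
}
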
plugin init function fails
Problem: if an INFORMATION_SCHEMA plugin initialization fails,
we free some of the plugin's inner data (schema_table) twice
during the INSTALL PLUGIN command.
Fix: free it once.
The Length value is the length of the field, while Max_length
is the length of the field value, so Max_length cannot be more
than Length.
The fix: corrected the calculation of the Item_empty_string item
length.
(Patch applied and queued on demand of Trudy/Davi.)
sql/item.h:
fixed calculation of the item length
sql/sql_show.cc:
removed unnecessary code
When the fractional part in a multiplication of DECIMALs
overflowed, we truncated the first operand rather than the one
with the longest fraction. We now truncate the least significant
places instead, for more precise multiplications.
(Queued on demand of Trudy/Davi.)
mysql-test/r/type_newdecimal.result:
show that if we need to truncate the scale of an operand, we pick the
right one (that is, we discard the least significant decimal places)
mysql-test/t/type_newdecimal.test:
show that if we need to truncate the scale of an operand, we pick the
right one (that is, we discard the least significant decimal places)
strings/decimal.c:
when needing to disregard fractional parts, pick the least
significant ones
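A rough sketch of the idea only (plain integer scales, not the actual
strings/decimal.c code): when the combined scale of the two operands does
not fit, fractional digits are dropped from the operand that currently has
the larger scale, so the least significant places are the ones discarded.

/* Hypothetical helper: shrink the fractional digit counts frac1/frac2
   until their sum fits in max_scale, always trimming the operand with
   the longer fraction. */
static void clamp_scales_sketch(int *frac1, int *frac2, int max_scale)
{
  while (*frac1 + *frac2 > max_scale)
  {
    if (*frac1 >= *frac2)
      (*frac1)--;                 /* operand 1 has more places: trim it */
    else
      (*frac2)--;
  }
}

For example, with max_scale= 30, scales of 25 and 10 end up as 20 and 10:
the second operand keeps all of its places and only the least significant
places of the first one are dropped, instead of always truncating the
first operand.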
Due to unknown changes the test failed in several ways.
Fixed by checking the test case in detail, documenting the expected
behavior in comments, and fixing the error directives.
In the course of the analysis, unneeded get_lock()/release_lock() use,
unneeded send/reap use, and unneeded sleeps were removed. The lock wait
timeout was reduced to 1 second, so the test no longer needs to be a
big-test.
The test was split into two parts, one running the tests with
--innodb_locks_unsafe_for_binlog, the other part without.
The main part (include/concurrent.inc) conditionally expects
lock wait timeouts based on the value of the system variable
innodb_locks_unsafe_for_binlog.
The major part of the patch comes from Kristofer Pettersson.
(Chad queues this patch on demand by Trudy/Davi.)
Bug#38272 timestamp fields incorrectly defaulted on update across
partitions.
It is not an InnoDB-specific bug.
ha_partition::update_row() didn't set
table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET when
orig_timestamp_type == TIMESTAMP_AUTO_SET_ON_INSERT,
so a partition set the timestamp field when a record
was moved to a different partition.
Fixed by doing '= TIMESTAMP_NO_AUTO_SET' unconditionally.
Also ha_partition::write_row() is fixed in the same way, because
Field_timestamp::set() is otherwise called twice in the
SET_ON_INSERT case.
(Chad queues this patch on demand by Trudy/Davi.)
mysql-test/r/partition.result:
Bug#38272 timestamp fields incorrectly defaulted on update across partitions.
test result
mysql-test/t/partition.test:
Bug#38272 timestamp fields incorrectly defaulted on update across partitions.
test case
sql/ha_partition.cc:
Bug#38272 timestamp fields incorrectly defaulted on update across partitions.
Do table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET unconditionally
in ha_partition::update_row and ::write_row()
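A standalone sketch of the Bug#38272 idea (stand-in types, not the real
sql/ha_partition.cc code): the auto-set mode is disabled unconditionally
while the row is written into the target partition and restored afterwards,
so moving a row between partitions does not re-stamp it.

enum timestamp_auto_set_type_sketch
{
  TIMESTAMP_NO_AUTO_SET, TIMESTAMP_AUTO_SET_ON_INSERT,
  TIMESTAMP_AUTO_SET_ON_UPDATE, TIMESTAMP_AUTO_SET_ON_BOTH
};

struct table_sketch { timestamp_auto_set_type_sketch timestamp_field_type; };

/* Hypothetical simplification of update_row()/write_row(): the underlying
   partition handlers must not auto-set the timestamp again, even when the
   caller's mode is TIMESTAMP_AUTO_SET_ON_INSERT. */
static int partition_update_row_sketch(table_sketch *table)
{
  timestamp_auto_set_type_sketch orig= table->timestamp_field_type;
  table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;  /* unconditional */
  /* ... delete from the old partition, write into the new one ... */
  table->timestamp_field_type= orig;                   /* restore */
  return 0;
}
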
Fix for a valgrind warning due to a jump on an uninitialized
variable. The problem was that the sql profile preparation
function wasn't being called for all possible code paths
of query execution.
The solution is to ensure that query profiling is always
started before dispatch_command function is called and to
explicitly call the profile preparation function on bootstrap.
sql/sql_parse.cc:
Finish query profiling properly when executing bootstrap commands.
Add query profiling to execute_init_command as it calls dispatch_command.
The problem:
The CSV storage engine open function returned success even
though it failed to open the data file.
The fix:
return an error.
Additional fixes:
added MY_WME to my_open() to avoid a mysterious error message;
free the share struct if opening the file was unsuccessful.
mysql-test/r/csv.result:
test result
mysql-test/t/csv.test:
test case
storage/csv/ha_tina.cc:
The problem:
The CSV storage engine open function returned success even
though it failed to open the data file.
The fix:
return an error.
Additional fixes:
added MY_WME to my_open() to avoid a mysterious error message;
free the share struct if opening the file was unsuccessful.
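A standalone sketch of the corrected open path (standard C++ I/O standing
in for my_open()/MY_WME and the real share handling): report the failure,
free the freshly allocated share, and return an error instead of success.

#include <cerrno>
#include <cstdio>
#include <cstdlib>

struct tina_share_sketch { char data_file_name[256]; };

static void free_share_sketch(tina_share_sketch *share) { std::free(share); }

/* Hypothetical simplification: 0 on success, an errno-style code on
   failure. */
static int tina_open_sketch(const char *data_file_name)
{
  tina_share_sketch *share=
    static_cast<tina_share_sketch*>(std::calloc(1, sizeof(*share)));
  if (share == NULL)
    return ENOMEM;
  std::FILE *data_file= std::fopen(data_file_name, "rb");
  if (data_file == NULL)
  {
    int err= errno ? errno : EIO;
    /* the MY_WME idea: emit a message instead of failing silently */
    std::fprintf(stderr, "Can't open data file '%s' (errno: %d)\n",
                 data_file_name, err);
    free_share_sketch(share);     /* don't leak the share struct */
    return err;                   /* and report the error, not success */
  }
  std::fclose(data_file);
  return 0;
}
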
Fix for bug #34779: crash in checksum table on federated tables
with blobs containing nulls
Problem: FEDERATED SE improperly stores NULL fields in the record buffer.
Fix: store them properly.
mysql-test/r/federated.result:
Fix for bug #34779: crash in checksum table on federated tables
with blobs containing nulls
- test result.
mysql-test/t/federated.test:
Fix for bug #34779: crash in checksum table on federated tables
with blobs containing nulls
- test case.
sql/ha_federated.cc:
Fix for bug #34779: crash in checksum table on federated tables
with blobs containing nulls
- when storing a NULL field in the record buffer
we must initialize its data, as other code
may rely on it.
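A minimal sketch of the record-buffer rule (stand-in field layout, not the
real sql/ha_federated.cc code): when a fetched column is NULL, its bytes in
the record buffer are reset as well as its NULL flag, so later readers such
as CHECKSUM TABLE see defined data.

#include <cstddef>
#include <cstring>

struct field_sketch
{
  unsigned char *ptr;   /* field data inside the record buffer */
  size_t pack_length;   /* number of bytes the field occupies */
  bool is_null;
};

/* Hypothetical helper that stores one fetched column. */
static void store_column_sketch(field_sketch *field,
                                const char *value, size_t value_length)
{
  if (value == NULL)
  {
    field->is_null= true;
    std::memset(field->ptr, 0, field->pack_length);  /* defined contents */
    return;
  }
  field->is_null= false;
  if (value_length > field->pack_length)
    value_length= field->pack_length;
  std::memcpy(field->ptr, value, value_length);
}
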
Problem: a missing "break" in a switch leads to an unexpected assertion
failure of 'myisamchk compressed_table'.
Fix: add the break.
storage/myisam/mi_check.c:
Fix for bug#37537: myisamchk fails with Assertion failure with partitioned table
In the record links check function (chk_data_link()),
the missing "break" for the COMPRESSED_RECORD case was added.
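Illustrative only (generic record kinds, not the storage/myisam/mi_check.c
code): the compressed-record branch has to end in its own break so control
cannot fall through into the next case and hit an assertion there.

enum record_kind_sketch { STATIC_RECORD, COMPRESSED_RECORD, DYNAMIC_RECORD };

static int check_record_sketch(record_kind_sketch kind)
{
  switch (kind)
  {
  case STATIC_RECORD:
    /* check a fixed-length record */
    break;
  case COMPRESSED_RECORD:
    /* check the compressed record */
    break;    /* the added break: without it control would fall
                 through into the dynamic-record checks below */
  case DYNAMIC_RECORD:
    /* follow and check the record links */
    break;
  }
  return 0;
}
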
Problem: REGEXP in functions/PSs may return wrong results
due to improper initialization.
Fix: initialize required REGEXP params.
sql/item_cmpfunc.cc:
Fix for bug#37337: Function returns different results
prev_regexp is used in Item_func_regex::regcomp()
to store the previous regex and to avoid re-initialization
when the same pattern is given.
It should be deleted in Item_func_regex::cleanup(), where we
clean up the regexp structure.
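A simplified standalone sketch (a std::string cache standing in for the
real prev_regexp buffer): the previously compiled pattern is cached to skip
re-initialization, so cleanup() has to drop that cache too, or a later
execution of the same item could keep stale state.

#include <string>

struct regex_item_sketch
{
  std::string prev_pattern;     /* stands in for prev_regexp */
  bool compiled;

  regex_item_sketch() : compiled(false) {}

  void regcomp_sketch(const std::string &pattern)
  {
    if (compiled && pattern == prev_pattern)
      return;                   /* same pattern: keep the compiled regexp */
    /* ... (re)compile "pattern" here ... */
    prev_pattern= pattern;
    compiled= true;
  }

  void cleanup_sketch()
  {
    /* the fix: forget the cached pattern together with the regexp */
    prev_pattern.clear();
    compiled= false;
  }
};
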
The problem was that the event allocated in mysql_client_binlog_statement
was not freed when an error occurred while applying the event.
sql/sql_binlog.cc:
Delete the event if applying it failed.
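A minimal ownership sketch (stand-in event type, not the actual
sql/sql_binlog.cc code): the decoded event is released on the error path
as well as on success.

struct binlog_event_sketch { /* decoded event data would live here */ };

/* Pretend that applying the event can fail. */
static int apply_event_sketch(binlog_event_sketch *) { return 1; }

static int binlog_statement_sketch()
{
  binlog_event_sketch *ev= new binlog_event_sketch();
  int error= apply_event_sketch(ev);
  delete ev;           /* freed whether applying succeeded or failed */
  return error;
}
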
Details:
- add a subtest that drops an unrelated view
- rearrange the existing tests so that the effects of
dropping a procedure and dropping a function can be
distinguished
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... complains that
partition is corrupt
Post-push fix:
a DBUG_ASSERT broke the embedded server; fixed by initializing
it in the embedded version of Protocol_text::prepare_for_resend.
libmysqld/lib_sql.cc:
Bug#20129: ALTER TABLE ... REPAIR PARTITION ... complains that
partition is corrupt
Post-push fix:
a DBUG_ASSERT in Protocol_text::store broke the embedded
server; fixed by initializing it in the embedded version
of Protocol_text::prepare_for_resend.
Bug#38195: Incorrect handling of aggregate functions when loose index scan is
used causes server crash.
When the loose index scan access method is used, the values of aggregated
functions are precomputed by it. Aggregation of such functions shouldn't be
performed in this case, and the functions should be treated as normal ones.
The create_tmp_table function wasn't taking this into account, which led to
a crash if a query has MIN/MAX aggregate functions and employs both a
temporary table and the loose index scan.
Now the JOIN::exec and create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when the loose index scan is used.
mysql-test/r/group_min_max.result:
Added a test case for the bug#38195.
mysql-test/t/group_min_max.test:
Added a test case for the bug#38195.
sql/sql_select.cc:
Bug#38195: Incorrect handling of aggregate functions when loose index scan is
used causes server crash.
Now the JOIN::exec and the create_tmp_table functions treat MIN/MAX aggregate
functions as normal ones when the loose index scan is used.
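Illustrative decision logic only (not the actual sql/sql_select.cc change):
once the access method has already produced the MIN()/MAX() values per
group, the temporary-table code treats those items like plain values
instead of aggregating them again.

enum item_kind_sketch { PLAIN_ITEM, MIN_MAX_ITEM, OTHER_SUM_ITEM };

/* Hypothetical predicate: should the temporary table aggregate this
   item, or just copy its value? */
static bool aggregate_in_tmp_table_sketch(item_kind_sketch kind,
                                          bool loose_index_scan_used)
{
  if (kind == MIN_MAX_ITEM && loose_index_scan_used)
    return false;                /* already precomputed by the scan */
  return kind != PLAIN_ITEM;     /* other aggregates are aggregated */
}
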
Post-push fix (compiler warning).
sql/ha_partition.cc:
Bug#37402: MySQL can't read partitioned table with capital letter in the name
fix to prevent a compiler warning.
This fix is for 5.1 only: back-porting the 6.0 patch manually.
The parser code in sql/sql_yacc.yy needs to be more robust to out of
memory conditions, so that when parsing a query fails due to OOM,
the thread gracefully returns an error.
Before this fix, a new/alloc returning NULL could:
- cause a crash, if the NULL pointer was dereferenced,
- produce a corrupted parsed tree, containing NULL nodes,
- alter the semantics of a query, by silently dropping token values or nodes.
With this fix:
- C++ constructors are *not* executed with a NULL "this" pointer
when operator new fails.
This is achieved by declaring "operator new" with a "throw ()" clause,
so that a failed new gracefully returns NULL on OOM conditions.
- calls to new/alloc are tested for a NULL result,
- The thread diagnostic area is set to an error status when OOM occurs.
This ensures that a request failing in the server properly returns an
ER_OUT_OF_RESOURCES error to the client.
- OOM conditions cause the parser to stop immediately (MYSQL_YYABORT).
This prevents further crashes from using a partially built parsed
tree in later rules of the parser.
No test scripts are provided, since the server is not instrumented for
automating OOM failures.
Tested under the debugger, to verify that an error in alloc_root causes the
thread to return gracefully all the way to the client application, with
an ER_OUT_OF_RESOURCES error.
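A standalone sketch of the pattern described above (simplified allocator
and abort macro, not the actual sql/sql_yacc.yy or MEM_ROOT code): the
allocation function is declared non-throwing so a failed new yields NULL
without running the constructor, the result is checked, and the parse is
aborted on OOM.

#include <cstddef>
#include <cstdlib>
#include <new>

struct mem_root_sketch { /* the server would carry a MEM_ROOT here */ };

#define SKETCH_YYABORT return 1         /* stands in for MYSQL_YYABORT */

struct parse_node_sketch
{
  int value;

  /* Non-throwing allocation: on OOM this returns NULL and the
     constructor is NOT invoked, so there is never a NULL "this". */
  static void *operator new(std::size_t size, mem_root_sketch *) throw()
  {
    return std::malloc(size);           /* stands in for alloc_root() */
  }
  /* matching placement delete (used only if the constructor throws) */
  static void operator delete(void *ptr, mem_root_sketch *) { std::free(ptr); }
  static void operator delete(void *ptr) { std::free(ptr); }

  explicit parse_node_sketch(int v) : value(v) {}
};

/* Hypothetical grammar action. */
static int parse_rule_sketch(mem_root_sketch *root)
{
  parse_node_sketch *node= new (root) parse_node_sketch(42);
  if (node == NULL)                     /* every allocation is checked */
  {
    /* the real code also sets ER_OUT_OF_RESOURCES in the thread's
       diagnostics area here */
    SKETCH_YYABORT;                     /* stop parsing immediately */
  }
  int result= (node->value == 42) ? 0 : 1;
  delete node;     /* in the server, MEM_ROOT memory is freed with the root */
  return result;
}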