In case of a low-memory sort buffer, QUICK_INDEX_MERGE_SELECT creates a
temporary file where it stores row ids which meet the QUICK_SELECT ranges,
except for the clustered PK range; the clustered range is processed separately.
In init_read_record we check if the temporary file is used and choose the
appropriate record access method. This does not take into account that the
temporary file contains only a partial result when QUICK_INDEX_MERGE_SELECT
with a clustered PK range is used.
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT
with a clustered PK range is used.
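A minimal standalone sketch of the intended decision (stand-in types;
clustered_pk_range() is the new method mentioned under sql/opt_range.h
below, everything else here is illustrative):

  struct QUICK_SELECT_SKETCH
  {
    /* true when the index merge also covers a clustered PK range */
    virtual bool clustered_pk_range() const { return false; }
    virtual ~QUICK_SELECT_SKETCH() {}
  };

  enum read_method { READ_TMP_FILE, READ_QUICK };

  static read_method choose_read_method(const QUICK_SELECT_SKETCH *quick,
                                        bool have_tmp_file)
  {
    /* the temporary file holds only a partial result when the clustered
       PK range is processed separately, so rows must come via rr_quick */
    if (quick && quick->clustered_pk_range())
      return READ_QUICK;
    return have_tmp_file ? READ_TMP_FILE : READ_QUICK;
  }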
mysql-test/suite/innodb/r/innodb_mysql.result:
test case
mysql-test/suite/innodb/t/innodb_mysql.test:
test case
mysql-test/suite/innodb_plugin/r/innodb_mysql.result:
test case
mysql-test/suite/innodb_plugin/t/innodb_mysql.test:
test case
sql/opt_range.h:
added new method
sql/records.cc:
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT
with a clustered PK range is used, plus related small fixes.
mysql-test/t/user_var.test:
test for bug
sql/field_conv.cc:
Per the C standard, memcpy() has undefined behaviour if the source and
destination overlap, which includes the case to->ptr == from->ptr.
sql/item_func.cc:
In the case of BUG#56138, entry->value==ptr in which case memcpy()
has undefined results per the C standard.
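Both changes guard the copy; a standalone sketch of the idea
(hypothetical names, not the actual code):

  #include <string.h>

  static void copy_bytes(unsigned char *to, const unsigned char *from,
                         size_t length)
  {
    /* memcpy() on overlapping regions - equal pointers being the
       degenerate overlap - is undefined, so skip the no-op copy */
    if (to != from)
      memcpy(to, from, length);
  }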
sql/sql_select.cc:
Work around a bug in old gcc
Problem: crash in the Item_float constructor on a DBUG_ASSERT due
to a non-null-terminated string parameter.
Fix: make Item_float::Item_float safe for non-null-terminated parameters:
- use a temporary buffer when generating the error message
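A standalone sketch of the temporary-buffer idea (hypothetical names
and buffer size, not the actual sql/item.cc change):

  #include <stdio.h>
  #include <string.h>

  static void report_bad_float(const char *str, size_t length)
  {
    char buf[64];                       /* temporary, always terminated */
    size_t len= length < sizeof(buf) - 1 ? length : sizeof(buf) - 1;
    memcpy(buf, str, len);
    buf[len]= '\0';                     /* now safe for the error message */
    fprintf(stderr, "Illegal float value: '%s'\n", buf);
  }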
modified:
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
@ sql/item.cc
Bug#55794: ulonglong options of mysqld show wrong values.
Port the few remaining system variables to the correct mechanism --
range-check in the check stage (and throw an error or warning at that
point, as needed and depending on STRICTness), update in the update stage.
Fix some signedness errors when retrieving sysvar values for display.
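A generic sketch of the check-stage/update-stage split (hypothetical
names, not MySQL's actual sys_var API):

  /* check stage: validate and, in non-strict mode, clamp with a warning */
  static int check_ulonglong_range(unsigned long long *val,
                                   unsigned long long max_val,
                                   int strict_mode)
  {
    if (*val > max_val)
    {
      if (strict_mode)
        return 1;                /* error: the SET statement fails here */
      *val= max_val;             /* clamp; caller emits a warning */
    }
    return 0;
  }

  /* update stage: only store the already-validated value */
  static void update_ulonglong(unsigned long long *var_ptr,
                               unsigned long long val)
  {
    *var_ptr= val;
  }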
mysql-test/r/variables.result:
Show that we throw warnings or errors depending on strictness
even for "special" variables now.
mysql-test/t/variables.test:
Show that we throw warnings or errors depending on strictness
even for "special" variables now.
sql/item_func.cc:
show sys_var_ulonglong_ptr and SHOW_LONGLONG type variables as unsigned.
sql/set_var.cc:
move range-checking from update stage to check stage for the remaining
few sys-vars that broke the pattern
sql/set_var.h:
add check functions.
Bug#57995: Compiler flag change build error on OSX 10.4: my_getncpus.c
Bug#57996: Compiler flag change build error on OSX 10.5 : bind.c
Bug#57994: Compiler flag change build error : my_redel.c
Bug#57993: Compiler flag change build error on FreeBsd 7.0 : regexec.c
Bug#57992: Compiler flag change build error on FreeBsd : mf_keycache.c
Bug#57997: Compiler flag change build error on OSX 10.6: debug_sync.cc
Fix assorted compiler-generated warnings.
cmd-line-utils/readline/bind.c:
Bug#57996: Compiler flag change build error on OSX 10.5 : bind.c
Initialize variable to work around a false positive warning.
include/m_string.h:
Bug#57994: Compiler flag change build error : my_redel.c
The expansion of stpcpy (in glibc) causes warnings if the
return value of strmov is not being used. Since stpcpy is
a GNU extension and the expansion ends up using a built-in
provided by GCC, use the compiler provided built-in directly
when possible.
include/my_compiler.h:
Define a dummy MY_GNUC_PREREQ when not compiling with GCC.
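A minimal sketch of the two pieces; the exact definitions in the
headers may differ, and the GCC version threshold is an assumption:

  /* include/my_compiler.h: version test, dummy when not using GCC */
  #if defined(__GNUC__)
  # define MY_GNUC_PREREQ(maj, min) \
      ((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
  #else
  # define MY_GNUC_PREREQ(maj, min) (0)
  #endif

  /* include/m_string.h: use the compiler built-in directly so glibc's
     stpcpy expansion no longer warns when the return value is unused */
  #if MY_GNUC_PREREQ(3, 4)
  # define strmov(dst, src) __builtin_stpcpy((dst), (src))
  #endif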
libmysql/libmysql.c:
Bug#58057: 5.1 libmysql/libmysql.c unused variable/compile failure
Variable might not be used in some cases. So, tag it as unused.
mysys/mf_keycache.c:
Bug#57992: Compiler flag change build error on FreeBsd : mf_keycache.c
Use UNINIT_VAR to work around a false positive warning.
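The UNINIT_VAR idiom used here and in the entries below, as a sketch
(the definition shown is an assumption about the actual macro):

  #if defined(FORCE_INIT_OF_VARS)
  # define UNINIT_VAR(x) x= 0   /* really initialize when asked to */
  #else
  /* self-assignment marks the variable as "set", silencing the
     false positive without generating extra code */
  # define UNINIT_VAR(x) x= x
  #endif

  /* usage: int UNINIT_VAR(error); */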
mysys/my_getncpus.c:
Bug#57995: Compiler flag change build error on OSX 10.4: my_getncpus.c
Declare variable in the same block where it is used.
regex/regexec.c:
Bug#57993: Compiler flag change build error on FreeBsd 7.0 : regexec.c
Work around a compiler bug which causes the cast to not be enforced.
sql/debug_sync.cc:
Bug#57997: Compiler flag change build error on OSX 10.6: debug_sync.cc
Use UNINIT_VAR to work around a false positive warning.
sql/handler.cc:
Use UNINIT_VAR to work around a false positive warning.
sql/slave.cc:
Use UNINIT_VAR to work around a false positive warning.
sql/sql_partition.cc:
Use UNINIT_VAR to work around a false positive warning.
storage/myisam/ft_nlq_search.c:
Use UNINIT_VAR to work around a false positive warning.
storage/myisam/mi_create.c:
Use UNINIT_VAR to work around a false positive warning.
storage/myisammrg/myrg_open.c:
Use UNINIT_VAR to work around a false positive warning.
tests/mysql_client_test.c:
Change function to take a pointer to const, no need for a cast.
with ON DUPLICATE KEY UPDATE
There was a missed corner case in the partitioning
handler, which caused next_insert_id to be changed
in the second-level handlers (i.e. the handler of a partition),
which caused this debug assertion.
The solution was to always ensure that only the partitioning
level generates auto_increment values, since if it was done
within a partition, it may fail to match the partition
function.
mysql-test/suite/parts/inc/partition_auto_increment.inc:
Added tests
mysql-test/suite/parts/r/partition_auto_increment_blackhole.result:
updated results
mysql-test/suite/parts/r/partition_auto_increment_innodb.result:
updated results
mysql-test/suite/parts/r/partition_auto_increment_memory.result:
updated results
mysql-test/suite/parts/r/partition_auto_increment_myisam.result:
updated results
sql/ha_partition.cc:
In <engine>::write_row the auto_inc value is generated
through handler::update_auto_increment (which calls <engine>::get_auto_increment() if needed).
If:
* INSERT_ID was set to 0
* it was updated to 0 by 'INSERT ... ON DUPLICATE KEY UPDATE' and changed partitions for the row
Then it would try to generate an auto_increment value in the
<engine for a specific partition>::write_row, which will
trigger the assert.
So the solution is to prevent this by:
in ha_partition::write_row, setting auto_inc_field_not_null and
adding MODE_NO_AUTO_VALUE_ON_ZERO;
in ha_partition::update_row (when changing partition), temporarily
setting table->next_number_field to NULL while calling the
partition's ::write_row().
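A standalone sketch of the update_row part (stand-in types and a
hypothetical helper; the real logic is in ha_partition::update_row):

  struct Field;                         /* opaque stand-in */
  struct TABLE_SKETCH { Field *next_number_field; };

  static int write_to_new_partition(TABLE_SKETCH *table,
                                    int (*part_write_row)(TABLE_SKETCH*))
  {
    Field *saved= table->next_number_field;
    table->next_number_field= NULL;     /* the partition's handler must not
                                           generate an auto_increment value */
    int error= part_write_row(table);
    table->next_number_field= saved;    /* restore for the partitioning level */
    return error;
  }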
in a different default schema.
In strict mode, when data truncation or conversion happens,
THD::killed is set to THD::KILL_BAD_DATA.
This is an abuse of the KILL mechanism to guarantee that
execution of the statement is aborted.
Stored procedure execution, on the other hand, upon detecting
that the connection was killed, would
terminate immediately, without trying to restore the caller's
context, in particular without restoring the caller's current schema.
The fix is, when terminating a stored procedure execution,
to only bypass cleanup if the entire connection was killed,
not in case of other forms of KILL.
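The core of the fix, as a sketch (stand-in enum; the actual check is
in sp_head::execute):

  enum killed_state { NOT_KILLED, KILL_BAD_DATA, KILL_CONNECTION };

  static bool must_skip_context_restore(killed_state killed)
  {
    /* only a real connection kill bypasses the cleanup; KILL_BAD_DATA
       and other statement-level aborts still restore the caller's schema */
    return killed == KILL_CONNECTION;
  }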
mysql-test/r/sp-bugs.result:
Added result for a test case for bug#54375.
mysql-test/t/sp-bugs.test:
Added test case for bug#54375.
sql/sp_head.cc:
sp_head::execute modified: restore saved current db if
connection is not killed.
ALTER TABLE RENAME, DISABLE KEYS.
The code of ALTER TABLE RENAME, DISABLE KEYS could
issue a commit while holding the LOCK_open mutex.
This is a regression introduced by the fix for
Bug 54453.
This failed an assert guarding us against a potential
deadlock with connections trying to execute
FLUSH TABLES WITH READ LOCK.
The fix is to move acquisition of LOCK_open outside
the section that issues ha_autocommit_or_rollback().
LOCK_open is taken to protect against concurrent
operations with .frms and the table definition
cache, and doesn't need to cover the call to commit.
A test case added to innodb_mysql.test.
The patch is to be null-merged to 5.5, which
already has 54453 null-merged to it.
mysql-test/suite/innodb/r/innodb_mysql.result:
Added test results for test for bug#56619.
mysql-test/suite/innodb/t/innodb_mysql.test:
Added test for bug#56619.
sql/sql_table.cc:
mysql_alter_table() modified: moved acquisition of LOCK_open
after call to ha_autocommit_or_rollback.
Quoting from the bug report:
The pstack library has been included in MySQL since version
4.0.0. It's useless and should be removed.
Details: According to its own documentation, pstack only works
on Linux on x86 in 32 bit mode and requires LinuxThreads and a
statically linked binary. It doesn't really support any Linux
from 2003 or later and doesn't work on any other OS.
The --enable-pstack option is thus deprecated and has no effect.
sporadically.
The cause of the sporadic time out was a leaking protection
against the global read lock, taken by the RENAME statement
and not released if an error occurred during RENAME.
The leaked protection counter would lead to the value of
protect_against_global_read never dropping to 0.
Consequently FLUSH TABLES in all connections, including the
one that leaked the protection, could not proceed.
The fix is to ensure that all branches in the RENAME code properly
release the GRL protection.
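A standalone sketch of the single-exit pattern the fix applies
(hypothetical names and stubs; the real change is in
mysql_rename_tables):

  static int  acquire_grl_protection(void) { return 0; }  /* stub */
  static void release_grl_protection(void) {}             /* stub */
  static int  do_renames(void)             { return 1; }  /* stub: fails */

  static int rename_tables_sketch(void)
  {
    int error= 1;
    if (acquire_grl_protection())
      return 1;
    if (do_renames())
      goto err;                  /* was: return 1; -- leaked the protection */
    error= 0;
  err:
    release_grl_protection();    /* now released on every branch */
    return error;
  }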
mysql-test/r/log_tables.result:
Added results for test for bug#47924.
mysql-test/t/log_tables.test:
Added test for bug#47924.
sql/sql_rename.cc:
mysql_rename_tables() modified: replaced returns from the function
with gotos to the cleanup code block in case of error.
by a function and column
The bugreport reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC function result caused
a server crash or wrong results and
2) grouping by the function returning a blob caused
an unexpected "Duplicate entry" error and wrong
result.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable despite the nullability of its arguments.
Details for the 2nd bug:
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.
mysql-test/r/func_time.result:
Test case for bug #52160.
mysql-test/r/type_blob.result:
Test case for bug #52160.
mysql-test/t/func_time.test:
Test case for bug #52160.
mysql-test/t/type_blob.test:
Test case for bug #52160.
sql/item_timefunc.h:
Bug #52160: crash and inconsistent results when grouping
by a function and column
TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable despite the nullability of its arguments.
sql/sql_select.cc:
Bug #52160: crash and inconsistent results when grouping
by a function and column
The server is unable to create indices on blobs without
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields like regular blob fields.
"Grantor" columns' data is lost when replicating mysql.tables_priv.
The slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executed on it.
In this patch, the current user is put into the query log event for all GRANT
and REVOKE statements, and the SQL thread uses the user in the query log event
as the grantor.
mysql-test/suite/rpl/r/rpl_do_grant.result:
Add test for this bug.
mysql-test/suite/rpl/t/rpl_do_grant.test:
Add test for this bug.
sql/log_event.cc:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.cc:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.h:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_parse.cc:
Call binlog_invoker() for GRANT and REVOKE statements.
Rows events were wrongly applied on a temporary table with the same name.
But rows events are generated only for base tables, as a temporary
table's data is never binlogged in row mode. Normally, a base table of the
same name cannot be updated if a temporary table has the same name.
But there are two cases which can generate rows events on
the base table of the same name.
Case1: 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate rows events if it is unsafe.
Case2: Drop a transactional temporary table in a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is a InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the rows events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a rows event.
Fix assorted warnings that are generated in optimized builds.
Most of it is silencing warnings about variables that are set but unused.
This patch also introduces the MY_ASSERT_UNREACHABLE macro
which helps the compiler to deduce that a certain piece of
code is unreachable.
include/my_compiler.h:
Use GCC's __builtin_unreachable if available. It allows
GCC to deduce the unreachability of certain code paths,
thus avoiding warnings that, for example, claimed that a
variable could be used without being initialized (due to
unreachable code paths).
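A sketch of such a macro, assuming GCC's __builtin_unreachable
(available from GCC 4.5; MY_GNUC_PREREQ is the version check from
include/my_compiler.h, and the actual definition may differ):

  #include <assert.h>

  #if defined(__GNUC__) && MY_GNUC_PREREQ(4, 5)
  # define MY_ASSERT_UNREACHABLE() __builtin_unreachable()
  #else
  # define MY_ASSERT_UNREACHABLE() do { assert(0); } while (0)
  #endif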
data dictionary confusion
On file systems with case insensitive file names, and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on MacOSX, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e. a lower case version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
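A sketch of the invariant (hypothetical helper): every cache
operation must build its hash key from the normalized name.

  #include <ctype.h>

  static void normalize_table_name(char *dst, size_t dstlen, const char *src)
  {
    size_t i;
    for (i= 0; src[i] != '\0' && i < dstlen - 1; i++)
      dst[i]= (char) tolower((unsigned char) src[i]);
    dst[i]= '\0';
  }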
sql/sql_db.cc:
Normalize table name (i.e. lower case it)
sql/sql_table.cc:
table_name contains the normalized name
alias contains the real table name
For crash testing: kill the server without generating a core file.
include/my_dbug.h:
Use kill(getpid(), SIGKILL), which cannot be caught by signal handlers.
All DBUG_XXX macros should be no-ops in optimized mode; do that for DBUG_ABORT as well.
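A sketch of the idea (assuming POSIX signals; not the actual
my_dbug.h text):

  #include <signal.h>
  #include <unistd.h>

  #if !defined(DBUG_OFF)
  /* SIGKILL cannot be caught or ignored, and unlike abort() it does
     not produce a core file */
  # define DBUG_ABORT() (kill(getpid(), SIGKILL), 1)
  #else
  # define DBUG_ABORT() (1)      /* no-op in optimized mode */
  #endif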
sql/handler.cc:
Kill server without generating core.
sql/log.cc:
Kill server without generating core.
replication aborts
When receiving a 'SLAVE STOP' command, the slave SQL thread will roll back the
transaction and stop immediately if only transactional tables were updated,
even though 'CREATE|DROP TEMPORARY TABLE' statements are in it. But these
statements can never be rolled back. Because the mapping of temporary tables
to the user session remains until 'RESET SLAVE', the SQL thread will abort
with an error that the table already exists or doesn't exist when it restarts
and executes the whole transaction again.
After this patch, the SQL thread always waits till the transaction ends and
then stops, if 'CREATE|DROP TEMPORARY TABLE' statements are in it.
mysql-test/extra/rpl_tests/rpl_stop_slave.test:
Auxiliary file which is used to test this bug.
mysql-test/suite/rpl/t/rpl_stop_slave.test:
Test case for this bug.
sql/slave.cc:
Check if OPTION_KEEP_LOG is set. If it is set, the SQL thread should wait
until the transaction ends.
sql/sql_parse.cc:
Add a debug point for testing this bug.
mysql-test/r/grant.result:
Added result for the test case for bug#36742.
mysql-test/t/grant.test:
Added test case for bug#36742.
sql/sql_yacc.yy:
Added conversion of the host name part of user names to lowercase.
After ALTER TABLE which changed only table's metadata, row-based
binlog sometimes got corrupted since the tablemap was unexpectedly
set to 0 for subsequent updates to the same table.
ALTER TABLE which changed only table's metadata always reset
table_map_id for the table share to 0. Despite the fact that
0 is a valid value for table_map_id, this step caused problems
as it could have created a situation in which we had more than
one table share with table_map_id equal to 0. If more than one
table with table_map_id equal to 0 was updated in the same statement,
updates to these different tables were written into the same
rows event. This caused the slave server to crash.
This bug happens only on 5.1. It doesn't affect 5.5+.
This patch solves this problem by ensuring that ALTER TABLE
statements which change metadata only never reset table_map_id
to 0. To do this it changes reopen_table() to correctly use the
refreshed table_map_id value instead of using the old one or
resetting it.
mysql-test/suite/rpl/r/rpl_alter.result:
Add test for BUG#56226
mysql-test/suite/rpl/t/rpl_alter.test:
Add test for BUG#56226
When a slave executes a transaction bigger than the slave's
max_binlog_cache_size, the slave will crash. It is caused by the assert
that the server should only roll back the statement, but not the whole
transaction, if the error ER_TRANS_CACHE_FULL happens. But the slave SQL
thread always rolls back the whole transaction when an error happens.
After this patch, we always clear any error set in the SQL thread (it is
different from the error in 'SHOW SLAVE STATUS'), and it is cleared before
rolling back the transaction.
mysql-test/suite/rpl/r/rpl_binlog_max_cache_size.result:
SET binlog_cache_size and max_binlog_cache_size for all test cases.
Add test case for bug#55375.
mysql-test/suite/rpl/t/rpl_binlog_max_cache_size-master.opt:
binlog_cache_size and max_binlog_cache_size can be set in the client
connection, so remove this option file.
mysql-test/suite/rpl/t/rpl_binlog_max_cache_size.test:
SET binlog_cache_size and max_binlog_cache_size for all test cases.
Add test case for bug#55375.
sql/log_event.cc:
Some functions don't return the error code, so a wrong error code
could be reported. The error should always be set in thd->main_da,
so we use slave_rows_error_report to report the right error.
sql/slave.cc:
exec_relay_log_event() needs to call cleanup_context() to clear the context.
cleanup_context() will call end_trans().
Clear thd's error before cleanup_context(); this avoids triggering the
assert which caused this bug.
This is a regression from the fix for bug no 38999. A storage engine capable
of reading only a subset of a table's columns updates corresponding bits in
the read buffer to signal that it has read NULL values for the corresponding
columns. It cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements compare the NULL bits
using memcmp, inadvertently comparing bits that were never requested from the
storage engine. The regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even those that it had no
knowledge of. This has devastating effects for the index merge algorithm,
which relies on all NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and InnoDB plugin and
changes the server's method of comparing records. For engines that always read
entire rows, we proceed as usual. For engines capable of reading only select
columns, the record buffers are now compared on a column by column basis. An
assertion was also added so that non-comparable buffers are never read. Some
relevant copy-pasted code was also consolidated into a new function.
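A standalone sketch of the column-by-column comparison (stand-in
types; the consolidated function in the server differs):

  #include <string.h>

  struct column_ref
  {
    const unsigned char *old_val;
    const unsigned char *new_val;
    size_t length;
    int was_requested;       /* was this column read from the engine? */
  };

  static int records_differ(const struct column_ref *cols, size_t count)
  {
    for (size_t i= 0; i < count; i++)
    {
      if (!cols[i].was_requested)
        continue;            /* never compare bits the engine did not fill */
      if (memcmp(cols[i].old_val, cols[i].new_val, cols[i].length))
        return 1;
    }
    return 0;
  }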
Surprisingly, a Slave_log_event would show up in the binary
log. This event is never used and should not appear in the
logs. As such, when the slave (or the mysqlbinlog tool) reads the
event, it will hit an invalid pointer (the reference to the
descriptor event when deserializing the Slave_log_event was
purposely set to NULL).
The presence of the Slave_log_event denotes a corrupted log, but
we cannot tell how the log got corrupted in the first
place. However, we can make the server cope with such events when
it reads them - in case of log corruption - and fail gracefully.
This patch makes the server/mysqlbinlog report that it has
found an invalid log event when a Slave_log_event is read.
LOAD DATA into partitioned MyISAM table
The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto_increment and repair).
Solved by adding a specific mutex for the partitioning
auto_increment.
Also destroy the ha_data structure in
free_table_share (a change which is to be propagated
into 5.5).
This is a 5.1 ONLY patch, already fixed in 5.5+.
The crash happens because the original join table is replaced with a
temporary table at execution stage, and later we attempt to use this
temporary table in select_describe. It might happen that the
Item_subselect::update_used_tables() method, which sets the const_item flag,
is not called for some reason (no WHERE/HAVING condition in the subquery,
for example). This prevents JOIN::join_tmp creation and breaks the
original join.
The fix is to call ::update_used_tables() before the ::const_item() check.
mysql-test/r/ps.result:
test case
mysql-test/t/ps.test:
test case
sql/item_subselect.cc:
call ::update_used_tables() before the ::const_item() check.
Bug#57113: ha_partition::extra(ha_extra_function):
Assertion `m_extra_cache' failed
The fix for bug#55458 included DBUG_ASSERTs causing
debug builds of the server to crash on
another multi-table update.
Removed the asserts since they were wrong.
(Updated after testing the patch in 5.5.)
mysql-test/r/partition.result:
updated result
mysql-test/t/partition.test:
Added test for bug#57113
sql/ha_partition.cc:
Removed the assert for m_extra_cache when
::extra(HA_PREPARE_FOR_UPDATE) was called.
The procedure for setting up a fake binary log, by changing the
relay log files manually, is run twice when we issue mtr with
--repeat=2. However, when setting it up for the second time,
neither the SQL thread is reset nor the server restarted. This
means that stale relay log IO_CACHE metadata from the
first run is left around. As a consequence, when the test is
run for the second time, the IO_CACHE for the relay log has a
position in file different from zero, triggering the assertion.
We fix this by deploying a call to my_b_seek before calling
check_binlog_magic in next_event. This prevents the server
from asserting in the case where the SQL thread reads
from a hot log (relay.NNNNN), is stopped, is restarted
from a previous cold log (relay.MMMMM), and then reaches
the hot log relay.NNNNN again, in which case the read
coordinates were not set to zero but to the old values.
Additionally, some comments were added to the source code.
In case of an outer join and an empty WHERE condition,
an 'always true' condition is created for the WHERE clause.
Later, in mysql_select(), the original SELECT_LEX WHERE
condition is overwritten with the created cond.
However, the SELECT_LEX condition is also used as the initial
condition in mysql_select()->JOIN::prepare().
On the second execution of the PS the modified SELECT_LEX condition
is taken, and this leads to a crash.
The fix is to restore the original SELECT_LEX condition
(set to NULL if the original cond is NULL) in
reinit_stmt_before_use().
The HAVING clause is fixed too, for safety
(no test case, as I did not manage to think up an
appropriate example).
mysql-test/r/ps.result:
test case
mysql-test/t/ps.test:
test case
sql/sql_prepare.cc:
restore the original SELECT_LEX condition
(set to NULL if the original cond is NULL) in
reinit_stmt_before_use()
Fixed a number of memory leaks discovered by valgrind.
dbug/dbug.c:
This is actually an addendum to the fix for bug #52629:
- there is no point in limiting the fix to just global
variables; session ones are also affected.
- zero all fields when allocating a new 'state' structure so
that FreeState() does not deal with uninitialized data later.
- add a check for a NULL pointer in DBUGCloseFile()
mysql-test/r/partition_error.result:
Added a test case for bug #56709.
mysql-test/r/variables_debug.result:
Added a test case for bug #56709.
mysql-test/t/partition_error.test:
Added a test case for bug #56709.
mysql-test/t/variables_debug.test:
Added a test case for bug #56709.
sql/item_timefunc.cc:
There is no point in declaring 'value' as a member of
Item_extract and dynamically allocating memory for it in
Item_extract::fix_length_and_dec(), since this string is only
used as temporary storage in Item_extract::val_int().
sql/item_timefunc.h:
Removed 'value' from the Item_extract class definition.
sql/sql_load.cc:
- we may need to deallocate 'buffer' even when 'error' is
non-zero in some cases, since 'error' is public, and there is
external code modifying it.
- assign NULL to buffer when deallocating it so that we don't
do it twice in the destructor
- there is no point in changing 'error' in the destructor.
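The sql_load.cc deallocation notes, as a standalone sketch
(hypothetical helper name):

  #include <stdlib.h>

  static void free_buffer_once(unsigned char **buffer)
  {
    free(*buffer);     /* free(NULL) is a no-op, so this is always safe */
    *buffer= NULL;     /* the destructor will then not free it again */
  }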
The subselect executes twice, at the JOIN::optimize stage
and at the JOIN::execute stage. At the optimize stage the
InnoDB prebuilt struct, which is used for the
retrieval of column values, is initialized in
ha_innobase::index_read(), where prebuilt->sql_stat_start is true.
After QUICK_ROR_INTERSECT_SELECT finishes its job it
restores the read_set/write_set bitmaps with their initial values
and deactivates one of the handlers used by
QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup
(it's the case when we reuse the original handler as one of the
handlers required by the QUICK_ROR_INTERSECT_SELECT object).
On the second subselect execution the inactive handler is activated
in QUICK_RANGE_SELECT::reset, in file->ha_index_init().
In ha_index_init the InnoDB prebuilt struct is reinitialized
with inappropriate read_set/write_set bitmaps. Further
reinitialization in ha_innobase::index_read() does not
happen, as prebuilt->sql_stat_start is false.
This leads to partial retrieval of the required field values,
and we get a mix of field values from different records
in the record buffer.
The fix is to reset the
read_set/write_set bitmaps, as these values
are required for proper initialization of
the internal InnoDB struct which is used for
the retrieval of column values
(see build_template(), ha_innodb.cc)
mysql-test/include/index_merge_ror_cpk.inc:
test case
mysql-test/r/index_merge_innodb.result:
test case
mysql-test/r/index_merge_myisam.result:
test case
sql/opt_range.cc:
if a ROR merge scan is used we need to reset the
read_set/write_set bitmaps, as these values
are required for proper initialization of
the internal InnoDB struct which is used for
the retrieval of column values
(see build_template(), ha_innodb.cc)
adding new indexes
A fast alter table requires that the existing (old) table
and indices are unchanged (i.e. only new indices can be
added). To verify this, the layout and flags of the old
table/indices are compared for equality with the new.
The PACK_KEYS option is a no-op in InnoDB, but the flag
exists, and is used in the table compare. We need to
check this (table) option flag before deciding whether an
index should be packed or not. If the table has
explicitly set PACK_KEYS to 0, the created indices should
not be marked as packed/packable.
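A sketch of the check; HA_OPTION_NO_PACK_KEYS is the table option
flag, but its value and the helper here are illustrative:

  #define HA_OPTION_NO_PACK_KEYS_SKETCH (1UL << 7)  /* illustrative value */

  static int index_may_be_packed(unsigned long table_options)
  {
    /* if the table explicitly set PACK_KEYS=0, never mark a new
       index as packed/packable */
    return !(table_options & HA_OPTION_NO_PACK_KEYS_SKETCH);
  }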
compression protocol.
The loss of connection was caused by a malformed packet
sent by the server in the case when the query cache was in use.
When storing data in the query cache, the query cache
memory allocation algorithm had a tendency to reduce
the number of memory blocks necessary to store a result
set, up to finally storing the entire result set in a single
block. With a significant result set, this memory block
could turn out to be quite large - 30, 40 MB and beyond.
When such a result set was sent to the client, the entire
memory block was compressed and written to the network as a
single network packet. However, the length of a
network packet is limited to 0xFFFFFF (16MB), since
the packet format allows only 3 bytes for the packet length.
As a result, a malformed, overly large packet
with a truncated length would be sent to the client
and break the client/server protocol.
The solution is, when sending result sets from the query
cache, to ensure that the data is chopped into
network packets of size <= 16MB, so that there
is no corruption of packet length. This solution,
however, has a shortcoming: since the result set
is still stored in the query cache as a single block,
at the time of sending we've lost the boundaries of individual
logical packets (one logical packet = one row of the result
set) and thus can end up sending a truncated logical
packet in a compressed network packet.
As a result, on the client we may require more memory than
max_allowed_packet to keep both the truncated
last logical packet and the next compressed packet.
This never (or in practice never) happens without compression,
since without compression it's very unlikely that
a) a truncated logical packet would remain on the client
when it's time to read the next packet
b) a subsequent logical packet that is being read would be
so large that size-of-new-packet + size-of-old-packet-tail >
max_allowed_packet.
To remedy this issue, we send data in 1MB-sized packets,
which is below the current client default of 16MB for
max_allowed_packet, but large enough to ensure there is no
unnecessary overhead from too many syscalls per result set.
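A standalone sketch of the chunking (hypothetical names; the real
change is in Query_cache::send_result_to_client):

  #include <stddef.h>

  #define QC_CHUNK_SIZE (1024UL * 1024UL)  /* 1MB, well under the 16MB limit */

  static int send_in_chunks(const unsigned char *data, size_t length,
                            int (*net_write)(const unsigned char *, size_t))
  {
    while (length > 0)
    {
      size_t chunk= length > QC_CHUNK_SIZE ? QC_CHUNK_SIZE : length;
      if (net_write(data, chunk))
        return 1;                          /* propagate network error */
      data+= chunk;
      length-= chunk;
    }
    return 0;
  }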
sql/net_serv.cc:
net_realloc() modified: consider already used memory
when comparing packet buffer lengths
sql/sql_cache.cc:
modified Query_cache::send_result_to_client: send the result to the client
in chunks limited to 1 megabyte.