The crash happens because of an incorrect max_length calculation
in the QUOTE function (due to overflow): max_length is set
to 0, which leads to an assertion failure.
The fix is to cast the expression result to a
ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
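A standalone sketch of the clamping pattern described above. (Illustrative only: the
MAX_BLOB_WIDTH value and the 2*n+2 worst-case estimate are assumptions here, not the
server's exact code.)

  #include <cstdint>
  #include <cstdio>

  typedef unsigned long long ulonglong;
  static const ulonglong MAX_BLOB_WIDTH= 16777216;   /* value assumed for illustration */

  /* Compute a worst-case result length without 32-bit wraparound:
     do the arithmetic in ulonglong first, clamp, then narrow. */
  static uint32_t quote_max_length(uint32_t arg_max_length)
  {
    ulonglong max_result= 2ULL * arg_max_length + 2;  /* every char escaped + two quotes (assumed estimate) */
    if (max_result > MAX_BLOB_WIDTH)
      max_result= MAX_BLOB_WIDTH;
    return (uint32_t) max_result;
  }

  int main()
  {
    /* Near UINT32_MAX the unclamped 32-bit expression would wrap to a
       tiny value, tripping the assertion described above. */
    printf("%u\n", quote_max_length(4294967295U));
    return 0;
  }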
Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size may crash the server.
The estimation of the number of index entries was done incorrectly,
causing a later assertion failure or server crash.
Docs note: the minimum value for myisam_sort_buffer_size has been
changed from 4 to 4096.
debug_max
There was a buffer overrun when unpacking the date
field. Incidentally, this seems to affect only Solaris x86_64
debug builds, but other platforms may be vulnerable as well.
In particular, the buffer size used did not take into
consideration that the '\0' character would be written into
it.
We fix this by increasing the size of the buffer by
one extra byte (the one for the '\0').
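A standalone illustration of the off-by-one (the date format and buffer length here are
assumed, not the server's actual constants):

  #include <cstdio>

  #define MAX_DATE_STRING_LEN 10   /* "YYYY-MM-DD"; length assumed for illustration */

  int main()
  {
    /* Before the fix the buffer had no room for the terminating '\0'
       that the formatting routine writes; the extra byte fixes that. */
    char buf[MAX_DATE_STRING_LEN + 1];
    snprintf(buf, sizeof(buf), "%04d-%02d-%02d", 2010, 3, 31);
    puts(buf);
    return 0;
  }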
which was detected during the build of 5.5.3-m3.
This requires version 9 of IBM C++ to help.
More fixes will still be needed; it is very strict
(or rather: a bit picky?) about templates.
Conflicts:
Text conflict in client/mysqlbinlog.cc
Text conflict in mysql-test/Makefile.am
Text conflict in mysql-test/collections/default.daily
Text conflict in mysql-test/r/mysqlbinlog_row_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_typeconv_innodb.result
Text conflict in mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test
Text conflict in mysql-test/suite/rpl/t/rpl_row_create_table.test
Text conflict in mysql-test/suite/rpl/t/rpl_slave_skip.test
Text conflict in mysql-test/suite/rpl/t/rpl_typeconv_innodb.test
Text conflict in mysys/charset.c
Text conflict in sql/field.cc
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_func.cc
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/rpl_utility.cc
Text conflict in sql/rpl_utility.h
Text conflict in sql/set_var.cc
Text conflict in sql/share/Makefile.am
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_plugin.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_table.cc
Text conflict in storage/example/ha_example.h
Text conflict in storage/federated/ha_federated.cc
Text conflict in storage/myisammrg/ha_myisammrg.cc
Text conflict in storage/myisammrg/myrg_open.c
If the listed columns in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement did not match, a debug assertion would
trigger in the cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting
which is performed before greedy_search starts.
In our case the table which really has no dependencies
should be put at the top of the list, but this does not
happen, as the inner tables appear to have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from
the condition which checks whether table dependencies
should be updated.
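A standalone sketch of the masking idea (table_map and the bit value are simplified
stand-ins for the server's types):

  #include <cstdio>

  typedef unsigned long long table_map;
  static const table_map RAND_TABLE_BIT= 1ULL << 63;  /* simplified stand-in */

  /* Ignore the pseudo-dependency on RAND() when deciding whether a
     table still has real dependencies, so a truly independent table
     can be sorted to the top of the join order. */
  static bool has_real_dependencies(table_map dependent)
  {
    return (dependent & ~RAND_TABLE_BIT) != 0;
  }

  int main()
  {
    printf("%d\n", (int) has_real_dependencies(RAND_TABLE_BIT));        /* 0: only RAND() */
    printf("%d\n", (int) has_real_dependencies(RAND_TABLE_BIT | 0x2));  /* 1: real dependency */
    return 0;
  }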
col equal to itself!
There's no need to copy the value of a field into itself.
While generally harmless (except for some performance penalty),
it may be dangerous when the copy code doesn't expect this.
Fixed by checking if the source field is the same as the destination
field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
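A simplified standalone sketch of the guard, preserving the null-flag assignment order
(the Field struct here is a stand-in, not the server's class):

  #include <cstring>
  #include <cstdio>

  struct Field {          /* minimal stand-in for illustration */
    char buf[16];
    bool is_null;
  };

  struct Copier {
    Field *from, *to;
    bool null_value;

    void copy()
    {
      null_value= from->is_null;  /* null flag first, as noted above */
      if (from == to)             /* same field: nothing to copy */
        return;
      to->is_null= from->is_null;
      memcpy(to->buf, from->buf, sizeof(to->buf));
    }
  };

  int main()
  {
    Field f= {"abc", false};
    Copier c= {&f, &f, false};
    c.copy();                     /* safe: the self-copy is skipped */
    printf("%s %d\n", f.buf, (int) c.null_value);
    return 0;
  }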
The failure was caused by a flaw: a pointer to an uninitialized buffer was
passed to DBUG_PRINT in Protocol_text::store().
Fixed by splitting the printout into two branches:
one for the case where the problematic argument has zero length, and one for the rest.
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now distinguishes between IGNORE and ERROR_FOR_NULL and saves/restores
the check-field options.
When mysqlbinlog was given the --database=X flag, it always printed
'ROLLBACK TO', but the corresponding 'SAVEPOINT' statement was not
printed. The replication filters (replicate-do/ignore-db) and the binlog
filters (binlog-do/ignore-db) had the same problem. They are solved
together in this patch.
After this patch, we always check whether the query is a 'SAVEPOINT'
statement or not. Because this is a literal check, 'SAVEPOINT' and
'ROLLBACK TO' statements are also binlogged in uppercase, without
any comments.
Binlogs produced before this patch can be handled correctly except in one case:
when comments appear in front of the keywords, for example:
/* bla bla */ SAVEPOINT a;
/* bla bla */ ROLLBACK TO a;
The log event for 'CREATE EVENT' was binlogged with garbage
at the end of the query if the 'CREATE EVENT' was followed by another SQL statement
and they were executed as one command,
for example:
DELIMITER |;
CREATE EVENT e1 ON EVERY DAY DO SELECT 1; SELECT 'a';
DELIMITER ;|
When binlogging 'CREATE EVENT', we always create a new statement with the definer
and write it into the log event. The new statement is made from cpp_buf (the
preprocessed buffer), which is not a C string (it does not end with '\0'),
but it was copied as a C string.
In this patch, cpp_buf is copied with its length.
A failed REVOKE statement is logged with error=0, thus causing
the slave to stop. The slave should not stop as this was an
expected error. Given that the execution failed on the master as
well, the error code should be logged so that the slave can replay
the statement, get an error and compare with the master's
execution outcome. If errors match, then slave can proceed with
replication, as the error it got, when replaying the statement,
was expected.
In this particular case, the bug surfaces because the error code
is pushed to the THD diagnostics area after writing the event to
the binary log. Therefore, it would be logged with the THD
diagnostics area clean, hence its error code would not contain
the correct code.
We fix this by moving the error reporting ahead of the call to
the routine that writes the event to the binary log.
(Original patch by Sinisa Milivojevic)
The YEAR(4) value of 2000 was equal to the "bad" YEAR(4) value of 0000.
The get_year_value() function has been modified to not adjust bad
YEAR(4) values to 2000.
Conflicts:
Text conflict in mysql-test/r/partition_innodb.result
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_cmpfunc.h
Text conflict in sql/item_sum.h
Text conflict in sql/log_event_old.cc
Text conflict in sql/protocol.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_yacc.yy
The problem is that when we make the condition for a
grouped result, the const part of the condition is cut off.
It happens because some parts of the 'having' condition
which refer to an outer join become const after
make_join_statistics. These parts may be lost
during further 'having' condition transformation
in JOIN::exec. The fix is to add a check of the 'having'
condition for const tables after
make_join_statistics is performed.
DBUG_SYNC_POINT has at least one strong limitation: it is not defined
on all platforms. It also has issues cooperating with @@debug.
All in all, its functionality is superseded by the DEBUG_SYNC facility and
there is no reason to maintain the old, less flexible one.
Fixed by adding a debug_sync_set_action() function as a facility to set up
a sync action in the server source code and rewriting the existing simulations
(3 were found) to use it.
A couple of tests have been reworked as well.
The patch offers a pattern for setting sync points in replication threads
where the standard DEBUG_SYNC does not suffice to reach its goals.
The optimizer erroneously translated LEFT JOIN into INNER JOIN,
which cuts off rows with a NULL right side. It happens
because Item_row uses the not_null_tables() method from the
base (Item) class and does not calculate 'null tables'
properly. The fix is to add calculation of 'not null tables'
to Item_row.
The crash happens because of a discrepancy between the values of
const_tables and join->const_table_map (in make_join_statistics).
The calculation of const_tables used a condition with an
HA_STATS_RECORDS_IS_EXACT flag check; the calculation of
join->const_table_map did not use this flag check.
In the case of a MERGE table without a union, with an index,
the table does not become a const table and
thus join_read_const_table() is not called
for the table. join->const_table_map supposes
this table is const, and later in make_join_select
this table is used for making and calculating the const
condition. As the table record buffer is not populated,
this leads to a crash.
The fix is to add a check of whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
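A sketch of the added guard (the flag's bit value and the surrounding names are
simplified for illustration):

  #include <cstdio>

  typedef unsigned long long table_flags_t;
  static const table_flags_t HA_STATS_RECORDS_IS_EXACT= 1ULL << 9;  /* bit value assumed */

  /* Only mark a table as const in join->const_table_map when the
     engine guarantees an exact row count; a MERGE-like engine without
     that guarantee must not be treated as const. */
  static bool may_mark_const(table_flags_t engine_flags, bool zero_or_one_row)
  {
    return zero_or_one_row && (engine_flags & HA_STATS_RECORDS_IS_EXACT) != 0;
  }

  int main()
  {
    printf("%d\n", (int) may_mark_const(HA_STATS_RECORDS_IS_EXACT, true));  /* 1 */
    printf("%d\n", (int) may_mark_const(0, true));                          /* 0: MERGE-like */
    return 0;
  }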
All numeric operators and functions on integer, floating point
and DECIMAL values now throw an 'out of range' error rather
than returning an incorrect value or NULL, when the result is
out of supported range for the corresponding data type.
Some test cases in the test suite had to be updated
accordingly either because the test case itself relied on a
value returned in case of a numeric overflow, or because a
numeric overflow was the root cause of the corresponding bugs.
The latter tests are no longer relevant, since the expressions
used to trigger the corresponding bugs are not valid anymore.
However, such test cases have been adjusted and kept "for the
record".
for InnoDB
The class Field_bit_as_char stores the metadata for the
field incorrectly because bytes_in_rec and bit_len are set
to (field_length + 7) / 8 and 0 respectively, while
Field_bit has the correct values field_length / 8 and
field_length % 8.
Solved the problem by re-computing the values for the
metadata based on the field_length instead of using the
bytes_in_rec and bit_len variables.
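The difference between the two computations can be seen in a standalone check (values
exactly as described above):

  #include <cstdio>

  int main()
  {
    for (unsigned field_length= 1; field_length <= 16; field_length++)
    {
      /* Field_bit_as_char (incorrect metadata): */
      unsigned bad_bytes= (field_length + 7) / 8, bad_bits= 0;
      /* Field_bit (correct metadata): */
      unsigned bytes_in_rec= field_length / 8, bit_len= field_length % 8;
      printf("BIT(%2u): as_char=(%u,%u)  bit=(%u,%u)\n",
             field_length, bad_bytes, bad_bits, bytes_in_rec, bit_len);
    }
    return 0;
  }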
To handle compatibility with old servers, a table map
flag was added to indicate that the bit computation is
exact. If the flag is clear, the slave computes the
number of bytes required to store the bit field and
compares that instead, effectively allowing replication
*without conversion* from any field length that requires
the same number of bytes to store.
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification) we need to disable
in-place alter table because we can't ask the storage engine to have
two copies of the index with the same name even temporarily (if we
first do the ADD INDEX and then DROP INDEX) and we can't modify
indexes that are needed by e.g. foreign keys if we first do
DROP INDEX and then ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
concurrent I_S query
There were two problems:
1) MYSQL_LOCK_IGNORE_FLUSH also ignored name locks
2) there was a race between abort_and_upgrade_locks and
alter_close_tables
(i.e. remove_table_from_cache and
close_data_files_and_morph_locks)
This allowed the table to be opened with the MYSQL_LOCK_IGNORE_FLUSH flag,
resulting in renaming a partition that was already in use,
which could cause the table to become unusable.
The solution was to not allow IGNORE_FLUSH to skip waiting for
a name-locked table,
and to not release the LOCK_open mutex between the
calls to remove_table_from_cache and
close_data_files_and_morph_locks, by merging the functions
abort_and_upgrade_locks and alter_close_tables.
In BUG#49562 we fixed the case where numeric user var events
would not serialize the flag stating whether the value was signed
or unsigned (unsigned_flag). This fixed the case where the slave
would get an overflow while treating unsigned values as
signed.
In this bug, we find that the unsigned_flag can sometimes change
between the moment that the user value is recorded for binlogging
purposes and the actual binlogging time. Since we take the
unsigned_flag from the runtime variable data at binlogging time,
while the variable value comes from the copy taken earlier in
the execution, there may be an inconsistency in the
User_var_log_event between the variable value and its
unsigned_flag.
We fix this by also copying the unsigned_flag of the
user_var_entry when its value is copied, for binlogging
purposes. Later, at binlogging time, we use the copied
unsigned_flag and not the one in the runtime user_var_entry
instance.
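A standalone sketch of taking the flag snapshot together with the value (the struct
names are stand-ins for the server's user_var_entry):

  #include <cstdio>

  struct user_var_entry {    /* stand-in for the runtime entry */
    long long value;
    bool unsigned_flag;
  };

  struct binlog_copy {       /* what gets captured for binlogging */
    long long value;
    bool unsigned_flag;
  };

  /* Copy value and unsigned_flag at the same moment, so binlogging
     cannot later pair an old value with a newer flag. */
  static binlog_copy snapshot_for_binlog(const user_var_entry &var)
  {
    binlog_copy c= { var.value, var.unsigned_flag };
    return c;
  }

  int main()
  {
    user_var_entry var= { -1, true };
    binlog_copy c= snapshot_for_binlog(var);
    var.unsigned_flag= false;  /* the runtime entry changes later... */
    printf("%lld %d\n", c.value, (int) c.unsigned_flag);  /* ...the snapshot stays consistent */
    return 0;
  }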
NULL column for NULL
The optimization to read MIN() and MAX() values from an
index did not properly handle comparisons with NULL
values. Fixed by giving up that particular optimization step
if there are non-NULL-safe comparisons with NULL values, as
the result is NULL anyway.
Also, the Oracle copyright notice was added to all files.
Base Tables
The type inference of a view column caused the result to be
interpreted as the wrong type: DATE columns were interpreted
as TIME and TIME as DATETIME. This happened because view
columns are represented by Item_ref objects, as opposed to
Item_field objects. Item_ref had no method for retrieving a TIME
value and thus was forced to depend on the default
implementation for any expression, which caused the
expression to be evaluated as a string and then parsed into
a TIME/DATETIME value.
Fixed by letting the Item_ref classes forward the request for a
TIME value to the referred Item - which is a field in this
case - so it reads the TIME value directly without
conversion.
DDL no longer aborts mysql_lock_tables(), and hence
we no longer need to support need_reopen flag of this
call.
Remove the flag, and all the code in the server
that was responsible for handling the case when
it was set. This allowed us to simplify
open_and_lock_tables_derived(), the delayed thread, and
multi-update.
Rename MYSQL_LOCK_IGNORE_FLUSH to MYSQL_OPEN_IGNORE_FLUSH,
since we now only support this flag in open_table().
Rename MYSQL_LOCK_PERF_SCHEMA to MYSQL_LOCK_LOG_TABLE,
to avoid confusion.
Move the wait for the global read lock for cases
when we do updates in SELECT f1() or DO (UPDATE) to
open_table() from mysql_lock_tables(). When waiting
for the read lock, we could raise need_reopen flag,
which is no longer present in mysql_lock_tables().
Since the block responsible for waiting for GRL
was moved, MYSQL_LOCK_IGNORE_GLOBAL_READ_LOCK
was renamed to MYSQL_OPEN_IGNORE_GLOBAL_READ_LOCK.
This deadlock could occur between one connection executing
SET GLOBAL EVENT_SCHEDULER= ON and another executing SET GLOBAL
EVENT_SCHEDULER= OFF. The bug was introduced by WL#4738.
The first connection would hold LOCK_event_metadata (protecting
the global variable) while trying to lock LOCK_global_system_variables
when starting the event scheduler thread (in THD::init()).
The second connection would hold LOCK_global_system_variables
while trying to get LOCK_event_scheduler after stopping the event
scheduler inside event_scheduler_update().
This patch fixes the problem by not using LOCK_event_metadata to
protect the event_scheduler variable. It is still protected using
LOCK_global_system_variables. This fixes the deadlock as it removes
one of the two mutexes used to produce it.
However, this patch opens up the possibility that the event_scheduler
variable and the real event_scheduler state can become out of sync
(e.g. variable = OFF, but scheduler running). But this can only
happen under very unlikely conditions - two concurrent SET GLOBAL
statements, with one thread interrupted at the exact wrong moment.
This is preferable to having the possibility of a deadlock.
This patch also fixes a bug where it was possible to exit create_event()
without releasing LOCK_event_metadata if running out of memory during
its execution.
No test case added since a repeatable test case would have required
excessive use of new sync points. Instead we rely on the fact that
this bug was easily reproducible using RQG tests.
SunStudio
SunStudio compilers of late warn about methods that might hide
methods in base classes due to the use of overloading combined
with overriding. SunStudio also warns about variables defined
in local scope or method arguments that have the same name as
a member attribute of the class.
This patch renames methods that might hide base class methods,
to make it easier both for humans and compilers to see what is
actually called. It also renames variables in local scope.
There are two issues fixed here:
1. We needed to update the result files for some of the
mysqlbinlog_* tests, because now some padding chars
are not output anymore.
2. We needed to change Field_string::pack so that
for BINARY types the padding chars are not packed
(lengthsp will return the full length for these types).
Conflicts:
Text conflict in client/mysqlbinlog.cc
Text conflict in mysql-test/r/explain.result
Text conflict in mysql-test/r/subselect.result
Text conflict in mysql-test/r/subselect3.result
Text conflict in mysql-test/r/type_datetime.result
Text conflict in sql/share/Makefile.am
The problem is introduced by WL#4435 "Support OUT-parameters in
prepared statements".
When a statement that has out parameters was reprepared,
the reprepare request error was ignored, and an
attempt to send out parameters to the client was made.
Since the out parameter list was not initialized in case
of an error, this attempt led to a crash.
Don't try to send out parameters to the client
if an error occurred in statement execution.
In BUG#51787 we were using the wrong charset to print out the
data. We were using the field charset for the string that would
hold the information. This caused the assertion, because the
string length was not aligned with UTF32's byte requirements for
storage.
We fix this by using &my_charset_latin1 in the string object
instead of field->charset(). As a side effect, we needed to
extend the show_sql_type interface so that the field
charset is now passed as a parameter, making it possible to
calculate the correct field size.
In BUG#51716 we had issues with Field_string::pack and
Field_string::unpack. When packing, the length was incorrectly
calculated. When unpacking, the string would be padded
with the wrong bytes (a few bytes fewer than it should).
We fix this by resorting to charset abstractions (functions) that
calculate the correct length when packing and pad the
string correctly when unpacking.
LOCK kills the server.
Prohibit FLUSH TABLES WITH READ LOCK application to views or
temporary tables.
Fix a subtle bug in the implementation whereby we
did not remove table share objects from the table cache after
acquiring exclusive locks.
The problem was that in read only mode (read_only enabled),
the server would mistakenly deny data modification attempts
for temporary tables which belong to a transactional storage
engine (e.g. InnoDB).
The solution is to allow transactional temporary tables to be
modified under read only mode. As a whole, the read only mode
does not apply to any kind of temporary table.
(regression)
Problem was that partition pruning did not exclude the
last partition if the range was beyond it
(i.e. not using MAXVALUE)
Fix was to not include the last partition if the
partitioning function value was not within the partition
range.
SET autocommit=1 while XA transaction is active may
cause various side effects, including memory corruption
and server crash.
The problem is that SET autocommit=1 and further queries
attempt to commit local transaction, whereas XA transaction
is still active.
As local and XA transactions are mutually exclusive, this
patch forbids enabling autocommit mode while XA transaction
is active.
MySQL uses two source layouts when building: the bzr
layout and the source package layout.
The previous fix for bug 35250 contained 1 change that is
valid for both modes and a number of changes that are valid
only for the bzr source layout.
The important thing was to fix the source package layout.
And for this the change in configure.in was sufficient.
It's not trivial (and not requested by this bug) to support
VPATH builds from the bzr trees.
This is why the other changes are reverted and the change to
fix the VPATH build for source distributions is left intact.
Ensure that we store the correct cached_field_type whenever we cache Field items
(in this case it allows us to compare dates as dates, rather than as strings).
The problem was that killing a query during the optimization
phase of a subselect would lead to crashes. The root of the
problem is that the subselect execution engine ignores failures
(eg: killed) during the optimization phase (JOIN::optimize),
leading to a crash once the subquery is executed due to
partially initialized structures (in this case a join tab).
The optimal solution would be to clean up certain optimizer
structures if the optimization phase fails, but currently
there is no infrastructure to properly track and clean up
the structures. To work around the whole problem, one reasonably
good solution is to avoid executing a subselect if the query
has been killed, cutting short any problems caused by failures
during the optimization phase.
The problem was that UNINSTALL PLUGIN wasn't performing privilege
checks before removing a plugin. Any user (including users without
any kind of privileges) could uninstall any plugin.
The solution is to verify if the user has the DELETE privilege for
the mysql.plugin table before uninstalling a plugin.
The problem is that Item_direct_view_ref, which is inherited
from Item_ident, updates orig_table_name and table_name with
the same values. The fix is the introduction of a new constructor
into Item_ident and up the hierarchy, which updates orig_table_name
and table_name separately.
The fill_schema_processlist function accesses THD::query() without proper protection,
so a parallel thread kill can lead to access to freed memory.
per-file comments:
sql/sql_load.cc
Bug#51377 Crash in information_schema / processlist on concurrent DDL workload
the THD::set_query_inner() call needs to be protected.
But here we don't need to change the original thd->query() at all.
sql/sql_show.cc
Bug#51377 Crash in information_schema / processlist on concurrent DDL workload
protect the THD::query() access with the THD::LOCK_thd_data mutex.
The problem is that not all column names retrieved from a SELECT
statement can be used as view column names due to length and format
restrictions. The server failed to properly check the conformity
of those automatically generated column names before storing the
final view definition on disk.
Since columns retrieved from a SELECT statement can be anything
ranging from functions to constant values of any format and length,
the solution is to rewrite any names that are not acceptable
as a view column name to a pre-defined format.
The name is rewritten to "Name_exp_%u" where %u translates to the
position of the column. To avoid this conversion scheme, define
explicit names for the view columns via the column_list clause.
Also, aliases are now only generated for top level statements.
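A minimal sketch of the rewrite format (the buffer size is assumed for illustration):

  #include <cstdio>

  int main()
  {
    char name_buf[32];        /* size assumed for illustration */
    unsigned column_pos= 3;
    /* Unusable auto-generated column names are rewritten to this format. */
    snprintf(name_buf, sizeof(name_buf), "Name_exp_%u", column_pos);
    puts(name_buf);           /* Name_exp_3 */
    return 0;
  }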
The problem was that bits of the destructive equality propagation
optimization weren't being undone after the execution of a stored
program. Modifications to the parse tree that are based on transient
properties must be undone to enable the re-execution of stored
programs.
The solution is to cleanup any references to predicates generated
by the equality propagation during the execution of a stored program.
Before this fix, the performance schema instrumentation
in mdl.h / mdl.cc was incomplete, causing:
- build warnings,
- no data collection for the performance schema
This fix:
- added instrumentation helpers for the new preferred
reader read write lock, mysql_prlock_*
- completely implemented the performance schema
instrumentation of mdl.h / mdl.cc
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/r/explain.result
Text conflict in mysql-test/r/having.result
Text conflict in mysql-test/suite/rpl/t/disabled.def
Text conflict in mysql-test/suite/rpl/t/rpl_slave_skip.test
Text conflict in storage/federated/ha_federated.cc
consider clustered primary keys
When choosing the shortest index for a covering index scan,
the optimizer ignored the fact that a clustered primary
key read involves the whole table data.
The find_shortest_key function has been modified to
take into account the fact that a clustered PK has the
longest key of the possible covering indices.
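A simplified sketch of the adjusted selection (the key metadata is a stand-in for the
server's structures):

  #include <climits>
  #include <cstdio>

  struct key_info { unsigned key_length; bool is_clustered_pk; };

  /* Pick the shortest covering index, treating a clustered primary key
     as the longest possible candidate since reading it involves whole
     table data. */
  static int find_shortest_key(const key_info *keys, int n)
  {
    int best= -1;
    unsigned best_len= UINT_MAX;
    for (int i= 0; i < n; i++)
    {
      unsigned len= keys[i].is_clustered_pk ? UINT_MAX : keys[i].key_length;
      if (len < best_len) { best_len= len; best= i; }
    }
    return best;
  }

  int main()
  {
    key_info keys[]= { {4, true}, {8, false}, {16, false} };
    printf("%d\n", find_shortest_key(keys, 3));  /* 1: the 8-byte secondary key wins */
    return 0;
  }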
auto_increment on duplicate entry
The bug was that when INSERT_ID was used and the storage
engine was told to release any reserved but not used
auto_increment values, it set the highest auto_increment
value to INSERT_ID.
The fix was to check if the auto_increment value was forced
by the user (INSERT_ID) or by the slave thread, i.e. not auto-
generated, so that only generated values are allowed to be
released.
The problem was that block_size was not set on partitioned tables,
resulting in an incorrect keys_per_block value, which affects
the cost calculation for the read time of indexes (including
the cost for group min/max). This resulted in a bad optimizer
decision.
Fixed by setting stats.block_size correctly.
on decimal column
The problem was that there was no check to disallow DECIMAL
columns in the code (they were accepted as if they were INTEGER).
The solution is to correctly disallow DECIMAL columns in
COLUMNS partitioning, as documented.
mode
When the master was executing in sql_mode='traditional' (which
implies that really_abort_on_warning returns TRUE - because of
MODE_STRICT_ALL_TABLES), the error code (ER_DUP_ENTRY in the
reported case) was not being set in the
Query_log_event. Therefore, even if a failure was to be expected
when replaying the statement on the slave, a failure would occur,
because the Query_log_event was not transporting the expected
error code, but 0 instead.
This was because when the master was getting the error code to
set it in the Query_log_event, the executing thread would be
assumed to have been killed:
THD::killed==THD::KILL_BAD_DATA. This would make the error code
fetch routine check the thd->killed value instead of
thd->main_da.sql_errno(). What's more, the server would ignore the
thd->killed value if thd->killed == THD::KILL_BAD_DATA and return
0 instead. So this is a double inconsistency, as we should
not even check thd->killed but rather thd->main_da.sql_errno().
We fix this by extending the condition used to choose whether to
check the thd->main_da.sql_errno() or thd->killed, so that it
takes into consideration the case when:
thd->killed==THD::KILL_BAD_DATA.
+ failing statements
An implicit DROP event for a temporary table was not getting the
LOG_EVENT_THREAD_SPECIFIC_F flag, because in the previously
executed statement in the same thread, which might even be a
failed statement, the thread_specific_used flag is set to
FALSE (in mysql_reset_thd_for_next_command) and not set to TRUE
before the connection is shut down. This means that the implicit DROP
event will take the FALSE value from thread_specific_used and
will not set LOG_EVENT_THREAD_SPECIFIC_F in the event header. As
a consequence, mysqlbinlog will not print the pseudo_thread_id
from the DROP event, because one of the requirements for the
printout is that this flag is set to TRUE.
We fix this by setting thread_specific_used whenever we are
binlogging a DROP in close_temporary_tables, and resetting it to
its previous value afterward.
When cmake is used for building in a symlinked directory,
and configuration is later adjusted with "cmake-gui .", the
GenServerSource step fails with "no rule for <filename>". The reason
for the error is that cmake-gui resolves "." as a realpath and rules
are generated accordingly, while "cmake" used the symlinked path.
The fix uses ${CMAKE_CURRENT_BINARY_DIR} instead of
${CMAKE_BINARY_DIR}/sql for generated files.
This causes CMake to use relative file names
when generating make rules.
Using relative filenames avoids the problem of
referring to the same directory using 2 different paths.
Besides, using ${CMAKE_CURRENT_BINARY_DIR} is
a commonly used style when working with generated
files.
autotools runs
- Fix recognition of --with-debug=full in configure wrapper
- Remove CMakeCache.txt in configure wrapper, to match the original
- Fix recognition of max-no-ndb
- Fix broken dependencies of mysql_fix_privilege_table.sql from
mysql_system_tables.sql and mysql_system_tables_fix.sql
- Add "distclean target" that informs user about appropriate bzr command
Diagnostics_area::set_ok_status on DROP FUNCTION
This assert tests that the server is not trying to send "ok" to
the client if an error has occurred during statement processing.
In this case, the assert was triggered by lock timeout errors when
accessing system tables to do an implicit REVOKE after executing
DROP FUNCTION/PROCEDURE. In practice, this was only likely to
happen with very low values for "lock_wait_timeout" (in the bug report
1 second was used). These errors were ignored and the server tried
to send "ok" to the client, triggering the assert.
The patch for Bug#45225 introduced lock timeouts for metadata locks.
This made it possible to get timeouts when accessing system tables.
Note that a followup patch for Bug#45225 pushed after this
bug was reported, changed accessing of system tables such
that the user-supplied timeout value is ignored and the maximum
timeout value is used instead. This exact bug was therefore
only noticeable in the period between the initial Bug#45225 patch
and the followup patch.
However, the same problem could occur for any errors during revoking
of privileges - not just timeouts. This patch fixes the problem by
making sure that any errors during revoking of privileges are
reported to the client.
Test case added to sp-destruct.test. Since the original bug is not
reproducible now that system tables are accessed using a long
timeout value, this test instead calls DROP FUNCTION with a grant
system table missing.
If an outer query is broken, a subquery might not even get set up.
EXPLAIN EXTENDED did not expect this and merrily tried to de-ref all
of the half-setup info.
We now catch this case and print as much as we have, as it doesn't cost us
anything (doesn't make regular execution slower).
backport from 5.1
Add deprecation warning when variable optimizer_search_depth is given
the value 63.
mysql-test/r/greedy_optimizer.result
Updated with warning text.
mysql-test/r/mysqld--help-notwin.result
Updated with warning from mysqld --help --verbose.
mysql-test/r/mysqld--help-win.result
Updated with warning from mysqld --help --verbose.
sql/sys_vars.cc
Added an update check function to the constructor invocation for
the optimizer_search_depth variable. The function emits a
warning message for the value 63.
This patch fixes some typos and poorly formulated sentences in
the output from mysqld --help --verbose.
Some of the problems described in the bug report are already
handled by the patch for Bug#49447, and are therefore not
included in this patch.
The slave was auto-reconnecting earlier than prescribed by the slave_net_timeout value.
The issue happened on 64-bit Solaris, which spotted a rather incorrect casting of
the ulong slave_net_timeout into the uint of mysql.options.read_timeout.
Notice that there is no reason for slave_net_timeout to be of type ulong.
Since it's primarily passed as an arg to mysql_options, the type can be made
uint to avoid all conversion hassles.
That's what the fix does.
A "side" effect of the patch is a new value for the max of slave_net_timeout:
the max of the unsigned int type (which therefore varies across platforms).
Note, a regression test can't be made to run reliably without making it last over some
20 secs. That's why it is placed in suite/large_tests.
for same data when using bit fields
Problem: checksum for BIT fields may be computed incorrectly
in some cases due to its storage peculiarity.
Fix: convert a BIT field to a string then calculate its checksum.
on Windows".
On platforms where read-write lock implementation does not
prefer readers by default (Windows, Solaris) server might
have deadlocked while detecting MDL deadlock.
MDL deadlock detector relies on the fact that read-write
locks which are used in its implementation prefer readers
(see new comment for MDL_lock::m_rwlock for details).
So far MDL code assumed that default implementation of
read/write locks for the system has this property.
Indeed, this turned out to be wrong, for example, for
Windows and Solaris. Thus the MDL deadlock detector might have
deadlocked on these systems.
This fix simply adds a portable implementation of a read/write
lock which prefers readers, and changes the MDL code to use this
new type of synchronization primitive.
No test case is added as existing rqg_mdl_stability test can
serve as one.
Extend and implement the grammar so that FLUSH TABLES ... WITH READ LOCK
can be applied to a list of tables, rather than all of them.
Incompatible grammar change:
Previously one could perform FLUSH TABLES, HOSTS, PRIVILEGES in a single
statement.
After this change, FLUSH TABLES must always be alone on the list.
Judging by the test suite, however, the old extended syntax
was never or very rarely used.
The new statement requires RELOAD ACL global privilege and
LOCK_TABLES_ACL | SELECT_ACL on individual tables.
In other words, it's an atomic combination of LOCK TABLES <list> READ
and FLUSH TABLES <list>, and requires the respective privileges.
For additional information about the semantics, please
see WL#5000 and the comment for flush_tables_with_read_lock()
function in sql_parse.cc
A client doing multiple mysql_library_init() and
mysql_library_end() calls over the lifetime of the process may
experience lost character set data, potentially even a
SIGSEGV.
This patch reinstates the reloading of character set data when
a mysql_library_init() is done after a mysql_library_end().
Incremental commit based on previous patch.
Addresses reviewer comments to move resetting of
thd->current_stmt_binlog_row_based to after binlog_query
takes place.
The problem was that ALTER TABLE on a merge table which was locked
using LOCK TABLE ... WRITE, by mistake gave
ER_TABLE_NOT_LOCKED_FOR_WRITE.
During opening of the table to be ALTERed, open_table() tried to
get an upgradable metadata lock. In LOCK TABLEs mode, this lock
must already exist (i.e. taken by LOCK TABLE) as new locks of this
type cannot be acquired for fear of deadlock. So in LOCK TABLEs
mode, open_table() tried to find an existing upgradable lock for
the table to be altered.
The problem was that open_table() also tried to find upgradable
metadata locks for children of merge tables even if no such
locks are needed to execute ALTER TABLE on merge tables.
This patch fixes the problem by making sure that open tables code
only searches for upgradable metadata locks for the merge table
and not for the merge children tables.
The patch also fixes a related bug where an upgradable metadata
lock was acquired outside of LOCK TABLES mode even if the table in
question was temporary. This bug meant that LOCK TABLES or DDL on
temporary tables could by mistake be blocked/aborted by locks held
on base tables with the same table name by other connections.
Test cases added to merge.test and lock_multi.test.
The problem is that cond->fix_fields(thd, 0) breaks the
condition (cuts off 'having'). The reason is that a
NULL-valued Item pointer is present in the middle of the
Item list, and it breaks the Item processing loop.
performance degradation.
The filesort + join cache combination is preferred to a full index scan because it
is usually faster. But that's not the case when the index is a clustered one.
Now the test_if_skip_sort_order function prefers filesort only if the index isn't
a clustered one.
Attempts to execute RESET statements within a transaction that
had acquired metadata locks, led to an assertion failure on
debug servers. This bug didn't cause any problems on release
builds.
The triggered assert is designed to check that caches are not
flushed or reset while having active transactions. It is triggered
if acquired metadata locks exist that are not from LOCK TABLE or
HANDLER statements.
In this case it was triggered by RESET QUERY CACHE while having
an active transaction that had acquired locks. The reason the
assertion was triggered, was that RESET statements, unlike the
similar FLUSH statements, was not causing an implicit commit.
This patch fixes the problem by making sure RESET statements
commit the current transaction before executing. The commit
causes acquired metadata locks to be released, preventing the
assertion from being triggered.
Incompatible change: This patch changes RESET statements so
that they cause an implicit commit.
Test case added to query_cache.test.
Propagation of a large unsigned numeric constant
in the WHERE expression led to wrong result.
For example,
"WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS USIGNED) AND FOO(a)",
where a is an UNSIGNED BIGINT, and FOO() accepts strings,
was transformed to "... AND FOO('-1')".
That has been fixed.
Also EXPLAIN EXTENDED printed incorrect numeric constants in
transformed WHERE expressions like above. That has been
fixed too.
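The signedness mix-up is easy to reproduce standalone: formatting the constant through
a signed conversion yields the '-1' seen in the transformed condition:

  #include <cstdio>

  int main()
  {
    unsigned long long v= 0xFFFFFFFFFFFFFFFFULL;
    char buf[32];
    snprintf(buf, sizeof(buf), "%lld", (long long) v);  /* signed print: "-1" (the bug's effect) */
    puts(buf);
    snprintf(buf, sizeof(buf), "%llu", v);              /* unsigned print: the correct constant */
    puts(buf);
    return 0;
  }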
bool MDL_context::try_acquire_lock(MDL_request*)
This assert was triggered in the following way:
1) HANDLER OPEN t1 from connection 1
2) DROP TABLE t1 from connection 2. This will block due to the metadata lock
held by the open handler in connection 1.
3) DML statement (e.g. INSERT) from connection 1. This will close the table
opened by the HANDLER in 1) and release its metadata lock. This is done due
to the pending exclusive metadata lock from 2).
4) DROP TABLE t1 from connection 2 now completes and removes table t1.
5) HANDLER READ from connection 1. Since the handler table was closed in 3),
the handler code will try to reopen the table. First a new metadata lock on
t1 will be granted before the command fails since the table was removed in 4).
6) HANDLER READ from connection 1. This caused the assert.
The reason for the assert was that the MDL_request's pointer to the lock
ticket was not reset when the statement failed. HANDLER READ then tried to
acquire a lock using the same MDL_request object, triggering the assert.
This bug was only noticeable on debug builds and did not cause any problems
on release builds.
This patch fixes the problem by assuring that the pointer to the metadata
lock ticket is reset when reopening of handler tables fails.
Test case added to handler.inc
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense,
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
- Remove INSTALL-BINARY from installed docs directory, we provide a copy
in the root directory (but perhaps this should be revisited later).
- Disable audit_null and daemon_example plugins.
- Fix the docs directory.
- Remove mysql-test/Makefile.in
- Build and install mysql_tzinfo_to_sql
- Remove share/charsets/languages.html
For temporary tables created with an engine that does not
provide the HTON_CAN_RECREATE flag, the truncate operation is
performed by resorting to the optimized handler::ha_delete_all_rows
method. However, this means that the truncate will share its
execution path, from mysql_delete, with truncate on regular
tables and other delete operations. As a consequence the truncate
operation for the temporary table is logged, even in row mode,
because there is no distinction between this and the other delete
operations at binlogging time.
We fix this by checking if: (i) the binlog format, when the
truncate operation was issued, is ROW; (ii) if the operation is a
truncate; and (iii) if the table is a temporary table; before
writing to the binary log. If all three conditions are met, we
skip writing to the binlog. A side effect of this fix is that we
limit the scope of setting and resetting the
current_stmt_binlog_row_based. Now we just set and reset it
inside mysql_delete in the boundaries of the
handler::ha_write_row loop. This way we have access to
thd->current_stmt_binlog_row_based real value inside
mysql_delete.
This patch prevents system threads and system table accesses from
using user-specified values for "lock_wait_timeout". Instead all
such accesses are done using the default value (1 year).
This prevents background tasks (such as replication, events,
accessing stored function definitions, logging, reading time-zone
information, etc.) from failing in cases where the global value
of "lock_wait_timeout" is set very low.
The patch also simplifies the open tables API. Rather than adding
another convenience function for opening and locking system tables,
this patch removes most of the existing convenience functions for
open_and_lock_tables_derived(). Before, open_and_lock_tables() was
a convenience function that enforced derived tables handling, while
open_and_lock_tables_derived() was the main function where derived
tables handling was optional. Now, this convenience function is
gone and the main function is renamed to open_and_lock_tables().
No test case added as it would have required the use of --sleep to
check that system threads and system tables have a different timeout
value from the user-specified "lock_wait_timeout" system variable.
SUPER_ACL should be checked unconditionally while verifying if the binlog_format
or the binlog_direct_non_transactional_updates might be changed.
Roughly speaking, both session values cannot be changed in the context of a
transaction or a stored function. Note that changing the global value does
not cause any effect until a new connection is created.
So, we fixed the problem by first checking the permissions; right after
that, the further verifications are skipped if the global value is being updated.
In this patch, we also restructure the test case to make it more readable.
It appears that stack overflow checks for recursive stored procedure
calls, which work in the normal server, did not work in embedded and were
dummied out with preprocessor magic (#ifndef EMBEDDED_SERVER).
The fix is to remove the ifdefs; there is no reason not to run overflow checks
and crash in deeply recursive calls.
Note: the start of the stack (the thd->thread_stack variable) in embedded is not
necessarily exact but still provides the best guess. Unless the caller of
mysql_real_connect() is already deep in the stack, the thd->thread_stack
variable should approximate the stack start address well.
"TYPE=storage_engine" is deprecated, and will be removed
in the Celosia release of MySQL. Since the option is
present in the Betony release and the version number of
Celosia is still not decided, we need to bump the
deprecation version number back up to "6.0".
Reading from a self-logging engine and updating a transactional engine such as InnoDB
generates changes that are written to the binary log in the statement format and may
make slaves diverge. In the mixed mode, such changes should be written to the binary
log in the row format.
Note that the issue does not happen if we mix a self-logging engine and MyISAM,
as this case is caught by checking the mixture of non-transactional and transactional
engines.
So, we classify a mixed statement where one reads from NDB and writes into another
engine as unsafe:
  if (multi_engine && flags_some_set & HA_HAS_OWN_BINLOGGING)
    lex->set_stmt_unsafe(LEX::BINLOG_STMT_UNSAFE_MULTIPLE_ENGINES_AND_SELF_LOGGING_ENGINE);
There was an erroneous parameter when calling flush_master_info
from write_ignored_events_info_to_relay_log which could lead to a
server crash. This happens because the I/O thread releases the
log_lock before calling the flush_master_info.
Set the function to call flush_master_info with the third parameter
set to true, so that the mutex is properly taken.
The task is to
(a) add a comment on indexes and
(b) increase the maximum length of column, table and the new index comments.
The patch committed on behalf of Yoshinori Matsunobu (Yoshinori.Matsunobu@Sun.COM).
When EXPLAIN EXTENDED tries to print column names, it checks whether the
referenced table is CONST (in which case, the column's value rather than
its name will be printed). If no proper table is referenced (i.e. because
a derived table was used that has since gone out of scope), this will fail
spectacularly.
This ports an equivalent of the fix for Bug 43354.
an INFORMATION_SCHEMA table
When a prepared statement using a merged view containing an information
schema table was executed, a metadata lock of the view was not taken.
This meant that it was possible for concurrent view DDL to execute,
thereby breaking the binary log. For example, it was possible
for DROP VIEW to appear in the binary log before a query using the view.
This also happened when a statement in a stored routine was executed a
second time.
For such views, the information schema table is merged into the view
during the prepare phase (or first execution of a statement in a routine).
The problem was that we took a short cut and were not executing full-blown
view opening during subsequent executions of the statement. As a result,
a metadata lock on the view was not taken to protect the view definition.
This patch resolves the problem by making sure a metadata lock is taken
for views even after information schema tables are merged into them.
Test case added to view.test.
I found three issues during the analysis:
1. Memory leak caused by temp_buf not being freed;
2. Memory leak caused when handling argv;
3. A conditional jump that depended on uninitialized values.
Issue #1
--------
DESCRIPTION: when mysqlbinlog is reading from a remote location
the event temp_buf references the incoming stream (in NET
object), which is not freed by mysqlbinlog explicitly. On the
other hand, when it is reading local binary log, it points to a
temporary buffer that needs to be explicitly freed. For both
cases, the temp_buf was not freed by mysqlbinlog, instead was
set to 0. This clearly disregards the free required in the
second case, thence creating a memory leak.
FIX: we make temp_buf be conditionally freed, depending on
the value of remote_opt. Found out that a similar fix is already
in most recent codebases.
Issue #2
--------
DESCRIPTION: load_defaults is called by parse_args, and it
reads default options from configuration files and puts them
BEFORE the arguments that are already in argc and argv. This is
done resorting to MEM_ROOT. However, parse_args calls
handle_options immediately after, which changes argv. Later, when
freeing the defaults, pointers to MEM_ROOT won't match, causing
the memory not to be freed:
void free_defaults(char **argv)
{
  MEM_ROOT ptr;
  memcpy_fixed((char*) &ptr, (char *) argv - sizeof(ptr), sizeof(ptr));
  free_root(&ptr, MYF(0));
}
FIX: we remove load_defaults from parse_args and call it
before. Then we save argv with defaults in defaults_argv BEFORE
calling parse_args (which inside can then call handle_options
at will). Actually, it was found that this is in fact kind of a
backport of BUG#38468 into 5.1, so I merged in the test case
as well and added an error check for the load_defaults call.
Fix based on:
revid:zhenxing.he@sun.com-20091002081840-uv26f0flw4uvo33y
Issue #3
--------
DESCRIPTION: the structure st_print_event_info constructor
would not initialize the sql_mode member, although it did for
sql_mode_inited (set to false). This would later raise the
warning in valgrind when printing the sql_mode in the event
header, as this print out is protected by a check against
sql_mode_inited and sql_mode variables. Given that sql_mode was
not initialized, valgrind would output the warning.
FIX: we add initialization of sql_mode to the
st_print_event_info constructor.
Table corruption happens during table reading in the ha_tina::find_current_row() function.
The Field::store() method returns an error (true) if the stored value is 0.
The fix:
added a special case for the enum type which correctly processes a 0 value.
Additional fix:
INSERT...(default) and INSERT...() now have the same behaviour for the enum type.
Multi_polygon::centroid() has an error in the implementation
per-file messages:
sql/spatial.cc
Bug#38959 archive_gis fails due to rounding difference
multi_polygon::centroid() implementation fixed
WL#5154 was a task for formally deprecating and removing items that
were mentioned in the manual as having been deprecated since MySQL
4.1 or 5.0, but that had never been removed.
Since WL#5154 was created, examination of mysqld.cc, mysql.cc, and
mysqldump.c reveals additional deprecations not mentioned in the
manual. (In some cases, the items are simply not mentioned in the
5.1+ manuals.)
This is a follow-on task to deprecate and remove these additional
items.
The deprecation happened in MySQL 5.1, and the options/variables
are now removed from the code.
The problem is that during temporary table creation uneven bits
are not taken into account for hidden fields. This leads to incorrect
calculation and allocation of the null bytes size for the table record.
And if the grouped value is NULL, we set the wrong bit for this value (see end_update()).
Fixed by adding separate calculation of the uneven bits for hidden fields.
MDL_lock::find_deadlock".
On some platforms deadlock detector in metadata locking
subsystem under certain conditions might have exhausted
stack space causing server crashes.
Particularly this caused failures of rqg_mdl_stability
test on Solaris in PushBuild.
During search for deadlock MDL deadlock detector could
sometimes encounter loop in the waiters graph in which
MDL_context which has started search for a deadlock
does not participate. In such case our algorithm will
continue looping assuming that either this deadlock will
be resolved by MDL_context which has created it (i.e.
by one of loop participants) or maximum search depth
will be reached.
Since max search depth was set to 1000 in the latter case
on platforms where each iteration of deadlock search
algorithm needs more than DEFAULT_STACK_SIZE/1000 bytes
of stack (around 192 bytes for 32-bit and around 256 bytes
for 64-bit platforms) we might have exhausted stack space.
This patch solves this problem by reducing maximum search
depth for MDL deadlock detector to 32. This should be safe
at the moment as it is unlikely that each iteration of the
current deadlock detector algorithm will consume more than
1K of stack (thus total amount of stack required can't be
more than 32K) and we require at least 80K of stack in order
to open any table. Also this value should be (hopefully) big
enough to not cause too much false deadlock errors (there
is an anecdotal evidence that real-life deadlocks are
typically shorter than that).
Additional research should be conducted in the future in order
to determine a more optimal value for the maximum search depth.
This patch does not include test case as existing
rqg_mdl_stability test can serve as one.
TEMPORARY + HANDLER + LOCK + SP".
Server crashed when one:
1) Opened HANDLER or acquired global read lock
2) Then locked one or several temporary tables with
LOCK TABLES statement (but no base tables).
3) Then issued any statement causing commit (explicit
or implicit).
4) Issued statement which should have closed HANDLER
or released global read lock.
The problem was that when entering LOCK TABLES mode in the
scenario described above, we incorrectly set the transactional
MDL sentinel to zero. As a result, during commit all metadata
locks were released (including the lock for the open HANDLER or
the global metadata shared lock). The subsequent attempt to release
a metadata lock for the second time, which happened during
HANDLER CLOSE or during release of the GRL, caused the crash.
This patch fixes problem by changing MDL_context's
set_trans_sentinel() method to set sentinel to correct
value (it should point to the most recent ticket).
DDL workload".
When a RENAME TABLE or LOCK TABLE ... WRITE statement which
mentioned the same table several times was aborted during
the process of acquiring metadata locks (due to a deadlock
which was discovered, or because of a KILL statement), the server
might have crashed.
When the attempt to acquire all requested locks had failed, we
went through the list of requests and released the locks which
we had managed to acquire by that moment, one by one. Since
in the scenario described above the list of requests contained
duplicates, this led to releasing the same ticket twice and
a crash as a result.
This patch solves the problem by employing a different approach
to releasing locks in case of failure to acquire all the locks
requested.
Now we take an MDL savepoint before starting to acquire locks
and simply roll back to it if things go bad.