- Added --verbose to BUILD scripts to get make to write out compile commands.
- Detect if AM_EXTRA_MAKEFLAGS=VERBOSE=1 was used with build scripts.
- Don't write warnings about replication variables when doing bootstrap.
- Fixed mysql_cond_wait() and mysql_cond_timedwait() so that they report the original source file in case of errors.
- Ignore some compiler warnings.
BUILD/FINISH.sh:
Detect if AM_EXTRA_MAKEFLAGS=VERBOSE=1 or --verbose was used
BUILD/SETUP.sh:
Added --verbose to print out the full compile lines
Updated help message
client/mysqltest.cc:
Made it possible to use 'replace' with cat_file
cmake/configure.pl:
If --verbose is used, get make to write out compile commands
debian/dist/Debian/rules:
Added $AM_EXTRA_MAKEFLAGS to get VERBOSE=1 on buildbot builds
debian/dist/Ubuntu/rules:
Added $AM_EXTRA_MAKEFLAGS to get VERBOSE=1 on buildbot builds
include/my_pthread.h:
Made set_timespec_time_nsec() more portable.
include/mysql/psi/mysql_thread.h:
Fixed mysql_cond_wait() and mysql_cond_timedwait() so that they report the original source file in case of errors.
mysql-test/suite/innodb/r/auto_increment_dup.result:
Fixed wrong DBUG_SYNC
mysql-test/suite/innodb/t/auto_increment_dup.test:
Fixed wrong DBUG_SYNC
mysql-test/suite/perfschema/include/upgrade_check.inc:
Made the test more robust against changes in *.sql files
mysql-test/suite/perfschema/r/pfs_upgrade.result:
Updated test results
mysql-test/valgrind.supp:
Ignore running Aria checkpoint thread
scripts/mysqlaccess.sh:
Changed the reference to the bugs database
Ensure that the client-server group is also read.
sql/handler.cc:
Added missing syncpoint
sql/mysqld.cc:
Don't write warnings about replication variables when doing bootstrap
sql/mysqld.h:
Don't write warnings about replication variables when doing bootstrap
sql/rpl_rli.cc:
Don't write warnings about replication variables when doing bootstrap
sql/sql_insert.cc:
Don't mask SERVER_SHUTDOWN in insert_delayed
This is done to be able to distinguish between shutdown and interrupt errors
support-files/compiler_warnings.supp:
Ignore some compiler warnings in xtradb, innobase, oqgraph, yassl, string3.h
Fixed MDEV-4002: Server crash or valgrind errors in Item_func_group_concat::setup and Item_func_group_concat::add
mysql-test/r/group_by.result:
Added test case for failing GROUP_CONCAT ... ROLLUP queries
mysql-test/t/group_by.test:
Added test case for failing GROUP_CONCAT ... ROLLUP queries
sql/item_sum.cc:
Fixed an issue where field->table pointed to a different temporary table than expected.
Ensure that order->next points to the right object (a wrong pointer could cause problems with setup_order())
Give error for wrong parameters to CHANGE MASTER
Extend MASTER_PASSWORD and MASTER_HOST lengths
mysql-test/suite/rpl/r/rpl_password_boundaries.result:
Test length of MASTER_PASSWORD, MASTER_HOST and MASTER_USER
mysql-test/suite/rpl/r/rpl_semi_sync.result:
Use different password than user name for better test coverage
mysql-test/suite/rpl/t/rpl_password_boundaries.test:
Test length of MASTER_PASSWORD, MASTER_HOST and MASTER_USER
mysql-test/suite/rpl/t/rpl_semi_sync.test:
Use different password than user name for better test coverage
sql/rpl_mi.h:
Extend MASTER_PASSWORD and MASTER_HOST lengths
sql/sql_repl.cc:
Give error for wrong parameters to CHANGE MASTER
sql/sql_repl.h:
Extend MASTER_PASSWORD and MASTER_HOST lengths
KILL now breaks locks inside InnoDB
Fixed possible deadlock when running INNODB STATUS
Added ha_kill_query() and kill_query() to send a kill signal to all storage engines
Added reset_killed() to ensure we don't reset the killed state while awake() is being called (a sketch follows the file list below)
include/mysql/plugin.h:
Added thd_mark_as_hard_kill()
include/mysql/plugin_audit.h.pp:
Added thd_mark_as_hard_kill()
include/mysql/plugin_auth.h.pp:
Added thd_mark_as_hard_kill()
include/mysql/plugin_ftparser.h.pp:
Added thd_mark_as_hard_kill()
sql/handler.cc:
Added ha_kill_query() to send kill signal to all storage engines
sql/handler.h:
Added ha_kill_query() and kill_query() to send kill signal to all storage engines
sql/log_event.cc:
Use reset_killed()
sql/mdl.cc:
Use thd->killed instead of thd_killed() to abort on soft kill
sql/sp_rcontext.cc:
Use reset_killed()
sql/sql_class.cc:
Fixed possible deadlock in INNODB STATUS by not getting thd->LOCK_thd_data if it's locked.
Use reset_killed()
Tell storage engines that KILL has been sent
sql/sql_class.h:
Added reset_killed() to ensure we don't reset killed state while awake() is getting called.
Added mark_as_hard_kill()
sql/sql_insert.cc:
Use reset_killed()
sql/sql_parse.cc:
Simplify detection of killed queries.
Use reset_killed()
sql/sql_select.cc:
Use reset_killed()
sql/sql_union.cc:
Use reset_killed()
storage/innobase/handler/ha_innodb.cc:
Added innobase_kill_query()
Fixed error reporting for interrupted queries.
storage/xtradb/handler/ha_innodb.cc:
Added innobase_kill_query()
Fixed error reporting for interrupted queries.
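To illustrate the awake()/reset_killed() interaction described above, here is a minimal stand-alone sketch, not the server's actual code; Thd, lock_thd_data and the engine callback list are simplified stand-ins. Clearing the killed state under the same mutex that awake() holds is what prevents a reset from racing with a kill in flight:

    // Sketch only: fan a kill signal out to every registered engine, and
    // clear the killed state under the same mutex awake() holds, so a
    // reset can never run while engines are still being notified.
    #include <atomic>
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <vector>

    enum killed_state { NOT_KILLED, KILL_QUERY };

    struct Thd {
      std::mutex lock_thd_data;                 // stand-in for THD::LOCK_thd_data
      std::atomic<killed_state> killed{NOT_KILLED};

      void awake(killed_state state,
                 const std::vector<std::function<void(Thd&)>>& engines) {
        std::lock_guard<std::mutex> g(lock_thd_data);
        killed.store(state);
        for (const auto& kill_query : engines)  // analogous to ha_kill_query()
          kill_query(*this);
      }

      void reset_killed() {                     // analogous to reset_killed()
        std::lock_guard<std::mutex> g(lock_thd_data);
        killed.store(NOT_KILLED);
      }
    };

    int main() {
      Thd thd;
      std::vector<std::function<void(Thd&)>> engines{
          [](Thd&) { std::cout << "engine notified of kill\n"; }};
      thd.awake(KILL_QUERY, engines);
      thd.reset_killed();
    }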
- Fixed broadcast without a proper mutex
- Don't break existing locks if we are just testing if we can get the lock
mysql-test/r/create_delayed.result:
Added test case for failures with INSERT DELAYED with CREATE and DROP TABLE
mysql-test/t/create_delayed.test:
Added test case for failures with INSERT DELAYED with CREATE and DROP TABLE
sql/mdl.cc:
Don't break existing locks for timeout=0 (i.e., just check if there are conflicting locks).
This fixed the bug where INSERT DELAYED didn't work properly with CREATE TABLE
sql/sql_base.cc:
One needs to hold the mutex before doing a mysql_cond_broadcast().
This fixed the bug where INSERT DELAYED didn't work properly with DROP TABLE
sql/sql_insert.cc:
Protect setting of mysys_var->current_mutex.
from a MERGE view.
The problem was that the ability to be NULL was lost for the table of a left
join if it is a view/derived table.
It happened because setup_table_map() was called before we merged
the view or derived table.
Fixed by propagating the new maybe_null flag during Item::update_used_tables().
The change in join_outer.test and join_outer_jcl6.test appeared because
IS NULL reported no used tables (i.e. constant) for an argument which could not
be NULL, and the new maybe_null flag was propagated for the IS NULL argument
(Item_field) because the table the Item_field belonged to changed its
maybe_null status.
Analysis:
The following call stack shows that it is possible to set
Item_cache::value_cached and the cached value without setting Item_cache::example.
#0 Item_cache_temporal::store_packed at item.cc:8395
#1 get_datetime_value at item_cmpfunc.cc:915
#2 resolve_const_item at item.cc:7987
#3 propagate_cond_constants at sql_select.cc:12264
#4 propagate_cond_constants at sql_select.cc:12227
#5 optimize_cond at sql_select.cc:13026
#6 JOIN::optimize at sql_select.cc:1016
#7 st_select_lex::optimize_unflattened_subqueries at sql_lex.cc:3161
#8 JOIN::optimize_unflattened_subqueries at opt_subselect.cc:4880
#9 JOIN::optimize at sql_select.cc:1554
The fix is to set Item_cache_temporal::example even when the value is
set directly by Item_cache_temporal::store_packed. This makes the
Item_cache_temporal object consistent.
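A minimal illustration of the invariant being restored (toy types, not the server's Item hierarchy): every path that marks the cache as valid must also record the item being cached:

    #include <cassert>
    #include <cstdint>

    struct Item {};                       // stand-in for the source expression

    class ItemCache {
      Item* example_ = nullptr;           // the item the cached value came from
      int64_t value_ = 0;
      bool value_cached_ = false;

     public:
      // Before the fix, the direct-store path set the value but left
      // example_ null, breaking "value_cached_ implies example_ != nullptr".
      void store_packed(int64_t packed, Item* item) {
        value_ = packed;
        value_cached_ = true;
        example_ = item;                  // the fix: keep the object consistent
      }

      int64_t value() const {
        assert(!value_cached_ || example_ != nullptr);
        return value_;
      }
    };

    int main() {
      Item src;
      ItemCache cache;
      cache.store_packed(42, &src);
      return cache.value() == 42 ? 0 : 1;
    }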
Using overly long table aliases in stored routines could
cause server crashes.
The code in sp_head::merge_table_list(), which is responsible
for collecting information about tables used in a stored
routine, was not aware of the fact that a table alias may
have arbitrary length. That is, it assumed that a table alias
can't be longer than NAME_LEN bytes and allocated the buffer
for a key identifying the table accordingly.
This patch fixes the issue by ensuring that we use a
dynamically allocated buffer for the table key when the table
alias is too long. By default a stack-based buffer is used,
in which NAME_LEN bytes are reserved for the table alias.
taking a change done to main 5.1 by Dmitry Lenev.
This is the original comment:
> committer: Dmitry Lenev <Dmitry.Lenev@oracle.com>
> branch nick: mysql-5.1-15954896
> timestamp: Wed 2012-12-05 19:26:56 +0400
> message:
> Bug #15954896 "SP, MULTI-TABLE DELETE AND LONG ALIAS".
> Using too long table aliases in stored routines might
> have caused server crashes.
> Code in sp_head::merge_table_list() which is responsible
> for collecting information about tables used in stored
> routine was not aware of the fact that table alias might
> have arbitrary length. I.e. it assumed that table alias
> can't be longer than NAME_LEN bytes and allocated buffer
> for a key identifying table accordingly.
> This patch fixes the issue by ensuring that we use
> dynamically allocated buffer for table key when table
> alias is too long. By default stack based buffer is used
> in which NAME_LEN bytes are reserved for table alias.
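The pattern, as a stand-alone sketch (NAME_LEN and the key layout here are simplified assumptions, not the server's actual code): use the stack reserve when the alias fits, fall back to the heap when it does not:

    #include <cstddef>
    #include <cstring>
    #include <iostream>
    #include <memory>
    #include <string>

    constexpr size_t NAME_LEN = 64;       // fixed stack reserve, as in the server

    // Build a key "<db>\0<alias>\0"; use the heap when the alias is too long.
    std::string make_table_key(const std::string& db, const std::string& alias) {
      char stack_buf[NAME_LEN * 2 + 2];
      const size_t needed = db.size() + alias.size() + 2;
      char* buf = stack_buf;
      std::unique_ptr<char[]> heap_buf;
      if (needed > sizeof(stack_buf)) {   // the fix: don't assume NAME_LEN
        heap_buf.reset(new char[needed]);
        buf = heap_buf.get();
      }
      std::memcpy(buf, db.c_str(), db.size() + 1);
      std::memcpy(buf + db.size() + 1, alias.c_str(), alias.size() + 1);
      return std::string(buf, needed);
    }

    int main() {
      const std::string key = make_table_key("test", std::string(500, 'a'));
      std::cout << key.size() << "\n";    // 506: no overflow for a long alias
    }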
The patch decreases the duration of LOCK_thread_count, so it is not held during the THD destructor and while freeing memory.
This mutex now only protects the integrity of the threads list, when removing a THD from it, and the thread_count variable.
The add_to_status() function, which updates global status during client disconnect, is now correctly protected by the LOCK_status mutex.
Benchmark: in a "non-persistent" sysbench test (oltp_ro with reconnect after each query), ~25% more connects/disconnects were measured.
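The lock-scope reduction, sketched with standard-library stand-ins (THD, the list and the counter are simplifications of the server's globals): only the list manipulation needs the mutex; the expensive destructor runs after it is released:

    #include <list>
    #include <mutex>

    struct THD {};                      // large object; destructor frees memory

    std::mutex LOCK_thread_count;       // protects `threads` and thread_count
    std::list<THD*> threads;
    unsigned thread_count = 0;

    void unlink_thd(THD* thd) {
      {
        std::lock_guard<std::mutex> g(LOCK_thread_count);
        threads.remove(thd);            // only list integrity needs the mutex
        --thread_count;
      }
      delete thd;                       // destructor now runs outside the mutex
    }

    int main() {
      THD* thd = new THD;
      {
        std::lock_guard<std::mutex> g(LOCK_thread_count);
        threads.push_back(thd);
        ++thread_count;
      }
      unlink_thd(thd);
    }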
Analysis:
In the beginning of JOIN::cleanup there is code that is supposed to
free all filesort buffers. The code assumes that the table being sorted
is the first non-constant table. To get this table it calls:
first_top_level_tab(this, WITHOUT_CONST_TABLES)
However, first_top_level_tab() returned the wrong table: the first
one in the plan rather than the first non-constant table. There is no other
place outside filesort() where sort buffers may be freed. As a result, the
sort buffer was not freed, and there was a memory leak.
Solution:
Change first_top_level_tab() to test for WITH_CONST_TABLES instead of
WITHOUT_CONST_TABLES.
mysql-test/r/create.result:
Updated test results
mysql-test/t/create.test:
Updated test
sql/sql_base.cc:
Use push_internal_handler/pop_internal_handler instead of clear_error to avoid errors & warnings
Give a warning instead of an error for CREATE TABLE IF NOT EXISTS
sql/sql_parse.cc:
Check if we failed because the table exists (can only happen from CREATE)
sql/sql_table.cc:
Check if we failed because the table exists (can only happen from CREATE)
Analysis:
The reason for the suboptimal plan when querying IS tables through a view
was that the view columns that participate in an equality are wrapped by
an Item_direct_view_ref and were not recognized as being direct column
references.
Solution:
Use the original Item_field objects via the real_item() method.
mysql-test/r/create.result:
Added a test case to show that CREATE TABLE likewise does not wait if the table exists.
mysql-test/t/create.test:
Added a test case to show that CREATE TABLE likewise does not wait if the table exists.
sql/sql_base.cc:
Also clear warnings from acquire_locks() if we retry.
- Added an option to check_if_table_exists() to quickly check if the table exists (either SHARE or .FRM)
- Extended lock_table_names() to not wait for metadata locks if CREATE IF NOT EXISTS is used.
mysql-test/r/create.result:
New test case
mysql-test/t/create.test:
New test case
sql/sql_base.cc:
Added an option to check_if_table_exists() to quickly check if the table exists (either SHARE or .FRM)
Extended lock_table_names() to not wait for metadata locks if CREATE IF NOT EXISTS is used.
sql/sql_base.h:
Updated prototype
sql/sql_db.cc:
Added extra argument to call to check_if_table_exists()
Description: A very large database name causes a buffer
overflow in the functions acl_get() and
check_grant_db() in sql_acl.cc. It happens
due to an unguarded string copy operation.
This patch adds the required sanity checks before
copying the db string to the destination buffer.
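The kind of guard the description implies, as a hedged sketch (the function name and buffer size are hypothetical, not the actual sql_acl.cc code): clamp the copy to the destination size instead of trusting the input length:

    #include <algorithm>
    #include <cstring>
    #include <iostream>
    #include <string>

    constexpr size_t ACL_KEY_LENGTH = 128;    // hypothetical fixed key buffer

    size_t copy_db_to_key(char* key, size_t key_size, const char* db) {
      const size_t len = std::min(std::strlen(db), key_size - 1);  // sanity check
      std::memcpy(key, db, len);
      key[len] = '\0';
      return len;
    }

    int main() {
      char key[ACL_KEY_LENGTH];
      const std::string long_db(1000, 'x');   // "very large database name"
      std::cout << copy_db_to_key(key, sizeof(key), long_db.c_str()) << "\n";
      // prints 127: the copy is truncated instead of overflowing the buffer
    }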
Code in the MDL subsystem assumes that identifiers of objects can't
be longer than NAME_LEN characters. This assumption was broken
when one tried to construct an MDL_key based on a table alias, which
can have arbitrary length. Since MDL_keys (and MDL locks) are
not really used for table aliases, this patch changes the code to
not initialize the MDL_key object for table list elements representing
aliases.
The problem was that debug binaries try to print the item to assign a
human-readable name to it. But the subquery item was already freed
(join_free/cleanup with full cleanup), so the Item_field referred to a
temporary table whose memory had already been freed.
When inserting a record with update on duplicate keys, the server calls
the ha_index_read_idx_map handler function to look for the record
that violates unique key constraints. The third parameter of this call
should mark only the base components of the index in which the server
searches for the record. Any hidden components of the primary key
are to be left unmarked.
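For illustration (types simplified, not the server's definitions): a keypart map is a bitmask with one bit per index component, so marking only the user-defined parts leaves any hidden primary-key parts unmarked:

    #include <cstdint>
    #include <iostream>

    using key_part_map = std::uint64_t;   // one bit per index component

    key_part_map base_parts_map(unsigned user_defined_key_parts) {
      // set bits 0 .. n-1; hidden parts appended after them stay unset
      return (key_part_map{1} << user_defined_key_parts) - 1;
    }

    int main() {
      // e.g. a secondary index on (a, b) with a hidden PK part appended:
      std::cout << std::hex << base_parts_map(2) << "\n";  // 3: only (a, b) marked
    }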
The assertion happened because sql_kill did not set the OK status in the
diagnostics area in the case of connection suicide (id to kill ==
thd->thread_id) issued via COM_PROCESS_KILL, e.g. using mysql_kill().
This patch ensures that the diagnostics area is initialized in this
specific case.
If the settings of the system variables do not allow the join buffer to
be used for a join query with GROUP BY <f1,...> / ORDER BY <f1,...>, then
filesort is not needed if the first joined table is scanned in
an order compatible with the order specified by the list <f1,...>.
Fix some problems in the TC_LOG_MMAP commit processing, which could
lead to assertions in some cases.
Problems are mostly reproducible in MariaDB 10.0 with asynchronous
commit checkpoints, but most of the problems were present in earlier
versions also.
Fix: don't call field->val_decimal() if field->is_null(), because
the buffer at field->ptr might not hold a valid decimal value (a sketch
follows the file list below)
sql/item_sum.cc:
do not call field->val_decimal() if the field->is_null()
storage/maria/ma_blockrec.c:
cleanup
storage/maria/ma_rrnd.c:
cleanup
strings/decimal.c:
typo
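The guard itself, as a toy sketch (the Field type here is a simplification, not the server's field API): when the field is SQL NULL, the bytes behind its data pointer may be stale, so test is_null() first:

    #include <iostream>
    #include <optional>

    struct Field {
      bool null = true;
      double raw = 0.0;                 // stand-in for the buffer at field->ptr
      bool is_null() const { return null; }
      double val_decimal() const { return raw; }  // valid only when !is_null()
    };

    std::optional<double> safe_val(const Field& f) {
      if (f.is_null()) return std::nullopt;  // the fix: don't touch the buffer
      return f.val_decimal();
    }

    int main() {
      Field f;                          // a NULL field with an undefined buffer
      std::cout << (safe_val(f) ? "value" : "NULL") << "\n";
    }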
If triggers are used for an insert/update/delete statement, then the values of
all virtual columns must be computed, as any of them may be used by the triggers.
The problem is that memory allocated by copy_andor_structure() will be freed;
but if the SELECT_LEX level is excluded (in the case of merged derived tables
and views), sl->where/having will not be updated here, yet can still be
accessed, so this is access to freed memory.
(patch by Sanja)
MDEV-567: Wrong result from a query with correlated subquery if ICP is allowed:
backport the fix developed for SHOW EXPLAIN:
revision-id: psergey@askmonty.org-20120719115219-212cxmm6qvf0wlrb
branch nick: 5.5-show-explain-r21
timestamp: Thu 2012-07-19 15:52:19 +0400
BUG#992942 & MDEV-325: Pre-liminary commit for testing
and adjust it so that it handles DS-MRR scans correctly.
Use post_kill_notification for the one_thread_per_connection scheduler,
the same as already used in threadpool, to reliably wake a thread stuck in
read() or in different poll() variations.
If, when executing a query with ORDER BY col LIMIT n, the optimizer chose
an index-merge scan to access the table containing col while there existed
an index defined over col, then the optimizer did not consider the possibility
of using an alternative range scan on this index to avoid filesort. This
could cause a performance degradation if the optimizer flag index_merge was
set to 'on'.
mysql-test/r/partition.result:
Added test case
mysql-test/t/partition.test:
Added test case
sql/ha_partition.cc:
Removed printing of an uninitialized variable
storage/maria/ha_maria.cc:
Don't copy variables that are not initialized
DISABLE AND ENABLED DURING DDL OPERATION
PROBLEM: The same thread trying to acquire the same mutex
a second time leads to a hang/server crash.
While [un]installing the audit_log plugin
a thread acquires the LOCK_plugin mutex
and after successful initialization tries
to write to the mysql.plugin table. It holds
this mutex for a long time. If somehow the
plugin table is corrupted, a write to the
plugin table will throw an error; the thread
tries to log this error in the audit_log
plugin, and in doing so tries to acquire the
mutex again, which results in a server hang/crash.
SOLUTION: Release the LOCK_plugin mutex before
writing to the mysql.plugin table. We don't
need to hold this mutex, as the thread has
already acquired a TL_WRITE lock on the
mysql.plugin table.
This task fixes an inefficiency that is a remnant of MySQL 5.0/5.1. There, subqueries
were optimized in a lazy manner, when executed for the first time. During this lazy optimization
it could happen that the server found a more efficient subquery engine and substituted the current
engine of the query being executed with the new engine. This required re-execution of the engine.
MariaDB 5.3 pre-optimizes subqueries in almost all cases, and the engine is chosen in most cases,
except when subquery materialization finds that it must use partial matching. In this case, the
current code was performing one extra re-execution although it was not needed at all. The patch
performs the re-execution only if the engine was changed while executing.
In addition the patch performs a small cleanup by removing "enum store_key_result", because it is
essentially a boolean and the code that uses it already maps it to a boolean.
When a DML statement is issued, and if the index merge
access method is chosen, then many rows from the
storage engine will be locked because of the way the
algorithm works. Many rows will be locked, but they
will not be part of the final result set.
To reduce the excessive locking, the locks of unmatched
rows are released by this patch. This patch will
affect only transactions with isolation level
equal to or less strict than READ COMMITTED. This is
because of the behaviour of ha_innobase::unlock_row().
rb://1296 approved by jorgen and olav.
Problem:
When we execute a query that has a subquery with GROUP BY and ORDER BY and a
BLOB column, memory is leaked.
Analysis:
In the case of a subquery which has GROUP BY on a BLOB and an ORDER BY on
another field, where the BLOB is not a key, we allocate a tmp buffer in
copy_field to take care of the BLOB value. This copy_field value can have
copies of itself in two join objects, so when freeing this copy_field we have
to take care that it is not deleted twice (see the sketch after this entry).
The double deletion of tmp_table_param.copy_field was handled by two patches.
One by Kostja:
revid:sp1r-konstantin@mysql.com-20050627101056-55153
Fix the broken test suite in -debug build.
and the other by Oleksandr:
revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).
Both of these patches were committed in different branches, and while merging
both were included; but there is no need for Kostja's patch, as Oleksandr's
patch handles this.
sql/sql_select.cc:
Bug#13726751: the tmp_join cleanup is not necessary, as later in the code we take care of cleaning up tmp_join's copy_field.
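The ownership hazard in miniature (simplified types, not the real tmp_table_param): two objects end up sharing one copy_field array, so exactly one of them may free it:

    #include <iostream>

    struct TmpTableParam {
      int* copy_field = nullptr;
      void cleanup() {
        delete[] copy_field;
        copy_field = nullptr;     // clearing makes a repeated cleanup harmless
      }
    };

    int main() {
      TmpTableParam join, tmp_join;
      join.copy_field = new int[4];
      tmp_join.copy_field = join.copy_field;  // shared copy, as in the bug
      tmp_join.copy_field = nullptr;  // hand ownership to one object only
      join.cleanup();
      tmp_join.cleanup();             // now a no-op instead of a double free
      std::cout << "ok\n";
    }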
Problem:
Using last_insert_id() on an auto_increment BIGINT UNSIGNED column does
not work for values which are greater than the maximum signed BIGINT.
Analysis:
last_insert_id() returns the first auto_increment value for a column,
and an auto_increment value can only be positive.
In our code, when initializing a last_insert_id object, we treated it
as a signed BIGINT, so when the auto_increment value grows beyond the
maximum signed BIGINT, last_insert_id() gives a negative result (see the
sketch after this entry).
Solution:
When fetching the value from last_insert_id, we set the
unsigned_flag so that it takes only an unsigned BIGINT value.
sql/item_func.cc:
Here the unsigned value was converted to a signed value.
sql/item_func.h:
last_insert_id() gives an auto_increment value which can only be
positive, so it is defined as an unsigned longlong and sets the
unsigned_flag to 1.
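The arithmetic behind the bug, in a stand-alone sketch: a 64-bit auto-increment value above the signed maximum prints as negative unless it is treated as unsigned, which is what setting unsigned_flag achieves:

    #include <cstdint>
    #include <iostream>

    int main() {
      const std::uint64_t auto_inc = UINT64_C(18446744073709551610);  // > INT64_MAX
      const std::int64_t as_signed = static_cast<std::int64_t>(auto_inc);
      std::cout << "signed:   " << as_signed << "\n";  // negative: the old bug
      std::cout << "unsigned: " << auto_inc  << "\n";  // the correct result
    }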
Fix by Sergey Petrunia.
This patch only prevents the evaluation of expensive subqueries during optimization.
The crash reported in this bug has been fixed by some other patch.
The fix is to call value->is_null() only when !value->is_expensive(), because is_null()
may trigger evaluation of the Item, which in turn triggers subquery evaluation if the
Item is a subquery.
This bug had two problems:
P1) Reads out of bounds;
P2) Writes out of bounds.
PROBLEM P1
----------
User_var_log_event unmarshalling from the binlog was not performing range
checks when using the name_len and val_len variables to walk the event
buffer.
Added range checks to User_var_log_event unmarshalling to prevent
unmarshalling errors.
PROBLEM P2
----------
The User_var_log_event value was allocated on the thread stack, which caused
stack frame errors when the value was bigger than the thread
stack size.
The value is now allocated on heap memory.
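A hedged sketch of both fixes (the 8-byte header layout here is hypothetical, not the real binlog event format): validate name_len and val_len against the buffer before walking it, and keep the value in heap-backed storage:

    #include <cstdint>
    #include <cstring>
    #include <optional>
    #include <string>
    #include <vector>

    struct UserVarEvent {
      std::string name;
      std::vector<std::uint8_t> value;    // heap-backed: fixes P2
    };

    std::optional<UserVarEvent> parse(const std::uint8_t* buf, size_t buf_len) {
      if (buf_len < 8) return std::nullopt;
      std::uint32_t name_len, val_len;
      std::memcpy(&name_len, buf, 4);
      std::memcpy(&val_len, buf + 4, 4);
      // P1 fix: range-check both lengths before touching the payload
      if (name_len > buf_len - 8 || val_len > buf_len - 8 - name_len)
        return std::nullopt;
      UserVarEvent ev;
      ev.name.assign(reinterpret_cast<const char*>(buf) + 8, name_len);
      ev.value.assign(buf + 8 + name_len, buf + 8 + name_len + val_len);
      return ev;
    }

    int main() {
      const std::uint8_t bad[8] = {0xff, 0xff, 0xff, 0xff, 0, 0, 0, 0};
      return parse(bad, sizeof(bad)) ? 1 : 0;  // rejected instead of overread
    }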
.. into MariaDB 5.3
Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3
This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.
Using the query from the bug report to explain what happens and what causes
the wrong result when ICP is enabled:
1. The t3 table contains four records. The outer query will read
these and for each of these it will execute the subquery.
2. Before the first execution of the subquery it will be optimized. In
this case the important part is what happens to the first table t1:
- make_join_select() will call the range optimizer, which decides
that t1 should be accessed using a range scan on the k1 index.
It creates a QUICK_RANGE_SELECT object for this.
- As the last part of optimization, the ICP code pushes the
condition down to the storage engine for table t1 on the k1 index.
This produces the following information in the explain for this table:
2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
Note the use of filesort.
3. The first execution of the subquery does (among other things) the
following, due to the need for sorting:
a. Call create_sort_index() which again will call find_all_keys():
b. find_all_keys() will read the required keys for all qualifying
rows from the storage engine. To do this it checks if it has a
quick-select for the table. It will use the quick-select for
reading records. In this case it will read four records from the
storage engine (based on the range criteria). The storage engine
will evaluate the pushed index condition for each record.
c. At the end of create_sort_index() there is code that cleans up a
lot of stuff on the join tab. One of the things that is cleaned
is the select object. The result of this is that the
quick-select object created in make_join_select is deleted.
4. The second execution of the subquery does the same as the first but
the result is different:
a. Call create_sort_index() which again will call find_all_keys()
(same as for the first execution)
b. find_all_keys() will read the keys from the storage engine. To
do this it checks if it has a quick-select for the table. Now
there is NO quick-select object(!) (since it was deleted in
step 3c). So find_all_keys() defaults to reading the table using a
table scan instead. So instead of reading the four relevant records
in the range it reads the entire table (6 records). It then
evaluates the table's condition (and here it goes wrong). Since
the entire condition has been pushed down to the storage engine
using ICP all 6 records qualify. (Note that the storage engine
will not evaluate the pushed index condition in this case since
it was pushed for the k1 index and now we do a table scan
without any index being used).
The result is that here we return six qualifying key values
instead of four due to not evaluating the table's condition.
c. As above.
5. The two last executions of the subquery will also produce wrong results
for the same reason.
Summary: The problem occurs because all but the first execution of the
subquery are done as a table scan without evaluating the table's
condition (which is pushed to the storage engine on a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.
Note that this bug, in addition to causing wrong results, can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is an issue in MySQL 5.5.
The fix for this problem is to avoid deleting the quick-select object
that the optimizer created when create_sort_index() does its clean-up
of the join-tab. This will ensure that the quick-select
object and the corresponding pushed index condition will be available
and used by all following executions of the subquery.
PRIVILEGES
Description: The (user,host) pair from the security context is used
for privilege checking at the time of granting or
revoking proxy privileges. This creates a problem
when the server is started with the
--skip-name-resolve option, because host will not
contain any value. Checks should depend on
consistent values regardless of the way the server
is started. Further, the privilege check should use
the (priv_user,priv_host) pair rather than values
obtained from the inbound connection, because
this pair represents the correct account context
obtained from the mysql.user table.
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started new indexes are added to the
table. Now when we issue an update statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
This error message is propagated up to the SQL layer. But
the my_error() api is never called. The statement level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in the Protocol::end_statement()
fails.
case Diagnostics_area::DA_EMPTY:
default:
  DBUG_ASSERT(0);
  error= send_ok(thd->server_status, 0, 0, 0, NULL);
  break;
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
cmake/cpack_rpm.cmake:
* mark all cnf files with %config(noreplace)
* add the forgotten postun script
sql/sys_vars.cc:
0 for a string variable means "no default". But datadir has a default value.
support-files/rpm/server-postin.sh:
* use mysqld --help to determine the correct datadir in the presence of my.cnf files
(better than my_print_defaults, because it considers the correct group set).
* Only create users and chown/chmod if it's a fresh install, not an upgrade.
* Only run mysql_install_db if the datadir does not exist
When a SP handler is activated, memory is allocated to hold the
MESSAGE_TEXT for the condition that caused the activation.
The problem was that this memory was allocated on the MEM_ROOT belonging
to the stored program. Since this MEM_ROOT is not freed until the
stored program ends, a stored program that causes lots of handler
activations can start using lots of memory. In 5.1 and earlier the
problem did not exist as no MESSAGE_TEXT was allocated if a condition
was raised with a handler present. However, this behavior led to
a number of other issues such as Bug#23032.
This patch fixes the problem by allocating enough memory for the
necessary MESSAGE_TEXTs in the SP MEM_ROOT when the SP starts and
then re-using this memory each time a handler is activated.
This is the 5.5 version of the patch.
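The reuse pattern in isolation (sizes and names are illustrative, not the server's constants): reserve the message slots once when the stored program starts and overwrite them on each activation, so memory use stays constant:

    #include <array>
    #include <cstddef>
    #include <iostream>
    #include <string>

    constexpr size_t MAX_ACTIVE_HANDLERS = 4;    // illustrative bound
    constexpr size_t MYSQL_ERRMSG_SIZE   = 512;

    class HandlerMessages {
      std::array<std::string, MAX_ACTIVE_HANDLERS> slots_;

     public:
      HandlerMessages() {                        // one-time cost at SP start
        for (auto& s : slots_) s.reserve(MYSQL_ERRMSG_SIZE);
      }
      // Each activation reuses its slot; no per-activation growth.
      void activate(size_t depth, const char* message_text) {
        slots_[depth].assign(message_text);
      }
      const std::string& text(size_t depth) const { return slots_[depth]; }
    };

    int main() {
      HandlerMessages msgs;
      for (int i = 0; i < 100000; ++i)           // many activations
        msgs.activate(0, "Division by 0");
      std::cout << msgs.text(0) << "\n";
    }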
This bug depends on cmake version.
For cmake 2.6 (which is still in use for some pushbuild trees)
the main build would succeed, even if create_initial_db failed.
The problem was the chaining of commands in the CUSTOM_COMMAND
to produce 'initdb.dep'. It first invokes cmake to run mysqld,
then invokes 'touch' to create the file. Moving the 'touch'
command makes the error propagate properly for both cmake 2.6 and 2.8
Follow-up patch - Fix broken build:
error: format ‘%u’ expects argument of type ‘unsigned int’,
but argument 2 has type ‘key_part_map {aka long unsigned int}’
[-Werror=format]
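The usual portable fix for this class of warning (shown on a stand-in typedef, not the actual call site): cast the value to a known width and use the matching conversion specifier:

    #include <cinttypes>
    #include <cstdio>

    typedef unsigned long key_part_map;   // 64-bit on LP64, as in the warning

    int main() {
      key_part_map map = 0x3;
      // std::printf("%u\n", map);        // -Werror=format: wrong width
      std::printf("%" PRIu64 "\n", static_cast<std::uint64_t>(map));
    }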
- Wrong thd usage in Item_subselect could lead to a crash
- Initialize an uninitialized variable in the new auto-increment handling code
sql/handler.cc:
More DBUG_PRINT
sql/item_subselect.cc:
Wrong thd usage in Item_subselect could lead to a crash
storage/innobase/handler/ha_innodb.cc:
Initialize a variable needed by the upper level. This only happens when the auto-increment value wraps around.
storage/xtradb/handler/ha_innodb.cc:
Initialize a variable needed by the upper level. This only happens when the auto-increment value wraps around.
n_child_sum_items kept increasing.
Since it is used for calculating the size of ref_pointer_array,
we will allocate larger and larger chunks of memory, until we hit some
operating system limit.
The memory is free()d at disconnect, but is most likely *not*
returned to the operating system.
In some rare cases, when the value of the system variable join_buffer_size
was set to a number less than 256, the function JOIN_CACHE::set_constants
determined the size of an offset in the join buffer to be 1 even though
the minimal join buffer required more than 256 bytes. This could cause
a server crash when records from the join buffer were read.
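A stand-alone sketch of the sizing rule involved (a simplification of JOIN_CACHE::set_constants, with hypothetical numbers): offsets must be wide enough to address the whole buffer, so the buffer size has to be clamped up to the minimum before the offset size is derived:

    #include <algorithm>
    #include <cstddef>
    #include <iostream>

    // Bytes needed for an offset that can address any position in the buffer.
    size_t offset_size_for(size_t buffer_size) {
      size_t sz = 1;
      while (sz < sizeof(size_t) && buffer_size >= (size_t{1} << (8 * sz)))
        ++sz;
      return sz;
    }

    int main() {
      // The bug: join_buffer_size below 256 yielded 1-byte offsets even though
      // the minimal buffer for the join needed more than 256 bytes.
      const size_t requested = 128, minimal_needed = 300;
      const size_t actual = std::max(requested, minimal_needed);  // clamp first
      std::cout << offset_size_for(actual) << "\n";               // 2 bytes
    }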