The fill_schema_table() function used to call get_table_share() for the table named in the WHERE
condition and then clear the error list. As a result, plugins still received a superfluous error
notification if an error happened there. Another problem was that the error handler did not
prevent the suppressed error message from being logged anyway, because the logging happens in
THD::raise_condition before the handler is called.
Trigger_error_handler is reworked into Warnings_only_error_handler: it now stores the error
message in thd->stmt_da in all cases.
Then later the stored error is raised.
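As a rough illustration of the idea (a standalone C++ sketch with hypothetical names, not the actual MariaDB classes): the handler suppresses immediate reporting, remembers the first error, and lets the caller raise it afterwards.

    #include <optional>
    #include <string>
    #include <iostream>

    struct StoredError {            // stand-in for what thd->stmt_da would keep
      unsigned code;
      std::string message;
    };

    class WarningsOnlyHandler {     // hypothetical analogue of Warnings_only_error_handler
      std::optional<StoredError> stored_;
    public:
      // Called for every condition; returning true means "handled, do not log/send now".
      bool handle(unsigned code, const std::string &msg) {
        if (!stored_) stored_ = StoredError{code, msg};   // remember the first error
        return true;                                      // suppress immediate reporting
      }
      // Later, the caller decides whether the stored error must actually be raised.
      void raise_if_any() const {
        if (stored_)
          std::cerr << "ERROR " << stored_->code << ": " << stored_->message << '\n';
      }
    };

    int main() {
      WarningsOnlyHandler handler;
      handler.handle(1146, "Table 'test.t1' doesn't exist");  // suppressed at this point
      // ... the information_schema row is filled without the error reaching the client ...
      handler.raise_if_any();                                 // raised only afterwards
    }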
The reason for the bug was an optimization for higher connect speed that moved the point at which
global status is updated, but forgot to update the status when a slave thread dies.
Fixed by calling thd->add_status_to_global() before deleting the slave thread's THD (see the sketch after the file list below).
mysys/my_delete.c:
Added missing newline
sql/mysqld.cc:
Use add_status_to_global()
sql/slave.cc:
Added missing add_status_to_global()
sql/sql_class.cc:
Use add_status_to_global()
sql/sql_class.h:
Simplify adding local status to global by adding add_status_to_global()
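A minimal standalone sketch of the intent, assuming a heavily reduced THD and status structure (illustrative names, not the real server types): per-thread counters are folded into the globals under LOCK_status before the THD is destroyed, so nothing is lost when a slave thread dies.

    #include <mutex>
    #include <cstdint>
    #include <iostream>

    // Per-thread status counters (a tiny stand-in for THD's status_var).
    struct StatusVar { std::uint64_t bytes_sent = 0, questions = 0; };

    StatusVar global_status;                 // aggregated at thread exit / on demand
    std::mutex LOCK_status;                  // same role as the server's LOCK_status

    struct Thd {                             // hypothetical, heavily reduced THD
      StatusVar status_var;

      // Analogue of THD::add_status_to_global(): fold this thread's counters into
      // the globals and reset the local ones so they are never counted twice.
      void add_status_to_global() {
        std::lock_guard<std::mutex> guard(LOCK_status);
        global_status.bytes_sent += status_var.bytes_sent;
        global_status.questions  += status_var.questions;
        status_var = StatusVar{};
      }
    };

    int main() {
      Thd *slave_thd = new Thd;
      slave_thd->status_var.questions = 42;          // work done by the slave thread
      slave_thd->add_status_to_global();             // must happen before the delete
      delete slave_thd;                              // otherwise these counters are lost
      std::cout << global_status.questions << '\n';  // prints 42
    }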
on server shutdown after SELECT with CONVERT_TZ
It is wrong to return my_empty_string from val_str().
Remove my_empty_string and use make_empty_result() instead.
Plugins get error notifications only when my_message_sql() is called,
but errors are also raised via THD::raise_condition() in other places:
push_warning() and the implementations of the SIGNAL and RESIGNAL
commands. So it makes sense to notify plugins in THD::raise_condition()
as well.
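A standalone sketch of the design point, with hypothetical names: if every condition funnels through one raise_condition() choke point, notifying plugins there covers push_warning(), SIGNAL and RESIGNAL without touching each call site.

    #include <functional>
    #include <string>
    #include <vector>
    #include <iostream>

    // Hypothetical plugin notification hook (stand-in for the callback that the
    // real server invokes from my_message_sql()).
    std::vector<std::function<void(unsigned, const std::string&)>> plugin_observers;

    void notify_plugins(unsigned code, const std::string &msg) {
      for (auto &cb : plugin_observers) cb(code, msg);
    }

    // Because every path that raises a condition goes through this one function,
    // the notification made here also covers push_warning(), SIGNAL and RESIGNAL.
    void raise_condition(unsigned code, const std::string &msg) {
      notify_plugins(code, msg);       // plugins see the condition regardless of origin
      // ... record the condition in the diagnostics area ...
    }

    int main() {
      plugin_observers.push_back([](unsigned c, const std::string &m) {
        std::cout << "plugin saw condition " << c << ": " << m << '\n';
      });
      raise_condition(1366, "Incorrect integer value");   // e.g. from push_warning()
    }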
------------------------------------------------------------
revno: 3929 [merge]
fixes bug: https://launchpad.net/bugs/1243150
committer: Teemu Ollakka <teemu.ollakka@codership.com>
branch nick: 5.5-23
timestamp: Wed 2013-10-23 20:05:01 +0300
message:
References lp:1243150 - initial wsrep hton cleanups
* Removed wsrep_seqno_changed boolean
* wsrep_cleanup_transaction() is now called explicitly whenever it is
clear that the transaction has come to an end
* wsrep_trans_cache_is_empty() now checks cache_mngr regardless of
command type
* Separated the call to wsrep->post_commit() into its own function, called from
transaction.cc whenever appropriate
* wsrep_thd_is_brute_force() now investigates only thd->wsrep_exec_mode
* More comments and debug time assertions
* Debug code to check that the wsrep position stored in InnoDB is
monotonically increasing. Enabled with UNIV_DEBUG
------------------------------------------------------------
revno: 3928
fixes bug: https://launchpad.net/bugs/1237889
committer: Teemu Ollakka <teemu.ollakka@codership.com>
branch nick: 5.5-23
timestamp: Tue 2013-10-22 22:01:20 +0300
message:
References lp:1237889 - reverting fix in r3926, it broke crash recovery
------------------------------------------------------------
revno: 3927
fixes bug: https://launchpad.net/bugs/1240040
committer: Teemu Ollakka <teemu.ollakka@codership.com>
branch nick: 5.5-23
timestamp: Tue 2013-10-15 14:46:15 +0300
message:
References lp:1240040 - added WSREP_MYSQL_DB as a key for DROP VIEW
------------------------------------------------------------
revno: 3926
fixes bug: https://launchpad.net/bugs/1237889
committer: Teemu Ollakka <teemu.ollakka@codership.com>
branch nick: 5.5-23
timestamp: Thu 2013-10-10 14:22:58 +0300
message:
References lp:1237889 - register wsrep hton only if thd->wsrep_exec_mode == LOCAL_STATE
------------------------------------------------------------
revno: 3925
fixes bug: https://launchpad.net/bugs/1235635
committer: Alexey Yurchenko <alexey.yurchenko@codership.com>
branch nick: 5.5-23
timestamp: Sat 2013-10-05 18:03:06 +0300
message:
References lp:1235635 - fixed the warning by initializing c_lock to NULL.
(and valgrind warnings)
* move thd userstat initialization to the same function
that was adding thd userstat to global counters.
* initialize thd->start_bytes_received in THD::init
(when thd->userstat_running is set)
ORDER BY does not work
Use "dynamic" row format (instead of "block") for MARIA internal
temporary tables created for cursors.
With "block" row format MARIA may shuffle rows, with "dynamic" row
format records are inserted sequentially (there are no gaps in data
file while we fill temporary tables).
This is needed to preserve row order when scanning materialized cursors.
Description:
The original fix for Bug#11765744 changed the mutex to a read-write lock
to avoid recursive lock acquisition on the LOCK_status mutex.
On Windows, locking a read-write lock recursively is not safe.
Slim read-write locks, which MySQL uses when the Windows version
supports them, do not support recursion according to their
documentation. For our own read-write lock implementation, which is
used when the Windows version does not support SRW locks, recursive
locking can easily lead to a deadlock if there are concurrent lock
requests.
Fix:
This patch reverts the previous fix for Bug#11765744 that used
read-write locks. Instead, the problem of recursive locking of the
LOCK_status mutex is solved by tracking the recursion level with a
counter in the THD object and acquiring the lock only once, when
fill_status() is entered for the first time.
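A standalone C++ sketch of the recursion-counter approach described above (reduced types, illustrative names only): the outermost fill_status() call takes LOCK_status, nested calls only bump the per-THD counter.

    #include <mutex>
    #include <iostream>

    std::mutex LOCK_status;     // plays the role of the server's LOCK_status mutex

    struct Thd {                // hypothetical, reduced THD
      int status_recursion_level = 0;   // the counter described in the fix
    };

    // Acquire LOCK_status only on the outermost call; nested calls (e.g. when
    // reporting one status variable itself needs to fill status) just bump the counter.
    void fill_status(Thd *thd) {
      if (thd->status_recursion_level++ == 0)
        LOCK_status.lock();

      // ... read and report the status variables; this may call fill_status() again ...

      if (--thd->status_recursion_level == 0)
        LOCK_status.unlock();
    }

    int main() {
      Thd thd;
      fill_status(&thd);    // the outer call takes the mutex exactly once
      std::cout << "done, recursion level back to " << thd.status_recursion_level << '\n';
    }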
SERIALIZABLE
Problem:
The documentation claims that WITH CONSISTENT SNAPSHOT will work for both
the REPEATABLE READ and SERIALIZABLE isolation levels, but it works only
for REPEATABLE READ. Also, the WITH CONSISTENT SNAPSHOT clause is
silently ignored when it is not applicable to the given isolation
level.
Solution:
Generate a warning when the clause WITH CONSISTENT SNAPSHOT is ignored.
rb#2797 approved by Kevin.
Note: Support team wanted to push this to 5.5+.
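A standalone sketch of the warning logic, under the assumption that the check happens where the transaction is started (names and the warning text are illustrative, not the exact server code):

    #include <iostream>
    #include <string>

    enum class IsoLevel { READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE };

    // Hypothetical stand-in for the server's warning mechanism: just prints here.
    void push_warning(const std::string &msg) { std::cout << "Warning: " << msg << '\n'; }

    // WITH CONSISTENT SNAPSHOT only takes effect under REPEATABLE READ; for any
    // other isolation level the clause is ignored, and after the fix the server
    // says so instead of staying silent.
    void start_transaction(IsoLevel level, bool with_consistent_snapshot) {
      if (with_consistent_snapshot && level != IsoLevel::REPEATABLE_READ)
        push_warning("WITH CONSISTENT SNAPSHOT was ignored because it only "
                     "applies to the REPEATABLE READ isolation level.");
      // ... begin the transaction ...
    }

    int main() {
      start_transaction(IsoLevel::SERIALIZABLE, /*with_consistent_snapshot=*/true);
    }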
bzr merge lp:maria/5.5 -rtag:mariadb-5.5.31
Text conflict in cmake/cpack_rpm.cmake
Text conflict in debian/dist/Debian/control
Text conflict in debian/dist/Ubuntu/control
Text conflict in sql/CMakeLists.txt
Conflict adding file sql/db.opt. Moved existing file to sql/db.opt.moved.
Conflict adding file sql/db.opt.moved. Moved existing file to sql/db.opt.moved.moved.
Text conflict in sql/mysqld.cc
Text conflict in support-files/mysql.spec.sh
8 conflicts encountered.
NUMBER ALREADY USED BY 5.6
The problem was that the patch for Bug#13004581 added a new error
message to 5.5. This caused it to use an error number already used
in 5.6 by ER_CANNOT_LOAD_FROM_TABLE_V2, which means that error
message number stability between GA releases is broken.
This patch fixes the problem by removing the error message and
using ER_UNKNOWN_ERROR instead.
When logging to the binary log in row format, updates and deletes to a BLACKHOLE
engine table are skipped.
It is impossible to log updates and deletes to a BLACKHOLE engine table in row
format, as no row events can be generated in these cases.
After the fix, a warning is generated for UPDATE/DELETE statements that modify a
BLACKHOLE table, as row events are not logged in row format.
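A standalone sketch of the warning condition, with illustrative names and warning text (not the exact server code): warn when binlog_format is ROW and an UPDATE or DELETE targets a BLACKHOLE table.

    #include <iostream>
    #include <string>

    enum class BinlogFormat { STATEMENT, ROW, MIXED };
    enum class SqlCommand { Update, Delete, Insert };

    // Hypothetical stand-in for pushing a warning to the client.
    void push_warning(const std::string &msg) { std::cout << "Warning: " << msg << '\n'; }

    // BLACKHOLE returns no rows on read, so UPDATE/DELETE against it cannot
    // generate row events; with binlog_format=ROW nothing would be written to
    // the binary log for those statements, hence the warning.
    void maybe_warn_blackhole(BinlogFormat format, SqlCommand cmd, bool table_is_blackhole) {
      if (table_is_blackhole && format == BinlogFormat::ROW &&
          (cmd == SqlCommand::Update || cmd == SqlCommand::Delete))
        push_warning("Row events are not logged for UPDATE/DELETE statements "
                     "that modify BLACKHOLE tables in row format.");
    }

    int main() {
      maybe_warn_blackhole(BinlogFormat::ROW, SqlCommand::Update, /*table_is_blackhole=*/true);
    }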
SCHEDULER DROPS EVENTS
Problem: On a semi-sync enabled server (Master/Slave),
if the event scheduler drops an event after completion,
the server crashes.
Analysis: If an event is created with the "ON COMPLETION
NOT PRESERVE" clause, the event scheduler deletes the event
upon its completion (expiration) and the thread object
is destroyed. In the destructor of the thread object,
the mysys_var member is explicitly set to zero. Later in
the same destructor call (same execution path), on a
semi-sync enabled server, cleanup code accesses the
THD::mysys_var member through THD::enter_cond(), which
causes the server to crash.
Fix: mysys_var should not be explicitly set to zero; doing
so is also not required.
sql/sql_class.cc:
mysys_var should not be explicitly set to zero.
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef
committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
timestamp: Fri 2012-03-09 15:04:49 +0200
message:
Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output from
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in the
SELECT part of SELECT INTO it will be properly marked as non-const
for this query.
Test case added.
Used proper fast iterator.
a better fix (much smaller and without regressions) is coming from 5.1
MySQL Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME USER VARIABLE = CRASH
and
MySQL Bug#14664077 SEVERE PERFORMANCE DEGRADATION IN SOME CASES WHEN USER VARIABLES ARE USED
sql/item_func.cc:
don't use anything from Item_func_set_user_var::fix_fields()
in Item_func_set_user_var::save_item_result()
sql/sql_class.cc:
Call suv->save_item_result(item) *before* doing suv->fix_fields(), because
the former evaluates the item (and caches its value), while the latter marks
the user variable as non-const. The problem is that the item was fix_field'ed
when the user variable was const, and it doesn't expect it to change to non-const
in the middle of the execution.
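A standalone toy model of the call-order issue (reduced stand-ins, not the real Item classes): caching the value must happen before the variable is flipped to non-const, because the item was fixed while the variable was still const.

    #include <iostream>

    // Heavily reduced stand-ins; only the call order matters here.
    struct Item {
      bool depends_on_const_var = true;
      int value() const { return depends_on_const_var ? 42 : -1; }
    };

    struct SetUserVar {                         // toy Item_func_set_user_var
      Item *arg;
      int cached = 0;
      void save_item_result() { cached = arg->value(); }             // evaluate and cache now
      void fix_fields()       { arg->depends_on_const_var = false; } // marks the variable non-const
    };

    int main() {
      Item item;
      SetUserVar suv{&item};
      suv.save_item_result();   // correct order: cache the value first ...
      suv.fix_fields();         // ... then flip the variable to non-const
      std::cout << suv.cached << '\n';   // 42, computed under the state the item was fixed for
    }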
Analysis:
--------
As part of the fix for Bug#11757464, the 'out of memory' error
condition was not pushed to the diagnostics area because doing so
requires memory allocation. However, when an 'out of memory' error is
raised via SIGNAL/RESIGNAL, the server may not actually be out of
memory, so it would be good to report the error in such cases.
Fix:
---
Push only non-fatal 'out of memory' errors to the diagnostics area.
Since a SIGNAL/RESIGNAL of an 'out of memory' error may not be fatal,
the error is reported.
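A standalone sketch of the rule, with a reduced diagnostics area and an illustrative error code: a genuinely fatal out-of-memory condition skips the allocation-requiring push, while a non-fatal one (e.g. raised via SIGNAL) is recorded.

    #include <iostream>
    #include <string>

    // Hypothetical reduced diagnostics area.
    struct DiagnosticsArea {
      void push_error(unsigned code, const std::string &msg) {
        std::cout << "DA error " << code << ": " << msg << '\n';
      }
    };

    constexpr unsigned ER_OUTOFMEMORY = 1037;   // numeric value used only for this sketch

    // A real out-of-memory condition must not try to allocate a condition object,
    // but a SIGNAL/RESIGNAL of ER_OUTOFMEMORY is not a real OOM, so recording it is safe.
    void raise_error(DiagnosticsArea &da, unsigned code, const std::string &msg, bool fatal) {
      if (code == ER_OUTOFMEMORY && fatal)
        return;                       // truly out of memory: skip the allocation
      da.push_error(code, msg);       // non-fatal OOM (e.g. via SIGNAL) is reported
    }

    int main() {
      DiagnosticsArea da;
      raise_error(da, ER_OUTOFMEMORY, "Out of memory", /*fatal=*/false);  // reported
      raise_error(da, ER_OUTOFMEMORY, "Out of memory", /*fatal=*/true);   // skipped
    }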
Allow only three failed change_user attempts per connection.
A successful change_user does NOT reset the counter (see the sketch below).
tests/mysql_client_test.c:
make --error work for --change_user errors
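A standalone sketch of the counter behaviour described above (hypothetical per-connection struct, not the real server code): failures accumulate, a success leaves the count untouched, and the connection is dropped on the third failure.

    #include <iostream>

    struct Connection {                          // hypothetical per-connection state
      static constexpr int MAX_FAILED_CHANGE_USER = 3;
      int failed_change_user = 0;

      // Returns false when the connection should be dropped.
      bool on_change_user(bool auth_ok) {
        if (!auth_ok) {
          ++failed_change_user;                  // count every failure
          if (failed_change_user >= MAX_FAILED_CHANGE_USER)
            return false;                        // too many failures: kill the connection
        }
        // note: on success the counter is intentionally left as-is
        return true;
      }
    };

    int main() {
      Connection c;
      c.on_change_user(false);                   // 1st failure
      c.on_change_user(true);                    // success, counter stays at 1
      c.on_change_user(false);                   // 2nd failure
      std::cout << (c.on_change_user(false) ? "kept" : "dropped") << '\n';  // 3rd failure: dropped
    }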