first, and we do not care whether the client has received all data.
This is a TCP optimization to avoid TIME_WAIT in TCP connection teardown.
This patch aborts the connection on timeout, which usually happens when
the client reads a large result set at a slower pace than the server can
write.
The patch also cleans up socket timeout handling, so that Windows
is consistent with other platforms (using nonblocking socket I/O
+ waiting in poll/select on a single socket, rather than setsockopt).
This makes identifying timeouts easier.
Also removed the superfluous shutdown() before closesocket() in a few
places where it was used, because it was never needed, and
reportedly breaks SO_LINGER on Windows.
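For illustration only (a sketch; net_write_timeout is the usual server-side
write timeout variable and is not named by this patch):
    SET SESSION net_write_timeout = 60;  -- seconds the server waits while writing to the client
    SELECT * FROM big_table;             -- if the client stalls longer than that, the connection is now aborted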
Part 2: make MyRocks add its directory to @@ignore_db_dirs when starting.
This is necessary because apparently not everybody is using the plugin's my.cnf;
some load ha_rocksdb.{so,dll} manually and then hit MDEV-12451, MDEV-14461,
etc.
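A sketch of the intended effect (the directory name #rocksdb is an assumption,
not taken from this commit):
    INSTALL SONAME 'ha_rocksdb';
    SELECT @@ignore_db_dirs;   -- should now include the MyRocks data directory (e.g. #rocksdb)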
* Note: breaking change; since this commit, a plugin that has
worked so far might get rejected due to plugin maturity
* mariabackup is not affected (allows all plugins)
* VERSION file defines SERVER_MATURITY, which defines the
corresponding numeric value as SERVER_MATURITY_LEVEL in
include/mysql_version.h
* The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
* Logs a warning if a plugin has maturity lower than
SERVER_MATURITY_LEVEL
* Tests suppress the plugin maturity warning
* Tests use --plugin-maturity=unknown by default so as not to fail
due to the stricter plugin maturity handling
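A hedged illustration (the plugin name and maturity values are assumptions,
not taken from this commit):
    SELECT @@plugin_maturity;    -- e.g. 'gamma' on a 'stable' server (one level below SERVER_MATURITY)
    INSTALL SONAME 'ha_example'; -- rejected if the plugin declares a maturity lower than @@plugin_maturity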
This was a missing bug fix from MySQL wsrep, i.e. Galera.
The problem was that if a stored procedure declares a handler that
catches a deadlock error, then the error may have been
cleared in the method sp_rcontext::handle_sql_condition().
Use wsrep_conflict_state correctly to determine whether the
error has already been sent to the client.
Add a test case for both this bug and MDEV-12837 (WSREP: BF
lock wait long). The test requires both fixes to pass.
This is the 10.1 version, where no merge error exists.
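The affected pattern looks roughly like this (procedure, table and column
names are made up; 1213 is ER_LOCK_DEADLOCK):
    CREATE PROCEDURE p()
    BEGIN
      DECLARE CONTINUE HANDLER FOR 1213
        SELECT 'deadlock caught';
      UPDATE t SET a = a + 1 WHERE id = 1;  -- may be BF-aborted by Galera
    END;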
wsrep_on_check
New check function. Galera can't be enabled
if innodb-lock-schedule-algorithm=VATS.
innobase_kill_query
In a Galera async kill we could already own the lock mutex.
innobase_init
If the Variance-Aware Transaction Scheduling (VATS) algorithm is
used with Galera, we refuse to start InnoDB.
Changed innodb-lock-schedule-algorithm to a read-only parameter,
as it was designed to be.
lock_rec_other_has_expl_req,
lock_rec_other_has_conflicting,
lock_rec_lock_slow
lock_table_other_has_incompatible
lock_rec_insert_check_and_lock
Changed the pointer to the conflicting lock to a normal pointer, as the
contents of this pointer could be changed later.
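A sketch of the user-visible consequence of the read-only change (exact error
text not taken from this commit):
    SET GLOBAL innodb_lock_schedule_algorithm = 'fcfs';
    -- now fails: the variable is read-only and can only be set at server startup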
* again, as in 10.2, NOW is a keyword only if followed by parentheses
* use AS OF CURRENT_TIMESTAMP or AS OF NOW()
* AS OF CURRENT_TIMESTAMP and AS OF NOW() mean AS OF NOW(6),
not AS OF NOW(0) (same behavior as in a DEFAULT clause)
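For example, with a system-versioned table t (the table name is made up; the
FOR SYSTEM_TIME syntax is how these clauses are normally used):
    SELECT * FROM t FOR SYSTEM_TIME AS OF NOW();
    SELECT * FROM t FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP;
    -- both are interpreted as AS OF NOW(6), i.e. microsecond precision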
LOCK_thd_data was used both to protect THD data and to
ensure that the THD is not deleted while it is in use.
This patch moves the THD delete protection to LOCK_thd_kill,
which already protects the THD during kill.
The benefits are:
- It is better defined what LOCK_thd_data protects
- LOCK_thd_data usage is now much simpler and easier to verify
- Less chance of deadlocks in SHOW PROCESSLIST, as there is less
chance of interactions between mutexes
- Removed the unneeded LOCK_thread_count from
thd_get_error_context_description()
- Fewer mutexes taken for thd->awake()
Other things:
- Don't take the mysys_var->mutex in SHOW PROCESSLIST to check if a thread
is marked as killed
- thd->awake() now automatically takes the LOCK_thd_kill mutex
(simplifies code)
- Apc uses LOCK_thd_kill instead of LOCK_thd_data
This allows SHOW PROCESSLIST to continue, without blocking
all new connections, if some thread gets stuck while holding
LOCK_thd_data or mysys_var->mutex.
Connections whose mutex is 'stuck' are marked as 'Busy' in 'Command'.
Todo:
Make F_BACKOFF do 'pause' instead of just (1)
- Removed the unused thd_rpl_is_parallel()
- Removed the unused mysql_notify_thread_having_shared_lock()
- Removed the unneeded LOCK_thread_count from MYSQL_BIN_LOG::reset_logs()
- LOCK_thread_count does not protect against rollback, so this
code and comment are not needed
- Removed mutex locks in slave.cc that are not needed.
Added THD::assert_not_linked() to ensure that it was safe to remove them
- Fixed the non-repeatable test load_data_stmt_view
- Updated binlog_killed to test the removal of the mutex
(thanks to Andrei Elkin for the test)
- More code comments
This was a missing bug fix from MySQL wsrep, i.e. Galera.
The problem was that if a stored procedure declares a handler that
catches a deadlock error, then the error may have been
cleared in the method sp_rcontext::handle_sql_condition().
Use wsrep_conflict_state correctly to determine whether the
error has already been sent to the client.
Add a test case for both this bug and MDEV-12837 (WSREP: BF
lock wait long). The test requires both fixes to pass.
The problem was a merge error from MySQL wsrep, i.e. Galera.
wsrep_on_check
New check function. Galera can't be enabled
if innodb-lock-schedule-algorithm=VATS.
innobase_kill_query
In a Galera async kill we could already own the lock mutex.
innobase_init
If the Variance-Aware Transaction Scheduling (VATS) algorithm is
used with Galera, we fall back to First-Come-First-Served (FCFS)
with a notice to the user.
Changed innodb-lock-schedule-algorithm to a read-only parameter,
as it was designed to be.
lock_reset_lock_and_trx_wait
Use ib::hex() to print out transaction ID.
lock_rec_other_has_expl_req,
lock_rec_other_has_conflicting,
RecLock::add_to_waitq
lock_rec_lock_slow
lock_table_other_has_incompatible
lock_rec_insert_check_and_lock
lock_prdt_other_has_conflicting
Changed the pointer to the conflicting lock to a normal pointer, as the
contents of this pointer could be changed later.
RecLock::create
The conflicting lock pointer is moved to the last parameter, with
default value NULL. The conflicting transaction could
be selected as a victim in Galera if the requesting transaction
is a BF (brute force) transaction. In this case the contents
of the conflicting lock pointer will be changed. Use ib::hex() to print
transaction IDs.
Do not generate fake values when adding an auto-inc column to a versioned
table. This is not an auto-inc issue, but a more general case of adding
a non-nullable unique column to a table with history. We don't support
it yet, not even with a special auto-inc hack. As a workaround, one
can use a nullable unique column; that works.
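A sketch of the scenario (table and column names are made up; exact error
messages are omitted):
    CREATE TABLE t (a INT) WITH SYSTEM VERSIONING;
    INSERT INTO t VALUES (1);
    DELETE FROM t;                            -- the row now exists only in history
    ALTER TABLE t ADD b INT NOT NULL UNIQUE;  -- rejected: historical rows would need fake values
    ALTER TABLE t ADD b INT NULL UNIQUE;      -- workaround: a nullable unique column works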
* Note: breaking change; since this commit, a plugin that has
worked so far might get rejected due to plugin maturity
* mariabackup is not affected (allows all plugins)
* VERSION file defines SERVER_MATURITY, which defines the
corresponding numeric value as SERVER_MATURITY_LEVEL in
include/mysql_version.h
* The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
* Logs a warning if a plugin has maturity lower than
SERVER_MATURITY_LEVEL
* Tests suppress the plugin maturity warning
* Tests use --plugin-maturity=unknown by default so as not to fail
due to the stricter plugin maturity handling
This commit implements aggregate stored functions. The basic idea behind
the feature is:
* Implement a special instruction FETCH GROUP NEXT ROW that will pause
the execution of the stored function. When the instruction is reached,
execution of the initial query resumes "as if" the function returned.
This gives the server the opportunity to advance to the next row in the
result set.
* Stored aggregates behave like regular aggregate functions. The
implementation thus resides in the class Item_sum_sp. Because it is
an aggregate function, for each new row in the group, the
Item_sum_sp::add() method will be called. This is when execution resumes
and the function does another iteration to "add" one extra element to
the final result.
* When the end of the group is reached, the val_xxx() method will be called for
the item. This case is handled by another execute step for the stored
function, only with a special flag to force a call to the return
handler. See Item_sum_sp::execute() for details.
To allow these pause-and-resume semantics, we must preserve the function
context across executions. This is stored in Item_sp::sp_query_arena only for
aggregate stored functions and has no impact on regular functions.
We also require aggregate functions to include the "FETCH GROUP NEXT ROW"
instruction.
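For reference, a minimal aggregate stored function using the new instruction
would look roughly like this (the function, table and column names are made up
for illustration):
    CREATE AGGREGATE FUNCTION agg_sum(x INT) RETURNS INT
    BEGIN
      DECLARE total INT DEFAULT 0;
      DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;  -- fires when the group is exhausted
      LOOP
        FETCH GROUP NEXT ROW;   -- pause here; execution resumes on the next Item_sum_sp::add()
        SET total = total + x;
      END LOOP;
    END;
    SELECT agg_sum(amount) FROM orders GROUP BY customer_id;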
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
In preparation for implementing custom aggregate functions, refactor
the common code between regular stored functions and aggregate stored
functions. This includes:
* initialising the SP result field
* executing an SP
* access checks
In addition, refactor sp_head::execute_function to take two extra
parameters, a function rcontext and a Query_arena. These two parameters
were initially initialised and destroyed within
sp_head::execute_function, but for aggregate stored functions we will
require control over their lifetime. The owner of these objects now
becomes Item_sp.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>