Added CHECK constraints to I_S.TABLE_CONSTRAINTS.
Fixed a bug with virtual column definitions whose name is the same as the field name.
Added test case: check_constraint_show
The function Item::split_sum_func2() incorrectly processed function
items with window functions that were not window functions themselves
and were used as arguments of other functions.
The function st_select_lex_unit::exec_recursive() incorrectly determined
whether a CTE mutually recursive with some others had stabilized in the case
when the non-recursive part of the CTE returned an empty set. As a result
the server fell into an infinite loop when executing a query using
this CTE.
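A sketch of the affected pattern (hypothetical names, not from the original report):

  WITH RECURSIVE a AS
    (SELECT i FROM t WHERE 1=0    -- non-recursive part returns an empty set
     UNION
     SELECT i+1 FROM b WHERE i<10),
  b AS
    (SELECT i FROM t
     UNION
     SELECT i+1 FROM a WHERE i<10)
  SELECT * FROM a;

With the empty anchor set, the stabilization check misfired and the query never terminated.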
Mutually recursive CTEs could cause a crash of the server in the case
when they were not Standard compliant. The crash happened in
mysql_derived_prepare(), because the destructor of the derived_result
object created for a CTE that was mutually recursive with some others
was called twice. Yet this destructor should not be called for recursive
references.
The method With_element::check_unrestricted_recursive() incorrectly performed
the check that no recursive reference is encountered in inner parts of
outer joins. As a result the server reported errors for valid specifications
with outer joins.
When "simulate_failed_connection_1" DBUG keyword is set, current thread's
DBUG stack points to global variable in dbug (init_settings).
This global variable could be overriden while current THD is still active,
producing crash in worst scenario.
Added DBUG_SET()so that the current thread own dbug
context that cannot concurrently modified by anyone else.
- Before this patch, during startup all slave threads were started without
any check that they had started properly.
- If one did a START SLAVE, STOP SLAVE or CHANGE MASTER as the first command
to the server, there was a chance that the server could access structures
that were not properly initialized, which could lead to crashes in
Log_event::read_log_event
- Fixed by waiting for slave threads to start up properly also during
server startup, like we do with START SLAVE.
The following is an updated commit message for commit
c5e25c8b40,
which was pushed before I had a chance to update the commit message:
Fixed deadlocks when doing STOP SLAVE while the slave was starting.
- Added a separate lock for protecting start/stop/reset of a specific slave.
This solves some possible deadlocks when one calls STOP SLAVE while
the slave is starting, as the old run_locks were overused for other things.
- Set hash->records to 0 before calling free on all hash elements.
This was done to stop concurrent threads from looping over hash elements
and accessing members that were already freed.
This was a problem especially in start_all_slaves/stop_all_slaves,
as the mutex protecting the hash was temporarily released while a slave
was started/stopped.
- Because of the change to hash->records during hash_reset(),
any_slave_sql_running() will return 1 during shutdown, as one can't
loop over master_info_index->master_info_hash while hash_reset() of it
is in progress.
This also fixes a potential old bug in any_slave_sql_running() where,
during shutdown and my_hash_free() in ~Master_info_index(), we could
potentially try to access elements that were already freed.
The value of wsrep_affected_rows was not reset properly for the
slave. Now we also reset wsrep_affected_rows in
Xid_log_event::do_apply_event(), apart from THD::cleanup_after_query().
Signed-off-by: Sachin Setiya <sachin.setiya@mariadb.com>
In get_mm_tree() we have to change Field_geom::geom_type to
GEOMETRY, as we have to allow storing all types of spatial features
in the field. So now we restore the original geom_type once that is
done.
failed with SELECT SQ, TEXT field
The function find_all_keys() calls Item_subselect::walk, which calls walk() for the subquery.
The issue arises when a field is represented by Item_outer_ref(Item_direct_ref(Item_copy_string( ...))).
Item_copy_string does have a pointer to an Item_field in Item_copy::item, but does not implement the Item::walk method, so we are not
able to set the bitmap for that field. This is the reason why the assert fails.
Fixed by adding the walk method to the Item_copy class.
character (").
The my_wildcmp function doesn't expect the string parameter to
contain escape sequences, only the template. So the string
should be unescaped if necessary.
- add PAGESIZE property to the MSI installer
- add a combobox to the MSI UI to select the InnoDB page size
- add a new parameter --innodb_page_size for mysql_install_db.exe.
This is passed down to bootstrap and also stored in my.ini.
The MSI will call mysql_install_db.exe with --innodb_page_size set to the
PAGESIZE property.
Option wsrep_max_ws_rows is intended to limit the maximum number of rows
in a writeset. To enforce this limit, we increment THD::wsrep_affected_rows
on every INSERT, UPDATE or DELETE. The problem is that we do so even on
insertion to internal temporary tables used for SELECTs and such.
THD::wsrep_affected_rows is now incremented only for rows that are actually
replicated.
Signed-off-by: Sachin Setiya <sachinsetia1001@gmail.com>
Try harder to show the table's engine.
If the table's engine is not loaded, the table won't open.
But we can still read the engine name from frm as a string.
Make SELECT <columns> FROM I_S.TABLES behave identically regardless of
whether <columns> require opening the table in the engine or
<columns> can be filled from the frm alone.
In particular, fill_schema_table_from_frm() should not silently skip
frms with an unknown engine, but should fill the I_S.TABLES row
with NULLs just like fill_schema_table_by_open() does.
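For example (a hedged illustration; the exact affected columns depend on the engine):

  SELECT TABLE_NAME, ENGINE, TABLE_ROWS
  FROM information_schema.TABLES
  WHERE TABLE_SCHEMA='test';

ENGINE can be filled from the frm alone, while a column like TABLE_ROWS requires opening the table in the engine; when the engine is not loaded, the row is still returned, with such columns set to NULL.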
has_no_default_value() should only fail the insert in strict mode.
Additionally, don't check for "all fields are given values" twice,
as that produces duplicate warnings.
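A minimal sketch (hypothetical table, not from the original report):

  CREATE TABLE t1 (a INT NOT NULL);
  SET sql_mode='STRICT_ALL_TABLES';
  INSERT INTO t1 VALUES ();   -- fails: a has no default value
  SET sql_mode='';
  INSERT INTO t1 VALUES ();   -- succeeds with a single warning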
Also, implement MDEV-11027 a little differently from 5.5 and 10.0:
recv_apply_hashed_log_recs(): Change the return type back to void
(DB_SUCCESS was always returned).
Report progress also via systemd using sd_notifyf().
Also, implement MDEV-11027 a little differently from 5.5:
recv_sys_t::report(ib_time_t): Determine whether progress should
be reported.
recv_apply_hashed_log_recs(): Rename the parameter to last_batch.
This is essentially a backport of the 10.0
commit 203f4d4193
that fixes a bug and silences a GCC 6.3.0 warning
about a left shift of a signed integer.
Missing parentheses in a macro definition caused wrong operation
in the Query_cache::send_result_to_client() statement
thd->query_plan_flags= (thd->query_plan_flags & ~QPLAN_QC_NO) | QPLAN_QC;
This would expand to
thd->query_plan_flags= (thd->query_plan_flags & ~1) << 6 | 1 << 5;
which would shift the flags by 6 and clear an unrelated flag, instead
of clearing the flag (1 << 6).
Introduced by commit 2f63e2e2a.
The compiler warning was:
sql/log_event.cc: In function ‘size_t log_event_print_value(IO_CACHE*, const uchar*, uint, uint, char*, size_t)’:
sql/log_event.cc:2897:7: warning: this ‘if’ clause does not guard... [-Wmisleading-indentation]
if (!ptr)
^~
sql/log_event.cc:2900:9: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘if’
int32 i32= uint2korr(ptr);
^~~~~
Added in commit 950abd526.
The compiler warning was:
sql/item_sum.cc:3478:5: warning: this ‘if’ clause does not guard... [-Wmisleading-indentation]
if ((!args[i]->fixed &&
^~
/sql/item_sum.cc:3482:7: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘if’
with_subselect|= args[i]->with_subselect;
^~~~~~~~~~~~~~
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
The bug was caused by a wrong order of statements in With_clause::print().
As a result any view definition containing a WITH clause with several
CTE specifications was put into the frm file in a syntactically incorrect
form.
Define my_thread_id as an unsigned type, to avoid mismatch with
ulonglong. Change some parameters to this type.
Use size_t in a few more places.
Declare many flag constants as unsigned to avoid sign mismatch
when shifting bits or applying the unary ~ operator.
When applying the unary ~ operator to enum constants, explicitly
cast the result to an unsigned type, because enum constants can
be treated as signed.
In InnoDB, change the source code line number parameters from
ulint to unsigned type. Also, make some InnoDB functions return
a narrower type (unsigned or uint32_t instead of ulint;
bool instead of ibool).
Problem: integer literals may be converted to floats for
comparison with decimal data. If the integers are large,
we may lose precision, and give wrong results.
Fix: for
<non-const decimal expression> <cmp> <const string expression>
or
<const string expression> <cmp> <non-const decimal expression>
we override the compare_type chosen by item_cmp_type(), and
do comparison as decimal rather than float.
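A sketch of the affected comparison (names and values are illustrative):

  CREATE TABLE t1 (a DECIMAL(25,0));
  INSERT INTO t1 VALUES (1111111111111111111111111);
  SELECT * FROM t1 WHERE a='1111111111111111111111112';

Compared as floats, both sides round to the same double and the row is incorrectly returned; compared as decimals, the predicate is correctly false.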
(cherry picked from commit 1cf3489ba4 and edited by Johannes Weißl <jargon@molb.org>)
Applied a fix that was lost in merge revision 7f38a07:
MDEV-10043 - main.events_restart fails sporadically in buildbot (crashes upon
shutdown)
There was a race condition between the shutdown thread and event worker threads.
The shutdown thread waits for thread_count to become 0 in close_connections(). It
may happen that an event worker thread was started but hadn't incremented
thread_count by this time. In this case the shutdown thread may miss waiting for
this worker thread and continue deinitialization. The worker thread in turn may
continue execution and crash on deinitialized data.
Fixed by incrementing thread_count before the thread is actually created, like it
is done for connection threads.
Also let the event scheduler not inc/dec the running threads counter, for symmetry
with other "service" threads.
Commit 7450cb7f6 caused gcc-6.3.1 warnings:
mariadb-server/sql/sql_base.cc: In function ‘bool fix_all_session_vcol_exprs(THD*, TABLE_LIST*)’:
mariadb-server/sql/sql_base.cc:4821:7: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation]
for (Field **df= t->default_field; df && *df; df++)
^~~
mariadb-server/sql/sql_base.cc:4826:9: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘for’
for (Virtual_column_info **cc= t->check_constraints; cc && *cc; cc++)
^~~
It was obvious from 7450cb7f6 that the indenting should have been removed.
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
The problem was that waiting for pause_for_ftwrl was done before
event_group was completed. This caused rpl_pause_for_ftwrl() to wait
forever during FLUSH TABLES WITH READ LOCK.
Now we only wait for FLUSH TABLES WITH READ LOCK when we are changing
to a new event group.
Protection added to reopen_file() and new_file_impl().
Without this we could get an assert in fn_format() as name == 0,
because the file was closed and name reset while, at the same time,
new_file_impl() was called.
is starting.
This is needed because if we kill the START SLAVE thread too early during
shutdown, the IO_THREAD or SQL_THREAD will not have time to properly
initialize its replication or THD structures, and clean_up() will try to
delete master_info structures that are still in use.
The reason for this is that STOP SLAVE takes LOCK_active_mi over the
whole operation, while some slave operations also need LOCK_active_mi,
which causes deadlocks.
Fixed by introducing object counting for Master_info and not taking
LOCK_active_mi over STOP SLAVE or even stop_all_slaves().
Another benefit of this approach is that:
- Multiple threads can run SHOW SLAVE STATUS at the same time
- START/STOP/RESET SLAVE on a slave will not block other slaves
- The interface for handling get_master_info() is simpler
- Added some missing unlocks of 'log_lock' in error conditions
- Moved rpl_parallel_inactivate_pool(&global_rpl_thread_pool) to the end
of stop_slave() to not have to use LOCK_active_mi inside
terminate_slave_threads()
- Changed the argument of remove_master_info() to Master_info, as we always
have this available
- Fixed a core dump when doing FLUSH TABLES WITH READ LOCK and parallel
replication. The problem was that waiting for pause_for_ftwrl was not done
when deleting rpt->current_owner after a force_abort.
The function mysql_derived_merge() erroneously did not mark newly formed
AND formulas in ON conditions with the flag abort_on_null. As a result
not_null_tables() was calculated incorrectly for these conditions. This
could prevent conversion of embedded outer joins into inner joins.
Changed a test case from table_elim.test to preserve the former execution
plan.
use update_hostname() to update the hostname.
test case comes from
commit 0abdeed1d6d
Author: gopal.shankar@oracle.com <>
Date: Thu Mar 29 00:20:54 2012 +0530
Bug#12766319 - 61865: RENAME USER DOES NOT WORK CORRECTLY -
REQUIRES FLUSH PRIVILEGES
FUNCTIONS/PRIVILEGES DIFFERENTLY'.
The problem was that an attempt to grant EXECUTE or ALTER
ROUTINE privilege on a stored procedure which didn't exist
succeeded instead of returning an appropriate error, like
it happens in a similar situation for stored functions or
tables.
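A hypothetical statement of the affected kind (names are illustrative):

  GRANT EXECUTE ON PROCEDURE db1.no_such_proc TO 'u'@'localhost';

Before the fix this succeeded; now it fails with an error, like the equivalent grant on a non-existent stored function already did.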
The code which handles granting of privileges on an individual
routine calls the sp_exist_routines() function to check if the routine
exists, and assumes that the 3rd parameter of the latter
specifies whether it should check for existence of a stored
procedure or a function. In practice, this parameter had a
completely different meaning and, as a result, this check was
not done properly for stored procedures.
This fix addresses this problem by bringing the sp_exist_routines()
signature and code in line with the expectations of its caller.
it was race condition prone. Instead, use either a pair of my_delete()
calls with already resolved paths, or the safe high-level function
my_handler_delete_with_symlink(), like MyISAM and Aria already do.
TOCTOU bug. The path is checked to be valid, symlinks are resolved.
Then the resolved path is opened. Between the check and the open,
there's a window when one can replace some path component with a
symlink, bypassing validity checks.
Fix: after we resolved all symlinks in the path, don't allow open()
to resolve symlinks, there should be none.
Compared to the old MyISAM/Aria code:
* fastpath: opening of not-symlinked files is just one open(),
no fn_format() and lstat() anymore.
* opening of symlinked tables doesn't do fn_format() and lstat() either.
It also doesn't do realpath() (which was lstat-ing every path
component); instead it opens every path component with O_PATH.
* share->data_file_name stores realpath(path), not readlink(path). So
SHOW CREATE TABLE needs to do lstat/readlink() now (see ::info()),
and certain error messages (cannot open file "XXX") show the real
file path with all symlinks resolved.
While DEBUG_SYNC doesn't make sense without a valid THD, it might
be put in code that's invoked both within and outside of a
THD context (e.g. in maria_open(), there is no THD during recovery).
Workaround for join_cache + index on vcols + keyread bug.
Initialize the record to avoid caching garbage in non-read fields.
A proper fix (do not cache non-read fields at all) is done in 10.2
in commits 5d7607f340f..8d99166c697
The 'Not exists' optimization can be used for nested outer joins
only if the IS NULL predicate from the WHERE condition is activated.
So we have to check that all guards that wrap this predicate
are in the 'open' state.
This patch supports usage of the 'Not exists' optimization for any
outer join, no matter how it's nested in other outer joins.
This patch is also considered a proper fix for bugs
#49322/#58490 and LP #817360.
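A sketch of a query shape that can now use the optimization (names are illustrative):

  SELECT t1.a
  FROM t1 LEFT JOIN (t2 LEFT JOIN t3 ON t2.b=t3.b) ON t1.a=t2.a
  WHERE t3.b IS NULL;

The IS NULL predicate here is guarded, so the optimization is applied only when the guards wrapping it are 'open'.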
Problem: CREATE TABLE using a fully qualified name with the INDEX DIR/DATA DIR
options reports an error when no current database is set.
check_access() was incorrectly called with NULL as the database
argument in a situation where the database name was not needed for
the particular privilege being checked. This will cause the current
database to be used, or an error to be reported if there is no current
database.
Fix: Call check_access() with any_db as the database argument in this situation.
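A minimal sketch (path and names are illustrative):

  -- no current database selected
  CREATE TABLE db1.t1 (a INT) DATA DIRECTORY='/tmp/db1_data';

Even though the table name is fully qualified, this failed before the fix when no current database was set.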
This patch is actually a complement for the fix of bug mdev-6892.
The procedure create_tmp_table() now must take into account
Item_direct_refs that wrap up constant fields of derived tables/views
that are used as inner tables in outer join operations.
The issue was that JOIN::rollup_write_data() used
JOIN::tmp_table_param::[start_]recinfo, which had uninitialized data.
These fields have uninitialized data, because JOIN::tmp_table_param
currently only stores some grouping-related data fields. The data about
the work (temporary) tables themselves is stored in
join->join_tab[...].tmp_table_param.
The fix is to make JOIN::rollup_write_data follow this convention
and look at the right TMP_TABLE_PARAM object.
The bug was not visible in the current HEAD. Introduced a test case to catch
regressions. Also improved error messages regarding DISTINCT usage in
window functions.
The bug was caused by several issues.
There were 2 problems in seek_io_cache(). Due to wrong offsets used, we would
end up seeking way too much (first change), or over the intended seek point
(second change). Fixing it requires correctly detecting the available data
in the buffer (first change), and not using "IO_SIZE aligned" reads. The
second is needed because _my_b_cache_read adjusts pos_in_file itself
based on read_pos and read_end. Pretending the buffer is empty when we want
to force a read alleviates this problem.
Secondly, the big-table cursors didn't respect the interface definition
of always returning the row number that Table_read_cursor::fetch() would activate.
At the same time, next(), prev() and move_to() should not perform any
row activation.
Window functions need to be computed after applying the HAVING clause.
An optimization that we have for regular, non-window-function cases is
to apply HAVING only while sending the rows to the client. This
allows rows that should be filtered out to remain in the temporary
table used to store aggregation results.
This behaviour is undesirable for window functions, as we have to
compute window functions on the result set after HAVING is applied.
Storing extra rows in the table leads to wrong values, as the frame
bounds might capture those (to be filtered afterwards) rows.
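A hedged example of the affected shape (names are illustrative):

  SELECT a, SUM(b), ROW_NUMBER() OVER (ORDER BY a)
  FROM t1
  GROUP BY a
  HAVING SUM(b) > 10;

If groups failing the HAVING clause remain in the temporary table, the window function sees them too and the returned values are wrong.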
options --log-slow-admin-statements, --log-queries-not-using-indexes and --log-slow-slave-statements have no effect if --log_slow_queries is not set
1. s/--log_slow_queries/--log-slow-queries/
2. disable log-slow-admin-statements/etc in mtr when doing mysqld --help
PART 2 of the fix adds the logic of not using the password column unless it
exists. If the password column is missing, we attempt to use the plugin and
authentication_string columns.
PART 1 of the fix requires a bit of refactoring to not use hard-coded
field indices any more. Create classes that express the grant tables' structure,
without exposing the underlying field indices.
Most of the code is converted to use these classes, except parts which
are not directly affected by MDEV-11170. These, however, are TODO
items for subsequent refactoring.
The same approach is needed for LAST_VALUE, otherwise the LAST_VALUE sum
functions are not cleared correctly. Now LAST_VALUE behaves as NTH_VALUE
with 0 offset, except that the frame it is examining is the bottom bound,
not the top bound.
The problematic queries involve unions. For unions we have an
optimization where we skip the ORDER BY clause in a query from one side
of the union if it will be performed later due to UNION.
EX:
(SELECT a from t1 ORDER BY a) ORDER BY b;
The first ordering by a is not necessary and it gets removed.
The problem is that we still need to resolve the Items before removing the
ORDER BY list from the SELECT_LEX structure. During this final resolve step,
however, we forgot to allow SET functions within the ORDER BY clause. This
caused us to return an "Invalid use of group function" error during the
checking performed by fix_fields in Item_sum::init_sum_func_check.
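A sketch of a query shape that hit the error (names are illustrative):

  (SELECT a FROM t1 GROUP BY a ORDER BY SUM(b))
  UNION
  (SELECT a FROM t2);

Resolving SUM(b) in the to-be-removed ORDER BY failed with "Invalid use of group function".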
Change the parser to not allow SERIAL as a normal data type.
Make a special rule for it, where it can be used to define
fields, but not generated fields, nor the return type of a stored function, etc.
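A hedged illustration of the intended rule (names are illustrative):

  CREATE TABLE t1 (id SERIAL);                  -- still accepted
  CREATE FUNCTION f1() RETURNS SERIAL RETURN 1; -- now a parse error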
JOIN_CACHEs were initialized in check_join_cache_usage()
from make_join_readinfo(). After that, make_join_readinfo() was checking
whether it's possible to use keyread. Later, after make_join_readinfo(),
the optimizer decided whether to use filesort. And even later, at
execution time, from join_read_first(), keyread was actually enabled.
The problem is that if a query uses a vcol, the base columns that it
depends on are automatically added to the read_set - because they're
needed to calculate the vcol. But if we're doing keyread, the vcol is taken
from the index, not calculated, and the base columns do not need to be
in the read set (they even should not be - as they aren't getting values).
The bug was that JOIN_CACHE used a read_set with base columns,
which were not read because of keyread, so it was caching garbage.
So the read_set is only known after keyread is decided. And after
filesort is decided, as filesort doesn't use keyread. But
check_join_cache_usage() needs to be done in make_join_readinfo(),
as the code below depends on these checks.
Fix: keep JOIN_CACHE checks where they were, but move initialization
down to the very end of JOIN::optimize_inner. If keyread was enabled,
update the read_set to include only columns that are part of the index.
Copy the keyread logic from join_read_first() to happen at optimize time.
Make init_read_record() detect enabled keyread
and use index_* access methods, not rnd_*.
This makes MariaDB use keyread a lot more often than before.
* rename to "keyread" (to avoid conflicts with tokudb),
* change from bool to uint and store the keyread index number there
* provide a bool accessor to check if keyread is enabled
Filesort temporarily changes read_set to be tmp_set and marks only
fields needed for filesort. Add an assert to ensure that it doesn't
overwrite the old value of tmp_set, that is that read_set was *not*
already tmp_set when filesort was invoked.
Fix sql_update.cc that was doing exactly that - changing read_set to
tmp_set, configuring tmp_set for keyread, and then invoking filesort.
mark_columns_used_by_index used to do
reset + mark_columns_used_by_index_no_reset + start keyread + set bitmaps
Now prepare_for_keyread does that, while mark_columns_used_by_index
does only reset + mark_columns_used_by_index_no_reset,
just as its name suggests.
The old code didn't calculate vcols that were part of keyread,
but calculated other vcols. That was wrong - there was no guarantee
that the vcol's base columns were part of keyread.
Technically it's possible for a vcol not to be part of keyread,
while all its base columns are part of keyread. But currently the
optimizer doesn't do that; keyread is only used if it covers
all columns used in the query.
This fixes crashes of vcol.vcol_trigger_sp_innodb.
TABLE::add_read_columns_used_by_index() is conceptually wrong,
it *adds* columns used by index to the bitmap, without clearing
it first. But it also enables keyread, meaning that *only* columns
from the index will be read. It is supposed to be used to
add columns used by an index to a bitmap that already has columns
of a primary key - for engines where a primary key is part of every
index.
The correct fix is to change mark_columns_used_by_index() to
take into account extended keys.
This reverts commits 1d0acc7754 and cf97cbd1db.
Move TABLE::key_read into handler, because in index merge and DS-MRR
there can be many handlers per table, and some of them use
keyread while others don't. "keyread" is really a per-handler,
not a per-TABLE property.
Optionally do table->update_default_fields() even for an INSERT
that supposedly provides values for all columns, because these
"values" might be DEFAULT, which would need table->update_default_fields()
at the end.
Also set Item_default_value::used_tables() from the default expression.
A non-zero used_field() means that mysql_insert() will initialize all
fields to their default values (with restore_record()) even if
all columns are later provided with values, because default expressions
may refer to other columns and they must be initialized.
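A sketch of why this matters (hypothetical table, assuming a DEFAULT expression that refers to another column):

  CREATE TABLE t1 (a INT DEFAULT 1, b INT DEFAULT (a+1));
  INSERT INTO t1 (a,b) VALUES (10, DEFAULT);

All columns are given "values", but evaluating DEFAULT for b requires a to be initialized first, so update_default_fields() must still run.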
These are different bugs, but the fixing code is the same:
if window functions are used over implicit grouping, then
the execution should now follow the general path calling
the function set in JOIN::first_select.
Due to this bug many queries that contained a window function
with MIN/MAX aggregation returned wrong results.
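A hypothetical query of the affected kind (names are illustrative):

  SELECT a, b, MIN(a) OVER (PARTITION BY b) FROM t1;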
Calculation of a MIN/MAX aggregate function uses cache objects
and a comparator object that are created and set up in
Item_sum_hybrid::fix_fields() by a call of Item_sum_hybrid::setup_hybrid().
The latter binds the objects to the first argument of the
MIN/MAX function. Meanwhile window functions perform aggregation
over fields of a temporary table, so binding must rather be done to
these fields. The earliest moment when the objects used in
MIN/MAX functions can be set up is after all calls of the method
split_sum_func().
This patch introduces this late setup, but only for aggregate
functions used in window functions.
Probably it makes sense to use this late setup for all MIN/MAX
objects.
in Item_partition_func_safe_string(THD *thd, const char *name_arg,
uint length, CHARSET_INFO *cs= NULL), the 'name_arg' is the value
of the string constant and 'length' is the length of this constant,
so length == strlen(name_arg).
A proper InnoDB shutdown after aborted startup was introduced
in commit 81b7fe9d38.
Also related to this is MDEV-11985, making read-only shutdown more robust.
If startup was aborted, there may exist recovered transactions that were
not rolled back. Relax the assertions accordingly.
This patch complements the patch for bug 11138.
Without this patch some table-less queries with window functions
could cause crashes due to a memory overwrite.
The method Item_sum::print did not print an opening '(' after the name
of simple window functions (like rank, dense_rank, etc.).
As a result the view definitions with such window functions
were written to .frm files in an invalid form.
Using window functions over results of implicit groupings
required special handling in JOIN::make_aggr_tables_info.
The patch made sure that the result of implicit grouping
was written into a temporary table properly.
If a window function with aggregation is over the result
set of a grouping query then the argument of the aggregate
function from the window function is allowed to be an
aggregate function itself.
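For instance, a shape this permits (names are illustrative):

  SELECT a, SUM(SUM(b)) OVER (ORDER BY a)
  FROM t1
  GROUP BY a;

The inner SUM(b) is computed by the grouping; the window function then aggregates those group totals.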
This bug happened due to a conflict in the construct window_spec
(win_ref conflicted with the non-reserved keyword ROWS).
The standard SQL-2003 says that ROWS is a reserved keyword.
Made this keyword reserved in our grammar and removed
the conflict.
Adding the following keywords into the "keyword" rules:
- EXCLUDE
- UNBOUNDED
- PRECEDING
- FOLLOWING
- TIES
- OTHERS
They are non-reserved words in the SQL standard (checked in an SQL-2011 draft),
and they don't cause any conflicts in sql_yacc.yy.
.. wsrep_max_ws_rows causes cluster to break when running
Galera cluster in TOI mode
Problem:
While copying records to a temporary table during ALTER TABLE,
if there are more than wsrep_max_ws_rows records, the
command fails.
Fix:
Since the temporary table records are not placed into the
binary log, wsrep_affected_rows must not be incremented.
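A hypothetical statement of the affected kind:

  ALTER TABLE t1 ADD COLUMN c INT;

This failed when t1 contained more rows than wsrep_max_ws_rows, because the row-copying during ALTER counted toward the limit.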
Added a test.
that must not send a response
Problem:- When using wsrep (w/ galera) and issuing commands that can
cause deadlocks, deadlock exception errors are sent in responses to
commands such as close prepared statement and close connection which,
by spec, must not send a response.
Solution:- In dispatch_command, we will handle COM_QUIT and COM_STMT_CLOSE
commands even in case of error.
Patch Credit:- Jaka Močnik
This is a backport of WL#7195 to MySQL-5.5. In 5.5, we
offload connection authentication from the acceptor
thread to tp worker threads.
Connection authentication happens in the acceptor thread that
accepts the connection for the thread pool plugin. Connection authentication
involves exchanging packets with the client and disk I/O,
which is time consuming. This can cause other client
connections to starve and wait in the queue, possibly increasing the
connect latency and decreasing throughput. In the worst case, some
connections could be dropped. In addition, SSL handshakes are quite
expensive and can stall connections in the accept queue.
This patch offloads connection authentication when the thread pool
plugin is used for client connections. Each thread group
has a queue of connection_context objects, which represent
new connections that need to be processed by thread group threads.
The connection context is composed of a THD object and list pointers
for an intrusive queue implementation. Whenever a new connection
arrives, a connection context object is created and added to the
queue. A new connect handler thread is created or woken up to handle
the authentication task. The worker thread loop is modified to
process connection events on connect handler threads, in addition to
checking for query processing events. The initial number of connect
handler threads is one per thread group, and it is restricted to
a maximum of 4 threads per thread group.
The temporary tables created for recursive table references
should be closed in close_thread_tables(), because they might
be used in statements like ANALYZE WITH r AS (...) SELECT * from r
where r is defined through recursion.
The fields st_select_lex::cond_pushed_into_where and
st_select_lex::cond_pushed_into_having should be re-initialized
for the unit specifying a derived table at every re-execution
of the query that uses this derived table, because the result
of condition pushdown may be different for different executions.
- Put back the assert on the SQL layer at the right location
- Adjust rdb_pack_with_make_sort_key to work around the assert (like
it is done in other places): MyRocks may need to pack a column
value even when the column is not in the read set.