In contrast to thread_count, which is decremented by the THD destructor,
this one was most probably intended to be decremented after all THD
destructors are done.
The THD_count class was added to achieve a similar effect with thread_count.
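A minimal sketch of the idea (simplified with std::atomic; the server
uses its own primitives):

  #include <atomic>

  static std::atomic<int> thd_counter{0};

  // Scope guard: the counter follows THD lifetime exactly, without
  // taking LOCK_thread_count.
  class THD_count
  {
  public:
    THD_count()  { thd_counter.fetch_add(1, std::memory_order_relaxed); }
    ~THD_count() { thd_counter.fetch_sub(1, std::memory_order_relaxed); }
  };

  // Declared as the first member of THD, it is destroyed last, so the
  // counter drops only after the rest of the THD destructor has run.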
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
Implemented and integrated THD_list as a replacement for the global
thread list. It uses its own mutex instead of LOCK_thread_count to
protect the THD list.
Removed unused first_global_thread() and next_global_thread().
delayed_insert_threads is now protected by LOCK_delayed_insert, although
this patch doesn't fix the badly broken synchronization of this variable.
After this patch there are only 2 legitimate uses of LOCK_thread_count
left, both in mysqld.cc: thread_count and ready_to_exit.
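A conceptual sketch of THD_list (simplified with std::mutex and
std::list; the server uses its native primitives):

  #include <list>
  #include <mutex>

  class THD;

  class THD_list
  {
    std::list<THD*> threads;
    mutable std::mutex mtx;        // replaces LOCK_thread_count here
  public:
    void insert(THD *thd)
    {
      std::lock_guard<std::mutex> lock(mtx);
      threads.push_back(thd);
    }
    void erase(THD *thd)
    {
      std::lock_guard<std::mutex> lock(mtx);
      threads.remove(thd);
    }
    // Iteration replaces first_global_thread()/next_global_thread():
    // the whole walk happens under the list mutex.
    template <typename F> void iterate(F &&f) const
    {
      std::lock_guard<std::mutex> lock(mtx);
      for (THD *thd : threads)
        f(thd);
    }
  };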
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
LOG_INFO::lock was useless. It could only have protected against
concurrent iterator execution, which was already protected by
LOCK_thread_count.
Use LOCK_thd_data instead of LOCK_thread_count as protection against
THD::current_linfo reset.
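A self-contained model of the new protection (names mirror the commit
message, but this is a toy, not the server code):

  #include <mutex>

  struct LOG_INFO { /* binlog position info */ };

  struct THD_model
  {
    std::mutex LOCK_thd_data;              // per-connection lock
    LOG_INFO *current_linfo= nullptr;

    void reset_current_linfo()
    {
      std::lock_guard<std::mutex> g(LOCK_thd_data);
      current_linfo= nullptr;              // readers take LOCK_thd_data too
    }
  };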
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Changed the check of Global_only_lock to also include the BACKUP lock.
- We store the latest MDL_BACKUP_DDL lock in thd->mdl_backup_ticket to be
  able to downgrade the lock during copy_data_between_tables(), as sketched
  below.
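A sketch of the intended downgrade path; the target lock type here is
illustrative, not necessarily the one the final patch uses:

  // copy_data_between_tables() relaxes the DDL-blocking backup lock
  // using the ticket remembered in thd->mdl_backup_ticket.
  // MDL_BACKUP_ALTER_COPY is an assumed/illustrative target type.
  if (thd->mdl_backup_ticket)
    thd->mdl_backup_ticket->downgrade_lock(MDL_BACKUP_ALTER_COPY);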
Changing the way a cursor is opened when only its structure needs to be
fetched, e.g. for a cursor FOR loop record variable.
The old method of setting thd->lex->limit_rows_examined to an Item_uint(0)
was not reliable and could push messages like this into the diagnostics area:
The query examined at least 1 rows, which exceeds LIMIT ROWS EXAMINED (0)
The new method should be more reliable, as it completely prevents the call
of do_select() in JOIN::exec_inner() during the cursor structure discovery,
so the execution of the cursor SELECT query returns immediately after the
preparation step (when the result row structure becomes known),
without even entering the code that fetches the result rows.
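A conceptual model of the difference (not the actual server functions):

  // The old way executed the query under LIMIT ROWS EXAMINED 0 and
  // relied on it producing no rows; the new way stops right after the
  // preparation step.
  struct CursorQuery
  {
    bool prepare() { /* resolve columns; row structure known here */ return false; }
    bool execute() { /* do_select(): fetches rows, may add warnings */ return false; }

    // New method: structure discovery never reaches execute(), so the
    // "exceeds LIMIT ROWS EXAMINED" message can no longer appear.
    bool open_for_structure_only() { return prepare(); }

    bool open_and_fetch() { return prepare() || execute(); }
  };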
truncating a temporary table
TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
itself) to be open. However, this requirement wasn't enforced after
"MDEV-5535: Cannot reopen temporary table".
Fixed by closing unused table instances before performing TRUNCATE.
This problem manifested itself when a join query used two or more
materialized CTEs such that each of them employed the same recursive CTE.
The bug caused a crash. The crash happened because the cleanup()
function was performed prematurely for the recursive CTE. This cleanup
was induced by the cleanup of the first CTE referencing the recursive CTE.
It destroyed the structures that allowed reading from the temporary table
containing the rows of the recursive CTE, and an attempt to read these
rows for the second CTE referencing the recursive CTE triggered a crash.
The cleanup of a recursive CTE R should be performed after the cleanup
of the last materialized CTE that uses R.
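The ordering rule can be modeled with a simple reference count (a sketch
of the idea, not the actual optimizer code):

  // A recursive CTE R is cleaned up only when the last materialized
  // CTE that reads from R has been cleaned up itself.
  struct RecursiveCTE
  {
    int readers= 0;           // materialized CTEs still using R
    bool cleaned= false;

    void release()            // called from each referencing CTE's cleanup
    {
      if (--readers == 0 && !cleaned)
      {
        cleaned= true;
        /* now it is safe to free the temp table read structures */
      }
    }
  };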
- Changed ERROR to WARNING for MyISAM/Aria messages
  that are warnings in the check utilities.
  This affects, for example, "client is using or
  hasn't closed the table properly".
- Print "Table is fixed" if the check succeeded in
  fixing the table.
a table value constructor shows wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
a specification of the subselect of an ALL/ANY subquery. In this case
the result of the derived table was sent to a sink of the class
select_subselect rather than of the class select_unit. Thus the previous
fix could cause memory overwrites when running EXPLAIN for queries with
table value constructors in ALL/ANY subselects.
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
good estimates, but they are pretty expensive. To estimate the number of rows
in equality ranges, statistical data on indexes can be employed instead. This
gives less accurate estimates, but it's cheap. So if the number of equality
dives required by an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number
of index dives performed by the optimizer.
The patch partially uses the MySQL code for WL#5957
'Statistics-based Range optimization for many ranges'.
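The per-index decision can be sketched as follows (simplified; assuming 0
is the special "no limit" value):

  // Dives are used only while the number of equality ranges stays
  // under the limit; otherwise index statistics are used instead.
  bool use_index_dives(unsigned eq_ranges, unsigned dive_limit)
  {
    if (dive_limit == 0)
      return true;                 // no limit: always dive
    return eq_ranges < dive_limit; // too many ranges: use statistics
  }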
If we have a cluster of two or more nodes replicating from an async master,
the binlog_format is set to STATEMENT, and multi-row inserts are executed
on a table with an auto_increment column such that values are automatically
generated by MySQL, then the node generates wrong auto_increment
values, which differ from those generated on the async master.
The causes and fixes:
1. We need to improve the handling of auto-increment values
after the cluster size changes.
2. If wsrep_auto_increment_control is switched on during operation of
the node, then we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on during operation of the node, which leads to inconsistent
results on different nodes in some scenarios.
3. If wsrep_auto_increment_control is switched off during operation of the
node, then we must restore the original values of auto_increment_increment
and auto_increment_offset, as set by the user. To make this possible, we
need to add "shadow copies" of these variables, which store the latest
values set by the user (sketched below).
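A simplified model of fixes #2 and #3 (not the actual wsrep code):

  // Shadow copies remember user intent so the effective values can be
  // switched between wsrep control and user control at any time.
  struct AutoIncSettings
  {
    unsigned long increment= 1, offset= 1;             // effective values
    unsigned long saved_increment= 1, saved_offset= 1; // shadow copies

    void user_set(unsigned long inc, unsigned long off, bool wsrep_control)
    {
      saved_increment= inc;                   // always track user intent
      saved_offset= off;
      if (!wsrep_control) { increment= inc; offset= off; }
    }

    void wsrep_control_changed(bool on, unsigned long cluster_size,
                               unsigned long node_index)
    {
      if (on)                                 // fix #2: apply immediately
      { increment= cluster_size; offset= node_index + 1; }
      else                                    // fix #3: restore user values
      { increment= saved_increment; offset= saved_offset; }
    }
  };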
- Adding a helper class Sec6 to store (neg, seconds, microseconds)
- Adding a helper class VSec6 (Sec6 with a flag for "IS NULL").
  Both are sketched after this list.
- Wrapping related functions as methods of Sec6:
* number_to_datetime()
* number_to_time()
* my_decimal2seconds()
* Item::get_seconds()
* A big piece of code in Item_func_sec_to_time::get_date()
- Using the new classes in places where second-to-temporal
conversion takes place:
* Field_timestamp::store(double)
* Field_timestamp::store(longlong)
* Field_timestamp_with_dec::store_decimal(my_decimal)
* Field_temporal_with_date::store(double)
* Field_temporal_with_date::store(longlong)
* Field_time::store(double)
* Field_time::store(longlong)
* Field_time::store_decimal(my_decimal)
* Field_temporal_with_date::store_decimal(my_decimal)
* get_interval_value()
* Item_func_sec_to_time::get_date()
* Item_func_from_unixtime::get_date()
* Item_func_maketime::get_date()
This change simplifies these methods and functions a lot.
- Warnings are now sent at VSec6 initialization time, when the source
data is available in its original data type representation.
If Sec6::to_time() or Sec6::to_datetime() truncate data again during
conversion to MYSQL_TIME, they send warnings, but only if no warnings
were sent during VSec6 initialization. This helps prevent double warnings.
The call to val_str() in Item_func_sec_to_time::get_date() is no longer
needed, so it's removed. This change actually fixes the problem.
As a side effect, FROM_UNIXTIME() and MAKETIME() now also send warnings
when the seconds argument is out of range. Previously these
functions returned NULL silently.
- Splitting the code in the global function make_truncated_value_warning()
into a number of methods THD::raise_warning_xxxx().
This was needed to reuse the logic that chooses between:
* ER_TRUNCATED_WRONG_VALUE
* ER_WRONG_VALUE
* ER_TRUNCATED_WRONG_VALUE_FOR_FIELD
for non-temporal data types (Sec6).
- Removing:
* Item::get_seconds()
* number_to_time_with_warn()
as this code now resides inside methods of Sec6.
- Cleanup (changes that are not directly related to the fix):
* Removing calls to field_name_or_null() and passing NULL instead
in Item_func_hybrid_field_type::get_date_from_{int|real}_op,
because Item_func_hybrid_field_type::field_name_or_null()
always returns NULL
* Replacing a number of calls to make_truncated_value_warning()
  with calls to THD::raise_warning_xxx(). In these places
  we know that the execution went through a certain
  branch of make_truncated_value_warning()
  (e.g. the exact error code is known, or the field name is always NULL,
  or the field name is always not NULL), so calling the entire
  make_truncated_value_warning() after the split is not necessary.
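The shape of the new helpers, sketched (signatures simplified; not the
exact server declarations):

  #include <cstdint>

  // Sec6 holds a signed value split into whole seconds and microseconds.
  class Sec6
  {
  protected:
    bool m_neg;          // sign of the original value
    uint64_t m_sec;      // whole seconds
    uint64_t m_usec;     // microseconds, 0..999999
  public:
    // These wrap what used to be number_to_time(), number_to_datetime()
    // etc., and send truncation warnings only if none were sent when
    // the value was collected.
    bool to_time(/* MYSQL_TIME *to, ... */) const;
    bool to_datetime(/* MYSQL_TIME *to, ... */) const;
  };

  // VSec6 adds the "IS NULL" flag on top of Sec6.
  class VSec6: public Sec6
  {
    bool m_is_null;
  public:
    bool is_null() const { return m_is_null; }
  };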
The previous correction of the patch for MDEV-16473 did not work
correctly for databases whose names start with '*'.
Added a test case with a database named "*".
Problem:
push_cursor() created sp_cursor instances on THD::main_mem_root,
which is freed only after the SP instruction loop finishes.
Changes:
- Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
- Deriving sp_instr_cpush from sp_cursor, so now sp_cursor is created
  only once (at SP parse time) and then reused on all loop iterations
  (see the sketch after this list)
- Adding a new method reset() into sp_cursor (and its parent classes)
to reset an sp_cursor instance before reuse.
- Moving former sp_cursor members m_fetch_count, m_row_count, m_found
into a separate class sp_cursor_statistics. This helps to reuse
the code in sp_cursor constructors, and in sp_cursor::reset()
- Adding a helper method sp_rcontext::pop_cursor().
- Adding "THD*" parameter to so_rcontext::pop_cursors() and pop_all_cursors()
- Removing "new" and "delete" from sp_rcontext::push_cursor() and
sp_rconext::pop_cursor().
- Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
only as a part of sp_instr_cpush (and not allocated separately).
- Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
to sp_instr_cpush::execute().
- Adding tests
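The reuse pattern in sketch form (a simplified model of the relationship
between the classes; not the actual definitions):

  // Statistics live in their own base class so that constructors and
  // reset() can share the code.
  class sp_cursor_statistics
  {
  protected:
    unsigned long m_fetch_count= 0, m_row_count= 0;
    bool m_found= false;
  public:
    void reset() { m_fetch_count= m_row_count= 0; m_found= false; }
  };

  class sp_cursor: public sp_cursor_statistics
  {
  public:
    void reset() { sp_cursor_statistics::reset(); /* close cursor */ }
  };

  // The instruction *is* the cursor now: created once at parse time,
  // no new/delete per loop iteration, just reset() before reuse.
  class sp_instr_cpush: public sp_cursor
  {
  public:
    void execute() { reset(); /* disable query cache, push cursor */ }
  };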
Before this patch, if no default database was set, the server threw
an error for any table name reference that was not fully qualified with a
database name. In particular this happened for table names referencing
CTE tables. This was incorrect.
The error message was thrown at the parser stage, when the names referencing
different tables had not been resolved yet.
Now, if no default database is set and a WITH clause is used in the
processed statement, any table reference is just supplied with a dummy
database name "*none*" at the parser stage. Later, after a call
to check_dependencies_in_with_clauses(), when the names of CTE tables
can be resolved, error messages are thrown only for those names that
refer to non-CTE tables. This is done in open_and_process_table().
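In sketch form (simplified control flow; not the actual parser code):

  #include <cstring>

  // Parse stage: unqualified names under a WITH clause get a placeholder.
  static const char *PLACEHOLDER_DB= "*none*";

  // open_and_process_table() stage: the placeholder is an error only
  // when the name did not resolve to a CTE.
  bool table_db_is_missing(const char *db, bool resolved_as_cte)
  {
    if (std::strcmp(db, PLACEHOLDER_DB) == 0 && !resolved_as_cte)
      return true;                 // raise ER_NO_DB_ERROR here
    return false;                  // CTE reference or real database
  }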