Read the version of the view share when we read the view definition,
to prevent simultaneous access to a view table SHARE (and thus to its
MEM_ROOT) from different threads.
This is a new version of the patch, replacing the reverted:
MDEV-28727 ALTER TABLE ALGORITHM=NOCOPY does not work after upgrade
Ignore the difference in key packing flags HA_BINARY_PACK_KEY and HA_PACK_KEY
during ALTER to allow ALGORITHM=INSTANT and ALGORITHM=NOCOPY in more cases.
If for some reason (e.g. due to a bug fix such as MDEV-20704) these
cumulative (over all segments) flags in KEY::flags differ between the
old and new table inside compare_keys_but_name(), the difference in
HA_BINARY_PACK_KEY and HA_PACK_KEY in KEY::flags is not really
important: MyISAM and Aria handle such cases well. Per-segment flags
are stored in the MYI and MAI files anyway, and they are read at
ha_myisam::open() / ha_maria::open() time. So indexes get opened with
the correct per-segment flags that were calculated at table CREATE
time, no matter what the old (CREATE time) and new (ALTER time)
per-index compression flags are, and no matter whether they are equal
or not.
All other engines ignore key compression flags, so this change is safe
for them as well.
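A minimal self-contained sketch of the relaxed comparison (the flag
values below are stand-ins; the real HA_PACK_KEY/HA_BINARY_PACK_KEY
constants live in include/my_base.h, and the real comparison is in
compare_keys_but_name()):

  #include <cstdint>

  // Stand-in flag values for illustration only.
  constexpr uint32_t HA_PACK_KEY        = 1U << 1;
  constexpr uint32_t HA_BINARY_PACK_KEY = 1U << 4;

  // Compare cumulative key flags while ignoring the two packing
  // flags: the engine re-reads per-segment packing from the MYI/MAI
  // file at open() time anyway.
  bool key_flags_compatible(uint32_t old_flags, uint32_t new_flags)
  {
    constexpr uint32_t ignored = HA_PACK_KEY | HA_BINARY_PACK_KEY;
    return (old_flags & ~ignored) == (new_flags & ~ignored);
  }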
On Linux this pthread_attr_setstacksize() fails with EINVAL:
"The stack size is less than PTHREAD_STACK_MIN (16384) bytes".
But on FreeBSD it succeeds and causes a crash later, as 8196 bytes is
too little. Let's keep the stack at its default size in the timer
thread.
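A tiny standalone illustration of the platform difference (POSIX;
this assumes nothing about the server's timer code):

  #include <pthread.h>
  #include <limits.h>
  #include <stdio.h>

  int main()
  {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Linux: fails with EINVAL since 8196 < PTHREAD_STACK_MIN (16384).
    // FreeBSD: succeeds, and the thread can crash later for lack of
    // stack. Not calling pthread_attr_setstacksize() at all keeps the
    // safe default.
    if (pthread_attr_setstacksize(&attr, 8196) != 0)
      fprintf(stderr, "rejected, keeping the default stack size\n");
    pthread_attr_destroy(&attr);
    return 0;
  }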
OpenSSL handles memory management using the **OPENSSL_xxx** API[^1].
For allocation, there is `OPENSSL_malloc`. To free it, `OPENSSL_free`
should be called.
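A minimal usage sketch of the paired API:

  #include <openssl/crypto.h>

  int main()
  {
    // Memory obtained from OPENSSL_malloc() must be released with
    // OPENSSL_free(); plain free() is not guaranteed to work, and
    // forks such as AWS-LC can crash on it.
    void *buf = OPENSSL_malloc(64);
    if (buf == NULL)
      return 1;
    OPENSSL_free(buf);
    return 0;
  }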
We've been lucky that OpenSSL's (and wolfSSL's) implementation allowed
the use of `free` for memory cleanup. However, other OpenSSL forks,
such as AWS-LC[^2], are not as forgiving, and this causes a server
crash.
Test case `openssl_1` provides good coverage for this issue. If a user
is created using:
`grant select on test.* to user1@localhost require SUBJECT "...";`
user1 will crash the instance during connection under AWS-LC.
There have been numerous OpenSSL forks[^3]. Due to FIPS[^4] and other
related regulatory requirements, MariaDB will be built against them.
This fix will increase MariaDB's adaptability by using the more
compliant and generally accepted API.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
[^1]: https://www.openssl.org/docs/man1.1.1/man3/OPENSSL_malloc.html
[^2]: https://github.com/awslabs/aws-lc
[^3]: https://en.wikipedia.org/wiki/OpenSSL#Forks
[^4]: https://en.wikipedia.org/wiki/FIPS_140-2
st_select_lex::init_query is called during the execution of EXECUTE
IMMEDIATE 'alter table ...', so reset the initialization at the same
point where we set join= 0.
Also fixes MDEV-25564 and MDEV-18157.
An attempt to produce EXPLAIN output caused a crash in
Explain_node::print_explain_for_children. The cause was that an
Explain_node (for a derived table, in this case) had a link to child
select#N, but no query plan was present for select#N.
The query plan wasn't present because the subquery was eliminated.
- Either it was a degenerate subquery like "(SELECT 1)" in MDEV-25564.
- Or it was a subquery in a UNION subquery's ORDER BY clause:
    col IN (SELECT ... UNION
            SELECT ... ORDER BY (SELECT FROM t1))
In such cases, the legacy structure of the subquery/union processing
code(*) makes it hard to detect that the subquery was eliminated, so we
end up with EXPLAIN data structures (Explain_node::children) holding
dangling links to child subqueries.
The fix: add checks for this case and don't follow the dangling links.
(In an ideal world we would not have these dangling links at all, but
fixing the code (*) would be too risky for the stable versions.)
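A self-contained toy model of the guard (the class and member names
below are simplified stand-ins for Explain_query and
Explain_node::children):

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  struct Node;

  struct Query
  {
    std::vector<Node*> plans;               // indexed by select number
    Node *get_node(size_t select_no)
    {
      return select_no < plans.size() ? plans[select_no] : nullptr;
    }
  };

  struct Node
  {
    std::vector<size_t> children;           // links to child select#N
    void print_explain_for_children(Query &query)
    {
      for (size_t sel : children)
      {
        Node *child = query.get_node(sel);
        if (!child)                         // dangling link: subquery
          continue;                         // eliminated, no plan saved
        std::printf("explain for select#%zu\n", sel);
        child->print_explain_for_children(query);
      }
    }
  };

  int main()
  {
    Node derived;                            // node for select#1
    Query q;
    q.plans = { nullptr, &derived, nullptr };// select#2 was eliminated
    derived.children = { 2 };                // ...but the link survives
    derived.print_explain_for_children(q);   // skips it; no crash
    return 0;
  }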
The population of default values in INSERT SELECT was being
performed twice. With sequences, this resulted in every
second sequence value being used.
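A toy model of the symptom (no server code involved; a sequence-backed
default filled twice per row skips every other value):

  #include <cstdio>

  struct Sequence
  {
    long v = 0;
    long next() { return ++v; }             // NEXTVAL
  };

  int main()
  {
    Sequence seq;
    for (int row = 1; row <= 3; row++)
    {
      long value = seq.next();              // defaults filled once...
      value = seq.next();                   // ...and then a second time
      std::printf("row %d gets %ld\n", row, value);  // prints 2, 4, 6
    }
    return 0;
  }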
With INSERT SELECT we remove the second invocation of
table->update_default_fields(). This was already performed in
store_values(), which invokes fill_record_n_invoke_before_triggers(),
which in turn invoked update_default_fields().
We still need to return an error on duplicate values, so
::store_values() is extended to take the ignore option.
=========== Problem =============
- `show columns` is not working for temporary tables, even though the
  user has a sufficient privilege (`create temporary tables`).
=========== Solution =============
- Append `TMP_TABLE_ACLS` privilege when running `show columns` for temp
tables.
- Additionally, call `check_access()` for the database only once, not
  for each field.
=========== Additionally =============
- Update comments for the `check_table_access` function arguments.
Reviewed by: <vicentiu@mariadb.org>
For some queries that involve tables with different but convertible
character sets for columns taking part in the query, repeated
execution of such queries in PS mode or as part of a stored routine
would result in abnormal server termination.
For example,
CREATE TABLE t1 (a2 varchar(10));
CREATE TABLE t2 (u1 varchar(10) CHARACTER SET utf8);
CREATE TABLE t3 (u2 varchar(10) CHARACTER SET utf8);
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = t1.a2))";
EXECUTE stmt;
EXECUTE stmt; <== Running this prepared statement the second time
results in a server crash.
The reason for the server crash is that an instance of the class
Item_func_conv_charset, created to convert a column from one character
set to another, is allocated on the execution memory root, but the
pointer to this instance is stored in an item placed on the prepared
statement memory root. Below is the call trace to the place where the
instance of the class Item_func_conv_charset is created.
setup_conds
Item_func::fix_fields
Item_bool_rowready_func2::fix_length_and_dec
Item_func::setup_args_and_comparator
Item_func_or_sum::agg_arg_charsets_for_comparison
Item_func_or_sum::agg_arg_charsets
Item_func_or_sum::agg_item_set_converter
Item::safe_charset_converter
And the following trace shows the place where a pointer to the
instance of the class Item_func_conv_charset is passed to the class
Item_func_eq, which is created on the memory root of the prepared
statement.
Prepared_statement::execute
mysql_execute_command
execute_sqlcom_select
handle_select
mysql_select
JOIN::optimize
JOIN::optimize_inner
convert_join_subqueries_to_semijoins
convert_subq_to_sj
To fix the issue, switch to the Prepared Statement memory root before
calling the method Item_func::setup_args_and_comparator, so that any
created Items are placed on the permanent memory root.
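A self-contained toy model of the lifetime bug and of the fix (bump
allocators stand in for the execution and prepared statement memory
roots):

  #include <algorithm>
  #include <cstdio>
  #include <cstring>
  #include <vector>

  struct Arena                              // simplified memory root
  {
    std::vector<char> buf;
    size_t used = 0;
    explicit Arena(size_t n) : buf(n) {}
    void *alloc(size_t n) { void *p = &buf[used]; used += n; return p; }
    void reset() { used = 0; std::fill(buf.begin(), buf.end(), 0); }
  };

  int main()
  {
    Arena stmt_root(64), exec_root(64);
    // First EXECUTE: the charset converter lands on the execution
    // root,
    char *conv = static_cast<char*>(exec_root.alloc(16));
    std::strcpy(conv, "conv_item");
    // but an item on the statement root keeps a pointer to it.
    char **stored = static_cast<char**>(stmt_root.alloc(sizeof conv));
    *stored = conv;
    exec_root.reset();                      // statement execution ends
    // Second EXECUTE dereferences a stale pointer: garbage or a crash.
    std::printf("second EXECUTE sees: '%s'\n", *stored);
    // The fix corresponds to allocating conv on stmt_root instead, so
    // it survives across executions.
    return 0;
  }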
It may seem that such an approach would result in a memory leak in
case the parameter marker '?' is used in the query, as in the
following example:
PREPARE stmt FROM
"SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = ?))";
EXECUTE stmt USING convert('A' using latin1);
but it wouldn't, since in that case any parameter marker is treated as
a constant and no subquery-to-semijoin optimization is performed.
* ODBC Connect cosmetic fixes
- Update the connection command for default `peer` authentication for
  user `postgres` (unless changed in `pg_hba.conf`).
- Update the privilege-granting command to be more verbose.
- Update the path to the `.sql` file.
- Update the instructions for the `pg_hba.conf` file to use the unix
  socket type (`local`) as well as the TCP/IP type (`host`).
- Update the instruction about preferring a user DSN (data source
  file) over a system DSN.
- Update the `odbc-postgresql` driver path in a comment.
* Connect SE: update ODBC result file
See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario INSERT chose to check whether delete-unmarking is
available for a just-deleted record. To build an update vector, it
needed to calculate the vcols as well. Since this INSERT was not
IGNORE-flagged, the recalculation failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for the delete-unmarked insert.
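A generic sketch of the "temporarily set" pattern (the guard name and
the usage line are hypothetical; the real flag is
THD::abort_on_warning):

  // RAII helper: sets a flag for one scope and restores it afterwards.
  struct Scoped_flag
  {
    bool &flag;
    bool saved;
    Scoped_flag(bool &f, bool value) : flag(f), saved(f) { f = value; }
    ~Scoped_flag() { flag = saved; }
  };

  // Usage sketch:
  //   Scoped_flag guard(thd->abort_on_warning, true);
  //   ... recalculate vcols for the delete-unmarked insert ...
  // the old value is restored when guard goes out of scope.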
As of now InnoDB does not store a trx_id for each record in a
secondary index. The idea behind this is the following: store only a
per-page max_trx_id, and delete-mark records when they are
deleted/updated.
When a read starts, it remembers the lowest id of the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When a page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not
delete-marked, then the page is safe to read as is. Otherwise, access
to the clustered index may be needed. See the page_get_max_trx_id call
in row_search_mvcc, and the corresponding
switch (row_search_idx_cond_check(...)) below it.
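A minimal standalone sketch of that visibility shortcut (the real
logic lives in row_search_mvcc):

  #include <cstdint>

  // The record can be read from the secondary index alone when every
  // transaction that changed the page committed before the read view
  // was opened and the record is not delete-marked; otherwise a
  // clustered index lookup may be needed.
  bool sec_page_safe_to_read(uint64_t page_max_trx_id,
                             uint64_t m_up_limit_id,
                             bool rec_delete_marked)
  {
    return page_max_trx_id < m_up_limit_id && !rec_delete_marked;
  }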
Virtual columns are required to be updated in case the record was
delete-marked. The motivation for this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
That was, in essence, a description of why virtual column computation
can normally happen during SELECT and, more generally, during a vcol
index access.
Sometimes stats tables are updated by InnoDB. This starts a new
transaction, and it can happen that it has not finished by the moment
of SELECT execution, forcing virtual column recomputation. If the
result was something that normally produces a warning, like a division
by zero, the warning could be output in a racy manner.
The solution is to suppress the warnings when a column is computed for
the described purpose.
An ignore_warnings argument is added to innobase_get_computed_value().
Currently it is true only for the call from
row_sel_sec_rec_is_for_clust_rec.
MDEV-19243 introduced a regression on Windows.
In the (supposedly rare) case where the environment variable TZ was
set, @@system_time_zone no longer derived from TZ. Instead, it
incorrectly referred to the system default time zone, even though UTC
time conversion took TZ into account.
The fix is to restore the TZ-aware handling (the timezone name derives
from tzname) when TZ is set.
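A small POSIX-style sketch of the restored behavior (on Windows the
MSVC runtime exposes the same data through _tzset()/_tzname):

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int main()
  {
    if (getenv("TZ"))
    {
      tzset();                              // fills tzname[] from TZ
      printf("system_time_zone = %s\n", tzname[0]);
    }
    else
      printf("using the OS default time zone\n");
    return 0;
  }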