As of version 3.2.0, OpenSSL updated some of its error messages
(https://github.com/openssl/openssl/commit/81b741f68984). Update the
tests and result files such that they are compatible with both the original
and the new error messages.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services,
Inc.
On Windows systems, occurrences of ERROR_SHARING_VIOLATION due to
conflicting share modes between processes accessing the same file can
result in CreateFile failures.
mysys' my_open() already incorporates a workaround by implementing
wait/retry logic on Windows.
But this does not help if files are opened using shell redirection, as
mysqltest traditionally did it, i.e. via
--exec echo "some text" > output_file
In such cases it is cmd.exe that opens the output_file, and cmd.exe
won't do any sharing-violation retries.
This commit addresses the issue by introducing a new built-in command,
'write_line', in mysqltest. This new command serves as a shorthand
alternative to 'write_file' for a single line of output, and it also
resolves variables like 'exec' would.
Internally, this command uses my_open(), and therefore the retry-on-error
logic.
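For illustration, a sketch of how a test might move from shell redirection
to the new command (the exact write_line argument syntax shown here is an
assumption, not a verified signature):

  # old pattern: cmd.exe opens the file and does no sharing-violation retries
  --exec echo "some text" > $MYSQL_TMP_DIR/out.txt

  # new pattern: mysqltest opens the file itself via my_open(), with retries
  write_line some text $MYSQL_TMP_DIR/out.txt;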
Hopefully this will eliminate the very sporadic "can't open file because
it is used by another process" error on CI.
create_partitioning_metadata() should only mark transaction r/w
if it actually did anything (that is, the table is partitioned).
otherwise it's a no-op, called even for temporary tables and
it shouldn't do anything at all
Synopsis: if a SELECT's result is returned from the query cache, the statement is not really executed.
The reason for firing the assertion
DBUG_ASSERT((mem_root->flags & ROOT_FLAG_READ_ONLY) == 0);
is that when the query cache is on and the same query is run by different
stored routines, the following scenario can take place:
First, let's say that the bodies of the routines used by the test case are
identical and contain only the query 'SELECT * FROM t1';
call p1() -- a result set is stored in query cache for further use.
call p2() -- the same query is run against the table t1, which results in
            not running the actual query but using its cached result.
On finishing execution of this routine, its memory root is
marked for read only since every SP instruction that this
routine contains has been executed.
INSERT INTO t1 VALUES (1); -- forces subsequent invalidation of the cached result
call p2() -- querying the table t1 will result in an assertion failure since
            its execution would require allocation on the memory root that
            has already been marked as read only
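The scenario above can be condensed into a script along these lines (a
hedged sketch; it assumes a debug build with the protected statement memory
root, and the table/routine definitions are made up to match the description):

  SET GLOBAL query_cache_type= ON;
  SET GLOBAL query_cache_size= 1048576;
  CREATE TABLE t1 (a INT);
  CREATE PROCEDURE p1() SELECT * FROM t1;
  CREATE PROCEDURE p2() SELECT * FROM t1;
  CALL p1();                 -- result set is stored in the query cache
  CALL p2();                 -- answered from the query cache, not really executed
  INSERT INTO t1 VALUES (1); -- invalidates the cached result
  CALL p2();                 -- used to hit the DBUG_ASSERT described above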
The root cause of firing the assertion is that the memory root of the stored
routine 'p2' was marked as read only although the actual execution of the query
it contains hadn't been performed.
To fix the issue, mark an SP instruction as not yet run in case its execution
doesn't result in real query processing and the result set is taken from the
query cache instead.
Note that this issue only affects a server built in debug mode AND with the
protect-statement-memory-root feature turned on. It doesn't affect a server
built in release mode.
followup for 81f75ca83a
improve over take 2. It's technically possible, though unlikely,
to see a THD after it has already reset the info to NULL, but has not
yet changed the command to COM_SLEEP (see THD::mark_connection_idle()).
Let's wait for "Sleep", not for NULL.
- Add additional MTRs for more coverage on invalid options
- Update a few error messages to be more informative
- Use the exit code from handle_options() when there is an error processing
user options
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
When passing in an invalid value (e.g. incorrect data type) for a variable, the
server startup will fail with misleading error messages.
The behavior **before** this change:
For server options:
- The error message will indicate that the argument is being adjusted to a valid value
- Server startup still fails
For plugin options:
- The error message will indicate that the argument is being adjusted to a valid value
- The plugin is still disabled
- Server startup fails with a message that it does not recognize the plugin option
The behavior **after** this change:
For server options:
- Output that an invalid argument was provided
- Exit server startup
For plugin options:
- Output that an invalid argument was provided
- Disable the plugin
- Attempt to continue server startup
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
In 69a4d6ae, an MTR test was added to verify that we handled multiple invalid
options. However, the logic to perform this test relied on a non-trivial regex
to filter out the noise in the logs.
Instead, we now simply search for what we expect to be in the logs.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
TIME-alike string and numeric arguments to TIMEDIFF()
can get additional fractional seconds during the supported
TIME range adjustment in get_time().
For example, during TIMEDIFF('839:00:00','00:00:00') evaluation
in Item_func_timediff::get_date(), the call for args[0]->get_time()
returns MYSQL_TIME '838:59:59.999999'.
Item_func_timediff::get_date() did not handle these extra digits
and returned a MYSQL_TIME result with fractional digits outside
of Item_func_timediff::decimals. This mismatch could further be
caught by a DBUG_ASSERT() in various other pieces of the code,
leading to a crash.
Fix:
If get_time() returned MYSQL_TIMESTAMP_TIME,
let's truncate all extra digits using my_time_trunc(&l_time, decimals).
This guarantees that the rest of the code returns a MYSQL_TIME
with second_part not conflicting with Item_func_timediff::decimals.
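A minimal SQL illustration of the case above (the expected behaviour described
in the comments is an assumption derived from the description, not a verified
test output):

  SELECT TIMEDIFF('839:00:00','00:00:00');
  -- args[0] is clamped to the supported TIME range as '838:59:59.999999';
  -- with the fix, the extra fractional digits are truncated to the
  -- function's decimals, so the result carries no spurious microseconds.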
These patches:
# commit 74891ed257
#
# MDEV-11514, MDEV-11497, MDEV-11554, MDEV-11555 - IN and CASE type aggregation problems
# commit 53499cd1ea
#
# MDEV-31303 Key not used when IN clause has both signed and unsigned values
earlier fixed MDEV-18898.
Adding only an MTR case.
modified: mysql-test/main/func_in.result
modified: mysql-test/main/func_in.test
The JSON test about max statement time fails on FreeBSD because on some
architectures the test might execute faster and the statement may not fail.
To simulate the failure regardless of architecture, introduce a wait that
lasts longer than the max_statement_time.
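One way to force a statement past the limit regardless of hardware speed, as
a hedged sketch (the statement actually used by the test differs; the values
are illustrative):

  SET @@max_statement_time= 0.5;
  SELECT JSON_VALID('{}') AND SLEEP(2);
  -- SLEEP keeps the statement busy for longer than max_statement_time,
  -- so it is interrupted on every architecture.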
JOIN_CACHE has a light-weight initialization mode that's targeted at
EXPLAINs. In that mode, JOIN_CACHE objects are not able to execute.
Light-weight mode was used whenever the statement was an EXPLAIN. However,
EXPLAIN can execute subqueries, provided they enumerate fewer than
@@expensive_subquery_limit rows.
Make sure we use light-weight initialization mode only when the select is
more expensive than @@expensive_subquery_limit.
Also add an assert into JOIN_CACHE::put_record() which prevents its use
if it was initialized for EXPLAIN only.
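A hypothetical illustration (table names are made up, and whether the
subquery is actually executed depends on the optimizer's cost estimates):

  EXPLAIN
  SELECT * FROM t_big
  WHERE t_big.a IN (SELECT t_small.b FROM t_small, t_tiny
                    WHERE t_small.c = t_tiny.c);
  -- If the subquery enumerates fewer than @@expensive_subquery_limit rows,
  -- EXPLAIN may execute it, so its JOIN_CACHE objects must be fully
  -- initialized rather than prepared in the light-weight EXPLAIN-only mode.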
Item_func_dyncol_create::print_arguments() printed only CHARSET clause
without COLLATE.
Therefore,
HEX(column_create(1,'1212' AS CHAR CHARACTER SET utf8mb3 COLLATE utf8mb3_bin))
inside a VIEW changed to just:
HEX(column_create(1,'1212' AS CHAR CHARACTER SET utf8mb3))
which changed the collation ID seen in the HEX output.
Note, the collation ID inside column_create() is not really that important
(it's only important what the character set is).
And for COLLATE, the more important thing is what's later written
in the AS clause of COLUMN_GET:
SELECT
  COLUMN_GET(
    column_create(1,'1212' AS CHAR CHARACTER SET utf8mb3 COLLATE utf8mb3_bin),
    column_nr AS type -- this type is more important
  );
Still, let's add the COLLATE clause into the COLUMN_CREATE() print output,
although it's not important for now for anything else than just the HEX output.
At least to make VIEW work in a more predictable way with HEX(COLUMN_CREATE()).
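For example (a sketch reusing the expression above; the point is only that
the COLLATE clause now survives the VIEW round-trip):

  CREATE VIEW v1 AS
  SELECT HEX(COLUMN_CREATE(1,'1212' AS CHAR CHARACTER SET utf8mb3
                                    COLLATE utf8mb3_bin)) AS h;
  SHOW CREATE VIEW v1; -- now prints ... COLLATE utf8mb3_bin ..., so the
                       -- HEX() value selected through the VIEW no longer
                       -- differs from the one selected directly.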
Also, in the future we can start using somehow the collation ID written inside
COLUMN_CREATE(), for example by making the `AS type` clause optional in
COLUMN_GET():
COLUMN_GET(dyncol_blob, column_nr [AS type]);
instead of:
COLUMN_GET(dyncol_blob, column_nr AS type);
SQL Server compatibility layer may need this for
the SQL_Variant data type support.
The discovered memory leak was introduced by the commit
762bf7a03b
(MDEV-22602 Disable UPDATE CASCADE for SQL constraints)
The reason why the memory leaked on running the test main.constraints
is that a statement arena was used for allocating the memory
for storing a constraint name. A constraint name is an entity of a
temporary nature by design, so the runtime arena should be used for its
allocation.
Some fixes related to commit f838b2d799, and to
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables, were provided by Nikita Malyavin.
This was required by the test versioning.rpl,trx_id,row.
$MYSQL_TZINFO_TO_SQL works by truncating tables. Truncation is an
operation that cannot be done in-place and therefore is fundamentally
incompatible with alter_algorithm='INPLACE'. As a result, we override
the default alter_algorithm setting in tztime.cc to
alter_algorithm='COPY' so that timezones can be loaded regardless
of the previously set alter_algorithm.
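As a hedged sketch, the generated SQL is expected to contain statements along
these lines when run against the mysql schema (the exact output of
$MYSQL_TZINFO_TO_SQL may differ):

  SET SESSION alter_algorithm='COPY';
  TRUNCATE TABLE time_zone;
  TRUNCATE TABLE time_zone_name;
  TRUNCATE TABLE time_zone_transition;
  TRUNCATE TABLE time_zone_transition_type;
  -- followed by the INSERT statements that reload the timezone data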
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
Item_func_group_concat::print() did not take into account
that Item_func_group_concat::separator can be of a different character set
than the "String *str" that the printing is being done to.
Therefore, printing did not work correctly for:
- non-ASCII separators when GROUP_CONCAT is done on 8bit data
or multi-byte data with mbminlen==1.
- all separators (even including simple ones like comma)
when GROUP_CONCAT is done on ucs2/utf16/utf32 data (mbminlen>1).
Because of this problem, VIEW definitions did not print correctly to
their FRM files. This later led to a wrong SELECT and SHOW CREATE output.
Fix:
- Adding new String methods:
bool append_for_single_quote_using_mb_wc(const char *str, size_t length,
CHARSET_INFO *cs);
bool append_for_single_quote_opt_convert(const char *str,
size_t length,
CHARSET_INFO *cs)
which perform both escaping and character set conversion at the same time.
- Adding a new String method escaped_wc_for_single_quote(),
to reuse the code between the old and the new methods.
- Fixing Item_func_group_concat::print() to use the new
method append_for_single_quote_opt_convert().
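A hypothetical example of a definition that used to be printed incorrectly
(table and column names are made up):

  CREATE TABLE t1 (a CHAR(10) CHARACTER SET ucs2);
  CREATE VIEW v1 AS SELECT GROUP_CONCAT(a SEPARATOR ';') AS gc FROM t1;
  SHOW CREATE VIEW v1; -- the separator is now escaped and converted to the
                       -- target character set when the definition is printed,
                       -- so the stored VIEW definition stays correct.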
Queries that select concatenated constant strings now have
a column name and value that match. For example,
SELECT '123' 'x';
will return a result where the column name and value both
are '123x'.
Review: Daniel Black
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not not to cause
confusion to normal users.
.. is not updating some system tables
Some schema changes from MDEV-24312 (master_host has a 60 character limit,
increase to 255 bytes) failed to happen in the upgrade for tables in the
mysql schema:
* mysql.global_priv
* mysql.procs_priv
* mysql.proxies_priv
* mysql.roles_mapping
https://jepsen.io/analyses/mysql-8.0.34 highlights that the
transaction isolation levels in the InnoDB storage engine do not
correspond to any widely accepted definitions, such as
"Generalized Isolation Level Definitions"
https://pmg.csail.mit.edu/papers/icde00.pdf
(PL-1 = READ UNCOMMITTED, PL-2 = READ COMMITTED, PL-2.99 = REPEATABLE READ,
PL-3 = SERIALIZABLE).
Only READ UNCOMMITTED in InnoDB seems to match the above definition.
The issue is that InnoDB does not detect write/write conflicts
(Section 4.4.3, Definition 6) in the above.
It appears that as soon as we implement write/write conflict detection
(SET SESSION innodb_snapshot_isolation=ON), the default isolation level
(SET TRANSACTION ISOLATION LEVEL REPEATABLE READ) will become
Snapshot Isolation (similar to Postgres), as defined in Section 4.2 of
"A Critique of ANSI SQL Isolation Levels", MSR-TR-95-51, June 1995
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf
Locking reads inside InnoDB used to read the latest committed version,
ignoring what should actually be visible to the transaction.
The added test innodb.lock_isolation illustrates this. The statement
UPDATE t SET a=3 WHERE b=2;
is executed in a transaction that was started before a read view or
a snapshot of the current transaction was created, and committed before
the current transaction attempts to execute
UPDATE t SET b=3;
If SET innodb_snapshot_isolation=ON is in effect when the second
transaction was started, the second transaction will be aborted with
the error ER_CHECKREAD. By default (innodb_snapshot_isolation=OFF),
the second transaction would execute inconsistently, displaying an
incorrect SELECT COUNT(*) FROM t in its read view.
If innodb_snapshot_isolation=ON and an attempt is made to acquire a lock on
a record that does not exist in the current read view, an error
DB_RECORD_CHANGED (HA_ERR_RECORD_CHANGED, ER_CHECKREAD) will
be raised. This error will be treated in the same way as a deadlock:
the transaction will be rolled back.
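A condensed two-connection sketch of the scenario exercised by
innodb.lock_isolation (the statements are the ones quoted above; the table
definition and row values are illustrative):

  -- connection A
  CREATE TABLE t (a INT, b INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 2);
  START TRANSACTION;
  UPDATE t SET a=3 WHERE b=2;

  -- connection B
  SET SESSION innodb_snapshot_isolation=ON;
  START TRANSACTION WITH CONSISTENT SNAPSHOT; -- A's change is not visible here

  -- connection A
  COMMIT;

  -- connection B
  UPDATE t SET b=3; -- fails with ER_CHECKREAD and the transaction is rolled
                    -- back; with innodb_snapshot_isolation=OFF it would
                    -- proceed and show an inconsistent SELECT COUNT(*) FROM t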
lock_clust_rec_read_check_and_lock(): If the current transaction has
a read view where the record is not visible and
innodb_snapshot_isolation=ON, fail before trying to acquire the lock.
row_sel_build_committed_vers_for_mysql(): If innodb_snapshot_isolation=ON,
disable the "semi-consistent read" logic that had been implemented by
myself on the directions of Heikki Tuuri in order to address
https://bugs.mysql.com/bug.php?id=3300 that was motivated by a customer
wanting UPDATE to skip locked rows that do not match the WHERE condition.
It looks like my changes were included in the MySQL 5.1.5
commit ad126d90e019f223470e73e1b2b528f9007c4532; at that time, employees
of Innobase Oy (a recent acquisition of Oracle) had lost write access to
the repository.
The only reason why we set innodb_snapshot_isolation=OFF by default is
backward compatibility with applications, such as the one that motivated
the implementation of "semi-consistent read" back in 2005. In a later
major release, we can default to innodb_snapshot_isolation=ON.
Thanks to Peter Alvaro, Kyle Kingsbury and Alexey Gotsman for their work
on https://github.com/jepsen-io/ and to Kyle and Alexey for explanations
and some testing of this fix.
Thanks to Vladislav Lesin for the initial test for MDEV-26643,
as well as reviewing these changes.
Let us skip the recently added test main.mysql-interactive if
an instrumented ncurses library is not available.
In InnoDB, let us work around an uninstrumented libnuma, by
declaring that the objects returned by numa_get_mems_allowed()
are initialized.
Calling a stored function that uses a cursor inside its body
could produce the error ER_NO_SUCH_TABLE on the second execution
in case the cursor uses a multi-table query and one of the tables
is a temporary table created just before the cursor is queried and
dropped just after the query has been executed.
The reason for the issue is that re-parsing of a failed SP instruction
(the failure being caused by the create/drop of the temporary table) used
the LEX object left over from the previous parsing of the SP instruction's
query, instead of re-initializing the LEX object before parsing.
To fix the issue, add initialization of the LEX object for the cursor's
statement before re-parsing the query of a failed SP instruction.
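A hedged sketch of the failing pattern (names and definitions are
illustrative, not taken from the actual test):

  CREATE TABLE t1 (a INT);
  DELIMITER //
  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    DECLARE v INT DEFAULT 0;
    DECLARE done INT DEFAULT 0;
    DECLARE c CURSOR FOR SELECT t1.a FROM t1, tmp1 WHERE t1.a = tmp1.a;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN c;
    FETCH c INTO v;
    CLOSE c;
    RETURN v;
  END //
  DELIMITER ;

  CREATE TEMPORARY TABLE tmp1 (a INT);
  SELECT f1();               -- ok
  DROP TEMPORARY TABLE tmp1;

  CREATE TEMPORARY TABLE tmp1 (a INT);
  SELECT f1();               -- used to fail with ER_NO_SUCH_TABLE
  DROP TEMPORARY TABLE tmp1;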
MDEV-33558 Fatal error InnoDB: Clustered record field for column x not found
This issue is about row ID filtering used with an index on virtual
column(s). We hit a debug assert and crash while building the record
template in InnoDB. The primary reason is that we try to force the code
path to use the ICP path. With ICP, we don't support an index with virtual
columns, and we validate that while the index condition is pushed.
Simplify the code for building the template to handle both ICP and Row ID
filtering by skipping virtual columns.
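A hypothetical schema of the kind involved (whether the optimizer actually
combines the virtual-column index with a rowid filter depends on statistics,
so this is only illustrative):

  CREATE TABLE t (
    a INT, b INT,
    v INT AS (a + b) VIRTUAL,
    KEY k_v (v),
    KEY k_b (b)
  ) ENGINE=InnoDB;
  -- a range on v combined with a rowid filter built from k_b is the
  -- combination that used to hit the record-template assertion
  SELECT * FROM t WHERE v BETWEEN 10 AND 20 AND b BETWEEN 100 AND 200;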