Before "MDEV-10709 Expressions as parameters to Dynamic SQL" only
user variables were syntactically allowed as EXECUTE parameters.
User variables were OK as both IN and OUT parameters.
When Item_param was bound to an actual parameter (a user variable),
it automatically meant that the bound Item was settable.
The DBUG_ASSERT() in Protocol_text::send_out_parameters() guarded that
the actual parameter is really settable.
After MDEV-10709, any kind of expression is allowed as an EXECUTE IN
parameter.
But the patch for MDEV-10709 forgot to check that only descendants of
Settable_routine_parameter should be allowed as OUT parameters.
So an attempt to pass a non-settable parameter as an OUT parameter
made the server crash on the above-mentioned DBUG_ASSERT.
This patch changes Item_param::get_settable_routine_parameter(),
which previously always returned "this". Now, when Item_param is bound
to some Item, it caches whether the bound Item is settable.
Item_param::get_settable_routine_parameter() now returns "this" only
if the bound actual parameter is settable, and returns NULL otherwise.
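A minimal sketch of the difference (procedure and variable names are
illustrative):

  CREATE PROCEDURE p1(OUT v INT) SET v = 10;
  PREPARE stmt FROM 'CALL p1(?)';
  EXECUTE stmt USING @x;      -- a user variable is settable: OK
  EXECUTE stmt USING @x + 1;  -- an expression is not settable;
                              -- crashed before, now rejected with an error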
Just as Item_subselect::fix_fields does, the function
Item_subselect::update_used_tables must ignore UNCACHEABLE_EXPLAIN
when deciding whether the subquery item should be considered
a constant item.
Problem: Item_param::basic_const_item() returned true when fixed==false.
This unexpected combination made Item::const_charset_converter() crash
on asserts.
Fix:
- Changing all Item_param::set_xxx() to set "fixed" to true.
This fixes the problem.
- Additionally, changing all Item_param::set_xxx() to set
Item_param::item_type, to avoid duplicate code, and for consistency,
to make the code symmetric between different constant types.
Before this patch only set_null() set item_type.
- Moving Item_param::state and Item_param::item_type from public to
private, to make it easier to ensure that these members stay in sync
with "fixed" and with each other.
- Adding a new argument "unsigned_arg" to Item::set_decimal(),
and reusing it in two places to avoid duplicate code.
- Adding a new method Item_param::fix_temporal() and reusing it in two places.
- Adding methods has_no_value(), has_long_data_value(), has_int_value(),
instead of direct access to Item_param::state.
[0-9]*[.]?[0-9]* wasn't a sufficient regex to cover the
%lg format used in Json_writer::add_double: exponent formats
were missed.
Here we normalize all the replace_regex expressions for
ANALYZE FORMAT=JSON into one include file.
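A sketch of the kind of pattern that is needed (the exact pattern in
the include file may differ):

  --replace_regex /("r_total_time_ms": )[0-9]+(\.[0-9]+)?(e[-+]?[0-9]+)?/\1"REPLACED"/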
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
Most notably, this includes MDEV-11623, which includes a fix and
an upgrade procedure for the InnoDB file format incompatibility
that is present in MariaDB Server 10.1.0 through 10.1.20.
In other words, this merge should address
MDEV-11202 InnoDB 10.1 -> 10.2 migration does not work
When a query containing a WITH clause is printed by EXPLAIN
EXTENDED command there should not be any data expansion in
the query specifications of the WITH elements of this WITH
clause.
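For example (schema illustrative), for a query like

  EXPLAIN EXTENDED
  WITH t AS (SELECT a FROM t1 WHERE b >= 'c')
  SELECT * FROM t2, t WHERE t2.c = t.a;

the warning should show the WITH clause as written, rather than
substituting the definition of t into each place where t is used.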
When building different range and index-merge trees, the range optimizer
could build an index-merge tree with an index scan containing fewer
ranges than needed. This index-merge could be chosen as the best one;
following it, the executor missed some rows in the result set.
The invalid index scan was built due to an inconsistency in the code
back-ported from MySQL into 5.3 that fixed MySQL bug #11765831:
the code added to key_or() could change shared keys of the second
ORed tree. The problem was partially fixed by the patch for MariaDB
bug #823301, but, as it turned out, only partially.
innodb_file_format=Barracuda is the default in MariaDB 10.2.
Do not set it, because the option will be removed in MariaDB 10.3.
Also, do not set innodb_file_per_table=1 because it is the default.
Note that MDEV-11828 should fix the test innodb.innodb-64k
already in 10.1.
check_that_all_fields_are_given_values() relied on write_set,
but was run too early, before triggers had updated write_set.
Also, when triggers are present, fields might get values
conditionally, so we need to check that all fields are given values
for every row.
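A sketch of the conditional case (names illustrative): whether column
"a" gets a value is only known per row, after the trigger has run:

  CREATE TABLE t1 (a INT NOT NULL);
  DELIMITER |
  CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW
  BEGIN
    IF @fill THEN SET NEW.a = 1; END IF;
  END|
  DELIMITER ;
  SET @fill = 1;
  INSERT INTO t1 VALUES ();  -- the trigger supplies a value: OK
  SET @fill = 0;
  INSERT INTO t1 VALUES ();  -- "a" gets no value: must be rejected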
Fixing Item::decimal_precision() to return at least one digit.
This fixes the problem reported in MDEV.
Also, fixing Item_func_signed::fix_length_and_dec() to reserve
space for at least one digit (plus one character for an optional sign).
This is needed to have CONVERT(expr,SIGNED) and CONVERT(expr,UNSIGNED)
create correct string fields when they appear in string context, e.g.:
CREATE TABLE t1 AS SELECT CONCAT(CONVERT('',SIGNED));
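For instance (the expected width is approximate), after the statement
above:

  SHOW CREATE TABLE t1;  -- the CONCAT column must be wide enough for
                         -- at least one digit plus an optional sign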
When updating a table with virtual BLOB columns, the following might
happen:
- an old record is read from the table, it has no virtual blob values
- update_virtual_fields() is run, vcol blob gets its value into the
record. But only a pointer to the value is in the table->record[0],
the value is in Field_blob::value String (but it doesn't have to be!
it can be in the record, if the column is just a copy of another
column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table,record[1]), old record now is in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
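A sketch of a table where this scenario can be hit (schema
illustrative):

  CREATE TABLE t1 (
    a INT PRIMARY KEY,
    b VARCHAR(100),
    c BLOB AS (CONCAT(b, b)) VIRTUAL  -- value lives in Field_blob::value
  );
  INSERT INTO t1 (a, b) VALUES (1, 'old');
  UPDATE t1 SET b = 'new' WHERE a = 1;
  -- record[1] must keep the old value of c, not point at the new one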
To fix this I have introduced a new String object 'read_value' in
Field_blob. When updating virtual columns when a row has been read,
the allocated value is stored in 'read_value' instead of 'value'. The
allocated blobs for the new row are stored in 'value' as before.
I also made, as a safety precaution, the insert delayed handling of
blobs more general by using value to store strings instead of the
record. This ensures that virtual functions on delayed insert
work as in the case of a normal insert.
Triggers are now properly updating the read, write and vcol maps for used
fields. This means that we don't need VCOL_UPDATE_FOR_READ_WRITE anymore
and there is no need for any other special handling of triggers in
update_virtual_fields().
To be able to test how many times virtual fields are invoked, I also
relaxed the rules so that one can use local (@) variables in DEFAULT
and non-persistent virtual field expressions.
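A hypothetical sketch of what the relaxation permits, assuming plain
reads of @ variables in a VIRTUAL (non-persistent) column:

  SET @base = 100;
  CREATE TABLE t1 (a INT, v INT AS (a + @base) VIRTUAL);
  INSERT INTO t1 (a) VALUES (1);
  SELECT v FROM t1;  -- 101
  SET @base = 200;
  SELECT v FROM t1;  -- 201: the expression was evaluated again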
c3cf7f47f0 reverted the patch
for BUG#24487120. After merging the reverting patch from MySQL
to MariaDB the problems described in MDEV-11079 and MDEV-11631 disappeared.
Adding test cases only.
Cherry-pick: f4a0af070ce49abae60040f6f32e1074309c27fb
Author: Dmitry Lenev <dmitry.lenev@oracle.com>
Date: Mon Jul 25 16:06:52 2016 +0300
Fix for bug #16672723 "CAN'T FIND TEMPORARY TABLE".
An attempt to execute a prepared CREATE TABLE SELECT statement which
used a temporary table in a subquery in the FROM clause and a stored
function failed with an unwarranted ER_NO_SUCH_TABLE error. The same
happened when such a statement was used in a stored procedure and this
procedure was re-executed.
The problem occurred because execution of such a prepared statement
(or its re-execution as part of a stored procedure) incorrectly set
the Query_table_list::query_tables_own_last marker, which indicates
the last table directly used by the statement. As a result, the
temporary table used in the subquery was treated as indirectly used
(i.e. belonging to the prelocking list) and was not pre-opened by the
open_temporary_tables() call before statement execution, causing
ER_NO_SUCH_TABLE errors, since our code assumes that temporary tables
are correctly pre-opened before statement execution.
This problem became visible only in version 5.6 after patches related to
bug 11746602/27480 "EXTEND CREATE TEMPORARY TABLES PRIVILEGE TO ALLOW
TEMP TABLE OPERATIONS" since they have introduced pre-opening of temporary
tables for statements.
The incorrect setting of Query_table_list::query_tables_own_last
happened in the LEX::first_lists_tables_same() method, which is called
by the CREATE TABLE SELECT implementation as part of
LEX::unlink_first_table(), which temporarily excludes the table list
element for the table being created from the query table list before
handling the SELECT part.
LEX::first_lists_tables_same() tries to ensure that the global table
list of the statement starts with the first table list element from
the first statement select. To do this it moves that table list
element to the head of the global table list. If this table happens to
be the last directly-used table for the statement, the
query_tables_own_last marker points to it. Since this marker was not
updated when the table list element was moved, we ended up with all
tables except the first one falling after the marker, as if they were
not directly used by the statement (i.e. as if they belonged to the
prelocked tables list).
This fix changes the code of LEX::first_lists_tables_same() to update
the query_tables_own_last marker when it points to the table being
moved. In this case it is set to the table which precedes the table
being moved.
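A rough reproduction of the original failure (schema and names
illustrative):

  CREATE TEMPORARY TABLE tmp (a INT);
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  PREPARE stmt FROM
    'CREATE TABLE t1 AS
     SELECT * FROM (SELECT a FROM tmp) dt WHERE f1() = 1';
  EXECUTE stmt;  -- failed with ER_NO_SUCH_TABLE for tmp before the fix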
The fix for bug MDEV-5104 did not take into account that
for any call of setup_order the size of ref_array must
be big enough. This patch fixes this problem.
The guilty part of the test checks for performance degradation on
a query with numerous joins on an empty table. The test expects
the query to take less than 1 second, and fails if it is not so
(which can happen on very slow builders).
The solution is to add more JOINs to the query. On a fixed server,
it should not have any noticeable impact on the query execution,
while on the unfixed version the query would take several times
longer (e.g. 6.5 sec vs 1.5 sec). Thus, we can increase the margin
for the error, and make the test fail when the query takes longer
than 5 seconds.
1. The rows of a recursive CTE at some point may overflow
the HEAP temporary table containing them. At this point
the table is converted to a MyISAM temporary table and the
newly added rows are placed into this MyISAM table.
A bug in select_union_recursive::send_data prevented the server
from writing the row that caused the overflow into the temporary
table used for the result of the iteration steps. This could lead,
in particular, to a premature end of the iterations.
2. The method TABLE::insert_all_rows_into() that was used
to copy all rows of one temporary table into another
did not take into account that the destination temporary
table must be converted to a MyISAM table at some point.
This patch fixes this problem. It also renames the method to
TABLE::insert_all_rows_into_tmp_table() and adds an extra parameter
needed for the conversion.
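Both problems can be exercised by forcing an early conversion of the
temporary table (sizes and limits illustrative):

  SET max_recursive_iterations = 100000;
  SET tmp_table_size = 16384, max_heap_table_size = 16384;
  WITH RECURSIVE cte AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM cte WHERE n < 50000
  )
  SELECT COUNT(*) FROM cte;  -- must be 50000; before the fix the row
                             -- that triggered the HEAP-to-MyISAM
                             -- conversion could be lost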
Backport the fix to 5.5, because it fails there too
The patch fixes two test failures:
- on slow builders, sometimes a connection attempt which should
fail due to the exceeded number of thread_pool_max_threads
actually succeeds;
- on even slower builders, MTR sometimes cannot establish the
initial connection, and check-testcase fails prior to the
test start.
The problem with check-testcase was caused by connect-timeout=2,
which was set for all clients in the test config file. On slow
builders it might not be enough.
There is no way to override it for the pre-test check, so it needed
to be substantially increased or removed.
The other problem was caused by a race condition between sleeps
that the test performs in existing connections and the connect
timeout for the connection attempt which was expected to fail.
If sleeps finished before the connect-timeout was exceeded, it
would allow the connection to succeed.
To solve each problem without making the other one worse,
connect-timeout should be configured dynamically during the test.
Due to the nature of the test (all connections must be busy
at the moment when we need to change the timeout, and cannot execute
SET GLOBAL ...), it needs to be done independently from the server.
The solution:
- recognize 'connect_timeout' as a connection option in mysqltest's
"connect" command (see the sketch after this list);
- remove connect-timeout from the test configuration file;
- use the new connect_timeout option for those connections which
are expected to fail;
- re-arrange the test flow to allow running a huge SLEEP
without affecting the test execution time (because it would be
interrupted after the main test flow is finished).
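A hypothetical sketch of the new option in a .test file (the error
name and argument positions are illustrative):

  # expected to fail quickly once all pool threads are busy
  --error ER_CON_COUNT_ERROR
  connect (con_fail,localhost,root,,test,,,connect_timeout=2);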
The test is still subject to false negatives, e.g. if the connection
fails due to timeout rather than due to the exceeded number of
allowed threads, or if the connection on the extra port succeeds due
to a race condition and not because of the special logic for the extra
port. But those false negatives have always been possible there
on slow builders, they should not be critical because faster builders
should catch such failures if they appear.
Conflicts:
client/mysqltest.cc
mysql-test/r/pool_of_threads.result
mysql-test/t/pool_of_threads.test
* some of these tests run just fine with InnoDB:
-> s/have_xtradb/have_innodb/
* sys_var tests did basic tests for xtradb-only variables
-> remove them, they're useless anyway (sysvar_innodb does it better)
* multi_update had innodb specific tests
-> move to multi_update_innodb.test