This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- Some InnoDB options
etc.
Changes:
- Don't print out the value of system variables as one can't depend on
  them being constant.
- Don't set global variables to 'default' as the default may not be the
  same as what the test was started with if there was an additional
  option file. Instead, save the original value and restore it at the
  end of the test (see the sketch after this list).
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1 explicitly.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
make it easier to set and reset system variables.
- Added .opt files for tests that required specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
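As a minimal sketch of the save-and-restore pattern (variable and value chosen only for illustration, not from an actual test):

  # Save the value the server was started with instead of relying on DEFAULT.
  SET @save_table_open_cache= @@global.table_open_cache;
  SET GLOBAL table_open_cache= 400;
  # ... test body that depends on the value set above ...
  # Restore the saved value at the end of the test.
  SET GLOBAL table_open_cache= @save_table_open_cache;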
The problem was that sp_head::MULTI_RESULTS was not set correctly for an
ANALYZE statement with SELECT ... INTO variable within a stored procedure.
This is a follow-up fix for MDEV-7023.
Always set SERVER_MORE_RESULTS_EXIST when executing stored procedure
statements.
If a statement produces a result, the EOF packet needs this flag (the SP
ends with an OK packet). If a statement does not produce a result, the
affected-rows count is part of the final OK packet.
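For illustration, the kind of ANALYZE ... INTO statement described above looks roughly like this (hypothetical sketch; table t1 is assumed to exist):

  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v INT;
    # ANALYZE with SELECT ... INTO still returns the analysis result set,
    # so the procedure must be marked as producing multiple results.
    ANALYZE SELECT COUNT(*) INTO v FROM t1;
  END$$
  DELIMITER ;
  CALL p1();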
Make mysqltest use --ps-protocol more:
use prepared statements for everything that the server supports,
with the exception of CALL (for now).
Fix discovered test failures and bugs.
tests:
* PROCESSLIST shows Execute state, not Query
* SHOW STATUS increments status variables more than in text protocol
* multi-statements should be avoided (see tests with a wrong delimiter)
* performance_schema events have different names in --ps-protocol
* --enable_prepare_warnings (see the snippet below)
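For example, a test that needs warnings reported from the prepare step can wrap the statement like this (illustrative mysqltest snippet):

  --enable_prepare_warnings
  # a statement whose warnings may already be raised at prepare time
  # when run with --ps-protocol
  SELECT 1 + '1x';
  --disable_prepare_warnings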
mysqltest.cc:
* make sure run_query_stmt() doesn't crash if there's
no active connection (in wait_until_connected_again.inc)
* prepare all statements that server supports
protocol.h:
* Protocol_discard::send_result_set_metadata() should not send
anything to the client.
sql_acl.cc:
* extract the functionality of getting the user for SHOW GRANTS
from check_show_access(), so that mysql_test_show_grants() could
generate the correct column names in the prepare step
sql_class.cc:
* result->prepare() can fail, don't ignore its return value
* use correct number of decimals for EXPLAIN columns
sql_parse.cc:
* discard profiling for SHOW PROFILE. In text protocol it's done in
prepare_schema_table(), but in --ps it is called on prepare only,
so nothing was discarding profiling during execute.
* move the permission checking code for SHOW CREATE VIEW to
mysqld_show_create_get_fields(), so that it would be called during
prepare step too.
* only set sel_result when it was created here and needs to be
destroyed in the same block. Avoid destroying lex->result.
* use the correct number of tables in check_show_access(). Saying
"as many as possible" doesn't work when first_not_own_table isn't
set yet.
sql_prepare.cc:
* use correct user name for SHOW GRANTS columns
* don't ignore verbose flag for SHOW SLAVE STATUS
* support preparing REVOKE ALL and ROLLBACK TO SAVEPOINT
* don't ignore errors from thd->prepare_explain_fields()
* use select_send result for sending ANALYZE and EXPLAIN, but don't
overwrite lex->result, because it might be needed to issue execute-time
errors (select_dumpvar - too many rows)
sql_show.cc:
* check grants for SHOW CREATE VIEW here, not in mysql_execute_command
sql_view.cc:
* use the correct function to check privileges. Old code was doing
  check_access() for thd->security_ctx, which is the invoker's sctx,
  not the definer's sctx. Hide various view-related errors from the invoker.
sql_yacc.yy:
* initialize lex->select_lex for LOAD, otherwise it'll contain garbage
  data that happens to fail tests with views in --ps (but not otherwise).
1. Always drop the merged_for_insert flag on cleanup (there could be errors
   which prevent a TABLE from being assigned).
2. Make the cleanup of the select parts that were touched more precise.
The problem was that join_columns creation was not finished due to an error
about a column not found in USING, but the next execution tried to use the
join_columns lists.
The solution is to clean up the lists on error. This can consume memory in
the statement MEM_ROOT, but it only happens on error, and the error will
either be fixed or the statement/procedure removed/altered.
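A hypothetical sketch of the failing pattern (table, column, and procedure names invented for illustration):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    # The column named in USING does not exist, so building join_columns
    # fails part-way through on the first call; the second call must not
    # reuse the half-built lists.
    SELECT * FROM t1 JOIN t2 USING (no_such_col);
  END$$
  DELIMITER ;
  CALL p1();   # fails: unknown column in USING
  CALL p1();   # must fail cleanly again rather than reuse stale join_columns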
Problem:
The problem was most likely introduced by a fix for MDEV-11597
(commit 5f0c31f928) which removed
the assignment "killed= KILL_BAD_DATA" from THD::raise_condition().
Before MDEV-11597, sp_head::execute() tested thd->killed after
looping through the SP instructions and exited with an error
if thd->killed is set. After MDEV-11597, sp_head::execute()
stopped noticing errors and set the OK status on top of the
error status, which crashed on an assert.
Fix:
Making sp_cursor::fetch() return -1 if server_side_cursor->fetch(1)
left an error in the diagnostics area. This makes the statement
"err_status= i->execute(thd, &ip)" in sp_head::execute() set the
error code and correctly break the SP instruction loop and
return on error without setting the OK status.
MDEV-15609 engines/funcs.crash_manytables_number crashes with error 24
(too many open files)
MDEV-10286 Adjustment of table_open_cache according to system limits
does not work when open-files-limit option is provided
Fixed by adjusting tc_size downwards if there are not enough file
descriptors available (see the example queries after the list below).
Other changes:
- Ensure that there are 30 (was 10) extra file descriptors for other usage
- Decrease TABLE_OPEN_CACHE_MIN to 200 as it's better to have a smaller
table cache than getting error 24
- Increase minimum of max_connections and table_open_cache from 1 to 10
as 1 is not usable for any real application, only for testing.
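A quick way to observe the effective limits on a running server (illustrative queries, not part of the patch):

  # With a low --open-files-limit the server now shrinks table_open_cache
  # (but not below the new minimum of 200) instead of later failing with
  # error 24 (too many open files).
  SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
  SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
  SHOW GLOBAL VARIABLES LIKE 'max_connections';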
The counter for select numbering is now stored with the statement (before
it was global). So it now always has an accurate value which does not
depend on statement prepare being interrupted by errors such as a missing
table in a view definition.
The bug was caused by a defect in the patch for bug 11081.
That patch was actually a port of the fix for this bug from the MySQL
code line. Later a correction of this fix was added to the MySQL code.
Here's the comment this correction was provided with:
Bug#16499751: Opening cursor on SELECT in stored procedure causes segfault
This is a regression from the fix of bug#14740889.
The fix started using another set of expressions as the source for
the temporary table used for the materialized cursor. However,
JOIN::make_tmp_tables_info() calls setup_copy_fields() which creates
an Item_copy wrapper object on top of the function being selected.
The Item_copy objects were not properly handled by create_tmp_table -
they were simply ignored. This patch creates temporary table fields
based on the underlying item of the Item_copy objects.
The test case for bug 13346 was taken from mdev-13380.
Different fix. Don't allow Item_func_sp to be evaluated unless
all tables are prelocked.
Extend the test case to make sure Item_func_sp::val_str is called
(the table must have at least one row for that).
The idea of this fix was taken from the patch by Roy Lyseng
for mysql-5.6 Bug#14740889: "Wrong result for aggregate
functions when executing query through cursor".
Here's Roy's comment for his patch:
"
The problem was that a grouped query did not behave properly when
executed using a cursor. On further inspection, the query used one
intermediate temporary table for the grouping.
Then, Select_materialize::send_result_set_metadata created a temporary
table for storing the query result. Notice that get_unit_column_types()
is used to retrieve column meta-data for the query. The items contained
in this list are later modified so that their result_field points to
the row buffer of the materialized temporary table for the cursor.
But prior to this, these result_field objects have been prepared for
use in the grouping operation, by JOIN::make_tmp_tables_info(), hence
the grouping operation operates on wrong column buffers.
The problem is solved by using the list JOIN::fields when copying data
to the materialized table. This list is set by JOIN::make_tmp_tables_info()
and points to the columns of the last intermediate temporary table of
the executed query. For a UNION, it points to the temporary table
that is the result of the UNION query.
Notice that we have to assign a value to ::fields early in JOIN::optimize()
in case the optimization shortcuts due to a const plan detection.
A more optimal solution might be to avoid creating the final temporary
table when the query result is already stored in a temporary table.
"
The patch does not contain a test case, but the description of the
problem corresponds exactly to what could be observed in the test
case for mdev-11081.
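For illustration, the affected pattern looks roughly like this (hypothetical sketch; a table t1 with an integer column a is assumed):

  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v BIGINT;
    # A grouped/aggregate query executed through a cursor: before the fix
    # the cursor's temporary table could be filled from the wrong column
    # buffers, giving wrong aggregate values.
    DECLARE c CURSOR FOR SELECT COUNT(*) FROM t1 GROUP BY a;
    OPEN c;
    FETCH c INTO v;
    CLOSE c;
  END$$
  DELIMITER ;
  CALL p1();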
mysqld maintains in THD a list of TABLE objects for all temporary
tables created within a session, where each table is
represented by a TABLE object.
A query referencing a particular temporary table more
than once, however, failed with an ER_CANT_REOPEN_TABLE error
because a TABLE_SHARE was allocated together with the TABLE,
so temporary tables always had only one TABLE per TABLE_SHARE.
This patch lifts this restriction by separating TABLE and
TABLE_SHARE objects and storing TABLE_SHAREs for temporary
tables in a list in THD, and TABLEs in a list within their
respective TABLE_SHAREs.
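For example, this previously failed and now works (minimal sketch):

  CREATE TEMPORARY TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  # Referencing the same temporary table twice in one query used to fail
  # with ER_CANT_REOPEN_TABLE; with one TABLE per use it now works.
  SELECT * FROM t1 AS x JOIN t1 AS y ON x.a = y.a;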
The problem was with Materialized_cursor and the temporary table it uses.
The temporary table's fields had Field::orig_table pointing to the tables
that were used in the query that produced the data for the cursor.
When a "FETCH INTO sp_var" statement is executed, those original tables
are already closed. However, copying from the Materialized_cursor's table
into the SP variable may cause field_conv() to be invoked, which calls
field->type(), which may access field->orig_table (for certain field types).
Fixed by setting Materialized_cursor->table->field[i]->orig_table to point
to Materialized_cursor->table (this is how it is done for regular base
tables).
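The affected pattern, roughly (hypothetical sketch; a CHAR column is used here as an example of a field type whose type() may consult orig_table):

  CREATE TABLE t1 (a CHAR(10));
  INSERT INTO t1 VALUES ('abc');
  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v CHAR(10);
    DECLARE c CURSOR FOR SELECT a FROM t1;
    OPEN c;
    # By the time FETCH copies from the materialized cursor's table into
    # the SP variable, the original t1 instance is already closed, so the
    # cursor table's fields must not point back to it via orig_table.
    FETCH c INTO v;
    CLOSE c;
  END$$
  DELIMITER ;
  CALL p1();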
length/dec/charset are still in LEX, because they're also used
for CAST and dynamic columns.
also (illustrated below):
1. fix "MDEV-7041 COLLATION(CAST('a' AS CHAR BINARY)) returns a wrong result"
2. allow BINARY modifier in stored function RETURN clause
3. allow "COLLATION without CHARSET" in SP/SF (parameters, RETURN, DECLARE)
4. print correct variable name in error messages for stored routine parameters
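Illustrative statements for items 1-3 (a sketch, assuming the default latin1 character set; with binary logging enabled the CREATE FUNCTION may also need a DETERMINISTIC clause):

  # 1. Reports the binary collation of the cast result (latin1_bin with
  #    the default latin1 character set).
  SELECT COLLATION(CAST('a' AS CHAR BINARY));

  # 2. BINARY modifier accepted in a stored function RETURN clause.
  CREATE FUNCTION f1() RETURNS CHAR(10) BINARY RETURN 'x';

  # 3. COLLATE without an explicit CHARACTER SET in an SP parameter.
  CREATE PROCEDURE p1(IN p VARCHAR(10) COLLATE latin1_bin) SELECT p;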