Change the SSL methods from the TLSv1 methods, which go back to the initial
SSL implementation in MySQL in 2001, to the SSLv23 methods (according to the
openssl manpage: "A TLS/SSL connection established with these methods may
understand the SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols").
OpenSSL default ciphers are different if TLSv1.2 is enabled,
so tests need to take this into account.
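For example, a test that prints the negotiated cipher may now need to
normalize it, roughly like this (the cipher names are illustrative, not taken
from the actual patch):

  --replace_result DHE-RSA-AES256-GCM-SHA384 CIPHER DHE-RSA-AES256-SHA CIPHER
  SHOW STATUS LIKE 'Ssl_cipher';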
MDEV-6789 segfault in Item_func_from_unixtime::get_date on updating table with virtual columns
* prohibit VALUES in partitioning expression
* prohibit user and system variables in virtual column expressions (examples of both prohibitions below)
* fix Item_func_date_format to cache locale (for %M/%W to return the same as MONTHNAME/DAYNAME)
* fix Item_func_from_unixtime to cache time_zone directly, not THD (and not to crash)
* add tests for other functions that were incorrectly allowed in vcols, to see that they don't crash
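A rough illustration of the statement shapes that are now rejected (table,
column and variable names are made up; exact error messages are not shown):

  # user variable in a virtual column expression: now an error
  CREATE TABLE t1 (a INT, v INT AS (a + @x) VIRTUAL);
  # VALUES in a partitioning expression: now an error
  CREATE TABLE t2 (a INT) PARTITION BY HASH (VALUES(a)) PARTITIONS 2;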
A "field" could be either an Item_field or
(if loading into a view) an Item_direct_ref that references Item_field.
Also: when iterating fields, use fields of the TABLE_LIST (table or view),
not fields of a TABLE (actual underlying table - might have more columns).
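The affected statement shape, roughly (object and file names are illustrative):
a view that exposes only part of the underlying table, used as the LOAD DATA
target, so the field list seen by the loader is the view's, not the table's:

  CREATE TABLE t1 (a INT, b INT, c INT);
  CREATE VIEW v1 AS SELECT a, b FROM t1;
  LOAD DATA INFILE 'data.txt' INTO TABLE v1 (a, b);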
When parsing a field declaration, grab type information from LEX before it's overwritten
by further rules. Pass type information through the parser stack to the rule that needs it.
The test complains that the server failed to disappear upon shutdown /
wait_for_disconnect. Trying to solve the probable race condition by
adding a wait before restart.
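Something along the lines of the standard restart pattern, with an explicit
wait for the old process to be gone (a sketch only; the exact include files
and expect-file name depend on the test):

  --exec echo "wait" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
  --shutdown_server
  --source include/wait_until_disconnected.inc
  --exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
  --enable_reconnect
  --source include/wait_until_connected_again.inc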
The test case had a classic mistake: SET DEBUG_SYNC='now SIGNAL xxx'
followed immediately by SET DEBUG_SYNC='RESET'. This makes it
possible for the waiter to miss the signal, if it does not manage
to wake up prior to the RESET.
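Schematically (connection and signal names are illustrative), the racy
pattern and a safer variant:

  # racy: the waiter may not have consumed the signal before RESET clears it
  SET DEBUG_SYNC= 'now SIGNAL go';
  SET DEBUG_SYNC= 'RESET';

  # safer: make sure the waiting statement has finished before resetting
  SET DEBUG_SYNC= 'now SIGNAL go';
  connection waiter;
  reap;
  connection default;
  SET DEBUG_SYNC= 'RESET';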
The test cases had some --replace_result $USER USER. The problem is that the
value of $USER can be anything, depending on the name of the unix account that
runs the test suite. So random parts of the result can be erroneously
replaced, causing test failures.
Fix by making the replacements more specific, so they will match only the
intended stuff regardless of the value of $USER.
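For instance (the exact patterns here are illustrative), instead of replacing
every occurrence of the account name, anchor the replacement to its context:

  # too broad: $USER can match random substrings of the output
  --replace_result $USER USER
  # more specific: only the user@host form we actually want to normalize
  --replace_result $USER@localhost USER@localhost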
The test runs a query in one thread, then in another queries the processlist
and expects to find the first thread in the COM_SLEEP state. The problem is
that the thread signals completion to the client before changing to COM_SLEEP
state, so there is a window where the other thread can see the wrong state.
A previous attempt to fix this was ineffective: it set a DEBUG_SYNC to handle
proper waiting, but unfortunately that DEBUG_SYNC point ended up triggering
already at the end of SET DEBUG_SYNC=xxx, so the wait did not help.
Fix it properly now (hopefully) by ensuring that we wait for the DEBUG_SYNC
point to trigger at the end of the SELECT SLEEP(), not just at the end of
SET DEBUG_SYNC=xxx.
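Roughly, the intended pattern is the following (the sync point and signal
names here are illustrative, not necessarily the ones used in the test):

  connection con1;
  SET DEBUG_SYNC= 'dispatch_command_end SIGNAL query_done';
  --send SELECT SLEEP(0)
  connection default;
  # wait for the sync point at the end of the SELECT, not of the SET DEBUG_SYNC
  SET DEBUG_SYNC= 'now WAIT_FOR query_done';
  # only now is it safe to expect con1 to be in the COM_SLEEP state
  connection con1;
  reap;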
(Backport to 5.3)
(Attempt #2)
- Don't attempt to use BKA for materialized derived tables. The
table is neither filled nor fully opened yet, so an attempt to
call handler->multi_range_read_info() causes a crash.
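A hypothetical query shape of the kind affected, assuming BKA-based join
buffering is enabled (the settings and tables are illustrative, not the actual
test case):

  SET optimizer_switch= 'mrr=on,join_cache_bka=on';
  SET join_cache_level= 6;
  SELECT * FROM t1 JOIN (SELECT a, MAX(b) mx FROM t2 GROUP BY a) d ON d.a = t1.a;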
(Backport to 5.3)
(variant #2, with fixed coding style)
- Make Mrr_ordered_index_reader::resume_read() restore index position
only if it was saved before with Mrr_ordered_index_reader::interrupt_read().
- TABLE::create_key_part_by_field() should not set PART_KEY_FLAG in field->flags
= The reason is that it is used by the hash join code, which calls it to create a hash
table lookup structure. It doesn't create a real index.
= Another caller of the function is TABLE::add_tmp_key(). Made it set the flag itself.
- The differences in join_cache.result could also be observed before this patch: one
could put "FLUSH TABLES" before the queries and get exactly the same difference.
The test case waits for other threads to complete, but the wait is only 2
seconds. This is likely to sometimes be too little on our heavily loaded
buildbot VMs, which can easily stall for more than 2 seconds from time to time.
So let's try to increase the timeout (to about 40 seconds) and see if it
helps.
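In wait_condition.inc terms, something like this (the condition shown is just
an example, not the one in the test):

  --let $wait_timeout= 40
  --let $wait_condition= SELECT COUNT(*) = 0 FROM information_schema.processlist WHERE id <> CONNECTION_ID() AND command <> 'Sleep'
  --source include/wait_condition.inc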
The test case runs SHOW SLAVE HOSTS. The output of this is only stable after
all slaves have had time to register with the master; this happens
asynchronously.
The test was waiting for the slave with server_id=3 to appear in the output,
but it was missing a similar wait for server_id=2. Thus, if server_id=2 was
much slower to connect for some reason, it could be missing from the output,
causing the test to fail.
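The added wait mirrors the existing one for server_id=3, roughly along these
lines (the parameter values are illustrative):

  --let $show_statement= SHOW SLAVE HOSTS
  --let $field= Server_id
  --let $condition= = '2'
  --source include/wait_show_condition.inc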
I think I finally found the problem; I managed to reproduce it locally using a
sleep in the test case to simulate the particular race condition that causes
the test to fail often in Buildbot.
The test starts an ALTER TABLE that does repair by sort in one thread, then
another thread waits for the sort to be visible in SHOW PROCESSLIST and runs a
SHOW statement in parallel.
The problem happens when the sort manages to run to completion before the
other thread has the time to look at SHOW PROCESSLIST. In this case, the wait
times out because the state looked for has already passed.
Earlier I added some DEBUG_SYNC to prevent this race, but it turns out that
DEBUG_SYNC itself changes the state in the processlist. So when the debug sync
point was hit, the processlist was showing the wrong state, so the wait would
still time out.
Fixed now by looking for the processlist to contain either the "Repair by
sorting" state or the debug sync wait stage.
Also clean up previous attempts to fix it. Set the wait timeout back to
a reasonable 60 seconds, and simplify the DEBUG_SYNC operations to work closer
to how the original test case was intended.
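The wait now accepts either state, along these lines (the exact state strings
may differ from what the test uses):

  --let $wait_timeout= 60
  --let $wait_condition= SELECT COUNT(*) > 0 FROM information_schema.processlist WHERE state = 'Repair by sorting' OR state LIKE 'debug sync point%'
  --source include/wait_condition.inc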
1. Do not use NULL `info' field in processlist to select the thread of
interest. This can fail if the read of processlist ends up happening after
REAP succeeds, but before the `info' field is reset. Instead, select on the
CONNECTION_ID(), making sure we still scan the whole list to trigger the same
code as in the original test case.
2. Wait for the query to really complete before reading it in the
processlist. When REAP returns, it only means that the ack has been sent to
the client; the reset of the query stage happens a bit later in the code.
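Schematically (the connection names and the query are illustrative):

  connection con1;
  --let $con1_id= `SELECT CONNECTION_ID()`
  --send SELECT 1
  connection default;
  # ... the part of the test that scans the processlist ...
  connection con1;
  reap;
  connection default;
  # wait until con1 has really finished, not just acked the result to the client
  --let $wait_condition= SELECT COUNT(*) = 1 FROM information_schema.processlist WHERE id = $con1_id AND command = 'Sleep'
  --source include/wait_condition.inc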
Add missing REAP to the test.
A later test failed with strange incorrect values for COM_SELECT
in information_schema.global_status. Since global_status is updated
at the end of session activity, it seems appropriate to ensure that
all background connections have completed before accessing it.
(I checked that the test case still triggers the original bug after
the modification with REAP.)
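Something like the following, so that global_status is only read once the
helper connection is completely gone (the names are illustrative):

  connection con1;
  reap;
  connection default;
  disconnect con1;
  --let $count_sessions= 1
  --source include/wait_until_count_sessions.inc
  SELECT variable_value FROM information_schema.global_status
    WHERE variable_name = 'COM_SELECT';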
innodb_buffer_pool_pages_total depends on the OS page size, which is 64KB on
Power8 compared to 4KB on Intel. As we round allocations up to the page size,
we may get slightly more memory for the buffer pool.
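As a rough illustration (assuming the default 16KB InnoDB page size): rounding
an allocation up to a 64KB boundary can add up to ~63KB, i.e. up to three
extra 16KB buffer pool pages per allocation, whereas rounding up to a 4KB
boundary adds less than a quarter of a page.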
- test_if_skip_sort_order()/create_ref_for_key() may change table
access from EQ_REF(index1) to REF(index2).
- Doing so doesn't make much sense from an optimization POV, but since
they are doing it, they should update tab->read_record.unlock_row
accordingly.
Don't double-check privileges for a column in the GROUP BY that refers to
the same column in the SELECT clause. Privileges were already checked for the SELECT clause.
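For example (user, database and table names are made up), a statement like
this needs only the single column-level SELECT privilege:

  GRANT SELECT (a) ON db1.t1 TO 'u1'@'localhost';
  # as user u1: the GROUP BY reference to `a` must not trigger a second check
  SELECT a FROM db1.t1 GROUP BY a;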
Avoided exponential recursive calls of JOIN_CACHE::join_records() in the case
of non-nested outer joins.
A different solution is required to resolve this performance problem for
nested outer joins.
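By non-nested outer joins we mean, roughly, a shape like this (the tables are
illustrative), where all outer joins hang directly off the top-level join:

  SELECT *
  FROM t1
    LEFT JOIN t2 ON t2.a = t1.a
    LEFT JOIN t3 ON t3.b = t1.b
    LEFT JOIN t4 ON t4.c = t1.c;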
Don't restore the whole of thd->server_status after a routine invocation,
only restore SERVER_STATUS_CURSOR_EXISTS and SERVER_STATUS_LAST_ROW_SENT,
as --ps --embedded needs.
In particular, don't restore SERVER_STATUS_IN_TRANS.