MDEV-20704 changed the rules for adding the (HA_BINARY_PACK_KEY |
HA_VAR_LENGTH_KEY) flags. FRMs created before that fix have these
flags for DOUBLE indexes. After that fix, when ALTER sees such an old
FRM it thinks it cannot do an instant alter because
compare_keys_but_name() fails: it compares the flags against the tmp
table created by ALTER.
The MDEV-20704 fix was actually not about the DOUBLE type but about
FIELDFLAG_BLOB, which affected DOUBLE. So there is no direct knowledge
that any other types were not affected as well.
The proposed fix makes CHECK TABLE check whether the FRM has the
(HA_BINARY_PACK_KEY | HA_VAR_LENGTH_KEY) flags and was created prior
to MDEV-20704, and if so it reports "needs upgrade". When mysqlcheck
and mysql_upgrade see this status they issue ALTER TABLE FORCE and
upgrade the table to the current server version.
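A minimal sketch of the resulting flow, assuming a table t1 whose FRM
predates MDEV-20704 (the table name is hypothetical):

  CHECK TABLE t1 FOR UPGRADE;  -- reports that the table needs upgrade
  ALTER TABLE t1 FORCE;        -- what mysqlcheck/mysql_upgrade then issue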
The handler for an existing partition was already index-inited at the
beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by
calling its ha_write_row() (the handler is the one of the new
partition's storage engine). From there we go through the conditions
below:

  if (this->inited == RND)
    table->clone_handler_for_update();
  handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the point of the this->inited check: this is
the new partition's handler, and it is not inited. So we assign
table->file, which is ha_partition and is not actually known to be
inited or not. The logic assumes (this == table->file); otherwise we
are outside the logic for using update_handler. This patch adds a
DBUG_ASSERT for that.
Second, we call check_duplicate_long_entries() for table->file, which
calls ha_partition::index_init(), which in turn calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions(), and we fail on an assertion.
The fix implies that we don't need check_duplicate_long_entries() per
partition, as we have already done check_duplicate_long_entries() for
ha_partition. For REORGANIZE PARTITION this means the existing rows
were already checked by previous INSERT/UPDATE commands, so there is
no need to check them again (see the NOTE in handler::ha_write_row()).
The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
as it was already called for ha_partition. Besides, the per-partition
duplicate check is not really usable.
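A hedged reproducer sketch for the failing path (table and partition
names are hypothetical), assuming that UNIQUE ... USING HASH creates a
long unique key:

  CREATE TABLE t1 (a INT, UNIQUE KEY (a) USING HASH)
    PARTITION BY RANGE (a)
      (PARTITION p0 VALUES LESS THAN (10),
       PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t1 VALUES (1), (11);
  ALTER TABLE t1 REORGANIZE PARTITION p1 INTO
    (PARTITION p1 VALUES LESS THAN (20),
     PARTITION p2 VALUES LESS THAN MAXVALUE);

The REORGANIZE PARTITION copies existing rows into the new partitions
via their ha_write_row(), which is the code path discussed above.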
This commit restores the defaults and functionality regarding binlogs
to the way they were prior to MDEV-27524. The mariabackup utility no
longer saves binlog files as part of a backup without the --galera-info
option. However, since we use --galera-info during SST, the behavior
of mariabackup changes and, in combination with GTID support enabled,
mariabackup transfers one (the most recent) binlog file obtained after
FLUSH BINARY LOGS. In other cases, binlogs are not transferred during
SST in mariabackup mode. As for SST in rsync mode, it works the same
way as before MDEV-27524 - by default it transfers the last binlog
file.
The --sst-max-binlogs option for mariabackup and the sst_max_binlogs
parameter in the [sst] / server sections are no longer supported for
SST via mariabackup.
Let's simplify the test.
The update_time is stored in the table metadata (dict_table_t);
it has nothing to do with buffer pool page eviction or replacement.
Starting with commit 36cdd5c3cd
there is an ASAN stack-buffer-overflow error, because we append a NULL
terminator beyond the length of the allocated memory.
Reviewed by: Monty and Nayuta Yanagisawa
- Changing a partition does undo logging of all rows unnecessarily,
and it invokes bulk insert during DDL. Better to avoid logging undo
records during the copy of the partition.
Bring the 5 warnings of select random_bytes(cast('x' as unsigned)+1);
back down to two: one from Item_func_random_bytes::fix_length_and_dec()
and one from Item_func_random_bytes::val_str().
The warnings come from args[0]->val_int().
Setting max_length to a negative value in
Item_func_random_bytes::fix_length_and_dec() underflowed, resulting in
a debug optimizer assertion.
Also set the maximum to 1024 rather than MAX_BLOB_WIDTH, because we
aren't going to return more than that.
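The reproducer from the report; after the fix it generates the two
warnings described above:

  select random_bytes(cast('x' as unsigned)+1);
  show warnings;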
it's not "non-deterministic", it's completely defined
by @@rand_seed1 and @@rand_seed2. And as a session function it needs
to be re-fixed at the beginning of every statement.
Test fixes:
Since the fix for CONC-603 (wrong error handling in TLS read/write),
in case of a read/write error the client doesn't always return error
2013 (server has gone away), so in addition we need to check for
errors 2026 (TLS/SSL error) and 5014 (write error).
In 3b662c6ebd, it was discovered that the
values of the 'wsrep_is_on' and 'wsrep_cannot_replicate_tz' variables
need to be overridden for embedded builds in order for the test to
pass.
However, there are other build configurations where these variables also
have NULL values. The mariadb-tzinfo-to-sql script (implemented in
sql/tztime.cc) can be slightly modified to set its 'wsrep_is_on' and
'wsrep_cannot_replicate_tz' variables more predictably in all such cases,
thus allowing the mysql_tzinfo_to_sql_symlink.test test to pass without
any special-casing for particular build types.
See comments:
- 3b662c6ebd (r78994411)
- https://jira.mariadb.org/browse/MDEV-28782?focusedCommentId=230038&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-230038
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services,
Inc.
Starting with commit da094188f6 (MDEV-24393),
MariaDB will no longer acquire advisory file locks on InnoDB data
files by default, because it would create a large number of
entries in Linux /proc/locks.
The motivation for acquiring the file locks is to prevent accidental
concurrent startup of multiple server processes on the same data files.
Such a mistake still turns out to be relatively common, based on
corruption bug reports from the community.
To prevent corruption due to concurrent startup attempts, the
Aria storage engine would unconditionally acquire an advisory lock
on one of its log files.
Solution: InnoDB will always lock its system tablespace files.
(Ever since commit 685d958e38
the InnoDB log file will not necessarily be open while the
server is running, because it can be accessed via memory-mapped I/O.)
If more protection is desired, then the option --external-locking
can be used.
The mandatory advisory lock also fixes intermittent failures of
some crash recovery tests. It turns out that when the mtr test harness
kills and restarts the server, it will not actually ensure that the
old process has terminated before starting the new one.
This bug could cause a crash of the server when executing queries containing
ANY/ALL predicands with redundant subqueries in GROUP BY clauses.
These subqueries are eliminated by remove_redundant_subquery_clause()
together with elimination of GROUP BY list containing these subqueries.
However, the references to the elements of the GROUP BY remained in
the JOIN::all_fields list of the right operand of the ALL/ANY
predicand.
Later these references confused make_aggr_tables_info() when forming
proper execution structures after ALL/ANY predicands had been replaced
with expressions containing MIN/MAX set functions.
The patch just removes these references from JOIN::all_fields list used
by the subquery of the ALL/ANY predicand when its GROUP BY clause is
eliminated.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
New Feature:
========
This feature adds a safe replacement to the
MASTER_USE_GTID=Current_Pos option for CHANGE MASTER TO as
MASTER_DEMOTE_TO_SLAVE=<bool>. The use case of Current_Pos is to
transition a master to become a slave; however, it can break the
replication state if the slave executes local transactions, due to
actively updating gtid_current_pos with gtid_binlog_pos and
gtid_slave_pos.
MASTER_DEMOTE_TO_SLAVE changes this use case by forcing users to set
Using_Gtid=Slave_Pos and merging gtid_binlog_pos into gtid_slave_pos
once at CHANGE MASTER TO time. Note that if gtid_slave_pos is more
recent than gtid_binlog_pos (as in the case of chain replication),
the replication state should be preserved.
Additionally, deprecate the `Current_Pos` option of MASTER_USE_GTID
to suggest the safe alternative option MASTER_DEMOTE_TO_SLAVE=TRUE.
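A hedged sketch of the intended demotion step on the old master (the
host name is a placeholder):

  CHANGE MASTER TO MASTER_HOST='new_primary', MASTER_DEMOTE_TO_SLAVE=1;

This forces Using_Gtid=Slave_Pos and merges gtid_binlog_pos into
gtid_slave_pos once, at CHANGE MASTER TO time.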
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
This commit makes replicas crash-safe by default by changing the
Using_Gtid value to be Slave_Pos on a fresh slave start and after
RESET SLAVE is issued. If the primary server does not support GTIDs
(i.e., version < 10), the replica will fall back to Using_Gtid=No on
slave start and after RESET SLAVE.
The following additional informational messages/warnings are added:
1. When Using_Gtid is automatically changed. That is, if RESET
SLAVE reverts Using_Gtid back to Slave_Pos, or Using_Gtid is
inferred to be No from a CHANGE MASTER TO given with log coordinates
without MASTER_USE_GTID.
2. If options are ignored in CHANGE MASTER TO. If CHANGE MASTER TO
is given with log coordinates, yet also specifies
MASTER_USE_GTID=Slave_Pos, a warning message is given that the log
coordinate options are ignored.
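For example (case 2; the file name and position are placeholders):

  CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000001',
                   MASTER_LOG_POS=4, MASTER_USE_GTID=Slave_Pos;
  -- warns that the log coordinate options are ignored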
Additionally, an MTR macro has been added for RESET SLAVE,
reset_slave.inc, which provides modes/options for resetting a slave
in log coordinate or gtid modes. When in log coordinates mode, the
macro will execute CHANGE MASTER TO MASTER_USE_GTID=No after the
RESET SLAVE command. When in GTID mode, an extra parameter,
reset_slave_keep_gtid_state, can be set to reset or preserve the
value of gtid_slave_pos.
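In SQL terms, the log coordinates mode of the macro boils down to (a
sketch):

  RESET SLAVE;
  CHANGE MASTER TO MASTER_USE_GTID=No;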
Reviewed By:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
Part #2: Extend heuristic pruning to use multiple tables as the
"Model tables".
Before the patch, heuristic pruning used only one "Model table":
the table which had the best cost AND the best record_count became the
"Model table". After that, if a table's cost and record_count were
both worse than those of the Model table, the table would be pruned
away.
This didn't work well when the first table (the optimizer sorts them
by record_count) had a low record_count but a relatively high cost:
nothing could be pruned afterwards.
The patch adds two additional "Model tables": one with the least
cost and the other with the least record_count.
(In both cases, a table can be pruned away if BOTH its cost and
record_count are worse than those of a Model table.)
The new pruning is active when the number of tables to consider for
the prefix is higher than @@optimizer_extra_pruning_depth.
One can see the new pruning in the Optimizer Trace as
- "pruned_by_heuristic":"min_record_count", or
- "pruned_by_heuristic":"min_read_time".
Old heuristic pruning shows as "pruned_by_heuristic":1.
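A hedged way to observe the new pruning (the depth value here is
chosen for illustration only):

  SET optimizer_trace='enabled=on';
  SET optimizer_extra_pruning_depth=3;
  -- run a join over more than 3 tables, then:
  SELECT trace FROM information_schema.optimizer_trace;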
SELECT_LEX::first_select()->join is NULL for degenerate derived tables
which are known to have just one row and so were already materialized
by the optimizer.
This commit adds a check for this.
Elimination of unnecessary tables from SQL queries is already present
in MariaDB. But it only works for regular tables and not for derived ones.
Imagine we have a view:
CREATE VIEW v1 AS SELECT a, b, max(c) AS maxc FROM t1 GROUP BY a, b
Due to "GROUP BY a, b" the values of combinations {a, b} are unique,
and this fact can be treated as like derived table "v1" has a unique key
on fields {a, b}.
Suppose we have a SQL query:
SELECT t2.* FROM t2 LEFT JOIN v1 ON t2.a=v1.a and t2.b=v1.b
1. Since {v1.a, v1.b} is unique and both these fields are bound to t2,
"v1" is functionally dependent on t2.
This means every record of "t2" will be either joined with
a single record of "v1" or NULL-complemented.
2. No fields of "v1" are present in the SELECT list.
These two facts allow the server to completely exclude (eliminate)
the derived table "v1" from the query.
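Under these two conditions the server can execute the query as if it
were simply:

  SELECT t2.* FROM t2;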
MDEV-28073 Slow query performance in MariaDB when using many tables
The idea is to prefer and chain EQ_REF tables (tables that use a
unique key to find a row) when searching for the best table
combination. This significantly reduces the number of row combinations
that have to be examined. This optimization is enabled by setting
optimizer_prune_level=2 (which is now the default).
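For comparison with the old behavior one can switch the level per
session (a sketch):

  SET SESSION optimizer_prune_level=1;  -- pruning as before this patch
  SET SESSION optimizer_prune_level=2;  -- EQ_REF chaining (new default)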
Implementation:
- optimizer_prune_level has a new level, 2, which enables EQ_REF
optimization in addition to the pruning done by level 1.
Level 2 is now default.
- Added JOIN::eq_ref_tables, which contains bits of the tables that
could potentially use EQ_REF access in the query. This is calculated
in sort_and_filter_keyuse().
Under optimizer_prune_level=2:
- When the greedy_optimizer notices that the preceding table was an
EQ_REF table, it tries to add an EQ_REF table next. If such a
table exists, only this one will be considered at this level.
We also collect all EQ_REF tables chained by the next levels, and
these are ignored on the starting level as we have already examined
them. If no EQ_REF table exists, we continue as normal.
This optimization speeds up the greedy_optimizer combination test by
~25%.
Other things:
- I ported the MySQL 5.7 changes to greedy_optimizer.test into MariaDB
to be able to ensure we can handle all cases that MySQL can.
- I have run all tests with --mysqld=--optimizer_prune_level=1 to
verify that there were no test changes.
MDEV-28073 Slow query performance in MariaDB when using many tables
The faster we can find a good query plan, the more options we have for
finding and pruning (ignoring) bad plans.
This patch adds sorting of plans to best_extension_by_limited_search().
The plans, from best_access_path() are sorted according to the numbers
of found rows. This allows us to faster find 'good tables' and we are
thus able to eliminate 'bad plans' faster.
One side effect of this patch is that if two tables have equal cost,
the table that was used earlier in the query is preferred.
This allows users to improve plans by reordering eq_ref tables into
the order they would like them to be used.
Result changes caused by the patch:
- Traces are different as now we print the cost for using tables before
we start considering them in the plan.
- The table order is changed for some plans. In most cases this is
because the plans are equal and the tables are then sorted according
to their usage in the original query.
- A few plans were changed as the optimizer was able to find a better
plan (one that was pruned by the original code).
Other things:
- Added a new statistic variable, "optimizer_join_prefixes_check_calls",
which counts the number of calls to best_extension_by_limited_search().
This can be used to check the pruning efficiency in greedy_search()
(see the example after this list).
- Added variable "JOIN_TAB::embedded_dependent" to be able to handle
XX IN (SELECT..) in the greedy_optimizer. The idea is that we
should prune a table if any of the tables in embedded_dependent is
not yet read.
- When using many tables in a query, there will be some additional
memory usage, as we need to pre-allocate an array of
table_count*table_count POSITION objects (POSITION is 312 bytes for
now) to hold the pre-calculated best_access_path() information. This
memory usage is offset by the expected performance improvement when
using many tables in a query.
- Removed the code from an earlier patch to keep the table order in
join->best_ref in the original order. This is not needed anymore as we
are now sorting the tables for each best_extension_by_limited_search()
call.
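The new counter can be inspected like any other status variable (a
sketch):

  SHOW STATUS LIKE 'optimizer_join_prefixes_check_calls';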
prepare_inplace_add_virtual(): Over-estimate the size of the arrays
by not subtracting table->s->virtual_fields (which may refer to
stored, not virtual generated columns). InnoDB only distinguishes
virtual columns.
... on semisync slave
To provide semisync master crash-recovery, transactions with the same
server id were made to be accepted for execution on the semisync slave
under the strict gtid mode (see MDEV-27760).
However, that caused an out-of-order error for a master's own
transaction when it returned to the master server in a circular setup.
The error was fair in the sense of the gtid strict mode rule, as
indeed under the conditions of a circular setup the replicated
transaction already exists in the local binlog.
This is fixed by the commit so that the gtid strict mode semisync
slave ignores those gtids that exist in the slave's binlog, which
effectively restores the default same-server-id ignore policy.
At the same time the fix complies with the MDEV-21117 semisync slave
recovery, which accepts same-server-id transactions that do not exist
in the local binlog.
- Importing a tablespace re-evicts and reloads the table definition.
During that time, InnoDB has to load the table even though a secondary
FTS index is marked as corrupted.
- InnoDB should ignore a single word followed by an apostrophe while
tokenising the document. For example, if the input string is O'brien,
then right now InnoDB separates it into the two tokens O and brien.
After this patch, InnoDB ignores the token 'O' and considers only
'brien'.
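An illustration of the intended behavior (the table name is
hypothetical):

  CREATE TABLE t1 (txt TEXT, FULLTEXT KEY (txt)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES ('O''brien');
  -- 'brien' is indexed; the leading O is no longer a token of its own
  SELECT * FROM t1 WHERE MATCH(txt) AGAINST ('brien');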
The hang may be caused by a 1pc branch that was fixed by MDEV-26031
in 10.6 and up. That commit did not look relevant to 10.5 and below,
so it was not pushed to the lower branches.
To possibly tackle the reported issue, MDEV-26031 is backported now,
with a test that, unlike in 10.6, does not expose the former bug in
10.5. It is only needed for checking the refined logic inside
MYSQL_BIN_LOG::write_transaction_to_binlog.
The latter is made to do away with xid-unlogging (which is suspected
to have been at fault) for xid-less transactions.
- InnoDB bulk insert fails to use encryption buffer for encrypting
the temporary log file. Declare the m_crypt_block, m_crypt_pfx in
row_merge_bulk_t to be used for encrypting the temporary file.
If a transaction does a bulk insert and disables foreign_key_checks,
then InnoDB fails with an assertion failure. InnoDB has a strict
assertion in innodb_prepare_commit_versioned() that check_foreigns
and unique_secondary_check should be enabled if the transaction does
a bulk insert.
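A hedged sketch of the scenario described above (table names are
hypothetical; the assumption is that InnoDB takes the bulk insert path
for a transaction's insert into an empty table with both checks
disabled):

  SET unique_checks=0, foreign_key_checks=0;
  BEGIN;
  INSERT INTO t1 SELECT * FROM t2;  -- bulk insert into an empty table
  COMMIT;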
Problem:
=======
This patch addresses two issues:
1. An incident event can be incorrectly reported for transactions
which are rolled back successfully. That is, an incident event
should only be generated for failed “non-transactional transactions”
(i.e., those which modify non-transactional tables) because they
cannot be rolled back.
2. When the MariaDB slave stops with an error upon receiving the
incident event, there is no description of what led to it - neither
in the event nor in the master's error log.
Solution:
========
Before reporting an incident event for a transaction, first validate
that it is “non-transactional” (i.e. cannot be safely rolled back).
To determine if a transaction is non-transactional,
lex->stmt_accessed_table(LEX::STMT_WRITES_NON_TRANS_TABLE)
is used because it is set previously in
THD::decide_logging_format().
Additionally, when an incident event is written, write an error
message to the server’s error log to indicate the underlying issue.
Reviewed by:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
It will go into 10.11.
Author: Luis Eduardo Oliveira Lizardo <108760288+mariadb-LuisLizardo@users.noreply.github.com>
Date: Mon Jul 18 17:48:01 2022 +0200
MDEV-28926 Add time spent on query optimizer to JSON ANALYZE (#2193)
* Add query optimizer timer to ANALYZE FORMAT=JSON
* Adapt tests and results
* Change logic to always close the writer after printing query blocks
dict_load_foreigns(): Use a correctly sized buffer for the maximum-length
SYS_FOREIGN.ID. In case of overflow, do not crash the server but instead
return DB_CORRUPTION.
This commit is a fixup for MDEV-28762.
Analysis: Some recursive JSON functions don't check for stack overrun.
Fix: Add check_stack_overrun(). The last argument is NULL because it
is not used.
Passing $opt_parallel as $childs is wrong: a child can be killed
before it connects, and then you will never decrement $childs for it.
Another problem (and that is the cause of this bug): a child can be
killed and never close the server socket. This can happen e.g. after
an unmaskable KILL signal. In such a case the socket is closed by
reaping the child, but that never happens inside the socket-reading
loop in run_test_server().
The proper design is a waitless reap of children inside the socket
loop, and when there are no more children we finish the socket loop.
Since there is a Windows variation where we don't control the children
via waitpid(), all the clients must normally close the socket, and
only this can finish the socket loop. For the Unix variation we treat
that case as "all children closed the socket but not all have died
yet", and for that we do a final waiting waitpid() (as was done before
the patch as well).
To be more complete, we now handle 3 end-of-game scenarios in Unix:
1. all children closed socket, all children died: everything is
handled by the socket loop;
2. all children closed socket, not all yet died: we wait for alive
children to die after exiting the socket loop;
3. not all children closed socket, all children died: everything is
handled by the socket loop.
For Windows there is only one end-of-game scenario:
all children close the socket.
66832e3a introduced a change that prints core dumps in a very
detailed format. That is not user-friendly at all, but it serves as a
measure for debugging hard-to-reproduce bugs.
The proper way to implement this:
1. it must be controlled by a command-line option and an environment
variable;
2. detailed traces must be the default for buildbots only; for user
invocations normal stack traces should be printed.
Options for control are: MTR_PRINT_CORE and --print-core that accept
the following values:
no Don't print core
short Print stack trace of failed thread
medium Print stack traces of all threads
detailed Print all stack traces with debug context
custom:<code> Use debugger commands <code> to print stack trace
Default setting is: short (see env_or_default() call in pre_setup())
For the environment variable, wrong values are silently ignored (we
fall back to the default setting, see env_or_default()).
The command-line option --print-core (or -C) overrides the
environment variable. Its default value is 'short' if not specified
explicitly (same env_or_default() call in pre_setup()). Explicit
values are checked for validity.
The --print-method option specifies which debugger we use to print
cores. For Windows there is only one choice: cdb. For Unix the values
are: gdb, dbx, lldb, auto. The default value is: auto.
With 'auto' we try all possible debuggers until one succeeds.
setup_boot_args(), setup_client_args() and setup_args() traverse
data structures on each invocation. Even if performance is not
important for a perl script (though it definitely saves some CO2),
this nonetheless provokes some code-reading questions. Reading and
debugging such code is not convenient.
The better way is to prepare all the data in advance in an easily
readable form as well as do the validation step before any further
processing.
Use mtr_report() instead of die() like the other code does.
TODO: do_args() does even more data-processing magic. Prepare that
data according to the above strategy in advance, in pre_setup(), if
possible.
This avoids LF->CRLF conversion by the C runtime, which historically
has been rather buggy (see MDEV-9409).
Disabling text mode also fixes --binary-mode in the command line
client to work the same on Windows as it does elsewhere.
The user-visible effect is that some text files, e.g. the output of
mysqldump or mysqlbinlog, will not have CRLF end-of-lines, but LF.
That should be acceptable, as even Notepad can read Unix EOLs since
2018 (on older Windows, Wordpad can).
Leave the error log in text (CRLF) mode for now, for the sake of old
Windows.
Not a SPIDER issue - it happens to INSERT DELAYED.
Field::make_new_field() doesn't copy the LONG_UNIQUE_HASH_FIELD flag
to the new field, although Delayed_insert::get_local_table() copies
the field->vcol_info for this field. As a result parse_vcol_defs()
doesn't create the expression for that column, so
field->vcol_info->expr is NULL, which leads to a crash.
Backported the fix for this from 10.5 - the flag is now added in
Delayed_insert::get_local_table().
Another problem with the USING HASH key is that parse_vcol_defs()
modifies the table->keys content. Then the same parse_vcol_defs()
is called on the table copy whose keys were already modified.
Backported the fix for that from 10.5 as well - key copying added
to Delayed_insert::get_local_table().
Finally, the created copy has to clear its expr_arena, as this table
is not in the thd->open_tables list, so it won't be cleared
automatically.
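A hedged sketch of the affected path (the table name is hypothetical):
INSERT DELAYED into a table with a long unique (USING HASH) key goes
through Delayed_insert::get_local_table():

  CREATE TABLE t1 (a TEXT, UNIQUE KEY (a) USING HASH) ENGINE=MyISAM;
  INSERT DELAYED INTO t1 VALUES ('x');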
1. For INSERT..SELECT statements: don't include the table/view the
data is inserted into in the list of leaf tables
2. Remove duplicated and dead code related to table_count
This bug caused crashes when the server executed such a CREATE VIEW
statement whose view specification contained a reference to an unknown
column in a subquery used in ON condition.
Both the cause of this bug and its fix are quite similar to those of
the bug MDEV-26412.
Approved by Sergey Petrunia <sergey@mariadb.com>
Problem:
========
When using sequences, the function
sequence_definition::write(TABLE *table, bool all_fields)
is used to save DML/DDL updates to sequence tables (e.g. nextval,
setval, and alter). Prior to this patch, the value of all_fields was
always false when invoked via nextval and setval, which forced the
bitmap to include only changed columns.
Solution:
========
Change all_fields when invoked via nextval and setval to be reliant
on binlog_row_image, such that it is false when binlog_row_image is
MINIMAL, and true otherwise.
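For illustration, the statements affected (the sequence name is
hypothetical); with binlog_row_image=MINIMAL only the changed columns
go into the row image, otherwise all columns do:

  CREATE SEQUENCE s1;
  SELECT NEXTVAL(s1);      -- sequence_definition::write() via nextval
  SELECT SETVAL(s1, 100);  -- sequence_definition::write() via setval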
Reviewed By:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
MDEV-28567 counted two execution sequences; there is a third one,
which executes ALTER VIEW before f() is created.
The more appropriate place for this test case is lock_sync.test.
Running some statements that use IN subqueries outside the context of
a regular query could result in abnormal server termination.
The reason for the failure is that the internal structures
SELECT_LEX/SELECT_LEX_UNIT created on behalf of the parsed query were
initialized incorrectly. The incorrect initialization of the
SELECT_LEX/SELECT_LEX_UNIT structures was introduced by the commit
de745ecf29
(MDEV-11953: support of brackets in UNION/EXCEPT/INTERSECT operations)
pushed into 10.4; that is the reason this bug is not reproducible in
10.3.
To fix the issue, the method SELECT_LEX::register_unit() is used for
proper initialization of the SELECT_LEX/SELECT_LEX_UNIT data
structures. Additionally, the method SELECT_LEX::get_slave() was
removed from the source code base, since in the use cases where it was
used it can be replaced by the method first_inner_unit().
The incorrect type of mysql.column_stats caused the server to
complain during the upgrade of every other table:
[ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9
and expected column 'histogram' at position 10 to have type longblob.
To prevent these verbose server errors, we upgrade the
mysql.column_stats table first.
Consequently, the "Incorrect definition of table mysql.*" errors are
limited to the appropriate small set of test cases.
The rpl_gtid_errorhandling.result changes the GTID number by one
because of the added early suppression (adding a table row).
Reviewer: Vicențiu Ciorbaru
Fixes MariaDB/mariadb-docker#438
Fix the side effect of MDEV-4750 (reenabling innodb_stats_persistent),
so that sporadic MDL acquisition for this table does not interfere with
SELECT from information_schema.metadata_lock_info
optimize_semi_joins() calls update_sj_state() to update semi-join
optimization state in the JOIN class.
greedy_search() algorithm considers different join prefixes,
and then picks one table to put into the join prefix.
Most of the semi-join optimization state is in the table's entry
in the join->positions[cur_prefix_size].
However, it also needs to call update_sj_state() to update the
semi-join optimization state in the JOIN class.
There is one exception, which is the cause of this bug: when we're
inside optimize_semi_join_nests() and are optimizing a subquery,
optimize_semi_joins() does nothing, and in particular it doesn't call
update_sj_state(). greedy_search() must not call it either.