The single validation module is split into multiple (simpler)
modules. In the process, we introduce one "validation worker" per
peer. This worker handles all the `New_head` and `New_branch`
messages advertised by a given peer. To do so, it sends "fetching
requests" and "validation requests" to the `Distributed_db` and the
`Block_validator` respectively. These two global workers are
responsible for the 'fair' allocation of network and CPU resources
amongst the connected peers.
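A minimal sketch of this shape, assuming hypothetical entry points
`fetch_block` and `validate_block` standing in for the requests sent to
`Distributed_db` and `Block_validator` (the real workers queue requests
rather than act synchronously):

```ocaml
(* Sketch only: names such as [fetch_block] and [validate_block] are
   assumptions, not the actual API. *)

type block_hash = string
type peer_id = string

(* Messages a peer may advertise. *)
type message =
  | New_head of block_hash
  | New_branch of block_hash list  (* e.g. a locator: head plus ancestors *)

(* Placeholders for the two global workers; in the real code these would
   enqueue requests into [Distributed_db] and [Block_validator]. *)
let fetch_block ~(peer : peer_id) hash =
  Printf.printf "fetching %s for peer %s\n" hash peer

let validate_block hash = Printf.printf "validating %s\n" hash

(* One worker per peer: it turns each advertisement into fetching
   requests (network) and validation requests (CPU), leaving the fair
   scheduling to the two global workers. *)
let on_message ~peer = function
  | New_head hash ->
      fetch_block ~peer hash;
      validate_block hash
  | New_branch locator ->
      List.iter (fetch_block ~peer) locator;
      List.iter validate_block locator

let () =
  on_message ~peer:"peer-1" (New_head "BLockA");
  on_message ~peer:"peer-1" (New_branch [ "BLockB"; "BLockA" ])
```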
It now takes a `proto_header` as a parameter, and it returns a full
`shell_header`. This prepares for the inclusion of the context's hash in
the `shell_header`.
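Illustratively, the two headers might relate as below; the record fields
and the `forge_header` function are assumptions, not the actual types:

```ocaml
(* Hypothetical shapes, for illustration only. *)
type proto_header = { proto_data : bytes }  (* protocol-specific payload *)

type shell_header = {
  level : int;
  predecessor : string;  (* hash of the previous block *)
  timestamp : float;
  context : string;      (* hash of the resulting context, once included *)
}

(* The forging entry point now builds the shell part itself. *)
let forge_header ~level ~predecessor ~timestamp (_proto : proto_header) :
    shell_header =
  { level; predecessor; timestamp; context = "<not yet computed>" }
```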
Let's get serious. The full index of operations is not sustainable in
production code. We now keep only the index of operations not yet
included in the chain (i.e. the mempool/prevalidation). Operations from
the chain are now only accessible through a block. For instance, see the
RPC:
/blocks/<hash>/proto/operations
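A sketch of the resulting lookup discipline, with hypothetical names
(`mempool`, `pending_operation`): pending operations keep an index,
included ones are reached through their block only:

```ocaml
(* Sketch: the only remaining index covers operations not yet in the
   chain; anything already included must be reached through its block. *)
type operation = string
type block = { hash : string; operations : operation list }

(* Index restricted to pending (mempool/prevalidation) operations. *)
let mempool : (string, operation) Hashtbl.t = Hashtbl.create 97

let pending_operation hash = Hashtbl.find_opt mempool hash

(* Chain operations: no global index; go through the block, as the RPC
   /blocks/<hash>/proto/operations does. *)
let operations_of_block (b : block) = b.operations
```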
This prepares for the inclusion of the context's hash in the block
header. By "looking" into the resulting context of a block, we are now
able to determine (as sketched after this list) whether:
- no testnet is currently associated to the branch;
- a testnet must be forked after the block;
- a previously forked testnet is running.
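One way to picture the three cases is as a variant read from the block's
resulting context; the constructor and field names below are
hypothetical:

```ocaml
(* Hypothetical variant describing what the resulting context of a block
   can tell us about the associated testnet. *)
type testnet_status =
  | Not_running                        (* no testnet on this branch *)
  | Forking of { expiration : float }  (* a testnet must be forked
                                          after this block *)
  | Running of { genesis : string;     (* a previously forked testnet
                                          is running *)
                 expiration : float }
```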
The minimal header now (classically) contains the root of a Merkle tree
wrapping a list of lists of operations. Currently, the validator only
accepts a single list of operations, but the 3+pass validator will
require at least two lists.
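A minimal sketch of such a commitment, using the stdlib `Digest` (MD5)
as a stand-in for the real hash function: each operation list is hashed
into a leaf, and the leaves are folded into a binary Merkle root:

```ocaml
(* Sketch: hash each operation list, then fold the per-list hashes into
   a binary Merkle root. [Digest] (MD5) stands in for the real hash. *)
let hash_leaf (ops : string list) = Digest.string (String.concat "" ops)

let hash_node l r = Digest.string (l ^ r)

let rec merkle_root = function
  | [] -> Digest.string ""  (* commitment to the empty list *)
  | [ h ] -> h
  | hs ->
      (* pair up siblings, duplicating the last hash on odd levels *)
      let rec pairs = function
        | [] -> []
        | [ x ] -> [ hash_node x x ]
        | x :: y :: rest -> hash_node x y :: pairs rest
      in
      merkle_root (pairs hs)

let operations_hash (lists : string list list) =
  merkle_root (List.map hash_leaf lists)
```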
It waits for the node to be synchronized with the network. The heuristic
is currently (sketched in code below):
- the timestamp of the current head is less than 1 minute old;
- there has been a period of 30 seconds without a new block being
  discovered.
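A direct transcription of that heuristic, assuming all times are POSIX
seconds:

```ocaml
(* [now], [head_timestamp] and [last_discovery] are POSIX seconds;
   [last_discovery] is the time the latest new block was discovered. *)
let is_synchronized ~now ~head_timestamp ~last_discovery =
  now -. head_timestamp < 60.      (* head is less than 1 minute old *)
  && now -. last_discovery >= 30.  (* 30 s without a new block *)
```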
This refactors `src/node/shell/state.ml` in order to trace the source of
blocks and operations. This prepares the node for the three-pass
validator.
In the process, it adds an in-memory overlay for blocks and operations.
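A sketch of what such an overlay can look like, with `disk_read` as a
placeholder for the on-disk store:

```ocaml
(* Sketch of an in-memory overlay in front of a slower persistent store;
   [disk_read] is a placeholder for the real backend. *)
let cache : (string, bytes) Hashtbl.t = Hashtbl.create 1024

let disk_read (_hash : string) : bytes option = None  (* placeholder *)

let read hash =
  match Hashtbl.find_opt cache hash with
  | Some data -> Some data  (* served from the overlay *)
  | None -> (
      match disk_read hash with
      | Some data ->
          Hashtbl.add cache hash data;  (* populate the overlay *)
          Some data
      | None -> None)

let write hash data = Hashtbl.replace cache hash data
```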