Substrate-specific P2P networking.

Important: This crate is unstable and the API and usage may change.

Node identities and addresses

In a decentralized network, each node possesses a network private key and a network public key. In Substrate, the keys are based on the ed25519 curve.

From a node's public key, we can derive its identity. In Substrate and libp2p, a node's identity is represented with the [PeerId] struct. All network communications between nodes on the network use encryption derived from both sides' keys, which means that identities cannot be faked.

A node's identity uniquely identifies a machine on the network. If you start two or more clients using the same network key, they will seriously interfere with each other.
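As a hedged illustration of how an identity is derived, here is a minimal sketch using rust-libp2p (method names follow recent libp2p releases; older releases use into_peer_id instead of to_peer_id):

```rust
use libp2p::identity::Keypair;

fn main() {
    // Generate a fresh ed25519 network keypair. A real node would instead
    // load its persisted network key from disk, so its identity stays stable.
    let keypair = Keypair::generate_ed25519();

    // A node's identity (PeerId) is derived from its public key.
    let peer_id = keypair.public().to_peer_id();
    println!("local peer id: {peer_id}");
}
```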

Substrate's network protocol

Substrate's networking protocol is based upon libp2p. Using anything other than the libp2p network stack and the rust-libp2p library is currently neither possible nor planned. However, the libp2p framework is very flexible, and the rust-libp2p library could be extended to support a wider range of protocols than those offered by libp2p.

Discovery mechanisms

In order for our node to join a peer-to-peer network, it has to know a list of nodes that are part of said network. This includes nodes' identities and their addresses (how to reach them). Building such a list is called the discovery mechanism. There are three mechanisms that Substrate uses:

- Bootstrap nodes. These are hard-coded node identities and addresses passed in the network configuration.
- mDNS. We perform a UDP broadcast on the local network; nodes that are listening can respond with their identity.
- Kademlia random walk. Once connected, we perform random Kademlia FIND_NODE requests on the configured Kademlia DHTs in order for nodes to propagate to us their view of the network.
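As a minimal sketch, node addresses are expressed as libp2p multiaddresses; the address below is a placeholder, and bootstrap node addresses additionally carry a trailing /p2p/<peer-id> component identifying the expected peer:

```rust
use libp2p::Multiaddr;

fn main() {
    // A node's address, expressed as a libp2p multiaddress.
    let addr: Multiaddr = "/ip4/203.0.113.10/tcp/30333"
        .parse()
        .expect("valid multiaddress");

    // Bootstrap nodes are typically written with the expected identity
    // appended, e.g. "/ip4/203.0.113.10/tcp/30333/p2p/<peer-id>".
    println!("{addr}");
}
```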

Connection establishment

When node Alice knows node Bob's identity and address, it can establish a connection with Bob. All connections must always use encryption and multiplexing. While some node addresses (e.g. addresses using /quic) already imply which encryption and/or multiplexing to use, for others the multistream-select protocol is used in order to negotiate an encryption layer and/or a multiplexing layer.

The connection establishment mechanism is called the transport.

As of the writing of this documentation, the following base-layer protocols are supported by Substrate:

- TCP/IP for addresses of the form /ip4/1.2.3.4/tcp/5. Once the TCP connection is open, an encryption and a multiplexing layer are negotiated on top.
- WebSockets for addresses of the form /ip4/1.2.3.4/tcp/5/ws. A TCP/IP connection is open and the WebSockets protocol is negotiated on top. Communications then happen inside WebSockets data frames. Encryption and multiplexing are additionally negotiated again inside this channel.
- DNS for addresses of the form /dns/example.com/tcp/5 or /dns/example.com/tcp/5/ws. A node's address can contain a domain name.

On top of the base-layer protocol, the Noise protocol is negotiated and applied. The exact handshake protocol is experimental and is subject to change.

The following multiplexing protocols are supported:

- Mplex. Support for mplex will likely be deprecated in the future.
- Yamux.
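As a hedged illustration of such a transport stack, here is the canonical rust-libp2p pattern for a TCP transport with Noise encryption and Yamux multiplexing (builder names follow recent rust-libp2p releases and differ across versions; sc-network's actual transport additionally supports WebSockets and DNS):

```rust
use libp2p::{
    core::{muxing::StreamMuxerBox, transport::Boxed, upgrade},
    identity, noise, tcp, yamux, PeerId, Transport,
};

// Assemble a TCP transport with Noise encryption and Yamux multiplexing.
// Negotiation of the upgrades uses multistream-select.
fn build_transport(keypair: &identity::Keypair) -> Boxed<(PeerId, StreamMuxerBox)> {
    tcp::tokio::Transport::new(tcp::Config::default())
        .upgrade(upgrade::Version::V1) // multistream-select
        .authenticate(noise::Config::new(keypair).expect("valid keypair")) // encryption
        .multiplex(yamux::Config::default()) // multiplexing
        .map(|(peer_id, muxer), _| (peer_id, StreamMuxerBox::new(muxer)))
        .boxed()
}
```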

Substreams

Once a connection has been established and uses multiplexing, substreams can be opened. When a substream is opened, the multistream-select protocol is used to negotiate which protocol to use on that given substream.

Protocols that are specific to a certain chain have a <protocol-id> in their name. This "protocol ID" is defined in the chain specifications. For example, the protocol ID of Polkadot is "dot". In the protocol names below, <protocol-id> must be replaced with the corresponding protocol ID.

Note: It is possible for the same connection to be used for multiple chains. For example, one can use both the /dot/sync/2 and /sub/sync/2 protocols on the same connection, provided that the remote supports them.
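As a small illustration, chain-specific protocol names are composed from the protocol ID; these helper functions are illustrative, not sc-network's actual API:

```rust
// Compose chain-specific protocol names from the protocol ID found in the
// chain specification.
fn sync_protocol(protocol_id: &str) -> String {
    format!("/{protocol_id}/sync/2")
}

fn block_announces_protocol(protocol_id: &str) -> String {
    format!("/{protocol_id}/block-announces/1")
}

fn main() {
    // For Polkadot, whose protocol ID is "dot":
    assert_eq!(sync_protocol("dot"), "/dot/sync/2");
    assert_eq!(block_announces_protocol("dot"), "/dot/block-announces/1");
}
```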

Substrate uses the following standard libp2p protocols:

- /ipfs/ping/1.0.0. We periodically open an ephemeral substream in order to ping the remote and check whether the connection is still alive. Failure for the remote to reply leads to a disconnection.
- /ipfs/id/1.0.0. We periodically open an ephemeral substream in order to ask information from the remote.
- /<protocol_id>/kad. We periodically open ephemeral substreams for Kademlia random walk queries. Each Kademlia query is done in a new separate substream.

Additionally, Substrate uses the following non-libp2p-standard protocols:

- /substrate/<protocol-id>/<version> (where <version> is a number between 2 and 6). Each connection can optionally keep a single substream of this protocol alive for all Substrate-based communications. This is the legacy protocol described in the next section.
- /<protocol-id>/sync/2, a request-response protocol (see below) that lets one perform requests for information about blocks. Each request is the encoding of a BlockRequest and each response is the encoding of a BlockResponse (see message.rs).
- /<protocol-id>/light/2, a request-response protocol (see below) that lets one perform light-client-related requests.
- /<protocol-id>/transactions/1, a notifications protocol (see below) where transactions are pushed to other nodes.
- /<protocol-id>/block-announces/1, a notifications protocol (see below) where block announcements are pushed to other nodes.

The legacy Substrate substream

Substrate uses a component named the peerset manager (PSM). Through the discovery mechanism, the PSM is aware of the nodes that are part of the network and decides which nodes we should perform Substrate-based communications with. For these nodes, we open a connection if necessary and open a unique substream for Substrate-based communications. If the PSM decides that we should disconnect a node, then that substream is closed.

For more information about the PSM, see the sc-peerset crate.

Note that at the moment there is no mechanism in place to solve the issues that arise when the two sides of a connection open the unique substream simultaneously. To avoid this problem, only the dialer of a connection is allowed to open the unique substream. When the substream is closed, the entire connection is closed as well. This is a bug that will be resolved by deprecating the protocol entirely.

Within the unique Substrate substream, messages encoded using parity-scale-codec are exchanged. The details of these messages are not totally in place yet, but they can be found in the message.rs file.

Once the substream is open, the first step is an exchange of a status message from both sides, containing information such as the chain root hash, head of chain, and so on.
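As a hedged illustration of such an exchange, here is a status-like message round-tripped through SCALE with parity-scale-codec; the field set is illustrative and does not match the real Status message in message.rs:

```rust
use parity_scale_codec::{Decode, Encode};

// An illustrative status-like message; the real Status message in message.rs
// has a different, richer field set.
#[derive(Encode, Decode, Debug, PartialEq)]
struct Status {
    protocol_version: u32,
    roles: u8,
    best_number: u32,
    best_hash: [u8; 32],
    genesis_hash: [u8; 32],
}

fn main() {
    let status = Status {
        protocol_version: 6,
        roles: 1, // full node
        best_number: 0,
        best_hash: [0u8; 32],
        genesis_hash: [0u8; 32],
    };

    // Messages on the substream are exchanged in SCALE encoding.
    let bytes = status.encode();
    let decoded = Status::decode(&mut &bytes[..]).expect("round-trips");
    assert_eq!(decoded, status);
}
```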

Communications within this substream include:

- Syncing. Blocks are announced and requested from other nodes.
- Light-client requests. When a light client requires information, a random node we have a substream open with is chosen, and the information is requested from it.
- Gossiping. Used for example by grandpa.

Request-response protocols

A so-called request-response protocol is defined as follows:

- When node A wishes to send a request to node B, it opens a substream with B and negotiates the protocol name of the request-response protocol.
- A sends the request on the substream, then closes its writing side.
- B sends back the response, after which the substream is closed.

Each request is performed in a new separate substream.
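As a hedged sketch of how such a protocol is registered, the following uses sc-network's RequestResponseConfig as it existed at the time of writing; the field set varies between releases (some add fields such as fallback_names), and the protocol name, sizes, and timeout here are arbitrary examples:

```rust
use std::time::Duration;
use futures::channel::mpsc;
use sc_network::config::{IncomingRequest, RequestResponseConfig};

// Build a config for a custom request-response protocol.
fn my_request_response_config() -> (RequestResponseConfig, mpsc::Receiver<IncomingRequest>) {
    // Inbound requests from remote peers are delivered on this channel.
    let (inbound_tx, inbound_rx) = mpsc::channel(64);
    let config = RequestResponseConfig {
        name: "/mychain/my-requests/1".into(), // hypothetical protocol name
        max_request_size: 1024,
        max_response_size: 16 * 1024 * 1024,
        request_timeout: Duration::from_secs(20),
        inbound_queue: Some(inbound_tx),
    };
    (config, inbound_rx)
}
```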

Notifications protocols

A so-called notifications protocol is defined as follows:

- When node A wishes to send notifications to node B, it opens a substream with B and negotiates the protocol name of the notifications protocol.
- Both sides then exchange a handshake message.
- Afterwards, A can send notifications to B on that substream. Notifications flow in a single direction; if B wishes to send notifications to A, it must open its own substream.

The API of sc-network allows one to register user-defined notification protocols. sc-network automatically tries to open a substream towards each node for which the legacy Substrate substream is open. The handshake is then performed automatically.

For example, the sc-finality-grandpa crate registers the /paritytech/grandpa/1 notifications protocol.
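A hedged sketch of registering such a protocol, assuming sc-network's NonDefaultSetConfig::new constructor and the extra_sets field of NetworkConfiguration, both of which exist at the time of writing but change between releases:

```rust
use sc_network::config::{NetworkConfiguration, NonDefaultSetConfig};

// Register a user-defined notifications protocol in the network
// configuration. The protocol name and size limit are arbitrary examples.
fn register_my_notifications(net_config: &mut NetworkConfiguration) {
    let set_config = NonDefaultSetConfig::new(
        "/myorg/my-notifications/1".into(), // protocol name
        1024 * 1024,                        // maximum notification size, in bytes
    );
    net_config.extra_sets.push(set_config);
}
```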

At the moment, for backwards-compatibility, notification protocols are tied to the legacy Substrate substream. Additionally, the handshake message is hardcoded to be a single 8-bit integer representing the role of the node:

- 1 for a full node.
- 2 for a light node.
- 4 for an authority.

In the future, though, these restrictions will be removed.

Sync

The crate implements a number of syncing algorithms. The main purpose of the syncing algorithm is to get the chain to the latest state and keep it synced with the rest of the network by downloading and importing new data as soon as it becomes available. When the node starts, it catches up with the network using one of the initial sync methods listed below; once that is complete, it uses a keep-up sync to download new blocks.

Full and light sync

This is the default syncing method for the initial and keep-up sync. The algorithm starts with the current best block and downloads block data progressively from multiple peers if available. Once there is a sequence of blocks ready to be imported, they are fed to the import queue. Full nodes download and execute full blocks, while light nodes only download and import headers. This continues until each peer has no more new blocks to give.

For each peer, the sync maintains the number of the best block we have in common with that peer. This number is updated whenever the peer announces new blocks or our best block advances. This makes it possible to keep track of peers that have new block data and to request new information as soon as it is announced. In keep-up mode, we also track peers that announce blocks on all branches and not just the best branch. The sync algorithm tries to be greedy and download all data that is announced.
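The following is an illustrative sketch, not sc-network's actual code, of this per-peer bookkeeping; all type names are hypothetical stand-ins:

```rust
use std::collections::HashMap;

type BlockNumber = u64;
type PeerId = u64; // stand-in for the real peer identity type

// For each peer, track its announced best block and the best block we know
// we have in common with it.
#[derive(Default)]
struct PeerSync {
    best_number: BlockNumber,
    common_number: BlockNumber,
}

struct ChainSync {
    peers: HashMap<PeerId, PeerSync>,
    our_best: BlockNumber,
}

impl ChainSync {
    // Called when a peer announces a block, or when our best block advances
    // past blocks the peer is known to have.
    fn update_common(&mut self, peer: PeerId, known_to_peer: BlockNumber) {
        let entry = self.peers.entry(peer).or_default();
        entry.best_number = entry.best_number.max(known_to_peer);
        entry.common_number = entry.common_number.max(known_to_peer.min(self.our_best));
    }

    // Peers worth requesting new data from: those announcing blocks beyond
    // what we have in common with them.
    fn peers_with_new_data(&self) -> impl Iterator<Item = PeerId> + '_ {
        self.peers
            .iter()
            .filter(|(_, s)| s.best_number > s.common_number)
            .map(|(id, _)| *id)
    }
}
```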

Fast sync

In this mode, the initial sync downloads and verifies the full header history. This makes it possible to validate authority set transitions and arrive at a recent header. After the header chain has been verified and imported, the node starts downloading a state snapshot using the state request protocol. Each StateRequest contains a starting storage key, which is empty for the first request. A StateResponse contains a storage proof for a sequence of keys and values in the storage, starting at (but not including) the key given in the request. After verifying the proof against the storage root in the target header, the node issues the next StateRequest with the starting key set to the last key of the previous response. This continues until the trie iteration reaches the end. The state is then imported into the database and the keep-up sync starts in normal full/light sync mode.
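An illustrative sketch of this request loop; the types mirror StateRequest and StateResponse only conceptually, and proof verification is stubbed out:

```rust
struct StateRequest {
    start_key: Vec<u8>, // empty for the first request
}

struct StateResponse {
    entries: Vec<(Vec<u8>, Vec<u8>)>, // proven key/value pairs
    complete: bool,                   // trie iteration reached the end
}

fn download_state(mut fetch: impl FnMut(&StateRequest) -> StateResponse) {
    let mut request = StateRequest { start_key: Vec::new() };
    loop {
        let response = fetch(&request);

        // Here the node would verify the storage proof against the storage
        // root in the target header before importing the entries.

        if response.complete || response.entries.is_empty() {
            break;
        }
        // The next request starts at the last key of the previous response.
        let (last_key, _) = response.entries.last().expect("checked non-empty");
        request = StateRequest { start_key: last_key.clone() };
    }
}
```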

Warp sync

This is similar to fast sync, but instead of downloading and verifying the full header chain, the algorithm only downloads finalized authority set changes.

GRANDPA warp sync

GRANDPA keeps justifications for each finalized authority set change. Each change is signed by the authorities from the previous set. By downloading and verifying these signed hand-offs starting from genesis, we arrive at a recent header much faster than by downloading the full header chain. Each WarpSyncRequest contains a block hash to start collecting proofs from. A WarpSyncResponse contains a sequence of block headers and justifications. The proof downloader checks the justifications and continues requesting proofs from the last header hash, until it arrives at some recent header.
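An illustrative sketch of this proof download loop; all types and helpers are hypothetical stand-ins for the real messages and verification logic:

```rust
type Hash = [u8; 32];

struct WarpSyncRequest {
    begin: Hash, // block hash to start collecting proofs from
}

struct WarpSyncResponse {
    last_header_hash: Hash, // where the returned fragment of hand-offs ends
    is_finished: bool,      // reached a recent header
}

fn download_warp_proofs(
    genesis_hash: Hash,
    mut fetch: impl FnMut(&WarpSyncRequest) -> WarpSyncResponse,
) -> Hash {
    let mut request = WarpSyncRequest { begin: genesis_hash };
    loop {
        let response = fetch(&request);

        // Here the node would verify each justification against the
        // authority set proven by the previous hand-off.

        if response.is_finished {
            return response.last_header_hash;
        }
        // Continue requesting proofs from the last verified header.
        request = WarpSyncRequest { begin: response.last_header_hash };
    }
}
```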

Once the finality chain is proved for a header, the state matching the header is downloaded much like during the fast sync. The state is verified to match the header storage root. After the state is imported into the database it is queried for the information that allows GRANDPA and BABE to continue operating from that state. This includes BABE epoch information and GRANDPA authority set id.

Background block download

After the latest state has been imported, the node is fully operational but is still missing historic block data, i.e. it is unable to serve block bodies or headers other than the most recent one. To make sure all nodes have the block history available, a background sync process is started that downloads all the missing blocks. It runs in parallel with the keep-up sync and does not interfere with the downloading of recent blocks. During this download we also import GRANDPA justifications for blocks with authority set changes, so that the warp-synced node has all the data needed to serve other nodes that might want to sync from it with any method.

Usage

Using the sc-network crate is done through the [NetworkWorker] struct. Create this struct by passing a [config::Params], then poll it as if it were a Future. You can extract an Arc<NetworkService> from the NetworkWorker, which can be shared amongst multiple places in order to give orders to the networking.
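A hedged sketch of driving the worker, assuming only that the NetworkWorker can be awaited as a Future (construction from config::Params and the concrete generics are elided):

```rust
use futures::prelude::*;

// Polling the worker is what makes the networking make progress; spawn this
// future on the node's executor and keep it running for the node's lifetime.
async fn run_network(worker: impl Future<Output = ()>) {
    worker.await;
}
```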

See the [config] module for more information about how to configure the networking.

After the NetworkWorker has been created, the important things to do are:

- Calling NetworkWorker::poll in order to advance the network. This can be done by using sc-network as a Future and polling it.
- Calling on_block_import whenever a block is added to the client.
- Calling on_block_finalized whenever a block is finalized.
- Calling trigger_repropagate when a transaction is added to the pool.

More precise usage details are still being worked on and will likely change in the future.

License: GPL-3.0-or-later WITH Classpath-exception-2.0