This second article in the Dfinity series gives a high-level overview of how Dfinity attempts to achieve the goals of creating an Internet-level decentralized cloud on which much of the world’s software can be run.
It should be noted that, so far, the Dfinity team has published detailed sources mainly on the platform’s consensus protocol and governance. These two areas will therefore be discussed in more depth in this article than sharding or software and data management. Information about the native token’s mechanics and supply is so scarce that it is impractical to cover them at all.
One of Dfinity’s biggest claimed innovations is its consensus mechanism. It has multiple moving parts and combines elements of several existing approaches. The four layers of Dfinity’s consensus are the identity and registration layer, the random beacon layer, the notarization layer and the fork resolution layer. Let us consider each of them in turn.
Unlike the most popular platforms today, in the Dfinity setup, there are no privileged nodes for participating in consensus. A node (client) wishing to actively participate in the network has to fulfil the minimum hardware requirements, register and stake a certain amount of DFN tokens to ensure conscientious behavior. In its mature state, the Dfinity network is envisaged to be run by millions of clients. As each client must lock up some tokens, Dfinity makes use of proof of stake (PoS), like Ethereum, but strictly for the purposes of Sybil protection and incentivizing good behavior.
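The registration-and-staking step above can be pictured as a simple admission rule: a client is admitted only if it locks a deposit, and misbehavior forfeits that deposit. The following is a minimal sketch; the `MIN_STAKE` value, the `Registry` API and the slashing rule are illustrative assumptions, not Dfinity’s actual parameters.

```python
# Toy sketch of stake-based registration for Sybil resistance.
# MIN_STAKE and the Registry API are illustrative assumptions.

MIN_STAKE = 100  # hypothetical minimum DFN deposit

class Registry:
    def __init__(self):
        self.clients = {}  # client_id -> locked stake

    def register(self, client_id: str, stake: int) -> bool:
        """Admit a client only if it locks at least MIN_STAKE tokens."""
        if stake < MIN_STAKE:
            return False
        self.clients[client_id] = stake
        return True

    def slash(self, client_id: str) -> int:
        """A misbehaving client forfeits its deposit."""
        return self.clients.pop(client_id, 0)

reg = Registry()
assert reg.register("alice", 150)
assert not reg.register("mallory", 10)  # too little stake: rejected
assert reg.slash("alice") == 150        # deposit forfeited on misbehavior
```

The point of the deposit is purely economic: creating many identities (a Sybil attack) becomes expensive, and bad behavior has a direct cost.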
At the core of Dfinity’s approach to consensus is the idea that consensus is fundamentally about randomness: randomness allows a decentralized network to choose a leader without the extremely communication-heavy processes that pre-Bitcoin consensus mechanisms relied on. In the PoW context, the mining node that wins the right to mine a block has to produce, through brute-force search, a random number called a “nonce” that satisfies certain properties. However, this approach to generating randomness has scalability problems, because randomness generation must take much longer than the exchange of messages among nodes.
At its so-called Random Beacon layer, Dfinity adopts a rather different approach to generating randomness, called Threshold Relay. At each epoch, a group of clients (called a threshold group, or TG) is randomly selected, and its members run a key-generation process that assigns each TG member a signing key. At every round during the epoch, the TG members sign the random value from the previous round with their keys. The TG can thus be said to operate a verifiable random function, i.e. a process that continuously extends a chain of random values.
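The beacon described above can be simulated as a deterministic chain: each round’s randomness is derived from a signature over the previous round’s value. In this sketch an HMAC with a shared group key stands in for the TG’s unique threshold signature (the group key and round labels are invented for illustration); real BLS signatures provide the same “one deterministic value per input” property while remaining publicly verifiable.

```python
import hashlib
import hmac

# Toy simulation of the random beacon chain. An HMAC keyed with a
# shared "group key" stands in for the TG's deterministic threshold
# signature over the previous round's randomness.

GROUP_KEY = b"epoch-17-threshold-group"  # hypothetical TG key material

def next_random(prev_rand: bytes) -> bytes:
    # "Sign" the previous round's value, then hash the signature
    # to obtain this round's beacon output.
    sig = hmac.new(GROUP_KEY, prev_rand, hashlib.sha256).digest()
    return hashlib.sha256(sig).digest()

r0 = hashlib.sha256(b"genesis").digest()
r1 = next_random(r0)
r2 = next_random(r1)
assert next_random(r0) == r1  # deterministic: the chain can be re-verified
assert r1 != r2               # each round yields fresh randomness
```

Because the signature is deterministic, no TG member can bias the output, and anyone holding the chain of values can check it link by link.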
The cryptographic security of the scheme is based on the scientific result called Boneh-Lynn-Shacham (BLS) signatures, and one of the authors of the paper in which they were introduced, Ben Lynn, is a full-time member of the Dfinity team. At the end of each epoch, the same randomness is used to relay (hence, Threshold Relay) the randomness generation to another pre-configured TG.
This approach to generating a running sequence of random numbers is claimed by the Dfinity team to be much less computationally intensive than proof of work, because the only demanding operation is the key generation that takes place at the start of each epoch. It is also believed to be fast because there is no need for TG members to exchange messages: they simply sign the previous round’s random value with their keys. If a minimum (threshold) subset of the TG signs that value, the value for the current round is generated, hence the term “threshold” in Threshold Relay. The surprising feature of the process is that it does not matter which subset of the TG signs the previous randomness: any threshold subset produces the same group signature.
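The “any subset works” property can be demonstrated with plain Shamir secret sharing, which is the same interpolation idea that threshold BLS applies to signatures: with a degree-(t−1) polynomial, any t of the n shares reconstruct the same value via Lagrange interpolation at x = 0. This is a self-contained sketch of that mathematical fact, not Dfinity’s actual key-generation code.

```python
import random

# Shamir-style sketch of why the choice of signing subset is irrelevant:
# ANY t of the n shares of a degree-(t-1) polynomial reconstruct the
# same secret. Threshold BLS performs the same interpolation "in the
# exponent", so any t partial signatures combine into one unique signature.

P = 2**127 - 1  # prime field modulus

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

secret, t, n = 123456789, 3, 5
shares = make_shares(secret, t, n)
assert reconstruct(shares[:3]) == secret  # one subset of size t
assert reconstruct(shares[2:]) == secret  # a different subset, same value
```

This is why the beacon needs no coordination: whichever t members happen to respond first, the combined result is identical.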
TGs do not just generate a random number at each round; they also use it to “notarize” at least one block at the relevant round. Notarization in Dfinity is essentially a sophisticated form of time-stamping: it ensures that the added block(s) were created during the relevant round. The power of this process, according to the Dfinity team, is that TG members need not communicate among themselves, which saves a lot of time.
Notarization is not always sufficient, however, as TG members may occasionally notarize more than one block at a given round. In such cases, the fork resolution procedure comes in. It uses the so-called probabilistic slot protocol to choose one of the several notarized blocks at the next round.
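One plausible way to picture the probabilistic slot protocol is as a beacon-derived priority ranking over block makers: among several notarized blocks, the one proposed by the highest-priority maker wins. The ranking rule below is an illustrative stand-in seeded from the round’s randomness, not the exact specification.

```python
import hashlib
import random

# Toy sketch of fork resolution: the round's beacon output deterministically
# ranks block makers, and among competing notarized blocks the one from the
# best-ranked (lowest-rank) maker is chosen. Names are illustrative.

def rank_makers(beacon: bytes, makers: list) -> dict:
    rng = random.Random(hashlib.sha256(beacon).digest())
    order = makers[:]
    rng.shuffle(order)
    return {m: i for i, m in enumerate(order)}  # maker -> rank, 0 is best

def resolve_fork(beacon: bytes, notarized_blocks: list) -> str:
    """notarized_blocks is a list of (maker, block_id) pairs."""
    ranks = rank_makers(beacon, [maker for maker, _ in notarized_blocks])
    return min(notarized_blocks, key=lambda b: ranks[b[0]])[1]

beacon = hashlib.sha256(b"round-42").digest()
blocks = [("maker-a", "block-a"), ("maker-b", "block-b")]
winner = resolve_fork(beacon, blocks)
assert winner in ("block-a", "block-b")
assert resolve_fork(beacon, blocks) == winner  # deterministic given the beacon
```

Because every honest node derives the same ranking from the same beacon value, the fork collapses to a single chain at the next round without any extra rounds of voting.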
Dfinity’s consensus mechanism is purported to deliver very short block times (as low as one second) and complete block finality after just two block confirmations (because forks are resolved at the next round), both of which are in stark contrast to the most popular blockchains at present.
Given that Dfinity in its mature form aims to run much of the world’s software on-chain, it cannot possibly do so on a single blockchain; it has to rely on sharding. Williams has claimed that implementing sharding on Dfinity will be particularly advantageous because multiple TGs of the kind discussed above can easily work in parallel.
The way inter-shard communication and the maintenance of the overall state of the sharded ledger are supposed to be organized does not seem to have been described in publicly available sources yet.
Software and data management
Clients in Dfinity are not only responsible for consensus; they also run and store the software uploaded onto the network. More specifically, each participating client has to run computations on, and store data for, a subset of the software working on Dfinity. Such software subsets correspond to shards.
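A minimal way to realize such a partition is a deterministic mapping from each program to a shard, so that every client independently computes the same assignment and serves only its own shard’s programs. The hash-mod rule, the shard count and the program names below are assumptions for illustration, not Dfinity’s actual scheme.

```python
import hashlib

# Illustrative sketch: each uploaded program is deterministically mapped
# to one shard; a client runs and stores only its own shard's programs.
# NUM_SHARDS and the hash-mod assignment are hypothetical.

NUM_SHARDS = 4

def shard_of(software_id: str) -> int:
    digest = hashlib.sha256(software_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

programs = ["chat-app", "dns-service", "token-ledger", "photo-store"]
my_shard = 2
my_workload = [p for p in programs if shard_of(p) == my_shard]

# Every client computes the same partition, so shard membership is
# consistent across the network without any coordination messages.
assert all(0 <= shard_of(p) < NUM_SHARDS for p in programs)
```

Real systems typically add replication on top of such a mapping, so that each program is held by many clients within its shard rather than a single one.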
Software for Dfinity may be written in any language that compiles to WebAssembly, a recently created standard that allows software to run directly in the browser and other environments. However, to optimize the creation of software for the platform, the Dfinity team has created a brand-new programming language, ActorScript, for software intended to run on Dfinity.
According to Williams, the key benefit of operating software in this way is that there is no need for APIs and databases: data, for instance, can be stored directly in the software. To Williams, this substantially simplifies software creation and management, and compensates for the cost of multiple replicas of the same application being executed in parallel within one of the network’s shards.
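The “no database” idea, often called orthogonal persistence, means a program’s data lives in its ordinary in-memory state and the platform persists it between calls. The sketch below uses a Python class as a stand-in for an ActorScript-style actor; the `ProfileActor` name and its methods are invented for illustration.

```python
# Sketch of orthogonal persistence: state lives inside the program itself,
# with no API layer or external database. On Dfinity the platform would
# replicate and persist this state between calls automatically.
# ProfileActor and its methods are hypothetical illustrations.

class ProfileActor:
    def __init__(self):
        self.profiles = {}  # data stored directly in the software

    def set_name(self, user_id: str, name: str) -> None:
        self.profiles[user_id] = name

    def get_name(self, user_id: str) -> str:
        return self.profiles.get(user_id, "<unknown>")

actor = ProfileActor()
actor.set_name("u1", "Alice")
assert actor.get_name("u1") == "Alice"      # state survives between calls
assert actor.get_name("u2") == "<unknown>"
```

In a conventional stack the same feature would require a database, a schema and an API between them; here the dictionary inside the actor plays all three roles.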
In governance, Dfinity also attempts to be highly innovative. As Dominic Williams has explained in detail, it introduces a system of voting units called “neurons”, referred to collectively as the Blockchain Nervous System. At first sight, this arrangement closely resembles the delegated proof-of-stake (DPoS) voting model employed by EOS, as each holder of Dfinity’s native tokens has voting power proportional to the number of tokens she owns. That model has been almost universally criticized by experts and blockchain community leaders as leading to various inefficiencies.
However, Dfinity’s approach is rather more sophisticated. It allows neurons to make their voting decisions almost completely programmatically, by tying a neuron’s voting pattern to those of several other neurons, which could, for instance, be ones owned by experts or community leaders. Neuron owners must lock up DFN tokens in their neurons to be able to vote, and they are remunerated for participating. Moreover, the remuneration depends on how long they keep their tokens locked up.
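The follow-a-neuron mechanism can be sketched as liquid voting: each neuron either votes directly or inherits the vote of the neuron it follows, and tallies are weighted by locked stake. The chain-resolution rule and the data layout below are illustrative assumptions, not the Blockchain Nervous System specification.

```python
# Toy sketch of neuron-style delegated voting: a neuron votes directly
# or follows another neuron's vote; voting power is proportional to
# locked tokens. The resolution rule here is an assumption.

def resolve_vote(neuron, votes, follows):
    """Follow the delegation chain until a direct vote is found."""
    seen = set()
    while neuron not in votes:
        if neuron in seen or neuron not in follows:
            return None  # cycle or dead end: treated as abstention
        seen.add(neuron)
        neuron = follows[neuron]
    return votes[neuron]

def tally(stakes, votes, follows):
    totals = {"yes": 0, "no": 0}
    for neuron, stake in stakes.items():
        choice = resolve_vote(neuron, votes, follows)
        if choice is not None:
            totals[choice] += stake
    return totals

stakes = {"expert": 50, "alice": 30, "bob": 20}
votes = {"expert": "yes", "bob": "no"}  # direct votes
follows = {"alice": "expert"}           # alice delegates to the expert
assert tally(stakes, votes, follows) == {"yes": 80, "no": 20}
```

Note how Alice’s 30 tokens count toward “yes” without her casting a vote herself; this is exactly the low-participation remedy discussed below, and unfollowing the expert would instantly withdraw that weight.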
This model seems to be primarily aimed at tackling the low voter participation problem that has plagued other token-holder-voting solutions. Even if one cannot be bothered to constantly follow the events around the project and participate in votes, she may be persuaded to delegate her voting to an authoritative account.
Of course, there is nothing in this approach that automatically prevents token holders from following demagogues or charlatans, but Williams believes that if decisions pushed through by such opinion leaders were to harm the project, token holders would be motivated to unfollow them. The primary incentive to do so would be to stop the harmful opinion leaders from driving down the value of the tokens.
Dfinity’s approach could also help mitigate the problems of cartelization and governmental pressure on the network. Unlike the delegates in EOS or Lisk, Dfinity’s authoritative nodes are not directly remunerated for being such, so there is no small subset of entities running the network that can be targeted.
Overall, Dfinity is a multi-pronged and highly complex project many aspects of which will probably become clearer in the future with the publication of more details. In line with its bold vision, it purports to introduce multiple technological innovations aimed at delivering impressive scalability and allowing its platform to serve software users globally. The final article in the series will consider the challenges the project may face, even if all its technical innovations turn out to be solid.