Cryptocurrency Consensus Algorithms Bible: Everything You Need to Know

In this article we’ll dive deep into the what, why, and how of consensus algorithms – the most overlooked component of every successful cryptocurrency.

First off, we will take a look at the Byzantine Generals’ Problem as one of the major issues a distributed network must overcome to function properly, and then we will move on to some of the most widely used consensus algorithms and look at the way they work in greater detail.

While this isn’t exactly a casual read, we promise all cryptocurrency enthusiasts (newbies and experts alike) that there’s valuable information ahead that will surely broaden your understanding of the subject.

Note: Please make sure to check out the resources we included as hyperlinks; they cover essential concepts that were too big to treat fully in this article.

Also, we intentionally left out some of the alternative consensus algorithms such as Proof-of-Burn, Proof-of-Authority, Proof-of-Weight, and Proof-of-Capacity because they have yet to be implemented in any of the established (top 100) cryptocurrency projects.

Introduction

Every day we’re bombarded with the latest news on cryptocurrency prices. We do our best to grasp the philosophies and motivations behind different coins, and to spot the trends that might hint at which coin is going to blow up next. But every experienced cryptocurrency enthusiast will tell you the same thing: you’re missing the bigger picture.

Cryptocurrencies are more than a get-rich-quick scheme and a casual conversation piece for you and your friends. They are the result of years of technological innovation — and a spark of genius.

Cryptocurrencies are a decentralized way of exchanging value over the Internet, meaning there is no central body (like a bank or clearing house) that exercises any amount of control over your funds.

You may ask yourself: how is this possible? How are millions and millions of users able to coexist in this mutual space we call the cryptocurrency world without an overarching system pulling the strings? The answer is simple: a well-thought-out architecture that is worked on and improved daily, whose main goal is to keep people and their means safe from harm without relying on a trusted third party.

Consensus algorithms are at the core of what makes cryptocurrencies decentralized; they are tools that facilitate the agreement between millions of users worldwide on what the playing rules are going to be, as well as the punishments and rewards for (dis)obeying them.

In order to appreciate a cryptocurrency for what it really is, one must have a deeper understanding of the underlying mechanisms that keep it alive and working. Let’s get into it!

Byzantine Fault Tolerance: The Byzantine Generals’ Problem

To say a system possesses Byzantine Fault Tolerance (BFT) is to say that the system can continue to function properly even when some of its components fail or act maliciously, which is the class of failures described by the Byzantine Generals’ Problem.

The effects of the Byzantine Generals’ Problem are best seen in distributed computing systems: a malfunctioning (or malicious) component often gives conflicting information to different parts of the system, causing inconsistencies that can only be resolved by reaching a system-wide consensus. This class of problematic behavior was formally expressed by Leslie Lamport, Robert Shostak, and Marshall Pease in their 1982 paper titled “The Byzantine Generals’ Problem”. In the paper, the problem is presented through the following metaphor:

A group of generals, each commanding a division of the Byzantine army, need to decide whether or not they’re going to attack an enemy city. The generals communicate with one another only by messengers. Whatever the decision may be, it’s important that it’s agreed upon by every general because an uncoordinated attack is worse than both a coordinated attack and a coordinated retreat.

The problem is further complicated by traitorous generals that try to prevent the loyal generals from reaching an agreement, as well as by unreliable messengers that may intentionally or unintentionally act in an unhelpful manner. Therefore, a solution is needed that ensures:

  1. All loyal generals decide upon the same plan of action;
  2. A small number of traitors cannot cause the loyal generals to adopt an unreasonable plan of action.

Even though at first glance the Byzantine Generals’ Problem seems deceptively simple, a simple algebraic calculation shows that it takes at least 3n+1 generals in total to cope with n traitors. In other words, the problem can be solved only if more than two thirds of the generals are loyal; a single traitor can still sabotage the attempts of two loyal generals.
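To see the bound in action, here is a minimal Python sketch (our own illustration, not from the paper) that checks whether a network of a given size can tolerate a given number of traitors:

```python
def max_tolerable_traitors(n: int) -> int:
    """Largest number of Byzantine nodes a network of n nodes can tolerate."""
    return (n - 1) // 3

def can_reach_agreement(n: int, traitors: int) -> bool:
    """Agreement is possible only if n >= 3 * traitors + 1."""
    return n >= 3 * traitors + 1

assert can_reach_agreement(4, 1)      # four generals can survive one traitor
assert not can_reach_agreement(3, 1)  # three generals cannot survive even one
```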

The need for consensus algorithms

The Byzantine Generals’ Problem is a problem of reaching agreement on a single course of action among distributed components. In other words, it’s a consensus problem — and it takes a consensus algorithm to solve it.

“The purpose of a consensus algorithm, in general, is to allow for the secure updating of a state according to some specific state transition rules, where the right to perform the state transitions is distributed among some economic set. An economic set is a set of users which can be given the right to collectively perform transitions via some algorithm, and the important property that the economic set used for consensus needs to have is that it must be securely decentralized – meaning that no single actor, or colluding set of actors, can take up the majority of the set, even if the actor has a fairly large amount of capital and financial incentive.” – Vitalik Buterin

In other words, the goal of consensus algorithms (especially in the context of blockchain and cryptocurrencies) is to provide system reliability by achieving agreement on a single data value between a number of agents or processes. Some of those agents or processes may be acting in a faulty, unreliable, or even malicious manner, so consensus algorithms must be fault-tolerant. The main class of faults processes are susceptible to is Byzantine failure in which, as we explained earlier, conflicting or contradictory data is spread across the network, resulting in instability or malfunction. This is why consensus algorithms are necessary to maintain consistency in a distributed network.

Proof-of-Work (PoW)

Along with Bitcoin’s blockchain came the first family of consensus algorithms in the world of cryptocurrency: Proof-of-Work. It’s the most widely used family of consensus algorithms to date, with Bitcoin, Ethereum, Litecoin, ZCash, Monero, and many other cryptocurrencies currently deploying some version of it.

The Proof-of-Work concept existed long before its implementation in cryptocurrencies; it was invented back in 1993 by Cynthia Dwork and Moni Naor, and the term was first coined by Markus Jakobsson and Ari Juels in a 1999 paper.

Originally, Proof-of-Work systems were used for deterring denial-of-service attacks, spam, and other forms of network abuse by requiring proof of computational work from the service requester prior to rendering the requested service, which makes such attempts of misuse financially infeasible.

In order for these systems to be cost-effective, the necessary work must be moderately hard to perform (by the service requester), but easy to check (by the service provider).

One of the first systems to achieve this asymmetry was Hashcash – a proof-of-work-based protocol created by Adam Back in 1997 that originally used the SHA-1 hashing algorithm. It required the user to submit proof of computing a few thousand hashing operations before being granted the requested service (like posting a message on a message board), in order to limit DoS and spam attacks on the network.

Even though computing a few thousand operations might seem like a lengthy process, it actually takes about half a second to complete; for the average user, half a second of delay for a single post is nothing — but for a spammer who wants to post a million messages, it adds up to half a million seconds!
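To make that cost asymmetry concrete, here is a minimal sketch of a hashcash-style stamp in Python (the stamp format and the 20-bit difficulty are illustrative simplifications, not the exact Hashcash specification):

```python
import hashlib

def mint_stamp(resource: str, bits: int = 20) -> str:
    """Brute-force a counter until SHA-1(stamp) has `bits` leading zero bits."""
    counter = 0
    while True:
        stamp = f"{resource}:{counter}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp  # moderately hard to find (~2^bits hashes on average)...
        counter += 1

def verify_stamp(stamp: str, bits: int = 20) -> bool:
    """...but trivially cheap to check with a single hash."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

stamp = mint_stamp("alice@example.com")  # takes a moment, as intended
assert verify_stamp(stamp)
```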

Generating the proof

Bitcoin’s hashcash-style implementation of Proof-of-Work uses the SHA-256 hashing algorithm. You can think of SHA-256 as a machine: it takes an arbitrary-length data input (a string of characters) and produces a deterministic 256-bit output (a fixed-length string of characters that serves as a digital fingerprint of the input).

SHA-256 is seemingly random, but deterministic; for any given input, the resulting output (the hash) will always be the same and can be easily verified by anyone using the same hash algorithm. Now, in the process of mining, the miner takes two things to make up the input for the SHA-256 hashing algorithm:

  • The header of the newly generated block – A standard data structure that contains a fingerprint (the Merkle root) of all the transactions in the block, the date and time, a reference to the previous block, and other important information; this is the useful payload we’re trying to validate through Proof-of-Work;
  • Nonce – A number that is used to vary the output of the hashing function in each iteration;

The miner then runs SHA-256 on this input quadrillions upon quadrillions of times until the hash output matches the so-called difficulty target; only the Nonce field changes between iterations, producing a different hash each time.

The difficulty target is a 256-bit number that serves somewhat as an upper limit; the SHA-256 output must be lower than or equal to the current difficulty target for the block in order for it to be accepted by the network.

In numerical terms, the difficulty in reaching this target lies in finding an input that produces a hash starting with a certain number of leading zeros in its hexadecimal representation (at the time of writing, 18 hex zeros, i.e. 72 zero bits).

By now your mind must be racing with all this information! Let’s look at this process through an example. This is the hash of the latest block that was added to the blockchain:

00000000000000000035fe536fb40c307a60d2740f0ce744481cb24841c9c2d9

Notice the string begins with 18 hexadecimal zeros; the “smaller” this hash is (the more zeros it has in the beginning), the more difficult it is to find such a number. There is no direct way to compute the exact Nonce value needed to produce a hash that satisfies the difficulty target; it must be found through trial and error (known as brute-force searching).

That means that, when we see this digital fingerprint, we know that the only way that miner could’ve come up with it is by performing quadrillions of hash operations — thus proving he did the work!
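Putting the pieces together, a simplified mining loop might look like the sketch below. Real Bitcoin hashes an 80-byte binary header with double SHA-256; the string header and the easy toy target here are stand-ins for illustration:

```python
import hashlib

def mine(header: str, target: int) -> tuple[int, str]:
    """Increment the Nonce until double-SHA-256(header + nonce) <= target."""
    nonce = 0
    while True:
        payload = f"{header}|{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if int(digest, 16) <= target:
            return nonce, digest  # the proof: this took brute-force work to find
        nonce += 1

# Toy target with 6 leading zero hex digits (the real one currently has 18).
target = int("0" * 6 + "f" * 58, 16)
nonce, block_hash = mine("prev_hash|merkle_root|timestamp", target)
print(nonce, block_hash)  # anyone can verify it with a single double-hash
```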

We should also mention that the miner includes a special kind of transaction in the block that contains a reward for all the mining. It’s the first transaction of the block that, unlike all others, doesn’t have an input from which the funds are to be deducted.

It basically says “write a check of whatever the block reward is at that point in time, from thin air, to myself.” This effectively “creates” new BTC that are, from that point on, included in the circulating supply of the currency. A question inevitably arises: what stops the miner from altering this special transaction (called a coinbase transaction) in such a way that it awards him more BTC than he’s entitled to?

Block validation

That brings us to the next step of the Proof-of-Work consensus mechanism: validation of the block. So far, the miner has created this coinbase transaction, gathered the rest of the transactions, used them to create the block header, gone through all the work to find the Nonce, and produced the hash.

Next up, this new block containing all that information is distributed to the rest of the mining nodes on the network for validation. The validation consists of making sure the block is up to code by checking whether it satisfies predetermined rules, one of which is that the miner who submits the block may only pay himself the reward determined by the current block height.

This means that in the process of validation, the nodes will notice the attempt to tamper with the coinbase (or any other) transaction, and they will reject the block. All that work done for nothing!
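As a hypothetical sketch of that rule in Python (the function names and satoshi amounts are ours; the halving schedule is Bitcoin’s real one):

```python
def block_subsidy(height: int) -> int:
    """Block reward in satoshis: 50 BTC, halved every 210,000 blocks."""
    return (50 * 100_000_000) >> (height // 210_000)

def coinbase_is_valid(coinbase_value: int, height: int, total_fees: int) -> bool:
    """A miner may claim at most the subsidy for this height plus collected fees."""
    return coinbase_value <= block_subsidy(height) + total_fees

assert coinbase_is_valid(50 * 100_000_000, 0, 0)             # genesis-era reward OK
assert not coinbase_is_valid(50 * 100_000_000, 420_000, 0)   # post-halving cheat rejected
```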

Not only are miners incentivized to propose valid blocks and ensure the network’s consistency, but they are also discouraged from submitting faulty blocks to the network; they waste electricity (money) to generate the Proof-of-Work, and if they attempt foul play they end up with no compensation. In the words of Andreas M. Antonopoulos: “It simply doesn’t pay to cheat.”

Forks

The final step of the Proof-of-Work consensus mechanism is connecting the winning block to the existing blockchain. But what if two miners find the Proof-of-Work at (more or less) the same time, i.e. if there are two candidate blocks?

As we know, each miner, upon completing his block, broadcasts it to his immediate neighbors, who then validate and propagate the block across the network. Each node that receives this block and finds it to be valid incorporates it into its blockchain as a successor to the last valid block. Let’s call that last valid block “A”, and this newly mined block “B1”.

If that node later sees another block (let’s call this one “B2”) that is also valid, contains a solution to the Proof-of-Work, and also has block A listed as the parent, it connects this second candidate block on a secondary chain. This is what we call a fork: the occurrence of two valid blocks (B1 and B2) that both contain the solution to the Proof-of-Work and extend the same parent (the previous block we called A).

All miners will have both B1 and B2 incorporated into their blockchains as a fork. However, the miners that saw B1 first will immediately start building candidate blocks (by solving the Proof-of-Work for them) that reference B1 as the parent and, at the same time, the miners that received block B2 first will start building candidate blocks that reference B2 as the parent.

Let’s say, for example, that the miners that were building a block on top of B2 found a new block (let’s call that one C2) that extends the chain. They then propagate this new block across the entire network, and the rest of the miners see it as a valid solution.

All nodes that were building on top of the B2 block will simply extend the part of the chain they were working on with C2. The nodes that were building on top of B1, however, will now see two chains: A-B1 and A-B2-C2. The chain A-B2-C2 is now longer (more cumulative difficulty), so it becomes the main state of the blockchain for them as well.

All miners now start building candidate blocks that reference C2 as the parent in order to extend this main chain, and block B1 is dropped as an “orphan”. This means that the transactions within B1 will be queued up for processing (again) since the block they were a part of is no longer in the main chain.

Just as before, it is theoretically possible that the miners building on top of B1 and B2 find new blocks (C1 and C2 respectively) simultaneously, thus creating two chains: A-B1-C1 and A-B2-C2. However, the chances of this happening are very low, since Bitcoin’s block interval of 10 minutes strikes a balance between fast confirmation times and a low number of forks. If a faster block time were adopted, transactions would clear faster but forks would be more frequent; a slower block time would decrease the number of forks, but transactions would clear more slowly.
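The fork-choice logic the nodes apply can be sketched as follows (the chain representation is hypothetical; real nodes compare accumulated difficulty, which for equal per-block difficulty reduces to chain length):

```python
def chain_work(chain: list[dict]) -> int:
    """Cumulative difficulty of a chain from per-block 'difficulty' fields."""
    return sum(block["difficulty"] for block in chain)

def best_chain(chains: list[list[dict]]) -> list[dict]:
    """Fork choice: adopt the chain with the most accumulated work."""
    return max(chains, key=chain_work)

a_b1 = [{"id": "A", "difficulty": 1}, {"id": "B1", "difficulty": 1}]
a_b2_c2 = [{"id": "A", "difficulty": 1}, {"id": "B2", "difficulty": 1},
           {"id": "C2", "difficulty": 1}]
assert best_chain([a_b1, a_b2_c2]) is a_b2_c2  # B1 is orphaned, as described above
```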

Proof-of-Stake (PoS)

Proof-of-Stake was created in 2012 by Sunny King and Scott Nadal, but the concept was a frequent topic of discussion in Bitcoin circles as early as 2011. Even though Bitcoin had only a fraction of the success (and environmental impact) it has today, there were already talks about the viability of “future peer-to-peer cryptocurrencies with no dependency on energy consumption.”

In what could be considered the Proof-of-Stake white paper, King and Nadal shared their realization that “the concept of coin age can facilitate an alternative design known as proof-of-stake, to Bitcoin’s proof-of-work system.” In this alternative design, Proof-of-Work is used for the initial part of the mining process and gradually reduces in significance.

Proof-of-Stake is then used to build the security model and partake in the “minting” process of their peer-to-peer cryptocurrency called Peercoin (PPCoin or PPC for short).

Proof-of-Stake is currently used by Nxt, Lisk, BitShares and other cryptocurrencies. Ethereum is also planning on moving to Proof-of-Stake in the future.

Block creation and validation

In the original Proof-of-Stake implementation, the deciding factors in picking a single validator to create the new block are coin age and randomization.

The easiest way to define the age of a coin is through the following equation:

CoinAge = Amount × HoldingPeriod

For example, if Alice gave 5 coins to Bob, and Bob held them for 10 days, then Bob has accumulated 5×10=50 coin-days of coin age. When Bob spends those coins, we say the accumulated coin age has been consumed or destroyed.

Anyone who holds a certain amount of coins (the minimum amount varies for different cryptocurrencies) is eligible to become a validator simply by sending a special type of transaction that locks up their funds into a deposit.

Most coins that implement Proof-of-Stake have both a minimum age and a maximum age for participating in this selection; when the coins reach the maximum age, we say they’ve reached their maximum weight, and if that weight is low compared to the rest of the network, those coins may never get selected for block creation.

The issue of balancing your stake depending on the state of the network in order to avoid this problem is complex enough on its own and specific to each coin, so we won’t get into it. In any case, in the original PoS implementation, the probability of a stake of coins getting selected to create and sign a block reaches its maximum after 90 days.

Older, larger stakes have a better chance of getting selected; however, this cap serves as a measure against blockchain domination by stakes that are too large or too old. Once a certain coin stake has been used to create a block, its coin age is consumed and starts over from zero, which means the same coins cannot be used to sign a new block for at least another 30 days.
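Here is a sketch of how such a coin-age-based stake weight might be computed, using the 30-day minimum and 90-day cap mentioned above (the exact formula differs from coin to coin, so treat this as illustrative):

```python
def stake_weight(amount: float, holding_days: int,
                 min_age: int = 30, max_age: int = 90) -> float:
    """Coin-days eligible for staking: zero before min_age, capped at max_age."""
    if holding_days < min_age:
        return 0.0  # freshly moved coins can't stake yet
    return amount * min(holding_days, max_age)

assert stake_weight(5, 10) == 0                     # Bob's coins are too young
assert stake_weight(5, 120) == stake_weight(5, 90)  # weight maxes out at 90 days
```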

Chain-based vs. BFT-based Proof-of-Stake

There are different consensus algorithms in the Proof-of-Stake category, which means there are different ways to create and agree to new blocks, as well as to assign rewards to the participating validators.

  • Chain-based Proof-of-Stake mimics the mechanics of Proof-of-Work and simulates mining by pseudo-randomly selecting a validator and giving them the right to create a new block that (usually) points to the last block in the longest chain. Chain-based PoS was first introduced by Peercoin, followed by Blackcoin, the work of Iddo Bentov, Nxt, and others.
  • BFT-based Proof-of-Stake consensus algorithms build on a 30-year-old body of research on Byzantine Fault Tolerance. As we mentioned in the introduction, it has been mathematically proven that, as long as more than ⅔ of protocol participants behave honestly, conflicting blocks will not be finalized regardless of network latency. In BFT-based PoS, validators are randomly selected to propose a block. Then there is a round of voting, in which every validator votes on the proposed block. If, at the end of this process, at least ⅔ of all validators have signed off on the proposed block, it is committed to the blockchain; otherwise, a different validator proposes his block and the process starts over.

Tendermint is the first BFT-based Proof-of-Stake adaptation. The algorithm begins with a set of validators, each maintaining a full copy of the blockchain. The validators take turns proposing blocks, meaning there’s only one proposer per voting round. To ensure responsible behaviour, each proposed block is signed with the proposer’s private key. All other validators then vote on the proposed block by signing it with their private keys. If the block is deemed valid by a supermajority of validators, the round finishes and the block is added to the blockchain; if not, another round begins.

Probably the biggest opportunity for Proof-of-Stake to reach the wider population is through Ethereum’s Casper. Casper the Friendly Finality Gadget (CFFG) is a BFT-based Proof-of-Stake finality system that overlays an existing Proof-of-Work block proposal mechanism, meaning that it’s responsible for finalizing proposed blocks so that they represent the canonical transactions of the blockchain. It’s a PoW/PoS hybrid that aims to facilitate the transition to a system completely based on Proof-of-Stake.
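The supermajority rule at the heart of these BFT-style protocols can be sketched in a few lines (this is schematic and glosses over Tendermint’s actual multi-stage message flow):

```python
def supermajority(signatures: int, total_validators: int) -> bool:
    """A block is finalized only if more than 2/3 of validators signed it."""
    return 3 * signatures > 2 * total_validators

def voting_round(validators: list[str], votes: dict[str, bool]) -> bool:
    """One round: count the yes-votes and commit the block on supermajority."""
    yes = sum(1 for v in validators if votes.get(v, False))
    return supermajority(yes, len(validators))

validators = ["v1", "v2", "v3", "v4"]
assert voting_round(validators, {"v1": True, "v2": True, "v3": True})  # 3/4 commits
assert not voting_round(validators, {"v1": True, "v2": True})          # 2/4 retries
```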

Resistance to majority attacks a.k.a. 51% attacks

A 51% attack is exactly what it sounds like – an attempt to tamper with the security of a cryptocurrency network by actors controlling 51% of its resources. The exact percentage is there for algebraic, but also illustrative, purposes; if more than half of the participants in a process agree on a certain course of action, they can overrule the rest of the participants who disagree with them. When it comes to Proof-of-Work vs. Proof-of-Stake, the comparison is really simple.

If an attacker wanted to attempt a majority attack on a Proof-of-Work-based cryptocurrency network (like Bitcoin), he would have to acquire 51% of the computing power.

While this is highly unlikely, it’s not impossible; some of the largest mining pools could, in theory, join together to form a majority and attack the network.

This is actually one of the biggest flaws of Proof-of-Work-based cryptocurrencies; centralization of mining power might be closer than we think, with 70% of mining being done in China and 70% of the mining hardware being made by a single manufacturer.

For a majority attack to be carried out in a Proof-of-Stake based network, however, the attacker would have to own 51% of the coins since that’s the main resource in this consensus model. While this is possible in theory, it’s almost impossible in practice.

Let’s take Ethereum, for instance, since they’ve announced they’re going to transfer to Proof-of-Stake. Ethereum is currently worth over $75 billion. This means it would take over $38 billion to make such an attack possible and, even then, because the attacker would be the majority stakeholder in the network, the attacker would suffer the most from the attack. Such an attack would be not only extremely expensive but also self-defeating, and therefore financially infeasible.
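The back-of-the-envelope arithmetic behind that figure, as a quick sketch (the market cap is a snapshot from the time of writing, used purely for illustration):

```python
market_cap_usd = 75e9                 # Ethereum's total value at the time of writing
attack_stake = 0.51 * market_cap_usd  # coins an attacker would have to acquire
print(f"~${attack_stake / 1e9:.0f} billion needed")  # roughly $38 billion
```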

Risk of centralization

Proof-of-Work miners fight for the right to propose a block with their hashing power, while Proof-of-Stake relies on account balance. In other words, what makes mining in PoW expensive simply doesn’t play a role in PoS, and while PoS prides itself on reducing the risk of centralization by not depending on mining power, that’s not all there is to it.

In Proof-of-Stake-based cryptocurrencies, someone who holds 10% of all the coins has a 10% chance of being selected to propose a block. Now, acquiring that 10% of the currency could be considered a huge one-time investment (for the sake of this argument), but you reap the benefits of that investment over and over again.
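A minimal sketch of that stake-proportional selection (the validator names and balances are invented):

```python
import random

def pick_proposer(stakes: dict[str, float]) -> str:
    """Choose the next block proposer with probability proportional to stake."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names], k=1)[0]

stakes = {"alice": 10, "bob": 40, "carol": 50}  # alice holds 10% of all coins
picks = [pick_proposer(stakes) for _ in range(100_000)]
print(picks.count("alice") / len(picks))        # ~0.10, i.e. a 10% chance per block
```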

In other words, in Proof-of-Stake the rich get richer; bigger stakes give you a better chance of being assigned a new block, which in turn grants you a small reward (taken from the transaction fee fund), which you can use to increase your stake, making your chances of “minting” a block even bigger… This is a matter of economies of scale: larger stakes are cheaper to run, per coin, than smaller stakes. A Reddit user tackled this issue in greater detail in an r/ethereum thread:

“[…] imagine you have a PC in your house with 1 ETH and your neighbor has an identical PC setup in his house but with 2 ETH. The energy [required] to run the two setups [is] exactly the same and the overhead hardware [is] the same so the cost to run each of your mining operations is the same. However, your neighbor has 2x odds you do since he has 2x more ETH. Since each block will offer the same mining reward regardless of your stake size, the expected value (odds*payout) for each unit of ETH is the same, [meaning that] you and your neighbor both have the same expected EV [(expected value)]. However, because those machines have a cost to run, their profitabilities are different. Your neighbor has ½ [of] the cost per unit of production [that you have].”

In Proof-of-Work, in order to increase your chances of mining a block, you need to increase your hashing power. This is an ongoing investment, as you will probably have to buy better and more expensive equipment as time goes by and mining becomes computationally harder and harder — but your profits will rise too, so it’s not like Proof-of-Work doesn’t leave any space for a workaround.

This was actually listed as a benefit in Ethereum’s Proof-of-Stake FAQ: “Reduced centralization risks, as economies of scale are much less of an issue. $10 million of coins will get you exactly 10 times higher returns than $1 million of coins, without any additional disproportionate gains because at the higher level you can afford better mass-production equipment.”

Still, what is often called the greatest merit of Proof-of-Stake could actually become its biggest disadvantage, and we can only hope that developers who work on this category of consensus algorithms take the problem of centralization very seriously.

Nothing-at-Stake

The “nothing-at-stake” problem in PoS algorithms was first introduced by Jae Kwon in May 2014 as “the fallacy of false choices” problem. At its core, the nothing-at-stake problem is an incentive problem. In PoS consensus mechanisms, validators can, in theory, vote for multiple conflicting blocks at a given block height without incurring any cost for doing so.

As we’ve mentioned previously, for miners in a PoW system to mine on multiple chains, they must split their hashing power, wasting electricity in the process. Because naive PoS implementations impose no penalty for signing multiple conflicting blocks at the same time, the economically optimal strategy becomes voting on every fork you can find in order to reap more block rewards.

Because validators need to devote almost no computational power to support a fork, forks in PoS protocols could happen much more often than in PoW protocols, which would, in turn, harm the credibility of the cryptocurrency.

In Ethereum’s Proof-of-Stake FAQ, this incentive problem is explained in the following way:

“If there is an attacker, then the attacker need only overpower altruistic nodes (who would exclusively stake on the original chain), and not rational nodes (who would stake on both the original chain and the attacker’s chain), in contrast to proof of work, where the attacker must overpower both altruists and rational nodes”.

However, developers have already devised strategies to fix the nothing-at-stake problem. Slasher, for example, changes the incentive structure by introducing an explicit penalty for validators that simultaneously create blocks on multiple chains, given proof of misbehavior (i.e., two conflicting signed block headers).

This strategy basically aims to replicate the incentive structure of PoW inside PoS. In PoW, because miners need to spend electricity to mine, the penalty is external. The Slasher strategy in PoS makes the punishment explicit – if you mine blocks on the wrong chain, you will lose your staked coins.
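Schematically, a Slasher-style penalty boils down to the check below (signature verification is elided, and the field names are ours):

```python
def is_equivocation(h1: dict, h2: dict) -> bool:
    """Two different blocks signed by the same validator at the same height."""
    return (h1["validator"] == h2["validator"]
            and h1["height"] == h2["height"]
            and h1["block_hash"] != h2["block_hash"])

def slash(deposits: dict[str, float], h1: dict, h2: dict) -> None:
    """On proof of misbehavior, destroy the validator's staked deposit."""
    if is_equivocation(h1, h2):
        deposits[h1["validator"]] = 0.0  # voting on every fork now carries a cost

deposits = {"val1": 1_000.0}
slash(deposits,
      {"validator": "val1", "height": 42, "block_hash": "0xaaa"},
      {"validator": "val1", "height": 42, "block_hash": "0xbbb"})
assert deposits["val1"] == 0.0
```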

Delegated Byzantine Fault Tolerance (dBFT)

As the name suggests, dBFT is a consensus algorithm designed specifically to address the Byzantine Generals’ Problem. It was developed by Erik Zhang and is currently used by Neo, which is why we’re going to look at the way it works in the Neo network. Without further ado, here is the information you need to understand dBFT:

Nodes in Neo

In the Neo network, there are two types of nodes: ordinary and consensus nodes.

The majority of cryptocurrency holders are ordinary nodes. They are capable of exchanging and transferring their assets, but do not participate in the “voting” process (more on that later). The second type of node in Neo is called a consensus node (or bookkeeping node); these are active participants in the process of block validation, meaning they have the right to vote — something like the miners in Bitcoin.

To become a consensus node, you need to have a dedicated internet connection, special equipment, and a certain amount of GAS (Neo’s utility token that ”fuels” the network).

Consensus nodes are crucial for the consensus activity on the network. During that iterative process, each of these nodes takes turns assuming one of two roles: in every iteration, one consensus node is elected Speaker – that node is responsible for proposing a block – and the rest of the consensus nodes are Delegates – they are responsible for reaching a consensus on the proposed block.

The Algorithm

Within the consensus algorithm, there are three requirements that need to be satisfied in order to achieve Byzantine Fault Tolerance:

  1. Delegates must reach a consensus about every transaction before a block can be committed;
  2. Dishonest nodes mustn’t be able to cause the honest nodes to commit faulty transactions;
  3. The number of delegates that are in the same state (same block height and same iteration) must be above the safe consensus threshold to start a consensus activity without exposing the network to a fault;

The algorithm works in the following way: First, a consensus node receives a transaction and broadcasts it to the entire network. All nodes that receive this transaction log it into their local memory, thus initializing the first view of the consensus activity. A view is an aggregate of information necessary to carry out a single round of the consensus process.

After the entire network of available consensus nodes has received the transaction, a Speaker is randomly selected for the given iteration.

The Speaker has a limited amount of time (currently it’s 15 seconds) allocated for block generation, at the end of which he broadcasts a proposal that contains the following information: current block height, the index of the current round (since some consensus activities require multiple rounds), the index of the consensus node that was elected Speaker, the block proposal, and other metadata.

Upon receiving the proposal, the rest of the nodes (that have the role of Delegates in this iteration) review it by checking if it satisfies the network rules. For instance, the data format needs to be consistent with the system preset, the transaction in question shouldn’t be already present on the blockchain, the block shouldn’t contain double-spend transactions, etc.

Every Delegate that finds the block to be valid sends a so-called “Validated Proposal Broadcast” back to the Speaker and the other Delegates. Each Delegate then checks whether the number of “Validated Proposal Broadcasts” they received is at or above the safe consensus threshold; if it is, consensus has been reached, and the Delegate signs and publishes the block, thereby binding it to the chain.

If, however, at least 1/3 of the Delegates have found the block to be faulty, the number of “Invalidated Proposal Broadcasts” becomes big enough to compromise the current round, meaning the block is rejected, a new Speaker is selected, and another round begins.
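Put together, one view of the dBFT process might be sketched like this (the structure follows the description above; the threshold and Speaker-selection logic are simplified):

```python
import random

def dbft_view(consensus_nodes: list[str], is_valid, block: dict) -> bool:
    """One view: a Speaker proposes, Delegates vote, >= 2/3 approval commits."""
    speaker = random.choice(consensus_nodes)
    proposal = {"speaker": speaker, "block": block}
    approvals = sum(1 for node in consensus_nodes if is_valid(node, proposal))
    return 3 * approvals >= 2 * len(consensus_nodes)

def reach_consensus(consensus_nodes, is_valid, block, max_views: int = 10) -> bool:
    """Retry with a new Speaker each view until the block commits."""
    return any(dbft_view(consensus_nodes, is_valid, block)
               for _ in range(max_views))

nodes = ["n1", "n2", "n3", "n4"]
assert reach_consensus(nodes, lambda node, p: True, {"height": 1})
```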

How Do We Differentiate a Good Consensus Algorithm From a Bad One?

After exploring some of the major consensus algorithms currently dominating the blockchain world, you’re probably wondering: which consensus algorithm is the best solution to the consensus problem of distributed networks?

Although this is a legitimate question, the scientific world, ironically, hasn’t reached a consensus regarding this issue. Since the PoW concept was introduced, the world has been searching for an improved alternative consensus mechanism for distributed ledgers. Different stakeholders have their own underlying philosophies and distinctive flavors.

Some prefer ASIC-based PoW to GPU-based PoW; others prefer Delegated PoS to Casper PoS, PBFT to FBA, or opt for completely different alternative solutions such as the lesser-known Proof-of-Authority (PoA), Proof-of-Weight (PoWeight), Directed Acyclic Graphs (DAGs), and others. None of these solutions is perfect, and there are many trade-offs one has to weigh when devising the perfect consensus algorithm.

But what does this mean? Perfect for what? At first, we were aiming at better energy efficiency; then, after Bitcoin hit puberty, this criterion was overshadowed by the need for an urgent solution to the scalability problem. And after the big blockers forked off their piece of the cake and created Bitcoin Cash, the scalability problem was “resolved,” but the old problem of mining centralization remained.

The consensus algorithm is essentially the Constitution of the distributed network. If the checks and balances are not designed properly, the distributed network will generate anomalies.

Concrete criteria

For a consensus algorithm to be deemed efficient, it must satisfy the following three criteria:

  1. Decentralization

Decentralization is the chief criterion against which all distributed ledger consensus algorithms are evaluated. An efficient consensus algorithm must be able to maintain a decentralized state and resist centralization over a long period of time. Decentralization is important for the following reasons:

  • Decentralized systems have higher fault tolerance: because they rely on many separate components, they’re less likely to fail. Any node in a decentralized network can enter and exit the protocol without causing any loss or damage to the system.
  • Because decentralized systems lack sensitive central points, they have a higher attack resistance. Centralized systems can be attacked at much lower cost than a whole arrangement of nodes scattered around the world. Effective attacks on decentralized networks require highly organized and coordinated attacks from many (51% perhaps) powerful agents.
  • Decentralized networks have a higher collusion resistance due to the sheer number of unrelated participants in them. The whole idea of these systems is that they shouldn’t act as monopolies. Decentralization is essential here because cooperation between malevolent actors gets linearly harder the more the system is decentralized/distributed.

In short, decentralization is a fundamental component of highly effective distributed ledger consensus algorithms.

  2. Security

A well-designed consensus mechanism must tolerate permanent or transitional hardware malfunctions, software bugs, deviations from the protocol by different nodes in the network, malicious attacks, “nothing-at-stake” attacks, a loss of a system component due to a Byzantine Fault, and account for many other possible adverse scenarios.

The double spending problem is the most severe security breach a distributed cryptocurrency protocol can suffer, and therefore, an effective consensus mechanism should be resistant to 51% attacks, race attacks, Finney attacks, Vector76 and alternative history attacks. If a distributed network is unable to stop double spending, it’s no good at all.

  3. Strong incentives

Consensus mechanisms must be designed in such a way that they correspond to a Nash equilibrium. Named after the American mathematician John Nash, the Nash equilibrium is a game-theoretical concept where “the optimal outcome of a game is one where no player has an incentive to deviate from his chosen strategy after considering an opponent’s choice.”

In our case, this means that no deviation from the protocol should lead to a higher economic advantage than following the protocol. This is usually achieved by implementing the “carrot and stick” philosophy: in Proof-of-Work, if you try to cheat the system, all the electricity you invested in mining the fraudulent block is wasted; if you follow the rules, you have a chance of winning some bitcoins.

Proof-of-Stake works the same way but, instead of electricity, you lose the coins you staked. By formulating strong incentives, effective consensus algorithms create an ecosystem where “following the rules” is the most economically-sound decision a network participant could make.
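A toy expected-value comparison makes the point (all numbers are invented for illustration):

```python
def expected_payoff(win_prob: float, reward: float, cost: float) -> float:
    """Expected profit of a strategy: chance of the reward minus sunk cost."""
    return win_prob * reward - cost

honest = expected_payoff(win_prob=0.01, reward=12.5, cost=0.05)   # valid blocks
cheating = expected_payoff(win_prob=0.0, reward=12.5, cost=0.05)  # rejected blocks
assert honest > cheating  # "It simply doesn't pay to cheat."
```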

So, a couple of things we learned today…

  • Any good consensus algorithm has to prevail when subject to the ever-expanding plethora of problems thrown at it;
  • There is a minimum of fixed criteria that such an algorithm needs to satisfy in order to withstand those problems — one of those criteria is Byzantine Fault Tolerance, but the list goes on;
  • Proof-of-Work, Proof-of-Stake and Delegated Byzantine Fault Tolerance are examples of consensus protocols that have managed to tackle most of the issues decentralized cryptocurrencies struggle with; there is still room for progress though, different models have their own strengths and weaknesses and it’s naive to expect a single solution for everything;
  • Familiarizing yourself with the available alternatives and understanding how they work under the hood enables you to make an informed decision when choosing which cryptocurrency to use or invest in.

Hopefully you made it to the end of this article! If you’re struggling with some of the concepts explained here, don’t lose hope just yet. After all, this is only a rough summary of years and years of research, and it takes time to properly absorb.

Take your trip through the cryptocurrency rabbit hole one step at a time, and come back to brush up on all this information every once in a while; “repetitio est mater studiorum”, right? Good luck!
