Apples and oranges of blockchains.
Ever since blockchain technology became hyped, there has been a constant debate about the performance and scalability of decentralized systems. It started with comparisons of the Bitcoin mainnet's transaction throughput against that of the Visa network. Then came Ethereum, and the same comparisons were (and still are) made again.
Nowadays, as soon as a new blockchain platform or protocol is mentioned, the first question that comes up is almost always about transaction throughput (transactions per second, or TPS). This reminds me of the famous Maruti Suzuki ad (in India) showing people's obsession with the mileage of cars, no matter how feature-rich they may be.
While this comparison of average transaction throughput seems fair among public networks (Bitcoin, Ethereum, and others), it does not make sense to compare these numbers between private networks.
In public networks, all kinds of transactions are sent to the same network (contract creation, contract calls, simple transfers, etc.), and the block parameters are the same for everyone. Private chains, however, differ from each other, with their transaction sizes and types modeled for their respective use-cases. In the context of private chains, the same blockchain protocol or product can have utterly different throughput numbers for different use cases.
The point is, when evaluating blockchain technologies, we should avoid making a choice just based on transaction throughput numbers. Let me explain.
A blockchain is a distributed state machine. It starts from an initial state (genesis), and then every new transaction takes the blockchain from one state to the next. These transactions are grouped into blocks, and block production is the event that triggers the state transition.
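This state-machine view can be sketched in a few lines of Python. The account-balance state and transfer transactions below are purely illustrative, not taken from any particular chain:

```python
# A minimal sketch of a blockchain as a replicated state machine.
# State here is a simple account -> balance mapping; real chains define
# state and transactions per their use case (all names are illustrative).

GENESIS_STATE = {"alice": 100, "bob": 0}

def apply_tx(state, tx):
    """State transition function for one transfer transaction."""
    new_state = dict(state)
    assert new_state.get(tx["from"], 0) >= tx["amount"], "insufficient funds"
    new_state[tx["from"]] -= tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def apply_block(state, block):
    """Block production triggers the state transition for all its txs."""
    for tx in block:
        state = apply_tx(state, tx)
    return state

state = GENESIS_STATE
for block in [[{"from": "alice", "to": "bob", "amount": 30}],
              [{"from": "bob", "to": "alice", "amount": 10}]]:
    state = apply_block(state, block)

print(state)  # {'alice': 80, 'bob': 20}
```

The throughput question is then simply: how many times per second can this transition be applied?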
So, when we look at transactions per second, we are basically looking at the number of times the blockchain changes its state, per second. In private chains, the state and its transition function are defined as per the use-case being implemented. For example, a supply chain blockchain will have its state defined very differently than that of a financial settlements blockchain. That brings us to the question — How can we compare or generalize transactions per second for chains having differently modeled state and transactions?
There are mainly two parameters for block production: the maximum block size and the time interval between two blocks. The state of the blockchain changes when a new block is produced. Blocks can be produced faster, but then each block carries fewer transactions, and the per-block overhead of propagation and consensus starts to dominate, reducing overall transactions per second. They can be produced at a slower rate, packing in more transactions, but then the larger blocks propagate more slowly through the network (network effects), again reducing transactions per second.
What we need is a fine-tuned configuration of block parameters so that we can get the best throughput numbers. The fine-tuning of these parameters will again depend on the use-case (type and size of transactions) for which we are building the chain.
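A toy model makes the use-case dependence concrete: with the block parameters held fixed, the transaction size alone changes the achievable throughput. All sizes and intervals below are assumed numbers for illustration:

```python
# Toy model: throughput of the same chain under two different use cases.
# Block parameters and transaction sizes are illustrative assumptions.

def tps(block_size_bytes, block_interval_s, tx_size_bytes):
    """Upper-bound TPS: how many txs fit in a block, per second of interval."""
    txs_per_block = block_size_bytes // tx_size_bytes
    return txs_per_block / block_interval_s

BLOCK_SIZE = 1_000_000  # 1 MB blocks (assumed)
INTERVAL = 5            # one block every 5 seconds (assumed)

# A payments use case with small transfer transactions...
print(tps(BLOCK_SIZE, INTERVAL, tx_size_bytes=250))    # 800.0 TPS
# ...vs a supply-chain use case attaching large signed documents.
print(tps(BLOCK_SIZE, INTERVAL, tx_size_bytes=8_000))  # 25.0 TPS
```

Same protocol, same block parameters, a 32x difference in the headline number.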
In the Ethereum protocol, these block parameters are called:
- gas limit (the block size) and
- block period (the time interval).
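With these two parameters, an upper bound on throughput is simple arithmetic. The sketch below assumes roughly the 2019 mainnet values for the gas limit and block period; 21,000 gas is the fixed cost of a simple ether transfer, while the contract-call figure is an assumed average for illustration:

```python
# Back-of-the-envelope TPS for an Ethereum-style chain, from the two
# block parameters above. Gas limit and block period are assumptions
# (roughly 2019 mainnet); contract calls burn far more gas per tx.

GAS_LIMIT = 8_000_000            # block "size", measured in gas
BLOCK_PERIOD = 15                # seconds between blocks
GAS_PER_TRANSFER = 21_000        # fixed cost of a simple ether transfer
GAS_PER_CONTRACT_CALL = 100_000  # assumed average for a contract call

def max_tps(gas_per_tx):
    """Whole transactions that fit in one block, per second of period."""
    return (GAS_LIMIT // gas_per_tx) / BLOCK_PERIOD

print(round(max_tps(GAS_PER_TRANSFER), 1))       # ~25.3
print(round(max_tps(GAS_PER_CONTRACT_CALL), 1))  # ~5.3
```

The same chain advertises ~25 TPS for plain transfers but manages only a fraction of that once every transaction is a contract call.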
As mentioned above, the state of the blockchain updates when a new block is produced. But to solve the problem of who (which miner/authority/validator) produces the next block, we use consensus algorithms. Different consensus algorithms aim to solve different kinds of problems and are used across various blockchain products.
For example, the Geth and Parity implementations of the Ethereum protocol support different Proof-of-Authority (PoA) consensus algorithms: Clique and Aura, respectively. The two work very differently internally while producing a similar block production outcome.
Several other consensus algorithms are used in different blockchain products, and they play an essential role in the transactions-per-second rate. Some consensus algorithms are security-focused and slow, while others are less secure but faster. For example, Hyperledger Fabric uses a centralized ordering-service-based consensus, which is faster but less secure (non-BFT) than BFT algorithms such as Tendermint and Honey Badger. The time it takes to process a block according to the consensus rules also impacts transaction throughput.
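A crude way to see the consensus overhead is to add the consensus latency to the block interval in the throughput calculation. The numbers below are purely illustrative assumptions, not measurements of any real product:

```python
# Toy model of consensus overhead eating into throughput: effective TPS
# divides transactions per block by the block interval *plus* the time
# the consensus rounds take. All numbers are illustrative assumptions.

def effective_tps(txs_per_block, block_interval_s, consensus_latency_s):
    return txs_per_block / (block_interval_s + consensus_latency_s)

# Same block parameters, different consensus cost:
print(effective_tps(1000, 5, 0.1))  # fast centralized ordering: ~196 TPS
print(effective_tps(1000, 5, 3.0))  # multi-round BFT voting: 125.0 TPS
```

Even with identical block parameters, the chain paying for multiple voting rounds per block lands at a visibly lower throughput.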
To sum it up
The transactions-per-second throughput is based on, and impacted by, several factors that depend on the use case. Private blockchains are use-case focused, and hence there is no straight answer to the transactions-per-second question. While average throughput numbers can give us a hint when evaluating blockchain products, they can also be very misleading.
Most of the time, the throughput numbers advertised by blockchain products are based on the most straightforward transactions and highly optimized block parameters. When it comes to real use-case implementations, the transaction size and block parameters can be completely different, resulting in very different throughput numbers.
Results may vary!
Further reading:
- BLOCKBENCH: A Framework for Analyzing Private Blockchains
- PBFT vs. Proof-of-Authority: Applying the CAP Theorem to Permissioned Blockchain