Simply explained: Blockchain scalability solutions past, present, and future

A brief history of scaling blockchains

When Bitcoin first launched in 2009, it became clear that, by design, it traded off transaction speed for decentralization and security.

Each block contained ~1 MB of transactions. It took about 10 minutes to produce each new block. And it took another 45–60 minutes of confirmations before you could be sure your transaction had made it through.

The philosophy behind Bitcoin’s decentralized vision drives the tradeoffs that define proof-of-work. The 10-minute block time and small block size are enforced constraints that make it possible for a node running on a laptop in Kathmandu to have a chance of finding a block.

You wanted it to be possible for anyone, fighting through network latency and old specs to be able to maintain the integrity of the network.

The point was that we achieved decentralization & security by ensuring that anyone can process transactions.

This came at the cost of a limit on how many transactions the network could process. Bitcoin could theoretically handle roughly 7 transactions per second (tps), and in practice achieves about ~4 tps.
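That ~4 tps figure follows from simple arithmetic on the block parameters above. A back-of-envelope sketch (the average transaction size here is an illustrative assumption, not a protocol constant):

```python
# Back-of-envelope throughput estimate for a Bitcoin-like chain.
# Block size and block interval are the figures from the article;
# the average transaction size is an assumed, illustrative value.

BLOCK_SIZE_BYTES = 1_000_000      # ~1 MB per block
AVG_TX_SIZE_BYTES = 400           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # ~10 minutes per block

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_INTERVAL_SECONDS

print(f"{txs_per_block} transactions per block ≈ {tps:.1f} tps")
# → 2500 transactions per block ≈ 4.2 tps
```

Note how throughput is capped by two dials only: block size and block interval. That is exactly why the first scaling solutions, described next, focused on tweaking them.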

Scaling through simple parameter tweaks

Naturally, the first family of scalability solutions evolved around the idea that we could just stuff each block with more transactions.

Bigger blocks! Smaller transaction sizes!

To do this, we could simply make blocks bigger. Or, we could make each transaction smaller.

Examples of such implementations include Litecoin and Bitcoin Cash, which forked from Bitcoin with smaller transactions and larger blocks, respectively. In practice, these efforts yielded 2–10x tps improvements.

The fundamental issue with these scaling solutions was that each change required a hard fork. This meant the community had to rally behind the new chains, which led to limited adoption. There’s also a practical upper limit to how far we can turn these dials while still ensuring decentralization.

Scaling through off-chain computation

The next round of solutions came from the realization that not all transactions are equally important.

For example, processing a land deed agreement is likely more consequential than paying a friend back for dinner.

Many transaction types, like micropayments, can be processed off-chain, with the main chain serving as a settlement layer. For example, you can process 20k transactions in a state channel or a side chain as quickly as you wish. Then, verifying them on-chain in bulk takes just a single transaction [pictured below].

Transactions can be processed off-chain and then reconciliated on-chain

Doing things off-chain reduces the computation and storage load on the main chain, while still giving you the benefits of on-chain reconciliation over time.
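The batching idea can be sketched as a toy payment channel: many payments are netted off-chain, and only the final balances hit the main chain as a single settlement transaction. All names here are illustrative, not a real channel protocol:

```python
# Toy sketch of off-chain batching. Payments are netted instantly
# off-chain; one settlement transaction commits the net result on-chain.

from collections import defaultdict

class PaymentChannel:
    """Tracks net balance changes between parties off-chain."""

    def __init__(self):
        self.net = defaultdict(int)   # party -> net balance change
        self.count = 0                # off-chain payments processed

    def pay(self, sender, receiver, amount):
        # Processed instantly off-chain; nothing touches the main chain.
        self.net[sender] -= amount
        self.net[receiver] += amount
        self.count += 1

    def settle(self):
        # A single on-chain transaction commits the net of all payments.
        return {"type": "settlement",
                "balances": dict(self.net),
                "payments_batched": self.count}

channel = PaymentChannel()
for _ in range(20_000):               # 20k micropayments, zero on-chain load
    channel.pay("alice", "bob", 1)

tx = channel.settle()                 # one on-chain transaction
print(tx["payments_batched"], tx["balances"])
# → 20000 {'alice': -20000, 'bob': 20000}
```

Real channel designs add signed state updates and dispute periods so neither party can settle a stale balance, but the core economics are the same: 20k payments, one on-chain footprint.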

Examples of such implementations include the Lightning Network, Bitcoin’s off-chain transaction solution, which theoretically handles more than 1M tps. There’s also Raiden, which provides payment state channels on top of Ethereum and theoretically processes more than 100M tps. And notably, there’s Plasma, which spawns child chains from the main blockchain, theoretically handling unbounded tps.

Some issues with doing things off-chain were that i) these off-chain computations were often offloaded to centralized services, and ii) as a result, you lost the security guarantee of an entire ecosystem maintaining the network & adhering to an immutable security protocol.

Scaling through on-chain sharding

Additional solutions came from the insight that transactions cluster around different social communities.

For example, transactions occurring within a shipping network in Singapore, an eCommerce marketplace in Mexico, and a freelancer community in Berlin won’t often overlap.

It made sense to adopt sharding, a traditional database concept: within one blockchain, different computing resources (nodes) handle different transactions in parallel.

All transactions are processed on-chain but in different nodes

In theory, transactions are often contained in network clusters and are easy to parallelize.
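A minimal sketch of the routing side of sharding, assuming a simple hash-based rule (real sharding algorithms like Zilliqa’s are far more involved; the rule and names here are illustrative):

```python
# Minimal sketch of hash-based transaction sharding: each transaction is
# deterministically routed to one of N shards, so shards can process
# their partitions in parallel. The routing rule is illustrative only.

import hashlib

NUM_SHARDS = 4

def shard_for(sender: str) -> int:
    """Route by sender so one account's txs stay ordered within one shard."""
    digest = hashlib.sha256(sender.encode()).digest()
    return digest[0] % NUM_SHARDS

shards = {i: [] for i in range(NUM_SHARDS)}
transactions = [("singapore-shipper", "port"),
                ("mexico-shop", "buyer"),
                ("berlin-freelancer", "client"),
                ("singapore-shipper", "customs")]

for tx in transactions:
    shards[shard_for(tx[0])].append(tx)

# Both "singapore-shipper" transactions land in the same shard, so their
# relative order is preserved without any cross-shard coordination.
```

Routing by sender keeps each community’s transactions together, which is exactly the clustering intuition above; the hard part, as the next paragraph notes, is everything that crosses shard boundaries.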

Examples of such implementations include Zilliqa, which developed a complex sharding algorithm.

In practice, parallelization is very difficult. The challenges span from how to securely allocate transactions per node, to ensuring data availability among nodes, to resolving network asynchronicity issues…

Even a simple edge case, where a transaction C on node C depends on a transaction A on node A and a transaction B on node B, which in turn has dependencies of its own, compounded with the reality of network latency, becomes difficult to resolve.
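The dependency structure in that edge case can be made concrete with a small topological ordering sketch. The transaction names and the coordination scheme are illustrative; in a real sharded chain, each "visit" of a dependency means waiting on another shard’s finality across a latent network:

```python
# Sketch of the cross-shard dependency problem: transaction C cannot
# execute until A and B have finalized on their own shards, and B in
# turn waits on a further dependency D. Names are illustrative.

deps = {              # tx -> transactions it depends on
    "A": [],
    "B": ["D"],       # B has its own upstream dependency
    "C": ["A", "B"],
    "D": [],
}

def execution_order(deps):
    """Topologically order transactions; raises on dependency cycles."""
    order, done, in_progress = [], set(), set()

    def visit(tx):
        if tx in done:
            return
        if tx in in_progress:
            raise ValueError(f"dependency cycle at {tx}")
        in_progress.add(tx)
        for d in deps[tx]:
            visit(d)      # cross-shard: blocks on another shard's finality
        in_progress.discard(tx)
        done.add(tx)
        order.append(tx)

    for tx in deps:
        visit(tx)
    return order

print(execution_order(deps))  # C is forced to run after A, D, and B
```

On a single machine this resolves instantly; spread across shards, every edge in the dependency graph becomes a round trip whose messages can arrive late or out of order, which is why parallelization is so hard in practice.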

And many more solutions…

DAGs

The list of brilliant blockchain scalability solutions goes on. From blockchains that look less like chains and more like directed acyclic graphs [pictured above], to faster consensus algorithms like PoS and PoA, specifically mutations of federated BFTs and delegated BFTs that guarantee faster block production & finality, users now have a plethora of solutions to choose from.

The challenge going forward might be around adoption.

Aside from canonical chains like Bitcoin, Ethereum, and EOS, the majority of blockchains remain severely underutilized.

Going forward, each chain in the ecosystem will need to find product market fit, given the scaling tradeoffs they have made.

The important point here is that blockchains do scale.

And blockchain scalability is, arguably, not one of the biggest blockers for consumer adoption going forward, contrary to what many pundits posit.
