Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news' editorial.
The second quarter of 2025 marks a reckoning for blockchain scaling, with the cracks in the layer-2 model widening as capital continues to flow into rollups and sidechains. The original promise of L2 was simple: scale up L1. Yet costs, delays, and the fragmentation of liquidity and user experience continue to add up.
Summary
- L2 was supposed to extend Ethereum, but it introduces new problems of its own: it relies on a centralized sequencer that is a single point of failure.
- At its core, an L2 handles sequencing and state computation, then settles to L1 using optimistic or ZK rollups. Each has trade-offs: optimistic rollups have long finality, and ZK rollups are computationally expensive.
- The future of efficiency lies in separating computation from validation: centralized supercomputers for computation and distributed networks for parallel verification, achieving scalability without sacrificing security.
- The “total order” model of blockchain is outdated. Shifting to local, account-based ordering unlocks massive parallelism, ends the L2 compromise, and paves the way for a scalable, future-proof Web3 foundation.
New initiatives like stablecoin payments are beginning to question the L2 paradigm, asking whether L2 is truly secure and whether its sequencers are a single point of failure or censorship. In Web3, this often leads to the pessimistic view that fragmentation is inevitable.
Are we building our future on solid foundations or on a house of sand? L2 must face and answer these questions. After all, if Ethereum (ETH)'s base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it today would become redundant. A myriad of rollups and sidechains have been proposed as “add-ons to L1” to alleviate the fundamental limitations of the underlying L1. This is a form of technical debt: a complex, piecemeal workaround that burdens Web3 users and developers.
To answer these questions, we need to break the entire L2 concept down into its basic components and uncover a path to a more robust and efficient design.
The structure of L2
Structure determines function. This is a fundamental principle of biology, and it applies to computer systems as well. Determining the right structure and architecture for L2 requires careful consideration of its functionality.
At its core, every L2 performs two essential functions: sequencing, i.e., ordering transactions, and computing and proving new states. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. The batch is then executed, resulting in state updates (e.g., new token balances). This state must be settled on L1 via an optimistic or ZK rollup for security.
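To make the two roles concrete, here is a minimal Python sketch of a toy sequencer and executor. The transaction shape, the names `sequence_batch` and `execute_batch`, and the nonce-based ordering rule are illustrative assumptions, not the interface of any production L2.

```python
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    recipient: str
    amount: int
    nonce: int

def sequence_batch(mempool: list[Tx]) -> list[Tx]:
    # Role 1: the sequencer fixes an order for pending transactions.
    # A deterministic sort stands in for real policies (arrival time,
    # priority fees, auctions, and so on).
    return sorted(mempool, key=lambda tx: (tx.sender, tx.nonce))

def execute_batch(state: dict[str, int], batch: list[Tx]) -> dict[str, int]:
    # Role 2: executing the ordered batch yields the new state (token
    # balances here), which must then be settled on L1, either behind a
    # fraud-proof window (optimistic) or with a validity proof (ZK).
    new_state = dict(state)
    for tx in batch:
        if new_state.get(tx.sender, 0) >= tx.amount:
            new_state[tx.sender] -= tx.amount
            new_state[tx.recipient] = new_state.get(tx.recipient, 0) + tx.amount
    return new_state
```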
Optimistic rollups assume all state transitions are valid and rely on a challenge period (often seven days) during which anyone can submit proof of wrongdoing. This creates a major UX trade-off and slows down finality. ZK rollups use zero-knowledge proofs to mathematically verify the correctness of every state transition before it reaches L1, allowing for near-instantaneous finality. The trade-off is heavier computation and a more complex construction. The ZK prover itself can be buggy, with potentially disastrous results, and formally verifying provers, where possible at all, is extremely expensive.
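The difference between the two settlement paths can be sketched in a few lines. The claim and proof structures below are hypothetical stand-ins, not any rollup's actual on-chain interface.

```python
CHALLENGE_PERIOD = 7 * 24 * 3600  # the typical 7-day window, in seconds

def settle_optimistic(claim: dict, fraud_proofs: list[dict], now: float) -> str:
    # Optimistic path: the claim is presumed valid and becomes final only
    # once the challenge window closes without an accepted fraud proof.
    if any(p["refutes"] == claim["state_root"] for p in fraud_proofs):
        return "rejected"
    if now >= claim["posted_at"] + CHALLENGE_PERIOD:
        return "final"
    return "pending"

def settle_zk(claim: dict, proof_is_valid: bool) -> str:
    # ZK path: the claim carries a validity proof checked up front, so
    # finality is near-immediate; the cost moves to proof generation.
    return "final" if proof_is_valid else "rejected"
```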
Sequencing is a governance and design choice for each L2. Some prefer centralized solutions for efficiency (or perhaps for censorship capabilities), while others prefer decentralized solutions for greater fairness and robustness. Ultimately, each L2 decides how to perform its own sequencing.
Generating and validating state claims can be done far more efficiently. Once a batch of transactions is ordered, computing the next state becomes a pure computational task that can be performed by a single supercomputer focused solely on raw speed, without any decentralization overhead. That supercomputer can even be shared between L2s.
Once this new state is claimed, its validation becomes a separate, parallel process. A large network of verifiers can work in parallel to check the claims. This is also the very philosophy behind Ethereum's stateless clients and high-performance implementations like MegaETH.
Parallel verification is infinitely scalable
No matter how fast an L2 (and its supercomputer) generates claims, the verification network can always keep up by adding more verifiers. Latency here is precisely the verification time of a single claim, a fixed minimum. This is the theoretical optimum, achieved by using decentralization effectively: to verify rather than to compute.
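A rough sketch of why this scales: claims are independent, so verification distributes perfectly across workers. The body of `verify_claim` below is a trivial placeholder for whatever re-execution or proof check a real verifier would perform.

```python
from concurrent.futures import ThreadPoolExecutor

def verify_claim(claimed_state: dict[str, int]) -> bool:
    # Placeholder check: no claimed balance may be negative. A real
    # verifier would re-execute the batch or check a validity proof.
    return all(balance >= 0 for balance in claimed_state.values())

def verify_all(claims: list[dict[str, int]], n_verifiers: int) -> bool:
    # Because claims are independent, adding verifiers raises throughput
    # in step with the prover; end-to-end latency stays near the time
    # needed to verify one claim.
    with ThreadPoolExecutor(max_workers=n_verifiers) as pool:
        return all(pool.map(verify_claim, claims))
```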
Once sequencing and state validation are complete, the L2's job is almost done. The final step is to publish the verified state to the decentralized network, L1, to ensure final settlement and security.
This last step exposes the problem: blockchain is an expensive settlement layer for L2s. The main computational work is done off-chain, yet L2s pay a hefty premium for final processing on L1. They face a double overhead. L1's limited throughput is burdened by the linear ordering of all transactions combined, causing congestion and high data-posting costs. On top of that, they must endure the finality delay inherent to L1.
For ZK rollups, this delay is a few minutes. For optimistic rollups, it is compounded by the week-long challenge period. Though necessary, this is a security trade-off, and it comes at a cost.
Farewell to Web3's “total order” fantasy
Ever since Bitcoin (BTC), people have worked hard to combine all blockchain transactions into one total order. After all, we are talking about blockchain! Unfortunately, this “perfect order” paradigm is a costly fantasy and clearly overkill for L2 payments. How ironic that in one of the world's largest decentralized networks, the world's computers behave like a single-threaded desktop.
It's time to maneuver on. The long run can be native, account-based ordering, the place solely transactions that work together with the identical account must be ordered, permitting for enormous parallelism and true scalability.
Of course, a global order implies a local order, but local ordering on its own is an incredibly simple and natural solution. After 15 years of “blockchain,” it's time for us to open our eyes and build a better future. The scientific field of distributed systems has already moved from the strong consistency theory of the 1980s (which blockchain implements) to the strong eventual consistency model of 2015, which unlocks parallelism and concurrency. It's time for the Web3 industry to likewise leave the past behind and follow forward-looking scientific advances.
The days of L2 compromise are over. It's time to build a foundation designed for the future, for when the next wave of Web3 adoption arrives.
Chen Xiaohong
Chen Xiaohong is the chief technology officer at Pi Squared Inc., where he works on fast, parallel, and distributed systems for payments and settlement. His interests include program correctness, theorem proving, scalable ZK solutions, and applying these techniques to all programming languages. Xiaohong earned a bachelor's degree in mathematics from Peking University and a doctorate in computer science from the University of Illinois at Urbana-Champaign.