Ethereum L1 zkEVM: The 12-Month Plan That Rewrites L2s
The Ethereum Foundation set a one-year target to bring an L1 zkEVM online, starting with optional proof verification on mainnet. Here is what real-time proving, multi-proof clients, and native zk-rollups could mean for gas limits, MEV, and every L2 team.

Breaking: Ethereum puts a clock on L1 zkEVM
On July 10, 2025 the Ethereum Foundation published a plan to ship an L1 zkEVM within a year, starting with optional validator clients that verify multiple zero-knowledge proofs for each block rather than re-executing it. The post defines real-time proving, sets a hardware and power budget for home provers, and sketches how the network could later raise the gas limit and enable native zk-rollups once proof verification is universal. See the Foundation's plan in L1 zkEVM realtime proving.
This is the clearest signal yet that Ethereum intends to make zero-knowledge proofs a first-class part of Layer 1. It also gives builders and investors a concrete 6 to 12 month horizon. Below we translate the plan into plain English, walk through the mechanisms that matter, and lay out a builder checklist with likely winners and losers if the timeline holds.
What an L1 zkEVM actually is
Today every validator replays every transaction in a block to check it is valid. In the proposed future, a validator can instead verify a small cryptographic proof that says the block would have executed correctly. The trick is that the proof must be produced before the validator needs it. Think of it like a stadium turnstile: you can either watch each ticket being printed and matched to a seat or simply scan a cryptographic stamp you trust. Stamps are quicker at the gate, but the stamp itself must be robust.
The plan begins by letting some validators run zk clients that verify several proofs for each block. Each proof is generated by a different zk virtual machine implementation, all attesting to the same block. This multi-zkVM verification extends the client diversity principle to the proof layer. It hedges against a subtle bug in any single proving stack. If three independent teams agree a block is valid, the network can be more confident in the result with little extra bandwidth because proof verification is fast and proofs are succinct.
Two protocol ingredients make the first phase possible. First, a pipelining tweak slated for the next network upgrade creates more time between receiving a block and needing to attest to it, which the prover can use. Second, an optional zkEVM attester client is being prototyped so validators can actually exercise the option to verify proofs. Once proven in production, adoption can rise from a small minority toward a supermajority over the next year.
Real-time proving and the case for proving at home
Ethereum blocks tick every 12 seconds. Network propagation consumes roughly 1.5 seconds, leaving a practical window of about 10 seconds to generate and distribute a proof for most blocks. The working definition of real-time proving targets 10 seconds or less for 99 percent of mainnet blocks, with specific constraints: proofs under 300 KiB, at least 128-bit security in steady state, and no trusted setups. The proposal also sets consumer-like limits for home proving: less than 100,000 dollars in capital expense and less than 10 kilowatts of sustained power.
The home-proving angle matters because it preserves censorship resistance. If only a few large operators can run provers, users could be pressured or priced out of participation. With a practical ceiling on power and cost, some share of solo stakers can opt in as a last line of defense.
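The timing and hardware budget above can be restated as a quick sanity check. The constants below restate the plan's targets; treat them as inputs to measure against, not protocol guarantees.

```python
# Sanity check of the real-time proving budget. Constants restate the plan's
# stated targets; all values are inputs to measure against, not guarantees.
SLOT_SECONDS = 12.0          # one Ethereum slot
PROPAGATION_SECONDS = 1.5    # rough network propagation overhead
P99_TARGET_SECONDS = 10.0    # real-time target for 99 percent of blocks
MAX_PROOF_KIB = 300          # proof size ceiling
MIN_SECURITY_BITS = 128      # steady-state security floor

def proving_window_s() -> float:
    """Seconds left in a slot to generate and distribute a proof."""
    return SLOT_SECONDS - PROPAGATION_SECONDS

def meets_targets(p99_s: float, proof_kib: int, security_bits: int) -> bool:
    """True if a prover's measured numbers clear the plan's thresholds."""
    return (p99_s <= P99_TARGET_SECONDS
            and proof_kib <= MAX_PROOF_KIB
            and security_bits >= MIN_SECURITY_BITS)
```

The arithmetic leaves roughly half a second of slack beyond the 10-second target, which is why the definition is framed around P99 rather than every block.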
Why real-time proofs unlock higher gas limits
Gas limits are safety rails that keep worst case block verification within reach of ordinary validators. When validators re-execute, the worst case is a block that is technically valid yet very heavy to compute. Proof verification changes the math. If every validator verifies constant-time proofs, then the worst case execution cost becomes bounded by verification cost rather than arbitrarily complex execution. That allows the protocol to contemplate higher gas limits without breaking the validator set.
Ethereum is already tiptoeing up that path. After engineering improvements highlighted in recent protocol updates, mainnet's gas limit increased to 45 million, with a research track targeting 100 million and beyond through client hardening, repricing, and benchmarking of worst case scenarios. See the Foundation's overview in Protocol Update 001: Scale L1. For a cross-chain comparison of aggressive throughput work, see Firedancer's path to higher throughput.
Put simply, proof-based validation makes higher throughput safer. It is not automatic though. The protocol needs to coordinate a supermajority of validators to prefer proof verification, and the proof systems must meet real-time and security thresholds in the wild.
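The bounded-by-verification argument can be illustrated with a toy cost model. The millisecond figures below are hypothetical placeholders, not client benchmarks; the point is the shape of the curves, not the numbers.

```python
# Toy model: validator cost under re-execution vs proof verification.
# Both constants are illustrative placeholders, not measured benchmarks.
REEXEC_MS_PER_MGAS = 10.0   # hypothetical replay cost per million gas
VERIFY_MS = 50.0            # hypothetical near-constant verification cost

def reexecution_ms(block_gas: int) -> float:
    """Replay cost grows with gas, so the worst-case block sets the limit."""
    return block_gas / 1_000_000 * REEXEC_MS_PER_MGAS

def verification_ms(block_gas: int) -> float:
    """Verification stays roughly flat no matter how heavy the block was."""
    return VERIFY_MS

# Replay cost scales linearly with the gas limit; verification does not.
# That flat curve is the headroom that makes higher gas limits thinkable.
```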
Native zk-rollups from the same proofs
Once proof verification is standard for L1 blocks, the same proofs can be re-used by a new precompile, often described as EXECUTE, to enable native zk-rollups. In that model, a rollup posts its block data and a proof attesting to the correct execution. The L1 can then execute effects verified by the proof without re-running the rollup state transition. This makes rollup verification cheaper and tighter with L1, and it removes friction created by bespoke verifier contracts on L1 today. The key shift is reuse. The network already trusts the proofs for L1 execution, so it can trust them for rollups without additional overhead.
This is not the same as enshrining any single rollup. It is enshrining the mechanism by which rollups prove things to L1. That keeps competition alive among rollups and zkVM teams while giving them a shared highway to L1. For a complementary approach to stateless designs on another stack, see Polygon's stateless roadmap.
How this could reorder L2 economics
If Ethereum can safely raise the gas limit and accept proofs natively, the fee market changes. Several second-order effects follow.
- Sequencer margins compress. Many rollups charge a spread between what users pay and what the rollup spends to post data and proofs to L1. If L1 capacity rises and proof verification becomes cheaper per unit of execution, the marginal cost of rollup throughput falls. In competitive markets, that tends to bleed into lower user fees or shared rebates.
- Data availability premiums shrink for general purpose rollups. If L1 can carry more calldata or blobs at stable prices, the value proposition of paying for external data availability networks can narrow for use cases that do not strictly need separate trust domains. Specialized data availability chains can still win for app specific or compliance constrained workloads, but the default flips from external to native where economics allow.
- Bridge risk and latency improve. Native proof reuse reduces the number of moving parts in rollup verification on L1. Fewer custom verifiers and faster L1 confirmation of rollup effects can reduce both the risk surface and the time to finality experienced by users moving assets across layers.
- Proving supply becomes a competitive market. Teams behind zkVMs such as Scroll, Polygon zkEVM, Taiko, Kakarot, RISC Zero, and Succinct can compete on latency, cost, and reliability to help meet the P99 10 second target. A multi-proof world values diversity and redundancy as a feature. The most Type 1-compatible stacks, meaning those that deviate least from Ethereum semantics, have an advantage because they minimize edge cases.
- L2 MEV becomes more L1-like. If rollup effects are confirmed faster on L1 and proofs are verified in the same epoch cadence, cross-domain arbitrage can compress. Some value migrates to L1 enshrined or shared markets for order flow and blockspace. This does not eliminate L2 MEV, but it can move parts of it closer to L1's builder-relay ecosystem. Compare with Arbitrum's Timeboost auctions.
- Restaking strategies reshuffle. If proof verification, data availability, and attestation duties consolidate around L1 mechanisms, some Actively Validated Services built on restaked security may face tighter margins or need to differentiate on specialized services like app specific privacy, co-processing, or regulated infrastructure.
Winners and losers if the timeline holds
Winners
- Type 1 friendly zkVMs. Teams that can consistently produce sub 10 second proofs at or near 128-bit security, with open source code, small proof sizes, and no trusted setups, will be first in line for inclusion in the multi-proof set. They become part of the core protocol's security budget rather than a peripheral service.
- Rollups that align with L1. General purpose rollups that keep their state transition compatible with EVM semantics and rely on L1 blobs and proof reuse can ride the capacity increase without changing their developer experience.
- Users and app developers. More blockspace and tighter L1–rollup coupling reduce fees and shorten time to finality, which lowers the cognitive tax of choosing where to deploy or transact.
- Client and infra teams. Execution clients that harden worst case performance and networks that deliver predictable propagation benefit from the 100 million gas target. RPC providers that can handle higher concurrent state queries also stand to gain as throughput climbs.
Losers
- Divergent EVM variants. Rollups that have strayed far from EVM equivalence for performance or feature reasons may struggle to reuse L1 proofs or to join a multi-proof set. They will either carry their own verification tax or refactor.
- External data availability by default. Chains that pitch cheaper data availability as the core benefit will need sharper positioning. Many apps will choose L1 blobs if the price and reliability are good enough.
- MEV silos. L2 specific MEV strategies that depend on long confirmation windows or bespoke sequencer policies face compression as L1 finality and L2–L1 proof coupling tighten.
- Restaking for baseline security. If L1 absorbs more of the verification and data duties, restaked services that simply replicate those duties may see yields fall. The durable wedge is specialized services like co-processors, privacy layers, and enterprise specific controls.
The builder checklist for the next 6 to 12 months
Use this as a practical to-do list, not a vision board. Each item includes what to do, why it matters, and how to start.
- Instrument for P99 proof latency
- What to do: If you run or depend on a zkVM, set up telemetry that measures proof generation latency per mainnet block, with clear P50, P95, and P99 slices, and energy usage per proof.
- Why: The target is 10 seconds for P99. You need to know if your stack clears that bar on real mainnet blocks, not only on synthetic traces.
- How: Record prover start and end timestamps keyed to block number, annotate with block gas used and opcode mix, and export to a public dashboard. Include alarms for tail outliers.
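A minimal telemetry sketch for those percentile slices might look like the following. The `ProofSample` fields are illustrative, not a standard schema; the point is that P50/P95/P99 and tail alarms fall out of a few lines once samples are keyed to blocks.

```python
import statistics
from dataclasses import dataclass

@dataclass
class ProofSample:
    """One prover run keyed to its block. Fields are illustrative."""
    block_number: int
    latency_s: float   # prover end timestamp minus start timestamp
    gas_used: int      # annotate with block weight for later correlation
    joules: float      # energy per proof, for wattage dashboards

def latency_percentiles(samples):
    """P50/P95/P99 slices over measured proof latencies, in seconds."""
    latencies = sorted(s.latency_s for s in samples)
    qs = statistics.quantiles(latencies, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

def tail_outliers(samples, threshold_s: float = 10.0):
    """Blocks whose proofs missed the real-time target; wire these to alarms."""
    return [s.block_number for s in samples if s.latency_s > threshold_s]
```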
- Architect for multi-proof diversity
- What to do: Design interfaces so your block proof can co-exist with at least two other proofs from different teams. Normalize metadata and verification APIs.
- Why: The attester client will verify multiple proofs per block. Being easy to compose raises your chance of inclusion.
- How: Converge on open formats for proof objects, versioning, and verification precompiles. Build a compatibility test suite with fixtures.
- Prepare for gas limit increases
- What to do: Load test your nodes, indexers, and RPC endpoints at 2x current mainnet state size and 100 to 150 million gas per block.
- Why: The Scale L1 workstream is benchmarking exactly these conditions. If your stack breaks here, your users will feel it first. See Protocol Update 001: Scale L1.
- How: Replay mainnet traces into a forked devnet with accelerated slot times. Profile disk seeks, state access patterns, and EVM hotspots. Budget for faster disks and more memory bandwidth.
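The sustained throughput that load test implies is simple to compute. The sketch below assumes 12-second slots and uses a speedup factor to model accelerated slot times during replay.

```python
# Sizing the replay load for a devnet test at elevated gas limits.
# Assumes 12-second slots; `speedup` models accelerated slot times.
SLOT_SECONDS = 12

def required_mgas_per_s(gas_limit: int, speedup: float = 1.0) -> float:
    """Sustained execution throughput the stack must absorb, in Mgas/s."""
    return gas_limit / 1_000_000 / SLOT_SECONDS * speedup
```

At a 150 million gas limit replayed at 4x speed, the stack must sustain 50 Mgas/s of execution, which is the kind of number that exposes disk seek and state access bottlenecks quickly.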
- Prototype native zk-rollup verification
- What to do: Build a minimal rollup that posts data and reuses an L1 execution proof through an EXECUTE style precompile in a devnet.
- Why: When proof verification is standard on L1, native rollups become the path of least resistance. Early movers will iron out tooling.
- How: Map your rollup's state transition to L1 proof semantics, then design a root update contract that calls the precompile and checks lineage. Track gas and failure modes.
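The root update logic can be prototyped off-chain before any devnet exists. In the sketch below, every name (`verify_execute`, `RollupState`) is hypothetical, and the precompile call is a stub: no EXECUTE interface is standardized yet.

```python
# Sketch of an EXECUTE-style root update, assuming a devnet precompile that
# verifies a reused L1 execution proof. All names here are hypothetical.

class RollupState:
    """Minimal canonical-root holder for the sketch."""
    def __init__(self, root: str):
        self.root = root

def verify_execute(pre_root: str, post_root: str, data_hash: str,
                   proof: bytes) -> bool:
    """Stand-in for the hypothetical precompile call. A real devnet would
    verify the proof; this placeholder just rejects empty proofs."""
    return len(proof) > 0

def update_root(state: RollupState, pre_root: str, post_root: str,
                data_hash: str, proof: bytes) -> None:
    """Root update with a lineage check: the proof must extend the current
    canonical root, mirroring the checklist's 'checks lineage' step."""
    if state.root != pre_root:
        raise ValueError("lineage mismatch: proof does not extend canonical root")
    if not verify_execute(pre_root, post_root, data_hash, proof):
        raise ValueError("proof rejected by precompile")
    state.root = post_root
```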
- Reprice your economics model
- What to do: Simulate fee curves under higher L1 gas limits and lower per unit proof costs. Include sequencer revenue, blob pricing, and proof provider contracts.
- Why: Your margins can evaporate if competitors pass savings to users faster than you do.
- How: Build a simple model that takes demand elasticity and competitor behavior as inputs. Pair it with live telemetry from your network.
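A minimal version of that model fits in a few lines. The constant-elasticity demand curve and every parameter below are illustrative assumptions; the exercise is to replace them with estimates from your own telemetry.

```python
# Minimal fee model with demand elasticity. All parameters are illustrative
# inputs; substitute live telemetry for real numbers.
def demand_tx(fee_usd: float, elasticity: float = 1.2,
              base_tx: float = 1_000_000) -> float:
    """Constant-elasticity demand: lower fees induce more transactions."""
    return base_tx * fee_usd ** (-elasticity)

def sequencer_profit(fee_usd: float, l1_cost_usd: float,
                     elasticity: float = 1.2) -> float:
    """Profit = per-transaction spread times induced demand."""
    return (fee_usd - l1_cost_usd) * demand_tx(fee_usd, elasticity)

# Scenario: per-transaction L1 costs drop 5x. Holding fees keeps the spread;
# cutting fees chases volume. Which wins depends on the elasticity, which is
# exactly the parameter worth estimating from live data.
hold = sequencer_profit(fee_usd=0.10, l1_cost_usd=0.01)
cut = sequencer_profit(fee_usd=0.03, l1_cost_usd=0.01)
```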
- Rethink MEV and order flow
- What to do: Model how faster L1 confirmation and proof reuse affect L2 MEV strategies and cross-domain arbitrage.
- Why: Some L2 MEV will compress, and some will migrate toward L1 shared markets. You want to be on the side of that migration. Compare with Arbitrum's Timeboost auctions.
- How: Backtest against historical blocks using an assumed 10 second proof cadence and compare extractable value under different sequencing policies.
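One starting point for that backtest is a stylized relationship between the confirmation window and extractable value. The exponential form and the half-life below are modeling assumptions for illustration, not empirical results.

```python
import math

# Toy model of cross-domain extractable value as a function of the
# confirmation window. The functional form is an assumption, not data.
def extractable_value(base_usd: float, window_s: float,
                      half_life_s: float = 60.0) -> float:
    """Shorter windows leave less time for cross-domain arbitrage to persist,
    modeled as shrinkage relative to a long-window baseline."""
    return base_usd * (1.0 - math.exp(-window_s / half_life_s))

# Compare a long challenge-style window against a 10-second proof cadence.
slow = extractable_value(100.0, window_s=600.0)
fast = extractable_value(100.0, window_s=10.0)
```

Swapping this curve for one fitted to historical blocks is the substance of the backtest; the skeleton just makes the comparison explicit.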
- Decide your data availability stance
- What to do: If you rely on external data availability, articulate the specific conditions where it still beats L1 blobs once L1 scales.
- Why: The default choice will shift toward L1 for many general purpose use cases.
- How: Publish a transparent calculator that shows total costs and risk tradeoffs for your users.
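A skeleton of that calculator is below. Prices are hypothetical inputs to be replaced with live blob and external-DA quotes, and the comparison deliberately covers raw posting cost only; trust-domain and compliance tradeoffs belong in a separate column of the published calculator.

```python
# Skeleton of a transparent DA cost calculator. Prices are hypothetical
# inputs; feed in live blob and external-DA quotes for real comparisons.
def posting_cost_usd(bytes_posted: int, price_per_kib_usd: float) -> float:
    """Raw cost to post a payload at a quoted per-KiB price."""
    return bytes_posted / 1024 * price_per_kib_usd

def cheaper_option(bytes_posted: int, blob_per_kib: float,
                   external_per_kib: float):
    """Which venue wins on raw posting cost alone. Trust-domain and
    compliance tradeoffs must be listed separately."""
    blob = posting_cost_usd(bytes_posted, blob_per_kib)
    ext = posting_cost_usd(bytes_posted, external_per_kib)
    return ("l1_blobs", blob) if blob <= ext else ("external_da", ext)
```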
- Energy and hardware planning
- What to do: For teams running provers, plan power delivery, heat extraction, and hardware amortization on a 10 kilowatt and 100,000 dollar envelope.
- Why: The home proving target is explicit. It shapes your bill of materials and your ability to distribute provers.
- How: Treat provers like small data center nodes. Use redundant circuits and monitor watts per proof alongside latency.
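The watts-per-proof metric and the envelope check are straightforward to wire into the same dashboard. The caps below restate the plan's stated limits.

```python
# Track energy alongside latency against the home-proving envelope.
# The two caps restate the plan's stated limits.
POWER_CAP_W = 10_000       # sustained power ceiling
CAPEX_CAP_USD = 100_000    # capital expense ceiling

def joules_per_proof(avg_power_w: float, proof_latency_s: float) -> float:
    """Energy consumed per proof at a given sustained draw."""
    return avg_power_w * proof_latency_s

def within_envelope(avg_power_w: float, capex_usd: float) -> bool:
    """True if a prover deployment fits the on-prem budget."""
    return avg_power_w <= POWER_CAP_W and capex_usd <= CAPEX_CAP_USD
```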
- Compliance narrative for enterprises
- What to do: If you sell to enterprises, turn the L1 zkEVM roadmap into a risk narrative that explains why proof-based validation improves predictability and auditability.
- Why: Predictable finality and native verification reduce operational variance, which audit teams like.
- How: Provide worked examples that show fixed verification time even for complex blocks.
- Community and client collaboration
- What to do: Contribute to discussions on proof formats, open source licenses, and client integration tests.
- Why: The bar for inclusion in the multi-proof set is as much about openness and reliability as it is about raw speed.
- How: Engage with client teams to upstream test vectors and to validate your verifier behavior under adversarial blocks.
Risks and how to read them
- Tail risk on proofs. Even if P99 is under 10 seconds, the remaining 1 percent matters. Builders should implement graceful fallbacks for tail blocks and plan for mitigation in future hard forks.
- Security assumptions and cryptography churn. Moving from 100-bit security in the early months to 128-bit in the long term is sensible, but you must monitor the concrete parameters chosen by proof systems and avoid wrappers that rely on trusted setups.
- Network effects of gas increases. Throughput improvements can stress RPC nodes, indexers, and wallets before they benefit users. Operators should harden their stacks now.
- Centralization pressure. Cloud proving is cheap and convenient, which can concentrate production. The home proving target and an emphasis on fully open source code are the counterweights. Teams should design their deployment model to hit the on-prem envelope.
What to watch between now and mid 2026
- zkEVM attester client trials. The first public mainnet trials will show how optional verification behaves. Expect an initial small cohort of validators before broader adoption.
- Proof speed and power disclosures. The most credible zkVM teams will publish transparent latency and wattage numbers on mainnet blocks, with reproducible setups.
- Gas limit steps and client hardening. Each safe increase, coupled with fewer worst case stalls, signals readiness for a larger jump. See Protocol Update 001: Scale L1 for the framing.
- Early native rollup demos. Watch for devnets that reuse L1 execution proofs via precompile and report end-to-end costs.
The bottom line
Ethereum's L1 zkEVM plan narrows the scaling debate from if to how fast and how safely. Real-time, home-capable proving and multi-zkVM verification give the network a path to raise gas limits without losing its decentralization story. Native zk-rollups built on the same proofs can tighten the loop between L1 and L2.
Builders do not need to bet on every detail to act. Instrument your provers. Design for multi-proof. Load test for 100 million gas. Prototype native rollup verification. Price your product for lower costs and faster finality. The work is concrete and the clock is running.