Ethereum Foundation Maps Path To zkEVM Proofs On Mainnet L1

bitcoinist · Published on 2026-01-16 · Last updated on 2026-01-16

Abstract

The Ethereum Foundation has outlined a detailed plan to enable Ethereum's mainnet (L1) to validate blocks using zkEVM proofs, reducing the need for validators to re-execute every transaction. The proposal, shared by EF Co-Executive Director Tomasz K. Stańczak, involves engineering work across execution and consensus clients, new proving infrastructure, and security processes. Key milestones include creating a standardized "ExecutionWitness" data structure per block, developing a zkEVM guest program for stateless validation, and updating consensus clients to verify zk proofs during block validation. The plan also emphasizes operational readiness, including proof generation integration, GPU testing, benchmarking, and security measures like reproducible builds and formal threat models. A major dependency is ePBS (enshrined Proposer-Builder Separation), which would extend the time available for proof generation from 1–2 seconds to 6–9 seconds and is targeted for deployment in mid-2026. If implemented, this would make proof-based validation a practical option on L1, though proving times and operational complexity remain key challenges.

The Ethereum Foundation has published a step-by-step plan to let Ethereum’s main chain validate blocks using zkEVM proofs, reducing the need for validators to re-run every computation themselves. The proposal, shared via X on Jan. 15 by Tomasz K. Stańczak, Co-Executive Director at the Ethereum Foundation, lays out the engineering work needed across Ethereum’s execution and consensus clients, plus new proving infrastructure and security processes.

Ethereum L1 Moves Toward zk Proof-Based Validation

The Ethereum Foundation first announced its "zk-first" approach back in July of last year. Today, Ethereum's validators typically check a block by re-executing its transactions and comparing results. The plan proposes an alternative: validators could instead verify a cryptographic proof that the block's execution was correct.

The document summarizes the intended pipeline in plain terms: an execution client produces a compact “witness” package for a block, a standardized zkEVM program uses that package to generate a proof of correct execution, and consensus clients verify that proof during block validation.
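The three stages above can be sketched in code. This is an illustrative stand-in only: the function names and data shapes are invented for the example, and a hash commitment plays the role of the zk proof, which real zkEVM systems replace with a succinct cryptographic proof.

```python
# Toy sketch of the witness -> proof -> verify pipeline described in the plan.
# All names and data shapes here are illustrative, not actual client APIs.
import hashlib
import json

def build_witness(block: dict) -> dict:
    """Execution client: package the state data a block's execution touches."""
    return {"block_hash": block["hash"], "touched_state": block.get("accesses", [])}

def prove_execution(block: dict, witness: dict) -> str:
    """zkEVM guest program: here a hash commitment stands in for a real proof."""
    payload = json.dumps({"block": block["hash"], "witness": witness}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_proof(block: dict, witness: dict, proof: str) -> bool:
    """Consensus client: accept the block only if the proof checks out."""
    return prove_execution(block, witness) == proof

block = {"hash": "0xabc", "accesses": ["0xdead"]}
witness = build_witness(block)
proof = prove_execution(block, witness)
assert verify_proof(block, witness, proof)
```

The key property the plan relies on is visible even in this toy: verification is a cheap check against the proof, while the expensive work (re-execution, or in reality proof generation) happens once, off the validator's critical path.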

The first milestone is creating an “ExecutionWitness,” a per-block data structure containing the information needed to validate execution without re-running it. The plan calls for a formal witness format in Ethereum’s execution specifications, conformance tests, and a standardized RPC endpoint. It notes that the current debug_executionWitness endpoint is already “being used in production by Optimism’s Kona,” while suggesting a more zk-friendly endpoint may be needed.
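For illustration, a request against such an endpoint might look like the JSON-RPC call below. The method name debug_executionWitness comes from the plan; the parameter shape (a hex block number) and the rest of the envelope are assumptions for the sketch, not a documented interface.

```python
# Hypothetical JSON-RPC request for a per-block execution witness.
# Only the method name is taken from the plan; parameters are assumed.
import json

def witness_request(block_number: int, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for a witness at a given block."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "debug_executionWitness",
        "params": [hex(block_number)],  # block number as a 0x-prefixed hex string
    })

request_body = witness_request(19_000_000)
```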

A key dependency is adding better tracking of which parts of state a block touches, via Block Level Access Lists (BALs). The document says that as of November 2025, this work was not treated as urgent enough to be backported to earlier forks.

The next milestone is a “zkEVM guest program,” described as stateless validation logic that checks whether a block produces a valid state transition when combined with its witness. The plan emphasizes reproducible builds and compiling to standardized targets so assumptions are explicit and verifiable.
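The stateless-validation idea can be shown with a toy state model. The hashing and balance-transfer logic below are simplified stand-ins for Ethereum's Merkle-Patricia trie and EVM execution; only the shape of the check (witness plus block reproduces the claimed post-state root) mirrors the plan.

```python
# Toy stateless validation: re-derive the post-state root from the
# pre-state carried in the witness. The "state" is a plain balance map
# and the "root" a SHA-256 digest, stand-ins for Ethereum's real trie.
import hashlib
import json

def state_root(state: dict) -> str:
    """Commit to a state snapshot with a deterministic hash."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_transfers(pre_state: dict, txs: list) -> dict:
    """Apply simple (sender, receiver, amount) transfers to a balance map."""
    state = dict(pre_state)
    for sender, receiver, amount in txs:
        if state.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

def guest_program(witness: dict, claimed_post_root: str) -> bool:
    """Stateless check: does witness + transactions yield the claimed root?"""
    post = apply_transfers(witness["pre_state"], witness["txs"])
    return state_root(post) == claimed_post_root

witness = {"pre_state": {"alice": 10, "bob": 0}, "txs": [("alice", "bob", 3)]}
assert guest_program(witness, state_root({"alice": 7, "bob": 3}))
```

Note the program never reads global state: everything it needs arrives in the witness, which is what makes its execution provable inside a zkVM.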

Beyond Ethereum-specific code, the plan aims to standardize the interface between zkVMs and the guest program: common targets, common ways to access precompiles and I/O, and agreed assumptions about how programs are loaded and executed.

On the consensus side, the roadmap calls for changes so consensus clients can accept zk proofs as part of beacon block validation, with accompanying specifications, test vectors, and an internal rollout plan. The document also flags execution payload availability as important, including an approach that could involve “putting the block in blobs.”

The proposal treats proof generation as an operational problem as much as a protocol one. It includes milestones to integrate zkVMs into EF tooling such as Ethproofs and Ere, test GPU setups (including “zkboost”), and track reliability and bottlenecks.

Benchmarking is framed as ongoing work, with explicit goals like measuring witness generation time, proof creation and verification time, and the network impact of proof propagation. Those measurements could feed into future gas repricing proposals for zk-heavy workloads.
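A minimal harness for that kind of measurement might look like the sketch below; the workload passed in is a placeholder, not real witness or proving code.

```python
# Hypothetical micro-benchmark harness for the metrics the plan names
# (witness generation, proof creation, proof verification). The measured
# workload here is a placeholder.
import time

def measure(fn, *args, repeats: int = 5) -> float:
    """Return the best wall-clock time over several runs, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

elapsed = measure(sum, range(100_000))
```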

Security is also marked as perpetual, with plans for formal specs, monitoring, supply-chain controls like reproducible builds and artifact signing, and a documented trust and threat model. The document proposes a “go/no-go framework” for deciding when proof systems are mature enough for broader use.

One external dependency stands out: ePBS, which the document describes as necessary to give provers more time. Without it, the plan says the prover has “1–2 seconds” to create a proof; with it, “6–9 seconds.” The document adds a two-sentence framing that captures the urgency: “This is not a project that we are working on. However, it is an optimization that we need.” It expects ePBS to be deployed in “Glamsterdam,” targeted for mid-2026.

If these milestones land, Ethereum would be moving toward proof-based validation as a practical option on L1, while the timing and operational complexity of proving remain the gating factors.

At press time, ETH traded at $3,300.

ETH faces the 0.618 Fib, 1-week chart | Source: ETHUSDT on TradingView.com

Related Questions

Q: What is the main goal of the Ethereum Foundation's new proposal regarding zkEVM proofs?

A: The main goal is to enable Ethereum's main chain to validate blocks using zkEVM proofs, so that validators can verify cryptographic proofs of correct execution instead of re-executing every computation themselves.

Q: What is an 'ExecutionWitness' as described in the plan?

A: An 'ExecutionWitness' is a per-block data structure containing the information needed to validate execution without re-running it. The plan calls for a formal witness format in Ethereum's execution specifications, conformance tests, and a standardized RPC endpoint to accompany it.

Q: Why are Block Level Access Lists (BALs) important for this proposal?

A: Block Level Access Lists (BALs) enable better tracking of which parts of state a block touches, which is a key dependency for generating the execution witness needed for zkEVM proof validation.

Q: What role does ePBS play in the implementation of zkEVM proofs on L1?

A: ePBS (enshrined proposer-builder separation) is necessary to give provers more time, extending the proof creation window from 1–2 seconds to 6–9 seconds; the document treats it as an essential optimization for the plan.

Q: How does the proposal address security concerns related to zkEVM proof validation?

A: Through formal specifications, monitoring, supply-chain controls like reproducible builds and artifact signing, a documented trust and threat model, and a 'go/no-go framework' for deciding when proof systems are mature enough for broader use.
