Your Chain, Your Rules | Offchain Labs Unveils the Latest Arbitrum Technical Roadmap to Drive Innovation

深潮 TechFlow | Published 2024-08-21, updated 2024-08-21

This article introduces the roadmap Offchain Labs plans to deliver, to help more developers and innovators turn their blockchain visions into reality.

Summary: Your chain, your rules. As Arbitrum sees broad adoption by applications, infrastructure providers, and Orbit chain builders, we are hard at work on a series of technical updates. These updates are dedicated to ensuring Arbitrum's usability, interoperability, and utility, and to continuing to drive Arbitrum toward mass adoption. This article introduces the roadmap we plan to deliver, to help more developers and innovators turn their blockchain visions into reality.

Your Chain, Your Rules

As we chart our technical direction for the year ahead, Offchain Labs remains committed to one of our most important core values: your chain, your rules.

We have always believed that blockchains are building a better internet, one centered on users and developers. With Arbitrum technology, builders can create powerful on-chain applications and vibrant blockchain ecosystems, users and institutions can securely self-custody within a native digital economy, and communities have the power to govern themselves.

With this in mind, we encourage everyone who interacts with an Arbitrum chain to take the long view, stay curious, and move forward with confidence, because this is the principle our technology is built to serve.

The Roadmap

When we launched Arbitrum on August 31, 2021 (now celebrated as Arbitrum's anniversary), we solved the first major obstacle to blockchain adoption: scalability. Over the past three years we have kept shipping new capabilities and built the most technically robust and open blockchain platform.

As blockchain technology expands across industries and spawns new ones, builders and users face a range of challenges: basic usability, driving adoption, strong decentralization guarantees for users, and an effective infrastructure layer. These are the problems we are committed to solving.

By simplifying interaction with Arbitrum chains, we aim to close the gap between builders and users and drive broader adoption. Interoperability is central to our work, enabling seamless, secure interaction between chains. We are abstracting away the complex decision of which stack or chain to use and creating a unified system.

It is that simple: your chain, your rules. You have the freedom to innovate and build, on a foundation you can trust.

Developer Experience, User Experience, and Adoption

To drive adoption, we need to make building on blockchains more expressive, more efficient, and more accessible to developers. This is why Stylus exists.

Stylus goes beyond the limits of building on Ethereum by letting developers write in WASM-compatible languages such as Rust, C, and C++.

Solidity holds an important place in our history and will continue to play a role in our future; Arbitrum's EVM support is not going away. At the same time, we must recognize that the number of Solidity developers and the size of its existing codebase are far smaller than those of traditional programming languages. Stylus lets us be more inclusive, welcoming a growing population of developers without compromising the experience of those who love the EVM.

Stylus meets the growing demand for efficient, secure smart-contract languages while expanding the design space for increasingly expressive on-chain applications. Stylus is also a highly efficient execution environment that can further reduce gas fees for complex smart contracts: with Stylus, compute and memory costs can drop significantly.

And you don't have to wait…

If you have been part of the Arbitrum ecosystem for a while, you know that major updates ship on Arbitrum's anniversary (although this year it falls on a US holiday weekend, so the celebration will come a few days later).

On Arbitrum's anniversary, Arbitrum Stylus will go live on the Arbitrum One and Nova mainnets, opening a new phase of ecosystem innovation and taking the developer and user experience to the next level. This is the largest execution-layer upgrade our industry has ever seen.

Decentralization

Decentralization and trustlessness are the core ideals of blockchain technology, and they are central to how Offchain Labs has developed, and plans to develop, the Arbitrum stack. We have multiple efforts underway to strengthen the infrastructure and ensure that decentralization is not just a theoretical concept but a practical reality across the ecosystem:

  • BoLD (H2 2024): Beyond improving security, BoLD enables secure, decentralized validation, moving Arbitrum closer to the final stage in L2Beat's stage framework: a Stage 2 rollup.

  • Censorship Timeout (H2 2024): Building on BoLD, the censorship timeout limits the damage that repeated censorship or sequencer downtime, possibly caused by an attack, can inflict on an Arbitrum chain. It gives Arbitrum chains stronger censorship-resistance guarantees and improves users' access to their funds.

  • Decentralized Sequencer (expected 2025): Decentralizing the Arbitrum sequencer is the final step on Arbitrum's decentralization roadmap. A decentralized sequencer distributes responsibility for transaction ordering across a broader set of network participants, reducing the risk of censorship attacks and improving reliability.

At Offchain Labs, we believe in the core ideals of blockchain technology and build products for decentralized adoption. Arbitrum Orbit chains can adopt the features mentioned in this article as they become available, and the Arbitrum DAO can vote to apply any or all of these upgrades to the chains it governs (Arbitrum One and Arbitrum Nova).

Interoperability and Scalability

The launch of Arbitrum Orbit opened a new era, enabling teams to build innovative solutions for their specific use cases.

Arbitrum Orbit lets developers customize their chains however they see fit. Our principle has always been: your chain, your rules. As builders focus on pushing boundaries, we are committed to delivering significant performance and interoperability improvements by solving foundational engineering challenges. Our long-term strategy is to improve both vertical and horizontal scaling, enabling developers to accomplish more.

To unify the Arbitrum ecosystem (Arbitrum Orbit, Arbitrum One, Arbitrum Nova, and Ethereum), we are building efficient, frictionless cross-chain interoperability. Optimistic rollups offer the lowest costs and the greatest flexibility, but the main obstacle to scaling them horizontally is the confirmation delay introduced by the challenge period. Longer confirmation times mean worst-case cross-chain communication can take days, or must rely on third parties.

We are developing several interoperability solutions to shorten confirmation delays and enable horizontal scaling:

  • Fast Withdrawals (Q3 2024): The upcoming fast-withdrawals feature will let AnyTrust chains bypass the confirmation delay and settle to their parent chain within minutes. These fast confirmations will let sibling L2s (or L3s) communicate with each other quickly, allowing developers to split workloads and scale horizontally.

  • Chain Clusters (2025): Looking ahead to next year, we plan to further expand the developer toolbox by shipping chain clusters to scale Orbit chains horizontally. By letting multiple Orbit chains tightly align their ecosystems and infrastructure, chain clusters can cut cross-chain communication time from minutes to near-instant.
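The fast-path idea behind fast withdrawals can be sketched in a few lines of Python. This is a minimal illustration, not Arbitrum code: the committee quorum threshold, the 7-day fallback window, and all names (`Withdrawal`, `settlement_delay`) are assumptions made for the example.

```python
from dataclasses import dataclass

CHALLENGE_PERIOD_SECONDS = 7 * 24 * 3600  # assumed optimistic challenge window
FAST_CONFIRMATION_QUORUM = 2 / 3          # assumed committee signature threshold

@dataclass
class Withdrawal:
    committee_size: int  # members of the AnyTrust-style committee
    attestations: int    # members who signed off on the claimed state

def settlement_delay(w: Withdrawal) -> int:
    """Fast path: enough committee attestations let the withdrawal settle
    immediately; otherwise it falls back to the full challenge period."""
    if w.committee_size > 0 and w.attestations / w.committee_size >= FAST_CONFIRMATION_QUORUM:
        return 0
    return CHALLENGE_PERIOD_SECONDS
```

The key design point the sketch captures is that the challenge period is only the worst case: a sufficiently attested state can be treated as final right away, which is what lets sibling chains exchange messages in minutes rather than days.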

Performance and Efficiency

Since its beginnings in 2014, Arbitrum's design has focused on performance and efficiency. Now we want to deliver the next iteration of gains in computational efficiency and performance through foundational optimizations to execution.

Multi-client Support (H1 2025):

Arbitrum Nitro is the node software powering all Arbitrum-based chains. It is built on Geth, the Golang implementation of Ethereum's L1 execution specification. Since Arbitrum Nitro debuted on August 31, 2022, many new execution-layer (EL) client implementations have launched or improved significantly, each with its own value proposition and optimization goals. As the stability and quality of these alternative clients improve, Offchain Labs has been working to prepare the Arbitrum stack to support them.

In evaluating other clients, our primary goal is to optimize current block production speed, which over time will:

(1) reduce hardware costs for existing node operators, and

(2) pave the way for safely raising the speed limit (i.e., target throughput) of Arbitrum chains.

We have begun testing and benchmarking the performance of multiple clients, including Paradigm's newly released Reth 1.0, Erigon 3.0, and Nethermind, with the goal of delivering a production-ready multi-client implementation in 2025 and streamlining the process of adding further clients in the future.

Although our current analysis shows that some alternative clients still trail Geth on certain performance benchmarks, we believe it is prudent to prepare them for Arbitrum adoption as these clients continue to optimize.

Adaptive Pricing (H1 2025):

On current EVM chains, the gas limit is set to keep nodes from over-consuming scarce computational resources. This means a chain's gas limit is always a worst-case estimate, designed to protect the node's most constrained resource from transaction load.

Rather than taking this worst-case approach, adaptive pricing considers the resources actually consumed and sets the gas limit dynamically in response. With adaptive pricing, the chain raises fees and throttles resource consumption only when a specific resource approaches its real limit, rather than based on a hypothetical maximum that other transactions might use.

Adaptive pricing will further enable scaling, letting smart contracts make fuller use of the resources a node provides and get closer to the true gas limit. Overall performance improves without adding node capacity to the network. Adaptive pricing also improves resilience to extreme traffic patterns (such as inscriptions), where usage shifts dramatically but temporarily, by dynamically lowering the gas limit only when necessary.
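The per-resource idea above can be sketched as a tiny pricing function. Everything here is an illustrative assumption, not an Arbitrum parameter: the resource names, capacities, the 80% threshold, and the linear ramp are invented for the example; the point is only that fees react to the single most-loaded real resource instead of a global worst case.

```python
# Assumed per-block capacity of each node resource (arbitrary units).
RESOURCE_CAPACITY = {"compute": 100.0, "memory": 100.0, "state_io": 100.0}

def adaptive_fee_multiplier(usage: dict, threshold: float = 0.8,
                            slope: float = 5.0) -> float:
    """Raise fees only when the most-loaded resource nears its true limit.

    usage maps resource name -> units actually consumed in the last block.
    Below `threshold` utilization the multiplier stays at 1.0 (base fee);
    above it, fees ramp up linearly with the overshoot.
    """
    peak = max(usage[r] / RESOURCE_CAPACITY[r] for r in usage)
    if peak <= threshold:
        return 1.0
    return 1.0 + slope * (peak - threshold)
```

Under this scheme a block that is compute-heavy but light on state I/O pays no penalty for the unused resources, which is exactly the gap between worst-case gas limits and limits driven by measured consumption.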

Zero-Knowledge Proofs

Offchain Labs is committed to scaling Ethereum with the best possible tech stack. By continually working at the frontier of available technology, we can identify improvements to fold into our scaling solutions.

Today, from the standpoint of stability, maturity, cost, and security, Arbitrum Nitro is clearly the best stack for scaling Ethereum, but our research team has identified several paths where zero-knowledge (ZK) technology can be used effectively.

One particularly active area of research is ZK. In a 2023 Medium post and in recent talks at EthCC and SBC, our Chief Scientist Ed Felten presented a hybrid architecture illustrating how ZK can be integrated into Arbitrum chains.

ZK + Optimistic Hybrid Proofs:

In the Arbitrum rollup and dispute-resolution protocols, ZK proofs could eventually be used to instantly confirm assertions, serving as an optional fast path to confirmation on the parent chain. If no ZK proof is provided, optimistic proofs remain available. This would give users and developers on Arbitrum chains on-demand access to very fast native interoperability.
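The hybrid confirmation rule described above reduces to a small decision function. This is a sketch of the idea only; the function name, parameters, and the 7-day window are assumptions for illustration, not the protocol's actual interface.

```python
CHALLENGE_PERIOD_SECONDS = 7 * 24 * 3600  # assumed optimistic challenge window

def can_confirm(posted_at: float, now: float,
                has_valid_zk_proof: bool = False,
                challenged: bool = False) -> bool:
    """Hybrid fast path: a valid ZK proof confirms the assertion immediately;
    without one, the assertion confirms optimistically only after it has sat
    unchallenged for the full challenge period."""
    if has_valid_zk_proof:
        return True   # fast path: validity is proven, no need to wait
    if challenged:
        return False  # the dispute must be resolved first
    return now - posted_at >= CHALLENGE_PERIOD_SECONDS
```

Because the ZK proof is optional, nothing about the optimistic path changes for chains that never post one; proving is purely an accelerant, which is what makes the design a hybrid rather than a migration.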

Looking Ahead

At Offchain Labs, we strive to create solutions before problems arise.

This year, the tremendous effort behind the three products we are shipping (Stylus, BoLD, and Timeboost) is proof of the team's foresight. These innovations will make blockchains more accessible while upholding the core value of decentralization.

We have a strong team of researchers, engineers, product managers, partners, marketers, and operations professionals who keep pushing the boundaries of Web3 and blockchain technology. We build our products on one premise: give you infrastructure that simply works, so that developers and participants can innovate better.

There is more on our roadmap, but we wanted to share these major upcoming milestones to give readers a clearer view of what lies ahead.
