Polygon deploys Madhugiri hard fork, aims for 33% throughput boost

Cointelegraph | Published: 2025-12-09 | Updated: 2025-12-09

Introduction

Polygon has deployed the Madhugiri hard fork, targeting a 33% increase in network throughput and a reduction in block consensus time to one second. The upgrade adds support for three Fusaka Ethereum Improvement Proposals (EIP-7823, EIP-7825 and EIP-7883), which improve efficiency and security by limiting the gas that heavy mathematical operations can consume. It also introduces a new transaction type for Ethereum-to-Polygon bridge traffic and adds flexibility for future upgrades. The enhancement aims to better support high-frequency use cases such as stablecoins and real-world asset (RWA) tokenization. The hard fork follows the recent Heimdall 2.0 upgrade, which significantly reduced finality times, despite a brief disruption in September caused by a bug that was later resolved.

Blockchain network Polygon has rolled out its latest protocol upgrade, known as the Madhugiri hard fork, which aims to achieve a 33% increase in network throughput and reduce block consensus time to one second.

Polygon core developer Krishang Shah said on X that the update includes support for three Fusaka Ethereum Improvement Proposals, specifically EIP-7823, EIP-7825 and EIP-7883. These EIPs make heavy mathematical operations more efficient and secure by limiting the amount of gas they consume.

They also prevent single transactions from consuming excessive computing power, helping the network run more smoothly and predictably.
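The limits the three EIPs impose can be sketched as simple per-transaction checks. The sketch below is illustrative, not Polygon client code: the constants reflect the EIP specifications as published (EIP-7825 caps a transaction's gas at 2^24; EIP-7823 bounds each ModExp precompile input at 1,024 bytes; EIP-7883 raises the minimum ModExp price from 200 to 500 gas), and the helper names are assumptions for the example.

```python
# Illustrative sketch of the per-transaction limits from the three Fusaka EIPs.
# Constants come from the EIP texts; function names are hypothetical.

TX_GAS_CAP = 2**24          # EIP-7825: per-transaction gas cap (16,777,216)
MODEXP_MAX_INPUT = 1024     # EIP-7823: max bytes per ModExp input (8,192 bits)
MODEXP_MIN_GAS = 500        # EIP-7883: raised minimum price for ModExp calls

def validate_tx_gas(gas_limit: int) -> bool:
    """Reject transactions that could monopolize a block's compute (EIP-7825)."""
    return 21_000 <= gas_limit <= TX_GAS_CAP

def validate_modexp_input(base: bytes, exp: bytes, mod: bytes) -> bool:
    """Bound heavy modular-exponentiation inputs (EIP-7823)."""
    return all(len(x) <= MODEXP_MAX_INPUT for x in (base, exp, mod))

def modexp_gas_floor(computed_cost: int) -> int:
    """Apply the raised minimum ModExp price (EIP-7883)."""
    return max(computed_cost, MODEXP_MIN_GAS)
```

Together, these checks bound the worst-case compute any single transaction can demand, which is what makes block times more predictable.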

The upgrade introduces a new transaction type for Ethereum-to-Polygon bridge traffic and builds in flexibility for future upgrades. Polygon previously said the update makes throughput increases as easy as “flipping a few switches.”

“We are also decreasing the consensus time to 1 second, so blocks can now be announced in 1 second if ready, instead of waiting the full 2 seconds,” Shah wrote.
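The timing change Shah describes can be modeled as replacing a fixed two-second slot with an announce-when-ready rule subject to a one-second floor. This is a hypothetical sketch of that logic only; the names and structure are assumptions, not actual Bor or Heimdall code.

```python
# Hypothetical model of the consensus-timing change: a producer announces a
# block as soon as it is ready, but no earlier than the 1-second floor,
# instead of always waiting the full 2-second slot.

OLD_SLOT = 2.0   # previous fixed consensus time, in seconds
NEW_FLOOR = 1.0  # Madhugiri: announce after 1 s if the block is ready

def announce_delay(build_time: float, floor: float = NEW_FLOOR) -> float:
    """Seconds the producer waits before announcing a block."""
    return max(build_time, floor)
```

Under this model, a block built in 0.4 seconds is announced at the 1-second floor rather than after the old 2-second slot, while a slow 1.5-second build is announced as soon as it finishes.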

Source: Krishang Shah

New update reinforces Polygon for stablecoins and RWAs

With Madhugiri now live, Polygon aims to reinforce its infrastructure while materially improving its performance. These are prerequisites for high-frequency and high-trust use cases, such as real-world asset (RWA) tokenization and stablecoins.

Aishwary Gupta, the global head of payments and RWAs at Polygon Labs, previously forecast a “stablecoin supercycle.”

Gupta said there will be a surge of “at least 100,000 stablecoins” in the next five years. However, he added that this would not just be about minting tokens; they must have corresponding utility, such as yield.

Gupta also advocated for more transparency and accountability in the RWA sector. He previously argued that RWA numbers are meaningless if the assets cannot be audited, settled or traded.

“When transparency and accountability are established, RWAs will reach even greater heights, unlocking trillions in institutional capital,” he wrote.

Related: Polygon co-founder mulls resurrecting MATIC a year after POL rebrand

Hard fork follows major Heimdall upgrade

The upgrade comes on the heels of rapid prior improvements. On July 10, Polygon deployed Heimdall 2.0, dubbed by Polygon Foundation CEO Sandeep Nailwal as the network’s “most technically complex” hard fork since its launch.

The update reduced transaction finality times from one to two minutes to roughly five seconds.

However, on Sept. 10, the network experienced a significant disruption when a bug caused finality delays of 10 to 15 minutes, affecting validator sync, remote procedure call services and third-party tooling. Despite this, the team assured the community that block production was continuing.

On Sept. 11, the Polygon Foundation announced that the consensus and finality functions had been restored through a hard fork. With the update, nodes were no longer stuck, while checkpoints and milestones were finalized as expected.

Magazine: Ethereum’s Fusaka fork explained for dummies: What the hell is PeerDAS?
