Psy Protocol Achieves 521,000 TPS on a Live Proof-of-Work Network, Offers $100,000 Bounty to Anyone Who Can Prove the Results Invalid

marsbit · Published 2026-02-14 · Last updated 2026-02-14

Hong Kong | February 11, 2026 — Psy Protocol today announced that it has achieved 521,000 transactions per second (TPS) on a live, on-chain verifiable Proof-of-Work (PoW) network. The team stated that this result, achieved without sacrificing decentralization, security, or privacy, surpasses all publicly benchmarked performances of mainstream Proof-of-Stake (PoS) and Proof-of-Work chains to date.

The stress test was conducted on thousands of Google Cloud instances to simulate a high-concurrency, internet-scale operating environment. Every transaction in the benchmark is backed by cryptographic proof, and the complete dataset has been published for independent verification.

To demonstrate confidence in the results, Psy Protocol has publicly posted a US$100,000 bounty, to be awarded to any individual or team that can invalidate this throughput result based on the published proof materials.

"This is not a devnet demo, nor a theoretical projection," said Carter Feldman, Founder and CEO of Psy Protocol. "Every single result is verifiable. If we are wrong, the math will show it—and we will pay $100,000 to the person who shows it."

Why 521,000 TPS Matters Now

High-throughput benchmarks are common in the crypto industry; Psy Protocol argues this result is structurally significant because of the use-case context it addresses.

Blockchains were originally designed around human transaction patterns: sporadic activity, manual approvals, and low concurrency. That model grows increasingly mismatched with a future in which millions of autonomous AI agents transact, collaborate, and settle continuously at machine speed.

When networks designed for human-scale usage encounter sustained machine-grade demand, the result is often network congestion, soaring fees, and cascading bottlenecks. Psy Protocol views 521,000 TPS as the "baseline" infrastructure required for a machine-native economy, not a performance ceiling.

How the Architecture Works

Most blockchains require every node to re-execute every transaction, a design that imposes a hard architectural ceiling on throughput. Psy Protocol removes this bottleneck through four synergistic design choices:

  • Parallel State Architecture (PARTH): Each user operates in isolated state partitions. This eliminates global state contention, allowing thousands of state transitions to be processed simultaneously without conflict.
  • Client-Side Proof Generation: Transaction execution and proof generation are performed on the user's device, with sensitive data always remaining under the user's control. Miners are only responsible for verifying and aggregating proofs, not re-executing transactions, thus eliminating redundant computation across the entire network.
  • Recursive Zero-Knowledge Proofs: Individual transaction proofs are recursively folded into a single succinct proof per block. As transaction volume grows, verification costs grow logarithmically rather than linearly, meaning massive throughput increases do not require proportional increases in resources.
  • Horizontal Scaling via Realms: The network scales by adding parallel processing domains ("Realms") and proof aggregation capacity. Throughput increases linearly with added infrastructure, rather than being limited by a fixed architectural cap.
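The partitioning idea behind PARTH can be sketched in a few lines. This is a toy illustration only — the names (`Partition`, `apply_batch`) are hypothetical, and real partitions would hold proofs and contract state rather than a bare balance — but it shows why disjoint per-user state removes lock contention: each batch touches exactly one partition, so batches can run concurrently with no shared locks.

```python
from concurrent.futures import ThreadPoolExecutor

class Partition:
    """Isolated per-user state: transactions confined to one
    partition never contend with transactions in another."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance
        self.nonce = 0

    def apply(self, delta):
        # A state transition scoped entirely to this partition.
        self.balance += delta
        self.nonce += 1

def apply_batch(partition, deltas):
    for d in deltas:
        partition.apply(d)
    return partition

# Thousands of partitions could run in parallel; four shown here.
partitions = [Partition(owner=f"user{i}", balance=100) for i in range(4)]
batches = [[10, -5], [20], [-30, 5], [0]]

# No locks: the partitions are disjoint, so concurrent batches
# can never race on the same state.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(apply_batch, partitions, batches))

print([p.balance for p in results])  # [105, 120, 75, 100]
```

Because no two batches share state, the scheduler is free to place partitions on any core or machine, which is the same property that lets Realms scale horizontally.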

Psy Protocol states that 521,000 TPS reflects the current test configuration, not a ceiling; higher throughput is achievable by scaling parallel proof-generation capacity.

Verification: Open Data, Verifiable on Consumer Hardware

Because verification relies on succinct recursive proofs rather than full re-execution, Psy Protocol states that any combined proof from the benchmark can be independently verified on consumer-grade hardware—including limited-performance devices like a Raspberry Pi.
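The cost shape of that claim can be illustrated with a Merkle-style stand-in for recursive proof folding. This is not Psy's actual ZK scheme — real recursive SNARKs fold proofs, not hashes — but it shows the same structure: N leaf "proofs" are folded pairwise into one root commitment, and checking any leaf against the root touches only log2(N) fold steps rather than re-executing all N transactions.

```python
import hashlib

def h(*parts):
    # Hash stand-in for a proof-folding step.
    return hashlib.sha256(b"|".join(parts)).hexdigest().encode()

def fold(leaves):
    """Fold leaf commitments pairwise until a single root remains."""
    level = [h(leaf) for leaf in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def path(tree, index):
    """Sibling hashes needed to re-derive the root: log2(N) of them."""
    siblings = []
    for level in tree[:-1]:
        siblings.append(level[index ^ 1])
        index //= 2
    return siblings

def verify(leaf, index, siblings, root):
    # Cheap check: log2(N) hashes, no re-execution of other leaves.
    node = h(leaf)
    for sib in siblings:
        node = h(node, sib) if index % 2 == 0 else h(sib, node)
        index //= 2
    return node == root

txs = [f"tx{i}".encode() for i in range(8)]
tree = fold(txs)
root = tree[-1][0]
proof = path(tree, index=5)

print(len(proof))                      # 3 == log2(8) fold steps
print(verify(txs[5], 5, proof, root))  # True
```

The logarithmic path length is what makes verification feasible on a Raspberry Pi: doubling transaction volume adds one fold step, not double the work.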

The complete test methodology, ZK circuit data, and combined proofs have all been open-sourced and are available at:

https://st8.psy.xyz/explorer

Applications Feasible at This Scale

With sustained throughput exceeding 500,000 TPS, the following new types of on-chain activity become feasible:

  • High-frequency micropayments between autonomous agents
  • Continuous clearing markets with real-time settlement replacing discrete batch settlement
  • High-density coordination among large-scale AI agent swarms operating without human intervention
  • Keyless agent execution via programmable signature circuits, eliminating reliance on human-controlled private keys or third-party custody

About Psy Protocol

Psy Protocol is building a Proof-of-Work smart contract platform for the "agentic internet." Its architecture combines the security and decentralization benefits of Proof-of-Work with the throughput and fee efficiency historically associated with Proof-of-Stake systems. Psy employs a "Proof-of-Useful-Work (PoUW)" consensus model, where miners perform cryptographically productive work—namely, aggregating and verifying zero-knowledge proofs—rather than arbitrary hash puzzle computations.

Related Questions

Q: What TPS (transactions per second) did Psy Protocol achieve on a live Proof-of-Work network, and what bounty is offered for disproving the result?

A: Psy Protocol achieved 521,000 TPS on a live, verifiable Proof-of-Work network. It has offered a $100,000 bounty to anyone who can prove the result invalid based on the published cryptographic proofs.

Q: According to the article, why is achieving 521,000 TPS particularly significant in the current context of blockchain technology?

A: It addresses the future need for a "machine-native economy." Traditional blockchains designed for sporadic human transactions are inadequate for the continuous, high-speed activity of millions of autonomous AI agents. The result represents a baseline for the infrastructure required to support such activity without congestion or high fees.

Q: Name two of the four key architectural designs that Psy Protocol uses to remove the throughput bottleneck found in most blockchains.

A: Two of the four designs are: 1. Parallel State Architecture (PARTH), which isolates state partitions to eliminate global state contention, and 2. Client-side proof generation, where users execute transactions and generate proofs on their own devices, so miners only verify and aggregate proofs instead of re-executing transactions.

Q: How can the results of Psy Protocol's benchmark test be independently verified, and what kind of hardware is sufficient for this verification?

A: The results can be independently verified using the complete test methodology, ZK circuit data, and combined proofs that have been open-sourced. Verification can be performed on consumer-grade hardware, including limited devices like a Raspberry Pi, because it relies on checking succinct recursive proofs rather than fully re-executing all transactions.

Q: What consensus model does Psy Protocol use, and what useful work do miners perform instead of solving arbitrary hash puzzles?

A: Psy Protocol uses a Proof-of-Useful-Work (PoUW) consensus model. Instead of arbitrary hash calculations, miners perform cryptographically productive work by aggregating and verifying zero-knowledge proofs.

