Behind DeepSeek V4's Stunning Debut: Silicon Valley Is 'Building Walls,' China Is 'Paving Roads'

marsbit · Published 2026-04-26 · Last updated 2026-04-26

Abstract

China's AI landscape is witnessing a strategic divergence from Silicon Valley’s closed-source competition to a collaborative open-source ecosystem. On April 24, DeepSeek released V4, a top-ranked open-source model on Hugging Face, featuring breakthroughs like million-token context length with minimal KV cache and native support for domestic chips like Huawei’s Ascend. Similarly, Kimi’s K2.6, released days earlier, also adopted open-source principles. Unlike U.S. giants such as OpenAI and Anthropic—locked in revenue disputes and tactical product clashes—Chinese firms embrace shared innovation. DeepSeek and Kimi openly build on each other’s advances, like the MLA architecture and Muon optimizer, avoiding redundant R&D and driving down costs. DeepSeek V4 focused on pushing base model capabilities, while Kimi specialized in Agent-based applications. Although U.S. firms lead in revenue and valuation, China’s open-source models achieve comparable performance at a fraction of the cost (e.g., DeepSeek V3 trained for $5.58M vs. GPT-5’s $500M+). With token usage growing exponentially, China’s collaborative model promises scalable, affordable AI built on domestic hardware, shaping a more accessible path to AGI.

By Alter

On the morning of April 24th, the long-awaited DeepSeek V4 finally made its appearance.

That same day, DeepSeek-V4-Pro shot straight to the top of the Hugging Face open-source model leaderboard, with two widely praised "bombshell innovations":

First, million-token-level ultra-long context with a KV cache only 10% the size of V3.2's, which Amazon engineers praised as a fix for the HBM shortage;

Second, adaptation to domestic chips: DeepSeek collaborated closely with Huawei during R&D and promptly supported domestic chips such as Ascend and Cambricon.

Coincidentally, ranked second on the Hugging Face open-source model leaderboard was Kimi K2.6, which was released and open-sourced late on April 20th.

If this were happening across the Pacific, the "clash" of two trillion-parameter models would inevitably lead to mutual attacks over valuations and commercial territory. Domestically, however, a completely different scene unfolded: no drama of exposing each other's secrets, no undercurrents of PR warfare, and even a mutual "swap" of technology at the foundation layer.

Behind this "unusual" situation lies a divergence in AI technology paths between China and the U.S.: Silicon Valley is frantically "erecting high walls," trying to protect vested interests through closed-source models; Chinese large model vendors, however, are choosing to "tear down the walls," moving toward collaborative evolution on the soil of open source.

01 Silicon Valley Trapped in a "Game of Thrones"

Unlike the open-source approach flourishing domestically, Silicon Valley's AI leaders—OpenAI, Anthropic, and Google's Gemini—are all staunch advocates of closed-source models.

As cutting-edge technological innovations are locked away in their respective data centers, under the pressure of computing costs and market expectations, the "Silicon Valley spirit" known for openness and collaboration is gradually fading. Players inevitably find themselves in a zero-sum "game of thrones."

Over the past two years, technical "shadow wars" have escalated into public spats. The most typical tactic is "spotlight stealing": rushing out a major update of one's own at a competitor's key launch moment to blunt its momentum has become routine in Silicon Valley.

As early as May 2024, OpenAI and Google released new AI products simultaneously, one claiming GPT-4o led the world, the other touting the Gemini family's coverage of the entire ecosystem. Eventually the CEOs of both companies could no longer sit still and publicly mocked each other on social media.

It's not just a "tangle" with Google; the rivalry between OpenAI and Anthropic has also intensified: on April 16th, just after Anthropic released its new model Claude Opus 4.7, OpenAI announced a major update to Codex over two hours later, proclaiming "Codex for (almost) everything." It was clear to everyone that the timing was no coincidence but a carefully planned "snipe" by OpenAI against Anthropic.

Beyond the war of words in the court of public opinion, outright "exposure" campaigns against one another have also become the norm in Silicon Valley.

Anthropic proudly announced on April 7th that its annualized revenue had reached $30 billion, successfully surpassing OpenAI's $25 billion.

A week later, OpenAI's Chief Revenue Officer stated bluntly in an internal letter to all employees: Anthropic's claimed $30 billion in annualized revenue was seriously inflated because it used the "gross method," fully counting the share given to cloud service providers like Amazon and Google into its total revenue, resulting in an overestimation of about $8 billion.

Undermining a competitor in an internal all-hands letter is uncommon in the tech industry; the aim was simply to tell investors that Anthropic's growth myth is inflated.

And once hostility breeds, it permeates every decision.

After Anthropic "fell out" with the Pentagon for refusing to delete specific security clauses from a contract, OpenAI announced within hours that it had reached a cooperation agreement with the U.S. Department of Defense.

During the 2026 Super Bowl, Anthropic ran a high-profile commercial whose message was "Advertising is entering the AI field, but it won't enter Claude." This was essentially a direct challenge to OpenAI, which had just begun testing ad features...

Why have former "brothers-in-arms" come to such loggerheads?

The root lies in the inherent logic of the closed-source business model: closed source survives by building moats, and building moats requires blocking technology diffusion and monopolizing the most advanced productivity. Coupled with incompatible technical routes and opposing product narratives, this naturally forms a Nash equilibrium: whoever "ceases fire" first watches their brand narrative collapse, so everyone sinks deeper into a quagmire of mutual attrition.

02 The "Collaborative Evolution" of the Open-Source Camp

Turning the focus back to China, the script unfolds completely differently.

Rewind to just over a year ago: the emergence of DeepSeek-R1 slammed the brakes on the frantic large-model startup race, with the "Six Little Tigers" finalists the first to feel it. The biggest difference from Silicon Valley is that DeepSeek did not play the shark that eats every fish in the pond; it acted instead like a catfish that activated the entire Chinese large-model ecosystem, leading everyone to embrace open source.

A direct example is Moonshot AI (月之暗面), maker of Kimi, whose growth trajectory overlaps heavily with DeepSeek's: both began as startup teams in 2023, both keep extremely small, talent-dense teams, and both are firm believers in scaling laws.

In July 2025, Moonshot AI released the world's first trillion-parameter open-source model, Kimi K2, stating openly in its technical report that it adopted the MLA architecture open-sourced by DeepSeek. For large models, the biggest nightmare of processing ultra-long text is the memory wall; the disruptive feature of MLA is that it cleverly achieves a staggering KV-cache compression rate of over 93%.
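To see why a compression rate like that matters at million-token scale, consider a back-of-envelope comparison between caching full per-head K/V vectors and caching a single compressed latent vector per token, which is the essence of MLA. The configuration below (layer count, head dims, latent width, fp16 storage) is an illustrative assumption, not DeepSeek's or Moonshot AI's actual setup:

```python
def kv_cache_bytes(n_layers, n_tokens, kv_dim_per_token, bytes_per_elem=2):
    # Standard attention caches both K and V for every layer and token.
    return n_layers * n_tokens * 2 * kv_dim_per_token * bytes_per_elem

# Illustrative config (NOT an actual model's): 60 layers, 1M-token context,
# 128 heads x 128 dims per head for vanilla K/V, versus a single 2048-dim
# compressed latent per token for an MLA-style cache.
vanilla = kv_cache_bytes(60, 1_000_000, 128 * 128)  # full K and V, fp16
mla = 60 * 1_000_000 * 2048 * 2                     # one latent per token, fp16
print(f"vanilla KV cache:  {vanilla / 2**30:,.0f} GiB")
print(f"MLA-style latent:  {mla / 2**30:,.0f} GiB")
print(f"compression:       {1 - mla / vanilla:.1%}")
```

Under these toy numbers the full cache runs to terabytes per million-token sequence while the latent cache fits on a single node, which is why this line of work eases HBM pressure.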

With this "industry standard" contributed by DeepSeek, large-model teams like Moonshot AI no longer needed to reinvent the wheel and quickly drove down inference costs.

The story didn't stop there.

The DeepSeek V4 technical documentation details the model's architecture. One important upgrade was switching the optimizer for most modules from AdamW to Muon, achieving faster convergence and better training stability.

The Kimi K2.6 technical documentation also mentions the Muon optimizer, crediting it with a 2x efficiency improvement at the same training volume.

The Muon optimizer, cited by both models, was first proposed by independent researcher Keller Jordan in a blog post in late 2024. The Moonshot team, also vexed by AdamW, made key engineering improvements to Muon in early 2025, adding capabilities such as weight decay and update-RMS control, and named the result MuonClip.

Moonshot AI was the first to validate Muon's stability at scale, on Kimi K2, achieving zero loss spikes throughout pre-training. DeepSeek in turn adopted the validated optimizer when training the V4 large model.
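Both reports point at the same core idea: instead of Adam-style per-coordinate scaling, Muon orthogonalizes the momentum matrix (via a Newton-Schulz iteration) before applying it as the weight update. The sketch below is a minimal NumPy rendition of that idea, using the quintic-iteration coefficients commonly cited from Keller Jordan's blog post; the `muon_step` wrapper, its hyperparameters, and the shapes are illustrative assumptions, not DeepSeek's or Moonshot's production code.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5):
    # Quintic Newton-Schulz iteration: drives the singular values of G
    # toward ~1 without an explicit (expensive) SVD. Coefficients are the
    # commonly cited (3.4445, -4.7750, 2.0315) from the Muon blog post.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)  # normalize so singular values <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:                       # keep the Gram matrix small
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def muon_step(W, grad, momentum, lr=0.02, beta=0.95):
    # SGD-momentum accumulation, then an orthogonalized update direction.
    momentum = beta * momentum + grad
    W = W - lr * newton_schulz_orthogonalize(momentum)
    return W, momentum
```

MuonClip's additions described above (weight decay, update-RMS control) would layer on top of this basic step.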

It's important to note that the "collaborative evolution" of open-source large models has not fallen into homogeneity but is moving towards a path of "harmony in diversity."

For example, DeepSeek-V4 focuses on hard-won gains in base-model capability, further raising the performance ceiling of global open-source models and giving the whole industry a foundation that rivals closed-source flagships; Kimi K2.6 digs deep into Agent engineering and deployment, solving the pain points of long-horizon autonomous execution and clearing the critical path for large models to enter real production scenarios.

Throughout this process there were no protracted commercial negotiations and no tense patent battles. In the open-source camp, technological innovation flows freely like water: whoever does it best, everyone uses it.

Absorbing nutrients from the open-source ecosystem and complementing one another's technical routes, China's large-model vendors are showing the world, through action, another possibility beyond Silicon Valley.

03 The U.S. Is "Building Walls," China Is "Paving Roads"

While marveling at the collaborative evolution of open source, one must squarely face a commercial reality.

Currently, OpenAI and Anthropic's annualized revenues have both reached the tens of billions of USD, while the revenue of domestic leading large model vendors has just crossed the threshold of annualized $100 million.

OpenAI's valuation in the secondary market is about $880 billion, Anthropic's valuation has soared to around $1 trillion, while the valuations for Kimi and DeepSeek's latest funding rounds are $18 billion and $20 billion respectively.

Some exclaim that Chinese large-model vendors are undervalued, while others argue: "The ability to translate technical reputation into real money is the life-and-death test facing Chinese companies." For a time, debate over the "cost-effectiveness" of open source has raged.

To see the endgame, one might start from the competition stages of large models:

The first stage was "competing on parameters, competing on Benchmark." By the end of April 2026, this stage is basically over, as scores on leaderboards can no longer create substantial gaps.

The second stage is "competing on training efficiency, inference cost, and architectural innovation." This is the current stage, an inevitable result forced by computing-cost pressures.

The third stage will be "competing on Agent systems, competing on ecosystem, competing on developers." When Tokens change from free traffic to "fuel" for executing tasks, the prosperity of the ecosystem will determine survival.

What is the ecological niche of domestic open-source large models? Two sets of intuitive comparative data stand out.

One is training cost.

GPT-5, released in August 2025, cost over $500 million to train; Kimi K2 Thinking, released around the same time, cost about $4.6 million; DeepSeek has not disclosed the training cost of the V4 series, but V3 cost only $5.576 million... Domestic large-model vendors trained models of comparable caliber with what amounts to pocket change for OpenAI.

The other is call volume.

After entering 2026, data from the multi-model aggregation platform OpenRouter shows that, driven by Agent products represented by OpenClaw, global token consumption has grown exponentially. China's "open-source dream team," riding a reputation for being both capable and cheap, has surpassed the U.S. in call volume for multiple consecutive weeks.

The reason isn't hard to explain.

China's open-source camp has already successfully run a "positive feedback flywheel": Company A open-sources underlying technology, Company B adopts and performs engineering optimizations, then feeds the optimization results and experiences back to the entire ecosystem. If the evolution of closed-source models is linear growth built on massive computing power stacking, what awaits the open-source route is the exponential diffusion brought by the collision of technological innovations.

According to a J.P. Morgan research report, China's AI inference token consumption will grow at a compound annual rate (CAGR) of about 330% between 2025 and 2030, soaring from 10 trillion tokens in 2025 to 3,900 trillion tokens in 2030, a 390-fold increase.

This means that 2026 is still in the early stages of the AI explosion, with hundreds of times more growth opportunities in the next 5 years—far from the time for a final verdict.

Precisely because of confidence in long-term opportunities, while Silicon Valley giants are desperately building walls, Chinese large model vendors are choosing to use collaborative positioning to continuously solidify the road to AGI.

04 In Conclusion

In this sweeping AI wave, who will have the last laugh? The answer depends not only on the models but also on autonomous, controllable computing power. If models are the "atomic bomb," then domestic computing power, free from external technology blockades, is the "rocket" that carries it into the sky.

It is gratifying that domestic models and domestic computing power are growing ever closer: in the DeepSeek V4 technical documentation, the Ascend NPU is listed alongside NVIDIA GPUs in the hardware verification list; Moonshot AI's latest paper runs the prefill and decode phases of large-model inference on different chips, opening the door for domestic chips to participate in model inference at scale.
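The idea of splitting inference across chips rests on the two-phase structure of LLM serving: a compute-bound prefill pass that builds the KV cache for the whole prompt at once, and a memory-bound decode loop that reads that cache once per generated token. The toy sketch below (stubbed data structures and hypothetical names, not the system described in any paper) just makes that separation concrete:

```python
def prefill(prompt_tokens):
    # Compute-bound phase: one parallel pass over the entire prompt,
    # producing one KV-cache entry per token (stubbed here as strings).
    return [f"kv({tok})" for tok in prompt_tokens]

def decode(kv_cache, n_new_tokens):
    # Memory-bound phase: an autoregressive loop, one token at a time,
    # reading the whole cache each step and appending to it.
    generated = []
    for step in range(n_new_tokens):
        token = f"tok{step}"           # stand-in for the sampled token
        kv_cache.append(f"kv({token})")
        generated.append(token)
    return generated

# In a disaggregated deployment, prefill() runs on chip A (throughput-
# oriented) and the cache is transferred to chip B (e.g. a domestic NPU)
# before decode() runs there.
cache = prefill(["the", "quick", "fox"])
new_tokens = decode(cache, 2)
```

Because the two phases stress hardware differently, each can land on the chip that suits it best, which is what lets domestic accelerators take over one half of the pipeline.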

In early 2025, DeepSeek R1 won domestic large models a seat at the table; by 2026, China's open-source camp is, through collaboration, steadily accumulating the hard capital that defines the rules of that table.

Related Questions

Q: What are the two major innovations of DeepSeek V4 mentioned in the article?

A: The two major innovations are: 1) million-token-level ultra-long context with only 10% of the KV cache of V3.2, praised by Amazon engineers as a fix for HBM shortages; 2) adaptation to domestic chips, with close collaboration with Huawei and prompt support for Ascend and Cambricon chips.

Q: How does the AI development approach in Silicon Valley differ from that in China, according to the article?

A: Silicon Valley AI leaders such as OpenAI, Anthropic, and Google Gemini advocate closed-source models, leading to zero-sum games and internal conflict. In contrast, Chinese AI companies embrace open-source collaboration, promoting collaborative evolution and shared technological advances.

Q: What is the significance of the Muon optimizer in the development of DeepSeek V4 and Kimi K2.6?

A: The Muon optimizer, improved by Moonshot AI and adopted by DeepSeek, replaced AdamW in most modules, achieving faster convergence and better training stability. In Kimi K2.6 it delivered a 2x efficiency improvement at the same training volume.

Q: What are the three stages of AI model competition as described in the article?

A: 1) Competing on parameters and benchmark performance; 2) competing on training efficiency, inference cost, and architectural innovation; 3) competing on Agent systems, ecosystem, and developers.

Q: How does the article characterize the relationship between Chinese open-source AI models and domestic computing hardware?

A: The article highlights tight integration: Chinese models like DeepSeek V4 and Kimi K2.6 actively adapt to and validate domestic chips such as Huawei's Ascend and Cambricon, reducing reliance on external technology and strengthening autonomous control of computing power.


Trading

Spot
Futures

Hot Articles

What is SONIC

Sonic: Pioneering the Future of Gaming in Web3 Introduction to Sonic In the ever-evolving landscape of Web3, the gaming industry stands out as one of the most dynamic and promising sectors. At the forefront of this revolution is Sonic, a project designed to amplify the gaming ecosystem on the Solana blockchain. Leveraging cutting-edge technology, Sonic aims to deliver an unparalleled gaming experience by efficiently processing millions of requests per second, ensuring that players enjoy seamless gameplay while maintaining low transaction costs. This article delves into the intricate details of Sonic, exploring its creators, funding sources, operational mechanics, and the timeline of significant events that have shaped its journey. What is Sonic? Sonic is an innovative layer-2 network that operates atop the Solana blockchain, specifically tailored to enhance the existing Solana gaming ecosystem. It accomplishes this through a customised, VM-agnostic game engine paired with a HyperGrid interpreter, facilitating sovereign game economies that roll up back to the Solana platform. The primary goals of Sonic include: Enhanced Gaming Experiences: Sonic is committed to offering lightning-fast on-chain gameplay, allowing players and developers to engage with games at previously unattainable speeds. Atomic Interoperability: This feature enables transactions to be executed within Sonic without the need to redeploy Solana programmes and accounts. This makes the process more efficient and directly benefits from Solana Layer1 services and liquidity. Seamless Deployment: Sonic allows developers to write for Ethereum Virtual Machine (EVM) based systems and execute them on Solana’s SVM infrastructure. This interoperability is crucial for attracting a broader range of dApps and decentralised applications to the platform. 
Support for Developers: By offering native composable gaming primitives and extensible data types - dining within the Entity-Component-System (ECS) framework - game creators can craft intricate business logic with ease. Overall, Sonic's unique approach not only caters to players but also provides an accessible and low-cost environment for developers to innovate and thrive. Creator of Sonic The information regarding the creator of Sonic is somewhat ambiguous. However, it is known that Sonic's SVM is owned by the company Mirror World. The absence of detailed information about the individuals behind Sonic reflects a common trend in several Web3 projects, where collective efforts and partnerships often overshadow individual contributions. Investors of Sonic Sonic has garnered considerable attention and support from various investors within the crypto and gaming sectors. Notably, the project raised an impressive $12 million during its Series A funding round. The round was led by BITKRAFT Ventures, with other notable investors including Galaxy, Okx Ventures, Interactive, Big Brain Holdings, and Mirana. This financial backing signifies the confidence that investment foundations have in Sonic’s potential to revolutionise the Web3 gaming landscape, further validating its innovative approaches and technologies. How Does Sonic Work? Sonic utilises the HyperGrid framework, a sophisticated parallel processing mechanism that enhances its scalability and customisability. Here are the core features that set Sonic apart: Lightning Speed at Low Costs: Sonic offers one of the fastest on-chain gaming experiences compared to other Layer-1 solutions, powered by the scalability of Solana’s virtual machine (SVM). Atomic Interoperability: Sonic enables transaction execution without redeployment of Solana programmes and accounts, effectively streamlining the interaction between users and the blockchain. 
EVM Compatibility: Developers can effortlessly migrate decentralised applications from EVM chains to the Solana environment using Sonic’s HyperGrid interpreter, increasing the accessibility and integration of various dApps. Ecosystem Support for Developers: By exposing native composable gaming primitives, Sonic facilitates a sandbox-like environment where developers can experiment and implement business logic, greatly enhancing the overall development experience. Monetisation Infrastructure: Sonic natively supports growth and monetisation efforts, providing frameworks for traffic generation, payments, and settlements, thereby ensuring that gaming projects are not only viable but also sustainable financially. Timeline of Sonic The evolution of Sonic has been marked by several key milestones. Below is a brief timeline highlighting critical events in the project's history: 2022: The Sonic cryptocurrency was officially launched, marking the beginning of its journey in the Web3 gaming arena. 2024: June: Sonic SVM successfully raised $12 million in a Series A funding round. This investment allowed Sonic to further develop its platform and expand its offerings. August: The launch of the Sonic Odyssey testnet provided users with the first opportunity to engage with the platform, offering interactive activities such as collecting rings—a nod to gaming nostalgia. October: SonicX, an innovative crypto game integrated with Solana, made its debut on TikTok, capturing the attention of over 120,000 users within a short span. This integration illustrated Sonic’s commitment to reaching a broader, global audience and showcased the potential of blockchain gaming. Key Points Sonic SVM is a revolutionary layer-2 network on Solana explicitly designed to enhance the GameFi landscape, demonstrating great potential for future development. HyperGrid Framework empowers Sonic by introducing horizontal scaling capabilities, ensuring that the network can handle the demands of Web3 gaming. 
Integration with Social Platforms: The successful launch of SonicX on TikTok displays Sonic’s strategy to leverage social media platforms to engage users, exponentially increasing the exposure and reach of its projects. Investment Confidence: The substantial funding from BITKRAFT Ventures, among others, emphasizes the robust backing Sonic has, paving the way for its ambitious future. In conclusion, Sonic encapsulates the essence of Web3 gaming innovation, striking a balance between cutting-edge technology, developer-centric tools, and community engagement. As the project continues to evolve, it is poised to redefine the gaming landscape, making it a notable entity for gamers and developers alike. As Sonic moves forward, it will undoubtedly attract greater interest and participation, solidifying its place within the broader narrative of blockchain gaming.

1.1k Total ViewsPublished 2024.04.04Updated 2024.12.03

What is SONIC

What is $S$

Understanding SPERO: A Comprehensive Overview Introduction to SPERO As the landscape of innovation continues to evolve, the emergence of web3 technologies and cryptocurrency projects plays a pivotal role in shaping the digital future. One project that has garnered attention in this dynamic field is SPERO, denoted as SPERO,$$s$. This article aims to gather and present detailed information about SPERO, to help enthusiasts and investors understand its foundations, objectives, and innovations within the web3 and crypto domains. What is SPERO,$$s$? SPERO,$$s$ is a unique project within the crypto space that seeks to leverage the principles of decentralisation and blockchain technology to create an ecosystem that promotes engagement, utility, and financial inclusion. The project is tailored to facilitate peer-to-peer interactions in new ways, providing users with innovative financial solutions and services. At its core, SPERO,$$s$ aims to empower individuals by providing tools and platforms that enhance user experience in the cryptocurrency space. This includes enabling more flexible transaction methods, fostering community-driven initiatives, and creating pathways for financial opportunities through decentralised applications (dApps). The underlying vision of SPERO,$$s$ revolves around inclusiveness, aiming to bridge gaps within traditional finance while harnessing the benefits of blockchain technology. Who is the Creator of SPERO,$$s$? The identity of the creator of SPERO,$$s$ remains somewhat obscure, as there are limited publicly available resources providing detailed background information on its founder(s). This lack of transparency can stem from the project's commitment to decentralisation—an ethos that many web3 projects share, prioritising collective contributions over individual recognition. By centring discussions around the community and its collective goals, SPERO,$$s$ embodies the essence of empowerment without singling out specific individuals. 
As such, understanding the ethos and mission of SPERO remains more important than identifying a singular creator. Who are the Investors of SPERO,$$s$? SPERO,$$s$ is supported by a diverse array of investors ranging from venture capitalists to angel investors dedicated to fostering innovation in the crypto sector. The focus of these investors generally aligns with SPERO's mission—prioritising projects that promise societal technological advancement, financial inclusivity, and decentralised governance. These investor foundations are typically interested in projects that not only offer innovative products but also contribute positively to the blockchain community and its ecosystems. The backing from these investors reinforces SPERO,$$s$ as a noteworthy contender in the rapidly evolving domain of crypto projects. How Does SPERO,$$s$ Work? SPERO,$$s$ employs a multi-faceted framework that distinguishes it from conventional cryptocurrency projects. Here are some of the key features that underline its uniqueness and innovation: Decentralised Governance: SPERO,$$s$ integrates decentralised governance models, empowering users to participate actively in decision-making processes regarding the project’s future. This approach fosters a sense of ownership and accountability among community members. Token Utility: SPERO,$$s$ utilises its own cryptocurrency token, designed to serve various functions within the ecosystem. These tokens enable transactions, rewards, and the facilitation of services offered on the platform, enhancing overall engagement and utility. Layered Architecture: The technical architecture of SPERO,$$s$ supports modularity and scalability, allowing for seamless integration of additional features and applications as the project evolves. This adaptability is paramount for sustaining relevance in the ever-changing crypto landscape. 
Community Engagement: The project emphasises community-driven initiatives, employing mechanisms that incentivise collaboration and feedback. By nurturing a strong community, SPERO,$$s$ can better address user needs and adapt to market trends. Focus on Inclusion: By offering low transaction fees and user-friendly interfaces, SPERO,$$s$ aims to attract a diverse user base, including individuals who may not previously have engaged in the crypto space. This commitment to inclusion aligns with its overarching mission of empowerment through accessibility. Timeline of SPERO,$$s$ Understanding a project's history provides crucial insights into its development trajectory and milestones. Below is a suggested timeline mapping significant events in the evolution of SPERO,$$s$: Conceptualisation and Ideation Phase: The initial ideas forming the basis of SPERO,$$s$ were conceived, aligning closely with the principles of decentralisation and community focus within the blockchain industry. Launch of Project Whitepaper: Following the conceptual phase, a comprehensive whitepaper detailing the vision, goals, and technological infrastructure of SPERO,$$s$ was released to garner community interest and feedback. Community Building and Early Engagements: Active outreach efforts were made to build a community of early adopters and potential investors, facilitating discussions around the project’s goals and garnering support. Token Generation Event: SPERO,$$s$ conducted a token generation event (TGE) to distribute its native tokens to early supporters and establish initial liquidity within the ecosystem. Launch of Initial dApp: The first decentralised application (dApp) associated with SPERO,$$s$ went live, allowing users to engage with the platform's core functionalities. 
Ongoing Development and Partnerships: Continuous updates and enhancements to the project's offerings, including strategic partnerships with other players in the blockchain space, have shaped SPERO,$$s$ into a competitive and evolving player in the crypto market. Conclusion SPERO,$$s$ stands as a testament to the potential of web3 and cryptocurrency to revolutionise financial systems and empower individuals. With a commitment to decentralised governance, community engagement, and innovatively designed functionalities, it paves the way toward a more inclusive financial landscape. As with any investment in the rapidly evolving crypto space, potential investors and users are encouraged to research thoroughly and engage thoughtfully with the ongoing developments within SPERO,$$s$. The project showcases the innovative spirit of the crypto industry, inviting further exploration into its myriad possibilities. While the journey of SPERO,$$s$ is still unfolding, its foundational principles may indeed influence the future of how we interact with technology, finance, and each other in interconnected digital ecosystems.

54 Total ViewsPublished 2024.12.17Updated 2024.12.17

What is $S$

What is AGENT S

Agent S: The Future of Autonomous Interaction in Web3 Introduction In the ever-evolving landscape of Web3 and cryptocurrency, innovations are constantly redefining how individuals interact with digital platforms. One such pioneering project, Agent S, promises to revolutionise human-computer interaction through its open agentic framework. By paving the way for autonomous interactions, Agent S aims to simplify complex tasks, offering transformative applications in artificial intelligence (AI). This detailed exploration will delve into the project's intricacies, its unique features, and the implications for the cryptocurrency domain. What is Agent S? Agent S stands as a groundbreaking open agentic framework, specifically designed to tackle three fundamental challenges in the automation of computer tasks: Acquiring Domain-Specific Knowledge: The framework intelligently learns from various external knowledge sources and internal experiences. This dual approach empowers it to build a rich repository of domain-specific knowledge, enhancing its performance in task execution. Planning Over Long Task Horizons: Agent S employs experience-augmented hierarchical planning, a strategic approach that facilitates efficient breakdown and execution of intricate tasks. This feature significantly enhances its ability to manage multiple subtasks efficiently and effectively. Handling Dynamic, Non-Uniform Interfaces: The project introduces the Agent-Computer Interface (ACI), an innovative solution that enhances the interaction between agents and users. Utilizing Multimodal Large Language Models (MLLMs), Agent S can navigate and manipulate diverse graphical user interfaces seamlessly. Through these pioneering features, Agent S provides a robust framework that addresses the complexities involved in automating human interaction with machines, setting the stage for myriad applications in AI and beyond. Who is the Creator of Agent S? 
While the concept behind Agent S is innovative, specific information about its creator remains elusive. The team is currently unknown, which reflects either the project's nascent stage or a deliberate choice to keep the founders anonymous. Regardless, the focus remains on the framework's capabilities and potential.

Who are the Investors of Agent S?

As Agent S is relatively new to the crypto ecosystem, detailed information about its investors and financial backers is not publicly documented. This lack of insight into its funding structure leaves open questions about the project's sustainability, development roadmap, and potential market impact.

How Does Agent S Work?

Agent S's operational model is built around several key features:

Human-like computer interaction: advanced AI planning aims to make interaction with computers more intuitive by mimicking how a human carries out tasks.

Narrative memory: high-level records of past tasks inform the framework's decision-making.

Episodic memory: step-by-step records of the current task allow the framework to offer contextual guidance as the task unfolds.

Support for OpenACI: the ability to run locally lets users keep control over their interactions and workflows, in line with the decentralised ethos of Web3.
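The narrative/episodic memory split and the experience-augmented planning described above can be sketched in a few lines of Python. Everything here is an assumption made for illustration (the class and method names, the naive word-overlap retrieval), not Agent S's actual API:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Experience:
    """A narrative-memory record of a previously completed task."""
    task: str
    subtasks: list


class AgentSketch:
    """Illustrative only: experience-augmented planning on top of a
    narrative/episodic memory split."""

    def __init__(self, max_steps: int = 100):
        self.narrative = []                        # whole-task experiences
        self.episodic = deque(maxlen=max_steps)    # steps of the current task

    def plan(self, task: str) -> list:
        # Retrieve the past task with the most word overlap and reuse its
        # decomposition; with no match, treat the task as a single step.
        best, overlap = None, 0
        words = set(task.lower().split())
        for exp in self.narrative:
            shared = len(words & set(exp.task.lower().split()))
            if shared > overlap:
                best, overlap = exp, shared
        return list(best.subtasks) if best else [task]

    def record_step(self, step: str) -> None:
        self.episodic.append(step)                 # episodic: step-by-step trace

    def finish_task(self, task: str) -> None:
        # Promote the finished trace into narrative memory, then clear it.
        self.narrative.append(Experience(task, list(self.episodic)))
        self.episodic.clear()


agent = AgentSketch()
for step in ["open browser", "search flights", "select flight", "pay"]:
    agent.record_step(step)
agent.finish_task("book a flight online")
print(agent.plan("book a cheap flight"))
# → ['open browser', 'search flights', 'select flight', 'pay']
```

The design idea the sketch captures: finished episodic traces are promoted into narrative memory, so future plans are seeded from the most similar past task rather than built from scratch.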
Easy integration with external APIs: compatibility with various AI platforms means Agent S can slot into existing technology stacks, making it an appealing choice for developers and organisations.

Together, these capabilities let Agent S automate complex, multi-step tasks with minimal human intervention. As the project evolves, its applications in Web3 could redefine how digital interactions unfold.

Timeline of Agent S

September 27, 2024: the Agent S concept was introduced in a research paper titled "An Open Agentic Framework that Uses Computers Like a Human."

October 10, 2024: the paper was made publicly available on arXiv, including a performance evaluation on the OSWorld benchmark.

October 12, 2024: a video presentation was released, giving a visual overview of the framework's capabilities and features.

These milestones chart the project's progress and its commitment to transparency and community engagement.

Key Points About Agent S

Several attributes stand out as the framework evolves:

Innovative framework: Agent S is designed to use computers in a way that resembles human interaction, bringing a novel approach to task automation.

Autonomous interaction: operating computers autonomously through the GUI marks a step towards more intelligent, efficient computing.

Complex task automation: its methodology automates multi-step tasks, making processes faster and less error-prone.
Continuous improvement: learning mechanisms let Agent S improve from past experience, steadily enhancing its performance and efficacy.

Versatility: adaptability across environments such as OSWorld and WindowsAgentArena means it can serve a broad range of applications.

As Agent S positions itself in the Web3 and crypto landscape, its potential to enhance interaction and automate processes represents a meaningful advance in AI technology.

Conclusion

Agent S represents a bold step in the marriage of AI and Web3, with the capacity to redefine how we interact with technology. The project is still early, but the possibilities for its application are broad. Through a framework that addresses critical challenges head-on, Agent S aims to bring autonomous interactions to the forefront of the digital experience. As cryptocurrency and decentralisation mature, projects like Agent S will play a part in shaping the future of human-computer collaboration.
