Not Just DeepSeek, Big Tech Companies Want to 'Abandon' NVIDIA

marsbit · Published on 2026-04-24 · Last updated on 2026-04-24

Abstract

The article discusses how major tech companies are attempting to reduce their reliance on NVIDIA, despite its dominant position in the AI chip market, where it enjoys a 75.2% GAAP gross margin. Companies like DeepSeek are adapting their models to run on domestic alternatives such as Huawei's Ascend chips, while in the U.S., Google and Meta are developing their own AI chips (TPU and MTIA series) to complement external partnerships. NVIDIA's CEO, Jensen Huang, acknowledges that Moore's Law is fading and that export restrictions may slow China's AI development but could ultimately spur a self-sufficient ecosystem. Notably, Chinese firms are leading in open-source models, which could eventually challenge NVIDIA's monopoly. OpenAI is actively diversifying away from NVIDIA, signing a $20 billion deal with Cerebras, a startup using wafer-scale chips to reduce latency and cost. Cerebras, founded by Andrew Feldman, aims to challenge NVIDIA with its unique architecture but faces financial and competitive risks, including heavy dependence on OpenAI and geopolitical tensions. While competition is intensifying with players like AMD and Groq (which partnered with NVIDIA), the overall demand for compute continues to grow. The market is shifting toward a diversified supplier model, though NVIDIA remains a formidable force.

The whole world covets NVIDIA's business.

According to NVIDIA's Q4 FY2026 (ending January 2026) earnings report, its GAAP gross margin was as high as 75.2%, making it practically a money-printing machine. This immense profitability stems primarily from its dominant position in the AI chip market, which grants it powerful pricing power.

Almost all large language models run on NVIDIA's computing chips, supporting its nearly $5 trillion market capitalization.

But precisely because of this, almost all major AI companies are openly or covertly trying to break free from NVIDIA's cage, unwilling to hand their fate over to it. Judging from its technical report, the recently released DeepSeek V4 was most likely trained on NVIDIA chips, but it is being adapted for inference on Huawei's Ascend chips. DeepSeek also stated that the token cost of the Pro version will drop significantly after Huawei's Ascend 950 launches in the second half of the year. Beyond Huawei Ascend, domestic chip makers such as Tianshu Zhixin and Cambricon have also announced support for the new DeepSeek V4 model.

On NVIDIA's home turf, the US, Google developed its own TPU (Tensor Processing Unit) chips; as of April 2026, the TPU had reached its eighth generation, forming a complete product line of training and inference chips. In March, Meta also disclosed its roadmap for self-developed AI chips, planning to deploy four new products in the MTIA series by the end of 2027 to meet its internal AI computing needs, while maintaining large-scale procurement partnerships with NVIDIA and AMD, building a dual-track "self-developed plus externally procured" computing system.

Yes, for the time being, no AI company can bypass NVIDIA, but Jensen Huang still senses the crisis. In a recent podcast interview, Huang stated that Moore's Law is coming to an end, meaning the era of chip performance doubling every year is over. The performance advantage of today's most advanced chips is not a permanent moat, but a relative advantage with a time window. Once the manufacturing process approaches physical limits, the difficulty for latecomers to catch up will actually decrease.

Huang said that restricting the export of computing chips to China would indeed slow down the development speed of Chinese AI in the short term, but in the long run, it will only force China to form its own ecosystem. What he didn't delve into further is that currently, only Chinese AI companies are committed to open source, and are being adopted by numerous companies and startups. If more and more open-source models run on Chinese-made computing chips, then even if NVIDIA still holds the number one market position, it will no longer be the only one.

In fact, even without the threat of Chinese open-source large models and computing chips, market competition is likely to push the computing chip industry towards a duopoly structure, rather than letting NVIDIA dominate alone.

Interestingly, among them, OpenAI, which is extremely dependent on NVIDIA, is ironically the most active in "backstabbing" it.

01

On April 17 local time, US AI chip manufacturer Cerebras officially submitted an IPO application to the US SEC, aiming to raise $3 billion with a valuation of $35 billion.

After withdrawing its previous IPO application in October 2025, this challenger to NVIDIA, whose core selling point is "wafer-scale chips," launched another IPO sprint within six months, successfully pushing its valuation from $8.1 billion to $35 billion.

The core pillar of this valuation surge is a cooperation agreement with OpenAI worth over $20 billion.

According to the agreement, OpenAI commits to using server clusters powered by Cerebras chips over the next three years. Cerebras will deploy 750 megawatts of computing power for OpenAI, expected to be fully online by 2028. Additionally, OpenAI will provide Cerebras with approximately $1 billion in funding to help build out its data centers, and will receive warrants for about 10% of the company.

Clearly, OpenAI is no longer just a client; it is a creditor and potentially a major future stakeholder. The decision to re-initiate the IPO sprint at this time was likely made jointly by the two companies.

On the same day Cerebras submitted its IPO documents, three core OpenAI executives, including Sora lead Bill Peebles, announced their departure. Meanwhile, the $500 billion "Stargate" plan, once seen as a milestone in US AI infrastructure, is also in disarray, with internal coordination and financing issues progressing slowly.

According to media disclosures, OpenAI's revenue in 2025 was $13.1 billion, with losses as high as $8 billion. Losses are projected to soar to $25 billion this year. Under this pressure, OpenAI even had to make painful cuts, shutting down the popular video generation product Sora.

Some analysis suggests that Sora's daily computing power cost was approximately $15 million, with the cost of a 10-second high-precision video around $33. During Sora's operation, total user payment revenue was only $2.1 million.
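Taking the figures above at face value, a quick back-of-the-envelope check (a sketch using only the numbers cited in this article, which are themselves third-party estimates) shows how lopsided Sora's unit economics were:

```python
# Back-of-the-envelope check of Sora's economics, using the figures cited above.
daily_compute_cost = 15_000_000   # ~$15M per day in compute
cost_per_video = 33               # ~$33 per 10-second high-precision video
total_user_revenue = 2_100_000    # ~$2.1M in total user payments over Sora's run

# Implied daily video volume if the whole daily budget went to generation
videos_per_day = daily_compute_cost / cost_per_video
print(f"Implied capacity: ~{videos_per_day:,.0f} videos/day")

# Lifetime user revenue covered only a fraction of a single day's compute
days_covered = total_user_revenue / daily_compute_cost
print(f"All user revenue covered ~{days_covered:.2f} days of compute")
```

By these estimates, everything users ever paid would have funded roughly three hours of Sora's daily compute bill, which makes the shutdown decision easy to understand.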

In such turbulent times, Altman naturally understands that over-reliance on NVIDIA would become OpenAI's biggest weakness.

Previously, OpenAI announced collaborations with Broadcom to develop custom chips and adopted AMD's new MI450 chips, frequently sending clear signals to the outside world—it no longer wants to work for NVIDIA. It is against this backdrop that Cerebras became a key bet in OpenAI's "de-NVIDIAization" strategy.

Although Cerebras is not widely known, it stands out among chip makers.

Almost all chip design giants follow the "cut the wafer into small chips" route. Cerebras, however, focused on the "memory wall" encountered when data moves between chips, and adopted a far more aggressive single-chip route.

Cerebras's core product is the Wafer-Scale Engine WSE-3, a single chip made from an entire 300mm wafer. Because computation, storage, and interconnection are all within a single chip, data transmission latency is reduced by 90% compared to GPU clusters, making it particularly suitable for low-latency inference of large models.

In inference scenarios, the wafer-scale architecture is expected to reduce the cost per token by 80%.

OpenAI's head of computing infrastructure stated that Cerebras has added a dedicated low-latency inference solution to the platform, which will not only allow users to get faster response times but also lay the foundation for expanding real-time AI technology to a broader user base.

More importantly, Cerebras's HBM-free route might break NVIDIA's near-monopoly in the chip industry, making the computing power supply more diverse.

All of this hits OpenAI's pain points squarely, making the collaboration between the two a natural fit.

Besides OpenAI, Cerebras also reached a cooperation agreement with AWS in March. The CS-3 will be deployed in Amazon's data centers, entering the infrastructure system of mainstream hyperscale cloud platforms.

02

"The most exciting thing about this rapidly iterating industry is that algorithms will continue to become faster, more accurate, and more efficient—precisely why I am unwilling to投身 those traditional industries that remain unchanged for nine years."

Cerebras's ability to reach its current position is closely tied to its founder, Andrew Feldman.

Unlike typical chip company founders who are engineers, Feldman graduated from Stanford University with bachelor's degrees in Economics and Political Science and an MBA. From the beginning of his career, he consistently accumulated experience in product and marketing fields. This career path gave him a natural instinct for what kind of business model could succeed.

As his experience grew, Feldman gradually transitioned from an employee to a serial entrepreneur.

And all serial entrepreneurs share one glaringly obvious trait: they want to win, desperately. These people aren't just ordinarily "competitive"; they treat winning as indispensable as breathing. They typically choose to bet in the "no man's land" outside industry consensus, going all-in on directions most people consider unnecessary or impossible. In other words, they have a pronounced gambler's streak.

In 2007, Feldman founded the server company SeaMicro.

"Today's large processors are like us driving a space shuttle to the grocery store. Actually, I just need to drive a Prius."

SeaMicro abandoned the traditional server approach of "piling on components." It removed all components except the CPU, memory, and a self-developed ASIC, providing "more cores" for specialized internet companies needing "scale-out" workloads. The company was acquired by AMD for $355 million in 2012.

Although the microserver business gradually faded into obscurity after being folded into AMD, the experience allowed Feldman to accumulate wealth and further solidify his entrepreneurial methodology: at moments of generational change, use counter-mainstream hardware design to carve into niche markets not yet covered by the giants.

According to industry conventions, chip yield decreases as area increases. While chip companies were all following NVIDIA's path forward, Feldman decided, in a very "layman" way of thinking, to directly make a single chip the size of a plate.

In 2015, Feldman co-founded Cerebras with his technical partner Gary Lauterbach, bringing along several former colleagues from SeaMicro. Cerebras stayed silent for a full four years, until it released the first-generation WSE-1 in August 2019.

During this obscure R&D period, Feldman was betting on two things: one was that TSMC's wafer-level packaging technology would gradually mature, and the other was that AI models would become so large that the memory wall of GPUs would become a fatal bottleneck.

Judging from current developments, he bet correctly.

From 2019 to 2024, Cerebras launched a new generation every two years, with the process jumping from 16nm to 7nm to 5nm and the transistor count climbing from 1.2 trillion to 4 trillion. Meanwhile, Feldman began actively courting major clients. In 2023, he flew to Abu Dhabi and secured G42.

Cerebras and G42 collaborated to train the leading language model in the Arabic language domain and jointly created Condor Galaxy, a network of nine interconnected supercomputers. The close cooperation with this Middle Eastern enterprise also triggered a national security review of Cerebras by the Committee on Foreign Investment in the United States, but Feldman was unfazed; to him, the scrutiny was itself evidence of the company's significance.

"If you only work 38 hours a week and还想挑战 an 800-pound gorilla like NVIDIA? No way. You need every waking minute."

Feldman was once asked in an interview about his views on "work-life balance," and he gave a rather radical negative answer. He makes no secret of his ambition to challenge NVIDIA.

Referencing NVIDIA's hundred-fold growth over ten years, Feldman holds an optimistic outlook for Cerebras's prospects: to help develop treatment plans for millions of patients in the next three to five years; to provide inference computing power for applications yet to be born; and to let the public use the company's technology without even noticing it.

03

Cerebras's IPO sprint faces constant controversy. Optimists look forward to witnessing the birth of a second NVIDIA, while skeptics question the stability of its results.

According to officially disclosed financial information, Cerebras's revenue grew from $24.6 million in 2022 to $510 million in 2025, a three-year compound annual growth rate of roughly 175%. Most notably, GAAP net profit in 2025 was $238 million, reversing the net loss of $482 million in 2024.
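As a sanity check, the growth rate follows directly from the two disclosed revenue endpoints, treating 2022 to 2025 as three compounding periods (a sketch using only the figures above):

```python
# Verify Cerebras's compound annual growth rate from disclosed revenue figures.
revenue_2022 = 24.6    # $ millions
revenue_2025 = 510.0   # $ millions
periods = 3            # 2022 -> 2025 spans three compounding periods

cagr = (revenue_2025 / revenue_2022) ** (1 / periods) - 1
print(f"CAGR: {cagr:.1%}")   # roughly 175%
```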

However, a closer analysis reveals that the GAAP profit benefited from a non-cash book gain of $363 million. This gain was actually an accounting operation resulting from the removal of G42-related liabilities from the balance sheet due to the US security review. Excluding this non-recurring item, the company's non-GAAP net loss was actually $75.7 million.

In other words, Cerebras's "return to profitability" is an accounting game.

In 2023 and 2024, G42 contributed 83% and 87% of Cerebras's total revenue, respectively. With geopolitical conflict intensifying, the risk of relying on a single Middle Eastern customer is self-evident. After all, Cerebras's first IPO withdrawal was partly due to national security reviews.

According to the prospectus, the company's remaining performance obligations, totaling $24.6 billion, rely heavily on the $20 billion agreement signed with OpenAI. In other words, Cerebras's expected revenue rests almost entirely on OpenAI's forward commitments rather than on a diversified customer base at scale.

Whether this shot-in-the-arm order can be fulfilled depends on the fate of OpenAI itself. When the stability of the largest customer is itself being repeatedly scrutinized by the market, how much of this "blank check" can be realized is something even Feldman himself likely cannot guarantee.

A comparison with NVIDIA makes Cerebras's disadvantages even clearer.

Even before the AI industry's big explosion, NVIDIA had already established a diversified customer base across multiple fields such as gaming, professional visualization, and data centers. No single customer accounted for more than 10% of its revenue. Over more than twenty years of evolution, NVIDIA has deeply bound itself with millions of developers. Every product iteration stems from the needs of internal ecosystem expansion, and its product planning path is very clear. Cerebras's ecosystem is at a very early stage, still achieving only a single-point breakthrough in inference scenarios, and has a long way to go before becoming a true platform company.

Even without the sudden emergence of ChatGPT, NVIDIA was a high-quality company with stable revenue and considerable profits. But if the $20 billion order from OpenAI were to disappear, Cerebras likely wouldn't even be in a position to attempt an IPO.

In December 2025, NVIDIA reached a special cooperation agreement worth approximately $20 billion in cash with Cerebras's competitor Groq. NVIDIA obtained a permanent non-exclusive license for Groq's LPU inference architecture and full-stack chip design technology.

Jensen Huang's entry signals that the value of Cerebras's low-latency dedicated inference architecture has been recognized by industry giants, but it also sharply increases the competitive pressure Cerebras faces.

From a practical standpoint, OpenAI brought in Cerebras not as a replacement but as a "catfish", a stimulus to strengthen its bargaining position and diversify supply chain risk.

There are reports that NVIDIA's system based on Groq chips will be launched in the second half of 2026. If Altman turns around and reaches an agreement with Huang again, Cerebras could easily become the sacrifice.

In the trillion-dollar AI chip race, diversified competition is undoubtedly good for the long-term health of the industry ecosystem. But the capital market is never short of wealth-creation myths and media hype. Whether Cerebras can truly deliver on its technological and commercial promise still requires passing multiple tests.

The appealing title of "NVIDIA challenger" might also turn out to be a short-lived bubble.

But as the "Jevons Paradox" reveals, technological progress improves resource utilization efficiency and reduces the cost per unit of output, yet because people can then afford to use more of the resource, and use it more widely, total consumption actually rises. As AI permeates more aspects of people's lives, computing demand will continue to grow rapidly for the foreseeable future.
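The Jevons effect can be illustrated with a toy calculation (hypothetical numbers, not from the article): if efficiency gains cut per-token cost fivefold but cheaper tokens unlock eightfold more usage, total spend on compute still rises.

```python
# Toy illustration of the Jevons Paradox with hypothetical numbers:
# efficiency gains cut unit cost, but induced demand grows total consumption.
cost_per_token_before = 1.0   # arbitrary units
tokens_used_before = 100

cost_per_token_after = 0.2    # 5x efficiency improvement
tokens_used_after = 800       # demand grows 8x as cheaper tokens enable new uses

spend_before = cost_per_token_before * tokens_used_before   # 100
spend_after = cost_per_token_after * tokens_used_after      # 160

assert spend_after > spend_before   # total consumption rises despite efficiency
```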

This enormous market, worth hundreds of billions or even trillions of dollars, is about more than economics; it also involves geopolitical security. No one wants NVIDIA alone to hold the keys to their fate.

But clearly, even as a matter of pride, Jensen Huang will not hand over those keys easily.

This article is from WeChat public account "最话FunTalk" (ID: iFuntalker), author: He Yiran, editor: Liu Yuxiang

Related Questions

Q: Why are major AI companies trying to reduce their reliance on NVIDIA?

A: Due to NVIDIA's dominant market position and pricing power, which gives it significant control over the AI chip supply chain. Companies like DeepSeek, Google, Meta, and OpenAI are seeking alternatives to avoid dependency, reduce costs, and diversify their supply chains for greater strategic flexibility.

Q: What is Cerebras' unique approach to AI chip design?

A: Cerebras uses a wafer-scale engine (WSE) design, which involves creating a single, large chip from an entire silicon wafer. This approach reduces data transfer delays by 90% compared to GPU clusters and is particularly efficient for low-latency inference in large models, potentially lowering token costs by 80%.

Q: How does OpenAI's partnership with Cerebras reflect its strategy?

A: OpenAI's $20 billion agreement with Cerebras is part of its 'de-NVIDIAization' strategy to diversify its AI chip supply, reduce costs, and mitigate risks associated with over-reliance on a single vendor. This move also serves as leverage in negotiations with NVIDIA and other chip suppliers.

Q: What are the financial challenges facing Cerebras despite its IPO ambitions?

A: Cerebras' financials show reliance on non-recurring accounting gains and a heavy dependence on a few key clients like G42 and OpenAI. Its GAAP profitability in 2025 was largely due to a one-time $363 million non-cash gain, and without the OpenAI deal, its IPO prospects would be uncertain.

Q: What broader industry trend does the competition against NVIDIA represent?

A: The competition reflects a push for a diversified, multi-vendor AI chip ecosystem rather than NVIDIA's monopoly. This is driven by economic factors (cost reduction), geopolitical concerns (e.g., U.S.-China tensions), and the desire for technological innovation beyond traditional GPU architectures.

Related Reads

a16z: AI's 'Amnesia', Can Continuous Learning Cure It?

The article "a16z: AI's 'Amnesia' – Can Continual Learning Cure It?" explores the limitations of current large language models (LLMs), which, like the protagonist in the film *Memento*, are trapped in a perpetual present—unable to form new memories after training. While methods like in-context learning (ICL), retrieval-augmented generation (RAG), and external scaffolding (e.g., chat history, prompts) provide temporary solutions, they fail to enable true internalization of new knowledge. The authors argue that compression—the core of learning during training—is halted at deployment, preventing models from generalizing, discovering novel solutions (e.g., mathematical proofs), or handling adversarial scenarios. The piece introduces *continual learning* as a critical research direction to address this, categorizing approaches into three paths: 1. **Context**: Scaling external memory via longer context windows, multi-agent systems, and smarter retrieval. 2. **Modules**: Using pluggable adapters or external memory layers for specialization without full retraining. 3. **Weights**: Enabling parameter updates through sparse training, test-time training, meta-learning, distillation, and reinforcement learning from feedback. Challenges include catastrophic forgetting, safety risks, and auditability, but overcoming these could unlock models that learn iteratively from experience. The conclusion emphasizes that while context-based methods are effective, true breakthroughs require models to compress new information into weights post-deployment, moving from mere retrieval to genuine learning.

marsbit16m ago

a16z: AI's 'Amnesia', Can Continuous Learning Cure It?

marsbit16m ago

Can a Hair Dryer Earn $34,000? Deciphering the Reflexivity Paradox in Prediction Markets

An individual manipulated a weather sensor at Paris Charles de Gaulle Airport with a portable heat source, causing a Polymarket weather market to settle at 22°C and earning $34,000. This incident highlights a fundamental issue in prediction markets: when a market aims to reflect reality, it also incentivizes participants to influence that reality. Prediction markets operate on two layers: platform rules (what outcome counts as a win) and data sources (what actually happened). While most focus on rules, the real vulnerability lies in the data source. If reality is recorded through a specific source, influencing that source directly affects market settlement. The article categorizes markets by their vulnerability: 1. **Single-point physical data sources** (e.g., weather stations): Easily manipulated through physical interference. 2. **Insider information markets** (e.g., MrBeast video details): Insiders like team members use non-public information to trade. Kalshi fined a剪辑师 $20,000 for insider trading. 3. **Actor-manipulated markets** (e.g., Andrew Tate’s tweet counts): The subject of the market can control the outcome. Evidence suggests Tate’sociated accounts coordinated to profit. 4. **Individual-action markets** (e.g., WNBA disruptions): A single person can execute an event to profit from their pre-placed bets. Kalshi and Polymarket handle these issues differently. Kalshi enforces strict KYC, publicly penalizes insider trading, and reports to regulators. Polymarket, with its anonymous wallet-based system, has historically been more permissive, arguing that insider information improves market accuracy. However, it cooperated with authorities in the "Van Dyke case," where a user traded on classified government information. The core paradox is reflexivity: prediction markets are designed to discover truth, but their financial incentives can distort reality. The more valuable a prediction becomes, the more likely participants are to influence the event itself. 
The market ceases to be a mirror of reality and instead shapes it.

marsbit1h ago

Can a Hair Dryer Earn $34,000? Deciphering the Reflexivity Paradox in Prediction Markets

marsbit1h ago

Trading

Spot
Futures

Hot Articles

What is SONIC

Sonic: Pioneering the Future of Gaming in Web3 Introduction to Sonic In the ever-evolving landscape of Web3, the gaming industry stands out as one of the most dynamic and promising sectors. At the forefront of this revolution is Sonic, a project designed to amplify the gaming ecosystem on the Solana blockchain. Leveraging cutting-edge technology, Sonic aims to deliver an unparalleled gaming experience by efficiently processing millions of requests per second, ensuring that players enjoy seamless gameplay while maintaining low transaction costs. This article delves into the intricate details of Sonic, exploring its creators, funding sources, operational mechanics, and the timeline of significant events that have shaped its journey. What is Sonic? Sonic is an innovative layer-2 network that operates atop the Solana blockchain, specifically tailored to enhance the existing Solana gaming ecosystem. It accomplishes this through a customised, VM-agnostic game engine paired with a HyperGrid interpreter, facilitating sovereign game economies that roll up back to the Solana platform. The primary goals of Sonic include: Enhanced Gaming Experiences: Sonic is committed to offering lightning-fast on-chain gameplay, allowing players and developers to engage with games at previously unattainable speeds. Atomic Interoperability: This feature enables transactions to be executed within Sonic without the need to redeploy Solana programmes and accounts. This makes the process more efficient and directly benefits from Solana Layer1 services and liquidity. Seamless Deployment: Sonic allows developers to write for Ethereum Virtual Machine (EVM) based systems and execute them on Solana’s SVM infrastructure. This interoperability is crucial for attracting a broader range of dApps and decentralised applications to the platform. 
Support for Developers: By offering native composable gaming primitives and extensible data types - dining within the Entity-Component-System (ECS) framework - game creators can craft intricate business logic with ease. Overall, Sonic's unique approach not only caters to players but also provides an accessible and low-cost environment for developers to innovate and thrive. Creator of Sonic The information regarding the creator of Sonic is somewhat ambiguous. However, it is known that Sonic's SVM is owned by the company Mirror World. The absence of detailed information about the individuals behind Sonic reflects a common trend in several Web3 projects, where collective efforts and partnerships often overshadow individual contributions. Investors of Sonic Sonic has garnered considerable attention and support from various investors within the crypto and gaming sectors. Notably, the project raised an impressive $12 million during its Series A funding round. The round was led by BITKRAFT Ventures, with other notable investors including Galaxy, Okx Ventures, Interactive, Big Brain Holdings, and Mirana. This financial backing signifies the confidence that investment foundations have in Sonic’s potential to revolutionise the Web3 gaming landscape, further validating its innovative approaches and technologies. How Does Sonic Work? Sonic utilises the HyperGrid framework, a sophisticated parallel processing mechanism that enhances its scalability and customisability. Here are the core features that set Sonic apart: Lightning Speed at Low Costs: Sonic offers one of the fastest on-chain gaming experiences compared to other Layer-1 solutions, powered by the scalability of Solana’s virtual machine (SVM). Atomic Interoperability: Sonic enables transaction execution without redeployment of Solana programmes and accounts, effectively streamlining the interaction between users and the blockchain. 
EVM Compatibility: Developers can effortlessly migrate decentralised applications from EVM chains to the Solana environment using Sonic’s HyperGrid interpreter, increasing the accessibility and integration of various dApps. Ecosystem Support for Developers: By exposing native composable gaming primitives, Sonic facilitates a sandbox-like environment where developers can experiment and implement business logic, greatly enhancing the overall development experience. Monetisation Infrastructure: Sonic natively supports growth and monetisation efforts, providing frameworks for traffic generation, payments, and settlements, thereby ensuring that gaming projects are not only viable but also sustainable financially. Timeline of Sonic The evolution of Sonic has been marked by several key milestones. Below is a brief timeline highlighting critical events in the project's history: 2022: The Sonic cryptocurrency was officially launched, marking the beginning of its journey in the Web3 gaming arena. 2024: June: Sonic SVM successfully raised $12 million in a Series A funding round. This investment allowed Sonic to further develop its platform and expand its offerings. August: The launch of the Sonic Odyssey testnet provided users with the first opportunity to engage with the platform, offering interactive activities such as collecting rings—a nod to gaming nostalgia. October: SonicX, an innovative crypto game integrated with Solana, made its debut on TikTok, capturing the attention of over 120,000 users within a short span. This integration illustrated Sonic’s commitment to reaching a broader, global audience and showcased the potential of blockchain gaming. Key Points Sonic SVM is a revolutionary layer-2 network on Solana explicitly designed to enhance the GameFi landscape, demonstrating great potential for future development. HyperGrid Framework empowers Sonic by introducing horizontal scaling capabilities, ensuring that the network can handle the demands of Web3 gaming. 
Integration with Social Platforms: The successful launch of SonicX on TikTok displays Sonic’s strategy to leverage social media platforms to engage users, exponentially increasing the exposure and reach of its projects. Investment Confidence: The substantial funding from BITKRAFT Ventures, among others, emphasizes the robust backing Sonic has, paving the way for its ambitious future. In conclusion, Sonic encapsulates the essence of Web3 gaming innovation, striking a balance between cutting-edge technology, developer-centric tools, and community engagement. As the project continues to evolve, it is poised to redefine the gaming landscape, making it a notable entity for gamers and developers alike. As Sonic moves forward, it will undoubtedly attract greater interest and participation, solidifying its place within the broader narrative of blockchain gaming.

1.1k Total ViewsPublished 2024.04.04Updated 2024.12.03

What is SONIC

What is $S$

Understanding SPERO: A Comprehensive Overview

Introduction to SPERO

As web3 technologies and cryptocurrency projects continue to shape the digital future, one project that has drawn attention is SPERO, which trades under the ticker S. This article gathers the available information about SPERO to help enthusiasts and investors understand its foundations, objectives, and innovations within the web3 and crypto domains.

What is SPERO?

SPERO is a project that seeks to apply the principles of decentralisation and blockchain technology to build an ecosystem promoting engagement, utility, and financial inclusion. It is designed to facilitate peer-to-peer interactions in new ways, giving users access to decentralised financial tools and services.

At its core, SPERO aims to empower individuals through platforms that improve the user experience in the cryptocurrency space: more flexible transaction methods, community-driven initiatives, and financial opportunities delivered through decentralised applications (dApps). Its underlying vision centres on inclusiveness, bridging gaps in traditional finance while harnessing the benefits of blockchain technology.

Who is the Creator of SPERO?

The identity of SPERO's creator remains obscure; little public information is available about its founder(s). This opacity may reflect the project's commitment to decentralisation, an ethos many web3 projects share that prioritises collective contribution over individual recognition. Accordingly, understanding SPERO's ethos and mission matters more than identifying a single creator.

Who are the Investors of SPERO?

SPERO is reportedly backed by a range of investors, from venture capital firms to angel investors focused on crypto innovation. These backers generally align with SPERO's mission, prioritising projects that promise technological advancement, financial inclusivity, and decentralised governance, and that contribute positively to the broader blockchain ecosystem.

How Does SPERO Work?

SPERO employs a multi-faceted framework built around several key features:

Decentralised Governance: users participate directly in decisions about the project's future, fostering a sense of ownership and accountability among community members.

Token Utility: the native token enables transactions, rewards, and access to services offered on the platform.

Layered Architecture: a modular, scalable technical architecture allows additional features and applications to be integrated as the project evolves.

Community Engagement: mechanisms that incentivise collaboration and feedback help the project address user needs and adapt to market trends.

Focus on Inclusion: low transaction fees and user-friendly interfaces aim to attract users who may not previously have engaged with the crypto space.

Timeline of SPERO

Key stages in the project's development to date:

Conceptualisation and Ideation: the initial ideas behind SPERO were formed around decentralisation and community focus within the blockchain industry.

Whitepaper Launch: a whitepaper detailing the project's vision, goals, and technical infrastructure was released to gather community interest and feedback.

Community Building and Early Engagement: outreach efforts recruited early adopters and potential investors.

Token Generation Event: a TGE distributed the native token to early supporters and established initial liquidity within the ecosystem.

Initial dApp Launch: the first decentralised application associated with SPERO went live, giving users access to the platform's core functionality.

Ongoing Development and Partnerships: continuous updates and strategic partnerships with other blockchain players have kept the project evolving.

Conclusion

SPERO illustrates how web3 and cryptocurrency projects aim to reshape financial systems through decentralised governance, community engagement, and token-based utility. As with any investment in the rapidly evolving crypto space, prospective investors and users should research the project thoroughly and follow its ongoing development before engaging.

Published 2024.12.17, updated 2024.12.17


What is Agent S?

Agent S: The Future of Autonomous Interaction in Web3

Introduction

In the evolving landscape of Web3 and cryptocurrency, one pioneering project, Agent S, aims to change human-computer interaction through an open agentic framework. By enabling autonomous interaction with computers, Agent S sets out to automate complex tasks, with applications across artificial intelligence (AI). This overview covers the project's design, its distinctive features, and its implications for the crypto domain.

What is Agent S?

Agent S is an open agentic framework designed to tackle three fundamental challenges in automating computer tasks:

Acquiring domain-specific knowledge: the framework learns from external knowledge sources and its own internal experience, building a repository of domain knowledge that improves task execution.

Planning over long task horizons: experience-augmented hierarchical planning breaks intricate tasks into subtasks and manages their execution efficiently.

Handling dynamic, non-uniform interfaces: the Agent-Computer Interface (ACI), built on multimodal large language models (MLLMs), lets the agent navigate and manipulate diverse graphical user interfaces.

Together these features give Agent S a robust foundation for automating human interaction with machines, opening the door to a range of AI applications.

Who is the Creator of Agent S?

Specific information about the creator of Agent S is not documented in the available project materials. This may reflect the project's early stage or a deliberate choice to keep founding members out of the spotlight; in either case, attention remains on the framework's capabilities.

Who are the Investors of Agent S?

As a relatively new project, Agent S has no publicly documented investors or financial backers. The absence of information about its funding structure makes it harder to gauge the project's sustainability and potential market impact.

How Does Agent S Work?

Agent S's operational model is built around several key features:

Human-like computer interaction: advanced AI planning that mimics how a human carries out tasks, making interactions with computers more intuitive.

Narrative memory: high-level records of past task runs that inform the agent's decision-making.

Episodic memory: step-by-step records that provide contextual guidance as a task unfolds.

OpenACI support: the framework can run locally, letting users retain control over their interactions and workflows, in line with the decentralised ethos of Web3.

Integration with external APIs: compatibility with various AI platforms lets Agent S fit into existing technology stacks.

These capabilities allow Agent S to automate complex, multi-step tasks with minimal human intervention.

Timeline of Agent S

September 27, 2024: Agent S was introduced in the research paper "An Open Agentic Framework that Uses Computers Like a Human."

October 10, 2024: The paper was made publicly available on arXiv, including a performance evaluation on the OSWorld benchmark.

October 12, 2024: A video presentation demonstrating the framework's capabilities and features was released.

Key Points About Agent S

Innovative framework: computer use modelled on human interaction.

Autonomous interaction: the agent operates computers directly through the GUI.

Complex task automation: multi-step tasks are executed faster and with fewer errors.

Continuous improvement: learning mechanisms let the agent improve from past experience.

Versatility: evaluated across environments such as OSWorld and WindowsAgentArena.

Conclusion

Agent S represents a notable step in combining AI with Web3. While still early-stage, its framework for autonomous, GUI-level computer use points toward more seamless and efficient human-computer collaboration across industries.
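To make the "experience-augmented planning" idea concrete, here is a minimal, hypothetical sketch in Python. It is not Agent S's actual code or API; it only illustrates the general pattern the article describes: decompose a task into subtasks, reuse step sequences recorded in an episodic memory when a subtask has been seen before, and record new ones otherwise.

```python
# Illustrative sketch only -- not Agent S's real implementation.
# Pattern: hierarchical planning with reuse of past subtask experiences.
from dataclasses import dataclass, field


@dataclass
class EpisodicMemory:
    # Maps a subtask description to the step sequence that worked before.
    experiences: dict = field(default_factory=dict)

    def recall(self, subtask: str):
        return self.experiences.get(subtask)

    def store(self, subtask: str, steps: list):
        self.experiences[subtask] = steps


def plan(task: str, memory: EpisodicMemory) -> list:
    # Decompose the high-level task into subtasks (a fixed toy split here;
    # a real agent would use a model to decompose), then reuse remembered
    # step sequences where available.
    subtasks = [f"{task}: step {i}" for i in (1, 2)]
    plan_steps = []
    for sub in subtasks:
        remembered = memory.recall(sub)
        if remembered is not None:
            plan_steps.extend(remembered)   # reuse prior experience
        else:
            fresh = [f"execute {sub}"]
            memory.store(sub, fresh)        # record for next time
            plan_steps.extend(fresh)
    return plan_steps


memory = EpisodicMemory()
first = plan("rename files", memory)   # populates memory
second = plan("rename files", memory)  # second run is served from memory
print(first == second)  # -> True
```

The design choice worth noting is the split between planning and memory: the planner stays stateless while the memory accumulates experience, which is roughly how the article distinguishes hierarchical planning from the narrative/episodic memory components.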

Published 2025.01.14, updated 2025.01.14

