China's AI Computing Counterattack

Marsbit · Published 2026-03-04 · Last updated 2026-03-04

Abstract

Eight years after the ZTE crisis, China's AI industry is fighting back against U.S. chip restrictions. In 2018, ZTE nearly collapsed under U.S. sanctions, surviving only at the cost of heavy fines and compliance oversight. Today, Chinese AI firms like DeepSeek are pivoting away from NVIDIA, developing domestic alternatives and optimizing algorithms to reduce reliance on foreign technology. DeepSeek's V4 model will be trained entirely on domestic chips, signaling a strategic shift toward computational independence. The real challenge isn't just hardware: it's NVIDIA's CUDA ecosystem, which dominates global AI development with over 4.5 million developers. U.S. export controls have tightened steadily since 2022, banning high-end chips like the A100 and H100 along with their downgraded variants. In response, Chinese companies have adopted technical workarounds such as Mixture-of-Experts models, which activate only part of the network during inference, slashing costs. DeepSeek's API is up to 75x cheaper than competitors', driving rapid global adoption; by early 2026, Chinese models accounted for nearly 60% of API calls on OpenRouter. Domestic chips, such as Huawei's Ascend series, are now capable of full-scale training, not just inference, and production lines in cities like Xinghua manufacture servers with homegrown processors to support major AI training projects. Meanwhile, the U.S. faces an electricity shortage as data centers consume ever more power, while China benefits from greater energy capacity and lower costs.

Eight years ago, ZTE suffered a cardiac arrest.

On April 16, 2018, a ban issued by the U.S. Department of Commerce's Bureau of Industry and Security brought ZTE Corporation, the world's fourth-largest telecommunications equipment manufacturer with 80,000 employees and annual revenue exceeding 100 billion yuan, to a standstill overnight. The content of the ban was simple: for the next seven years, any American company was prohibited from selling components, goods, software, and technology to ZTE.

Without Qualcomm's chips, base station production halted. Without Google's Android authorization, there was no usable system for its phones. Twenty-three days later, ZTE issued an announcement stating that its main business activities could no longer continue.

ZTE ultimately survived, but at a cost of $1.4 billion.

A $1 billion fine, paid in one lump sum; a $400 million deposit, placed in an escrow account at a U.S. bank. Additionally, all senior executives were replaced, and a U.S. compliance supervision team was stationed within the company. For the full year of 2018, ZTE reported a net loss of 7 billion yuan, with revenue plummeting 21.4% year-on-year.

Yin Yimin, then chairman of ZTE, wrote in an internal letter: "We are in a complex industry that is highly dependent on the global supply chain." At the time, these words sounded like both reflection and helplessness.

Eight years later, on February 26, 2026, Chinese AI unicorn DeepSeek announced that its upcoming V4 multimodal large model would prioritize deep cooperation with domestic chip manufacturers, achieving for the first time a full-process non-NVIDIA solution from pre-training to fine-tuning.

In other words: We are not using NVIDIA anymore.

The market's first reaction to the news was skepticism. NVIDIA holds over 90% of the global AI training chip market share. Abandoning it—does that make commercial sense?

But behind DeepSeek's choice lies a question larger than commercial logic: What kind of computing independence does Chinese AI truly need?

What Exactly Is Being Strangled

Many people think the chip ban is about hardware. But what truly suffocates Chinese AI companies is something called CUDA.

CUDA, short for Compute Unified Device Architecture, is a parallel computing platform and programming model launched by NVIDIA in 2006. It allows developers to directly utilize the computing power of NVIDIA GPUs to accelerate various complex computational tasks.

Before the AI era arrived, this was just a tool for a few geeks. But when the wave of deep learning hit, CUDA became the foundation of the entire AI industry.

The training of AI large models is essentially massive matrix operations. And this is precisely what GPUs excel at.
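As a concrete illustration (a toy NumPy sketch, not any framework's actual code), the heart of a model's forward pass is a large matrix multiply, exactly the operation GPUs parallelize well:

```python
import numpy as np

# Toy illustration: push a batch of token embeddings through one weight
# matrix. Real training stacks thousands of such multiplies per step.
rng = np.random.default_rng(0)

batch, d_model, d_ff = 32, 1024, 4096
x = rng.standard_normal((batch, d_model))   # token embeddings
w = rng.standard_normal((d_model, d_ff))    # layer weights

y = x @ w  # the matrix multiply that GPUs (and CUDA) accelerate
print(y.shape)  # (32, 4096)
```

On a GPU, the same `@` operation dispatches to tuned CUDA kernels through libraries like cuBLAS; the math is identical, only the hardware changes.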

Thanks to planning over a decade in advance, NVIDIA used CUDA to build a complete toolchain for global AI developers, spanning from underlying hardware to upper-layer applications. Today, all mainstream AI frameworks worldwide, from Google's TensorFlow to Meta's PyTorch, are deeply tied to CUDA at their core.

A PhD student in AI, from their first day of enrollment, learns, programs, and experiments in the CUDA environment. Every line of code they write reinforces NVIDIA's moat.

As of 2025, the CUDA ecosystem boasts over 4.5 million developers, covers more than 3,000 GPU-accelerated applications, and is used by over 40,000 companies globally. In practice, this means over 90% of the world's AI developers are locked into NVIDIA's ecosystem.

The terrifying thing about CUDA is that it's a flywheel. The more developers use it, the more tools, libraries, and code are generated, making the ecosystem more prosperous; the more prosperous the ecosystem, the more it attracts additional developers. Once this flywheel starts spinning, it becomes almost impossible to stop.

The result is that NVIDIA sells you the most expensive shovel and also defines the only way to dig. Want to change shovels? Fine. But you will have to rebuild, from scratch, the experience, tools, and code that hundreds of thousands of the world's smartest engineers have accumulated over the past decade using that one method.

Who pays for that cost?

So, when the first round of controls landed on October 7, 2022, with BIS restricting exports of NVIDIA's A100 and H100 to China, Chinese AI companies collectively felt a ZTE-like suffocation for the first time. NVIDIA subsequently launched "China-specific" A800 and H800 chips, reducing the inter-chip interconnect speed, barely maintaining supply.

But just a year later, on October 17, 2023, a second round of controls tightened further, banning A800 and H800 as well, and adding 13 Chinese companies to the Entity List. NVIDIA had to launch a further neutered version, the H20. By December 2024, the final round of controls during the Biden administration landed, strictly limiting even H20 exports.

Three rounds of controls, escalating step by step.

But this time, the story's direction is completely different from that of ZTE back then.

An Asymmetric Breakout

Under the ban, everyone thought the dream of Chinese AI large models would end there.

They were wrong. Faced with the blockade, Chinese companies did not choose a head-on confrontation but began a breakout. The first battlefield of this breakout was not in chips, but in algorithms.

From late 2024 to 2025, Chinese AI companies collectively turned to a technical direction: Mixture of Experts (MoE) models.

Simply put, this means splitting a huge model into many small experts, activating only the most relevant few when processing a task, rather than engaging the entire model.

DeepSeek's V3 is a typical example of this approach. It has 671 billion parameters, but only activates 37 billion of them during each inference, just 5.5% of the total. In terms of training cost, it used 2048 NVIDIA H800 GPUs, trained for 58 days, with a total cost of $5.576 million. In comparison, external estimates for GPT-4's training cost are around $78 million. A difference of an order of magnitude.
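The routing idea can be sketched in a few lines of NumPy. This is a toy model only, not DeepSeek's actual architecture (the expert count, dimensions, and gating here are illustrative assumptions): a learned gate scores every expert for each input, but only the top-k experts actually run, so most parameters sit idle on any given forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d = 64, 4, 128

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
gate_w = rng.standard_normal((d, n_experts))                       # router weights

def moe_forward(x):
    scores = x @ gate_w                               # router score per expert
    chosen = np.argsort(scores)[-top_k:]              # keep only the top-k experts
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()                                      # softmax over chosen experts
    # Only the chosen experts compute; the other 60 are never touched.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

y = moe_forward(rng.standard_normal(d))
print(f"active expert weights per token: {top_k / n_experts:.1%}")
```

With 4 of 64 experts active, only 6.25% of expert parameters participate per input; this is the same trick, at vastly larger scale, behind V3 activating 37B of its 671B parameters.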

Extreme optimization at the algorithm level shows up directly in pricing. DeepSeek's API costs $0.028 to $0.28 per million input tokens and $0.42 per million output tokens. GPT-4o charges $5 for input and $15 for output; Claude Opus is even more expensive at $15 for input and $75 for output. Do the math, and DeepSeek is 25 to 75 times cheaper than Claude.
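A back-of-the-envelope comparison using the prices quoted above (USD per million tokens; illustrative only, and the workload sizes are my assumptions):

```python
# (input_price, output_price) in USD per 1M tokens, from the figures above;
# DeepSeek's input price uses the upper end of its quoted range.
PRICES = {
    "deepseek":    (0.28, 0.42),
    "gpt-4o":      (5.00, 15.00),
    "claude-opus": (15.00, 75.00),
}

def task_cost(model, input_mtok, output_mtok):
    """Cost of a workload given millions of input and output tokens."""
    inp, out = PRICES[model]
    return inp * input_mtok + out * output_mtok

# A hypothetical agent workload: 10M input tokens, 2M output tokens.
for model in PRICES:
    print(f"{model:12s} ${task_cost(model, 10, 2):8,.2f}")
```

At these assumed volumes the workload costs about $3.64 on DeepSeek versus $300 on Claude Opus, which is why token-hungry workloads are so price-sensitive.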

This price difference had a huge impact on the global developer market. In February 2026, on OpenRouter, the world's largest AI model API aggregation platform, weekly call volume for Chinese AI models surged 127% in three weeks, surpassing U.S. models for the first time. A year earlier, Chinese models held less than a 2% share on OpenRouter; after a year of 421% growth, their share approached 60%.

Behind this data lies a structural change that is easy to overlook. Starting in the second half of 2025, the mainstream scenario for AI applications shifted from chat to Agent. In Agent scenarios, the token consumption for a single task is 10 to 100 times that of simple chat. When token consumption grows exponentially, price becomes the decisive factor. The extreme cost-effectiveness of Chinese models hit this window squarely.

But the problem is, reducing inference cost does not solve the fundamental problem of training. A large model that cannot be continuously trained and iterated on new data will see its capabilities degrade rapidly. And training remains that unavoidable computing black hole.

So, where do the "shovels" for training come from?

The Backup Plan Goes Mainstream

Xinghua, a small city in central Jiangsu known for stainless steel and health food, previously had no connection to AI. But in 2025, a 148-meter-long production line for domestic computing servers was built and put into operation here, taking only 180 days from contract signing to production.

The core of this production line is two fully domestic chips: the Loongson 3C6000 processor and the Taichu Yuanqi T100 AI accelerator card. The Loongson 3C6000 is entirely self-developed, from instruction set to microarchitecture. Taichu Yuanqi emerged from the National Supercomputing Center in Wuxi and a Tsinghua University team, adopting a heterogeneous many-core architecture.

At full capacity, this line produces one server every 5 minutes. The total investment for this production line is 1.1 billion yuan, with an expected annual output of 100,000 units.

More importantly, clusters based on these domestic chips, comprising tens of thousands of cards, have begun undertaking real large model training tasks.

In January 2026, Zhipu AI, in collaboration with Huawei, released GLM-Image, the first SOTA image generation model fully trained from start to finish relying on domestic chips. In February, China Telecom's billion-parameter "Star" large model completed its full training process on a domestic 10,000-card computing pool in Shanghai's Lingang.

The significance of these cases is that they prove one thing: domestic chips have moved from "usable for inference" to "usable for training." This is a qualitative change. Inference only requires running an already-trained model, which places relatively modest demands on the chip; training requires processing massive data and performing complex gradient calculations and parameter updates, placing demands on compute, interconnect bandwidth, and software ecosystem that are an order of magnitude higher.

The core force undertaking these tasks is Huawei's Ascend chip series. As of the end of 2025, the Ascend ecosystem had surpassed 4 million developers, with over 3,000 partners; 43 mainstream large models had completed pre-training on Ascend, and over 200 open-source models had been adapted. At the MWC conference on March 2, 2026, Huawei also launched SuperPoD, a new computing base for overseas markets.

The Ascend 910B's FP16 compute is already comparable to NVIDIA's A100. A gap remains, but the chips have gone from unusable to usable, and from usable toward good. Ecosystem building cannot wait for perfect chips; it must roll out at scale while the chips are merely usable, letting real business demand force the iteration of silicon and software. ByteDance, Tencent, and Baidu's 2026 procurement targets for domestic computing servers have generally doubled year-on-year. Data from the Ministry of Industry and Information Technology shows China's intelligent computing capacity has reached 1,590 EFLOPS. 2026 is becoming the first year of large-scale deployment for domestic computing.

U.S. Power Shortages and China's Overseas Expansion

In early 2026, Virginia, which carries a large amount of data center traffic globally, suspended approval for new data center construction projects. Georgia followed suit, extending the suspension until 2027. Illinois and Michigan also introduced restrictions.

According to data from the International Energy Agency, U.S. data center electricity consumption reached 183 TWh in 2024, accounting for about 4% of the nation's total electricity consumption. By 2030, this number is expected to double to 426 TWh, potentially exceeding 12%. Arm's CEO predicted that by 2030, AI data centers would consume 20% to 25% of U.S. electricity.

The U.S. power grid is already overwhelmed. The PJM grid, covering 13 eastern states, faces a 6 GW capacity shortfall. By 2033, the U.S. as a whole faces a 175 GW capacity gap, equivalent to the electricity consumption of 130 million households. Wholesale electricity costs in data-center-heavy areas are 267% higher than five years ago.

The end of computing power is energy. And on the energy dimension, the gap between China and the U.S. is even larger than in chips, just in the opposite direction.

China's annual electricity generation is 10.4 trillion kWh, while the U.S. is 4.2 trillion kWh; China's is 2.5 times that of the U.S. More crucially, residential electricity consumption in China accounts for only 15% of total electricity usage, while in the U.S. this proportion is 36%. This means China has far more industrial electricity surplus available for computing construction than the U.S.

In terms of electricity prices, the price in U.S. AI company hubs is $0.12 to $0.15 per kWh, while industrial electricity prices in western China are about $0.03, only a quarter to a fifth of the U.S. price.

China's power generation growth is already 7 times that of the U.S.
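The arithmetic behind these comparisons is easy to check (values taken from the figures above; a rough sketch that ignores grid losses and regional variation):

```python
# Annual generation, in TWh (1 trillion kWh = 1,000 TWh).
cn_generation_twh = 10_400   # China: 10.4 trillion kWh
us_generation_twh = 4_200    # U.S.: 4.2 trillion kWh
print(f"generation ratio: {cn_generation_twh / us_generation_twh:.1f}x")  # ~2.5x

# Industrial electricity prices, USD per kWh.
us_low, us_high = 0.12, 0.15   # U.S. AI-hub range
cn_west = 0.03                 # western China industrial rate
print(f"China pays 1/{us_low / cn_west:.0f} to 1/{us_high / cn_west:.0f} of U.S. prices")
```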

Just as the U.S. worries about electricity, Chinese AI is quietly going global. But this time, what's going overseas is not products, not factories, but Tokens.

Tokens, the smallest unit of information processed by AI models, are becoming a new digital commodity. They are produced in Chinese computing factories and transmitted globally via submarine cables.

DeepSeek's user distribution data is telling: 30.7% in mainland China, 13.6% in India, 6.9% in Indonesia, 4.3% in the U.S., 3.2% in France. It supports 37 languages and is widely popular in emerging markets like Brazil. Globally, 26,000 enterprises have opened accounts, and 3,200 institutions have deployed the enterprise version.

In 2025, 58% of new AI startups included DeepSeek in their tech stack. In China, DeepSeek captured 89% market share. In other sanctioned countries, market share ranged from 40% to 60%.

This scene is reminiscent of another war about industrial autonomy forty years ago.

In Tokyo in 1986, under heavy U.S. pressure, the Japanese government signed the U.S.-Japan Semiconductor Agreement. The agreement had three core clauses: Japan must open its semiconductor market so that U.S. chips reached at least a 20% share in Japan; Japanese semiconductors were strictly prohibited from being exported below cost; and 100% punitive tariffs were imposed on $300 million worth of chips exported from Japan. At the same time, the U.S. vetoed Fujitsu's acquisition of Fairchild Semiconductor.

That year, the Japanese semiconductor industry was at its peak. By 1988, Japan controlled 51% of the global semiconductor market, while the U.S. held only 36.8%. Six of the world's top ten semiconductor companies were Japanese: NEC ranked second, Toshiba third, Hitachi fifth, Fujitsu seventh, Mitsubishi eighth, and Panasonic ninth. In 1985, Intel lost $173 million in the U.S.-Japan semiconductor war and stood on the verge of bankruptcy.

But after the agreement was signed, everything changed.

The U.S., through measures such as Section 301 investigations, launched a comprehensive suppression of Japanese semiconductor companies. At the same time, it propped up South Korea's Samsung and Hynix to undercut Japan's market with lower prices. Japan's DRAM share fell from 80% to 10%. By 2017, Japan's IC market share was only 7%. Once-invincible giants were split up, acquired, or quietly exited the stage amid endless losses.

The tragedy of Japanese semiconductors was that they were content being the most excellent producer within a global division of labor system dominated by a single external force, but never thought to build an independent ecosystem of their own. When the tide went out, they found they had nothing but production itself.

Today, China's AI industry stands at a similar yet completely different crossroads.

Similar in that China again faces enormous external pressure: three rounds of chip controls, escalating step by step, while the barrier of the CUDA ecosystem still towers overhead.

Different in that this time, we have chosen a more difficult path. From extreme optimization at the algorithm level, to the leap of domestic chips from inference to training, to the accumulation of 4 million developers in the Ascend ecosystem, to the penetration of global markets through Token exports. Every step on this path is building an independent industrial ecosystem that Japan never possessed back then.

Epilogue

On February 27, 2026, performance reports from three local AI chip companies were released on the same day.

Cambricon: revenue surged 453%, with full-year profitability for the first time. Moore Threads: revenue grew 243%, but with a net loss of 1 billion yuan. MetaX: revenue grew 121%, with a net loss of nearly 800 million yuan.

Half flame, half seawater.

The flame is the market's extreme hunger. The 95% void left by Jensen Huang is being filled, inch by inch, by the revenue figures of these local companies. Whatever the performance gaps, whatever the ecosystem gaps, the market needs a second choice besides NVIDIA. This is a once-in-a-generation structural opportunity torn open by geopolitics.

The seawater is the huge cost of ecosystem building. Every cent of loss is real money paid to catch up with the CUDA ecosystem: R&D investment, software subsidies, the labor cost of engineers stationed at customer sites solving compilation issues one by one. These losses are not operational failures but the war tax that must be paid to build an independent ecosystem.

These three financial reports, more honestly than any industry report, record the true face of this computing war. It is not a triumphant victory march but a brutal battle of attrition, charging forward while bleeding.

But the form of the war has indeed changed. Eight years ago, we discussed the question of "whether we can survive." Today, we discuss the question of "what price must be paid to survive."

The price itself is progress.

Related Questions

Q: What was the core reason behind the "choking" feeling for Chinese AI companies during the chip ban, beyond just hardware restrictions?

A: The core reason was CUDA (Compute Unified Device Architecture), NVIDIA's parallel computing platform and programming model. It forms the foundational ecosystem for AI development, binding over 90% of global AI developers. Losing access to this deeply entrenched software and toolchain ecosystem, not just the hardware, was what caused severe disruption.

Q: How did Chinese AI companies like DeepSeek initially respond to the compute restrictions in a "non-confrontational" way?

A: They focused on algorithmic breakthroughs, specifically by adopting Mixture-of-Experts (MoE) models. This approach, as seen with DeepSeek V3, uses a massive number of parameters (671B) but activates only a small fraction (5.5%) for each task, drastically reducing training and inference expenses and giving them a significant price advantage.

Q: What key milestone demonstrated that domestic Chinese chips had achieved a qualitative leap in capability?

A: The key milestone was domestic chips transitioning from being "usable for inference" to "capable of training" large models. This was proven by cases like Zhipu AI's GLM-Image and China Telecom's "Xingchen" model, which were fully trained on domestic 10,000-card clusters using chips like Huawei's Ascend.

Q: What new advantage is emerging for China's AI industry, related to a fundamental resource constraint facing the US?

A: A new advantage is emerging in energy capacity and cost. The US faces a severe electricity shortage and high costs for powering data centers, while China has 2.5 times the total electricity generation, a larger industrial power allocation, and electricity costs that are only a quarter to a fifth of those in US AI hubs.

Q: How is the current Chinese AI industry's situation both similar to and different from Japan's semiconductor industry in the 1980s?

A: It is similar in facing immense external pressure and sanctions from the US. It is different because, instead of merely being a superior manufacturer within a US-led ecosystem as Japan was, China is pursuing the harder path of building an independent, full-stack ecosystem, from algorithms and domestic chips to software and a global developer community, which Japan never achieved.

Related Reads

Trading

Spot
Futures

Hot Articles

What is SONIC

Sonic: Pioneering the Future of Gaming in Web3 Introduction to Sonic In the ever-evolving landscape of Web3, the gaming industry stands out as one of the most dynamic and promising sectors. At the forefront of this revolution is Sonic, a project designed to amplify the gaming ecosystem on the Solana blockchain. Leveraging cutting-edge technology, Sonic aims to deliver an unparalleled gaming experience by efficiently processing millions of requests per second, ensuring that players enjoy seamless gameplay while maintaining low transaction costs. This article delves into the intricate details of Sonic, exploring its creators, funding sources, operational mechanics, and the timeline of significant events that have shaped its journey. What is Sonic? Sonic is an innovative layer-2 network that operates atop the Solana blockchain, specifically tailored to enhance the existing Solana gaming ecosystem. It accomplishes this through a customised, VM-agnostic game engine paired with a HyperGrid interpreter, facilitating sovereign game economies that roll up back to the Solana platform. The primary goals of Sonic include: Enhanced Gaming Experiences: Sonic is committed to offering lightning-fast on-chain gameplay, allowing players and developers to engage with games at previously unattainable speeds. Atomic Interoperability: This feature enables transactions to be executed within Sonic without the need to redeploy Solana programmes and accounts. This makes the process more efficient and directly benefits from Solana Layer1 services and liquidity. Seamless Deployment: Sonic allows developers to write for Ethereum Virtual Machine (EVM) based systems and execute them on Solana’s SVM infrastructure. This interoperability is crucial for attracting a broader range of dApps and decentralised applications to the platform. 
Support for Developers: By offering native composable gaming primitives and extensible data types - dining within the Entity-Component-System (ECS) framework - game creators can craft intricate business logic with ease. Overall, Sonic's unique approach not only caters to players but also provides an accessible and low-cost environment for developers to innovate and thrive. Creator of Sonic The information regarding the creator of Sonic is somewhat ambiguous. However, it is known that Sonic's SVM is owned by the company Mirror World. The absence of detailed information about the individuals behind Sonic reflects a common trend in several Web3 projects, where collective efforts and partnerships often overshadow individual contributions. Investors of Sonic Sonic has garnered considerable attention and support from various investors within the crypto and gaming sectors. Notably, the project raised an impressive $12 million during its Series A funding round. The round was led by BITKRAFT Ventures, with other notable investors including Galaxy, Okx Ventures, Interactive, Big Brain Holdings, and Mirana. This financial backing signifies the confidence that investment foundations have in Sonic’s potential to revolutionise the Web3 gaming landscape, further validating its innovative approaches and technologies. How Does Sonic Work? Sonic utilises the HyperGrid framework, a sophisticated parallel processing mechanism that enhances its scalability and customisability. Here are the core features that set Sonic apart: Lightning Speed at Low Costs: Sonic offers one of the fastest on-chain gaming experiences compared to other Layer-1 solutions, powered by the scalability of Solana’s virtual machine (SVM). Atomic Interoperability: Sonic enables transaction execution without redeployment of Solana programmes and accounts, effectively streamlining the interaction between users and the blockchain. 
EVM Compatibility: Developers can effortlessly migrate decentralised applications from EVM chains to the Solana environment using Sonic’s HyperGrid interpreter, increasing the accessibility and integration of various dApps. Ecosystem Support for Developers: By exposing native composable gaming primitives, Sonic facilitates a sandbox-like environment where developers can experiment and implement business logic, greatly enhancing the overall development experience. Monetisation Infrastructure: Sonic natively supports growth and monetisation efforts, providing frameworks for traffic generation, payments, and settlements, thereby ensuring that gaming projects are not only viable but also sustainable financially. Timeline of Sonic The evolution of Sonic has been marked by several key milestones. Below is a brief timeline highlighting critical events in the project's history: 2022: The Sonic cryptocurrency was officially launched, marking the beginning of its journey in the Web3 gaming arena. 2024: June: Sonic SVM successfully raised $12 million in a Series A funding round. This investment allowed Sonic to further develop its platform and expand its offerings. August: The launch of the Sonic Odyssey testnet provided users with the first opportunity to engage with the platform, offering interactive activities such as collecting rings—a nod to gaming nostalgia. October: SonicX, an innovative crypto game integrated with Solana, made its debut on TikTok, capturing the attention of over 120,000 users within a short span. This integration illustrated Sonic’s commitment to reaching a broader, global audience and showcased the potential of blockchain gaming. Key Points Sonic SVM is a revolutionary layer-2 network on Solana explicitly designed to enhance the GameFi landscape, demonstrating great potential for future development. HyperGrid Framework empowers Sonic by introducing horizontal scaling capabilities, ensuring that the network can handle the demands of Web3 gaming. 
Integration with Social Platforms: The successful launch of SonicX on TikTok displays Sonic’s strategy to leverage social media platforms to engage users, exponentially increasing the exposure and reach of its projects. Investment Confidence: The substantial funding from BITKRAFT Ventures, among others, emphasizes the robust backing Sonic has, paving the way for its ambitious future. In conclusion, Sonic encapsulates the essence of Web3 gaming innovation, striking a balance between cutting-edge technology, developer-centric tools, and community engagement. As the project continues to evolve, it is poised to redefine the gaming landscape, making it a notable entity for gamers and developers alike. As Sonic moves forward, it will undoubtedly attract greater interest and participation, solidifying its place within the broader narrative of blockchain gaming.

790 Total ViewsPublished 2024.04.04Updated 2024.12.03

What is SONIC

What is $S$

Understanding SPERO: A Comprehensive Overview Introduction to SPERO As the landscape of innovation continues to evolve, the emergence of web3 technologies and cryptocurrency projects plays a pivotal role in shaping the digital future. One project that has garnered attention in this dynamic field is SPERO, denoted as SPERO,$$s$. This article aims to gather and present detailed information about SPERO, to help enthusiasts and investors understand its foundations, objectives, and innovations within the web3 and crypto domains. What is SPERO,$$s$? SPERO,$$s$ is a unique project within the crypto space that seeks to leverage the principles of decentralisation and blockchain technology to create an ecosystem that promotes engagement, utility, and financial inclusion. The project is tailored to facilitate peer-to-peer interactions in new ways, providing users with innovative financial solutions and services. At its core, SPERO,$$s$ aims to empower individuals by providing tools and platforms that enhance user experience in the cryptocurrency space. This includes enabling more flexible transaction methods, fostering community-driven initiatives, and creating pathways for financial opportunities through decentralised applications (dApps). The underlying vision of SPERO,$$s$ revolves around inclusiveness, aiming to bridge gaps within traditional finance while harnessing the benefits of blockchain technology. Who is the Creator of SPERO,$$s$? The identity of the creator of SPERO,$$s$ remains somewhat obscure, as there are limited publicly available resources providing detailed background information on its founder(s). This lack of transparency can stem from the project's commitment to decentralisation—an ethos that many web3 projects share, prioritising collective contributions over individual recognition. By centring discussions around the community and its collective goals, SPERO,$$s$ embodies the essence of empowerment without singling out specific individuals. 
As such, understanding the ethos and mission of SPERO remains more important than identifying a singular creator. Who are the Investors of SPERO,$$s$? SPERO,$$s$ is supported by a diverse array of investors ranging from venture capitalists to angel investors dedicated to fostering innovation in the crypto sector. The focus of these investors generally aligns with SPERO's mission—prioritising projects that promise societal technological advancement, financial inclusivity, and decentralised governance. These investor foundations are typically interested in projects that not only offer innovative products but also contribute positively to the blockchain community and its ecosystems. The backing from these investors reinforces SPERO,$$s$ as a noteworthy contender in the rapidly evolving domain of crypto projects. How Does SPERO,$$s$ Work? SPERO,$$s$ employs a multi-faceted framework that distinguishes it from conventional cryptocurrency projects. Here are some of the key features that underline its uniqueness and innovation: Decentralised Governance: SPERO,$$s$ integrates decentralised governance models, empowering users to participate actively in decision-making processes regarding the project’s future. This approach fosters a sense of ownership and accountability among community members. Token Utility: SPERO,$$s$ utilises its own cryptocurrency token, designed to serve various functions within the ecosystem. These tokens enable transactions, rewards, and the facilitation of services offered on the platform, enhancing overall engagement and utility. Layered Architecture: The technical architecture of SPERO,$$s$ supports modularity and scalability, allowing for seamless integration of additional features and applications as the project evolves. This adaptability is paramount for sustaining relevance in the ever-changing crypto landscape. 
Community Engagement: The project emphasises community-driven initiatives, employing mechanisms that incentivise collaboration and feedback. By nurturing a strong community, SPERO ($S) can better address user needs and adapt to market trends.

Focus on Inclusion: By offering low transaction fees and user-friendly interfaces, SPERO ($S) aims to attract a diverse user base, including individuals who have not previously engaged with crypto. This commitment aligns with its overarching mission of empowerment through accessibility.

Timeline of SPERO ($S)

A project's history provides crucial insight into its development trajectory and milestones. Below is a suggested timeline of significant events in the evolution of SPERO ($S):

Conceptualisation and Ideation Phase: The initial ideas behind SPERO ($S) were conceived, aligning closely with the principles of decentralisation and community focus within the blockchain industry.

Launch of Project Whitepaper: A comprehensive whitepaper detailing the vision, goals, and technological infrastructure of SPERO ($S) was released to garner community interest and feedback.

Community Building and Early Engagements: Active outreach built a community of early adopters and potential investors, facilitating discussion of the project's goals and garnering support.

Token Generation Event: SPERO ($S) conducted a token generation event (TGE) to distribute its native tokens to early supporters and establish initial liquidity within the ecosystem.

Launch of Initial dApp: The first decentralised application (dApp) associated with SPERO ($S) went live, allowing users to engage with the platform's core functionality.
Ongoing Development and Partnerships: Continuous updates and strategic partnerships with other players in the blockchain space have shaped SPERO ($S) into a competitive, evolving participant in the crypto market.

Conclusion

SPERO ($S) stands as a testament to the potential of web3 and cryptocurrency to reshape financial systems and empower individuals. With a commitment to decentralised governance, community engagement, and innovatively designed functionality, it works toward a more inclusive financial landscape. As with any investment in the rapidly evolving crypto space, potential investors and users are encouraged to research thoroughly and engage thoughtfully with ongoing developments within SPERO ($S). While the journey of SPERO ($S) is still unfolding, its foundational principles may influence how we interact with technology, finance, and each other in interconnected digital ecosystems.
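The token-weighted, decentralised governance model described above can be illustrated with a minimal sketch. Everything here (the class, the balances, the simple majority rule) is hypothetical and is not part of SPERO's published design:

```python
# Hypothetical sketch of token-weighted governance:
# holders vote with weight proportional to their token balance.

class GovernanceProposal:
    def __init__(self, description):
        self.description = description
        self.votes = {"yes": 0, "no": 0}

    def vote(self, balances, holder, choice):
        # Each holder's vote counts once, weighted by tokens held.
        self.votes[choice] += balances[holder]

    def passed(self):
        # Simple token-weighted majority rule.
        return self.votes["yes"] > self.votes["no"]

balances = {"alice": 500, "bob": 300, "carol": 150}
proposal = GovernanceProposal("Lower platform transaction fees")
proposal.vote(balances, "alice", "yes")
proposal.vote(balances, "bob", "no")
proposal.vote(balances, "carol", "yes")
print(proposal.passed())  # alice + carol (650) outweigh bob (300)
```

Real on-chain governance adds vote deadlines, quorums, and sybil resistance, but the core accounting is this simple: influence is proportional to stake.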

54 Total Views · Published 2024.12.17 · Updated 2024.12.17


What is AGENT S

Agent S: The Future of Autonomous Interaction in Web3

Introduction

In the ever-evolving landscape of Web3 and cryptocurrency, innovations are constantly redefining how individuals interact with digital platforms. One such project, Agent S, promises to change human-computer interaction through its open agentic framework. By enabling autonomous interactions, Agent S aims to simplify complex tasks, with transformative applications in artificial intelligence (AI). This exploration covers the project's intricacies, its distinctive features, and the implications for the cryptocurrency domain.

What is Agent S?

Agent S is an open agentic framework designed to tackle three fundamental challenges in automating computer tasks:

Acquiring Domain-Specific Knowledge: The framework learns from external knowledge sources and its own internal experiences. This dual approach lets it build a rich repository of domain-specific knowledge, improving task execution.

Planning Over Long Task Horizons: Agent S employs experience-augmented hierarchical planning, which breaks intricate tasks down for efficient execution. This significantly improves its ability to manage multiple subtasks.

Handling Dynamic, Non-Uniform Interfaces: The project introduces the Agent-Computer Interface (ACI), which improves interaction between agents and applications. Using Multimodal Large Language Models (MLLMs), Agent S can navigate and manipulate diverse graphical user interfaces.

Through these features, Agent S provides a robust framework that addresses the complexities of automating human interaction with machines, setting the stage for many applications in AI and beyond.

Who is the Creator of Agent S?
While the concept of Agent S is innovative, specific information about its creator remains elusive. The creator is currently unknown, which reflects either the nascent stage of the project or a deliberate choice to keep the founding members under wraps. Regardless of the anonymity, the focus remains on the framework's capabilities and potential.

Who are the Investors of Agent S?

As Agent S is relatively new in the crypto ecosystem, detailed information about its investors and financial backers is not publicly documented. The lack of available insight into the foundations or organisations supporting the project raises questions about its funding structure and development roadmap. Understanding the backing is crucial for gauging the project's sustainability and potential market impact.

How Does Agent S Work?

At the core of Agent S lies technology that enables it to function effectively in diverse settings. Its operational model is built around several key features:

Human-like Computer Interaction: The framework offers advanced AI planning, striving to make interactions with computers more intuitive. By mimicking human behaviour in task execution, it aims to improve user experiences.

Narrative Memory: Agent S uses narrative memory to retain high-level experience across full task histories, improving its decision-making.

Episodic Memory: This feature provides step-by-step guidance, allowing the framework to offer contextual support as tasks unfold.

Support for OpenACI: With the ability to run locally, Agent S lets users retain control over their interactions and workflows, aligning with the decentralised ethos of Web3.
Easy Integration with External APIs: Compatibility with various AI platforms ensures that Agent S can fit into existing technology stacks, making it appealing to developers and organisations.

These capabilities collectively give Agent S a distinctive position within the crypto space, automating complex, multi-step tasks with minimal human intervention. As the project evolves, its applications in Web3 could redefine how digital interactions unfold.

Timeline of Agent S

The development of Agent S can be summarised in a timeline of significant events:

September 27, 2024: The concept of Agent S was introduced in a research paper titled "An Open Agentic Framework that Uses Computers Like a Human," laying the groundwork for the project.

October 10, 2024: The paper was made publicly available on arXiv, offering an in-depth description of the framework and a performance evaluation on the OSWorld benchmark.

October 12, 2024: A video presentation was released, providing a visual look at the capabilities and features of Agent S, further engaging potential users and investors.

These milestones illustrate the project's progress and its commitment to transparency and community engagement.

Key Points About Agent S

As the Agent S framework continues to evolve, several attributes stand out:

Innovative Framework: Designed to use computers in a way akin to human interaction, Agent S brings a novel approach to task automation.

Autonomous Interaction: The ability to operate computers autonomously through a GUI is a step toward more intelligent, efficient computing.

Complex Task Automation: Its methodology automates complex, multi-step tasks, making processes faster and less error-prone.
Continuous Improvement: Learning mechanisms let Agent S improve from past experiences, continually enhancing its performance.

Versatility: Adaptability across operating environments such as OSWorld and WindowsAgentArena means it can serve a broad range of applications.

As Agent S positions itself in the Web3 and crypto landscape, its potential to improve interaction capabilities and automate processes marks a significant advance in AI technology.

Conclusion

Agent S represents a bold step in the marriage of AI and Web3, with the capacity to redefine how we interact with technology. While still in its early stages, the possibilities for its application are vast. Through a framework that addresses critical challenges in knowledge acquisition, long-horizon planning, and interface handling, Agent S aims to bring autonomous interaction to the forefront of the digital experience. As cryptocurrency and decentralisation mature, projects like Agent S will play a role in shaping the future of human-computer collaboration.
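The ideas described above, hierarchical planning plus episodic memory for step-level guidance and narrative memory for high-level summaries, can be sketched loosely. This is a hypothetical toy illustration, not Agent S's actual implementation; the fixed task decomposition and all function names are invented:

```python
# Hypothetical sketch: a planner decomposes a task into subtasks
# (hierarchical planning), consults episodic memory for step-level
# guidance, and stores the finished run in narrative memory.

narrative_memory = []                 # high-level summaries of past tasks
episodic_memory = {                   # step-by-step guidance per subtask
    "open file": ["launch editor", "select file", "confirm"],
}

def plan(task):
    # Top level: split a task into subtasks. A real planner would
    # query an LLM here; we use a fixed decomposition.
    return ["open file", f"edit {task}", f"save {task}"]

def execute(subtask):
    # Reuse stored episodic guidance when a similar subtask was
    # seen before; otherwise fall back to a generic step.
    return episodic_memory.get(subtask, [f"do: {subtask}"])

def run(task):
    trace = []
    for subtask in plan(task):
        trace.extend(execute(subtask))
    narrative_memory.append((task, len(trace)))  # summary for later runs
    return trace

trace = run("report.txt")
print(trace[0])  # first step comes from stored episodic guidance
```

The point of the two memories is separation of scale: episodic memory answers "how do I perform this subtask?", while narrative memory answers "how did whole tasks like this go before?" and can inform future decompositions.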

368 Total Views · Published 2025.01.14 · Updated 2025.01.14

