Not Just DeepSeek, Big Tech Companies Want to 'Abandon' NVIDIA

marsbit · Published 2026-04-24 · Updated 2026-04-24

Summary

The article discusses how major tech companies are attempting to reduce their reliance on Nvidia, despite its dominant position in the AI chip market, where it enjoys a 75.2% GAAP gross margin. Companies like DeepSeek are adapting their models to run on domestic alternatives such as Huawei’s Ascend chips, while in the U.S., Google and Meta are developing their own AI chips (TPU and MTIA series) to complement external partnerships. Nvidia’s CEO, Jensen Huang, acknowledges that Moore’s Law is fading and that export restrictions may slow China’s AI development but could ultimately spur a self-sufficient ecosystem. Notably, Chinese firms are leading in open-source models, which could eventually challenge Nvidia’s monopoly. OpenAI is actively diversifying away from Nvidia, signing a $20 billion deal with Cerebras, a startup using wafer-scale chips to reduce latency and cost. Cerebras, founded by Andrew Feldman, aims to challenge Nvidia with its unique architecture but faces financial and competitive risks, including heavy dependence on OpenAI and geopolitical tensions. While competition is intensifying with players like AMD and Groq (which partnered with Nvidia), the overall demand for compute continues to grow. The market is shifting toward a diversified supplier model, though Nvidia remains a formidable force.

The whole world covets NVIDIA's business.

According to NVIDIA's Q4 FY2026 (ending January 2026) earnings report, its GAAP gross margin was as high as 75.2%, making it practically a money-printing machine. This immense profitability stems primarily from its dominant position in the AI chip market, which grants it powerful pricing power.

Almost all large language models run on NVIDIA's computing chips, supporting its nearly $5 trillion market capitalization.

But precisely because of this, almost all major AI companies are openly or covertly trying to break free of NVIDIA's cage, unwilling to leave their fate in its hands. The recently released DeepSeek V4, judging from its technical report, was most likely trained on NVIDIA chips, but it is being adapted for inference on Huawei's Ascend computing chips. DeepSeek also stated that the token cost for the Pro version will fall significantly after Huawei's Ascend 950 launches in the second half of the year. Beyond Huawei Ascend, domestic chip makers such as Tianshu Zhixin and Cambricon have also announced support for the new DeepSeek V4 model.

On NVIDIA's home turf, the US, Google developed its own TPU (Tensor Processing Unit) computing chips. As of April 2026, the TPU has reached its eighth generation, forming a complete product line of training and inference chips. In March, Meta also disclosed its roadmap for self-developed AI chips, planning to deploy four new MTIA-series products by the end of 2027 to meet internal AI computing needs, while maintaining large-scale procurement partnerships with NVIDIA and AMD, building a dual-track compute system of "self-developed + externally procured".

Yes, for the time being, no AI company can bypass NVIDIA, but Jensen Huang still senses the crisis. In a recent podcast interview, Huang stated that Moore's Law is coming to an end, meaning the era of chip performance doubling every year is over. The performance advantage of today's most advanced chips is not a permanent moat, but a relative advantage with a time window. Once the manufacturing process approaches physical limits, the difficulty for latecomers to catch up will actually decrease.

Huang said that restricting the export of computing chips to China would indeed slow down the development speed of Chinese AI in the short term, but in the long run, it will only force China to form its own ecosystem. What he didn't delve into further is that currently, only Chinese AI companies are committed to open source, and are being adopted by numerous companies and startups. If more and more open-source models run on Chinese-made computing chips, then even if NVIDIA still holds the number one market position, it will no longer be the only one.

In fact, even without the threat of Chinese open-source large models and computing chips, market competition is likely to push the computing chip industry towards a duopoly structure, rather than letting NVIDIA dominate alone.

Interestingly, among them, OpenAI, which is extremely dependent on NVIDIA, is ironically the most active in "backstabbing" it.

01

On April 17 local time, US AI chip manufacturer Cerebras officially submitted an IPO application to the US SEC, aiming to raise $3 billion with a valuation of $35 billion.

After withdrawing its previous IPO application in October 2025, this challenger to NVIDIA, whose core selling point is "wafer-scale chips," launched a second IPO sprint within six months, pushing its valuation from $8.1 billion to $35 billion.

The core pillar of this valuation surge is a cooperation agreement with OpenAI worth over $20 billion.

According to the agreement, OpenAI commits to using server clusters powered by Cerebras chips over the next three years. Cerebras will deploy 750 megawatts of computing power for OpenAI, expected to be fully online by 2028. In addition, OpenAI will provide Cerebras with approximately $1 billion in funding to help build out its data centers, and will receive warrants for roughly 10% of the company.

Clearly, OpenAI is no longer just a client; it is a creditor and potentially a major future stakeholder. The decision to restart the IPO push at this moment was likely made jointly by the two companies.

On the same day Cerebras submitted its IPO documents, three core OpenAI executives, including Sora lead Bill Peebles, announced their departure. Meanwhile, the $500 billion "Stargate" plan, once seen as a milestone in US AI infrastructure, is also in disarray, with internal coordination and financing issues progressing slowly.

According to media disclosures, OpenAI's revenue in 2025 was $13.1 billion, with losses as high as $8 billion. Losses are expected to soar to $25 billion this year. Under that pressure, OpenAI even made the painful cut of shutting down its popular video-generation product, Sora.

Some analysis suggests that Sora's daily computing power cost was approximately $15 million, with the cost of a 10-second high-precision video around $33. During Sora's operation, total user payment revenue was only $2.1 million.
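Taken at face value, those figures make the imbalance stark. A back-of-envelope calculation (using only the article's own estimates, which are third-party analysis rather than official OpenAI disclosures) shows how little of the compute bill user revenue ever covered:

```python
# Back-of-envelope check of the Sora figures cited above.
# All numbers are the article's estimates, not official disclosures.
daily_compute_cost = 15_000_000   # ~$15M per day
cost_per_video = 33               # ~$33 per 10-second high-precision clip
total_user_revenue = 2_100_000    # ~$2.1M over Sora's entire run

# Lifetime user revenue, measured in hours of one day's compute spend.
hours_of_compute_covered = total_user_revenue / daily_compute_cost * 24
print(f"Revenue covered ~{hours_of_compute_covered:.1f} hours of one day's compute")

# Generation volume implied by the stated unit cost.
videos_per_day = daily_compute_cost / cost_per_video
print(f"Daily budget equals ~{videos_per_day:,.0f} videos at $33 each")
```

In other words, if these estimates hold, Sora's total lifetime user revenue paid for less than four hours of a single day's compute.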

In such turbulent times, Altman naturally understands that over-reliance on NVIDIA would become OpenAI's biggest weakness.

Previously, OpenAI announced collaborations with Broadcom to develop custom chips and adopted AMD's new MI450 chips, frequently sending clear signals to the outside world—it no longer wants to work for NVIDIA. It is against this backdrop that Cerebras became a key bet in OpenAI's "de-NVIDIAization" strategy.

Although Cerebras is not widely known, it stands out among chip makers.

Almost all chip-design giants follow the "cut the wafer into small chips" route. Cerebras, however, focused on the "memory wall" encountered when data moves between chips, and adopted a more aggressive single-chip approach.

Cerebras's core product is the Wafer-Scale Engine WSE-3, a single chip made from an entire 300mm wafer. Because computation, storage, and interconnection are all within a single chip, data transmission latency is reduced by 90% compared to GPU clusters, making it particularly suitable for low-latency inference of large models.

In inference scenarios, the wafer-scale architecture is expected to reduce the cost per token by 80%.

OpenAI's head of computing infrastructure stated that Cerebras has added a dedicated low-latency inference solution to the platform, which will not only allow users to get faster response times but also lay the foundation for expanding real-time AI technology to a broader user base.

More importantly, Cerebras's non-HBM dependent route might break NVIDIA's near-monopoly in the chip industry, making computing power supply more diverse.

All of this hits OpenAI's pain points squarely, making the collaboration between the two a natural fit.

Besides OpenAI, Cerebras also reached a cooperation agreement with AWS in March. The CS-3 will be deployed in Amazon's data centers, entering the infrastructure system of mainstream hyperscale cloud platforms.

02

"The most exciting thing about this rapidly iterating industry is that algorithms will continue to become faster, more accurate, and more efficient. That is precisely why I am unwilling to throw myself into traditional industries that stay unchanged for nine years."

Cerebras's ability to reach its current position is closely tied to its founder, Andrew Feldman.

Unlike typical chip company founders who are engineers, Feldman graduated from Stanford University with bachelor's degrees in Economics and Political Science and an MBA. From the beginning of his career, he consistently accumulated experience in product and marketing fields. This career path gave him a natural instinct for what kind of business model could succeed.

As his experience grew, Feldman gradually transitioned from an employee to a serial entrepreneur.

And all serial entrepreneurs share one strikingly obvious characteristic: they want to win, desperately. These people aren't just ordinarily "competitive"; they treat winning as indispensable as breathing. They typically bet in the "no man's land" outside industry consensus, going all-in on directions most people consider "unnecessary" or "impossible." In other words, they have a pronounced gambler's streak.

In 2007, Feldman founded the server company SeaMicro.

"Today's large processors are like us driving a space shuttle to the grocery store. Actually, I just need to drive a Prius."

SeaMicro abandoned the traditional server approach of "piling on components." It removed all components except the CPU, memory, and a self-developed ASIC, providing "more cores" for specialized internet companies needing "scale-out" workloads. The company was acquired by AMD for $355 million in 2012.

Although the microserver business faded into obscurity after being folded into AMD, the experience let Feldman accumulate wealth and further solidified his entrepreneurial methodology: at moments of generational change, use "counter-mainstream" hardware design to enter niche markets the giants have not yet covered.

According to industry conventions, chip yield decreases as area increases. While chip companies were all following NVIDIA's path forward, Feldman decided, in a very "layman" way of thinking, to directly make a single chip the size of a plate.

In 2015, Feldman co-founded Cerebras with his technical partner Gary Lauterbach and brought in several former SeaMicro colleagues. Cerebras stayed silent for a full four years until it released the first-generation WSE-1 in August 2019.

During this obscure R&D period, Feldman was betting on two things: one was that TSMC's wafer-level packaging technology would gradually mature, and the other was that AI models would become so large that the memory wall of GPUs would become a fatal bottleneck.

Judging from current developments, he bet correctly.

From 2019 to 2024, Cerebras launched a new generation every two years, with the process jumping from 16nm to 7nm to 5nm and the transistor count climbing from 1.2 trillion to 4 trillion. Meanwhile, Feldman began actively courting major clients. In 2023, he flew to Abu Dhabi and secured G42.

Cerebras and G42 collaborated to train the leading Arabic-language model and jointly built Condor Galaxy, a network of nine interconnected supercomputers. The close cooperation with this Middle Eastern enterprise also triggered a national security review of Cerebras by the US Committee on Foreign Investment, but Feldman didn't mind; in his view, the review itself was proof of his company's strength.

"If you only work 38 hours a week and still want to challenge an 800-pound gorilla like NVIDIA? No way. You need every waking minute."

Asked in an interview for his views on "work-life balance," Feldman gave this rather radical negative answer. He makes no secret of his ambition to challenge NVIDIA.

Referencing NVIDIA's hundred-fold growth over ten years, Feldman holds an optimistic outlook for Cerebras's prospects: to help develop treatment plans for millions of patients in the next 3 to 5 years; to provide inference computing power for applications yet to be born; to let the public use the company's technology without even noticing it.

03

Cerebras's IPO sprint faces constant controversy. Optimists look forward to witnessing the birth of a second NVIDIA, while skeptics question the stability of its performance.

According to officially disclosed financials, Cerebras's revenue grew from $24.6 million in 2022 to $510 million in 2025, a compound annual growth rate of roughly 175%. Notably, GAAP net profit in 2025 was $238 million, reversing the 2024 net loss of $482 million.
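The growth-rate claim is easy to verify with the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) − 1, applied to the three years of growth between the 2022 and 2025 figures:

```python
# Check the ~175% CAGR claim using the article's revenue figures:
# $24.6M (2022) to $510M (2025), i.e. three years of compounding.
start, end, years = 24.6e6, 510e6, 3
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.1%}")  # roughly 175% per year
```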

However, a closer analysis reveals that the GAAP profit benefited from a non-cash book gain of $363 million. This gain was actually an accounting operation resulting from the removal of G42-related liabilities from the balance sheet due to the US security review. Excluding this non-recurring item, the company's non-GAAP net loss was actually $75.7 million.

In other words, Cerebras's "return to profitability" is an accounting game.

In 2023 and 2024, G42 contributed 83% and 87% of Cerebras's total revenue, respectively. With geopolitical tensions intensifying, the risk of relying on a single Middle Eastern customer is self-evident. After all, Cerebras's first IPO withdrawal was partly due to national security reviews.

According to the prospectus, the company's remaining performance obligations of as much as $24.6 billion rely heavily on the $20 billion agreement signed with OpenAI. In other words, Cerebras's expected revenue rests almost entirely on OpenAI's forward commitments rather than on a diversified, large-scale customer base.

Whether this "shot in the arm" of an order can be fulfilled depends on the fate of OpenAI itself. When the stability of the largest customer is itself being repeatedly scrutinized by the market, how much of this "blank check" can be realized is something even Feldman probably cannot guarantee.

A comparison with NVIDIA makes Cerebras's disadvantages even clearer.

Even before the AI industry's big explosion, NVIDIA had already established a diversified customer base across multiple fields such as gaming, professional visualization, and data centers. No single customer accounted for more than 10% of its revenue. Over more than twenty years of evolution, NVIDIA has deeply bound itself with millions of developers. Every product iteration stems from the needs of internal ecosystem expansion, and its product planning path is very clear. Cerebras's ecosystem is at a very early stage, still achieving only a single-point breakthrough in inference scenarios, and has a long way to go before becoming a true platform company.

Even without the sudden emergence of ChatGPT, NVIDIA was a high-quality company with stable revenue and considerable profits. But if the $20 billion OpenAI order were to disappear, Cerebras probably wouldn't even be in a position to attempt an IPO.

In December 2025, NVIDIA reached a special cooperation agreement worth approximately $20 billion in cash with Cerebras's competitor Groq. NVIDIA obtained a permanent non-exclusive license for Groq's LPU inference architecture and full-stack chip design technology.

Jensen Huang's entry signals that industry giants now recognize the value of low-latency dedicated inference architectures like Cerebras's, but it also sharply increases the competitive pressure Cerebras faces.

From a practical standpoint, OpenAI brought in Cerebras not as a replacement but as a "catfish" (a competitive stimulus), to gain bargaining chips and spread supply-chain risk.

There are reports that NVIDIA's system based on Groq chips will be launched in the second half of 2026. If Altman turns around and reaches an agreement with Huang again, Cerebras could easily become the sacrifice.

In the trillion-dollar AI chip race, diversified competition is undoubtedly good for the industry ecosystem's long-term development. But the capital market is never short of wealth-creation myths and media hype. Whether Cerebras can truly deliver on its technological and commercial value still requires passing multiple tests.

The appealing title of "NVIDIA challenger" might also turn out to be a short-lived bubble.

But as the "Jevons Paradox" suggests, technological progress raises resource-utilization efficiency and lowers the cost per unit of output, yet because people can then afford to use more of the resource and use it more widely, total consumption actually increases. As AI permeates more aspects of daily life, computing demand will keep growing rapidly for the foreseeable future.

This super-large market, worth hundreds of billions or even trillions of dollars, is not only about economics but also about geopolitical security. No one wants NVIDIA alone to hold the keys to their fate.

But clearly, even if only out of pride, Jensen Huang will not hand over those keys easily.

This article is from WeChat public account "最话FunTalk" (ID: iFuntalker), author: He Yiran, editor: Liu Yuxiang

Related Questions

Q: Why are major AI companies trying to reduce their reliance on NVIDIA?

A: Due to NVIDIA's dominant market position and pricing power, which gives it significant control over the AI chip supply chain. Companies like DeepSeek, Google, Meta, and OpenAI are seeking alternatives to avoid dependency, reduce costs, and diversify their supply chains for greater strategic flexibility.

Q: What is Cerebras's unique approach to AI chip design?

A: Cerebras uses a wafer-scale engine (WSE) design, which involves creating a single, large chip from an entire silicon wafer. This approach reduces data transfer delays by 90% compared to GPU clusters and is particularly efficient for low-latency inference in large models, potentially lowering token costs by 80%.

Q: How does OpenAI's partnership with Cerebras reflect its strategy?

A: OpenAI's $20 billion agreement with Cerebras is part of its "de-NVIDIAization" strategy to diversify its AI chip supply, reduce costs, and mitigate risks associated with over-reliance on a single vendor. This move also serves as leverage in negotiations with NVIDIA and other chip suppliers.

Q: What are the financial challenges facing Cerebras despite its IPO ambitions?

A: Cerebras's financials show reliance on non-recurring accounting gains and a heavy dependence on a few key clients like G42 and OpenAI. Its GAAP profitability in 2025 was largely due to a one-time $363 million non-cash gain, and without the OpenAI deal, its IPO prospects would be uncertain.

Q: What broader industry trend does the competition against NVIDIA represent?

A: The competition reflects a push for a diversified, multi-vendor AI chip ecosystem rather than NVIDIA's monopoly. This is driven by economic factors (cost reduction), geopolitical concerns (e.g., U.S.-China tensions), and the desire for technological innovation beyond traditional GPU architectures.

Lecturas Relacionadas

a16z: AI's 'Amnesia', Can Continuous Learning Cure It?

The article "a16z: AI's 'Amnesia' – Can Continual Learning Cure It?" explores the limitations of current large language models (LLMs), which, like the protagonist in the film *Memento*, are trapped in a perpetual present—unable to form new memories after training. While methods like in-context learning (ICL), retrieval-augmented generation (RAG), and external scaffolding (e.g., chat history, prompts) provide temporary solutions, they fail to enable true internalization of new knowledge. The authors argue that compression—the core of learning during training—is halted at deployment, preventing models from generalizing, discovering novel solutions (e.g., mathematical proofs), or handling adversarial scenarios. The piece introduces *continual learning* as a critical research direction to address this, categorizing approaches into three paths: 1. **Context**: Scaling external memory via longer context windows, multi-agent systems, and smarter retrieval. 2. **Modules**: Using pluggable adapters or external memory layers for specialization without full retraining. 3. **Weights**: Enabling parameter updates through sparse training, test-time training, meta-learning, distillation, and reinforcement learning from feedback. Challenges include catastrophic forgetting, safety risks, and auditability, but overcoming these could unlock models that learn iteratively from experience. The conclusion emphasizes that while context-based methods are effective, true breakthroughs require models to compress new information into weights post-deployment, moving from mere retrieval to genuine learning.

marsbitHace 17 min(s)

a16z: AI's 'Amnesia', Can Continuous Learning Cure It?

marsbitHace 17 min(s)

Can a Hair Dryer Earn $34,000? Deciphering the Reflexivity Paradox in Prediction Markets

An individual manipulated a weather sensor at Paris Charles de Gaulle Airport with a portable heat source, causing a Polymarket weather market to settle at 22°C and earning $34,000. This incident highlights a fundamental issue in prediction markets: when a market aims to reflect reality, it also incentivizes participants to influence that reality. Prediction markets operate on two layers: platform rules (what outcome counts as a win) and data sources (what actually happened). While most focus on rules, the real vulnerability lies in the data source. If reality is recorded through a specific source, influencing that source directly affects market settlement. The article categorizes markets by their vulnerability: 1. **Single-point physical data sources** (e.g., weather stations): Easily manipulated through physical interference. 2. **Insider information markets** (e.g., MrBeast video details): Insiders like team members use non-public information to trade. Kalshi fined a剪辑师 $20,000 for insider trading. 3. **Actor-manipulated markets** (e.g., Andrew Tate’s tweet counts): The subject of the market can control the outcome. Evidence suggests Tate’sociated accounts coordinated to profit. 4. **Individual-action markets** (e.g., WNBA disruptions): A single person can execute an event to profit from their pre-placed bets. Kalshi and Polymarket handle these issues differently. Kalshi enforces strict KYC, publicly penalizes insider trading, and reports to regulators. Polymarket, with its anonymous wallet-based system, has historically been more permissive, arguing that insider information improves market accuracy. However, it cooperated with authorities in the "Van Dyke case," where a user traded on classified government information. The core paradox is reflexivity: prediction markets are designed to discover truth, but their financial incentives can distort reality. The more valuable a prediction becomes, the more likely participants are to influence the event itself. 
The market ceases to be a mirror of reality and instead shapes it.

marsbitHace 1 hora(s)

Can a Hair Dryer Earn $34,000? Deciphering the Reflexivity Paradox in Prediction Markets

marsbitHace 1 hora(s)

Trading

Spot
Futuros

Artículos destacados

Qué es $S$

Entendiendo SPERO: Una Visión General Completa Introducción a SPERO A medida que el panorama de la innovación continúa evolucionando, la aparición de tecnologías web3 y proyectos de criptomonedas juega un papel fundamental en la configuración del futuro digital. Un proyecto que ha atraído la atención en este campo dinámico es SPERO, denotado como SPERO,$$s$. Este artículo tiene como objetivo reunir y presentar información detallada sobre SPERO, para ayudar a entusiastas e inversores a comprender sus fundamentos, objetivos e innovaciones dentro de los dominios web3 y cripto. ¿Qué es SPERO,$$s$? SPERO,$$s$ es un proyecto único dentro del espacio cripto que busca aprovechar los principios de descentralización y tecnología blockchain para crear un ecosistema que promueva la participación, la utilidad y la inclusión financiera. El proyecto está diseñado para facilitar interacciones de igual a igual de nuevas maneras, proporcionando a los usuarios soluciones y servicios financieros innovadores. En su esencia, SPERO,$$s$ tiene como objetivo empoderar a los individuos al proporcionar herramientas y plataformas que mejoren la experiencia del usuario en el espacio de las criptomonedas. Esto incluye habilitar métodos de transacción más flexibles, fomentar iniciativas impulsadas por la comunidad y crear caminos para oportunidades financieras a través de aplicaciones descentralizadas (dApps). La visión subyacente de SPERO,$$s$ gira en torno a la inclusividad, buscando cerrar brechas dentro de las finanzas tradicionales mientras aprovecha los beneficios de la tecnología blockchain. ¿Quién es el Creador de SPERO,$$s$? La identidad del creador de SPERO,$$s$ sigue siendo algo oscura, ya que hay recursos públicos limitados que proporcionan información de fondo detallada sobre su(s) fundador(es). 
Esta falta de transparencia puede derivarse del compromiso del proyecto con la descentralización, una ética que muchos proyectos web3 comparten, priorizando las contribuciones colectivas sobre el reconocimiento individual. Al centrar las discusiones en torno a la comunidad y sus objetivos colectivos, SPERO,$$s$ encarna la esencia del empoderamiento sin señalar a individuos específicos. Como tal, comprender la ética y la misión de SPERO sigue siendo más importante que identificar a un creador singular. ¿Quiénes son los Inversores de SPERO,$$s$? SPERO,$$s$ cuenta con el apoyo de una diversa gama de inversores que van desde capitalistas de riesgo hasta inversores ángeles dedicados a fomentar la innovación en el sector cripto. El enfoque de estos inversores generalmente se alinea con la misión de SPERO, priorizando proyectos que prometen avances tecnológicos sociales, inclusión financiera y gobernanza descentralizada. Estas fundaciones de inversores suelen estar interesadas en proyectos que no solo ofrecen productos innovadores, sino que también contribuyen positivamente a la comunidad blockchain y sus ecosistemas. El respaldo de estos inversores refuerza a SPERO,$$s$ como un contendiente notable en el dominio de proyectos cripto que evoluciona rápidamente. ¿Cómo Funciona SPERO,$$s$? SPERO,$$s$ emplea un marco multifacético que lo distingue de los proyectos de criptomonedas convencionales. Aquí hay algunas de las características clave que subrayan su singularidad e innovación: Gobernanza Descentralizada: SPERO,$$s$ integra modelos de gobernanza descentralizada, empoderando a los usuarios para participar activamente en los procesos de toma de decisiones sobre el futuro del proyecto. Este enfoque fomenta un sentido de propiedad y responsabilidad entre los miembros de la comunidad. Utilidad del Token: SPERO,$$s$ utiliza su propio token de criptomoneda, diseñado para servir diversas funciones dentro del ecosistema. 
Estos tokens permiten transacciones, recompensas y la facilitación de servicios ofrecidos en la plataforma, mejorando la participación y la utilidad general. Arquitectura en Capas: La arquitectura técnica de SPERO,$$s$ apoya la modularidad y escalabilidad, permitiendo la integración fluida de características y aplicaciones adicionales a medida que el proyecto evoluciona. Esta adaptabilidad es fundamental para mantener la relevancia en el cambiante paisaje cripto. Participación de la Comunidad: El proyecto enfatiza iniciativas impulsadas por la comunidad, empleando mecanismos que incentivan la colaboración y la retroalimentación. Al nutrir una comunidad sólida, SPERO,$$s$ puede abordar mejor las necesidades de los usuarios y adaptarse a las tendencias del mercado. Enfoque en la Inclusión: Al ofrecer tarifas de transacción bajas e interfaces amigables para el usuario, SPERO,$$s$ busca atraer a una base de usuarios diversa, incluyendo a individuos que anteriormente pueden no haber participado en el espacio cripto. Este compromiso con la inclusión se alinea con su misión general de empoderamiento a través de la accesibilidad. Cronología de SPERO,$$s$ Entender la historia de un proyecto proporciona información crucial sobre su trayectoria de desarrollo y hitos. A continuación se presenta una cronología sugerida que mapea eventos significativos en la evolución de SPERO,$$s$: Fase de Conceptualización e Ideación: Las ideas iniciales que forman la base de SPERO,$$s$ fueron concebidas, alineándose estrechamente con los principios de descentralización y enfoque comunitario dentro de la industria blockchain. Lanzamiento del Whitepaper del Proyecto: Tras la fase conceptual, se lanzó un whitepaper completo que detalla la visión, los objetivos y la infraestructura tecnológica de SPERO,$$s$ para generar interés y retroalimentación de la comunidad. 
Construcción de Comunidad y Primeras Interacciones: Se realizaron esfuerzos de divulgación activa para construir una comunidad de primeros adoptantes y posibles inversores, facilitando discusiones en torno a los objetivos del proyecto y obteniendo apoyo. Evento de Generación de Tokens: SPERO,$$s$ llevó a cabo un evento de generación de tokens (TGE) para distribuir sus tokens nativos a los primeros seguidores y establecer liquidez inicial dentro del ecosistema. Lanzamiento de la dApp Inicial: La primera aplicación descentralizada (dApp) asociada con SPERO,$$s$ se puso en marcha, permitiendo a los usuarios interactuar con las funcionalidades centrales de la plataforma. Desarrollo Continuo y Alianzas: Actualizaciones y mejoras continuas a las ofertas del proyecto, incluyendo alianzas estratégicas con otros actores en el espacio blockchain, han moldeado a SPERO,$$s$ en un jugador competitivo y en evolución en el mercado cripto. Conclusión SPERO,$$s$ se erige como un testimonio del potencial de web3 y las criptomonedas para revolucionar los sistemas financieros y empoderar a los individuos. Con un compromiso con la gobernanza descentralizada, la participación comunitaria y funcionalidades diseñadas de manera innovadora, allana el camino hacia un paisaje financiero más inclusivo. Como con cualquier inversión en el espacio cripto que evoluciona rápidamente, se anima a los posibles inversores y usuarios a investigar a fondo y participar de manera reflexiva con los desarrollos en curso dentro de SPERO,$$s$. El proyecto muestra el espíritu innovador de la industria cripto, invitando a una mayor exploración de sus innumerables posibilidades. Mientras el viaje de SPERO,$$s$ aún se desarrolla, sus principios fundamentales pueden, de hecho, influir en el futuro de cómo interactuamos con la tecnología, las finanzas y entre nosotros en ecosistemas digitales interconectados.

72 Vistas totalesPublicado en 2024.12.17Actualizado en 2024.12.17

Qué es $S$

Qué es AGENT S

Agent S: El Futuro de la Interacción Autónoma en Web3 Introducción En el paisaje en constante evolución de Web3 y las criptomonedas, las innovaciones están redefiniendo constantemente cómo los individuos interactúan con las plataformas digitales. Uno de estos proyectos pioneros, Agent S, promete revolucionar la interacción humano-computadora a través de su marco agente abierto. Al allanar el camino para interacciones autónomas, Agent S busca simplificar tareas complejas, ofreciendo aplicaciones transformadoras en inteligencia artificial (IA). Esta exploración detallada profundizará en las complejidades del proyecto, sus características únicas y las implicaciones para el dominio de las criptomonedas. ¿Qué es Agent S? Agent S se presenta como un marco agente abierto innovador, diseñado específicamente para abordar tres desafíos fundamentales en la automatización de tareas informáticas: Adquisición de Conocimiento Específico del Dominio: El marco aprende inteligentemente de diversas fuentes de conocimiento externas y experiencias internas. Este enfoque dual le permite construir un rico repositorio de conocimiento específico del dominio, mejorando su rendimiento en la ejecución de tareas. Planificación a Largo Plazo de Tareas: Agent S emplea planificación jerárquica aumentada por la experiencia, un enfoque estratégico que facilita la descomposición y ejecución eficiente de tareas complejas. Esta característica mejora significativamente su capacidad para gestionar múltiples subtareas de manera eficiente y efectiva. Manejo de Interfaces Dinámicas y No Uniformes: El proyecto introduce la Interfaz Agente-Computadora (ACI), una solución innovadora que mejora la interacción entre agentes y usuarios. Utilizando Modelos de Lenguaje Multimodal de Gran Escala (MLLMs), Agent S puede navegar y manipular diversas interfaces gráficas de usuario sin problemas. 
Through these pioneering features, Agent S provides a robust framework that addresses the complexities of automating human-machine interaction, setting the stage for a multitude of applications in AI and beyond.

Who Is the Creator of Agent S?

While the concept behind Agent S is fundamentally innovative, specific information about its creator remains elusive. The creator is currently unknown, which points either to the project's early stage or to a strategic choice to keep the founding members anonymous. Regardless, the focus remains on the framework's capabilities and potential.

Who Are the Investors in Agent S?

Since Agent S is relatively new to the crypto ecosystem, detailed information about its investors and financial backers is not explicitly documented. The lack of publicly available information about the organizations backing the project raises questions about its funding structure and development roadmap. Understanding the backing is crucial to assessing the project's sustainability and potential market impact.

How Does Agent S Work?

At the core of Agent S is cutting-edge technology that lets it function effectively across diverse environments. Its operating model rests on several key features:

Human-Like Human-Computer Interaction: The framework offers advanced AI planning, striving to make interactions with computers more intuitive. By imitating human behavior in task execution, it promises to elevate user experiences.

Narrative Memory: To leverage high-level experience, Agent S uses narrative memory to keep track of task histories, improving its decision-making processes.
Episodic Memory: This feature provides step-by-step guidance, letting the framework offer contextual support as a task unfolds.

OpenACI Support: With the ability to run locally, Agent S lets users retain control over their interactions and workflows, aligning with the decentralized ethos of Web3.

Easy Integration with External APIs: Its versatility and compatibility with various AI platforms ensure that Agent S can fit seamlessly into existing technology stacks, making it an attractive option for developers and organizations.

Together, these capabilities underpin Agent S's unique position within the crypto space, as it automates complex, multi-step tasks with minimal human intervention. As the project evolves, its potential applications in Web3 could redefine how digital interactions unfold.

Agent S Timeline

The development and milestones of Agent S can be captured in a timeline of its significant events:

September 27, 2024: The Agent S concept was introduced in a comprehensive research paper titled "An Open Agentic Framework that Uses Computers Like a Human," laying out the project's foundations.

October 10, 2024: The research paper was made publicly available on arXiv, offering an in-depth exploration of the framework and a performance evaluation on the OSWorld benchmark.

October 12, 2024: A video presentation was released, giving a visual overview of Agent S's capabilities and features and further engaging prospective users and investors.

These milestones not only illustrate Agent S's progress but also signal its commitment to transparency and community engagement.
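The architecture described above (hierarchical planning plus narrative and episodic memory) can be sketched in a few lines of Python. This is a purely illustrative toy, not the real Agent S codebase or API: the class names, the naive subtask splitter standing in for MLLM-driven planning, and the memory structures are all assumptions made for the example.

```python
from dataclasses import dataclass, field


@dataclass
class NarrativeMemory:
    """High-level record of completed tasks and their outcomes (hypothetical)."""
    episodes: list = field(default_factory=list)

    def record(self, task: str, outcome: str) -> None:
        self.episodes.append((task, outcome))

    def recall(self, task: str) -> list:
        # Past outcomes for the same task, available to inform future planning.
        return [outcome for t, outcome in self.episodes if t == task]


@dataclass
class EpisodicMemory:
    """Step-by-step trace of the task currently being executed."""
    steps: list = field(default_factory=list)

    def log_step(self, step: str) -> None:
        self.steps.append(step)


class ToyAgent:
    """Decompose a task into subtasks (stand-in for hierarchical planning),
    execute each one, and update both memory stores."""

    def __init__(self) -> None:
        self.narrative = NarrativeMemory()

    def plan(self, task: str) -> list:
        # Real systems would call an MLLM here; we just split on " and ".
        return [part.strip() for part in task.split(" and ")]

    def run(self, task: str) -> list:
        episodic = EpisodicMemory()
        for subtask in self.plan(task):
            episodic.log_step(f"done: {subtask}")  # pretend execution succeeded
        self.narrative.record(task, "success")
        return episodic.steps


agent = ToyAgent()
steps = agent.run("open browser and download report")
print(steps)  # ['done: open browser', 'done: download report']
print(agent.narrative.recall("open browser and download report"))  # ['success']
```

The point of the two stores is the separation the article describes: episodic memory is scoped to a single run and supports step-by-step guidance, while narrative memory persists across runs so earlier outcomes can inform later planning.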
Key Points About Agent S

As the Agent S framework continues to evolve, several key attributes stand out, underscoring its innovative nature and potential:

Innovative Framework: Designed to make computer use as intuitive as human interaction, Agent S brings a novel approach to task automation.

Autonomous Interaction: The ability to interact autonomously with computers through GUIs marks a leap toward smarter, more efficient computing solutions.

Complex Task Automation: With its robust methodology, it can automate complex, multi-step tasks, making processes faster and less error-prone.

Continuous Improvement: Learning mechanisms allow Agent S to improve from past experience, continuously refining its performance and effectiveness.

Versatility: Its adaptability across operating environments such as OSWorld and WindowsAgentArena ensures it can serve a wide range of applications.

As Agent S positions itself in the Web3 and cryptocurrency landscape, its potential to enhance interaction capabilities and automate processes marks a significant advance in AI technologies. Through its innovative framework, Agent S exemplifies the future of digital interaction, promising a smoother, more efficient experience for users across industries.

Conclusion

Agent S represents a bold step forward in uniting AI and Web3, with the capacity to redefine how we interact with technology. Although still in its early stages, the possibilities for its application are vast and compelling. Through a comprehensive framework that tackles critical challenges, Agent S seeks to bring autonomous interaction to the forefront of the digital experience.
As we move deeper into the realms of cryptocurrency and decentralization, projects like Agent S will undoubtedly play a crucial role in shaping the future of technology and human-computer collaboration.

346 total views · Published 2025.01.14 · Updated 2025.01.14


How to buy S

Welcome to HTX.com! We have made buying Sonic (S) simple and convenient. Follow our step-by-step guide to start your crypto journey.

Step 1: Create your HTX account
Use your email address or phone number to sign up for a free HTX account. Enjoy a hassle-free registration process and unlock all features.

Step 2: Go to Buy Crypto and choose your payment method
Credit/debit card: use your Visa or Mastercard to buy Sonic (S) instantly.
Balance: use funds from your HTX account balance to trade seamlessly.
Third party: popular payment methods such as Google Pay and Apple Pay are supported for added convenience.
P2P: trade directly with other users on HTX.
Over-the-Counter (OTC): personalized services and competitive exchange rates for traders.

Step 3: Store your Sonic (S)
After buying Sonic (S), keep it in your HTX account. Alternatively, you can send it elsewhere via blockchain transfer or use it to trade other cryptocurrencies.

Step 4: Trade Sonic (S)
Trade Sonic (S) easily on HTX's spot market. Simply log in to your account, select your trading pair, execute your trades, and monitor them in real time. We offer a user-friendly experience for beginners and experienced traders alike.

715 total views · Published 2025.01.15 · Updated 2025.03.21

